work_23rwnnqzpbcy7ap375xomnukye ---- Rates of rhinoplasty performed within the NHS in England and Wales: A 10-year retrospective analysis

Abstracts / International Journal of Surgery 10 (2012) S1–S52

Conclusions: In our study, thyroplasty as a method for vocal cord medialisation led to improved voice quality post-operatively and to good patient satisfaction.

0363: INSERTION OF A SECOND NASAL PACK AS A PROGNOSTIC INDICATOR OF EMERGENCY THEATRE REQUIREMENT IN EPISTAXIS PATIENTS
Edward Ridyard 1, Vinay Varadarajan 2, Indu Mitra 3.
1 University of Manchester, Manchester, UK; 2 North West Higher Surgical Training Scheme, North West, UK; 3 Manchester Royal Infirmary, Manchester, UK

Aim: To quantify the significance of second nasal pack insertion in epistaxis patients, as a measure of requirement for theatre.

Method: A one-year retrospective analysis of 100 patient notes was undertaken. After application of exclusion criteria (patients treated as outpatients, inappropriate documentation and patients transferred from peripheral hospitals), a total of n=34 patients were included. Of the many variables measured, specific credence was given to requirement of second packing and requirement for definitive management in theatre.

Results: Of all patients, 88.5% required packing. A further 25% (7/28) of this group had a second pack for cessation of recalcitrant haemorrhage. Of the second pack group, 85.7% (6/7) ultimately required definitive management in theatre. A one-sample t-test showed a statistically significant association between insertion of a second nasal pack and requirement for theatre (p<0.001).

Conclusions: Indications for surgical management of epistaxis vary from hospital to hospital. The results of this study show that insertion of a second pack is a very good indicator of requirement for definitive management in theatre.

0365: MANAGEMENT OF LARYNGEAL CANCERS: GRAMPIAN EXPERIENCE
Therese Karlsson 3, Muhammad Shakeel 1, Peter Steele 1, Kim Wong Ah-See 1, Akhtar Hussain 1, David Hurman 2.
1 Department of Otolaryngology-Head and Neck Surgery, Aberdeen Royal Infirmary, Aberdeen, UK; 2 Department of Oncology, Aberdeen Royal Infirmary, Aberdeen, UK; 3 University of Aberdeen, Aberdeen, UK

Aims: To determine the efficacy of our management protocol for laryngeal cancer and compare it to the published literature.

Method: Retrospective study of a prospectively maintained departmental oncology database over 10 years (1998-2008).
Data collected include demographics, clinical presentation, investigations, management, surveillance, loco-regional control and disease-free survival.

Results: A total of 225 patients were identified; 183 were male (82%) and 42 female (18%). The average age was 67 years. There were 81 (36%) patients with Stage I disease, 54 (24%) with Stage II, 30 (13%) with Stage III and 60 (27%) with Stage IV disease. Of the 225 patients, 130 (96%) of the Stage I and II carcinomas were treated with radiotherapy (55 Gy in 20 fractions). Patients with Stage III and IV carcinomas received combined treatment. Overall three-year survival for Stage I, II, III and IV was 91%, 65%, 63% and 45% respectively. Corresponding recurrence rates were 3%, 17%, 17% and 7%; 13 patients required a salvage total laryngectomy due to recurrent disease.

Conclusion: The vast majority of our laryngeal cancer population are male (82%) and smokers. Primary radiotherapy provides comparable loco-regional control and survival for early-stage disease (I & II). Advanced-stage disease is also equally well controlled with multimodal treatment.

0366: RATES OF RHINOPLASTY PERFORMED WITHIN THE NHS IN ENGLAND AND WALES: A 10-YEAR RETROSPECTIVE ANALYSIS
Luke Stroman, Robert McLeod, David Owens, Steven Backhouse. University of Cardiff, Wales, UK

Aim: To determine whether financial restraint and national health cutbacks have affected the number of rhinoplasty operations done within the NHS, both in England and in Wales, looking at varying demographics.

Method: Retrospective study of the incidence of rhinoplasty in Wales and England from 1999 to 2009 using OPCS4 codes E025 and E026, using the electronic health databases of England (HESonline) and Wales (PEDW). Extracted data were explored for total numbers, and for variation with respect to age and gender, for both nations.

Results: 20,222 and 1,376 rhinoplasties were undertaken over the 10-year study period in England and Wales respectively.
A statistical gender bias was seen in uptake of rhinoplasty, with women more likely to undergo the surgery in both national cohorts (Wales, p<0.001 and England, p<0.001). Linear regression analysis suggests a statistical drop in numbers undergoing rhinoplasty in England (p<0.001) but not in Wales (p>0.05).

Conclusion: Rhinoplasty is a common operation in both England and Wales. The current economic constraint, combined with differences in funding and corporate ethos between the two sister NHS organisations, has led to a statistical reduction in numbers undergoing rhinoplasty in England but not in Wales.

0427: PATIENTS' PREFERENCES FOR HOW PRE-OPERATIVE PATIENT INFORMATION SHOULD BE DELIVERED
Jonathan Bird, Venkat Reddy, Warren Bennett, Stuart Burrows. Royal Devon and Exeter Hospital, Exeter, Devon, UK

Aim: To establish patients' preferences for preoperative patient information and their thoughts on the role of the internet.

Method: Adult patients undergoing elective ENT surgery were invited to take part in this survey on the day of surgery. Participants completed a questionnaire recording patient demographics, operation type, quality of the information leaflet they had received, access to the internet, and whether they would be satisfied accessing pre-operative information online.

Results: Respondents consisted of 52 males and 48 females. 16% were satisfied to receive the information online only, 24% wanted a hard copy only and 60% wanted both. Younger patients were more likely to want online information, in stark contrast to elderly patients, who preferred a hard copy. Patients aged 50-80 years would be most satisfied with paper and internet information, as they were able to pass on the web link to friends and family who wanted to know more. 37% of people were using the internet to further research information on their condition/operation. However, these people wanted guidance on reliable online sources to use.
Conclusions: ENT surgeons should be alert to the appetite for online information and identify reliable links to share with patients.

0510: ENHANCING COMMUNICATION BETWEEN DOCTORS USING DIGITAL PHOTOGRAPHY. A PILOT STUDY AND SYSTEMATIC REVIEW
Hemanshoo Thakkar, Vikram Dhar, Tony Jacob. Lewisham Hospital NHS Trust, London, UK

Aim: The European Working Time Directive has resulted in the practice of non-resident on-calls for senior surgeons across most specialties. Consequently, the majority of communication in the out-of-hours setting takes place over the telephone, placing a greater emphasis on verbal communication. We hypothesised this could be improved with the use of digital images.

Method: A pilot study involving a junior doctor and senior ENT surgeons. Several clinical scenarios were discussed over the telephone, complemented by an image; the junior doctor was blinded to this. A questionnaire was completed which assessed the confidence of the surgeon in the diagnosis and management of the patient. A literature search was conducted using PubMed and the Cochrane Library. Keywords used: "mobile phone", "photography", "communication" and "medico-legal".

Results & Conclusions: In all the discussed cases, the use of images either maintained or enhanced the degree of the surgeon's confidence. The use of mobile-phone photography as a means of communication is widespread; however, its medico-legal implications are often not considered. Our pilot study shows that such means of communication can enhance patient care. We feel that a secure means of data transfer safeguarded by law should be explored as a means of implementing this into routine practice.

0533: THE ENT EMERGENCY CLINIC AT THE ROYAL NATIONAL THROAT, NOSE AND EAR HOSPITAL, LONDON: COMPLETED AUDIT CYCLE
Ashwin Algudkar, Gemma Pilgrim.
Royal National Throat, Nose and Ear Hospital, London, UK

Aims: Identify the type and number of patients seen in the ENT emergency clinic at the Royal National Throat, Nose and Ear Hospital, implement changes to improve the appropriateness of consultations and management, and then close the audit. Also set up GP correspondence.

Method: First-cycle data was collected retrospectively over 2 weeks. Information was captured on patient volume, referral source, consultation

work_24azh22yq5dknkxwnk6fllqv4u ---- Impact testing of polymeric foam using Hopkinson bars and digital image analysis

HAL Id: hal-01137384, https://hal.archives-ouvertes.fr/hal-01137384. Submitted on 30 Mar 2015.
Impact testing of polymeric foam using Hopkinson bars and digital image analysis

Jiagui Liu, Dominique Saletti, Stéphane Pattofatto, Han Zhao. Polymer Testing, Elsevier, 2014, pp. 101-109. doi:10.1016/j.polymertesting.2014.03.014. hal-01137384.

a Laboratoire de Mécanique et Technologie (LMT-Cachan), ENS-Cachan/CNRS-UMR8535/Université Pierre & Marie Curie, 61, avenue du Président Wilson, 94235 Cachan cedex, France
b Laboratoire de Biomécanique (LBM), Arts et Métiers ParisTech, Paris, France
c Snecma Villaroche, Safran Group, Rond Point René Ravaud-Réau, 77550 Moissy-Cramayel, France

Abstract: This paper investigates an impact testing method for polymeric foams using digital image correlation. Accurate average stress-strain relations can be obtained when soft, large-diameter polymeric pressure bars and appropriate data processing are used. However, as the strain and stress fields in a polymeric foam are generally not homogeneous, an optical displacement field observation is essential. In contrast with quasi-static tests, where digital image correlation (DIC) measurement is commonly used, technical difficulties remain under impact conditions, such as synchronization and measuring accuracy, due to the rather poor quality of images obtained from a high-speed camera. In the present paper, an accurate synchronization method based on direct displacement measurement from DIC is proposed, and the feasibility of a calibration method using DIC on the pressure bar ends is discussed.
The relevance of the present method for establishing the mechanical response of polymeric foam is also demonstrated.

Keywords: Polymeric foam, Impact testing, Hopkinson bars, High-speed imaging, Digital image correlation.

∗ Corresponding author. Email address: zhao@lmt.ens-cachan.fr (Han Zhao). Preprint submitted to Elsevier, October 20, 2013.

1. Introduction

Polymeric foams have been widely used in weight-saving and energy-absorbing structural applications, such as automobile vehicles, aircraft structures [1] and personal protection [2]. Under quasi-static loading, analytical/numerical approaches were developed to give reasonable predictions of foam-like material behavior [3, 4, 5, 6, 7], and experimental data were obtained without difficulty. However, such approaches cannot be easily generalized to the case of impact loading, which is generally the real working condition of polymeric foams in energy-absorption applications. It is therefore very important to develop a dynamic experimental approach to characterize polymeric foams under impact loading.

The behavior of polymeric foams at relatively high strain rates has interested investigators since the 1960s [8, 9]. Experimental results using different devices, such as the falling weight or impacting mass technique [10, 11, 12], rapid hydraulic testing machines [13, 14, 15] and the split Hopkinson bar, have been reported [16, 17, 18].

Hopkinson bars have been widely used to experimentally characterize the mechanical behavior of materials at impact strain rates, from 10^2 s^-1 to 10^4 s^-1, and provide more accurate measurements than drop-weight and high-speed testing machines in this range. Common metallic bars with quartz-crystal piezoelectric force transducers [18] or large-diameter nylon pressure bars [19, 20] were used to improve the signal/noise ratio.
Typical overall stress-strain curves at high strain rates were obtained with this nylon pressure bar method for polymeric foams [21] and metallic foams [22]. Meanwhile, it should be noted that the aforementioned testing results (under static as well as impact loading rates) are only average stress-strain relations, obtained under an assumption of homogeneous stress and strain fields within the foam specimen, which is unfortunately not true for most foam-like materials. This non-uniform or even localized deformation has been observed in a number of studies [7, 19, 23]. Thus, field observation techniques have been added in more recent tests, often associated with digital image correlation (DIC) [24, 25]. In contrast with the rather common application of photography and DIC to foam testing under static loading [26, 27, 28], only a few preliminary studies with high-speed imaging have been reported at impact loading rates [29, 30, 31, 32, 33], because of synchronization difficulties and the poor image resolution of high-speed cameras.

In this paper, an optical field measurement along with large-diameter nylon Hopkinson bars is highlighted in the experimental setup for polymeric foam characterization. Nylon Hopkinson bar methods and the associated data processing are briefly recalled in Section 2. Afterwards, technical points of high-speed strain field measurement, such as time synchronization accuracy and the DIC method with poor-resolution images, are discussed. Finally, testing of a polymeric foam is given as an example to illustrate the need for an accurate field measurement.

2. Conventional tests on polymeric foams

2.1. Foam specimen and quasi-static tests

Conventional quasi-static compressive tests on polymeric foams can be performed on a universal testing machine. A polymeric foam supplied by the EADS company is studied as an example.
The density is 70 kg/m³, the diameter of a unit cell is about 0.8 mm and the thickness of the cell wall is about 50 µm, from microscopy observations. The specimen tested in this study is a cylinder 60 mm in diameter and 40 mm in height. An MTS 810 material testing system is used. The specimen is centrally sandwiched between the two platens. The lower one rises at a constant speed of 0.2 mm/s during loading, which ensures a nominal strain rate of 5×10^-3 s^-1. Three tests are carried out and repeatability is checked (see Fig. 1).

Figure 1: Three nominal stress - nominal strain curves from quasi-static loading.

This mechanical response is typical of cellular materials, which exhibit a linear increase (elastic period), followed by a very long plateau and a densification phase.

2.2. Impact tests with SHPB

The dynamic tests are performed on nylon Hopkinson bars. A typical SHPB is made of long input and output pressure bars with a short specimen sandwiched between them. The impact between the projectile and the input bar generates a compressive longitudinal incident pulse εi(t) in the input bar. Once this incident pulse reaches the bar-specimen interface, one part is reflected as the reflected pulse εr(t) in the input bar and the other part is transmitted as the transmitted pulse εt(t) in the output bar. With the gauges glued on the input and output bars, one can record these three pulses. The transmitted wave can be shifted to the output bar-specimen interface to obtain the output force and velocity; at the same time, the input force and velocity can be determined via the incident and reflected waves shifted to the input bar-specimen interface.
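The shift of a gauge record to a bar-specimen interface can be sketched as a simple travel-time delay. This is a minimal, non-dispersive approximation (the paper instead applies a Pochhammer dispersion correction for the large-diameter visco-elastic bars); the function name and parameters are illustrative:

```python
import numpy as np

def shift_to_interface(signal, distance, wave_speed, dt):
    """Shift a strain-gauge record by the travel time distance / wave_speed.

    Non-dispersive sketch only: a real nylon-bar analysis corrects for
    wave dispersion instead of applying a pure time shift.
    """
    n = int(round(distance / wave_speed / dt))  # shift in samples
    shifted = np.zeros_like(signal)
    if n >= 0:
        shifted[n:] = signal[:len(signal) - n]  # delay: move pulse later
    else:
        shifted[:n] = signal[-n:]               # advance: move pulse earlier
    return shifted
```

For example, with the nylon-bar wave speed of 1800 m/s and a 500 kHz sampling rate, a gauge 0.5 m from the interface implies a shift of about 139 samples.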
Forces and velocities at both faces of the specimen are then given by the following:

Finput = SB E (εi(t) + εr(t)),  Foutput = SB E εt(t),
Vinput = C0 (εi(t) − εr(t)),  Voutput = C0 εt(t)    (1)

where Finput, Foutput, Vinput, Voutput are the forces and particle velocities at the input and output interfaces; SB, E and C0 are the cross-sectional area, Young's modulus and longitudinal wave speed of the pressure bars, respectively; and εi(t), εr(t) and εt(t) are the strain signals at the bar-specimen interfaces.

The SHPB system used for this study consists of two 3 m long nylon bars of 62 mm diameter (density 1200 kg/m³ and wave speed 1800 m/s). The use of nylon bars multiplies the impedance ratio between the foam and the bar material (and thus the signal/noise ratio) by 20, in comparison with a classical steel bar (density 7850 kg/m³ and wave speed 5000 m/s). However, with the use of large-diameter visco-elastic bars, the reasoning in Eq. (1) should be amended by considering the wave dispersion effect in the shift from the measuring points to the bar-specimen interfaces [34, 35]. A correction of this dispersion effect on the basis of a generalized Pochhammer wave propagation theory is consequently performed in the data processing [19].

As the coefficients between the strain and the recorded voltage may be influenced by many factors, such as gauge factors, temperature, humidity or the gain of the amplifier, calibration of these coefficients is very important for the overall precision. The principle of calibration for the input bar is based on energy conservation, i.e., the kinetic energy of the impactor is equal to the sum of the kinetic energy and the elastic energy carried by the incident pulse in the input bar. This is valid if the input bar and the impactor have the same properties (diameter, material). The calibration of the output bar is realized by the force balance during a test without a specimen. Here, the forces can be calculated as in Eq.
(1), and they should be exactly the same because they represent the force at the same interface (the input bar/output bar interface). The calibration coefficient for the gauge on the output bar is thereby ascertained.

From the forces and velocities at both bar-specimen interfaces, the nominal stress and nominal strain can be obtained following the standard formulas of SHPB analysis, which assume uniformity of the stress and strain fields:

σs(t) = Foutput(t) / Ss,  εs(t) = ∫_0^t [Voutput(τ) − Vinput(τ)] / ls dτ    (2)

where Ss and ls are the cross-sectional area and initial length of the specimen.

The present Hopkinson bar experiments followed this procedure, with wave dispersion correction and gauge calibration. Before every test, the bar-specimen interfaces are lubricated to decrease the frictional effect as much as possible. The input and output force histories are examined to check whether dynamic equilibrium is achieved. An example is given in Fig. 2 for a test at 10 m/s. Equilibrium is established at around 0.3 ms.

Figure 2: Force histories at the specimen interfaces for a test at 10 m/s. Force equilibrium is checked.

The mechanical response of the foam at 18 m/s is represented in Fig. 3 and is compared to the quasi-static case. One can notice that, for dynamic loading, the densification phase is not reached, because of the pulse length generated in the SHPB system. Following the same procedure, seven specimens are tested using the nylon SHPB with projectile speeds ranging from 7 m/s to 19 m/s. The plateau stress remains nearly constant over the range of loading speeds (Fig. 4), from 5×10^-3 s^-1 to 4.5×10^2 s^-1. However, the average strength increased by 8% from quasi-static to impact loading.
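As a minimal numerical sketch of Eqs. (1)-(2), the following hypothetical routine turns the three interface-shifted strain records into interface forces, velocities and the nominal specimen response. The bar constants match the nylon bars quoted above; taking E = ρC0² as the effective bar modulus and integrating with a cumulative sum are simplifying assumptions, not the paper's exact processing:

```python
import numpy as np

# Illustrative nylon-bar parameters from the text: 62 mm diameter,
# density 1200 kg/m^3, wave speed 1800 m/s (E = rho * c0^2 assumed).
D, rho, c0 = 0.062, 1200.0, 1800.0
S_b = np.pi * D**2 / 4.0
E = rho * c0**2

def specimen_response(eps_i, eps_r, eps_t, S_s, l_s, dt):
    """Nominal stress/strain from the three wave records (Eqs. 1-2),
    assuming the records are already shifted to the interfaces."""
    F_in = S_b * E * (eps_i + eps_r)   # input force (equilibrium check)
    F_out = S_b * E * eps_t            # output force
    v_in = c0 * (eps_i - eps_r)        # input face velocity
    v_out = c0 * eps_t                 # output face velocity
    stress = F_out / S_s
    # time integral of (v_out - v_in)/l_s, via a simple cumulative sum
    strain = np.cumsum(v_out - v_in) * dt / l_s
    return stress, strain, F_in, F_out
```

With synthetic records satisfying εi + εr = εt, the input and output forces coincide, which is exactly the dynamic-equilibrium check of Fig. 2.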
Figure 3: Comparison of nominal stress - nominal strain behavior under impact loading and quasi-static loading.

Figure 4: Mean plateau stress vs impact speed under both dynamic and quasi-static loading.

3. Digital photography and image correlation

As mentioned in the introduction, digital photography and the associated digital image correlation are nowadays widely used to observe the deformation mode of the specimen, to monitor the loading condition and to calculate the strain field. Even when the equilibrium state within the specimen is reached, a heterogeneous strain field is still possible, with localized areas, as in the case of honeycombs for example.

3.1. Direct image observations

For quasi-static loading, a Canon EOS 50D camera is used, triggered with an equal-time-interval signal (3.5 s). Fig. 5 shows the deforming states of the specimen at different crush displacements. The deformation of the foam is highly localized in the darker areas, visible next to the platens.

Figure 5: The deforming states at different crush displacements (d = 4.01, 7.55, 11.78 and 19.53 mm) under quasi-static loading; a dark area forms next to the platens.

A simple image processing is performed on these images. By locating the boundary of the darker area, the length of the uncollapsed part can be obtained, which is plotted in Fig. 6. The x-axis is the actual specimen length, obtained by subtracting the prescribed crush displacement from the initial length of the specimen. In this figure, the ∗ markers correspond to the last three images of Fig. 5 (the darker band is too small to be processed on the first image). It can be seen that the length of the uncollapsed part decreases linearly.
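The boundary-location step just described can be sketched as follows. The function name, the row-averaging rule and the gray-level threshold are illustrative assumptions, not values from the paper:

```python
import numpy as np

def uncollapsed_length(gray_image, mm_per_pixel, dark_threshold=60.0):
    """Estimate the length of the uncollapsed (bright) part of the
    specimen from a grayscale side view, assuming image rows run along
    the loading direction. A row whose mean gray level falls below
    dark_threshold is counted as collapsed (dark band)."""
    row_mean = gray_image.mean(axis=1)        # mean gray level per row
    bright_rows = row_mean >= dark_threshold  # rows not yet collapsed
    return int(np.count_nonzero(bright_rows)) * mm_per_pixel
```

Applied to an image sequence, this yields the uncollapsed-length curve of Fig. 6 directly, without any correlation step.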
One can find that the strain is highly localized in the collapsed areas and that the crush displacement is mainly accommodated by the growth of these areas. Along with the stress-strain curve in Fig. 3, one may conclude that the specimen collapses little by little at its ends as the crush displacement increases, and that this manifests as a plateau in the stress-strain curve. Whether this conclusion is correct or not needs further examination of the strain field on the surface of the specimen.

Figure 6: Evolution of the uncollapsed central part length with the current length of the specimen, where the stars mark the values corresponding to the last three images of Fig. 5.

3.2. Principle of digital image correlation

More sophisticated image processing can be used to match the material points in the image sequence and to calculate displacement fields. Consequently, quantitative strain fields can be obtained. The DIC code CorreliQ4, developed at LMT-Cachan, is used. It is a global approach to image correlation based on the conservation of optical flow. Considering a reference image f and a deformed image g, the conservation of optical flow can be written as (in 1D for brevity):

f(x) = g[x + u(x)]    (3)

where u(x) is the motion from the reference image to the deformed image. When the motion is small, the first-order Taylor expansion of g[x + u(x)] yields:

g[x + u(x)] ≈ g(x) + g′(x) · u(x)    (4)

To measure the motion u(x), a minimization of the gray-level residual is therefore introduced over the region of interest (ROI):

η² = ∫_ROI {f(x) − g[x + u(x)]}² dx    (5)

Assume that the motion u(x) is expressed in a space of basis functions {ψp}, i.e.

u(x) = Σp up ψp(x)    (6)

where the up are the unknown degrees of freedom. By substituting Eq. (6) into Eq.
(5), one obtains:

η² = ∫_ROI {f(x) − g[x + Σp up ψp(x)]}² dx    (7)

Considering the first-order approximation, Eq. (4), the minimization of the objective function becomes:

η²lin = ∫_ROI [f(x) − g(x) − Σp up ψp(x) g′(x)]² dx    (8)

An iterative scheme is followed to obtain the best estimate of the up, and thus of u(x). Furthermore, the strain field in the ROI is obtained ([25, 36, 37]). In this case, Q4P1 shape functions are chosen as the basis functions, so that the measured displacement field is consistent with FE analysis computations.

3.3. Strain field analysis

Displacement and strain fields on the surface of the specimen are calculated by DIC. Only the uncollapsed part is chosen as the ROI. Indeed, the highly densified dark zone leads to convergence problems in the DIC algorithm, because the gray-level gradient is nearly zero in this area. It might be worth noting that 2D image correlation generally requires the object to move in a plane parallel to the camera sensor, which is not the case with the present cylindrical specimen. Nevertheless, the strain in the transverse direction is not of interest here. Moreover, as the Poisson ratio is low, the transverse strain is limited and does not affect the longitudinal strain. As a result, strain analysis with DIC on the cylindrical specimen is still acceptable for providing the longitudinal strain.

An element size of 32 pixels × 32 pixels is adopted for the quasi-static cases. The ROI and the strain fields are shown in Fig. 7, in which the strain fields correspond to the images in Fig. 5. The results show the non-uniformity of the strain even within this apparently uncollapsed part.

It might be interesting to compare the nominal strain with the DIC strain measurement of this uncollapsed part of the specimen. The evolutions of the global nominal strain and the average value of the DIC strain are plotted with respect to the image number in Fig. 8. A great disparity is found between these two curves.
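As a toy illustration of the minimization in Eqs. (3)-(8), the following sketch estimates a single rigid 1-D translation, i.e. one constant shape function ψ = 1 instead of the Q4P1 basis used by CorreliQ4. The Gauss-Newton update is the normal equation of the linearized residual in Eq. (8); names are hypothetical:

```python
import numpy as np

def dic_translation_1d(f, g, n_iter=30):
    """Estimate a subpixel translation u such that f(x) ~= g(x + u),
    by iteratively minimizing the linearized gray-level residual
    (1-D, single constant shape function)."""
    x = np.arange(len(f), dtype=float)
    u = 0.0
    for _ in range(n_iter):
        g_shift = np.interp(x + u, x, g)   # g evaluated at x + u
        g_prime = np.gradient(g_shift)     # image gradient
        r = f - g_shift                    # gray-level residual
        du = (r * g_prime).sum() / (g_prime**2).sum()
        u += du
        if abs(du) < 1e-6:                 # converged
            break
    return u
```

On a smooth synthetic pattern shifted by a non-integer amount, this recovers the shift to well below a pixel, which is the essential property exploited in Sections 3.3 and 4.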
The nominal strain increases rather linearly, while the average local strain remains constant at a value of about 2.2%. The DIC strain measurement confirms that a localized deformation area is created when the specimen is compressed to a critical strain of around 2.2%.

Figure 7: ROI calculated in quasi-static loading and the longitudinal strain fields.

Figure 8: Comparison of the global nominal strain and the mean local strain in quasi-static loading (d = 4.01, 7.55 and 11.78 mm).

4. Field measurement under impact loading

4.1. High-speed digital image system

For the Hopkinson bar tests, a Photron Fastcam SA5 high-speed camera is used to record the deforming process of the specimen during impact tests. Image sequences with a 384 pixels × 432 pixels resolution are acquired at 40,000 fps. The shutter speed is set to 1/70,000 s and the aperture to f/5.6, so that the images obtained are sharp enough. Thanks to two powerful cold spotlights, the images have suitable contrast in spite of the small aperture, high shutter speed and high frame rate.

Compared to the static case, there are no apparent dark areas in the image sequences from the dynamic experiments. An example is given in Fig. 9 for a prescribed velocity of 10 m/s. Instead, many fragments and localized bands are found.

Figure 9: The deforming states at different crush displacements (d = 1.99, 7.57 and 11.82 mm) under impact loading.

The synchronization between images and SHPB signals is not as natural as in quasi-static tests. The Hopkinson bars and the high-speed camera, which are independent in conventional experiments, are connected with a multi-channel data link (MCDL) box, as shown in Fig. 10. In practice, a LabView program has been created that records the signals from the strain gauges on the input bar and the output bar once triggered.
At the same time, it sends a 5 V TTL signal to trigger the high-speed camera. For the MCDL, only one signal from the Hopkinson bar system is sampled, at a frequency equal to 10 times the camera frame rate. This sampled signal is sent to the camera and stored in a common file with the number of the image shot at the same instant. This file is the basis of the synchronization between the Hopkinson bar system and the high-speed camera system.

Figure 10: Diagram of the SHPB system and the digital image photography system.

4.2. Synchronization of experimental results and images

As indicated before, the MCDL box samples the signal from the gauge on the output bar; call this record v. The same signal recorded by the acquisition card at a different acquisition frequency, 500 kHz in our case, is called u. The two signals should coincide exactly in time because they represent the same physical event. Therefore, synchronization between the stress-strain curve and the images can be established (see the diagram in Fig. 11). Indeed, as u and v share the same source, synchronization can be established by shifting them along the time axis. The procedure is as follows: 1) interpolate the record acquired at the lower frequency to the higher one, i.e., make sure that the two series have the same time interval between two adjacent points; u(i) and v(i) are the interpolated results; 2) compute the convolution of u(i) and v(i), and find the location of its maximum. In detail, the convolution is computed as:

w(k) = Σj u(j) · v(n − k + j)    (9)

where n is the length of the series v(i).
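This two-step procedure (resampling to a common time base, then correlating over all lags as in Eq. (9)) can be sketched as follows; the function name and the sampling rates in the example are illustrative, and NumPy's correlate is used to compute the sum over lags:

```python
import numpy as np

def estimate_shift(u, t_u, v, t_v, dt):
    """Estimate the time offset between two records of the same gauge
    signal sampled at different rates: resample both onto a common time
    step dt, then locate the lag maximizing their correlation."""
    t = np.arange(0.0, min(t_u[-1], t_v[-1]), dt)
    ui = np.interp(t, t_u, u)              # step 1: common time base
    vi = np.interp(t, t_v, v)
    w = np.correlate(ui, vi, mode="full")  # step 2: w(k) over all lags
    lag = np.argmax(w) - (len(vi) - 1)     # samples by which u leads v
    return lag * dt
```

The sign convention here is that a negative result means the u record's features occur earlier than v's, so v must be advanced by that amount to superpose the two.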
The maximum of the convolution result, w(k), corresponds to the location where u(i) and v(i) are best superposed, and hence gives the time shift needed to achieve synchronization.

Figure 11: Diagram of the relationship between the stress-strain curve and the images via the MCDL.

4.3. Tracking the movement of the bar

If this synchronization is accurate and there is no time-shift error in the strain signals in the pressure bars, the displacements of the bars calculated from the SHPB signals, by integrating the velocities in Eq. (1), will be equal to the displacements from the DIC measurement. This provides a possibility to check the accuracy of the aforementioned synchronization method as well as the calibration of the strain/voltage coefficients (position and scale, respectively). Therefore, the ends of the pressure bars are also captured during every test (see Fig. 12), and DIC is used to calculate the displacements of the bars.

Figure 12: Illustration of the points on the bars traced by DIC.

To trace the displacement of one point on the bar (the cross located in the central element in Fig. 12), a small element of 8 pixels × 8 pixels (about 1 mm × 1 mm in reality) is adopted to calculate the displacement of the bar (see the solid square in Fig. 12). The 8 surrounding elements help with this calculation. It is stressed that the strain of the small element on the bar is ignored; thus, the displacement of the bar is calculated by averaging the displacements of its four nodes. Generally, the point traced is very close to the bar-specimen interface, usually less than 2 mm away. Therefore, the displacement of the bar at this point is regarded as the displacement of the interface. It is worthwhile to note that, when the strain can be accurately measured in the bars as camera technology evolves, a direct stress/force measurement at the ends of the bars will become available.

4.4.
Validation of synchronization

Figure 13: Comparison of displacement–time profiles obtained by the SHPB and DIC tracking methods at the input and output bars: (a) synchronized directly using the MCDL method detailed in section 4.2; (b) after shifting by two images to achieve coincidence.

The synchronization method introduced in section 4.2 gives a correspondence between the images and the experimental data from the Hopkinson bars. The synchronized displacements of both specimen–bar interfaces are plotted in Fig. 13(a): the two solid lines are the displacements of the bar–specimen interfaces measured by SHPB, while the dashed lines are the displacements calculated with the DIC tracking method (section 4.3). A significant gap is observed in the displacements at the input end; the displacement at the output end is too small for an obvious gap to be seen. However, good superposition is achieved if the image sequence is shifted by two frames relative to the MCDL synchronization, as shown in Fig. 13(b).

Two conclusions can therefore be drawn. Firstly, the displacements from both the SHPB and the DIC tracking method are credible, since good superposition is possible; there is no error in the coefficient calibrations, nor any mistake in the DIC parameters. Secondly, the synchronization method with the MCDL box is not very accurate: a delay of several images exists in all the experiments, not only with the nylon pressure bars presented in this paper but also with the tensile bars in our laboratory. As a result, checking the synchronization with DIC analysis is necessary when a high-speed camera is used.

4.5. DIC measurements

For the uncollapsed part of the specimen, the DIC measurement is possible.
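The subset matching that underlies such DIC measurements can be illustrated with a toy integer-pixel search by zero-normalized cross-correlation. This is a simplified sketch, not the authors' DIC code, which additionally uses subpixel interpolation and element shape functions:

```python
import numpy as np

def match_subset(ref, cur, top, left, size=8, search=5):
    """Locate a small reference subset (e.g. 8 x 8 pixels) in the current image.

    Integer-pixel search by zero-normalized cross-correlation (ZNCC);
    returns the (dy, dx) displacement of the subset between the two images.
    """
    tpl = ref[top:top + size, left:left + size].astype(float)
    tpl = (tpl - tpl.mean()) / tpl.std()
    best_score, best_d = -np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            win = cur[top + dy:top + dy + size,
                      left + dx:left + dx + size].astype(float)
            z = (win - win.mean()) / win.std()
            score = (tpl * z).mean()  # ZNCC, in [-1, 1]
            if score > best_score:
                best_score, best_d = score, (dy, dx)
    return best_d
```

On a speckle-patterned surface the correlation peak is sharp, which is why DIC relies on a random speckle texture for unambiguous matching.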
In contrast to the quasi-static case, elements of 8 pixels × 8 pixels (1 mm × 1 mm in reality) are employed in the DIC algorithm. The chosen ROI, together with the undeformed reference image, is shown in Fig. 14, along with the strain fields corresponding to the images in Fig. 9. They show that even the uncollapsed part of the specimen is not compressed uniformly, since each of the three strain fields exhibits color differences or color strips. Some highly deformed spots are scattered over the fields, revealing that some cells are strongly compressed or collapsed. Moreover, the color differences become larger from left to right (from small crush displacement to large) and the number of highly deformed spots increases as well.

Figure 14: ROI computed under impact loading and the longitudinal strain fields corresponding to the images in Fig. 9.

Similarly to the quasi-static case, the global nominal strain and the average of the DIC local strain are compared in Fig. 15. The same conclusion can be drawn as in the quasi-static case, the plateau strain here being about 1.9%.

Figure 15: Comparison of the global nominal strain and the mean local strain under dynamic loading (curves shown at crush displacements d = 1.99 mm, 7.57 mm and 11.82 mm).

With knowledge of both the strain localization and the plateau of the mean local strain, the real deformation process of this polymeric foam is unveiled. At the beginning of the loading, the foam is first compressed as a whole and the stress increases linearly. As the strain increases, the failure stress limit of the cells is reached and they begin to collapse. From then on, the collapse front propagates while the uncollapsed part stops being further compressed, which produces the plateaus in the mean local strain and in the nominal stress. 5.
Conclusion

This paper deals with impact testing of polymeric foams using the large-diameter nylon Hopkinson bar technique and optical DIC strain field measurement. Even though the overall stress–strain curves under impact loading can already be obtained from the SHPB measurement, a field measurement is still necessary because of the heterogeneous fields within the foam specimen. High-speed imaging together with digital image correlation is a complementary measurement to the SHPB technique: it is used not only to evaluate the strain field of the foam specimen but also to obtain the displacement history at the ends of the pressure bars. Comparing this measurement with the ones from the Hopkinson bars makes it possible to correct the time synchronization error and to check the calibration of the strain/tension coefficients.

For the studied brittle polymeric foam, an overall rate sensitivity of 8% is found from quasi-static loading to impact loading at a speed of 20 m/s. Even though the equilibrium state (between the forces at the two bar/specimen interfaces) is reached rapidly in the SHPB test, there is no homogeneous strain field. The DIC strain field measurement is the only means to reveal this honeycomb-like crushing mechanism, i.e., the crush displacement (nominal strain) is mainly due to the enlargement of very localized areas/bands within the specimen, while the uncollapsed part remains at a constant small strain.

Acknowledgements

The authors acknowledge the support of the China Scholarship Council (CSC) as well as NSFC funding no. 11228206, and appreciate the kindness of EADS in supplying the specimens.
work_26jxwxrav5bkdfdotymjkpqzq4 ---- 

Journal of Aesthetics & Culture, Volume 13, Issue 1 (2021), issue in progress.

Original Articles:
"A new image of man": Harun Farocki and cinema as chiro-praxis. Henrik Gustafsson. Article 1841977, published online 29 Dec 2020.
Poetic objectification of a shattered subject: the alchemical poetry of Josep Palau i Fabre. Sergi Castella-Martinez. Article 1909314, published online 04 Apr 2021.
work_26xdw3ve7zffbmifrxqugxngeu ---- 

Obituary

Edulji (Eddy) Sethna, Formerly Consultant Psychiatrist, Hollymoor Hospital, Birmingham

Dr Eddy Sethna was born on 3 December 1925 in Bombay, India. He won two scholarships that entirely funded his medical school training, and he qualified MB BS from Bombay in 1951. Having completed house jobs in Bombay, he became a senior house officer in medicine at the Bury and Rossendale Group of Hospitals in Lancashire in 1954, obtaining his MRCP in 1956. Having specialised in cardiology at the London Heart Hospital and Sefton General Hospital in Liverpool, he also obtained a diploma in tropical medicine and hygiene in preparation for his return to India as a consultant physician to the Jahangir Nursing Home in Poona. He spent 21 months in this post but decided to return to England, where he started his career as a psychiatrist. Eddy's first psychiatric appointment was as a registrar at the St Francis and Lady Chichester Group of Hospitals in Sussex. He did his senior registrar training in Birmingham and was appointed in 1966 as a consultant psychiatrist to All Saints Hospital in Birmingham with an attachment to West Bromwich and District Hospital. He was awarded his MRCPsych in 1971, and in 1976 he became a consultant to Hollymoor Hospital in Birmingham and the Lyndon Clinic in Solihull. He was elected FRCP in 1987, having been elected FRCPsych in 1986. His publications included studies of the benefits of group psychotherapy and of refractory depression, but he also had a major interest in phobias.
When asked to organise a registrar training programme, Eddy, with typical thoroughness and attention to detail, demanded that he be allowed to establish the programme from scratch, ignoring the preconceived ideas of those more senior. Having gained their support, he established the first rotational psychiatric training programme in the country. This scheme was so popular and successful with the trainees that it was adopted by the Royal College of Psychiatrists as their national model for rotational training. In his early fifties, Eddy returned to his boyhood interest in photography as an antidote to the stresses of his job. In retirement, he became a leader and inspiration to the legions of amateur photographers taking tentative steps into the field of digital photography. He approached digital photography as he had approached medicine, studying the Adobe Photoshop computer programme systematically so that he understood its ever-evolving capabilities. He willingly offered one-to-one teaching sessions, wrote four books (two on paper and two on CD), was instrumental in the formation of the Royal Photographic Society's Digital Imaging Group, was founding chairman of the Eyecon Group, and served as vice-president of the Royal Photographic Society. More recently, the Royal Photographic Society awarded Eddy its prestigious Fenton Medal and Honorary Membership in recognition of his huge contribution to photography in the UK. He had numerous acceptances in international exhibitions and took great pride in the gold medal he was awarded shortly before his death in recognition of his creativity. Eddy died at home of Hodgkin's lymphoma on 29 June 2006, cared for by his wife, Beryl, and daughters, Beverley and Julie, as was his wish. Eddy is survived by his wife, three children and seven grandchildren, whom he adored.

Anne Sutcliffe

doi: 10.1192/pb.bp.106.013813

Review: Clinical Handbook of Psychotropic Drugs for Children and Adolescents. K. Z. Bezchlibnyk-Butler & A. S.
Virani (eds). Hogrefe & Huber, 2004, US$49.95, 312 pp. ISBN 0-88937-271-3

This handbook is intended as a practical reference book for clinicians. Thus one measure of its success is how useful it is in a busy clinical setting and whether there is any added benefit over the British National Formulary or its international equivalents. The book starts by providing a brief overview of psychiatric disorders in childhood and adolescence. This section is the weakest because, although it provides the 'basic facts', there is not sufficient detail for prescribing clinicians. I did, however, find myself using this section as a basis for handouts for medical students. The main body of this text is devoted to medications likely to be used in child and adolescent psychiatric practice. Taking the section on antidepressants as an example, it starts with a brief overview of the different classes of available antidepressants and general comments on the use of antidepressants in children and young people. For individual classes of drugs, indications, pharmacology, dosing, pharmacokinetics, onset and duration of action, adverse effects, withdrawal, precautions, toxicity, use in pregnancy, nursing implications, patient instructions and drug interactions are all covered. There are also helpful (and reproducible) patient information leaflets. The information provided is concise and up to date, although in this fast-developing field there is a danger that such texts can become out of date relatively quickly. I found myself regularly referring to the handbook in clinics. The accessible writing style made it easy to share the information with young people and their parents/carers. The only limitation is that the book's American authorship means that it tends to refer to licensing under the US Food and Drug Administration and reflects American practice, which does differ in some respects from contemporary practice in the UK.
Margaret Murphy, Department of Child Psychiatry, Cambridgeshire and Peterborough Mental Health Trust, Ida Darwin Hospital, Cambridge Road, Fulbourn, Cambridge CB1 5EE, email: margaret.murphy@cambsmh.nhs.uk

doi: 10.1192/pb.bp.105.004333

work_27lekckjy5dcnompcp7ymdn3ry ---- 

Telewound care – providing remote wound assessment and treatment in the home care setting: current status and future directions

Nick Santamaria,1 Suzanne Kapp2
1University of Melbourne and Melbourne Health, Royal Melbourne Hospital, Melbourne, VIC, Australia; 2Royal District Nursing Service Institute, Melbourne, VIC, Australia
Correspondence: Nick Santamaria, University of Melbourne and Melbourne Health, Level 6, Royal Melbourne Hospital, Grattan Street, Parkville, Melbourne, VIC 3050, Australia; Tel +61 4 1456 0929; email nick.santamaria@mh.org.au

© 2013 Santamaria and Kapp. This work is published by Dove Medical Press Limited and licensed under a Creative Commons Attribution – Non Commercial (unported, v3.0) License. Smart Homecare Technology and TeleHealth 2013:1 35–41. Review. http://dx.doi.org/10.2147/SHTT.S34353

Abstract: The use of wound telemedicine systems in the home care environment has been expanding for the last decade.
These systems can generally be grouped into two main types, store and forward systems and video conference systems; there are also hybrid systems that include elements of both. Evidence to date suggests that these systems provide significant benefits to patients, clinicians, and the health care system generally. Reductions in resource use and costs, visit substitution, and high patient and clinician satisfaction have been reported; however, there is a lack of integration with existing health care technology and no clearly defined technical or clinical standards as yet. Similarly, the legalities associated with wound telemedicine and remote consultation remain unclear. As wound telemedicine systems continue to evolve and be deployed in different locations, there remains significant potential to harness their power to benefit patients being treated at home.

Keywords: telemedicine, home care, e-health

Introduction

The provision of evidence-based, clinically effective and efficient wound care outside the hospital environment is a goal that can be achieved through the adoption and deployment of currently available telemedicine technologies. However, the challenge facing the health care system generally, and home care clinicians specifically, is how to identify and operationalize available telemedicine systems within existing systems of care delivery. There are a number of synonyms for the delivery of health care supported by digital technology: terms such as telematics, telehealth, telemedicine, and e-health are commonly seen in the literature and in the area of wound management. The term TeleWound care has been used at times; however, it may be clearer to refer to these systems generically as telemedicine, as this term appears to be the most widely employed. Telemedicine is defined by the World Health Organization (WHO) as the practice of health care using interactive, visual, and/or data communications.1
This includes health care delivery, diagnosis, consultation and treatment, as well as education and transfer of medical data.1 The introduction of telemedicine technologies is complex, requires significant investment, and its success often depends on fundamental clinical practice change. These challenges often determine the level of success of the introduction of the new technology and the viability of the change.2,3 This paper provides a review of the types of telemedicine wound technologies that are currently available, the common design of clinical home wound care systems, and the critical factors that enabled the change to telemedicine to be made while improving clinical outcomes and enhancing patient satisfaction.

Historical management of non-hospital wounds

Traditionally, home care wound management has not differed substantially from in-hospital care, in that wounds have been assessed and documented in ways similar to those employed in the acute setting. Parameters such as wound type, etiology, duration, position, clinical appearance, exudate, wound edges, degree of pain, appearance of the surrounding skin, and the presence of signs of potential infection are recorded in the clinical record.4 Wound dimensions are also recorded using a variety of methods, including the use of centimeter grids to measure the surface area, length, width, and depth of the wound.
Tracing the wound outline onto an acetate sheet, or even drawing the shape of the wound freehand, has been used to determine change in the wound over time. Recording wound dimensions to determine wound healing has been particularly problematic because of the potential for significant measurement error.5 Visiting nurses would repeat the documentation of these parameters at home visits if there is a change in treatment, the wound deteriorates, or healing does not progress. In some cases, patients may be required to attend outpatient appointments at specialist multidisciplinary wound clinics or with a general practitioner, depending on the nature of the wound and the patient's capacity to travel. Before reviewing current wound telemedicine systems, it should be noted that ideally a detailed, face-to-face assessment of the patient and their wound should be conducted by a wound management expert prior to the use of a wound telemedicine system for remote management. This is not always possible, particularly when patients are located in very remote locations; however, a detailed physical assessment and history of the patient is considered an essential component of high quality wound management.

Telemedicine systems available to home care wound clinicians

System characteristics

Wound telemedicine systems that are currently available to home wound care clinicians can generally be divided into two broad groupings, and a number of subgroupings, based on the mode and combination of communications used. The first group is commonly termed "store and forward" (SAF) or asynchronous communication systems.6,7 In SAF-type systems, wound and patient details are recorded at the time of the visit, stored, and then forwarded at a later time to an expert/specialist clinician for remote review/consultation.
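The store-and-forward pattern can be sketched as a local queue of visit records that is drained to a remote service when connectivity and staffing allow. This is a minimal illustration only; the record fields and function names are assumptions for the sketch, not the schema of any system described in this review:

```python
import json
import queue
from dataclasses import dataclass, asdict

@dataclass
class WoundRecord:
    """Illustrative SAF payload captured at a home visit (assumed fields)."""
    patient_id: str
    image_file: str      # digital photograph taken at the visit
    length_cm: float
    width_cm: float
    exudate: str
    recorded_at: str     # ISO-8601 timestamp of the visit

# Records are stored locally at the point of care until forwarding is possible.
outbox: "queue.Queue[WoundRecord]" = queue.Queue()

def store(record: WoundRecord) -> None:
    """'Store' step: asynchronous, so no specialist needs to be online now."""
    outbox.put(record)

def forward_all(send) -> int:
    """'Forward' step: later, push every stored record to the remote service.

    `send` is any callable that transmits one JSON payload; it is injected
    here so the sketch stays transport-agnostic. Returns the count sent.
    """
    sent = 0
    while not outbox.empty():
        send(json.dumps(asdict(outbox.get())))
        sent += 1
    return sent
```

The key property of the pattern is visible in the split between the two functions: capture and transmission are decoupled in time, which is exactly what distinguishes SAF from the synchronous video-conference systems described next.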
The nature of the data recorded, stored, and later forwarded can be as simple as a digital image of a wound, or a more complex data set including wound measurements, characteristics, healing rate, and wound management details can also be recorded. In many cases, SAF-type systems are used as an electronic wound medical record rather than for remote consultation and treatment support. The way that SAF systems are actually used in day-to-day clinical care depends on the local conditions and the work practices of the home care agencies using the systems. This variation in use is highly dependent on a number of factors, such as the nature and structure of the health care system, the capabilities of the telemedicine system itself, the available information technology infrastructure, geography, the clinical needs of patients, and the time and expertise of the staff using the system. There are numerous examples of wound management SAF systems in use, and they are constantly evolving as technology and communication infrastructure evolve. At the most basic level is the use of digital imaging to record and/or forward wound images. This requires only widely available digital cameras and a measurement scale included in the frame of the image to provide a size reference. The greatest technical issue with this approach is the ability of the staff to capture a clear, focused, and well-lit image of the wound in the patient's home. While this may seem simple, it does require some practice and familiarity with the digital camera and its settings.8 More sophisticated systems that use digital imaging and computer software not only to record a digital wound image but also to enable very precise measurement of the wound, calculation of the healing rate, and capture of wound characteristics include the Advanced Medical Wound Imaging System (Medseed Pty Ltd, Melbourne, VIC, Australia) and the Silhouette Star system (ARANZ Medical, Christchurch, NZ).
These systems tend to be used by relatively small groups of clinicians, single health services, and wound researchers, whereas the ComCare Mobile system (Silver Chain, Perth, WA, Australia) is used by a very large community nursing service in Western Australia to document, assess, and analyze wound care delivered at patients' residences. On a larger scale, the Western Australian Health Department has deployed the MMEx Wound Management System (WMS; University of Western Australia) throughout the state, an area covering approximately one third of the Australian land mass, with a widely distributed population that includes very remote locations where wound management is provided.

Smart Homecare Technology and TeleHealth 2013:1 submit your manuscript | www.dovepress.com Dovepress 37 Telewound care

MMEx enables digital imaging, wound analytics, and remote consultation,9 and it is planned to ultimately link to hospitals, general practice physicians, and community nursing providers. The second form of wound telemedicine is real-time video conferencing (VC), or synchronous telemedicine.6 In this form, there is a real-time interaction between the clinician and patient and a remote consultant. The requirement of VC is that all parties be available at the same time to undertake the remote consultation. Once again, the success of VC is dependent on the availability of communication infrastructure in the patient's home.
To date, VC has been better suited to a clinic-based interaction with a remote center; however, the gradual introduction of broadband technology into patients' residences will create the potential for greater use of VC in future wound telemedicine. A third form of wound telemedicine has been described as hybrid systems of SAF and VC.6 Here, data forwarded via a SAF system are combined with real-time VC, so that detailed data can be reviewed during a synchronous, real-time interaction between the wound care clinician, patient, and remote expert.

Studies of remote wound management, best practice, and new developments in non-hospital wound care

There are a number of studies investigating the effectiveness of wound telemedicine; however, they are difficult to compare directly due to methodological and technical differences as well as differing local characteristics and service delivery models. Nevertheless, some common themes do emerge from these investigations. Studies that have evaluated the feasibility of SAF wound telemedicine systems have focused on clinical outcomes such as healing rates, system reliability, financial outcomes, and clinician and patient satisfaction. A large randomized controlled trial of the effectiveness of remote wound consultation for patients with chronic lower limb ulcers in the Kimberley region of Western Australia10 demonstrated significant clinical and cost benefits of using a SAF wound management system to analyze wound parameters and to conduct remote expert wound consultation. Image quality and clinical data were found to be adequate to enable accurate review and consultation for the complex wounds being treated in four remote locations. Results showed that patients in the telemedicine group had significantly faster healing rates, fewer amputations, and lower treatment costs than controls receiving standard care.
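Healing rate, the key clinical outcome in trials such as this, can be quantified in several ways (reference 21 compares methods for diabetic foot ulcers). The sketch below uses percent area reduction per week, one common formulation, purely as an illustration; it is not the specific calculation used in the Kimberley trial.

```python
def percent_area_reduction_per_week(initial_cm2: float, current_cm2: float, days: int) -> float:
    """Percent of the initial wound area closed per week of follow-up."""
    weeks = days / 7.0
    return (initial_cm2 - current_cm2) / initial_cm2 * 100.0 / weeks

# Example: a 10 cm^2 ulcer measuring 6 cm^2 after 28 days of treatment
# closed 40% of its initial area over 4 weeks, i.e. 10% per week.
print(percent_area_reduction_per_week(10.0, 6.0, 28))  # 10.0
```

A rate computed this way lets a remote reviewer compare serial measurements against an expected healing trajectory and flag wounds that are static or deteriorating.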
A randomized controlled trial11 that compared three groups, two of which received telemedicine interventions and one of which received standard care, failed to show any benefit (time to heal, cost, and length of stay) of the telemedicine intervention over standard care. This study had significant methodological problems, including an unbalanced allocation that placed more patients with severe wounds in the intervention groups than in the control group. Clegg et al12 evaluated a hybrid wound telemedicine system incorporating both SAF and VC technology to undertake expert wound review and reported a saving of approximately US$100 per visit compared to standard consultation. They concluded that the quality of wound care delivered through the telemedicine system was comparable to face-to-face consultation. Cost reductions have also been found in treating patients at home for chronic wounds13 and in home management of leg ulcers using an initial face-to-face visit followed by digital image review via a secure website.14 Reductions in hospitalization, emergency department attendances, and costs have also been reported for patients treated at home by visiting nurses when supported by remote consultation and guideline use via wound telemedicine.15 A number of studies have focused on the comparability of the quality of wound assessment using telemedicine systems compared to in-clinic assessment.
Central to the success of wound telemedicine systems is the question of accuracy of assessment and clinical efficacy.16 In a study of the intra- and inter-rater reliability of clinicians using a three-dimensional (3D) SAF wound telemedicine system, investigators found that the telemedicine system enabled wound assessment accuracy comparable to that of direct consultation for diabetic foot ulcers.17 Similar levels of accuracy were reported in an Austrian study on the use of a wound telemedicine system for remote leg ulcer assessment.18 Clinicians rated the quality of wound images and reported that in 89% of 492 consultations the image quality was sufficient to allow them to give therapeutic recommendations remotely. A high degree of clinical assessment accuracy of telemedicine in the home follow-up of pediatric burns patients was reported in an Australian study.19 Wound telemedicine systems have also been used in clinical research, both in home and clinic settings, with ambulant patients. In a large randomized controlled trial of the relative effectiveness of cadexomer iodine and nanocrystalline silver dressings for home-based patients with colonized venous leg ulcers, a wound telemedicine system was used to measure wound changes and to determine healing rates.20 The measurement accuracy of some wound telemedicine systems provides significant advantages for wound researchers due to the ability to monitor, measure, and quantify discrete changes in wounds, such as changes to different tissue types, surface areas, perimeter advancement, and depth within wounds. Additionally, some wound telemedicine systems capture 3D wound characteristics and analyze healing rates,21 which may clarify some elements of wound biology and healing.

Patient and carer satisfaction and acceptance of home wound telemedicine

There appears to be a consistent theme in the wound telemedicine literature of clear patient and carer acceptance of this emerging technology. It should be noted, however, that not all wound telemedicine research has investigated patient perceptions. Where this element has been incorporated into the research, there appear to be different patient responses to SAF technologies than to VC or hybrid-type systems, but it is difficult to draw clear conclusions due to methodological differences between these studies. Studies based on SAF approaches report that patients are reassured that their wound is healing because they can observe changes in the wound over time.22,23 Binder et al14 reported high levels of patient satisfaction with remote monitoring of home-based leg ulcer treatment; this finding is supported by Swedish and Austrian studies24,18 on the use of VC for expert consultation in leg ulcer treatment in cohorts of elderly patients. Similar findings of patient satisfaction with VC are also noted by investigators of VC in the American aged care sector.25 Smith et al19 also noted high levels of carer satisfaction with wound telemedicine for pediatric patients in remote locations in Australia.
The limited but positive evidence of patient satisfaction with wound telemedicine can be summarized as indicating that patients and carers appreciate being involved in their care and monitoring the progress of their wounds, and are grateful to avoid trips to clinical facilities that may involve significant time, inconvenience, and expense. In general, however, patient and carer satisfaction with wound telemedicine is greatly under-investigated and should be included in future studies in this area.

Safety, legality, and practicality of remote wound care

The safety of wound telemedicine systems relates to a number of factors: 1) the performance of the system in terms of comparability to face-to-face consultation; 2) the validity, reliability, and integrity of measurement performance; and 3) the compliance of the wound telemedicine system with national or international wound management guidelines. There is clear evidence of the comparability of wound telemedicine systems with face-to-face consultation;17,18 however, the evolution of commercially available wound telemedicine systems over the past 10 years has not in all cases been accompanied by clear, independently evaluated performance data. The reason is that these systems have evolved outside of a regulatory framework governing standards for wound telemedicine. Effectively, wound telemedicine has developed through the convergence of clinical need, developing information and communication technology, and the availability of low-cost digital photography equipment. The Health Level 7 (HL7; Health Level 7 International, Ann Arbor, MI, USA) international standard defines interoperability, communication, and security parameters for telemedicine systems, but there is now a need for the development of national/international standards relating to the wound management parameters of wound telemedicine, particularly in the area of SAF systems.
The legal position in relation to wound telemedicine is unclear from the perspective of the liability of health care organizations for potential harm that may result from the use of these systems, particularly in the absence of a regulatory framework and of standards for system performance.26,27 Additionally, health care organizations would reasonably be expected to provide adequate training for clinicians to ensure that they use the equipment appropriately. Patient data integrity and confidentiality are of great concern and should be addressed by organizations28 employing wound telemedicine, particularly when these systems are not integrated into existing hospital electronic medical records. Unfortunately, it is often the case that these and other issues are only revealed once litigation is initiated. The practicality of introducing wound telemedicine systems within existing home nursing programs is highly dependent on the characteristics of the service, organizational commitment, case load, funding model, and existing information and communication infrastructure. Due to the potential variability of these and other factors, it is not possible to provide definitive answers about the practicality of introducing these systems; however, it is possible to gain an appreciation of how some services have introduced wound telemedicine and how staff and services have responded. Senior management support is essential for the introduction of clinical information technology and affects the subsequent adoption of the systems across care settings.3,29 Similarly, clinical staff support is essential to the successful introduction of these systems. Staff support for telemedicine may vary with the characteristics of staff and their familiarity with wound telemedicine and information technology.30 Alternatively, staff may not want to change existing work practices18 or may perceive the introduction of wound telemedicine as making existing work practices inefficient.31

Discussion

The place of wound telemedicine in the future of home care practice

The introduction of wound telemedicine has parallels with the development of many past technologies in that it was driven by the evolution and convergence of other technologies such as personal computers, digital photography, and the internet. Wound digital imaging systems were developed for use in research and to better document and measure wounds in in-patient settings. From these beginnings, SAF and VC wound systems migrated to the community sector and into home care. The drivers for this migration varied from country to country. In some, the evolution was based on cost-effectiveness, due to the ability to substitute some visits via wound telemedicine and the better, more efficient use of highly skilled staff. The capacity for cost savings also applied to patients, due to the reduced need to attend wound clinics while still benefitting from having their wounds reviewed and treatment effectiveness remotely monitored by expert staff. In other countries, the impetus for the use of wound telemedicine is the need to better serve rural and remote communities that do not have access to specialist wound services.
Here, patients and local staff benefitted from wound telemedicine through the elimination of many of the disadvantages of geographical isolation. Expert wound clinicians could provide consultations via SAF technologies and could interact with staff and patients through VC or hybrid systems using both SAF and VC. A striking aspect of the evolution of wound telemedicine is the lack of standardization and certification of these technologies. It is still possible to buy and commence using a wound telemedicine SAF system today that does not have to comply with any particular technical or clinical standard. This lack of standardization presents a potential risk to both health care organizations and patients. There is a pressing need for the development of minimum wound care standards for these systems. Similarly, there is no requirement for integration with existing hospital/clinical systems, such as electronic medical records. The use of electronic medical records is by no means universal, and this problem is compounded by the "stand-alone" nature of a number of SAF wound telemedicine systems. There are some positive signs that this situation is improving: in the home care environment, Silver Chain in Western Australia is a leader in the integration of its ComCare mobile wound telemedicine system with its ComCare electronic patient record. Also in Western Australia, the MMEx system has, as part of its design parameters, the need to communicate with hospitals and with general practitioners. While many would ask what the cost/benefit is of developing and deploying wound telemedicine in the home care sector, perhaps it would be more beneficial to ask what the future cost would be of not using these systems in countries with aging populations and an increasing burden of chronic disease. The future of home care wound telemedicine systems will see increasing integration with existing health care organizations.
Additionally, we will see the migration of these systems from primarily desktop-based systems to mobile devices such as smartphones and tablets. Within these devices we will also see greater automation of wound analysis functions. Currently, wound management systems require the transfer of an image from a digital camera to a computer for measurement of the wound. This measurement has been undertaken by tracing the margin of the wound so that a surface area can be calculated and used to determine a healing rate. This is a time-consuming process and one that will be replaced by automated edge-detection technology. Such technology is now available and can also differentiate between differing tissue types within the wound and measure and report on their change automatically, making this aspect of wound assessment much more rapid. There are even prototypes of mobile systems being developed that explore the use of spectral analysis to detect the presence of bacteria within a chronic wound. A further area of development and integration is linking wound telemedicine systems with clinical decision support algorithms aligned with evidence-based clinical guidelines. Here the aim is to develop a wound telemedicine system that is highly automated in its ability to assess a wound. The system would provide a provisional diagnosis of the wound based on patient history, presenting problems, and wound assessment parameters, and would suggest to the clinician a potential course of management based on the most current clinical evidence. Mobile systems would also benefit patients and carers by delivering wound management education applications and, in some cases, wound treatment advice for patients who can self-care for their wounds. To take this a few steps further, there is no reason why electronic patient records could not be linked to all wound telemedicine systems, thereby providing very large databases of wound types, treatments, and healing rates by treatment. This approach is sometimes termed "big data", and it would enable very robust cost/benefit analyses of the effectiveness of various wound treatments regardless of clinical setting. Home care will always require expert clinical staff to treat patients with wounds, and wound telemedicine systems enable these staff to provide high-quality care based on the best available evidence more effectively and efficiently.

Disclosure

The authors report no conflicts of interest in this work.

References

1. World Health Organization. A Health Telematics Policy in Support of WHO's Health-for-All Strategy for Global Health Development: Report of the WHO Group Consultation on Health Telematics, December 11–16, 1997. Geneva: World Health Organization; 1998.
2. Barrett M, Larson A, Carville K, Ellis I. Challenges faced in implementation of a telehealth enabled chronic wound care system. Rural Remote Health. 2009;9(3):1154.
3. Ellis I, Santamaria N, Carville K, et al. Improving pressure ulcer management in Australian nursing homes: results of the PRIME Trial organisational study. Primary Intention. 2006;14(3):106–111.
4. Carville K. Wound Care Manual. 6th ed. Perth: Silver Chain Foundation; 2012.
5. Santamaria N, Clayton L. Cleaning up. The development of the Alfred/Medseed Wound Imaging System. Collegian. 2000;7(4):14–15, 17.
6. Chittoria RK. Telemedicine for wound management. Indian J Plast Surg. 2012;45(2):412–417.
7. Ong CA.
Telemedicine and Wound Care. In: Latifi R, editor. Current Principles and Practices of Telemedicine and E-Health. Washington, DC: IOS Press; 2008:211–225.
8. Jones SM, Banwell PE, Shakespeare PG. Telemedicine in wound healing. Int Wound J. 2004;1(4):225–230.
9. Santamaria N, Glance D, Prentice J, Fielder K. The development of an electronic wound management system for Western Australia. Wound Practice and Research. 2010;18(4):174–179.
10. Santamaria N, Carville K, Ellis I, Prentice J. The effectiveness of digital imaging and remote wound consultation on healing rates in chronic lower leg ulcers in the Kimberley region of Western Australia. Primary Intention. 2004;12(2):62–70.
11. Terry M, Halstead LS, O'Hare P, et al. Feasibility study of home care wound management using telemedicine. Adv Skin Wound Care. 2009;22(8):358–364.
12. Clegg A, Brown T, Engels D, Griffin P, Simonds D. Telemedicine in a rural community hospital for remote wound care consultations. J Wound Ostomy Continence Nurs. 2009;38(3):301–304.
13. Dobke MK, Bhavsar D, Gosman A, De Neve J, De Neve B. Pilot trial of telemedicine as a decision aid for patients with chronic wounds. Telemed J E Health. 2008;14(3):245–249.
14. Binder B, Hofmann-Wellenhof R, Salmhofer W, Okcu A, Kerl H, Soyer HP. Teledermatological monitoring of leg ulcers in cooperation with home care nurses. Arch Dermatol. 2007;143(12):1511–1514.
15. Rees RS, Bashshur N. The effects of TeleWound management on use of service and financial outcomes. Telemed J E Health. 2007;13(6):663–674.
16. Flowers C, Newall N, Kapp S, et al. Clinician inter-rater reliability using a medical wound imaging system. Wound Practice and Research. 2008;16(1):22–31.
17. Bowling FL, King L, Paterson JA, et al. Remote assessment of diabetic foot ulcers using a novel wound imaging system. Wound Repair Regen. 2011;19(1):25–30.
18. Hofmann-Wellenhof R, Salmhofer W, Binder B, Okcu A, Kerl H, Soyer HP.
Feasibility and acceptance of telemedicine for wound care in patients with chronic leg ulcers. J Telemed Telecare. 2006;12 Suppl 1:15–17.
19. Smith AC, Kimble R, Mill J, Bailey D, O'Rourke P, Wootton R. Diagnostic accuracy of and patient satisfaction with telemedicine for the follow-up of paediatric burns patients. J Telemed Telecare. 2004;10(4):193–198.
20. Miller CN, Newall N, Kapp SE, et al. A randomized-controlled trial comparing cadexomer iodine and nanocrystalline silver on the healing of leg ulcers. Wound Repair Regen. 2010;18(4):359–367.
21. Santamaria N, Ogce F, Gorelik A. Healing rate calculation in the diabetic foot ulcer: comparing different methods. Wound Repair Regen. 2012;20(5):786–789.
22. Austin D, Santamaria N. Digital imaging and the chronic wound: clinical application and patient perception. The Journal of Stomal Therapy Australia. 2002;23(4):24–29.
23. Austin D, Santamaria N. Wound imaging and people with chronic wounds: what happened to hexis? Collegian. 2004;11(3):12–19.
24. Jönsson AM, Willman A. Development of a consultation and teaching concept for leg wound treatment in home health care. J Telemed Telecare. 2007;13(5):236–240.
25. Laflamme MR, Wilcox DC, Sullivan J, et al. A pilot study of usefulness of clinician-patient videoconferencing for making routine medical decisions in the nursing home. J Am Geriatr Soc. 2005;53(8):1380–1385.
26. Clark PA, Capuzzi K, Harrison J. Telemedicine: medical, legal and ethical perspectives. Med Sci Monit. 2010;16(12):RA261–RA272.
27. Gardiner S, Hartzell TL. Telemedicine and plastic surgery: a review of its applications, limitations and legal pitfalls. J Plast Reconstr Aesthet Surg. 2012;65(3):e47–e53.
28. Gray LC, Armfield NR, Smith AC. Telemedicine for wound care: current practice and future potential. Wound Practice and Research. 2010;18(4):158–163.
29. May C, Harrison R, MacFarlane A, Williams T, Mair F, Wallace P.
Why do telemedicine systems fail to normalize as stable models of service delivery? J Telemed Telecare. 2003;9 Suppl 1:S25–S26.
30. Loera JA. Generational differences in acceptance of technology. Telemed J E Health. 2008;14(10):1087–1090.
31. Santamaria N, Austin D, Clayton L. Multi-site trial and evaluation of the Alfred/Medseed Wound Imaging System prototype. Primary Intention. 2002;10(3):119–124.
Unna boot compared to elastic bandages in patients with venous ulcers: clinical trial

Alcione Matos de Abreu 1, Beatriz Guitton Renaud Baptista de Oliveira 2
1,2 Academic Master's in Health Care Sciences, Fluminense Federal University

ABSTRACT

The main cause of venous ulcers is venous hypertension and the resulting capillary hypertension. The aim of treatment is to reverse venous hypertension at the level of the superficial veins of the lower limbs, which is why compression therapy is indicated for these patients. Objective: To describe the process of tissue repair in venous ulcer patients using inelastic compression therapy (Unna boot) compared to elastic bandages, and to compare the clinical and evolving results of the tissue repair process of venous ulcers in patients submitted to treatment with inelastic and elastic therapy. Method: This is an experimental, randomized, controlled, open and prospective clinical trial with a quantitative approach, held in a university hospital. The sample will be 18 patients, followed for 13 weeks. The data collection instrument will include sociodemographic and clinical questions and specific questions about the ulcer. Data analysis will be performed using SPSS software.
Descriptors: Unna boot; compression therapy; venous ulcer.

SITUATION/PROBLEM AND ITS SIGNIFICANCE

The main cause of venous ulcers is venous hypertension and the resulting capillary hypertension (1,2). The aim of treatment is to reverse venous hypertension at the level of the superficial veins of the lower limbs, which is why compression therapy is indicated for these patients (2,3). Nowadays, there is a great variety of compression therapies on the market for venous ulcer treatment, but it is still not clear whether they are all truly effective or which of these therapies is best indicated (2). The objectives of the trial are: to evaluate the clinical and evolving results of the tissue repair process of patients with venous ulcers submitted to inelastic compression therapy (Unna boot) compared to elastic therapy (elastic bandage); and to examine the satisfaction and comfort of the patient during both compression therapies.

METHODOLOGY

This is an experimental, randomized, controlled, open and prospective clinical trial with a quantitative approach. It will be developed at the Antonio Pedro University Hospital outpatient wound repair clinic. The sample will be 18 patients. For random allocation, a list will be generated in Biostat 5.0 software to randomly split the participants into two groups: one group will be treated with the Unna boot, and the other with an elastic bandage over vaseline gauze as the primary dressing. The list will be applied to the participants before they enter the trial. All volunteers must meet the inclusion criteria: adult patients, of both genders, diagnosed with venous ulcer, with an ankle-brachial index higher than 0.9. Excluded from the trial are: bedridden patients and/or wheelchair users; diabetics; patients diagnosed with arterial, neuropathic or mixed ulcers; those allergic to any product used in the trial; those with infection of the ulcer bed; those with pain and cyanosis of the lower limb after application of the product; and those using corticoids.
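The random allocation step described in the methodology can be sketched as follows. This is a minimal Python illustration, not the Biostat 5.0 procedure the protocol specifies, and the fixed seed is an assumption added here so the allocation list is reproducible.

```python
import random

def allocate(participant_ids, seed=2010):
    """Randomly split participants into two equal treatment arms."""
    rng = random.Random(seed)   # fixed seed: the list is drawn once, before enrollment
    ids = list(participant_ids)
    rng.shuffle(ids)
    half = len(ids) // 2
    return {
        "unna_boot": sorted(ids[:half]),
        "elastic_bandage": sorted(ids[half:]),
    }

groups = allocate(range(1, 19))  # 18 participants, as in the planned sample
print(len(groups["unna_boot"]), len(groups["elastic_bandage"]))  # 9 9
```

Drawing the whole list up front, before any participant enters the trial, matches the protocol's description and keeps the allocation concealed from the clinicians doing the recruiting.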
The monitoring period will be 13 weeks. The project was approved by the Research Ethics Committee of the Faculty of Medicine, Antonio Pedro University Hospital, under protocol nº 327/2010 and CAAE 0252.0.258-000-10, in accordance with Resolution nº 196/96 of the Ministry of Health, which regulates criteria for research involving human beings. The trial is duly registered with the Brazilian Clinical Trials Registry (req: 195) and under WHO UTN U1111-1122-5489. Data collection will follow two stages. Stage I: initial approach with every patient of each group, provision of information about the research, and signing of the informed consent form; one day later, anamnesis and the start of inelastic (Unna boot) or elastic therapy. Stage II: weekly check-ups, with dressing changes, clinical evaluation of the lesions, measurement, tracing, and digital photography, through application of the trial protocols. All patients will be instructed to change the secondary dressing at home regularly, according to the exudate production of the ulcer, and to protect the dressing during bathing; these measures help prevent ulcer odor and infection. Data analysis will use descriptive and inferential statistics in SPSS software, version 14.0 for Windows.

REFERENCES

1. Albino AP, Furtado K, Pina E. Boas práticas no tratamento e prevenção das úlceras de perna de origem venosa. Tipografia Lousanense, Ltda; 2007.
2. O'Meara S, Cullum NA, Nelson EA. Compression for venous leg ulcers. Cochrane Database Syst Rev. In: Biblioteca Cochrane Plus 2009, Number 2. Oxford: Update Software Ltd. Available at: http://www.update-software.com. (Translated from The Cochrane Library, 2009 Issue 1, Art no. CD000265. Chichester, UK: John Wiley & Sons, Ltd.). Accessed October 30, 2010.
3. Oliveira BGRB, Lima FFS, Araújo JO. Ambulatory care of wounds – clients profile with chronic lesion a prospective study. Online Brazilian Journal of Nursing [online]. 2008 [cited December 20, 2011];7(2).
Available at: http://www.objnursing.uff.br/index.php/nursing/article/viewArticle/j.1676-4285.2008.150.

Brazilian Clinical Trials Registry: (req: 195) and WHO UTN U1111-1122-5489.

PROJECT DATA

Dissertation project for the Academic Master's in Health Care Sciences, Aurora de Afonso Costa Nursing School, Fluminense Federal University. Approved at CEP: Faculty of Medicine and Antonio Pedro University Hospital, protocol 327/10, CAAE 0252.0.258.000-10. Mailing address: Rua Noronha Torrezão 407 apt. 304 Bloco 7, Bairro Santa Rosa, Niterói, RJ, CEP 24240-181. Research financial support: CAPES-CNPq (scholarship).

Medico-legal study of intracranial causes of death

Egyptian Journal of Forensic Sciences (2014) 4, 116–123

Muataz A. Al-Qazzaz a,*, Mohammad Abdul-Mohsin Jabor b
* Corresponding author. Address: Dept. of Pathology and Forensic Medicine, College of Medicine, University of Al-Nahrain, PO Box 10048, Iraq. Mobile: +964 7700693558. E-mail address: m_zq88@hotmail.com (M.A. Al-Qazzaz). Peer review under responsibility of The International Association of Law and Forensic Sciences (IALFS). http://dx.doi.org/10.1016/j.ejfs.2014.07.007. © 2014 The International Association of Law and Forensic Sciences (IALFS). Production and hosting by Elsevier B.V. All rights reserved.
a Dept. of Pathology and Forensic Medicine, College of Medicine, University of Al-Nahrain, Iraq
b Dept.
of Pathology and Forensic Medicine, College of Medicine, University of Al-Mustansirya, Iraq. Received 19 March 2014; revised 14 June 2014; accepted 8 July 2014; available online 2 August 2014. KEYWORDS: Intracranial; Subarachnoid; Hemorrhage; Laceration. Abstract Background: Many incidents affecting the head region can lead to death, but for simplicity's sake these incidents fall into two broad categories: non-traumatic (natural) or traumatic (violent). Objectives: To classify all intracranial lesions and injuries according to the mode and manner of death, gender, age and admission to hospital, and to evaluate these lesions and their role in the cause of death. Materials and methods: The study was performed on 119 cases referred to the Medico-Legal Institute in Baghdad within the study period of 1 November 2012 to 1 May 2013. Complete routine autopsies were carried out for all cases, with thorough external and internal examination. Digital photography was used for some interesting cases. Tissue specimens were sent for histopathology and blood samples were drawn for alcohol and toxicology. Results: Intracranial lesions accounted for 11.54% of the total deaths. The mean age was 32.48 ± 17.59 years. The commonest age group affected by intracranial lesions was >20–30 years. Traumatic cases were the commonest, and males outnumbered females. In traumatic deaths, road traffic accidents were the commonest category of death, while in non-traumatic deaths cerebrovascular accidents were the commonest. The accidental manner of death was the commonest. Intracranial lesions alone were seen in only 27.73% of the cases. Subarachnoid hemorrhages were the commonest intracranial lesions. Pneumonia was the commonest complication in delayed death. Conclusions: Pure intracranial causes of death were recorded in a minority of the cases studied; most also involved injuries at other anatomical sites.
1. Introduction

The head is a preferred target for criminal acts and a favorite site for various pathological lesions. 1 According to the World Health Organization (WHO), deaths related to the head, whether traumatic or non-traumatic, represent about 17% according to global studies from different countries; in the United States of America (USA), cerebrovascular diseases represent the third cause of death. 2

Figure 1 Percentage of deaths due to intracranial lesions from total number of cases (intracranial lesions: 119 cases, 11.54%; others: 912 cases, 88.46%).

Brain lesions can be caused by either traumatic or non-traumatic incidents. Traumatic brain injuries are caused by two mechanisms: impact and/or movement of the brain inside the skull. 3 In the USA the annual incidence of head injuries is 300 per 100,000 per year, with a mortality rate of 25 per 100,000 in North America and 9 per 100,000 in the United Kingdom (UK). The most common causes of head trauma are RTA (road traffic accidents), home and occupational accidents, falls and assaults. 4 Non-traumatic (natural) brain lesions can be due to increased intracranial pressure, cerebrovascular diseases, metabolic disorders, tumors of the central nervous system (CNS), neurodegenerative diseases, infections of the CNS and myelin diseases.
5–7 In adults, CNS-related pathologies causing sudden death occur in the following order: intracranial hemorrhages (more than 50% of cases), seizures, neurodegenerative disorders, hypoxic-ischemic lesions with strokes, intracranial tumors, primary CNS infections, multiple sclerosis and developmental disorders, with male predominance in all age groups except for the neurodegenerative disorders (dementia disorders), which occur primarily in females. 8,9

2. Method

A prospective descriptive study was performed on 119 cases from a total of 1,031 cases referred to the Medico-Legal Institute of Baghdad within the study period, including cases of head injuries and sudden or suspicious circumstances of death. Information about each victim was obtained from police reports, close relatives of the victim, eyewitnesses, and medical reports for those admitted to hospital prior to death. Decomposed bodies were excluded from the study. The information gathered included age, sex, medical history, mode of death and admission to hospital. External examination of the bodies was carried out to locate and mark all external injuries and their anatomical sites and types. X-ray examination was performed on some cases when needed; then a complete classical autopsy was performed, starting with the head region. After removing the bony window, the meninges and the brain were examined for hematomas, hemorrhages or any other lesions and abnormalities. The brain was removed and examined thoroughly; the base of the skull was examined after removing the dura to detect any fractures or pathological state. After separating the cerebellum from the cerebrum with a single sweep at their junction, the brain was divided in two by a sagittal cut with the brain knife to examine the ventricles and both the white and gray matter; coronal slices one centimeter thick were then made, starting from the frontal lobe.
The cerebellum and the brainstem were also sliced and examined. The autopsy was completed by dissecting the chest and abdominal regions to reveal any internal organ injury associated with the intracranial pathology.

2.1. Complementary investigations

- Tissue samples were taken from different brain sites or intracranial pathological lesions, and from other organs when relevant findings were present, kept in 10% formalin and sent the next day for histopathological examination.
- Blood samples (about 10–20 ml) were drawn from the femoral vein for toxicology and alcohol investigations, using a mixture of NaF or KF and potassium oxalate.
- A thin layer chromatography (TLC) screening test was used for toxicology; positive results were confirmed, and concentrations determined, with a UV/VIS (180–820 nm) visible spectrophotometer.
- For detection of alcohol, alcohol gas chromatography (AGC) was used.
- A radiological survey of selected cases was done prior to autopsy to reveal foreign bodies, bullets, bone injury, shells and their sites.
- Digital photography was done for interesting and unique injuries and pathologies.

2.2. Statistical analysis

- Data were analyzed using SPSS (Statistical Package for the Social Sciences), version 16, and Microsoft Office Excel 2007.

3. Results

A medico-legal prospective post-mortem study was carried out on 119 cases, representing 11.54% of the total of 1,031 cases referred to the Medico-Legal Institute in Baghdad during the 6-month period from 1 November 2012 to 1 May 2013 (Figure 1). Eighty cases were males (67.23%) and 39 were females (32.77%), ranging in age between 2 months and 81 years, with a mean age of 32.48 ± 17.59 years. The commonest age group affected by the intracranial injuries was >20–30 years, accounting for 21.8%, and the least common was >70 years, accounting for 2.5% of the total number of cases (Table 1).
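The paper's descriptive statistics were produced in SPSS v16 and Excel; as a sanity check, the headline proportions above can be recomputed in a few lines. This is an illustrative re-computation in Python, not the authors' code:

```python
# Recompute the headline proportions reported in the Results section.
# Counts are taken directly from the paper; the script is illustrative only.

total_referred = 1031      # all cases referred to the Medico-Legal Institute
intracranial = 119         # cases with intracranial lesions
males, females = 80, 39    # sex distribution within the 119 cases

def pct(part, whole):
    """Percentage of `part` in `whole`, rounded to two decimal places."""
    return round(100.0 * part / whole, 2)

print(pct(intracranial, total_referred))  # 11.54 -> share of all referrals
print(pct(males, intracranial))           # 67.23 -> male share
print(pct(females, intracranial))         # 32.77 -> female share
print(round(males / females))             # 2 -> male:female ratio of about 2:1
```

The recomputed values match the percentages quoted in the text (11.54%, 67.23%, 32.77%, ratio 2:1).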
Traumatic cases totaled 89 of all cases (74.79%), with a mean age of 32.54 ± 18.67 years, while non-traumatic cases were seen in only 20 cases (16.80%), with a mean age of 32.13 ± 15.66 years (see Table 2).

Table 1 Age groups in ten-year intervals.
Age interval     Frequency    Percent (%)
<10 years        15           12.6
10–20 years      10           8.4
>20–30 years     26           21.8
>30–40 years     23           19.3
>40–50 years     18           15.1
>50–60 years     18           15.1
>60–70 years     6            5.0
>70 years        3            2.5
Total            119          100.0

Male gender was predominant, with 80 cases accounting for 67.2% of the total number, while female gender was seen in 39 cases (32.8%), a male:female ratio of 2:1. In deaths due to trauma, RTA (road traffic accidents) were the commonest category of death, represented in 35 cases, followed by bullet injury, seen in 24 cases; only 1 case of stab wounds was found in non-traumatic death. CVA (cerebrovascular accidents) was the commonest non-traumatic category, represented in 11 cases, followed by meningitis, found in 3 cases (Table 3). The accidental manner of death was predominant in the traumatic mode, seen in 45 cases and accounting for 37.8%. Homicidal deaths were seen in 32 cases, accounting for 26.9%, while suicidal deaths were the least common, seen in 5 cases, accounting for 4.2%. Natural deaths were seen in 20 cases (16.8%) (Figure 2).

Table 2 Age and mode of death.
Mode of death    N      Percent (%)    Mean age    Std. deviation    Std. error
Trauma           89     74.79          32.5485     18.67879          2.01419
Non-traumatic    20     16.80          32.1364     15.66098          3.33893
Unknown          10     8.40           31.0000     12.48110          3.94687
Total            119    100            32.3404     17.59746          1.61998

Table 3 Categories and mode of death groups.
Category of death                      Traumatic    Non-traumatic    Unknown    Total
Fall from height (FFH)                 6            0                0          6
Tumor                                  0            2                0          2
Road traffic accidents (RTA)           35           0                0          35
Bullet injury                          24           0                0          24
Traumatic wounds                       11           0                0          11
Sudden death                           0            2                0          2
Unknown                                0            0                10         10
Meningitis                             0            3                0          3
Bomb explosion                         3            0                0          3
Heavy object fall on the head          2            0                0          2
Railway injury                         2            0                0          2
Cerebrovascular diseases               0            11               0          11
Convulsions                            0            2                0          2
Electric shock and fall from height    2            0                0          2
FFH and RTA                            2            0                0          2
Stab wounds                            2            0                0          2
Total                                  89           20               10         119

Figure 2 Manner of death.

The study showed that intracranial lesions alone were seen in 33 cases (27.73%) and mixed intracranial and extra-cranial lesions in 86 cases (72.26%), while intracranial lesions with extra-cranial lesions accompanied by other body lesions were seen in 57 cases (47.89%) (Table 4) (see Photographs 1–4).

Table 4 Types of cranial lesions and other body lesions.
Type of lesion                                                        No.    Percent (%)
Intracranial lesions only                                             33     27.73
Intracranial lesions and extra-cranial lesions                        86     72.26
Intracranial lesions with extra-cranial lesions with other
body lesions (gastric ulcers, wounds, etc.)                           57     47.89

Photograph 1 A well-localized intra-cerebral hematoma at the left fronto-temporal lobe in a 35-year-old male with a history of hypertension, presenting as sudden death.
Photograph 2 Massive subdural and subarachnoid hemorrhages on both sides of the cerebral hemispheres, mainly at the anterior fronto-temporal lobes and part of the parietal lobes.
Photograph 3 Ruptured brain abscess in the right fronto-temporal lobe with dispersion of suppurative fluid.
Photograph 4 A large hematoma covering the Circle of Willis at the base of the brain.

Neither skull fractures nor any extra-cranial injury were found in 60 cases, accounting for 50.42%. Skull fractures and other extra-cranial injuries were found in 59 cases, accounting for 49.57%. Skull fractures due to head injury were seen in 39 cases (32.77%), followed by bullet injuries in 25 cases (21.01%). Other types of injuries are listed in Table 5. No associated internal organ injury was found in 83 cases, accounting for 69.75%; associated internal organ injuries were seen in 36 cases (30.25%). The spleen and lung were each injured in 20 cases (16.81%), followed by the liver in 15 cases, and only a single injury was seen in each of the diaphragm, uterus, colon and mesenteries (0.84%) (Table 6). Sixty-six cases were admitted to hospital (delayed death), representing about 55.5%, and 53 cases (44.5%) died instantaneously. Complications were seen in 15 cases (those admitted to hospital), 12.61% of the total number of cases. Pneumonia was the commonest complication in delayed death, seen in 6 cases (40%) (Table 7). Subarachnoid hemorrhage was the commonest finding during autopsy, found in 51 cases, accounting for 42.86% of the total number of cases. Subdural hemorrhages were found in 41 cases, accounting for 34.45% (Table 8). No brain pathology was seen in 45 cases (37.81%). Lacerations were the commonest brain findings, present in 25 cases (21.00%), while edema was seen in 18 cases (15.13%) (Table 9). Alcohol was detected in 7 cases (5.88%); only 1 of these was a non-traumatic case, while the other 6 positive cases were traumatic. All toxicological laboratory tests were negative for drugs, including benzodiazepines, barbiturates and antiepileptic drugs, although many victims might have been under their effect while driving.

4. Discussion

Intracranial injuries were found in 11.54% of all cases. This result disagreed with a study by Holbourn in England, carried out over a period of 2 years in the neurosurgery department of the Hospital of London, in which intracranial injuries accounted for 15% of all cases.
2 This difference could be attributed to the fact that Holbourn's study included mixed medico-legal and non-medico-legal deaths during surgery and hospital admission, while the current study considered only cases seen at autopsy. The study coincides with a study in Austria with nearly similar results, in which intracranial lesions represented 12% of all deaths. 10

Table 5 Types of cranial and extra-cranial injuries.
Type of injury                             No.    Percent (%)
No injury                                  60     50.42
Skull fractures                            39     32.77
Bullet injuries                            25     21.01
Traumatic wounds                           14     11.76
Abrasions                                  14     11.76
Contusions                                 13     10.92
Bruises                                    3      2.52
Cut wounds                                 2      1.68
Shells                                     2      1.68
Recent surgical wounds                     2      1.68
Tongue bites                               2      1.68
Infected surgical wounds                   1      0.84
Old scar of a neurosurgery in the skull    1      0.84
Recent neurosurgery scar                   1      0.84
Petechial hemorrhages                      1      0.84
Conjunctival bleeding                      1      0.84
Electric burns                             2      1.68
Dislocations                               1      0.84
Wound abscess                              1      0.84

Table 6 Associated internal organ injuries.
Organ          No.    Percent (%)
No injury      83     69.75
Spleen         20     16.81
Lung           20     16.81
Liver          15     12.61
Heart          6      5.04
Kidney         4      3.36
Stomach        3      2.52
Aorta          2      1.68
Pancreas       2      1.68
Small bowel    2      1.68
Bladder        2      1.68
Diaphragm      1      0.84
Uterus         1      0.84
Colon          1      0.84
Mesenteries    1      0.84

Table 7 Presence and type of complications.
Complications                No.    Percent (%)
Pneumonia                    6      40
Myocardial infarction        3      20
Gastric ulcer                3      20
Gastroenteritis              1      6.66
Pancreatitis                 1      6.66
Atrophied internal organs    1      6.66
Total                        15     100

The mean age of the victims was close to that of a previous study done in 2003. 11 The commonest age group affected by the intracranial lesions was >20–30 years and the least affected was >70 years. These results were in agreement with a study by Ommaya, 12 but disagreed with a study in Japan which found that the commonest age group affected by intracranial lesions was >40–60 years and the least affected was <20 years.
13 This is because the Japanese study considered only natural or pure intracranial lesions, with 20-year age intervals. The majority of cases were traumatic injuries, owing to the high number of incidents occurring daily that were referred for autopsy, while most of the non-traumatic cases were simply given a death certificate. This result is similar to previous studies. 7,14,15 In males, both traumatic and non-traumatic modes of death were more common than in females, with no significant correlation between gender and mode of death. This result was in agreement with a retrospective study of traumatic deaths in Brazil. 16 The accidental manner was predominant in the traumatic mode of death, owing to the high number of fatal road traffic injuries occurring daily, followed by the homicidal and then the suicidal manner, while natural deaths were seen in a minority of cases. These results were similar to those found by other researchers. 17–19 The majority of cases had mixed intracranial and extra-cranial lesions, because most were traumatic in nature. Within the intracranial group, subarachnoid hemorrhage was followed by intra-cerebral hemorrhage; these two causes of death were seen in both traumatic and non-traumatic modes of death, coinciding with previous studies. 20,21 Almost half of the cases involved neither skull fractures nor extra-cranial or other external injuries, while skull fractures followed by bullet injuries were the commonest findings among the other half. Associated internal organ injuries were seen in less than a third of the cases, as this study focused on intracranial injuries. Pneumonia was the commonest complication in delayed death. This could be attributed to the longer duration of admission, with dependence on parenteral feeding due to the patient being unconscious, in addition to contamination, making the victims more liable to pneumonia.
Subarachnoid hemorrhage was the commonest finding at autopsy, followed by subdural hemorrhage. This is expected, as subarachnoid hemorrhage can be a cause of death in both traumatic and non-traumatic incidents, while subdural hemorrhage is only traumatic in nature. Brain laceration is a common finding in death due to head injury, reflecting the severity of trauma, while edema, which comes next, can be natural or traumatic in origin. 22,23 Almost half of the victims died instantaneously, a result similar to a previous finding. 24 This reflects the seriousness and severity of injuries in this country, which resulted in a higher rate of instantaneous death, and the delay in transporting injured patients to hospital. On the other hand, more than half died later in hospital due to complications such as pneumonia and delayed bleeding from rupture of a cerebral artery, which may happen after a variable period of time following trauma due to injury to the arterial wall and formation of a post-traumatic aneurysm. 25 There was a limited, minor role for alcohol in the current study, as in a previous study conducted in Saudi Arabia. 26 This is because of the religious and cultural background of the Islamic countries.

Table 8 Types of meningeal pathologies.
Meningeal finding                                          No.    Percent (%)
Subarachnoid hemorrhage (traumatic and non-traumatic)      51     42.86
Subdural hemorrhage                                        41     34.45
Tears                                                      31     26.05
No finding                                                 18     15.13
Epidural hemorrhage                                        8      6.72
Meningitis                                                 5      4.20
Thickened dura with fibrosis and foci of bleeding          5      4.20
Petechial hemorrhages in the dura mater                    3      2.52
Tumor deposits on the dura mater                           1      0.84

Table 9 Types of brain pathology.
Brain finding                                       No.    Percent (%)
No finding                                          45     37.81
Lacerations                                         25     21.00
Malformations                                       3      2.52
Edema                                               18     15.13
Intraparenchymal bleeding                           16     13.45
Abscess                                             7      5.88
Hemorrhages in the gray matter                      3      2.52
Hemorrhages in the white matter                     6      5.04
Intraventricular bleeding                           3      2.52
Herniation                                          3      2.52
Tumors                                              2      1.68
Cysts                                               1      0.84
Contusions                                          1      0.84
Dilatation of the ventricles                        1      0.84
Atherosclerosis (basilar and vertebral arteries)    1      0.84

5. Conclusions

Pure intracranial causes of death were recorded in a minority of the cases studied. Young male adults were mostly affected, and traumatic causes of death were commonly seen in both sexes. The accidental manner was mostly reported in traumatic death. Subarachnoid hemorrhage was the commonest pathology seen, while lacerations were the commonest brain and brainstem lesions.

Funding
None.

Conflict of interest
None declared.

Informed consent
Information about each victim was gained from police reports, close relatives of the victim, eyewitnesses, and medical reports for those who were admitted to hospital prior to death.

Ethical approval
Necessary ethical approval was obtained from the institute ethics committee.

References
1. Ommaya AK, Grubb RL Jr, Naumann RA. Coup and contre-coup injury: observations on the mechanics of visible brain injuries in the rhesus monkey. J Neurosurg 1971;35:503–516.
2. Di Maio VI, Di Maio D. Forensic pathology. 2nd ed. Boca Raton, FL: CRC Press; 2001, chapter 6.
3. Leetsma JE. Impact injuries to the brain and head. In: Forensic Neuropathology. New York: Raven Press; 1988, p. 184–253.
4. Cheung PS, Lam JM, Yeung JH, et al. Outcome of traumatic extradural hematoma in Hong Kong. Injury 2007;38:76–80.
5. Generalli TA. Head injury in man and experimental animals: clinical aspects. Acta Neurochir Suppl (Wien) 1983;32:1–13.
6. Yilmazlar S, Kocaeli H, Dogan S, et al. Traumatic epidural hematomas of non-arterial origin: analysis of 30 consecutive cases. Acta Neurochir (Wien) 2005;147:1241–8, discussion 1248.
7.
Graham DI, Smith C, Reichard R, Leclercq PD, Gentleman SM. Trials and tribulations of using β-amyloid precursor protein immunohistochemistry to evaluate traumatic brain injury in adults. Forensic Sci Int 2004;146:89–96.
8. Graham DI, Nicole JAR, Bone I, editors. Adams and Graham's introduction to neuropathology. 3rd ed. London: Hodder Arnold; 2006, chapter 2.
9. Harukuni I, Bhardwaj A. Mechanisms of brain injury after global brain ischemia. Neurol Clin 2006;24:1–21.
10. Unterberg AW, Stover J, Kress B, et al. Edema and brain trauma. Neuroscience 2004;129:1021–9.
11. d'Avella D, Servadei F, Scerrati M, et al. Traumatic acute subdural hematomas of the posterior fossa: clinicoradiologic analysis of 24 patients. Acta Neurochir (Wien) 2003;145:1037–44.
12. Chute DJ, Smialek JE. Pseudo-subarachnoid hemorrhage of the head diagnosed by computerized axial tomography: a post-mortem study of ten medical examiner cases. J Forensic Sci 2002;47:360–5.
13. Katayama Y, Kawamata T. Edema fluid accumulation within necrotic brain tissue as a cause of the mass effect of cerebral contusion in head trauma patients. Acta Neurochir Suppl 2003;86:323–7.
14. Oehmichen M, Walter T, Meissner C, Friedrich HJ. Time course of cortical hemorrhage after closed traumatic brain injury: statistical analysis of posttraumatic histomorphological alterations. J Neurotrauma 2003;20:87–103.
15. Amarenco P, Hauw JJ. Cerebellar infarction in the territory of the superior cerebellar artery: a clinicopathologic study of 33 cases. Neurology 1990;40:1383–90.
16. Gilbert M, Prieto A, Allut AG.
Acute bilateral extradural hematoma of the posterior cranial fossa. Br J Neurosurg 1997;11:573–5.
17. Walczak T, Leppik I, D'Amelio M, et al. Incidence and risk factors in sudden unexpected death in epilepsy: a prospective cohort study. Neurology 2001;56:519–25.
18. Luke J, Helpern M. Sudden unexpected death from natural causes in adults. A review of 275 consecutive autopsied cases. Arch Pathol 1968;85(10–17):28.
19. Gray JT, Puetz SM, Jackson SL, et al. Traumatic subarachnoid hemorrhage: a 10-year case study and review. Forensic Sci Int 1999;105:13–23.
20. Black M, Graham DI. Sudden unexplained death in adults resulting from intracranial pathology. Clin Pathol 1997;18:384–90.
21. Jaster J, Zamesnik J, Bartos A, et al. Unexpected sudden death caused by medullary brain lesions involves all age groups and may include "sudden infant death syndrome" as a subset. Acta Neuropathol (Berl) 2005;109:552–3.
22. Gussamo SN, Pitella JE. Extradural hematoma and diffuse axonal injury in victims of fatal road traffic accidents. Br J Neurosurg 1998;12:123–6.
23. Marmarou A, Fatouros PP, Barzo P, et al. Contribution of edema and cerebral blood volume to traumatic brain swelling in head-injured patients. J Neurosurg 2000;93:183–93.
24. Wiebers DO, Piepgras DG, Meyer FB, et al. Pathogenesis, natural history and treatment of unruptured intracranial aneurysms. Mayo Clin Proc 2004;79:1572–83.
25. Argo A, Bono G, Zerbo S, Triolo V, Liota R, Procaccianti P. Post-traumatic lethal carotid-cavernous fistula. J Forensic Legal Med 2008;15(4):266–8.
26. Al Sarraj S, Mohamed S, Kibble M. Subdural hematoma (SDH): assessment of macrophage reactivity within the dura mater and underlying hematoma. Clin Neuropathol 2004;23(2):62–75.
Insertion of a second nasal pack as a prognostic indicator of
emergency theatre requirement in epistaxis patients Abstracts / International Journal of Surgery 10 (2012) S1–S52, S36 ABSTRACTS Conclusions: In our study, thyroplasty as a method for vocal cord medialisation led to improved voice quality post-operatively and to good patient satisfaction. 0363: INSERTION OF A SECOND NASAL PACK AS A PROGNOSTIC INDICATOR OF EMERGENCY THEATRE REQUIREMENT IN EPISTAXIS PATIENTS Edward Ridyard 1, Vinay Varadarajan 2, Indu Mitra 3. 1 University of Manchester, Manchester, UK; 2 North West Higher Surgical Training Scheme, North West, UK; 3 Manchester Royal Infirmary, Manchester, UK Aim: To quantify the significance of second nasal pack insertion in epistaxis patients as a measure of requirement for theatre. Method: A one-year retrospective analysis of 100 patient notes was undertaken. After application of exclusion criteria (patients treated as outpatients, inappropriate documentation and patients transferred from peripheral hospitals), a total of n=34 patients were included. Of the many variables measured, specific credence was given to the requirement for second packing and the requirement for definitive management in theatre. Results: Of all patients, 88.5% required packing. A further 25% (7/28) of this group had a second pack for cessation of recalcitrant haemorrhage. Of the second-pack group, 85.7% (6/7) ultimately required definitive management in theatre. A one-sample t-test showed a statistically significant association between insertion of a second nasal pack and requirement for theatre (p<0.001). Conclusions: Indications for surgical management of epistaxis vary from hospital to hospital. The results of this study show that insertion of a second pack is a very good indicator of requirement for definitive management in theatre. 0365: MANAGEMENT OF LARYNGEAL CANCERS: GRAMPIAN EXPERIENCE Therese Karlsson 3, Muhammad Shakeel 1, Peter Steele 1, Kim Wong Ah-See 1, Akhtar Hussain 1, David Hurman 2.
1 Department of Otolaryngology-Head and Neck Surgery, Aberdeen Royal Infirmary, Aberdeen, UK; 2 Department of Oncology, Aberdeen Royal Infirmary, Aberdeen, UK; 3 University of Aberdeen, Aberdeen, UK Aims: To determine the efficacy of our management protocol for laryngeal cancer and compare it to the published literature. Method: Retrospective study of a prospectively maintained departmental oncology database over 10 years (1998-2008). Data collected included demographics, clinical presentation, investigations, management, surveillance, loco-regional control and disease-free survival. Results: A total of 225 patients were identified; 183 were male (82%) and 42 female (18%). The average age was 67 years. There were 81 (36%) patients with Stage I disease, 54 (24%) with Stage II, 30 (13%) with Stage III and 60 (27%) with Stage IV disease. Of the 225 patients, 130 (96%) of those with Stage I and II carcinomas were treated with radiotherapy (55Gy in 20 fractions). Patients with Stage III and IV carcinomas received combined treatment. Overall three-year survival for Stage I, II, III and IV was 91%, 65%, 63% and 45% respectively. Corresponding recurrence rates were 3%, 17%, 17% and 7%; 13 patients required a salvage total laryngectomy due to recurrent disease. Conclusion: The vast majority of our laryngeal cancer population is male (82%) and smokers. Primary radiotherapy provides comparable loco-regional control and survival for early-stage disease (I & II). Advanced-stage disease is also equally well controlled with multimodal treatment. 0366: RATES OF RHINOPLASTY PERFORMED WITHIN THE NHS IN ENGLAND AND WALES: A 10-YEAR RETROSPECTIVE ANALYSIS Luke Stroman, Robert McLeod, David Owens, Steven Backhouse. University of Cardiff, Wales, UK Aim: To determine whether financial restraint and national health cutbacks have affected the number of rhinoplasty operations done within the NHS both in England and in Wales, looking at varying demographics.
Method: Retrospective study of the incidence of rhinoplasty in Wales and England from 1999 to 2009, using OPCS4 codes E025 and E026 and the electronic health databases of England (HesOnline) and Wales (PEDW). Extracted data were explored for total numbers and for variation with respect to age and gender in both nations. Results: 20,222 and 1,376 rhinoplasties were undertaken over the 10-year study period in England and Wales, respectively. A statistically significant gender bias was seen in uptake of rhinoplasty, with women more likely to undergo the surgery in both national cohorts (Wales, p<0.001; England, p<0.001). Linear regression analysis suggests a statistically significant drop in the numbers undergoing rhinoplasty in England (p<0.001) but not in Wales (p>0.05). Conclusion: Rhinoplasty is a common operation in both England and Wales. Current economic constraints, combined with differences in funding and corporate ethos between the two sister NHS organisations, have led to a statistically significant reduction in the numbers undergoing rhinoplasty in England but not in Wales. 0427: PATIENTS' PREFERENCES FOR HOW PRE-OPERATIVE PATIENT INFORMATION SHOULD BE DELIVERED Jonathan Bird, Venkat Reddy, Warren Bennett, Stuart Burrows. Royal Devon and Exeter Hospital, Exeter, Devon, UK Aim: To establish patients' preferences for preoperative patient information and their thoughts on the role of the internet. Method: Adult patients undergoing elective ENT surgery were invited to take part in this survey on the day of surgery. Participants completed a questionnaire recording patient demographics, operation type, quality of the information leaflet they had received, access to the internet and whether they would be satisfied accessing pre-operative information online. Results: Respondents consisted of 52 males and 48 females. 16% were satisfied to receive the information online only, 24% wanted a hard copy only and 60% wanted both.
Younger patients were more likely to want online information, in stark contrast to elderly patients, who preferred a hard copy. Patients aged 50-80 years would be most satisfied with paper and internet information, as they were able to pass on the web link to friends and family who wanted to know more. 37% of people were using the internet to further research information on their condition/operation. However, these people wanted guidance on reliable online sources to use. Conclusions: ENT surgeons should be alert to the appetite for online information and identify reliable links to share with patients. 0510: ENHANCING COMMUNICATION BETWEEN DOCTORS USING DIGITAL PHOTOGRAPHY. A PILOT STUDY AND SYSTEMATIC REVIEW Hemanshoo Thakkar, Vikram Dhar, Tony Jacob. Lewisham Hospital NHS Trust, London, UK Aim: The European Working Time Directive has resulted in the practice of non-resident on-calls for senior surgeons across most specialties. Consequently, the majority of communication in the out-of-hours setting takes place over the telephone, placing a greater emphasis on verbal communication. We hypothesised this could be improved with the use of digital images. Method: A pilot study involving a junior doctor and senior ENT surgeons. Several clinical scenarios were discussed over the telephone, each complemented by an image; the junior doctor was blinded to this. A questionnaire was completed which assessed the confidence of the surgeon in the diagnosis and management of the patient. A literature search was conducted using PubMed and the Cochrane Library. Keywords used: "mobile phone", "photography", "communication" and "medico-legal". Results & Conclusions: In all the discussed cases, the use of images either maintained or enhanced the degree of the surgeon's confidence. The use of mobile-phone photography as a means of communication is widespread; however, its medico-legal implications are often not considered.
Our pilot study shows that such means of communication can enhance patient care. We feel that a secure means of data transfer, safeguarded by law, should be explored as a way of implementing this into routine practice. 0533: THE ENT EMERGENCY CLINIC AT THE ROYAL NATIONAL THROAT, NOSE AND EAR HOSPITAL, LONDON: COMPLETED AUDIT CYCLE Ashwin Algudkar, Gemma Pilgrim. Royal National Throat, Nose and Ear Hospital, London, UK Aims: Identify the type and number of patients seen in the ENT emergency clinic at the Royal National Throat, Nose and Ear Hospital, implement changes to improve the appropriateness of consultations and management, and then close the audit. Also set up GP correspondence. Method: First cycle data was collected retrospectively over 2 weeks. Information was captured on patient volume, referral source, consultation work_2dmzu5hq5zbuhozljssskzxaka ---- untitled Vol. 84, No. 1, 2007 67 NOTE A Device for the Preparation of Cereal Endosperm Bricks Craig F. Morris,1,2 Kameron Pecka,1 and Arthur D. Bettge1 Cereal Chem. 84(1):67–69 The study of cereal chemistry has been advanced by the analysis of the material properties of cereal endosperm. In particular, the study of wheat (Triticum sp. L.) endosperm hardness (kernel texture) has been a topic of intense interest for the past ≈100 years (Pomeranz and Williams 1990; Morris 2002).
One of the first recorded devices for measuring wheat kernel texture was that of Roberts (1910), which determined the force required to crush individual kernels. There are two reasons why the assessment of kernel texture in wheat, in particular, is of great interest: 1) wheat exhibits distinct classes of kernel texture, namely soft, hard, and durum; and 2) the differences in kernel texture among these classes have a profound effect on flour milling, starch damage, particle size distribution, water absorption, and end-use quality (Morris and Rose 1996; Morris 2002). As we will describe, our objective was to produce a geometrically defined subsample of the endosperm as opposed to studying the whole kernel. The measurement of wheat kernel texture has been largely empirical because kernels and endosperm are difficult to work with, due largely to their diminutive size. Current methods (AACC International 2000) such as near-infrared reflectance (Approved Method 39-70A), particle size index (Approved Method 55-30), and the Single Kernel Characterization System (Approved Method 55-31) provide measurements in arbitrary unitless proportions or scales. Generally, to obtain objective measures of material properties in universal units of force, work, etc., complex geometries must be simplified, and in the case of wheat, the bran, germ, and pigment strand should also be eliminated. A few researchers have been successful in achieving this goal. Glenn of the USDA developed a method of "turning" endosperm cylinders of defined geometry on a lathe, and then subjecting them to testing in both compression and tension (Glenn et al 1991; Jolly 1991; Jolly et al 1996; Delwiche 2000; Osborne et al 2001; Dobraszczyk et al 2002). This same technique was studied in the senior author's lab and found to be exceedingly tedious. The centering of the kernel "cheek" on the lathe stub was particularly problematic.
Haddad, Abecassis and co-workers (Haddad et al 1998, 1999, 2001; Samson et al 2005) eliminated the problems associated with "turning" cylinders by preparing, through sanding, rectangular parallelepipedal test samples. Here we term such specimens "bricks". We have devised and constructed a similar device that eliminates the need for adhesive paper and glue to hold specimens during their preparation (see Haddad et al 1998). Furthermore, one of the processing steps in the technique described by Haddad et al (1998) as "Two machined half-grains are placed between two sheets, and a wedge is set at height l of the extremities of the sheets" has been eliminated. We have used the device described here to prepare hundreds of bricks of various wheat (Triticum aestivum and T. turgidum var. durum) cultivars, including vitreous and nonvitreous (mealy) kernels selected from individual grain lots. These bricks are amenable to material property analysis using common instrumentation such as the TA-XT2i in compression mode. Our experience to date indicates a very low rate (on the order of ≈1–2%) of "aberrant" failures, which could be ascribed to cracks or other "defects", among the bricks so prepared and tested. Description of the Device The kernel sander comprises two parts, the base (lower) and sander (upper) components (Figs. 1 and 2, respectively). The salient features of the base include 1) a series of channels of various dimensions to accommodate and hold the specimens during their sequential machining, and 2) side rails that support and guide the sander. Here the machining process involves the gradual removal of kernel material through the use of very fine sandpaper (silicon carbide ANSI Grade 320) ('413Q 320 Wetordry Tri-M-ite', 'A' weight paper, 3M Corp., St. Paul, MN). The sandpaper is held in place by clamps to the bottom surface of the sander.
Although not deemed absolutely necessary, the leading and trailing edges of the bottom surface of the sander were beveled, reducing the sander thickness by 1.2 mm over a 13-mm slope from each end toward the center. The device was made from aluminum '6061' and common sizes and types of bolts; for example, the side rails are attached to the bottom part with #10-32 Allen head (hexagonal socket) cap screws. For wheat kernels, the dimensions of the channels in the base were determined empirically and could be modified for different cereal grains. Figure 3 shows the channels and their dimensions in detail. Endosperm Bricks The wheat kernel was first placed crease-side-up (Fig. 3, "X") and split in two with a razor blade or scalpel. We found this preferable (and safer) to trying to hold the kernel with fingers. Each kernel half (cheek) was then placed in "C", where one smooth side was produced by sanding. In all sanding operations, no more weight than that of the upper component itself (≈735 g) was applied to the specimen; often less, as some of the weight of the upper component was partially supported by hand. The specimen was then flipped over and the other side was sanded smooth until the sander was supported by the side rails. Channels "A" and "B" provided additional thickness options. Once the half kernel had been reduced to a specimen with two parallel faces, the specimen was placed in "I", sanded down until the side rails supported the sander, and then placed in "H" and again sanded until the rails supported the sander. "D" and "E" were designed to provide equivalent specimen preparation in concert with "B"; and "F" and "G" in concert with "A" (Fig. 3). After machining, the ends were trimmed using a razor blade or scalpel. Generally, machining each face required fewer than a dozen passes with the upper component. Figure 4 shows endosperm bricks prepared using the kernel sander device.
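Later in the note, brick dimensions are read from calibrated digital photographs (pixels per unit length against a 2-mm reference scale) and compression data are converted to stress, strain, and Young's modulus. The arithmetic can be sketched in a few lines; every input below (pixel counts, force, displacement) is a hypothetical illustration value, not measured data:

```python
# Sketch of the dimension-calibration and modulus arithmetic described in the
# text. All numbers below are hypothetical illustration values.

# Calibration: a 2-mm reference scale spans 500 pixels in the photograph.
px_per_mm = 500 / 2.0                    # 250 px/mm

# Brick edges measured in pixels from the calibrated images.
width_mm  = 520 / px_per_mm              # 2.08 mm
depth_mm  = 265 / px_per_mm              # 1.06 mm
height_mm = 190 / px_per_mm              # 0.76 mm

# Converting compression data to material-property units
# (hypothetical peak force and displacement from a compression test).
force_N = 5.0
dL_mm   = 0.02

area_mm2   = width_mm * depth_mm         # loaded cross-section
stress_MPa = force_N / area_mm2          # N/mm^2 equals MPa
strain     = dL_mm / height_mm           # dimensionless
E_MPa      = stress_MPa / strain         # Young's modulus, linear-elastic assumption

print(f"brick: {width_mm:.2f} x {depth_mm:.2f} x {height_mm:.2f} mm, E = {E_MPa:.1f} MPa")
```

The pixel counts were chosen so the recovered dimensions match the ≈2.08 by 1.06 by 0.76 mm bricks reported in the note; with real data, the pixel measurements and the force and displacement values would come from the images and the texture analyzer.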
Specimens were sputter-coated with gold to 300 Å thickness and imaged in a scanning electron microscope (Hitachi S-570, San Jose, CA) at 20 kV and a 15-mm working distance. 1 USDA-ARS Western Wheat Quality Laboratory, Washington State University, Pullman, WA 99164-6394. Names are necessary to report factually on available data; however, the USDA neither guarantees nor warrants the standard of the product, and the use of the name by the USDA implies no approval of the product to the exclusion of others that may also be suitable. 2 Corresponding author. Phone: +1.509.335.4062. Fax: +1.509.335.8573. E-mail: morrisc@wsu.edu. DOI: 10.1094/CCHEM-84-1-0067. This article is in the public domain and not copyrightable. It may be freely reprinted with customary crediting of the source. AACC International, Inc., 2007. Figure 4A and B shows that the kernel sander is effective in producing bricks of uniform dimensions, flat faces, and square corners from soft and hard hexaploid wheat kernels, respectively. Figure 4C provides an example of a brick prepared from the vitreous portion of a kernel of a commercial yellow dent maize hybrid. Because of the limitations of the size of wheat kernels and the design of the kernel sander device, the bricks are ≈2.08 by 1.06 by 0.76 mm in size. Clearly, if larger cereal grains were the focus of material research, the base channels could be modified accordingly. Similarly, machining tolerances need only meet the practical needs of brick preparation. As opposed to attempting to measure the dimensions of the bricks directly with calipers (micrometers) (Haddad et al 1998), we routinely used digital photography, where the specimen size was determined using a calibrated image of a 2-mm scale on an NIST stage micrometer microscope slide (AT#12-561-SM3, Fisher Scientific, Hampton, NJ) as a reference to ascertain the number of pixels per unit length in the digital images of the brick.
An accurate measure of brick dimensions is important to the conversion of compression data to common material property units such as stress, strain, and Young's elastic modulus. Fig. 1. Schematic of the base of a kernel sander device for preparing rectangular parallelepipedal cereal endosperm test samples ("bricks"). Dimensions of each side rail (A) are ≈12.8 mm × 25.6 mm × 360 mm; the central portion of the base with channels is ≈101.4 mm × 152.5 mm × 18.7 mm. Fig. 2. Schematic of the sander (upper component) of a kernel sander device for preparing cereal endosperm bricks. Outer dimensions are ≈114.8 mm × 127.1 mm × 18.9 mm. Clamps for holding sandpaper with bolts are shown (A). The arrow points to the leading lower edge, which can be optionally beveled; the beveled area is shaded gray and is very thin in the image. Fig. 3. Schematic of the base of a kernel sander device showing channel detail. Channel dimensions are (depth by width): A, 0.58 × 11.0 mm; B, 0.63 × 11.0 mm; C, 0.68 × 11.0 mm; D, 1.0 × 0.72 mm; E, 1.4 × 0.72 mm; F, 1.0 × 0.77 mm; G, 1.4 × 0.77 mm; H, 1.0 × 0.82 mm; I, 1.4 × 0.82 mm; X is a half-cylinder of 6-mm diameter. Fig. 4. Scanning electron microscope images of endosperm "bricks" prepared using the kernel sander device from (A) a soft wheat kernel, (B) a hard wheat kernel, and (C) the vitreous portion of a kernel of a commercial yellow dent maize hybrid. Bars = 0.75 mm. ACKNOWLEDGMENTS We wish to thank George Henry and Lauren Frei of the Technical Services Instrument Shop; and Christine Davitt, Franceschi Microscopy & Imaging Center, Washington State University, Pullman, WA. The assistance of Stacey Sykes is gratefully acknowledged. LITERATURE CITED AACC International. 2000. Approved Methods of the American Association of Cereal Chemists, 10th Ed. The Association: St. Paul, MN. Delwiche, S. R. 2000. Wheat endosperm compressive strength properties as affected by moisture. Trans. ASAE 43:365-373. Dobraszczyk, B. J., Whitworth, M.
B., Vincent, J. F. V., and Khan, A. A. 2002. Single kernel wheat hardness and fracture properties in relation to density and the modelling of fracture in wheat endosperm. J. Cereal Sci. 35:245-263. Glenn, G. M., Younce, F. L., and Pitts, M. J. 1991. Fundamental physical properties characterizing the hardness of wheat endosperm. J. Cereal Sci. 13:179-194. Haddad, Y., Benet, J. C., and Abecassis, J. 1998. A rapid general method for appraising the rheological properties of the starchy endosperm of cereal grains. Cereal Chem. 75:673-676. Haddad, Y., Mabille, F., Mermet, A., Abecassis, J., and Benet, J. C. 1999. Rheological properties of wheat endosperm with a view on grinding behaviour. Powder Technol. 105:89-94. Haddad, Y., Benet, J. C., Delenne, J. Y., Mermet, A., and Abecassis, J. 2001. Rheological behaviour of wheat endosperm—Proposal for classification based on the rheological characteristics of endosperm test samples. J. Cereal Sci. 34:105-113. Jolly, C. 1991. The biochemistry and molecular genetics of grain softness and hardness in wheat, Triticum aestivum. PhD dissertation, Macquarie University: Sydney, NSW, Australia. Jolly, C. J., Glenn, G. M., and Rahman, S. 1996. GSP-1 genes are linked to the grain hardness locus (Ha) on wheat chromosome 5D. Proc. Natl. Acad. Sci. 93:2408-2413. Morris, C. F. 2002. Puroindolines: The molecular genetic basis of wheat grain hardness. Plant Mol. Biol. 48:633-647. Morris, C. F., and Rose, S. P. 1996. Wheat. Pages 3-54 in: Cereal Grain Quality. R. J. Henry and P. S. Kettlewell, eds. Chapman & Hall: London. Osborne, B. G., Jackson, R., and Delwiche, S. R. 2001. Note: Rapid prediction of wheat endosperm compressive strength properties using the single-kernel characterization system. Cereal Chem. 78:142-143. Pomeranz, Y., and Williams, P. C. 1990. Wheat hardness: Its genetic, structural, and biochemical background, measurement, and significance. Pages 471-548 in: Advances in Cereal Science and Technology, Vol. X. Y. Pomeranz, ed.
AACC International: St. Paul, MN. Roberts, H. F. 1910. A quantitative method for the determination of hardness in wheat. Pages 371-390 in: Experiment Station Bulletin 167. Kansas State Agricultural College: Manhattan, KS. Samson, M.-F., Mabille, F., Chéret, R., Abecassis, J., and Morel, M.-H. 2005. Mechanical and physicochemical characterization of vitreous and mealy durum wheat endosperm. Cereal Chem. 82:81-87. [Received August 25, 2006. Accepted October 19, 2006.] work_2h64td2zp5fvjk5xt2dg4pic6a ---- Digital Food Photography: Dietary Surveillance and Beyond 2211-601X © 2013 The Authors. Published by Elsevier Ltd. Selection and peer-review under responsibility of the National Nutrient Databank Conference Steering Committee. doi: 10.1016/j.profoo.2013.04.019. Procedia Food Science 2 (2013) 122-128. 36th National Nutrient Databank Conference. Digital Food Photography: dietary surveillance and beyond. Noemi G Islam a*, Hafza Dadabhoy a, Adam Gillum a, Janice Baranowski a, Thea Zimmerman b, Amy F Subar c, Tom Baranowski a. a Baylor College of Medicine, Houston, TX 77030, USA; b Westat, Rockville, MD 20850, USA; c National Cancer Institute, Bethesda, MD 20892, USA. Abstract The method used for creating a database of approximately 20,000 digital images of multiple portion sizes of foods linked to the Food and Nutrition Database for Dietary Studies (FNDDS) is presented. The creation of this database began in 2002 and its development has spanned 10 years. Initially the images were intended to be used as a kid-friendly aid for estimating portion size in the context of a computerized 24-hour dietary recall for 8-15 year old children. In 2006, Baylor College of Medicine, Westat, and the National Cancer Institute initiated a collaboration that resulted in the expansion of this image database in preparation for the release of the web-based Automated Self-Administered 24-Hour Dietary Recall (ASA24) for adults (now also available for children as ASA24-Kids).
Researchers in the US and overseas have capitalized on these digital images for purposes including, but not limited to, dietary assessment. Keywords: digital images; portion size assessment; dietary surveillance; technology aided dietary assessment. 1. Introduction The Food Intake Recording Software System (FIRSSt) [1] was originally developed as a computer-based, self-assessment tool to collect dietary information on the number of servings of fruit and vegetables among school-age children. The success of this program encouraged our team of nutritionists to expand the tool, linking it to the nutrient database and incorporating one of the top eleven technologies of the previous decade: digital photography [2]. The idea was that the incorporation of digitally produced portion size images could improve portion size estimation. (* Corresponding author. Tel.: +0-713-798-7037; fax: +0-713-798-7130. E-mail address: nislam@bcm.edu. Open access under CC BY-NC-ND license, http://creativecommons.org/licenses/by-nc-nd/4.0/.) However, to produce a large number of standardized digital images with minimal effort, a standardized tool was needed, and the Food Photography System (FPS) was such a tool. The FPS made it possible to produce approximately 4000 digital images representing the food types, forms and range of portion sizes frequently reported by children in the Continuing Survey of Food Intake by Individuals 1994-1996-1998 (CSFII 94-96, 98).
In February 2006, the Exposure Biology Program, one of the components of the Genes, Environment and Health Initiative (GEI), was launched, thus providing the first funding opportunity to move the food photography project forward. However, it was the partnership with the National Cancer Institute and Westat that permitted the expansion of the digital image database to accommodate the requirements of a self-administered tool for both adults and kids, the web-based Automated Self-Administered 24-Hour Dietary Recall (ASA24) and ASA24-Kids [3]. 2. Methods 2.1. Food Photography System (FPS) The main goal of the FPS was to produce high quality, high resolution digital images with minimal technical expertise and a limited budget. To achieve this goal, the system had to meet the following requirements: produce properly exposed images with sharp detail; allow easy capture of aerial and angled (42º) exposures of the foods; be simple to operate, requiring minimal manipulation of the cameras with only sporadic calibrations by a professional photographer; allow control of the exposures; and allow the storage and transfer of images to and from dedicated PCs without manipulation of the secure digital (SD) card. 2.2. System design and components A front view of the main components of the FPS is shown in Figure 1, and the two angles of exposure are illustrated in Figure 2. Of the two cameras shown in Figure 2, the one at the top captures aerial views; it is placed at a distance of 86.36 cm (measured from the focal point to the film plate) and is set at an angle of 5º from the vertical plane to eliminate reflections of the camera on the images. The second camera, placed on the left side, captures angled views at a distance of 91.44 cm (measured from the focal point to the film plate) and is set at an angle of 42º above the horizontal, considered to be the average angle of viewing for a subject seated at a dining table [4].
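The quoted distances and angles fully determine where each camera sits relative to the specimen. As a quick trigonometric check (a sketch; the only inputs are the figures given in the text):

```python
import math

# Offsets implied by the stated camera distances and angles (a check, not new data).

# Side camera: 91.44 cm from the focal point, 42 degrees above horizontal.
d_side = 91.44
a_side = math.radians(42)
side_height = d_side * math.sin(a_side)   # vertical rise above the specimen plane
side_reach  = d_side * math.cos(a_side)   # horizontal stand-off from the specimen

# Top camera: 86.36 cm away, tilted 5 degrees off vertical to avoid reflections.
d_top = 86.36
a_top = math.radians(5)
top_shift = d_top * math.sin(a_top)       # small lateral shift caused by the tilt

print(f"side camera: {side_height:.1f} cm up, {side_reach:.1f} cm out")
print(f"top camera lateral shift: {top_shift:.1f} cm")
```

So the angled camera sits roughly 61 cm above and 68 cm out from the place setting, and the 5º tilt of the top camera moves it only about 7.5 cm off the vertical axis, enough to keep its reflection out of the frame.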
Fig. 1. FPS system components. Fig. 2. Side view of the FPS showing camera angles. 2.3. Lighting and background All foods are photographed in a light booth constructed upon a kitchen preparation table. The booth is large enough to allow the placement and retrieval of the plate/bowl without disturbing the set-up. The booth dimensions are 138 cm wide by 122 cm high by 91.8 cm deep. Light is supplied by two flash heads with integrated modeling lamps that provide continuous lighting for composing the image. The main light is a large light box suspended above the place setting and slightly behind the top camera. The light emanating from the lamp is softened through a layer of diffusion material that forms the top of the booth. Fill light comes from an umbrella reflector positioned under the table. The left and right sides of the booth are made of white matte board. A stand supports a paper backdrop that gradually curves from the back of the booth to cover the table surface. It is coated with Chroma-key blue paint, allowing graphic artists to easily drop out the background of the picture and replace it with one of their own. 2.4. Image capture and storage Each camera connects to its own PC via a USB cable. The software installed on each of the PCs controls the cameras remotely, enabling viewing via the LCD monitor. The monitor display allows the operator to view the images as soon as the shutter is released. The images captured by the top and side cameras are stored on different PCs. Upon completion of the photo capturing session, these images are transferred to a network location for safe-keeping and image cataloguing. 2.5. Image quality and standardization Figure 3 shows a thumbnail-size picture representing the aerial and angled view exposures of ¾ cup of cheese-filled manicotti generated by the FPS.
Both cameras are set to capture JPEG files of the highest quality and size (3 megabytes each). Images are extremely detailed and can be used for all types of output, ranging from screen viewing to quality printing. End users have the flexibility to choose how they want to process the images for their own specific needs. Image standardization in the FPS is achieved by comparing images obtained against reference images of the place setting captured and stored on the computer hard drives for calibration purposes. Additionally, minute registration marks are strategically placed on the backdrop to facilitate the realignment of the plate and/or cutlery if a displacement occurs when the plates with food are moved in and out of the booth during a photo capturing session. Fig. 3. Aerial and angled views of ¾ cup of cheese-filled manicotti with tomato sauce. Another way to achieve standardization is to use the same type of plates and bowls. Baylor has consistently used Corelle® plates and Corelle® Winter Frost soup and dessert bowls (6.25-in. top diameter). To avoid disturbing the place setting during image capturing, we use a base plate of the same type and size as a place holder. 2.6. System operation A system like the FPS requires minimal technical skill to be operated successfully. However, tasks such as the selection, acquisition, preparation, and weighing and/or measuring of the food items to be depicted in the images according to a detailed protocol are better done by a professional with a background in food and nutrition (dietitians, dietetic technicians, or advanced food science students are the best operators). 3. Results and Discussion To date, the FPS has generated about 20,000 standardized portion size images of foods, in both aerial and angled views. Half of the images produced have been incorporated in the ASA24 web-based application [5].
The protocol used to capture portion size images for ASA24 required that all images be linked to an eight-digit FNDDS code and to a portion size descriptor and gram weight value. This feature sets the image database apart from others and has resulted in a number of collaborations with researchers both in the US and abroad. Examples of this collaborative work include: a researcher from Denmark, who included a subset of our digital images as portion size aids in a self-report tool for 9-11 year old Danish children, the Web-based Dietary Assessment Software for Children known as WebDASC [6]; a researcher from Pennington Biomedical Research Center, LSU, who is using the images produced by the FPS as a standard or reference against which to compare before and after pictures captured by study participants using a smart phone [7]; a researcher from the University of Washington, who used a subset of the digital images in video games for diabetes nutrition education; and a researcher from Indiana State University, who used our standardized images to appropriately scale the digital images produced in their lab. In addition to the research applications mentioned above, portion size food photography could play a role in interventions aimed at helping consumers better cope with an environment known to promote overconsumption [8, 9]. Presenting portion size images alongside nutrition information could potentially have a positive impact by providing a normative benchmark [10] of how much of a packaged food is appropriate to consume. To create and maintain an image database of the proportions described above requires a considerable investment of time and resources. By making the image database freely available to other scientists and/or educators who request permission to use it by contacting the National Cancer Institute (http://riskfactor.cancer.gov/tools/instruments/asa24/resources/contact.html), we seek the best possible return on such an investment.
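The linkage just described (each image tied to an eight-digit FNDDS code, a portion size descriptor, and a gram weight) amounts to a simple record schema. A hypothetical sketch; the field names and every value below are invented for illustration and are not the actual ASA24 schema:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PortionImage:
    """One portion-size image record linked to the FNDDS, per the text."""
    fndds_code: str     # eight-digit FNDDS food code
    portion_desc: str   # portion size descriptor
    gram_weight: float  # gram weight of the depicted portion
    aerial_file: str    # aerial-view image file
    angled_file: str    # 42-degree angled-view image file

# Illustrative record loosely modeled on the manicotti example in Fig. 3;
# the code, weight, and file names are invented for this sketch.
img = PortionImage(
    fndds_code="58145110",
    portion_desc="3/4 cup",
    gram_weight=187.5,
    aerial_file="58145110_075cup_aerial.jpg",
    angled_file="58145110_075cup_angled.jpg",
)
assert len(img.fndds_code) == 8
```

Keying every image pair to a food code plus an explicit gram weight is what lets a recall tool turn a respondent's picked image directly into a nutrient calculation.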
Fig. 4. Nutrition facts: using portion size photography as a visual cue (panel shows Calories 156, Fat 0.2 g, Carb 39 g, Protein 2.6 g). References [1] Baranowski T, Islam N, Baranowski J, Cullen KW, Myres D, Marsh T et al. The Food Intake Recording Software System is valid among fourth-grade children. J Am Diet Assoc 2002;102:380-5. [2] Ross PE. Top 11 technologies of the decade. IEEE Spectrum 2011;48(1):27-63. [3] Subar AF, Kirkpatrick SI, Mittl B, Zimmerman TP, Thompson FE, Bingley C et al. The Automated Self-Administered 24-Hour Dietary Recall (ASA24): A Resource for Researchers, Clinicians, and Educators from the National Cancer Institute. J Acad Nutr Diet 2012;112:1134-7. [4] Nelson M, Atkinson M, Darbyshire S. Food photography I: the perception of food portion size from photographs. Br J Nutr 1994;72:649-63. [5] Zimmerman TP, Hull SG, McNutt S, Mittl B, Islam N, Guenther P et al. Challenges in converting an interviewer-administered food probe database to self-administration in the National Cancer Institute Automated Self-Administered 24-hour Recall (ASA24). J Food Compos Anal 2009;22:S48-S51. [6] Biltoft-Jensen A, Christensen T, Islam N, Andersen LF, Egenfeldt-Nielsen et al. WebDASC: a web-based dietary assessment software for 8-11-year-old Danish children. J Hum Nutr Diet 2012;DOI:10.1111/j.1365-277X.2012.01257.x. [7] Martin CK, Han H, Coulon SM, Allen RH, Champagne CM, Anton SD. A novel method to remotely measure food intake of free-living individuals in real time: the remote food photography method. Br J Nutr 2009;101:446-56. [8] Steenhuis IHM, Vermeer WM. Portion size: review and framework for interventions. Int J Behav Nutr Phys Act 2009;6:58. DOI:10.1186/1479-5868-6-58. [9] EUFIC (2011). Consumer response to portion information on food and drink packaging: a pan-European study. EUFIC Forum nº 5. [10] Wansink B.
Environmental factors that increase the food intake and consumption volume of unknowing consumers. Annu Rev Nutr 2004;24:455-479. Presented at NNDC (March 25-28, 2012).
work_2jjjnibs5bhz5obzfmbvgc32hi ---- Department of Computer Science | Wake Forest University

ABOUT THE COMPUTER SCIENCE DEPARTMENT
Home to Computer Science BS, BA, MS, and Minor Programs

The Department of Computer Science at Wake Forest University prepares students for exciting employment opportunities and for entering our own or other prestigious graduate programs. We boast a highly-qualified and passionate faculty who enjoy student engagement in and out of the classroom, and we pride ourselves on the opportunities provided to our students for meaningful participation in leading-edge research on a broad range of contemporary topics creating solutions to real-world problems.

THE STUDENT EXPERIENCE
The Department prides itself on engaging undergraduate and graduate students in and out of the classroom, providing opportunities for students to work closely with faculty members in their scholarship and research.
Existing programs in the Department are in the areas of network security, digital sound and music, computing for persons with disabilities, imaging, mobile computing, computational mathematics, computational biophysics, bioinformatics, HPC education, databases/big data, and GPU computing. These areas represent not only research in the Department, but also interdisciplinary collaboration in and out of the sciences in other departments. Undergraduate students routinely engage our faculty in these research areas, and the students often receive summer research fellowships from the University for full-time summer research work.

"The Computer Science Department at Wake Forest is an intimate community allowing deep engagement between students and faculty. We are dedicated to making use of innovative approaches to teaching and learning, performing impactful research, and providing meaningful service opportunities." Dr. William Turkett, Department Chair

"... I have run my code on one of the largest supercomputers in the world, traveled to India, Austria, and Japan to present my research, and published articles in peer-reviewed scientific journals and conferences. Without the mentorship and opportunities I found at WFU CS none of this would have been possible." Koby Hayashi, WFU CS Undergraduate Student

SOME "AFTER-GRADUATION" STORIES: Our Alumni

Nick Gerace, BS 2019: "The Wake Forest University CS department gave me a community that few others could provide. I felt validated to learn, let myself get nerdy and creative, and do awesome stuff with my classmates. Through founding the Wake Forest hackathon series, WakeHacks, in my sophomore year, I've met some amazing people ..."

Makenzie Whitener, BS May 2018: "I was introduced to Computer Science when I was a Senior in High School. My dad worked for Wake Forest's CS Department and insisted that I take an AP course to see if I enjoyed the topic. Turns out that Computer Science was one of my passions. ..."
Hannah Goodwin, BS May 2019: "I was convinced to take Intro to Computer Science because my friend told me I would probably like it. He was right! Just a few weeks into Dr. Cañas's class and one of my peers in a study group said, 'listen to her, she's going to be a ...'"

FROM OUR BLOG: Latest News

2021 Goldwater Scholars (March 31, 2021 | Computer Science). Ashley Peake and Joseph McCalmon, mentored by Computer Science faculty member Sarra Alqahtani, have been named 2021 Goldwater Scholars. This award is considered to be the premier undergraduate recognition for STEM students.

High Performance Computing: Building Connections Across Campus (March 24, 2021 | Computer Science). For TechX 2021, an IS-hosted, virtual conference highlighting technology being used at Wake Forest University, the High Performance Computing (HPC) Team put together a video providing an overview of HPC, the DEAC Cluster, research, and classroom...

WFU Students Take First Prize in WakeHacks 2021 Blockchain Event (March 22, 2021 | Computer Science). https://spark.adobe.com/page/touibb4Nd2L7P/

Contact Info: whitenlr@wfu.edu, (336) 758-4982, 233 Manchester Hall, 1834 Wake Forest Road, Winston-Salem, NC
work_2nyz2eitarcqregrjpxudh55se ----

THE THREE FACES OF BUSINESS MODEL INNOVATION: CHALLENGES FOR ESTABLISHED FIRMS

Established firms frequently have difficulty with business model innovation. The business model innovation typology defines the challenges.

Peter A. Koen, Heidi M. J. Bertels, and Ian R. Elsum
DOI: 10.5437/08453608X5403009
Research • Technology Management, May—June 2011. 0895-6308/11/$5.00 © 2011 Industrial Research Institute, Inc.

OVERVIEW: Business model innovation represents a significant opportunity for established firms, as demonstrated by the considerable success of Apple's iPod/iTunes franchise. However, it also represents a challenge, as evidenced by Kodak's failed attempt to dominate the digital photography market and Microsoft's difficulty gaining share in the gaming market, despite both companies' huge financial investments. We developed a business model innovation typology to better explain the complex set of factors that distinguishes three types of business model innovations and their associated challenges.

KEY CONCEPTS: Business model innovation, Value networks, Radical innovation, Breakthrough innovation, Sustaining innovation, Disruptive innovation

Peter Koen is an associate professor in the Wesley J. Howe School of Technology Management at Stevens Institute of Technology, where he teaches undergraduate and graduate courses in corporate entrepreneurship and technology management. He is also the director of the Consortium for Corporate Entrepreneurship, an organization he founded in 1998 with the mission to increase the number, speed, and success rates of highly profitable products and services at the front end of innovation. Peter also has 18 years of industry experience, having worked at AT&T Bell Laboratories and BD. He holds a PhD in biomedical engineering from Drexel University. www.frontendinnovation.com; peter.koen@stevens.edu

Heidi Bertels is a visiting assistant professor of business administration at the University of Pittsburgh, where she teaches courses in entrepreneurship, corporate entrepreneurship, and strategic management. She is finishing her dissertation research at Stevens Institute of Technology on how established organizations can be successful when they try to enter new value networks. She also has a BA and an MA in integrated product development from Hogeschool Antwerpen in Belgium. Her research interests are innovation management, product development, and entrepreneurship. hbertels@katz.pitt.edu

Ian Elsum is principal adviser in the Science Strategy and Investment Group of Australia's Commonwealth Scientific and Industrial Research Organisation (CSIRO), with responsibilities in planning and investment. He is also a visiting fellow at the Australian National University, where he is undertaking research on the management of radical and breakthrough innovation. Ian has 24 years of experience in the strategic management of applied research. He has been a member of a number of company boards and management and advisory committees, and has also been a regular participant in developing innovation policy. Ian has a PhD in chemistry from Monash University. ian.elsum@csiro.au

Why is business model innovation that extends beyond the sustaining innovation space so difficult for established firms? Sony developed the Walkman audio player, redefining the market for portable music devices, but failed to develop a successful MP3 player and allowed Apple to displace it in the portable audio space with the iPod. Similarly, Knight Ridder, one of the largest newspaper publishers in the United States and a pioneer in the digital news market, failed to develop new digital advertising channels to capitalize on the potential of new revenue streams such as those exploited by Monster.com, AutoTrader.com, and REALTOR.com, clinging instead to traditional ad-based models. And Kodak, which dominated the film photography industry, ceded the digital photography market to companies such as Hewlett-Packard, Canon, and Nikon. In each of these cases, the firms had adequate resources, an in-depth market understanding—not to mention a solid head start in the market—and the technical competencies needed to succeed, yet each of these companies allowed new entrants to disrupt them. We wanted to find out why established companies that dominate their markets later allow other companies to succeed with business model innovations that either disrupt them or limit their ability to grow further.

These cases do not fit the usual pattern of disruptive innovation. In elaborating their concept of disruptive innovation, Christensen and colleagues (Christensen and Raynor 2003; Christensen and Rosenbloom 1995) argue that "there are two types of disruptive innovations: low-end and new market" (Christensen, Anthony, and Roth 2004, xvii). At the low end, disruptors gain market share through a low-price business model focused on overserved customers. New market disruption targets new nonconsumers. Intel's development of the low-priced Celeron microprocessor, which targeted the cost-conscious computer market, is an example of an established firm pursuing a low-price business model. Sony's Walkman audio player is an example of a business model focused on reaching new nonconsumers. The new nonconsumers for Sony's portable transistor radio were teenagers who couldn't afford more expensive, high-performance vacuum-tube radios. These consumers, who had previously had no other alternative, were delighted to have control of their own music, even with a sound quality much lower than that offered by vacuum-tube radios. However, Sony's domination of the portable audio player market was disrupted by Apple's iPod, which neither offered a lower price nor focused on new nonconsumers. In fact, none of the disruptions we've described—Apple's iPod, new digital advertising channels, and digital photography—relied on either a low-price or a new nonconsumer business model.

As part of an Industrial Research Institute (IRI) Research-on-Research (ROR) working group project, we set out to better understand how and why disruption occurs in these cases, which do not seem to fall into Christensen's model for disruptive innovation. Furthermore, while Christensen's work focuses on new disruptive business models, we wanted to understand the problem of disruption, and the challenges presented by disruption, from the established firm's perspective. We sought to develop a more comprehensive model to explain disruptive business model innovations, especially those that do not involve low-cost or new nonconsumer business models. The result is a unified business model innovation typology (BMIT).(1)

(1) A brief, preliminary overview of our work was presented in an earlier report (Koen et al. 2010). The work is also part of the PhD dissertation of Heidi Bertels.

The Unified Business Model Innovation Typology

Established firms consistently demonstrate their ability to succeed in sustaining innovation. Intel, for instance, leads in the development of next-generation microprocessor chips using radically new technology. However, these same companies frequently have difficulty trying to develop new business models in new markets—even with existing technology. Intel, for example, has been unsuccessful in penetrating the market for cellphone chips, despite many valiant attempts. Current innovation typologies—including those that rely on distinctions such as incremental/radical (Wheelwright and Clark 1992), sustaining/disruptive (Christensen 1997), or exploitation/exploration (March 1991)—are inadequate to explain this phenomenon. The BMIT allows for consideration of a more complex set of factors and thus more readily distinguishes why and where established firms have difficulty with business model innovation.

The BMIT classifies innovation along three dimensions: technology, value network, and financial hurdle rate (Figure 1). It further divides the innovation space into two zones: sustaining innovation, where established firms generally succeed, and business model innovation, where otherwise successful firms frequently fail.

Within the technology dimension, the model distinguishes among incremental, architectural, and radical technological innovation. Incremental technological innovation involves the refinement, improvement, and exploitation of existing technology. Architectural innovation involves creating new ways to integrate components in a system based on current or incremental changes to existing technology (Henderson and Clark 1990). The iPod, for instance, incorporated no new technology, but provided an entirely new design. Finally, radical innovation introduces an entirely new core technology.

The value network dimension encompasses how a firm identifies, works with, and reacts to customers, suppliers, and competitors. The value network is a tightly connected, complex system of suppliers, customers, distributors, and partners (Christensen and Rosenbloom 1995). The value network dimension is encompassing, embracing the unique relationships that a company builds with both its upstream (supplier) and downstream (distributor and customer) channels. Relationships in these channels are a critical source of competitive advantage. Business model innovation often requires the development of a new value network. The new relationships embedded in a new value network can be problematic, as they are difficult to establish and can disrupt existing relationships. For example, Nestlé distributed Nescafé coffee through existing mass market department store and grocery sales channels, a value network that was very familiar to them. In contrast, the company needed to develop an entirely new value network for Nespresso, a high-end coffee shop targeted to reach young professionals.

In the BMIT, the value network dimension is divided into two areas: innovations within the company's existing network and innovations requiring value networks with components that are new to the company; new value networks may reach existing consumers in the market or new nonconsumers. For example, Zipcar, which makes cars conveniently available for very short-term rental at urban locations, mostly targets existing consumers for rental cars, but the company created a different business model in the way that those consumers access the cars and pay rental fees.
By contrast, the angioplasty catheter, developed to widen arteries blocked by cardiovascular disease, replaced the need for open-heart surgery to treat moderate cardiac disease. In marketing the device to cardiologists, who previously did not do surgeries for blocked arteries, rather than cardiothoracic surgeons, medical device companies such as Bard and Medtronic reached out to a set of new nonconsumers—cardiologists who could now treat their own patients rather than referring them to a surgeon.

Hurdle rate is another factor in the BMIT. The hurdle-rate dimension describes the relationship of a given project's financial projections to the minimal expected return. The hurdle rate is a key factor in traditional disruptive innovation that relies on a low-cost business model. Such low-cost business models are difficult for established companies to pursue because they do not meet the hurdle rates defined by the firm's cost structure and expected rate of return. However, it is possible for established firms to pursue low-cost business models successfully. Dow Corning's subsidiary Xiameter (Gary 2004) offers one success story for low-cost business models implemented by established firms. Dow Corning developed Xiameter as a web-based discount channel through which customers could order the company's more traditional products in bulk, at a lower price, without the customer service usually provided by Dow Corning. Xiameter became an important part of Dow Corning's service offering and prevented the erosion of their market share by companies focused on commodity customers.

Figure 1.—Business model innovation typology (BMIT) model. Established firms tend to be successful in sustaining innovation (innovation that falls into the area defined by the bottom three boxes), but may have difficulty succeeding with innovations outside this area.
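The typology's zone logic can be summarized simply: any level of technological novelty remains sustaining innovation so long as the project stays within the firm's existing value network and meets its normal hurdle rate, while a new value network or a below-standard hurdle rate moves the project into the business model innovation zone. As an illustrative sketch only (not part of the article; the class and field names are our own), that classification rule can be written as a small data model:

```python
from dataclasses import dataclass
from enum import Enum

class Technology(Enum):
    INCREMENTAL = "incremental"
    ARCHITECTURAL = "architectural"
    RADICAL = "radical"

class ValueNetwork(Enum):
    EXISTING = "existing"
    NEW_EXISTING_CONSUMERS = "new, targeting existing consumers"
    NEW_NONCONSUMERS = "new, targeting nonconsumers"

class HurdleRate(Enum):
    STANDARD = "standard"
    BELOW_STANDARD = "below standard"

@dataclass
class Project:
    technology: Technology
    value_network: ValueNetwork
    hurdle_rate: HurdleRate

    def zone(self) -> str:
        # Sustaining zone: existing value network at the firm's normal hurdle
        # rate, regardless of how novel the technology is (hypothetical rule
        # derived from the BMIT description in the text).
        if (self.value_network is ValueNetwork.EXISTING
                and self.hurdle_rate is HurdleRate.STANDARD):
            return "sustaining innovation"
        return "business model innovation"

# Intel's dual-core processor: radical technology, existing value network.
dual_core = Project(Technology.RADICAL, ValueNetwork.EXISTING, HurdleRate.STANDARD)
# Apple's iPod/iTunes: architectural technology paired with a new value network.
ipod = Project(Technology.ARCHITECTURAL, ValueNetwork.NEW_EXISTING_CONSUMERS,
               HurdleRate.STANDARD)

print(dual_core.zone())  # sustaining innovation
print(ipod.zone())       # business model innovation
```

Note how the technology axis never by itself pushes a project out of the sustaining zone; only the value network and hurdle-rate axes do, which is the article's central point.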
Challenges in Sustaining Innovation

The sustaining innovation space, where established firms tend to succeed, is marked by a reliance on existing value networks and a comfortable financial hurdle rate. However, this space is not without challenges even for innovative companies. Exploring it helps provide perspective on the challenges established companies encounter when they leave the relative comfort zone of sustaining innovation.

Sustaining innovation—technology improvements or even radical new technologies implemented within the companies' existing value network and established financial hurdle rates—protects the status quo and represents the majority of product development activities. Incremental sustaining innovations, which deploy minor, progressive improvements in technology within an established value network, are the easiest and least risky, and hence the most common activities. Sustaining innovations that utilize the existing value network to produce and market architectural or radical technology improvements while maintaining existing financial hurdle rates are more difficult, since they involve higher degrees of technological novelty and higher levels of risk. As an example, Toyota's Prius is a sustaining architectural innovation; it involved no new technology, but combined existing gasoline and electric motor technology to create a hybrid design with significantly improved fuel efficiency, and it was created and sold within existing value networks and hurdle rates. Intel's dual-core processor is also a sustaining innovation, as it incorporated a radical technology innovation (new designs that doubled the chip's performance while reducing cooling demands) but relied on an existing value network for distribution.

Sustaining innovations that rely on incremental technology improvements require different behaviors and processes than those implementing architectural or radical technology.
Incremental projects can be managed using a well-honed serial innovation process with gated decision points (Cooper 2001), a system that has proven its merit in projects where the market and technology are known. In contrast, architectural and radical technology projects require a more complex learning strategy to manage the challenges associated with radical technology. A learning strategy is a cyclical process in which assumptions and uncertainties are tested and resolved through experimentation and iteration (O'Connor et al. 2008). The project direction and strategy often shift as uncertainties decrease.

Challenges in Business Model Innovation

The business model innovation space, where established firms frequently fail, requires companies to succeed with business models that require a lower than normal financial hurdle rate or the development of new value networks. Established companies typically encounter significant challenges in this more difficult zone of the BMIT.

Financial Hurdle Business Model Innovations

Business model innovation challenges in the financial hurdle space were first explained by Christensen and colleagues, who describe how disrupters gain market share through low-price business models designed to appeal to existing consumers with a more affordable option (Christensen and Raynor 2003; Christensen and Rosenbloom 1995). Low-cost business models like the disruptive innovations defined by Christensen are business model innovations that are guided by the financial hurdle rate (Figure 2). Innovations in this area typically involve projects with a lower hurdle rate than the established cost structure would allow. Christensen's prime example was steel minimills, which disrupted integrated steel mills with electric arc-furnace technology, a radical new technology that allowed for cheaper production, although initially these furnaces could not match the quality of the larger mills' product.
The established mills initially ceded market share to the minimills, allowing the small producers to gain a foothold in the market by making cheap reinforcing bars (rebar). Over time, minimill producers learned how to make more profitable sheet steel at an acceptable quality level and eventually replaced integrated steel mills.

Some established companies have flourished by accepting a low-cost business model, meeting potential disrupters head-on. Intel developed the low-cost Celeron chip and Dow Corning established Xiameter in order to prevent erosion of their main offerings by low-cost competitors. Similarly, the Mercedes A-Class targeted a middle-class market, but leveraged existing sales channels and distribution networks to sell the car. Marriott's Courtyard chain of hotels targets a lower-cost segment of the market by eliminating the fancy restaurants and conference and meeting facilities that characterize its higher-end hotels.

We found fewer examples of companies utilizing architectural innovation within their existing value networks to power a low-cost business model. One example is the BIC pen corporation, which moved from manufacturing expensive fountain pens to selling low-cost ballpoint pens using a highly integrated, automated manufacturing process.

Moving to a low-cost business model presents unique challenges for established firms. As both Christensen and Raynor (2003) and Govindarajan and Trimble (2005) argue, it is extraordinarily difficult for a company to maintain two different business models within the same business division. Such a situation, they assert, would produce trade-offs that would result in a strategy favoring the sustaining business.
As a result, both Christensen and Raynor and Govindarajan and Trimble recommend that a company interested in pursuing a low-cost model while maintaining its existing business create two distinct organizations—which is exactly what Intel did in the development of the Celeron chip and Dow Corning did in establishing Xiameter. Both Intel and Dow Corning separated these units from the sustaining business. Where the larger Intel business pursued a string of breakthrough innovations in chip technology, the Celeron division focused on cost efficiency, both in achieving just good enough performance features to offer value and in aggressively pursuing manufacturing efficiencies.

Figure 2.—Examples of innovation projects at the intersection of the financial hurdle and technology dimensions.

New Value Network Business Model Innovations Targeting Existing Consumers

Established firms often see opportunities for growth in seeking out existing consumers within a new value network that allows the firm to maintain existing financial hurdle rates (Figure 3). For example, in an effort to reach young urban professionals, Nestlé developed Nespresso, a coffee outlet that has been described as an upscale Starbucks. Nespresso represented a new value network for Nestlé's coffee business, which had previously sold instant coffee to the mass market via department and grocery stores. Similarly, Tesco, the United Kingdom's largest supermarket chain, developed Tesco Direct as an online outlet to sell not only grocery items but also books, CDs, and other nonfood items. Toyota and Honda both developed luxury car brands, Lexus and Acura, which they sold in separate dealerships from their current lines, focusing on a different market via a value network that was new to them and targeted at existing affluent customers.

Fewer firms pair architectural innovation with a new value network. Microsoft launched a new business—videogame consoles—with a new value network with its development of the Xbox and its accompanying online services, and Knight Ridder, which already owned a network of print newspapers, developed an online newspaper to reach a wider market. In each case, the firms launched new businesses to reach existing consumers via new value networks, and in each case the companies lost a considerable amount of their investment. It is even rarer for an established firm to pair a radical technical innovation with a new value network to reach existing consumers.

Christensen and Raynor (2003) and Govindarajan and Trimble (2005) recommend separating sustaining businesses from new value network projects. However, Markides and Charitou (2004) challenge this recommendation, arguing that separation should be dependent on the degree of synergy and conflict between the two business models. Nestlé separated the Nespresso business unit from Nescafé, as the Nescafé division perceived that Nespresso would cannibalize its sales and the two units had markedly different cultures: the Nescafé unit saw its product as a low-price, fast-moving consumer product, while the Nespresso unit was working to position itself as an up-market luxury experience. Unsurprisingly, values and attitudes were significantly different, creating the potential for conflict. In contrast, Markides and Charitou describe the creation of Tesco Direct as a part of Tesco, launched from one of Tesco's west London stores. Since the supermarket's customers were confined to the area surrounding the store, in contrast to the Internet arm whose reach extended to all of the United Kingdom, there was little conflict between the two ventures. Tesco Direct built on the synergy of the supermarket and leveraged the store's stock to keep the initial start-up investment low.
Figure 3.—Examples of innovation projects at the intersection of the value network and technology dimensions.

O'Reilly and Tushman (2004) advocate yet another approach for business model innovations in this area: the ambidextrous organization. This approach offers a middle ground between completely separated and completely integrated organizations. O'Reilly and Tushman suggest separating the new business model from the sustaining organization—but they argue that both organizations should share senior management. Such an arrangement, they argue, ensures that the startup unit will have access to the resources and expertise of the established unit. Developing an ambidextrous organization requires considerable senior management leadership training. IBM has followed this approach with considerable success, growing their new-business-model revenue from $400 million in 2000 to $22 billion in 2006 (O'Reilly, Harreld, and Tushman 2009).

The most successful players in value network innovation pair the new value network with an incremental technology innovation. Fewer firms have successfully matched a new value network with an architectural or radical technology innovation. Knight Ridder reportedly accumulated losses of over $100 million in the launch of its first online newspaper. Information Week reported in 2009 that Microsoft had total losses in the gaming industry of about $7 billion (Schestowitz 2009). And Kodak invested over $5 billion in digital technology and never managed to become more than a small player in the market. Knight Ridder and Kodak perceived a threat to their businesses and invested significant amounts of money in an effort to head off the threat.
Microsoft, on the other hand, anticipated a technological revolution that would turn the family room into a wireless, networked nerve center for seamlessly accessing and managing all kinds of media, and positioned the Xbox to be at the center of that revolution (Grossman 2005). In that context, Microsoft developed the Xbox not as a game machine or toy, but as a way to own the entire digital environment in the home, a center for accessing music, movies, photographs, and television. While the Xbox has achieved moderate success in the gaming market, Microsoft was unable to establish it as the nerve center for the family room, resulting in a significant financial loss for the company.

Knight Ridder and Kodak acted out of fear, while Microsoft saw a very large opportunity, but all of these companies were defeated by similar forces. All of these efforts failed in spite of the enormous resources made available to them because of routine rigidity—the tendency to frame responses to new challenges to fit familiar frameworks (Gilbert 2003). Routine rigidity led executives at all three companies to frame their efforts to fit the familiar frameworks of their sustaining businesses. Kodak could envision being successful only by leveraging its existing relationships with retailers and offering digital photography CD disks and digital photography kiosks. Focused on its existing business models, the company did not pursue sales of digital cameras, photo printers, and printer disposables, which ended up being the real moneymakers in the digital market, until much later. Similarly, Knight Ridder could only envision the digital newspaper as an extension of the print newspaper, and so failed to exploit new revenue channels or to develop the digital channel fully.
Microsoft's vision of the Xbox as a gateway into the living room drove important technology decisions that ultimately produced an expensive console that appealed primarily to hardcore gamers—no one else was ready for the home entertainment hub Microsoft wanted to build. The hub platform strategy never came to fruition; although the Xbox became a moderately successful gaming product, it never delivered on the intended business model and generated huge losses.

One notable example of a successful architectural innovation by an established company accessing a new value network is the iPod and iTunes. iTunes, which delivers single song tracks to consumers in a user-friendly format, required Apple to build unique partnerships with the music industry, resulting in new value networks. Apple did not envision the music industry as a threat to the company or essential to its future. Rather, the company envisioned music as a new opportunity to be approached prudently, with limited initial investments. As a result, Apple did not try to frame the market as an extension of its current sustaining computer business. First- and second-year sales were paltry by most standards, less than $4 million the first year and $10 million the second. But since the company did not view the iPod as central to its future business, the lower initial returns were acceptable and Apple was able to give the new business time and space to develop.

New Value Network Business Model Innovations Targeting Nonconsumers

Established firms may also seek to establish entirely new value networks to reach nonconsumers—potential customers who have not entered the market.
Developing new businesses in this context presents a different set of challenges for established companies, one that was also explored by Christensen and colleagues (Christensen and Raynor 2003; Christensen and Rosenbloom 1995). Innovations to reach nonconsumers are the "hardest innovations to identify" (Christensen, Anthony, and Roth 2004, 8), but they have the greatest potential for growth. We did not find any examples of successful innovation in this space paired with incremental technological innovation. Sony's Walkman is an example of architectural innovation to access a market of previous nonconsumers, and Ciba Vision's Visudyne represents a radical innovation paired with a new value network. With Visudyne, Ciba Vision entered a global agreement with QLT PhotoTherapeutics to develop compounds that can be used with photodynamic therapy to treat age-related macular degeneration, a debilitating disease that leads to blindness. Ciba Vision's sustaining business is focused on improving its hard contact, extended wear, and daily disposable lenses, which are typically sold directly to end users. Visudyne, by contrast, uses fundamentally different technology to slow macular degeneration; it is a pharmaceutical product that is sold to ophthalmologists. Both the Walkman and Visudyne have proven to be very successful for their companies, establishing entirely new value networks and entirely new markets.

Christensen and Raynor (2003) provide numerous examples of start-ups that have succeeded in implementing incremental and architectural innovations with new value networks to access an entirely new market. It is, they demonstrate, quite difficult for established companies to gain management support for the development of an entirely new business in a market that is yet to be defined. As a result, we suspect that new business model development in this area will be driven by new companies.
Conclusion

The BMIT illustrates how and why businesses behave differently in the two innovation zones—sustaining innovation, where established firms typically succeed, and business model innovation, where they frequently fail. The challenges of business model innovation are shaped by the scope and target of the innovation. Established firms will find both rewards and considerable risks in developing new value networks to reach existing consumers who are not yet customers. The challenges faced in building a value network to reach nonconsumers are quite different from those encountered in other types of innovation projects. Business model innovation represents a new frontier in innovation beyond just product or service innovation. However, it challenges most established firms to the core of their organization and culture and has proven very difficult for many companies. Developing a new business model requires organizations to develop new skills and at times reject the thinking that has led them to success in their sustaining businesses. The BMIT provides a framework within which established companies may understand the different kinds of business model innovation and the organizational challenges associated with each type.

References

Christensen, C. M. 1997. The Innovator's Dilemma: When New Technologies Cause Great Firms to Fail. Boston, MA: Harvard Business School Press.
Christensen, C. M., Anthony, S., and Roth, E. 2004. Seeing What's Next. Boston, MA: Harvard Business School Press.
Christensen, C. M., and Raynor, M. E. 2003. The Innovator's Solution. Boston, MA: Harvard Business School Press.
Christensen, C. M., and Rosenbloom, R. S. 1995. Explaining the attacker's advantage: Technological paradigms, organizational dynamics, and the value network. Research Policy 24: 233-257.
Cooper, R. G. 2001. Winning at New Products: Accelerating the Process From Idea to Launch. 3rd ed. Cambridge, MA: Perseus.
Gary, L. 2004.
Dow Corning's push for organic growth. Strategy & Innovation 2(6): 1-5.
Gilbert, C. 2003. Mercury rising: Knight Ridder's digital venture. Harvard Business School Case 9-803-107. Cambridge, MA: Harvard Business School.
Govindarajan, V., and Trimble, C. 2005. Building breakthrough businesses within established organizations. Harvard Business Review 83(5): 58-68.
Grossman, L. 2005. Microsoft: Out of the XBox. [Online exclusive.] Time Magazine, May 15. http://www.time.com/time/magazine/article/0,9171,1061497,00.html (accessed February 14, 2010).
Henderson, R. M., and Clark, K. B. 1990. Architectural innovation: The reconfiguration of existing product technologies and the failure of established firms. Administrative Science Quarterly 35: 9-30.
Koen, P. A., Bertels, H., Elsum, I. R., Orroth, M., and Tollett, B. L. 2010. Breakthrough innovation dilemmas. Research-Technology Management 53(6): 48-51.
March, J. G. 1991. Exploration and exploitation in organizational learning. Organization Science 2(1): 71-87.
Markides, C., and Charitou, C. 2004. Competing with dual business models: A contingency approach. Academy of Management Executive 18(3): 22-36.
O'Connor, G. C., Leifer, R., Paulson, A. S., and Peters, L. S. 2008. Grabbing Lightning: Building a Capability for Breakthrough Innovation. 1st ed. San Francisco, CA: Jossey-Bass.
O'Reilly, C. A., and Tushman, M. L. 2004. The ambidextrous organization. Harvard Business Review 82(4): 74-81.
O'Reilly, C. A., Harreld, J. B., and Tushman, M. L. 2009. Organizational ambidexterity: IBM and emerging business opportunities. California Management Review 51(4): 75-99.
Schestowitz, R. 2009. Microsoft XBox group still operates at a loss, XBox director quits. [Blog post, May 3.] Techrights. http://techrights.org/2009/05/03/microsoft-xbox-failure-departure/ (accessed February 14, 2010).
Wheelwright, S. C., and Clark, K.
1992. Revolutionizing Product Development: Quantum Leaps in Speed, Efficiency, and Quality. New York: The Free Press.

work_2pp5fbs4qjbatgf7ndhggs6t5e ----

Microscopic Digital Imaging in Introductory Biology

James Ekstrom*
*Phillips Exeter Academy, Exeter, NH 03833

K-12 instruction in biology has traditionally taken a very descriptive approach. This is in marked contrast to a quantitative as well as qualitative way of looking at things in physics and chemistry. This qualitative/descriptive approach even extends into the laboratory portion of the biology course. One place where a more quantitative approach can be introduced is the microscopy portion of the biology curriculum. This area of biology ordinarily occurs first in the syllabus for several reasons. Because cellular structure is primarily a microscopic province, it makes sense to introduce students to the different microscopic tools, such as TEM and SEM as well as the light microscope, that are used to investigate cell structure. Also, the light microscope is the principal, if sometimes the only, instrument found in biology classrooms.
A typical introduction to the microscope can involve a measurement of the "field of view" as well as getting used to the various controls found on the instrument. If the lowest power student objective is 4X and the ocular 10X, this measurement can occur with a fair degree of accuracy using a 6" mass-produced plastic ruler that also has a metric edge to it. Using a higher power objective would involve mathematically calculating what the field would be or using an inexpensive ($15.00) micrometer. Once the student makes these calculations for 40X (4X x 10X), 100X and 400X, they can record them and keep them with the microscope. At some future point, if a "wee beastie" or some such thing should occupy one-half of their field under 100X, then they would have an approximate size for the object. The advent of inexpensive digital photography allows the instructor to carry out a more sophisticated approach to this exercise. Digital images of the three fields of magnification can be stored as calibrations for that microscope. A subsequent microscopic image can be digitized and the reference scale for that magnification can be cut and pasted onto the image. Figure 1 on the second page shows a cheek cell that has been photographed, with the relevant scale pasted on the image. This calibrated image can be printed and passed out to students as an introduction to measurement. The students would simply be given the sheets, told to work in small groups, and allowed to have string and a millimeter ruler. Their goal is to determine the length of the cell. A discussion follows bearing on the accuracy of their results. The same calibrated image can then be brought up in one of the following freeware programs: NIH Image (Macintosh), Scion Image (PC), or the new internet applet known as ImageJ. Students carrying out the above exercise in any of these applications can compare their results to what they received on the paper exercise.
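The scaling arithmetic described above (total magnification as objective times ocular, field of view shrinking in inverse proportion to magnification, and a reference scale of known length converting pixel measurements to millimeters) can be sketched in a few lines of Python. The helper names and the sample numbers below are illustrative, not taken from the article:

```python
def total_magnification(objective, ocular):
    """Total magnification is the product of the objective and ocular powers."""
    return objective * ocular

def field_of_view(fov_at_low, low_mag, target_mag):
    """Field of view scales inversely with magnification."""
    return fov_at_low * low_mag / target_mag

def mm_per_pixel(ruler_mm, ruler_px):
    """Calibration factor from a reference scale of known length in the image."""
    return ruler_mm / ruler_px

assert total_magnification(4, 10) == 40  # 4X objective x 10X ocular

# Assume a 4.5 mm field was measured with the plastic ruler at 40X:
print(field_of_view(4.5, 40, 100))        # field at 100X -> 1.8 mm
print(0.5 * field_of_view(4.5, 40, 100))  # object filling half the 100X field -> 0.9 mm

# Assume a 5 mm pasted scale bar spans 250 pixels in a digital photograph:
scale = mm_per_pixel(5, 250)              # 0.02 mm per pixel
print(120 * scale)                        # a 120-pixel cell length -> 2.4 mm
```

The same pixel-calibration idea underlies the "set scale" step in NIH Image, Scion Image, and ImageJ, where the known ruler distance is entered once per magnification.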
After calibrating their image, they can measure the area and perimeter of various structures and then compare these measurements to other ways of looking at the cheek cell, like the SEM image shown in Figure 2 [1]. A teacher using ImageJ can put class results on the internet and allow the students to interpret their results and write up their conclusions after the laboratory period. The biggest hurdle to the use of quantitative work in biology, with specific reference to digital imaging, is training teachers in the use of this technology and seeing that the exercises "fit in" to the curricular standards.

[1] J. Ekstrom http://science.exeter.edu/jekstrom/WEB/CELLS/Epith/Epith.html
[2] J. Ekstrom, Cell Structure Study, The Science Teacher Vol 67, No. 7, October 2000 http://science.exeter.edu/jekstrom/nsta/elodea.html
[3] J. Ekstrom, Slicing for Biology, The Science Teacher Vol 68, No. 2, February 2001

Microscopy Society of America 2002, Microsc. Microanal. 8 (Suppl. 2), 2002. DOI: 10.1017/S1431927602101152

Figure 1: Cheek cells stained with methylene blue.
Figure 2: Cheek cells dried and sputter coated with gold.
work_2tt4kjr3wvgv3efjw34n23csni ----

Social Media + Society, April-June 2015: 1-3. © The Author(s) 2015. DOI: 10.1177/2056305115578675. sms.sagepub.com

Polymedia and Ethnography: Understanding the Social in Social Media

Mirca Madianou
Goldsmiths, University of London, UK

Abstract
In this essay I argue that social media need to be understood as part of complex environments of communicative opportunities which I conceptualize as polymedia. This approach shifts our attention from social media as discrete platforms to the ways users navigate environments of affordances in order to manage their social relationships. Ethnography emerges as the most appropriate method to capture the relational dynamics that underpin social media practices within polymedia.

Keywords
affordances, social media, media environments, ethnography, personal relationships, social media and disasters

Corresponding Author: Mirca Madianou, Department of Media and Communications, Goldsmiths, University of London, New Cross, London SE14 6NW, UK. Email: m.madianou@gold.ac.uk

SI: Manifesto

When Typhoon Haiyan struck the town of Tacloban in the Philippines in November 2013, Gilbert lost all members of his family. Engulfed by enormous waves, his family home was washed away together with all material possessions. The only thing that remained of his father and sister were their Facebook profiles and photographs uploaded on social media. The permanence of digital photography and the retrievability of digital content on this occasion became central to Gilbert's grieving and dealing with loss. Originally an infrequent Facebook user, after the Typhoon, Gilbert became a prolific contributor, expressing his emotions and recounting his memories on his own wall and those of his late relatives.

Defining social media and assessing their social uses can be challenging given they become so many things to different people. While for Gilbert Facebook is a way of dealing with loss, for Aira, a Filipina teenager, it is a way of finding the ideal distance in the relationship with her mother who works abroad. The different temporal structure and visuality of Facebook and WhatsApp compared to Skype explain why Aira prefers them for keeping in touch with her migrant mum, as they spare her the embarrassment of having to find excuses not to turn on the webcam and reveal her untidy room. The asynchronous temporality of social networking sites (Baym, 2010) affords Aira more control over how she presents herself and handles the relationship with her mother.

As social media proliferate, each acquires its own niche in people's communicative repertoires. What emerges then is a complex environment of multiple, evolving social media that combine with other platforms, older and newer. The term "polymedia," which I developed together with D. Miller (Madianou & Miller, 2012, 2013), shifts our attention from social media as discrete platforms to an understanding of media environments which users navigate to suit their communicative needs. If the term "social media" is too generic while "Facebook," "Viber," or "Twitter" are too specific and ephemeral (given the perpetual evolution of platforms), polymedia puts forward a dynamic model of media as a composite structure of converging communicative opportunities within which social media can be understood. According to this approach, the emphasis is on the relational definition of all media from a users' point of view within this composite structure. To return to Aira's earlier example, each platform acquires its own niche depending on its affordances, while all media together form the environment within which this particular mother-daughter relationship is negotiated. For instance, Facebook has a specific set of features and affordances (boyd, 2014; Hutchby, 2001; Papacharissi, 2009) that distinguish it from other social media and online platforms. What Aira cannot do through Facebook, she can almost certainly achieve through an alternative social networking site that combines a different set of affordances.

Users rarely use just one platform, but an assemblage of media, while increased convergence intensifies the switching between platforms, as is evident in research with smartphone users (Madianou, 2014). Polymedia highlights the ways in which users exploit the differences among media in order to manage their relationships. Assuming users have unconstrained access to and can skillfully use at least half a dozen media—and I recognize this is a big if—choosing one social networking site (say Facebook) over another platform (say Skype) acquires communicative significance. In Aira's example, the choice of Facebook signals her desire to keep her mother at some distance—a choice that is not lost on the mum who prefers the immediacy of webcam. The choice of platform or medium can become as meaningful as the actual content of a particular exchange.
The analysis of media as an environment and the emphasis on the ways in which users navigate this environment can open a window to the inner micro-workings of mediated communication and their consequences for people's relationships. In this short piece, I can only very briefly sketch the contours of polymedia, which is one of parallel intellectual efforts that acknowledge media as a process and as part of interlocking social, political, and economic environments (Couldry, 2012; Deuze, 2012; Kember & Zylinska, 2012; Latour, 2005). A more detailed analysis and comparison to other related concepts (such as multimedia and transmedia; see Jenkins, 2006) and theories (media ecology [Ito et al., 2009; Slater, 2013], media multiplexity [Haythornthwaite, 2005], mediation and mediatization [Couldry, 2012; Hepp, 2012; Livingstone, 2009]) can be found elsewhere (Madianou, 2014; Madianou & Miller, 2013).

Even the shortest essay on polymedia, however, would be incomplete without a discussion of ethnography. The theory of polymedia emerged out of a comparative ethnography of new communication technologies among transnational families (Madianou & Miller, 2012). The ethnographic approach was crucial for capturing media as environments and the relational dynamics therein. I argue that ethnography is the best if not the only way to study polymedia. While data-driven approaches have acquired enormous popularity and contribute to the generalized understanding of social media practices, only ethnography can unearth the nuanced ways in which people navigate the environment of social and other media and how this is shaped by relational dynamics. Because ethnography uniquely combines a wide lens and a microscopic attention to detail, it is perfect for capturing environments and their contexts but also the micro-dynamics that produce them. Ethnography does not assume what social media are, but rather highlights their social uses according to context.
This is essential for capturing cultural and social differences in the context of the rising popularity of social media in what is called the "global south." Beyond the cultural relativism argument, ethnographers can assess popular assumptions about social media. Gilbert's earlier example emerged from a study of social media in disaster recovery. Rather than confirming the utopian, techno-determinist visions about the so-called "humanitarian technologies" (United Nations Office for the Coordination of Humanitarian Affairs [UNOCHA], 2013; World Disasters Report [WDR], 2013) to empower affected people and transform the power asymmetries of humanitarianism, our ethnography revealed rather more ordinary uses of social media such as dating and gaming in the aftermath of disaster. In Gilbert's case, social media did not reverse the power structure of humanitarianism, but they did facilitate grieving and coping with loss. The ethnographic surprise (Strathern, 1996) can enrich our understanding of social media.

Polymedia and ethnography converge in that they take as a starting point the relational dynamics that underpin social media practices. Polymedia involves three types of relationships: those among media within a communicative environment, the relationships between humans and technology, and the relationships among people through and "in" media. In other words, polymedia is about the convergence of the technological and the social, and in so doing about unpacking the "social" in social media.

Declaration of Conflicting Interests
The author declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: The empirical examples quoted in this essay are derived from two Economic and Social Research Council (ESRC)-funded studies: the "Humanitarian technologies project," 2014-2015 (ES/M001288/1), and "Migration, ICTs and transnational families," 2007-2011 (RES-000-22-2266), with additional follow-up work conducted by the author.

References
Baym, N. (2010). Personal connections in the digital age. Cambridge, UK: Polity.
boyd, d. (2014). It's complicated: The social lives of networked teens. New Haven, CT: Yale University Press.
Couldry, N. (2012). Media, society, world. Cambridge, UK: Polity.
Deuze, M. (2012). Media life. Cambridge, UK: Polity.
Haythornthwaite, C. (2005). Social networks and internet connectivity effects. Information, Communication & Society, 8, 125-147.
Hepp, A. (2012). Cultures of mediatization. Cambridge, UK: Polity.
Hutchby, I. (2001). Technologies, texts and affordances. Sociology, 35, 441-456.
Ito, M., Baumer, S., Bittanti, M., boyd, d., Cody, R., Herr-Stephenson, B., . . . Tripp, L. (2009). Hanging out, messing around, and geeking out: Kids living and learning with new media. Cambridge, MA: MIT Press.
Jenkins, H. (2006). Convergence culture: Where old and new media collide. New York: New York University Press.
Kember, S., & Zylinska, J. (2012). Life after new media. Cambridge, MA: MIT Press.
Latour, B. (2005). Reassembling the social: An introduction to actor-network theory. Oxford, UK: Clarendon Press.
Livingstone, S. (2009). On the mediation of everything. Journal of Communication, 59, 1-18.
Madianou, M. (2014). Smartphones as polymedia. Journal of Computer-Mediated Communication, 19, 667-680.
Madianou, M., & Miller, D. (2012). Migration and new media: Transnational families and polymedia. London, England: Routledge.
Madianou, M., & Miller, D. (2013).
Polymedia: Towards a new theory of digital media in interpersonal communication. International Journal of Cultural Studies, 16, 169-187.
Papacharissi, Z. (2009). The virtual geographies of social networks: A comparative analysis of Facebook, LinkedIn and ASmallWorld. New Media and Society, 11, 199-220.
Slater, D. (2013). New media, development and globalisation. Cambridge, UK: Polity.
Strathern, M. (1996). Cutting the network. Journal of the Royal Anthropological Institute, 2, 517-535.
United Nations Office for the Coordination of Humanitarian Affairs. (2013). Humanitarianism in the network age (OCHA Policy and Studies Series). New York, NY: Author.
World Disasters Report. (2013). Technology and the future of humanitarian action. Geneva, Switzerland: International Federation of Red Cross and Red Crescent Societies.

Author Biography
Mirca Madianou (PhD, London School of Economics) is Senior Lecturer in Media and Communications at Goldsmiths, University of London. Her research interests include the social uses of communication technologies, especially in non-Western contexts, social media in disasters, and the intersection of social media and mobility/migration.

work_2v53lf64yjc5jldyrdna5z5woi ----

Validity and reproducibility of cephalometric measurements obtained from digital photographs of analogue headfilms

Simonas Grybauskas, Irena Balciuniene, Janis Vetra

SCIENTIFIC ARTICLES
Stomatologija, Baltic Dental and Maxillofacial Journal, 9:114-120, 2007

SUMMARY
The emerging market of digital cephalographs and computerized cephalometry is overwhelming the need to examine the advantages and drawbacks of manual cephalometry; meanwhile, small offices continue to benefit from the economic efficacy and ease of use of analogue cephalograms.
The use of modern cephalometric software requires import of digital cephalograms or digital capture of analogue data: scanning or digital photography. The validity of using digital photographs of analogue headfilms rather than the original headfilms in clinical practice has not been well established. Digital photography could be a fast and inexpensive method of digital capture of analogue cephalograms for use in digital cephalometry.
AIM. The objective of this study was to determine the validity and reproducibility of measurements obtained from digital photographs of analogue headfilms in lateral cephalometry.
MATERIAL AND METHODS. Analogue cephalometric radiographs were performed on 15 human dry skulls. Each of them was traced on acetate paper and photographed three times independently. Acetate tracings and digital photographs were digitized and analyzed in cephalometric software. A linear regression model, paired t-test intergroup analysis and the coefficient of repeatability were used to assess validity and reproducibility for 63 angular, linear and derivative measurements.
RESULTS AND CONCLUSIONS. 54 out of 63 measurements were determined to have clinically acceptable reproducibility in the acetate tracing group, as were 46 out of 63 in the digital photography group. The worst reproducibility was determined for measurements dependent on landmarks of incisors and poorly defined outlines, the majority of them being angular measurements. Although statistically significant differences between methods existed for as many as 15 parameters, they appeared to be clinically insignificant, being smaller than 1 unit of measurement. Validity was acceptable for 59 of 63 measurements obtained from digital photographs, substantiating the use of digital photography for headfilm capture and computer-aided cephalometric analysis.
Key words: cephalometry, reproducibility, dry skull, acetate tracing, digital photography.
1Institute of Odontology, Faculty of Medicine, Vilnius University, Vilnius, Lithuania
2Institute of Anatomy and Anthropology, Riga Stradins University, Riga, Latvia
Simonas Grybauskas1 – D.D.S., MOS RCSEd
Irena Balciuniene1 – D.D.S., PhD, Dr. habil. med., prof.
Janis Vetra2 – M.D., Dr. habil. med., prof.
Address correspondence to Dr. Simonas Grybauskas, Institute of Odontology, Faculty of Medicine, Vilnius University, Zalgirio 115, Vilnius LT-08217, Lithuania. E-mail: simonas.grybauskas@gmail.com

INTRODUCTION
Variety of emerging computer software for lateral cephalometry in clinical orthodontics simplified the analysis and reduced time needed to perform certain measurements [1,2,3]. The ease of use and ability to perform several analyses at a time, as well as convenience in generation of treatment predictions, have contributed to a shift from manual tracing on acetate paper towards digital computer-aided cephalometry [4]. Digital cephalometry has offered even more advantages, i.e., the option to manipulate the image for size and contrast, image enhancement, the ability to archive and improve access to images, and superimposition of images [5]. Moreover, patients benefit from a reduced dose of radiation if a digital cephalograph is chosen for image capture, whereas the lack of a user-sensitive chemical development process and instantaneous image formation save both space and time in the clinician's practice [6]. By now, many offices have not yet switched to the use of digital cephalographs, therefore the digitization process of analogue headfilms is the only option if the benefits of digital cephalometric analysis are anticipated. The two known methods of headfilm capture are scanning and digital photography. Studies have shown that images captured from a flatbed scanner can be reliable as compared to their corresponding analogue headfilms for use in clinical practice, though not so good for research [7-11]. Little data exists on the reliability of images captured by means of digital photography – a poorly documented, operator-sensitive technique with some speculations on distortion of images [12]. Computer-aided cephalometry and the digitizing process of analogue headfilms were reported by numerous authors [2,8,12-21]. However, results of comparison of digitizing methods with analogue measurement methods were contradictory [2,9,14,18,22,23].
The aim of this study was to evaluate validity and reproducibility of measurements obtained from digital photographs of headfilms as compared to those obtained from traditional acetate paper tracings. Validation of digital photography can enable its use in digital capture of analogue data for computer-aided cephalometric analysis without need for specific hardware.

MATERIAL AND METHODS
A set of 15 human dry skulls was obtained from the Department of Anatomy, Histology and Anthropology at Vilnius University. The skulls were chosen according to the following criteria: occlusion was stable and reproducible with at least three pairs of antagonist teeth; posterior occlusal height was present; at least one of the condyles was intact and fit into the glenoid fossa. The mandible part was related to the maxilla of its counterpart skull on the basis of occlusal interdigitation or maximal contact, and condylar seating in the glenoid fossa. Since soft tissue components of the temporomandibular joint (TMJ) were missing on dry skulls, interpositional items were used to support the condyles in the center of the glenoid fossa, preventing them from contact with the bone surface, thus mimicking natural intra-articular space. Subsequently, the mandible was secured in this position with scotch tape around the skull.
Fifteen lateral cephalograms were performed on the series of skulls by securing the skulls in the cephalostat (Moviplan 8000 CE, Villa Sistemi Medicali, Italy) with the ear rods in the external auditory meati, and the distance between film and midsagittal plane at 13 cm. Preliminary work led to using the following radiographic setting: 77 kVp, 12 mA, 0.10 s. Headfilms were traced on acetate paper three times, with a one week interval between independent tracings, by the same operator; hence, the acetate tracing group was composed of 45 cephalometric acetate tracings. Two ruler points were marked on every tracing 108 mm apart. Following this, headfilms stayed on the view box and 15 digital pictures were taken (Canon 350D, Macro lens 100mm f2.8; 5Mp resolution, image resolution 3200 x 2400 pixels) at a right angle from a distance of 2 meters. A transparent ruler of 108 mm was present on the radiograph, whereas the camera was secured on the tripod when taking pictures of every radiograph. Three pictures were taken for every headfilm, and the camera was dismounted and remounted after every picture to imitate independent attempts. The digital photography group was composed of 45 digital photographs of lateral headfilms. Digital pictures were imported into Dolphin 9.0 cephalometric software (Dolphin Imaging, USA) and the digitizing procedure was performed on the series of 15 triplets of digital pictures. Images were sharpened, saturated, contrasted and brightened if needed to achieve best visibility of landmarks. Acetate tracings were stuck to the computer screen (hardware: IBM T60p, 1.8GHz, 2GB RAM, ATI Mobility FireGL V5200, screen resolution 1600x1200, 32bit color quality) with scotch tape and an identical digitizing procedure was performed on every tracing. The magnification factor was known to be 1.08 for the given cephalograph; therefore, the 108 mm distance between ruler points was attributed to a 100 mm distance in the software.
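The 1.08 magnification correction described above amounts to a single division; a minimal sketch (the function name is hypothetical, not from the article):

```python
MAGNIFICATION = 1.08  # enlargement factor of the cephalograph used in the study

def film_to_true_mm(film_mm, magnification=MAGNIFICATION):
    """Convert a distance measured on the radiograph (or on its calibrated
    photograph) to true anatomical size."""
    return film_mm / magnification

# The 108 mm ruler distance on the film maps to 100 mm in the software:
print(round(film_to_true_mm(108), 6))  # -> 100.0
```

Calibrating the software against the two ruler points applies exactly this correction to every subsequent linear measurement; angular measurements are unaffected by uniform magnification.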
Error is inherent in the landmark identification process and is known to vary with the nature, clarity and definition of the landmarks [9]. Hand measuring was therefore abandoned in this study. Instead, once the digitizing procedures were finished for the 3 sets of acetate tracings and the 3 sets of digital photographs, the software generated 6 sets of linear and angular measurements, which were exported and used to assess the reproducibility and validity of digital photographs of headfilms (Table 1). Since the measurements were generated automatically by the software, no measuring errors were introduced in this part of the study. Data were imported and statistical analysis was performed with SPSS 15.0.

Assessment of reproducibility. Bland and Altman's (1999) approach was used for the statistical analysis of reproducibility, with the coefficient of repeatability of every measurement for the two methods taken as

R = (1/n) Σᵢ₌₁ⁿ √2·sᵢ,

where sᵢ is the standard deviation of the repeated measurements for specimen i [24]. Measurements were ranked as reproducible if both the R coefficient and the standard deviation (SD) of differences from the average values were less than 1 unit of measurement.

Stomatologija, Baltic Dental and Maxillofacial Journal, 2007, Vol. 9, No. 4. SCIENTIFIC ARTICLES. S. Grybauskas et al.

One "unit of measurement" in this study was the equivalent of one millimeter, one degree or one percent. It was used as a substitute to avoid repeating bulky explanations of reproducibility for linear, angular and derivative measurements. The estimated reproducibility was classified into four groups: ultra high reproducibility (R value and SD of differences smaller than 0.5 units), high reproducibility (R and SD of differences greater than 0.5 units but smaller than 1 unit), moderate reproducibility (R value and SD of differences between 1 and 2 units), and poor reproducibility (R value and SD of differences greater than 2 units).
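A minimal sketch of the repeatability computation and the four-level grading, assuming (from the garbled formula) that R is the mean over the n specimens of √2·sᵢ, with sᵢ the SD of a specimen's repeated measurements; the sample values below are invented, not study data:

```python
import math
import statistics

def repeatability(repeats):
    """repeats: per-specimen repeated measurements, e.g. [[a, b, c], ...].

    Assumed reading of the study's formula: mean over specimens of
    sqrt(2) * s_i, sqrt(2)*s_i being the SD of the difference between
    two single measurements of specimen i.
    """
    n = len(repeats)
    return sum(math.sqrt(2) * statistics.stdev(t) for t in repeats) / n

def classify(r_value, sd_of_differences):
    """Four-level reproducibility grading used in the study (units: mm, deg, %)."""
    worst = max(r_value, sd_of_differences)
    if worst < 0.5:
        return "ultra high"
    if worst < 1.0:
        return "high"
    if worst <= 2.0:
        return "moderate"
    return "poor"

# Invented triplicate measurements (degrees) for three specimens.
measurements = [[82.0, 82.3, 81.9], [77.5, 77.4, 77.8], [80.1, 80.0, 80.2]]
r = repeatability(measurements)
print(classify(r, r))  # -> ultra high
```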
The mean difference is twice the mean of differences from the average; therefore, the limit of 2 units for the R value was considered to be the range of clinical acceptance. All differences were taken as absolute numbers in this study.

Assessment of validity. Validity was rated as acceptable or non-acceptable. The validity of measurements obtained from digital photographs was considered acceptable provided both of the following conditions were met. First, paired t-test analysis revealed no intergroup difference between measurements obtained from digital photographs and those obtained from acetate tracings that was both statistically significant (P<0.05) and clinically significant (mean and SD of differences greater than 2 units). Second, linear regression analysis showed a strong correlation between the methods: the intraclass correlation coefficient

r = var(skull) / (var(skull) + var(method) + var(error)) > 0.75,

the standardized beta coefficient was >0.7 with the confidence interval for beta containing the value 1, and there was no systematic offset in values, the confidence interval for alpha containing the value 0. The use of linear regression was essential in testing the agreement between two series of paired measurements, which can show few statistically significant differences between means and nevertheless have poor agreement [25]. Acetate tracing was the independent method and digital photography the dependent method in the linear regression model.

RESULTS

Reproducibility of measurements obtained from acetate tracings

Fifteen (23.81%) of the 63 measurements used in lateral cephalometry were highly reproducible, with the standard deviation (SD) of differences being less than 0.5 unit (one unit equals one millimeter, one degree or one percent). Eight of them (12.70%) were characterized by an ultra small R coefficient (<0.5 unit) and 7 measurements by a small R value (0.5-1 unit).
Thirty-two (50.79%) measurements fell into the moderate level of R value and SD of differences (1-2 units), and 8 measurements demonstrated an SD lower than one unit with R exceeding one unit. Nine (14.29%) parameters demonstrated both R and SD of differences beyond 2 units of measurement.

Reproducibility of measurements obtained from digital photographs of headfilms

Eleven (17.46%) of the 63 measurements used in the digital photography group were characterized by ultra high reproducibility, with both R value and SD of differences smaller than 0.5 unit. Twenty-seven (42.86%) of the 63 measurements showed high reproducibility, with both R coefficient and SD of differences smaller than 1 unit, and 8 more measurements demonstrated an SD lower than 1 unit with R values exceeding 1 unit. Seventeen (26.98%) of the 63 measurements showed R values greater than 2 units, and four (6.35%) of them were also characterized by an SD of differences greater than 2 units. The characteristics of the least reproducible measurements are presented in Table 2.

Validity of measurements obtained from digital photographs of lateral headfilms

Validity was acceptable for all measurements except LI/Occ, S-Go, UFH/TFH and N-ANS (Table 3). There was a high correlation between the methods for 59 of the 63 measurements: the linear regression model showed an intraclass correlation coefficient r>0.8, a standardized coefficient Beta>0.9, and confidence intervals for Alpha and Beta containing the values 0 and 1, respectively. Non-acceptable validity was determined for 4 measurements: LI/Occ, S-Go, UFH/TFH and N-ANS. In 60 of the 63 lateral cephalometric measurements the differences between the two methods were less than 0.5 units, and less than 1 unit in the remaining three. There were no statistically significant differences between measurements obtained from digital photographs of lateral headfilms and the corresponding acetate cephalometric tracings in 49 measurements.
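The validity screen described in the methods, a paired t-test plus regression of the digital-photograph values on the acetate values, can be sketched in plain Python. The study used SPSS; the data, names and thresholds below are illustrative only:

```python
import math
import statistics

def paired_t(x, y):
    """Paired t statistic for two series of matched measurements."""
    d = [a - b for a, b in zip(x, y)]
    n = len(d)
    return statistics.mean(d) / (statistics.stdev(d) / math.sqrt(n))

def ols(x, y):
    """Ordinary least squares y = alpha + beta*x, plus Pearson r."""
    mx, my = statistics.mean(x), statistics.mean(y)
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    beta = sxy / sxx
    alpha = my - beta * mx
    r = sxy / math.sqrt(sxx * syy)
    return alpha, beta, r

# Invented paired readings: acetate (independent) vs digital photo (dependent).
acetate = [80.1, 77.4, 82.0, 79.3, 81.2, 78.8]
digital = [80.0, 77.5, 82.1, 79.2, 81.3, 78.9]
alpha, beta, r = ols(acetate, digital)

# Simplified version of the study's acceptance rule: strong correlation,
# slope near 1 (no proportional bias), intercept near 0 (no offset).
print(r > 0.75 and abs(beta - 1) < 0.1 and abs(alpha) < 2)  # -> True
```

A full replication would also check the confidence intervals of alpha and beta against 0 and 1, as the study does.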
A list of the measurements for which paired t-test analysis and linear regression analysis showed statistically significant differences or poor correlation between the two methods is presented in Table 3.

Table 1. Linear, angular and derivative measurements used for cephalometric analysis

Cranial base dimensions
Linear measurements:
S-N | Anterior cranial base length
S-Ar | Distance between sella and articulare
S-Ba | Posterior cranial base length
Ba-N | Total cranial base length
Angular measurements:
N/S/Ar | Angle between the S-N and S-Ar lines
N/S/Ba | Cranial base saddle angle between S-N and S-Ba
SN/FH | Angle determined by S-N and the Frankfort horizontal (FH) plane

Facial height
Linear measurements:
ANS-N | UAFH, upper anterior facial height
ANS-Me | LAFH, lower anterior facial height
N-Me | TAFH, total anterior facial height
LAFH/TAFH | Ratio of lower anterior face height to total anterior face height
UAFH/LAFH | Ratio of upper facial height to lower facial height
S-Go | TPFH, total posterior face height
S-PNS | Posterior midfacial height
Ar-Go | Lower posterior facial height
Jarabak ratio | Ratio of total posterior to total anterior face height

Vertical relationship
Angular measurements:
SN/PP | Angle determined by SN and the Palatal plane
SN/MP | Angle determined by SN and the Mandibular plane
SN/OP | Angle determined by SN and the Occlusal plane
FH/PP | Angle between FH and the Palatal plane
FH/MP | Angle determined by FH and the Mandibular plane
FH/OP | Angle determined by FH and the Occlusal plane
MP/PP | Angle determined by the Mandibular and Palatal planes
PP/OP | Angle determined by the Palatal and Occlusal planes

Relationship of the maxilla to the cranial base
Linear measurements:
A-Nv | Distance from point A to the Nv line
Co-A | Distance from Condylion to point A
Ar-A | Distance from Articulare to point A
Ba-A | Basialveolar length
A-NPog | Distance from point A to the facial plane line
Angular measurements:
S/N/A | Angle determined by the S-N and N-A lines
NA/FH | Angle determined by the N-A line and the FH plane

Relationship of the mandible to the cranial base
Linear measurements:
B-Nv | Distance from point B to the Nv line
Angular measurements:
S/N/B | Angle determined by the S-N and N-B lines
S/N/Pog | Facial angle determined between the S-N and facial plane lines
FH/NPog | Facial angle determined between the FH plane and the facial plane line
N/S/Gn | Y-axis, the angle determined by the S-N and S-Gn lines
S/Ar/Go | Articulare angle, determined by the S-Ar and Ar-Go lines

Relationship of the maxilla to the mandible
Linear measurements:
Wits appraisal | Distance between the projections of points A and B onto the occlusal plane
Angular measurements:
A/N/B | Angle determined by the N-A and N-B lines
N/A/Pog | Convexity angle, determined by the N-A and A-Pog lines
A/Ar/Gn | Angle 1 of the A-Ar-Gn triangle
A/Gn/Ar | Angle 2 of the A-Ar-Gn triangle
Ar/A/Gn | Angle 3 of the A-Ar-Gn triangle

Relationship of the maxillary dentition to the maxilla and the cranial base
Linear measurements:
UIE-NA | Distance from the upper incisor tip to the N-A line
UIE-APog | Distance from the upper incisor tip to the A-Pog line
Angular measurements:
U1/NA | Angle determined by the maxillary incisor axis and the N-A line
U1/FH | Angle determined by the maxillary incisor axis and the FH plane
U1/SN | Angle determined by the maxillary incisor axis and the SN line
U1/PP | Angle determined by the maxillary incisor axis and the Palatal plane
U1/OP | Angle determined by the maxillary incisor axis and the Occlusal plane

Relationship of the mandibular dentition to the mandible and the cranial base
Linear measurements:
LIE-APog | Distance from the lower incisor tip to the A-Pog line
LIE-NB | Distance from the lower incisor tip to the N-B line
Angular measurements:
LI/MP | Angle determined by the mandibular incisor axis and the Mandibular plane
LI/NB | Angle determined by the mandibular incisor axis and the N-B line
LI/OP | Angle determined by the mandibular incisor axis and the Occlusal plane

Relationship of the maxillary dentition to the mandibular dentition
Angular measurements:
UI/LI | Angle determined by the maxillary incisor axis and the mandibular incisor axis

Maxillary or palatal dimensions
Linear measurements:
ANS-PNS | Palatal length, distance from the Anterior nasal spine to the Posterior nasal spine
A-PNS | Distance from the posterior nasal spine to point A

Mandibular length
Linear measurements:
Go-Gn | Length of the mandibular corpus, distance between the Gonion and Gnathion points
Go-Co | Ramus height, distance between the Gonion and Condylion points
Co-Gn | Length of the mandibular base, distance between the Condylion and Gnathion points
(Co-Gn)-(Co-A) | Maxillo-mandibular length difference, the difference between the Co-Gn and Co-A values
Angular measurements:
Co/Go/Gn | Gonial angle, determined by the Go-Co and Go-Gn lines

DISCUSSION

In the digital photography group the worst reproducibility was seen for U1/FH, U1/L1, B-Nv and the articular angle (S-Ar-Go), followed by FH/OP, FH/NPog, UI/SN, UI/FH and UI/OP. In the acetate tracing group poor reproducibility was determined for the measurements U1/SN, U1/PP, U1/OP, U1/NA, U1/FH, L1/OP, L1/NB, L1/GoGn and U1/L1. Evidently, the majority of these measurements depend on landmarks and references of the incisor teeth, or on poorly defined outlines and low-contrast areas such as Articulare, Gonion, PNS and Porion. Our data agree with the results reported by Chen et al (2000), who stated that the least reliable landmarks are those located on curved anatomical boundaries or on the axes of teeth, resulting in the greatest inaccuracies in the measurements U1/SN, U1/L1, L1/OP and L1/MP [9]. Our data are also in line with Baumrind and Frantz (1971a), who described "errors in identification" as being specific to different landmarks and arising from the inability to locate anatomical landmarks [26].
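As an aside on the definitions in Table 1: each angular measurement is computed from digitized landmark coordinates as the angle at a vertex landmark between two rays. A minimal sketch, with invented coordinates that are not study data:

```python
import math

def angle_deg(vertex, p1, p2):
    """Angle in degrees at `vertex` between the rays vertex->p1 and vertex->p2."""
    a1 = math.atan2(p1[1] - vertex[1], p1[0] - vertex[0])
    a2 = math.atan2(p2[1] - vertex[1], p2[0] - vertex[0])
    ang = abs(math.degrees(a1 - a2))
    # Keep the geometric (non-reflex) angle.
    return 360.0 - ang if ang > 180.0 else ang

# Hypothetical digitized positions of Sella, Nasion and point A (mm).
S, N, A = (10.0, 80.0), (75.0, 85.0), (78.0, 45.0)
print(round(angle_deg(N, S, A), 1))  # S/N/A: angle at nasion between N-S and N-A
```

The linear measurements in Table 1 reduce in the same way to Euclidean distances between landmark pairs.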
The definition was later expanded by Vincent and West (1987), who classified errors of identification as caused by: poor outline of the curvature of the line on which the landmark is positioned; the contrast of the area; noise and superimposition of other structures; and poor definition of the landmark [6].

In the acetate tracing group all nine measurements with poor reproducibility were angular, as were 3 of the 4 in the digital photography group. Angular measurements showed worse reproducibility than linear measurements, in line with the studies conducted by Baumrind and Frantz and by Sayinsu et al [27,10].

The comparative analysis showed few statistically significant differences between the methods, and all of them were clinically insignificant, with mean and SD of differences smaller than 0.5 unit, thus substantiating the use of digital photography and the tracing of digital photographs in orthodontic practice. According to the linear regression model, the validity of measurements obtained from digital photographs was acceptable: r>0.8, standardized beta coefficient >0.9, and confidence intervals for alpha and beta containing the values 0 and 1, respectively (p<0.05), for the majority of measurements (the poor correlation between the groups for 4 measurements needs further investigation). This agrees with the results of Chen et al (2004) and Schulze et al (2002), who stated that although statistically significant differences between digitized and analogue measurements existed, they were clinically insignificant [28,29].

Table 2. Characteristics of the least reproducible measurements obtained from digitized acetate tracings and digital photographs of headfilms. For each group: R value, SD of differences, and 95% confidence interval of R.

Measurement | Acetate: R | SD | 95% CI | Digital: R | SD | 95% CI
Articular Angle S-Ar-Go (º) | 1.91 | 1.52 | -1.07 to 4.88 | 3.39 | 2.28 | -1.07 to 7.85
B-Nv (mm) | 0.92 | 0.75 | -0.56 to 2.40 | 3.37 | 2.51 | -1.54 to 8.28
Facial Angle (FH-NPog) (º) | 0.59 | 0.49 | -0.37 to 1.56 | 2.09 | 1.56 | -0.96 to 5.14
FH/MP (º) | 1.13 | 0.86 | -0.57 to 2.82 | 2.25 | 1.59 | -0.87 to 5.38
FH/OP (º) | 1.44 | 1.04 | -0.61 to 3.49 | 2.57 | 1.87 | -1.09 to 6.23
FH/PP (º) | 0.78 | 0.61 | -0.42 to 1.98 | 2.15 | 1.59 | -0.97 to 5.26
Interincisal Angle (UI/LI) (º) | 4.40 | 4.22 | -3.87 to 12.67 | 3.11 | 2.19 | -1.17 to 7.40
LI/GoGn (º) | 2.96 | 2.50 | -1.94 to 7.86 | 2.12 | 1.45 | -0.73 to 4.97
LI/NB (º) | 2.65 | 2.43 | -2.11 to 7.40 | 1.99 | 1.36 | -0.68 to 4.67
LI/Occ Plane (º) | 2.49 | 2.25 | -1.92 to 6.89 | 2.08 | 1.54 | -0.94 to 5.10
Lower Posterior Facial Height Ratio (Ar-Go/S-Go x 100) (%) | 1.63 | 1.35 | -1.03 to 4.28 | 2.52 | 1.87 | -1.15 to 6.19
Mandibular Body Length (Go-Gn) (mm) | 2.04 | 1.65 | -1.18 to 5.26 | 2.67 | 1.85 | -0.97 to 6.30
Maxillary Depth FH/NA (º) | 0.83 | 0.65 | -0.45 to 2.12 | 2.05 | 1.54 | -0.98 to 5.07
U1/FH (º) | 3.32 | 2.69 | -1.94 to 8.58 | 3.07 | 2.56 | -1.94 to 8.09
U1/NA (º) | 3.03 | 2.65 | -2.16 to 8.22 | 2.27 | 1.70 | -1.05 to 5.60
U1/Occ Plane (º) | 2.71 | 2.43 | -2.05 to 7.48 | 2.28 | 1.57 | -0.80 to 5.36
U1/Palatal Plane (º) | 3.41 | 2.68 | -1.85 to 8.66 | 2.47 | 1.71 | -0.87 to 5.82
U1/SN (º) | 3.20 | 2.70 | -2.08 to 8.48 | 2.15 | 1.59 | -0.98 to 5.27
It also agrees with the study conducted by Macri and Wenzel (1993), who stated that for radiographs of good quality it is possible to achieve a reliability of digital images comparable to that obtained with conventional equipment [2]. Collins et al (2007) compared measurements from photographed lateral cephalograms and scanned cephalograms and, using Dolphin software, found statistically significant differences in linear measurements [30]. Although digitization of acetate tracings rather than scanning was used in our study, 11 of the 15 measurements shown to have statistically significant differences were linear, suggesting a need for a more thorough investigation of magnification factors in computer-aided cephalometry.

Table 3. Intergroup comparison of measurements obtained from digital photographs of analogue cephalograms and the corresponding acetate tracings. For each measurement: mean and SD of the intergroup difference, paired t and its two-tailed significance, then the regression of digital-photograph (dependent) on acetate (independent) values: intercept Alpha (significance; 95% CI) and slope Beta as unstandardized and standardized coefficients (significance; 95% CI).

Measurement | Mean diff | SD diff | t | Sig. (2-tailed) | Alpha (Sig.; 95% CI) | Beta, std Beta (Sig.; 95% CI)
A-Gn-Ar (Angle 3) (°) | -0.09629 | 0.153659 | 2.516 | 0.036 | -1.159 (0.171; -2.955 to 0.637) | 1.023, 0.999 (0; 0.99 to 1.055)
Anterior Cranial Base (S-N) (mm) | 0.31524 | 0.193747 | -7.246 | 0 | 0.631 (0.298; -0.697 to 1.958) | 0.986, 1 (0; 0.966 to 1.006)
Ba-A (mm) | 0.4199 | 0.187261 | -7.536 | 0 | -0.491 (0.664; -3.053 to 2.07) | 1.001, 1 (0; 0.972 to 1.029)
L1-Occ Plane (°) | -0.18518 | 0.699007 | 0.828 | 0.432 | -3.478 (0.01; -5.846 to -1.11) | 1.052, 0.999 (0; 1.019 to 1.086)
Lower Facial Height (ANS-Me) (mm) | 0.20422 | 0.292465 | -3.118 | 0.014 | 0.222 (0.867; -2.803 to 3.247) | 0.993, 0.999 (0; 0.947 to 1.04)
Mandibular length (Co-Gn) (mm) | 0.50009 | 0.595312 | -5.551 | 0.001 | -0.304 (0.897; -5.638 to 5.03) | 0.998, 0.999 (0; 0.953 to 1.044)
Midfacial length Co-A (mm) | 0.40611 | 0.463005 | -7.176 | 0 | -0.408 (0.711; -2.903 to 2.087) | 1, 0.999 (0; 0.971 to 1.029)
Mx/Md diff (Co-Gn - Co-A) (mm) | 0.22033 | 0.259186 | -5.897 | 0 | -0.338 (0.39; -1.21 to 0.535) | 1.005, 0.999 (0; 0.969 to 1.041)
N-Ba (mm) | 0.46236 | 0.509173 | -10.128 | 0 | 0.255 (0.757; -1.614 to 2.123) | 0.993, 1 (0; 0.974 to 1.012)
PNS-A (mm) | 0.19733 | 0.274228 | -3.341 | 0.01 | 0.63 (0.539; -1.679 to 2.94) | 0.983, 0.999 (0; 0.935 to 1.031)
Posterior Cranial Base (S-Ar) (mm) | 0.10776 | 0.157548 | -2.981 | 0.018 | -0.02877 (0.943; -0.944 to 0.886) | 0.998, 1 (0; 0.972 to 1.023)
Posterior Cranial Base (S-Ba) (mm) | 0.22644 | 0.255059 | -7.913 | 0 | -0.859 (0.065; -1.786 to 0.068) | 1.015, 1 (0; 0.993 to 1.038)
Posterior Face Height (S-Go) (mm) | 0.27545 | 0.695934 | -1.308 | 0.227 | 4.889 (0.02; 1.044 to 8.734) | 0.934, 0.998 (0; 0.885 to 0.983)
Saddle/Sella Angle (SN-Ar) (°) | -0.18889 | 0.271058 | 3.104 | 0.015 | -1.045 (0.544; -4.92 to 2.83) | 1.01, 0.999 (0; 0.979 to 1.04)
SN-MP (°) | 0.24074 | 0.436686 | -2.494 | 0.037 | 0.333 (0.468; -0.694 to 1.36) | 0.979, 0.999 (0; 0.943 to 1.016)
Total Face Height (N-Me) (mm) | 0.46666 | 0.548392 | -5.93 | 0 | 1.395 (0.385; -2.168 to 4.957) | 0.983, 0.999 (0; 0.952 to 1.015)
UFH/TFH (N-ANS:N-Me) (°) | 0.05556 | 0.15411 | -1.17 | 0.276 | 2.127 (0.018; 0.488 to 3.767) | 0.949, 0.999 (0; 0.911 to 0.987)
Upper Face Height (N-ANS) (mm) | 0.26003 | 0.350888 | -3.596 | 0.007 | 2.012 (0.009; 0.671 to 3.353) | 0.953, 0.999 (0; 0.925 to 0.981)

Received: 16 09 2007
Accepted for publishing: 21 12 2007

REFERENCES
1. Chen SK, Chen YJ, Yao CC, Chang HF. Enhanced speed and precision of measurement in a computer-assisted digital cephalometric analysis system. Angle Orthod 2004;74:501-7.
2. Macri V, Wenzel A. Reliability of landmark recording on film and digital lateral cephalograms. Eur J Orthod 1993;15:137-48.
3. Power G, Breckon J, Sherriff M, McDonald F. Dolphin Imaging Software: an analysis of the accuracy of cephalometric digitization and orthognathic prediction. Int J Oral Maxillofac Surg 2005;34:619-26.
4. Ongkosuwito EM, Katsaros C, Bodegom JC, Kuijpers-Jagtman AM. Digital cephalometrics. Ned Tijdschr Tandheelkd 2004;111:266-70.
5. Forsyth DB, Davis DN. Assessment of an automated cephalometric analysis system. Eur J Orthod 1996;18:471-8.
6. Vincent AM, West VC. Cephalometric landmark identification error. Austral Orthod J 1987;10:98-104.
7. Bassignani MJ, Bubash-Faust L, Ciambotti J, Moran R, McIlhenny J. Conversion of teaching file cases from film to digital format: a comparison between use of a diagnostic-quality digitizer and use of a flatbed scanner with transparency adapter. Acad Radiol 2003;10:536-42.
8. Bruntz LQ, Palomo JM, Baden S, Hans MG. A comparison of scanned lateral cephalograms with corresponding original radiographs. Am J Orthod Dentofacial Orthop 2006;130:340-8.
9. Chen YJ, Chen SK, Chang HF, Chen KC. Comparison of landmark identification in traditional versus computer-aided digital cephalometry. Angle Orthod 2000;70:387-92.
10. Sayinsu K, Isik F, Trakyali G, Arun T.
An evaluation of the errors in cephalometric measurements on scanned cephalometric images and conventional tracings. Eur J Orthod 2007;29:105-8.
11. Turner PJ, Weerakone S. An evaluation of the reproducibility of landmark identification using scanned cephalometric images. J Orthod 2001;28:221-9.
12. Nimkarn Y, Miles PG. Reliability of computer-generated cephalometrics. Int J Adult Orthod Orthogn Surg 1995;10:43-52.
13. Davis DN, MacKay F. Reliability of cephalometric analysis using manual and interactive computer methods. Br J Orthod 1991;18:105-9.
14. Geelen W, Wenzel A, Gotfredsen E, Kruger M, Hansson L-G. Reproducibility of cephalometric landmarks in conventional film, hardcopy and monitor-displayed images obtained by the storage phosphor technique. Eur J Orthod 1998;20:331-40.
15. Houston WJ. The application of computer aided digital analysis to orthodontic records. Eur J Orthod 1979;1:71-9.
16. Lim KF, Foong KW. Phosphor-stimulated computed cephalometry: reliability of landmark identification. Br J Orthod 1997;24:301-8.
17. Liu J-K, Chen YC, Cheng KS. Accuracy of computerized automatic identification of cephalometric landmarks. Am J Orthod Dentofac Orthop 2000;118:535-40.
18. Oliver RG. Cephalometric analysis comparing five different methods. Br J Orthod 1990;27:277-83.
19. Richardson A. A comparison of traditional and computerized methods of cephalometric analysis. Eur J Orthod 1981;3:15-20.
20. Rudolph DJ, Sinclair PM, Coggins JM. Automatic computerized radiographic identification of cephalometric landmarks. Am J Orthod Dentofac Orthop 1989;113:173-9.
21. Stirrups DR. A comparison of the accuracy of cephalometric landmark location between two screen/film combinations. Angle Orthod 1989;59:211-5.
22. Cohen JM. Comparing digital and conventional cephalometric radiographs. Am J Orthod Dentofacial Orthop 2005;128:157-60.
23. Santoro M, Jarjoura K, Cangialosi TJ.
Accuracy of digital and analogue cephalometric measurements assessed with the sandwich technique. Am J Orthod Dentofacial Orthop 2006;129:345-51.
24. Bland JM, Altman DG. Measuring agreement in method comparison studies. Stat Methods Med Res 1999;8:135-60.
25. Stöckl D, Dewitte K, Thienpont LM. Validity of linear regression in method comparison studies: is it limited by the statistical model or the quality of the analytical input data? Clin Chem 1998;44:2340-6.
26. Baumrind S, Frantz RC. The reliability of head film measurements. Landmark identification. Am J Orthod 1971a;60:111-27.
27. Baumrind S, Frantz RC. The reliability of head film measurements. Conventional angular and linear measures. Am J Orthod 1971b;60:505-17.
28. Chen YJ, Chen SK, Yao JC, Chang HF. The effects of differences in landmark identification on the cephalometric measurements in traditional versus digitized cephalometry. Angle Orthod 2004;74:155-61.
29. Schulze RK, Gloede MB, Doll GM. Landmark identification on direct digital versus film-based cephalometric radiographs: a human skull study. Am J Orthod Dentofacial Orthop 2002;122:635-42.
30. Collins J, Shah A, McCarthy C, Sandler J. Comparison of measurements from photographed lateral cephalograms and scanned cephalograms. Am J Orthod Dentofacial Orthop 2007;132:830-3.

CONCLUSIONS

1. Measurements obtained from both acetate tracings and digital photographs of analogue cephalograms showed adequate reproducibility, with both R coefficients and SDs of differences smaller than 2 units of measurement. Nine measurements in the acetate tracing group and seventeen in the digital photography group exceeded this limit and were shown to be less reproducible.
2. The majority of poorly reproducible measurements were angular or associated with the least reproducible landmarks and references.
3.
The validity of 59 of the 63 lateral measurements obtained from digital photographs was acceptable, substantiating the use of digital photography for headfilm capture, digital tracing and computer-aided cephalometric analysis.
of Human Resources Development and ManagementInternational Journal of Human Rights and Constitutional StudiesInternational Journal of Humanitarian TechnologyInternational Journal of Hybrid IntelligenceInternational Journal of Hydrology Science and TechnologyInternational Journal of HydromechatronicsInternational Journal of Image MiningInternational Journal of Immunological StudiesInternational Journal of Indian Culture and Business ManagementInternational Journal of Industrial and Systems EngineeringInternational Journal of Industrial Electronics and DrivesInternational Journal of Information and Coding TheoryInternational Journal of Information and Communication TechnologyInternational Journal of Information and Computer SecurityInternational Journal of Information and Decision SciencesInternational Journal of Information and Operations Management EducationInternational Journal of Information Privacy, Security and IntegrityInternational Journal of Information QualityInternational Journal of Information Systems and Change ManagementInternational Journal of Information Systems and ManagementInternational Journal of Information Technology and ManagementInternational Journal of Information Technology, Communications and ConvergenceInternational Journal of Innovation and LearningInternational Journal of Innovation and Regional DevelopmentInternational Journal of Innovation and Sustainable DevelopmentInternational Journal of Innovation in EducationInternational Journal of Innovative Computing and ApplicationsInternational Journal of Instrumentation TechnologyInternational Journal of Integrated Supply ManagementInternational Journal of Intellectual Property ManagementInternational Journal of Intelligence and Sustainable ComputingInternational Journal of Intelligent Defence Support SystemsInternational Journal of Intelligent Engineering InformaticsInternational Journal of Intelligent EnterpriseInternational Journal of Intelligent Information and Database 
SystemsInternational Journal of Intelligent Internet of Things ComputingInternational Journal of Intelligent Machines and RoboticsInternational Journal of Intelligent Systems Design and ComputingInternational Journal of Intelligent Systems Technologies and ApplicationsInternational Journal of Intercultural Information ManagementInternational Journal of Internet and Enterprise ManagementInternational Journal of Internet Manufacturing and ServicesInternational Journal of Internet Marketing and AdvertisingInternational Journal of Internet of Things and Cyber-AssuranceInternational Journal of Internet Protocol TechnologyInternational Journal of Internet Technology and Secured TransactionsInternational Journal of Inventory ResearchInternational Journal of Islamic Marketing and BrandingInternational Journal of Knowledge and LearningInternational Journal of Knowledge and Web IntelligenceInternational Journal of Knowledge Engineering and Data MiningInternational Journal of Knowledge Engineering and Soft Data ParadigmsInternational Journal of Knowledge Management in Tourism and HospitalityInternational Journal of Knowledge Management StudiesInternational Journal of Knowledge Science and EngineeringInternational Journal of Knowledge-Based DevelopmentInternational Journal of Lean Enterprise ResearchInternational Journal of Learning and ChangeInternational Journal of Learning and Intellectual CapitalInternational Journal of Learning TechnologyInternational Journal of Legal Information DesignInternational Journal of Leisure and Tourism MarketingInternational Journal of Liability and Scientific EnquiryInternational Journal of Lifecycle Performance EngineeringInternational Journal of Logistics Economics and GlobalisationInternational Journal of Logistics Systems and ManagementInternational Journal of Low RadiationInternational Journal of Machine Intelligence and Sensory Signal ProcessingInternational Journal of Machining and Machinability of MaterialsInternational Journal of 
Management and Decision MakingInternational Journal of Management and Enterprise DevelopmentInternational Journal of Management and Network EconomicsInternational Journal of Management Concepts and PhilosophyInternational Journal of Management DevelopmentInternational Journal of Management in EducationInternational Journal of Management PracticeInternational Journal of Managerial and Financial AccountingInternational Journal of Manufacturing ResearchInternational Journal of Manufacturing Technology and ManagementInternational Journal of Masonry Research and InnovationInternational Journal of Markets and Business SystemsInternational Journal of Mass CustomisationInternational Journal of Materials and Product TechnologyInternational Journal of Materials and Structural IntegrityInternational Journal of Materials Engineering InnovationInternational Journal of Mathematical Modelling and Numerical OptimisationInternational Journal of Mathematics in Operational ResearchInternational Journal of Mechanisms and Robotic SystemsInternational Journal of Mechatronics and AutomationInternational Journal of Mechatronics and Manufacturing SystemsInternational Journal of Medical Engineering and InformaticsInternational Journal of Metadata, Semantics and OntologiesInternational Journal of MetaheuristicsInternational Journal of Microstructure and Materials PropertiesInternational Journal of Migration and Border StudiesInternational Journal of Migration and Residential MobilityInternational Journal of Mining and Mineral EngineeringInternational Journal of Mobile CommunicationsInternational Journal of Mobile Learning and OrganisationInternational Journal of Mobile Network Design and InnovationInternational Journal of Modelling in Operations ManagementInternational Journal of Modelling, Identification and ControlInternational Journal of Molecular EngineeringInternational Journal of Monetary Economics and FinanceInternational Journal of Multicriteria Decision MakingInternational Journal 
of Multimedia Intelligence and SecurityInternational Journal of Multinational Corporation StrategyInternational Journal of Multivariate Data AnalysisInternational Journal of Nano and BiomaterialsInternational Journal of NanomanufacturingInternational Journal of NanoparticlesInternational Journal of NanotechnologyInternational Journal of Network ScienceInternational Journal of Networking and SecurityInternational Journal of Networking and Virtual OrganisationsInternational Journal of Nonlinear Dynamics and ControlInternational Journal of Nuclear DesalinationInternational Journal of Nuclear Energy Science and TechnologyInternational Journal of Nuclear Governance, Economy and EcologyInternational Journal of Nuclear Hydrogen Production and ApplicationsInternational Journal of Nuclear Knowledge ManagementInternational Journal of Nuclear LawInternational Journal of Nuclear Safety and SecurityInternational Journal of Ocean Systems ManagementInternational Journal of Oil, Gas and Coal TechnologyInternational Journal of Operational ResearchInternational Journal of Organisational Design and EngineeringInternational Journal of Petroleum EngineeringInternational Journal of Physiotherapy and Life PhysicsInternational Journal of Planning and SchedulingInternational Journal of Pluralism and Economics EducationInternational Journal of Portfolio Analysis and ManagementInternational Journal of Postharvest Technology and InnovationInternational Journal of Power and Energy ConversionInternational Journal of Power ElectronicsInternational Journal of PowertrainsInternational Journal of Precision TechnologyInternational Journal of Private LawInternational Journal of Process Management and BenchmarkingInternational Journal of Process Systems EngineeringInternational Journal of Procurement ManagementInternational Journal of Product DevelopmentInternational Journal of Product Lifecycle ManagementInternational Journal of Product Sound QualityInternational Journal of Productivity and Quality 
ManagementInternational Journal of Project Organisation and ManagementInternational Journal of Public Law and PolicyInternational Journal of Public PolicyInternational Journal of Public Sector Performance ManagementInternational Journal of Qualitative Information Systems ResearchInternational Journal of Qualitative Research in ServicesInternational Journal of Quality and InnovationInternational Journal of Quality Engineering and TechnologyInternational Journal of Quantitative Research in EducationInternational Journal of Radio Frequency Identification Technology and ApplicationsInternational Journal of Rapid ManufacturingInternational Journal of Reasoning-based Intelligent SystemsInternational Journal of Reliability and SafetyInternational Journal of RemanufacturingInternational Journal of Renewable Energy TechnologyInternational Journal of Research, Innovation and CommercialisationInternational Journal of Responsible Management in Emerging EconomiesInternational Journal of Revenue ManagementInternational Journal of Risk Assessment and ManagementInternational Journal of Satellite Communications Policy and ManagementInternational Journal of Security and NetworksInternational Journal of Semantic and Infrastructure ServicesInternational Journal of Sensor NetworksInternational Journal of Service and Computing Oriented ManufacturingInternational Journal of Services and Operations ManagementInternational Journal of Services and StandardsInternational Journal of Services Operations and InformaticsInternational Journal of Services SciencesInternational Journal of Services Technology and ManagementInternational Journal of Services, Economics and ManagementInternational Journal of Shipping and Transport LogisticsInternational Journal of Signal and Imaging Systems EngineeringInternational Journal of Simulation and Process ModellingInternational Journal of Six Sigma and Competitive AdvantageInternational Journal of Smart Grid and Green CommunicationsInternational Journal of 
Smart Technology and LearningInternational Journal of Social and Humanistic ComputingInternational Journal of Social Computing and Cyber-Physical SystemsInternational Journal of Social Entrepreneurship and InnovationInternational Journal of Social Media and Interactive Learning EnvironmentsInternational Journal of Social Network MiningInternational Journal of Society Systems ScienceInternational Journal of Soft Computing and NetworkingInternational Journal of Software Engineering, Technology and ApplicationsInternational Journal of Space Science and EngineeringInternational Journal of Space-Based and Situated ComputingInternational Journal of Spatial, Temporal and Multimedia Information SystemsInternational Journal of Spatio-Temporal Data ScienceInternational Journal of Sport Management and MarketingInternational Journal of Strategic Business AlliancesInternational Journal of Strategic Change ManagementInternational Journal of Strategic Engineering Asset ManagementInternational Journal of Structural EngineeringInternational Journal of Student Project ReportingInternational Journal of Supply Chain and Inventory ManagementInternational Journal of Supply Chain and Operations ResilienceInternational Journal of Surface Science and EngineeringInternational Journal of Sustainable Agricultural Management and InformaticsInternational Journal of Sustainable AviationInternational Journal of Sustainable DesignInternational Journal of Sustainable DevelopmentInternational Journal of Sustainable EconomyInternational Journal of Sustainable ManufacturingInternational Journal of Sustainable Materials and Structural SystemsInternational Journal of Sustainable Real Estate and Construction EconomicsInternational Journal of Sustainable SocietyInternational Journal of Sustainable Strategic ManagementInternational Journal of Swarm IntelligenceInternational Journal of System Control and Information ProcessingInternational Journal of System of Systems EngineeringInternational Journal of 
Systems, Control and CommunicationsInternational Journal of Teaching and Case StudiesInternational Journal of TechnoentrepreneurshipInternational Journal of Technological Learning, Innovation and DevelopmentInternational Journal of Technology and GlobalisationInternational Journal of Technology Enhanced LearningInternational Journal of Technology Intelligence and PlanningInternational Journal of Technology ManagementInternational Journal of Technology MarketingInternational Journal of Technology Policy and LawInternational Journal of Technology, Policy and ManagementInternational Journal of Technology Transfer and CommercialisationInternational Journal of Telemedicine and Clinical PracticesInternational Journal of Theoretical and Applied Multiscale MechanicsInternational Journal of Tourism AnthropologyInternational Journal of Tourism PolicyInternational Journal of Trade and Global MarketsInternational Journal of Transitions and Innovation SystemsInternational Journal of Trust Management in Computing and CommunicationsInternational Journal of Ultra Wideband Communications and SystemsInternational Journal of Value Chain ManagementInternational Journal of Vehicle Autonomous SystemsInternational Journal of Vehicle DesignInternational Journal of Vehicle Information and Communication SystemsInternational Journal of Vehicle Noise and VibrationInternational Journal of Vehicle PerformanceInternational Journal of Vehicle SafetyInternational Journal of Vehicle Systems Modelling and TestingInternational Journal of Virtual Technology and MultimediaInternational Journal of WaterInternational Journal of Web and Grid ServicesInternational Journal of Web Based CommunitiesInternational Journal of Web Engineering and TechnologyInternational Journal of Web ScienceInternational Journal of Wireless and Mobile ComputingInternational Journal of Work InnovationInternational Journal of Work Organisation and EmotionJournal for Global Business AdvancementJournal for International Business and 
Research picks

Securing telemedicine

Telemedicine is slowly maturing, allowing greater connectivity between patients and healthcare providers through information and communications technology (ICT). One issue yet to be fully addressed, however, is security, and with it privacy. Researchers writing in the International Journal of Ad Hoc and Ubiquitous Computing have turned to cloud computing to help them develop a new, strong authentication protocol for electronic healthcare systems. Prerna Mohit of the Indian Institute of Information Technology Senapati in Manipur, Ruhul Amin of the Dr Shyama Prasad Mukherjee International Institute of Information Technology in Naya Raipur, and G.P. Biswas of the Indian Institute of Technology (ISM) Dhanbad, in Jharkhand, India, point out that medical information is personal and sensitive, so it is important that it remains private and confidential.
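The general challenge–response idea that authentication protocols of this kind build on can be sketched in a few lines. This is a minimal illustration only, not the authors' protocol; the shared key and function names are hypothetical:

```python
import hmac
import hashlib
import os

# Hypothetical sketch of a generic challenge-response step: server and
# device share a secret key; the server sends a random nonce and the
# device proves possession of the key by returning an HMAC over it,
# so the key itself never travels over the network.
SHARED_KEY = b"pre-registered-device-key"    # hypothetical shared secret

def make_challenge() -> bytes:
    return os.urandom(16)                    # server-side random nonce

def device_response(key: bytes, challenge: bytes) -> bytes:
    return hmac.new(key, challenge, hashlib.sha256).digest()

def server_verify(key: bytes, challenge: bytes, response: bytes) -> bool:
    expected = hmac.new(key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)   # constant-time compare

challenge = make_challenge()
response = device_response(SHARED_KEY, challenge)
assert server_verify(SHARED_KEY, challenge, response)
```

The constant-time comparison matters in practice: a naive `==` on digests can leak timing information to an attacker.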
The team's approach uses the flexibility of a mobile device for authentication, so that a user can securely retrieve pertinent information without a third party having the opportunity to access it at any point. In a proof of principle, the team carried out a security analysis and demonstrated that the system can resist attacks in which a malicious third party attempts to breach the security protocol. They add that the additional computation and communication costs are lower than those of other security systems reported in the research literature. Mohit, P., Amin, R. and Biswas, G.P. (2021) 'An e-healthcare authentication protocol employing cloud computing', Int. J. Ad Hoc and Ubiquitous Computing, Vol. 36, No. 3, pp.155–168. DOI: 10.1504/IJAHUC.2021.113873

Anticancer drugs from the monsoon

A small-branched shrub found in India and known locally as Moddu Soppu (Justicia wynaadensis) is used by the inhabitants of Kodagu district in Karnataka to make a sweet dish exclusively during the monsoon season. Research published in the International Journal of Computational Biology and Drug Design has looked at phytochemicals present in extracts from the plant that may act as putative anticancer agents. C.D. Vandana and K.N. Shanti of PES University in Bangalore, Karnataka, and Vivek Chandramohan of the Siddaganga Institute of Technology in Tumkur, Karnataka, investigated several phytochemicals that had been reported in the scientific literature as having anticancer activity. They used a computer model to look at how well twelve different compounds "docked" with the enzyme thymidylate synthase and compared this activity with a reference drug, capecitabine, which targets this enzyme. Thymidylate synthase is involved in making DNA for cell replication; in cancer, uncontrolled cell replication is the underlying problem.
If this enzyme can be blocked, it will lead to DNA damage in the cancer cells and potentially halt the cancer's growth. Two compounds had comparable activity and greater binding to the enzyme than capecitabine. The first, campesterol, is a well-known plant chemical with a structure similar to cholesterol; the second, stigmasterol, is another well-known phytochemical, involved in the structural integrity of plant cells. The former proved more stable than the latter and represents a possible lead for further investigation and testing as an anticancer drug, the team reports. Vandana, C.D., Shanti, K.N., Karunakar, P. and Chandramohan, V. (2020) 'In silico studies of bioactive phytocompounds with anticancer activity from in vivo and in vitro extracts of Justicia wynaadensis (Nees) T. Anderson', Int. J. Computational Biology and Drug Design, Vol. 13, Nos. 5/6, pp.582–601. DOI: 10.1504/IJCBDD.2020.113836

Native reforestation benefits biodiversity

Timber harvest and agriculture have had an enormous impact on biodiversity in many parts of the world over the last two hundred years of the industrial era. One such region is the 20 to 50 kilometre belt of tropical dry evergreen forest that lies inland from the southeastern coast of India. Efforts to regenerate biodiversity there have been more successful where native tropical dry evergreen forest has been reinstated than where non-native Acacia has been planted, according to research published in the Interdisciplinary Environmental Review. Christopher Frignoca and John McCarthy of the Department of Atmospheric Science and Chemistry at Plymouth State University in New Hampshire, USA, Aviram Rozin of Sadhana Forest in Auroville, Tamil Nadu, India, and Leonard Reitsma of the Department of Biological Sciences at Plymouth explain how reforestation can be used to rebuild an ecosystem and increase the population sizes and diversity of flora and fauna.
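Diversity comparisons of this kind are commonly quantified with an index such as Shannon's H', which rewards both species richness and evenness. A minimal sketch, using hypothetical per-species counts rather than the study's data:

```python
import math

# Shannon diversity index: H' = -sum(p_i * ln(p_i)), where p_i is the
# proportion of individuals belonging to species i. Higher H' means a
# richer, more even community. Counts below are hypothetical.
def shannon(counts):
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c)

native = [30, 25, 20, 15, 10, 8, 5]   # hypothetical: many species, fairly even
acacia = [70, 20, 5, 3, 2]            # hypothetical: dominated by one species

print(shannon(native) > shannon(acacia))  # True: native community is more diverse
```

The same comparison can be made per survey method (point counts, sweep nets, transects), which is essentially what the field inventory described below enables.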
The team has looked at efforts to rebuild the ecosystem of Sadhana Forest, where an area of 28 hectares had its water table replenished through intensive soil moisture conservation. The team observed rapid growth of planted native species and germination of two species of dormant Acacia seeds. A standard biological inventory of the area revealed 75 bird, 8 mammal, 12 reptile, 5 amphibian, and 55 invertebrate species, spanning 22 invertebrate orders. When they looked closely at the data from bird abundance at point-count stations, invertebrate sweep-net captures and leaf-count detections, and visual observations of Odonata and Lepidoptera along fixed-pace transects, they saw far greater diversity in the areas where native plants thrived than around the non-native Acacia. "Sadhana Forest's reforestation demonstrates the potential to restore ecosystems and replenish water tables, vital components to reversing ecosystem degradation, and corroborates reforestation efforts in other regions of the world," the team writes. "Sadhana Forest serves as a model for effective reforestation and ecosystem restoration," the researchers conclude. Frignoca, C., McCarthy, J., Rozin, A. and Reitsma, L. (2021) 'Greater biodiversity in regenerated native tropical dry evergreen forest compared to non-native Acacia regeneration in Southeastern India', Interdisciplinary Environmental Review, Vol. 21, No. 1, pp.1–18. DOI: 10.1504/IER.2021.113781

Protection from coronavirus and zero-day pathogens

Researchers in India are developing a disinfection chamber that integrates a system that can deactivate coronavirus particles. The team reports details in the International Journal of Design Engineering.
As we enter the second year of the COVID-19 pandemic, there are signs that the causative virus, SARS-CoV-2, and its variants may be with us for many years to come, despite the unprecedented speed with which vaccines against the disease have been developed, tested and, in some parts of the world, rolled out. Sangam Sahu, Shivam Krishna Pandey, and Atul Mishra of BML Munjal University suggest adapting the screening technology commonly used in security to check whether a person entering an area such as an airport, hospital, or government building is carrying a weapon, explosives, or contraband goods. Such a system might be augmented with a body-temperature check to spot a person with a fever that might be a symptom of COVID-19 or another contagious viral infection. They add that the screening system might also incorporate technology that can kill viruses on surfaces with a quick flash of ultraviolet light or a spray of chemical disinfectant. Airborne microbial diseases represent a significant ongoing challenge to public health around the world. While COVID-19 is top of the agenda at the moment, seasonal and pandemic influenza are of perennial concern, as is the emergence of drug-resistant strains of tuberculosis. Moreover, we are likely to see other emergent pathogens, as we have many times in the past, any one of which could lead to an even greater pandemic catastrophe than COVID-19. Screening and disinfecting systems of the kind described by Sahu and colleagues could become commonplace and perhaps act as an obligatory frontline defense against the spread of such emergent pathogens even before they are identified. A similar approach to unknown threats is well known in the computer industry, where novel malware, so-called zero-day viruses, emerges before antivirus software is updated to recognize it, so blanket screening and disinfection software is often used. Sahu, S., Pandey, S.K. and Mishra, A. (2021) 'Disinfectant chamber for killing body germs with integrated FAR-UVC chamber (for COVID-19)', Int. J. Design Engineering, Vol. 10, No. 1, pp.1–9. DOI: 10.1504/IJDE.2021.113247

Wetware data retrieval

A computer hard drive can be a rich source of evidence in a forensic investigation... but only if the device is intact and undamaged; otherwise, many additional steps are needed to retrieve incriminating data from within, and even in the most expert hands they are not always successful. Research published in the International Journal of Electronic Security and Digital Forensics considers the data retrieval problems facing investigators presented with a hard drive that has been submerged in water. Alicia Francois and Alastair Nisbet of the Cybersecurity Research Laboratory at Auckland University of Technology in New Zealand point out that suspects under pressure may attempt to destroy digital evidence before it can be seized by the authorities. A common approach is simply to put a hard drive in water in the hope that damage to the circuitry and the storage media within will render the data inaccessible. The team has looked at the impact of water ingress on solid-state and conventional spinning magnetic-disc hard drives, the timescale over which irreparable damage occurs, and how this relates to the likelihood of significant data loss from the device. Circuitry and other components begin to corrode rather quickly following water ingress. However, if a device can be retrieved and dried within seven days, there is a reasonable chance of it still working and the data being accessible. "Ultimately, water submersion can damage a drive quickly but with the necessary haste and skills, data may still be recoverable from a water-damaged hard drive," the team writes. However, if the device has been submerged in saltwater, irreparable damage can occur within 30 minutes.
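The timescales summarised above (roughly seven days for a spinning drive in fresh water, about 30 minutes in saltwater) could be folded into a simple triage helper for a first-response checklist. The function and table below are illustrative only; the thresholds come from the summary above, everything else is hypothetical:

```python
# Hypothetical triage helper: rough upper bounds, in minutes, on how long
# a spinning (HDD) drive can be submerged before recovery becomes unlikely,
# per the timescales reported in the study summary.
RECOVERY_WINDOW_MIN = {
    ("hdd", "fresh"): 7 * 24 * 60,   # ~7 days in fresh water
    ("hdd", "salt"): 30,             # ~30 minutes in saltwater
}

def likely_recoverable(drive: str, water: str, minutes_submerged: int) -> bool:
    """Return True if the drive was retrieved within the rough recovery window."""
    window = RECOVERY_WINDOW_MIN[(drive, water)]
    return minutes_submerged <= window

print(likely_recoverable("hdd", "fresh", 3 * 24 * 60))  # True: 3 days, fresh water
print(likely_recoverable("hdd", "salt", 45))            # False: past the saltwater window
```

In practice an investigator would treat these as soft guidance, not hard cut-offs, since corrosion rates vary with temperature and water chemistry.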
The situation is worse for a solid-state drive, which will essentially be destroyed within a minute of saltwater ingress. The research provides a useful guide for forensic investigators retrieving hard drives that have been submerged in water. Francois, A. and Nisbet, A. (2021) 'Forensic analysis and data recovery from water-submerged hard drives', Int. J. Electronic Security and Digital Forensics, Vol. 13, No. 2, pp.219–231. DOI: 10.1504/IJESDF.2021.113374

Of alcohol and bootlaces

There is no consensus across medical science as to whether there is a safe lower limit on alcohol consumption, nor whether a small amount of alcohol is beneficial. The picture is complicated by the various congeners, such as polyphenols and other substances, present in different concentrations in different types of alcoholic beverage: red and white wine, beers and ales, ciders, and spirits. Moreover, while alcohol consumption has been decisively classified as a cause of cancer, there is strong evidence that small quantities have a protective effect on the cardiovascular system. Now, writing in the International Journal of Web and Grid Services, a team from China, Japan, Taiwan, and the USA has looked at how a feature of our genetic material, DNA, relates to ageing and cancer, and investigated a possible connection with alcohol consumption. The ends of our linear chromosomes are capped by repeated sequences of DNA base units that act as protective ends, almost analogous to the stiff aglets on each end of a bootlace. These protective sections are known as telomeres. With each cell replication, the telomeres on the ends of our chromosomes get shorter. This limits the number of times a cell can replicate: eventually there is insufficient protection for the DNA between the ends that encodes the proteins that make up the cell. Once the telomeres are damaged beyond repair, or gone, the cell will die.
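The replication limit this implies can be illustrated with simple arithmetic: a roughly fixed number of base pairs is lost per division until a critical length is reached. All figures below are hypothetical round numbers chosen for illustration, not measurements:

```python
# Illustrative arithmetic only: telomeres shorten by a roughly fixed
# number of base pairs (bp) per cell division until a critical length
# is reached, capping how many times the cell can replicate.
start_bp = 10_000       # hypothetical starting telomere length (bp)
loss_per_division = 60  # hypothetical bp lost at each replication
critical_bp = 4_000     # hypothetical length below which the cell dies

divisions = 0
length = start_bp
while length - loss_per_division >= critical_bp:
    length -= loss_per_division
    divisions += 1

print(divisions)  # prints 100: (10000 - 4000) / 60 divisions before the limit
```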
This degradative process has been linked to the limited lifespan of the cells in our bodies and to the ageing process itself. Yan Pei of The University of Aizu in Aizuwakamatsu, Japan, and colleagues Jianqiang Li, Yu Guan, and Xi Xu of Beijing University of Technology, China, Jason Hung of the National Taichung University of Science and Technology, Taichung, Taiwan, and Weiliang Qiu of Brigham and Women's Hospital in Boston, USA, have carried out a meta-analysis of the scientific literature. Their analysis suggests that telomere length is associated with alcohol consumption. Given that shorter telomeres, even before they reach the critical length, can lead to genomic instability, this alcohol-associated shortening could offer insight into how cancerous tumour growth might be triggered. Telomere shortening is a natural part of the ageing process, but it is influenced by various factors beyond our control, such as paternal age at birth, ethnicity, gender, age, telomere maintenance genes, and genetic mutations of the telomeres. Telomere length is also affected by inflammation, oxidative stress, and environmental, psychosocial, and behavioural exposures. Over some of those factors we may have limited control; over others, such as chronic exposure to large quantities of alcohol, we have greater control. Li, J., Guan, Y., Xu, X., Pei, Y., Hung, J.C. and Qiu, W. (2021) 'Association between alcohol consumption and telomere length', Int. J. Web and Grid Services, Vol. 17, No. 1, pp.36–59. DOI: 10.1504/IJWGS.2021.113686

Quality after the pandemic

Adedeji Badiru of the Air Force Institute of Technology in Dayton, Ohio, USA, discusses the notion of quality insight in the International Journal of Quality Engineering and Technology and how it relates to motivating researchers and developers working on quality certification programs after the COVID-19 pandemic.
In the realm of product quality, we depend on certification against generally accepted standards to ensure high quality. Badiru writes that the ongoing COVID-19 pandemic has seriously disrupted production facilities and upended normal quality engineering and technology programs. In the aftermath of the pandemic there will be a pressing need to redress this problem, and its impact on quality management processes may, as with many other areas of normal life, continue to be felt for a long time. Badiru suggests that now is the time to develop new approaches to recover pre-COVID quality levels. In the area of quality certification, he suggests, we must look to other methods in this field, perhaps borrowing from other areas of quality oversight. One mature area from which the new normal of certification might borrow is academic accreditation. The work environment has changed beyond recognition through the pandemic and we are unlikely to revert entirely to old approaches. Indeed, the pandemic has already necessitated the urgent application of existing quantitative and qualitative tools and techniques to other areas, such as work design, workforce development, and the form of the curriculum in education. Action now, from the systems perspective in engineering and technology, "will get a company properly prepared for the quality certification of the future, post-COVID-19 pandemic," he writes. This will allow research and development of new products to satisfy the triad of cost, time, and quality requirements as we ultimately emerge from the pandemic. Badiru, A. (2021) 'Quality insight: product quality certification post COVID-19 using systems framework from academic program accreditation', Int. J. Quality Engineering and Technology, Vol. 8, No. 2, pp.218–227.
DOI: 10.1504/IJQET.2021.113728

Spotting and stopping online abuse

Social media has brought huge benefits to many of those around the world with the resources to access its apps and websites. Indeed, there are billions of people using the popular platforms every month in almost, if not, every country of the world. Researchers writing in the International Journal of High Performance Systems Architecture point out that, as with much in life, there are downsides that counter the positives of social media. One might refer to one such negative facet of social media as "cyber violence". Randa Zarnoufi of the FSR Mohammed V University in Rabat, Morocco, and colleagues suggest that the number of victims of this new form of hostility is growing day by day and is having a strongly detrimental effect on the psychological wellbeing of too many people. A perspective that has been little investigated in this area with regard to reducing the level of cyber violence in the world is to consider the psychological status and the emotional dimension of the perpetrators themselves. New understanding of what drives those people to commit heinous acts against others in the online world may improve our response to it and open up new ways to address the problem at its source rather than attempting to simply filter, censor, or protect victims directly. The team has analysed social media updates using Ensemble Machine Learning and the Plutchik wheel of basic emotions to extract the character of those updates in the context of cyber violence, bullying, and trolling behaviour. The analysis draws the perhaps obvious, but nevertheless highly meaningful, conclusion that there is a significant association between an individual's emotional state and the personal propensity to harmful intent in the realm of social media.
Importantly, the work shows how this emotional state can be detected and perhaps the perpetrator of cyber violence approached with a view to improving their emotional state and reducing the negative impact their emotions would otherwise have on the people with whom they engage online. This is very much the first step in this approach to addressing the serious and growing problem of cyber violence. The team adds that they will train their system to detect specific issues in social media updates that are associated with harassment with respect to sexuality, appearance, intellectual capacity, and political persuasion. Zarnoufi, R., Boutbi, M. and Abik, M. (2020) 'AI to prevent cyber-violence: harmful behaviour detection in social media', Int. J. High Performance Systems Architecture, Vol. 9, No. 4, pp.182–191. DOI: 10.1504/IJHPSA.2020.113679

Me too #metoo

Sexual harassment in the workplace is a serious problem. To address it, we need a systematic, multistage preventive approach, according to researchers writing in the International Journal of Work Organisation and Emotion. One international response to sexual harassment problems across a range of industries, though initially emerging from the entertainment industry, was the "#metoo" movement. Within this movement, victims of harassment and abuse told their stories through social media and other outlets to raise awareness of this widespread problem and to advocate for new legal protections and societal change. Anna Michalkiewicz and Marzena Syper-Jedrzejak of the University of Lodz, Poland, describe how they have explored perceptions of the #metoo movement with regard to its role in reducing the incidence of sexual harassment. "Our findings show that #metoo may have had such preventive potential but it got 'diluted' due to various factors, for example, cultural determinants and lack of systemic solutions," the team writes. They suggest that because of these limitations the #metoo movement is yet to reach its full potential.
The team's study considered 122 students finishing their master's degrees in management studies and readying themselves to enter the job market. They were surveyed about the categorisation of psychosocial hazards in the workplace – such as sexual harassment – that cause stress and other personal problems, as opposed to the more familiar physical hazards. "Effective prevention of [sexual harassment] requires awareness but also motivation and competence to choose and implement in the organisations adequate measures that would effectively change the organisational culture and work conditions," the team writes. The #metoo movement brought prominence to the issues, but the team suggests that it did not lead to the requisite knowledge and practical competence that would facilitate prevention. They point out that the much-needed social changes cannot come about within a timescale of a few months of campaigning. Cultural changes need more time and a willing media to keep attention focused on the problem and how it might be addressed. There is also a pressing need for changes in the law to be considered to help eradicate sexual harassment in the workplace. Michałkiewicz, A. and Syper-Jędrzejak, M. (2020) 'Significance of the #metoo movement for the prevention of sexual harassment as perceived by people entering the job market', Int. J. Work Organisation and Emotion, Vol. 11, No. 4, pp.343–361. DOI: 10.1504/IJWOE.2020.113699

Data mining big data news

While the term "big data" has become something of a buzz phrase in recent years, it has a solid foundation in computer science in many contexts and as such has emerged into the public consciousness via the media and even government initiatives in many parts of the world. A North American team has looked at the media and undertaken a mining operation to unearth nuggets of news regarding this term. Murtaza Haider of the Ted Rogers School of Management at Ryerson University in Toronto, Canada, and Amir Gandomi of the Frank G.
Zarb School of Business at Hofstra University in Hempstead, New York, USA, explain how big data-driven analytics emerged as one of the most sought-after business strategies of the decade. They have now used natural language processing and text mining algorithms to find the focus and tenor of news coverage surrounding big data. They mined a five-million-word body of news coverage for references to the novelty of big data, showcasing the usual suspects in big data geographies and industries. "The insights gained from the text analysis show that big data news coverage indeed evolved where the initial focus on the promise of big data moderated over time," the team found. Their work also demonstrates how text mining and NLP algorithms are potent tools for news content analysis. The team points out that while academic journals have been the main source of trusted and unbiased advice regarding computing technologies, large databases, and scalable analytics, it is the popular and trade press that are the information source for over-stretched executives. It was the popular media that became what the team describes as "the primary channel for spreading awareness about 'big data' as a marketing concept". They add that the news media certainly helped popularise innovative ideas being discussed in the academic literature. Moreover, the latter has had to play catch-up during the last decade on sharing the news. That said, much of the news coverage during this time has been about the novelty and the promise of big data rather than the proof of principles that is needed for it to proceed and mature as a discipline. Indeed, there are many big data clichés propagated in an often uncritical popular media, suggesting that big data analytics is some kind of information panacea. In contrast, the more reserved academic literature knows only too well that big data does not represent a cure-all for socio-economic ills, nor does it have unlimited potential. Haider, M. and Gandomi, A.
(2021) 'When big data made the headlines: mining the text of big data coverage in the news media', Int. J. Services Technology and Management, Vol. 27, Nos. 1/2, pp.23–50. DOI: 10.1504/IJSTM.2021.113574

News

New Editor for International Journal of Applied Nonlinear Science 23 March, 2021 Prof. Wen-Feng Wang from the Interscience Institute of Management and Technology in India and Shanghai Institute of Technology in China has been appointed to take over editorship of the International Journal of Applied Nonlinear Science. New Editor for Journal of Design Research 11 March, 2021 Prof. Jouke Verlinden from the University of Antwerp in Belgium has been appointed to take over editorship of the Journal of Design Research. The journal's former Editor in Chief, Prof. Renee Wever of Linköping University in Sweden, will remain on the board as Editor. Inderscience Editor in Chief receives Humboldt Research Award 5 March, 2021 Inderscience is pleased to announce that Prof. Nilmini Wickramasinghe, Editor in Chief of the International Journal of Biomedical Engineering and Technology and the International Journal of Networking and Virtual Organisations, has won a Humboldt Research Award. This award is conferred in recognition of the award winner's academic record. Prof. Wickramasinghe will be invited to carry out research projects in collaboration with specialists in Germany. Inderscience's Editorial Office extends its warmest congratulations to Prof. Wickramasinghe for her achievement, and thanks her for her continuing stellar work on her journals. Best Reviewer Award announced by International Journal of Environment and Pollution 11 February, 2021 We are pleased to announce that the International Journal of Environment and Pollution has launched a new Best Reviewer Award. The 2020 Award goes to Prof. Steven Hanna of the Harvard T.H. Chan School of Public Health in the USA. The senior editorial team thanks Prof.
Hanna sincerely for his exemplary efforts. Inderscience new address 11 February, 2021 As of 1st March 2021, the address of Inderscience in Switzerland will change to: Inderscience Enterprises Limited, Rue de Pré-Bois 14, Meyrin, 1216 Geneva, Switzerland.

Transatlantica – Revue d'études américaines. American Studies Journal, 2 | 2011: Sport et société / Animals and the American Imagination.

Symposium "Visual Studies / Études visuelles: un champ en question", Université Paris 7, Institut Charles V, October 20-22, 2011. Camille Rouquet.

Electronic edition URL: http://journals.openedition.org/transatlantica/5431 DOI: 10.4000/transatlantica.5431 ISSN: 1765-2766 Publisher: AFEA Electronic reference: Camille Rouquet, « Symposium "Visual Studies / Études visuelles: un champ en question" », Transatlantica [Online], 2 | 2011, published online 29 February 2012, accessed 11 February 2021. URL: http://journals.openedition.org/transatlantica/5431 DOI: https://doi.org/10.4000/transatlantica.
5431. This document was automatically generated on 11 February 2021. Transatlantica – Revue d'études américaines is made available under the terms of the Creative Commons Attribution – NonCommercial – NoDerivatives 4.0 International licence.

Symposium "Visual Studies / Études visuelles: un champ en question", Université Paris 7, Institut Charles V, October 20-22, 2011. Camille Rouquet. 1 Organised by François Brunet (UPD/LARCA), Catherine Bernard (UPD/LARCA), Marc Vernet (UPD/CERILAC) and André Gunthert (EHESS/LHIVIC), this three-day conference was dedicated to the following issues: the archaeology of visual studies in the Anglophone world, the translatability of visual studies into the French and European fields, and the relationship of visual studies to history and its methods. This symposium was an opportunity to try and assess the position of visual studies in France, where they are a relatively recent field of interest, unlike the United States, Britain and Germany, where they have been studied for a few decades.1 The objective of this symposium was to ask the following question: can visual studies find a place in French research fields, and if not, why? Keynote—Margaret Dikovitskaya (Wolfsonian-Florida International University), "Visual Studies, Ten Years After" 2 Margaret Dikovitskaya opened the symposium with a presentation centred on the evolution of visual studies in the last thirty years and on their long-standing opposition to the field of art history. In her own words, visual culture, or visual studies, is a field where the visual image is the focal point in the process through which meaning is constructed.
Since its establishment within the humanities, it has crossed over to other fields of research, therefore making it fundamental to find a way to conciliate various points of view. 3 Despite the great evolution of the theory of visual studies these past ten years, the field has been subjected to some "friendly fire". One of Margaret Dikovitskaya's examples was Mieke Bal's article "The Genius of Rome: Putting Things Together", published in the Journal of Visual Culture shortly after it was launched in April 2002, and in which Bal talked of "visual essentialism" and deemed the concept of visual studies highly problematic. Likewise, in the 1970s and 1980s, visual studies were strongly criticised by semiotic and philosophical experts attached to the performance of art and its association with visual signs. W.J.T. Mitchell provided a new point of departure from the 1980s onwards when he stated that all media were visual media. By asking what was cultural about vision and, conversely, what was visual about culture, researchers, according to Margaret Dikovitskaya, realised there was a need for a new, richer and more philosophical definition of the visual. 4 At the Stone Summer Theory Institute of July 2011 entitled "Farewell to Visual Studies", James Elkins stated that visual culture had not fulfilled its promise to provide a methodical model for the study of images. Margaret Dikovitskaya argued in response to his statement that image studies and visual studies are two different concepts. The visual is not restricted to images but encompasses everyday practices of seeing, either mediated or not. Whereas Elkins considered that "the growth of visual signs and hybrid departments signalled the end of the project of visual studies", Margaret Dikovitskaya saw the evolution of the field in the past ten years as highly promising.
She is one of many contemporary researchers intent on underlining the connection between what is seen and what is read, therefore moving away from the previous distinction between the visual on the one hand and the written language on the other. 5 Margaret Dikovitskaya concluded her argument by stating that these aspects of the evolution of visual practices and visual studies in the past three decades point to a “new visual culture”; its aim is to describe and analyse visual media and communication while resisting the temptation to define them systematically by focusing, instead, on specific issues such as the frontier between high and low culture. Frank Mehring (Freie Universität Berlin), “How Silhouettes Became Black, or What We Can Learn from Advertising the Harlem Renaissance in the Age of Transnational Studies” 6 Frank Mehring used the example of the iPod advertising campaigns of 2004 to try to bridge the gap between art history and modern advertising through the concept of re- appropriation. His examples were posters and clips featuring black silhouettes with white earphones against unified, brightly coloured backgrounds. Frank Mehring showed that during the Harlem Renaissance, silhouettes were appropriated from Alain Locke’s anthology of poems The New Negro—one of the first to use primitive figures as illustrations—for the promotion of Afro-American culture. Frank Mehring also approached the concept of “recodification” by focusing on another field of art that uses silhouettes: the shadow play. Originating at the time of Goethe to capture and study the features of individuals, shadows were incorporated into German visual arts in the 1920s (with the examples of Das Cabinett des Dr. Caligari in 1920 and of Schatten in 1923). The art of the silhouette culminated in 1926 with the animated film Die Abenteuer des Prinzen Achmed, recreating Arabian tales in shadow images against coloured backgrounds. 
In the United States, this film instantly evoked Black culture and the rebirth of the minstrel tradition. Since then, the silhouette has been central in many iconic works such as Disney's Fantasia, many of Matisse's paintings and Keith Haring's graffiti style. 7 Frank Mehring used this historical presentation to show that we live in an age of "transnational mediation" and to ask what we can learn from re-reading this particular style from the Harlem Renaissance. This was a way to understand the transnational dimension of American society, as well as its persuasive cultural power when it comes to pushing the boundaries between fine art and popular arts. In Frank Mehring's words, "from amplification to simplification, the viewers can activate their own fantasies." Olivier Lugon (Université de Lausanne-UNIL), "Visual studies and European Modernism" 8 Olivier Lugon dedicated his presentation to the archaeology of the field of visual studies in Switzerland. He aimed at showing that the study of the visual could be focused on practices rather than theories and mentioned that his department in Lausanne does not seek out self-definition—a process which Olivier Lugon considers specific to the United States. 9 Olivier Lugon focused on the relationship between László Moholy-Nagy and Sigfried Giedion in the 1920s as an example of the willingness to redefine art and the teaching of art in the age of the Bauhaus. He explained that Moholy-Nagy and Giedion stepped out of the boundaries of classical arts at a time when words were starting to give way to the visual. The two men experimented with photography and focused on new ways to combine words and images (such as their photo-texts).
Already, they felt that teaching visual studies was absolutely necessary and, as early as 1927, Moholy-Nagy declared: "the illiterate of the future will be the person ignorant of the use of the camera as well as the pen." The new notion of visual literacy, as Olivier Lugon explained, was then created to encompass everything that related to the transmission of visual knowledge, to critical reflection and to personal expression. 10 According to Olivier Lugon, the collaboration between Moholy-Nagy and Giedion is a good example of how visual history took its definition into its own hands. He explained the need to open the way outwards, beyond the high arts, and insisted that, although art history should not be restricted to a certain category of visual production, visual studies, on the other hand, cannot oppose art history diametrically. Martine Beugnet (Edinburgh University), " 'Firing at the Clocks': Cinema, Sampling and the 'Cultural Logic of the late Capitalism Museum' " 11 Martine Beugnet chose a title composed of quotes from Rosalind Krauss and Walter Benjamin to underline contemporary visual theory's need to turn to experts on modernity who first saw the importance of cinema in our cultures. Her point of departure was the alleged disappearance of cinema and she proposed to study this in relation to the new place given to film in the museum space. Martine Beugnet's focus was on film compilation, a practice which deeply links the work of the artist to that of the museum curator. 12 Her main example was Christian Marclay's The Clock, a 2011 compilation of footage relating to time. The film is fully synchronised with non-diegetic life and lasts 24 hours. It has been screened intensively and was recently shown at the Centre Pompidou in Paris.
This example was particularly relevant to the general issue of nationality in visual studies—Martine Beugnet showed that little footage was found in Indian cinema, for example. She argued that cinema in relation to time was quite characteristic of capitalism and Western modernity. Marclay's film was made directly for the museum space and is therefore not part of the category of experimental films, which are meant to be seen inside a cinema. The length of this film as well as the small number of hard copies made (5 or 6) also excludes it from the public screen—Martine Beugnet called it an artist's film and noted the artist's wish to limit it to museum exhibitions. 13 The issue of the reception of films in the gallery was also approached and deemed sensitive; indeed the example of The Clock showed that the behaviour of museumgoers is more akin to window-shopping than to a process of agreeing or disagreeing. In her presentation, Martine Beugnet showed that as cinema enters the museum space, there is the establishment of a stimulating dialogue between art theory and film theory, and a redefinition of the notion of reception, either by artists and experts, or by the public. Emmanuelle André (Université Paris Diderot), "The visible man at the core of vision. Visual studies and film analysis" 14 The question raised by Emmanuelle André's presentation was how to determine what is at the core of vision and how this relates to visual studies with regard to the history of the ways of seeing. In order to answer this question, Emmanuelle André went back to the origins of visual culture as Béla Balázs defined it in The Visible Man (1924). For him, cinema was a new medium that reproduced and broadcast productions of the mind, and had the potential to raise man to new visibility and new freedom. Images were thereon not organised around cinema but forced to rethink their own representation.
15 Emmanuelle André then presented the 19th-century tendency to look at the body in its most fragile state, a tendency that climaxed in Alphonse Bertillon's anthropometric police shots and Dr Doyen's "sagittal views" of sliced human bodies in the early 20th century. She explained that Alphonse Bertillon, who worked with Chauvin, started to make a distinction between man as body and man as subject. This shift in the definition was caused, according to Chauvin, by the recurring exposure of the body to the look of an audience, or a readership—in which case the process was mediated by images. Emmanuelle André exposed these ideas in order to compare them with the modern enthusiasm for the figure of the invisible man in literature and film. She showed that the way of looking at the human body has been modified: dissections and decapitations are no longer favourites of the public. She mentioned the importance of cinema in the process of redefinition of the body and said that once again images take on the role of mediators. In John Carpenter's Memoirs of an Invisible Man (1992), the protagonist faces his invisibility, undresses his visibility, in Emmanuelle André's words; he welcomes the public's observation at the risk of making a part of himself invisible. Emmanuelle André concluded that, although observation had prevailed in the 19th and 20th centuries, nowadays the subject—man—is no longer dissected by the look but upset by it. Jens Schroeter (Universität Siegen), "Visual Studies, Practice, History. The Example of 'Digital Photography' " 16 Jens Schroeter's contribution to the symposium was an attempt at deconstructing the opposition between "analogue and digital photography," based on the issue of referentiality.
Jens Schroeter presented the claim (recurrent in the history of visual media) that analogue photography refers to a specific reality, closely linked to the object that is photographed. This deeply changed the status of photography, and yet today there is a distinction between these two kinds of photography—what Jens Schroeter called a "haunting dichotomy between referentiality and manipulation." He noted several contradictions in the definition of digital photography as less reliable: doctors, scientists and soldiers rely more on digital than analogue imagery to analyse reality. In Jens Schroeter's opinion, nothing in the definition of photography suggests that the writing should be produced through a chemical process. 17 Jens Schroeter also mentioned that the definition of digital imagery is almost as complicated as that of referentiality—it encompasses scans and generated images. To show that manipulation is not directly opposed to reference, he took the example of NASA's Ranger programme of 1964: the images sent by Ranger 7 were analogues transmitted through a video signal, but NASA processed them digitally upon reception. The same qualification between reference and manipulation goes with book illustrations; they are usually "photoshopped" to be made sharper but, according to Jens Schroeter, no one would call that manipulation. 18 Jens Schroeter's conclusion was that there is no difference between analogue and digital photography; for him these are simply two different processes and ways of storage, which should both be included in the study of visual practices. W.J.T. Mitchell (University of Chicago), "Seeing Madness: Insanity, Media, and Visual Culture" (INHA, "Si la photo est bonne" conference) 19 Thanks to the collaboration with the LHIVIC conference "Si la photo est bonne. Le rôle des industries culturelles dans la construction de l'imaginaire" (EHESS), the symposium moved to the INHA auditorium on Friday afternoon to listen to a presentation by W.J.T. Mitchell.
20 W.J.T. Mitchell came to present madness as visual display and approached this topic through a focus on cinema. According to him, the arch of madness is deep; from divine folly in Greek mythology to Charcot and Freud, and to the keenness of surrealist cinema for paranoia and delirium. W.J.T. Mitchell used film noir and horror films to raise a double question about cinema and madness: what does cinema reveal about insanity that was not available to knowledge before? Why is madness in film so attractive to the general public? The point of departure of his argument was that madness can be seen, and W.J.T. Mitchell presented the history of its representations. He mentioned the ritualisation of symptomatology by Charcot in the 1880s, the dissimulation of madness in Ralph Ellison's Invisible Man and William Blake's Nebuchadnezzar… Beyond these various topics relating to madness, W.J.T. Mitchell directed his inquiry towards visual culture and a series of films about the institutionalisation of madness (Shutter Island, The Snake Pit, Sunset Boulevard, Now, Voyager, One Flew Over the Cuckoo's Nest, A Beautiful Mind, among others). He deliberately chose films that feature psychiatrists, and his goal was to turn the gaze towards the structures of confinement that put madness on display. W.J.T. Mitchell not only asked what these films bring to madness but what madness brings to these films. He insisted on the different effects of the representation of madness depending on the medium used: when phantasmagoria existed, the illusions were in the room with the audience; in an opera, the audience is easily brought to the same state as the artist. Cinema, however, has the capacity to turn the gaze around towards the institutions that treat mental illness, according to W.J.T. Mitchell. 21 W.J.T.
Mitchell found that in films about madness, there often is a tracking of the clues at the origin of the trauma. One of the props recurrently used to do so is the cigarette—the cigarette normalises the moment between sanity and insanity; it is often offered by doctors to patients to cool them down. In Shutter Island, the detective is shown looking for his cigarettes in vain—a moment that W.J.T. Mitchell deemed a classical opening to the detective film. For him, smoking becomes the symbol of the detective genre and comes to link the medium to the prospect of seeing madness. W.J.T. Mitchell concluded his presentation by asking what role the post-cinematic medium might play in the representation of madness, mentioning second-life gaming as an example of "the reprivatisation of madness in the solitude of the game." Roundtable and conclusion, "The outlook for visual studies" 22 François Brunet introduced the roundtable by summarising some of the issues dealt with during the symposium. A recurring idea was that new visual studies now have a definition that moves them closer to "visual science"; the visual is evolving beyond images to a realm of experiences. François Brunet noted the distinct willingness, among researchers, to look at visual studies no longer as a mere propaedeutic tool but as a legitimate field of knowledge (W.J.T. Mitchell speaks of visual literacy) within the humanities. Secondly, the archaeology of visual studies shows old attempts to create the conditions for visual literacy (with two important attempts, first in the 1920s-30s with a focus on understanding and practicing, then in the 1960s-70s in Europe when semiology was understood as a kind of education). The question that is asked now is: why do we start again? Was something not transmitted? Finally, François Brunet proposed the idea that visual studies often take on a political dimension in the Anglophone world and might therefore have difficulties finding a French context.
23 As far as education goes, Gil Bartholeyns noted that the lack of pedagogy in the field is notable. For him, there are two ways of teaching visual culture—presenting the history of the field and making visual culture—but the two are hard to conciliate. Margaret Dikovitskaya linked the notion of education with that of science and recommended caution in bringing together the fields of visual studies, humanities and sciences, in order not to create an "academic ghetto" and for visual studies not to disappear. Signs of the collaboration between different fields of research are already visible in France: the professorship in visual studies at Lille 3 was established within the History Department in an attempt, according to Gil Bartholeyns, to connect art history to humanities. The French field was under review during this symposium and a question was raised as to why the French tradition of the political critique of signs and images does not communicate with the more socially—and culturally—oriented outlook of visual studies. W.J.T. Mitchell showed that the comparison with American politics is striking: he chose to talk about the "Occupy Wall Street" movement to demonstrate that, in the United States, media coverage is a spectacle—an idea that is only starting to reach France. 24 A programme of the symposium can be found at: http://www.ufr-anglais.univ-paris-diderot.fr/COLLOC_CHV/20111020-22FB/PROGRAMME%20VisualStudies_1_10_11-1.pdf. NOTES 1. So far in France, the field of visual studies has been restricted to two major poles of research at the LHIVIC (EHESS) and around the unique Visual Studies professorship held by Gil Bartholeyns at the Université Lille 3. A second professorship in Visual Studies is to be filled in Spring 2012 within the English Department of the Université Paris 7.
INDEX Themes: Current research (Actualité de la recherche) AUTHOR CAMILLE ROUQUET Université Paris Diderot Symposium “Visual Studies / Études visuelles: un champ en question”, Transatlantica, 2 | 2011 work_2xbmphwsufgxdhqpkgoceo7x4y ---- obituary Edulji (Eddy) Sethna Formerly Consultant Psychiatrist, Hollymoor Hospital, Birmingham Dr Eddy Sethna was born on 3 December 1925 in Bombay, India. He won two scholarships, which entirely funded his medical school training, and he qualified with MB BS from Bombay in 1951. Having completed house jobs in Bombay, he became a senior house officer in medicine at the Bury and Rossendale Group of Hospitals in Lancashire in 1954, obtaining his MRCP in 1956. Having specialised in cardiology at the London Heart Hospital and Sefton General Hospital in Liverpool, he also obtained a diploma in tropical medicine and hygiene in preparation for his return to India as a consultant physician to the Jahangir Nursing Home in Poona. He spent 21 months in this post but decided to return to England, where he started his career as a psychiatrist. Eddy’s first psychiatric appointment was as a registrar at the St Francis and Lady Chichester Group of Hospitals in Sussex. He did his senior registrar training in Birmingham and was appointed in 1966 as a consultant psychiatrist to All Saints Hospital in Birmingham, with an attachment to West Bromwich and District Hospital. He was awarded his MRCPsych in 1971, and in 1976 he became a consultant to Hollymoor Hospital in Birmingham and the Lyndon Clinic in Solihull. He was elected FRCP in 1987, having been elected FRCPsych in 1986.
His publications included studies of the benefits of group psychotherapy and refractory depression, but he also had a major interest in phobias. When asked to organise a registrar training programme, Eddy with typical thoroughness and attention to detail demanded that he be allowed to establish the programme from scratch, ignoring the preconceived ideas of those more senior. Having gained their support he established the first rotational psychiatric training programme in the country. This scheme was so popular and successful with the trainees that it was adopted by the Royal College of Psychiatrists as their national model for rotational training. In his early fifties, Eddy returned to his boyhood interest of photography as an antidote to the stresses of his job. In retirement, he became a leader and inspiration to the legions of amateur photographers taking tentative steps into the field of digital photography. He approached digital photography as he had approached medicine, studying the Adobe Photoshop computer programme systematically so that he understood its ever-evolving capabilities. He willingly offered one-to-one teaching sessions, wrote four books (two on paper and two on CD), was instrumental in the formation of the Royal Photographic Society’s Digital Imaging Group, was founding chairman of the Eyecon Group and served as vice-president of the Royal Photographic Society. More recently the Royal Photographic Society awarded Eddy its prestigious Fenton Medal and Honorary Membership in recognition of his huge contribution to photography in the UK. He had numerous acceptances in international exhibitions and took great pride in the gold medal he was awarded shortly before his death in recognition of his creativity. Eddy died at home of Hodgkin’s lymphoma on 29 June 2006, cared for by his wife, Beryl, and daughters, Beverley and Julie, as was his wish. Eddy is survived by his wife, three children and seven grandchildren, whom he adored.
Anne Sutcliffe doi: 10.1192/pb.bp.106.013813 review Clinical Handbook of Psychotropic Drugs for Children and Adolescents K. Z. Bezchlibnyk-Butler & A. S. Virani (eds) Hogrefe & Huber, 2004, US$49.95, 312 pp. ISBN 0-88937-271-3 This handbook is intended as a practical reference book for clinicians. Thus one measure of its success is how useful it is in a busy clinical setting and whether it offers any added benefit over the British National Formulary or its international equivalents. The book starts by providing a brief overview of psychiatric disorders in childhood and adolescence. This section is the weakest because, although it provides the ‘basic facts’, there is not sufficient detail for prescribing clinicians. I did, however, find myself using this section as a basis for handouts for medical students. The main body of the text is devoted to medications likely to be used in child and adolescent psychiatric practice. Taking the section on antidepressants as an example, it starts with a brief overview of the different classes of available antidepressants and general comments on the use of antidepressants in children and young people. For individual classes of drugs, indications, pharmacology, dosing, pharmacokinetics, onset and duration of action, adverse effects, withdrawal, precautions, toxicity, use in pregnancy, nursing implications, patient instructions and drug interactions are all covered. There are also helpful (and reproducible) patient information leaflets. The information provided is concise and up to date, although in this fast-developing field there is a danger that such texts can become out of date relatively quickly. I found myself regularly referring to the handbook in clinics. The accessible writing style made it easy to share the information with young people and their parents/carers.
The only limitation is that the book’s American authorship means that it tends to refer to licensing under the US Food and Drug Administration and reflects American practice, which does differ in some respects from contemporary practice in the UK. Margaret Murphy Department of Child Psychiatry, Cambridgeshire and Peterborough Mental Health Trust, Ida Darwin Hospital, Cambridge Road, Fulbourn, Cambridge CB1 5EE, email: margaret.murphy@cambsmh.nhs.uk doi: 10.1192/pb.bp.105.004333 work_2zsa47thhbbqnhfpcu7m6gvqxy ---- Original article KOREAN JOURNAL OF APPLIED ENTOMOLOGY 한국응용곤충학회지 ⓒ The Korean Society of Applied Entomology Korean J. Appl. Entomol. 52(4): 321-326 (2013) pISSN 1225-0171, eISSN 2287-545X DOI: http://dx.doi.org/10.5656/KSAE.2013.08.0.036 Eleven Species, Including Three Unrecorded Species, Belonging to Coleophoridae (Lepidoptera) Collected from Baengnyeong and Yeonpyeong Islands, Korea Minyoung Kim 1*, Bong-Woo Lee 2, Heung-Sik Lee 3 and Kyu-Tek Park 4 1 Plant Quarantine Technology Center, Animal and Plant Quarantine Agency, Suwon 443-400, Republic of Korea 2 Division of Forest Biodiversity, Korea National Arboretum, Soheul, Pocheon 487-821, Republic of Korea 3 Yeongnam Regional Office, Animal and Plant Quarantine Agency, Busan 600-016, Republic of Korea 4 The Korean Academy of Science and Technology, Seongnam 463-808, Republic of Korea [Korean title: Three unrecorded species of Coleophoridae (Lepidoptera) collected from Baengnyeong and Yeonpyeong Islands, Korea; authors and affiliations as above] ABSTRACT: Eleven Coleophora species were found in a faunistic survey of the family Coleophoridae (Lepidoptera: Gelechioidea) on Baengnyeong and Yeonpyeong Islands, located near the Northern Limit Line in the West Sea. Among them, three species, Coleophora adjunctella Hodgkinson, C. chenopodii Oku, and C. kurokoi Oku, have been recorded for the first time in Korea.
For the newly recorded species, taxonomic remarks and illustrations of the adults and female genitalia have been provided. Key words: Coleophora, Coleophoridae, Is. Baengnyeong, Is. Yeonpyeong, New record, Lepidoptera, Korea Abstract (Korean): A survey of the insect fauna of Baengnyeong and Yeonpyeong Islands recorded 11 species of Coleophoridae (Lepidoptera: Gelechioidea) for the first time from the west-coast region. Among them, Coleophora adjunctella Hodgkinson, C. chenopodii Oku, and C. kurokoi Oku are recorded for the first time from the Korean peninsula, and photographs of the adults and female genitalia needed for their identification are provided. Key words (Korean): Coleophora, Coleophoridae, unrecorded species, Is. Baengnyeong, Is. Yeonpyeong, Lepidoptera, Korea *Corresponding author: mothmy@korea.kr Received May 27, 2013; Revised August 29, 2013; Accepted September 23, 2013 Is. Baengnyeong and Is. Yeonpyeong are located in the West Sea and belong administratively to Ongjin-gun, Incheon Metropolitan City, Korea. Is. Baengnyeong, with an area of 45.8 km2, lies near the Northern Limit Line (NLL), approximately 10 km from Jangyeo-gun, Prov. Hwanghae, North Korea. Is. Yeonpyeong is located about 80 km west of Incheon, South Korea, and 12 km south of the coast of North Korea. Because these islands are difficult to access, their Lepidoptera fauna has not been well known, with only a few surveys on Macrolepidoptera by Lee et al. (2006), Park et al. (2006), and Park et al. (2012). No report on Coleophoridae has been made, and this is the first report of Coleophoridae for Is. Baengnyeong and Is. Yeonpyeong in the West Sea. The family Coleophoridae is closely related to Blastobasidae, Momphidae, and Pterolonchidae, and the rank of Coleophoridae has been confused owing to the lack of a phylogenetic analysis. Hodges (1998) assigned these four families to Coleophoridae as subfamilies (Coleophorinae, Blastobasinae, Momphinae, and Pterolonchinae), whereas Karsholt and Razowski (1996) and Baldizzone et al. (2006) treated each as a family. The Korean Society of Applied Entomology (KSAE) retains the exclusive copyright to reproduce and distribute for all KSAE publications. The journal follows an open access policy. 322 Korean J. Appl. Entomol. According to
Figs. 1-3. Adults: 1. Coleophora adjunctella Hodgkinson; 2. Coleophora chenopodii Oku; 3. Coleophora kurokoi Oku. Fig. 4. Map indicating the location of Baengnyeong and Yeonpyeong Islands. a new phylogenetic study of the superfamily Gelechioidea based on 19 nuclear gene sequences by Mitter et al. (pers. comm.), the four families are monophyletic and sister to a clade containing Gelechiidae and Cosmopterigidae. Within the clade including the four families, each family is strongly monophyletic, with bootstrap proportions of 100% in maximum likelihood analysis, and Coleophoridae appears to be more closely related to Momphidae than to Blastobasidae. The family comprises more than 1,342 known species worldwide, and all but 16 or fewer of them belong to Coleophora (Baldizzone et al., 2006). In adjacent countries, 43 species are known from China (Baldizzone, 1989; Hua, 2005) and 67 species from Japan (www.jpmoth.org). In Korea, 28 species, all in the genus Coleophora, have been recognized (Kim et al., 2013). From the material collected on both islands, we found 11 species of Coleophora, including three species that have not previously been recorded from Korea. Materials and methods The present study is based on specimens collected from the two islands, Is. Baengnyeong and Is. Yeonpyeong, in the West Sea in 2006 and 2010. The material is deposited in the Korea National Arboretum (KNA), and part of it is in the collection of the Plant Quarantine Technology Center, Animal and Plant Quarantine Agency (QIA). The specimens were collected mainly by light traps using mercury vapour lamps (220 V/200 W). For morphological studies, external structures and genital characters were examined under a stereo microscope (Leica S8 APO; Leica, Germany). A Nikon D90 (Nikon, Japan) and a Carl Zeiss Axio Imager.M2 (Zeiss, Germany) were used for the digital photography.
The microscopic images were taken using the IMT i-Solution system (IMT i-Solution Inc., Scarborough, Canada). Terminology for the genitalia follows Razowski (1989). Systematic accounts Family Coleophoridae Hübner, [1825] Genus Coleophora Hübner, 1822; type species: Tinea anatipennella Hübner, 1796. Coleophora adjunctella Hodgkinson, 1882 (Figs. 1, 5-5a) 흰띠통나방 (new Korean name) Coleophora adjunctella Hodgkinson, 1882, Ent. Monthly Mag., 18: 189; Razowski, 1990: 116-117; Karsholt and Razowski, 1996: 90; Baldizzone and Savenkov, 2002: 373; Baldizzone, Wolf, and Landry, 2006: 21. Type locality (TL): Ulverston, United Kingdom. Diagnosis. The species is similar to the European species Coleophora maritimella Newman in the wing pattern, but it can easily be differentiated by the absence of a whitish costal streak on the forewing. Figs. 5-7. Female genitalia: 5. Coleophora adjunctella Hodgkinson; 5a. Close-up of the signum; 6. Coleophora chenopodii Oku; 6a. Close-up of the signum; 7. Coleophora kurokoi Oku; 7a. Close-up of the signum (scale bar: 0.5 mm). Redescription of adult (Fig. 1). Wingspan 8.0-11.0 mm. Head and thorax reddish brown. Frons dark brown; vertex brown. Antenna reddish brown, speckled with dark fuscous scales beyond two-thirds of its length, shorter than diameter of eye; flagellum reddish brown. Labial palps brown. Forewing ground color pale brownish orange; streaks not visible; fringe pale brownish orange on termen, tinged with yellowish brown on tornus and posterior margin. Hindwing pale brownish orange. Legs reddish brown; tarsi with creamy-white basal part on each segment. Male not available in this study. Female genitalia (Figs. 5-5a). Papillae anales elongated, lobed laterally. Apophyses posteriores more than twice length of apophyses anteriores. Sterigma short, caudal margin concave; ostial plate broad. Antrum tubular, sclerotized.
Ductus bursae long, distal 1/3 with two rows of thick conical spines; middle part inflated, membranous, coiled; anterior 1/3 narrow, membranous. Corpus bursae ovate; signum lanceolate in basal part, with a long, thorn-like median process (Fig. 5a). Material examined. 1♀, Is. Yeonpyeong, Ongjin-gun, Incheon Metropolitan City, N37°40’21.0”, E125°42’42.6”, 31.viii.2010 (SY Park & JS Lim), genitalia slide no. Animal and Plant Quarantine Agency (QIA)-37. Host plant. Juncus gerardii Loisel. (Juncaceae) (Emmet, 1996). Distribution. Korea (new record), Russian Far East, Turkey, Turkmenistan, Iran, Afghanistan, Europe. Coleophora chenopodii Oku, 1965 (Figs. 2, 6-6a) 서해명아주통나방 (new Korean name) Coleophora chenopodii Oku, 1965, Ins. Mat., 27: 121; Baldizzone, Wolf, and Landry, 2006: 41. TL: Sapporo, Hokkaido, Japan. Diagnosis. The female genitalia (Figs. 6-6a) are similar to those of Coleophora versurella Zeller, but the species can be distinguished by the large corpus bursae, the characteristic sterigma, and the long median process of the signum. The forewing is relatively broad, with the costa gradually arched. Redescription of adult (Fig. 2). Wingspan 11.0-13.0 mm. Head and thorax pale ochreous brown. Frons brown, tinged with yellowish brown. Antenna with scape entirely brown, as long as diameter of eye; flagellum entirely white. Labial palps white; second segment brownish grey; third segment tinged with white. Forewing ground color pale brownish orange, with fine brownish streaks along veins; fringe brownish orange, rarely mixed with dark brownish scales. Hindwing more or less lanceolate; fringes brownish orange. Legs grayish brown. Male not available in this study. Female genitalia (Figs. 6-6a). Papillae anales small. Apophyses posteriores twice as long as apophyses anteriores. Lamella postvaginalis long, heavily sclerotized, deeply incised on caudal margin, with elongate lateral lobes anteriorly.
Ostium bursae broad, thick, concave on caudal margin of sterigma. Ductus bursae long, narrow, with two long, band-like rows of tiny conical spines. Corpus bursae rather small, pear-shaped; signum with a thorn-like median process. Material examined. 1♀, Is. Yeonpyeong, Ongjin-gun, Incheon Metropolitan City, N37°40’21.0”, E125°42’42.6”, 31.viii.2010 (SY Park & JS Lim), gen. slide no. QIA-26. Host plant. Chenopodium album var. centrorubrum Makino (Chenopodiaceae) (Oku, 1965). Distribution. Korea (new record), Japan (Hokkaido, Honshu, Shikoku), Russian Far East. Coleophora kurokoi Oku, 1974 (Figs. 3, 7-7a) 국화통나방 (new Korean name) Coleophora kurokoi Oku, 1974, Kontyû, 42: 256-257; Baldizzone, 1989: 208; Baldizzone, Wolf, and Landry, 2006: 72. TL: Sakai, Osaka, Japan. Diagnosis. This species is similar to Coleophora yomogiella Oku in superficial appearance and in genitalia structure, but is clearly distinguished from the latter by the female genitalia, in which the caudal and central parts of the ductus bursae are chitinized. The adult and larval cases are smaller than those of C. yomogiella. Redescription of adult (Fig. 3). Wingspan 9.0-11.0 mm. Head and frons ochreous brown. Antenna creamy white, somewhat faded and narrowed apically, shorter than diameter of eye; flagellum entirely yellowish white. Labial palps white; second segment ochreous brown. Forewing ground color ochreous brown, streaked with creamy white along costa and veins; fringe orange white, tinged with creamy white. Hindwing brownish grey, narrowed apically. Legs entirely ochreous brown. Male not available in this study. Female genitalia (Figs. 7-7a). Papillae anales small and short. Apophyses posteriores about twice as long as apophyses anteriores. Ostium bursae U-shaped, concave on caudal margin of sterigma. Ductus bursae long, narrow, with wall chitinized on its terminal 1/5 and central 1/5. Corpus bursae elongate, with a signum of a small patch. Material examined. 1♀, Is.
Baengnyeong, Ongjin-gun, Incheon Metropolitan City, 15.viii.2006 (KT Park & TM Kang), genitalia slide no. QIA-32; 2♀, Is. Yeonpyeong, Ongjin-gun, Incheon Metropolitan City, N37°40’21.0”, E125°42’42.6”, 31.viii.2010 (SY Park & JS Lim), gen. slide nos. QIA-30, 36. Host plant. Chrysanthemum morifolium Ramatuelle var. sinense Makino and Artemisia princeps Pamp. (Asteraceae) (Oku, 1974). Distribution. Korea (new record), China (Yunnan, Zhejiang), Japan (Honshu). List of the species collected on Is. Baengnyeong and Is. Yeonpyeong, with some taxonomic remarks Coleophora adspersella Benander, 1939 1♂, Is. Yeonpyeong, Ongjin-gun, Incheon Metropolitan City, N37°40’21.0”, E125°42’42.6”, 31.viii.2010 (SY Park & JS Lim), genitalia slide no. QIA-27. Distribution. Korea (Central), China (Shaanxi), Japan (Honshu), Russian Far East, Caucasus, Europe. Remarks. This species was first reported from Korea by Baldizzone and Savenkov (2002), based on a male specimen collected in Chuncheon, Prov. Gangwon. Coleophora cristata Baldizzone, 1989 1♂, Is. Yeonpyeong, Yeonpyeong-myeon, Ongjin-gun, Incheon Metropolitan City, N37°40’21.0”, E125°42’42.6”, 2.ix.2010 (SY Park & JS Lim), genitalia slide no. QIA-29. Distribution. Korea (Central), Eastern China, Japan (Honshu), Russian Far East. Remarks. This species was first reported from Korea by Park and Baldizzone (1992). Coleophora falkovitshella Vives, 1984 1♀, Is. Yeonpyeong, Yeonpyeong-myeon, Ongjin-gun, Incheon Metropolitan City, N37°40’21.0”, E125°42’42.6”, 31.viii.2010 (SY Park & JS Lim), genitalia slide no. QIA-28. Distribution. Korea (Central), Japan (Honshu), Russian Far East, Mongolia. Remarks. This species was first reported from Korea by Kim and Park (2009), based on a female specimen collected in Chuncheon, Prov. Gangwon. Coleophora flavovena Matsumura, 1931 1♀, Is.
Yeonpyeong, Yeonpyeong-myeon, Ongjin-gun, Incheon Metropolitan City, N37°40’21.0”, E125°42’42.6”, 8.vii.2010 (SY Park, JS Lim & GH Ko), genitalia slide no. QIA-24. Distribution. Korea (Central), Japan (Hokkaido, Honshu), Russian Far East. Remarks. It was first reported from Korea by Baldizzone and Oku (1990), and it is one of the common species in the central part of the Korean peninsula. Coleophora juncivora Baldizzone and Oku, 1990 1♀, Is. Yeonpyeong, Yeonpyeong-myeon, Ongjin-gun, Incheon Metropolitan City, N37°40’21.0”, E125°42’42.6”, 2.ix.2010 (SY Park & JS Lim), genitalia slide no. QIA-31. Distribution. Korea (Central), Japan (Honshu). Remarks. This species was first reported from Korea by Park and Baldizzone (1992), based on a female specimen collected in Jeongseon, Prov. Gangwon, in the central part of the peninsula. Coleophora parki Baldizzone and Savenkov, 2002 1♀, Is. Yeonpyeong, Yeonpyeong-myeon, Ongjin-gun, Incheon Metropolitan City, N37°40’21.0”, E125°42’42.6”, 2.ix.2010 (SY Park & JS Lim), genitalia slide no. QIA-33. Distribution. Korea (Central), Russian Far East (Primorye). Remarks. This species was first reported from Korea by Baldizzone and Savenkov (2002). Coleophora therinella Tengström, 1848 1♀, Is. Yeonpyeong, Yeonpyeong-myeon, Ongjin-gun, Incheon Metropolitan City, N37°40’21.0”, E125°42’42.6”, 31.viii.2010 (SY Park & JS Lim), genitalia slide no. QIA-21. Distribution. Korea (Central, South), Japan (Honshu), Russian Far East, Southern Siberia, Altai, Caucasus, Mongolia, Europe, North America. Remarks. This species was first reported from Korea by Park and Baldizzone (1992). Coleophora vibicigerella Zeller, 1839 1♂, Is. Baengnyeong, Baengnyeong-myeon, Ongjin-gun, Incheon Metropolitan City, 15.viii.2006 (KT Park & TM Kang), genitalia slide no. QIA-20; 1♂, Is. Yeonpyeong, Yeonpyeong-myeon, Ongjin-gun, Incheon Metropolitan City, N37°40’21.0”, E125°42’42.6”, 6.vii.2010 (SY Park, JS Lim & GH Ko), genitalia slide no.
QIA-22. Distribution. Korea (Central, South), China, Europe, North Africa, Kazakhstan, Kyrgyzstan, Altai. Remarks. This species was first reported from Korea by Park and Baldizzone (1992). Acknowledgments This study was supported by the Animal and Plant Quarantine Agency (QIA) and by the Forest Science & Technology Project (Project No. S121212L110140) of the Korea Forest Service. Literature Cited Baldizzone, G., 1989. A taxonomic review of the Coleophoridae (Lepidoptera) of China. Contribution to the knowledge of the Coleophoridae LIII. Tijds. Ent. 132, 199-240. Baldizzone, G., Oku, T., 1990. Descriptions of Japanese Coleophoridae III. Tyô to Ga 41, 97-112. Baldizzone, G., 1996. Coleophoridae. in: Karsholt, O., Razowski, J. (eds.), The Lepidoptera of Europe. A distributional checklist. Apollo Books, Stenstrup, Denmark. pp. 84-95. Baldizzone, G., Savenkov, N., 2002. Casebearers (Lepidoptera: Coleophoridae) of the Far East region of Russia. I. (Contribution to the knowledge of the Eastern Palaearctic insects 12). Beitr. Ent. 2, 367-405. Baldizzone, G., Wolf van der, H., Landry, J.F., 2006. Coleophoridae, Coleophorinae (Lepidoptera). in: World Catalogue of Insects, Vol. 8. Apollo Books Aps., Stenstrup, Denmark. 215 pp. Emmet, A.M., 1996. The moths and butterflies of Great Britain and Ireland, Vol. 1 [Yponomeutidae - Elachistidae]. Harley Books, Colchester. 328 pp. Hodges, R.W., 1998. The Gelechioidea. pp. 131-158 in: Kristensen, N.P. (ed.), Lepidoptera, moths and butterflies. Vol. 1: Evolution, Systematics, and Biogeography. Handbook of Zoology, Vol. IV Arthropoda: Insecta, Part 35. Walter de Gruyter, Berlin and New York. 491 pp. Hua, L.Z., 2005. Family Coleophoridae. List of Chinese Insects, Vol. III: Lepidoptera. Sun Yat-sen University Press, Guangzhou. pp. 12. Kim, M., Park, K.T., 2009. A taxonomic review of the genus Coleophora Hübner (Lepidoptera: Coleophoridae) in Korea. J. Asia Pac. Entomol. 12, 183-198.
Kim, M., Lee, S., Lee, H.S., 2013. A new record of Coleophora virgaureae (Lepidoptera: Gelechioidea: Coleophoridae) from Korea. Korean J. Appl. Entomol. 52, 125-127. Lee, I.K. (ed.), 2006. Survey report on the natural heritage and resource near militarized zones in the Republic of Korea: Western regions. Cultural Heritage Administration. 463 pp. Oku, T., 1965. Descriptions of nine new species of the genus Coleophora from Japan, with notes on other species (Lepidoptera, Coleophoridae). Ins. Mats. 27, 114-124. Oku, T., 1974. Two new species of Coleophora (Lepidoptera, Coleophoridae) feeding on Artemisia in Japan. Kontyû, Tokyo 42, 254-257. Park, K.T., Baldizzone, G., 1992. Systematics of Coleophoridae (Lepidoptera) in Korea. Korean J. Appl. Entomol. 31, 516-535. Park, K.T., Kang, T.M., Kim, M.Y., Chae, M.Y., Ji, E.M., Bae, Y.S., 2006. Discovery of ten species of subtropical moths on Is. Daecheong, Korea. Korean J. Appl. Entomol. 45, 261-268. Park, S.J., Lim, H.Y., Hong, E.J., Jeon, Y.L., Kim, B.J., 2012. Survey on insect diversity of Yeonpyeong-do Island, Korea. J. Kor. Nat. 5, 17-26. Razowski, J., 1989. Genitalia terminology in Coleophoridae. Nota Lepid. 12, 192-197. Razowski, J., 1990. Motyle (Lepidoptera) Polski, Część XVI - Coleophoridae. Monografie Fauny Polski, 18. Polska Akademia Nauk, Warszawa, Kraków. 270 pp.
work_323wup4gebfbbbnc2rywryih3e ---- Objective Assessment of Tip Projection and the Nasolabial Angle in Rhinoplasty Susanne Spörri; Daniel Simmen, MD; Hans Rudolf Briner, MD; Nick Jones, MD, FRCS Objective: To provide an objective method to measure the extent of nasal tip projection and the nasolabial angle. Design: We retrospectively studied preoperative and postoperative images using a novel approach. The constant position of the cornea in lateral views and the diameter of the iris in frontal views were used to standardize and compare digitalized images of patients before and after surgery. We tested this objective assessment technique using the digitized slides of patients with saddle nose deformities and measured changes in their nasal tip projection and nasolabial angle.
We included 63 patients who had undergone an open rhinoplasty with the I-beam technique by the same surgeon over a 7-year period. We tested the reproducibility of these measurements with 10 independent investigators. We also determined whether the measurements using this objective technique correlated with the surgeon’s or patients’ subjective assessments of the outcome. Results: We were able to use the objective measurement technique in 42 patients (67%). It was not possible to use the technique in 21 patients (33%) because the photographic conditions had not been fulfilled. The measurement variability of 10 different investigators, expressed as standard deviations in percentage of the mean value, was 6.7% for nasal tip projection and 1.3% for the nasolabial angle. The surgeon’s subjective assessment of the outcome correlated with the objective changes of nasal tip projection (P = .045) and the nasolabial angle (P = .045). There was no correlation between the patients’ assessments and the objective measurements. Conclusions: The objective measurements tested were easy to use and investigator independent. They also correlated with the surgeon’s assessment of outcome. Arch Facial Plast Surg. 2004;6:295-298 Accurate preoperative and postoperative analysis and evaluation of the anatomy and appearance of the nose are essential for assessing the efficacy of surgical techniques, as well as for modifying surgical procedures based on their long-term outcome.1-9 Photographs,3,4 cephalometric radiographs,1 and direct clinical measurements5,6 have been the primary means by which the nasal tip has been assessed, but few studies have addressed quantitative changes in nasal tip projection7,8 and the nasolabial angle. A universally accepted method of assessing nasal tip projection and the nasolabial angle has not been described, to our knowledge.
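The inter-investigator variability quoted in the Results, standard deviation expressed as a percentage of the mean, is the coefficient of variation. A minimal sketch of that computation follows; the readings are made-up illustrative values, not data from the study:

```python
import statistics

def coefficient_of_variation(measurements):
    """Sample standard deviation expressed as a percentage of the mean."""
    mean = statistics.mean(measurements)
    sd = statistics.stdev(measurements)  # sample SD (n - 1 denominator)
    return 100.0 * sd / mean

# Hypothetical tip-projection readings (mm) by 10 independent investigators
readings = [27.1, 28.4, 26.5, 29.0, 27.8, 26.2, 28.9, 27.5, 30.1, 26.8]
print(f"Variability: {coefficient_of_variation(readings):.1f}% of the mean")
```

A value near 6.7% would match the tip-projection figure reported above; the lower 1.3% for the nasolabial angle indicates tighter agreement between investigators.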
The measuring techniques described to date are laborious and often dependent on the patient's compliance, and they do not make use of modern computer technology, eg, measurements from life-size projections of slides,8,9 or tools such as the nasal projectometer.5,9 Our aim was to establish an objective, practical, easy-to-use, computer-assisted rhinoplasty assessment technique for measuring the extent of nasal tip projection and the nasolabial angle based on an iris-dependent calibration of existing profile photographs (if they fulfill minimal photographic conditions).

From the Department of Otorhinolaryngology, Head and Neck Surgery, University Hospital (Ms Spörri), and the Center for Otology, Skull Base Surgery, Rhinology, and Facial Plastic Surgery, Klinik Hirslanden (Drs Simmen and Briner), Zurich, Switzerland; and the Department of Otorhinolaryngology, University of Nottingham, Nottingham, England (Dr Jones). (REPRINTED) ARCH FACIAL PLAST SURG/ VOL 6, SEP/OCT 2004 WWW.ARCHFACIAL.COM 295 ©2004 American Medical Association. All rights reserved.

METHODS

The photographs of 63 patients with saddle nose deformities were used to study the rhinoplasty assessment technique. All patients had changes in the degree of nasal tip projection and the nasolabial angle after an open rhinoplasty with an I-beam transplantation performed by the same surgeon over a 7-year period. I-beam transplantation is a surgical technique that is used to increase nasal tip support and projection. There were 31 men (49%) and 32 women (51%), with a mean ± SD age of 40.1 ± 13.5 years (age range, 15-64 years). All patients were white. Follow-up ranged from 1 to 24 months. During the follow-up period, photographs of the right and left profiles and a frontal view were taken 1 to 4 times. The measurements were performed on retrospectively digitized preoperative and postoperative right profile slides of nonsmiling patients.
We used commercially available software (ImageAccess; PIC Systems AG, Glattbrugg, Switzerland). On preoperative and postoperative digitized slides, the degree of nasal tip projection and the nasolabial angle were analyzed objectively using 4 defined lines superimposed on the face (Figure 1 and Figure 2). One line, which was drawn from the superior aspect of the tragus through the lateral canthus (Figure 1, point A), extending over the nasal root, was used to define the nasal frontal angle (Figure 1, point B), which is often difficult to locate in a reproducible fashion. The measurement of the distance from A to B was used to define changes in the nasal frontal angle between the preoperative and postoperative photographs. A second line was drawn from the nasal frontal angle (Figure 1, point B) to the vermillion-cutaneous junction of the upper lip (Figure 1, point C). A third line, perpendicular to the second, meets the most projecting part of the nasal tip (Figure 1, point E). The length of this third line (Figure 1, D-E) in millimeters was used as a determinant of nasal tip projection. The angle between the second line (Figure 1, B-C) and the fourth line (Figure 1, F-G), following the columella, was used to measure the nasolabial angle. The calibration was performed using the mean ± SD diameter of the iris and cornea, which is consistently 11.5 ± 0.6 mm in adults. After calibration, the values were recalculated to life-size.

The photographic conditions require a full-profile photograph, with no rotation of a nonsmiling patient's head. The patient's eyes should be wide open, with a straight gaze, for exact calibration, and the superior aspect of the tragus, the lateral canthus, and the vermillion-cutaneous junction of the upper lip should be visible.
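The calibration and the two measurements reduce to simple coordinate geometry. Below is a minimal sketch (not the authors' software; the function names, point coordinates, and the 100-pixel iris in the example are all hypothetical), using the 11.5-mm mean iris diameter quoted above:

```python
import math

IRIS_DIAMETER_MM = 11.5  # mean adult iris/cornea diameter used for calibration

def mm_per_pixel(iris_px):
    """Scale factor (mm per pixel) from the measured iris diameter in pixels."""
    return IRIS_DIAMETER_MM / iris_px

def tip_projection_mm(b, c, e, scale):
    """Perpendicular distance from tip point E to line B-C, scaled to life-size."""
    (bx, by), (cx, cy), (ex, ey) = b, c, e
    area2 = abs((cx - bx) * (by - ey) - (bx - ex) * (cy - by))  # twice the triangle area
    return scale * area2 / math.hypot(cx - bx, cy - by)

def nasolabial_angle_deg(b, c, f, g):
    """Angle between line B-C and the columella line F-G, in degrees."""
    v1 = (c[0] - b[0], c[1] - b[1])
    v2 = (g[0] - f[0], g[1] - f[1])
    cos_a = (v1[0] * v2[0] + v1[1] * v2[1]) / (math.hypot(*v1) * math.hypot(*v2))
    return math.degrees(math.acos(cos_a))

# A 100-pixel iris implies 0.115 mm/pixel; a tip lying 30 px from line B-C
# then corresponds to about 3.45 mm of projection.
scale = mm_per_pixel(100)
print(tip_projection_mm((0, 0), (0, 100), (30, 50), scale))  # ~3.45
```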
To evaluate investigator-dependent variability, 10 randomly chosen adult individuals were instructed regarding the aforementioned criteria and taught how to use the computer software to magnify or reduce the images, allowing them to standardize the images before they did the measurements. Questions were answered on demand.

To compare objective measurements with subjective assessment, the patients and the surgeon were asked whether the outcome of the operation was successful. The 2 groups were classified as successful or unsuccessful, and each of the patient's and the surgeon's results were compared using the Mann-Whitney rank-sum test. Statistical significance was defined by P≤.05.

RESULTS

Changes in nasal tip projection and the nasolabial angle could be quantified in 42 patients (67%). Figure 2 shows a representative example of these preoperative and postoperative measurements. An increase in nasal tip projection and nasolabial angle was noted in 31 (74%) and 33 (79%) patients, respectively, while a decrease occurred in 11 (26%) and 9 (21%) patients.

We were unable to use the rhinoplasty assessment technique in 21 patients (33%). The pictures of 12 patients (19%) failed to show the ears; therefore, the necessary landmarks were lacking. Calibration was not possible in 1 case because the patient's eyes were closed and therefore the irises were hidden. The preoperative pictures of 5 patients (8%) and the preoperative and postoperative pictures of 2 patients (3%) were missing. In the preoperative picture of 1 patient, the profile was rotated.

The measurement variability of 10 different investigators is summarized in the Table. According to the surgeon's assessment, the measurements in the group in which surgery was successful (n = 34) differed significantly from those in the group in which surgery was unsuccessful (n = 8) (P = .045 for both nasal tip projection and nasolabial angle).
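The Mann-Whitney rank-sum comparison used here can be illustrated with a minimal stdlib sketch; the function below computes only the U statistic (not the p-value), and the sample data in the example are invented for illustration:

```python
def rank_sum_u(x, y):
    """Mann-Whitney U statistic for samples x and y (midranks for ties)."""
    combined = sorted((v, i) for i, v in enumerate(x + y))
    ranks = {}
    j = 0
    while j < len(combined):
        k = j
        while k + 1 < len(combined) and combined[k + 1][0] == combined[j][0]:
            k += 1
        midrank = (j + k) / 2 + 1  # average rank for a run of tied values
        for m in range(j, k + 1):
            ranks[combined[m][1]] = midrank
        j = k + 1
    r1 = sum(ranks[i] for i in range(len(x)))  # rank sum of the first sample
    return r1 - len(x) * (len(x) + 1) / 2

# Completely separated samples give the extreme values of U:
print(rank_sum_u([1, 2, 3], [4, 5, 6]))  # 0.0
print(rank_sum_u([4, 5, 6], [1, 2, 3]))  # 9.0
```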
There were no significant differences regarding nasal tip projection (P = .18) or nasolabial angle (P = .08) between the 2 groups (38 successful outcomes vs 4 unsuccessful outcomes) when the measurements were assessed by the patients.

COMMENT

We have developed and tested an objective computer-assisted technique to assess nasal tip projection and the nasolabial angle with the use of an iris-dependent calibration, which is possible because the diameter of the iris in adults is 11.5 ± 0.6 mm, showing an extraordinary constancy, and any divergence from these measurements is pathological.10

Figure 1. Lateral view. Rhinoplasty assessment technique and photographic conditions illustrated by 4 lines superimposed on the face. A indicates the lateral canthus; B, the nasal frontal angle defined by a line from the superior aspect of the tragus through point A extended on to the nasal root; C, the vermillion cutaneous junction of the upper lip; D-E, the line that is perpendicular to B-C, extending to the most projecting part of the nasal tip; and F-G, the line that follows the columella. The length of D to E is used as a determinant for nasal tip projection. The angle between lines B to C and F to G determines the nasolabial angle.

With the use of this technique, differences in photograph size do not affect the measurements, as every distance is standardized. The technique is easy to use and can be applied to most existing photographs that fulfill limited photographic criteria. Using computer technology is preferable to using manual measurements with callipers and rulers (eg, nasal tip projectometer5) or measurements taken from the life-size projection of slides7 because the results are more reproducible.
Given the current excellent image quality and ongoing refinements in digital photography, converting to digital photography is fast and cost-effective,11 especially when the cost of digital photography is compared with the cost of serial cephalometric studies. Furthermore, cephalometric studies are laborious and are associated with radiation exposure.1

Instead of nasal tip projection being expressed as a ratio of midface length,4,12,13 the technique described herein expresses measurements in absolute values based on iris-dependent calibration, which allows a more accurate comparison for future studies.

We analyzed interinvestigator variability, and although the different investigators were only briefly introduced to the technique, the interinvestigator variability was less than 10%. We conclude that this method is easy to learn and provides reproducible results. The objective values correlate with the subjective assessment of the experienced surgeon; therefore, the technique can also be used to rate the success of rhinoplasty. To our knowledge, this correlation has not been demonstrated with any other measuring technique. The surgeon's assessment did not correlate with the patients' assessment, probably because of the small number of patients involved and because patients are usually satisfied more easily than surgeons.2

Interinvestigator Variability of the Rhinoplasty Assessment Technique

Investigator No.*  Age, y†  Profession       Nasal Tip Projection, mm‡  Nasolabial Angle, Degrees§
1                  41       Physician        18.40                      99.91
2                  39       Photographer     19.06                      99.29
3                  25       Secretary        18.28                      98.73
4                  23       Secretary        17.12                      97.26
5                  39       Secretary        16.80                      99.17
6                  36       Nurse            17.95                      101.40
7                  28       Nurse            20.07                      101.20
8                  26       Nurse assistant  19.58                      100.20
9                  42       Nurse            20.47                      100.45
10                 33       Physician        17.55                      101.18

*All 10 investigators were female. †Mean, 26. ‡Mean ± SD, 18.53 ± 1.24; SD%, 6.7. §Mean ± SD, 99.88 ± 1.30; SD%, 1.3.

Figure 2. A, The right profile of a patient before rhinoplasty, with 4 lines and the resulting values of the preoperative nasal tip projection (NTP) (9.3 mm) and the nasolabial angle (NLA) (105.8°) superimposed. B, The same patient after rhinoplasty, with changes in the NTP (12.3 mm) and the NLA (97.6°). The computer-assisted calibration defines the radius of the iris as 5.75 mm in each picture, ensuring standardization of the patients' photographs.

Accepted for publication December 11, 2003.

This study was presented in part at "The Nose 2000 . . . and Beyond"; September 20-23, 2000; Washington, DC; and at the 89th Spring Meeting of the Swiss Society of Otorhinolaryngology; June 20-22, 2002; Pontresina, Switzerland.

Correspondence: Daniel Simmen, MD, ORL-Zentrum, Klinik Hirslanden, Witellikerstrasse 40, CH-8029 Zurich, Switzerland (simmen@orl-zentrum.com).

REFERENCES

1. Werther JR, Freeman JP. Changes in nasal tip projection and rotation after septorhinoplasty: a cephalometric analysis. J Oral Maxillofac Surg. 1998;56:728-733.
2. Gurley JM, Pilgram T, Perlyn CA, Marsh JL. Long-term outcome of autogenous rib graft nasal reconstruction. Plast Reconstr Surg. 2001;108:1895-1907.
3. Farkas LG, Bryson W, Klotz J. Is photogrammetry of the face reliable? Plast Reconstr Surg. 1980;66:346-355.
4. Rich JS, Friedman WH, Pearlman SJ. The effects of lower lateral cartilage excision on nasal tip projection. Arch Otolaryngol Head Neck Surg. 1991;117:56-59.
5. Petroff MA, McCollough EG, Hom D, et al. Nasal tip projection: quantitative changes following rhinoplasty. Arch Otolaryngol Head Neck Surg. 1991;117:783-788.
6. Byrd HS, Hobar PC. Rhinoplasty: a practical guide for surgical planning. Plast Reconstr Surg. 1993;91:642-654.
7. Vuyk HD, Oakenfull C, Plaat RE.
A quantitative appraisal of change in nasal tip projection after open rhinoplasty. Rhinology. 1997;35:124-129.
8. Kohout MP, Monasterio Aljaro L, Farkas LG, Mulliken JB. Photogrammetric comparison of two methods for synchronous repair of bilateral cleft lip and nasal deformity. Plast Reconstr Surg. 1998;102:1339-1349.
9. Webster RC, Davidson TM, Rubin FF, Smith RC. Recording projection of nasal landmarks in rhinoplasty. Laryngoscope. 1977;87:1207-1211.
10. Rauber A, Kopsch F. Anatomie des Menschen, Band III, Nervensystem, Sinnesorgane. Stuttgart, Germany: Georg Thieme Verlag; 1987:533-538.
11. Hollenbeak CS, Kokoska M, Stack BC. Cost considerations of converting to digital photography. Arch Facial Plast Surg. 2000;2:122-123.
12. Crumley RL, Lanser M. Quantitative analysis of nasal tip projection. Laryngoscope. 1988;98:202-208.
13. Bafaqeeh SA. Open rhinoplasty: effectiveness of different tipplasty techniques to increase nasal tip projection. Am J Otolaryngol. 2000;21:231-237.

Sub-Electron Read Noise at MHz Pixel Rates

Craig D. Mackay, Robert N. Tubbs, Institute of Astronomy, University of Cambridge, Madingley Road, Cambridge, CB3 0HA, UK
Ray Bell, David Burt, Paul Jerram, Ian Moody, Marconi Applied Technologies, Chelmsford, Essex, UK

ABSTRACT

A radically new CCD development by Marconi Applied Technologies has enabled substantial internal gain within the CCD before the signal reaches the output amplifier. With reasonably high gain, sub-electron readout noise levels are achieved even at MHz pixel rates. This paper reports a detailed assessment of these devices, including novel methods of measuring their properties when operated at peak mean signal levels well below one electron per pixel.
The devices are shown to be photon shot noise limited at essentially all light levels below saturation. Even at the lowest signal levels the charge transfer efficiency is good. The conclusion is that these new devices have radically changed the balance in the perpetual trade-off between readout noise and the speed of readout. They will force a re-evaluation of camera technologies and imaging strategies to enable the maximum benefit to be gained from these high-speed, essentially noiseless readout devices. This new LLLCCD technology, in conjunction with thinning (backside illumination), should provide detectors which will be very close indeed to being theoretically perfect.

1. INTRODUCTION

Although CCDs are suitable for such a wide range of applications, there are still a number of areas where CCDs show significant limitations in their performance. The principal weakness is that their readout noise (the system noise that is achieved in the absence of any input signal) increases substantially as the pixel readout rate increases. As many applications are demanding increasing resolution, it is essential that the corresponding increase in pixel rate is not accompanied by a reduction in performance caused by increasing readout noise, particularly when the full well capacity of the higher resolution CCDs (generally with smaller pixels) is also significantly reduced. A novel CCD architecture has been developed by Marconi Applied Technologies, Chelmsford, UK, with a view to providing CCD cameras with a sensitivity and readout noise similar to those obtained by the best image-intensified cameras. This same architecture will also allow a dramatic improvement in the performance of high-speed scientific imaging systems for a variety of applications. An evaluation of a video rate camera using a low light level CCD (LLLCCD) has already been published by Harris et al.1 A more detailed description of the architecture of the device is also presented by Cochrane et al.
(2000)6, and by Wilson (2000)7, as well as at this conference by Jerram et al.2 The purpose of this paper is to carry out a critical evaluation of the LLLCCD technology developed by Marconi Applied Technologies, so as to quantify in as much detail as possible the performance of the LLLCCD technology for scientific imaging applications.

Correspondence: email: cdm@ast.cam.ac.uk

2. LLLCCD ARCHITECTURE

The technology behind the LLLCCD is disclosed in European patent application EP 0 866 501 A1. In essence, a conventional CCD structure is used with the output register extended by an additional section that has one of the three phases clocked with a much higher voltage than is needed purely for charge transfer. The large electric fields established in the semiconductor material beneath pairs of serial transfer electrodes cause charge carriers to be accelerated to sufficiently high velocities that additional carriers are generated by impact ionisation on transfer between the regions under the electrodes. The charge multiplication per transfer is really quite small, typically one percent, but with a large number of transfers (591 for the device characterised here) substantial electronic gains may be achieved. The output of this extended serial register is passed on to a conventional CCD output amplifier. The electronic noise of this amplifier, which might be equivalent to a few tens of electrons at MHz pixel rates, is now divided by the gain factor of the multiplication register, which, if this gain is high enough, will reduce the effective output read noise to levels much smaller than one electron rms. This architecture has many advantages.
All the developments that have led to the astonishingly high performance of scientific CCDs, such as their remarkable charge transfer efficiency, the extremely high quantum efficiencies of thinned (back-illuminated) devices, and the very low dark currents now achieved by operating the imaging area in inverted mode, are unaffected by the high-gain multiplication register of the LLLCCD. By varying the amplitude of the higher-voltage clock phase in the extended register, the net gain of the register may be varied from unity (when the multiplication register is operated with normal clock levels, and the output amplifier gives its normal equivalent readout noise) to a high gain which could be in excess of 10,000. The only limitation of this method is that the dynamic range of the CCD operated at high gain will be limited by the capacity of the multiplication register in electrons divided by the gain of the register.

3. SIGNAL-TO-NOISE CONSIDERATIONS

It is relatively straightforward to model the characteristics of the multiplication register. This register has 591 stages (for the CCD65 discussed here), each of which offers a low probability (p, typically 1%) of converting one electron into two electrons. The overall gain is (1+p)^591. In this way a multiplication probability of 1% gives an overall gain of approximately 358, while a multiplication probability of 1.5% gives a mean gain of approximately 6670. In an ideal world the same gain would be applied to every electron entering the multiplication register. Unfortunately, because of the statistical nature of the multiplication process, there is a wide dispersion in the number of electrons generated from one input electron. This is shown in figure 1, which gives, for a gain probability of 1.5% (overall gain of 6670), the distribution of the number of output electrons per input electron.
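The quoted figures follow directly from the (1+p)^591 formula and the division of the amplifier noise by the gain. A quick check (the 50 e- amplifier noise is an assumed figure, consistent with the "few tens of electrons" mentioned earlier; note that exactly p = 0.015 gives ≈6630, so the quoted 6670 corresponds to a per-stage probability fractionally above 1.5%):

```python
N_STAGES = 591  # number of multiplication stages in the CCD65 register

def register_gain(p):
    """Mean overall gain for a per-stage multiplication probability p."""
    return (1.0 + p) ** N_STAGES

def effective_read_noise(amp_noise_e, p):
    """Output-amplifier noise referred back through the register, e- rms."""
    return amp_noise_e / register_gain(p)

print(round(register_gain(0.01)))        # 358, as quoted
print(round(register_gain(0.015)))       # ~6630 (the text quotes 6670)
print(effective_read_noise(50.0, 0.01))  # ~0.14 e- rms for a 50 e- amplifier
```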
The probability distribution shown in Figure 1 was obtained by convolving together appropriate probability distributions for the amplification stages.

Figure 1: The probability distribution for the signal output from a multiplication register of 591 stages with a mean gain per stage of 1.015. The mean overall gain for the entire multiplication register is then 6670.

Figure 2: The probability distribution in the number of electrons output from a multiplication register of 591 stages with a mean gain per stage of 1.015 and a total overall gain of 6670, with an input signal which has a mean of 16 electrons and a Poisson distribution.

We can see that this distribution is a monotonically decreasing function with no peak value around the average gain value. The standard deviation in the gain is virtually identical to the mean. The effect of the gain can be seen in Figure 2, which shows the number of electrons coming out of the multiplication register. When there is no gain in the register we have a mean number of electrons equal to 16 and an RMS dispersion of 4 electrons. The distribution in the number of electrons output from the multiplication register when operated with a high gain (>10) has a mean value of 16 times the gain, but the signal-to-noise in the output electrons is reduced by a factor of √2. This reduction in signal-to-noise is brought about by the dispersion in the gain of the multiplication stages. We can therefore identify two regimes: firstly, if the gain is unity, then the signal-to-noise in the output of the register is set by the shot noise, whereas if the gain is high then the signal-to-noise is decreased by a factor of √2.
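The single-electron output distribution and the √2 signal-to-noise penalty can be reproduced with a small Monte Carlo model of the register (a sketch, not the authors' calculation: each stage is treated as an independent Bernoulli duplication with probability 1.5%):

```python
import numpy as np

rng = np.random.default_rng(1)
N_STAGES, P = 591, 0.015  # per-stage duplication probability of 1.5%

def amplify(n_electrons):
    """Pass electron counts through the register: at each of the 591 stages
    every electron may be duplicated (impact ionisation) with probability P."""
    n = np.asarray(n_electrons, dtype=np.int64).copy()
    for _ in range(N_STAGES):
        n += rng.binomial(n, P)
    return n

# Single input electrons: a monotonically decreasing (exponential-like)
# output distribution whose standard deviation is close to its mean.
g = amplify(np.ones(20000, dtype=np.int64))
print(g.mean(), g.std() / g.mean())   # mean ~6.6e3, ratio ~1

# Poisson input with mean 16: output SNR falls from 4 to about 4/sqrt(2).
pix = amplify(rng.poisson(16, size=5000))
print(pix.mean() / pix.std())         # ~2.8
```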
However, when the gain is high the effect of the readout noise will be insignificant, providing an enormous improvement at low light levels. This reduction of √2 in the signal-to-noise for a given number of input electrons is exactly equivalent to the effect of using a detector system free from this effect but which has a detective quantum efficiency exactly half that of the device operated in unity gain mode. We can also see that it is possible to run the device in a photon counting mode as follows. Standard CCDs cannot be used for photon counting since even at slow readout rates they have a read noise of 2-3 electrons minimum. With the LLLCCD technology, if we set the multiplication register gain to be much higher than the device readout noise in the absence of gain, then the great majority of electrons entering the multiplication register will exit with an amplitude which is very much greater than the noise in the output stage. Selecting a threshold of, say, five times the standard deviation in the noise of the output amplifier allows essentially all the electrons entering the register to be detected with a good signal-to-noise. Because of the wide dispersion in the energies of these amplified electrons, the best way to process them is to accept that each represents one photon and to ignore the dispersion. We give each event detection a weight of unity. In this way we are able to restore all the lost quantum efficiency by working in photon counting mode. The big disadvantage is that we have to restrict the signal intensity so that there is an acceptably small risk of two photons being detected on the same pixel.
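The cost of the threshold can be estimated if the single-electron pulse-height distribution is approximated as exponential with mean equal to the gain (an assumption consistent with the monotonically decreasing curve of figure 1): the fraction of genuine events lost below a k-sigma threshold is then 1 − exp(−k·σ/G). A sketch with assumed numbers:

```python
import math

def threshold_loss(read_noise_e, gain, k=5.0):
    """Fraction of genuine single-electron events falling below a k-sigma
    threshold, assuming an exponential pulse-height distribution of mean gain."""
    return 1.0 - math.exp(-k * read_noise_e / gain)

# With an assumed 30 e- rms amplifier and a gain of 3000, a 5-sigma
# threshold misses only about 5% of real events:
print(threshold_loss(30.0, 3000.0))  # ~0.049
```

Raising the gain further makes this loss arbitrarily small, which is why the text asks only that the gain be "much higher" than the unity-gain read noise.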
Should this happen, two photons will be counted as one, something we cannot discriminate against because of the monotonically decreasing pulse height distribution shown in figure 1. In practical terms it means that the maximum photon rate that can be tolerated without significant non-linearity is approximately one photon per pixel in 30 frames. At this rate, 30 photons will be counted as 29 photons. By looking carefully at the Poisson statistics of a photon stream we can show that 1 photon in 50/30/10 frames gives 1/1.6/5% non-linearity. This non-linearity is very predictable and can be corrected for, given the locally detected photon rate. It does, however, give a small reduction in detective quantum efficiency which at higher photon arrival rates will become significant and lower the resulting DQE to the levels that are seen with the gain mechanism working normally as described above. This coincidence loss is exactly the same as for other intensifier-based photon counting systems that use framing read-out, such as the electron-bombardment CCD, or systems that use an intensifier with video camera read-out and photon counting hardware. The main difference here is that the CCD is a solid-state device with higher quantum efficiency when back-illuminated, long life and substantial immunity to light overload. In fact the coincidence losses with phosphor-based image intensifiers can be poorer because the intensified photon event can cover several read-out system pixels, greatly increasing the likelihood of coincidence occurring. The frame rate limitation may not be a problem, depending on system design.
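The quoted non-linearities follow from Poisson statistics: if photons arrive at a mean rate λ per pixel per frame and at most one count can be registered per pixel per frame, the counted fraction is (1 − e^−λ)/λ. A quick check:

```python
import math

def counted_fraction(rate_per_frame):
    """Fraction of incident photons registered when each pixel can record
    at most one photon per frame and arrivals are Poisson distributed."""
    lam = rate_per_frame
    return (1.0 - math.exp(-lam)) / lam

for frames in (50, 30, 10):
    loss = 1.0 - counted_fraction(1.0 / frames)
    print(f"1 photon per {frames} frames: {100 * loss:.1f}% loss")
# Prints losses of about 1.0%, 1.6% and 4.8%, matching the quoted 1/1.6/5%.
```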
A CCD with multiple outputs can allow very fast frame rates, as can systems that read out a smaller sub-array of the CCD. For smaller areas the frame rates can be much higher, with the CCD65, for example, reaching several hundred Hz frame rates under some circumstances. It is also important to appreciate the need for deep cooling if photon counting work is to be attempted. The CCD65 works in inverted mode to give a very low dark current of 200 electrons/pixel/second at +20C. This is virtually eliminated by cooling to -140C. Cooling to a typical Peltier-cooled temperature of -40C will reduce the dark current to about 1 electron/pixel/second, far too high for photon counting, where the maximum for linear operation might be 0.01 to 0.1 electrons/pixel/second from the signal plus the dark rate together. Cooling to lower temperatures (-55C and below) is possible with thermoelectric cooling, and this should allow photon counting operation as described above, although deeper cooling is needed to give the ultimate sensitivity.

So we now have three different regimes in which it is possible to operate this device:

1. The conventional CCD mode, with no gain in the multiplication register, and the signal-to-noise set by the photon shot noise added in quadrature with the readout noise of the CCD output amplifier.

2. The CCD operated with a gain in the multiplication register that substantially overcomes the readout noise of the output amplifier. In this case the signal-to-noise is worse than would be expected from the number of photons detected by a factor of √2. Another way to think of this degradation is to assume that the signal-to-noise is set by the photon shot noise but that the detector has half the detective quantum efficiency that it had in mode 1 above.

3. The CCD operated with high gain in the multiplication register so that the readout noise of the CCD output amplifier is completely negligible for each multiplied electron.
If each event is then thresholded and treated as a single event of equal weight, without making any attempt to consider its amplitude, then the quantum efficiency lost by operating in mode 2 above is restored, giving essentially the same quantum efficiency as in mode 1 above. There are, however, the major limitations that the maximum photon rate must be kept extremely low in order to avoid coincidence losses, which give rise to non-linearities in the response curve of the detector system, and the corresponding need for deep cooling to maintain correspondingly low levels of dark current.

It is worth noting that modes 2 and 3 above are not mutually exclusive. By designing a system with parallel output signal electronics that can photon count as well as frame average, it is possible to imagine systems that photon count in those parts of the image where the signals are low enough to give good linearity, and where the ultimate DQE is important, while in the brighter parts of the image the negligible read-out noise mode of operation is achieved and frame averaging gives the full dynamic range.

4. THE MARCONI CCD65 ARCHITECTURE

The tests described in the remainder of this paper were carried out with a front-illuminated CCD65 manufactured by Marconi Applied Technologies, Chelmsford, UK. The CCD65 is designed for frame-transfer interlaced operation for use in cameras operating at PAL or NTSC video rates. The image area consists of 576(H) by 288(V) pixels, each of 20 by 30 microns, and works in inverted mode to give an extremely low dark current (typically 200 electrons per pixel per second at a temperature of 20 C). The image and store areas use a 2-phase structure, with interlace provided in video operation by deriving alternate frames after integration in inverted mode (with both phases low) by bringing up one phase or the other.
The parallel registers allow transfer rates of up to about 1 MHz and are non-antiblooming, giving a 200,000-electron full well. The readout register is able to operate at pixel rates in excess of 12 MHz. The multiplication register has a full well capacity of approximately 900,000 electrons in order to permit high gain to be used without saturation. The output amplifier has a responsivity of 1.3 microvolts/electron and is very similar in design to the large-signal, high-speed scientific-type output amplifier included on devices such as the Marconi CCD55. The imaging area has a full well capacity of 200,000 electrons. Although this is not a format of CCD that would normally be chosen for scientific imaging applications, it is built using the same fundamental architecture that is common to many other devices in the Marconi range. The principal difference from their standard devices is, of course, the multiplication register. The approach in this paper, therefore, was to characterise the device in its normal mode of operation (no additional gain from the multiplication register) and then see to what extent those characteristics were modified as a function of gain in the multiplication register.

5. INSTRUMENTAL CONFIGURATION

The tests were done with a conventional Capella 4100 CCD imaging system manufactured by AstroCam Ltd (now PerkinElmer Life Sciences Ltd). The system is fully programmable and may be run at pixel rates of up to 5 MHz with 14-bit digitisation. The system has programmable gain, clock waveform generation and signal processing timing. It is fully integrated with several software packages that allow an accurate quantitation of many of the properties of CCDs. The CCD65 loaned by Marconi was mounted both in a compact thermoelectric head, which allowed the CCD to be operated at approximately -30 C, and in a liquid nitrogen cooled dewar that operated at approximately -140 C so that all sources of dark signal could be eliminated.
This was found to be necessary because there was some evidence of excess dark current generated in the multiplication register, possibly because of the high electric fields in it. An additional circuit board was provided to generate the higher voltage clocks needed for the multiplication register. Because the system is fully programmable, it was necessary to design a driver that followed the high-speed clock generated by the Capella 4100 controller as closely as possible and gave an adjustable clock high-level that varied between 6 V above substrate (the level used in the standard output register, and therefore the level that provided no gain in the multiplication register) and a maximum level in excess of 40 volts above substrate, in order to provide the maximum possible gain from the CCD multiplication register. With careful design it was possible to produce drivers capable of a 50-volt slew in approximately 20 ns. One important design issue is that the gain (see later) is critically dependent on the clock voltage. If stability is to be achieved when the gain is high, it is essential that the clock levels used are stable to within a few millivolts. In all other respects the electronics used were completely standard. The measurements were all carried out at a pixel rate of 1 MHz. The performance of the CCD65 system was checked over a wide range of wavelengths, using a calibrated projector with a variety of LEDs working between 470 and 950 nm.

6. TEST RESULTS

The CCD65 was fully characterised at room temperature, Peltier cooled to -30 C, and liquid nitrogen cooled to -140 C with the multiplication register gain set to unity. In all respects the device behaved like a completely standard Marconi CCD. The CCD65 used clock voltages and waveforms similar to those used by current Marconi CCD families such as the CCD55.
Careful measurements were made of the gain achieved from the multiplication register as a function of the high voltage level used on the phase of the multiplication register that provides the gain. These results are shown in figure 3, which shows both the gain and the readout noise of the CCD as a function of gain clock voltage. Gains were measured as high as 45,000, with a corresponding readout noise as low as 0.002 electrons rms. The fact that such measurements were made is not intended to imply that it would be in any way practical to operate the CCD65 at this gain level, but rather that it is possible to measure gains as high as this. What is clear from the results shown in figure 3 is that the gain increases very rapidly indeed with slight increases in the clock voltage. The consequence of this is that it is essential to make clock drivers that are extremely stable and have very low levels of noise on them. The clock drivers designed for the system described here had a stability of better than one millivolt. It was also noticed that the gain derived from a specific voltage level varied significantly with temperature: between +30 C and +12 C, the gain approximately doubles for a 9 C drop in temperature. The difference measured can be expressed by saying that the gain achieved at a specific voltage at -30 C was achieved with a voltage 2.3 V lower at -140 C.

[Figure 3 comprises two log-scale plots against clock high voltage (20 to 40 V): gain (1 to 100,000) and equivalent noise (0.001 to 100 electrons rms).]

Figure 3: The effect of changing the clock high level (relative to the substrate voltage) in the multiplication register on the gain, and hence on the effective read-out noise, of the CCD65 device tested here.
The gain increases very rapidly for a relatively small change in clock high voltage, showing that gain stability will only be achieved with excellent clock-high voltage stability. At gain levels around 1000, the gain triples for a 0.5 V increase in clock high level, making it necessary to achieve millivolt clock-high repeatability for 1% gain stability. As the achieved gain comes from the plots shown convolved with the clock waveforms, careful control of the repeatability of clock waveform ringing is also essential. These results were obtained at a CCD temperature of approximately -30 C.

One of the most demanding tests to make on the system is to compare a single image taken at a specific light level with another image obtained by adding a large number of individual images, each taken at a much lower light level, so that the total number of photons per pixel in the two images is the same. This was done at a number of light levels and the results are entirely consistent with the descriptions given above. Examples of these images are shown in figure 4. These images were taken at a gain level of about 3350 using blue (470 nm) light. The images show that the CCD is able to transfer charge perfectly happily at these low signal levels, and that the image quality depends only on the integrated signal level and not on the number of images added together to achieve that signal level. The only difference is a greater number of white spots on the summed images. They can be seen most clearly in the middle image of the left-hand sequence of figure 4. It is not clear what their origin might be.

Figure 4: A series of test chart images showing that the CCD65 does allow effective operation at extremely low signal levels.
The sets of three images on the left and the right show a single exposure (top), the sum of 16 exposures (middle) and the sum of 256 exposures (bottom) at peak signal levels of approximately 1.5 detected photons per pixel per frame (left series) and 20 electrons per pixel per frame (right series). Underneath these is an image taken at a much higher signal level to show the appearance of the test chart being used.

7. REASSESSING APPLICATION TRADE-OFFS

There are two main regimes encountered when operating CCD cameras for scientific applications. In the high light level case, the photon shot noise is significantly greater than the system readout noise; here there is no advantage at all in using the LLLCCD technology. In the low light level regime, the photon shot noise in the image is comparable to or less than the readout noise. The effect is to make the overall system noise significantly poorer than would be expected purely from photon shot noise statistics, which effectively reduces the overall system detective quantum efficiency. It is in this regime that the LLLCCD technology has a great deal to offer. In its simplest application, the LLLCCD technology simply allows the readout noise of the system to be reduced to a level where the photon shot noise will always dominate. In order to preserve dynamic range as much as possible, it is sensible to minimise the multiplication register gain, since the dynamic range of the device is reduced by the same factor as the gain. Using LLLCCD technology it is always possible to make a CCD system that is essentially free from readout noise, but this is generally at the expense of dynamic range. Of course, it is always possible to run at a high frame rate and add together many images in order to extend the overall dynamic range.
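The two regimes can be made explicit with a simple signal-to-noise model. The sketch below is illustrative only: the read noise figure and gain are hypothetical, and the root-2 multiplication excess-noise factor anticipates the discussion in the conclusions.

```python
import math

def snr(signal_e, read_noise_e, gain=1.0, excess=1.0):
    """Per-pixel SNR: shot noise (scaled by the multiplication
    excess-noise factor) plus read noise referred to the input."""
    noise = math.sqrt((excess ** 2) * signal_e + (read_noise_e / gain) ** 2)
    return signal_e / noise

READ_NOISE = 50.0   # e- rms at MHz pixel rates (hypothetical figure)

for s in (1, 10, 100, 10_000):
    conventional = snr(s, READ_NOISE)                            # no gain
    lll = snr(s, READ_NOISE, gain=1000.0, excess=math.sqrt(2))   # high gain
    print(f"{s:>6} e-: conventional SNR {conventional:6.2f}, LLL SNR {lll:6.2f}")
```

At one detected photon per pixel the gain lifts the SNR by more than an order of magnitude, while at 10,000 photons the conventional mode wins by the root-2 excess-noise factor, which is exactly the trade-off described above.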
The potential of LLLCCD technology to provide much higher gain than is usually possible allows much faster readout rates to be used without losing the essentially noise-free capability of this technology. The faster frame rates will inevitably reduce the dynamic range per image; however, images can be added after readout to give the dynamic range necessary, and this may be acceptable in some applications. It is very important when considering the use of LLLCCD technology in any application to realise that many of the usual assumptions about CCDs (in particular, that it is essential to run them as slowly as possible in order to minimise the readout noise) are inappropriate if there is enough gain to overcome the intrinsic noise of the amplifier and produce an overall system readout noise that is negligible.

A good example is the use of CCD detectors as wavefront sensors for applications in adaptive optics. In these applications it is necessary to measure the phase errors that affect the flatness of the light wavefront entering a telescope. The phase errors are created by turbulence in the atmosphere and are responsible for the loss of spatial resolution in images detected from the ground. A common strategy for measuring these wavefront errors is to use a Shack-Hartmann arrangement whereby sub-apertures of the telescope are imaged separately using an array of lenslets. In the image plane of the Shack-Hartmann sensor an array of stellar images is formed, each image coming from one lenslet covering one of the sub-apertures of the telescope (figure 5). Each star image is tracked as it moves around in response to the phase errors across its sub-aperture. The image motion is converted back into a phase error pattern across the whole telescope aperture, and a flexible mirror is distorted to compensate for these errors so as to give an error-free, and therefore diffraction-limited, image in the telescope image plane.
In order to reach as faint a limiting magnitude as possible, the Shack-Hartmann system uses the minimum number of lenslets across the aperture, as this approach makes each star image as bright as possible. In addition, the detector is read out as slowly as possible in order to minimise the readout noise. As a consequence, the whole spatial and temporal construction of the system has effectively made assumptions about the scales of the errors to be encountered, which will generally be incorrect for the actual night in question. The use of a gain CCD system based on LLLCCD technology allows the detector to be run much faster than is likely to be necessary, so that computer software can average the images temporally in a way that can be changed dynamically depending on the correlation times of the atmosphere on the night in question. In this way the effective readout rate may be made faster or slower in response to the real conditions experienced. When conditions are good, the slower readout possible allows much fainter objects to be used. This is important because there are many regions of sky where this technique cannot be used, as there are no adequately bright guide stars within the field of view. The fainter the guide star that can be used, the larger the number of objects that may be observed. It is further possible to avoid the restrictions placed on the spatial scales of turbulence by the use of a fixed lenslet array: we may use a continuous wavefront sensor such as a shearing or curvature sensor. By using a higher-resolution detector than is strictly necessary to detect the phase errors, it is possible to combine the signals spatially in order to give the best description of the phase errors across the telescope aperture. If the conditions are particularly good, then an approach like this will allow the spatial and temporal scales of the detector system to be adjusted in real time to suit the conditions.
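The per-sub-aperture tracking step described above is essentially a flux-weighted centroid of each lenslet spot. A minimal sketch follows; the 5x5 cell and its pixel values are invented purely for illustration.

```python
def centroid(cell):
    """Flux-weighted centre (x, y) of one lenslet sub-image,
    given as a list of rows of pixel values."""
    total = sum(sum(row) for row in cell)
    cy = sum(y * sum(row) for y, row in enumerate(cell)) / total
    cx = sum(x * value for row in cell for x, value in enumerate(row)) / total
    return cx, cy

# Hypothetical 5x5 cell with the star image displaced towards +x:
cell = [
    [0, 0, 0, 0, 0],
    [0, 0, 1, 2, 0],
    [0, 1, 4, 8, 1],
    [0, 0, 1, 2, 0],
    [0, 0, 0, 0, 0],
]

cx, cy = centroid(cell)
# The offset of (cx, cy) from the cell centre (2, 2) is proportional to
# the mean wavefront slope across this sub-aperture.
```

In a real sensor this is evaluated for every lenslet on every frame, which is where a fast, essentially noise-free readout matters.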
Under the best conditions the phase errors change on relatively large scales and vary relatively slowly. This allows operation at much fainter levels than is possible under the same conditions with the fixed-rate, fixed-resolution wavefront sensors that are often used, provided the system may be dynamically reconfigured in this way.

Another related application is in "Lucky Astronomy", where large numbers of short exposures are taken at high speed (to beat the natural fluctuations in atmospheric seeing) and sorted to select those with the best images (Baldwin et al., 2001)3. Normally this would be very inefficient because of the high read-out noise of a fast read-out CCD system and the low signal levels per frame. The LLLCCD technology completely changes this trade-off, allowing spatial and temporal averaging after read-out and giving dramatic improvements in limiting sensitivity.

Other applications that will benefit from the use of LLLCCD technology include:

1. Bio- and chemi-luminescence imaging, where extremely low light levels often require substantial binning factors to ensure that signal levels are large enough to overcome CCD read-out noise. LLLCCD technology would allow images to be read out unbinned and selectively averaged depending on the signal levels actually achieved (Mackay, 1999)4.

2. High-speed confocal microscopy (Mackay, 1998)5, where the need to achieve good signal-to-noise and short frame times when working with dynamic systems is often limited by CCD read-out noise and resolution compromises.

3. Astronomical spectroscopy, where the need to take multiple images to give good cosmic ray suppression worsens overall read-out noise. In addition, spectra are often best taken at high resolution to give optimum night-sky suppression, and at low resolution to give a good signal-to-noise ratio on the faintest parts of the spectrum. The LLLCCD technology again allows selective post-read-out binning to be used.

4. X-ray and neutron beam imaging, which can also suffer from low signal levels and from the need to have multiple read-outs to allow discrimination against directly detected X-ray events. The LLLCCD technology will allow intelligent frame averaging to be carried out, giving reliable event suppression as well as minimising read-out noise.

Figure 5: The Shack-Hartmann sensor system layout. The light from a star is reimaged with an 8 by 8 array of sub-apertures onto a CCD camera so that the motions of each can be tracked. (Picture courtesy of Laser Focus World.)

8. CONCLUSIONS

We have undertaken a thorough examination of the performance of a scientific imaging system based on the new LLLCCD technology described above. Three modes of operation have been identified:

1. The conventional CCD mode, with no gain in the multiplication register, and the signal-to-noise set by the photon shot noise added in quadrature with the readout noise of the CCD output amplifier.

2. The CCD operated with a gain in the multiplication register that substantially overcomes the readout noise of the output amplifier. In this case the signal-to-noise is worse than would be expected from the number of photons detected by a factor of root 2. Another way to think of this degradation is to calculate on the assumption that the signal-to-noise is set by the photon shot noise, but that the detector has half the detective quantum efficiency that it had in mode 1 above.

3. The CCD operated with high gain in the multiplication register, so that the readout noise of the CCD output amplifier is completely negligible for each multiplied electron. If each event is then thresholded and treated as a single event of equal weight, without making any attempt to consider its amplitude, then the quantum efficiency lost by operating in mode 2 above is restored, giving essentially the same quantum efficiency as in mode 1 above.
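A small Monte Carlo illustrates the difference between modes 2 and 3. This is a sketch under simplifying assumptions (read noise omitted, the high-gain multiplication register modelled by a gamma distribution, all parameter values invented): the analog mode-2 signal shows roughly twice the Poisson variance, the root-2 excess noise just described, while thresholded photon counting in mode 3 recovers near-Poisson statistics at the cost of coincidence losses.

```python
import math
import random
import statistics

random.seed(1)

MU = 0.2          # mean detected photons per pixel per frame (hypothetical)
GAIN = 1000.0     # multiplication-register gain (hypothetical)
N_FRAMES = 200_000

def poisson(mu):
    """Knuth's algorithm for a Poisson variate (fine for small mu)."""
    limit = math.exp(-mu)
    k, p = 0, 1.0
    while p > limit:
        k += 1
        p *= random.random()
    return k - 1

analog, counted = [], []
for _ in range(N_FRAMES):
    n = poisson(MU)
    # At high gain the multiplied output for n input electrons is
    # approximately gamma-distributed with shape n and scale GAIN.
    out = random.gammavariate(n, GAIN) if n else 0.0
    analog.append(out / GAIN)                 # mode 2: keep the amplitude
    counted.append(1.0 if out > 0 else 0.0)   # mode 3: threshold and count

fano_analog = statistics.pvariance(analog) / statistics.mean(analog)
frac_counted = statistics.mean(counted)
# fano_analog comes out near 2 (multiplication noise halves the DQE),
# while frac_counted is close to 1 - exp(-MU), i.e. slightly below MU
# because two photons landing in one pixel are counted as one event.
```

The shortfall of frac_counted below MU is exactly the coincidence loss that forces mode 3 to be operated at very low photon rates, as noted in the conclusions.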
There are, however, major limitations: the maximum photon rate used must be kept extremely low in order to avoid coincidence losses, which give rise to non-linearities in the response curve of the detector system, and deep cooling is needed to maintain correspondingly low levels of dark current. The ease with which the mode of operation can be changed, or with which systems can be designed to work in modes 2 and 3 simultaneously, offers a great deal of flexibility. There can be little doubt that the new technology developed by Marconi Applied Technologies will have a substantial impact on the design of a wide range of scientific imaging systems. The ability demonstrated here to operate a CCD system with essentially no readout noise, and yet without affecting the performance of the CCD detector in other ways, is quite remarkable and unique.

9. REFERENCES

1. E. J. Harris, G. J. Royle, R. D. Spencer, S. Spencer and W. Suske, "Evaluation of a Novel CCD Camera for Dose Reduction in Digital Radiography", Medical Symposium 2000, Lyon, September 2000.
2. P. Jerram, P. Pool, R. Bell, D. Burt, S. Bowring, S. Spencer, M. Hazlewood, I. Moody, N. Catlett and P. Heyes, "The LLLCCD: Low Light Imaging without the need for an intensifier", SPIE vol. 4306, 2001.
3. J. E. Baldwin, R. N. Tubbs, G. C. Cox, C. D. Mackay, R. W. Wilson and M. I. Andersen, "Diffraction-limited 800 nm imaging with the 2.56 m Nordic Optical Telescope", Astronomy and Astrophysics, 2001.
4. C. D. Mackay, "High-Speed Digital CCD Cameras - Principles and Applications", in "Fluorescent and Luminescent Probes", 2nd edition, Academic Press, pp. 517-524, ISBN 0-12-447836-0, 1999.
5. C. D. Mackay, "New Developments in Three-Dimensional Imaging with Optical Microscopes", in "Further Developments in Scientific Optical Imaging", ed. M. Bonner Denton, Royal Society of Chemistry, ISBN 0-85404-784-0, 1998.
6. A. Cochrane and S. H. Spencer, Night Vision conference, London, January 2000.
7. A. Wilson, "Low light CCD needs no intensifier", Vision Systems Design, October 2000.

Silveira et al. BMC Dermatology 2014, 14:19
http://www.biomedcentral.com/1471-5945/14/19

RESEARCH ARTICLE - Open Access

Digital photography in skin cancer screening by mobile units in remote areas of Brazil

Carlos Eduardo Goulart Silveira1*, Thiago Buosi Silva1, José Humberto Guerreiro Tavares Fregnani3, René Aloisio da Costa Vieira2, Raphael Luiz Haikel Jr1, Kari Syrjänen3,4, André Lopes Carvalho3 and Edmundo Carvalho Mauad1

Abstract

Background: Non-melanoma skin cancer (NMSC) is one of the most common neoplasms in the world. Despite the low mortality rates, NMSC can still cause severe sequelae when diagnosed at advanced stages. Malignant melanoma, the third most common type of skin cancer, has more aggressive behavior and a worse prognosis. Teledermatology provides a new tool for monitoring skin cancer, especially in countries with a large area and unequal population distribution. This study sought to evaluate the performance of digital photography in skin cancer diagnosis in remote areas of Brazil.

Methods: A physician in a Mobile Prevention Unit (MPU) took four hundred sixteen digital images of suspicious lesions between April 2010 and July 2011. All of the photographs were electronically sent to two oncologists at Barretos Cancer Hospital, who blindly evaluated the images and provided a diagnosis (benign or malignant). The absolute agreement rates between the diagnoses made by direct visual inspection (by the MPU physician) and through the use of digital imaging (by the two oncologists) were calculated. The oncologists' accuracy in predicting skin cancer using digital imaging was assessed by means of overall accuracy (correct classification rate), sensitivity, specificity and predictive value (positive and negative). A skin biopsy was considered the gold standard.
Results: Oncologist #1 classified 59 lesions as benign with the digital images, while oncologist #2 classified 27 lesions as benign using the same images. The absolute agreement rates with direct visual inspection were 85.8% for oncologist #1 (95% CI: 77.1-95.2) and 93.5% for oncologist #2 (95% CI: 84.5-100.0). The overall accuracy of the two oncologists did not differ significantly.

Conclusions: Given the high sensitivity and PPV, teledermatology seems to be a suitable tool for skin cancer screening by MPU in remote areas of Brazil.

Keywords: Teledermatology, Skin cancer, Diagnosis, Public health, Accuracy, Verification bias

* Correspondence: cegsilveira@gmail.com
1 Department of Cancer Prevention, Barretos Cancer Hospital, Rua Antenor Duarte Villella, 1331 - Bairro Dr. Paulo Prata, 14784-400 Barretos, SP, Brazil. Full list of author information is available at the end of the article.
© 2014 Silveira et al.; licensee BioMed Central.

Background

Skin cancer is the most common malignancy in many parts of the world, including Brazil [1,2]. More than 2 million new non-melanoma skin cancer (NMSC) cases are diagnosed in the United States each year [3], and approximately 76,250 new cases of malignant melanoma were expected to be diagnosed in 2012 [4]. However, the true incidence of NMSC remains unknown because these lesions are not commonly reported to cancer registries. It has been estimated that 25% of all new cancer cases diagnosed in Brazil in 2012 will be skin cancers, with approximately 134,170 new cases of NMSC and 6,230 malignant melanomas expected to be identified. These numbers represent an approximately 16% increase in new NMSC cases and a 4% increase in melanomas compared with 4 years ago [1].
Despite a low mortality rate, NMSC can still cause severe sequelae when diagnosed at advanced stages [5] because these lesions occur predominantly on sun-exposed areas such as the face, which can become disfigured. Moreover, significant morbidity costs may occur [6]. Malignant melanoma, the third most common type of skin cancer, has more aggressive behavior and a worse prognosis, causing a significant decrease in life expectancy and lost productivity [6].

Currently, the best method for the early detection of skin cancer is to identify changes in skin lesions, including the appearance of new growths [4]. However, this strategy is not easily implemented in developing countries due to a shortage of trained professionals and their lack of availability in remote areas. For example, in the State of São Paulo, which has the largest cancer registry in Brazil, 9% of basal cell cancer (BCC) cases, 21% of squamous cell cancer (SCC) cases and 49% of malignant melanomas are diagnosed at stages II, III or IV [7]. Teledermatology provides a new tool for monitoring skin cancer in countries such as Brazil, which has a large area and an unequal population distribution.
Teledermatology essentially involves sending digital images to specialized cancer centers for evaluation by trained experts [8-10] and provides a platform for professional training programs and for physicians to discuss complex cases without the necessity of transporting patients, which may lead to substantial savings in time and cost [11]. This approach also effectively reduces the waiting times for surgical treatment [12] and facilitates the spread of dermatological knowledge into poor regions of the world [13].

Because of the high incidence of skin cancer in Brazil, the difficulty accessing doctors in poor areas and the lack of studies on teledermatology in developing countries, we decided to utilize the existing telecommunication technology and evaluate the accuracy of digital imaging in diagnosing skin cancer in remote areas of Brazil.

Methods

Patients

The skin cancer screening was performed by a Mobile Prevention Unit (MPU) of Barretos Cancer Hospital (BCH). The MPU regularly visits the remote areas of Brazil, including the states of Mato Grosso, Mato Grosso do Sul and Rondônia, screening the local people for prostate, cervical and skin cancers. The MPU trailer is fully equipped to perform clinical procedures and ambulatory surgeries. The MPU team consists of a physician, a nurse, three nurse technicians and a driver. This team is able to perform 40 clinical dermatology examinations or procedures per day, including cryotherapy and surgery. All of the patients examined at the MPU were previously screened by a nurse from the local municipality who was trained at BCH. A more detailed description of the MPU concept has been published elsewhere [14].

The present study included individuals with skin lesions that were determined to be suspicious after a direct visual inspection by a physician between April 2010 and July 2011. These patients were evaluated in the MPU, and their lesions were photographed (digital imaging).
All of the lesions suspected to be malignant by the MPU physician were biopsied and/or excised after photography and submitted to the Department of Pathology at BCH for histological evaluation. The Institutional Review Board (IRB) at BCH previously approved the research protocol (No. 377/2010). All of the subjects signed an informed consent agreement.

Methods

The lesions were photographed by the MPU physician using a Sony Cybershot DSC-5780 digital camera with 8.1-megapixel resolution. One digital image of each lesion was taken at a distance of 60 cm to evaluate the lesion topography, with an additional image taken at a shorter distance (using the macro mode) to evaluate the lesion details.

Pertinent information such as age, skin complexion, location of the lesion, stage and pathology results was collected and used for the TNM Classification of Malignant Tumors (AJCC) system, 7th edition [15]. All of the diagnoses were classified according to the International Classification of Diseases (ICD-10). We used the Skin Type (or complexion) Classification System proposed by Fitzpatrick, which utilizes the Skin Type 1-6 scale, where 1 denotes pale white skin, 2 denotes white skin, 3 denotes light brown skin, 4 denotes moderate brown skin, 5 denotes dark brown skin and 6 denotes pigmented dark brown to black skin [16].

All of the digital images were coded, stored and submitted at random to two oncologists at BCH. These two experts were blinded to the MPU physician's diagnosis and the pathology reports, and classified the images using the following options: 1) a malignant lesion, oncological treatment is indicated; 2) a benign lesion, no treatment required; 3) unknown; or 4) a low-quality image. Both the oncologists and the MPU physician have more than 10 years of experience in skin cancer screening.

Statistical analysis

The diagnoses made by all three physicians were characterized by descriptive statistics using SPSS for Windows software (v. 17.0, SPSS Inc., Chicago, IL, USA). The absolute agreement rates between the diagnoses obtained from direct visual inspection (by the MPU physician) and through the use of digital images (by the two oncologists) were calculated. The oncologists' performance in predicting skin cancer using the digital images was evaluated on the basis of overall accuracy (correct classification rate), sensitivity, specificity, positive predictive value (PPV) and negative predictive value (NPV). The result of a skin biopsy was considered the gold standard. Confidence intervals (95% CI) were calculated whenever appropriate.

Results

A total of 2,592 patients underwent dermatological examination at the MPU throughout the duration of the study. Of these, 460 (17.7%) had a suspicious lesion that was classified as possibly malignant by the MPU physician. These lesions were imaged, biopsied/removed and submitted for histopathological examination at BCH. Of the 460 patients, 21 (4.6%) were excluded from the study because of poor-quality photos, leaving 439 patients with pathological results. Of these 439 lesions, 23 (5.2%) were excluded because of incomplete data preventing the identification of the patient, while 364 (87.5%) were confirmed to be malignant by the biopsy. The majority of the lesions were BCCs (78.5%), and most were located in the head and neck area (75%). A large majority of the lesions (93%) were classified as stages 0 and I (Table 1). These patients had a mean age of 63.5 years (range: 19 to 93 years) and were from 5 states of Brazil (MT, MS, RO, GO and MG). The patients were predominantly (81%) light-skinned, i.e., skin type 1 or 2 (Table 1). Altogether, 416 digital images were electronically sent to the two oncologists, who were completely blinded to any clinical description or attached information.
The oncologists classified the tumors in the images as either malignant or benign. Oncologist #1 classified 59 lesions as benign using the digital images, while oncologist #2 classified 27 lesions as benign using the same images. The absolute agreement rates with the direct visual inspection were 85.8% for oncologist #1 (95% CI: 77.1-95.2) and 93.5% for oncologist #2 (95% CI: 84.5-100.0) (Table 2). These rates were not statistically different. Table 3 summarizes the oncologists' performance in predicting skin cancer using the digital images. The overall accuracy, specificity and predictive values (negative and positive) did not differ significantly between the two oncologists; however, oncologist #2 had a slightly but significantly higher sensitivity.

Table 1 Diagnosis and location of the biopsied or removed lesions and skin complexion of the patients

Variable            Category                    N (%)
Pathology*          Basal cell carcinoma        286 (78.5)
                    Squamous cell carcinoma     59 (16.2)
                    Melanoma                    5 (1.4)
                    Other**                     14 (3.8)
Site                Head and Neck               273 (75.0)
                    Trunk                       28 (7.6)
                    Upper limbs                 61 (16.7)
                    Lower limbs                 2 (0.5)
Clinical stage***   0                           11 (3.5)
                    I                           280 (89.5)
                    II                          21 (6.7)
                    III                         1 (0.3)
                    IV                          0 (0)
Skin type scale     1-2                         295 (81)
                    3-4                         69 (19)
                    5-6                         0 (0)

*52 lesions were confirmed to be benign by the biopsy; **Bowen's disease, malignant trichoepithelioma and metatypical carcinoma; ***51 patients were not staged because they did not go to BCH for diagnostic follow-up.

Discussion

The most remote areas of Brazil are located north of the Tropic of Capricorn and near the Equator, areas where solar radiation is very intense [17]. Since the 1980s, these regions have received large numbers of migrants from the Southern and Midwestern regions of Brazil, where people are predominantly of European descent. Thus, the majority of these inhabitants have light skin. Not surprisingly, this migration has increased the incidence of skin cancer in this area [1].
Despite this increase, the Brazilian government is reluctant to invest in permanent programs for skin cancer prevention because of the existing controversy over whether or not screening increases the survival of melanoma patients [18].

Developing countries exhibit an unequal distribution of doctors throughout their territories. In Brazil, the concentration of physicians is higher in metropolitan areas, and there is a huge gap in the availability of these professionals in remote areas [19]. Brazil averages one doctor for every 551 inhabitants [20], with this number varying from one doctor for every 232 inhabitants in the city of São Paulo to one doctor for every 10,000 inhabitants in some remote areas of the Amazon States and Rondônia [20,21]. This situation is further exacerbated when medical experts, such as dermatologists, are sought; there may be one dermatologist for every 90,000 inhabitants in certain regions of Brazil [21,22].

Table 2 Number and percentage of lesions, as determined by the raters and the skin biopsy

                                 Skin biopsy
Physician        Diagnosis       Malignant N (%)   Benign N (%)   Total
MPU physician    Malignant       364 (87.5)        52 (12.5)      416
                 Benign          -                 -              -
Oncologist #1    Malignant       325 (91.0)        32 (9.0)       367
                 Benign          39 (66.1)         20 (33.9)      59
Oncologist #2    Malignant       350 (90.0)        39 (10.0)      389
                 Benign          14 (51.9)         13 (48.1)      27

MPU = mobile prevention unit.

Table 3 Oncologists' performance indicators in predicting skin malignancy using digital imaging

                             Oncologist #1        Oncologist #2
                             % (95% CI)           % (95% CI)
Sensitivity                  89.3 (85.7-92.3)     96.2 (93.6-97.9)
Specificity                  38.5 (25.3-53.0)     25.0 (14.0-39.0)
Positive predictive value    91.0 (87.6-93.8)     90.0 (86.6-92.8)
Negative predictive value    33.9 (22.1-47.4)     48.2 (26.7-68.1)
Overall accuracy             85.3 (76.7-94.7)     87.3 (78.5-96.7)

Given this reality, one alternative is teledermatology, which is being promoted worldwide as an effective tool
for diagnosing skin cancer in remote areas [13]. This method reduces the waiting times for these patients and results in good customer satisfaction scores [23,24], satisfactory clinical results [25] and good cost-effectiveness [26]. Despite the fact that teledermatology has been recognized as an important skin cancer screening tool in even large populations [27], few studies have investigated the use of teledermatology for diagnosing skin cancer [28,29]. This technique could be particularly useful in developing countries [13], where skin cancer predominantly presents in the clinic at advanced stages. To our knowledge, this is the first study to use mobile units for teledermatology.

The present study was a store-and-forward teledermatology study used for skin cancer screening. The study was performed using an MPU in which a physician examined a large number of patients who were referred because of a suspected skin cancer lesion. These patients were subjected to a biopsy and/or lesion removal for histopathological confirmation. Thus, the present study represents a typical screening setting, where a positive test (classification as suspicious by digital image) was verified by the gold standard (skin biopsy). This study showed a high rate of agreement between the MPU physician's diagnosis by visual inspection and the diagnosis given by the two oncologists using the digital images as a diagnostic tool. This result is quite impressive given the fact that oncologists #1 and #2 performed their analysis in a tertiary cancer hospital (BCH) up to 1,200 miles away from the MPU. These concordance values are even more impressive if one also considers that some of this variability may be due to differences in data interpretation between the physicians and the different experiences of the physicians involved, and not due to the technology itself [13].
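The performance indicators reported in Table 3 follow directly from the 2x2 counts in Table 2. As a quick check, a short sketch (counts taken from Table 2, with biopsy-confirmed malignancy as the reference) reproduces the point estimates:

```python
def metrics(tp, fp, fn, tn):
    """Standard diagnostic-accuracy indicators from a 2x2 table."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# Counts read off Table 2 (columns: biopsy malignant / benign):
onc1 = metrics(tp=325, fp=32, fn=39, tn=20)
onc2 = metrics(tp=350, fp=39, fn=14, tn=13)

print({k: round(v * 100, 1) for k, v in onc1.items()})
# Oncologist #1: sensitivity 89.3, specificity 38.5, PPV 91.0, NPV 33.9,
# matching Table 3.
```

As the discussion notes, the specificity and NPV figures are subject to verification bias, since only test-positive lesions were biopsied.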
It seems that the physician's experience in skin cancer screening may be more important than the technology, a finding that is not surprising. Previous reports have found that inter-observer agreement varies significantly depending on case selection, the use of classification criteria [30] and the level of expertise of the physicians [13,28]. Thus, the high level of concordance between the oncologists in this study may also be related to the fact that they have more than 10 years of experience in skin cancer screening in Brazil. When the diagnoses of the physicians were translated into performance indicators of digital imaging, the crude sensitivity was 89.3% for oncologist #1 and even higher, 96.2%, for oncologist #2 (Table 3). The PPV was equally high for both physicians, but the specificity and NPV were not particularly impressive. It must be emphasized that these calculations are based on incomplete evaluations [31], as biopsy verification was only performed for test-positive cases, i.e., the digital images classified as malignant. This situation occurred because it would be unethical to perform a biopsy on normal skin tissue. For any type of screening, the optimal test is the one with the highest PPV [31]. In this respect, the teledermatology setting tested here seems to be a highly suitable screening tool. From the clinical point of view, the type of lesions missed by this screening approach is important; for example, missing a malignant melanoma is more serious than missing a BCC or SCC, which exhibit a substantially more protracted clinical course. According to our histological records, there were 5 histologically confirmed malignant melanomas in the present series. Of those malignancies, observer 1 diagnosed 1/5 correctly, while observer 2 correctly diagnosed 3/5 cases. This result would justify using dermatoscopy to screen all pigmented lesions.
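The crude indicators above follow directly from the 2x2 counts reported in Table 2, with the skin biopsy as the gold standard. A minimal check in plain Python, using the counts for oncologist #1 (an illustration only, not the original analysis, which also handled the incomplete verification design [31]):

```python
# Screening performance indicators from the 2x2 counts in Table 2
# (oncologist #1; skin biopsy is the gold standard).
tp, fp = 325, 32   # rated malignant: biopsy malignant / biopsy benign
fn, tn = 39, 20    # rated benign:    biopsy malignant / biopsy benign

sensitivity = tp / (tp + fn)  # biopsy-proven malignancies correctly flagged
specificity = tn / (tn + fp)  # benign lesions correctly cleared
ppv = tp / (tp + fp)          # positive predictive value
npv = tn / (tn + fn)          # negative predictive value

print(f"sensitivity={sensitivity:.1%}, specificity={specificity:.1%}, "
      f"PPV={ppv:.1%}, NPV={npv:.1%}")
# sensitivity=89.3%, specificity=38.5%, PPV=91.0%, NPV=33.9%
```

The point estimates reproduce the first column of Table 3; the confidence intervals and the incomplete-design correction are beyond this sketch.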
However, this practice is not feasible in large countries with limited resources in rural areas, such as Brazil. A dermatoscope is an expensive piece of equipment, and the proper use of this technique necessitates the practical training of the physicians, precluding the use of this instrument in population-based screening for skin cancer. The digital image diagnosis approach seems to be a viable option for teledermatology settings that utilize MPUs. One limitation of this study is the fact that we were not able to capture images from a camera attached to a dermatoscope, which is known to further increase diagnostic precision [32-34]. We only used a simple digital camera, which is easier for any health professional to handle than a more complex device such as a dermatoscope. Another limitation was the lack of clinical histories for the examined patients (work in agriculture, family history of skin cancer, etc.) and of a description of the characteristics of the lesions (raised or flat, lesion size, time of development, etc.) to assist the observing physicians in making the final diagnoses. Despite the limitations of this study, the high number of evaluated lesions and the high concordance between the observers clearly indicate that teledermatology could be used as an effective tool to screen for malignant skin lesions, especially in the remote areas of developing countries.

Conclusion

This study showed high agreement between direct visual inspection and digital imaging in identifying suspicious skin lesions. Moreover, digital imaging played a previously unrecognized role in predicting skin cancer. Hence, this study suggests that it is feasible to use digital imaging as a tool to screen for skin cancer in a population-based setting and that this approach would be particularly useful in remote areas.
Competing interests

The authors declare that they have no competing interests.

Authors' contributions

CEGS, TBS, KS, ALC and ECM made substantial contributions to the conception and design of the article, to the acquisition, analysis and interpretation of data, and to drafting the article. RACV and RLH Jr made substantial contributions to the conception and design of the article. JHGTF made substantial contributions to the design of the article, to the statistical analysis and interpretation of data, and to drafting the article. All authors revised the article critically for important intellectual content and have given their final approval of the version to be published in BMC Dermatology.

Acknowledgments

We are indebted to Cleyton Zanardo de Oliveira and Allini Mafra da Costa, from the Research and Support Department of Barretos Cancer Hospital, for their assistance in preparing the data for the statistical analyses.

Author details

1Department of Cancer Prevention, Barretos Cancer Hospital, Rua Antenor Duarte Villella, 1331 - Bairro Dr. Paulo Prata, 14784-400 Barretos, SP, Brazil. 2Department of Surgery, Barretos Cancer Hospital, Barretos, SP, Brazil. 3Teaching and Research Institute, Barretos Cancer Hospital, Barretos, SP, Brazil. 4Department of Oncology & Radiotherapy, Turku University Hospital, Turku, Finland.

Received: 25 September 2013 Accepted: 4 December 2014

References
1. Brasil. Ministério da Saúde. Instituto Nacional de Câncer José Alencar Gomes da Silva (INCA). Coordenação Geral de Ações Estratégicas. Coordenação de Prevenção e Vigilância: Estimativa 2012: incidência de câncer no Brasil. Rio de Janeiro: INCA; 2011.
2. Siegel R, Ward E, Brawley O, Jemal A: Cancer statistics, 2011: the impact of eliminating socioeconomic and racial disparities on premature cancer deaths. CA Cancer J Clin 2011, 61:212–236.
3. Kim RH, Armstrong AW: Nonmelanoma skin cancer. Dermatol Clin 2012, 30:125–139.
4. Siegel R, Naishadham D, Jemal A: Cancer statistics, 2012.
CA Cancer J Clin 2012, 62:10–29.
5. Gallagher RP: Sunscreens in melanoma and skin cancer prevention. CMAJ 2005, 173:244–245.
6. Guy GP, Ekwueme DU: Years of potential life lost and indirect costs of melanoma and non-melanoma skin cancer: a systematic review of the literature. Pharmacoeconomics 2011, 29:863–874.
7. Acesso ao Banco de Dados - Registro Hospital de Câncer [http://200.144.1.68/cgi-bin/dh?rhc/rhc-geral.def]
8. Desai B, McKoy K, Kovarik C: Overview of international teledermatology. Pan Afr Med J 2010, 6:3.
9. Wurm EM, Campbell TM, Soyer HP: Teledermatology: how to start a new teaching and diagnostic era in medicine. Dermatol Clin 2008, 26:295–300.
10. Massone C, Wurm EM, Hofmann-Wellenhof R, Soyer HP: Teledermatology: an update. Semin Cutan Med Surg 2008, 27:101–105.
11. Wurm EM, Hofmann-Wellenhof R, Wurm R, Soyer HP: Telemedicine and teledermatology: past, present and future. J Dtsch Dermatol Ges 2008, 6:106–112.
12. Ferrandiz L, Moreno-Ramirez D, Nieto-Garcia A, Carrasco R, Moreno-Alvarez P, Galdeano R, Bidegain E, Rios-Martin JJ, Camacho FM: Teledermatology-based presurgical management for nonmelanoma skin cancer: a pilot study. Dermatol Surg 2007, 33:1092–1098.
13. Tran K, Ayad M, Weinberg J, Cherng A, Chowdhury M, Monir S, El Hariri M, Kovarik C: Mobile teledermatology in the developing world: implications of a feasibility study on 30 Egyptian patients with common skin diseases. J Am Acad Dermatol 2011, 64:302–309.
14. Mauad EC, Silva TB, Latorre MR, Vieira RA, Haikel RL Jr, Vazquez VL, Longatto-Filho A: Opportunistic screening for skin cancer using a mobile unit in Brazil. BMC Dermatol 2011, 11:12.
15. Sobin L, Gospodarowicz M, Wittekind C: TNM Classification of Malignant Tumors. 7th edition. Hoboken, NJ: Wiley; 2009.
16. Fitzpatrick TB: The validity and practicality of sun-reactive skin types I through VI. Arch Dermatol 1988, 124:869–871.
17. Garner KL, Rodney WM: Basal and squamous cell carcinoma. Prim Care 2000, 27:447–458.
18.
Schmerling RA, Loria D, Cinat G, Ramos WE, Cardona AF, Sanchez JL, Martinez-Said H, Buzaid AC: Cutaneous melanoma in Latin America: the need for more data. Rev Panam Salud Publica 2011, 30:431–438.
19. Campos FE, Machado MH, Girardi SN: A fixação de profissionais de saúde em regiões de necessidades. Divulgação em Saúde para Debate 2009, 44:13–24.
20. Conselho Regional de Medicina do Estado de São Paulo (CREMESP): Aumenta a concentração de médicos no Estado de São Paulo. CREMESP; 2010.
21. Estatística de médicos [http://portal.cfm.org.br/?option=com_estatistica]
22. Machado MH, Vieira ALS: Perfil dos Dermatologistas no Brasil: Relatório Final. Sociedade Brasileira de Dermatologia; 2003.
23. Collins K, Walters S, Bowns I: Patient satisfaction with teledermatology: quantitative and qualitative results from a randomized controlled trial. J Telemed Telecare 2004, 10:29–33.
24. Weinstock MA, Nguyen FQ, Risica PM: Patient and referring provider satisfaction with teledermatology. J Am Acad Dermatol 2002, 47:68–72.
25. Pak H, Triplett CA, Lindquist JH, Grambow SC, Whited JD: Store-and-forward teledermatology results in similar clinical outcomes to conventional clinic-based care. J Telemed Telecare 2007, 13:26–30.
26. Moreno-Ramirez D, Ferrandiz L, Ruiz-de-Casas A, Nieto-Garcia A, Moreno-Alvarez P, Galdeano R, Camacho FM: Economic evaluation of a store-and-forward teledermatology system for skin cancer patients. J Telemed Telecare 2009, 15:40–45.
27. Massone C, Maak D, Hofmann-Wellenhof R, Soyer HP, Fruhauf J: Teledermatology for skin cancer prevention: an experience on 690 Austrian patients. J Eur Acad Dermatol Venereol 2014, 8:1103–1108.
28.
Moreno-Ramirez D, Ferrandiz L, Nieto-Garcia A, Carrasco R, Moreno-Alvarez P, Galdeano R, Bidegain E, Rios-Martin JJ, Camacho FM: Store-and-forward teledermatology in skin cancer triage: experience and evaluation of 2009 teleconsultations. Arch Dermatol 2007, 143:479–484.
29. Shapiro M, James WD, Kessler R, Lazorik FC, Katz KA, Tam J, Nieves DS, Miller JJ: Comparison of skin biopsy triage decisions in 49 patients with pigmented lesions and skin neoplasms: store-and-forward teledermatology vs face-to-face dermatology. Arch Dermatol 2004, 140:525–528.
30. Krupinski EA, LeSueur B, Ellsworth L, Levine N, Hansen R, Silvis N, Sarantopoulos P, Hite P, Wurzel J, Weinstein RS: Diagnostic accuracy and image quality using a digital camera for teledermatology. Telemed J 1999, 5:257–263.
31. Reichenheim ME, Ponce de Leon A: Estimation of sensitivity and specificity arising from validity studies with incomplete designs. Stata J 2002, 2:267–279.
32. Dauden E, Castaneda S, Suarez C, Garcia-Campayo J, Blasco AJ, Aguilar MD, Ferrandiz C, Puig L, Sanchez-Carazo JL: Integrated approach to comorbidity in patients with psoriasis. Actas Dermosifiliogr 2012, 103(Suppl 1):1–64.
33. Sihto H, Bohling T, Kavola H, Koljonen V, Salmi M, Jalkanen S, Joensuu H: Tumor infiltrating immune cells and outcome of Merkel cell carcinoma: a population-based study. Clin Cancer Res 2012, 18:2872–2881.
34. Massone C, Brunasso AM, Campbell TM, Soyer HP: Mobile teledermoscopy–melanoma diagnosis by one click? Semin Cutan Med Surg 2009, 28:203–205.

doi:10.1186/s12895-014-0019-1
Cite this article as: Silveira et al.: Digital photography in skin cancer screening by mobile units in remote areas of Brazil. BMC Dermatology 2014, 14:19.
work_34plnlhxsbeijemgpiydrpluc4 ---- J Braz Comput Soc (2013) 19:341–359 DOI 10.1007/s13173-013-0102-1 ORIGINAL PAPER A survey on automatic techniques for enhancement and analysis of digital photography Claudio S. V. C. Cavalcanti · Herman Martins Gomes · José Eustáquio Rangel De Queiroz Received: 14 September 2012 / Accepted: 7 February 2013 / Published online: 26 March 2013 © The Brazilian Computer Society 2013

Abstract The fast growth in the consumer digital photography industry during the past decade has led to the acquisition and storage of large personal and public digital collections containing photos with different quality levels and redundancy, among other aspects. This naturally increased the difficulty in selecting or modifying those photos. Within the above context, this survey focuses on systematically reviewing the state of the art of techniques for the enhancement and analysis of digital photos. Nevertheless, it is not within the scope of this survey to review image quality metrics for evaluating degradation due to compression, digital sensor noise, and affine issues. Assuming the photos have good quality in those aspects, this review is centered on techniques that might be useful to automate the task of selecting photos from large collections or to enhance the visual aspect of imperfect photos by using some perceptual measure.

Keywords Image enhancement · Photographic analysis · Computational aesthetics · Survey

1 Introduction

In the late 1990s, there was an immense growth in the digital photography industry. Manufacturers began to produce digital cameras on a large scale and at decreasing prices [119].
C. S. V. C. Cavalcanti (B) · H. M. Gomes · J. E. R. De Queiroz, Universidade Federal de Campina Grande, Rua Aprigio Veloso, 882 Bodocongo, Campina Grande, PB 58429-140, Brazil. e-mail: claudio.cavalcanti@gmail.com

Great changes have been noticed in photographic technology and practice since then. When using consumer analog film, the number of photos was limited by the roll size (which usually allowed at most 36 photos). Nowadays, with large-capacity re-writable memory cards (e.g., 256 GB), the number of photos that can be acquired/stored has increased by approximately three orders of magnitude (if considering digital images captured with a resolution of 8 MP). Digital photography also changed the way photos were printed. With film, photos had to be developed first in order to be seen, whereas when shooting with a digital camera, printing is no longer a requirement, once it is possible to preview images in the camera viewer or on a monitor screen, and then to decide which ones to print. One consequence of those changes is that taking photos has become an almost costless task. Thus, the judgment of what could be a good shot and the care for adjusting camera settings for a specific scene become less usual for most consumers and even for some professional photographers. As a result, large amounts of photos are taken and stored daily. This causes difficulties in selecting which ones to print or to publish, e.g., in digital albums. In summary, this results in a scenario involving a large amount of stored photos from which just a small part will be printed. In this survey, consumer photos are considered the ones obtained (1) with minor adjustments in camera settings, (2) aiming at portraying daily events, and (3) barely exploring basic art techniques. On the other hand, professional photos differ from consumer ones by the use of more elaborate techniques and better equipment utilization, which might improve the photo quality.
In this survey, professional photos are not necessarily obtained by a professional photographer, and the term does not encompass other connotations of professional photos (e.g., artistic or journalistic). There are several recent applications for which photo processing is an essential intermediate task, e.g., photo collage [149], slide showing [44], browsing [68,85,112,150], storytelling [53], and photo summarization [28,110,117,141].

[Fig. 1 Block diagram illustrating the sub-areas in which this work is subdivided]

The algorithms reviewed in this survey are organized into two categories: enhancement and analysis. Each category is divided into other sub-categories. An illustration of the division that is used in this work is shown in Fig. 1. While enhancement algorithms are intended to modify the image in such a way that it might become better-looking or appealing, analysis algorithms are designed to assess photos according to some criteria, such as composition, aesthetics, or overall quality. A number of papers have been written on both image enhancement and analysis in the past years. This survey focuses on image processing techniques that were already tested or may be directly used in specific problems of photo enhancement and analysis. In order to avoid ambiguities, in this survey the words photo and photography are strictly related to consumer and professional photography. Image enhancement algorithms can be classified as on the fly and off-line. While on the fly algorithms modify the photo conditions before the photo is taken, off-line algorithms perform changes after the acquisition took place. Although on the fly algorithms might lead to better results than off-line algorithms, they must run faster, since it might be necessary to operate in real time.
Off-line algorithms are limited in the sense that they do not allow scene changes, e.g., it is not possible to zoom out from a photo or ask someone to open his/her eyes. However, there is no a priori time frame to produce the enhancement result. Image analysis algorithms can be classified as assessment, information extraction, and grouping algorithms. Assessment algorithms analyze the visual aspect of a photo in two main facets: aesthetics and image quality assessment (IQA). Formally, the main goal of IQA algorithms is to predict ratings in a human-like manner [21]. Although this definition is very broad, the term IQA is typically used to denote the evaluation of image degradation (e.g., due to lossy compression or noise) [21,62,63,118,167,178]. Therefore, in this survey, IQA is used with this latter meaning. There is also some ambiguity regarding the use of the expression aesthetics quality assessment. In this survey, aesthetics quality assessment algorithms are defined as the ones whose goal is to assign a score (or a class of scores, such as professional and amateur) to a photo based on the analyzed features, e.g., photographic composition rules or the number of faces found, as used by other authors [7,74,126,145]. Moreover, information extraction algorithms search for elements of interest, such as the place a photo was taken, the existence of faces, and the presence of specific people in the environment, among others. Finally, grouping algorithms are defined in this survey as the ones which analyze images in order to find similarities between them. It is not within the scope of this survey to review image quality assessment for evaluating degradation due to compression, digital sensor noise, and affine issues.
Assuming the photos have good quality in those aspects, this review focuses on techniques that might be useful to automate the task of photo selection from large collections or to enhance the visual aspect of imperfect photos by using a perceptual measure. This survey is organized as follows: in Sect. 2, the methodology employed for finding related work is presented. In Sect. 3, the work on image enhancement is reviewed, in particular, enhancement that could be performed to increase the quality of a photo in a printing or selection scenario. In Sect. 4, work on image (and photo) analysis is reviewed. In Sect. 5, the main issues found in the reviewed approaches are discussed and summarized. Finally, in Sect. 6, some conclusions are given.

2 Methodology of the research

This section is devoted to presenting the methodology adopted for searching the related work in the area. More specifically, information on search engines, digital libraries, and keywords used in the searching process is provided. Two search strategies were employed, inspired by the traversing order of breadth-first and depth-first search strategies, respectively. Breadth-first search was performed within a set of predefined published conference proceedings and journals in a specified time period, but only the first level of the search tree was considered for reviewing. Depth-first search was performed by using a search engine to find papers given a set of keywords. A subsequent search was then performed by using the references of those papers as a starting point. This process was repeated until a maximum depth of 3 was reached. In the following two subsections, more details on each type of performed search are given.
2.1 Breadth-first search

This strategy aimed at finding related papers in a set of recent technical publications, such as journals and conference proceedings, given a specified time frame. The search was performed in the databases of conferences and journals of IEEE, ACM, Springer, and Elsevier. In addition, using the search results, a search was also performed for all papers published in a given conference or journal by using its table of contents. The keywords used for the automatic search were (consumer OR personal OR digital) AND (image(s) OR photo(s) OR photograph(s) OR photographic archive) AND (value OR quality OR aesthetics OR visual quality) AND (evaluation OR assessment OR analysis OR estimation). A publication period between 2006 and 2012 was defined.

Table 1 Literature search results. The first column corresponds to the publication name, the second column indicates if the publication is a conference (C) or a journal (J), the third column shows the publisher name, and the fourth column shows the number of papers related to this survey.

Publication                                    C/J  Publisher         No. of papers
CVPR                                           C    IEEE              18
ICME                                           C    IEEE              15
CVIU                                           J    Elsevier          12
ICIP                                           C    IEEE              9
IJCV                                           J    Springer          8
Pattern recognition                            J    Elsevier          8
MM                                             C    ACM               6
TIP                                            J    IEEE              6
IET-CV                                         J    IET               4
CIVR                                           C    ACM               3
Expert systems with applications               J    Elsevier          3
ECCV                                           C    Springer          2
Eurographics                                   C    Wiley-Blackwell   2
ICASSP                                         C    IEEE              2
IJPRAI                                         J    World Scientific  2
Transactions on consumer electronics           J    IEEE              2
Transactions on Graphics                       J    ACM               2
Transactions on multimedia                     J    IEEE              2
Visual communication and image representation  J    Elsevier          2
Other                                          J/C  -                 42
Total                                                                 150

2.2 Depth-first search

In this strategy, the relevant papers were found by using the following methodology: (1) based on a set of keywords, for every result returned by a given search engine, (2) the bibliography was analyzed, and (3) relevant cited work was reviewed, including the root paper itself.
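The bounded citation traversal described above can be sketched as follows. This is an illustrative sketch only: `get_references` and `is_relevant` are hypothetical callbacks standing in for the manual bibliography analysis and the relevance judgment, not part of the original survey pipeline.

```python
def collect_related(seed_papers, get_references, is_relevant, max_depth=3):
    """Depth-limited traversal of a citation graph, as in Sect. 2.2.

    seed_papers: papers returned by the keyword search (the root level).
    get_references: hypothetical callback returning the papers cited by a paper.
    is_relevant: predicate deciding whether a cited paper is strictly related
                 (the stop criterion that keeps the process from being endless).
    """
    reviewed = set()

    def visit(paper, depth):
        if depth > max_depth or paper in reviewed:
            return
        reviewed.add(paper)           # the root paper itself is also reviewed
        for cited in get_references(paper):
            if is_relevant(cited):    # descend only into strictly related work
                visit(cited, depth + 1)

    for seed in seed_papers:
        visit(seed, 1)
    return reviewed
```

The `max_depth` cut-off mirrors the maximum depth of 3 stated in Sect. 2, and the relevance predicate mirrors the constraint that another level is considered only for strictly related citations.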
This is a practical and useful method for reviewing the literature, once the search is seeded using papers already considered relevant by other researchers. The great advantage is that this method dramatically reduces the searching time for finding relevant papers. On the other hand, there are some drawbacks. First, some stop criteria have to be defined, otherwise this becomes an almost endless process. Second, not every citation is directly related to the research area, since it is common to find papers from correlated areas such as Artificial Intelligence and Neurobiology. In order to perform this search, some constraints have to be defined. The search is performed in a single level. Another level is considered if and only if a cited paper is strictly related to the area.

2.3 Search results

Table 1 contains the conferences and journals returned from the above-mentioned search method. It also notes whether each is a conference or a journal, the publisher name, and the number of papers selected for this survey. In Table 1, Other refers to conferences or journals with only one related publication. Figure 2 illustrates the balance between conference and journal papers that are reviewed in this survey.
Usually, the areas of enhancement and analysis work side-by-side, e.g., enhance- mentisoftenperformedinordertoobtainmorepreciseanaly- sis, and a good analysis may help identify which aspects of a digital photo should be enhanced. In spite of that, and for didactic purposes, these areas are discussed separately in this survey. As mentioned in the previous section, enhancement work may be divided in on the fly and off-line. On the fly approaches are the ones for which it is possible to modify the environ- ment during image acquisition, while in off-line approaches that is not possible, thus, usually the photos are modified or enhanced after acquisition. Nonetheless, both expressions, off-line and on the fly, are also used with other connotations. Chartier and Renaud employed off-line in a noise filtering context [22], while Ercegovac and Lang used the same term in a digital arithmetic context [46]. In the following two sections, more details on photo enhancement approaches are given. 3.1 On the fly enhancement Although it is generally possible to improve photos by means of a wide range of enhancement algorithms (e.g., red eye cor- rection, histogram processing, among others), there are some particular scenarios from which some information is com- pletely lost during acquisition thus making useless a post- processing operation. Photograph acquisition is naturally a lossy process, which disregards factors such as color, tem- perature, environment, time and space of the environment, depth of the scene, among several others. For instance, a photo may be considered inadequate due to the zoom choice, e.g., a close-up should not be used when the goal is to show that the subject is in a given location. After the photo is shot, zooming out is not possible, and a good photo might be lost. In some specific situations, it is possible to perform some cor- rection. However, the results are usually far inferior to a sce- nario in which a new photo could be obtained. 
For example, image brightness can be adjusted after acquisition in order to improve image aesthetics, but this may result in intensity clipping. On the fly (or dynamic, live, real-time) enhancement algorithms are proposed to automatically perform or advise adjustments to the camera settings, before the photo is taken. Theperformedadjustmentsareintendedtoimprovethephoto quality or to avoid undesired conditions, such as inadequate focus and lighting. Most modern digital cameras have embedded on the fly enhancement mechanisms, such as an exposure meter, an automatic focus adjustment, and a white-balance adjustment. Since those mechanisms are mostly based on low level infor- mation, high level information about scene contents is usu- ally input by the user by means of an appropriate scene switch selection. For example, one may adjust the camera scene switch to motion when using camera to shoot a sports scene. A fully automated system may require that high-level information, such as the location of people in the scene, should be used in order to increase the overall understanding of the scene, as well as to help algorithms to decide where and how to perform changes. A first example of a fully-automatic approach is the face- priority auto-focus [125], which has been commercially used by several camera manufacturers. The goal is to set the focus of the camera to regions where there are faces, in order to avoid incorrect focus priority. For example, the Nikon D90 camera uses face position to correctly adjust focus on people present in the scene [107]. Another example, which recently became very popular, is the smile shutter function, which shots the photo only when every detected face in the image is smiling. Several cam- eras and prototypes incorporating the above features have been developed by camera manufacturers, such as Sony and Canon [47]. Photographic composition rules are also considered as important features for the dynamic adjustment of an image. 
The conformity with photographic composition rules can be achieved with slight movements of the camera. The pro- duction of an autonomous robot photographer is a possible direction to take to address this aspect. The robot devel- 123 J Braz Comput Soc (2013) 19:341–359 345 oped by Byers et al. [13,14] was designed to be placed in an event, moving towards possible subjects, proceeding the composition, and, finally, obtaining the photo. The subject of a photo may be identified by several approaches, such as considering the output of a skin detection algorithm and a laser range-finder sensor [13,14]. This information may also be used for finding the path that a robot must follow to reach the subjects. After getting to the desired place, once again the scene is analyzed for achieving a good composi- tion for the photo. Four composition rules (rule of thirds, empty space, no middle, and edge rule) were used to guide their system to obtain a good composition. The system per- formance was assessed in some real-world events such as a wedding reception and during the SIGGRAPH 2003 confer- ence. Another example of a robot photographer (but less inde- pendent than the one previously presented) is the Sony Party Shot [143]. The Sony Party Shot apparatus can be plugged into the camera for locating and photograph- ing people by moving in three degrees of freedom (pan, tilt, and zoom). The limitation of this approach is the need to put the robot in a fixed position with the aim of locating people of that point of view, and taking pho- tos. Some approaches may consider acquiring multiple images with different camera settings in order to detect and/or correct issues after image acquisition. For example, two subsequent photos may be obtained with different camera apertures [4]. The acquired images are then combined for finding the sub- ject and analyzing photo composition. 
Besides improving photographic composition of the image, it is also possible to locate the mergers (which occur due to the projection of a 3D world scene into a 2D representation, in which back- ground objects appear to be connected to the objects in the foreground) [4]. Despite the obvious advantage of on the fly approaches to digital photography, there are a number of limitations. Since most algorithms demand high processing times and some level of scene understanding, on the fly processing might be impractical due to battery consumption. A high processing time can be understood as a period of time that exceeds the time between two scenes setups (e.g., people position and lighting conditions). In order to improve scene understanding, stereo vision might be employed to find interesting objects, to pro- vide depth estimates, and to improve image segmenta- tion as well, which may help with the acquisition of bet- ter photos. In a dynamic environment, stereo vision may be obtained by the use of Pan–Tilt–Zoom (PTZ) cam- eras through simple-region SSD (sum of square difference) matching [158]. Finally, on the fly composition might be used for automatic and semi-automatic panorama creation [23]. 3.2 Off-line enhancement This section is devoted to discuss existing methods for the enhancement of photos that have already been acquired. The goal is to enhance a photo using only the existing informa- tion in the image file (i.e., pixels, Exchangeable Image File format—EXIF information, faces detected, etc.). Changes typically occur at pixel level, and are required when it is not possible to obtain another photo and the resulting photo has room for improvements. For off-line enhancement, the image representation might be in any color space, however Benoit et al. [5] have shown some advantages when using mod- els based on the human visual system for low-level photo enhancement. 
Generally, enhancement can make a photo that presents some type of imperfection more attractive. For instance, a photo may look better after removing the artifacts that lens dust may cause [188]. It is also possible to improve photos by smoothing the subject's skin [77], by adjusting general aspects such as contrast and brightness for both generic [183] and specific types of photos (e.g., improving contours in nature photos [128]), or by removing an undesired object (e.g., an unknown person or a light pole [6,170]). This last type of enhancement may be achieved by image inpainting, as discussed next. Image inpainting has been widely used for enhancing photos [170]. Through inpainting, one can remove an element that harms the composition of a photo [170]. Inpainting algorithms remove elements by statistically analyzing the area surrounding an indicated region and filling that region with a similar texture. Inpainting may be performed by combining texture synthesis, geometric partial differential equations (PDEs), and coherence among neighboring pixels [6]. Patch sparsity algorithms are used to improve image inpainting [170]. Enhancement by example has also been employed [72]. When users classify a photo, they indirectly indicate the features they consider important. Processes such as sharpening, super-resolution, inpainting, white balance, and deblurring are applied to photos so that they reflect the features present in example images. Other types of enhancement are photo compositing, in which one face can be replaced by another [86], and collage algorithms, which group selected images into a new one [149]. Cropping algorithms are designed to obtain an image with smaller dimensions than the original. There are several methods for automatic [24,138,147,186,3,1,179] and semi-automatic [133] photo cropping.
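To make the surrounding-texture idea concrete, the sketch below fills a hole by repeatedly averaging the known 4-neighbours of each missing pixel, which approximates a smooth (harmonic) fill. This is a deliberately simplified stand-in for the texture-synthesis and PDE machinery of the cited works; the function name and parameters are illustrative assumptions.

```python
import numpy as np

def inpaint_mean_fill(img, mask, iters=50):
    """Toy inpainting: fill masked pixels by iterated neighbour averaging.

    img  : 2-D float array (grayscale); values in the known region are kept.
    mask : 2-D bool array, True where pixels are missing.
    Repeated averaging of the four axis-aligned neighbours approximately
    solves Laplace's equation on the hole, giving a smooth fill.
    """
    out = img.copy()
    out[mask] = 0.0
    known = ~mask
    for _ in range(iters):
        padded = np.pad(out, 1, mode='edge')
        wpad = np.pad(known.astype(float), 1, mode='edge')
        # Weighted sum of the four neighbours and a count of how many
        # of them currently carry a value.
        nb_sum = (padded[:-2, 1:-1] * wpad[:-2, 1:-1] +
                  padded[2:, 1:-1] * wpad[2:, 1:-1] +
                  padded[1:-1, :-2] * wpad[1:-1, :-2] +
                  padded[1:-1, 2:] * wpad[1:-1, 2:])
        nb_cnt = (wpad[:-2, 1:-1] + wpad[2:, 1:-1] +
                  wpad[1:-1, :-2] + wpad[1:-1, 2:])
        update = mask & (nb_cnt > 0)
        out[update] = nb_sum[update] / nb_cnt[update]
        known = known | update
    return out
```

A real system would fill with surrounding-like texture rather than a smooth surface, but the interface (image plus user-indicated mask) is the same.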
Cropping is performed by extracting an area of interest from the original photo [147,82,179], to improve the quality of the photographic composition [186,133,18], to retarget a photo to smaller displays [24,138,3,1,179], or to recompose the photo [87]. Most cropping methods have in common the use of content-aware strategies. Content detection may rely on face detection [147,24,186,138,18], saliency detection [147,24,186,3,1,87], or user interaction with tracking of the user's gaze [133]. Region-of-interest (ROI) cropping methods aim to extract the fraction of the original image that contains some element of interest. The dimensions of the resulting image depend on the original image contents. Some restrictions may apply, such as maintaining the original image proportions or leaving some room between the element of interest and the photo edges [147,82]. Common applications of such methods are thumbnail cropping and image summarization. ROI cropping may be improved with face detection for images containing people. Other elements of interest (e.g., animals) may be found using specific detectors [165] or a generic detector such as saliency maps. Saliency and spatial priors have also been used for content-aware thumbnail cropping [82]. Cropping for improved composition may help the photographer achieve better results by modifying the image dimensions or proportions. As further discussed in Sect. 4.1, there are rules for analyzing the composition quality of a photo, such as the rule of thirds, which may be combined with small changes in the image dimensions, aiming at a better composition by removing just a few pixel rows (or columns) of the photo. On the other hand, there are methods that make direct changes to the image contents. These will be referred to in this paper as recomposition methods, discussed next.
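A minimal sketch of ROI cropping with the "leave some room" restriction mentioned above could look as follows. The function and its margin parameter are illustrative assumptions, not an implementation from the cited papers; a real system would first obtain the box from a face or saliency detector.

```python
import numpy as np

def crop_roi(img, box, margin=0.15):
    """Crop around a region of interest, leaving breathing room.

    box    : (top, left, bottom, right) of the detected element,
             e.g. a face bounding box.
    margin : fraction of the box size kept around it, clamped to
             the image borders.
    """
    h, w = img.shape[:2]
    top, left, bottom, right = box
    mh = int(round((bottom - top) * margin))
    mw = int(round((right - left) * margin))
    t, b = max(0, top - mh), min(h, bottom + mh)
    l, r = max(0, left - mw), min(w, right + mw)
    return img[t:b, l:r]
```

For thumbnail cropping, the resulting window would additionally be snapped to the requested aspect ratio before resizing.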
Cropping algorithms are limited in the sense that they crop images from the borders; cropping columns or rows in the middle of the image usually results in distortions. In some cases, however, the important content of the image is close to the borders. There is a class of algorithms, named retargeting algorithms, that may remove image regions other than the borders. Retargeting algorithms are mainly designed to adapt an image to different rendering devices, such as mobile phones [24,138,3,1]. The goal is to preserve the main content of the image while discarding unnecessary or redundant information, in such a way that the main content is more visible than if simple resampling were applied. Global energy optimization over the whole image may be used for image retargeting [127]. Face detection, text detection, and visual attention, all combined, may also be used for finding content in photos [24]. Another class of algorithms, similar to retargeting algorithms, is that of recomposition algorithms. Recomposition is a very challenging area of research: the goal is to automatically change the image in order to obtain a more pleasant composition. Examples of such changes include modifying the subject proportions, removing elements from the photo, and cropping the image. Most approaches are, as yet, semi-automatic, in the sense that they require human intervention to indicate which areas need improvement. In this survey, recomposition algorithms are considered distinct from retargeting algorithms, since they do not necessarily imply changing the original image dimensions or proportions; the changes are usually artistic ones. Liu et al. [87] proposed a recomposition method based on finding elements of interest and applying composition rules (such as the rule of thirds, diagonal guidance, and visual balance) to produce a better-composed image. In a similar approach, Bhattacharya et al.
[7] proposed a semi-automatic recomposition method which uses stress points (adapted from the rule of thirds) for optimal object placement and visual balance for improving composition. Experiments showed that 73% of recomposed images were considered better than their original counterparts by human observers.

4 Analysis

In this section, relevant work on photography analysis is presented. Methods in this area may be organized according to their purposes, as follows:

1. Assessment. The goal is to score photos on a given scale (e.g., from zero to ten, or good/bad) according to some criterion: either the image quality (related to some degradation of the image) or the aesthetics of the image could be assessed; and
2. Information extraction. The goal is to detect the presence and location of some pre-defined elements of interest in a photo, e.g., people and faces. The relationships between photos could also be extracted.

In the following sections, each of these groups of methods is described in more detail.

4.1 Assessment

Assessment algorithms typically assign a score to a photo based on some metric. This allows the creation of an ordering based on the returned metric values. Assessing (or ranking) a photo is a very difficult and controversial task, especially when dealing with consumer photos. Two main aspects can be evaluated: the quality and the aesthetics of the photo. While image quality analysis (IQA), in this survey, is understood as the assessment of the degradation of the image (e.g., sensor noise, resolution, and compression artifacts), aesthetics analysis is related to the visual appearance and appeal of the photo (e.g., color harmony and photo composition). IQA is out of the scope of this survey. Several photo composition techniques and rules of thumb have been defined by experienced photographers based on heuristics, and are considered responsible for improving the aesthetic quality of a photo.
Those rules, known as photographic composition rules, may be used to identify higher-quality photos based on the assessment of features. Photo composition may be regarded as the factor most decisive to consumers when judging photo quality [134]. The application of a photo composition rule will not necessarily assure the best aesthetic results. Nevertheless, photos obeying such rules are likely to look more appealing to consumers than if they were shot without attention to the rules [134,17]. However, it is not necessarily true that a photo must obey composition rules to be considered appealing by consumers. This apparent contradiction may be explained by the existence of factors other than composition, e.g., the people involved, photogeny, and the place where the photo was taken. Some of the photo composition rules were later explained by theories of perception. The rule of thirds is a good example: it is known that when the subject of the photo is placed at one of the thirds of the image, the viewer is stimulated, due to the nature of the human visual system, to perceive other regions of the photo. Other rules are not well defined in the specialized photography literature, or are defined in terms of more subjective concepts, e.g., trying to obtain a more casual and spontaneous picture [12]. Photographic composition rules have been adopted for ranking photos in many studies [87,17,18,39,4,14,73,139,95,81,41], in which relationships between some predefined rules and human judgment have been identified. Rules may come from human visual system theories as well as from professional photographers' expertise. The rule of thirds is the most explored photographic composition rule in the literature [87,17,18,39,4,14,74]. One of the main reasons is that it is easily translated into an algorithm.
The rule of thirds states that one should preferentially place the subject at a third of the image width or height (depending on the image orientation). Existing works differ on how the subject of the photo is located, for example by using (1) face detection algorithms [17,18,14]; (2) low-level information, such as borders and regions found by the mean shift algorithm [87,4]; or (3) the differences of pixels positioned in those areas of interest [39]. Other rules are also explored, but with less consensus among authors. The zoom rule can be applied to classify photos according to the distance from the camera to the subject: excessive or insufficient distances are penalized by the algorithm as inadequate compositions. Since a precise detection of the subject is required for this type of analysis, Cavalcanti et al. [18] and Byers et al. [14] used face detection as the main information to identify the subject position. In a similar approach, Kahn et al. [74] used the ratio between the area of the face and the area of the image. The integrity rule was proposed to identify undesired chopping of the main subject. The great drawback of this rule is the high cost of precisely detecting the subject in a photo. The use of anthropometric measures was shown to be effective for subjects in an upright frontal position: using reliable information, such as the coordinates and dimensions of a detected face [139,18], it is possible to infer the position of the rest of the subject's body and detect possible chops. Both the zoom and integrity rules were designed under the assumption that trusted high-level information, such as face coordinates, is available. This is a great drawback, since an imprecise detection may lead to wrong conclusions. There are approaches that rely mainly on pixel intensities rather than on high-level information.
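As an illustration of how directly the rule of thirds translates into an algorithm, the sketch below scores the placement of a detected subject centre against the four "power points" where the third lines intersect. The scoring and normalisation scheme is our own toy example, not one taken from the cited works.

```python
def thirds_score(subject_xy, image_wh):
    """Rule-of-thirds placement score in [0, 1].

    subject_xy : (x, y) centre of the detected subject, e.g. a face.
    image_wh   : (width, height) of the photo.
    Returns 1.0 when the subject sits exactly on a power point and
    0.0 when it is as far from all four of them as possible.
    """
    x, y = subject_xy
    w, h = image_wh
    # The four intersections of the horizontal and vertical third lines.
    points = [(w * i / 3.0, h * j / 3.0) for i in (1, 2) for j in (1, 2)]
    d = min(((x - px) ** 2 + (y - py) ** 2) ** 0.5 for px, py in points)
    # A frame corner is the location farthest from its nearest power
    # point, so its distance is used to normalise the score.
    dmax = ((w / 3.0) ** 2 + (h / 3.0) ** 2) ** 0.5
    return max(0.0, 1.0 - d / dmax)
```

The subject centre would come from any of the locating approaches listed above (face detection, mean shift regions, or pixel differences).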
One disadvantage is that color images may have their channels treated independently, which may result in redundancy that must be handled by classification algorithms. For instance, in the work of Datta et al. [39], 56 initial rules were reduced to just 15 after the execution of support vector machines (SVM) and pruning [33], since there was redundancy in applying the same data extraction algorithm to all images. Finally, the visual balance rule is used for analyzing whether the photo elements are well balanced, i.e., placed in the photo in such a way that the observer's attention is equally divided among them [87,7,176]. Besides the use of photographic rules, some authors evaluate low-level features (e.g., sharpness, brightness, and contrast) in order to assess the overall appearance of a photo [108,73,39,81,80]. Higher-level image analysis may also be employed for photo ranking. For instance, aesthetic analysis may be achieved by learning how humans classify photos according to some subjective criteria. Although that might be difficult, there are studies focusing on the emotions evoked in humans by artwork [174]. The criteria may be diverse; for instance, the time a human spends evaluating an image can be a criterion for confidence in the human assessment [45]. It is believed that the emotions evoked by a natural image can be understood by means of the aesthetics gap concept. According to Datta et al. [40], "The aesthetics gap is the lack of coincidence between the information that one can extract from low-level visual data and the interpretation of emotions that the visual data may arouse in a particular user in a given scenario." Color harmony can also be considered an important feature [176]. Low-level information such as lighting, color [95,80,74], luminance [74], edges, and range of lightness [48] is used for judging the harmony (a high-level, subjective aspect) of photos and videos [104].
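As a concrete example of the low-level features mentioned above, the sketch below computes simple proxies for brightness, contrast, and sharpness with plain NumPy. The Laplacian-variance focus measure is a common heuristic; the exact formulas here are illustrative choices, not those of the cited works.

```python
import numpy as np

def low_level_features(gray):
    """Brightness, contrast, and a sharpness proxy for a grayscale photo.

    Brightness is the mean intensity, contrast the standard deviation,
    and sharpness the variance of a discrete Laplacian: blurred images
    have little high-frequency content, so the Laplacian response
    is nearly flat.
    """
    brightness = float(gray.mean())
    contrast = float(gray.std())
    # 4-neighbour discrete Laplacian over the interior pixels.
    lap = (-4.0 * gray[1:-1, 1:-1]
           + gray[:-2, 1:-1] + gray[2:, 1:-1]
           + gray[1:-1, :-2] + gray[1:-1, 2:])
    sharpness = float(lap.var())
    return brightness, contrast, sharpness
```

A ranking system would feed such features, together with composition scores, into a learned classifier rather than using them directly.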
Besides all the factors presented above, there are other common-sense factors that might influence human judgement, listed below:

• People involved. A photo may be considered more or less appealing depending on the identity of the people shown, e.g., even a badly composed and illuminated photo might be considered good if it contains people for whom the consumer has affection, such as the photographer's child or a famous person. The opposite might also happen: a well-composed photo might be discarded if the person in the photo is unknown;
• Place where the photo was taken. Some photos are related to places rarely visited. Thus, even if a photo has problems, e.g., in composition or illumination, it is likely that it will not be discarded, because of its uniqueness;
• Photogeny. Well-composed photos do not necessarily contain photogenic people. It is possible to find one or more group members talking or looking elsewhere at the moment the photo was shot, especially in group photos; and
• Personal preferences. Some people might prefer a photo that does not obey composition rules.

Despite the factors discussed above, photo ranking might be useful for helping consumers to identify (at least within a group of pre-selected photos) the ones with more attributes related to a better-looking or more appealing impression.

4.2 Information extraction

This section discusses approaches for extracting elements of interest that might be important to a photographic analysis system. The reviewed work involves approaches for face and people detection, landscape analysis (e.g., horizon tilt evaluation), and identification of the image class (e.g., whether it is a photo or a graphic image). The goal is neither to rank nor to classify the images, but to extract information. This may be considered an auxiliary source of information for the image ranking methods discussed in the previous section.
Elements of interest might be anything the user is searching for: (1) a face; (2) a person; (3) regions with unwanted features such as dissection lines [139]; (4) unfocused or blurred regions [151]; (5) a sunset area [9]; (6) text [153]; and many others. Generally, information extraction involves the construction of a classification model for the targeted element (this can be performed, for example, through a learning process using a set of reference patterns). It is commonly accepted that the best technique to build a particular model for a given problem depends on specific features of the problem. Decision trees [130,43] are typically employed to identify classes that have a reduced number of constraints, both numerical and categorical, such as the number of colors, the number of people, etc. The ID3 classification algorithm [124] was used for classifying an image as either a digital photo or artwork (e.g., a logo, a drawing, and other artificially generated images). The decision tree was trained with 1,200 images, and an accuracy of 95.6% was achieved when distinguishing the classes. This result was verified through tenfold cross-validation. The SVM [33] is widely used for classification, which is useful for detecting features in photographs, such as indoor or outdoor scenes [137], the presence of a sunset [9], the level of expertise of the photographer [152], and the presence of skin regions [64], among others. The SVM is normally used when the set of constraints is not small and there is no clear linear separation of the data for each class. When defining an SVM model, a kernel must be specified. For instance, Serrano et al. [137] used a radial basis function, while Boutell et al. [9] and Li et al. [80] used a Gaussian function. When there is a great amount of data, and a great number of components as well, the high correlation between those components may harm classification. Many authors use principal component analysis (PCA) [65] for reducing the dimensionality of the feature space [152]. Some information extraction methods are designed for detecting human-related information, such as face, eye, skin, and pose. Since most photos contain people, such information is very important to any photographic analysis system. Recent work on face detection has focused on multi-view, rotation- and scale-invariant face and eye detectors. Discriminant features [162], low-level features [42], and Sobel edge detection with morphological operations and thresholding [154] may be used for this goal. Face recognition algorithms can be used for identifying photos which contain (or do not contain) a specific individual, as well as for finding relationships between images due to the presence of a given person or group [55]. The identification of a specific person might be used, according to a rule defined by Loui et al. [92], to infer the relevance of a given photo based on the relationship of the people to the photo owner. Recognition can also be used, along with human tagging and some logic formalism (e.g., Markov logic), to retrieve social connections in photo repositories [187,75,140,57]. In a photo selection scenario, it is very important to identify the relationship between the people present in some image set. Such a relationship may be used for predicting the significance of the images to that set [91]. This can be done using local patterns of Gabor magnitude and phase [166]. Face recognition relies heavily on face detection; thus, imprecision in face detection may result in poor face recognition. There are, however, approaches for misalignment-robust face recognition [171]. Face details, such as birthmarks [120] and clothes [54], are also used to improve face recognition. A Markov random field is used for recognizing people based on contextual clues such as clothing [2].
Gender can also be a clue for face recognition, by means of spatial Gaussian mixture models (SGMM) [84]. Considering that low-level image features are used by consumers to determine whether a given photo is better than another [137], detecting the presence of such features may be very useful for ranking photos. One of those features is blur. Blur may be used for automatically ranking photos [73,95]. Blurred images can be identified by the detection of features such as image color, gradient, and spectrum information [88]. The spectral analysis of an image gradient is also used for identifying blurring kernels in images [69]. Other features, such as clarity, complexity, and color composition, are also explored [94,39]. Besides the face, skin regions are another important evidence of the presence of a human in a photo, and several approaches have been proposed. Skin tone may be detected by a pixel-wise approach or by a region-based approach [76]; both use a color model [64]. Additionally, it is possible to decompose skin tone into hemoglobin and melanin components [169], which can be used for a better understanding of skin texture. Skin classification may be performed with SVMs and region segmentation [64], as indicated earlier in this section. The approaches can be compared with receiver operating characteristic (ROC) analysis [136]. While evidence of people can be obtained by skin or face detection, there are also approaches by which humans can be directly detected in images. Recent approaches use local binary patterns (LBP) for human detection through two variants, semantic-LBP and Fourier-LBP [105]. People detection can also be achieved by the use of quantified fuzzy temporal rules for representing knowledge of human spatial data; this kind of data is learned with an evolutionary approach [106].
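A pixel-wise skin-tone detector of the kind discussed above can be sketched with a simple RGB decision rule. The specific thresholds below are a widely quoted uniform-daylight heuristic, used here only to illustrate the pixel-wise approach; the cited works layer learned colour models and region analysis on top of such rules.

```python
import numpy as np

def skin_mask(rgb):
    """Pixel-wise skin detection with a simple RGB decision rule.

    rgb : (H, W, 3) uint8 array. A pixel is labelled skin when red is
    clearly dominant (R > G, R > B), the channels are bright enough
    (R > 95, G > 40, B > 20), and there is enough colour spread to
    rule out grayish pixels.
    """
    rgb = rgb.astype(np.int32)  # avoid uint8 overflow in differences
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    spread = rgb.max(axis=-1) - rgb.min(axis=-1)
    return ((r > 95) & (g > 40) & (b > 20) & (spread > 15)
            & (np.abs(r - g) > 15) & (r > g) & (r > b))
```

A region-based approach would post-process this mask, e.g. discarding small connected components before classification.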
A head-and-shoulders detector can also be built with a watershed and border detector, whose outputs are used to train a classifier using AdaBoost [168]. Some research has been conducted on detecting people in a specific context, but it might be extended to more general scenarios. For instance, it was shown that the use of region covariance features with a radial basis function kernel SVM, and of histograms of oriented gradients (HOG) with a quadratic kernel SVM, outperformed local receptive fields with a quadratic kernel SVM in the specific scenario of pedestrian detection [116]. Similarly, human activities, such as 'fighting' or 'assault', are recognized and encoded using context-free grammars through a description-based approach [131]. Besides people detection, other types of information may be useful for photography analysis. For example, the social context might be inferred by analyzing the distribution of the people found within the image [56]. A graph-based approach has been shown to be useful for finding rows of people [56]. The pose of the people is also important information about the photo. Each body part has a limited number of positions relative to the other body parts; for instance, the head is directly connected to the shoulders and cannot appear connected to the feet. Thus, if a face is found, the shoulders should be right below it. There are several approaches to human pose estimation. Human pose may be estimated in video sequences using multi-dimensional boosting regression on Haar features [8]. In static images, pose can be classified through angular constraints and variations of body joints with the use of SVMs [97], with an observation-driven Gaussian process latent variable model (ODGPLVM) [61] and non-tree graph models [70], or with a conditional random field (CRF) if multiple views are available.
A bottom-up parsing approach can be used to recognize the human body, performing pose estimation by segmenting multiple images. Besides human subjects, other types of subjects may be considered in a photo. Different subjects (e.g., natural or man-made objects and animals) might appear alone or interacting with humans, resulting in a more complex photo. A shape-driven object detector [129,59], SIFT [78], and sets of mattes [144] may also be employed for more general object detection. It is also possible to identify the region of interest by using capture information stored in the camera's EXIF data [83]. Instead of detecting a specific type of object, it is also effective to identify regions within the image that share some correspondence. In this sense, the image segmentation algorithm is of fundamental importance for photography analysis. There are several approaches to image segmentation. Since subjects and scenarios might vary widely in photography analysis, the more general the image segmentation algorithm, the better the result. The main methods for image segmentation are based on edge information [159,19], fragment-based approaches [36,79], point-wise repetition [182], tree partitioning under a normalized cut criterion [160], a nonparametric Bayesian model [115], a geometric active contour model [180], Markov random fields with region growing [123], Markov random fields and graph cut [25], and the local Chan–Vese (LCV) model [163]. Most algorithms deal with both color and gray-level images; some segmentation algorithms are specific to color images [19,181]. It is normally difficult to compare different image segmentation algorithms, but unsupervised objective assessment methods have been attempted for this task [185].

4.3 Grouping

Photo grouping is designed for establishing associations between groups of photos.
The associations may be established using either the semantic information found (such as the number of faces detected, the number of colors, etc.) or high-level information (e.g., the Global Positioning System (GPS) position present in some images' EXIF data).

4.3.1 Classification and clustering

Classification algorithms are designed to identify the class to which a given image belongs. There are several goals for image classification, e.g., (1) identifying, in a set of image files, which ones are photos and which ones are graphics [114]; (2) identifying whether photos were obtained in an indoor or an outdoor environment [137]; and (3) identifying whether images were obtained by an amateur or a professional photographer [151,80,94,111], among others. It is not completely known how humans perform classification tasks. Vogel et al. [157] have shown, however, that humans use both local and global region-based configurations for scene categorization. This implies that human-inspired algorithms may consider both local and global region-based information for better results in image classification. It was also shown that color plays an important role in image categorization for humans: natural images were better classified when presented in color as opposed to gray levels [157]. Classification has a close relation to information extraction, as discussed in Sect. 4.2. One of the main steps towards accurate image classification is the representation of the image, which is later used as input to a classifier. Representation can be performed by local descriptors [101], a topic histogram using probabilistic latent semantic analysis (pLSA) and expectation–maximization (EM) [93], multilevel representation [164], triangular representation, which is robust to viewpoint changes [67], resolution-invariant image representation [161], and the scale-invariant feature transform (SIFT) [164], among other methods.
Some relevant classifiers proposed in the literature are AdaBoost [93], SVM [101,164], multiple kernel learning (MKL) [93], Bayesian belief networks [38], Bayesian active learning [122], and conditional random field models [15,184]. The main challenges in image classification are the computational cost and the classification accuracy. A local adaptive active learning (LA-AL) method was used for lowering the number of training samples needed [93]. Within-category confusion can be dealt with using probabilistic patch descriptors, which encode the appearance of an image fragment and the variability within a category [101]. Clustering algorithms are intended to automatically group images based on their extracted features. Given a set of photos, clustering can be used to identify the existing relationships between them. Cooper et al. [30] presented an automatic temporal similarity-based method using EXIF data. Graph-based algorithms [52] and local discriminant models and global integration (LDMGI) [173] are common methods for image clustering. Since clustering is not commonly a supervised process, system improvements are necessary for reducing errors; thus, user feedback is used as a way to obtain relevant information about system performance [11].

4.3.2 Summarization

Another recent area of interest is finding relationships between photos for producing summaries. Summaries are useful since finding information in large sets of images can be time-consuming. Summaries are used for producing condensed displays of touristic destinations [117], simplifying photo browsing in personal collections [141,155,28], indexing [156], and storytelling [53,110], among other applications. A specific problem related to the task of producing summaries, or of filtering out redundant information from a collection of photographs, is the detection of near duplicates [28,148,126].
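Near-duplicate detection is often approached with perceptual hashing. The sketch below implements a simple average hash (our illustrative choice, not the method of the cited works): the image is reduced to an 8x8 grid of block means, thresholded at the grid mean, and two photos are flagged as near duplicates when the Hamming distance between their bit vectors is small.

```python
import numpy as np

def average_hash(gray, hash_size=8):
    """64-bit perceptual hash: block-average downsample, then
    threshold each block at the mean of all blocks.

    gray : 2-D float array whose sides are multiples of hash_size
    (real implementations resize arbitrary images first).
    """
    h, w = gray.shape
    # Reshape trick: split into hash_size x hash_size blocks and
    # average each block.
    small = gray.reshape(hash_size, h // hash_size,
                         hash_size, w // hash_size).mean(axis=(1, 3))
    return (small > small.mean()).flatten()

def hamming(h1, h2):
    """Number of differing bits between two hashes."""
    return int((h1 != h2).sum())
```

Because the hash depends only on coarse luminance structure, small crops, noise, or recompression leave it nearly unchanged, which is what makes it usable for duplicate filtering in summarization pipelines.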
Photos matching specific keywords [142] or GPS-tagged information [135] have been grouped to build 3D models of sightseeing spots. On-line tools, such as Bing Maps [102], have used some of these technologies for building 3D models of such places.

4.3.3 Image retrieval

According to Marshall et al. [99], image retrieval techniques can be classed as content-based image retrieval (CBIR) and annotation-based image retrieval (ABIR). In CBIR, the images themselves are processed to obtain information, while in ABIR, images are annotated with textual information, such as place, time, or photographer, and this information is used to retrieve the images. Most detection algorithms can be used as an intermediate step for retrieving images in CBIR [99,90], for example via recognized faces [187,121] and events [37], among others. Since manually tagging photos can be time-consuming, recent work considers the use of information automatically obtained from EXIF [83,148,85,132], SIFT [148,26], face recognition and connections found in social networks [27,113,155], and georeferences, which might be obtained from GPS devices [16], people clues such as faces and clothes [146], or other high-level information [132].

4.4 Discussion

In this section, algorithms for photo analysis have been organized into three categories: assessment, information extraction, and grouping. From the review performed, assessment seems to be the least explored area. This may be explained by the highly subjective nature of the task, which makes it difficult to perform precise or universal analyses. The other two areas are more explored in the literature and present a richer set of approaches. Besides the underlying limitations discussed in the next section, the approaches seem very promising for inclusion in a photography analysis system.

5 Critical analysis

The main issues covered in the studies reviewed in this survey are considered in this section.
To better discuss such issues, the following information about the articles was summarized: the source of the image set used, the size of the image set, the main goal, the metrics used for assessment, and the achieved results. The photo analysis algorithms are shown in Table 2 and the enhancement algorithms in Table 3. This section contains two subsections. In the first one, a review of the image sets used in the experiments is given; in the second, comments about the validation processes are presented.

5.1 Image sets

For most of the image analysis algorithms reviewed in this survey, the purpose is to perform tasks in a human-like manner. Thus, it is fundamentally important to ensure the photo sample used for testing is representative. Some studies were performed to identify user behaviour when photographing [96], sharing [103], analyzing [49], and managing [35] photos. However, since the conducted literature review did not find strong evidence about user preferences, the assessment of most algorithms for photo enhancement and analysis is performed by means of subjective assessment. According to the conducted literature review, there is no established methodology for carrying out subjective experiments in photo analysis. Some methodology ought to be employed, given the number of factors that might influence subjective assessment. Some of these factors are:

1. People involved. While in professional photos the people present are usually part of the subject, in consumer photos people are mostly known and significant to the photo owner. Therefore, a photo assessment performed by consumers might be too strict in the absence of a known person and too lenient in the presence of, for instance, a family member;
2. Place and event. In some situations, the photo might not be technically good, but it captured a place or a rare event. This could positively influence the judgement of the photo;
3. Style used.
Different users adopt different photo habits. The individual style of a user might not be appreciated by other users;
4. Number of images. There are an endless number of poses, camera settings, and subject positionings. Therefore, it is practically infeasible to represent this diversity of possibilities in a small set of images.

In Tables 2 and 3, the second column (Image sources) indicates the databases from which the images were obtained. In this column, Web refers to web-crawled images and Own refers to particular photos from the authors or contributors. The third column (Set size) represents the number of images used in the experiments (if any). The fourth column indicates the main goal of the work. The fifth column briefly indicates how the approach has been evaluated, in which Obj. represents an objective assessment and Sub. a subjective assessment method. The final column shows the best reported performance of the proposed algorithms. Tables 2 and 3 have been built based strictly on what was described in the papers. Whenever the information was not explicitly given in the paper, the results are shown in a non-numeric way, or the information is not suitable for the discussed problem, NI (Not Informed) is used. Both tables are sorted by the total number of images and then alphabetically by the name of the authors. By analyzing Tables 2 and 3 it is possible to draw some conclusions about the number of images and their sources. First, there is no consensus on the database to be used. This makes it impossible to perform a direct comparison between the results in Tables 2 and 3, and to reproduce the experiments as well. Second, the number of images employed in the evaluations varies drastically. The average number of images employed in photo analysis work is 45,344 with a standard deviation of 212,556, and a median of 3,581 with an interquartile range of 12,278.
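The photo analysis figures just quoted can be re-derived from the Set size column of Table 2 with the standard library alone. A minimal sketch; note that the quoted standard deviation is the sample standard deviation, and the quoted interquartile range corresponds to the "inclusive" quartile convention:

```python
import statistics

# Set sizes from the "Set size" column of Table 2 (photo analysis works).
sizes = [1_300_000, 70_000, 50_000, 40_000, 29_540, 25_000, 23_774, 19_101,
         17_613, 13_302, 12_000, 12_000, 12_000, 7_672, 6_000, 5_770, 4_500,
         3_700, 3_581, 3_141, 2_597, 2_355, 2_000, 2_000, 1_449, 1_200, 1_199,
         1_024, 975, 943, 589, 564, 500, 500, 500, 450, 200]

mean = statistics.mean(sizes)    # ~45,344: inflated by the single 1.3M-image study
sd = statistics.stdev(sizes)     # sample standard deviation, ~212,556
med = statistics.median(sizes)   # 3,581: robust to the outlier
q1, _, q3 = statistics.quantiles(sizes, n=4, method="inclusive")
iqr = q3 - q1                    # 12,278 with inclusive quartiles

print(round(mean), round(sd), med, round(iqr))
```

The mean is dominated by one very large study, which is why the median and interquartile range are the more informative summary of typical experiment sizes here.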
If only photo enhancement work is considered, the average is 716 with a standard deviation of 952, and a median of 375 with an interquartile range of 766. Third, no work presented a categorization of the image set, e.g., the distribution of the number of people in a given set is not known. Finally, some papers only presented a simple visual verification of the results (e.g., Achanta et al. [1] and Banerjee et al. [4]). It is important to highlight the non-utilization of a labeled and representative public image database for photographic analysis. Therefore, most authors crawled images from on-line repositories. Web crawlers can be employed for creating image sets which present a richer and more diverse set of situations, and a higher number of pixels [114,137,24,139]. Nevertheless, the great drawback is the lack of copyright licenses for public experiments.

Table 2 Summary of the reviewed work on analysis techniques

Authors | Image sources | Set size | Main goal | Assess. method: used metrics | Results
Liu et al. [90] | Photosig [10], NUS-WIDE [29], Kodak [172] | 1,300,000 | CBIR | Obj.: precision | 14.5 %
Li et al. [83] | NI | 70,000 | ROI detection | Obj.: precision and recall | NI
Pang et al. [117] | Flickr [50] | 50,000 | Grouping | Sub.: scaled (1–5) | Average rank > 4
Sinha [141] | Flickr [50], Picasa [58] | 40,000 | Grouping | Obj.: JS divergence | JS div. < 0.3
Tong et al. [152] | Corel [32], MS | 29,540 | Assess.: home user x photographer | Obj.: MSE | 11.1
Marshall [99] | MIRFLICKR 25000 [51] | 25,000 | CBIR | NI | NI
O'Hare [113] | Own | 23,774 | Grouping | Obj.: H-hit rule | NI
Dao et al. [37] | Picasa [58] | 19,101 | Grouping | Obj.: F-measure | NI
Luo et al. [94] | Web | 17,613 | Assess.: high x low quality | Obj.: accuracy | 95 %
Yao et al. [175] | Photo.net [60] | 13,302 | Assess.: ranking | Sub.: scale (0–100) | 75.33 %
Ke et al. [73] | DpChallenge [20] | 12,000 | Assess. | Sub.: scale (1–10) | 72 %
Yeh et al. [177] | DpChallenge [20], Flickr [50] | 12,000 | Assess.: ranking | Sub.: scale (1–10) | 81 %
Yeh et al. [176] | DpChallenge [20], Flickr [50] | 12,000 | Assess.: ranking | Sub.: scale (1–10) | 93 %
Sandnes [132] | Own | 7,672 | Grouping | Obj.: accuracy | 88.1 %
Su et al. [145] | DpChallenge [20] | 6,000 | Assess. | Sub.: scale (1–10) | 92.06 %
Boutell et al. [9] | Corel [32]/Own | 5,770 | Class.: sunset | Accuracy | 96.4 %
Singla et al. [140] | Own | 4,500 | Summ. | Obj.: precision and recall | NI
Oliveira et al. [114] | Web | 3,700 | Class.: photo x graphic | Obj.: cross-validation | 95.6 %
Datta et al. [39,41] | Photo.net [60] | 3,581 | Assess.: ranking | Sub.: scale (1–7) | 70.12 %
Obrador et al. [111] | Photo.net [60] | 3,141 | Class.: high x low aesthetics | Sub.: scale (1–7) | 66.5 %
Zhang et al. [187] | Own | 2,597 | Grouping | NI | NI
Tong et al. [151] | Corel [32] | 2,355 | Class.: blur | Obj.: accuracy | 98.6 %
Obrador [108] | NI | 2,000 | Assess.: ranking | Sub.: 6 grades | 37.5 %
Shen et al. [139] | Web, Flickr [50] | 2,000 | Detect.: dissection lines | Sub.: TP + FP | 80.87 and 33.61 %
Cooper et al. [30] | Own | 1,449 | Class.: event | Obj.: F-measure | 0.8568
Serrano et al. [137] | Web | 1,200 | Class.: indoor x outdoor | Obj.: accuracy | 90.2 %
Chu et al. [26] | Own | 1,199 | Grouping | Obj.: precision | 0.68
Chu et al. [27,28] | Flickr [50] | 1,024 | Grouping | Sub.: scale (1–5) | Satisfaction > 4
Tang et al. [148] | Picasa [58] | 975 | Grouping | Obj.: precision and recall | NI
Loui et al. [92] | NI | 943 | Grouping | Sub.: correlation | 0.84
Lo Presti et al. [121] | Gallagher [54] | 589 | Retrieval | Obj.: error rate | 27.68 %
Kim et al. [75] | Own | 564 | Grouping | Obj.: precision at Top-N | MAP > 0.4
Li et al. [81] | Flickr [50] | 500 | Assess.: ranking | Sub.: choice | 51 %
Li et al. [80] | Flickr [50] | 500 | Assess. & Class. | Sub.: scale (0–10) | Residual sum-of-squares: 2.38
Khan et al. [74] | Li et al. [81] | 500 | Assess.: ranking | Sub.: choice | 61.10 %
Jiang et al. [71] | Flickr [50], Kodak [172], Own | 450 | Assess.: ranking | Sub.: scale (0–100) | MSE < 17
Obrador et al. [110] | Own | 200 | Grouping | Sub.: choice | 75 %

Table 3 Summary of the reviewed work on enhancement techniques

Authors | Image sources | Set size | Main goal | Assess. method: used metrics | Results
Byers [13] | Own | 3,008 | In-camera photo composition | Sub.: user selection | 35 %
Tian et al. [149] | Own | 1,627 | Photo collage | Sub.: professional | Most results considered good
Liu et al. [87] | Web | 900 | Recomposition | Sub.: forced choice | 93.7 %
Bhattacharya et al. [7] | Web | 632 | Recomposition | Sub.: forced choice | 93.7 %
Yin et al. [179] | Own | 600 | Media adaptation | NI | NI
Suh et al. [147] | Corbis [31] | 150 | Cropping | Sub.: recognition time | Faster using the approach
Zhang et al. [186] | Own | 100 | Cropping | Sub.: scaled | 41 %
Chen et al. [24] | Web | 56 | Recomposition | Sub.: scaled | 71.28 %
Santella et al. [133] | NI | 50 | Cropping | Sub.: forced choice | 58.4 %
Setlur et al. [138] | NI | 40 | Retargeting | Sub.: forced choice | 89.1 %
Achanta et al. [1] | Berkeley [100] and MSRA [89] | NI | Retargeting | NI | NI
Banerjee et al. [4] | NI | NI | Recomposition | NI | NI
Lim et al. [86] | NI | NI | Composite | NI | NI
There are some public image databases that are free for academic research use (such as Flickr [50] and other databases under Creative Commons license [34]), yet they are not labeled. Regarding photo analysis, there are some web databases which have been used as a ground-truth for subjective quality analysis (e.g., DPChallenge [20] and Photo.net [60]). However, since those databases were designed for photo contests, they typically do not represent the reality of consumer photography, which usually has lower quality and less exigent evaluators. Two authors have built datasets in order to make them available to the community. The first work, from Luo et al. [94], presented a dataset of 17,000 labeled photos. The set was built to be diverse, since the photos are distributed over seven categories; they were labeled as high or low quality. The problem of this photo set lies in the labeling process. Some important information is not given, such as the exact number of votes for each category, the origin and background of the photographer, and the personal information of the voters. Besides this, a more precise ranking (instead of only classifying images as high/low quality) could be used for a more general use in enhancement and analysis algorithms. The other work, from Bhattacharya et al. [7], presented a smaller photo set (only 632 images). Other factors, such as the ones analyzed in the Luo et al. [94] approach, could not be evaluated, since the image set built by Bhattacharya et al. [7] could not be downloaded due to a Web server error. Thus, it might be considered that the image set is no longer available. One might suggest that if it is possible to learn an expert opinion about a photo, it would be possible to analyze a photo. However, this is surprisingly not always true. Since average photography consumers do not have training in what a good photo is, they often do not agree with advice given by experts.
There are several other factors that might influence a photography user's opinion, such as photo effects and the event from which the photo was obtained, rather than photographic rules. In conclusion, it was not possible to identify comparative studies involving different approaches which considered publicly available photo datasets. This makes it difficult to reliably compare techniques when dealing with consumer photography. The use of image sets from photography contests also has its disadvantages, since both photographers and voters may have professional skills or are highly interested in photography. This may lead to results that are not related to ordinary photography consumer preferences.

5.2 Validation

This section contains a discussion on validation approaches. Photography might be considered an art form [66]. There is no simple way of deciding whether a photo is aesthetically pleasant or not. However, it might be possible to identify some metrics that would help photo assessment, and that would be a step further in this area. Another important aspect to be considered is how approaches were validated. Since the reviewed work is about photo enhancement and analysis, the results are usually images (in the case of photo enhancement algorithms) or abstract information, e.g., color/gray-scale maps, statistics, and scores. Both have a very high subjective component, although some metrics might be defined for obtaining a more objective analysis in a specific scenario. Validation methods can be classified as subjective or objective. Subjective methods involve subjective experiments in which humans are asked to give their opinion on photos of a pre-defined test set with respect to a given attribute or criterion. A participant may give his/her opinion based on the following methods [98]:

– Single-stimulus rating.
The participant gives a score to a photo or a group of photos. The score might be continuous (such as 0–10) or categorical (e.g., excellent, good, fair, bad, and poor). During the rating process, each photo is typically shown to the participant for a fixed presentation time (e.g., 3 s);
– Double-stimulus rating. While analogous to the single-stimulus rating, in double-stimulus trials a reference photo and a test photo are presented in random order, one after another, for a fixed presentation time (e.g., 3 s);
– Forced-choice. The participant is forced to choose only one within a group of photos, according to a given criterion;
– Pairwise similarity judgement. Similar to forced-choice but, besides choosing one from a group of photos, the participant also has to indicate on a continuous scale how large the difference in quality is between the two photos; and
– Indirect. The participant does not directly give his/her opinion. The quality may be inferred by some measurement such as the time needed for the participant to choose a photo.

Other details and comparisons of these methods can be found in the work of Mantiuk et al. [98], in which a comparison between the first four above-mentioned methods is given. The best method is usually the one with the highest correlation between human and automatic labeling. It was shown, however, that for comparing IQA algorithms, in which differences between images might be small, the forced-choice pairwise comparison is the most accurate and time-efficient [98]. Besides the comparison method, there are also other factors that have an influence on the experimental assessment, since such experiments involve humans. Some of these factors are:

– Number of participants. Since the opinion about the quality of a photo may vary from one person to another, it is important to have a large number of participants in order to identify features that are more significant in human analysis;
– Used equipment.
When the experiment is conducted in an uncontrolled environment, the equipment used might harm the results (e.g., the calibration of the screen in a color experiment might produce a different opinion);
– Knowledge in photography. Experts evaluate photos in a different way than consumers do. As an example, professional photos might be considered good by both experts and consumers, while a consumer photo might be considered good by a consumer but bad by experts;
– Cultural diversity. The style and subject of the photo might influence the judgement depending on the participant's background and origin; and
– Number of photos. The number of photos in the experiment is a factor as crucial as the number of participants. If, on the one hand, a great number of photos might better represent the diversity of the photos, on the other hand, it might reduce the number of volunteer participants, since it becomes a more laborious experiment.

Since, according to the literature review, there is no database which considers all those factors, most of the conclusions drawn from subjective experiments might be considered partially biased. Besides the drastic influence of such factors, there is no consensus on their ideal values. Thus, most papers present some questionable decisions in the validation step, such as the number of participants (e.g., three participants [108]), knowledge in photography (e.g., most participants are experts [73]), and the number of images used (e.g., only 34 photos to represent the analysis sample [49]). On the other hand, objective metrics present a set of well-defined criteria, and proposals are evaluated based on those criteria. For instance, the best algorithm might be the one with the lowest false-positive rate in a face detection scenario. Objective methods are usually less expensive since they do not rely on the availability and classification coherence of human participants.
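The objective criteria most often reported in Table 2 (precision, recall, and F-measure) reduce to three short formulas over true-positive, false-positive, and false-negative counts. A minimal sketch; the detection counts below are hypothetical, not taken from any reviewed paper:

```python
def precision_recall_f1(tp: int, fp: int, fn: int) -> tuple[float, float, float]:
    """Standard objective metrics for detection/retrieval evaluation."""
    precision = tp / (tp + fp) if tp + fp else 0.0  # fraction of detections that are correct
    recall = tp / (tp + fn) if tp + fn else 0.0     # fraction of true objects that were found
    f1 = (2 * precision * recall / (precision + recall)) if precision + recall else 0.0
    return precision, recall, f1

# Hypothetical face-detection tallies: 80 correct detections,
# 20 false alarms, 20 missed faces.
p, r, f1 = precision_recall_f1(tp=80, fp=20, fn=20)
print(p, r, f1)  # precision = 0.8, recall = 0.8, F1 ≈ 0.8
```

Because such metrics are computed once from fixed counts, they sidestep the availability and consistency problems of human raters, at the cost of only capturing what the counts measure.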
However, there are some important features that are not yet well-assessed by computational algorithms, such as the global visual aspect of a photo. Even humans may disagree with a classification result, which makes the subjective assessment even harder. Both approaches (objective and subjective) are important, each in its specific application scenario. As can be seen in Tables 2 and 3, the assessment methodology differs widely in the reviewed work with regard to the following aspects: (1) the assessment method, (2) the metric used for assessment, and (3) the source of the photo set. Although the results reported in those tables were obtained with different algorithms and different goals, it is possible to conclude that most approaches have opted for subjective assessment when dealing with image enhancement and analysis. The reason is probably the lack of consensus on the image set to be used as ground-truth and the essentially subjective task of comparing images. There is also a lack of clarity regarding the number of people used in the subjective experiments, their confidence in the labeling, and the methodology of the experiment.

6 Conclusions

This survey reviewed state-of-the-art methods for photo enhancement and analysis. For better understanding of this research area, a taxonomy was defined based on the related work.
The main conclusions of this survey are discussed next:

• According to the conducted literature review, this is the first survey on consumer photographic enhancement and analysis techniques;
• The interest in algorithms for photo enhancement and analysis has been growing recently, based on the number of recent papers published in this area;
• There is no consensus on a methodology for conducting subjective photo analysis experiments;
• Although the results were obtained with different algorithms and different goals, it is possible to conclude that most approaches have opted for a subjective assessment due to the lack of a public and labeled image set that might work as a ground-truth for an objective assessment, and due to the inherently subjective task of comparing images. Therefore, in this scenario, direct comparisons between existing approaches might be unfair;
• Some works that indicate the photo sources are not reproducible, since the photos used for testing are not clearly identified, mostly due to copyright reasons or the great number of images;
• There is no consensus on the number of images to be used in the experiments; and
• There is a lack of clarity regarding the number of people used in the subjective experiments, their confidence in the labeling provided, and the assessment methodology.

Thus, it is possible to conclude that, although there has been recent growth in photo enhancement and analysis techniques, this is an area with large potential. Experimental assessment needs to be improved, and assessment methodologies are required as well in order to obtain strong conclusions about methods and results.

Acknowledgments The authors wish to thank Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq) for the financial support of part of this research.

References

1. Achanta R, Süsstrunk S (2009) Saliency detection for content-aware image resizing. In: Proceedings of the IEEE ICIP 2009. IEEE, Piscataway, pp 1005–1008
2. Anguelov D, Lee KC, Gokturk SB, Sumengen B (2007) Contextual identity recognition in personal photo albums. In: Proceedings of the IEEE CVPR 2007. IEEE Computer Society, pp 1–7
3. Avidan S, Shamir A (2007) Seam carving for content-aware image resizing. ACM Trans Graphics 26(3):10.1–10.9
4. Banerjee S, Evans BL (2004) Unsupervised automation of photographic composition rules in digital still cameras. In: Proceedings of the SPIE conference on sensors, color, cameras, and systems for digital photography VI, pp 364–373
5. Benoit A, Caplier A, Durette B, Herault J (2010) Using human visual system modeling for bio-inspired low level image processing. Comput Vis Image Underst 114(7):758–773
6. Bertalmio M, Bugeau A, Caselles V, Sapiro G (2010) A comprehensive framework for image inpainting. IEEE Trans Image Process 19(10):2634–2645
7. Bhattacharya S, Sukthankar R, Shah M (2010) A framework for photo-quality assessment and enhancement based on visual aesthetics. In: Proceedings of the ACM MM 2010, pp 271–280
8. Bissacco A, Yang M, Soatto S (2007) Fast human pose estimation using appearance and motion via multi-dimensional boosting regression. In: Proceedings of the IEEE CVPR 2007, pp 1–8
9. Boutell M, Luo J, Gray RT (2003) Sunset scene classification using simulated image recomposition. In: Proceedings of the IEEE ICME 2003, pp 37–40
10. Boyce W, Wilkie S (2013) Photosig. http://www.photosig.com. Accessed 31 January 2013
11. Bruneau P, Picarougne F, Gelgon M (2010) Interactive unsupervised classification and visualization for browsing an image collection. Pattern Recogn 43(2):485–493
12. Busselle M (1999) Better picture guide to photographing people. RotoVision, Hove
13. Byers Z, Dixon M, Goodier K, Grimm CM, Smart WD (2003) An autonomous robot photographer. In: Proceedings of the IEEE/RSJ IROS 2003, pp 2636–2641
14. Byers Z, Dixon M, Smart W, Grimm C (2004) Say cheese!: experiences with a robot photographer. AAAI Mag 25(3):37–46
15. Cao L, Luo J, Kautz H, Huang T (2008) Annotating collections of photos using hierarchical event and scene models. In: Proceedings of the IEEE CVPR 2008, pp 1–8
16. Cao L, Luo J, Kautz H, Huang T (2009) Image annotation within the context of personal photo collections using hierarchical event and scene models. IEEE Trans Multimed 11(2):208–219
17. Cavalcanti C, Gomes H, Veloso L, Carvalho J, Lima Jr O (2010) Automatic single person composition analysis. In: Skala V (ed) Proceedings of the WSCG 2010. UNION Agency-Science Press, Plzen, pp 229–236
18. Cavalcanti CSVC, Gomes H, Meireles R, Guerra W (2006) Towards automating photographic composition of people. In: Proceedings of the IASTED VIIP 2006. ACTA Press, Anaheim, pp 25–30
19. Celik T, Tjahjadi T (2010) Unsupervised colour image segmentation using dual-tree complex wavelet transform. Comput Vis Image Underst 114(7):813–826
20. Challenging technologies: dpchallenge, a digital photography contest (2013) http://www.dpchallenge.com. Accessed 31 January 2013
21. Charrier C, Knoblauch K, Moorthy AK, Bovik AC, Maloney LT (2010) Comparison of image quality assessment algorithms on compressed images. In: Proceedings of the SPIE, image quality and system performance VII, pp 75290B-1–75290B-11
22. Chartier S, Renaud P (2008) An online noise filter for eye-tracker data recorded in a virtual environment. In: Proceedings of the ACM ETRA 2008, pp 153–156
23. Chen H (2008) Note: Focal length and registration correction for building panorama from photographs. Comput Vis Image Underst 112(2):225–230
24. Chen Lq, Xie X, Fan X, Ma WY, Zhang Hj, Zhou HQ (2003) A visual attention model for adapting images on small displays. Multimed Syst 9:353–364
25. Chen S, Cao L, Wang Y, Liu J, Tang X (2010) Image segmentation by map-ml estimations. IEEE Trans Image Process 19(9):2254–2264
26. Chu WT, Lee YL, Yu JY (2009) Using context information and local feature points in face clustering for consumer photos. In: Proceedings of the IEEE ICASSP 2009, pp 1141–1144
27. Chu WT, Li CJ, Tseng SC (2011) Travelmedia: an intelligent management system for media captured in travel representation. J Vis Commun Image 22(1):93–104
28. Chu WT, Lin CH (2010) Consumer photo management and browsing facilitated by near-duplicate detection with feature filtering. J Vis Commun Image Rep 21(3):256–268
29. Chua TS, Tang J, Hong R, Li H, Luo Z, Zheng YT (2009) Nus-wide: a real-world web image database from National University of Singapore. In: Proceedings of the ACM CIVR 2009, July 8–10
30. Cooper M, Foote J, Girgensohn A, Wilcox L (2005) Temporal event clustering for digital photo collections. ACM Trans Multimed Comput Commun Appl 1(3):269–288
31. Corbis: Corbis image gallery (2001–2009). http://www.corbis.com. Accessed 31 January 2013
32. Corel Images: Corel images (2013) http://elib.cs.berkeley.edu/photos/corel/. Accessed 31 January 2013 (currently unavailable)
33. Cortes C, Vapnik V (1995) Support-vector networks. Mach Learn 20(3):273–297
34. Creative Commons: Creative commons (2012) http://creativecommons.org/. Accessed 31 January 2013
35. Cunningham SJ, Masoodian M (2007) Identifying personal photo digital library features. In: Proceedings of the ACM/IEEE-CS JCDL 2007, pp 400–401
36. Daliri MR, Torre V (2009) Classification of silhouettes using contour fragments. Comput Vis Image Underst 113(9):1017–1025
37. Dao MS, Dang-Nguyen DT, De Natale FG (2011) Signature-image-based event analysis for personal photo albums. In: Proceedings of the ACM MM 2011, pp 1481–1484
38. Das M, Loui AC (2009) Event classification in personal image collections. In: Proceedings of the IEEE ICME 2009. IEEE Press, New York, pp 1660–1663
39. Datta R, Joshi D, Li J, Wang JZ (2006) Studying aesthetics in photographic images using a computational approach. In: Proceedings of the ECCV 2006, pp 7–13
40. Datta R, Li J, Wang JZ (2008) Algorithmic inferencing of aesthetics and emotion in natural images: an exposition. In: Proceedings of the IEEE ICIP 2008, pp 105–108
41. Datta R, Wang JZ (2010) Acquine: aesthetic quality inference engine, real-time automatic rating of photo aesthetics. In: Proceedings of the ACM MIR 2010, pp 421–424
42. Destrero A, Mol C, Odone F, Verri A (2009) A regularized framework for feature selection in face detection and authentication. Int J Comput Vis 83(2):164–177
43. Duda RO, Stork DG, Hart PE (2000) Pattern classification and scene analysis. Part 1, Pattern classification, 2nd edn. Wiley, New York
44. Dunker P, Popp P, Cook R (2011) Content-aware auto-soundtracks for personal photo music slideshows. In: Proceedings of the IEEE ICME 2011, pp 1–5
45. Engelke U, Maeder AJ, Zepernick HJ (2009) On confidence and response times of human observers in subjective image quality assessment. In: Proceedings of the IEEE ICME 2009, pp 910–913
46. Ercegovac M, Lang T (1992) On-the-fly rounding (computing arithmetic). IEEE Trans Comput 41(12):1497–1503
47. Etchells D (2005) Canon expo 2005: a one-company trade show. http://www.imaging-resource.com/NEWS/1126887991.html. Accessed 31 January 2013
48. Fedorovskaya E, Neustaedter C, Hao W (2008) Image harmony for consumer images. In: Proceedings of the IEEE ICIP 2008, pp 121–124
49. Fedorovskaya E, Neustaedter C, Hao W (2008) Image harmony for consumer images. In: Proceedings of the IEEE ICIP 2008, pp 121–124. doi:10.1109/ICIP.2008.4711706
50. Flickr (2013) Flickr photo sharing. http://www.flickr.com/. Accessed 31 January 2013
51. Flickr (2013) Mirflickr-25000. http://www.flickr.com/photos/tags/. Accessed 31 January 2013
52. Foggia P, Percannella G, Sansone C, Vento M (2008) A graph-based algorithm for cluster detection. Int J Pattern Recogn Artif Intell 22(5):843–860
53. Fujita H, Arikawa M (2007) Creating animation with personal photo collections and map for storytelling. In: Proceedings of the ACM EATIS 2007. ACM, New York, pp 1:1–1:8
54. Gallagher A, Chen T (2008) Clothing cosegmentation for recognizing people. In: Proceedings of the IEEE CVPR 2008, pp 1–8
55. Gallagher AC, Chen T (2007) Using group prior to identify people in consumer images. In: Proceedings of the IEEE CVPR 2007, vol 0. IEEE Computer Society, pp 1–8
56. Gallagher AC, Chen T (2009) Finding rows of people in group images. In: Proceedings of the IEEE ICME 2009. IEEE Press, New York, pp 602–605
57. Golder S (2008) Measuring social networks with digital photograph collections. In: Proceedings of the ACM HT 2008, pp 43–48
58. Google: Picasa (2013) http://picasa.google.com/. Accessed 31 January 2013
59. Gorelick L, Basri R (2009) Shape based detection and top-down delineation using image segments. Int J Comput Vis 83(3):211–232
60. Greenspun P (2013) Photo.net photography community. http://photo.net. Accessed 31 January 2013
61. Gupta A, Chen F, Kimber D, Davis LS (2008) Context and observation driven latent variable model for human pose estimation. In: Proceedings of the IEEE CVPR 2008, pp 1–8
62. Haddad Z, Beghdadi A, Serir A, Mokraoui A (2010) Image quality assessment based on wave atoms transform. In: Proceedings of the IEEE ICIP 2010, pp 305–308
63. Han HS, Kim DO, Park RH (2009) Structural information-based image quality assessment using lu factorization. IEEE Trans Consum Electron 55(1):165–171
64. Han J, Awad G, Sutherland A (2009) Automatic skin segmentation and tracking in sign language recognition. IET-CV 3(1):24–35
65. Haykin S (1999) Neural networks: a comprehensive foundation. Prentice Hall, Englewood Cliffs
66. Hedgecoe J (2009) New manual of photography. Dorling Kindersley, New York
67. Hoàng NV, Gouet-Brunet V, Rukoz M, Manouvrier M (2010) Embedding spatial information into image content description for scene retrieval. Pattern Recogn 43(9):3013–3024
68. Hsu SH, Jumpertz S, Cubaud P (2008) A tangible interface for browsing digital photo collections. In: Proceedings of the ACM TEI 2008, pp 31–32
69. Ji H, Liu C (2008) Motion blur identification from image gradients. In: Proceedings of the IEEE CVPR 2007, vol 0. IEEE Computer Society, pp 1–8
70. Jiang H, Martin D (2008) Global pose estimation using non-tree models. In: Proceedings of the IEEE CVPR 2008, pp 1–8
71. Jiang W, Loui A, Cerosaletti C (2010) Automatic aesthetic value assessment in photographic images. In: Proceedings of the IEEE ICME 2010, pp 920–925
72. Joshi N, Matusik W, Adelson EH, Kriegman DJ (2010) Personal photo enhancement using example images. ACM Trans Graphics 29(2):1–15
73. Ke Y, Tang X, Jing F (2006) The design of high-level features for photo quality assessment. In: Proceedings of the IEEE CVPR 2006, pp 419–426
74. Khan SS, Vogel D (2012) Evaluating visual aesthetics in photographic portraiture. In: Proceedings of the CAe 2012. Eurographics Association, pp 55–62
75. Kim HN, Saddik AE, Jung JG (2012) Leveraging personal photos to inferring friendships in social network services. Expert Syst Appl 39(8):6955–6966
76. Kruppa H, Bauer MA, Schiele B (2002) Skin patch detection in real-world images. In: Proceedings of the 24th DAGM symposium on pattern recognition. Springer LNCS, pp 109–117
77. Lee C, Schramm MT, Boutin M, Allebach JP (2009) An algorithm for automatic skin smoothing in digital portraits. In: Proceedings of the IEEE ICIP 2009. IEEE Press, New York, pp 3113–3116
78. Lee S, Kim K, Kim JY, Kim M, Yoo HJ (2010) Familiarity based unified visual attention model for fast and robust object recognition. Pattern Recogn 43(3):1116–1128
79. Levin A, Weiss Y (2009) Learning to combine bottom-up and top-down segmentation. Int J Comput Vis 81(1):105–118
80. Li C, Gallagher AC, Loui AC, Chen T (2010) Aesthetic quality assessment of consumer photos with faces. In: Proceedings of the IEEE ICIP 2010, pp 3221–3224
81. Li C, Loui AC, Chen T (2010) Towards aesthetics: a photo quality assessment and photo selection system. In: Proceedings of the ACM MM 2010, pp 827–830
82. Li X, Ling H (2009) Learning based thumbnail cropping. In: Proceedings of the IEEE ICME 2009. IEEE Press, New York, pp 558–561
83. Li Z, Luo H, Fan J (2009) Incorporating camera metadata for attended region detection and consumer photo classification. In: Proceedings of the ACM MM 2009, pp 517–520
84. Li Z, Zhou X, Huang TS (2009) Spatial gaussian mixture model for gender recognition. In: Proceedings of the IEEE ICIP 2009. IEEE Press, New York, pp 45–48
85. Liao WH (2009) A framework for attention-based personal photo manager. In: Proceedings of the IEEE SMC 2009. IEEE Press, New York, pp 2128–2132
86. Lim SH, Lin Q, Petruszka A (2010) Automatic creation of face composite images for consumer applications. In: Proceedings of the IEEE ICASSP 2010, pp 1642–1645
87. Liu L, Chen R, Wolf L, Cohen-Or D (2010) Optimizing photo composition. In: Proceedings of the Eurographics, vol 29, pp 469–478
88. Liu R, Li Z, Jia J (2008) Image partial blur detection and classification. In: Proceedings of the IEEE CVPR 2007, vol 0. IEEE Computer Society, Los Alamitos, pp 1–8
89. Liu T, Yuan Z, Sun J, Wang J, Zheng N, Tang X, Shum HY (2011) Learning to detect a salient object. IEEE Trans Pattern Anal Mach Intell 33(2):353–367
90. Liu Y, Xu D, Tsang IW, Luo J (2011) Textual query of personal photos facilitated by large-scale web data. IEEE Trans Pattern Anal Mach Intell 33(5):1022–1036
91. Loui A, Wood M, Scalise A, Birkelund J (2008) Multidimensional image value assessment and rating for automated albuming and retrieval. In: Proceedings of the IEEE ICIP 2008, pp 97–100
92. Loui AC, Wood MD, Scalise A, Birkelund J (2008) Multidimensional image value assessment and rating for automated albuming and retrieval. In: Proceedings of the IEEE ICIP 2008, pp 97–100
93. Lu F, Yang X, Zhang R, Yu S (2009) Image classification based on pyramid histogram of topics. In: Proceedings of the IEEE ICME 2009. IEEE Press, New York, pp 398–401
94. Luo W, Wang X, Tang X (2011) Content-based photo quality assessment. In: Proceedings of the IEEE ICCV 2011, vol 0. IEEE Computer Society, Los Alamitos, pp 2206–2213
95. Luo Y, Tang X (2008) Photo and video quality evaluation: focusing on the subject. In: Proceedings of the ECCV 2008. Springer, Heidelberg, pp 386–399
96. Lux M, Kogler M, del Fabro M (2010) Why did you take this photo: a study on user intentions in digital photo productions. In: Proceedings of the ACM SAPMIA 2010, pp 41–44
97. Maik V, Paik D, Lim J, Park K, Paik J (2010) Hierarchical pose classification based on human physiology for behaviour analysis. IET-CV 4(1):12–24
98. Mantiuk RK, Tomaszewska A, Mantiuk R (2012) Comparison of four subjective methods for image quality assessment. Comput Graphics Forum 31(8):2478–2491
99. Marshall B (2010) Taking the tags with you: digital photograph provenance. In: Proceedings of the IEEE symposium on data, privacy, and e-commerce 2010. IEEE Computer Society, Los Alamitos, pp 72–77
100. Martin D, Fowlkes C, Tal D, Malik J (2001) A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics. In: Proceedings of the ICCV 2001, vol 2, pp 416–423
101. Mele K, Suc D, Maver J (2009) Local probabilistic descriptors for image categorisation. IET-CV 3(1):8–23
102. Microsoft Corporation: Bing maps (2012) http://www.bing.com/maps/. Accessed 31 January 2013
103. Miller AD, Edwards WK (2007) Give and take: a study of consumer photo-sharing culture and practice. In: Proceedings of the ACM SIGCHI 2007, pp 347–356
104. Moorthy AK, Obrador P, Oliver N (2010) Towards computational models of the visual aesthetic appeal of consumer videos. In: Proceedings of the ECCV 2010. Springer, Berlin/Heidelberg, pp 1–14
105. Mu Y, Yan S, Liu Y, Huang T, Zhou B (2008) Discriminative local binary patterns for human detection in personal album. In: Proceedings of the IEEE CVPR 2008, vol 0. IEEE Computer Society, New York, pp 1–8
106. Mucientes M, Bugarín A (2010) People detection through quantified fuzzy temporal rules. Pattern Recogn 43(4):1441–1453
107. Nikon Corporation: Nikon d90 advanced function (2008). http://chsvimg.nikon.com/products/imaging/lineup/d90/en/advanced-function/. Accessed 31 January 2013
108. Obrador P (2008) Region based image appeal metric for consumer photos. In: Proceedings of the IEEE workshop on multimedia signal 2008, pp 696–701
109. Obrador P, Moroney N (2009) Automatic image selection by means of a hierarchical scalable collection representation. In: Proceedings of the SPIE visual communications and image processing, San Jose, vol 7257, pp 0W.1–0W.12
110. Obrador P, de Oliveira R, Oliver N (2010) Supporting personal photo storytelling for social albums. In: Proceedings of the ACM MM 2010, pp 561–570
111. Obrador P, Schmidt-Hackenberg L, Oliver N (2010) The role of image composition in image aesthetics. In: Proceedings of the IEEE ICIP 2010, pp 3185–3188
112.
O’Hare N, Lee H, Cooray S, Gurrin C, Jones G, Malobabic J, O’Connor N, Smeaton AF, Uscilowski, B (2006) Mediassist: Using content-based analysis and context to manage personal photo collections. In: Proceedings of the CIVR 2006, vol 4071. Springer, Heidelberg, pp 529–532 123 http://www.bing.com/maps/ http://www.bing.com/maps/ http://chsvimg.nikon.com/products/imaging/lineup/d90/en/advanced-function/ http://chsvimg.nikon.com/products/imaging/lineup/d90/en/advanced-function/ 358 J Braz Comput Soc (2013) 19:341–359 113. O’Hare N, Smeaton AF (2009) Context-aware person identi- fication in personal photo collections. IEEE Trans Multiméd 11(2):220–228 114. Oliveira CJS, Araújo AdeA, Severiano CA Jr, Gomes DR (2002) Classifying images collected on the World Wide Web. In: Pro- ceedings of the SIBGRAPI 2002, IEEE Computer Society Press, Fortaleza, pp 327–334 115. Orbanz P, Buhmann JM (2008) Nonparametric bayesian image segmentation. Int J Comput Visi 77(1–3):25–45 116. Paisitkriangkrai S, Shen C, Zhang J (2008) Performance eval- uation of local features in human classification and detection. IET-CV 2(4):236–246 117. Pang Y, Hao Q, Yuan Y, Hu T, Cai R, Zhang L (2011) Summariz- ing tourist destinations by mining user-generated travelogues and photos. Comput Vis Image Underst 115(3):352–363 118. Park HJ, Har DH (2011) Subjective image quality assessment based on objective image quality measurement factors. IEEE Trans Consumer Electron 57(3):1176–1184 119. Peres M (2007) Focal encyclopedia of photography: digital imag- ing, theory and applications, history, and science. Elsevier Science Inc./Focal Press, Boston 120. Pierrard JS, Vetter T (2007) Skin detail analysis for face recogni- tion. In: Proceedings of the IEEE CVPR 2007, pp 1–8 121. Presti LL, Cascia ML (2012) An on-line learning method for face association in personal photo collection. Image Vis Comput 30 (4–5):306–316 122. 
Qi GJ, Hua XS, Rui Y, Tang J, Zhang HJ (2008) Two-dimensional active learning for image classification. In: Proceedings of the IEEE CVPR 2008, pp 1–8 123. QinAK,ClausiDA(2010)Multivariateimagesegmentationusing semantic region growing with adaptive edge penalty. IEEE Trans Image Process 19(8):2157–2170 124. Quinlan JR (1986) Induction of decision trees. Mach Learn 1(1):81–106 125. Rahman M, Gamadia M, Kehtarnavaz N (2008) Real-time face- based auto-focus for digital still and cell-phone cameras. In: Proceedings of the IEEE SSIAI 2008. IEEE Computer Society, Los Alamitos, pp 177–180 126. Redi JA, Heynderickx I (2012) Image integrity and aesthetics: towards a more encompassing definition of visual quality. In: Pro- ceedings of the SPIE human vision and electronic imaging XVII 2012, vol 8291. SPIE, San Jose, pp 15.1–15.10 127. Ren T, Liu Y, Wu G (2009) Image retargeting based on global energy optimization. In: Proceedings of the IEEE ICME 2009. IEEE Press, New York, pp 406–409 128. Ren X, Fowlkes CC, Malik J (2008) Learning probabilistic models for contour completion in natural images. Int J Comput Vis 77 (1–3):47–63 129. Rousson M, Paragios N (2008) Prior knowledge, level set repre- sentations & visual grouping. Int J Comput Vis 76(3):231–243 130. Russell SJ, Norvig P (2009) Artificial intelligence: a modern approach, 3rd edn. Prentice Hall, New Delhi 131. Ryoo MS, Aggarwal JK (2009) Semantic representation and recognition of continued and recursive human activities. Int J Comput Vis 82(1):1–24 132. Sandnes F (2010) Unsupervised and fast continent classification of digital image collections using time. In: Proceedings of the ICSSE 2010, pp 516–520 133. Santella A, Agrawala M, Decarlo D, Salesin D, Cohen M (2006) Proceedings of the gaze-based interaction for semi-automatic photo cropping. In: Proceedings of the ACM SIGCHI 2006. ACM Press, New York, pp 771–780 134. Savakis AE, Etz SP, Loui ACP (2000) Evaluation of image appeal in consumer photography. 
In: Proceedings of the SPIE human vision and electronic imaging V, vol 3959. SPIE, San Jose, pp 111–120 135. Schindler G, Krishnamurthy P, Lublinerman R, Liu Y, Dellaert F (2008) Detecting and matching repeated patterns for automatic geo-tagging in urban environments. In: Proceedings of the IEEE CVPR 2008, pp 208–219 136. Schmugge SJ, Jayaram S, Shin MC, Tsap LV (2007) Objective evaluation of approaches of skin detection using roc analysis. Comput Vis Image Underst 108(1–2):41–51 137. Serrano N, Savakis A, Luo J (2002) A computationally efficient approach to indoor/outdoor scene classification. In: Proceedings of the IEEE ICPR 2002. IEEE Computer Society, Los Alamitos, pp 146–149 138. Setlur V, Takagi S, Raskar R, Gleicher M, Gooch B (2005) Auto- matic image retargeting. In: Proceedings of the ACM MUM 2005. ACM Press, New York, pp 59–68 139. Shen CT, Liu JC, Shih SW, Hong JS (2009) Towards intelli- gent photo composition-automatic detection of unintentional dis- section lines in environmental portrait photos. Expert Syst Appl 36(5):9024–9030 140. Singla P, Kautz H, Gallagher A (2008) Discovery of social rela- tionships in consumer photo collections using markov logic. In: Proceedings of the IEEE CVPR 2008 Workshops, pp 1–7 141. Sinha P (2011) Summarization of archived and shared personal photo collections. In: Proceedings of the ACM WWW 2011, pp 421–426 142. Snavely N, Seitz SM, Szeliski R (2008) Modeling the world from internet photo collections. Int J Comput Vis 80(2):189–210 143. Sony Corporation: Sony party-shot automatic photogra- pher (2009). http://store.sony.com/webapp/wcs/stores/servlet/ ProductDisplay?catalogId=10551&storeId=10151&langId=-1& partNumber=IPTDS1. Accessed 31 January 2013 144. Stein A, Stepleton T, Hebert M (2008) Towards unsupervised whole-object segmentation: Combining automated matting with boundary detection. In: Proceedings of the IEEE CVPR 2008, pp 1–8 145. 
Su HH, Chen TW, Kao CC, Hsu WH, Chien SY (2011) Scenic photo quality assessment with bag of aesthetics-preserving fea- tures. In: Proceedings of the ACM MM 2011, pp 1213–1216 146. Suh B, Bederson BB (2007) Semi-automatic photo annotation strategies using event based clustering and clothing based person recognition. Interact Comput 19(4):524–544 147. Suh B, Ling H, Bederson BB, Jacobs DW (2003) Automatic thumbnail cropping and its effectiveness. In: Proceedings of the ACM UIST 2003. ACM Press, New york, pp 95–104 148. Tang F, Gao Y (2009) Fast near duplicate detection for per- sonal image collections. In: Proceedings of the ACM MM 2009, pp 701–704 149. Tian A, Zhang X, Tretter DR (2011) Content-aware photo-on- photo composition for consumer photos. In: Proceedings of the ACM MM 2011, pp 1549–1552 150. Tómasson G, Sigurp’orsson H, Jónsson B, Amsaleg L (2011) Photocube: effective and efficient multi-dimensional browsing of personal photo collections. In: Proceedings of the ACM ICMR 2011, pp 70:1–70:2 151. Tong H, Li M, Zhang H, Zhang C (2004) Blur detection for digi- tal images using wavelet transform. In: Proceedings of the IEEE ICME 2004, pp 17–20 152. Tong H, Li M, Zhang HJ, He J, Zhang C (2004) Classification of digital photos taken by photographers or home users. In: Pro- ceedings of the Pacific Rim Conference on Multimedia. Springer, Heidelberg, pp 198–205 153. Tran C, Wijnhoven R, de With P (2011) Text detection in per- sonal image collections. In: Proceedings of the IEEE ICCE 2011, pp 85–86 154. Tsao WK, Lee AJT, Liu YH, Chang TW, Lin HH (2010) A data mining approach to face detection. 
Pattern Recogn 43(3): 1039–1049 123 http://store.sony.com/webapp/wcs/stores/servlet/ProductDisplay?catalogId=10551&storeId=10151&langId=-1&partNumber=IPTDS1 http://store.sony.com/webapp/wcs/stores/servlet/ProductDisplay?catalogId=10551&storeId=10151&langId=-1&partNumber=IPTDS1 http://store.sony.com/webapp/wcs/stores/servlet/ProductDisplay?catalogId=10551&storeId=10151&langId=-1&partNumber=IPTDS1 J Braz Comput Soc (2013) 19:341–359 359 155. Tsay KE, Wu YL, Hor MK, Tang CY (2009) Personal photo orga- nizer based on automated annotation framework. International Conference on Intelligent Information Hiding and Multimedia Signal Processing, pp 507–510 156. Valle E, Cord M, Philipp-Foliguet S, Gorisse D (2010) Indexing personal image collections: a flexible, scalable solution. IEEE Trans Consumer Electron 56(3):1167–1175 157. Vogel J, Schwaninger A, Wallraven C, Bülthoff HH (2007) Cate- gorization of natural scenes: Local versus global information and the role of color. ACM Trans Appl Percept 4(3):19.1–19.21 158. Wan D, Zhou J (2008) Stereo vision using two ptz cameras. Comput Vis Image Underst 112(2):184–194 159. Wang H, Oliensis J (2010) Generalizing edge detection to contour detection for image segmentation. Comput Vis Image Underst 114(7):731–744 160. Wang J, Jia Y, Hua XS, Zhang C, Quan L (2008) Normalized tree partitioning for image segmentation. In: Proceeings of the IEEE CVPR 2008, vol 0. IEEE Computer Society, Los Alamitos, pp 1–8 161. Wang J, Zhu S, Gong Y (2009) Resolution-invariant image repre- sentation for content-based zooming. In: Proceedings of the IEEE ICME 2009. IEEE Press, New York, pp 918–921 162. Wang P, Ji Q (2007) Multi-view face and eye detection using discriminant features. Comput Vis Image Underst 105(2):99–111 163. Wang XF, Huang DS, Xu H (2010) An efficient local chan-vese model for image segmentation. Pattern Recogn 43(3):603–618 164. Wang Y, Huang Q, Gao W (2009) Pornographic image detection based on multilevel representation. 
IJPRAI 23(8):1633–1655 165. Wichmann FA, Drewes J, Rosas P, Gegenfurtner KR (2010) Ani- mal detection in natural scenes: critical features revisited. J Vis 10(4):6.1–27 166. Xie S, Shan S, Chen X, Chen J (2010) Fusing local patterns of gabor magnitude and phase for face recognition. IEEE Trans Image Process 19(5):1349–1361 167. Xie ZX, Wang ZF (2010) Color image quality assessment based on image quality parameters perceived by human vision system. In: Proceedings of the ICMT 2010, pp 1–4 168. Xin H, Ai H, Chao H (2011) Tretter D Human head-shoulder segmentation. In: Proceedings of the IEEE FG 2011, pp 227–232 169. Xu S, Ye X, Wu Y, Giron F, Leveque JL, Querleux B (2008) Automatic skin decomposition based on single image. Comput Vis Image Underst 110(1):1–6 170. Xu Z, Sun J (2010) Image inpainting by patch propagation using patch sparsity. IEEE Trans on Image Process 19(5):1153–1165 171. Yan S, Wang H, Liu J, Tang X, Huang TS (2010) Misalignment- robust face recognition. IEEE Trans Image Process 19(4): 1087–1096 172. Yanagawa A, Loui AC, Luo J, Chang SF, Ellis D, Jiang W, Kennedy L, Lee K (2008) Kodak consumer video benchmark data set: concept definition and annotation. Columbia University, Technical report 173. Yang Y, Xu D, Nie F, Yan S, Zhuang Y (2010) Image cluster- ing using local discriminant models and global integration. IEEE Trans Image Process 19(10):2761–2773 174. Yanulevskaya V, van Gemert J, Roth K, Herbold A, Sebe N, Geusebroek J (2008) Emotional valence categorization using holistic image features. In: Proceedings of the IEEE ICIP 2008, pp 101–104 175. Yao L, Suryanarayan P, Qiao M, Wang JZ, Li J (2012) Oscar: on-site composition and aesthetics feedback through exemplars for photographers. Int J Comput Vis 96(3):353–383 176. Yeh CH, Ho YC, Barsky BA, Ouhyoung M (2010) Personalized photograph ranking and selection system. In: Proceedings of the ACM MM 2010, pp 211–220 177. 
Yeh CH, Ng WS, Barsky BA, Ouhyoung M (2009) An esthetics rule-based ranking system for amateur photos. In: Proceedings of the ACM SIGGRAPH 2009, pp 24:1–24:1 178. Yi Y, Yu X, Wang L, Yang Z (2008) Image quality assessment based on structural distortion and image definition. In: Proceed- ings of the international conference on computer science and soft- ware engineering 2008(6):253–256 179. Yin W, Luo J, Chen CW (2010) Semantic adaptation of consumer photo for mobile device access. In: Proceedimgs of the ISCAS 2010, pp 1173–1176 180. YingZ,GuangyaoL,XiehuaS,XinminZ(2009)Geometricactive contours without re-initialization for image segmentation. Pattern Recogn 42(9):1970–1976 181. Yu Z, Au OC, Zou R, Yu W, Tian J (2010) An adaptive unsuper- vised approach toward pixel clustering and color image segmen- tation. Pattern Recogn 43(5):1889–1906 182. Zeng G, Gool LV (2008) Multi-label image segmentation via point-wise repetition. In: Proceedings of the IEEE CVPR 2008, vol 0. IEEE Computer Society, Los Alamitos, pp 1–8 183. Zeng YC (2009) Automatic local contrast enhancement using adaptive histogram adjustment. In: Proceedings of the IEEE ICME 2009. IEEE Press, New York, pp 1318–1321 184. Zha ZJ, Hua XS, Mei T, Wang J, Qi GJ, Wang Z (2008) Joint multi-label multi-instance learning for image classification. In: Proceedings of the IEEE CVPR 2008, vol 0. IEEE Computer Society, Los Alamitos, pp 1–8 185. Zhang H, Fritts JE, Goldman SA (2008) Image segmentation eval- uation: a survey of unsupervised methods. Comput Vis Image Underst 110(2):260–280 186. Zhang M, Zhang L, Sun Y, Feng L, Ma W (2005) Auto Cropping for Digital Photographs. In: Proceedings of the IEEE ICME 2005, pp 438–441 187. Zhang T, Chao H, Willis C, Tretter D (2010) Consumer image retrieval by estimating relation tree from family photo collections. In: Proceedings of the ACM CIVR 2010, pp 143–150 188. Zhou C, Lin S (2007) Removal of image artifacts due to sen- sor dust. 
A survey on automatic techniques for enhancement and analysis of digital photography

Contents: Abstract; 1 Introduction; 2 Methodology of the research (2.1 Breadth-first search; 2.2 Depth-first search; 2.3 Search results; 2.4 Considerations on the methodology); 3 Enhancement (3.1 On the fly enhancement; 3.2 Off-line enhancement); 4 Analysis (4.1 Assessment; 4.2 Information extraction; 4.3 Grouping: 4.3.1 Classification and clustering, 4.3.2 Summarization, 4.3.3 Image retrieval; 4.4 Discussion); 5 Critical analysis (5.1 Image sets; 5.2 Validation); 6 Conclusions; Acknowledgments; References

ORIGINAL ARTICLE

Androgen receptor expression in ductal carcinoma in situ of the breast: relation to oestrogen and progesterone receptors

A-G A Selim, G El-Ayat, C A Wells

J Clin Pathol 2002;55:14–16

Aims: Ductal carcinoma in situ (DCIS) of the breast has been diagnosed increasingly since the advent of mammographic screening. In contrast to the situation in invasive breast carcinoma, there are no reports on androgen receptor (AR) status in DCIS and few reports on oestrogen (ER) and progesterone (PR) receptors.

Methods: AR expression was examined in 57 cases of DCIS of the breast and correlated to the degree of differentiation and ER/PR status using immunohistochemical methods.

Results: AR positivity was noted in 19 of the cases, whereas the other 38 cases were negative. There was no significant association between AR expression and the degree of differentiation of DCIS; three of the 13 well differentiated DCIS cases, 10 of the 19 intermediately differentiated cases, and six of the 25 poorly differentiated cases were positive (p = 0.093).
However, a strong association was shown between the expression of ER (p < 0.0001) and PR (p = 0.002) and the degree of differentiation of DCIS. In addition, no significant association was found between the expression of AR and the expression of ER (p = 0.26) or PR (p = 0.57) in DCIS of the breast.

Conclusions: A large number of cases of DCIS of the breast express AR, and this may be associated with apocrine differentiation, which may impact on accurate typing of DCIS. Moreover, the expression of AR (but not ER or PR) in DCIS does not appear to be associated with the degree of differentiation.

Ductal carcinoma in situ (DCIS) of the breast without invasion has been reported increasingly since the advent of mammographic screening, but the natural history of this lesion remains unclear. DCIS of the breast does not represent a single entity but is a heterogeneous group of lesions with histological and clinical differences.1–5 The histological subtype of DCIS influences its biological behaviour, but there are only a few studies correlating the classification with biological markers.4–7

The fact that sex steroid hormones and their receptors act in concert has led some investigators to study the role of the androgen receptor (AR) in patients with breast cancer. AR is expressed in approximately 35–75% of breast cancers.8–10 Variations may be attributable to different methodologies and different fixatives, but a different case mix may also affect these studies. It has been shown that AR values correlate reasonably well with oestrogen receptor (ER) values, but more so with those for the progesterone receptor (PR).8–11 AR positive breast cancer patients have prolonged survival and a better response to hormonal treatment than AR negative patients.
Thus, some workers believe that knowledge of the receptor status of all three receptors may identify more accurately those patients with breast cancer who are most likely to respond to endocrine treatment.9–13 In addition, androgen stimulation has both stimulatory and inhibitory growth effects on some breast cancer cell lines, depending on the status of receptors and other growth factor effects.14–16 The AR is also a marker of apocrine differentiation in normal apocrine epithelium,17 and this may indicate an association with apocrine differentiation in these tumours. This is supported by the findings of Gatalica in apocrine carcinomas.18

In contrast to the situation in invasive breast carcinoma, there are no reports on AR status in DCIS and only occasional reports on ER and PR expression in DCIS.6 7 19–21 Hence, this study was undertaken to investigate AR expression in DCIS and to correlate it with the expression of ER and PR, in addition to the degree of differentiation of cases of DCIS of the breast.

MATERIALS AND METHODS

Case selection
Fifty seven cases of DCIS were collected from the files of the histopathology department of St Bartholomew's Hospital, London. The age of the patients ranged from 40 to 86 years (mean, 55.0). The cases were classified according to Holland et al,22 based mainly on cytonuclear and architectural differentiation, into three categories, namely: well (13 cases), intermediate (19 cases), and poorly (25 cases) differentiated DCIS.

Immunohistochemistry

Tissue
Formalin fixed, paraffin wax embedded blocks of DCIS tissue were selected from the files and sectioned at a nominal 4 µm. The standard avidin biotin peroxidase complex method23 was used. Heat mediated antigen retrieval using the pressure cooker method24 was used for all staining. Appropriate positive and negative controls omitting the primary antibodies were included with each slide run. In addition, the normal breast tissue in the sample served as an internal control.
Antibodies
Table 1 summarises the monoclonal antibodies used against the AR, ER, and PR proteins.

Abbreviations: AR, androgen receptor; DCIS, ductal carcinoma in situ; ER, oestrogen receptor; PR, progesterone receptor

See end of article for authors' affiliations. Correspondence to: Dr A A Selim, Department of Histopathology, St Bartholomew's Hospital, West Smithfield, London EC1A 7BE, UK; aaselim@doctors.net.uk

J Clin Pathol: first published 1 January 2002.

Assessment
Nuclear staining was taken as positive, with cytoplasmic staining being ignored. The Quick Score method25 was used for semiquantitation of AR, ER, and PR status as follows.

(1) Intensity of staining: slides were assessed for the average degree of staining on low power (×10) and the following scores allocated: weak (1), moderate (2), or strong (3).

(2) The percentage of cells with positive nuclei was counted on high power (×40) and the following scores were allocated: <25% (1), 25–<50% (2), 50–<75% (3), >75% (4).

The scores from (1) and (2) were added together to give a final score ranging from 0 to 7, designated as negative or positive as follows: score of 0–3, negative; score of 4–7, positive.

Statistical analysis
To evaluate significance, the χ2 and Fisher exact tests were applied as appropriate. A p value of <0.05 was considered to be significant.

RESULTS
Our study comprised 57 cases of DCIS, which were classified according to Holland and colleagues22 into three categories, namely: well (13 cases), intermediate (19 cases), and poorly (25 cases) differentiated DCIS.
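The Quick Score arithmetic described above reduces to a small function. The sketch below is illustrative only; the function name is ours, and the handling of exactly 75% staining is an assumption (the source gives the bands as <25, 25–<50, 50–<75, >75):

```python
def quick_score(intensity, percent_positive):
    """Quick Score sketch: staining intensity (0 none, 1 weak,
    2 moderate, 3 strong) plus a band score for the percentage of
    positive nuclei; totals of 0-3 are negative, 4-7 positive.
    Exactly 75% is banded as 4 here (an assumption, see lead-in)."""
    if intensity not in (0, 1, 2, 3):
        raise ValueError("intensity must be 0-3")
    if not 0 <= percent_positive <= 100:
        raise ValueError("percentage must be 0-100")
    if percent_positive < 25:
        band = 1
    elif percent_positive < 50:
        band = 2
    elif percent_positive < 75:
        band = 3
    else:
        band = 4
    total = intensity + band
    return total, "positive" if total >= 4 else "negative"
```

For example, strong staining in 80% of nuclei scores 3 + 4 = 7 (positive), while weak staining in 10% of nuclei scores 1 + 1 = 2 (negative).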
Nine cases were morphologically of the apocrine type. Table 2 summarises the results of the three markers tested in the three categories of DCIS studied. Nuclear staining of the tumour cells was counted as positive; all non-specific cytoplasmic staining was ignored. In cases with normal tissue present, staining of nuclei in normal ducts or lobules was taken as a positive internal control. The intensity of nuclear staining varied between individual tumour cells. Of the 57 DCIS cases studied, 19, 31, and 28 cases were positive for AR (fig 1), ER (fig 2), and PR, respectively.

No association between AR expression and the degree of differentiation of DCIS was identified; three of 13 cases of well differentiated DCIS, 10 of 19 cases of intermediately differentiated DCIS, and six of 25 cases of poorly differentiated DCIS were AR positive (p = 0.093). Six of the nine morphologically apocrine cases were positive for AR. A strong positive association between ER and PR expression and the degree of differentiation of DCIS was found. All 13 cases of well differentiated DCIS, 10 of 19 intermediately differentiated DCIS, and eight of 25 poorly differentiated DCIS cases were positive for ER (p < 0.0001). Four of the morphologically apocrine cases showed immunopositivity for ER. Twelve of the 13 cases of well differentiated DCIS, eight of the 19 intermediately differentiated DCIS, and eight of the 25 poorly differentiated DCIS cases were positive for PR (p = 0.002). Three of the morphologically apocrine cases were positive for PR.

In the 19 DCIS cases positive for AR there were eight cases also positive for ER and PR, but the other 11 cases were negative for ER and PR. Table 3 shows no significant association between AR expression and the expression of ER (p = 0.260) or PR (p = 0.57) in the cases of DCIS studied.
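The 2×2 cross-tabulations behind Table 3 can be checked with a two-sided Fisher exact test written from scratch. This is an illustrative sketch, not the authors' code; the exact p value depends on which test (χ2 or Fisher) the paper applied, but either way the counts agree with the reported lack of association:

```python
from math import comb

def fisher_exact_2x2(a, b, c, d):
    """Two-sided Fisher exact test for the 2x2 table [[a, b], [c, d]]:
    sum the hypergeometric probabilities of every table with the same
    margins whose probability does not exceed that of the observed table."""
    row1, row2 = a + b, c + d
    col1, n = a + c, a + b + c + d

    def p_table(x):  # probability that cell (1,1) equals x, margins fixed
        return comb(row1, x) * comb(row2, col1 - x) / comb(n, col1)

    p_obs = p_table(a)
    lo, hi = max(0, col1 - row2), min(col1, row1)
    return sum(p_table(x) for x in range(lo, hi + 1)
               if p_table(x) <= p_obs * (1 + 1e-9))

# Table 3, AR vs ER: AR+/ER+ = 8, AR-/ER+ = 23, AR+/ER- = 11, AR-/ER- = 15
p_ar_er = fisher_exact_2x2(8, 23, 11, 15)
# Table 3, AR vs PR: AR+/PR+ = 8, AR-/PR+ = 20, AR+/PR- = 11, AR-/PR- = 18
p_ar_pr = fisher_exact_2x2(8, 20, 11, 18)
```

Both p values come out well above the 0.05 threshold, in the same region as the reported 0.26 and 0.57.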
Table 1  Details of primary monoclonal antibodies used

  Antibody against   Source       Clone   Dilution   Positive control
  AR                 Novocastra   2F12    1/50       Prostate
  ER                 Dako         ID-5    1/300      Breast carcinoma
  PR                 Novocastra   IA-6    1/200      Breast carcinoma

AR, androgen receptor; ER, oestrogen receptor; PR, progesterone receptor.

Table 2  Expression of AR, ER, and PR in the three categories of DCIS

  Differentiation          AR+   AR–    ER+   ER–    PR+   PR–
  Well (n = 13)            3     10     13    0      12    1
  Intermediate (n = 19)    10    9      10    9      8     11
  Poor (n = 25)            6     19     8     17     8     17
  Total (n = 57)           19    38     31    26     28    29
  p Value                  0.093        <0.0001      0.002

AR, androgen receptor; DCIS, ductal carcinoma in situ; ER, oestrogen receptor; PR, progesterone receptor.

Figure 1  Androgen receptor nuclear staining of poorly differentiated ductal carcinoma in situ of the breast (immunoperoxidase).

Figure 2  Strong nuclear staining for the oestrogen receptor in well differentiated ductal carcinoma in situ of the breast (immunoperoxidase).

DISCUSSION
In our study, using the European classification of Holland and colleagues22 to categorise cases into well, intermediately, or poorly differentiated DCIS, no association was found between immunoreactivity for AR and the degree of differentiation of DCIS. In addition, no association was found between AR expression and the expression of ER or PR. However, Isola13 found a strong association between AR detected immunohistochemically and histological grade in 76 cases of invasive breast carcinoma using frozen sections. A strong positive association between AR and ER was also found in his study.
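For the 3×2 tables in Table 2, the Pearson χ2 test has df = 2, where the p value takes the closed form exp(−χ2/2), so the reported values can be approximately reproduced in a few lines. This sketch is ours, not the authors' code; with several expected counts below five the χ2 approximation is rough, so the computed values land near, but not exactly on, the reported 0.093, <0.0001, and 0.002:

```python
from math import exp

def chi2_p_3x2(rows):
    """Pearson chi-squared test for a 3x2 table (df = 2), for which
    the p value is exactly exp(-chi2 / 2). `rows` is a list of three
    (positive, negative) count pairs."""
    col_pos = sum(r[0] for r in rows)
    col_neg = sum(r[1] for r in rows)
    n = col_pos + col_neg
    chi2 = 0.0
    for pos, neg in rows:
        row_total = pos + neg
        for observed, col_total in ((pos, col_pos), (neg, col_neg)):
            expected = row_total * col_total / n
            chi2 += (observed - expected) ** 2 / expected
    return exp(-chi2 / 2)

# Table 2 counts (well, intermediate, poor) as (positive, negative) pairs:
p_ar = chi2_p_3x2([(3, 10), (10, 9), (6, 19)])   # reported p = 0.093
p_er = chi2_p_3x2([(13, 0), (10, 9), (8, 17)])   # reported p < 0.0001
p_pr = chi2_p_3x2([(12, 1), (8, 11), (8, 17)])   # reported p = 0.002
```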
Ellis et al found no significant association between AR and ER expression in invasive breast carcinoma; however, a strong positive association was found in their study between AR and PR expression.8 The difference in the number and nature of cases studied, in addition to technical differences, may explain the disagreement between our study and those of others. A larger series of cases of DCIS would be needed to exclude a weak association of AR with the degree of differentiation. Our findings agree with those of Bobrow et al,4 Millis et al,7 and Pallis et al,19 in that most poorly differentiated DCIS cases were lacking immunoreactivity for ER and PR, and most well differentiated DCIS cases were immunoreactive with ER and PR.

In conclusion, it seems that a large number of DCIS cases are positive for AR but negative for ER and PR, and this indicates the need for further investigation of AR status, in addition to conventional ER and PR. This could yield potentially useful information for establishing new therapeutic strategies and evaluating the prognostic outcome in patients with DCIS, and may relate partially to apocrine differentiation of these tumours.

ACKNOWLEDGEMENT
The authors thank S Jordan for technical assistance and Dr C Sowter for digital photography and computer expertise.

Authors' affiliations
A A Selim, G El-Ayat, C A Wells, Department of Histopathology, St Bartholomew's Hospital, St Bartholomew's and the Royal London School of Medicine and Dentistry, Queen Mary and Westfield College, University of London, West Smithfield, London EC1A 7BE, UK

REFERENCES
1 Lagios MD, Westdahl PR, Margolin FR, et al. Duct carcinoma in situ. Relationship of extent of noninvasive disease to the frequency of occult invasion, multicentricity, lymph node metastases and short-term treatment failures. Cancer 1982;50:1309–14.
2 Lagios MD, Margolin FR, Westdahl PR, et al.
Mammographically detected duct carcinoma in situ: frequency of local recurrence following tylectomy and prognostic effect of nuclear grade on local recurrence. Cancer 1989;63:618–24.
3 Lennington WJ, Jensen RA, Dalton LW, et al. Ductal carcinoma in situ of the breast: heterogeneity of individual lesions. Cancer 1994;73:118–24.
4 Bobrow LG, Happerfield LC, Gregory WM, et al. Ductal carcinoma in situ: assessment of necrosis and nuclear morphology and their association with biological markers. J Pathol 1995;176:333–41.
5 Leal CB, Schmitt FC, Bento MJ, et al. Ductal carcinoma in situ of the breast: histological categorization and its relationship to ploidy and immunohistochemical expression of hormone receptors, p53 and c-erbB2 protein. Cancer 1995;75:2123–31.
6 Zafrani B, Leroyer A, Fourquet A, et al. Mammographically detected ductal in situ carcinoma of the breast analysed with a new classification. A study of 127 cases: correlation with estrogen and progesterone receptors, p53 and c-erbB2 proteins and proliferative activity. Semin Diagn Pathol 1994;11:208–14.
7 Millis RR, Bobrow LG, Barnes DM. Immunohistochemical evaluation of biological markers in mammary carcinoma in situ: correlation with morphological features and recently proposed schemes for histological classification. Breast 1996;5:113–22.
8 Ellis LM, Wittliff L, Bryant MS, et al. Correlation of estrogen, progesterone and androgen receptors in breast cancer. Am J Surg 1989;157:577–81.
9 Kuenen-Boumeester V, Van der Kwast TH, van Putten WL, et al. Immunohistochemical determination of androgen receptors in relation to oestrogen and progesterone receptors in female breast cancer. Int J Cancer 1992;52:581–4.
10 Collett K, Maehle BO, Skjarven R, et al. Prognostic role of oestrogen, progesterone and androgen receptor in relation to patient age in patients with breast cancer. Breast 1996;5:123–6.
11 Langer M, Kubista E, Schemper M, et al.
Androgen receptors, serum androgen levels and survival of breast cancer patients. Arch Gynecol Obstet 1990;247:203–9.
12 Brentani MM, Franco EL, Oshima CTF, et al. Androgen, oestrogen and progesterone receptor levels in malignant and benign breast tumours: a multivariate analysis approach. Int J Cancer 1986;38:637–42.
13 Isola JJ. Immunohistochemical demonstration of androgen receptor in breast cancer and its relationship to other prognostic factors. J Pathol 1993;170:31–5.
14 Boccuzzi G, Di Monaco M, Brignardello E, et al. Dehydroepiandrosterone antiestrogenic action through androgen receptor in MCF-7 human breast cancer cell line. Anticancer Res 1993;13:2267–72.
15 Hackenberg R, Hawighorst T, Filmer A, et al. Medroxyprogesterone acetate inhibits the proliferation of estrogen- and progesterone-receptor negative MFM-223 human mammary cancer cells via the androgen receptor. Breast Cancer Res Treat 1993;25:217–24.
16 Liberato MH, Sonohara S, Brentani MM. Effects of androgens on proliferation and progesterone receptor levels in T47D human breast cancer cells. Tumour Biol 1993;14:38–45.
17 Selim AA, Wells CA. Immunohistochemical localisation of androgen receptor in apocrine metaplasia and apocrine adenosis of the breast: relation to oestrogen and progesterone receptors. J Clin Pathol 1999;52:838–41.
18 Gatalica Z. Immunohistochemical analysis of apocrine breast lesions. Consistent over-expression of androgen receptor accompanied by the loss of estrogen and progesterone receptors in apocrine metaplasia and apocrine carcinoma in situ. Pathol Res Pract 1997;193:753–8.
19 Pallis L, Wilking N, Cedermar KB, et al. Receptors for oestrogen and progesterone in breast carcinoma in situ of the breast. Anticancer Res 1992;12:2113–15.
20 Poller DN, Snead DRJ, Roberts EC, et al. Oestrogen receptor assay in carcinoma in situ of the breast: relationship to flow cytometric analysis of DNA and expression of the c-erbB2 oncoprotein. Br J Cancer 1993;68:156–61.
21 Poller DN, Ellis IO.
Ductal carcinoma in situ (DCIS) of the breast. In: Progress in pathology, Vol 2. Edinburgh: Churchill Livingstone, 1995:47–87.
22 Holland R, Peterse JL, Millis RR, et al. Ductal carcinoma in situ: a proposal for new classification. Semin Diagn Pathol 1994;11:167–80.
23 Hsu S-M, Raine L, Fanger H. Use of avidin–biotin–peroxidase complex (ABC) in immunoperoxidase techniques: a comparison between ABC and unlabelled antibody (PAP) procedures. J Histochem Cytochem 1981;29:577–80.
24 Norton AJ, Jordan S, Yeomans P. Brief, high temperature heat denaturation (pressure cooking): a simple and effective method of antigen retrieval for routinely processed tissues. J Pathol 1994;173:371–9.
25 Reiner A, Neumeister B, Spona J, et al. Immunocytochemical localization of estrogen and progesterone receptor and prognosis in human primary breast cancer. Cancer Res 1990;50:1057–61.

Table 3  Association between AR expression and ER and PR expression in DCIS

         AR+ (n=19)   AR− (n=38)   p Value
ER +     8            23           0.26
ER −     11           15
PR +     8            20           0.57
PR −     11           18

AR, androgen receptor; DCIS, ductal carcinoma in situ; ER, oestrogen receptor; PR, progesterone receptor.

Take home messages
• Many ductal carcinoma in situ (DCIS) cases are positive for the androgen receptor (AR) but negative for oestrogen (ER) and progesterone (PR) receptors
• There was no association between AR expression and the degree of differentiation in DCIS of the breast
• There was no association between AR expression and the expression of ER and PR in DCIS of the breast

Selim, El-Ayat, Wells. J Clin Pathol: first published 1 January 2002. www.jclinpath.com

----

Use of digital photography for analysis of canopy closure

A. Guevara-Escobar*, J. Tellez and E.
Gonzalez-Sosa
Facultad de Ciencias Naturales, Centro Universitario s/n, Santiago de Querétaro, Universidad Autónoma de Querétaro, Querétaro 76010, México; *Author for correspondence (e-mail: guevara@uaq.mx; phone: +52-442-192-1200, ext: 5321; fax: +52-442-192-1200, ext: 5318)

Received 7 May 2003; accepted in revised form 9 January 2005

Key words: Canopy analyser, Canopy openness, Canopy structure, LAI-2000

Abstract

The relationships between trees and understory crops are very important in agroforestry systems, and above-ground interactions can be related to canopy structure. However, measurements of canopy structural parameters, whether destructive or indirect, are time-consuming or prohibitively expensive. The present work explored the use of digital photography as a simple method to characterise the extent of canopy closure (CC), defined as the area of tree canopies projected onto the horizontal ground surface beneath, expressed as a percentage of the ground covered. Measurements were made in two Eucalyptus (Eucalyptus nitens Deane and Maiden) plantations and a subtropical mixed legume woodland dominated by Albizia (Albizia sp.), Kidneywood (Eysenhardtia sp.) and Desert Fern (Lysiloma sp.). Images were captured at dawn to minimise light scattering and the number of sunlit foliage elements. Mean CC estimates provided by analysis of images obtained using digital cameras with contrasting performance, a Kodak DC-120 and a Canon EOS D1, were similar in precision and accuracy both between the two cameras and to those provided by a Li-Cor LAI-2000 canopy analyser. Bias between the estimates provided by the Kodak and Canon cameras was −0.02, between the Kodak and LAI-2000 was −0.07, and between the Canon and LAI-2000 was −0.05. Data from a pruning experiment using alder also demonstrated the repeatability of estimates obtained with a photographic method using the Kodak camera.
The number of ring sensors within the LAI-2000 used to estimate CC affected agreement between the photographic method and the LAI-2000.

Agroforestry Systems (2005). © Springer 2005. DOI 10.1007/s10457-005-0504-y

Introduction

Tree canopy structure is a major factor defining competitive interactions in agroforestry systems. Due to growth, decay or management, the tree canopy structure is constantly changing in time and space. Canopy structure refers to the distribution of positions, orientations, areas and shapes of plant organs (Welles and Norman 1991). Tree canopy structure must be characterised to investigate and explain its effect on biophysical and ecological processes. Relationships between canopy dimensions, sapwood area or basal area are often established from canopy attributes to avoid labour-intensive sampling. Alternatively, canopy attributes such as leaf area or light interception may be measured indirectly using different optical instruments, particularly photography. Ansley et al. (1992) predicted leaf area using photographs of tree profiles and a planimeter to measure the area of a hand-drawn line delineating the canopy profile. Grace and Fownes (1998) estimated leaf area using film slides and a dot grid to manually count the number of dots intersecting the canopy profile. Both authors found a good relationship between estimates of leaf area obtained by photography as compared to harvesting entire trees. Knowles et al. (1999) estimated canopy closure (CC), defined as the area of the tree canopy projected onto the horizontal ground surface below, using digitised still images taken from 8 mm video film; the canopy area was determined using digital image analysis. They found a good relationship between the extent of CC and stand parameters such as basal area and the ratio of green crown length to mean tree height.
Hemispherical photography combined with digital image analysis is used routinely to estimate canopy attributes such as leaf area or foliage inclination angle (Chen et al. 1991). Photographs are taken from below the canopy looking upwards under conditions where there is high contrast between leaves and the sky. Film-based hemispherical photographs have disadvantages because they cannot be reviewed in the field, and the film needs to be processed before being digitised. Film images lose resolution because they cannot preserve a higher resolution than that provided by the scanner used. Also, few digital cameras have a fisheye lens, and digital reflex cameras that can accommodate a fisheye lens are expensive. Frazer et al. (2001) reported that estimates of canopy closure were 1.4 times greater when obtained using a digital camera than a film-based camera, both using hemispherical lenses. Comparisons of estimates of CC obtained using film-based or digital hemispherical photographs showed that image resolution, file compression, shutter speed and heterogeneity in sky luminance determine the accuracy of photographic estimates (Chen et al. 1991; Macfarlane et al. 2000; Frazer et al. 2001). Overexposure is the main problem associated with errors in the measurement of CC because it causes excessive light scattering and diffraction along the edges of branches and leaves (Chen et al. 1991; Frazer et al. 2001). The accuracy of digital image analysis depends on the edge definition between the structures of interest, in this case the canopy and sky areas. Edge definition, or sharpness, is obtained by using the correct shooting speed, adequate contrast and film or sensor sensitivity (ISO speed). The accuracy of the imaging system also depends on tree architecture. The quality of digital images can also be improved by increasing image resolution, and photographs are taken under diffuse sky conditions so that the range of brightness of foliage is minimised.
The LAI-2000 canopy analyser (Li-Cor Inc., Lincoln, NE, USA) is an indirect method for describing canopy attributes. This instrument estimates leaf area index (LAI) and the fraction of sky visible from beneath the canopy. The LAI-2000 method is fast, but it is sensitive to illumination conditions and stand boundary effects (Welles and Cohen 1996). These problems are reduced by using sensor view-caps, obtaining measurements when the sun is low, or protecting the sensor from direct sunlight (Li-Cor 1992). Although the effects of multiple light scattering can be corrected (Leblanc and Chen 2001), actual field LAI measurements are still routinely made under a range of illumination conditions, including direct sunlight. The estimates of LAI obtained can be up to 40% lower than those provided by direct measurements, especially in canopies with large gaps (Welles and Norman 1991). The protocol recommended by Li-Cor (1992) results in substantial underestimation of direct measurements of LAI in agroforestry tree rows and vineyard canopies (Broadhead et al. 2003; Johnson and Pierce 2004). Estimates of LAI provided by hemispherical photography and the LAI-2000 differed by only 10% in forest canopies, but this relationship is sensitive to sky conditions, exposure and field of view (Welles and Cohen 1996). Larger differences in the estimates of LAI obtained by hemispherical photography and the LAI-2000 have also been found (Frazer et al. 1998). Similarly, Macfarlane et al. (2000) reported that the photographic method underestimated LAI by 16–30% when compared to the LAI-2000 method. However, good agreement between direct methods and estimation of LAI with a LAI-2000 was found in a forest canopy (López-Serrano et al. 2000). Estimates of CC obtained using digital cameras are a potentially valuable alternative to the LAI-2000 method because they are very economical, and they provide a permanent visual record of what was measured.
If image analysis of digital photographs can provide reliable estimates of CC with the same precision as the LAI-2000, then canopy management decisions could be made based on this relatively straightforward and economical approach. However, there are many kinds of cameras, providing a range of electronic capabilities and lens quality; this could affect the accuracy of estimates of CC, and calibration would be necessary. The study reported here compared estimates of CC obtained with digital photography, using various camera settings, with those provided by the LAI-2000. Three digital cameras were compared. It was hypothesised that estimates of CC provided by the digital photography approach would have a similar variance to those provided by the LAI-2000. A second hypothesis was that good agreement would be obtained between estimates of CC provided by the tested cameras.

Material and methods

Study locations

Measurements were made at three sites in Mexico. The Queretaro site (1867 m a.s.l., 20°36′ N, 100°22′ W) had 45-year-old Eucalyptus trees (Eucalyptus nitens (Deane and Maiden)), which were spaced 10–20 m apart, 25–30 m in height and heavily branched. The Huimilpan site (2318 m a.s.l., 20°22′ N, 100°16′ W) consisted of a 12-year-old E. nitens plantation initially spaced at 4 × 6 m; the trees were 7–12 m tall. The Amazcala site (1919 m a.s.l., 20°41′ N, 100°16′ W) comprised a mixed subtropical woodland dominated by Albizia (Albizia sp.), Kidneywood (Eysenhardtia sp.) and Desert fern (Lysiloma sp.); the trees were 6–7 m in height. These sites were chosen to provide low, medium and high values of CC. Data from a fourth site at the Horticultural Research Centre, Aokautere, New Zealand (30 m a.s.l., 40°22′ S, 175°40′ E) are also presented for comparison; this site was planted with 11-year-old alder trees (Alnus glutinosa (L.) Gaertn). Three shade treatments were created by pruning trees to heights of 2.5, 5.0 and 7.0 m above ground level.
These treatments provided three levels of transmitted photosynthetically active radiation, i.e. 17, 27 and 77% of full sunlight, as determined from measurements made at noon on three clear-sky days using a LI-191 SA quantum sensor (Li-Cor Inc., Lincoln, NE, USA). Plot size was 5 by 7 m for all pruning treatments; CC was determined at the centre of each plot (Devkota et al. 2000).

Instrumentation and calculations

Three digital cameras of contrasting capability were used: a Kodak DC-120 camera with a fixed 39–114 mm f/2.5–3.8 lens and 1280 × 960 pixel image resolution (Eastman Kodak Corp., Rochester, NY, USA); a Mavica FD-7 (Sony Corp., New York, NY, USA) with a fixed 40–400 mm f/1.8–2.9 lens and 640 × 480 pixel image resolution; and a Canon EOS D1 with an image resolution of 2464 × 1648 pixels (Canon Inc., Lake Success, NY, USA), fitted with an EF 28–70 mm f/3.5–5.6 lens. The cameras were mounted on a tripod, levelled and oriented to magnetic north. Automatic exposure was used to reduce the possibility of overexposure (Macfarlane et al. 2000). A 10 s self-timer was also used to avoid any movement of the camera associated with the manual release of the shutter button (Frazer et al. 2001). Photographs were taken before sunrise to avoid large variation in brightness across the picture and reflections of direct sunlight from leaves or branches (Hale and Edwards 2002). Digital files were stored in RAW or KDC formats, the least compressed file types for the Canon and Kodak cameras, respectively. The Mavica camera stored images as JPG files. Each measurement with the LAI-2000 was made by recording one reference reading 'above the canopy' and four readings below the canopy. The 'above canopy' values were determined in an open area whose radius was more than four times greater than the adjacent stand height, using the same view-cap as was used for the below-canopy measurements (López-Serrano et al. 2000).
A 90° view-cap was used to reduce the bias in the below-canopy measurements caused by heterogeneity in gap distribution within the field of view of the sensor (López-Serrano et al. 2000; Nackaerts et al. 2000). All measurements were made using a single LAI-2000 unit. When using this approach, it is essential that sky conditions remain uniform for the above- and below-canopy measurements (Li-Cor 1992). The readings beneath the canopy were made facing N, S, W and E at approximately 30 cm from the tripod. The LAI-2000 measures diffuse sunlight with five concentric detector rings associated with the following zenith angles: 0–13°, 16–28°, 32–43°, 47–58° and 61–74°, where zero degrees represents the direct upward view. These five rings are hereafter termed ring numbers 1, 2, 3, 4 and 5, respectively. Because the visual angle of the camera lenses was smaller than that of the LAI-2000, logged records were edited using the C2000 program and then recalculated using rings 1 and 2, 1–3, 1–4 or 1–5. The diffuse non-interceptance value (DIFN) was used to estimate CC as (1 − DIFN).

Measurements

Test 1. Sets of photographs were obtained to investigate the accuracy and precision of estimates of CC obtained using the LAI-2000 and digital photography. The first was collected between 26 and 28 June 2002 from each of the three sites in Mexico. Sampling points were selected using the following criteria: three parallel line transects were laid at each site, at least 10 m apart to prevent sampling the same canopy gap. All gaps that were approximately 10 m apart within each transect were identified; at these ground points, the extent of canopy closure was measured using the LAI-2000. Four ground points were randomly selected within the first and fourth quartiles of the median estimates of CC obtained at each site. Similarly, five ground points were randomly selected within the second and third quartiles.
Selecting ground points from each quartile ensured that a wider range of canopy closure was included to evaluate the methods of estimation of CC. The 18 selected ground points were surrounded by trees to avoid edge effects. A sample consisted of one photograph obtained using the Kodak camera, one with the Canon camera and the corresponding paired LAI-2000 readings. All observations were made during periods of calm weather and clear sky. Collection started at twilight (1100 h Greenwich Mean Time, GMT), and approximately 50 min elapsed between the first and last image taken. Focal length was set to 50 mm for both cameras. For each site, a one-way ANOVA was used to compare the mean estimates of CC obtained using digital photography and the LAI-2000. Estimates of CC obtained using the LAI-2000 were considered to provide the standard value. To verify whether both estimates of CC have the same precision, the Bartlett test was used to compare the variance of the mean estimates of CC. Regression analysis was also used to compare estimates of CC obtained with the different methods. Bias was calculated using the procedures described by Bland and Altman (1986). The difference between each pair of estimates of CC obtained with the methods tested was calculated for each ground point (d). The mean difference (d̄) and its standard deviation were used to summarise the lack of agreement between methods. Bias was estimated by d̄, and 95% confidence intervals were also calculated.

Test 2. A second set of photographs was collected on 4 September 2002 at the Amazcala site. Two ground points were selected within one standard deviation from the mean estimate of CC determined previously in Test 1 with the LAI-2000. Photographs were taken using the Canon camera at focal lengths of 28, 35 and 50 mm, corresponding to 59°, 49° and 35° vertical angles of view, respectively. Three consecutive photographs were taken together with one LAI-2000 measurement.
For each ground point, these measurements were repeated four times, every 10 min within a 40-min period. Sampling started at twilight (1120 h GMT); the weather was calm and the sky was cloudy. LAI-2000 measurements were recomputed using the C2000 software after eliminating data from one ring at a time, as described by Li-Cor (1992). This procedure was adopted to examine which group of rings yielded the highest correlation with the photographic estimates of CC. Data were analysed as a repeated-measures, one-factor design, the factor being the vertical angle of view. Replicates were the ground points and the four repeated measures. One-degree-of-freedom contrasts were used to compare mean estimates of CC. Pearson correlations were obtained between estimates of CC calculated excluding different sensor rings of the LAI-2000 and estimates from images obtained with different focal lengths.

Test 3. A third set of 16 photographs was collected on 16 September 2002 at the Amazcala site to evaluate the relationship between estimates of CC obtained using the LAI-2000 and digital photography at different sampling times, starting at twilight (1120 h GMT). A total of eight measurements were made every 20 min for each ground point during a 150-min period. Other times of day were not considered because illumination conditions may affect the photographic and LAI-2000 estimates of CC (Chen et al. 1991; Macfarlane et al. 2000; Frazer et al. 2001). Only the Canon camera, at a focal length of 50 mm, was used in this test. Measurements consisted of paired photographs and LAI-2000 readings. The ground points previously selected in Test 2 were used and considered as replicates. The tripod was repositioned on each occasion at virtually the same position and orientation. The LAI-2000 readings were edited as described previously. The weather was calm and sky conditions were cloudy. Linear regression was used to explore the relationship between time and the estimates of CC provided by both methods.
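The Bland and Altman (1986) agreement calculation used across these tests can be sketched as follows. This is a minimal illustration, not the authors' code: the paired CC values are invented, and a normal approximation (1.96) stands in for the t distribution when forming the 95% confidence interval of the bias.

```python
# Bland-Altman agreement between two methods of estimating canopy closure (CC).
# The paired values below are invented for illustration; they are not study data.
import math

def bland_altman_bias(method_a, method_b):
    """Return (bias, sd of differences, approximate 95% CI of the bias).

    bias is the mean difference d-bar = mean(a - b); the CI uses a normal
    approximation (1.96) rather than the t distribution for brevity.
    """
    diffs = [a - b for a, b in zip(method_a, method_b)]
    n = len(diffs)
    bias = sum(diffs) / n
    sd = math.sqrt(sum((d - bias) ** 2 for d in diffs) / (n - 1))
    half_width = 1.96 * sd / math.sqrt(n)
    return bias, sd, (bias - half_width, bias + half_width)

cc_camera = [0.20, 0.45, 0.61, 0.51, 0.24]    # hypothetical photographic CC
cc_lai2000 = [0.24, 0.59, 0.66, 0.52, 0.24]   # hypothetical paired LAI-2000 CC
bias, sd, ci = bland_altman_bias(cc_camera, cc_lai2000)
```

A negative bias, as in the study, simply means the first method read lower on average than the second.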
Test 4. A fourth group of photographs was obtained between 22 October 1998 and 7 May 1999 in a system containing alder at Aokautere, New Zealand. Photographs were taken approximately every 10 days, with eight photographs per pruning level taken on each date. Photographs were taken at twilight, 30–45 min before dusk, handholding the Kodak camera towards the canopy and manually releasing the shutter. Three sets of digital photographs were obtained using the Mavica camera from 29 January to 21 February 1999. All models were fitted using the MEANS and GLM procedures within SAS (SAS Institute, Cary, NC, USA) to establish differences between means and the significance of regressions. The minimum level for significance was set at p ≤ 0.05. Residuals were checked for normality and independence using Kolmogorov–Smirnov and Durbin–Watson tests, respectively.

Analysis of digital images

Images were analysed using Photopaint v 9.0 (Corel Corp., Ottawa, ON, Canada). The threshold used to classify pixels into 'sky' and 'canopy' was determined for the first image of the set obtained with each camera and date. The threshold was saved in a file and then applied to the rest of the images of the corresponding set. The threshold was estimated anew only when it was noticeable that pixels belonging to the sky were wrongly identified as canopy, or vice versa, and was then applied to the remaining images. Classified images were analysed using Sigmascan v 4.0 (SPSS Inc., Chicago, IL, USA) to calculate canopy area. The ratio of canopy area to the frame area of the image was expressed as a percentage and used as an estimate of CC.

Results

Test 1: comparing cameras

Variance and the mean estimates of CC obtained with the Canon and Kodak cameras set to 50 mm focal length and those provided by the LAI-2000 were similar at all the sites examined in Mexico (Table 1).
The estimates of CC obtained using the LAI-2000 presented in Table 1 were calculated using rings 1–3; similar values were obtained when the estimates were based on rings 1–2 or 1–4. Only at the Amazcala site was a significant correlation found between the estimates provided by digital photography and the LAI-2000 (r² = 0.5 and 0.61 for the Kodak and Canon cameras, respectively; p<0.05).

Table 1. Canopy closure (CC) estimated by analysis of digital photographs taken with a zoom lens set to 50 mm focal length, or with the LAI-2000 canopy analyser using sensor rings 1–3. Vertical angles of view are shown in parentheses after each instrument.

Site(a)      Basal area   Species(b)   Kodak DC-120 (46°)   Canon EOS D1 (35°)   LAI-2000 (38°)
             (m² ha⁻¹)                 CC      SE           CC      SE           CC      SE
Huimilpan    5.5          E            0.20    0.015        0.24    0.019        0.24    0.022
Queretaro    33.9         E            0.45    0.031        0.51    0.035        0.59    0.017
Amazcala     4.9          A, Ey, L     0.61    0.051        0.58    0.047        0.66    0.030

(a) Sites located in Mexico. (b) E, Eucalyptus nitens; A, Albizia sp.; Ey, Eysenhardtia sp.; L, Lysiloma sp.

At each site, the relationship between estimates of CC provided by the Kodak and Canon cameras was linear (r² = 0.90, 0.71 and 0.92 for Amazcala, Queretaro and Huimilpan, respectively; p<0.054). The relationship between the Kodak (y) and Canon (x) cameras using the data from all sites was also linear (y = 0.065 + 0.89x, r² = 0.91, p<0.05; Figure 1). Bias between estimates of CC obtained using the Kodak and Canon cameras was −0.02 with a 95% confidence interval of ±0.019. Bias between the Kodak camera and LAI-2000 estimates of CC was −0.07 ± 0.038. Bias between the Canon camera and the LAI-2000 estimates of CC was −0.05 ± 0.035. While the Kodak camera had a fixed 160 ISO value, the Canon camera automatically adjusted to the maximum 1600 ISO value and avoided extreme aperture settings. The Canon camera used diaphragm aperture settings higher than 8.4, while the Kodak camera used settings as low as 5.0. Images obtained using the Kodak camera were darker and poorer in sharpness, and some fine details were blurred.
More effort was required to define the colour threshold in the images from the Kodak camera.

Test 2: focal length

Table 2 summarises the correlations between estimates of CC obtained using the LAI-2000 and the Canon camera set at different focal lengths. A focal length of 50 mm was correlated with LAI-2000 estimates calculated using ring numbers 1–5, 1–4 and 1–3. The photographic method estimated a CC of 0.77 when focal length was set at 50 mm, which was higher than the value of 0.54 provided by the LAI-2000 using ring numbers 1–3 (p<0.01). Similar estimates of CC were obtained for all three focal lengths used with the photographic method (Table 3). However, excluding different ring numbers from the calculations, and thus having different vertical angles of view, yielded different estimates of CC using the LAI-2000 (p<0.05).

Test 3: effect of time on estimates of CC

Over the 150-min time interval examined during the morning, the estimates of CC obtained with the LAI-2000 were not influenced by the time of measurement. However, this was not the case for estimates obtained using the Canon camera, because the regression over time was significant (r² = 0.58 and slope of 0.0003; Figure 2).

Figure 1. Relationship between CC estimated by analysis of digital photographs taken at three sites in Mexico using two cameras with zoom lenses set to 50 mm focal length.

By contrast, no significant correlation was found between estimates of CC obtained using the photographic method and the LAI-2000. The photographic estimate of CC of 0.73 was higher than that of 0.56 obtained using the LAI-2000 (p<0.01). The photographic estimate of CC was similar in Tests 2 and 3 when focal length was set to 50 mm (0.77 vs. 0.73), as was the LAI-2000 estimate (0.54 vs. 0.56).

Test 4: pruning treatments

The time course of CC differed between the pruning treatments (Figure 3, p<0.05).
The pooled standard error of the mean for the estimates of CC (0.02) was identical for all pruning treatments (17, 27 and 77% of full sunlight) throughout the growing season. Only in the 17% of full sunlight treatment was a close relationship found for estimates of CC obtained using the Mavica and Kodak cameras (r² = 0.73, p<0.05). When all three pruning treatments were considered, a non-linear relationship was obtained between estimates of CC obtained using the Mavica (y) and Kodak (x) cameras (y = 0.86x^0.03, r² = 0.86, p<0.05). However, photographic estimates of CC differed between cameras for the 27% of full sunlight treatment and were more similar for the other pruning treatments (Table 4, p<0.05). Also, the pooled standard error of the mean for the estimates of CC was similar for both cameras (0.02). The bias between estimates of CC obtained with the photographic method using the Kodak and the Mavica cameras was −0.07 with a 95% confidence interval of ±0.018.

Discussion and conclusions

Camera comparison

Images obtained using the Kodak and Canon cameras were equally useful in estimating CC at the three sites examined. Considering the technical differences between these cameras, it was expected that the estimates of CC at Queretaro would be statistically different.

Table 2. Pearson correlation between estimates of CC within a subtropical mixed legume woodland obtained using different sensor rings of the LAI-2000 canopy analyser and a Canon EOS D1 camera with the lens set at different focal lengths.

LAI-2000 rings used   Vertical angle of view(b)   Focal length (mm)(a)
                                                  28       35       50
1                     7°                          0.58     0.45     0.54
1–2                   23°                         0.58     0.45     0.54
1–3                   38°                         0.65     0.69     0.82*
1–4                   53°                         0.77*    0.66     0.74*
1–5                   68°                         0.80*    0.76*    0.84*

(a) Vertical angle of view was 59°, 49° and 35° for focal lengths of 28, 35 and 50 mm, respectively. (b) Vertical angle of view corresponds to the mid-angle of the outermost ring. *p ≤ 0.05. Data are for the Amazcala site, Mexico, on 4 September 2002.

Table 3. Estimates of CC within a subtropical mixed legume woodland obtained using different sensor rings of the Li-Cor LAI-2000 canopy analyser and a Canon EOS D1 camera with the lens set at different focal lengths.

                     LAI-2000 rings used(a)                        Focal length (mm)
                     1        1–2      1–3      1–4      1–5       28       35       50
Vertical angle       7°       23°      38°      53°      68°       59°      49°      35°
CC(b)                0.72a    0.72a    0.54b    0.52b    0.52b     0.77a    0.78a    0.78a
SE                   0.007    0.007    0.017    0.020    0.018     0.011    0.010    0.007

(a) Vertical angle of view corresponds to the mean angle of the outermost ring. (b) Means with the same letter were not significantly different (p<0.05). Data are for the Amazcala site, Mexico, on 4 September 2002.

The data for the Queretaro site shown in Figure 1 showed more scatter because the canopy was the tallest at this site and some small foliage objects were lost during image processing of the Kodak images, as their chromatic values were very similar to those of the sky. Images from the Canon camera registered a higher canopy area because the definition of small branches and leaves at the top of the canopy was better than in the images provided by the Kodak camera.

Figure 3. Development of CC for Alnus glutinosa at Aokautere, New Zealand, estimated by analysis of digital photographs captured with a Kodak DC-120 camera. Vertical bars represent 95% confidence intervals for the mean estimate of CC. Julian day 1 was on 1 January 1998.

Figure 2. Time trend of estimates of CC within a subtropical mixed legume woodland obtained during the morning (11:20 GMT) on 16 September 2002 at Amazcala, Mexico.

This also explained the lower regression coefficient between these cameras at Queretaro (r² = 0.71) when compared to Huimilpan and Amazcala (r² = 0.90 and 0.92, respectively), where the canopies were much lower. Images from the Queretaro site were processed again using the additive threshold mode and the magic wand mask tool (Corel Corp.)
to manually select all foliage elements in the Kodak images, using the Canon images as a reference. The resulting mean estimate of canopy closure obtained using images from the Kodak camera was 0.50. However, selecting foliage elements manually took 2 h per image, whereas each image could be processed in under 3 min using the colour threshold file definition. The Kodak camera consistently selected a wider diaphragm aperture to compensate for the low-illumination environment, but this also reduced the depth of field and increased the possibility of blurred foliage borders because some objects were not in focus. Chen et al. (1991) showed that variation in shutter speed for the same image taken at the same time causes large variation in estimates of CC. Good definition of foliage was also important at Amazcala because the canopy was denser and closer to the camera than at the other sites; the shorter distance to the focused object decreased the depth of field. The particularly small and pinnate leaf form of the leguminous tree species present at the Amazcala site imposed a further requirement for clear definition of the foliage border of the most distant canopy stratum, which the Kodak camera could not define adequately. The difference between estimates of CC provided by the Kodak and Canon cameras was probably smaller at Amazcala because the problem of foliage definition was less pronounced at this site (Table 1). However, the quality of Canon images could be improved if the ISO speed were set to its lower limit (200) rather than the 1600 ISO automatically selected. Although higher ISO speeds are needed to take photographs in poor light, they also increase the amount of noise in the image. Data from all tests indicate that the repeatability of the photographic method was adequate for the description of tree canopies.
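The colour-threshold classification discussed above (and described under 'Analysis of digital images') reduces to counting pixels darker than a cut-off. A minimal sketch follows; the tiny grid of grey levels and the threshold value are invented, standing in for the Photopaint classification and Sigmascan area measurement actually used.

```python
# Sketch of the canopy-closure (CC) calculation from a threshold-classified
# image. Grey levels near 0 represent dark canopy, near 255 bright sky; the
# 4x6 "image" and the threshold of 128 are illustrative assumptions.

def canopy_closure(pixels, threshold):
    """Fraction of pixels darker than `threshold`, i.e. classified as canopy."""
    flat = [value for row in pixels for value in row]
    canopy = sum(1 for value in flat if value < threshold)
    return canopy / len(flat)

image = [
    [ 30,  40, 220, 240, 250,  35],
    [ 25, 210, 230,  20,  15, 240],
    [245, 250,  10,  30, 235,  40],
    [ 20,  35, 225, 250, 245,  30],
]
cc = canopy_closure(image, threshold=128)  # 12 of 24 pixels fall below 128
```

As in the paper's workflow, a threshold chosen on one image of a set could be reused for the rest, and re-estimated only when sky pixels are visibly misclassified as canopy or vice versa.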
Agreement between repeated measurements using the Kodak camera was similar for all three pruning treatments in the alder canopy in New Zealand (0.02 standard error of the mean), and also similar to that observed at the three sites in Mexico (0.015, 0.031 and 0.051; Table 1). The consistent variability of the estimates of CC obtained using the Kodak camera indicates that the repeatability of the method was unaffected by canopy development during the growing season or differences in canopy structure; this is clearly shown by the similarity of the 95% confidence intervals for the estimates of CC for the various pruning treatments (Figure 4). Repeatability of the photographic method was also unaffected by selection of camera or selected sites because the standard error of the mean of estimates of CC was similar in magnitude for the Kodak, Canon and Mavica cameras in all tests (0.01–0.03). Nonetheless, using the Canon would be preferable to the Kodak camera because the magnitude of the bias was smaller with respect to the LAI-2000 (−0.05 vs. −0.07, respectively). Similarly, using the Kodak camera would be preferable to the Mavica because the bias between the Kodak and Canon cameras was smaller than between the Kodak and Mavica cameras (−0.02 vs. −0.07, respectively).

Table 4. Canopy closure (CC) of pruned alder (Alnus glutinosa) estimated by analysis of digital images captured using Kodak DC-120 or Mavica FD-7 cameras. The pruning treatments correspond to levels of 17, 27 and 77% of full sunlight. Measurements were made during February 1999 at Aokautere, New Zealand.

                      CC
              17%              27%              77%
Julian day^a  Mavica  Kodak    Mavica  Kodak    Mavica  Kodak    SE^b
402           0.68 a^c 0.74 b  0.39 a  0.21 b   0.68 a  0.54 b   0.020
408           0.71 a   0.68 a  0.35 a  0.19 b   0.59 a  0.59 a   0.019
416           0.70 a   0.68 a  0.34 a  0.19 b   0.60 a  0.59 a   0.024

^a Julian day 1 was on 1 January 1998.
^b Pooled standard error.
^c Within pruning treatments, means in the same row were not significantly different (p < 0.05).
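The repeatability and bias figures quoted in this section are standard summary statistics: within-method repeatability is the standard error of the mean of repeated estimates, and between-method bias is the mean difference of paired estimates. A minimal sketch of both computations follows; all numeric values are hypothetical illustrations, not the study's data.

```python
from math import sqrt
from statistics import mean, stdev

def sem(estimates):
    """Within-method repeatability: standard error of the mean of repeated estimates."""
    return stdev(estimates) / sqrt(len(estimates))

def bias(method_a, method_b):
    """Between-method bias: mean difference of paired estimates (A minus B)."""
    return mean(a - b for a, b in zip(method_a, method_b))

# Hypothetical values for illustration (not the study's data):
repeats = [0.58, 0.61, 0.59, 0.62, 0.60]   # repeated CC estimates of one canopy
cc_kodak = [0.40, 0.52, 0.63, 0.47]        # paired estimates, camera A
cc_canon = [0.44, 0.55, 0.66, 0.50]        # paired estimates, camera B

print(round(sem(repeats), 3))              # → 0.007
print(round(bias(cc_kodak, cc_canon), 4))  # → -0.0325
```

A negative bias of this kind is what motivates preferring the camera whose estimates sit closest to the reference instrument.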
At Queretaro and Amazcala, where the foliage was further from the camera or smaller, image resolution was important. This problem was discussed by Yamamoto (2000), who examined canopy gap size and concluded that higher resolution images would estimate the gap size more precisely than low resolution images. The use of higher resolution also was advocated by Takenaka (1987). Frazer et al. (2001) concluded that estimates of CC derived from low image resolutions (1024 × 768 or 640 × 480 pixels) combined with 1:4 file compression were lower than those obtained from 1600 × 1200 uncompressed images provided by a Nikon Coolpix 950 digital camera. Using the same camera (Coolpix 950), Englund et al. (2000) found that differences in digital image quality did not affect estimates of CC. Nevertheless, the relationship between estimates of CC provided by the Kodak and Canon cameras in the present study (slope = 0.89, r² = 0.91) was close to that found by Hale and Edwards (2002) for CC when comparing hemispherical pictures from a film-based Nikon FM2 and a Nikon Coolpix 950 camera over a range of canopy densities (slope = 0.905, r² = 0.92). A bias value of −0.003 to −0.04 could be added to the CC estimates obtained using the Kodak camera as a correction factor with respect to the Canon camera. This would be appropriate as the measurement errors of both methods were comparable, and also because the correlation between the difference of the two estimates of CC (d) and their average was small (r = 0.2; Bland and Altman 1986).

Comparison between the LAI-2000 and cameras

Results from three field tests showed that the digital photography and LAI-2000 approaches provided estimates of CC with similar precision because the variance within the results was similar. Although the standard errors of the mean were similar in Test 1 (Table 1), they were one order of magnitude greater than in Tests 2 and 3 (Table 3 and Figure 2).
This difference was attributed to the sampling procedure used to obtain representative ground points. Ground points used in Test 1 were selected within each quartile of the median distribution of canopy closure estimated with the LAI-2000, whereas those used in Tests 2 and 3 were selected to be within one standard deviation of the mean estimate of CC, with the result that variation between the ground points was smaller.

The photographic and LAI-2000 methods provided similar estimates of CC except in Tests 2 and 3, when LAI-2000 rings 3, 4 and 5 were included in the calculations. It is possible that the photographic and LAI-2000 methods spatially assess CC differently. This possibility is supported by the non-significant regressions between CC estimates from either the Canon or Kodak camera and the LAI-2000. In fact, the high correlation at a 68° vertical angle of view between the LAI-2000 and the camera was considered an artifact because the camera was not able to capture light between a 60° vertical angle of view and the horizon when the focal length was set to 28 mm (Table 2). Correlations of estimates of CC also were low between hemispherical photography and LAI-2000 methods in a deeply shaded conifer-dominated forest or tropical forest, even when the outermost zenithal rings were disregarded (Machado and Reich 1999; Ferment et al. 2001). Nonetheless, analyses based on hemispherical photography and LAI-2000 measurements revealed similar developmental trends in CC and stand leaf area, despite significant quantitative differences in their estimates (Frazer et al. 1998; Paula and Lemos-Filho 2001). Also, Peper and McPherson (2003) reported that estimates based on digital analysis of photographs obtained with a Kodak DC-50 camera and the LAI-2000 showed good correlation with total leaf area of isolated trees (r² > 0.71).
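The correlations discussed in this section are ordinary Pearson coefficients between paired estimates, and the same statistic applied to the per-pair differences and averages gives the Bland–Altman check used to justify a constant correction factor. A minimal sketch with hypothetical paired CC values (illustrative only, not the paper's data):

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / sqrt(sxx * syy)

# Hypothetical paired CC estimates at the same ground points (illustrative only):
cc_camera = [0.42, 0.55, 0.61, 0.48, 0.70]
cc_lai2000 = [0.40, 0.57, 0.65, 0.45, 0.73]

r = pearson_r(cc_camera, cc_lai2000)  # agreement between the two methods

# Bland-Altman check: correlate per-pair differences against per-pair means;
# a small correlation here supports adding a constant correction factor.
diffs = [a - b for a, b in zip(cc_camera, cc_lai2000)]
means = [(a + b) / 2 for a, b in zip(cc_camera, cc_lai2000)]
r_diff_vs_mean = pearson_r(diffs, means)
```

With real data, `r` would be computed separately for each ring combination and focal length, as in Table 2.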
The regression shown in Figure 2 suggests that the sensor of the Canon EOS D1 was more responsive to changes in illumination than the sensors within the LAI-2000 and, hence, would also require measurements to be made under overcast conditions or low solar angles (Li-Cor 1992). Obtaining canopy images at other times of day is not advisable, because foliage elements may be sunlit and reflected light in images is difficult to distinguish from the sky. Images containing lateral chromatic aberration (halos) also have unpredictable effects on estimates of CC (Frazer et al. 2001).

It was concluded that digital photography is suitable for characterising CC within agroforestry systems, and that the estimates obtained exhibit similar variability to those provided by the LAI-2000 canopy analyser. Estimates of CC obtained using the photographic method and a focal length of 50 mm were comparable to those provided by the LAI-2000 when rings 1–2 or 1–3 were used in the calculations. The reasons why the estimates disagreed at lower angles of view require further exploration. Estimates of canopy closure obtained by analysis of images from different cameras could be calibrated by including a correction factor within the calculation. However, cameras with higher resolution should preferably be used.

Acknowledgements

Financial support for this project was provided by Consejo Nacional de Ciencia y Tecnología under grant SIHGO 19990206019. We also thank the anonymous reviewers for suggestions to improve this manuscript.

References

Ansley R.J., Price D.L., Dowhower S.L. and Carlson D.H. 1992. Seasonal trends in leaf area of honey mesquite trees: determination using image analysis. J. Range Manage. 45: 339–344.
Bland J.M. and Altman D.G. 1986. Statistical methods for assessing agreement between two methods of clinical measurement. Lancet i: 307–310.
Broadhead J.S., Muxworthy A.R., Ong C.K. and Black C.R. 2003.
Comparison of methods for determining leaf area in tree rows. Agric. Forest Meteorol. 115: 151–161.
Chen J.M., Black T.A. and Adams R.S. 1991. Evaluation of hemispherical photography for determining plant area index and geometry of a forest stand. Agric. Forest Meteorol. 56: 129–143.
Devkota N.R., Kemp P.D., Valentine I. and Hodgson J. 2000. Overview of shade tolerance of pasture species in relation to deciduous tree, temperate silvopastoral systems. Agron. New Zealand 30: 101–107.
Englund S.R., O'Brien J.J. and Clark D.B. 2000. Evaluation of digital and film hemispherical photography and spherical densitometry for measuring forest light environments. Can. J. Forestry Res. 30: 1999–2005.
Ferment A., Picard N., Gourlet-Fleury S. and Baraloto C. 2001. A comparison of five indirect methods for characterizing the light environment in a tropical forest. Ann. Forestry Sci. 58: 877–891.
Frazer G., Lertzman K.P. and Trofymow J.A. 1998. Developmental trends of canopy structure in coastal forests of British Columbia. Northwest Sci. 72: 21–22.
Frazer G.W., Fournier R.A., Trofymow J.A. and Hall R.J. 2001. A comparison of digital and film fisheye photography for analysis of forest canopy structure and gap light transmission. Agric. Forest Meteorol. 109: 249–263.
Grace K.T. and Fownes H.J. 1998. Leaf area allometry and evaluation of non-destructive estimates of total leaf area and loss by browsing in a silvopastoral system. Agrofor. Syst. 40: 139–147.
Hale S.E. and Edwards C. 2002. Comparison of film and digital hemispherical photography across a wide range of canopy densities. Agric. Forest Meteorol. 112: 51–56.
Johnson L.F. and Pierce L.L. 2004. Indirect measurement of leaf area index in California North Coast vineyards. HortSci. 39: 236–238.
Knowles R.L., Horvath G.C., Carter M.A. and Hawke M.F. 1999. Developing a canopy closure model to predict overstorey/understorey relationships in Pinus radiata silvopastoral systems. Agrofor. Syst. 43: 109–119.
Leblanc S.G.
and Chen J.M. 2001. A practical scheme for correcting multiple scattering effects on optical LAI measurements. Agric. Forest Meteorol. 110: 125–139.
López-Serrano F.R., Landete-Castillejos T., Martínez-Millán J. and Cerro-Barja A. 2000. LAI estimation of natural pine forest using a non-standard sampling technique. Agric. Forest Meteorol. 101: 95–111.
Li-Cor. 1992. Plant Canopy Analyser Operating Manual. Li-Cor Inc., Lincoln, NE, USA.
Macfarlane C., Coote M., White D.A. and Adams D.A. 2000. Photographic exposure affects indirect estimation of leaf area in plantations of Eucalyptus globulus Labill. Agric. Forest Meteorol. 100: 155–168.
Machado J.L. and Reich P.B. 1999. Evaluation of several measures of canopy openness as predictors of photosynthetic photon flux density in deeply shaded conifer-dominated forest understory. Can. J. Forestry Res. 29: 1438–1444.
Nackaerts K., Coppin P., Muys B. and Hermy M. 2000. Sampling methodology for LAI measurements with LAI-2000 in small forest stands. Agric. Forest Meteorol. 101: 247–250.
Paula S.A. and Lemos-Filho J.P. 2001. Dynamics of canopy in a semideciduous forest at the urban perimeter of Belo Horizonte, MG. Rev. Brasileira de Bot. 24: 545–551.
Peper P.J. and McPherson E.G. 2003. Evaluation of four methods for estimating leaf area of isolated trees. Urban Forestry Urban Green. 2: 19–29.
Takenaka A. 1987. Analysis of light transmissivity of forest canopies with a telephoto method. Agric. Forest Meteorol. 40: 359–369.
Welles J.M. and Cohen S. 1996. Canopy structure measurement by gap fraction analysis using commercial instrumentation. J. Exper. Bot. 47: 1335–1342.
Welles J.M. and Norman J.M. 1991. Instrument for indirect measurement of canopy architecture. Agron. J. 83: 818–825.
Yamamoto K. 2000. Estimation of the canopy gap size using two photographs taken at different heights. Ecol. Res. 15: 203–208.
work_3csewzvvgvdxnirxwbr35kitmy ----

Infectious Agents and Cancer (BioMed Central) — Open Access oral presentation

Kaposi's sarcoma-associated immune reconstitution inflammatory syndrome (KS-IRIS) in Africa: initial findings from a prospective evaluation

JN Martin*1, M Laker2, A Kambugu2, D Janka1, J Orem3, A Mwaka3, T Maurer1 and EK Mbidde4

Address: 1University of California, San Francisco, California, USA; 2Infectious Diseases Institute, Kampala, Uganda; 3Uganda Cancer Institute, Kampala, Uganda; 4Uganda Virus Research Institute, Entebbe, Uganda

* Corresponding author

Background

Immune reconstitution inflammatory syndrome (IRIS) is a set of conditions, characterized by findings of inflammation, which occur in HIV-infected patients after initiation of antiretroviral therapy (ART).
It is believed to result from an overly exuberant response to residual opportunistic pathogens by the newly reconstituted immune system. Specific manifestations of IRIS depend upon the pathogen being targeted, but among the variants of IRIS, Kaposi's sarcoma-associated IRIS (KS-IRIS) is one of the least understood – especially in resource-limited settings where KS is epidemic. Now that ART is becoming available in sub-Saharan Africa, we have hypothesized that KS-IRIS is likely to be most relevant in this region. This is because of the high prevalence of AIDS-related KS in sub-Saharan Africa (i.e., a large number of patients with AIDS-KS initiating ART) and because of factors that theoretically may predispose to KS-IRIS, specifically higher KS lesion burden and lower pre-ART CD4+ T cell count.

Methods

In Kampala, Uganda, we studied the incidence and spectrum of KS-IRIS in a randomized trial for the initial therapy of AIDS-related KS. Participants without indications for chemotherapy were randomized to one of two different ART regimens and then evaluated every 4 weeks for 48 weeks with a questionnaire, physical examination, and digital photography to record signs and symptoms compatible with KS-IRIS. KS-IRIS was defined as development of a) any of the following in pre-existing KS lesions: swelling, pain or tenderness, paresthesia, erythema, or warmth; or b) not otherwise explained subcutaneous nodules, node enlargement, edema, or pleural effusion.

Results

Of the first 30 subjects evaluated, 17 (57%) exhibited ≥ 1 sign or symptom compatible with KS-IRIS. The most common finding was lesion swelling (43%), and there were several instances of dramatic lesion enlargement followed by spontaneous reduction (see Figures 1 and 2, from two subjects).
Other manifestations included lesion pain or paresthesia (33%), warmth or erythema (23%), femoral or inguinal node enlargement with scrotal swelling (n = 1), and pleural effusion (n = 1).

From the 11th International Conference on Malignancies in AIDS and Other Acquired Immunodeficiencies (ICMAOI): Basic, Epidemiologic, and Clinical Research, Bethesda, MD, USA, 6–7 October 2008. Published: 17 June 2009. Infectious Agents and Cancer 2009, 4(Suppl 2):O17, doi:10.1186/1750-9378-4-S2-O17. Publication of this supplement was made possible with support from the Office of HIV and AIDS Malignancy, National Cancer Institute, National Institutes of Health. This abstract is available from: http://www.infectagentscancer.com/content/4/S2/O17. © 2009 Martin et al; licensee BioMed Central Ltd.

Figure 1.
The most fulminant KS-IRIS case featured diffuse lesion swelling and new diffuse subcutaneous nodules; death ensued, but the causative role of KS-IRIS is unknown. Of the three participants with KS-IRIS that did not resolve spontaneously and who were given chemotherapy, two had a good response to liposomal doxorubicin.

Conclusion

In sub-Saharan Africa, KS-IRIS occurs at a clinically relevant frequency with a wide spectrum of manifestations. Many of the findings are difficult to distinguish in real time from natural KS progression, and even some of the most dramatic cases can be self-limiting. This, coupled with the general lack of effective chemotherapy for KS in resource-limited settings, makes patient management complicated when KS-IRIS is suspected. In this setting, diagnostic tests are thus urgently needed to distinguish IRIS-based disease from natural progression of KS.

Figure 2.
work_3d3dgcmpcvandogz2tlpc6akly ----
He is also the director of the Consortium for Corporate Entrepreneurship, an organization he founded in 1998 with the mission to increase the number, speed, and success rates of highly profi table products and services at the front end of innovation. Peter also has 18 years of industry experience, having worked at ATT Bell Laboratories and BD. He holds a PhD in biomedical engineering from Drexel University. www.frontendinnovation.com ; peter.koen@stevens.edu Heidi Bertels is a visiting assistant professor of business administration at the University of Pittsburgh, where she teaches courses in entrepreneurship, corporate entrepreneurship, and strategic management. She is fi nishing her dissertation research at Stevens Institute of Technology on how established organizations can be successful when they try to enter new value networks. She also has a BA and an MA in integrated product development from Hogeschool Antwerpen in Belgium. Her research interests are innovation man- agement, product development, and entrepreneurship. hbertels@katz.pitt.edu Ian Elsum is principal adviser in the Science Strategy and Investment Group of Australia’s Commonwealth Scientifi c and Industrial Research Organisation (CSIRO), with responsibilities in planning and investment. He is also a visiting fellow at the Australian National University, where he is undertaking research on the management of radical and breakthrough innovation. Ian has 24 years of experience in the strategic manage- ment of applied research. He has been a member of a number of company boards and management and advi- sory committees, and has also been a regular partici- pant in developing innovation policy. Ian has a PhD in chemistry from Monash University. ian.elsum@csiro.au OVERVIEW: Business model innovation represents a signifi cant opportunity for established fi rms, as demon- strated by the considerable success of Apple’s iPod/iTunes franchise. 
However, it also represents a challenge, as evi- denced by Kodak’s failed attempt to dominate the digital photography market and Microsoft’s diffi culty gaining share in the gaming market, despite both companies’ huge fi nancial investments. We developed a business model innovation typology to better explain the complex set of factors that distinguishes three types of business model innovations and their associated challenges. KEY CONCEPTS : Business model innovation , Value networks , Radical innovation , Breakthrough innovation , Sustaining innovation , Disruptive innovation THE THREE FACES OF BUSINESS MODEL INNOVATION: CHALLENGES FOR ESTABLISHED FIRMS Established fi rms frequently have diffi culty with business model innovation. The business model innovation typology defi nes the challenges. Peter A. Koen , Heidi M. J. Bertels , and Ian R. Elsum DOI: 10.5437/08453608X5403009 May—June 2011 1 0895-6308/11/$5.00 © 2011 Industrial Research Institute, Inc. The Unifi ed Business Model Innovation Typology Established fi rms consistently demonstrate their ability to succeed in sustaining innovation. Intel, for instance, leads in the development of next-generation micropro- cessor chips using radically new technology. However, these same companies frequently have diffi culty trying to develop new business models in new markets—even with existing technology. Intel, for example, has been unsuccessful in penetrating the market for cellphone chips, despite many valiant attempts. Current innovation typologies—including those that rely on distinctions such as incremental/radical ( Wheelwright and Clark 1992 ), sustaining/disruptive ( Christensen 1997 ), or ex- ploitation/exploration ( March 1991 )—are inadequate to explain this phenomenon. The BMIT allows for consid- eration of a more complex set of factors and thus more readily distinguishes why and where established fi rms have diffi culty with business model innovation. 
Hewlett-Packard, Canon, and Nikon. In each of these cases, the firms had adequate resources, an in-depth market understanding—not to mention a solid head start in the market—and the technical competencies needed to succeed, yet each of these companies allowed new entrants to disrupt them. We wanted to find out why established companies that dominate their markets later allow other companies to succeed with business model innovations that either disrupt them or limit their ability to grow further.

These cases do not fit the usual pattern of disruptive innovation. In elaborating their concept of disruptive innovation, Christensen and colleagues (Christensen and Raynor 2003; Christensen and Rosenbloom 1995) argue that “there are two types of disruptive innovations: low-end and new market” (Christensen, Anthony, and Roth 2004, xvii). At the low end, disruptors gain market share through a low-price business model focused on overserved customers. New market disruption targets new nonconsumers. Intel’s development of the low-priced Celeron microprocessor, which targeted the cost-conscious computer market, is an example of an established firm pursuing a low-price business model. Sony’s Walkman audio player is an example of a business model focused on reaching new nonconsumers. The new nonconsumers for Sony’s portable transistor radio were teenagers who couldn’t afford more expensive, high-performance vacuum-tube radios. These consumers, who had previously had no other alternative, were delighted to have control of their own music, even with a sound quality much lower than that offered by vacuum-tube radios. However, Sony’s domination of the portable audio player market was disrupted by Apple’s iPod, which neither offered a lower price nor focused on new nonconsumers. In fact, none of the disruptions we’ve described—Apple’s iPod, new digital advertising channels, and digital photography—relied on either a low-price or a new nonconsumer business model.

As part of an Industrial Research Institute (IRI) Research-on-Research (ROR) working group project, we set out to better understand how and why disruption occurs in these cases, which do not seem to fall into Christensen’s model for disruptive innovation. Furthermore, while Christensen’s work focuses on new disruptive business models, we wanted to understand the problem of disruption, and the challenges presented by disruption, from the established firm’s perspective. We sought to develop a more comprehensive model to explain disruptive business model innovations, especially those that do not involve low-cost or new nonconsumer business models. The result is a unified business model innovation typology (BMIT).1

The BMIT classifies innovation along three dimensions: technology, value network, and financial hurdle rate (Figure 1). It further divides the innovation space into two zones: sustaining innovation, where established firms generally succeed, and business-model innovation, where otherwise successful firms frequently fail. Within the technology dimension, the model distinguishes among incremental, architectural, and radical technological innovation. Incremental technological innovation involves the refinement, improvement, and exploitation of existing technology. Architectural innovation involves creating new ways to integrate components in a system based on current or incremental changes to existing technology (Henderson and Clark 1990). The iPod, for instance, incorporated no new technology, but provided an entirely new design. Finally, radical innovation introduces an entirely new core technology.

1 A brief, preliminary overview of our work was presented in an earlier report (Koen et al. 2010). The work is also part of the PhD dissertation of Heidi Bertels.

Research • Technology Management

The value network dimension encompasses how a firm identifies, works with, and reacts to customers, suppliers, and competitors. The value network is a tightly connected, complex system of suppliers, customers, distributors, and partners (Christensen and Rosenbloom 1995). The value network dimension is encompassing, embracing the unique relationships that a company builds with both its upstream (supplier) and downstream (distributor and customer) channels. Relationships in these channels are a critical source of competitive advantage. Business model innovation often requires the development of a new value network. The new relationships embedded in a new value network can be problematic, as they are difficult to establish and can disrupt existing relationships. For example, Nestlé distributed Nescafé coffee through existing mass market department store and grocery sales channels, a value network that was very familiar to them. In contrast, the company needed to develop an entirely new value network for Nespresso, a high-end coffee shop targeted to reach young professionals.

In the BMIT, the value network dimension is divided into two areas: innovations within the company’s existing network and innovations requiring value networks with components that are new to the company; new value networks may reach existing consumers in the market or new nonconsumers. For example, Zipcar, which makes cars conveniently available for very short-term rental at urban locations, mostly targets existing consumers for rental cars, but the company created a different business model in the way that those consumers access the cars and pay rental fees. By contrast, the angioplasty catheter, developed to widen arteries blocked by cardiovascular disease, replaced the need for open-heart surgery to treat moderate cardiac disease.
In marketing the device to cardiologists, who previously did not do surgeries for blocked arteries, rather than cardiothoracic surgeons, medical device companies such as Bard and Medtronic reached out to a set of new nonconsumers—cardiologists who could now treat their own patients rather than referring them to a surgeon.

Hurdle rate is another factor in the BMIT. The hurdle-rate dimension describes the relationship of a given project’s financial projections to the minimal expected return. The hurdle rate is a key factor in traditional disruptive innovation that relies on a low-cost business model. Such low-cost business models are difficult for established companies to pursue because they do not meet the hurdle rates defined by the firm’s cost structure and expected rate of return. However, it is possible for established firms to pursue low-cost business models successfully. Dow Corning’s subsidiary Xiameter (Gary 2004) offers one success story for low-cost business models implemented by established firms. Dow Corning developed Xiameter as a web-based discount channel through which customers could bulk order at a lower price the company’s more traditional products without the customer service usually provided by Corning. Xiameter became an important part of Dow Corning’s service offering and prevented the erosion of their market share by companies focused on commodity customers.

Figure 1.—Business model innovation typology (BMIT) model. Established firms tend to be successful in sustaining innovation (innovation that falls into the area defined by the bottom three boxes), but may have difficulty succeeding with innovations outside this area.

May—June 2011

Challenges in Sustaining Innovation

The sustaining innovation space, where established firms tend to succeed, is marked by a reliance on existing value networks and a comfortable financial hurdle rate. However, this space is not without challenges even for innovative companies.
Exploring it helps provide perspective on the challenges established companies encounter when they leave the relative comfort zone of sustaining innovation.

Sustaining innovation—technology improvements or even radical new technologies implemented within the companies’ existing value network and established financial hurdle rates—protects the status quo and represents the majority of product development activities. Incremental sustaining innovations, which deploy minor, progressive improvements in technology within an established value network, are the easiest and least risky, and hence the most common activities. Sustaining innovations that utilize the existing value network to produce and market architectural or radical technology improvements while maintaining existing financial hurdle rates are more difficult, since they involve higher degrees of technological novelty and higher levels of risk. As an example, Toyota’s Prius is a sustaining architectural innovation; it involved no new technology, but combined existing gasoline and electric motor technology to create a hybrid design with significantly improved fuel efficiency, and it was created and sold within existing value networks and hurdle rates. Intel’s dual-core processor is also a sustaining innovation, as it incorporated a radical technology innovation (new designs that doubled the chip’s performance while reducing cooling demands) but relied on an existing value network for distribution.

Sustaining innovations that rely on incremental technology improvements require different behaviors and processes than those implementing architectural or radical technology. Incremental projects can be managed using a well-honed serial innovation process with gated decision points (Cooper 2001), a system that has proven its merit in projects where the market and technology are known.
In contrast, architectural and radical technology projects require a more complex learning strategy to manage the challenges associated with radical technology. A learning strategy is a cyclical process in which assumptions and uncertainties are tested and resolved through experimentation and iteration (O’Connor et al. 2008). The project direction and strategy often shift as uncertainties decrease.

Challenges in Business Model Innovation

The business model innovation space, where established firms frequently fail, requires companies to succeed with business models that require a lower than normal financial hurdle rate or the development of new value networks. Established companies typically encounter significant challenges in this more difficult zone of the BMIT.

Financial Hurdle Business Model Innovations

Business model innovation challenges in the financial hurdle space were first explained by Christensen and colleagues, who describe how disrupters gain market share through low-price business models designed to appeal to existing consumers with a more affordable option (Christensen and Raynor 2003; Christensen and Rosenbloom 1995). Low-cost business models like the disruptive innovations defined by Christensen are business model innovations that are guided by the financial hurdle rate (Figure 2). Innovations in this area typically involve projects with a lower hurdle rate than the established cost structure would allow. Christensen’s prime example was steel minimills, which disrupted integrated steel mills with electric arc-furnace technology, a radical new technology that allowed for cheaper production, although initially these furnaces could not match the quality of the larger mills’ product. The established mills initially ceded market share to the minimills, allowing the small producers to gain a foothold in the market by making cheap reinforcing bars (rebar).
Over time, minimill producers learned how to make more profitable sheet steel at an acceptable quality level and eventually replaced integrated steel mills.

Some established companies have flourished by accepting a low-cost business model, meeting potential disrupters head-on. Intel developed the low-cost Celeron chip and Dow Corning established Xiameter in order to prevent erosion of their main offerings by low-cost competitors. Similarly, the Mercedes A-Class targeted a middle-class market, but leveraged existing sales channels and distribution networks to sell the car. Marriott’s Courtyard chain of hotels targets a lower-cost segment of the market by eliminating the fancy restaurants and conference and meeting facilities that characterize its higher-end hotels.

We found fewer examples of companies utilizing architectural innovation within their existing value networks to power a low-cost business model. One example is the BIC pen corporation, which moved from manufacturing expensive fountain pens to selling low-cost ballpoint pens using a highly integrated, automated manufacturing process.

Moving to a low-cost business model presents unique challenges for established firms. As both Christensen and Raynor (2003) and Govindarajan and Trimble (2005) argue, it is extraordinarily difficult for a company to maintain two different business models within the same business division. Such a situation, they assert, would produce trade-offs that would result in a strategy favoring the sustaining business. As a result, both Christensen and Raynor and Govindarajan and Trimble recommend that a company interested in pursuing a low-cost model while maintaining its existing business create two distinct organizations—which is exactly what Intel did in the development of the Celeron chip and Dow Corning did in establishing Xiameter.
Both Intel and Dow Corning separated these units from the sustaining business. Where the larger Intel business pursued a string of breakthrough innovations in chip technology, the Celeron division focused on cost efficiency, both in achieving just good enough performance features to offer value and in aggressively pursuing manufacturing efficiencies.

New Value Network Business Model Innovations Targeting Existing Consumers

Established firms often see opportunities for growth in seeking out existing consumers within a new value network that allows the firm to maintain existing financial hurdle rates (Figure 3). For example, in an effort to reach young urban professionals, Nestlé developed Nespresso, a coffee outlet that has been described as an upscale Starbucks. Nespresso represented a new value network for Nestlé’s coffee business, which had previously sold instant coffee to the mass market via department and grocery stores. Similarly, Tesco, the United Kingdom’s largest supermarket chain, developed Tesco Direct as an online outlet to sell not only grocery items but also books, CDs, and other nonfood items. Toyota and Honda both developed luxury car brands, Lexus and Acura, which they sold in separate dealerships from their current lines, focused on a different market via a new value network for them targeted at existing affluent customers.

Fewer firms pair architectural innovation with a new value network. Microsoft launched a new business—videogame consoles—with a new value network with its development of the Xbox and its accompanying online services, and Knight Ridder, which already owned a network of print newspapers, developed an online newspaper to reach a wider market. In each case, the firms launched new businesses to reach existing consumers via new value networks.

Figure 2.—Examples of innovation projects at the intersection of the financial hurdle and technology dimensions.
In each of these cases the companies lost a considerable amount of their investment. It is even rarer for an established firm to pair a radical technical innovation with a new value network to reach existing consumers.

Christensen and Raynor (2003) and Govindarajan and Trimble (2005) recommend separating sustaining businesses from new value network projects. However, Markides and Charitou (2004) challenge this recommendation, arguing that separation should be dependent on the degree of synergy and conflict between the two business models. Nestlé separated the Nespresso business unit from Nescafé, as the Nescafé division perceived that Nespresso would cannibalize its sales and the two units had markedly different cultures: the Nescafé unit saw its product as a low-price, fast-moving consumer product, while the Nespresso unit was working to position itself as an up-market luxury experience. Unsurprisingly, values and attitudes were significantly different, creating the potential for conflict. In contrast, Markides and Charitou describe the creation of Tesco Direct as a part of Tesco, launched from one of Tesco’s west London stores. Since the supermarket’s customers were confined to the area surrounding the store, in contrast to the Internet arm whose reach extended to all of the United Kingdom, there was little conflict between the two ventures. Tesco Direct built on the synergy of the supermarket and leveraged the store’s stock to keep the initial start-up investment low.

Figure 3.—Examples of innovation projects at the intersection of the value network and technology dimension.

O’Reilly and Tushman (2004) advocate yet another approach for business model innovations in this area: the ambidextrous organization.
This approach offers a middle ground between completely separated and completely integrated organizations. O’Reilly and Tushman suggest separating the new business model from the sustaining organization—but they argue that both organizations should share senior management. Such an arrangement, they argue, ensures that the startup unit will have access to the resources and expertise of the established unit. Developing an ambidextrous organization requires considerable senior management leadership training. IBM has followed this approach with considerable success, growing their new-business-model revenue from $400 million in 2000 to $22 billion in 2006 (O’Reilly, Harreld, and Tushman 2009).

The most successful players in value network innovation pair the new value network with an incremental technology innovation. Fewer firms have successfully matched a new value network with an architectural or radical technology innovation. Knight Ridder reportedly accumulated losses of over $100 million in the launch of its first online newspaper. Information Week reported in 2009 that Microsoft had total losses in the gaming industry of about $7 billion (Schestowitz 2009). And Kodak invested over $5 billion in digital technology and never managed to become more than a small player in the market.

Knight Ridder and Kodak perceived a threat to their businesses and invested significant amounts of money in an effort to head off the threat. Microsoft, on the other hand, anticipated a technological revolution that would turn the family room into a wireless, networked nerve center for seamlessly accessing and managing all kinds of media, and positioned the Xbox to be at the center of that revolution (Grossman 2005). In that context, Microsoft developed the Xbox not as a game machine or toy, but as a way to own the entire digital environment in the home, a center for accessing music, movies, photographs, and television.
While the Xbox has achieved moderate success in the gaming market, Microsoft was unable to establish it as the nerve center for the family room, resulting in a significant financial loss for the company.

Knight Ridder and Kodak acted out of fear, while Microsoft saw a very large opportunity, but all of these companies were defeated by similar forces. All of these efforts failed in spite of the enormous resources made available to them because of routine rigidity—the tendency to frame responses to new challenges to fit familiar frameworks (Gilbert 2003). Routine rigidity led executives at all three companies to frame their efforts to fit the familiar frameworks of their sustaining businesses. Kodak could envision being successful only by leveraging its existing relationships with retailers and offering digital photography CD disks and digital photography kiosks. Focused on its existing business models, the company did not pursue sales of digital cameras, photo printers, and printer disposables, which ended up being the real moneymakers in the digital market, until much later. Similarly, Knight Ridder could only envision the digital newspaper as an extension of the print newspaper, and so failed to exploit new revenue channels or to develop the digital channel fully. Microsoft’s vision of the Xbox as a gateway into the living room drove important technology decisions that ultimately produced an expensive console that appealed primarily to hardcore gamers—no one else was ready for the home entertainment hub Microsoft wanted to build. The hub platform strategy never came to fruition; although the Xbox became a moderately successful gaming product, it never delivered on the intended business model and generated huge losses.

One notable example of a successful architectural innovation by an established company accessing a new value network is the iPod and iTunes.
iTunes, which delivers single song tracks to consumers in a user-friendly format, required Apple to build unique partnerships with the music industry, resulting in new value networks. Apple did not envision the music industry as a threat to the company or essential to its future. Rather, the company envisioned music as a new opportunity to be approached prudently, with limited initial investments. As a result, Apple did not try to frame the market as an extension of its current sustaining computer business. First and second-year sales were paltry by most standards, less than $4 million the first year and $10 million the second. But since the company did not view the iPod as central to its future business, the lower initial returns were acceptable and Apple was able to give the new business time and space to develop.

New Value Network Business Model Innovations Targeting Nonconsumers

Established firms may also seek to establish entirely new value networks to reach nonconsumers—potential customers who have not entered the market. Developing new businesses in this context presents a different set of challenges for established companies, one that was also explored by Christensen and colleagues (Christensen and Raynor 2003; Christensen and Rosenbloom 1995). Innovations to reach nonconsumers are the “hardest innovations to identify” (Christensen, Anthony, and Roth 2004, 8), but they have the greatest potential for growth. We did not find any examples of successful innovation in this space paired with incremental technological innovation. Sony’s Walkman is an example of architectural innovation to access a market of previous nonconsumers, and Ciba Vision’s Visudyne represents a radical innovation paired with a new value network.
With Visudyne, Ciba Vision entered a global agreement with QLT PhotoTherapeutics to develop compounds that can be used with photodynamic therapy to treat age-related macular degeneration, a debilitating disease that leads to blindness. Ciba Vision’s sustaining business is focused on improving their hard contact, extended wear, and daily disposable lenses, which are typically sold directly to end users. Visudyne, by contrast, uses fundamentally different technology to slow macular degeneration; it is a pharmaceutical product that is sold to ophthalmologists. Both the Walkman and Visudyne have proven to be very successful for their companies, establishing entirely new value networks and entirely new markets.

Christensen and Raynor (2003) provide numerous examples of start-ups that have succeeded in implementing incremental and architectural innovations with new value networks to access an entirely new market. It is, they demonstrate, quite difficult for established companies to gain management support for the development of an entirely new business in a market that is yet to be defined. As a result, we suspect that new business model development in this area will be driven by new companies.

Conclusion

The BMIT illustrates how and why businesses behave differently in the two innovation zones—sustaining innovation, where established firms typically succeed, and business model innovation, where they frequently fail. The challenges of business model innovation are shaped by the scope and target of the innovation. Established firms will find both rewards and considerable risks in developing new value networks to reach existing consumers who are not yet customers. The challenges faced in building a value network to reach nonconsumers are quite different from those encountered in other types of innovation projects. Business model innovation represents a new frontier in innovation beyond just product or service innovation.
However, it challenges most established firms to the core of their organization and culture and has proven very difficult for many companies. Developing a new business model requires organizations to develop new skills and at times reject the thinking that has led them to success in their sustaining businesses. The BMIT provides a framework within which established companies may understand the different kinds of business model innovation and the organizational challenges associated with each type.

References

Christensen, C. M. 1997. The Innovator’s Dilemma: When New Technologies Cause Great Firms to Fail. Boston, MA: Harvard Business School Press.
Christensen, C. M., Anthony, S., and Roth, E. 2004. Seeing What’s Next. Boston, MA: Harvard Business School Press.
Christensen, C. M., and Raynor, M. E. 2003. The Innovator’s Solution. Boston, MA: Harvard Business School Press.
Christensen, C. M., and Rosenbloom, R. S. 1995. Explaining the attacker’s advantage: Technological paradigms, organizational dynamics, and the value network. Research Policy 24: 233–257.
Cooper, R. G. 2001. Winning at New Products: Accelerating the Process From Idea to Launch. 3rd ed. Cambridge, MA: Perseus.
Gary, L. 2004. Dow Corning’s push for organic growth. Strategy & Innovation 2(6): 1–5.
Gilbert, C. 2003. Mercury rising: Knight Ridder’s digital venture. Harvard Business School Case 9-803-107. Cambridge, MA: Harvard Business School.
Govindarajan, V., and Trimble, C. 2005. Building breakthrough businesses within established organizations. Harvard Business Review 83(5): 58–68, 152.
Grossman, L. 2005. Microsoft: Out of the XBox. [Online exclusive.] Time Magazine, May 15. http://www.time.com/time/magazine/article/0,9171,1061497,00.html (accessed February 14, 2010).
Henderson, R. M., and Clark, K. B. 1990. Architectural innovation: The reconfiguration of existing product technologies and the failure of established firms.
Administrative Science Quarterly 35: 9–30.
Koen, P. A., Bertels, H., Elsum, I. R., Orroth, M., and Tollett, B. L. 2010. Breakthrough innovation dilemmas. Research-Technology Management 53(6): 48–51.
March, J. G. 1991. Exploration and exploitation in organizational learning. Organization Science 2(1): 71–87.
Markides, C., and Charitou, C. 2004. Competing with dual business models: A contingency approach. Academy of Management Executive 18(3): 22–36.
O’Connor, G. C., Leifer, R., Paulson, A. S., and Peters, L. S. 2008. Grabbing Lightning: Building a Capability for Breakthrough Innovation. 1st ed. San Francisco, CA: Jossey-Bass.
O’Reilly, C. A., and Tushman, M. L. 2004. The ambidextrous organization. Harvard Business Review 82(4): 74–81.
O’Reilly, C. A., Harreld, J. B., and Tushman, M. L. 2009. Organizational ambidexterity: IBM and emerging business opportunities. California Management Review 51(4): 75–99.
Schestowitz, R. 2009. Microsoft XBox group still operates at a loss, XBox director quits. [Blog post, May 3.] Techrights. http://techrights.org/2009/05/03/microsoft-xbox-failure-departure/ (accessed February 14, 2010).
Wheelwright, S. C., and Clark, K. 1992. Revolutionizing Product Development: Quantum Leaps in Speed, Efficiency, and Quality. New York: The Free Press.

REVISING THE CONCEPTUALIZATION OF COMPUTERIZATION MOVEMENTS

Noriko Hara
School of Library and Information Science
1320 E. 10th Street, LI 016, Indiana University
Bloomington, IN 47405-3907
(812) 855-1490
nhara@indiana.edu

Howard Rosenbaum
School of Library and Information Science
1320 E.
10th Street, LI 016, Indiana University
Bloomington, IN 47405-3907
(812) 855-4350
hrosenba@indiana.edu

April 9, 2007

Abstract

One interesting problem arising from Kling and Iacono’s pioneering work on computerization movements (CMs) is the question of empirically determining a movement’s success or failure. This paper questions the question and argues that it is based on two assumptions that upon closer examination seem problematic. The first is that Kling and Iacono’s concept of a CM is sufficient to cover the range of CMs. Their approach to CMs is explicated, pointing out three ways in which it is limited, concluding that it should be reconceptualized. The second is that CMs are similar enough so that a single set of criteria is sufficient to judge the success or failure of any given CM. Using a heuristic analysis to examine a set of 41 CMs, a typology is introduced demonstrating that there are important differences among CMs. The paper concludes that since a single set of criteria is no longer appropriate, different sets of criteria are needed to evaluate the success or failure of different types of CMs.

Keywords: computerization movement, typology, criteria for success and failure

Running head: Revising computerization movements

1. INTRODUCTION

In four articles in the 1990s, Kling and Iacono (1994; Iacono and Kling, 1995, 2001; Kling, 1996) introduced the concept of computerization movements (CM), defined as “a kind of movement whose advocates focus on computer-based systems as instruments to bring about a new social order” (Kling and Iacono, 1994; 3). At the center of CMs, discussed below, are core technologies (or sets of technologies) that are seen as “products of social movements rather than only as the products of research labs and industrial firms” (Kling, 1996; 1).
This work is important because it places information and communication technologies (ICT) into a larger and more complex macro-sociological context, a move that presaged Kling and colleagues’ recent work in social informatics (Kling, Rosenbaum and Sawyer, 2005). The concept remains important as an analytic tool that can be used to understand the trajectories of different types of ICTs because it directs research attention to the web of social factors and actors involved in the rise and fall of ICTs in the marketplace and in the society.

There has been a steady stream of research on computerization over the last decade. For example, Hess (2005) proposes a concept similar to that of Kling and Iacono called technology-oriented and product-oriented movements. Agar (2006) examines the role computerization played in the changes that occurred in scientific work when computers were introduced in the 1950s. Lavoie and Therrien (2005) describe the diffusion of a combination of hardware and software and argue that these technological developments are responsible for the increasing pervasiveness of computerization over the last three decades. Moon, Day, Suen, Tse, and Tong (2005) study the effects of computerization on Hong Kong Chinese medical practitioners. Baldwin (2006) looks at the discourse surrounding computerization in the Public Archives of Canada during the 1960s and argues that automation, as a component of information retrieval, and the preservation of computer records are issues discussed then that remain relevant today. Bedard (2006) looks at computerization in civil engineering, focusing on the adoption of ICT by the profession and the current status of computer and IT use. What this research shares is a narrow conception of computerization as the adoption of computers and the diffusion of IT tool use.
A notable exception is Hedstrom (2004), whose study of “Caresys,” a medical information system, is based in an analytic framework that foregrounds the socio-political processes involved in computerization. This research indicates that computerization remains an important concept for those studying the impacts of computers on organizations, work, and social life. However, this paper argues that this research can be deepened and made empirically richer by considering computerization movements, instead of simply computerization.

An important step in this direction was taken at a 2005 NSF-sponsored workshop held at the Center for Research on Information Technology and Organizations (CRITO, 2005) that brought together a group of scholars who presented 25 papers on various aspects of CMs; papers from the workshop have been published as an edited collection (Kraemer & Elliott, 2007) and a special issue of The Information Society. Computerization movements are still very much in vogue and are even being promoted by the federal government in their e-government and e-democracy campaigns. However, there is a dearth of studies of how CMs form, how they mobilize support for computerization of specific technologies, how the Internet has influenced the creation and persistence of particular CMs, and whether or not a particular CM has succeeded or failed. This last issue was among those raised at the CRITO workshop – by what criteria can the outcomes of a given CM be assessed?

In this paper we address the problem of developing a set of criteria that can be used to determine whether a computerization movement (CM) has been a success or a failure. As explained in the workshop’s call for papers:

Not all movements are successful. Success has been variously defined as social acceptance, accrual of new advantages, creation of new social acceptance, the creation of new social policies, or the implementation of new laws (Gamson, 1975).
Others argue that the most important outcome of a social movement is a shift in public perception. What criteria can researchers use to analyze a CM in terms of success or failure? (Kraemer, 2004).

Our contention is that this question cannot be adequately addressed without prior questioning and analysis of two assumptions about CMs that upon closer examination seem problematic. The first is that the conception of a CM articulated by Kling and Iacono and adopted uncritically by others is sufficient to cover the range of CMs. The second is that CMs are similar enough so that a single set of criteria may be developed and deployed in the analysis of the success or failure of any given CM.

In the first section of this paper, the first assumption is questioned; this involves explicating Kling and Iacono’s conception of CMs and pointing out three ways in which it is limited: it has an organizational bias, involves a classification error, and overemphasizes the importance of technological utopianism at the core of a CM’s ideology. With these limitations removed, the conception of CMs expands to include those originating outside of organizations or that are negatively perceived in the public discourse. In the second section, the second assumption, that CMs are of a single type, is challenged and a more nuanced definition is proposed that introduces five criteria that can be used to sort CMs into groups. In the third section the expanded conception is used in a heuristic analysis that examines a set of 41 CMs, dividing them into distinct categories on the basis of the set of criteria derived from the analysis in the second section of the paper. Based on the insight that there are important differences among CMs that can be used to distinguish among them, this analysis shows that there are groupings of CMs that do not share many characteristics, meaning that a search for a single set of criteria to evaluate success or failure is no longer appropriate. The final section asserts
The final section asserts that a more useful way to approach the original question is to develop different sets of criteria for evaluating different types of CMs; this claim is illustrated by considering the means by which to assess the success or failure of two widely diverging groups of CMs that emerged from the heuristic analysis.

2. ON THE MEANING OF COMPUTERIZATION MOVEMENTS

The focus of the paper is on the analyses of computerization movements found in Kling and Iacono (1995), Iacono and Kling (1996), and Iacono and Kling (2001). A close reading reveals that their analysis is grounded in a sociotechnical concept of computerization and a classic sociological conception of a social movement. After a brief discussion of the role these ideas play in the conception of CMs, Kling and Iacono’s original conception is presented, three limitations are discussed, and an expanded conception is proposed.

2.1 What is Computerization?

Computerization is first and foremost a social process that unfolds as information and communication technologies are brought into social and organizational settings and integrated into social and work practices. It is enacted in organizational settings through the “social choices about the levels of appropriate investment and control over equipment and expertise, as well as choices of equipment” (Kling and Iacono, 1995; 119). Control over access to and support for ICTs are key elements when “developing, implementing and using computer systems for activities such as teaching, accounting, writing, or designing circuits” (Iacono and Kling, 1995; 119). Here the influence of Kling and Scacchi’s (1988) concept of the “web of computing” can be seen, as computerization is extended to cover a wide range of sociotechnical activities that are necessary to support the work people do with ICTs. 
Because computerization is “deeply embedded in social worlds that extend beyond the confines of any particular organization or setting,” it is shaped by larger social forces (Kling and Iacono, 1995; 122). It therefore “has important social and cultural dimensions that are often neglected in discussions of the rise of computer-based technologies and networking” (Iacono and Kling, 1996; 87). For example, large-scale computerization projects may experience conflict because of the threat that such projects pose to organizational structures and functions and to the organizational actors who stand to lose power, resources and influence (Iacono and Kling, 1996; 90). These insights are imported almost without change into their conception of a CM.

2.2 What is a Computerization Movement?

Kling and Iacono’s conception of a computerization movement is strongly influenced by Blumer’s (1951; 8) treatment of social movements as collective enterprises, particularly the idea that: As a social movement develops, it takes on the character of a society. It acquires organization and form, a body of customs and traditions, established leadership, an enduring division of labor, social rules and social values – in short, a culture, and a new scheme of life. Iacono and Kling (1995; 122) make use of Blumer's concept because they can then “consider elements relevant to computerization that other, narrower conceptions would rule out,” such as collective action. Iacono and Kling (2001) also draw on Snow et al. (1986), who argue that a social movement is embedded within larger social processes that involve a “struggle over the production and counter-production of ideas and meanings associated with collective action” (Iacono and Kling, 2001; 98). This allows them to describe a CM as a social movement that develops around one or more core ICTs and depends for its growth on a “socially constructed process of societal mobilization” (Iacono and Kling, 2001; 97). 
It has six main characteristics:

• A core ICT;
• Organizational structures (computer movement organizations);
• A historical trajectory;
• Organized opposition (computerization countermovements);
• Collective action, particularly technological action frames and public discourse, ideology and myths (revolutionary and reform), and organizational practices; and
• Two main types: general and specific.

A full treatment of each of these components is beyond the scope of this paper; the first four will be discussed briefly, and two, collective action and types of CMs, will be singled out because they are central to the argument that the conception has three limitations. These components are generic characteristics that all CMs share (with the possible exception of organized opposition), but, for reasons outlined below, they are not sufficient to generate a single set of criteria that can be used to evaluate the success or failure of a given CM. Clearly, core ICTs, such as supercomputers or cell phones, are essential components of a CM; however, since Kling and Iacono’s conception focuses more on CMs’ social, cultural, and organizational components, ICTs will not be discussed here (see Appendix A for examples of ICTs around which CMs have formed). CMs have organizational structures whose existence is important to their ability to persist over time. These structures, called computer movement organizations (CMOs), are “organizations or coalitions of organizations … [that] … generate resources, structure membership expectations, educate the public and ensure the presence of recognized leaders who can lend their prestige and interorganizational connections to the movement” (Iacono and Kling, 1996; 91). Within CMOs are organizational structures, leadership roles, divisions of labor, and resources, among which people (members and leaders) and communication networks are especially important. 
These structures allow people to engage in collective social action where “they can raise money, mobilize resources, hold meetings and formulate positions” (Iacono and Kling, 1996: 91). Among the most important artifacts of participants’ work and interactions are the ideologies and supporting discourses that define and shape the movement and public perceptions of it. The dominant technological frames (described below) are formed, shaped, and shared by people using these communication networks. CMOs serve the purpose of “amplifying current problems, interpreting events, and emphasizing the advantages of a transformed social order” (Iacono and Kling, 1996; 92). Like social movements, CMs have historical trajectories: they originate in particular socio-historical times and places, gather momentum, and then follow one of several paths. For example, a CM can emerge, gain momentum, and become successful in what appears to be a linear path of increasing influence and impact. It can also emerge, gain momentum, falter, become stagnant, and then revive. The roles that CMOs play and the effects they can have on trajectories change over time as the CM’s social impact waxes and wanes. Two key activities that must be enacted are the recruitment of new members into the CM and the continued support and dissemination of the discourse about the core ICT. In the trajectory of a particular CM, CMOs may find themselves working together at times and in opposition at other times; in addition, the number and types of CMOs that take part in a CM may vary over time “due to resource availability and historical conditions” (Iacono and Kling, 1996; 91). The distinction that Kling and Iacono make between general and specific types of CMs is the first component that can be used to illustrate two limitations of the original conception. A general CM is a macro-scale phenomenon whose core ICTs can transform entire societies. 
Their primary example of a general CM is “internetworking” (Iacono and Kling, 2001; 107). Specific CMs share some features of the general CM, are distinct from it, may only be loosely linked to each other and, taken together, form the general CM (Kling and Iacono, 1994; 3). Specific CMs are middle-level phenomena that can be seen in different organizations, connecting local practices in organizations and groups to external developments. As described above, there is a set of generic characteristics specific CMs share by virtue of being a CM; as will be argued below, a more nuanced approach to distinguishing among CMs will provide a basis for evaluating the success or failure of a given CM. This distinction points to the first limitation in the conception of CMs: a bias towards organizational CMs. Most of the examples provided in Kling and Iacono’s work are found within organizations, including four of the five singled out as exemplary: urban information systems, artificial intelligence, computer-based education, office automation, and personal computing (Kling and Iacono, 1995; 121).1 This bias is reinforced by the statement that CMs depend on organizational practices, which are “the ways in which individuals and organizations put technological action frames and discourses into practice as they implement and use technologies in their micro-social contexts” (Iacono and Kling, 2001; 100). The first sense in which the concept can be usefully extended is by considering that some CMs are enacted outside of organizations; there are clearly ICTs that are developed, implemented, and used to support activities that routinely take place in social worlds. Computerization does not have to be bounded by organizational structures, and CMs can certainly develop around ICTs outside of organizations. 
Peer-to-peer networking and the open source software movement are examples of CMs that can be accounted for within Kling and Iacono’s conception except for the fact that they primarily take place outside of organizational settings. Like an organizational CM, the use of these movements’ core ICTs involves social choices about levels of investment and control over equipment and expertise, and is enmeshed in larger social processes including legal and regulative activities.

1 Personal computing, however, no longer seems to be a specific CM because of its widespread adoption and routinization in social and organizational life. It has become a pervasive phenomenon that takes its place alongside internetworking as a second general CM.

The distinction between general and specific CMs also reveals a second limitation. Kling and Iacono’s dichotomization of CMs gathers together a diverse range of CMs and treats them as being of a kind. According to Kling and Iacono (2001), there are only two types of CMs, general and specific; further, there is only one general CM, internetworking, and all the rest belong to the residual category of specific CMs. Accepting this classification means that it is not necessary to group the range of CMs into finer grained categories. So, for example, the CMs mentioned in the 2005 CRITO workshop call for papers (see Appendix A) must all be seen as examples of specific CMs. If the assumption is carried forward that all CMs that are not general are specific and there are no further distinctions to be made, then the call for a generic set of criteria that can be developed to evaluate the success or failure of all CMs is sensible. If this assumption is rejected, and it can be shown that there are finer distinctions to be made, then the search for a single set of criteria is no longer appropriate. In sections 3 and 4, an analysis of CMs is conducted that shows that they can be grouped into distinct sets of clusters. 
With these finer distinctions, the original conception of CMs is made more complex but, hopefully, in ways useful for theorizing and research. The third limitation is illustrated by examining the role of collective action in CMs. For their vitality, viability, and potential for social impact, CMs depend on the collective action of CMOs, members, supporters, researchers, journalists, advocates, pundits, vendors, adopters and others. One of the fundamental forms of collective social action that drives a CM is the shaping of public discourse about its core ICTs by those who write and speak about them and those who study them and publish their work. Central to this shaping is the concept of “framing” (Snow et al., 1986), a process by which social meanings are constructed, disseminated, and stabilized in discourse. CMs are built around technological action frames (Bijker, 1997) that describe the socially constructed meanings ascribed to specific technologies, “tying together relevant social actors and the particular ways in which they understand a technology as ‘working’” (Iacono and Kling, 2001; 99). This is a useful idea because framing “describes the actions and interactions of actors, explaining how they socially construct a technology” (Bijker, 2001: 15526). The technological action frame outlines the key problems the technology addresses, the problem solving strategies that can be invoked, and the range of acceptable problem resolutions. It also includes theories that can be used to develop the core ICTs, the tacit knowledge that supports their implementation and uses, the range of practices that prescribe their uses, and the exemplary artifacts that represent the ICTs’ output. The elements of the frame become the raw materials people draw upon in the social construction of a CM. These become the content of the movement’s ideology and of public perceptions about the CM’s core ICTs. 
The importance of the ideology embedded in a CM’s technological frame is the key to the third limitation in the conception. This ideology is a deeply held belief that the CM’s core ICTs can cause fundamental positive social change; what differs is the scope of the change, which can be societal or within a restricted domain. Utopian ideology, more specifically technological utopianism, is a central element of CMs, and it provides a set of longer-term goals that people can use to identify with the movement (Iacono and Kling, 1996; 92); in fact, Kling and Iacono (1994) explain that “it is simpler to characterize the ideologies of CMs as forms of ‘technological utopianism’.” There are five main themes that constitute the utopian core of CMs (Kling and Iacono, 1995; 137):

1. Computer-based technologies are central for a reformed world;
2. Improved computer-based technologies can further reform society;
3. More computing is better than less, and there are no conceptual limits to the scope of appropriate computerization;
4. No one loses from computerization; and
5. Uncooperative people are the main barriers to social reform through computing.

What matters is not whether participants in a CM can verify their truth claims about the relationship between the core ICT and social change “but that they selectively ‘frame’ or provide an ‘interpretive schema’ by which disparate social groups and organizations can understand and interpret the meaning of the [CM] for their own social contexts and practices” (Iacono and Kling, 2001; 94). Fueled by this ideology, actors in a CM can engage in “organized, insurgent action to displace or overcome the status quo and establish a new way of life” (Iacono and Kling, 1996; 90). As a consequence, technological frames and the public discourse of which they are a part may actually “misrepresent actual practice for long periods of time” (Iacono and Kling, 2001; 101). 
The assumption that a CM’s ideology is oriented towards positive social change certainly can be found at the center of many CMs. However, there are CMs that are perceived in the public discourse as negative or destructive in their intent, such as those involving spammers and virus writers. If the conception is extended to include these cases, the central assumption about the nature of a CM’s ideology must also be altered. The expanded conception of a CM offered here does not rely on the two problematic assumptions found in Kling and Iacono’s version: it is sufficiently broad to include a wide range of CMs and allows for more finely grained groupings of CMs, acknowledging that they are not all of a type. This is an improvement on the original version in three ways. First, by loosening the organizational constraints, CMs that emerge outside of organizational boundaries and that do not involve organizational practices can be accommodated. Second, the grouping of CMs into general and specific types is seen as only a starting point for a more nuanced analysis of the characteristics of CMs. Third, the ideology at the center of CMs does not have to be technologically utopian. CMs do not have to have as a main driving force the desire to change society for the better, although this more dystopian type of movement will admittedly be in the minority. The next step in the argument is to demonstrate that there are different types of CMs and that these types may share few characteristics. In the sections to follow, the expanded conception is used to analyze a set of 41 CMs and develop a typology based on a set of characteristics that can be used to describe and differentiate CMs to a degree not possible given the original conception explicated above.

3. A TYPOLOGY FOR ANALYZING CMs

The first step in developing a typology for analyzing CMs was to develop the sample of CMs. 
Beginning with the original list of 34 CMs included in the call for papers for the NSF-sponsored CRITO Social Informatics workshop (see Appendix A for the list), widely-perceived negative CMs were added to the list and ones that did not fit the conception of CMs explained in the previous section were eliminated, resulting in a set of 41 CMs. As will be demonstrated below, the typology accounts for all 41 CMs in the sample set; the stability and generalizability of the scheme will have to be tested with CMs not included in the sample. The second step was to derive a set of criteria that could be used to analyze this set. This was accomplished by compiling a list of characteristics of the CMs in the set and reducing them to a small number of criteria shared by the entire set. These categories emerged from a bottom-up and inductive analysis of popular discourses about the set of CMs. The result of this exercise was a list of five ways of categorizing CMs that were used to generate criteria for determining the relative success or failure of any given CM. Because the core ICT of a given CM may be used for different purposes, and because the discourses about the CM at any given point in time may be heterogeneous, it is difficult to categorize CMs using binary choices. For example, a typical CM is neither completely stand-alone nor completely bundled; it is more likely to be somewhere between these two poles. Hence, these criteria took the form of paired terms and were formulated as criteria pairs, with each member of the pair as the endpoint of a continuum along which each of the CMs was subsequently positioned during coding. Using this approach, one CM could be coded as closer to being fully market-driven while another could be closer to the midpoint between market-driven and non-market-driven. The five criteria pairs are:

• external – internal;
• market-driven – non-market-driven;
• wide – narrow;
• stand-alone – bundled; and
• positive – negative. 
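The continuum coding described above can be sketched in code. The following is a minimal illustration, not the paper's actual instrument: each CM is represented as a position in [0.0, 1.0] on each criterion-pair continuum, and a helper collapses those positions to pole labels (as would be needed later for a binary-context analysis). The CM names and numeric values are hypothetical stand-ins, not the authors' data.

```python
# Hypothetical sketch of the five criteria-pair continua. A value of 0.0
# means the first pole of the pair; 1.0 means the second pole.
CRITERIA = [
    "internal-external",
    "marketdriven-nonmarketdriven",
    "wide-narrow",
    "standalone-bundled",
    "positive-negative",
]

# Illustrative codings only; these numbers are assumptions for the sketch,
# not the positions assigned in the study.
codings = {
    "e-government": {
        "internal-external": 0.5,        # near the midpoint, per the text
        "marketdriven-nonmarketdriven": 0.4,
        "wide-narrow": 0.3,
        "standalone-bundled": 0.9,       # heavily bundled
        "positive-negative": 0.2,
    },
    "instant messaging": {
        "internal-external": 0.9,        # near the external pole
        "marketdriven-nonmarketdriven": 0.3,
        "wide-narrow": 0.2,
        "standalone-bundled": 0.1,       # essentially stand-alone
        "positive-negative": 0.1,
    },
}

def binarize(coding, threshold=0.5):
    """Collapse continuum positions into pole labels, e.g. to build a
    binary object-attribute context for a later lattice analysis."""
    poles = set()
    for criterion, value in coding.items():
        first, second = criterion.split("-")
        poles.add(second if value >= threshold else first)
    return poles

print(binarize(codings["instant messaging"]))
```

A continuum representation like this preserves the "neither fully one pole nor the other" quality the authors emphasize, while `binarize` shows the information lost when the coding is forced back into binary categories.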
The first criterion pair used in differentiating CMs is derived directly from the expanded definition of a CM offered above and focuses on whether or not specific examples of computerization occur within organizational boundaries. This criterion pair is labeled internal – external. The determination is based on users’ rather than developers’ perspectives because the majority of CM development occurs within organizations. The exception, perhaps, occurs in the case of open source software or peer-to-peer networking, but the use of CMs takes place both inside and outside of organizations. As a CM, knowledge management (KM) is principally an organizational system generally implemented in corporate settings. Thus, the infrastructures shaping KM as a CM exist mainly within organizational boundaries, and the discourses about KM and the activities of advocates, pundits, and others are aimed primarily at organizational audiences. As argued above, Iacono and Kling (1996) limit the focus of the original conception to this type of organizationally-bound CM. However, with this characteristic in mind, it is possible to rethink the nature of a CM such as e-government. Online tax forms are developed within organizational boundaries; however, those who use these e-government applications do so primarily outside of the organization. In this case, e-government is categorized as a CM that mostly exists external to organizational boundaries and was coded as being near the midpoint of the internal – external continuum. Other types of CMs exist wholly outside of organizational boundaries, e.g., instant messaging, which was coded near the external pole of the internal – external continuum. The infrastructure that supports such CMs is not present in a structured fashion but is dispersed throughout the wired part of the social world. 
For example, peer-to-peer computing is a boundary-less example of computerization that is driven by scattered individuals, i.e., it does not exist within an organizational structure. BitTorrent, a peer-to-peer networking application, was developed by a programmer working at home and distributed across the net as an open source program. As an example of a core ICT in a CM, BitTorrent is clearly outside of organizational boundaries both in terms of development and use. As computerization has become a vital part of daily life in many developed countries, CMs such as these, existing outside of organizations, need to be carefully examined. The second characteristic of the typology considers the position of the CM with respect to the marketplace. This criterion pair is labeled market-driven – non-market-driven. Some CMs gain momentum through vendors’ marketing and advertising, the advocacy of writers in the trade press, and others who have a commercial interest in the core ICT’s success. CMs such as interactive TV, e-learning, e-commerce, and spamming are all driven by financial and marketplace concerns. In addition, some CMs, like e-mail, may in time become commodities, while other CMs, such as artificial intelligence and ubiquitous computing, are still in the research and development stage and have not yet been commercially marketed. If they become full-blown CMs later in their trajectories, they could become market-driven. Other CMs operate outside of this nexus. CMs such as paperless courts, virtual reality, and hacking exist outside of the constraints of the marketplace and are not heavily market-driven. The third characteristic is based on the scope of CMs. This criterion pair is labeled wide – narrow. According to Kling and Iacono (1995), general CMs can change entire societies, while specific CMs have impacts in more restricted domains. This is a useful distinction when it is one among many and not the sole way to differentiate CMs. 
This criterion pair is used to ascertain the breadth of a CM’s impact along a continuum ranging from those affecting a majority of the population (wide) to those affecting a small and specialized group (narrow). For example, blogs, mobile technologies, and spam have had an impact on the general public and have thus changed the way people work and interact with each other. These are categorized as relatively wide CMs and were coded near the wide pole of the wide – narrow continuum. E-science, on the other hand, is a fairly constrained CM that encourages discourses among a specialized population and was coded near the narrow end of the continuum. Though the scope of e-science is rather small, it is a CM because technological action frames have arisen around it, it has engendered various types of discourses, and it involves a range of computerization practices within a range of organizations. The fourth characteristic identifies the nature of a CM’s core ICT as either a single technology or a set of related and interlinked technologies. This criterion pair is labeled stand-alone – bundled. Some CMs develop around a single core ICT, such as instant messaging (IM), even though IM is operated via personal computers, cell phones, PDAs, etc. This type of CM is “stand-alone.” Other CMs develop around a configuration of linked technologies, as in the case of e-government, which requires the interoperation of a range of ICTs. According to the World Bank Group (2004; home page): E-Government refers to the use by government agencies of information technologies (such as Wide Area Networks, the Internet, and mobile computing) that have the ability to transform relations with citizens, businesses, and other arms of government. Based on this definition, it is clear that e-government comprises core ICTs that are bundled together and appear to the person using them as a single application. 
Other similar CMs include digital libraries, CSCW, and surveillance technologies, which use a variety of technology applications to support computerization. The fifth characteristic is perhaps the most subjective. This criterion pair is labeled positive – negative. Kling and Iacono (1995; 148) claim that “it is simpler to characterize the ideologies of CMs as forms of ‘technological utopianism’,” affirming that their conception of CMs assumes that a movement will have a primarily positive impact on society. However, as argued above, CMs can be categorized according to whether they have positive or negative impacts on society. It is the determination of the location of a given CM along the positive – negative continuum that is subjective. In the analysis reported below, the evaluation of public discourse was the primary strategy for determining whether the ideology of a given CM carried utopian or dystopian connotations (or perhaps both). In other words, if the majority of the discourse about blogs among the general public reflected a perception that blogs had generally positive impacts, they were coded as being near the positive pole. While positive CMs have been the primary examples in the original list, some negative CMs have also emerged, such as spamming, virus writing, hacking, identity theft, and online stalking. Though a small handful of people may consider writing viruses a positive act because viruses have the potential to strengthen immune systems (Thomson, 2004), the majority of the population perceives viruses as negative. Expanding the sample set of CMs to include those that are perceived to have negative impacts creates a more holistic approach toward the evaluation of CMs.

4. METHODOLOGY

Using the characteristics described above, the set of 41 CMs was classified using heuristic analysis (Nielsen, 1994) and the five criteria pairs. 
This method is generally part of usability testing and involves the evaluation of artifacts or phenomena by experts using a list of usability principles. The goal of this method is to uncover problems in interface design. Heuristic analysis was appropriate here because the task at hand required the assessment of the set of CMs using the list of criteria pairs. The researchers used heuristic analysis to independently code all 41 CMs, placing each CM at a point on the continuum for each criterion pair, after which the results were compared. Two of the categories were rather straightforward to code: stand-alone – bundled and market-driven – non-market-driven. Nevertheless, issues arose when considering changes that had occurred in the trajectories of certain CMs. For instance, there were differences between the authors’ coding of digital photography. When discussing the diverging placements of this CM along the stand-alone – bundled continuum, it became clear that when digital photography was first introduced into the marketplace, it would have been categorized as clearly “stand-alone” because it was operated primarily on one device, i.e., the digital camera. However, in recent years, digital photography has expanded to include other media, including online photo-sharing websites, e.g., Flickr. The differences were resolved, and digital photography was coded much closer to the fully bundled pole of the stand-alone – bundled continuum. A similar difficulty arose with two other criteria pairs, wide – narrow and internal – external. At one time, some CMs might be coded towards the narrow pole of the continuum, but at a later point in time, when they have become more popular, they might be coded towards the wide pole. Similarly, if a CM originated inside an organization, it would be coded towards the internal pole of the continuum; at a later point in time, if the CM has diffused throughout society, it would be coded towards the external pole. 
In order to code the last category, positive – negative, general discourses regarding specific CMs were analyzed. Gee (1999; 17) refers to “big D Discourse” as “socially accepted associations among ways of using languages, of thinking, valuing, acting, and interaction,” as opposed to “small d discourse,” which is concerned with the use of languages in specific contexts. The coding is a result of the researchers’ agreement about the general public’s perceptions of certain CMs as they appear in media and more general discourses. Overall, there was not much disagreement within this category. Each CM was coded by analyzing popular discourses surrounding it; upon completion of the coding, the reasons for positioning each CM at particular points along the continua were discussed until a consensus was reached. The results of coding the sample set of CMs are shown below in Figure 1: Visualization of the Classification of 41 Computerization Movements. The numbers arrayed along the continua represent all 41 CMs in the sample set. The number for each CM in the set appears once on the continuum stretching between the poles for each of the five criteria. Because of the limitations of this type of graphic, it is not possible to display the coding results with complete accuracy, as can be seen in the instance where as many as ten of the CMs were placed at the same point at the positive end of the positive – negative continuum. Therefore, numbers that are stacked on top of each other or are close in proximity are grouped together.

[Figure 1 here]

As is evident from even a cursory examination of Figure 1, it is extremely difficult, if not impossible, to identify distinct groupings of CMs based on an aggregate view of the heuristic analysis. Even the simplest combinations of criteria would require the examination of 120 possible orderings of the five criteria (i.e., 5 × 4 × 3 × 2 × 1). 
Thus, a technique called Formal Concept Analysis (FCA) was used to discover the presence of clusterings of CMs using the five criteria pairs in combination; for the purposes of the analysis, each criterion pair was considered a category. FCA is “a theory of data analysis which identifies conceptual structures among data sets” (Priss, 2003; home page). Originally developed by Rudolf Wille (1982), FCA employs the visual representation of the interconnectedness of data sets to analyze relationships using mathematical lattices (see Hara (2002) and Priss (2006) for more detailed explanations of FCA). FCA has been used for information retrieval, knowledge representation and discovery (i.e., classification), and logic and artificial intelligence (Priss, 2006). In this paper, it is used to reveal the patterns of combinations of categories based on the placement of the CMs on each criterion pair. The results in Figure 1 were converted to a format that can be processed by a Java-based open source application called Concept Explorer version 1.2 (sourceforge.net/projects/conexp) to generate a lattice. An examination of the lattice revealed two sets of clusters with categories that did not overlap: one set included two clusters that had no overlapping categories except for bundled – narrow; the other set included two clusters that did not overlap except for the fact that they were all positive CMs (see Appendix B). These two sets of clusters were then mapped back to the continua as shown in Figure 2: Two CM clusters without overlap except for bundled – narrow, and Figure 3: Two CM clusters without overlap except for positive – negative.

5. FINDINGS: DISTINCT CLUSTERS FOR SETS OF CRITERIA PAIRS

The coding of 41 CMs using the five criteria pairs revealed the usefulness of identifying characteristics of CMs that reflect the expanded definition offered above. 
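The core operation that FCA tools such as Concept Explorer perform can be sketched as a brute-force enumeration of formal concepts over a binary object-attribute context: a formal concept is a pair (A, B) where the objects in A share exactly the attributes in B, and B picks out exactly the objects in A. The context below is a hypothetical toy stand-in (four CMs, pole labels as attributes), not the study's actual coding.

```python
from itertools import combinations

# Hypothetical binary context: CMs (objects) x criterion poles (attributes).
# These codings are illustrative assumptions, not the paper's data.
context = {
    "hacking":          {"external", "nonmarket", "narrow", "bundled", "negative"},
    "online stalking":  {"external", "nonmarket", "narrow", "bundled", "negative"},
    "office computing": {"internal", "market", "narrow", "bundled", "positive"},
    "e-mail":           {"external", "market", "wide", "standalone", "positive"},
}
attributes = set().union(*context.values())

def extent(attrs):
    """All objects possessing every attribute in attrs."""
    return {obj for obj, a in context.items() if attrs <= a}

def intent(objs):
    """All attributes shared by every object in objs."""
    if not objs:
        return set(attributes)
    return set.intersection(*(context[obj] for obj in objs))

def formal_concepts():
    """Brute-force enumeration: close every attribute subset and keep
    each distinct (extent, intent) pair. Feasible for small contexts."""
    found = []
    for r in range(len(attributes) + 1):
        for combo in combinations(sorted(attributes), r):
            b = intent(extent(set(combo)))  # closure of the attribute set
            a = extent(b)
            if (a, b) not in found:
                found.append((a, b))
    return found

# The concepts' extents are the candidate clusters; e.g. hacking and
# online stalking share all five coded poles and so form one concept.
for a, b in formal_concepts():
    print(sorted(a), "<->", sorted(b))
```

In this toy context, the pair ({hacking, online stalking}, {external, nonmarket, narrow, bundled, negative}) emerges as one concept, mirroring how the lattice analysis in the paper surfaced clusters of CMs with non-overlapping category profiles; production FCA tools compute the same concepts far more efficiently and arrange them into a lattice diagram.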
The use of a set of five criteria pairs to classify CMs departs from Iacono and Kling's (1996) original conception by providing a finer-grained level of detail to what they called "specific CMs," a largely undifferentiated catch-all category. Two of these criteria pairs expand the original conception by adding new dimensions: internal – external and positive – negative. In the original formulation, the concept does what it was intended to do, primarily covering movements enacted within organizational boundaries. As Internet penetration rates have increased, more people have begun to develop and use ICTs outside of organizations, leading to new forms of CMs. Thus, the category that expands the original conception to include CMs external to organizations is a noteworthy contribution. Second, by including a positive – negative criterion pair, the typology accounts for the fact that the ideologies of CMs are no longer restricted to utopian concepts. These two extensions of the concept enrich the theoretical framework of CMs.

Furthermore, the outcomes of coding reveal some distinct clusters that involve diverging sets of criteria pairs. Figure 2 illustrates two distinct clusters that do not overlap except for being bundled; both are also on the narrow side of the wide – narrow continuum, but half are near the midpoint of that continuum, indicating much less overlap along this category. In other words, the remaining categories (market/non-market, internal/external, and positive/negative) are clearly different from each other. One cluster contains hacking and online stalking. The other cluster contains office computing, knowledge management, e-medicine, computer-supported cooperative work, computer-based education, and telecommuting. Figure 3 illustrates two distinct sets of clusters that do not overlap except as positive CMs. One cluster contains e-health, e-mail, instant messaging, and blogs.
The second cluster contains artificial intelligence, expert systems, paperless courts, virtual reality, digital libraries, warware, ubiquitous computing, and e-science. These examples demonstrate that subsets of CMs can be clustered into distinct groupings based on the five categories.

[Figure 2 here]

[Figure 3 here]

Based on the results of the analysis, it seems clear that a single generic set of criteria is inappropriate for evaluating the success and/or failure of CMs. The failure of a CM means that it, along with its core ICT, is disappearing. For instance, the videophone, developed and promoted by AT&T in 1964 under the name "picturephone," is no longer available on the market, having been discontinued in 1978 (Lipatito, 2003). This is a clear example of a failed CM. When considering the success of a CM, as stated earlier, there is no single set of criteria for assessing success that would be universally applicable across the range of CMs. This is one of the implications of the groupings found in Figure 2 and Figure 3. As Pinch and Bijker (1987) state, although economic success has been used to measure the success of technological innovations, this measure is rather incomplete. Instead, specific sets of criteria for critical success factors should be applied to specific groupings of CMs. For instance, measures based on economic success are not applicable to non-market-driven CMs. The examples of criteria for evaluating CMs introduced below are intended to illustrate this assertion.

Consider the following three CMs: videophone, telecommuting, and interactive TV. These are deemed to be unsuccessful CMs due to low adoption rates despite commercial interests. Based on these three examples of rather sluggish CMs, one might expect that a common set of characteristics could be used to identify unsuccessful CMs. However, this supposition does not hold because these three CMs are categorized rather differently.
While they all tend to be market-driven, have narrow impacts, and be positively perceived, two criteria pairs (stand-alone – bundled and external – internal) are different. Consequently, it is necessary to have separate sets of criteria for evaluating CMs in each of these groupings.

One criterion for evaluating CM success is recognition by the general public, which is only applicable to CMs whose core ICTs are widely disseminated among a large population (for example, e-mail, IM, blogs, e-learning, viruses, and identity theft). Such recognition is shaped by framing, public and professional discourses of CMs, and social and organizational practices. The CMs placed near the "wide" pole of the wide – narrow continuum produce discourses among many users that lead to the reporting of core ICT use in the mass media. Eventually, such use becomes an area of interest for researchers, including sociologists, communication researchers, and information scientists. The combination of active discourse and practical application creates a snowball effect, the result of which is greater recognition of CMs among the general public. On the other hand, narrow and specific CMs such as KM or artificial intelligence are often recognizable only to a small number of practitioners. Thus, the criterion of public recognition (or lack thereof) is not appropriate for measuring the success or failure of all CMs.

Another criterion that would apply to this wide-scope categorization is "invisibility": at a certain point in their trajectory, CMs may become routinized, institutionalized, and socialized and recede into the background. As Star and Ruhleder (1996) contend, one characteristic of infrastructures is that they are not seen, i.e., that they are invisible or taken for granted. The telephone is a good example of this notion.
It started as a communication tool (see Fischer, 1987), and still is, but few people pay attention to this remarkable and complex technology because it has become so pervasive that people accept it as a given in their day-to-day lives. E-mail is becoming more like the telephone in this regard; both are examples of the domestication of technology. Thus, it is possible that such "invisibility" can be a criterion for success with regard to a widespread CM; however, this same criterion would not be applicable to narrow and specific CMs.

A more limited criterion for success is the capacity of a CM to reach critical mass in terms of the number of users. This criterion is especially relevant to CMs that operate outside of organizational boundaries. Because there is no organizational structure to reinforce the use and spread of core technologies, it is important that a technology has a large number of users to support the assertion that a CM is successful. In addition, this criterion is very important when analyzing market-driven CMs because if market share declines, the supplier would likely discontinue the manufacture of the product. Thus, in order to profit from producing computer-driven technologies, suppliers need to attain a critical mass. For example, while Apple's Newton, an early handheld technology, did not successfully secure a critical mass of users, a similar technology, the Palm PDA, was able to reach this same demographic. As a result, PDAs have become a successful CM (cf. Allen, 2004; Barabási, 2002). Still, this criterion would not be applicable to non-market-driven CMs.

Another criterion for evaluating success that is uniquely applicable to CMs internal to organizations is whether organizational policies are implemented with regard to the CMs. For the most part, only CMs within organizations have relevance to organizational policies.
For instance, telecommuters need to follow certain rules, e.g., they may need to appear in the office at least once a week. This sort of organizationally embedded policy implementation represents a criterion for assessing the success of a CM in that a significant number of individuals within a given population have adopted the practice. Nevertheless, this criterion would not be suitable for CMs external to organizations.

The final criterion for success and failure of CMs that is limited to particular categories is the presence or absence of a change in social policies (Gamson, 1975). This criterion appears to be most applicable to negative CMs. Because they have potential legal consequences, these CMs are more likely to influence legislation. One outstanding example is the CAN-SPAM Act (Controlling the Assault of Non-Solicited Pornography and Marketing Act). The bill, signed in 2003, has been in effect since January 2004. Hacking and the dissemination of viruses are also considered computer-related crimes, and legislation has been introduced to combat them (cf. National Conference of State Legislatures, 2001). The fact that laws have been created to restrain negative computerization can be considered yet another, paradoxical, indication of a successful CM.

In Figure 2, since one cluster is market-driven and the other is non-market-driven, economic impacts are not an appropriate measure of success for both types of CMs. Furthermore, one is internal to organizations and the other is external to organizations; consequently, changes in organizational practices and policies do not fit as a measure of success for both types of CMs. Finally, legislative constraints would be applicable only to measuring the success of negative CMs, not positive CMs. All the CMs presented in Figure 3 share only the "positive" category; the remaining four categories differ.
In addition to the examples for Figure 2 above, CMs that are characterized as narrow or wide have different criteria for success. For example, the criterion that a CM is recognized by the general public is not suitable for CMs categorized as narrow. Thus, we need separate sets of criteria to measure success for different types of CMs.

6. CONCLUSION

This paper addresses the question of evaluating the success of CMs by critically examining the original conception proposed by Kling and Iacono and arguing that the question of evaluation must be rethought on the basis of an expanded conception of CMs. This involved challenging two assumptions: that the conception of a CM in its original formulation is sufficient to cover the range of CMs, and that CMs are similar enough that a single set of criteria can be used to evaluate them. The first part of the critical analysis showed that there is a bias towards organizational CMs, that the distinction between general and specific CMs is not sufficient to capture the heterogeneity of CMs, and that there is a strong technological utopianism at the core of a CM's ideology. Removing these assumptions clears the way for an expanded and more useful conception of CMs.

Using this logic, a set of criteria pairs was derived from an examination of the discourse about CMs. The term "computerization" encompasses a wide range of computer-based activities taking place in a wide variety of settings. CMs can be wide or narrow in scope, stand-alone or bundled, market- or non-market-driven, organizational or societal, and positive or negative. This set of criteria pairs was used in a heuristic analysis in which 41 CMs were coded and grouped as they were placed along a continuum for each criteria pair. These groupings were then revealed by an additional method, formal concept analysis.
This analysis showed that there were groupings of CMs that did not completely share the same categories, indicating that a single set of criteria would not be useful for evaluating success or failure, because most of the categories fell into mutually exclusive groupings outside of a shared set of characteristics, such as having an ideology or a core ICT.

This paper makes three main contributions. First, it offers an expanded conception of CMs that may be useful theoretically and empirically. Second, the development of five criteria pairs or categories encourages future studies that should generate a more nuanced understanding of CMs, their structure and functioning, and their trajectories. Third, and perhaps most importantly, the need to rethink the original question about the evaluation of CMs has been demonstrated. Instead of searching for a single reliable set of criteria for evaluating the success of CMs that can be used by Social Informatics researchers, this analysis shows that it is important to consider that there may be different sets of criteria for evaluating different types of CMs.

7. ACKNOWLEDGEMENTS

The authors would like to dedicate this work to the memory of Rob Kling, friend, colleague, and mentor. We gratefully acknowledge the insightful comments of Blaise Cronin and thank the anonymous referees and TIS editors for their helpful comments.

8. REFERENCES

Agar, J. 2006. What difference did computers make? Social Studies of Science, 36(6), 869-907.

Barabási, A. 2002. Linked: The New Science of Networks. Cambridge, MA: Perseus Publishing.

Baldwin, B. 2006. Confronting computers: Debates about computers at the Public Archives of Canada during the 1960s. Archivaria, 62, 159-178.

Bedard, C. 2006. On the adoption of computing and IT by industry: The case for integration in early building design. Intelligent Computing in Engineering and Architecture, Lecture Notes in Artificial Intelligence, 4200, 62-73.

Bijker, W. E. 1997. Of Bicycles, Bakelites and Bulbs: Toward a Theory of Sociotechnical Change. Cambridge, MA: The MIT Press.

Bijker, W. E. 2001. Social construction of technology. In N. J. Smelser & P. B. Baltes (Eds.), International Encyclopedia of the Social & Behavioral Sciences (Vol. 23, pp. 15522-15527). Oxford: Elsevier Science Ltd.

Blumer, H. 1951. Social movements. In A. M. Lee (Ed.), New Outline of the Principles of Sociology, 199-220.

Fischer, C. S. 1987. The revolution in rural telephony, 1900-1920. Journal of Social History, 21(1), 5-26.

Gamson, W. 1975. The Strategy of Social Protest (2nd ed.). Belmont, CA: Wadsworth Press.

Ganter, B., & Wille, R. 1997. Formal Concept Analysis: Mathematical Foundations. Heidelberg: Springer.

Gee, J. P. 1999. An Introduction to Discourse Analysis: Theory and Method. New York: Routledge.

Goffman, E. 1986. Frame Analysis. Boston: Northeastern University Press.

Hara, N. 2002. Analysis of computer-mediated communication: Using Formal Concept Analysis as a visualizing methodology. Journal of Educational Computing Research, 26(1), 25-49.

Hedstrom, K. 2004. The socio-political construction of CareSys: How interests and values influence computerization. In Networked Information Technologies: Diffusion and Adoption, 1-18. Norwell, MA: Kluwer Academic Publishers.

Hess, D. 2005. Technology- and product-oriented movements: Approximating social movement studies and STS. Science, Technology, and Human Values, 30(4), 515-535.

Iacono, S., & Kling, R. 1996. Computerization movements and tales of technological utopianism. In R. Kling (Ed.), Computerization and Controversy: Value Conflicts and Social Choices (2nd ed.), 85-105.

Iacono, S., & Kling, R. 2001. Computerization movements: The rise of the Internet and distant forms of work. In J. Yates & J. Van Maanen (Eds.), Information Technology and Organizational Transformation: History, Rhetoric, and Practice, 93-136. Sage.

Jacob, E., & Yang, K. 2005. Organizing the Web: Semi-automatic construction of a faceted scheme. Presented at the School of Library & Information Science Brown Bag, Indiana University, February 4, 2005. Retrieved February 28, 2005, from http://elvis.slis.indiana.edu/CSKD/CSKD_Facet.ppt

Kling, R., & Iacono, C. S. 1995. Computerization movements and the mobilization of support for computerization. In S. L. Star (Ed.), Ecologies of Knowledge, 119-153. SUNY Press. http://www.slis.indiana.edu/faculty/kling/pubs/MOBIL94C.htm

Kling, R., & Iacono, S. 1988. The mobilization of support for computerization: The role of computerization movements. Social Problems, 35(3), 226-243.

Kling, R., & Jewett, T. 1994. The social design of worklife with computers and networks: An open natural systems perspective. In M. Yovits (Ed.), Advances in Computers, Vol. 39, 239-293. Orlando, FL: Academic Press.

Kling, R., & Scacchi, W. 1982. The web of computing: Computing technology as social organization. In M. Yovits (Ed.), Advances in Computers, Vol. 21, 3-85. New York: Academic Press.

Kraemer, K. Call for papers on computerization movements. Center for Research on Information Technology and Organizations at UC Irvine. Retrieved November 10, 2004, from http://www.crito.uci.edu/2/si/call.asp

Kraemer, K. L., & Elliott, M. S. (Eds.). 2007. Computerization Movements and Technology Diffusion: From Mainframes to Ubiquitous Computing. Medford, NJ: Information Today, Inc.

Lavoie, M., & Therrien, P. 2005. Different strokes for different folks: Examining the effects of computerization on Canadian workers. Technovation, 25(8), 883-894.

Lipatito, K. 2003. Picturephone and the Internet age: The social meaning of failure. Technology and Culture, 41(1), 50-81.

McCarthy, J., & Zald, M. 1977. Resource mobilization and social movements: A partial theory. American Journal of Sociology, 82(6), 1212-1241.

Moon, F. C., Day, M., Suen, L., Tse, S., & Tong, T. F. 2005. Attitudes and skills of Hong Kong Chinese medicine practitioners towards computerization in practice: A cluster analysis. Medical Informatics & the Internet in Medicine, 30(1), 55-68.

National Conference of State Legislatures. 2001. 2001 hacking and virus legislation. Retrieved November 29, 2004, from http://www.ncsl.org/programs/lis/cip/hackleg01.htm

Nielsen, J. 1994. Usability Engineering. San Francisco, CA: Morgan Kaufmann.

Priss, U. 2003. Formal Concept Analysis home page. Retrieved August 7, 2007, from http://www.upriss.org.uk/fca/fca.html

Priss, U. 2006. Formal concept analysis in information science. Annual Review of Information Science & Technology. Medford, NJ: Information Today.

Rosenfeld, L., & Morville, P. 2002. Information Architecture for the World Wide Web (2nd ed.). Sebastopol, CA: O'Reilly.

Star, S. L., & Ruhleder, K. 1996. Steps toward an ecology of infrastructure: Design and access for large information spaces. Information Systems Research, 7(1), 111-134.

Snow, D. A., Burke, E. R., Jr., Worden, S. K., & Benford, R. D. 1986. Frame alignment processes, micromobilization and movement participation. American Sociological Review, 51, 464-481.

Thompson, C. 2004, February 8. The virus underground. The New York Times Magazine, 28-72, 79, 82, 83.

Wille, R. 1982. Restructuring lattice theory: An approach based on hierarchies of concepts. In I. Rival (Ed.), Ordered Sets (pp. 445-470). Dordrecht-Boston: Reidel.

The World Bank Group. A definition of e-government. Retrieved August 7, 2007, from http://www1.worldbank.org/publicsector/egov/definition.htm

Zald, M., & Berger, M. 1978. Social movements in organizations: Coup d'état, insurgency, and mass movements. American Journal of Sociology, 83(4), 823-861.
Figure 1: Visualization of 41 Computerization Movements Classification

Figure 2: Two CM clusters without overlap except bundled and narrow

Figure 3: Two CM clusters without overlap except positive - negative

APPENDIX A: Original list of CMs listed in the Call for Papers with additional CMs in boldface

• Artificial intelligence
• Expert systems, multiagent systems
• Office automation
• Email
• Instant messaging
• Paperless courts
• Virtual communities
• Knowledge management
• Virtual reality
• Digital libraries
• Remote work
• Open source, free software
• Online role playing games
• E-government
• E-commerce
• E-democracy
• E-health
• Personal digital assistants
• Network centric warfare
• Interactive television
• Supercomputing
• Cybersecurity/cybertrust
• Surveillance technologies
• Ubiquitous computing
• Human-robot-agent interactive technologies
• Information privacy
• E-science
• Computer supported cooperative work
• Geographic information systems
• Blog
• E-learning
• Digital photography
• Telecommuting
• Search engines
• Videophone
• Spam
• Viruses
• Identity theft
• Stalking
• Hacking
• E-medicine

Not used:
• Personal computing
• Computer-based education
• Urban information systems
• Communities of practice
• Virtual organizations
• Distributed work
• Internetworking
• Software productivity
• Context-aware computing
• Sensor networks
• Cyberinfrastructure

APPENDIX B: TWO CLUSTERS OF CMS

Figure 4: Two CM clusters without overlap except bundled and narrow, shown in lattices generated by FCA

Figure 5: Two CM clusters without overlap except positive – negative, shown in lattices generated by FCA

----

Integrating Patient Digital Photographs with Medical Imaging Examinations

Senthil Ramamurthy, Pamela Bhatti, Chesnal D. Arepalli, Mohamed Salama, James M. Provenzale, and Srini Tridandapani

Published online: 14 February 2013. © The Author(s) 2013.
This article is published with open access at Springerlink.com.

Abstract: We introduce the concept, benefits, and general architecture for acquiring, storing, and displaying digital photographs along with medical imaging examinations. We also discuss a specific implementation built around an Android-based system for simultaneously acquiring digital photographs along with portable radiographs. By an innovative application of radiofrequency identification technology to radiographic cassettes, the system is able to maintain a tight relationship between these photographs and the radiographs within the picture archiving and communications system (PACS) environment. We provide a cost analysis demonstrating the economic feasibility of this technology. Since our architecture naturally integrates with patient identification methods, we also address patient privacy issues.

Keywords: Patient identification · Electronic medical records · Medical imaging · Medical errors · Digital camera · DICOM · PACS

Introduction

Patient safety issues have gained prominence in the national dialog in the USA, particularly since the publication of the 2001 Institute of Medicine's report on quality [9]. The Joint Commission on Accreditation of Healthcare Organizations (JCAHO), in its 2010 National Patient Safety Goals (NPSG), provides a specific requirement (NPSG.01.01.01) that at least two patient identifiers be used when providing care, treatment, and services [13]. The rationale is that "wrong-patient errors occur in virtually all stages of diagnosis and treatment… Acceptable identifiers may be the individual's name, an assigned identification number, telephone number, or other person-specific number" [13]. Meanwhile, the National Quality Forum [18], with support from the Agency for Healthcare Research and Quality, has specifically endorsed, in its "30 Safe Practices for Better Health Care Fact Sheet," the use of standardized protocols to prevent mislabeling of radiographs.
One can easily see that many of the acceptable identifiers noted in the JCAHO NPSG requirements can be problematic, particularly if patients are unconscious, uncooperative, or noncommunicative for various reasons. On the other hand, human beings have been hardwired to use the human face as an identification device for millennia, and this identification device remains strong even if the patient is unconscious or uncooperative.

(Footnote: A very preliminary version of this work was presented at the IEEE Engineering in Medicine and Biology Society's Conference on Biomedical Engineering and Sciences (IECBES) 2010 and at the 2012 SIIM meeting. Affiliations: S. Ramamurthy, C. D. Arepalli, M. Salama, J. M. Provenzale, and S. Tridandapani (corresponding author; stridan@emory.edu), Department of Radiology and Imaging Sciences, Winship Cancer Institute, Emory University School of Medicine, 1701 Uppergate Drive NE, Suite 5018, Atlanta, GA 30322, USA; S. Ramamurthy, P. Bhatti, and S. Tridandapani, School of Electrical and Computer Engineering, Georgia Institute of Technology, 777 Atlantic Drive NW, Atlanta, GA 30332, USA; J. M. Provenzale, Department of Radiology, Duke University Medical Center, Durham, NC 27710, USA. J Digit Imaging (2013) 26:875–885. DOI 10.1007/s10278-013-9579-6.)

To minimize or prevent mislabeling of medical imaging studies, we introduce the concept of obtaining digital photographs of patients simultaneously with all medical imaging studies. These digital photographs will be small additions to the imaging study, similar to the scout or localizer images that are performed with CT studies. We do not intend these digital photographs to entirely replace numerical identifiers; rather, we envision that they would supplement and strengthen these identifiers.
However, in some cases, such as unconscious trauma patients, these photographs may indeed be the only available identifiers. Two parallel developments currently make our proposed technique a contender for serious consideration in electronic healthcare delivery systems:

1. Recent advances in charge coupled device (CCD) and complementary metal oxide semiconductor (CMOS) camera technologies have made it possible to miniaturize these relatively inexpensive devices, such that digital cameras capable of 12–16 megapixel resolution occupy less than a square centimeter. At the same time, memory costs have continued to drop, and the addition of digital photographic data to an imaging study has negligible cost associated with it when compared with the overall cost of the study.

2. For more than two decades, the development of the digital imaging and communications in medicine (DICOM) standard has allowed for integrating imaging data from various modalities into hospital information systems [11, 14] and has provided the ability to present integrated image data to radiologists and other physicians [10].

Thus, the technical foundation exists, making our novel concept a feasible one. The distinguishing feature of our technique is that we consider point-of-care photographic imaging; that is, photographs will be obtained simultaneously with every instance of acquisition of diagnostic imaging.

The ideal implementation of this photography technique would require the cooperation of equipment vendors, that is, manufacturers of radiography, ultrasound, computed tomography, and magnetic resonance imaging equipment. These vendors will all have to integrate cameras in their equipment. However, there is an installed base of tens to hundreds of thousands of imaging devices, which would require some form of retrofitting for such a technique to work.
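The role DICOM plays in this integration can be illustrated with a short, hedged sketch. The tag numbers below are standard DICOM data-element tags, but the `build_photo_object` helper and its argument names are purely illustrative; a real implementation would use a DICOM toolkit rather than a bare dictionary:

```python
# Sketch: wrapping a point-of-care photograph as a DICOM-style object
# that shares its Study Instance UID with the diagnostic examination,
# so that a PACS files the photo alongside the radiographs.

def build_photo_object(patient_id, patient_name, study_uid, photo_bytes):
    """Return a minimal tag -> value mapping for the photograph.
    Reusing the examination's Study Instance UID (0020,000D) is what
    lets the PACS group the photo with the imaging study."""
    return {
        (0x0008, 0x0016): "1.2.840.10008.5.1.4.1.1.7",  # SOP Class UID: Secondary Capture Image Storage
        (0x0008, 0x0060): "XC",                         # Modality: external-camera photography
        (0x0010, 0x0010): patient_name,                 # Patient's Name
        (0x0010, 0x0020): patient_id,                   # Patient ID
        (0x0020, 0x000D): study_uid,                    # Study Instance UID (shared with the radiograph)
        (0x7FE0, 0x0010): photo_bytes,                  # Pixel Data (encoded photograph)
    }

# Illustrative values only.
radiograph_study_uid = "1.2.826.0.1.3680043.9.9999.1"
photo = build_photo_object("MRN001", "DOE^JANE", radiograph_study_uid, b"\x00\x01")
print(photo[(0x0020, 0x000D)] == radiograph_study_uid)  # True
```

Because PACS workstations hang series by Study Instance UID, an object constructed this way would surface in the same examination folder as the radiograph without any changes to the viewer.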
In this paper, we describe the general architecture for achieving this integration in an existing picture archiving and communications system (PACS). We also describe our prototype for retrofitting a camera system on an existing portable conventional radiography (CR) machine, providing an end-to-end implementation from image acquisition, to transmission, to storage, and to display.

The rest of this paper is organized as follows. In the "Motivations for and Advantages of the Proposed Concept" section, we discuss the motivations for and advantages of the proposed concept. In the "Implementation Strategies: Clinical Perspective" section, we consider implementation strategies for a variety of imaging modalities from a clinical perspective, and potential privacy concerns are explored in depth in the "Potential Privacy Concerns of Gathering Photographic Data" section. In the "Architecture for Integrating Photography with a Portable Radiography Machine" section, we discuss the specifics of the hardware architecture that we have designed to integrate a camera with a portable radiography machine and the back-end processing that is required to integrate the digital photographs with the DICOM radiographic images; cost considerations are also discussed in this section. Finally, in the "Conclusions and Future Work" section, we provide conclusions.

Motivations for and Advantages of the Proposed Concept

There are two significant advantages of incorporating photographs with imaging studies:

1. Decreasing medical errors: Medical errors are not an insignificant source of adverse clinical outcomes and medical complications, which add significantly to health care costs [9]. In particular, imaging studies are prone to mislabeling and misidentification errors.
Such errors can cause medical problems for both the patient whose demographic information was tagged to the study and the patient to whom the images belong. While advances in PACS may lead to improved workflow and increased efficiency and throughput, medical mistakes may also become more prevalent [15].

A number of such mislabeled cases are identified at the time of image interpretation by the radiologist, when a current study is compared with an older study purporting to be from the same individual. The radiographs or the scout/localizer images (in the case of CT examinations) from the new and old studies may show some obvious differences, particularly if the body habitus of the patients in the two comparative examinations is quite different or if there is different medical support hardware between the two studies [5]. However, when the two imaged individuals have similar physiques, determining that the old and new studies do not belong to the same patient can be challenging.

We believe that obtaining a patient's facial digital photograph simultaneously with the diagnostic images can significantly increase the detection rate of mislabeled studies, thereby decreasing medical error. This will also increase interpreting physicians' efficiency and throughput, since they will have to spend less time looking for anatomical landmarks.

Aakre et al. noted that plain radiographic errors were reduced from 2.4 to 0.7 %, but not eliminated, after the introduction of bar code scanners to automatically generate patients' demographic information and examination dates on the computed radiography modality via a DICOM modality worklist [1]. Their intervention required either the patient or the technologist to verify the demographic information, which is a potential source of errors. We believe that automatically adding digital photographs will help reduce this error rate further.

2. Improved diagnostic capabilities: In addition to facial digital photography, obtaining a digital photograph of the area that is imaged can, in many cases, add to the diagnostic value of the imaging studies.

For example, with portable chest radiographs, it is often unclear whether the many medical lines and tubes projected over the patient are outside or inside the patient, or which parts of such devices are outside or inside. Quite often, such ambiguity requires a call by the radiologist to the clinical service taking care of the patient for clarification; the radiologist's time is one of the more expensive costs associated with an otherwise simple study. A digital photograph of the chest may show portions of some of these lines and tubes outside the patient and thus provide additional clues, improving the diagnostic value of the imaging study and interpretation.

The digital facial photograph may also add to the diagnostic value of the study. Quite often, standing orders for obtaining daily portable radiographs in the intensive care units are placed with generic indications, such as "check lines and tubes" or "evaluate endotracheal tube." Many times, these generic indications propagate to the study requisitions that are submitted even after the questioned lines and tubes have been removed from the patient. A digital facial photograph may show whether tubes such as nasogastric tubes, orogastric tubes, or endotracheal tubes are present or absent in the patient. Such additional information can dramatically speed up the interpretation of portable chest radiographs.

Another area of radiology where photographs of the affected region can aid tremendously in diagnosis is trauma imaging. Showing the entry and exit wounds of gunshot victims or the presence of foreign objects that protrude outside the patient can aid in the diagnostic accuracy of CT examinations by calling attention to these entities.
A potential further advantage of such an identification system is that it could eventually be entirely automated, given the significant progress being reported in the area of computerized face recognition techniques. Bowyer et al. [6] and Zhao et al. [25] have surveyed numerous techniques, with developers of these techniques reporting a greater than 90 % recognition rate. Indeed, O'Toole et al. have demonstrated the superiority of several state-of-the-art face recognition algorithms over human capabilities in determining whether pairs of face images, taken under different illumination conditions, were pictures of the same person or of different people [20]. Thus, face recognition technologies may serve as an additive safeguard to human face matching capabilities, just as other computer-aided diagnostic technologies are assisting radiologists in a variety of clinical conditions [12], including the detection of pulmonary nodules [2, 4], osseous metastases [19], and colorectal polyps [3].

Implementation Strategies: Clinical Perspective

To avoid the possibility of tagging the wrong patient's photograph with the imaging study, an important requirement of this type of integration is to ensure "point-of-care" imaging; that is, the photographic information is obtained either simultaneously or as close in time as possible to the medical image acquisition. Implementation strategies for several modalities are discussed below.

Digital Radiography and Portable Conventional Radiography

Currently, a light source is used by technologists to illuminate and set the field of view of digital radiographs and portable radiographs. It is relatively straightforward to integrate CCD and CMOS cameras into these imaging devices so that photographs of the field of view are obtained simultaneously with the radiographs.
In addition to obtaining a photograph that may provide useful diagnostic information, this system may also be employed for better positioning of the patient and improving the field of view of the radiograph. Additionally, a second CCD or CMOS camera may be employed in the X-ray machine tower to simultaneously obtain a facial photograph of the patient. Of course, photographs of the chest, for example, with portable radiography will be obtained with the patient wearing a hospital gown. The objective is to retain the patient's modesty to the extent possible and not add a new step to the workflow. As a result, some of the lines and tubes may be obscured by the clothing; however, much of the overlying hardware may still be visible and provide some useful information. Facial photographs may be limited as identification tools, for example in trauma or postsurgical patients whose faces are covered with dressing. However, the presence and pattern of such dressing may itself serve as an identification tool and allow us to recognize when a mislabeling error occurs.

Ultrasound

CCD or CMOS cameras measuring a square centimeter or less are available, and these can be integrated with the ultrasound transducers, or a separate port could be used. These cameras can then be used to obtain both facial photographs and photographs of the affected body part that is being imaged. Magnetic tracking markers are already being embedded into ultrasound (US) transducers to allow for real-time registration of US images in three-dimensional space with imaging from other modalities; thus, embedding a camera in a US transducer should be feasible. Of course, some judgment is called for in deciding what body part photography is useful and clinically acceptable, and clinical standards for this can eventually be established.
Obviously, correlative photographic imaging is neither clinically relevant in many cases nor ethically acceptable when considering studies such as ultrasound imaging of the female pelvis or of the male prostate. However, in some cases, photographs of the affected part may turn out to be an important medicolegal documentation tool. For example, radiologists are often unable to perform ultrasound imaging of some organs because of overlying dressing and bandages, and it may be useful to document these with a photograph. Software locks can be provided to prevent acquisition of ultrasonic images by the technologist until a facial photograph of the patient is first obtained, thus ensuring compliance with this workflow modification.

CT, MRI, PET

Digital cameras can be integrated with the CT gantry or embedded within the MRI scanner, and these cameras can obtain digital photographs of both the face and the body part being imaged. At our institution, we have already installed video cameras in the MRI, PET-CT, and PET-MRI suites for the purpose of monitoring patients, particularly those receiving moderate sedation. These monitoring cameras are necessary since nursing personnel cannot be in the room, especially when the X-ray tube is active. Such monitoring cameras could easily be converted into recording devices integrated with the imaging equipment.

PACS and Viewing Workstations

Our technique is useful only if the photographs are readily available on a PACS viewing station. We envision that these photographs will be treated just like the scout or localizer images in CT or MRI studies. These should be "clickable" for enlargement. Most importantly, we should have the ability to simultaneously view and compare photographs obtained from different studies of the same patient. A possible hanging protocol for displaying portable chest radiographs, obtained along with photographs, is shown in Fig. 1.
The DICOM standard already provides for the storage and display of visible light (VL) images. Freely available software, the SimpleDICOM Suite, has been developed to allow importation of nonradiologic images into the PACS [7]. The key limitation of that approach is that the VL images require an additional workflow step, and the patient must be assigned a mock event within the Radiology Information System to represent the additional images in the PACS; the VL images are thus not integrated with the radiologic images and are not part of one study. At the same time, it has been shown by PET-CT and, more recently, PET-MRI implementations that multiple modalities can be integrated seamlessly both at the data acquisition phase and at the display phase. Thus, implementation of our technique for simultaneous acquisition and display of photographic and medical imaging data is feasible with existing technologies: the technology is available for integration of digital cameras for acquisition of patient facial photographs at the point of care of medical imaging for a wide variety of modalities.

Potential Privacy Concerns of Gathering Photographic Data

Patient privacy concerns will potentially be raised as issues with obtaining, storing, and displaying digital photographs with the medical images. There are several reasons why most patients will likely not object to this minor intrusion, if it could even be considered an intrusion, on their privacy. First, there is a significant safety benefit to the patient, and most patients would be happy to provide more information if it could potentially lead to a more accurate diagnosis. Second, most healthcare institutions have multiple video cameras in the hallways as a security measure, and patients' presence and movements are already being recorded at various locations.
Most patients do not enter a healthcare facility wearing a veil, and most of them are seen by a number of healthcare workers including physicians, nurses, technologists, and transporters. With photographic recording, their external physical appearance will be seen by one more physician: the radiologist. Third, photographic data is no different from all of the demographic data, such as contact information, social security number, and date of birth, that is already being collected from all patients at most medical facilities. The photographic data to be gathered will be secured just like the individually identifiable health information, including demographic data, that is attached to medical imaging data and is protected under the Health Insurance Portability and Accountability Act of 1996. This data will be available only to medical personnel who are charged with the care of the patient. Fourth, imaging modalities such as CT already collect enormous amounts of data, and with currently available sophisticated 3D volume and surface rendering techniques, it is possible to recreate the external appearance of a patient. In fact, Cesarani et al. have produced a model of the face of a man who lived nearly 3,000 years ago from CT data of a mummy [8]. Interestingly, "defacing" algorithms for neuroimages are required to preserve subject privacy for research projects involving large-scale collaboration with neuroimaging data [21]. Thus, with photographs, patients are not really giving us access to any new information that cannot already be derived from the data they currently provide. Finally, the data we intend to gather is an externally visible feature of humans and does not involve other sophisticated data, such as retinal scans or fingerprints, that could be misused.
Architecture for Integrating Photography with a Portable Radiography Machine

Emory Prototype

Of course, the optimal implementation of the technique we have discussed thus far would require vendors of medical imaging equipment to integrate digital cameras into their devices to ensure simultaneous capture of this multimodal data. On the other hand, there is an installed base of several hundred thousand imaging devices throughout the country, and it would be difficult to justify replacing all these devices with newer ones simply for the ability to obtain digital photographs simultaneously. Thus, inexpensive, snap-on solutions need to be developed. The critical features of these snap-on solutions are that (1) the photographs must be obtained nearly simultaneously with the medical images and (2) there should be a tight integration between the photograph and the medical images within the PACS environment. At Emory University Hospital and affiliated hospitals and clinics, we are currently constructing a snap-on solution built around an open-source ARM-based development board (ARM Ltd, Cambridge, UK) running the Android operating system. In Fig. 2, a system-level architectural diagram of our implementation is shown, wherein the new elements in our architecture are shown with dashed borders and the subsystems in the existing architecture are shown with solid borders. In the existing environment, each modality, such as CT, MRI, and US, has a wired Ethernet link to the PACS server allowing for transmission of the patient demographic information and the medical images. All of our portable X-ray machines in routine use are cassette-based CR machines. A cassette processor serves to convert each X-ray image into DICOM format. To associate each study with the patient, each cassette is marked with a unique barcode, the Plate_ID, which is added to the DICOM header by the cassette processor.
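The Plate_ID stamping step can be illustrated with a minimal sketch. This is not the vendor's cassette-processor code: the DICOM header is modeled here as a plain dictionary keyed by (group, element), and a real implementation would use a DICOM toolkit such as DCMTK or pydicom.

```python
# Hypothetical sketch of the cassette processor's Plate_ID stamping step.
# The DICOM header is modeled as a plain dict keyed by (group, element);
# a production implementation would use a real DICOM toolkit instead.

def stamp_plate_id(dicom_header, barcode):
    """Record the cassette barcode under DICOM tag (0018,1004), Plate ID,
    so the study can later be linked to the photograph captured for the
    same cassette."""
    header = dict(dicom_header)           # leave the caller's header untouched
    header[(0x0018, 0x1004)] = barcode    # Plate ID tag
    return header
```

For example, `stamp_plate_id({(0x0010, 0x0010): "DOE^JOHN"}, "123456")` (a hypothetical header and barcode) returns a header whose (0018,1004) entry is "123456".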
These processors also allow the technologist to add the patient demographic information from the work list. The DICOM file is then transmitted to the PACS server from the processor via a wired Ethernet link. Our new architecture adds two main hardware components (Fig. 2): an Android-based camera device (ABCD) and an integration server (IS). Digital photographs are captured by the ABCD and transmitted, along with a time stamp and a device code (or Plate_ID in the case of CR), via wireless links to the IS. The IS was developed to efficiently integrate the photographs with medical images in DICOM format.

Android-Based Camera Device

Our first ABCD implementation has been developed as a custom device using off-the-shelf components (Fig. 3).

Fig. 1 An example display showing a current radiograph–photograph combination (left) and a prior radiograph–photograph combination (right) from two different patients. For privacy reasons, the patients' eyes have been masked. Reprinted with permission from the American Journal of Roentgenology [22]

The Android platform was chosen for ease of implementation and the ability to leverage existing applications in the Android market. The device is built around a BeagleBoard (Texas Instruments, Dallas, TX, USA), initially for deployment with a portable CR machine (Fig. 4). We are affixing all of the X-ray-sensing plates (cassettes) used with the portable CR machines with radio-frequency identification (RFID) tags that correspond to the barcodes on the cassettes, i.e., the Plate_ID. As described later, these Plate_IDs will allow us to link the photographs with the radiographs. These passive 125 kHz RFID tags feature RFID integrated circuits based on the EM4001 ISO standard, with the corresponding DICOM tag (0018,1004). The RFID tags offer a read range of 10 cm; that is, the tags are read when they are brought into close proximity to the reader.
The read range has been deliberately chosen to be very small for two reasons: (1) longer-range RFID readers consume more power, and (2) a short range prevents cross talk among the cassette RFID tags. It is not uncommon for technologists to take up to 12 cassettes when they go to the ICUs and inpatient floors to obtain portable X-rays, and this can create interference (cross talk) among the cassette RFID tags. For the next-generation prototype, we are currently exploring longer-range RFID readers employing highly directional antennas that can work at distances of up to 6 ft. The camera used in our solution is an Aptina 1/4-inch CMOS sensor (Aptina Imaging Corp., San Jose, CA, USA), which is capable of 3-megapixel resolution and is mounted on a Leopardboard 365 3 M camera board (Leopard Imaging Inc., Fremont, CA, USA). To ensure that the digital photograph is obtained simultaneously with the diagnostic image, a standard instrumentation bus is used. More specifically, when the trigger for acquisition of a radiograph is activated, it is received via a general-purpose input/output (GPIO) pin that triggers the ABCD to capture a photograph with the camera.

Fig. 2 System-level architecture. The building blocks in the existing environment are shown with solid borders and the building blocks in our new environment are shown with dashed borders

Fig. 3 Android-based camera device block diagram. IS integration server, GPIO general purpose input/output

Fig. 4 Prototype ABCD

At the same time, the ABCD employs a Bluetooth-enabled RFID reader to capture the unique identifier for the cassette.
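The trigger-driven capture flow just described can be sketched as follows. This is a simplified behavioral model, not the actual ABCD firmware: the camera, RFID reader, and clock are stand-in callables, and transmission over Wi-Fi to the IS is represented by an outbox list.

```python
import time
from dataclasses import dataclass

@dataclass
class PhotoRecord:
    plate_id: str      # cassette barcode, read via the RFID tag
    timestamp: float   # acquisition time (NTP-synchronized on the real device)
    jpeg: bytes        # compressed photograph

class Abcd:
    """Sketch of the ABCD event flow: a GPIO trigger fires when the
    radiograph exposure is made, and the device pairs a photograph
    with the cassette's RFID-derived Plate_ID and a time stamp."""

    def __init__(self, camera, rfid_reader, clock=time.time):
        self.camera = camera            # callable returning JPEG bytes
        self.rfid_reader = rfid_reader  # callable returning the Plate_ID
        self.clock = clock
        self.outbox = []                # records awaiting transmission to the IS

    def on_gpio_trigger(self):
        # Capture the photo and read the cassette RFID at exposure time.
        record = PhotoRecord(
            plate_id=self.rfid_reader(),
            timestamp=self.clock(),
            jpeg=self.camera(),
        )
        self.outbox.append(record)      # real device: send over 802.11 to the IS
        return record
```

In use, the hardware-facing callables would wrap the camera driver and the Bluetooth RFID reader; here, lambdas returning fixed values suffice to exercise the flow.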
The photograph and the RFID information are both transmitted to the platform that we have built to integrate the photographs with the medical images in DICOM format, the IS (Fig. 5). An IEEE 802.11 b/g wireless module leverages the existing enterprise-wide wireless infrastructure to connect to the IS. The ABCD runs rowboat, a port of Android 2.3 (Gingerbread) to the BeagleBoard XM. Table 1 summarizes the cost of our development ABCD system, totaling 568 USD. We expect the cost of the final snap-on solution to be much less, on the order of 100 USD. Our initial hardware development goal was to rapidly realize a functional development platform with full debugging capabilities. As a result, we integrated off-the-shelf components that provide well above the functionality required in the final optimized system. Options to scale the cost down include replacing the BeagleBoard XM with a low-cost microprocessor, eliminating the display monitor that is not essential in the final product, and selecting a lower-cost RFID reader. We also expect the footprint of the ABCD to scale down dramatically once the system is further optimized and integrated at the component level.

Integration Server

The integration software has been developed in C++, leveraging the DCMTK libraries to implement the DICOM standard. The IS process flow is shown in the right half of Fig. 5. The IS has bidirectional communication with the PACS server. The IS queries and retrieves recent studies for each modality from the PACS. In this illustration, reference is only made to the CR list, but the process is similar for other modalities. Once the IS receives a photo with a Plate_ID, it compares this Plate_ID with the Plate_IDs (DICOM tag 0018,1004) from the headers of the DICOM images of the retrieved studies from PACS until a match is found.
Since the cassettes are reused, the Plate_ID is not unique, and this creates an ambiguous relationship between the digital photograph and the imaging study, as shown in Fig. 6. However, the combination of the Plate_ID and time of acquisition is still unique; thus, a Time_Stamp is also generated by the ABCD for each photograph. This requires all the ABCDs to be synchronized in time. The ABCDs use the Network Time Protocol (NTP) to synchronize their clocks with an NTP server running on the IS. Once a match is found, the photograph is converted from JPEG format to DICOM format and a new series is created with a study-matched subject photo. This series is then sent to PACS, where it becomes part of the imaging study or folder. Extending the implementation of this technique to non-portable, stand-alone equipment, such as CT and MR scanners, is much easier since departmental Wi-Fi equipment is always within range. Further, the time stamps for the photographs and the medical images can be perfectly synchronized by the time of acquisition.

Fig. 5 Process-flow diagram for the ABCD and IS. ABCD steps: identify the X-ray cassette using RFID; acquire the subject photo; tag the photo with the X-ray Plate_ID and Time_Stamp; send the photo to the Integration Server. IS steps: retrieve recent CR studies from PACS; identify the Plate_ID from the DICOM header; match the photo to a CR study based on Plate_ID and Time_Stamp; convert the photograph to DICOM; create a new series with the study-matched subject photo; send the new series to PACS
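The Plate_ID/Time_Stamp matching step can be sketched as below. This is an illustrative reimplementation in Python rather than the authors' C++/DCMTK code; studies and photos are modeled as plain dictionaries, and the `max_skew_s` tolerance window is an assumption added for the sketch.

```python
def match_photo_to_study(photo, studies, max_skew_s=300):
    """Match a photo to a study on Plate_ID (DICOM tag (0018,1004)) plus
    acquisition time. Plate_IDs repeat across studies because cassettes
    are reused, so the photo is paired with the candidate study whose
    acquisition time is closest to the photo's time stamp."""
    candidates = [s for s in studies if s["plate_id"] == photo["plate_id"]]
    if not candidates:
        return None
    best = min(candidates, key=lambda s: abs(s["acq_time"] - photo["timestamp"]))
    if abs(best["acq_time"] - photo["timestamp"]) > max_skew_s:
        return None   # no study close enough in time; flag for manual review
    return best
```

In the Fig. 6 scenario, two studies share Plate_ID 123456 but were acquired at 14:00 and 16:00; a photo time-stamped at 14:00 matches only the first study, restoring the one-to-one relationship.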
Table 1 Cost of the various ABCD components

  Component function     Specific component used       Cost (USD)
  Microprocessor         BeagleBoard XM                149
  Camera                 3 M Camera, Leopard Imaging   40
  RFID reader            RFID USB Reader, Serial IO    179
  Communication (WiFi)   BeagleBoard Expansion V2      200

Integrated Display of Digital Photographs and Radiographs

At the interpreting workstation, hanging protocols treat the photographs like a separate series within the study. A potential hanging protocol is shown in Fig. 1, where a portable radiograph obtained simultaneously with a photograph is displayed along with a prior radiograph–photograph combination. In this example from [22], the radiograph on the left shows an 81-year-old man status post coronary artery bypass grafting and aortic valve replacement; the characteristic median sternotomy wires can be seen with proper window and level settings on the workstation. The radiograph on the right, which is the comparison ("previous") radiograph from three days prior, shows an 89-year-old white man with aortic stenosis admitted for aortic valve replacement surgery; this radiograph also shows a calcified aortic knob and calcified mediastinal lymph nodes not seen in the patient on the left. In addition, given a difference of only three days between the two radiographs, it is unlikely that the postoperative changes would show median sternotomy wires only and no support lines and tubes. The photographs, despite being edited to protect patient identity for this report, clearly show differences in facial hair and baldness between the two patients.

System Requirements and Cost Considerations

The average sizes of various medical imaging studies range from 8 MB to 1 GB.
A few representative studies have sizes as follows [17]: chest radiograph, 8 MB; CT abdomen, 150 MB; CT heart, 1 GB; MRI abdomen, 15–50 MB; whole-body PET, 10 MB; heart PET, 24 MB; standard US, 12.5 MB/s; Doppler US, 37.5 MB/s. Considering that a 3-megapixel camera can provide a JPEG-compressed picture in under 0.5 MB, the overhead of adding a single photograph to a medical imaging study ranges from 12.5 % for a one-view chest radiograph down to 0.05 % for a heart CT. Currently, a megabyte of memory costs less than 0.01 USD, and the cost of storing a photograph is thus negligible relative to the cost of the examination. A basic 3-megapixel camera costs around 10 USD, especially since no sophisticated focusing capabilities are required for our purpose; the object distance is almost always fixed for each machine. The most inexpensive portable X-ray machine costs around 40,000 USD, and a PET-CT scanner costs anywhere from 2 to 3 million USD installed. Thus, the added cost of digital photography in new medical imaging equipment is minuscule. Furthermore, it should be possible to develop snap-on kits which can be used to retrofit existing imaging equipment at a cost of about 200 USD or less per kit. As an example, at Emory's affiliated hospitals and clinics, approximately 137 imaging devices are currently deployed (Table 2). The machines include PET-CT, MRI, US, gamma cameras, and portable X-ray machines. We estimate that retrofitting all of these machines would cost less than 30,000 USD. In 2010, approximately 481,000 imaging examinations were performed on approximately 142,000 patients at these centers. Even assuming that these cameras last only 1 year before requiring replacement (that is, grossly underestimating the longevity of these devices), the cost per examination is projected to be about 0.02 USD including memory costs, and likely even less since these devices are expected to last more than 1 year.
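The storage-overhead arithmetic above can be reproduced directly. This is a sketch; the 4 MB single-view chest radiograph size is our assumption (half the quoted 8 MB study size), chosen because it is consistent with the quoted 12.5 % figure.

```python
# Overhead of attaching one ~0.5 MB JPEG photograph to a study.
# Study sizes in MB; the 4 MB single-view chest figure is an assumption.
PHOTO_MB = 0.5

def overhead_percent(study_mb, photo_mb=PHOTO_MB):
    """Photograph size as a percentage of the study size."""
    return 100.0 * photo_mb / study_mb

print(f"one-view chest radiograph (4 MB assumed): {overhead_percent(4):.2f} %")
print(f"CT heart (1 GB = 1024 MB):                {overhead_percent(1024):.2f} %")
```

This reproduces the range quoted in the text: 12.50 % for the single-view chest radiograph down to about 0.05 % for the heart CT.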
Fig. 6 a Ambiguous relationship among the two imaging studies when the same cassette is reused for one of the studies. b The addition of acquisition time, Time_Stamp, to the Plate_ID removes the ambiguity and restores the one-to-one relationship

It is quite difficult to predict the number of mislabeled cases in any center. Many mislabeling errors may simply never be discovered. In other cases, mislabeled examinations may be discovered, either by the technologists or the radiologists, shortly after the examinations are performed and may be fixed promptly before the images or the interpretation enters the patients' permanent medical records. These cases may not be reported since no clinician has seen the images or the radiologists' interpretation, and there are thus no clinical consequences. Quite often, however, mislabeling is discovered days, weeks, or months later, when a patient undergoes a subsequent examination and it is noted that the body habitus does not match between the comparative examinations, or some discrepancy regarding supportive hardware, such as the presence or absence of a pacer/defibrillator device, is noted. While it is difficult to estimate the extent of the impact that our proposal will have on patient safety, it should not be difficult to argue that 30,000 USD would be a fairly inexpensive investment if even one major complication or death were prevented out of the 480,000 medical imaging studies performed at our institution annually. We now project national costs.
According to Mettler et al., nationally in 2006 about 400,000,000 imaging examinations involving ionizing radiation (including diagnostic radiographic and fluoroscopic studies, interventional procedures, CT scanning, and nuclear medicine studies) were performed [16], roughly 1,000 times the number of examinations performed at our institution. If we include MRI and ultrasound examinations, the number of imaging examinations nationally is probably close to 500,000,000. Extrapolating the ratio of imaging examinations from our institution to the nation, we can estimate the number of imaging devices to be at least 145,000. Note that this is an underestimate and does not include devices such as echocardiography machines, which are not part of the Radiology Department at our institution and are thus not counted. Likewise, various other imaging modalities, such as endoscopes, which could benefit from integrated facial photographic imaging, are not included in our estimates. Additionally, non-imaging medical diagnostic devices such as electrocardiography machines could also benefit from photographic imaging as an identification tool. Thus, the market potential for such a technique is quite large.

Conclusions and Future Work

We have presented a relatively straightforward approach that should intuitively reduce mislabeling of medical imaging examinations and thus result in a reduction in medical errors. The technique employs digital facial photography at the point of care of medical image acquisition and integrates this data with the imaging data. These photographs would serve as powerful identification tools. The method can be applied to all imaging modalities, including X-ray, CT, MRI, ultrasound, and PET. Digital photography at the time of medical imaging also provides supplemental clinical information that can enhance the diagnostic capability of the medical imaging study.
This technique is not limited to diagnostic medical imaging and can be easily translated to other applications, such as electrocardiography or any other method where electronic patient data is gathered. Thus, our concept can be exercised at a host of point-of-care data collection points, resulting in a more robust patient identification and authentication function in integrated healthcare information systems. We reiterate that this technique is intended to strengthen other existing identification methods, and there is a limitation in that patient appearances may change with time. Furthermore, we note that for a patient's first imaging examination, the matching photograph may have to be obtained from the patient's electronic medical record, since a prior photograph will not exist in PACS. Another limitation is the availability, or lack, of color monitors for PACS workstations, which can affect the visualization of patient facial features such as skin color and tone. This problem will vanish in the future as color monitors increasingly replace grayscale ones. We have described the hardware and software architectural framework required to integrate photography with medical imaging examinations. Specifically, we developed the ABCD, which is targeted toward portable radiography machines and enables simultaneous acquisition of digital facial photographs along with portable chest radiographs.
Several research questions must be addressed before the idea of integrated photography with medical imaging can become a clinically acceptable and useful tool.

Table 2 Number of devices and number of examinations performed in 2010, classified by modality, at Emory University's affiliated hospitals and clinics

  Modality                           Number of devices   Examinations per year
  PET, PET/CT                        6                   9,553
  MRI                                12                  44,341
  CT                                 12                  89,967
  SPECT/CT and gamma cameras         8                   9,540
  Mammography                        11                  32,866
  Ultrasound                         17                  39,635
  Radiography and radiofluoroscopy   35                  151,804
  Portable X-ray                     17                  81,268
  Portable C-arm                     19                  7,230
  Interventional radiology suites    8                   14,647
  Totals                             145                 480,852

First, if we want to minimize technologist interaction and thus reduce the time the technologist would spend with this new modification, then the camera must be positioned automatically to take the facial photographs. In the case of chest radiographs, it is fairly easy to mount a fixed camera on the radiography unit tower so that, with a high degree of certainty, the patient's face can be captured. Moveable or steerable mounts are available for cameras, and intelligence can be built into the system so that the camera is dynamically pointed to the expected location of the patient's face depending on the type of examination that the technologist has entered into the system. For example, if a chest radiograph is being obtained, the camera may be positioned to take a picture about 15° superior to the angle of the radiography tower. If an abdominal radiograph is being obtained, then a greater angle between the tower and the camera unit may be required. Intelligent face-tracking cameras are already commonly available; systems that can be trained to obtain images of other body parts need to be designed. Second, face recognition systems have matured significantly [6, 25], and such systems may further help interpreters quickly identify wrong-patient errors.
Such face recognition systems can be embedded at multiple points in the imaging chain. Finally, perhaps the biggest challenge is to evaluate the clinical impact of adding patient photographs. While these photographs may help with identifying wrong-patient errors, they may lead to unintended consequences: (1) photographs may distract the reader and impair reader efficiency; (2) photographs may provide conflicting information relative to the medical images and confuse the interpreter; and (3) the interpretations may become more subjective. Indeed, preliminary work by Turner and Hadas-Halpern [23] suggested that radiologists subjectively felt more empathy towards patients when their photographs were shown along with CT examinations, but it was also noted that the reports became objectively longer and a greater number of incidental findings were reported. A survey reported in an abstract by Weiss and Safdar [24] revealed that 67 % of surveyed radiologists were not in favor of including photographs with medical images. It is unclear what radiologists' responses would be if they were presented with an actual working tool. These issues deserve further investigation and are the subject of a forthcoming paper [22].

Acknowledgments

Dale Walker and Jessie Knighton provided data for Table 2. Diana Fouts and Eric Jablonowski provided assistance with the figures. Srini Tridandapani, PhD MD, was supported in part by PHS grant (UL1 RR025008, KL2 RR025009) from the Clinical and Translational Science Award program, National Institutes of Health, National Center for Research Resources, and in part by Award Number K23EB013221 from the National Institute of Biomedical Imaging and Bioengineering. Pamela Bhatti, PhD, was supported in part by PHS grant (UL1 RR025008, KL2 RR025009) from the Clinical and Translational Science Award program, National Institutes of Health, National Center for Research Resources.
The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institute of Biomedical Imaging and Bioengineering, the National Center for Research Resources, or the National Institutes of Health.

Open Access This article is distributed under the terms of the Creative Commons Attribution License, which permits any use, distribution, and reproduction in any medium, provided the original author(s) and the source are credited.

References

1. Aakre K, Johnson C: Plain-radiographic image labeling: a process to improve clinical outcomes. Journal of the American College of Radiology 3(12):934–953, 2006
2. Awai K, Murao K, Ozawa A, et al: Pulmonary nodules at chest CT: effect of computer-aided diagnosis on radiologists' detection performance. Radiology 230(2):347–352, 2004
3. Baker M, Bogoni L, Obuchowski N, et al: Computer-aided detection of colorectal polyps: can it improve sensitivity of less-experienced readers? Preliminary findings. Radiology 245(1):140–149, 2007
4. Beigelman-Aubry C, Raffy P, Yang W, Castellino R, Grenier P: Computer-aided detection of solid lung nodules on follow-up MDCT screening: evaluation of detection, tracking, and reading time. American Journal of Roentgenology 189(4):948–955, 2007
5. Bhalla M, Noble E, McLean F, Norris D, et al: Radiographic features useful for establishing patient identity from improperly labeled portable chest radiographs. Journal of Thoracic Imaging 9(1):35–40, 1994
6. Bowyer K, Chang K, Flynn P: A survey of approaches and challenges in 3D and multi-modal 3D+2D face recognition. Computer Vision and Image Understanding 101(1):1–15, 2006
7. Branstetter B, Uttecht S, Lionetti D, Chang P: SimpleDICOM suite: personal productivity tools for managing DICOM objects. RadioGraphics 27(5):1523–1530, 2007
8. Cesarani F, Martina F, Grilletto R, et al: Facial reconstruction of a wrapped Egyptian mummy using MDCT. American Journal of Roentgenology 183(3):755–758, 2004
9.
Committee on Quality of Health Care in America: Institute of Medicine, crossing the quality chasm: a new health system for the 21st century. National Academy Press, Washington, DC, 2001 10. Dayhoff R, Kuzmak P: Providing complete multimedia patient data to consulting radiologists, other specialists, and the refer- ring clinician. Journal of Digital Imaging 11(3):134–136, 1998. Suppl. 1 11. Dayhoff R, Maloney D, Kuzmak P, Shepard B: Integrating medical images into hospital information systems. Journal of Digital Imag- ing 4(2):87–93, 1991. Suppl. 1 12. Doi K: Current status and future potential of computer-aided diagnosis in medical imaging. The British Journal of Radiology 78, Spec. No. 1:S3–S19, 2005 13. Joint Commission on Accreditation of Healthcare Organizations 2010 National patient safety goals. Available at http://jointcommission.org/ 2012_npsgs_slides. Accessed on Jun 13, 2012. 14. Kuzmak P, Dayhoff R: The use of digital imaging and com- munications in medicine (DICOM) in the integration of imag- ing into the electronic patient record at the Department of Veterans Affairs. Journal of Digital Imaging 13(2):133–137, 2000 15. Lobo-Stratton G, Mercer T, Polman R: Patient exam data reconciliation tool. Journal of Digital Imaging 20(3):314– 319, 2007 884 J Digit Imaging (2013) 26:875–885 http://jointcommission.org/2012_npsgs_slides http://jointcommission.org/2012_npsgs_slides 16. Mettler Jr, F, Bhargavan M, Faulkner K, et al: Radiologic and nuclear medicine studies in the United States and worldwide: frequency, radiation dose, and comparison with other radiation sources—1950–2007. Radiology 253(2):520–531, 2009 17. Nait-Ali A, Cavaro-Menard C Eds: Compression of biomedical images and signals. Wiley, London, 2003 18. National Quality Forum. 30 Safe practices for better health care fact sheet. Available at http://www.ahrq.gov/qual/30safe.htm. Accessed on Aug. 13, 2011. 19. 
O’Connor S, Yao J, Summers R: Lytic metastases in thoracolum- bar spine: computer-aided detection at CT—preliminary study. Radiology 242(3):811–816, 2007 20. O’Toole J, Phillips P, Jiang F, et al: Face recognition algorithms surpass humans matching faces over changes in illumination. IEEE Transactions on Pattern Analysis and Machine Intelligence 29(9):1642–1646, 2007 21. Schimke N, Kuehler M, Hale J: Preserving privacy in structural neuro- images. Lecture Notes in Computer Science 6818:301–308, 2011 22. S. Tridandapani, S. Ramamurthy, S. Galgano, J. Provenzale. In- creasing rate of detection of wrong patient radiographs: use of photographs obtained at time of radiography. American Journal of Roentgenology 2013. In press. 23. Y. Turner, I. Hadas-Halpern. “The effects of including a patient’s photograph to the radiographic examination.” In: Radiological Society of North America scientific assembly and annual meeting. Oak Brook, Ill: Radiological Society of North America, 2008; 576. 24. Weiss F, Safdar N: Do digital pictures add to the interpretation of diagnostic imaging: a survey along the spectrum of experience. American Journal of Roentgenology 194:A4–A7, 2010 25. Zhao W, Chellappa R, Rosenfeld A: Face recognition: a literature survey. 
Integrating Patient Digital Photographs with Medical Imaging Examinations

work_3ilsmro43bepzdstfdruq4pfui ---- 1-s2.0-S016819231400210X-main.pdf

Agricultural and Forest Meteorology 198–199 (2014) 259–264. Contents lists available at ScienceDirect. Journal homepage: www.elsevier.com/locate/agrformet

Short communication: Estimation of leaf area index in understory deciduous trees using digital photography

Francesco Chianucci ∗, Andrea Cutini, Piermaria Corona, Nicola Puletti
Consiglio per la Ricerca e la sperimentazione in Agricoltura—Forestry Research Centre, viale Santa Margherita 80, 52100 Arezzo, Italy

Article info: Received 16 June 2014; received in revised form 28 August 2014; accepted 1 September 2014.
Keywords: Digital nadir photography; Foliage cover; Leaf angle distribution; Foliage projection coefficient; Leveled camera

Abstract: Fast and accurate estimates of understory leaf area are essential to a wide range of ecological applications. Indirect methods have mainly been used to estimate leaf area of the overstory, but their application in the understory remains largely unexplored. In this study we describe a combination of digital photographic methods to obtain rapid, reliable and non-destructive estimates of leaf area index of understory deciduous trees.
Nadir photography was used to estimate foliage cover, vertical gap fraction and foliage clumping index. Leveled photography was used to characterize the leaf angle distribution of the examined tree species. Leaf area index estimates obtained by combining the two photographic methods were compared with direct measurements obtained from harvesting (L). We applied these methods in Quercus cerris, Carpinus betulus and Fagus sylvatica stands. Foliage cover estimates derived from two nadir image classification methods were significantly correlated with leaf area index measurements obtained from harvesting. The leveled digital photographic method, previously tested in tall trees and field crops, provided reliable leaf angle measurements in understory tree species. Digital photography provided good indirect estimates of L. We conclude that digital photography is suitable for routine estimation and monitoring of understory leaf area, on account of its fast and cost-effective procedure. © 2014 Elsevier B.V. All rights reserved.

1. Introduction

Accurate estimates of leaf area index (L) are essential to the ecological characterization of forest ecosystems (Chen et al., 1997) and for modeling stand structure and dynamics (Xue et al., 2011). Because direct measurements of L in forests are destructive, time-consuming and often impractical, indirect optical methods have been widely used to indirectly estimate L from measurements of radiation transmittance through the canopy (for a review, see Bréda, 2003; Jonckheere et al., 2004; Welles and Norman, 1991). Beer–Lambert's law has often been used to model canopy transmittance (Eq. (1), based on Nilson, 1971):

P(θ) = exp(−G(θ) × Ω(θ) × Lt / cos θ)    (1)

where P(θ) is the canopy gap fraction, G(θ) is the foliage projection coefficient and Ω(θ) is the foliage clumping index at zenith angle θ. Lt is the plant area index, including foliar and woody materials.

∗ Corresponding author. Tel.: +39 0575 353021; fax: +39 0575 353490.
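To make the inversion concrete, Eq. (1) can be solved for Lt given a measured gap fraction. The sketch below is illustrative only (the function name and default parameter values are assumptions, not from the paper); G = 0.5 corresponds to a spherical leaf angle distribution and Ω = 1 to randomly dispersed foliage.

```python
import math

def plant_area_index(p, g=0.5, omega=1.0, theta=0.0):
    """Invert Eq. (1): Lt = -cos(theta) * ln(P(theta)) / (G(theta) * Omega(theta)).

    p     -- measured canopy gap fraction, 0 < p <= 1
    g     -- foliage projection coefficient G(theta); 0.5 assumes spherical leaf angles
    omega -- foliage clumping index Omega(theta); 1.0 assumes random dispersion
    theta -- view zenith angle in radians; 0.0 for nadir photography
    """
    if not 0.0 < p <= 1.0:
        raise ValueError("gap fraction must lie in (0, 1]")
    return -math.cos(theta) * math.log(p) / (g * omega)

# A nadir photograph showing 40 % gaps, under the default assumptions:
lt = plant_area_index(0.40)
```

Note that Lt is the plant area index; as the text states, it includes woody as well as foliar material, so a woody-area correction is still needed before reporting leaf area index L.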
E-mail addresses: francesco.chianucci@entecra.it, fchianucci@gmail.com (F. Chianucci).

Over the last few decades, several comparisons have been made between direct and indirect methods to estimate the overstory leaf area index in forest ecosystems, as demonstrated in the previously cited reviews. However, very few attempts have been made to estimate the leaf area index of the forest understory. Some studies showed that the understory leaf area may exceed that of the overstory (Law et al., 2001; Macfarlane et al., 2010). Accurate estimates of the understory leaf area index are also required for process-based canopy photosynthesis models (Beaudet et al., 2002; Jolly et al., 2004), for designing silvicultural systems aimed at promoting natural tree regeneration (Caccia and Ballaré, 1998) and for understanding energy and mass exchange processes (Xue et al., 2011). As a consequence, rapid, non-destructive and reliable methods are strongly needed to estimate the understory leaf area index. Thanks to recent technological development, digital cameras with high spatial and radiometric resolutions are becoming increasingly affordable and promote the use of digital photographic methods to indirectly estimate canopy structural variables such as gap fraction, foliage cover, leaf angle distribution and leaf area index. In addition, the digital image format is well suited to process photographs taken from above the canopy looking downward. For example, the use of vegetation indices has long been explored in crops and weed plants for indirectly estimating L from

http://dx.doi.org/10.1016/j.agrformet.2014.09.001 0168-1923/© 2014 Elsevier B.V. All rights reserved.

work_3jrlr2a4c5h77jp2tpkiqh2nsu ---- Original Paper

Cetuximab-induced skin exanthema: prophylactic and reactive skin therapy are equally effective

Thomas C. Wehler • Claudine Graf • Markus Möhler • Jutta Herzog • Martin R. Berger • Ines Gockel • Hauke Lang • Matthias Theobald • Peter R. Galle • Carl C.
Schimanski

Received: 11 July 2013 / Accepted: 19 July 2013 / Published online: 7 August 2013
© The Author(s) 2013. This article is published with open access at Springerlink.com

Abstract

Purpose: Treatment with cetuximab is accompanied by the development of an acneiform follicular skin exanthema in more than 80 % of patients. Severe exanthema (grade III/IV) develops in about 9–19 % of patients, necessitating cetuximab dose reduction or cessation.

Methods: The study presented was a retrospective analysis of 50 gastrointestinal cancer patients treated with cetuximab in combination with either FOLFIRI or FOLFOX. One cohort of 15 patients received an in-house reactive skin protocol upon development of an exanthema. A second cohort of 15 patients received a skin prophylaxis starting with the first dose of cetuximab, before clinical signs of toxicity. A third, historic group of 20 patients had received no skin prophylaxis or reactive treatment.

Results: 19/20 patients of the historic group developed a skin exanthema. Grade III/IV exanthema was observed six times. Forty percent discontinued cetuximab therapy. The average time to exanthema onset was 14.7 days. Applying the reactive skin protocol after the first occurrence of an exanthema, the exanthema was downgraded as follows: no patient developed a grade IV° exanthema, and two patients developed a grade II/III° exanthema. In the majority of cases, the reactive skin protocol controlled the exanthema (grade 0–I°). No dose reductions in cetuximab were necessary. Applying the prophylactic skin protocol from the beginning of cetuximab application was not superior to the reactive skin protocol.

Conclusions: Cetuximab-induced skin exanthema can be managed with a reactive protocol as effectively as with a prophylactic skin treatment. A prospective study with higher patient numbers is planned.
Keywords: Skin · Rash · Exanthema · EGFR · Cetuximab · Reactive therapy

Thomas C. Wehler and Claudine Graf have contributed equally to this work.

T. C. Wehler (corresponding author) · C. Graf · M. Theobald, Third Department of Internal Medicine, Johannes Gutenberg University of Mainz, Mainz, Germany. e-mail: thomas.wehler@unimedizin-mainz.de
M. Möhler · P. R. Galle · C. C. Schimanski, First Department of Internal Medicine, Johannes Gutenberg University of Mainz, Mainz, Germany. e-mail: c.schimanski@marienhospital-darmstadt.de
J. Herzog · C. C. Schimanski, Department of Internal Medicine, Marienhospital Darmstadt, Martinspfad 72, 64285 Darmstadt, Germany
M. R. Berger, Toxicology and Chemotherapy Unit, German Cancer Research Center (DKFZ), Im Neuenheimer Feld 581, 69120 Heidelberg, Germany
I. Gockel · H. Lang, Department of General and Abdominal Surgery, Johannes Gutenberg University of Mainz, Mainz, Germany

J Cancer Res Clin Oncol (2013) 139:1667–1672, DOI 10.1007/s00432-013-1483-4

Introduction

The inhibition of growth factor signaling pathways has proven to be an effective therapeutic option in a wide variety of tumor entities. Epidermal growth factor receptor (EGFR) inhibitors, for instance, are used in the treatment of colorectal cancer, head and neck cancer, non-small cell lung cancer, and pancreatic cancer, among other malignancies (Ciuleanu et al. 2012; Saltz et al. 2004; Luedke et al. 2012; Busam et al. 2001), and are in general well tolerated. However, EGFR inhibitors, such as cetuximab, have the potential to induce a skin exanthema of follicular origin that occurs in the majority of patients (Ciuleanu et al. 2012). This exanthema has an acneiform appearance (Busam et al. 2001) and can present itself as maculae, papulae, or small pustulae occurring on the face, décolleté, or back. In addition, it may be accompanied by symptoms such as burning or itching. Usually, it takes up to 2 or 3 weeks after the first intake of the EGFR inhibitor until the first signs of exanthema.
Most patients only suffer from mild exanthemas (grade I° and II°). Nevertheless, about a fifth of patients will experience severe exanthemas (grade III° or IV°). More severe or persistent exanthemas are eventually seen when EGFR antibodies are administered (Busam et al. 2001). Upon occurrence of a severe exanthema, a pause or dose reduction of the EGFR inhibitor can be considered. Discontinuation is necessary in approximately 10 % of patients (Busam et al. 2001). However, as patients are told that the development and intensity of an exanthema are associated with a better prognosis, the withdrawal of EGFR inhibitors from cancer therapy can be distressing for the respective patients (Stintzing et al. 2012). Male gender and age below 70 years are considered risk factors for the development of severe exanthemas (Lacouture et al. 2011). Bothersome for the patients concerned, there is no standardized therapy for skin toxicities. They are mostly managed with known strategies established for the therapy of acne. In the recent past, the first randomized trials analyzed the prophylaxis and therapy of cetuximab-induced skin exanthemas (Lacouture et al. 2011; Scope et al. 2007; Jatoi et al. 2008). Less severe and less symptomatic skin exanthemas were reported with a prophylactic intake of oral minocycline (Scope et al. 2007). However, this effect was only temporary and vanished in the long run. Similar results were reported by Jatoi et al., who analyzed the prophylactic effect of an oral tetracycline application on the development of skin exanthemas grade ≥II°. Efficacy was observed during the first weeks but not in a long-term setting (Scope et al. 2007). The mentioned antibiotics are commonly used for the treatment of acne. The clinical similarity between classical acne and an EGFR inhibitor-induced skin exanthema suggests that this medication might be successfully used for prevention or treatment of drug-induced exanthemas (Gammon et al.
1986; Meynadier and Alirezai 1998). However, the anti-inflammatory effects of tetra-/doxy-/minocycline might also support exanthema palliation (Meynadier and Alirezai 1998; Fujita et al. 2007). Another randomized phase II study analyzed the effect of a prophylactic versus reactive therapy during second-line panitumumab therapy in colorectal cancer (Lacouture et al. 2010). The prophylactic therapy, including lotion, oral doxycycline, topical hydrocortisone (1 %), and sun block, resulted in a significant reduction in skin exanthemas grade ≥II°. However, weak points of this study were the administration of topical cortisone in the prophylactic arm and the undefined reactive treatment regimen. In view of these data, we retrospectively compared three patient populations who had received cetuximab therapy with either no standard skin treatment (historic group), an in-house reactive skin protocol that was based on the therapeutic options for classical acne and the treatment recommendations of the cetuximab manufacturer, or a prophylactic treatment with cleansing syndet, topical metronidazole ointment, and doxycycline 100 mg twice per day. We were able to perform this retrospective analysis, as we had established a reactive skin protocol and a prophylactic skin protocol several years ago and documented all skin toxicities according to the National Cancer Institute's Common Terminology Criteria for Adverse Events version 3.0 (NCI CTCAE v3.0) and digital photography. The decision whether a patient received reactive therapy or prophylactic therapy was left to the patient.

Methods

Patients

Due to a lack of standardized guidelines, we adhered to an in-house reactive skin protocol and a prophylactic skin protocol derived from common acne therapy, which was offered to all patients treated with cetuximab. The skin protocols were offered for the first time to a patient in April 2008.
We retrospectively analyzed all patients receiving the reactive skin protocol under cetuximab-based chemotherapy starting from April 2008. However, we did not include the patient population reported earlier. All adverse events were routinely documented on a weekly basis as per the NCI CTCAE v3.0 criteria. The evaluation entailed physical examination; a weekly assessment of patient performance status and weight; and an assessment of adverse events, including gastrointestinal toxicity and exanthema development. Skin toxicities were documented weekly by digital photography (Fig. 1).

Reactive skin protocol

The reactive skin protocol was established as follows. Grade I° exanthema: topical cleansing syndet [Dermowas®] and topical metronidazole cream (7.5 %) on affected skin areas [Rosiced®]. Grade II° exanthema: see grade I° treatment, plus oral minocycline 50 mg twice daily. Grade III° exanthema: see grade II° treatment, plus topical corticoid prednicarbate cream (0.25 %) on affected skin areas [Dermatop®] (Fig. 2). As soon as a grade III° exanthema had improved to grade ≤II°, application of the topical corticoid was ceased. It was specified that patients were to be withdrawn from minocycline in the event of ≥grade II° nausea and/or vomiting. The latter was characterized by two to five episodes of nausea/vomiting within 24 h.

Prophylactic skin protocol

The prophylactic skin treatment protocol consisted of the application of a topical cleansing syndet [Dermowas®], a topical 7.5 % metronidazole ointment [Rosiced®], and doxycycline 100 mg (p.o.) twice per day (exanthema >II°: plus topical corticoid prednicarbate cream (0.25 %) [Dermatop®]). In case of rash >grade II°, the skin treatment was performed identically to the reactive skin protocol.

Statistical analysis

Our data were explored in a retrospective descriptive setting using the t test and the χ²-test.
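The cumulative, grade-based escalation of the reactive protocol described above can be summarized as a small lookup table. This is only an illustrative sketch of the logic (the data structure and function are hypothetical, not part of the study protocol itself):

```python
# Grade-specific additions of the reactive protocol; drug names are taken
# from the text, the structure is an illustrative assumption.
REACTIVE_PROTOCOL = {
    1: ["topical cleansing syndet (Dermowas)",
        "topical metronidazole cream 7.5% (Rosiced)"],
    2: ["oral minocycline 50 mg twice daily"],
    3: ["topical prednicarbate cream 0.25% (Dermatop)"],
}

def treatment_for_grade(grade):
    """Return the cumulative treatment list for an exanthema grade (1-3):
    each grade adds its measures on top of those of the lower grades."""
    if grade not in (1, 2, 3):
        raise ValueError("reactive protocol covers grades I-III")
    steps = []
    for g in range(1, grade + 1):
        steps.extend(REACTIVE_PROTOCOL[g])
    return steps
```

De-escalation works the same way in reverse: once a grade III° exanthema improves to ≤II°, the corticoid entry is simply dropped again.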
Results

Patients

A total number of 50 patients were treated with cetuximab. Twenty patients of the historic cohort did not receive a standard skin treatment. Fifteen patients of the second cohort were treated according to our in-house reactive skin protocol starting in June 2008. Upon retrospective evaluation, all patients had received treatment under this protocol for a minimum of 12 weeks. In the third cohort, 15 patients received a prophylactic skin treatment consisting of a topical cleansing syndet [Dermowas®], a topical metronidazole ointment [Rosiced®], and doxycycline 100 mg (p.o.) twice per day. None of the patients had a history of acne.

Fig. 1 TKI-associated acneiform exanthema, classified according to NCI CTCAE v3.0: grade I, asymptomatic light maculate/papular/pustular exanthema or erythema; grade II, symptomatic (itching/burning) intermediate maculate/papular/pustular exanthema or erythema on <50 % of the skin surface; grade III, symptomatic (itching/burning) severe maculate/papular/pustular exanthema or erythema on >50 % of the skin surface; grade IV, generalized exfoliative, ulcerative or bullous dermatitis.

Fig. 2 Prophylactic treatment regimen applied to group C and reactive treatment regimen applied to group B. [Flowchart: the reactive arm escalates from grade 0 (no specific therapy) through cleansing syndet and topical metronidazole (grade I), plus oral minocycline 2 × 50 mg/d (grade II), plus topical corticoid and optionally topical nadifloxacin (grade III); the prophylactic arm starts with daily topical vitamin K1 [Reconval®] plus minocycline 2 × 100 mg/d and escalates identically, with de-escalation in reverse order.]
The retrospective analysis was conducted according to the requirements of the local ethics committee and was performed in accordance with the ethical standards laid down in the 1964 Declaration of Helsinki and its later amendments. All patients suffered from a gastrointestinal adenocarcinoma stage UICC IV. All patients had a history of chemotherapy consisting of a standard initial cetuximab dose of 400 mg/m² and thereafter 250 mg/m² weekly, combined with either irinotecan or platinum-based chemotherapy. None of the patients received radiation.

Exanthema

During the first 12 weeks of therapy with cetuximab, 19/20 (95 %) patients in the historic cohort (group A) developed a skin exanthema: one patient (5 %) developed a grade IV° exanthema, 5 patients (25 %) experienced a grade III°, and 13 patients (65 %) a grade II° exanthema. Only one patient did not show clinical signs of exanthema (Fig. 3). Forty percent discontinued cetuximab therapy due to side effects (Fig. 4). Time to onset ranged from 1 to 4 weeks, and the average time to onset was 14.7 days (Fig. 5).

In the second cohort, receiving the reactive skin protocol (group B), all patients developed a skin exanthema (15/15; 100 %) within the first three months of cetuximab application: two patients (13 %) developed a grade III° exanthema, eight patients (53 %) experienced a grade II° exanthema, and five patients (33 %) a grade I° exanthema (Fig. 3). Time to onset ranged from 1 to 4 weeks, with an average time to onset of 13.2 days (Fig. 4). No patient had to discontinue cetuximab therapy (Fig. 5). No skin protocol-associated adverse events occurred. No patient terminated the in-house reactive skin protocol.
During the first 12 weeks of therapy with cetuximab in the third cohort, receiving the prophylactic regimen (group C), all patients developed a skin exanthema (15/15; 100 %): one patient (7 %) developed a grade IV° exanthema, and one patient (7 %) developed a grade III° exanthema, while 9 patients (60 %) experienced a grade II° exanthema and four patients (27 %) a grade I° exanthema (Fig. 3). Time to onset ranged from 1 to 4 weeks, and the average time to onset was 13.9 days (Fig. 5). One patient had to discontinue cetuximab therapy (Fig. 4).

A comparison of maximum exanthema (grade 0–I versus grade II–IV) in the three cohorts showed a significant difference between the historic cohort and the "reactive treatment" cohort (p = 0.027). Similar results exist between the "historic" cohort (group A) and the "prophylactic treatment" cohort (group C; p = 0.069). However, there is no significant difference between groups B and C (p = 0.69).

Fig. 3 Occurrence of symptoms: occurrence of maximum acneiform exanthema in the historic cohort A compared to the "reactive treatment" cohort B and the "prophylactic treatment" group C.

Fig. 4 Frequency of therapy interruption: the "historic" cohort shows a frequency of 40 % therapy interruption, compared to 0 % in cohort B and 7 % in cohort C.

Fig. 5 Time to occurrence of ≥grade II exanthema: no significant difference between the three cohorts exists in terms of time to first exanthema occurrence.

Discussion

To our knowledge, this is the first study comparing a reactive skin protocol with a prophylactic skin therapy in cetuximab-treated patients.
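The cohort comparison of maximum exanthema severity (grade 0–I versus grade ≥II) can be re-derived from the counts reported in the Results with a plain Pearson χ²-test. This is an illustrative re-analysis using only the published numbers; the helper functions are not from the paper, and the paper does not state whether a continuity correction was applied, so the exact p-value is an assumption.

```python
import math

def chi2_2x2(a, b, c, d):
    """Pearson chi-squared statistic (no continuity correction)
    for the 2x2 contingency table [[a, b], [c, d]]."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

def p_value_1dof(x):
    """Upper-tail p-value of the chi-squared distribution with 1 degree of freedom."""
    return math.erfc(math.sqrt(x / 2.0))

# Maximum exanthema, grade 0-I vs. grade >=II (counts from the Results):
# historic group A: 1 vs. 19; reactive group B: 5 vs. 10
stat = chi2_2x2(1, 19, 5, 10)   # ~4.84
p = p_value_1dof(stat)          # ~0.028, close to the reported p = 0.027
```

The uncorrected statistic lands near the reported p = 0.027, which suggests the authors used the plain Pearson test rather than a Yates-corrected one.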
Although two-thirds of the patients in the "reactive treatment" cohort developed a symptomatic grade ≥II° exanthema shortly after initiation of cetuximab therapy, all patients were stabilized and had either no exanthema or only a very light grade I° exanthema after initiation of the reactive skin therapy. Noteworthy is that the "prophylactic treatment" cohort showed equally effective, but not superior, results in preventing toxicity grade ≥II. Thus, prophylactic use of topical skin treatment and oral doxycycline does not seem to improve efficacy, but might be easier to handle in everyday practice and might also improve compliance. However, patients will have a higher intake of medication without superior results. As also presented earlier by our group, the use of the simple reactive skin protocol presented herein can prevent the exacerbation of a cetuximab-induced follicular acneiform exanthema. The application of this protocol prevented cetuximab dose reduction or cessation in patients at risk. Protocol-induced adverse effects were not observed.

Only few reports about the therapeutic options for anti-EGFR skin exanthemas are published. In most cases, prophylactic approaches were chosen. The prophylactic application of oral antibiotics (tetracycline or minocycline) alone was effective for the early phase of the exanthema during the first month of anti-EGFR treatment (Scope et al. 2007; Jatoi et al. 2008). A decrease in the incidence of grade ≥II° exanthemas was noted. In addition, tetracycline- and minocycline-treated patients reported other favorable symptomatic effects, including less itching, less burning and stinging, and less skin irritation compared to patients treated with placebo. However, these positive effects vanished after longer application. Similarly, topical usage of pimecrolimus, a calcineurin inhibitor, did not result in symptomatic relief or improvement of the severity of the cetuximab-induced exanthema (Scope et al. 2009).
As anti-EGFR strategies, such as cetuximab, are part of a long-term cancer therapy, isolated usage of oral antibiotics seems to be an insufficient approach and has to be seen critically. The STEPP study combined oral doxycycline with topical hydrocortisone (1 %) in a prophylactic setting (Lacouture et al. 2010). This approach resulted in a dramatic reduction in more severe exanthema and a prolongation of time to onset of grade ≥II° exanthemas. However, this study has often been criticized, as all patients randomized in the prophylaxis arm had been exposed to topical hydrocortisone. In our reactive skin protocol setting, only 13 % of patients, namely those who developed a grade III° exanthema, actually needed topical corticoid, and only for a short period of time (maximum 3 weeks). Thus, it can be hypothesized that many patients participating in the STEPP study might have been exposed to hydrocortisone unnecessarily.

The relevant limitation of our study is its retrospective setting and the number of patients treated. Thus, it is difficult to derive any validated clinical recommendations from this report. A prospective study will be necessary in order to confirm our observations. There remains a compelling need to continue research on how best to prevent and palliate exanthemas that occur from anti-EGFR therapy.

Acknowledgements This study was intellectually supported by Alma Steinbach, Andrea Mohr, and Michael Baum; Merck Pharma GmbH, 64271 Darmstadt, Germany.

Conflict of interest The authors declare that they have no conflict of interest.

Open Access This article is distributed under the terms of the Creative Commons Attribution License which permits any use, distribution, and reproduction in any medium, provided the original author(s) and the source are credited.
References

Busam KJ, Capodieci P, Motzer R, Kiehn T, Phelan D, Halpern AC (2001) Cutaneous side-effects in cancer patients treated with the antiepidermal growth factor receptor antibody C225. Br J Dermatol 144:1169–1176
Ciuleanu T, Stelmakh L, Cicenas S, Miliauskas S, Grigorescu AC, Hillenbach C, Johannsdottir HK, Klughammer B, Gonzalez EE (2012) Efficacy and safety of erlotinib versus chemotherapy in second-line treatment of patients with advanced, non-small-cell lung cancer with poor prognosis (TITAN): a randomised multicentre, open-label, phase 3 study. Lancet Oncol 13:300–308
Fujita M, Harada E, Ikegame S, Ye Q, Ouchi H, Inoshima I, Nakanishi Y (2007) Doxycycline attenuated lung injury by its biological effect apart from its antimicrobial function. Pulm Pharmacol Ther 20:669–675
Gammon WR, Meyer C, Lantis S, Shenefelt P, Reizner G, Cripps DJ (1986) Comparative efficacy of oral erythromycin versus oral tetracycline in the treatment of acne vulgaris. A double-blind study. J Am Acad Dermatol 14:183–186
Jatoi A, Rowland K, Sloan JA, Gross HM, Fishkin PA, Kahanic SP, Novotny PJ, Schaefer PL, Johnson DB, Tschetter LK et al (2008) Tetracycline to prevent epidermal growth factor receptor inhibitor-induced skin rashes: results of a placebo-controlled trial from the North Central Cancer Treatment Group (N03CB). Cancer 113:847–853
Lacouture ME, Mitchell EP, Piperdi B, Pillai MV, Shearer H, Iannotti N, Xu F, Yassine M (2010) Skin toxicity evaluation protocol with panitumumab (STEPP), a phase II, open-label, randomized trial evaluating the impact of a pre-emptive skin treatment regimen on skin toxicities and quality of life in patients with metastatic colorectal cancer.
J Clin Oncol 28:1351–1357
Lacouture ME, Anadkat MJ, Bryce J, Bensadoun RJ, Chan A, Epstein JB, Eaby-Sandy B, Murphy BA, MASCC Skin Toxicity Study Group (2011) Clinical practice guidelines for the prevention and treatment of EGFR inhibitor-associated dermatologic toxicities. Support Care Cancer 19:1079–1095
Luedke E, Cristina Jaime-Ramirez A, Bhave N, Carson WE III (2012) Monoclonal antibody therapy of pancreatic cancer with cetuximab: potential for immune modulation. J Immunother 35:367–373
Meynadier J, Alirezai M (1998) Systemic antibiotics for acne. Dermatology 196:135–139
Saltz LB, Meropol NJ, Loehrer PJ Sr, Needle MN, Kopit J, Mayer RJ (2004) Phase II trial of cetuximab in patients with refractory colorectal cancer that expresses the epidermal growth factor receptor. J Clin Oncol 22:1201–1208
Scope A, Agero AL, Dusza SW, Myskowski PL, Lieb JA, Saltz L, Kemeny NE, Halpern AC (2007) Randomized double-blind trial of prophylactic oral minocycline and topical tazarotene for cetuximab-associated acne-like eruption. J Clin Oncol 25:5390–5396
Scope A, Lieb JA, Dusza SW, Phelan DL, Myskowski PL, Saltz L, Halpern AC (2009) A prospective randomized trial of topical pimecrolimus for cetuximab-associated acnelike eruption. J Am Acad Dermatol 61:614–620
Stintzing S, Kapaun C, Laubender RP, Jung A, Neumann J, Modest DP, Giessen C, Moosmann N, Wollenberg A, Kirchner T, Heinemann V (2012) Prognostic value of cetuximab-related skin toxicity in metastatic colorectal cancer (mCRC) patients and its correlation with parameters of the EGFR signal transduction pathway: results from a randomized trial of the GERMAN AIO CRC Study Group. Int J Cancer.
doi:10.1002/ijc.27654

work_3maemwyefzdydcd4vszps6vefy ---- [PDF] Open-label evaluation of the skin-brightening efficacy of a skin-brightening system using decapeptide-12 | Semantic Scholar

DOI: 10.3109/14764172.2012.672745. Corpus ID: 25992602

Open-label evaluation of the skin-brightening efficacy of a skin-brightening system using decapeptide-12
@article{Kassim2012OpenlabelEO, title={Open-label evaluation of the skin-brightening efficacy of a skin-brightening system using decapeptide-12}, author={Andrea T Kassim and M. Hussain and D. Goldberg}, journal={Journal of Cosmetic and Laser Therapy}, year={2012}, volume={14}, pages={117 - 121} }
Andrea T Kassim, M. Hussain, D. Goldberg. Published 2012, Journal of Cosmetic and Laser Therapy.

This prospective study evaluated the safety and efficacy of decapeptide-12 in conjunction with an antioxidant cleanser, a glycolic-acid-containing facial moisturizer and a broad-spectrum sunscreen in the treatment of facial hyperpigmentation associated with chronic photodamage. Fifteen female subjects with Fitzpatrick skin types I through IV and documented photodamage were entered into the study, of whom 13 completed the study.
Results were obtained at weeks 4, 8, 12, 18 and 24 and were assessed by… Expand View on Taylor & Francis envymedical.files.wordpress.com Save to Library Create Alert Cite Launch Research Feed Share This Paper 3 Citations View All Topics from this paper Lumixyl Melanins Hyperpigmentation Sunscreening Agents Therapeutic procedure melanin biosynthetic process Antioxidants Drug Kinetics Face CLEANSER Identifier 3 Citations Citation Type Citation Type All Types Cites Results Cites Methods Cites Background Has PDF Publication Type Author More Filters More Filters Filters Sort by Relevance Sort by Most Influenced Papers Sort by Citation Count Sort by Recency Open-label evaluation of a novel skin brightening system containing 0.01% decapeptide-12 in combination with 20% buffered glycolic acid for the treatment of mild to moderate facial melasma. S. P. Ramírez, A. Carvajal, J. Salazar, G. Arroyave, A. M. Flórez, H. F. Echeverry Medicine Journal of drugs in dermatology : JDD 2013 2 PDF Save Alert Research Feed Evaluation of a Novel Skin Brightening System Containing 0 . 01 % Decapeptide-12 for the Treatment of Mild to Moderate Facial Ramírez, Arroyave, AM Flórez, H. Echeverry, C. Ramírez 2015 PDF Save Alert Research Feed Chapter 40: Skin Hyperpigmentation and Photoaging K. O’Neal, Kimberly M. Crosby Medicine 2017 Save Alert Research Feed References SHOWING 1-10 OF 21 REFERENCES SORT BYRelevance Most Influenced Papers Recency A Double-Blind, Randomized Clinical Trial of Niacinamide 4% versus Hydroquinone 4% in the Treatment of Melasma Josefina Navarrete-Solís, J. P. Castanedo-Cázares, +5 authors B. Moncada Medicine Dermatology research and practice 2011 57 PDF View 1 excerpt, references background Save Alert Research Feed A split-face, double-blind, randomized and placebo-controlled pilot evaluation of a novel oligopeptide for the treatment of recalcitrant melasma. B. 
Hantash, Felipe Jimenez Medicine Journal of drugs in dermatology : JDD 2009 30 PDF Save Alert Research Feed Hypopigmenting agents: an updated review on biological, chemical and clinical aspects. F. Solano, S. Briganti, M. Picardo, G. Ghanem Biology, Medicine Pigment cell research 2006 550 Save Alert Research Feed Chemical and instrumental approaches to treat hyperpigmentation. S. Briganti, E. Camera, M. Picardo Medicine, Chemistry Pigment cell research 2003 652 Save Alert Research Feed Widespread use of toxic skin lightening compounds: medical and psychosocial aspects. Barry Ladizinski, Nisha Mistry, R. Kundu Medicine Dermatologic clinics 2011 89 Save Alert Research Feed Topical treatments for melasma and postinflammatory hyperpigmentation. C. Lynde, J. Kraft, C. Lynde Medicine Skin therapy letter 2006 68 Save Alert Research Feed Survey and mechanism of skin depigmenting and lightening agents Shoukat Parvez, Moonkyu Kang, +4 authors H. Bae Chemistry, Medicine Phytotherapy research : PTR 2006 297 Highly Influential View 7 excerpts, references background Save Alert Research Feed Depigmenting action of hydroquinone depends on disruption of fundamental cell processes. K. Penney, C. Smith, J. Allen Biology, Medicine The Journal of investigative dermatology 1984 74 Save Alert Research Feed Short-sequence oligopeptides with inhibitory activity against mushroom and human tyrosinase. Anan Abu Ubeid, L. Zhao, Y. Wang, B. Hantash Chemistry, Medicine The Journal of investigative dermatology 2009 53 PDF Save Alert Research Feed hydroquinone-dispensing practice? Aesthetic Trends & Technologies Special Report, 2011 ... 1 2 3 ... Related Papers Abstract Topics 3 Citations 21 References Related Papers Stay Connected With Semantic Scholar Sign Up About Semantic Scholar Semantic Scholar is a free, AI-powered research tool for scientific literature, based at the Allen Institute for AI. 
work_3mtdixeorjhr3d36bjevbelyee ---- Impact of Fungiform Papillae Count on Taste Perception and Different Methods of Taste Assessment and their Clinical Applications: A comprehensive review

*Asim M. Khan,1 Saqib Ali,1 Reshma V. Jameela,1 Muhaseena Muhamood,1 Maryam F. Haqh2

Sultan Qaboos University Med J, August 2019, Vol. 19, Iss. 3, pp. e184–191, Epub. 5 Nov 19. Submitted 23 Dec 18; Revision Req. 6 Feb 19; Revision Recd. 10 Mar 19; Accepted 12 Apr 19.

1Department of Biomedical Dental Science, College of Dentistry, Imam Abdulrahman Bin Faisal University, Dammam, Saudi Arabia; 2Department of Pedodontics & Preventive Dentistry, Oxford Dental College & Hospital, Bangalore, India. *Corresponding Author's e-mail: amkhan@iau.edu.sa

Abstract: Fungiform papillae are raised lingual structures which contain taste buds and thus play an important role in taste perception. These structures vary in number due to their relative sensitivity to a range of systemic and local factors which affect the dorsum of the tongue. Taste sensation can be measured using both chemical and electrical methods; however, the number of fungiform papillae has a direct effect on chemogustometric and electrogustometric values during evaluation. This review provides a general overview of fungiform papillae, their quantification methods and the various factors which may affect these structures. In addition, numerous methods of recording taste sensation and their clinical applications are highlighted.

Keywords: Sensation; Taste; Taste Perception; Tongue; Taste Buds; Investigative Techniques.

This work is licensed under a Creative Commons Attribution-NoDerivatives 4.0 International License. https://doi.org/10.18295/squmj.2019.19.03.003

Fungiform papillae are mushroom-shaped structures located on the dorsum of the anterior two-thirds of the tongue.1–3 Due to their rich capillary network, larger size and patchy distribution, fungiform papillae can be identified as reddish dots that contrast with the smaller and more numerous filiform papillae. According to cadaveric data, each individual has an estimated 200 fungiform papillae, which are denser on the tip of the tongue than in the middle region (29 versus 7–8 papillae per cm²).2,3 Each fungiform papilla carries between 0 and 20 taste buds, with an average of 2–4 buds; however, not all fungiform papillae contain taste buds and function as taste receptors at a given time. There are approximately 2,500 taste buds in the fungiform papillae of the anterior two-thirds of the tongue, although this number has been reported to vary by approximately 18-fold between individuals.1–3

Based on their morphology, fungiform papillae are classified into four types that represent varying degrees of pathological severity.4 Type 1 fungiform papillae are the healthiest and are egg-shaped or long and elliptical and devoid of surface thickness, while type 2 are slightly thicker. Type 3 are thick and have an irregular surface, while type 4, which represent the most pathological state, are flat and have an atrophic surface.4

Most studies investigating taste function utilise the fungiform papillae count as a tool to indicate either decreased or increased taste sensitivity.5–7 To this end, the fungiform papillae count has been correlated with various electrogustometric and chemogustometric thresholds. This review provides an overview of fungiform papillae, different methods for their quantification and the factors affecting these structures. In addition, various methods for recording taste sensation and their clinical applications are also discussed.

Taste Receptors

The average human has a total of approximately 10,000 taste buds.8 These are located in the mucosa of the epiglottis, palate and pharynx as well as in fungiform and circumvallate papillae on the tongue. Fungiform papillae are most numerous around the tip of the tongue, whereas circumvallate papillae are arranged in a 'V' pattern on the posterior third of the tongue immediately anterior to the sulcus terminalis.8 The taste buds on the fungiform papillae are mostly embedded on the surface of the papilla. In contrast, the larger circumvallate papillae have approximately 100 taste buds located in their side walls.8 The minute conical filiform papillae spread over the remaining tongue surface lack taste buds.
Taste buds are composed of ovoid bodies measuring 50–70 µm.8 Each bud consists of basal cells as well as type 1, 2 and 3 cells. Type 1 and 2 cells (i.e. sustentacular cells) support type 3 cells (i.e. the main gustatory cells) and are connected with sensory nerve fibres.8 Type 3 cells open into the oral cavity via an opening called the taste pore, which contains microvilli projecting from the taste cells. The necks of all three cell types are connected to each other and to the surrounding epithelial cells by tight junctions, so that only the microvilli are exposed to fluids in the oral cavity [Figure 1A].9 Each taste bud is innervated by 50 nerve fibres which connect up to five taste buds each.8 The cells surrounding the taste buds give rise to basal cells which in turn differentiate into new taste receptor cells, replacing the older ones every 10 days.8 Taste buds degenerate and finally disappear when the sensory nerve supplying them is severed. However, if the nerve regenerates, the surrounding cells become organised into new taste buds; this is attributed to the influence of a chemical inductive factor from the redeveloping nerve fibre.8

Taste Pathway

The chorda tympani is a branch of the facial nerve which innervates the taste buds on the anterior two-thirds of the tongue, while the posterior third of the tongue and taste buds situated in other areas (i.e. the palate and epiglottis) are innervated via the glossopharyngeal and vagus nerves, respectively.10 Taste sensation is carried to the gustatory area of the nucleus tractus solitarius in the medulla oblongata by slow-conducting myelinated nerve fibres. From the nucleus tractus solitarius, the neurons relay into the medial lemniscus of the medulla oblongata.10 The axons of these second-order neurons then transmit directly to the ventral posteromedial nucleus of the thalamus, where they travel via thalamic radiation to the anterior part of the ipsilateral insula, which facilitates conscious perception of taste and taste discrimination [Figure 1B].10 However, in some individuals, all or some of these second-order neurons may cross over to the contralateral side and synapse at the thalamus, projecting to the contralateral cerebral cortex, while the remaining fibres continue to project to the ipsilateral cerebral cortex, thus resulting in bilateral representation.11

Figure 1: Annotated diagrams of (A) taste bud morphology and (B) the taste pathway. Figure 1A was modified and reproduced with permission from Gowthamarajan et al.9

Taste Sensation

Previously, it was believed that different areas of the tongue were responsible for perceiving the five different categories of taste (sweet, salty, bitter, sour and umami).12 However, it is now understood that the process of converting different taste stimuli into signals is not restricted to different zones of the tongue. Flavours are identified as a result of a backdrop of complex chemical mechanisms which involve ionic exchange through specific channels and secondary messenger activity.8,13 This process is known as taste signal transduction.

Supertaster Phenomenon

Supertaster phenomenon refers to the existence of individuals who experience tastes with far greater intensity than normal, to the degree that they can perceive usually tasteless substances, such as phenylthiocarbamide (PTC) and propylthiouracil (PROP).
This concept was first described by Fox in 1932, when it was noted that PTC was tasteless to some individuals yet perceived as bitter by others.14 While assessing the PTC taste response of over 2,500 subjects, Fox found that 65% of individuals recorded the taste as bitter, while 28% described it as tasteless and the remaining 6% described the taste in other ways.14 In recent years, research has revealed that this phenomenon is genetic in nature.13,15 This has been attributed to varying genotypes of the taste receptor 2 (TAS2R) member 38 gene.15–17 McMahon investigated the supertaster phenomenon by assessing PROP taste status and papillae count among subjects who were able to appreciate unsweetened grapefruit juice; the results showed increased fungiform papillae density among supertasters, with non-supertasters having the lowest mean papillae densities.17 However, more recent research has shown that no correlation exists between fungiform papillae density and differences in taste perception.18,19

Methods of Counting Fungiform Papillae

Digital Photography

Nasri-Heir et al. utilised digital photography to assess fungiform papillae count by using a camera with a resolution of at least five megapixels to photograph the tongue alongside a millimetre slide rule.6 Subsequently, using a computer, a grid was superimposed over the image and stretched to coincide with the markings on the ruler in order to create 1 cm²-sized boxes. The papillae within these boxes were then counted and averaged to arrive at a mean value.6 This technique has also been supplemented with image analysis software [Figure 2].7

Contact Endoscopy and Staining

Pavlos et al. evaluated fungiform papillae density using contact endoscopy and the application of a methylene blue stain.20 First, a contact technique without staining was performed to aid imaging of the subepithelial capillaries; subsequently, the stain was applied to the epithelia and taste pores using a 1 cm²-sized piece of filter paper placed around the tip of the tongue. The fungiform papillae were easily distinguishable as they were lightly stained in comparison to the darker filiform papillae.20

Food Colouring Dye

Zhang et al. used food dye (brilliant blue FCF [E133]) to stain the tip of the tongue.21 The dye was transferred to the tongue using a 6-mm circular piece of filter paper. The stained area was then photographed using a digital camera and the photographs subjected to analysis using Adobe Photoshop® software (Adobe Inc., San Jose, California, USA).21 An example of this technique is shown in Figure 3.19

Denver Papillae Protocol

Nuessle et al. proposed a standardised method known as the Denver papillae protocol to define and prioritise the characteristics of fungiform papillae.22 This method involved manually counting the number of papillae in a 10-mm circular section of the tongue stained with a blue-coloured dye. Using image analysis software and digital photography, the fungiform papillae were then characterised based on their shape, colour, size and height.22 Individual fungiform papillae were rejected from the count if they were amorphous, stained blue in comparison to their surroundings, less than 0.5 mm in length or lower in height than the surface of the tongue floor or the adjacent papillae.22

Automated Detection

Eldeghaidy et al.
reported a method to automatically detect fungiform papillae on the dorsum of the tongue using MATLAB® image analysis software (MathWorks, Natick, Massachusetts, USA).18 The anterior 2 cm section of the dorsum of the tongue was automatically divided into eight regions by the software, within which fungiform papillae were counted using various algorithms. However, one limitation of this methodology is that the software may inaccurately assess the diameter of the papillae because it considers all fungiform papillae to be exactly circular in shape.18 Discrepancies have therefore been noted when the diameters of fungiform papillae are validated manually.18,23

Figure 2: Image of fungiform papillae quantification using digital photography and computer software. Reproduced with permission from Khan et al.7

Factors Affecting Fungiform Papillae Count

Nutrition and age are two major factors that affect the number of papillae present on the dorsum of the tongue.24,25 Certain nutrients such as vitamin B12 and folate are important to maintain an optimum balance between cell regeneration and deterioration;24 this is particularly important as the taste cells in the papillae have a high turnover rate.8 In addition, vitamin A deficiency may result in keratinisation and the loss of integrity of such cells in the epithelium. Zinc deficiency can also have similar consequences.24 Increasing age also slows down cellular regeneration, with 70% fewer taste buds recorded in individuals aged 70 years compared to those aged 30 years.24 This also explains why many older individuals have decreased taste sensitivity.25 Due to their high metabolic activity, the cells forming filiform and fungiform papillae are also sensitive to enzyme, circulation or nutrient disturbances which can lead to atrophy. During atrophy, filiform papillae are more vulnerable to such disturbances than fungiform papillae; moreover, following atrophy, fungiform papillae regenerate faster than filiform papillae.24

Saito et al. reported that fungiform papillae atrophied among patients with normal preoperative gustatory function after sectioning of the chorda tympani nerve; subsequently, the papillae recovered after regeneration or re-adaptation of the nerve.26 Spielman et al. biopsied fungiform papillae from the dorsum of the tongue and reported papillae regeneration at the same site after 40 days.27 Nasri-Heir et al. reported that the fungiform papillae count differed between patients with burning mouth syndrome and healthy controls (27.554 ± 2.122 papillae per cm² versus 31.575 ± 3.112 papillae per cm²).6 Zhang et al. reported an inverse correlation between the number of fungiform papillae and taste thresholds using sucrose among young male subjects.21 Numerous other factors also affect the density of fungiform papillae, such as smoking.20 Various other potential causes of lingual papillae loss are listed in Table 1.28

Figure 3: Photographs of tongue stained with brilliant blue FCF before (A) and after (B) fungiform papillae quantification. Reproduced with permission from Jilani et al.19

Table 1: Factors potentially resulting in lingual papillae loss28

Nutritional deficiencies:
• Iron-deficiency anaemia
• Plummer-Vinson syndrome
• Pernicious anaemia
• Anaemia associated with parasitic infections (e.g. ascariasis and bilharziasis)
• Tropical sprue or coeliac disease
• Chronic alcoholism
• Vitamin B deficiency (especially vitamin B2, B6, B12, folic acid and nicotinic acid)

Peripheral vascular diseases:
• Diabetic angiopathy
• Vasculitis in patients with SLE
• Endarteritis obliterans
• Syphilitic glossitis
• Obliteration of the small blood vessels (e.g. in scleroderma or submucous fibrosis)
• Localised vascular insufficiency in elderly patients

Local factors:
• Frictional irritation to the tip and lateral borders of the tongue
• Atrophic lichen planus
• Epidermolysis bullosa or ulceration which heals with scarring
• Long-standing xerostomia

Therapeutic agents:
• Drugs that interfere with the growth and maturation of the epithelium (e.g. cyclosporine)
• Drugs that induce candidosis (e.g. antibiotics and steroids)
• Drugs that induce xerostomia (e.g. anticholinergic drugs and radiotherapy)

SLE = systemic lupus erythematosus.

Methods of Evaluating Taste Sensation

The two primary methods of evaluating taste sensation are chemogustometry and electrogustometry.

Chemogustometry

Chemogustometry involves the application of chemical solutions to the oral mucosa; subsequently, the degree to which any of the five types of taste presents itself is evaluated by equating the taste with that of a reference material.29 Thus, taste detection thresholds are evaluated by asking the subject to taste a particular substance in various concentrations. This is done systematically by either increasing or decreasing the dilution of the substance. Water is used as a control during the process, with the subject ideally being able to discriminate between water and the diluted test solution.29 In order to appreciate any differences in the taste intensity of the substance being tested, there should be at least a 30% difference in dilution between solutions.
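The 30% rule above fixes how a test dilution ladder can be constructed. The following is a minimal illustrative sketch, not taken from the review: the function name and the default halving factor are assumptions, and the quinine example simply restates the 1 g in 2 L (0.5 g/L) reference concentration given later in the text.

```python
def dilution_series(stock: float, steps: int, factor: float = 0.5) -> list[float]:
    """Return a descending series of test concentrations.

    Successive concentrations differ by `factor`; any factor of 0.7 or
    less keeps adjacent solutions at least 30% apart in concentration,
    as the protocol requires.
    """
    if not 0.0 < factor <= 0.7:
        raise ValueError("factor must be in (0, 0.7] to keep a >=30% step")
    return [stock * factor ** i for i in range(steps)]

# Illustrative quinine hydrochloride ladder from a 0.5 g/L (1 g in 2 L) stock
series = dilution_series(0.5, steps=6)
```

Presenting such a ladder in ascending concentration order then yields the detection threshold as the weakest solution the subject can distinguish from water.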
Bitterness is detected by the action of the bitter substance on TAS2R receptors, a type of G protein-linked receptor.16 Chemically, the threshold for bitterness is assessed using quinine hydrochloride at a dilution of 1 g in 2 L of water.23 Saltiness is evaluated by assessing the action of the salty substance on epithelial sodium channel (ENaC) receptors using a diluted solution of sodium chloride. Taste transduction is thus induced by the influx of sodium ions through the ENaC receptors, which facilitates the release of glutamate to depolarise neurons. Sourness, measured using diluted hydrochloric acid, also acts via the ENaC receptors, which allow the inflow of protons and trigger neurons.8,23 Sweetness is assessed using a diluted sweet substance such as sucrose or artificial sweeteners like saccharin, which differ in taste due to their distinct chemical structures. Taste transduction occurs via another G protein receptor known as taste receptor 1. The final type of taste, umami, is savoury in nature. Umami taste is induced by the activation of metabotropic glutamate receptor 4 as a result of stimulants such as monosodium glutamate, with inosine monophosphate and guanosine monophosphate acting as agonists during umami taste induction.8,23

Taste strips and disks

One method of chemogustometric evaluation involves presenting tastants to subjects in a clinical setting via soaked elongated strips or circular disks.30,31 These taste strips or disks are usually made up of pullulan (α-1,4- and α-1,6-glucan) combined with the polymer hydroxypropyl methylcellulose. Such strips or disks dissolve in the oral cavity and do not need to be retrieved afterwards.30

The three-drop method

Using this method, three drops of a chemical tasting solution are placed in the middle of the dorsum of the tongue, approximately 1.5 cm from the tip.31 One drop contains the actual tasting solution and the other two are distilled water drops which act as a solvent. The testing process usually starts with the lowest concentration, with the solution increasing in concentration until the subject's tasting threshold is detected.31

Electroencephalography

Event-related potentials refer to the electrical responses evoked in the brain when an individual is presented with a stimulus. As such, after applying chemical gustatory stimuli to the tongue, electroencephalography can show cortical brain activity associated with the taste sensation, along with its topographical distribution.32

Advantages and disadvantages

Chemogustometry consists of using an array of chemical solutions in multiple concentrations to assess taste sensation. Some advantages of this method include the long shelf-life of the materials needed, the ease of administration, the rapidity of testing and the fact that this method allows for evaluation of each side of the tongue separately.31 However, this method is qualitative in nature and more cumbersome in a clinical setting in comparison to electrogustometry.

Electrogustometry

Electrogustometry quantifies taste and measures the threshold of taste sensation by passing a controlled current through the tongue using electrodes. As cathodal stimuli do not produce any significant recordable sensation, a weak anodal current is used.33 The stimulus is a constant direct current of predefined amplitude and duration. The taste perceived during electrogustometry is described as sour-metallic or 'battery'-like and is attributed to the absorption of protons (or hydronium ions) released by the stimulus.33

In 1754, Sulzer first described the 'ferro-sulphate'-like metallic taste which occurs when two dissimilar metals come into contact with the tongue.34 In 1955, Skouby invented the first electrogustometer based on taste thresholds determined by placing chemical solutions on the tongue.35 Subsequently, the measurement of taste using an electric stimulus was reported by Krarup in 1958.34 Over time, electrogustometers have evolved in terms of their design, electrode composition and size.33,35 Electrogustometry is now a viable clinical tool to estimate taste function, though this method is yet to be commonly used.

When an electrode from an electrogustometer is placed on the tongue, two types of sensations are induced: tingling and taste.33 These two sensations are conducted via different nerves, with taste perceived as a visceral sensation by the chorda tympani nerve while the tingling is a mechanical sensation conducted by the lingual nerve. Therefore, electrogustometry aids in differentiating between the chorda tympani and lingual nerves and is especially important in determining the integrity of the neural pathway.6,33 However, since electrogustometric taste threshold measurements are subjective, uniformity must be maintained in the environment and set-up of the test and in the way the subject is trained to respond to the sensations.
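Both the three-drop method and electrogustometric threshold testing share the same ascending logic: present stimuli from the weakest level upwards until the subject first reports a taste. A minimal sketch of that procedure follows; all names are illustrative, and the `perceived` callback stands in for the subject's subjective yes/no report, which is precisely where the method's subjectivity enters.

```python
from typing import Callable, Iterable, Optional

def ascending_threshold(levels: Iterable[float],
                        perceived: Callable[[float], bool]) -> Optional[float]:
    """Present stimulus levels in ascending order and return the first
    level at which the subject reports a taste (the detection threshold),
    or None if no level is perceived.
    """
    for level in sorted(levels):
        if perceived(level):  # subject's yes/no response at this level
            return level
    return None

# Simulated subject who first detects an anodal current of 8 µA or more
threshold = ascending_threshold([32, 2, 16, 4, 8], lambda amp: amp >= 8)
```

The same function applies unchanged to a concentration ladder, since only the stimulus values and the response callback differ between the chemical and electrical settings.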
Advantages and disadvantages Electrogustometry is a quick and quantitative tool to assess taste threshold, particularly among patients with taste disorders such as hemiageusia and ageusia.32,36 Moreover, it allows for the evaluation of the most minute taste deficits, even in the absence of symptoms, and can be used to determine the topographical location of such deficits along the taste pathway and glossopharyngeal nerve. Furthermore, electrogustometry can also aid in determining patient prognosis.36 However, a major drawback of electrogustometry is that it is subjective and relies on feedback from the subject.36 It also cannot be used to investigate or diagnose symptoms commonly associated with certain taste disorders, such as heterogeusia and spontaneous dysgeusia. Finally, this technique cannot be used for patients with artificial pacemakers as electrical stimuli from the electrodes may cause interference with electrical signals from the pacemaker.36 Clinical Applications b u r n i n g m o u t h s y n d r o m e Braud et al. reported a significant association between electrogustometric values and pain intensity measured by visual analogue scale among patients with burning mouth syndrome, indicating a potent interaction between gustatory and nociceptive components among affected subjects.37 Nasri-Heir et al. also noted signif- icantly higher electrogustometric responses among patients with burning mouth syndrome in comparison to a normal control group.6 The researchers concluded that burning mouth syndrome is a neurodegenerative phenomenon with decreased chorda tympani activity.6 c a n c e r Ovesen et al. reported higher taste thresholds in subjects with small-cell lung, breast and ovarian cancer compared to controls with non-neoplastic disease.38 Moreover, taste thresholds decreased among those patients who responded to chemotherapy, suggesting that malignant disease has an effect on taste sensation. 
These findings indicate that electrogustometry could be a useful diag- nostic tool in neoplastic cancers.38 Epstein et al. found that patients with cancer devel- oped taste disorders (i.e. dysgeusia) as a result of other factors apart from pathology, such as chemotherapy treatment.39 This is because chemotherapy drugs are released into the saliva and adhere directly to the taste buds, causing altered taste perceptions and resulting in a metallic or chemical taste. As such, it is recommended that taste and olfactory electrogustometric evaluations be made mandatory for all patients undergoing cancer treatment.39 n e u r o l o g i c a l d i s e a s e s Dzaman et al. reported that 13 out of 35 subjects with nasal polyps had increased taste and olfactory thresholds as assessed by electrogustometry compared to controls.40 Deeb et al. reported a deficit in electrogustometric thresholds among subjects with Parkinson’s disease, indicative of disease severity.41 p o s t o p e r at i v e pat i e n t s Doty et al. assessed the impact of factors such as age and gender on taste perception among individuals under- going chorda tympani nerve resection.42,43 Taste assess- ment was done in different regions of the tongue using filter paper soaked in tastants such as sucrose, sodium chloride and caffeine; in addition, the patients were subjected to electrical stimuli via electrogustometry. The researchers reported a deterioration in taste sensiti- vity commencing in middle age and progressively reducing after 50 years of age.42,43 This decline in taste sensitivity occurred for all stimuli at the anterior part of the tongue, with chorda tympani nerve resection resulting in taste deficits on the same side as well as the middle portion of the tongue.42,43 Boucher et al. 
investigated taste defects by electro- gustometry in patients with severed afferent connections caused by dental treatment.44 Higher electrogustometric thresholds were recorded in subjects with more than seven deafferented teeth compared to those with fewer deafferented teeth, with a significant direct correlation between electrogustometric thresholds and the number of deafferented teeth, regardless of age.44 Similarly, Michael et al. found a greater prevalence of electro- gustometric taste changes among patients following middle-ear surgery; this was ascribed to damage caused by the distention and, to a lesser degree, severance of the chorda tympani nerve.45 s m o k i n g Depressed or altered taste sensation has been reported among chronic smokers.46,47 Smoke from burning tobacco includes a variety of irritants, oncogenic particles such as tar and lead, as well as other poisonous substances such as carbon monoxide and nicotine. These not only topically affect taste receptors cells and impede the normal mechanism of taste conduction Impact of Fungiform Papillae Count on Taste Perception and Different Methods of Taste Assessment and their Clinical Applications A comprehensive review e190 | SQU Medical Journal, August 2019, Volume 19, Issue 3 but also affect the neurological transmission of taste sensations.46 The effects of tobacco on taste thresholds depend upon the individual’s susceptibility, the quantity and frequency of use and the age of the individual when they started smoking. Using a modified form of electro- gustometry, Khan et al. reported significantly lower fungiform papillae counts and greater electrical taste thresholds in smokers compared to non-smokers.7 Moreover, in a follow-up study of smokers before and after quitting, Chéruel et al. found that smoking cessation lead to a recovery in taste sensitivity.46 How- ever, the time required to regain taste functionality depended on the susceptibility of the region of the tongue that had been affected. 
The researchers advocated the use of electrogustometry as a method of motivating chronic smokers to stop smoking.46 Yekta et al. studied somatosensory function in the mucosa of the tongue.47 Subjects were tested bilaterally in tongue regions innervated by the lingual nerves for sensory heat, pain and mechanical detection thresholds. Increased heat thresholds were reported in smokers in comparison to non-smokers; this was attributed to damage caused by smoking to the Aδ- and C-fibres of the tongue.47

Conclusion

Fungiform papillae density offers valuable information regarding an individual's taste perception and taste sensation thresholds, with both chemical and electrical tools available for quantification purposes. However, as chemogustometry is a mostly qualitative method of determining taste sensitivity and requires a complex array of chemical solutions, it can be cumbersome in a clinical setting. In contrast, electrogustometry is a quick and quantitative tool and has a wide range of clinical applications, including for patients with taste disorders, burning mouth syndrome and neoplastic cancers as well as for smoking cessation purposes.

References

1. Miller IJ Jr, Bartoshuk LM. Taste perception, taste bud distribution, and spatial relationships. In: Getchell TV, Doty RL, Bartoshuk LM, Snow JB Jr, Eds. Smell and Taste in Health and Disease. New York, USA: Raven Press, 1991. Pp. 205–33.
2. Cheng LH, Robinson PP. The distribution of fungiform papillae and taste buds on the human tongue. Arch Oral Biol 1991; 36:583–9. https://doi.org/10.1016/0003-9969(91)90108-7.
3. Miller IJ Jr. Human fungiform taste bud density and distribution. Ann N Y Acad Sci 1987; 510:501–3. https://doi.org/10.1111/j.1749-6632.1987.tb43604.x.
4. Negoro A, Umemoto M, Fukazawa K, Terada T, Sakagami M. Observation of tongue papillae by video microscopy and contact endoscopy to investigate their correlation with taste function. Auris Nasus Larynx 2004; 31:255–9. https://doi.org/10.1016/j.anl.2004.01.009.
5. Pavlidis P, Gouveris H, Kekes G. Electrogustometry thresholds, tongue tip vascularization, density, and form of the fungiform papillae following smoking cessation. Chem Senses 2017; 42:419–23. https://doi.org/10.1093/chemse/bjx009.
6. Nasri-Heir C, Gomes J, Heir GM, Ananthan S, Benoliel R, Teich S, et al. The role of sensory input of the chorda tympani nerve and the number of fungiform papillae in burning mouth syndrome. Oral Surg Oral Med Oral Pathol Oral Radiol Endod 2011; 112:65–72. https://doi.org/10.1016/j.tripleo.2011.02.035.
7. Khan AM, Narayanan VS, Puttabuddi JH, Chengappa R, Ambaldhage VK, Naik P, et al. Comparison of taste threshold in smokers and non-smokers using electrogustometry and fungiform papillae count: A case control study. J Clin Diagn Res 2016; 10:ZC101–5. https://doi.org/10.7860/JCDR/2016/14478.7835.
8. Barrett K, Brooks H, Boitano S, Barman S. Ganong's Review of Medical Physiology, 23rd ed. New York, USA: McGraw-Hill Medical, 2009.
9. Gowthamarajan K, Kulkarni GT, Kumar MN. Pop the pills without bitterness: Taste-masking technologies for bitter drugs. Resonance 2004; 9:25–32. https://doi.org/10.1007/BF02834304.
10. Guyton AC, Hall JE. Textbook of Medical Physiology, 11th ed. Philadelphia, Pennsylvania, USA: Saunders Co., 2006.
11. Kamath MG, Prakash J, Tripathy A, Concessao P. Taste pathway: What do we teach? J Clin Diagn Res 2015; 9:CL01. https://doi.org/10.7860/JCDR/2015/11021.5471.
12. ScienceDaily. Biologists discover how we detect sour taste. From: www.sciencedaily.com/releases/2006/08/060823184824.htm Accessed: Mar 2019.
13. Bartoshuk LM. Comparing sensory experiences across individuals: Recent psychophysical advances illuminate genetic variation in taste perception. Chem Senses 2000; 25:447–60. https://doi.org/10.1093/chemse/25.4.447.
14. Fox AL. The relationship between chemical constitution and taste. Proc Natl Acad Sci U S A 1932; 18:115–20. https://doi.org/10.1073/pnas.18.1.115.
15. Cannon DS, Baker TB, Piper ME, Scholand MB, Lawrence DL, Drayna DT, et al. Associations between phenylthiocarbamide gene polymorphisms and cigarette smoking. Nicotine Tob Res 2005; 7:853–8. https://doi.org/10.1080/14622200500330209.
16. Maehashi K, Matano M, Wang H, Vo LA, Yamamoto Y, Huang L. Bitter peptides activate hTAS2Rs, the human bitter receptors. Biochem Biophys Res Commun 2008; 365:851–5. https://doi.org/10.1016/j.bbrc.2007.11.070.
17. McMahon KA. Supertasters: Updating the taste test for the A & P laboratory. Poster from the Proceedings of the 29th Conference of the Association for Biology Laboratory Education. Test Stud Lab Teach 2008; 29:398–405.
18. Eldeghaidy S, Thomas D, Skinner M, Ford R, Giesbrecht T, Thomas A, et al. An automated method to detect and quantify fungiform papillae in the human tongue: Validation and relationship to phenotypical differences in taste perception. Physiol Behav 2018; 184:226–34. https://doi.org/10.1016/j.physbeh.2017.12.003.
19. Jilani H, Ahrens W, Buchecker K, Russo P, Hebestreit A; IDEFICS consortium. Association between the number of fungiform papillae on the tip of the tongue and sensory taste perception in children. Food Nutr Res 2017; 61:1348865. https://doi.org/10.1080/16546628.2017.1348865.
Asim M. Khan, Saqib Ali, Reshma V. Jameela, Muhaseena Muhamood and Maryam F. Haqh. Review | e191

20. Pavlos P, Vasilios N, Antonia A, Dimitrios K, Georgios K, Georgios A. Evaluation of young smokers and non-smokers with electrogustometry and contact endoscopy. BMC Ear Nose Throat Disord 2009; 9:9. https://doi.org/10.1186/1472-6815-9-9.
21. Zhang GH, Zhang HY, Wang XF, Zhan YH, Deng SP, Qin YM. The relationship between fungiform papillae density and detection threshold for sucrose in the young males. Chem Senses 2009; 34:93–9. https://doi.org/10.1093/chemse/bjn059.
22. Nuessle TM, Garneau NL, Sloan MM, Santorico SA. Denver papillae protocol for objective analysis of fungiform papillae. J Vis Exp 2015; 100:e52860. https://doi.org/10.3791/52860.
23. Piochi M, Dinnella C, Prescott J, Monteleone E. Associations between human fungiform papillae and responsiveness to oral stimuli: Effects of individual variability, population characteristics, and methods for papillae quantification. Chem Senses 2018; 43:313–27. https://doi.org/10.1093/chemse/bjy015.
24. Kamath SK. Taste acuity and aging. Am J Clin Nutr 1982; 36:766–75. https://doi.org/10.1093/ajcn/36.4.766.
25. Arvidson K. Location and variation in number of taste buds in human fungiform papillae. Scand J Dent Res 1979; 87:435–42. https://doi.org/10.1111/j.1600-0722.1979.tb00705.x.
26. Saito T, Narita N, Yamada T, Manabe Y, Ito T. Morphology of human fungiform papillae after severing chorda tympani nerve. Ann Otol Rhinol Laryngol 2011; 120:300–6. https://doi.org/10.1177/000348941112000504.
27. Spielman AI, Pepino MY, Feldman R, Brand JG. Technique to collect fungiform (taste) papillae from human tongue. J Vis Exp 2010; 42:2201. https://doi.org/10.3791/2201.
28. Laskaris G. Pocket Atlas of Oral Diseases, 2nd ed. Stuttgart, Germany: Thieme, 2005.
29. Simmen B, Pasquet P, Hladik CM. Methods for assessing taste abilities and hedonic responses in human and non-human primates. In: Macbeth H, MacClancy J, Eds. Researching Food Habits: Methods and problems. Oxford, UK: Berghahn Books, 2004. Pp. 87–99.
30. Doty RL. Measurement of chemosensory function. World J Otorhinolaryngol Head Neck Surg 2018; 4:11–28. https://doi.org/10.1016/j.wjorl.2018.03.001.
31. Mueller CA, Kallert S, Renner B, Stiassny K, Temmel AF, Hummel T, et al. Quantitative assessment of gustatory function in a clinical context using impregnated "taste strips". Rhinology 2003; 41:2–6.
32. Hummel T, Genow A, Landis BN. Clinical assessment of human gustatory function using event related potentials. J Neurol Neurosurg Psychiatry 2010; 81:459–64. https://doi.org/10.1136/jnnp.2009.183699.
33. Sardana DS, Mittal DP, Saha AK, Singh DP, Bassi NK. Electrogustometry: A physiological study. Indian J Otolaryngol 1975; 27:127–33.
34. Krarup B. Electro-gustometry: A method for clinical taste examinations. Acta Otolaryngol 1958; 49:294–305. https://doi.org/10.3109/00016485809134758.
35. Grant R, Ferguson MM, Strang R, Turner JW, Bone I. Evoked taste thresholds in a normal population and the application of electrogustometry to trigeminal nerve disease. J Neurol Neurosurg Psychiatry 1987; 50:12–21. https://doi.org/10.1136/jnnp.50.1.12.
36. Tomita H, Ikeda M. Clinical use of electrogustometry: Strengths and limitations. Acta Otolaryngol Suppl 2002; 122:27–38. https://doi.org/10.1080/00016480260046391.
37. Braud A, Descroix V, Ungeheuer MN, Rougeot C, Boucher Y. Taste function assessed by electrogustometry in burning mouth syndrome: A case-control study. Oral Dis 2017; 23:395–402. https://doi.org/10.1111/odi.12630.
38. Ovesen L, Sørensen M, Hannibal J, Allingstrup L. Electrical taste detection thresholds and chemical smell detection thresholds in patients with cancer. Cancer 1991; 68:2260–5. https://doi.org/10.1002/1097-0142(19911115)68:10<2260::AID-CNCR2820681026>3.0.CO;2-W.
39. Epstein JB, Barasch A. Taste disorders in cancer patients: Pathogenesis, and approach to assessment and management. Oral Oncol 2010; 46:77–81. https://doi.org/10.1016/j.oraloncology.2009.11.008.
40. Dzaman K, Pleskacz WA, Wałkanis A, Rapiejko P, Jurkiewicz D. [Taste and smell senses estimation in patients with nasal polyps]. Otolaryngol Pol 2007; 61:831–7. https://doi.org/10.1016/S0030-6657(07)70537-1.
41. Deeb J, Shah M, Muhammed N, Gunasekera R, Gannon K, Findley LJ, et al. A basic smell test is as sensitive as a dopamine transporter scan: Comparison of olfaction, taste and DaTSCAN in the diagnosis of Parkinson's disease. QJM 2010; 103:941–52. https://doi.org/10.1093/qjmed/hcq142.
42. Doty RL, Heidt JM, MacGillivray MR, Dsouza M, Tracey EH, Mirza N, et al. Influences of age, tongue region, and chorda tympani nerve sectioning on signal detection measures of lingual taste sensitivity. Physiol Behav 2016; 155:202–7. https://doi.org/10.1016/j.physbeh.2015.12.014.
43. Doty RL. Age-related deficits in taste and smell. Otolaryngol Clin North Am 2018; 51:815–25. https://doi.org/10.1016/j.otc.2018.03.014.
44. Boucher Y, Berteretche MV, Farhang F, Arvy MP, Azérad J, Faurion A. Taste deficits related to dental deafferentation: An electrogustometric study in humans. Eur J Oral Sci 2006; 114:456–64. https://doi.org/10.1111/j.1600-0722.2006.00401.x.
45. Michael P, Raut V. Chorda tympani injury: Operative findings and postoperative symptoms. Otolaryngol Head Neck Surg 2007; 136:978–81. https://doi.org/10.1016/j.otohns.2006.12.022.
46. Chéruel F, Jarlier M, Sancho-Garnier H. Effect of cigarette smoke on gustatory sensitivity, evaluation of the deficit and of the recovery time-course after smoking cessation. Tob Induc Dis 2017; 15:15. https://doi.org/10.1186/s12971-017-0120-4.
47. Yekta SS, Lückhoff A, Ristić D, Lampert F, Ellrich J. Impaired somatosensation in tongue mucosa of smokers. Clin Oral Investig 2012; 16:39–44. https://doi.org/10.1007/s00784-010-0480-0.
work_3nh7oui6jbgh5ov3nxigxztvwe ----

Sensors | Article

A Smartphone Application for Personal Assessments of Body Composition and Phenotyping

Gian Luca Farina 1, Fabrizio Spataro 1, Antonino De Lorenzo 1,* and Henry Lukaski 2

1 Section of Clinical Nutrition and Nutrigenomic, Department of Biomedicine and Prevention, University of Rome “Tor Vergata”, Rome 00173, Italy; gianluca.farina@students.uniroma2.eu (G.L.F.); fabrizio.spataro@students.uniroma2.eu (F.S.)
2 Department of Kinesiology and Public Health Education, University of North Dakota, Grand Forks, ND 58202-7166, USA; henry.lukaski@email.und.edu
* Correspondence: delorenzo@uniroma2.it; Tel.: +39-06-7259-6856

Academic Editor: Ki H. Chon
Received: 25 August 2016; Accepted: 13 December 2016; Published: 17 December 2016

Abstract: Personal assessments of body phenotype can enhance success in weight management but are limited by the lack of practical methods. We describe a novel smartphone application of digital photography (DP) and determine its validity for estimating fat mass (FM). This approach utilizes the percent (%) occupancy of an individual lateral whole-body digital image and regions indicative of adipose accumulation associated with increased risk of cardio-metabolic disease.
We measured 117 healthy adults (63 females and 54 males aged 19 to 65 years) with DP and dual X-ray absorptiometry (DXA) and report here the development and validation of this application. Inter-observer variability of the determination of % occupancy was 0.02%. Predicted and reference FM values were significantly related in females (R2 = 0.949, SEE = 2.83) and males (R2 = 0.907, SEE = 2.71). Differences between predicted and measured FM values were small (0.02 kg, p = 0.96 and 0.07 kg, p = 0.96) for females and males, respectively. No significant bias was found; limits of agreement ranged from 5.6 to −5.4 kg for females and from 5.6 to −5.7 kg for males. These promising results indicate that DP is a practical and valid method for personal body composition assessments.

Keywords: body composition assessment; mobile health; weight management

1. Introduction

Obesity is a major public health problem in the US and worldwide [1,2]. Despite intensive and multi-focal efforts to attenuate the rate of increase of obesity, the problem has persisted. Personal health assessments, a form of self-monitoring [3], empower individuals to engage in behaviors and lifestyles that can mitigate the risk of developing chronic diseases and improve quality of life [4,5]. The use of personal health assessments is a burgeoning approach in the management of overweight and obesity [6,7]. A practical attraction of personal health assessment is the privacy and convenience of performing these activities at home. Consumer devices enable an individual to participate in personalized health assessments for weight management. A growing area of interest is the determination and monitoring of personal biometrics and body phenotype, including body composition and size. Simple measures such as body weight, height, and waist and hip circumferences are desirable because of their associations with risk factors for metabolic and chronic diseases [5].
These measurements, however, can be time-consuming and require training and proficiency to provide reliable and useful information for personal and clinical use. Self-monitoring of body weight and various biometrics is effective in the promotion of successful weight loss, weight maintenance, and prevention of weight regain [8–10]. Assessments of body fat during weight-loss interventions are important to enable the preservation of lean body mass [11].

Sensors 2016, 16, 2163; doi:10.3390/s16122163 www.mdpi.com/journal/sensors

Personal body composition assessments can be completed at home with bioelectrical impedance-based assessments (BIA), such as foot-to-foot and hand-to-hand impedance scales or even smaller finger-to-finger devices [12,13]. Although widely publicized and used as fat analyzers, BIA devices are limited in accuracy when assessing body fat, largely due to several faulty assumptions, such as a constant hydration of the lean tissues [13–15]. Alternatively, whole-body optical scanning devices that estimate body volume, size, and regional body circumferences are emerging [16–19]. They yield reasonably accurate measurements but require relatively costly and cumbersome equipment, large spaces, and controlled lighting, which makes them impractical for routine personal use [20]. We evaluate a novel application for smartphones that provides a simple and cost-effective method of determining body fat for an individual. It utilizes a smartphone's built-in camera to obtain digital whole-body images to estimate human body composition. The objectives of this research are to develop and validate a digital image photography model to assess human body composition. We tested the hypothesis that digital photography with a smartphone is a valid method to estimate the body fat of adults.

2.
Methods and Materials

Caucasian women and men, varying widely in age and body mass index, volunteered to participate in this study, which was conducted at the Department of Physiology, University of Tor Vergata in Rome, Italy. Each prospective participant underwent a clinical examination and completed a health questionnaire to establish the absence of any unhealthy condition prior to participation. This study was approved by the Institutional Review Board of the University of Tor Vergata. Each participant provided written informed consent prior to participation in any testing.

2.1. Digital Photography

The operational principle of digital photography (DP) is that the surface of any digitally acquired image encased within a background can be computed from its occupation ratio within a digitally constructed virtual frame. Quantification of the number of pixels within the image of a body is expressed as a percentage of the total number of pixels in the framed background. This requires a clear and reliable discrimination of the pixels constituting the image of the body from the pixels constituting the background. The ratio of image pixels to total background pixels, expressed as a percentage, is termed the percent occupation of the image. The surface of an object or any of its components is computed from its frame percent occupation. Large and small occupation percentages are calculated into individual sagittal section surface areas that relate to volumes and can be transformed into body composition variables.

2.2. Procedure

Each volunteer wore light, non-compressing underwear with or without light footwear and stood showing either lateral side in front of a homogeneous white background to obtain significant contrast. With the head in a horizontal plane, the individual stood upright and fully extended the arms down alongside the body, with feet and legs touching and aligned sagittally to the camera to provide a lateral profile of the body.
A second individual directed the hand-held smartphone camera with the lens at the middle of the standing height of each subject. The distance from the camera to the subject was variable because of inter-individual differences in standing height and the requirement to surround the whole-body image within a minimal area of the background. Normal illumination in the room was adequate to allow the capture of a clear and focused picture without flash. An operator uploaded the image (Figure 1A) and then framed it using software that provides an outline of the body to be conditioned with color, contrasting the subject image against a background of a different color. The operator first positioned horizontal lines through the eyes and the malleoli of the ankles (Figure 1B), and next inserted vertical lines along the widest anterior and posterior points of the image (Figure 1C). An algorithm evaluated the background pixel values and assigned each homogeneous background pixel a value equivalent to black; it then assigned the subject body image (i.e., non-background) pixels a value equivalent to white. This process represents the conditioning of an individual digital photograph, with additional horizontal lines drawn by the proprietary software at the thorax, belly, and hips (Figure 1D). The determination of the populations of black and white pixels enables computation of the respective occupation percentages, specifically the individual image surface area.
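The percent-occupancy computation described above can be sketched in a few lines. This is a simplified illustration that assumes a plain luminance threshold separates a dark silhouette from a white background; the app's proprietary conditioning algorithm is more elaborate, and the function name and threshold value here are hypothetical:

```python
import numpy as np

def percent_occupancy(image: np.ndarray, background_threshold: int = 200) -> float:
    """Return the body's percent occupancy of the framed background.

    `image` is a 2-D array of grayscale pixel values (0-255). Pixels darker
    than `background_threshold` are counted as body; bright pixels are
    treated as the white background.
    """
    body_pixels = np.count_nonzero(image < background_threshold)
    return 100.0 * body_pixels / image.size

# Toy example: a 10x10 white frame in which a 3-column dark band is the "body".
frame = np.full((10, 10), 255, dtype=np.uint8)  # white background
frame[:, 4:7] = 30                              # dark silhouette (30 pixels)
print(percent_occupancy(frame))                 # -> 30.0
```

In the real application this ratio, together with the surface areas bounded by the thorax, belly, and hip lines, feeds the regression models described in Section 2.3.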
Figure 1. Images showing the sequence of operator-positioned and software-determined anatomical landmarks used to condition a digital photography lateral image. (A) Uploaded digital image of the lateral surface of an individual; (B) operator-positioned horizontal lines at the level of the eyes and ankles; (C) operator-positioned vertical lines at the widest protuberance of the breast and hip; (D) software-determined horizontal lines at the thorax, belly, and hips.

2.3. Development of Prediction Model for Body Fat

One hundred seventeen healthy adults had lateral whole-body digital images taken with either Android version 4.2.2 on a Huawei G730 smartphone (resolution 540 × 960 pixels, or 0.52 megapixels) or iOS 9.2 on an iPhone 5s (resolution 1136 × 640 pixels, or 0.73 megapixels). The phones were utilized at random to obtain digital images that were scaled by the software to a standard resolution of 5 megapixels. One operator performed all of the digital photographic imaging and stored all digital images in the memory of the smart phone. Shape recognition and image conditioning with the software-driven pixel count were performed on all digital images uploaded from the memory of the smart phone. Reference body composition was determined with dual X-ray absorptiometry (DXA; GE model LUNAR iDXA nCore s/n 200278; Rome, Italy) using General Electric software version 14.10.022. Gender-specific prediction models for body fat were developed using multiple regression analyses. Independent variables included body weight, height, and lateral surface image occupancy from each conditioned image.

2.4. Statistical Methods

Descriptive data are expressed as mean ± SD. Estimates of inter-operator precision, or reproducibility of image conditioning, are reported as average values (±SD) and mean differences. We used FM values determined from individual DXA scans as the dependent variable to develop prediction equations using forward step-wise regression, with height, weight, image occupancy, and linear measurements obtained from the digital images as independent variables. Gender-specific prediction equations were developed to account for known differences in FM and adipose tissue distribution between adult males and females [21,22]. For comparison of the prediction models, the criteria were the highest adjusted R2 value and the lowest root mean square error as a measure of precision [23]. The precision and robustness of the prediction equations were also assessed by calculating the PRESS statistic, the predicted residual sum of squares, which can be used as a measure of predictive power when one sample is used to both develop and validate a regression equation; it is an indicator of an internal cross-validation strategy [24]. Measured and predicted FM values were compared with a paired t-test. A Bland–Altman plot was used to determine bias and limits of agreement for the derived prediction model [25].
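As a concrete illustration, the PRESS statistic can be computed from the ordinary-least-squares hat matrix, since each leverage-inflated residual equals the corresponding leave-one-out prediction error. This is a generic sketch on synthetic data, not the authors' SAS procedure; all names are illustrative:

```python
import numpy as np

def press_statistic(X: np.ndarray, y: np.ndarray) -> float:
    """PRESS (predicted residual sum of squares) for an OLS fit with intercept.

    Uses the identity PRESS = sum((e_i / (1 - h_ii))^2), where e_i are the
    ordinary residuals and h_ii the leverages (diagonal of the hat matrix).
    """
    X1 = np.column_stack([np.ones(len(y)), X])          # add intercept column
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)       # OLS coefficients
    residuals = y - X1 @ beta
    hat = X1 @ np.linalg.pinv(X1.T @ X1) @ X1.T          # hat (projection) matrix
    leverages = np.diag(hat)
    return float(np.sum((residuals / (1.0 - leverages)) ** 2))

# Synthetic example: y depends linearly on one predictor plus noise.
rng = np.random.default_rng(0)
x = rng.normal(size=(30, 1))
y = 2.0 + 3.0 * x[:, 0] + rng.normal(scale=0.5, size=30)
print(press_statistic(x, y))
```

A smaller PRESS relative to the ordinary residual sum of squares indicates that the fitted equation generalizes well to observations left out of the fit.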
Statistical analyses were performed using SYSTAT version 10 (Systat Corporation; San Jose, CA, USA), although the PRESS residuals and statistics were obtained using the PROC REG procedure with PRESS statistics in SAS version 9.2 (SAS Institute, Inc., Cary, NC, USA).

3. Results

3.1. Inter-Operator Variability in Digital Image Conditioning

Three novice operators individually performed one conditioning of 20 identical digital images (10 males and 10 females) to assess the precision of the assessment of percent occupancy and to verify the limit of agreement. The operators determined similar (p = 0.968) areas of occupancy (30.65% ± 3.6%, 30.74% ± 3.7%, and 30.71% ± 3.7%), with average differences between operators ranging from 0.033% to 0.097% that were not different than 0 (p = 0.908). Coefficients of determination (R2) for comparisons among operators for individual estimates of occupancy ranged from 0.972 to 0.996 (p < 0.0001), with an average difference of 0.02% among operators (p = 0.86).

3.2. Development and Validation of Prediction Model for Fat Mass

Table 1 summarizes the characteristics of the participants. Table 2 describes the gender-specific regression models to predict FM from DP. The prediction equation for females includes the lateral surface area of the lower abdomen and hips, whereas the prediction model for males includes the same area as well as the upper abdomen and the entire lateral surface area.

Table 1. Physical characteristics of 117 study participants. Values are mean ± SD (range of values).
                      Females              Males
n                     63                   54
Age, year             38.7 ± 13.8          32.5 ± 9.8
                      (19 to 65)           (19 to 54)
Weight, kg            70.9 ± 15.6          82.0 ± 13.2
                      (41.8 to 108.7)      (63.4 to 108.4)
Height, cm            162.7 ± 6.1          178.0 ± 7.7
                      (152.0 to 174.9)     (163.0 to 194.5)
BMI a, kg/m2          43.8 ± 12.6          62.8 ± 16.7
                      (16.1 to 40.4)       (19.4 to 37.1)
Fat-free mass b, kg   43.8 ± 12.6          62.8 ± 16.7
                      (31.9 to 62.8)       (47.4 to 80.3)
Fat mass b, kg        27.2 ± 12.7          19.2 ± 10.0
                      (7.4 to 59.4)        (6.2 to 44.6)
Body fat, %           36.6 ± 10.8          22.5 ± 8.9
                      (12.3 to 54.5)       (9.6 to 44.9)

a Body mass index; b Dual X-ray absorptiometry.

Table 2. Multiple regression equations to predict body fat mass (FM) of 117 healthy adults.

Females: FM = 18.545 − 0.312 HT + 0.653 WT + 4.522 LOWERABD_HT
Males: FM = 56.602 + 0.799 PCTTOTAL − 0.063 SURFUP + 25.366 LOWERABD_HT

HT = height in cm; WT = weight in kg; LOWERABD = the surface (cm2) of the lateral section between the lines drawn by the APP at the belly and the hip (Figure 1C); SURFUP = the surface (cm2) of the lateral section between the line at the belly drawn by the APP and the operator-drawn line at the eyes; PCTTOTAL = the percent occupation of the entire lateral surface from ankle to eyes (Figure 1B).

The DXA-determined and DP-predicted FM values were similar in the female (26.91 ± 12.74 and 26.97 ± 12.41 kg, respectively; p = 0.714) and male (18.79 ± 9.28 and 18.72 ± 8.83 kg, respectively; p = 0.838) groups. Within each gender group, measured and predicted FM values were significantly correlated (R2 = 0.991 and 0.982; p < 0.0001), with concordance correlation coefficients of 0.974 and 0.952 for males and females (p < 0.0001), respectively. The predicted and measured FM values were distributed linearly in the male and female samples, with slopes close to 1.0 and intercepts not different than 0 (Figure 2).

Figure 2. Plots of dual X-ray absorptiometry (DXA)-measured and digital image photography (DP)-predicted fat mass (FM) values of females (left) and males (right).
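The Table 2 models can be transcribed directly into code. The function and argument names below are illustrative; the coefficients are taken verbatim from Table 2, and the LOWERABD_HT argument is passed through as given, since the exact normalization behind the `_HT` suffix is not spelled out beyond the table's definition of the lower abdomen-to-hip lateral surface:

```python
def fat_mass_female(ht_cm: float, wt_kg: float, lowerabd_ht: float) -> float:
    """Predicted fat mass (kg) for females, per Table 2."""
    return 18.545 - 0.312 * ht_cm + 0.653 * wt_kg + 4.522 * lowerabd_ht

def fat_mass_male(pct_total: float, surf_up_cm2: float, lowerabd_ht: float) -> float:
    """Predicted fat mass (kg) for males, per Table 2.

    pct_total: percent occupation of the entire lateral surface (ankle to eyes);
    surf_up_cm2: lateral surface between the belly line and the eye line.
    """
    return 56.602 + 0.799 * pct_total - 0.063 * surf_up_cm2 + 25.366 * lowerabd_ht
```

With inputs in the units defined under Table 2, each function returns a fat-mass estimate in kilograms comparable to the DXA reference values reported above.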
Figure 3 shows the distribution of the differences between measured and predicted FM values as a function of increasing FM. The differences between measured and predicted FM values were not different from 0 (−0.06 kg, p = 0.81; and 0.11 kg, p = 0.78) for females and males, respectively, with no proportional bias in the difference values as a function of the average FM values (r = 0.12, p = 0.86). The limits of agreement (mean ± 1.96 SD) ranged from −5.4 to 5.6 kg for females and from −5.7 to 5.6 kg for males.

Figure 3. The Bland–Altman plots to illustrate the differences between individual dual X-ray absorptiometry (DXA)-measured and digital image photography (DP)-predicted fat mass (FM) as a function of the mean values of females (left) and males (right). The linear regression line describes the bias, with 95% confidence intervals (1.96 SD) shown.

4. Discussion

Behavioral approaches that utilize self-monitoring are a basic component of weight management programs, with regular monitoring of body weight associated with successful weight loss and prevention of weight regain after weight loss [26,27]. Attributes of a successful self-monitoring program include selection of measurements that are simple, easy to perform, and not costly, use commonly available equipment, can be performed accurately and privately, yield meaningful information for the individual, and may be shared with a health care provider. The present study met these criteria by using the camera of a common cellular phone to capture a digital image of the lateral surface of the body that was conditioned for analysis with proprietary software and yielded body surface estimates of regions epidemiologically associated with an increased risk of cardiometabolic disease that were used to estimate body fat in adults.
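The bias and limits of agreement reported for Figure 3 follow the standard Bland–Altman computation [24]; a minimal sketch, with made-up numbers rather than the study's measurements:

```python
import statistics as st

def bland_altman(measured, predicted):
    """Return (bias, lower LOA, upper LOA), where LOA = bias +/- 1.96 * sample SD of the differences."""
    diffs = [m - p for m, p in zip(measured, predicted)]
    bias = st.mean(diffs)
    sd = st.stdev(diffs)  # sample standard deviation of the paired differences
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Illustrative values only:
bias, low, high = bland_altman([20.0, 25.0, 30.0, 35.0], [19.5, 25.5, 29.0, 36.0])
```

A regression of the differences on the pair means (the line drawn in Figure 3) is the usual check for the proportional bias the authors report as absent.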
The present study developed and validated a model in apparently healthy adults that accurately estimated FM, as compared to DXA FM values, in the range of 10–55 kg with no bias. Traditional anthropometric measurements of body size and shape, including body mass index (BMI), various girths, body segmental circumferences, and mid-sagittal diameter, are surrogates for adiposity and are associated with graded risks of cardio-metabolic disease [22,28]. Photonic imaging of the body with lasers, lights and cameras enables the generation of three-dimensional (3D) body outline images or shapes [29]. These methods eliminate the need for trained and certified anthropometrists but require costly and sophisticated data acquisition equipment, computerized algorithms to reconstruct body topography, and large amounts of space [21,29]. Although originally developed for the clothing industry, national surveys used 3D imaging systems and observed differences in body phenotypes, including differences in female and male shape patterns, surface area and volumes within the same BMI range [30], age-dependent effects on shape at a given BMI [30], and ethnic differences in shape [31]. Daniell et al. [32] reported that body segment volumes increased with increasing BMI and that these BMI-related patterns of increase varied among different body segments. Emerging clinical applications include assessments of body phenotype and composition. Wells et al. [33] reported no difference in estimates of body volume of adults using 3D photonic imaging, whole-body air displacement plethysmography (ADP), and hydrodensitometry (HD), but wide LOA (>2 L) for optical imaging that equated to 20% variability in the estimation of body fat. Wang et al. [16] found that 3D imaging significantly overestimated (0.5 L) body volume compared to HD, with no apparent differences in estimated body fat.
Body fatness predicted with 3D imaging was significantly correlated with HD reference values, with a wide distribution of the values (SEE = 7.95%). Garlie et al. [18] also reported no differences in body fat values among military personnel measured with 3D optical imaging, DXA, and military-specific anthropometric models. They noted, however, a higher concordance correlation coefficient for 3D imaging compared with anthropometry than for 3D imaging compared with DXA (0.96 vs. 0.74). Ng et al. [33] found significant correlations between 3D and reference measures of waist circumference, hip circumference, body surface area, and volume in adults. Predictions of body composition using 3D-determined circumferences were strongly related to DXA reference values of FM and fat-free mass with good precision (SEE = 2.5 and 2.2 kg, respectively). Awareness of the adverse effects of increased centralized adipose tissue (AT) as a risk factor for cardio-metabolic disease prompted observational studies that utilized 3D imaging to assess regional AT. Lee et al. [34] reported that inclusion of the 3D-determined waist-to-hip ratio significantly improved multiple regression equations to predict MRI-estimated visceral AT, but neither total abdominal AT nor subcutaneous AT. They also used 3D imaging to assess fat patterning of adults [35]. Compared to the usual demographic information (e.g., gender, age, and ethnicity) and standard anthropometric measurements (e.g., weight, height, and waist circumference), the inclusion of 3D body image data (e.g., regional volumes, circumferences and sagittal thicknesses) improved the precision, relative to DXA, to 2 kg and 0.2% for android and 3.2 kg and 0.4% for gynoid AT mass and % fat, respectively. An alternative to the costly 3D systems is two-dimensional (2D) imaging, which uses frontal and/or lateral images obtained from a standard digital camera and software to characterize body shape.
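The concordance correlation coefficients cited above (and in the Results) are Lin's CCC, which penalizes both scatter and systematic offset between two methods; a small sketch with invented example data:

```python
import statistics as st

def lins_ccc(x, y):
    """Lin's concordance correlation coefficient, using population (n-denominator) moments."""
    n = len(x)
    mx, my = st.mean(x), st.mean(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    sx2 = sum((a - mx) ** 2 for a in x) / n
    sy2 = sum((b - my) ** 2 for b in y) / n
    # Perfect agreement (identity line) gives 1; scatter or offset lowers it.
    return 2 * sxy / (sx2 + sy2 + (mx - my) ** 2)

perfect = lins_ccc([1.0, 2.0, 3.0, 4.0], [1.0, 2.0, 3.0, 4.0])   # exact agreement
shifted = lins_ccc([1.0, 2.0, 3.0, 4.0], [2.0, 3.0, 4.0, 5.0])   # same shape, offset by 1
```

The `shifted` case illustrates why CCC can be low (here 5/7) even when the Pearson correlation is a perfect 1.0: the systematic offset is penalized.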
Validation of 2D hip, waist, neck, and trunk circumferences against standard anthropometric measures showed high correlations (R2 = 0.94 to 0.96) [36]. Stewart et al. [37] successfully used body shapes from 2D DP and 3D images in assessing body image perception and dissatisfaction in individuals diagnosed with disordered eating behaviors. Xie et al. [19] generated active shape models (2D silhouettes) of children from modified whole-body DXA scans and compared the strength of the multiple regression equations that predicted percent body fat from standard demographic data and select 2D sites. For boys, the 2D model accounted for more variation in the prediction of body fat than the demography-based prediction model (R2 = 0.728, RMSE = 3.12% compared to R2 = 0.457, RMSE = 4.41%, respectively), whereas for girls the 2D model accounted for similar variability in estimating body fat as the demography model (R2 = 0.586, RMSE = 3.93% compared to R2 = 0.606, RMSE = 3.80%, respectively). The present study provides the first validation of the 2D DP method to assess FM in adults with a wide range of body fat. Comparisons of DP-predicted and DXA-determined FM showed no significant differences between the methods, with variability (SEE) of 2.83 and 2.71 kg for females and males, respectively, which is similar to that (2.4 kg) of a recent 3D imaging model [33]. These preliminary findings should be followed with future research to evaluate the present prediction model in adults with a wide range of BMI and in different ethnic groups. Moreover, studies should ascertain the validity of this method during weight loss. Importantly, individuals who plan to utilize self-monitoring of body fat should consult their care provider for guidance. Advancing technology provides a unique opportunity to enable healthy weight management.
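The SEE and RMSE figures compared throughout this section are both root-mean-square measures of residual error; they differ only in the denominator. A sketch, assuming the usual definitions (SEE divides by n minus the number of fitted coefficients; the example numbers are invented):

```python
import math

def rmse(measured, predicted):
    """Root-mean-square error: sqrt(SS_res / n)."""
    ss_res = sum((m - p) ** 2 for m, p in zip(measured, predicted))
    return math.sqrt(ss_res / len(measured))

def see(measured, predicted, n_params=2):
    """Standard error of estimate: sqrt(SS_res / (n - n_params)).
    n_params counts fitted coefficients including the intercept (assumption)."""
    ss_res = sum((m - p) ** 2 for m, p in zip(measured, predicted))
    return math.sqrt(ss_res / (len(measured) - n_params))

# Illustrative numbers only:
m = [20.0, 25.0, 30.0, 35.0]
p = [21.0, 24.0, 31.0, 34.0]
```

For the large samples in these validation studies the two measures are nearly identical; the distinction matters mainly for small n.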
Smart phones offer a practical platform for personal health assessment and self-monitoring [38,39] because they can overcome some limitations of traditional weight loss and maintenance programs while reducing the cost and burden on patients and health care providers. Findings of the present study are promising for the use of a smart phone application to monitor body fat.

Acknowledgments: The authors thank the volunteers who participated in this study and the staff at the Department of Biomedicine and Prevention, University of Tor Vergata, Rome, Italy, for their technical contributions to this research.

Author Contributions: Gian Luca Farina conceived and designed the experiments; Fabrizio Spataro performed the experiments; Antonino De Lorenzo authorized the study and made the DXA laboratory available; Henry C. Lukaski and Gian Luca Farina analyzed the data; Henry C. Lukaski and Gian Luca Farina wrote the paper.

Conflicts of Interest: The authors declare no conflict of interest.

References
1. Flegal, K.M.; Carroll, M.D.; Kit, B.K.; Ogden, C.L. Prevalence of obesity and trends in the distribution of body mass index among US adults, 1999–2010. JAMA 2012, 307, 491–497.
2. Yumuk, V.; Tsigos, C.; Fried, M.; Schindler, K.; Bussetto, L.; Misic, D.; Toplak, H.; Obesity Management Task Force of the European Association for the Study of Obesity. European Guidelines for Obesity Management in Adults. Obes. Facts 2015, 8, 402–424.
3. Turk, M.W.; Elci, O.U.; Wang, J.; Sereika, S.M.; Ewing, L.J.; Acharya, S.D.; Glanz, K.; Burke, L.E. Self-monitoring as a mediator of weight loss in the SMART randomized clinical trial. Int. J. Behav. Med. 2013, 20, 556–561.
4. Jensen, M.D.; Ryan, D.H.; Apovian, C.M.; Ard, J.D.; Comuzzie, A.G.; Donato, K.A.; Hu, F.B.; Hubbard, V.S.; Jakicic, J.M.; Kushner, R.F.; et al.; American College of Cardiology/American Heart Association Task Force on Practice Guidelines; Obesity Society.
2013 AHA/ACC/TOS guideline for the management of overweight and obesity in adults: A report of the American College of Cardiology/American Heart Association Task Force on Practice Guidelines and The Obesity Society. J. Am. Coll. Cardiol. 2014, 63, 2985–3023.
5. Leiter, L.A.; Astrup, A.; Andrews, R.C.; Cuevas, A.; Horn, D.B.; Kunešová, M.; Wittert, G.; Finer, N. Identification of educational needs in the management of overweight and obesity: Results of an international survey of attitudes and practice. Clin. Obes. 2015, 5, 245–255.
6. Campos, C. Tips for communicating with overweight and obese patients. J. Fam. Pract. 2014, 63 (Suppl. S7), S11–S14.
7. Linde, J.A.; Jeffrey, R.W.; Crow, S.J.; Brelje, K.L.; Pacanowski, C.R.; Gavin, K.L.; Smolenski, D.J. The Tracking Study: Description of a randomized controlled trial of variations on weight tracking frequency in a behavioral weight loss program. Contemp. Clin. Trials 2015, 40, 199–211.
8. Zheng, Y.; Klem, M.L.; Sereika, S.M.; Danford, C.A.; Ewing, L.J.; Burke, L.E. Self-weighing in weight management: A systematic literature review. Obesity 2015, 23, 256–265.
9. Shieh, C.; Knisely, M.R.; Clark, D.; Carpenter, J.S. Self-weighing in weight management interventions: A systematic review of literature. Obes. Res. Clin. Pract. 2016.
10. Lukaski, H.C.; Siders, W.A. Regional impedance devices fail to accurately assess whole-body fatness. Nutrition 2013, 19, 851–857.
11.
Heymsfield, S.B.; Gonzalez, M.C.; Shen, W.; Redman, L.; Thomas, D. Weight loss composition is one-fourth fat-free mass: A critical review and critique of this widely cited rule. Obes. Rev. 2014, 15, 310–321.
12. Choi, A.; Kim, J.Y.; Jo, S.; Jee, J.H.; Heymsfield, S.B.; Bhagat, Y.A.; Kim, I.; Cho, J. Smartphone-based bioelectrical impedance analysis devices for daily obesity management. Sensors 2015, 15, 22151–22166.
13. Buchholz, A.C.; Bartok, C.; Schoeller, D.A. The validity of bioelectrical impedance models in clinical populations. Nutr. Clin. Pract. 2004, 19, 433–446.
14. Kyle, U.G.; Bosaeus, I.; de Lorenzo, A.D.; Deurenberg, P.; Elia, M.; Manuel Gomez, J.; Lilienthal Heitmann, B.; Kent-Smith, L.; Melchior, J.C.; Pirlich, M.; et al. Bioelectrical impedance analysis-part II: Utilization in clinical practice. Clin. Nutr. 2004, 23, 1430–1453.
15. Lukaski, H.C. Evolution of bioimpedance: Journey from assessment of physiological function through body fat to clinical medicine. Eur. J. Clin. Nutr. 2013, 67, S2–S9.
16. Wang, J.; Gallagher, D.; Thornton, J.C.; Yu, W.; Horlick, M.; Pi-Sunyer, F.X. Validation of a 3-dimensional photonic scanner for the measurement of body volumes, dimensions, and percentage body fat. Am. J. Clin. Nutr. 2006, 83, 809–816.
17. Wells, J.C.; Ruto, A.; Treleaven, P. Whole body three-dimensional photonic scanning: A new technique for obesity research and clinical practice. Int. J. Obes. 2008, 32, 232–238.
18. Garlie, T.N.; Obusek, J.P.; Corner, B.D.; Zambraski, E.J. Comparison of body fat estimates using 3D digital laser scans, direct manual anthropometry, and DXA in men. Am. J. Hum. Biol. 2010, 22, 695–701.
19. Xie, B.; Avila, J.I.; Ng, B.K.; Fan, B.; Loo, V.; Gilsanz, V.; Hangartner, T.; Kalkwarf, H.J.; Lappe, J.; Oberfield, S.; et al.
Accurate body composition measures from whole-body silhouettes. Med. Phys. 2015, 42, 4668–4677.
20. Daanen, H.A.M.; TerHaar, F.B. 3D whole body scanners revisited. Displays 2013, 34, 270–275.
21. Kuk, J.L.; Lee, S.; Heymsfield, S.B.; Ross, R. Waist circumference and abdominal adipose tissue distribution: Influence of age and sex. Am. J. Clin. Nutr. 2005, 81, 1330–1334.
22. Guo, S.S.; Chumlea, W.C.; Cockram, D.B. Use of statistical methods to estimate body composition. Am. J. Clin. Nutr. 1996, 64, 428S–435S.
23. Sun, S.S.; Chumlea, W.C.; Heymsfield, S.B.; Lukaski, H.C.; Schoeller, D.; Friedl, K.; Kuczmarski, R.J.; Flegal, K.M.; Johnson, C.L.; Hubbard, V.S. Development of bioelectrical impedance analysis prediction equations for body composition with the use of a multicomponent model for use in epidemiologic surveys. Am. J. Clin. Nutr. 2003, 77, 331–340.
24. Bland, J.M.; Altman, D.G. Statistical method for assessing agreement between two methods of clinical measurement. Lancet 1986, 327, 307–310.
25. Van Wormer, J.J.; Martinez, A.M.; Martinson, B.C.; Crain, A.L.; Benson, G.A.; Cosentino, D.L.; Pronk, N.P. Self-weighing promotes weight loss for obese adults. Am. J. Prev. Med. 2009, 36, 70–73.
26. Steinberg, D.M.; Bennett, G.G.; Askew, S.; Tate, D.F. Weighing every day matters: Daily weighing improves weight loss and adoption of weight control behaviors. J. Acad. Nutr. Diet. 2015, 115, 511–518.
27. Bastien, M.; Poirier, P.; Lemieux, I.; Després, J.P. Overview of epidemiology and contribution of obesity to cardiovascular disease. Prog. Cardiovasc. Dis. 2014, 56, 369–381.
28. Soileau, L.; Bautista, D.; Johnson, C.; Gao, C.; Zhang, K.; Li, X.; Heymsfield, S.B.; Thomas, D.; Zheng, J. Automated anthropometric phenotyping with novel Kinect-based three-dimensional imaging method: Comparison with a reference laser imaging system. Eur. J. Clin. Nutr. 2016, 70, 475–481.
29. Wells, J.C.K.; Cole, T.J.; Treleaven, P.
Age-variability in body shape associated with excess weight: The UK National Sizing Survey. Obesity 2008, 16, 435–441.
30. Wells, J.C.; Cole, T.J.; Bruner, D.; Treleaven, P. Body shape in American and British adults: Between-country and inter-ethnic comparisons. Int. J. Obes. 2008, 32, 152–159.
31. Daniell, N.; Olds, T.; Tomkinson, G. Volumetric differences in body shape among adults with differing body mass index values: An analysis using three-dimensional body scans. Am. J. Hum. Biol. 2014, 26, 156–163.
32. Wells, J.C.; Douros, I.; Fuller, N.J.; Elia, M.; Dekker, L. Assessment of body volume using three-dimensional photonic scanning. Ann. N. Y. Acad. Sci. 2000, 904, 247–254.
33. Ng, B.K.; Hinton, B.J.; Fan, B.; Kanaya, A.M.; Shepherd, J.A. Clinical anthropometrics and body composition from 3D whole-body surface scans. Eur. J. Clin. Nutr. 2016.
34. Lee, J.J.; Freeland-Graves, J.H.; Pepper, M.R.; Yao, M.; Xu, B. Predictive equations for central obesity via anthropometrics, stereovision imaging and MRI in adults. Obesity 2014, 22, 852–862.
35. Lee, J.J.; Freeland-Graves, J.H.; Pepper, M.R.; Stanforth, P.R.; Xu, B. Prediction of android and gynoid body adiposity via a three-dimensional stereovision body imaging system and dual-energy X-ray absorptiometry. J. Am. Coll. Nutr. 2015, 34, 367–377.
36. Meunier, P.; Yin, S. Performance of a 2D image-based anthropometric measurement and clothing sizing system. Appl. Ergon. 2000, 31, 445–451.
37. Stewart, A.D.; Klein, S.; Young, J.; Simpson, S.; Lee, A.J.; Harrild, K.; Crockett, P.; Benson, P.J. Body image, shape, and volumetric assessments using 3D whole body laser scanning and 2D digital photography in females with a diagnosed eating disorder: Preliminary novel findings. Br. J. Psychol. 2012, 103, 183–202.
38.
Coughlin, S.S.; Whitehead, M.; Sheats, J.Q.; Mastromonico, J.; Hardy, D.; Smith, S.A. Smartphone applications for promoting healthy diet and nutrition: A literature review. Jacobs J. Food Nutr. 2015, 2, 021.
39. Pellegrini, C.A.; Pfammatter, A.F.; Conroy, D.E.; Spring, B. Smartphone applications to support weight loss: Current perspectives. Adv. Health Care Technol. 2015, 1, 13–22.

© 2016 by the authors; licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC-BY) license (http://creativecommons.org/licenses/by/4.0/).
work_3pc5m2eaend6roqiiauk4t2i5e ----

Hedonism in Abstract Art: Minimalist Digital Abstract Photography

Srdjan Jovanović, Palacky University

Abstract: In this piece of writing the writer/artist puts forward the view that art can be understood and taken in as sometimes purely hedonistic. By drawing upon theories pertaining to hedonism, he applies this view to minimalist digital abstract photography and tries to justify his point of view with the help of three abstract photographs.

Science is the instance of human thought with the greatest explanatory power. Science explains, and that is where its importance comes from. Its explanatory power is not only useful; it can also offer great amounts of pleasure to an intelligent being. Yet there are other instances of human effort in which pleasure is the only goal. From the Greek hedonist Democritus to today's hedonist/atheist philosopher Michel Onfray, we can follow a line of pure pleasure in human thought. Hedonism, as a principle, is a philosophy of enjoyment. It is vastly misunderstood, though. It is a common error to say that hedonism simply means doing only and exclusively that which brings the subject intense pleasure. This line of thought misinterprets the hedonist as taking joy as the only goal in life, sacrificing everything else to the unique goal of achieving ever greater states of gratification. This is incorrect on many levels.
Rupkatha Journal on Interdisciplinary Studies in Humanities, Summer Issue, Volume I, Number 1, 2009. URL of the journal: www.rupkatha.com/issue0109.php. URL of the article: www.rupkatha.com/0109hedonisminabstractart.pdf. © www.rupkatha.com

The designation 'hedonism' comes from the Greek word for 'pleasure', ήδονή. Democritus defined the ultimate goal in life as reaching 'contentment'. Aristippus of Cyrene was yet another philosopher who defined the Socratic maxim of happiness as one of the many goals of moral action, while Epicurus in a similar vein defined the state of ataraxia (αταραξία), freedom from fear, and aponia, the absence of bodily pain, as goals to be reached in life. Epicurean hedonism may be the type of hedonism a scientist might embrace, as it finds the ultimate good and ultimate pleasure in knowledge and in living a virtuous life.

Art, thus, is a form that can be understood and taken in as sometimes purely hedonistic, as the subject derives sheer pleasure from the perception of the piece of art in front of him. Hedonistic abstract art is precisely that type of art where nothing needs to be understood; it only has to be felt. This point of view takes a break from Epicurean hedonism and immerses the subject – the perceiver – in pure pleasure. While Francisco Goya's famous piece The Third of May 1808 needs the subject to understand it as well, to comprehend the horrors of war, abstract art is deprived of heavy semantics and concentrates on the purely visual. The very definition of art, after all, is that it is a type of human action in which emotions take the leading role. Thus my minimalist abstract photography (which can barely even be called photography – I use the word simply because the foundation, the canvas of the image, is taken by a digital camera) concentrates solely on three major instances relevant for any visual piece of art: spatial composition, color, and the light versus dark contrast.
I find the third instance most important, as the very word photography literally means 'painting with light'. The foundations of each picture were taken with a digital Practica, after which the .jpg photographs were 'played with', that is, modified towards the desired levels of saturation (color), contrast (black versus white) and composition (cropping the image to the desired form). All of that was done in Photoshop CS. The digital camera came as a much more practical and less financially demanding means of taking pictures. Before the Practica, I was working with older cameras such as the Zenit, Minolta and Yashica. The initial black and white photographs I used to develop myself (years ago), while the colored ones had to be taken elsewhere to be developed. After the development of the Photoshop series, even the film-based old-type cameras could have their films developed and transferred to a digital medium (jpg to CD), but it all took both money and effort. The era of digital photography opened up new doors to practicality and abstract art.

When it comes to using a digital camera versus an old one, there is much controversy. Some feel that technical development has 'made everything easier' and that 'true photographers' only use old cameras and develop their films themselves. I must stress that this is a vast misunderstanding. It would be the same if we said that writers today are not what they used to be because they no longer use ink and goose feathers, but Microsoft Word instead. The artist has to keep pace with society. It is not the type of camera that defines a photograph – it is the photographer's competence to make a composition and to use the contrast between colors and between the light and the dark.
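For readers who want to experiment with this kind of post-processing without Photoshop, the operations the essay relies on, contrast adjustment, cropping for composition, and (later in the essay) making a negative and turning the image upside-down, can be sketched in plain Python on a toy grayscale image. Everything below is an illustrative approximation, not the author's actual workflow:

```python
# A grayscale image is modeled as a list of pixel rows with values 0-255.

def adjust_contrast(pixels, factor, pivot=128):
    """Scale each pixel's distance from mid-gray; factor > 1 deepens the light/dark contrast."""
    return [[max(0, min(255, int(pivot + (p - pivot) * factor))) for p in row]
            for row in pixels]

def crop(pixels, top, left, height, width):
    """The composition step: keep only the chosen window."""
    return [row[left:left + width] for row in pixels[top:top + height]]

def negative(pixels):
    """Invert tones, as done for the Tower of Despair piece."""
    return [[255 - p for p in row] for row in pixels]

def upside_down(pixels):
    """Rotate 180 degrees by reversing rows and columns."""
    return [row[::-1] for row in pixels[::-1]]

image = [[10, 100, 200, 250],
         [40, 120, 180, 230],
         [60, 140, 160, 210]]

edited = upside_down(negative(crop(adjust_contrast(image, 1.5), 0, 1, 2, 3)))
```

Saturation has no grayscale analogue, but on an RGB image the same pattern applies per channel.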
In the vein of Edgar Allan Poe's well-known essay 'The Philosophy of Composition' (1846), in which he describes in detail how he created probably one of the most beautiful poems ever written in the English language, I shall proceed to show how my works of art came to pass, giving a brief overview of each of the three pieces I present here.

The So Above, So Below composition (picture below), for instance, is based – as many of my pieces are – on a diagonally-held composition, where a tangible diagonal line of division visibly intersects the main frame, separating the white area from the blue and thus creating a visible contrast. The lines add to the dynamics of the composition, perhaps asking the viewer to wonder what they actually represent. The texture of the right-based sky-blue part is reminiscent of the texture and look of a watercolor work, and the representation of the sky is, for me, a personal reminder of my earlier watercolor days.

The Let It Be composition is a monochrome one. The contrast, so pleasing to the eye, is here based exclusively on the difference between white – the lack of color – and black/dark – all the colors mixed together. The blotch in the middle gives the impression of chalk and charcoal, so often used in every artist's early education. The diagonal, this time, is not a clear-cut one – it is but a difference in shade between the upper-right and lower-left corners of the piece.

The Tower of Despair is one of the pieces in which I used the night – the bane of many a photographer – to my advantage. With a low exposure value and a wide-open shutter, I twisted the camera while taking a picture in almost utter darkness, so that the soft glow of the streetlights visible in the distance would leave a trail in the form of whitish lines, after which a negative was made, re-saturated and turned upside-down, thus making the fence-like structure that serves as the grounding for the picture. The misty tower in the distance might just have been a pillar of lighter background, or the remnants of the form of a building, carefully rubber-stamped towards the achievement of the desired form.

The framing of the pieces presented here is a different story. I prefer to frame all of my pieces manually and to select every frame and/or mat myself. The choice of frame depends heavily on the composition itself. The So Above, So Below composition should be put on a wide white background behind a glass cover, so that the whiteness of the mat strengthens the blue color in the piece. Let It Be, however, due to its sepia-like saturation, deserves a simple wooden frame with no glass, allowing its wooden quality to be strengthened by the frame. The pictures themselves are to be viewed in large formats, with the mat up to 15 inches wide.

The writer/artist: Srdjan Jovanović is a doctoral candidate at the Department of History, Palacky University, Olomouc, Czech Republic. E-mail: srdjan.j@humanicus.org

work_3pkbnsk7dbet7pqbdodpd32ieu ----

Safety and Effectiveness of the Hyaluronic Acid Dermal Filler VYC-17.5L for Nasolabial Folds: Results of a Randomized, Controlled Study

DOI:10.1097/DSS.0000000000001529 Corpus ID: 13879929

@article{Monheit2018SafetyAE, title={Safety and Effectiveness of the Hyaluronic Acid Dermal Filler VYC-17.5L for Nasolabial Folds: Results of a Randomized, Controlled Study}, author={G. Monheit and K. Beer and B. Hardas and P. Grimes and Barry M. Weichman and V. Lin and D. Murphy}, journal={Dermatologic Surgery}, year={2018}, volume={44}, pages={670 - 678} }

G. Monheit, K.
Beer, +4 authors, D. Murphy. Published 2018 in Dermatologic Surgery.

BACKGROUND: Juvéderm Vollure XC (VYC-17.5L) belongs to a family of nonanimal hyaluronic acid (HA) gels based on the Vycross technology platform. OBJECTIVE: To evaluate the safety and effectiveness of VYC-17.5L for correction of moderate to severe nasolabial folds (NLFs) compared with a control HA dermal filler. METHODS: In this double-blind study, 123 adults with 2 moderate or severe NLFs as measured on the 5-point photonumeric NLF severity scale (NLFSS) were randomized to VYC-17.5L in 1 NLF and…

Topics from this paper: Hyaluronic Acid; Vocal cord structure; Nasolabial sulcus; Dermal Fillers; Filler (substance); Juvéderm; Silo Filler's Disease; Gel

Paper mention (interventional clinical trial): A Safety and Effectiveness Study of JUVÉDERM VOLIFT® XC Versus Control for Moderate to Severe Nasolabial Folds. A prospective, multicenter, within-subject controlled study of the safety and effectiveness of JUVÉDERM VOLIFT® XC versus Control for the correction of moderate to severe nasolabial… Conditions: moderate to severe nasolabial folds. Intervention: device. Sponsor: Allergan. October 2013 – October 2015.

Citations (16; first 10 shown):
- Safety and Effectiveness of VYC-17.5L for Long-Term Correction of Nasolabial Folds. S. Dayan, C. Maas, +5 authors, V. Lin. Aesthetic Surgery Journal, 2019.
- Safety and Efficacy of Princess® FILLER Lidocaine in the Correction of Nasolabial Folds. D. Grablowitz, M. Sulovsky, S. Hoeller, Z. Ivezic-Schoenfeld, S. Chang-Rodriguez, M. Prinz. Clinical, Cosmetic and Investigational Dermatology, 2019.
- Non-surgical Rhinoplasty with Hyaluronic Acid Fillers: Predictable Results Using Software for the Evaluation of Nasal Angles. A. Santorelli, S. Marlino. Aesthetic Plastic Surgery, 2019.
- Full-Face Rejuvenation with Hyaluronic Acid Fillers Based on the MD Codes Technique: A Retrospective, Single-Center Study. M. T. Saliani. 2020.
- Nonsurgical Redefinition of the Chin and Jawline of Younger Adults With a Hyaluronic Acid Filler: Results Evaluated With a Grid System Approach. D. Bertossi, M. Robiony, A. Lazzarotto, G. Giampaoli, R. Nocini, P. Nocini. Aesthetic Surgery Journal, 2020.
- Improvements in Skin Quality Biological Markers in Skin Explants Using Hyaluronic Acid Filler VYC-12L. L. Nakab, C. K. Hee, O. Guetta. Plastic and Reconstructive Surgery – Global Open, 2020.
- Evaluation of the efficacy of a new hyaluronic acid gel on dynamic and static wrinkles in volunteers with moderate aging/photoaging. A. Sparavigna, B. Tenconi, A. Giori, G. Bellia, L. La Penna. Clinical, Cosmetic and Investigational Dermatology, 2019.
- Non-surgical facial reshaping using MD Codes. D. Bertossi, P. Nocini, E. Rahman, I. Heydenrych, K. M. Kapoor, M. de Maio. Journal of Cosmetic Dermatology, 2020.
- A Systematic Review of the Literature of Delayed Inflammatory Reactions After Hyaluronic Acid Filler Injection to Estimate the Incidence of Delayed Type Hypersensitivity Reaction. K. L. Chung, C. Convery, I. Ejikeme, A. Ghanem. Aesthetic Surgery Journal, 2019.
- Improvements in satisfaction with skin after treatment of facial fine lines with VYC-12 injectable gel: Patient-reported outcomes from a prospective study. P. Ogilvie, M. Safa, +5 authors, A. Marx. Journal of Cosmetic Dermatology, 2019.

References (showing 1–10 of 18):
- Safety and Effectiveness of Hyaluronic Acid Injectable Gel in Correcting Moderate Nasolabial Folds in Chinese Subjects. Y. Wu, J. Xu, Y. Jia, D. Murphy. Journal of Drugs in Dermatology, 2016.
- Efficacy and durability of two hyaluronic acid-based fillers in the correction of nasolabial folds: results of a prospective, randomized, double-blind, actively controlled clinical pilot study. A. Nast, N. Reytan, +4 authors, B. Rzany. Dermatologic Surgery, 2011.
- Efficacy and safety of a new hyaluronic acid dermal filler in the treatment of moderate nasolabial folds: 6-month interim results of a randomized, evaluator-blinded, intra-individual comparison study. B. Rzany, C. Bayerl, +8 authors, M. Podda. Journal of Cosmetic and Laser Therapy, 2011.
- Juvéderm injectable gel: a multicenter, double-blind, randomized study of safety and effectiveness. M. A. Pinsky, J. A. Thomas, D. Murphy, P. Walker. Aesthetic Surgery Journal, 2008.
- Duration of wrinkle correction following repeat treatment with Juvéderm hyaluronic acid fillers. S. Smith, D. Jones, J. A. Thomas, D. Murphy, F. Beddingfield. Archives of Dermatological Research, 2010.
- Effectiveness of Juvéderm Ultra Plus Dermal Filler in the Treatment of Severe Nasolabial Folds. M. Lupo, S. Smith, J. A. Thomas, D. Murphy, F. Beddingfield. Plastic and Reconstructive Surgery, 2008.
- Comparison of Smooth-Gel Hyaluronic Acid Dermal Fillers with Cross-linked Bovine Collagen: A Multicenter, Double-Masked, Randomized, Within-Subject Study. L. Baumann, A. T. Shamban, +4 authors, P. Walker. Dermatologic Surgery, 2007.
- A multi-center, double-blind, randomized controlled study of the safety and effectiveness of Juvéderm® injectable gel with and without lidocaine. S. Weinkle, D. Bank, C. Boyd, M. Gold, J. A. Thomas, D. Murphy. Journal of Cosmetic Dermatology, 2009.
- Hyaluronic acid fillers: a comprehensive review. K. Beasley, M. Weiss, R. Weiss. Facial Plastic Surgery, 2009.
- Efficacy and safety of a new hyaluronic acid dermal filler in the treatment of severe nasolabial lines – 6-month interim results of a randomized, evaluator-blinded, intra-individual comparison study. B. Ascher, C. Bayerl, +5 authors, M. Podda. Journal of Cosmetic Dermatology, 2011.
work_3poru3gdljbcxh7yktfxpr6axm ---- NOTES ON CONTRIBUTORS Richard J. Aldrich is Professor of International Security at the University of Warwick and is the author of several books including The Hidden Hand: Britain, America and Cold War Secret Intelligence (John Murray Publishers Ltd, 2001), which won the Donner Book Prize in 2002. He is currently directing the AHRC project, ‘Landscapes of Secrecy: The Central Intelligence Agency and the contested record of US foreign policy, 1947–2001’. He has held a Fulbright fellowship at Georgetown University and more recently has spent time in Canberra and Ottawa as a Leverhulme Fellow. Tarak Barkawi is Senior Lecturer in War Studies at the Centre of International Studies, University of Cambridge. He specialises in the study of war, armed forces and society, with a focus on conflict between the West and the global South. He earned his doctorate at the University of Minnesota. His publications include Globalization and War (Rowman and Littlefield, 2005); ‘Peoples, Homelands and Wars? Ethnicity, the Military and Battle among British Imperial Forces in the War against Japan’, Comparative Studies in Society and History, 46:1 (January 2004); ‘Strategy as a Vocation: Weber, Morgenthau and Modern Strategic Studies’, Review of International Studies, 24:2 (1998); and, with Mark Laffey, ‘The Imperial Peace: Democracy, Force, and Globalization’, European Journal of International Relations, 5:4 (1999) and ‘The Postcolonial Moment in Security Studies’, Review of International Studies, 32:4 (2006).
Angus Boulton is a visual artist working in photography and film. The DG Bank Kunststipendium and subsequent Berlin residency in 1998/99 created an opportunity to turn his attention towards different aspects of the Cold War, and he began investigating what remained of the Soviet military legacy in Eastern Europe. Until 2008 he was engaged in an AHRC research fellowship at Manchester Metropolitan University. The photographic series and various films arising from his projects have been exhibited internationally. {www.angusboulton.net} Chris Brown is Professor of International Relations and Vice-Chair of the Academic Board at the London School of Economics and Political Science. He is the author of numerous articles in international political theory and of Sovereignty, Rights and Justice (Polity Press, 2002); International Relations Theory: New Normative Approaches (Columbia University Press, 1992); editor of Political Restructuring in Europe: Ethical Perspectives (Routledge, 1994) and co-editor (with Terry Nardin and N. J. Rengger) of International Relations in Political Thought: Texts from the Greeks to the First World War (Cambridge University Press, 2002). His textbook Understanding International Relations (Palgrave Macmillan, 2009) is now in its 4th edition and has been translated into Arabic, Turkish and Chinese.
[Review of International Studies (2009), 35, 723–728. Copyright © British International Studies Association. doi:10.1017/S0260210509990441]
A collection of his essays, Practical Judgement and International Relations: Essays in International Political Theory, will appear in 2010. A graduate of LSE, he taught at the University of Kent from 1970 to 1994 and was Professor of Politics at the University of Southampton from 1994 to 1998 before returning to the School in 1999. He is a former Chair of the British International Studies Association. Bernadette Buckley is a Lecturer in International Politics at Goldsmiths College in London {http://www.goldsmiths.ac.uk/politics/staff/buckley.php}. She previously taught at the International Centre of Cultural and Heritage Studies, Newcastle University, prior to which she was Head of Education & Research at the John Hansard Gallery, University of Southampton. Her research interests cut across diverse fields, from visual culture to politics and cultural studies. She has published broadly, in books and journals as well as in exhibition catalogues. Recent publications include: ‘Mohamed is Absent. I am Performing: Contemporary Iraqi Art and the Destruction of Heritage’ in Peter G. Stone and Joanne Farchakh Bajjaly (eds), The Destruction of Cultural Heritage in Iraq, with foreword by Robert Fisk (2009), and ‘Terrible Beauties’ in Brumaria, 12 (February 2009). She has also worked on a number of funded research projects for the AHRC, ACE, En-quire, Heritage Lottery and the Wellcome Foundation. Tim Cross: Major General Tim Cross retired from the British Army in January 2007. He was appointed CBE in the Kosovo operational awards list in 2000. His wide operational experience concluded in Washington, Kuwait and Baghdad as the International Deputy in the US-led Office of Reconstruction and Humanitarian Affairs (ORHA) in 2002/03, later re-titled the Coalition Provisional Authority (the CPA). He now works with a number of charitable foundations, international aid organisations and defence-related companies.
He is also a Visiting Professor at three universities and the current Army Adviser to the UK House of Commons Defence Committee. Alex Danchev is Professor of International Relations at the University of Nottingham and, in 2009–10, Warden’s Visiting Fellow at St Antony’s College, Oxford. He is the author of a number of widely acclaimed biographies and has written extensively on various aspects of art and politics. His most recent books are Georges Braque (Arcade Publishing, 2007); Picasso Furioso (Editions Dilecta, 2008); and On Art and War and Terror (Edinburgh University Press, 2009). He is currently working on a biography of Cezanne and a collection of artists’ manifestos. Philip H. J. Davies is Director of the Brunel Centre for Intelligence and Security Studies at Brunel University and Convenor of the Security and Intelligence Studies Group of the UK Political Studies Association. He is the author of MI6 and the Machinery of Spying (Routledge, 2004) and co-author of Spinning the Spies: Intelligence, Open Government and the Hutton Inquiry (Social Affairs Unit, 2004) and The Open Side of Secrecy: Britain’s Intelligence and Security Committee (Social Affairs Unit, 2006). He is currently completing a comparative study of national intelligence in Britain and the United States, tentatively entitled They Come Not Single Spies: Intelligence and Government in Britain and the United States, to be published by Praeger in 2010.
Richard Devetak is Senior Lecturer in International Relations and Director of the Rotary Centre for International Studies in Peace and Conflict Resolution at the University of Queensland. Among other things, he is co-editor of An Introduction to International Relations: Australian Perspectives (Cambridge University Press, 2007); The Globalization of Political Violence (Routledge, 2008); and Security and the War on Terror (Routledge, 2008). Antony Field completed an ESRC-funded doctorate on terrorism at the University of Warwick in 2009. His current research concerns the degree of continuity and change in the organisation of terrorist groups and its implications for the responses of intelligence and security agencies. Alastair Finlan is an RCUK Academic Fellow in Strategic Studies in the Department of International Politics at Aberystwyth University. He has published widely on the Falklands and Gulf Wars, as well as on warfare and strategic culture. His most recent book, Special Forces, Strategy and the War on Terror: Warfare By Other Means, was published by Routledge in 2008. Christopher J. Finlay is a Lecturer in Political Theory at the University of Birmingham (Department of Political Science and International Studies). His current research is in the fields of just war theory, the ethics of political violence and the history of political thought. Recent publications include Hume’s Social Philosophy (London & New York: Continuum, 2007) and articles in the Journal of Political Philosophy, History of Political Thought, the European Journal of International Relations, the European Journal of Political Theory, Thesis Eleven, and the International Journal of Philosophical Studies. Frank Foley is a Zukerman Postdoctoral Fellow in the Center for International Security and Cooperation (CISAC) at Stanford University.
He received his PhD in Political Science from the European University Institute (EUI) in Florence, Italy, and an MPhil in History from the University of Cambridge. Stevyn D. Gibson lectures and publishes on concepts of intelligence, security, risk, and resilience at Cranfield University. He runs the university's ‘Intelligence in International Security’ MSc module. His PhD explored the relationship between OSINT and national intelligence structures. He is the author of The Last Mission, a first-hand account of intelligence collection behind the Iron Curtain, and sits on the steering committee of the Oxford Intelligence Group. Peter Gill is Professor of Intelligence Studies at Salford University and Honorary Fellow at the University of Liverpool. He is the author of Policing Politics: Security Intelligence and the Liberal Democratic State (London: Cass, 1994) and has recently co-authored (with Mark Phythian) Intelligence in an Insecure World (Cambridge: Polity, 2006). He continues to research issues involving the democratisation of intelligence and the impact on intelligence of the ‘war on terror’. Liam Kennedy is Director of the Clinton Institute for American Studies at University College Dublin. He is the author of Susan Sontag: Mind as Passion (Manchester University Press, 1995) and Race and Urban Space in America (Routledge, 2000), and editor of Urban Space and Representation (Chicago: Fitzroy Dearborn, 1999) and Visual Culture and Urban Regeneration (Routledge, 2004).
He is currently writing a book on photography and international conflict and preparing edited books on urban photography and on The Wire. He leads a research project on Photography and International Conflict {www.photoconflict.org}, funded by the Irish Research Council for the Humanities and Social Sciences. Ian Leigh is Professor of Law and co-director of the Human Rights Centre in the Department of Law at the University of Durham. He has published widely in the fields of public law and human rights, and his recent report Making Intelligence Accountable (with Dr Hans Born), published by the Norwegian Parliament Printing House in 2005, has been translated into twelve languages. His most recent book is Making Rights Real: The Human Rights Act in its First Decade, with R. C. W. Masterman (London: Hart Publishing, 2008). Debbie Lisle is a Senior Lecturer in the School of Politics, International Studies and Philosophy at the Queen’s University Belfast. Her work explores the intersections of culture, power and travel, and draws from International Relations, social and political theory, cultural studies, media studies and tourism studies. She has explored war films, museum exhibits, airports, travel guide books, photography and visual art in her research, and her articles have appeared in journals such as Millennium, Security Dialogue, and the Review of International Studies. Her first book, The Global Politics of Contemporary Travel Writing, was published by Cambridge University Press in 2006, and she is currently working on a project exploring the intersections of tourism, war and visuality. Susan McManus is Lecturer in Political Theory at Queen’s University, Belfast. She has published essays and a book on the intersection of political theory, post-structuralism, and utopian studies. Her current research projects focus on theorising affective agency, figurations of oppositional consciousness, and anti-humanist critiques of cosmopolitanism.
Frank Möller is a Research Fellow at the Tampere Peace Research Institute, University of Tampere, Finland. He is a member of the Finnish Center of Excellence in Political Thought and Conceptual Change, Research Team Politics and the Arts, and the co-editor of Cooperation and Conflict (2005–2009). He is interested in the theory and practice of peaceful change and in visual peace research. His most recent book is Thinking Peaceful Change: Baltic Security Policies and Security Community Building (Syracuse University Press, 2007); email: {frank.moller@uta.fi}. Kevin A. O’Brien is the Director of Alesia PSI Consultants Ltd, which provides support to government and the critical infrastructure sectors on security matters. He is a Fellow in the Department of War Studies, King’s College London, an Associate of Libra Advisory Group (UK) and a Senior Consultant to Innovative Analytics and Training, LLC (US). He was previously Deputy Director of RAND Europe’s Defence and Security Programme, and Deputy Director of the International Centre for Security Analysis, King’s College London. The author of more than sixty monographs, academic articles, reports and trade publications, he is a member of the Editorial Board of the journal Small Wars and Insurgencies, and the Complex Terrain Lab {www.terraplexic.org/}.
He is currently completing one book on the history of South Africa’s intelligence dispensation (forthcoming Routledge, 2010) and another on the terrorist craft of intelligence (forthcoming Hurst, 2010). Hilary Roberts, Head of Collections Management at the Imperial War Museum Photograph Archive, has worked as a curator of photography since 1980. In her current role, she is responsible for managing the Imperial War Museum’s collection of 10 million images covering all aspects of modern conflict from 1850 to the present day. A specialist in the history of war photography and a qualified archivist, Hilary is a member of various national and international bodies concerned with the history of photography, as well as others concerned with the care and management of photographic collections. She works closely with working photographers who cover war from a broad range of perspectives and is a contributor to international efforts to establish standards and techniques for the management and preservation of digital photography. In 2008, she curated a major exhibition to mark the centennial of the well-known Life and Magnum photographer George Rodger. She is now preparing a major exhibition and book on the life and work of Don McCullin for February 2010. Angharad Closs Stephens is Lecturer in Human Geography at Durham University. Her research work focuses on contemporary attempts to imagine ‘community without unity’. She has published in Alternatives: Global, Local, Political; is co-editor (with N. Vaughan-Williams) of Terrorism and the Politics of Response (Routledge, 2008); and is co-convenor of the BISA Post-structural Politics Working Group. She holds a PhD in Politics and International Relations from Keele University. Nick Vaughan-Williams is Lecturer in Politics at the Department of Politics, University of Exeter, UK, and co-convenor of the BISA Post-structural Politics Working Group.
He has recently published articles in Alternatives: Global, Local, Political, International Politics and Millennium: Journal of International Studies. Cynthia Weber is Professor of International Politics at Lancaster University and Co-Director of the media company Pato Productions. Her recent work investigates US domestic and foreign policy and US identity in relation to the so-called War on Terror through the critique of popular US films and through her own production of moving and still video images. Maja Zehfuss is Professor of International Politics at The University of Manchester. She is the author of Constructivism and International Relations: The Politics of Reality (Cambridge University Press, 2002) and Wounds of Memory: The Politics of War in Germany (Cambridge University Press, 2007), and the co-editor, with Jenny Edkins, of Global Politics: A New Introduction (Routledge, 2008). She is currently working on the politics of ethics in relation to war.
work_3r73gwna6vd2xgktivkardknfa ---- Page not available. Reason: The web page address (URL) that you used may be incorrect. Message ID: 218541341 (wp-p1m-39.ebi.ac.uk). Time: 2021/04/06 02:16:18. work_3rn2pobjsjfwngmsrnab4pbd6u ---- Telemedicine Applications in Pediatric Retinal Disease. Journal of Clinical Medicine, Review. Akhilesh S. Pathipati 1 and Darius M.
Moshfeghi 2,*
1 Stanford University School of Medicine, Stanford, CA 94305, USA; apathip@stanford.edu
2 Department of Ophthalmology, Byers Eye Institute, Horngren Family Vitreoretinal Center, Stanford University School of Medicine, Palo Alto, CA 94303, USA
* Correspondence: dariusm@stanford.edu; Tel.: +1-650-721-6888
Academic Editors: Yolanda Blanco, Núria Solà-Valls, Rajender Gattu and Richard Lichenstein
Received: 17 November 2016; Accepted: 20 March 2017; Published: 23 March 2017

Abstract: Teleophthalmology is a developing field that presents diverse opportunities. One of its most successful applications to date has been in pediatric retinal disease, particularly in screening for retinopathy of prematurity (ROP). Many studies have shown that using telemedicine for ROP screening allows a remote ophthalmologist to identify abnormal findings and implement early interventions. Here, we review the literature on uses of telemedicine in pediatric retinal disease and consider future applications.

Keywords: teleophthalmology; retina; pediatrics; retinopathy of prematurity; telemedicine screening

1. Introduction

Telemedicine broadly refers to the use of communications technology to assist in the diagnosis and management of disease [1]. Recent advances in ophthalmic imaging and mobile technology have enabled applications in ophthalmology. Efforts to date have focused on screening and monitoring of disease [2]. For instance, telemedicine-based screening of diabetic retinopathy (DR) has been shown to increase the proportion of patients receiving an annual exam, identify diabetic changes to the retina, and decrease vision loss from DR at a population level [3–5]. Several studies have evaluated telemedicine for glaucoma and age-related macular degeneration as well [6–9]. Results have been mixed but promising for both conditions. While these findings are encouraging, teleophthalmology has had limited use in adult populations.
Further research is needed regarding its practicality, reliability, and cost-effectiveness [10]. In this context of limited information, patients, providers, and payers have struggled to establish a viable financial model for services. As a result, services remain poorly reimbursed and are not widely available outside of research settings [11]. By contrast, teleophthalmology has had considerable success in the pediatric population, particularly for retinal disease in newborns. It has been widely validated for screening of retinopathy of prematurity (ROP) [12,13] and has more recently been applied as a screening tool for other ocular pathology present at birth [14,15]. Pediatric retinal disease is particularly well suited to telemedicine because (1) the existing health care delivery system struggles to screen for these diseases [16,17]; (2) digital imaging tools allow us to address that need [18,19]; and (3) once a condition is identified, there are often therapeutic interventions available that benefit the patient [20,21]. It therefore avoids the pitfalls facing telemedicine programs that are conceptually appealing but do not have clear targets and ultimately do not improve health. This review will summarize the literature and consider future applications of telemedicine in pediatric retinal disease. J. Clin. Med. 2017, 6, 36; doi:10.3390/jcm6040036

2. Background on Retinopathy of Prematurity

Retinopathy of prematurity refers to a condition of immature retinal blood supply in premature and low-birth-weight infants. When an infant is born before retinal vasculature has adequately developed (typically before 30–32 weeks of gestation), areas of the peripheral retina may become ischemic.
This ischemic retina secretes vascular endothelial growth factor (VEGF), which can lead to disordered neovascularization, retinal detachment, and blindness. Although rare, ROP is a leading cause of blindness in children and is believed to affect a majority of preterm infants born below 1500 g [20]. ROP is further categorized by the geographic zones of the retina that are involved (Zones 1–3), the stage of the junction between vascular and avascular retina (Stages 1–5), and the presence of plus disease. Plus disease refers to a set of characteristic complications relating to vascular dilation and tortuosity that can be present at any stage [22,23]. Infants are deemed candidates for treatment based on zone, stage, and the presence of plus disease. Surgical treatment is recommended for Zone I (any stage with plus disease), Zone I (Stage 3 without plus disease), or Zone II (Stage 2 or 3 with plus disease) [24]. Treatment options include ablative therapy, cryotherapy, laser surgery, and intravitreal injections [20,21]. Randomized trials have shown that early treatment significantly improves outcomes and effectively preserves vision [20,21]. Screening therefore provides a substantial clinical impact.

3. Telemedicine for Retinopathy of Prematurity

Current guidelines recommend bedside screening of at-risk infants in the neonatal intensive care unit (NICU) by binocular indirect ophthalmoscopy (BIO); in fact, ROP screening is required for NICUs to maintain accreditation in the United States (US) [24]. Meeting this standard is challenging for various reasons. First, an increasing number of preterm infants are born and survive as neonatal care becomes more sophisticated. At the same time, a decreasing number of ophthalmologists are willing to perform ROP exams, due to medicolegal liability and the logistical hurdles of traveling to NICUs and coordinating with staff for eye exams [16,17]. As a result, many NICUs in the US struggle to provide ROP screening.
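The zone/stage/plus-disease rule described above is, in effect, a small decision table that a screening program applies to every graded exam. As a minimal sketch (the function name and the integer encoding of zones and stages are our own illustration, not part of the guidelines), it can be written as:

```python
def treatment_warranted(zone: int, stage: int, plus_disease: bool) -> bool:
    """Return True if surgical treatment is recommended under the rule
    summarized above: Zone I with plus disease (any stage), Zone I Stage 3
    without plus disease, or Zone II Stage 2 or 3 with plus disease.
    """
    if zone == 1 and (plus_disease or stage == 3):
        # Zone I: any stage with plus disease, or Stage 3 without it
        return True
    if zone == 2 and plus_disease and stage in (2, 3):
        # Zone II: Stage 2 or 3, only when plus disease is present
        return True
    return False
```

For instance, Zone II, Stage 3 disease with plus disease screens in, while the same findings without plus disease do not meet the surgical threshold under this rule.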
The challenge is even more pressing abroad. Telemedicine offers an opportunity to alleviate that burden [25]. Trained NICU technicians can use a wide-angle, fiber-optic fundus camera (e.g., RetCam III, Clarity Medical Systems, Pleasanton, CA, USA) to obtain retinal images and send them to a specialist via remote digital fundus imaging (RDFI). The specialist can then screen the images for ROP. A host of studies have evaluated RDFI dating back to the early 2000s [13]. Existing research indicates that telemedicine is a reliable, cost-effective method for ROP screening that is comparable to bedside BIO [2,26–28]. The Photographic Screening for Retinopathy of Prematurity (PHOTO-ROP) study was one of the early studies to evaluate telemedicine for ROP [26]. It enrolled 51 infants in a prospective, multicenter trial in 2001 and 2002. The study found that RDFI detected "clinically significant" ROP with 92% sensitivity and 37% specificity compared to BIO. For patients with ROP that met criteria for early treatment, RDFI demonstrated 92% sensitivity and 67% specificity. Quinn et al. evaluated 1257 infants for "referral-warranted" ROP (RW-ROP) using RDFI [27]. Digital image grading by non-physician readers identified RW-ROP with 90% sensitivity and 87% specificity compared to BIO by an ophthalmologist. In some respects, wide-angle photography is even superior to BIO. Richter et al. found that ROP screening via telemedicine required less time than BIO, thereby increasing the efficiency of the examining ophthalmologist [29], while Myung et al. noted that digital image capture allowed for analysis of the progression of ROP over time [30]. In light of these findings, the 2013 screening guidelines acknowledge the value of telemedicine in ROP screening and allow it as an alternative means of screening [24]. A subsequent Joint Technical Report issued by the American Academy of Pediatrics in 2015 found that "There is
level I evidence from at least 5 studies demonstrating that digital retinal photography has high accuracy for the detection of clinically significant ROP" [12]. Of note, RDFI does not capture all ROP. The accuracy of wide-angle cameras is not as well established for mild ROP that is primarily present in Zone 3 [12]. However, this type would not require treatment. RDFI is capable of identifying clinically actionable ROP with a high degree of accuracy, which is the relevant goal for a screening program. Multiple studies have reported on "real-world" programs that employ telemedicine for ROP screening. The Stanford University Network for Diagnosis of Retinopathy of Prematurity (SUNDROP) is an ongoing telemedicine screening program (founded by author DMM). Nurses in underserved NICUs capture retinal images that are reviewed by a retinal specialist at a remote quaternary care center [18]. Now, with more than 11 years of data and over 1000 infants screened, remote interpretation of RetCam images has had 100% sensitivity and 99.8% specificity compared to BIO for treatment-warranted ROP [18]. In other words, no blinding disease has been missed by telemedicine screening for ROP. Similarly, Lorenz et al. reported on a 6-year, multicenter study in Germany and found that wide-field digital imaging captured all treatment-requiring ROP [31]. The KIDROP program found that telemedicine screening effectively identified ROP in rural South India, with no infants progressing to unscreened Stage 4 or 5 retinopathy [19]. Finally, the PCA-PERP program in Hungary found that telemedicine screening not only had high diagnostic performance but also produced cost savings [32]. Taken together, these findings demonstrate that telemedicine is an effective modality for ROP screening.
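The sensitivity and specificity figures quoted throughout this section reduce to simple ratios over a confusion matrix. A minimal sketch follows; the counts below are invented for illustration and are not taken from any of the cited studies:

```python
def sensitivity_specificity(tp: int, fn: int, tn: int, fp: int) -> tuple[float, float]:
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity

# Hypothetical screening cohort: 50 true ROP cases, of which RDFI flags 46;
# 950 unaffected infants, of which 76 are incorrectly flagged.
sens, spec = sensitivity_specificity(tp=46, fn=4, tn=874, fp=76)
print(f"sensitivity={sens:.2f}, specificity={spec:.2f}")
```

The trade-off visible in PHOTO-ROP (92% sensitivity, 37% specificity) is typical of screening tests tuned to miss as little disease as possible: false positives lower specificity but only trigger a confirmatory bedside exam, whereas false negatives lower sensitivity and can mean missed treatment windows.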
While its use should continue to be refined [33], it addresses an important need that the existing ophthalmology infrastructure is not well equipped to handle, and it does so without sacrificing quality of care.

4. Future Applications

Screening for ROP through remote image interpretation is the most well-established application of telemedicine for pediatric retinal disease. Additional approaches are now being explored.

4.1. Use of Image Analysis Software

Within ROP screening, there is some excitement about the use of image analysis software to assist in the diagnosis of disease. Identifying plus disease is particularly challenging for clinicians, and software that can trace and quantify vessel characteristics may aid in diagnosis [34]. In one study, a computer program (ROPtool) detected plus disease with 97% sensitivity and 94% specificity, compared with just 72% average sensitivity for human examiners [35]. However, the use of image recognition software faces several challenges. Perhaps most notably, there is poor inter-observer agreement on what constitutes plus disease [36,37]. Reference standards must be established before any software can be benchmarked against them.

4.2. Universal Screening of Newborns

Vision impairment is common, but it is unknown how much of it stems from factors present at birth. At present, screening for eye disease typically does not start until several years after birth, often in elementary school. Yet early ocular pathology may be responsible for later ophthalmic disease. For instance, amblyopia is the most frequent manifestation of visual impairment in children, with prevalence estimates ranging from 2% to 5% [38]. While the cause is often unknown, it has been hypothesized that untreated perinatal hemorrhage may play a role by influencing the visual axis, eye pressure, or other factors [39].
Detection of early pathology can lead to interventions that dramatically improve vision in the long term. Given the success of screening for ROP, a logical next step is to incorporate screening for other ocular pathology in newborns. A single-hospital study in China screened over 8000 otherwise healthy newborns and found ocular pathology in 23.2% of cases: 20.96% of newborns had retinal hemorrhages and 2.28% had other findings (including congenital anomalies, anterior chamber hemorrhage, etc.) [14,40]. In order to better understand the effects of ocular abnormalities present at birth, the Newborn Eye Screen Testing (NEST) study was initiated at Stanford University. A prospective cohort of newborns at Stanford was enrolled and screened with RetCam III digital photography in 2013–2014. The study will track the development of eye disease over time in this cohort. In the initial enrollment period, 203 subjects were screened, with 39 subjects (19%) demonstrating retinal hemorrhages [41]. Along with the NEST study, the Global Universal Eye Screen Testing (GUEST) sub-study will evaluate universal eye screening with images from other parts of the world.

5. Conclusions

Telemedicine has been successfully employed in pediatric retinal disease. It has been most widely used in screening for retinopathy of prematurity but has already shown promise as a universal newborn screening tool. These screening tools benefit from well-defined clinical targets that produce actionable findings. Moving forward, we should continue to prioritize tools that advance health care delivery with targeted, cost-effective approaches.

Author Contributions: All authors wrote and revised the manuscript.

Conflicts of Interest: The authors declare no conflict of interest.

References

1. Di Cerbo, A.; Morales-Medina, J.C.; Palmieri, B.; Iannitti, T. Narrative review of telemedicine consultation in medical practice. Patient Prefer.
Adherence 2015, 9, 65–75.
2. Sreelatha, O.K.; Ramesh, S.V. Teleophthalmology: Improving patient outcomes? Clin. Ophthalmol. 2016, 10, 285–295.
3. Boucher, M.C.; Desroches, G.; Garcia-Salinas, R.; Kherani, A.; Maberley, D.; Olivier, S.; Oh, M.; Stockl, F. Teleophthalmology screening for diabetic retinopathy through mobile imaging units within Canada. Can. J. Ophthalmol. 2008, 43, 658–668.
4. Cavallerano, A.A.; Cavallerano, J.D.; Katalinic, P.; Blake, B.; Rynne, M.; Conlin, P.R.; Hock, K.; Tolson, A.M.; Aiello, L.P.; Aiello, L.M. A telemedicine program for diabetic retinopathy in a Veterans Affairs Medical Center: The Joslin Vision Network Eye Health Care Model. Am. J. Ophthalmol. 2005, 139, 597–604.
5. Liew, G.; Michaelides, M.; Bunce, C. A comparison of the causes of blindness certifications in England and Wales in working age adults (16–64 years), 1999–2000 with 2009–2010. BMJ Open 2014, 4, e004015.
6. Strouthidis, N.G.; Chandrasekharan, G.; Diamond, J.P.; Murdoch, I.E. Teleglaucoma: Ready to go? Br. J. Ophthalmol. 2014, 98, 1605–1611.
7. Kassam, F.; Yogesan, K.; Sogbesan, E.; Pasquale, L.R.; Damji, K.F. Teleglaucoma: Improving access and efficiency for glaucoma care. Middle East Afr. J. Ophthalmol. 2013, 20, 142–149.
8. Andonegui, J.; Aliseda, D.; Serrano, L.; Eguzkiza, A.; Arruti, N.; Arias, L.; Alcaine, A. Evaluation of a telemedicine model to follow up patients with exudative age-related macular degeneration. Retina 2016, 36, 279–284.
9. De Bats, F.; Vannier Nitenberg, C.; Fantino, B.; Denis, P.; Kodjikian, L. Age-related macular degeneration screening using a nonmydriatic digital color fundus camera and telemedicine. Ophthalmologica 2014, 231, 172–176.
10. Vaziri, K.; Moshfeghi, D.M.; Moshfeghi, A.A.
Feasibility of Telemedicine in Detecting Diabetic Retinopathy and Age-Related Macular Degeneration. Semin. Ophthalmol. 2015.
11. Zimmer-Galler, I.E.; Kimura, A.E.; Gupta, S. Diabetic retinopathy screening and the use of telemedicine. Curr. Opin. Ophthalmol. 2015, 26, 167–172.
12. Fierson, W.M.; Capone, A.; American Academy of Pediatrics Section on Ophthalmology; American Academy of Ophthalmology; American Association of Certified Orthoptists. Telemedicine for evaluation of retinopathy of prematurity. Pediatrics 2015, 135, e238–e254.
13. Schwartz, S.D.; Harrison, S.A.; Ferrone, P.J.; Trese, M.T. Telemedical evaluation and management of retinopathy of prematurity using a fiberoptic digital fundus camera. Ophthalmology 2000, 107, 25–28.
14. Li, L.-H.; Li, N.; Zhao, J.-Y.; Fei, P.; Zhang, G.; Mao, J.; Rychwalski, P.J. Findings of perinatal ocular examination performed on 3573 healthy full-term newborns. Br. J. Ophthalmol. 2013, 97, 588–591.
15. Callaway, N.F.; Ludwig, C.A.; Blumenkranz, M.S.; Jones, J.M.; Fredrick, D.R.; Moshfeghi, D.M.
Retinal and Optic Nerve Hemorrhages in the Newborn Infant: One-Year Results of the Newborn Eye Screen Test Study. Ophthalmology 2016, 123, 1043–1052.
16. Chiang, M.F.; Melia, M.; Buffenn, A.N.; Lambert, S.R.; Recchia, F.M.; Simpson, J.L.; Yang, M.B. Detection of clinically significant retinopathy of prematurity using wide-angle digital retinal photography: A report by the American Academy of Ophthalmology. Ophthalmology 2012, 119, 1272–1280.
17. Chiang, M.F.; Wang, L.; Busuioc, M.; Du, Y.E.; Chan, P.; Kane, S.A.; Lee, T.C.; Weissgold, D.J.; Berrocal, A.M.; Coki, O.; et al. Telemedical Retinopathy of Prematurity Diagnosis. Arch. Ophthalmol. 2007, 125, 1531–1538.
18. Wang, S.K.; Callaway, N.F.; Wallenstein, M.B.; Henderson, M.T.; Leng, T.; Moshfeghi, D.M. SUNDROP: Six years of screening for retinopathy of prematurity with telemedicine. Can. J. Ophthalmol. 2015, 50, 101–106.
19. Vinekar, A.; Jayadev, C.; Mangalesh, S.; Shetty, B. Role of tele-medicine in retinopathy of prematurity screening in rural outreach centers in India: A report of 20,214 imaging sessions in the KIDROP program. Semin. Fetal Neonatal Med. 2015, 20, 335–345.
20. Good, W.V.; Hardy, R.J.; Dobson, V.; Palmer, E.A.; Phelps, D.L.; Quintos, M.; Tung, B.; Early Treatment for Retinopathy of Prematurity Cooperative Group. The incidence and course of retinopathy of prematurity: Findings from the early treatment for retinopathy of prematurity study. Pediatrics 2005, 116, 15–23.
21. Mintz-Hittner, H.A.; Kennedy, K.A.; Chuang, A.Z. Efficacy of Intravitreal Bevacizumab for Stage 3+ Retinopathy of Prematurity. N. Engl. J. Med. 2011, 364, 603–615.
22. International Committee for the Classification of Retinopathy of Prematurity. An International Classification of Retinopathy of Prematurity. Arch. Ophthalmol. 1984, 102, 1130–1134.
23.
International Committee for the Classification of Retinopathy of Prematurity. The International Classification of Retinopathy of Prematurity Revisited. Arch. Ophthalmol. 2005, 123, 991–999.
24. Fierson, W.M.; American Academy of Pediatrics Section on Ophthalmology; American Academy of Ophthalmology; American Association for Pediatric Ophthalmology and Strabismus; American Association of Certified Orthoptists. Screening examination of premature infants for retinopathy of prematurity. Pediatrics 2013, 131, 189–195.
25. Quinn, G.E.; Ying, G.; Repka, M.X.; Siatkowski, R.M.; Hoffman, R.; Mills, M.D.; Morrison, D.; Daniel, E.; Baumritter, A.; Hildebrand, P.L.; et al. Timely implementation of a retinopathy of prematurity telemedicine system. J. Am. Assoc. Pediatr. Ophthalmol. Strabismus 2016, 20, 425–430.e1.
26. Photographic Screening for Retinopathy of Prematurity (Photo-ROP) Cooperative Group. The photographic screening for retinopathy of prematurity study (Photo-ROP): Primary outcomes. Retina 2008, 28, S47–S54.
27. Quinn, G.E.; Ying, G.; Daniel, E.; Hildebrand, P.L.; Ells, A.; Baumritter, A.; Kemper, A.R.; Schron, E.B.; Wade, K.; e-ROP Cooperative Group. Validity of a telemedicine system for the evaluation of acute-phase retinopathy of prematurity. JAMA Ophthalmol. 2014, 132, 1178–1184.
28. Kandasamy, Y.; Smith, R.; Wright, I.; Hartley, L. Use of digital retinal imaging in screening for retinopathy of prematurity. J. Paediatr. Child Health 2013, 49, E1–E5.
29. Richter, G.M.; Williams, S.L.; Starren, J.; Flynn, J.T.; Chiang, M.F. Telemedicine for retinopathy of prematurity diagnosis: Evaluation and challenges. Surv. Ophthalmol. 2009, 54, 671–685.
30. Myung, J.S.; Gelman, R.; Aaker, G.D.; Radcliffe, N.M.; Chan, R.V.P.; Chiang, M.F. Evaluation of vascular disease progression in retinopathy of prematurity using static and dynamic retinal images. Am. J. Ophthalmol. 2012, 153, 544–551.e2.
31. Lorenz, B.; Spasovska, K.; Elflein, H.; Schneider, N. Wide-field digital imaging based telemedicine for screening for acute retinopathy of prematurity (ROP): Six-year results of a multicentre field study. Graefes Arch. Clin. Exp. Ophthalmol. 2009, 247, 1251–1262.
32.
Kovács, G.; Somogyvári, Z.; Maka, E.; Nagyjánosi, L. Bedside ROP screening and telemedicine interpretation integrated to a neonatal transport system: Economic aspects and return on investment analysis. Early Hum. Dev. 2017, 106–107, 1–5.
33. Quinn, G.E.; Ells, A.; Capone, A.; Hubbard, G.B.; Daniel, E.; Hildebrand, P.L.; Ying, G.; e-ROP (Telemedicine Approaches to Evaluating Acute-Phase Retinopathy of Prematurity) Cooperative Group. Analysis of Discrepancy between Diagnostic Clinical Examination Findings and Corresponding Evaluation of Digital Images in the Telemedicine Approaches to Evaluating Acute-Phase Retinopathy of Prematurity Study. JAMA Ophthalmol. 2016, 134, 1263.
34. Cabrera, M.T.; Freedman, S.F.; Kiely, A.E.; Chiang, M.F.; Wallace, D.K. Combining ROPtool measurements of vascular tortuosity and width to quantify plus disease in retinopathy of prematurity. J. Am. Assoc. Pediatr. Ophthalmol. Strabismus 2011, 15, 40–44.
35. Wallace, D.K.; Freedman, S.F.; Zhao, Z.; Jung, S.-H. Accuracy of ROPtool vs individual examiners in assessing retinal vascular tortuosity. Arch. Ophthalmol. 2007, 125, 1523–1530.
36. Ataer-Cansizoglu, E.; Bolon-Canedo, V.; Campbell, J.P.; Bozkurt, A.; Erdogmus, D.; Kalpathy-Cramer, J.; Patel, S.; Jonas, K.; Chan, R.V.P.; Ostmo, S.; et al. Computer-Based Image Analysis for Plus Disease Diagnosis in Retinopathy of Prematurity: Performance of the "i-ROP" System and Image Features Associated With Expert Diagnosis. Transl. Vis. Sci. Technol. 2015, 4, 5.
37. Chiang, M.F.; Jiang, L.; Gelman, R.; Du, Y.E.; Flynn, J.T. Interexpert Agreement of Plus Disease Diagnosis in Retinopathy of Prematurity. Arch. Ophthalmol. 2007, 125, 875–880.
38. Powell, C.; Hatt, S.R. Vision screening for amblyopia in childhood. In Cochrane Database of Systematic Reviews; Powell, C., Ed.; John Wiley & Sons, Ltd.: Chichester, UK, 2009.
39.
Choi, Y.J.; Jung, M.S.; Kim, S.Y. Retinal hemorrhage associated with perinatal distress in newborns. Korean J. Ophthalmol. 2011, 25, 311–316.
40. Li, L.-H. Universal Eye Screening in Healthy Neonates. Available online: http://retinatoday.com/2013/03/universal-eye-screening-in-healthy-neonates/ (accessed on 23 March 2017).
41. Callaway, N.F.; Ludwig, C.; Moshfeghi, D.M. Newborn Retinal Hemorrhages: One-year Results of the Newborn Eye Screening Test (NEST) Study. Investig. Ophthalmol. Vis. Sci. 2015, 56, 2031.

© 2017 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
INTRODUCTION

Before and after photodocumentation is used in a variety of contexts. First and foremost, it serves as a means of medicolegal protection. Second, it documents the efficacy of a treatment. Photodocumentation is indispensable for studies, lectures, or publications. It is also an effective marketing tool documenting the physician's qualifications and expertise.
Last but not least, it offers the patient insight into realized or expected treatment results.1

A Proposal for Updated Standards of Photographic Documentation in Aesthetic Medicine
Lukas Prantl, MD*; Dirk Brandl, Dipl.-Ing.†; Patricia Ceballos, MD‡

From the *Plastic and Reconstructive Surgery, Regensburg, Germany; †Globalhealth Academy for Aesthetic Medicine, Drensteinfurt, Germany; and ‡Dermatology and Dermatopathology, Miami and Palm Beach Gardens, Fla. Received for publication February 9, 2017; accepted May 4, 2017.

Summary: In 1998, DiBernardo et al. published a very helpful standardization of comparative (before and after) photographic documentation. These standards prevail to this day. Although most of them are useful for objective documentation of aesthetic results, there are at least 3 reasons why an update is necessary at this time: First, DiBernardo et al. focused on the prevalent standards of medical photography at that time. From a modern perspective, these standards are antiquated and not always correct. Second, silver-based analog photography has mutated into digital photography. Digitalization offers virtually unlimited potential for image manipulation using a vast array of digital apps and tools including, but not limited to, image editing software like Photoshop. Digitalization has given rise to new questions, particularly regarding the appropriate use of editing techniques to maximize or increase objectivity. Third, we suggest changes to a very small number of their medical standards in the interest of obtaining a better or more objective documentation of aesthetic results. This article is structured into 3 sections and is intended as a new proposal for photographic and medical standards for the documentation of aesthetic interventions: 1. The photographic standards. 2. The medical standards. 3. Description of editing tools which should be used to increase objectivity. (Plast Reconstr Surg Glob Open 2017;5:e1389; doi: 10.1097/GOX.0000000000001389; Published online 17 August 2017.)

Copyright © 2017 The Authors. Published by Wolters Kluwer Health, Inc. on behalf of The American Society of Plastic Surgeons. This is an open-access article distributed under the terms of the Creative Commons Attribution-Non Commercial-No Derivatives License 4.0 (CC BY-NC-ND), where it is permissible to download and share the work provided it is properly cited. The work cannot be changed in any way or used commercially without permission from the journal. DOI: 10.1097/GOX.0000000000001389

Disclosure: Supported by the German Research Foundation (DFG) within the funding program Open Access Publishing. Mr. Brandl is a photography engineer and speaker of the NETWORK-Globalhealth Academy for Aesthetic Medicine, Germany. Dr. Prantl is head of the Plastic Surgery Department of Regensburg University, Germany. Dr. Ceballos is a dermatologist and dermatopathologist from Miami and Palm Beach Gardens. The Article Processing Charge was paid for by the authors.

THE PHOTOGRAPHIC STANDARDS

Definition
In photography, there is a difference between understanding the syntactic and semantic structure of a single picture and understanding the "language" of a combination of more than 1 image.2 We differentiate such combinations into the following classification:
I. A series of photographs deals with a theme, for example, a series about Yosemite National Park. The order of the photos is not significant and can be created by the photographer.
II. A sequence of photographs tells us a visual story.
III. A photographic timeline shows a change over time of the same subject within a fixed interval. Medical and aesthetic photodocumentation falls into this category by demonstrating changes in body surface and/or contour in a given time frame after a medical/aesthetic treatment or procedure.

Standards for Photographic Timelines
To create a photographic timeline, all photographic conditions should preferably remain constant to recognize changes of the subject which has been documented. These conditions are specified as follows: camera, distance between photographer and patient, camera perspective/angle, background brightness and color, picture size, and lighting. The aim of this article was not only to describe current photographic standards but also to propose optimal parameters for aesthetic photodocumentation. Background color and lighting are 2 such parameters.

Camera
Let us first consider the camera. Modern digital cameras include a lot of software that generates an image that has already been changed by the software. Using the same camera and settings in a photographic timeline does not pose a problem. On the other hand, 2 different cameras will result in 2 different representations of the same subject/reality, thereby reducing the accuracy of photodocumentation.

Distance between Photographer and Patient
The most manipulated parameter is the distance between photographer and patient. Figure 1 shows the changes in the 2-dimensional reconstruction of a 3-dimensional reality. Observing the 2 faces (B, C) and also the background structures, one can easily recognize the distortion potential of changing distance on the final image. Therefore, the distance between photographer and subject should be marked and remain fixed, using the zoom feature as needed.

Camera Perspective/Angle
Camera perspective/angle is subject to manipulation.
A subject or patient may be photographed directly or from above or below. Changing the perspective interferes with photographic documentation. Maintaining a fixed perspective or angle is paramount to reproducible aesthetic photodocumentation. The camera position must be at the same level as the subject or anatomic region. This means that the photographer has to move the camera in a vertical direction. Photographing the knees means the camera must be moved down, whereas photographing the face requires an upward camera movement (Fig. 2).

Background Brightness and Color
Background brightness and color should remain constant in a photographic timeline. The distortion potential of brightness and colors3 was already demonstrated by cognitive psychology in the 1940s (Fig. 3). For medical and aesthetic photodocumentation, we recommend grey or dark blue backgrounds, which elicit natural skin tones best.

Picture Size
In PowerPoint presentations, you sometimes see before and after pictures in different sizes. This kind of presentation technique is inconsistent and should be avoided. The size of the 2 photographs and the size of the body area that is reproduced should be equal in both images. The only legitimate exception is a smaller size of photographs that show different steps until the final result. The same image section need not be created during the process of making the photograph. It can be adjusted afterward with Photoshop or other software (Fig. 4). The format of the photographs (square, portrait, or landscape) should always be the same.

Lighting
Let us discuss a rather difficult subject, namely the lighting conditions. Maintaining the exact same lighting conditions is only possible when outside daylight is excluded. Because outside lighting is continuously changing, reproducing it is nearly impossible. This means the room has to be darkened and artificial light used. Most modern cameras have an integrated flash.
Unfortunately, the quality of illumi- nation and the light direction are suboptimal using a flash. This leads us to the best lighting conditions, which are valid for 90–95% of all motifs. Figure 5 shows our recommendation for the creation of good lighting con- ditions. Instead of 1 light source, we recommend 2, 1 to the left and 1 to the right of the camera position, arranged at a 45-degree angle and slightly above the position of the patient. Not only is the direction of the light important to avoid shadows but also diffusion of the light source causing “light softness” improves the quality of photodocumentation. A cloud diffuses the sunlight. In a room, you can use translucent and opal glasses, which are able to diffuse direct light. Instead of Fig. 1. a, Same distance as in Fig 1c, photographed with a wide angle of the zoom as reference for the comparison. B, Demonstrates distortion of the 2D image when the distance is changed. c, Demon- strates consistent representation of the 2D image when the distance is not changed. Prantl et al. • Photographic Documentation in Aesthetic Medicine 3 glass, an opal film can also be used. So called daylight light showers, used in many offices, may be purchased. The color temperature of the artificial light sources should be daylight (5,500 Kelvin). We discourage the use of 2 flashes. Modern cameras do not need much luminous intensity, 10,000 lumen of each source being sufficient, and the camera speed should be set at 200 International Standards Organization (ISO)/American Standards Association (ASA) (ASA and ISO are origi- nally a scale for film speed, now used to describe the speed of the electronic sensor of the camera; both ISO and ASA have the same scaling). Very rare indications require photography under dif- ferent lighting conditions. In cases where you want to show a change in the skin surface, for example, in cellulite or facial pore size, you need backlight conditions. 
Backlight means that your light source is in front of the camera. A frontal 180-degree angle is not useful; a 135-degree angle is recommended instead.

THE MEDICAL STANDARDS

Besides the standards belonging to the photographic part, we also have to consider and standardize the medical background.

Fig. 2. The perspective can be reproduced by holding the camera in a horizontal position.

Fig. 3. The yellow color of both squares is identical, although they look different because of the different background colors.

PRS Global Open • 2017

Patient Preparation

Face

All makeup and jewelry must be removed for all photographic documentation, including the posttreatment pictures. Otherwise, a before and after photographic session cannot be used for a scientific presentation. In other words, pretreatment photographs without makeup combined with posttreatment photographs with makeup should be avoided. This does not mean that you cannot shoot a third, supplemental photograph of the patient wearing makeup.

Fig. 4. Adjustment of 2 different picture sections with the Photoshop tool.

Fig. 5. Optimal lighting arrangement.

The hair should be fixed with a clasp at the back so that the whole face is visible. If the patient wears glasses, you should always photograph with and without glasses. Any clothing concealing portions of the head and neck, such as hats or scarves, should be removed.

Body

Photographing the body mandates a decision regarding the inclusion of undergarments in the pictures. Under optimal circumstances, the patient would agree to being photographed without undergarments. However, this is not always possible and, in certain cultures, strictly forbidden. If undergarments are included, they should be black and as inconspicuous as possible. It is recommended that the doctor's office stock disposable undergarments for this purpose.
The minimum requirement to ensure consistent photodocumentation is that the patient wears the same undergarments of his or her choosing in all photographs.

Anatomic Landmarks

The 1998 standards of DiBernardo et al.1 are very clear and still correct. Whatever you try to treat has to be documented, and a viewer of the pictures must be able to recognize the anatomic part. If the photographed section is too small, one cannot recognize the anatomic part; conversely, if the photographed area is too broad, one cannot appreciate any changes. The adjacent anatomic regions have to be included in the photograph, not as a whole but partly. They serve as anatomic landmarks or reference points. Treating the orbicular part of the face means that the nose and forehead have to be present in the picture. If a breast augmentation is planned, the breasts are your motif and you need to include the clavicles, shoulders, arms, and upper abdomen. Photographs of the upper legs should include the knees and the lower abdomen.

Different Views of the Patient

Depending on the anatomic region, you may have to shoot more than 1 picture. For facial documentation, you need 3–5 different views, as shown in Figure 6. The chest area also requires between 3 and 5 pictures. Even more views are needed when documenting treatment of the hip area. The greatest recommended number of images, that is, 8, can be seen in Figure 7. In general, during the session the patient has to rotate face and body together in a circular movement.

Head Tilt

Without a stand to fix the position of the face, it is not possible to reproduce the same position without using a reference line. The Frankfurt line (the horizontal line between the corner of the mouth and the earlobe) has been used as a reference line for decades. In our opinion, this line moves the face too far upward. A realistic documentation of submental fat is not possible using the Frankfurt line.
Therefore, we propose the use of another reference line, which we have called the Network line,4 as it is recommended by the members of the NETWORK-Lipolysis organization (Fig. 8), who have documented many patients with submental fat problems. A stand to fix the position is not the best choice, as this aid can be seen in the picture and a double chin cannot be documented. A virtual reference line cannot be seen, yet it helps to fix the position in the same manner.

Posture

Even small changes in posture may alter the documentation of treatment results. The photodocumentation of a fat reduction of the abdomen (e.g., by liposuction, cryolipolysis, or injection lipolysis) can be affected by the patient's breathing alone! The body position should always be upright, and the patient should exhale before the photograph is taken.

Facial Expression

Facial expression should always be neutral. A neutral expression before treatment followed by a smiling expression after treatment, or vice versa, is not recommended. Before treatment with botulinum toxin A, a number of photographs with different expressions, including smiling and frowning, can be helpful in identifying the correct injection points.

EDITING TOOLS

Editing software, which developed along with digitalized photography, can greatly enhance the quality of photographs. The faces we see on the front covers of fashion magazines show only virtual faces, modified by filters, retouching, and morphing tools. Nevertheless, software such as Photoshop or Affinity can, on the contrary, help us to increase objectivity. In 1998, when DiBernardo et al.1 published their article, the normal kind of photodocumentation was black and white (B/W) film. B/W pictures are sometimes still appropriate, particularly when documenting changes in contour. Any photographic software can easily perform a transformation from color to B/W.
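At bottom, the color-to-B/W transformation mentioned above is a weighted sum of the three color channels. The following is a minimal, hypothetical sketch (not taken from the article), assuming the ITU-R BT.601 luma weights that much photo software uses; the function names are our own.

```python
def luma(r: int, g: int, b: int) -> int:
    """Weighted sum of the RGB channels using the ITU-R BT.601 luma weights.

    The weights reflect the eye's differing sensitivity to red, green,
    and blue; treat the exact convention as an assumption, since photo
    software may use slightly different coefficients.
    """
    return round(0.299 * r + 0.587 * g + 0.114 * b)


def to_grayscale(image):
    """Convert an image stored as rows of (r, g, b) tuples to rows of B/W values."""
    return [[luma(r, g, b) for (r, g, b) in row] for row in image]
```

Because only tonal contrast survives such a conversion, the resulting B/W picture can make changes in contour easier to judge, which is the point made above.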
As previously mentioned in the photographic standards, in the section "Picture Size," the editing of the exact section of 2 different photographs made at the same distance from the patient can be performed on a computer (Fig. 4). The conditions of a photographic shoot are never identical (light, speed, automatic settings, and so on), and sometimes even the complexion of a patient will have changed after a holiday at the beach. Therefore, it is advisable to bring the 2 sets of pictures into "alignment" in terms of brightness, color, and contrast. These are editing tools that can and should be used because they increase objectivity (Table 1). If a change in complexion, for example, through mesotherapy treatment or peeling, is the purpose of the documentation, adjustment of the skin color should not take place. If the pictures are used for marketing purposes to present treatment results to other patients, it is often necessary to mask, for example, the eye region of the face with a black bar to respect the anonymity of the patient. The use of filters (except sharpening filters in cases of haziness), retouching, and morphing tools is to be strictly avoided.

Fig. 6. Different positions of the patient for facial documentation.

DISCUSSION

Before and after pictures of medical or aesthetic interventions should give the best (most objective and unbiased) impression of the results to an independent observer. To accomplish this goal, the clinic or office team (all members of the staff involved in photodocumentation) has to be trained in the photographic and medical standards for producing comparable photographs, that is, meaningful documentation of treatment results compared with the clinical situation preceding the intervention. "Adjustments" which compensate for variable photographic conditions are permitted and should always be conducted with editing software for section, color, brightness, and contrast.
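The "alignment" of brightness between the 2 sets of pictures and the masking of the eye region described above can be sketched in code. This is a minimal, hypothetical illustration (not part of the article), assuming grayscale images stored as rows of 0–255 integer values; only a single global brightness factor and a black bar are applied, in keeping with the usable tools listed in Table 1, and all function names are our own.

```python
def mean_brightness(image):
    """Average pixel value of a grayscale image given as rows of 0-255 ints."""
    pixels = [p for row in image for p in row]
    return sum(pixels) / len(pixels)


def align_brightness(reference, target):
    """Scale 'target' so that its mean brightness matches 'reference'.

    One global factor is applied to every pixel - no local retouching
    or morphing, which Table 1 rules out.
    """
    scale = mean_brightness(reference) / mean_brightness(target)
    return [[min(255, round(p * scale)) for p in row] for row in target]


def mask_rows(image, top, bottom):
    """Anonymize a picture by blacking out rows top..bottom-1 (e.g., the eyes)."""
    return [[0] * len(row) if top <= i < bottom else list(row)
            for i, row in enumerate(image)]
```

Dividing by the target's mean assumes the target image is not completely black; a production tool would align color and contrast channel by channel in the same spirit.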
Although today some companies (Canfield; Quantificare, Canada; Fotofinder, Germany) have developed new automatic units for 3D and 2D photodocumentation, thorough knowledge of the basic photographic standards is essential. The above-referenced units require an investment of 10,000–50,000 U.S. dollars. Such an investment might be acceptable for bigger clinics but would be onerous for an office of average size. At the moment, those types of units are perfect for producing objective measurements of fine changes in volume, elasticity, or pore size. These kinds of measurements are fantastic for evidence-based studies, but do we need them for the presentation of treatment results? Another problem of those software and hardware combination units is the aesthetic presentation/layout. The picture of the patient has to be extracted from the background, and then a black 2D background is added. This kind of montage technique always produces pictures that look surreal. If the photographs are intended for marketing and advertising purposes, the artistic and aesthetic aspects of the images should also be considered.

Table 1. Tools
Tool         Usable    Not Usable
Section      X
Sharpness    (X)
Intensity    X
Contrast     X
Brightness   X
Morphing               X
Color fog              X
Retouching             X
Filter                 X
Masking      X

Fig. 7. Maximum number of patient positions.

Fig. 8. NETWORK line.

Lukas Prantl, MD, PhD
Center of Plastic-, Hand- and Reconstructive Surgery
University of Regensburg
Franz-Josef-Strauß-Allee 11
Regensburg, Germany - 93042
E-mail: lukas.prantl@klinik.uni-regensburg.de

REFERENCES
1. DiBernardo BE, Adams RL, Krause J, et al. Photographic standards in plastic surgery. Plast Reconstr Surg. 1998;102:559–568.
2. Schrader V. Die Grenzen des Mediums. Unpublished manuscript; 1983.
3. Arnheim R. Art and Visual Perception: A Psychology of the Creative Eye. Los Angeles: University Press Group Ltd; 2004.
4.
Brandl D. Fotodokumentation in der ästhetischen Medizin und Dermatologie. Drensteinfurt: Lichtblick GmbH; 2015. Apple iBook Store.

Uluslararası Sosyal Araştırmalar Dergisi / The Journal of International Social Research
Cilt: 11 Sayı: 55 Şubat 2018 / Volume: 11 Issue: 55 February 2018
www.sosyalarastirmalar.com Issn: 1307-9581 http://dx.doi.org/10.17719/jisr.20185537279

DIGITAL PHOTOGRAPH AS A METHOD OF EXPERIENCING, IDENTITY CONSTRUCTION AND PIECES OF NETWORKED MEMORY

İsmail Erim GÜLAÇTI*

Abstract

Photographs, intertwined with Modernism, rapidly turned into a groundbreaking phenomenon with multi-dimensional sociological and cultural repercussions shortly after being invented in the 19th century, when they were perceived as yet another scientific tool. The 19th-century roles of photographs in documenting and keeping memories were transformed, and the areas of use for photographs rapidly diversified and became more complex as a result of the dramatic changes brought about in the 20th century. Coupled with the digital infrastructure realized in the 20th century, photographs revolutionized the nature of communication, individuals' perception of identity, and the method of keeping and remembering memories. Molded into a widespread tool for communicating experiences, expressing oneself, and creating a visual memory by this digital shift, photographs also became a popular means of forging a digital identity via the Internet and social media. Although this circumstance matches individuals' desire for instant communication and staying up-to-date, the ease of manipulation that the same digital shift brought about also radically loosened individuals' control over their digital photographs.
Even though this situation seems to have eliminated the role of photographs as memory, it has only transformed this role, owing to manipulation and the potential to construct a digital identity offered by the Internet and social media. Consequently, individuals' photographs of themselves or of the moments they lived have started to turn up in unexpected circumstances at equally unexpected times. As such, the role of photography as memory has gone well beyond being individualistic and turned into a collective one thanks to the digital potential made possible by the Internet and social media. This study, which employs a descriptive and exploratory qualitative method, investigates how a transformation of not only technological but also socio-cultural dimensions has come to reshape photography in general and digital photography in particular.

Keywords: Digital Photography, Identity, Social Media, Memory, Communication.

1. Introduction

In conjunction with developments in technology, photography has undergone remarkable changes since 1839, when its invention was announced at the French Academy of Sciences. However, it is in the last thirty years that photography has changed most significantly, thanks to the radical digital capabilities of new technological advancements. Despite the fact that the consequences of this dramatic change are both concrete and abstract and that its complex uses have many dimensions, the depth to which digital photography has affected the nature of communication, the construction of identity, and the creation of memory has been its most notable implication. This profound change has resulted in continuous interaction among, and the transformation of, all these concepts as well as how one sees, understands, and reacts to the world. Thus, digital photography has emerged as the most effective method of communication, self-expression, and memory.
To be more specific, due to the role of technological tools and developments like smartphones, the Internet, and the spread of social media in our lives, photography has turned into a way of creating one's identity and a method of communication rather than a tool for keeping memories and recording one's photographic past, as it once was. It is now the language of a new generation of people who use it to form a 'digital' identity and to communicate instantly. This new set of users of digital photography 'live' in semi-virtual reality, taking countless photographs for which they pose fashionably and calculate the effects of various angles and facial expressions. Such a circumstance could well be characterized as a transformation on both sociological and cultural fronts. This incessant use of photography for building and modelling one's identity perfectly matches individuals' need to project a trendy image in instant communication. Yet, it is this ease of manipulation that poses a serious problem for people who use digital photography in such a manner, by undoing their control over the probable uses of their photographs in unpredictable contexts in the future. Contrary to widespread belief, digital photography has not eliminated the memory role of a photograph. It has only transformed it, in such a way that the memory a photograph creates and contains is scattered, or networked, over social media and the ever-connected computers of the Internet. This article aims to investigate how all these changes have influenced and reshaped digital photography in the context of creating identities, communicating self-image, and keeping memories.

* Assistant Professor, Yıldız Teknik Üniversitesi, Sanat ve Tasarım Fakültesi.

2.
Digital Photography in Context

Thanks to the developments in technology that have been going on for more than a decade, digital photography seems to have entered a new and historic phase, in which the mind-boggling speed and inclusion of state-of-the-art audio-visual technologies in our lives have made it ever easier to manipulate and disseminate images. Naturally, this poses a risk both for the people in the images and for those who commit this act of manipulation and circulation, since it is not easy to tell the difference between a doctored and a non-doctored image. What is worse, the consequences of all this image manipulation and dispersion could well become more serious in the future. As Williams (1974, 28) rightly argues, technology itself does not contain anything that dictates where it originates from or how it should be used, which means a new technological development can easily be adopted and used in several potential ways. People adapt or devise novel manners of usage for a new technology, which are rather difficult to predict, and these new uses also become rooted in the production and consumption of images. It is noteworthy that how digital photography functions and what role it plays, when compared to analogue photography, have changed drastically. Analogue photography was the main mode of keeping and recalling one's significant moments in life twenty or twenty-five years ago. All those photographs sat in thick and dusty albums and were regarded as the most dependable and useful way of remembering how life once was, despite the fact that picturing and imagining have always been an inherent part of remembering while telling stories (Stuhlmiller, 1996, 183). How photography affected the creation of identity and what role it played in communication were not so emphasized, although these notions were always understood to be latent in the nature of photography, as Barthes (1981, 106) and Sontag (1973, 76) highlight.
However, with the ever-increasing progress and usage of digital photography on equipment like smartphones, photography has begun to be used as a means of instant communication and of creating as well as 'curating' one's identity through images, which has led its role as a method of collecting and recalling memories to be regarded as secondary, as pointed out by several researchers, such as Schiano et al. (2017, 2), Harrison (2002, 100), and Garry and Gerrie (2005, 323). Yet, it should be noted that photography, in its digital form as well, is still an effective form of memory, which constitutes the main argument of this article. To put things in a broader context, it should be borne in mind that using photographs for the purpose of communication and as an expression of one's identity has been an inherent component of photography. As Schiano et al. (2017, 3) indicate, today's young people tend to employ their smartphone cameras or digital cameras more for instant communication and easy circulation over their social media accounts and less for saving photographs to look at later in their lives, which is a stark contrast to their parents' preferred way of handling their own photos. Moreover, it seems that this changing use of photography has not occurred solely due to recent developments in technology. There seems to be a greater and more intricate web of transformation that has sociological and cultural dimensions in addition to the obvious technological one. Often pointed to as the main culprit behind the widespread belief that photography is no longer as dependable as it was during the analogue era, digitalization is also essential for keeping countless photographs, which were quite often modified even in their analogue form, as Öztuncay (2003, 33) puts forward. It is only that the new digital capabilities of cameras and smartphones allow for more flexibility and ease in 'editing' photographs before sharing them.
This is how identity gets formed through digital photography. Yet, with this identity communicated over the Internet via various means comes the question of what happens to the memory role of photography. It can be argued that memory is not erased at all from digital photographs of the sort mentioned above. Memory gets transformed and scattered over the worldwide web, only to be saved for an indefinite period in virtual reality. Taking all this into account, it can be argued that developments in technology, considered together with the changes in the sociological and cultural atmosphere of our age, have deeply influenced the function that photography plays in the formation of one's identity and autobiographical memory. What lies beneath this idea is that digital photos can now be effortlessly tweaked thanks to technological tools, while they may also be doctored by people who have sufficient knowledge and equipment. This dilemma regarding the extent to which you can really control your digital photographs is also reflected in the way those photographs are shared over the Internet. Although instantly taken digital photographs are easily shared over social media, this also means that they are quite open to illegal or unlicensed use; the pictures 'live' almost forever on the Internet and pop up in unpredictable instances. In summary, even though digital photographs can easily be modified, and this manipulability fits the endless desire to sculpt one's identity, it is the same ease that creates the absence of control which decreases how much those photographs can actually be controlled in the distributed virtual memory of the Internet.

3.
Digital Photography as A Means of Communicating and Experiencing Memories

Having originally emerged in the first half of the 19th century as a tool of recordkeeping to aid scientists and travelers in their efforts to understand and categorize the world, photography steadily acquired a more personal form in the second half of the same century, with people recording their experiences on a material like silver plates, glass, or paper to refer to or remember with their friends in the future. Nonetheless, even in its first years, photography arguably had these interpersonal uses as well, which, as early as the 19th and 20th centuries, indicated that photography was also a phenomenon with communicative and social aspects. As anyone who has been to a place as a tourist for the first time would acknowledge, there is a constant urge to take photographs to share with friends later, which shows that photography is an essential part of creating a memory and an experience. However, this social function of photography as a means of recording and remembering experiences has lost ground to more individual usage, communicative functions, and sharing rather than saving experiences. Sontag (1973, 23) argues that photography, as an indispensable part of one's social life, has always been a tool of belonging to a certain group, and since the beginning of the 21st century, the individual has slowly moved into the center of this social life. As Harrison (2002, 90) argues in her study, the presentation of oneself has become the more important function of digital photography, rather than reminiscing about past experiences with family. This profound change signals that digital photography has been moving towards a process of creating and molding identities, in which photographs are employed for the assertion of one's personality, experiences, and interpersonal relations.
In a shift that has been felt more forcefully since the start of the 21st century, digital photographs have been serving as a means of communicating daily experiences more than as a way of commemorating memories. Although this is partly due to developments in technology that have brought considerable comfort to our everyday life, the sociological and cultural aspects of this transformation cannot be ignored. Today's users of digital photographs differ greatly in their preferences for using these photographs. Having experienced the analogue era of photography, adult users mainly seem to stick to the recording and remembering function of photography for their family experiences, even in its digital form, while young users, who are already quite familiar with electronic and digital imaging tools, prefer using digital photographs to share and communicate their daily experiences as well as to join social groups (Liechti and Ichikawa, 2000, 233; Schiano et al. 2017, 3). Such a difference also presents itself in how young people manage their digital photographs. As indicated in Schiano et al.'s study (2017, 3), today's youth use digital photography as a means of social interaction and of communicating experiences by sharing their photos through their social media accounts or looking at such photos of others, rather than going over the numerous photos collected on their smartphones or cameras by themselves, as their mothers or fathers did. This clearly shows that photography has begun to play a more active role in social circles like schools, friend groups, and clubs, which means that digital photography has turned into a new kind of visual language for expressing one's identity. This transformation partly results from the widespread availability and ever-increasing popularity of digital tools like smartphones and tablets; social media applications like Facebook, Instagram, and Snapchat; and instant messaging programs like WhatsApp.
All these technologies and applications have a dramatic effect on how people, especially young people, socialize and spend time together. With recently added features like various kinds of vignettes and filters, and images or 'stories' that disappear after an amount of time determined by the user, completely original uses and rules have also appeared. People perform these 'rituals' and abide by the rules without even noticing when they casually take a selfie and post it on Instagram, or share it as a story on Snapchat after carefully editing it with the filters built into these applications. Young users of digital photography are more active in this respect, unlike their parents, who took and stored their photos in albums to look at some day in the future. It seems that they are more enthusiastic about sharing their experiences through digital photos than about keeping them as objects on their phones. The digital technologies mentioned above apparently reinforce this trend in digital photography of experiencing and communicating memories or messages. This situation causes digital photography to acquire a social function, which serves to communicate or 'connect' with others rather than to 'save' memories. Therefore, it could be argued that digital photographs used in this manner are now momentary notes rather than long-lived memories. As noted in Lehtonen et al. (2002, 71), digital photographs that are spiced with captions on smartphones and posted over social media or instant messaging tools have turned into 'postcards', which serve the function of social connection described above.
In this new mode of social interaction mediated via digital photography, the words added to the photographs shared over social media clearly represent our instinct to communicate and the function of photography in this new mode of saying 'right here, right now'. Therefore, it seems that digital photographs have turned into visual means of experiencing and of communicating those experiences instantly, as if they were words uttered from our mouths, which are not for saving but for consuming. It is certain that a profound change has been going on in digital photography for the last decade, and this change is even more visible among young users of digital photographs, who prefer to use their digital photographs as a tool to connect socially within their friend circles. However, this phenomenon does not stem solely from today's improving digital technology. It is also the consequence of a wider shift that includes cultural aspects as well. In this transformation, individuals and their experiences are at the core, often to the point of excluding the family, and digital photography is merely a component of this widespread change of perspective, in which not saving photographs for the future but posting and/or exchanging them via digital technology has become the mainstream method of creating identity.

4. Digital Photography as A Means of Constructing and Shaping Identity

Employed as a method of experiencing and communicating memories, digital photography has also been an influential tool in creating and shaping identity, as it enables people to 'tweak' their images. This is not to say that manipulating or tweaking photos has only become possible with digital photography. As Öztuncay (2003, 39) and Terpak and Bonfitto (2015, 23) point out, photographs have always been 'edited' somehow, even in analog photography, for various reasons.
Unnecessary or undesired parts were removed or retouched on glass plates to make the image more desirable in the eyes of the photographed person or the patron of the photographer. Therefore, the camera has long been a tool for constructing identity through editing images. Barthes (1981, 80) argues that there is a strong relationship between identity construction and photography. According to this connection, photographs are instruments that call us to contemplate our own past, current, and future selves, continuously evaluating our images. Therefore, whenever we have our photograph taken, we tend to model our image to match the ideal collection of those past, current, and future selves. Taking this connection into consideration, photography has an active and complicated impact on the process of identity construction, since this process includes not only visual but also cultural and semiotic aspects. Barthes (1981, 81) also distinguishes four dimensions of having a photograph taken, which could be summarized as "the one that I think I am", "the one I want others to think I am", "the one the photographer thinks I am", and "the one the photographer makes use of when exhibiting his art" (Barthes, 1981, 13). These aspects are often at play with each other whenever we pose for a photograph so that it fits the ideal self that we have in our minds, only to realize that they almost never fit together, as Barthes (1981, 81) argues. The fact that we pose conscientiously, putting on our best smile and trying to look pleased during the shooting, and then choose the best photographs for editing on the phone or the computer and delete the undesirable ones shows all this interplay of the aspects that constitute the construction of identity through photography. In short, what was once the domain of expert visual artists is now within the ability of any person who has a smartphone or digital camera and can operate it with relative ease.
Nevertheless, it should clearly be pointed out that this capability of editing or manipulation is not exclusive to digital photography. Despite being singled out as the main difference between digital photography and its analog counterpart, tweaking and manipulation were already present in analog photography as well, as Özendes (2013, 19), Öztuncay (2003, 33), and Terpak and Bonfitto (2015, 23) explain. However, digital photography has more latent flexibility for going through and editing photographs whose subject is oneself, and thus for controlling one's public image, which is more difficult, though not impossible, when those photographs are on a film roll. Therefore, it has become more appealing for individuals to 'improve' how they look in digital photographs via a variety of digital software. To put it differently, tweaking one's digital photographs is now an indispensable and common component of one's photographic life experiences. This brings another idea to mind, which is the fact that this tweaking of individuals' photographs has become quite the norm. In fact, it is such a widespread practice that people tend not to question whether the photograph they are looking at has any visual integrity when compared to the original person. This common situation is also reflected in our accepting attitudes towards the photographs in magazines and advertisements on the Internet and TV, which are almost always 'improved' via various editing tools to polish the image. In short, digital photography allows people to play with almost anything in their photographs, and there is a widespread acceptance of this manipulability, which has become well integrated into one's digital autobiography.
Such a powerful combination, unlike ‘stationary’ analog photography, makes digital photography the perfect method for the construction and re-construction of one’s digital identity and memory. Therefore, it can be argued that a new and different kind of socio-cultural environment is coming into being, one in which both the manipulability of digital photography and individuals’ control over it keep increasing.

5. Digital Photography as Networked Memory

Taking the reflections above into account, it seems reasonable to conclude that digital photography has now become a method of experiencing, communicating and constructing identity, as opposed to the original role of photography as remembering. Nonetheless, this new situation has not undone the primary role of photographs as a medium of recollection, even though digital cameras are used more and more for experiencing and identity formation through novel social uses. In fact, the function of digital photographs as memory lives on in a different social shape in the distributed yet connected reality of our age. What causes this new mode of memory is sharing: digital photographs shared over social media or the Internet to communicate experience or to construct identity are stored forever in the distributed yet connected memory, popping up at unpredictable instances and times. The notorious photographs of Abu Ghraib prisoners are a famous and crystal-clear example of the networked memory in which digital photographs are stored. After their appearance in the press in 2004 and their spread via the Internet, one year after the start of the US war in Iraq, these repellent and shocking photographs of the torture and abuse of Iraqi detainees by personnel of the US armed forces remain an undeniable indication of the power of photography over prose.
Taken informally as casual signs of military bonding, of conformity within the ‘mission’, and as souvenirs of victory, these photographs, as Sontag (2017, 2) rightly points out, reflect “a recent shift in the use made of pictures – less objects to be saved than messages to be disseminated, circulated.” Besides the fact that digital cameras have become an everyday accessory even for soldiers, these photographs also clearly demonstrate that war photography has changed dramatically, with soldiers being both the photographers and the users of photographs, in a casual and ‘fun’ way of sharing experience and constructing identity. Intended informally as ‘postcards’ sent to relatives back home only to be forgotten, these photographs have remained in the eternal networked memory that is the Internet. This shows that digital photographs are no longer solely one’s own to keep on cameras or smartphones. They have become a hidden burden in both people’s personal and professional lives. In short, it is clear that digital photographs taken with smartphones or cameras may not be confined to one’s exclusive and personal space as easily as is taken for granted. Instead, they get integrated into the networked memory and remain eternally distributed on the Internet.

Conclusion

With the dramatic move of photography from the chemical processes of the analog darkroom to the electronic circuits of the digital camera in the second half of the 20th century came the remarkably easy yet extremely important manipulability of photographs. Thus, digital images could rapidly spread into the global network of electronic communication, which meant that digital photographs could no longer be kept in drawers or albums, unlike what our grandparents did before the advent of digital photography.
This paradigm shift has transformed the roles of digital photography in a wider cultural and sociological context, which could be defined via concepts like experiencing, communication, manipulation and the easy circulation of digital images. In this novel circumstance, digital photographs have turned into tools of identity construction because of the reconfigured equilibrium between the roles of recording memories and communicating experiences. With the extensive availability of digital tools like cameras and smartphones, and of enhanced software on them, tweaking and manipulating photographs seem to have become the standard procedure, not an alternative, for ‘presenting’ oneself. Although it is hard to claim that photography is a true reflection of reality and memory, digital photography has pushed this reflection even further to its limits by providing people with the ability to ‘enhance’ and ‘reconstruct’ their appearances via the technological tools mentioned above, and to share the results instantly over social media as expressions of their identities. This new and widespread ‘ritual’ naturally leads to the question of how much control people really have over their photographs. In its new form as a tool for the formation of identity, it would not be wrong to claim that digital photography, coupled with the power of the Internet, carries the potential danger of images being manipulated and popping up unexpectedly in completely different domains. Moreover, the bitter fact is that it is very difficult to tell the true image in most cases.
While images as keepsakes of memory are giving way to images as means of communication, photographs captured by a digital camera or a smartphone and intended merely as ‘jokes’ or ‘postcards’ to be forgotten are now becoming engraved in the distributed memory of the Internet, which further complicates the issue of control referred to above. Even if a digital photograph is taken only as a memory and shared solely for the purpose of communication or expressing one’s identity, it easily moves back and forth between individual, confidential use and general, popular use. This is not exactly the direct result of the digitalization of photography but, as stated earlier, of a combination of digitalization and a sociocultural transformation of which digital photography is only a part. However, digital photography and the Internet have made the distribution of images as a form of communication easier, which, in turn, renders private images communal property and considerably undermines individuals’ control over their photographs. Overall, the growing ease of tweaking and manipulating digital photographs has led individuals to feel powerful in constructing and presenting their digital identities yet powerless in controlling them in the networked memory of the Internet. Because of this circumstance, the meaning of memory and the role of photography as a way of remembering have changed dramatically to include the eternal photographs in the digital corridors of the Internet as well as the instant and private ones on cameras and smartphones.

REFERENCES
Barthes, Roland (1981). Camera Lucida: Reflections on Photography. Trans. Richard Howard. New York: Hill and Wang.
Garry, Maryanne and Gerrie, Matthew (2005). When Photographs Create False Memories. Current Directions in Psychological Science, Vol. 14, No. 6, pp. 321–325.
Harrison, Barbara (2002). Photographic Visions and Narrative Inquiry. Narrative Inquiry, Vol. 12, No. 1, pp. 87–111.
Lehtonen, Turo-Kimmo, Koskinen, Ilpo and Kurvinen, Esko (2002). Mobile Digital Pictures – The Future of the Postcard? Findings from an Experimental Field Study. In Laakso Vertken and John Ostman (eds.), Postcards and Cultural Rituals, pp. 69–96. Haemeenlinna: Korttien Talo.
Liechti, Olivier and Ichikawa, Tadao (2000). A Digital Photography Framework Enabling Affective Awareness in Home Communication. Personal and Ubiquitous Computing, Vol. 4, No. 1, pp. 232–239.
Özendes, Engin (2013). Osmanlı İmparatorluğu’nda Fotoğrafçılık 1839-1923. İstanbul: YEM Yayın.
Öztuncay, Bahattin (2003). Dersaadet’in Fotoğrafçıları 1-2. 1st ed. İstanbul: Koç Kültür Sanat Tasarım Hizmetleri.
Schiano, Dianne J., Chen, Coreena P. and Isaacs, Ellen (retrieved 8 August 2017). How Teens Take, View, Share, and Store Photos. https://www.researchgate.net/publication/265533076_How_Teens_Take_View_Share_and_Store_Photos
Sontag, Susan (1973). On Photography. New York: Delta.
Sontag, Susan (retrieved 9 September 2017). Regarding the Torture of Others. http://www.nytimes.com/2004/05/23/magazine/regarding-the-torture-of-others.html?mcubz=3
Stuhlmiller, Cynthia (1996). Narrative Picturing: Ushering Experiential Recall. Nursing Inquiry, Vol. 3, pp. 183–184.
Terpak, Frances and Bonfitto, Peter (2015). Antikiteyi Mürekkebe Taşımak – Amerika Kıtasından Anadolu’ya Kalıntılar ve Fotolitografinin Gelişimi. In Zeynep Çelik and Edhem Eldem (eds.), Camera Ottomana: Osmanlı İmparatorluğu’nda Fotoğraf ve Modernite 1840-1914. İstanbul: Koç Üniversitesi Yayınları, pp. 20–65.
Williams, Raymond (1974). Television, Technology and Cultural Form. London: Fontana.

work_3ul4uszz5jdw7k6cdg2iyojini ----

Additive Manufacturing of Resected Oral and Oropharyngeal Tissue: A Pilot Study

International Journal of Environmental Research and Public Health, Article

Alexandria L. Irace 1,2, Anne Koivuholma 1,†, Eero Huotilainen 3,†, Jaana Hagström 4, Katri Aro 1, Mika Salmi 3, Antti Markkola 5, Heli Sistonen 5, Timo Atula 1 and Antti A. Mäkitie 1,3,6,7,*

Academic Editor: Paul B. Tchounwou
Received: 30 December 2020; Accepted: 18 January 2021; Published: 21 January 2021
Int. J. Environ. Res. Public Health 2021, 18, 911. https://doi.org/10.3390/ijerph18030911

Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Copyright: © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

1 Department of Otorhinolaryngology–Head and Neck Surgery, HUS Helsinki University Hospital, University of Helsinki, P.O. Box 263, FI-00029 HUS Helsinki, Finland; ali2116@cumc.columbia.edu (A.L.I.); anne.koivuholma@helsinki.fi (A.K.); katri.aro@hus.fi (K.A.); timo.atula@hus.fi (T.A.)
2 Vagelos College of Physicians and Surgeons, Columbia University, New York, NY 10032, USA
3 Department of Mechanical Engineering, Aalto University, P.O. Box 14100, FI-00076 Aalto Espoo, Finland; eero.huotilainen@iki.fi (E.H.); mika.salmi@aalto.fi (M.S.)
4 Department of Pathology, HUS Helsinki University Hospital, University of Helsinki, P.O. Box 21, Haartmaninkatu 3, FI-00029 HUS Helsinki, Finland; jaana.hagstrom@hus.fi
5 Department of Imaging, HUS Helsinki University Hospital, University of Helsinki, FI-00029 HUS Helsinki, Finland; antti.markkola@hus.fi (A.M.); heli.sistonen@hus.fi (H.S.)
6 Division of Ear, Nose and Throat Diseases, Department of Clinical Sciences, Intervention and Technology, Karolinska Institutet, Karolinska University Hospital, SE-17176 Stockholm, Sweden
7 Research Program in Systems Oncology, Faculty of Medicine, University of Helsinki, FI-00014 Helsinki, Finland
* Correspondence: antti.makitie@helsinki.fi; Tel.: +358-50-428-6847
† These authors contributed equally.

Abstract: Better visualization of tumor structure and orientation is needed in the postoperative setting. We aimed to assess the feasibility of a system in which oral and oropharyngeal tumors are resected, photographed, 3D modeled, and printed using additive manufacturing techniques. Three patients diagnosed with oral/oropharyngeal cancer were included. All patients underwent preoperative magnetic resonance imaging followed by resection. In the operating room (OR), the resected tissue block was photographed using a smartphone. Digital photos were imported into Agisoft Photoscan to produce a digital 3D model of the resected tissue. Physical models were then printed using binder jetting techniques. The aforementioned process was applied in pilot cases including carcinomas of the tongue and larynx. The number of photographs taken for each case ranged from 63 to 195. The printing time for the physical models ranged from 2 to 9 h, with costs ranging from 25 to 141 EUR (28 to 161 USD). Digital photography may be used to additively manufacture models of resected oral/oropharyngeal tumors in an easy, accessible and efficient fashion. The model may be used in interdisciplinary discussions regarding postoperative care to improve understanding and collaboration, but further investigation in prospective studies is required.

Keywords: additive manufacturing; rapid manufacturing; stereolithography; 3D imaging; head and neck surgery; surgical oncology

1. Introduction

Additive manufacturing (AM), otherwise known as three-dimensional (3D) printing, is a growing technology with an emerging role in several surgical specialties. For example, it has been used for preoperative planning of hemihepatectomy and prostatectomy procedures [1–3]. Within otolaryngology, AM has improved surgical education and simulation, facilitated the production of auricular prostheses and hearing aids, and enabled more accurate mandibular reconstruction following major segmental resection [4–6]. These are just a few examples of how AM may enhance patient-specific care, as there are several potential applications of this technology that have yet to be established. Postoperative management of head and neck cancer is one possible application that has not been well studied in the literature to date. Clear visualization of head and neck tumors is essential not only for surgical planning but also for postoperative management.
Detailed preoperative imaging of the mass helps the surgeon achieve negative margins while mitigating surgical risks, including cosmetic deformity and neuromuscular injury. In the postoperative setting, tumor visualization is equally important. Head and neck surgeons must collaborate with pathologists, radiation and medical oncologists, and imaging specialists to evaluate the surgical margins, plan for possible revision procedures or adjuvant therapy, and monitor for recurrence. Knowledge of the surface anatomy, tumor structure, orientation, and location of surgical margins is essential for these multidisciplinary discussions to be effective. Although magnetic resonance imaging (MRI) is the gold standard imaging modality for head and neck cancer [7], 2D MRIs do not necessarily show oral and oropharyngeal tumors from the clinician’s perspective. For one, they may limit appreciation of complex borders and anatomy. In addition, MRI interpretation can be challenging for the untrained eye, particularly when there are artifacts due to inflammation and/or swallowing [8]. Software and tools to combine preoperative MRIs and postoperative information about biopsies, tumor orientation, and surgical margins are not yet considered part of standard clinical practice. 3D-printed models of resected tumor tissue may improve visualization and understanding of tumor structure, location, and orientation in the postoperative setting. Previous studies have constructed 3D tumor models based on preoperative radiologic images, most commonly computed tomography (CT) and MRI scans [9–11], or on histological reconstruction [12,13]. 3D reconstructions of bone can also be produced using cone beam computed tomography [14,15]. However, it has previously been shown that standard digital photography may also be used to construct 3D models of human tissue [16].
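At its core, photogrammetric reconstruction from ordinary photographs rests on triangulation: the same surface point is matched across overlapping images, and its 3D position is recovered from the intersecting viewing rays. The following is a minimal two-view sketch in Python using idealized pinhole cameras with known centers and ray directions; this is an illustrative toy, not the solver actually used by photogrammetry software.

```python
import math

def unit(v):
    """Normalize a 3-vector."""
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def triangulate_midpoint(c1, d1, c2, d2):
    """Midpoint of the shortest segment between two viewing rays
    (camera center c, unit direction d) -- the classic way to recover
    a 3D point from one matched feature seen in two photographs."""
    dot = lambda u, v: sum(p * q for p, q in zip(u, v))
    w0 = [p - q for p, q in zip(c1, c2)]
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w0), dot(d2, w0)
    denom = a * c - b * b              # ~0 only for near-parallel rays
    t1 = (b * e - c * d) / denom       # distance along ray 1
    t2 = (a * e - b * d) / denom       # distance along ray 2
    p1 = [ci + t1 * di for ci, di in zip(c1, d1)]
    p2 = [ci + t2 * di for ci, di in zip(c2, d2)]
    return [(u + v) / 2 for u, v in zip(p1, p2)]

# Synthetic check: two cameras at different positions observe (1, 2, 5).
target = (1.0, 2.0, 5.0)
cam1, cam2 = (0.0, 0.0, 0.0), (3.0, 0.0, 0.0)
ray1 = unit([t - c for t, c in zip(target, cam1)])
ray2 = unit([t - c for t, c in zip(target, cam2)])
point = triangulate_midpoint(cam1, ray1, cam2, ray2)
```

Real pipelines estimate the camera poses themselves from the photographs (structure from motion) and triangulate thousands of matched points, which is why each surface must appear in several overlapping views.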
This technique obviates the need for 3D CT/MRI or a 3D scanner, which may make digital photography the preferred modality for AM in certain settings. In addition, digital photography is very cost effective, accessible, and popular throughout much of the world. To our knowledge, no studies have assessed the feasibility of creating 3D tumor models using standard digital photographs of resected tumor tissue. The objective of this pilot study is to determine whether digital photography can be used to construct 3D images and physical models of resected oral and oropharyngeal tumor tissue. This study will therefore develop a system in which the tumors are resected, photographed in the operating room, and modeled/printed using AM techniques. Specific outcomes to be measured include the number of photographs required for modeling, production times, and costs. The 3D model of the resected tissue may serve as a communication tool at multidisciplinary tumor board meetings to facilitate discussion of critical surgical margins, tumor location, and orientation, which affects decision-making regarding postoperative care.

2. Materials and Methods

2.1. Participants

The study was approved by the institutional review board at HUS Helsinki University Hospital (Helsinki, Finland). Patients were eligible for inclusion in the study if they were diagnosed with an oral or oropharyngeal tumor and were scheduled for surgical resection. Three patients were identified through review of the operative schedule. Eligible patients underwent informed consent procedures and signed written consent forms to participate in the study. Hospital records of all patients included in the study were reviewed for clinicopathological data.

2.2. Clinical Care

All patients underwent a preoperative MRI of the head and neck. MRIs were performed at 1.5 T, with a slice thickness of 3–4 mm. The imaging protocol was a standard
protocol used for head and neck tumors. The mass was visualized and resected via partial resection of the tongue base (Case 1), hemiglossectomy (Case 2), or total laryngectomy with extension to the base of tongue (Case 3). Surgical resection was planned and performed according to standard protocol. Immediately after resection, the resection block was placed on a white surgical towel in the operating room. The tissue was placed such that any visible area believed to be tumor was exposed to the viewer (i.e., not resting on the towel). The tissue was photographed by a study assistant using a smartphone (iPhone 6s; Apple; Cupertino, CA, USA) in natural light without flash. Photographs were taken at a distance of approximately 15–30 cm from the specimen at numerous angles. Every surface of the specimen was photographed from several different angles to aid in image alignment for 3D reconstruction. Attempts were made to image every surface of the tissue except for the inferior surface resting on the towel, because movement of the specimen to photograph this region would have disrupted the tissue and resulted in inaccurate modeling. Approximately sixty to two hundred photographs were taken from several angles and planes with respect to the specimen. Variation in the number of photographs taken served not only to ensure that complex surface topography was sufficiently captured (with more complex topography requiring more photographs), but also to get a sense of how the number of photographs affected 3D image construction. Photographs did not contain any patient information. After all photographs were taken, the resection block was transferred to the Department of Pathology for histological analysis on the day of surgery.

2.3. Additive Manufacturing

Digital photos of the specimen were then imported into Agisoft Photoscan (version 1.4.2.6205; Agisoft; St. Petersburg, Russia), a 3D imaging software package that performs photogrammetric reconstruction using a series of digital images. The software automatically aligned images using the “Highest” alignment accuracy settings and adaptive camera model fitting; non-aligned photographs were discarded from the project. After alignment, a dense point cloud was constructed with “Ultra high” quality and “Moderate” depth filtering (boundary smoothing). Processing took place on a laptop workstation. Lowering the quality to the lowest available settings reduced the computation to a few minutes, but also reduced both the surface topology and texture (color) accuracy. The dense point cloud was triangulated into a mesh using interpolation to fill any remaining holes, and the mesh was exported in a color-preserving PLY file format into Meshlab [17]. The table surface surrounding the resection block was removed, and the hole below the model was stitched (no model surface was constructed on the undersurface because photographs were not taken from underneath the resection block). Finally, the model was converted into STL file format, which is the input supported by most 3D printers. Because the STL standard lacks support for local color information, the model face colors were encoded into triangle normal values within the file. In all cases, colorized 3D models were printed using a binder jetting 3D printer (ProJet CJP 660, 3D Systems Inc., Rock Hill, SC, USA) from composite powder hardened in post-processing with cyanoacrylate. In Case 1, a 3D model was also printed using material extrusion (uPrint SE Plus, Stratasys Ltd., Rehovot, Israel) from ABS Plus material and SR-30 Soluble Support Material. The different steps of the project and their relationships to one another are presented in Figure 1.

Figure 1.
A diagram showing the various steps of the project and their interactions (arrows); MDTB: multidisciplinary tumor board.

2.4. Multidisciplinary Tumor Board Meeting

Surgical outcomes were reviewed at a meeting of the multidisciplinary tumor board (MDTB) two weeks after surgery. MDTB meetings consist of case review and discussion among pathologists, head and neck surgeons, oncologists, radiologists, and other specialists. The need for revision surgery or adjuvant therapy was assessed for each case.

3. Results

3.1. Case 1

A 62-year-old female was diagnosed with an exophytic tumor on the left side of the tongue base. Preoperative clinical classification was T1N0M0. Resection of the tumor was performed with selective neck dissection of levels IIa, IIb, and III. Approximately two weeks later, the case was discussed at an MDTB meeting. Histological analysis confirmed a p16-negative pT2pN0, Stage II squamous cell carcinoma (SCC) with a diameter of 30 mm. Maximum invasion was 4.5 mm, and the closest surgical margin was 4 mm at the posterior border. Due to the extent of invasion, postoperative radiation was recommended as the next step in the patient's care.

3.2. Case 2

A 53-year-old female was diagnosed with a p16-negative T3N2bM0, Stage IVa SCC of the mobile tongue at an outside institution (Figures 2 and 3). Hemiglossectomy was performed in conjunction with neck dissection of levels I, IIa, III, and IV on the right side. An anterolateral thigh microvascular flap was utilized in reconstruction. Histopathology revealed an SCC with a diameter of 24 mm and a 10 mm depth of invasion. The minimum resection margin was 7 mm on the medial edge of the tumor. She had two metastatic lymph nodes in level IIa with no extranodal extension. The pathological staging was pT2pN2bM0 (Stage IVa). The MDTB recommended postoperative chemoradiotherapy.
Figure 2. Preoperative image of Case 2 showing squamous cell carcinoma of the right mobile tongue.

Figure 3. T2-weighted, fat-suppressed, contrast-enhanced MRI of Case 2 in the axial plane. Squamous cell carcinoma indicated by arrow.

3.3. Case 3

A 52-year-old male, an ex-smoker, was diagnosed with SCC of the vallecula. Seven years prior, the patient had been diagnosed with SCC (pT2pN2bM0) of the floor of the mouth and SCC of the esophagus. Both the oral and esophageal carcinomas had been treated with surgery and chemoradiotherapy. Due to residual tumor in the esophagus, the patient had undergone surgical resection with gastric pull-up. In 2018, the patient presented with swallowing difficulty. A third primary tumor in the vallecula was diagnosed clinically as a p16-negative T2N2bM0 (Stage IVa) SCC with dimensions of 15 × 5 × 35 mm on MRI scans. There were metastatic lymph nodes on the left side of the neck. The patient underwent total laryngectomy with extension to the base of tongue and neck dissection on the left side including levels II to V. Frozen sections revealed that the initial resection was inadequate. Further resection was performed during the same operation, and repeated frozen sections as well as final pathology showed that the margins were free of tumor. Final histopathology also revealed nerve and blood vessel invasion. The tumor was classified pT2pN2bM0.

3.4. Digital Reconstruction and Additive Manufacturing Process

Table 1 summarizes the main parameters from the reconstruction. “Photographs taken” refers to the number of smartphone photographs, taken from various orientations, introduced into the photogrammetry software. Aligned photographs were able to be utilized during the computation and construction of the digital 3D model (Figures 4–6). Processing time gives the approximate total time span required by the alignment, dense cloud creation, and meshing steps. Digital and physical models of the surgical resection block were produced within one week of the operation.
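The final conversion step described in Section 2.3, from a triangulated mesh to STL, is complicated by the fact that the STL standard carries no color channel; the authors report folding face colors into the triangle normal fields. The sketch below writes a minimal binary STL in plain Python, but uses the widely seen 16-bit attribute-field color variant rather than the authors' normal-encoding scheme; the facet and color data are invented for illustration.

```python
import io
import struct

def write_binary_stl(triangles, colors, buf):
    """Write a minimal binary STL stream: an 80-byte header, a uint32
    facet count, then per facet 12 float32 values (normal + 3 vertices)
    and a uint16 attribute field, used here to carry a 5-bit-per-channel
    RGB color because the STL standard itself has no color channel."""
    buf.write(b"colored mesh sketch".ljust(80, b"\x00"))
    buf.write(struct.pack("<I", len(triangles)))
    for (v0, v1, v2), (r, g, b) in zip(triangles, colors):
        # Flat facet normal from the cross product of two edge vectors.
        e1 = [p - q for p, q in zip(v1, v0)]
        e2 = [p - q for p, q in zip(v2, v0)]
        normal = (e1[1] * e2[2] - e1[2] * e2[1],
                  e1[2] * e2[0] - e1[0] * e2[2],
                  e1[0] * e2[1] - e1[1] * e2[0])
        buf.write(struct.pack("<12f", *normal, *v0, *v1, *v2))
        # Pack RGB into the attribute word; the top bit marks "color valid".
        attr = 0x8000 | ((r >> 3) << 10) | ((g >> 3) << 5) | (b >> 3)
        buf.write(struct.pack("<H", attr))

# One illustrative facet with an invented flesh-tone color.
triangle = [((0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0))]
stream = io.BytesIO()
write_binary_stl(triangle, [(224, 160, 128)], stream)
data = stream.getvalue()
facet_count = struct.unpack_from("<I", data, 80)[0]
```

Each facet occupies a fixed 50 bytes (48 bytes of float32 data plus the 2-byte attribute), so file size is 84 + 50n bytes for n facets; whichever color convention is chosen, the printer's software must be configured to interpret it.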
The 3D model dimensions were scaled up from the true dimensions of the resection block due to the small size of each tumor. Enlarging the dimensions of the model allowed for improved visualization of surface topography and anatomical details. For Case 1, the model produced by binder jetting conveyed more information than the material extrusion model because of the added color texture made possible by binder jetting (Figures 7 and 8). Using material extrusion, the model was made of a simple ABS-like white polymer with less detail. Production time for printing of the 3D models ranged from two to nine hours, and printing costs ranged from 25 to 141 EUR: 24.90 EUR/28.41 USD (Case 1), 38.20 EUR/43.59 USD (Case 2) and 141.00 EUR/160.89 USD (Case 3).

Table 1. Stereophotogrammetry reconstruction details using the binder jetting 3D printer.

Case | Photographs (aligned/total) | Processing time (h) | Scaling factor | Print time (h:min) | Print volume/material consumption (cm³)
1    | 46/63                       | 7                   | 2.5            | 2:01               | 139.92
2    | 111/117                     | 16                  | 2.0            | 3:39               | 227.63
3    | 194/195                     | 24                  | 1.5            | 9:09               | 976.82

Figure 4. Digital 3D model of Case 1 using Meshlab.

Figure 5. Digital 3D model of Case 2 using Meshlab.

Figure 6.
Figure 6. Digital 3D model of Case 3 using Meshlab.

Figure 7. Lateral and superior views of physical 3D model of Case 1 printed via material extrusion.

Figure 8. Physical 3D models of Case 1 (A), Case 2 (B), and Case 3 (C) printed via binder jetting.

4. Discussion

In this pilot study, three surgical resection blocks of oral and oropharyngeal cancer were photographed, modeled, and printed using AM technology. Each tumor originated in a different anatomical location: the mobile tongue, base of tongue, and epiglottic vallecula. This study employed digital photography to construct a 3D model of each resection block. Processing time for 3D image construction ranged from seven to 24 h, and importing a larger number of photographs into Agisoft required longer processing times to construct the 3D image. Production time for printing of the 3D model ranged from two to nine hours and printing costs ranged from 25 to 141 EUR (28 to 161 USD), with larger models requiring more material consumption, longer printing times, and higher costs. Overall, the process was relatively efficient and could feasibly be completed within one day depending on the number of photographs and size/complexity of the tissue specimen. There was also variability in the number of photographs processed for each specimen. For Case 3, 195 photographs of the resection block were taken, partly because of the sample's convoluted shape with its cavity structure. This high number of photos increased processing time but did not add to the quality of the digital or printed model.
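The cost scaling noted above can be made concrete with a quick back-of-the-envelope calculation. A minimal sketch, using the print volumes from Table 1 and the binder-jetting costs quoted in the text; the per-cm3 rates are derived here for illustration and are not reported by the authors:

```python
# Print volume (cm^3, Table 1) and binder-jetting cost (EUR, quoted in the text)
cases = {
    1: {"volume_cm3": 139.92, "cost_eur": 24.90},
    2: {"volume_cm3": 227.63, "cost_eur": 38.20},
    3: {"volume_cm3": 976.82, "cost_eur": 141.00},
}

for case, d in cases.items():
    rate = d["cost_eur"] / d["volume_cm3"]  # EUR per cm^3 of printed material
    print(f"Case {case}: {rate:.3f} EUR/cm^3")
```

The derived rates (roughly 0.14–0.18 EUR/cm3, slightly decreasing for the largest model) suggest that total cost scales approximately linearly with material consumption, consistent with the observation that larger models cost more.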
In practice, models of satisfactory quality can easily be reconstructed from an order of magnitude fewer photographs, which also reduces processing time, as long as all orientations of the specimen are captured so that the software can accurately align the photographs. The oral cavity and oropharynx are challenging surgical environments. Many procedures in these regions are restricted to small operative areas, involve complex anatomy, and put the patient at risk for cosmetic and functional defects. There are few landmarks to aid in surgical resection apart from the free mucosal border, as the deep margin is typically surrounded by muscle and connective tissue. Moreover, attempts to achieve adequate surgical margins must be balanced with tissue preservation, not only to spare neighboring structures but also to maintain speech and swallowing functions [18]. At our institution, microscopic margins of at least 5 mm are required for carcinomas of the oral cavity and oropharynx. An inadequate or invaded margin is known to at least double the probability of local recurrence [19–22]. Head and neck surgeons, pathologists, radiologists, oncologists, and other specialists must collaborate to review the adequacy of margins and assess the need for revision surgery or adjuvant therapy. However, information exchange across disciplines can be imprecise or incomplete when referring only to imaging studies and histopathology slides. If the margin shown on a histopathology slide is positive or insufficient, it can be challenging to correlate that area on the slide with the exact location in the resected tissue. Diagrams drawn by the pathologist depicting how the tissue was cut during slide preparation may prove useful when the tumor is small and simple in shape, but may lack specificity and accuracy in more complex cases. Referring to the preoperative MRI is another option.
MRI has superior soft-tissue contrast resolution, and the relationship of the tumor to nearby structures can usually be sufficiently evaluated. However, it is usually difficult to precisely correlate MR findings with histopathological information about the orientation and positive margins of the tumor. The use of 3D modeling to correlate MR data with histopathological findings has previously been studied primarily in cases of prostate cancer [23–26], but these techniques have not entered routine clinical practice. Lastly, from a clinical perspective, postoperative swelling, hematoma, or reconstructive procedures can distort the appearance of the resection defect and make it difficult to identify areas requiring additional treatment. Because of these limitations, more effective tools are needed to facilitate interdisciplinary communication. Image-guided surgical navigation is one example. Using preoperative 3D images (CT or MRI) and a real-time imaging source connected to input devices, the surgeon can better understand the orientation, position, size, and location of a tumor in relation to other structures. Guijarro-Martinez et al. demonstrated how image-guided navigation may be used during head and neck tumor surgery, focusing on the importance of mapping resection margins, the locations of any intraoperative biopsies, and any areas suspicious for residual tumor [18]. During the procedure, they saved digitized coordinates of these areas on an intraoperative CT scan. Pathologists accessed this anatomical map to locate any tumor-positive coordinates in relation to the resection margins. This map also allowed radiotherapists to focus precisely on tumor-positive coordinates identified by the pathologist, thereby avoiding irradiation of adjacent healthy tissue. As image-guided navigation technology continues to emerge in head and neck surgery, we believe that AM can serve a similar purpose.
Our objective was to develop a simple, replicable process for 3D modeling and printing of resected oral tongue and tongue base tumors. These models would then be used at MDTB meetings to facilitate interdisciplinary management (Figure 1). Digital photography enabled us to efficiently construct a 3D image of the resection block, which was then printed to create a tangible model of the tissue. Like the coordinate map used in image-guided surgical navigation, this model optimizes visualization and understanding of tumor topography, location, and orientation. For example, pathologists could use the model as a tool to present exact margin locations. While postoperative 3D tumor modeling has not been well studied in the head and neck literature, we believe it merits further investigation given the efficiency of the process (as short as one or two days from surgery to printing), the growing availability of AM technology and related open-source software, and the growing need for non-invasive procedures that mitigate surgical risks. Future directions in developing our method may include using a 3D scanner instead of photography and comparing the accuracy of the different methods. While photography is inexpensive and widely available, 3D table scanners have become more advanced and accessible and have been used in 3D soft tissue tumor modeling [27]. Using a scanner may affect the speed and accuracy of the 3D modeling process. Apart from optimizing interdisciplinary communication and postoperative care, 3D tumor models have multiple applications that have been demonstrated in several areas of the body. For example, Hovens et al. recently studied postoperative 3D modeling of prostate tumors to understand the relationship between tumor morphometry and prognosis [12]. 3D modeling has also been shown to facilitate preoperative planning for malignant bone tumors in the cervical spine, complex cardiac and thoracic malignancies, and hilar cholangiocarcinoma [28–31].
Moreover, 3D imaging, 3D modeling and AM have also played a role in mathematical optimization of resection planes and allograft fitting, as previously demonstrated in fresh cadaveric osteochondral allograft knee surgery [32]. Similar studies can be conducted using 3D models of head and neck tumors. Currently, AM and 3D technologies are utilized mostly in medical models, guides and implants [33], and less frequently in surgical settings focusing on tissue resection. As 3D printing technology becomes simpler and more widely available to academic medical centers and hospitals on a global scale, we expect the clinical and research applications to expand greatly. Despite the potential of this technology, there are limitations that must be recognized when determining how and when it should be utilized. As with any equipment, mechanical issues are to be expected, and contingency plans should be considered if 3D printing is routinely incorporated into patient care. In addition, using digital photography to construct the 3D image may limit visualization of the aspect of the tissue resting on the underlying surface. In our study, the specimens were not rotated to photograph all aspects because of the risk of disturbing the tissue's shape, which would interfere with image alignment in the 3D modeling process. In the future, the set-up for photographing the resection block could be improved by developing a stand or rack that would allow the entire surface area to be captured. Other potential limitations include printing costs, such as the cost of the printer itself, maintenance of the printer, printing materials, and operating staff.

5. Conclusions

This 3D imaging and printing technology can be utilized in the management of head and neck tumors in a way that is easily implemented, accessible, and efficient.
A physical model of the oral/oropharyngeal cancer resection block can be used to improve understanding of tumor orientation and optimize interdisciplinary collaboration. Certain limitations, including mechanical error, surface visibility, and cost, are areas for future development. Larger prospective studies are required to further investigate the role of additive manufacturing in head and neck surgery and establish recommendations on how to best implement this technology.

Author Contributions: Conceptualization, A.A.M., T.A., K.A. and A.K.; methodology, A.L.I., A.K., E.H., J.H., K.A., M.S., A.M., H.S., T.A. and A.A.M.; software, A.L.I., E.H., M.S., A.M. and H.S.; formal analysis, A.L.I., E.H. and A.K.; investigation, A.L.I., E.H. and A.K.; resources, A.A.M.; data curation, A.L.I., E.H., M.S., A.K. and H.S.; writing—original draft preparation, A.L.I.; writing—review and editing, A.L.I., A.K., E.H., J.H., K.A., M.S., A.M., H.S., T.A. and A.A.M.; supervision, A.A.M., T.A. and K.A.; funding acquisition, A.A.M. All authors have read and agreed to the published version of the manuscript.

Funding: This research received no external funding.

Institutional Review Board Statement: The study was conducted according to the guidelines of the Declaration of Helsinki, and approved by the institutional Ethics Committee, §56/04.04.2018, HUS/1092/2018.

Informed Consent Statement: Informed consent was obtained from all subjects involved in the study.

Data Availability Statement: Data sharing is not applicable to this article.

Acknowledgments: Helsinki University Hospital Research Funding. The authors would like to thank Pekka Paavola for the photographs for Figures 5 and 6.

Conflicts of Interest: The authors declare no conflict of interest.

References

1. Witowski, J.S.; Pędziwiatr, M.; Major, P.; Budzyński, A.
Cost-effective, personalized, 3D-printed liver model for preoperative planning before laparoscopic liver hemihepatectomy for colorectal cancer metastases. Int. J. CARS 2017, 12, 2047–2054. [CrossRef] [PubMed]
2. Chen, M.Y.; Skewes, J.; Woodruff, M.A.; Dasgupta, P.; Rukin, N.J. Multi-colour extrusion fused deposition modelling: A low-cost 3D printing method for anatomical prostate cancer models. Sci. Rep. 2020, 10, 10004. [CrossRef] [PubMed]
3. Wake, N.; Rosenkrantz, A.B.; Huang, R.; Park, K.U.; Wysock, J.S.; Taneja, S.S.; Huang, W.C.; Daniel, L.; Chandarana, H. Patient-specific 3D printed and augmented reality kidney and prostate cancer models: Impact on patient education. 3D Print. Med. 2019, 5, 4. [CrossRef] [PubMed]
4. Chan, H.H.; Siewerdsen, J.H.; Vescan, A.; Daly, M.J.; Prisman, E.; Irish, J.C. 3D rapid prototyping for otolaryngology—Head and neck surgery: Applications in image-guidance, surgical simulation and patient-specific modeling. PLoS ONE 2015, 10, e0136370. [CrossRef]
5. Subburaj, K.; Nair, C.; Rajesh, S.; Meshram, S.; Ravi, B. Rapid development of auricular prosthesis using CAD and rapid prototyping technologies. Int. J. Oral Maxillofac. Surg. 2007, 36, 938–943. [CrossRef]
6. Zhang, T.; Zhang, Y.; Li, Y.S.; Gui, L.; Mao, C.; Chen, Y.-N.; Zhao, J.-Z. Application of CTA and CAD/CAM techniques in mandible reconstruction with free fibula flap. Zhonghua Zheng Xing Wai Ke Za Zhi 2006, 22, 325–327.
7. Abraham, J. Imaging for head and neck cancer. Surg. Oncol. Clin. N. Am. 2015, 24, 455–471. [CrossRef]
8. Alberico, R.A.; Husain, S.H.; Sirotkin, I. Imaging in head and neck oncology. Surg. Oncol. Clin. N. Am. 2004, 13, 13–35. [CrossRef]
9. Huotilainen, E.; Paloheimo, M.; Salmi, M.; Paloheimo, K.-S.; Björkstrand, R.; Tuomi, J.; Markkola, A.; Mäkitie, A.A. Imaging requirements for medical applications of additive manufacturing. Acta Radiol. 2014, 55, 78–85. [CrossRef]
10. Fernandez, R.; Lau, R.; Yu, P.; Siu, I.; Chan, J.; Ng, C.
Use of custom made 3-dimensional printed surgical guide for manubriosternal resection of solitary breast cancer metastasis: Case report. AME Case Rep. 2020, 4, 12. [CrossRef]
11. Wu, Z.; Alzuhair, A.; Kim, H.; Lee, J.W.; Chung, I.Y.; Kim, J.; Lee, S.B.; Son, B.H.; Gong, G.; Kim, H.H.; et al. Magnetic resonance imaging based 3-dimensional printed breast surgical guide for breast-conserving surgery in ductal carcinoma in situ: A clinical trial. Sci. Rep. 2020, 10, 18534. [CrossRef] [PubMed]
12. Hovens, M.C.; Lo, K.; Kerger, M.; Pedersen, J.; Nottle, T.; Kurganovs, N.; Ryan, A.; Peters, J.S.; Moon, D.; Costello, A.J.; et al. 3D modelling of radical prostatectomy specimens: Developing a method to quantify tumor morphometry for prostate cancer risk prediction. Pathol. Res. Pract. 2017, 213, 1523–1529. [CrossRef] [PubMed]
13. Pichat, J.; Iglesias, J.E.; Yousry, T.; Ourselin, S.; Modat, M. A survey of methods for 3D histology reconstruction. Med. Image Anal. 2018, 46, 73–105. [CrossRef]
14. Pawlaczyk-Kamieńska, T.; Winiarska, H.; Kulczyk, T.; Cofta, S. Dental Anomalies in Rare, Genetic Ciliopathic Disorder-A Case Report and Review of Literature. Int. J. Environ. Res. Public Health 2020, 17, 4337.
[CrossRef] [PubMed]
15. Pawlaczyk-Kamieńska, T.; Kulczyk, T.; Pawlaczyk-Wróblewska, E.; Borysewicz-Lewicka, M.; Niedziela, M. Limited Mandibular Movements as a Consequence of Unilateral or Asymmetrical Temporomandibular Joint Involvement in Juvenile Idiopathic Arthritis Patients. J. Clin. Med. 2020, 9, 2576. [CrossRef]
16. Mäkitie, A.A.; Salmi, M.; Lindford, A.; Tuomi, J.; Lassus, P. Three-dimensional printing for restoration of the donor face: A new digital technique tested and used in the first facial allotransplantation patient in Finland. J. Plast. Reconstr. Aesthet. Surg. 2016, 69, 1648–1652. [CrossRef]
17. Ranzuglia, G.; Callieri, M.; Dellepiane, M.; Cignoni, P.; Scopigno, R. MeshLab as a Complete Tool for the Integration of Photos and Color with High Resolution 3D Geometry Data. Paper presented at the 40th Conference in Computer Applications and Quantitative Methods in Archaeology, Southampton, UK, March 2012; Available online: http://vcg.isti.cnr.it/Publications/2013/RCDCS13/MeshLab_Color.pdf (accessed on 1 August 2018).
18.
Guijarro-Martinez, R.; Gellrich, N.C.; Witte, J.; Tapioles, D.; Von Briel, C.; Kolotas, C.; Achinger, J.; Hailemariam, S.; Schulte, H.; Rohner, D.; et al. Optimization of the interface between radiology, surgery, radiotherapy, and pathology in head and neck tumor surgery: A navigation-assisted multidisciplinary network. Int. J. Oral Maxillofac. Surg. 2014, 43, 156–162. [CrossRef]
19. Spiro, R.H.; Guillamondegui, O.; Paulino, A.F.; Huvos, A.G. Pattern of invasion and margin assessment in patients with oral tongue cancer. Head Neck 1999, 21, 408–413. [CrossRef]
20. Sutton, D.N.; Brown, J.S.; Rogers, S.N.; Vaughan, E.; Woolgar, J. The prognostic implications of the surgical margin in oral squamous cell carcinoma. Int. J. Oral Maxillofac. Surg. 2003, 32, 30–34. [CrossRef]
21. Binahmed, A.; Nason, R.W.; Abdoh, A.A. The clinical significance of the positive surgical margin in oral cancer. Oral Oncol. 2007, 43, 780–784. [CrossRef]
22. Slootweg, P.J.; Hordijk, G.J.; Schade, Y.; Van Es, R.J.; Koole, R. Treatment failure and margin status in head and neck cancer: A critical view on the potential value of molecular pathology. Oral Oncol. 2002, 38, 500–503. [CrossRef]
23. Trivedi, H.; Turkbey, B.; Rastinehad, A.R.; Benjamin, C.J.; Bernardo, M.; Pohida, T.J.; Shah, V.; Merino, M.J.; Wood, B.J.; Linehan, W.M.; et al. Use of patient-specific MRI-based prostate mold for validation of multiparametric MRI in localization of prostate cancer. Urology 2012, 79, 233–239. [CrossRef] [PubMed]
24. Shah, V.; Pohida, T.; Turkbey, B.; Mani, H.; Merino, M.; Pinto, P.A.; Choyke, P.; Bernardo, M. A method for correlating in vivo prostate magnetic resonance imaging and histopathology using individualized magnetic resonance-based molds. Rev. Sci. Instrum. 2009, 80, 104301. [CrossRef]
25. Kiessling, F.; Le-Huu, M.; Kunert, T.; Thorn, M.; Vosseler, S.; Schmidt, K.; Hoffend, J.; Meinzer, H.-P.; Fusenig, N.E.; Semmler, W.
Improved correlation of histological data with DCE MRI parameter maps by 3D reconstruction, reslicing and parameterization of the histological images. Eur. Radiol. 2005, 15, 1079–1086. [CrossRef] [PubMed]
26. Costa, D.N.; Chatzinoff, Y.; Passoni, N.M.; Kapur, P.; Roehrborn, C.G.; Xi, Y.; Rofsky, N.M.; Torrealba, J.; Francis, F.; Futch, C.; et al. Improved magnetic resonance imaging-pathology correlation with imaging-derived, 3D-printed, patient-specific whole-mount molds of the prostate. Investig. Radiol. 2017, 52, 507–513. [CrossRef]
27. Koivuholma, A.; Aro, K.; Mäkitie, A.; Salmi, M.; Mirtti, T.; Hagström, J.; Atula, T. Three-Dimensional Presentation of Tumor Histopathology: A Model Using Tongue Squamous Cell Carcinoma. Diagnostics 2021, 11, 109. [CrossRef]
28. Xiao, J.; Huang, W.; Yang, X.; Yan, W.-J.; Song, D.-W.; Wei, H.-F.; Liu, T.-L.; Wu, Z.-P.; Yang, C. En Bloc Resection of Primary Malignant Bone Tumor in the Cervical Spine Based on 3-Dimensional Printing Technology. Orthop. Surg. 2016, 8, 171–178. [CrossRef]
29. Al Jabbari, O.; Saleh, A.; Patel, A.P.; Igo, S.R.; Reardon, M.J. Use of three-dimensional models to assist in the resection of malignant cardiac tumors. J. Card. Surg. 2016, 31, 581–583. [CrossRef]
30. Kim, M.P.; Ta, A.H.; Ellsworth, W.A.; Marco, R.A.; Gaur, P.; Miller, J.S. Three-dimensional model for surgical planning in resection of thoracic tumors. Int. J. Surg. Case Rep. 2015, 16, 127–129. [CrossRef]
31. Yang, Y.; Zhou, Z.; Liu, R.; Chen, L.; Xiang, H.; Chen, N. Application of 3D visualization and 3D printing technology on ERCP for patients with hilar cholangiocarcinoma. Exp. Ther. Med. 2018, 15, 3259–3264. [CrossRef]
32. Huotilainen, E.; Salmi, M.; Lindahl, J. Three-dimensional printed surgical templates for fresh cadaveric osteochondral allograft surgery with dimension verification by multivariate computed tomography analysis. Knee 2019, 26, 923–932. [CrossRef] [PubMed]
33.
Pettersson, A.B.V.; Salmi, M.; Vallittu, P.; Serlo, W.; Tuomi, J.; Mäkitie, A.A. Main Clinical Use of Additive Manufacturing (Three-Dimensional Printing) in Finland Restricted to the Head and Neck Area in 2016–2017. Scand. J. Surg. 2019. [CrossRef] [PubMed]

Measuring specific surface area of snow by near-infrared photography

Margret MATZL, Martin SCHNEEBELI
WSL Swiss Federal Institute for Snow and Avalanche Research, Flüelastrasse 11, CH-7260 Davos Dorf, Switzerland
E-mail: schneebeli@slf.ch

ABSTRACT.
The specific surface area (SSA) is considered an essential microstructural parameter for the characterization of snow. Photography in the near-infrared (NIR) spectrum is sensitive to the SSA. We calculated the snow reflectance from calibrated NIR images of snow-pit walls and measured the SSA of samples obtained at the same locations. This new method is used to map the snow stratigraphy. The correlation between reflectance and SSA was found to be 90%. Calibrated NIR photography allows quantitative determination of SSA and its spatial variation in a snow profile in two dimensions within an uncertainty of 15%. In an image covering 0.5–1.0 m2, even layers of 1 mm thickness can be documented and measured. Spatial maps of SSA are an important tool in initializing and validating physical and chemical models of the snowpack.

1. INTRODUCTION

The specific surface area (SSA) is an essential microstructural parameter for the characterization of sintered materials such as snow (German, 1996). The SSA of snow changes during isothermal and temperature-gradient metamorphism (Schneebeli and Sokratov, 2004; Legagneux and Dominé, 2005) and determines the radiative properties of snow (Warren, 1982; Leroux and others, 1998). The SSA is a highly relevant parameter for the catalytic effect caused by photochemical properties (Dominé and Shepson, 2002). It is also likely that the permeability of snow (Albert and Perron, 2000) is determined by SSA, as German (1996) shows is true for other sintered materials. In this paper, SSA (mm–1) is defined as the ratio between surface area and the volume of the ice phase. Natural snowpacks consist of morphologically different layers (Colbeck, 1991). The complexity of the stratigraphy (Pielmeier and Schneebeli, 2003; Sturm and Benson, 2004) usually precludes a detailed and exhaustive sampling of snow to determine SSA.
Current methods of determining SSA, such as adsorption (Legagneux and others, 2002), microtomography (Brzoska and others, 2001; Schneebeli and Sokratov, 2004) and stereology (Matzl and Schneebeli, 2006), are restricted to relatively small snow samples and confined to essentially point measurements. These methods also require a cold laboratory, and are relatively time-consuming. Thus, a method that could deliver spatial information on natural snow profiles while requiring only simple equipment would make the measurement of SSA more accessible. Such a method could also be extremely valuable in initializing and validating numerical simulations of snowpack metamorphism (Brun and others, 1989; Lehning and others, 2002). Warren and Wiscombe (1980) used Mie theory to describe the optical properties of snow in the near-infrared (NIR) spectrum. Based on this theory, the reflectance between wavelengths of 750 and 1400 nm is controlled largely by snow grain size. The influence of snow density can be neglected, as the interparticle distances between the grains are still large compared with the wavelength. According to Dozier (1992), the grain-size effect dominates in the NIR spectrum as long as the snow density is < 650 kg m–3. Impurities influence the reflectance of snow only in the visible spectrum, but not in the NIR spectrum (Warren and Wiscombe, 1980; Leroux and others, 1999). As a consequence, the dependence of snow reflectance on grain size is used in remote sensing to map grain size in the surface snow layer (Dozier and others, 1981; Nolin and Dozier, 2000). Giddings and LaChapelle (1961) speculated that the most appropriate definition of the optically relevant grain size of snow could be derived from the volume : surface ratio, the inverse ratio of the SSA. This speculation has yet to be verified experimentally. Grenfell and Warren (1999) reviewed previous works on the importance of the volume : area ratio for explaining optical properties of snow and clouds.
But Mitchell (2002) showed theoretically that the SSA could be directly converted to an effective optical diameter. This suggests that SSA provides a direct link between optical and structural snow properties. Wiesmann and others (1998) correlated scattering and absorption coefficients at microwave frequencies to the correlation length of snow particles. Mätzler (2002) showed that correlation length is equivalent to SSA per total volume. In a related but different problem, there is no simple way to document the stratigraphy observed in a snow pit. The translucent snow profile (Benson, 1962; Good and Krüsi, 1992) is perhaps the best-known attempt to map layer features more quantitatively. The translucent profile delivers a clear delineation of boundaries between layers, but the transmitted light intensity cannot be interpreted in a unique fashion, because transmissivity depends not only on grain size but also on snow density (Zhou and others, 2003) and section thickness. Experiments comparing NIR photographs on film and translucent profiles showed the feasibility of using an NIR photographic method (Haddon and others, 1997), but analogue photographs were cumbersome to process quantitatively. Here we describe a new method based on NIR digital photography that can be used to determine the SSA and map the pit stratigraphy. The NIR reflectance was compared with measured values of SSA for precisely located snow samples. The correlation between the two measurements was about 90%, with an uncertainty of 3–15%, where the uncertainty increases with increasing SSA. Based on this correlation function we have calculated the SSA for the NIR images.
Journal of Glaciology, Vol. 52, No. 179, 2006, 558
The SSA can be mapped with a
Even thin layers, which can hardly be detected by traditional methods, can thus be detected in the images. 2. METHOD The correlation between SSA and snow reflectance required an experimental design that allowed measurement of both parameters for the same snow samples. The reflectance of a snow profile wall was measured using digital photography in the NIR spectrum (section 2.1). Subsequently, snow samples containing specific layers were taken from the same snow wall. The SSA of the snow samples was determined using stereological methods (section 2.2) described by Matzl (2006). The reflectance of these samples was calculated from the NIR images. Reflectance and SSA were correlated for easily distinguishable and preferably homogeneous layers (section 2.3). Figure 1 illustrates the experimental design. The correlation function was then used to calculate the SSA for the whole NIR image of the snow-pit wall. 2.1. NIR photography The camera used for the digital photographs was a Kodak DCS420ir with a 20 mm Nikkor lens. A gelatine filter (Kodak Wratten 87c) was placed over the charge-coupled device (CCD). The wavelength of the detected light ranged from 840 to 940 nm. The distance between camera and profile wall varied between 1.5 and 2.0 m, resulting in pit wall photos varying from 0.5 to 1.0 m2. Before the photographs were taken, the profile wall was cut carefully with a saw and smoothed with a rounded-edge wooden board about 20 cm in width. To compensate for tilting or turning of the camera with respect to the profile wall, a geometrical correction was performed on the digital image. The correction required at least three targets, inserted in the profile wall with a known distance to each other and a known geometry (Fig. 1a). The nearest-neighbour method was used to transform the target coordinates in the digital image congruently with their original geometry and distance. 
The calibration targets were manufactured of Spectralon greyscale standards with NIR reflectances of 50% and 99%. Alongside the geometrical correction, these reflectance values corrected illumination variations by interpolating between the 50% targets and subtracting the interpolation from the original image. The NIR reflectance, r, of the snow was calibrated with regard to the pixel intensities of the targets:

r = a + bi,    (1)

where i is the intensity of each pixel and a and b are determined by a linear regression on the pixel intensities of the greyscale standards. The coefficient of deviation of the reflectance from the calibrated image, measured at the white targets, is 2.6%. After photographing the smoothed wall, a second image of the profile wall was taken, including the inserted containers of the snow samples (Fig. 1c). The comparison of the two images allowed us to define the exact sample coordinates in the first image.

Fig. 1. (a) NIR image of a snow-pit wall with four calibration targets (i) in the corners. The locations of the sample containers are marked with a black frame. (b) Magnification of the NIR image at one sample location. The vertical distribution of the reflectance signal for this sample is calculated based on the pixel intensity of the targets (i). (c) Image covering the same snow-pit wall with the inserted sample containers (ii). (d) The surface section of one sample and the corresponding measured specific surface area (SSA) values.

The reflectance signal was
Measuring the SSA from surface sections

Samples with a volume of approximately 70 × 70 × 50 mm were cast with dyed diethyl phthalate and frozen for transport to the cold laboratory (www.slf.ch/schnee-lawinen/Schneephysik/Downloads/CastingSnowPhthalate/phthalatesamples-en.html). In the cold laboratory, vertical surface sections were cut using a Leica sliding microtome and photographed. The resulting images had a resolution of 10 μm, a width of 10–25 mm and a height of 30–60 mm (Fig. 1d).

Fig. 2. Surface section and NIR image of two snow samples and the derived density, specific surface area (SSA) and reflectance signals. For sample 2 the absolute heights of the surface section and the NIR image differ. The grey bands mark visually identified layers.

The SSA from the vertical surface sections was measured using model-based stereology (Baddeley and others, 1986; Matzl, 2006). Stereological estimations were also used to determine the volume density of the snow samples, which allowed the further calculation of snow density. The snow type of the surface sections was determined visually according to the International Classification of Snow (Colbeck and others, 1990). For very fine new-snow particles the resolution of the SSA measurements leads to an underestimation of the SSA. We experienced such an underestimation for SSA values higher than approximately 55 mm⁻¹. For the correlation in this study we therefore used only snow types with distinctly smaller SSA values.

2.3. Correlation of reflectance and SSA

For each sample we visually identified discrete layers, defined as zones of vertical homogeneity with lateral extent. The mean SSA and the mean reflectance were calculated for each layer. These values were used for correlating SSA and reflectance.
Samples with strongly inclined layers were excluded.

3. RESULTS

Figure 2 gives an overview of all the measured parameters. Density and SSA are stereological estimates from the segmented surface section, while the reflectance is calculated from the NIR image. Corresponding layers are marked for the SSA and the reflectance signal. In total, 29 samples were obtained from field measurements. For each sample, we identified between one and three layers for which mean reflectance and SSA were measured. Figure 3 shows the scatter plot of reflectance and SSA for all layers. For a comparison with existing data we chose values published by Warren and Wiscombe (1980), Dozier (1992) and Sergent and others (1993) at 890 nm. We transformed the optical sphere diameter, d, used in these publications to SSA using:

SSA_sphere = 6 / d.   (2)

To avoid overweighting the intermediate reflectance class, we grouped our data in reflectance classes of 5%. Six measurements per class were randomly selected, and the regression was calculated on these stratified samples. The scatter plot shows an exponentially growing relationship between SSA and reflectance, r:

SSA = A exp(r / t),   (3)

where A = 0.017 ± 0.009 mm⁻¹ and t = 12.222 ± 0.842. The correlation coefficient, R², is 90.8% at a significance level p < 0.002. The error increases with increasing SSA, from 4% at an SSA of 5 mm⁻¹ to about 15% for SSA values of about 25 mm⁻¹. Alternatively, a linear fit of the reflectance vs the square root of the grain diameter was tried, based on the simplified equation in Wiscombe and Warren (1980). In this case the correlation coefficient, R², is 89.5% at a significance level p < 0.0001. The scatter plot of density and SSA for the same snow layers shows that we included snow types with a wide range of densities and SSA values (Fig. 4). It can also be seen that low-density snow covers the whole range of measured SSA, and samples with a low SSA cover a wide range of densities.
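A regression of this form can be reproduced with a standard nonlinear least-squares fit. The sketch below (not the authors' code) fits synthetic, noise-free data generated from the published parameters, so the fit simply recovers them; the reflectance values are hypothetical.

```python
import numpy as np
from scipy.optimize import curve_fit

def ssa_model(r, A, t):
    """Equation (3): SSA = A * exp(r / t), r in %, SSA in mm^-1."""
    return A * np.exp(r / t)

def sphere_diameter_to_ssa(d):
    """Equation (2): SSA of a sphere of optical diameter d (in mm)."""
    return 6.0 / d

# Synthetic reflectance/SSA pairs generated from the published
# parameters A = 0.017 mm^-1, t = 12.222 (illustration only).
r = np.linspace(30.0, 85.0, 12)
ssa = ssa_model(r, 0.017, 12.222)

# Least-squares fit; p0 is a rough initial guess.
(A_fit, t_fit), _ = curve_fit(ssa_model, r, ssa, p0=(0.01, 10.0))
```

On real, noisy layer means the same call returns the best-fit parameters together with their covariance matrix, from which uncertainties such as the quoted ±0.009 and ±0.842 can be derived.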
Figures 5 and 6 show a quantitative two-dimensional (2-D) mapping based on Equations (1) and (3). Figure 5 also shows a conventional hand profile; the photograph corresponds with the hand profile. Note that the ice crust at a height of 73 cm and the boundary at 36 cm are very distinct. Not only layers are mapped but also gradual transitions (e.g. that between 57 and 60 cm height). The SSA varies between 1 mm⁻¹ for the lower parts of the profile and for the ice crust, and 27 mm⁻¹ for the upper parts of the profile. Figure 6 shows an NIR image with inhomogeneous illumination. The upper-left part of the profile is too bright. However, it is still possible to recognize features such as the melt crust in the upper third of the profile or the infiltration zone below it.

4. DISCUSSION

The SSA and reflectance of a wide range of snow types were measured. Increasing reflectance is correlated with exponentially increasing SSA. We did not observe any systematic deviation of individual snow types from the fitted line. This result supports the assumption of Giddings and LaChapelle (1961) that SSA is the most important parameter determining snow reflectance in the NIR spectrum. The comparison with data published by Wiscombe and Warren (1980), Dozier (1992) and Sergent and others (1993) shows good agreement with our observations for SSA values below approximately 20 mm⁻¹ (Fig. 3).

Fig. 3. Scatter plot of measured specific surface area (SSA) values vs reflectance with a fit (solid line; Equation (3)). The dotted line indicates the relationship between reflectance at 890 nm and SSA based on values published by Wiscombe and Warren (1980), Dozier (1992) and Sergent and others (1993).

Fig. 4. Scatter plot of SSA and density.
There are three possible explanations for the difference at SSA values greater than 30 mm⁻¹. (i) The calibration targets may have had a lower reflectance than specified because of staining caused by field use; however, post-calibration indicates that this effect is not more than 2%. (ii) Our estimate of the average detected wavelength (890 nm) may be too low. Because the NIR sensitivity of the camera CCD is unknown, this is a plausible explanation; comparison with figure 9 in Wiscombe and Warren (1980) shows that our curve is steeper than expected for 890 nm. (iii) The measured light intensity comprised not only reflected light but also light that had passed through the snowpack from the snow surface, as the surface behind the profile was not always completely shaded. Because the snow samples showing large SSA values were mostly obtained close to the surface, such an influence cannot be excluded. The comparison between density and SSA (Fig. 4) shows only a loose correlation between these properties: high SSA values are usually related to low densities and vice versa. For densities from approximately 150 to 300 kg m⁻³ a wide range of SSA values is observed, which makes a direct correlation, as suggested by Narita (1971) and Legagneux and others (2002), very uncertain in this density range. Snow classes 1–3, which include new snow, partly decomposed snow and rounded snow, are well discriminated by a steadily decreasing surface area. Snow classes 4–8, which indicate stronger metamorphism, are not well discriminated and cover a larger range of SSA and reflectivity. The correlation function between reflectance and SSA was used to calculate the SSA for the NIR images. The images of the snow profiles (Figs 5 and 6), especially when combined with a manual profile, show that variations in the SSA reflect the layering of a snow cover in great detail.
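Applying the correlation function pixel-wise turns a calibrated reflectance image into a 2-D SSA map. A minimal sketch, using the fitted parameters of Equation (3) (reflectance in %, SSA in mm⁻¹; the uniform test patch is hypothetical):

```python
import numpy as np

A, T = 0.017, 12.222  # fitted parameters of Equation (3)

def reflectance_to_ssa(reflectance_percent):
    """Map a calibrated NIR reflectance image (%) to SSA (mm^-1),
    applying Equation (3) to every pixel."""
    return A * np.exp(np.asarray(reflectance_percent, float) / T)

# A uniform 60% reflectance patch maps to roughly 2.3 mm^-1.
ssa_map = reflectance_to_ssa(np.full((4, 4), 60.0))
```

Because the relation is exponential, the relative SSA error grows with reflectance, consistent with the 4–15% error range quoted above.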
The method ‘visualizes’ layers well and shows the slight vertical and horizontal structural variations that are otherwise difficult to record by conventional field methods. Even very small local differences in the SSA are visible (e.g. the infiltration zone in Fig. 6).

The method could be improved in several ways. A homogeneous and diffuse illumination of the profile wall is essential, because the digital correction of an inhomogeneous illumination is not possible without losing the link to absolute reflectance. We found that in the field such heterogeneities in the illumination are not detectable by eye. Figure 6 shows such uncontrolled reflections: the upper-left part of the profile is too bright, which makes a quantitative interpretation of this part difficult. The shading for diffuse illumination must cover at least 0.5 m behind the profile wall, and must extend over the side walls of the pit. The method could also be improved by the use of a flat-field correction, in which a second image, where a target of constant brightness covers the whole profile wall, is subtracted from the first image. This approach corrects even very local illumination heterogeneities and could be used instead of the digital interpolation between the reference targets that was used in this study.

We conclude that the current technique allows us to visualize relative differences of SSA at very high resolution. The SSA on a 2-D image can be measured with an uncertainty of about 15%. The proposed improvements could significantly enhance this result. The ability to measure SSA in two dimensions opens up the possibility of linking micro- and macroscale measurements of snow structure. The SSA of individual layers, as well as the stratigraphy of the snow cover, is crucial for physical and chemical models. Because of the widespread use of SSA to parameterize the air permeability and thermal conductivity of sintered materials (German, 1996), it can be expected that this should also be possible for snow. Such an extension of the present work would be very useful in initializing and calibrating numerical models that simulate the transport and transformation processes occurring in a snow cover.

Fig. 5. SSA mapped for a snow-pit wall combined with a traditional hand profile. Larger boundaries are marked with a dotted line.

ACKNOWLEDGEMENTS

We thank the reviewers, M. Sturm and S. Warren, for valuable comments and suggestions. The financial support of the Swiss National Science Foundation (project No. 200021-101884) is acknowledged.

ADDITIONAL MATERIAL

Additional material describing new digital cameras suitable for NIR imaging and programs for image processing can be requested from the corresponding author (M.S.).

REFERENCES

Albert, M.R. and F. Perron. 2000. Ice layer and surface crust permeability in a seasonal snowpack. Hydrol. Process., 14(18), 3207–3214.
Baddeley, A.J., H.J. Gundersen and L.M. Cruz-Orive. 1986. Estimation of surface area from vertical sections. J. Microsc., 142(3), 259–276.
Benson, C.S. 1962. Stratigraphic studies in the snow and firn of the Greenland ice sheet. SIPRE Res. Rep. 70.
Brun, E., E. Martin, V. Simon, C. Gendre and C. Coléou. 1989. An energy and mass model of snow cover suitable for operational avalanche forecasting. J. Glaciol., 35(121), 333–342.
Brzoska, J.B. and 7 others. 2001. Computation of the surface area of natural snow 3D images from X-ray tomography: two approaches. Image Anal. Stereol., 20(2), 306–312.
Colbeck, S.C. 1991. The layered character of snow covers. Rev. Geophys., 29(1), 81–96.
Colbeck, S.C. and 7 others. 1990. The international classification for seasonal snow on the ground. Wallingford, Oxon., International Association of Hydrological Sciences.
International Commission on Snow and Ice.
Dominé, F. and P.B. Shepson. 2002. Air–snow interactions and atmospheric chemistry. Science, 297(5586), 1506–1510.
Dozier, J. 1992. Remote sensing of alpine snow cover in visible and near-infrared wavelengths. In Kaitowski, M. and M.K. Decker, eds. Proceedings from a symposium: Snow Science: Reflections on the Past, Perspectives on the Future. Alta, UT, The Center for Snow Science at Alta, 10–21.
Dozier, J., S.R. Schneider and D.F. McGinnis, Jr. 1981. Effect of grain size and snowpack water equivalence on visible and near-infrared satellite observations of snow. Water Resour. Res., 17(4), 1213–1221.
German, R.M. 1996. Sintering theory and practice. New York, etc., John Wiley & Sons, Inc.
Giddings, J.C. and E. LaChapelle. 1961. Diffusion theory applied to radiant energy distribution and albedo of snow. J. Geophys. Res., 66(1), 181–189.
Good, W. and G. Krüsi. 1992. Micro- and macro-analyses of stratigraphic snow profiles. In Proceedings of the International Snow Science Workshop, 4–8 October 1992, Breckenridge, Colorado, USA. Denver, CO, Colorado Avalanche Information Center, 1–9.
Grenfell, T.C. and S.G. Warren. 1999. Representation of a nonspherical ice particle by a collection of independent spheres for scattering and absorption of radiation. J. Geophys. Res., 104(D24), 31,697–31,709.
Haddon, J.F., M. Schneebeli and O. Buser. 1997. Automatic segmentation and classification using a co-occurrence based approach. In Frost, J.D. and S. McNeil, eds. Imaging technologies: techniques and applications in civil engineering. Reston, VA, American Society of Civil Engineers, 175–184.
Legagneux, L. and F. Dominé. 2005. A mean field model of the decrease of the specific surface area of dry snow during isothermal metamorphism. J. Geophys. Res., 110(F4), F04011. (10.1029/2004JF000181.)
Legagneux, L., A. Cabanes and F. Dominé. 2002. Measurement of the specific surface area of 176 snow samples using methane adsorption at 77 K. J.
Geophys. Res., 107(D17), 4335. (10.1029/2001JD001016.)
Lehning, M., P. Bartelt, B. Brown, C. Fierz and P. Satyawali. 2002. A physical SNOWPACK model for the Swiss avalanche warning. Part II. Snow microstructure. Cold Reg. Sci. Technol., 35(3), 147–167.
Leroux, C., J.L. Deuzé, P. Goloub, C. Sergent and M. Fily. 1998. Ground measurements of the polarized bidirectional reflectance of snow in the near-infrared spectral domain: comparisons with model results. J. Geophys. Res., 103(D16), 19,721–19,731.
Leroux, C., J. Lenoble, G. Brogniez, J.W. Hovenier and J.F. De Haan. 1999. A model for the bidirectional polarized reflectance of snow. J. Quant. Spectrosc. Radiat. Transfer, 61(3), 273–285.
Matzl, M. 2006. Quantifying the stratigraphy of snow profiles. (PhD thesis, Swiss Federal Institute of Technology, Zürich.) (http://e-collection.ethbib.ethz.ch/cgi-bin/show.pl?type=diss&nr=16570,2006)
Mätzler, C. 2002. Relation between grain-size and correlation length of snow. J. Glaciol., 48(162), 461–466.
Mitchell, D.L. 2002. Effective diameter in radiation transfer: general definition, applications, and limitations. J. Atmos. Sci., 59(15), 2330–2346.

Fig. 6. SSA mapped for a snow pit with inhomogeneous illumination.

Narita, H. 1971. Specific surface of deposited snow. II. Low Temp. Sci., Ser. A, 29, 69–79. [In Japanese with English summary.]
Nolin, A.W. and J. Dozier. 2000. A hyperspectral method for remotely sensing the grain size of snow. Remote Sens. Environ., 74(2), 207–216.
Pielmeier, C. and M. Schneebeli. 2003. Stratigraphy and changes in hardness of snow measured by hand, ramsonde and snow micro penetrometer: a comparison with planar sections. Cold Reg. Sci. Technol., 37(3), 393–405.
Schneebeli, M. and S.A. Sokratov. 2004.
Tomography of temperature gradient metamorphism of snow and associated changes in heat conductivity. Hydrol. Process., 18(18), 3655–3665.
Sergent, C., E. Pougatch, M. Sudul and B. Bourdelles. 1993. Experimental investigation of optical snow properties. Ann. Glaciol., 17, 281–287.
Sturm, M. and C. Benson. 2004. Scales of spatial heterogeneity for perennial and seasonal snow layers. Ann. Glaciol., 38, 253–260.
Warren, S.G. 1982. Optical properties of snow. Rev. Geophys. Space Phys., 20(1), 67–89.
Warren, S.G. and W.J. Wiscombe. 1980. A model for the spectral albedo of snow. II. Snow containing atmospheric aerosols. J. Atmos. Sci., 37(12), 2734–2745.
Wiesmann, A., C. Mätzler and T. Weise. 1998. Radiometric and structural measurements of snow samples. Radio Sci., 33(2), 273–289.
Wiscombe, W.J. and S.G. Warren. 1980. A model for the spectral albedo of snow. I. Pure snow. J. Atmos. Sci., 37(12), 2712–2733.
Zhou, X., S. Li and K. Stamnes. 2003. Effects of vertical inhomogeneity on snow spectral albedo and its implication for optical remote sensing of snow. J. Geophys. Res., 108(D23), 4738. (10.1029/2003JD003859.)

MS received 3 April 2006 and accepted in revised form 25 August 2006

J Med Assoc Thai Vol. 90 No. 3 2007

Correspondence to : Chompoopong S, Department of Anatomy, Faculty of Medicine, Siriraj Hospital, Mahidol University, Bangkoknoi, Bangkok 10700, Thailand.
Phone: 0-2419-7035, Fax: 0-2419-8523, E-mail: siasg@mahidol.ac.th

The Acromial Morphology of Thais in Relation to Gender and Age: Study in Scapular Dried Bone

Arraya Sangiampong MS*, Supin Chompoopong MS, PhD*, Sanjai Sangvichien MD, PhD*, Penake Thongtong MD*, Suwarat Wongjittraporn MD*
*Department of Anatomy, Faculty of Medicine, Siriraj Hospital, Mahidol University

Objective: To determine the acromial shape and to examine whether there is a correlation between acromial morphology and gender, age and side.
Material and Method: The present study examined 154 dried Thai scapulas (107 males and 47 females) with an age range of 16 to 87 years (mean = 49 ± 17 years). The acromial morphology of each scapula was studied by computerized image analysis of digitized photographs taken through the supraspinatous outlet view, with the distance (M) measured from the anterior to the posterior end of the acromion, the height (H) of the resultant curve, and the distance (N) from the anterior end to the point perpendicular to the height. The acromial types were defined as type I (flat), II (curved) and III (hooked), with the criterion that N is more than or equal to 2/3 of M, more than or equal to 1/3 of M, or less than 1/3 of M, respectively.
Results: The incidences of types I, II and III were 3.2%, 93.5% and 3.2%, respectively. Type II predominated in both sexes, female (93.6%) and male (93.5%), and on both sides, left (96.0%) and right (91.1%). With respect to the age ranges, type II was found in 100% of subjects less than 30 years old; in those between 30 and 60 years the distribution was 4.5% (I), 93.2% (II) and 2.3% (III); in those more than 60 years old it was 2.3% (I), 90.7% (II) and 7.0% (III). Spur formation on the anterior end of the acromion was found in 14.9% of scapulas, mostly of the curved type; it was associated with the hooked type in only one scapula.
Conclusion: There was no significant difference in type between sexes, sides or age ranges (p > 0.05). The spurs found are not related to acromial morphology or old age.
Keywords: Acromion, Acromial process, Acromial morphology, Scapula

Among the most common causes of pain and disability in the upper limb are inflammation of the rotator cuff tendons and rotator cuff tears, which relate to the structure of the acromion(1). The etiology of rotator cuff tears is multifactorial. The primary factors are a curved or hook-shaped anterior end of the acromion and subacromial osteophytes; both of these are reported to be involved in tearing of the supraspinatus tendon(2). Differences in the development and morphology of the acromion, and the presence of anterior acromial spurs and inferior acromioclavicular osteophytes, decrease the volume of the subacromial space, leading to impingement(3) and to very close contact between the supraspinatus and the anterior inferior part of the acromion, occurring at 90 degrees of abduction in internal rotation(4). Variations in the shape and orientation of the anterior acromion have also been implicated as predisposing factors for the development of rotator cuff problems(5). Many reports show the incidence of complete rotator cuff tear ranging from 5% to 25%(6-10). In a Thai cadaveric study reported by Wunnasinthop et al(11), the incidence is not uncommon (12%) when compared to studies done in developed countries. The relation between acromial shape and rotator cuff tears in Thai patients might therefore have predictive value in determining the success of conservative measures and the need for surgery in patients with impingement syndrome.

J Med Assoc Thai 2007; 90 (3): 502-7
Full text. e-Journal: http://www.medassocthai.org/journal

As identified by Bigliani et al(12), there are three distinct types of acromion: type I (flat), type II (curved) and type III (hooked), based on samples such as cadaveric dissections, lateral radiographs in the sagittal plane of the anterior slope of the acromion(12), and three-dimensional MRI and CT reconstructions of the shoulders of patients(13).
Therefore, the purpose of the present study was to determine the acromial shape more completely by photography of dried scapulas of Thais through the supraspinatous outlet view, and to examine whether there is a relationship between acromial morphology and gender, age and side.

Material and Method
One hundred and fifty-four dried cadaveric scapulas of known gender and age were selected from the collection of the Department of Anatomy, Faculty of Medicine, Siriraj Hospital, Mahidol University. Both sides of each scapula were studied for acromial morphology by computerized image analysis and categorized by the criteria of Bigliani et al as type I (flat), type II (curved) or type III (hooked)(12). All scapulas were photographed (Camedia E-10, Olympus Optical Co, Ltd.) through the supraspinatous outlet view. The plane of the scapula was adjusted in front of a mirror. Its mirror image showed the “Y” appearance, with its three limbs represented in profile, and the center of the glenoid cavity perfectly centred on the junction of these limbs(14) (Fig. 1). The images were transferred to a computer, and the distances were marked using the Adobe Photoshop version 6.0 program, as shown in Fig. 1. All distances of each acromial process were measured directly with computer software (UTHSCSA Image Tool version 2.0 for Windows). The distance (M) from the anterior to the posterior end, the height (H) of the resultant curve, and the distance (N) from the anterior end to the point perpendicular to the height were measured. All measurements were analysed for classification of the acromial types: N more than or equal to 2/3 of M, more than or equal to 1/3 of M, or less than 1/3 of M defined type I (flat), II (curved) and III (hooked), respectively (Fig. 2)(15). All scapulas were directly inspected for whether or not each contained a spur on the lower surface of the anterior end of the acromial process (Fig. 3).
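The classification criterion can be expressed directly as a ratio test. A small sketch (the measurement values in the examples are hypothetical; M and N must be in the same unit):

```python
def classify_acromion(M, N):
    """Classify acromial type from the criterion used in this study:
    N >= 2/3 * M  -> type I (flat)
    1/3 * M <= N < 2/3 * M -> type II (curved)
    N < 1/3 * M   -> type III (hooked)

    M: distance from the anterior to the posterior end of the acromion.
    N: distance from the anterior end to the point perpendicular to
       the height (H) of the curve.
    """
    ratio = N / M
    if ratio >= 2 / 3:
        return "I (flat)"
    if ratio >= 1 / 3:
        return "II (curved)"
    return "III (hooked)"
```

For example, a scapula with M = 30 mm and N = 15 mm (ratio 0.5) falls in the curved class, the most frequent type in this study.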
Statistical analysis was performed using the SPSS program, with the Pearson chi-square test used to determine whether or not there was an association between these characteristics and acromial type, at p < 0.05.

Fig. 1 Digitized photograph of a scapula through the supraspinatous outlet, viewed by setting the plane of the scapula and its mirror image to show the “Y” appearance with its three limbs. The measurement of the distance (M) from the anterior margin to the posterior margin of the acromion, the height (H) of the resultant curve, and the distance (N) from the anterior margin to the point perpendicular to the height, for determining the acromial type.

Fig. 2 Three types of scapula and the incidences found for each type: type I, flat (A & D); type II, curved (B & E); and type III, hooked (C & F).

Fig. 3 A spur (arrows) on the anterior acromial end.

Results
The ages of the 154 scapulas (107 males and 47 females) ranged from 16 to 87 years, with a mean of 49 ± 17 years. The distances M and N were measured in all scapulas (data not shown). The acromial shape was categorized by determining M and N as shown in Fig. 1, and the scapulas were classified into three different types (Fig. 2). Tables 1 and 2 show the distribution of acromial types according to gender and side, respectively. Type II was the most frequently found, in both sexes and on both sides. With respect to the age ranges in Table 3, type II was found in 100% of subjects less than 30 years old, and a higher percentage of type III was found in the older groups. There were no significant differences in type between sexes, sides or age ranges (p > 0.05). On direct inspection, the spur shown in Fig. 3 was found in 23 scapulas (14.9%). It was associated with the hooked type in only one scapula; most of the spurs were found in the curved type. The spur was found most often (16 scapulas, 69.6%) in the age range 30-60 years, rather than in the other age ranges (Table 3).

Table 1. Distribution of acromial types according to gender
Sex        Female       Male         Total
n          47           107          154
Type I     3 (6.4%)     2 (1.9%)     5 (3.2%)
Type II    44 (93.6%)   100 (93.5%)  144 (93.5%)
Type III   0 (0%)       5 (3.2%)     5 (3.2%)
p > 0.05 by Fisher's exact test and chi-square test

Table 2. Distribution of acromial types according to side
Side       Left         Right        Total
n          75           79           154
Type I     2 (2.7%)     3 (3.8%)     5 (3.2%)
Type II    72 (96.0%)   72 (91.1%)   144 (93.5%)
Type III   1 (1.3%)     4 (5.1%)     5 (3.2%)
p > 0.05 by Fisher's exact test and chi-square test

Table 3. Distribution of acromial types and spurs related to age range
Age range (years)  < 30        30-60        > 60
n                  23          88           43
Type I             0 (0%)      4 (4.5%)     1 (2.3%)
Type II            23 (100%)   82 (93.2%)   39 (90.7%)
Type III           0 (0%)      2 (2.3%)     3 (7.0%)
Spur               1 (4.4%)    16 (69.6%)   6 (26.1%)
p > 0.05 by Fisher's exact test and chi-square test

Discussion
The description of acromial morphology has varied among authors, and this may lead to confusion when interpreting radiological data and considering the surgical treatment of such conditions. Difficulties should be noted in reproducing good-quality radiographs; most data are from radiological views and not from the observation of dried bones. In past reviews, summarized in Table 4, radiographic findings of acromial types were reported in cadaveric shoulders and classified as type I, type II and type III(12). MRI was also found to be a valid method of determining acromial morphology in three dimensions. There was no difference in the incidence of acromial morphology between radiographs and MRI in patients, and type II was found most frequently(13,16). These data differed from the report by Hirano et al(15), indicating
Another study by using radiographs in the 394 cadaveric scapulas revealed only 8.6% of type III acromions(17), whereas the study of Morrison et al(18) in patients revealed 41% of type III acromions and the above studies of the patients also showed 28% (radiographs), 25% (MRI) and 39.6% (MRI) of type III acromions(15,16). The different results may be a reflection of the selected samples among radiographs, MRI from the patients or dried bones as well as the method of differentiating between acromial types. Similar to the report of Getz et al(17), which found 8.6% of type III acromions, the present study also found less frequent incidence of type III. The present results revealed 3.2% of type III acromions by studying 154 dried Thai scapulas. The different method of differentiation and classification that the authors designed in the present study by using photo- graphy with fixed plane of scapulas and mirror image, instead of radiography through the supraspinatus outlet view, showed the result to be mildly different. Even though Getz et al(17) used the different classifica- tion based on the ratio between anterior and posterior angles of the acromial arch, they still concluded that the Bigliani classification system of acromial types remains a reasonable method to distinguish acromial types. This is due to the difficultly in fixing the angula- tion of radiography in clinical studies using their method. The results of recent attempts to visually clas- sify acromions based on Bigliani et al(12) with direct inspection and photography of scapulas can be used to distinguish adequately between type I, II and type III only in dried bone (not from patients), therefore the incidence of type III acromions were found less fre- quently than in the previous report. From Table 2, the presented data show no significant difference between the right side and left side; therefore, the acromial shape tends to be symmetric as shown by Getz et al(17). 
In the report of Getz et al(17), the acromial shape varied significantly with sex: men were more likely to have type III and women tended to have type I. When the authors considered the relationship between acromial types and sex in the present study, there was a high percentage of type II in both women (93.6%) and men (93.5%), and the acromial shape did not vary significantly with sex; therefore, there was no relationship between acromial type and sex. Edelson et al(19) reported that hooking of the acromion was not found in subjects under the age of 30 years; they concluded that the hooked configuration develops at later ages as a result of calcification of the acromial attachment of the coracoacromial ligament. In the present study, in the entire group of 154 scapulas there was an increase in the incidence of type III acromions and a decrease in the incidence of type I and II acromions in subjects over 60 years old, as a previous study indicated(20). These results show the possibility that type I acromions may progress to type II acromions and then change further into type III acromions over time. This finding is also supported by examination of MRI of patients' shoulders, which shows a gradual transition from a flat acromion at a younger age to a more hooked acromion at an older age that was significant in both the midsagittal and lateral-sagittal planes(13). In any case, whether the acromial morphology is an innate anatomic characteristic or whether it represents a degenerative process, with type I acromions changing to type III acromions over time, is still uncertain(20).

Table 4. Reviews of acromial types by authors
Authors                      Sample (No.)  Subject            Mode of study  Type I  Type II  Type III
Bigliani et al 1986(12)      140           cadavers           radiograph     17%     43%      39%
Morrison et al 1987(18)      200           patients           radiograph     18%     41%      41%
MacGillivray et al 1998(13)  98            patients           MRI            40%     52%      8%
Wang et al 2000(16)          32            patients           X-ray          6%      66%      28%
                                                              MRI            6%      69%      25%
Hirano et al 2002(15)        91            patients with RCT  MRI            36.3%   24.2%    39.6%
Getz et al 1996(17)          394           cadavers           radiograph     22.8%   68.5%    8.6%
Sangiampong et al 2006       154           dried bone         photograph     3.2%    93.5%    3.2%

As described by Nicholson et al(21), spur formation on the anterior acromion is an age-dependent process not related to acromial type, while the acromial morphology is a primary anatomic characteristic. They also concluded that the variations seen in the acromial morphologic condition are not acquired from age-related changes and spur formation, and thus contribute to impingement disease independently of, and in addition to, age-related processes(21). Other evidence shows that bony spurs on the acromion and a thickening of the fibrocartilaginous layer are not degenerative changes, but are caused by increased tensile strength of the coracoacromial ligament and are not influenced by age(22). The present results, from Thai scapulas, confirm these previous findings: the spurs, found in 14.94% of cases, are not related to acromial morphology or older age.

Conclusion
The present results reveal that the acromial shape of dried Thai scapulas does not vary significantly with age or sex. The type with the highest incidence is the curved acromion. Acromial shape tends to be symmetric. The relative percentages of the types differ from previously reported values. There is an increase in the incidence of type III and a decrease in types I and II in subjects over 60 years, showing the possibility that type I may progress to type II and then change further into type III over time.
This finding also confirms the previous report that the spurs found are not related to acromial morphology and old age. Whether the hooked type and the spur are involved in degenerative changes or rotator cuff tears remains uncertain; the acromial type data from these results will therefore support further study on the incidence of acromial types in rotator cuff tears in Thai cadavers.

Acknowledgements

The authors wish to thank the Department of Anatomy, Faculty of Medicine, Siriraj Hospital, Mahidol University, Bangkok, Thailand for the specimens provided, Miss Benjamas Sarakoonpaisarn for help in drawing the pictures, and Mr. Suthipol Udompunturak from the Epidemiology Unit, Siriraj Hospital for his statistical guidance.

References

1. Toivonen DA, Tuite MJ, Orwin JF. Acromial structure and tears of the rotator cuff. J Shoulder Elbow Surg 1995; 4: 376-83.
2. Mayerhofer ME, Breitenseher MJ. [Impingement syndrome of the shoulder]. Radiologe 2004; 44: 569-77.
3. Bigliani LU, Ticker JB, Flatow EL, Soslowsky LJ, Mow VC. The relationship of acromial architecture to rotator cuff disease. Clin Sports Med 1991; 10: 823-38.
4. Graichen H, Bonel H, Stammberger T, Heuck A, Englmeier KH, Reiser M, et al. A technique for determining the spatial relationship between the rotator cuff and the subacromial space in arm abduction using MRI and 3D image processing. Magn Reson Med 1998; 40: 640-3.
5. Zuckerman JD, Kummer FJ, Panos SN. Characterization of acromial concavity. An in vitro computer analysis. Bull Hosp Jt Dis 2000; 59: 69-72.
6. Keyes CL. Observations on rupture of supraspinatus tendon. Based upon a study of 73 cadavers. Ann Surg 1933; 97: 849-56.
7. Neer CS, Craig EV, Fukuda H. Cuff-tear arthropathy. J Bone Joint Surg Am 1983; 65: 1232-44.
8. Ogata S, Uhthoff HK. Acromial enthesopathy and rotator cuff tear. A radiologic and histologic postmortem investigation of the coracoacromial arch. Clin Orthop Relat Res 1990; 39-48.
9.
Wilson CL, Duff GL. Pathologic study of degeneration and rupture of the supraspinatus tendon. Arch Surg 1943; 47: 121-35.
10. Skinner HA. Anatomical consideration relative to rupture of the supraspinatus tendon. J Bone Joint Surg Am 1937; 19: 137-51.
11. Wunnasinthop S, Sangiampong A, Pichaisak W, Ittipalin K, Lamsam C. Incidence of rotator cuff tear: a cadaveric study. J Sports Med Assoc Thai 2004; 8: 9-16.
12. Bigliani LU, Morrison DS, April EW. The morphology of the acromion and its relationship to rotator cuff tears [abstract]. Orthop Trans 1986; 10: 228.
13. MacGillivray JD, Fealy S, Potter HG, O'Brien SJ. Multiplanar analysis of acromion morphology. Am J Sports Med 1998; 26: 836-40.
14. Prato N, Peloso D, Franconeri A, Tegaldo G, Ravera GB, Silvestri E, et al. The anterior tilt of the acromion: radiographic evaluation and correlation with shoulder diseases. Eur Radiol 1998; 8: 1639-46.
15. Hirano M, Ide J, Takagi K. Acromial shapes and extension of rotator cuff tears: magnetic resonance imaging evaluation. J Shoulder Elbow Surg 2002; 11: 576-8.
16. Wang JC, Hatch JD, Shapiro MS. Comparison of MRI and radiographs in the evaluation of acromial morphology. Orthopedics 2000; 23: 1269-71.
17. Getz JD, Recht MP, Piraino DW, Schils JP, Latimer BM, Jellema LM, et al. Acromial morphology: relation to sex, age, symmetry, and subacromial enthesophytes. Radiology 1996; 199: 737-42.
18. Morrison DS, Bigliani LU. The clinical significance of variations in acromial morphology [abstract]. Orthop Trans 1987; 11: 234.
19. Edelson JG. The 'hooked' acromion revisited. J Bone Joint Surg Br 1995; 77: 284-7.
20. Wang JC, Shapiro MS. Changes in acromial morphology with age. J Shoulder Elbow Surg 1997; 6: 55-9.
21. Nicholson GP, Goodman DA, Flatow EL, Bigliani LU. The acromion: morphologic condition and age-related changes. A study of 420 scapulas. J Shoulder Elbow Surg 1996; 5: 1-11.
22. Jacobson SR, Speer KP, Moor JT, Janda DH, Saddemi SR, MacDonald PB, et al. Reliability of radiographic assessment of acromial morphology. J Shoulder Elbow Surg 1995; 4: 449-53.

Thai-language abstract (translated): The acromial morphology of Thai scapulas in relation to sex and age. Araya Sangiampong, Supin Chompoopong, Sanjai Sangvichien, Pen-Ek Thongtong, Suwarat Wongjittraporn. A total of 154 Thai dried scapulas (107 male and 47 female), mean age 49 ± 17 years (range 16 to 87 years), were studied by photographing the acromion through the supraspinatus outlet view. A computer image-analysis program was then used to measure the distance from the anterior to the posterior end of the acromion (M), the maximal height of its curvature (H), and the distance from the anterior end to the point at which a perpendicular line was dropped from the highest point (N). The acromial type was classified by the following criteria: if N was greater than or equal to 2/3 of M, greater than or equal to 1/3 of M, or less than 1/3 of M, the acromion was classified as type I (flat), type II (curved), or type III (hooked), respectively. The results showed the acromion to be flat in 3.2%, curved in 93.5%, and hooked in 3.2%. No statistically significant difference in acromial type was found between the sexes or between the left (96%) and right (91.1%) sides (p > 0.05). By age group, in subjects under 30 years the acromion was of the curved type in 100%; between 30 and 60 years, 4.5% were flat, 93.2% curved, and 2.3% hooked; and over 60 years, 2.3% were flat, 90.7% curved, and 7% hooked. On gross examination, a bony projection known as a spur was found at the anteroinferior aspect of the acromion in 14.94% of specimens; most spurs occurred on the curved type, with only one specimen showing a spur on the hooked type.

work_44tdvycaqvg4zaj6rxhzt3q2rq ----

International Journal of Contents, Vol.6, No.3, Sep 2010

Scanner Certification Tool for the Standardization of Digitized Documents: Focusing on Target Factors and Measurement Programs

Hyung-Ju Park, Digital Scientific Imaging Laboratory, Graduate School of Advanced Imaging and Multimedia, Chung-Ang University, Seoul, Korea
Dong-Hwan Har, Digital Scientific Imaging Laboratory, Graduate School of Advanced Imaging and Multimedia, Chung-Ang University, Seoul, Korea

ABSTRACT

Scanners play an important role in digitally reproducing the color and imaging of original documents used in public offices; however, the current system lacks a standard for digitized documents created by scanners, complicating efforts to create a digitized system. In particular, macrography cannot guarantee the accuracy and reliability of digitized color documents, pictures, and photographs created by scanners. To this end, we develop a standardized evaluation tool and test target to certify digitized documents created by a scanner in the domestic environment. In this study, we enhance the accuracy and reliability of scanned data to create an advanced standard evaluation tool for scanners, and in producing a scanner certification standard we address existing problems in the growing market.
We anticipate that this new standard will see a high degree of application in the current environment.

Keywords: scanner, standardization, target, documentation.

(This is an excellent paper selected from the papers presented at ICCC 2009. * Corresponding author. E-mail: dhhar@cau.ac.kr. Manuscript received Mar. 22, 2010; accepted Jul. 30, 2010.)

1. INTRODUCTION

Scanners are used by public agencies to digitize documents and photographs, a process in which it is important to record the exact original color and imaging. The current standard applied to digitized documents produced by the National Archives of Korea is based on Japanese and American standards. Under the National Archives of Korea's current policy on scanner evaluation methods, an adequate system for scanner certification needs to be developed. We therefore researched an automatic, comprehensive scanner quality evaluation program, consisting of a standard target and a program that covers macrography, an automation test, and a TWAIN test. In practice, however, macrography alone may not be the best way to measure the performance and quality of scanners: because it relies on visual inspection of digitized documents, it lacks accuracy and reliability due to variation among inspectors. The current system also has no standard for verifying scanner performance, so we focused on a scanner certification tool suitable as a domestic standard. The accuracy and reliability of the scanner certification program rest not only on macrography but also on the automation test and the TWAIN test; because the previous system included neither of these, we propose them as new evaluation tests for accuracy and reliability. We have developed a standard certification tool for digitized documents created by scanners. This tool consists of a program that evaluates image-quality factors and compares them against a reference target.

2.
METHOD OF SCANNER QUALITY EVALUATION

Current macrography, based on American and Japanese international standards, is carried out by human inspectors, so results can differ between inspectors, which lowers their reliability and objectivity. Above all, a standard is needed that follows international standards while providing objectivity; developing the new standard target and program therefore establishes both the reliability and the novelty of this research. The proposed evaluation factors for scanners are subjected to a macrography test, an automation test, and a TWAIN (Technology Without An Interesting Name) test. These factors are based on International Organization for Standardization (ISO) standards and are designed into a target (Scanner Certification Target Ver.1.0.). The measurement program (Scanner Certification Program Ver.1.0.) evaluates these factors. (DOI:10.5392/IJoC.2010.6.3.025)

2.1. Macrography test

A macrography test based on the ISO standard examines all factors of reproduced information, such as photographs, pictures, letters, and dotted lines [1],[2],[3]. Table 1 details the measurement factors of macrography: picture, letter size, and thickness of dotted line.

Table 1. Details of macrography

Item | Measuring factor | Related standard | Evaluation
Picture | Photograph, picture, image reproduction (color, B/W) | JIS X6933, ISO/IEC 15775 | Grade on a 1-10 scale for photograph gradation and color reproduction
Letter size | Diverse letter sizes of Korean, Chinese, special characters, and English capital/small letters | JIS X6933, ISO/IEC 15775 | Readability of the minimum letter size distinguishable by the normal naked eye
Thickness of dotted line | Dotted line, thickness of line | JIS X6933, ISO/IEC 15775, ISO 11698-1 | Record the first distinguishable dotted line

2.2.
Automation test

The automation test addresses the following measurement factors: color reproduction ability based on color difference, one- and two-dimensional bar code recognition, evaluation of corner-color reproduction ability, and a graduated-ruler test for extended paper when using ADF (Automatic Document Feeding) [4],[5]. Each factor is based on an ISO standard, and we suggest a new factor and measurement method for practical purposes [6],[7],[8]. Each factor is measured 10 times and the results are compared to the original, with the measurement factors being graded as either pass or fail. Table 2 shows the details of the automation test.

Table 2. Details of the automation test

Item | Measuring factor | Related standard | Evaluation
Color | Scanner color reproduction | KS X ISO 12641, Color Checker SG | Select color target and calculate automatically
Corner color | Identity of four-corner color reproduction ability | KS X ISO 12641 | Select 4-corner color target and calculate automatically
Bar code | One-dimensional (Code 128) and two-dimensional (QR) bar code recognition | ISO/IEC 18004:2006, ISO/IEC 15417:2007 | Select bar code and recognize automatically
Graduated ruler | Extended paper test when using ADF | Newly proposed | Select graduated ruler on both right and left sides

2.3. TWAIN test

This test evaluates whether a scanner supports the standard interface, TWAIN. Since 1992, manufacturers of graphic or imaging input devices (Adobe, Eastman Kodak, Hewlett-Packard, Epson, Logitech) have maintained a standard software interface connecting devices and software. After installing the scanner driver on a PC, the TWAIN interface is examined. Table 3 shows the details of the TWAIN test.

Table 3.
Details of the TWAIN test

Item | Contents
Auto Feed | ADF function test
Duplex | Both-sided scan function test
Brightness | Brightness control function test
Contrast | Contrast control function test
Image File Format | Supported image storage type test
Pixel Type | Supported pixel type test (BW, GRAY, RGB)
Supported Sizes | Supported paper size test
X Resolution | Supported X-axis resolution test
Y Resolution | Supported Y-axis resolution test
Enabled UI Only | Supported user interface setting test
Customs Data | Profile creation or application support

3. DEVELOPMENT OF SCANNER CERTIFICATION TOOL

We have developed a newly designed target (Scanner Certification Target Ver.1.0.) and a program (Scanner Certification Program Ver.1.0.) for the standardization of digitized documents. Fig. 1 shows the process of the study.

Fig. 1. Process of scanner certification tool

3.1. Macrography test

In Fig. 2, we present the test target (Scanner Certification Target Ver.1.0.) used by our program, which serves both the macrography test and the automation test. We incorporated the tests into a single target for reasons of convenience, ease of use, practicality, and economy. The target specifications are shown in Table 4.

Table 4. Specification of the test target

Printer | Epson R 2880
Calibration device | Gretag Macbeth Eye-One
Paper | Epson premium semi-gloss paper
Storage | Temperature 23±2 ℃, relative humidity 50±20 % (ISO 16067-1:2005)
Limitation | A target is withdrawn from use if the average of 10 repeated tests of any factor exceeds an error range of 1.0

Fig. 2. Scanner Certification Target Ver.1.0.

3.2. Development of program

We have developed software to certify scanners for use in digitizing documents. The software features 3 tests: a macrography test, an automation test, and a TWAIN test. Fig. 3 shows the program's interface.
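The "Limitation" rule in Table 4 above (a target is withdrawn when the average of 10 repeated measurements strays beyond an error range of 1.0) amounts to a simple tolerance check. A minimal sketch under that reading, with hypothetical names and the reference value supplied by the caller:

```python
def target_within_tolerance(measurements, reference, max_error=1.0):
    """Average the absolute error of repeated measurements against a
    reference value; the target is rejected (withdrawn from use) when
    the average error exceeds the allowed range (1.0 in Table 4)."""
    errors = [abs(m - reference) for m in measurements]
    return sum(errors) / len(errors) <= max_error
```

For example, ten readings near 10.0 against a reference of 10.0 pass, while readings consistently off by 2.0 fail.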
Fig. 3. Scanner Certification Program Ver.1.0. interface

3.3. Program report

Once the scanner certification program is installed on a PC, it is used to scan the target at paper size A4, resolution 300 dpi, and image mode 24 bit. The program reports the results of the macrography, automation, and TWAIN tests.

3.3.1. Macrography test result: In the macrography test, the scanned target is compared with the original at 100% size using the naked eye. The inspector grades the macrography items and records the values in the program (see Fig. 4).

Fig. 4. Macrography interface

3.3.2. Automation test result: To prevent errors arising from differences between monitors, we created a controlled test environment, calibrating the monitor to a standard color using X-rite's 1Xtreme. Gretag Macbeth's Digital Color Checker SG serves as the reference color target, from which XYZ, CIELAB, and RGB values can be measured. To convert RGB values to CIELAB values, we follow the CIE 1976 standard and use standard light source D50. When the inspector clicks and drags a Region Of Interest (ROI), the program automatically calculates and reports the automation test items: one- and two-dimensional bar code, color, four-corner color, and graduated ruler for ADF (see Fig. 5).

Fig. 5. Color automation test interface

3.3.3. TWAIN test result: The TWAIN test determines whether the selected scanner complies with the standard interface and compares the selected scanner's capabilities with those of the standard interface (see Fig. 6).

Fig. 6. TWAIN test interface

3.4. Program considerations

In this research, we created a scanner certification target and a program to evaluate scanner performance.
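The colour comparison described in section 3.3.2 above (sRGB patch values converted to CIELAB under illuminant D50, then scored by colour difference) can be sketched as follows. This is an illustrative implementation, not the authors' code: the D50-adapted sRGB matrix and white point are commonly tabulated Bradford-adapted values, and the metric shown is the simple CIE 1976 Delta E.

```python
import math

# Bradford-adapted sRGB -> XYZ matrix and white point for illuminant D50,
# matching the paper's choice of CIE 1976 L*a*b* under D50.
SRGB_TO_XYZ_D50 = [(0.4360747, 0.3850649, 0.1430804),
                   (0.2225045, 0.7168786, 0.0606169),
                   (0.0139322, 0.0971045, 0.7141733)]
WHITE_D50 = (0.96422, 1.00000, 0.82521)

def srgb_to_lab(rgb):
    """Convert an 8-bit sRGB triple to a CIELAB (D50) triple."""
    def decode(c):  # sRGB transfer function -> linear light
        u = c / 255.0
        return ((u + 0.055) / 1.055) ** 2.4 if u > 0.04045 else u / 12.92
    lin = [decode(c) for c in rgb]
    xyz = [sum(m * v for m, v in zip(row, lin)) for row in SRGB_TO_XYZ_D50]
    def f(t):  # CIE 1976 nonlinearity
        return t ** (1 / 3) if t > (6 / 29) ** 3 else t / (3 * (6 / 29) ** 2) + 4 / 29
    fx, fy, fz = (f(v / w) for v, w in zip(xyz, WHITE_D50))
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)

def delta_e_76(lab1, lab2):
    """CIE 1976 colour difference between two LAB triples."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(lab1, lab2)))
```

The per-patch score from delta_e_76 between a scanned patch and its reference is the kind of number the automation test records and averages over its 10 trials.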
In the previous system, inspectors wrote down macrography test results manually, so the efficiency and speed of the certification test were insufficient for verifying a scanner's performance. The tool proposed here records and saves test results automatically, which improves the speed, accuracy, and reliability of the results. In particular, the automation test, which uses colour-difference values, provides an objective tool for measuring colour reproduction across different devices; because it reports its result as a recorded number, high reliability can be expected of the program. The automation test also covers one- and two-dimensional bar codes, ADF paper-extension measurement, and the TWAIN standard interface; all of these are automatic measurements that support objectivity.

4. RESULT

Throughout this study, we measured Delta E for color testing, which was difficult to define in previous macrography systems. This allowed us to measure the accuracy of scanner color reproduction with the objectivity of a scanner certification program. In addition, we designed a synthesized target that contains a variety of items, such as letters, dotted lines, photographs, bar codes, and graduated rulers. This target, which is based on ISO standards, works with our program to evaluate a scanner's performance. The program makes the test process faster and improves its objectivity. We hope that this research will provide a solution to the problem of certifying scanners for digitized documents, which stems from the lack of a specific scanner standard, and will improve the conservation of original documents in digitized form.
REFERENCES

[1] ISO Std., ISO 11698-1 - Micrographics - Methods of measuring image quality produced by aperture card scanners - Part 1: Characteristics of the test images, Geneva, Switzerland, 2002.
[2] ISO Std., ISO/IEC 15775 - Information technology - Office machines - Method of specifying image reproduction of colour copying machines by analog test charts - Realisation and application, Geneva, Switzerland, 1998.
[3] JIS Std., JIS X6933 - Color Test Chart for Copying Machines, Tokyo, Japan, 2003.
[4] ISO Std., ISO 11664-4 - Colorimetry - Part 4: CIE 1976 L*a*b* Colour space, Geneva, Switzerland, 2008.
[5] ISO Std., ISO 21550 - Photography - Electronic scanners for photographic images - Dynamic range measurements, Geneva, Switzerland, 2004.
[6] ISO Std., ISO 12641 - Graphic technology - Displays for colour proofing - Characteristics and viewing conditions, Geneva, Switzerland, 2004.
[7] ISO Std., ISO/IEC 18004 - Information technology - Automatic identification and data capture techniques - QR Code 2005 bar code symbology specification, Geneva, Switzerland, 2006.
[8] ISO Std., ISO/IEC 15417 - Information technology - Automatic identification and data capture techniques - Bar code symbology specification - Code 128, Geneva, Switzerland, 2007.

ACKNOWLEDGEMENT

This work was supported by the second phase of the Brain Korea 21 Program in 2010. This research was also supported by the Seoul Future Contents Convergence (SFCC) Cluster established by the Seoul R&BD Program (10570).

Hyung-Ju Park
Hyung-Ju Park has worked as a chief researcher at the Digital Scientific Imaging Lab at Chung-Ang University, South Korea, since August 1, 2006, immediately after completing her master's study at Brooks Institute of Photography (Santa Barbara, CA, USA).
Her major areas include photography, digital imaging, image quality and standardization, and academic-industrial cooperation with Samsung Inc., Epson, InZi Soft Inc., the Korea Foundation for the Advancement of Science and Creativity, and the Supreme Prosecutors' Office in Korea.

Dong-Hwan Har
He received his B.S. and M.A. in industrial photography from Brooks Institute of Photography and Ohio University, U.S.A., respectively, and received his Ph.D. in educational technology from Han-Yang University, Korea. He is a professor in the Digital Scientific Imaging Lab, Graduate School of Advanced Imaging and Multimedia, at Chung-Ang University. His main research interests include forensic, scientific, forgery, medical, and digital photography.

work_44xmxtzd75bithjee5gm4t6ndu ----

doi:10.1016/j.joms.2005.05.227

trauma. Roughly half of the laryngeal fractures in our series were managed non-operatively, although approximately three-quarters required airway intervention ranging from intubation to emergent cricothyroidotomy. Clinicians treating maxillofacial trauma need to be familiar with the signs and symptoms of this injury. A timely evaluation of the larynx and rapid airway intervention are essential for a successful outcome. The Schaefer classification of injury severity and corresponding treatment guidelines were consistent with our study.

References
Leopold DA: Laryngeal trauma. A historical comparison of treatment methods. Arch Otolaryngol 109:106, 1983
Schaefer SD, Stringer SP: Laryngeal trauma, in Bailey BJ, Pillsbury HC, Driscoll BP (eds): Head and Neck Surgery: Otolaryngology. Philadelphia, PA, Lippincott-Raven, 1998, pp 947-956

Short and Long Term Effects of Sildenafil on Skin Flap Survival in Rats
Kristopher L. Hart, DDS, 705 Aumond Rd, Augusta, GA 30909 (Baur D; Hodam J; Wood LL; Parham M; Keith K; Vazquez R; Ager E; Pizarro J)

Statement of the Problem: Annually in the United States, approximately 175,000 people sustain severe facial trauma requiring major surgical repair. These injuries often cause significant loss of facial skin, leading to severe aesthetic and functional deficits. Skin flaps are the foundation for reconstructing such defects. The most important factor determining the survival of these flaps is the delivery of oxygen via the circulation. A number of therapeutic modalities have been explored to improve blood flow and oxygenation of flap tissue. One principal approach has been to increase blood flow by vasodilation. However, due to their hypotensive effects, the vasodilators tested thus far have not been utilized in surgical repair of facial skin. Phosphodiesterase (PDE) inhibitors, a class that includes the drug sildenafil, are a relatively new group of FDA-approved drugs whose effect on tissue viability has not been widely explored. The vasodilatory effects of these drugs have the potential to enhance blood flow to wound sites, improve oxygen supply, and promote wound repair. In this study, we examined whether administering sildenafil intraperitoneally at a dose of 45 mg/kg/d has a beneficial effect on the survival of surgical skin flaps in rats.

Materials and Methods: Surgical skin flaps were evaluated using orthogonal polarization spectral imaging, flap image analysis, and histology at 1, 3, 5, and 7 days. Orthogonal polarization spectral imaging provides high quality, high contrast images of the microcirculation of skin flaps. Areas of normal capillary flow are easily differentiated from areas of stasis and areas completely devoid of vessels. First, rats were assigned to either sildenafil treated (45 mg/kg/day IP), vehicle control, or sham (no injection) groups.
Second, caudally based dorsal rectangular (3 x 10 cm) flaps were completely raised and then stapled closed. Third, spectral imaging was used to determine the distances from the distal end of the flap to the zones of stasis and zones of normal flow. Finally, animals were sacrificed and the flaps removed and photographed. Digital images of the flaps were used to determine the percent of black, discolored (gray/red), and normal tissue.

Method of Data Analysis:
a. Sample size: N = 152 rats
b. Duration of study: 3 months
c. Statistical methods: One-way analysis of variance (ANOVA)
d. Subjective analysis: No

Results: The orthogonal polarization spectral imaging results showed a significant decrease in the zone of necrosis (no vessels present) in rats treated with sildenafil one and three days after surgery. We also found a significant decrease in the total affected area, which consists of the zones of necrosis and stasis, in treated rats three days after surgery. Digital photography analysis also showed a significant decrease in the area of necrosis (black tissue) at three days. These findings support the results obtained using spectral imaging. No significant differences were found between sildenafil treated and control animals five and seven days after surgery.

Conclusion: These results demonstrated that 45 mg/kg/d IP of sildenafil may have a beneficial effect on skin survivability at the early stages of wound healing. Orthogonal polarization spectral imaging has been proven to predict areas of necrosis more accurately than photographic analysis. This method allowed us to observe differences between sildenafil treated and control rats as early as 24 hours and as late as three days after surgery. Although we did not see any benefit when animals were treated with 45 mg/kg/d IP five and seven days after surgery, we believe that changes in the treatment regimen may enhance long-term flap survivability.
References
Olivier WA, Hazen A, Levine JP, et al: Reliable assessment of skin flap viability using orthogonal polarization imaging. Plast Reconstr Surg 112:547, 2003
Sarifakioglu N, Gokrem S, Ates L, et al: The influence of sildenafil on random skin flap survival in rats: An experimental study. Br J Plast Surg 57:769, 2004

Funding Source: United States Army

Oral Abstract Session 4, AAOMS • 2005

2005 Straumann Resident Scientific Award Winner
Histomorphometric Assessment of Bony Healing of Rabbit Critical-Sized Calvarial Defects With Hyperbaric Oxygen Therapy
Ahmed M. Jan, DDS, The Hospital for Sick Children, S-525, 555 University Ave, Toronto, Ontario M5G 1X8, Canada (Sàndor GKB; Evans AW; Mhawi A; Peel S; Clokie CML)

Statement of the Problem: A critical-sized defect is the smallest osseous wound that will not heal spontaneously with bony union over the lifetime of an animal. Practically, the defect should not heal spontaneously during the experimental period. Hyperbaric oxygen therapy (HBO) is used to improve the healing of a variety of problem wounds. This study evaluated the effect of HBO on the healing of critical-sized defects in the rabbit calvarial model and whether HBO administration can result in the healing of a larger "supracritical-sized" defect.

Materials and Methods: Twenty New Zealand rabbits were divided into 2 groups of 10 animals. Full thickness calvarial defects were created in their parietal bones bilaterally. Defects were critical-sized (15 mm) on one side and supracritical (18 mm) on the contralateral side. Group 1 received a 90 minute HBO therapy session at 2.4 ATA daily for 20 consecutive days. Group 2 served as a control group receiving only room air. Five animals in each group were sacrificed at 6 and 12 weeks postoperatively.
Method of Data Analysis: Data analysis included qualitative assessment of the calvarial specimens as well as quantitative histomorphometric analysis to compute the amount of regenerated bone within the defects. Hematoxylin and eosin stained sections were sliced and captured by a digital camera (RT Color; Diagnostic Instruments Inc, Sterling Heights, MI). A blinded investigator examined merged images and analyzed them for the quantity of new bone regeneration. Statistical significance was established with a p value < .05.

Results: The HBO group showed bony union and demonstrated more bone formation than the control group at 6 weeks (p < .001). The control group did not show bony union in either defect by 12 weeks. There was no significant difference in the amount of new bone formed in the HBO group at 6 weeks compared with 12 weeks (p = .309). However, the bone at 6 weeks was more of a woven character, while at 12 weeks it was more lamellated and more mature. Again, in the HBO group both the critical-sized and the supracritical-sized defects healed equally (p = .520).

Conclusion: HBO therapy facilitated the bony healing of both critical-sized and supracritical-sized rabbit calvarial defects. Since bony healing was achieved early, it is reasonable to assume that an even larger than 18 mm defect (if it were technically feasible) might have healed within the 12 week period of study aided by HBO. Adjunctive HBO, based on histomorphometrics, doubles the amount of new bone formed within both the critical-sized and the supracritical-sized defects. It allowed an increase in the critical size by more than 20%.

References
Moghadam HG, Sàndor GK, Holmes HI, et al: Histomorphometric evaluation of bone regeneration using allogeneic and alloplastic bone substitutes.
J Oral Maxillofac Surg 62:202, 2004
Muhonen A, Haaparanta M, Gronroos T, et al: Osteoblastic activity and neoangiogenesis in distracted bone of irradiated rabbit mandible with or without hyperbaric oxygen treatment. Int J Oral Maxillofac Surg 33:173, 2004

Craniofacial Growth Following Cytokine Therapy in Craniosynostotic Rabbits
Harry Papodopoulus, DDS, MD, University of Pittsburgh School of Dental Medicine, 3501 Terrace Street, Pittsburgh, PA 15261 (Ho L; Shand J; Moursi AM; Burrows AM; Caccamese J; Costello BJ; Morrison M; Cooper GM; Barbano T; Losken HW; Opperman LM; Siegel MI; Mooney MP)

Statement of the Problem: Craniosynostosis affects 300-500/1,000,000 births. It has been suggested that an overexpression of Tgf-beta 2 leads to calvarial hyperostosis and suture fusion in craniosynostotic individuals. This study was designed to test the hypothesis that neutralizing antibodies to Tgf-beta 2 may block its activity in craniosynostotic rabbits, preventing coronal suture fusion in affected individuals and allowing unrestricted craniofacial growth.

Materials and Methods: Twenty-eight New Zealand White rabbits with bilateral delayed-onset coronal suture synostosis had radiopaque dental amalgam markers placed on either side of the coronal sutures at 10 days of age (synostosis occurs at approximately 42 days of age). At 25 days, the rabbits were randomly assigned to three groups: 1) sham control rabbits (n = 10); 2) rabbits with non-specific, control IgG antibody (100 ug/suture) delivered in a slow release collagen vehicle (n = 9); and 3) rabbits with Tgf-beta 2 neutralizing antibody (100 ug/suture) delivered in slow release collagen (n = 9). The collagen vehicle in groups two and three was injected subperiosteally above the coronal suture. Longitudinal lateral and dorsoventral head radiographs and somatic growth data were collected from each animal at 10, 25, 42, and 84 days of age.
Method of Data Analysis: Significant mean differences were assessed using a one-way analysis of variance.

Results: Radiographic analysis showed significantly greater (p < 0.05) coronal suture marker separation, overall craniofacial length, cranial vault length and height, and cranial base length, and more lordotic cranial base angles, in rabbits treated with anti-Tgf-beta-2 antibody than in the other groups at 42 and 84 days of age.

Conclusion: These data support our initial hypothesis that interference with Tgf-beta-2 production and/or function may rescue prematurely fusing coronal sutures and facilitate craniofacial growth in this rabbit model. These findings also suggest that this cytokine therapy may be clinically significant in infants with insidious or progressive postgestational craniosynostosis.

References
Poisson E, Sciote JJ, Koepsel R, et al: Transforming growth factor-beta isoform expression in the perisutural tissue of craniosynostotic rabbits. Cleft Palate Craniofac J 41:392, 2004

work_4aiu2awdk5e4zb5nndhgvtzf7e ----

COMMUNITY EYE HEALTH JOURNAL | VOLUME 28 ISSUE 92 | 2015

Although traditionally the features of DR have been identified through direct ophthalmoscopy or slit lamp biomicroscopy, digital photography is more sensitive than direct ophthalmoscopy and is comparable to slit lamp examination by a trained observer.1 A digital fundus camera has the following advantages:

• Fast and convenient imaging of the retina by a photographer
• Storage, archiving, and transmission of the images
• Use of the images for quality assurance (that is, having them checked by another person) to ensure that no cases of retinopathy go undetected
• Ability to enhance images: magnification, red-free, enhanced contrast, etc.

When using the Scottish Grading Protocol2, just one retinal photograph is taken, which is centred on the fovea.
The field must extend at least 2 disc diameters (DD) temporal to the fovea and 1 DD nasal to the disc for adequate visualisation. Features of retinopathy The signs of diabetic retinopathy are covered on page 65 and on pages 70–71. For DR screening, certain signs are more important than others. Blot haemorrhages should be distinguished from microaneurysms, not just by their darker appearance but also by their size – the larger diameter of a blot haemorrhage should be equal in size to, or larger than, the diameter of the widest vein exiting from the optic disc. Chronic retinal oedema results in precipitation of yellow waxy deposits of lipid and protein known as exudates. When blot haemorrhages and exudates are visible within the macular area, they are considered markers for macular oedema. Signs of retinal ischaemia include blot haemorrhages, venous beading and intra-retinal microvascular anomalies (IRMA). Venous beading is a subtle change in the calibre (thickness) of the second and third order retinal veins which gives them an irregular contour resembling a string of beads. IRMA look like new vessels; however, they occur within areas of capillary occlusion and do not form vascular loops. Unusual vessels with loops should therefore be treated as new vessels (NV). Grading of DR Most grading protocols are based on classification systems for DR which track the appearance and progression of disease (for example, the Early Treatment Diabetic Retinopathy Study, or ETDRS, classification). Location (distance from fovea) is important when grading maculopathy. Visual acuity can be used as a marker for macular oedema, although it may be affected by other pathology such as cataracts or refractive error. The Scottish Grading Protocol grades the severity of retinopathy from R0 to R4 and maculopathy as a separate grade from M0 to M2 (Table 1). R6 is a stand-alone grade for poor quality images which cannot be graded.
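The grade-to-outcome mapping summarised above (and laid out in Table 1) is essentially a small decision rule. The sketch below is purely illustrative: it assumes simplified boolean findings, ignores image quality (the R6 grade), and is in no way a clinical implementation of the Scottish protocol:

```python
# Illustrative sketch of the Scottish grade-to-outcome logic (cf. Table 1).
# Simplified boolean findings; not a clinical implementation.

def retinopathy_grade(blot_counts, irma=False, venous_beading=False,
                      proliferative=False, mild_signs=False):
    """blot_counts: (superior, inferior) blot haemorrhage counts per hemifield."""
    if proliferative:                       # NVD, NVE, vitreous haemorrhage...
        return "R4"
    if all(c > 4 for c in blot_counts) or irma or venous_beading:
        return "R3"                         # severe / pre-proliferative DR
    if any(c > 4 for c in blot_counts):
        return "R2"                         # moderate background DR
    if mild_signs or any(c > 0 for c in blot_counts):
        return "R1"                         # mild background DR
    return "R0"

def outcome(r_grade, m_grade):
    """Combine retinopathy and maculopathy grades into a screening outcome."""
    if r_grade in ("R3", "R4") or m_grade == "M2":
        return "refer"
    if r_grade == "R2" or m_grade == "M1":
        return "rescreen in 6 months"
    return "rescreen in 12 months"

print(outcome(retinopathy_grade((6, 1)), "M0"))  # rescreen in 6 months
```

Note that the referral outcome is driven by the worse of the two axes: a quiet retinopathy grade with M2 maculopathy still triggers referral.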
If patients have technical failures at photography they must undergo further screening by slit lamp biomicroscopy. GRADING DR Grading diabetic retinopathy (DR) using the Scottish grading protocol Sonia Zachariah Associate Specialist: Diabetic Retinopathy Screening Service, Glasgow, Scotland. William Wykes Consultant Ophthalmologist and Clinical Director: Diabetic Retinopathy Screening Service, Glasgow, Scotland. David Yorston Consultant Ophthalmologist: Tennent Institute of Ophthalmology, Gartnavel Hospital, Glasgow, UK. dbyorston@btinternet.com Figure 1. R3M2. The photograph shows multiple blot haemorrhages, corresponding to the R3 grade. In addition there are exudates within 1 disc diameter of the fovea, so the complete grade is R3M2. (Photo: David Yorston) Figure 2. R3. There are blot haemorrhages and cotton wool spots. In addition there is a venous loop inferotemporal to the fovea. These features indicate severe ischaemia, corresponding to R3. There are no exudates visible. (Photo: David Yorston)

Table 1. The different grades of diabetic retinopathy (DR) in the Scottish Grading Protocol: features and outcomes
R0: No disease. Outcome: rescreen in 12 months.
R1: Mild background DR, including microaneurysms, flame haemorrhages, exudates, up to 4 blot haemorrhages in one or both hemifields, and/or cotton wool spots. Outcome: rescreen in 12 months.
R2: Moderate background DR; more than 4 blot haemorrhages in one hemifield. Outcome: rescreen in 6 months.
R3: Severe non-proliferative or pre-proliferative DR; more than 4 blot haemorrhages in both hemifields, intra-retinal microvascular anomalies (IRMA), venous beading. Outcome: refer.
R4: Proliferative retinopathy; NVD, NVE, vitreous haemorrhage, retinal detachment. Outcome: refer.
M0: No macular findings. Outcome: rescreen in 12 months.
M1: Hard exudates within 1–2 disc diameters of the fovea. Outcome: rescreen in 6 months.
M2: Blot haemorrhage or hard exudates within 1 disc diameter of the fovea. Outcome: refer.
BDR = background diabetic retinopathy. Hemifield = field of image divided by an imaginary line running across the disc and fovea.

Figure 3a. New vessels at the disc. There are new vessels at the optic disc, indicating high risk proliferative retinopathy. Note that there are few other signs of retinopathy, and you might miss the disc vessels if you are not looking for them. (Photo: David Yorston) Figure 3b. New vessels at the disc (red-free). The red-free version of this photo shows the new vessels at the optic disc more clearly. Altering the images, e.g. by using red-free, is a valuable tool for detecting retinopathy. (Photo: David Yorston) When grading, the graders first assess the quality of an image on the basis of the clarity of the nerve fibre layer. Images considered of good enough quality are then inspected systematically, starting with the optic disc, then the macula and then all other areas. Using the red-free filter is mandatory as it is essential to highlight subtle features such as microaneurysms and IRMA. Other tools such as zoom and contrast enhancement are used to improve visualisation. A ruler is used to measure the size of blot haemorrhages and to measure the distance of exudates and blot haemorrhages from the fovea (in disc diameters) in order to set the maculopathy grade. Table 1 shows the different grades and their outcomes [3]. Conclusion Screening has proved to be a vital tool in the fight against DR-related visual loss. An important measure of the successful implementation of screening is the reduced incidence of blindness due to sight-threatening diabetic retinopathy [4]. References 1 Harding S, Greenwood R, Aldington S, Gibson J, Owens D, Taylor R, et al. Grading and disease management in national screening for diabetic retinopathy in England and Wales. Diabet Med 2003;20:965-71. 2 http://www.ndrs-wp.scot.nhs.uk 3 The NHS Diabetic Eye Screening Programme Revised Grading Definitions, 2012.
Available from: http://www.diabeticeye.screening.nhs.uk/gradingcriteria 4 Arun CS, Al-Bermani A, Stannard K, Taylor R. Long-term impact of retinal screening on significant diabetes-related visual impairment in the working age population. Diabet Med 2009;26:489-92. © The author/s and Community Eye Health Journal 2015. This is an Open Access article distributed under the Creative Commons Attribution Non-Commercial License. work_4bhwp5pgdjdkrenykwwoasscwm ----
http://europepmc.org/abstract/MED/ work_4dc7byfja5aklj7lbf46rvlg5u ---- Separation of the principal HDL subclasses by iodixanol ultracentrifugation DOI: 10.1194/jlr.D037432 Corpus ID: 9801828 @article{Harman2013SeparationOT, title={Separation of the principal HDL subclasses by iodixanol ultracentrifugation}, author={N. Harman and B. Griffin and I. Davies}, journal={Journal of Lipid Research}, year={2013}, volume={54}, pages={2273-2281} } N. Harman, B. Griffin, I. Davies. Published 2013, Journal of Lipid Research. Detection of HDL subclasses for cardiovascular risk assessment has been limited by the time-consuming nature of current techniques. We have developed a time-saving and reliable separation of the principal HDL subclasses employing iodixanol density gradient ultracentrifugation (IxDGUC) combined with digital photography. HDL subclasses were separated in 2.5 h from prestained plasma on a three-step iodixanol gradient. HDL subclass profiles were generated by digital photography and gel scan software.
Plasma…
work_4dftvbqy6zbjlef24iycwgn354 ---- The Picture to Print Value Chain Reiner Fageth, CeWe Color AG, Oldenburg, Germany Philipp Sandhaus, OFFIS-Institute for Information Technology, Oldenburg, Germany Abstract This paper describes the changes in the value chain from taking the picture to displaying it. In the days of analogue imaging, there was only one option for displaying images after they had been taken: developing the film and prints. Nowadays the consumer has various display possibilities that do not necessarily include tangible products. Possible integrations and real data on consumers' behavior while ordering tangible products are presented and analyzed. Introduction Digital photography is maturing with respect to picture taking. More and more images are taken using digital still cameras and mobile phones.
Even digital camcorders offer adequate image quality for stills. Conversely, the act of transferring these pictures into tangible products still does not, and might never, follow the growth rate of image taking. This paper describes what compelling offers should look like to address that challenge. Starting from product presentation on the Internet, delivering an integrated software application to order a huge number of different products, and the tools required to support the ordering and creation process, the whole value chain from picture to print is analyzed in this paper. Change in the value chain Obviously there was a huge change in the value chain caused by the switch from analogue picture taking to digital. In analogue, there was a clearly defined process starting with purchasing a camera, then buying several rolls of film per year, developing these films and ordering prints, either at retail or in wholesale labs partnering with retailers. In those times printing even exceeded image capture due to re-ordering [1]. The end of the value chain has now changed from printing images to displaying images, where tangible products are only one of many possibilities. The main possibilities are illustrated in Figure 1: Figure 1: Consumers' display choices It is obvious that many more manufacturers are offering services for displaying and storing images and therefore are giving the consumer much more choice for viewing images than ever before. It is now more complicated and, marketing-wise, more expensive to address consumers, as a pre-defined starting point for displaying no longer exists. In addition, the consumers' preferences are largely influenced by the equipment and/or methods they use to display images. The industry is faced with a marketing dilemma and the consumer is confronted with too much choice [2].
Consumers' challenges The consumer still has a lot of challenges to resolve on his/her own while working with digital images:
• Archiving the images using tools designed to assist with speedy retrieval once the images have been stored
• Long-term storage of images
• Selection of the most relevant/best ones for archiving and display
• Communicating and telling compelling stories with the stored images
• Interaction between all available hardware (computer(s), online solutions, TV screens, digital frames, mobile devices, …)
There are several suppliers who offer perfect solutions for one of the challenges mentioned above. There are very few who address two or more successfully. The dominance of market leaders such as Kodak and Fuji in the former analogue value chain is gone; newer relevant players are addressing special target groups. Looking at the display choices and the related variety of technologies, it becomes quite obvious why they do this. There are too many different skills required to control all the manufacturing challenges in digital display technologies as competently as, e.g., Kodak did in analogue photography. In the following sections we describe an existing solution that supports the user in generating a huge variety of tangible products while helping with image finding and selection, automated annotation for future usage and, mainly, artistic design. Leveraging images stored online (both private and public) and textual information such as that provided by Wikipedia is shown as an integrated part of the ordering software described below. This solution is, of course, "only" addressing the print and online portion as described in Figure 1. Additionally, user behavior is described by data retrieved from order files, without the means to match these data with the users' profile information.
70 © 2009 Society for Imaging Science and Technology Evaluated data from consumer behavior In [3] an evaluation based on millions of images in CeWe Color's production has been presented. The analysis helps to define what can typically be considered important image content for those ordering tangible products and what supporting tasks might help the consumer with the selection and design process. The three portrait categories in Table 1 have not been subdivided into indoor or outdoor motives.

Table 1: Typical scene types in CEWE PHOTO BOOKS (n = 10,000 images) [3]
Group portraits: 36.3%
Children: 16.9%
Single portraits: 15.1%
Landscape: 5.5%
Architecture: 5.4%
Urban areas: 3.4%
Animals: 3.3%
Plants: 3.1%
Sports: 1.9%
Indoor: 1.5%
Food: 0.8%
Night images: 0.3%
Others: 6.6%

We also know from our CRM data that more than 63% of the consumers producing a CEWE PHOTO BOOK have previously ordered prints from digital files. The relevance of images taken by mobile phones is still low, but rising. Slightly less than three percent of the images printed are generated by mobile phones, as shown in Figure 2. Images containing geo data coded in the Exif header are even rarer; 0.6% of the analyzed images had valid information on the position where the image was taken. Figure 2: Proportion of images generated by mobile phones With our customers in central Europe, pick-up at retail after ordering via the Internet is the most successful path. Nearly 80% of our digital orders were processed in this manner in 2008. Integrated software solution for ordering We prefer desktop applications for more complex products. The consumers can take their time, and the number of sessions used to generate calendars and/or photo books is theoretically unlimited. User behavior generating CEWE PHOTO BOOKS The average number of sessions used to generate a photo book by our customers is 4.86; the median is 3. The real session distribution is illustrated in Figure 3.
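Session statistics like these are typically derived from timestamped edit events in the order or log files by splitting the event stream wherever the idle gap reaches some threshold. The sketch below assumes a 10-minute threshold, echoing the pause rule used for the time measurements reported with Figure 3; the event values are invented and the authors' exact rule is not specified:

```python
# Illustrative sessionization: split timestamped edit events into sessions
# wherever the idle gap reaches the threshold; report per-session durations.

GAP = 10 * 60  # assumed session-break threshold, in seconds

def sessions(timestamps):
    """timestamps: sorted event times in seconds; returns (start, end) pairs."""
    result = []
    start = prev = timestamps[0]
    for t in timestamps[1:]:
        if t - prev >= GAP:        # a long pause ends the current session
            result.append((start, prev))
            start = t
        prev = t
    result.append((start, prev))
    return result

events = [0, 120, 300, 5000, 5200]           # two bursts, one long pause
s = sessions(events)
print(len(s), [end - begin for begin, end in s])  # 2 [300, 200]
```

Averages and medians of session counts and of total active time then follow directly from the per-book session lists.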
The average time spent in generating the product is 2.81 hours; the median is 1.89 hours. The diagram in Figure 3 shows the time spent, not including pauses of less than 10 minutes. Figure 3: Number of sessions per CEWE Photo Book and time used to generate it While in the US the challenge is to get consumers to complete compilation and then place orders, in Europe the challenge is the conversion from downloading and installing the application to order placement. But the European ratio is significantly better than the non-completion rate of approximately 30% in the US for online photo book applications, as published by [4]. Integration of images stored online Of course, there is a huge benefit in generating photo books at the places where the consumers' images are stored. To overcome that disadvantage, we offer the possibility to integrate private and public images from online communities, as illustrated in Figure 4. Nearly 10% of all users have tried at least once to access these communities or external textual information sources. Figure 4: Integration of online communities and Wikipedia The images stored online are downloaded to the computer running the application and can therefore become involved in the process of analyzing the relevance of images and automatically selecting the best ones, as described in [5] and [6]. In addition, there is the possibility of adding text from interesting data sources such as Wikipedia. 30% of the pages included in CEWE Photo Books contain at least one text box. In the case of the online community locr (see Figure 4), the images are geo-coded (see www.locr.com). It is also possible to search for pictures that have been taken at a specific location and later stored online by third parties. The images can also be uploaded from the application to this community for future private or public use.
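A minimal illustration of such automatic best-image selection is sketched below. The quality score and the 60-second near-duplicate window are stand-in assumptions; the cited papers [5] and [6] describe far richer selection criteria:

```python
# Illustrative image selection: keep the k best-scoring images while
# skipping near-duplicates shot within a short time window of a chosen one.
# Scoring and the 60 s window are assumptions, not the criteria of [5], [6].

def select_best(images, k, min_gap=60):
    """images: list of (timestamp_s, quality_score); returns chosen subset."""
    chosen = []
    for ts, score in sorted(images, key=lambda i: -i[1]):  # best first
        if all(abs(ts - c[0]) >= min_gap for c in chosen):
            chosen.append((ts, score))
        if len(chosen) == k:
            break
    return sorted(chosen)  # back in shooting order

shots = [(0, 0.9), (10, 0.8), (200, 0.7), (400, 0.95)]
print(select_best(shots, k=2))  # [(0, 0.9), (400, 0.95)]
```

The temporal-gap constraint is one simple way to spread selected images across an event instead of picking several near-identical frames.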
Results of consumers' usage of the assistant functionality The use of this feature results in books that are richer in content and contain a larger number of pages. Figure 5 shows the behavior of the consumers after using the CEWE Photo Book Assistant. The assistant is used by nearly 60% of all our consumers. It aids the user in selecting the best and most relevant images and in designing the CEWE Photo Book. The functionality is described in [7] and [8]. Most consumers add pages to the number suggested by the software. This suggestion is indicated with 0 in Figure 5, where the number of images finally ordered by the consumer minus the number of images selected by the assistant is displayed. Figure 5: Number of pages ordered relative to the assistant's suggestion The assistant and its default settings also support users in generating more compelling results. One of the main advantages of the assistant's default settings is that they prevent the consumer from placing too many images on a single page. Overcrowded pages result in photo books that are boring for third parties to look at (similar to slide shows which contained too many slides in the days of analogue photography). Figure 6 shows the distribution of the number of images placed by the consumer per page. Figure 6: Number of images per page This knowledge, used in supporting consumers in image selection and artistic tasks, can be applied to nearly all products. Therefore an integrated software solution offering these features is the obvious step forward. One can even say that the developments for photo books are now supporting "older" digital products, and even prints are becoming more fun to order. Figure 7: Start screen of the integrated software solution Conclusions It is more and more complicated and expensive to induce the consumer to order tangible products from his/her digital files in order to display and archive them.
Once a potential customer has been attracted, it is essential to offer compelling solutions to bind the consumer to the brand and also to cross-sell products they were initially not considering. Display in the value chain no longer requires a tangible product. In the main, computers with online or desktop solutions offer diverse possibilities in addition to the capturing devices themselves. If we want to sell tangible products in the future, it is essential to analyze consumers' behavior and offer solutions tailored to them. The entire process, from looking for, or being interested in, a product in a retail store or, increasingly, via the Internet, through to after-sales with customer care (CC) and CRM, determines whether the effort and money paid for attracting the consumer via search engine marketing (SEM) and optimization (SEO) pay off. This process is illustrated in Figure 8. Figure 8: Product is more than the ordering software The goal is to offer applications that relieve consumers from several tedious and time-consuming tasks, such as sorting and selecting their photos [9]. If the applications are nearly as seamless as the simple display of non-tangible products, consumers will jump at the offer, as they become more and more aware of the challenges of finding and archiving images on a purely digital basis. References [1] PMA Market Research 2006 [2] Wulf Schmidt-Sacht, Printing and Non-Printing Business, PMA Orlando, 2005 [3] Dietmar Wueller, Reiner Fageth, Statistic Analysis of Millions of Digital Photos, Digital Photography IV, Proceedings of SPIE, Volume 6817, Electronic Imaging 2008, San Jose, CA [4] 2008 PMA Camera/Camcorder, Digital Imaging Survey [5] R.
Fageth, S. Boll, Image selection – No longer a dilemma?, Color Imaging XII, Proceedings of SPIE, San Jose, CA 2008 [6] Reiner Fageth, Wulf Schmidt-Sacht, Wholesale Finishing Fulfils Consumers' Expectations, International Symposium on Technologies for Digital Fulfilment, Las Vegas, NV, 2007 [7] Susanne Boll, Philipp Sandhaus, Ansgar Scherp, Utz Westermann, Semantics, content, and structure of many for the creation of personal photo albums, ACM Multimedia 2007, Augsburg [8] Reiner Fageth, Wulf Schmidt-Sacht, The perfect photo book – hints for the image selection process, Color Imaging XII, Proceedings of SPIE, San Jose, CA 2007 [9] Philipp Sandhaus, Sabine Thieme, Susanne Boll, Processes of Photo Book Production, Multimedia Systems, No. 14, Vol. 6, 2008, Springer Verlag, Berlin Author Biography Reiner Fageth received his diploma in Electronic Engineering from the University of Applied Science in Heilbronn, Germany (1990) and his Ph.D. in 1994 from the University of Northumbria at Newcastle, UK, in the field of industrial image processing. Up to 1998 he worked with the Steinbeis Transferzentrum BMS on designing inspection systems for process automation, mainly in the injection molding and bottling industries. He joined CeWe Color in 1998 and has been serving as CTO since 2007. Philipp Sandhaus received his diploma degree in computer science at the University of Oldenburg in 2006. Since then he has been working as a scientific assistant at the R&D Institute OFFIS on a joint project with CeWe Color to find new and innovative ways to enhance digital photo services. Currently he is also working on his PhD thesis on the semantic understanding of personal media from media usage. International Symposium on Technologies for Digital Photo Fulfillment 73 work_4ecbtr5rojgfjeveu7mweds7sy ---- Provided by the author(s) and University College Dublin Library in accordance with publisher policies. Please cite the published version when available.
Title: A Systematic Approach for Large-scale, Rapid, Dilapidation Surveys of Historic, Masonry Buildings
Author(s): Clarke, Julie; Laefer, Debra F.
Publication date: 2012-05-16
Publication information: International Journal of Architectural Heritage: Conservation, Analysis, and Restoration, 8 (2): 290-310
Publisher: Taylor and Francis
Link to online version: http://www.tandfonline.com/doi/abs/10.1080/15583058.2012.692849
Item record/more information: http://hdl.handle.net/10197/4121
Publisher's statement: This is an electronic version of a forthcoming article to be published in International Journal of Architectural Heritage: Conservation, Analysis, and Restoration, available online at: http://www.tandfonline.com/doi/abs/10.1080/15583058.2012.692849
Publisher's version (DOI): 10.1080/15583058.2012.692849
Downloaded: 2021-04-06T01:16:29Z
© Some rights reserved. For more information, please see the item record link above.

A Systematic Approach for Large-scale, Rapid, Dilapidation Surveys of Historic, Masonry Buildings Julie Clarke School of Civil, Structural and Environmental Engineering, University College Dublin. University College Dublin, Newstead, Room G67, Belfield, Dublin 4, Ireland. 353-1-7163232, julie.clarke.1@ucdconnect.ie Dr. Debra F. Laefer School of Civil, Structural and Environmental Engineering, University College Dublin. University College Dublin, Newstead, Room G25, Belfield, Dublin 4, Ireland. 353-1-7163226, debra.laefer@ucd.ie A SYSTEMATIC APPROACH FOR LARGE-SCALE, RAPID, DILAPIDATION SURVEYS OF HISTORIC MASONRY BUILDINGS Julie A. Clarke and Debra F.
Laefera*1 aSchool of Architecture, Landscape and Civil Engineering, University College Dublin, Ireland Dilapidation surveys may require extensive resources to achieve detailed ac- counts of damage for intervention purposes or may involve only limited re- sources but be restricted to an extremely rapid assessment (e.g. post-earthquake, life-safety inspection). Neither provides a holistic, cost-effective approach for evaluating the general health of a large number of structures, as is needed for urban planning, historic designation determination, and risk assessment due to adjacent works. To overcome this limitation, index images are introduced for a systematic approach for rapidly conducting large-scale, dilapidation surveys of historic masonry buildings. This method, the University College Dublin Inspec- tion Method (UCDIM), is tested against both a detailed inspection and an alter- native rapid approach to determine accuracy and resource intensiveness through its application by three inspectors of various levels of experience to six buildings in the city centre of Dublin, Ireland. The UCDIM provided a damage ranking of ρ = 0.94 for all inspectors, regardless of experience, except when painted or rendered façades were included. The UCDIM, when compared to de- tailed inspection provided a high level of reliability, cost savings of approxi- *1 Debra Laefer. University College Dublin, Newstead, Room G25, Belfield, Dublin 4, Ireland. 353-1-7163226, Email: debra.laefer@ucd.ie 2 mately 90% and several months of time savings since interior access was not required. KEY WORDS: historic; masonry; buildings; dilapidation; survey; condition; assess- ment; urban; planning; risk. 1. INTRODUCTION Traditionally, a dilapidation survey refers to the evaluation of a building to assess any ex- isting defects. A variety of terms are used (often interchangeably) to describe such inspections including condition assessment, dilapidation survey, and structural survey, as well as others. 
Problematically, existing methods for assessing building dilapidation vary greatly in purpose and scope, as well as level of detail. Selection of an appropriate system for historic buildings may have national, as well as international implications. For example, European policies such as the Granada Convention for the Protection of the Architectural Heritage of Europe (COE, 1985) out- line numerous obligations requiring European states to protect their architectural heritage. Under Article 10 of this Convention, the signed parties must undertake policies which “include the pro- tection of the architectural heritage as an essential town and country planning objective…”. In Ireland, the Planning and Development Act 2000 (OAG, 2000) outlines a system for listing structures of special interest, stating that their owners are legally obliged to protect them. Gener- ally, these provisions are interpreted as generating and maintaining heritage inventories, which necessitate large resource commitments. Additionally, the output of a dilapidation survey can form a crucial component in building legislation and, therefore, must provide an accurate and re- liable representation of a building at the time of inspection. Furthermore, in the case of adjacent construction works, buildings are often vulnerable to a variety of damage mechanisms. As a re- 3 sult, a building’s condition prior to adjacent construction commonly forms the basis for deciding pre-construction mitigation efforts and post-construction litigation claims. For large subsurface projects, such as tunnelling, hundreds if not thousands of buildings may need inspection. Unfor- tunately, there has yet to be a widely adopted approach, as will be explained. 2. BACKGROUND In general, building survey methods vary in purpose, scope, and level of detail. 
As will be subsequently described, procedures tend to consist either of a highly detailed investigation targeted for individual structures or of a rapid nature suitable for promptly considering the pre- or post-disaster life-safety level for a large number of buildings. Those methods consisting of a detailed exploration generally require a large number of inspection hours to fully document a building's condition. For detailed inspection of residential properties, Bowyer (1971) and Sheeley (1985) recommended a preliminary exploratory inspection to determine the time involvement and extent of specialist services required. For the main inspection, a sequential methodology was advised, whereby the property was divided into sections (e.g. roof, walls, floors, staircases, internal finishes). This type of survey was intended to be undertaken prior to the purchase of a residential dwelling to inform the potential purchaser of the property's condition and likely repair costs. Also for residential properties, Staveley and Glover (1983) suggested a sequential inspection but following a definitive procedure, e.g. commencing within the roof space, working downwards through the property, and leaving external works, installations, and drains until last. When faced with a large inventory of buildings, including residential and commercial properties, the American Society of Civil Engineers' Guideline for Structural Condition Assessment of Existing Buildings (ASCE, 2000) proposed the use of screening criteria. These included the building's original construction date, cultural importance, and current occupancy level, and are used to assist in the prioritisation of buildings for preliminary inspection, thereby eliminating unnecessary inspection of certain buildings.
The preliminary inspection would consist of an initial analysis of a building's structural adequacy, including a review of available documents, a site inspection, and a decision as to the need for a more detailed analysis. Upon deciding to conduct a detailed assessment, a thorough inspection of the building's structural system, its features, and its need for rehabilitation would then be conducted (ASCE, 2000).

For large-scale and multiple detailed inspections of residential buildings, the Dutch standard NEN 2767 (Straub, 2009) outlined a method to determine a structure's condition according to individual building elements. NEN 2767 is used for maintenance planning and asset management purposes. The method uses a six-point scale, ranging from excellent (1) to very bad (6), to categorise defects. These are subsequently weighted according to their structural importance, intensity, and extent, producing an overall condition index for the building. Strictly for commercial office buildings, Brandt and Rasmussen (2002) proposed the Tool for Selecting Office Building Upgrading Solutions (TOBUS), which consisted of a broad checklist of 70 categories, each subdivided into 1-12 object types comprising both structural elements and building services. The method's intent was to assess degradation and aid in the selection of an upgrade solution. TOBUS makes use of index images. These were visual examples which aid in the standardized selection of one of four degradation codes for each object type, as well as four corresponding work codes that indicate the extent of repair required. Index images are commonly employed within civil engineering and related fields to provide a visual standard for guidance, e.g. Cho et al. (2006), Grünthal et al. (1998).

In contrast to detailed inspections, post-disaster surveying methods typically involve the application of rapid evaluation techniques solely relating to structural integrity.
A modified approach to the Applied Technology Council's ATC-20 Forms (ATC, 2005), which are generally used for post-earthquake response, was described by Peraza (2006) for use around the World Trade Center (WTC) site following the September 11th 2001 terrorist attacks. The ATC-20 Evaluation Safety Assessment Forms consider a building according to structural, non-structural, and geotechnical hazards, which are used to assess its condition as a percentage of the overall estimated building damage.

Remote sensing technologies have also been employed to enable rapid, post-disaster assessment. In a novel post-disaster survey method, Corbane et al. (2011) used remote sensing imagery, which provided lateral views of buildings, to assign damage grades based on the European Macroseismic Scale EMS 1998 (Grünthal et al., 1998). According to Corbane et al. (2011), extreme levels of building damage were reliably detected with this approach, but lesser damage levels were not. Pre-disaster evaluation methods also exist, such as that proposed by the United States' Federal Emergency Management Agency (FEMA, 2002) to determine earthquake risk based on seismicity levels, the building's geometry, soil type, and occupancy levels. From this, a damage score is generated indicating whether or not a detailed evaluation is required. There are also more rapid techniques, such as those described by Martinelli et al. (2008) and Roca et al. (2006), in which typological data such as building location, height, age, and masonry configuration were used to assess seismic vulnerability, as outlined as part of EMS 98 (Grünthal et al., 1998). Since typological information is commonly available from general building censuses, the need for field surveys is minimized, thereby enabling results to be obtained rapidly and at a reduced cost. Similarly, Ho et al.
(2011) developed a rapid building assessment method as part of the decision-making process for urban renewal in Hong Kong based on 21 weighted building factors. The method considers the building's existing condition and management criteria to determine the current building dilapidation level, as well as future vulnerability to dilapidation.

One of the difficulties of using the above methods is that they were generally devised for an extremely large range of building types. In contrast, the Critical Element Factor (CEF) System was specifically devised for historic buildings to offer a rapid assessment of the condition and use of a building by individually assessing its roof, walls, windows and doors, ancillary items, associated boundary items, and overall state (Table 1) (The Handley Partnership, North Yorkshire, UK). These are considered along with the building's occupancy density. The CEF System is a registered trademark and was developed by The Handley Partnership, a surveying and structural engineering practice specializing in the assessment of large groups of historic buildings, as well as other structures. This method is intended to be conducted by a single inspector across an entire building stock, thereby enabling targeted resource planning by governmental authorities; however, this is not always feasible due to time and budget constraints. In a similarly rapid and quantitative approach, Burland et al. (1977) proposed detailing damage classification according to visible cracking, thereby allowing an objective and consistent building assessment to be made, although on an extremely limited set of damage indicators (Table 2). Despite this limitation, Burland's approach has been widely adopted in the engineering community for subsurface construction risk assessment [(Aye et al., 2006), (Laefer, 2001), (Torp-Petersen and Black, 2001)].

Table 1. CEF System (The Handley Partnership, North Yorkshire, UK)

Historic buildings commonly require non-contact documentation procedures. This can be achieved through the employment of digital photography, photogrammetry, or terrestrial laser scanning (TLS). In each case, the general geometry of the building can be effectively documented. Whilst photogrammetry has been a popular choice for general documentation (Yilmaz et al., 2007), it is increasingly also being applied to the detection and monitoring of structural damage, such as for crack measurement at the 14th-century Basílica da Ascención in Northern Spain (Armesto et al., 2008). Laser scanning is being fairly regularly used for obtaining cultural heritage building data, e.g. Andrés et al. (2012), Grussenmeyer and Guillemin (2011). Yastikli (2007) contended that laser scanning is more applicable to complex structures than photogrammetric techniques using digital image rectification, since the latter are suitable only for planar surfaces. A study by Laefer et al. (2010), which compared TLS, digital photography, and direct visual inspection, demonstrated that both TLS and photography are good sources of a permanent record. However, TLS data proved difficult to use for determining crack location and size, due to the pixelated nature of the datasets. Although these methods offer viable solutions for the inspection of historic structures, highly experienced inspectors are required, thus limiting their application amongst multiple inspectors. Furthermore, the high costs associated with remote sensing technologies further limit their usage.

In the cases of both underground infrastructure works and national inventories, what is particularly sought is a strategic approach for consistently and reliably comparing the status of large groups of buildings. To this end, Laefer et al.
(2008) proposed the use of four scales (Tables 3-6) to be used in conjunction with Burland et al.'s (1977) system (Table 2) to rapidly assess a building's condition. The results are then weighted according to Table 7 and finally normalized by the summation of the weighting fractions (i.e. 13) to better identify structural versus non-structural concerns using the original categories of negligible (0) to very severe (5).

Table 2. Cracking (after Burland et al., 1977)
Table 3. Protruding or loose brickwork (after Laefer et al., 2008)
Table 4. Replaced or repaired brickwork (after Laefer et al., 2008)
Table 5. Exposure-based damage (after Laefer et al., 2008)
Table 6. Plant growth (after Laefer et al., 2008)
Table 7. Weighting system (after Laefer et al., 2008)

Ideally, there would be a method that could be applied rapidly and consistently with relatively limited resources by multiple inspectors. The best case would be that the method could be used to ascertain the state of an area and the individual buildings within it for historic registration, general condition monitoring, and base-line documentation in the case of adjacent works. However, to date there has been no consensus on a method for rapidly evaluating large groups of historic buildings, whether for large-scale risk assessment near infrastructure projects or for long-term documentation, as in the case of national inventories.

3. SCOPE AND METHODOLOGY

To help overcome the problems of limited available expert knowledge and time resources, and to enable the repeatable and consistent surveying of large groups of historic structures, a rapid assessment method is proposed: the University College Dublin Inspection Method (UCDIM). The UCDIM employs five tables relating to the degradation of a building's façade, as proposed by Laefer et al. (2008) (Tables 2-6), together with the weighting system of Table 7, to be used alongside accompanying index images, which offer a visual standard against which to judge varying degrees of degradation.
These index images (Figures 1-3) were taken prior to the study. They provide a means for visual comparison when inspecting a building's façade and aid in the damage classification, allowing testing of the UCDIM, as will be explained. The aim of the study was to improve user understanding and repeatability, both between multiple building investigations and across investigators. The intention is that a single digital photograph of the front façade of each building is taken, using a camera capable of producing photographic quality of at least 300 dpi, to which the UCDIM is then applied. Where a single photograph is not possible due to street width restrictions, multiple photographs should be taken and subsequently merged using an editing tool in a graphics software programme. Ideally, all images for the study area would be taken under similar lighting and weather conditions, as is common for the locale. The UCDIM, therefore, relies upon digital photography for the assignment of a damage level. The UCDIM is presently limited to masonry buildings, since these are often the most vulnerable to excavation-induced damage and are of the most relevance in the case of historical documentation and designation.

To determine the UCDIM's cost efficiency, repeatability amongst inspectors, and time resource intensiveness, it was tested against both a lengthy detailed inspection and an alternative rapid method, the CEF System (Table 1). The detailed inspection consisted of a comprehensive investigation of all parts of each building (e.g. internal and external walls, floors, ceilings, windows, doorways), documenting all identified defects. This involved a thorough visual inspection of both the exterior and interior of each building, beginning at the top storey of the building and working downwards, with each detected defect being photographed and noted.
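The weighted, normalized scoring that underlies the UCDIM (Tables 2-7) can be sketched in a few lines. The weights below are hypothetical placeholders, chosen only so that they sum to the normalizing constant of 13 cited from Laefer et al. (2008); the actual values are those of Table 7, which is not reproduced here:

```python
# Weights are HYPOTHETICAL placeholders -- the actual values are in Table 7
# (Laefer et al., 2008). They are chosen only so that their sum equals the
# normalizing constant of 13 used in the text.
WEIGHTS = {
    "cracking": 5,              # Table 2 (after Burland et al., 1977)
    "protruding_or_loose": 3,   # Table 3
    "replaced_or_repaired": 2,  # Table 4
    "exposure_damage": 2,       # Table 5
    "plant_growth": 1,          # Table 6
}

def ucdim_score(ratings):
    """Weighted, normalized façade damage score on the original scale of
    negligible (0) to very severe (5). `ratings` maps each damage category
    to the 0-5 rating chosen by comparison against the index images."""
    assert set(ratings) == set(WEIGHTS)
    total_weight = sum(WEIGHTS.values())  # 13 in this sketch
    return sum(WEIGHTS[k] * ratings[k] for k in WEIGHTS) / total_weight

# Hypothetical façade ratings for a single building:
print(round(ucdim_score({"cracking": 3, "protruding_or_loose": 1,
                         "replaced_or_repaired": 2, "exposure_damage": 2,
                         "plant_growth": 0}), 2))  # → 2.0
```

Because the weights sum to the normalizing constant, the resulting score always stays on the same 0-5 scale as the individual category ratings.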
The CEF System (The Handley Partnership, North Yorkshire, UK), as described previously, consisted of a rapid condition assessment of building components and subsequent weighting according to the building's occupancy. Damage classifications for both rapid methods were determined numerically. For the UCDIM, the higher the numerical value, the worse the state of degradation of the building. However, for the CEF System, the opposite case exists, whereby a lower numerical value implies a worse state of building damage. Damage ratings for the detailed inspection were determined qualitatively, whereby buildings were ranked according to the nature and extent of defects present, requiring an engineer's judgement.

To benchmark these inspection methods, three inspectors of various experience levels surveyed six buildings in the city centre of Dublin, Ireland. As illustrated in Table 8, one inspector had extensive experience, having surveyed nearly 500 buildings. The other two inspectors had highly limited experience, one having surveyed fewer than 20 buildings and one with no formal previous experience.

Firstly, the rapid CEF System was conducted by each inspector individually. Next, the detailed inspection was conducted by the three inspectors working together as a team; because of the commercial nature of the properties, the inspection period was not open-ended. Since this method acted as a benchmark for the other two methods, the inspection of the building by the team of inspectors was felt to be justified. In the detailed inspection, the three inspectors simultaneously examined all building components, both structural and decorative, commencing within the uppermost storey and working downwards through the building, followed by an examination of the building's exterior.
Each identified defect was discussed amongst the team of inspectors before reaching a general consensus about the extent and nature of the defect, which was subsequently photographed by one inspector and recorded on paper by another. Defects noted included the following: interior cracking of plasterwork and walls; interior chipping of plasterwork; interior sagging of floor joists; exterior flaking and weathering of masonry; and exterior cracking through coursing and mortar joints.

Following this, the front façade of each building was photographed as per the requirements of the UCDIM. This photograph was subsequently examined individually by each inspector, who then applied the UCDIM. For each method, the resource requirements, time allotments, and data obtained were recorded.

Table 8. Inspector experience

Figure 1. Protruding or loose brickwork index images (to be used in conjunction with Table 3)
Figure 2. Replaced or repaired brickwork index images (to be used in conjunction with Table 4)
Figure 3. Damage due to exposure index images (to be used in conjunction with Table 5)

The overall study focused on the Grafton Street region in Dublin's city centre, for which an underground railway system has been granted planning permission by Dublin City Council (An Bord Pleanála, 2010). The majority of the region forms an Architectural Conservation Area with a high number of Georgian buildings constructed of unreinforced masonry (Casey, 2006). A letter requesting permission to gain access to the buildings in this region was hand delivered to 207 individual addresses. Due to the high number of building tenancies in this region, it was difficult in many instances to make contact with the property owner to seek permission, as many of the tenants claimed to have little or no contact with their landlords, often refusing to accept the letter.
Furthermore, a large proportion of the premises consisted of multiple tenants occupying a single building, which further complicated inspection co-ordination. Ultimately, responses were obtained from only 5% of this dataset, resulting in access to only six buildings (Figure 4). Details of these six buildings are provided in Table 9.

Figure 4. Selected buildings

Table 9. Building details

4. RESULTS

4.1 General Findings

Two distinct building types were identified as part of this study. The first, herein called Type 1, consisted of a highly maintained ground floor retail unit. This unit contained large amounts of shelving and display units. As a result, the majority of the interior walls were concealed, thereby making defects difficult to detect. The remaining storeys were generally used for storage and employee facilities and, thus, remained out of view of the public. The state of dilapidation for these storeys was significantly worse than that of the ground storey; numerous defects were visible, and in many instances these areas appeared poorly maintained compared to publicly visible areas. Buildings A, B, C and E are examples of Type 1. In Building E (Figure 5), structural and non-structural damage was evident in the upper storeys, with cracking of up to 10 mm in width propagating through brick and mortar (Figure 5b), as well as separation of walls at junction points (Figure 5d). Non-structural damage was also present in many rooms in the form of cracked plaster on walls, corners, and coving (Figure 5c-d). However, the ground floor contained no defects whatsoever (Figure 5a), thus highlighting the potential disparity in damage across storeys within a single building.

Figure 5. Type 1 (Building E) – interior views

Buildings D and F provided examples of Type 2, where the building consists of highly maintained commercial units on all floor levels, with public access at the ground level and private offices in the upper storeys.
Visible defects were mainly aesthetic in nature and limited to cracked plasterwork. Figure 6 illustrates the state of Building F's interior.

Figure 6. Type 2 (Building F) – interior views

4.2 Survey Methods

The results of the CEF System are illustrated in Figure 7, in which a lower rating indicates increasing damage severity. Therefore, according to the CEF System, Building D is in the best condition and Building E the worst. Damage ratings across the three inspectors were in good agreement, with a maximum coefficient of variance of 0.21 occurring in Building A and most variance values in single digits. Notably, in two-thirds of the cases, greater experience correlated with improved agreement between inspectors.

Figure 7. Survey results for the CEF System

Figure 8 illustrates the damage ratings of each of the three inspectors for use of the UCDIM. A higher numerical score represents increasing damage. Therefore, in agreement with the results of the CEF System, Building E is shown as the most damaged and Building D as the least.

Figure 8. Survey results for the UCDIM

The outcome of the detailed inspection consisted of a list of defects present with accompanying photographs. Since the results were qualitative in nature, buildings were ranked according to the nature and extent of defects present, beginning with the least damaged building. Table 10 summarizes the main findings and provides a classification order based on these findings (1 = least damaged; 6 = most damaged). The results of each method were considered with respect to (1) reliability across inspectors; (2) consistency of damage classification; (3) time expended; and (4) overall efficiency.

Table 10. Findings of the detailed inspection

4.2.1 Reliability across inspectors

The reliability of the CEF System, based on the coefficient of variance across inspectors, was generally higher than that of the UCDIM (Figure 7).
This demonstrates a level of consistency in the application of the method irrespective of the inspector's level of experience. The higher discrepancies for Building A appear to be due to the fact that the CEF System does not account for cases where certain building elements were not included in the damage assessment. For example, in Building A, Inspectors I and III did not rate ancillary items (e.g. shop fronts, architectural details). As scoring is cumulative and non-weighted, the overall score assigned by Inspector II was significantly higher. Figure 8 shows a higher level of variance in the application of the UCDIM. None of the coefficients of variance were single digit, and the maximum was 0.56 for Building C, which consisted of a painted façade. This highlighted the difficulties of applying this method to painted structures. A similar scenario is likely for structures consisting of rendered façades. Nonetheless, all three inspectors identified Building E as the most dilapidated, with a coefficient of variance of 0.1 across the assigned damage ratings.

Table 11 illustrates the ranking of damage classification by each of the inspectors according to the CEF System and the UCDIM. Spearman's Rank-Order Correlation Coefficient (ρ) was calculated for each when compared with the results of the detailed inspection. The value of ρ can vary between -1 and 1, where a value closer to 1 or -1 indicates a match between the orders of the two sets of data, while a value of 0 represents no correlation between the two datasets. A plus sign indicates identical rankings, and a minus sign demonstrates reverse rankings (Walpole et al., 2002). The results of the CEF System indicate that the experience of the inspector contributes to a more accurate result, since both Inspectors I and II have a higher value of ρ than Inspector III.
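For rankings without ties, the coefficient described above reduces to the closed-form expression ρ = 1 - 6Σd²/(n(n² - 1)). A minimal sketch follows; the building rankings used are illustrative examples, not the study's data:

```python
def spearman_rho(rank_a, rank_b):
    """Spearman's Rank-Order Correlation Coefficient for two rankings
    without ties: rho = 1 - 6 * sum(d_i^2) / (n * (n^2 - 1)),
    where d_i is the difference between paired ranks."""
    assert len(rank_a) == len(rank_b)
    n = len(rank_a)
    d_squared = sum((a - b) ** 2 for a, b in zip(rank_a, rank_b))
    return 1 - (6 * d_squared) / (n * (n ** 2 - 1))

# Illustrative damage rankings for six buildings (NOT the study's values):
detailed = [3, 2, 5, 1, 6, 4]  # hypothetical ranks from a detailed inspection
rapid = [4, 2, 3, 1, 6, 5]     # hypothetical ranks from a rapid method
print(round(spearman_rho(detailed, rapid), 2))  # → 0.83
```

Identical rankings yield ρ = 1 and exactly reversed rankings yield ρ = -1, matching the sign convention quoted from Walpole et al. (2002).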
The results of the UCDIM demonstrate that if the dataset had not included a building with a painted façade (Building C), a reasonably accurate evaluation would have been achieved across all inspectors, irrespective of previous experience. Although the CEF System provided a relatively good damage classification of the six buildings, with a value of ρ ranging from 0.5 for an inspector with no experience to 0.7 for both an inspector of limited experience and one with extensive experience, several issues were noted upon use. The CEF System did not account for varying levels of dilapidation within a building across storey levels, as was common for Building Type 1. This may lead to inaccuracies in the results. Furthermore, as described earlier, the items that were assessed were not weighted (Table 1), which may further lead to inaccuracies. It should be noted that a detailed version of the CEF System employs a weighting system (The Handley Partnership, North Yorkshire, UK); however, due to a lack of details of this further developed version, its application remained beyond the scope of this study. The UCDIM appears to provide a more precise evaluation irrespective of inspector experience, with the possible limitation of its use for buildings consisting of painted and/or rendered façades, where damage is harder to discern, particularly when applying Tables 2, 4 and 5. This was an issue previously identified in a study of crack detection methods including close-up digital images, where cracking in rendered façades was much more difficult to accurately detect than in exposed brick ones (Laefer et al., 2010).

Table 11. Spearman's Rank-Order Correlation Coefficient

4.2.2 Consistency of damage ranking

Figure 9 illustrates the damage ranking for the three methods according to Inspector I. All three methods rank Building D as being in the least dilapidated state and Building E as the most. However, differences are evident for the rankings in between.
The detailed inspection considers Building C to be ranked fifth, i.e. having the second highest level of damage. However, the CEF System ranks this building fourth, while the UCDIM places it third. The CEF System ranks Building A fifth, while the UCDIM ranks it fourth, and the detailed inspection ranks it third. Furthermore, the CEF System considers Building F the second least damaged building, while the UCDIM lists it as the second most damaged. Even though the UCDIM considers only the exterior façade, the findings reveal that in the majority of instances, this can provide a good estimation of the overall state of dilapidation for buildings in this region.

Figure 9. Damage ranking according to Inspector I

4.2.3 Time

Figure 10 illustrates the average time across the three inspectors to conduct each of the methods. The UCDIM was the most rapid, with time per building varying from approximately 4 to 6 minutes for assessment, with a further 2 minutes for photographing and processing the image. While the CEF System required only roughly twice the assessment time (averaging between 7 and 12 minutes per building), the method necessitates interior building access, thereby requiring significantly more resources to organise access. Assuming time requirements for entry of approximately 30 minutes for delivery of a letter of request to the premises and a further 15-20 minutes for scheduling the inspection, an estimate of 56.5 minutes can be attributed to the CEF System. While this is significantly more than the UCDIM, the time required for the detailed inspection was even greater. The inspection alone was roughly three times longer than the CEF System (approximately six times longer than the UCDIM). Furthermore, the detailed inspection also necessitates interior building access. Consequently, the overall time was approximately 80 minutes per building.

Figure 10. Average time required for methods

4.2.4 Overall efficiency

The overall efficiency of each of the survey methods according to the three inspectors was calculated according to Equation 1 and is presented in Table 12. The time was calculated as an average across the six examined buildings and is presented as a fraction of the time taken for the detailed inspection. Since access was obtained for just 5% of the buildings in the selected study area, methods for which access was required were weighted 20 times those for which no access was required. Therefore, a value of 0.02 was applied where access to the building was necessary. A value of one represents the maximum possible efficiency.

Efficiency = (1 - T) × ARC × ρ (1)

where T = time, ARC = Access Requirement Constant, and ρ = Spearman's Rank-Order Correlation Coefficient.

Table 12. Efficiency

Since both rapid survey methods (CEF System and UCDIM) were benchmarked against a detailed inspection, an efficiency value of zero was evaluated for the detailed method. Significant differences can be seen between the efficiencies calculated for the two rapid methods. The UCDIM was on average approximately 70 times more efficient across inspectors than the CEF System.

5. DISCUSSION

The condition of a masonry building's façade was previously shown by Peraza (2006) to be potentially indicative of the condition of the entire building. If adequately demonstrated to be the case, this provides a means for substantial cost and time savings when conducting large-scale dilapidation surveys of historic masonry buildings. The UCDIM has been shown to accurately provide a damage classification for the six buildings examined according to their exterior façades, thus offering a highly economic means for assessing large groups of historic buildings in this region, with the exception of those that are painted or rendered. In such instances, an alternative means of assessment is required.
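Equation 1 from Section 4.2.4 can be evaluated directly; the sketch below uses illustrative inputs, not the study's measured values, to show how the access-requirement constant dominates the result:

```python
def efficiency(time_fraction, access_required, rho):
    """Equation 1: Efficiency = (1 - T) * ARC * rho.

    time_fraction: T, survey time as a fraction of the detailed-inspection time
    access_required: True if interior access is needed (ARC = 0.02), else ARC = 1
    rho: Spearman's Rank-Order Correlation against the detailed inspection
    """
    arc = 0.02 if access_required else 1.0
    return (1 - time_fraction) * arc * rho

# Hypothetical inputs (not the study's measured values):
print(efficiency(0.1, access_required=False, rho=0.94))  # a UCDIM-like case
print(efficiency(0.7, access_required=True, rho=0.7))    # a CEF-like case
```

Note that T = 1 (the detailed inspection benchmarked against itself) yields zero efficiency, consistent with Table 12, and that requiring interior access scales any result down fifty-fold.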
Furthermore, extensively renovated buildings may also be a concern when using the UCDIM, as no explicit consideration is made of any interior features. Since heritage laws often ensure the protection of the original façade of a building, the exterior of such a building may not always be representative of the overall state of the building. Likewise, the opposite may also exist, where only the original interior is maintained.

An ancillary benefit of the UCDIM is that it creates a permanent historic record. This plays an important role in establishing a reliable means for condition assessment of a building, since this record may be revisited, thereby providing the opportunity for assessment by multiple inspectors and at regular time intervals in the future. In the case of litigation, photographic documentation offers solid evidence upon which disputes may be resolved. Furthermore, Laefer et al.'s (2010) recent study, which used two inspectors and four historic façades to compare crack detection from terrestrial laser scanning, ground-level binocular-based inspection, and digital photography versus elevated manual inspections, showed vast discrepancies between techniques and inspectors, demonstrating digital photography to be the most accurate and reliable approach.

In order to illustrate the potential cost savings of the UCDIM, a sample calculation is presented for the planned Metro North, an underground railway system approved for this area. According to the Environmental Impact Statement (EIS) produced by the Railway Procurement Agency (RPA, 2008), the state agency responsible for the provision of the project, approximately 500 buildings lie within the potential zone of influence along the initial portion of the route. Assuming a yearly wage of €20k and a 39-hour working week, a sample cost analysis is presented
The UCDIM is shown to cost approximately 12.4% of that required for the CEF System and just 8.7% of the cost required for the detailed inspection. Furthermore, substantial time savings are also noted for the UCDIM, which may be conducted for a dataset of this size in less than two weeks, while both the CEF System and the detailed inspection require several months for completion. In the case of risk assessment due to adjacent works, there exists a re- quirement for condition assessments to be conducted in a short time frame so as to minimize dis- parities between buildings based on temporal or environmental conditions. Table 13. Sample cost analysis 6. CONCLUSIONS AND FUTURE WORK This study examined six historic buildings in an architectural conservation area of Dub- lin's city centre according to three surveying methods: a rapid method known as the UCDIM for which index images have been provided herein, another rapid method known as the CEF System, and a full detailed inspection. Three inspectors of varying experience applied the two rapid methods to the six individual buildings. Results were compared to each other and were bench- marked against the detailed inspection which was conducted by the team of three inspectors col- lectively for each building. The UCDIM results appear to be most accurate, except in the case of painted façades. Furthermore, the UCDIM was roughly 70 times more efficient than the CEF System and significantly more efficient than the detailed inspection. This can be attributed mainly to the fact that interior building access is not required for the UCDIM. Gaining access to buildings has been shown to be extremely difficult for this region due to the high prevalence of rented properties and lack of communication between tenants and landlords. Furthermore, in the 21 case of high security buildings such as jewellery shops and banks, interior access is especially problematic. 
Overall, the UCDIM appears to be most advantageous when rapidly assessing large numbers of structures, as is needed for urban planning, historic designation, and risk assessment for upcoming infrastructure projects. The three methods applied in this study focused solely on the physical attributes of a building. However, in the case of national inventories for historic designation or urban planning, not all buildings are viewed equally in terms of cultural importance and community valuation. Existing damage in a building highly regarded by its community is generally perceived more seriously than damage incurred in a building of little or no cultural worth. In the past, this idea has not commonly been incorporated into risk analyses or national inventories. Incorporating a system that accounts for varying levels of architectural significance, reflecting the cultural value a community places on a building, would provide more accurate risk analyses applicable to historical designation, as well as to planning efforts in the case of adjacent construction works. Such efforts are currently being undertaken by the authors.

ACKNOWLEDGMENTS

This research was generously supported by the Irish Research Council for Science, Engineering and Technology (IRCSET) through an Embark Initiative Scholarship.

REFERENCES

American Society of Civil Engineers (ASCE) 2000. Guideline for structural condition assessment of existing buildings (SEI/ASCE 11-99). Reston, VA: ASCE.
An Bord Pleanála 2010. NA0003 - Construction, operation and maintenance of a light railway system. Available at: www.pleannala.ie (accessed April 2012).
Andrés, A. N., Pozuelo, F. B., Marimón, J. R. and Gisbert, A. D. 2012. Generation of virtual models of cultural heritage. Journal of Cultural Heritage 13(2012):103-106.
Applied Technology Council (ATC) 2005. ATC-20-1, Field Manual: Postearthquake Safety Evaluation of Buildings. Redwood City, California: ATC.
Armesto, J., Arias, P., Roca, J. and Lorenzo, H. 2008. Monitoring and assessing structural damage in historic buildings. The Photogrammetric Record 23(121):36-50.
Aye, Z. Z., Karki, D. and Schulz, C. 2006. Ground movement prediction and building damage risk assessment for the deep excavations and tunnelling works in Bangkok subsoil. Proceedings of International Symposium on Underground Excavation and Tunnelling, February 2006, Bangkok, Thailand:281-297.
Bowyer, J. 1971. Guide to domestic building surveys. Bristol: Western Printing Services Ltd.
Brandt, E. and Rasmussen, M. H. 2002. Assessment of building conditions. Energy and Buildings, 34(2):121-125.
Burland, J. B. 1995. Assessment of damage to buildings due to tunnelling and excavation. Invited Special Lecture to IS-Tokyo 1995: Proceedings of 1st International Conference on Earthquake and Geotechnical Engineering, Tokyo:1189-1201.
Burland, J. B., Broms, B. and DeMello, V. F. B. 1977. Behaviour of foundations and structures: state of the art report. Proceedings of 9th International Conference on Soil Mechanics and Foundation Engineering, Vol. 3, Balkema, Rotterdam:495-546.
Casey, C. 2005. The Buildings of Ireland: Dublin. New Haven, CT: Yale University Press.
Cho, G. C., Dodds, J. and Santamarina, J. C. 2006. Particle shape effects on packing density, stiffness and strength: natural and crushed sands. Journal of Geotechnical and Geoenvironmental Engineering, 5(132):591-602.
Council of Europe (COE) 1985. Convention for the protection of the architectural heritage of Europe, Granada, 3.X.1985. Available at: www.conventions.coe.int (accessed April 2012).
Corbane, C., Saito, K., Dell'Oro, L., Gill, S. P. D., Emmanuel Piard, B., Huyck, C. K., Kemper, T., Lemoine, G., Spence, R. J. S., Shankar, R., Senegas, O., Ghesquiere, F., Lallemant, D., Evans, G. B., Gartley, R. A., Toro, J., Ghosh, S., Svekla, W. D., Adams, B. J. and Eguchi, R. 2011.
A comprehensive analysis of building damage in the January 12, 2010 Mw7 Haiti earthquake using high-resolution satellite and aerial imagery. Special issue on the Haiti Earthquake, Photogrammetric Engineering and Remote Sensing, 77(10):997-1009.
Federal Emergency Management Agency (FEMA) 2002. Rapid visual screening of buildings for potential seismic hazards - a handbook, FEMA 154. Redwood City: ATC.
Grünthal, G., Musson, R. M. W., Schwartz, J. and Stucchi, M. 1998. European macroseismic scale 1998. Luxembourg: Cahiers du Centre Européen de Géodynamique et de Seismologie, Vol. 15.
Grussenmeyer, P. and Guillemin, S. 2011. Photogrammetry and laser scanning in cultural heritage documentation: an overview of projects from Insa Strasbourg. First International Geomatics Symposium in Saudi Arabia. Available at: www.geomaticsksa.com/GTC2011 (accessed April 2012).
Ho, D. C. W., Yau, Y., Poon, S. W. and Liusman, E. 2011. Achieving sustainable urban renewal in Hong Kong: a strategy for dilapidation assessment of high rises. Journal of Urban Planning and Development. doi.org/10.1061/(ASCE)UP.1943-5444.0000104
Laefer, D. F. 2001. Prediction and assessment of ground movement and building damage induced by adjacent excavation. PhD Thesis, University of Illinois at Urbana-Champaign, Urbana, IL, USA, pp. 803.
Laefer, D. F., Conry, B., Murphy, D. and Cerbasi, S. 2008. A new multi-parameter condition assessment scale for estimating tunnel-induced damage. Development of Urban Areas and Geotechnical Engineering, Vol. 2, St. Petersburg, Russia:571-576.
Laefer, D. F., Gannon, J. and Deely, E. 2010. Reliability of crack detection for baseline condition assessments. Journal of Infrastructure Systems, 16(2):129-137.
Martinelli, A., Cifani, G., Cialone, G., Corazza, L., Petracca, A. and Petrucci, G. 2008. Building vulnerability assessment and damage scenarios in Celano (Italy) using a quick survey data-based methodology.
Soil Dynamics and Earthquake Engineering, 28(10-11):875-889.
Office of the Attorney General (OAG) 2000. Planning and Development Act, Irish Statute Book. Available at: www.irishstatutebook.ie (accessed April 2012).
Peraza, D. B. 2006. Condition assessment of buildings. Proceedings of SEI Structures Congress, ASCE, St. Louis, MO:1-10.
Railway Procurement Agency (RPA) 2008. Environmental Impact Statement - Dublin Metro North, v.2, Book 7.
Roca, A., Goula, X., Susagna, T., Chávez, J., González, M. and Reinoso, E. 2006. A simplified method for vulnerability assessment of dwelling buildings and estimation of damage scenarios in Catalonia, Spain. Bulletin of Earthquake Engineering, 4(2):141-158.
Sheeley, I. H. 1985. Building surveys, reports and dilapidations. Essex: Anchor Brendon Ltd.
Staveley, H. S. and Glover, P. V. 1983. Surveying buildings. Cornwall: Robert Hartnoll Ltd.
Straub, Ad. 2009. Dutch standard for condition assessment of buildings. Structural Survey, 27(1):23-35.
The Handley Partnership. Critical Element Factor (CEF) System. Available at: www.buildingsatrisk.com (accessed April 2012).
Torp-Petersen, G. E. and Black, M. G. 2001. Geotechnical investigation and assessment of potential building damage arising from ground movements: CrossRail. Proceedings of the Institution of Civil Engineers - Transport, Vol. 147(2):107-119.
Walpole, R. E., Myers, R. H., Myers, S. L. and Ye, K. 2002. Probability & Statistics for Engineers & Scientists. New Jersey: Prentice-Hall, Inc.
Yastikli, N. 2007. Documentation of cultural heritage using digital photogrammetry and laser scanning. Journal of Cultural Heritage 8:423-427.
Yilmaz, H. M., Yakar, M., Gulec, S. A. and Dulgerler, O. N. 2007. Importance of digital close-range photogrammetry in documentation of cultural heritage. Journal of Cultural Heritage 8:428-433.

Figure 1. Protruding or loose brickwork index images (to be used in conjunction with Table 3)
Figure 2.
Replaced or repaired brickwork index images (to be used in conjunction with Table 4)
Figure 3. Damage due to exposure index images (to be used in conjunction with Table 5)
Figure 4. Selected buildings
Figure 5. Type 1 (Building E) - interior views
Figure 6. Type 2 (Building F) - interior views
Figure 7. Survey results for the CEF System
Figure 8. Survey results for the UCDIM
Figure 9. Damage ranking according to Inspector I
Figure 10. Average time required for each method

Table 1. CEF System (The Handley Partnership, North Yorkshire, UK)
Table 2. Cracking (after Burland et al., 1977)
Table 3. Protruding or loose brickwork (after Laefer et al., 2008)
Table 4. Replaced or repaired brickwork (after Laefer et al., 2008)
Table 5. Exposure-based damage (after Laefer et al., 2008)
Table 6. Plant growth (after Laefer et al., 2008)
Table 7. Weighting system (after Laefer et al., 2008)
Table 8. Inspector experience
Table 9. Building details
Table 10. Findings of the detailed inspection
Table 11. Spearman's Rank-Order Correlation Coefficient
Table 12. Efficiency
Table 13. Sample cost analysis

Figure 1. Protruding or loose brickwork index images, panels a)-f) showing risk categories 0-5 (to be used in conjunction with Table 3)
Figure 2. Replaced or repaired brickwork index images, panels a)-f) showing risk categories 0-5 (to be used in conjunction with Table 4)
Figure 3. Damage due to exposure index images, panels a)-f) showing risk categories 0-5 (to be used in conjunction with Table 5)
Figure 4 panels: Building Locations (map); A) 31 Grafton Street; B) 35 Grafton Street; C) 21 Duke Street; D) 47 Dawson Street; E) 44 Dawson Street; F) 34 Dawson Street. Figure 4.
Selected buildings (map of Dublin, Ireland, showing building locations A-F)

Figure 5. Type 1 (Building E) - interior views. Panels: a) Ground floor - retail unit; b) First floor - stockroom; c) Second floor - stockroom; d) Second floor - stockroom. Structural and non-structural damage is annotated.

Figure 6. Type 2 (Building F) - interior views. Panels: a) Ground floor - retail unit; b) First floor - office; c) Second floor - office; d) Second floor - office. Non-structural damage is annotated.

Figure 7. Survey results for the CEF System. Damage ratings per building for Inspector I (highly experienced), Inspector II (little experience) and Inspector III (no experience). Means and coefficients of variation (COV), buildings A-F: A - mean 22.3, COV 0.21; B - mean 24.7, COV 0.08; C - mean 22.3, COV 0.06; D - mean 31.0, COV 0.05; E - mean 15.3, COV 0.08; F - mean 27.7, COV 0.03.

Figure 8. Survey results for the UCDIM. Damage ratings per building for Inspectors I-III. Means and COVs, buildings A-F: A - mean 23.7, COV 0.28; B - mean 16.0, COV 0.18; C - mean 14.0, COV 0.56; D - mean 11.3, COV 0.44; E - mean 31.3, COV 0.10; F - mean 25.0, COV 0.14.

Figure 9. Damage ranking according to Inspector I, from 1 (least damaged) to 6 (most damaged). Detailed Inspection: D, B, A, F, C, E. CEF System: D, F, B, C, A, E. UCDIM: D, B, C, A, F, E.

Figure 10. Average time required for each method (minutes per building, A-F, for the detailed inspection, the CEF System and the UCDIM)

Table 1. CEF System (after The Handley Partnership, North Yorkshire, UK)
Risk score by condition and occupancy (V = Vacant; PO = Partial Occupancy; FO = Full Occupancy):
(1) Very Bad: V = 1, PO = 2, FO = 3
(2) Poor: V = 3, PO = 3, FO = 4
(3) Fair: V = 4, PO = 4, FO = 5
(4) Good: V = 5, PO = 6, FO = 6

Table 2.
Cracking (after Burland et al., 1977). Risk category, degree of damage and approximate crack width:
0 - Negligible: hairline cracks
1 - Very Slight: 0.1-1 mm
2 - Slight: 1-5 mm
3 - Moderate: 5-15 mm, or a number of cracks greater than 3
4 - Severe: 15-25 mm, but also depends on number of cracks
5 - Very Severe: greater than 25 mm, but depends on number of cracks

Table 3. Protruding or loose brickwork (after Laefer et al., 2008). Risk category, degree of damage and description of existing damage:
0 - Negligible: all bricks in the same plane
1 - Very slight: a few bricks (1-3) are noticeably out of plane / mortar appears loose, weak or missing around 1-3 bricks
2 - Slight: overall, more than 5 bricks appear slightly out of plane / gaps in mortar are more noticeable / just perceptible difference in line of brick
3 - Moderate: overall, up to 10% of bricks are noticeably out of plane / noticeable slope in masonry / windows, lintels, doorframes etc. are noticeably tilted
4 - Severe: overall, up to 15% of bricks are missing entirely / noticeable outward bulge in the wall / window lintels and doorframes are at an angle greater than 15 degrees
5 - Very severe: more than 15% of bricks are missing entirely / sections of the wall are on the verge of collapse / repair would require the majority of the wall to be rebuilt

Table 4. Replaced or repaired brickwork (after Laefer et al., 2008). Risk category, degree of damage and description of existing damage:
0 - Negligible: none
1 - Very slight: brickwork was replaced as a result of filling a doorway or window
2 - Slight: replacement occurred in rarely occurring small clusters (i.e. 2-6 bricks)
3 - Moderate: replacement occurred in larger clusters (greater than 6)
4 - Severe: more than 10% of the wall is comprised of replaced brickwork
5 - Very severe: more than 25% of the wall is comprised of replaced brickwork

Table 5. Exposure-based damage (after Laefer et al., 2008). Risk category, degree of damage and description of existing damage:
0 - Negligible: none
1 - Very slight: isolated, rarely occurring chipping (i.e.
1-3 bricks) / lower perceptible damage of the overall wall
2 - Slight: perceptible overall damage (weathering) of bricks in the wall
3 - Moderate: numerous examples of significant damage, i.e. greater than 5%
4 - Severe: noticeable damage to greater than 15% of bricks in the wall
5 - Very severe: greater than 25% of bricks subjected to heavy chipping/spalling; bricks heavily eroded due to exposure

Table 6. Plant growth (after Laefer et al., 2008). Risk category, degree of damage and description of existing damage:
0 - Negligible: none
1 - Very slight: one or two examples of weeds growing in typical places (i.e. top of chimney, ledge etc.)
2 - Slight: weeds are more numerous, as well as being more overgrown
3 - Moderate: whole wall ensconced with vegetation
4 - Severe: minor bush/tree growing out of masonry
5 - Very severe: major (fully grown) tree growing out of masonry

Table 7. Weighting system (after Laefer et al., 2008). Scale used and modifier/weight applied:
Cracking (Table 2): weight 4
Protruding or loose brickwork (Table 3): weight 3
Replaced or repaired brickwork (Table 4): weight 3
Exposure-based damage (Table 5): weight 2
Plant growth (Table 6): weight 1

Table 8. Inspector experience
Inspector I: extensive experience, approximately 500 buildings previously examined
Inspector II: minimal experience, fewer than 20 buildings
Inspector III: no experience, 0 buildings

Table 9. Building details
A: 31 Grafton Street; 4 storeys + basement; approx. 15 m long x 4 m wide x 14 m high; year of construction unknown; retail/commercial
B: 35 Grafton Street; 4 + basement; 9 x 4 x 13 m; unknown; retail/commercial
C: 21 Duke Street; 2 + basement; 14 x 5 x 12 m; 1939; pub
D: 47 Dawson Street; 4 + basement; 10 x 5 x 16 m; 1825; commercial
E: 44 Dawson Street; 4 + basement; 15 x 8 x 14 m; 1820; retail/commercial
F: 34 Dawson Street; 2 + basement; 20 x 5 x 14 m; 1870; commercial

Table 10. Findings of the detailed inspection (relative damage rating and main defects noted)
1 (Least Damaged) - Building D: a. Cracking through window sill of external façade. b. Hairline cracking in façade. c.
Cracking through plasterwork of walls at ground floor level.
2 - Building B: a. Cracking ~1 mm through plasterwork of ceiling at basement level. b. Missing window sills on interior at first floor level. c. <1 mm cracking through plasterwork of walls and ceiling at third floor level.
3 - Building A: a. Vertical crack in window sill of external façade. b. ≈1 mm cracking through plasterwork along walls of second floor level. c. Cracking above upper corners of window and door openings at third floor level.
4 - Building F: a. <1 mm cracking through plasterwork of walls at ground floor level. b. <1 mm cracking through plasterwork of walls at first floor level. c. <1 mm cracking through plasterwork of ceiling at first floor level.
5 - Building C: a. Cracking >2 mm adjacent to window openings and in chimney (structural damage). b. Reinforcement visible at underside of ground floor slab (structural damage). c. Cracking >2 mm in ceiling and in corners of window frame.
6 (Most Damaged) - Building E: a. Cracking >4 mm surrounding openings (structural damage). b. Cracking of façade (structural damage). c. Cracking of coving on interior.

Table 11. Spearman's Rank-Order Correlation Coefficient
Rankings by building (detailed inspection rank in brackets) for the CEF System and the UCDIM, Inspectors I/II/III:
D (1): CEF 1/1/1; UCDIM 1/1/1
B (2): CEF 3/3/4; UCDIM 3/6/3
A (3): CEF 5/5/5; UCDIM 2/3/2
F (4): CEF 2/2/2; UCDIM 4/2/4
C (5): CEF 4/4/3; UCDIM 5/4/5
E (6): CEF 6/6/6; UCDIM 6/5/6
ρ: CEF 0.7143/0.7143/0.5429; UCDIM 0.9429/0.3714/0.9429

Table 12. Efficiency (T = normalised time; Access Requirement Constant (ARC): Yes = 0.02, No = 1)
Detailed Inspection: T = 1, 1 - T = 0, ARC = 0.02, ρ = 1, Efficiency = 0
UCDIM (ARC = 1): Inspector I - T = 0.216, ρ = 0.9429, Efficiency = 0.739; Inspector II - T = 0.213, ρ = 0.3714, Efficiency = 0.292; Inspector III - T = 0.227, ρ = 0.9429, Efficiency = 0.729
CEF System (ARC = 0.02): Inspector I - T = 0.369, ρ = 0.7143, Efficiency = 0.009; Inspector II - T = 0.326, ρ = 0.7143, Efficiency = 0.010; Inspector III - T = 0.353, ρ = 0.5429, Efficiency = 0.007

Table 13. Sample cost analysis. Method; time per building (mins); no.
of buildings (500 in each case); total time (hrs); wage per hour (€10.68 in each case); and cost (€):
Detailed Inspection: 80.0 min/building; 666.67 h; €7120.04
CEF System: 56.5 min/building; 470.83 h; €5028.46
UCDIM: 7.0 min/building; 58.30 h; €622.64

work_4esqeaptlvcojmqcvgmot47lya ----

Digital dental photography. Part 4: choosing a camera
I. Ahmad1

With so many cameras and systems on the market, making a choice of the right one for your practice needs is a daunting task. As described in Part 1 of this series, a digital single lens reflex (DSLR) camera is an ideal choice for dental use, enabling the taking of portraits as well as close-up or macro images of the dentition and study casts. However, for the sake of completion, some other camera systems used in dentistry are also discussed.

The number of cameras on the market is daunting and their specifications mind-boggling. The requirements of a camera for dentistry are two-fold: it must be capable of taking portraits, as well as close-up or macro images of the dentition and study casts (Figs 1-3). These dual requirements limit the types of cameras meeting these criteria. Furthermore, a camera system should be flexible, allowing adaptation for changing technology, and versatile for creative photography (Fig. 4). Therefore, to simplify the technical jungle, we only need to consider a digital single lens reflex (DSLR) camera, as discussed in Part 1, which is an ideal choice for dental use. However, for the sake of completion, a few other camera systems used in dentistry are discussed.

CAMERAS FOR DENTAL PHOTOGRAPHY

The first is an intra-oral or fibre optic camera. This is an excellent tool for a cursory tour of the oral cavity, showing patients gingival inflammation, decay and defective restorations. While its quality is adequate for displaying on a monitor, it is insufficient for permanent documentation or for archiving.
1General Dental Practitioner, The Ridgeway Dental Surgery, 173 The Ridgeway, North Harrow, Middlesex, HA2 7DF
Correspondence to: Irfan Ahmad. Email: iahmadbds@aol.com; www.IrfanAhmadTRDS.co.uk
Refereed Paper. Accepted 15 November 2008. DOI: 10.1038/sj.bdj.2009.476
©British Dental Journal 2009; 206: 575–581

IN BRIEF
• The most convenient, versatile and easy to use camera for dental applications is the digital single lens reflex (DSLR) camera.
• A high quality lens is the key factor for high resolution images.
• The number of pixels is not an indication of the image quality, but only of the size of a digital image.
• Many photographic accessories and dental armamentarium expedite a photographic session.

PRACTICE

FUNDAMENTALS OF DIGITAL DENTAL PHOTOGRAPHY
1. Digital dental photography: an overview
2. Purposes and uses
3. Principles of digital photography
4. Choosing a camera and accessories
5. Lighting
6. Camera settings
7. Extra-oral set-ups
8. Intra-oral set-ups
9. Post-image capture processing
10. Printing, publishing and presentations

Figs 1-3 A camera for dental use should be capable of taking portraits as well as close up or macro images of the dentition and dental casts
Fig. 4 A versatile camera system allows flexibility for creative images

BRITISH DENTAL JOURNAL VOLUME 206 NO. 11 JUN 13 2009 575 © 2009 Macmillan Publishers Limited. All rights reserved.

The second is the Polaroid Macro 5 SLR camera, relatively inexpensive, with automatic exposure, auto-focusing and built-in flashes. Various pre-set magnifications are offered, ranging from full face to a few teeth. The 'trademark' of Polaroid is self-developing prints, usually ready a few minutes after taking the picture. The images are low quality and cannot be electronically archived, as only one print is produced for each exposure. The film is expensive because each print contains the developing chemicals.
Before the advent of digital photography, Polaroid prints were the only method of instantly viewing a picture, but with the arrival of digital technology these cameras have become obsolete, having merely a novelty factor (Fig. 5).

The next tailored camera for dental pictures is the Kodak P712 with an EasyShare docking port printer. This camera is adapted from the Kodak budget range of digital cameras, modified specifically for dental use. Lenses, focusing, framing, exposure and flash are all pre-set and automatic. Unlike a Polaroid, images can be stored on a computer and/or immediately printed with the accompanying printer (Fig. 6). The major drawback with this 'shoot and go' camera is the fixed focal length lens, which limits flexibility and versatility. Furthermore, for the same price, a semi-professional DSLR can be purchased, offering greater possibilities and functionality.

Fig. 5 Before digital photography, Polaroid instant films were the only method of previewing an image before making the final exposure
Fig. 6 The Kodak P712 digital camera system
Fig. 7 Compact cameras are unsuitable for dental use due to the disadvantageous parallax phenomenon
Fig. 8 The body of a DSLR houses the CCD sensor and electronics
Fig. 9 The LCD viewer on the camera back allows a video display of the intended image, followed by instant viewing of the image after exposure

Other cameras are the rangefinder varieties, for example compact 'point and shoot' cameras, which are unsuitable for dentistry because they suffer from parallax (Fig. 7). Parallax is defined as the difference between the image seen in a viewfinder and that recorded by a sensor. With macro photography, as the lens moves closer to the subject the variance increases, which means that certain
teeth or parts of individual teeth may be missing on the resultant image. Some manufacturers have ingeniously overcome parallax by viewing and focusing attachments, all with limited success. This is analogous to buying a family saloon car and then customising it for formula one racing, which seems a little futile!

DIGITAL SINGLE LENS REFLEX

The most versatile camera for dental photography, and for achieving the best results, is without doubt a DSLR system. A DSLR offers TTL (through the lens) viewing and metering, precise focusing and accurate framing. The major advantage of DSLRs is that parallax is eliminated, because the viewfinder, lens and image sensor all share the same optical axis. This means that what is seen in the viewfinder is identical to what is recorded on the resulting image.

A DSLR consists of a camera body housing the sensor, LCD viewer and microprocessor, which is the 'brain' of the camera (Figs 8-9). At one time it was a luxury to have a live video display on the LCD, but nowadays this feature is almost commonplace. However, it should be remembered that the video display lags behind what is observed in the viewfinder, depending on the refresh rate of the LCD screen. The second component is the interchangeable lenses, which are selected according to the type of photographic application, for example macro, portrait, landscape, sports, wildlife, etc. A few DSLRs have fixed zoom lenses, which are usually incapable of taking macro dental images. The standard lens of a DSLR has a focal length of 50 mm; a shorter focal length lens, say 28 mm, is classified as wide angled (for example, for landscapes), while a longer focal length is a telephoto (for example, for sports or wildlife).

For dental applications, a dual-purpose lens is necessary, firstly for portraiture and secondly to focus down for close-up photography. The ideal choice is therefore a lens that combines both these features, that is, a macro-telephoto lens. A word of caution about macro lenses: many lenses have 'macro' etched on their barrels but are not true macro lenses. Even compact cameras claim macro facilities, but this only indicates close focusing facilities. A true macro is capable of producing a 1:2 or 1:1 magnification. A 1:1 magnification is the ideal and means that the image recorded by the sensor is the same size as the object in real life. For 35 mm format DSLRs, a 1:1 image usually translates to about four maxillary incisors.

Depending on the manufacturer and the arrangement of optics within the lens barrel, the focal lengths of macro-telephotos vary from 50 mm to 105 mm. Also, many sensors are smaller than the 35 mm film format and therefore have a multiplication factor. For example, attaching a 100 mm lens to a 35 mm camera body will effectively 'convert' the focal length of the lens to 150 mm; that is, the sensor has a multiplication factor of 1.5. However, some newer high-end cameras have larger sensors and therefore their lenses do not require a multiplication factor. It is important to purchase the highest quality lens that you can, preferably a proprietary make rather than a third party analogue. The lens characteristics (resolution and superior optics) are essential for high quality images, and spending a little extra money in the beginning will pay dividends in the long term. The lens characteristics should include high contrast for higher resolution, combined with superior optical quality for minimising chromatic and spherical distortions.

Fig. 10 A high quality macro-telephoto is the ideal lens for dental photography
Fig. 11 A selection of semi-professional DSLR cameras, all approximately £500 (Canon EOS 450D, Nikon D90, Fujifilm FinePix S5Pro, Pentax K200D, Olympus E-520, Sony a350, Nikon D5000, Nikon D700, Canon EOS 50D, Canon EOS 5D MKII)

Lenses with the prefix 'APO' (Fig. 10) and 'ASPH' indicate
low chromatic and spherical aberrations, respectively. Finally, auto-focus lenses may not operate in some close-up dental set-ups, and the ability to revert to manual focusing is a useful option.

Most semi-professional DSLRs offer all the specifications necessary for dental photography (Fig. 11). The salient features to scrutinise before purchasing a camera are:

1. Image sensor: CCD or CMOS with greater than six megapixels. However, most current semi-professional cameras exceed ten megapixels as standard (see Part 3)
2. Bit depth: minimum 8 bit/channel (24 total bit or colour depth); 16 bit/channel (48 total bit or colour depth) preferred (see Part 3)
3. Dynamic range of sensor: minimum 6 f stops, more than 6 f stops preferred (covered further in Part 6)
4. Dust reduction system for sensor: mitigating unwanted particle build-up on the sensor, or access to the sensor for cleaning
5. Metering: multi-pattern TTL with aperture priority mode (to be covered in Part 6)
6. Flash metering: TTL synchronisation (to be covered in Part 5)
7. White balance: automatic and manual (to be covered in Part 6)
8. Full frame capture (not mandatory if cost is prohibitive)
9. ISO range: ability to set a minimum of 100 for low noise (to be covered in Part 6)
10. Data formats: RAW, PNG, TIFF and JPEG files (to be covered in Part 9)
11. Colour domains: Adobe RGB and sRGB (to be covered in Part 6)
12. Storage media: maximum internal capacity or memory cards, greater than one gigabyte (see Part 2)
13. Interface: FireWire or other high speed transfer to computer (to be covered in Part 9).

It is difficult to recommend manufacturers or models of cameras, since the market is rapidly changing and new products are introduced annually or even every few months. The best way to choose a camera is to visit a retail showroom and check that a semi-professional DSLR meets the above criteria and specifications.
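The checklist above lends itself to a simple validation routine. The sketch below is illustrative only: the spec dictionary, its field names and the candidate values are hypothetical placeholders, not taken from any real product sheet.

```python
# Check a hypothetical camera spec against the minimum figures in the text.
MIN_MEGAPIXELS = 6            # item 1: greater than six megapixels
MIN_BITS_PER_CHANNEL = 8      # item 2: 16 bit/channel preferred
MIN_DYNAMIC_RANGE_STOPS = 6   # item 3: minimum 6 f stops
BASE_ISO = 100                # item 9: must be able to set ISO 100
MIN_STORAGE_GB = 1            # item 12: cards greater than one gigabyte
REQUIRED_FORMATS = {"RAW", "TIFF", "JPEG"}  # item 10 (PNG also listed)

def meets_minimums(spec):
    """Return True if the spec satisfies the article's minimum criteria."""
    return (
        spec["megapixels"] > MIN_MEGAPIXELS
        and spec["bits_per_channel"] >= MIN_BITS_PER_CHANNEL
        and spec["dynamic_range_stops"] >= MIN_DYNAMIC_RANGE_STOPS
        and spec["min_iso"] <= BASE_ISO           # a lower base ISO is fine
        and spec["storage_gb"] > MIN_STORAGE_GB
        and REQUIRED_FORMATS <= set(spec["formats"])
        and spec["ttl_flash_metering"]            # item 6
        and spec["manual_white_balance"]          # item 7
    )

candidate = {  # hypothetical example values
    "megapixels": 12, "bits_per_channel": 16, "dynamic_range_stops": 8,
    "min_iso": 100, "storage_gb": 4,
    "formats": {"RAW", "TIFF", "JPEG", "PNG"},
    "ttl_flash_metering": True, "manual_white_balance": True,
}
print(meets_minimums(candidate))  # True
```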
Of course, one can purchase a high-end professional DSLR (Fig. 12) with additional features, some gimmickry and functions that are superfluous for dentistry. Unless one is fastidious about image quality, it is pointless spending ten times the amount purchasing equipment with features that one will rarely use. However, to ensure a prudent long-term investment, when choosing a lens it is preferable to opt for a proprietary macro-telephoto lens of the highest optical quality. This ensures that even if the camera body becomes obsolete, the lens can still be used with a newer body purchased in the future.

IMAGE QUALITY

A key factor before choosing photographic equipment is ensuring that the items purchased integrate harmoniously and 'communicate' with each other to yield acceptable image quality. The theoretical aspect of image quality was discussed in Part 3, but now requires further elaboration in relation to the photographic hardware. Image quality is influenced by perceptual and practical factors.

Perceptual factors

Image quality is a subjective and nebulous entity, meaning different things to different people. Summarised succinctly, image quality depends on the following items, which are briefly discussed below:

1. Magnification
2. Inherent object details
3. Degree of the trained eye
4. Visual acuity and acutance
5. Psychological perception of detail
6. Circle of confusion
7. Distance of viewing
8. Viewing media.

Magnification and inherent object details (dependent on illumination) are discussed in Part 3 and Part 5, respectively. An individual trained to 'see' certain details will perceive them more than an untrained layman. This is clearly evident in dental aesthetic assessment: a clinician's eye is trained to scrutinise details pertaining to dental aesthetics far more than the average patient.
Visual acuity or perception of sharpness is a function of the eyes, which deteriorates with advancing years and is compensated by eyeglasses or magnification aids. A physical measurement of sharpness is performed by densitometers and is termed acutance. The latter is an objective measure of sharpness, not influenced by subjective idiosyncrasies.

Fig. 12 A selection of professional DSLR cameras (Nikon D3X, Sony a900, Olympus E-3, Canon EOS 1Ds MkIII), from £800 to £6,000

578 BRITISH DENTAL JOURNAL VOLUME 206 NO. 11 JUN 13 2009 © 2009 Macmillan Publishers Limited. All rights reserved.

Psychological perception of detail varies enormously, not only from individual to individual, but also intra-individually. An individual will see what his or her brain wants to perceive, depending on their social, academic, religious and intellectual psychological make-up. Furthermore, an individual may perceive the same image differently at different times depending on his or her state of mind. Factors such as stress, lethargy, influence of intoxicating substances (alcohol, psychotropic drugs), insomnia, inactivity and libido changes all affect an individual's ability to discern detail.

The circle of confusion is a phenomenon that also affects image quality. Any lens, no matter how perfect, will always represent a perfect pinpoint as a tiny blur. The circle of confusion can be reduced, but never to the original pinpoint. Therefore, an acceptable dimension is assigned to represent the original pinpoint, which varies from 0.1 mm to 0.033 mm, depending on different authorities and manufacturers. The significance of the circle of confusion is that the nearer the viewer is to an image, the smaller the circle of confusion necessary to ensure image sharpness. For example, viewing an A4 image requires a smaller circle of confusion compared to viewing a 20 × 10 foot billboard poster from a distance. The same is true for watching a film in a cinema: the further away from the screen, the sharper the image.

Additionally, viewing media also alter image detail. For example, a printed image which is converted from an RGB original to CMYK separations will have obvious detail loss compared to the original image. Finally, appearance is also affected by the five modalities of colour:

1. Object colour – colour due to white light reflected off its surface, for example a tomato appears red because all colours of the spectrum are absorbed except the colour red, which is reflected off its surface
2. Volume colour – objects such as coloured wine bottles, where the colour is related to the volume of the object
3. Aperture colour – colour of a space, for example blue sky
4. Illumination colour – transmission of white light through a coloured surface, for example viewing transparencies using a projector
5. Illuminant colour – an incandescent entity, such as a light bulb, emanating light of a specific hue.

Practical factors

The single feature about digital cameras most frequently quoted by manufacturers and retailers is the number of megapixels. But contrary to popular belief, the number of pixels does not determine the quality of a digital image: it determines the size of an image. The hardware, and subsequent software processing and manipulation, determine image quality. Returning to the acronym CPD (capture, processing, display) discussed in Part 3, how good the image looks is determined by the method of capture (a function of lens quality together with the quality and quantity of pixels), the method of in-camera processing (A/D converter) and the method used to display an image (monitor, prints).

Fig. 13 The number of pixels alone does not influence image quality. Both images are taken with 10 megapixel cameras. The top image, using an inexpensive camera, has little detail and a low dynamic range. The bottom image is with an expensive camera, and is rich in detail with a high dynamic range

Capture

The predominant factor affecting image quality is the optics of the lens, discussed previously. Furthermore, as the physical pixel size decreases to cram more onto the limited surface area of a sensor, the demands on the lens to resolve detail increase. For example, purchasing a digital camera with a 10 megapixel sensor and attaching a lens capable of only resolving 6 megapixels of detail will do little to increase image quality. The quality of the pixels is determined by the type of sensor, either CCD (charge coupled device) or CMOS (complementary metal oxide semiconductor). CCD sensors allow a larger bit depth (range of colours), higher dynamic range (increased contrast) and greater signal to noise ratios (avoiding grainy images). CMOS sensors are second-generation sensors with lower power consumption, which allow access to individual pixels. Pictures taken with two different cameras with the same number of pixels will vary enormously because the quality of pixels is a crucial factor determining image quality. It is possible to purchase a relatively inexpensive compact camera with 10 megapixels and one which is ten or twenty times the price, also with 10 megapixels.
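Part of the price gap between two 10 megapixel cameras lies in sensor size: the same pixel count spread over a small compact-camera sensor forces a far smaller pixel pitch, which raises the demands on lens resolving power and reduces the light gathered per pixel. A minimal sketch of that arithmetic; the sensor dimensions below are illustrative assumptions, not figures from the article:

```python
def pixel_pitch_um(sensor_width_mm: float, sensor_height_mm: float,
                   megapixels: float) -> float:
    """Approximate pixel pitch (micrometres) for a sensor of a given size.

    Assumes square pixels and a fill factor of 1 (an idealisation).
    """
    pixels_total = megapixels * 1e6
    area_mm2 = sensor_width_mm * sensor_height_mm
    # area per pixel in mm^2; side length converted to micrometres
    return (area_mm2 / pixels_total) ** 0.5 * 1000.0

# Illustrative sensor sizes: a small compact-camera sensor (~5.8 x 4.3 mm)
# versus an APS-C DSLR sensor (~22.2 x 14.8 mm), both carrying 10 megapixels.
compact = pixel_pitch_um(5.8, 4.3, 10)   # ~1.6 micrometres
dslr = pixel_pitch_um(22.2, 14.8, 10)    # ~5.7 micrometres
```

At the same pixel count, each DSLR pixel here collects roughly thirteen times the area of the compact pixel, one reason why images from same-megapixel cameras can differ so markedly.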
Both cameras will capture an image that is identical in size. However, the image from the inexpensive camera will be poorer in quality compared with that from the expensive camera. Although both cameras have the same number of pixels, to ensure high image quality other factors must be taken into account, such as the resolving power of the lens, the A/D converter, pixel quality, pixel size, bit depth, dynamic range, file format, degree of noise and method of display (Fig. 13).

To summarise, the number of pixels determines the size, not the quality, of an image. However, the number of pixels becomes significant when enlarging an image. For example, for a print size of 5 × 7 inches, a camera with a 3 megapixel sensor is adequate. If larger images are required, or enlargement of a part of the image is necessary, more pixels are necessary to avoid deterioration of the image quality. For dental photography, 6 megapixels are more than sufficient, allowing a high quality A4 print assuming that the image is printed without magnification.

Processing

The only method of recording a virgin, or pure analogue, image signal is to record it on film. All software and hardware used for processing adulterate the captured image signal to a lesser or greater degree. In most contemporary digital cameras, the corruption is insignificant and rarely perceptible. However, the in-camera software, which eventually extrapolates a picture from the image capture, should be sufficiently sophisticated to minimise flaws. For example, if the initial capture is 24-bit, but is processed with a 12-bit depth A/D converter, 12 bits of information or detail are lost. Furthermore, if in-camera processing is set to translate a RAW image to a low quality JPEG file format, further loss of information is inevitable.

Display

Finally, the method of display has a bearing on how the final image is perceived.
Once again, image quality is affected by the resolving power of the monitor (CRT or LCD), the type of file used to display the image, the quality of paper and the printing equipment, or the calibration [and intensity] of a projector. For the untrained eye and for most dental applications, a computer monitor, inkjet printer with photographic paper or standard 'beamer' projectors are acceptable. It is only the aficionados who will detect nuances which necessitate calibration of monitors, projectors and printers to optimise the final display.

PHOTOGRAPHIC ACCESSORIES

Besides the camera and lens, other photographic adjuncts are useful accessories to expedite dental photography.

Fig. 14 The tripod supports a mechanical stage, flash brackets and the camera. A TTL adaptor mounted on top of the camera controls the bilateral flashes and exposure is made with cable release

Fig. 15 A selection of coloured cards used as backdrops for photographic set-ups in a dental laboratory

Fig. 16 The dental armamentarium required for dental photography

The purpose of backdrops, or backgrounds, is to isolate and concentrate attention on the object being photographed. The oral environment serves as a natural backdrop and, due to limitations of space and access, in vivo backgrounds are rarely required. On occasions, a black card placed behind the maxillary anterior teeth may be useful for highlighting translucency and incisal edge characterisations. For portraiture and extra-oral pictures, backgrounds are a requisite and add interest to the composition. However, subtlety is necessary when choosing backgrounds, since flamboyant, ostentatious or lurid backgrounds detract attention from the main subject.
Any medium such as cloth, coloured papers, walls or furniture can effectively be utilised to separate and therefore highlight the main object. A background consisting of a cluster of objects causes visual confrontation and is distracting, for example photographing a patient seated in a dental chair with the entire surgery armamentarium serving as a backdrop. For dental laboratory or bench photography, backgrounds are extremely useful for blocking extraneous objects or the laboratory equipment and clutter. The simplest backdrops are coloured cards, cut to various sizes, illuminated separately from the main subject, giving a sense of separation and three-dimensionality (Fig. 15). Another useful medium is cloth such as black velvet, which absorbs all incident light, producing a uniform black background. Textured or patterned backgrounds should be avoided unless a special effect is required. Professional still life shooting tables and props with custom-made backgrounds are commercially available for expediting repetitious set-ups.

DENTAL ARMAMENTARIUM

The main items needed for intra-oral photography are cheek retractors, available from dental suppliers in unilateral and bilateral varieties. Traditionally, metal retractors were universally used for dental photography, but they are more traumatic than the plastic pliable varieties that are generally preferred. The bilateral variety is used for taking pictures of anterior teeth, while the unilateral type is useful for lateral views and pictures of posterior teeth. Intra-oral photographic mirrors are necessary for occlusal, lingual (or palatal) and lateral views of teeth. Dental photographic mirrors should be front-coated to avoid double images. Several sizes are available to accommodate various degrees of mouth opening. Other dental items necessary for photography are readily available in a dental surgery.
These include cotton wool rolls, saliva ejectors and rubber dam for isolation and moisture control (Fig. 16). An oil-free, 3-in-1 or 6-in-1 syringe delivering warm air ensures that the soft tissues and teeth are free of saliva and blood and prevents condensation or fogging on the surface of intra-oral mirrors. Another approach to preventing condensation on mirror surfaces is to pre-soak them in warm water before use. Plaque and food particles are removed by flossing and polishing with prophylaxis paste before a photographic session (unless the intention is recording biofilm or extrinsic stains). Gingival bleeding or crevicular exudate, for example following crown preparation, is arrested with retraction cord soaked in a haemostatic agent, for example buffered aluminium chloride. Astringent agents containing adrenaline or ferric compounds are avoided to prevent cardiac stimulation and black gingival residue, respectively.

Camera supports allow hands-free, no-touch protocols to be practised. This not only offers convenience, but is also conducive to sterility and disinfection procedures in the clinical environment. A tripod is an essential support item to stabilise the camera and other mounted accessories, and various types are available offering sturdiness on a solid platform. A tripod ensures precise picture framing and focusing and is conveniently moved aside once a photo session is finished. Besides tripods, if space is a paramount concern there are numerous ingenious camera support devices on the market, which can be tailored to specific surgery requirements. A visit to the local camera shop or surfing the Internet provides ample ideas and choices. As well as securing the camera to a tripod, flashes can be mounted either laterally or superiorly to the camera body via brackets, for example the macro flash bracket (Manfrotto, Italy).
Finally, the camera and flash bracket can be supported on the tripod via a graduated mechanical stage (Kaiser Phototechnik, Germany) as in Figure 14. A stage also facilitates precise focusing, especially when a particular scale of reproduction is required, for example 1:1 or 1:2. Additionally, no contact with the lenses is necessary, which also benefits a 'no touch' protocol. This set-up is also useful for extra-oral bench photographs of dental casts in the dental laboratory. For photographing radiographs a copy stand is indispensable, ensuring precise location and uniform illumination (Kaiser Phototechnik, Germany). Another accessory is a remote release cable, which can be wireless or foot controlled and is invaluable for taking pictures of surgical procedures where cross-infection control is essential. Also, an 18% grey card and a photographic colour guide are useful for dental shade analysis and white balance calibration, discussed further in Part 6.

work_4ffzdkcv45hb3c3isarckiqdui ---- BA573403.pdf

Arctic, Antarctic, and Alpine Research, Vol. 45, No. 3, 2013, pp. 330–341

Particle Size Sampling and Object-Oriented Image Analysis for Field Investigations of Snow Particle Size, Shape, and Distribution

Susanne Ingvander*‡, Ian A. Brown*, Peter Jansson*, Per Holmlund*, Cecilia Johansson†, and Gunhild Rosqvist*

*Department of Physical Geography and Quaternary Geology, Stockholm University, S-106 91 Stockholm, Sweden
†Department of Earth Science, Uppsala University, Villavägen 16, SE-752 36 Uppsala, Sweden
‡Corresponding author: susanne.ingvander@natgeo.su.se

DOI: http://dx.doi.org/10.1657/1938-4246-45.3.330

Abstract

Snow particle size is an important parameter strongly affecting snow cover broadband albedo from seasonally snow covered areas and ice sheets. It is also important in remote sensing analyses because it influences the reflectance and scattering properties of the snow. We have developed a digital image processing method for the capture and analysis of data of snow particle size and shape. The method is suitable for quick and reliable data capture in the field. Traditional methods based on visual inspection of samples have been used but do not yield quantitative data. Our method provides an alternative to both simpler and more complex methods by providing a tool that limits the subjective effect of the visual analysis and provides a quantitative particle size distribution. The method involves image analysis software and field efficient instrumentation in order to develop a complete process-chain easily implemented under field conditions. The output from the analysis is a two-dimensional analysis of particle size, shape, and distributions for each sample. The results of the segmentation process were validated against manual delineation of snow particles. The developed method improves snow particle analysis because it is quantitative, reproducible, and applicable for different types of field sites.

Introduction

Snow particle size and shape analysis is a key ingredient in snow related investigations such as, for example, avalanche risk analysis (Lehning et al., 1999), ground control for remote sensing investigations of snow cover and its properties (Wiscombe and Warren, 1980; Nolin and Dozier, 1993, 2000; Dozier and Painter, 2004; Scambos et al., 2007; Dozier et al., 2009), and in accumulation modeling studies (Kirnbauer et al., 1994; Fierz et al., 2009). Traditionally snow particle studies have largely been accomplished using either visual observations in the field (following standards outlined in e.g. Fierz et al., 2009) or by gathering samples in the field for subsequent laboratory analysis (e.g.
Albert, 2002; Gay et al., 2002). The visual observations have the benefit of providing direct results but are cumbersome and may involve some subjectivity and lower accuracy (Colbeck et al., 1990; Ingvander et al., 2010). The methods used for laboratory analysis typically involve fixating the sample in the field (requiring chemical techniques or specialist equipment) as well as transport of frozen samples to a laboratory for analysis (e.g. Albert, 2002; Gay et al., 2002; Matzl and Schneebeli, 2010). The laboratory techniques yield high-quality reproducible data but prohibit acquisition of results in the field and typically yield small sample sizes. Alternatively, sophisticated instrumentation using near infrared (NIR) imaging or specific surface area instruments can be used (e.g. Matzl and Schneebeli, 2006; Gallet et al., 2009), requiring increased logistical and financial support for the field campaign. Hence there is need for quick, reliable, and reproducible methods that combine the main advantages of both methods: the speed of visual field investigations and the high-quality reproducible data of the laboratory methods. We have explored the advantages of using digital photography in combination with digital image analysis to record and analyze snow samples in the field.

330 / ARCTIC, ANTARCTIC, AND ALPINE RESEARCH © 2013 Regents of the University of Colorado 1523-0430/6 $7.00

Imaging of snow particle size has been carried out for different purposes and has evolved over time. This evolution has resulted from improving technology such as the introduction of charge-coupled devices and the digital camera (Mellor, 1965; Sommerfeld and LaChapelle, 1970; Colbeck et al., 1990; Grenfell et al., 1994; Pulliainen, 2006; Fierz et al., 2009). Lesaffre et al. (1998) presented a method for an objective analysis of particle size images extracting the mean convex radius of the snow particles.
To improve on the two-dimensional analysis of images, pore fillers were employed to fixate the pore space of the snowpack in situ, hence preserving the three-dimensional structure of the snow for later detailed analysis (Perla, 1982). Brun and Pahaut (1991) used isooctane for fixation and post characterization of such snow samples. Such three-dimensional analysis provides rich data but is cumbersome to perform due to the fixation process and the need to transport larger samples to an environment where final analysis can be performed. Improving image analysis of snow particles thus provides a means to obtain good quality data in larger quantity and with little additional effort because the researcher is already in the field. Earlier studies of snow particle size analysis in Dronning Maud Land, Antarctica, based on digital photography were made by, for example, Gay et al. (2002) and Kärkäs et al. (2002). Gay et al. (2002) focused on the methodology of snow sampling, providing three different types of analysis based on fixation with isooctane and sample analysis in a laboratory by using classical macro-photography and digital image acquisition. These studies show that photography-based methods are useful parts of any sampling and analysis program. A benefit of the digital images is that they can easily be kept for posterity for re-analysis. Hence there is also a documentary benefit to an image-based system. We have modified, merged, and extended existing methods (Gay et al., 2002; Kärkäs et al., 2005) to arrive at the so-called Digital Snow Particle Property (DSPP) method, which requires a minimum of both instrument support and analysis time. The specific goal has been to provide opportunities for both quick field sampling and preliminary analysis, and subsequent re-analysis of the collected data using advanced image processing techniques. The method is based on digital photography of snow samples on a high-accuracy reference plate.
The resulting images are processed in an object-oriented image analysis program yielding several parameters including snow particle size and shape. The result is a non-subjective method that yields statistical measures of the snow particle samples allowing for quantitative analysis and inter-sample comparison. Since the digital analysis is made using adjustable parameters, results can be double checked or re-analyzed with different settings at a later stage. The main limitation of the photographic method is that the analysis is two-dimensional. The data resulting from the analysis allow us to study the distribution within each sample and thereby enable comparisons between distributions from multiple samples. With our method we can obtain large quantities of field data with minimal equipment requirements and can provide quick and reproducible digital analysis of gathered data. This method can thus be used as a simple way to gain independent data for more complex studies of snow particle size. The novelty of this method in comparison with previous approaches is the use of a high-precision sampling grid for accurate pixel-size calibration and the pixel-based object-oriented image analysis method, which enables analysis of multiple size parameters for a range of grains in each sample. This also enables us to establish size distributions for each sample and also minimizes the subjective input by the observer in ordinary visual analysis. Furthermore, given the extremely large number of samples resulting, the DSPP approach facilitates the use of rigorous statistical analyses of the particle size data in contrast to approaches that focus on single grains or limited sample sizes. In this paper we have chosen to use the term snow particle size rather than snow grain size for the objects we study. A snow grain is normally a single crystal (e.g. Fierz et al., 2009), though multiple crystal examples have been observed (e.g. Rolland du Roscoat et al., 2011).
Furthermore, snow metamorphoses in situ as molecules transfer between individual grains in contact, resulting in the bonding of grains across flat surfaces (Colbeck, 1998; Rolland du Roscoat et al., 2011). We have therefore opted to use the more general term particle to indicate that we do not distinguish between objects consisting of single grains (or crystals) or multi-grain bonded particles. These different grain types are very difficult to distinguish using traditional visual or other in situ methods and require appropriate and detailed instrumentation for determination. Observations of grains in situ and in laboratory environments, using different observation techniques, have led to the development of a wide range of descriptive parameters. Different parameters used to describe snow particle sizes include: grain radius (Wiscombe and Warren, 1980), mean convex radius (Fily et al., 1997), and optically equivalent radius (Nolin and Dozier, 1993; Painter et al., 2007). Increasingly, Specific Surface Area (SSA) is becoming a favored parameter (Legagneux et al., 2002; Domine et al., 2008; Picard et al., 2009; Jacobi et al., 2010). The SSA of snow is used in optical equivalent analysis of snow grain size as it is strongly correlated to the spectral reflectance, an important factor in remote sensing of snow (Domine et al., 2008; Painter et al., 2009; Gallet et al., 2009). The SSA parameter normalizes the surface area to the snow particle volume, as SSA is the relation between the surface area and the mass of a snow grain (Picard et al., 2009).

SUSANNE INGVANDER ET AL. / 331

A plethora of SSA approaches have been implemented. SSA has been measured by extracting snow samples and placing them beneath a near-infrared laser and measuring diode in an integrating sphere (e.g. Gallet et al., 2009). In contrast, Legagneux et al. (2002) measured snow SSA using methane absorption at the temperature of liquid nitrogen (77.1 K) in the laboratory.
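For the idealised case of a spherical ice grain, the surface-area-to-mass relation above has a closed form, SSA = 3/(ρ_ice · r), which also yields an optically equivalent radius when inverted. A minimal sketch; the 100 µm radius is an illustrative value, not a measurement from the text:

```python
RHO_ICE = 917.0  # density of ice, kg m^-3

def ssa_of_sphere(radius_m: float) -> float:
    """SSA (m^2 kg^-1) of an ice sphere: surface area / mass = 3 / (rho * r)."""
    return 3.0 / (RHO_ICE * radius_m)

def optically_equivalent_radius(ssa: float) -> float:
    """Invert the relation: the sphere radius having the same SSA."""
    return 3.0 / (RHO_ICE * ssa)

ssa = ssa_of_sphere(100e-6)  # a 100-micrometre grain radius, ~33 m^2/kg
```

This spherical idealisation is what links SSA-based measurements to the optically equivalent radius used in remote sensing work; real particles, being non-spherical, have a larger SSA than a sphere of the same volume.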
Matzl and Schneebeli (2010) compared gas absorption, standard micro X-ray tomography, and stereological analysis of vertical thin sections using micro X-ray tomography to validate the latter. This latter approach required the estimate of three-dimensional SSA from two-dimensional data, an approach with some relevance for the method described below. That said, the aim of the method presented here is to improve and reduce subjectivity in visual analysis of snow particles, and not to duplicate or replace existing highly sophisticated methods such as the measurement of specific surface area in the laboratory or the use of highly specialized equipment.

The Digital Snow Particle Property (DSPP) Method

The DSPP method was developed to meet our needs for fast, on-the-fly sampling and measurement, and for an efficient processing method for field determination of sampled surface snow particle sizes during the Swedish Japanese Antarctic Expedition 2007/2008 (Ingvander, unpublished data). There was a need to be able to gather information in a short time period since sampling could only be managed during brief stops of the traverse vehicles. The time constraints made accurate manual observations difficult. We also wished to make a large number of observations along the traverse, which precluded the use of the traditional sampling methods requiring samples to be carried back to a laboratory for analysis. In addition, we also aimed at gathering as much information as possible about not only size but also shape of the snow particles. The use of digital photography hence provided most of the positive aspects of the established sampling techniques. The speed of the field sampling and recording of samples in the field greatly limited the time available for sublimation that could have changed particle size and shape (e.g. Fujii and Kusunoki, 1982; Albert, 2002; Déry and Yau, 2002; Neumann et al., 2009).
Although most of the details of the analysis lie in the digital image analysis, here we describe the entire workflow of the method. A digital analysis for particle size estimates consists of three main steps: (1) digital image acquisition of the particles to be analyzed; (2) digital analysis of the images; and (3) statistical analysis of the retrieved data. The complete workflow of the DSPP method is displayed in Figure 1. Our field sampling instrumentation system consisted of a Canon EOS 350D digital camera mounted on a tripod that fixed the camera inside a transparent squared wind shelter (Fig. 3). A µm-accurate millimeter dot grid reference plate was used to support image calibration and rectification of the image during post-processing. A snow sample was collected from the natural snow cover using a silicon spatula and placed on the reference plate by a gentle shake to disperse the particles over the glass plate. The plate was immediately photographed to prevent melting and/or sublimation.

FIGURE 1. Flowchart of the digital snow grain properties (DSPP) method, from sample-site metadata retrieval (ID, position, elevation, temperature, camera height, sun angle) through sample photography (three parallel snow samples), pixel size determination and image crop, to image segmentation (scale parameter, shape, compactness), classification (area, brightness, shape factor), and export of individual object statistics (mean, mode, standard deviation). See text for details.

FIGURE 2. Scatter plot of equation-derived pixel size (estimated pixel size) (from correlating camera elevation and pixel size for sample images) and actual pixel size (pixel size) derived by implementing Equation (1) for 27 grid samples during the Japanese Swedish Traverse 2007/2008. R² = 0.84 and root mean squared error (RMSE) = 0.0012.
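The sample-site metadata recorded in step 1 of the Figure 1 workflow (ID, position, elevation, temperature, camera height, sun angle) maps naturally onto a small record type. A sketch; the field names follow the flowchart labels and the values are invented:

```python
from dataclasses import dataclass

@dataclass
class SampleMetadata:
    """Per-site metadata captured alongside each set of sample photographs."""
    sample_id: str
    position: tuple        # (latitude, longitude) in decimal degrees
    elevation_m: float     # site elevation above sea level
    temperature_c: float   # air temperature at sampling time
    camera_height_m: float # camera-to-plate distance, used for pixel-size estimation
    sun_angle_deg: float   # solar elevation, relevant for natural-light contrast

# Invented example record for one traverse stop
meta = SampleMetadata("S01", (-74.5, 11.2), 2750.0, -25.3, 0.45, 15.0)
```

Keeping the camera height in the record is what later allows pixel size to be regressed against camera elevation for images lacking a resolvable grid.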
The image resolution was calculated in two different ways based on the sampling procedure: either by recording the distance between the camera and the sample glass (camera height) in order to calculate the pixel size in the images, or by counting the number of pixels between the millimeter markers on the reference plate. The image analysis is performed by object-oriented image analysis software; we used Definiens Developer 7.0 in our study. The digital analysis is based on segmentation to identify each snow particle as an object and then apply different algorithms to extract a series of generic parameters such as width, length, area, circumference, etc. The result of the analysis is a comprehensive data set of all measures for all identified objects. No additional light source was added to the sample equipment as sufficient natural lighting was present and generated adequate contrast between the snow particles and the background in the images. The object-oriented image analysis extracts every object in the images, generating a size range and distribution within each sample. This enables statistical analysis of the size distribution within each sample generating exact size ranges, in contrast to visual/manual classification systems that generate a mean size within fixed size ranges for each class. Measured snow particles are thus binned based on the resolution of the image and the size of the objects to visualize size distribution.

The extraction of the sample from the snowpack using a spatula may result in the destruction of bonds between grains in some cases. Obviously, care must be taken during sample extraction. This potential limitation to the method can be addressed in a number of ways. Visual inspection of the photographs can be used to assess the validity of the grain size distribution, and maximum and minimum particle size thresholds adjusted to exclude extraneous objects.
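The binning of measured particles into a size distribution, together with the per-sample statistics (mean, mode, standard deviation) named in the Figure 1 workflow, can be sketched as follows. The particle areas and bin width are invented illustrative values; in practice the bin width would follow from the image resolution:

```python
import statistics

def size_distribution(areas_mm2, bin_width):
    """Bin particle areas into a size distribution and summarise the sample.

    Returns the histogram (bin index -> count) plus summary statistics of the
    kind exported per sample in the DSPP workflow.
    """
    bins = {}
    for a in areas_mm2:
        b = int(a / bin_width)
        bins[b] = bins.get(b, 0) + 1
    modal_bin = max(bins, key=bins.get)
    summary = {
        "n": len(areas_mm2),
        "mean": statistics.mean(areas_mm2),
        "st_dev": statistics.pstdev(areas_mm2),
        # the modal size class, expressed as its (lower, upper) bounds in mm^2
        "modal_bin_mm2": (modal_bin * bin_width, (modal_bin + 1) * bin_width),
    }
    return bins, summary

# Invented particle areas (mm^2) from one hypothetical sample image
areas = [0.021, 0.035, 0.034, 0.050, 0.041, 0.038, 0.029, 0.044]
bins, summary = size_distribution(areas, bin_width=0.01)
```

Because every detected object contributes to the histogram, distributions from parallel samples can be compared directly rather than through a single class label per sample.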
The image may be subset to remove regions that clearly exhibit damaged particles. Multiple samples and images are typically taken to improve the data quality, and the very large number of particles resulting from the analysis, together with the potential to use sophisticated statistical analyses a posteriori, means that the effects of damaged samples can be ameliorated.

In the following we will outline the digital processing steps and the resulting measures of particle size and shape. This is important since we need to show how our measures tie to standard methods for snow particle analysis.

A Brief Overview of Digital Image Analysis

Object-oriented image analysis involves applying a series of image processing operations to the images as described in the following processing chain and visualized in Figure 1.

IMAGE PREPARATION

The digital analysis requires a pixel-based sharp digital image. In this study we used JPEG and RAW formats. Note that JPEG compression is destructive, and these effects have to be considered. The image must first be cropped in image processing software in order to remove the reflective border areas of the calibration glass plate and to remove areas exhibiting snow particle clustering (multiple particles in aggregates), melt features, or frost tracks on the glass.

IMAGE RESOLUTION

The resolution (mean pixel size) (S̄p) of each image is calculated by counting the number of pixels between each millimeter marker on the sampling grid in four directions (0, 90, 180, and 270°) at three markers (P1, P2, and P3) in each image [Equation (1)] (step 2 in the DSPP method):

S̄p = (Ptot · lmax) / (Σ_{i=1}^{4} d_iP1 + Σ_{i=1}^{4} d_iP2 + Σ_{i=1}^{4} d_iP3)   (1)

where Ptot is the total number of points where the numbers of pixels were counted, and lmax is the number of pixel distances counted at each point. By dividing 1 mm by the total number of pixels divided by the number of pixel distances counted we receive the resolution of the image.
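Reading Equation (1) as "the number of 1 mm distances counted divided by the total pixel count" — three markers times four directions giving twelve distances — the mean pixel size in millimetres per pixel can be computed as below. The counts are illustrative values, not data from the study:

```python
def mean_pixel_size_mm(counts_per_marker):
    """Mean pixel size (mm/pixel) following the reading of Equation (1) above.

    counts_per_marker holds, for each of the three markers (P1, P2, P3), the
    pixel counts measured across a 1 mm grid spacing in the four directions
    (0, 90, 180, 270 degrees).  The number of 1 mm distances divided by the
    total pixel count gives millimetres per pixel.
    """
    distances = [d for marker in counts_per_marker for d in marker]
    return len(distances) / sum(distances)

# Illustrative counts of roughly 50 pixels per millimetre:
counts = [[50, 51, 49, 50], [52, 50, 50, 49], [50, 50, 51, 48]]
pixel_size = mean_pixel_size_mm(counts)  # 12 distances / 600 pixels = 0.02 mm
```

Averaging over three markers and four directions also absorbs small perspective distortions before the image is rectified.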
For extensive data sets with registered camera elevation, the pixel resolution can be calculated by correlating the camera elevation with pixel resolution for reference images and implementing the equation on the remaining images. Figure 2 shows an example of equation-derived pixel size and actual pixel size derived by using Equation (1), generating a root mean square (RMS) error of 0.0012.

FIGURE 3. Field implementation of the instrumentation for using the DSPP method. (a) The fixed camera and the sample glass underlain by a dark reference plate to improve contrast; (b) example of surface snow sampling for photography; and (c) the micrometer accurate dot grid used. It is here equipped with a gray scale for light condition comparison.

FIGURE 4. Examples of the image analysis processes: (a) raw image, which has been cropped to reduce size and increase particle visibility; (b) image after segmentation; (c) the segmented image after classification identifying segments corresponding to particles; and (d) resulting distribution from the classified image (note that the distribution shape is poor due to the low number of particles (N = 88) in the sample image).

SEGMENTATION

Using Definiens Developer 7.0 we segmented, then classified the snow particles. Definiens Developer image segmentation and classification have been used in a range of image processing applications (Benz and Pottier, 2001; Schiewe, 2002; van der Sande et al., 2003; Benz et al., 2004). In the following we focus on the Definiens Developer 7.0 software parameter definitions; all information below is based on this source unless otherwise stated. The segmentation and classification processes are integral to the program and are described fully in the Definiens user manual (Definiens, 2008) and, with emphasis on the segmentation process, in Baatz and Schäpe (2000). First the image is segmented using a multi-resolution segmentation algorithm in order to extract discrete
334 / ARCTIC, ANTARCTIC, AND ALPINE RESEARCH

objects in the image (Figs. 1 and 4). Multiple-resolution segmentation is performed by grouping homogeneous pixels into regions from an evenly spaced set of seed cells in the image, using user-defined segmentation parameters (Definiens, 2008). Multiresolution segmentation operates at multiple scales, thereby avoiding bias related to segment size. In the first run, each pixel is considered as being a seed cell. Pixels are then merged with surrounding pixels with the same attributes as defined by the segmentation parameters chosen (Baatz and Schäpe, 2000). The grouped pixels are then pairwise merged into regions in iterations that continue until no further merging is possible according to the given rule set (this is the so-called multiresolution approach). The limiting factor for the merge (resolution) is based on the scale parameter set by the user, which is, in turn, based on the pixel size of the image. Higher scale parameter values generate larger, more diverse objects, whereas a smaller scale parameter generates small, homogeneous objects. Segmentation parameters are chosen in order to capture the features in the image but constrained so as not to divide the objects to be analyzed into smaller features. The original image is segmented by two types of segmentation values: the shape, which determines to what level (in percent) the shape of the object contributes to the homogeneity of the image; and the compactness, which is a function of the homogeneity within the brightness and color of the objects (smoothness and compactness). To summarize, when the shape is set to 1, the segmentation is optimized for the shape of the objects, whereas if the shape is set to 0, the segmentation is optimized for the compactness and smoothness of the objects in the image. Testing and validation are appropriate when selecting these parameters.
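The core idea of growing regions from seed pixels and merging neighbours that satisfy a homogeneity rule can be illustrated with a deliberately simplified stand-in: a threshold-based flood fill. This is not the Definiens multiresolution algorithm (which merges pairwise over multiple scales with shape and compactness criteria); it only demonstrates how a scale-like threshold controls region growth.

```python
def segment(image, scale):
    """Simplified stand-in for region-based segmentation: start a new
    region at each unlabeled pixel and absorb 4-neighbours whose
    brightness differs by less than `scale`."""
    h, w = len(image), len(image[0])
    label = [[None] * w for _ in range(h)]
    regions = []
    for y in range(h):
        for x in range(w):
            if label[y][x] is None:
                rid = len(regions)           # grow a new region from this seed
                stack, pixels = [(y, x)], []
                label[y][x] = rid
                while stack:
                    cy, cx = stack.pop()
                    pixels.append((cy, cx))
                    for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                        if 0 <= ny < h and 0 <= nx < w and label[ny][nx] is None \
                           and abs(image[ny][nx] - image[cy][cx]) < scale:
                            label[ny][nx] = rid
                            stack.append((ny, nx))
                regions.append(pixels)
    return regions

# A bright 2x2 "particle" on a dark background:
img = [[20,  20,  20, 20],
       [20, 200, 210, 20],
       [20, 205, 200, 20],
       [20,  20,  20, 20]]
regions = segment(img, scale=50)   # background and particle separate
```

A larger `scale` lets more dissimilar neighbours merge and produces fewer, more heterogeneous regions, mirroring the effect of the Definiens scale parameter described above.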
CLASSIFICATION

The segmented images are classified by: (1) brightness, to contrast the snow particle against the background; (2) area, to reduce the influence of clusters and remove the millimeter markers; and (3) shape index, to determine the shape of the particle and remove elongated reflections in the sampling glass. This operation is performed by assigning a class with relevant threshold values for the objects to be extracted. The snow class was determined by extracting minimum and maximum values for snow particle objects and determining the threshold at multiple levels, including small snow particles and excluding clusters of particles. The classification is mostly consistent, but occasionally the brightness has to be adjusted for incident light in the image or the air content in the particle, which affects the transparency of the particles. The size range may also be adjusted for the presence of particles of extreme sizes. The minimum area value used in the classification of snow particles was 0.015 mm², which is just larger than the size of the millimeter markers of the sampling grid. The segmentation and classification values vary with snow and sample type, and values for the different regions are presented in Table 1. The segmented-classification processing can be automated or, if necessary, run interactively to allow the images to be re-segmented by changing parameter settings.

TABLE 1. Five different rule sets of segmentation and classification values used in Definiens Developer 7.0 to optimize grain size retrieval at four separate field sites. The segmentation scale determines the size of the segmented objects, while compactness and shape determine the spectral or spatial homogeneity of the objects. The classification values of area, brightness, and shape of the objects determine whether an object is a snow particle or not.

                     Segmentation values         Classification values
Region               Scale  Compactness  Shape   Area (mm²)   Brightness  Shape (8-bit relative scale)
Antarctica coast      50     0.4          0.6    0.025–13.6   136–255     0–2.6
Antarctica plateau    50     0.4          0.6    0.015–13.6   145–255     0–2.6
Järämä 1             150     0.4          0.6    0.035–20     116–255     0–2.6
Järämä 2              50     0.2          0.8    0.035–20     116–255     0–2.6
Tarfala               30     0.4          0.6    0.035–13.6   200–255     0–2.6

PARAMETER RETRIEVAL

The sizes of the objects classified as snow particles are calculated by identifying separate objects and using generic shape algorithms. The parameters are divided into different levels (I–III) based on how complex the calculations of the required parameter are (how many of the prior calculations are necessary in the calculation) (Fig. 5). Based on the first-level data (I), size and shape parameters can be calculated by the level II and III algorithms. The most basic shape parameters are border length (BL) and area (A). As the pixel resolution is set initially in the program, the area is calculated as the area of each pixel (Ap) times the number of pixels in each object (np). The border length is simply the perimeter of each particle. Further algorithms at the basic level are the bounding box and eigenvalue calculations. The bounding box (BB) is the smallest rectangular area that encloses the object of interest. The geometrical definition is the difference between maximum and minimum coordinates in the x and y directions for each object (Fig. 5). The final algorithm on level I is based on a tilted bounding box delimited by eigenvalues (λ) derived from the spatial distribution of the object. At the second level, the calculations are based on the values from the first-level equations.
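The classification step amounts to interval tests on three object features. A minimal sketch (the dictionary layout and segment fields are our own illustration, not the Definiens API), using the Antarctica plateau thresholds from Table 1:

```python
# Rule set in the spirit of Table 1 (Antarctica plateau values): an object
# is a snow particle only if area, brightness and shape index all fall
# inside the class thresholds.
RULES = {"area_mm2": (0.015, 13.6), "brightness": (145, 255), "shape": (0.0, 2.6)}

def is_snow_particle(segment, rules=RULES):
    for key, (lo, hi) in rules.items():
        if not (lo <= segment[key] <= hi):
            return False
    return True

segments = [
    {"area_mm2": 0.5,  "brightness": 200, "shape": 1.2},  # plausible particle
    {"area_mm2": 0.01, "brightness": 210, "shape": 1.0},  # too small: mm marker
    {"area_mm2": 2.0,  "brightness": 180, "shape": 4.0},  # elongated reflection
]
particles = [s for s in segments if is_snow_particle(s)]
```

Swapping in a different row of Table 1 reproduces the per-site tuning described in the text, e.g. the higher brightness floor used for Tarfala.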
An example of a second-level equation is the length/width ratio, which is calculated using BB and λ: the ratio between the longest and the shortest axis is calculated for both BB and λ, and the smaller value is the featured value. The largest enclosed ellipse (LEE) and smallest enclosing ellipse (SEE) are based on the eigenvalues and an ellipse with the same area as the object (OE). The OE is downscaled in order to be completely surrounded by the object (LEE) or upscaled to surround the object (SEE). The returned result is the ratio between the radius of the original ellipse and the radius of the SEE or LEE ellipse. This value describes the complexity of the object. The third level of equations is based on the results of the level II equations. As the smaller of the BB- and λ-based values determines the length-to-width ratio, the values retrieved (depending on which is the smaller) are the length and width, respectively.

FIGURE 5. Basic and derived parameters in Definiens Developer 7.0 used for generating snow particle size parameter data sets from collected images (Definiens, 2008). See text for a detailed description.

The DSPP method has been tested in different environments to verify its applicability outside of Antarctica, for which it was originally designed. We therefore briefly summarize our experiences from case studies.

Processing Examples

The method has so far been used at three very different sites: Dronning Maud Land, Antarctica (Ingvander et al., 2010), as well as Järämä (Ingvander et al., 2012) and Tarfala in northern Sweden (Ingvander et al., 2013). The Antarctic data set consists of 62 surface samples, ten 1-m pits sampled at 10-cm resolution, and 4 grid net samples (9 samples in a square grid at 10-m resolution) along the Japanese Swedish Traverse 2007/2008 (JASE).
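The bounding-box and eigenvalue routes to a length/width ratio can be sketched from pixel coordinates alone. We assume here that axis lengths scale with the square root of the covariance eigenvalues; Definiens' internal formula may differ, so this is an illustration of the principle, not the software's algorithm.

```python
def bbox_ratio(pixels):
    """Long/short side ratio of the axis-aligned bounding box (level I)."""
    xs = [p[0] for p in pixels]; ys = [p[1] for p in pixels]
    dx = max(xs) - min(xs) + 1
    dy = max(ys) - min(ys) + 1
    return max(dx, dy) / min(dx, dy)

def eigen_ratio(pixels):
    """Axis ratio from the eigenvalues of the pixel-coordinate covariance
    (the 'tilted bounding box' route); assumes axis ~ sqrt(eigenvalue)."""
    n = len(pixels)
    mx = sum(p[0] for p in pixels) / n
    my = sum(p[1] for p in pixels) / n
    sxx = sum((p[0] - mx) ** 2 for p in pixels) / n
    syy = sum((p[1] - my) ** 2 for p in pixels) / n
    sxy = sum((p[0] - mx) * (p[1] - my) for p in pixels) / n
    t, d = sxx + syy, sxx * syy - sxy * sxy
    l1 = t / 2 + ((t / 2) ** 2 - d) ** 0.5   # larger eigenvalue
    l2 = t / 2 - ((t / 2) ** 2 - d) ** 0.5   # smaller eigenvalue
    return (l1 / l2) ** 0.5

def length_width_ratio(pixels):
    # the smaller of the two candidate ratios is the featured value
    return min(bbox_ratio(pixels), eigen_ratio(pixels))

# An 8-pixel-long, 4-pixel-wide axis-aligned rectangle:
rect = [(x, y) for x in range(8) for y in range(4)]
```

For this axis-aligned rectangle the bounding-box ratio (2.0) is the smaller of the two candidates and so becomes the featured value, from which length and width follow at level III.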
The Järämä data set, consisting of surface and pit samples, was collected for a comparison between DSPP method results and a long-term monitoring data set collected by Abisko Scientific Research Station using their simplified classification system (Ingvander et al., 2012). The Tarfala data set was collected on Storglaciären, a glacier in the northern Swedish mountains where long-term snow studies have been performed as part of the ongoing mass balance program (e.g., Jansson and Pettersson, 2007), as part of a high-resolution study of seasonal snow accumulation (Jansson et al., 2007). The data explored by us thus span several different types of snow regimes, from high-elevation subarctic mountains to forest snow cover as well as the extreme Antarctic snow conditions. Data from all three field sites have been used in order to develop and improve the method. All data from the three sites have been collated and plotted as size distributions in Figure 6 in order to illustrate the differences in snow particle size between the three sampling sites. Segmentation and classification parameters (rule sets) for Antarctica (coastal and polar plateau areas) and for Järämä and Tarfala are presented in Table 1. The snow sampled at Järämä consisted of snow with widely different particle sizes, which required analysis with different rule sets (Ingvander et al., 2012). The difference in brightness values is caused by the occurrence of melt features when sampling at temperatures above 0 °C and by differences in lighting conditions during the different sampling sessions. Figure 7 shows the size distribution within each group of samples. The number of objects analyzed in each field campaign is also evident and depends on the sizes of the snow particles in each sample (small particle sizes allow more objects to be analyzed in each sample).
FIGURE 6. Snow particle size distributions (particle length, mm, versus number of particles) from (a) the Antarctic traverse (N = 52,347); (b) Järämä (N = 1403); and (c) Tarfala (N = 1067). See text for a discussion.

Despite the number of particles in each distribution, there is a distinct negative skewness in all distributions. We will discuss this further below. Two examples of different snow size samples are presented in Figure 7. The original sample 1 image of rounded particles (Fig. 7, part a) is segmented and classified according to the Järämä 2 rule set, whereas the original sample 2 image of depth hoar (part b) is classified using the Järämä 1 rule set. The length results from the classified images (parts c and d) were binned using the maximum object size divided by the resolution for each image. The objects are binned separately according to resolution and object sizes. The bin sizes used in Figure 7 are 14.66 in part e and 15.57 in part f in the size distribution histograms. The interpreted size from visual observation and the mean DSPP size are presented in Figure 7. The results show that visual interpretation may overestimate the sizes of small particles and underestimate the sizes of larger particles. Since the segmentation and classification are automatic, clustered particles may enter the distribution. This is the reason for the long distribution tail toward larger particle sizes (Fig. 7). This means that some caution must be exercised when using a distribution to assign a particle size value. The mode provides the dominant value, while the mean is affected by the tail. In order to test reproducibility, a re-analysis was made in which the images were rotated 90°; this generated slightly deviating results. This deviation is caused by the software analysis algorithm.
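The binning rule as we read it (bin count set by the maximum object size divided by the pixel resolution) and the mode-versus-mean contrast can be sketched as follows. The sample lengths and resolution are hypothetical; a few large "clustered" objects pull the mean above the mode, as in the distributions discussed above.

```python
def bin_lengths(lengths_mm, resolution_mm):
    """Histogram with the number of bins set by the maximum object size
    divided by the pixel resolution (our reading of the binning rule)."""
    n_bins = max(1, round(max(lengths_mm) / resolution_mm))
    width = max(lengths_mm) / n_bins
    counts = [0] * n_bins
    for length in lengths_mm:
        counts[min(int(length / width), n_bins - 1)] += 1
    return counts, width

def mode_and_mean(lengths_mm, resolution_mm):
    counts, width = bin_lengths(lengths_mm, resolution_mm)
    k = counts.index(max(counts))
    mode = (k + 0.5) * width              # centre of the most populated bin
    mean = sum(lengths_mm) / len(lengths_mm)
    return mode, mean

# Hypothetical sample: many ~1 mm grains plus a clustered-particle tail at 4 mm
sample = [1.0] * 80 + [4.0] * 20
mode, mean = mode_and_mean(sample, resolution_mm=0.25)
```

Here the mode stays near the dominant 1-mm population while the mean is dragged toward the cluster tail, which is why the text recommends the mode when assigning a single particle size.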
The software scans the image in a predetermined way; if the image orientation is altered, the seed cell positions are altered, which means the segmentation develops differently. To obtain a length, the Definiens Developer 7.0 software uses either the bounding box or the eigenvalues as a basis for the calculation (Fig. 5). A rotation of the object may thus result in a switch of base parameter for the calculation of length, which may introduce smaller deviations. Figure 8 shows how the length and area parameters change when rotating the image. The area does not show any appreciable deviations (R² = 0.99), but there is a deviation in length for the larger particle sizes, although the correlation is still good (R² = 0.97). This behavior originates from the fact that area comes from a first-level equation, whereas length is calculated from a third-level equation (Fig. 5). The length parameter is based on the relation between the bounding box and eigenvalues through a derived length/width ratio value. We deem this deviation to be of little significance, but it indicates how important knowledge of software algorithms can be. The differences in the means of area, length, and width are presented in Table 2. We validated the method against visual interpretation using manual digitalization of snow particles and compared the manually digitalized snow particle sizes with the DSPP-retrieved particle sizes, generating an R² = 0.98 and an RMSE of 0.071 for the length parameter (Fig. 9). It is important that the DSPP results are comparable to other investigations. The length parameter is comparable to the International Classification of Seasonal Snow on the Ground (Colbeck et al., 1990; Fierz et al., 2009), where the definition of snow grain size is the greatest extension of the grain. When identifying the average particle size, we believe the mode of our distributions is to be compared with the particle size concept of the snow classification.
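The validation against manual digitalization uses standard goodness-of-fit measures; a minimal sketch with the textbook definitions of R² and RMSE (the manual/DSPP length pairs below are hypothetical, not the study's data):

```python
def r_squared(obs, pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    m = sum(obs) / len(obs)
    ss_res = sum((o - p) ** 2 for o, p in zip(obs, pred))
    ss_tot = sum((o - m) ** 2 for o in obs)
    return 1 - ss_res / ss_tot

def rmse(obs, pred):
    """Root mean square error between observed and predicted values."""
    return (sum((o - p) ** 2 for o, p in zip(obs, pred)) / len(obs)) ** 0.5

# Hypothetical manually digitized vs. DSPP-retrieved lengths (mm):
manual = [0.50, 0.80, 1.10, 1.60, 2.00]
dspp   = [0.52, 0.78, 1.12, 1.58, 2.05]
```

Values close to the study's R² = 0.98 and RMSE = 0.071 mm for length indicate that the automated retrieval tracks the manual reference closely.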
Furthermore, the size parameters can be used to calculate the geometry of the objects and thereby determine the shape of the particles in the samples. For example, the length/width ratio can be used to determine the shape of the objects classified as snow particles. This shape parameter can be used to analyze morphology and shape-altering processes such as wind transport (Kikuchi et al., 2005). Furthermore, shape differences may be useful in remote sensing applications (Mätzler, 1996).

Discussion and Recommendations

The DSPP method seems to successfully fulfill the basic requirements for analyzing snow particle size in the field. The method is cost effective, accurate, and delivers the size range (distribution) for each sample. It is an extension of existing methods using commercial instrumentation and software, which enables reproducibility within the scientific community. The developed DSPP method facilitates objective analysis of data from different types of snow and geographical regions. By introducing a completely automatic method, comparison of different data sets is possible independent of the analyzer. The method generates extensive data sets that improve the data that can be extracted and thereby offers a range of alternative measurements. The number of data points retrieved also improves the statistical reliability. Studying the distribution of the data shows a negative skewness in Figure 6. This information implies that the mean value may not be representative when analyzing snow grain size, and this should be taken into account when comparing to visual analysis of snow samples, which usually provides average sizes. The primary advantages of the DSPP method are the range of descriptive statistics retrieved and the removal of visually subjective components of the snow particle size analysis. This allows users to apply the method widely. We recommend that the nomenclature used in this paper be carefully considered so as to avoid confusion in using the terms particle and grain in relation to what is actually studied.

FIGURE 7. Snow grain images from Järämä in northern Sweden. The first column shows a sample of surface snow: (a) raw image; (b) classified image; and (c) resulting particle length distribution. The second column shows a sample identified as depth hoar: (d) raw image; (e) classified image; and (f) resulting particle length distribution. The values at the bottom of each column show in-field visually interpreted and DSPP-derived size values.

FIGURE 8. Scatter plot of objects classified as snow particles in a test image before and after a 90° rotation for (a) particle length (R² = 0.97) and (b) particle area (R² = 0.99).

TABLE 2. The resulting mean sizes for the three parameters (area [mm²], length [mm], and width [mm]) in an original example image and the results from a 90° rotation of the same image.

Parameter (mean)   Original   90° rotation
Area (mm²)         0.31       0.35
Length (mm)        0.91       0.93
Width (mm)         0.61       0.62

FIGURE 9. For validation purposes, images were digitized using visual interpretation (manual digitalization of particle size) and correlated using linear regression with the DSPP-retrieved particle sizes. Twenty objects in four images were used for the analysis of the size parameters area, length, and width (R² = 0.99 and RMSE = 0.044 for area, R² = 0.98 and RMSE = 0.071 for length, and R² = 0.91 and RMSE = 0.089 for width).

The main limitation of the developed method is the two-dimensional analysis of three-dimensional natural snow objects. Furthermore, the georectification by pixel calculation reduces the accuracy, but this could be prevented by using a fixed distance between the camera and the sample grid.
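The rotation-sensitivity test (Fig. 8, Table 2) compares object parameters before and after a 90° rotation. In an idealized sketch on fixed pixel sets, area and bounding-box extent are exactly rotation invariant; the deviations reported in the study arise because the seed positions, and hence the segmentation itself, change with orientation, which this sketch does not model.

```python
def rotate90(pixels):
    """Rotate the pixel coordinates of a binary object by 90 degrees."""
    return [(-y, x) for x, y in pixels]

def area(pixels, pixel_area=1.0):
    """Level I area: pixel area times number of pixels in the object."""
    return len(pixels) * pixel_area

def bbox_length(pixels):
    """Longest side of the axis-aligned bounding box, in pixels."""
    xs = [p[0] for p in pixels]; ys = [p[1] for p in pixels]
    return max(max(xs) - min(xs) + 1, max(ys) - min(ys) + 1)

# An 8 x 3 pixel object and its 90-degree rotation:
obj = [(x, y) for x in range(8) for y in range(3)]
rot = rotate90(obj)
```

Comparing `area` and `bbox_length` before and after rotation mirrors the scatter plots of Figure 8; any spread beyond this identity in the real data is attributable to re-segmentation, not to the parameter formulas.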
Melted snow particles (water droplets or refrozen water) that result in clumping of particles affect the image analysis, as they imitate the same classified features as the snow particles. This problem is reduced by using area, shape, and brightness limitations and by the initial cropping of the particle size images. Sufficient lighting conditions are also important to promote contrast in the image. The possibility exists that the morphology of the particles is altered by removal from the sampled snow; this error is not method specific and is considered to be marginal, given that the sampling strategy is cautious and the particles are generally robust. An exception is fresh snow, which has to be analyzed immediately after being placed gently on the reference grid. The method facilitates the analysis of the different types of snow (in shape and size) that occur in different regions with various physical settings. Large objects have a tendency to cluster and increase the mean size in the analysis. This effect is countered by manually cropping the image to remove clusters, which leaves fewer particles to be analyzed in each sample. We propose increasing the number of parallel samples when analyzing larger particles in order to achieve a statistically reliable number of particles. The smallest particles also have a tendency to cluster due to their size (Ingvander et al., 2012). The DSPP method is dynamic and can be tuned to account for differences in particle size and illumination. In the coastal region of Antarctica, large particles were sampled under overcast conditions during the JASE traverse. The Järämä samples are separated into the Järämä 1 and Järämä 2 rule sets in order to segment samples with small and large particles separately. The Tarfala samples were segmented with a scale parameter of 30 in order to capture fresh snow and small particles.
The brightness threshold is considerably higher in the Tarfala classification, an effect of sampling and photography performed in a 3-m-deep snow pit with poorer light conditions than surface sampling (i.e., overcast conditions that generate whiteout and raise the general brightness level in the images). However, even for samples collected under poor lighting conditions, the contrast between the snow particles and the background has proved sufficient for segmentation and classification.

A future development of the method is to establish a constant elevation of the camera by fixing the reference grid plate to the tripod, which would generate constant resolution and higher accuracy in the images. Without tilt, the pixel resolution would be constant across the entire image and not scattered as in Figure 3. However, the reference grid is also used for calculating camera distortion between the center and the border of the images [included in the process of Equation (1)], which we found negligible in this study, being less than 0.06 mm. Furthermore, photographing the snow fixed in its position would remove the possibility of grain alteration by movement. This would also provide information on how the particles are positioned against each other in the snowpack and small-scale information on the grain/air ratio within the snow. There are built-in limitations in the software, as the seed cells governing the multi-resolution segmentation grow in different directions depending on the angle of the image. This problem is overcome by photographing snow in its original position in the snowpack, which is a complicated but interesting next step for the DSPP method. The segmentation process was validated against manual delineation of snow particles (Fig. 9). This high accuracy is made possible by the significant contrast in the image and the high brightness of the particles.
The segmentation process would induce larger errors in images with wider ranges in the brightness of the snow particles and the background. This method aims to provide accurate measurements of snow particles from large sample sizes in support of remote sensing investigations. The correlation of snow particle sizes with remote sensing observations is a work in progress. Here we show the potential of the DSPP method to derive size and shape measurements of snow particles in two dimensions; it should be noted that there are approaches that allow for the estimation of SSA from two-dimensional observations of snow particle size (e.g., Matzl and Schneebeli, 2010).

Conclusions

We have presented a method that combines rapid, simple data acquisition with sophisticated analysis for snow particle size assessment. The aim of the method was to improve upon the visual interpretation of snow particle size where project logistics could not accommodate refined instrumentation such as integrating-sphere measurements of SSA or contact spectroscopy. As the DSPP method targets snow particles, i.e., bonded grains, as reflectors of radiation, the investigation does not aim to replace high-accuracy snow grain size analysis such as tomography or grain growth modeling (Schneebeli and Sokratov, 2004). However, the potential to retrieve multiple two-dimensional size parameters enables further calculation of sizes on a subsample scale. The large number of samples that can be obtained offers statistical rigor not always found in snow particle size observations. In the future, evaluation of simultaneous DSPP photography against reliable measurements of specific surface area is desirable.
Acknowledgments

The authors would like to thank the Swedish Polar Research Secretariat for logistic support, and the Swedish Research Council, Swedish National Space Board, Tarfala Research Station, Bert Bolin Centre for Climate Research, SSAG, Ymer-80, and the Helge Ax:son Johnson foundation for financial support. Thanks to the colleagues participating in the field for making the measurements possible. The authors would also like to thank Dr. Charles Fierz at the Institute for Snow and Avalanche Research, Davos, Switzerland, for valuable comments on the method and the nomenclature at an early stage.

References Cited

Albert, M. R., 2002: Effects of snow and firn ventilation on sublimation rates. Annals of Glaciology, 35: 52–56.
Baatz, M., and Schäpe, A., 2000: Multiresolution segmentation—An optimization approach for high quality multi-scale image segmentation. In Strobl, J., Blaschke, T., and Griesebner, G. (eds.), Angewandte Geographische Informationsverarbeitung XII. Karlsruhe: Herbert Wichmann, 12–23.
Benz, U., and Pottier, E., 2001: Object based analysis of polarimetric SAR data in alpha-entropy-anisotropy decomposition using fuzzy classification by eCognition. Proceedings IGARSS 2003, 4: 1913–1915.
Benz, U., Hofmann, P., Willhauck, G., Lingenfelder, I., and Heynen, M., 2004: Multi-resolution, object-oriented fuzzy analysis of remote sensing data for GIS-ready information. ISPRS Journal of Photogrammetry and Remote Sensing, 58: 239–258.
Brun, E., and Pahaut, E., 1991: An efficient method for delayed and accurate characterization of snow grains from natural snow packs. Journal of Glaciology, 37(127): 420–422.
Colbeck, S. C., 1998: Sintering in a dry snow cover. Journal of Applied Physics, 84(8): 4585–4589.
Colbeck, S. C., Akitaya, E., Armstrong, R., Gubler, H., Lafeuille, J., Lied, K., McKlung, D., and Morris, E., 1990: The International Classification of Seasonal Snow on the Ground.
Wallingford, U.K.: The International Commission on Snow and Ice of the International Association of Scientific Hydrology, 23 pp.
Definiens, 2008: Definiens Developer 7.0 User Guide. Munich: Definiens AG.
Déry, S. J., and Yau, M. K., 2002: Large-scale mass balance effects of blowing snow and surface sublimation. Journal of Geophysical Research, 107(D23): http://dx.doi.org/10.1029/2001JD001251.
Domine, F., Albert, M., Huthwelker, T., Jacobi, H.-W., Kokhanovsky, A. A., Lehning, M., Picard, G., and Simpson, W. R., 2008: Snow physics as relevant to snow photochemistry. Atmospheric Chemistry and Physics, 8: 171–180.
Dozier, J., and Painter, T. H., 2004: Multispectral and hyperspectral remote sensing of alpine snow properties. Annual Review of Earth and Planetary Sciences, 32: 465–494.
Dozier, J., Green, R. O., Nolin, A. W., and Painter, T. H., 2009: Interpretation of snow properties from imaging spectrometry. Remote Sensing of Environment, 113: S25–S37.
Fierz, C., Armstrong, R. L., Durand, Y., Etchevers, P., Greene, E., McClung, D. M., Nishimura, K., Satayawali, P. K., and Sokratov, S. A., 2009: The International Classification for Seasonal Snow on the Ground. Prepared by the ICSI-UCCS-IACS Working Group on Snow Classification. IHP-VII Technical Documents in Hydrology, UNESCO-IHP, No. 83, IACS Contribution No. 1. Paris: UNESCO.
Fily, M., Bourdelles, B., Dedieu, J. P., and Sergent, C., 1997: Comparison of in situ and Landsat Thematic Mapper derived snow grain characteristics in the Alps. Remote Sensing of Environment, 59: 452–460.
Fujii, Y., and Kusunoki, K., 1982: The role of sublimation and condensation in the formation of the ice sheet surface at Mizuho Station, Antarctica. Journal of Geophysical Research, 87(C6): 4293–4300.
Gallet, J.-C., Domine, F., Zender, C. S., and Picard, G., 2009: Measurement of the specific surface area of snow using infrared reflectance in an integrating sphere at 1310 and 1550 nm. The Cryosphere, 3(2): 167–182.
Gay, M., Fily, M., Genthon, C., Frezzotti, M., Oerter, H., and Winther, J.-G., 2002: Snow grain-size measurements in Antarctica. Journal of Glaciology, 48(163): 167–182.
Grenfell, T., Warren, S. G., and Mullen, P. C., 1994: Reflection of solar radiation by the Antarctic snow surface at ultraviolet, visible and near-infrared wavelengths. Journal of Geophysical Research, 99(D9): 18,669–18,684.
Ingvander, S., Brown, I. A., and Jansson, P., 2010: Snow grain size variability along the JASE 2007/2008 traverse route in Dronning Maud Land, Antarctica, and its relation to MOA NDSI index, MERIS and MODIS satellite data. ESA Special Publication, SP-686.
Ingvander, S., Johansson, C., Jansson, P., and Pettersson, R., 2012: Comparison between digital and manual methods of snow grain size estimation. Hydrology Research, 43(3): 192–202.
Ingvander, S., Rosqvist, G., Svensson, J., and Dahlke, H., 2013: Seasonal and interannual variability of elemental carbon in the snowpack of Storglaciären, northern Sweden. Annals of Glaciology, 54(62): 50–58, http://dx.doi.org/10.3189/2013AoG62A229.
Jacobi, H.-W., Domine, F., Simpson, W. R., Douglas, T. A., and Sturm, M., 2010: Simulations of the specific surface area of snow using a one-dimensional physical snowpack model: implementation and evaluation for subarctic snow in Alaska. The Cryosphere, 4: 35–51.
Jansson, P., and Pettersson, R., 2007: Spatial and temporal characteristics of a long mass balance record, Storglaciären, Sweden. Arctic, Antarctic, and Alpine Research, 39(3): 432–437, http://dx.doi.org/10.1657/1523-0430(06-041)[JANSSON]2.0.CO;2.
Jansson, P., Linderholm, H. W., Pettersson, R., Karlin, T., and Mörth, C.-M., 2007: Assessing the possibility to couple chemical signals in winter snow on Storglaciären to atmospheric climatology. Annals of Glaciology, 46: 335–341.
Kärkäs, E., Granberg, H. B., Kanto, K., Rasmus, K., Lavoie, C., and Läppäranta, M., 2002: Physical properties of the seasonal snow cover in Dronning Maud Land, East Antarctica. Annals of Glaciology, 34: 89–94.
Kärkäs, E., Martma, T., and Sonninen, E., 2005: Physical properties and stratigraphy of surface snow in western Dronning Maud Land, Antarctica. Polar Research, 1–2: 55–67.
Kikuchi, T., Fukushima, Y., and Nishimura, K., 2005: Snow entrainment coefficient estimated by field observations and wind tunnel experiments. Journal of Cold Regions Engineering, 52: 309–315, http://dx.doi.org/10.1016/j.ijheatmasstransfer.2008.06.003.
Kirnbauer, R., Böschl, G., and Gutknecht, D., 1994: Entering the era of distributed snow models. Nordic Hydrology, 25: 1–24.
Legagneux, L., Cabanes, A., and Domine, F., 2002: Measurement of the specific surface area of 176 snow samples using methane adsorption at 77 K. Journal of Geophysical Research, 107(D17): 4335, http://dx.doi.org/10.1029/2001JD001016.
Lehning, M., Bartelt, P., Brown, R. L., Russi, T., Stöckli, U., and Zimmerli, M., 1999: Snowpack model calculations for avalanche warning based upon a new network of weather and snow stations. Cold Regions Science and Technology, 30(4): 145–157.
LeSaffre, B., Pougarch, E., and Martin, E., 1998: Objective determination of snow-grain characteristics from images. Annals of Glaciology, 26: 112–118.
Mätzler, C., 1996: Microwave permittivity of dry snow. IEEE Transactions on Geoscience and Remote Sensing, 34(2): 573–581.
Matzl, M., and Schneebeli, M., 2006: Measuring specific surface area of snow by near-infrared photography. Journal of Glaciology, 52(179): 558–564.
Matzl, M., and Schneebeli, M., 2010: Stereological measurement of the specific surface area of seasonal snow types: comparison to other methods, and implications for mm-scale vertical profiling. Cold Regions Science and Technology, 64(1): 1–8, http://dx.doi.org/10.1016/j.coldregions.2010.06.006.
Mellor, M., 1965: Cold Regions Science and Engineering part III, Section A3c: Blowing Snow. Hanover, New Hampshire: Cold Re- gions Research and Engineering Laboratory. Neumann, T. A., Albert, M. R., Engel, C., Courville, Z., and Perron, F., 2009: Sublimation and the mass-transfer coefficient for snow sublimation. International Journal of Heat and Mass Transfer, 19(4): 117–129, http://dx.doi.org/10.1016/(ASCE)0887-381X(2005) 19:4(117). Nolin, A. W., and Dozier, J., 1993: Estimation snow grain size using AVIRIS data. Remote Sensing of Environment, 44: 231–238. Nolin, A. W., and Dozier, J., 2000: A hyperspectral method for re- motely sensing the grain-size of snow. Remote Sensing of Environ- ment, 74: 207–216. Painter, T. H., Molotch, N. P., Cassidy, M., Flanner, M., and Steffen, K., 2007: Contact spectroscopy for determination of stratigraphy of snow optical grain size. Journal of Glaciology, 53(180): 121–127. Painter, T. H., Rittger, K., McKenzie, C., and Slaughter, P., 2009: A retrieval of subpixel snow covered area, grain size and albedo from MODIS. Remote Sensing of Environment, 113: 868–879. Perla, R., 1982: Preparation of section planes in snow specimens. Jour- nal of Glaciology, 28(98): 199–204. Picard, G., Arnaud, L., Domine, F., and Fily, M., 2009: Determining snow specific surface area from near-infrared reflectance measure- ments: numerical study of the influence of grain shape. Cold Regions Science and Technology, 56: 10–17. Pulliainen, J., 2006: Mapping of snow water equivalent and snow depth in boreal and sub-arctic zones by assimilating space-borne micro- wave radiometer data and ground-based observations. Remote Sen- sing of Environment, 101: 257–269. Rolland du Roscoat, S., King, A., Philip, A., Reischig, P., Ludwig, W., Flin, F., and Meyssonnier, J., 2011: Analysis of snow microstructure by means of X-ray diffraction contrast tomography. Advanced Engineering Materials, 13(3): 128–135, http://dx.doi.org/10.1002/ adem.201000221. Scambos, T. 
work_4gza3kftpzevfl4ceiwtt4xggm ---- untitled

Practice

Over the past decade, digital imaging technology has become widespread: most cameras now sold in North America and Europe are digital rather than film based. Digital photography has become an easy, inexpensive and rapid way to capture clinical images.1 Medical fields such as dermatology, radiology and ophthalmology2 have been using digital imaging for some time, but family doctors are also well positioned to take advantage of these evolving imaging trends.3 Digital photography can help physicians provide better medical care by adding accurate, easy-to-capture clinical images directly to electronic records (replacing inaccurate or inadequate clinical descriptors or stylized pen and ink sketches in our notes).
The growth and evolution of lesions and wounds can also be monitored. In addition, this format allows images to be shared through telemedicine services4 or to be sent (through an intranet or the internet) quickly and efficiently from one health care level to another, for consultations with colleagues or for teaching5 and research. We aim to briefly summarize the approaches that we have found useful for introducing digital photography into the family physician's practice.

Items required for getting started

A digital camera: see below for advice on choosing the one that is right for your practice.

A personal computer: A Windows or Macintosh computer with sufficient processor speed (e.g., 2 GHz) and random access memory (RAM; e.g., 512 Mb) is required for viewing and storing images. Imaging software is often bundled with the purchase of the camera. Storage devices, such as a hard drive (with > 50 Gb of memory) and a CD-ROM burner, allow you to store and archive older images.

Memory cards: Keeping a spare memory card on hand will allow you to quickly store more photos without downloading images from a full card onto your computer.

Cables or memory card reader: Cables connect the camera to your computer, whereas a card reader allows a direct connection of the camera's memory card to your computer without needing to connect the camera.

Digital photography in the generalist's office. DOI: 10.1503/cmaj.045250. CMAJ • December 5, 2006 • 175(12) | 1519. © 2006 CMA Media Inc. or its licensors.

[Fig. 1: Image of a dermatologic lesion obtained by macro format. Fig. 2: Two different images of a single lesion.]

Consent: Capture images only after obtaining consent. Written consent must also be obtained if the images are to be used for publication or research.

Choosing a camera

Although technology is always being updated, we have divided digital cameras into 3 groups based on image quality (megapixels [Mp]):

1.
Basic models: these cameras have resolutions of less than 4 Mp. They are easy to use and are recommended for novices.

2. Advanced models: these cameras have resolutions of between 4 and 8 Mp and have features such as automatic focus, macro focus and zoom with approximation adjustments. Some are digital single-lens reflex (dSLR) models and have interchangeable lenses.

3. Professional models: currently there are digital single-lens reflex models with resolutions of up to 12 Mp.

In general, we recommend the use of an advanced or professional model to obtain high-quality digital images in the doctor's office. Generally speaking, these cameras cost upwards of Can$300 [price adjusted to 2006 dollars].

Tips for improving the quality of your images

Size marker: If the size or shape of a lesion or injury is important, include a size marker (such as a ruler). This will allow for comparison of the injury over time.

Minimize unnecessary elements: In general, we find that minimizing unnecessary details in the image is best. However, it is sometimes useful to show anatomical landmarks for the purpose of localizing the lesion in the picture (if multiple lesions are being captured).

Identification: If an image will not be immediately added to a patient's file, it is necessary to add identifying detail (such as the patient's initials or ID number) to a peripheral aspect of the printed photograph.

Lighting: Setting aperture-priority allows the camera to automatically adjust the shutter speed to obtain the appropriate lighting conditions. Depending on the camera, the sensor sensitivity can be modified, together with the lighting conditions and use of flash, to improve the lighting of the subject.

Macro format: The use of this format allows high-quality close-up images to be obtained, particularly of dermatologic features (Fig. 1).
Take 2 images: Until you are comfortable with clinical photography, we would advise taking 2 images (Fig. 2): one from a distance showing the location of the injury and its relation to the surrounding area, and another showing the injury or lesion in detail. We recommend holding the camera so that the plane of the camera is parallel to the plane of the object being photographed, maximizing the depth of field.

Use backdrops: The use of a matte backdrop can help to minimize distractions in the image. We suggest using a backdrop that is medium light green (such as a surgical drape), black or blue.

Special considerations

Certain situations present a greater challenge when capturing digital images in the clinic:

Photographing dark skin: A common error when photographing individuals with dark skin is to overexpose the image. To correct this, it is necessary to close the aperture by 1 or 2 increments (f-stops).

Mouth shots: When taking a photograph of the inside of the mouth, it is important to have good lighting conditions and to focus carefully. To do this, an external light source (a lamp) or a ring flash coupled to the lens can be used. Note, however, that a ring flash produces no shadows, which gives a less textured image.

Standardized photographs: To compare images obtained at different times, it is important that the images are taken in the same way: using the same camera, lens, lighting conditions, distance and angle.

Glossary of digital photography terms

Digital camera: A camera that stores images electronically. In contrast to film, a digital camera allows immediate viewing of the photograph before printing.

Pixel: Abbreviation of picture element. A pixel is the basic unit of colour that forms a digital image. It could be considered the atom of a digital image.

Resolution: The number of pixels that constitute a digital image, commonly expressed in megapixels (Mp). For example, an image with a resolution of 2200 × 1800 pixels is composed of 3 960 000 pixels, or about 4 Mp.

Quality: The quality of a digital camera depends on its capacity to capture an image as a series of pixels. At present, resolutions vary between 1 and 12 Mp; at the upper end, this quantity of pixels offers the same quality as a conventional film camera in a 20 × 30 cm print format.

CCD and CMOS: Charge-Coupled Device and Complementary Metal Oxide Semiconductor. While they work by different mechanisms, both CCD and CMOS are image sensors that capture light and transform it into a digital signal.

LCD: Liquid Crystal Display. A thin, flat display of pixels on the camera that allows the user to preview or to view the images after a photo is taken.

DPI: Dots Per Inch. A measure of printing resolution: the number of individual dots a printer can produce within a 1-inch space. For example, an image of 4 × 6 inches (10 × 15 cm) printed at 300 dpi would be 1800 points wide and 1200 points high.

Compression: A process that compresses image data to allow for efficient storage and transmission. Redundant data are eliminated or stored in a compressed format, and the compressed data are interpreted when the photo is next viewed or edited.

JPEG: Joint Photographic Experts Group. A storage format that compresses the digital image with some loss of quality. It is the most commonly used format in digital photography.

TIFF: Tagged Image File Format. A storage format that compresses the image without a loss of quality.

dSLR: Digital single-lens reflex camera. A type of digital camera that uses a movable mirror placed between the lens and the sensor to project the image seen through the lens to the viewfinder.
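The arithmetic in the Resolution and DPI glossary entries is easy to verify. The short sketch below is our illustration (the helper names are not from the article); the numbers reproduce the glossary's two worked examples.

```python
def megapixels(width_px: int, height_px: int) -> float:
    # Total pixel count, expressed in megapixels (Mp)
    return width_px * height_px / 1_000_000

def print_size_in_dots(width_in: float, height_in: float, dpi: int) -> tuple[int, int]:
    # Dots a printer lays down for a given physical print size at a given dpi
    return round(width_in * dpi), round(height_in * dpi)

print(megapixels(2200, 1800))         # 3.96, i.e. "about 4 Mp"
print(print_size_in_dots(6, 4, 300))  # (1800, 1200) for a 4 x 6 inch print
```

Note that the glossary's "about 4 Mp" is simply the 3.96 Mp result rounded.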
Recording all of these conditions (camera, lens, lighting, distance and angle) can be a challenge in a busy general practice and is not always possible without dedicated equipment; however, standardization allows for a direct comparison between photographs of lesions taken at different times.

Ophthalmology: In ophthalmology it is often useful to take photographs of the retina (Fig. 3). A digital retina camera can be mounted on a support similar to that of a slit lamp.

Photographing radiographs: Although radiographic images can now be captured digitally, film radiographs mounted on a light box can also be photographed. Generally, it is wise to keep the camera's aperture as open as possible, to use macro format and to avoid the use of the flash (Fig. 4).

Viewing and processing images

The quality of an image can be immediately assessed by the clinician, and a new image can be taken if it is inadequate. The angles, lighting and point of focus can be varied to allow different points of view, incorporation of additional clinical details and the recording of changes in the colour of lesions. Early and immediate storage in the patient's medical record is wise, so that the images are not accidentally misfiled at a later date. Because accuracy is essential for clinical images, we generally do not recommend manipulation of the image (with the exception of perhaps removing unnecessary identifying details if the images are to be shared externally, with the patient's consent).

Conclusions

Digital photography has become commonplace in health care and complements the advent of electronic medical record keeping. The ability to capture high-quality digital photographs is an important skill for generalists to develop. Unlike film photography, digital photography allows practitioners infinite experimentation to hone their skills before implementing the system in their practice.
Advances in digital photography have also opened the possibility of creating new programs such as teleassistance, teledermatology, teleradiology and teleophthalmology.

David Riba Torrecillas, ABS Tremp, Institut Català de la Salut; Jorge Soler-González, ABS Rambla Ferran, Institut Català de la Salut; Antonio Rodríguez-Rosich, Medicine Faculty, Universitat de Lleida (UdL), Institut Català de la Salut, Lleida, Spain

This article has been peer reviewed. Competing interests: None declared. Acknowledgements: The authors thank Anna Rodríguez-Rosich and Cristina Florensa Roca for their review and assistance in the preparation of the manuscript.

[Fig. 3: Obtaining an image of the retina using a digital retina camera. Fig. 4: Photograph of a radiograph showing reflected flash at the time of taking the photograph.]

REFERENCES
1. Prasad S, Roy B. Digital photography in medicine. J Postgrad Med 2003;49:332-6.
2. Ratner D, Thomas CO, Bickers D. The uses of digital photography in dermatology. J Am Acad Dermatol 1999;41:749-56.
3. Soler-González J, Riba Torrecillas D, Rodríguez-Rosich A, et al. Fotografia digital en Atención Primaria. FMC 2003;10:536-43.
4. Wootton R. Recent advances: telemedicine. BMJ 2001;323:557-60.
5. Riba Torrecillas D, Soler-González J, Rodríguez-Rosich A. ¿Puede ser una buena herramienta docente el uso de la cámara digital en un centro de atención primaria? Aten Primaria 2005;35:105-7.
work_4h2ssmsrffdfhhcokoe67nlwra ---- Journal of Pollination Ecology

The Journal of Pollination Ecology is an open access, peer-reviewed journal that aims to promote the exchange of original knowledge and research in any area of pollination issues. Due to increasing SPAM registrations, you need to contact jpe@pollinationecology.org in case you want to register and submit an article. New registrations are only possible by contacting the journal management. Thank you for your understanding. If you have an account ready, go to online submissions to submit your manuscript! Interesting stories about pollination can be published in the associated Pollination Magazine. A short lay person's summary of all papers published in JPE can be found there as well.

Vol 26 (2020): Table of Contents

Articles:
- Spatiotemporal variation in pollinator taxa on the Santa Ana River wooly star Eriastrum densifolium ssp. sanctorum (Milliken) Mason (Polemoniaceae), by C. Eugene Jones, Fern L. Hoffman, Patricia Nunes-Silva, Robert L.
Allen, Axhel Munoz, Marion Erickson, Douglas Stone, Youssef Atallah
- Comparative floral ecology and breeding systems between sympatric populations of Nothoscordum bivalve and Allium stellatum (Amaryllidaceae), by Daniel S. Weiherer, Kayla Eckardt, Peter Bernhardt
- Comparison of flower-visiting behaviour of bumblebees and swallowtail butterflies to the Japanese azalea (Rhododendron japonicum), by Keigo Takahashi, Takao Itino
- Pollinator effectiveness in the mixed-pollination system of a Neotropical Proteaceae, Oreocallis grandiflora, by Santiago Cárdenas-Calle, Juan D. Cardenas, Boris O. Landázuri, Gabriela Mogrovejo, Antonio M. Crespo, Nils Breitbach, Matthias Schleuning, Boris A. Tinoco
- Testing Pollination Syndromes in Oenothera (Onagraceae), by Kyra N. Krakos, Matthew W. Austin
- EARLY VIEW: New records of pollinators and other insects associated with Arizona milkweed, Asclepias angustifolia, at four sites in Southeastern Arizona, by Robert Aaron Behrstock

Short Communications:
- Exploitation of Strobilanthes ixiocephala (Acanthaceae) flower buds by bees, by Priyanka A. Ambavane, Nikhil P. More, Renee M. Borges

Notes on Methodology:
- An Effective and Affordable Camera Trap for Monitoring Flower-visiting Butterflies in Sandhills: with Implications for the Frosted Elfin (Callophrys irus), by Dave McElveen, Robert T. Meyer

Early View:
- Aneriophora aureorufa (Philippi, 1865) (Diptera: Syrphidae): a fly specialized in the pollination of Eucryphia cordifolia Cav. (Cunoniaceae R. Br.), an endemic species of South American temperate forest, by Cecilia Smith, Lorena Vieli, Rodrigo Barahona-Segovia
- Insect pollination and sustainable agriculture in Sub-Saharan Africa, by Kumsa Tolera, Gavin Ballantyne
- Novel data support model linking floral resources and honey bee competition with bumble bee abundances in coastal scrub, by Diane M. Thomson

This work is licensed under a Creative Commons Attribution 3.0 License.
ISSN 1920-7603

work_4hakggfkyzdk3nalhc3qeyqneu ---- RESEARCH ARTICLE Open Access

A comparison of photographic, replication and direct clinical examination methods for detecting developmental defects of enamel

Ali Golkari 1,2*, Aira Sabokseir 2, Hamid-Reza Pakshir 2,3, M. Christopher Dean 4, Aubrey Sheiham 1 and Richard G. Watt 1

Abstract

Background: Different methods have been used for detecting developmental defects of enamel (DDE). This study aimed to compare photographic and replication methods with the direct clinical examination method for detecting DDE in children's permanent incisors.

Methods: 110 8-10-year-old schoolchildren were randomly selected from an examined sample of 335 primary Shiraz school children. The Modified DDE Index was used in all three methods. Direct examinations were conducted by two calibrated examiners using flat oral mirrors and tongue blades. Photographs were taken using a digital SLR camera (Nikon D-80), macro lens, macro flashes, and matt flash filters. Impressions were taken using addition-curing silicone material and casts made in orthodontic stone. Impressions and casts were both assessed using dental loupes (magnification = x3.5). Each photograph/impression/cast was assessed by two calibrated examiners. Reliability of methods was assessed using kappa agreement tests. Kappa agreement, McNemar's and two-sample proportion tests were used to compare results obtained by the photographic and replication methods with those obtained by the direct examination method.

Results: Of the 110 invited children, 90 were photographed and 73 had impressions taken. The photographic method had higher reliability levels than the other two methods, and compared to the direct clinical examination detected significantly more subjects with DDE (P = 0.002), 3.1 times more DDE (P < 0.001) and 6.6 times more hypoplastic DDE (P < 0.001).
The number of subjects with hypoplastic DDE detected by the replication method was not significantly higher than that detected by direct clinical examination (P = 0.166), but the replication method detected 2.3 times more hypoplastic DDE lesions than the direct examination (P < 0.001).

Conclusion: The photographic method was much more sensitive than direct clinical examination in detecting DDE and was the best of the three methods for epidemiological studies. The replication method provided less information about DDE compared to photography. Results of this study have implications for both epidemiological and detailed clinical studies on DDE.

Background

Developmental defects of enamel (DDE) can be detected and studied using microscopic and macroscopic methods. Macroscopic methods are especially important in epidemiological studies. Direct clinical examination is the most widely used method for detecting enamel defects, while photographic and replication methods are of special interest because of their suggested advantages over direct clinical examination. None of these methods is fully standardized, as no single detailed method is used by many researchers. The replication method used by some dental anatomists, archaeologists and anthropologists is not used by epidemiologists. The significance of digital technologies, which have opened up new horizons in almost all aspects of science, has been relatively neglected in epidemiological studies of DDE. Digital photography, which has been shown to have high levels of success in caries detection [1], has only been used in a few DDE studies.

* Correspondence: aligolkari@yahoo.com
1 Dept. of Epidemiology and Public Health, University College London, UK
Full list of author information is available at the end of the article

Golkari et al. BMC Oral Health 2011, 11:16 http://www.biomedcentral.com/1472-6831/11/16 © 2011 Golkari et al; licensee BioMed Central Ltd.
This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Direct clinical examination is fast and cheap, and all surfaces of teeth can be examined. However, it has many disadvantages, such as observer bias and the effects of visual problems related to fatigue of the examiner. Accuracy is highly dependent on cooperation of subjects [2,3]. Direct examination was unreliable when multiple indices were used or compared [4]. Direct visual examination of enamel can be done with or without tactile examination of the enamel surface with a probe [5]. Examination may be conducted under natural light, avoiding direct sunshine. When natural light is not strong enough or when posterior teeth are being examined, a fibre optic light may be used [6]. Teeth may be cleaned before the examination [7,8]. Polarizing filters may be used to overcome the burn-outs from strong flashlight and to enhance the visual details of enamel defects, especially when the extent of defects is more important than their colour [9].

Photography has been used in some studies of tooth and enamel defects [3,4,9-14]. Assessing photographs is more objective than direct clinical examination. With photography it is possible for all cases (even from different geographic areas or examined at different times) to be assessed under standard conditions by one person or one group of examiners [11]. Photography facilitates randomization and blinding, so observer bias can be avoided. Photographs can be kept for future reassessment or application of different approaches or indices [3,12]. On the other hand, the disadvantages of photography are cost, technical sensitivity and inability to use tactility.
Furthermore, with single photographs only labial surfaces of incisors are recorded. Multiple views are needed to view more teeth and/or more surfaces [3]. Some surfaces or parts of a surface may be missed even in multiple views. Some researchers have preferred to use conventional photography with 35 mm film [3,4,13]. However, digital photography provides better conditions to record developmental defects of enamel. Digital photography is cheaper and independent of developing negatives and printing or projection. Most importantly, it gives the photographer the opportunity to view each image immediately and repeat it in case there is any problem with the image, such as a burn-out caused by flash; or to take several photos and choose the best one later [15].

Replicas of teeth may be used in both macroscopic and microscopic studies of enamel defects. In this method the whole cast is in one colour, so changes in colour of enamel are not shown. But a replica of teeth gives the observer better visibility to investigate hypoplasia, including small changes in the enamel surface [16]. This method also enables researchers to spend as much time as needed and provides a dry specimen that can be studied easily from different perspectives without worrying about adjacent structures. Disadvantages of the replication method are cost, the time needed to make replicas, and its sensitivity to technical methods. Even under the best conditions some proximal surfaces may not be well recorded. And, as stated above, it only displays hypoplastic defects.

As the three methods differ in sensitivity in detecting DDE, it is surprising that very few studies have compared them [3,4,14]. No published epidemiological study has compared the results of detecting DDE using the replication method with the direct examination on a population basis. Wong et al.
(2005) used the Modified DDE Index to compare the photographic and direct examination methods and found kappa agreement values from 0.79 to 0.85 between them for detecting subjects with any DDE. They used one-view, three-view and five-view photographic methods. The highest prevalence of subjects with DDE (36.6%) was, surprisingly, obtained from their one-view method. It was close to the prevalence obtained by the direct clinical method (33.9%). The intra-examiner reliability of the photographic method (k = 0.81 to 0.88) was also close to that of the direct examination method (k = 0.82) [3]. Ellwood et al. (1996) used the TF (Thylstrup and Fejerskov) Index [16] for their comparison and found a substantial agreement between the two methods at subject level (k = 0.63). At subject level, the prevalence obtained by the photographic method (44.9%) was close to that obtained by the direct examination (41.4%) [14]. Sabieha and Rock (1998) used both the Modified DDE Index and the TF Index and reported almost perfect agreement between the direct examination and the photographic methods for both indices (k = 0.91 and 0.83 respectively). They only assessed maxillary central incisors [4].

Several clinical indices have been developed to categorize enamel defects based on their nature, appearance, microscopic features or their cause. Some indices, such as the TF Index [17], were introduced specifically for fluorosis. Other indices are descriptive and include all kinds of enamel defects including fluorosis. The Modified DDE Index [6] is a descriptive index derived from the original Developmental Defects of Enamel Index [18]. It covers all defects based on their macroscopic appearance. However, the criteria for classification are closely related to histo-pathological changes [19]. The Modified DDE Index was claimed to be a more practical and comparable index in epidemiological studies. Its extensive use and its high degree of validity and reliability support that claim [6,20-22].
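Agreement coefficients like the kappa values quoted above are straightforward to compute. The sketch below is ours, not from any of the cited studies, and the two examiners' presence/absence calls are invented purely for illustration.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    # Cohen's kappa: (observed agreement - chance agreement) / (1 - chance agreement)
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    ca, cb = Counter(rater_a), Counter(rater_b)
    chance = sum(ca[c] * cb[c] for c in set(ca) | set(cb)) / n ** 2
    return (observed - chance) / (1 - chance)

# Invented DDE presence (1) / absence (0) calls by two examiners on 10 subjects
examiner_1 = [1, 1, 0, 0, 1, 0, 1, 0, 0, 1]
examiner_2 = [1, 0, 0, 0, 1, 0, 1, 0, 1, 1]
print(round(cohens_kappa(examiner_1, examiner_2), 2))  # 0.6
```

On the conventional scale used in these studies, values around 0.6 are read as substantial agreement and values above roughly 0.8 as almost perfect.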
As there are few epidemiological studies comparing the three methods of detecting DDE, the objective of this study was to compare the ability of the digital photographic and replication methods with the direct clinical examination method to detect DDE in children's permanent incisors.

Methods

The study was conducted on relatively newly erupted permanent incisors in a sample of 8 to 10-year-old school children of the city of Shiraz in the south of Iran. Approval and ethical permission were obtained from Iran's National Ethical Committee in Medical Research and the regional Educational Head Office. A representative sample of 335 primary school children in grades 3 to 5 were first examined using the direct clinical examination method. Children were excluded from the study if they were outside the age range, had one or more permanent incisors unerupted or partially erupted, or had a fracture or restoration in their permanent incisors. Fifty five children with DDE and another 55 children without DDE were then randomly selected and invited to dental clinics for further investigation using the photographic and replication methods.

Upon arrival at the clinic the purpose and stages involved in the study were explained in detail to the parents. Parental consent was requested for taking photographs and impressions. DDE of permanent incisors were recorded based on the Modified DDE Index [6] in all three methods. Results of assessing photographs and replicas were separately compared with the results obtained from the direct examination.

Direct clinical examination method

Direct intra-oral clinical examinations were carried out by 2 calibrated examiners in classrooms using natural light, disposable mirrors and tongue blades. Teeth were examined wet, but excess saliva and food debris was removed with sterilized gauze when necessary.
Each of the two examiners checked the other examiner's findings on 1 in 10 randomly selected children to test the inter-examiner reliability. Testing the intra-examiner reliability was only possible if subjects were assessed again under the same conditions in schools. Permission for school re-visits was not granted.

The Photographic method

A Nikon D80 digital SLR (single-lens reflex) camera with a 105 mm macro lens, which provided a magnification of 1:1, and a macro double flash was used. The quality of photos was set on "JPEG FINE" and 5.2 megapixels. Shutter speed and aperture were set at 60 and 32 based on the best results obtained from a pilot study of 13 children chosen from the same population.

Photographs were taken by two calibrated photographers with the child sitting on a dental chair and leaning back to avoid movement during focusing and taking photographs. Cheeks and lips were retracted so that all 12 anterior teeth and part of the upper and lower gums were shown. An assistant helped with retraction of cheeks in less-cooperative subjects. The child was asked to close the incisors together edge to edge. This was practiced a few times while holding a mirror in front of their face before taking the photograph. Food debris and/or excess saliva was removed with sterilized gauze when necessary. A "one view" photograph was considered to be acceptable as only incisors were assessed [3]. Photographs were taken by focusing on the centre of the 4 central incisors. The camera was angled approximately 15 degrees above the perpendicular to the central incisors' plane to minimize specular reflection and burn-outs. Each photograph was evaluated for acceptability and quality. If not acceptable, the photograph was repeated.

Photographs were viewed randomly on a 17" flat free-angle monitor with high resolution (1028 by 1024 pixels) using "Adobe Photoshop CS version 8.0" software. Each photograph was assessed and scored independently by two calibrated examiners.
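For readers unfamiliar with the exposure settings reported above: interpreting them as 1/60 s at f/32 (our reading; the paper does not spell out the units), the standard exposure-value formula EV = log2(N^2 / t) quantifies how much light such a narrow aperture demands. This snippet is our illustration, not part of the study's methods.

```python
import math

def exposure_value(f_number: float, shutter_s: float) -> float:
    # Standard exposure value at ISO 100: EV = log2(N^2 / t),
    # where N is the f-number and t the shutter time in seconds.
    return math.log2(f_number ** 2 / shutter_s)

print(round(exposure_value(32, 1 / 60), 1))  # 15.9
```

An EV near 16 corresponds to very bright scene lighting, which is consistent with pairing f/32 with a macro double flash.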
Each photograph was first viewed at actual size, based on the magnification ratio of 1:1 used when taking them. Then the examiner could use the magnifier tool to enlarge the photo several times. Some defects were clearer at higher magnifications, but others were better seen at lower magnifications, when the sharpness of components was higher. The examiners' first records were used to assess inter-examiner reliability. Disagreements between examiners on coding a DDE were then solved by reassessment of teeth by both examiners for the purpose of comparing the photographic and direct examination methods. To test the intra-examiner reliability in the photographic method, each of the two examiners re-rated all photographs in a computer generated random order a year after the original assessment.

The Replication method

Impressions were taken of the anterior teeth of children using heavy body (putty) and then light body (liner) of Affinis, an addition-curing rubber base material made by Coltène/Whaledent. The impressions were disinfected by soaking in sodium hypochlorite for five minutes and stored in individual bags away from heat, direct sunshine and pressure. They were cast using hard orthodontic stone. All casts were assessed for hypoplastic defects by two calibrated examiners. All casts and impressions were viewed using a pair of dental loupes with a magnification of x3.5. Types of DDE detected in replicas and impressions were also recorded using the Modified DDE Index and its subcategories, as mentioned above. However, considering the fact that opacities were not detectable in this method, codes given to detected DDE were limited to 7 (pits) and 8 (missing enamel). Similar to the photographic assessment, examiners' first records were used to test the inter-examiner
reliability in replication assessment, and disagreements were then resolved by discussion. However, in this method impressions and casts were both reassessed and compared. To test the intra-examiner reliability, casts and impressions were reassessed by the two examiners a year after the original scoring. Unlike the photographs, avoiding bias by recognition of impressions and casts was not possible. This occurred because replications were not the same shape or colour and lacked a randomised environment such as that provided by computer in the photographic reassessment. However, the time period between the first assessment and the reassessment (one year) was long enough to assume it was not possible for examiners to remember their first scoring at the time of the reassessment.

Comparison of methods

The number and percentage of children with DDE and teeth with DDE detected by each method was calculated. The total number of DDE detected by each method was also calculated, as several defects may be present on one tooth. Such calculations made it possible to compare the results obtained by the photographic and replication methods with those of the direct examination method at individual, tooth, and lesion levels. All types of DDE were included in the comparison of the direct examination and photographic methods. However, only hypoplastic defects were taken into account when comparing the direct examination and replication methods, as colour changes were not recorded in the replicas. Kappa agreement was used to test the reliability of methods and the agreement between each of the two photographic and replication methods and the direct examination method. The two-sample proportion test was used to evaluate if the prevalences obtained by the two methods were significantly different.
McNemar’s test was used to test whether the number of subjects or teeth with DDE detected by only one method was significantly higher than for the other method. Data collection took place in spring 2007; the reassessment of photographs and models was done in spring 2008. SPSS software (version 14) was used for the analysis.

Results

The numbers of children examined at schools, invited to clinics, photographed, and who had impressions taken of their teeth are shown in Table 1. Ninety-five of the 110 invited children attended the clinics. However, some parents did not give consent for photographs or impressions to be taken. Photographs were taken of 90 children (81.8% response rate); the direct examination and photographic methods were therefore compared on these 90 children. Impressions were taken of the teeth of 73 children (66.4% response rate) and were used for the comparison of the replication and direct examination methods.

Results of the inter-examiner and intra-examiner reliability tests for each method are shown in Table 2. As explained earlier, intra-examiner reliability was not tested for the direct clinical method. The intra-examiner reliabilities given for the photographic and replication methods are the averages of the two examiners. Fourteen disagreements were found between the two examiners in assessing the photographs. Ten of them (71%) occurred when one of the examiners missed detecting or reporting a DDE on a tooth with two or more defects; two (14%) occurred when a diffuse opacity was coded as “lines” by one examiner and as “patchy” by the other; and two (14%) concerned the colour of demarcated opacities. Eight disagreements were found in the replication assessments, all related to distinguishing a DDE from an artefact.
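McNemar's test, mentioned above, depends only on the discordant counts: subjects detected by one method but not the other. A minimal exact (binomial) version can be sketched as follows; the function is a generic illustration, with the counts 20 and 1 taken from the subject-level results reported below (20 cases detected only photographically, one clinically detected case missed).

```python
from math import comb

def mcnemar_exact_p(b, c):
    """Two-sided exact McNemar test on the discordant counts:
    b cases detected by method A only, c by method B only."""
    n = b + c
    k = min(b, c)
    # Binomial tail probability with p = 0.5, doubled for a two-sided test.
    tail = sum(comb(n, i) for i in range(k + 1)) / 2 ** n
    return min(1.0, 2 * tail)

# 20 children with DDE detected only by the photographic method,
# 1 only by the direct examination.
print(mcnemar_exact_p(20, 1) < 0.001)  # True
print(mcnemar_exact_p(5, 5))           # 1.0 (no asymmetry between methods)
```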
Table 1 Number of children in each part of the study, by sex

Sex     Examined at schools   Invited to clinics   Photographs taken   Impressions taken
Girls   177                   56                   48                  42
Boys    158                   54                   42                  31
Total   335                   110                  90                  73

Table 2 Inter-examiner and average intra-examiner reliability levels of each method in detecting DDE

                                                    Inter-examiner reliability         Intra-examiner reliability
Type of DDE       Method               Sample size  Individual   Tooth   Lesion        Individual   Tooth   Lesion
All types of DDE  Direct examination   38           0.90         0.84    0.81          -            -       -
All types of DDE  Photographic         90           1.00         1.00    0.90          1.00         0.99    0.93
All types of DDE  Replication          -            -            -       -             -            -       -
Hypoplastic DDE   Direct examination   38           1.00         0.91    0.91          -            -       -
Hypoplastic DDE   Photographic         90           1.00         1.00    0.95          1.00         0.99    0.95
Hypoplastic DDE   Replication          73           0.93         0.80    0.78          0.94         0.84    0.80

The photographic method detected all cases with DDE that were detected clinically, except one. Twenty cases were detected only by the photographic method. The photographic method detected 1.4 times more children with DDE, 2.1 times more teeth with DDE, and 3.1 times more DDE lesions than the direct examination method. It also detected 4.5 times more individuals with hypoplastic DDE, 5.9 times more teeth with hypoplastic DDE, and 6.6 times more hypoplastic lesions than the direct examination method (Table 3). There was moderate agreement (k = 0.48) between the two methods in detecting cases with DDE at the subject level. The photographic method detected significantly more DDE than the direct examination method for the following: number of subjects with DDE (P = 0.002), number of teeth with DDE (P < 0.001), number of subjects with hypoplastic defects (P < 0.001), and number of teeth with hypoplastic defects (P < 0.001).
In addition to detecting higher numbers overall, the numbers detected only by the photographic method were significantly higher than the numbers detected only by the direct examination for all four tests (P < 0.001). One hundred and one teeth (88% of those detected by direct examination and 41% of those detected by the photographic method) had DDE detected by both methods. Fifty-four teeth (47% of those detected by direct examination and 22% of those detected by photography) were scored identically by both methods. This number increased from 54 to 66 teeth when similar subcategories (scores 1 and 2, 3 to 5, and 7 and 8) were combined; scores 6 and 9 were not combined with any other subcategory. At the lesion level, 56 DDE lesions (46% of those detected by direct examination and 15% of those detected by the photographic method) were scored identically by both methods; this increased to 70 defects when similar subcategories were combined. Diffuse opacities had the highest prevalence in both methods. In the 73 children who had impressions taken of their teeth, the replication method detected 1.5 times more children with hypoplastic DDE, 2.4 times more teeth with hypoplastic DDE, and 2.3 times more hypoplastic DDE lesions than the direct clinical examination method (Table 4). Twenty-one hypoplastic defects (87.5% of those detected by direct clinical examination and 37.5% of those detected by the replication method) were detected by both methods. The prevalence of subjects with hypoplastic DDE did not differ significantly between the replication and direct clinical examination methods (P = 0.166). However, the proportion of teeth with hypoplastic DDE detected by the replication method was significantly higher than that obtained by the direct examination method (P < 0.001).
On the other hand, the numbers of affected subjects and teeth detected only by the replication method were significantly higher than those detected only by the direct clinical examination method (P < 0.001). The average time spent on direct clinical examination of each child was about 3 minutes. Taking photographs of each child, including the time spent preparing the child and repeating the photograph if necessary, took less than one minute. Taking impressions took 15 minutes on average. Examiners were told to spend as much time as they needed to assess the photographs and replicas. Assessing a photograph took 6.5 minutes on average. Assessing replicas took up to 20 minutes, and 12 minutes on average.

Table 3 Comparison of direct examination and photographic methods in detecting all types of DDE and in detecting hypoplastic DDE in permanent incisors of 90 children

Type of DDE   Method               Subjects with DDE, n (%)   Teeth with DDE, n (% of examined)   Mean teeth with DDE per child   Mean teeth with DDE per affected child   DDE, n*   Mean DDE per child*   Mean DDE per affected child*   Mean DDE per affected tooth*
All types     Direct examination   50 (55.6)                  115 (16.0)                          1.3                             2.3                                      121       1.3                   2.4                            1.1
All types     Photographic         69 (76.7)                  246 (34.2)                          2.7                             3.6                                      374       4.2                   5.4                            1.5
Hypoplastic   Direct examination   11 (12.2)                  20 (2.8)                            0.2                             1.8                                      24        0.3                   2.2                            1.2
Hypoplastic   Photographic         49 (54.4)                  117 (16.3)                          1.3                             2.4                                      159       1.8                   3.3                            1.4

*Note: some teeth had more than one DDE.

Discussion

It was assumed that the photographic method would detect more changes in the colour and transparency of enamel than the direct clinical examination method, but the ability of the photographic method to detect hypoplastic defects, where the defective enamel has the same colour as its surroundings, was in doubt.
The replication method, on the other hand, although not showing changes in colour, was assumed to detect hypoplastic DDE better than the other two methods, as it is the usual method used by dental anthropologists and anatomists to find fine hypoplasias. The results of this study, however, suggest that the photographic method was good enough to detect both hypomineralised and hypoplastic enamel defects. The photographic method, whilst not necessarily the most sensitive method, detected considerably more DDE of all types (3.1 times more DDE lesions in total) than the direct examination method.

This study found only moderate agreement (k = 0.48) between the direct clinical examination and photographic methods in detecting DDE at the subject level. These findings differ from other epidemiological studies comparing the photographic and direct clinical examination methods, which reported kappa values of 0.63 [14] to 0.91 [4]. The low level of agreement found in this study was not due to an inability of the photographic method to detect DDE, but to significantly more subjects with DDE being detected by the photographic method than by the direct examination method (P = 0.002). The results of the study by Ellwood et al [14] also showed significant differences between the numbers of subjects with DDE detected by the two methods, although they found a much smaller difference. The other two studies mentioned above did not find such results [3,4]; indeed, they reported very similar prevalences of cases with DDE detected by the two methods. Wong et al [3] reported that 33.9% of their subjects had a DDE detected by direct examination, with a very similar percentage of 34.6% to 36.6% detected by the photographic method. The photographic method used in the present study was able to detect most (98%) of the cases detected by the direct examination method, plus a significant number of new cases. None of the three above-mentioned studies reported a similar finding.
As with the comparisons at subject level, at the tooth level the photographic method detected most of the DDE detected by direct examination, plus significantly more other affected teeth. Comparing the results of the present study with those of Sabieha and Rock [4] in detecting DDE in permanent upper central incisors, both studies found that the photographic method detected around 82% of the affected upper central incisors detected by direct examination (46 out of 56 in this study, and 161 out of 194 in the study by Sabieha and Rock). However, the percentage of affected upper central incisors detected only by the photographic method in the present study (27%) was three times greater than the 9% reported by Sabieha and Rock [4] (P < 0.001). These findings show that the photographic method used in this study detected more DDE than both the direct clinical examination method used in this study and the photographic methods used in other studies, at both the subject and the tooth level. Unfortunately, no previous study compared the two methods at the lesion level.

The main difference between the methods used in this study and those of the above-mentioned studies is that the present study used a powerful digital camera with well-tested accessories and settings. That allowed the photographer to zoom and focus to obtain the best picture of the 8 incisors, instead of using a fixed barrel lens, and allowed the examiners to view the photographs at different magnifications and angles, as they were able to do during the direct clinical examination. Both the photographic and replication methods provided permanent records of the teeth, but the photographic method also allowed easy random presentation of subjects, with less bias than the other methods. The photographic method was also faster than replicas, both in the time needed to take the record (1 versus 15 minutes) and in the time needed to assess it (6.5 versus 12 minutes), with no laboratory processing and no concerns about cross-infection.
A clinical setting or the presence of a dental clinician was not necessary for taking photographs. Unlike the photographic method, the replication method showed lower inter-examiner reliability levels than the direct clinical examination method (Table 2).

Table 4 Comparison of direct examination and replication methods in detecting hypoplastic DDE in permanent incisors of 73 children

Method               Subjects with DDE, n (%)   Teeth with DDE, n (% of examined)   Mean teeth with DDE per child   Mean teeth with DDE per affected child   DDE, n*   Mean DDE per child*   Mean DDE per affected child*   Mean DDE per affected tooth*
Direct examination   13 (17.8)                  20 (3.4)                            0.3                             1.5                                      24        0.3                   1.9                            1.2
Replication          20 (27.4)                  47 (8.1)                            0.6                             2.35                                     56        0.8                   2.8                            1.2

*Note: some teeth had more than one DDE.

Conclusion

The digital photographic method detected many more DDE than the direct examination method and provided much more information than the replication method. The digital photographic method, as used in this study, was therefore the best of the three methods for detecting enamel defects of the permanent incisors of children. The results of this study have implications for both epidemiological and detailed clinical studies of DDE.

Acknowledgements

The camera and its accessories were provided by the Borrow Foundation (UK). Impression material was provided by Coltène/Whaledent (Switzerland). The authors would like to thank Drs Zahra Pakshir, Yasmin Hadaegh and Hassan Abiri for their help in data collection.

Author details

1 Dept. of Epidemiology and Public Health, University College London, UK. 2 Dental School, Shiraz University of Medical Sciences, Iran. 3 Orthodontic Research Centre, Shiraz University of Medical Sciences, Iran. 4 Dept.
of Cell and Developmental Biology, University College London, UK.

Authors’ contributions

This paper is based on a PhD thesis by AG, carried out under the supervision of RGW and ASh. ASa helped with data collection, processing and analysis. The fieldwork was planned and supervised by HRP. MCD helped with the design of the study and provided technical advice and support. AG wrote the paper. All authors read and approved the final manuscript.

Competing interests

The authors declare that they have no competing interests.

Received: 2 September 2010. Accepted: 21 April 2011. Published: 21 April 2011.

References

1. Iijima Y: Early detection of white spot lesions with digital camera and remineralization therapy. Aust Dent J 2008, 53(3):274-280.
2. Cochran JA, Ketley CE, Arnadottir IB, Fernandes B, Koletsi-Kounari H, Oila AM, van Loveren C, Whelton HP, O’Mullane DM: A comparison of the prevalence of fluorosis in 8-year-old children from seven European study sites using a standardized methodology. Community Dent Oral Epidemiol 2004, 32(Suppl 1):28-33.
3. Wong HM, McGrath C, Lo EC, King NM: Photographs as a means of assessing developmental defects of enamel. Community Dent Oral Epidemiol 2005, 33(6):438-446.
4. Sabieha AM, Rock WP: A comparison of clinical and photographic scoring using the TF and modified DDE indices. Community Dent Health 1998, 15(2):82-87.
5. King T, Hillson S, Humphrey LT: A detailed study of enamel hypoplasia in a post-medieval adolescent of known age and sex. Arch Oral Biol 2002, 47(1):29-39.
6. Clarkson J, O’Mullane D: A modified DDE Index for use in epidemiological studies of enamel defects. J Dent Res 1989, 68(3):445-450.
7. Evans DJ: A study of developmental defects in enamel in 10-year-old high social class children residing in a non-fluoridated area. Community Dent Health 1991, 8(1):31-38.
8. Milsom K, Mitropoulos CM: Enamel defects in 8-year-old children in fluoridated and non-fluoridated parts of Cheshire. Caries Res 1990, 24(4):286-289.
9.
Robertson AJ, Toumba KJ: Cross-polarized photography in the study of enamel defects in dental paediatrics. J Audiov Media Med 1999, 22(2):63-70.
10. Dooland MB, Wylie A: A photographic study of enamel defects among South Australian school children. Aust Dent J 1989, 34(5):470-473.
11. Nunn JH, Murray JJ, Reynolds P, Tabari D, Breckon J: The prevalence of developmental defects of enamel in 15-16-year-old children residing in three districts (natural fluoride, adjusted fluoride, low fluoride) in the north east of England. Community Dent Health 1992, 9(3):235-247.
12. Cochran JA, Ketley CE, Sanches L, Mamai-Homata E, Oila AM, Arnadottir IB, van Loveren C, Whelton HP, O’Mullane DM: A standardized photographic method for evaluating enamel opacities including fluorosis. Community Dent Oral Epidemiol 2004, 32(Suppl 1):19-27.
13. Kanthathas K, Willmot DR, Benson PE: Differentiation of developmental and post-orthodontic white lesions using image analysis. Eur J Orthod 2005, 27(2):167-172.
14. Ellwood RP, Cortea DF, O’Mullane DM: A photographic study of developmental defects of enamel in Brazilian school children. Int Dent J 1996, 46(2):69-75.
15. Bengel W: Mastering Digital Dental Photography. Tokyo: Quintessence Publishing Ltd; 2006.
16. Beynon AD: Replication technique for studying microstructure in fossil enamel. Scanning Microsc 1987, 1(2):663-669.
17. Thylstrup A, Fejerskov O: Clinical appearance of dental fluorosis in permanent teeth in relation to histologic changes. Community Dent Oral Epidemiol 1978, 6(6):315-328.
18. Federation Dentaire Internationale (FDI): An epidemiological index of developmental defects of enamel: Technical report no 5. Ferney-Voltaire: World Dental Federation Publications; 1982.
19. Suckling GW, Nelson DG, Patel MJ: Macroscopic and scanning electron microscopic appearance and hardness values of developmental defects in human permanent tooth enamel. Adv Dent Res 1989, 3(2):219-233.
20.
Dini EL, Holt RD, Bedi R: Prevalence of caries and developmental defects of enamel in 9-10 year old children living in areas in Brazil with differing water fluoride histories. Br Dent J 2000, 188(3):146-149.
21. Lunardelli SE, Peres MA: Prevalence and distribution of developmental enamel defects in the primary dentition of pre-school children. Pesqui Odontol Bras 2005, 19(2):144-149.
22. Muratbegović A, Marković N, Kobašlija S, Zukanović A: Oral Health Indices and Molar Incisor Hypomineralization in 12 Year Old Bosnians. Acta Stomatol Croat 2008, 42(2):155-163.

Pre-publication history

The pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1472-6831/11/16/prepub

doi:10.1186/1472-6831-11-16
Cite this article as: Golkari et al.: A comparison of photographic, replication and direct clinical examination methods for detecting developmental defects of enamel. BMC Oral Health 2011 11:16.
work_4jmw5hygprg5fnqflpl7kl25su ---- Evaluation of the effect of JPEG and JPEG2000 image compression on the detection of diabetic retinopathy

J Conrath 1,2, A Erginay 1, R Giorgi 3, A Lecleire-Collet 1, E Vicaut 4, J-C Klein 5, A Gaudric 1 and P Massin 1

Abstract

Aims: To compare the effect of the classic Joint Photographic Experts Group (JPEG) and JPEG2000 compression algorithms on the detection of diabetic retinopathy (DR) lesions.

Methods: In total, 45 colour fundus photographs obtained with a digital nonmydriatic fundus camera were saved in uncompressed Tagged Image File Format (TIFF) (1.26 MB). They were graded jointly by two retinal specialists at a 1-month interval for soft exudates, hard exudates, macular oedema, new vessels, intraretinal microvascular abnormalities (IRMA), and retinal haemorrhages and/or microaneurysms. They were compressed to 118, 58, 41, and 27 KB by both algorithms, and to 24 KB by classic JPEG, placed in random order, and graded again jointly by the two retinal specialists.
Subjective image quality was graded, and sensitivity, specificity, positive and negative predictive values, and the kappa statistic were calculated for all lesions at all compression ratios.

Results: Compression to 118 KB had no effect on image quality, and kappa values were high (0.94–1). Image degradation became important at 27 KB for both algorithms. At high compression levels, IRMA and HMA detection were most affected, with JPEG2000 performing slightly better than classic JPEG.

Conclusion: The performance of the classic JPEG and JPEG2000 algorithms is equivalent when compressing digital images of DR lesions from 1.26 MB to 118 KB and 58 KB. Higher compression ratios show slightly better results with JPEG2000 compression, but may be insufficient for screening purposes. Eye (2007) 21, 487–493. doi:10.1038/sj.eye.6702238; published online 3 February 2006.

Keywords: diabetic retinopathy; digital fundus photography; JPEG; JPEG2000

Telescreening for diabetic retinopathy (DR) is relevant to compensate for the lack of ophthalmologists available to assess DR, especially in remote areas. Digital fundus photography is commonly used for DR screening.1 High-quality digital images can reach a size of 1.5 MB or greater. Compression techniques are required to speed up the transmission of such images, yet information may be lost during compression. Standards have been defined for radiology and pathology, but to date the resolution levels required for reliable diagnosis in ophthalmology have not been determined. Basu et al2 evaluated the effect of classic Joint Photographic Experts Group (JPEG) compression (cJPEG) on the grading of DR lesions and found that compression ratios of 1:20 to 1:12 were acceptable.
JPEG2000, a new image compression algorithm purportedly rendering higher-quality images than cJPEG at higher compression ratios, has been evaluated for the compression of medical images in radiology.3,4 The aim of our study was to compare the effect of cJPEG and JPEG2000 compression on the detection of DR lesions.

Received: 27 April 2005. Accepted in revised form: 29 November 2005. Published online: 3 February 2006. None of the authors have any commercial relationship with any of the companies mentioned. 1 Department of Ophthalmology, Hôpital Lariboisière, Assistance Publique – Hôpitaux de Paris, Université Paris 7, Paris, France; 2 Department of Ophthalmology, Hôpital de la Timone, Assistance Publique – Hôpitaux de Marseille, Université de la Méditerranée, Marseille, France; 3 LERTIM, Faculté de Médecine de Marseille, Marseille, France; 4 Unité de Recherche Clinique, Hôpital Lariboisière, Assistance Publique – Hôpitaux de Paris, Université Paris 7, Paris, France; 5 Centre de Morphologie Mathématique, Ecole des Mines, Fontainebleau, France. Correspondence: P Massin, Department of Ophthalmology, Hôpital Lariboisière, 2 rue Ambroise Paré, F-75010 Paris, France. Tel: +33 1499 52475; Fax: +33 1499 56484. E-mail: p.massin@lrb.aphp.fr. Eye (2007) 21, 487–493.

Methods

In total, 45 good-quality digital images including different pathologic lesions of DR were selected from our digital image bank. Good quality was defined as a well-centred, in-focus image, without dark or bright peripheral halos, that allowed unambiguous analysis of fine details. They were obtained with Topcon’s nonmydriatic retinal fundus camera (45° opening, TRC-NW6, Topcon Europe, Rotterdam, The Netherlands) connected to a tri-CCD colour video camera (Sony, DXC-950 P, Tokyo, Japan).
Images were captured without pupil dilation, in true colour (24 bits), at a resolution of 800 × 600 pixels, resulting in an uncompressed image size of 1.26 MB. Photographs were selected to ensure an adequate distribution of different DR lesions: eight photographs showed normal fundus, 37 showed various numbers of haemorrhages/microaneurysms (HMA), 18 soft exudates (SE), 22 hard exudates (HE), seven new vessels on the disk (NVD) or elsewhere (NVE), and four showed intraretinal microvascular abnormalities (IRMA). Images were stored as uncompressed TIFF (Tagged Image File Format) files. This study, of retrospective design, adhered to the tenets of the Declaration of Helsinki.

Compression

The 45 TIFF images were first compressed to five different levels, using the cJPEG algorithm with Photoshop 5.0 (Adobe, San Jose, CA, USA), to 118, 58, 41, 27, and 24 KB in size (representing compression ratios of 1:11, 1:22, 1:31, 1:47, and 1:53, respectively). The TIFF images were then compressed with the JPEG2000 algorithm, using the ImagePress JP2 plug-in (Pegasus, Tampa, FL, USA) for Photoshop, at ratios of 1:11, 1:22, 1:31, and 1:47, resulting in image sizes of 118, 58, 41, and 27 KB, respectively. These sizes were chosen because it became empirically clear that the lowest compression ratio (118 KB) and the highest (27 KB) gave, respectively, excellent and poor quality with both compression algorithms. The additional 24 KB level was applied only to classic JPEG, as it appeared much worse than the others and the gain in storage/transmission size between 27 and 24 KB is negligible. All 405 compressed images were placed in a random order.

Image grading

Images were displayed on a 21-inch monitor (resolution: 1280 × 1024 × 24 bits, CRT Sony Trinitron Multiscan G500, Tokyo, Japan). To avoid intergrader variability, all images were graded jointly by two retinal specialists. No image processing was used.
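As a quick sanity check, the compression ratios quoted above follow directly from the reported file sizes, taking the 1.26 MB TIFF as 1260 KB:

```python
# Uncompressed TIFF size reported in the text: 1.26 MB, i.e. 1260 KB.
UNCOMPRESSED_KB = 1260

ratios = {}
for size_kb in (118, 58, 41, 27, 24):
    # Conventional half-up rounding reproduces the ratios quoted in the text.
    ratios[size_kb] = int(UNCOMPRESSED_KB / size_kb + 0.5)

for size_kb, ratio in ratios.items():
    print(f"{size_kb} KB -> 1:{ratio}")  # 1:11, 1:22, 1:31, 1:47, 1:53
```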
Each sign of DR was graded on the TIFF photographs: SE, HE, “macular oedema” (ME, for which the presence of HE within one disc diameter of the foveola was used as a surrogate), NVD, NVE, and IRMA were assessed as absent (0), questionable (1), or present (2). HMA were graded as absent (0); questionable or fewer than 5 (1); 5 to 10 (2); or more than 10 (3). TIFF images were graded twice at an interval of 1 month in order to assess intragrader variability. Just after the second TIFF grading, the compressed images (n = 405) were presented in random order and graded jointly by the two graders. Image quality was graded as good (image degradation not apparent), acceptable (image degradation apparent, but still allowing subjectively reliable assessment), or poor (quality not sufficient for reliable assessment). Each lesion of DR was then graded on each image according to the above scale.

Statistics

The TIFF photographs were the reference standard. However, some variability was observed between the two successive gradings owing to intragrader variability, so a consensus grade for each image was reached after discussion between the two graders to establish the gold standard. Lesion grades for each image at each compression level were then compared with the gold standard, and sensitivity, specificity, and positive and negative predictive values were calculated. For sensitivity and specificity, the score for each lesion was dichotomized as present or absent; questionable lesions were conventionally considered positive. 95% confidence intervals were calculated. A weighted kappa statistic was calculated for each compression grade, to evaluate the level of agreement between the gold standard and the grading scores at each level of compression.
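The dichotomisation and diagnostic metrics described in this Statistics section can be sketched as follows. This is an illustrative implementation, not the authors' analysis, and the grading data below are invented:

```python
def dichotomise(grade):
    # Grades: 0 = absent, 1 = questionable, 2 = present.
    # Questionable lesions are conventionally considered positive.
    return grade >= 1

def diagnostic_metrics(gold, test):
    """Sensitivity, specificity, PPV and NPV from dichotomised
    lesion scores (True = lesion present)."""
    tp = sum(g and t for g, t in zip(gold, test))
    tn = sum(not g and not t for g, t in zip(gold, test))
    fp = sum(not g and t for g, t in zip(gold, test))
    fn = sum(g and not t for g, t in zip(gold, test))
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# Hypothetical grades on ten images: gold standard (TIFF consensus)
# vs the same lesion graded on a compressed version.
gold = [dichotomise(g) for g in [2, 0, 0, 1, 2, 0, 2, 0, 0, 2]]
test = [dichotomise(g) for g in [2, 0, 1, 1, 2, 0, 0, 0, 0, 2]]
m = diagnostic_metrics(gold, test)
print({k: round(v, 2) for k, v in m.items()})  # all 0.8 for this toy data
```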
Kappa agreement was termed slight (0–0.20), fair (0.21–0.40), moderate (0.41–0.60), substantial (0.61–0.80), or almost perfect (0.81 and above).5 Agreement between the gold standard and the grading scores for HMA was also calculated with HMA classed as absent (grade 0) vs present (grades 1, 2, and 3), to study the effect of compression on distinguishing no lesions from few or many. Kappa statistics were also used to assess intragrader agreement between the two gradings at a 1-month interval. The SPSS v11.1 statistical software package (SPSS, Chicago, IL, USA) for Windows was used.

Results

Table 1 summarizes the image quality assessment for the different compression levels. With cJPEG, image quality started to decrease at 41 KB, and blocking effects were obvious in almost all 24 KB images. With JPEG2000, at sizes of 27 and 41 KB, slightly more ‘poor’ images were seen than for cJPEG, with ‘rice grain’ artefacts present. No intragrader variability was observed when grading for SE, HE, ME, NVD/NVE, and IRMA on the TIFF images. The kappa statistic for intragrader agreement in the grading of HMA on TIFF images was 0.87 (95% confidence interval, 0.74–0.99).

Sensitivity, specificity, and positive and negative predictive values are given in Table 2. Table 3 summarizes the agreement of lesion grades between TIFF and compressed images. Compression had no effect on the detection of NVD. Among NVE, two were subtle and smaller than 1/2 disc area in size; one of them was not detected on either the 24 KB cJPEG or the 27 KB JPEG2000 image. The 27, 41, and 58 KB JPEG2000 image sizes led to the identification of one false-positive NVE. Good-to-excellent agreement was observed for the detection of exudates between TIFF and compressed images. Only subtle, isolated exudates were missed in 24–41 KB cJPEG and 27–58 KB JPEG2000 images. For SE at 41 and 58 KB, JPEG2000 showed lower sensitivity.
The lowest level of agreement was associated with IRMA, present on four images; there were many on two of them, and a few small, isolated ones on the other two. With cJPEG, small IRMA were missed at 27 and 41 KB. At these same image sizes, JPEG2000 performed slightly better for IRMA. Kappa analysis of HMA grades showed fair-to-good agreement (greater than 0.5) between TIFF and either compressed format at all compression levels. However, variability in HMA grades was due both to intragrader variability and to compression. For the 118 KB size in both cJPEG and JPEG2000, variability was mostly intragrader (kappa values being greater than those for intragrader agreement). In the other cases, variability was mostly due to compression, as kappa values were smaller than those for intragrader agreement. Kappa values were higher when considering absence vs presence of HMA than for global grading in all images, except the 27 and 41 KB JPEG2000 images.

Table 1. Subjective image quality assessment for 45 compressed images at different levels of cJPEG and JPEG2000 compression

| | cJPEG 24 KB | cJPEG 27 KB | cJPEG 41 KB | cJPEG 58 KB | cJPEG 118 KB | JP2 27 KB | JP2 41 KB | JP2 58 KB | JP2 118 KB |
| Q value | 10 | 20 | 40 | 60 | 80 | | | | |
| Compression ratio | 1:53 | 1:47 | 1:31 | 1:22 | 1:11 | 1:47 | 1:31 | 1:22 | 1:11 |
| Quality good | 0 | 0 | 29 | 45 | 45 | 0 | 22 | 41 | 45 |
| Acceptable | 5 | 38 | 15 | 0 | 0 | 33 | 15 | 4 | 0 |
| Poor | 40 | 7 | 1 | 0 | 0 | 12 | 8 | 0 | 0 |

Table 2. Sensitivity (Sens), specificity (Spe), positive predictive value (PPV) and negative predictive value (NPV) for 45 compressed images at different levels of cJPEG and JPEG2000 compression

| Sens/Spe | cJPEG 24 KB | cJPEG 27 KB | cJPEG 41 KB | cJPEG 58 KB | cJPEG 118 KB | JP2 27 KB | JP2 41 KB | JP2 58 KB | JP2 118 KB |
| HMA | 92/75 | 89/75 | 95/100 | 95/100 | 100/100 | 97/50 | 83/88 | 95/100 | 100/100 |
| Soft exudates | 89/96 | 83/100 | 100/100 | 100/100 | 100/100 | 89/100 | 89/100 | 94/96 | 100/100 |
| Hard exudates | 95.5/100 | 95.5/100 | 95.5/100 | 100/100 | 100/100 | 95.5/100 | 95.5/100 | 95.5/100 | 100/100 |
| ME | 100/100 | 100/100 | 100/100 | 100/100 | 100/100 | 100/97 | 100/100 | 100/97 | 100/100 |
| IRMA | 25/92.5 | 50/100 | 50/100 | 100/100 | 100/100 | 75/100 | 75/100 | 75/100 | 100/100 |
| NVE | 83/100 | 100/100 | 100/100 | 100/100 | 100/100 | 83/97 | 100/97 | 100/97 | 100/100 |
| NVD | 100/100 | 100/100 | 100/100 | 100/100 | 100/100 | 100/100 | 100/100 | 100/100 | 100/100 |

| PPV/NPV | cJPEG 24 KB | cJPEG 27 KB | cJPEG 41 KB | cJPEG 58 KB | cJPEG 118 KB | JP2 27 KB | JP2 41 KB | JP2 58 KB | JP2 118 KB |
| HMA | 94.5/66.5 | 94/60 | 100/80 | 100/80 | 100/100 | 90/80 | 97/70 | 100/80 | 100/100 |
| Soft exudates | 94/93 | 100/90 | 100/100 | 100/100 | 100/100 | 100/93 | 100/93 | 94/96 | 100/100 |
| Hard exudates | 100/96 | 100/96 | 100/96 | 100/100 | 100/100 | 100/96 | 100/96 | 100/96 | 100/100 |
| ME | 100/100 | 100/100 | 100/100 | 100/100 | 100/100 | 91/100 | 100/100 | 91/100 | 100/100 |
| IRMA | 25/93 | 100/95 | 100/95 | 100/100 | 100/100 | 100/98 | 100/98 | 100/98 | 100/100 |
| NVE | 100/97.5 | 100/100 | 100/100 | 100/100 | 100/100 | 83/97 | 86/100 | 86/100 | 100/100 |
| NVD | 100/100 | 100/100 | 100/100 | 100/100 | 100/100 | 100/100 | 100/100 | 100/100 | 100/100 |

Discussion

Compression reduces image file size, allowing quicker transmission and requiring less storage space. The Joint Photographic Experts Group (JPEG) format is the most common compressed image format and is widely used in medical imaging. It is a 'lossy' compression technique, meaning that some information, and hence image quality, is lost during compression; the amount of information discarded determines the amount of compression. The cJPEG algorithm breaks the image into 8 × 8 pixel blocks and performs a discrete cosine transform on each block, yielding an 8 × 8 block of spectral coefficients with most of the information concentrated in relatively few coefficients. Quantization is then performed, closely preserving low-frequency components and approximating high-frequency components; the amount of discarded information determines the compression level. A coding process then compresses the remaining frequency coefficients.
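The block transform stage just described can be sketched directly. The following is a minimal illustration of an 8 × 8 DCT followed by quantization only (entropy coding omitted, and the quantization steps are illustrative values, not the JPEG reference tables):

```python
import math

N = 8  # cJPEG operates on 8 x 8 pixel blocks

def dct2(block):
    """Orthonormal 2-D DCT-II of an 8 x 8 block; energy concentrates
    in the low-frequency (top-left) coefficients."""
    def a(k):
        return math.sqrt(1 / N) if k == 0 else math.sqrt(2 / N)
    out = [[0.0] * N for _ in range(N)]
    for u in range(N):
        for v in range(N):
            s = sum(block[x][y]
                    * math.cos((2 * x + 1) * u * math.pi / (2 * N))
                    * math.cos((2 * y + 1) * v * math.pi / (2 * N))
                    for x in range(N) for y in range(N))
            out[u][v] = a(u) * a(v) * s
    return out

def quantize(coeffs, step_low=4, step_high=32):
    """Coarser steps for high frequencies: this is where information
    is traded for file size, and where blocking artefacts originate."""
    return [[round(c / (step_low if u + v < 4 else step_high))
             for v, c in enumerate(row)]
            for u, row in enumerate(coeffs)]
```

A flat block of grey value 128 keeps only its DC coefficient (8 × 128 = 1024); all other coefficients quantize to zero, which is why heavily compressed cJPEG blocks collapse towards constant 8 × 8 squares.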
The JPEG2000 wavelet algorithm uses a different approach, dividing the image into a series of nonoverlapping rectangular blocks called tiles.6 Each tile component is decomposed using wavelet transforms into decomposition levels, each of which contains a number of subbands. These subbands contain information describing the horizontal and vertical characteristics of the original tile. They are computed using a one-dimensional filter applied in both directions. This gives four smaller image blocks: one with low resolution, one with high vertical and low horizontal resolution, one with low vertical and high horizontal resolution, and one with all high resolution. This application of one-dimensional filters in both directions is then repeated a number of times on the low-resolution image block (dyadic decomposition). After transformation, all coefficients are quantized; this is the process by which the coefficients are reduced in precision. Following quantization, each subband is subjected to a packet partition, creating code-blocks,7 which are the fundamental entities used for the final step of entropy coding. The effect of compression on retinal images has been investigated in a number of studies. Eikelboom et al8 studied the effect of various levels of both cJPEG and wavelet compression on the quality of digitized retinal images and, using different methods, concluded that a digital image 1.5 MB in size could be compressed to 29 KB without serious degradation in quality. Newsom et al9 demonstrated a significant loss of sensitivity to the features of DR with cJPEG compression of digitized 35 mm slides, concluding that this was due to the TFT screen they used.
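The dyadic decomposition described above can be illustrated with the simplest wavelet filter (Haar). This is a toy sketch under that assumption, not the actual JPEG2000 filter bank (which uses the longer 5/3 or 9/7 filters plus tiling and entropy coding):

```python
def haar2d(img):
    """One level of 2-D Haar decomposition (rows then columns),
    returning the four subbands described in the text."""
    h, w = len(img), len(img[0])
    # Low-pass (pairwise average) and high-pass (pairwise difference) along rows
    lo = [[(r[2 * j] + r[2 * j + 1]) / 2 for j in range(w // 2)] for r in img]
    hi = [[(r[2 * j] - r[2 * j + 1]) / 2 for j in range(w // 2)] for r in img]

    def filter_cols(m):
        avg = [[(m[2 * i][j] + m[2 * i + 1][j]) / 2
                for j in range(len(m[0]))] for i in range(h // 2)]
        dif = [[(m[2 * i][j] - m[2 * i + 1][j]) / 2
                for j in range(len(m[0]))] for i in range(h // 2)]
        return avg, dif

    ll, lh = filter_cols(lo)  # low resolution; vertical detail
    hl, hh = filter_cols(hi)  # horizontal detail; diagonal detail
    return ll, lh, hl, hh
```

Repeating `haar2d` on the `ll` block gives the dyadic decomposition: for a flat image all three detail subbands are zero, so nearly all the bits go to the low-resolution approximation.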
Only retinopathy level was considered, with no mention of individual lesion counts.

Table 3. Kappa values (95% confidence intervals) for 45 compressed images at different levels of cJPEG and JPEG2000 compression

| | cJPEG 24 KB | cJPEG 27 KB | cJPEG 41 KB | cJPEG 58 KB | cJPEG 118 KB | JP2 27 KB | JP2 41 KB | JP2 58 KB | JP2 118 KB |
| Q value (for classic JPEG) | 10 | 20 | 40 | 60 | 80 | | | | |
| Compression ratio | 1:53 | 1:47 | 1:31 | 1:22 | 1:11 | 1:47 | 1:31 | 1:22 | 1:11 |
| HMA (global) | 0.53 (0.33–0.72) | 0.56 (0.37–0.75) | 0.72 (0.55–0.88) | 0.75 (0.58–0.90) | 0.97 (0.90–1) | 0.71 (0.53–0.88) | 0.74 (0.58–0.90) | 0.84 (0.70–0.97) | 0.94 (0.85–1) |
| HMA: TIFF 0 vs 1,2,3 | 0.64 (0.34–0.94) | 0.58 (0.27–0.89) | 0.86 (0.67–1) | 0.86 (0.67–1) | 1 (1–1) | 0.55 (0.18–0.92) | 0.72 (0.46–0.98) | 0.86 (0.67–1) | 1 (1–1) |
| Soft exudates | 0.82 (0.66–0.98) | 0.86 (0.71–1) | 1 (1–1) | 0.96 (0.87–1) | 1 (1–1) | 0.86 (0.71–1) | 0.86 (0.71–1) | 0.86 (0.72–1) | 1 (1–1) |
| Hard exudates | 0.96 (0.87–1) | 0.96 (0.87–1) | 0.96 (0.87–1) | 1 (1–1) | 1 (1–1) | 0.95 (0.87–1) | 0.95 (0.87–1) | 0.91 (0.80–1) | 1 (1–1) |
| ME | 1 (1–1) | 1 (1–1) | 0.94 (0.82–1) | 1 (1–1) | 1 (1–1) | 0.88 (0.72–1) | 0.94 (0.82–1) | 0.83 (0.63–1) | 1 (1–1) |
| IRMA | 0.20 (−0.20–0.61) | 0.48 (0.07–0.88) | 0.65 (0.20–1) | 1 (1–1) | 1 (1–1) | 0.97 (0.55–1) | 0.98 (0.55–1) | 0.98 (0.55–1) | 1 (1–1) |
| NVE | 0.90 (0.70–1) | 1 (1–1) | 1 (1–1) | 1 (1–1) | 1 (1–1) | 0.81 (0.55–1) | 0.91 (0.73–1) | 0.91 (0.73–1) | 1 (1–1) |
| NVD | 1 (1–1) | 1 (1–1) | 1 (1–1) | 1 (1–1) | 1 (1–1) | 1 (1–1) | 1 (1–1) | 1 (1–1) | 1 (1–1) |

Basu et al2 explored the effect of four different levels of cJPEG compression on 58 digitally acquired fundus images, finding compression ratios of up to 1:20 acceptable. Stellingwerf et al10 compared uncompressed TIFF and compressed cJPEG digitally acquired fundus photographs with 35 mm retinal slides, finding that 1:30 compression decreased sensitivity from 0.86–0.92 to 0.72–0.74. Using large 2008 × 3040 pixel images, Baker et al found 1:55 and 1:113 compression ratios acceptable for DR screening.11 The aim of our study was to compare the cJPEG and JPEG2000 compression algorithms in screening for DR lesions. We therefore chose individual images, to determine the effect of compression on specific lesions, rather than composite fundus images, as ETDRS grading was not our aim. In cJPEG compression, blocking artefacts started to be visible on 41 KB images (Figure 1) and became obvious on 27 KB images.

Figure 1. Examples of the effect on a retinal image (detail) of the different JPEG and JPEG2000 compression ratios. Top middle: original 1.26 MB TIFF image. Left column, from top to bottom: classic JPEG compression to 118, 58, 41, 27 and 24 KB. Right column, from top to bottom: JPEG2000 wavelet compression to 118, 58, 41 and 27 KB. A small red dot (HMA, arrow) is seen in all images except the 24 KB JPEG image (bottom left), where it becomes a vertical line, part of a blocking artefact (large arrowhead). The vessels surrounding it, seen well on TIFF and both 118 KB images, progressively fade away with both compression algorithms. Rice-grain artefacts become more prominent in the JPEG2000 images as image size decreases (small arrowheads in the 41 and 27 KB images).
Image degradation was thus first noticed at an earlier level of compression than in Eikelboom's study.8 Certain JPEG2000 images were subjectively found to be blurred at all three levels of compression, without typical 'blocking' artefacts, yet artefacts specific to JPEG2000 ('wavelet or rice-shaped',12 Figure 1) were observed. Another feature visible on JPEG2000 images at high compression ratios was a 'smoothing' effect; as Eikelboom et al8 noted, wavelet images may be pleasant to look at, even if appearing somewhat fuzzy or 'out of focus'. With the levels of compression that we used, we observed relatively few effects on the detection of gross anomalies, which were detected with good sensitivity. Eikelboom et al8 found that large anomalies could be detected on retinal images at a compression ratio over 1:300 using JPEG. In our study, NVD and NVE were detected at any level of JPEG compression when they were greater than 1/2 disc area in size. However, smaller new vessels were missed in one case each on the 24 KB cJPEG and 27 KB JPEG2000 images. HE were well detected at any level of either compression. Indeed, HE are small lesions, but they are often grouped together in clusters or large circinate rings. Only small isolated HE were missed at the higher levels of compression. As expected, the effect of compression was more pronounced on small, subtle, low-contrast anomalies. Vanishing SE were missed at the smallest image sizes. The lowest level of agreement was associated with IRMA; small, isolated IRMA present on two images were missed on cJPEG images of 41 KB and smaller. This was the case in one JPEG2000 image.
An important point in this study was the effect of image compression on the detection of microaneurysms. They are the first ophthalmoscopic sign of early DR, and their detection is particularly critical when screening for DR; it is important to be able to count them at early stages of DR to follow the progression of the disease.13 The ability to detect and count them properly on compressed images will thus determine the highest compression level clinically acceptable for DR screening. The sensitivity for detecting microaneurysms is not as good on digital photography as on conventional 35 mm photography. Although several authors have found good agreement between the two techniques for DR grading,14–16 the poorest agreement between DR grades was recorded at level 21, characterized by the presence of a few microaneurysms.17–19 This is due to the lower resolution of digital photography, which makes the images granular and increases the number of questionable lesions when grading for microaneurysms. Image compression can be expected to increase this phenomenon, as well as the difficulty of detecting and counting microaneurysms. For both cJPEG and JPEG2000, the global level of agreement between the 118 KB compressed and TIFF image gradings was almost perfect. Image compression did not lead to increased variability compared with the variability observed when grading TIFF images twice. At intermediate compression ratios (41 and 58 KB), global performance assessed by the kappa statistic was quite similar for both algorithms. At 27 KB, JPEG2000 performed slightly better than cJPEG for the distinction of different HMA levels. At these levels of compression, cJPEG resulted in a significant decrease in the visibility of critical details due to blocking artefacts that affected microaneurysm grading. Thus, such levels of compression with cJPEG do not appear suitable for DR screening.
Blocking artefacts are an intrinsic limitation of the cJPEG algorithm, which splits the image into blocks of 8 × 8 pixels. In low-quality compressed images, for each block only the lowest-frequency (constant) component remains, and these components are encoded inefficiently. As the JPEG2000 standard works on the image as a whole, based on wavelet decompositions, it does not present blocking effects. However, implementation issues have imposed that this newer JPEG standard also splits the image into 'tiles',6 so that the problem of artefacts ('wavelet or rice-shaped') still remains. On the other hand, at high compression ratios, JPEG2000 might be more efficient at retaining small details such as microaneurysms than the current cJPEG standard, especially when these details are much smaller than the 8 × 8 block size. This effect is supported by our finding of a higher kappa for 27 KB JPEG2000 vs cJPEG images for global HMA evaluation. In spite of higher-performance computers, more sophisticated software and high-speed internet-based data transmission, compression will remain a matter of concern for telemedicine in the future, as digital retinal screening cameras are also reaching higher definitions (above the 5-megapixel range to date). We found that cJPEG as well as JPEG2000 compression of a 1.26 MB fundus image to 118 KB does not affect accuracy when compared with uncompressed TIFF images. At higher compression ratios, JPEG2000 has proven superior to cJPEG in radiology;4 we also found it slightly better, but with results insufficient for clinical use. This remains to be tested in a clinical setting using larger image sizes.

References
1 Freudenstein U, Verne J. A national screening programme for diabetic retinopathy. BMJ 2001; 323: 4–5.
2 Basu A, Kamal AD, Illahi W, Khan M, Stavrou P, Ryder REJ.
Is digital image compression acceptable within diabetic retinopathy screening? Diabet Med 2003; 20: 766–771.
3 Sung MM, Kim HJ, Yoo SK, Choi BW, Nam JE, Kim HS et al. Clinical evaluation of compression ratios using JPEG2000 on computed radiography chest images. J Digit Imaging 2002; 15: 78–83.
4 Fidler A, Likar B, Pernus F, Skaleric U. Comparative evaluation of JPEG and JPEG2000 compression in quantitative digital subtraction radiography. Dentomaxillofac Radiol 2002; 31: 379–384.
5 Landis JR, Koch GG. The measurement of observer agreement for categorical data. Biometrics 1977; 33: 159–174.
6 Christopoulos C, Skodras A, Ebrahimi T. The JPEG2000 still image coding: an overview. IEEE Trans Consumer Electron 2000; 46: 1103–1127.
7 Marcellin MW, Gormish MJ, Bilgin A, Boliek MP. An overview of JPEG-2000. In: Proceedings of the 2000 Data Compression Conference, March 2000, pp 523–544.
8 Eikelboom RH, Yogesan K, Barry CJ, Constable IJ, Tay-Kearney ML, Jitskaia L et al. Methods and limits of digital image compression of retinal images for telemedicine. Invest Ophthalmol Vis Sci 2000; 41: 1916–1924.
9 Newsom RS, Clover A, Costen MT, Sadler J, Newton J, Luff AJ et al. Effect of digital image compression on screening for diabetic retinopathy. Br J Ophthalmol 2001; 85: 799–802.
10 Stellingwerf C, Hardus PL, Hooymans JM. Assessing diabetic retinopathy using two-field digital photography and the influence of JPEG-compression. Doc Ophthalmol 2004; 108: 203–209.
11 Baker CF, Rudnisky CJ, Tennant MT, Sanghera P, Hinz BJ, De Leon AR et al. JPEG compression of stereoscopic digital images for the diagnosis of diabetic retinopathy via teleophthalmology. Can J Ophthalmol 2004; 39: 746–754.
12 Persons KR, Palisson P, Manduca A, Erickson BJ, Savcenko V. An analytical look at the effects of compression on medical images. J Digit Imag 1997; 10: 60–66.
13 Klein R, Meuer SM, Moss SE, Klein BE. Retinal microaneurysm counts and 10-year progression of diabetic retinopathy.
Arch Ophthalmol 1995; 113: 1386–1391.
14 George LD, Halliwell M, Hill R, Aldington SJ, Lusty J, Dunstan F et al. A comparison of digital retinal images and 35 mm colour transparencies in detecting and grading diabetic retinopathy. Diabet Med 1998; 15: 250–253.
15 Kerr D, Cavan DA, Jennings B, Dunnington C, Gold D, Crick M. Beyond retinal screening: digital imaging in the assessment and follow-up of patients with diabetic retinopathy. Diabet Med 1998; 15: 878–882.
16 Lin DY, Blumenkrantz MS, Brothers R. The role of digital fundus photography in diabetic retinopathy screening. Digital Diabetic Screening Group (DDSG). Diabetes Technol Ther 1999; 1: 4477–4487.
17 Henricsson M, Karlsson C, Ekholm L, Kaikkonen P, Sellman A, Steffert E et al. Colour slides or digital photography in diabetes screening – a comparison. Acta Ophthalmol Scand 2000; 78: 164–168.
18 Massin P, Erginay A, Ben Mehidi A, Vicaut E, Quentel G, Victor Z et al. Evaluation of a new non-mydriatic digital camera for detection of diabetic retinopathy. Diabet Med 2003; 20: 635–641.
19 Lim JI, LaBree L, Nichols T, Cardenas I. A comparison of digital nonmydriatic fundus imaging with standard 35-millimeter slides for diabetic retinopathy. Ophthalmology 2000; 107: 866–870.

Evaluation of the effect of JPEG and JPEG2000 image compression on the detection of diabetic retinopathy

source: https://doi.org/10.7892/boris.142346 | downloaded: 6.4.2021

Original research. Journal of Virus Eradication 2020; 6: 11–18
© 2020 The authors. Journal of Virus Eradication published by Mediscript. This is an open access article published under the terms of a Creative Commons license.
Atherosclerotic cardiovascular disease screening and management protocols among adult HIV clinics in Asia

DC Boettiger1,2*, MG Law1, J Ross3, BV Huy4, BSL Heng5, R Ditangco6, S Kiertiburanakul7, A Avihingsanon8, DD Cuong9, N Kumarasamy10, A Kamarulzaman11, PS Ly12, E Yunihastuti13, T Parwati Merati14, F Zhang15, S Khusuwan16, R Chaiwarith17, MP Lee18, S Sangle19, JY Choi20, WW Ku21, J Tanuma22, OT Ng23, AH Sohn3, CW Wester24, D Nash25,26, C Mugglin27 and S Pujari28 on behalf of the International Epidemiology Databases to Evaluate AIDS – Asia-Pacific

1 Kirby Institute, UNSW Sydney, Australia
2 Institute for Health Policy Studies, University of California, San Francisco, USA
3 TREAT Asia/amfAR, The Foundation for AIDS Research, Bangkok, Thailand
4 National Hospital for Tropical Disease, Hanoi, Vietnam
5 Hospital Sungai Buloh, Kuala Lumpur, Malaysia
6 Research Institute for Tropical Medicine, Manila, Philippines
7 Faculty of Medicine Ramathibodi Hospital, Bangkok, Thailand
8 HIV-NAT, Thai Red Cross AIDS Research Centre, Bangkok, Thailand
9 Bach Mai Hospital, Hanoi, Vietnam
10 CART Clinical Research Site, Infectious Diseases Medical Centre, Voluntary Health Services, Chennai, India
11 University Malaya Medical Centre, Kuala Lumpur, Malaysia
12 Social Health Clinic, National Center for HIV/AIDS, Dermatology and STDs, Phnom Penh, Cambodia
13 Faculty of Medicine Universitas Indonesia, Cipto Mangunkusumo General Hospital, Jakarta, Indonesia
14 Udayana University and Sanglah Hospital, Denpasar, Indonesia
15 Beijing Ditan Hospital, Capital Medical University, Beijing, China
16 Chiangrai Prachanukhor Hospital, Chiangrai, Thailand
17 Research Institute for Health Sciences, Chiangmai, Thailand
18 Queen Elizabeth Hospital, Hong Kong
19 BJ Government Medical College and Sassoon General Hospitals, Pune, India
20 Severance Hospital, Seoul, South Korea
21 Taipei Veterans General Hospital, Taipei, Taiwan
22 National Center for Global Health and Medicine, Tokyo, Japan
23 Tan Tock Seng Hospital, Singapore
24 Vanderbilt University Medical Center, Institute for Global Health, Nashville, USA
25 Institute for Implementation Science in Population Health, City University of New York, New York, USA
26 Department of Epidemiology and Biostatistics, City University of New York, New York, USA
27 Institute of Social and Preventative Medicine, University of Bern, Switzerland
28 Institute for Infectious Diseases, Pune, India

*Corresponding author: David C Boettiger, Institute for Health Policy Studies, University of California, San Francisco, 3333 California Street, 94118, USA
Email: dboettiger@kirby.unsw.edu.au

Abstract

Objectives: Integration of HIV and non-communicable disease services improves the quality and efficiency of care in low- and middle-income countries (LMICs). We aimed to describe current practices for the screening and management of atherosclerotic cardiovascular disease (ASCVD) among adult HIV clinics in Asia.
Methods: Sixteen LMIC sites included in the International Epidemiology Databases to Evaluate AIDS – Asia-Pacific network were surveyed.
Results: Sites were mostly (81%) based in urban public referral hospitals. Half had protocols to assess tobacco and alcohol use. Protocols for assessing physical inactivity and obesity were in place at 31% and 38% of sites, respectively. Most sites provided educational material on ASCVD risk factors (between 56% and 75%, depending on the risk factor). A total of 94% reported performing routine screening for hypertension, 100% for hyperlipidaemia and 88% for diabetes. Routine ASCVD risk assessment was reported by 94% of sites. Protocols for the management of hypertension, hyperlipidaemia, diabetes, high ASCVD risk and chronic ischaemic stroke were in place at 50%, 69%, 56%, 19% and 38% of sites, respectively. Blood pressure monitoring was free for patients at 69% of sites; however, most required patients to pay some or all of the costs for other ASCVD-related procedures.
Medications available in the clinic or within the same facility included angiotensin-converting enzyme inhibitors (81%), statins (94%) and sulphonylureas (94%).
Conclusion: The consistent availability of clinical screening, diagnostic testing and procedures and the availability of ASCVD medications in the Asian LMIC clinics surveyed are strengths that should be leveraged to improve the implementation of cardiovascular care protocols.
Keywords: HIV, cardiovascular disease, atherosclerosis, hypertension, Asia

Introduction

While AIDS-related infections, malignancies and deaths have declined among people living with HIV (PLHIV) in the antiretroviral therapy (ART) era, there has been a concomitant increase in non-AIDS-related causes of death [1–3]. This trend is consistent with the dramatic global increase in population rates of non-communicable disease over the past three decades [4]; however, many studies now show that PLHIV experience a disproportionate amount of this burden [5–7]. Although partly due to the high prevalence of traditional risk factors among PLHIV, the increase in non-AIDS-related causes of death may also be associated with the persistent low-level inflammation induced by long-term ART and HIV infection itself [8]. Atherosclerotic cardiovascular disease (ASCVD) is now a leading cause of non-AIDS-related death among people on ART [9–11]. However, in many low- and middle-income countries (LMICs), ASCVD is managed episodically, thereby placing patients at risk of long-term complications [12]. In contrast, HIV programmes in LMICs have proven very successful in establishing long-term care models that focus on continuity of care and retention, routine monitoring, and reduction in HIV transmission risk.
Integration of HIV and non-communicable disease services has been shown to improve the quality and efficiency of care among PLHIV in LMICs [13,14], and was recommended by the World Health Organization in their action plan for the prevention and control of non-communicable diseases in Southeast Asia from 2013 to 2020 [15]. UNAIDS estimates that 5.9 million people are currently living with HIV in Asia [16]. Nevertheless, data remain limited with regard to the infrastructure and human resources available for non-communicable disease care among HIV clinics in the region. Here we describe current practices for ASCVD screening and management among HIV clinics in LMICs in Asia. Our data will inform current and future research in the region and help policymakers develop more effective strategies to prevent and manage ASCVD among PLHIV.

Methods

The International Epidemiology Databases to Evaluate AIDS (IeDEA; www.iedea.org) is a global consortium that includes cohorts located in Asia-Pacific, the Caribbean, South and North America, and Central, East, Southern and West Africa [17–20]. IeDEA provides a unique platform for evaluating standard practices and resource allocation in HIV cohorts in these regions. In 2016, a survey of LMIC (as defined by the World Bank [21]) sites in IeDEA was conducted to assess site capacity for non-communicable disease screening and management [22–24]. A follow-up survey was conducted in 2018 across sites in the IeDEA Asia-Pacific network to further evaluate screening and management practices for ASCVD. We have assessed the results from both surveys.
Investigators from IeDEA developed and standardised a 302-question survey on site resources, lifestyle assessment and education practices, screening and management of non-communicable diseases (hypertension, diabetes, kidney and pulmonary disease, mental health disorders and cancer), care for paediatric and adolescent patients, availability of vaccinations, imaging, surgery and medicines, and routine data collection procedures. The IeDEA site capacity survey is available from the authors on request. Multilingual versions of the survey were implemented using Research Electronic Data Capture (REDCap), a secure, web-based application designed to support data capture for research studies (www.project-redcap.org). REDCap was developed at the Vanderbilt Institute for Clinical and Translational Research. It provides an interface for validated data entry, audit trails for tracking data manipulation, and automated import and export procedures to common statistical packages [25]. Separate REDCap databases were created for each of the participating regions and could be completed directly online or transferred from paper copies. Regional data centres coordinated distribution to and completion of the survey by the sites. Clinical site investigators affiliated with the IeDEA regional networks were primarily responsible for completing the surveys. Data collection for IeDEA Asia-Pacific sites was completed by October 2016. Investigators from IeDEA Asia-Pacific developed an additional 66-question survey to supplement the above-mentioned site capacity survey. The survey containing the additional questions is available from the authors on request. Questions related to hyperlipidaemia screening and management, ASCVD risk assessment and management, chronic ischaemic stroke management, and access to lipid-lowering and anti-diabetic medicines.
Although ASCVD risk assessment tools are limited in their capacity to predict stroke risk accurately [26], we did not enquire about whether sites used additional means to assess stroke risk, to prevent the survey becoming overly burdensome. An English version of the survey was implemented using REDCap, which allowed data to be entered directly online. Sites could also complete the survey on paper and send scanned files to the IeDEA Asia-Pacific coordinating centre (TREAT Asia/amfAR, Bangkok, Thailand) for data entry. Clinical site investigators and data managers affiliated with IeDEA Asia-Pacific were primarily responsible for completing the surveys. Data collection was completed by June 2018. An additional question on beta-blocker availability was put to the sites via email in October 2019.

Analysis

All 16 LMIC sites in IeDEA Asia-Pacific participated in the global site capacity and regional ASCVD surveys. Countries included were Thailand (four sites), China (two sites), India (two sites), Indonesia (two sites), Malaysia (two sites), Vietnam (two sites), Cambodia (one site) and the Philippines (one site). Data from both surveys were combined and responses evaluated for inconsistencies, which were resolved through direct communication with site data managers. The results were split into eight categories: (1) general site characteristics; (2) risk factor assessment and patient education practices; (3) hypertension screening and management; (4) hyperlipidaemia screening and management; (5) diabetes screening and management; (6) ASCVD risk assessment and management, and chronic ischaemic stroke management; (7) availability of clinical testing and procedures; and (8) availability of medicines.

Results

General site characteristics
Sites were mostly (81%) based in urban public referral hospitals. Seventy-five percent were linked to an academic medical centre.
The median number of active outpatients was 2649 (interquartile range 1500–7500) and the total number of active outpatients across all sites was 81,426. Cardiology services were within the HIV clinic itself or available in the same facility for 75% of sites.

Risk factor assessment and patient education practices
Approximately half of sites had protocols to assess tobacco (50%), alcohol (50%) and other substance use (56%), as well as family history of chronic illnesses (50%). Protocols for assessing physical inactivity and obesity were in place at 31% and 38% of sites, respectively. Most sites provided educational material to patients on tobacco (75%), alcohol (69%) and other substance use (56%), physical inactivity (63%), and obesity and nutrition (69%). The most common means of education was patient counselling (Table 1).

Hypertension screening and management
Ninety-four percent of sites reported performing routine screening for hypertension, with 64% indicating that they had a protocol in place (Table 2). Most sites (88%) did not have any selection criteria for performing hypertension screening and most (81%) conducted screening at every visit. Fifty percent of sites had a protocol in place for hypertension management.

Hyperlipidaemia screening and management
All sites reported performing routine screening for hyperlipidaemia, with 44% having a protocol in place. Available screening tests included total cholesterol (75% of sites), high-density lipoprotein cholesterol (63% of sites), low-density lipoprotein cholesterol (69% of sites) and triglycerides (75% of sites).

Table 1. Assessment and patient education for individual risk factors (N=16)

| Characteristic | Tobacco use, n (%) | Alcohol use, n (%) | Other substance use, n (%) | Physical inactivity, n (%) | Obesity and nutrition, n (%) | Family history of chronic illness, n (%) |
| Written protocol in place to assess | 8 (50) | 8 (50) | 9 (56) | 5 (31) | 6 (38) | 8 (50) |
| Educational material provided to patients | 12 (75) | 11 (69) | 9 (56) | 10 (63) | 11 (69) | NA |
| Primary patient education method used: counselling | 5 (31) | 5 (31) | 3 (19) | 5 (31) | 5 (31) | NA |
| Group education | 1 (6) | 1 (6) | 1 (6) | 1 (6) | 2 (13) | NA |
| Referral | 1 (6) | 1 (6) | 1 (6) | 0 (0) | 2 (13) | NA |
| Written information | 3 (19) | 2 (13) | 3 (19) | 1 (6) | 1 (6) | NA |
| Unspecified | 2 (13) | 2 (13) | 1 (6) | 3 (19) | 1 (6) | NA |

Table 2. Hypertension screening and management (N=16), n (%)

Routine screening for hypertension: 15 (94)
Protocol for hypertension screening in place: 9 (56)
Screening tests used(a): manual blood pressure measurement 9 (56); automated blood pressure measurement 8 (50); ambulatory 24-hour blood pressure monitoring 2 (13)
Patients assessed for hypertension: all 14 (88); high-risk groups 0 (0); other selection 0 (0); undefined 2 (13)
Timing of hypertension screening(a): at enrolment into care 5 (31); at antiretroviral therapy initiation 4 (25); yearly 1 (6); at every visit 13 (81)
Number of patients screened for hypertension per month: 0–100, 1 (6); 101–250, 3 (19); 251–500, 2 (13); >500, 8 (50); uncertain 2 (13)
Protocol for hypertension management: 8 (50)
Location of hypertension management: within HIV clinic 11 (69); in same facility but not in HIV clinic 3 (19); off site 2 (13); undefined 0 (0); not available 0 (0)
Staff primarily responsible for hypertension management: HIV physician 10 (63); non-HIV physician 0 (0); nurse 0 (0); nurse assistant 1 (6); other clinical staff 0 (0); non-clinical staff 0 (0); uncertain 5 (31)
Training received in past 2 years for staff managing hypertension: 10 (63)
(a) Respondents could select more than one option.
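The n (%) entries reported in these tables are simple frequency summaries of categorical survey responses over the 16 sites. A minimal sketch of how such a summary can be derived (the function name and example data are illustrative, not drawn from the actual IeDEA dataset):

```python
from collections import Counter

def n_percent(responses, n_sites=16):
    """Summarise categorical survey answers as 'n (%)' strings,
    using whole-number percentages as in the tables."""
    counts = Counter(responses)
    return {answer: f"{n} ({round(100 * n / n_sites)})"
            for answer, n in counts.items()}

# Illustrative data only: 15 of 16 sites screening for hypertension
screening = ["yes"] * 15 + ["no"]
summary = n_percent(screening)  # {'yes': '15 (94)', 'no': '1 (6)'}
```

This reproduces the table style in which 15 of 16 sites appears as "15 (94)".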
a protocol for hyperlipidaemia management was in place at 69% of sites. Further details on hyperlipidaemia screening and management procedures are shown in Table 3. Diabetes screening and management eighty-eight percent of sites reported performing routine screen- ing for diabetes, with 56% indicating that they had a protocol in place (Table 4). The most frequent method used for diabetes screening was a fasting plasma glucose measurement (81% of sites). Fifty percent of sites had a protocol in place for diabetes management. atherosclerotic cardiovascular disease risk assessment and management, and chronic ischaemic stroke management routine assessment of ascVD risk was reported by 94% of sites, of which 25% reported having a protocol of any kind in place and 6%, a protocol specifically for PlhiV (Table 5). The most commonly used risk equations were Framingham (56%) and the american college of cardiology and american heart association pooled cohort (50%; respondents could select more than one option). nineteen percent of sites reported assessing ascVD risk among all patients and 63%, only among high-risk groups. a protocol for managing patients at high risk of ascVD was in place at 19% of sites, and 6% of sites had an hiV-specific pro- tocol. Thirty-eight percent of sites reported having a protocol for chronic ischaemic stroke management; however, no site had a protocol specifically for PlhiV. clinical testing and procedure availability Blood pressure monitoring was available in all but one (94%) of the sites surveyed. Other tests and procedures were usually available either within the hiV clinic itself or in the same facility as the hiV clinic: glycosylated haemoglobin (hba1c, 81%), fasting Original research Journal of Virus Eradication 2020; 6 : 11–18 14 Dc Boettiger et al. Table 3. 
Hyperlipidaemia screening and management (N=16); values are n (%).
  Routine screening for hyperlipidaemia: 16 (100)
  Protocol for hyperlipidaemia screening: 7 (44)
  Screening tests used*: fasting blood lipids 15 (94); non-fasting blood lipids 1 (6); total cholesterol 12 (75); high-density lipoprotein (HDL) cholesterol 10 (63); low-density lipoprotein (LDL) cholesterol 11 (69); triglycerides 12 (75)
  Patients assessed for hyperlipidaemia: all 11 (69); high-risk groups 2 (13); other selection 3 (19); undefined 0 (0)
  Timing of hyperlipidaemia screening*: at enrolment into care 3 (19); at antiretroviral therapy initiation 3 (19); yearly 13 (81); at every visit 1 (6)
  Number of patients screened for hyperlipidaemia per month: 0–100 3 (19); 101–250 4 (25); 251–500 4 (25); >500 3 (19); uncertain 2 (13)
  Location of hyperlipidaemia screening: within HIV clinic 13 (81); in same facility but not in HIV clinic 2 (13); off site 0 (0); undefined 1 (6); not available 0 (0)
  Staff primarily responsible for hyperlipidaemia screening: HIV physician 15 (94); non-HIV physician 0 (0); nurse 0 (0); nurse assistant 0 (0); other clinical staff 1 (6); non-clinical staff 0 (0); uncertain 0 (0)
  Payment of hyperlipidaemia screening costs: patient only 5 (31); full public funding 6 (38); co-payment (patient and public) 4 (25); mixture of all 1 (6)
  Protocol for hyperlipidaemia management: 11 (69)
  Location of hyperlipidaemia management: within HIV clinic 14 (88); in same facility but not in HIV clinic 1 (6); off site 0 (0); undefined 0 (0); not available 1 (6)
  Staff primarily responsible for hyperlipidaemia management: HIV physician 13 (81); non-HIV physician 0 (0); nurse 0 (0); nurse assistant 0 (0); other clinical staff 1 (6); non-clinical staff 2 (13); uncertain 0 (0)
  Training received in past 2 years for staff managing hyperlipidaemia: 10 (63)
  Payment of hyperlipidaemia management costs: patient only 5 (31); full public funding 4 (25); co-payment (patient and public) 6 (38); mixture of all 1 (6)
  * Respondents could select more than one option.

Table 4. Diabetes screening and management (N=16); values are n (%).
  Routine screening for diabetes: 14 (88)
  Protocol for diabetes screening in place: 9 (56)
  Screening tests used*: random plasma glucose measurement 7 (44); fasting plasma glucose measurement 13 (81); 2-hour plasma glucose tolerance test 4 (25); HbA1c 6 (38)
  Patients assessed for diabetes: all 9 (56); high-risk groups 2 (13); other selection 0 (0); undefined 3 (19)
  Timing of diabetes screening*: at enrolment into care 5 (31); at antiretroviral therapy initiation 5 (31); yearly 4 (25); at every visit 3 (19)
  Number of patients screened for diabetes per month: 0–100 6 (38); 101–250 5 (31); 251–500 1 (6); >500 3 (19); uncertain 1 (6)
  Protocol for diabetes management: 8 (50)
  Location of diabetes management: within HIV clinic 9 (56); in same facility but not in HIV clinic 4 (25); off site 1 (6); undefined 1 (6); not available 1 (6)
  Staff primarily responsible for diabetes management: HIV physician 9 (56); non-HIV physician 0 (0); nurse 0 (0); nurse assistant 0 (0); other clinical staff 0 (0); non-clinical staff 0 (0); uncertain 7 (44)
  Training received in past 2 years for staff managing diabetes: 10 (63)
  * Respondents could select more than one option. HbA1c: glycosylated haemoglobin.

Table 5. ASCVD risk assessment and management, and chronic ischaemic stroke management (N=16); values are n (%).
  Routine assessment of ASCVD risk: 15 (94)
  Protocol for ASCVD risk assessment: 4 (25); HIV-specific protocol: 1 (6)
  Cardiovascular disease risk calculators used*: Data collection on adverse events of anti-HIV drugs (D:A:D) 2 (13); Framingham 9 (56); American College of Cardiology 8 (50); other 0 (0)
  Patients assessed for ASCVD risk: all 3 (19); high-risk groups 10 (63); other selection 2 (13); none 1 (6)
  Timing of ASCVD assessment*: at enrolment into care 4 (25); at antiretroviral therapy initiation 2 (13); yearly 9 (56); at every visit 0 (0)
  Number of patients assessed for ASCVD risk per month: 0–100 8 (50); 101–250 2 (13); 251–500 2 (13); >500 1 (6); uncertain 3 (19)
  Location of ASCVD risk assessment: within HIV clinic 13 (81); in same facility but not in HIV clinic 1 (6); off site 0 (0); undefined 1 (6); not available 1 (6)
  Staff primarily responsible for ASCVD risk assessment: HIV physician 13 (81); non-HIV physician 0 (0); nurse 1 (6); nurse assistant 0 (0); other clinical staff 0 (0); non-clinical staff 1 (6); uncertain 1 (6)
  Protocol for managing those with high risk of ASCVD: 3 (19); HIV-specific protocol: 1 (6)
  Protocol for chronic ischaemic stroke management: 6 (38); HIV-specific protocol: 0 (0)
  Location of chronic ischaemic stroke management: within HIV clinic 4 (25); in same facility but not in HIV clinic 9 (56); off site 2 (13); undefined 0 (0); not available 1 (6)
  Staff primarily responsible for chronic ischaemic stroke management: HIV physician 4 (25); non-HIV physician 10 (63); nurse 0 (0); nurse assistant 0 (0); other clinical staff 1 (6); non-clinical staff 1 (6); uncertain 0 (0)
  Training received in past 2 years for staff managing chronic ischaemic stroke: 6 (38)
  Payment of chronic ischaemic stroke management costs: patient only 7 (44); full public funding 4 (25); co-payment (patient and public) 4 (25); mixture of all 1 (6)
  * Respondents could select more than one option. ASCVD: atherosclerotic cardiovascular disease.

Table 6.
Test and procedure availability (N=16); values are n (%); columns: available within HIV clinic / in same facility but not in HIV clinic / off site / not available; plus: procedure or test free for patients.
  Blood pressure monitor: 15 (94) / 0 (0) / 0 (0) / 1 (6); free: 11 (69)
  HbA1c: 9 (56) / 4 (25) / 2 (13) / 1 (6); free: 5 (31)
  Fasting plasma glucose: 12 (75) / 2 (13) / 1 (6) / 1 (6); free: 7 (44)
  Oral glucose tolerance test: 6 (38) / 7 (44) / 1 (6) / 2 (13); free: 3 (19)
  Random plasma glucose: 11 (69) / 3 (19) / 0 (0) / 2 (13); free: 5 (31)
  Digital photography(a): 1 (6) / 4 (25) / 3 (19) / 8 (50); free: 3 (19)
  Point-of-care diabetes testing: 10 (63) / 2 (13) / 2 (13) / 2 (13); free: 5 (31)
  Computed tomography scan: 3 (19) / 10 (63) / 2 (13) / 1 (6); free: 4 (25)
  Brain MRI: 3 (19) / 10 (63) / 2 (13) / 1 (6); free: 4 (25)
  Computed tomography angiogram: 3 (19) / 8 (50) / 4 (25) / 1 (6); free: 4 (25)
  Echocardiogram: 3 (19) / 11 (69) / 1 (6) / 1 (6); free: 4 (25)
  ECG: 7 (44) / 6 (38) / 1 (6) / 2 (13); free: 5 (31)
  Cardiac stress test(b): 2 (13) / 10 (63) / 2 (13) / 2 (13); free: 3 (19)
  24-hour Holter monitor: 2 (13) / 9 (56) / 3 (19) / 2 (13); free: 3 (19)
  Carotid duplex/ultrasound: 2 (13) / 9 (56) / 3 (19) / 2 (13); free: 3 (19)
  Cardiac catheterisation: 2 (13) / 8 (50) / 4 (25) / 2 (13); free: 2 (13)
  Cardiac troponin: 6 (38) / 6 (38) / 2 (13) / 2 (13); free: 3 (19)
  Creatine kinase MB isoenzyme: 6 (38) / 6 (38) / 2 (13) / 2 (13); free: 3 (19)
  Creatine phosphokinase: 6 (38) / 6 (38) / 2 (13) / 2 (13); free: 3 (19)
  Cerebral thrombectomy: 0 (0) / 8 (50) / 4 (25) / 4 (25); free: 3 (19)
  Stroke rehabilitation: 1 (6) / 11 (69) / 3 (19) / 1 (6); free: 5 (31)
  Coronary bypass or stenting: 0 (0) / 8 (50) / 6 (38) / 2 (13); free: na
  (a) For remote diagnosis of diabetic retinopathy. (b) Any form of cardiac stress test. ECG: electrocardiogram; HbA1c: glycosylated haemoglobin; MB: myocardial band; MRI: magnetic resonance imaging; na: not assessed.
plasma glucose (88%), oral glucose tolerance test (81%), random plasma glucose (88%), point-of-care diabetes testing (75%), computed tomography scan (81%), brain magnetic resonance imaging (81%), computed tomography angiogram (68%), echocardiogram (88%), electrocardiogram (81%), any form of cardiac stress test (75%), 24-hour Holter monitor (69%), carotid duplex or ultrasound (69%), cardiac catheterisation (63%), cardiac troponin (75%), creatine kinase myocardial band isoenzyme (75%), creatine phosphokinase (75%) and stroke rehabilitation (75%). Cerebral thrombectomy and coronary bypass or stenting were available within the same facility as the HIV clinic at 50% of sites. Digital photography for remote diagnosis of diabetic retinopathy was available either within the HIV clinic, within the same facility as the HIV clinic or off site at 50% of sites. Blood pressure monitoring was free to patients at 69% of sites; however, most sites required patients to pay some or all of the associated costs for other procedures (Table 6).

Medication availability
Medications available in the clinic or within the same facility to treat ASCVD-associated conditions included thiazides (88%), angiotensin-converting enzyme inhibitors (81%), calcium channel blockers (88%), beta-blockers (94%), statins (94%), fibrates (88%), ezetimibe (56%), aspirin (88%), P2Y12 inhibitors (81%), alteplase (56%) and sulphonylureas (94%). Further details on medication availability are provided in Table 7.

Discussion
The surveys of 16 HIV clinics in LMICs in Asia revealed several gaps in ASCVD diagnosis and management practices, in particular, a lack of ASCVD screening and management protocols. To our knowledge, this is the first study to report the capacity of HIV clinics in Asia to manage and screen for ASCVD.

Table 7.
Medication availability (N=16); values are n (%); columns: available within HIV clinic / in same facility but not in HIV clinic / off site / not available.
  Aspirin: 3 (19) / 11 (69) / 1 (6) / 1 (6)
  P2Y12 inhibitors: 6 (38) / 7 (44) / 2 (13) / 1 (6)
  Alteplase: 0 (0) / 9 (56) / 4 (25) / 3 (19)
  Atorvastatin: 9 (56) / 5 (31) / 1 (6) / 1 (6)
  Fluvastatin: 3 (19) / 3 (19) / 3 (19) / 7 (44)
  Lovastatin: 2 (13) / 5 (31) / 3 (19) / 6 (38)
  Pitavastatin: 6 (38) / 5 (31) / 1 (6) / 4 (25)
  Pravastatin: 4 (25) / 6 (38) / 3 (19) / 3 (19)
  Rosuvastatin: 8 (50) / 5 (31) / 1 (6) / 2 (13)
  Simvastatin: 7 (44) / 6 (38) / 2 (13) / 1 (6)
  Gemfibrozil: 7 (44) / 5 (31) / 2 (13) / 2 (13)
  Fenofibrate: 9 (56) / 5 (31) / 1 (6) / 1 (6)
  Clofibrate: 2 (13) / 5 (31) / 2 (13) / 7 (44)
  Ezetimibe: 5 (31) / 4 (25) / 4 (25) / 3 (19)
  Thiazides: 10 (63) / 4 (25) / 0 (0) / 2 (13)
  ACE inhibitors: 9 (56) / 4 (25) / 1 (6) / 2 (13)
  CCBs: 10 (63) / 4 (25) / 0 (0) / 2 (13)
  Beta-blockers: 4 (25) / 11 (69) / 1 (6) / 0 (0)
  Sulphonylureas: 8 (50) / 7 (44) / 1 (6) / 0 (0)
  Meglitinides: 3 (19) / 8 (50) / 2 (13) / 3 (19)
  AGIs: 4 (25) / 7 (44) / 3 (19) / 2 (13)
  Glitazones: 6 (38) / 6 (38) / 3 (19) / 1 (6)
  DPP-4 inhibitors: 5 (31) / 7 (44) / 2 (13) / 2 (13)
  Incretin mimetics: 2 (13) / 6 (38) / 3 (19) / 5 (31)
  ACE: angiotensin-converting enzyme; AGIs: alpha-glucosidase inhibitors; CCBs: calcium channel blockers; DPP: dipeptidyl peptidase.

For each of the major ASCVD risk factors assessed, approximately half of the surveyed sites indicated they had an assessment protocol in place. Between 56% and 75% of the sites provided some form of education to patients on these risk factors, indicating room for improvement. Education empowers patients and community members to seek care and to better manage their health [15].
In comparison with a similar survey among HIV treatment sites in Tanzania [27], education provision among our Asian sites was higher for tobacco use (75% vs 57%), lower for alcohol use (69% vs 86%), and slightly higher for obesity and nutrition (69% vs 64%). Routine screening for hypertension, hyperlipidaemia, diabetes and ASCVD risk was common, and sites had excellent access to blood pressure monitors, lipid and fasting plasma glucose testing, and appropriate ASCVD risk equations. However, only 64% of sites had a protocol in place for hypertension screening and fewer had protocols to screen for hyperlipidaemia, diabetes or ASCVD risk. In Asia and elsewhere, primary care systems with well-established protocols have proven to be effective in non-communicable disease prevention and management [28–30]. Protocols help to standardise medical care and optimise the utility of equipment, laboratory testing and medications. For HIV or primary care clinics, protocols can also assist in deciding appropriate patient referral for a non-communicable disease-related complication. Many sites also lacked a protocol for the management of hypertension, hyperlipidaemia, diabetes, high ASCVD risk and chronic stroke. This finding is consistent with other studies from resource-limited countries reporting findings from HIV [27] and primary care clinics [31,32]. Importantly, the availability of medications to treat these conditions was generally good. As an example, while we found 94% of sites had statins available either within the HIV clinic or in the same facility as the HIV clinic, Leung et al. reported that less than 10% of the HIV clinics they had surveyed in Tanzania could provide simvastatin [27]. It was also encouraging to find that coronary bypass or stenting and stroke rehabilitation services were available at 88% and 94% of the surveyed sites, respectively.
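The risk equations used at the surveyed sites (Framingham, the ACC/AHA pooled cohort equations, D:A:D) all share one general mathematical shape: a weighted sum of risk factors is exponentiated against a baseline survival probability to yield a 10-year risk estimate. The sketch below shows only that general form; every numeric value here (coefficients, means, baseline survival, even the factor list) is invented for illustration and must not be used clinically:

```python
import math

# Illustrative coefficients only -- NOT the published Framingham or pooled
# cohort values. They merely demonstrate the generic Cox-model structure.
BETA = {"age": 0.04, "sbp": 0.015, "total_chol": 0.01, "smoker": 0.5, "diabetes": 0.6}
MEANS = {"age": 55.0, "sbp": 130.0, "total_chol": 200.0, "smoker": 0.2, "diabetes": 0.1}
S0 = 0.95  # illustrative 10-year baseline survival probability

def ten_year_risk(profile):
    """Generic form: risk = 1 - S0 ** exp(sum(beta_i * (x_i - mean_i)))."""
    lp = sum(BETA[k] * (profile[k] - MEANS[k]) for k in BETA)
    return 1.0 - S0 ** math.exp(lp)

low = ten_year_risk({"age": 45, "sbp": 115, "total_chol": 170, "smoker": 0, "diabetes": 0})
high = ten_year_risk({"age": 68, "sbp": 160, "total_chol": 260, "smoker": 1, "diabetes": 1})
print(round(low, 3), round(high, 3))  # the higher-risk profile yields the larger estimate
```

The published equations differ in their factor lists, stratification by sex and ethnicity, and calibration, but a clinic protocol can treat any of them as interchangeable implementations of this interface.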
Patient management of hypertension, hyperlipidaemia, diabetes and chronic stroke was usually carried out by an HIV physician. This is becoming more common in LMICs; however, in high-income countries, where integrated care has typically focused on better management of broad groups of people with multiple morbidities, HIV physicians may not have as much autonomy regarding their patients' CVD care [33]. For 38% to 63% of sites, the staff member primarily responsible for patient management had received training in the last 2 years. Patients often had to pay some or all of the costs associated with diagnosis and management. Ensuring clinics are adequately staffed to address the growing ASCVD burden among PLHIV is critical. Moreover, healthcare workers must be adequately trained, encouraged to explore novel models of care and incentivised to continue developing their career track [15]. This study indicates that staff at the surveyed clinics have sufficient tools available to diagnose and manage patients appropriately.

There are several limitations to this study. First, the HIV clinics included may not be representative of HIV care across Asia, particularly in more rural areas. Second, our study is based on self-reported data collected cross-sectionally, which may be subject to recall and desirability biases. Finally, we have captured information only on service availability and not on quality, uptake or coverage. Further studies examining the quality of ASCVD care provided in Asian HIV clinics and the impact of ASCVD prevention and care initiatives among PLHIV are warranted. This study shows ASCVD care is generally well integrated among urban HIV centres in LMICs in Asia.
The consistent availability of clinical screening, diagnostic testing and procedures, and ASCVD medication is a strength in the current system that should be leveraged to improve implementation of cardiovascular care protocols.

Acknowledgements
The authors would like to acknowledge all site staff involved in completing the study surveys.

Conflicts of interest
DCB has received research funding from Gilead Sciences and is supported by a National Health and Medical Research Council Early Career Fellowship (APP1140503); MGL has received unrestricted grants from Boehringer Ingelheim, Gilead Sciences, Merck Sharp & Dohme, Bristol-Myers Squibb, Janssen-Cilag and ViiV Healthcare, consultancy fees from Gilead Sciences, and data and safety monitoring board sitting fees from Sirtex Pty Ltd; AHS has received research funding and travel support from ViiV Healthcare; OTN is supported by a National Medical Research Council Clinician Scientist Award (MOH-000276). All other authors report no potential conflicts of interest.

Funding
The International Epidemiology Databases to Evaluate AIDS (IeDEA) is supported by the National Institute of Allergy and Infectious Diseases (100000060), the Eunice Kennedy Shriver National Institute of Child Health and Human Development (100009633), the National Institute on Drug Abuse (100000026), the National Cancer Institute (100000054) and the National Institute of Mental Health (100000025) in accordance with the regulatory requirements of the National Institutes of Health under award numbers U01AI069911 (East Africa), U01AI069919 (West Africa), U01AI096299 (Central Africa), U01AI069924 (Southern Africa) and U01AI069907 (Asia-Pacific). The Kirby Institute (data centre for the IeDEA Asia-Pacific) is funded by the Australian Government Department of Health and Ageing (501100001027) and is affiliated with the Faculty of Medicine, University of New South Wales (Sydney, Australia).
This work is solely the responsibility of the authors and does not necessarily represent the official views of any of the institutions mentioned.

References
1. Antiretroviral Therapy Cohort Collaboration. Causes of death in HIV-1-infected patients treated with antiretroviral therapy, 1996-2006: collaborative analysis of 13 HIV cohort studies. Clin Infect Dis 2010; 50 (10): 1387–1396.
2. Lewden C, May T, Rosenthal E et al. Changes in causes of death among adults infected by HIV between 2000 and 2005: the 'Mortalité 2000 and 2005' surveys (ANRS EN19 and Mortavic). J Acquir Immune Defic Syndr 2008; 48 (5): 590–598.
3. Palella FJ Jr, Baker RK, Moorman AC et al. Mortality in the highly active antiretroviral therapy era: changing causes of death and disease in the HIV outpatient study. J Acquir Immune Defic Syndr 2006; 43 (1): 27–34.
4. Allen L. Are we facing a noncommunicable disease pandemic? J Epidemiol Glob Health 2017; 7 (1): 5–9.
5. Currier JS, Taylor A, Boyd F et al. Coronary heart disease in HIV-infected individuals. J Acquir Immune Defic Syndr 2003; 33 (4): 506–512.
6. Freiberg MS, Chang CC, Kuller LH et al. HIV infection and the risk of acute myocardial infarction. JAMA Intern Med 2013; 173 (8): 614–622.
7. Triant VA, Lee H, Hadigan C et al. Increased acute myocardial infarction rates and cardiovascular risk factors among patients with human immunodeficiency virus disease. J Clin Endocrinol Metab 2007; 92 (7): 2506–2512.
8. Freiberg MS, So-Armah K. HIV and cardiovascular disease: we need a mechanism, and we need a plan. J Am Heart Assoc 2016; 4 (3): e003411.
9. Eyawo O, Franco-Villalobos C, Hull MW et al. Changes in mortality rates and causes of death in a population-based cohort of persons living with and without HIV from 1996 to 2012. BMC Infect Dis 2017; 17 (1): 174.
10. Smith CJ, Ryom L, Weber R et al. Trends in underlying causes of death in people with HIV from 1999 to 2011 (D:A:D): a multicohort collaboration. Lancet 2014; 384 (9939): 241–248.
11.
Bijker R, Jiamsakul A, Uy E et al. Cardiovascular disease-related mortality and factors associated with cardiovascular events in the TREAT Asia HIV Observational Database (TAHOD). HIV Med 2018; 20 (3): 183–191.
12. Maher D, Ford N. Action on noncommunicable diseases: balancing priorities for prevention and care. Bull World Health Organ 2011; 89 (8): 547–547a.
13. Janssens B, Van Damme W, Raleigh B et al. Offering integrated care for HIV/AIDS, diabetes and hypertension within chronic disease clinics in Cambodia. Bull World Health Organ 2007; 85 (11): 880–885.
14. Duffy M, Ojikutu B, Andrian S et al. Non-communicable diseases and HIV care and treatment: models of integrated service delivery. Trop Med Int Health 2017; 22 (8): 926–937.
15. WHO. Action plan for the prevention and control of noncommunicable diseases in South-East Asia, 2013-2020. Available at: www.who.int/nmh/ncd-tools/who-regions-south-east-asia/en./ (accessed January 2020).
16. UNAIDS. Fact sheet – World AIDS Day 2019. Available at: www.unaids.org/sites/default/files/media_asset/UNAIDS_FactSheet_en.pdf (accessed January 2020).
17. Egger M, Ekouevi DK, Williams C et al. Cohort profile: the International Epidemiological Databases to Evaluate AIDS (IeDEA) in sub-Saharan Africa. Int J Epidemiol 2012; 41 (5): 1256–1264.
18. Gange SJ, Kitahata MM, Saag MS et al. Cohort profile: the North American AIDS Cohort Collaboration on Research and Design (NA-ACCORD). Int J Epidemiol 2007; 36 (2): 294–301.
19. McGowan CC, Cahn P, Gotuzzo E et al. Cohort profile: Caribbean, Central and South America Network for HIV Research (CCASAnet) collaboration within the International Epidemiologic Databases to Evaluate AIDS (IeDEA) programme. Int J Epidemiol 2007; 36 (5): 969–976.
20. Zhou J, Kumarasamy N, Ditangco R et al. The TREAT Asia HIV Observational Database: baseline and retrospective data. J Acquir Immune Defic Syndr 2005; 38 (2): 174–179.
21. The World Bank.
World Bank country and lending groups – country classification. Available at: https://datahelpdesk.worldbank.org/knowledgebase/articles/906519 (accessed January 2020).
22. Mugglin C, Egger M, Urassa M et al. HIV care and treatment sites' capacity to manage NCDs: diabetes and hypertension. 21st International Workshop on HIV and Hepatitis Observational Databases, 2017, Lisbon, Portugal. Abstract 108.
23. Parcesepe A, Mugglin C, Egger M et al. Capacity to screen and manage mental health disorders at HIV treatment sites in low- and middle-income countries. 9th IAS Conference on HIV Science, 2017, Paris, France. Abstract 2984.
24. Mugglin C, Mulenga L, Mweemba A et al. Site capacity to screen for and manage renal dysfunction among HIV-infected persons receiving care in low- and middle-income countries. 9th IAS Conference on HIV Science, 2017, Paris, France. Abstract 2441.
25. Harris PA, Taylor R, Thielke R et al. Research electronic data capture (REDCap) – a metadata-driven methodology and workflow process for providing translational research informatics support. J Biomed Inform 2009; 42 (2): 377–381.
26. Maheshwari A, Norby FL, Roetker NS et al. Refining prediction of atrial fibrillation-related stroke using the P2-CHA2DS2-VASc score. Circulation 2019; 139 (2): 180–191.
27. Leung C, Aris E, Mhalu A et al. Preparedness of HIV care and treatment clinics for the management of concomitant non-communicable diseases: a cross-sectional survey. BMC Public Health 2016; 16 (1): 1002.
28. Farzadfar F, Murray CJ, Gakidou E et al. Effectiveness of diabetes and hypertension management by rural primary health-care workers (Behvarz workers) in Iran: a nationally representative observational study. Lancet 2012; 379 (9810): 47–54.
29. Liang X, Chen J, Liu Y et al. The effect of hypertension and diabetes management in southwest China: a before- and after-intervention study. PLoS ONE 2014; 9 (3): e91801.
30. Rabkin M, Melaku Z, Bruce K et al.
Strengthening health systems for chronic care: leveraging HIV programs to support diabetes services in Ethiopia and Swaziland. J Trop Med 2012; 2012: 137460.
31. Pakhare A, Kumar S, Goyal S et al. Assessment of primary care facilities for cardiovascular disease preparedness in Madhya Pradesh, India. BMC Health Serv Res 2015; 15: 408.
32. Van Minh H, Do YK, Bautista MA et al. Describing the primary care system capacity for the prevention and management of non-communicable diseases in rural Vietnam. Int J Health Plann Manage 2014; 29 (2): e159–e173.
33. Mounier-Jack S, Mayhew SH, Mays N. Integrated care: learning between high-income, and low- and middle-income country health systems. Health Policy Plan 2017; 32 (Suppl 4): iv6–iv12.

Hacking as a Form of "Self-Improvement"

Eric Clark
Department of Computing Sciences, Graduate School, Villanova University, Villanova, PA 19085
+01 610 721 9258
eric.clark@villanova.edu

ABSTRACT
When does hacking one's own property pose an ethical problem?

Categories and Subject Descriptors: K.4.1 [Computers and Society]: Public Policy Issues - Ethics
General Terms: Legal Aspects
Keywords: Hacking, firmware, DVDs, cell phones, digital cameras

1. DISCUSSION
Hacking has long been a topic of great interest in the field of computer ethics, as attested by many references in the media as well as in the professional literature [1], [2], [3], [4]. These conventional notions can be summarized as illegally accessing someone else's intellectual property and amount to a virtual "breaking and entering," as well as stealing.
A far newer and much less debated aspect of hacking relates to "breaking into" one's own personal property. The question arises, why would one "hack" into one's own phone or camera? Recently, the most common reason is to gain access to features that, for monetary reasons, were locked by the manufacturer. A typical example of this is the camera/phone feature on the latest U.S. mobile phones. In order to increase revenue, most wireless carriers require phone manufacturers to install software to prevent the downloading of photos without using the cell phone's fee-based transfer service. These same cell phones in Europe and Asia do not have the blocking software installed, so there is no problem transferring photos via USB or serial cables. Websites such as www.cellphonehacks.com [5] are now emerging with "How to" instructions on the art of unlocking SIM cards on imported phones for use with US wireless plans.

Another good example of manufacturer-installed blocking features is Canon's Rebel 300D digital camera. It was released shortly after Canon's more expensive and more feature-filled 10D camera. Because of similarities between the cameras, curious developers (hackers) carefully examined each camera's firmware. By slightly modifying the 300D firmware, many features of the 10D could be unlocked. This new firmware is known in the 300D community as the Wasia hack (named after the developer) [6]. This type of activity by Rebel camera buffs is becoming as common and as easy as installing an external flash or neck strap [7].

This new type of hacking exists in a much grayer area than traditional hacking. Here is the ethical "pull": if a person legally owns something, shouldn't they have the right to alter it in any way they deem appropriate for their needs? Most US mobile phone companies will only release SIM card information concerning their phones after several months into your plan.
Now, most of the popular digital photography websites are banning people from posting links to "the hack". But so far, no one has ever received an official warning from Canon, even though most of the webmasters fear that by providing direct links, they are assisting in the modification of proprietary software [8].

A highly publicized example of this type of "hacking" relates to ownership of DVDs. 321 Studios released DVDXCOPY, a product that allowed users to extract movies from DVDs and "back them up" onto a hard drive. The MPAA feared this would lead to the pirating of movies and launched a series of lawsuits against 321 Studios. While the lawsuits never really panned out, legal costs forced 321 Studios out of business [9], [10].

What right do companies have to stop consumers from altering the devices they legally purchased? Manufacturers can, of course, refuse to honor warranties on products that have the additional features installed by unlocking blocking devices, but where do they draw the line? Should Warner Brothers care if I edited Harry Potter 3 so that the deleted scenes are woven into a movie legally purchased from them? Does Ford sue aftermarket parts vendors because they can make a Mustang faster or more fuel-efficient? Ethical criteria separating legal from illegal "hacking" have yet to be determined by manufacturers, consumers or the government.

2. REFERENCES
[1] Denning, D., "The U.S. vs. Craig Neidorf," Communications of the ACM, volume 34, no. 3 (March 1991), pp. 23-32.
[2] Kapor, M. "Civil Liberties in Cyberspace: When does hacking turn from an exercise of civil liberties into crime?" Scientific American (September 1991), pp. 158-162.
[3] Levy, S. "Wisecrackers," Wired Magazine (March 1996).
[4] Spafford, E. "Are Computer Hacker Break-Ins Ethical?" Journal of Systems Software, volume 17 (1991), pp. 41-47.
[5] http://www.cellphonehacks.com, last accessed 20-02-05.
[6] http://satinfo.narod.ru/en/, last accessed 20-02-05.
[7] http://www.digit-life.com/articles2/canon300dfw2/, last accessed 20-02-05.
[8] http://www4.law.cornell.edu/uscode/17/117.html, last accessed 21-02-05.
[9] http://www.wired.com/news/digiwood/0,1412,64453,00.html, last accessed 20-02-05.
[10] http://www.321studios.com/, last accessed 20-02-2005.

Copyright is held by the author/owner(s). ITiCSE'05, June 27–29, 2005, Monte de Caparica, Portugal. ACM 1-59593-024-8/05/0006.

From Undesired Flaws to Esthetic Assets: A Digital Framework Enabling Artistic Explorations of Erroneous Geometric Features of Robotically Formed Molds

Technologies (Article)

Malgorzata A. Zboinska
Department of Architecture and Civil Engineering, Chalmers University of Technology, SE-412 96 Gothenburg, Sweden; malgorzata.zboinska@chalmers.se

Received: 28 September 2019; Accepted: 30 October 2019; Published: 31 October 2019

Abstract: Until recently, digital fabrication research in architecture has aimed to eliminate manufacturing errors. However, a novel notion has just been established—intentional computational infidelity. Inspired by this notion, we set out to develop means that can transform the errors in fabrication from an undesired complication to a creative opportunity. We carried out design experiment-based investigations, which culminated in the construction of a framework enabling fundamental artistic explorations of erroneous geometric features of robotically formed molds. The framework consists of digital processes, assisting in the explorations of mold errors, and physical processes, enabling the inclusion of physical feedback in digital explorations.
Other complementary elements embrace an implementation workflow, an enabling digital toolset and a visual script demonstrating how imprecise artistic explorations can be included within the computational environment. Our framework application suggests that the exploration of geometrical errors aids the emergence of unprecedented design features that would not have arisen if error elimination were the ultimate design goal. Our conclusion is that welcoming error into the design process can reinstate the role of art, craft, and material agency therein. This can guide the practice and research of architectural computing onto a new territory of esthetic and material innovation.

Keywords: digital fabrication; digital design; robotic single-point incremental forming; fabrication errors; imprecision; material agency; artistic architectural computing; esthetic design exploration

1. Introduction

1.1. Background

In contemporary experimental architectural design, industrial robots are included in the design process as a medium of accessing the physical phenomena accompanying the processing of architectural materials. Architectural programming, computing, and customized robotic fabrication have now become vehicles for design innovation. The inclusion of robotic fabrication into the design pipeline has interestingly extended the abstract realm of digital experimentation in computer-aided architectural design (CAAD) onto the physical territory.

A general observation of digital fabrication as a research area in architecture leads to the conclusion that it has been heavily focused on achieving geometrical accuracy [1]. Such a focus has generated vast knowledge on fabrication error elimination and computational control of material behaviors. Architectural studies of this research trajectory developed versatile computational methods of increasing geometrical accuracy in materialized designs, to fulfill the esthetic criterion of perfectly engineered beauty [2].
Technologies 2019, 7, 78; doi:10.3390/technologies7040078

Likewise, the manufacturing of molds and free-form shapes with high accuracy is a well-studied problem in engineering research supporting architectural fabrication. Considerable work in the area of architectural geometry has been done to develop efficient strategies of paneling and optimizing non-standard architectural shapes to manufacture them at a reasonable cost and with the desired esthetic quality [3–5]. A vast body of research has also yielded methods of geometrical optimization of machine toolpaths to increase the geometric accuracy of double-curved elements, molds, and dies [6,7]. Our study explores an alternative approach to this exactness-oriented and computationally accurate handling of fabrication errors. This approach regards geometrical errors as esthetic traits and welcomes computational imprecision as a generative factor in the design process. We develop a framework that facilitates the implementation of such an approach in the computational design and digital fabrication process. Our framework departs from the tradition of error elimination towards its creative, artistic exploration. 1.2. Motivation and Significance This study is focused on fabrication errors in robotic single-point incremental forming (SPIF) of polymer sheets, for several reasons. Firstly, because this fabrication method is promising from the standpoint of wider applications in the architectural industry. It enables materialization of highly-customized, non-repeatable architectural elements by means of molding and casting.
Such elements can embrace external façade panels, decorative interior cladding, or furniture at urban, landscape, and interior architecture scales. A wide range of materials can be cast into the robotically formed molds, from concrete and plaster to biomaterials such as mycelium and bioplastics, or even more unconventional materials, such as silicone. This versatility of applications and materials suggests the significance of investigating this fabrication method in the context of non-standard manufacturing of architectural elements [8]. The second motivation, tied to the first one, arises from the sustainability and economic benefits of polymer SPIF as a method of building element production. Polymer molds can be easily reused or repurposed through recycling. Further, their production does not require the use of dies. The production of a custom die for each non-repeating mold design would be inefficient and time-consuming. It would create an additional cost but also generate considerable material waste due to the subtractive methods used in die production. Because die-less SPIF has already proven to be sustainable and material-efficient [9], it is now important to investigate its further benefits, oriented towards design and esthetic qualities. This relates to the third motivation of the study, linked to the esthetic design potential of polymer SPIF. Forming of polymers exhibits a good capacity to influence the final design expression. The high elasticity and plasticity of polymers cause material deformations upon forming that are much more unpredictable than in the case of their alternative—metals [10]. The high material instability of polymers increases the level of geometrical imprecision of the fabricated molds and therefore creates interestingly challenging conditions to work within. In this way, it provides an incentive for explorative material research at the artistic level of architectural design.
Fourthly, undertaking this study is significant for filling some particular knowledge gaps in contemporary architectural research—both in digital fabrication research in general, and in robotic SPIF research in particular—as discussed below. 1.3. Knowledge Contribution to Architectural Computing and Digital Fabrication Quite recently, the CAAD research community has turned its attention towards an unconventional line of inquiry, which explores the roles of imprecision and infidelity in computing. This new interest was collectively expressed at the ACADIA 2018 conference titled “Recalibration: On Imprecision and Infidelity”. The main argument for exploring computational imprecision and infidelity was that they exhibit high potential for disrupting the mainstream practices and triggering innovation—both in the current methodologies of CAAD and at the fundamental level of esthetic design [11–13]. Another example of the CAAD community’s increased interest in erroneous processes is a strand of research on architectural three-dimensional (3D) printing that pushes the boundaries of computation in a way expressed at the ACADIA conference. Here, a number of studies emerged that developed methods of 3D printer code manipulation and physical 3D printer setup customization that turn the typical errors accompanying 3D printing into unique esthetic features [14–16]. Single studies can also be identified in general digital fabrication research, examining errors as well as unpredictable material behaviors accompanying other fabrication methods, such as robotic extrusion of ferrofluids [17], gravity printing [18], glass slumping [19], vacuum forming [20], and reconfigurable molding [21]. This collection of studies shows how material behaviors can be steered computationally to determine the esthetic appearance of the design. To conclude, the research referenced above takes the perspective of computational control of material processes.
An opposite approach, featuring intuitive and imprecise exploration of material behaviors within the framework of computation, has yet to be developed. Our ambition with the undertaken research was to contribute to filling this knowledge gap, by providing a framework exemplifying such an approach in the context of robotic fabrication using the SPIF method. 1.4. Knowledge Contribution to Architectural Robotic SPIF In the specific context of architectural SPIF research, our first contribution concerns the function of architectural elements explored in research. To our knowledge, the majority of studies discussed the immediate production of architectural elements. Elements such as façade panels and footbridge components were the subject of investigations [22,23], but not architectural molds from which architectural elements can be cast using other materials. Our investigation of architectural mold fabrication therefore adds a new component to this body of knowledge. Our second contribution pertains to the purpose of previous studies on architectural SPIF. So far, their aim was to develop methods that increase the geometrical precision of the formed elements [24,25]. They therefore leave the aspect of error exploration unexplored. In this context, we contribute with an approach based on the notion of desired fabrication errors. Our third contribution relates to the materials investigated in architectural SPIF. The previous studies, as referenced above, investigated the forming of metal as the primary material. Only one study discussed the forming of polymers and its potential for wider implementation in architecture [26]. Importantly, this study also argued that the typical polymer forming error of surface micro-cracking could be explored as a design asset. However, that notion was only discussed briefly and not further developed.
Our contribution, herein, expands the sparse existing knowledge on architectural SPIF of polymers by providing a novel standpoint of geometrical error exploration for esthetic design purposes. 1.5. Main Aims and Highlights of the Work Prompted by the abovementioned state of knowledge, we developed a digital framework that enables explorations of erroneous geometric features of robotically formed polymer sheets, in a way that introduces artistic intuition and ambiguity into the conventionally precise computational environment. Through the construction of such a framework, we sought to enrich the still young research on imprecision in computation with new knowledge pertaining to alternative approaches employing imprecision and fabrication errors as design drivers. A general conclusion suggested by our study is that welcoming into the computational process both the imprecise actions of the human designer and the unpredictable agency of materials brings with it the capacity to reinstate the role of art and craft in architectural computing. By revealing the value of creative processing of material behaviors, it can alter the prevalent esthetic convention of the perfectly engineered architectural object and initiate the emergence of a new esthetic canon of imperfect beauty, signified by the joint agency of designers, materials, and digital fabrication machines. Upon our framework’s implementation, we discovered that the exploration of fabrication errors produces esthetic qualities that would not have arisen if precision had been the design goal. This leads to a refreshing conclusion that digital fabrication does not always need to strive for physical instances perfectly mirroring the digital models to be meaningful from the design standpoint.
The material errors perceived as generative rather than disruptive can liberate computational designers from geometrical perfection as the only viable design goal, and expand their focus onto the exploration of new esthetic forms and new material expressions. 1.6. Product Imperfection and Potential Areas of Its Application To describe a product with imperfections viewed in a positive light, we adapt its established definition from industrial design [27]. Consequently, we define imperfections as geometrical and/or surficial deviations of the product from the original design model. These imperfections result from particular ways of mechanical processing of materials and can be influenced to emerge in a certain way. Importantly, these imperfections should not compromise the global functionality or quality of a product. For example, elements that are parts of assemblies may still need to comply with the requirement of a dimensionally precise geometrical border that enables their accurate fitting. Therefore, the imperfections of interest embrace tolerated errors within the product’s form and surface finish that do not hinder the production of standard products [28,29]. Potential areas of application for products with imperfect features, as defined above, include art and sculpture, architectural design, interior design, furniture design, and landscape furniture design. Particular product examples could embrace ornamental façade panels, decorative interior cladding elements, and sculptural elements such as pillars and free-standing furniture pieces. For these products, the exactness of shape at the level of surface design or form detail design may not be the main criterion for quality assessment, leaving room for the introduction of artistically explored imprecisions that do not disrupt the overall functionality of the product.
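This acceptance logic, a strict tolerance at the assembly border combined with unconstrained interior deviation, can be sketched in a few lines of Python. The function names, tolerance value, and sample coordinates below are hypothetical illustrations, not a procedure from the study:

```python
import math

# Hypothetical acceptance check for an imperfect cast panel: the border
# must fit its assembly within a strict tolerance, while interior
# deviations are treated as candidate esthetic features, not defects.
BORDER_TOL = 1.0  # mm, assumed fitting tolerance for the panel edge

def deviation(p, q):
    return math.dist(p, q)  # Euclidean distance (Python 3.8+)

def border_acceptable(nominal_border, measured_border, tol=BORDER_TOL):
    # Accept the panel if every measured border point lies within
    # `tol` of its nominal counterpart.
    return all(deviation(p, q) <= tol
               for p, q in zip(nominal_border, measured_border))

nominal  = [(0, 0), (100, 0), (100, 100), (0, 100)]
measured = [(0.2, -0.1), (100.3, 0.4), (99.8, 100.2), (-0.3, 99.9)]
print(border_acceptable(nominal, measured))  # True: the edge fits
```

Only the border is constrained here; any bulge or thickening inside that outline would pass the check, which is exactly the division between tolerated imperfection and functional defect described above.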
Our approach is well suited to cases in which the client orders a solution with a certain function but leaves the esthetic design expression to its creator. Especially in architectural design, clients often do not order a particular shape or exact geometrical design, which leaves room for the creative exploration of error that does not compromise the quality of the product. In the context of client expectations, it is also important to mention user experience research, which indicates that people experience materials and products very differently [30]. Thus, imperfections may be perceived as faulty or incomplete by some, while for others, they appear unique or original [31]. This implies that the possibility of introducing imperfection as a driving element of design should be considered at the earliest stages of design planning, in which client expectations are established. Another matter to consider is that irregularities on material surfaces or within the material mass, as defined above, could improve the functionality of a product [27,32]. For example, an imperfect surface finish in the form of a porous texture may have a positive acoustic effect of sound damping. Likewise, uneven material distribution in cast building elements resulting in thicker areas may locally increase their load-bearing capacities or even improve their thermal properties by creating material accumulations in which thermal energy can be stored and perhaps repurposed in some way. This could lead to very pragmatic applications in which the emergence of manufacturing flaws is intentionally guided to improve the functional properties of the manufactured design. It could also positively affect sustainability and resource efficiency in production by contributing to a reduced number of products that are discarded due to material or surficial flaws [27]. To date, examples of products that deviate from original designs can be found in consumer product design [33], but also in architecture.
One of the characteristic projects is the P-Wall by Andrew Kudless, in which a difficult-to-predict behavior of a flowing plaster mass was capitalized on to produce wall panels with precise, straight-edge borders but unique, double-curved inner geometries [34]. Another example is architectural elements cast from concrete or clay using fabric formwork, with an intentionally imperfect surface finish left as a remnant of the making process [35]. Further examples can also be found in experimental architectural research on 3D printing of building components that feature some irregularities in material settling, which do not affect functionality while embedding unique esthetic traces of the material process in the final product [15]. Therefore, the notion of manufacturing errors as viable features is already recognized to some extent in design. At the same time, in architectural design, it is quite a novel concept. Even though it has already been put forth [36], it still resides within the realm of experimental practice. That is because it necessitates the development of new design agendas and methodologies that considerably depart from current paradigms and conventions. Therefore, it will certainly take time for the architectural practice, the building element manufacturing industry, and the construction sector to fully legitimize and adopt it as a viable design possibility. 1.7. Positioning of the Study in the Contexts of CAAD and Building Information Modeling (BIM) Traditional 3D CAAD is based on direct modeling of surfaces, with users needing to edit every feature of a geometrical object manually. Rhinoceros® 3D is an example of a common architectural software for direct modeling, supporting the creation of complex, non-standard, free-form building components. Traditional architectural BIM, on the other hand, relies on object-based parametric modeling.
Therein, instead of direct 3D modeling of each building element from discrete surfaces, the designer uses a generic 3D model, often predefined. Its parameters are changed to generate particular 3D instances whose properties are automatically updated within the entire model upon each parameter change. A commonly used BIM program in architecture is Autodesk® Revit®, applied in the modeling of architectural elements with standard geometry, accompanied by automated extraction of information describing the features of those elements. In comparison with the above systems, however, our work addresses yet another type of 3D modeling method—advanced associative parametric modeling using visual scripting. This modeling approach extends the standard functionalities of both CAAD and BIM programs. Through a modular programming approach, it enables the creation, extraction, and processing of parametric surface data in geometrically complex CAAD and BIM models. For CAAD, a popular add-in is Grasshopper® for Rhinoceros®, which enables designers to parameterize standard 3D models of surfaces and control them using geometrical associations, parameters, and mathematical formulae. For BIM, a popular add-in is Dynamo® for Revit®, which enables designers to move beyond the use of standard predefined building components, and work with non-standard, free-form surface models that can be linked into the main building model and endowed with typical BIM information. In this study, we explored geometrical errors of molds within the framework of the CAAD modeler Rhinoceros®, with extended parametric modeling functionalities through visual scripting in Grasshopper®. However, our approach to error exploration could have been executed within a BIM system as well, by using the visual scripting functionalities of Dynamo® within Revit®.
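The associative principle shared by Grasshopper® and Dynamo®, in which a generic parametric definition regenerates all dependent geometry whenever a parameter changes, can be illustrated independently of either tool. The following Python sketch is a toy stand-in with no Rhino or Revit API calls; the wave-surface definition and its parameters are invented for illustration:

```python
import math

# Toy associative definition: a parametric wave surface sampled as a grid
# of 3D points. Changing a parameter regenerates every dependent point,
# mimicking how a visual script propagates a slider change through a model.
def wave_surface(amplitude, frequency, n=11, size=100.0):
    pts = []
    for i in range(n):
        for j in range(n):
            x = size * i / (n - 1)
            y = size * j / (n - 1)
            z = (amplitude
                 * math.sin(frequency * x / size * 2 * math.pi)
                 * math.sin(frequency * y / size * 2 * math.pi))
            pts.append((x, y, z))
    return pts

flat  = wave_surface(amplitude=0.0, frequency=2)  # all z values are zero
bumpy = wave_surface(amplitude=8.0, frequency=2)  # 121 regenerated points
print(len(bumpy), max(p[2] for p in bumpy))
```

In a visual script the `amplitude` slider would play the role of the function argument here: no point is edited directly, and the whole instance is regenerated from the definition.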
The BIM implementation of our framework would have been similar in workflows to the one presented in this study, with a possible difference lying in the method of erroneous mold feature evaluation. That is, given that currently no readily available add-ins for mesh analysis for Dynamo® exist, the curvatures of the erroneous mold features would need to be defined and color-coded from scratch by the user. A separate study presenting in detail such an application within the BIM context could be of great use for the broadening of knowledge and the scope of applications of our proposal, and perhaps for yielding new research questions. 2. Materials and Methods 2.1. Investigation Method The digital framework presented herein was developed based on the results of design experiments, which featured combined intuitive and computational explorations of polymer SPIF errors. The workflows in these experiments are discussed in another publication [37], while in this article, we focus on presenting the details of the proposed exploration framework and important prerequisites of its implementation. 2.2. Materials For mold forming, a polymer material, i.e., polyethylene terephthalate (PET-G), was used. PET-G sheets of size 125 × 125 cm, 2 mm thick were formed. The coating of the formed molds enabling digital photography was done using removable rubber paint. The material cast into the mold to produce the final design objects was a translucent addition-cure silicone, colored using pigments of varying hues. 2.3.
Software The software supporting the developed framework embraced: Rhinoceros® (version 6) for free-form modeling, Grasshopper® add-on (version 1.0.0007) for visual programming, Mesh Curvature add-on for mesh analyses, KUKA|prc add-on (version 2, 31 March 2016) for robot programming, Adobe Photoshop® (version CC 2015, 2 January 2015 release) for artistic digital painting, and Autodesk® ReCap™ Photo (version 19.1.0.10) for photogrammetric 3D mesh reconstruction. 2.4. Hardware The hardware used included: For mold forming—an industrial robot arm KUKA KR150; for digital photography of the silicone casts and photogrammetry of the molds—a digital 5 megapixel camera with a 3.85 mm f/2.8 lens, in-built in Apple iPhone 4; and for silicone application onto mold—a Nuair Herkules air compressor and a pneumatic air spray gun. 2.5. Robotic Process Setup The physical setup for the forming process is shown in Figure 1. The setup included a floor-mounted robot arm accompanied by frames for holding and supporting the processed material. The PET-G sheet to be formed was point-mounted between two custom-designed MDF frames with cutouts following the outline of the formed geometry. The MDF frames were mounted horizontally onto an aluminum frame. Figure 1 also shows the robot arm’s end effector, comprising a round-tipped metal rod, mounted in a chuck. Figure 1. Elements of the robotic process setup: (a) Robot arm and system of frames holding the formed material; (b) The forming tool. 3. Result Part 1—The Framework Enabling Artistic Explorations of Forming Errors 3.1. Specification of the Explored Errors The mold errors of interest for this study are coined in manufacturing engineering literature as pillows or bulges [38].
They concern the bottom parts of the formed geometry and appear as zones with concave curvature. Such errors arise in a particular geometrical situation where the surface slope changes from steep to flat. This geometrical condition generates material compression and local thickening of the material in flatter areas, elevating them as bulges. The effect is additionally amplified by the steeper and therefore stiffer neighboring zones, which push the less formed material towards the middle and then upwards [39]. The mechanics of this phenomenon are not yet fully understood, but its probable cause is the in-plane stresses, in the horizontal plane perpendicular to the tool axis, generated during forming [40]. The mathematical quantification of this error can be done in two ways, both of which require the reverse engineering of the physical model into a digital representation to compare the deviations between the original geometry and the physical version. One way is to express the error as the orthogonal distance between the ideal geometry profile and the actual one [41]. Another way is to calculate changes in principal curvature for local error quantification [42] and aggregate normal vectors for global error quantification [43]. In our case, however, we do not apply the numerical quantification of the error. Our approach to error exploration relies on the mean curvature analysis of the fabricated geometry using an existing software tool that is commonly available to architects. Therefore, we do not employ any numerical comparisons between the original and the manufactured model, as this would require creating a new analysis tool. Nonetheless, such an approach could be implemented as an interesting further development of our current research. 3.2. Generalized Workflow for Framework Implementation As an introduction to the framework’s presentation, let us begin with an outline of a generalized workflow for the framework’s implementation.
As presented in Figure 2, such a workflow features a combination of physical and digital activities and is looped to facilitate design iteration. The exploration process begins with the 3D modeling of a NURBS (non-uniform rational basis spline) patch surface representing the first mold design. Then, section NURBS curves are generated for the surface and approximated to define a single polyline toolpath for the robot using the workflow described in Section 4.3. The mold design is then fabricated. In the next step, the fabricated mold is digitalized, i.e., reconstructed into a digital 3D representation using the photogrammetry technique. To enable digital photography for photogrammetry, one surface of the transparent and glossy polymer mold is coated with an opaque, matte, removable spray paint. Once the photographs of the mold are complete, they are used by the photogrammetry software to generate a digital representation of the mold as a point cloud. Using in-built functions in the photogrammetry software, the point cloud is approximated into a triangular mesh representation using binary STL meshing with no decimation. The resultant triangular mesh is imported into a 3D modeling software and its face count is reduced by 50% for faster processing. This reduced mesh is then subjected to a curvature analysis targeting the mean curvature in order to identify areas of abrupt surface curvature change and to locate mesh areas that are convex, flat, and concave. The curvature values are represented as a colored map on the mesh surface, which aids their intuitive perception. In parallel, optionally, translucent pigmented material is cast into the mold. The translucent material’s varying accumulations indicating the erroneous features of the mold are then photographed using a digital camera.
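The mean curvature evaluation at the core of this analysis step can be illustrated with a minimal, tool-independent sketch. This is not the Mesh Curvature add-on; as a simplifying assumption, the analyzed mold region is treated as a heightfield z(x, y), for which the standard graph-surface mean curvature formula is evaluated by finite differences and its sign classified, analogous to the color-coded map:

```python
import math

def mean_curvature(z, x, y, h=1e-4):
    # Finite-difference partial derivatives of the heightfield z(x, y).
    zx  = (z(x + h, y) - z(x - h, y)) / (2 * h)
    zy  = (z(x, y + h) - z(x, y - h)) / (2 * h)
    zxx = (z(x + h, y) - 2 * z(x, y) + z(x - h, y)) / h**2
    zyy = (z(x, y + h) - 2 * z(x, y) + z(x, y - h)) / h**2
    zxy = (z(x + h, y + h) - z(x + h, y - h)
           - z(x - h, y + h) + z(x - h, y - h)) / (4 * h**2)
    # Mean curvature of a graph surface z(x, y).
    num = (1 + zy**2) * zxx - 2 * zx * zy * zxy + (1 + zx**2) * zyy
    den = 2 * (1 + zx**2 + zy**2) ** 1.5
    return num / den

def classify(H, tol=0.05):
    # With the surface normal pointing up, H > 0 marks upward-opening
    # (concave-from-above) regions; H < 0 marks dome-like convex regions.
    if H > tol:
        return "concave"
    if H < -tol:
        return "convex"
    return "flat"

# A paraboloid bowl as a stand-in for a formed mold region.
bowl = lambda x, y: 0.5 * (x**2 + y**2)
H = mean_curvature(bowl, 0.0, 0.0)
print(round(H, 3), classify(H))  # prints: 1.0 concave
```

A production version would evaluate the discrete mean curvature per mesh vertex rather than a heightfield, but the sign classification driving the color map is the same idea.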
An image representing the initial mold design, a digital photograph of the mold, a digital image of mesh analysis, and an optional digital photograph of the translucent pigmented cast are then used as bases for locally affecting the erroneous mold features through combined intuitive digital painting and computational explorations. As a result of the process, an iteration of the first mold design is generated. This 3D model is used as a point of departure for a new robot toolpath generation. The second mold is fabricated based on the toolpath data. The process of iterating its erroneous geometry features is then repeated according to the looped procedure outlined above. Figure 2. Generalized workflow for framework implementation. 3.3. Framework Overview Figure 3 shows the configuration of the proposed framework for erroneous feature exploration. The framework consists of two linked components: Digital processes and physical processes. Figure 3. Framework configuration. 3.4. Digital Processes The digital processes component contains the digital operations assisting the explorations of geometrical errors of the robotically formed molds.
The operations embrace: Free-form modeling, explorative computational design, bitmap painting, photogrammetric 3D mesh reconstruction, mesh curvature analysis, robot process simulation, and programming. Table 1 summarizes the particular functionalities of the digital toolkit enabling these operations in relation to error exploration.

Table 1. Digital toolkit functionalities in the context of error exploration.

Digital Tool in the Framework | Functionality | Supported Type of Explorations
3D modeler | Error design + fine-tuning | Intuitive
Visual program editor | Error execution + exploration + fine-tuning | Intuitive + computational
Mesh curvature analysis tool | Error evaluation | Computational
Photogrammetric mesh reconstruction tool | Error reconstruction | Computational
Bitmap painting tool | Error design | Intuitive

The first type of operation, embracing free-form modeling, supports the creation of the first design that underpins the erroneous feature explorations. Moreover, these operations allow for the processing of the meshes obtained through photogrammetry, the fine-tuning of the intentionally erroneous mesh deformations generated based on bitmap painting, and, finally, the creation of geometries used as bases for robot toolpath programming. All of these operations are enabled by the 3D modeler Rhinoceros®, featuring a wide array of relevant tools for geometry editing, such as NURBS surface generation, remodeling, slicing, and joining; mesh smoothing, reduction, and subdivision; as well as operations of NURBS-to-mesh conversions. The designer can carry out the operations from this group in an intuitive manner, even though in-built computation and automated algorithms of the 3D modeling software lie at their core.

The second type of operation—photogrammetric 3D mesh reconstruction—facilitates the creation of a digital representation of the physical mold. Such a representation can have several purposes. Firstly, it can enable digital comparisons between the geometry of the original 3D model and of the fabricated mold. Secondly, it can serve as input for further design iterations of the mold or as a visual guide for intuitive digital bitmap painting. In our framework, the digital representation of the mold is generated using the Autodesk® ReCap™ Photo software. A 3D mesh model of the mold is created from a series of digital photographs of the mold, taken at different distances, heights, and angles.

The operations of mesh curvature analyses aid the visual evaluations of the photogrammetrically digitalized molds, helping to locate their erroneous features. These operations additionally support the artistic assessment of mold fine-tuning results in each design iteration. They help to determine whether a particular fine-tuned geometry version is esthetically satisfactory or whether its fine-tuning should continue. The curvature analysis is enabled by an add-on for Grasshopper® called Mesh Curvature. The add-on visually evaluates approximate mesh curvatures—Gaussian, mean, minimum, maximum, absolute, and root mean square (RMS). The tool generates a color-coded visual representation directly on the evaluated surface, indicating which of its regions are synclastic, neutral, and anticlastic.

The operations of bitmap painting support the intuitive development of geometrical errors in the molds. Such operations feature imprecise and ambiguous error feature painting, enabled by the bitmap editing software Adobe Photoshop.
The software lets the designer overlay digital photos of the casts and molds, as well as digital images of mesh curvature distributions, as translucent layers. These overlaid images then serve as visual guides for artistically applying paint strokes that indicate the locations of the desired mold deformations. Importantly, the digital image overlaying, as well as the painting, can in this case be approximate, creating a condition resembling digital sketching instead of precise drafting, which ensures that the spontaneous flow of the design process is not slowed down or distracted by precision-focused activities.

The explorative computational design operations are enabled by a visual programming editor, Grasshopper®, which extends the functionalities of the Rhinoceros® 3D modeler by offering procedural parametric control of the 3D modeling operations applied in mold geometry creation, modification, and fine-tuning. This category of operations therefore gives designers the possibility to explore mold errors more systematically, using exact numerical control. At the same time, however, this working mode is not limited to a procedural and precision-oriented working style only. The environment offers a number of functions that can be made imprecise, intuitive, and explorative. Such functions include, for example, intentionally approximated sampling of bitmap paint strokes, arbitrary mesh point relocations done using randomized multipliers for movement vector magnitudes, or movement vector magnitude remapping based on intuitively chosen function graphs.

Finally, the operations of robot process simulation and programming facilitate the robotic fabrication of iterated mold designs. They embrace the creation of robot toolpaths, the definition of the digital and physical parameters for the robotic process, the visual simulation of the forming process for collision detection purposes, and the generation of robot-specific machine code executing the forming process.
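The "imprecise" functions mentioned above are Grasshopper® components in the actual framework; as a hedged illustration of the underlying idea, randomized magnitude multipliers and function-graph remapping could be sketched as follows (the function names and the piecewise-linear graph representation are our assumptions):

```python
import random

def graph_remap(t, graph):
    """Remap a normalized value t in [0, 1] through a piecewise-linear
    'function graph' given as sorted-by-x (x, y) control points,
    mimicking an intuitively drawn remapping curve."""
    pts = sorted(graph)
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        if x0 <= t <= x1:
            if x1 == x0:
                return y1
            return y0 + (y1 - y0) * (t - x0) / (x1 - x0)
    return pts[-1][1]  # fall back to the last control point

def jittered_magnitudes(base_magnitudes, spread=0.3, seed=None):
    """Multiply each movement-vector magnitude by a random factor in
    [1 - spread, 1 + spread], making the deformation deliberately
    imprecise while staying within a controlled band."""
    rng = random.Random(seed)
    return [m * rng.uniform(1.0 - spread, 1.0 + spread)
            for m in base_magnitudes]

# An "ease-in" style graph: small inputs barely move, large ones amplified.
graph = [(0.0, 0.0), (0.5, 0.1), (1.0, 1.0)]
remapped = [graph_remap(t, graph) for t in (0.25, 0.5, 0.75)]
print([round(v, 3) for v in remapped])  # [0.05, 0.1, 0.55]
```

Changing the graph's control points, or the jitter spread, is the computational analog of the intuitive fine-tuning described in the text.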
Robot toolpath generation is done parametrically, through the visual programming medium Grasshopper®, while the robot process setup, simulation, and machine code generation are supported with the KUKA|prc add-on functions, also executed in Grasshopper®.

3.5. Physical Processes

The physical processes component of the framework enables the inclusion of physical feedback in the digital explorations of erroneous mold features. It produces various digital representations of the physical results, which enables their utilization in the digital part of the framework. It is instrumental in the robotic fabrication of the initial and iterated mold designs, in the digitalization of the physical results, and in the qualitative selection of erroneous mold features for exploration purposes. The operations in this component embrace: Robotic forming of molds, mold coating, digital photography of molds, translucent pigmented material casting, digital photography of casts, and visual analysis, identification, and selection of erroneous mold features.

The first category of operations relates to the robotic SPIF of molds, which produces input in the form of physical molds. Not only the molds, but also the entire course of the forming process is instrumental in error feature exploration. That is to say, thorough observation of material behaviors during forming promotes a deeper understanding of how and why the erroneous features emerge, and in particular of the relationships between the features of a particular geometrical design and their effect on the material behaviors causing errors. For best results and recollection, we recommend recording the forming process using a digital camera. The material behaviors in forming are very sensitive to even minor local changes in shape.
As generalized conclusions are difficult to draw from this sensitive and failure-prone process, each forming occasion needs to be carefully observed and thoroughly registered.

The operations of mold coating with removable rubber paint produce an opaque, matte surface finish. Such a finish is essential for the mold to be photographable using a digital camera. It is best if the coating is removable, to enable unaffected casting of materials into the molds. The operations of digital photography of molds produce images that can be used as inputs for photogrammetric 3D reconstruction and for the digital painting process. These photographs are also indispensable for the qualitative erroneous feature identification and for ocular comparisons between the physical mold and its digital version. A dispersed lighting setup is favorable for the photogrammetry photographs, while a setup featuring direct illumination, generating shadows that underline the geometrical irregularities, is favorable for the other types of photographs.

Finally, the operations of translucent pigmented material casting and its photography have a twofold purpose. Firstly, they produce silicone casts featuring material accumulations underlining the erroneous geometry of the mold, which aids the process of ocular error identification. Secondly, once captured in digital form through photography and overlaid with digital images of mesh curvature analysis and photographs of the coated mold, they serve as underlays for the bitmap painting process, visually guiding it.

4. Results Part 2—Framework Implementation

Here, we discuss excerpts from an exemplary process of geometrical error exploration, carried out using the framework and its generalized workflow presented in Figure 3.
While a systematic procedure of the framework's implementation is described in the already-mentioned publication [22], here, we discuss important prerequisites supporting the implementation, i.e., additional considerations necessary for mold feature explorations and a custom-developed visual program enabling these explorations.

4.1. Additional Considerations for Geometrical Error Explorations

To discuss the considerations in question, we use the mold design shown in Figure 4, which features a double-curved form based on isosurface geometry. Its volume has an undulant profile with 12 interconnected, irregular, sphere-like synclastic protrusions, connected by anticlastic regions. In our example, this mold design is iterated twice.

The first important consideration in geometrical error exploration, supporting a successful application of our framework, is the project-specific logic of exploring the errors of the mold. To formulate such a logic in an informed way, we recommend beginning by creating a taxonomy of erroneous mold features, based on the fabrication result of the first mold design. Such an initial, qualitative taxonomy, capturing the observable errors that occur, forms a systematic point of departure for the design decisions supporting esthetic explorations. In our example, such a taxonomy was developed based on the ocular examination of the first mold. This examination resulted in the localization and identification of erroneous feature types. To characterize the spatial nature of these features, we used a landform analogy, expressing our features as highland, brink, hillock, depression, and bay. The features are indicated in Figure 5.

Figure 4. Initial mold design: (a) Outside view; (b) Inside view.

Figure 5. Types of erroneous features in the first mold: (1) Highland, (2) Brink, (3) Hillock with surrounding depression, and (4) Bay.
The second important step is to characterize the spatial form of each feature and link it to its source digital geometry. This can be done by combining knowledge from the ocular examination of the features with knowledge of the video-registered polymer sheet behaviors during forming. In this way, a complete error taxonomy can be constructed, as exemplified in Figure 6.
Having this taxonomy at hand makes it possible to select erroneous features for exploration and to systematize the error exploration strategies for each design iteration. Such a systematization can begin by defining a collection of error-modifying operations. In our example, they embraced: Addition, i.e., new feature creation; erasure, i.e., existing feature removal; amplification, i.e., existing feature expansion or enlargement; emphasis, i.e., existing feature intensification; and transformation, i.e., conversion of one feature into another. Based on such a systematization of operations, conscious choices for modifications in each iteration can be made. Examples of these choices and their geometrical consequences are presented in Figure 7.

Figure 6. Detailed mold error taxonomy.

The final consideration, important from the standpoint of artistic design, is the making of a silicone cast from each mold. The production of the casts, as illustrated in Figure 8, is an optional but important aid for the framework implementation, because the digital photographs of the casts can serve as additional guides in the process of erroneous feature bitmap painting.
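For illustration, the feature taxonomy and the error-modifying operations described above can be encoded as simple data structures for planning each iteration. This is a hypothetical sketch, not part of the original toolkit; the names follow the landform analogy in the text:

```python
# Feature types from the taxonomy and the five error-modifying operations.
FEATURE_TYPES = ["highland", "brink", "hillock", "depression", "bay"]
OPERATIONS = ["addition", "erasure", "amplification", "emphasis",
              "transformation"]

def plan_iteration(choices):
    """Validate a per-feature modification plan for one design iteration,
    e.g. {"hillock": "amplification", "bay": "erasure"}. Raises on
    feature types or operations outside the taxonomy."""
    for feature, op in choices.items():
        if feature not in FEATURE_TYPES:
            raise ValueError("unknown feature type: " + feature)
        if op not in OPERATIONS:
            raise ValueError("unknown operation: " + op)
    return choices

plan = plan_iteration({"hillock": "amplification", "bay": "erasure"})
print(sorted(plan))  # ['bay', 'hillock']
```

Keeping the plan explicit in this way makes the "conscious choices for modifications in each iteration" recordable and comparable across iterations.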
More generally, the casts shown here are meant to demonstrate the potential for practical application in the production of customized architectural elements from unconventional materials. As shown in Figure 8, the accumulations of translucent pigmented material constituting those casts, caused by the erroneous features generated using our framework, produce unprecedented esthetic effects, indicating opportunities for artistic design innovation. In terms of optical phenomena, the mold errors yield gradients of color and translucency across the surface of the cast objects. Because these casts were created in layers, using pigments of varying hues, this creates an effect of truly three-dimensional color, applied within one material mass. In terms of the spatial perception of architectural form, the regions of excessively accumulated material, caused by the geometrical errors of the mold, underline the curved geometry of the object. In terms of experiencing the physical substance through the sense of touch, they create an unfamiliar sensation of varying pliability and stiffness, promoting a closer than usual examination of the architectural material and of the surface which it constitutes. An additional tactile effect is generated by the scalloping features demarcating the forming tool traces. These are left intentionally unprocessed within the mold so that they become transferred into the cast material, resulting in a geometrical pattern of curves, perceptible by eye and hand.
Due to this richness and unconventional character, the abovementioned esthetic effects are yet another important factor to consider when systematizing the logic behind the computational explorations of the geometrical errors in each mold iteration.
Figure 7. The details and results of mold exploration using the framework.

Figure 8. Esthetic features of the silicone casts made from the iterated molds.

4.2. The Visual Program Supporting the Mold Error Explorations

To facilitate erroneous feature explorations in a way that dually combines artistic and computational operations, we developed a custom visual program using the Grasshopper® add-on for the 3D modeler Rhinoceros®. The visual program's modules, shown in Figure 9, support the abovementioned dual functionality—as discussed below.

Figure 9. Functional modules of the visual program forming the core of mold error explorations.

The first module of the program gathers the initial inputs for the process: The mesh to be iterated, the bitmap representing the affecting paint strokes, and the curves delineating the boundaries of the paint strokes. In this module, the artistic and imprecise factors are expressed in the nature of the provided inputs; that is, in their levels of precision. For example, the mesh to be iterated forms a precise input if the photogrammetric reconstruction of the mold is used. However, it can also be made imprecise by intentionally subjecting that photogrammetric mesh to smoothing operations in the free-form modeling environment prior to inputting it into the visual program. Another example of an arbitrary factor that can be introduced is the way in which the bitmap of mesh-affecting paint strokes is represented.
The paint strokes can, for example, be either expressed with uniform color across each stroke or represented as gradients of color, e.g., dark in the middle of the stroke and gradually lighter towards the stroke boundary. This choice of paint stroke representation, made arbitrarily by the designer, will affect the computational image sampling in the succeeding modules of the program and the way in which the mesh is deformed.

The second module uses the vector input defining the paint stroke boundaries from the first module to capture the vertices of the mesh that match the locations of the paint strokes. Here, intuitive and imprecise factors can be incorporated by determining how the paint stroke outlines are represented.
They can be represented either very precisely, as boundary curves generated based on the paint stroke color gradient data, or imprecisely, as boundary curves that only approximately capture the bitmap paint stroke outlines. This choice will affect the precision with which the mesh vertices are captured for further operations.

The third module executes a two-stage process. Firstly, it defines the movement values for the mesh vertices, based on the sampling of the color brightness of chosen pixels from the paint stroke bitmap. Then, optionally, it can employ a function graph to remap and distort the values from the bitmap sampling.
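The visual program itself is not reproduced in the text; a simplified sketch of how vertex capture and brightness-based movement values might combine, with a displacement along vertex normals standing in for the final mesh-deformation step, could look like this (all helper names, and the nearest-pixel sampling, are our assumptions):

```python
def sample_brightness(bitmap, u, v):
    """Nearest-pixel brightness lookup for normalized (u, v) in [0, 1].
    The bitmap is a row-major 2D list of brightness values in [0, 1]."""
    rows, cols = len(bitmap), len(bitmap[0])
    r = min(rows - 1, int(v * rows))
    c = min(cols - 1, int(u * cols))
    return bitmap[r][c]

def deform(vertices, uvs, normals, bitmap, scale=1.0, threshold=0.05):
    """Capture vertices lying under paint strokes (brightness above
    threshold), use brightness as the displacement magnitude, and move
    each captured vertex along its normal."""
    out = []
    for p, (u, v), n in zip(vertices, uvs, normals):
        b = sample_brightness(bitmap, u, v)
        if b > threshold:
            p = tuple(p[i] + scale * b * n[i] for i in range(3))
        out.append(p)
    return out

# 2x2 "bitmap": a single bright stroke in the upper-left quadrant.
bitmap = [[1.0, 0.0],
          [0.0, 0.0]]
verts = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
uvs = [(0.1, 0.1), (0.9, 0.1)]
normals = [(0.0, 0.0, 1.0), (0.0, 0.0, 1.0)]
print(deform(verts, uvs, normals, bitmap, scale=0.5))
# [(0.0, 0.0, 0.5), (1.0, 0.0, 0.0)]
```

The brightness values could additionally be passed through a remapping function graph before being used as magnitudes, matching the optional second stage described above.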
Here, more arbitrary and intuitive factors can be introduced directly within the computational functions of the module. For the first stage of the process, this can be done by setting the paint stroke bitmap sampling parameters in various ways, from precise mapping at high resolution to imprecise mapping at low resolution. For the second stage, the image sampling values can either be left precise, i.e., used as they are without remapping, or be altered through resampling using various function graphs for a more imprecise effect. Finally, the fourth module executes the deformations of the input mesh based on the values from the third module. These values are used as magnitudes for the vectors defining the movement of the mesh points captured in the second module.

4.3. The Visual Program for Robot Toolpath Generation

The visual program for robot toolpath creation relies on the parametric modeling functions of the Grasshopper® visual programming interface for Rhinoceros®, and on the functions of the KUKA|prc add-on for Grasshopper® that enable the generation of machine code executable on our particular robot. The program approximates the mold geometry into a polyline toolpath. The toolpath has the form of a unidirectional, constant-level profile.

The first module of the program takes the mold geometry, represented as a NURBS patch, as its input. It then generates NURBS section curves for the patch, spaced at a specified distance to define the forming increment. In the next module, the global bounding box for the curves is constructed, accompanied by the extraction of the XYZ coordinates of its corner points. One of the uppermost points is selected to mark the starting location for the forming process. In the next module, this point is used to find the points on the section curves that lie closest to it. These points delineate the locations of the retractions of the forming tool that accompany the progression from one path loop to another.
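The sectioning and retraction-point steps described so far can be sketched with sampled point loops standing in for the NURBS section curves. This is a simplified assumption for illustration; the actual program operates on curves via Grasshopper® components:

```python
import math

def bounding_box_corners(points):
    """The eight corners of the axis-aligned bounding box of a point set."""
    xs, ys, zs = zip(*points)
    return [(x, y, z) for x in (min(xs), max(xs))
                      for y in (min(ys), max(ys))
                      for z in (min(zs), max(zs))]

def nearest_point(loop, ref):
    """Point on a sampled contour loop closest to ref; this marks where
    the tool retracts before stepping down to the next loop."""
    return min(loop, key=lambda p: math.dist(p, ref))

# Two square section loops of a shrinking pocket, one per forming increment.
loops = [
    [(x, y, -1.0) for x, y in [(0, 0), (4, 0), (4, 4), (0, 4)]],
    [(x, y, -2.0) for x, y in [(1, 1), (3, 1), (3, 3), (1, 3)]],
]
all_pts = [p for loop in loops for p in loop]
# One of the uppermost bounding-box corners marks the starting location.
start = max(bounding_box_corners(all_pts), key=lambda p: p[2])
retractions = [nearest_point(loop, start) for loop in loops]
print(start, retractions)
```

Joining the loops through these retraction points, and then decimating the result, yields the single polyline toolpath described next.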
A step that follows is the approximation of the separate NURBS contours into one polyline. This is done by rebuilding the NURBS contours into curves of degree 1, with the number of control points reduced to half of the number of control points of each initial NURBS section curve. Additional linear curves, demarcating the horizontal and vertical tool traveling movements, are also constructed between the points of tool retraction defined in the earlier module. These are joined with the polylines approximating the sections to form one polyline. In the final step, the number of control points for that polyline is reduced within a numerically given tolerance to enable a shorter process duration and faster processing by the robot controller. Lastly, a file containing robot instructions is generated in KRL, the proprietary programming language controlling our robot.

5. Discussion

5.1. Research Results in a Broader Context

Today, the scarcity of research publications on architectural SPIF indicates that knowledge of this promising fabrication method is still limited and should be expanded further. Therefore, considering SPIF research in the broader context of the engineering disciplines can provide, to a certain extent, a background for discussing the significance of our study and its role in the development of the current approaches to handling errors in fabrication. The key discipline researching SPIF is manufacturing engineering, in which a number of methods have been developed to deal with the geometrical inaccuracies introduced by this forming method. Many of these methods embrace alterations of the geometrical input for the process, just as our method does. For instance, they advocate the creation of addendum surfaces [44] and additional support structures [45], input geometry splitting and varying toolpath designs to enable multi-pass forming [46], as well as input geometry alteration [47,48].
However, what makes all those methods fundamentally different from our approach is their goal. In engineering research, forming inaccuracies are perceived as a problem [49], and the main aim is to develop methods that eliminate them. In our case, we invert inaccuracies from being problematic to being generative. In addition, in the manufacturing engineering methods, the geometry alteration is straightforward and deterministic, and is often preceded by computational analyses of material stresses and precise digital simulation of the forming conditions. For us, the process of geometry alteration is iterative, explorative, and has an uncertain outcome. Such a process is, in our case, based on approximate, qualitative ocular examinations and not on computational calculations.

5.2. Advantages of the Approach

In contrast to the engineering approaches described above, the framework resulting from this study embodies an alternative, bottom-up approach to erroneous feature handling. Although the framework is supported by digital and computational means, which by definition are precise, an advantage is that it also offers several entry points for ambiguous explorations that may be preferred by some designers, especially those with an artistic orientation. The framework's construction and implementation workflow also ensure that geometric feature explorations take place in a gradual, iterative manner. The benefit of this is that each iteration opens up several potential avenues for further exploration, all originating from one initial design. Importantly, we also believe that an advantage of our method is its capacity to be combined with conventional, precision-oriented fabrication. This is important for a wider application of our process in non-standard building element manufacturing.
Practical applications at the scale of an actual building will necessitate that some parts of the formed molds, such as the edges that connect with other building elements, maintain high levels of geometrical precision, while the middle zones of the geometry can be treated in a less strict and more explorative manner. Further research is needed to find out how to make such a combined approach effective from an industrial application standpoint. Another advantage of our method pertains to the novel mode of operation that it offers, in which the material is allowed to shape and compose itself partially on its own, with the human designer and the fabrication machine guiding the process only to a certain extent instead of controlling it entirely. The conventional mode of material crafting and processing, in which the material is merely formed, transformed, and reshaped, either by the designer's hand or by the machine tool, is therefore expanded in our framework. This can be considered beneficial from an esthetic standpoint, as the material accumulations caused by the intentionally tweaked mold errors emphasize the intricate esthetic attributes of the material cast into the mold, offering an opportunity for unusual, bottom-up design feature generation. Our framework therefore mediates a novel approach to design, in which designers can work very closely with material behaviors and react to these behaviors in an artistic, intuitive, and ambiguous way. Influencing these behaviors in an indirect manner creates a possibility to shape the expressive attributes of the design through the medium of the material, instead of through mere drawing. This seems unique in light of the conventional design methods that often operate in a top-down manner. We hope that our research can play a part in the emergence of an alternative strand of architectural design methods, featuring material agency as an enriching design medium, existing side-by-side with the conventional design means.

5.3. Challenges, Limitations, and Future Research Directions

One of the major challenges experienced during our study was the difficulty of predicting the material behaviors during forming, and therefore of knowing which geometry iterations are producible. We tackled this challenge in two ways. Firstly, we developed a practice of carefully registering each forming process on camera and of carefully analyzing the video material. This enabled us to relate each failure to the global and local geometry features and to the material behaviors accompanying the particular moment of failure. Through this, we could develop a deeper understanding of the qualitative factors influencing the failures. The second measure undertaken was to synthesize knowledge from previous studies on SPIF failures and adapt this knowledge to the specific context of our molding process, which seems to have slightly improved our success rate in the successive process runs.

The abovementioned challenge gives rise to the first limitation of our framework: namely, that its application necessitates a deep understanding of the material behaviors accompanying the incremental forming process. Because each geometry will behave differently when formed, the digital exploration part will be highly dependent on the physical results of the SPIF process. That is to say, each design iteration needs to be fabricated to enable any further explorations. At the same time, as mentioned above, the success of each forming run will be geometry-dependent. This entails that the intricate and difficult-to-foresee balance between the geometry, the locations and actions of the forming tool, and the internal stresses in the material needs to hold. In our exemplary implementation, we had one unsuccessful iteration run.
However, for a designer new to the process, the number of failures could have been higher, as experience and deep general knowledge of material-specific behaviors are important determinants of success. Consequently, to facilitate the use of our approach, further research is needed to construct a comprehensive and accessible knowledge base on architectural SPIF, relevant from the design exploration standpoint, thoroughly discussing the typical material behaviors and formulating general recommendations for exploring the process effectively. Given that research on SPIF has a long tradition, yet accurate methods of error prediction for large-scale free-form geometries have not been fully developed, the question arises whether the noted imprecision could be modeled and induced in a fully digital way, saving materials. Answering such a question extends beyond the scope of this architectural research work; it can only be addressed by interdisciplinary engineering research. Nonetheless, a wish directed at future research in engineering would be for accessible, easily understandable, and fast computational simulations of the polymer SPIF process, directed at architects and designers, allowing specific geometries to be evaluated for failure or success at the early design stage. Another potential drawback of the framework, related to its application in the explorative design process, is the need to switch between the software environments of the 3D modeler and the bitmap editing program during the explorations. It would perhaps be more convenient to stay within the environment of the 3D modeler and its visual programming add-on, and carry out all intuitive and artistic explorations only therein.
To facilitate this, intuitive drawing could be coupled directly with mesh alterations, by means of real-time painting of mold features directly onto the mesh, supported by the visual interface of the 3D modeler, giving instant feedback on the esthetic results in the 3D view. The first step in such further development of the tooling part of our framework would be to incorporate an already existing physics simulation engine, Kangaroo, an add-on for Grasshopper®. This add-on contains functions allowing for more direct manipulation of meshes and real-time simulation of dynamic mesh deformation processes. However, further studies are needed to develop an efficient way of incorporating such a tool and its specific functionalities into our framework.

5.4. Potential Artistic Error Exploration Avenues Stemming from the Current Work

Future work expanding our error exploration approach could include methods from industrial inspection that evaluate the production errors not only digitally, but also in the physical space of the model. In this respect, future work could be inspired by research on tolerance analysis that uses sensing strategies to directly incorporate manufacturing process data and react to manufacturing errors [50]. In our case, new work triggered by this concept could embrace the development of a sensing system that registers and follows the spatial deformations of the material as it is being processed, compares these data with the original digital model, and uses the difference values to alter the robot toolpath either in real time or in the next design iterations. Such an approach could provide a conceptually interesting and highly interactive way of exploring the erroneous material behaviors in architectural SPIF of polymers. Another interesting strategy for artistic error exploration could build on two digital methods used in our process that are based on approximation: photogrammetry and mesh curvature analysis.
Intentional manipulation of parameters affecting the precision of those methods, done in a series, could produce erroneous digital outputs: imprecise meshes from photogrammetry and imprecise color maps from mesh curvature analyses. The numerical data from these outputs could then be used to affect the design iterations of the mold. Such an approach would resemble the effects of audio or video signal feedback used as a means of generating emergent artistic expressions. The overlay of meshes and colored curvature maps could, in our case, lead to noise and pattern formation that could create an interesting basis for artistic robot toolpath modification.

6. Conclusions

The perception of fabrication errors in a positive light presents a new disciplinary challenge for digital design and fabrication in architecture. That is because it positions the conventionally precision- and control-oriented architectural computing within an unknown setting of material uncertainty. The discipline needs to respond to this positioning in ways never practiced before. Completely new agendas for handling the conditions of material unpredictability and uncertainty need to be formulated to enable this new opportunity for innovation to be seized and consolidated into a fully fledged new design paradigm. Currently, even though partly embraced, acknowledged imprecision and deliberate error are still exceptional in the field of computational architectural design. However, the exciting challenges they introduce to architectural computing imply great potential for enriching mainstream practice. The value of welcoming error and imprecision lies in its capacity to move beyond the esthetic canon of the perfect artefact of architectural production towards a novel esthetic of material agency, whose beauty lies in the artefact imperfections arising from the processes of making, informed by designers, digital machines, and material behaviors.
The tools and workflows of the framework presented herein therefore provide the first instances of enabling media through which such enactment of material agency might occur. Through our research, we hope to spark further interest and development of this novel line of inquiry in architectural computing and fabrication. Funding: This research was funded by the Swedish Research Council Vetenskapsrådet, grant number 2015-01519. Acknowledgments: The author acknowledges the work of Karin Hedlund, who assisted in setting up the digital and robotic fabrication processes in this study. The author would also like to acknowledge the peer reviewers of this article for their insightful feedback on the work. Conflicts of Interest: The author declares no conflicts of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results. References 1. Castle, H.; Sheil, B. High Definition: Zero Tolerance in Design and Production; Wiley: London, UK, 2014. 2. Menges, A. The New Cyber-Physical Making in Architecture: Computational Construction. Arch. Des. 2015, 85, 28–33. [CrossRef] 3. Eigensatz, M.; Kilian, M.; Schiftner, A.; Mitra, N.J.; Pottmann, H.; Pauly, M. Paneling Architectural Freeform Surfaces. ACM Trans. Graph. 2010, 29, 45. [CrossRef] 4. Li, Y.; Liu, Y.; Wang, W. Planar Hexagonal Meshing for Architecture. IEEE Trans. Vis. Comput. Graph. 2014, 21, 95–106. [CrossRef] [PubMed] 5. Pottmann, H. Architectural Geometry and Fabrication-Aware Design. Nexus Netw. J. 2013, 15, 195–208. [CrossRef] 6. Calleja, A.; Bo, P.; Gonzalez, H.; Bartoň, M.; López de Lacalle, L.N. Highly Accurate 5-Axis Flank CNC Machining with Conical Tools. Int. J. Adv. Manuf. Technol. 2018, 97, 1605–1615. [CrossRef] 7. Fallböhmer, P.; Rodríguez, C.A.; Özel, T.; Altan, T. High-Speed Machining of Cast Iron and Alloy Steels for Die and Mold Manufacturing. J. Mater. Process. Technol. 
2000, 98, 104–115. [CrossRef] 8. Castaneda, E.; Lauret, B.; Lirola, J.M.; Ovando, G. Free-form Architectural Envelopes: Digital Processes Opportunities of Industrial Production at a Reasonable Price. J. Facade Des. Eng. 2015, 3, 1–13. [CrossRef] 9. Dittrich, M.A.; Gutowski, T.G.; Cao, J.; Roth, J.T.; Xia, Z.C.; Kiridena, V.; Ren, F.; Henning, H. Exergy Analysis of Incremental Sheet Forming. Prod. Eng. 2012, 6, 169–177. [CrossRef] 10. Franzen, V.; Kwiatkowski, L.; Neves, J.; Martins, P.A.F.; Tekkaya, A.E. On the Capability of Single Point Incremental Forming for Manufacturing Polymer Sheet Parts. In Proceedings of the 9th International Conference on Theory of Plasticity (ICTP2008), Gyeongju, Korea, 7–11 September 2008. 11. Anzalone, P.; Del Signore, M.; Wit, A.J. Imprecision in Materials + Production. In Proceedings of the 38th Annual Conference of the Association for Computer Aided Design in Architecture (ACADIA 2018), Mexico City, Mexico, 18–20 October 2018; p. 243. 12. Anzalone, P.; Del Signore, M.; Wit, A.J. Notes on Imprecision and Infidelity. In Proceedings of the 38th Annual Conference of the Association for Computer Aided Design in Architecture (ACADIA 2018), Mexico City, Mexico, 18–20 October 2018; pp. 16–17. 13. Kobayashi, P.; Slocum, B. Introduction: Recalibration. In Proceedings of the 38th Annual Conference of the Association for Computer Aided Design in Architecture (ACADIA 2018), Mexico City, Mexico, 18–20 October 2018; pp. 12–15. 14. Atwood, W.A. Monolithic Representations.
In Matter: Material Processes in Architectural Production; Borden, G.P., Meredith, M., Eds.; Routledge: London, UK, 2012; pp. 199–205. 15. Gürsoy, B. From Control to Uncertainty in 3D Printing with Clay. In Proceedings of the 36th eCAADe Conference, Lodz, Poland, 19–21 September 2018; Volume 2, pp. 21–30. 16. Mohite, A.; Kochneva, M.; Kotnik, T. Material Agency in CAM of Undesignable Textural Effects: The Study of Correlation between Material Properties and Textural Formation Engendered by Experimentation with G-Code of 3D Printer. In Proceedings of the 36th eCAADe Conference, Lodz, Poland, 19–21 September 2018; Volume 2, pp. 293–300. 17. Goldman, M.; Myers, C. Freezing the Field: Robotic Extrusion Techniques Using Magnetic Fields. In Proceedings of the 37th Annual Conference of the Association for Computer Aided Design in Architecture (ACADIA 2017), Cambridge, MA, USA, 2–4 November 2017; pp. 260–265. 18. Dickey, R. Soft Computing in Design: Developing Automation Strategies from Material Indeterminacies. In Proceedings of the 17th International Conference CAAD Futures 2017, Istanbul, Turkey, 12–14 July 2017; pp. 419–430. 19. Belanger, Z.; McGee, W.; Newell, C. Slumped Glass: Auxetics and Acoustics. In Proceedings of the 38th Annual Conference of the Association for Computer Aided Design in Architecture (ACADIA 2018), Mexico City, Mexico, 18–20 October 2018; pp. 244–249. 20. Swackhamer, M.; Satterfield, B. Breaking the Mold: Variable Vacuum Forming. In Proceedings of the 33rd Annual Conference of the Association for Computer Aided Design in Architecture (ACADIA 2013), Cambridge, ON, Canada, 21–27 October 2013; pp. 269–278. 21. Tessmann, O.; Rumpf, M.; Eisenbach, P.; Grohmann, M.; Äikäs, T. Negotiate My Force Flow: Designing with Dynamic Concrete Formwork. In Proceedings of the 34th eCAADe Conference, Oulu, Finland, 22–26 August 2016; Volume 1, pp. 93–102. 22. Kalo, A.; Newsum, M.J.; McGee, W. 
Performing: Exploring Incremental Sheet Metal Forming Methods for Generating Low-Cost, Highly Customized Components. In Fabricate 2014: Negotiating Design and Making; Gramazio, F., Kohler, M., Langenberg, S., Eds.; UCL Press: London, UK, 2017; pp. 166–173. 23. Nicholas, P.; Stasiuk, D.; Nørgaard, E.; Hutchinson, C.; Thomsen, M.R. An Integrated Modelling and Toolpathing Approach for a Frameless Stressed Skin Structure, Fabricated Using Robotic Incremental Sheet Forming. In Robotic Fabrication in Architecture, Art and Design 2016; Reinhardt, D., Saunders, R., Burry, J., Eds.; Springer: Cham, Switzerland, 2016; pp. 62–77. 24. Kalo, A.; Newsum, M.J. An Investigation of Robotic Incremental Sheet Metal Forming as a Method for Prototyping Parametric Architectural Skins. In Robotic Fabrication in Architecture, Art and Design 2014; McGee, W., Ponce de Leon, M., Eds.; Springer: Cham, Switzerland, 2014; pp. 33–49. 25. Kalo, A.; Newsum, M.J. Bug-out Fabrication: A Parallel Investigation of the Namib Darkling Beetle and Incremental Sheet Metal Forming. In Proceedings of the 34th Annual Conference of the Association for Computer Aided Design in Architecture (ACADIA2014), Los Angeles, CA, USA, 23–25 October 2014; pp. 531–538. 26. Lublasser, E.; Braumann, J.; Goldbach, D.; Brell-Cokcan, S. Robotic Forming: Rapidly Generating 3D Forms and Structures through Incremental Forming. In Proceedings of the 21st International Conference of the Association for Computer-Aided Architectural Design Research in Asia (CAADRIA), Melbourne, Australia, 30 March–2 April 2016; pp. 539–548. 27. Pedgley, O.; Şener, B.; Lilley, D.; Bridgens, B. Embracing Material Surface Imperfections in Product Design. Int. J. Des. 2018, 12, 21–33. 28. Salvia, G.; Ostuzzi, F.; Rognoli, V.; Levi, M. The Value of Imperfection in Sustainable Design. In Proceedings of the LeNS Conference, Bangalore, India, 29 September–1 October 2010; pp. 1573–1589. 29. Pedgley, O.; Şener, B.
So, What Comes Next? Constructive Randomness within Products. In Proceedings of the 8th International Design and Emotion Conference, London, UK, 11–14 September 2012. 30. Giaccardi, E.; Karana, E. Foundations of Materials Experience: An Approach for HCI. In Proceedings of the 33rd Conference on Human Factors in Computing Systems, Seoul, Korea, 18–23 April 2015; pp. 2447–2456. 31. Rognoli, V.; Karana, E. Toward a New Materials Aesthetic Based on Imperfection and Graceful Aging. In Materials Experience: Fundamentals of Materials and Design; Karana, E., Pedgeley, O., Rognoli, V., Eds.; Butterworth-Heinemann: Oxford, UK, 2014; pp. 145–154. 32. Ostuzzi, F.; Salvia, G.; Rognoli, V.; Levi, M. The Value of Imperfection in Industrial Product. In Proceedings of the Conference on Designing Pleasurable Products and Interfaces, Milano, Italy, 22–25 June 2011; pp. 361–369. 33. Rognoli, V. Dynamic and Imperfect as Emerging Material Experiences: A Case Study. In Proceedings of the 9th International Conference on Design and Semantics of Form and Movement, Milan, Italy, 13–17 October 2015; p. 66. 34. Iwamoto, L. Digital Fabrications: Architectural and Material Techniques; Princeton Architectural Press: New York, NY, USA, 2013. 35. West, M. The Fabric Formwork Book: Methods for Building New Architectural and Structural Forms in Concrete; Routledge: London, UK, 2016. 36. Hughes, F. The Architecture of Error: Matter, Measure, and the Misadventures of Precision; MIT Press: Cambridge, MA, USA, 2014. 37. Zboinska, M.A. Artistic Computational Design Featuring Imprecision: A Method Supporting Digital and Physical Explorations of Esthetic Features in Robotic Single-Point Incremental Forming of Polymer Sheets. In Proceedings of the 37th eCAADe and 23rd SIGraDi Conference, Porto, Portugal, 11–13 September 2019; Volume 1, pp. 719–728. 38. Hussain, G.; Gao, L.; Hayat, N. 
Forming Parameters and Forming Defects in Incremental Forming of an Aluminum Sheet: Correlation, Empirical Modeling, and Optimization: Part A. Mater. Manuf. Process. 2011, 26, 1546–1553. [CrossRef] 39. Isidore, B.L.; Hussain, G.; Shamchi, S.P.; Khan, W.A. Prediction and Control of Pillow Defect in Single Point Incremental Forming Using Numerical Simulations. J. Mech. Sci. Technol. 2016, 30, 2151–2161. [CrossRef] 40. Jackson, K.; Allwood, J. The Mechanics of Incremental Sheet Forming. J. Mater. Process. Technol. 2008, 209, 1158–1174. [CrossRef] 41. Ambrogio, G.; De Napoli, L.; Filice, L. A Novel Approach Based on Multiple Back-Drawing Incremental Forming to Reduce Geometry Deviation. Int. J. Mater. Form. 2009, 2, 9. [CrossRef] 42. Kase, K.; Matsuki, N.; Suzuki, H.; Kimura, F.; Makinouchi, A. Geometrical Evaluation Models in Sheet Metal Forming: A Classification Method for Shape Errors of Free-Form Surfaces. In Geometric Design Tolerancing: Theories, Standards and Applications; ElMaraghy, H.A., Ed.; Chapman & Hall: London, UK, 1998; pp. 413–418. 43. Kase, K.; Makinouchi, A.; Nakagawa, T.; Suzuki, H.; Kimura, F. Shape Error Evaluation Method of Free-Form Surfaces. Comput. Aided Des. 1999, 31, 495–505. [CrossRef] 44. Kreimeier, D.; Buff, B.; Magnus, C.; Smukala, V.; Zhu, J. Robot-Based Incremental Sheet Metal Forming: Increasing the Geometrical Accuracy of Complex Parts. Key Eng. Mater. 2011, 473, 853–860. [CrossRef] 45. Thyssen, L.; Seim, P.; Störkle, D.D.; Kuhlenkötter, B. On the Increase of Geometric Accuracy with the Help of Stiffening Elements for Robot-Based Incremental Sheet Metal Forming. In Proceedings of the 19th International ESAFORM Conference on Material Forming, Nantes, France, 27–29 April 2016; p. 070008. 46. Junchao, L.; Junjian, S.; Bin, W. A Multipass Incremental Sheet Forming Strategy of a Car Taillight Bracket. Int. J. Adv. Manuf. Technol. 2013, 69, 2229–2236. [CrossRef] 47. Kitazawa, K.; Hayashi, S.; Yamazaki, S. 
Hemispherical Stretch-Expanding of Aluminum Sheet by Computerized Numerically Controlled Incremental Forming Process with Two Path Method. J. Jpn. Inst. Light Met. 1996, 46, 219–224. [CrossRef] 48. Kitazawa, K.; Nakane, M. Hemi-Ellipsoidal Stretch-Expanding of Aluminum Sheet by CNC Incremental Forming Process with Two Path Method. J. Jpn. Inst. Light Met. 1997, 47, 440–445. [CrossRef] 49. Jeswiet, J.; Adams, D.; Doolan, M.; McAnulty, T.; Gupta, P. Single Point and Asymmetric Incremental Forming. Adv. Manuf. 2015, 3, 253–262. [CrossRef] 50. Sobh, T.M.; Henderson, T.C.; Zana, F. Analyzing Manufacturing Tolerances for Sensing and Inspection. Int. J. Sci. Technol. 1998. © 2019 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
EXTENDED REPORT

Effect of mydriasis and different field strategies on digital image screening of diabetic eye disease

H Murgatroyd, A Ellingford, A Cox, M Binnie, J D Ellis, C J MacEwen, G P Leese

See end of article for authors' affiliations.
Correspondence to: Dr G Leese, Wards 1 & 2, Ninewells Hospital and Medical School, Dundee, DD1 9SY, UK; graham.leese@tuht.scot.nhs.uk
Accepted for publication 1 October 2003
Br J Ophthalmol 2004;88:920–924.
doi: 10.1136/bjo.2003.026385

Aims: To assess the effects of (1) mydriasis and (2) single versus three field photography on screening for diabetic eye disease using digital photography.
Method: Slit lamp examination findings were compared to digital fundal photographs for the detection of any retinopathy and for referable retinopathy in 398 patients (794 eyes). A Topcon TRC-NW6S digital non-mydriatic fundus camera was used. Three photographic strategies were used: undilated single field, dilated single field, and dilated multiple fields. The photographs were presented in random order to one of two retinal screeners. For the single field photographs the screeners were masked to the use of mydriatics. In 13% of fundal photographs, grading was performed by both, rather than just one, grader.
Results: Mydriasis reduced the proportion of ungradable photographs from 26% to 5% (p<0.001). Neither mydriasis nor three field photography improved the sensitivity or specificity for the detection of any retinopathy or of referable retinopathy when compared with undilated single field photography. The sensitivity and specificity for detecting referable retinopathy using undilated single field photography were 77% (95% CI 71 to 84) and 95% (95% CI 93 to 97) respectively. Using dilated single field photography the figures were 81% (95% CI 76 to 87) and 92% (95% CI 90 to 94) respectively. Using dilated three field photography the figures were 83% (95% CI 78 to 88) and 93% (95% CI 91 to 96) respectively. Intergrader reliability for the detection of referable retinopathy in gradable photographs was excellent (kappa values 0.86–1.00).
Conclusions: Mydriasis reduces the technical failure rate. Mydriasis and three field photography as used in this study do not increase the sensitivity or specificity of detecting diabetic retinopathy.
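As a worked illustration of how figures like those in the abstract are derived, the sketch below computes sensitivity, specificity, and normal-approximation 95% confidence intervals from a 2×2 table of screening results against the reference standard. The counts used here are hypothetical (the paper does not publish its raw cell counts in this excerpt), and the function name is ours:

```python
import math

def sens_spec_ci(tp, fn, tn, fp, z=1.96):
    """Sensitivity, specificity, and normal-approximation 95% CIs for a
    screening test graded against a reference standard (slit lamp here).
    tp/fn: diseased eyes correctly/incorrectly classified;
    tn/fp: disease-free eyes correctly/incorrectly classified."""
    def prop_ci(k, n):
        p = k / n
        half = z * math.sqrt(p * (1 - p) / n)  # normal approximation
        return p, max(0.0, p - half), min(1.0, p + half)
    sens = prop_ci(tp, tp + fn)  # true positives / all with disease
    spec = prop_ci(tn, tn + fp)  # true negatives / all without disease
    return sens, spec

# Hypothetical counts for illustration only
(sens, lo1, hi1), (spec, lo2, hi2) = sens_spec_ci(tp=120, fn=36, tn=380, fp=20)
print(f"sensitivity {sens:.0%} (95% CI {lo1:.0%} to {hi1:.0%})")
print(f"specificity {spec:.0%} (95% CI {lo2:.0%} to {hi2:.0%})")
```

With these invented counts the output lands near the undilated single field figures quoted above, which shows only that the arithmetic is of this form, not that these were the study's data.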
Diabetic retinopathy is the leading cause of blindness in the working population in England and Wales.1 In patients with type 2 diabetes, 15% will require laser treatment three years after detecting mild bilateral background retinopathy at baseline.2 Timely intervention with laser photocoagulation has been shown to reduce visual loss.3 The importance of screening is well acknowledged4 5 and reported to be cost effective,6 7 resulting in the recommendation of a national programme for the detection of diabetic retinopathy.8 9 Reports from the UK National Screening Committee8 and a consultation document from the Health Technology Board9 both favour a digital camera based system for screening for diabetic retinopathy. The two groups differ in their recommendations on the use of mydriasis and the number of fields to be used per eye. The UK National Screening Committee recommends routine mydriasis with two fields for each eye: nasal and macula. The Health Technology Board has advised mydriasis only when undilated photography has failed and recommends the use of a single 45° field centred on the macula. Previous studies have shown that dilated two field photography was more sensitive than undilated single field photography,10 11 but these studies were not masked and used older cameras which required a larger pupil size for successful photography. It was unclear whether multiple fields, or the use of mydriasis, was the important criterion resulting in improved sensitivity. Modern cameras can produce quality images at smaller pupil diameters, potentially obviating the need for mydriasis, unless multiple field photography is required. The present study uses a digital camera to assess mydriasis and multiple fields as separate issues, in a randomised, masked trial for the first time.
PATIENTS AND METHODS
Subjects
Ethical approval was obtained from the Tayside Regional Ethics Committee.
The study population was recruited from consecutive patients with diabetes attending the medical diabetes clinic for annual review, and from ophthalmic diabetes clinics. Patients were excluded from the study if they declined the invitation, were unable to give informed consent, were unable to position at the slit lamp table, or were unable to fixate on the light target of the camera. All recruited patients provided written informed consent.
Protocol
A Topcon TRC NW6S non-mydriatic camera (Topcon, Tokyo, Japan) with a pixel density of 752×582 pixels, linked to a Sony DXC-390P PAL video camera (Sony, Tokyo, Japan), was used. Patients were given 5 minutes in a darkened room to allow dark adaptation. A trained photographer (AE) took a single undilated 45° field retinal photograph centred on the fovea of each fundus. Photographs were taken in a darkened room with no natural or artificial light apart from that produced by the monitor, which faced away from the patient. On each occasion the right eye was photographed before the left and up to 10 minutes was allowed between each photograph to allow redilation. Two drops of tropicamide 1% were then instilled into each eye. After 20 minutes, the patients were examined with a slit lamp biomicroscope by a single trained ophthalmologist (HM).
Abbreviations: CI, confidence interval; DD, disc diameter.
Br J Ophthalmol: first published as 10.1136/bjo.2003.026385 on 17 June 2004 (www.bjophthalmol.com).
The fundal features were recorded using the former
Finally a series of three 45˚photographs were taken of each fundus: (1) centred on the fovea, (2) nasal retina with the temporal edge of the optic disc at the edge of the field, (3) temporal retina, with the fovea at nasal edge of the field (fig 1). The fixation targets helped achieve consistent photography, but made it difficult to perform two field photography because the fixation targets distracted the patient from looking sufficiently nasally. The three field strategy used the fixation targets available with the Topcon camera and allowed visualisation of the retinal territory included in the two field strategy of the EURODIAB protocol.12 The digitally stored images were presented, via TWAIN interface, onto Paint Shop Pro 5.03 software (JASC Software, Eden Prairie, MN, USA), at full capture resolution randomly to one of two retinal readers, one ophthalmologist (CM), and one diabetologist (GL). The retinal photographs were stored as bitmap images and viewed in a darkened room on CRT screens at a resolution of 10246768 pixels. For each eye the readers graded the three strategies of photograph: (1) single field undilated, (2) single field dilated, and (3) three fields dilated. The retinal readers were masked to any clinical information and, for the single field strategies, whether mydriasis had been used. For each individual eye the multiple field images were presented consecutively and treated as a single grading episode. For the multiple field images the graders were able to move back and forth within each triplet. The different photographic strategies were, however, randomly presented to the graders, and for an individual patient the different strategies of photography were presented to graders in a masked way and at separate times. 
The grading of fundal features and the guidelines for management were recorded using the same protocol as for the slit lamp examination (table 1) and grading was based on retinal features alone (no other clinical information was available to the photograph graders). To assess intergrader reliability for referable diabetic retinopathy, 13% (50/398) of the photographs were read by both graders. Image quality was also assessed using the system outlined in table 2, and this was used to determine gradability. Referable diabetic eye disease was defined as the presence of maculopathy, proliferative retinopathy, or preproliferative retinopathy on the UK National Screening Grading Programme; this latter category is the equivalent of moderate to very severe non-proliferative retinopathy.
Statistical analysis
A power calculation indicated that with a prevalence of referable retinopathy of 20%, and a photograph sensitivity of 80%, we would require 387 patients to be 95% confident of detecting a 10% difference between screening strategies with 70% power. We thus aimed to enrol 400 patients. Final data were placed in an SPSS database to allow statistical analysis. The χ2 test was used for categorical data. We did not carry out Fisher's exact test for any table as no table had an expected cell count of less than five. Sensitivities and specificities were calculated with 95% confidence intervals (CI) against the reference standard of slit lamp biomicroscopy. To assess intergrader agreement the kappa statistic is reported.
The kappa statistic represents the observed level of agreement adjusted for the level of agreement that could have been expected by chance alone.13 It is generally accepted that a value of 0.75–1.00 represents excellent agreement, 0.4–0.75 represents fair to good agreement, and values lower than this represent poor agreement.

Table 1 Classification of diabetic retinopathy and outcome from the UK National Screening Committee photograph grading guidelines
Classification | Features | Outcome of screening
None | No diabetic retinopathy | Annual screening
Background | Microaneurysm(s) identified clinically | Annual screening
Background | Retinal haemorrhage(s) +/- any exudates | Annual screening
Preproliferative | Venous beading | Refer
Preproliferative | Venous loop or reduplication | Refer
Preproliferative | IRMA | Refer
Preproliferative | Multiple deep, round haemorrhages | Refer
Preproliferative | Cotton wool spots | Refer
Proliferative | New vessels at disc | Refer
Proliferative | New vessels elsewhere | Refer
Proliferative | Preretinal or vitreous haemorrhage | Refer
Proliferative | Preretinal fibrosis or tractional detachment | Refer
Maculopathy | Exudate within 1 DD of fovea | Refer
Maculopathy | Circinate or group of exudates within macula | Refer
Maculopathy | Any microaneurysm or haemorrhage within 1 DD of fovea | Refer
Photocoagulation | Focal or macular grid | Refer
Photocoagulation | Peripheral scatter | Refer
Other lesion | Non-diabetic pathology |
Preproliferative is the equivalent of moderate to very severe non-proliferative retinopathy. DD, disc diameter; IRMA, intraretinal microvascular abnormalities.

Figure 1 Schema of three field photography undertaken. X represents the fovea; N represents the optic disc.

Table 2 Assessment of image quality and acceptability for grading. Photographs were deemed ungradable if any of the criteria below resulted in rejection, unless a specific reason for referral could be identified (for example, foveal changes seen even if only half the field could be visualised)
Criteria judged | Acceptance for grading
(a) Definition of image: (i) Good; all features fully assessable | Accepted
(a) Definition of image: (ii) Moderate; some haziness of some small vessels | Accepted
(a) Definition of image: (iii) Poor; unable to define small vessels | Rejected
(b) Field of image seen: (i) Full image seen | Accepted
(b) Field of image seen: (ii) >3/4 of image seen | Accepted
(b) Field of image seen: (iii) <3/4, but >1/2 of image seen | Rejected
(b) Field of image seen: (iv) <1/2 of image seen | Rejected
(c) Fovea visible: (i) Yes | Accepted
(c) Fovea visible: (ii) No | Rejected

RESULTS
The population
Of 407 patients approached, informed consent was secured from 398 (794 eyes). Six patients declined to take part in the study, two patients were unable to sit at the slit lamp, and one patient was unable to fixate on the light target for the camera. The median age of patients excluded from the trial was 64.3 years (range 24–83 years). One patient was uniocular and a further patient had sufficient cataract in one eye to preclude any fundal view and was referred for cataract surgery. The median age of patients enrolled was 63.0 years (range 17–88 years, interquartile range 51.8–70.3) with 57% male. Mean duration of diabetes was 9.3 years (95% CI 8.5 to 10.1), and 35% were treated with insulin.
Of the 794 eyes, 487 (61.3%) had no retinopathy, 101 (12.7%) had background retinopathy, 162 (20.4%) had maculopathy, 37 (4.7%) had preproliferative or proliferative changes, and seven (0.9%) had non-diabetic eye disease.

Proportion of gradable photographs
Table 3 shows the percentage of photographs reported as ungradable (technical failures) with the different photograph strategies. A statistically significant difference was present in the proportion of gradable photographs between the undilated and dilated single field strategies (χ² test statistic 135, p<0.001), and between the undilated and dilated multiple field strategies (χ² test statistic 139, p<0.001). If insufficient time was allowed for redilation before photographing the second eye, a higher proportion of ungradable photographs would have been anticipated in the eye photographed second (left eye). There was no statistically significant difference in the proportion of ungradable photographs between the right and left eyes for the undilated photograph strategy (χ² test statistic 0.254, p = 0.614). The total number of patients with either or both eyes unreadable was 144 of the 398 (36%). Of these 144 patients, 65 (45%) had both eyes unreadable, 36 (25%) were unreadable only in the right eye, and 43 (30%) only in the left eye.

Figure 2 shows the proportion of ungradable single field images obtained through undilated pupils against age band. It illustrates the proportion of ungradable photographs increasing with age, particularly above age 55 years. No dilation resulted in 74% gradable photographs, whereas dilating all patients over 60 years of age resulted in 90% of photographs being gradable; the figure was 93% if all patients over 50 years were dilated. Dilating pupils in all patients resulted in 95% of images being gradable, with the remaining 5% being ungradable because of persistently small pupils or media opacities such as cataracts.
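The χ² comparisons quoted above are 1-degree-of-freedom tests on 2×2 tables (strategy versus gradable/ungradable counts). As an illustrative sketch only — the cell counts below are invented round numbers, not the study's data — the Pearson statistic can be computed directly:

```python
# Pearson chi-square statistic (1 df, no continuity correction) for a
# 2x2 table [[a, b], [c, d]], e.g. rows = photography strategy,
# columns = ungradable / gradable counts.
def chi_square_2x2(a, b, c, d):
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Hypothetical example: 26/100 ungradable undilated vs 5/100 dilated.
chi2 = chi_square_2x2(26, 74, 5, 95)
print(round(chi2, 2))  # well above the 3.84 critical value for p < 0.05
```

At 1 degree of freedom, a statistic above 3.84 corresponds to p < 0.05, so the test statistics of 135 and 139 reported above are highly significant.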
Sensitivity and specificity of detection of any retinopathy and of referable retinopathy (gradable photographs only)
The sensitivity and specificity for the detection of any retinopathy or of referable retinopathy with the different photography strategies are presented in table 4. These calculations are made including the results from gradable photographs only. There is no statistically significant difference in the sensitivity or specificity of the detection of any retinopathy or of referable retinopathy between the different photography strategies (table 4).

Intergrader reliability
To assess intergrader reliability, 50 patients (100 eyes, 500 photograph episodes) were reviewed by both screeners. The outcome of each gradable photograph for detecting referable retinopathy was used to assess reliability. Ungradable photographs were treated as "missing data". The kappa value was 1.00 (56 cases) for the undilated single field strategy, 0.86 (90 cases) for the dilated single field strategy, and 0.88 (89 cases) for the three field strategy, representing near "excellent" agreement.

DISCUSSION
Mydriasis significantly reduced the proportion of ungradable photographs from 26% to 5% (p<0.001). Without mydriasis the proportion of ungradable photographs increased with patient age. However, the use of mydriasis or of three field photography did not improve the sensitivity or specificity for the detection of any retinopathy or of referable retinopathy when using the Topcon TRC NW6S non-mydriatic camera.

The criteria we used for gradability were stringent. There will always be some degree of subjectivity when assessing image quality, but the system we developed attempted to limit this by making the reason for rejection explicit (table 2).
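The kappa values reported for intergrader reliability adjust observed agreement for the agreement expected by chance. A minimal sketch of the computation — the 2×2 counts below are hypothetical, not the study's grading data:

```python
# Cohen's kappa for two graders making a binary call (referable / not).
def cohens_kappa(both_yes, a_only, b_only, both_no):
    n = both_yes + a_only + b_only + both_no
    observed = (both_yes + both_no) / n            # raw agreement
    p_a = (both_yes + a_only) / n                  # grader A's "yes" rate
    p_b = (both_yes + b_only) / n                  # grader B's "yes" rate
    expected = p_a * p_b + (1 - p_a) * (1 - p_b)   # chance agreement
    return (observed - expected) / (1 - expected)

print(cohens_kappa(20, 0, 0, 36))   # perfect agreement -> 1.0
print(cohens_kappa(40, 5, 10, 45))  # -> 0.7, "fair to good" on the scale above
```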
We reported 26% (mean of right and left eyes) of ungradable photographs; this figure was of the same order of magnitude as that of another study similarly using a non-mydriatic camera (19.7%).14 Calculations for sensitivity and specificity in this study are based only on the photographs that were gradable. Review of the literature suggests that most studies have excluded photograph failures,15 16 but a few have treated them as positive diagnoses.10 11 14

Table 3 Percentage of patients with photographs being reported as ungradable (technical failures)

                 Undilated single field   Dilated single field   Dilated multiple field
Right eye only   25%                      4%                     4%
Left eye only    27%                      6%                     5.5%
Either eye       36%                      7%                     6.5%

Figure 2 Proportion of ungradable and gradable photographs within each age band for photography through undilated pupils.

Comparing this study with other studies in the literature can be difficult because of different reference parameters. In our study, we chose slit lamp biomicroscopy as our reference standard for the screening of diabetic retinopathy, which has been shown to compare favourably with seven field photography and is more acceptable for quality monitoring of screening programmes.17 Scanlon recently reported sensitivity of 87.8% and specificity of 86.1% for referable diabetic retinopathy using mydriatic digital photography,14 and Olson reported sensitivity of 93% and specificity of 87%.18 These studies used slit lamp biomicroscopy as their reference standard and are comparable with our results.
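The sensitivity and specificity figures compared above all derive from a 2×2 comparison of the photographic grade against the reference standard (slit lamp biomicroscopy). A minimal sketch with invented counts, not the study's data:

```python
# Screening test performance from a 2x2 table against a reference
# standard; only gradable photographs would contribute counts.
def screening_measures(tp, fp, fn, tn):
    return {
        "sensitivity": tp / (tp + fn),  # diseased eyes correctly flagged
        "specificity": tn / (tn + fp),  # healthy eyes correctly passed
        "ppv": tp / (tp + fp),          # positive predictive value
        "npv": tn / (tn + fn),          # negative predictive value
    }

m = screening_measures(tp=83, fp=9, fn=17, tn=91)  # hypothetical counts
print(m["sensitivity"], m["specificity"])  # 0.83 0.91
```

Note that excluding ungradable photographs (as here) versus counting them as positive diagnoses changes all four cell counts, which is why comparisons across studies need care.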
A variety of photography strategies have been suggested for screening, including the use of two and three fields.12 19 The three field strategy chosen for this study differed from that of other groups. Our intention had been to perform two field photography according to the EURODIAB protocol,12 but the internal fixator on the Topcon camera distracted patients such that the area covered by two fields was not quite sufficient. We thus performed the three fields shown in figure 1 to ensure the EURODIAB area was covered, although there was some additional coverage at either extreme.

We found no statistically significant difference between the proportion of readable non-mydriatic photographs in the first eye compared with the second eye (p = 0.614). Other groups have found a higher rate of unreadable non-mydriatic second eye photographs and attributed this to insufficient dark adaptation time.20 In this study, with the low illumination of the digital camera, we would suggest that a dark adaptation time of 5 minutes would probably be sufficient.

A clinically significant number of patients require mydriasis to achieve gradable images. This is important if a screening programme is to achieve the National Screening Committee target of less than 5% of photographs being ungradable.8 This target has been shown to be achievable in a screening programme.21 However, 64% of patients had gradable images without the need for mydriasis. In addition, neither mydriasis nor three field photography improved sensitivity or specificity. This suggests that mydriasis is helpful in improving the proportion of gradable photographs but does not improve screening sensitivity directly. As three field photography did not improve sensitivity or specificity, our study found no justification for using mydriasis as a prerequisite for multiple fields (which require dilated pupils).
To achieve optimal quality of screening in a practical setting, on the basis of our results, mydriasis could be offered either to everyone or targeted to the minority who need it. Targeted mydriasis could be implemented in a number of ways. Digital photography allows an immediate assessment of image quality, and mydriasis could be offered immediately to those who appear to need it. This would, however, require the judgment of the photographer. Pupil size decreases with increasing age,22 although this may be a function of diabetes duration.23 We have shown that the proportion of ungradable photographs increases with age. Using an age cut-off of 55 or 60 years may be an alternative way of targeting mydriatic use and increasing the proportion of gradable photographs from 74% to over 90%. This has been used successfully in some screening projects, as in Belfast.24 In a year on year screening programme, patients who required mydriasis one year are likely to need it subsequently and could be targeted appropriately.

It may be easier to use mydriasis in everyone, as it is arguable whether patients would be deterred from attending screening if drops have to be used. The risk of precipitating glaucoma is negligible.25 Mydriasis does not seem to affect driving vision in young patients,26 but has been shown to do so in a group of patients of mean age 55 years,27 such that the legal requirements for driving may not be met after mydriasis. This requires further study. Avoidance of mydriasis may make a screening programme more cost effective by increasing throughput.9 This may, however, be offset by the delay involved in having to dilate those who fail initial undilated photography. The effect on throughput time of using such a strategy needs to be tested in a field study. Single field photography will also improve cost effectiveness compared with multiple field photography by reducing patient throughput time and reducing grading time.
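The targeting options discussed above (age cut-off, immediate image-quality check, prior-year history) could in principle be combined into a single triage rule. The sketch below is purely illustrative; the 60-year cut-off and the criteria names are assumptions for the example, not a protocol from this study:

```python
# Hypothetical triage rule for offering mydriatic drops before
# photography; thresholds and criteria are illustrative assumptions.
def offer_mydriasis(age, prior_year_needed_drops=False,
                    first_image_ungradable=False, age_cutoff=60):
    return (age >= age_cutoff
            or prior_year_needed_drops
            or first_image_ungradable)

print(offer_mydriasis(72))                               # True
print(offer_mydriasis(45))                               # False
print(offer_mydriasis(45, first_image_ungradable=True))  # True
```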
CONCLUSIONS
This study shows that mydriatic eye drops significantly increase the proportion of gradable photographs when using a modern digital camera in a masked study. To reduce the adverse impact of mydriasis, patients could be targeted for dilation in a number of ways that have been discussed. The use of three fields did not improve the sensitivity or specificity for the detection of any retinopathy or of referable retinopathy. On the basis of this study we would recommend consideration of a selective mydriasis and single field strategy for the screening of diabetic eye disease.

ACKNOWLEDGMENTS
We would like to thank the Ross Foundation, Tenovus, and Tayside University Hospital Trust Grants for the financial support of the project. The help and advice of Paul Baines and the statistical advice of Simon Ogston have been invaluable. Finally we would like to thank all the staff and patients in the diabetes and eye clinics who agreed to participate in the study.

Table 4 Sensitivity and specificity of the detection of any retinopathy with different photograph strategies (only gradable photographs included). No statistically significant difference between photography strategies.

                                      Undilated single field   Dilated single field   Dilated multiple field
                                      (n = 585)                (n = 750)              (n = 752)
Detection of any retinopathy
  Sensitivity (95% CI)                83% (78–88%)             86% (82–90%)           90% (86–93%)
  Specificity (95% CI)                91% (88–94%)             91% (89–94%)           90% (88–93%)
  Positive predictive value (95% CI)  85% (80–90%)             87% (83–91%)           86% (82–90%)
  Negative predictive value (95% CI)  90% (87–93%)             91% (88–94%)           93% (91–95%)
Detection of referable retinopathy
  Sensitivity (95% CI)                77% (71–84%)             81% (76–87%)           83% (78–88%)
  Specificity (95% CI)                95% (93–97%)             92% (90–94%)           93% (91–96%)
  Positive predictive value (95% CI)  85% (79–91%)             79% (73–85%)           82% (77–87%)
  Negative predictive value (95% CI)  92% (89–95%)             93% (91–95%)           94% (92–96%)
Authors' affiliations
H Murgatroyd, A Ellingford, A Cox, M Binnie, J D Ellis, C J MacEwen, Department of Ophthalmology, Ninewells Hospital and Medical School, Dundee, UK
G P Leese, Department of Diabetes, Ninewells Hospital and Medical School, Dundee, UK

Funding: Ross Foundation, Tenovus Scotland and TUHT grant scheme.
Proprietary interests: none.

REFERENCES
1 Evans J, Rooney C, Ashwood F, et al. Blindness and partial sight in England and Wales: April 1990–March 1991. Health Trends 1996;28:5–12.
2 Kohner EM, Stratton IM, Aldington SJ, et al. Relationship between the severity of retinopathy and progression to photocoagulation in patients with type 2 diabetes mellitus in the UKPDS (UKPDS 52). Diabet Med 2001;18:178–84.
3 Ferris FL. How effective are treatments for diabetic retinopathy? JAMA 1993;269:1290–1.
4 Fong DS, Gottlieb J, Ferris FL, et al. Understanding the value of diabetic retinopathy screening. Arch Ophthalmol 2001;119:758–60.
5 Garvican L, Clowes J, Gillow T. Preservation of sight in diabetes: developing a national risk reduction programme. Diabet Med 2000;17:627–34.
6 Javitt JC, Aiello LP. Preventative eye care in people with diabetes is cost-saving to the federal government: implications for health-care reform. Diabetes Care 1994;17:909–17.
7 James M, Turner D, Broadbent DM, et al. Cost effectiveness analysis of screening for sight threatening diabetic eye disease. BMJ 2000;320:1627–31.
8 Gillow JT, Gray JA. The National Screening Committee review of diabetic retinopathy screening. Eye 2001;15:1–2.
9 Facey K, Cummins E, Macpherson K, et al. Organisation of services for diabetic retinopathy screening. Health Technology Assessment Report 1.
Glasgow 2002, Health Technology Board Scotland (www.htbs.org.uk).
10 Lairson DR, Pugh JA, Kapadia AS, et al. Cost-effectiveness of alternative methods for diabetic retinopathy screening. Diabetes Care 1992;15:1369–77.
11 Pugh JA, Jacobson JM, Van Heuven WA, et al. Screening for diabetic retinopathy. The wide angle camera. Diabetes Care 1993;16:889–95.
12 Aldington SJ, Kohner EM, Meuer S, et al. Methodology for retinal photography and assessment of diabetic retinopathy: the EURODIAB IDDM complications study. Diabetologia 1995;38:437–44.
13 Landis JR, Koch GG. The measurement of observer agreement for categorical data. Biometrics 1977;33:159–74.
14 Scanlon PH, Malhotra R, Foy C, et al. The effectiveness of screening for diabetic retinopathy by digital imaging photography and technician ophthalmoscopy. Diabet Med 2003;20:467–74.
15 Taylor R, Lovelock L, Tunbridge WMG, et al. Comparison of non-mydriatic retinal photography with ophthalmoscopy in 2159 patients: mobile retinal camera study. BMJ 1990;301:1243–7.
16 Stellingwerf C, Hardus PL, Hooymans JM. Two field photography can identify patients with vision-threatening diabetic retinopathy: a screening approach in the primary care setting. Diabetes Care 2001;24:2086–90.
17 Scanlon PH, Malhotra R, Greenwood RH, et al. Comparison of two reference standards in validating two field mydriatic digital photography as a method of screening for diabetic retinopathy. Br J Ophthalmol 2003;87:1258–63.
18 Olson JA, Strachan FM, Hipwell JH, et al. A comparative evaluation of digital imaging, retinal photography and optometrist examination in screening for diabetic retinopathy. Diabet Med 2003;20:528–34.
19 Harding SP, Broadbent DM, Neoh C, et al. Sensitivity and specificity of photography and direct ophthalmoscopy in screening for sight threatening eye disease: the Liverpool Diabetic Eye Study. BMJ 1995;311:1131–5.
20 Heaven CJ, Cansfield J, Shaw KM.
The quality of photographs produced by the non-mydriatic fundus camera in a screening programme for diabetic retinopathy: a 1 year prospective study. Eye 1993;7:787–90.
21 Gibbins RL, Owens DR, Allen JC, et al. Practical application of the European Field Guide in screening for diabetic retinopathy by using ophthalmoscopy and 35 mm retinal slides. Diabetologia 1998;41:59–64.
22 Hreidarsson AB. Pupil size in insulin-dependent diabetes. Relationship to duration, metabolic control and long-term manifestations. Diabetes 1982;31:442–8.
23 Cahill M, Eustace P, de Jesus V. Pupillary autonomic denervation with increasing duration of diabetes mellitus. Br J Ophthalmol 2001;85:1225–30.
24 Taylor R. Practical community screening for diabetic retinopathy using the mobile retinal camera: report of a 12 centre study. Diabet Med 1996;13:946–52.
25 Pandit RJ, Taylor R. Mydriasis and glaucoma: exploding the myth. A systematic review. Diabet Med 2000;17:693–9.
26 Montgomery DM, MacEwen CJ. Pupil dilation with tropicamide. The effects on acuity, accommodation, and refraction. Eye 1989;3:845–8.
27 Jude EB, Ryan B, O'Leary BM, et al. Pupillary dilatation and driving in diabetic patients. Diabet Med 1998;15:143–7.

Elastin Stabilization for Treatment of Abdominal Aortic Aneurysms

Jason C. Isenburg, PhD*; Dan T. Simionescu, PhD*; Barry C. Starcher, PhD; Narendra R. Vyavahare, PhD

Background—Maintaining the integrity of arterial elastin is vital for the prevention of abdominal aortic aneurysm (AAA) development.
We hypothesized that in vivo stabilization of aortic elastin with pentagalloyl glucose (PGG), an elastin-binding polyphenol, would interfere with AAA development.

Methods and Results—Safety and efficacy of PGG treatment were first tested in vitro using cytotoxicity, elastin stability, and PGG-elastin interaction assays. For in vivo studies, the efficacy of PGG was evaluated within a well-established AAA model in rats on the basis of CaCl2-mediated aortic injury. With this model, PGG was delivered periadventitially at 2 separate time points during the course of AAA development; aortic diameter, elastin integrity, and other pathological aspects were monitored and evaluated in PGG-treated aortas compared with saline-treated control aortas. Our results show that a one-time periadventitial delivery of noncytotoxic levels of PGG inhibits elastin degeneration, attenuates aneurysmal expansion, and hinders AAA development in rats without interfering with the pathogenic mechanisms typical of this model, namely inflammation, calcification, and high metalloproteinase activities. PGG binds specifically to arterial elastin and, in doing so, preserves the integrity of elastic lamellae despite the presence of high levels of proteinases derived from inflammatory cells.

Conclusions—Periadventitial administration of PGG hinders the development of AAA in a clinically relevant animal model. Stabilization of aortic elastin in aneurysm-prone arterial segments offers great potential toward the development of safe and effective therapies for AAAs. (Circulation. 2007;115:1729-1737.)

Key Words: aneurysm • aorta • drug delivery systems • elasticity • metalloproteinases • prevention • tannins

Abdominal aortic aneurysms (AAAs) are associated with impaired arterial wall integrity, leading to abnormal ballooning and eventual fatal rupture.
AAAs, which are apparently increasing in frequency, have been cited as one of the top 10 causes of death among older men.1 After initial diagnosis, AAA patients (AAA diameter ≥2.5 cm) are monitored for an increase in aortic diameter, and surgery aimed at preventing death from enlarged or ruptured AAAs is recommended when the diameter reaches ≥5.5 cm.2 No pharmacological treatment currently exists for AAAs. Surgical procedures entail either endovascular stent graft repair or complete replacement of the diseased aortic segment with an artificial vascular graft. Although often effective, endovascular stents are anatomically appropriate for only 30% to 60% of AAA patients at the outset and present the risk of endoleaks and graft displacement.3 Moreover, open surgery for full-size graft insertion is highly invasive, limiting its use to those patients with high operative risk.

Clinical Perspective p 1737

Treatment options are particularly limited for patients with small or moderate aneurysms; this group makes up the largest percentage of all AAA patients.4 This number is likely to increase dramatically with the advent of "blanket" screening of asymptomatic subjects with imaging surveillance.5 Consequently, novel therapeutic approaches targeted at hindering the progression of AAAs promptly after diagnosis would be extremely beneficial.

Information about AAA mechanisms is currently gathered from analysis of late-stage aneurysmal samples obtained from patients and animal models. In addition to arterial dilatation, AAAs are characterized by degeneration of the arterial architecture,6 decreased medial elastin content,7 disruption or fragmentation of elastic lamellae,8 presence of matrix-degrading enzymes such as matrix metalloproteinases (MMPs),9,10 inflammatory infiltration,10 and often calcification.11 As a result of its multifactorial pathogenesis, antiinflammatory agents,12 proteinase inhibitors (such as tissue inhibitors of metalloproteinases [TIMPs]), and genetic and pharmacological inhibition of MMPs13 have been tested as potential AAA treatments in experimental animals, but none has yet reached clinical application. Because long-term, adequate control of local inflammation and MMP activities may be difficult to achieve and may be accompanied by adverse side effects,14 our hypothesis was that stabilization of aortic elastin in aneurysm-prone arterial segments offers potential for the development of safe and effective therapies for AAAs.

In current studies, we have explored polyphenolic tannins, specifically pentagalloyl glucose (PGG), as novel elastin-stabilizing agents. Tannins bind to elastin15 and, in doing so, render it resistant to enzymatic degradation.15–17 We provide evidence that periarterial treatment with PGG preserves elastin fiber integrity and hinders aneurysmal dilatation of the abdominal aorta in a clinically relevant animal model of AAA.

Received November 3, 2006; accepted January 26, 2007.
From the Department of Bioengineering (J.C.I., D.T.S., N.R.V.), Clemson University, Clemson, SC, and Department of Biochemistry, University of Texas Health Center, Tyler (B.C.S.).
The online-only Data Supplement, consisting of expanded Methods, is available with this article at http://circ.ahajournals.org/cgi/content/full/CIRCULATIONAHA.106.672873/DC1.
*Dr Isenburg and Dr Simionescu contributed equally to this work.
Correspondence to Narendra R. Vyavahare, PhD, Cardiovascular Implant Research Laboratory, Department of Bioengineering, Clemson University, 501 Rhodes Engineering Research Center, Clemson, SC 29634. E-mail narenv@clemson.edu
© 2007 American Heart Association, Inc.
Circulation is available at http://www.circulationaha.org DOI: 10.1161/CIRCULATIONAHA.106.672873
Downloaded from http://ahajournals.org on April 5, 2021.

Methods
Please see the online Data Supplement for expanded Methods.
Pentagalloyl Glucose
High-purity PGG used throughout the in vitro and in vivo experiments was produced from tannic acid by methanolysis, as described previously.17

In Vitro Cytotoxicity
Primary rat aortic smooth muscle cells and rat skin fibroblasts were exposed to increasing concentrations of PGG and viability assessed by the soluble tetrazolium salt (MTS) assay (Promega, Madison, Wis) and Live-Dead staining (Molecular Probes, Eugene, Ore). Cells also were incubated with PBS and 70% ethanol as negative and positive controls, respectively.

In Vitro Efficacy
Native abdominal aortas collected from adult rats were treated in vitro with increasing concentrations of PGG dissolved in saline. Control samples were treated with saline alone. Aortic samples were tested for resistance to elastase by desmosine analysis.15,18 To demonstrate PGG–aorta interactions, samples were stained with a phenol-specific histology stain and were tested for natural recoil ability using opening-angle measurements, as described previously.17,19

Animal Surgeries
Two experiments were designed to evaluate the in vivo efficacy of PGG in hindering AAA formation and progression in a CaCl2 injury model of aortic aneurysm (Figure 1).

Experiment 1: Interference With Formation of Early AAA
Infrarenal abdominal aortas of adult male Sprague-Dawley rats were perivascularly treated for 15 minutes with PGG dissolved in physiological saline using a presoaked gauze applicator. After rinsing, the aortas underwent chemical injury by application of 0.5 mol/L CaCl2 for 15 minutes. Control aortas were treated with vehicle (saline) for 15 minutes, rinsed, and then subjected to CaCl2. After 28 days, rats were humanely euthanized with CO2 asphyxiation, and aortas were excised and analyzed as described below. Changes in external aortic diameter were measured by digital photography.
Experiment 2: Hindrance of AAA Progression
Abdominal aortas were treated perivascularly with 0.5 mol/L CaCl2 to induce AAA, and rats were allowed to recover. After 28 days, the aneurysmal aortas were reexposed and treated with PGG in saline for 15 minutes, as described above. In the control group, aneurysmal aortas were treated with vehicle (saline) for 15 minutes. At 28 days after the second surgery (56 days after initial CaCl2 injury), rats were humanely euthanized, and samples were collected for analysis. Changes in external aortic diameter were measured by digital photography.

All animals used in our experiments received humane care in compliance with protocols approved by the Clemson University Animal Research Committee as formulated by the National Institutes of Health (publication No. 86-23, revised 1999).

Elastin Content and Integrity Assessment
Aortic segments were analyzed for desmosine content by radioimmunoassay18 and by histology using Verhoeff van Gieson stain for elastin and hematoxylin and eosin for general structure.

PGG Binding Studies
To extract PGG, aortas retrieved 28 days after PGG treatment (experiment 1) were pulverized and shaken with 80% methanol; extracts were assayed for phenol content with the Folin-Denis reaction.20 As positive controls, rat aortas were treated with PGG in situ for 15 minutes and extracted for analysis of initial PGG content.

Calcification Assessment
Rat aortic samples retrieved at 28 days after CaCl2 injury (experiment 1) were analyzed for calcium content using atomic absorption spectrophotometry and by histological Alizarin red staining, as described before.21

MMP and TIMP Quantification
MMP-2 and MMP-9 activities in rat aortas retrieved 28 days after CaCl2 injury (experiment 1) were analyzed by gelatin zymography.22 TIMP-2 levels were assayed by ELISA (Amersham Biosciences, Piscataway, NJ).
Immunohistochemistry
Macrophages were detected on samples from control (saline) and PGG-treated aortas 28 days after treatment (experiment 1) using a mouse anti-rat monocyte/macrophage primary antibody (Chemicon International, Temecula, Calif), whereas lymphocytes were detected with a rabbit anti-CD3 T-cell primary antibody (Sigma-Aldrich, St Louis, Mo). Staining was performed with the appropriate biotinylated secondary antibodies and an avidin-biotin detection system (Vector Laboratories, Burlingame, Calif).

Figure 1. Experimental design for in vivo experiments. In experiment 1, PGG was applied to healthy abdominal aorta immediately before CaCl2-mediated injury, and AAA development was followed for 28 days. In experiment 2, PGG was applied to aneurysmal aorta, and development of advanced AAA was monitored for another 28 days.

Statistical Analyses
Results are expressed as mean±SEM. Statistical analyses of the data were performed using single-factor ANOVA. Subsequently, differences between means were determined through the use of the least significant difference with an α value of 0.05. The authors had full access to and take responsibility for the integrity of the data. All authors have read and agree to the manuscript as written.

Results

In Vitro Safety and Efficacy Studies
In preparation for the animal studies, several experiments were performed in vitro to evaluate the effects of a single 15-minute application of PGG on cells and arterial extracellular matrix, specifically elastin. MTS results (Figure 2A and 2B) showed that exposure of cells to PGG concentrations of up to 0.06% had minimal cytotoxic effects. No statistical differences were observed between cells exposed to 0.03% and 0.06% PGG (P>0.05). These results were also confirmed by Live-Dead assay (data not shown).
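The single-factor ANOVA named under Statistical Analyses reduces to a ratio of between-group to within-group mean squares. A minimal sketch with invented measurements — the least-significant-difference follow-up comparisons would additionally require a t critical value:

```python
# One-way (single-factor) ANOVA F statistic; group data are invented
# for illustration and do not reproduce any result in the paper.
def one_way_anova_F(groups):
    k = len(groups)                                # number of groups
    n = sum(len(g) for g in groups)                # total observations
    grand = sum(sum(g) for g in groups) / n        # grand mean
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2
                     for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g)
                    for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

print(one_way_anova_F([[1, 2, 3], [2, 3, 4]]))  # 1.5
```

The resulting F would then be compared against the F distribution with (k-1, n-k) degrees of freedom at α = 0.05.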
Desmosine analysis of aortic samples exposed to elastase showed an increasing trend in elastin preservation with increasing PGG concentrations (Figure 2C). Moreover, the intensity of polyphenol histological staining increased progressively with increasing concentrations of PGG, revealing direct binding of PGG to elastic fibers (Figure 2D). For further proof of PGG-arterial elastic fiber interactions, the ring-opening test was performed on PGG-treated native rat aortic rings (Figure 2E and 2F). When allowed to open by a single incision, rings extended to approximately 75°, revealing the natural elastic recoil properties of aortic tissues. Rings that were exposed to increasing concentrations of PGG exhibited progressively smaller opening angles, suggestive of the direct interactions of PGG with elastic fibers.

Having established that a 15-minute exposure to PGG solutions of up to 0.06% is not cytotoxic and effectively stabilizes aortic tissue by binding to elastic fibers, we designed 2 experiments to test the in vivo efficacy of PGG in hindering AAA formation and progression (Figure 1).

Perivascular Delivery of Noncytotoxic Levels of PGG Hinders Aneurysm Formation
In experiment 1, perivascular application of CaCl2 to the infrarenal abdominal aorta induced significant changes in aortic diameter at 28 days after injury (Figure 3A and 3B).

Figure 2. In vitro safety and efficacy studies of PGG interactions with native rat abdominal aorta. Rat aortic smooth muscle cells (A) and rat skin fibroblasts (B) were exposed to increasing concentrations of PGG, and cytotoxicity was measured with the MTS assay (n=3 per group per cell type). Cells exposed to PBS and 70% ethanol (EtOH) were used as negative and positive controls, respectively. C, To test resistance to elastase, aortic samples were exposed to increasing concentrations of PGG for 15 minutes in vitro and then exposed to pure elastase. Aortic elastin content was measured in each group by desmosine analysis (n=5 per group).
D, For histological confirmation of PGG binding to aortic elastin, PGG-treated aortic tissues were stained with a phenol-specific stain (black indicates PGG) and counterstained with light green (L indicates lumen; bar = 50 μm). Arrows point to elastic lamellae. E, To test elastic recoil, aortic rings were treated for 15 minutes with increasing concentrations of PGG and subjected to ring-opening analysis (n=5 per group). Opening angles were measured graphically (bottom left corner). F, Mean opening angles for each group are shown as a function of PGG concentration. *Statistically significant differences (P<0.05) relative to the negative control group.

Comparative measurements of control rats at days 0 and 28 after surgery (1.3±0.05 and 1.9±0.1 mm, respectively) revealed a mean increase in diameter of 42±10% (P<0.05; n=12). In comparison, aortas exposed to PGG exhibited a minimal (8±7%) increase in diameter after 28 days (from 1.5±0.06 to 1.6±0.09 mm). With an arbitrary threshold of a 20% diameter increase considered aneurysmal in this experimental model, 8 of 12 rats exhibited aneurysms in the control group (66.7%), and only 2 of 11 rats were aneurysmal in the PGG group (18.2%; Figure 3C). These results indicate that PGG application to healthy aorta effectively hindered formation of aneurysms in this experimental model.

As shown above, exposure to PGG had minimal in vitro cytotoxic effects on rat cells. In our animal studies, PGG-treated rats did not exhibit significant weight losses during the 28-day test period (42.3±3.4 g gain in the control group versus 44.2±3.6 g in the PGG group; P>0.05; n=12). Liver samples did not exhibit any noticeable histological changes indicative of hepatotoxicity (data not shown).
In addition, levels of serum ALT (an enzyme used to assess liver function) for saline controls (9.1±1.5 U/L) and PGG-treated rats (17.7±3.1 U/L) were consistently within the acceptable range (5 to 45 U/L). Furthermore, ALT levels for PGG-treated rats were not statistically different from those of nonsurgery control rats (11.5±1.2 U/L; P>0.05), suggesting that PGG treatment did not induce major liver damage within this model.

Figure 3. Local delivery of PGG prevents AAA development and elastin degeneration. Adult rat abdominal aortas were treated periadventitially with PGG for 15 minutes, rinsed, and then exposed to CaCl2-mediated chemical injury to induce aneurysm formation. In control rats, PGG treatment was replaced with saline, followed by CaCl2. A, External diameter of the infrarenal aorta between the renal artery (top) and iliac bifurcation (bottom) was measured by digital photography. Black dashed lines were added to aid in the identification of aortic anatomy. B, Mean percent change in diameter at 28 days relative to day 0 (n=12 for saline controls, n=11 for PGG). C, Calculated aneurysm occurrence using an arbitrary 20% diameter increase threshold for defining AAA. D, Elastin integrity evaluated by histology using VVG stain (black L indicates lumen; elastin stained black; bar=50 μm) and (E) desmosine analysis. F, Quantitative PGG content analysis in explanted aorta (n=3). Thick black arrows in B and E designate timing of PGG application. *Statistically significant differences between means (P<0.05) at 28 days relative to saline-treated controls.

Circulation, April 3, 2007, page 1732.

Treatment With PGG Prevents Degeneration of Aortic Elastin

In parallel with aortic dilatation, experimentally induced AAAs were accompanied by major changes in vascular elastin content and integrity as shown by desmosine analysis and histology (Figure 3D and 3E).
Compared with nonsurgery control aorta, aortic elastin content in the control group diminished by almost 50% (Figure 3E) and exhibited characteristic flattening and fragmentation of the elastic laminae (Figure 3D) at 28 days after injury. Conversely, aortas from the PGG group exhibited a minimal (<15%) decrease in elastin content (Figure 3E) and excellent preservation of elastic laminar integrity and waviness (Figure 3D), suggesting that PGG delivery effectively prevented elastin degeneration in this animal model.

PGG Binds to Aortic Elastin In Vivo

The affinity of phenolic tannins toward elastin is well known from histology techniques23 and was previously demonstrated in vitro with pure elastin.15 Compared with day 0 (1.8±0.6 μg PGG/mg dry tissue), aortas explanted 28 days after PGG application contained slightly lower (Figure 3F) but not statistically different amounts of PGG (1.2±0.4 μg PGG/mg dry tissue; P>0.05), indicating that in vivo binding of PGG to aortic tissues is stable for a minimum of 28 days in this animal model.

Treatment With PGG Does Not Interfere With Model-Related Pathogenesis

To investigate pathogenic aspects typical of this animal model,24 calcium content, macrophage infiltration, MMP activities, and expression of TIMP were analyzed in aortas retrieved 28 days after CaCl2 treatment (experiment 1). Perivascular injury induced significant tissue calcification after 28 days (Figure 4A), which was localized mainly in the media (Figure 4B). Similar calcium values (P>0.05) and distribution were observed in PGG-treated aortas. MMP activities and TIMP-2 levels (Figure 4C) also were not different between controls and PGG-treated aortas (P>0.05), suggesting that tissues in both groups were exposed to similar proteolytic environments. Moreover, hematoxylin and eosin and immunohistochemical staining for macrophages and lymphocytes revealed comparable inflammatory infiltrates in both groups (Figure 4D).
Taken together, these results suggest that PGG treatment did not interfere with key pathogenic mechanisms typical of this AAA experimental model.

Periadventitial Treatment of Aneurysmal Aorta With PGG Limits AAA Progression

In experiment 2, rat aortas were treated with CaCl2, and AAA was allowed to develop for 28 days. At this time point, the aneurysmal aortas were exposed through a second surgery, and PGG was applied perivascularly (PGG group). Saline was applied to aneurysmal aortas in the control group. AAA progression was followed for another 28 days in both groups. A progressive diameter increase, reaching a mean 47.1±11% increase at 56 days, was measured in the control group (Figure 5A and 5B). Approximately half of the aneurysmal aortas significantly increased in diameter from day 28 to 56, indicating chronic AAA progression in this animal model (Figure 5C). Conversely, aneurysmal aortas that were exposed to PGG exhibited no increase in mean diameter at 56 days compared with day 28 mean values (Figure 5B). It is especially noteworthy that 100% of aortas in the PGG group (11 of 11) maintained the same diameter or exhibited a decrease in aortic diameter at 56 compared with 28 days (Figure 5C).

Figure 4. PGG application does not interfere with pathogenic mechanisms of AAA formation. A, Aortic calcification at 28 days after CaCl2 injury was evaluated by calcium content analysis (n=12 per group) and (B) histology using Alizarin red stain (red L indicates lumen; calcium deposits stained red). C, MMP-2 and MMP-9 enzymatic activities and TIMP-2 levels in the 2 groups were not statistically different (P>0.05; n=6 per group) at 28 days after injury. D, Immunohistochemistry for macrophages (bar=100 μm) and lymphocytes (bar=50 μm). For both types, positive reaction is brown.
The mean diameter value at 56 days for the PGG group was actually slightly lower than that at 28 days, but the difference was not statistically significant (P>0.05). Overall, these results indicate that PGG application to aneurysmal aortas effectively hindered arterial dilatation in this experimental model.

Effect of PGG on Degeneration of Aortic Elastin

At 56 days after injury, aneurysmal aortas exhibited extensive flattening, fragmentation, and degeneration of the elastic laminae in the control group (Figure 5D). Overall tissue architecture was indicative of severe tissue degeneration as outlined by numerous gaps or lacunae, bestowing the aneurysmal aorta with a porous, "spongy" aspect. In contrast, PGG-treated aortas exhibited improved preservation of elastic laminar integrity and waviness and overall preserved tissue architecture (Figure 5D). Analysis of vascular elastin revealed no significant differences in elastin content as shown by desmosine analysis (679.5±63.2 and 715.9±90.6 pmol desmosine/mg dry tissue). Moreover, these elastin values were not statistically different from the 28-day values

Figure 5. Delivery of PGG to aneurysmal aorta prevents AAA progression. Adult rat abdominal aortas were treated periadventitially with CaCl2, and AAAs were allowed to develop for 28 days. At that time, aneurysmal aortas were reexposed and treated with PGG. In control rats, PGG treatment was replaced with saline. Rats in both groups were euthanized 28 days after the second surgery (total, 56 days after initial CaCl2 injury). A, External diameter of the infrarenal aorta was measured by digital photography after perfusion fixation. Black dashed lines were added to aid in the identification of aortic anatomy. B, Mean percent change in diameter at 28 and 56 days relative to day 0 (n=11 per group). Thick black arrow designates timing of PGG application. C, Numbers of individual rats that exhibited increased, stationary, or reduced aortic diameters at 56 vs 28 days.
D, Elastin integrity and tissue architecture were evaluated by histology using VVG stain (black L indicates lumen; elastin stained black) and hematoxylin and eosin. Bar=40 μm. E, Low-magnification pictures used to digitally measure the thickness of the arterial media. Bar=1000 μm. *Statistically significant differences between means (P<0.05).

(503.0±69.2 pmol desmosine/mg dry tissue; P>0.05), indicating that the severe elastin degradation observed at 28 days had reached a steady state and did not change significantly as AAA progressed from 28 to 56 days. Morphometric measurements on perfusion-fixed aortas (Figure 5E) revealed that the thickness of the aortic media was significantly smaller in the PGG treatment group (55.4±2.4 μm) compared with saline-treated aneurysmal controls (80.3±4.7 μm; P<0.05). Taken together, these results suggest that PGG treatment of aneurysmal aortas effectively prevented chronic diameter expansion and aortic matrix deterioration in this AAA animal model.

Discussion

Aneurysms arise from irreversible, chronic pathological changes that involve degeneration of structural matrix components. The role of MMP/TIMP imbalances in AAA initiation and development has been demonstrated conclusively by studies using MMP- and TIMP-knockout mice.25-27 MMP-mediated loss of elastin, a hallmark of AAA, is believed to lead to progressive dilatation of the aorta. Therefore, our studies focused on chemical stabilization of vascular elastin as a novel approach to limit aneurysmal degeneration. The traditional approach to treating AAAs involves surgical procedures such as endovascular stent graft repair or complete replacement of the diseased section of the aorta. Nonsurgical approaches, currently still in the experimental phase, include the use of antiinflammatory agents, proteinase inhibitors, and genetic and pharmacological inhibition of MMPs in animal models.
Doxycycline, an antibiotic and MMP inhibitor, showed promising initial results in a pilot clinical study.28 Recently, Yoshimura et al29 reported that systemic pharmacological inhibition of c-Jun N-terminal kinase, an intracellular signaling switch that controls MMP production, might block AAA progression and stimulate aneurysm regression. Although numerous attempts have focused on proteolytic inhibition for AAA treatment, we present a radically innovative approach, namely periadventitial delivery of noncytotoxic concentrations of phenolic tannins such as PGG to stabilize elastin and render it resistant to enzymatic degradation. In previous studies, we have shown that tannic acid (a decagalloyl glucose) binds to pure aortic elastin and porcine aortic wall, in both cases resulting in improved resistance to enzyme-mediated elastin degradation.15,16 More recently, we have reported that PGG, a more stable and less cytotoxic derivative of tannic acid, also binds to and protects elastin within porcine aortic wall with very similar efficacy.17 Our hypothesis was that the ability of PGG to stabilize elastin would extend in vivo to rat aorta. To verify our hypothesis, we performed a series of in vitro safety and efficacy studies, followed by in vivo validation in a rat AAA model.

Safety and Efficacy Studies

Results showed that PGG binds specifically to vascular elastin and, in doing so, renders elastin resistant to enzymatic degradation. In addition to being an effective elastin-stabilizing agent, PGG, when used at concentrations of up to 0.06%, had virtually no adverse effects on cell viability when applied in vitro to rat cells.

Local PGG Delivery Hinders AAA Formation and Limits Diameter Increase

Traditionally, an increase in aortic diameter is considered the defining characteristic of AAA.
Numerous investigators have studied AAA formation in the CaCl2 injury model in mice,30 rabbits,11,31 and rats.32 We have previously used this model to show that CaCl2 injury induces a series of pathogenic alterations that share many similarities with human pathology and with changes observed in other experimental animals.24 In experiment 1, application of CaCl2 to the rat infrarenal abdominal aorta induced a mean increase in diameter of 42% at 28 days after application. The extent of aneurysmal dilatation observed in our studies corresponds well with other published studies using CaCl2 injury, which consistently reported a 25% to 75% increase in diameter in rodents.25,26,32,33 In our studies, this pathology was observed in 8 of 12 control rats (66% incidence). This prevalence also was similar to other studies that reported AAA incidences of 60% to 80%.25,26 It is unknown why a consistent number of genetically similar experimental animals are apparently resistant to CaCl2-mediated injury. This aspect warrants further investigation. Conversely, the mean aortic diameter increase in the PGG group was ≈10%, which is traditionally not considered indicative of aneurysm formation. The overall AAA incidence in the PGG group was ≈20%, indicating that in this model, PGG prevented aneurysm formation in more than 8 of 10 animals. Although the detailed mechanisms of this successful approach are not entirely known, we provide evidence that it likely involves PGG-mediated elastin stabilization. In addition to dilatation, AAAs also are characterized by progressive damage to arterial elastin. In our experimental model, periadventitial application of CaCl2 induced a mean decrease of ≈50% in aortic elastin content.
Earlier studies have shown that elastin degeneration starts within the first days after CaCl2 injury and that this chronic process is related to an overexpression of MMPs within the aortic extracellular matrix.24 Similar to the diameter increase, elastin degeneration in CaCl2-treated aorta was observed in ≈80% of control rats. Typically, aortic samples characterized by low elastin content also exhibited characteristic flattening and fragmentation of the naturally wavy elastic lamellae. This histological aspect is characteristic of the aortic pathology in CaCl2-treated aorta.29 It is not clear at this time what the mechanism and clinical significance of elastic fiber flattening are, and this aspect merits further examination. Periadventitial treatment of aorta with PGG resulted in remarkable preservation of elastin integrity. Quantitative desmosine results showed that elastin content in the PGG group was not statistically different from that of the nonsurgery controls (P>0.05), indicating minor, if any, elastin degradation. This was accompanied by excellent histological preservation of elastic lamellar integrity and waviness. Elastin preservation in the CaCl2 experimental model also was observed in aortas of MMP-knockout mice25,26 and in mice treated systemically with a c-Jun N-terminal kinase inhibitor.29 In all studies published thus far, the present report included, elastin preservation was consistently associated with the absence of aneurysm initiation or progression. This indicates that elastin stabilization by either MMP inhibition or PGG treatment may be effective as a potential treatment of AAAs.
Periadventitial Treatment of Aneurysmal Aorta With PGG Limits AAA Progression

In a more clinically relevant experiment, PGG was directly applied to aneurysmal aortas, and AAA progression was followed after this surgical intervention. Although most control (saline-treated) aneurysmal aortas continued to expand in diameter, all of the PGG-treated aneurysmal aortas maintained the same diameter or exhibited a decrease in aortic diameter at the end of the experiment (56 days). Diameter expansion in aneurysmal controls was associated with changes in medial thickness and integrity that included severe tissue degeneration. Yoshimura et al29 reported similar inhibition of AAA progression after aneurysm onset in rodent models as a result of systemic pharmacological inhibition of c-Jun N-terminal kinase. Because c-Jun N-terminal kinase is an intracellular signaling switch that controls MMP production, its inhibition hindered MMP production and thus prevented degeneration of the vascular matrix. In our studies, histological analysis of aneurysmal aortas at day 56 showed a significant decline in elastin integrity in the control group, indicating that elastin degeneration is a chronic pathogenic feature characteristic of this AAA model. Hallmarks of this process were elastic fiber flattening, fragmentation, and lack of elastic laminar staining in extended areas in almost all samples (5 of 6). Aortas from the PGG group exhibited improved preservation of elastic laminar integrity and waviness in most samples (4 of 6). Desmosine analysis for this experiment revealed little quantitative difference for the PGG group compared with controls, possibly because elastin degradation had reached a maximum threshold at 28 days. Being mindful of these limitations, we nonetheless suggest that PGG-mediated elastin stabilization in aneurysmal aortas has the potential to significantly hinder AAA development.
Mechanisms of PGG-Mediated Elastin Stabilization

As demonstrated before, PGG exhibits an outstanding affinity toward elastin, possibly binding to hydrophobic areas, which are known to be susceptible to protease-mediated elastolysis.16 In the present studies, we have validated PGG binding to aortic elastin in vivo and provided evidence that the binding is stable for a minimum of 28 days in this accelerated AAA experimental model. We also have provided ample in vitro evidence that binding of PGG to arterial elastin provides outstanding resistance to proteolytic degeneration.17 Because natural elastin turnover is exceptionally low,34 we hypothesize that PGG may remain bound to vascular elastin for extended periods of time after application, sufficient to maintain resistance to enzymes and to deter AAA progression. However, more studies are required to validate this hypothesis. The mechanisms of aneurysmal dilatation in this animal model are not fully understood, but we hypothesize that progressive elastin degradation leads to loss of elastic recoil under normal blood pressure, which slowly leads to an increase in measured external diameter. This arterial expansion was accompanied by increased thickness of the media layer in the saline controls. Because these parameters were significantly reduced in PGG-treated aortas, we are tempted to speculate that direct interaction of PGG with elastin fibers preserves their integrity and helps to maintain the mechanical properties of aneurysmal aortas. More detailed studies are needed to better understand the mechanisms of AAA and the interaction of PGG with components of diseased aortas. Aneurysmal aorta consists of both intact and degraded elastin fibers. For potential clinical applications, we provide supportive evidence that PGG is capable of preventing degeneration of intact elastin fibers. This may account for the prevention of AAA formation in experiment 1 and the lack of AAA progression in experiment 2.
However, it is not known whether PGG can interact with partially degraded elastin fibers and stabilize them against further degeneration. Advanced stages of AAA also involve collagen degeneration, which in turn may contribute to fatal ruptures in terminal stages of AAA.35 Although polyphenolic tannin binding to collagen has been demonstrated before,36 it is not known at this point whether PGG also binds to aortic collagen in vivo and may possibly influence the outcome of developing AAA, especially with regard to preventing late AAA ruptures. These issues are currently under investigation in our laboratory. In our studies, an acute application of PGG to rat aortas did not directly interfere with model-related pathogenesis; thus, PGG-mediated inhibition of AAA development and progression possibly is due to direct stabilization of elastin rather than to inhibition of enzyme activities.

Conclusions

Maintaining the integrity of the aortic wall is vital for the prevention of AAA development. Acute localized periadventitial delivery of noncytotoxic concentrations of PGG inhibits elastin degeneration, attenuates aneurysmal diameter expansion, and hinders development of AAA in an established animal model. PGG binds strongly and specifically to arterial elastin, and in doing so, it preserves elastic laminar integrity and architecture despite the presence of high levels of MMPs derived from inflammatory cells. Approaches that target stabilization of the aortic extracellular matrix in aneurysm-prone arterial segments hold great potential for the development of safe and effective therapies for AAAs.

Sources of Funding

This work was supported in part by NIH grants HL61652 and HL08426 (to Dr Vyavahare).

Disclosures

None.

References

1. Anderson RN. Deaths: leading causes for 2000. Natl Vital Stat Rep. 2002;50:1-85.
2. Ashton HA, Buxton MJ, Day NE, Kim LG, Marteau TM, Scott RA, Thompson SG, Walker NM.
The Multicentre Aneurysm Screening Study (MASS) into the effect of abdominal aortic aneurysm screening on mortality in men: a randomised controlled trial. Lancet. 2002;360:1531-1539.
3. Isselbacher EM. Thoracic and abdominal aortic aneurysms. Circulation. 2005;111:816-828.
4. Dawson J, Choke E, Sayed S, Cockerill G, Loftus I, Thompson MM. Pharmacotherapy of abdominal aortic aneurysms. Curr Vasc Pharmacol. 2006;4:129-149.
5. SAAAVE Act becomes law. Available at: http://www.vdf.org/News/saaaveact.php. Accessed July 7, 2006.
6. Thompson RW. Basic science of abdominal aortic aneurysms: emerging therapeutic strategies for an unresolved clinical problem. Curr Opin Cardiol. 1996;11:504-518.
7. Huffman MD, Curci JA, Moore G, Kerns DB, Starcher BC, Thompson RW. Functional importance of connective tissue repair during the development of experimental abdominal aortic aneurysms. Surgery. 2000;128:429-438.
8. Campa JS, Greenhalgh RM, Powell JT. Elastin degradation in abdominal aortic aneurysms. Atherosclerosis. 1987;65:13-21.
9. Sinha S, Frishman WH. Matrix metalloproteinases and abdominal aortic aneurysms: a potential therapeutic target. J Clin Pharmacol. 1998;38:1077-1088.
10. Freestone T, Turner RJ, Coady A, Higman DJ, Greenhalgh RM, Powell JT. Inflammation and matrix metalloproteinases in the enlarging abdominal aortic aneurysm. Arterioscler Thromb Vasc Biol. 1995;15:1145-1151.
11. Gertz SD, Kurgan A, Eisenberg D. Aneurysm of the rabbit common carotid artery induced by periarterial application of calcium chloride in vivo. J Clin Invest. 1988;81:649-656.
12. Miralles M, Wester W, Sicard GA, Thompson R, Reilly JM. Indomethacin inhibits expansion of experimental aortic aneurysms via inhibition of the cox2 isoform of cyclooxygenase. J Vasc Surg. 1999;29:884-892.
13.
Baxter BT, Pearce WH, Waltke EA, Littooy FN, Hallett JW Jr, Kent KC, Upchurch GR Jr, Chaikof EL, Mills JL, Fleckten B, Longo GM, Lee JK, Thompson RW. Prolonged administration of doxycycline in patients with small asymptomatic abdominal aortic aneurysms: report of a prospective (phase II) multicenter study. J Vasc Surg. 2002;36:1-12.
14. Ramnath N, Creaven PJ. Matrix metalloproteinase inhibitors. Curr Oncol Rep. 2004;6:96-102.
15. Isenburg JC, Simionescu DT, Vyavahare NR. Elastin stabilization in cardiovascular implants: improved resistance to enzymatic degradation by treatment with tannic acid. Biomaterials. 2004;25:3293-3302.
16. Isenburg JC, Simionescu DT, Vyavahare NR. Tannic acid treatment enhances biostability and reduces calcification of glutaraldehyde fixed aortic wall. Biomaterials. 2005;26:1237-1245.
17. Isenburg JC, Karamchandani NV, Simionescu DT, Vyavahare NR. Structural requirements for stabilization of vascular elastin by polyphenolic tannins. Biomaterials. 2006;27:3645-3651.
18. Starcher B, Conrad M. A role for neutrophil elastase in the progression of solar elastosis. Connect Tissue Res. 1995;31:133-140.
19. Liu SQ, Fung YC. Influence of STZ-induced diabetes on zero-stress states of rat pulmonary and systemic arteries. Diabetes. 1992;41:136-146.
20. Rosenblatt M, Peluso J. Folin-Denis method for total phenols. J Assoc Offic Agr Chemists. 1941;24:170-181.
21. Bailey MT, Pillarisetti S, Xiao H, Vyavahare NR. Role of elastin in pathologic calcification of xenograft heart valves. J Biomed Mater Res A. 2003;66:93-102.
22. Vyavahare N, Jones PL, Tallapragada S, Levy RJ. Inhibition of matrix metalloproteinase activity attenuates tenascin-C production and calcification of implanted purified elastin in rats. Am J Pathol. 2000;157:885-893.
23. Simionescu N, Simionescu M. Galloylglucoses of low molecular weight as mordant in electron microscopy, I: procedure, and evidence for mordanting effect. J Cell Biol. 1976;70:608-621.
24.
Basalyga DM, Simionescu DT, Xiong W, Baxter BT, Starcher BC, Vyavahare NR. Elastin degradation and calcification in an abdominal aorta injury model: role of matrix metalloproteinases. Circulation. 2004;110:3480-3487.
25. Longo GM, Xiong W, Greiner TC, Zhao Y, Fiotti N, Baxter BT. Matrix metalloproteinases 2 and 9 work in concert to produce aortic aneurysms. J Clin Invest. 2002;110:625-632.
26. Longo GM, Buda SJ, Fiotta N, Xiong W, Griener T, Shapiro S, Baxter BT. MMP-12 has a role in abdominal aortic aneurysms in mice. Surgery. 2005;137:457-462.
27. Ikonomidis JS, Gibson WC, Butler JE, McClister DM, Sweterlitsch SE, Thompson RP, Mukherjee R, Spinale FG. Effects of deletion of the tissue inhibitor of matrix metalloproteinases-1 gene on the progression of murine thoracic aortic aneurysms. Circulation. 2004;110(suppl):II-268-II-273.
28. Mosorin M, Juvonen J, Biancari F, Satta J, Surcel HM, Leinonen M, Saikku P, Juvonen T. Use of doxycycline to decrease the growth rate of abdominal aortic aneurysms: a randomized, double-blind, placebo-controlled pilot study. J Vasc Surg. 2001;34:606-610.
29. Yoshimura K, Aoki H, Ikeda Y, Fujii K, Akiyama N, Furutani A, Hoshii Y, Tanaka N, Ricci R, Ishihara T, Esato K, Hamano K, Matsuzaki M. Regression of abdominal aortic aneurysm by inhibition of c-Jun N-terminal kinase. Nat Med. 2005;11:1330-1338.
30. Daugherty A, Cassis LA. Mouse models of abdominal aortic aneurysms. Arterioscler Thromb Vasc Biol. 2004;24:429-434.
31. Freestone T, Turner RJ, Higman DJ, Lever MJ, Powell JT. Influence of hypercholesterolemia and adventitial inflammation on the development of aortic aneurysm in rabbits. Arterioscler Thromb Vasc Biol. 1997;17:10-17.
32. Karapolat S, Unlu Y, Erkut B, Kocak H, Erdogan F. Influence of indomethacin in the rat aneurysm model. Ann Vasc Surg. 2006;20:369-375.
33. Ikonomidis JS, Gibson WC, Gardner J, Sweterlitsch S, Thompson RP, Mukherjee R, Spinale FG. A murine model of thoracic aortic aneurysms. J Surg Res.
2003;115:157-163.
34. Werb Z, Banda MJ, McKerrow JH, Sandhaus RA. Elastases and elastin degradation. J Invest Dermatol. 1982;79(suppl 1):154s-159s.
35. Dobrin PB, Mrkvicka R. Failure of elastin or collagen as possible critical connective tissue alterations underlying aneurysmal dilatation. Cardiovasc Surg. 1994;2:484-488.
36. Heijmen FH, du Pont JS, Middelkoop E, Kreis RW, Hoekstra MJ. Cross-linking of dermal sheep collagen with tannic acid. Biomaterials. 1997;18:749-754.

CLINICAL PERSPECTIVE

Aortic aneurysms (AAs) are degenerative diseases characterized by destruction of arterial architecture and subsequent dilatation that may eventually lead to fatal ruptures. Aneurysms grow over a period of years and pose great health risks as a result of the potential to rupture, which can be fatal in ≈80% of cases. AAs are a serious health concern for the aging population and are among the top 10 causes of death for patients ≥50 years of age. Currently, the sole treatment of AA is surgical replacement of the diseased artery or endovascular stent graft repair. However, reported postoperative survival rates can drop to only 50% at 10 years. Nonsurgical treatment of aneurysms does not currently exist; thus, a dire need exists for an evidence-based approach to stop this pathological process. The onset and progression of AAs are associated with elastin degradation by proteinases, which, in turn, are derived from activated vascular cells and infiltrating inflammatory cells. Using an aortic injury model that mimics human AA pathology, we show here that periarterial delivery of noncytotoxic concentrations of pentagalloyl glucose, a pharmacological agent capable of stabilizing elastin, significantly reduces AA development and controls diameter expansion by limiting tissue degeneration.
Pentagalloyl glucose delivery provides prospects for the development of clinically applicable therapies, as an adjunct or a stand-alone procedure, that could temper the development of a life-threatening pathology and thus affect thousands of patients.

----

RESEARCH Open Access

Is digital photography an accurate and precise method for measuring range of motion of the hip and knee?

Russell R. Russo1, Matthew B. Burn1, Sabir K. Ismaily2, Brayden J. Gerrie1, Shuyang Han2, Jerry Alexander2, Christopher Lenherr2, Philip C. Noble2, Joshua D. Harris1 and Patrick C. McCulloch1*

Abstract

Background: Accurate measurements of knee and hip motion are required for the management of musculoskeletal pathology. The purpose of this investigation was to compare three techniques for measuring motion at the hip and knee. The authors hypothesized that digital photography would be equivalent in accuracy and show higher precision compared to the other two techniques.

Methods: Using infrared motion capture analysis as the reference standard, hip flexion/abduction/internal rotation/external rotation and knee flexion/extension were measured using visual estimation, goniometry, and photography on 10 fresh-frozen cadavers. These measurements were performed by three physical therapists and three orthopaedic surgeons. Accuracy was defined by the difference from the reference standard, while precision was defined by the proportion of measurements within either 5° or 10°. Analysis of variance (ANOVA), t-tests, and chi-squared tests were used.
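The accuracy and precision definitions in the Methods reduce to two small computations: the signed error of each measurement against the motion-capture reference, and the proportion of errors falling within a tolerance band (5° or 10°). A minimal sketch of both metrics (the readings and function names are hypothetical illustrations, not the study's analysis code):

```python
def errors(measured, reference):
    """Signed per-measurement error (degrees) vs the motion-capture reference."""
    return [m - r for m, r in zip(measured, reference)]

def precision_within(measured, reference, tol_deg):
    """Proportion of measurements falling within +/- tol_deg of the reference."""
    errs = errors(measured, reference)
    return sum(abs(e) <= tol_deg for e in errs) / len(errs)

# Hypothetical knee-flexion readings (degrees) against a reference standard:
reference = [120, 95, 130, 110]
goniometer = [118, 99, 123, 112]
print(precision_within(goniometer, reference, 5))   # 0.75 (3 of 4 within 5 degrees)
print(precision_within(goniometer, reference, 10))  # 1.0
```

Accuracy, in the study's sense, is then a comparison of the mean errors between techniques (e.g. by ANOVA), while precision compares these within-tolerance proportions (e.g. by chi-squared test).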
Results: Although two statistically significant differences were found in measurement accuracy between the three techniques, neither of these differences met clinical significance (difference of 1.4° for hip abduction and 1.7° for the knee extension). Precision of measurements was significantly higher for digital photography than: (i) visual estimation for hip abduction and knee extension, and (ii) goniometry for knee extension only. Conclusions: There was no clinically significant difference in measurement accuracy between the three techniques for hip and knee motion. Digital photography only showed higher precision for two joint motions (hip abduction and knee extension). Overall digital photography shows equivalent accuracy and near-equivalent precision to visual estimation and goniometry. Keywords: Hip, Knee, Range of motion, Digital photography, Goniometry, Visual estimation Background When assessing hip and knee pathology, range of motion (ROM) is a commonly used clinical parameter utilized by medical professionals. Accurate measure- ments of ROM are important for diagnosis, monitoring progression or resolution of symptoms, clinical decision- making, surgical planning, assessing treatment response, for research, and to evaluate permanent disability or im- pairment (Lavernia et al. 2008; Lea and Gerhardt 1995; Mai et al. 2012). In addition, it allows the patient to appreciate their own progress during clinical visits and can be used as goals for rehabilitation (e.g. knee flexion needed to ascend and descend stairs) (Brosseau et al. 2001; Lavernia et al. 2008). Within orthopaedic surgery, accurate measurement of hip and knee ROM is critical for assessing the outcomes of surgery. The two most commonly used techniques for assessing range of motion are visual estimation and goniometry (Chevillotte et al. 2009; Ferriero et al. 2013; Gajdosik and Bohannon 1987; Lavernia et al. 2008; Murphy et al. 2013). 
Of these, goniometry is often believed to offer more accurate and reliable measurements than visual estimation (Brosseau et al. 2001; Chevillotte et al. 2009; Edwards et al. 2004; Ferriero et al. 2013; Gajdosik and Bohannon 1987; Herrero et al. 2011; Holm et al. 2000; Lavernia et al. 2008; Lea and Gerhardt 1995; Murphy et al. 2013; Roach et al. 2013; Watkins et al. 1991). However, it requires two hands for use (leaving neither hand free for limb stabilization) and more time than visual estimation (Nussbaumer et al. 2010). Many other less commonly utilized techniques have been studied within the literature (Charlton et al. 2015; Chevillotte et al. 2009; Herrero et al. 2011; Holm et al. 2000; Lea and Gerhardt 1995; Roach et al. 2013). The accuracy and reliability of any of these techniques have been shown to improve with repeated measurements, either by different investigators or the same investigator multiple times (Boone et al. 1978; Edwards et al. 2004; Watkins et al. 1991).

* Correspondence: PCMcculloch@houstonmethodist.org
1 Department of Orthopedics & Sports Medicine, Houston Methodist Hospital, 6445 Main Street, Outpatient Center, Suite 2500, Houston, TX 77030, USA
Full list of author information is available at the end of the article

Russo et al. Journal of Experimental Orthopaedics (2017) 4:29, DOI 10.1186/s40634-017-0103-7

© The Author(s). 2017 Open Access. This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Digital photography offers additional benefits as it allows for comparison between observations of the same measurement, different measurements separated by time, and allows for off-site measurements over long distances (such as for telemedicine or internet-based healthcare) (Naylor et al. 2011). Smartphone technology, which has become almost universally available, facilitates this technique (Charlton et al. 2015; Chevillotte et al. 2009; Herrero et al. 2011; Murphy et al. 2013; Naylor et al. 2011; Russell et al. 2003; Verhaegen et al. 2010). The purpose of this study was to compare the accuracy of ROM measurements of hip and knee motion using multiple techniques (visual estimation, goniometric measurement, and digital photographic measurement). The authors hypothesized that digital photography would be equivalent in accuracy and show higher precision compared to the other two techniques.

Methods

Using G*Power software (Universität Mannheim, Mannheim, Germany) and assuming a mean measurement error of 3° ±5° (effect size 0.6) between each of the measurement techniques (goniometry, digital photography, visual estimation), an a priori power analysis (β = 0.20, α = 0.05) predicted that we would require 45 measurements with each of the three techniques. This requirement would be met with 3 investigators each taking measurements on 16 lower extremities [8 cadavers] with each of the 3 techniques; however, it was decided to include 6 investigators from two different specialties (orthopaedic surgery and physical therapy) to broaden the scope and generalizability (Faul et al. 2009; Faul et al. 2007). After institutional review board (IRB) approval, ten fresh-frozen human cadavers were obtained without specifying race, gender, ethnicity, age, or cause of death. The only exclusion criteria were gross limb deformity or amputated limbs. All specimens were stored at −5 °C and thawed 24 h prior to testing.
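The a priori sample-size calculation above (effect size 0.6, two-tailed α = 0.05, power = 0.80) can be approximated without G*Power using the standard normal-approximation formula n = ((z_{1-α/2} + z_{power}) / d)². This is only a rough sketch: G*Power uses exact t-distributions, and the figure of 45 measurements quoted above depends on the specific test and design chosen in G*Power, which this simple approximation does not model.

```python
from math import ceil

def approx_n(effect_size: float) -> int:
    """Normal-approximation sample size for a two-tailed test at
    alpha = 0.05 with 80% power: n = ((z_0.975 + z_0.80) / d)^2.
    Quantiles are hardcoded to avoid a SciPy dependency."""
    z_half_alpha = 1.959964  # z_{0.975}
    z_power = 0.841621       # z_{0.80}
    return ceil(((z_half_alpha + z_power) / effect_size) ** 2)

# Effect size d = 0.6 (mean error 3 degrees, SD 5 degrees), as in the study.
print(approx_n(0.6))
```

The qualitative behaviour is the useful part: halving the detectable effect size roughly quadruples the number of measurements required.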
Ten cadavers were used (20 lower extremities, measured by six investigators using three different techniques, 120 measurements by each technique) for measurements in two different sessions (5 different cadavers were used in each session) separated by a two-month period. For each of the two sessions, the five cadavers used were not refrozen after initial thawing, and thus all measurements were obtained over a 3-day period. Three of the investigators were licensed physical therapists (PT) with greater than 6 months of clinical experience and three were board-certified fellowship-trained orthopaedic surgeons. The orthopedic surgeons included two sports medicine fellowship trained surgeons and one adult reconstructive fellowship trained surgeon. All investigators took measurements of six selected motions (hip flexion, hip abduction, hip internal rotation, hip external rotation, knee flexion, and knee extension) using three techniques (visual estimation, goniometric measurement, and digital photographic measurement) on each cadaveric specimen bilaterally (both lower limbs).

Cadaver & Motion Analysis Setup

Prior to beginning each measurement session, specific sites on each of the five cadavers (to be used for that session) were dissected down to bone bilaterally, where mounting plates were secured rigidly with screw fixation and cementation (using polymethyl methacrylate) to three sites. The three mounting sites, used bilaterally, included (1) the iliac crest, (2) the anterolateral aspect of the femoral midshaft, and (3) the anterior aspect of the tibial midshaft. Arrays of reflective markers (NDI, Waterloo, Canada – shown in Fig. 1) including passive reflective spheres were attached to each mounting site to track the three-dimensional (3D) spatial location of each of these bones during the measurement session. Prior studies have used radiographic (two-dimensional) measurements as their “gold standard” with which to compare other measurements (Chapleau et al. 2011).
Computed tomography (CT)-based motion analysis offers an additional advantage of being able to measure the joint angle in three dimensions and being able to account for rotation (e.g. measurement of elbow flexion with differing humeral rotation). To define the “gold standard” used for this study (motion capture analysis), the lower extremities of all 10 cadavers underwent computed tomographic (CT) scans with mounting sites and markers attached. This Digital Imaging and Communications in Medicine (DICOM) data was used to construct three-dimensional (3D) models of each joint to be measured with software from Materialise Mimics (Materialise, Leuven, Belgium). Each 3D model was imported into Rapidform (INUS Technology Inc., Seoul, Korea) to be used with the motion capture device in combination with a custom MATLAB program (The MathWorks Inc., Massachusetts, USA). This process allowed accurate joint angle calculations to be performed in real time during measurement sessions. Arrays of markers, but not the mounting plates, would be rearranged whenever a new joint's angle was to be measured. For example, when testing knee flexion and extension, the markers would be attached to the femur and tibia on one side of the body only. Markers would be removed from the contralateral lower extremity. This was done to aid the accuracy of motion analysis by avoiding confusion of the twelve motion analysis cameras (Motion Analysis, Santa Rosa, CA), which were set up in a semi-circle surrounding an operating room (OR) table holding the cadaver (Fig. 2). The position of the table and cameras were calibrated prior to beginning a measurement session and remained constant for all investigators’ measurements. Clear visualization of the arrays by at least two cameras simultaneously is the minimum requirement for accurate localization, but this study used a minimum of three cameras to guarantee accuracy (Furtado et al.
2013).

Measurement technique

During measurement sessions, each of the cadavers, one at a time, was positioned supine on an operating room (OR) table in the center of the twelve calibrated motion analysis cameras (Fig. 2). A single assistant ( ) would position the limb at the maximum joint motion and hold it in place for all measurements. First, the exact skeletal location (joint angle) would be calculated by computer-assisted infrared camera motion capture analysis (Furtado et al. 2013). This would establish the gold standard measure for comparison by this investigator of this joint motion to all three other techniques. Second, while the assistant held the limb, the investigator would stand three feet from the joint in question at a standardized position (depending on the joint and motion being measured) and make a visual estimation of the joint angle based on each measurer’s estimation of the underlying bone axis of each long bone (as demonstrated in Fig. 3). This distance was chosen as it has been utilized in prior digital photography studies and offered adequate visualization of the bone long axes for all joints (Bennett et al. 2009; Naylor et al. 2011). Third, the investigator would take a digital photograph of the joint angle using a Sony Alpha DSLR-A100 10.2 Megapixel digital camera (Sony Corporation, Tokyo, Japan), but without using a tripod. Finally, a standard plastic goniometer (Patterson Medical, Warrenville, Illinois, USA) was used to measure the joint angle without blinding of the investigator. Overall, 120 measurements were obtained for each joint motion using each of the three techniques. This was repeated for (A) hip flexion, (B) hip abduction, (C) hip internal rotation, (D) hip external rotation, (E) knee extension, and (F) knee flexion (see Fig. 3).

Fig. 1 Photograph demonstrating the arrays of reflective markers used for infrared motion capture analysis, which are fixed to the femur (right) and the tibia (left) using a combination of screw fixation and bone cement

Fig. 2 Room set up for the measurements. An operating room (OR) table was positioned in the center of twelve motion analysis cameras on tripods at different heights and angles. These cameras were pre-calibrated prior to each measurement session

Digital photographs were taken perpendicular to the axis of rotation. The camera was aimed lateral-to-medial (relative to the cadaver) at the hip joint (for hip flexion) and at the knee joint line (for knee extension and flexion). The camera was aimed anterior-to-posterior (relative to the cadaver) through the hip joint while the investigator stood on a ladder aiming down towards the floor (for hip abduction, internal rotation, and external rotation). These measurements were done bilaterally on each cadaver and repeated for all five cadavers during each of the two sessions (total of 10 cadavers measured bilaterally). Digital photographs of each joint were reviewed after the cadaver measurement session, where joint angles were measured using ImageJ digital measurement software (National Institutes of Health, Bethesda, Maryland) on a 20-in. liquid crystal display computer screen (Dell, Round Rock, TX). ImageJ is free, publicly available, Java-based image-processing software designed by the National Institutes of Health (NIH), which allows the angle between two straight lines to be measured on multiple image formats. Lines were drawn as demonstrated in Fig. 3 for each of the six motions described.

Statistical analysis

Motion capture analysis was used as the “gold standard” with which all measurements by the six investigators using three different techniques (i.e. visual estimation, digital photography, and goniometry) were compared.
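The ImageJ angle measurement described above reduces to a three-point computation: the joint center is the vertex, and one landmark is picked along each bone's long axis; the reported value is the angle between the two rays. A minimal sketch of that geometry (the pixel coordinates below are hypothetical, not taken from the study):

```python
from math import atan2, degrees

def joint_angle(vertex, p1, p2):
    """Angle in degrees (0-180) at `vertex` between the rays
    vertex->p1 and vertex->p2, i.e. the angle between two drawn
    line segments that share an endpoint."""
    a1 = atan2(p1[1] - vertex[1], p1[0] - vertex[0])
    a2 = atan2(p2[1] - vertex[1], p2[0] - vertex[0])
    ang = abs(degrees(a1 - a2)) % 360.0
    return 360.0 - ang if ang > 180.0 else ang

# Hypothetical pixel coordinates from a lateral knee photograph:
# hip marker, knee joint center (vertex), ankle marker.
print(joint_angle((500, 300), (100, 310), (820, 540)))
```

Note that a straight limb yields 180° with this convention; clinical flexion would be 180° minus this value, a choice that any photographic workflow has to fix in advance.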
For analysis, the measurements of all six investigators were combined. Accuracy was defined by the authors as the mean measurement error (defined as the absolute value of the difference between measurement and gold standard) and was compared between the three measurement techniques. These comparisons were made using analysis of variance (ANOVA) for significant differences between each measurement technique for each hip or knee motion individually. If significant differences were found by ANOVA, a Tukey post-hoc test was performed to identify subgroups with significant differences. ANOVA results are reported along with the degrees of freedom, F-statistic, and statistical significance. The precision of measurements was defined as the proportion of measurement errors less than the minimally clinically important difference (MCID) (Edwards et al. 2004; Gajdosik and Bohannon 1987). These proportions were compared by measurement technique using a chi-squared test (MedCalc, Ostend, Belgium). Statistical significance was defined by an α-value <0.05. The authors chose 10° as the minimal clinically important difference (MCID), or clinically significant difference, for all measurements with the exception of knee extension. Five degrees (5°) was chosen as the MCID for knee extension because less loss of motion would be tolerated clinically with flexion contractures at the knee joint (Edwards et al. 2004; Gajdosik and Bohannon 1987).

Fig. 3 Examples of the investigator’s view during visual estimation, photographing of the limb position (for subsequent angle measurement), and goniometric measurement. All measured positions are shown, including: a hip flexion, b hip abduction, c hip internal rotation, d hip external rotation, e knee extension, and f knee flexion. A stepladder was used when necessary to obtain a “bird’s eye” view of the joint (i.e. hip abduction, internal rotation, and external rotation)
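The two outcome definitions above, accuracy as mean absolute error against the motion-capture gold standard and precision as the proportion of errors within the MCID, are straightforward to compute. A sketch with hypothetical readings (not study data); treating the boundary inclusively (error equal to the MCID counts as "within") is an assumption here:

```python
def accuracy_and_precision(measured, gold, mcid=10.0):
    """Return (mean absolute error, proportion of errors <= mcid),
    mirroring the paper's definitions of accuracy and precision."""
    errors = [abs(m - g) for m, g in zip(measured, gold)]
    mae = sum(errors) / len(errors)
    prop = sum(e <= mcid for e in errors) / len(errors)
    return mae, prop

# Hypothetical goniometer readings vs. motion-capture gold standard (degrees).
measured = [92, 118, 35, 41, 5]
gold = [90, 121, 30, 55, 3]
mae, prop = accuracy_and_precision(measured, gold, mcid=10.0)
print(mae, prop)  # absolute errors are 2, 3, 5, 14, 2
```

For knee extension the same function would be called with mcid=5.0, per the stricter threshold chosen above.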
Results

Hip range of motion (flexion, abduction, internal rotation, external rotation)

There was no significant difference in measurement error between measurement techniques for hip flexion (F(2, 355) = 2.32; p = 0.100), hip internal rotation (F(2, 354) = 1.97; p = 0.140), or hip external rotation (F(2, 356) = 2.13; p = 0.121), shown in Table 1. There was a significant difference in measurement error for hip abduction (F(2, 357) = 4.18; p = 0.016). A Tukey post-hoc test for hip abduction revealed that digital photographic measurement (4.8° ±3.8) had a significantly lower measurement error than visual estimation (6.2° ±4.1, p = 0.015). When comparing the proportion of measurements with measurement errors within 10° (Table 2), the only significant difference identified was that hip abduction was more precisely measured with digital photography than visual estimation (93% vs. 83%, p = 0.019).

Knee ROM (flexion, extension)

There was a significant difference in measurement error for knee flexion (F(2, 356) = 3.17; p = 0.043) and for knee extension (F(2, 347) = 15.95; p < 0.001), shown in Table 3. However, a Tukey post-hoc test for knee flexion did not reveal any significant comparisons (p > 0.05). A Tukey post-hoc test for knee extension revealed that digital photographic measurement (3.5° ±2.3) had a significantly lower measurement error than visual estimation (4.7° ±2.5, p = 0.001) and goniometry (5.2° ±2.6, p = 0.001). When comparing the proportion of measurements with measurement errors within the defined clinically significant difference (Table 4), the only significant difference identified was that knee extension was more precisely measured with digital photography than with visual estimation (74% vs. 49%, p < 0.001) and with goniometry (74% vs. 50%, p < 0.001).
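The precision comparisons above (e.g. 74% vs. 49% of knee-extension errors within 5°) are two-proportion contrasts. The study used MedCalc for its chi-squared tests; the sketch below hand-rolls the Pearson statistic for a 2×2 table (1 degree of freedom, no continuity correction). The counts are hypothetical, chosen only to resemble the reported percentages:

```python
def chi_square_two_proportions(hits1, n1, hits2, n2):
    """Pearson chi-square statistic (1 df, no Yates correction) for a
    2x2 table comparing two proportions hits1/n1 vs. hits2/n2."""
    a, b = hits1, n1 - hits1   # group 1: within MCID / outside MCID
    c, d = hits2, n2 - hits2   # group 2: within MCID / outside MCID
    n = n1 + n2
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Hypothetical: 89 of 120 measurements within the MCID vs. 59 of 120.
print(round(chi_square_two_proportions(89, 120, 59, 120), 2))
```

Comparing the statistic against the chi-square critical value of 3.84 (1 df, α = 0.05) gives the significance decision; identical proportions yield a statistic of exactly zero.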
Discussion

The authors hypothesized that digital photography would be equivalent in accuracy and show higher precision compared to visual estimation and goniometric measurement when measuring motion at the hip and knee. Overall, there were only two statistically significant differences in measurement accuracy found, with digital photography showing higher accuracy than: (i) visual estimation for hip abduction, and (ii) visual estimation and goniometry for knee extension. Neither of these statistical differences met the authors’ definition of clinical significance, confirming the equivalent accuracy of digital photography (compared to goniometry and visual estimation). The maximum difference in measurement error between the three techniques was 1.4° for hip abduction and 1.7° for knee extension. Digital photography proved to have higher precision only for two motions (hip abduction & knee extension) compared to visual estimation and one motion (knee extension) compared to goniometry. Thus, overall digital photography shows equivalent accuracy and precision to visual estimation and goniometry, except for measurements of hip abduction & knee extension. Many studies look specifically at the accuracy and/or reliability of one technique or motion without comparing two or more techniques or motions (Ferriero et al. 2013; Krause et al. 2015; Naylor et al. 2011). Few studies have looked specifically at visual estimation of hip or knee motion (Edwards et al. 2004; Holm et al. 2000; Rachkidi et al. 2009). Edwards et al. found higher accuracy with goniometry compared to visual estimation, with 22% and 46% of measurements being within 5° of their gold standard (radiography) for knee flexion only (Edwards et al. 2004). Murphy et al. showed equivalent accuracy of digital photography and goniometry in measuring knee flexion and extension (Murphy et al. 2013). Some studies report an advantage to either digital photography or goniometry over visual estimation with increasing amounts of knee flexion (Ferriero et al. 2013).

Table 1 Accuracy (measurement error) by technique for hip range of motion

                       Visual estimation  Goniometry   Digital photography
                       (Mean ± SDa)       (Mean ± SD)  (Mean ± SD)
Hip Flexion            3.9° ±3.4          3.1° ±2.5    3.5° ±2.7
Hip Abduction          6.2° ±4.1          5.8° ±3.7    4.8° ±3.8
Hip Internal Rotation  7.3° ±5.7          6.8° ±5.1    5.9° ±4.8
Hip External Rotation  10.1° ±6.7         9.7° ±6.0    8.6° ±5.1

Accuracy of measurements (measurement error, in degrees) calculated as the absolute value of the difference between the measurement taken by the investigator using each technique (visual estimation, goniometry, digital photography) and the reference standard (motion capture analysis) for all four hip motions. aSD standard deviation

Table 2 Precision by technique for hip range of motion (proportions of measurement errors within 10° of motion capture analysis)

                       Visual estimation  Goniometry  Digital photography
Hip Flexion            91%                96%         96%
Hip Abduction          83%                88%         93%
Hip Internal Rotation  76%                75%         85%
Hip External Rotation  58%                53%         61%

Precision of measurements, or the proportion of measurement errors that were within 10° (defined as the minimally clinically significant difference by the authors) of the reference standard (motion capture analysis) for all four hip motions

Table 3 Accuracy (measurement error) by technique for knee range of motion

                Visual estimation  Goniometry   Digital photography
                (Mean ± SDa)       (Mean ± SD)  (Mean ± SD)
Knee Flexion    5.2° ±3.9          4.3° ±3.1    5.2° ±3.4
Knee Extension  5.2° ±2.6          4.7° ±2.5    3.5° ±2.3

Accuracy of measurements (measurement error, in degrees) calculated as the absolute value of the difference between the measurement taken by the investigator using each technique (visual estimation, goniometry, digital photography) and the reference standard (motion capture analysis) for both knee motions. aSD standard deviation
Visual estimation is the most common modality used in most surgical practices, due to its speed, ease of use, and lack of need for equipment (Chevillotte et al. 2009; Murphy et al. 2013). The next most common technique, and most commonly used technique among therapists, is goniometry, which is believed by some to offer a more reliable measurement (Ferriero et al. 2013; Gajdosik and Bohannon 1987; Lavernia et al. 2008; Murphy et al. 2013; Watkins et al. 1991). Our study contests that notion with clinically equivalent accuracy between the three techniques. However, digital photography still offered slightly improved precision for measuring hip abduction and knee extension. In addition, digital photography offers the added benefit of a permanent, savable, and printable record of the motion allowing comparison between observations of the same measurement, different measurements separated by time, and allows for off-site measurements over long distances (Bennett et al. 2009; Dunlevy et al. 2005; Ferriero et al. 2013). The ability to accurately measure motion at distance could help facilitate telemedicine or internet-based healthcare, which could alert the clinician regarding declines in function that would benefit from intervention. Often, especially in the hip, motion is included as part of clinical outcome scores (Holm et al. 2000). The ability to obtain digital measures of motion over a distance (by phone or internet) offers great promise for clinical research (Holm et al. 2000). Jenny et al. demonstrated high measurement accuracy at the knee using a smartphone digital camera measurement (Jenny et al. 2016). In this study, we have used a digital camera and secondarily measured the angle on a desktop computer. Although not utilized for this study, smartphone applications allow for identical techniques to be used without the need for transfer of the image to a desktop computer (Ferriero et al. 2013; Milani et al. 2014).
This may make digital photographic measurement a more clinically attractive alternative to visual estimation or goniometry (Ferriero et al. 2013; Milani et al. 2014). This study does have some limitations. First, the limb position used for each measurement by each investigator was not identical, so comparison of accuracy between measurements relies on the accuracy of the motion capture analysis. Prior studies have shown motion capture analysis to be highly accurate for joint motion measurements, making it ideal for use as a gold standard (Charlton et al. 2015; Furtado et al. 2013), and our study utilized arrays of reflective markers that were attached directly to the bones and secured with cement to decrease the possibility of loosening (Chung and Ng 2012). Hagio et al. used CT scans combined with infrared motion capture analysis (similar to this study) and showed excellent accuracy (within 5 degrees) for hip motion (Hagio et al. 2004). Second, motion capture analysis measures the angle formed by the two bones being measured, which may not represent the “clinical” angle at the joint made by the soft tissue (i.e. with the knee in full extension [or 0°], the bones may be in slight hyperextension relative to each other). However, this reference remained constant for all measurements by each group, allowing comparison between groups. Additionally, other authors have cited radiographic measurement as the “gold standard”, which suffers from the same issues (Lavernia et al. 2008). Third, for photographic measurements, we did not measure the distance or angle of the camera in relation to the joint being measured (i.e. no reflective markers were placed onto the camera itself, no use of a tripod or other apparatus). However, this lack of standardization corresponds to the method that would be used clinically, so it allows for better generalization of our results. Fourth, the skin was not marked to identify the optimal points of reference.
Instead, each investigator made their own judgment regarding the bony landmarks, in order to be more representative of the clinical utility, which is limited by the amount of body fat, muscle, and clothing obscuring landmarks (Naylor et al. 2011). Again, this will allow better generalization to clinical practice.

Table 4 Precision by technique for knee range of motion (proportions of measurement errors within 10° or 5° of motion capture analysis)

                      Visual  Goniometer  Photo
Knee Flexion (<10°)   92%     94%         90%
Knee Extension (<5°)  49%     50%         74%

Precision of measurements, or the proportion of measurement errors that were within 10° (for knee flexion) or 5° (for knee extension), defined as the minimally clinically significant difference by the authors, of the reference standard (motion capture analysis) for both knee motions

Fifth, our investigators included three fellowship-trained orthopaedic surgeons and three physical therapists with varied levels of experience. This may have had an effect on the measurement accuracy and reliability. Sixth, the clinical applicability of measurement errors is not static across a range of motion. An error of 5–10° at 100° knee flexion is less clinically significant than that same error at full extension (Ferriero et al. 2013). The definition of clinically significant changes in motion (such as minimal clinically important difference [MCID]; minimal detectable change [MDC]) or minimal acceptable motion (such as patient acceptable symptom state [PASS]) for joint range of motion is not well established within the literature. Some suggest 6° be used for the lower extremity, while others define clinical significance by a change greater than 10% of the motion arc (Blonna et al. 2012; Boone et al. 1978; Mehrholz et al. 2005; Roach et al. 2013; Wheatley-Smith et al. 2013). However, for certain joints, 10% seems excessive (i.e.
14° for knee extension) (Mehrholz et al. 2005; Roach et al. 2013; Wheatley-Smith et al. 2013). The authors chose 10° as the MCID, or clinically significant difference, for all measurements with the exception of knee extension.

Conclusions

There was no clinically significant difference in measurement accuracy between the three techniques for hip and knee motion. Digital photography only showed higher precision for two joint motions (hip abduction and knee extension). Overall digital photography shows equivalent accuracy and near-equivalent precision to visual estimation and goniometry.

Acknowledgements

Luis F. Pulido, M.D., Robert A. Jack, M.D., Derek T. Bernstein, M.D., Brad Hollas, DPT, Corbin Hedt, DPT, and Kyle Belski, DPT, for assisting with measurements. Lucas Bizzaro, Amarani Rucoba, Mladen Milovancevic, Sangwoo Kim, Michael Hogan, and Jonathan Gold for assisting with cadaver setup and motion analysis setup/analysis.

Funding

No outside sources of funding were utilized. All funding was internally from the Department of Orthopedics & Sports Medicine at Houston Methodist Hospital.

Authors’ contributions

All authors read and approved the final manuscript.

Competing interests

The authors declare that they have no competing interests with the subject of this paper. For full disclosure, all authors have listed their disclosures below: RRR, MBB, SKI, BJG, SH, JA, CL: None. PCN: Research support [CeramTech; DJ Orthopaedics; Microport; Smith & Nephew; Zimmer]; Board or committee member [International Society for Technology in Arthroplasty; Knee Society]; Stock or stock options [Joint View, LLC]; Editorial or governing board [Journal of Arthroplasty; Journal of Hip Preservation Surgery]; Publishing royalties, financial or material support [Springer]; IP Royalties [Stryker; Zimmer]; Paid consultant [Zimmer]; Other financial or material support [Musculoskeletal Transplant Foundation].
JDH: Board or committee member [AAOS; American Orthopaedic Society for Sports Medicine; Arthroscopy Association of North America]; Paid consultant [Applied Biologics; NIA Magellan; Smith & Nephew]; Editorial or governing board [Arthroscopy; Frontiers in Surgery]; Research support [DePuy, A Johnson & Johnson Company]; Paid presenter or speaker [Ossur; Smith & Nephew]; Publishing royalties, financial or material support [SLACK Incorporated]. PCM: Editorial or governing board [Journal of Knee Surgery; Orthobullets.com].

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Author details

1 Department of Orthopedics & Sports Medicine, Houston Methodist Hospital, 6445 Main Street, Outpatient Center, Suite 2500, Houston, TX 77030, USA.
2 Institute for Orthopaedic Research & Education (IORE), Houston, TX, USA.

Received: 16 May 2017 Accepted: 4 September 2017

References

Bennett D, Hanratty B, Thompson N, Beverland D (2009) Measurement of knee joint motion using digital imaging. Int Orthop 33(6):1627–1631
Blonna D, Zarkadas PC, Fitzsimmons JS, O'Driscoll SW (2012) Validation of a photography-based goniometry method for measuring joint range of motion. J Shoulder Elb Surg 21(1):29–35
Boone DC, Azen SP, Lin CM, Spence C, Baron C, Lee L (1978) Reliability of goniometric measurements. Phys Ther 58(11):1355–1360
Brosseau L, Balmer S, Tousignant M et al (2001) Intra- and intertester reliability and criterion validity of the parallelogram and universal goniometers for measuring maximum active knee flexion and extension of patients with knee restrictions. Arch Phys Med Rehabil 82(3):396–402
Chapleau J, Canet F, Petit Y, Laflamme GY, Rouleau DM (2011) Validity of goniometric elbow measurements: comparative study with a radiographic method. Clin Orthop Relat Res 469(11):3134–3140
Charlton PC, Mentiplay BF, Pua YH, Clark RA (2015) Reliability and concurrent validity of a smartphone, bubble inclinometer and motion analysis system for measurement of hip joint range of motion. J Sci Med Sport 18(3):262–267
Chevillotte CJ, Ali MH, Trousdale RT, Pagnano MW (2009) Variability in hip range of motion on clinical examination. J Arthroplast 24(5):693–697
Chung PY, Ng GY (2012) Comparison between an accelerometer and a three-dimensional motion analysis system for the detection of movement. Physiotherapy 98(3):256–259
Dunlevy C, Cooney M, Gormely J (2005) Procedural considerations for photographic-based joint angle measurements. Physiother Res Int 10(4):190–200
Edwards JZ, Greene KA, Davis RS, Kovacik MW, Noe DA, Askew MJ (2004) Measuring flexion in knee arthroplasty patients. J Arthroplast 19(3):369–372
Faul F, Erdfelder E, Buchner A, Lang AG (2009) Statistical power analyses using G*Power 3.1: tests for correlation and regression analyses. Behav Res Methods 41(4):1149–1160
Faul F, Erdfelder E, Lang AG, Buchner A (2007) G*Power 3: a flexible statistical power analysis program for the social, behavioral, and biomedical sciences. Behav Res Methods 39(2):175–191
Ferriero G, Vercelli S, Sartorio F et al (2013) Reliability of a smartphone-based goniometer for knee joint goniometry. Int J Rehabil Res 36(2):146–151
Furtado DA, Pereira AA, Andrade Ade O, Bellomo DP Jr, da Silva MR (2013) A specialized motion capture system for real-time analysis of mandibular movements using infrared cameras. Biomed Eng Online 12:17
Gajdosik RL, Bohannon RW (1987) Clinical measurement of range of motion. Review of goniometry emphasizing reliability and validity. Phys Ther 67(12):1867–1872
Hagio K, Sugano N, Nishii T et al (2004) A novel system of four-dimensional motion analysis after total hip arthroplasty. J Orthop Res 22(3):665–670
Herrero P, Carrera P, Garcia E, Gomez-Trullen EM, Olivan-Blazquez B (2011) Reliability of goniometric measurements in children with cerebral palsy: a comparative analysis of universal goniometer and electronic inclinometer. A pilot study. BMC Musculoskelet Disord 12:155
Holm I, Bolstad B, Lutken T, Ervik A, Rokkum M, Steen H (2000) Reliability of goniometric measurements and visual estimates of hip ROM in patients with osteoarthrosis. Physiother Res Int 5(4):241–248
Jenny JY, Bureggah A, Diesinger Y (2016) Measurement of the knee flexion angle with smartphone applications: which technology is better? Knee Surg Sports Traumatol Arthrosc 24(9):2874–2877
Krause DA, Boyd MS, Hager AN, Smoyer EC, Thompson AT, Hollman JH (2015) Reliability and accuracy of a goniometer mobile device application for video measurement of the functional movement screen deep squat test. Int J Sports Phys Ther 10(1):37–44
Lavernia C, D'Apuzzo M, Rossi MD, Lee D (2008) Accuracy of knee range of motion assessment after total knee arthroplasty. J Arthroplast 23(6 Suppl 1):85–91
Lea RD, Gerhardt JJ (1995) Range-of-motion measurements. J Bone Joint Surg Am 77(5):784–798
Mai KT, Verioti CA, Hardwick ME, Ezzet KA, Copp SN, Colwell CW Jr (2012) Measured flexion following total knee arthroplasty. Orthopedics 35(10):e1472–e1475
Mehrholz J, Major Y, Meissner D, Sandi-Gahun S, Koch R, Pohl M (2005) The influence of contractures and variation in measurement stretching velocity on the reliability of the modified Ashworth scale in patients with severe brain injury. Clin Rehabil 19(1):63–72
Milani P, Coccetta CA, Rabini A, Sciarra T, Massazza G, Ferriero G (2014) Mobile smartphone applications for body position measurement in rehabilitation: a review of goniometric tools. PM R 6(11):1038–1043
Murphy M, Hides J, Russell T (2013) A digital photographic technique for knee range of motion measurement: performance in a total knee arthroplasty clinical population. Open Journal of Orthopedics 3:4–9
Naylor JM, Ko V, Adie S et al (2011) Validity and reliability of using photography for measuring knee range of motion: a methodological study. BMC Musculoskelet Disord 12:77
Nussbaumer S, Leunig M, Glatthorn JF, Stauffacher S, Gerber H, Maffiuletti NA (2010) Validity and test-retest reliability of manual goniometers for measuring passive hip range of motion in femoroacetabular impingement patients. BMC Musculoskelet Disord 11:194
Rachkidi R, Ghanem I, Kalouche I, El Hage S, Dagher F, Kharrat K (2009) Is visual estimation of passive range of motion in the pediatric lower limb valid and reliable? BMC Musculoskelet Disord 10:126
Roach S, San Juan JG, Suprak DN, Lyda M (2013) Concurrent validity of digital inclinometer and universal goniometer in assessing passive hip mobility in healthy subjects. Int J Sports Phys Ther 8(5):680–688
Russell TG, Jull GA, Wootton R (2003) Can the internet be used as a medium to evaluate knee angle? Man Ther 8(4):242–246
Verhaegen F, Ganseman Y, Arnout N, Vandenneucker H, Bellemans J (2010) Are clinical photographs appropriate to determine the maximal range of motion of the knee? Acta Orthop Belg 76(6):794–798
Watkins MA, Riddle DL, Lamb RL, Personius WJ (1991) Reliability of goniometric measurements and visual estimates of knee range of motion obtained in a clinical setting. Phys Ther 71(2):90–96, discussion 96–97
Wheatley-Smith L, McGuinness S, Colin Wilson F, Scott G, McCann J, Caldwell S (2013) Intensive physiotherapy for vegetative and minimally conscious state patients: a retrospective audit and analysis of therapy intervention. Disabil Rehabil 35(12):1006–1014
Journal of Experimental Orthopaedics (2017) 4:29 Page 8 of 8 Abstract Background Methods Results Conclusions Background Methods Cadaver & Motion Analysis Setup Measurement technique Statistical analysis Results Hip range of motion (flexion, abduction, internal rotation, external rotation) Knee ROM (flexion, extension) Discussion Conclusions Funding Authors’ contributions Competing interests Publisher’s Note Author details References work_4php427kmvhvdcijy24l4nseaa ---- wp-p1m-39.ebi.ac.uk Params is empty 404 sys_1000 exception wp-p1m-39.ebi.ac.uk no 218549054 Params is empty 218549054 exception Params is empty 2021/04/06-02:16:27 if (typeof jQuery === "undefined") document.write('[script type="text/javascript" src="/corehtml/pmc/jig/1.14.8/js/jig.min.js"][/script]'.replace(/\[/g,String.fromCharCode(60)).replace(/\]/g,String.fromCharCode(62))); // // // window.name="mainwindow"; .pmc-wm {background:transparent repeat-y top left;background-image:url(/corehtml/pmc/pmcgifs/wm-nobrand.png);background-size: auto, contain} .print-view{display:block} Page not available Reason: The web page address (URL) that you used may be incorrect. Message ID: 218549054 (wp-p1m-39.ebi.ac.uk) Time: 2021/04/06 02:16:27 If you need further help, please send an email to PMC. Include the information from the box above in your message. Otherwise, click on one of the following links to continue using PMC: Search the complete PMC archive. Browse the contents of a specific journal in PMC. Find a specific article by its citation (journal, date, volume, first page, author or article title). http://europepmc.org/abstract/MED/ work_4pians4wmfbzrm4zcnygnl4hze ---- OTT-65961-randomized-double-blind-trial-of-prophylactic-topical-evozac © 2014 Wang et al. This work is published by Dove Medical Press Limited, and licensed under Creative Commons Attribution – Non Commercial (unported, v3.0) License. The full terms of the License are available at http://creativecommons.org/licenses/by-nc/3.0/. 
Non-commercial uses of the work are permitted without any further permission from Dove Medical Press Limited, provided the work is properly attributed. Permissions beyond the scope of the License are administered by Dove Medical Press Limited. Information on how to request permission may be found at: http://www.dovepress.com/permissions.php

OncoTargets and Therapy 2014:7 1261–1266
Original Research http://dx.doi.org/10.2147/OTT.S65961

Randomized double-blind trial of prophylactic topical Evozac® Calming Skin Spray for gefitinib-associated acne-like eruption

Yalan Wang*, Yunpeng Yang*, Jinxia Xu, Juan Yu, Xia Liu, Ruizhen Gao, Li Zhang

State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Sun Yat-sen University Cancer Center, Guangzhou, Guangdong, People’s Republic of China

*These authors contributed equally to this work

Correspondence: Ruizhen Gao, Sun Yat-sen University Cancer Center, 651 DongFeng Road East, Guangzhou, Guangdong, People’s Republic of China 510060; Tel +86 20 8734 3368; Fax +86 20 8734 3352; email gaorzh@sysucc.org.cn. Li Zhang, Sun Yat-sen University Cancer Center, 651 DongFeng Road East, Guangzhou, Guangdong, People’s Republic of China 510060; Tel +86 20 8734 3458; Fax +86 20 8734 3565; email zhangli63@hotmail.com

Background: Gefitinib is a first-generation epidermal growth factor receptor tyrosine-kinase inhibitor. More than half of patients receiving gefitinib develop acne-like eruption. Evozac® Calming Skin Spray (Evaux Laboratoires, Évaux-les-Bains, France) is made of Évaux thermal spring water and commonly used for the treatment of dermatological toxicities caused by anti-epidermal growth factor receptor therapy.
The aim of the study reported here was to test the effect of Evozac Calming Skin Spray on the prevention of rash in patients receiving gefitinib. Methods: Non-small-cell lung cancer patients preparing to initiate gefitinib therapy were randomly assigned to apply Evozac Calming Skin Spray or physiological saline to the face three times a day. The treatment was started on the same day as initiation of gefitinib therapy and continued for 4 weeks. Results: A total of 51 patients in the Evozac Calming Skin Spray group and 50 patients in the physiological saline group completed the study per the protocol. The number of facial lesions peaked at the end of 3 weeks in both groups. There were significantly fewer lesions in the Evozac Calming Skin Spray group than in the physiological saline group at the end of 1 week (0.25 versus [vs] 1.10, P=0.031) and 3 weeks (6.67 vs 12.26, P=0.022). Patients from the Evozac Calming Skin Spray group also developed fewer facial lesions at the end of 2 weeks and 4 weeks; however, the difference was not statistically significant. At the end of 4 weeks, fewer patients from the Evozac Calming Skin Spray group developed rash of grade 2 or greater severity (17.6% vs 36.0%, P=0.037) or experienced rash-associated symptoms (13.7% vs 34.0%, P=0.017). Conclusion: Prophylactic treatment with Evozac Calming Skin Spray appears to decrease the number of facial lesions at the peak of the rash, reduce the incidence of grade 2 or more severe rash, and relieve rash-associated symptoms.
Keywords: dermatological toxicities, facial rash lesions, rash severity, rash-associated symptoms

Introduction

Epidermal growth factor receptor (EGFR) has been well established as an important target for the treatment of several tumors, including lung, head and neck, colorectal, breast, and pancreatic cancer.1 EGFR can be inhibited by monoclonal antibodies generated against the ligand-binding domain of the receptor or small-molecule tyrosine kinase inhibitors (TKIs) that compete with the intracellular adenosine triphosphate binding domain of the receptor.2 Dermatologic toxic effects are the major side effects associated with EGFR inhibition. Common dermatological side effects include acneiform skin rash, pruritus, mucositis, xerosis, fissures, hyperpigmentation, nail changes, hair loss, and changes in colour.3–5 Incidences of these side effects range from 50% to 90%, and side effects of grade 3 or greater severity occur in 3% to 20% of patients receiving EGFR inhibition.6–13 Although a systematic review including 8,998 cancer patients receiving an EGFR inhibitor concluded that there were no reported deaths from dermatologic toxicities,14 the skin side effects could lead to dose modification or discontinuation of the treatment, and adversely affect patients’ quality of life.15–18

Gefitinib is a first-generation reversible EGFR-TKI widely used in the treatment of non-small-cell lung cancer (NSCLC) patients with EGFR mutations. More than half of patients receiving gefitinib develop dermatological side effects. Evozac® Calming Skin Spray (Evaux Laboratories, Évaux-les-Bains, France) is made of Évaux thermal spring water, which is naturally rich in lithium. It is commonly used to manage dermatological toxicities caused by chemotherapy, radiation therapy, and anti-EGFR therapy in several countries. It has been reported that Evozac Calming Skin Spray can relieve the dermatological toxicities of patients under anti-EGFR therapy.19 However, to the best of our knowledge, there has been no formal clinical trial designed to examine the role of Evozac Calming Skin Spray in managing skin side effects induced by EGFR inhibitors. Thus, the present trial was conducted to test the effect of Evozac Calming Skin Spray in the prevention of rash in patients receiving gefitinib (NCT01528488).
Patients and methods

The study was designed as a single-center, randomized, double-blind, placebo-controlled trial, and approved by the institutional review board of Sun Yat-sen University Cancer Center. Informed consent was obtained from each participant.

Patient eligibility

Patients who met the following criteria were eligible to enroll in the study: ≥18 years old, histologically confirmed diagnosis of NSCLC, preparing to initiate gefitinib treatment, and normal hepatic and renal function. Study exclusion criteria were: Eastern Cooperative Oncology Group score ≥3; pregnancy; use of therapies within 4 weeks prior to enrollment that may induce similar skin reaction, such as cetuximab or sorafenib; use of anti-inflammatory or antibiotic drugs for other conditions; or any rash at the time of study registration.

Treatment

Eligible patients were randomly assigned to apply Evozac Calming Skin Spray or physiological saline. The topical treatment was started on the same day as initiation of gefitinib therapy and continued for 4 weeks. Patients were instructed to apply the Evozac Calming Skin Spray or physiological saline to the face three times a day, in the morning, at noon, and at bedtime. Patients were asked to record use of the study treatment, as well as that of gefitinib, in a study diary.

Evaluation

At baseline, a clinical history was obtained for all participants and a physical examination performed. Patients’ Eastern Cooperative Oncology Group performance score within 1 week of trial registration was also assessed. A blood draw for assessment of serum creatinine and total bilirubin levels was obtained. Finally, patients also underwent a complete skin examination by the participating oncologists, as well as standardized digital photography of the face. The total number of patient face lesions was calculated at the end of Weeks 1, 2, 3, and 4.
At the end of 4 weeks, each patient’s treating oncologist was to perform an evaluation, which included taking a history and performing a physical examination; an assessment of patient performance status; an assessment of rash-associated symptoms (itching, dry skin, pain, and irritation); and an assessment of rash severity, per the National Cancer Institute’s Common Terminology Criteria for Adverse Events (version 3.0). In addition, digital photography utilizing the same standard poses as at baseline was also performed at Week 4. Further, because some preliminary studies indicated that the plasma concentration of gefitinib might be associated with the severity of the skin side effects, the plasma concentration of gefitinib was tested at Week 4. Plasma samples (2 mL) of the participants were collected on Day 28 and frozen at −80°C until required for analysis. The steady-state trough concentration was analyzed using a validated high-performance liquid chromatographic method with tandem mass spectrometry.

Statistical analysis

The primary endpoint was the difference in total number of face lesions between Evozac Calming Skin Spray- and physiological saline-treated patients at completion of the study period (Week 4). Secondary endpoints were the differences in total number of face lesions between Evozac Calming Skin Spray- and physiological saline-treated patients at the other follow-up intervals (Weeks 1–3), the difference in rash severity evaluated according to the National Cancer Institute’s Common Terminology Criteria for Adverse Events (v 3.0) between the two arms at Week 4, and the difference in rash-associated symptoms between the two arms at Week 4.
Powered by TCPDF (www.tcpdf.org) 1 / 1 www.dovepress.com www.dovepress.com www.dovepress.com OncoTargets and Therapy 2014:7 submit your manuscript | www.dovepress.com Dovepress Dovepress 1263 Prophylactic evozac® Calming Skin Spray for gefitinib-associated rash A sample size of 48 patients per group provided a 90% probability of detecting a lesion count difference of four between the two study arms and of thereby rejecting the null hypothesis of equal proportions with a P-value of 0.05 as a two-sided test. Assuming the dropout rate of ,20%, it was determined that 59 patients should be enrolled in each arm. SPSS for Windows software (v 19.0; IBM Corporation, Armonk, NY, USA) was used for all data analysis. The normality of quantitative variables was analyzed using the Shapiro–Wilk test. Quantitative variables according with normal distribution were analyzed by independent-sample t-test; Quantitative variables departing from the normal distribution were analyzed using the Mann–Whitney U test. Pearson’s chi-squared test and Fisher’s exact test were used to test the difference in the distribution of categorical vari- ables when appropriate. All significance levels reported refer to two-sided tests. A P-value of ,0.05 was considered significant. Results Between December 2011 and July 2013, 118 NSCLC patients were randomly assigned to the Evozac Calming Skin Spray group (n=59) or physiological saline group (n=59) on the same day of initiation of gefitinib treatment. Details of patient attrition during the study are shown in Figure 1. In total, 51 patients in the Evozac Calming Skin Spray group and 50 patients in the physiological saline group completed the study per protocol; the baseline characteristics of these patients are listed in Table 1. There was no significant differ- ence in the baseline characteristics between the two arms. As shown in Figure 2, the number of lesions increased rapidly over the first 3 weeks of the study then began to decrease. 
Figure 1 Study enrollment, randomization, and attrition data. Of the 118 patients recruited and consented, 59 were randomized to Evozac® Calming Skin Spray* three times a day and 59 to physiological saline three times a day. In the spray arm, 51 patients completed the study per protocol and eight were excluded from analysis (five lost to follow-up, three withdrew consent); in the saline arm, 50 patients completed the study per protocol and nine were excluded from analysis (four lost to follow-up, five withdrew consent). Note: *Evaux Laboratories, Évaux-les-Bains, France.

Table 2 lists the total number of patient face lesions at the end of Weeks 1, 2, 3, and 4. At the end of Week 1 and Week 3, the total number of facial lesions in the Evozac Calming Skin Spray group was significantly lower than that in the physiological saline group. Patients in the Evozac Calming Skin Spray group also developed fewer facial lesions than patients in the physiological saline group at the end of Week 2 and Week 4; however, the difference was not statistically significant.

With regard to rash severity, in the Evozac Calming Skin Spray group, 21 patients were diagnosed with grade 1 rash, eight with grade 2 rash, and one with grade 3 rash at the end of 4 weeks, while, in the physiological saline group, 18 patients were diagnosed with grade 1 rash, 16 with grade 2 rash, and two with grade 3 rash. In total, rash of grade 2 or greater severity occurred in 17.6% (9/51) of the Evozac Calming Skin Spray-treated patients and in 36.0% (18/50) of the physiological saline-exposed patients, and the difference reached statistical significance (P=0.037).

In addition, an assessment of rash-associated symptoms (itch, dry skin, pain, and irritation) was performed at the end of Week 4. In the Evozac Calming Skin Spray-treated group and the physiological saline-exposed group, 13.7% (7/51) and 34.0% (17/50) of patients experienced one or more rash-associated symptoms, respectively. This difference was statistically significant (P=0.017).

The steady-state trough concentration of gefitinib was available for 43 patients in the Evozac Calming Skin Spray group and 35 patients in the physiological saline group. The concentration was comparable between the Evozac Calming Skin Spray group (mean 172.4, median 159.4, range 47.8–433.0 ng/mL) and the physiological saline group (mean 170.2, median 145.1, range 51.8–391.8 ng/mL) (P=0.533).

Discussion

To the best of our knowledge, the trial reported here is the first clinical study to test the effectiveness of Evozac Calming Skin Spray for the management of dermatological toxicities caused by gefitinib. Designed as a randomized, double-blind, placebo-controlled trial, this study sought as its primary endpoint to determine whether Evozac Calming Skin Spray could reduce the number of facial lesions at the end of 4 weeks. Evozac Calming Skin Spray did not appear to decrease the number of facial lesions compared with placebo at the end of 4 weeks. However, although the primary endpoint was not reached, the results of the study have generated some useful findings. Evozac Calming Skin Spray did reduce the total number of facial lesions at the end of Weeks 1 and 3.
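The categorical comparisons above (for example, grade 2 or greater rash in 9/51 spray patients versus 18/50 saline patients) can be checked with Pearson’s chi-squared test on a 2×2 table. The sketch below is an illustration rather than the authors’ SPSS procedure, and it assumes the uncorrected Pearson statistic; with one degree of freedom the p-value reduces to erfc(sqrt(chi2/2)):

```python
import math

def chi2_2x2(a, b, c, d):
    """Uncorrected Pearson chi-squared test for the 2x2 table
    [[a, b], [c, d]].  Returns (chi2, two-sided p); with df = 1,
    p = erfc(sqrt(chi2 / 2))."""
    n = a + b + c + d
    chi2 = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    p = math.erfc(math.sqrt(chi2 / 2.0))
    return chi2, p

# Grade >=2 rash: 9 of 51 spray patients vs 18 of 50 saline patients.
chi2_rash, p_rash = chi2_2x2(9, 51 - 9, 18, 50 - 18)
# Rash-associated symptoms: 7 of 51 vs 17 of 50.
chi2_sym, p_sym = chi2_2x2(7, 51 - 7, 17, 50 - 17)
```

Run on the counts reported in the paper, this uncorrected statistic lands close to the reported P=0.037 and P=0.017; a continuity-corrected or exact test would give slightly different values.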
Table 2 Total number of patient facial lesions

Week  Evozac® Calming Skin Spray,** mean (range)  Saline, mean (range)  P-value*
1     0.25 (0–4)                                  1.10 (0–12)           0.031
2     2.96 (0–21)                                 5.34 (0–38)           0.088
3     6.67 (0–59)                                 12.26 (0–80)          0.022
4     6.18 (0–63)                                 8.02 (0–59)           0.058

Notes: *Based on Mann–Whitney U test of total number of lesions at each study time point; **Evaux Laboratories, Évaux-les-Bains, France.

Table 1 Baseline characteristics of the patients

Characteristic                 Evozac® Calming Skin Spray* arm (n=51), N (%)  Physiological saline arm (n=50), N (%)  P-value
Sex                                                                                                                   0.619
  Male                         27 (52.9)                                      24 (48.0)
  Female                       24 (47.0)                                      26 (52.0)
Age, years (mean ± standard deviation)  54.3±10.9                             57.7±9.6                                0.101
Stage                                                                                                                 0.678
  IIIB                         4 (7.8)                                        2 (4.0)
  IV                           47 (92.2)                                      48 (96.0)
ECOG PS                                                                                                               0.649
  0                            5 (9.8)                                        8 (16.0)
  1                            35 (68.6)                                      32 (64.0)
  2                            11 (21.6)                                      10 (20.0)
EGFR mutation                                                                                                         0.605
  Exon 19                      23 (45.1)                                      17 (34.0)
  Exon 21                      21 (41.2)                                      22 (44.0)
  Wild-type                    6 (11.8)                                       9 (18.0)
  Other                        1 (1.9)                                        2 (4.0)
Histological type                                                                                                     0.565
  Adeno                        42 (82.4)                                      36 (72.0)
  Adeno-squamous               7 (13.7)                                       9 (18.0)
  Squamous                     1 (2.0)                                        3 (6.0)
  Other                        1 (2.0)                                        2 (4.0)
Treatment                                                                                                             0.224
  First line                   30 (58.8)                                      31 (62.0)
  Maintenance                  7 (13.7)                                       2 (4.0)
  Second line                  5 (9.8)                                        3 (6.0)
  Third or later line          9 (17.6)                                       14 (28.0)
Smoking history                                                                                                       0.916
  Never smoker                 28 (54.9)                                      26 (52.0)
  Ex-smoker                    19 (37.2)                                      19 (38.0)
  Current smoker               4 (7.8)                                        5 (10.0)
Allergy history                                                                                                       0.617
  Yes                          1 (2.0)                                        2 (4.0)
  No                           50 (98.0)                                      48 (96.0)
Skin-disease history                                                                                                  0.715
  Yes                          5 (9.8)                                        3 (6.0)
  No                           46 (90.2)                                      47 (94.0)

Note: *Evaux Laboratories, Évaux-les-Bains, France.
Abbreviations: ECOG PS, Eastern Cooperative Oncology Group performance score; EGFR, epidermal growth factor receptor.

Figure 2 Mean number of facial lesions in the Evozac® Calming Skin Spray (Evaux Laboratories, Évaux-les-Bains, France) group and the physiological saline group at each of the four study time points.

Considering that the lesion counts peaked at the end of 3 weeks, Evozac Calming Skin Spray seemed to decrease the number of facial lesions at the peak of the rash. In addition, at the end of 4 weeks, a decrease in the incidence of grade 2 or more severe rashes in patients assigned to the Evozac Calming Skin Spray arm was noted. Further, fewer patients from the Evozac Calming Skin Spray arm suffered from rash-associated symptoms (itch, dry skin, pain, and irritation) than patients treated with placebo. In view of these points, patients could benefit from treatment with Evozac Calming Skin Spray.

The underlying mechanism responsible for why Evozac Calming Skin Spray can help manage the dermatological toxicities associated with gefitinib remains unclear. One reasonable explanation is that Evozac Calming Skin Spray is rich in lithium (2.20 mg/L). The pathogenesis of the EGFR-TKI-induced rash involves abnormalities in the follicular epithelium together with inflammation.
Lithium may have anti-inflammatory effects on keratinocytes by increasing expression of interleukin 10 and decreasing expression of Toll-like receptors 2 and 4.20 Topical agents containing lithium have been widely used for the treatment of inflammatory dermatitis, especially seborrheic dermatitis, and significantly improve patient symptoms.21–24 In addition, activation of the neurokinin-1 receptor by substance P is associated with pruritus and other symptoms of EGFR-TKI-induced rash.25 Lithium could inhibit the effect stimulated by substance P26 and relieve the symptoms of rash.

Our study has some limitations. Since only patients receiving gefitinib were enrolled in the study, the effect of Evozac Calming Skin Spray on dermatological toxicities associated with other EGFR-TKIs or anti-EGFR antibodies is uncertain. Considering that the pathogenesis of skin toxicities during use of anti-EGFR antibodies or TKIs is thought to be similar, Evozac Calming Skin Spray might also be effective for these. However, further trials should be undertaken to validate the efficacy of Evozac Calming Skin Spray for the treatment of dermatological toxicities caused by other EGFR inhibition therapies. In addition, although assessment of rash-associated symptoms was performed at the end of 4 weeks, patient quality of life was not systematically evaluated in the study. Finally, the number of patients enrolled in the study was relatively small, and large-scale trials are needed to confirm the findings in the future.

Conclusion

Prophylactic treatment with Evozac Calming Skin Spray appears to decrease the number of facial lesions at the peak of the rash, reduce the incidence of grade 2 or more severe rash, and relieve rash-associated symptoms. Prophylactic Evozac Calming Skin Spray treatment with gefitinib initiation is reasonable.

Disclosure

The study was funded by Evaux Laboratories. The authors report no other conflicts of interest in this work.

References

1.
Balagula Y, Garbe C, Myskowski P, et al. Clinical presentation and management of dermatological toxicities of epidermal growth factor receptor inhibitors. Int J Dermatol. 2011;50(2):129–146.
2. Arteaga CL. The epidermal growth factor receptor: from mutant oncogene in nonhuman cancers to therapeutic target in human neoplasia. J Clin Oncol. 2001;19(Suppl 18):32S–40S.
3. Agero AL, Dusza SW, Benvenuto-Andrade C, Busam KJ, Myskowski P, Halpern AC. Dermatologic side effects associated with the epidermal growth factor receptor inhibitors. J Am Acad Dermatol. 2006;55(4):657–670.
4. Segaert S, Van Cutsem E. Clinical signs, pathophysiology and management of skin toxicity during therapy with epidermal growth factor receptor inhibitors. Ann Oncol. 2005;16(9):1425–1433.
5. Lacouture ME. Mechanisms of cutaneous toxicities to EGFR inhibitors. Nat Rev Cancer. 2006;6(10):803–812.
6. Potthoff K, Hofheinz R, Hassel JC, et al. Interdisciplinary management of EGFR-inhibitor-induced skin reactions: a German expert opinion. Ann Oncol. 2011;22(3):524–535.
7. Cunningham D, Humblet Y, Siena S, et al. Cetuximab monotherapy and cetuximab plus irinotecan in irinotecan-refractory metastatic colorectal cancer. N Engl J Med. 2004;351(4):337–345.
8. Saltz LB, Meropol NJ, Loehrer PJ Sr, Needle MN, Kopit J, Mayer RJ. Phase II trial of cetuximab in patients with refractory colorectal cancer that expresses the epidermal growth factor receptor. J Clin Oncol. 2004;22(7):1201–1208.
9. Van Cutsem E, Peeters M, Siena S, et al. Open-label phase III trial of panitumumab plus best supportive care compared with best supportive care alone in patients with chemotherapy-refractory metastatic colorectal cancer. J Clin Oncol. 2007;25(13):1658–1664.
10. Soulieres D, Senzer NN, Vokes EE, Hidalgo M, Agarwala SS, Siu LL. Multicenter phase II study of erlotinib, an oral epidermal growth factor receptor tyrosine kinase inhibitor, in patients with recurrent or metastatic squamous cell cancer of the head and neck.
J Clin Oncol. 2004;22(1):77–85.
11. Kris MG, Natale RB, Herbst RS, et al. Efficacy of gefitinib, an inhibitor of the epidermal growth factor receptor tyrosine kinase, in symptomatic patients with non-small cell lung cancer: a randomized trial. JAMA. 2003;290(16):2149–2158.
12. Mok TS, Wu YL, Thongprasert S, et al. Gefitinib or carboplatin-paclitaxel in pulmonary adenocarcinoma. N Engl J Med. 2009;361(10):947–957.
13. Han JY, Park K, Kim SW, et al. First-SIGNAL: first-line single-agent iressa versus gemcitabine and cisplatin trial in never-smokers with adenocarcinoma of the lung. J Clin Oncol. 2012;30(10):1122–1128.
14. Jatoi A, Nguyen PL. Do patients die from rashes from epidermal growth factor receptor inhibitors? A systematic review to help counsel patients about holding therapy. Oncologist. 2008;13(11):1201–1204.
15. Robert C, Soria JC, Spatz A, et al. Cutaneous side-effects of kinase inhibitors and blocking antibodies. Lancet Oncol. 2005;6(7):491–500.
16. Lacouture ME. The growing importance of skin toxicity in EGFR inhibitor therapy. Oncology (Williston Park). 2009;23(2):194–196.
17. Osio A, Mateurs C, Soria JC, et al. Cutaneous side-effects in patients on long-term treatment with epidermal growth factor receptor inhibitors. Br J Dermatol. 2009;161(3):515–521.
18. Joshi SS, Ortiz S, Witherspoon JN, et al. Effects of epidermal growth factor receptor inhibitor-induced dermatologic toxicities on quality of life. Cancer. 2010;116(16):3916–3923.
19. Chanez JF. The neurogenic component of cutaneous toxicities induced by chemotherapy – new solutions. Eur Oncol. 2010;6(1):28–30.
20. Ballanger F, Tenaud I, Volteau C, Khammari A, Dréno B. Anti-inflammatory effects of lithium gluconate on keratinocytes: a possible explanation for efficiency in seborrhoeic dermatitis. Arch Dermatol Res. 2008;300(5):215–223.
21. Dreno B, Chosidow O, Revuz J, Moyse D; Study Investigator Group. Lithium gluconate 8% vs ketoconazole 2% in the treatment of seborrhoeic dermatitis: a multicentre, randomized study. Br J Dermatol. 2003;148(6):1230–1236.
22. Langtry JA, Rowland Payne CM, Staughton RC, Stewart JC, Horrobin DF. Topical lithium succinate ointment (Efalith) in the treatment of AIDS-related seborrhoeic dermatitis. Clin Exp Dermatol. 1997;22(5):216–219.
23. Dreno B, Moyse D. Lithium gluconate in the treatment of seborrhoeic dermatitis: a multicenter, randomised, double-blind study versus placebo. Eur J Dermatol. 2002;12(6):549–552.
24. Cuelenaere C, De Bersaques J, Kint A. Use of topical lithium succinate in the treatment of seborrhoeic dermatitis. Dermatology.
1992;184:194–197.
25. Santini D, Vincenzi B, Guida FM, et al. Aprepitant for management of severe pruritus related to biological cancer treatments: a pilot study. Lancet Oncol. 2012;13(10):1020–1024.
26. Boisnic S, Branchet MC, Chanez JF. Evaluation of the inhibition of human sebocytes proliferation stimulated by substance P and corticotropin-releasing hormone by mineral constituents in Evaux thermal spring water. Nouv Dermatol. 2004;23:569–575.

work_4po3zpj3jzabfkfdwedwrr2wwm ---- ASIST AM05

Communication Regimes: A conceptual framework for examining IT and social change in organizations

Eric T. Meyer
School of Library and Information Science, Indiana University, Bloomington, IN. 791 Union Drive, Indianapolis, IN 46202. etmeyer@indiana.edu

This paper defines and extends Kling’s concept of a communication regime by identifying the concept’s origins and offering a definition that will allow further research using this framework. The terminology used here originates in political science; in translating these concepts for information science, however, much of the original meaning can be maintained and fruitfully applied. The paper outlines the definition and illustrates it using examples from photojournalism as a communication regime undergoing change.
A communication regime is: 1) a loosely coupled social network in which the communication and the work system are highly coupled; 2) a system with a set of implicit or explicit principles, norms, rules, and decision-making procedures around which actors’ expectations converge; 3) a system in which the types of communication are tightly coupled to the production system in which they are embedded; 4) a system with institutions which help to support and to regulate the regime; and 5) a system within which there are conflicts over control, over who enforces standards, over who bears the costs of change and who reaps the benefits of change. This example suggests other areas where a communication regime framework may help us understand the relationship between IT and social change.

Introduction

One goal of social informatics research, as identified by the late Rob Kling, is to formulate additional ways of understanding information technology’s relationship to social change. This paper will suggest that one possible organizing concept, or conceptual framework, for helping to understand IT and social change is that of communication regimes. The paper will define the concept of communication regimes and apply it to a specific information technology—digital photography—used in a particular setting—photojournalism. Table 1 outlines the definition of a communication regime. Each component of this definition is discussed below, following a brief history of the concept.

Table 1: Communication regime defined

Communication regime literature

Communication regimes were introduced to information science only recently by Kling et al. (Kling, Spector, & Fortuna, 2004), who relied on Hilgartner’s (1995) introduction of the concept as it applied to scientific communication. Discussing the changes that occurred as E-biomed was transformed into PubMed Central, Kling et al.
argue that various aspects of the biomedical science journal publication communication regime, including “those regarding gate-keeping, the business model, speed of information sharing, mobilization of authors, and the communication infrastructure,” were fundamentally altered. “Examining the transformation of E-biomed to PubMed Central from a ‘communication regime’ viewpoint, we see that significant changes to the biomedical science journal communication regime existed in the original proposal” (Kling et al., 2004:140). Kling et al. also argue that their case study illustrates that the transformative effects did not spring autonomously from the technology (in this case, the internet), but were shaped by various groups seeking to serve their own interests. Hilgartner, likewise, saw the transformative effects of biomolecular databases on the communication regimes of biomolecular journal publication. “Clearly, public biomolecular databases have become much more than simply computerized versions of print-based publications: they represent new forms of scientific interaction based on novel and rapidly evolving communication regimes” (Hilgartner, 1995:258). Hilgartner is careful to point out that in his conceptualization, there is not a singular communication regime representing biomolecular publication. Instead, he identifies a variety of related and interconnected communication regimes, including services that abstract from journals and the process of direct submission to journals, which he considers to be niches within a “broader ecology of biomolecular knowledge” which can support a variety of communication regimes. While Kling and Hilgartner both use the concept of communication regimes to understand scientific communication, this research proposes applying the concept to other areas in general and to digital photography in particular. The case for using the concept in this instance is described below.
First, however, it is instructive to look at how the concept of a regime developed, and what elements of regimes may be useful to information science. Kling’s desire to bring the concept of a communication regime into information science was based at least partly on his familiarity with Hilgartner’s use of the phrase.1 Hilgartner, in turn, developed communication regimes “as a sort of grounded, or even rough-and-ready, concept for bringing into focus how patterns of control, power, institutional re-engineering, and inter- and intra-actor relations were being reshaped in both the ‘small’ and the ‘large’ changes underway [in science communication]” (Hilgartner, 2004:1). Both Kling and Hilgartner were using an existing concept, that of regimes, and moving it into a communication- and information-specific context. Of course, the concept is clearly related to Foucault’s treatment of “regimes.” Foucault rejected ‘truth’ as an absolute concept, preferring to discuss less “what happened” than “how were people brought to think what happened.” He likewise discussed the non-absolute nature of power, which Foucault understood as being dispersed through a network of relationships which make up society and based in discourse. ‘Truth’ must be understood as a system of ordered procedures for the production, regulation, distribution, circulation, and operation of statements. ‘Truth’ is linked in a circular relation with systems of power which produce and sustain it, and to the effects of power which it induces and which extends it. A ‘regime’ of truth. (Foucault, 1984:74) Just as Foucault understood truth and power to be both non-absolute and related to each other through social networks, I am suggesting that this point of view (common among anthropologists, for instance 2) will help illuminate our understanding of communication within organizations.
The concept of regime itself, of course, is most frequently used in the popular political realm when discussing the regimes of various political leaders 3, but can also mean, more generally “the set of conditions under which a system occurs or is maintained” (OED Online, 1989). It is this more general concept that has been used predominantly in academic political science discourse. Lord discusses how the concept of a regime has developed in the political science literature: Regimes are classically defined in International Relations theory as the voluntary convergence of actors on a shared set of norms, meanings, expectations and procedures for communicating, co-ordinating and acting. Self-enforcement, the internationalization of conventions and low level of institutionalization are thus key elements that distinguish regimes from alternative forms of political cohesion. (Lord, 1999:3) This definition, while intended for the analysis of international political organizations, is general enough to potentially be applicable to other types of organizations. This is even clearer in some of the seminal work in international relations on regime theory 4. While International Regime Theory was first introduced in 1975 in a special edition of International Organization (Gale, 1998), the most widely accepted definition of an international regime comes from Krasner: Regimes can be defined as sets of implicit or explicit principles, norms, rules, and decision-making procedures around which actors’ expectations converge…Principles are beliefs of fact, causation, and rectitude. Norms are standards for behavior defined in terms of rights and obligations. Rules are specific prescriptions or proscriptions for action. Decision-making procedures are prevailing practices for making and implementing collective choice. 
(1982:186) Regimes, in this conceptualization, are comprised of the “underlying principles of order and meaning that shape the manner of their formation and transformation” (Ruggie, 1982:380). Ruggie argues that these regimes are embedded in a larger social order. By embedded, Ruggie is drawing on Polanyi’s argument that in pre-industrial societies, economic behavior was a function of, and contained within, social behavior, and not a separate activity.5 One criticism of regime theory is that it emphasizes static descriptions of systems, dealing predominantly with the status quo (Strange, 1982). This criticism should be kept in mind when translating this concept to communication regimes. If indeed we are interested in examining change in communication regimes due to the influence of technology, in this case digital photography, we must be careful not to imply that the previous state of the communication regime was a static and unchanging set of principles, norms, rules, and decision-making procedures. Economic, cultural, social, and organizational changes will have happened previously, and changes both large and small will be occurring independently of technological innovation even at the same time as technology-influenced change is occurring. Kling et al. (2004) and Hilgartner (1995), however, as discussed above, specifically choose to use the concept of communication regimes to illuminate a period of change, and they demonstrate for us the viability of using the concept to aid in understanding changing, not static, regimes.
Also, more recent international relations applications of regime theory are specifically targeted at understanding social change: Students of regime theory, interested in employing the regime concept within a critical theoretical framework to reveal the political and economic struggles among state and social forces over a regime’s normative content, procedures and compliance mechanisms, will find much fascinating material in the recent literature on global civil society. It is evident that global social change organizations (GSCOs) are engaged in an ongoing struggle to restructure existing international regimes in the interests of peace, human rights, improvements in the status of women, environmental protection, forest conservation and sustainable trade. (Gale, 1998:279) Also, since the basis of regime theory is in analyzing international relations and the behavior of governments and other international organizations, it is not possible to apply all of the theory’s elements directly to smaller organizations in non-governmental settings. But as the preceding quote makes clear, it may be useful to draw on when looking at social change. Modifying this specific formulation to one more useful for understanding information and communication technologies (ICTs) and social change in communication-intensive organizations 6 will be of benefit not only to this research, but also to others researching similar domains in information science.

Communication regime definition and discussion

At the beginning of this section, Table 1 offers a definition of a communication regime. Let us now look at each element of this definition in turn and discuss briefly how each might manifest in (for simplicity’s sake) one particular communication regime, photojournalism.

1. A communication regime is a loosely coupled social network in which the communication and the work system are highly coupled.
Professional photojournalists and their news editors are part of a communication regime. The members of this regime are part of a shared social network, as are most people in workplaces, but in addition, the nature of their work is highly coupled to the communication of visual information. In the case of photojournalism, the social network of photojournalists is quite loose – photographers and journalists assigned to presidential campaigns, for instance, travel with candidates for months at a time and develop loose social associations (Columbia Journalism Review Editors, 2004; Crouse, 1974), and photojournalists have a loose social association with the other people within their news organization (Fahmy & Smith, 2003). However, communication is central to the work activity of photojournalists, tightly coupling their activity as photographers to their behavior within the work system (Russial & Wanta, 1998).

2. A communication regime is a system with a set of implicit or explicit principles, norms, rules, and decision-making procedures around which actors’ expectations converge.

This element is borrowed directly from Krasner’s (1982) definition outlined above. For photojournalists, their principles (beliefs of fact, causation, and rectitude) include the notion that different types of photography are inherently subject to different standards: “The categorization of photo types – spot news, feature, illustration – creates a distinct continuum that can predict when newspaper editors are more willing to allow the digital manipulation of a photograph. Newspaper editors appear to discriminate between hard news and soft news, and this distinction influences their tolerance toward digital manipulation” (Reaves, 1995:712-713). The issue of digital manipulation as a reflection of a group struggling to define its principles has been one of the primary areas of research for those studying the shift to digital photography (Russial & Wanta, 1998).
Norms (standards for behavior in terms of rights and obligations) are reflected partially in hiring practices: while “the shift from chemical to digital processing has led to a relative lack of concern among photo editors about the need for chemical darkroom skills…new technical skills, such as the use of digital cameras and the web, are growing in importance…” (Russial & Wanta, 1998:593). Rules (specific prescriptions or proscriptions for action) include the codes of ethics for journalists discussed in element four below. Bissell discusses decision-making procedures (prevailing practices for making and implementing collective choice): while “personal opinion was a part of decision-making [in selecting photographs to run]…other influences on news content were evident. According to the photo editor, the newspaper never rejected photographs from local photographers [regardless of quality]…In this sense the publisher dictated photographic content” (Bissell, 2000:89).

3. A communication regime is a system in which the types of communication are tightly coupled to the production system in which they are embedded.

The practices of creating, selecting, manipulating, and publishing the photographs are part of the broader production system of the news outlet, which may be newspapers, magazines, or websites. Even something mundane like whether a photograph will be reproduced in black and white or in color is tightly coupled to the production system, and more subtle choices, such as how many elements can be included in a photograph based on its eventual production size and resolution, are part of the communication regime.
“The practice of newspaper journalism historically has entailed some level of production responsibilities for news workers…In some current job categories, such as…photographer, news workers have a greater production role than others in the newsroom, in part because of their closer tie to the actual manufacture of the newspaper as a product…” (Russial, 2000:69).

4. A communication regime is a system with institutions which help to support and to regulate the regime.

The institutions that support and regulate the photojournalism regime include the news organizations, the professional associations for journalists, photojournalists and editors, and the public for whom the news is created. Some of the clearest examples of the reinforcement of group norms by professional organizations can be seen in the various codes of ethics adopted by these organizations. The codes of ethics of the American Society of Media Photographers (1992), the National Press Photographers Association (1991; 2003), and the Society of Professional Journalists (1996) all clearly and specifically say that it is wrong to alter the content of photographs in any way, either in the darkroom or digitally, except in the case of non-news (feature) photographs, and even then the alteration should be clearly disclosed. These clear statements help support the public trust in the communication regime of photojournalism.7

5. A communication regime is a system within which there are conflicts over control, over who enforces standards, over who bears the costs of change and who reaps the benefits of change.

When change occurs, it is nearly inconceivable that conflicts will not arise. A number of questions can be asked to begin to understand these conflicts in a system changing from traditional to digital photographic methods. Are existing photographers used, or does the person assigned to taking photographs change? What training and re-training, if any, is required?
What new business processes are going to be instituted to deal with new flows of information, in this case photographs? What will happen to the people who used to be responsible for getting rolls of film, processing them, selecting images from proof sheets, enlarging them, and retouching them? If a photojournalist is in a location distant from the paper, as foreign correspondents are, do processes for transmitting photographs change? Will previous gatekeepers (lab managers, photo editors) be bypassed by reporters sending digital images via computer network directly to editors? Russial (2000) reports that 66% of photo editors (n=214) surveyed felt that the workload of the photo department was “much heavier” or “somewhat heavier” once digital imaging was adopted by a newspaper. In addition, Russial argues that the perceived increase in workload is not dependent on the length of time a newsroom has been using digital imaging, indicating that it may represent a permanent shift in work responsibilities instead of a temporary period of learning new technology followed by a return to more traditional work patterns. Other findings include factor-analysis results suggesting that photographers feel a loss of control over their images, while desk editors experience a gain in control. Russial’s study, which is highly relevant to this research, is discussed in greater detail below in section 5.3. Fahmy & Smith (2003), on the other hand, argue that the ability to delete photographs on location affords photographers greater control over their images as they decide what to keep or delete. The author is currently conducting research using this conceptual framework to examine professional communication regimes (e.g., photojournalists, police forensic photographers, and medical personnel) as opposed to more informal types of photographic communication (e.g., by family photographers, artists, or photobloggers), although the latter may be studied in future extensions of this work.
The reason for limiting the research to professional regimes is twofold. First, the definition of communication regimes explained above is more directly applicable in the formalized settings of professional work than in less formal uses of photography. Second, professionals using digital photography as part of their work have both intensive and extensive involvement with photography as part of their professional communication. Less formal communication regimes, on the other hand, are often made up of people who spend less of their time engaged in digital photography (such as hobbyists) and/or are less dependent on photography for their personal income and for their prestige within the regime.

Conclusion

How applicable is this communication regime framework to areas of interest to information scientists other than digital photography? Particularly when considering the broad topic of the relationship between information technology and social change, a number of areas that may generate potentially fruitful research can be identified. Scholarly communication is an obvious area due to the concept’s origins in the study of scholarly communication. Other areas in journalism, including the shift to digital editorial layout that has intensified since the 1980s and the creation of web editions of news beginning in the mid-1990s, offer possibilities. In the entertainment arena, the tension between movie makers and film distributors on the one hand and theaters on the other over the issue of digital delivery and projection of first-run films is a technological change affecting their communication regime. Other entertainment areas include music recording and the industry’s relationship to musicians and music downloaders, and tensions in the radio communication regime that includes the FCC, broadcast radio, and satellite radio.
In the legal arena, digital courtrooms and court document management systems are changing how judges, clerks, lawyers, and others involved in courtroom procedure communicate in fundamental ways. Digital libraries have changed work rules and communication paths for librarians and their patrons. These are just a few of the areas in which existing communication regimes are undergoing changes as new technologies are influencing or altering how production systems, norms, social relationships, and power structures operate.

Notes

1. Although Kling only has one published reference to this concept, he and the author engaged in extensive discussions on this concept in the months before his death in 2003. Much of the definition developed in this paper arose from these conversations.

2. See Boyer (2003) for a discussion of the ubiquity of Foucault’s concepts among contemporary anthropologists.

3. A recent example widely covered in the news was the frequent discussion of regime change in regard to the Bush administration’s policy toward Iraq in the 2002-2003 run-up to the Iraq war. A Lexis search for “regime iraq” for the first six months of 2003, for instance, turns up 632 news items referring to regimes. This also points to one of the difficulties with the popular use of the word regime, which has come more often to be applied to governments which Western nations, and the United States in particular, consider politically undesirable.

4. Habermas (1996) has also discussed regimes in ways that are primarily outside the scope of this paper. Habermas’ argument is that regimes regulate power and that regulations are a way of reconciling differences between facts and norms, thus addressing both social situation and aspirations. The extent to which there is “agreement between words and deeds may be the yardstick for a regime's legitimacy” (Habermas, 1996:150). For the purposes of this research, Habermas’ work has limited applicability because it tends to focus on macro settings.
However, it will be useful to keep in mind the notion of legitimacy, and to attempt to look for evidence of legitimate regulation in the day-to-day practices of organizations.

5. Polanyi also argued that even with the onset of a separate “economy” in industrial societies, there was still not a detachment between the social and the economic, just a reversal of the relative importance of each: now the social relations became embedded within the economic system as it assumed primacy (Block, 2001). This idea that the social and the economic are tightly coupled has clear ties to social informatics.

6. It is important to note that communication regimes as conceptualized here concern organizational communication at both internal and external levels of analysis. The external aspect of communication regimes is that the organizations discussed are communication-centric organizations: organizations that have a primary purpose of communicating information for external consumption (e.g., news organizations, scholarly publications, etc.). The internal aspect includes the intra-organizational structures, norms, and so forth that may be invisible to outside consumers of information, but nevertheless influence the forms that external communications eventually take.

7. This is an example of the day-to-day legitimate regulation of the regime that helps to unify facts (e.g., photoshopping pictures is easy and can make more compelling pictures) and norms (e.g., photoshopping news images is wrong because it may reduce public trust in photojournalism), as discussed by Habermas (1996).

Acknowledgements

Special thanks to Rob Kling, who provided the initial inspiration for this research, and to Howard Rosenbaum, whose guidance made this paper possible.

References

American Society of Media Photographers. (1992). ASMP members code for ethical, professional business dealings to benefit their clients and the industry. Retrieved Sept.
26, 2003, from http://www.asmp.org/culture/code.shtml
Bissell, K. L. (2000). A return to 'Mr. Gates': Photography and objectivity. Newspaper Research Journal, 21(3), 81-93.
Block, F. (2001). Introduction. In Polanyi, The Great Transformation: The political and economic origins of our time. Boston: Beacon Press.
Boyer, D. (2003). The Medium of Foucault in Anthropology. The minnesota review: a journal of committed writing, 58, 265-272.
Columbia Journalism Review Editors. (2004). The Boys on the Broken Bus. Columbia Journalism Review, 43(3), 8.
Crouse, T. (1974). The Boys on the Bus (1st ed.). New York: Random House.
Fahmy, S., & Smith, C. Z. (2003). Photographers Note Digital's Advantages, Disadvantages. Newspaper Research Journal, 24(2), 82-95.
Foucault, M. (1984). The Foucault Reader (P. Rabinow, Ed.). New York: Pantheon Books.
Gale, F. (1998). Cave 'Cave! Hic dragones': A neo-Gramscian deconstruction and reconstruction of international regime theory. Review of International Political Economy, 5(2), 252-283.
Habermas, J. (1996). Between Facts and Norms: Contributions to a Discourse Theory of Law and Democracy. Cambridge, MA: MIT Press.
Hilgartner, S. (1995). Biomolecular Databases: New Communication Regimes for Biology? Science Communication, 17(2), 240-263.
Hilgartner, S. (2004). Personal communication.
Kling, R., Spector, L., & Fortuna, J. (2004). The Real Stakes of Virtual Publishing: The Transformation of E-Biomed Into PubMed Central. Journal of the American Society for Information Science & Technology, 55(2), 127-148.
Krasner, S. D. (1982). Structural causes and regime consequences: regimes as intervening variables. International Organization, 36(2), 185-205.
Lord, C. (1999). Party Groups in the European Parliament: Rethinking the Role of Transnational Parties in the Democratization of the European Union. Paper presented at the European Community Studies Association.
National Press Photographers Association. (1991).
Digital Manipulation Code of Ethics: NPPA Statement of Principle. Retrieved September 29, 2004, from http://www.nppa.org/professional_development/business_practices/digitalethics.html
National Press Photographers Association. (2003). Code of Ethics. Retrieved Sept. 26, 2003, from http://www.nppa.org/ethics/default.htm
OED Online. (1989). Oxford English Dictionary. Retrieved Sept. 7, 2004.
Reaves, S. (1995). The Vulnerable Image: Categories of Photos as Predictor of Digital Manipulation. Journalism & Mass Communication Quarterly, 72(3), 706-715.
Ruggie, J. G. (1982). International regimes, transactions and change: embedded liberalism in the postwar economic order. International Organization, 36(2), 379-415.
Russial, J. (2000). How digital imaging changes the work of photojournalists. Newspaper Research Journal, 21(2), 67-83.
Russial, J., & Wanta, W. (1998). Digital imaging skills and the hiring and training of photojournalists. Journalism & Mass Communication Quarterly, 75(3), 593-605.
Society of Professional Journalists. (1996). SPJ Code of Ethics. Retrieved September 29, 2004, from http://www.spj.org/ethics_code.asp
Strange, S. (1982). Cave! hic dragones: a critique of regime analysis. International Organization, 36(2), 479-496.

work_4qc4qibienea7kqv7lnzfj4gcm ---- 10.11 interior 2,4-D 13 aboriginal people 451 Acacia 217; angustissima var.
hirta 314; confusa 211; koa 176, 211, 226, 251, 358, mangium 211 acacia, prairie 314 accessions 328 Acer grandidentatum 394 achene 40 Achnatherum hymenoides 330 acid detergent fiber 172; lignin 172 Acoraceae 451 acorns 143, 144, 199, 220; storage 324 Acorus americanus 451 adaptation 139, 155; local 352; traits 140, 308 ADF – see acid detergent fiber Adiantum 1 ADL – see acid detergent lignin afforestation 46, 435 AFLP – see amplified fragment length polymorphism Agoseris glauca 125 agroforestry 305 airplane 343 Alaska 101 Alberta, central 406 alder, green 253; thinleaf 78 alfalfa 304 Alismataceae 238 allee effect 376 Allium acuminatum 212 Alnus crispa 253 alpine 114, 133 Amblyolepis 160 Ambrosia dumosa 146; pumila 229 AMF – see mycorrhizae, vesicular-arbuscular (VAM) Amorpha canescens 270 Amorpha herbacea var. crenulata 202 Ampelaster carolinianus 223 amplified fragment length polymorphism 154, 200, 221 Anacardiaceae 48, 50, 362 Anaphalis margaritacea 196 Andropogon gerardii 59, 153, 389 animals 166 Anthonomus 390 Apocynaceae – see Asclepiadaceae apomixis 152 Appalachians, Central 149; Southern 454 applecactus, Caribbean 164 application, large-scale 409 Araceae 168 Araliaceae 70, 72 arbuscular mycorrhizae – see mycorrhizae, vesicular-arbuscular Arbutus menziesii 130 Arctostaphylos 239; franciscana 410; montana ssp. montana 410 Arecaceae 75, 398, 393 Argyroxiphium kauense 89 arid 219, 243, 316 Arisaema triphyllum 168 Aristida beyrichiana 138 Aristolochiaceae 453, 454, 455 Arizona 146, 197, 354 arrowhead 41; broadleaf 238 art 207 Artemisia 140, 141; cana ssp. cana 55; tridentata 372; tridentata ssp. 
wyomingensis 412 Asarum 254, 255; canadense 124, 453 Asclepiadaceae 404, 433 Asclepias 404; 433 ash, black 102; green 46, 371 aspen, big tooth 107; quaking 107, 108 asselrue 15 Aster carolinianus 223 Asteraceae 8, 63, 89, 123, 125, 129, 141, 196, 206, 233, 291, 319, 342, 372, 376, 412, 425, 429, 444, 445 Astragalus bibullatus 54; filipes 273 Athyrium 1 Atriplex canescens 322; polycarpa 146; torreyi 322 auger 87, 121 augmentation 133; population 282 Auwahi 92 auxin 52, 73, 169, 414 awns 320 azalea, mountain 169; orange 169 BA – see benzylaminopurine Baboquivari Mountain 355 Bacillus subtilis 147 Balduina angustifolia 440 Balsamorhiza sagittata 206 balsamroot, arrow-leaved 206 bats 439 bay, Carolina 178; loblolly 34; swamp 178 bean, wild 172 bear root 302 beargrass 289 bedformer 323 bees, bumble 266, 420; ground-nesting 266; honey 266; wood-nesting 266 benzylaminopurine 131 bermudagrass 116 biodiversity 30 bioengineering 32; 310 biological control 267 biomass 59; yield 198 biosolids 223 bioswale 283 birch, water 78 bitterbrush, antelope 209 Black Belt Prairie 422 blackbrush 377, 396 Blackfeet 18 blender 123; modified 96 BLM – see Bureau of Land Management blueberry, velvet-leaf 399 bluegrass, Nevada 269; Texas 383 bluestem 9; big 13, 389; creeping 132; little 13, 76, 361 bog pools 397 bog, sphagnum 62 boiled 409 book reviews 67, 68, 81, 82, 97, 98, 114, 115, 134, 135, 149, 150, 166, 175, 189, 205, 214, 215, 237, 245, 256, 265, 287, 298, 312, 355, 356, 386, 387, 402, 403, 413, 419, 420, 421, 430, 431, 432, 441, 442, 456 Boraginaceae 258 borer, emerald ash 371 boroughs 435 Bouteloua curtipendula 59, 76; curtipendula var. caespitosa 452; hirsuta 337; repens 311; rigidiseta 338 Brahea 75 Brassicaceae 216 breadroot 263 breeding 61
NATIVE PLANTS | 15 | 3 | FALL 2014 273
This index is constructed from author-provided key words and key words in titles.
Scientific names were correct when published (according to cited nomenclatures) but have not been updated in this index to reflect current taxonomy. Numbers correspond to article index. Bold entries indicate refereed research articles.

KEY WORD AND SPECIES INDEX

bristlegrass, Catarina blend 339 British Columbia 67 brome 126; Kalm’s 76 bromes, North American mountain 154 Bromus 126; carinatus 94, 113, 154, 195, 308, 352; kalmii 76; orcuttianus 308, 352; sitchensis 195; tectorum 125 Brooklyn Botanic Garden 170 browse 209, 395; protection 209, 225 bryophyte 397 buckwheat, sulphur-flower 196 budburst phenology 183 buds, axillary 131 buffaloberry, russet 99; silver 247 bugbane, Carolina 15 bulb 36, 228; collection 212 bulltongue 41 bulrush 41 bunchgrass 297; wheatgrasses 406 bundleflower, rayado 116; velvet 116 Bureau of Land Management 100, 329 burnet 101 bush-clover, tall 314 buttercup 15 butterfly 404; migration 433; monarch 404; monarch waystations 433; rare 423; regal fritillary 448 C3 grasses – see grass C4 grasses – see grass Cactaceae 17 Caesar Kleberg Wildlife Research Institute 446 Calamovilfa longifolia 59 California 113, 190, 326; central coast 375; northern 93; 308, 352; San Francisco 37; San Francisco Bay area 375 caliper 106 Callicarpa americana 309 Callirhoe involucrata 119 Callisia ornata 440 camas, common 36 Camassia 36 camera, digital (see digital photography); single-lens reflex 264 Campanulaceae 129 Canada 81 canarygrass, reed 192 Capparaceae 333 Caprifoliaceae 96, 128, 364, 365 Captan 372 carbohydrate reserves 120 Cardamine californica 216 Carex 33, 128, 323, 397; nebrascensis 40, 203, 411; perigynium 40; stipata 351 Carnegiea gigantea 17 carrionflower 381 Carya 439 cased-hole punch seeding – see seeding, cased-hole punch Castilleja 158, 160, 181; angustifolia 159; applegatei 159; cusickii 159; exilis 159; hispida 159; indivisa 159; integra 159; linariifolia 159; lutescens 159; miniata 159; minor 159; nivea 159; occidentalis 159; pallescens 159;
pulchella 159; purpurea 159; rhexiifolia 159; scabrida 159; sessiliflora 159; sulphurea 159 Catalina Island 190 catkins 293 Ceanothus 214 Celtis occidentalis 371 Center for Plant Conservation 54 Centrosema virginianum 306 Ceratiola ericoides 52 Ceratoides lanata 84 Cercis occidentalis 173; orbiculata 173 ceremonial plants 19 Chamaecrista fasciculata 161 Chamaedorea 75 cheatgrass 125, 240 Chehalis River Surge Plain 192 Chenopodiaceae 84 chestnut, American 318 children 279 Chionanthus pygmaeus 353 Chippewa 18 Chloris cucullata 340; Chloris x subdolichostachya 341 chlorophyll fluorescence 183, 318 Choctaw 18 chokecherry 232; colorow black 315 Chrysoma pauciflosculosa 440 Chrysopsis godfreyi 440 Chrysothamnus 444, 445 cinquefoil, Robbins’ 133 CITES – see Convention on International Trade in Endangered Species CKWRI – see Caesar Kleberg Wildlife Research Institute clammyweed, Rio Grande 333 clonal 229 clover, western prairie 370 clustervine, Florida beach 120, 368 coastal dune 224; strand 120, 224 coconut 22 Cocos nucifera 22 coir fiber 22, 26 Coleogyne ramosissima 377, 396 Coleoptera 392 collaboration 250, 332 Colorado 74; Plateau 377 commercially produced 248 common-garden, study 155, 233, 280, 281 community stability, plant 30 community, people 279; plant development 406 competition 46, 59, 129, 240; biology 219; grass 426 competitive ability 426 compost – see container media, compost composted pine bark 147 concepts 136, 357, 413 conditions, managed 361 coneflower, purple 13; Tennessee purple 63 conifer 183, 255; plantations interference 93 Conradina canescens 73 conservation 29, 69, 70, 137, 175, 285, 353, 368, 384, 400, 439, 441, 450, 454; bats 439; biology 8; fish hatchery 366; horticulture 164; in situ 212; planting 422; Reserve Program 100 container media 22, 148, 440; biosolids 148; compost 147, 148, 223; disease suppressive 147; electrical conductivity 257; formulations 384; pH 243; recycled 244 container plants 88, 168, 173, 184, 347, 408, 443; production 
147; seedling 7, 34, 53, 318, 451; soft-walled 449; Styroblock 254; tallpot 316; type 11, 163, 257, 344 contract specifications 171, 344 controlled-release fertilizer – see fertilization, controlled-release Convention on International Trade in Endangered Species 69 Convolvulaceae 224, 368 cordgrass, smooth 236 Cordia globosa 148 Coreopsis 160, gladiata 223; lanceolata 13; leavenworthii 233, 248; tinctoria 161 coreopsis, lance-leaved 13; Leavenworth’s 233, 248 Cornaceae 127 Cornus sericea ssp. sericea 127 cottontop, Arizona 292 cottonwood 12, 435 cotyledon 220 Cover Monitoring Assistant 373 CP – see crude protein CPB – see composted pine bark Crepis acuminata 125 crops, cover 283 Crosby Arboretum 300 crown diameter 46 crude protein 172, 304, 359 Cuisinart 122 cultivar 152, 153, 156; eastern gamagrass 359; release 186; American grass 59 cultivation 175, 408 cultural practices, nursery 141 cultural significance, people 17, 19, 193 culture, people 36, 194 Culver’s root 331 Cupressaceae 96, 182, 260, 414, 415 cuttings 6, 12, 32, 34, 73, 109, 202, 260, 262, 399, 414; basal cut-surface 242; divisions 47, 49, 453; dormant pole 316; dormant stem 242; dormant whip 316; hardwood stem 399; installation 121; micro 455; mini 110; rhizome 457; root 108, 222; stem 47, 49; stooling beds 109 Cycad coontie 244 Cylindrocarpon 267 Cyperaceae 128, 203, 411, 416 Cypripedium arietinum 234; kentuckiense 391; montanum 79 daisy, Willamette 285 Dalea 142, 246, 349; candida 83; feayi 440; ornata 370; purpurea 83, 153 Danaus plexippus 404, 433 dandelion, mountain 125 Danthonia californica 195 Dasiphora fruticosa 270 database 23 deciduous 218 deep-pipe 31 defoliation 172 degree days 345 dehydration stress 318 derivation 281 Deschampsia 126; cespitosa 113, 195 desiccation 220 design, diversity enhancement block 284; ecological 300; Nelder’s 45; patented 294; response surface 45; systematic experimental 45;
traditional 380 desired future conditions 278 Desmanthus 116, 142; illinoensis 83 Desmodium 142; canadense 83, 305; canescens 83; glabellum 317; illinoense 305; paniculatum 83, 314; sessilifolium 83 devil’s club 47 DFCs – see desired future conditions dibble 12, 121; double 88; homemade 12 Dicentra cucullaria 124 Dicranaceae 193 Dicranum scoparium 193 digestibility 304, 359 digital photography, camera 264; image 343; image formats 264; image manipulation 264; non-SLR photographic exposure 264; short course 264 Digitaria californica 292 disease 226; dieback 251; resistance 211; root 267; suppressive compost 147 disturbance 406, 428, 388 distyly 259 diversity 274, 276, 277, 313, 417, 454; interference 45 divisions – see cuttings DNA profile 200, 221 Dodecatheon species 124 dogwood, redosier 127 domestication selection 284 Douglas-fir 22 Dracopis amplexicaulis 161 dropseed, prairie 361 drought adapted 243 dry matter 359 Dryopteris 1 duck potato 238 Echinacea 217; tennesseensis 63 ECM – see mycorrhizae, ectomycorrhizae ecological design 300; life history 8 ecology 8, 63, 93, 175, 355, 400; restoration 278; seedbed 55 economics 162, 201, 326; fire-driven prices 100 ecoregion 152; approach 284 ecosystem 9; function 366; groundwater dependent 398; restoration 329; roadside 278 ecotype 65, 140, 152, 153, 230, 329, 444, 445 ectomycorrhizae – see mycorrhizae, ectomycorrhizae education, place-based 279 efficiency, fertilizer-use 357; nitrogen-use 357; water-use 357 Elaeagnaceae 99, 247 elaiosomes 423 elderberry 122; American black 365; common 364 electrical conductivity 217, 357 elevation 130; gradients 140 Elymus 126; 422; canadensis 76; elymoides 125, 321, 437, 438; glabriflorus 450; glaucus 7, 94, 195, 308, 352; lanceolatus 59, 328; multisetus 437; wawawaiensis 268 embryo 72 emergence 246, 291, 369, 377, 444, 445 encyclopedia 114 endangered – see threatened and endangered endemic 353 endomycorrhizae – see mycorrhizae, vesicular-arbuscular environmental stewardship 74 
environments, field 55 epidemiology 226 equilibrium 412, 417 equipment, auger 87, 121; bedformer 323; Cuisinart 122; development 320; dibble 12, 88, 121; expandable stinger 121; hedge trimmer (gas) 25; hoedad 121; hydrodrill 44, 204; inexpensive 252; Jet Harvester 363; laundry spin dryer 39; low cost 128, 417; low tech 25, 252, 274; modifications 51; outplanting 12, 44, 87, 121, 204; power hammer 121; seed cleaning 96; seed drying 39, 418; seed harvesting 25, 363; seed sowing 77; shop vacuum 111; sickle bar mower 429; smoker 348; Trac Vac 51; vacuum skid steer 429; Waterjet Stinger 44, 204; Westrup HA-400 320; Woodward Flail-Vac seed stripper 213 (see also entries in seed cleaning and processing, seed collection, and seed harvesting) Eragrostis intermedia 354 Ericaceae 96, 169, 239, 399, 410 Ericameria nauseosa 444, 445 Erigeron decumbens 376 Eriogonum umbellatum 196; lanatum 195, 196 erosion control 281; shoreline 236; vegetative mats 26 establishment 13, 33, 76, 79, 116, 198, 246, 306, 328, 397, 416; grass stand 383; plant 261, 271; rapid 407; seed 161; shrub 209; stand 180 ethics 171 ethnobotany 36, 263, 355 ethylene 206 Euphorbia esula 343 Euphorbiaceae 343 Eurotia lanata 84 evergreen, broadleaf 239 evolution 29, 166 expandable stinger 121 expanded polystyrene 303 experiment, competition 45; greenhouse 397; mixtures 45; move-along 85 exported species 69 extinction 368 Fabaceae 91, 116, 142, 153, 173, 176, 198, 202, 251, 263, 273, 286, 291, 304, 314, 317, 358, 370, 395, 427, 443, 446 Fagaceae 16, 144, 145, 199, 318, 325 fall-sown 145, 185 federal 196; agencies 60; nurseries 7 fencing 90 fern 1, 150; bracken 93 fertility 243 fertilization 14, 141, 144, 217, 247, 252, 270, 424; concentration 243; controlled-release 106, 255, 357; regime 270 fescue 126, 208; red 26; rough 406 Festuca 126; hallii 406; roemeri 208, 280, 281; rubra 26, 208 fiber 172 field performance 358, 361
field requirements 32 fire 5, 9, 177, 289, 388, 290, 396; adapted 401; ecology 362; prescribed 177 firebush 148 firewheel 230 flaming 91 flatwoods 177, 223 flax, distinguishing characteristics 259 flora 157, 431, 432; Colorado 419; vascular 189 Florida 10, 73, 138, 148, 223, 288, 353, 440; South 224 flowering 228, 230, 405 flowers 237 food plant, perennial 283 forage 172, 422, 450; quality 304; yield 142 forb 2, 195, 196, 219, 291, 363; semiarid grassland 434; warm-season 13; woodland 453 forest 430; highways 188; Indiana hardwood 242; openings 79; plantations 93 Forest Service 329, 404 Forestiera pubescens var. pubescens 127 forestry 151; clonal 260 forests, eastern 241; urban 283; temperate 454 four o’clock, showy 184 Franciscan manzanita 410 Frankia 253 Fraxinus nigra 102 fringetree, pygmy 353 frost protection 247 fuelbreaks 240 fungi 112, 372 Fusarium 226; oxysporum 251, 267; semitectum 251; solani 251; sterilihyphosum 251; subglutinans 251 GA3 – see gibberellic acid Gaillardia 160, 246, 349; 230 garden(ing) 181, 193, 215, 275; trial 440; western 134 garden, rock 184 gardens, public 300 GDD – see growing degree days gene pool 29 genecological evidence 280 genetic, principles 152; resource 374; shift 186 genetic variation 139, 155; adaptation 152; conservation 280; considerations 151; diversity 61, 137, 154, 200, 221, 233; erosion 137; fingerprinting 200, 221; regional 308 genetics 24; primer 136 genotype x environment 138, 426 geographic information system 212 geographic variation 308 georeferencing 212 geotropism 53 Geranium maculatum 124 germination – see seed germination germplasm 156, 186, 272, 315, 319; Antelope Creek 438; ‘Appar’ flax release 259; Atascosa Texas grama 338; Bonita 354; Bounty 389; Chaparral hairy grama 337; ‘Continental’ basin wildrye 295; ‘Discovery’ Snake River wheatgrass 268; Divot tallow weed blend 336; Ekalaka 325; experimental 359; Fowler 327; genetically diverse 284; Goliad 342; Hoverson 446; La Salle 292; Mariah hooded 
windmillgrass 340; Marion 317; Maverick 335; native plant 269, 277, 313; natural 292, 317, 333, 334, 335, 336, 337, 338, 339, 340, 341, 364, 370, 371, 389, 446, 452; NBR-1 273; Opportunity 269; Pleasant Valley 438; Prairie Harvest 371; pre-variety 186, 281; Rattlesnake 321; South Texas sideoats grama 452; Spectrum 370; Vintage 364; Webb 334; Welder shortspike windmillgrass 341; ‘Windbreaker’ cultivar big sacaton 385; White River 330; Zapata 333 gibberellic acid 78, 80, 164 ginger 453, 454, 455 ginseng 70; American 69, 71, 72 GIS – see geographic information system Glacier National Park 193, 382 Glandularia 160 globemallow 346, 390; Munro’s 409 Glomus intraradices 113, 261 glossary 136 goldenrod 8 goldenseal 200, 221 gophers 297 Gordonia lasianthus 34 grafting, cleft 211; splice 211 grama, sideoats 76, 452 grass 61, 76, 94, 113, 280, 281, 308, 320, 352, 422; C3 59; C4 59; cool-season 422, 450; redhead 27; warm-season 14, 395 grassland 28, 277, 441; remnant 153; restored 153; western Oregon 286 gravimetric weight 449 gravity separation 78 grazing 422, 450 Great Basin 212, 240, 291 Great Plains 369 green industry 74, 380 green roof 283, 303 greenbrier 381 greywater 283 Grossulariaceae 127 ground cover 6, 350 groundwater, fringe 316 growing degree days (GDD) 345 GROWISER 79 growth, development 345; early 347; height 106; response 181; root 53; stage 345; stem diameter 106 guide 256, 265, 356; children’s 387; field 287, 419; identification 387; illustrated 189; nomenclature 437; propagation 97; seed collection 67; winter 442 guidelines, cultural 391 gulf prairies and marshes 332 Gymnocarpium 1 habitat 266, 297, 433, characteristics 69 hackberry, common 371 hairgrass 126 Hakalau Forest National Wildlife Refuge 87 Hamelia patens 148 handbook 205 hardhack 405 hardiness, cold 75, 239 hardseed 314 hardwoods 151; bottomland 46 Harrisia fragrans 164 Hawai‘i 86, 90, 92, 176, 226, 249, 251, 358 hawksbeard, long-leaf 125 hayland 389 heat island effect 283 heat shock 362 heated 
409 hedge trimmer, gas 25 Hedysarum boreale 345 Helianthus angustifolius 223 Heliotropium angiospermum 148 hellebore, false 428 hemiparasitism 159, 181 Hemiptera 392 Hepatica americana 124 herbage, nutritive value 307; yield 307 herbicide 65, 225, 240, 242, 246, 276, 277, 294, 313; application 294; imazapic 1, 240, 349; injury 246, 349; post-emergent 349; pre-emergent 246 herbivory 286 heterostyly 259 Hexastylis 454, 455 Hibiscus moscheutos 235 hibiscus, marsh 235 hickory 439 Hierochloe odorata 19 highways, forest 188 historical and modern use 263 historical prairie remnant 103 history 193; natural 207, 393, 400, 441 hoedad 121 hollyhock, streambank wild 401 Holodiscus discolor 443 home region 230 Hopi Reservation 197 Hordeum brachyantherum 195 hormone, rooting 260, 399, 435 horticulture 275 host plant, nursery 249 huckleberry, black 399; mountain 378 hurricanes 178 hybridization 152 Hydrangeaceae 128 Hydrastis canadensis 200, 221 hydrodrill 44, 204 hydrogel 379 hydroseed 416 Hy-Gro Tiller 121 Hylocomiaceae 193 Hylocomium splendens 193 hyperhydricity 322 Hypericum reductum 73 IBA – see indole-3-butyric acid ICBN – see International Code of Botanical Nomenclature Idaho 68 identification 355, 442 IDS separation 78 ‘Iliahi 249 Iliamna remota 3; rivularis 401 Illiciaceae 43 Illicium floridanum 43; parviflorum 43 Illinois bundleflower 13, 116, 198, 304 imazapic 1, 240, 349 imperiled 374 implementation 357 inbreeding depression 152, 280 index, seed 83 Indiana 405 indiangrass 13 indigenous species 28 indigo, blue wild 304; false 198, 304; wild blue 198 indole-3-butyric acid 52, 73, 131, 169, 260, 414, 455 Inland Northwest 387 inoculation 113, 147, 176, 253 insects 392 integrated approach 278 Intermountain West 219, 243, 431 International Code of Botanical Nomenclature 158 Internet 23 interpretation 207 interseeding 116 Intertribal Nursery Council 197 IR-4 Program 21 irrigation
88, 217, 219, 252, 357; clay pot 31; drip 146, 323; ebb-and-flow 257; efficiency 31; flood 257; overhead 358; regimes 163 Iva imbricata 73 Jack-in-the-pulpit 168 Jacquemontia reclinata 120, 368, 224 Jeffersonia diphylla 124 Jet Harvester 363 Jiffy forestry pellets 443 Juglandaceae 145, 439 Juncaceae 128, 203 Juncus 33,128; balticus 203 junegrass, prairie 126 juniper, oneseed 182; Rocky Mountain 42, 369, 415; Utah 182, 414 Juniperus monosperma 182; osteosperma 182, 414; scopulorum 42, 270, 369, 415 Kentucky 150, 189, 296, 442 key 437 K-IBA – see indole-3-butyric acid Klamath 454 knotweed, Bohemian 192 koa 87, 176; wilt 211, 226, 251 Koeleria 126 Kosteletzkya virginica 179 Krascheninnikovia lanata 84 Ktunaxa Kinbasket 18 Lamiaceae 309 land, private 374 landowners, private 171 landscape architecture 74, 207, 275, 300; perception 380; preference 380 landscape ecology, disturbed 386 landscape, xeric 385; yard 218 Lantana depressa 223 Larix laricina 255 Larrea tridentata 146 larval host 423 laundry spin dryer 39 Lauraceae 178, 210, 436 laurel wilt-resistant 436 lead plant, crenulate 202 leaf greenness 405 legal status 165 legume 83, 91, 198, 304, 305, 306; herbaceous 314 Lepidoptera 404 Lespedeza 142; capitata 83; hirta 83; stuevei 83, 314; virginica 83 Lesquerella perforata 227; stonensis 227 Lewisia cotyledon 131 Leymus cinereus 295, 297; racemosus 59 Liatris 246 Licania michauxii 440 life cycle 69, 79 light 101, 120, 248, 445 lignin 172 Ligusticum porteri var. porteri 302 Liliaceae 56, 57, 58, 104, 123, 289, 384, 428 Lilium canadense 104; philadelphicum 228 lily, Canada 104; western red 228 Linaceae 259 Lindera benzoin 210 Linum, distinguishing characteristics 259; lewisii 259; perenne 259 livestock 422 living roof – see green roof lizard’s tail 35 locust, black 64; bristly 395; New Mexico 64 long-stem capillary 316 lotus, American 117, 407 Louisiana 306 lovegrass, plains 354 lumber, urban 283 lupine, Kincaid’s 286 Lupinus 160; latifolius 155; oreganus var. 
kincaidii 286; polyphyllus 443 Lycium torreyi 127 macerate/macerator 42, 232 Macromeria viridiflora 258 madrone, Pacific 130 Magnoliidae–Caryophyllidae 431 maintenance 65 mallow, Kankakee 3; marsh 179; Virginia saltmarsh 179 Malvaceae 3, 119, 179, 235, 291, 346, 390, 401, 409 mamane 427 management 36, 374; habitat 400; nursery 356; roadside 278 manzanita, Mt Tamalpais 410 maple, bigtooth 394 marker, molecular 200, 221 market price 60 marketing 167 marsh, tidal 421 maternal effects 426 meadow beauty 170 meadowsweet 405 mechanical application 294 medicinal 175, 283 Meehania cordata 6 Melastomataceae 170 mesquite 31 Metrosideros polymorpha 21 Mexico, northeastern 75 Miami 250 microbes 166 microhabitat 286 micropropagation 91, 131, 132, 234, 322, 384, 455; medium, Driver Kuniyuki Walnut 131; medium, Murashige and Skoog 131; medium, Woody Plant 131; cleaning protocols 384 Midwest 237 migration 30 milkmaids 216 milkvetch, basalt 273 milkweed 404, 433 Mimosa strigillosa 306 Minnesota 371, 403 mint, creeping 6; Meehan’s 6; trailing 6 Mirabilis multiflora 184 Mississippi 41, 161, 300; Delta soil types 46; northeast 422 Missoula Technology and Development Center 225 mist 414; interval 169 Mitella 124 mitigation 288; projects 24 Mniaceae 193 Mnium lycopodioides 193 model, heat-unit 345 Mohawk 18 Mojave Desert 377 molecular 157; marker 200, 221 Monarda punctata 223 monitoring 373 monoculture 46 monsoon 434 mortality 163 moss growth 217 mowing 219 Muhlenbergia asperifolia 272 mulch 31, 290; fabric 180; living 145; nursery 188; plastic 323; straw 185 multiple species 45 mushrooms 149 mycorrhizae 113, 424; colonization 424; ectomycorrhizae 112, 424; inoculum 261, 270; vesicular-arbuscular 112, 113, 261, 270 Myrtaceae 217 NAA – see naphthaleneacetic acid name change 157 naphthaleneacetic acid 52, 73, 260 narrow endemic 8, 63 Nassella viridula 59, 327 National Forest and Grasslands 277, 313
National Forest, eastern 430; Kisatchie 391; Mt Hood 155 National Plant Germplasm System 212 native alternative 259 Native Americans 18 Native Plant Network 23 native plant sector 74 native seeds, purchasing 187 native status 28 natural areas 137 natural heritage 69 nature 279 Navajo 18 nectar 433 needlegrass, green 327 Nelder’s Design 45 Nelumbo lutea 117 Nelumbonaceae 117, 407 Neptunia lutea 306; pubescens var. microcarpa 307 nettle, stinging 49 neutral detergent fiber 304 New England Wild Flower Society 81 New Mexico 78, 95, 354; central 434 New York City 435 ninebark 38 nitrogen 424; fixing 253; loss 217 nodulation 305 nomenclature 157 North America 97, 114, 115; western 60 North Dakota 432 no-till drill 383 N-sulate seed guard 254 nursery 5, 11, 18, 23, 26, 32, 38, 141, 144, 190, 196, 247, 331, 344, 350, 408, 413, 424, 433; Andrews 10; bareroot 2, 36, 109, 145, 254; container 2, 3, 10, 33, 36, 37, 182, 217, 252, 253, 254, 378, 394, 415, 449; education 167; forest 402; manual 356; Mexican forest 147; network 375; opportunities 32; production 56, 318; stock 24; tribal 356 nutrition 263 nutritive value 172, 304, 307 Nyctaginaceae 184 Nymphalidae 404, 433 oak 143, 185, 163; bur 325; island scrub 190; northern red 220, 257; Nuttall 46; Oregon white 324; poison 48; swamp white 199; water 46 oak savanna 195; degraded 195 Oenothera 160; caespitosa 231 ‘ōhi‘a lehua 87 oil sands 451 Ojibwa 20 Old Mill seeder 77 Oleaceae 102, 127 olive, New Mexico 127 Olympic Peninsula 289 Onagraceae 123, 231 onion, tapertip 212 Ontario 109 Oplopanax horridus 47 orchards, seed 151, 180 orchid 79, 256, 403; Kentucky lady’s slipper 391; mountain lady’s slipper 79; Navasota ladies’ tresses 400; ram’s head lady’s slipper 234; terrestrial 400 Orchidaceae 234, 391, 400 Oregon 270, 275, 285 Oregon Coast Range 195 origin 259 ornamental 223, 288 Oryzopsis hymenoides 330 Oshá 302 outbreeding depression 152 outplanting 86, 92, 183, 190, 202, 204, 229, 270, 344, 366, 388, 391; co-planting 88;
deep-planting 316; dormant cuttings 310; large container 301; seedling density 201; survival 379 overburden sand tailings 132 Pacific Islands 245 Pacific Northwest 139, 277, 287, 313 paintbrush 181; indian 159 palatability 304 palm 75; oases 398; California fan 398; desert fan 393, 398 PAM – see polyacrylamide Panax quinquefolius 69, 70, 71, 72 Panicum virgatum 59 Pappophorum bicolor 335; vaginatum 334 pappusgrass, pink 335; whiplash 334 particle size 379 partnerships 33 Pascopyrum smithii 59 Passiflora 250 passionflower, goatsfoot 250 pasture 389 pathogens 118; forest 436 Paysonia perforata 227; stonensis 227 pearl seed 258 pearly everlasting, western 196 peat stability 397 Pediomelum esculentum 263 peduncles 131 pendimethalin 349 Penstemon 181, 392 people 430 perception 28; differences 74; visitor 300 perennial 4, 299; cool-season 383; herbaceous 243; herbaceous Texas legumes 307; Texas 314
NRCS) 332, 446 plant variety protection 186 Plantaginaceae 336 Plantago hookeriana 336; rhodosperma 335 plantain, Hookers 336; redseed 336 plantations 46 planting 12, 77, 79, 383; costs 326; depth 297, 369, 445; dormant cuttings 44; methods 121; mixed species 45; rate 395; season 369; tool 87, 121; wildflower 65 (see also outplanting) plants, alpine 114; aquatic 98, 288, 451; common greenhouse 243; container 89; culturally significant 197; Hawaiian 89; invasive 165, 286; wetland 98, 366; woody 366 PLANTS Internet Database 165 plasticity, adaptive 426; transgenerational 426 plasticulture 323 Platanaceae 127 Platanus wrightii 127 plateau 240 ploidy by cytometry 191 ploidy hybridization 140 Poa secunda 269, 426 Poaceae 14, 19, 20, 26, 59, 61, 94, 113, 116, 123, 125, 126, 153, 208, 268, 269, 272, 295, 311, 321, 334, 335, 337, 338, 339, 340, 341, 352, 354, 359, 361, 385, 389, 406, 422, 426, 437, 450, 452 pocosin 34, 178 point-of-purchase 167 poison oak 48; sumac 50 poisonous plants 115, 205 Polanisia dodecandra 333; ssp. 
riograndensis 333 pollination, cross 152; hand 216; self 152 pollinator reproduction 216 pollinators 266, 315 polyacrylamide 379; seed wafers 180 polyethylene 414 Polygonaceae 196 Polygonella macrophylla 440; polygama 440; robusta 440 Polygonum x bohemicum 192 polymorphism, seed 330 polyploidy 152 Pontederia cordata 288 Pontederiaceae 288 pool margin 397 poplar 110 poppymallow, purple 119 population 374, 444, 445; declining 439; differentiation 233; polycross 281; rescued 202; size 286 Populus 12, 105, 128, 197, 293; balsamifera 110; deltoides 435; grandidentata 107, 435; tremuloides 107, 222 portable refrigerator-freezer 262 Portulacaceae 131 Potamogeton perfoliatus 27 Potamogetonaceae 27 Potentilla gracilis 195; robbinsiana 133 power hammer 121 ppi 264 prairie 129, 219, 276, 277, 284, 313, 376; Canadian 59; clover, purple 13, 198, 304; eastern US 417; fen 103; Midwest 361; Piedmont 417; remnants 374; tall grass 2; wet 103 predation, seed 163, 185, 286; wildlife 185 pre-dispersal 286 pressure pump 44 pre-varietal 156; release 325 price, fire-driven 100 primrose, rock evening 231 principals 386 private property 171 production 100, 223; bareroot 117, 102, 255; commercial 71; operational 151; plant 71; plug 448; seedling 253; timber 151 production field 60; grass 190 propagation 4, 6, 17, 18, 35, 79, 86, 104, 120, 133, 141, 144, 170, 173, 179, 206, 208, 210, 218, 231, 351, 375, 393, 411, 439, 443, 449, 451; beautyberry 309; container 110; in vitro 384, 391; management 263; moss 194; protocol 89, 102; rare plant 8, 63; sexual 309; shrub 350; spore 1; stacked 222; vegetative 27, 36, 48, 52, 56, 73, 104, 106, 108, 109, 120, 169, 194, 222, 235, 238, 257, 353, 407, 414, 436 propagule 195, 216 Prosopis glandulosa 31 protective marking 70 protocol, propagation 102 provenance, seed 426 Prunus 232; pumila var. besseyi 270; virginiana var. 
melanocarpa 315 Pseudotsuga menziesii 22 Psoralea esculentum 263 Pteridium aquilinum 93 pure live seed 186 Purshia tridentata 209 quality 142; stocking 162 Quercus 143, 144, 217; alba 185, 199; bicolor 199; douglasii 16; garryana 324; lobata 163; macrocarpa 325; rubra 220 rabbitbrush 444, 445 Raffaelea lauricola 436 ragweed 229 rangeland 295, 327, 330, 370; seed 60; Triticeae 321 Ranunculaceae 15, 123, 200, 350 rare 8, 88, 391; wildflowers 298 rarity 63 Ratibida 246; linuron halosulfuron 349 recalcitrant 91, 143, 220 reciprocal agreements 171 reclamation 14, 29, 422; mine 132, 180 recovery 133, 282; Mojave 388; plant community 428; rare plant 63 recycled materials 303 redbay 436; swamp 178 redbud, California 173 redwood, coastal 260 reestablishment 209 reforestation 87, 92, 137, 183, 251; artificial 16; costs 162 regional effort 332 rehabilitation 89, 100 reintroduction 250, 285, 376; endangered plant 282 relative humidity 293, 412, 417 release 281; selected class 281 remediation 407 remnant guild 418 renovation 306 reproductive biology 8 research, collaborative 276 resolution 264 response, community 428; harvest 428 restoration 5, 7, 9, 12, 27, 28, 32, 37, 52, 84, 86, 90, 113, 137, 138, 151, 156, 163, 187, 188, 190, 192, 194, 195, 197, 204, 222, 225, 271, 277, 282, 284, 289, 297, 326, 332, 333, 337, 344, 354, 377, 381, 386, 388, 396, 397, 418, 421, 422, 424, 427, 433, 434, 435, 436, 452; disturbed areas 74; ecological 29, 55, 375, 382; gene pool concept 29; grassland 276, 313; habitat 280, 281, 374; methods 276, 277, 313; partnerships 279; prairie 76, 153, 208, 276; projects 24, 27; rangeland 438; riparian 105, 197; seed 404; stream-bank 32; techniques 177; Texas 337; tidal marsh 236; wetland pond 407 revegetation 14, 28, 32, 48, 50, 120, 137, 188, 203, 259, 271, 278, 290, 328, 332, 388; desert 146; prairie 389; roadside 373; roadside 374 Rhexia virginica 170 Rhexifolia 158
Rhizobium 176 rhizome 399; divisions 62; spread 328 Rhododendron austrinum 169; canescens 169 Rhus michauxii 362 Rhynchosia americana 307; latifolia 306 Rhytidiopsis robusta 194 Ribes aureum 127, 270; cereum 127 rice, wild 20 ricegrass, Indian 330 Rio Grande Plain 292, 332, 338, 339, 340, 341 riparian 12, 44, 301; vegetation 316; zone 32, 109 road 188; decommissioning 271 roadside 374; disturbed 278; vegetation 373 Robinia hispida 395 rockland hammocks 250 Rocky Mountains, northern 382 rooftop meadow 129 roost trees 439 root crown 315 root, deformation 344; formation 384; illegal harvesting 70; lateral 53; mass 120; nodulation 253; spiraling 344; vigor 310 root dip 379 root gels, polymer 379 rooting 52, 239; media 436 Rosa carolina 408; woodsii 128 Rosaceae 96, 123, 128, 133, 232, 377, 396, 408, 443 rose, pasture 408 rosemallow, crimsoneyed 235 rosemary, Florida 52 roundwood 283 Rudbeckia hirta 161, 223 Ruellia caroliniensis 223 rural-to-urban continuum 74 rush, Baltic 203 Sabal 75 sage, butterfly 148; tropical 148 sagebrush 140, 141; big 425; silver 55; Wyoming big 180, 372, 412 Sagittaria 41; latifolia 238 saguaro 17 Salicaceae 12, 105, 106, 107, 108, 109, 110, 111, 113, 128, 197, 293, 296, 435 Salish-Kootenai 18 Salix 12, 111, 128, 197, 293, 310; arizonica 106; bebbiana 106, 296; discolor 296; drummondiana 110; eriocephala 296, 435; exigua 106, 110; irrorata 106; lasiolepis 113; nigra 435; prolixa 110; scouleriana 106 salt marsh, coastal 37 saltbush, fourwing 322; Griffiths’ 322 Salvia coccinea 148; lyrata 161 Sambucus 122; nigra ssp. canadensis 364, 365 sampling 343; field 212 sand plains 332 sandalwood 249 Sanguinaria canadensis 124 Sanguisorba 101 Santalaceae 249 Santalum freycinetianum 249 Sarracenia 62 Sarraceniaceae 62 sarsaparilla 381 Saururus cernuus 35 savanna 418 scaling 104 scalping 347 scarification – see seed scarification schedule, planting 263 Schizachyrium 361; maritimum 73; scoparium 59, 76; scoparium var. 
stoloniferum 132 Schoenoplectus 41, 128 schools 279 Scirpus 33, 41 scorpions tail 148 scratchgrass 272 Scrophulariaceae 181, 331, 392 scrub, Florida 52 sedge 287, 351; awl-fruit 351; Nebraska 40, 203, 411, 416 seed 38, 56, 57, 65, 77, 79, 100, 106, 117, 145, 152, 163, 202, 206, 218, 231, 289, 309, 324, 381, 396, 398, 399, 401, 429; accessions 374; addition 276, 277, 313; biology 248; burial 368, 445; burning 91, 409; capsules 293; certification 186; certified 66, 354, 385; chaffy 383; coat 118; collection 293; damage 185; deterioration 55; forest tree 348; grass 195; infection 226; longleaf pine 118; loss 185; manual 135; mixtures 129, 219, 382, 406; moisture content 143, 296, 393; orchard 42, 151, 180; planting efficiency 77; polymorphism 330; prairie 374; prices 186; quality 11, 66, 118, 220, 328, 360; rain 286; set 216; size 326; source 153, 230, 329, 369; storage 41, 94, 107, 143, 190, 199, 202, 210, 288, 399, 418; viability 220, 232, 377, 397, 412, 444, 445; wax currant 95; weight 360, 382; Willamette Valley 274; woody plant 67; yield 59, 180, 359 seed cleaning and processing 39, 42, 96, 107, 111, 122, 123, 125, 126, 127, 161, 232, 236, 293, 320, 360, 367, 408; brush comb 320; drying 39, 418; gravity separation 78; moisture tester 418 seed collection 10, 25, 42, 57, 60, 102, 105, 124, 128, 155, 235, 262, 263, 293, 382, 447, 448; caution 208; ex situ 202, 224 seed dispersal 124; bear 178; bird 381; coyote 398 seed dormancy 4, 5, 42, 43, 64, 72, 80, 91, 95, 99, 101, 164, 248, 299, 302, 314, 327, 330, 346, 351, 353, 368, 369, 410, 434, 445; breaking technique 427; double 104, 182; morphophysiological 43; physical 119, 401, 409; riparian 411 seed germination 17, 20, 27, 37, 40, 41, 42, 48, 50, 55, 58, 80, 85, 101, 107, 118, 133, 164, 203, 206, 227, 228, 248, 263, 286, 288, 289, 299, 309, 314, 324, 331, 346, 348, 350, 351, 353, 360, 362, 365, 367, 372, 377, 397, 401, 409, 410, 411, 412, 416, 423, 427, 444, 445, 451; cloths 254; patterns 434; rate 43, 83, 130; 
smoke-induced 410; time 99 seed harvesting 25, 161, 171, 174, 177, 196, 213, 274, 429; brushing machines 320; commercial 187; cordless hedge trimmer 174; fluffy seeds 51; timing 444; wildflower 246 seed predation 163, 185, 286; insects 390 seed production 7, 51, 61, 66, 71, 84, 142, 172, 213, 216, 315, 323, 328, 363, 390, 392, 429, 443, 447, 448; increase 60, 323, 443; increase program 284; industry 100; methods 161, 196, 274; wildflower 349 seed propagation 20, 41, 47, 49, 62, 71, 75, 102, 104, 105, 107, 119, 159, 164, 178, 182, 210, 234, 238, 244, 258, 346, 353, 404, 433 seed scarification 64, 119, 176, 184, 314, 346, 353, 362, 365, 409, 449; acid 80, 99; boiling water 401; percussion 64; sulfuric acid 95, 164, 309 seed source 426 seed survival, longevity 94, 377; long-term 368 seed testing 66, 94, 191; accelerated aging 191; pop 367; tetrazolium 444 seed transfer 139; guidelines 140; risk 155; zones 139, 140, 151, 152, 155, 280, 281, 284, 308, 329, 352 seed treatment 5, 33, 39, 225, 299, 348, 369; after-ripening 72, 423, 429; chemical 118; combination 99; conditioning 127; IDS separation 78; imbibition 99; operational 409, 427; soak 224; stratification (cold-moist) 4, 40, 43, 72, 78, 80, 85, 95, 99, 130, 159, 178, 228, 299, 302, 346, 353, 365, 394, 396, 401, 408, 415, 416, 423, 434, 445; warm-moist 80, 394, 415 seedbank(ing) 133, 401 seedbed 254, 397 seeding 14, 290; cased-hole punch 180; date 76; depth 291; direct 326; establishment 388; grass 195; rate 198, 361; roadside 160 seedling 5, 18, 23, 286, 185, 211; bareroot 7, 16, 38, 107, 141, 144, 162, 218, 225, 247, 255, 347, 369, 379, 439, 443; container 267, 270, 443; establishment 11; hardwood 257; installation 121; morphologically improved 162; production 7; quality 257, 318, 347, 358, 425; stabilization 328; storage 183; survival 379; thawing 183 seedpods 226 selected 156 selection 152; plant 435 self-pollination 152
semiarid 316, 243 Senna covesii 290 senna, wild 198 Sequoia sempervirens 260 service-learning 279 Setaria leucopila 339; vulpiseta 339 shade 120, 307, 453; tolerance 305 Shepherdia argentea 247; canadensis 99 shop vacuum 111 shop-built tool 363 shrub 113, 363; desert 31; native 84 sickle bar mower 429 Sidalcea malviflora ssp. virgata 195 Sierra Nevada 352 silica gel 94 silversword, Mauna Loa 89 silvopasture, emulated organic 395 single-lens reflex camera 264 Sitanion hystrix 437, 438; jubatum 437 site preparation 65, 271 site stabilization 26 skid steer 301 slag refuse areas 14 slope, eastern 419 SLR – see camera, single-lens reflex Smilacaceae 380, 381 Smilax 381; smallii 380 smoke, cellulose-based 348 smoker 348 smokewater 289, 410 Smoky Bear 329, 404 snowberry, mountain 80 soaking 310 soil compaction 271; forest 271 soil seedbank 177, 368 soil surface 297; vacuuming 177 Solanaceae 127 Solidago shortii 8 Sonoran Desert 17, 31, 146 Sophora chrysophylla 427 Sorghastrum nutans 59, 153 source materials 137 source-identified 156 South 9, 441 southern plains 359, 383 sowing 128; broadcast 416; machine 77 spacing 201 Spartina alterniflora 236 specialty plants 21 species of concern 86 species, richness 46; alien 90, 166, 241; broadleaved 252; coastal dune 73; composition 129; diversity 219, 454; exotic 166, 195, 241; invasive 197, 241, 275, 276, 277, 313, 343; invasive control 192; legume 142; mixed 46; ornamental 148; prairie 274; rare 282; restoration 426; selection 290, 388; woodland 124; woody 353 Speyeria idalia 423 Sphaeralcea 291, 346, 390; munroana 409 sphagnum bog 62 spice bush 210 Spiraea alba 405; douglasii 443; tomentosa 405 Spiranthes cernua 124; parksii 400 spirea 405 spore propagation 1 Sporobolus 361; wrightii 385 spurge, leafy 343 squirreltail 125, 437; bottlebrush 321, 438 stand establishment 180 stands, grass 116 steeplebush 405 stewardship 279 Stipa viridula 327 stock, container 16, 146, 163, 183, 326, 388; nursery 261 stocking 162, 201 stocktype 87, 
347, 378 storage, overwinter 425; interval 444; behavior 224; cooler 425; freezer 425; long-term 377; seed requirements 412 stratification – see seed treatment streams 275 strophiole 64 Strophostyles helvola 172; leiosperma 172; umbellata 306 Styroblock 254 subirrigation 217, 257, 357, 358; system 252 subjective labels 30 submerged aquatic vegetation 27 subsoiling 271 substrate 22 substrate, container (see container media); growth 399; sand 227 Suillus 424 sulfurflower 4 sulfuric acid 95, 164, 309, 365 sumac, Michaux’s 362 sunflower, common woolly 196 supply and demand 187 survey, green industry 74; nursery managers 425 survival 46, 230, 291, 297, 306, 347, 395 sweetgrass 19 sweetvetch 345 switchgrass 116 sycamore, Arizona 127 symbionts, obligate 261 Symphoricarpos albus 128 Symphyotrichum carolinianum 223 talc 77 taproot 53, 163 techniques, companion planting 88 temperature 83, 101, 143, 254, 248, 262; profiles 55 Tennessee 82, 442 Tephrosia virginiana 306 terminology 136 Terrafill 303 test, cold 191; electric conductivity 191; tetrazolium 191, 444 Texas 333, 334, 335, 336, 338, 339, 340, 341, 342, 400, 446; South 311, 332, 452 Thalia 41 Thiram 372 threatened and endangered 3, 8, 54, 63, 70, 86, 88, 89, 90, 91, 120, 133, 165, 202, 224, 229, 286, 362, 366, 368, 376, 400, 410 threeawn, Beyrich 138 Thuja occidentalis 255 tick-clover, panicled 314 tick-trefoil, Dillenius’ 317 tidal zone 37 Tipularia discolor 124 tissue culture – see micropropagation Tohono O’odham 17 tolerance 349 toppling 53 Toxicodendron diversilobum 48; vernix 50 Trac Vac 51 translocation 282 transplant 27, 133, 146, 378; establishment 31; mini-plug 16 transport survival 133; public 207 Trautvetteria caroliniensis 15 travel corridors 433 tree 218, 387; improvement 151; traditional 245; tropical 135 trefoil, birdsfoot 304 trial, field 270 Trichoderma 147; harzianum 267 Trillium 56, 57; albidum 58; ovatum 58;
parviflorum 58; persistens 384; reliquum 384; rivale 58 Tripsacum 359 Triticeae 268, 295 trumpets, giant 258 tumbled 127, 409 turnip, prairie 263 two-stage “scalping” auger bit 301 Ulmaceae 371 United States 81; Eastern 151; Midwestern 198, 448; Northern 265; Southeastern 98, 106, 124, 385; Western 100, 114, 420 unprotected 374 upland sites 306 urban lumber 283 Urtica dioica 49 Utah’s Choice 167 Vaccinium membranaceum 378, 399; myrtilloides 399 vacuum skid steer 429 value, cultural 380 VAM 80® 261 variability 95 variety 186 vegetation management, integrated 242 vegetative 381; cover 373 Veratrum californicum 428 Verbena 160 verbena, pink sand 285 Veronicastrum virginicum 331 Vespertilionidae 439 vetch, deer pea 446 Vicia ludoviciana 446 vines, woody 381 Viola 124, 448; adunca 447; pedata 423 Violaceae 423, 447, 448 violet, early blue 447 Virginia 103 volunteer 250 wapato 238 Washingtonia filifera 393, 398 waste 294 water, aerated 411; level 397; management 358; quality 33; use 217 water-efficient plants 275 Waterjet Stinger 44, 204 water-wise 275 weaning 163 Wedelia texana 342 weed 129; control 65, 246, 294, 343; perennial 241 weevils 390 West Virginia 149 Westrup HA-400 320 wetland 37, 170, 203, 223, 237, 288, 323, 351, 411, 416; construction 33, 238; equipment 301; plant 33, 35; restoration 238; shrub 50; species 41 wheatgrass, thickspike 328 white nose syndrome 439 wild harvest 69 wild senna 304 wildflower 51, 57, 82, 101, 150, 161, 168, 233, 246, 248, 331, 440, 456; Florida 230; mountain 68 wildland plantings 30 wildrye 126, 422; basin 297; blue 7; Canada 76 Willamette Valley 286, 374, 376; Seed Increase Program 284 willow 12, 109, 110, 310, 435; Arizona 106; Bebb’s 106; bluestem 106; coyote 106; Scouler 106 windbreak 371, 385; wildlife 315 wine cups 119 winterfat 84 wiregrass 9, 138; propagation 10 wolfberry 127 wood quality 201 woodlands, California 16 woods grown 71 Woodward Flail-Vac seed stripper 213 woody invasive 242 woody plant 24, 97, 241, 326, 381, 
395, 442 Woody Plant Seed Manual 312 woolgrass 41 Xanthorhiza simplicissima 350 xeric 258 xeriscaping 259 Xerophyllum tenax 289 X-rays 220 Xyleborus glabratus 436 yellowroot 350 yield, forage 359; seed 180, 359 Zamia pumila 244 Zamiaceae 244 zexmenia, orange 342 Zinnia acerosa 319 zinnia, desert 319 Zizania palustris 20

work_4qpwt4ebebap7e3blokzwtqdue ----

Asian Pacific Journal of Cancer Prevention, Vol 17, 2016, 2337. DOI: http://dx.doi.org/10.7314/APJCP.2016.17.4.2337
Educational Utilization of Microsoft Powerpoint for Oral and Maxillofacial Cancer Presentations. Asian Pac J Cancer Prev, 17 (4), 2337-2339.

Introduction

In the oral cavity, oral squamous cell carcinoma is the most common malignant entity, accounting for over 90% of the cases of diagnosed malignant tumors (Scully, 2005). The mortality rate regarding oral cancer is estimated at approximately 12,300 deaths per year, and the survival rate is only 40% to 50% for diagnosed individuals and is closely related to the time elapsed between early perception of the disease and its diagnosis and treatment planning (Dantas et al., 2016). In the field of oral cancer, important research has focused on knowledge of oral cancer among adult subjects attending dental clinics (Razavi et al., 2015), which highlights the need for further studies involving the use of educational strategies for oral cancer evaluation. In this context, the advent of digital photography allowed photographic documentation to become routine in many oral and maxillofacial cancer services. In addition, the acquisition and storage of photographic documentation has become easier, faster, and less costly.
COMMENTARY

Educational Utilization of Microsoft Powerpoint for Oral and Maxillofacial Cancer Presentations

Francisco Samuel Rodrigues Carvalho 1, Filipe Nobre Chaves 2, Eduardo Costa Studart Soares 1, Karuza Maria Alves Pereira 2, Thyciana Rodrigues Ribeiro 1, Cristiane Sa Roriz Fonteles 1, Fabio Wildson Gurgel Costa 1,*

1 Department of Clinical Dentistry, Federal University of Ceara, Fortaleza-CE, Brazil; 2 Campus Sobral, Sobral, Brazil. *For correspondence: fwildson@yahoo.com.br

Abstract: Electronic presentations have become useful tools for surgeons, other clinicians and patients, facilitating medical and legal support and scientific research. Microsoft® PowerPoint is by far and away the most commonly used computer-based presentation package. Setting up surgical clinical cases with PowerPoint makes it easy to register and follow patients for the purpose of discussion of treatment plans or scientific presentations. It facilitates communication between professionals, the supervising of clinical cases, and teaching. It is often useful to create a template to standardize the presentation, offered by the software through the Slide Master. The purpose of this paper was to show a simple and practical method for creating a Microsoft® PowerPoint template for use in presentations concerning oral and maxillofacial cancer.

Keywords: PowerPoint presentations - audiovisual aids - computer-assisted learning - oral cancer

Setting up surgical clinical cases with Microsoft® PowerPoint may facilitate photographic recording and patient follow-up, aiming at its use in clinical sessions for treatment plan discussion or in scientific presentations. It facilitates communication between professionals, and the teaching and supervising of clinical cases. Cases may be set up individually, facilitating treatment planning, sharing experiences involving clinical cases and new techniques, and exchanging digital information with other experts or residency programs (Hegarty, 1999).
This program has become popular due to the many advantages it offers, including the ease of making last-minute changes and customization, the possibility of incorporating multimedia files, portability, and cost reductions (Regennitter, 2000). Clinical images may be enhanced through alignments, cropping, brightness/contrast optimization, and color corrections, requiring a minimum of skill from users (Halazonetis, 2002). Microsoft® PowerPoint has been described as an incorporation software. Images are easily produced, edited and stored without loss of quality or resolution. It is the state of the art in presentation software, with the addition of clinical content (Hegarty, 1999). In a presentation, the creation of a standardized template is often useful. Microsoft® PowerPoint provides the Slide Master as a valuable tool. Although this tool is not shown in the presentation, it serves as a template: the top slide in a hierarchy of slides that stores information about the theme and layout of a presentation, including the background, color, fonts, effects, placeholder sizes, and positioning. In addition, the Slide Master allows visually satisfactory results, optimizing time in long-duration presentations, since all changes can be performed at one time (Halazonetis, 1998). The purpose of this paper was to show a simple and practical method for creating a Microsoft® PowerPoint template for use in oral and maxillofacial cancer presentations, and how to perform the insertion of multiple photos into that standardized presentation for comparative purposes. The analysis of photographic records is remarkably important for surgeons and patients to assess surgical outcomes, as well as for medical and legal support and scientific research.
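The Slide Master hierarchy just described (one master whose theme and layout information propagate to every slide built from it) can be modeled in a few lines of code. The sketch below is illustrative only: it uses plain Python dictionaries, not the PowerPoint file format or any Microsoft API, and the layout and field names (preoperative, imaging, postoperative, font, placeholders) are our own assumptions, chosen to mirror the case structure discussed in this paper.

```python
# Toy model of the Slide Master concept: the master stores theme and
# layout information once; each slide inherits it, so a single change
# to the master restyles every slide generated afterwards.

def build_slide(master, layout_name, content):
    """Merge the master theme, one named layout, and case content."""
    slide = dict(master["theme"])                 # inherited theme settings
    slide.update(master["layouts"][layout_name])  # layout-specific placeholders
    slide.update(content)                         # case-specific data
    return slide

master = {
    "theme": {"font": "Arial", "background": "white"},
    "layouts": {
        "preoperative":  {"placeholders": ["frontal", "profile", "occlusion"]},
        "imaging":       {"placeholders": ["panoramic", "ct_axial"]},
        "postoperative": {"placeholders": ["frontal", "profile"]},
    },
}

slides = [
    build_slide(master, "preoperative",  {"patient": "case_01"}),
    build_slide(master, "postoperative", {"patient": "case_01"}),
]

# One edit to the master is enough to restyle slides generated from then
# on, mirroring how Slide Master changes "can be performed at one time".
master["theme"]["font"] = "Calibri"
restyled = build_slide(master, "imaging", {"patient": "case_01"})
```

In the real Slide Master the same principle applies: theme edits are made once, in master view, and every dependent layout picks them up.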
Practical Measures

Creating a Microsoft® PowerPoint template

A suitable and visually attractive PowerPoint presentation requires a general knowledge of the Slide Master functionality (Figure 1). It is possible to insert master slides (e.g. slides representing the preoperative period, imaging exams, and postoperative evaluation) with proper layouts. Besides the possibility of inserting several layouts in each Slide Master, there are a number of options that can be easily edited, as described in Figure 1. Additionally, the figure placeholders can be arranged to provide an adequate orientation of the image rotation arrows according to object position during the photographic record (Figure 2). For example, if a profile was photographed with the flash towards the left, the arrow above the placeholder should be pointed to the left. Likewise, in occlusion photos the arrow should point in the same direction as the flash (upwards for upper occlusion and downwards for lower occlusion). When photographs are taken in a standardized manner, no image rotation is required. After creating a Slide Master and its layouts, it is possible to rename the slide and customize the theme

Figure 1. Slide Master Ribbon. 1) Insert Slide Master; 2) Insert layout; 3) Insert placeholder (aiming to insert editable objects during the making of the presentation outside the Slide Master); 4) Menu insert (aiming to insert objects, text boxes, and figures to be repeated in the slide structure, not amenable to editing during the preparation of the presentation outside the Slide Master)

Figure 2. Final Adjustments of the Slide Master. 1) Drawing menu (it allows rotation of the image placeholders to guide the insertion of photographs); 2) options to customize the theme presentation; 3) Close master view (it should be activated after the completion of all the changes).
Figure 3. Representation of Both Customized Slide Masters (Preoperative and Postoperative), Showing Respective Layouts

Figure 4. Inserting Multiple Images. 1) Insert; 2) Photo Album; 3) File/Disk; 4) Picture layout (determines the distribution of pictures on slides); 5) Browse (allows you to select a design template); 6) Create (creates the presentation with all selected pictures); 7) Panel to insert other media files; 8) My Apps (allows importing Apps).

Figure 5. In-app functionality of rotating structures in a virtual 3-D space. 1) Import the DICOM data to a specific software; 2) Create a reformatted surface structure and export it in *.x3d format; 3) Open this file in MeshLab software and export to *.u3d format; 4) Convert this file to *.pdf format using Adobe Acrobat XI Pro; 5) After installing the PDF3D software, an icon on the ribbon will allow attaching the created PDF file, which offers real-time tridimensional handling.

with appropriate colors, fonts, effects, background styles and size (Figure 2). Subsequently, as described in Figure 3, these customized layouts will appear on the layout command of the home tab situated on the ribbon. However, to add this new design to the existing PowerPoint themes it is necessary to save it as a PowerPoint Template (*.potx) file.

Inserting multiple photos or other media files

Microsoft PowerPoint offers the Photo Album as an interesting tool, which allows the inclusion of several images in a presentation. To insert many pictures into the presentation, see the steps described in Figure 4. In addition, a versatile option available in PowerPoint is the incorporation of other media files such as videos and
There is the possibility of incorporating exported videos from 2-D and 3-D image viewers, as well as it is possible to attach a video from a recorded screen while manipulating a 3-D reconstructed LPDJH�LQ�D�VSHFLÀF�VRIWZDUH��H[DPSOH��,Q9HVDOLXV������ 2-D images of 3-D representations are a poor substitute for the in-app functionality of rotationg structures in a virtual 3-D space. As an alternative to this problem, Phelps et al., 2012.5 described a way for embedding 3-D radiology models in Portable Document Format (PDF). Hence, it is possible to convert 3-D DICOM (Digital Imaging and Communications in Medicine) dataset into D�VXUIDFH�UHSUHVHQWDWLRQ�ÀOH�DQG�VXEVHTXHQWO\�H[SRUW� WKLV�GDWD�WR�0HVK/DE�VRIWZDUH��9LVXDO�&RPSXWLQJ�/DE� ²�,67,�²�&15��KWWS���PHVKODE�VRXUFHRUJH�QHW���YHUVLRQ� �������DLPLQJ�WR�FUHDWH�DQ�DFUREDW�FRPSDWLEOH�ÀOH��ZKLFK� DOORZV�FRQYHUWLQJ�WKLV�ÀOH�LQWR�8�'��8QLYHUVDO��'�ÀOH� IRUPDW���7KLV�ÀOH�W\SH�FDQ�EH�DWWDFKHG�LQ�$GREH�$FUREDW� X Pro (www.adobe.com/products/acrobatpro.html), and posteriorly it can be embebed in PowerPoint presentations �:LQGRZV�2IÀFH�������WKURXJK�D�SOXJLQ�FDOOHG�3')�'� �KWWS���ZZZ�SGI�G�FRP�3')�'BLQB3RZHU3RLQW�SKS��� which adds 3D PDF interactive views inside PowerPoint slide presentations (Figure 5). &RQFOXVLRQV With 3-D imaging of the maxillofacial skeleton becoming widespread, the use of Microsoft PowerPoint facilitates the presentation and sharing of clinical information with a variety of clinical and non-clinical audiences in a variety of settings. Hence, the present description about guidelines and their functionality could be applied to all versions of Power Point for Windows. However, screen recording and Apps are available only in 2013 Microsoft Power Point version. Mobile devices, both android and iOS (iPhones and iPads) systems, allow all the Slide Master functionalities. However, these systems show some limitations regarding the applicability of the tools described in the present article. 
The options “insert photo album” and “screen recording” and the use of Apps cannot be used in current versions of PowerPoint for these systems. Similarly, the Microsoft PowerPoint version for Mac OS does not provide these tools. A solution for the problem related to image insertion may be the use of iPhoto®, which allows insertion of multiple pictures into the slide. Regarding the unavailable screen recording option, Apple® offers QuickTime® as free downloadable software. Thus, there is an interesting possibility of embedding a DICOM movie (converted from a QuickTime movie) in a customized PowerPoint presentation. Another relevant issue in PowerPoint presentations is security, to avoid unauthenticated changes. Since presentations made available on the World Wide Web have become widespread, a password can be assigned to prevent unauthorized users from opening or editing a document. Hence, to create a password, the option of exporting the presentation should be selected, followed by Package Presentation for CD and the advanced options to enter a valid password aiming to open or modify a presentation. Future trends have been heading toward making increasingly dynamic and interactive presentations. In this context, a new tool available for any device is Office Mix, which allows recording, writing, and drawing on slides just like a whiteboard, as well as adding quizzes, video, and interactive content. In brief, the advent of digital media has made the registration of clinical cases quicker and more cost-effective. From an educational perspective, Microsoft PowerPoint may facilitate the organization of cases conducted in oral and maxillofacial surgical services, enhancing academic activities and scientific knowledge.

References

Scully C (2005). Oral cancer; the evidence for sexual transmission. Br Dent J, 199, 203-7.
Dantas TS, de Barros Silva PG, Sousa EF, et al (2016). Influence of educational level, stage, and histological type on survival of oral cancer in a Brazilian population: a retrospective study of 10 years observation. Medicine, 95, 2314.
Hegarty D (1999). Presentations with dentofacial planner images. Am J Orthod Dentofacial Orthop, 116, 114-6.
Regennitter FJ (2000). Powering up your PowerPoint presentations. Am J Orthod Dentofacial Orthop, 118, 116-20.
Halazonetis DJ (2002). New features of Powerpoint 2002. Am J Orthod Dentofacial Orthop, 122, 668-72.
Halazonetis DJ (1998). Making slides for orthodontic presentations. Am J Orthod Dentofacial Orthop, 113, 586-9.

work_4rbfrm7iuzhjnbrfrheolpcsdm ----

Science Education for Sustainability: Strengthening Children’s Science Engagement through Climate Change Learning and Action
For fifteen weeks, ten- to twelve-year-olds participated in an after-school program that combined on-site interactive educational activities (e.g., greenhouse gas tag) with off-site digital photography (i.e., photovoice process), and culminated in youth-led climate action in family and community settings. Participants were 55 children (M = 11.1 years), the majority from groups underrepresented in science (52.7% girls; 43.6% youth of color; 61.8% low-income). Combined survey and focus group analyses showed that, after the program, science became more relevant to children’s lives, and their attitudes towards science (i.e., in school, careers, and in society) improved significantly. Children explained that understanding the scientific and social dimensions of climate change expanded their views of science: Who does it, how, and why—that it is more than scientists inside laboratories. Perhaps most notably, the urgency of climate change solutions made science more interesting and important to children, and many reported greater confidence, participation, and achievement in school science. The vast majority of the children (88.5%) reported that the program helped them to like science more, and following the program, more than half (52.7%) aspired to a STEM career. Lastly, more than a third (37%) reported improved grades in school science, which many attributed to their program participation. Towards strengthening children’s science engagement, the importance of climate change learning and action—particularly place-based, participatory, and action-focused pedagogies—are discussed. Keywords: children; climate change education; participatory action research; photovoice; science attitudes; sustainability 1. Introduction Promoting children’s science engagement is often framed as a means to address the world’s most pressing problems over the decades to come [1]. 
For example, by shoring up children’s science interest and achievement today, young people may go on to become leaders and agents of change in addressing the major scientific and technological challenges of tomorrow [2,3]. From a sustainability perspective, however, this is a problematic starting point. Current global crises—including climate change and biodiversity loss—demand rapid societal transformation towards social and ecological sustainability Sustainability 2020, 12, 6400; doi:10.3390/su12166400 www.mdpi.com/journal/sustainability http://www.mdpi.com/journal/sustainability http://www.mdpi.com https://orcid.org/0000-0002-4400-4287 https://orcid.org/0000-0003-4327-6237 http://www.mdpi.com/2071-1050/12/16/6400?type=check_update&version=1 http://dx.doi.org/10.3390/su12166400 http://www.mdpi.com/journal/sustainability Sustainability 2020, 12, 6400 2 of 24 beginning now, not in the abstract future when today’s children have reached adulthood [4]. Indeed, climate change is already wreaking havoc on a global scale in the form of more frequent and more intense extreme weather events and causing unprecedented environmental, societal, and economic disruption [5]. Reimagining the very purpose of science education—beyond honing the “potential” of young people to take decisive action in the future—has become a necessity. Promoting children’s science engagement is also commonly framed in the language of global dominance. A persistent narrative is that, by feeding the STEM “pipeline” today, the U.S. may remain competitive in the global marketplace of the future by remaining a leader in technology and innovation [6,7] This narrative is clear in federal STEM education reports and initiatives, which have for decades emphasized the connection between intensifying STEM education and regaining or maintaining the U.S. position on the frontiers of discovery [8–11]. 
Again, from a sustainability perspective, this worldview is misaligned with the kinds of societal transformation that are required to avert catastrophic ecological and societal consequences. Specifically, the modes of transformation required to adequately address sustainability challenges are those that force us to rethink, reinvent, and restructure our institutions (e.g., global economies, the scientific enterprise) in ways that shift away from the non-sustainable modes of interaction (e.g., competition, consumerism, individualism) that have delivered us into the present moment [12]. The prevailing neoliberal ideology of limitless growth and perpetual global competition has locked us onto the current crisis-bound trajectory [13,14]. Rethinking the very purpose of the STEM pipeline—beyond the language of global competition—is required for course correction. In sum, there is a need for rapid societal transformation to sustainability in ways that substantively involve children and young people in climate change learning and action as well as a need to redefine science not as a “competitive edge” to safeguard against the threat of future subordination, but as a collaborative process to envision and enact sustainable futures today. Moreover, given the overwhelming nature of sustainability challenges, there is a need to encourage children’s informed actions on sustainability topics in ways that promote their sustained interest and engagement. What if, rather than attempting to shore up children’s science engagement today to ensure their status as capable actors in the future, we positioned children as critical actors for sustainability today as a means to simultaneously strengthen and reimagine children’s science engagement for the well-being of people and planet? 
This article is the third in a series of manuscripts exploring what constitutes, and how to facilitate, children’s constructive climate change engagement through the lens of Science, Camera, Action!, an afterschool program that combined educational activities with digital photography to facilitate children’s individual and collaborative climate change action [15,16]. The present article examines how children’s agentic experiences in the program influenced their thoughts (i.e., perceptions of), feelings (i.e., attitudes toward), and behaviors (i.e., engagement) related to science as a school subject, potential career, and societal force. Findings suggest that children’s climate change engagement can be a vehicle not only for supporting children’s science interest, but for opening up pathways to a sustainable future by positioning children as agents of change today. 2. Literature Review Today’s most urgent challenges are increasingly those that call upon broad sectors of society to understand and practice sustainability. Sustainability is defined as “meeting the needs of the present without compromising the ability of future generations to meet their needs” [17], and is commonly acknowledged to have environmental, social, and economic dimensions. High-stakes sustainability challenges (e.g., climate change and biodiversity loss) are understood to be the consequence of decisions and actions by humans. The conclusion that human activity has had such a massive impact on the global environment has prompted scientists to refer to our current geological age as the Anthropocene—anthropo meaning “human” and -cene referring to a geological period [18]. To avert catastrophic ecological and societal consequences, what is needed now are broad shifts in Sustainability 2020, 12, 6400 3 of 24 how societies operate and how people live their everyday lives to reduce future threats to people and planet. 
In the sections that follow, we review the current state of sustainability science education in the U.S., and make the case that what is needed now are empowering and transformative pedagogies that encourage children’s sustained interest and participation in sustainability action. Doing so, we argue, could redefine the very purpose of science education while strengthening children’s science engagement as they reinterpret what science means to them and to society. 2.1. Science Education for a Sustainable Future Science education must play a pivotal role in promoting sustainability, in particular through facilitating climate change learning and action. To date, however, climate change education is not a national priority in the U.S., and many science teachers report feeling unsupported in teaching about climate change in the classroom [8]. For example, many teachers feel under-prepared due to a lack of training on the subject, and there is no national policy mandate requiring that they include climate change topics in their curriculum [8,19]. Consequently, most teachers spend a limited amount of time teaching about climate change, if at all, and students sometimes receive “mixed messages” about the topic that do not align with accepted science [8,20]. Even still, demanding that teachers teach according to the scientific evidence may not be enough. Research critiquing the “information deficit” approach to climate change communication has shown that merely learning facts about a problem does not necessarily lead to action [21]. Moreover, young learners can feel overwhelmed by the issue, causing them to disengage [22]. What is needed, rather, are agency- and action-focused engagement strategies that empower learners to feel capable of addressing the issue in meaningful ways, particularly by working together in local settings [15,16,23]. Within and beyond the U.S., sustainability education has been criticized for being ‘depoliticised’ [24,25]. 
The Next Generation Science Standards (NGSS), which shape P-12 science education and teacher education in the U.S., position the environment as entirely separate from living organisms, humans in particular [26]. Further, sustainability challenges are often framed as having physical properties and primarily technological solutions, which overlooks their complex human elements (e.g., cultural, social, and political) [26–29]. In the U.S. classroom, for example, the siloed, discipline-based approach to teaching climate change as science tends to ignore the social dimensions of climate change, and the political dimension is likely to be omitted [30]. Further, the individualism that drives—and is perpetuated by—the competitive orientation of STEM education has been linked to a renunciation of pro-environmental thought and behavior, and the rejection of human behavior as a contributor to environmental problems [28]. Consequently, sustainability learning from the formal classroom is focused on individualistic learning of scientific facts rarely paired with opportunities for action, especially collaborative action within local communities. Positioning young people as capable actors today could promote their science engagement and redefine the role of science in society—for promoting the stability of ecosystems that support human life on this planet. To accomplish this, we must rethink the nature of science education as well as the purpose of the “STEM pipeline”. Students need opportunities to see how the science they learn matters to their lives and that of other living beings, to spread their knowledge across their networks of families and friends, and to transform the world around them as they engage collaboratively to translate their knowledge into action. 2.2. Sustainability Science Education for Children’s Sociopolitical Inclusion Explanations for the depoliticization of sustainability education can be found at multiple levels. 
Sustainability 2020, 12, 6400
Beyond the institutional level (e.g., policy, school), discussed earlier, a less apparent reason is the broader cultural issue of children’s exclusion from the socio-political domain. Put differently, children are viewed not as “human beings”, but rather as “human becomings” whose political participation and engagement are not yet considered age-appropriate behavior [31]. Dominant constructions of childhood, including children as innocent and children as becoming, regard early life as fundamentally a period of preparation and socialization leading toward the full citizenship of adulthood [32]. Such images of young people in primarily Western societies, including the U.S., render adult-youth relations as inherently paternalistic, whereby young people are often neither consulted as competent citizens nor invited as capable actors with rights to participate in civil society [33]. Politics is an “adult-only” domain, and children are asked to learn and observe from society’s margins [34]. This state of affairs inevitably leaves young people without a voice in important matters that impact their lives. More generally, the disjuncture between science and society has given rise to critical dialogue about promoting public engagement through “scientific citizenship” [35] and redefining the meaning of “science literacy” [36,37]. Given children’s general lack of engagement with critical societal issues in the classroom, researchers have argued that the formal education system has failed to empower young people as citizens [38,39]. Relative to other societal issues, this accusation is especially severe in the context of sustainability education, given that today’s children are the “future generations” whose health and well-being will be increasingly harmed as the stability of social and ecological systems continues to unravel [40].
Towards building sustainable futures in collaboration with youth—and towards developing empowering pedagogies in the process—inviting young people to learn about the scientific and social dimensions of sustainability is critical, as is encouraging their action [4,41].

2.3. Sustainability Science Education for Collaborative Action

To date, most classroom-based climate change learning is not paired with an action component of any kind [30]. When opportunities for action are incorporated into environmental programming, within and beyond the classroom, a key criticism is their emphasis on individual rather than collective action. Underscoring individualized behavior change implicitly frames sustainability challenges as a matter of personal responsibility rather than large-scale structural change, or the kinds of change that require communication, coordination, and collaboration [28,42]. This trend is rooted in neoliberal ideology, as the implied message is that sustainability challenges can be overcome through aggregated, freely taken individual actions (e.g., consumer choices) that need not involve coordinated efforts or policy-level decisions that may prompt state interference in the marketplace. A key distinction here is between individual actions taken within existing systems (i.e., behavior change) versus collaborative actions taken to transform existing systems (e.g., via collective action). Of great significance for sustainability learning is that such a micro-level framing misrepresents the macro-level (i.e., policy, infrastructure) changes that must occur in order to adequately address sustainability challenges, thus hindering learners’ ability to imagine alternative futures and the kinds of decisions and actions necessary to realize them. As noted by Hayward, . . .
the psychological lens inadvertently narrows our vision of citizenship, reducing the potential of political agency to the aggregation of personal value choices, aspirations and psycho-social interactions with the natural world, obscuring the political potential of citizens collaborating and reasoning together to create alternative pathways and forms of public life. [42] (pp. 7–8)

In a sustainability education context, then, “fight(ing) post-political representations of the present” is a first step towards building a sustainable future [43] (p. 148). To be transformative for learners, pedagogies must move beyond instrumental (i.e., prescriptive) modes and towards emancipatory engagement, as the former “stifles creativity, homogenises thinking, narrows choices and limits autonomous thinking and degrees of self-determination” [44] (p. 180). For example, engaging young people using participatory approaches—in which they are treated as decision-makers and collaborators throughout the process of learning and action—can cultivate a sense of agency that combats climate change anxiety and withdrawal [15,16,41]. Moreover, engaging young people in collaborative approaches to education and action has the potential to promote pro-environmental thoughts and behaviors that may lead to more interdependent (rather than independent) ways of thinking and solving problems [28]. Positioning young people as radical visionaries and capable actors for sustainable transformation demands that educators cultivate their critical awareness and invite them to envision preferable futures, dialogue about and develop their own plans for action, and then act on them collaboratively within communities [15,16,45–48].

2.4.
Science Engagement for Societal Transformation

Societal transformation to sustainability requires widespread shifts in modes of thinking, being, and interacting in the world as a way of preventing the worst effects of environmental degradation. Science education is critical to this transition. Not only do students need deep knowledge and understanding of disciplinary concepts and processes, they also must have extensive scientific literacy to use disciplinary knowledge to make evidence-based decisions that simultaneously consider environmental, social, and economic dimensions. In the U.S., children and youth are most likely to learn about today’s most urgent sustainability challenges, including climate change, in the science classroom. It is therefore imperative not only to support learners’ fact-based understanding of sustainability challenges, but to cultivate their sustained interest and participation in addressing these challenges through empowering and transformative pedagogies that position children as critical actors for a sustainable future. Transformative sustainability learning theory holds that profound changes in learners’ thinking and action can result from pedagogical modes that encompass cognitive, affective, and behavioral engagement [49]. By cultivating critical awareness and collaborative action, sustainability science education can lead to increased pro-environmental knowledge, more positive attitudes, and greater behavioral engagement. What if, beyond transforming learners’ perspectives on sustainability, such modes had the capacity to transform learners’ perspectives on science? For example, could transformative sustainability pedagogies also influence science learning, attitudes, and behavioral engagement? Doing so could begin to reframe what science means to children and their understanding of the role of science in society.
Using mixed-methods data collected through a collaborative, multi-site research study, the present research examines how children’s climate change learning and action influenced their science interest and engagement. An earlier manuscript in this series [16] examined children’s knowledge gains through the program, and showed that after the program, children knew more about climate change than they did before, and—on average—more than the average U.S. teenager or adult. The present research moves beyond climate change learning to examine how the program impacted children’s cognitive, affective, and behavioral science engagement.

3. Program Description, Community Partner, Research Context

The present study was carried out in partnership with three Boys and Girls Club (BGC) units in the Mountain West Region of the U.S. The BGC is one of the longest-standing and largest community-based youth development organizations in the U.S., founded in 1860 and currently serving over 4 million youth annually across 4600 clubs in urban and rural areas, in public housing communities, and on Native lands [50]. As a non-profit organization funded by government grants as well as corporate donations and private philanthropy, the BGC offers out-of-school youth services year-round, with annual membership fees as low as five U.S. dollars [51]. As an approximation of members’ socio-economic status, 61% of BGC youth receive free or reduced-price school lunches, for which eligibility is based on federal poverty guidelines. To achieve their mission to enable young people to “reach their full potential as productive, caring, responsible citizens”, BGC provides positive and safe places to learn, be with friends, and develop relationships with caring adults. The BGC offers “unstructured, drop-in, recreational” activities [52] (p.
52) as well as structured programming aligned with its five focal areas: character/leadership, education/career, health/life skills, the arts, and sports/fitness/recreation [51]. Science, Camera, Action! (SCA) was an after-school program that aligned with BGC structured education-oriented programming by pairing climate change science education with photovoice methodology. Throughout the program, participants engaged with topics of global climate change (e.g., ecosystems; the greenhouse effect) and sustainable solutions (e.g., energy use; teamwork and leadership) as well as digital photography (i.e., photovoice), while being encouraged and assisted as they developed and implemented action plans in their families and communities. The present research aligned with regional BGC efforts to integrate “STEAM” programming into their clubs. STEAM is science, technology, engineering, and mathematics (STEM) combined with the arts. Designed and implemented by the first author, SCA took place for one hour weekly over a period of 15 weeks in 2016 (January to May). Program content and activities were shaped by the ‘Head, Hands, and Heart’ model for sustainability education, which underscores the transformative potential of simultaneous cognitive, behavioral, and affective engagement [49]. Using these dimensions, key program components are described below.

3.1. Science: Cognitive Engagement

SCA’s educational program content consisted of six activities integrating the scientific and social dimensions of climate change by demonstrating the relationships between Earth’s changing climate, the functioning of local ecosystems, and the actions of individuals and communities. In the framework of ‘Head, Hands, and Heart,’ SCA’s Science component encouraged children to think critically and systematically (“Head”) about the problem of climate change (e.g., causes and consequences) and its many solutions through human action.
Hands-on activities also introduced children to relevant STEM fields (e.g., ecology, climatology) and communicated how various STEM careers affect communities and improve lives.

3.2. Camera: Affective Enablement

Digital cameras were distributed at the conclusion of each of the six educational activities (one per week), and children were prompted to photograph images conveying their views of and connections with the week’s topic. Three subsequent photovoice sessions, scheduled at regular intervals across the 15-week program, allowed children to reflect on what they learned and the connections represented in the images, to narrate their photos, and to discuss the connections between their own and others’ photographs and experiences. The final photovoice session involved identifying common themes discussed during photovoice sessions and translating themes into action plans [15]. In this way, the photovoice methodology bridged educational activities with children’s action projects [53]. Photovoice is typically employed as a participatory action research method but has also been adopted as an equity- and empowerment-oriented pedagogical technique [54]. When used as a pedagogical technique, photovoice has the potential to support learners to make personal connections to disciplinary content (e.g., [55]), to recognize the value of their subjective experiences, and to empower them to conceptualize “new and reflective ways to perceive their own world and the science around them, as well as the potential to generate change in their own community” [56] (p. 340). Regardless of its application, photovoice is a powerful tool that promotes critical and reflexive group dialogue. Participants use photographs as representations of important issues to reflect on community strengths and concerns and collaborate to engage in action to advance social change [57].
In the framework of ‘Head, Hands, and Heart,’ the photovoice method encouraged participants to experience connection (“Heart”) to their surroundings through deeper awareness of the interconnected nature of ecological systems and their own place in them. Moreover, photovoice was intended to facilitate children’s ability to make connections between their own lives and SCA’s science content, which served to make seemingly distant and abstract science concepts feel both more personally relevant and more concrete.

3.3. Action: Behavioral Enactment

Youth-led action projects included: (1) Family action plans, crafted by each child in response to personalized carbon footprint feedback, emphasizing behavior change toward sustainability; and (2) Community action projects, planned and implemented by each group of children, towards advancing sustainability through community advocacy and action. In the framework of ‘Head, Hands, and Heart’, these family action plans and community action projects each enabled children to deeply and actively engage with the learned climate change concepts (“Hands”) through everyday practices and innovative projects. For each of the three community action projects, there are outcomes that continue to be felt nearly four years later. Children in a small politically conservative agricultural community prepared a speech that described climate change and some of its global and local impacts. This speech, presented to 60 officials and community members at a city council meeting, included an appeal for permission to begin a tree planting campaign. Not only were the children given approval; when trees were planted in a local park, they were accompanied by a plaque commemorating the children’s environmental stewardship [15]. Children at another site created an education- and action-oriented website designed to raise awareness about climate change and inspire action within their community and beyond.
At a gallery event to launch the website, a selection of children’s photographs—matted and mounted with titles and short narrative descriptions—was put on display to convey participants’ personal connections to climate change topics. Children served as docents to over 100 visitors, discussing the meaning of the photographs and directing visitors to the newly unveiled website to learn more. At the third SCA site, children revitalized an abandoned and overgrown garden on the BGC property. After preparing the garden site (e.g., weeding, turning the soil, spreading compost), children planted more than 100 fruit and vegetable plants. At harvest time, not only did BGC member families and the community have access to fresh local produce, the older children used the produce in educational healthy-eating activities for younger BGC members. In planning for the future, SCA participants created a BGC garden club for all ages to ensure the ongoing maintenance and use of the restored garden space [15]. Inspired by the SCA garden, at least four additional at-home gardens were established that summer by participants’ families.

4. The Present Study

This mixed-methods study used surveys and focus groups to explore the impact of SCA on children’s science engagement. Learner engagement is a multidimensional construct comprising interrelated cognitive, affective, and behavioral dimensions [58,59]. For the purposes of this study, “science engagement” encompassed children’s perceptions, attitudes, and behaviors related to science. Children’s perceptions of science included how they thought about science (and scientists) before and after the program as well as their general regard for science. Children’s attitudes towards science included the extent to which they viewed science as interesting, appealing, and important in school, career, and societal contexts.
Conceptually, children’s perceptions and attitudes differ in the sense that perceptions encompass mostly knowledge and beliefs, whereas attitudes entail evaluative judgments and feelings. Finally, the behavioral dimension of science engagement was explored through children’s narratives of school-based science participation and achievement. The present study was guided by three research questions:

1. How did SCA influence children’s perceptions of science?
2. How did children’s attitudes towards science change following SCA?
3. How did children describe the influence of SCA on their behavioral engagement with science?

5. Methods

5.1. Participants

Participants were 55 children (52.7% girls; n = 29), ages 10 to 13 (M = 11.1), who attended one of the three partnering BGC units. For socio-demographic characteristics by research site, see Table 1. Participants were recruited during BGC site visits, through flyers, and via letters to parents. Participation in both SCA and this study was voluntary, and parental consent and youth assent were obtained for all participants. This study was approved by the university’s institutional review board.

Table 1. Socio-demographic characteristics by research site.
Characteristic               Town (n = 9)    City (n = 19)   Suburb (n = 27)  Total (n = 55)
                             n      %        n      %        n      %         n      %
Gender
  Girls                      7      77.78    12     63.16    10     37.04     29     52.73
  Boys                       2      22.22    7      36.84    17     62.96     26     47.27
Age
  10                         4      44.44    6      31.58    13     48.15     23     41.82
  11                         1      11.11    3      15.79    7      25.93     11     20.00
  12                         3      33.33    7      36.84    6      22.22     16     29.09
  13                         1      11.11    3      15.79    1      3.70      5      9.09
  Average Age                11.11 years     11.37 years     10.81 years      11.05 years
School Grade
  4                          2      22.22    4      21.05    12     44.44     18     32.73
  5                          2      22.22    7      36.84    6      22.22     15     27.27
  6                          5      55.56    4      21.05    8      29.63     17     30.91
  7                          0      0.00     4      21.05    1      3.70      5      9.09
Race/Ethnicity
  White                      3      33.33    9      47.37    19     70.37     31     56.36
  Hispanic/Latino            3      33.33    6      31.58    5      18.52     14     25.45
  Multiple Ethnicities       3      33.33    4      21.05    1      3.70      8      14.55
  Other                      0      0.00     0      0.00     2      7.41      2      3.64
Free/Reduced Price Lunch     4      44.44    17     89.47    13     48.15     34     61.82
Single Parent Household      3      33.33    10     52.63    11     40.74     24     43.64

5.2. Data Sources and Analysis Procedures

To explore the impact of the program on children’s science engagement, pre- and post-program surveys included scales measuring children’s attitudes towards school science, attitudes towards the societal implications of science, and attitudes towards careers in science [60], as well as one prompt that asked children to report their most recent overall grade in science class. In the post-survey, children were asked to respond, yes or no, to whether SCA helped them to “like science more”, and to write about why. Also in the post-survey, one open-ended item explored children’s career aspirations. Post-program focus groups were conducted to further explore this study’s research questions, as well as to clarify and expand on survey findings [61,62]. Specifically, a portion of the focus group guide examined children’s thoughts and feelings about science before and after their program participation. In total, 11 focus groups were conducted, averaging four to five children each and lasting an average of 38 min.
Focus groups were audio-recorded, transcribed verbatim, edited for accuracy, and then entered into NVivo 10 software [63] for analysis following the process and rules of thematic analysis [64].

6. Results

Findings are organized into three sections aligning with this study’s research questions. The first section explores the cognitive dimension of children’s science engagement by examining children’s perceptions of science (i.e., thoughts, beliefs) before and after the program, including how and why SCA helped them to like science more. The second section explores the primarily affective dimension of children’s science engagement by examining differences in their attitudes towards science. Lastly, the third section examines the behavioral dimension of children’s science engagement by assessing differences in children’s school science achievement as well as their self-reported classroom behavior and career choices. Each section begins with quantitative survey results, followed by focus group findings, which serve to clarify and expand the survey results.

6.1. Children’s Perceptions of Science

This study’s first research question aimed to explore children’s general perceptions of science, or “the way[s] in which [science] is regarded, understood, or interpreted” by children [65]. Perceptions of science encompassed general thoughts and beliefs about what science entails, who needs science and why, and the relevance of science to their own lives.

6.1.1. Survey Results

Recognizing that children’s general perceptions of science may have influenced whether or not they participated in the program, one open-ended survey item asked children about their motivations for joining SCA. The most common response category was SCA’s digital photography component (n = 23, 41.8%), followed closely by participants’ fondness for science (n = 21; 38.2%).
Other reasons for joining SCA included children’s: belief that SCA would be fun or interesting (n = 15; 27.3%), love for nature (n = 5; 9.1%), eagerness to learn (n = 5; 9.1%), interest in action (n = 3; 5.6%), and desire to be around friends (n = 3; 5.6%). After the program, children were asked “Did Science, Camera, Action! help you to like science more?” and were then prompted to provide an open-ended explanation of their response. Separate thematic analyses [64] were conducted for the “Yes” and “No” groups. Most children (n = 46; 88.5%) indicated that SCA did, in fact, have a positive impact on how they regard science. Among the remaining participants (n = 6; 11.5%), several described their love of science as a motivator for joining SCA. Consistent with pre-survey findings, these children perceived SCA to be a venue for engaging in science programming aligned with their existing interests. Of the children who reported that SCA helped them to like science more, most said it was because: (1) SCA was fun and they learned science could be fun; (2) they learned new things in SCA; and (3) they gained a better understanding of the applicability of science to real-world problems. A summary of thematic analyses of children’s explanations, along with representative quotations, is provided in Table 2.

6.1.2. Focus Group Findings

Focus group discussions explored children’s perceptions of science before and after the program. Before SCA, children’s knowledge about, and images of, science and scientists ranged widely. While some valued science as important to know and relevant to their lives, others expressed less familiarity with science. Miguel described his limited exposure to science at school, and Theo reported not knowing a lot about science, while Gabe viewed it as extremely important to society.

I don’t do science at school. —Miguel (12)

I don’t know much about science.
—Theo (10)

Overall, I think science is a big help to the human race, and without it, we’d not be where we are now. —Gabe (12)

A few children explained that SCA expanded their perspectives on science, particularly which types of problems are dealt with by science and how scientists do their work. Some began with simplified impressions of science. To Theo, science was about “making rockets fly”. Without having a class in school dedicated explicitly to science, Miguel perceived science to be “all about experiments”. Olivia and Nora had similar impressions, sharing that before participating in SCA, they understood science to take place “indoors”, such as in laboratories, and focus on “inside” things rather than the environment.

I thought that science was just like an indoors thing . . . Like science experiments and stuff? I didn’t know it had anything to do with the outdoors or anything . . . We don’t need to mix stuff together to make science. —Olivia (12)

I thought it was like . . . I didn’t know that science was like outside things. I thought that was social studies. Social studies and science are two different things. It confused me at the beginning of the program, but I kind of get it now. —Nora (12)

By including nature in their concept of science, both Olivia and Nora gained more expansive views of what science entails. Olivia remarked that, “Science is actually all around the world”. Nora said, “Science opened my mind . . . Science is a bigger topic than [I thought]”. In the following exchange, three additional participants, all girls, agree that anyone can do science, and that science is much more than “chemicals and labs”.

Riley (10): At first, I just thought scientists could do science and you had to be a scientist or grow up to be one. But now I know that you don’t have to be a scientist, you can be anyone [and do science].
Aubrey (11): Like Riley said, it doesn’t matter if someone is a scientist or not because, at the beginning, I thought, like Riley, “You have to be a scientist to know what you’re doing”. But I learned that if you have enough experience, you don’t have to be a scientist . . . You can do all this stuff.

Charlotte (10): When I hear the word “science”, I think of like chemicals and like labs, but then we’re going through this program and it’s not just chemicals and labs. It’s the Earth and it can be—

Riley: Anything!

Charlotte: —It could be plants, the sky. It could be . . . That can be science.

Riley: Climate change . . . Inventions. It’s so magical.

For some, science was interesting because scientific innovation was understood to have a significant impact on people’s lives, including the need for science in addressing climate change.

I think science makes Earth cool because, with science, people can change a lot of things, like how we do this or how we do that. —James (11)

[SCA] changed how I felt because now I know that science is all around us and we can do science stuff to help the environment and to help the Earth be healthy and for us to be able to live without any of this bad stuff. Also, that sometimes science can do bad things to the Earth, but if you do more science then it will help fix it, too. —Olivia (12)

Eleven-year-old Grace explained that SCA enhanced her views of the importance of science. As she put it, “I used to think that science wasn’t that important and now I know it’s really important and that we can help”. Not everyone’s views of science changed. For example, 10-year-old Ben said he “[didn’t] really think of science differently” because, as he put it, “scientific studies . . . can be about anything really”.

Table 2. Thematic analysis of Science, Camera, Action! (SCA)’s impact on participants’ perceptions of science.
Thematic Categories & Representative Quotations †                          n ‡ (%)

The program helped me to like science more because . . .                   46 (88.46)

SCA was fun and I learned that science can be fun.                         11 (21.15)
  “Because I now know science can be FUN!” —Ali, 12
  “Because we did fun activities.” —Riley, 10
I learned new things in SCA.                                               10 (19.23)
  “I learned things I never knew!” —George, 11
  “Because I had learned more about my subjects in school.” —Lexi, 11
SCA helped me understand the applicability of science.                     9 (17.31)
  “Yes, because science can help the world.” —Gabe, 12
  “Yes, because I like helping other people, and science helps people.” —Maria, 10
It gave me ideas for action-taking to benefit the environment.             6 (11.54)
  “[SCA] helped me learn what I could do to help.” —Tim, 11
  “Because we can save our ecosystem.” —Henry, 10
SCA made science more interesting.                                         4 (7.69)
  “Because I slept through class in school. Now I don’t.” —Nora, 12
  “[SCA] helped me like science more because I know there is a point to it.” —Noah, 10
It built on my existing enjoyment of science.                              4 (7.69)
  “It allowed me to do a lot of science.” —Owen, 12
  “Because it made me enjoy the science even more than I did.” —Bill, 13
It helped me to understand science as a career.                            2 (3.85)
  “Because it taught about science. Now I kind of want to be a scientist.” —Carlos, 10
  “Because it helps to know what to do if you become a scientist.” —Olivia, 12

The program did not help me to like science more because . . .             6 (11.54)

I already liked science.                                                   4 (7.69)
  “I liked science already too much to add to.” —Abigail, 12
  “SCA is great, but my love for science is too strong already.” —Scarlett, 12
The program could be improved.                                             1 (1.92)
  “It didn’t really have interesting activities.” —Ben, 10
I just don’t like science.                                                 1 (1.92)
  “Not really. I still hate science!!” —Kelly, 12

Note: n = 52; † Categories appear in order of descending prevalence.
Participant responses could be categorized into more than one response type; bold headings indicate whether perception change occurred; italics indicate thematic category, followed by direct quotes. ‡ n (%) = number of participant responses corresponding with each thematic category, followed by the percentage of full sample coverage.

6.2. Children’s Attitudes Towards Science

This study’s second research question explored how the program influenced children’s attitudes towards science. Attitudes are simultaneously cognitive and affective in nature and refer to a “general evaluation of an object, person, group, issue, or concept on a dimension ranging from negative to positive” [66]. Attitudes are “feelings” towards the attitude object that are grounded in perceptions (see previous section) and lead to behavior (see next section).

6.2.1. Survey Results

A combined 15 items on the pre-post questionnaire asked children about their attitudes towards school science, science careers, and the societal implications of science. On all constructs, responses ranged from 1 (“Strongly Disagree”) to 5 (“Strongly Agree”), with higher scores indicating more positive attitudes.

Attitudes Towards School Science

The “Attitudes Towards School Science” (ATSS) scale [60] consists of seven items (αpre = 0.88; αpost = 0.82). Children’s school science attitudes were very positive overall, though they were more positive following program participation (M = 4.41, SD = 0.54), compared to before (M = 4.25, SD = 0.70). A paired-samples t-test was conducted to assess changes in ATSS following program participation. Results of the t-test revealed that the mean increase of 0.16 in children’s ATSS, 95% CI [0.02, 0.30], was statistically significant, t(52) = 2.22, p = 0.031, d = 0.30 (see Table 3).

Table 3. Summary of paired-samples t-tests for science attitudes and grades.
Variable                                             Pre          Post         MD      t      df   p          95% CI         Cohen’s
                                                     M (SD)       M (SD)                                      LL      UL     d
Attitudes Towards School Science                     4.25 (0.70)  4.41 (0.54)  +0.16   2.22   52   0.031 *    0.01    0.30   0.30
Attitudes Towards Careers in Science                 3.73 (0.78)  4.02 (0.71)  +0.29   2.96   51   0.005 **   0.09    0.49   0.41
Attitudes Towards Societal Implications of Science   4.12 (0.70)  4.33 (0.56)  +0.20   2.13   53   0.038 *    0.01    0.40   0.29
Science Grades                                       7.20 (2.48)  8.02 (2.10)  +0.81   2.19   53   0.033 *    0.07    1.56   0.30

Note: * p < 0.05; ** p < 0.01.

Attitudes Towards Science Careers

Five items assessed children’s “Attitudes Towards Careers in Science” [60]. Internal consistency correlations were acceptable to good (αpre = 0.73; αpost = 0.60). Children’s attitudes towards science careers were more favorable after the program (M = 4.02, SD = 0.71), compared to before (M = 3.73, SD = 0.78). A paired-samples t-test was conducted to assess differences in children’s attitudes towards science careers prior to and following their participation in the program. The mean increase of 0.29 in children’s attitudes towards science careers, 95% CI [0.09, 0.49], was statistically significant, t(51) = 2.96, p = 0.005, d = 0.41. Differences in children’s science attitudes by research site are summarized in Table 4.

Table 4. Summary of descriptive statistics for children’s science engagement.
Attitudes Towards School Science (7) a
  Town (n = 9):     Pre 4.08 (0.76)   Post 4.40 (0.40)   MD +0.32
  City (n = 19):    Pre 4.21 (0.72)   Post 4.29 (0.80)   MD +0.08
  Suburb (n = 27):  Pre 4.29 (0.79)   Post 4.43 (0.53)   MD +0.14

Attitudes Towards Societal Implications of Science (3) a
  Town (n = 9):     Pre 4.33 (0.76)   Post 4.30 (0.72)   MD −0.04
  City (n = 19):    Pre 4.11 (0.75)   Post 4.37 (0.59)   MD +0.26
  Suburb (n = 27):  Pre 4.01 (0.69)   Post 4.33 (0.51)   MD +0.32

Attitudes Towards Careers in Science (5) a
  Town (n = 9):     Pre 3.87 (0.85)   Post 4.00 (0.81)   MD +0.13
  City (n = 19):    Pre 3.47 (0.86)   Post 3.74 (0.71)   MD +0.26
  Suburb (n = 27):  Pre 3.85 (0.68)   Post 4.12 (0.84)   MD +0.27

Science Grades (1) b
  Town (n = 9):     Pre 3.56 (0.53)   Post 3.78 (0.44)   MD +0.22
  City (n = 19):    Pre 2.89 (1.24)   Post 3.50 (0.71)   MD +0.61
  Suburb (n = 27):  Pre 3.30 (0.72)   Post 3.63 (0.74)   MD +0.33

Science Career Aspirations (1, post only) c
  Town: 66.67% (n = 6)   City: 42.11% (n = 8)   Suburb: 55.56% (n = 15)

Note: a Response range: 1–5, where higher scores indicate more positive attitudes; b Response range: 0–4, where scores are coded as grade point averages (0 = F; 4 = A); c Response range: 0–100% of participants within each group aspiring to a science career.

Attitudes Towards Societal Implications of Science

Three items assessed children’s “Attitudes Towards Societal Implications of Science” [60]. Internal consistency was high (αpre = 0.79; αpost = 0.81). Children’s attitudes in this domain were very positive overall, though they were more positive following program participation (M = 4.33, SD = 0.56), compared to before (M = 4.12, SD = 0.70). A paired-samples t-test was conducted to assess pre- and post-program differences in children’s attitudes towards the societal implications of science. The mean increase of 0.21 in children’s attitudes towards the societal implications of science, 95% CI [0.01, 0.40], was statistically significant, t(53) = 2.13, p = 0.038, d = 0.29.

6.2.2. Focus Group Findings

A portion of the focus group guide explored children’s generalized attitudes towards science.
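The statistics reported in the survey results above (paired-samples t-tests with Cohen's d, and Cronbach's alpha for internal consistency) follow standard formulas. A minimal sketch using hypothetical Likert-type scores — not the study's data — illustrates the computations:

```python
"""Sketch of the statistics reported in Section 6.2.1, on hypothetical data.
Uses only the Python standard library; scores below are illustrative."""
import math
from statistics import mean, stdev

def paired_t(pre, post):
    """Paired-samples t-test: returns t, df, and Cohen's d (mean of the
    pairwise differences divided by their sample SD)."""
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    md, sd = mean(diffs), stdev(diffs)   # stdev uses the n-1 denominator
    t = md / (sd / math.sqrt(n))
    return t, n - 1, md / sd

def cronbach_alpha(items):
    """Cronbach's alpha; `items` is a list of item-score columns
    (one list of participant scores per scale item)."""
    k = len(items)
    item_vars = sum(stdev(col) ** 2 for col in items)
    total_var = stdev([sum(row) for row in zip(*items)]) ** 2
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Hypothetical pre/post attitude scores for 5 children (1-5 scale)
pre, post = [3, 4, 4, 5, 3], [4, 4, 5, 5, 4]
t, df, d = paired_t(pre, post)
print(f"t({df}) = {t:.2f}, d = {d:.2f}")   # t(4) = 2.45, d = 1.10
```

In practice one would use `scipy.stats.ttest_rel` for the test and a psychometrics package for reliability; the sketch only makes the arithmetic behind the reported t, df, and d values explicit.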
When asked about their feelings towards science, several children said they viewed science favorably prior to their participation in SCA. For example, 10-year-old Noah said he “always liked” science, while 13-year-old Matthew said he “already liked science”. More commonly, children reported that SCA enhanced their interest in, and enjoyment of, science. As 10-year-old Lexi put it, “I kind of did not like science before. I do like it now”. Girls and boys across ethnicities, age groups, and research sites explained that SCA either deepened their appreciation of science or changed their attitudes in its favor, whether through their enjoyment of SCA activities or through coming to view science as more accessible, interesting, or valuable.

I didn’t really like science until I actually started to learn more about [it in] the program. —Bryan (10)

What I feel about science now is I like it more than I did before. —Michael (11)

I enjoy science a lot now. It’s one of my favorite subjects now actually. —Sydney (12)

I mean, I liked science but I didn’t like science too much. I didn’t think it was very interesting. I can tell you this much, I like my Geo classes a lot more! —Ali (12)

Some children suggested that SCA captured their interest and held their attention more than school science sometimes did.

At my school, if there’s a topic that we’re talking about that doesn’t interest me ... science is not actually fun for me. But [SCA] made me care a lot about global warming. —Athena (10)

I’ve been learning about [climate change] in class, but I wasn’t paying attention much ... So now I really know what it means and ... how it is. —Luke (11)

When asked to explain whether his views on science had changed overall, Luke added, “Well, I thought that science was kind of boring and you didn’t really have to do it. But when I came here and I knew that it was about climate change and how the world is, I thought of it differently”. Climate change made science relevant.
Grace expressed a similar view, saying, “I didn’t really like [science] before, and I wasn’t interested in it. But now I know that you really need to know about it and you can’t just ignore the changes happening in the world”. For Arie, science went from “not really that interesting” to absolutely essential. As she explained, “Before [SCA], I had thought of [science] as just something to do and something that’s not really that interesting. But now ... I’d rather do science now than pretty much anything else”.

6.3. Children’s Behavioral Engagement with Science

This study’s final research question explored how SCA influenced children’s science-relevant behaviors. Children’s perceptions of, and attitudes towards, science can lead to changes in behavior and behavioral intentions [67]. In this study, children’s behavioral engagement with science before and after the program was assessed in the survey and focus groups by asking children about their academic performance in school science and their career goals.

6.3.1. Survey Results

To assess the behavioral dimension of children’s science engagement, children were asked to report their most recent letter grade in science class before the program (i.e., from the fall term) and after the program (i.e., from the spring term). The pre-survey was administered in January, closely following winter break, and the post-survey was administered in May, closely following the end of the school year. In the post-survey, children were also asked about their career aspirations.

Science Grades

For exploratory purposes, children’s grades in science class, measured before and after the program, were treated as a proxy measure of children’s behavioral science engagement. On the pre-survey, most children reported receiving As (n = 24, 44.4%) and Bs (n = 22, 40.7%) in science class the previous fall.
In the post-survey, 70.4% (n = 38) reported receiving an A grade in science class in the spring term, while 22.2% (n = 12) reported receiving a B. Of the 54 children who completed these items, 20 (37.0%) received improved science grades after the program compared to before, seven (13.0%) received a lower grade, and 27 (50.0%) received the same grade.

Scores ranging from 0 (F) to 10 (A+) were subjected to a dependent-samples t-test to determine pre- and post-program differences. Results of the t-test revealed that children’s science grades improved from the fall term (M = 7.20, SD = 2.48) to the spring term (M = 8.02, SD = 2.10), a statistically significant mean increase of 0.82, 95% CI [0.07, 1.56], t(53) = 2.19, p = 0.033, d = 0.30.

Career Choice

In the post-survey, one open-ended item asked children about their career aspirations. The 55 responses were categorized into major career fields. More than half (52.73%) aspired to a STEM career (see Figure 1). These included careers in physical science (e.g., physicist), earth science (e.g., geologist), space science (e.g., astronomer), and life science (e.g., biologist), as well as applied science careers in engineering, computer science, and medicine.

Figure 1. Children’s career aspirations by major career category.

6.3.2. Focus Group Results

During focus groups, several children reported that their participation in SCA had a positive impact on their achievement in school science. For some, doing better in school was attributed to their enjoyment of SCA. Ten-year-old Lexi said, “I liked ... learning all this stuff and plus I’m ahead in my class”. Others attributed their improved school science performance to an enhanced interest in science, which they gained through SCA.

[After SCA], I enjoy science so much more. Before, I thought science was just one of those things we had to learn and so I wasn’t really interested. I did what I had to do to get a good grade. Before I started Science, Camera, Action!, I started falling behind in science, but after I started the program it helped me catch up [in school]. —Sydney (12)

A number of children reported that SCA content mapped onto current school science topics. As Scarlett, age 12, explained, “[SCA] helped me out in class because we’re kind of learning about the same things at the same times and so I could put more input into my science class because I knew more from here”. Children across the age spectrum identified connections between SCA and school science, which made them feel knowledgeable and better able to learn new things in the classroom.

It helped me learn what we’re actually doing in school. —Daniel (10)

In science, sometimes I don’t know the answers, and now I know a lot more answers about carbon dioxide and that stuff.
—Jack (11)

With all that ... I’ve learned here, I feel like it’s kind of helped me with my learning ... Because the time that I’m here ... I had time to really understand what I need to in science or social studies. —Wayne (12)

Several children described feeling more confident in science, which made them more likely to actively participate in science class. As Wayne continued, “I feel like it was easier for me to open up [and say] what I learned [here] at my school and stuff”. When asked whether SCA influenced her self-confidence, 10-year-old Peyton said, “When we read books [in science class], they would ask us questions on the side of the books. And I was usually the one that would be most confident to raise my hand and tell them what I know about”. After participating in SCA, Scarlett and Arie also felt more confident communicating about science.

Every year ... we do the school science fair. SCA gave me more ideas for the science fair and gave me more confidence in myself so I could present it to everyone. —Arie (10)

I learned how to better communicate what I meant ... because, when we were learning about [climate change] in school, I didn’t know to say certain terms. Or how to [choose] my words so that it made sense or got my point clear. I felt like this program really helped me realize how to tell better on what I learned and what I know. How to put that into real life. —Scarlett (12)

A couple of children explained that their increased interest and confidence in science, gained through SCA, helped them feel more at ease on school science tests and standardized tests.

Well, I wasn’t really into science [before the program]. But after I got more into science, it actually made me feel better on my tests when I had to take tests. —Bryan (10)

I thought [the program] did help because ... we had TCAP [the Transitional Colorado Assessment Program test], and doing this program actually helped me feel more confident on one of the tests.
Some people were like, “I don’t want to take the test because I don’t know a lot about science”. But I was ... excited because I know about it. —Peyton (10)

Others reported getting better grades in science. For Cristy, it was a matter of paying more attention in science class. Jimmy thought joining SCA might help boost his science grades; after the program, he said his science performance had improved by a full letter grade.

Cristy (11): I pay attention to class now. I’m getting an A.
Ali (12): I’m getting a B.
Jimmy (10): Before the program, I would usually get C-pluses or C-minuses and now I’m getting either B-pluses or A-minuses.

7. Discussion

The present study explored whether and how SCA—an after-school program focused on climate change learning and action—impacted children’s science engagement along cognitive, affective, and behavioral dimensions. SCA combined hands-on climate change educational activities with digital photography (i.e., photovoice methodology) to simultaneously explore and expand children’s role as change agents for sustainability in both family and community contexts.

Prior to SCA, children’s survey-based attitudes towards science were, on average, generally positive. For more than a third of participants, joining SCA was at least partially due to their fondness for science. However, not everyone favored science prior to the program. Although few articulated an explicit dislike for science during focus groups, many discussed their previous indifference or narrow definitions of science. Some children described inattention and poor performance in school science, while others said they completed class requirements satisfactorily, but with little enthusiasm.

Following SCA, children’s perceptions of science had expanded beyond indoor laboratory-based science to include the outdoors and their everyday environments.
Through SCA, science became relevant to their lives, and their attitudes towards science (i.e., in school, careers, and in society) improved significantly. In short, climate change made science interesting and important. The vast majority of the children reported that SCA helped them to like science more, and following the program, more than half aspired to a STEM career. In sum, climate change learning and action became an avenue towards children’s increased science engagement.

7.1. Strengthening Children’s Science Engagement through Climate Change Learning and Real-World Action

One reason for SCA’s positive impact on children’s science engagement—even among those who already liked science—could be that the program’s content and format diverged from traditional school science in important ways. In particular, SCA emphasized the connections between science and everyday life through place-based, participatory, and action-focused programming. In formal classroom settings, science topics can often be perceived as disconnected from real-world issues [68]. Learning about socio-scientific issues such as climate change, however, can crystallize the connection between what children are learning in the science classroom and their own everyday lived realities [39]. After teaching about atmospheric processes (i.e., the greenhouse effect), SCA brought climate change “down to Earth” through activities focused on people, plants, and animals. Further, SCA brought science content into children’s everyday environments through place-based content focused on local ecosystem impacts.

A previous article in this series showed that not only did children demonstrate significant knowledge gains through their participation in SCA, they also felt motivated to act on this knowledge, and doing so strengthened children’s sense of agency to make a difference on climate change [16]. School-based science curriculum is not often associated with action-taking on learned concepts, particularly in U.S.
science classrooms [67,68]. An exclusive focus on the cognitive dimensions of science learning, without connecting science topics to students’ civic engagement, “isolates scientific knowledge and practices from individuals’ lived experiences and the immediacy of community life” [69] (pp. 287–288). These researchers have advanced the concept of educated action in science, which “requires both knowing and doing ... the capacity to leverage scientific knowledge and practices to inform action(s) taken” (p. 287). Beyond SCA’s place-based content, it was likely this pairing of knowing with doing that strengthened children’s science engagement. They understood how climate change—and thereby science—was important and relevant to their own everyday lives and behaviors.

Implementing programming like SCA in the formal science classroom, however, is impeded by a number of factors. Some students in this study shared that they did not have science class at school. While many elementary-aged students receive the recommended 30 min per day [70], infrequent classroom science instruction is not uncommon. This is particularly true in schools not meeting benchmarks on high-stakes tests, where instructional time has been shifted away from science in favor of literacy and mathematics [71–73]. Further, science teachers who want to implement projects or link learning to action may face an uphill battle. Teachers face demands to teach in certain ways, to cover certain topics, and—implicitly or explicitly—are discouraged from slowing down to dive deeply into topics or do open-ended projects. Indeed, “implementing project-based science curriculum is challenging in the context of standardized tests, 45-min class periods, large class sizes, and the emphasis on individual grades” [74] (p. 455).
The SCA program—having taken place outside the formal classroom—undoubtedly benefited from increased flexibility on these dimensions, which have been associated with successful science learning outcomes in informal contexts [75–77]. At present, school science policies and practices emphasize the role of education in preparing “future citizens”, rather than creating opportunities for children’s educated action now. This focus on preparation (e.g., via testing) is a barrier to children’s full science engagement. While informal learning spaces that can offer children empowering and constructive ways to learn about climate change are paramount [15], informal and after-school programming alone is not the answer. To adequately address sustainability challenges and to make engaged science the norm, there is a need for larger-scale policy change focused on school reform that recognizes children’s capacities to be change agents in their communities. Such policies would support teachers in evidence-based instruction and real-world action, making science relevant to the lived realities of learners and their families. Deliberately inviting young people to think about and act on critical societal and global issues—beyond advancing children’s sociopolitical inclusion—is a first step towards repositioning science education at the heart of the necessary societal transformation to sustainability.

7.2. Promoting Diversity in Science through Climate Change Learning and Collaborative Action

Findings of the present study suggest that, through their participation in SCA, children came to view science as more interesting, accessible, and important. For many, this was due to an expanded view of the scope of science inquiry, who can be a scientist, and how science connects to their lives. Perspectives shifted beyond stereotypical views of scientists in the laboratory or building rockets to scientists whose work takes place outdoors and deals with environmental aspects of everyday life.
After SCA, some children saw science all around them. This enlarged view of science made it fascinating, and its role in understanding and addressing climate change made it valuable. Although connections between science attitudes and attitudes towards climate change are under-studied, the two have been shown to be weakly but positively correlated [60]. In this study, knowing about climate change made science important, a finding that resonates with previous studies documenting the expanded significance of science topics when implications are considered beyond the confines of the classroom [39,78].

Viewing science as more approachable and appealing translated into youths’ increased confidence and performance in school science. They reported being more engaged. For some children, greater self-confidence and enthusiasm made active participation in science class less effortful, and science tests less daunting. A few participants attributed better grades in science to their participation in SCA, while surveys showed significantly improved science grades among participants overall. Most children left the program aspiring to a science career of some kind, representing a variety of subfields.

These findings are encouraging given the socio-demographic composition of children in SCA, many of whom were from groups underrepresented in science. SCA’s participants were mostly girls (52%), nearly half were youth of color (44%), and a majority were from low-income households (61%). Issues of equity, access, identity, and confidence still impede the science engagement of underrepresented groups such as girls, racial and ethnic minorities, and economically disadvantaged students [79–81]. From early adolescence, girls express less interest in math and science careers compared to boys [82], with gender differences in STEM self-confidence beginning to emerge in middle school [83].
This makes upper elementary and early middle school, the age range of SCA participants, a critical stage for girls’ science interest and confidence. Youth of color, despite showing increased interest in science at earlier educational stages, continue to be underrepresented in science higher education and careers [84]. Finally, low-income youth often have less access to science enrichment opportunities and after-school activities, and are more likely to attend schools with insufficient resources to support science learning [80,85]. To date, most research on diversifying the sciences looks at marginalized groups based on single identities (e.g., girls or youth of color). It is worth noting that many SCA participants had multiple marginalized identities (e.g., low-income girls of color) and face a combination of barriers to their interest in, and pursuit of, science higher education and careers [86]. In this context, climate change learning and action became an avenue through which to markedly strengthen their overall science engagement.

It is possible that a critical element supporting children’s science engagement in this study was its collaborative action component. Research on goal congruity in STEM education contends that students’ educational and career choices are affected by how much they perceive a career path to align (or not align) with their life goals [87]. For example, to the extent that STEM careers are viewed as fulfilling communal goals—of working alongside or helping other people—they are more appealing to girls and to many first-generation college students whose socialization emphasizes a communal orientation [87,88]. Similarly, altruistic goals are associated with STEM career interest among underrepresented minority students [85].
In SCA, likewise, children’s recognition that science was relevant to their everyday lives supported their overall engagement in the program, and their full-cycle participation provided opportunities for individual and collaborative climate change action, which further emphasized the connections between science learning and addressing real-world challenges. In particular, SCA’s collaborative action component was framed in terms of community service and action, which may have contributed to the shift in children’s perceptions about the importance of science in society.

To date, when climate change education is paired with an action component, most often it is focused on individual behavior change rather than collaborative sustainability action [42]. In action-focused climate change programming, offering children opportunities to engage in collaborative community-focused action is important because—compared to promoting individual behavior change, which is often framed in terms of personal responsibility—it more accurately frames climate change as a complex, global issue requiring collective, coordinated action region by region. As this study suggests, collaborative action may also play a critical role in helping children from groups underrepresented in science to reimagine science as fitting with their other-oriented goals (i.e., to be communal, altruistic). Through climate change learning and informed action, children were able to see how science permeates every aspect of their daily experience, and many were able to view themselves as future scientists. Inviting children to participate as co-researchers and collaborators in making sense of and acting on sustainability challenges is an additional step towards their full sociopolitical inclusion. Importantly, through SCA, climate change became a portal through which children were able to rethink who can do science, how, and why.

7.3.
Transforming Science Education through Climate Change Learning and Action

So far, we have discussed the transformative potential of action-focused climate change programming in terms of its capacity to deepen students’ engagement with science. Resonating with previous research [39,68], this study’s results show that when children perceive science as relevant to their lives and connected to social change action, its value and attraction grow. As mentioned, this may be especially appealing to children from backgrounds underrepresented in science, an effect which itself could transform the discipline [85,87].

Beyond transforming children’s views of science, on a much broader level, action-focused climate change learning has the potential to transform science education in terms of its role in society. Rather than focusing on the STEM “pipeline” and children’s “potential” futures, science education could be a societal force for positive social change and for building cultures of sustainability today. Doing so would mean making visible the inherent interconnectedness across disciplinary ‘subjects’ and emphasizing children’s participatory action in addressing sustainability challenges in local settings. Sustainability has long been a key site of disciplinary re-integration [12]. Findings of the present study suggest that action-oriented climate change learning can help learners draw linkages across fields framed in the classroom as disparate or disconnected, helping them to better understand and act on sustainability challenges in meaningful ways. Despite having been enculturated into the world of disciplinary silos in the form of school subjects, through SCA, children made connections between, for example, science and social studies through the lens of climate change.
Advocates and scholars of scientific literacy have argued for years that traditional disciplinary boundaries are not only arbitrary, but also impede deep understanding of complex socioscientific issues like those related to sustainability [36,37,89]. That SCA participants were seeing the socio-ecological complexity of climate change and the interconnectedness between the sciences and other fields was a key strength of the program [23,24]. Adequately addressing sustainability challenges will require the participation of diverse fields, and appreciating these connections is critical [90,91]. Towards positioning children as agents of change, action-oriented climate change learning can prompt children’s awareness of the inseparability of school subjects when focusing on complex environmental problems.

Finally, by taking action on learned concepts, children were reframing the meaning of science education for themselves as well as for their families and communities. The SCA program allowed children to engage with science on their own terms through voluntary participation, digital photography, and youth-designed action projects. Importantly, children designed and implemented their own community-focused sustainability projects. Following the program, children reported that they had fun during SCA activities, which made science enjoyable and approachable, rather than boring or intimidating. According to Riemer and colleagues [92], the most successful non-formal youth-based environmental engagement programs tend to provide youth the opportunity to “define the context of their participation” and “act as co-creators or partners” in projects that bring about meaningful change to the youth as individuals or to the communities to which they belong (p. 570). To the families and communities impacted by children’s projects, SCA represented a “science program” with a tangible impact beyond children’s learning.
By working to address sustainability challenges in locally meaningful ways, SCA became an example of educated action in science [69]; children’s science knowledge opened up the possibility for their informed action. Rather than framing science education as a pathway to a “competitive edge” grounded in neoliberal logic, this approach to learning positions science education at the center of a broader, more inclusive and collaborative process of cultural transformation to sustainability—one that opens up the possibility of supporting human life and the health of ecosystems amidst unprecedented environmental degradation [5].

8. Limitations

Findings of the present study should be viewed within the context of its limitations. First, this study’s non-experimental research design calls into question whether the effects attributed to SCA were, in actuality, due to the influence of the program. A strength of this study’s mixed-methods design, however, was that qualitative analyses of focus group discussions clarified the diverse ways that children’s perceptual, attitudinal, and behavioral changes were directly tied to program content.

Another limitation is this study’s small sample size, which precludes robust analyses of effects by sub-group (e.g., demographic characteristics). A further limitation is that, because children self-selected into the program, most already held positive attitudes towards science. As stated previously, children’s enjoyment of science was a main reason for enrolling in the program. Children’s voluntary participation was a strength of the program, and future research might explore whether children with less positive views of science show similar gains in science engagement. Relatedly, children’s preexisting positive views of science may be a reason for their high science grades both before and after the program.
As with all findings in this non-experimental study, it is not possible to say with confidence that changes over time are attributable to children’s program participation. Future studies should, where possible, obtain such data from primary sources (e.g., report cards) rather than relying on self-report. Further, more comprehensive documentation of the content of children’s school-based learning should be both acquired and accounted for in analyses.

9. Conclusions

As climate change continues to destabilize ecosystems, economies, and societies around the globe, climate scientists and sustainability scholars have continued to make urgent calls, as they have for decades, for rapid societal transformation to sustainability. In this paper, we argue that science education is a key venue for this transformation. Specifically, science education could address the need to substantively involve children and young people in climate change learning and action towards redefining science not as a pathway towards endless competition and domination, but as a collaborative process to envision and enact sustainable futures today. Reimagining science, we argue, begins with forms of engagement that allow children to think about science—and science education—in new ways. The present research explored participatory and action-focused pedagogies that, by positioning children as critical actors for sustainability, simultaneously sought to strengthen and reimagine children’s science engagement for the well-being of people and planet. Findings of the present study suggest that climate change learning and action can support children’s engagement with science by emphasizing its real-world significance and by connecting learning with collaborative, community-based action. Making such practices accessible to students in the formal science classroom, we have argued, would require broad shifts in school science policy and practices.
Doing so, however, would be worth the effort, as the stakes could not be higher [5]. Climate change is increasingly referred to in terms of “crisis” and “chaos”. Whereas “crisis” means a turning point, the point after which things get better or worse, “chaos” refers to an opening or empty space. Some have argued that our position on the precipice of irreversible changes to the climate system is a window of opportunity for transformative change—the kind that promotes the flourishing of human societies and ecosystems. A science education that rises to today’s challenges by opening up space for children to be critical actors for sustainability in their communities could be decisive in creating the future that is to be.

Author Contributions: Conceptualization, C.D.T.; data curation, C.D.T.; formal analysis, C.D.T.; funding acquisition, C.D.T.; methodology, C.D.T.; project administration, C.D.T.; writing—original draft, C.D.T. and A.E.W.; writing—review and editing, A.E.W. All authors have read and agreed to the published version of the manuscript.

Funding: This research was funded through small grants by the National Oceanic and Atmospheric Administration (NOAA) Climate Stewards Education Project; the Society for the Psychological Study of Social Issues (American Psychological Association [APA] Division 9); and the Society for Community Research and Action (APA Division 27).

Conflicts of Interest: The authors declare no conflict of interest.

References

1. Schreiner, C.; Henriksen, E.K.; Kirkeby Hansen, P.J. Climate education: Empowering today’s youth to meet tomorrow’s challenges. Stud. Sci. Educ. 2005, 41, 3–49. [CrossRef]
2.
National Science and Technology Council: Charting a Course for Success, America’s Strategy for STEM Education. Available online: https://www.whitehouse.gov/wp-content/uploads/2018/12/STEM-Education-Strategic-Plan-2018.pdf (accessed on 22 June 2020).
3. National Academies of Sciences, Engineering, and Medicine. Developing a National STEM Workforce Strategy: A Workshop Summary; The National Academies Press: Washington, DC, USA, 2016.
4. Hodson, D. Time for action: Science education for an alternative future. Int. J. Sci. Educ. 2003, 25, 645–670. [CrossRef]
5. Intergovernmental Panel on Climate Change: Special Report on Global Warming of 1.5 °C. Available online: https://www.ipcc.ch/sr15/ (accessed on 22 June 2020).
6. DeBoer, G. A History of Ideas in Science Education; Teachers College Press: New York, NY, USA, 1991.
7. National Science Board: Our Nation’s Future Competitiveness Relies on Building a STEM-Capable U.S. Workforce. Available online: https://www.nsf.gov/nsb/sei/companion-brief/NSB-2018-7.pdf (accessed on 22 June 2020).
8. Plutzer, E.; Hannah, A.L.; Rosenau, J.; McCaffrey, M.S.; Berbeco, M.; Reid, A.H. Mixed Messages: How Climate Is Taught in America’s Schools. National Center for Science Education. Available online: http://ncse.com/files/MixedMessages.pdf (accessed on 22 June 2020).
9. Domestic Policy Council: American Competitiveness Initiative: Leading the World in Innovation. Available online: https://georgewbush-whitehouse.archives.gov/stateoftheunion/2006/aci/aci06-booklet.pdf (accessed on 22 June 2020).
10. National Commission on Excellence in Education: A Nation at Risk: The Imperative for Educational Reform. Available online: https://www2.ed.gov/pubs/NatAtRisk/index.html (accessed on 22 June 2020).
11. National Defense Education Act Pub. L. No. 85–864, 72 Stat. 1580. Available online: https://www.govinfo.gov/content/pkg/STATUTE-72/pdf/STATUTE-72-Pg1580.pdf (accessed on 22 June 2020).
12. Cook, J.W. (Ed.)
Sustainability, Human Well-Being, and the Future of Education; Palgrave Macmillan: London, UK, 2019.
13. Meadows, D.; Randers, J. The Limits to Growth: The 30-Year Update; Routledge: London, UK, 2012.
14. MacLellan, M. The tragedy of limitless growth: Re-interpreting the tragedy of the commons for a century of climate change. Environ. Humanit. 2016, 7, 41–58. [CrossRef]
15. Trott, C.D. Reshaping our world: Collaborating with children for community-based climate change. Action Res. 2019, 17, 42–62. [CrossRef]
16. Trott, C.D. Children’s constructive climate change engagement: Empowering awareness, agency, and action. Environ. Educ. Res. 2020, 26, 532–554. [CrossRef]
17. World Commission on Environment and Development. The Brundtland Report: Our Common Future; Oxford University Press: Oxford, UK, 1987.
18. Lewis, S.L.; Maslin, M.A. Defining the anthropocene. Nature 2015, 519, 171–180. [CrossRef]
19. Wise, S.B. Climate change in the classroom: Patterns, motivations, and barriers to instruction among Colorado science teachers. J. Geosci. Educ. 2010, 58, 297–309. [CrossRef]
20. Branch, G.; Rosenau, J.; Berbeco, M. Climate education in the classroom: Cloudy with a chance of confusion. Bull. Atom. Sci. 2016, 72, 89–96. [CrossRef]
21. Hart, P.S.; Nisbet, E.C. Boomerang effects in science communication: How motivated reasoning and identity cues amplify opinion polarization about climate mitigation policies. Commun. Res. 2012, 39, 701–723. [CrossRef]
22. Ojala, M. Regulating worry, promoting hope: How do children, adolescents, and young adults cope with climate change? Int. J. Environ. Sci. Educ. 2012, 7, 537–561.
23. Rousell, D.; Cutter-Mackenzie-Knowles, A. A systematic review of climate change education: Giving children and young people a ‘voice’ and a ‘hand’ in redressing climate change. Child. Geogr. 2020, 18, 191–208. [CrossRef]
24. Feinstein, N.W.; Kirchgasler, K.L. Sustainability in science education?
How the Next Generation Science Standards approach sustainability, and why it matters. Sci. Educ. 2015, 99, 121–144. [CrossRef]
25. Håkansson, M.; Kronlid, D.O.O.; Östman, L. Searching for the political dimension in education for sustainable development: Socially critical, social learning and radical democratic approaches. Environ. Educ. Res. 2019, 25, 6–32. [CrossRef]
26. Hufnagel, E.; Kelly, G.J.; Henderson, J.A. How the environment is positioned in the Next Generation Science Standards: A critical discourse analysis. Environ. Educ. Res. 2019, 25, 731–753.
27. Boström, M.; Andersson, E.; Berg, M.; Gustafsson, K.; Gustavsson, E.; Hysing, E.; Lidskog, R.; Löfmarck, E.; Ojala, M.; Olsson, J.; Singleton, B.; Svenberg, S.; et al. Conditions for transformative learning for sustainable development: A theoretical review and approach. Sustainability 2018, 10, 4479.
[CrossRef]
28. Komatsu, H.; Rappleye, J.; Silova, I. Culture and the independent self: Obstacles to environmental sustainability? Anthropocene 2019, 26, 100198. [CrossRef]
29. Knappe, H.; Holfelder, A.K.; Beer, D.L.; Nanz, P. The politics of making and un-making (sustainable) futures. Sustain. Sci. 2018, 13, 273–274. [CrossRef]
30. Monroe, M.C.; Plate, R.R.; Oxarart, A.; Bowers, A.; Chaves, W.A. Identifying effective climate change education strategies: A systematic review of the research. Environ. Educ. Res. 2019, 25, 791–812. [CrossRef]
31. Qvortrup, J. Are children human beings or human becomings? A critical assessment of outcome thinking. Riv. Internazionale di Sci. Soc. 2009, 17, 631–653.
32. Kellett, M.; Robinson, C.; Burr, R. Images of childhood. In Doing Research with Children and Young People; Fraser, S., Lewis, V., Ding, S., Kellett, M., Robinson, C., Eds.; Sage: Thousand Oaks, CA, USA, 2004; pp. 27–42.
33. Mitra, D.; Serriere, S.; Kirshner, B. Youth participation in US contexts: Student voice without a national mandate. Child. Soc. 2014, 28, 292–304.
34. Wyness, M.; Harrison, L.; Buchanan, I. Childhood, politics and ambiguity: Towards an agenda for children’s political inclusion. Sociology 2004, 38, 81–99. [CrossRef]
35. Elam, M.; Bertilsson, M. Consuming, engaging and confronting science: The emerging dimensions of scientific citizenship. Eur. J. Soc. Theory 2003, 6, 233–251. [CrossRef]
36. Liu, X. Beyond science literacy: Science and the public. Int. J. Environ. Sci. Educ. 2009, 4, 301–311.
37. Roth, W.M. Scientific literacy as an emergent feature of collective human praxis. J. Curric. Stud. 2003, 35, 9–23. [CrossRef]
38. Freire, P. Pedagogy of the Oppressed; Penguin: London, UK, 1972.
39. Sadler, T.D. Situated learning in science education: Socio-scientific issues as contexts for practice. Stud. Sci. Educ. 2009, 45, 1–42. [CrossRef]
40. Barry, B. Sustainability and intergenerational justice. Theoria 1997, 44, 43–64. [CrossRef]
41.
Cutter-Mackenzie, A.; Rousell, D. Education for what? Shaping the field of climate change education with children and young people as co-researchers. Child. Geogr. 2019, 17, 90–104. [CrossRef]
42. Hayward, B. Children, Citizenship and Environment: Nurturing a Democratic Imagination in a Changing World; Routledge: London, UK, 2012.
43. Kenis, A.; Mathijs, E. Climate change and post-politics: Repoliticizing the present by imagining the future? Geoforum 2014, 52, 148–156. [CrossRef]
44. Wals, A.E. Learning our way to sustainability. J. Educ. Sustain. Dev. 2011, 5, 177–186. [CrossRef]
45. D’Amato, L.G.; Krasny, M.E. Outdoor adventure education: Applying transformative learning theory to understanding instrumental learning and personal growth in environmental education. J. Environ. Educ. 2011, 42, 237–254. [CrossRef]
46. Hicks, D. Educating for Hope in Troubled Times: Climate Change and the Transition to a Post-Carbon Future; Trentham Books Limited: Staffordshire, UK, 2014.
47. Ojala, M. Hope and anticipation in education for a sustainable future. Futures 2017, 94, 76–84. [CrossRef]
48. Wals, A.E.; Jickling, B. “Sustainability” in higher education: From doublethink and newspeak to critical thinking and meaningful learning. High Educ. Policy 2002, 15, 221–232. [CrossRef]
49. Sipos, Y.; Battisti, B.; Grimm, K. Achieving transformative sustainability learning: Engaging head, hands and heart. Int. J. Sustain. High Educ. 2008, 9, 68–86. [CrossRef]
50. Boys and Girls Clubs of America: 2018 Annual Report. Available online: http://www.bgca.org/ (accessed on 22 June 2020).
51. Anderson-Butcher, D.; Newsome, W.S.; Ferrari, T.M. Participation in boys and girls clubs and relationships to youth outcomes. J. Community Psychol. 2003, 31, 39–55.
[CrossRef]
52. Fredricks, J.A.; Hackett, K.; Bregman, A. Participation in Boys and Girls Clubs: Motivation and stage environment fit. J. Community Psychol. 2010, 38, 369–385. [CrossRef]
53. Cook, K. Grappling with wicked problems: Exploring photovoice as a decolonizing methodology in science education. Cult. Stud. Sci. Educ. 2015, 10, 581–592. [CrossRef]
54. Derr, V.; Simons, J. A review of photovoice applications in environment, sustainability, and conservation contexts: Is the method maintaining its emancipatory intents? Environ. Educ. Res. 2019, 26, 1–22.
55. Schell, K.; Ferguson, A.; Hamoline, R.; Shea, J.; Thomas-Maclean, R. Photovoice as a teaching tool: Learning by doing with visual methods. Int. J. Teach. Learn. High Educ. 2009, 21, 340–352.
56. Cook, K.; Quigley, C.F. Connecting to our community: Utilizing photovoice as a pedagogical tool to connect college students to science. Int. J. Environ. Sci. Educ. 2013, 8, 339–357. [CrossRef]
57. Wang, C.C.; Morrel-Samuels, S.; Hutchison, P.M.; Bell, L.; Pestronk, R.M.
Flint photovoice: Community building among youths, adults, and policymakers. Am. J. Public Health 2004, 94, 911–913. [CrossRef]
58. Ben-Eliyahu, A. Academic Emotional Learning: A critical component of self-regulated learning in the emotional learning cycle. Educ. Psychol. 2019, 54, 84–105.
59. Fredricks, J.A.; Blumenfeld, P.C.; Paris, A.H. School engagement: Potential of the concept, state of the evidence. Rev. Educ. Res. 2004, 74, 59–109. [CrossRef]
60. Dijkstra, E.; Goedhart, M. Development and validation of the ACSI: Measuring students’ science attitudes, pro-environmental behaviour, climate change attitudes and knowledge. Environ. Educ. Res. 2012, 18, 733–749. [CrossRef]
61. Gibson, F. Conducting focus groups with children and young people: Strategies for success. J. Res. Nurs. 2007, 12, 473–483. [CrossRef]
62. Millward, L. Focus groups. In Research Methods in Psychology, 4th ed.; Breakwell, G.M., Smith, J.A., Wright, D.B., Eds.; Sage: Thousand Oaks, CA, USA, 2012; pp. 412–437.
63. QSR International Pty Ltd. NVivo (Released in March 2020). Available online: https://www.qsrinternational.com/nvivo-qualitative-data-analysis-software/home (accessed on 27 January 2020).
64. Braun, V.; Clarke, V. Using thematic analysis in psychology. Qual. Res. Psychol. 2006, 3, 77–101. [CrossRef]
65. Oxford Dictionaries, s.v. “Perception”. Available online: https://en.oxforddictionaries.com/definition/perception (accessed on 27 January 2020).
66. American Psychological Association, s.v. “Attitude”. Available online: https://dictionary.apa.org/attitude (accessed on 27 January 2020).
67. Lester, B.T.; Ma, L.; Lee, O.; Lambert, J. Social activism in elementary science education: A science, technology, and society approach to teach global warming. Int. J. Sci. Educ. 2006, 28, 315–339. [CrossRef]
68. Roth, W.M.; Lee, S. Science education as/for participation in the community. Sci. Educ. 2004, 88, 263–291. [CrossRef]
69. Birmingham, D.; Calabrese Barton, A.
Putting on a green carnival: Youth taking educated action on socioscientific issues. J. Res. Sci. Teach. 2014, 51, 286–314. [CrossRef]
70. NGSS Lead States. Next Generation Science Standards: For States, by States; The National Academies Press: Washington, DC, USA, 2013.
71. Center on Education Policy: Public School Facts and History. Available online: https://www.cep-dc.org/CEP-Publications-Database.cfm (accessed on 27 January 2020).
72. Dougherty, C.; Moore, R. Educators’ Beliefs about Teaching Science and Social Studies in K-3; ACT: Iowa City, IA, USA, 2019.
73. Milner, A.R.; Sondergeld, T.A.; Demir, A.; Johnson, C.C.; Czerniak, C.M. Elementary teachers’ beliefs about teaching science and classroom practice: An examination of pre/post NCLB testing in science. J. Sci. Teach. Educ. 2012, 23, 111–132. [CrossRef]
74. Barab, S.A.; Luehmann, A.L. Building sustainable science curriculum: Acknowledging and accommodating local adaptation. Sci. Educ. 2003, 87, 454–467. [CrossRef]
75. Blythe, C.; Harré, N. Inspiring youth sustainability leadership: Six elements of a transformative youth eco-retreat. Ecopsychology 2012, 4, 336–344.
[CrossRef]
76. Hall, C.; Easley, R.; Howard, J.; Halfhide, T. The role of authentic science research and education outreach in increasing community resilience: Case studies using informal education to address ocean acidification and healthy soils. In Cases on the Diffusion and Adoption of Sustainable Development Practices; Muga, H.E., Thomas, K.E., Eds.; IGI Global: Hershey, PA, USA, 2013; pp. 376–402.
77. Tan, E.; Calabrese Barton, A.; Kang, H.; O’Neill, T. Desiring a career in STEM-related fields: How middle school girls articulate and negotiate identities-in-practice in science. J. Res. Sci. Teach. 2013, 50, 1143–1179. [CrossRef]
78. Karpudewan, M.; Roth, W.M.; Abdullah, M.N.S.B. Enhancing primary school students’ knowledge about global warming and environmental attitude using climate change activities. Int. J. Sci. Educ. 2015, 37, 31–54. [CrossRef]
79. Brotman, J.S.; Moore, F.M.
Girls and science: A review of four themes in the science education literature. J. Res. Sci. Teach. 2008, 45, 971–1002. [CrossRef]
80. Lee, O.; Buxton, C.A. Diversity and Equity in Science Education: Theory, Research, and Practice; Teachers College Press: New York, NY, USA, 2010.
81. National Science Foundation: Women, Minorities, and Persons with Disabilities in Science and Engineering. Available online: https://ncses.nsf.gov/pubs/nsf19304/digest/field-of-degree-minorities (accessed on 22 June 2020).
82. Hill, C.; Corbett, C.; St Rose, A. Why so Few? Women in Science, Technology, Engineering, and Mathematics. 2010. Available online: http://www.aauw.org/learn/research/upload/whysofew.pdf (accessed on 22 June 2020).
83. Lapan, R.T.; Adams, A.; Turner, S.; Hinkelman, J.M. Seventh graders’ vocational interest and efficacy expectation patterns. J. Career Dev. 2000, 26, 215–229. [CrossRef]
84. Thoman, D.B.; Brown, E.R.; Mason, A.Z.; Harmsen, A.G.; Smith, J.L. The role of altruistic values in motivating underrepresented minority students for biomedicine. BioScience 2015, 65, 183–188. [CrossRef]
85. Steinberg, D.B.; Simon, V.A. A comparison of hobbies and organized activities among low income urban adolescents. J. Child. Fam. Stud. 2019, 28, 1182–1195. [CrossRef]
86. Joseph, N.M.; Hailu, M.; Boston, D. Black Women’s and Girls’ Persistence in the P–20 Mathematics Pipeline: Two Decades of Children, Youth, and Adult Education Research. Rev. Res. Educ. 2017, 41, 203–227. [CrossRef]
87. Diekman, A.B.; Clark, E.K.; Johnston, A.M.; Brown, E.R.; Steinberg, M. Malleability in communal goals and beliefs influences attraction to STEM careers: Evidence for a goal congruity perspective. J. Pers. Soc. Psychol. 2011, 101, 902–918. [CrossRef]
88. Allen, J.M.; Muragishi, G.A.; Smith, J.L.; Thoman, D.B.; Brown, E.R. To grab and to hold: Cultivating communal goals to overcome cultural and structural barriers in first-generation college students’ science interest. Transl. Issues Psychol.
Sci. 2015, 1, 331–341. [CrossRef]
89. Ratcliffe, M.; Grace, M. Science Education for Citizenship: Teaching Socio-Scientific Issues; McGraw-Hill Education: New York, NY, USA, 2003.
90. Weinberg, A.E.; Trott, C.D.; McMeeking, L.B.S. Who produces knowledge? Transforming undergraduate students’ views of science through participatory action research. Sci. Educ. 2018, 102, 1155–1175. [CrossRef]
91. Colucci-Gray, L.; Perazzone, A.; Dodman, M.; Camino, E. Science education for sustainability, epistemological reflections and educational practices: From natural sciences to trans-disciplinarity. Cult. Stud. Sci. Educ. 2013, 10, 127–183. [CrossRef]
92. Riemer, M.; Lynes, J.; Hickman, G. A model for developing and assessing youth-based environmental engagement programmes. Environ. Educ. Res. 2014, 20, 552–574. [CrossRef]

© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
South African Journal of Science, Volume 113, Number 11/12, November/December 2017. http://www.sajs.co.za
© 2017. The Author(s). Published under a Creative Commons Attribution Licence.

A new look at cheetahs

BOOK TITLE: Kalahari cheetahs: Adaptations to an arid region
AUTHORS: Gus Mills and Margie Mills
ISBN: 9780198712145 (hardcover)
PUBLISHER: Oxford University Press, Oxford; GBP65
PUBLISHED: 2017
REVIEWER: Brian W. van Wilgen
AFFILIATION: Centre for Invasion Biology, Department of Botany and Zoology, Stellenbosch University, South Africa
EMAIL: bvanwilgen@sun.ac.za
HOW TO CITE: Van Wilgen BW. A new look at cheetahs. S Afr J Sci. 2017;113(11/12), Art. #a0236, 1 page. http://dx.doi.org/10.17159/sajs.2017/a0236

One of the many advantages of large protected areas is that they offer the opportunity for systematic study of sparsely distributed, wide-ranging species that are fast disappearing from most other areas. The cheetah, which once occurred widely across Africa, the Middle East and Asia, is one such species. Just about all that is known of the ecology of this iconic species comes from studies in two areas – Tanzania’s Serengeti National Park and ranching areas in Namibia. The three million hectare Kgalagadi Transfrontier Park, shared between South Africa and Botswana, supports a healthy population of cheetahs, which coexist alongside a host of other large predators (lions, leopards and hyaenas) in a relatively arid region, offering the opportunity to gain new insights into the ecology, behaviour and survival of cheetahs.

There is arguably no one better qualified to conduct such a study than Gus and Margie Mills. In 1972, as newlyweds, the couple arrived in the Kalahari to spend a couple of years studying the little-known brown hyaena. They ended up spending 12 years in the area – a tale entertainingly recounted in a popular book of their adventures.1 During this time, Gus also produced an authoritative and unique account of the comparative behavioural ecology of both brown and spotted hyaenas.2 After completing his work in the Kalahari, Gus moved on to the Kruger National Park where he studied the ecology and conservation of wild dogs and lions. During a career as a research scientist that lasted until 2006, he established himself as a world leader in the field of carnivore ecology and conservation. Following ‘retirement’ from the South African National Parks, the intrepid couple returned to the Kalahari where – with funding from the Lewis Foundation and other sources – they were able to study the ecology of cheetahs for 6 years.

The authors used a combination of good, old-fashioned field observation (over 7000 hours in total, assisted by San trackers) and modern technology to reveal the fascinating lives of these top carnivores.
For example, daily energy expenditure was measured using doubly labelled water; tri-axial accelerometers, GPS loggers and drop-off radio collars were used to apportion different energy costs to different behaviours; and DNA studies were used to track paternity and genetic relationships between individuals. The emergence of affordable digital photography allowed the authors to harness the collective observational capacity of hundreds of tourists. By inviting them to submit photographs of cheetahs, they obtained over 1200 batches of photographs, allowing Margie to identify 216 individual cheetahs over 7 years. All of this work has confirmed the status of the cheetah as a sprinting specialist, but it has also revealed important differences between the Kalahari and the better-known Serengeti cheetahs, and provided new insights that have wider implications for conservation.

This book reveals, once again, that the widely held view that the lion is the only social cat is incorrect. Male cheetahs often range alone, but are just as likely to team up in duos or trios. These male coalitions are not necessarily only between siblings, and the coalition males weighed more than single males as they are able to bring down larger prey, and thus eat more. Coalitions, however, do not necessarily confer an evolutionary advantage. Female cheetahs mate with several male individuals, both singleton and coalition males, and litters of cubs typically have more than one father. In the Serengeti, only 5% of cubs survived to adolescence, and many were killed by lions or hyaenas, whereas a third of cubs survived in the Kalahari. Although some were killed by larger predators, it seems possible that many were killed by smaller predators, such as ratels or jackals. The Kalahari cheetahs’ main prey was the steenbok, followed by the springbok, with gemsbok, ostrich and even eland calves, as well as hares and the strictly nocturnal springhare forming part of their diet.
Cheetahs in the Kalahari live for about 7 years, after which they are either killed (sometimes by other cheetahs), or succumb to starvation.

The book concludes by examining the pressing question of cheetah conservation. Prior understanding was based on the view that cheetahs require vast areas to maintain viable populations, both because they were thought to be limited by competition from other large predators, and because of a lack of genetic variability which placed smaller populations at undue risk. Consequently, international cheetah conservation efforts were focused on maintaining populations outside of protected areas. The Kalahari study has revealed that this view does not necessarily hold. Cheetahs were found not to be at undue risk from other predators, and their genetic makeup has not placed them at any disadvantage – in fact, cheetahs that were re-introduced to protected areas much smaller than the Kgalagadi have thrived. As conservation funds are limited, the authors argue that conservation efforts should focus on maintaining or improving the integrity of protected areas rather than trying to conserve cheetahs in unprotected areas.

I found this book extremely interesting, loaded as it is with facts and illustrations about the diet, hunting behaviour, breeding and survival of cheetahs. The authors have made a valuable contribution to our understanding of the behaviour and survival of a top predator, and in so doing have joined an elite band of eminent authors in this field, including George Schaller, Hans Kruuk and Jane and Hugo van Lawick-Goodall. The book should find wide appeal amongst scientists, conservation practitioners and wildlife enthusiasts, especially those who visit the extremely popular Kalahari in ever-increasing numbers.

References
1. Mills G, Mills M. Hyena nights and Kalahari days. Johannesburg: Jacana; 2010.
2. Mills MGL. Kalahari hyaenas: The comparative behavioural ecology of two species. London: Unwin Hyman; 1990.
Biogeosciences, 6, 1747–1754, 2009
www.biogeosciences.net/6/1747/2009/
© Author(s) 2009. This work is distributed under the Creative Commons Attribution 3.0 License.

Effect of CO2-related acidification on aspects of the larval development of the European lobster, Homarus gammarus (L.)

K. E. Arnold1, H. S. Findlay2, J. I. Spicer3, C. L. Daniels1,3, and D. Boothroyd1
1 National Lobster Hatchery, South Quay, Padstow, Cornwall, PL28 8BL, UK
2 Plymouth Marine Laboratory, Prospect Place, West Hoe, Plymouth, Devon, PL1 3DH, UK
3 Marine Biology and Ecology Research Centre, School of Biological Sciences, University of Plymouth, Plymouth, Devon, PL4 8AA, UK

Received: 20 February 2009 – Published in Biogeosciences Discuss.: 18 March 2009
Revised: 8 July 2009 – Accepted: 20 July 2009 – Published: 24 August 2009

Abstract. Oceanic uptake of anthropogenic CO2 results in a reduction in pH termed "Ocean Acidification" (OA). Comparatively little attention has been given to the effect of OA on the early life history stages of marine animals. Consequently, we investigated the effect of culture in CO2-acidified sea water (approx. 1200 ppm, i.e. average values predicted using IPCC 2007 A1F1 emissions scenarios for year 2100) on early larval stages of an economically important crustacean, the European lobster Homarus gammarus. Culture in CO2-acidified sea water did not significantly affect carapace length of H. gammarus. However, there was a reduction in carapace mass during the final stage of larval development in CO2-acidified sea water. This co-occurred with a reduction in exoskeletal mineral (calcium and magnesium) content of the carapace.
As the control and high CO2 treatments were not undersaturated with respect to any of the calcium carbonate polymorphs measured, the physiological alterations we record are most likely the result of acidosis or hypercapnia interfering with normal homeostatic function, and not a direct impact on the carbonate supply-side of calcification per se. Thus, despite there being no observed effect on survival, carapace length, or zoeal progression, OA-related (indirect) disruption of calcification and carapace mass might still adversely affect the competitive fitness and recruitment success of larval lobsters, with serious consequences for population dynamics and marine ecosystem function.

Correspondence to: K. E. Arnold (katie.arnold@nationallobsterhatchery.co.uk)

1 Introduction

The ocean is a substantial reservoir of CO2 (Feely et al., 2004; Sabine et al., 2004; Morse et al., 2006). Addition of CO2 to sea water alters the carbonate chemistry and reduces pH, in an effect recently termed "Ocean Acidification" (OA). Over the past 200 years increasing pCO2 has resulted in a decrease in surface sea water pH of 0.1 units (Caldeira and Wickett, 2003; Orr et al., 2005). It is predicted that atmospheric CO2 concentrations could reach 1200 ppm by the year 2100, resulting in a decrease in average surface ocean pH of 0.3 to 0.4 units (Caldeira and Wickett, 2003; Raven et al., 2005). Our understanding of the biological and ecological consequences of OA is, however, still in its infancy (Raven et al., 2005). Reductions in seawater pH have been demonstrated to affect the physiological and developmental processes of a number of marine organisms (e.g. Pörtner et al., 2004, 2005; Raven et al., 2005) through reduced internal pH (acidosis) and increased CO2 (hypercapnia), raising the possibility that not only species, but also ecosystems, will be affected (Widdicombe and Spicer, 2008).
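Because the pH scale is logarithmic, the unit changes quoted above translate into large proportional changes in hydrogen-ion concentration. The short sketch below uses only the pH values quoted in the text to make that explicit:

```python
# Convert the pH declines quoted above into fractional increases
# in hydrogen-ion concentration: [H+] = 10**(-pH).

def h_ion(ph: float) -> float:
    """Hydrogen-ion concentration (mol/kg) for a given pH."""
    return 10.0 ** (-ph)

def h_increase(ph_drop: float) -> float:
    """Factor by which [H+] rises for a given drop in pH."""
    return 10.0 ** ph_drop

# ~0.1 unit drop over the past 200 years -> ~26% more H+
print(f"0.1 unit drop: x{h_increase(0.1):.2f} in [H+]")
# 0.3-0.4 unit drop projected for 2100 -> roughly a doubling or more
print(f"0.3 unit drop: x{h_increase(0.3):.2f} in [H+]")
print(f"0.4 unit drop: x{h_increase(0.4):.2f} in [H+]")
```

This is why an apparently small shift of 0.3–0.4 pH units represents a substantial perturbation of seawater chemistry.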
As CO2 increases and pH decreases there is a concomitant reduction in carbonate ion (CO3^2-) availability, which can lead to increased dissolution of calcium carbonate (CaCO3) structures. This may have a significant impact on species which have CaCO3 skeletons, such as reef-building organisms, some phytoplankton, molluscs, crustaceans and echinoderms. Research into the effects of OA has thus far primarily investigated impacts on these calcareous marine organisms, particularly focusing on corals (e.g. Reynaud et al., 2003; Langdon and Atkinson, 2005), molluscs (e.g. Michaelidis et al., 2005; Gazeau et al., 2007) and coccolithophores (e.g. Riebesell, 2000; Zondervan et al., 2002; with reviews by Paasche, 2001; Hinga, 2002; and Riebesell, 2004). Studies to date have demonstrated a number of potentially important effects including reduced growth rates (Gazeau et al., 2007), decreased reproductive success (Kurihara et al., 2004b), shell dissolution (Bamber, 1990), as well as acidification of internal body fluids (Spicer et al., 2007), compromise of induced defences (Bibby et al., 2007), increased susceptibility to infection (Holman et al., 2004) and impairment of immune function (Bibby et al., 2008). Unfortunately, the majority of these studies have investigated short-term responses, with most attention being paid to effects on adult organisms alone. Thus there is an urgent need both for more long-term data, especially for invertebrates (Langenbuch and Pörtner, 2002; Pörtner et al., 2004; Michaelidis et al., 2005), and for studies of species at different stages of their larval development as well as of adults (Widdicombe and Spicer, 2008).
Research into the effects during larval stages has mainly focused on echinoderms (Kurihara and Shirayama, 2004; Havenhand et al., 2008; Dupont et al., 2008), fish (Ishimatsu et al., 2004), copepods (Kurihara et al., 2004b, 2007), amphipods (Egilsdottir et al., 2009) and gastropods (Ellis et al., 2009). However, it is important to investigate potential effects of CO2-induced acidification during early life stages, as they are likely to be more sensitive to such environmental stressors than adults (e.g. Kikkawa et al., 2003; Ishimatsu et al., 2004), especially as many benthic species possess pelagic larval stages which occur in the surface waters where increasing pCO2 is occurring first (Caldeira and Wickett, 2003). Determining how OA might affect larval development of benthic organisms, particularly those that initiate calcification processes while still in their planktonic phase, is critical to predicting how these impacts might propagate through the ecosystem. Early investigations suggest that early life stage development may be slowed (Egilsdottir et al., 2009; Findlay et al., 2009) or even completely disrupted (Dupont et al., 2008; Havenhand et al., 2008) at CO2 levels predicted for the end of this century.

The European lobster Homarus gammarus is an economically important species contributing substantially to the continued survival of small coastal communities. Calcification in the Norway lobster Nephrops norvegicus, closely related to H. gammarus, occurs during very early development, with a major change in the pattern of calcification observed as the individual makes the transition from a planktonic zoea to a benthic postlarva (Spicer and Eriksson, 2003). While crustacean exoskeletons contain CaCO3, this most likely predominates in the more soluble polymorph, magnesium calcite (Mg-CaCO3) (Boßelmann et al., 2007).
It is believed that crustaceans utilise either CO2 or bicarbonate (HCO3^-), not carbonate (CO3^2-), as the primary source of carbon for the formation of their CaCO3 structures (Cameron, 1989). Therefore a reduction in CO3^2- as a result of OA may not always be expected to directly impact their ability to produce CaCO3. H. gammarus is cultured within hatcheries to aid the commercial catch and hence is an ideal model with which to understand the impacts that OA might have on the early development stages of a commercially important crustacean. Consequently, this study investigated the effect of CO2-induced acidification on the early life stages of H. gammarus. The patterns of growth and shell mineralogy of the carapace during larval development were determined by measuring the length, area, dry mass and Ca and Mg contents of the carapace. As Mg-CaCO3 is more soluble than the other polymorphs of calcium carbonate, aragonite and calcite (Feely et al., 2004), it may be less advantageous to form Mg-CaCO3 under decreasing pH as it will be the first to dissolve (Kleypas et al., 2006); hence these lobsters may be at greater risk from dissolution. Exposure to long-term hypercapnia may be energetically costly to marine organisms and therefore may be detrimental to developmental processes, such as growth, reproduction, natural recruitment and survival (Barry et al., 2005; Wood et al., 2008).

2 Materials and methods

2.1 Animal material

Ovigerous females were supplied by local fishermen and held in aquaria (160×100×35 cm) at the National Lobster Hatchery (NLH) in Padstow, Cornwall, UK. Each aquarium was constantly supplied with aerated, filtered, re-circulating sea water (T = 17±1 °C, S = 35) pumped directly from waters adjacent to the NLH. Water was pre-treated in a pressurized sand filter, passed through activated carbon, and finally UV-irradiated. Adult lobsters were fed ad libitum with blue mussels, Mytilus edulis.
Experiments were carried out between June and July 2007, to coincide with the natural hatching season (between April and September).

Sea water was placed in ten open conical flasks (vol. = 1 l) and the CO2 concentration was modified by equilibrating the water with air containing different CO2 concentrations, exactly as described by Findlay et al. (2008). Air/CO2 mixtures were produced using a bulk flow technique in which known amounts of scrubbed air (CO2 removed using KOH) and CO2 gas were supplied, via flow meters (Jencons, UK; Roxspur, France), and mixed before equilibrating with sea water. Control flasks were aspirated (10 l min−1) with an air mixture containing 380 ppm of CO2; however, the pressure was not high enough to completely equilibrate these flasks, which had high alkalinity, and hence the measured sea water CO2 value was slightly lower (mean 315 ppm). To produce the reduced pH treatment, sea water was aspirated (1 ml min−1 CO2 mixed with 10 l min−1 scrubbed air) with the high-CO2 air containing 1200 ppm of CO2. The pCO2 was monitored regularly using a pCO2 micro-electrode (LazarLabs) and pH using a pH probe (Denver). Any fluctuations in pH were noted and adjusted, via the flow meters, accordingly.

Newly-hatched Zoea I larvae, from 3 different mothers, were carefully distributed haphazardly between a number of aquaria (flasks vol. = 1 l; N = 50 zoea per flask; T = 17±1 °C), with all flasks containing larvae from all females. Flasks contained one of the following aerated media: sea water ("untreated control") or sea water with elevated CO2 (1200 ppm) (N = 5 for each treatment). Both treatments commenced simultaneously and were incubated for 28 days. Media changes were performed every 24 h.
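The bulk-flow technique above amounts to a simple dilution of pure CO2 into a CO2-free air stream. As an illustration of the underlying arithmetic (the flow rates below are hypothetical examples, not the settings used in the experiment), the pure-CO2 feed needed to reach a target mixing ratio can be computed as:

```python
def co2_feed_rate(target_ppm: float, air_flow_l_min: float) -> float:
    """Pure-CO2 flow (l/min) needed so that the mixed stream contains
    target_ppm of CO2, given a CO2-free (scrubbed) air flow."""
    frac = target_ppm / 1e6
    return air_flow_l_min * frac / (1.0 - frac)

def mixed_ppm(co2_flow_l_min: float, air_flow_l_min: float) -> float:
    """CO2 mixing ratio (ppm) of the combined stream."""
    return 1e6 * co2_flow_l_min / (co2_flow_l_min + air_flow_l_min)

# Hypothetical example: a 1200 ppm mixture built on 10 l/min scrubbed air
q = co2_feed_rate(1200.0, 10.0)
print(f"CO2 feed: {q * 1000:.1f} ml/min")      # ~12.0 ml/min
print(f"check: {mixed_ppm(q, 10.0):.0f} ppm")  # ~1200 ppm
```

In practice the achieved pCO2 also depends on equilibration efficiency and alkalinity, which is why the paper reports the measured sea water values rather than the nominal gas-mixture values.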
The elevated CO2 treatment flasks were left to equilibrate for 2 h to the required CO2 levels before larvae were transferred to them. Both moulted exoskeletons and mortalities were removed before changing media. Larvae were fed (Artemia nauplii, density = 5 indiv. ml−1 sea water) after media changes (Carlberg and Van Olst, 1976).

2.2 Larval growth and development

Larvae were removed haphazardly from each flask (N = 9) at day 7, 14, 21 and 28. Larval development can vary depending on temperature; therefore, in order to ascertain larval development times under the control conditions, preliminary studies were first carried out. The sampling days represent the mid-point of development through each of the four larval stages (i.e. Zoea I, II, III, and IV). Individuals were washed briefly in distilled water, carefully blotted dry with paper tissue, and stored frozen (T = −20 °C). They were subsequently freeze-dried (LYOVAC GT2, Leybold-Heraeus, Germany) to a constant mass, before the carapace was carefully removed and weighed using a microbalance (AT200, Mettler-Toledo, Switzerland). Carapace length (CL) and carapace area (CA) were measured from digital photographs under low-power magnification (×10) using ImageJ software. CL was calculated as shown in Fig. 1; CA was calculated by taking measurements of the removed and flattened carapace, again using digital photography under low-power magnification (×10) and ImageJ software. Larval survival was observed daily, along with moult stage, which was recorded as a measure of development and determined using the schemes of Aiken (1973) and Chang et al. (2001); this involves detailed examination of the exoskeleton and pleopods.

2.3 Carapace mineral content

Measurements of the calcium and magnesium content of the carapace from the same individuals measured above, for each of the four developmental stages (Zoea I, II, III, and IV), were made using Inductively Coupled Plasma Spectrometry (ICP).
After the morphological measurements were made, the carapace was dissolved in concentrated nitric acid (75%, pro analysis) to extract the mineral portion. The resultant solution was diluted with Milli-Q water before ICP analysis. To compare between treatments and stages (these are relative measures and not absolute measures), the calcium and magnesium concentrations are expressed both as a percentage of total mass of the animal carapace and also per unit of total carapace area, exactly as presented by Spicer and Eriksson (2003). This gives an indication of the content of each mineral as the animal grows relative to the previous development stage, and the percentage of total mass takes into account any differences in the thickness of the carapace.

Fig. 1. Larval carapace length measured using digital photography under low-power magnification (×10) and ImageJ software.

2.4 Statistical analysis

Data are expressed as mean ± 1 S.E. The data were tested for normality using the Kolmogorov-Smirnov test and for homogeneity of variances by applying Levene's test prior to analysis. Two-way repeated-measures ANOVA was used to investigate significant differences in physiological parameters as a result of CO2 and of exposure time, and least significant difference post hoc tests were carried out to assess the significance of differences between treatment groups.

3 Results

3.1 Larval growth and development

There were no significant effects of hypercapnia on carapace length in larval H. gammarus (p>0.05) when expressed against time and developmental stage (Fig. 2a and b). There was, however, an observed effect of hypercapnia on carapace mass throughout larval development, causing a significant reduction in mass at Zoea IV (p<0.05, Fig. 2c). As larvae were removed for sampling throughout the experiment, survival could only be observed during experimental exposures. From these observations, incubation with CO2 did not appear to have an effect on the survival of larval lobsters.

Fig. 2. Relationships between growth and development for larval H. gammarus during exposure to CO2-acidification (1200 ppm). (a) Carapace length (mm) and development time (days); (b) carapace length (mm) and developmental stage; (c) carapace mass (mg) and developmental stage. Values represent mean ± 1 standard error; white, control; grey, 1200 ppm of CO2.

3.2 Carapace mineral content

Changes in calcium concentration of the carapace during larval development are presented in Fig. 3. The calcium content of the carapace was significantly affected over time with respect to treatment; this was detectable both when expressed as concentration per surface area (p<0.05; Fig. 3a) and when presented as a percentage of the total carapace content (p<0.05; Fig. 3b). In the high CO2 treatment, the calcium concentration was almost half of the control at Zoea IV (0.13 µg mm−2, S.E. 0.01, vs. 0.23 µg mm−2, S.E. 0.01). This indicates that after metamorphosis into Zoea IV the increased CO2 significantly impacted the carapace calcium content. The effect of increased CO2 on magnesium in the carapace, when expressed as a concentration per surface area (Fig. 4a), was a reduction in concentration over time compared to the control, with significant differences occurring at Zoea IV (p<0.05). Reductions in magnesium concentration due to culture in CO2-acidified sea water were also apparent when expressed as a percentage of total carapace content (p<0.05), with significant differences evident at Zoea III (Fig. 4b): 0.81% (S.E. 0.07) compared to 1.16% (S.E. 0.13) in the control larvae.
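The treatment effects reported above reduce to straightforward relative comparisons of the group means. The sketch below uses only the means quoted in the text (Zoea IV calcium per unit area; Zoea III magnesium as % of dry mass) to express them as fractional reductions:

```python
def relative_reduction(control: float, treatment: float) -> float:
    """Fractional reduction of a treatment mean relative to the control."""
    return (control - treatment) / control

# Reported Zoea IV carapace calcium, per unit carapace area (ug mm^-2)
ca_control, ca_high_co2 = 0.23, 0.13
print(f"Ca reduction at Zoea IV: {relative_reduction(ca_control, ca_high_co2):.0%}")

# Reported Zoea III magnesium, as % of carapace dry mass
mg_control, mg_high_co2 = 1.16, 0.81
print(f"Mg reduction at Zoea III: {relative_reduction(mg_control, mg_high_co2):.0%}")
```

The calcium figure (~43%) is what the text summarises as "almost half of the control"; note that these are comparisons of means only, and the significance of each difference rests on the ANOVA and post hoc tests described in Sect. 2.4.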
4 Discussion

4.1 Growth and carapace mineralogy

This is the first study, to the authors' knowledge, to document net changes in the calcium and magnesium content of the carapace of any lobster species during larval development. We found that larval length, area and mass of the carapace are coupled with the development stage of the zoea larvae, and that there are progressive changes in the content of calcium and magnesium. Calcium and magnesium form the principal mineral portion of the crustacean exoskeleton (Neufeld and Cameron, 1992), the concentrations of which are most likely to be determined by environmental conditions (Wickins, 1984). The pattern of calcification displayed by H. gammarus generally follows a net increase in calcium as the larvae moult through each progressive zoea, with metamorphosis into the Zoea IV containing the largest concentration of calcium. However, when expressed as a percentage of mass, the proportion of calcium in relation to other exoskeletal components stays fairly constant throughout larval development. The only other study to investigate calcification in larval lobsters was completed by Spicer and Eriksson (2003), on the Norway lobster Nephrops norvegicus. However, only Zoea III was examined, and while N. norvegicus Zoea III had 1.7 µg Ca mm−2 (4.3% of mass), H. gammarus had 0.2 µg Ca mm−2 (9.7% of mass). H. gammarus Zoea III values, in terms of calcium concentration per surface area, were much lower than those found in N. norvegicus.

Fig. 3. Changes in the calcium concentration of the carapace during development and exposure to CO2-acidification (1200 ppm), expressed as (a) concentration per surface area and (b) % carapace dry mass. Values are means ± 1 standard error; white bar, control; grey bar, 1200 ppm of CO2. Asterisk denotes significant differences from control (p<0.05).
Fig. 4. Changes in the magnesium concentration of the carapace during development and exposure to CO2-acidification (1200 ppm), expressed as (a) concentration per surface area and (b) % carapace dry mass. Values are means ± 1 standard error; white bar, control; grey bar, 1200 ppm of CO2. Asterisk denotes significant differences from control (p<0.05).

However, the percentage of calcium in relation to mass was double the value in H. gammarus compared to N. norvegicus, indicating that the processes involved in calcification may differ considerably between the two species. Magnesium concentrations were also measured and similarly displayed an increase with development, although the ratios of calcium to magnesium can be variable in many marine calcifying species (Boßelmann et al., 2007). It is hypothesised that in crustaceans magnesium is often used as a substitute for calcium in the mineral matrix of the exoskeleton (Richards, 1951). However, when the percentage of calcium increased at Zoea IV the percentage of magnesium decreased, possibly showing that calcium plays a more important role during the final stages of development in H. gammarus.

4.2 Impacts of elevated CO2 on growth and carapace mineralogy

This is also the first study to examine the effects of CO2-induced acidification on growth and net carapace divalent ion concentration (as a measure of calcification; Findlay et al., 2009) of any lobster species. In the field, increased moult frequency and high rates of mortality are associated with larval development; therefore these early stages may be particularly vulnerable to ocean acidification due to an increased energy requirement for calcification of the exoskeleton (Haugan et al., 2006). Carapace length in H. gammarus was not significantly affected by culture in CO2-acidified sea water (1200 ppm CO2), with length and developmental stage remaining coupled throughout.
However, both carapace mass and calcification were considerably reduced during metamorphosis into Zoea IV due to increased CO2. The fact that carapace length was not affected by culture in CO2-acidified sea water, even though both mass and mineral content did appear to decrease, is feasible because Spicer and Eriksson (2003) indicate that the majority of growth occurs through laying down organic material and chitin, not CaCO3. The reduced carapace mass observed was therefore most likely due to CO2-induced acidification resulting in a decline in the net production of CaCO3, which would potentially result in a lighter carapace.

It is thought that lobsters utilise HCO3^- or CO2 for the precipitation of CaCO3 (Cameron, 1989), not CO3^2-; therefore, even if the experimental conditions had resulted in undersaturation with respect to carbonate minerals, we still might not expect a direct impact on calcification in larval lobsters. However, the sea water in both the control and the elevated CO2 treatment had relatively high levels of CO3^2-, and neither became low enough to result in undersaturation with respect to any of the calcium carbonate polymorphs relevant here (Ω_aragonite > 2.5; Table 1). This seems to occur as a result of naturally high alkalinity levels in the sea water (>2450 µEq kg−1), and indicates that neither CO3^2- nor HCO3^- were limiting calcification, nor was there any indication of enhanced dissolution.

Table 1. System data (mean ± standard error) for the control and the high CO2 treatment. Salinity, temperature, pH and pCO2 were measured; all other data (DIC = dissolved inorganic carbon; AT = total alkalinity; CO3^2- = carbonate ion concentration; Ω_calcite = calcite saturation state; Ω_aragonite = aragonite saturation state) were calculated from pH and pCO2 using CO2sys (Pierrot et al., 2006), with dissociation constants from Mehrbach et al. (1973) refit by Dickson and Millero (1987) and KSO4 using Dickson (1990).

                          Control         High CO2 treatment
CO2 (ppm)                 315 ± 18.83     1202 ± 29.83
pH                        8.39 ± 0.006    8.10 ± 0.009
Temp (°C)                 17.0 ± 1.0      17.0 ± 1.0
Sal (psu)                 35.0 ± 1.0      35.0 ± 1.0
AT (µEq kg−1)             2544 ± 148.9    4290 ± 145.9
DIC (µmol kg−1)           2152 ± 131.1    3967 ± 132.8
Ω_calcite                 6.71 ± 0.43     6.79 ± 0.32
Ω_aragonite               4.33 ± 0.28     4.38 ± 0.20
HCO3^- (µmol kg−1)        1863 ± 112.9    3651 ± 119.5
CO3^2- (µmol kg−1)        281 ± 18.1      285 ± 13.3

One caution to these results is that pH and pCO2 were measured during the experiment but total alkalinity and DIC were calculated using CO2sys (Pierrot et al., 2006), with dissociation constants from Mehrbach et al. (1973) refit by Dickson and Millero (1987) and KSO4 using Dickson (1990); therefore these may not represent exact values, particularly as the sea water comes from a coastal environment. The increased CO2, added through gas bubbling, increased the total dissolved inorganic carbon and hence lowered the pH, although not below levels perceived to be "normal". Therefore the observed impacts on mass and the reduced calcium concentration in these larval H. gammarus may primarily be the result of hypercapnia interfering with normal homeostatic function (perhaps via a trade-off), and not a direct impact on calcification per se; e.g. general physiological stress can result in a reduction in the energy allocated to shell thickening (Henry et al., 1981). A similar study by Gazeau et al. (2007), using comparable CO2 values, also displayed a net decline in the calcium carbonate structure in the mussel, Mytilus edulis, and the Pacific oyster, Crassostrea gigas. As in this study, they too found that calcium carbonate polymorphs did not become undersaturated, but did suggest that saturation states were correlated with net calcification rates.
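The saturation states in Table 1 follow from the standard definition Ω = [Ca2+][CO3^2-]/K'sp. The sketch below reproduces their approximate magnitude from the tabulated CO3^2- concentrations; the seawater calcium estimate and the stoichiometric solubility products are assumed round values for roughly 17 °C and S = 35, not constants taken from the paper:

```python
# Saturation state: omega = [Ca2+][CO3^2-] / K'sp (concentrations in mol/kg).
# Assumed constants (approximate, for S = 35, T ~ 17 C):
CA_SEAWATER = 10.28e-3    # mol/kg; calcium scales with salinity
KSP_CALCITE = 4.3e-7      # mol^2/kg^2, stoichiometric solubility product
KSP_ARAGONITE = 6.6e-7    # mol^2/kg^2 (aragonite is more soluble than calcite)

def omega(co3_umol_kg: float, ksp: float, ca: float = CA_SEAWATER) -> float:
    """Carbonate saturation state for a given CO3^2- concentration."""
    return ca * co3_umol_kg * 1e-6 / ksp

# CO3^2- from Table 1: 281 umol/kg (control), 285 umol/kg (high CO2)
for label, co3 in [("control", 281.0), ("high CO2", 285.0)]:
    print(f"{label}: omega_calcite ~ {omega(co3, KSP_CALCITE):.1f}, "
          f"omega_aragonite ~ {omega(co3, KSP_ARAGONITE):.1f}")
# Both treatments remain well above omega = 1 (supersaturated), which is
# why the carbonate supply-side explanation is ruled out in the text.
```

Because the high-alkalinity treatment water held its CO3^2- concentration essentially unchanged despite the added CO2, Ω stays almost identical between treatments, matching the pattern in Table 1.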
Here we suggest that saturation states may not always be the main limiting factor for calcification under increasing CO2, as the processes and mechanisms for calcification in multi-cellular organisms are complex and are closely linked with many other physiological processes (Pörtner, 2008). As the solubility of a calcite structure increases with the incorporation of magnesium under increasing CO2 concentrations (Raven et al., 2005), several species have the ability to secrete a lower concentration of magnesium in response to environmental changes (Stanley et al., 2002). However, as the decrease in magnesium ions was fairly consistent with the decrease in calcium ions, it seems unlikely that this mechanism for dealing with an altered environment was occurring in the present study.

CO2-induced acidification displayed a progressive long-term CO2 effect on the calcified exoskeleton in Zoea IV, which is arguably the most critical period for production of viable post-larvae. No examination of the effects of OA on post-larval lobsters has been completed to date. As OA is predicted to occur over ocean surface waters with a great degree of certainty, and corrosive waters are already seasonally observed in shelf sea upwelling areas (Feely et al., 2008) and around major rivers (Salisbury et al., 2008), marine organisms inhabiting the pelagic zone will be unable to avoid these unfavourable changes in ocean chemistry (Haugan, 1997).

From most studies on marine organisms to date there appears to be a substantial cost involved with increased CO2 on developmental processes, whether it be decreased calcification or shell dissolution in order to maintain internal chemistry (Gazeau et al., 2007; Michaelidis et al., 2005), or increased muscle wastage in order to maintain skeletal integrity (Wood et al., 2008).
A net decline in calcification, along with the reduced shell mass of developing larval lobsters, may affect their competitive fitness and recruitment success; this could trigger cascading trophic effects on population dynamics and potentially on the functioning of marine ecosystems. It is not certain whether the adverse effects associated with OA may be counteracted by physiological acclimatization and/or genetic adaptation of marine organisms (Riebesell, 2004). However, initial findings do not appear promising, and assessments of potential impacts are hampered by the scarcity of relevant research (Orr et al., 2005).

Acknowledgements. We thank R. Pryor, C. Ellis, and C. Wells of the National Lobster Hatchery, Padstow, for their assistance throughout these experiments. This research project was funded by the National Lobster Hatchery and the European Social Fund (ESF), as part of the Cornwall Research Fund managed by the Combined Universities of Cornwall (CUC). H. S. Findlay is funded from NERC Blue Skies PhD NER/S/A/2006/14324.

Edited by: H.-O. Pörtner

References

Aiken, D. E.: Proecdysis, setal development, and molt prediction in the American lobster (Homarus americanus), J. Fish. Res. Bd. Can., 30, 1337–1344, 1973.
Barry, J. P., Buck, K. R., Lovera, C., Kuhnz, L., and Whaling, P. J.: Utility of deep sea CO2 release experiments in understanding the biology of a high-CO2 ocean: effects of hypercapnia on deep sea meiofauna, J. Geophys. Res., 110, C09S12, doi:10.1029/2004JC002629, 2005.
Bibby, R., Cleall-Harding, P., Rundle, S. D., Widdicombe, S., and Spicer, J. I.: Ocean acidification induced defenses in the intertidal gastropod Littorina littorea, Biol. Lett., 3, 699–701, 2007.
Bibby, R., Widdicombe, S., Parry, H., Spicer, J., and Pipe, R.: Effects of ocean acidification on the immune response of the blue mussel Mytilus edulis, Aquat. Biol., 2, 67–74, 2008.
Boßelmann, F., Romano, P., Fabritius, H., Raabe, D., and Epple, M.: The composition of the exoskeleton of two Crustacea: the American lobster Homarus americanus and the edible crab Cancer pagurus, Thermochimica Acta, 463, 65–68, 2007.
Caldeira, K. and Wickett, M. E.: Anthropogenic carbon and ocean pH, Nature, 425, p. 365, 2003.
Cameron, J. N.: Acid-base homeostasis: past and present perspectives, Physiol. Zool., 62, 845–865, 1989.
Carlberg, J. M. and Van Olst, J. C.: Brine shrimp (Artemia salina) consumption by the larval stages of the American lobster (Homarus americanus) in relation to food density and water temperature, Proceedings of the Annual Meeting – World Mariculture Society, 7, 379–389, 1976.
Chang, E. S., Chang, S. A., and Mulder, E. P.: Hormones in the lives of crustaceans: an overview, Am. Zool., 41, 1090–1097, 2001.
Dupont, S., Havenhand, J., Thorndyke, W., Peck, L., and Thorndyke, M.: Near-future level of CO2-driven ocean acidification radically affects larval survival and development in the brittlestar Ophiothrix fragilis, Mar. Ecol. Prog. Ser., 373, 285–294, 2008.
Egilsdottir, H., Spicer, J. I., and Rundle, S. D.: The effect of CO2 acidified sea water and reduced salinity on aspects of the embryonic development of the amphipod Echinogammarus marinus (Leach), Mar. Poll. Bull., 8, 1187–1191, 2009.
Ellis, R. P., Bersey, J., Rundle, S. D., Hall-Spencer, J. M., and Spicer, J. I.: Subtle but significant effects of CO2 acidified sea water on embryos of the intertidal snail Littorina obtusata, Aquat. Biol., 5, 41–48, 2009.
Feely, R. A., Sabine, C. L., Lee, K., Berelson, W., Kleypas, J., Fabry, V. J., and Millero, F. J.: Impact of anthropogenic CO2 on the CaCO3 system in the ocean, Science, 305, 362–366, 2004.
Feely, R. A., Sabine, C. L., Hernandez-Ayon, J. M., Ianson, D., and Hales, B.: Evidence for upwelling of corrosive "acidified" water onto the continental shelf, Science, 320, 1490–1492, 2008.
Findlay, H. S., Kendall, M. A., Spicer, J. I., and Widdicombe, S.: Future high CO2 in the intertidal may compromise adult barnacle (Semibalanus balanoides) survival and embryo development rate, Mar. Ecol. Prog. Ser., in press, 2009.
Findlay, H. S., Wood, H. L., Kendall, M. A., Spicer, J. I., Twitchett, R. A., and Widdicombe, S.: Comparing the impact of high CO2 on calcium carbonate structures in different marine organisms, J. Exp. Mar. Biol. Ecol., in review, 2009.
Findlay, H. S., Kendall, M. A., Spicer, J. I., Turley, C., and Widdicombe, S.: A novel microcosm system for investigating the impacts of elevated carbon dioxide and temperature on intertidal organisms, Aquat. Biol., 3, 51–62, 2008.
Gazeau, F., Quiblier, C., Jansen, J. M., Gattuso, J.-P., Middelburg, J. J., and Heip, C. H. R.: Impact of elevated CO2 on shellfish calcification, Geophys. Res. Lett., 34, L07603, doi:10.1029/2006GL028554, 2007.
Haugan, P. M.: Impacts on the marine environment from direct and indirect ocean storage of CO2, Waste Manage., 17, 323–327, 1997.
Haugan, P. M., Turley, C., and Pörtner, H.-O.: Effects on the marine environment of ocean acidification resulting from elevated levels of CO2 in the atmosphere, Document prepared by an intercessional working group convened by Norway and the United Kingdom, Report to the OSPAR commission, 1, 1–27, 2006.
Havenhand, J., Buttler, F.-R., Thorndyke, M. C., and Williamson, J. E.: Near-future levels of ocean acidification reduce fertilization success in a sea urchin, Curr. Biol., 18, R651–R652, 2008.
Henry, R. P., Kormanik, G. A., Smartresk, N. J., and Cameron, J. N.: The role of CaCO3 dissolution as a source of HCO3^- for the buffering of hypercapnic acidosis in aquatic and terrestrial decapod crustaceans, J. Exp. Biol., 94, 269–274, 1981.
Hinga, K. R.: Effects of pH on coastal phytoplankton, Mar. Ecol. Prog. Ser., 238, 281–300, 2002.
Holman, J. D., Burnett, K. G., and Burnett, L. E.: Effects of hypercapnic hypoxia on the clearance of Vibrio campbellii in the blue crab, Callinectes sapidus, Biol. Bull. Mar. Biol. Lab. Mass., 206, 188–196, 2004.
IPCC: The fourth assessment report of the Intergovernmental Panel on Climate Change (IPCC), Cambridge University Press, Cambridge, UK, and New York, USA, 2007.
Ishimatsu, A., Kikkawa, T., Hayashi, M., Lee, K. S., and Kita, J.: Effects of CO2 on marine fish: larvae and adults, J. Oceanogr., 60, 731–741, 2004.
Kikkawa, T., Ishimatsu, A., and Kita, J.: Acute CO2 tolerance during the early developmental stages of four marine teleosts, Environ. Toxicol., 18, 375–382, 2003.
Kleypas, J. A., Feely, R. A., Fabry, V. J., Langdon, C., Sabine, C. L., and Robbins, L. L.: Impacts of ocean acidification on coral reefs and other marine calcifiers: a guide for future research, report of a workshop held 18–20 April 2005, St. Petersburg, FL, sponsored by NSF, NOAA, and the US Geological Survey, 2006.
Kurihara, H., Shimode, S., and Shirayama, Y.: Sub-lethal effects of elevated concentration of CO2 on planktonic copepods and sea urchins, J. Oceanogr., 60, 743–750, 2004a.
Kurihara, H., Shimode, S., and Shirayama, Y.: Effects of raised CO2 concentration on the egg production rate and early development of two marine copepods (Acartia steueri and Acartia erythraea), Mar. Pollut. Bull., 49, 721–727, 2004b.
Kurihara, H. and Shirayama, Y.: Effects of increased atmospheric CO2 on sea urchin early development, Mar. Ecol. Prog. Ser., 274, 161–169, 2004.
Langdon, C. and Atkinson, M. J.: Effect of elevated pCO2 on photosynthesis and calcification of corals and interactions with seasonal change in temperature/irradiance and nutrient enrichment, J. Geophys. Res., 110, C09S07, doi:10.1029/2004JC002576, 2005.
Langenbuch, M. and Pörtner, H.-O.: Changes in metabolic rate and N excretion in the marine invertebrate Sipunculus nudus under conditions of environmental hypercapnia: identifying effective acid–base variables, J. Exp. Biol., 205, 1153–1160, 2002.
Michaelidis, B., Ouzounis, C., Paleras, A., and Pörtner, H.-O.: Effects of long-term moderate hypercapnia on acid-base balance and growth rate in marine mussels (Mytilus galloprovincialis), Mar. Ecol. Prog. Ser., 293, 109–118, 2005.
Morse, J. W., Andersson, A. J., and Mackenzie, F. T.: Initial responses of carbonate-rich shelf sediments to rising atmospheric pCO2 and "Ocean acidification": Role of high Mg-calcites, Geochim. Cosmochim. Ac., 70, 5814–5830, 2006.
Neufeld, D. S. and Cameron, J. N.: Postmoult uptake of calcium by the blue crab (Callinectes sapidus) in water of low salinity, J. Exp. Biol., 171, 283–299, 1992.
Orr, J. C., Fabry, V. J., Aumont, O., et al.: Anthropogenic ocean acidification over the twenty-first century and its impact on calcifying organisms, Nature, 437, 681–686, 2005.
Paasche, E.: A review of the coccolithophorid Emiliania huxleyi (Prymnesiophyceae), with particular reference to growth, coccolith formation, and calcification–photosynthesis interactions, Phycologia, 40, 503–529, 2001.
Pörtner, H.-O., Langenbuch, M., and Reipschläger, A.: Biological impact of elevated ocean CO2 concentrations: lessons from animal physiology and Earth history, J. Oceanogr., 60, 705–718, 2004.
Pörtner, H.-O., Langenbuch, M., and Michaelidis, B.: Synergistic effects of temperature extremes, hypoxia, and increases in CO2 on marine animals: From Earth history to global change, J. Geophys. Res., 110, C09S10, doi:10.1029/2004JC002561, 2005.
Pörtner, H.-O.: Ecosystem effects of ocean acidification in times of ocean warming: a physiologist's view, Mar. Ecol. Prog. Ser., 373, 203–217, 2008.
Raven, J., Caldeira, K., Elderfield, H., Hoegh-Guldberg, O., Liss, P., Riebesell, U., Shepherd, J., Turley, C., and Watson, A.: Ocean acidification due to increasing atmospheric carbon dioxide, Pol- icy document 12/05, online available at: www.royalsoc.ac.uk, The Royal Society, UK, 2005. Reynaud, S., Leclercq, N., Romaine-Lioud, S., Ferrier-Pagés, C., Jaubert, J., and Gattuso, J. P.: Interacting effects of CO2 partial pressure and temperature on photosynthesis and calcification in a scleractinian coral, Glob. Change Biol., 9, 1660–1668, 2003. Richards, A. G.: The integument of arthropods. University of Min- nesota Press, Minneapolis, 1951. Riebesell, U., Zondervan, I., Rost, B., Tortell, P. D., Zeebe, R., and Morel, F. M. M.: Reduced calcification of marine plankton in response to increased atmospheric CO2, Nature, 407, 364–367, 2000. Riebesell, U.: Effects of CO2 enrichment on marine phytoplankton, J. Oceanogr., 60, 719–729, 2004. Sabine, C. L., Feely, R. A., Gruber, N., et al.: The oceanic sink for anthropogenic CO2, Science, 305, 367–371, 2004. Salisbury, J., Green, M., Hunt, C., and Campbell, J.: Coastal acid- ification by rivers: A new threat to shellfish? Eos, Transactions, American Geophysical Union, 89(50), 513–513, 2008. Shirayama, Y. and Thornton, H.: Effect of increased atmo- spheric CO2 on shallow water marine benthos, J. Geophys. Res., 110(C9), C09S08, doi:10.1029/2004JC002618, 2005. Stanley, S. M., Ries, J. B., and Hardie, L. A.: Low magnesium cal- cite produced by coralline algae in seawater of Late Cretaceous composition, P. Natl. Acad. Sci. USA, 99, 15323–15326, 2002. Spicer, J. I. and Eriksson, S. P.: Does the development of respiratory regulation always accompany the transition from pelagic larvae to benthic fossarial postlarvae in the Norway lobster Nephrops norvegicus (L.)?, J. Exp. Mar. Biol. Ecol., 295, 219–243, 2003. Spicer, J. 
I., Raffo, A., and Widdicombe, S.: Influence of CO2- related seawater acidification on extracellular acid-base balance in the velvet swimming crab Necora puber, Mar. Biol. (Berl.), 151, 1117–1125, 2007. Wickins, J. F.: The effect of reduced pH on carapace calcium, stron- tium and magnesium levels in rapidly growing prawns (Penaeus monodon Fabricius), Aquaculture, 41, 49–60, 1984. Widdicombe, S. and Spicer, J. I.: Predicting the impact of ocean acidification on benthic biodiversity: What can animal physiol- ogy tell us?, J. Exp. Mar. Biol. Ecol., 366, 187–197, 2008. Wood, H. L., Spicer, J. I., and Widdicombe, S.: Ocean acidification may increase calcification rates, but at a cost, P. Roy. Soc., Lond., 275B, 1767–1773, 2008. Zondervan, I., Rost, B., and Riebesell, U.: Effect of CO2 concen- tration on the PIC/POC ratio in the coccolithophore Emiliania huxleyi grown under light-limiting conditions and different day lengths, J. Exp. Mar. Biol. Ecol., 272, 55–70, 2002. Biogeosciences, 6, 1747–1754, 2009 www.biogeosciences.net/6/1747/2009/ www.royalsoc.ac.uk work_4sskqj6gsrezvp4eblaqffhbqi ---- 9 Lente sa likod ng Lente - Camba.indd H U M A N I T I E S D I L I M A N 145 MGA LENTE SA LIKOD NG LENTE: Isang Panimulang Pag-aaral ng Ilang Piling Litratong Kuha ni Xander Angeles  MOREAL NAGARIT CAMBA Si Moreal Camba ay nakapaglathala ng ilang mga sanaysay sa Daluyan: Jornal ng Wikang Filipino, CAS Review, at sa University of Michigan Journal for Filipino and Philippine Studies ukol sa kababaihan, wika, at panitikan. Nagtapos siya ng kursong BA Araling Pilipino (cum laude) at MA Philippine Studies kapwa sa Kolehiyo ng Arte at Literatura, Unibersidad ng Pilipinas Diliman. Kasalukuyan naman niyang tinatapos ang kursong PhD in Philippine Studies sa Asian Center, UP Diliman. Sampung taon na siyang guro ng Wika at Panitikang Filipino sa isang unibersidad sa Ortigas. 
The name Xander Angeles is among the most in-demand in Philippine fashion and advertising. Attached to it are assorted local and international awards, along with an advertising company, a modelling agency, and a fashion and photography school. This review is titled "Lenses Behind the Lens" because, beyond the lens of the camera, other lenses look at and dictate the image that becomes the final art before it is printed in newspapers and magazines: the lens/eye of the photographer, the photo editor, the art director, and many others. At the same time, in laying out the many lenses at work before, during, and after a photograph is taken, the researcher's own lens cannot be set aside: mine, the eye of a consciously feminist reader of the discourse.

Every photographer has a 'personal lens.' This bias affects, whether the photographer is aware of it or not, (1) the choice of subjects and (2) how those subjects are framed (Vergara 7-14). Through it the photographer wields power in editing and cropping, and in directing the subject (how to smile, stand, or sit in whatever way the photographer deems beautiful or important). New technologies (the computer, the internet, Photoshop, and so on) complicate that power further; together with photo editors and art directors, one click of the camera or the mouse can transform the image of the photographed subject. They are sculptors of the modern age, able to add and remove parts of the body and even the backgrounds of an image.

The photographer's various online sites serve as a well of images for this study: (1) Angel Locsin – Timex; (2) Anne Curtis – Folded&Hung; (3) Angel Locsin – Folded&Hung/FHM; (4) Kim Chiu and Gerald Anderson – Bench; (5) Liza Maza, Angel Locsin, and Sandra Araullo – GABRIELA.
These images were chosen because they were among the screen savers on Angeles's computer, and because they were the images discussed by the researcher and her interviewees. The researcher interviewed Xander Angeles to identify his aesthetic, the technologies he uses, and the role those technologies play in his photographs. She also interviewed John Laurence Patulan, a Fine Arts graduate of the University of Santo Tomas and an in-house graphic designer (photo editor) at Summit Media, for an expert opinion on the images under study and on several concepts in digital photography in the world of print media.

LAYING OUT THE MANY LENSES

Before a photograph, or final art, is released in a newspaper or magazine, it passes through many eyes/lenses. First among these are the photographer's lens, the camera's lens, and the tight relation between the two. There are also the lenses of the post-production staff and of the client commissioning the photograph.

The Lens of Xander Angeles

Nick Knight, Rankin, Dah-Len, Mondino, LaChapelle: these are the international photographers to whom Xander Angeles is compared. According to Luis Espiritu, Angeles is known for photographs that show "edginess, sensuality, and the irreverent play and display of light and color" (1-2). That edginess and sensuality are now inseparable from his controversial photograph of Diana Zubiri in a red bikini on the Shaw Boulevard flyover in 2002. The irreverent play and display of light and color, meanwhile, seems summed up in the very name of the company Angeles founded, Edge of Light Studios (EoLS).

According to Angeles, before and during a photoshoot he is already imagining the range of images a client might ask for. Because his service and studio run as a business, "I give what the clients want," he says.
As the shoot's creative director, he presides over the set. Before the shoot he makes sure everything is in order: he scrutinizes the position of the lights and the shadows they will cast, and the models' make-up, hair, and clothing. Because he trained on SLR (single-lens reflex) film cameras, he adds, he carried that meticulousness into photoshoots that use a DSLR (digital single-lens reflex). Film and processing were expensive then, he says, so one had to be meticulous about every aspect before clicking the camera. Besides, he would rather spend time interacting with the model on set, adjusting hair, clothes, and light, than sit in front of a computer. "If you did your homework right, you don't need to alter or edit your photos," he says (Angeles interview).

He describes himself at photoshoots as "spontaneous, anything goes. If I like it, I shoot it. It's very raw" (Lei 1-3). He does not make his models pose; he follows his mood on the day. When he is not 'getting' anything from a model, though, he asks for movements. It is better, then, when he already knows or has worked with the people on set, from the model to the make-up artist and stylist, because the work goes quickly when he knows their craft and capabilities (Angeles interview).

Often, hiring Angeles means hiring the services of EoLS as well. Beyond photography, after all, EoLS's services include celebrity booking, model casting, and the provision of make-up artists and hair and wardrobe stylists (www.edgeoflightstudios.com). In almost ten years in the industry, he has built a portfolio of collaborators.
As creative director and owner of the franchise of Elite Model Management–Manila (EMM-M), his network widens further and his well of talent deepens. EMM-M is not only a modeling agency; it also gives workshops on photography (www.elitemanila.com/blog/). Alongside it is the Elite Fashion Academy Manila (EFAM), which likewise recruits teachers for courses on fashion and fashion shoots (www.facebook.com/#!/photo.php?fbid=319034755151&set=a.319034745151.185894.39715980151).

Recently, EoLS advertised for models for a calendar photoshoot. The call required interested applicants to send a snapshot of themselves with details of their height and their bust, waist, hip, and foot measurements, plus the qualification that they be "girls that have a style about them, are comfortable wearing LINGERIE, secure and confident in front of a camera, energetic, effortless movement. Models MUST be 18-20 years old, at least 5'7" tall" (www.facebook.com/pages/Xander-Angeles-Photographer/39715980151?filter=2). The patriarchal lens used in choosing the company's models is striking: only women no shorter than 5'7", and (note the all-caps MUST) only 18 to 20 years old. Hence, per the advertisement, "Women who don't meet these requirements will not receive a reply" (ibid.). Also striking is how the intention to undress the women and put them in lingerie is 'dressed up': girls that have a style about them, secure and confident in front of a camera, energetic, effortless movement (my emphasis). The requirements of the casting call issued by EMM-M are not far from these either.
(www.elitemanila.com/blog/) In truth, the requirements of the two agencies Angeles runs merely reflect the requirements of the industry he moves in, and of the clients he works for. The male gaze seems to prevail as well in the photography workshops he has conducted. Besides a creative lighting workshop and photo directing, the sessions included a swimwear shoot and boudoir photography, both of which required the models to wear a 'scrap of cloth' before being photographed by the enrollees. It is worth noting that the models for those shoots were supplied by EMM-M. (Whether they were applicants or actual EMM-M models requires further research.) In such an arrangement, applicants are all the more vulnerable.

"I love women, beautiful women," Angeles said when asked why women make up more than ninety percent of his photographs. "You can't go wrong with beautiful women," he added. Note that the word women carries the qualifier 'beautiful': not simply biological women, but beautiful women. There is, he says, a 'limit' to what make-up and a stylist can do to beautify a subject such as, for example, Pokwang (Angeles interview).

The Lens of Xander Angeles's Lens

Xander Angeles began in the world of photography with his Pentax P30N (Espiritu 1). In fact, when he released his coffee-table book 'X Book' (March 2004), he used an SLR camera, as he did for his famous photograph of Diana Zubiri (Angeles interview). Angeles gradually made a name in digital photography with a back-to-back photo exhibit with Jun de Leon in June 2004, followed by a solo exhibit in June 2005. He was also featured in the magazine Digital Photographer – Ukraine in 2005 (www.modelmayhem.com).
Today Angeles is one of the Philippines' leading digital photographers and an endorser of Nikon's professional DSLR cameras. With a DSLR, the image to be captured can be seen at once on the camera's LCD display, whereas an SLR requires developing the negative first. Moreover, because his cameras are professional-grade (a Nikon D2Xs before, a D700 at present), there is no time lag or delay in the image being captured (http://download.cnet.com/Nikon-Camera-Control-Pro/3000-18489_4-141499.html).

A good example, according to John Laurence Patulan, is the image of Angel Locsin in the Timex advertisement (2008), where the model leans against one edge of what appears to be a swimming pool. Because of the camera's shutter-speed range of 30 s to 1/8000 s, it catches the movement of the water (at the right side of the photograph). Its capability of 5 frames per second at 12 megapixels and 8 frames per second at 6.8 megapixels also yields better image resolution: enlarge the photograph and put it on a billboard, and its color and detail do not scatter. A high megapixel count, he adds, helps photo editors, who can more easily chase down and correct the light, color, and image if needed. Through the lenses attached to the camera, the photograph's focus can also be adjusted and changed: the background can be blurred, as in this image, to give more attention to the model.

According to Angeles, explaining how the photographs of Anne Curtis for Folded&Hung (a perfume advertisement, 2010) were taken, the images are "raw"; that is, they look exactly as captured from the actual scene. Even so, they went through color grading, a digital tempering of color, which is why the naturally yellow cast became purple. His choice of purple rested on his taste and his sense of what would suit the mood of the scene, the product, and the model.

According to Patulan, although color grading is an Adobe Photoshop process on the computer, it is not exclusive to the computer. High-end professional Nikon cameras can already color grade before the files are transferred at all. Because the camera is digital, images are easily saved and loaded on a very small container, the compact flash disk or memory card. Since no negative is used, the quality of the image remains the same however many years pass, unlike a negative, which can mold, moisten, or get scratched. Nor is there the problem of over-exposed film caused by loading or removing a roll incorrectly (Patulan interview). On the other hand, files can become corrupted, as the researcher witnessed while interviewing Angeles: some of the Curtis photographs in his files had been erased.

The Lens of Post-Production

Before a photograph appears in a magazine or newspaper, it still passes through the eyes/lenses of the people in post-production: the art director and the photo editors. Theirs are the critical eyes that 'fix' the images one last time. Although Angeles says his photographs are "raw," among the services offered by his company EoLS are retouching, graphic design, and final art (www.edgeoflightstudios.com). It is not unlikely, then, that some of his photographs pass through these processes before being handed to the publications' photo editors.

According to Patulan, minor edits include erasing the model's baby hairs, stray or flyaway strands, pimples, and anything else that would 'get in the way of' or 'uglify' the photograph. Sometimes, though, eye bags are not removed. On a cover of Women's Health he edited (March 2010), he had to keep Senator Pia Cayetano's eye bags.
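The color grading described above, digitally retempering an image's cast (here, pushing a warm yellow toward purple), can be sketched in a few lines of Python. This is an illustrative assumption, not Angeles's or Patulan's actual workflow: the transform simply swaps the green and blue channels, which turns yellow (red + green) into magenta/purple (red + blue), and the pixel values are invented for the example.

```python
def grade_pixel(rgb):
    """Shift a warm (yellowish) pixel toward purple by swapping the
    green and blue channels: yellow (R+G) becomes magenta/purple (R+B)."""
    r, g, b = rgb
    return (r, b, g)

def color_grade(image):
    """Apply the per-pixel grade to a whole image, represented as a
    list of rows, each row a list of (R, G, B) tuples."""
    return [[grade_pixel(px) for px in row] for row in image]

# A tiny 1x2 'image': one yellow pixel, one neutral gray pixel.
image = [[(230, 200, 40), (128, 128, 128)]]
print(color_grade(image))  # → [[(230, 40, 200), (128, 128, 128)]]
```

Note that neutral grays (equal channels) pass through unchanged, which is why a grade like this shifts the cast without destroying the image's tonal balance.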
The art director simply advised him to lighten the area under the eyes so the eye bags would not draw too much attention. The decision to keep them was meant to keep the image true to the senator's actual age and look; removing eye bags takes years off, and the photograph would then look retouched (ibid.). Besides, the model was not endorsing a beauty product, so they could be let pass; even so, the area still had to be whitened because it was not 'pleasing' by the publication's standards.

There are also times when a model's moles are not erased, especially when they are her trademark. Curtis, for example, is known for the moles under her left eye and at the center of her chest. According to Patulan, the photograph would look retouched if those marks disappeared on that model. Other moles, especially inconspicuous ones, may be erased. In the F&H images, the moles preserved Curtis's 'natural' look.

Major edits include erasing unwanted shadows, fixing wrinkled clothes, fixing make-up, and adding to or trimming the model's weight or build, among others. Many of the images Patulan has edited, he says, were the result of carelessness on set before and during the shoot. For the Women's Health cover featuring Solenn Heussaff (October 2010), he had to fix the mismatched sports bra and executive skirt the model wore, turning her top into corporate wear; and because of poor lighting, he had to erase the shadow of Heussaff's hand on the skirt, which looked like a urine stain. Smoothing models' armpits has likewise become routine.
This is difficult work, since every photo editor aims to make that part of the body 'look natural'; skill shows when the armpit keeps its natural lines and shadows (Patulan interview).

Patulan also pointed to the changing belly of Angel Locsin in the Folded&Hung advertisement (Summer Swimsuit Collection, 2010) that ran in FHM May 2010. The small bulge visible on Locsin's belly while she arches back suddenly disappears in the photographs where the model bends slightly forward. Perhaps, Patulan suggests, the belly was not 'unseemly' to look at in the arched shot, so it was let pass, unlike the result in the bent shot. What counts as unseemly, according to Patulan, depends on whom he, together with the photo editor and art director, imagines as the magazine's target audience; since this is FHM, that audience is men. To shrink a model's belly in a photograph, that part of the body must be liquified and pulled back. In the process, the shifting angles of the shadows must also be fixed; the photo editor needs to understand the relation of light and shadow. One must also be meticulous not to erase the navel while liquifying; if that happens, the image looks retouched (Patulan interview).

The sensuality of the images of Locsin and Curtis, on the other hand, is undeniable. In her Timex commercial, Locsin is a woman with open arms and a stare that seems to say, "I am ready to compete." The woman wet and soaked in water is by now an archetype of the sexual woman, from Gloria Diaz's wet look in "Pinakamagandang Hayop sa Balat ng Lupa," to the women of "Temptation Island," to Thalia in "Marimar," to the image of Venus in Botticelli's painting. That 'openness,' underscored by a knowing smile and stare, appears as well in her F&H commercial.
Curtis's sensuality likewise reverberates in her images: hair blown by the wind as she glances back, and hair blown by the wind as she closes her eyes and holds her nape. That grip on the nape, in fact, mimics the many women characters of S&M stories, hands tied or held at the nape, a sign of openness, submission, and powerlessness. It is worth noting, too, that the model's lips are parted in both images, a figure long twinned with the archetype of the sensual woman.

Another photo edit is the addition of a background, as in the Kim Chiu and Gerald Anderson advertisement for Bench's perfume "Adored" (2010). According to Angeles, he had to place a glow behind the model's head because the client, Suyen Corporation, wanted him to look saint-like, "to signify innocence, being angelic, being adorable" (Angeles interview), which is also the name of the perfume the models endorse: "Adored."

Patulan praised the 'careful handling of the models' by the photo director, Angeles. Because of Kim Chiu's thinness, he says, she is always liquified, not to make her thinner but to put flesh on her in the photographs; her extreme thinness is supposedly not 'beautiful.' By placing Chiu behind Gerald Anderson and dressing her in a long-sleeved top, the thinness goes unnoticed, which means she no longer has to be 'fixed.'

Then again, placing the very thin Chiu behind the muscular Anderson, her hand on his shoulder besides, seems to signal her weakness: that she needs a savior or protector. Here Anderson, centered and whole in the photograph, is Chiu's hero. The rays behind Anderson's head underscore that role of savior.
It is worth noting that there is no glow behind Chiu's own head.

In sum, photo editors and art directors aim to 'fix' images before they are printed, editing whatever they judge 'not beautiful' or 'distracting.' What counts as beautiful or distracting depends on the model (her age, status, and so on), the context of the photograph, the product being endorsed, the client's demands, the readers' expectations, and personal taste. Above all, as Patulan puts it, the model must end up looking 'natural,' not like a mannequin (unless that is what is asked for).

THE AMALGAMATION OF LENSES

In an unequal society, it is said, looking is a form of power; and in a patriarchal society, it has become customary to use the eye of the Father in looking at/reading a text or image. Through this androcentric gaze, the patriarchal system sizes up and appraises women against standards that are themselves products of patriarchal values. Through it, the images of the 'ideal' and 'non-ideal' woman are constructed out of the fantasies, imagination, and needs of the patriarchal system (Robbins 51).

In analyzing these selected photographs by Xander Angeles, what stands out is the succession of biologically male eyes (photographer, photo editor, art director) looking through a patriarchal lens. Added to this is their deference to the eyes of their target audience (most often also men) and of clients who in turn rely on the presumed taste of their consumers. Through these eyes they hunt for 'flaws' in the models/photographs, and in the process they manufacture the image/illusion of the ideal woman. It is amusing to think that, in 'fixing' these imperfections, they still want their edited photographs to look 'natural.'
Because the patriarchal gaze is passed off as 'natural,' even women adopt it as the standard of their own beauty and attractiveness. Neferti Xina Tadiar argues that the patriarchal gaze operates through Filipinos themselves, with a woman's own eyes used to look down on or belittle other women (De Guzman 9-13). This means an image is not automatically pro-woman simply because a biological woman is viewing/reading it; hence the eyes of a conscious Filipina feminist serve as a counter-hegemonic lens against the prevailing lens of patriarchy, breaking the habitual way of looking at women models.

Another unforgettable photograph by Angeles is that of three GABRIELA women: former Representative Liza Maza, Angel Locsin, and Sandra Araullo, part of the campaign "Passion of a Nationalist Feminist" (March 2010). Here two pillars converge: a photographer known for his sexy women models and an organization that champions nationalist feminism; through this collaboration the photographer seems to step out of, and rise above, his boxed-in image.

The character of Gabriela, nationalist-feminist organization and heroine who fought the Spaniards, is underscored by the models' postures and the positions of their hands. The nearly upright stances and the hands on hips, for instance, are images of women who are serious, combative, proud, and able to stand on their own. Even so, the images still seem to contain 'mistakes.' According to Patulan, the process used on them was drop-out, or tracing: the models are shot against a white wall, the image is traced, and it is 'pasted' on top of a new background.
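In digital terms, the drop-out (tracing) process Patulan describes amounts to building a mask that separates subject from white backdrop, then compositing the subject over a new background. A minimal sketch in Python follows; the grayscale representation, the sample pixel values, and the cutoff of 250 are assumptions for illustration only, not the editors' actual settings.

```python
WHITE_THRESHOLD = 250  # assumed cutoff: anything this bright counts as the white wall

def subject_mask(image):
    """True where a pixel belongs to the subject, False where it is
    the white backdrop (grayscale image as a list of rows of 0-255 values)."""
    return [[px < WHITE_THRESHOLD for px in row] for row in image]

def composite(subject, background):
    """Paste the traced subject over a new background of the same size."""
    mask = subject_mask(subject)
    return [
        [s if keep else b for s, b, keep in zip(s_row, b_row, m_row)]
        for s_row, b_row, m_row in zip(subject, background, mask)
    ]

# 1x4 grayscale strip: two backdrop pixels (255), then two subject pixels.
shot = [[255, 255, 60, 90]]
new_bg = [[10, 10, 10, 10]]
print(composite(shot, new_bg))  # → [[10, 10, 60, 90]]
```

A hard threshold like this is exactly what produces the jagged, 'angular' edges the review goes on to note in the GABRIELA images; careful production work softens the mask along hems and curves instead of cutting at a single brightness value.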
Matrabaho ang gawaing ito dahil sa kailangang maging maingat ang photo editor sa pag- trace sa mga tuwid at kurbadang linya ng modelo/subject bago ilipat sa ibang background. Sa tatlong larawan, kapansin-pansin na hindi pulido ang gawa. Nariyan ang makanto na laylayan ng damit (ni Araullo sa kaliwang bahagi), ang mga nawawalang paa ng mga modelo at iba pa. Ayon kay Patulan, mukhang minadali ang final art ng larawan (Patulan panayam). Kung kaya naman bagaman ‘passion of a nationalist fashionista’ (aking diin) ang tawag sa serye ng mga larawang ito, malayong-malayo ito sa mga fashionistang litrato ni Angeles na kinumisyon ng mga malalaking kumpanya at ng hindi mabilang na mga magasin. H U M A N I T I E S D I L I M A N 155 BINANGGIT NA SANGGUNIAN Almario, Virgilio, et al. UP Diksyunaryong Filipino. Pasig: Anvil, 2001. Print. Angeles, Xander. Personal na panayam. 23 Setyembre 2010. Datuin, Flaudette May. Home Body Memory. Quezon City: UP Press, 2002. Print. De Guzman-Odine (ed.). Body Politics: Essays on Cultural Representations of Women’s Bodies. Quezon City: UP CWS. 2002. Eagleton, Terry. Marxism and Literary Criticism. California: University of California Press, 1976. Print. Estrada-Claudio, Sylvia. Rape, Love and Sexuality: The Construction of Women in Discourse. Quezon City: UP Press, 2002. Print. Garcellano, Edel. Knife’s Edge. Quezon City: UP Press, 1991. Print. Kintanar, Thelma (ed.). Women Reading. Quezon City: UP Press, 1992. Print. Lodge, David. Modern Criticism and Theory: A Reader. New York: Longman, 1988. Print. Mendez-Ventura, Sylvia. Feminist Readings of Philippine Fiction: Critique and Anthology. Quezon City: UP Press, 1994. Print. Patoja-Hidalgo, Cristina. Gentle Subversion. Quezon City: UP Press, 1998. Print. Patulan, John Laurence. Personal na panayam. 10 Oktubre 2010. Robbins, Ruth. Literary Feminism. London: MacMillan, 2000. Print. Santiago, Lilia Quindoza. Sa Ngalan ng Ina: 100 Taon ng Tulang Feminista sa Pilipinas. Quezon City: UP Press, 1997. 
Print. Santiago, Lilia Quindoza. Sexuality and the Filipina. Quezon City: UP Press, 2007. Print. Scholes, Robert. Textual Power. London: Yale University Press, 1984. Print. Halili, Servando D., Jr. Iconography of the New Empire: Race and Gender Images and the American Colonization of the Philippines. Quezon City: UP Press, 2006. Print. Tolentino, Rolando. Richard Gomez at ang Mito ng Pagkalalake, Sharon Cuneta at ang Perpetwal na Birhen, at iba pang Sanaysay sa Bida ng Pelikula bilang Kultural na Texto. Pasig: Anvil, 2000. Print. Vergara, Benito, Jr. Displaying Filipinos: Photography and Colonialism in Early 20th Century Philippines. Quezon City: UP Press, 1995. Print. Watkins, Susan. Twentieth-Century Women Novelists. New York: Palgrave, 2001. Print. Web Sources Espiritu, Luis. "Rated X." Philstar.com. 31 May 2004. 16 September 2010. Lei, Joanna. "Stepping into the Light with Xander Angeles." Telebisyon.net. 8 February 2010. Web. 16 September 2010. "Manila's Most Beautiful Troop to Xander Angeles' Seven Jeans Calendar Launch." Spot.ph. 21 January 2010. Web. 16 September 2010. "Nikon Camera Control Pro 2.7.1 for Mac." Web. 27 September 2010. Salamat, Marya. "Lisa Maza's Passion Statement." Bulatlat.com. 15 March 2010. Web. 3 October 2010. Teehankee, Pepper. "MMX: A Fusion of Art and Fashion." Telebisyon.net. 9 February 2010. Web. 16 September 2010. "The Passion of Nationalist Fashionista." Philstar.com. 17 March 2010. Web. 3 October 2010. "The 'X' Factor for Xander Angeles." Manila Bulletin. 26 November 2003. 16 September 2010. http://www.edgeoflightstudios.com http://www.elitemanila.com/blog/ http://www.facebook.com/pages/Xander-Angeles-Photographer http://www.facebook.com/#!/album.php?aid=240949&id=151742649321 http://www.facebook.com/#!/photo.php?fbid=474479460151&set=a.474479370151.
272834.39715980151 http://www.fhm.com.ph/fhm-babes/imagegallery/1365/angel-locsin http://www.myspace.com/xanderangeles http://www.modelmayhem.com http://www.philstar.com/Article.aspx?articleId=558692&publicationSubCategoryId=83 http://pinoyshowbiss.blogspot.com/2010/05/angel-locsins-fhm-may-2010-issue.html http://weloveangel.multiply.com/photos/album/7/Angel_Locsin_for_Timex#photo=7 http://www.xanderangeles.com work_4t7nuor62nfshfyeavfebf2eze ---- GSA 2018 Annual Scientific Meeting depressive symptoms. Our findings suggest the majority of residents experience terminal change, with the exception of those at already high levels of impairment. Furthermore, late-life cognitive change is related to functional and mental health. IS COGNITIVE DECLINE BEFORE DEATH IN THE OLDEST OLD A UNIVERSAL PHENOMENON? A. Robitaille1, D. Cadar, PhD2, A. Koval, PhD3, C. Jagger, PhD4, B. Johansson, PhD5, S. Hofer, PhD6, A. Piccinin, PhD7, G. Muniz-Terrera, PhD8, 1. Department of Psychology, Université du Québec à Montréal, Montreal, Quebec, Canada, 2. Department of Epidemiology and Public Health, University College London, London, UK, 3. Department of Psychology, University of Victoria, Victoria, BC, Canada, 4. Institute of Health and Society, Newcastle University, Newcastle upon Tyne, UK, 5. Department of Psychology, University of Gothenburg, Gothenburg, Sweden, 6. Department of Psychology, University of Victoria, Victoria, BC, Canada, 7. Department of Psychology, University of Victoria, Victoria, BC, Canada, 8. Centre for Dementia Prevention, University of Edinburgh, Edinburgh, UK We investigated the heterogeneity in end of life cognitive decline in two European longitudinal studies of the oldest old: the OCTO-Twin and the Newcastle 85+ Study. Using a coordinated analytical approach, we identified unobserved groups of individuals with similar trajectories of cognitive decline at the end of life by fitting Tobit Growth Mixture Models to Mini-Mental State Examination scores.
In both studies, the current analyses consistently identified two groups of individuals whose cognitive decline at the end of life was distinct: one group did not exhibit an ostensible rate of decline, while another group experienced steep decline in measures of global cognition within each study. In OCTO-Twin, accelerated decline was found in only one group. Our results showed heterogeneity in cognitive decline at the end of life in the oldest old across two different European countries and suggest that terminal decline is not necessarily a normative process. TERMINAL DECLINE AS A GENERIC MODEL OF COGNITIVE AGING V. Thorvaldsson, University of Gothenburg, Gothenburg, Vastra Gotaland, Sweden Cognitive terminal decline (TD) refers to acceleration in an individual decline trajectory with an onset at some specific time (months, years) prior to death. Previous studies provide strong evidence of large inter-individual differences in the onset of TD, in which some show an acceleration many years prior to death while others are never affected. In the present analysis we further evaluate the pros and cons of TD as a generic model of cognitive aging, with the specific purpose of capturing inter-individual differences in change trajectories. More specifically, we provide examples of the potential role of cardio-vascular health-related variables derived from a representative population-based sample (the H70 study). The findings show superiority of the time-to-death time structure, in comparison to age-based or time-in-study structures, in accounting for inter-individual differences in the change trajectories, and that compromised vascular health in general is associated with earlier onset of TD. REDUCING INTRAINDIVIDUAL VARIABILITY IN COGNITIVE SPEED VIA PRODUCTIVE ACTIVITY ENGAGEMENT: THE SYNAPSE PROJECT A.A. M. Bielak1, C. Brydges, PhD2, D.C. Park, PhD3, 1. Colorado State University, Fort Collins, Colorado, United States, 2. Colorado State University, Fort Collins, CO, USA, 3.
The University of Texas at Dallas, Dallas, Texas, USA Intraindividual variability (IIV) in cognitive speed has the potential to be a sensitive outcome measure for evaluating cognitive improvement from lifestyle interventions. Using the Synapse Project (n = 181), a randomized controlled trial to improve cognitive ability, we evaluated if older adults who participated in productive engagement (i.e., active learning of quilting, digital photography, or both) showed a reduction in IIV compared to those in receptive engagement (i.e., using existing knowledge via social outings or rote cognitive tasks). All participants completed their condition for 14 weeks. IIV was based on three versions of an RT flanker task. Complier average causal effect modeling was used with compliance set at 210 hours. The models indicated that compliers in the productive engagement groups did not show any significant change in their IIV. The results demonstrate that even an intensive activity intervention may not be sufficient to cause significant improvement in IIV. SESSION 1465 (SYMPOSIUM) OLDER TRAUMA-EXPOSED MALE AND FEMALE VIETNAM VETERANS: CLINICAL ISSUES AND INNOVATIONS Chair: P. Bamonti, VA Boston Healthcare System, Roslindale, Massachusetts Co-Chair: K. O'Malley, VA Boston Healthcare System, Boston, Massachusetts Discussant: E.H. Davison, VA Boston Healthcare System, VA National Center for PTSD, Boston University School of Medicine, Boston, Massachusetts As Vietnam Veterans age into older adulthood, the consequences of lifetime trauma exposure can manifest, including mental and physical health morbidities. Identification of risk and protective factors is needed for the development and tailoring of interventions to mitigate the impact of trauma on wellbeing. This symposium focuses on clinical issues and innovations relevant to research and clinical work with Vietnam Veterans.
We begin with research examining risk factors and correlates of mental and physical health outcomes in Vietnam Veterans. First, we will present research examining risk associations linking traumatic stress exposures and mental health correlates with current health-related outcomes in Vietnam era women veterans. Next, we will present findings examining cognitive and other psychological factors related to positive and/or negative outcomes associated with past combat exposure in Vietnam era combat veterans. The program will then shift to innovative interventions for late life PTSD and stress reactions. Pre- and post-group data on veterans who participated in a Later-Adulthood Trauma Reengagement (LATR) group will be presented. The presenter will discuss the application of the LATR model in the treatment of late life PTSD symptomatology, and present program improvement recommendations based on the findings. Fourth, we Innovation in Aging, 2018, Vol. 2, No. S1, 383
work_4vjvmmhde5hjfhgapo2rhk4fwq ---- In Vivo Bioluminescence Imaging To Evaluate Systemic and Topical Antibiotics against Community-Acquired Methicillin-Resistant Staphylococcus aureus-Infected Skin Wounds in Mice Yi Guo,a Romela Irene Ramos,a John S. Cho,a Niles P. Donegan,b Ambrose L. Cheung,b Lloyd S. Millerc Department of Medicine, Division of Dermatology, David Geffen School of Medicine at University of California, Los Angeles, Los Angeles, California, USAa; Department of Microbiology and Immunology, Geisel School of Medicine at Dartmouth, Hanover, New Hampshire, USAb; Department of Dermatology, Johns Hopkins University School of Medicine, Baltimore, Maryland, USAc Community-acquired methicillin-resistant Staphylococcus aureus (CA-MRSA) frequently causes skin and soft tissue infections, including impetigo, cellulitis, folliculitis, and infected wounds and ulcers. Uncomplicated CA-MRSA skin infections are typically managed in an outpatient setting with oral and topical antibiotics and/or incision and drainage, whereas complicated skin infections often require hospitalization, intravenous antibiotics, and sometimes surgery. The aim of this study was to develop a mouse model of CA-MRSA wound infection to compare the efficacy of commonly used systemic and topical antibiotics. A bioluminescent USA300 CA-MRSA strain was inoculated into full-thickness scalpel wounds on the backs of mice, and digital photography/image analysis and in vivo bioluminescence imaging were used to measure wound healing and the bacterial burden.
Subcutaneous vancomycin, daptomycin, and linezolid similarly reduced the lesion sizes and bacterial burden. Oral linezolid, clindamycin, and doxycycline all decreased the lesion sizes and bacterial burden. Oral trimethoprim-sulfamethoxazole decreased the bacterial burden but did not decrease the lesion size. Topical mupirocin and retapamulin ointments both reduced the bacterial burden. However, the petrolatum vehicle ointment for retapamulin, but not the polyethylene glycol vehicle ointment for mupirocin, promoted wound healing and initially increased the bacterial burden. Finally, in type 2 diabetic mice, subcutaneous linezolid and daptomycin had the most rapid therapeutic effect compared with vancomycin. Taken together, this mouse model of CA-MRSA wound infection, which utilizes in vivo bioluminescence imaging to monitor the bacterial burden, represents an alternative method to evaluate the preclinical in vivo efficacy of systemic and topical antimicrobial agents. Community-acquired methicillin-resistant Staphylococcus aureus (CA-MRSA) skin and soft tissue infections (SSTIs), such as impetigo, folliculitis, cellulitis, and infected wounds and ulcers, have been increasing for more than a decade and are creating a serious public health concern (1, 2). In particular, outpatient and emergency room visits for SSTIs have been estimated to result in 11.6 to 14.2 million ambulatory care visits per year in the United States (3, 4). In 2004 and 2008, CA-MRSA was identified as the most common cause (59%) of all SSTIs presenting to emergency rooms across the United States (5, 6). In these and other studies, the USA300 clone has been isolated in up to 90% of all CA-MRSA SSTIs in the United States (5–8). USA300 causes severe and necrotic SSTIs and often causes infections in otherwise healthy individuals without any known risk factors for infection (1, 2, 9).
Uncomplicated CA-MRSA SSTIs, such as impetigo, infected abrasions, and folliculitis/furunculosis, can be managed in an outpatient setting with oral antibiotics and/or incision and drainage (10–12). Typical oral antibiotic regimens used for CA-MRSA infections include trimethoprim-sulfamethoxazole (TMP/SMX), a tetracycline (doxycycline or minocycline), clindamycin or linezolid, and rifampin can be added as a second agent to these regimens (10–12). Complicated CA-MRSA SSTIs such as deeper soft tissue infections, surgical/traumatic wound infections, major abscesses, cellulitis, and infected ulcers and burns require intravenous antibiotics and sometimes surgical debridement (12, 13). Commonly used intravenous antibiotics with coverage against CA-MRSA include vancomycin, linezolid, daptomycin, telavancin, clindamycin, tigecycline, and quinupristin-dalfopristin (12–14). Topical antibiotics can also play an adjunctive role in CA-MRSA SSTIs, including impetigo, infected abrasions/lacerations, infections with poor blood supply (i.e., diabetic foot ulcers), and in the prevention of postsurgical wound infections (1, 10, 11). Mupirocin is a commonly used topical antibiotic for the treatment of CA-MRSA SSTIs (1, 10, 11) and is also used for decolonization of S. aureus and MRSA nasal carriage (15). Retapamulin is a newer prescription-strength topical antibiotic that can also be used to treat S. aureus and MRSA SSTIs (1, 10, 16). A preclinical animal model system is an important step to evaluate the efficacy of an antimicrobial agent before more extensive studies are performed in human subjects. Previous animal models to evaluate S. aureus/MRSA SSTIs include a tape-stripping model (17, 18), a burned skin model (19–21), and a skin surgical/suture wound (22–25).
However, in these models, large numbers of animals are required because animals need to be euthanized at various time points after infection to evaluate the ex vivo bacterial burden by performing traditional CFU counting. Received 21 May 2012 Returned for modification 21 July 2012 Accepted 24 November 2012 Published ahead of print 3 December 2012 Address correspondence to Lloyd S. Miller, lloydmiller@jhmi.edu. Supplemental material for this article may be found at http://dx.doi.org/10.1128/AAC.01003-12. Copyright © 2013, American Society for Microbiology. All Rights Reserved. doi:10.1128/AAC.01003-12 February 2013 Volume 57 Number 2 Antimicrobial Agents and Chemotherapy p. 855–863 aac.asm.org The aim of the present study was to develop a mouse model of CA-MRSA wound
The biolumi- nescent construct is stably integrated into the bacterial chromosome and is maintained in all progeny without selection. Preparation of bacteria for inoculation. USA300 LAC::lux bacteria were streaked onto tryptic soy agar plates (tryptic soy broth [TSB] plus 1.5% Bacto agar [BD Biosciences, Franklin Lakes, NJ]) and grown at 37°C overnight (28). Single bacterial colonies were cultured in TSB and grown overnight at 37°C in a shaking incubator (MaxQ 4450; Thermo Fisher Scientific, Waltham, MA). Mid-logarithmic-phase bacteria were obtained after a 2-h subculture of a 1/50 dilution of the overnight culture. Bacteria were pelleted, resuspended, and washed three times in phosphate-buff- ered saline (PBS). Bacterial inocula (2 � 105 CFU, 2 � 106 CFU, or 2 � 107 CFU in 10 �l of PBS) were estimated by measuring the absorbance at 600 nm (Biomate 3; Thermo Fisher Scientific). CFU were verified after an overnight culture. Mice. Six-week-old male C57BL/6 mice were obtained from Jackson Laboratories (Bar Harbor, ME). In some experiments, 12-week-old NONcNZO10/LtJ male mice (Jackson Laboratories) on a 10 to 11% (wt/ wt) chow diet (LabDiet 5k20; PMI Nutrition International, St. Louis, MO) were used (29). NONcNZO10/LtJ male mice by 10 weeks of age develop a disease closely mimicking human type 2 diabetes with visceral obesity, hyperglycemia, dyslipidemia, moderate liver steatosis, and pancreatic islet atrophy (29) and were confirmed to have hyperglycemia (blood glucose levels � 300 mg/dl) before they were used in experiments. Mice were housed in one mouse per cage and in specific-pathogen-free conditions. Mouse model of CA-MRSA skin wound infection using in vivo bio- luminescence imaging. All procedures were approved by the UCLA An- imal Research Committee. Mice were anesthetized with inhalation isoflu- rane (2%), the posterior upper backs were shaved and three parallel 8-mm linear full-thickness scalpel cuts (#11 blade) were made through the der- mis (28). 
The wounds were subsequently inoculated with USA300 LAC:: lux (2 � 105 CFU, 2 � 106 CFU, or 2 � 107 CFU in 10 �l of PBS) using a micropipette. To obtain measurements of wound sizes, mice were anes- thetized with inhalation isoflurane (2%) at several different time points after infection (e.g., days 0, 1, 3, 5, 7, and 10) and digital photographs of the infected-skin wounds were taken. The total lesion size (cm2) was quantified by using the image analysis software program ImageJ (NIH Research Services Branch [http://rsbweb.nih.gov/ij/]) and a millimeter ruler as a reference. A measurement of bacterial burden was obtained by performing in vivo bioluminescence imaging at the same time points us- ing the Lumina II imaging system (Caliper). In vivo bioluminescence im- aging data are presented on a color-scale overlaid on a grayscale photo- graph of mice and quantified as total flux (photons/s) within a circular region of interest using Living Image software (Caliper). In some experi- ments, to confirm that the in vivo bioluminescent signals accurately rep- resented the bacterial burden in vivo, CFU counts were determined after overnight cultures of homogenized (Pro200 Series homogenizer; Pro Sci- entific, Oxford, CT) 8-mm punch biopsy specimens of lesional skin taken at 4 h and on days 1, 3, 5, and 7 after inoculation. Typically, 5 to 10 mice per group were used, and the numbers of mice used in each experiment are indicated in the figure legends. Subcutaneous and oral antibiotic therapy. 
Based on previously published studies, mice were administered subcutaneous (to the flank skin) or oral (via gavage) therapeutic doses of vancomycin (110 mg/kg administered subcutaneously twice daily) (30, 31) (Novaplus; Hospira, Inc., Lake Forest, IL), daptomycin (50 mg/kg administered subcutaneously daily) (32, 33) (Cubicin; Cubist Pharmaceuticals, Inc., Lexington, MA), linezolid (60 mg/kg administered subcutaneously and orally twice daily) (34) (Zyvox; Pfizer, Inc., New York, NY), clindamycin (100 mg/kg administered orally three times a day) (35, 36) (Cleocin phosphate; Pfizer, Inc.), doxycycline (100 mg/kg orally twice daily) (33), and TMP-SMX (320/1,600 mg/kg administered orally twice daily) (37, 38). Linezolid was used at the same dose for subcutaneous and oral administration because of equivalent bioavailability when given via either route (39). These doses were chosen to approximate the free-drug area under the curve (AUC) of typical human doses of intravenous vancomycin (440 µg·h/ml for 1 g twice daily) (40), intravenous daptomycin (598 µg·h/ml for 6 mg/kg daily) (41), intravenous/oral linezolid (138 µg·h/ml for 600 mg twice daily) (42), clindamycin (116 µg·h/ml for 600 mg three times a day) (33), and doxycycline (55.7 µg·h/ml for 100 mg twice daily) (43). All mice were treated with the first dose of antibiotics administered at 4 h after CA-MRSA inoculation, and treatment was continued at the aforementioned regimens for 7 days. For the USA300 LAC::lux strain, the following MICs were measured according to established guidelines and methods (44) using in-house prepared broth microdilution trays: oxacillin, 16 µg/ml; vancomycin, 1.0 µg/ml; daptomycin, ≤0.25 µg/ml; linezolid, 2.0 µg/ml; clindamycin, ≤0.5 µg/ml; doxycycline, ≤1.0 µg/ml; and TMP/SMX, ≤0.5/≤9.5 µg/ml. Topical antibiotic therapy.
CA-MRSA-infected skin wounds were treated topically by applying 100 µl of mupirocin 2% ointment (Bactroban; GlaxoSmithKline, Research Triangle Park, NC), retapamulin 1% ointment (Altabax; Stiefel/GlaxoSmithKline), or the corresponding vehicle ointments (polyethylene glycol [mupirocin] and white petrolatum [retapamulin]) at 4 h after CA-MRSA inoculation, followed by twice daily (every 12 h) thereafter for a total of 7 days. Statistical analysis. Data were compared using Student's t test (two-tailed). All data are expressed as mean ± the standard error of the mean (SEM). Values of P < 0.05 were considered statistically significant. RESULTS In vivo bioluminescence imaging to measure bacterial burden. To model a CA-MRSA wound infection in mice, scalpel cuts on the shaved backs of mice were inoculated with the bioluminescent USA300 LAC::lux strain (26). The wound lesion sizes (Fig. 1A and B) and in vivo bacterial burden (Fig. 1C and D) of anesthetized mice were determined by analyzing digital photographs of the mice using image analysis (ImageJ; NIH Research Services Branch) and measuring the USA300 LAC::lux bioluminescent signals (Lumina II imaging system; Caliper), respectively. As a first step, the optimal bacterial inoculum that produced a consistent wound infection was determined by evaluating different inocula of USA300 LAC::lux (2 × 10^5, 2 × 10^6, and 2 × 10^7 CFU per 10 µl) or no bacterial inoculation (none) (Fig. 1). The inoculum of 2 × 10^7 CFU induced the largest lesions and the 2 × 10^6 CFU inoculum induced intermediate lesion sizes, which were both statistically greater than those of uninfected mice (Fig. 1A and B). In contrast, 2 × 10^5 CFU induced lesions that did not significantly differ from those of uninfected mice. The inoculum of 2 × 10^7 CFU induced higher bioluminescent signals than the 2 × 10^6 CFU inoculum, but the signals of both inocula decreased over time (Fig. 1C and D).
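The group comparisons used throughout these results follow the "Statistical analysis" description above: a two-tailed Student's t test on data summarized as mean ± SEM. A minimal stdlib Python sketch of that computation, using made-up lesion-size values rather than data from the study:

```python
import math
from statistics import mean, stdev

def students_t(a, b):
    """Two-sample Student's t statistic with pooled variance."""
    na, nb = len(a), len(b)
    va, vb = stdev(a) ** 2, stdev(b) ** 2
    pooled = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)
    se = math.sqrt(pooled * (1 / na + 1 / nb))
    return (mean(a) - mean(b)) / se

def sem(x):
    """Standard error of the mean, as used for the mean +/- SEM summaries."""
    return stdev(x) / math.sqrt(len(x))

# Hypothetical day-7 lesion sizes (cm^2): sham vs. antibiotic-treated mice.
sham = [0.52, 0.61, 0.48, 0.55, 0.58]
treated = [0.30, 0.35, 0.28, 0.33, 0.31]

t = students_t(sham, treated)
print(f"sham {mean(sham):.2f} +/- {sem(sham):.2f} cm^2, "
      f"treated {mean(treated):.2f} +/- {sem(treated):.2f} cm^2, t = {t:.2f}")
```

To obtain the p-value, the t statistic would be referred to a t distribution with na + nb - 2 degrees of freedom (e.g., via scipy.stats.ttest_ind, which is not in the standard library); the study's significance threshold was P < 0.05.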
The inoculum of 2 × 10^5 CFU resulted in bioluminescent signals that were below the bioluminescent signals of the other inocula and reached background levels by day 7. All three inocula had bioluminescent signals on days 1 through 5 that were statistically greater than the background bioluminescent signals of uninfected mice. Since our ultimate goal was to produce a CA-MRSA wound infection that induced relatively small lesion sizes and bioluminescent signals that were greater than the uninfected wounds, the intermediate inoculum of 2 × 10^6 CFU was used in all subsequent experiments. Correlation of in vivo bioluminescent signals with ex vivo bacterial burden. To evaluate whether bioluminescent signals of USA300 LAC::lux accurately represented the bacterial burden, in vivo bioluminescence imaging and skin biopsies from the infected wounds were performed on the same mice after inoculation with 2 × 10^6 CFU of USA300 LAC::lux. At 4 h and on days 1, 3, 5, and 7 after inoculation, the bioluminescent signals were 9.56 ± 1.55 × 10^5, 1.72 ± 0.07 × 10^5, 1.05 ± 0.17 × 10^5, 5.0 ± 0.37 × 10^4, and 2.84 ± 0.18 × 10^4 photons/s (see Fig. S1A in the supplemental material) and the ex vivo bacterial burden was 1.29 ± 0.21 × 10^7, 2.95 ± 0.34 × 10^6, 2.29 ± 0.40 × 10^6, 2.7 ± 0.42 × 10^5, and 2.97 ± 0.78 × 10^4 CFU (see Fig. S1B in the supplemental material), respectively. These in vivo bioluminescent signals highly correlated with the ex vivo CFU values (correlation coefficient R^2 = 0.9996) (Fig. 2).
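Using only the five group means quoted above (total flux in photons/s versus ex vivo CFU), the linear trendline fit of Fig. 2 can be approximated with stdlib Python. This is a sketch, not the study's analysis: the exact fitting choices behind the published R^2 of 0.9996 (e.g., raw versus log-transformed values, per-mouse versus per-group data) are not specified here, so the printed R^2 will be close to, but not identical to, that value.

```python
from statistics import mean

# Group means reported in the text: in vivo total flux (photons/s) and
# ex vivo CFU at 4 h and on days 1, 3, 5, and 7 after inoculation.
flux = [9.56e5, 1.72e5, 1.05e5, 5.0e4, 2.84e4]
cfu = [1.29e7, 2.95e6, 2.29e6, 2.7e5, 2.97e4]

def linreg_r2(x, y):
    """Least-squares slope/intercept and coefficient of determination R^2."""
    mx, my = mean(x), mean(y)
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((yi - (slope * xi + intercept)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return slope, intercept, 1 - ss_res / ss_tot

slope, intercept, r2 = linreg_r2(flux, cfu)
print(f"CFU ~ {slope:.2f} * flux + {intercept:.3g}, R^2 = {r2:.4f}")
```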
These data indicate that in vivo bioluminescence imaging of wounds infected with USA300 LAC::lux provides an accurate measurement of the in vivo bacterial burden that can be measured noninvasively and longitudinally during the course of an infection. FIG 1 Mouse model of CA-MRSA wound infection. Three 8 mm in length, parallel, full-thickness scalpel wounds on the backs of C57BL/6 mice were inoculated with 2 × 10^5, 2 × 10^6, or 2 × 10^7 CFU of the bioluminescent CA-MRSA strain, USA300 LAC::lux, or no bacteria (none)/10 µl (n = 5 mice per group). (A) Mean total lesion size (cm^2) ± the standard error of the mean (SEM). (B) Representative photographs of the lesions of the entire dorsal back (upper panels) and close-up photographs of the lesions (lower panels) are shown. (C) Bacterial counts as measured by in vivo bioluminescence imaging (mean total flux [photons/s] ± the SEM) (logarithmic scale). (D) Representative in vivo bioluminescent signals on a color scale overlaid on top of a grayscale image of mice. *, P < 0.05; †, P < 0.01; ‡, P < 0.001 (USA300 LAC::lux-infected mice versus none) (Student's t test [two-tailed]). It should be noted that the culture plates used to
Furthermore, the numbers of CFU at 4 h increased 6.5- fold from the initial inoculum, demonstrating that the bacteria were multiplying within the wound and that the infection was established. We therefore chose to use the 4-h time point to initi- ate antibiotic treatment of the CA-MRSA-infected skin wounds. The 4-h time point was also the same time previously used to initiate treatment of a topically applied mupirocin cream formu- lation to treat a S. aureus surgical skin wound infection in mice (23). Efficacy of systemic (subcutaneous and oral) antibiotic ther- apy. To evaluate the efficacy of commonly used intravenous anti- biotics for hospitalized patients with complicated CA-MRSA SSTIs (12), therapeutic doses in mice of vancomycin (110 mg/kg twice daily) (30, 31), daptomycin (50 mg/kg daily) (32, 33), lin- ezolid (60 mg/kg twice daily) (34), or sterile saline (sham control) were administered subcutaneously beginning at 4 h after USA300 LAC::lux inoculation and continued at the above regimens through day 7. We chose to evaluate daptomycin at a dose that simulated the 6-mg/kg human dose, which is used to treat MRSA bacteremia or complicated CA-MRSA SSTIs, rather than evaluat- ing a dose that simulated the 4-mg/kg dose in humans, which is typically used to treat uncomplicated CA-MRSA SSTIs (45). Mice treated with vancomycin, daptomycin, or linezolid all had similar lesion sizes compared with control mice until day 5. However, by day 7, the antibiotic treatment groups had �40% significantly decreased lesion sizes than sham control mice, demonstrating en- hanced wound healing (Fig. 3A). The enhanced would healing observed in the antibiotic treatment groups was associated with significantly lower bioluminescent signals beginning on day 1 (up to 7.6-fold decrease) and remained significantly below the signals of sham control mice through day 7 (Fig. 3B). 
There were no differences between the lesion sizes and bioluminescent signals between the various antibiotic treatment groups. Taken together, these data indicate that subcutaneously administered therapeutic doses of vancomycin, daptomycin, and linezolid all resulted in similar decreased lesion sizes and bioluminescent signals during a CA-MRSA wound infection in mice. To compare the efficacy of commonly used oral antibiotics that are used for outpatient therapy of CA-MRSA SSTIs (12), clindamycin (100 mg/kg three times a day) (35, 36), linezolid (60 mg/kg twice daily) (34), doxycycline (100 mg/kg twice daily) (33), TMP-SMX (320/1,600 mg/kg twice daily) (37, 38), or sterile saline (sham control) were administered orally beginning at 4 h after USA300 LAC::lux inoculation and continued as indicated through day 7. Mice treated with oral linezolid had significantly decreased lesion sizes compared with sham control mice beginning on day 3, whereas mice treated with clindamycin or doxycycline had significantly decreased lesion sizes beginning on day 5 (Fig. 3C and E). TMP/SMX was the only oral antibiotic evaluated that did not result in significantly decreased lesion sizes compared with sham control mice (Fig. 3E). Oral linezolid had the most rapid therapeutic effect because it resulted in the most substantial decrease in bioluminescent signals (8.3-fold on day 1), which was significantly lower compared with the decreased bioluminescent signals in mice treated with oral clindamycin (2.7-fold on day 1) (P < 0.05) (Fig. 3D). Oral TMP/SMX and doxycycline had bioluminescent signals that were decreased compared with sham control mice beginning on day 3 after inoculation (3.0- and 2.6-fold on day 3, respectively) (P < 0.05) (Fig. 3F). After day 3, all of the oral antibiotics resulted in bioluminescent signals that approached background levels by days 5 to 7.
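The bioluminescent readouts in these experiments are "total flux" (photons/s) summed over a circular region of interest, computed in the study with Living Image software. Conceptually, that quantification reduces to something like the following sketch over a plain 2D array of pixel intensities (a toy stand-in for a calibrated radiance image, not the vendor's algorithm):

```python
def total_flux(image, cx, cy, radius):
    """Sum pixel intensities inside a circular region of interest.

    `image` is a list of rows of photon-flux values; (cx, cy) is the ROI
    center in pixel coordinates and `radius` its radius in pixels.
    """
    total = 0.0
    for y, row in enumerate(image):
        for x, value in enumerate(row):
            if (x - cx) ** 2 + (y - cy) ** 2 <= radius ** 2:
                total += value
    return total

# Toy 11x11 "image": zero background with one bright pixel at the center.
img = [[0.0] * 11 for _ in range(11)]
img[5][5] = 5.0
print(total_flux(img, cx=5, cy=5, radius=2))  # only the bright pixel falls inside the ROI
```

A real pipeline would also subtract background signal (here, the flux of uninfected control wounds serves that role) and work on calibrated radiance units.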
In summary, these data indicate that oral linezolid, clindamycin, and doxycycline, but not TMP/SMX, resulted in decreased lesion sizes. In addition, all of these oral antibiotics resulted in decreased bioluminescent signals, with linezolid having the most rapid therapeutic effect. Efficacy of topical antibiotic therapy. Next, the efficacy of the two Food and Drug Administration (FDA)-approved topical prescription-strength ointments, mupirocin (1, 10, 11) and retapamulin (1, 10, 16), was compared in this mouse model of CA-MRSA wound infection. Mupirocin 2% ointment, retapamulin 1% ointment, or corresponding vehicle ointments (polyethylene glycol [PEG; mupirocin] and white petrolatum [retapamulin]) were topically applied (100-µl volume) to the infected wounds at 4 h after USA300 LAC::lux inoculation, followed by twice daily (every 12 h) for the next 7 days. Mupirocin ointment resulted in 29 to 58% decreased lesion sizes beginning at day 5 after inoculation compared with the PEG vehicle ointment (Fig. 4B). In contrast, retapamulin ointment resulted in lesion sizes that did not differ from those of mice treated with petrolatum vehicle ointment alone. Compared with their respective vehicle ointments, between days 1 and 5, mupirocin ointment resulted in a 5- to 12-fold significant reduction in bioluminescent signals, whereas retapamulin ointment treatment resulted in a 10- to 41-fold significant reduction in bioluminescent signals (Fig. 4B). However, when the lesion sizes and bioluminescent signals for the mupirocin and retapamulin ointments were compared, they were not significantly different from each other. Interestingly, the observed differences in effectiveness of these ointments were impacted by changes induced by the vehicle ointment alone. In particular, the petrolatum vehicle ointment induced decreased lesion sizes that were comparable to those of mice treated with mupirocin or retapamulin ointment.
This enhanced wound healing was unexpected because the petrolatum vehicle ointment also induced an increase in bioluminescent signals on day 1. In contrast, the PEG vehicle ointment had lesion sizes or bioluminescent signals that were similar to those observed without any topical treatment (saline control mice) in Fig. 3.

FIG 2 In vivo bioluminescent signals of CA-MRSA-infected wounds highly correlate with ex vivo bacterial CFU counts. Scalpel wounds on the backs of mice were inoculated with USA300 LAC::lux (2 × 10⁶ CFU/10 µl). Correlation between in vivo bioluminescent signals (mean total flux [photons/s] ± the SEM) (logarithmic scale) from mice (n = 6 per group) imaged at 4 h and on days 1, 3, 5, and 7 and total ex vivo CFU (logarithmic scale) recovered from 8-mm lesional punch biopsies performed on euthanized mice (n = 6 per group) at the same time points. The trendline and the coefficient of determination (R²) between in vivo bioluminescent signals and total CFU are shown.

Guo et al. 858 aac.asm.org Antimicrobial Agents and Chemotherapy

Efficacy of systemic antibiotics in a mouse model of type 2 diabetes. One advantage of developing a mouse model of CA-MRSA wound infection is that the efficacy of antibiotic therapy can be evaluated in genetically engineered mouse strains that mimic certain human diseases. One such disease is type II diabetes, in which patients develop chronic wounds and ulcers that can become infected with CA-MRSA (46). To determine the efficacy of antibiotic therapy against a CA-MRSA wound infection in a mouse model of type II diabetes, the NONcNZO10/LtJ mouse strain, which expresses six known obesity-induced diabetes quantitative trait loci, was used (29).
NONcNZO10/LtJ male mice on a high-fat diet develop many of the characteristics of type II diabetes when they become 10 weeks of age, including visceral obesity, hyperglycemia, dyslipidemia, moderate liver steatosis, and pancreatic islet atrophy (29). Vancomycin (110 mg/kg twice daily) (30, 31), daptomycin (50 mg/kg daily) (32, 33), linezolid (60 mg/kg twice daily) (34), or sterile saline (sham control) was administered subcutaneously to 12-week-old NONcNZO10/LtJ diabetic male mice beginning at 4 h after USA300 LAC::lux inoculation and continued at the above regimens through day 7. Although pharmacokinetic studies with vancomycin, daptomycin, and linezolid were not performed on these diabetic mice, the doses used were based on total body weight, which is how vancomycin and daptomycin are dosed in humans, and levels of linezolid at normal dosing are decreased in obese patients (12, 47). As expected, sham control NONcNZO10/LtJ mice developed larger lesions and higher in vivo bioluminescent signals that persisted longer (up to 14 days) compared with the wild-type C57BL/6 mice used in Fig. 3 (Fig. 5). In NONcNZO10/LtJ mice, subcutaneously administered vancomycin resulted in significantly decreased lesion sizes on day 7 (22% decrease [P < 0.05]), and subcutaneously administered daptomycin and linezolid resulted in significantly decreased lesion sizes on day 5 (19% [P < 0.05] and 23% [P < 0.05] decreases, respectively) and day 7 (38% [P < 0.01] and 44% [P < 0.001] decreases, respectively) compared with sham control mice (Fig. 5A). On day 1, vancomycin, daptomycin, and linezolid resulted in 3.2-, 7.9-, and 13.9-fold significantly decreased bioluminescent signals compared with sham control mice (Fig. 5B). The decreased bioluminescent signals in daptomycin- and linezolid-treated mice were not statistically different from each other. However, on day 1, the bioluminescent signals for both daptomycin and linezolid were significantly lower than those of the vancomycin-treated mice (P < 0.05 and P < 0.001, respectively). After day 1, vancomycin-, daptomycin-, and linezolid-treated mice had similarly decreased bioluminescent signals compared with sham control mice.

FIG 3 Efficacy of systemic (subcutaneous and oral) antibiotics against CA-MRSA-infected wounds. Scalpel wounds on the backs of mice were inoculated with USA300 LAC::lux (2 × 10⁶ CFU/10 µl). Mice (n = 5 to 10 mice per group) were treated with subcutaneous vancomycin (110 mg/kg twice daily), daptomycin (50 mg/kg once daily), linezolid (60 mg/kg twice daily), or sterile saline (sham control) (top panels) or oral (via gavage) linezolid (60 mg/kg twice daily), clindamycin (100 mg/kg three times a day), doxycycline (100 mg/kg twice daily), TMP/SMX (320/1,600 mg/kg once daily), or sterile saline (sham control) (middle and bottom panels). Antibiotic treatment was initiated at 4 h after CA-MRSA inoculation and continued for the first 7 days. (A, C, and E) Mean total lesion size (cm²) ± the SEM. (B, D, and F) Bacterial counts as measured by in vivo bioluminescence imaging (mean total flux [photons/s] ± the SEM) (logarithmic scale). *, P < 0.05; †, P < 0.01; ‡, P < 0.001 (antibiotic treatment versus sham treatment [saline]) (Student's t test [two-tailed]).

Antibiotic Activity against CA-MRSA-Infected Wounds February 2013 Volume 57 Number 2 aac.asm.org 859
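Throughout these comparisons, significance versus sham is flagged with the figure-legend convention (two-tailed Student's t test): *, P < 0.05; †, P < 0.01; ‡, P < 0.001. A trivial helper encoding just that threshold mapping; the thresholds come from the legends, everything else is illustrative:

```python
# Map a p-value to the significance symbol used in the figure legends.
def significance_symbol(p_value: float) -> str:
    if p_value < 0.001:
        return "\u2021"   # double dagger, P < 0.001
    if p_value < 0.01:
        return "\u2020"   # dagger, P < 0.01
    if p_value < 0.05:
        return "*"        # P < 0.05
    return "ns"           # not significant

print(significance_symbol(0.03))    # prints *
print(significance_symbol(0.0005))  # prints the double-dagger symbol
```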
Taken together, in NONcNZO10/LtJ diabetic mice, subcutaneous linezolid and daptomycin treatment had a more rapid therapeutic effect compared with vancomycin, but after day 1, all three antibiotics had similar efficacy against this CA-MRSA wound infection.

DISCUSSION

CA-MRSA SSTIs represent a major public health threat in the United States and are becoming an increasing problem worldwide (1, 2, 13). Antimicrobial resistance among CA-MRSA isolates has complicated the treatment of these infections. A rapid and cost-effective preclinical animal model of CA-MRSA SSTIs could provide an alternative system to evaluate the in vivo efficacy of existing and potential antimicrobial agents. In the present study, a mouse model of a CA-MRSA skin wound infection was developed in which a bioluminescent USA300 CA-MRSA strain (USA300 LAC::lux) was inoculated into skin wounds. Digital photography and in vivo bioluminescence imaging were used to obtain noninvasive and longitudinal measurements of wound healing and the bacterial burden.

FIG 4 Efficacy of topical antibiotics against CA-MRSA-infected wounds. Scalpel wounds on the backs of mice were inoculated with USA300 LAC::lux (2 × 10⁶ CFU/10 µl). Mice (n = 5 mice per group) were treated with topical mupirocin 2% ointment, retapamulin 1% ointment, or the corresponding vehicle ointment (polyethylene glycol [mupirocin] or white petrolatum [retapamulin]). Antibiotic treatment was initiated at 4 h after CA-MRSA inoculation and continued twice daily for the first 7 days. (A) Mean total lesion size (cm²) ± the SEM. (B) Bacterial counts as measured by in vivo bioluminescence imaging (mean total flux [photons/s] ± the SEM) (logarithmic scale). *, P < 0.05; †, P < 0.01; ‡, P < 0.001 (antibiotic treatment versus sham treatment [saline]) (Student's t test [two-tailed]).

FIG 5 Efficacy of subcutaneous vancomycin, daptomycin, and linezolid against a CA-MRSA wound infection in diabetic mice. Scalpel wounds on the backs of NONcNZO10/LtJ diabetic mice were inoculated with USA300 LAC::lux (2 × 10⁶ CFU/10 µl). Mice (n = 5 to 10 mice per group) were treated with subcutaneous vancomycin (110 mg/kg twice daily), daptomycin (50 mg/kg once daily), linezolid (60 mg/kg twice daily), or sterile saline (sham control) beginning at 4 h after CA-MRSA inoculation and continued for the first 7 days. (A) Mean total lesion size (cm²) ± the SEM. (B) Bacterial counts as measured by in vivo bioluminescence imaging (mean total flux [photons/s] ± the SEM) (logarithmic scale). *, P < 0.05; †, P < 0.01; ‡, P < 0.001 (antibiotic treatment versus sham treatment [saline]) (Student's t test [two-tailed]).

The bioluminescent USA300 LAC::lux strain used in the present study has certain advantages compared with other available S. aureus and CA-MRSA bioluminescent strains. First, this strain was derived from the clinical USA300 LAC strain, which produces toxins associated with the increased virulence of CA-MRSA, including phenol-soluble modulins, alpha-toxin, and Panton-Valentine leukocidin (48, 49). USA300 LAC::lux also possesses the bioluminescent construct stably integrated into the bacterial chromosome (26). Since the bioluminescent construct is maintained in all progeny without selection, it is not lost in vivo over time (26). We demonstrated that the in vivo bioluminescent signals of USA300 LAC::lux highly correlated with the numbers of ex vivo CFU harvested from the same infected skin wounds at various time points (coefficient of determination, R² = 0.9996) (Fig. 2).
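The R² in Fig. 2 is an ordinary coefficient of determination computed on log-transformed flux and CFU values at matched time points. A minimal sketch of that computation; the flux and CFU numbers below are invented placeholders, since the text reports only the resulting R² (0.9996):

```python
import math

def r_squared(xs, ys):
    """Squared Pearson correlation for paired observations."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return (sxy * sxy) / (sxx * syy)

# Hypothetical group means at 4 h and days 1, 3, 5, and 7:
flux = [2e7, 8e6, 1.5e6, 3e5, 9e4]   # total flux, photons/s (assumed)
cfu = [5e6, 2e6, 4e5, 8e4, 2.5e4]    # CFU per punch biopsy (assumed)

log_flux = [math.log10(v) for v in flux]
log_cfu = [math.log10(v) for v in cfu]
print(round(r_squared(log_flux, log_cfu), 4))
```

With real data, each point would pair the mean in vivo flux of an imaged group with the mean CFU of a euthanized group at the same time point, as in the figure.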
Thus, in vivo USA300 LAC::lux bioluminescent signals provide an accurate measurement of bacterial burden that does not require euthanasia of animals for traditional CFU counting at each time point. For example, only 5 to 10 mice per group were used to monitor the bacterial burden with in vivo bioluminescence imaging. In contrast, to obtain CFU data from the infected skin, at least 5 mice per group would need to be euthanized at each time point (i.e., days 1, 3, 5, and 7), corresponding to 20 total mice per group. For future studies that seek to utilize in vivo bioluminescence imaging to monitor the bacterial burden in other animal models of infection, the bioluminescent construct containing the modified luxABCDE operon, which was chromosomally integrated from Xen29 (Caliper, a Perkin-Elmer Company, Alameda, CA) (27), could be transduced into other S. aureus and MRSA strains (50, 51). Although the in vivo bioluminescent signals correlate with bacterial burden, they also depend on the transcription/translation of the bioluminescent construct as well as the metabolic activity of the bacteria (50, 51). However, we and others have demonstrated that in vivo bioluminescence imaging can be used to monitor the bacterial burden in various in vivo models of biofilm formation (26, 27, 52), demonstrating that this technology is sensitive enough to detect the low levels of light produced by bacteria in biofilms.

This mouse model was used to compare the efficacy of commonly used systemic antibiotics against CA-MRSA SSTIs. We found that subcutaneous administration of therapeutic doses of vancomycin, daptomycin, or linezolid was effective in treating the CA-MRSA infection in mice, suggesting that these antibiotics would have similar efficacy against CA-MRSA SSTIs in humans.
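The animal-number comparison above is simple arithmetic: a longitudinally imaged cohort is reused at every time point, whereas terminal CFU sampling consumes a fresh subgroup per time point. A sketch, with group sizes taken from the text:

```python
time_points = ["day 1", "day 3", "day 5", "day 7"]
mice_per_sampling = 5  # minimum group size cited in the text

# Longitudinal imaging: the same cohort is imaged at every time point.
imaging_total = mice_per_sampling

# Terminal CFU counting: a separate subgroup is euthanized per time point.
cfu_total = mice_per_sampling * len(time_points)

print(imaging_total, cfu_total)  # prints 5 20
```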
In addition, both subcutaneous linezolid and daptomycin had a more rapid therapeutic effect in type II diabetic mice compared to vancomycin, suggesting that linezolid and daptomycin might have an additional clinical benefit in the context of diabetes. However, these preliminary results in diabetic mice should be interpreted with caution, since detailed pharmacokinetic studies were not performed on these mice and the dosages used were based on total body weight. Nonetheless, there have been studies in humans that evaluated the efficacy of intravenous daptomycin or linezolid as single agents compared with intravenous vancomycin in the treatment of CA-MRSA SSTIs. In general, daptomycin has been shown to have increased efficacy compared with vancomycin (53). However, one study demonstrated that daptomycin had similar efficacy to vancomycin in the treatment of diabetic foot ulcers (54). Most studies have found that linezolid has equivalent or superior efficacy compared with vancomycin (55–60) and similar efficacy to vancomycin in diabetic patients (57). Given these findings in humans, this mouse model was able to confirm the efficacy of these antibiotics against a CA-MRSA wound infection, but it was not able to recapitulate the potentially increased efficacy of linezolid or daptomycin compared with vancomycin in nondiabetic mice. The reason for this may be the different pharmacokinetics (i.e., absorption, distribution, metabolism, and elimination) between the species, which is a limitation of preclinical animal models. Despite this drawback, we believe that the ability to monitor wound healing and bacterial burden longitudinally over time provides valuable preclinical information about overall efficacy, which is necessary to establish before the commencement of more comprehensive studies in humans.
Our findings involving the comparison of orally administered antibiotics to treat the CA-MRSA wound infections in mice indicated that linezolid, clindamycin, doxycycline, and TMP/SMX were all effective in reducing the bacterial burden, but linezolid had a more rapid therapeutic effect. The reason that linezolid resulted in a more dramatic decrease in bacterial burden than clindamycin at day 1 after infection is not clear. As mentioned above, this result could be due to different pharmacokinetics of these antibiotics in mice. Nonetheless, it should be mentioned that inducible clindamycin resistance is found in up to 8.4% of CA-MRSA strains, and this should be taken into account when treating patients (61). In this mouse model, TMP/SMX was effective at reducing the bioluminescent signals at a high dose (320/1,600 mg/kg twice daily). However, TMP/SMX was the only oral antibiotic evaluated that had no efficacy in decreasing the size of the lesions. It should be mentioned that TMP/SMX at a lower dose (160/800 mg/kg twice daily) had no efficacy in decreasing the lesion size or bioluminescent signals (data not shown), even though this strain was highly sensitive to this drug (MIC, 0.5 µg/ml). The high dose of TMP/SMX that was required for efficacy in this mouse model may be due to the increased content of thymidine in mouse sera and tissues, which interferes with the activity of TMP (62). Since infected human tissues may also have increased thymidine levels, some studies have used high doses of TMP/SMX in treating CA-MRSA SSTIs in humans (37, 38).

Since this model of CA-MRSA SSTI involved the infection of open skin wounds, it provided an opportunity to evaluate the efficacy of topically applied antimicrobial agents. We compared two FDA-approved prescription-strength topical ointments, mupirocin 2% ointment and retapamulin 1% ointment.
We found that both mupirocin and retapamulin ointments were equally effective in reducing the bacterial burden to levels seen with the subcutaneously and orally administered antibiotics. Interestingly, the white petrolatum ointment, the vehicle for retapamulin, initially induced an increase in the bacterial burden, which was not observed with the PEG vehicle ointment for mupirocin. Despite this higher bacterial burden, the petrolatum ointment resulted in faster wound healing, and the wound sizes were virtually identical to those treated with the mupirocin or retapamulin ointments. Therefore, the petrolatum vehicle ointment may provide a therapeutic benefit compared with the PEG vehicle ointment because it promoted wound healing. Clearly, the choice of vehicle is an important consideration in the development of future topical antimicrobial therapies. A vehicle that enhances wound healing without affecting bacterial growth may be even more efficacious.

One limitation of our study is that we studied a single isolate of USA300 CA-MRSA, and the virulence and antibiotic therapeutic effects may be different with other MRSA strains. Moreover, since we only evaluated the USA300 strain, we were not able to describe variability between strains, including differences between USA300 and USA100 strains, which are likely very different because USA100 strains produce fewer toxins and may be less virulent than USA300 strains. These limitations will require additional studies to resolve.
Taken together, the mouse model of CA-MRSA wound infection developed here utilized digital photography/image analysis and in vivo bioluminescence imaging to monitor wound healing and the bacterial burden longitudinally over time. Since this model does not require euthanasia to determine the bacterial burden, fewer animals are required to evaluate the efficacy of antimicrobial agents, which is an important consideration for the reduction, refinement, and replacement of animals used in research and testing. For this particular model, since the bioluminescent signals for the antibiotic treatment and the sham control groups were nearly identical by day 7, future experiments may not need to be extended beyond day 5, providing additional labor and experimental cost savings. This model could serve as an alternative or complementary noninvasive, cost-effective, and accurate preclinical animal model to investigate the in vivo efficacy of certain systemic and topical antimicrobial agents before extensive studies in human subjects. Our results using this model indicate that there are several viable options for intravenous and oral antibiotic therapy for the treatment of CA-MRSA SSTIs in humans, and topical antibiotic therapy may provide an additional therapeutic benefit.

ACKNOWLEDGMENTS

This study was supported by an Investigator-Initiated Research Grant from Pfizer, Inc. (grant WS751303 to L.S.M.), and National Institutes of Health grants R01-AI078910 (to L.S.M.), T32-AR058921 (to J.S.C.), and R24-CA92865 (to the UCLA Small Animal Imaging Resource Program). We thank Tammy Kielian, Kenneth W. Bayles, and Jennifer Endres at the University of Nebraska Medical Center for providing the USA300 LAC::lux bioluminescent CA-MRSA strain. We also thank Michael Lewinski and Kevin Ward at the UCLA Clinical Microbiology Laboratory for performing the antibiotic MICs for the bacterial strains used in this study.

REFERENCES

1. David MZ, Daum RS. 2010.
Community-associated methicillin-resistant Staphylococcus aureus: epidemiology and clinical consequences of an emerging epidemic. Clin. Microbiol. Rev. 23:616–687.
2. Deleo FR, Otto M, Kreiswirth BN, Chambers HF. 2010. Community-associated methicillin-resistant Staphylococcus aureus. Lancet 375:1557–1568.
3. McCaig LF, McDonald LC, Mandal S, Jernigan DB. 2006. Staphylococcus aureus-associated skin and soft tissue infections in ambulatory care. Emerg. Infect. Dis. 12:1715–1723.
4. Hersh AL, Chambers HF, Maselli JH, Gonzales R. 2008. National trends in ambulatory visits and antibiotic prescribing for skin and soft-tissue infections. Arch. Intern. Med. 168:1585–1591.
5. Moran GJ, Krishnadasan A, Gorwitz RJ, Fosheim GE, McDougal LK, Carey RB, Talan DA. 2006. Methicillin-resistant S. aureus infections among patients in the emergency department. N. Engl. J. Med. 355:666–674.
6. Talan DA, Krishnadasan A, Gorwitz RJ, Fosheim GE, Limbago B, Albrecht V, Moran GJ. 2011. Comparison of Staphylococcus aureus from skin and soft-tissue infections in US emergency department patients, 2004 and 2008. Clin. Infect. Dis. 53:144–149.
7. King MD, Humphrey BJ, Wang YF, Kourbatova EV, Ray SM, Blumberg HM. 2006. Emergence of community-acquired methicillin-resistant Staphylococcus aureus USA300 clone as the predominant cause of skin and soft-tissue infections. Ann. Intern. Med. 144:309–317.
8. Jones RN, Nilius AM, Akinlade BK, Deshpande LM, Notario GF. 2007. Molecular characterization of Staphylococcus aureus isolates from a 2005 clinical trial of uncomplicated skin and skin structure infections. Antimicrob. Agents Chemother. 51:3381–3384.
9. Tenover FC, Goering RV. 2009. Methicillin-resistant Staphylococcus aureus strain USA300: origin and epidemiology. J. Antimicrob. Chemother. 64:441–446.
10. Daum RS. 2007. Clinical practice. Skin and soft-tissue infections caused by methicillin-resistant Staphylococcus aureus. N. Engl. J. Med. 357:380–390.
11. Elston DM. 2007.
Community-acquired methicillin-resistant Staphylococcus aureus. J. Am. Acad. Dermatol. 56:1–16.
12. Liu C, Bayer A, Cosgrove SE, Daum RS, Fridkin SK, Gorwitz RJ, Kaplan SL, Karchmer AW, Levine DP, Murray BE, Rybak J, Talan DA, Chambers HF. 2011. Clinical practice guidelines by the Infectious Diseases Society of America for the treatment of methicillin-resistant Staphylococcus aureus infections in adults and children. Clin. Infect. Dis. 52:e18–e55.
13. Dryden MS. 2010. Complicated skin and soft tissue infection. J. Antimicrob. Chemother. 65(Suppl 3):iii35–iii44.
14. Curcio D. 2011. Skin and soft tissue infections due to methicillin-resistant Staphylococcus aureus: role of tigecycline. Clin. Infect. Dis. 52:1468–1469.
15. Bode LG, Kluytmans JA, Wertheim HF, Bogaers D, Vandenbroucke-Grauls CM, Roosendaal R, Troelstra A, Box AT, Voss A, van der Tweel I, van Belkum A, Verbrugh HA, Vos MC. 2010. Preventing surgical-site infections in nasal carriers of Staphylococcus aureus. N. Engl. J. Med. 362:9–17.
16. Yang LP, Keam SJ. 2008. Retapamulin: a review of its use in the management of impetigo and other uncomplicated superficial skin infections. Drugs 68:855–873.
17. Hahn BL, Onunkwo CC, Watts CJ, Sohnle PG. 2009. Systemic dissemination and cutaneous damage in a mouse model of staphylococcal skin infections. Microb. Pathog. 47:16–23.
18. Kugelberg E, Norstrom T, Petersen TK, Duvold T, Andersson DI, Hughes D. 2005. Establishment of a superficial skin infection model in mice by using Staphylococcus aureus and Streptococcus pyogenes. Antimicrob. Agents Chemother. 49:3435–3441.
19. Heggers JP, McHugh T, Zoellner S, Boertman JA, Niu XT, Robson MC, Velanovich V. 1989. Therapeutic efficacy of timentin and augmentin versus silvadene in burn wound infections. J. Burn Care Rehabil. 10:421–424.
20. Rode H, de Wet PM, Millar AJ, Cywes S. 1988. Bactericidal efficacy of mupirocin in multi-antibiotic resistant Staphylococcus aureus burn wound infection. J. Antimicrob. Chemother.
21:589–595.
21. Yamakawa T, Mitsuyama J, Hayashi K. 2002. In vitro and in vivo antibacterial activity of T-3912, a novel non-fluorinated topical quinolone. J. Antimicrob. Chemother. 49:455–465.
22. Boon RJ, Beale AS. 1987. Response of Streptococcus pyogenes to therapy with amoxicillin or amoxicillin-clavulanic acid in a mouse model of mixed infection caused by Staphylococcus aureus and Streptococcus pyogenes. Antimicrob. Agents Chemother. 31:1204–1209.
23. Gisby J, Bryant J. 2000. Efficacy of a new cream formulation of mupirocin: comparison with oral and topical agents in experimental skin infections. Antimicrob. Agents Chemother. 44:255–260.
24. McRipley RJ, Whitney RR. 1976. Characterization and quantitation of experimental surgical-wound infections used to evaluate topical antibacterial agents. Antimicrob. Agents Chemother. 10:38–44.
25. Rittenhouse S, Singley C, Hoover J, Page R, Payne D. 2006. Use of the surgical wound infection model to determine the efficacious dosing regimen of retapamulin, a novel topical antibiotic. Antimicrob. Agents Chemother. 50:3886–3888.
26. Thurlow LR, Hanke ML, Fritz T, Angle A, Aldrich A, Williams SH, Engebretsen IL, Bayles KW, Horswill AR, Kielian T. 2011. Staphylococcus aureus biofilms prevent macrophage phagocytosis and attenuate inflammation in vivo. J. Immunol. 186:6585–6596.
27. Kadurugamuwa JL, Sin L, Albert E, Yu J, Francis K, DeBoer M, Rubin M, Bellinger-Kawahara C, Parr TR, Jr, Contag PR. 2003. Direct continuous method for monitoring biofilm infection in a mouse model. Infect. Immun. 71:882–890.
28. Cho JS, Zussman J, Donegan NP, Ramos RI, Garcia NC, Uslan DZ, Iwakura Y, Simon SI, Cheung AL, Modlin RL, Kim J, Miller LS. 2011.
Noninvasive in vivo imaging to evaluate immune responses and antimicrobial therapy against Staphylococcus aureus and USA300 MRSA skin infections. J. Invest. Dermatol. 131:907–915.
29. Fang RC, Kryger ZB, Buck DW, De la Garza M, Galiano RD, Mustoe TA. 2010. Limitations of the db/db mouse in translational wound healing research: is the NONcNZO10 polygenic mouse model superior? Wound Repair Regen. 18:605–613.
30. Crandon JL, Kuti JL, Nicolau DP. 2010. Comparative efficacies of human simulated exposures of telavancin and vancomycin against methicillin-resistant Staphylococcus aureus with a range of vancomycin MICs in a murine pneumonia model. Antimicrob. Agents Chemother. 54:5115–5119.
31. Reyes N, Skinner R, Benton BM, Krause KM, Shelton J, Obedencio GP, Hegde SS. 2006. Efficacy of telavancin in a murine model of bacteraemia induced by methicillin-resistant Staphylococcus aureus. J. Antimicrob. Chemother. 58:462–465.
32. Dandekar PK, Tessier PR, Williams P, Nightingale CH, Nicolau DP. 2003. Pharmacodynamic profile of daptomycin against Enterococcus species and methicillin-resistant Staphylococcus aureus in a murine thigh infection model. J. Antimicrob. Chemother. 52:405–411.
33. LaPlante KL, Leonard SN, Andes DR, Craig WA, Rybak MJ. 2008. Activities of clindamycin, daptomycin, doxycycline, linezolid, trimethoprim-sulfamethoxazole, and vancomycin against community-associated methicillin-resistant Staphylococcus aureus with inducible clindamycin resistance in murine thigh infection and in vitro pharmacodynamic models. Antimicrob. Agents Chemother. 52:2156–2162.
34. Louie A, Liu W, Kulawy R, Drusano GL. 2011. In vivo pharmacodynamics of torezolid phosphate (TR-701), a new oxazolidinone antibiotic, against methicillin-susceptible and methicillin-resistant Staphylococcus aureus strains in a mouse thigh infection model. Antimicrob. Agents Chemother. 55:3453–3460.
35. Girard AE, Girard D, Gootz TD, Faiella JA, Cimochowski CR. 1995.
In vivo efficacy of trovafloxacin (CP-99,219), a new quinolone with extended activities against gram-positive pathogens, Streptococcus pneumoniae, and Bacteroides fragilis. Antimicrob. Agents Chemother. 39:2210–2216.
36. Azeh I, Gerber J, Wellmer A, Wellhausen M, Koenig B, Eiffert H, Nau R. 2002. Protein synthesis inhibiting clindamycin improves outcome in a mouse model of Staphylococcus aureus sepsis compared with the cell wall active ceftriaxone. Crit. Care Med. 30:1560–1564.
37. Cadena J, Nair S, Henao-Martinez AF, Jorgensen JH, Patterson JE, Sreeramoju PV. 2011. Dose of trimethoprim-sulfamethoxazole to treat skin and skin structure infections caused by methicillin-resistant Staphylococcus aureus. Antimicrob. Agents Chemother. 55:5430–5432.
38. Schmitz GR, Bruner D, Pitotti R, Olderog C, Livengood T, Williams J, Huebner K, Lightfoot J, Ritz B, Bates C, Schmitz M, Mete M, Deye G. 2010. Randomized controlled trial of trimethoprim-sulfamethoxazole for uncomplicated skin abscesses in patients at risk for community-associated methicillin-resistant Staphylococcus aureus infection. Ann. Emerg. Med. 56:283–287.
39. Hilliard JJ, Fernandez J, Melton J, Macielag MJ, Goldschmidt R, Bush K, Abbanat D. 2009. In vivo activity of the pyrrolopyrazolyl-substituted oxazolidinone RWJ-416457. Antimicrob. Agents Chemother. 53:2028–2033.
40. Healy DP, Polk RE, Garson ML, Rock DT, Comstock TJ. 1987. Comparison of steady-state pharmacokinetics of two dosage regimens of vancomycin in normal volunteers. Antimicrob. Agents Chemother. 31:393–397.
41. Woodworth JR, Nyhart EH, Jr, Brier GL, Wolny JD, Black HR. 1992. Single-dose pharmacokinetics and antibacterial activity of daptomycin, a new lipopeptide antibiotic, in healthy volunteers. Antimicrob. Agents Chemother. 36:318–325.
42. Burkhardt O, Borner K, von der Höh N, Köppe P, Pletz MW, Nord CE, Lode H. 2002. Single- and multiple-dose pharmacokinetics of linezolid and co-amoxiclav in healthy human volunteers. J. Antimicrob.
Chemother. 50:707–712.
43. Saano V, Paronen P, Peura P. 1990. Bioavailability of doxycycline from dissolved doxycycline hydrochloride tablets: comparison to solid form hydrochloride tablets and dissolved monohydrate tablets. Int. J. Clin. Pharmacol. Ther. Toxicol. 28:471–474.
44. Clinical and Laboratory Standards Institute (CLSI). 2012. Methods for dilution antimicrobial susceptibility tests for bacteria that grow aerobically; approved standard, 9th ed. CLSI document M07-A9. CLSI, Wayne, PA.
45. Seaton RA. 2008. Daptomycin: rationale and role in the management of skin and soft tissue infections. J. Antimicrob. Chemother. 62(Suppl 3):iii15–iii23.
46. Lipsky BA, Tabak YP, Johannes RS, Vo L, Hyde L, Weigelt JA. 2010. Skin and soft tissue infections in hospitalized patients with diabetes: culture isolates and risk factors associated with mortality, length of stay and cost. Diabetologia 53:914–923.
47. Dryden MS. 2011. Linezolid pharmacokinetics and pharmacodynamics in clinical treatment. J. Antimicrob. Chemother. 66(Suppl 4):iv7–iv15.
48. Otto M. 2010. Basis of virulence in community-associated methicillin-resistant Staphylococcus aureus. Annu. Rev. Microbiol. 64:143–162.
49. Wang R, Braughton KR, Kretschmer D, Bach TH, Queck SY, Li M, Kennedy AD, Dorward DW, Klebanoff SJ, Peschel A, Deleo FR, Otto M. 2007. Identification of novel cytolytic peptides as key virulence determinants for community-associated MRSA. Nat. Med. 13:1510–1514.
50. Andreu N, Zelmer A, Wiles S. 2011. Noninvasive biophotonic imaging for studies of infectious disease. FEMS Microbiol. Rev. 35:360–394.
51. Hutchens M, Luker GD. 2007. Applications of bioluminescence imaging to the study of infectious diseases. Cell Microbiol. 9:2315–2322.
52. Pribaz JR, Bernthal NM, Billi F, Cho JS, Ramos RI, Guo Y, Cheung AL, Francis KP, Miller LS. 2012. Mouse model of chronic post-arthroplasty infection: noninvasive in vivo bioluminescence imaging to monitor bacterial burden for long-term study. J.
Orthop. Res. 30:335–340.
53. Davis SL, McKinnon PS, Hall LM, Delgado G, Jr, Rose W, Wilson RF, Rybak MJ. 2007. Daptomycin versus vancomycin for complicated skin and skin structure infections: clinical and economic outcomes. Pharmacotherapy 27:1611–1618.
54. Lipsky BA, Stoutenburgh U. 2005. Daptomycin for treating infected diabetic foot ulcers: evidence from a randomized, controlled trial comparing daptomycin with vancomycin or semi-synthetic penicillins for complicated skin and skin-structure infections. J. Antimicrob. Chemother. 55:240–245.
55. McKinnon PS, Sorensen SV, Liu LZ, Itani KM. 2006. Impact of linezolid on economic outcomes and determinants of cost in a clinical trial evaluating patients with MRSA complicated skin and soft-tissue infections. Ann. Pharmacother. 40:1017–1023.
56. Weigelt J, Itani K, Stevens D, Lau W, Dryden M, Knirsch C. 2005. Linezolid versus vancomycin in treatment of complicated skin and soft tissue infections. Antimicrob. Agents Chemother. 49:2260–2266.
57. Lipsky BA, Itani KM, Weigelt JA, Joseph W, Paap CM, Reisman A, Myers DE, Huang DB. 2011. The role of diabetes mellitus in the treatment of skin and skin structure infections caused by methicillin-resistant Staphylococcus aureus: results from three randomized controlled trials. Int. J. Infect. Dis. 15:e140–e146.
58. Itani KM, Weigelt J, Li JZ, Duttagupta S. 2005. Linezolid reduces length of stay and duration of intravenous treatment compared with vancomycin for complicated skin and soft tissue infections due to suspected or proven methicillin-resistant Staphylococcus aureus (MRSA). Int. J. Antimicrob. Agents 26:442–448.
59. Itani KM, Dryden MS, Bhattacharyya H, Kunkel MJ, Baruch AM, Weigelt JA. 2010. Efficacy and safety of linezolid versus vancomycin for the treatment of complicated skin and soft-tissue infections proven to be caused by methicillin-resistant Staphylococcus aureus. Am. J. Surg. 199:804–816.
60. Sharpe JN, Shively EH, Polk HC, Jr. 2005.
Clinical and economic out- comes of oral linezolid versus intravenous vancomycin in the treatment of MRSA-complicated, lower-extremity skin and soft-tissue infections caused by methicillin-resistant Staphylococcus aureus. Am. J. Surg. 189: 425– 428. 61. LaPlante KL, Rybak MJ, Amjad M, Kaatz GW. 2007. Antimicrobial susceptibility and staphylococcal chromosomal cassette mec type in com- munity- and hospital-associated methicillin-resistant Staphylococcus au- reus. Pharmacotherapy 27:3–10. 62. Tokunaga T, Oka K, Takemoto A, Ohtsubo Y, Gotoh N, Nishino T. 1997. Efficacy of trimethoprim in murine experimental infection with a thymidine kinase-deficient mutant of Escherichia coli. Antimicrob. Agents Chemother. 41:1042–1045. Antibiotic Activity against CA-MRSA-Infected Wounds February 2013 Volume 57 Number 2 aac.asm.org 863 o n A p ril 5 , 2 0 2 1 a t C A R N E G IE M E L L O N U N IV L IB R h ttp ://a a c.a sm .o rg / D o w n lo a d e d fro m http://aac.asm.org http://aac.asm.org/ In Vivo Bioluminescence Imaging To Evaluate Systemic and Topical Antibiotics against Community-Acquired Methicillin-Resistant Staphylococcus aureus-Infected Skin Wounds in Mice MATERIALS AND METHODS CA-MRSA bioluminescent strain. Preparation of bacteria for inoculation. Mice. Mouse model of CA-MRSA skin wound infection using in vivo bioluminescence imaging. Subcutaneous and oral antibiotic therapy. Topical antibiotic therapy. Statistical analysis. RESULTS In vivo bioluminescence imaging to measure bacterial burden. Correlation of in vivo bioluminescent signals with ex vivo bacterial burden. Efficacy of systemic (subcutaneous and oral) antibiotic therapy. Efficacy of topical antibiotic therapy. Efficacy of systemic antibiotics in a mouse model of type 2 diabetes. 
DISCUSSION. ACKNOWLEDGMENTS. REFERENCES.

work_4vtma7zktve2bngzdjbhtmlymi ----

work_4wrs7i2ljbepvjdju63vs3glcu ---- Received 01/20/2020. Review began 02/02/2020. Review ended 02/05/2020. Published 02/09/2020. © Copyright 2020 Dent et al. This is an open access article distributed under the terms of the Creative Commons Attribution License CC-BY 4.0, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Validation of Teleconference-based Goniometry for Measuring Elbow Joint Range of Motion

Paul A. Dent Jr., Benjamin Wilke, Sarvram Terkonda, Ian Luther, Glenn G. Shi. 1. Physics, Hampden-Sydney College, Farmville, USA; 2. Orthopaedics, Mayo Clinic, Jacksonville, USA; 3. Plastic Surgery, Mayo Clinic, Jacksonville, USA; 4.
Physical Therapy, Mayo Clinic, Jacksonville, USA. Corresponding author: Glenn G. Shi, shi.glenn@mayo.edu

Abstract
Background: Range of motion (ROM) is a critical component of a physician's evaluation for many consultations. The purpose of this study was to evaluate whether teleconference goniometry could be as accurate as clinical goniometry.
Methods: Forty-eight volunteers participated in the study, yielding a sample size of 52 elbows. Each measurement was recorded consecutively in person, through teleconference, and by still-shot photography by two researchers trained in goniometry. Measurements of maximum elbow flexion and extension were taken and recorded.
Results: Teleconference goniometry had high agreement with clinical goniometry (Pearson coefficient: flexion 0.93, extension 0.87). Limits of agreement from the Bland-Altman test were 7° and -3° for flexion and 10.4° and -7.4° for extension. A t-test revealed a P-value of less than 0.001 between teleconference and clinical measurements, indicating a statistically significant difference.
Conclusions: ROM measurements through a teleconferencing medium are comparable to clinical ROM measurements. This would allow for interactive elbow ROM assessment with the orthopedist without the travel time and expenses of a clinic visit.
Categories: Physical Medicine & Rehabilitation, Orthopedics
Keywords: elbow, range of motion, rom, orthopedics, physical therapy, goniometry, telemedicine

Introduction
The increasing cost of healthcare can lead to a gap in patients' ability to access proper care. To address this, hospitals in Europe and Australia have experimented with telemedicine [1-4]. Telemedicine is a cost-effective way to consult with patients in their own homes, eliminating travel expenses and time while providing care [2]. Using telemedicine to determine ROM has piqued the interest of many physicians [1-7].
Open Access Original Article. DOI: 10.7759/cureus.6925. How to cite this article: Dent P A, Wilke B, Terkonda S, et al. (February 09, 2020) Validation of Teleconference-based Goniometry for Measuring Elbow Joint Range of Motion. Cureus 12(2): e6925.

ROM goniometry is a vital component of an orthopedic surgeon's examination. Measurements can be used to establish a baseline for patients, guide further improvement, or serve as a post-operative comparison [5,7-8]. Previous studies have shown promising results in validating telemedicine, using smartphone photography to provide measurements within the acceptable error range of a goniometer for fingers and elbows [5,7]. There is also strong agreement between telehealth and in-person clinical visits with respect to diagnoses of patients with chronic musculoskeletal conditions [4]. This is also important because photography-based goniometry relies less on observer expertise than clinical goniometry does [6]. Although previous research has shown strong findings for digital photography, none has validated ROM measurement through a teleconference [5-7]. Teleconferencing allows real-time measurement, whereas photography may lead to excess waiting. The purpose of this study is to determine whether teleconferencing can be used as an alternative means of evaluating ROM. This study could increase physician accessibility for patients in remote areas.

Materials And Methods
All volunteers were over the age of 18 years and in good health. Volunteers had to be able to comfortably perform flexion and extension of the elbow without pain, and they were excluded if they had a previous or ongoing injury.
If a volunteer was uncomfortable with teleconferencing, they were excluded from participation. Forty-eight healthy volunteers were recruited to participate in this prospective study. The study took place in a clinical setting to ensure standardized conditions. Every volunteer was asked to perform full flexion and extension of the elbow joint. The joint range of motion was measured and recorded in person by research personnel trained in goniometry by a board-certified physical therapist. The research personnel, blinded to their prior results, asked the participant to repeat the full extension and flexion of the same joints through a teleconferencing medium, measured the joints through the teleconferencing system, and recorded the data. Finally, screen photographs of the joints were measured by a second researcher to determine interobserver reliability. The second researcher was blinded to the results of the first measurements.

Clinical goniometry
The researcher measured maximum flexion and extension of the elbow using a standard goniometer. The position of the elbow was standardized for all participants: participants were recorded standing with elbows extended and the palms of the hands fully supinated (Figure 1). For flexion measurements, the participant was instructed to attempt to place their hand on their shoulder (Figure 2). The researcher used their preferred landmarks to record the measurements.

FIGURE 1: Maximum extension. A participant demonstrates maximum extension during a clinical trial.
FIGURE 2: Maximum flexion. A participant demonstrates maximum flexion during a clinical trial.

Telemedical goniometry
Full flexion and extension of the elbow were measured using the same goniometer that was used for clinical goniometry. Participants were measured through a computer-mounted camera via teleconference.
Participants were positioned 3-5 feet from the web camera. The camera used was a Logitech C270 720p camera (Logitech, Newark, CA, USA). Each participant was instructed to achieve maximum extension perpendicular to the web camera with palms fully supinated, and the researcher recorded the ROM by placing the goniometer up to the computer screen (Figure 1). Once recorded, the researcher took a screenshot of the teleconference to emulate digital photography for the second researcher to measure. This process was repeated for maximum elbow flexion using the same method as in the clinical trial (Figure 2).

Still-shot photography
The second researcher, blinded to the data recorded by the first, used the same goniometer from the previous trials. This researcher measured the full flexion and extension in the photographs of the participants' ROM and recorded the data.

Statistical analysis
A paired two-sample t-test for means was performed to determine the significance of the data. The sample size of 52 measurements was calculated based on a mean difference of 5° and an α of 0.05 (Table 1 & Table 2).

TABLE 1: t-Test: clinical vs. telemedical goniometry. This table presents in-depth statistics for the comparison between clinical goniometry and telemedicine-based goniometry.

                               Flexion                    Extension
                               Clinic    Teleconference   Clinic    Teleconference
Mean                           41.50     39.46            0.92      1.48
Variance                       44.37     40.88            12.50     19.08
Observations                   52.00     52.00            52.00     52.00
Pearson Correlation            0.93                       0.87
Hypothesized Mean Difference   5.00                       5.00
df                             51.00                      51.00
t Stat                         -8.50                      -18.29
P(T<=t) one-tail               1.2E-11                    2.4E-24
t Critical one-tail            1.68                       1.68
P(T<=t) two-tail               2.4E-11                    4.7E-24
t Critical two-tail            2.01                       2.01

TABLE 2: t-Test: clinical vs. photography goniometry. This table presents in-depth statistics for comparing clinical vs. photography-based goniometric measurements.

                               Flexion                    Extension
                               Clinic    Photography      Clinic    Photography
Mean                           41.50     40.02            0.92      0.38
Variance                       44.37     25.90            12.50     4.75
Observations                   52.00     52.00            52.00     52.00
Pearson Correlation            0.73                       0.82
Hypothesized Mean Difference   5.00                       5.00
df                             51.00                      51.00
t Stat                         -5.57                      -14.93
P(T<=t) one-tail               4.7E-07                    1.5E-20
t Critical one-tail            1.68                       1.68
P(T<=t) two-tail               9.4E-07                    3.0E-20
t Critical two-tail            2.01                       2.01

Interobserver reliability between clinical, photo, and teleconferencing measurements was calculated using Pearson coefficients for all measurements. An intraclass correlation coefficient (ICC) less than 0.4 represents low agreement, an ICC between 0.4 and 0.59 represents fair agreement, an ICC between 0.6 and 0.75 represents good agreement, and an ICC above 0.75 represents exceptional agreement between measurements [7]. A Bland-Altman analysis was also performed to determine the limits of agreement between clinical and teleconferencing measurements (Figure 3 and Figure 4).

FIGURE 3: Clinical vs. photography. A Bland-Altman plot representing flexion comparison measurements that fell within the 95% confidence interval.
FIGURE 4: Clinical vs. telemedicine.
A Bland-Altman plot representing the number of measurements that fell within a 95% confidence interval.

Results
Forty-eight subjects and 52 measurements were recorded in this study. The average clinical goniometry measurements were 41.5 +/- 6.7 degrees for flexion and 0.93 +/- 3.5 degrees for extension. Teleconference measurements yielded similar results, with flexion of 39.5 +/- 6.4 degrees and extension of 1.5 +/- 4.4 degrees, while photography-based measurements were 40 +/- 5.1 and 0.4 +/- 2.2 degrees, respectively. The differences recorded between measurements were statistically significant between clinical and photography as well as between clinical and teleconferencing. There was a mean difference of 2.7 +/- 1.7 degrees (paired t-test, P < .0001) in flexion between clinical and teleconferencing measurements. The mean difference between clinical and photography-based measurements was 3.7 +/- 3 degrees (paired t-test, P < .0001) for flexion. The findings were similar for extension (see Table 1 & Table 2 for in-depth statistics).

Interobserver reliability
All measurements showed strong reliability. Clinical vs. videoconferencing yielded a Pearson coefficient of 0.93 for flexion and 0.86 for extension. Clinical vs. photography yielded a Pearson coefficient of 0.73 for flexion and 0.82 for extension. The Bland-Altman test (Figure 3 and Figure 4) revealed that 50 out of 52 of the total flexion measurements fell within the limits of agreement (95% confidence interval) for telemedicine and clinical goniometry. Clinical vs. photographic measurements yielded the same results.

Discussion
This study validated that goniometric ROM measurements over a teleconferencing medium are consistent with clinical measurements.
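The agreement statistics used throughout this study (Pearson correlation, the paired t statistic, and Bland-Altman 95% limits of agreement) can be sketched in a few lines of plain Python. The paired readings below are hypothetical illustrations, not the study data.

```python
from math import sqrt

def agreement_stats(a, b):
    """Pearson r, paired t statistic (df = n - 1), and Bland-Altman
    95% limits of agreement for two paired series of readings."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    r = cov / sqrt(sum((x - ma) ** 2 for x in a) * sum((y - mb) ** 2 for y in b))
    diffs = [x - y for x, y in zip(a, b)]
    md = sum(diffs) / n
    sd = sqrt(sum((d - md) ** 2 for d in diffs) / (n - 1))  # sample SD of differences
    t_stat = md / (sd / sqrt(n))                            # paired t statistic
    loa = (md - 1.96 * sd, md + 1.96 * sd)                  # Bland-Altman limits
    return r, t_stat, loa

# Hypothetical paired flexion readings in degrees (clinic vs. teleconference)
clinic = [140, 138, 145, 150, 136, 148, 142, 139]
tele = [138, 135, 143, 147, 135, 145, 139, 138]
r, t_stat, (lo, hi) = agreement_stats(clinic, tele)
```

A reading whose clinic-minus-teleconference difference falls inside (lo, hi) counts as within the limits of agreement, mirroring the 50-of-52 result reported above.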
Teleconferencing measurements, like photography-based ones, also required less skill than taking a ROM measurement in person [6]. Patients could have a teleconference with a physician to evaluate ROM without needing to travel to the clinic. This may translate to cost savings for our medical systems [2]. This study may also improve patient return rates, as patients may be more likely to follow up with a physician when no travel is needed. Previous studies have reported accuracy in photography-based ROM measurements, yet none had attempted to validate ROM measurements through a teleconferencing medium [5-7]. This is important because a video consultation with a physician allows the patient to have their questions answered in real time. Photography has been proven accurate; however, it may lead to excess waiting for patients and ultimately decrease satisfaction. Teleconferencing has been reported to be satisfactory for patients with chronic musculoskeletal conditions and for virtual outreach consultations [3-4]. Dermatology has been a front-runner in the use of telemedicine, along with optometry. This study can increase the uses for telehealth in the orthopedic field. Patients are more likely to return for follow-up visits and physical therapy appointments if the location is closer to home, so return rates may be even higher for telehealth, as it requires no travel at all. It would make life easier for seniors or those recovering from arthroplasties. This study would also benefit rural communities by providing easy access to physicians who may have been out of reach prior to the adoption of teleconferencing. The limitations of this study include the small number of measurers and the need for patients to be tech-savvy. With telemedicine, patients must be able to understand how to use the system to speak with the physician and must be connected to the internet.
There was only one measurer for the clinical ROM measurements and the teleconference measurements, and one researcher measured all the photography-based ROM measurements. Although every measurement taken was standardized and unbiased, it may be beneficial to include other researchers trained in goniometry to further strengthen the findings. Videoconferencing measurements tended to underestimate the ROM values compared to the clinical setting. This could be explained by the difficulty of identifying the "bony" landmarks without palpating the patient's elbow. The photography-based measurements had an average difference of 3.7 degrees, compared to an average difference of 2.7 degrees for videoconferencing. This could be because the researchers used slightly different landmarks when recording their ROM measurements. Although the difference was greater, it was still under the accepted value of 5 degrees [5,8].

Conclusions
Teleconference can be a reliable resource for evaluating elbow ROM (the difference between maximum flexion and extension). Our findings demonstrated acceptable angular measurements (maximum elbow flexion and extension) via a teleconference screen. Results were similar to still photography and the clinical goniometer. The findings of this study may help lead to validating ROM measurements of other joints through a teleconferencing medium.

Additional Information
Disclosures
Human subjects: Consent was obtained from all participants in this study. Mayo Clinic Institutional Review Board issued approval 19-005484. The above referenced application is approved by expedited review procedures (45 CFR 46.110, category 4 and 6). The Reviewer conducted a risk-benefit analysis and determined the study constitutes minimal risk research; therefore, in accordance with 45 CFR 46.109(f)(1), continuing review is not required. The Reviewer determined that this research satisfies the requirements of 45 CFR 46.111. The Reviewer approved the accrual of 50 subjects.
The Reviewer noted that oral consent is appropriate for this study. The oral consent script was reviewed and approved as written. The Reviewer approved waiver of the requirement for the Investigator to obtain a signed consent form in accordance with 45 CFR 46.117, as justified by the Investigator. As protected health information is not being requested from subjects, HIPAA authorization is not required in accordance with 45 CFR 160.103. Animal subjects: All authors have confirmed that this study did not involve animal subjects or tissue. Conflicts of interest: In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work. Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work. Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work.

Acknowledgements
The authors of this paper recognize Griffin P. Stinson for recording goniometric measurements.

References
1. Naeemabadi M, Dinesen B, Andersen OK, et al.: Developing a telerehabilitation programme for postoperative recovery from knee surgery: specifications and requirements. BMJ Health Care Inform. 2019, 26. 10.1136/bmjhci-2019-000022
2. Buvik A, Bugge E, Knutsen G, et al.: Quality of care for remote orthopaedic consultations using telemedicine: a randomised controlled trial. BMC Health Serv Res. 2016, 16:483. 10.1186/s12913-016-1717-7
3.
Wallace P, Haines A, Harrison R, et al.: Joint teleconsultations (virtual outreach) versus standard outpatient appointments for patients referred by their general practitioner for a specialist opinion: a randomised trial. Lancet. 2002, 359:1961-1968. 10.1016/s0140-6736(02)08828-1
4. Cottrell MA, O'Leary SP, Swete-Kelly P, et al.: Agreement between telehealth and in-person assessment of patients with chronic musculoskeletal conditions presenting to an advanced-practice physiotherapy screening clinic. Musculoskelet Sci Pract. 2018, 38:99-105. 10.1016/j.msksp.2018.09.014
5. Meislin MA, Wagner ER, Shin AY: A comparison of elbow range of motion measurements: smartphone-based digital photography versus goniometric measurements. J Hand Surg Am. 2016, 41:510-515. 10.1016/j.jhsa.2016.01.006
6. Blonna D, Zarkadas PC, Fitzsimmons JS, et al.: Validation of a photography-based goniometry method for measuring joint range of motion. 2012, 21:29-35. 10.1016/j.jse.2011.06.018
7. Zhao JZ, Blazar PE, Mora AN, et al.: Range of motion measurements of the fingers via smartphone photography. 2019. 10.1177/1558944718820955
8. Gajdosik RL, Bohannon RW: Clinical measurement of range of motion. Review of goniometry emphasizing reliability and validity. Phys Ther. 1987, 67:1867-1872. 10.1093/ptj/67.12.1867
work_4wujcvhgdzax5l7yovs3rdlcui ---- Bergrath et al. Scandinavian Journal of Trauma, Resuscitation and Emergency Medicine 2013, 21:3. http://www.sjtrem.com/content/21/1/3

ORIGINAL RESEARCH. Open Access.

Prehospital digital photography and automated image transmission in an emergency medical service – an ancillary retrospective analysis of a prospective controlled trial

Sebastian Bergrath, Rolf Rossaint, Niklas Lenssen, Christina Fitzner and Max Skorning

Abstract
Background: Still picture transmission was performed using a telemedicine system in an Emergency Medical Service (EMS) during a prospective, controlled trial.
In this ancillary, retrospective study the quality and content of the transmitted pictures and the possible influences of this application on prehospital time requirements were investigated. Methods: A digital camera was used with a telemedicine system enabling encrypted audio and data transmission between an ambulance and a remotely located physician. By default, images were compressed (jpeg, 640 x 480 pixels). On occasion, this compression was deactivated (3648 x 2736 pixels). Two independent investigators assessed all transmitted pictures according to predefined criteria. In cases of different ratings, a third investigator had final decision competence. Patient characteristics and time intervals were extracted from the EMS protocol sheets and dispatch centre reports. Results: Overall 314 pictures (mean 2.77 ± 2.42 pictures/mission) were transmitted during 113 missions (group 1). Pictures were not taken for 151 missions (group 2). Regarding picture quality, the content of 240 (76.4%) pictures was clearly identifiable; 45 (14.3%) pictures were considered “limited quality” and 29 (9.2%) pictures were deemed “not useful” due to not/hardly identifiable content. For pictures with file compression (n = 84 missions) and without (n = 17 missions), the content was clearly identifiable in 74% and 97% of the pictures, respectively (p = 0.003). Medical reports (n = 98, 32.8%), medication lists (n = 49, 16.4%) and 12-lead ECGs (n = 28, 9.4%) were most frequently photographed. The patient characteristics of group 1 vs. 2 were as follows: median age – 72.5 vs. 56.5 years, p = 0.001; frequency of acute coronary syndrome – 24/113 vs. 15/151, p = 0.014. The NACA scores and gender distribution were comparable. Median on-scene times were longer with picture transmission (26 vs. 22 min, p = 0.011), but ambulance arrival to hospital arrival intervals did not differ significantly (35 vs. 33 min, p = 0.054). 
Conclusions: Picture transmission was used frequently and resulted in acceptable picture quality, even with compressed files. In most cases, previously existing "paper data" was transmitted electronically. This application may offer an alternative to other modes of ECG transmission. Because of the different patient characteristics, no conclusions about a prolonged on-scene time can be drawn. Mobile picture transmission holds important opportunities for clinical handover procedures and teleconsultation.

Keywords: Telemedicine, Teleconsultation, Digital image, Emergency medical service, Picture transmission, Photo transmission

* Correspondence: sbergrath@ukaachen.de. 1Department of Anaesthesiology, University Hospital Aachen, RWTH Aachen University, Pauwelsstr. 30, Aachen D-52074, Germany. Full list of author information is available at the end of the article. © 2013 Bergrath et al.; licensee BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Background
Digital cameras are becoming increasingly common in emergency departments and emergency medical services (EMS). Potential applications include photo documentation, facilitation of handover procedures, and collecting clinical pictures for teaching purposes. Medical societies, like the British Orthopaedic Association and the British Association of Plastic Surgeons, recommend the photo documentation of open wounds [1]. In Germany, digital cameras are mandatory on physician-staffed advanced life support ambulances and response units [2].
However, few studies to date have assessed the use of digital cameras in emergency medicine. Morgan et al. reported in 2007 that 80% of the analysed emergency departments in the United Kingdom had a digital or Polaroid camera that was ready for operation [3]. The availability of photographic equipment has increased over the past decade, but little data have been published about the scenarios in which digital photography was used, the content of the photographs taken, and the reasons for the use of digital photography in emergency medicine [4,5]. The objective of this study was to investigate the feasibility of using a digital camera for still picture transmission in a prehospital telemedicine system on one specifically equipped ambulance. The picture quality was rated, and the content of the pictures was classified. Moreover, the study analysed how often and for which diagnoses the camera was used. The possible influence of this application on prehospital time requirements was also evaluated.

Methods
The study was conducted as a retrospective, additional data analysis of data that was gathered during a previously performed prospective, controlled trial in the context of the research project entitled "Med-on-@ix" in Aachen, Germany [6-8].

Telemedicine system and approach within the prospective controlled trial
From May to September 2010, 289 emergency missions in total were performed by one advanced life support ambulance on weekdays between 7.30 am and 4 pm. In addition to the two paramedics who were normally assigned to this unit, the vehicle was staffed by an additional EMS physician during this funded period. Moreover, the vehicle was equipped with a portable telemedicine system that allowed real-time vital data transmission, 12-lead electrocardiogram (ECG) transmission on demand and video transmission (i.e. from a video camera embedded into the ceiling of the ambulance).
During the development phase of the system we found that mobile video transmission from a video camera fixed on a headband was not meaningful, because the perspective changed permanently due to head movements of the EMS personnel. Furthermore, video transmission required a stable broadband data connection, which was easier to realise with the roof antennas of the ambulance. Therefore, a mobile digital camera (Powershot A1000IS, Canon Inc, Tokyo, Japan) was used to take and transmit still pictures. A portable data transmission unit was developed (peeq-Box, P3 communications, Aachen, Germany) that enabled encoded broadband communication via four parallel data channels from different network providers (each data channel enabled the use of second or third generation mobile networks; max. uplink 1.4 Mbit/s, max. downlink 6 Mbit/s) and voice communication between the EMS team and a teleconsultation centre. This centre was staffed with experienced EMS physicians (tele-EMS physicians), who offered medical and organisational support when required. If a picture was taken by the EMS team, it was automatically transmitted from the digital camera to the transmission unit via a wireless LAN (Eye-Fi card, Eye-Fi, Mountain View, CA, USA) and then transferred to the teleconsultation centre via mobile networks. By default, the system compressed the photos to a file size of < 100 kB (jpeg format, 640 × 480 pixels). This compression mode was deactivated on some study days in order to gain experience with the transmission of uncompressed picture files (3648 × 2736 pixels, approximate file size 2-2.5 MB). Under ideal testing circumstances, end-to-end transmission times were approximately 25 seconds when using the compression mode and up to 2 minutes with uncompressed files. EMS teams were free to decide which telemedical applications they wanted to use based on medical necessity.
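As a rough sanity check on these figures, the raw serialization time of one picture over the stated 1.4 Mbit/s uplink can be estimated from file size alone. This back-of-envelope sketch is illustrative only: it ignores connection setup, latency, retransmissions, and protocol overhead, which evidently dominated the reported end-to-end times.

```python
def transfer_seconds(file_bytes: float, uplink_mbit_s: float) -> float:
    """Naive serialization time for one picture: bits divided by link rate."""
    return file_bytes * 8 / (uplink_mbit_s * 1e6)

# Figures from the text: ~100 kB compressed vs. ~2.5 MB uncompressed, 1.4 Mbit/s uplink
compressed = transfer_seconds(100e3, 1.4)    # well under a second of pure transfer
uncompressed = transfer_seconds(2.5e6, 1.4)  # roughly 14 s of pure transfer
```

The gap between these idealized values and the observed 25 s / 2 min end-to-end times suggests that connection handling and real-world mobile throughput, not raw file size alone, drove the latency, which supports the default compression choice.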
In fact, no defined criteria were established to dictate when picture transmission should or should not be used, but in a previously performed training program all EMS physicians and paramedics were cautioned to be judicious when gathering data. More specifically, they were told that pictures should only be taken and transmitted when a medical rationale supported the action. During the training, examples of situations when digital image transmission might be meaningful were distributed: e.g., transmission of medical reports, medication lists, medication packages, accident kinematics, open wounds or skin rashes, and facial asymmetry. Particular emphasis was given to the high sensitivity related to pictures that allow the patient to be identified (e.g., patient's face, name and date of birth on documents). No technical guidelines for the use of the digital camera were provided to this group. A questionnaire about the use and performance of the telemedical applications was completed after each mission.

Ethics statement and trial registration
This study presents an ancillary, retrospective analysis of transmitted pictures and EMS mission data that
All data recorded within the project were used strictly for non-commercial purposes, and the recorded digital images were only stored for documentation and scientific purposes. During the prospective, controlled trial we found that in acute stroke the telemedical approach led to improved data transmission from the prehospital to the in-hospital setting. Prehospital time intervals were not affected negatively by teleconsultation, and the quality of prehospital stroke diagnoses was comparable between telemedically supported EMS and regular EMS [7]. In acute coronary syndromes, significantly more patients received urgent percutaneous coronary intervention in the telemedicine group [8].

Retrospective ancillary data collection and analysis

Prior to the analysis, a local expert committee consisting of EMS physicians defined criteria for the evaluation and classification of the pictures (Tables 1 and 2). All questionnaires were screened to detect missions where digital photography and transmission was documented (Figure 1).

Table 1 Classification of transmitted still pictures

Content | Transmitted pictures (% of categorised photos) | Corresponding emergency missions
Medication list | 49 (16.4%) | 37
Medication packages | 12 (4%) | 6
Medical report, physician's note | 98 (32.8%) | 42
Nursing report | 17 (5.7%) | 11
12-lead ECG | 28 (9.4%) | 13
Patient (with identifiable face) | 16 (5.4%) | 12
Patient's detail (without identifiable face) | 20 (6.7%) | 10
Surrounding of emergency site | 10 (3.3%) | 5
Accident kinematics/mechanism | 2 (0.7%) | 1
Other content (not previously defined) | 47 (15.7%) | 30

The identified missions were searched in a specifically secured database, where all data of the missions were stored. Two faculty investigators had secured access to the database for a limited time. Both assessed the missions' data and all pictures independently. The quality of the pictures was assessed by the two faculty investigators and categorised into one of three classes.
If the content of a picture was clearly identifiable (e.g., text clearly readable), the photograph was rated as "good quality". The rating "limited quality" was assigned to pictures with limited clarity of content. Pictures were evaluated as "not useful" if the content was not, or only partially, identifiable. Additionally, the resolution and file size of each picture were retrieved for analysis, to determine whether file size influenced the recognisability or readability of the pictures. The content of each picture was categorised (Table 1). If a picture of a medical record included a medication list, it was assigned to the category "medical record", because nearly all medical records contained recommendations for medications. After the first assessment, all datasets that were rated differently by the two investigators were evaluated by a third investigator, who discussed the case with investigators 1 and 2 until either a joint decision was reached or a final decision was made by investigator 3.

Medical diagnoses and demographic data were obtained from the EMS protocol sheets for all missions with (group 1) and without documented picture transmission (group 2). All missions in both groups were performed by the same ambulance and medical personnel (Figure 1). The time intervals designated "on-scene time" and "ambulance arrival to hospital arrival" were measured with electronic timestamps that were transmitted via radio from the ambulance to the EMS dispatch centre; the time data from the two groups were compared. All data were transferred to a database (SAS database, Version 9.2, SAS Institute Inc., Cary, NC, USA).

Statistical methods

Categorical data are presented as frequencies and percentages. The numbers of transmitted pictures and the file sizes are expressed as means and standard deviations (± SD), whereas age, NACA scores and time intervals are expressed as medians and interquartile ranges (IQR).
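These descriptive conventions can be reproduced with Python's standard library. This is an illustrative sketch rather than the SAS code actually used, and the quartile convention is an assumption, since the paper does not state how the IQR was computed:

```python
import statistics

def mean_sd(values):
    # Mean and sample standard deviation, as reported for picture counts and file sizes.
    return statistics.mean(values), statistics.stdev(values)

def median_iqr(values):
    # Median and interquartile range (Q3 - Q1), as reported for age, NACA scores
    # and time intervals. The "exclusive" quantile method is an assumption.
    q1, _, q3 = statistics.quantiles(values, n=4)
    return statistics.median(values), q3 - q1
```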
To analyse the influence of file size on picture quality for missions with the compression mode activated and missions without file compression, the ratios of "pictures with good quality/total number of pictures" were calculated and compared using the unpaired Wilcoxon test. Ages, NACA scores and prehospital time intervals of group 1 were compared with those of group 2 using the unpaired Wilcoxon test. Gender distribution and the frequency of medical diagnoses were compared using Fisher's exact test. Because this was a pilot study, no alpha-adjustment was performed, and p-values of less than 0.05 were considered significant. All statistical analyses were conducted using SAS (Version 9.2, SAS Institute Inc, Cary, NC, USA).

Table 2 Diagnosis-related usage of picture transmission

Diagnosis | Missions with intention to use picture transmission (n = 113) [mean number of transmitted pictures ± SD] | Missions without picture transmission (n = 151) | p-value
Acute coronary syndrome | 24 [3.33 ± 3.38] | 15 | 0.014
Cardiopulmonary resuscitation successful | 2 [7.0 ± 9.90] | 1 | –d
Cardiopulmonary resuscitation not successful | 0 [–] | 1 | –d
Other cardio-circulatory emergency | 21 [2.48 ± 1.47] | 23 | 0.51
Strokea/TIA | 11 [3.82 ± 2.14] | 6 | 0.076
Seizure and other neurological emergency | 13 [1.85 ± 0.99] | 15 | 0.69
Syncope/orthostatic dysregulation | 11 [2.73 ± 2.05] | 18 | 0.69
Minor and moderate trauma | 8 [2.25 ± 1.83] | 11 | 1.0
Major trauma | 0 [–] | 1 | –d
Determination of deathb | 1 [–] | 8 | 0.083
Paediatric emergencyc | 2 [2.50 ± 2.12] | 5 | 0.70
Asthma/obstructive airway disease | 2 [1.50 ± 0.71] | 6 | 0.47
Psychiatric emergency | 2 [3.00 ± 1.41] | 3 | 1.0
Intoxication | 1 [–] | 4 | 0.40
Other emergency | 15 [2.27 ± 1.62] | 34 | 0.078

SD, standard deviation; TIA, transient ischaemic attack. a: intracranial haemorrhage included. b: no resuscitation attempts. c: defined as age < 6 years. d: statistical comparison not meaningful due to small case numbers.

Results

Usage of digital image transmission and picture quality

Overall, 264 missions were screened, and 314 pictures from 113 (42.8%) missions (range 0–17, mean 2.77 ± 2.42 pictures per mission) were found in the database and included in the analysis (Figure 1). Of these, 240 (76.4%) pictures (mean 2.38 ± 1.97 pictures per mission) exhibited clearly identifiable content ("good quality"). A total of 45 (14.3%) pictures (mean 0.45 ± 1.08 pictures per mission) were of "limited quality", and 29 (9.2%) pictures were assessed as "not useful" because their content was not, or only barely, identifiable. Common reasons for unacceptable picture quality included unintentional camera shake and the use of the flash against a bright background, especially with photos of paper-based documents. Figures 2 and 3 show examples of transmitted pictures without identifiable personal data.

Comparison of compressed and uncompressed file transmission

In 84 of the missions with successful transmission, file compression was activated, and in 17 missions photographs were transmitted without compression. The mean file sizes were 45 ± 34 kB and 2261 ± 300 kB for the lower and higher resolutions, respectively. With the compression mode activated, 74% of the pictures contained clearly identifiable content according to the faculty investigators' assessment (median ratio "pictures with good quality/total number of pictures" = 100%, range 50%–100%, n = 84), as compared with 97% without file compression (median ratio "pictures with good quality/total number of pictures" = 100%, range 0%–100%, n = 17), p = 0.003.
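The diagnosis frequencies in Table 2 (and the gender distribution) were compared with Fisher's exact test. The following standard-library sketch of the two-sided test is for illustration only, not the SAS procedure actually used; on the acute coronary syndrome row (24 of 113 group 1 missions versus 15 of 151 group 2 missions) it should land close to the reported p = 0.014:

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher's exact test for the 2x2 table [[a, b], [c, d]]:
    sums the hypergeometric probabilities of every table with the same
    margins that is no more probable than the observed one."""
    n = a + b + c + d
    row1, col1 = a + b, a + c

    def pmf(x):  # P(top-left cell = x) with all margins fixed
        return comb(col1, x) * comb(n - col1, row1 - x) / comb(n, row1)

    p_obs = pmf(a)
    lo, hi = max(0, row1 + col1 - n), min(row1, col1)
    return sum(pmf(x) for x in range(lo, hi + 1) if pmf(x) <= p_obs * (1 + 1e-9))

# ACS row of Table 2: 24/113 vs 15/151 missions (89 and 136 without ACS);
# the result should be close to the reported p = 0.014.
p_acs = fisher_exact_two_sided(24, 89, 15, 136)
```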
Classification of pictures and emergency missions

Overall, 299 pictures were classified according to the predefined content criteria (Table 1). The remaining pictures (n = 15) were not classifiable due to unidentifiable content. Most frequently, medical reports and physicians' notes were photographed (n = 98, 32.8%). The content of 47 pictures did not fit any of the nine predefined categories; most of these were photographed personal data (e.g., the patient's name and address). Table 2 displays the diagnoses related to digital image transmission. In missions involving acute coronary syndromes, other circulatory emergencies, cardiopulmonary resuscitation and neurological emergencies, picture transmission was used in 46 to 67% of the corresponding missions.

Figure 1: Trial flow. Of 289 missions performed by one telemedically equipped ambulance, 264 were screened after exclusions (complete dropout of the system, n = 13; no patient on-scene, n = 10; mission in a foreign country, n = 1 [a]; incidental first-aid assistance at a road accident without use of the system, n = 1). Group 1: intention to use picture transmission documented, n = 113 missions (malfunction of transmission documented in 6 missions, with pictures nevertheless found in the database for 4 of them [b]; faultless function documented in 107 missions, with pictures found in the database for 97). In total, 314 transmitted pictures from 101 missions were assessed. Group 2: no documented picture transmission, n = 151 missions. Prehospital time intervals, medical diagnoses and NACA scores were retrieved for both groups. NACA score: National Advisory Committee for Aeronautics score (seven-level severity score). a: mission in the Netherlands; the EMS station was 7.3 km from the border, and cross-border assistance between the EMS is governed by contract. b: in 4 missions, mission-related pictures were found although a malfunction of this application was documented.

Figure 2: Sample pictures (A–H). Only pictures without identifiable information are displayed; patient names and patient-related details were obscured. A: medication list, B: medication list, C: medication packages, D: medical report, E: medical report, F: medication list, G: patient's detail, H: medication packages.

Figure 3: Sample 12-lead ECGs. Only ECGs without identifiable personal data are displayed.

Patient characteristics of group 1 (i.e., missions with documented picture transmission, n = 113) were compared with those of group 2 (i.e., missions without picture transmission, n = 151). The median patient age in group 1 was 72.5 (IQR 30) years, as compared with 56.5 (IQR 41) years in group 2, p = 0.001. No significant difference in gender distribution was detected; 47.8% (n = 54) versus 49.3% (n = 71) were female, p = 0.90. The median NACA scores were 4 (IQR 1) for group 1 versus 3 (IQR 1) for group 2, p = 0.28.
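Age and NACA scores were compared between the groups with the unpaired two-sample Wilcoxon (Mann-Whitney U) test. As an illustration only (the paper's SAS implementation may use exact rather than asymptotic p-values), a normal-approximation sketch with mid-ranks for ties:

```python
from itertools import groupby
import math

def rank_sum_test(xs, ys):
    """Unpaired two-sample Wilcoxon (Mann-Whitney U) test, normal
    approximation with mid-ranks for ties; returns (U, two-sided p)."""
    pooled = sorted([(v, 0) for v in xs] + [(v, 1) for v in ys])
    values = [v for v, _ in pooled]

    # Assign mid-ranks: tied observations share the average of their positions.
    rank_of, pos = [], 0
    while pos < len(values):
        j = pos
        while j < len(values) and values[j] == values[pos]:
            j += 1
        rank_of.extend([(pos + 1 + j) / 2.0] * (j - pos))
        pos = j

    n1, n2 = len(xs), len(ys)
    n = n1 + n2
    r1 = sum(r for (_, g), r in zip(pooled, rank_of) if g == 0)
    u = r1 - n1 * (n1 + 1) / 2.0           # Mann-Whitney U for sample 1
    mu = n1 * n2 / 2.0
    ties = sum(t ** 3 - t for t in (len(list(g)) for _, g in groupby(values)))
    var = n1 * n2 / 12.0 * ((n + 1) - ties / (n * (n - 1)))
    if var == 0:
        return u, 1.0
    z = (u - mu) / math.sqrt(var)
    return u, min(1.0, 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0)))))
```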
No significant differences in the frequency of the diagnoses were found between the groups, except for acute coronary syndromes (Table 2).

Prehospital time requirements

The median on-scene time was 23 min (IQR 10, n = 205 with complete documentation of the time interval), and the median ambulance arrival to hospital arrival time was 35 min (IQR 14, n = 198 with complete documentation) for all missions. For group 1 (n = 93 with complete documentation), the median on-scene time was 26 min (IQR 9), as compared with 22 min (IQR 11) for group 2 (n = 112 with complete documentation), p = 0.011. The median ambulance arrival to hospital arrival times for groups 1 and 2 were 35 min (IQR 11, n = 90 with complete documentation) and 33 min (IQR 14.5, n = 108 with complete documentation), respectively (p = 0.054).

Discussion

This study evaluated the quality and content of transmitted digital images in a prehospital telemedicine system under routine clinical conditions. Pictures were taken and transmitted in 42.8% of the analysed missions. Most of them were deemed of "good quality" (76.4%) and mainly captured previously existing documents, such as medical reports, medication lists or 12-lead ECGs. An identifiable face of the patient was displayed in only 6.1% of the pictures, but most of the documents contained confidential personal data. Between the groups, patient characteristics differed significantly in terms of age and the frequency of acute coronary syndromes. Therefore, no conclusions should be drawn from a comparison of these groups regarding the influence of image transmission on prehospital time intervals.

Digital image transmission was used frequently and resulted in photographs of predominantly satisfactory picture quality. Camera shake, a common reason for impaired quality, is nearly impossible to prevent in an emergency setting.
Using the flash while taking pictures of documents frequently caused overexposure, resulting in pictures without identifiable content. We therefore recommend adjusting the default flash settings. Quality and readability improved significantly when the uncompressed transmission mode was used, but the file sizes were about 50 times larger. Compressed JPEG files seem to be reasonable in this setting, but studies are needed to evaluate whether a weaker file compression improves quality in the same way as the uncompressed mode. Especially in rural areas, the amount of data is crucial in determining acceptable transmission times, because mostly mobile networks with lower uplink capacities are available there.

Most of the photos contained personal data, but an identifiable face was photographed in only a few missions. In most instances, no new data were generated (e.g., a picture of a face); instead, previously existing data (e.g., medical documents) were converted into a digital format and transmitted. Transmitting video files from the emergency site could have been an alternative, but the technical and practical considerations described above led to the use of still-picture transmission. In the future, when mobile networks enable faster and more stable upload capacities, mobile video transmission may become easier to realise and could be more meaningful in certain situations (e.g., remote neurological assessment). However, the focus of the described telemedicine system was to obtain sufficient data for remote teleconsultation within a narrow time period. Most of the time, documents were photographed, and such information can be extracted easily from single pictures.
Thus, the tele-EMS physician was able to gain detailed information without committing to overly time-consuming audio communication with the EMS team. Obtaining sufficient data for the necessary medical decisions within the shortest possible time is crucial for emergency teleconsultation. Unfortunately, the extent to which measurable benefits to patient care were achieved remains unclear. If electronic patient records become accessible via the Internet in the future, or if localised electronic health information (e.g., an electronic health card) becomes routinely used, the need for photographs of medical documents will be reduced noticeably.

The described system enabled image transmission of previously printed 12-lead ECGs (e.g., recorded by the general practitioner) in addition to, or instead of, 12-lead ECG transmission from the vital-data monitor. Consequently, this data transmission enabled pre-notification of the on-call cardiologist when needed. Patients with acute coronary syndromes who were treated with additional teleconsultation received urgent percutaneous coronary intervention significantly more often than patients treated by regular EMS [8]. Previous research has clearly demonstrated that transmission of the 12-lead ECG enhances treatment processes and improves outcome [9-13]. In a study using a cellular video-phone for remote interpretation of prehospital 12-lead ECGs, the interpretation quality was comparable to the evaluation of a printed ECG [14]. Ohtsuka et al. demonstrated that the transmission of 12-lead ECGs was feasible and rapid using an older type of camera phone [15]. All pictures of ECGs (n = 28) in our study were completely readable (Figure 3). In situations where a previously recorded ECG differs from the current ECG, picture transmission seems to offer a meaningful addition to the standard ECG transmission modes.
However, the main intention of picture transmission was to enable rapid medical assistance by a remote experienced physician, who was the receiver of all transmitted data. Medical decisions can be based on transmitted pictures if they contain critical information (e.g., a medical report or 12-lead ECG), and this approach can reduce the amount of audio communication needed during emergency teleconsultation.

To evaluate the influence of picture transmission on prehospital time requirements, we first analysed the comparability of both groups regarding patient characteristics. Although no significant differences in the NACA score and gender distribution were detected, patients of group 1 were significantly older, and significantly more patients were diagnosed with acute coronary syndrome compared with group 2. Therefore, no conclusions should be drawn from the significantly prolonged on-scene time, because the differing patient characteristics may themselves have caused the prolongation. Overall, the meaningfulness of time-interval comparisons between the two groups is questionable. The ambulance arrival to hospital arrival intervals did not differ significantly, but data about the exact driving route or the use of emergency lights and sirens en route to the hospital were not available.

The operation of the described system represents a considerable effort with associated costs. In contrast, the use of smartphones could be a comparatively inexpensive alternative. Indeed, pictures taken with a smartphone can be transmitted to any e-mail address or to another smartphone. Unfortunately, this transmission mostly occurs without proper encryption, and its reliability is unknown. Furthermore, local storage of confidential data seems problematic. Smartphones have already been evaluated for similar purposes in different disciplines.
For example, in the plastic surgery context, image transmission led to shorter treatment intervals with comparable diagnostic accuracy in the assessment of free flaps when compared with classic in-house assessment [16]. Even with an older 1.1-megapixel camera phone, satisfactory assessments of the replantation potential of completely amputated fingers were possible [17]. Pirris et al. demonstrated that a patient's cell-phone camera can be used for the remote evaluation of infected wounds [18]. In such an ambulatory setting, ultra-short transmission times are not required, and in situations with non-urgent communication between a patient and the surgeon, compromises in the reliability of transmission are acceptable. However, for prehospital teleconsultation a single smartphone does not enable multiple telemedical applications. Secure availability, high reliability and encrypted transmission are required for teleconsultation in EMS. Our pilot system was designed to meet these demands, but in 4.5% of all missions (n = 289) a complete drop-out occurred. Prior to routine use, the reliability must be improved in order to achieve the advantage offered by the combined use of several networks. Successful functioning is crucial for the introduction of potentially helpful telemedical applications. Prior to our project, different systems with similar applications were developed but not evaluated in clinical routine [19]. In a previous observational study with a precursor of our system, a tablet computer with a built-in camera was used. The frequency of picture transmission was comparable; however, the content of the photos was not evaluated, and the picture quality was only rated cumulatively [20]. The poor reliability and stability of this tablet PC led to the changes described.
In addition to their operation within a telemedicine system, digital cameras can be useful for documentation and teaching in EMS. Photos taken on-scene enable an in-hospital trauma team to get realistic impressions of accident kinematics. A large display of the images is desirable, but even the camera's built-in screen allows this information to be shared. Prehospital images should be saved in picture archiving and communication systems (PACS) so that they become available for the whole treatment process.

Limitations

The frequency of picture transmission probably depended not only on medical necessity but also on the team's attitude towards the system. Pictures were taken during daytime from spring to autumn, due to restricted funding capacities. Pictures taken at night or under different weather conditions (e.g., snow) would probably result in varying picture quality. Unfortunately, no technical guidelines for camera use were provided to the EMS; had such recommendations been implemented, different results might have been obtained. As mentioned above, the non-comparable groups meant that the comparison of time intervals yielded only very limited assessable results. Moreover, incorrect assessments by the investigators cannot be ruled out definitively, but we minimised this risk by using the described assessment procedure. This study was designed to evaluate the feasibility, quality and content of transmitted images, and influences on time requirements. Influences on patient outcomes were not measured and were not the purpose of this study. If the described approach were implemented into routine care to support paramedic-staffed ambulances, this study would have to be repeated in that different setting. The indications for still-picture transmission might be different when no physician is on-scene.

Conclusions

Encrypted picture transmission in EMS missions is feasible. A remotely located physician received photos with high informative value.
Picture quality was satisfactory, but the optimal file compression mode remains unclear and should be adjusted according to local network availability. In most cases, previously existing medical documents or 12-lead ECGs were photographed. Image transmission of printed 12-lead ECGs seems to be an alternative to standard transmission modes for special situations. Nevertheless, the influence of picture transmission on prehospital time requirements remains unclear. Although digital photography in medicine is cheap and easy to realise, secure data storage must be guaranteed. Further studies to evaluate the influence of this application on patient outcome should be conducted.

Competing interests

The study was conducted within the joint research project "Med-on-@ix", funded by the German Federal Ministry of Economics and Technology (BMWi), Project No. 01 MB07022. Philips Healthcare (Hamburg, Germany) and P3 communications (Aachen, Germany) contributed their own financial resources. The funders had no role in the study design, data collection and analysis, decision to publish, or preparation of the manuscript. No author had financial relationships or other non-financial dependencies with the funding sponsors.

Authors' contributions

SB, MS and RR had the initial idea for the study. SB, NL and MS assessed and rated the pictures. CF and SB performed the statistical analysis. SB, RR, NL, CF and MS performed data interpretation. SB, RR, NL and MS drafted the manuscript, and SB, NL and MS performed the literature search. MS, CF, RR and NL revised the manuscript critically. All authors read and approved the final manuscript.

Author's information

Sebastian Bergrath: www.anaesthesie.ukaachen.de; www.medonaix.de

Acknowledgements

We thank Sebastian Thelen, Michael Protogerakis, Tadeusz Brodziak and Marie-Thérèse Schneiders for supporting the study.

Author details

1 Department of Anaesthesiology, University Hospital Aachen, RWTH Aachen University, Pauwelsstr. 30, D-52074 Aachen, Germany. 2 Department of Medical Statistics, RWTH Aachen University, Aachen, Germany.

Received: 10 August 2012. Accepted: 13 January 2013. Published: 16 January 2013.

References

1. Nanchahal J, Nayagam N, Khan U, Moran C, Barrett S, Sanderson F, Pallister I: Standards for the management of open fractures of the lower limb. British Association of Plastic Reconstructive and Aesthetic Surgeons, British Orthopedic Association. London: RSM Press; 2009.
2. Rescue Services and Hospital Standards Committee: www.nark.din.de.
3. Morgan BW, Read JR, Solan MC: Photographic wound documentation of open fractures: an update for the digital generation. Emerg Med J 2007, 24:841–842.
4. Solan MC, Calder JD, Gibbons CE, Ricketts DM: Photographic wound documentation after open fracture. Injury 2001, 32:33–35.
5. Windsor JS, Rodway GW, Middleton PM, McCarthy S: Digital photography. Postgrad Med J 2006, 82:688–692.
6. Skorning M, Bergrath S, Rörtgen D, Brokmann JC, Beckers SK, Protogerakis M, Brodziak T, Rossaint R: E-health in emergency medicine - the research project Med-on-@ix. Anaesthesist 2009, 58:285–292.
7. Bergrath S, Reich A, Rossaint R, Rörtgen D, Gerber J, Fischermann H, Beckers SK, Brokmann JC, Schulz JB, Leber C, Fitzner C, Skorning M: Feasibility of prehospital teleconsultation in acute stroke - a pilot study in clinical routine. PLoS One 2012, 7:e36796.
8. Fischermann H, Skorning M, Bergrath S, Krüger S, Schröder J, Fitzner C, Rörtgen D, Beckers SK, Rossaint R: Prehospital teleconsultation in acute coronary syndromes: feasibility and effects [abstract]. Circulation 2011, 124:21s A250.
9. Sejersten M, Sillesen M, Hansen PR, Nielsen SL, Nielsen H, Trautner S, Hampton D, Wagner GS, Clemmensen P: Effect on treatment delay of prehospital teletransmission of 12-lead electrocardiogram to a cardiologist for immediate triage and direct referral of patients with ST-segment elevation acute myocardial infarction to primary percutaneous coronary intervention. Am J Cardiol 2008, 101:941–946.
10. Carstensen S, Nelson GC, Hansen PS, Macken L, Irons S, Flynn M, Kovoor P, Soo Hoo SY, Ward MR, Rasmussen HH: Field triage to primary angioplasty combined with emergency department bypass reduces treatment delays and is associated with improved outcome. Eur Heart J 2007, 28:2313–2319.
11. Adams GL, Campbell PT, Adams JM, Strauss DG, Wall K, Patterson J, Shuping KB, Maynard C, Young D, Corey C, Thompson A, Lee BA, Wagner GS: Effectiveness of prehospital wireless transmission of electrocardiograms to a cardiologist via hand-held device for patients with acute myocardial infarction (from the Timely Intervention in Myocardial Emergency, NorthEast Experience [TIME-NE]). Am J Cardiol 2006, 98:1160–1164.
12. Dhruva VN, Abdelhadi SI, Anis A, Gluckman W, Hom D, Dougan W, Kaluski E, Haider B, Klapholz M: ST-segment analysis using wireless technology in acute myocardial infarction (STAT-MI) trial. J Am Coll Cardiol 2007, 50:509–513.
13. Sanchez-Ross M, Oghlakian G, Maher J, Patel B, Mazza V, Hom D, Dhruva V, Langley D, Palmaro J, Ahmed S, Kaluski E, Klapholz M: The STAT-MI (ST-segment analysis using wireless technology in acute myocardial infarction) trial improves outcomes. JACC Cardiovasc Interv 2011, 4:222–227.
14. Gonzalez MA, Satler LF, Rodrigo ME, Gaglia MA, Ben-Dor I, Maluenda G, Hanna N, Suddath WO, Torguson R, Pichard AD, Waksman R: Cellular video-phone assisted transmission and interpretation of prehospital 12-lead electrocardiogram in acute ST-segment elevation myocardial infarction. J Interv Cardiol 2011, 24:112–118.
15. Ohtsuka M, Uchida E, Nakajima T, Yamaguchi H, Takano H, Komuro I: Transferring images via the wireless messaging network using camera phones shortens the time required to diagnose acute coronary syndrome. Circ J 2007, 71:1499–1500.
16. Engel H, Huang JJ, Tsao CK, Lin CY, Chou PY, Brey EM, Henry SL, Cheng MH: Remote real-time monitoring of free flaps via smartphone photography and 3G wireless internet: a prospective study evidencing diagnostic accuracy. Microsurgery 2011, 31:589–595.
17. Hsieh CH, Jeng SF, Chen CY, Yin JW, Yang JC, Tsai HH, Yeh MC: Teleconsultation with the mobile camera-phone in remote evaluation of replantation potential. J Trauma 2005, 58:1208–1212.
18. Pirris SM, Monaco EA, Tyler-Kabara EC: Telemedicine through the use of digital cell phone technology in pediatric neurosurgery: a case series. Neurosurgery 2010, 66:999–1004.
19. Chu Y, Ganz A: A mobile teletrauma system using 3G networks. IEEE Trans Inf Technol Biomed 2004, 8:456–462.
20. Bergrath S, Rörtgen D, Rossaint R, Beckers SK, Fischermann H, Brokmann JC, Czaplik M, Felzen M, Schneiders MT, Skorning M: Technical and organisational feasibility of a multifunctional telemedicine system in an emergency medical service - an observational study. J Telemed Telecare 2011, 17:371–377.

doi:10.1186/1757-7241-21-3
Cite this article as: Bergrath et al.: Prehospital digital photography and automated image transmission in an emergency medical service – an ancillary retrospective analysis of a prospective controlled trial. Scandinavian Journal of Trauma, Resuscitation and Emergency Medicine 2013, 21:3.
work_52j3fyubh5epvbkwjxyzwuih5a ----

High dynamic range imaging for archaeological recording

David Wheatley
Archaeology, Faculty of Arts and Humanities, University of Southampton, Southampton, SO15 1BF
dww@soton.ac.uk
+44 2380 594779 (office), +44 7801 435297 (mobile)

Abstract

This paper notes the adoption of digital photography as a primary recording means within archaeology, and reviews some issues and problems that this presents. Particular attention is given to the problems of recording high-contrast scenes in archaeology, and High Dynamic Range (HDR) imaging using multiple exposures is suggested as a means of providing an archive of high-contrast scenes that can later be tone-mapped to provide a variety of visualisations. Exposure fusion is also considered, although it is noted that this has some disadvantages.
Three case studies are then presented: (1) a very high-contrast photograph taken from within a rock-cut tomb at Cala Morell, Menorca; (2) an archaeological test-pitting exercise requiring rapid acquisition of photographic records in challenging circumstances; and (3) legacy material consisting of three differently exposed colour positive (slide) photographs of the same scene. In each case, HDR methods are shown to significantly aid the generation of a high-quality illustrative record photograph, and it is concluded that HDR imaging could serve an effective role in archaeological photographic recording, although there remain problems of archiving and distributing HDR radiance map data.

Keywords: archaeology; photography; HDR; recording

Archaeological record photography

Photography has been a fundamental part of archaeological recording for well over half a century. By the 1950s, for example, Cookson published a dedicated guide to archaeological photography in which he reports that 'one cannot imagine any archaeologist today cutting even the simplest trial trench without photographing it from three or four positions' (1954:11). The aim of record photography is to capture as much of the variation in texture and colour within a scene or surface as possible. Where this is part of an excavation recording process, the aim is to create a record that complements the other primary records, such as plan and section drawings, and which may subsequently be used to effectively illustrate reports. Considerable attention has been given over the years to the technical detail of how this can best be achieved (BAJR 2006; Cookson 1954; Conlon 1973; Dorrell 1989; Fischer 2009a, b; Howell and Blanc 1992; Schlitz 2007). In recent years digital photography has almost completely replaced film-based photography in many contexts and has begun to be used as a recording tool in archaeology.
With a few exceptions, however (see for example Woolliscroft 2010), archaeology has not yet given serious consideration to the methodological implications of this change. At a trivial but practical level, digital photography provides the ability to check results in the field and to record large numbers of images without changing or wasting film, whereas film produces a relatively robust physical result for archiving and does not require batteries. More profoundly, though, images produced by digital sensors differ from film-based photographs in many ways. Most significantly, there are differences in archival properties, and in the quality of images that can be obtained in terms of both resolution (acuity) and dynamic range. For detailed information on the capabilities of digital sensors see, for example, Clark (2010). Although archiving of digital collections of images has been discussed for at least fifteen years (e.g. Ester 1996), there remains a deep conservatism within archaeology that manifests itself in concerns over the most appropriate formats and metadata standards for archiving digital images, and this may be one reason why archaeology has been reluctant to adopt digital photography as a replacement for film. The Archaeological Archives Forum Guide to Best Practice in Field Archaeology notes that digital photographs are increasingly used, but that there needs to be a 'clearly established procedure for long-term preservation' and states that 'black and white film processed to BS5699 is the archival ideal' (Brown 2007: 13), and this view is reflected by many museums and archives, who will not yet accept digital images as archival records. Issues of archiving digital material, including images, are largely outside the remit of this paper (but see e.g.
Digital Preservation Coalition 2010; Hunter 2000; Kenney and Rieger 2000; Marty 2009; Parry 1998; Richards and Robinson 2000), but it is interesting to note that, despite advice from JISC Digital Media, formerly TASI (JISC 2010), practitioners (e.g. Andrews et al. 2006) and aerial photography specialists (e.g. Verhoeven 2010), all of whom outline the advantages of camera RAW data and the open – albeit proprietary – Adobe Digital Negative format, at the time of writing the Archaeology Data Service (ADS) expects deposition of digital photographs in TIF format and currently has no policy on archiving of RAW images in any format. This is, arguably, the digital equivalent of a physical archive refusing to accept negatives, only prints. Ultimately, however, this may be of little relevance to the adoption of digital photography as a recording method by many fieldworkers. Indeed, it seems increasingly likely that archaeology may have no choice but to 'go digital' for primary recording, at least for colour photographs. In 2009, for example, Kodak announced that they were ceasing production of Kodachrome slide film (Topping 2009), so that when existing stocks are exhausted it will no longer be available for colour recording. Like vinyl records, black and white photography is unlikely to suffer the same fate immediately because there is sufficient enthusiasm for its particular characteristics, but in general it seems very probable that 'born digital' photographs will make up the majority of archaeological record images in the near future. Fortunately, the issue of image quality is rather more tractable, and may even offer positive benefits to archaeology rather than solely present us with problems. The resolution of digital photography is of decreasing concern because, while early digital sensors did not offer recording resolution close to film, most modern digital cameras in use for archaeological recording offer ample resolution.
There is, however, still a problem with the more limited dynamic range that most electronic sensors offer compared with, in particular, black and white film. Additionally, many photographers find the response curves of digital sensors, which typically have a rather abrupt gradation in the highly saturated (bright) regions of the image, rather less useable than the more gradual transition to zero density (the 'shoulder') found in the response curves of most film and photographic papers.

Dynamic range in archaeological photographs

Dynamic range is of particular concern in this context because the majority of archaeological field recording takes place out of doors, where it can be difficult to control the lighting in any given scene. If the contrast in a scene is too great – where direct sunlight falls on part of a surface while another part is in shadow, for example – it can be almost impossible to create a single photograph that retains sufficient detail in all areas of the scene. Dorrell explains this problem very clearly: 'The factor that can make or mar the effectiveness of site photographs is the strength and direction of the natural light ... Direct strong sunlight, particularly if it falls across the scene diagonally, is probably the worst possible lighting. Not only will it raise the contrast to unacceptable levels, giving solid black shadows, or burnt-out highlights, or both, but the pattern of light and shade may form an outline of black stripes and masses far more obvious to the eye than the shape of the walls themselves' (1994: 127). Archaeological photography presents many situations in which the contrast can be extremely high. Almost any photograph that attempts to show the interior of a cave, chambered tomb or other unlit structure as well as the view through the entrance or window will likely fall into this category.
Typically a digital camera sensor may have the ability to register a contrast ratio of around 1,000:1, while the scene to be recorded may exceed 70,000:1 in some circumstances (Fig 1). The task of the photographer is to adjust either the time of the exposure (normally using the camera's shutter speed) or the intensity of light permitted to fall on the sensor (normally using the lens's aperture) so that the range of reflected light intensities in the scene falls within the dynamic range of the sensor. The problem with high contrast scenes is that photographic films and digital sensors are only able to respond to a limited range of radiances, and the range of reflected light intensities within a high contrast scene quite often exceeds that range – a problem that is most obvious with colour transparency films, which have notoriously limited dynamic range. If the photographer (or camera metering system) chooses an exposure so that the luminance values within shaded areas of the scene correspond to the available dynamic range of the sensor, then regions in direct sunlight will exceed the sensor's saturation point and will be recorded as uniform maximum value. Conversely, if the photographer chooses an exposure for which the sunlit parts of the scene are correctly mapped to the dynamic range of the sensor, then much of the shaded area will fall below the minimum sensitivity of the film or sensor, where it will be recorded as uniform black (or the signal-to-noise ratio becomes unacceptably low). There may still be an 'optimal' exposure for the scene, which can be estimated using either the automatic metering system in the camera or a separate light meter and some method such as the 'zone' system advocated by Adams (1970), but 'optimal' in this sense means only that the loss of information at either end of the dynamic range is minimised, not eliminated altogether.
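The mismatch can be made concrete by expressing contrast ratios as photographic stops (each stop being a doubling of luminance). The short sketch below uses the ratios quoted above; the function name is purely illustrative:

```python
import math

def contrast_ratio_to_stops(ratio):
    """Express a contrast ratio as photographic stops (doublings of light)."""
    return math.log2(ratio)

scene_stops = contrast_ratio_to_stops(70_000)   # high-contrast scene: ~16.1 stops
sensor_stops = contrast_ratio_to_stops(1_000)   # typical sensor: ~10 stops
shortfall = scene_stops - sensor_stops          # ~6 stops of detail must be clipped
```

On these figures a single exposure must discard around six stops of scene information, whichever end of the range the photographer chooses to sacrifice.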
We tend not to notice this problem when we look at a scene, because the human visual perception system does not work like a camera. Although there are some well documented similarities between human eyes and cameras, the human visual perceptual system as a whole consists of far more than a single 'camera' generating a single static image. Our eyes can move around the different areas of the scene while changing their sensitivity (by contracting the pupil), which allows simultaneous perception of luminance levels that vary over a range of 3.7 log units (Kunkel and Reinhard 2010). The sensitivity of the human visual system is also adjusted in other ways, such as the bleaching of photopigment in the rod and cone cells and adaptation of photoreceptor mechanisms (Reinhard et al. 2010: 243-251). Although these latter are far less rapid, taking from a few seconds to a few minutes to fully adjust, they allow the human visual system to be sensitive across up to 10 orders of magnitude in total (Fig 1). Digital imaging systems typically respond to variations of not much more than 2 orders of magnitude (Reinhard et al. 2010: 4-5). The problem is very familiar to both amateur and professional photographers, and also to archaeological fieldworkers, and it is therefore unsurprising that there are a number of established ways to mitigate it. Simply avoiding situations in which lighting is from the side and casting strong shadows, by ensuring that photographs are taken with the sun behind the camera, is one approach, but it is generally rather undesirable as 'this would mean that any surface facing the camera would be in direct, straight-on, light, and its surface texture would therefore be lost' (Dorrell 1994: 127). In a typical archaeological situation, mitigation usually involves reducing the difference in luminance between regions in the scene to make it easier to 'map' the scene to the available dynamic range of the sensor or film.
This can be achieved by reducing the absolute luminance of those areas in direct sunlight by shading them with – variously – tarpaulins, gazebos, vehicles or even students. Alternatively, regions in shadow can be 'filled' with additional light so as to raise their luminance, a task most easily achieved using either a reflector or electronic flash. Both of these approaches can be highly effective, but neither is without disadvantages: shading can be both time consuming and labour intensive (and is sometimes impossible due to the height or position of the sun), while few field archaeologists have both the expertise and the equipment to deploy effective fill lighting. Partly for these reasons, archaeologists have also long recognised the need to 'bracket' record photographs. This means that the photographer establishes the 'optimal' exposure as above and takes one photograph using these values. They then also record at least one additional image that is intentionally over-exposed, and one that is under-exposed, to make it more likely that both bright and dark areas are correctly exposed in at least one image. Although these are sometimes discarded, there is a significant legacy archive of archaeological record photographs recorded in this way.

High Dynamic Range (HDR) imaging

High dynamic range photography is an increasingly popular branch of photography and a very active area of research that deals with the recording and representation of scenes with extended dynamic ranges – in other words, high contrast scenes such as those described above. Although it is possible to foresee cameras that record high dynamic range information directly, the most popular approach is currently based on that of Debevec and Malik (2008), who described a method of using multiple images taken at different exposures to create a single 'High Dynamic Range radiance map'.
A full description of the method is beyond the scope of this paper, but in summary it involves utilising two or more images of the same scene taken in the same lighting conditions to reconstruct the 'response function' of the imaging process. Alternative methods of deriving the response function do exist, such as that proposed by Mitsunaga and Nayar (1999), and a full treatment of the derivation of the response function and of the various other processing steps that may be needed – such as noise removal, image alignment, control of lens flare and automatic removal of 'ghost' elements, which arise when an object does not appear in all of the source images or moves between exposures – can be found in Reinhard et al. (2010: 171-197). The response function is then used to combine multiple exposures into a single High Dynamic Range (HDR) 'radiance map' whose pixel values are proportional to the original luminance values in the scene, rather than (as in a conventional photograph) the result of some non-linear function.

Two implications of the methodology may be relevant in archaeological photography. Firstly, the approach uses the conventional photographic assumption of reciprocity: the property of a film or sensor that ensures that there is a reciprocal equivalence between the time of exposure and the intensity of illumination – in other words, if we double the exposure time and halve the intensity of the light falling on it, then the sensor or film will provide the same response. The second assumption is that the images used to reconstruct the response function are recorded in the same lighting conditions. The first of these presents no significant problems for archaeological situations, as reciprocity can reasonably be assumed to hold within the range of exposure times and intensities (expressed as aperture stops or f-stops) that would be used in archaeological recording.
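For data that have already been linearised (decoded RAW values, say, where the response function is effectively the identity), the combination step reduces to a weighted average of each pixel's value divided by its exposure time. The sketch below illustrates that principle only; it is not the full published method, which must first recover the response function from the images themselves:

```python
import numpy as np

def merge_hdr(images, exposure_times):
    """Merge linearised exposures (pixel values in [0, 1]) into a radiance map.

    A sketch of the Debevec-Malik style combination under the assumption of a
    linear response: each pixel's radiance estimate is a weighted average of
    value/exposure_time, with a 'hat' weight that distrusts pixels near the
    under- or over-exposed extremes.
    """
    num = np.zeros_like(images[0], dtype=np.float64)
    den = np.zeros_like(images[0], dtype=np.float64)
    for img, t in zip(images, exposure_times):
        w = 1.0 - np.abs(2.0 * img - 1.0)   # peaks at mid-grey, zero at 0 and 1
        num += w * img / t
        den += w
    return num / np.maximum(den, 1e-9)
```

Dividing by the exposure time is only meaningful because of the reciprocity assumption described above: halving the time is treated as exactly equivalent to halving the intensity.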
Reciprocity failure would only become relevant were we to require exposure times of more than around 30 seconds, or to encounter intensities of light well beyond those we would expect; at these extremes the method may not produce reliable results (although see the discussion of 'exposure fusion' below). The second assumption is of more practical concern, as it is not uncommon – in Britain at least – for lighting conditions to change quite rapidly during field recording. It follows that care should be taken to record the various exposures quickly and, where possible, to choose periods of relatively stable light. In practice, this means taking the series of images in quick succession, perhaps taking advantage of modern digital cameras' automated bracketing functions, and/or waiting for a large cloud, or a large gap in the clouds, to pass across the sun. Software to calculate the response function and/or to combine source images into HDR radiance maps is readily available and is widely used within the photographic community to enable photographers to record high contrast scenes and to obtain interesting pictorial effects. Various methods can be used to 'tone map' the values in the HDR file to the low dynamic range (LDR) of conventional monitors, output devices and colour spaces.

Tone mapping

The HDR radiance map is obviously of interest as a useful record of the illumination of a scene, but clearly it does not contain values that fall within the LDR gamut of an output device such as a monitor or printer, and so the data cannot be viewed directly. In order to view an HDR radiance map, therefore, some processing or tone mapping needs to take place between the radiance values in the HDR file and the LDR of an output device. There are a very wide variety of methods and approaches that can be used for tone mapping, many of which are described in Reinhard et al. (2010).
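As an illustration of the simplest kind of global operator, the sigmoidal curve of Reinhard et al. maps unbounded radiance values into the unit interval before display-gamma encoding. This sketch is purely illustrative and is not the algorithm used by any particular package:

```python
import numpy as np

def tone_compress(radiance, gamma=2.2):
    """Global tone compression: map radiance in [0, inf) into [0, 1).

    Uses the sigmoidal L/(1+L) operator (Reinhard et al.) followed by a
    display gamma -- an illustrative stand-in for the 'tone compression'
    family of methods, not a reproduction of any specific software.
    """
    l = radiance / (1.0 + radiance)        # bright values saturate softly, never clip
    return l ** (1.0 / gamma)              # encode for a conventional display
```

However extreme the input radiance, the output stays below 1.0, which is precisely what makes such curves attractive for squeezing an HDR radiance map into an LDR gamut.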
Often these take the human visual system as a model, and attempt to generate representations of the full (HDR) luminance scale in such a way as to mimic the response of a human observer in some form. Some approaches compress or select ranges from the HDR histogram, which can then be mapped through some linear or – more commonly – sigmoidal function to the range of values in the output gamut. Broadly, these can be referred to as 'tone compression' methods, but they can often lead to a final image with poor overall contrast, or one in which parts of the dynamic range in the HDR radiance map are poorly represented. For these reasons, region-based and adaptive local contrast operations are often used for tone mapping (referred to below as 'detail enhancement' methods), as these are capable of producing outputs that feature good local contrast, while also retaining the local variations in luminance that best evoke the texture and tonality of the original scene.

Alternatives to High Dynamic Range methods

HDR is a physically-based method for combining information from several exposures. Where the HDR radiance map is derived using the method of Debevec and Malik (2008), it requires that the relative exposure values (EV) for each image are known, or at least can be estimated (see the Avebury Cove example below). Where the HDR radiance map is not required, it may be more efficient to use alternative one-step procedures for combining multiple-exposure sequences. Manual and 'ad hoc' methods for doing this have always existed, such as the 'sandwiching' of negatives together to make single prints, and the use of image processing software to combine images in similar ways. Automated or semi-automated methods for doing this are referred to as 'exposure fusion' (Mertens et al. 2009; Reinhard et al. 2010: 400-404).
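In simplified form, exposure fusion can be illustrated as a per-pixel weighted average that favours well-exposed pixels. The sketch below (greyscale only, with hypothetical function names) follows the spirit of Mertens et al.; the published method also uses contrast and saturation measures and blends with multiresolution pyramids to avoid visible seams:

```python
import numpy as np

def fuse_exposures(images, sigma=0.2):
    """Naive exposure fusion for greyscale images with values in [0, 1].

    Each pixel is weighted by a 'well-exposedness' measure (a Gaussian
    centred on mid-grey), so pixels near black or white contribute little.
    A simplified illustration of the Mertens et al. idea, not the full method.
    """
    weights = [np.exp(-((img - 0.5) ** 2) / (2 * sigma ** 2)) for img in images]
    total = np.maximum(sum(weights), 1e-9)
    return sum(w * img for w, img in zip(weights, images)) / total
```

Note that nothing here requires exposure times or EV values: the result is driven entirely by the pixel values themselves, which is why fusion suits legacy material with unrecorded camera settings.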
Exposure fusion methods usually rely on computing perceptual quality measures for each pixel, and combining those areas of each image that produce the 'best' results to form the final image. They have the advantage of being computationally less intensive, because there is no need to deduce a physically-based radiance function, and they do not require that we know the relative EV of each exposure in advance. This may be particularly relevant to the potential use of legacy archaeological material, which may consist of sequences of photographic negatives or slides whose shutter speeds and apertures have not been recorded; for this reason exposure fusion was used, and is illustrated, in the Avebury Cove example below. One minor disadvantage of this approach is that it 'cannot extend the dynamic range of the original picture' (Mertens et al. 2009: 161), but a more significant disadvantage in the context of archaeological recording may be that it negates one of the long-term potential benefits of HDR, because all decisions as to how to make the final image are made by the publisher/creator and not – as is possible if HDR radiance maps are archived or distributed – by the researcher/reader. The real potential for HDR imaging may ultimately lie in the publication and distribution of HDR radiance maps themselves. Although they do not contain information that can be viewed directly, it should be possible to distribute viewers (as standalone software or browser plugins) that perform tone mapping 'on the fly'. In this way, HDR images might be useable in archaeology in a similar way to 'bubbleworld' methods such as QuickTime VR (Jeffrey 2001) or Polynomial Texture Maps (Earl et al. 2010a; Earl et al.
2010b), in that the researcher or reader is permitted to determine the optimum tone mapping parameters just as they determine the view position in the case of QTVR, and the lighting conditions for Polynomial Texture Maps. In short, it is difficult to entirely predict in advance what aspects of the image a future researcher/reader will wish to explore, and so it would be better to create and distribute HDR radiance maps themselves where possible, so as to permit them to manipulate the final image appearance to suit their own interests and needs.

The potential for HDR in archaeological recording

The potential benefits to archaeological field recording of this approach should be fairly clear by now. The raw data to construct HDR radiance maps is relatively easy to obtain: it requires only that multiple exposures are made of the scene that is being recorded, which most archaeologists already do, and it complements rather than replaces existing photographic recording methods. The immediate benefit is that it permits a far wider range of textures and colours to be accurately recorded and represented than is currently possible. We might also expect to achieve significantly improved recording outcomes where we have high contrast scenes but are unable to employ – for whatever reason – the conventional mitigation strategies described above. Moreover, the method should be applicable to a significant body of legacy photographic material: we may be able to use this approach to generate HDR radiance maps, and hence significantly more useful photographs, from existing bracketed slide and film records that exist within physical archaeological archives. Three examples are presented here to explore the potential of HDR imaging methods in archaeological recording.
The first (Cala Morell) illustrates what has become the conventional application of HDR methods to a typically high contrast scene; the second (Itchen Abbas) is intended to illustrate how HDR recording can be incorporated into a 'born digital' approach to site photography; while the final example (Avebury Cove) explores whether HDR methods may be used with 'legacy' photographic materials to provide added value. In all the case studies below, generation of HDR radiance maps and subsequent tone mapping was done using Photomatix Pro v3.0 (HDR 2010). Although proprietary, this is a relatively inexpensive software solution that is widely used within the wider photographic community, and it provided all the functionality required for these examples. It allows for automatic batch processing of large numbers of source images, which proved extremely useful for the Itchen Abbas case study below. It also has an exceptional level of functionality for adjusting and manipulating the tone-mapping process, although this is not fully explored in these simple examples. Many other software solutions are available both for the generation of HDR radiance maps and for tone mapping, and these include both commercial and Open Source platforms.

Cala Morell – a very high contrast photograph

The requirement in this example was to record the interior of one of the rock-cut tombs at Cala Morell, Menorca, with a doorway through which direct sunlight was visible. This presents a photographic problem for which HDR has been widely applied and which is common in archaeology. In this example, a wide angle shift lens was used on the camera.
To ensure that the full dynamic range of the scene was represented, seven exposures were made with a digital camera mounted on a tripod at 2 EV intervals, centred on the exposure recommended by the camera's metering system (Figure 2). Although RAW images were recorded (shown here using settings as shot in the camera), it is clear that the recommended exposure provides acceptable detail in few areas of the image, and that none of the exposures provides an acceptable level of detail both within the cave and through the entrance doorway. The exposure at +2 EV provides an acceptable image of the interior of the cave – note the green algae growing on the walls, for example – but much of the detail of the surface texture of the floor is poorly rendered, the shape of the entrance and the floor inside it show no detail at all and – most obviously – the view through the doorway is also entirely 'burnt out'. Some additional detail in the original exposures could be recovered through careful processing of the RAW image data, but nowhere near enough to achieve an acceptable result. The region of the image seen through the doorway appears to need an exposure of some 7 or 8 EV less than the average in order to render acceptable detail. All seven images were used to generate an HDR radiance map, which also allowed the contrast ratio in the scene to be quantified as approximately 60,000:1 (estimated by the Photomatix software). The HDR image was then tone-mapped using the 'detail enhancement' option of the Photomatix software. Some experimentation with the parameters quickly enabled an acceptable image to be produced (Figure 2, bottom right) that rendered all areas of the scene within the gamut of a computer monitor and enabled adequate representation in print.
The image has a slightly strange tonality that is typical of images tone mapped using local contrast optimisation methods, although the overall colour balance appears reasonable. The resulting image is a useful record that illustrates well the various textures and colours of the scene. Unlike any of the source images, the tone-mapped image shows good textural detail on the ceiling, walls and floor of the cave. It also renders the floor in front of the door, the door jambs and the view through the entrance itself. Although the image appears slightly unnatural, it is arguably a better rendering of the perceptual experience of being inside the tomb, from where the human visual system is capable of adapting to the extreme luminance levels outside. Alternative renderings of the HDR radiance map are possible using different tone mapping parameters. It is worth noting that the entire HDR recording process took less than 2 minutes. The only other method of achieving an acceptable result would have been to use artificial lighting to raise the light level inside the cave, which would have been extremely difficult given the limited space available and the absence of mains electricity – it is unlikely that portable flash would have provided sufficient additional light. It would also have been both time consuming and expensive.

Itchen Abbas – 'born digital' photography for HDR

For the second example, field recording was undertaken with the prior intention of making HDR radiance maps from the images.
The University of Southampton field school at Itchen Abbas, Winchester, was used for this purpose, as the problems faced were typical of archaeological photographic recording: weather conditions varied from overcast (occasionally rainy) to bright sunlight, and the contrast of some images was also significantly increased where test pits revealed white chalk features against dark red/brown clays. Reinhard et al. (2010: 182) provide guidelines for successfully recording HDR images, and these proved straightforward enough to incorporate into a conventional archaeological recording workflow. Record photographs were taken using a high-quality digital SLR camera mounted on a tripod, and for each record photograph a sequence of images was taken at fixed EV intervals. The ISO setting was fixed at 200, the white balance of the camera was fixed prior to the excavation to a suitable value, and a standard Gretag/Xrite colour chart was included in each image to enable identification of the photograph in the written registers and to provide colour calibration information. The process was therefore almost unchanged from a conventional archaeological recording workflow in which exposures would be 'bracketed' for safety. The only significant change was that the 'bracketing' was always done using the camera's shutter speed, because altering the lens aperture has the effect of altering the depth of focus, and hence would complicate the process of combining images later; care was also taken not to alter ISO and white balance values. Photographs were recorded as RAW images, and – as is conventional – a blackboard with metadata (at least site code, test pit number and north arrow) was incorporated in each image.
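Shutter-speed-only bracketing of this kind is simple to reason about, because each EV step doubles or halves the exposure time while aperture and ISO stay fixed. A small illustrative sketch (the function name and defaults are hypothetical, not part of any camera interface):

```python
def bracket_shutter_speeds(base_s, frames=5, ev_step=2):
    """Shutter speeds (in seconds) for an EV bracket centred on the metered exposure.

    Each EV step doubles or halves the exposure time; aperture, ISO and
    white balance are left untouched, so depth of field and colour do not
    change between frames.
    """
    half = frames // 2
    return [base_s * 2 ** (ev_step * i) for i in range(-half, half + 1)]

# e.g. a metered 1/60 s gives [1/960, 1/240, 1/60, 1/15, 4/15] seconds,
# covering -4 EV to +4 EV around the metered value
```

The whole five-frame sequence spans eight stops, which is consistent with the decision reported below to move from one-EV to two-EV intervals for the higher contrast scenes.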
Some experimentation was required to establish the appropriate number of exposures and EV intervals to use (each EV step corresponds to a halving or doubling of the exposure time, or an increase or decrease of one aperture stop). Initially five images were recorded at an interval of one EV, but it was apparent that this failed to fully cover the dynamic range in some of the higher contrast scenes, so a general policy of recording five images at two EV steps was then adopted. This was a relatively small field project resulting in sixty-eight record photographs of test pits, which required 340 RAW images totalling 984 MB of data. Post excavation, each sequence of five RAW files was batch processed using Photomatix 3.0 Pro software to generate HDR radiance maps and tone-mapped images using both tone compression and detail enhancement methods. Because EV information is recorded as metadata within digital photographs, the generation of the HDR radiance map is entirely automated, and so batch processing of the images was relatively straightforward. In this case it took around six hours of processing for all 68 sets of five images using a personal computer. Many of the record images exhibited relatively low contrast, and in these examples a single RAW file was fully capable of rendering an acceptable record photograph. In several cases, the recommended exposure settings of the camera proved not to be suitable, and a published image would have been selected from one of the other nearby exposures. In around 12 of the 68 cases, however, the contrast in the scene was sufficiently high that some mitigation would have been required in the field to ensure an acceptable photograph. Figure 3 illustrates one of the higher contrast scenes, although not the most extreme. The photograph shows part of a rammed-chalk floor, partly excavated, adjacent to a dark red/brown clay.
Although an exposure slightly above the camera meter's recommendation would provide an acceptable result, either the areas of shadow or the stark white of the chalk floor surface would be difficult to render effectively without considerable post-processing of the image. The tone compression method (using only default values), shown as Image A in Figure 3, shows some improvement in this respect, with both the side of the cutting and the surface of the chalk floor exhibiting textural detail, but the detail enhancement method (shown as Image B, also using default values) is a considerable improvement, with good textural detail visible in all areas of the photograph. As with the previous example, the colour and lighting appear slightly unnatural and – for publication purposes – it would probably benefit from experimentation with the tone mapping parameters to achieve a more natural result. As a final point of interest, Image C in Figure 3 shows an automated exposure fusion, which has also produced a record image that is very comparable to the results obtained using HDR-based methods.

Avebury Cove – using legacy photography for HDR

The final example concerns the possibility of recovering HDR radiance maps from legacy archaeological photography. Four colour slides were selected from a record made of excavations at Avebury Cove in 2003 (Pollard 2008). These had been taken as a bracketed sequence of images using a tripod in difficult lighting conditions – vertically downwards into a fairly deep cutting adjacent to one of the stones of the cove. The range of dynamic variation had proved too great for the slide film, and use of any of the original images for illustration purposes would have represented a compromise between good representation of the base of the cutting (Figure 4, image 2) or the sides of the cutting (Figure 4, image 3). The slides were scanned using a film scanner, resulting in the digital images shown as images 1 to 4 in Figure 4.
The EV settings of the sequence were not recorded, and so some experimentation was required in order to achieve a plausible reconstruction of the HDR radiance map. The procedure adopted here was simply to try a reconstruction with a particular set of values, examine the HDR radiance map, and reject it if the result appeared incorrect in the HDR viewer or the histogram exhibited odd effects. After a few attempts, a sequence of relative EV estimates was arrived at that provided an apparently reasonable reconstruction of the HDR radiance map. One further complication arises with the use of legacy material in that, although these slides had been photographed using a tripod, the scanned images were not perfectly registered (this can be seen in the positions of the black frames surrounding the images of Figure 4), and so it was necessary for the software to perform an additional image alignment step that required re-sampling of the images, resulting in some slight loss of sharpness. The image shown as Figure 4, image A was then tone mapped from the HDR radiance map using the detail enhancement method. It should be clear that it provides a considerable improvement in the level of textural detail that is represented, permitting a single effective illustration of both the base of the cutting and the sides (which include objects in the section on the left of the image, and a brick wall on the right). Figure 4, image B was created directly from the four scanned slides using exposure fusion. This was a considerably quicker process that did not require any estimation of the relative EV stages, and it should be clear that the result is in many ways equally as effective as the tone-mapped image A.

Conclusions and future considerations

The brief experiments reported here suggest that HDR approaches could be of considerable use to archaeological field recording.
Although the examples here are all of colour images, the methodology is equally applicable to digital monochrome photography and to ultraviolet or infrared imaging – which is becoming increasingly available through modification of conventional digital cameras (see e.g. Verhoeven 2008, Verhoeven and Schmitt 2010) – and so further archaeological applications in areas such as rock art recording are more than likely. In future, it is quite possible that the design of digital sensors will be improved in various ways to extend their dynamic range, and cameras may become more widely available that are capable of producing a proportional response to the entire range of luminance values that may be encountered in any photographic situation. In this case, the capture of HDR radiance maps may be possible directly, rather than requiring the processing of several differently exposed images. Similarly, the availability and development of monitors capable of displaying a wider dynamic range is also likely to improve in coming years (Seetzen et al. 2003). Good quality conventional LCD displays are capable of a contrast ratio of around 2,000:1, but this is an area of considerable commercial development (because it impacts on television and home cinema products) and monitors with a contrast ratio of around 200,000:1 (capable of rendering luminance levels from 0.015 to 3000 cd/m²) have been demonstrated. These should ultimately enable a far wider range of luminance levels to be experienced than is currently possible, and their potential widespread availability is a further argument in favour of recording and using HDR radiance maps as part of archaeological archives.
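To put the display figures just quoted into photographic terms, a contrast ratio converts directly into stops (factors of two) and orders of magnitude (factors of ten). A minimal sketch of this arithmetic, with an illustrative function name:

```python
import math

def contrast_stats(l_min, l_max):
    """Express a luminance range as a contrast ratio, photographic
    stops (factors of two) and orders of magnitude (factors of ten)."""
    ratio = l_max / l_min
    return ratio, math.log2(ratio), math.log10(ratio)

# the prototype HDR display figures quoted above: 0.015 to 3000 cd/m^2
ratio, stops, decades = contrast_stats(0.015, 3000)
```

The 200,000:1 display thus corresponds to a little under eighteen stops (about 5.3 orders of magnitude), against roughly eleven stops for a 2,000:1 LCD panel.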
If it is appropriate to store HDR radiance maps as components of archaeological archives, then decisions need to be taken with respect to which formats are the most appropriate and what metadata standards – and indeed content – should be archived. There are several candidate ‘open’ formats that could be used for archiving HDR radiance data including Radiance (.hdr) format and OpenEXR (.exr), although both involve some loss of precision. Floating-point TIF may be the best option in the short term, although this may require as much as 96 bits per pixel resulting in very large files. As intimated above, however, there are similarities here with the need to archive Polynomial Texture Map data and even QTVR-type image data for use in ‘bubbleworld’ virtual reality viewers because – in all these cases – the image depends not only on the archived data but also on the viewer used to interpret that data. Because of this, it would seem prudent to establish new archival policies that permit a far wider range of image data, including all of these, to be deposited and documented. This paper has suggested that there could be a role in future for HDR-based photography in archaeological recording. The case studies each provide evidence that there are some benefits to be gained from post-processing record photographs (both ‘born digital’ and legacy) to recover the HDR radiance maps. The range of benefits achieved is clearly proportional to the contrast in the original scene, with the first case study providing the clearest benefit. The Itchen Abbas case study suggests that the benefits in many cases may not be significant, although in a few high-contrast scenes the use of HDR is a perfectly adequate alternative to the traditional approach of mitigating contrast using either shading or fill lighting. 
Although it may provide little benefit for low contrast scenes, the additional cost – both of capture in the field and of post-processing the results – is surprisingly modest, and so incorporating an 'HDR-friendly' recording strategy into an existing workflow may be regarded as worthwhile for the minority of scenes that would significantly benefit. The Avebury Cove example also demonstrates that this approach can achieve useful results on existing legacy material, although this may prove more difficult to process. Clearly further exploration of a wider range of legacy material, including different film types, may provide more reliable conclusions about the future potential.

Acknowledgements

A number of colleagues have commented on this manuscript and I am particularly grateful to Graeme Earl and Tom Goskar for their useful input. Thanks are also due to Geraldine Joffre of the Photomatix Engineering team and to the Southampton University students and supervisors for their interest in and help with the HDR recording experiments during the Itchen Abbas excavation project.

References

Adams A (1970) Basic photo, 1st revision edn. Morgan & Morgan, Hastings-on-Hudson, N.Y.
Andrews P, Butler Y, Farace J (2006) Raw workflow from capture to archives: a complete digital photographer's guide to raw imaging. Focal, Oxford
BAJR (2006) Short guide to digital photography in archaeology. BAJR Guide. British Archaeological Jobs Resource
Brown DH (2007) Archaeological archives: a guide to best practice in creation, compilation, transfer and curation. Institute of Field Archaeologists, London
Clark RN (2010) Digital camera sensor performance summary (http://www.clarkvision.com/articles/digital.sensor.performance.summary/). Retrieved 15/10/2010
Conlon VM (1973) Camera techniques in archaeology. J. Baker, London
Cookson MB (1954) Photography for archaeologists. Parrish, London
Debevec P, Malik J (2008) Recovering high dynamic range radiance maps from photographs. SIGGRAPH '08: ACM SIGGRAPH 2008 classes. ACM, Los Angeles, California, pp 1-10
Digital Preservation Coalition (2010) Digital Preservation Coalition web site (http://www.dpconline.org/). Retrieved 15/10/2010
Dorrell PG (1989) Photography in archaeology and conservation. Cambridge University Press, Cambridge; New York
Dorrell PG (1994) Photography in archaeology and conservation, 2nd edn. Cambridge University Press, Cambridge; New York
Earl G, Beale G, Martinez K, Pagi H (2010a) Polynomial texture mapping and related imaging technologies for the recording, analysis and presentation of archaeological materials. Commission V Symposium. International Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, Newcastle Upon Tyne, pp 218-223
Earl G, Martinez K, Malzbender T (2010b) Archaeological applications of polynomial texture mapping: analysis, conservation and representation. Journal of Archaeological Science 37
Ester M (1996) Digital image collections: issues and practice. Commission on Preservation and Access, Washington, DC
Fischer LJ (2009a) Photography for archaeologists. Part I: site specific. BAJR Practical Guides
Fischer LJ (2009b) Photography for archaeologists. Part II: artefact recording. BAJR Practical Guides
HDR (2010) HDR Photomatix website (http://www.hdrsoft.com/). Retrieved 15/10/2010
Howell CL, Blanc W (1992) A practical guide to archaeological photography. Institute of Archaeology, University of California, Los Angeles
Hunter GS (2000) Preserving digital information: a how-to-do-it manual. Neal-Schuman, New York; London
Jeffrey S (2001) A simple technique for visualising three dimensional models in landscape contexts. Internet Archaeology 10 (http://intarch.ac.uk/journal/issue10/jeffrey_index.html). Retrieved June 2010
JISC (2010) JISC Digital Media website (http://www.jiscdigitalmedia.ac.uk/). Retrieved 15/10/2010
Kenney AR, Rieger OY (2000) Moving theory into practice: digital imaging for libraries and archives. Research Libraries Group, Mountain View, Calif.
Kunkel T, Reinhard E (2010) A reassessment of the simultaneous dynamic range of the human visual system. ACM Symposium on Applied Perception in Graphics and Visualisation, Los Angeles
Marty PF (2009) Digital convergence - libraries, archives, and museums in the information age. Springer, New York
Mertens T, Kautz J, Van Reeth F (2009) Exposure fusion: a simple and practical alternative to high dynamic range photography. Computer Graphics Forum 28(1):161-171
Mitsunaga T, Nayar SK (1999) Radiometric self calibration. IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'99), Fort Collins, pp 374-380
Parry D (1998) Virtually new: creating the digital collection; a review of digitisation projects in local authority libraries and archives. Library and Information Commission
Pollard J (2008) Archaeological excavation in the World Heritage Site. In: Simmonds S (ed) Avebury World Heritage Site: values and voices. Kennet District Council, Devizes, pp 10-12
Reinhard E, Ward G, Pattanaik S, Debevec P, Heidrich W, Myszkowski K (2010) High dynamic range imaging: acquisition, display, and image-based lighting, 2nd edn. Morgan Kaufmann, Amsterdam; Boston
Richards JD, Robinson D (2000) Digital archives from excavation and fieldwork: a guide to good practice, 2nd edn. Oxbow, Oxford
Schlitz M (2007) Archaeological photography. In: Peres MR (ed) The Focal Encyclopedia of Photography: Digital Imaging, Theory and Applications, History and Science, 4th edn. Focal Press, Oxford
Seetzen H, Whitehead L, Ward G (2003) A high dynamic range display using low and high resolution modulators. Society for Information Display Symposium, pp 1450-1453
Topping A (2009) Mama, they've taken away my nice bright Kodachrome colours: Kodak ends production of film loved by photographers and remembered in Paul Simon song. The Guardian, Manchester (newspaper article)
Verhoeven G (2008) Imaging the invisible using modified digital still cameras for straightforward and low-cost archaeological near-infrared photography. Journal of Archaeological Science 35(12):3087-3100
Verhoeven G (2010) It's all about the format - unleashing the power of RAW aerial photography. International Journal of Remote Sensing 31(8):2009-2042
Woolliscroft D (2010) Thoughts on the suitability of digital photography for archaeological recording. IFA website (www.archaeologists.net/modules/icontent/). Institute for Archaeologists. Retrieved 15/10/2010

Figure 1. Comparison between the approximate sensitivity of the Human Visual System (HVS) and the typical sensitivity range of a digital camera, Low Dynamic Range (LDR) display and High Dynamic Range (HDR) display. The arrows show how the HVS can adapt its sensitivity within the range shown, while most cameras can be adjusted (by changing aperture and shutter speed settings) to at least the range shown, commonly considerably more. (Based on Kunkel and Reinhard 2010 with additions.)

Figure 2. The interior of one of the rock-cut tombs at the necropolis of Cala Morel (Menorca), in which the dynamic variation between the dark interior of the cave and the exterior presents a contrast ratio of around 60,000:1, far exceeding the available dynamic range of the camera sensor.
The top six smaller images and the bottom left larger image were taken at two EV stop intervals, and none contains sufficient variation to fully represent the scene. All seven exposures were used to construct an HDR radiance map from which the final image (bottom right) could then be mapped. This image was generated using Photomatix Pro 3.0 software using the detail enhancement method. (Photographs were taken with a Canon TS-E 17mm L lens using a Canon 5D MkII digital camera body, which has a very high quality 'full frame' (35mm) CMOS sensor.)

Figure 3. Five digital photographs of a test pit, recorded at two EV intervals (by altering shutter speed) during investigations at Itchen Abbas, Hampshire in June 2010. The five images were used to generate an HDR radiance map from which image A was derived using the tone compression method, and image B was derived using the detail enhancement method. Image C was generated directly from the five source images using exposure fusion. All images were taken as RAW images with a Canon 5D MkII camera; the colour calibration chart included was then used to create and apply a custom digital camera profile before the images were imported into Photomatix Pro software. Photographs were taken with a Canon 50mm 1.2L at f16 using a Canon 5D MkII digital camera body as in Figure 2. Default values were used for all other processing steps. As before, the histograms are included to illustrate the general shape of the distribution of values in each image and are not necessarily to the same scale.

Figure 4. The four images (numbered 1 to 4) were scanned from a sequence of bracketed Fujichrome colour slides. These are from excavations at the Cove, Avebury in 2003, but are fairly typical of 'legacy' photographic material that exists in many archaeological archives.
Specific exposures were not recorded for these slides, so some experimentation with different estimates of their relative exposures was used to generate a suitable HDR radiance map. Image A (bottom left) was generated from that radiance map using the detail enhancement method. Image B was generated directly from the four scans using exposure fusion, without the need to construct an HDR radiance map. Scanning was done with a Nikon 4000ED film scanner. The HDR radiance map was constructed using estimated relative EV values of +5, +4, 0 and -2 respectively for images 1 through 4. Tone mapping and exposure fusion images were generated in Photomatix Pro 3.0 software using 'default' settings. Histograms are shown to illustrate the general distribution of values, and are not necessarily to the same scale for each image.
JPEG2000: The Upcoming Still Image Compression Standard

A. N. Skodras (a)(*), C. A. Christopoulos (b) and T. Ebrahimi (c)

(a) Electronics Laboratory, University of Patras, GR-26110 Patras, Greece
(b) Media Lab, Ericsson Research, Ericsson Radio Systems AB, S-16480 Stockholm, Sweden
(c) Signal Processing Laboratory, EPFL, CH-1015 Lausanne, Switzerland

(*) Corresponding author. Fax: +30 61 997456. Email addresses: skodras@cti.gr (A. N. Skodras), charilaos.christopoulos@era.ericsson.se (C. A. Christopoulos), Touradj.Ebrahimi@epfl.ch (T. Ebrahimi)

Abstract: With the increasing use of multimedia technologies, image compression requires higher performance as well as new features. To address this need in the specific area of still image encoding, a new standard is currently being developed, the JPEG2000. It is not only intended to provide rate-distortion and subjective image quality performance superior to existing standards, but also to provide functionality that current standards can either not address efficiently or not address at all.

Keywords: JPEG, colour image compression, source coding, subband coding, wavelet transform

1. Introduction

Since the mid-80s, members from the International Telecommunication Union (ITU) and the International Organisation for Standardisation (ISO) have been working together to establish a joint international standard for the compression of continuous-tone (multilevel) still images, both greyscale and colour. This effort has been known as JPEG, the Joint Photographic Experts Group. (The "joint" in JPEG refers to the collaboration between ITU and ISO.) Officially, JPEG corresponds to the ISO/IEC international standard 10918-1, digital compression and coding of continuous-tone still images, or to the ITU-T Recommendation T.81. The text in both these ISO and ITU-T documents is identical.
JPEG became a draft international standard (DIS) in 1991 and an international standard (IS) in 1992 (Pennebaker and Mitchell, 1993). With the continual expansion of multimedia and Internet applications, the needs and requirements of the technologies used grew and evolved. In March 1997 a new call for contributions was launched for the development of a new standard for the compression of still images, the JPEG2000. This project, JTC1 1.29.14 (15444), was intended to create a new image coding system for different types of still images (bi-level, grey-level, colour, multi-component), with different characteristics (natural images, scientific, medical, remote sensing, text, rendered graphics, etc.), allowing different imaging models (client/server, real-time transmission, image library archival, limited buffer and bandwidth resources, etc.), preferably within a unified system. This coding system should provide low bit-rate operation with rate-distortion and subjective image quality performance superior to existing standards, without sacrificing performance at other points in the rate-distortion spectrum, while incorporating many contemporary features. The standardisation process, which is co-ordinated by the JTC1/SC29/WG1 of ISO/IEC, has already (as of May 2000) produced the Final Committee Draft (FCD) of the JPEG2000 Part I decoder (ISO/IEC, 2000). The IS is scheduled for December 2000. The JPEG2000 standard provides a set of features that are of vital importance to many high-end and emerging applications, by taking advantage of new technologies. It addresses areas where current standards fail to produce the best quality or performance, and provides capabilities to markets that currently do not use compression. The markets and applications better served by the JPEG2000 standard are Internet, colour facsimile, printing, scanning (consumer and pre-press), digital photography, remote sensing, mobile, medical imagery, digital libraries/archives and e-commerce.
Each application area imposes some requirements that the standard should fulfil (Requirements AHG, 1999). The main features that this standard possesses are: superior low bit-rate performance, continuous-tone and bi-level compression, lossless and lossy compression, progressive transmission by pixel accuracy and resolution, random codestream access and processing, and robustness to bit errors. In the present Letter the structure of the JPEG2000 standard is presented and performance comparisons are reported. The paper is organised in the following way: in Section 2 the architecture of the standard is described, and in Section 3 the multiple-component case is covered. The file format aspects and other interesting features of the standard, like region-of-interest coding, error resilience and scalability, are presented in Section 4. Finally, some performance comparisons are reported in Section 5 of the Letter. (JTC stands for Joint Technical Committee.)

2. Architecture of the Standard

The block diagram of the JPEG2000 encoder is illustrated in Fig. 1a. A discrete wavelet transform (DWT) is first applied to the source image data. The transform coefficients are then quantised and entropy coded, before forming the output codestream (bitstream). At the decoder (Fig. 1b), the codestream is first entropy decoded, dequantised and inverse discrete wavelet transformed, providing the reconstructed image data. It is worth mentioning that, unlike other coding schemes, JPEG2000 can be both lossy and lossless, depending on the wavelet transform and the quantisation applied. Before proceeding with the details of each block of Fig. 1, it should be mentioned that the standard works on image tiles. The term 'tiling' refers to the partition of the original (source) image into rectangular non-overlapping blocks (tiles), which are compressed independently, as though they were entirely distinct images (Fig. 2).
This is the strongest form of spatial partitioning, in that all operations, including component mixing, wavelet transform, quantisation and entropy coding, are performed independently on the different tiles of the image. All tiles have exactly the same dimensions, except perhaps those which abut the right and lower boundary of the image. Arbitrary tile sizes are allowed, up to and including the entire image (i.e. no tiles). Tiling reduces memory requirements and constitutes one of the methods for the efficient extraction of a region of the image. Prior to computation of the forward DWT on each tile, all samples of the image tile component are DC level shifted by subtracting the same quantity (i.e. the component depth) from each sample (Fig. 2).

2.1. The Wavelet Transform

Tile components are decomposed into different decomposition levels using a wavelet transform. These decomposition levels contain a number of subbands populated with coefficients that describe the horizontal and vertical spatial frequency characteristics of the original tile component planes (Fig. 2). The coefficients provide local frequency information. A decomposition level is related to the next decomposition level by spatial powers of two. Part I of the standard supports dyadic decomposition, since this appears to yield the best compression performance for natural images. To perform the forward DWT the standard uses a 1-D subband decomposition of a 1-D set of samples into low-pass samples, representing a downsampled low-resolution version of the original set, and high-pass samples, representing a downsampled residual version of the original set, needed for the perfect reconstruction of the original set from the low-pass set. In general, any user-supplied wavelet filter bank may be used. The DWT can be irreversible or reversible.
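As a concrete illustration of the reversible path, the 5/3 lifting steps of the standard (prediction of the odd samples followed by update of the even samples, given as eq. (1) below) can be sketched in a few lines. This is an illustrative 1-D fragment with whole-sample symmetric extension at the borders, not the normative codec implementation; the function name is a convenience of this sketch.

```python
def dwt53_forward(x):
    """One level of the reversible 5/3 lifting transform on a 1-D
    integer signal, using whole-sample symmetric extension.
    Returns the (low-pass, high-pass) subbands as integer lists."""
    n = len(x)

    def ext(seq, i):
        # symmetric extension: reflect indices about the two edges
        if i < 0:
            i = -i
        if i >= n:
            i = 2 * (n - 1) - i
        return seq[i]

    y = list(x)
    # predict step: odd samples become high-pass coefficients
    for i in range(1, n, 2):
        y[i] = x[i] - (ext(x, i - 1) + ext(x, i + 1)) // 2
    # update step: even samples become low-pass coefficients
    for i in range(0, n, 2):
        y[i] = x[i] + (ext(y, i - 1) + ext(y, i + 1) + 2) // 4
    return y[0::2], y[1::2]
```

Because every step uses only integer additions and floor divisions, the transform is exactly invertible by running the two lifting steps in reverse order with the signs flipped, which is what makes lossless coding possible.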
The default (Part I) irreversible transform is implemented by means of the Daubechies 9-tap/7-tap filter (Antonini et al., 1992). The analysis and the corresponding synthesis filter coefficients are given in Table I. The default (Part I) reversible transformation is implemented by means of the 5-tap/3-tap filter, the coefficients of which are given in Table II (Le Gall and Tabatabai, 1988). The standard supports two filtering modes: a convolution-based one and a lifting-based one. For both modes to be implemented, the signal should first be extended periodically, as shown in Fig. 3. This periodic symmetric extension is used to ensure that, for the filtering operations that take place at both boundaries of the signal, one signal sample exists and spatially corresponds to each coefficient of the filter mask. The number of additional samples required at the boundaries of the signal is therefore filter-length dependent (ISO/IEC, 2000). Convolution-based filtering consists in performing a series of dot products between the two filter masks and the extended 1-D signal. Lifting-based filtering consists of a sequence of very simple filtering operations for which alternately odd sample values of the signal are updated with a weighted sum of even sample values, and even sample values are updated with a weighted sum of odd sample values (ISO/IEC, 2000; Calderbank et al., 1997; Kovacevic and Sweldens, 2000). For the reversible (lossless) case the results are rounded to integer values. The lifting-based filtering for the 5/3 analysis filter is achieved by means of eq. (1):

y(2n + 1) = xext(2n + 1) − ⌊(xext(2n) + xext(2n + 2)) / 2⌋
y(2n) = xext(2n) + ⌊(y(2n − 1) + y(2n + 1) + 2) / 4⌋    (1)

where xext is the extended input signal, y is the output signal and ⌊a⌋ indicates the largest integer not exceeding a. (SC, WG and IEC stand for Standing Committee, Working Group and International Electrotechnical Commission, respectively.)

2.2. Quantisation

Quantisation is the process by which the transform coefficients are reduced in precision. This operation is lossy, unless the quantisation step is 1 and the coefficients are integers, as produced by the reversible integer 5/3 wavelet. Each of the transform coefficients ab(u,v) of the subband b is quantised to the value qb(u,v) (i.e. scalar quantisation) according to the formula:

qb(u, v) = sign(ab(u, v)) ⌊|ab(u, v)| / Δb⌋    (2)

where Δb is the quantisation step of subband b. One quantisation step per subband is allowed. All quantised transform coefficients are signed values, even when the original components are unsigned. These coefficients are expressed in a sign-magnitude representation prior to coding.

2.3. Entropy Coding

Each subband of the wavelet decomposition is divided up into rectangular blocks, called code-blocks, which are coded independently using arithmetic coding. This approach, called EBCOT (embedded block coding with optimised truncation), was introduced in 1998 (Taubman, 1998; Taubman, 2000). Such a partitioning reduces memory requirements in both hardware and software implementations and provides a certain degree of spatial random access to the bitstream. The block size is identical for all subbands, so that blocks in lower resolution subbands span a larger region in the original image. A neighbourhood of spatially consistent code-blocks from each subband at a given resolution level forms larger rectangles, called precincts. Code-blocks are coded a bit-plane at a time, starting with the most significant bit-plane containing a non-zero element and ending with the least significant bit-plane. For each bit-plane in a code-block, a special code-block scan pattern is used for each of the three passes, i.e. the significance propagation pass, the magnitude refinement pass and the clean-up pass (Marcellin et al., 2000). Each coefficient bit in the bit-plane is coded in only one of the three passes. A rate-distortion optimisation method is used to allocate a certain number of bits to each block.
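Returning briefly to the quantisation step of eq. (2) in Section 2.2, the dead-zone scalar quantiser and a typical decoder-side reconstruction can be sketched as follows. The mid-point bias r = 0.5 in the reconstruction is a common decoder choice used here for illustration, not a value mandated by this description.

```python
import math

def quantise(a, delta):
    """Dead-zone scalar quantiser of eq. (2): the magnitude is divided
    by the subband step size, truncated, and the sign re-applied."""
    sign = -1 if a < 0 else 1
    return sign * math.floor(abs(a) / delta)

def dequantise(q, delta, r=0.5):
    """Decoder-side reconstruction: place the value a fraction r of
    the way into the quantisation interval (r = 0.5 is a common,
    illustrative choice)."""
    if q == 0:
        return 0.0
    sign = -1 if q < 0 else 1
    return sign * (abs(q) + r) * delta
```

Note that with delta = 1 and integer coefficients (the reversible 5/3 case), quantise returns the coefficient unchanged, which is what makes the lossless path possible.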
The recursive probability interval subdivision of Elias coding is the basis for the binary arithmetic coding process. With each binary decision, the current probability interval is subdivided into two sub-intervals, and the codestream is modified (if necessary) so that it points to the base (the lower bound) of the probability sub-interval assigned to the symbol which occurred. Since the coding process involves addition of binary fractions rather than concatenation of integer codewords, the more probable binary decisions can often be coded at a cost of much less than one bit per decision (ISO/IEC, 2000).

3. Multiple-Component Images

JPEG2000 supports multiple-component images. Different components need not have the same bit-depths, nor need they all be signed or unsigned. For reversible systems, the only requirement is that the bit-depth of each output image component must be identical to the bit-depth of the corresponding input image component. The standard supports two different component transformations: one irreversible component transformation (ICT) and one reversible component transformation (RCT). The block diagram of the JPEG2000 colour image (RGB) encoder is shown in Fig. 4. C1, C2, C3 represent in general the colour transformed output components. If needed, prior to applying the forward colour transformation, the image component samples are DC level shifted. The ICT may only be used for lossy coding. It can be seen as an approximation of a YCbCr transformation of the RGB components. The forward and the inverse irreversible component transformations are already well known (ISO/IEC, 2000; Pennebaker and Mitchell, 1993). The RCT may be used for lossy or lossless coding. It is a decorrelating transformation, which is applied to the first three components of an image.
Three goals are achieved by this transformation, namely, colour decorrelation for efficient compression, a reasonable colour space with respect to the Human Visual System for quantisation, and the ability to perform lossless compression, i.e. exact reconstruction with finite integer precision. For the RGB components, the RCT can be seen as an approximation of a YUV transformation. The forward and inverse RCT are performed by means of eq. (3):

Yr = ⌊(R + 2G + B) / 4⌋    Ur = R − G    Vr = B − G
G = Yr − ⌊(Ur + Vr) / 4⌋    R = Ur + G    B = Vr + G    (3)

4. Significant Features of the Standard

The JPEG2000 standard exhibits many features, the most significant being the possibility to define regions of interest in an image, the spatial and SNR scalability, the error resilience and the possibility of intellectual property rights protection. Interestingly enough, all these features are incorporated within a unified algorithm.

Region-of-Interest (ROI): One of the features included in JPEG2000 is ROI coding. According to this, certain ROIs of the image can be coded with better quality than the rest of the image (the background). The ROI scaling-based method used scales up (DC shifts) the coefficients so that the bits associated with the ROI are placed in higher bit-planes. During the embedded coding process, those bits are placed in the bit-stream before the non-ROI parts of the image. Thus, the ROI is decoded or refined before the rest of the image. Regardless of the scaling, a full decoding of the bit-stream results in a reconstruction of the whole image with the highest fidelity available. If the bit-stream is truncated, or the encoding process is terminated before the whole image is fully encoded, the ROI is of higher fidelity than the rest of the image. The ROI approach defined in JPEG2000 Part I is called the MAXSHIFT method and allows ROI encoding of arbitrarily shaped regions without the need for shape information and shape decoding (Christopoulos et al., 2000).
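The MAXSHIFT idea can be illustrated on a flat list of quantised coefficients. The sketch below is a simplification under stated assumptions: it only shifts magnitudes and recovers them again, whereas the real encoder chooses and signals the scaling value within the codestream and operates inside the bit-plane coder; the helper names are illustrative.

```python
def maxshift_scale(coeffs, roi_mask):
    """Simplified MAXSHIFT scaling on quantised integer coefficients.

    ROI coefficients are shifted up by s bits, where s is chosen so
    that 2**s exceeds every background magnitude.  The decoder can
    then recognise ROI coefficients purely from their magnitude,
    with no shape information at all.
    """
    background = [abs(c) for c, in_roi in zip(coeffs, roi_mask) if not in_roi]
    s = max(background, default=0).bit_length()  # guarantees 2**s > max bg
    scaled = [c << s if in_roi else c for c, in_roi in zip(coeffs, roi_mask)]
    return scaled, s

def maxshift_unscale(scaled, s):
    """Decoder side: any magnitude at or above 2**s must belong to
    the ROI and is shifted back down."""
    return [c >> s if abs(c) >= (1 << s) else c for c in scaled]
```

Because every ROI bit-plane now sits strictly above every background bit-plane, a truncated bitstream refines the ROI first, which is exactly the behaviour described above.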
Scalability: Realising that many applications require images to be simultaneously available for decoding at a variety of resolutions or qualities, the JPEG2000 architecture supports scalability. In general, scalable coding of still images means the ability to achieve coding of more than one resolution and/or quality simultaneously. Scalable image coding involves generating a coded representation (bitstream) in a manner which facilitates the derivation of images of more than one resolution and/or quality by scalable decoding. Bitstream scalability is the property of a bitstream that allows decoding of appropriate subsets of the bitstream to generate complete pictures of resolution and/or quality commensurate with the proportion of the bitstream decoded. For scalable bitstreams, decoders of different complexities, from low performance to high performance, can coexist. While low performance decoders may decode only small portions of the bitstream, producing basic quality, high performance decoders may decode much more and produce significantly higher quality. The most important types of scalability are signal-to-noise ratio (SNR) scalability and spatial scalability (ISO/IEC, 2000; Marcellin et al., 2000). SNR scalability involves generating at least two image layers of the same spatial resolution, but different qualities, from a single image source. The lower layer is coded by itself to provide the basic image quality and the enhancement layers are coded to enhance the lower layer. An enhancement layer, when added back to the lower layer, regenerates a higher quality reproduction of the input image. Spatial scalability involves generating at least two spatial resolution layers from a single source, such that the lower layer is coded by itself to provide the basic spatial resolution and the enhancement layer employs the spatially interpolated lower layer and carries the full spatial resolution of the input image source.
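SNR scalability can be pictured in terms of the bit-plane coding of Section 2.3: each additional layer refines coefficient magnitudes, and a decoder that stops after the first k layers obtains a coarser but complete reconstruction. The following is a toy illustration only; in JPEG2000 proper, layer formation is governed by the EBCOT rate allocator rather than by raw bit-planes, and the helper names here are hypothetical.

```python
def bitplane_layers(coeffs, nplanes):
    """Split non-negative coefficient magnitudes into bit-plane
    layers, most significant plane first."""
    return [[(c >> p) & 1 for c in coeffs]
            for p in range(nplanes - 1, -1, -1)]

def reconstruct(layers, nplanes):
    """Rebuild magnitudes from however many layers were received;
    missing low-order planes are simply treated as zero."""
    coeffs = [0] * len(layers[0])
    for i, plane in enumerate(layers):
        p = nplanes - 1 - i  # plane index this layer carries
        for j, bit in enumerate(plane):
            coeffs[j] |= bit << p
    return coeffs
```

Decoding all layers is lossless on these toy magnitudes, while decoding only the top layers yields a lower-quality approximation of every coefficient, which is the essence of SNR scalability.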
An additional advantage of the spatial and SNR scalability types is their ability to provide resilience to transmission errors: the most important data of the lower layer can be sent over a channel with good error performance, while the less critical enhancement-layer data can be sent over a channel with poorer error performance. Both types of scalability are very important for Internet and database-access applications, and for bandwidth scaling for robust delivery. The SNR and spatial scalability types include the progressive and hierarchical coding modes already defined in the current JPEG, but they are more general.

Error Resilience: Many applications require the delivery of image data over different types of communication channels. Typical wireless communication channels give rise to random and burst bit errors, while Internet communications are prone to loss due to traffic congestion. To improve the performance of transmitting compressed images over these error-prone channels, error-resilient bitstream syntax and tools are included in the JPEG2000 standard. The error-resilience tools deal with channel errors using approaches such as data partitioning and resynchronisation, error detection and concealment, and Quality of Service (QoS) transmission based on priority (ISO/IEC, 2000).

New File Format with IPR Capabilities: An optional file format (JP2) for JPEG2000 compressed image data has been defined in the standard. This format provides for both image data and metadata, a mechanism to indicate the tonescale or colourspace of the image, a mechanism by which readers may recognise the existence of intellectual property rights (IPR) information in the file, and a mechanism by which metadata (including vendor-specific information) can be included in the file (ISO/IEC, 2000).

5. Comparative Results

The rate-distortion behaviour of the lossy (non-reversible) JPEG2000 and the progressive JPEG is depicted in Fig. 5 for a natural image.
It is seen that JPEG2000 significantly outperforms the JPEG scheme at any given rate; for similar PSNR quality, JPEG2000 achieves roughly twice the compression of JPEG (Charrier et al., 1999; Christopoulos and Skodras, 1999). The superiority of JPEG2000 can be judged subjectively with the help of Fig. 6, where the reconstructed image ‘hotel’ (720x576) is shown. This image was compressed at a rate of 0.125 bpp using the existing JPEG and the upcoming JPEG2000 (http://etro.vub.ac.be/~chchrist/jpeg2000_contributions.htm). From the point of view of visual quality, JPEG2000 is 10% to 25% better than baseline JPEG for images compressed at approximately 0.5-1 bpp; the improvement is much higher at very low bit rates.

One of the interesting and unique features of JPEG2000 is its capability of defining ROIs, which are coded at a better quality than the rest of the image. There can be more than one ROI, of any shape and size. In Fig. 7 an example of a circular ROI is given. Experiments have shown that for lossless coding of images, the ROI feature increases the bitrate by a maximum of 8% in comparison to lossless coding without the ROI feature (Christopoulos et al., 2000).

The lossless compression efficiency of JPEG2000 versus the lossless mode of JPEG and JPEG-LS for a natural and a compound image is reported in Table III. JPEG2000 performs equivalently to JPEG-LS in the case of the natural image, with the added benefit of scalability, whereas JPEG-LS is advantageous in the case of the compound image. Taking into account that JPEG-LS is significantly less complex than JPEG2000, it is reasonable to use JPEG-LS for lossless compression; in such a case, though, the generality of JPEG2000 is sacrificed. A comparison of JPEG, JPEG-LS and JPEG2000 from the functionality point of view is given in Table IV.
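The comparisons above are expressed in PSNR; as a reference sketch, for 8-bit images the usual definition is PSNR = 10 log10(255^2 / MSE):

```python
import numpy as np

def psnr(original, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio in dB between two equally sized images."""
    a = np.asarray(original, dtype=float)
    b = np.asarray(reconstructed, dtype=float)
    mse = np.mean((a - b) ** 2)  # mean squared error over all pixels
    return 10.0 * np.log10(peak * peak / mse)
```

Halving the bitrate at equal PSNR, as reported for JPEG2000 versus JPEG, is what the rate-distortion curves of Fig. 5 express.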
A plus (or minus) sign indicates that the corresponding functionality is supported (or not supported); the more plus signs, the greater the support, and parentheses indicate that a separate mode is required. It is evident from Table IV that the JPEG2000 standard offers the richest set of features, in a very efficient way and within a unified algorithm. All of the above-mentioned advantages of JPEG2000, however, come at the expense of memory and computational complexity. Optimised JPEG codecs run almost three times faster and require less memory than current JPEG2000 software implementations. It should be stressed, though, that these figures refer to non-optimal implementations, which are also platform dependent. Careful optimisation of the JPEG2000 algorithm will greatly improve performance without sacrificing functionality. However, the multi-pass bit-plane context model and the arithmetic coder of JPEG2000 will prevent any software implementation from reaching the speed that JPEG obtains with the DCT and Huffman coding. As for the memory requirements, the new line-based, reduced-memory approaches for the calculation of the wavelet transform already offer implementations that are less memory-hungry than JPEG's, especially in the progressive coding case (Chrysafis and Ortega, 2000).

6. Conclusions

JPEG2000 is the new standard for still image compression and is expected to be in use by the beginning of next year. It provides a new framework and an integrated toolbox to better address increasing needs for compression and functionalities in still image applications, such as the Internet, colour facsimile, printing, scanning, digital photography, remote sensing, mobile applications, medical imagery, digital libraries and e-commerce. Lossless and lossy coding, embedded lossy-to-lossless coding, progressive-by-resolution and progressive-by-quality coding, high compression efficiency, error resilience and lossless colour transformations are some of its features.
Comparative results have shown that JPEG2000 is indeed superior to existing still image compression standards. Work is still needed in optimising its implementation performance. The reference software of the standard has been developed in C and in JAVA. The intention is to have licence-fee-free software for commercial and non-commercial use. The JAVA version can be downloaded from http://jj2000.epfl.ch.

Acknowledgements

The authors would like to thank Diego Santa Cruz from EPFL and Mathias Larsson and Joel Askelof from Ericsson, Sweden, for their contribution to this work.

REFERENCES

Antonini, M., Barlaud, M., Mathieu, P., Daubechies, I., 1992. Image Coding Using the Wavelet Transform. IEEE Trans. Image Processing, 1, pp. 205-220.
Calderbank, A.R., Daubechies, I., Sweldens, W., Yeo, B.-L., 1997. Lossless Image Compression Using Integer to Integer Wavelet Transforms. In: Proc. Int. Conf. Image Processing.
Charrier, M., Santa Cruz, D., Larsson, M., 1999. JPEG2000: The Next Millennium Compression Standard for Still Images. In: Proc. ICMCS-99.
Christopoulos, C., Askelof, J., Larsson, M., 2000. Efficient Methods for Encoding Regions of Interest in the Upcoming JPEG2000 Still Image Coding Standard. IEEE Signal Processing Letters, 7, 9, pp. 247-249.
Christopoulos, C.A., Skodras, A.N., 1999. The JPEG2000 (Tutorial). In: Proc. IEEE Int. Conf. Image Processing.
Chrysafis, C., Ortega, A., 2000. Line-Based, Reduced Memory, Wavelet Image Compression. IEEE Trans. on Image Processing, 9, 3, pp. 378-389.
ISO/IEC, 2000. JPEG2000 Image Coding System, JPEG2000 Final Committee Draft v1.0, ISO/IEC JTC1/SC29/WG1 N1646.
Kovacevic, J., Sweldens, W., 2000. Wavelet Families of Increasing Order in Arbitrary Dimensions. IEEE Trans. on Image Processing, 9, 3, pp. 480-496.
Le Gall, D., Tabatabai, A., 1988. Subband Coding of Digital Images Using Symmetric Short Kernel Filters and Arithmetic Coding Techniques. In: Proc. IEEE Int. Conf. on Acoustics, Speech and Signal Processing, pp. 761-765.
Marcellin, M.W., Gormish, M.J., Bilgin, A., Boliek, M.P., 2000. An Overview of JPEG-2000. In: Proc. IEEE Data Compression Conference, pp. 523-541.
Pennebaker, W.B., Mitchell, J.L., 1993. JPEG: Still Image Data Compression Standard. Van Nostrand Reinhold, New York.
Requirements AHG, 1999. JPEG2000 Requirements and Profiles v5.0, ISO/IEC JTC1/SC29/WG1 N1271.
Taubman, D., 1998. Report on Coding Experiment CodEff22: EBCOT (Embedded Block Coding With Optimized Truncation). ISO/IEC JTC1/SC29/WG1 N1020R.
Taubman, D., 2000. High Performance Scalable Image Compression with EBCOT. IEEE Trans. on Image Processing, 9, 7, pp. 1158-1170.

FIGURE CAPTIONS

Figure 1. Block diagram of the JPEG2000 (a) encoder and (b) decoder.
Figure 2. Tiling, DC level shifting and DWT of each image tile component.
Figure 3. Periodic symmetric extension of a signal.
Figure 4. Block diagram of the JPEG2000 colour image encoder.
Figure 5. Rate-distortion results for the JPEG2000 versus the progressive JPEG for a natural image.
Figure 6. Reconstructed image ‘hotel’ compressed at 0.125 bpp by means of (a) JPEG and (b) JPEG2000.
Figure 7. Reconstructed image in which a ROI of circular shape has been defined.
TABLE CAPTIONS

Table I. Daubechies 9/7 analysis and synthesis filter coefficients
Table II. 5/3 analysis and synthesis filter coefficients
Table III. Lossless compression results (in bpp)
Table IV. Summary of functionalities supported by each standard

Table I. Daubechies 9/7 analysis and synthesis filter coefficients

        Analysis Filter Coefficients                        Synthesis Filter Coefficients
 i      Lowpass hL(i)           Highpass hH(i)              Lowpass gL(i)           Highpass gH(i)
 0      0.6029490182363579      1.115087052456994           1.115087052456994       0.6029490182363579
 +-1    0.2668641184428723     -0.5912717631142470          0.5912717631142470     -0.2668641184428723
 +-2   -0.07822326652898785    -0.05754352622849957        -0.05754352622849957    -0.07822326652898785
 +-3   -0.01686411844287495     0.09127176311424948        -0.09127176311424948     0.01686411844287495
 +-4    0.02674875741080976                                                         0.02674875741080976

Table II. 5/3 analysis and synthesis filter coefficients

 i      Lowpass hL(i)   Highpass hH(i)   Lowpass gL(i)   Highpass gH(i)
 0       6/8             1                1               6/8
 +-1     2/8            -1/2              1/2            -2/8
 +-2    -1/8                                             -1/8

Table III. Lossless compression results (in bpp)

 Image                      lossless JPEG   JPEG-LS   JPEG2000
 Lena (512x512, 24bpp)      14.75           13.56     13.54
 Cmpnd1 (512x768, 8bpp)      2.48            1.24      2.12

Table IV. Summary of functionalities supported by each standard

 Compression Algorithm   Lossless   Lossy   Embedded bitstream   ROI   Error resilience   Scalability   Complexity   Random access   Generic
 JPEG                    (+)        ++      -                    -     -                  (+)           +(+)         +               +
 JPEG-LS                 ++++       +       +                    -     -                  -             +            -               +
 JPEG2000                +++        +++     +++                  ++    ++                 ++            +++          ++              +++
www.eda-egypt.org • Codex : 98/1707 • I.S.S.N 0070-9484
Orthodontics, Pediatric and Preventive Dentistry

EGYPTIAN DENTAL JOURNAL
Vol. 63, 2147:2154, July, 2017

EFFICACY OF TWO DIFFERENT TREATMENT MODALITIES ON MASKING WHITE SPOT LESIONS IN CHILDREN WITH MOLAR INCISOR HYPO-MINERALIZATION

Ghada A. ElBaz * and Shaimaa M. Mahfouz **

* Assistant Professor of Pediatric and Preventive Dentistry and Dental Public Health, Faculty of Dentistry, Suez Canal University.
** Lecturer of Pediatric and Preventive Dentistry and Dental Public Health, Faculty of Dentistry, Suez Canal University.

ABSTRACT

Purpose: This study aimed to assess and compare the effectiveness of resin infiltration and fluoride varnish in masking white spot lesions of central incisors in children with molar incisor hypo-mineralization (MIH), clinically and radiographically.

Methods: Twenty children aged 9 to 14 years with bilateral maxillary central incisors with MIH, according to the judgment criteria of the European Academy of Paediatric Dentistry (EAPD), were included in this study. The children were divided into two groups: group I was treated with resin infiltration (ICON), while group II was treated with fluoride varnish (Fluor Protector). For each patient, a standardized digital photograph and a periapical radiograph were taken pre-operatively, immediately after treatment, and at 1 week and 1 month after treatment.
All photographs and periapical radiographs were analyzed to calculate the color and gray level (GL) differences between the sound enamel and the white spot lesions.

Results: A comparison between the two groups showed a statistically significantly lower mean color difference between sound and treated white-spot enamel in group I than in group II at all evaluation periods. Moreover, a comparison between the GL mean values of both groups showed statistically significantly higher GL mean values after resin infiltration in group I than after fluoride application in group II at all evaluation periods.

Conclusion: Resin infiltration (ICON) is markedly better than fluoride varnish at masking white spot lesions in children with MIH. The resin infiltration technique allows a minimally invasive treatment in a single appointment, making it beneficial for school-aged patients.

KEYWORDS: White spot, fluoride, resin infiltration, children.

INTRODUCTION

Molar incisor hypo-mineralization (MIH) is a frequently observed developmental defect of dental enamel that involves hypo-mineralization of 1 to 4 first permanent molars (FPM), with or without the permanent incisors. The qualitative enamel structure defects occur mainly due to systemic causes 1,2. MIH may be caused by environmental factors such as chronic or acute illness during the last gestational trimester and the first 3 years of childhood, more precisely during the period in which the crowns of the FPMs and incisors are mineralized 3. MIH clinically appears as well-demarcated, discrete opaque lesions ranging from white to yellow-brown in color, of normal enamel thickness with a smooth surface, and with a distinct boundary adjacent to normal enamel 3,4.
Based on previous research on MIH, it has been shown that 71.6% of affected children may have permanent incisors as well as FPMs involved 5. Children with MIH suffer from aesthetic and psychological problems that need to be treated as early as possible 6. Many studies have reported that topical application of fluoride or casein phosphopeptide-amorphous calcium phosphate (CPP-ACP) successfully re-mineralizes the demineralized enamel surface, but the aesthetic effect is limited because re-mineralization is confined to the lesion surface 7-10. Recently, correction of enamel lesions by resin infiltration to achieve better enamel color and translucency has appeared to be a promising approach for the noninvasive treatment of white spot lesions. Infiltration of low-viscosity resin into the subsurface enamel layer has been used to decrease enamel porosity and to obstruct mineral and acid pathways out of the enamel surface, which arrests enamel surface demineralization 11-13. Therefore, the aim of the present study was to assess and compare the effectiveness of resin infiltration and fluoride varnish in masking white spot enamel lesions of permanent central incisors in children with MIH, clinically and radiographically.

MATERIALS AND METHODS

Twenty apparently healthy and cooperative children with bilateral maxillary permanent central incisors with MIH were included in this study. Their ages ranged from 9 to 14 years, and they were selected from the Outpatient Clinic, Pedodontic Department, Faculty of Dentistry, Suez Canal University. The protocol of the research project was revised and approved by the scientific ethical committee of the Pedodontic Department, Faculty of Dentistry, Suez Canal University. A brief history and clinical examination were recorded.
Hypo-mineralized areas were assessed and identified visually as an abnormality in the translucency of the enamel, also denominated as opacity of the enamel, according to the judgment criteria of the European Academy of Paediatric Dentistry (EAPD) 14. None of the patients had received any previous treatment of the maxillary permanent central incisors. The treatment plan was explained to the parents, and written informed consent was obtained before the clinical procedure. Teeth included in this study were cleaned with rubber cups and prophylaxis paste. Each patient received pre-operative digital photography and digital periapical radiography. The children were divided into two groups of 10 patients each, as follows: group I (n = 20 maxillary central incisors) was treated with resin infiltration (ICON, DMG), and group II (n = 20 maxillary central incisors) was treated with Fluor Protector (difluorsilane 1000 ppm F, Ivoclar Vivadent, Schaan, Liechtenstein). The resin infiltrant and fluoride varnish were applied to the teeth according to the manufacturers' instructions.

Group I: Each patient received resin infiltration of the two maxillary permanent central incisors. The teeth were cleaned with pumice or a non-fluoridated prophylaxis paste before the infiltration procedure. During the infiltration procedure protective eyeglasses were worn, and a rubber dam was applied. The surface layer of each incisor was etched by the application of 15% hydrochloric acid gel in a circular motion (ICON-Etch; DMG, Hamburg, Germany) for 2 minutes to expose the lesion body layer, then rinsed for 30 seconds with water spray and dried. The lesions were desiccated using 100% ethanol (ICON-Dry; DMG) for 30 seconds, followed by air drying. The infiltrant resin (ICON-Infiltrant; DMG) was applied to the surface and allowed to penetrate for 3 minutes with the operating light turned away.
Excess material was wiped off the surface with a cotton roll, and off the proximal surfaces with dental floss, before light curing for 40 seconds. The process was then repeated, with the ICON infiltrant left in place for one more minute, and each tooth was then cured for a further 40 seconds. Finally, the roughened enamel surface was polished using composite resin polishing discs (Sof-Lex discs; 3M ESPE, Saint Paul, MN, USA).

Group II: Each patient received a fluoride varnish application on the two maxillary permanent central incisors. The tooth surfaces were thoroughly cleaned. Adequate isolation was achieved with cotton rolls and high-volume suction. The fluoride varnish was applied with a special applicator brush on properly dried tooth surfaces. Generally, Fluor Protector is applied every six months according to the manufacturer's instructions.

Standardized clinical digital photographs and digital radiographs were taken before treatment (P0 & R0), serving as baseline records for the successive films, immediately after treatment (P1 & R1), and one week (P2 & R2) and one month (P3 & R3) after treatment for both groups.

I) Standardized clinical photography

Standardized clinical photographs were taken with a Nikon D3200 camera (Thailand) and a Nikkor 18-55 mm AF-S VR 1:3.5-5.6 G II lens (Thailand). A tripod was used to fix the camera at the same distance from the patient, and the dental unit was stabilized at the rest position to standardize the photographic conditions. The camera settings were 1/200 s, F29, ISO 600, and auto white balance. The flash was set on manual mode at 1/16 power with no compensation. All photographs were taken by one photographer according to a standardized protocol.

Evaluation of color changes

The sound enamel and white spot lesions were outlined on the screen, and the colors of the sound enamel and white spot lesions were measured using an image-analysis program (Photoshop).
The color of the teeth at the same site was analyzed by transferring the outlined layer of the P0 photograph to the P1, P2 and P3 photographs. The difference in color between the sound enamel and the white spot lesions at each time point was calculated and measured with the digital photograph-analysis program.

II) Standardized digital radiography

Standardized intraoral periapical digital radiographs were taken before treatment (R0), immediately after treatment (R1), and one week (R2) and one month (R3) after treatment for each treated tooth, using a long-cone paralleling technique for standardization. An individual silicone bite stent and a film-holder device, to which the X-ray tube was fitted, were used to achieve standardization. The sensor was placed horizontally in the holder for the central incisor region. The sensor was exposed with the dental X-ray machine (Fona XCD, Italy) at 100-240 V, 50-60 Hz, 6 A and 690 VA for 0.06 seconds, with the X-ray beam perpendicular to the sensor. The exposure parameters were fixed for all patients. After the exposure ended, readout started automatically; the image was displayed gradually on a computer screen and stored automatically. Baseline radiographs (R0) were compared with the (R1), (R2) and (R3) radiographs to detect changes in crown radio-density. Quantitative information regarding radio-density changes in the crown areas was obtained by the use of a computerized image-analysis program, using
Data were expressed as mean ± standard deviations of the parameters evaluated. Paired sample t test was used to test the significance of the difference of intra group measurement, while independent sample t test was used to test the significance of the difference of inter group measurement. The significance level was set at P < 0.05. RESULTS I) Color change results In group I, the mean color difference between the sound and white spots enamel was significantly decreased after immediate treatment, one week and one month after resin infiltration treatment from 204.99, 194.83 to 194.38 while in group II, the mean color difference was in-significantly decreased after immediate treatment, one week and one month after fluoride varnish application from 200.17, 198.44 to 198.4. On the other hand, a comparison between group I and group II showed a statistically significant lower mean color difference in group I than in group II in all evaluation periods. (Figures 1,2,3,4) (Table 1) II) Gray level results Comparison of the gray level (GL) means of images taken before and after resin infiltrations or fluoride application showed that, for group I there was a significant improvement in radio-density from base line to one month after resin infiltration treatment from 83.98±0.9 to 110.13±0.93, while in group II there was in-significant improvement from base line to one month after fluoride varnish application (77.65±0.8 to 78.55±1.33). In addition, a comparison between the GL mean values of the group I and group II showed a statistically significant higher GL mean value in the group I than in group II in all evaluation periods. 
(Figure 5) (Table 2)

TABLE (1) Comparisons between the color difference results for both treated groups at different evaluation periods

 Color difference                   Resin infiltration   Fluoride varnish   Difference     P value
 Baseline (P0)                      204.99 ± 13.85       200.17 ± 16        4.82 ± 10.17   >0.05
 Immediately after treatment (P1)   194.63 ± 4.2         198.93 ± 2.9       4.3 ± 2.1      <0.05*
 1 week (P2)                        194.83 ± 4.7         198.44 ± 1.9       3.61 ± 3.7     <0.05*
 1 month (P3)                       194.38 ± 5.0         198.4 ± 2.1        4.02 ± 2.9     <0.05*
 Baseline versus immediate,
 1 week, and 1 month                <0.05*               >0.05

TABLE (2) Comparisons between the gray level results for both treated groups at different evaluation periods

 Gray level                         Resin infiltration   Fluoride varnish   Difference     P value
 Baseline (R0)                      83.98 ± 0.9          77.65 ± 0.8        6.33 ± 0.17    >0.05
 Immediately after treatment (R1)   110.83 ± 0.35        78.23 ± 0.91       32.6 ± 0.98    <0.05*
 1 week (R2)                        110.65 ± 1.54        78.83 ± 1.5        31.82 ± 0.49   <0.05*
 1 month (R3)                       110.13 ± 0.93        78.55 ± 1.33       31.58 ± 0.69   <0.05*
 Baseline versus immediate,
 1 week, and 1 month                <0.05*               >0.05

Fig. (1) Digital photograph before treatment with resin infiltration (P0)
Fig. (2) Digital photograph after one month of treatment with resin infiltration (P3)
Fig. (3) Digital photograph before treatment with fluoride varnish (P0)
Fig. (4) Digital photograph after one month of treatment with fluoride varnish (P3)

DISCUSSION

Hypo-mineralized incisors in MIH may present esthetic concerns to children and their parents. Micro-abrasion can be an effective treatment for shallow defects, but the defects usually extend through the full enamel thickness; an application of opaque resin after enamel reduction can also be considered a successful restoration for hypo-mineralized enamel 15,16.
In the present study, non-invasive treatments (resin infiltration and fluoride varnish) were used to improve the color defects; they are much less invasive than micro-abrasion or restoration 17-20. Enamel surfaces were dried with air for adequate diagnosis and treatment before resin infiltration or fluoride varnish application, as white spot lesions become more obvious when the teeth are dry because of the different refractive indices of normal enamel, water, and air 21. The resin infiltration process was repeated twice with the ICON infiltrant to decrease polymerization shrinkage of the resin and to complete the occlusion of surface micro-porosity and the obliteration of the hypo-mineralized surface lesions 22. The standardized clinical photographs in the present study showed a significant decrease in the color differences between sound and white-spot enamel after application of resin infiltration. This finding agrees with Torres et al., 2011 23, who attributed the greater efficacy of resin infiltration in masking white spot lesions of enamel to the low viscosity, acid resistance and high diffusion coefficient of the resin, which lead to obstruction of the micro-porosities in enamel surfaces. In addition, Paris et al., 2013 24 showed that infiltration was an effective method of masking white spot lesions and that it had visibly stable effects on enamel surface color under discoloring conditions. On the other hand, in group II the color difference decreased insignificantly after Fluor Protector varnish application. This result is in accordance with Chin et al., 2009 25, who applied low-dose fluoride (250 ppm) as a re-mineralizing agent for 28 days and found limited improvement of white spot lesions, with unstable color change. The results of this study showed a statistically significantly lower mean color difference in group I than in group II at all evaluation periods.
Previous studies showed that ICON proved to be the most effective method of masking white spot lesions, and less resistant to the formation of new white spot lesions, when compared to treatment with therapeutic fluoride solutions 26,27. Furthermore, the color stability of infiltrated teeth is durable 28,29. These findings may be attributed to the extremely high penetration coefficient of the ICON resin, which enables it to penetrate to a depth of more than 0.5 mm into the lesion pores by capillary action. The deep penetration of the ICON resin raises the refractive index of hypo-mineralized enamel from 1.33 to 1.52, which is close to the refractive index of sound enamel (1.62), giving a more homogeneous enamel color and an excellent cosmetic result 22,30,31.

Fig. (5) Periapical radiograph analyzed by the gray-level system.

In the present study, the gray level (GL) results showed a significant improvement in the mean GL of images taken before and after resin infiltration; on the other hand, the results of group II showed
One of the limitations of this study was the moderate sample size due to school-aged patients which are usually difficult to collaborate in studies with follow up period. Another limitation is the high cost of resin infiltration (ICON) treatment modality in relation to other approaches. Further studies with longer periods of follow up are necessary to evaluate the clinical performance and color stability of resin infiltration (ICON) technique in masking the enamel lesions in children with MIH. CONCLUSION Resin infiltration (ICON) is dramatically better than fluoride varnish in the masking of white spot lesions in children with (MIH). Resin infiltration technique allows a minimally invasive treatment in a single appointment making it beneficial for school aged patients. ACKNOWLEDGMENTS We are grateful to all children and their families who took part in this study REFERENCES 1- Weerheijm KL, Jälevik B and Alaluusua S. Molar-incisor hypomineralisation.Caries Res 2001;35: 390-391. 2- Ghanim A, Manton D, Bailey D, Marino R and Morgan M. Risk factors in occurance of molar – incisor hypomineralization among group of Iraq children. Int J Paediatr Dent 2013; 23:197-184. 3- Jälevik B and Norén JG. Enamel hypomineralization of permanent first molars: a morphological study and survey of possible aetiological factors. Int J Paediatr Dent 2000;10: 278-289. 4- A review of the developmental defects of enamel index (DDE Index). Commission on Oral Health, Research & Epidemiology. Report of an FDI Working Group. Int Dent J 1992;42:411-426. 5- Chawla N, Messer LB and Silva M. Clinical studies on molar-incisor hypomineralisation part 1: distribution and putative associations. Eur Arch Paediatr Dent 2008; 9: 180-190. 6- Lygidakis NA. Treatment modalities in children with teeth affected by molar-incisor enamel hypomineralisation (MIH): A systematic review. Eur Arch Paediatr Dent 2010;11: 65-74. 7- Willmot DR.White lesions after orthodontic treatment: does low fluoride make a difference? 
J Orthod 2004; 31: 235–24. 8- Ardu S, Castioni NV, Benbachir N and Krejci I. Minimally invasive treatment of white spot enamel lesions. Quintessence Int 2007;38: 633–636. 9- Beerens M W, Van der Veen M H, van Beek H and ten Cate J M. Effects of casein phoshopeptide amorphous calcium fluoride phosphate paste on white spot lesion and dental plaque after orthodontic treatement: a 3 mounth follow up. European Journal of Oral Sciences 2010; 118:610-617. 10- Hamdan S M, El Banna M, El Zayat I and Mohsen M A. Effect of resin infiltration on white spot lesion after debonding orthodontic brackets. American Journal of Dentistry 2012; 25:3-8. 11- Paris S, Meyer-Lueckel H, Cölfen H and Kielbassa AM. Resin infiltration of artificial enamel caries lesions with experimental light curing resins. Dent Mater J 2007; 26: 582–588. (2154) Ghada A. ElBaz and Shaimaa M. MahfouzE.D.J. Vol. 63, No. 3 12- Meyer-Lueckel H and Paris S. Progression of artificial enamel caries lesions after infiltration with experimental light curing resins. Caries Res 2008; 42: 117–124. 13- Eckstein A, Helms HJ and Knösel M. Camouflage effects following resin infiltration of post-orthodontic white-spot lesions in vivo: One-year follow-up. Angle Orthod. 2015; 85(3):374-380. 14- Weerheijm KL. Molar-incisor hypomineralization (MIH). Eur J Paediatr Dent 2003; 4(3):114–120. 15- Weerheijm KL. Molar incisor hypo-mineralization (MIH): Clinical presentation, aetiology, and manage ment. Dent Update 2004; 31:9-12. 16- Reynolds EC. New modalities for a new generation: Casein phosphopeptide-amorphous calcium phos phate, a new remineralization technology. Synopses 2005; 30:1-6. 17- Malterud MI. Minimally invasive restorative dentistry: a biomimetic approach. Pract Proced Aesthet Dent 2006; 18: 409–414. 18- Meyer-Lueckel H, Paris S and Kielbassa AM. Surface layer erosion of natural caries lesions with phosphoric and hydrochloric acid gels in preparation for resin infiltration. Caries Res 2007; 41: 223–230. 
19- Stahl J and Zandona AF. Rationale and protocol for the treatment of non-cavitated smooth surface carious lesions. Gen Dent 2007; 55: 105–111. 20- Reynolds E.C. Calcium phosphate based remineralizing sys- tem: scientific evidence. Aust. Dent J 2008; 53(3):268-273. 21- Hosey MT, Deery C and Waterhouse PJ. Pediatric cariology. London: Quintessense Essentials 2004. 22- Kim S, Kim EY, Jeong TS et al., The evaluation of resin infilteration for masking labial enamel white spot lesions . Int J Pediatric Dent 2011; 21:241-248. 23- Torres CR, Borges A B, Torres L M , Gomes I S and de Oliveira RS. Effect of caries infiltration technique and fluoride therapy on the color of masking white spot lesions Journal of Dentistry 2011; 39:202-207. 24- Paris S, Schwendicke F, Keltsch J, Drfer C and Meyer- Lueckel H. Masking of white spot lesions by resin infilteration in vitro. Journal of Dentistry 2013; 41:28-34. 25- Chin MY, Sandham A, Rumachik E N and Huysmans M C. Fluoide release and cariostatic potential of orthodontic adhesives with and without daily fluoride rinsing. Ameican Journal of Orthodontic and Dentofacial Orthopedics 2009; 136:547-553. 26- Kim S, Shin JH, Kim EY, Lee SY, Yoo SG. The evaluation of resin infiltration for masking labial enamel white spot lesions. Caries Res 2010; 44: 171–248. 27- Rocha Gomes Torres C, Marcondes Sarmento Torres L, Silva Gomes I, Simões de Oliveira R, Bühler Borges A. Effect of caries infiltration technique and fluoride therapy on the color masking of white spot lesions. 2010, Data on file. DMG, Hamburg, Germany. 28- Luebbers D, Spieler-Husfeld K, Staude C. In vitro color stability of infiltrated carious lesions. 2009, Data on file. DMG, Hamburg, Germany. 29- Phark JH, Duarte S. Clinical performance and color stability of infiltrated smooth surface lesions. 2010, Data on file. DMG, Hamburg. Germany. 30- Meyer-Lückel H and Paris S. Improved resin infiltration of natural caries lesions. J Dent Res 2008; 87(12):1112-1116. 
work_5abe2tjzybf5deu5stonspm6xu ---- BA130722.pdf 764 BOOK REVIEWS The Condor, Vol. 111, Number 4, pages 764–766. ISSN 0010-5422, electronic ISSN 1938-5422. © 2009 by The Cooper Ornithological Society. All rights reserved. Please direct all requests for permission to photocopy or reproduce article content through the University of California Press’s Rights and Permissions website, http://www.ucpressjournals.com/reprintInfo.asp. DOI: 10.1525/cond.2009.review07 The Condor 111(4):764–766 The Cooper Ornithological Society 2009 Identification Guide to North American Birds, Part II.—Peter Pyle. 2008. Slate Creek Press, Point Reyes Station, CA. xi + 836 pp., 556 text figures, 71 text tables, 289 bar graphs. ISBN 9780961894047. $62.99 (paper). At the outset, I must stress that this is not a field guide in the sense of a guide to the identification of species of North American birds. In the main, it is not a field guide at all; it is the second and final part of a comprehensive collation of technical material to be applied primarily to birds in the hand. Thus the primary audience consists of bird banders (there is a successful coordination of data in the book with the data required of U.S. and Canadian bird banders by the joint North American Bird Banding Laboratory) or museum workers and others using reference collections for research. The girl scout seeking material for her Bird Study merit badge or the beginning birder interested in, for example, the changing plumages of the American Golden-Plover (Pluvialis dominica) might start to read the molt section on p.
513 and puzzle over “Molt—CAS. PF complete (Oct–Mar/May in HY/SYs), DPB complete (Jul–Mar in AHY/ASYs). . . .” Yet Pyle is aware of the problems involved in describing concisely for each species the characteristics that separate it from similar species in the hand, and with the advent of superb optics and modern digital photography, an understanding of the molt sequence of any one species, with reference to the complete species account, may add considerably to the inquiring field birder’s knowledge and appreciation of a well-studied individual bird at reasonable ranges. Thus we discerning viewers of online digital bird images should also be included as potential buyers of this book. This final volume (Part II) completes the monumental task started in Part I (Pyle 1997), which covered 392 species and 856 subspecies of North American landbirds. In this volume, Pyle summarizes tersely geographic variation, molt, age, sex, hybrids reported, and references for 310 species and 276 subspecies of nonpasserine birds (waterfowl through alcids). Accordingly, a careful reading of the first 46 pages before the species accounts is obligatory. To keep the book at a transportable size (15.5 × 23.5 cm), extensive abbreviation has been employed, and the most frequent “abbreviations and interpretations” are printed inside the front and back covers for more rapid reference. Because published information on molts and plumages by age is less for the species in Part II than for those in Part I, Pyle has relied more heavily on field work and specimen examination, making 93 visits to reference collections over the 6.5 years of writing. He presents this work (rather too modestly in my opinion) with “a primary objective … to call attention to unsolved problems” (p. 1) and asks for the inevitable new research, suggestions, corrections, and updates to be coordinated at the website www.SlateCreekPress.com in the future.
The phrase “more study needed” occurs frequently in species accounts. We should all try harder. An early heading within the introduction is “Bird Topography,” which standardizes feather nomenclature to follow the widely used approach summarized in Sibley (2000). The primary, secondary, or rectrix feathers are numbered in the order in which they are molted in the majority of species (primaries proximal to distal, secondaries usually distal to proximal, and rectrices centrifugally). After sections similarly standardizing measurement techniques, an extremely important section deals with molt. Following Humphrey and Parkes (1959), a molt (e.g., prealternate) is named for the plumage (e.g., alternate, usually breeding) that results from it. Pyle also follows Howell et al. (2003) by recognizing additional inserted molts such as the preformative molt, which may follow the first prebasic and precede the first prealternate. This sequence is abbreviated to save space and would be represented as PB1, PF, PA1 (see Complex Alternate Strategy F, fig. 10, p. 14). In large, long-winged birds such as cormorants (family Phalacrocoracidae) that need to maintain the ability to fly while molting, primary and secondary wing feathers follow a strategy termed Staffelmauser (Stresemann and Stresemann 1966). Here “simultaneous waves” of molt may create differences in wear and coloration, as does arrested molt in any tract, leading to visible molt limits in a plumage. Thus, an understanding of molt leads to plumages, which lead to age and sex determination in as precise a manner as possible. When combined with characters of brood patch and cloacal development, the age and sex are then coded according to the calendar-year-based system of the Bird Banding Laboratory (BBL). Most active bird banders have already discovered that moistening the skin over the skull and attempting to see through to the pneumatization development beneath is far more difficult in nonpasserine birds.
Pyle summarizes the literature to suggest that “most to all nonpasserines do not complete pneumatization” that can be seen through the skin of the living bird. This will be a welcome relief to all who have struggled and failed. “Species Accounts” (pp. 47–789) present a highly detailed summary of the above information (and much more) for all species covered. In reading these I often find it essential to refer back to the abbreviation summaries inside the covers or to the introductory chapters, even after using most of this terminology since the publication of Part I in 1997. Each account gives the BBL’s four-letter codes, species numbers, and suggested band size. Measurements or plumage characters distinguishing the bird from similar species are presented, and geographic variation is noted. Molts and their extent and timing are given, and then all the possible age categories are listed and described. This follows the BBL protocol in defining hatch year (HY) as the calendar year in which the bird was hatched. On the following 1 January the bird becomes second year (SY) for the next 12 months, etc. A bird in at least its second calendar year, but maybe older, is after hatch year (AHY). It then becomes after second year (ASY) on the next 1 January, if that plumage can be discerned. This somewhat confusing information is combined with any determination of sex in a summary bar graph for nearly all species. For each month, by all available age criteria, the percentage of birds of that age group that is likely to be successfully aged is indicated. The bars are patterned for the four likelihoods: >95%, 25–95%, 5–25%, and <5%. Symbols for male and female are added if sex can be determined in that month. The references at the end of each species account are invaluable and allow the user to check original publication of the described characters and also provide the basis for new discoveries, updates, or corrections.
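The calendar-year logic behind these codes can be sketched in a few lines; the mapping below is my own simplified illustration of the scheme as described above, not the BBL’s actual software or complete code set:

```python
# Sketch of BBL calendar-year age codes: HY = hatch year, SY = second year,
# AHY = after hatch year, ASY = after second year.  Every code advances on
# 1 January.  Simplified: the real BBL scheme has further codes (TY, ATY, ...).
ADVANCE = {
    "HY": "SY",    # a hatch-year bird becomes second-year on 1 January
    "SY": "ASY",   # ...and at least "after second year" a year after that
    "AHY": "ASY",  # minimum-age codes advance on 1 January as well
    "ASY": "ASY",  # ASY is a floor: at least the third calendar year
}

def age_code_on_jan1(code: str) -> str:
    """Return the age code a bird carries after the next 1 January."""
    return ADVANCE[code]

print(age_code_on_jan1("HY"))   # SY
print(age_code_on_jan1("AHY"))  # ASY
```

The key point the review makes is that the codes track calendar years, not the bird’s hatching anniversary, which is why a bird hatched in late summer still “ages” on 1 January.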
The publisher’s website is provided with just this in mind for the future. When I first saw Pyle’s (1997) system of summarizing all possible age and sex groupings under each species in North America it brought to mind a similar method used by Lars Svensson (1970) in his Identification Guide to European Passerines. The drawback to this method is that the user has to read the whole account and then assign the “best fit” age or sex category to the bird in hand. This system often requires a person unfamiliar with the species to have to choose the “best” character or perhaps be unsure and fail to make a more precise determination. Pyle tries to minimize this problem by presenting all the criteria, commenting on their reliability, then providing an estimate of what percentage of that species can be so determined in each month (the bar graph). The alternative is a form of binomial or multinomial key. This latter method has the advantage of leading the user from the most reliable characters down the key to the least reliable, while dropping out conclusions about the bird’s age and sex category along the way. Some banders or biologists using skin collections may remember this method from unpublished draft keys from the BBL or A Bird-Bander’s Guide to the Determination of Age and Sex of Selected Species (Wood 1969). For specific locations or smaller geographic areas, this may arguably still be the better method for training people unfamiliar with a species or plumage. For the more experienced, a Svensson/Pyle type of manual may be easier to use when only a single detail is to be confirmed. As a guide’s geographic scope increases to an area the size of North America, many populations or subspecies will introduce variation and reduce the power of discrimination in a binary key. Reading of the acknowledgements from Pyle et al.
(1987) informed me that Pyle had met Svensson in Sweden in 1981 and described conversations and birding trips with him as “the initial inspiration to someday create Identification Guide to North American Passerines.” Now data for the rest of the nonpasserines have followed in the same Svensson style, with many, many refinements and improvements by Pyle and others. The author particularly thanks Steve Howell for initial drafts on gulls, terns, skuas, and jaegers, many comments, and 62 figures. Siobhan Ruck prepared many more figures and commented on diurnal raptors, while David DeSante published the series. In summary, the sheer volume of detail, well referenced and presented in a standard format, makes this volume (Part II) and the original Part I essential for any serious ornithological reference library, museum, reference collection of bird skins, or bird-banding operation. Field ornithologists who wish to improve their basic knowledge of bird biology will find it informative and key to much more detailed observation, at least at closer ranges. The Internet is now providing all of us with better and better images, and this book will be useful in identifying many of these photographed birds to age or sex. The species accounts are not exactly readable in the normal sense. The necessary abbreviations are numerous and far from clear at first reading. But it is worth the effort to try to learn them, or at least keep looking them up. Standardization of terminology and measurement methods is always good, and bird banders will appreciate the standardization with BBL codes and the BBL’s acceptance of age and sex categories, by month, used by Pyle. There will always be a place for well-written binary keys to age and sex for birds “up close.” There is an equal place for the invaluable mass of data in the introduction, species accounts, and literature cited of Pyle’s Part II.—TREVOR L.
LLOYD-EVANS, Manomet Center for Conservation Sciences, P. O. Box 1770, Manomet, MA 02345-1770. E-mail: tlloyd-evans@manomet.org. LITERATURE CITED HOWELL, S. N. G., C. CORBEN, P. PYLE, AND D. I. ROGERS. 2003. The first basic problem: a review of molt and plumage homologies. Condor 105:635–653. HUMPHREY, P. S., AND K. C. PARKES. 1959. An approach to the study of molts and plumages. Auk 76:1–31. PYLE, P., S. N. G. HOWELL, R. P. YUNICK, AND D. F. DESANTE. 1987. Identification guide to North American passerines. Slate Creek Press, Bolinas, CA. PYLE, P. 1997. Identification Guide to North American Birds, Part I. Slate Creek Press, Bolinas, CA. SIBLEY, D. A. 2000. The Sibley guide to birds. Alfred A. Knopf, New York. STRESEMANN, E., AND V. STRESEMANN. 1966. Die Mauser der Vögel. Journal für Ornithologie 111:378–393. SVENSSON, L. 1970. Identification Guide to European Passerines, first ed. Lars Svensson, Stockholm. WOOD, M. 1969. A bird-bander’s guide to the determination of age and sex of selected species. Pennsylvania State University, University Park, PA. work_5aisax4sfjhw3kgnmg7qemfdlm ---- Meteoritics & Planetary Science 39, Nr 6, 787–790 (2004) Abstract available online at http://meteoritics.org 787 © Meteoritical Society, 2004. Printed in USA. The Chicxulub Scientific Drilling Project (CSDP) Jaime URRUTIA-FUCUGAUCHI,1 Joanna MORGAN,2 Dieter STÖFFLER,3 and Philippe CLAEYS4 1Instituto de Geofisica, Universidad Nacional Autónoma de México, Coyoacán 04510, Mexico City, Mexico 2Earth Sciences and Engineering, Imperial College London, SW7 2AZ, United Kingdom 3Natural History Museum, Humboldt University, D-10099, Berlin, Germany 4Department of Geology, Vrije Universiteit Brussel, Pleinaan 2, B-1050, Brussels, Belgium In the June and July issues of Meteoritics & Planetary Science, we present the initial results of the Chicxulub Scientific Drilling Project (CSDP).
The Chicxulub crater is a unique impact structure associated with one of the most dramatic geological events in the Phanerozoic. It is buried beneath carbonate sediments in the northern Yucatán Peninsula, southeastern Mexico. It has a diameter of 180–200 km and a complex multi-ring structure (Fig. 1). It represents the youngest and best preserved of only three large multi-ring impact basins documented in the terrestrial geological record. The impact that excavated the Chicxulub crater has been dated at 65 Ma and, therefore, is related to the event that marks the Cretaceous/Tertiary (K/T) boundary and the extinction of about 75% of species (50% of genera, including the dinosaurs) (Alvarez et al. 1980; Hildebrand et al. 1991). This crater presents a unique opportunity to discover important new information on the formation and characteristics of such large multi-ring impact craters, their environmental and climatic effects, and their implications for geological and biological evolution. The Chicxulub crater has been the focus of numerous studies, particularly in the last decade. Studies include offshore and onshore geophysical surveys, drilling/coring, laboratory analyses of impact breccias and melt, and computer modeling (e.g., Sharpton et al. 1992; Morgan et al. 1997, 2000; Collins et al. 2002; Ebbing et al. 2001). Drilling inside the structure and in its immediate vicinity has been carried out within the PEMEX oil exploration program, with intermittent core recovery, and, more recently, within the National University of Mexico (UNAM) Chicxulub drilling program, resulting in continuous core recovery. The need for drilling/coring within the deeper part of the impact basin, where the crater floor is buried under several hundred meters of Tertiary carbonate sediments, was already recognized during the early studies of the crater because only the uppermost crater-related deposits had been drilled (Urrutia-Fucugauchi et al. 1996; Rebolledo-Vieyra et al. 2000).
With the initiation of the International Continental Scientific Drilling Program (ICDP), interest in drilling the crater increased and CSDP was developed as part of an international collaboration within the framework of ICDP. The drilling project was financed by ICDP and the National University of Mexico (UNAM) and was coordinated by UNAM. The CSDP borehole Yaxcopoil-1 was drilled from December 2001 through March 2002 in the southern sector of the crater at ~62 km radial distance from the approximate crater center at Chicxulub Puerto (Fig. 1). The Yaxcopoil-1 (Yax-1) borehole was planned to core continuously into the lower part of the post-impact carbonate sequence, the impact breccias, and the displaced Cretaceous rocks. The drill site at Hacienda Yaxcopoil (Figs. 2a and 2b) was selected based on integration of gravity, magnetics, magnetotelluric and offshore seismic surveys, pre-existing boreholes of PEMEX and UNAM programs, site conditions and access, land ownership, water availability, and an environmental impact assessment (Urrutia-Fucugauchi et al. 2001). An INDECO rotary drill rig from Perforaciones Industriales Termicas, S. A. (PITSA) and the coring device from the Drilling, Observation, and Sampling of the Earth’s Continental Crust (DOSECC) were used for the drilling/ coring operations. Rotary mode was used to drill from the surface to the depth of 404 m. After running wireline logs and casing the borehole, drilling was continued with wireline coring of the carbonate sequence and impact lithologies. Cores 63.5 mm in diameter were obtained to the depth of 993 m. At this depth, the HQ coring string became stuck and eventually was left in the hole as casing. Coring resumed with an NQ string (core diameter of 47.6 mm) to the final depth of 1511 m. Geophysical logging was conducted after completion of drilling through the first 400 m and reaching the final depth of 1511 m. 
Observations included hole deviation and azimuth (caliper, SP), magnetic susceptibility, radioactive element contents, gamma ray, electrical resistivity, temperature, and conventional and waveform sonic. The Yax-1 borehole is open and available for experiments under a ten-year agreement between the Yaxcopoil Hacienda and UNAM. Several geophysical logging campaigns have been conducted after completion of the drilling project. A core laboratory and temporary repository was established at the University of Yucatán in Mérida. Facilities for digital photography of core boxes and core segments and an automated digital core scanner were available for documentation. Cores were transported from the drill site daily, and information was made available on the CSDP web site through the ICDP information system. After completion of drilling operations, cores were packed and shipped to the UNAM Core Repository in Mexico City. Cores were further examined and then cut longitudinally in halves (one half is the project archive, while the other one is available for sampling by the CSDP science team). Full cores and halves were digitally scanned for high-resolution images, some of which are presented in the papers included in these two volumes of MAPS. The CSDP recovered core from the lower Tertiary carbonate sequence, impact breccias, and underlying Cretaceous carbonates, down to the depth of 1511 m. Core recovery was 98.5%. The Tertiary carbonates were cored between 404 m and 795 m. Impact breccias were cored between 795 m and 895 m, presenting an unexpectedly thin sequence of impactites, given the greater thickness of such breccias in wells located both toward and outward of the crater center relative to Yax-1. Beneath the impact breccias, a sequence of carbonate rocks (limestones, dolomites, and anhydrite) was recovered.
The dip of the carbonates varies from subhorizontal to up to 60 degrees, and these rocks contain thin dikes of breccia and melt. Impact breccias have been divided into six units which, from top to bottom, are (preliminary log names): redeposited suevites (794.63–808.02 m), suevites (808.02–822.86 m), chocolate-brown melt breccias (822.86–845.80 m), glass-rich variegated suevites (845.80–861.06 m), green monomictic autogene melt breccias (861.06–884.92 m), and variegated polymictic allogenic clast melt breccias (884.92–894.94 m) (Dressler et al. 2003). Fig. 1. Location of the CSDP borehole Yax-1. Fig. 2a. Drilling operations at Hacienda Yaxcopoil. A similar subdivision with a different classification of the lithologies has been proposed in Dressler et al. (2003) and in other papers included in these volumes (e.g., Tuchscherer et al. 2004; Stöffler et al. 2004). The Cretaceous sedimentary rocks below about 895 m are cut by dikes of polymict breccias and display zones of monomict brecciation. They appear to represent “megablocks” displaced by the impact event. Anhydrite layers, whose thickness varies from a few centimeters up to 15 m, make up some 27% of the megablocks. Organic-rich layers and oil-bearing units are present at depths between 1410 and 1455 m (according to Kenkmann et al. [2004], this layer starts at 1263 m). Scientific study of Yax-1 core samples and complementary studies are allowing researchers to: 1) evaluate the link between the Chicxulub crater and the global K/T boundary layer and mass extinction; 2) study large-scale cratering processes; 3) investigate post-impact crater modifications, environmental evolution, and faunal recovery; and 4) provide additional constraints on target compositions and deformation styles characteristic of the zone flanking the collapsed transient cavity. Initial studies have already provided important progress on these issues and, at the same time, left questions unanswered and opened new lines of inquiry. For instance: 1. The proportion of basement material within the impact breccias is higher than observed at other craters, opening exciting possibilities for investigations of excavation models and the nature of the Yucatán crust. 2. The impact breccia layers are thinner than predicted, with implications for crater structure, breccia emplacement, and crater formation. 3. Although there are layers of anhydrite in the target rocks, there is a surprising lack of evidence for anhydrite in the impact breccias. 4. Several different analyses document the importance of post-impact hydrothermal alteration within the core samples. 5. Recovered blocks of Cretaceous rocks and impactites provide valuable material for laboratory analyses, despite the effects of subsequent alteration. 6. The origin of the thick carbonate sequence beneath the breccias (>600 m) has provoked an interesting debate and requires further study. 7. Initial studies of the impactite and early Paleocene stratigraphy have provided interesting age constraints and information on sequence completeness (for example, the identification of the 29r/29n chron boundary just above the appearance of the first Danian fossils), provoking a debate about the age of the Chicxulub impact. In two consecutive volumes of MAPS, we present the initial results of the CSDP Science Team. These volumes include papers on the petrology, mineralogy, geochemistry, hydrothermal alteration, and degree of shock of the impactites, as well as interpretations of their deposition, including numerical modeling. Fig. 2b. The CSDP rig at Hacienda Yaxcopoil. Papers on the late Cretaceous section include studies of their stratigraphy, sedimentology, structure and deformation, and cross-cutting dikes, as well as interpretations of their origin.
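As a quick consistency check on the impactite unit boundaries listed in the text, one can compute each unit’s thickness and confirm that the units tile the reported ~100 m breccia interval (795–895 m); the depths below are taken from the text, the code itself is only an illustration:

```python
# Impactite unit boundaries (top, bottom) in meters depth, as given in the text.
units = {
    "redeposited suevites":                                (794.63, 808.02),
    "suevites":                                            (808.02, 822.86),
    "chocolate-brown melt breccias":                       (822.86, 845.80),
    "glass-rich variegated suevites":                      (845.80, 861.06),
    "green monomictic autogene melt breccias":             (861.06, 884.92),
    "variegated polymictic allogenic clast melt breccias": (884.92, 894.94),
}

# Thickness of each unit is simply bottom depth minus top depth.
for name, (top, bottom) in units.items():
    print(f"{name}: {bottom - top:.2f} m")

# The intervals are contiguous, so the total telescopes to 894.94 - 794.63.
total = sum(bottom - top for top, bottom in units.values())
print(f"total impactite thickness: {total:.2f} m")  # 100.31 m
```

The ~100.31 m total agrees with the statement above that the impact breccias were cored between 795 m and 895 m.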
Other contributions include physical property measurements on the core, seismic-, bio- and magneto-stratigraphic studies, and evidence for the existence of a hydrothermal plume in the early Tertiary ocean. The guest editors would like to acknowledge the valuable contribution of the reviewers of the papers submitted for the special volumes. Our thanks to: D. Ames, K. Benn, R. J. Bodnar, T. Bralower, R. Buffler, O. Campos-Enriquez, A. Deutsch, H. Dypvik, S. D’Hondt, A. Goguitchaichvili, S. Gulick, M. Harting, L. Hecht, A. Hildebrand, F. Hörz, B. Huber, G. Keller, T. Kenkmann, D. Kent, K. Kirsimae, C. Koeberl, D. Kring, F. Kyte, V. Luciani, B. Milkereit, A. Montanari, O. Morton, H. R. Newsom, R. Norris, J. Ormo, H. Palme, L. Pesonen, K. Pope, W. U. Reimold, E. Robin, P. Rochette, J. Safanda, J. Smit, A. M. Soler, R. Stoessel, D. Stüben, R. Tagle, D. H. Tarling, A. Therriault, D. Thomas, L. Thompson, F. Tsikalas, R. Walker, J. Whitehead, L. Zurcher. We also thank the Editor, Tim Jull, and the editorial staff of Meteoritics & Planetary Science. Their professional assistance and patience have made this effort possible. Acknowledgments–The CSDP is an international cooperation research project. The support received from numerous individuals and institutions is gratefully acknowledged. We are thankful for the support and encouragement of ICDP Chairman Prof. Rolf Emmermann. ICDP, UNAM, CONACYT, Government of Yucatán, Hacienda Yaxcopoil, Universidad Autónoma de Yucatán, and PEMEX have provided financial and/or logistic support. REFERENCES Alvarez L. W., Alvarez W., Asaro F., and Michel H. V. 1980. Extraterrestrial cause for the Cretaceous/Tertiary extinction. Science 208:1095–1108. Collins G., Melosh H. J., Morgan J., and Warner M. R. 2002. Hydrocode simulations of crater collapse and peak-ring formation. Icarus 157:24–33. Dressler B., Sharpton V. L., Morgan J., Buffler R., Morán D., Smit J., Stöffler D., and Urrutia-Fucugauchi J. 2003.
Investigating a 65 Ma-old smoking gun: Deep drilling of the Chicxulub impact structure. Eos 84: 125, 130. Ebbing J., Janle P., Koulouris J., and Milkereit B. 2001. 3D gravity modeling of the Chicxulub impact structure. Planetary and Space Science 49:599–609. Hildebrand A. R., Penfield G., Kring D. A., Pilkington M., Camargo A., Jacobson S. B., and Boynton W. V. 1991. A possible Cretaceous-Tertiary boundary impact crater on the Yucatán Peninsula, Mexico. Geology 19:867–871. Morgan J., Warner J., and Chicxulub Working Group. 1997. Size and morphology of the Chicxulub impact crater. Nature 390:472– 476. Morgan J., Warner M., Collins G. S., Melosh H. J., and Christianson G. L. 2000. Peak ring formation in large impact craters. Earth & Planetary Science Letters 183:347–354. Rebolledo-Vieyra M., Urrutia-Fucugauchi J., Marin L., Trejo A., Sharpton V. L., and Soler-Arechalde A. M. 2000. UNAM scientific drilling program of the Chicxulub impact crater. International Geology Review 42:948–972. Sharpton V. L., Brent Dalrymple G., Marín L. E., Ryder G., Schuraytz B. C., and Urrutia-Fucugauchi J. 1992. New links between the Chicxulub impact structure and the Cretaceous/Tertiary boundary. Nature 359: 819–821. Stöffler D., Artemieva N. A., Ivanov B. A., Hecht L., Kenkmann T., Schmitt R. T., Tagle R. A., and Wittmann A. 2004. Origin and emplacement of the impact formations at Chicxulub, Mexico, as revealed by the ICDP deep drilling Yaxcopoil-1 and by numerical modeling. Meteoritics & Planetary Science 39:1035–1067. Tuchscherer M., Reimold W. U., Koeberl C., and Gibson R. L. 2004. Major and trace element characteristics of impactites from the Yaxcopoil-1 borehole, Chicxulub structure, Mexico. Meteoritics & Planetary Science. This issue. Urrutia-Fucugauchi J., Marin L., and Trejo-Garcia A. 1996. UNAM scientific drilling program of the Chicxulub impact structure- Evidence for a 300-km crater diameter. Geophysical Research Letters 23:1516–1568. 
Urrutia-Fucugauchi J., Morán Zenteno D., Sharpton V. L., Buffler R., Stöffler D., and Smit J. 2001. The Chicxulub Scientific Drilling Project. Infrastructura científica y desarrollo tecnológico 3. Mexico City: Instituto de Geofisica, Universidad Nacional Autónoma de México. 45 p. work_5akkxa2bqvemdali6ger7ex4wy ---- doi:10.1016/j.mejo.2005.07.002 Review of CMOS image sensors. M. Bigas a, E. Cabruja a,*, J. Forest b, J. Salvi b. a Centre Nacional de Microelectrònica, IMB-CNM (CSIC), Campus Universitat Autónoma de Barcelona, 08193 Bellaterra, Barcelona, Spain. b Institut d’Informàtica i Aplicacions, Campus Montilivi, Universitat de Girona, 17071 Girona, Spain. Microelectronics Journal 37 (2006) 433–451. www.elsevier.com/locate/mejo. Received 28 October 2004; received in revised form 29 June 2005; accepted 5 July 2005. Available online 6 September 2005. Abstract The role of CMOS Image Sensors, since their birth around the 1960s, has been changing considerably. Unlike in the past, current CMOS Image Sensors are becoming competitive with regard to Charge-Coupled Device (CCD) technology. They offer many advantages with respect to CCDs, such as lower power consumption, lower voltage operation, on-chip functionality and lower cost. Nevertheless, they are still too noisy and less sensitive than CCDs. Noise and sensitivity are the key factors for competing with industrial and scientific CCDs.
It must also be pointed out that there are several kinds of CMOS Image Sensors, each designed to satisfy the demand in different areas, such as digital photography, industrial vision, medical and space applications, electrostatic sensing, automotive applications, instrumentation and 3D vision systems. In the wake of that, a lot of research has been carried out, focusing on problems to be solved such as sensitivity, noise, power consumption, voltage operation, imaging speed and dynamic range. In this paper, CMOS Image Sensors are reviewed, providing information on the latest advances achieved, their applications, the new challenges and their limitations: in short, the state of the art of CMOS Image Sensors. © 2005 Elsevier Ltd. All rights reserved. Keywords: CMOS image sensors; APS 1. Introduction 1.1. Introduction Currently, there are many different Imaging Systems suitable for different purposes, depending upon their final application. Digital Cameras, Camcorders, Webcams, Security cameras or IR-cameras are well-known Imaging Systems. Moreover, as the purposes are different, the technologies used differ from each other. This situation has been possible thanks to the fact that Imaging Technologies, mainly those concerning CMOS imagers, have been improving their performance, functional capability and flexibility in recent years. CMOS image sensors have received much attention over the last decade, because their performance is very promising compared to CCDs. New horizons can be opened, like ultra low power or camera-on-chip systems. 0026-2692/$ - see front matter © 2005 Elsevier Ltd. All rights reserved. doi:10.1016/j.mejo.2005.07.002 * Corresponding author. Tel.: +34 935947700; fax: +34 935801496. E-mail addresses: marc.bigas@cnm.es (M. Bigas), enric.cabruja@cnm.es (E. Cabruja).
Owing to this situation and the latest developments within this field, this paper reviews CMOS image sensors since 1997 in order to continue and update the review reported by E. Fossum [1]. In order to understand why CMOS image sensors have emerged as a strong alternative to CCDs, it is important, first, to highlight the advantages and disadvantages of CMOS image sensors. 1.2. Advantages and disadvantages The main advantages of CMOS imagers are: 1. Low power consumption. Estimates of CMOS power consumption range from one-third of to more than 100 times less than that of CCDs [2]. Besides, they work at low voltage: CMOS imagers need only one supply voltage, whereas CCDs need 3 or 4. 2. Lower cost compared to CCD technology. 3. On-chip functionality and compatibility with standard CMOS technology. CMOS imagers allow monolithic integration of readout and signal processing electronics. In 2001, a study of cross-contamination between CMOS image sensor and IC production showed no problems [3]. A sensor can integrate various signal and image processing blocks such as amplifiers, ADCs, circuits for colour processing and data compression, etc. on the same chip. 4. Miniaturisation: although important limitations exist, the level of integration is rather high [4]. 5. Random access of image data. 6. Selective read-out mechanism [4,5]. 7. High-speed imaging: the flexibility and the possibility of acquiring images in a very short period of time [6]. 8. Avoidance of blooming and smearing effects, which are typical problems of CCD technology [6]. As outlined before, despite these advantages, there are still significant disadvantages of CMOS image sensors compared to CCD technology. Therefore, these problems need to be solved so that CMOS image sensors can compete in any area. These disadvantages are: 1.
Sensitivity: The basic quality criterion for pixel sensitivity is the product of its Fill Factor and its Quantum Efficiency (FF×QE), where the Fill Factor is the ratio of the light-sensitive area to the pixel's total size, and the Quantum Efficiency is the ratio of the photon-generated electrons that the pixel captures to the photons incident on the pixel area. It must be pointed out that Active Pixel Sensors (APS) have reduced sensitivity to incident light, due to a limited Fill Factor and, hence, lower quantum efficiency. 2. Noise: CMOS image sensors suffer from different noise sources, which set the fundamental limits of their performance, especially under low illumination. 3. Dynamic range (DR): Dynamic Range, which is the ratio of the saturation signal to the rms noise floor of the sensor, is limited by the photosensitive-area size, the integration time and the noise floor. 4. Lower image quality than CCDs. In order to overcome the disadvantages outlined before and, also, to improve the current advantages, the research on CMOS image sensors since 1997 has been mostly focused on the following areas: • Low noise • High dynamic range • High sensitivity and high fill factor • Low power consumption • Low voltage operation • High speed imaging In spite of the high number of applications of imaging systems, almost all of them have the same basic functions: (1) Optical gathering of photons (lens), (2) Wavelength discrimination of photons (filter), (3) Detector for photon-to-electron conversion (photodiode), (4) A method to read out the detector (CCD), (5) Timing, control, and drive electronics for the sensors, (6) Signal processing electronics, such as for Correlated Double Sampling (CDS) or for colour processing, (7) Analog-to-digital conversion, (8) Interface electronics. 1.3. Historical background 1.3.1.
Before 1997 CMOS image sensors could not compete with CCD technology in the past, although the first solid-state imagers presented in the 1960s and early 1970s used MOS diodes as light-sensitive elements, and during the 1960s several works were carried out in the solid-state image sensor field, using NMOS, PMOS and bipolar processes [1]. For instance, photodiode image sensors with MOS scanning circuits were known from the mid-1960s. However, they were not embraced commercially because of poor performance and large pixel size (for that time) relative to that of CCDs. In fact, even though CMOS image sensors appeared in 1967, CCDs have prevailed since their invention in 1970 [7]. Full-analog CCDs have dominated vision applications owing to their superior dynamic range, lower fixed-pattern noise (FPN), smaller pixels and higher sensitivity to light [2]. In the early 1990s, CMOS image sensors re-emerged as an alternative to CCDs thanks to the advantages pointed out before. Passive pixel CMOS arrays were the first generation. Major improvements in signal-to-noise ratio for photodiodes and charge-injection devices (CIDs) could be made by adding an amplifier per column or per row. Sensors that implement a buffer per pixel, acting as a simple source follower, have therefore been known as active pixel sensors (APS) and represent the second generation of CMOS imagers [8]. CMOS APS (Active Pixel Sensors) promised to provide lower noise readout, improved scalability to large array formats and higher speed readout compared to PPS. 1.3.2. After 1997 Recently, research has focused mainly on the improvement of the APS, because the APS is the pixel circuit that has shown the best performance and flexibility. In order to compete strongly with CCD technology, the aim of researchers has been to obtain higher performance imaging systems based on CMOS technology.
Therefore, there have been several reports on improving the fill factor (FF) with low power consumption, low voltage operation, low noise, high speed imaging and high dynamic range. Moreover, a little research has been done on other topics such as pixel shape optimization [9], pixels on SOI substrate [10], high resolution [11], APS with variable resolution [12,13], self-correcting pixels [14,15] and pixels for low light [16–18], etc. On the other hand, new applications have emerged due to CMOS imager development. Automotive applications, imaging for cellular or static phones, computer video, space, medical, digital photography and 3D applications have been improved. These numerous application areas led CMOS technology to make a breakthrough on two fronts in 2000: sensors for computers and cell phones on the low end, and ultra-high-speed, large-format imaging on the high end [19]. Moreover, new technologies and architectures appeared due to scaling effects, for instance, Thin Film on ASIC (TFA) technology [20–24] and Complementary Active Pixel Sensors (CAPS) [25–31]. Finally, some studies have been carried out on radiation effects and how radiation induces dark current in APS [32–39], and on the effect of hot carriers [40]. In addition, it is well known that heavy metals such as Cu, Ni, Fe or Zn, which appear in some CMOS image sensor processes, can cause defects in silicon and influence gate oxide quality in VLSI circuits, so the cross-contamination between CMOS image sensor and IC technologies has been studied as well [3]. 1.4. CCD technology limitations CCD technology has prevailed since its invention in 1970, because it provided better solutions to the typical problems, such as FPN, and it had a higher fill factor, smaller pixel size, larger format, etc. than CMOS, which could not compete with CCD performance. Hence, research was mainly focused on CCD technology.
Nevertheless, CCD technology has some limitations. For instance, in a CCD-based system, the basic function often consumes several watts (1–5 W) and is, therefore, a major drain on the battery. Furthermore, unlike CMOS image sensors, CCDs cannot be monolithically integrated with analog readout and digital control electronics [1]. New applications have appeared as well. For instance, in the automotive field, the image sensor has to fulfil specifications concerning the temperature range, the range of illumination and the power dissipation. CCD image sensors cannot guarantee their functionality over the whole required temperature range, or cover all lighting conditions during daytime. They cannot operate beyond quite restrictive ranges of illumination and temperature. For instance, a non-illuminated CCD is completely full of electrons after roughly 1 min at room temperature, and the dark current doubles approximately every 7 K. This means that noise increases drastically with the temperature of the chip. There is also a need to acquire images in a very short time for high-speed applications, so a short integration time is required. This leads to image sensors equipped with synchronous shutters in order to avoid blur [6]. Other typical CCD problems are blooming and smearing [6]. CCDs are high-capacitance devices, so they suffer from high power dissipation. CCDs need many different voltage levels, they are sensitive to radiation and their readout rate is limited. 1.5. CMOS image sensors (APS) as an alternative to overcome CCD limitations CMOS imagers began to be a strong alternative in the early 1990s (see Fig. 1). Their most important feature was that they would satisfy the demand for low-power, miniaturised and cost-effective imaging systems. Moreover, CMOS image sensors offered the possibility to monolithically integrate a significant amount of VLSI electronics on-chip and reduce component and packaging costs [1].
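The dark-current rule of thumb quoted above (doubling roughly every 7 K) compounds quickly with chip temperature, which is why noise rises so drastically with heating. A minimal sketch, where the reference level and temperatures are illustrative assumptions rather than figures from the text:

```python
def dark_current(temp_k, ref_level=1.0, ref_temp_k=293.0, doubling_k=7.0):
    """Scale a reference dark-current level using the 'doubles every ~7 K' rule."""
    return ref_level * 2.0 ** ((temp_k - ref_temp_k) / doubling_k)

# 7 K above the reference doubles the dark current; 35 K above multiplies it by 32.
for t in (293.0, 300.0, 314.0, 328.0):
    print(f"{t:.0f} K: {dark_current(t):5.1f} x reference dark current")
```

At 35 K above the reference temperature the dark current has already grown 32-fold, illustrating why uncooled CCD operation over the automotive temperature range is problematic.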
However, passive pixel CMOS arrays were only the first generation; CMOS Active Pixel Sensors (APS) have offered better performance. Up to now, a lot of research and many studies have been done on this topic, and CMOS APS technology has demonstrated noise, quantum efficiency and dynamic range performance comparable to CCDs [41]. However, CCDs still offer a better image quality, especially for digital still applications. Therefore, CCDs are still superior to CMOS image sensors as far as signal-to-noise ratio and dynamic range are concerned. This means that CCDs are still the first choice for high quality still photography. In spite of the huge amount of work that has been carried out in this area, more research on reducing the noise and increasing the sensitivity of CMOS imagers is needed in order to compete with industrial and scientific CCDs. For instance, if the noise issues (mainly reset noise and dark current shot noise) can be solved with CMOS imagers, they might be able to challenge CCDs in digital still applications [5]. Thanks to the fact that there were niches to cover such as high-speed imaging, motion analysis or detection, etc. [42], CMOS APS technology has been maturing and, currently, there are different kinds of pixel circuits depending on their purpose. For instance, logarithmic APS, capacitive transimpedance amplifier (CTIA) APS [43], APS with shutter [44] or complementary active pixel sensors (CAPS) [31]. Currently, there is no CMOS image sensor that can provide the global quality of a CCD in terms of noise, sensitivity, dynamic range and so on. This means that it is possible to reach or improve one or two of these characteristics with a specific architecture, but not all together. Fig. 1. Summary of the main advantages of CCD and CMOS image sensors. Fig. 2. Downscaling of sensitivity with pixel options. 1.6.
New applications, new challenges The improvement of CMOS image sensors has opened up new application areas [45], owing to their lower cost. They can compete with CCDs in applications such as IR vision (systems for automobile drivers under fog and night driving conditions, security cameras, baby monitors that can 'see' in the dark, etc.). Besides these, imaging or vision systems for X-ray, space, medical, 3D, consumer electronics, automotive or low-light applications are required, and most of them need highly integrated imaging systems, so CMOS image sensors are well placed to jump into the market. Moreover, imaging or vision systems have to offer, in order to be ideal, good imaging performance with low noise, no lag, no smear, good blooming control, random access, simple clocks and fast readout rates. 1.7. CMOS image sensor limitations and device scaling considerations 1.7.1. Industry trend The technology advanced from 2 µm CMOS in 1993 to 0.25 µm in 1996 [4], and less than 0.1 µm will also be possible. So, considering scaling effects has been necessary in order to know where and what the limits of CMOS imagers are. Some scaling considerations were studied in 1996 [46] and in 1997 [4]. The question was whether the image sensing performance of CMOS imagers would get better or worse as the technology was scaled. The question arose because, if CMOS imagers scaled down as fast as industry-standard CMOS technologies, CMOS imagers would achieve a smaller pixel size than CCDs in the following years [4]. Anyway, it seemed clear that 'standard' CMOS technology, which provided good imaging performance at 2–1 µm without any process change, would need some modifications in its fabrication process and innovations in the pixel architecture in order to enable CMOS imagers to perform good quality imaging at the 0.25 µm technology generation and beyond [1].
In fact, CMOS imagers could not be scaled down using standard CMOS technology, because scaling effects increase the leakage current and reduce the dynamic range. That is to say, performance was getting worse. Thus, technological changes in CMOS technology are needed in order to reach the imaging performance of CCDs with a CMOS imager. In 2000 [20] a scaling perspectives study was done and, certainly, new technological processes appeared, such as PPD [20] and TFA imagers [20–24] (see Fig. 2). Later, CAPS appeared as well [25–31]. TFA imagers are immune to negative scaling impacts on sensitivity. Even more, they offer high sensitivity and high dynamic range; nevertheless, this technology is not suitable down to 0.1 µm. Thus, the limitation of conventional APS has been demonstrated: the conventional pixel architecture (APS) cannot work properly at 0.1 µm technology or below, because the low supply voltage of these technologies implies a decrease in the saturation level and in the light sensitivity that is not acceptable. Nevertheless, an alternative architecture called CAPS (Complementary APS) came on the scene [25–31]. CAPS are a possible way to design a highly integrated, high performance CMOS image sensor in deep sub-quarter-micron technology, because the CAPS architecture has a very attractive low-voltage operation capability, for instance 1.0 V [25,26,29,30]. Moreover, the possibility of reaching low power consumption and low voltage operation depends on the capability to scale down the current technology. 2. CMOS image sensors CMOS image sensors are mixed-signal circuits containing pixels, analog signal processors, analog-to-digital converters, bias generators, timing generators, digital logic and memory. 2.1. Overall architecture There are several CMOS imager topologies depending on their purpose. Nevertheless, the CMOS imager architecture can be divided into four main blocks, as Fig. 3 shows: 1. Pixel Array 2. Analog Signal Processors 3. Row and Column Selector 4.
Timing and Control Fig. 3. CMOS image sensor floorplan. Fig. 4. A photodiode-type PPS schematic. 2.2. Pixel circuits 2.2.1. Traditional imagers or photodetectors 1. Photodiodes: semiconductor devices responsive to high-energy particles and photons. They operate by absorption of photons or charged particles, and the collected electrons decrease the voltage across the photodiode in proportion to the incident power. Currently, photodiode CMOS pixels are the most popular ones. 2. Photogates (PG) or CIDs (charge-injection devices): semiconductor devices that also collect the photon-generated electrons, but only when the photogate is biased to a high potential. 3. CCD (charge-coupled device): a device architecture based on the series and parallel connection of capacitors, which are made using a dedicated semiconductor process. 2.2.2. Pixel circuits Pixel circuits are mainly divided into active pixels (APS) and passive pixels (PPS). APS are sensors that implement a buffer per pixel; this buffer can be as simple as a source follower. Currently, APS are the predominant devices, although in some cases PPS are also used. 2.2.2.1. Passive pixels (PPS). They were the first CMOS imagers. They are based on photodiodes without internal amplification. In these devices each pixel consists of a photodetector (e.g. a photodiode) and a transistor to connect it to a readout structure (see Fig. 4). After addressing the pixel by opening the row-select transistor (RS), the pixel is reset along the bit line and RS. In spite of their small pixel size capability and large fill factor, they suffer from low sensitivity and high noise [20] due to the large column capacitance with respect to that of the pixel. 2.2.2.2. Active pixels (APS). APS are sensors that implement a buffer per pixel. This buffer is a simple source follower.
It is well known that the insertion of a buffer/amplifier into the pixel improves the performance of the pixel. Power dissipation is minimal and, generally, less than that of a CCD, because each amplifier is only activated during readout. Nevertheless, it must be noted that APS technology has some disadvantages: conventional APS suffer from a high level of fixed pattern noise (FPN) due to wafer process variations that cause differences in the transistor threshold and gain characteristics. A solution is to use a Correlated Double Sampling (CDS) circuit, which can almost eliminate the threshold variations that cause offsets in the video background. 2.2.2.3. Photodiode (PD) type APS. The photodiode-type (PD) APS is considered the standard; it was described by Noble in 1968 [1,20] (see Fig. 5a). It consists of three transistors: a reset transistor, for resetting the photodiode voltage, and a source follower with a select transistor, for buffering the photodiode voltage onto a vertical-column bus. The PD APS is suitable for most mid- to low-performance applications. 2.2.2.4. Photogate (PG) type APS. It was introduced later than the PD APS, in 1993 [1,20], and it employs the CCD principle of operation concerning integration, transport and readout inside each pixel. Its charge transfer and correlated double sampling permit low-noise operation. Thus, it is suitable for high-performance and low-light applications. A typical schematic is shown in Fig. 5c. 2.2.2.5. Logarithmic APS. A non-linear output of the sensor can be desirable, since it permits an increase in the intra-scene dynamic range. Logarithmic APS are suitable for high dynamic range applications, although they suffer from large FPN. Owing to this fact they are currently not as widely used as before, although it must be pointed out that they are used a lot in silicon retinas. A typical schematic is shown in Fig. 5b. 2.2.2.6. CTIA APS pixels. As highlighted before, conventional APS suffer from a high level of FPN.
Thus, reducing FPN has been a challenge for quite a while, and some solutions have been reported. For instance, capacitive transimpedance amplifier (CTIA) pixels can achieve low FPN [43]. Besides this, high gain and low read noise are further advantages of using CTIAs. Fig. 5h and g show high-FPN and low-FPN CTIA APS pixel schematics, respectively. Fig. 5. (a) A photodiode-type APS schematic, (b) a photodiode-type logarithmic APS schematic, (c) a photogate-type APS schematic, (d) a photodiode-type shutter APS schematic, (e) a TFA pixel, (f) a photodiode-type CAPS schematic, (g) a photodiode-type low-FPN CTIA APS schematic, (h) a photodiode-type high-FPN CTIA APS schematic. 2.2.2.7. Pinned photodiode (PPD) pixel. The pinned photodiode [20], which was previously used in charge-coupled devices, was proposed early during the development of CMOS active-pixel image sensors. Besides lower pixel noise, the pinned photodiode offers reduced dark current. Therefore, it is an alternative to the PG, because its architecture offers higher sensitivities. 2.2.2.8. TFA pixels. Thin film on ASIC (TFA) pixels were developed in order to improve sensitivity [20–24]. They consist of an amorphous silicon (a-Si:H) multilayer system that is deposited on top of an ASIC. Their absorption coefficient for visible light is approximately 20 times larger than that of crystalline silicon (c-Si). In fact, TFA pixels are suitable for high dynamic range applications. A typical TFA pixel structure is shown in Fig. 5e. 2.2.2.9. Complementary active pixel sensors (CAPS). CAPS are a possible way to manufacture a highly integrated, high performance CMOS image sensor in deep sub-quarter-micron technology. Furthermore, the CAPS architecture has low power consumption and low-voltage operation capability. A typical schematic is shown in Fig. 5f.
The main new features are that the reset transistor is replaced by a PMOS and that a complementary signal path is implemented, so the pixel gives out two signal outputs: Voutn and Voutp [25–31]. 2.2.2.10. Other pixels. There are more pixel architectures [47], due to the huge number of possible applications, for instance, pixel circuits suitable for high-speed operation obtained by adding a shutter transistor [44] (see Fig. 5d). Also, Dierickx (FillFactory) introduced a novel 100% fill factor pixel. Overall, PG, PPD and TFA are detector structures that improve the sensitivity. Nevertheless, TFA provides significantly better values than PPD due to its higher fill factor and higher quantum efficiency [20]. 2.3. Analog signal processing Analog signal processing circuits are used in order to improve the performance and functionality of CMOS image sensors. However, they tend to reduce pixel density and increase the chip area due to the added functions. Some research has been done to overcome these problems [48]. Firstly, there are some well-known traditional signal processing systems. For instance, additional analog-signal-processing circuitry located at the periphery of the array permits the suppression of both temporal and fixed-pattern noise. As an example, Correlated Double Sampling (CDS) or Double Differencing Sampling (DDS) suppress FPN [49,43,50,51]. Others achieve high SNR [52] and ADC for camera-on-a-chip. Secondly, there are also several signal processing systems depending on the application. For instance, signal processing such as K-winners-take-all is suitable for 3D vision systems [53–58] and subpixel accuracy [59–65]. Smoothing [66], motion detection [67–69], programmable amplification, multiresolution imaging [70], video compression [71], dynamic range enhancement, discrete cosine transform (DCT), intensity sorting, etc. are other signal processing systems. 2.4. Readout methods Readout methods have an important influence on the sensor performance.
Thus, there are several readout methods depending on the desired application. The main requirements are: 1. low power dissipation 2. high resolution 3. good linearity 4. stable detector bias 5. low noise 6. high injection efficiency 7. small pixel size 8. good dynamic range. APS readout structures fulfil requirements 4 and 8, but not 3 and 7. On the other hand, the PPS readout structure fulfils 7 but not 3, 4, 6 and 8. Finally, the share-buffered direct-injection (SBDI) [72] readout structure combines the characteristics of both imagers. Therefore, it is possible to find suitable traditional readout methods for low FPN [73–76], for high frame rates [73], for increased linearity and DR [72], with high SNR and ultra-high sensitivity [77,78] and for infrared detectors [79]. Finally, there are also specialised readout methods suitable for ultra-high sensitivity, for focal plane arrays, for X-ray imaging, for emission-transmission medical imaging systems, for low light levels, detectors with self-triggered readout, offset-free column readout circuits and transversal-readout architectures. 2.5. Noise sources CMOS image sensors suffer from several noise sources. They set the fundamental limits on image sensor performance, especially under low illumination and in video applications. Therefore it is important to have an overview of all of them [80]. The noise sources in CMOS imagers can be divided into temporal noise [81] and fixed pattern noise (FPN) [43]. 2.5.1. Temporal noise It can be divided into different kinds of noise, depending on its source: • Pixel noise: photon shot noise, reset or kT/C noise (the thermal noise resulting from resetting after each pixel readout, where k is Boltzmann's constant, T the absolute temperature and C the junction parasitic capacitance), dark current shot noise and MOS device noise (thermal, 1/f or flicker, etc.)
• Column amplifier noise • Programmable gain amplifier noise • ADC noise • Overall temporal noise, noise floor or reading noise. 2.5.2. Fixed pattern noise (FPN) FPN has been a major limitation of CMOS imagers. FPN is the fixed variation in the output between pixels when a uniform input is applied. In a perfect image sensor, each pixel should give the same output for the same input, but in current image sensors the output of each pixel is different. FPN does not change as a function of time and can be characterized, assuming a linear pixel response, as a variation in the offset and gain at each pixel: Vij(t) = Gij · Xij(t) + Oij, where V is the pixel output, G the pixel gain, X the input and O the pixel offset; gain FPN is the pixel-to-pixel variation of Gij, and offset FPN is the pixel-to-pixel variation of Oij. Fig. 6. Photoconversion characteristics: linear vs logarithmic. 3. Latest developments in the field of CMOS imagers Research has mainly focused on APS in areas like low noise, high dynamic range, high sensitivity and high fill factor, low power consumption, low voltage operation and high speed imaging. Note that all these features are difficult to achieve in one design; hence, depending on the application, one feature will have more priority than another. 3.1. High dynamic range (DR) The ratio of the saturation signal to the rms noise floor of the sensor is known as the dynamic range. This is limited by the photosensitive-area size, the integration time and the noise floor, which is the noise generated in the pixel and the signal processing electronics. DR is limited by the integration time, although high dynamic range image readout can be achieved by using different exposure or integration times [82]. Secondly, with respect to the noise floor, the use of a linear readout is more suitable than a logarithmic one. So the dynamic range of a CMOS APS is strongly governed by the readout method.
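As a rough illustration of the pixel noise terms and the linear FPN model described above, the sketch below computes the kT/C reset noise for an assumed sense-node capacitance and evaluates Vij = Gij · Xij + Oij for a handful of pixels with random gain and offset spreads. The capacitance, temperature and spread values are illustrative assumptions, not figures from the text:

```python
import math
import random

K_B = 1.380649e-23     # Boltzmann's constant, J/K
Q_E = 1.602176634e-19  # elementary charge, C

def reset_noise(cap_farads, temp_k=300.0):
    """rms kT/C reset noise, returned in volts and in equivalent electrons."""
    v_rms = math.sqrt(K_B * temp_k / cap_farads)
    n_electrons = math.sqrt(K_B * temp_k * cap_farads) / Q_E
    return v_rms, n_electrons

def pixel_outputs(x, n_pixels=5, gain_sigma=0.01, offset_sigma=0.005, seed=1):
    """FPN model from the text: Vij(t) = Gij * Xij(t) + Oij, uniform input x."""
    rng = random.Random(seed)
    gains = [1.0 + rng.gauss(0.0, gain_sigma) for _ in range(n_pixels)]
    offsets = [rng.gauss(0.0, offset_sigma) for _ in range(n_pixels)]
    return [g * x + o for g, o in zip(gains, offsets)]

v, n = reset_noise(5e-15)  # 5 fF sense node (assumed)
print(f"kT/C reset noise at 300 K: {v * 1e3:.2f} mV rms (~{n:.0f} electrons)")
print(pixel_outputs(0.5))  # uniform input, yet the outputs differ: that is FPN
```

For a 5 fF node at 300 K this gives on the order of 1 mV rms, equivalent to a few tens of electrons, which is why reset noise and dark current shot noise dominate under low illumination.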
Finally, the photosensitive-area size is an important issue because of the well-known scaling effects. The major problem of artificial image acquisition has been the extraordinarily high optical dynamic range of natural scenes. For instance, the human vision system exhibits an enormous optical dynamic range of about 200 dB, due to the fact that it can adapt to an extremely wide range of light intensity levels [83]. Nevertheless, artificial imagers have been much poorer in this respect. With conventional CCD sensors it is hard to reach high dynamic range, and CMOS imagers with logarithmic response suffer from excessive FPN and temperature drift. For instance, in the year 2000 conventional CCD imagers usually exhibited a DR of only about 50–70 dB, whereas CMOS imagers could achieve a better DR, up to 140 dB [82], although they used logarithmic readout, which has some disadvantages such as high FPN. In fact, some research has been done to obtain CMOS image sensors with high dynamic range: 3.1.1. Logarithmic The use of CMOS imagers with logarithmic readout, i.e. the non-linear output of the logarithmic pixel, provides a higher dynamic range (see Fig. 6). Nevertheless, the logarithmic response has a large FPN and a slower response time at low light levels, which bring down the image quality. Thus, some systems based on a logarithmic response have been developed in order to offer high DR with low FPN. For instance, in 1998, the University of Heidelberg proposed a CMOS camera chip with logarithmic response and self-calibrating FPN correction [84]; its results showed a significant FPN reduction. The Fraunhofer Institute of Microelectronic Circuits and Systems of Duisburg also suggested a CMOS imager with local brightness adaptation [85]. It used logarithmic image sensors in order to reach high DR, and FPN was also compensated. In the year 2000, S. Kavadias reported a technique to remove the high FPN due to the logarithmic response [86]. This CMOS image sensor, which was based on an active pixel structure, employed on-chip calibration and achieved a DR of 120 dB, and the FPN was 2.5% of the output signal range. Finally, in 2003 a multiresolution scheme and a cost-effective architecture for nonlinear analog-to-digital conversion were presented. These two features combined improve the sensor quality under low light intensity [87]. 3.1.2. Linear On the other hand, using CMOS imagers with linear readout can improve FPN, even though they achieve a lower DR than logarithmic ones (see Fig. 6). For instance, in 1999 a new design, a 1280×1024 digital CMOS image sensor with enhanced dynamic range, was reported [51]. The response was linear, low FPN was achieved and the DR was 69 dB. In 2000 a CMOS image sensor for automotive applications was proposed [83]. It offered a high dynamic range of up to 120 dB and an excellent image quality due to its linear readout. Furthermore, it had good temperature behaviour up to 85 °C. In 2002 a high dynamic range CMOS imager with spiking pixels, pixel-level ADC and linear characteristics was reported [88]. It had a DR of 93 dB, although 120 dB was expected. Finally, a mechanism using linear readout has been reported [82] that is capable of adjusting its sensitivity depending on the absolute illumination level, like human vision. This is achieved by using different integration times. The results demonstrate that with one integration time a DR of 61 dB is reached; in contrast, with two integration times the DR is 92 dB. 3.1.3. Combined In 1998, the University of Waterloo reported a CMOS APS with combined linear and logarithmic mode operation [89]. The results showed that it had good linearity in the linear mode of operation, and it reached a wide dynamic range in the logarithmic mode. So the issue is to determine which is the more convenient readout method for each application.
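The dual-integration-time mechanism described above can be sketched in a few lines: the long exposure is used unless it saturates, in which case the rescaled short exposure takes over, extending the top of the range by the exposure ratio. The full-scale level, noise floor and exposure ratio below are illustrative assumptions; with these values the figures come out close to the 61 dB and 92 dB reported in [82]:

```python
import math

FULL_SCALE = 1.0     # saturation level, normalised (assumed)
NOISE_FLOOR = 1e-3   # rms noise floor, normalised (assumed)
RATIO = 32           # long/short integration-time ratio (assumed)

def dr_db(ratio):
    """Dynamic range in dB for a given max-signal to noise-floor ratio."""
    return 20.0 * math.log10(ratio)

def combine(long_sample, short_sample, sat=0.95 * FULL_SCALE):
    """Prefer the long exposure; fall back to the rescaled short one on saturation."""
    if long_sample < sat:
        return long_sample
    return short_sample * RATIO  # rescale onto the long-exposure axis

print(f"one integration time:  {dr_db(FULL_SCALE / NOISE_FLOOR):.0f} dB")
print(f"two integration times: {dr_db(RATIO * FULL_SCALE / NOISE_FLOOR):.0f} dB")
```

The single exposure gives 60 dB here, and adding a 32× shorter second exposure adds 20·log10(32) ≈ 30 dB on top, while keeping the linear response that makes FPN easier to control than with a logarithmic pixel.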
Nevertheless, using linear readout methods with different integration times seems to be more suitable. 3.1.4. TFA technology As outlined before, TFA technology [23,24] is suitable for achieving a high dynamic range and a high fill factor. In 1999–2000, T. Lulé reported a 100,000-pixel imager in TFA technology [21,22]. Its main feature was that every pixel contained an automatic shutter, which adapted the integration time to the local intensity. This allowed a high DR of 120 dB to be obtained. 3.2. High sensitivity (high fill factor and high quantum efficiency) 3.2.1. Background The basic quality criterion for pixel sensitivity is the product of its Fill Factor and its Quantum Efficiency (FF×QE), where the Fill Factor is the ratio of the light-sensitive area to the pixel's total size, also known as the aperture efficiency, and the Quantum Efficiency is the ratio of the photon-generated electrons that the pixel captures to the photons incident on the pixel area. Photons are lost for conversion due to reflection on dielectrics, lack of absorption in the acquisition layer, and loss of charges and recombination. It is well known that a good image quality is obtained if most of the chip area is dedicated to the photodetectors. Therefore, in order to achieve a good image quality, a high Fill Factor (FF) is needed. Unlike CCDs, which achieve around 100% FF [90,91], CMOS APS fill factors are limited, because each pixel has an area devoted to the CMOS readout circuitry; around 30% FF is a de facto standard. In a CMOS APS pixel, the Fill Factor is limited by: (a) shadowing by metals or silicides, (b) collection of photons by the insensitive junctions of the active pixel, (c) the relatively small size of the useful photo-sensitive junction, (d) recombination of photo-generated carriers with majority carriers, limiting the diffusion length. Besides this, a high fill factor allows shorter exposure times for a given pixel size, or smaller pixel sizes for a given sensitive area.
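The FF × QE figure of merit defined above is easy to put into numbers. The pixel values below (a 30% fill factor APS versus a near-100% fill factor device such as a TFA-style pixel, both at an assumed 60% quantum efficiency) are illustrative, not measurements from the text:

```python
def sensitivity(fill_factor, quantum_efficiency):
    """Basic pixel sensitivity criterion from the text: FF x QE."""
    return fill_factor * quantum_efficiency

def electrons_captured(photons_on_pixel, fill_factor, quantum_efficiency):
    """Photo-generated electrons captured for a photon count on the pixel area."""
    return photons_on_pixel * sensitivity(fill_factor, quantum_efficiency)

# A ~30% FF CMOS APS versus a near-100% FF device (e.g. a TFA-style pixel),
# both with an assumed 60% quantum efficiency:
aps = electrons_captured(10_000, 0.30, 0.60)
tfa = electrons_captured(10_000, 0.95, 0.60)
print(f"30% FF pixel: {aps:.0f} e-; 95% FF pixel: {tfa:.0f} e- ({tfa / aps:.1f}x)")
```

Under these assumptions the high fill factor pixel captures roughly three times as many electrons for the same exposure, which is exactly the headroom that permits shorter exposure times or smaller pixels.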
Thus, FF plays an important role in scaling perspectives and imager performance. In the wake of that, a lot of research has been done to increase the fill factor of CMOS APS. There are different methods to improve the FF. One is to design active pixels with larger photodiodes, although then small pixels cannot be made, and large photodiodes have low charge-conversion sensitivity due to their higher capacitance [92]. Another method is to make passive pixels, but their performance is worse than that of active pixels. On the other hand, microlenses, which help funnel photons to the light-sensitive portion of the pixel [93,1], are another alternative to overcome this problem. They can reach up to 90% fill factor, but they have some disadvantages, such as a reduction of efficiency as the microlens dimension decreases. In fact, some high-fill-factor designs based on CMOS APS exist which enhance the FF: A. Bermak developed a 46% fill factor native logarithmic pixel [94] in 0.7 μm CMOS technology in 2000. In 2002, the Georgia Institute of Technology achieved a fill factor greater than 40% by using a matrix transform imager [95]. Also in 2002, National Tsing-Hua University described an APS with a fill factor of 55%, made in 0.25 μm technology [96]. In addition, this device has a high DR of 120 dB, thanks to its innovative tuneable injection-current compensation architecture, and a voltage operation of 1.9 V. Finally, it is important to consider downscaling effects, because small pixels mean lower light sensitivity and dynamic range. Thus, conventional APS technology is limited. Nevertheless, innovative architectures, ideas and technologies reaching a fill factor of up to 90–100% have appeared.
Fig. 7. Cross-section of Fill Factory's well pixel.
For instance, in 1997 Dierickx [97,98] introduced a near-100% fill factor CMOS active pixel, which was
patented by FillFactory as the High Fill Factor N-Well Pixel (US Patent 6,225,670). Photoelectrons are channeled by electrostatic barriers, which shield them from the active pixel circuitry and substrate (see Fig. 7, left side), to the photodiode junction (see Fig. 7, right side). Virtually all electrons diffuse down this drain, and as the diffusion time is short (typically 10–50 ns), negative effects like image lag need not be feared. In addition, TFA imagers [20–24] offer 100% FF. The photodiode is placed on top of the ASIC, so the whole pixel area is available for the photodiode and there are no further layers obstructing light penetration, such as additional metallization, polysilicon or dielectrics. Increasing the photosensitivity of the photodetector is another point to be taken into account in order to improve the quantum efficiency. M. Furumiya [99] reported in 2001 a high-photosensitivity, no-crosstalk pixel technology for an APS using a 0.35 μm CMOS technology. A deep p-well photodiode, with a sensitivity improvement of 110% for 550 nm incident light, and an antireflective Si3N4 film to increase photosensitivity, with a sensitivity improvement of 24%, are used. Finally, it is possible to increase the sensitivity by the cascoding method, which allows shielding of the integrating capacitor from the parasitic junction capacitance of the photodiode. This can be done with a shutter APS [44] or a CTIA pixel [43].

3.3. High performance (low power consumption and low voltage operation)
One of the most important advantages of CMOS image sensors compared to CCDs is their lower power consumption. Therefore, CMOS image sensors are suitable for portable applications [31,100,101], such as cellular phones, portable digital assistants (PDAs), and wireless security systems. A lot of research has been carried out on this topic.
Low-power camera-on-a-chip designs using CMOS APS technology began to be developed in 1995 by NASA at the Jet Propulsion Laboratory [102,103], and in 1998 the first CMOS APS fabricated using a high-performance 1.8 V, 0.25 μm technology was introduced by Hon-Sum Philip Wong [104]. In that paper, the impact of device scaling was studied, because no process modifications were made to the CMOS logic technology. In 1999, a CMOS imager with a power consumption of 250 mW [105], an acquisition rate of 60 frames/s, and a resolution of 1280×720 pixels was reported. This has been useful for large-format high-speed imaging applications such as industrial vision systems. In 2000, a 1.2 V micropower CMOS active pixel image sensor for portable applications was proposed [106]. In 2001, a low-voltage hybrid bulk/SOI CMOS APS was manufactured [107]. Also in 2001, the Nara Institute of Science and Technology reported a CMOS pixel circuit based on a pulse frequency modulation (PFM) technique [108]. This device reached quite good performance (DR over 50 dB) under a very low operation voltage, less than 1 V, and was very robust against noise thanks to its A/D converter. In 2003, a 176×144 CMOS APS with micropower consumption [109,110] was reported, with a voltage operation of 1.5 V and a power consumption of 550 μW, low enough for the sensor to run from a watch battery. Other novel designs have been introduced, like a CMOS imager with a motion vector estimator for low-power image compression [111], which was designed by Toyohashi University of Technology in 1999. It has been stated that APS will not function at 1.2 V or below [28]. However, CAPS [25,26,29,30] can operate at a low voltage of 1 V or less using advanced technologies, and good performance is claimed [25,26,28–30]. Unlike APS, it must be stressed that CAPS are an alternative architecture that can be manufactured using top-level technologies, below 0.25 μm, with great performance (see device scaling considerations).
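To put the micropower figure into perspective, a rough runtime estimate can be made. This is a back-of-the-envelope sketch under stated assumptions: the coin-cell capacity (a typical CR2032, ~220 mAh at 3 V) is an illustrative value, not from the cited paper, and conversion losses are ignored.

```python
# Energy available from an assumed CR2032-class coin cell.
battery_mah = 220
battery_v = 3.0
energy_j = battery_mah / 1000 * 3600 * battery_v  # ~2376 J

sensor_w = 550e-6  # the 550 uW micropower figure quoted above
hours = energy_j / sensor_w / 3600

print(round(hours))  # 1200 hours, i.e. ~50 days of continuous operation
```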
Thus, low-voltage systems are expected to continue downscaling by using CAPS as an alternative to conventional APS.

3.4. High speed imaging
Acquiring high-speed images is becoming more and more important in areas such as real-time applications. Nevertheless, CCD technology did not make enough progress in this respect: after more than three decades of development and millions of dollars spent, CCDs reached 250 kilo-pixels at an acquisition speed of 1000 frames per second [19]. Comparatively, high-speed CMOS sensor technology has only just started (the first high-speed sensors were introduced around 1998 [112,113]), and it has already reached great results. In addition, high frame rates have been made possible by CMOS downscaling. In conclusion, CMOS imagers appear to be a promising alternative to CCDs, taking also into account other advantages such as reduced blooming and smearing effects.

3.4.1. Needs and problem solving
A typical CMOS APS contains only three NMOS transistors in each pixel. Therefore, a very compact implementation is possible, although the sensor lacks parallel image-data acquisition, a feature often important in high-speed imaging. The alternative is a pixel that contains an analog memory, called SNAP (Shuttered-Node Active Pixel). Another important issue is how the data is multiplexed onto the output pads. Multiplexing digital data is much simpler than passing off analog data, so ADCs are needed. Another architectural feature that allows high-speed operation is pipelining [19]. A CMOS image sensor needs to fulfil all the necessary requirements in order to provide fast image acquisition. No smear, no blooming and a global electronic shutter are some of the most valuable characteristics needed [114]. In addition, low lag and a snap-shot mode are preferable.
Fig. 9. Architecture of the high speed CMOS imager.
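The data-rate burden that motivates these architectural choices (parallel acquisition, on-chip ADCs, pipelining) follows directly from multiplying resolution by frame rate; a minimal sketch using figures quoted in this section:

```python
# Pixel throughput = resolution x frame rate.
def throughput_mpix_s(pixels, fps):
    """Megapixels per second the readout chain must sustain."""
    return pixels * fps / 1e6

ccd  = throughput_mpix_s(250_000, 1000)      # 250 kilo-pixel CCD at 1000 fps
cmos = throughput_mpix_s(1024 * 1024, 500)   # 1024x1024 CMOS APS at 500 fps

print(round(ccd), round(cmos))  # 250 524 (Mpixel/s)
```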
Low lag is essential in order to capture rapidly changing scenes. The rolling-shutter method is very common in CMOS imagers: the rows of pixels in the image sensor are sequentially reset, starting at the top of the image and proceeding row by row to the bottom. When this reset process has moved some distance down the image, the readout process begins: rows of pixels are read out sequentially as well, starting at the top of the image and proceeding row by row to the bottom, in exactly the same fashion and at the same speed as the reset process. Such a device is not appropriate at high frame rates, because the scene can change significantly during the frame-reading time. Therefore, a non-rolling shutter, or snap-shot mode, is necessary [115]. It is also necessary to acquire images in a very short time, using short integration times. This requires the image sensors to be equipped with a synchronous shutter in order to avoid blur (see Fig. 8). For instance, image acquisition of fast-moving objects requires imagers with high photoresponsivity at short integration times, synchronous exposure, and high-speed parallel readout [116].
Fig. 8. Photodiode-type shutter APS schematics.

3.4.2. Manufactured imagers
Several designs have been reported since 1997. In 1998, a 128×128 snap-shot photogate CMOS imager in 0.5 μm technology was implemented by Guang Yang [113]. It offered high speed (400 fps) and a minimum exposure time of 75 μs. It reproduces high-quality, motion-artifact-free images at high shutter speeds (75 μs exposure), with low noise, unmeasurable image lag and excellent blooming protection. Between 1998 and 2000, N. Stevanovic and M. Hillebrand [6,116,117,114] reported a high-speed CMOS camera (see Fig. 9). It was able to acquire more than 1000 frames/s using a global shutter in each sensor cell.
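The rolling-shutter limitation described above can be illustrated numerically. The row count and per-row time below are illustrative assumptions, not parameters of any cited sensor:

```python
# With a rolling shutter, the bottom row is reset and read one full scan time
# after the top row, so the exposure window "rolls" down the frame.
rows = 480
row_time_us = 20                    # assumed time to reset/read one row
skew_us = rows * row_time_us        # top-to-bottom exposure skew

frame_period_us = 1_000_000 / 1000  # frame period at 1000 frames/s

# The 9600 us skew spans nearly ten frame periods at 1000 fps: the scene can
# change completely while one frame is still being scanned, hence the need
# for a snap-shot (global, synchronous) shutter at high frame rates.
print(skew_us, skew_us / frame_period_us)
```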
The integration time in synchronous exposure was variable between 1 μs and 150 μs, unlike previous CMOS implementations, which achieved around 500 frames/s at integration times ranging from 75 to 200 μs [42,113,112]. It offered a compact, portable, and low-power (320 mW) solution for high-speed video systems, and had a resolution of 256×256 pixels. DALSA Inc., Waterloo, reported a VGA CMOS imager [115] in 2001, which can capture images at 1600 frames per second (see Fig. 10). Furthermore, it has exposure-control functionality, antiblooming capability and a non-rolling-shutter architecture to implement a snap-shot image-capture mode.
Fig. 10. Pixel of a DALSA VGA CMOS imager.
Fig. 11. Schematic diagram of the correlated double sampling circuit. There is one such circuit for every column.
Also in 2001, S. Yoshimura, T. Sugiyama, K. Yonemoto and K. Ueda reported a 48 kframe/s CMOS image sensor for real-time 3D sensing and motion detection [118]. It had an array of 192×124 pixels, a depth resolution of 500 μm, fast motion detection and 12-bit digital image output resolution. E. Fossum and A. Krymski first introduced a 1280×720-pixel sensor at 60 fps (60 Mpixel/s), then reported a 1024×1024-pixel sensor at 500 fps (500 Mpixel/s). They continued enhancing their design, and in 2003 they presented [119] a high-speed, 240 fps, 4.1-Mpixel (2352×1728) CMOS sensor (over 800 Mpixel/s) with on-chip parallel 10-bit analog-to-digital converters (ADCs) and a power dissipation of less than 700 mW. Besides this, in 2003 a high-responsivity (9 V/lux-s), high-speed (5000 fps at full 512×512 resolution) CMOS sensor was manufactured [44]. The sensor was designed for a 0.35 μm process and consisted of a five-transistor pixel to provide a true parallel shutter.
3.4.3.
High speed market
High-speed imaging systems are suitable for automotive applications such as occupancy detection, precrash sensing, collision avoidance, surveillance, crash-test observation, and airbag control. For instance, a smart airbag solution based on a high-speed camera system was designed by the Fraunhofer Institute of Microelectronic Circuits and Systems [120]. The system continuously monitors the seats and quickly determines the occupancy status and the passenger's position and size before the airbag is deployed. Smart image sensors for real-time measurement are another application. For instance, Yosuke Oike reported in 2003 a smart image sensor for real-time, high-resolution 3D measurement [121]. It not only has a frame rate high enough for real-time 3D measurement, but also high pixel resolution, owing to a small pixel circuit, and high subpixel accuracy, due to a centre-of-gravity calculation based on light-intensity profile measurement. Finally, high-speed video systems for fast-moving objects and machine vision are also suitable applications.

3.5. Low noise sensors
CMOS image sensors suffer from several noise sources. These set the fundamental limits on image sensor performance, especially under low illumination and in video applications. Therefore, it is important to have an overview of all of them [80]. The noise sources in CMOS imagers can be divided into temporal noise [81] and fixed pattern noise (FPN) [43]. In fact, FPN is one of the major disadvantages of CMOS imagers, and a lot of research has been done to minimise it. Many researchers have designed FPN-reduction circuits. For instance, correlated double sampling (CDS) is one of the most suitable techniques for suppressing FPN [50,76]. Fig. 11 shows a typical schematic of a CDS circuit. The CDS technique consists of taking two samples of a signal that are closely spaced in time; the first sample is then subtracted from the second, removing the low-frequency noise.
Sampling occurs twice: first after reset, and again after integrating the signal charge. The subtraction removes the reset noise and dc offset from the signal charge. The two values are then used as differential signals in further stages such as programmable gain amplifiers (PGAs) or ADCs. Most of these circuits are placed below each column of pixels (see Fig. 12a). However, although CDS reduces the fixed pattern noise to a large extent, a component of the FPN due to mismatch in the CDS circuits at each column introduces column-FPN, which should also be removed. For instance, K. Yonemoto and H. Sumi proposed [50] that FPN reduction should be performed in a CDS circuit (see Fig. 12b), in order to avoid this column-FPN caused by the CDS circuits. On the other hand, although the dark-current variation of photodiodes appears as FPN in the output signal of a CMOS image sensor, resembling the FPN caused by threshold variation of the transistors in the pixel circuits, the dark-current noise cannot be suppressed by CDS circuits. This is because the dark current does not appear in the reset level, but only in the signal level of the pixel signal. Therefore, the dark current of the photodiode itself should be reduced. One way of reducing the dark current is to employ a pinned photodiode [122]. Another method, reported by K. Yonemoto and H. Sumi in 2000, is a pinned photodiode in the form of a hole accumulation diode (HAD) [50]. They achieved a reduction of the dark current to 150 pA/cm², instead of the 6 nA/cm² of a pn-photodiode. As a result, the dark-current variation at the output of the CMOS image sensor was 0.19 mV, and the period of the readout operation was about 20 ns at 30 frames/s. Two years later, K. Yonemoto and H. Sumi carried out [49] a numerical analysis of this CMOS image sensor with a simple FPN-reduction technology.
Fig. 12. (a) CMOS image sensor with column CDS circuit; (b) CMOS image sensor with proposed FPN-reduction scheme.
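The two-sample subtraction at the heart of CDS can be sketched numerically. The offset, FPN and noise magnitudes below are illustrative assumptions, not values from the cited designs; the point is that any component present identically in both samples cancels.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000  # pixels in one column

# Components present identically in BOTH samples: a common dc offset,
# per-pixel fixed offsets (FPN), and this frame's reset (kTC) noise.
dc = 100.0
fpn = rng.normal(0.0, 5.0, n)
reset_noise = rng.normal(0.0, 2.0, n)

signal = rng.uniform(0.0, 50.0, n)  # photo-generated signal charge

sample_after_reset = dc + fpn + reset_noise
sample_after_integration = dc + fpn + reset_noise + signal

cds = sample_after_integration - sample_after_reset  # correlated terms cancel

print(np.allclose(cds, signal))  # True: offset, FPN and reset noise removed
```

Note that a dark-current component, accumulating only during integration, would appear in the second sample alone and so would survive the subtraction, which is why the dark current itself must be reduced at the photodiode.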
They showed that a low-input-voltage I–V converter with a current-mirror circuit improves the amplification factor and linearity of the pixel circuit. In a five-transistor pixel circuit, the threshold voltage of the X–Y addressing transistor affects the amplitude and level of the readout pulse. An analysis of the mechanism of the X–Y addressing transistor showed the basic concept behind the selection of the threshold voltage. An L-shaped readout gate for a pinned photodiode was compared with a straight readout gate, and was proved to be adequate for rapid charge transfer. Another circuit, delta-difference sampling (DDS), suppresses peak-to-peak FPN to 0.15% of the saturation level [123]. On the other hand, B. Fowler [43] proposed a new APS based on a capacitive transimpedance amplifier (CTIA). The CTIA APS can achieve low FPN by using a divider circuit with switched-capacitor voltage feedback; its high gain and low read noise are further advantages (see Fig. 5h and g). Moreover, other readout methods can also help suppress FPN. For instance, the R&D Headquarters of Minolta Co. and Gazoh System Kaihatsu reported a CMOS APS with a transversal readout architecture that eliminates the vertically striped FPN [73,75]. The possibility of a high frame rate using a multiport structure was also demonstrated. In addition, the Photonics and Sensors Group of the University of Cambridge suggested, in 2002 [74], a new readout circuit for a CMOS APS, which removes the FPN and reduces signal degradation while offering an increase in readout speed compared to the conventional approach. As outlined before, CDS cannot suppress the dark-current noise, although it is a source of FPN. Thus, decreasing the dark current in order to suppress FPN has been an aim. Dark current (offset error) is the signal charge that the pixel collects in the absence of light, divided by the integration time.
Dark current is temperature-sensitive and is typically normalised by area. Photobit Technology Corporation and the Tokyo Institute of Technology reported a low-dark-current stacked CMOS APS for charged-particle detection [124,125]. The use of a p-MOSFET transistor for readout reduces the hot-carrier effect, thereby greatly decreasing the dark current in the low-temperature region. It also improves readout-noise performance, due to the lower flicker noise of p-MOSFETs compared to n-MOSFETs. Thanks to this improvement in noise performance, CMOS image sensors for low-light-level applications are possible [18]. In 2000, a CDS noise analysis of the readout circuits used in CMOS APS for low light levels was carried out [17]. In 2001, different pixel architectures were studied in order to increase the sensitivity and reduce the spatial (FPN) and temporal noise [16]. This study demonstrated that the N-well photodiode is the best light sensor, whether judged by its parasitic capacitance, its quantum efficiency, or its dark current. However, the design rules required by this photodiode (a wide space must be kept between the N-well and MOS transistors) limit its use in CMOS imagers. On the other hand, a new pixel architecture was also introduced, which reduces kTC (reset) noise and FPN. This architecture is therefore ideal for applications requiring very high sensitivity and low noise, as needed for low-light-level sensing. Complete reset of the photodiode is required in order to remove kTC noise and decrease the lag effect. Note that the source of image lag in CMOS imagers is different from that in CCDs. In CCDs, image lag is caused by incomplete charge transfer; this can be eliminated using a pinned photodiode. In CMOS imagers, image lag is due to incomplete reset, so in 2001 H. Tian [81] reported a new reset method, in which the reset transistor gate is overdriven, alleviating the lag without increasing the reset noise.
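The practical impact of the dark-current densities quoted above can be estimated from the definition of dark current given in the text (charge collected in the dark per unit integration time, normalised by area). This is a sketch under stated assumptions: the pixel size and integration time are illustrative, not from the cited paper.

```python
# Dark signal in electrons per frame, from a dark-current density.
Q_E = 1.602e-19            # electron charge, C
pixel_um = 7.0             # assumed 7 x 7 um pixel
area_cm2 = (pixel_um * 1e-4) ** 2
t_int_s = 1 / 30           # ~33 ms integration at 30 frames/s

def dark_electrons(j_dark_a_per_cm2):
    return j_dark_a_per_cm2 * area_cm2 * t_int_s / Q_E

pn_diode  = dark_electrons(6e-9)      # 6 nA/cm2 pn-photodiode
had_diode = dark_electrons(150e-12)   # 150 pA/cm2 hole accumulation diode

print(round(pn_diode), round(had_diode))  # 612 15
```

The 40x reduction in dark-current density thus turns hundreds of spurious electrons per frame into a handful, which is what makes the residual dark-current FPN negligible at the output.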
Finally, CMOS APS still has readout-noise problems because of irregular gain from mismatched transistor thresholds.

4. Applications
As highlighted before, the improvement of CMOS image sensors has opened up new application areas [45]. CMOS imagers are therefore very suitable for space, automotive, medical, digital photography and 3D applications. Furthermore, there are more specific applications, such as portable devices [100,101,31], security, industrial vision [126,127], consumer electronics, imaging phones, astronomy, surveillance [128], robotics and machine vision, guidance and navigation (e.g. stereovision [129]), computer inputs, etc.

4.1. Space applications
CMOS imagers are widely used in the space environment for a varied range of applications [130,131]. These include robotic and navigation cameras, imagers for astronomy and earth observation, star trackers [132], tracking sensors in satellite constellations, lander and rover imagers, X-ray satellite missions [133,134], etc. Moreover, CMOS imagers are known to be tolerant to radiation, although true radiation tolerance can only be obtained using specific methods. Thus, there is a huge interest in radiation-tolerant imaging systems [32–39,135,136].

4.2. Automotive applications
There are many applications [137] in the automotive field, such as occupancy detection, airbag control, precrash sensing, collision avoidance, surveillance, crash-test observation, etc. Another is a smart airbag solution based on a high-speed camera system [120]. This system continuously monitors the seats and quickly determines the occupancy status and the passenger's position and size before the airbag is deployed. Moreover, IR-vision systems for foggy and night driving conditions are also addressable.
Fig. 13. 3D measurement system based on triangulation.
4.3.
Medical applications
Medical and biomedical systems based on CMOS imagers have been successfully developed [138]. For instance, microelectronic components for a retina-implant system that will provide visual sensations to patients suffering from photoreceptor degeneration were reported by M. Schwarz in 1999 [139]. On the other hand, the digitisation of medical images, especially in radiology, has been another demand in recent years. Ho Kyung Kim proposed an X-ray imaging system with a large FOV (field of view) using CMOS image sensors [140]. S. Wook Lee reported a 3D X-ray microtomographic system [141] in 2001, which makes it possible to see the internal structure of small objects in a non-destructive way. Finally, P. Lechner developed an APS for X-ray imaging spectroscopy in 2001 [142].

4.4. Digital photography
A CMOS image sensor integrating the sensor itself and the digital control functions on a single chip has been reported [1]. This demonstrates the viability of producing a camera-on-a-chip suitable for commercial and scientific applications [143,144]. Besides, cameras with nearly noise-free pictures [145] and low power consumption have been developed. In 1998, Toshiba Corp. reported a high-performance CMOS image sensor with 3.7×3.7 μm² square pixels [146] for digital still camera applications.

4.4.1. 3D range imaging applications
3D range imaging systems are increasingly required, because 3D image acquisition is important in various sectors, such as the home, public and industrial domains. Furthermore, improvements in speed and resolution performance have opened up the possibility of obtaining real-time systems. A 3D range imaging system, also called a 3D digitiser or range finder, is a system capable of acquiring range or depth information. These devices grab 'range images', which are dense arrays of values related to the distance of the scene to a known point or plane [147].
Currently, some special 3D measurement methods are available for scene reconstruction. These techniques rely on triangulation (see Fig. 13), time-of-flight (TOF) measurements, interferometry, etc. [148]. The range of possible applications is wide: for instance, obtaining 3D models from range scans [149], space monitoring and surveillance [118], safety and security [148,120], real-time sensing and motion detection [118], inspection [150], 3D X-ray imaging [141], robot vision [151], etc.

4.5. Other applications
There are further suitable applications, such as portable applications [100,101,31], security, industrial vision (e.g. an imager with focal-plane edge detection [127]), consumer electronics, surveillance devices [128], smart vision systems-on-a-chip [143,144,129,152], robotics and machine vision, guidance and navigation (e.g. stereovision [129]), video phones, computer inputs, charged-particle imaging [124,125] such as ion and electron imaging, IR-vision applications, low-light-level applications [16–18], electrostatic sensing, instrumentation, imaging phones, astronomy and low-end professional cameras.

5. Conclusions
A review of the most important advances in the field of CMOS image sensors has been carried out. These advances have mainly been in fields such as sensitivity, low noise, low power consumption, low voltage operation, high-speed imaging and good dynamic range. This paper demonstrates that CMOS imagers are competitive with CCDs in many application areas, such as security, consumer digital cameras, automotive, computer video, imaging phones, etc. CMOS imagers will replace CCD devices in some cases, because of their low cost, low power consumption, integration capability, etc. Nevertheless, CCD technology will remain predominant in high-performance systems, such as medical imaging, astronomy and low-end professional cameras, etc.,
because of their better image quality. To sum up, the state of the art of CMOS image sensors has been presented.

References
[1] E. Fossum, CMOS image sensors: electronic camera-on-a-chip, IEEE Transactions on Electron Devices 44 (10) (1997).
[2] S. Kempainen, CMOS image sensors: eclipsing CCDs in visual information?, EDN 42 (21) (1997) 101–102 [see also pp. 105–106, 108, 110].
[3] C.-H. Chen, H.-J. Tsai, K.-S. Huang, H.-T. Liu, Study for cross contamination between CMOS image sensor and IC product, in: 2001 IEEE/SEMI Advanced Semiconductor Manufacturing Conference, 2001, pp. 121–123.
[4] H.-S.P. Wong, CMOS image sensors: recent advances and device scaling considerations, in: International Electron Devices Meeting 1997, IEDM, 1997, pp. 201–204.
[5] A. Theuwissen, CCD or CMOS image sensors for consumer digital still photography?, in: International Symposium on VLSI Technology, Systems, and Applications, 2001, Proceedings of Technical Papers, 2001, pp. 168–171.
[6] M. Hillebrand, N. Stevanovic, B. Hosticka, J. Conde, A. Teuner, M. Schwarz, High speed camera system using a CMOS image sensor, in: Proceedings of the IEEE Intelligent Vehicles Symposium 2000, 2000, pp. 656–661.
[7] W.S. Boyle, G. Smith, Charge-coupled semiconductor devices, Bell System Technical Journal 49 (1970) 587–593.
[8] J. Zarnowski, M. Pace, M. Joyner, Active-pixel CMOS sensors improve their image, Laser Focus World, PennWell Publishing 35 (7) (1999) 111–114.
[9] I. Shcherback, O. Yadid-Pecht, Photoresponse analysis and pixel shape optimization for CMOS active pixel sensors, IEEE Transactions on Electron Devices 50 (2003) 12–18.
[10] W. Zhang, M. Chan, H. Wang, P. Ko, Building hybrid active pixels for CMOS imager on SOI substrate, in: SOI Conference, Proceedings, IEEE International, 1999, pp. 102–103.
[11] Y. Malinovich, Ultra-high resolution CMOS image sensors, Electronic Product Design, IML Techpress 20 (7) (1999) 19–20.
[12] Z. Zhou, B. Pain, E.
Fossum, A CMOS imager with on-chip variable resolution for light-adaptive imaging, in: IEEE International Solid-State Circuits Conference, 45th ISSCC, Feb. 1998, pp. 174–175, 433.
[13] J. Coulombe, M. Sawan, C. Wang, Variable resolution CMOS current mode active pixel sensor, in: IEEE International Symposium on Circuits and Systems, Proceedings, ISCAS 2000 Geneva, vol. 2, 2000, pp. 293–296.
[14] Y. Audet, G. Chapman, Design of a self-correcting active pixel sensor, in: IEEE International Symposium on Defect and Fault Tolerance in VLSI Systems, 2001, Proceedings, 2001, pp. 18–26.
[15] I. Koren, G. Chapman, Z. Koren, A self-correcting active pixel camera, in: IEEE International Symposium on Defect and Fault Tolerance in VLSI Systems, 2000, Proceedings, 2000, pp. 56–64.
[16] J. Goy, B. Courtois, J. Karam, F. Pressecq, Design of an APS CMOS image sensor for low light level applications using standard CMOS technology, Analog Integrated Circuits and Signal Processing, vol. 29, Kluwer Academic Publishers, Dordrecht, 2001, pp. 95–104.
[17] Y. Degerli, F. Lavernhe, P. Magnan, J. Farre, Analysis and reduction of signal readout circuitry temporal noise in CMOS image sensors for low-light levels, IEEE Transactions on Electron Devices 47 (5) (2000) 949–962.
[18] D. Croft, CMOS image sensors compete for low-light tasks, Laser Focus World, PennWell Publishing 35 (1) (1999) 135–136 [see also pp. 138–140].
[19] E. Fossum, A. Krymski, High speed CMOS imaging, in: Electronic-Enhanced Optics, Optical Sensing in Semiconductor Manufacturing, Electro-Optics in Space, Broadband Optical Networks, Digest of the LEOS Summer Topical Meetings, July 2000, pp. I3–I4.
[20] T. Lule, S. Benthien, H. Keller, F. Mutze, P. Rieve, K. Seibel, M. Sommer, M. Bohm, Sensitivity of CMOS based imagers and scaling perspectives, IEEE Transactions on Electron Devices 47 (2000) 2110–2122.
[21] T. Lule, M. Wagner, M. Verhoeven, H. Keller, M.
Bohm, 100,000-pixel, 120-dB imager in TFA technology, IEEE Journal of Solid-State Circuits 35 (2000) 732–739.
[22] T. Lule, H. Keller, M. Wagner, M. Bohm, 100,000 pixel 120 dB imager in TFA-technology, in: Symposium on VLSI Circuits, 1999, Digest of Technical Papers, June 1999, pp. 133–136.
[23] T. Lule, B. Schneider, M. Bohm, Design and fabrication of a high-dynamic-range image sensor in TFA technology, IEEE Journal of Solid-State Circuits 34 (1999) 704–711.
[24] B. Schneider, H. Fischer, S. Benthien, H. Keller, T. Lule, P. Rieve, M. Sommer, J. Schulte, M. Bohm, TFA image sensors: from the one transistor cell to a locally adaptive high dynamic range sensor, in: Electron Devices Meeting, 1997, Technical Digest, International, 1997, pp. 209–212.
[25] C. Xu, W.-H. Ki, M. Chan, A low-voltage CMOS complementary active pixel sensor (CAPS) fabricated using a 0.25 μm CMOS technology, IEEE Electron Device Letters 23 (2002) 398–400.
[26] C. Xu, W. Zhang, W.-H. Ki, M. Chan, A highly integrated CMOS image sensor architecture for low voltage applications with deep submicron process, in: IEEE International Symposium on Circuits and Systems, ISCAS 2002, vol. 3, 2002, pp. 699–702.
[27] C. Shen, C. Xu, Weiquan, W.R. Huang, M. Chan, Low voltage CMOS active pixel sensor design methodology with device scaling considerations, in: IEEE Hong Kong Electron Devices Meeting, June 2001, pp. 21–24.
[28] C. Xu, W. Zhang, M. Chan, A 1.0 V VDD CMOS active pixel image sensor with complementary pixel architecture fabricated with a 0.25 μm CMOS process, in: IEEE International Solid-State Circuits Conference, ISSCC, vol. 1, 2002, pp. 44–443.
[29] -, A 1.0 V VDD CMOS active pixel image sensor with complementary pixel architecture fabricated with a 0.25 μm CMOS process, in: IEEE International Solid-State Circuits Conference, ISSCC, vol. 2, 2002, pp. 28–385.
[30] C. Xu, W. Zhang, W.-H. Ki, M.
Chan, A 1.0-V VDD CMOS active-pixel sensor with complementary pixel architecture and pulsewidth modulation fabricated with a 0.25 μm CMOS process, IEEE Journal of Solid-State Circuits 37 (2002) 1853–1859.
[31] C. Xu, M. Chan, The approach to rail-to-rail CMOS active pixel sensor for portable applications, in: Proceedings of IEEE Region 10 International Conference on Electrical and Electronic Technology, TENCON, vol. 2, 2001, pp. 834–837.
[32] E.-S. Eid, S. Ay, E. Fossum, Design of radiation tolerant CMOS APS system-on-a-chip image sensors, in: Aerospace Conference Proceedings, vol. 4, 2002, pp. 2005–2011.
[33] E.-S. Eid, T. Chan, E. Fossum, R. Tsai, R. Spagnuolo, J. Deily, Design of radiation hard CMOS APS image sensors in 0.35 μm CMOS standard process, in: Proceedings of the SPIE, The International Society for Optical Engineering, vol. 4306, 2002, pp. 50–59.
[34] J. Bogaerts, B. Dierickx, R. Mertens, Random telegraph signals in a radiation-hardened CMOS active pixel sensor, IEEE Transactions on Nuclear Science 49 (2002) 249–257.
[35] J. Bogaerts, B. Dierickx, G. Meynants, D. Uwaerts, Total dose and displacement damage effects in a radiation-hardened CMOS APS, IEEE Transactions on Electron Devices 50 (2003) 84–90.
[36] J. Bogaerts, B. Dierickx, R. Mertens, Enhanced dark current generation in proton-irradiated CMOS active pixel sensors, IEEE Transactions on Nuclear Science 49 (2002) 1513–1521.
[37] M. Cohen, J. David, Radiation effects on active pixel sensors, in: Fifth European Conference on Radiation and Its Effects on Components and Systems, RADECS 99, 1999, pp. 450–456.
[38] -, Radiation-induced dark current in CMOS active pixel sensors, IEEE Transactions on Nuclear Science 47 (2000) 2485–2491.
[39] G. Hopkinson, Radiation effects in a CMOS active pixel sensor, IEEE Transactions on Nuclear Science 47 (2000) 2480–2485.
[40] C.-C. Wang, C.
Sodini,The effect of hot carriers on the operation of cmos active pixel sensors, in International Electron Devices Meeting, 2001. IEDM Technical Digest., 2001, pp. 24.5.1–.5.4. [41] C. Seibold. (2002) Comparison of cmos and ccd image sensor technologies. [Online]. Available: http://eng.oregonstate.edu/wmil- lerst/ece44x. [42] A. Krymski, D. Blerkom, A. Andersson, N. Bock, B. Mansoorian, and E.R. Fossum,A high speed, 500 frame/s, 1024x1024 cmos active pixel sensor, in IEEE Symposium on VLSI Circuits Digest of Technical Papers, 1999, pp. 137 -138. [43] B. Fowler, J. Balicki, D. How, and M. Godfrey, Low fpn high gain capacitive transimpedance amplifier for low noise cmos image sensors, in Proceedings of the SPIE. The International Society for Optical Engineering, vol. 4306, 2001, pp. 68–77. [44] A.I. Krymski, N. Tu, A 9-v/lux-s 5000-frame/s 512!512 cmos sensor, IEEE Transactions on electron devices 50 (1) (2003) 136–143. [45] C.H. Small, “Lower costs open new application areas for cmos image sensors, Computer Design, International Edition, PennWell Publishing 37 (4) (1998) 37–40 [See also pp. 42–43]. [46] H.-S. Wong;, Technology and device scaling considerations for cmos imagers, IEEE Transactions on Electron Devices 43 (1996) 2131–2142. [47] S. Mendis, S. Kemeny, R. Gee, B. Pain, C. Staller, Q. Kim, E. Fossum, Cmos active pixel image sensor for highly integrated imaging systems, IEEE Journal of Solid-State Circuits 32 (1997) 187–197. [48] Y. Muramatsu, S. Kurosawa, M. Furumiya, H. Ohkubo, Y. Nakashiba, A signalprocessing cmos image sensor using a simple analog operation, IEEE Journal of Solid-State Circuits 38 (2003) 101–106. [49] K Yonemoto, H Sumi, A numerical analysis of a cmos image sensor with a simple fixed-pattern-noise-reduction technology, IEEE Transactions on Electron Devices 49 (2002) 746–753. [50] A cmos image sensor with a simple fixed-pattern-noise-reduction technology and a hole accumulation diode, IEEE Journal of Solid- State Circuits, vol. 35, pp. 
2038–43, 2000. [51] J.-S. Ho, M.-C. Chiang, H.-M. Cheng, T.-P. Lin, and M.-J. Kao, A new design for a 1280!1024 digital cmos image sensor with enhanced sensitivity, dynamic range and fpn, in 1999 International Symposium on VLSI Technology, Systems, and Applications., 1999, pp. 235–238. [52] R. Nair and K. Raj, Signal processing in cmos image sensors, in IEEE Workshop on Signal Processing Systems. SIPS 2000, 2000, pp. 801–810. [53] B. Sekerkiran, U. Cilingiroglu, A cmos k-winners-take-all circuit with o(n) complexity, IEEE Transactions on Circuits and Systems, II, Analog and Digital Signal Processing 46 (1) (1999) 1–5. [54] A. Demosthenous, S. Smedley, J. Taylor, A cmos analog winner- take-all network for large-scale applications, IEEE Transactions on Circuits and Systems I: Fundamental Theory and Applications 45 (3) (1998) 300–304. [55] J.-C. Yen, J.-I. Guo, H.-C. Chen, A new k-winners-take-all neural network and its array architecture, IEEE Transactions on Neural Networks 9 (5) (1998) 901–912. [56] N. Donckers, C. Dualibe, and M. Verleysen. (1999) Design of complementary low-power cmos architectures for looser-take-all and winner-take-all. [Online]. Available: citeseer.nj.nec.com/donck- ers99design.html. [57] D.Y. Aksin, A high-precision high-resolution wta-max circuit of o(n) complexity, IEEE Transactions on Circuits and Systems 49 (1) (2002) 48–53. [58] G. Indiveri, P. Oswald, and J. Kramer, An adaptive visual tracking sensor with a hysteretic winner-take-all network, in IEEE International Symposium on Circuits and Systems. ISCAS 2002, vol. 2, 2002, pp. II–324 –II–327. [59] F. Blais, M. Lecavalier, and J. Bisson, Real-time processing and validation of optical ranging in a cluttered enviroment, in The 7th International Conference on Signal Processing Applications & Technology. ICSPAT’96, Oct. 1996. [60] Z. Zhang and V. Prinet, A rogh-to-fine satellite image registration method with subpixel accuracy, in International Conference on Image Processing, vol. 
3, 2002, pp. III–385 –III–388. [61] L. Gonzo, M. Gottardi, F. Comper, A. Simoni, J.-A. Beraldin, F. Blais, M. Rioux, and J. Domey. Smart sensors for 3d digitization. [Online]. Available: http://www.vit.iit.nrc.ca/References/NRC- 44934.pdf. [62] F. Eugenio, F. Marques, and J. Marcello, Pixel and sub-pixel accuracy in satellite image georeferencing using an automatic contour matching approach, in International Conference on Image Processing, vol. 1, 2001, pp. 822–825. [63] H. Kwon, S.Z. Der, and N.M. Nasrabadi. (2002) Subpixel target detection for hyperspectral images using ica-based feature extrac- tion. [Online]. Available: http://www.asc2002.com/manuscripts/J/ JO-03.PDF. [64] S.S. Abeysekera, An efficient hilbert transform interpolation algorithm for peak position estimation, in Proceedings of the 11th IEEE Signal Processing Workshop on Statistical Signal Processing, 2001, pp. 417–420. [65] J. Xu, Z.P. Fang, A.A. Malcolm, H. Wang, “Camera calibration for 3-d measurement with micron level accuracy,” in 5th (2001) 2001. [66] P.C. Yu, S.J. Decaer, H.-S. Lee, C.G. Sodini, J. John, L. Wyatt, Cmos resistive fuses for image smoothing and segmentation, IEEE Journal of Solid-State Circuits 27 (4) (1992) 545. [67] S.-B. Park, A. Teuner, and B. Hosticaka, A motion detection system based on a cmos photo sensor array, in International Conference on Image Processing, 1998. http://www.elsevier.com/locate/mejo http://www.elsevier.com/locate/mejo http://citeseer.nj.nec.com/donckers99design.html http://citeseer.nj.nec.com/donckers99design.html http://www.vit.iit.nrc.ca/References/NRC-44934.pdf http://www.vit.iit.nrc.ca/References/NRC-44934.pdf http://www.asc2002.com/manuscripts/J/JO-03.PDF http://www.asc2002.com/manuscripts/J/JO-03.PDF M. Bigas et al. / Microelectronics Journal 37 (2006) 433–451 449 [68] S.-M. Sohn, M.G. Kim, S. 
Kim;, “A cmos image sensor (cis) with low power motion detection for security camera applications,” in IEEE International Conference on Consumer Electronics, ICCE. 2003, June 2003, p. 250 (2003) 251. [69] Y. Muramatsu, S. Kurosawa, M. Furumiya, H. Ohkubo, Y. Nakashiba, A signal-processing cmos image sensor using a simple analog operation, IEEE Journal of Solid-State Circuits 38 (1) (2003) 101–106. [70] S. Kemeny, B. Pain, L. Matthies, E. Fossum, Multiresolution image sensor, IEEE Transactions on Circuits and Systems for Video Technology 7 (1997) 575–583. [71] S. Kawahito, Y. Tadokoro, A. Matsuzawa, “Cmos image sensors with video compression,” in Proceedings of the ASP-DAC ’98. Asia and South Pacific Design Automation Conference., Feb., p. 595 (1998) 600. [72] Y.-C. Shih and C.-Y. Wu, The design of high-performance 128! 128 cmos image sensors using new current-readout techniques, in IEEE International Symposium on Circuits and Systems. ISCAS ’99, vol. 5, May 1999, pp. 168–171. [73] S. Miyatake, M. Miyamoto, K. Ishida, T. Morimoto, Y. Masaki, H. Tanabe, Transveral-readout architecture for cmos active pixel image sensors, IEEE Transactions on Electron Devices 50 (2003) 121–129. [74] T. Kwok, J.J. Zhong, T. Wilkinson, W.A. Crossland, Readout circuit for cmos active pixel image sensor, IEEE Electronics Letters 38 (2002) 317–318. [75] S. Miyatake, K. Ishida, T. Morimoto, Y. Masaki, and H. Tanabe, Transversal-readout cmos active pixel image sensor, in Proceedings of the SPIE - The International Society for Optical Engineering, vol. 4306, 2001, pp. 128–136. [76] Y. Degerli, F. Lavernhe, P. Magnan, P.J. Farre, Column readout circuit with global charge amplifier for cmos aps imagers, IEEE Electronics letters 36 (17) (2000) 1457–1459. [77] T. Watabe, M. Goto, H. Ohtake, H. Maruyama, M. Abe, K. Tanioka, N. Egami, New signal readout method for ultrahigh- sensitivity cmos image sensor, IEEE Transactions on Electron Devices 50 (2003) 63–69. [78] T. Watabe, M. Goto, H. Ohtake, H. 
Maruyama, and K. Tanioka, A new readout circuit for an ultra high sensitivity cmos image sensor,” in International Conference on Consumer Electronics. ICCE. 2002, 2002, pp. 42–43. [79] C.-C. Hsieh, C.-Y. Wu, F.-W. Jih, T.-P. Sun, Focal-plane-arrays and cmos readout techniques of infrared imaging systems, IEEE Transactions on Circuits and Systems for Video Technology 7 (1997) 594–605. [80] HPComponentsGroup. (1998) Noise sources in cmos image sensors. [Online]. Available: http://www.stw.tu- ilmenau.de/wff/beruf_cc/cmos/cmos_noise.pdf. [81] H. Tian, B. Fowler, A.E. Gamal, Analysis of temporal noise in cmos photodiode active pixel sensor, IEEE Journal of Solid-State Circuits 36 (1) (2001) 92–101. [82] O. Schrey, J. Huppertz, G. Filimonovic, A. Bussmann, W. Brockherde, B. Hosticka, A 1 k!1 k high dynamic range cmos image sensor with on-chip programmable region-of-interest readout, IEEE Journal of Solid-State Circuits 37 (2002) 911–915. [83] M. Schanz, C. Nitta, A. Bussmann, B.J. Hosticka, R.K Wertheimer, A high-dynamic-range cmos image sensor for automotive appli- cations, IEEE Journal of Solid-State Circuits 35 (2000) 932–938. [84] M. Loose, K. Meier, and J. Schemmel, Cmos image sensor with logarithmic response and self calibrating fixed pattern noise correction, in Proceedings of the SPIE - The International Society for Optical Engineering, vol. 3410, 1998, pp. 117–27. [85] R. Hauschild, M. Hillebrand, B.J. Hosticka, J. Huppertz, T. Kneip, and M. Schwarz, A cmos image sensor with local brightness adaptation and high intrascene dynamic range, in Proceedings of the 24th European Solid-State Circuits Conference. ESSCIRC ’98, 1998, pp. 308–11. [86] S. Kavadias, B. Dierickx, D. Scheffer, A. Alaerts, D. Uwaerts, J. Bogaerts, A logarithmic response cmos image sensor with on- chip calibration, IEEE Journal of Solid-State Circuits 35 (2000) 1146–1152. [87] S.-F. Chen, Y.-J. Juang, S.-Y. Huang, and Y.-C. 
King, Logarithmic cmos image sensor through multi-resolution analog-to-digital conversion, in International Symposium on VLSI Technology, Systems, and Applications, 2003, pp. 227–230. [88] J. Doge, G. Schonfelder, G.T. Streil, A. Konig, An hdr cmos image sensor with spiking pixels, pixel-level adc, and linear characteristics, Germany IEEE Transactions on Circuits and Systems II: Analog and Digital Signal Processing 49 (2002) 155–158. [89] N. Tu, R. Hornsey, and S.G. Ingram, Cmos active pixel image sensor with combined linear and logarithmic mode operation, in IEEE Canadian Conference on Electrical and Computer Engineering, vol. 2, May 1998, pp. 754–757. [90] W. Yang and A. Chiang, A full fill-factor ccd imager with integrated signal processors, in IEEE International Solid-State Circuits Conference, 37th ISSCC., 1990, pp. 218–219. [91] R. Reich, D. O’Mara, D. Young, A. Loomis, D. Rathman, D. Craig, S. Watson, M. Ulibarri, and B. Kosicki, High-fill-factor, burst-frame- rate charge-coulped device, in International Electron Devices Meeting, 2001. IEDM Technical Digest, Dec. 2001, pp. 24.6.1– 24.6.4. [92] S. Chamberlain, Photosensitivity and scanning of silicon image detector arrays, IEEE Journal of Solid-State Circuits 4 (1969) 333–342. [93] Y.-T. Fan, C.-S. Peng, and C.-Y. Chu, Advanced microlens and color filter process technology for the high-efficiency cmos and ccd image sensors, in Proceedings of the SPIE - The International Society for Optical Engineering, vol. 4115, 2000, pp. 263–74. [94] A. Bermak, A. Bouzerdoum, and K. Eshraghian, A high fill-factor native logarithmic pixel: simulation, design and layout optimization. in The 2000 IEEE International Symposium on Circuits and Systems. ISCAS 2000 Geneva, vol. 5, 2000, pp. 293–296. [95] P. Hasler, A. Bandyopadhyay, and P. Smith, A matrix transform imager allowing high-fill factor, in IEEE International Symposium on Circuits and Systems. ISCAS 2002., vol. 3, 2002, pp. 337–340. [96] H.-C. Chang and Y.-C. 
King, Tunable injection current compen- sation architecture for high fill-factor self-buffered active pixel sensor, in IEEE Asia-Pacific Conference on ASIC, 2002., 2002, pp. 101–104. [97] B. Dierickx, G. Meynants, and D. Scheffer, Near 100% fill factor cmos active pixels, in IEEE CCD and AIS workshop, 1997. [98] G. Meynants, B. Dierickx, and D. Scheffer, Cmos active pixel image sensor with ccd performance,” in Proceedings of the SPIE - The International Society for Optical Engineering, vol. 3410, 1998, pp. 68–76. [99] M. Furumiya, H. Ohkubo, Y. Muramatsu, S. Kurosawa, F. Okamoto, Y. Fujimoto, Y. Nakashiba, High-sensitivity and no-crosstalk pixel technology for embedded cmos image sensor, IEEE Transactions on Electron Devices 48 (2001) 2221–2227. [100] K. Yoon, C. Kim, B. Lee, D. Lee, Single-chip cmos image sensor for mobile applications, IEEE Journal of Solid-State Circuits 37 (2002) 1839–1845. [101] K.-B. Cho, A. Krymski, and E. Fossum, A 1.2 v micropower cmos active pixel image sensor for portable applications, in IEEE International Solid-State Circuits Conference, ISSCC., 2000, pp. 114-115. [102] E. Fossum, Low power camera-on-a-chip. using cmos active pixel sensor technology, in IEEE Symposium on Low Power Electronics, 1995, pp. 74–77. [103] B. Pain, G. Yang, B. Olson, T. Shaw, M. Ortiz, J. Heynssens, C. Wrigley, and C. Ho, A low-power digital camera-on-a-chip implemented in cmos active pixel approach, in Twelfth International Conference On VLSI Design, Jan. 1999, pp. 26–31. M. Bigas et al. / Microelectronics Journal 37 (2006) 433–451450 [104] H.P. Wong, R.T. Chang, E. Crabbe, P.D. Agnello, Cmos active pixel image sensors fabricated using a 1.8-v, 0.25 m cmos technology, IEEE Transactions on Electron Devices 45 (1998) 889–894. [105] B. Mansoorian, H.-Y. Yee, S. Huang, and E. Fossum, A 250mw, 60 frames/s 1280!720 pixel 9 b cmos digital image sensor, in IEEE International Solid-State Circuits Conference. ISSCC, 1999, pp. 312–13. [106] K.-B. Cho, A. 
Krymski, and E.R. Fossum, A 1.2 v micropower cmos active pixel image sensor for portable applications, in 2000 IEEE International Solid-State Circuits Conference, 2000, pp. 114–15. [107] C. Xu, W. Zhang, M. Chan, A low voltage hybrid bulk/soi cmos active pixel image sensor, IEEE Electron Device Letters 22 (2001) 248–250. [108] J. Ohta, H. Sakata, T. Tokuda, and M. Nunoshita, Low-voltage operation of a cmos image sensor based on pulse frequency modulation, in Proceedings of the SPIE - The International Society for Optical Engineering, vol. 4306, 2001, pp. 319–26. [109] K.-B. Cho, A.I. Krymski, E.R. Fossum, A 1.5-v 550(w 176!144 autonomous cmos active pixel image sensor, IEEE Transactions on Electron Devices 50 (2003) 96–105. [110] E.R.F. Kwang-Bo Cho, Alexander I. Krymski, A 3-pin 1.5-v 550 w 176!144 selfclocked cmos active pixel image sensor, in IEEE International Symposium on Low Power Electronics and Design, 2001, pp. 316-321. [111] S. Kawahito, D. Handoko, and Y. Tadokoro, A cmos image sensor with motion vector estimator for low-power image compression, in Proceedings of the 16th IEEE Instrumentation and Measurement Technology Conference. IMTC/99, vol. 1, 1999, pp. 65–70. [112] B.W.C. Huat, A 128x128 pixel standard cmos image sensor with electronic shutter, IEEE journal of solid-state circuits 31 (12) (1996) 180–181. [113] G. Yang, O. Yadid-Pecht, C. Wrigley, and B. Pain, A snap-shot cmos active pixel imager for low-noise, high-speed imaging, in International Electron Devices Meeting. IEDM ’98, Dec. 1998, pp. 45–48. [114] N. Stevanovic, M. Hillebrand, B. Hosticka, U. Iurgel, and A. Teuner, A high speed camera system based on an image sensor in standard cmos technology, in Proceedings of the 1999 IEEE International Symposium on Circuits and Systems VLSI. ISCAS’99., vol. 5, 1999, pp. 148–51. [115] G. Allan, D. Dattani, D. Dykaar, E. Fox, S. Ingram, S. Kaniasz, M. Kiik, B. Li, A. Pavlov, and Q. 
Tang, High-speed vga cmos image sensor, in Proceedings of the SPIE - The International Society for Optical Engineering, vol. 4306, 2001, pp. 111–18. [116] N. Stevanovic, M. Hillebrand, B.J. Hosticka, and A. Teuner, A cmos image sensor for high-speed imaging, in IEEE International Solid- State Circuits Conference, 2000, pp. 104–5, 449. [117] N. Stevanovic, M. Hillebrand, B.J. Hosticka, U. Iurgel, and A. Teuner, A high frame rate image sensor in standard cmos- technology, in Proceedings of the 24th European Solid-State Circuits Conference. ESSCIRC ’98., 1998, pp. 316-19. [118] S. Yoshimura, T. Sugiyama, K. Yonemoto, and K. Ueda, A 48 kframe/s cmos image sensor for real-time 3-d sensing and motion detection, in IEEE International Solid-State Circuits conference. ISSCC, 2001, pp. 94–5, 436. [119] A. Krymski, N.E. Bock, N. Tu, D. Blerkom, and E. Fossum, A high- speed, 240-frame/s, 4.1-mpixel cmos sensor, IEEE Transactions on electron devices, vol. 50, 2003. [120] J.S. Conde, M. Hillebrand, A. Teuner, N. Stevanovic, U. Iurgel, and B. Hosticka, A smart airbag solution based on a high speed cmos camera system, in International Conference on Image Processing. ICIP 99., vol. 3, 1999, pp. 930 –934. [121] Y. Oike, M. Ikeda, K. Asada, A cmos image sensor for high-speed active range finding using column-parallel time-domaain adc and position encoder, IEEE Transactions on Electron Devices 50 (2003) 152–158. [122] R. Guidash, T.-H. Lee, P. Lee, D. Sackett, C. Drowley, M. Swenson, L. Arbaugh, R. Arbaugh, F. Shapiro, and S. Domer, A 0.6 m cmos pinned photodiode color imager technology, in International Electron Devices Meeting, IEDM, Dec. 1997, pp. 927–929. [123] R. Nixon, S. Kemeny, C. Staller, and E. Fossum, 128!128 cmos photodiode-type active pixel sensor with on-chip timing, control and signal chain electronics, in Charge-Coupled Devices and Solids- State Optical Sensors V, Proc. SPIE, vol. 2415, 1995, pp. 117–123. [124] I. Takayanagi, J. Nakamura, E.-S. Eid, E. Fossum, K. 
Nagashima, T. Kunihiro, and H. Yurimoto, A low dark current stacked cmos-aps for charged particle imaging, in International Electron Devices Meeting, IEDM, Dec. 2001, pp. 24.2.1–24.2.4. [125] I. Takayanagi, J. Nakamura, E. Fossum, K. Nagashima, T. Kunihiro, H. Yurimoto, Dark current reduction in stacked-type cmos-aps for charged particle imaging, IEEE transactions on electron devices 50 (2003) 70–76. [126] P. Lee and C. Anagnopoulis, Mems/cmos integration and image sensors, in Eleventh Annual IEEE International ASIC Conference, 1998, pp. 381–381. [127] M. Tabet and R. Hornsey, Cmos image sensor camera with focal plane edge detection, in Canadian Conference on Electrical and Computer Engineering 2001, vol. 2, 2001, pp. 1129–33. [128] A. Teuner, M. Hillebrand, B. Hosticka, S.-B. Park, J.S. Conde, and N. Stevanovic, Surveillance sensor systems using cmos imagers, in International Conference on Image Analysis and Processing, 1999. Proceedings., 1999, pp. 1124–1127. [129] Y. Ni, A 256!256-pixel smart cmos image sensor for line based stereo vision applications, IEEE Journal of Solid-State Circuits 35 (2000) 1055–2000. [130] S. Habinc. (2001) Active pixel sensors for space applications. [Online]. Available: http://esapub.serin.esa.it/pff/pffv11n1/Habinc. pdf. [131] J. Goy, B. Courtois, J.M. Karam, and F. Pressecq, Design of an aps cmos image sensor for space applications using standard cad tools and cmos technology, in Proceedings of the SPIE - The International Society for Optical Engineering, vol. 4019, 2000, pp. 145–52. [132] C. Liebe, E. Dennison, B. Hancock, R. Stirbl, and B. Pain, Active pixel sensor (aps) based star tracker, in IEEE Aerospace Conference., vol. 1, 1998, pp. 119–127. [133] P. Holl, P. Fischer, P. Klein, G. Lutz, W. Neeser, L. Struder, and N. Wermes, Active pixel matrix for x-ray satellite missions, in Nuclear Science Symposium, 1999. Conference Record., vol. 1, 1999, pp. 171–175. 
[134] -, Active pixel matrix for x-ray satellite missions, IEEE Transactions on Nuclear Science, vol. 47, pp. 1421–1425, 2000. [135] R. Stirbl, B. Pain, T. Cunningham, B.H.J. Heynssens, and C. Wrigley, Advances in ultra-low power, highly integrated, active pixel sensor cmos imagers for space and radiation environments, in Proceedings of the SPIE - The International Society for Optical Engineering, vol. 4547, 2002, pp. 1–10. [136] E. El-Sayed, Design of radiation hard cmos aps image sensors for space applications, in Proceedings of the Seventeenth National Radio Science Conference. 17th NRSC’2000, 2000, pp. D5/1–9. [137] B. Hosticka, W. Brockherde, A. Bussmann, T. Heimann, R. Jeremias, A. Kemna, C. Nitta, O. Schrey, Cmos imaging for automotive applications, IEEE Transactions on Electron Devices 50 (2003) 173–183. [138] B. Wandell, A. Gamal, and B. Girod, Common principles of image acquisition systems and biological vision, in Proceedings of the IEEE, vol. 90(1), 2002, pp. 5–17. [139] M. Schwarz, R. Hauschild, B. Hosticka, J. Huppertz, T. Kneip, S. Kolnsberg, L. Ewe, H.K. Trieu, Single-chip cmos image sensors for a retina implant system, IEEE Transactions on Circuits and Systems II:Analog and Digital Signal Processing 46 (7) (1999) 870–877. http://esapub.serin.esa.it/pff/pffv11n1/Habinc.pdf http://esapub.serin.esa.it/pff/pffv11n1/Habinc.pdf M. Bigas et al. / Microelectronics Journal 37 (2006) 433–451 451 [140] H.K. Kim, G. Cho, Y.H. Shin, H.S. Cho, Development and evaluation of a digital radiographic system based on cmos image sensor, IEEE Transactions on Nuclear Science 48 (2001) 662–666. [141] S.W. Lee, H.K. Kim, Y.H. Shin, A 3-d x-ray microtomographic system with a cmos image sensor, IEEE Transactions on Nuclear Science 48 (2001) 1503–1505. [142] P. Lechner, R. Hartmann, P. Holl, T. Johannes, P. Klein, J. Kollmer, G. Lutz, R. Richter, L. Struder, P. Fischer, M. Trimpl, J. Ulrici, N. Wermes, A. Castoldi, E. Gatti, and P. 
Rehak, Active pixel sensor for x-ray imaging spectroscopy, in Nuclear Science Symposium Conference Record, IEEE, vol. 1, 2001, pp. 15–19. [143] R. Nixon, S.E. Kemeny, and E. Fossum, 256!256 cmos active pixel sensor cameraon-chip, in IEEE International Solid-State Circuits Conference, ISSCC96, Feb. 1996, pp. 178–9, 440. [144] M. Segawa, M. Ono, S. Musha, Y. Kishimoto, A. Ohashi, A cmos image sensor module applied for a digital still camera utilizing the tab on glass (tog) bonding method, IEEE Transactions on Advanced Packaging 22 (1999) 160–165. [145] D. Bursky, Cmos megapixel image sensors deliver nearly noise-free pictures, Electronic Design. Penton Publishing, vol. 47, no. 24, pp. 66–68, 1999. [146] H. Ihara, H. Yamashita, I. Inoue, T. Yamaguchi, N. Nakamura, and H. Nozaki, A 3.7!3.7 m/sup 2/ square pixel cmos image sensor for digital still camera application, in 1IEEE International Solid-State Circuits Conference. ISSCC, 1999, pp. 182–3. [147] J. Forest and J. Salvi, A review of laser scanning three-dimensional digitisers, in IEEE/RSJ International Conference on Intelligent Robots and System, vol. 1, 2002, pp. 73 –78. [148] P. Mengel, G. Doemens, and L. Listl, ;Fast range imaging by cmos sensor array through multiple double short time integration (mdsi), in International Conference on Image Processing, 2001., vol. 2, 2001, pp. 169–172. [149] B. Curless, From range scans to 3d models, Computer Graphics 33 (4) (1999). [150] M. Johannesson and H. Thorngren. (2001) Advances in cmos technology enables higher speed true 3d-measurements. [Online]. Available: https://www.machinevisiononline.org/public/articles/ ivp1.pdf. [151] Y. Oike, M. Ikeda and K. Asada, High-speed position detector using new row-parallel architecture for fast collision prevention system, Proceedings of the 2003 International Symposium on Circuits and Systems, 2003. ISCAS 03., vol. 4, pp. 788 –791, 2003. [152] W. Brockherde, B.J. Hosticka, M. Petermann, M. Schanz, and R. 
Endoscopic Photography: Digital or 35 mm?

Patrick C. Melder, MD; Eric A. Mair, MD

Objective: To compare off-the-shelf digital imaging equipment with a standard single-lens reflex 35-mm endoscopic camera in a busy pediatric ears, nose, and throat setting.

Design: Two digital cameras with an endoscope adapter and a step-down ring were evaluated to obtain optimal settings for digital endoscopic photography. The equipment was used in various clinical and surgical settings, including otoscopy, sinonasal endoscopy, laryngoscopy, and bronchoscopy. The overall quality, color, brightness, and diagnostic quality of the endoscopic digital photographs were compared with those of the single-lens reflex 35-mm flash-generated photographs by experienced endoscopists. Cost analysis and ease of use were also compared.
Subjects: Initial digital endoscopic settings were formulated from cadaveric tests. These settings were then studied in multiple patients during endoscopy.

Results: Endoscopic digital photography resulted in high-quality images in all settings. Digital images were comparable to 35-mm images. The digital system was easier to use and less expensive than the 35-mm system.

Conclusions: We introduce a simple, inexpensive, and easily available endoscopic digital photography system. Digital photography offers numerous advantages over analog photography in a clinical practice. Digital imaging and archiving are more durable and easier to incorporate into patient records and clinical presentations. As the demand for high-quality digital imaging increases, easy-to-use, inexpensive digital endoscopic photography will soon replace 35-mm camera technology.

Arch Otolaryngol Head Neck Surg. 2003;129:570-575

The printing press opened the door for the religious and secular revolutions that occurred during the Reformation in northern Europe and the Renaissance in southern Europe in the 15th century, respectively. Likewise, the digital age has opened the door for new ideas, and for expressions of ideas, not possible or imaginable before. Like the printing press, digital information allows users to distribute their ideas to the masses, but on an order of magnitude far greater than can be achieved with hard copy. At the core of a human's being is the ability to communicate thoughts, feelings, and experiences. Medicine is no island in this sea of expression, and often words cannot describe what the human eye can see. For centuries, visual documentation has been a cornerstone of medical education and experience. Physicians/surgeons could only communicate as well as they could visually describe their experiences.
Fortunately, for the surgeon who is not an anatomist or artist, the 20th century ushered in technologies to enhance our ability to communicate our visual experience.1 But even though this new technology ushered in a new era of communication, as in the 15th century, the photograph in hard-copy format is limited in that it is an analog. Today, the means to organize and collate this information are often digital (databases); however, the commodity, the image itself, is an analog and is difficult to distribute in, or interface with, the digital age.

The surgeon has the means to take digital photographs. This, combined with the use of digital organizational and distribution techniques or enhancement programs, allows the endoscopist to fully interface with the digital age. This new avenue for describing our visual experience has been tremendous. But for decades past, the tried-and-true 35-mm system has sat on a pedestal that has yet to be toppled, although knocks at the base can be heard. Endoscopists face unique challenges in documenting their visual experience. Their extended eye, the endoscope, has required special adaptation to interface with the 35-mm single-lens reflex (SLR) camera.

ORIGINAL ARTICLE

From the Department of Otolaryngology–Head and Neck Surgery, Walter Reed Army Medical Center, Washington, DC. The authors have no relevant financial interest in this article.

(REPRINTED) ARCH OTOLARYNGOL HEAD NECK SURG/VOL 129, MAY 2003. ©2003 American Medical Association. All rights reserved.
These adaptations can be expensive, and time-tested experience is required to master the art of endoscopic photography.2 With the introduction of digital cameras, camera makers (some never before camera makers) introduced devices that did not and do not look like the time-tested 35-mm camera, a psychological barrier to its acceptance. None of this was useful for endoscopists as they waited on the sideline to see how they could interact with this new technology. Fortunately, inexpensive digital cameras combined with inexpensive modifications afford the endoscopist the ability to participate in the digital age and the great exchange of ideas that is so much a part of this new age.

One of us (E.A.M.) has used the standard 35-mm SLR camera for endoscopic pictures for more than 15 years. This has continued to be standard practice in the Department of Otolaryngology–Head and Neck Surgery at Walter Reed Army Medical Center. However, with the advent of affordable digital cameras, another one of us (P.C.M.) has attempted to use digital cameras in endoscopy for less than 18 months. We report herein our experience in obtaining the necessary equipment and skill to take diagnostic-quality digital endoscopic images. We then compared images taken with the 35-mm camera and those taken with the digital camera in various clinical and operative settings.

METHODS

First, the initial purchase of a digital camera required researching the means to attach endoscopes to digital cameras. Many digital cameras are marketed toward the point-and-shoot consumer market and do not offer the ability to attach differing lenses and filters. Second, the image quality of the digital photograph had to be of sufficient resolution (pixels) to result in diagnostic-quality photographs. Third, the external flash attachment (hot shoe) was researched as a requirement. Also, a mode for manual operation of the camera was a necessity. Last, types of storage media were evaluated.
Because of the universal appeal and cost of CompactFlash cards, this storage medium was chosen.

One camera (Epson PhotoPC 3000Z 3.3 Megapixel camera; Epson USA, Long Beach, Calif) met the initial criteria for purchase. A lens adapter is shipped with the camera, which allows the endoscopist to use Epson USA or third-party lenses and filters to modify images. If lenses or filters are attached directly to the camera housing, the lens mechanism will not fully extend. Most midrange digital cameras with a zooming capability have a retractable lens that retracts into the camera body when the unit is off. During normal operation, the lens is fully extended, hence the need for the lens adapter, which fits like a sleeve over the extended lens (Figure 1). The lens adapter shipped with the camera has a 46-mm step-up to a 49-mm thread. Initially, a borescope adapter (item STI10213; Scope Technology, Pomfret, Conn) was purchased. The endoscopic adapter can be purchased in a 46- or 37-mm thread. Because the 46-mm thread more closely approximated the 49-mm thread, it was purchased. To accommodate the change from 46 to 49 mm, a 49- to 46-mm step-down ring was purchased at a local camera store (widely available). The final adaptation, in linear form, was as follows (final thread in millimeters): camera (Epson PhotoPC 3000Z 3.3 Megapixel camera) (46 mm) → lens adapter (49 mm) → step-down ring (46 mm) → borescope or endoscopic adapter (46 mm). The endoscope is then placed into the adapter, and a screw is used to tighten the adapter around the endoscope.

The initial fittings with the borescope adapter were poor. The borescope adapter is an industrial-grade product, not a medical-grade product. The endoscope would move about during photography, and 2 hands were required to center and stabilize it.
Later, an endoscopic adapter (item 35mmCA; Precision Optics, Gardner, Mass) was chosen; this adapter has a spring release like many adapters commonly used in our practice. This adapter had matching 49-mm threads so it attached directly to the lens adapter (Epson USA) without step-up or step-down ring modifications.

After obtaining the proper equipment, a black box experiment was conducted. A cadaveric temporal bone was placed into a box and closed. On one side, a small valve was made in which to introduce the endoscope. Tape was applied to the endoscope with millimeter markings to determine the distance from the temporal bone. A 0° 4-mm endoscope (Storz Hopkins Rod II; Karl Storz, Culver City, Calif) was coupled to the camera, and a light source (Storz Xenon 175 model 20132020; Karl Storz) with a standard light cable was used to take pictures of the object, which was 11 cm from the valve. All modes of the camera were tested: fully automatic, program, and manual (automatic or manual exposure and aperture priority). Various flash modes were used: forced flash, flash off, and automatic flash. In addition, metering was adjusted. Likewise, the light intensity from the light source was adjusted.

After the initial work in the black box, the camera was used in various clinical settings to include airway and nasal endoscopy and otoscopy. Cadaveric airway photographs were taken as well. Images were also taken with other cameras (Canon Power Shot G1 3.34 Megapixel and Canon Power Shot G2 4 Megapixel cameras; Canon USA, Lake Success, NY) with similar attachments.

Photographs (35 mm) were then taken, of the same patients, using another camera (Olympus OM88 35mm SLR camera) using a special zoom lens (Karl Storz) with an endoscopic adapter (Karl Storz 560 QC; Karl Storz Endoscopy, Culver City), a synchronization cable, a fluid light cable, a light source, and a flash generator.
Because techniques with the 35-mm camera are well established by one of us (E.A.M.), photography was limited to clinical or cadaveric work. There was no need to obtain initial settings in the black box.

The following endoscopes (Karl Storz) were used during the evaluation period: 4 mm, 0° (model 7208AA); 4 mm, 0° (model 27005AA); 4 mm, 70° (model 7208CA); 10 mm, 0° (model 26033AP); and 4 mm, 120° (model 8700E).

Figure 1. Digital endoscopic equipment (from left to right): CompactFlash card, camera (Canon Power Shot G2 4 Megapixel camera; Canon USA, Lake Success, NY) with lens fully extended (normal operation), lens adapter, step-down ring, endoscopic adapter, and endoscope. A fiberoptic light cord is running along the top of the photograph.

(REPRINTED) ARCH OTOLARYNGOL HEAD NECK SURG/VOL 129, MAY 2003, WWW.ARCHOTO.COM. ©2003 American Medical Association. All rights reserved. Downloaded from https://jamanetwork.com/ by a Carnegie Mellon University user on 04/05/2021.

To obtain images large enough to view, the camera had to be in full optical (×3) and digital (×2) zoom. The camera (Canon Power Shot G2 4 Megapixel camera) allows greater than ×6 zooming, but ×6 is sufficient to fill the viewfinder of the camera. Zooming much past ×6 to ×8 will decrease picture quality. The best digital images were obtained using an International Standards Organization setting of 50 to 100.

Two different light cables were used. The one offering the most consistent results was the standard fiberoptic light cord. The fluid-filled light cord (used for 35-mm photography) often flooded the image with light and washed out the image, even with the light source on its lowest setting. For these low-light situations, alterations of the previously mentioned variables are needed, as is adjustment of the International Standards Organization setting. The lowest International Standards Organization setting possible for the camera should be chosen for this environment.
In addition, using the lowest possible International Standards Organization setting will minimize noise in the photograph.3

After digital or 35-mm photography, the images were processed in the standard fashion: images from the digital camera were downloaded to the computer immediately after shooting for review and analysis, and the 35-mm slide film was developed through our photography department.

A total of 29 images were obtained with the digital camera and compared with the 35-mm slides. Each image was then assessed by experienced endoscopists as to (1) overall image quality, (2) brightness, (3) color, and (4) diagnostic quality (ie, can healthy structures or abnormalities be identified?). Overall image quality was assessed on a scale from 1 to 5 (1 indicates excellent; and 5, poor). Brightness was evaluated on a scale from 1 to 3 (1 indicates adequate; 2, too bright; and 3, too dark). Overall color was evaluated as either 1 (good) or 2 (poor). Last, depending on the anatomic site being photographed, the image was judged according to its ability to convey healthy anatomic structures and/or pathologic conditions. For nasal images, the image was evaluated to determine if (a) turbinates, (b) the uncinate process, (c) the eustachian tube orifice, and (d) the nasopharynx could be identified. Laryngeal images were evaluated for the presence of (a) the epiglottis, (b) a false vocal fold, (c) the ventricle, and (d) true vocal folds. Otoscopic images were evaluated for normal tympanic membrane landmarks: (a) the short process of the malleus, (b) the umbo, (c) the pars flaccida, and (d) the light reflex. Any structure altered by abnormalities was noted. If healthy anatomic and/or pathologic conditions could be assessed, the image was assigned a score of 1; and if the diagnostic quality was poor, a score of 2 was applied. This scoring was done for digital and 35-mm slides.
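The four-item grading arithmetic above can be sketched in code. This is a hypothetical illustration, not the authors' software; the function name and the example ratings are ours:

```python
# Hypothetical sketch of the grading arithmetic described above.
# Each observer rates an image on four items; one observer's total
# therefore ranges from 4 (best) to 12 (poorest).

def observer_score(overall, brightness, color, diagnostic):
    """Sum one observer's four item ratings for a single image.

    overall:    1 (excellent) to 5 (poor)
    brightness: 1 (adequate), 2 (too bright), 3 (too dark)
    color:      1 (good), 2 (poor)
    diagnostic: 1 (adequate), 2 (poor)
    """
    assert 1 <= overall <= 5 and 1 <= brightness <= 3
    assert color in (1, 2) and diagnostic in (1, 2)
    return overall + brightness + color + diagnostic

best = observer_score(1, 1, 1, 1)     # 4, the best single-observer total
poorest = observer_score(5, 3, 2, 2)  # 12, the poorest single-observer total

# As described in the Results, the three observers' totals for an image
# are then summed, giving a combined score between 12 and 36.
ratings = [(1, 1, 1, 1), (2, 1, 1, 1), (1, 2, 1, 2)]  # invented ratings
combined = sum(observer_score(*r) for r in ratings)   # lower is better
```

Because every item's best rating is 1, a lower total always corresponds to a better image, which is why the study treats the minimum score as the ideal.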
With the grading scale, the photograph with the lowest score was the better image. Accordingly, the best score available for each individual photograph is a 4, and the poorest is a 12. A side-by-side comparison of these images was not performed. Given the unique nature and mode of viewing these images (computer screen or carousel projection), viewing images in the same setting would unduly bias the observer. Therefore, in 2 separate settings with the same observers, either digital or analog images were viewed.

RESULTS

The initial results with the digital camera were disappointing. The images were often blurred. This improved with manipulating the various shutter sizes and speeds. Images taken in a program or an automatic mode were uniformly poor in quality. In adapting fittings with step-up and step-down rings, the distance from the endoscope to the camera lens will increase, thereby affecting final image quality.

The cadaveric temporal bone work was a good start; however, transferring what was learned to a clinical setting was difficult. The bare white temporal bone in the black box evaluation produced different images than those obtained in situ. Manual camera operation using manual exposure with aperture priority offered the greatest control over the camera. The flash was placed on automatic and the light source was variously adjusted, but a low light output of 10% offered the best results (for the black box). Differing light adjustments in situ were needed for differing image scenarios. Metering changes had no significant impact on image quality in this setting. The best aperture setting (F number) was F2, the widest available aperture of the lens. Shutter speeds of 1/50 to 1/225 second produced the best images.

The total score for each photograph from 3 observers was added, and the added score was the final score of the photograph.
Hence, with the combined scores of each photograph, the best possible score is a 12 and the poorest is a 36. After each photograph was graded and assigned a numerical score, the individual scores for all of the photographs in a given clinical scenario were summed and divided by the total to give an average score for the digital and 35-mm analog images.

Of the 35-mm images reviewed (n = 29), the lowest score (best quality) was a 12 and the highest score (poorest quality) was a 29. The total score for all 35-mm images was 521. The final average score for the 35-mm images was 18. The lowest score of the digital images was a 12; and the highest score, a 22. The final average score for the digital images was 17 (Table 1). Although not graded, a uniform comment made by the panel was that the 35-mm slides offered greater depth of field.

COMMENT

Taking endoscopic photographs with the digital camera involves completely different optics than those of an SLR camera. The pupil sizes of the endoscopes and camera must be considered, as must the distance of the endoscope from the camera lens. The exit pupil of the endoscope must closely approximate, if not match, the entry pupil of the camera lens to produce quality photographs. Also, the lens of the endoscope must be as close to the lens of the camera as possible to increase clarity and depth of field of the image.

D'Agostino et al4 noted that photographic documentation of the larynx (practically speaking, photographic endoscopy) should fulfill 6 criteria: (1) The system should be easy to use in the operating room and the clinic. (2) It should be user friendly. (3) It should require only 1 operator. (4) When used in the operating room, it should involve minimal disruption of the procedure. (5) The cost of the equipment should be reasonable. (6) It should allow use of a commonly available film type. Benjamin2(p271) adds that, "the more expensive the equipment, the more reliable it is and the better the image." However, he did add that the photographic technique should not be so cumbersome that a novice could not use the system.

Digital photography is an exciting and emerging technology for use in otolaryngology. There are many advantages in using digital imaging over 35-mm analog slides. First, images can be assessed intraoperatively for quality. The initial fear was that the review image on the liquid crystal display on the camera may look deceptively better on the smaller screen than on a computer or in a presentation; however, in a short time, we developed an appreciation for the quality of the final digital photograph by judging the appearance of the photograph on a liquid crystal display. On the other hand, the total time for a 35-mm image to make it from the surgeon's camera to processing and developing may take hours or days, and the end result may not be the desired result, with the opportunity for capturing the image lost at the expense of surgeon teaching or training.

Second, images can be shared or manipulated via a wide platform of programs and formats. Having the ability to directly import images into presentation software or publish directly onto the Web affords surgeons the ability to instantly share their visual experience. Furthermore, time-saving software is available for filing and organizing digital images that enhance surgeon education.5 In addition, any flaws in technique or unwanted noise in the photograph can be eliminated with inexpensive, popular, off-the-shelf image-editing software.
The concern and prohibition, though, in altering the true image and, thus, the true experience is always a factor in dealing with digital images. In fact, this concern extends to all digital data. Watermark technologies are being developed to counter this concern.6 In an attempt to allay this concern, it may be appropriate in public forums or before acceptance in peer-reviewed journals that the author or presenter be required to sign a full disclosure statement. Appropriate disclosure may include information about the original source of the image, method of enhancement (if any), and denial of intent to alter photographic or surgical outcome.

Third, images may be archived indefinitely on magnetic or optical media. With the exception of Kodachrome (Eastman Kodak Co, Rochester, NY) slides, photographic slide film loses quality with time.7 Digital images may be stored on a wide variety of magnetic and optical media at minimal cost, indefinitely. Also, images can be shared across a wide platform from the camera itself to laptop/desktop, handheld device, Internet, and even high-quality photographic paper suitable for framing. Yet another advantage in digital archiving is the organizational and retrieval methods available for the image. Images can be sorted, stored, and retrieved by key word with inexpensive (even free) popular image archival software.

Also, digital cameras and technologies to develop digital data are inexpensive (Table 2). The cost savings in using or starting to use digital technology are widely known.7 In particular, the initial costs for an endoscopist to purchase special 35-mm equipment can be substantial. The cost of our digital endoscopic system was less than $1000 (less the light source). The camera at the initial time of purchase was $777, and the endoscopic adapter can be purchased for $200. The additional step-up or step-down rings are nominal in cost.
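The digital-system figures quoted above can be tallied in a short sketch. The prices are the article's (2002 US dollars); the value assumed for the rings is ours, since the text only calls them "nominal":

```python
# Itemized cost of the digital endoscopic system, excluding the light
# source, using the prices quoted in the text (2002 US dollars).
digital_system = {
    "camera (Epson PhotoPC 3000Z)": 777,
    "endoscopic adapter": 200,
    "step-up/step-down rings": 15,  # "nominal in cost" -- assumed figure
}

total = sum(digital_system.values())
print(total < 1000)  # the article's "less than $1000" claim holds
```

Even with a generous allowance for the rings, the total stays comfortably under the $1000 figure the authors report.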
The software needed for image enhancement is often bundled with most of the digital cameras on the market. With time, many of these cameras decrease dramatically in cost as newer models replace them. Gordon Moore, cofounder of Intel, initially predicted in 1965 that data density would double every 12 months.8 It has doubled approximately every 18 months. With this doubling of data density comes a concomitant decrease in cost.

Table 1. Scores for Images Produced Using 2 Methods*

Image No.              35-mm Camera   Digital Camera
 1                          15             17
 2                          16             16
 3                          12             13
 4                          13             12
 5                          16             15
 6                          16             15
 7                          16             15
 8                          15             16
 9                          20             22
10                          13             18
11                          13             15
12                          21             17
13                          15             19
14                          14             16
15                          14             20
16                          18             20
17                          15             16
18                          15             15
19                          17             21
20                          19             19
21                          14             17
22                          24             16
23                          22             13
24                          22             15
25                          26             13
26                          29             16
27                          28             20
28                          23             18
29                          20             17
Total                      521            482
Final averaged score        18             17

*The best possible score is 12; and the poorest, 36.

Table 2. Characteristics of the 2 Endoscopic Methods*

Characteristic            35-mm Single Lens Reflex Endoscopy   Digital Endoscopy
Equipment costs           +                                    ++++
Ease of use               +                                    ++++
Universal utility†        +                                    ++++
Image quality             ++++                                 ++++
Portability‡              +                                    ++++
Image-processing costs    +                                    ++++
Image storage costs§      ++                                   ++++
Image storage life‖       +++                                  ++++

*Abbreviations: +, poor; ++, fair; +++, good; ++++, excellent.
†Compared with ease of use in word processing, presentation software, image enhancement software, and Internet applications.
‡All equipment associated with 35-mm endoscopy is usually in a tower, which limits its transport outside of the clinic or operating room.
§The differential here will only favor digital endoscopy as storage costs continue to decrease (see the "Comment" section).
‖Unless Kodachrome (Eastman Kodak Co, Rochester, NY) slides, 35-mm slides lose quality with time.
This rule dictates that with an increase in data density, price moves inversely. Hence, it is inevitable that digital cameras will become more and more powerful and less and less expensive.

Furthering the decrease in cost of digital photography are the digital memory cards that replace the film and developing cost, not to mention the wait in developing 35-mm slides. This digital film comes in a wide variety of formats, some of which are proprietary and some of which are used industry-wide: (a) CompactFlash Types I & II, (b) SmartMedia Cards, (c) Memory Sticks (Sony Corp USA, New York, NY), (d) compact discs, and (e) Secure Digital Cards. What really makes using this digital film intriguing is being able to take photographs in one instant and taking the digital film out of the camera and inserting it into another camera, a personal digital assistant, a handheld computer, a laptop, or a desktop personal computer for review and manipulation.

The application of the Moore law8 holds for this digital film as well. In addition, the compression format used in storing digital data affects storage capacity, which further enhances the cost advantage of digital imaging. Most cameras allow the user to capture the maximum amount of resolution in an image with little or no compression in a tagged image file format or an equivalent format. This greatly reduces the ability of the digital film to hold many photographs. To improve storage capacity and enhance cost savings, cameras can store data in a compressed format, commonly in the joint photographic experts group format. To further increase storage capacity, the resolution of images can be reduced when saving them on the camera. Saving images in a tagged image file format is "lossless," ie, as much data as can be captured from the original digital image are preserved in this format vs storage in a joint photographic experts group format (which is "lossy").
This has implications in saving, editing, and resaving images because it will affect final image quality.

The cost analysis would not be complete without an evaluation of the use and sharing of 35-mm slide technology. The unique components needed for 35-mm endoscopic photography are as follows: (1) a 35-mm camera with metering, motor wind, and a plain and clear focusing screen; (2) a special zoom lens with variable focal length from 70 to 140 mm; and (3) a synchronization cable with a flash generator and a fluid light cable (Figure 1).5 The cost of the SLR camera with the endoscopic adapter and lens is about $5300. The total cost of the flash generator, light source, and synchronization cable is about $9100. The fluid light cable is $900, for a total cost of around $15 300. After the initial cost of the camera and setup, the endoscopist must contend with years of film and developing costs. Also, the time and space required to store and retrieve these slides will affect the final cost of a 35-mm system. Also, for the endoscopists to input their 35-mm images into presentation software would require the expense, and time, of using a scanner. Alas, if the SLR camera is a dinosaur that will not die, some camera makers offer digital SLR camera bodies that allow the use of the lenses and endoscopic adapter of the previously mentioned system. This comes at a price, though. One digital SLR camera body (without lenses and adapter) is around $2000. And the technique and knowledge of taking endoscopic photographs for these cameras is far from being resolved.

In addition, digital photography is completely portable. The digital system we use is a system that can be easily used in the operating room, in the clinic, or at the bedside. In contrast, the 35-mm technology requires so much more in the way of setup and equipment (flash generator and special cable).
It is difficult to transplant its use from the fixed setting of the operating room or clinic to the hospital ward.

Finally, image quality for diagnostic purposes is comparable to 35-mm slides, as demonstrated by our results. This is the last knock at the base of the pedestal of 35-mm analog photography. Although the resolution of digital photographs is not at the level of analog photographs, the question must be asked, "How much do you need?" The goal of any mode of visual presentation, whether it be on photographic paper, a slide photograph, on a computer screen, or a videotape projection, should be to render a photograph that is "photorealistic." It should have the ability to convey and store the visual experience, as seen with the endoscope.

For our camera, the pixel resolution is 3.3 megapixels, or more than 3 million pixels. This means that the camera can capture an image with 2048 × 1536 pixels (actual and advertised pixels do not match) in an uncompressed format. In this format, the images can be printed out on 28 × 36-cm (11 × 14-in) photographic paper for a photorealistic appearance. To view images on an upscale computer screen would require input of anywhere from 1280 × 1024 to 800 × 600 pixels. Computer projection devices require 1024 × 768 pixels of resolution for projection. No matter the means of visual presentation, a 3-megapixel camera or higher is more than adequate for capturing the visual data. In fact, our results demonstrate that overall image quality with the digital photographs is comparable to the 35-mm slides (Figure 2 and Figure 3).

Figure 2. A digital photograph, with a 120° endoscope, of the nasopharynx and adenoidal tissues. A catheter used for retracting the palate is seen in the midline.

There was a noticeable difference in the quality of the otologic pictures between the 35-mm and the digital cameras. Images 22 through 28 (Table 1) were all otologic pictures (Figure 4). The picture quality of the 35-mm images was uniformly poorer than that of the digital photographs. Even though the digital photographs matched or exceeded the 35-mm images, all of the digital images lacked significant depth of field. This may explain the excellent picture quality in the middle ear, where depth of field is limited by anatomic features. The lack of depth of field in the other photographs can be explained by the shutter size needed for clear images and the distance of the camera from the endoscope.

However, with a little serendipitous ingenuity, the endoscopist can obtain much information from this simple system. This is best illustrated by the inexpensive and simplistic means by which we found that digital cameras (or camcorders) allow an endoscopist to perform videoendoscopy at a fraction of the cost of a tower system. Most digital cameras come with packaged audio/visual cables that connect the camera directly to a television for the viewing of images in a slide show format. In doing this, a live image is routed from the digital camera to a standard television. Attaching a videocassette recorder then allows the endoscopist to have videotape documentation for around $1200.

Before photography, it was the pen and paper (or its equivalent). Certainly, these were adequate to convey medical images over millennia. What needs to be decided is, what advantage does digital photography offer over 35-mm technology?9 Given a similar paradigm shift, the answer was obvious for surgeons who first took photographs in the late 19th and the early 20th centuries to record their visual experiences. The answer should be as obvious now as well.
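The resolution figures in the Comment section can be sanity-checked with a short calculation (ours, not the authors'):

```python
# Pixel counts quoted in the text: a 2048 x 1536 uncompressed capture
# versus common display and projection resolutions.
capture = 2048 * 1536      # actual pixel count (marketed as 3.3 MP)
projector = 1024 * 768     # resolution required by projection devices
screen_hi = 1280 * 1024    # upper end of the quoted computer-screen range

print(capture)               # about 3.1 million pixels
print(capture // projector)  # the capture exceeds projection needs severalfold
```

The capture resolution comfortably exceeds both display targets, supporting the text's conclusion that a 3-megapixel camera is more than adequate for screen or projection use.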
Accepted for publication October 4, 2002.

This study was presented at the annual meeting of the American Society of Pediatric Otolaryngology, Boca Raton, Fla, May 13, 2002.

Corresponding author and reprints: Patrick C. Melder, MD, Department of Otolaryngology–Head and Neck Surgery, Walter Reed Army Medical Center, 6900 Georgia Ave NW, Washington, DC 20307-5001 (e-mail: pmelder@pol.net).

REFERENCES

1. Benjamin B. Indirect laryngeal photography using rigid telescopes. Laryngoscope. 1998;108:158-161.
2. Benjamin B. Art and science of laryngeal photography. Ann Otol Rhinol Laryngol. 1993;102:271-282.
3. Indman PD. Documentation in endoscopy. Obstet Gynecol Clin North Am. 1995;22:605-616.
4. D'Agostino M, Jiang J, Hanson D. Endoscopic photography: solving the difficulties of practical application. Laryngoscope. 1994;104:1045-1047.
5. Mendelsohn MS. Using a computer to organize digital photographs. Arch Facial Plast Surg. 2001;3:133-135.
6. Teuffel W, Stettin J. Electronic documentation in endoscopy: present status and future perspectives from a company standpoint. Endoscopy. 2001;33:276-279.
7. Galdino GM, Swier P, Manson PN, Vander Kolk CA. Converting to digital photography: a model for a large group or academic practice. Plast Reconstr Surg. 2000;106:119-124.
8. Moore's law. Available at: http://www.webopedia.com/TERM/M/Moores_Law.html. Accessed March 12, 2002.
9. Benjamin B. Photography of severe laryngeal obstruction. Ann Otol Rhinol Laryngol. 2000;109:829-831.

Figure 3. A digital photograph, with a 10-mm 0° endoscope, of a healthy pediatric larynx.

Figure 4. A digital photograph, using a 4-mm sinus endoscope, of the middle ear in preparation for a stapedectomy. The stapes, oval window, stapedial tendon, fallopian canal, and Jacobson nerve are well visualized.
Radiographic scoring methods in hand osteoarthritis - a systematic literature search and descriptive review

Osteoarthritis and Cartilage 22 (2014) 1710-1723

A.W. Visser a *, P. Bøyesen b, I.K. Haugen b, J.W. Schoones c, D.M. van der Heijde a b, F.R. Rosendaal d, M. Kloppenburg a d

a Department of Rheumatology, Leiden University Medical Center, Leiden, The Netherlands
b Department of Rheumatology, Diakonhjemmet Hospital, Oslo, Norway
c Walaeus Library, Leiden University Medical Center, Leiden, The Netherlands
d Department of Clinical Epidemiology, Leiden University Medical Center, Leiden, The Netherlands

Article history: Received 21 January 2014; Accepted 30 May 2014
Keywords: Osteoarthritis; Hand; Radiography; Systematic review

* Address correspondence and reprint requests to: Visser A.W., Leiden University Medical Center, Department of Rheumatology, C1-R, P.O. Box 9600, 2300 RC Leiden, The Netherlands. Tel: 31-71-5263265; Fax: 31-71-5266752. E-mail address: a.w.visser@lumc.nl (A.W. Visser).

http://dx.doi.org/10.1016/j.joca.2014.05.026
1063-4584/© 2014 Osteoarthritis Research Society International. Published by Elsevier Ltd. All rights reserved.

SUMMARY

Objective: This systematic literature review aimed to evaluate the use of conventional radiography (CR) in hand osteoarthritis (OA) and to assess the metric properties of the different radiographic scoring methods.

Design: Medical literature databases up to November 2013 were systematically reviewed for studies reporting on radiographic scoring of structural damage in hand OA. The use and metric properties of the scoring methods, including discrimination (reliability, sensitivity to change), feasibility and validity, were evaluated.

Results: Of the 48 included studies, 10 provided data on reliability, 11 on sensitivity to change, four on feasibility and 36 on validity of radiographic scoring methods. Thirteen different scoring methods have been used in studies evaluating radiographic hand OA. The number of examined joints differed extensively and the obtained scores were analyzed in various ways. The reliability of the assessed radiographic scoring methods was good for all evaluated scoring methods, for both cross-sectional and longitudinal radiographic scoring. The responsiveness to change was similar for all evaluated scoring methods. There were no major differences in feasibility between the evaluated scoring methods, although the evidence was limited. There was limited knowledge about the validity of radiographic OA findings compared with clinical nodules and deformities, whereas there was better evidence for an association between radiographic findings and symptoms and hand function.

Conclusions: Several radiographic scoring methods are used in hand OA literature. To enhance comparability across studies in hand OA, consensus has to be reached on a preferred scoring method, the examined joints and the used presentation of data.

Introduction

Osteoarthritis (OA) is the most common musculoskeletal disorder, frequently affecting the hands1,2. Hand OA is characterized by the formation of bony enlargements and deformities, most frequently occurring in the distal interphalangeal (DIP) joints and first carpometacarpal (CMC1) joints, less often in the proximal interphalangeal (PIP) joints and least prevalent in metacarpophalangeal (MCP) joints3. Currently, no structure modifying treatments are available. To date, few high-quality clinical trials have been performed in hand OA4,5. A key problem in the lack of high-quality clinical trials in hand OA is the lack of standardization of outcome measures4,6. The Outcome Measures in Rheumatoid Clinical Trials (OMERACT) and Osteoarthritis Research Society International (OARSI) Task Force on Clinical Trials Guidelines defined core domains to describe outcomes in clinical trials. One of these domains for structure modifying trials was imaging7-9.

Conventional radiography (CR) is commonly used to assess structural damage in hand OA, as radiographs are widely available and relatively cheap. Radiography allows visualization of osteophytes, joint space narrowing (JSN), subchondral cysts, sclerosis and central erosions.
A key problem in the lack of high-quality clinical trials in hand OA is the lack of standardi- zation of outcome measures4,6. The Outcome Measures in Rheu- matoid Clinical Trials (OMERACT) and Osteoarthritis Research Society International (OARSI) Task Force on Clinical Trials Guide- lines defined core domains to describe outcomes in clinical trials. One of these domains for structure modifying trials was imaging.7e9 Conventional radiography (CR) is commonly used to assess structural damage in hand OA, as they are widely available and relatively cheap. Radiography allows visualization of osteophytes, joint space narrowing (JSN), subchondral cysts, sclerosis and cen- tral erosions. td. All rights reserved. mailto:a.w.visser@lumc.nl http://crossmark.crossref.org/dialog/?doi=10.1016/j.joca.2014.05.026&domain=pdf http://dx.doi.org/10.1016/j.joca.2014.05.026 http://dx.doi.org/10.1016/j.joca.2014.05.026 http://dx.doi.org/10.1016/j.joca.2014.05.026 A.W. Visser et al. / Osteoarthritis and Cartilage 22 (2014) 1710e1723 1711 Several standardized scoring methods are available such as the KellgreneLawrence (KL)10, Kessler11 and Kallman grading scales12, the OARSI scoring atlas13, the VerbruggeneVeys anatomical phase score14, and the Gent University scoring system (GUSS)15. These scores differ in the joints that are assessed, the type of scores (composite score or individual feature scores), and the total score ranges. Most scoring methods have been shown to be reliable in- struments for the assessment of structural damage in hand OA as well as its change15e17. However, a systematic comparison of the different scoring methods that will help to decide on a recom- mended method has not been performed. We performed a systematic review to evaluate the use of CR in studies on hand OA and to assess the metric properties of the different radiographic scoring methods18. 
To this end we made use of the OMERACT filter19, focusing on aspects of discrimination (reliability and sensitivity to change), feasibility and truth (validity) of the radiographic scoring methods available in hand OA.

Methods

Identification of studies

In cooperation with a medical librarian (JWS), a systematic literature search was performed to obtain all manuscripts reporting on any radiographic scoring methods assessing the nature, severity and progression of structural damage in hand OA. Medical literature databases (PubMed, Embase, Web of Science, COCHRANE and CINAHL) were searched up to November 2013, using all variations of the following key words: 'hand', 'osteoarthritis', 'radiography', 'reliability', 'validity', 'sensitive' and 'feasibility' (see Supplementary file for exact search strings).

Inclusion and exclusion criteria

First all retrieved titles were screened, subsequently selected abstracts were reviewed and finally full text articles of the remaining references were read by one reviewer (AWV). A random sample of 150 titles was also reviewed by a second reviewer (MK), resulting in a similar selection of titles. In case of uncertainties in the reviewing process by the single reviewer, these were discussed and solved with MK. The metric properties of the studied radiographic scoring methods were evaluated according to four items: reliability, sensitivity to change, feasibility and validity. Inclusion criteria required for studies to evaluate these items differed per item:

- Reliability was evaluated in studies describing the reliability of two or more scoring methods performed on the same radiographs and by the same reader. Both cross-sectional and longitudinal studies were included.
- Sensitivity to change was evaluated in longitudinal studies of at least one year, in which hand OA was assessed by at least two radiographic scoring methods.
Studies with a follow-up duration between one and three years using only one radiographic scoring method were also included.
- Feasibility was evaluated in studies describing the feasibility of one or more scoring methods.
- Validity was evaluated in studies comparing a radiographic scoring method with other measurements of structural damage, such as magnetic resonance imaging (MRI), computed tomography (CT), ultrasound (US), digital photography, histology or nodes at physical examination. In addition, validity was evaluated in studies comparing radiographic findings to clinical signs such as hand function or symptoms. Both cross-sectional and longitudinal studies were included.

Studies that fulfilled the requirements for at least one of these four items were included in this review. Animal studies, reviews, abstracts, letters to the editor and studies reporting on musculoskeletal diseases other than hand OA or in languages other than English were excluded.

Data extraction

A standardized form was used to extract information on the following data: (1) study population (population size, setting, age, sex); (2) applied radiographic scoring methods; (3) performance of the scoring (number of readers, consensus/independent reading); (4) assessed joints; (5) level of analysis of the obtained scores (joint, joint group or patient level) and the definition of outcome used (e.g., summed scores (total or per feature), counts of the number of affected joints, dichotomized outcome); (6) results concerning reliability (intraclass correlation coefficient (ICC), kappa value, percentage of agreement, smallest detectable change (SDC)), sensitivity to change (percentage of change, amount of change, standardized response mean (SRM)), feasibility (time needed to perform scoring), and validity (correlations, associations and measures of agreement between radiographic scores and other measures).
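Two of the metrics extracted above, the SRM and the SDC, can be computed directly from raw scores. A minimal sketch in Python, using hypothetical scores and the standard formulas (SRM = mean change / SD of change; SDC = 1.96 × √2 × SEM, with the SEM estimated from test–retest differences of a single reader):

```python
import math

def srm(change_scores):
    """Standardized response mean: mean change divided by the SD of change."""
    n = len(change_scores)
    mean = sum(change_scores) / n
    sd = math.sqrt(sum((c - mean) ** 2 for c in change_scores) / (n - 1))
    return mean / sd

def sdc(test, retest):
    """Smallest detectable change from one reader's test-retest scores:
    SDC = 1.96 * sqrt(2) * SEM, with SEM = SD(differences) / sqrt(2)."""
    diffs = [a - b for a, b in zip(test, retest)]
    n = len(diffs)
    mean = sum(diffs) / n
    sd_diff = math.sqrt(sum((d - mean) ** 2 for d in diffs) / (n - 1))
    sem = sd_diff / math.sqrt(2)
    return 1.96 * math.sqrt(2) * sem
```

With these definitions the SDC reduces to 1.96 × SD of the test–retest differences, which is why progression beyond the SDC (as used by some of the included studies) is interpretable as change exceeding measurement error.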
From a random sample of studies, data were also extracted by MK, and all extracted results were discussed with MK.

Statistical analyses

Due to the heterogeneity of the studies and the differences in the outcome measures used, it was not possible to perform a meta-analysis. We therefore chose to perform a descriptive review.

Results

Literature flow

After removing duplicate references, 1873 unique references were identified (Fig. 1). After reviewing 133 abstracts and 80 full-text articles, 48 articles were included in this review. Of the included studies, 10 fulfilled the inclusion criteria for evaluation of reliability12,16,17,20–26, 11 for sensitivity to change14,16,17,24–31, four for feasibility11,16,17,22, and 36 for validity of radiographic scoring methods20–24,32–62. Evaluation of radiographic scoring methods was the primary aim in 10 of the included studies11,12,14,16,17,22,26,27,59,60. The other studies used radiographic scoring to identify the prevalence or progression of radiographic OA features (n = 7)20,25,28–30,33,34, or to compare obtained scores with other outcome measures (other imaging methods, clinical outcomes, histology) (n = 31)21,23,24,31,32,35–38,40–58,61–63. The characteristics of the evaluated or applied radiographic scoring methods (except for non-validated methods) are depicted in Table I.

Fig. 1. Overview of literature research.

Study characteristics

The characteristics of the 48 included studies are depicted in Table II. Most studies included more women than men and most of the studied individuals were aged >50 years. As shown in Table II, a wide variety of scoring methods (n = 13) was used to assess radiographic (signs of) hand OA. The KL scoring method was used most frequently (n = 24), followed by the OARSI scoring method (n = 18). Other scoring methods were the Kallman (n = 9), individual features following non-validated methods (n = 7),
anatomical phases (n = 6), anatomical lesions (n = 2) and automatic JSW measurement (n = 3). The GUSS, Burnett, Kessler, Lane, Eaton and a non-validated global score were all used in only one study. Although the majority of studies used only one radiographic scoring method, 15 studies used more than one method. The examined joint groups differed between the studies: DIPs and PIPs were assessed most frequently (in 48 and 46 studies, respectively), followed by the CMC1s (n = 34), MCPs (n = 30), IP1s (n = 23) and the scaphotrapezotrapezoidal (STT) joints (n = 8).

The level at which the radiographic scores were analyzed differed considerably across the studies: (1) the score of one joint (the most severely affected) from a joint group, hand or patient33,36,37,43,46,50; (2) sum score for all joints and features14,16,17,20–22,24–26,31,34,38,44,45; (3) sum scores per feature21,22,24,27–29,48; (4) sum scores per joint group16,24,47,49; (5) mean score per feature12,30 or per joint60; (6) scores on joint level (composite score or per feature)12,20–24,34,35,38,40–44,47,48,51–53,60,61; and (7) presence or absence of radiographic features per joint21,22,54,55,57,58, joint group32,38,39,45, or on patient level52,56.

Discrimination

Reliability

Ten included articles provided data on the reliability of at least two radiographic scoring methods, shown in Table III. The KL scoring method was assessed in seven of these studies12,16,17,20,21,23,24. Other assessed scoring methods were the Kallman (n = 4)12,17,20,23, OARSI (n = 4)16,21,22,24, anatomical phases (n = 4)16,17,25,26, anatomical lesions (n = 1)26, GUSS (n = 1)25, global score (n = 1)17, and the semi-automated joint space width (JSW) measurement (n = 1)22. Eight studies provided cross-sectional data12,16,17,20–24. Both ICCs and kappa values indicated good reliability for all assessed total scores, and no differences between the scoring methods were observed.
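The kappa statistics reported throughout these studies quantify chance-corrected agreement between two readers. A minimal sketch of unweighted Cohen's kappa on hypothetical joint-level scores, for illustration only (the included studies may have used weighted variants, which the papers do not always specify):

```python
def cohens_kappa(reader1, reader2):
    """Unweighted Cohen's kappa for two readers' categorical scores
    on the same set of joints."""
    n = len(reader1)
    categories = sorted(set(reader1) | set(reader2))
    # Observed proportion of agreement
    p_o = sum(a == b for a, b in zip(reader1, reader2)) / n
    # Expected chance agreement from each reader's marginal frequencies
    p_e = sum((reader1.count(c) / n) * (reader2.count(c) / n)
              for c in categories)
    return (p_o - p_e) / (1 - p_e)
```

Kappa equals 1 for perfect agreement and 0 for agreement no better than chance, which is why values of 0.6–0.8, as reported for the anatomical phases score, are usually read as substantial agreement.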
The ICCs and kappa values for the individual radiographic features depended on the scored feature; the lowest reliability was reported for the scoring of cysts and the highest for the scoring of erosions and osteophytes12,20,21. In five of the studies readers performed the scoring independently of another reader, providing results on the interreader reliability12,16,17,21,24. The interreader ICCs and kappa values were somewhat lower than the intrareader values, especially for the Kallman method and for sclerosis as scored using the OARSI atlas12,17,24. Whether readers were from one or from different centers did not seem to influence the reliability of the scoring methods.

Six studies provided data on the reliability of change of at least two radiographic scoring methods12,16,17,24–26. The reliability of change of KL, OARSI, Kallman, global, anatomical phases and GUSS scores was reported to be good for all methods12,16,17,24–26. Bijsterbosch et al. compared the SDC of three scoring methods on patient level, showing a small difference in favor of the KL score, followed by the anatomical phases and OARSI scores. Reported SDCs were slightly higher over a 6-year interval than over a 2-year interval16. Haugen et al. assessed the reliability of change in KL and OARSI scores, showing good reliability for the KL score and most of the OARSI features. ICC and kappa values were somewhat lower for change scores than for baseline KL and OARSI scores. Except for change of sclerosis (OARSI), moderate to good reliability was reported for the scoring of change in KL and OARSI scores24. Kallman et al. evaluated agreement on progression in KL and Kallman scores on joint group level, showing that agreement was more often present in DIP joints than in PIP joints and that agreement was lowest on the progression of cysts12.

Table I
Radiographic scoring methods for hand osteoarthritis
Columns: scoring method | no. of joints | scored joint groups | scored features | type of score | range of total score.
- Anatomical phases14 | 26 | DIP, PIP, IP1, MCP | Osteophytes, JSN, erosions, sclerosis | Composite score | 0–218.4
- Anatomical lesions14 | 24 | DIP, PIP, MCP | Osteophytes, JSN, cysts | Composite score | Not specified
- Burnett74 | 18 | DIP, PIP, CMC1 | Osteophytes, JSN, sclerosis | Individual features | 0–126
- Eaton75 | 4 | CMC1, STT | Osteophytes, JSN, erosions, cysts, sclerosis, subluxation | Composite score | Not specified
- GUSS15 | 18 | DIP, PIP, IP1 | Osteolytic areas, bone plate resorption, JSN | Composite score | 10–300
- Kallman12 | 22 | DIP, PIP, IP1, CMC1, STT | Osteophytes, JSN, cysts, sclerosis, deformity, cortical collapse | Individual features | 0–208
- Kellgren-Lawrence10 | 30 | DIP, PIP, IP1, MCP, CMC1 | Osteophytes, JSN, sclerosis, alignment | Composite score | 0–120
- Kessler11 | 18 | DIP, PIP, CMC1 | Osteophytes, JSN, sclerosis | Composite score | 0–18
- Lane76 | 22 | DIP, PIP, IP1, CMC1, STT | Osteophytes, JSN, erosions/cysts, sclerosis, deformity | Individual features | 0–182
- OARSI13 | 20 | DIP, PIP, IP1, CMC1 | Osteophytes, JSN, erosions/cysts, sclerosis, alignment | Individual features | 0–198
Abbreviations: CMC1 = first carpometacarpal joint, DIP = distal interphalangeal joint, IP1 = first interphalangeal joint, MCP = metacarpophalangeal joint, No. = number, PIP = proximal interphalangeal joint, STT = scaphotrapezotrapezoidal joint.

Sensitivity to change

Table IV shows the characteristics of the included studies describing data on sensitivity to change of radiographic scoring methods. Nine studies reported data on short-term follow-up (≤3 years), most of them on patient level16,17,25–31. Two studies evaluated change of summed KL, Kallman and anatomical phases scores, of which one study also evaluated the global score16,17. Maheu et al. reported SRMs over a 1-year interval for the global, KL, Kallman, anatomical phases and OARSI scores; all were below 0.50, indicating that the responsiveness to change was small17. Bijsterbosch et al. detected somewhat more progression over a 2-year interval when scored following the KL or anatomical phases score as compared with the OARSI atlas16. The anatomical phases score was evaluated in two other studies25,26; one of these studies (a randomized controlled trial (RCT)) also assessed change of GUSS. Progression over a 1-year interval was detected by both scoring methods, although no difference between treatment and placebo group was observed25.

Five studies reported follow-up data of only one scoring method27–31. Botha-Scheepers et al. reported change of JSN and osteophytes as scored following the OARSI atlas over a 2-year interval27–29. Scoring of these features tended to be more sensitive to change when radiographs were scored in chronological order as compared with paired reading27. Buckland-Wright et al. evaluated stereoscopic measurement of individual OA features over a 1.5-year interval, reporting change of most features64. Olejárová et al. evaluated change of hand OA over a 2-year interval using the Kallman scoring method, reporting no significant difference in total score31.

In the three studies investigating long-term follow-up data (>3 years), change in KL (n = 2), OARSI (n = 2), anatomical phases (n = 2) and anatomical lesions (n = 1) scores was evaluated12,14,16,24. Studies with a longer follow-up duration detected a higher occurrence of progression of OA features as well as higher mean radiographic change scores16.

Feasibility

Four studies reported data regarding the feasibility of radiographic scoring methods (Table V)11,16,17,22. The KL, anatomical phases and Kallman scoring methods were assessed in two studies16,17. The OARSI, Kessler and Lane scoring methods, as well as a non-validated global score and semi-automated JSW measurement, were all examined in only one study11,16,17,22.

The mean time to perform scoring ranged from 1.5 to 10–15 min per hand radiograph. The KL, anatomical phases and Kessler scoring methods seemed to be the least time-consuming, while scoring according to the Kallman, Lane and OARSI atlases needed more time11,16,17. However, the time needed to perform the scoring differed per study11,16,17. Bijsterbosch et al. showed that the performance time increased in patients with higher levels of structural abnormalities; a 1-min increment in performance time was associated with 3.9 points in KL score (95% confidence interval (CI) 1.0, 6.8), 8.0 (5.3, 10.7) points in OARSI score, and 21.1 (12.9, 29.2) points in the anatomical phases scoring method16.

Validity

The 36 studies providing data regarding the validity of radiographic scoring methods are listed in Table VI. Analyses on individual joint level were performed in 18 of these studies; analyses on joint group or patient level were performed in 13 and 14 studies, respectively.

Thirteen studies focused on structural findings at physical examination in comparison to radiographic OA findings20,22,33–42.
Four studies presented correlation coefficients and kappa values, reporting that nodes at physical examination were weakly to moderately associated with radiographic hand OA34,35,37,38. The lowest agreement was reported in a study on clinical Heberden nodes and radiographic DIP osteophytes scored following the Burnett scoring method, performed on joint level (k = 0.36)35. The highest correlation was reported in a study examining a clinical score consisting of nodes and deformity and the radiographic KL score, analyzed on joint group level (males r = 0.47, females r = 0.66)38. Two studies reported the association between two radiographic scoring methods and clinical nodes, both analyzed on joint level20,41. Addimanda et al., examining KL and Kallman scores, reported the erosion and osteophyte features of the Kallman method to be associated most strongly with nodes (OR 7.4 and 3.2, respectively)20.

Table II
Overview of included studies (n = 48)
Columns: first author, year | source population, no. of patients (% women), mean age (years) | scoring methods | joints investigated | analysis of radiographic scores. Where a study used several scoring methods, joints and analyses are listed per method in the same order, separated by ' / '.
- Addimanda, 201220 | Secondary care (50% erosive OA), 446 (93), 68 | KL / Kallman | DIP, PIP, CMC1 / DIP, PIP, CMC1 | Score per joint, summed total / score per joint per feature, summed per joint, summed total
- Bagge, 199133 | General population, 217 (66), 82 | KL | DIP, PIP, IP, MCP, CMC1 | Score per joint group (most affected joint)
- Bijsterbosch, 201116 | Familial polyarticular OA (GARP), 90 (78), 60 | KL / OARSI / anatomical phases | DIP, PIP, IP1, MCP, CMC1 / DIP, PIP, IP1, CMC1 / DIP, PIP, IP1, MCP | Summed per joint group, summed total (each method)
- Botha-Scheepers, 200527 | Familial polyarticular OA (GARP), 20 (90), median age 62 | OARSI | DIP, PIP, IP1, MCP, CMC1, STT | Summed total per feature
- Botha-Scheepers, 200729 | Familial polyarticular OA (GARP), 193 (80), 60 | OARSI | DIP, PIP, IP1, MCP, CMC1, STT | Summed total per feature
- Botha-Scheepers, 200928 | Familial polyarticular OA (GARP), 172 (79), 61 | OARSI | DIP, PIP, IP1, CMC1 | Summed total per feature
- Buckland-Wright, 199030 | Unclear (radiographic OA patients), 32 (91), 62 | Stereoscopic measurement | DIP, PIP, MCP | Mean score total per feature, mean score per joint group per feature
- Caspi, 200134 | Secondary care (geriatric patients), 253 (68), 79 | Modified OARSI | DIP, PIP, IP1, MCP, CMC1 | Score per joint, summed total
- Ceceli, 201262 | Secondary care, 60 (100), 59 | Kallman | Not specified | Summed per hand
- Cicuttini, 199835 | General population (twin study), 660 (100), 56 | Burnett / Kallman | DIP / PIP, CMC1 | Score per joint / score per joint
- Dahaghin, 200443 | General population (Rotterdam study), 3906 (58), 67 | Modified KL | DIP, PIP, MCP, CMC1, STT | Score per joint, score per joint group, score per patient (most affected joint)
- Ding, 200744 | Finnish dentists/teachers, 543 (100), range 45–63 | KL | DIP, PIP, IP1, MCP | Score per joint, no. of joints scored ≥2, summed total
- Dominick, 200545 | Familial OA (Genetics of Generalized Osteoarthritis (GOGO) study), 700 (80), 69 | KL | DIP, PIP, IP1, MCP, CMC1, STT | Present/absent of score ≥2 per joint group, summed total
- Drape, 199632 | Secondary care (mucoid cyst), 23 (61), 63 | Osteophytes, JSN (NVM) | DIP | Present/absent per joint group per feature
- El-Sherif, 200846 | Secondary care, 40 (100), 57 | KL | DIP, PIP, IP1, MCP, CMC1 | Score per patient (most affected joint)
- Grainger, 200754 | Secondary care, 15 (93), 59 | Erosions (NVM) | DIP, PIP | Present/absent per joint
- Hart, 199136 | Primary/secondary care (non-joint related problems), 541 (100), 54 | KL | DIP, PIP, CMC1 | Score per joint group (most affected joint)
- Hart, 199437 | Primary care, 976 (100), age range 45–65 | KL | DIP, PIP, CMC1 | Score per joint group (most affected joint)
- Haugen, 201221 | Secondary care (Oslo hand OA cohort), 106 (92), 69 | KL / OARSI / marginal erosions (NVM) | DIP, PIP, IP1, MCP, CMC1 (each method) | Score per joint, summed total / score per joint per feature, summed total per feature / present/absent per joint
- Haugen, 201324 | Secondary care (Oslo hand OA cohort), 190 (91), 62 (longitudinal analysis: 99 (92), 61) | KL / OARSI | DIP, PIP, IP1, MCP, CMC1 (each method) | Score per joint, summed per joint group, summed total / score per joint per feature, summed total per feature
- Huetink, 201259 | 22 phantom joints, 22 human cadaver joints | Automatic JSN quantification | DIP, PIP, MCP | Millimeter (mm) per joint
- Iagnocco, 200556 | Secondary care (inflammatory OA), 110 (100), 67 | Classical/erosive OA (NVM) | DIP, PIP | Present/absent per patient
- Jones, 200147 | Secondary care, 522 (67), 56 | OARSI | DIP, CMC1 | Score per joint per feature, summed per joint group
- Jonsson, 201238 | General population (AGES-Reykjavik study), 381 (58), 76 | KL | DIP, PIP, CMC1 | Score per joint, present/absent of score ≥2 per joint group, summed total
- Kallman, 198912 | General population (BLSA), 50 (0), 68 | KL / Kallman | DIP, PIP, IP1, CMC1 / DIP, PIP, IP1, CMC1, STT | Score per joint, score per joint group, mean score total / score per joint per feature, score per joint group per feature, mean score total per feature
- Keen, 200857 | Secondary care, 37 (84), 57 | OARSI | DIP, PIP, MCP, CMC1 | Present/absent per joint per feature
- Kessler, 200011 | Advanced hip/knee OA patients (Ulm OA study), 50, range 51–79 | Kessler / Kallman / Lane | DIP, PIP, CMC1 (each method) | No. of affected joints per joint group / not specified / not specified
- Kortekaas, 201148 | Secondary care, 55 (47), 61 | OARSI | DIP, PIP, IP1, CMC1 | Score per joint per feature, summed total per feature
- Kwok, 201122 | Familial polyarticular OA (GARP), 235 (83), 65, and 471 controls | OARSI / anatomical phases / semi-automated measured JSW | DIP, PIP, MCP / DIP, PIP / DIP, PIP, MCP | Score per joint per feature, summed total per feature / present/absent per joint / score per joint, summed total
- Lee, 201249 | General population (KLoSHA), 378 (48), 75 | KL | DIP, PIP, IP1, MCP, CMC1 | Summed per finger
- Maheu, 200717 | Secondary care, 105 (93), 61 | KL / Kallman / global score / anatomical phases | DIP, PIP, MCP, CMC1 / DIP, PIP, MCP, CMC1, STT / DIP, PIP, MCP, CMC1, STT / DIP, PIP, MCP | Summed total (each method)
- Mancarella, 201023 | Secondary care, 35 (94), 66 | KL / Kallman | DIP, PIP, MCP (each method) | Score per joint (each method)
- Marshall, 200939 | Primary care (hand pain), 592 (62), 64 | KL | DIP, PIP, IP1, MCP, CMC1, STT | Present/absent of score ≥2 per joint group
- Mathiessen, 201240 | Secondary care (Oslo hand OA cohort), 127 (91), 69 | OARSI | DIP, PIP, IP1, MCP | Score per joint per feature
- Olejárová, 200031 | Secondary care, erosive OA: 28 (93), 68; non-erosive OA: 24 (83), 65 | Kallman | DIP, PIP, IP1, MCP, CMC1 | Summed total
- Ozkan, 200750 | Secondary care, 100 (87), 69 | KL | DIP, PIP, MCP, CMC1 | Score per patient (most affected joint)
- Rees, 201241 | Secondary care (Genetics of Osteoarthritis and Lifestyle (GOAL) study participants with ≥1 node), 1,939 (54), 68 | KL / OARSI | DIP, PIP, IP1, CMC1 (each method) | Score per joint / score per joint per feature
- Saltzherr, 201361 | Secondary care, 30 (70), median age 57 | Eaton | CMC1, STT | Score per joint, score per joint per feature
- Sonne-Holm, 200651 | General population (Copenhagen City Heart Study), 3,355 (61), age >20 | Modified KL | CMC1 | Score per joint, score per joint per feature
- Stern, 200442 | Primary and secondary care (Investigation of Nodal Osteoarthritis to Detect an Association with Loci encoding IL-1 (I-NODAL) study), 71 (80), 67 | KL | DIP, PIP, IP1, CMC1 | Score per joint
- Sunk, 201253 | Post-mortem IP joints, 40 (44), median age 66 | KL / OARSI | DIP, PIP (each method) | Score per joint / score per joint per feature
- Verbruggen, 199614 | Unclear (radiographic OA), 46 (96), 57 | Anatomical phases / anatomical lesions | DIP, PIP, MCP (each method) | Summed total (each method)
- Verbruggen, 200226 | Unclear (radiographic OA, two RCTs), 222 (92), 56 | Anatomical phases / anatomical lesions | DIP, PIP, MCP (each method) | Summed total (each method)
- Verbruggen, 201225 | Secondary care (RCT), 60 (85), 61 | Anatomical phases / GUSS | DIP, PIP (each method) | No. of joints in each phase per patient / summed total
- Van 't Klooster, 200860 | Familial polyarticular OA (GARP), 40 (33), 60 | OARSI / automatic JSW quantification | DIP, PIP, MCP (each method) | Score per joint / mean score per joint
- Vlychou, 200958 | Secondary care (OA patients), 22 (91), 63 | Osteophytes, erosion (NVM) | DIP, PIP, IP1, MCP, CMC1 | Present/absent per joint per feature
- Wittoek, 201155 | Secondary care, erosive OA: 9 (67), median 61; non-erosive OA: 5 (100), median 63 | Osteophytes, erosions (NVM) | DIP, PIP | Present/absent per joint per feature
- Zhang, 200252 | General population (Framingham hand OA study), 1,032 (64), age ≥71 | Modified KL | DIP, PIP, IP1, MCP, CMC1 | Score per joint, present/absent of score ≥2 per patient
Abbreviations: AGES = Age, Gene/Environment Susceptibility; BLSA = Baltimore Longitudinal Study of Aging; GARP = Genetics osteoarthritis and Progression; KLoSHA = Korean Longitudinal Study on Health and Aging; NVM = non-validated method; OA = osteoarthritis.

Table III
Studies providing data on reliability of scoring methods (n = 10)
Columns: first author | no. of readers, centers | intrareader reliability* | interreader reliability*.

Cross-sectional studies
- Addimanda20 | 2 (consensus), 1 | KL: ICC 0.994; Kallman: ICC 0.987, k range per feature 0.42–0.81 | N/A
- Bijsterbosch16 | 3 (independent), 3 | KL: ICC range per reader 0.90–0.96; OARSI: 0.77–0.97; anatomical phases: 0.88–0.97 | KL: ICC range per two readers 0.84–0.91; OARSI: 0.80–0.95; anatomical phases: 0.81–0.95
- Haugen21 | 2 (independent), 2 | KL: ICC 0.97, k 0.86 (one reader); OARSI (including marginal erosions): ICC range per feature 0.70–0.97, k range per feature 0.75–0.88 (one reader) | KL: ICC 0.96, k 0.79; OARSI (including marginal erosions): ICC range per feature 0.56–0.95, k range per feature 0.62–0.81
- Haugen24 | 2 (independent), 2 | KL: ICC 0.97, k 0.82 (one reader); OARSI: ICC range per feature 0.62–0.96, k range per feature 0.64–0.81 (one reader) | KL: ICC 0.95, k 0.70; OARSI: ICC range per feature −0.07–0.94, k range per feature 0.00–0.77
- Kallman12 | 4 (independent), 2 | KL mean score: ICC 0.80, range per joint group 0.68–0.87; Kallman mean score: ICC per feature range 0.74–0.84, per feature per joint group range 0.62–0.93 | KL mean score: ICC 0.74, range per joint group 0.74–0.81; Kallman mean score: ICC per feature range 0.29–0.71, per feature per joint group range 0.33–0.82
- Kwok22 | 2 (consensus), 1 | OARSI (JSN): ICC 0.92; semi-automated JSW: ICC 0.99, mean difference 0.017 mm (standard deviation (SD) 0.04), smallest detectable difference (SDD) 0.055 mm | N/A
- Maheu17 | 2 (independent), 2 | KL: ICC range per reader 0.988–0.991; Kallman: 0.962–0.999; global: 0.922–0.961; anatomical phases: 0.999–0.999 | KL: ICC 0.951; Kallman: 0.706; global: 0.859; anatomical phases: 0.996
- Mancarella23 | 2, not specified | KL: ICC score per joint 0.99; Kallman: ICC score per joint 0.99

Longitudinal studies
- Bijsterbosch16 | 3 (independent), 3 | Mean follow-up 2 years: KL: SDC range per reader 2.1–7.1; OARSI: 1.2–10.2; anatomical phases: 1.4–7.8. Mean follow-up 6 years: KL: 3.7–8.1; OARSI: 3.0–11.1; anatomical phases: 3.5–9.9 | Mean follow-up 2 years: KL: SDC 2.9; OARSI: 4.1; anatomical phases: 2.7. Mean follow-up 6 years: KL: 3.8; OARSI: 4.6; anatomical phases: 4.0
- Haugen24 | 2 (independent), 2; mean follow-up 7 years | KL: ICC 0.93, k 0.83 (one reader); OARSI: ICC range per feature −0.02–0.96, k range per feature 0.00–0.90 (one reader) | KL: ICC 0.83, k 0.53; OARSI: ICC range per feature −0.03–0.90, k range per feature −0.03–0.71
- Kallman12 | 4 (independent), 2; mean follow-up 23 years | N/A | KL: scattered agreement; deformity/collapse: agreement; cysts: disagreement; osteophytes/JSN/sclerosis: scattered agreement
- Maheu17 | 2 (independent), 2; mean follow-up 1 year | KL: ICC range per reader 0.990–0.998; Kallman: 0.986–0.959; global: 0.939–0.956; anatomical phases: 0.941–0.988 | KL: ICC 0.998; Kallman: 0.995; global: 0.999; anatomical phases: 0.998
- Verbruggen26 | 2 (independent), 1; mean follow-up 3 years | Anatomical phases: agreement for two RCTs 84–93%, k 0.6–0.8; anatomical lesions: correlation for two RCTs r 0.7–0.9, R2 44–87% | Anatomical phases: agreement for two RCTs 81–85%, k 0.6–0.7; anatomical lesions: correlation for two RCTs r 0.7–0.8, R2 55–66%
- Verbruggen25 | 2 (independent), 1; mean follow-up 1 year | Anatomical phases: 96% agreement, k 0.95; GUSS: ICC 0.97 | Anatomical phases: 94% agreement, k 0.92; GUSS: ICC 0.86, SDC 18
Abbreviations: k = kappa, N/A = not applicable, R2 = explained variance.
* Unless stated otherwise, ICCs are for summed total scores on patient level, kappas on joint level.
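The ICCs tabulated above derive from variance-decomposition models. As an illustration only, a sketch of the two-way random-effects, single-measure form (ICC(2,1) in Shrout and Fleiss's notation) on hypothetical data; the included studies do not all report which ICC form they used, so this choice is an assumption:

```python
import numpy as np

def icc_2_1(scores):
    """Two-way random-effects, single-measure ICC (ICC(2,1)).
    `scores` is an (n_subjects, k_raters) array of ratings."""
    x = np.asarray(scores, dtype=float)
    n, k = x.shape
    grand = x.mean()
    row_means = x.mean(axis=1)            # per-subject means
    col_means = x.mean(axis=0)            # per-rater means
    msr = k * ((row_means - grand) ** 2).sum() / (n - 1)   # between subjects
    msc = n * ((col_means - grand) ** 2).sum() / (k - 1)   # between raters
    sse = ((x - row_means[:, None] - col_means[None, :] + grand) ** 2).sum()
    mse = sse / ((n - 1) * (k - 1))                         # residual
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
```

Because the rater variance (MSC) enters the denominator, systematic differences between readers lower this ICC, which is one reason interreader values in Table III tend to be lower than intrareader values.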
Table IV
Studies providing data on sensitivity to change of radiographic scoring methods in hand osteoarthritis (n = 11)
Columns: first author | mean follow-up (years) | definition of progression | sequence known/unknown | results relevant for evaluation of sensitivity to change.

Short-term
- Bijsterbosch16 | 2 | Change > SDC | Known | Percentage progression (range for three readers): KL: 19–56%; OARSI: 7–38%; anatomical phases: 13–52%
- Botha-Scheepers27 | 2 | ≥1 score | Known/unknown | Progression of JSN/osteophytes: chronological reading: 1/15% (SRM 0.38/0.41); paired reading: 5/15% (SRM 0.00/0.39)
- Botha-Scheepers28 | 2 | ≥1 score | Unknown | JSN: 19% progression, mean change 0.3, SRM 0.34; osteophytes: 22% progression, mean change 0.4, SRM 0.35
- Botha-Scheepers29 | 2 | ≥1 score | Unknown | JSN: 24% progression (≥2/≥3/≥4 score: 10/4/3%); osteophytes: 22% progression (≥2/≥3/≥4 score: 10/4/3%)
- Buckland-Wright30 | 1.5 | Change > variations in precision | Not specified | JSW: 62% narrowing (P < 0.02); subchondral sclerosis: 60% increase, 34% decrease; osteophytes: increase in size and no. (P < 0.005); juxta-articular radiolucencies: increase in size (P < 0.002), not in no.
- Maheu17 | 1 | Change in summed score | Unknown | SRM for two readers: KL: 0.17/0.24; Kallman: 0.26/0.29; global: 0.17/0.27; anatomical phases: 0.18/0.27
- Olejárová31 | 2 | Change in summed score | Unknown | Erosive OA: change 5.0, P > 0.05; non-erosive OA: change 4.3, P > 0.05
- Verbruggen26 | 3 | Change in anatomical phases; change in anatomical lesions | Known | Anatomical lesions showed different progression between trial arms; anatomical phases did not.
- Verbruggen25 | 1 | Change in anatomical N/S/J phase to E phase; change in summed score | Unknown | No. (%) of joints with progression to E phase: total group: 24 (2.8%) of 848 N/S/J joints; placebo treated: 15 (3.6%) of 429 N/S/J joints; adalimumab treated: 9 (2.1%) of 419 N/S/J joints. Mean difference GUSS (baseline palpable swelling yes/no): placebo: −5/3; adalimumab: 4/1

Long-term
- Bijsterbosch16 | 6 | Change > SDC | Known | Percentage progression (range for three readers): KL: 51–80%; OARSI: 33–74%; anatomical phases: 27–66%
- Haugen24 | 7.3 | Change in score | Known | Progression (percentage of joints): KL: 29%; OARSI: osteophytes 19%, JSN 13%, erosions 9%, malalignment 4%, cysts 2%, sclerosis 1%
- Verbruggen14 | 4.6 | Change in anatomical phases; change in anatomical lesions | Known | Progression of anatomical lesions more frequent in PIP/DIP than MCP. Progression of anatomical phases in 43%. Progression according to anatomical phases and anatomical lesions yielded comparable results.

Table V
Studies providing data on feasibility of radiographic scoring methods in hand osteoarthritis (n = 4)
Columns: first author | no. of radiographs | mean (SD) time to perform scoring.
- Bijsterbosch16 | 3 | KL: 4.3 (2.5) min; OARSI: 9.3 (6.0) min; anatomical phases: 2.8 (1.5) min
- Kessler11 | 1 | Kessler: 5 min per hand; Kallman: 10–15 min per hand; Lane: 10–15 min per hand
- Kwok22 | 1 | Semi-automated JSW measurement: 5.1 (2.8) min
- Maheu17 | 1 | KL: 1.9 (0.6) min; Kallman: 3.5 (0.7) min; global score: 1.5 (0.5) min; anatomical phases: 1.6 (0.5) min
Abbreviations: min = minutes, no. = number.
Rees et al. examined the association between KL and OARSI scores and clinical nodes, reporting ORs only for the KL method (range per joint 2.3–21.2). Regarding the OARSI atlas, JSN was mentioned to be more strongly associated with clinical nodes than osteophytes41.

Seventeen studies assessed clinical symptoms and hand function in comparison to radiographic scoring methods (KL: n = 14, OARSI: n = 3, Kallman: n = 1, JSW/JSN: n = 1)22,24,33,36,37,39,43–52,62. All studies reported significant associations between radiographic OA features and pain and disability, of which four showed a dose-dependent association between KL and OARSI scores and pain24,43,44,48. Of the nine studies assessing grip or pinch strength, only two did not find an association with radiographic OA (1× KL, 1× JSW/JSN, analyzed on patient level)22,50. Only one study assessed longitudinal data, showing incident or progressive KL or OARSI scores to be associated with incident pain

Table VI
Studies providing data on validity of scoring methods (n = 37)
Columns: first author | validation method | results relevant for evaluation of validity. (The table continues beyond this excerpt.)

Clinical: structural findings at physical examination
- Addimanda20 | Heberden/Bouchard nodes (yes/no) | OR (95% CI) for nodes on joint level, adjusted for disease duration, body mass index (BMI): KL: 2.20 (2.09, 2.31); Kallman: 1.17 (1.62, 1.72); Kallman JSN: 2.57 (2.40, 2.75); Kallman osteophytes: 3.19 (2.97, 3.42); Kallman central erosions: 7.4 (6.0, 10.1)
- Bagge33 | Nodes/periarticular enlargement, instability, squaring (yes/no, ≥1 feature per joint) | Correlated with KL score in all joint groups (correlation coefficient not provided), test for linear trend: P < 0.01. Clinical features also present in KL 0 joint groups.
- Caspi34 | Nodes, malalignment DIP/PIP (summed) | Correlation with OARSI: summed total: r 0.4 (P 0.001); DIP/PIP: range per joint r 0.18–0.52 (P 0.004–0.0001)
- Cicuttini35 | Heberden nodes (yes/no) | k with DIP osteophytes (Burnett): 0.36 (95% CI 0.33, 0.39)
- Hart36 | Nodes (yes/no) | Sensitivity for KL ≥2: range per joint group 19–49%; specificity for KL ≥2: range per joint group 87–98%
- Hart37 | Nodes IP (graded 0–4), squaring CMC1 (grade 0–1) | Prevalence of node ≥2: KL0: 3%, KL1: 19%, KL2: 48%, KL3: 74%, KL4: 82%. Prevalence of squaring: KL0: 5%, KL1: 11%, KL2: 25%, KL3: 41%, KL4: 70% (correlation coefficient not specified)
- Jonsson38 | Nodes, deformity (graded 0–3, summed) | Correlation of summed score with summed total KL: males r 0.47, females r 0.66. Prevalence of KL ≥2 (DIP 67%, PIP 32%, CMC1 20%) higher as compared to clinical grade ≥2 (DIP 54%, PIP 19%, CMC1 10%)
- Kwok22 | Nodes (yes/no) | β (95% CI) for nodes on joint level, adjusted for age, sex, BMI, family effect, mean phalanx width: JSW: −0.37 (−0.40, −0.34); JSN: 0.48 (0.42, 0.55)
- Marshall39 | Nodes, deformity, enlargement (yes/no) | OR (95% CI) of presence of ≥1 feature for: KL ≥2 in CMC1: 2.2 (1.5, 3.3); KL ≥2 in any thumb joint: 3.1 (2.1, 4.5)
- Mathiessen40 | Nodes (yes/no) | Osteophytes (OARSI) in 30% of joints, nodes in 37% of joints
- Rees41 | Nodes (yes/no) | KL ≥2 associated with any node on patient level: OR range per joint 2.26–21.23 (adjusted for age, sex, BMI, hand dominance, trauma, occupation, sports). JSN/osteophytes (OARSI) also associated with nodes (P < 0.001); ORs of JSN greater than ORs of osteophytes in all joints except for IP1/CMC1
- Stern42 | Nodes (yes/no) | Sensitivity for KL ≥2: range per joint group 42–100%; specificity for KL ≥2: range per joint group 17–94%

Clinical: symptoms, function
- Bagge33 | Pain/stiffness (interview, yes/no) | Correlated with KL score in all joint groups (correlation coefficient not provided), test for linear trend: P < 0.01.
- Ceceli62 | Pain (visual analog scale (VAS)), disability (Disabilities of the Arm, Shoulder and Hand (DASH) questionnaire), dexterity (Purdue pegboard test), grip/pinch strength | Correlation with summed Kallman score right/left hand: pain: r 0.17/0.18 (P > 0.05); disability: r 0.29/0.30 (P < 0.05); dexterity: r −0.26/−0.30 (P < 0.05); grip strength: r −0.37/−0.40 (P < 0.05); pinch strength: r range per test −0.31 to −0.25/−0.35 to −0.27 (P < 0.05)
- Dahaghin43 | Pain (interview, yes/no)/disability (Health Assessment Questionnaire (HAQ)) | OR (95% CI) for KL ≥2/≥3/4 on patient level, adjusted for age, sex: pain: 1.9 (1.5, 2.4)/1.8 (1.3, 2.5)/3.6 (2.2, 5.8); disability: 1.5 (1.1, 2.1)/1.6 (1.1, 2.5)/1.6 (0.9, 2.9). Pain associated with KL ≥2 in PIP/CMC1/STT, disability with KL ≥2 in MCP. Adjusted OR (95% CI) for KL ≥2 in all joint groups: pain 2.7 (1.4, 5.2), disability 2.7 (1.3, 6.0)
- Ding44 | Pain (questionnaire, yes/no per joint, summed) | Correlation with summed total KL: r 0.26 (P 0.0005); correlation with no. of KL ≥2 joints: r 0.28 (P 0.0005). Prevalence ratio (PR) (95% CI) for pain on joint level, adjusted for age, occupation: KL 2: 1.70 (1.44, 2.01); KL ≥3: 5.17 (4.34, 6.16). Adjusted PR (95% CI) for mild/moderate pain on joint level: KL 2: 1.93 (1.54, 2.41)/2.21 (1.58, 3.10); KL ≥3: 4.92 (3.77, 6.43)/11.73 (8.95, 15.38)
- Dominick45 | Grip/pinch strength | β (P-value) for grip/pinch strength, adjusted for age, sex, pain, chondrocalcinosis, hand hypermobility: summed total KL: −0.67 (<0.001)/−0.16 (<0.001); KL ≥2 PIP: −6.67 (0.003)/−1.17 (0.070); KL ≥2 MCP: −3.32 (0.114)/−1.78 (0.003); KL ≥2 CMC: −9.06 (<0.001)/−1.03 (0.049); KL ≥2 per finger: range −1.81 to −11.08 (P < 0.05)
- El-Sherif46 | AUSCAN, morning stiffness (minutes), grip strength, Ritchie index | AUSCAN pain/function higher in KL4 than KL2 (P < 0.05). Correlation with KL score: AUSCAN pain: r 0.459 (P 0.003), function: r 0.394 (P 0.012); grip strength right hand: r −0.322 (P 0.043). Other measures not significantly correlated with KL
- Hart36 | Tenderness, pain on movement (physical examination, yes/no) | Comparison of tenderness/pain on movement with KL ≥2: sensitivity: range per joint group 7–26%/1–22%; specificity: range per joint group 92–99%/96–99%
- Hart37 | Pain, stiffness (interview, yes/no) | Prevalence of symptoms in patients with KL <2: 15%, KL2: 49%, KL3–4: 81%; test for linear trend: P < 0.01
- Haugen24 | Tenderness on palpation (yes/no), grip strength, AUSCAN | Cross-sectional OR (95% CI) for tenderness on joint level, adjusted for age, sex: KL score 1/2/3/4: 1.4 (1.2, 1.7)/3.0 (2.4, 3.7)/6.8 (4.5, 10)/5.3 (3.3, 8.6); OARSI osteophytes score 1/2/3: 2.8 (2.3, 3.4)/4.3 (3.0, 6.3)/4.5 (2.9, 7.0); OARSI JSN score 1/2/3: 0.9 (0.7, 1.2)/1.9 (1.4, 2.5)/2.5 (1.7, 3.7); OARSI erosions: 3.3 (2.3, 4.9), malalignment: 2.8 (2.0, 3.9), cysts: 2.2 (1.4, 3.3), sclerosis: 2.6 (1.1, 6.0). AUSCAN pain associated with summed KL and OARSI osteophytes/JSN.
AUSCAN function associated with summed KL and OARSI osteophytes, JSN, erosions, cysts. Grip strength associated with summed KL and all OARSI features except for sclerosis. Summed KL per joint group only associated with grip strength (CMC1 strongest) Adjusted OR (95% CI) of progressive/incident scores for incident tenderness: - KL score 1/2/3/4: 1.2 (0.7, 2.0)/1.5 (0.9, 2.4)/5.7 (3.0, 11)/11 (4.0, 33) - OARSI osteophytes: 3.0 (2.0, 4.4), JSN: 2.8 (1.7, 4.7), erosions: 8.4 (4.7, 15), malalignment: 3.8 (1.9, 7.4), cysts: 2.2 (0.9, 5.0), sclerosis: 2.4 (0.8, 8.0) Increasing summed KL and OARSI JSN/malalignment associated with increased AUSCAN function. More malalignment associated with less grip strength Change summed KL per joint group not associated with AUSCAN/grip strength Jones47 AUSCAN, grip strength Association with summed OARSI per joint group, adjusted for age/sex/other joints/ Heberden nodes: - AUSCAN pain: PIP b 0.17, CMC1 b 0.14 (P < 0.05) - AUSCAN function: PIP b 0.15, CMC1 b 0.19 (P < 0.05) - grip strength: PIP b �0.12, CMC1 b �0.09 (P < 0.05) Kortekaas48 AUSCAN, pain (VAS), Doyle index of hands OR (95% CI) for pain on palpation on joint level, adjusted for age, sex, BMI: - osteophytes score 1/2/3: 2.2 (1.7, 2.9)/3.9 (2.6, 5.9)/4.8 (2.7, 8.4) - JSN score 1/2/3: 2.0 (1.4, 2.8)/5.3 (3.1, 9.1)/6.4 (2.7, 14.8) Summed osteophytes/JSN not associated with AUSCAN pain, VAS, Doyle. Kwok22 AUSCAN, pain on palpation (yes/no), grip strength, mobility b (95% CI) for JSW/JSN on joint level, adjusted for age, sex, BMI, family effect, mean phalanx width: - self-reported pain: �0.21 (�0.27, �0.16)/0.39 (0.30, 0.48) - pain on palpation: �0.25 (�0.29, �0.21)/0.37 (0.29, 0.44) No. joints with self-reported pain/pain on palpation, AUSCAN pain/function and mobility associated with summed JSW/JSN. 
Grip strength not associated Lee49 Grip/pinch strength, disability (DASH questionnaire) Associations with summed KL, adjusted for age/sex (P < 0.05): - grip strength: thumb b �1.05, third finger b �2.17 - pinch strength: thumb b �0.28, second finger b �0.26 -disability: thumb b 1.53, second finger b 0.63, third finger b 3.97 Marshall39 AUSCAN, pain during activity/pain in past month (questionnaire, yes/no), grip/pinch strength, grind test, Finkelstein's test OR (95% CI) for KL � 2 in CMC1/any thumb joint: - Pain during activity: 2.1 (1.5, 2.9)/2.2 (1.6, 3.2) - Pain in past month: 1.5 (1.0, 2.1)/1.4 (1.0, 2.0) - Grind test: 1.8 (1.1, 2.9)/1.7 (1.0, 2.9), Finkelstein's test not associated Ozkan50 Grip/pinch strength, Dreiser's functional index, disability (HAQ) Disability KL score <2/2/3-4: 2.40/2.10/6.45 (KL3-4 vs KL < 2/2 P < 0.05) Dreiser's index KL score <2/2/3-4: 2.73/2.10/9.25 (KL3-4 vs KL < 2/2 P < 0.05) Grip/pinch strength not different between KL scores A.W. Visser et al. / Osteoarthritis and Cartilage 22 (2014) 1710e17231718 Table VI (continued ) First author Validation method Results relevant for evaluation of validity Sonne-Holm51 Pain CMC1 (interview, yes/no) OR (95% CI) for pain, adjusted for age, sex, BMI: - KL: 1.48 (1.33, 1.65) - Sclerosis/cyst: 1.48 (1.23, 1.77)/1.23 (1.03, 1.47) JSW and osteophytes not associated. 
Zhang52 Functional limitations (questionnaire), grip strength Patients with KL � 2 and joint pain/aching/stiffness had more functional limitations and lower grip strength; age adjusted difference (95% CI) men 3.1 kg (1.8, 4.4), women 1.9 kg (1.4, 2.4) Histological Sunk53,69 Modified Mankin score (range 0e14; >5 ¼ OA) Correlation with KL score (DIP/PIP): r 0.87/0.79 (P < 0.0001) Correlation with OARSI JSN: r 0.77/0.76, osteophytes: r 0.89/0.69 (P < 0.0001) Sensitivity KL � 2 for Mankin >5 (DIP/PIP): 84.6/54.2%, specificity: 100/100% MRI Drape32 Pedicled cysts DIP (yes/no) 19 pedicled cysts: 16 associated with osteophytes/JSN on CR, three no osteophytes/JSN on CR Grainger54 Erosions (central/marginal, yes/no) 37 MRI erosions: 24% also on CR (44% of central, 5% of marginal erosions) All CR erosions also on MRI Haugen21 Oslo hand OA score (graded per feature) Agreement with osteophytes k 0.41, JSN k 0.50, central erosions k 0.75, central/ marginal erosions k 0.43, cysts k 0.11, malalignment k 0.50 Wittoek55 Erosions, osteophytes (yes/no) Prevalence erosions: MRI PIP 29%, DIP 68%, CR PIP 11%, DIP 38% PIP osteophytes (erosive/non-erosive) hand OA MRI 25/50%, CR 42/40% DIP osteophytes: MRI and CR >80% CT Saltzherr61 JSN, osteophytes, subchon-dral sclerosis, cyst, erosion, subluxation (OA defined on no. of features) Prevalence of individual features and OA higher according to CT than CR US Iagnocco56 Erosions (yes/no) US erosions in 16 (72.7%) of 22 CR erosive hand OA patients. No US erosions in CR classical hand OA patients (n ¼ 88). 
Keen57 JSN, osteophytes (yes/no) Osteophytes: k 0.54 (77.8% agreement) JSN: k 0.436 (74.6% agreement) Kortekaas48 Osteophytes (yes/no) US osteophytes 69%, OARSI osteophytes 46% Mancarella23 Cartilage thickness (mm) Negatively correlated with KL and Kallman score (P < 0.0001) Mathiessen40 Osteophytes (yes/no) OARSI osteophytes in 30% of joints, US osteophytes in 53% of joints CR and US: 57.3% exact agreement, 88.3% close agreement Vlychou58 Central erosions, osteophytes (yes/no) CR detected less erosions/osteophytes (17/47%) than US (35/55%), P < 0.05 Difference most apparent in DIP and PIP Wittoek55 Erosions, osteophytes (yes/no) CR detected less erosions (PIP 11%, DIP 38%) than US (21, 52%) in erosive and non- erosive hand OA CR detected less PIP osteophytes (41%) than US (54%). CR and US both detected >80% DIP osteophytes Digital photography Jones47 Heberden nodes (yes/no) Correlation with OARSI score �1 in DIP joints: r 0.74 (P < 0.001) Jonsson38 Tissue enlargement/deformity (graded 0e3 per joint, summed) Prevalence OA higher according to KL � 2 (DIP 67%, PIP 32%, CMC1 20%) as compared to digital photograph �2 (DIP 33%, PIP 20%, CMC1 3%) Correlation summed score with summed total KL: males r 0.35, females r 0.53 Stern42 Hard tissue enlargement (yes/no) Sensitivity for KL � 2: range per joint 17e74% Specificity for KL � 2: range per joint 67e92% Other measures of JSW Huetink59 True JSW by micrometer Compared to automatic JSN quantification: Mean difference (SD): phantom joints: 0.052 (0.014) mm, cadaver joints: 0.210 (0.115) mm SDD: phantom joints 0.028 mm, cadaver joints: 0.226 mm van't Klooster60 Automatic JSW quantification (mm) Association with OARSI JSN: R2 0.54, P < 0.01 Abbreviations: kg ¼ kilogram, r ¼ correlation coefficient. A.W. Visser et al. 
on joint level and with change in Australian/Canadian Hand Osteoarthritis Index (AUSCAN) pain/function and grip strength.24 One study examined the association between the KL and OARSI scoring methods and histological findings on joint group level, showing a good correlation (r ≥ 0.7) as well as a high sensitivity and specificity.53 Four studies assessed individual features of hand OA by both radiography and MRI21,32,54,55. The agreement between the two methods was lowest for the presence of cysts and highest for central erosions21. Three of the studies showed that MRI detected more osteophytes, cysts and erosions than radiography.32,54,55 One study assessed individual features of CMC1 and STT OA by both radiography and CT, reporting the latter to detect more JSN, osteophytes, subchondral sclerosis, cysts, erosions and subluxation.61 Seven studies used both US and radiography to assess hand OA signs23,40,48,55–58. Six of the studies examined individual radiographic features and reported US to detect more osteophytes and erosions than radiography. A study on KL and Kallman scores reported a negative correlation between radiographic JSN and US-detected cartilage thickness on joint level.23 Three studies examined hand OA using digital photography and radiography38,42,47. Two studies, performed on joint group level, reported a good correlation between OARSI scores and Heberden nodes on digital photography (r = 0.74), and a weak to moderate correlation between summed KL scores and a summed digital photograph score comprising enlargement and deformity (males r = 0.35, females r = 0.53).38,47 Finally, two studies examined quantitative measures of JSW, both on individual joint level59,60. Van't Klooster et al. showed that automatic JSW quantification was associated with JSN scored
according to the OARSI atlas60. Huetink et al. reported that automatic JSW quantification has a high accuracy in measuring the true JSW (assessed by micrometer).59

Discussion

This review aimed to evaluate the radiographic scoring methods used in hand OA research and to assess their metric properties. We found that a wide variety of scoring methods has been used in studies evaluating radiographic hand OA. Furthermore, the joints that were examined and the analysis of the obtained scores differed extensively across studies. Evaluation of the metric properties of the evaluated scoring methods regarding reliability, sensitivity to change, feasibility and validity did not reveal major differences.

Both intra- and interreader reliability of all evaluated radiographic scoring methods were good for summed scores and global scores, for both cross-sectional and longitudinal radiographic scoring. When grading individual radiographic features, the highest reliability was reported for the scoring of erosions and osteophytes and the lowest for the scoring of cysts.

Regarding sensitivity to change, only one study evaluated this in different groups of patients (trial arms) using different scoring methods. Although such comparative studies may provide the best insight into sensitivity to change, the included observational follow-up studies showed that change in structural damage over time can be detected with CR. Change over time was observed even in short-term follow-up studies (<3 years). Reported SRMs were similar for all evaluated scoring methods.

The feasibility of scoring methods has been described in a limited number of studies. The performance time of scoring differed not only across the evaluated scoring methods but also across studies, and was shown to increase with the amount of structural damage.
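The standardized response mean (SRM) referred to above is the mean of the change scores divided by their standard deviation, and the smallest detectable difference (SDD) reported for JSW measurements quantifies measurement error. A minimal sketch of both, with made-up change scores (not values from any included study); the SDD convention of 1.96 times the SD of repeated-measurement differences is an assumption here, as definitions vary:

```python
import statistics

def standardized_response_mean(changes):
    """SRM = mean change / SD of the change scores."""
    return statistics.mean(changes) / statistics.stdev(changes)

def smallest_detectable_difference(sd_of_differences):
    """SDD at the 95% level, taken here as 1.96 x SD of the
    differences between repeated measurements (one common convention)."""
    return 1.96 * sd_of_differences

# Hypothetical 2-year changes in a summed radiographic score for ten hands
changes = [0, 1, 2, 0, 1, 3, 0, 2, 1, 1]
print(f"SRM: {standardized_response_mean(changes):.2f}")
print(f"SDD for SD 0.115 mm: {smallest_detectable_difference(0.115):.3f} mm")
```

An |SRM| around 0.8 or higher is conventionally read as a large responsiveness; note that the SRM depends on the spread of change in the particular cohort, which is one reason SRMs are best compared within a study rather than across studies.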
A large number of studies investigated the validity of radiographic OA findings in comparison with clinical findings at physical examination (such as nodes and deformities) and with symptoms and function; there was large variation between these studies. This could be due to the various analyses of radiographic and clinical findings, e.g., joint level vs patient level, and individual features vs summed scores. Furthermore, studies were difficult to compare because of the use of different effect measures, such as odds ratios (ORs), correlation coefficients, sensitivity and specificity. In general, there was moderate agreement between radiographic features and structural findings at physical examination. The association of radiographic findings with hand function and symptoms was reported to be stronger than the association with findings at physical examination. All evaluated radiographic scores were associated with grip strength and pain; the relation with pain was observed on joint level as well as on patient level, and was shown to depend on radiographic severity. No differences between the evaluated radiographic scoring methods were observed. Only a few studies assessed longitudinal associations between radiography and pain or function; these associations require further validation. In comparison with other imaging methods, radiography appeared to detect less structural damage than MRI, CT and US, and more structural damage than digital photography. However, the findings on MRI, CT and digital photography require further confirmation because of limited evidence. Agreement between radiography and other imaging methods was assessed most often on joint level and differed per feature. Although no major differences regarding the metric properties of the evaluated radiographic scoring methods were observed in this review, the examined joints and the analysis of the obtained scores were shown to differ extensively across studies.
Radiographic outcome measures were presented in many ways, such as scores per joint, summed scores, presence/absence of radiographic OA features, or the highest scored joint. Summed scores, analyzed on patient level, were used most frequently for evaluation of the reliability of radiographic scoring methods and of change in structural damage over time. When evaluating the validity of scoring methods, analyses on individual joint level or on joint group level were performed most often. The variety of joints examined within hand OA research has been described before in a review by Marshall et al. In addition, they evaluated the use of definitions of hand OA, reporting some agreement in the definition of individual joint OA but wide variation in defining overall hand OA65. Kerkhof et al. showed that the use of varying definitions of radiographic OA within the same study leads to different results66. Therefore, as stated before by Haugen et al., standardization of the evaluation and definition of radiographic hand OA with respect to scoring methods, examined joints and the required number of affected joints could reduce the variation across studies.67 Based on this review, it is not possible to decide which radiographic scoring method should be recommended in hand OA research. Although no major differences regarding the metric properties of the scoring methods were observed, the amount of supporting evidence differed across the evaluated methods, which may provide an argument for recommending specific scoring methods. Most evidence across all evaluated domains is available for the KL and OARSI scoring methods. Although global scoring methods may be more reliable than the scoring of individual radiographic features, individual features may be more suitable for evaluation of specific study objectives.
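Several of the effect measures compared above (sensitivity, specificity and Cohen's kappa) can all be derived from a single 2x2 cross-table of two binary readings, e.g., clinical node present/absent vs radiographic KL >= 2. A minimal sketch with hypothetical counts, not taken from any included study:

```python
def sensitivity_specificity(tp, fp, fn, tn):
    """Sensitivity and specificity of a binary finding against a reference."""
    return tp / (tp + fn), tn / (tn + fp)

def cohens_kappa(tp, fp, fn, tn):
    """Chance-corrected agreement between two binary readings."""
    n = tp + fp + fn + tn
    observed = (tp + tn) / n
    # Expected agreement if the two readings were independent,
    # computed from the marginal totals of the 2x2 table
    expected = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n**2
    return (observed - expected) / (1 - expected)

# Hypothetical counts: nodes present/absent (rows) vs KL >= 2 yes/no (columns)
tp, fp, fn, tn = 40, 10, 20, 130
sens, spec = sensitivity_specificity(tp, fp, fn, tn)
kappa = cohens_kappa(tp, fp, fn, tn)
print(f"sensitivity {sens:.2f}, specificity {spec:.2f}, kappa {kappa:.3f}")
```

Because kappa corrects the observed agreement for the agreement expected from the marginal prevalences alone, two readings can agree on most joints yet still yield only moderate kappa, which helps explain why reported agreement between imaging modalities differs per feature and per statistic.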
Therefore, the OARSI scoring method may be recommended for evaluation of individual radiographic features, in addition to use of the KL scoring method for global radiographic assessment. The OARSI Task Force recommendations for the design and conduct of clinical trials in hand OA already stated that the use of either aggregate radiographic scores or grading of individual features depends on the aim of the study9. However, consensus should be reached on a more specific definition: when should a global or an individual feature score be used, and which specific scoring method should be recommended? Furthermore, consensus on the evaluated joints, the presentation of the radiographic outcome measures and the definition of hand OA will help to enhance the comparability of studies in hand OA. A limitation of this study is that the methodological quality of the included studies was not assessed, owing to the heterogeneity across studies regarding their purpose. The heterogeneity regarding examined joints and analyses of the obtained radiographic scores precluded a meta-analysis. Furthermore, publication bias was not addressed. Although we aimed to provide a comprehensive overview of the available literature, the formulated inclusion and exclusion criteria resulted in a specific selection of studies. Consequently, some radiographic scoring methods were not included in this review, namely the Eaton-Littler classification system and the recently developed interphalangeal OA radiographic simplified (iOARS) score.
These methods have not been evaluated for reliability together with another method.68,69 Since, for long-term (>3 years) follow-up studies, sensitivity to change was only evaluated when hand OA was assessed by at least two radiographic scoring methods, a number of studies or abstracts evaluating change in KL and OARSI scores could not be included.3,70–72 In the evaluation of the feasibility of the available radiographic scoring methods in hand OA, we did not focus on the importance of radiographic techniques. Dela Rosa et al. evaluated the reliability of scoring OA of the CMC1 joints according to the Eaton method using different X-ray views, showing that a combination of the posterior-anterior, lateral and Bett's views yielded higher reliability than one or two views alone73. Standardization of radiographic techniques might further enhance the comparability of studies in hand OA.

In conclusion, this systematic review provides an overview of the radiographic scoring methods used in the assessment of structural damage in hand OA. We showed that several scoring methods are available; evaluation of their metric properties regarding reliability, sensitivity to change, feasibility and validity did not reveal major differences. The examined joints and the analysis of the obtained radiographic scores differed extensively across studies. To enhance comparability across studies in hand OA, consensus has to be reached on a preferred scoring method, as well as on the examined joints and the outcome measures used.
Contributions

Authors made substantial contributions to the following: (1a) conception and design of the study: AWV, PB, DMH, MK; (1b) acquisition of data: AWV, JWS, MK; (1c) analysis and interpretation of data: AWV, PB, IKH, DMH, FRR, MK; (2) drafting or critically revising of manuscript: AWV, PB, IKH, JWS, DMH, FRR, MK; (3) final approval of manuscript: AWV, PB, IKH, JWS, DMH, FRR, MK.

Competing interest statement

There were no competing interests.

Funding

This work was supported by the Dutch Arthritis Foundation (grant number 10-1-309).

Supplementary data

Supplementary data related to this article can be found at http://dx.doi.org/10.1016/j.joca.2014.05.026.

References

1. Lawrence RC, Felson DT, Helmick CG, Arnold LM, Choi H, Deyo RA, et al. Estimates of the prevalence of arthritis and other rheumatic conditions in the United States. Part II. Arthritis Rheum 2008;58:26–35.
2. van Saase JL, van Romunde LK, Cats A, Vandenbroucke JP, Valkenburg HA. Epidemiology of osteoarthritis: Zoetermeer survey. Comparison of radiological osteoarthritis in a Dutch population with that in 10 other populations. Ann Rheum Dis 1989;48:271–80.
3. Haugen IK, Englund M, Aliabadi P, Niu J, Clancy M, Kvien TK, et al. Prevalence, incidence and progression of hand osteoarthritis in the general population: the Framingham Osteoarthritis Study. Ann Rheum Dis 2011;70:1581–6.
4. Mahendira D, Towheed TE. Systematic review of non-surgical therapies for osteoarthritis of the hand: an update. Osteoarthritis Cartilage 2009;17:1263–8.
5. Zhang W, Doherty M, Leeb BF, Alekseeva L, Arden NK, Bijlsma JW, et al. EULAR evidence based recommendations for the management of hand osteoarthritis: report of a Task Force of the EULAR Standing Committee for International Clinical Studies Including Therapeutics (ESCISIT). Ann Rheum Dis 2007;66:377–88.
6. Towheed TE. Systematic review of therapies for osteoarthritis of the hand. Osteoarthritis Cartilage 2005;13:455–62.
7. Bellamy N, Kirwan J, Boers M, Brooks P, Strand V, Tugwell P, et al. Recommendations for a core set of outcome measures for future phase III clinical trials in knee, hip, and hand osteoarthritis. Consensus development at OMERACT III. J Rheumatol 1997;24:799–802.
8. Altman R, Brandt K, Hochberg M, Moskowitz R, Bellamy N, Bloch DA, et al. Design and conduct of clinical trials in patients with osteoarthritis: recommendations from a task force of the Osteoarthritis Research Society. Results from a workshop. Osteoarthritis Cartilage 1996;4:217–43.
9. Maheu E, Altman RD, Bloch DA, Doherty M, Hochberg M, Mannoni A, et al. Design and conduct of clinical trials in patients with osteoarthritis of the hand: recommendations from a task force of the Osteoarthritis Research Society International. Osteoarthritis Cartilage 2006;14:303–22.
10. Kellgren JH, Lawrence JS. Radiological assessment of osteoarthrosis. Ann Rheum Dis 1957;16:494–502.
11. Kessler S, Dieppe P, Fuchs J, Sturmer T, Gunther KP. Assessing the prevalence of hand osteoarthritis in epidemiological studies. The reliability of a radiological hand scale. Ann Rheum Dis 2000;59:289–92.
12. Kallman DA, Wigley FM, Scott Jr WW, Hochberg MC, Tobin JD. New radiographic grading scales for osteoarthritis of the hand. Reliability for determining prevalence and progression. Arthritis Rheum 1989;32:1584–91.
13. Altman RD, Hochberg M, Murphy Jr WA, Wolfe F, Lequesne M. Atlas of individual radiographic features in osteoarthritis. Osteoarthritis Cartilage 1995;3(Suppl A):3–70.
14. Verbruggen G, Veys EM. Numerical scoring systems for the anatomic evolution of osteoarthritis of the finger joints. Arthritis Rheum 1996;39:308–20.
15. Verbruggen G, Wittoek R, Vander CB, Elewaut D. Morbid anatomy of 'erosive osteoarthritis' of the interphalangeal finger joints: an optimised scoring system to monitor disease progression in affected joints. Ann Rheum Dis 2010;69:862–7.
16. Bijsterbosch J, Haugen IK, Malines C, Maheu E, Rosendaal FR, Watt I, et al. Reliability, sensitivity to change and feasibility of three radiographic scoring methods for hand osteoarthritis. Ann Rheum Dis 2011;70:1465–7.
17. Maheu E, Cadet C, Gueneugues S, Ravaud P, Dougados M. Reproducibility and sensitivity to change of four scoring methods for the radiological assessment of osteoarthritis of the hand. Ann Rheum Dis 2007;66:464–9.
18. Moher D, Liberati A, Tetzlaff J, Altman DG. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. BMJ 2009;339:b2535.
19. Boers M, Brooks P, Strand CV, Tugwell P. The OMERACT filter for outcome measures in Rheumatology. J Rheumatol 1998;25:198–9.
20. Addimanda O, Mancarella L, Dolzani P, Punzi L, Fioravanti A, Pignotti E, et al. Clinical and radiographic distribution of structural damage in erosive and nonerosive hand osteoarthritis. Arthritis Care Res (Hoboken) 2012;64:1046–53.
21. Haugen IK, Boyesen P, Slatkowsky-Christensen B, Sesseng S, Bijsterbosch J, van der Heijde D, et al. Comparison of features by MRI and radiographs of the interphalangeal finger joints in patients with hand osteoarthritis. Ann Rheum Dis 2012;71:345–50.
22. Kwok WY, Bijsterbosch J, Malm SH, Biermasz NR, Huetink K, Nelissen RG, et al. Validity of joint space width measurements in hand osteoarthritis. Osteoarthritis Cartilage 2011;19:1349–55.
23. Mancarella L, Magnani M, Addimanda O, Pignotti E, Galletti S, Meliconi R.
Ultrasound-detected synovitis with power Doppler signal is associated with severe radiographic damage and reduced cartilage thickness in hand osteoarthritis. Osteoarthritis Cartilage 2010;18:1263–8.
24. Haugen IK, Slatkowsky-Christensen B, Boyesen P, van der Heijde D, Kvien TK. Cross-sectional and longitudinal associations between radiographic features and measures of pain and physical function in hand osteoarthritis. Osteoarthritis Cartilage 2013;21:1191–8.
25. Verbruggen G, Wittoek R, Vander CB, Elewaut D. Tumour necrosis factor blockade for the treatment of erosive osteoarthritis of the interphalangeal finger joints: a double blind, randomised trial on structure modification. Ann Rheum Dis 2012;71:891–8.
26. Verbruggen G, Goemaere S, Veys EM. Systems to assess the progression of finger joint osteoarthritis and the effects of disease modifying osteoarthritis drugs. Clin Rheumatol 2002;21:231–43.
27. Botha-Scheepers S, Watt I, Breedveld FC, Kloppenburg M. Reading radiographs in pairs or in chronological order influences radiological progression in osteoarthritis. Rheumatology (Oxford) 2005;44:1452–5.
28. Botha-Scheepers S, Riyazi N, Watt I, Rosendaal FR, Slagboom E, Bellamy N, et al. Progression of hand osteoarthritis over 2 years: a clinical and radiological follow-up study. Ann Rheum Dis 2009;68:1260–4.
29. Botha-Scheepers SA, Watt I, Slagboom E, Meulenbelt I, Rosendaal FR, Breedveld FC, et al.
Influence of familial factors on radiologic disease progression over two years in siblings with osteoarthritis at multiple sites: a prospective longitudinal cohort study. Arthritis Rheum 2007;57:626e32. 30. Buckland-Wright JC, Macfarlane DG, Lynch JA, Clark B. Quan- titative microfocal radiographic assessment of progression in osteoarthritis of the hand. Arthritis Rheum 1990;33:57e65. 31. Olejarova M, Kupka K, Pavelka K, Gatterova J, Stolfa J. Com- parison of clinical, laboratory, radiographic, and scintigraphic findings in erosive and nonerosive hand osteoarthritis. Results of a two-year study. Jt Bone Spine 2000;67:107e12. 32. Drape JL, Idy-Peretti I, Goettmann S, Salon A, Abimelec P, Guerin-Surville H, et al. MR imaging of digital mucoid cysts. Radiology 1996;200:531e6. 33. Bagge E, Bjelle A, Eden S, Svanborg A. Osteoarthritis in the elderly: clinical and radiological findings in 79 and 85 year olds. Ann Rheum Dis 1991;50:535e9. 34. Caspi D, Flusser G, Farber I, Ribak J, Leibovitz A, Habot B, et al. Clinical, radiologic, demographic, and occupational aspects of hand osteoarthritis in the elderly. Semin Arthritis Rheum 2001;30:321e31. 35. Cicuttini FM, Baker J, Hart DJ, Spector TD. Relation between Heberden's nodes and distal interphalangeal joint osteophytes and their role as markers of generalised disease. Ann Rheum Dis 1998;57:246e8. 36. Hart DJ, Spector TD, Brown P, Wilson P, Doyle DV, Silman AJ. Clinical signs of early osteoarthritis: reproducibility and rela- tion to X ray changes in 541 women in the general population. Ann Rheum Dis 1991;50:467e70. 37. Hart D, Spector T, Egger P, Coggon D, Cooper C. Defining osteoarthritis of the hand for epidemiological studies: the Chingford Study. Ann Rheum Dis 1994;53:220e3. 38. Jonsson H, Helgadottir GP, Aspelund T, Sverrisdottir JE, Eiriksdottir G, Sigurdsson S, et al. The use of digital photo- graphs for the diagnosis of hand osteoarthritis: the AGES- Reykjavik study. BMC Musculoskelet Disord 2012;13:20. 39. 
Marshall M, van der Windt D, Nicholls E, Myers H, Dziedzic K. Radiographic thumb osteoarthritis: frequency, patterns and associations with pain and clinical assessment findings in a community-dwelling population. Rheumatology (Oxford) 2011;50:735e9. 40. Mathiessen A, Haugen IK, Slatkowsky-Christensen B, Boyesen P, Kvien TK, Hammer HB. Ultrasonographic assess- ment of osteophytes in 127 patients with hand osteoarthritis: exploring reliability and associations with MRI, radiographs and clinical joint findings. Ann Rheum Dis 2013;72:51e6. 41. Rees F, Doherty S, Hui M, Maciewicz R, Muir K, Zhang W, et al. Distribution of finger nodes and their association with un- derlying radiographic features of osteoarthritis. Arthritis Care Res (Hoboken ) 2012;64:533e8. 42. Stern AG, Moxley G, Sudha Rao TP, Disler D, McDowell C, Park M, et al. Utility of digital photographs of the hand for assessing the presence of hand osteoarthritis. Osteoarthritis Cartilage 2004;12:360e5. 43. Dahaghin S, Bierma-Zeinstra SM, Ginai AZ, Pols HA, Hazes JM, Koes BW. Prevalence and pattern of radiographic hand oste- oarthritis and association with pain and disability (the Rot- terdam study). Ann Rheum Dis 2005;64:682e7. 44. Ding H, Solovieva S, Vehmas T, Riihimaki H, Leino-Arjas P. Finger joint pain in relation to radiographic osteoarthritis and joint locationea study of middle-aged female dentists and teachers. Rheumatology (Oxford) 2007;46:1502e5. 45. Dominick KL, Jordan JM, Renner JB, Kraus VB. Relationship of radiographic and clinical variables to pinch and grip strength among individuals with osteoarthritis. Arthritis Rheum 2005;52:1424e30. 46. El-Sherif HE, Kamal R, Moawyah O. Hand osteoarthritis and bone mineral density in postmenopausal women; clinical relevance to hand function, pain and disability. Osteoarthritis Cartilage 2008;16:12e7. 47. Jones G, Cooley HM, Bellamy N. 
A cross-sectional study of the association between Heberden's nodes, radiographic osteoar- thritis of the hands, grip strength, disability and pain. Osteo- arthritis Cartilage 2001;9:606e11. 48. Kortekaas MC, Kwok WY, Reijnierse M, Huizinga TW, Kloppenburg M. Osteophytes and joint space narrowing are independently associated with pain in finger joints in hand osteoarthritis. Ann Rheum Dis 2011;70:1835e7. 49. Lee HJ, Paik NJ, Lim JY, Kim KW, Gong HS. The impact of digit- related radiographic osteoarthritis of the hand on grip- strength and upper extremity disability. Clin Orthop Relat Res 2012;470:2202e8. 50. Ozkan B, Keskin D, Bodur H, Barca N. The effect of radiological hand osteoarthritis on hand function. Clin Rheumatol 2007;26:1621e5. 51. Sonne-Holm S, Jacobsen S. Osteoarthritis of the first carpo- metacarpal joint: a study of radiology and clinical epidemi- ology. Results from the Copenhagen Osteoarthritis Study. Osteoarthritis Cartilage 2006;14:496e500. 52. Zhang Y, Niu J, Kelly-Hayes M, Chaisson CE, Aliabadi P, Felson DT. Prevalence of symptomatic hand osteoarthritis and its impact on functional status among the elderly: the Fra- mingham Study. Am J Epidemiol 2002;156:1021e7. 53. Sunk IG, Amoyo-Minar L, Niederreiter B, Soleiman A, Kainberger F, Smolen JS, et al. Histopathological correlation supports the use of X-rays in the diagnosis of hand osteoar- thritis. Ann Rheum Dis 2013;72:572e7. 54. Grainger AJ, Farrant JM, O'Connor PJ, Tan AL, Tanner S, Emery P, et al. MR imaging of erosions in interphalangeal joint osteoarthritis: is all osteoarthritis erosive? Skeletal Radiol 2007;36:737e45. 55. Wittoek R, Jans L, Lambrecht V, Carron P, Verstraete K, Verbruggen G. 
Reliability and construct validity of ultraso- nography of soft tissue and destructive changes in erosive http://refhub.elsevier.com/S1063-4584(14)01108-X/sref23 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref23 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref23 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref23 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref24 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref24 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref24 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref24 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref24 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref24 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref25 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref25 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref25 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref25 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref25 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref25 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref26 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref26 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref26 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref26 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref26 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref27 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref27 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref27 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref27 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref27 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref28 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref28 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref28 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref28 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref28 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref29 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref29 
http://refhub.elsevier.com/S1063-4584(14)01108-X/sref29 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref29 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref29 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref29 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref30 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref30 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref30 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref30 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref31 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref31 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref31 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref31 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref31 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref32 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref32 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref32 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref32 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref33 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref33 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref33 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref33 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref34 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref34 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref34 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref34 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref34 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref35 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref35 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref35 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref35 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref35 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref36 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref36 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref36 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref36 
http://refhub.elsevier.com/S1063-4584(14)01108-X/sref36 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref37 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref37 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref37 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref37 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref38 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref38 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref38 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref38 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref39 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref39 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref39 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref39 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref39 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref39 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref40 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref40 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref40 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref40 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref40 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref40 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref41 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref41 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref41 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref41 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref41 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref42 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref42 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref42 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref42 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref42 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref43 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref43 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref43 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref43 
http://refhub.elsevier.com/S1063-4584(14)01108-X/sref43 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref44 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref44 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref44 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref44 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref44 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref44 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref45 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref45 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref45 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref45 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref45 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref46 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref46 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref46 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref46 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref46 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref47 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref47 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref47 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref47 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref47 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref48 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref48 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref48 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref48 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref48 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref49 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref49 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref49 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref49 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref49 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref50 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref50 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref50 
http://refhub.elsevier.com/S1063-4584(14)01108-X/sref50 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref51 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref51 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref51 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref51 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref51 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref52 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref52 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref52 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref52 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref52 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref53 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref53 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref53 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref53 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref53 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref54 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref54 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref54 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref54 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref54 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref55 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref55 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref55 A.W. Visser et al. / Osteoarthritis and Cartilage 22 (2014) 1710e1723 1723 osteoarthritis of the interphalangeal finger joints: a compari- son with MRI. Ann Rheum Dis 2011;70:278e83. 56. Iagnocco A, Filippucci E, Ossandon A, Ciapetti A, Salaffi F, Basili S, et al. High resolution ultrasonography in detection of bone erosions in patients with hand osteoarthritis. J Rheumatol 2005;32:2381e3. 57. Keen HI, Wakefield RJ, Grainger AJ, Hensor EM, Emery P, Conaghan PG. Can ultrasonography improve on radiographic assessment in osteoarthritis of the hands? A comparison be- tween radiographic and ultrasonographic detected pathology. 
Ann Rheum Dis 2008;67:1116e20. 58. Vlychou M, Koutroumpas A, Malizos K, Sakkas LI. Ultrasono- graphic evidence of inflammation is frequent in hands of pa- tients with erosive osteoarthritis. Osteoarthritis Cartilage 2009;17:1283e7. 59. Huetink K, van't Klooster R, Kaptein BL, Watt I, Kloppenburg M, Nelissen RG, et al. Automatic radiographic quantification of hand osteoarthritis; accuracy and sensitivity to change in joint space width in a phantom and cadaver study. Skelet Radiol 2012;41:41e9. 60. van't Klooster R, Hendriks EA, Watt I, Kloppenburg M, Reiber JH, Stoel BC. Automatic quantification of osteoarthritis in hand radiographs: validation of a new method to measure joint space width. Osteoarthritis Cartilage 2008;16:18e25. 61. Saltzherr MS, van Neck JW, Muradin GS, Ouwendijk R, Luime JJ, Coert JH, et al. Computed tomography for the detection of thumb base osteoarthritis: comparison with dig- ital radiography. Skelet Radiol 2013;42:715e21. 62. Ceceli E, Gul S, Borman P, Uysal SR, Okumus M. Hand function in female patients with hand osteoarthritis: relation with radiological progression. Hand (NY) 2012;7:335e40. 63. Marshall M, van der Windt D, Nicholls E, Myers H, Hay E, Dziedzic K. Radiographic hand osteoarthritis: patterns and associations with hand pain and function in a community- dwelling sample. Osteoarthritis Cartilage 2009;17:1440e7. 64. Buckland-Wright JC, Macfarlane DG, Lynch JA. Sensitivity of radiographic features and specificity of scintigraphic imaging in hand osteoarthritis. Rev Rhum Engl Ed 1995;62:14Se26S. 65. Marshall M, Dziedzic KS, van der Windt DA, Hay EM. A systematic search and narrative review of radiographic definitions of hand osteoarthritis in population-based studies. Osteoarthritis Cartilage 2008;16:219e26. 66. Kerkhof HJ, Meulenbelt I, Akune T, Arden NK, Aromaa A, Bierma-Zeinstra SM, et al. Recommendations for standardiza- tion and phenotype definitions in genetic studies of osteoar- thritis: the TREAT-OA consortium. 
Osteoarthritis Cartilage 2011;19:254e64. 67. Haugen IK, Boyesen P. Imaging modalities in hand osteo- arthritiseand perspectives of conventional radiography, magnetic resonance imaging, and ultrasonography. Arthritis Res Ther 2011;13:248. 68. Spaans AJ, van Laarhoven CM, Schuurman AH, van Minnen LP. Interobserver agreement of the Eaton-Littler classification system and treatment strategy of thumb carpometacarpal joint osteoarthritis. J Hand Surg Am 2011;36:1467e70. 69. Sunk IG, Amoyo-Minar L, Stamm T, Haider S, Niederreiter B, Supp G, et al. Interphalangeal Osteoarthritis Radiographic Simplified (iOARS) score: a radiographic method to detect osteoarthritis of the interphalangeal finger joints based on its histopathological alterations. Ann Rheum Dis 2013 [epub ahead of print]. 70. Altman RD, Fries JF, Bloch DA, Carstens J, Cooke TD, Genant H, et al. Radiographic assessment of progression in osteoarthritis. Arthritis Rheum 1987;30:1214e25. 71. Paradowski PT, Lohmander LS, Englund M. Natural history of radiographic features of hand osteoarthritis over 10 years. Osteoarthritis Cartilage 2010;18:917e22. 72. Maheu E, Cadet C, Carrat F, Barthe Y, Berenbaum F. Radiologic Progression of Hand Osteoarthritis Over 2.6 Years According to Various Methods of Calculation e Data From the SEKOIA Trial 2013. 73. Dela Rosa TL, Vance MC, Stern PJ. Radiographic optimization of the Eaton classification. J Hand Surg Br 2004;29:173e7. 74. Burnett S, Hart DJ, Cooper C, Spector TD. A Radiographic Atlas of Osteoarthritis 1994. 75. Eaton RG, Glickel SZ. Trapeziometacarpal osteoarthritis. Stag- ing as a rationale for treatment. Hand Clin 1987;3:455e71. 76. Lane NE, Nevitt MC, Genant HK, Hochberg MC. Reliability of new indices of radiographic osteoarthritis of the hand and hip and lumbar disc degeneration. J Rheumatol 1993;20:1911e8. 
http://refhub.elsevier.com/S1063-4584(14)01108-X/sref55 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref55 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref55 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref56 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref56 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref56 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref56 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref56 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref57 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref57 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref57 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref57 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref57 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref57 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref58 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref58 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref58 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref58 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref58 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref59 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref59 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref59 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref59 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref59 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref59 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref60 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref60 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref60 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref60 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref60 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref61 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref61 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref61 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref61 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref61 
http://refhub.elsevier.com/S1063-4584(14)01108-X/sref62 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref62 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref62 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref62 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref63 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref63 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref63 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref63 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref63 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref64 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref64 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref64 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref64 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref66 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref66 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref66 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref66 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref66 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref67 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref67 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref67 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref67 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref67 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref67 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref68 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref68 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref68 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref68 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref68 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref69 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref69 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref69 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref69 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref69 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref70 
http://refhub.elsevier.com/S1063-4584(14)01108-X/sref70 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref70 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref70 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref70 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref70 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref71 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref71 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref71 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref71 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref72 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref72 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref72 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref72 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref73 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref73 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref73 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref73 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref73 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref74 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref74 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref74 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref75 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref75 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref76 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref76 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref76 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref77 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref77 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref77 http://refhub.elsevier.com/S1063-4584(14)01108-X/sref77 Radiographic scoring methods in hand osteoarthritis – a systematic literature search and descriptive review Introduction Methods Identification of studies Inclusion and exclusion criteria Data extraction Statistical analyses Results Literature flow Study characteristics Discrimination Reliability Sensitivity to change 
Ophthalmology Practice

Principles and practice of external digital photography in ophthalmology

Bipasha Mukherjee, Akshay Gopinathan Nair

Department of Orbit, Oculoplasty, Reconstructive and Aesthetic Services, Sankara Nethralaya, Medical Research Foundation, Chennai, India

Correspondence to: Dr. Bipasha Mukherjee, Department of Orbit, Oculoplasty, Reconstructive and Aesthetic Services, Sankara Nethralaya, Medical Research Foundation, 18 College Road, Chennai - 600 006, India. E-mail: beas003@yahoo.co.uk

Manuscript received: 06.04.11; Revision accepted: 17.11.11

Indian Journal of Ophthalmology, Vol. 60 No. 2. Website: www.ijo.in; DOI: 10.4103/0301-4738.94053

Incorporating clinical photography into an ophthalmic practice is essential. Patient photographs are routinely used in teaching, presentations, documenting surgical outcomes and marketing. Standardized clinical photographs are part of the armamentarium of any ophthalmologist interested in enhancing his or her practice. Unfortunately, many clinicians still avoid taking patient photographs for want of basic knowledge or inclination. The ubiquitous digital camera and digital technology have made it extremely easy and affordable to take high-quality images; it is no longer necessary to employ a professional photographer or to invest in expensive equipment for this purpose. Any ophthalmologist should be able to take clinical photographs in an office setting with minimal technical skill. The purpose of this article is to provide the ophthalmic surgeon with guidelines to achieve standardized photographic views for specific procedures, to achieve consistency, to help in preoperative planning, and to produce accurate preoperative and postoperative comparisons, which aid in self-improvement, patient education, medicolegal documentation and publications. This review also discusses editing, storage, patient consent, medicolegal issues and the importance of maintaining patient confidentiality.

Key words: Digital camera, external photograph, photography

A picture is worth a thousand words. A clinical photograph is an invaluable tool in the learning process of any medical practitioner, documenting the progression of a disease or the response to treatment over time. Ophthalmic photography is a highly specialized form of medical imaging dedicated to the study and treatment of disorders of the eye. Surgeons and healthcare providers rely extensively on photographic communication for patient conditions, surgical outcomes, teaching, education, research, and marketing.[1]

Digital Photography

Over the past two decades, digital photography has overtaken film photography and has now become the standard. It offers significant advantages over conventional photography. Storing and retrieving digital images is particularly convenient in terms of time and space. The digital format also offers an undisputed economic advantage: immediate visualization allows undesired images to be deleted and recaptured on the spot, avoiding the cost of useless prints incurred with traditional photography. Digital images are also useful for providing care through tele-ophthalmology systems deployed in remote or underserved areas and for sharing images electronically with peers.[2] Digital photography offers the ability to correct almost all aspects of an image once it has been imported into a computer with the proper software installed.[3] Digital photographs are indispensable today for publications, presentations, patient information and communication, and medicolegal documentation. This article deals with the principles and practice of external ophthalmic digital photography.

Basic Setup

Camera

i. Choosing the right camera: A multitude of digital cameras offering a wide array of features are available at affordable prices. Cameras are essentially either single lens reflex (SLR) or 'point-and-shoot' (compact digital). The main difference between the two is how the photographer sees the object through the lens. In a point-and-shoot camera, the viewfinder is a simple window through the body of the camera; one does not see the real image formed by the camera lens. An SLR camera uses a mechanical mirror system and pentaprism (a five-sided prism) to direct light from the lens to an optical viewfinder on the back of the camera, enabling the photographer to see the exact frame about to be captured. Furthermore, point-and-shoot cameras have a shutter lag, the time delay between pressing the shutter button and the image actually being captured; with these cameras it is important to first press the shutter button halfway down so the camera can focus on the subject properly before pressing it all the way down. Different lenses can be used with an SLR camera, which also allows the photographer to modify parameters such as shutter speed and aperture, enabling photographs under all conditions. An SLR camera is bulky, heavy and expensive, whereas a point-and-shoot camera is light, portable and relatively inexpensive. The choice between an SLR and a point-and-shoot must be made keeping in mind the budget, ease of use, photographic requirements, ability of the photographer and features required. Of late, a few cameras have been built to bridge the gap between professional SLR cameras and the more consumer-friendly point-and-shoot cameras; these are called 'pro-sumer' (professional + consumer) cameras.

Here are some frequently used terms to keep in mind before buying a camera intended for clinical photography.

Mega pixel: Digital images are made up of thousands of small tile-like picture elements called pixels. One megapixel equals one million pixels. The more pixels, the higher the image resolution. Before deciding on the megapixel count of a camera, one must decide the maximum size of print that might be needed. A picture taken at 6 megapixels can be optimally printed up to 11 x 14 inches without 'pixelation', or individual pixels becoming visible. However, even a 3.2-megapixel camera is sufficient for clinical photography.

Macro mode: Macro photography is essentially close-up photography of small objects. Cameras can be switched into macro mode by pressing the button with the 'flower' icon. This is of particular use while photographing small skin lesions.

Flash: A flash illuminates a dark scene or the object to be photographed. Most cameras have built-in flashes that can be turned off. SLR cameras can take additional flash fixtures, which can be used to create diffuse illumination. Flashes can wash out subtle skin conditions, so the flash should be used with caution in such cases. It is imperative to stabilize the camera when not using the flash, to prevent a blurred photograph.
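The megapixel guidance above amounts to simple arithmetic: pixel count and print resolution (dots per inch, DPI) together determine the largest acceptable print. A minimal sketch, assuming a 4:3 sensor and an illustrative 200 DPI print threshold (the function name, aspect ratio and DPI figures are our assumptions, not from the article):

```python
import math

def max_print_size(megapixels, dpi=200, aspect=(4, 3)):
    """Estimate the largest print (width, height in inches) before
    individual pixels become visible.

    Assumes a rectangular sensor with the given aspect ratio; ~200 DPI
    is a commonly quoted threshold for acceptable photographic prints
    (illustrative figures, not from the article).
    """
    total_px = megapixels * 1_000_000
    aw, ah = aspect
    height_px = math.sqrt(total_px * ah / aw)   # h = sqrt(N * ah/aw)
    width_px = total_px / height_px             # w = N / h
    return round(width_px / dpi, 1), round(height_px / dpi, 1)

print(max_print_size(6))    # 6 MP, roughly the article's 11 x 14 inch claim
print(max_print_size(3.2))  # a 3.2 MP camera, adequate for clinical use
```

At 200 DPI a 6 MP image yields a print of roughly 14 x 11 inches, consistent with the 11 x 14 inch figure quoted above; tightening `dpi` to 300 shrinks the acceptable print accordingly.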
'Image Stabilization' (IS), 'Vibration Reduction' (VR) and 'Optical Image Stabilization' (OIS) are names given by different manufacturers to technology available in newer digital cameras to minimize the effects of camera shake. The camera is held horizontal when taking frontal views, even of vertical subjects, so as to standardize illumination, with the light always coming from above. Conversely, it may be necessary to turn the camera vertical for lateral and oblique views, such that the flash comes from the side to avoid shadows.[4] Keeping the subject set off from the background (50–90 cm) helps to avoid shadows as well.[5]
Video mode: This is a feature regularly available in point-and-shoot cameras, and only recently included in digital SLR cameras. It is particularly useful for capturing surgical procedures, ocular motility and dynamic clinical conditions such as nystagmus and blepharospasm.[6]
Accessories: Other accessories such as tripods and additional lights and/or reflectors may be helpful. To power the newer high-performance cameras, manufacturers have shifted to 'AA' batteries or more efficient lithium-ion battery packs, which last longer. The charging kit for the battery packs comes as a standard accessory and usually takes 4–6 hours to charge a fully drained battery. It is always advisable to have charged batteries and spare memory cards available so as not to miss important clinical photographs.
Standardization of External Clinical Photography
Standardization requires planning, adherence to set protocols and common sense.
Patient position: Most external photographs are taken with the patient in the anatomical position, unless the region of interest is hidden in that position.
Patient preparation: Whenever the face is photographed, hair should be pulled off the face and placed behind the ears.[7] Jewelry, glasses and hearing aids should be removed as far as possible.
Makeup is not allowed, especially in cases of skin resurfacing procedures (e.g., laser resurfacing, dermabrasion, chemical peels). All garments that interfere with the visibility of the area must be removed.[8]
Background: The background should be an even, neutral, nonreflecting, monochromatic surface. Preferred background colors are white, gray and blue.[9] White or light blue panels on walls or the backs of doors can be adapted in places where photographs are taken (e.g., in the clinic). An assistant can hold a light-colored drape behind the patient in the ward or in the operation theater. For serial photographs showing progression/regression/post-operative results in a patient, it is important that the only variable should be the patient; everything else should stay the same: viewpoint, positioning, lighting, color, magnification and background[10] [Fig. 1].
Commonly Required Views in Ophthalmology
Face views
Frontal view: From the upper limit of the head to the jugular incisure, with the patient looking at the camera [Fig. 2]. The reference plane that runs from the upper edge of the tragus to the lowest point on the lower edge of the orbit (Frankfurt plane) should be horizontal.[7]
Oblique view (right and left): From the frontal view, with the patient's face rotated 45 degrees so as to align the tip of the nose with the cheek outline. Care must be taken to leave a narrow strip of cheek to set off the nasal tip from the background. The Frankfurt plane is held horizontal. The patient looks ahead.
Lateral view (right and left): From the frontal view, with the patient's whole body rotated 90 degrees so as to align the nasal tip and chin. The head must be in its anatomic position with no lateral inclination, flexion or extension. The Frankfurt plane is held horizontal, and the contralateral eyebrow is not visible. The patient looks ahead.
Images are taken with the patient assuming a neutral facial expression and a relaxed, natural head position, unless the aim is to assess muscular contractions. The series of photographs comprising the patient's frontal, oblique and lateral views (right and left) is called a 'standard five-view head series', and is indicated in oculo-facial plastic surgery practice [Fig. 3].
Eye views
Frontal view: The upper margin is the eyebrows, and the lower margin is the malar arches. The lateral canthi are included [Fig. 4].
Lateral view: This view shows the position of the eyeball relative to the zygomatic bone.
[Figure 1: Standardization in images. Note the same position and lighting, which allows easy pre-operative and post-operative comparisons.]
[Figure 2: Face, frontal view. The horizontal line indicates the Frankfurt plane. Masking has been done to protect the identity of the patient.]
[Figure 3: Standard 5-view head series. From left to right: left profile, left oblique, frontal view, right oblique and right profile.]
[Figure 5: Head posture in ptosis.]
Photography in Specific Situations
In oculoplasty/aesthetic surgery practice
Ptosis: It is important to document any face turn/head tilt before taking photographs of the face in the anatomical position [Fig. 5].
Primary gaze: Two separate photographs are shot with both eyes looking straight ahead: first with frontalis overaction, followed by one without frontalis overaction [Fig. 4]. These are followed by one with the subject looking down, to document any pre-operative lid lag, and finally a photograph with the eyes closed and the brow relaxed [Fig. 6].
[Figure 4: Right upper eyelid ptosis with frontalis acting (a) and relaxed (b). In the eye view, the upper margin is at the eyebrows and the lower margin at the malar arches.]
Motility pictures must also be captured if the patient has a squint (see below).
Proptosis/Enophthalmos
Basal view (worm's eye view): The head is bent backward so as to align the nasal tip with the brows on a horizontal plane [Fig. 7a].
Cephalic view (bird's eye view): Taken from above, with the eyebrows aligned horizontally. The patient should be looking straight up [Fig. 7b]. The focus should be on the corneas, not the brow or chin.
Strabismology
Motility photographs: External photographs of both eyes together, taken with the patient looking in all directions of gaze. Usually the photographs are combined into a single collage in order to show the abnormality [Fig. 8]. Tip: It is advisable to crop the photos to show only the eyes.
Skin lesions: Images in macro mode, or optimal cropping of a high-resolution image, yield good results [Fig. 9]. Placing the camera too close to the subject should be avoided, as it distorts the normal features.
[Figure 8: Ocular motility: 9 positions of gaze.]
[Figure 9: Ideal cropping of a high-resolution image to demonstrate a right lower lid lesion.]
[Figure 10: Per-operative photography: image taken with both the flash and the overhead lights on. The hand with the bloody gauze in the left upper corner should be cropped.]
[Figure 6: Ptosis of the left upper eyelid: primary gaze (a), down gaze (b) and eyes closed (c).]
[Figure 7: Enophthalmos and proptosis: worm's eye view (a) and bird's eye view (b).]
Per-operative photography: One should try different combinations of overhead lamps with and without flash to obtain the correct exposure in the operation theater [Figs. 10 and 11]. Tip: It is preferable to remove gauzes, Q-tips and the hands of the surgeon/assistant from the field. A scale can be placed against an object to document its size (foreign bodies, tumours, etc.) [Fig. 12].
Radiographic images: High-quality images of computed tomography (CT) scans and magnetic resonance imaging (MRI) can be procured from the radiologist and viewed on a computer using a DICOM (Digital Imaging and Communications in Medicine) viewer. Alternatively, one can photograph the film sheets placed against a film viewer with the flash turned off to avoid reflection. The camera should be held steady to avoid blur [Fig. 13].
Ethical and Legal Issues in Clinical Photography
Consent: Patient consent for photo-documentation must be obtained prior to any photography. The consent includes a statement of understanding that the photographs are part of the patient's medical record for purposes of medico-legal documentation and may be used for educational purposes, lectures, exhibits and publications.[11] Informed consent for publication of patient information is necessary because the physician-patient relationship is confidential.[12] Patients can at times be identified in photographs or in descriptions of their sex, age and other details.[13] Furthermore, patients have the right to refuse photography.
Electronic publishing: About a third of the 70 million MEDLINE searches made each year at the website of the National Library of Medicine are performed by members of the general public.[12] Scientific material is widely available on the internet, and there is no control over the viewership of images once they are published. Hence, it is recommended by some that specific consent should be obtained if an image will be used in electronic publishing, explicitly mentioning all possible forms of publication now in existence[13] [Table 1].
[Figure 11: Per-operative photography: flash off (a); flash on (b).]
[Figure 12: Macro mode: an intra-orbital foreign body photographed in macro mode. A ruler in the frame indicates the size of the object.]
[Figure 13: Capturing radiographic images: with the flash turned on, the reflection of the flash is captured; with the flash turned off but the camera unsteady, 'camera shake' results in a blurred image; holding the camera steady with the flash off gives clear details and ideal illumination. CT and MRI scan images must be cropped to protect patient identity.]
Table 1: Consent form for medical photography
PATIENT CONSENT FORM FOR MEDICAL PHOTOGRAPHY
I, ____________________________________________________, hereby give my consent to Dr. ________________________________ to include my/my ward's photograph/s in work to support medical teaching, research and science. This consent extends to all editions of the work, present, past and future, and in whatever form or medium (books, journals, CD-ROMs, internet and online publication).
By signing, I confirm that this consent form has been explained to me in terms and language which I completely understand.
1. I understand that the photograph/s may be used in my medical record, for purposes of medical teaching, and for publication in medical textbooks, journals and/or electronic publications.
2. It has been made clear to me that personal information such as my name, age, home or workplace address and hospital identification number (MRD No.) will not be displayed or used for any purpose.
3. I understand that I will not receive any payment from any party.
4. I understand that the image/s may be seen by members of the general public, in addition to scientists and medical researchers who regularly use these publications in their professional education.
5. I understand that it is possible that someone may recognize me and that complete anonymity cannot be guaranteed.
6. Refusal to consent will in no way affect the medical care I will receive.
I declare that I have no claim on grounds of breach of confidence against Dr. ____________, in any legal case in connection with the publication of the photograph/s.
Signature of the patient
Name of the patient (CAPITAL LETTERS)
In the case of a minor (patient aged less than 18 years) or an intellectually disabled patient, consent can be given by the parent or guardian.
Signature of the Parent/Guardian
Name of the Parent/Guardian (CAPITAL LETTERS)
Relationship with the patient
Signature of the Doctor
Name of the Doctor (CAPITAL LETTERS)
Date  Time  Place
Copyright: Under Indian copyright law, which protects original works of authorship including photography, the creator of the original expression in a work is its author. The author is also the owner of the copyright unless there is a written agreement by which the author assigns the copyright to another person or entity, such as a publisher. Many biomedical journals ask authors to transfer copyright to the journal (e.g., the Indian Journal of Ophthalmology).
However, an increasing number of 'open-access' journals do not require transfer of copyright. Copyright infringement occurs when a copyrighted work is reproduced, distributed, publicly displayed, or made into a derivative work without the permission of the copyright owner. (For further information, see copyright.gov.in.)
Editing and storage: With the advent of digital photography, storage and editing of photographs have become easy. As practices move to electronic medical records, uploading photographs against patients' data will become even easier. There are many photo-editing programs which can alter and manipulate photographs convincingly. The most common format for storing digital images is high-quality JPEG (Joint Photographic Experts Group), but some publishers prefer TIFF (Tag Image File Format). It is advisable to save photographs at a resolution of 200-300 dpi (dots per inch). JPEG images save memory space and are easier to store and manipulate. The standard for professionals involved in photo editing is Adobe Photoshop 7.0. It is important to make backup copies of the photographs at regular intervals on CD-ROMs or external hard disks, which are inexpensive and widely available. Photographs can also be stored online on some websites [Table 2]. Most medical journals insist that submitted photographs adhere to their guidelines in terms of quality, size and color. This may require a certain amount of manipulation of the photograph. Photographs may also need manipulation, such as masking, to protect the identity of the patient [Fig. 2].
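The advice to keep regular, verified backups can be sketched in a few lines of Python. This is an illustrative snippet using only the standard library; the folder names and the file-extension filter are arbitrary choices, not part of the article.

```python
import hashlib
import shutil
from pathlib import Path

IMAGE_EXTENSIONS = {".jpg", ".jpeg", ".tif", ".tiff"}

def sha256_of(path):
    """Return the SHA-256 checksum of a file, read in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def backup_photos(source_dir, backup_dir):
    """Copy image files to the backup folder and verify each copy
    against its checksum, so a corrupted copy is detected immediately."""
    source, backup = Path(source_dir), Path(backup_dir)
    backup.mkdir(parents=True, exist_ok=True)
    copied = []
    for item in sorted(source.iterdir()):
        if item.suffix.lower() in IMAGE_EXTENSIONS:
            target = backup / item.name
            shutil.copy2(item, target)  # copy2 also preserves timestamps
            if sha256_of(item) != sha256_of(target):
                raise IOError("checksum mismatch for " + item.name)
            copied.append(item.name)
    return copied
```

The checksum step is the point of the sketch: a backup that is copied but never verified gives a false sense of security.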
However, to maintain the integrity of the image, manipulation may only be carried out on the whole image, and must be limited to simple sharpening, adjustment of contrast and brightness, and correction of color balance.[14] Photographic manipulation at the processing stage to misrepresent outcomes is unethical and illegal. It is also advisable to watermark one's photographs and videos to prevent unethical copying and usage. Digital watermarking is the process of incorporating the author's name or logo in the photograph, which helps to verify the identity of its owner. While watermarking is acceptable when images are incorporated into presentations and handouts, it is strictly forbidden when submitting to journals for publication. Watermarking can be done in any photo-editing program [Table 2]. Thus, procurement of written consent from the patient, minimal modification, and responsible and safe storage go a long way in safeguarding the interests of the patient and the clinician. A simple and systematic approach to clinical photography ensures optimal photographic results by providing standardized views for specific procedures in ophthalmology.
References
1. Parker WL, Czerwinski M, Sinno H, Loizides P, Lee C. Objective interpretation of surgical outcomes: Is there a need for standardizing digital images in the plastic surgery literature? Plast Reconstr Surg 2007;120:1419-23.
2. Verma M, Raman R, Mohan RE. Application of tele-ophthalmology in remote diagnosis and management of adnexal and orbital diseases. Indian J Ophthalmol 2009;57:381-4.
3. Spear M, Hagan K. Photography and plastic surgery: Part 1. Plast Surg Nurs 2008;28:66-70.
4. Niamtu J. Image is everything: Pearls and pitfalls of digital photography and PowerPoint presentations for the cosmetic surgeon. Dermatol Surg 2004;30:81-91.
5. Fogla R, Rao SK. Ophthalmic photography using a digital camera. Indian J Ophthalmol 2003;51:269-72.
6. Yavuzer R, Smirnes S, Jackson IT.
Guidelines for standard photography in plastic surgery. Ann Plast Surg 2001;46:293-300.
7. Persichetti P, Simone P, Langella M, Marangi GF, Carusi C. Digital photography in plastic surgery: How to achieve reasonable standardization outside a photographic studio. Aesthet Plast Surg 2007;31:194-200.
8. Galdino GM, Vogel JE, Vander Kolk CA. Standardizing digital photography: It's not all in the eye of the beholder. Plast Reconstr Surg 2001;108:1334-44.
9. Nayler JR. Clinical photography: A guide for the clinician. J Postgrad Med 2003;49:256-62.
10. Bhangoo P, Maconochie IK, Batrick N, Henry E. Clinicians taking pictures: A survey of current practice in emergency departments and proposed recommendations of best practice. Emerg Med J 2005;22:761-5.
11. Hoey J. Patient consent for publication: An apology. CMAJ 1998;159:503-4.
12. Nylenna M, Riis P. Identification of patients in medical publications: Need for informed consent. BMJ 1991;302:1182.
13. Hood CA, Hope T, Dove P. Videos, photographs, and patient consent. BMJ 1998;316:1009-11.
14. Supe A. Ethical considerations in medical photography. Issues Med Ethics 2003;11:83-4.
Table 2: Useful websites and software
Image viewing sites, also used for organizing, editing, and storing digital images:
1. http://www.acdsee.com
2. http://www.digikam.org
3. http://www.picasaweb.com
4. http://www.snapfish.com
5. http://www.flickr.com
6. http://www.canto.com
Software for editing and/or storing digital images:
Basic photo-editing software:
1. Windows Live Photo Gallery / Windows Photo Gallery (comes with Windows 7/Windows XP) - by Microsoft.
2. Microsoft Office Picture Manager / Microsoft Photo Editor (included with the Microsoft Office suite starting with version 2003) - by Microsoft.
3. iPhoto (comes with iLife, Mac OS) - developed by Apple Inc.
Advanced photo-editing software:
1. Adobe Photoshop CS5 / Lightroom - developed by Adobe Systems Inc.
2. IrfanView - freeware/shareware software developed by Irfan Skiljan.
3.
PhotoImpression - developed by ArcSoft (for Mac OS based systems only).
4. Aperture - developed by Apple Inc.
Cite this article as: Mukherjee B, Nair AG. Principles and practice of external digital photography in ophthalmology. Indian J Ophthalmol 2012;60:119-25. Source of Support: Nil. Conflict of Interest: None declared.

work_5gymuv75rjeohodluqv5h5ssy4 ----

Ljubica Josić (Ed.) (2016) Information Technology and Media 2016 / Informacijska tehnologija i mediji 2016. Zagreb: University Department of Croatian Studies. Reviewed by Marta Takahashi, COMMUNICATION MANAGEMENT REVIEW, 2 (2017) 2, BOOK REVIEW.
Information technology has changed the traditional media in terms of organisation, content and economics, as demonstrated by these interesting Proceedings from the Summer School of Information Technology and Media 2016, held in Zadar from 26 to 31 August 2016, organised by the Department of Tourism and Communication Sciences (University of Zadar), the Department of Information and Communication Sciences (Faculty of Humanities and Social Sciences, University of Zagreb) and the Department of Communication Studies (Croatian Studies, University of Zagreb). The lectures and workshops were headed by university professors from Zadar, Zagreb and Maribor, as well as media experts from several national media companies. Apart from the lectures and workshops, doctoral colloquia were held for information and communication science doctoral students from the Faculty of Humanities and Social Sciences, University of Zagreb. The Foreword of the Proceedings was written by the editor, Assistant Professor Ljubica Josić, PhD.
It describes how the aim of the School was to encourage international cooperation in order to keep properly up to date with the influence of information technologies on social change, especially changes in the media, and to accelerate the exchange of academic and practical knowledge and skills, thereby contributing to the strengthening of media literacy in the digital age. Through lectures and workshops, the School provided insights into various areas and new research on mass communication, convergent media, digitisation, digitisation of heritage, information searching and storing, new mobile journalism, digital photography and media literacy. The Proceedings contain thirteen contributions divided into three sections: Lectures, Workshops and Doctoral Colloquia.
The first section begins with a paper by Assistant Professor Dejan Jontes, PhD, "Television Taste and Changing Patterns of Viewing in Slovenia". Jontes presents data from the empirical research project "Media Consumption, Social Class and Cultural Stratification", carried out with the help of a questionnaire administered to 820 residents of Ljubljana and Maribor. The chapter examines the relationship between social class, education and television taste, and deals with the question of the role of television consumption in the organisation of class distinctions. It also shows how cultural capital operates in the field of popular culture. Jontes uses the Slovenian sample to show that class and education differentiate television preferences significantly, although only in some segments of the television repertoire.
The next paper, by Vesna Kalajžić, PhD, and Ana Vuletić Škrbić, Assistant, "Social Media in Journalism", researches the application of social media in journalism through the example of the experience of journalists in the city of Zagreb.
The objective of the paper is to expand knowledge of the use of social media in journalism: why journalists use social media in their work; whether (and how) social media help journalists in their work; and whether journalists trust social media. Thirty-nine respondents participated in the research, and the results, in general, showed that respondents recognised the importance of the use of social media, as well as the possibilities they offer to obtain feedback, to find interlocutors more easily, to conceive ideas for articles, to check and confirm information, and to use them as primary and additional sources.
"Media Literacy and the Information Age" is a paper by Associate Professor Danijel Labaš, PhD, which presents several different approaches to media literacy in the new digital age. New digital media are often related to questions of security, especially when talking about how they are used by young people. Today, a "better Internet" is often mentioned, but the concept of responsibility for media use has also been introduced. A responsible user can only be someone who is media literate and critical, and who recognises the complexity of digital media, which can be potentially damaging for developing children and youth; on the other hand, they provide broad possibilities for communication, exchange and learning. In the research "Media and Family" by the Department of Communication Studies at Croatian Studies, University of Zagreb, a high 90% of respondents considered that children and parents require continuing education on how media can affect child development. The paper therefore emphasises the need to consider what model would be suitable for media education to achieve media literacy, and presents a possible pedagogical approach to media education.
Assistant Professor Ljiljana Zekanović-Korona, PhD, and Jurica Grzunov, Assistant, in the contribution "Digital Media in Tourism", deal with the role of social and digital media in tourism promotion. Digital media and the Internet have created a world without borders, in which information is available at any moment and from anywhere. In this manner, social and digital media in tourism enable better connectivity between distant places and easier organisation of travel. A form of marketing is developing that enables the tourism sector to create and offer diverse products and services, and to bring them closer to demand with targeted reach to the end consumer. Tourism promotion on social media is especially important because popular tourist destinations use these platforms for marketing purposes. The results of such social media campaigns are a significant increase in the number of guests in the destinations encompassed by the campaign.
The paper "New Media of the Digital Age" by Associate Professor Nada Zgrabljić Rotar, PhD, provides insight, from the communication science perspective, into the digital culture of the new age, which is characterised by the terms interaction, convergence, the virtual and new media. The author emphasises that traditional media are legally regulated institutions in which professional experts, with the aid of technological means, produce symbolic content for a wide audience. The Internet and telecommunications have enabled the convergence of traditional media and the appearance of new media, which have democratised social processes and opened the possibility of unexpected changes in the media communications environment.
The second section, Workshops, opens with a contribution from Katarina Alvir and Šime Vičević, "Technology in Contemporary TV Journalism", which describes the use of computer technology in the Zadar bureau and the foreign desk of the Information Programme of Nova TV, the first commercial television station in Croatia with a national concession. The emphasis is on the computer editorial system iNews, which allows everyone participating in the operations of the Information Programme to perform their tasks, and provides insight into the current state of preparation of a show and control during broadcasting.
This is followed by the contribution of Assistant Professor Domagoj Bebić, PhD, and PhD student Marija Volarecić, "Changes in Journalism: Thoughts on Existing Forms and Techniques". The aim of this workshop was to familiarise students and participants with the new trends in media communication, and to make them aware of the impact and changes that digital marketing has introduced into media reporting. Through the workshop, participants were introduced to the concept of viral journalism as one that encompasses a change in how news is created, as well as the role of users in spreading media content by means of social media platforms.
Full Professor Goran Bubaš, PhD, in the contribution "Motivation for Usage of the Internet and Dependencies Related to the Internet in the Context of the Function of Mass Media", accentuates the functions of mass media and the role of journalists, then provides an overview of the most frequent activities of Internet users, elucidating theories that explain motivation for the use of the Internet (and other media), succinctly covering the area of Internet addiction, and finally explaining the possible negative consequences for the media ecology and the social roles of mass media.
Associate Professor Marjan Družovec, PhD, Assistant Professor Marko Hölbl, PhD, and Full Professor Tatjana Welzer, PhD, in their paper "Digital Photography Processing", explain that photography is a powerful medium because most of the information that reaches our brain is visual, and we grow up expecting that what we see is true. Photography is evidence, identification, a kind of diagram of an event. The opposite facet of photography appears when it is used to manipulate or interpret reality, since in the digital age it is very easy to manipulate a photo. The goal of the authors' text is mainly to understand what happens to the image during the processing steps along the digital route, from its beginning in the digital camera to its end in the computer.
Associate Professor Nives Mikelić Preradović, PhD, in the contribution "Digital Photo Processing for the New Journalistic Practice", states that digital manipulation of photos is today the most popular way of correcting photographic values, thanks to the rapid development of technologies. Furthermore, the availability of user-generated content eases everything, and for this reason it is important for contemporary journalistic practice that journalists learn to assess the integrity of digital content and determine the authenticity of photos. Erroneous reports and counter-reports frequently accompany emergency situations. For this reason, in journalistic practice there is a need to double-check and confirm all information, so that journalists in such situations maintain their status as reliable sources of news and information. Digital photography must be credible, authentic, and protected by copyright, with accompanying licences.
Associate Professor Tena Perišin, PhD, and PhD student Petra Kovačević divided the paper "Mobile Journalism – Challenges and Opportunities" into two parts.
In the first, they discuss research on the application of mobile technology in journalism, and in the second, they summarise the content of the mobile journalism workshop conducted as part of the Summer School at the University of Zadar. The text can serve as a handbook and introductory lecture preceding the training of mobile journalists, covering familiarisation with basic tools and the possibilities of mobile telephones, as well as examples of published stories. The paper aims to explain more clearly the concept of "mobile journalism", and to clearly differentiate "content production" from the creation of news stories that attempt to satisfy professional criteria.
In the contribution "From Radio to Multimedia Editorial Board: Example of Voice of Croatia", Tomislav Šikić describes how this multimedia international programme of Croatian Radiotelevision functions amid the challenges of the digital age. The Voice of Croatia began airing in May 2003, deriving from a one-hour show aimed at emigrants that was broadcast in 1991. The Voice of Croatia produces more than 2 hours of its own programming in Croatian, as well as about 70 minutes of informative programming in English, Spanish and German.
In the third section, dedicated to the doctoral colloquia, Associate Professor Sonja Špiranec, PhD, and Full Professor Jadranka Lasić-Lazić, PhD, describe the contents of and experiences from the colloquia at the Summer School, at which some thirty participants of the post-graduate study programme of the Department of Information and Communication Sciences of the Faculty of Humanities and Social Sciences, University of Zagreb, presented their research and, through a series of offered topics, upgraded their academic knowledge and skills, on which they could give their opinions in a survey questionnaire.
The proceedings represent a unique publication containing the most recent findings and analyses in the communication industry, contributing to the further development of the communication sciences.

INFORMATION TECHNOLOGY AND MEDIA 2016
LJUBICA JOSIĆ

----

The sedimentation of colloidal nanoparticles in solution and its study using quantitative digital photography

HAL Id: hal-01637907
https://hal.archives-ouvertes.fr/hal-01637907
Submitted on 18 Nov 2017

To cite this version:
Johanna Midelet, Afaf El-Sagheer, Tom Brown, Antonios Kanaras, Martinus H. V. Werts. The sedimentation of colloidal nanoparticles in solution and its study using quantitative digital photography. Particle and Particle Systems Characterization, Wiley-VCH Verlag, 2017, 34 (10), pp.1700095. DOI: 10.1002/ppsc.201700095.

Johanna Midelet1, Afaf H. El-Sagheer2, Tom Brown2, Antonios G. Kanaras1, and Martinus H. V. Werts*3

1University of Southampton, Physics and Astronomy, Faculty of Physical Sciences and Engineering, Southampton SO17 1BJ, U.K.
2University of Oxford, Department of Chemistry, 12 Mansfield Road, Oxford, OX1 3TA, U.K.
3Ecole normale supérieure de Rennes, CNRS, lab. SATIE, Campus de Ker Lann, F-35170 Bruz, France

Author version, accepted manuscript, published as Part. Part. Syst. Charact. 2017, 1700095. DOI: 10.1002/ppsc.201700095

Abstract

Sedimentation and diffusion are important aspects of the behaviour of colloidal nanoparticles in solution, and merit attention during the synthesis, characterisation and application of nanoparticles. Here we study the sedimentation of nanoparticles quantitatively using digital photography and a simple model based on the Mason-Weaver equation. Good agreement between experimental time-lapse photography and numerical solutions of the model was found for a series of gold nanoparticles. The new method was extended to study for the first time the gravitational sedimentation of DNA-linked gold nanoparticle dimers as a model system of higher structural complexity. Additionally, we derive simple formulas for estimating suitable parameters for the preparative centrifugation of nanoparticle solutions.

*martinus.werts@ens-rennes.fr

1 Introduction

Observations on the sedimentation of colloidal solutions were historically important in establishing the physical reality of molecules and in providing a molecular basis for thermodynamics in the form of statistical mechanics.[1–3] Nowadays, there is intense interest in the development of colloidal solutions of engineered nanocrystals for a variety of applications. These solutions should display sedimentation behaviour in line with expectations for these particles on the basis of their shape, composition and suspending medium.

For stable and dilute solutions of sufficiently large particles, the sedimentation behaviour (Figure 1) is a result of the particular interplay of nanoparticle hydrodynamics, and gravitational and Brownian forces.
Under these conditions, the surface chemistry of the nanoparticles, while ensuring colloidal stability, does not significantly influence the sedimentation behaviour. For smaller particles (< 10 nm), and in particular ‘soft’ particles such as polymers and proteins, solvation effects on sedimentation may become significant, as the shell of solvent molecules around the object influences its hydrodynamic behaviour.[4–6] Furthermore, the presence of surfactants interacting with colloidal particles may change sedimentation behaviour.[7, 8] These effects are not considered in the present work.

A general understanding of the sedimentation of nanoparticle solutions is useful, as it may give rapid and visual clues about the size distribution and colloidal stability of newly synthesised colloids. These clues go beyond the simple assessment of whether a prepared solution is colloidally stable. Careful monitoring of sedimentation behaviour may be used to verify the nature of the suspended object in its native medium, and is complementary to methods that characterise a limited number of specimens deposited on a substrate, such as electron microscopy. Furthermore, sedimentation, decantation and controlled centrifugation are often purification steps in the wet-chemical synthesis of nanoparticles and nanoparticle assemblies, for instance in the purification of gold nanorods.[9] A further example is the purification of nanoparticles by flocculation and subsequent sedimentation. Flocculation under the influence of depletion forces due to specific flocculants produces large aggregates of these nanoparticles that settle rapidly, leaving non-flocculated impurities in the supernatant.[10–12]
When analysing the interaction of nanoparticles with biological systems in therapeutic and diagnostic applications, their transport properties, such as diffusion and sedimentation, need to be taken into account. The importance of sedimentation to in vitro studies of cellular uptake of nanoparticles has been demonstrated.[13]

Figure 1 – Sedimentation versus Brownian diffusion in colloidal solutions. (a) Gravity tends to direct dense particles to the bottom of the cell, whereas the random Brownian forces tend to disperse the particles throughout the entire volume. (b) This leads to the establishment of an equilibrium gradient starting from an initial homogeneous distribution of particles. Here we trace the theoretical initial, transient and equilibrium concentration profiles as a function of vertical position. (c) Experimental observation of sedimentation over time of colloidal gold (20, 40, 60 nm) in water using quantitative digital photography. The cell at the left of each picture contains water only. Photos were taken at the start, and after 7, 14 and 35 days, respectively. The rightmost picture represents the cells at equilibrium: a photograph of the Boltzmann distribution.

There is a clear connection between the gravitational sedimentation of colloidal particles and analytical centrifugation techniques, where sedimentation is sped up by increasing g to multiples of the earth's gravitational pull.
Analytical ultracentrifugation (AUC), which is well known for its applications in biochemistry, has been successfully applied to the study of nanoparticle solutions.[14–17] Optimisation of the measurement using a spectroscopic approach, in combination with detailed numerical analysis of the multi-wavelength sedimentation profiles, has made AUC extremely versatile for the analysis of nanoparticle preparations.[18] It has furthermore been demonstrated that information on the shapes of DNA-based nanoparticle assemblies can also be obtained from AUC sedimentation analysis combined with hydrodynamic modeling.[19]

The rotational speed and centrifugal acceleration in analytical ultracentrifuges are extremely high, and adapted to proteins and small bioparticles. As will be further pointed out later on, many assemblies of inorganic nanoparticles, which have higher density and size, are better analysed at more modest speeds. A relevant recent development in this respect is analytical centrifugation (AC) with space- and time-resolved extinction profiles (STEP),[20, 21] which operates at significantly lower speeds than AUC. Its application to nanoparticle solution analysis is currently emerging.[22, 23] Another analytical centrifuge technique is differential centrifugal sedimentation (DCS), which is based on the sedimentation of injected samples in a spinning disk containing a liquid. DCS successfully detects small changes in particle sedimentation behaviour as a result of their ligand capping,[24] or particle size variations due to varying synthesis conditions.[25]

For these reasons, it is of fundamental interest and instructive to study the sedimentation behaviour of nanoparticle solutions in greater detail. Recently, Alexander et al. reported a study of the gravitational sedimentation of gold nanoparticles.[26]
They focused on measuring the optical density at a pre-defined height in the solution as a function of time using a UV-visible absorption spectrophotometer. This study was then further extended to estimate the size distribution of colloidal gold samples by including a multi-wavelength analysis.[27] Sedimentation has also been instrumental for in situ attenuated total internal reflection infrared spectroscopy, by concentrating the particles or their aggregates onto the substrate.[28] Magnetic-field enhanced sedimentation of superparamagnetic iron oxide particles has also been investigated.[29]

Here we take advantage of quantitative digital photography[30–32] for experimentally analysing the time-evolution of the concentration gradient of sedimenting nanoparticle solutions. The technique is simple and easily implemented. When properly set up, time-lapse imaging of the entire nanoparticle concentration gradient is possible (Figure 1c). In particular, with nanoparticles of high overall density (such as those having a gold core), sedimentation may be readily studied without a centrifuge, using only the gravitational field.

The aim of the present work was to monitor the nanoparticle sedimentation process and the establishment of a concentration gradient using quantitative digital photography, and to model this with a simple mathematical diffusion-sedimentation equation. We also discuss the practical implications of the theoretical model for the analysis and purification of nanoparticle assemblies. Furthermore, we obtain a photograph of the Boltzmann distribution (rightmost picture of Figure 1c). This work utilises well-known concepts from colloid chemistry, with an emphasis on their practical application in the purification and characterization of colloidal nanoparticle assemblies.
2 Sedimentation vs diffusion: the Mason-Weaver equation

A simple model for sedimentation can be formulated under the assumptions that there are no interactions between nanoparticles and that their motion is governed only by random Brownian forces and a directional gravitational force. The relevant parameters of the system depend only on the effective density and overall shape of the particles, and on the density, viscosity and temperature of the suspending liquid. The surface chemistry of the particles has no influence, nor does the exact composition of the suspending medium. However, these chemical parameters may influence the model indirectly, e.g. by changing the density or viscosity of the medium, or by altering particle shape and effective density. We work under conditions where liquid density and viscosity are not significantly altered by the dissolved ingredients, and at low densities of colloid (mass fraction ≪ 1%).

The establishment of a concentration gradient in such a dilute colloidal solution of independent, non-interacting particles in a homogeneous gravitational field is described by the Mason-Weaver equation,[33] a one-dimensional partial differential equation. This equation is the direct predecessor of the more well-known Lamm equation,[34, 35] which applies to a centrifugal field and is extensively used in the analysis of ultracentrifuge data.[36]

The height position in the cell is given by z. The concentration of settling particles c satisfies the Mason-Weaver equation (1), with boundary conditions (2).

\[ \frac{\partial c}{\partial t} = D \frac{\partial^2 c}{\partial z^2} + sg \frac{\partial c}{\partial z} \tag{1} \]

\[ D \frac{\partial c}{\partial z} + sgc = 0 \quad (z = z_{\mathrm{max}},\; z = 0) \tag{2} \]

D is the diffusion coefficient, s the sedimentation coefficient, g the gravitational acceleration, and z = z_max and z = 0 are the top and bottom of the cell, respectively. In this work, equations (1) and (2) are solved numerically using a Crank-Nicolson finite-difference method, which is described in the Supporting Information, Section S.2.
The numerical solver accepts an arbitrary concentration profile as the initial condition. The initial condition at t = 0 that we are interested in is c = c_0 for 0 ≤ z < z_max and c = 0 elsewhere. That is, initially the particles are homogeneously distributed within the cell.

The term sg is the terminal velocity of a particle accelerated by the gravitational force in a viscous liquid (v_term = sg), and represents the velocity (directed towards the bottom of the cell) this particle would have in the absence of Brownian motion. It is written in terms of the sedimentation coefficient s. The sedimentation coefficient s for a specific type of particle in a particular viscous liquid is

\[ s = \frac{m_b}{f} \tag{3} \]

This involves the buoyant mass m_b (i.e. the mass of the particle minus the mass of the liquid displaced by the particle) and the frictional coefficient f. The frictional coefficient depends on the particle's size and shape and on the viscosity of the solvent (more on this below). The frictional coefficient f is also of importance for the diffusion coefficient D.

\[ D = \frac{k_B T}{f} \tag{4} \]

For spherical particles we have the well-known results from Stokes' law and the Einstein-Smoluchowski-Sutherland theory, given by equations (5) and (6).

\[ f_{\mathrm{sphere}} = 6\pi\eta a \qquad m_{b,\mathrm{sphere}} = \frac{4}{3}\pi a^3 (\rho - \rho_{\mathrm{fl}}) \tag{5} \]

where a is the particle radius, η the viscosity of the liquid, and ρ and ρ_fl the mass density of the particle and the suspending fluid, respectively. An underlying assumption of these relations is the non-slip condition at the sphere-liquid interface.

\[ s_{\mathrm{sphere}} = \frac{m_b}{6\pi\eta a} = \frac{2}{9}\frac{a^2(\rho - \rho_{\mathrm{fl}})}{\eta} \qquad D_{\mathrm{sphere}} = \frac{k_B T}{6\pi\eta a} \tag{6} \]

Table 1 contains diffusion and sedimentation coefficients for gold nanospheres calculated using Eqns. (6) for selected diameters in water at 277 K, which is the temperature used in the experiments (see Experimental Details). Data for these particles at 298 K can be found in the Supporting Information, Tables S2, S3, S4 and S5.
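Equations (5) and (6) are straightforward to evaluate. The following sketch, with assumed material constants (η = 1.56 mPa s for water at 277 K, ρ_Au = 19300 kg m⁻³, ρ_fl = 1000 kg m⁻³), reproduces the kind of values tabulated in Table 1:

```python
import math

# Assumed physical constants and conditions (water at 277 K, as in the paper)
KB = 1.380649e-23      # Boltzmann constant, J/K
T = 277.0              # temperature, K
ETA = 1.56e-3          # viscosity of water at 277 K, Pa s
RHO_AU = 19300.0       # density of gold, kg/m^3
RHO_FL = 1000.0        # density of water, kg/m^3

def sphere_coefficients(diameter_nm, rho=RHO_AU, rho_fl=RHO_FL, eta=ETA, temp=T):
    """Stokes / Einstein-Smoluchowski-Sutherland D and s for a solid sphere,
    Eqns. (5) and (6)."""
    a = 0.5 * diameter_nm * 1e-9                        # radius, m
    f = 6.0 * math.pi * eta * a                         # frictional coefficient
    mb = (4.0 / 3.0) * math.pi * a**3 * (rho - rho_fl)  # buoyant mass, kg
    s = mb / f                                          # sedimentation coeff., s
    D = KB * temp / f                                   # diffusion coeff., m^2/s
    return D, s, mb

D40, s40, mb40 = sphere_coefficients(40.0)
print(f"40 nm Au sphere: D = {D40*1e12:.2f} um^2/s, s = {s40*1e9:.2f} x 1e-9 s")
```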
The table collects further characteristic values related to sedimentation, which will be discussed below.

At longer times, the solution to the Mason-Weaver equation converges to the concentration distribution at equilibrium, eqn. (7).

\[ c(z) = B c_0 \exp\!\left(\frac{-z}{z_0}\right) \tag{7} \]

The characteristic height of the equilibrium gradient, z_0, is

\[ z_0 = \frac{D}{sg} \tag{8} \]

Interestingly, by combining (3), (4) and (8), we find that

\[ z_0 = \frac{k_B T}{m_b g} \tag{9} \]

The same result for the equilibrium gradient can be obtained using a statistical mechanical approach, i.e. calculating the Boltzmann distribution for dense suspended particles in a gravitational field (E_particle = m_b g z), illustrating that the digital image of the gold nanoparticle solution at equilibrium is a photograph of a Boltzmann distribution. The concentration profile at equilibrium does not depend on the frictional coefficient nor on the viscosity of the medium, but exclusively on the buoyant mass of the particles. For a spherical particle this translates into

\[ z_{0,\mathrm{sphere}} = \frac{3 k_B T}{4\pi a^3 (\rho - \rho_{\mathrm{fl}}) g} \tag{10} \]

Some typical values for the characteristic height z_0 for gold nanospheres in water are included in Table 1.

Table 1 – Calculated sedimentation parameters and characteristic values for gold nanospheres in water at 277 K (4 °C, η = 1.56 mPa s): diffusion (D) and sedimentation (s) coefficients, characteristic height of the concentration gradient at equilibrium (z_0), approximate time to equilibrium from the completely dispersed state (t_sed^1cm), and concentration factor (B^1cm). The two latter values refer to a liquid column height of 1 cm. Further data at 298 K can be found in the Supporting Information.

diam. (nm)   D (µm² s⁻¹)   s (10⁻⁹ s)   z_0 (mm)   t_sed^1cm (hours)   B^1cm
13           19.9          0.110        18.5       2579                1.29
20           13.0          0.260        5.08       1090                2.29
40           6.48          1.04         0.636      272                 15.7
50           5.18          1.62         0.325      174                 30.7
60           4.32          2.34         0.188      121                 53.1
80           3.24          4.16         0.0794     68.1                126
100          2.59          6.50         0.0407     43.5                246
150          1.73          14.6         0.0121     19.4                830
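The strong size dependence of z_0 in Eqn. (10) is easy to check numerically; a minimal sketch, assuming gold spheres (19300 kg m⁻³) in water (1000 kg m⁻³) at 277 K:

```python
import math

KB = 1.380649e-23   # Boltzmann constant, J/K
G = 9.81            # gravitational acceleration, m/s^2

def z0_sphere(diameter_nm, rho=19300.0, rho_fl=1000.0, temp=277.0):
    """Characteristic height of the equilibrium gradient, Eqn. (10),
    via z0 = kB*T / (mb*g), Eqn. (9)."""
    a = 0.5 * diameter_nm * 1e-9
    mb = (4.0 / 3.0) * math.pi * a**3 * (rho - rho_fl)  # buoyant mass, kg
    return KB * temp / (mb * G)

for d in (13, 20, 40, 60):
    print(f"{d:3d} nm: z0 = {z0_sphere(d)*1e3:.3g} mm")
```

The a³ dependence is what compresses z_0 from centimetres to fractions of a millimetre over a modest range of diameters.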
For diameters from 13 to 60 nm, the characteristic height falls in the 20 to 0.2 mm range, where the gradient is readily observed with the naked eye.

Going from the initial dispersed solution to the equilibrium gradient leads to an increase in the concentration of the particles near the bottom of the cell. In Eqn. (7) this is expressed in the constant B, which represents the factor by which the particles are concentrated at the bottom. It is obtained by applying mass conservation to Eqn. (7). With the initial condition being a uniform concentration c_0 throughout the cell (0 ≤ z < z_max), we obtain eqn. (11). Typical values for gold nanospheres have been added to Table 1.

\[ B = \frac{z_{\mathrm{max}}}{z_0 \left[1 - \exp(-z_{\mathrm{max}}/z_0)\right]} \tag{11} \]

We found that the time to reach sedimentation equilibrium from the initial homogeneously dispersed state can be roughly estimated from the time t_sed necessary for a hypothetical particle to traverse the cell entirely from top to bottom (distance z_max) at its sedimentation velocity sg.

\[ t_{\mathrm{sed}} = \frac{z_{\mathrm{max}}}{sg} \tag{12} \]

Comparison of the numerical solutions for various ratios of z_max to sg with the corresponding equilibrium distributions given by Eqn. (7) showed that the concentration gradient at t_equil ≈ 1.4 t_sed is in all cases within less than one percent of the final equilibrium distribution (see Figure S3). This is a useful result which defines the necessary time window for sedimentation experiments and simulations. Weaver[37] found analytically that the upper limit for t_equil is 2 t_sed. A more sophisticated analysis of the time to reach equilibrium is known in the literature.[38] In the Supporting Information we demonstrate that this analysis validates t_equil ≈ 1.4 t_sed as a rough estimate (which is already a 30 percent time reduction with respect to the Weaver limit), but in many cases the time needed to reach equilibrium is actually shorter still, and for the optimisation of experimental procedures the more elaborate formulation can be helpful.
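Equations (11) and (12), together with the t_equil ≈ 1.4 t_sed rule, combine into a quick back-of-the-envelope estimate; a sketch using the Table 1 coefficients for 40 nm gold spheres in a 1 cm column (assumed inputs, for illustration):

```python
import math

def settling_times_and_B(s, zmax, D, g=9.81):
    """Rough time to equilibrium (Eqn. 12 with the 1.4 factor) and
    bottom concentration factor B (Eqn. 11)."""
    t_sed = zmax / (s * g)        # full-column traversal at terminal velocity
    t_equil = 1.4 * t_sed         # within ~1% of the equilibrium gradient
    z0 = D / (s * g)              # characteristic height, Eqn. (8)
    B = (zmax / z0) / (1.0 - math.exp(-zmax / z0))
    return t_sed, t_equil, B

# 40 nm gold sphere in water at 277 K (Table 1 coefficients), 1 cm column
t_sed, t_eq, B = settling_times_and_B(s=1.04e-9, zmax=0.01, D=6.48e-12)
print(f"t_sed = {t_sed/3600:.0f} h, t_equil = {t_eq/3600:.0f} h, B = {B:.1f}")
```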
Moreover, the numerical solver described in the present work enables detailed simulation of the evolution of the concentration profile, and may be even more useful for optimising the sedimentation and detection strategy. In 1 cm high liquid columns, the time needed to reach equilibrium varies from hours to weeks for typical gold nanospheres (Table 1). In practice, with the data analysis demonstrated in this paper, it will not be necessary to run experiments to complete equilibrium, as good estimates for the diffusion and sedimentation coefficients can be extracted using the initial progression of the concentration gradient.

3 Sedimentation of simple gold nanospheres

Experimental concentration profiles of several settling solutions of gold nanoparticles were obtained from digital photographs taken at different moments. A typical example, using 40 nm gold nanospheres in water, is shown in Figure 2. The photographs for this series (of which 4 are shown in Figure 1) were taken over a 39 day period, and quantitative vertical optical density profiles were obtained using the method detailed in the Experimental section. In this work, we use only the green colour channel of the images, since this produces the strongest optical response for the red-coloured gold nanoparticles.

In the same figure we also show the solution of the Mason-Weaver equation c(z,t) at the same times t. The diffusion and sedimentation coefficients were adjusted independently to obtain the best agreement with the experimental observations. The values obtained, D = 5.1 × 10⁻¹² m² s⁻¹ and s = 7.9 × 10⁻¹⁰ s, agree within 20% with those expected from the Einstein-Smoluchowski-Sutherland and the Stokes relations for perfect 40 nm diameter gold spheres in water at 277 K (Table 1). Diffusion and sedimentation coefficients were obtained for the entire series of gold nanosphere diameters by fitting the Mason-Weaver solutions to the optical density profiles evolving in time.
These coefficients agree reasonably well (within 20%) with the predicted values (Figure 3). In addition to the predictions based on the gold core only, we also calculated the expected diffusion coefficients for the gold core plus an extra 1 nm of ligand shell (ρ = 900 kg m⁻³, dotted curve in Fig. 3). This second theoretical curve demonstrates that the present simple method cannot distinguish between small differences in overall hydrodynamic radius and density.

These experiments on simple gold nanosphere solutions demonstrate the feasibility of quantitative sedimentation measurements using digital photography. The solutions behave according to the Mason-Weaver model. It is also illustrated that equilibrium gradients develop for gold nanospheres in the selected diameter range upon standing undisturbed for prolonged times. These are observable to the naked eye.

The simplicity of the method and its implementation obviously leads to a variety of sources of experimental error. The accumulation of these errors finally leads to an overall error in the diffusion and sedimentation coefficients that is estimated above to be around 20%, based on the comparison of the experimental results with the idealized theoretical curves (Figure 3).

Figure 2 – Evolution of the vertical particle density gradient in an aqueous solution of 40 nm diameter gold nanospheres towards sedimentation equilibrium, observed using quantitative digital photography at 277 K. Photos were taken at t = 0, 1 h, 21 h, 27 h, 44 h, 52 h, 7 d, 9 d, 11 d, 14 d, 21 d, 23 d, 28 d, 35 d, and 39 days. Top: line-averaged intensities in the green colour channel, corrected for differences in illumination intensity between different images. Middle: experimental optical density profiles obtained from the intensity profiles. The arrow indicates the direction of time.
Bottom: theoretical optical density profiles obtained numerically as the solution to the Mason-Weaver equation, with D = 5.1 × 10⁻¹² m² s⁻¹ and s = 7.9 × 10⁻¹⁰ s.

The back-illumination of the observation cells should be homogeneous, which is not entirely the case in our set-up. The small illumination inhomogeneities are corrected for by the baseline correction we apply. Small deviations in camera positioning from photograph to photograph lead to changes in optical path lengths across the spectroscopic cells. There are also fluctuations in illumination intensity; these are small and are averaged out when quantitatively analyzing the time-lapse series using the Mason-Weaver model, but lead to additional noise. Stray light is present in the optical system, i.e. the dark zones of the image are not entirely devoid of light, which limits the maximum optical density that can be reliably measured. A specific optical design with dedicated illumination and detection optics may improve the analysis of the concentration gradient.

The long duration of the sedimentation in some samples calls for efficient sealing of the containers, since solvent evaporation reduces the liquid column height, introducing further uncertainty into the analysis. Also, the meniscus at the top of the liquid column and the optical artefacts (dark zone) related to it limit the precision of the measurement of the gradient.

At present, the numerical analysis assumes an ideally monodisperse distribution of nanoparticles, all being of the same shape and volume. The diffusion and sedimentation coefficients obtained are effective, average coefficients.
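The green-channel optical density extraction described above can be sketched in a few lines. This illustrative helper (not the authors' exact processing chain) assumes images already cropped to the cell interior and a blank reference cell containing only water:

```python
import numpy as np

def od_profile(img_rgb, img_blank_rgb):
    """Vertical optical-density profile from the green colour channel,
    OD(z) = -log10(I_sample / I_blank), line-averaged across the cell width.
    img_rgb / img_blank_rgb: H x W x 3 arrays cropped to the cell interior."""
    green = img_rgb[..., 1].astype(float).mean(axis=1)        # row averages
    green_blank = img_blank_rgb[..., 1].astype(float).mean(axis=1)
    # clip avoids log of zero in fully dark (stray-light limited) rows
    return -np.log10(np.clip(green / green_blank, 1e-6, None))

# synthetic demo: a uniform blank and a sample at half transmission
blank = np.full((100, 20, 3), 200.0)
sample = blank.copy()
sample[..., 1] *= 0.5               # green channel attenuated by the colloid
od = od_profile(sample, blank)
print(f"OD = {od[0]:.3f}")          # -log10(0.5) = 0.301
```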
We have not attempted to analyse the data in terms of continuous size distributions, in which case an additional weighting should be applied for the strong dependence of the optical extinction coefficient on the diameter of gold nanoparticles.[39]

There is no fundamental impediment to analyzing the time-lapse optical extinction profile data sets using models based on continuous polydisperse size distributions, or even multimodal distributions, similarly to what is currently already done in analytical ultracentrifugation.[15, 20, 36] The numerical solver presented here may be used to generate time-resolved concentration profiles for individual nanoparticle diameters in a trial distribution. After weighting with the effective extinction coefficients, these can be combined into the expected overall optical extinction profiles, which can then be optimised to fit the experimental data. In this respect, multimodal size distributions may be challenging if the different populations display insufficient contrast in diffusion and sedimentation coefficients.

Figure 3 – Diffusion (D) and sedimentation (s) coefficients obtained by analysing the evolving experimental density gradient of settling gold nanosphere solutions (square markers). The solid curves are the expected values for perfect gold spheres from the Stokes-Einstein-Sutherland equation (top) and Stokes' law (bottom). The dotted curves are for gold spheres with a hypothetical 1 nm thick organic layer.

4 Application to nanoparticle assemblies

After the initial demonstration of the quantitative analysis of time-lapse photography of settling spherical gold nanoparticles in water, we investigated a sample of purified DNA-linked dimers of 13 nm diameter gold nanospheres (Figure 4). This experiment illustrates that studying sedimentation can aid in the chemistry and characterization of biomolecularly-scaffolded multi-nanoparticle assemblies.
Figure 4 – Top left: transmission electron micrograph of the purified DNA-linked dimer sample. Top right: composed image of the time-lapse photography of sedimenting DNA-linked gold nanosphere dimers (at 277 K). The rightmost cell is a sample of monomeric 13 nm gold nanospheres. Bottom: optical density traces as a function of vertical position in the cell, taken at various points in time (red solid lines); the black dotted curves are the solution to the Mason-Weaver equation with D = 5.8 × 10⁻¹² m² s⁻¹ and s = 9.0 × 10⁻¹¹ s.

By adjusting the curves to the experimental data, we obtain D = 5.8 × 10⁻¹² m² s⁻¹ and s = 9.0 × 10⁻¹¹ s for the diffusion and sedimentation coefficients, respectively. The sedimentation coefficient of the DNA-linked dimers is slightly smaller than that of a bare 13 nm monomer sphere, i.e. the dimer sediments even more slowly. The extra mass from the second gold sphere is counterbalanced by more friction with the solvent due to the larger outer surface area of the dimer. The larger volume comes to a large extent from the DNA, which has a much lower density than gold.

Additionally, the diffusion coefficient is lower than that of a 13 nm monomer, as a result of the larger hydrodynamic volume of the object. For the characteristic height (Eqn. (9)) we find z_0 = 6.6 mm. This value is close to what we would expect for an object which simply has double the buoyant mass of a 13 nm monomer, since gold has a much higher density than water, whereas the DNA linker has a density very close to that of water. Any extra volume taken up by the DNA linkage does not significantly contribute to the buoyant mass of the object, since the displaced water volume is replaced with a substance having a density close to that of water. The buoyant mass of the dimer is therefore determined by the two gold cores. On the other hand, the bulky DNA structure should indeed have an effect on the sedimentation and diffusion constants, i.e.
sedimentation will be greatly slowed down compared to an equivalent sphere of the same buoyant mass. The lower sedimentation coefficient leads to slower establishment of the gradient. However, in combination with the lower diffusion coefficient, it finally leads to a more pronounced, shorter density gradient. A further analysis of the hydrodynamic behaviour and the resulting combination of D and s of this type of assembly is not within the scope of this work, but has received recent attention in the literature.[19]

It is also interesting to note that the estimated time to obtain the final equilibrium gradient (at 277 K, with a 2 cm liquid height) is on the order of 200 days for both the monomers and the DNA-linked dimers. For analysing the transient concentration profile it is not necessary to wait that long, which demonstrates the interest of having the numerical solution to the Mason-Weaver equation at hand. Nonetheless, 30 days is still a long time, and such samples are better analysed with centrifuge-based techniques.[14, 16, 19, 24, 25, 40] In this context, it is interesting to note that (as we will see in the following) the required centrifugal acceleration for many inorganic-core nanoparticles is well within the range of standard laboratory centrifuges, rather than requiring higher-speed specialised instruments.

5 Centrifugation of nanoparticle solutions

The results from the Mason-Weaver model may be used to generate initial approximate estimates for preparative and analytical centrifugation. For many nano-assemblies using high-density inorganic core materials, the centrifugal acceleration necessary for rapid sedimentation is well within the capabilities of standard table-top laboratory centrifuges. A centrifugal acceleration that is much higher than strictly necessary may have deleterious consequences for colloidal stability, as the density of nanoparticles in the pellet may become very high, accelerating aggregation.
This is one reason why even a rough estimate is of interest. Furthermore, we anticipate that preparative purification protocols may be developed for nanoparticle assemblies by using approximate numerical simulations, using the computer code supplied with the present work.

As an example, we consider the minimal centrifugal acceleration g_cfg needed to obtain sedimentation equilibrium for gold colloid solutions in a centrifuge within a given time t_cfg, i.e. we wish to establish the centrifugal conditions for a typical nanoparticle ‘centrifugation/re-dispersal’ washing procedure. In Section 2 of this paper, we found that the time to reach equilibrium is 1.4 t_sed, and using Eqn. (12) we can roughly estimate the centrifugal acceleration g_cfg necessary for the chosen centrifugation time t_cfg and a liquid height z_tube in the centrifuge tube.

\[ g_{\mathrm{cfg}} = \frac{1.4 \times z_{\mathrm{tube}}}{s \times t_{\mathrm{cfg}}} \tag{13} \]

This can be expressed as a ‘relative centrifugal force’ (‘times g’, RCF = g_cfg/g, where g = 9.81 m s⁻²).

\[ \mathrm{RCF} = \frac{1.4 \times z_{\mathrm{tube}}}{g \times s \times t_{\mathrm{cfg}}} \tag{14} \]

As discussed above and detailed in the Supporting Information, the factor 1.4 represents a conservative rough estimate of the time to reach equilibrium. This factor may become smaller as a function of the particle's sedimentation and diffusion coefficients and the container height, using the formula by Van Holde and Baldwin,[38] or using results from numerical simulation. However, for the sake of simplicity, we will use 1.4 for the following calculations, as this is a robust albeit conservative choice.

Table 2 contains the calculated RCF values needed for complete centrifugation of gold colloids in typical Eppendorf-type vials (liquid height z_tube = 3 cm) within t_cfg = 30 min. These values were then used in an experiment in which a selection of colloidal gold solutions were spun for 30 min at the calculated RCF.
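Eqn. (14) is a one-line calculation. The sketch below assumes room-temperature water (η = 0.89 mPa s; the exact conditions behind Table 2 are not restated in the text, so this is an assumption), and lands in the vicinity of the tabulated 789 × g for 50 nm spheres:

```python
def rcf_for_complete_sedimentation(s, ztube, t_cfg, g=9.81):
    """Eqn. (14): relative centrifugal force needed to reach sedimentation
    equilibrium within t_cfg for a liquid height ztube (conservative factor 1.4)."""
    return 1.4 * ztube / (g * s * t_cfg)

# 50 nm gold spheres; s from Stokes' law, assuming water at 298 K (eta = 0.89 mPa s)
a, rho, rho_fl, eta = 25e-9, 19300.0, 1000.0, 0.89e-3
s50 = (2.0 / 9.0) * a**2 * (rho - rho_fl) / eta
rcf = rcf_for_complete_sedimentation(s50, ztube=0.03, t_cfg=30 * 60)
print(f"s = {s50*1e9:.2f} x 1e-9 s, RCF = {rcf:.0f} x g")
```

The residual difference with respect to the tabulated value mainly reflects the exact viscosity and density values assumed here.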
The samples were subjected to a typical centrifugation/re-dispersal cycle, in which 95 vol% of the supernatant was removed, followed by redispersal of the (relatively fluid, but highly concentrated) pellet in fresh water. In most cases the recommended centrifugal acceleration gave remarkably good results (Supporting Information, Figure S7). Measurement of the UV-visible extinction spectra of the resuspended colloids confirms the visual impression that the calculated RCF values are indeed sufficient, and that higher speeds for centrifugation are not necessary. In the case of 20 nm diameter gold spheres, centrifugation at a slightly higher acceleration (∼ 20%) was required to concentrate all particles near the bottom of the tube (Supporting Information, Figure S7).

Table 2 – Calculated RCF values for complete centrifugation, within 30 min, of spherical gold nanoparticles in water, using a liquid height of 3 cm, and experimentally determined nanoparticle recovery in a typical centrifugation/re-dispersal washing cycle (n.t. = not tested).

diam. (nm)   calcd. RCF   exp. % recovery
13           11671        n.t.
20           4931         94 (a)
40           1233         n.t.
50           789          98
60           548          n.t.
80           308          97
150          88           97

(a) recovery increases to 97% by spinning at 1.2 times the calculated RCF

Centrifugation at a given RCF corresponds to a sedimentation gradient length scale z_0^cfg and a corresponding concentration factor B^cfg. The gradient length should be small enough that the nanoparticles are concentrated well in the pellet. The concentration factor gives an estimate of the concentration increase at the bottom of the tube. For spherical particles, the coefficients s and D for Eqn. (15) are obtained from Eqn. (6).

\[ z_0^{\mathrm{cfg}} = \frac{D}{s \times g \times \mathrm{RCF}} \qquad B^{\mathrm{cfg}} = \frac{z_{\mathrm{tube}}}{z_0^{\mathrm{cfg}} \left[1 - \exp(-z_{\mathrm{tube}}/z_0^{\mathrm{cfg}})\right]} \tag{15} \]

For the conditions in Table 2, z_0^cfg decreases from 2.1 µm (for 13 nm diameter gold spheres) to well below 1 µm (larger particles), which indicates that the particles are well concentrated near the bottom of the tube.
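Eqn. (15) can be evaluated directly. A sketch using the 277 K coefficients from Table 1 for 13 nm spheres (the text quotes ≈ 2.1 µm, so the exact result depends on the temperature and viscosity assumed; these inputs are an assumption, not the authors' exact ones):

```python
import math

def pellet_gradient(D, s, rcf, ztube, g=9.81):
    """Eqn. (15): gradient length z0_cfg and concentration factor B_cfg
    for centrifugation at a given RCF."""
    z0 = D / (s * g * rcf)
    B = (ztube / z0) / (1.0 - math.exp(-ztube / z0))
    return z0, B

# 13 nm gold spheres spun at the RCF from Table 2, 3 cm liquid height
z0, B = pellet_gradient(D=19.9e-12, s=0.110e-9, rcf=11671, ztube=0.03)
print(f"z0_cfg = {z0*1e6:.2f} um, B = {B:.0f}")
```

A micrometre-scale z_0^cfg together with a concentration factor of order 10⁴ is exactly the regime in which a compact, re-dispersible pellet forms.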
Prolonged centrifugation may concentrate smaller particles of less dense materials. If we take for instance particles of 8 nm diameter with average density 4.5×103 kg m−3 in water (a rough estimate for typical[41] CdSe/ZnS quantum dot particles with a small-molecule capping), then it may be anticipated that these can be concentrated in a liquid pellet less than 300 µm high (z_0^cfg ∼ 31 µm) by spinning them for 6 hours at 11000 × g (or 12 hours at 5500 × g for z_0^cfg ∼ 63 µm), starting from a 3 cm liquid height (typical Eppendorf-type vial). Such centrifugation conditions are well within the range of standard laboratory centrifuges.

We may use similar reasoning to find the conditions for minimal centrifugal stress, which will correspond to finding the minimum RCF necessary for a certain pellet compactness (small z_0^cfg), and applying prolonged centrifugation, the duration in that case being given by t_cfg.

Here, we were concerned with centrifugation without a density gradient, starting from an initially homogeneous solution. For separation purposes, density gradient methods may be better suited.[42] Moreover, we used the Mason-Weaver model, which assumes a constant gravitational field, instead of the gradient found in centrifugation. Nevertheless, this simple model does provide a means of predicting the behaviour of nanoparticles in solution in a standard laboratory centrifuge, and is therefore relevant for the rational design of nanoparticle purification protocols. Another implication of this analysis is that the RCFs needed for the centrifugal sedimentation of inorganic nanoparticles and their assemblies are significantly lower than those in analytical ultracentrifuges (∼ 100000 × g). Analytical centrifugation using lower-speed (and simpler) equipment, such as recent photocentrifuges with space- and time-resolved extinction profiling (STEP),[20–23] is therefore highly relevant for the characterisation of inorganic nanoparticle solutions. Compared to the low-cost digital photography-based method presented here, such dedicated and optimised equipment provides more rapid, precise and detailed analysis of nanoparticle samples.

6 Conclusion

This work rationalised observations on the settling of nanoparticles in liquid solution in the framework of the Mason-Weaver model. This is illustrated by the quantitative agreement between this model and experimental digital photography for the time-evolution of the concentration gradient of suspended nanoparticles submitted to a gravitational field. A simple experimental protocol for observing the sedimentation process is established, using time-lapse digital photography of the samples in an undisturbed laboratory fridge in order to avoid thermally induced convection.

By fitting a numerical solution of the Mason-Weaver equation to the experimental concentration gradient, the diffusion and sedimentation coefficients D and s are obtained, without the need to wait for complete equilibrium to be established. Early studies on sedimentation[3, 43] were mostly concerned with precise measurement of the equilibrium gradient, which only yields the buoyant mass m_b of the particle as the sole parameter, not the separate contributions of the diffusion and sedimentation coefficients.

We demonstrated that measuring the evolving concentration gradient of settling solutions distinguishes between individual monomeric nanoparticles and their dimeric assemblies, by giving a distinct combination of diffusion and sedimentation coefficients for each type of nano-object. In our experiments on gold nanospheres in dilute solution, no significant deviations from the simple Mason-Weaver model were observed. Colloidal gold solutions are generally known to be well-behaved in this respect, and we expect many dilute solutions of other non-aggregating (inorganic) nanoparticles to behave in a similar way.
Any significant deviations from the predictions made by the Mason-Weaver model would point to stronger inter-particle interactions, aggregation of individual objects, specific solvation[4–6] or surfactant[7, 8] effects, or changes in the properties of the liquid medium. It is important to be aware of such deviations as they may affect other aspects of the behaviour of the nanoparticles in liquid media, such as their interaction with biological entities.

The simple mathematical model and the experimental protocol used here have obvious limitations, and are no substitute for advanced analytical centrifugation techniques.[14, 16, 19–25, 40] However, they do provide a means for initial and very simple screening of nanoparticle solutions, and rationalise visual observations made at the bench, or upon prolonged storage in the laboratory fridge. The use of a standard digital photo-camera limits the applications to systems that have a detectable optical response in the visible. Many particle systems do indeed have such a response, for instance plasmonic materials such as gold, silver and titanium nitride,[44] but also other metal particles, semiconductor quantum dots,[17] and paramagnetic iron oxide.[29] Attainment of equilibrium can take quite some time, and an analytical photocentrifuge[20–23] may be a time-saving investment, which will also yield more precise analysis, in particular for broad size distributions and multimodal samples.

Based on the Mason-Weaver model, we obtained simple expressions that give useful recommendations for the centrifugation of nanoparticle solutions, so that delicate solutions can be processed with care, instead of being spun at maximum RCF.
The practical insights and simple quantitative expressions provided in the present work are of direct interest for the wet-chemical synthesis, purification and application of functional nanoparticle assemblies, and will stimulate further interest in sedimentation analysis[4, 19, 23] for functional nanoparticle solutions.

7 Experimental details

Temperature stabilisation. In a non-thermostated environment such as a lab shelf, even modest temperature changes may create temperature gradients in the sample that lead to convective motion sufficiently strong to prevent the sedimentation equilibrium from establishing itself. Such convective motion plagued early sedimentation experiments, which were concerned with precise measurement of the equilibrium distribution.[3, 43] Thermally driven convection offers an explanation as to why, on a non-thermostated shelf, many colloidal gold solutions seem not to evolve to a sedimentation equilibrium gradient. In the present work, mechanical and thermal perturbations were avoided by carrying out the experiments in an undisturbed laboratory fridge at 277 K.

Quantitative digital photography. In previous work we used quantitative colour imaging for measuring concentration profiles in microfluidic channels, using an optical microscope and a dedicated CCD camera.[45] Here we use a 'consumer-grade' photo camera. Images were taken using a digital camera (Nikon Coolpix P7800, Nikon Corp., Japan) providing output of unprocessed ('RAW') image data, in which the individual pixel values are proportional to the detected light intensity for the particular colour channel. This circumvents the problems of linearisation generally encountered with most digital cameras.[31] Typically, four samples were photographed in a single picture. A black area was also included to serve as the source for background subtraction.
The 'RAW' image data were read by ImageJ software[46] using the dcraw plugin.[47] The three colour channels of the images were separated (ImageJ) and treated individually as monochrome intensity images. Intensity gradient profiles I_raw(z) for all samples (and all colour channels) in each image of the time series were extracted by horizontal averaging over a rectangular area (typically, the visible optical window of the cells). The pixel values of the black area were averaged for background subtraction, yielding I_dark. The z scaling was calibrated by precise measurements of specific cell features. Each image frame thus obtained its specific calibration of pixel size. The digital intensity profiles I_raw(z) were then further treated numerically using the Python programming language with scientific extensions.[48–50]

The individual profile traces with their specific z step sizes were re-sampled at a standard higher resolution in order to have profiles with identical step sizes, facilitating their processing and comparison with theory. Subsequently, background-corrected image profiles I(z) were obtained by subtraction of the near-zero dark background.

I(z) = I_raw(z) − I_dark   (16)

The top area of each extracted z profile, which does not contain liquid, was used for calculating I_0 by averaging. This corrects slight frame-to-frame variations in illumination intensity. A linear baseline correction, OD_base(z) = k_1 z + k_2, was applied globally to all time-frames for each sample. In all cases, the baseline correction was only modest and not necessary to obtain useful results. The final corrected optical density is obtained using Eqn. (17). We note that we apply this Beer-Lambert-Bouguer formulation despite the condition of monochromatic light not being rigorously fulfilled: the filters used in colour cameras define large spectral bands (width ∼ 100 nm).
As has been discussed previously,[45] a linear response of the optical density as a function of concentration is still obtained, provided that the extinction spectra of the samples are sufficiently broad and their optical density is sufficiently low (under OD 1).

OD(z) = log10( I_0 / I(z) ) − OD_base(z)   (17)

Comparison of experiment with theory is achieved by converting the concentration profiles c(z) from the Mason-Weaver model into modelled optical density profiles OD_model(z) using an 'effective extinction coefficient', which we refer to here as k_3 and which can be adapted to rescale the model concentration profile to fit the experimental values.

OD_model(z) = k_3 c(z)   (18)

Centrifugation. A thermostated bench-top laboratory centrifuge (Hettich Mikro 220r with 1195-A rotor) was used. The relation between rotational speed (revolutions per minute, RPM) and relative centrifugal force (RCF) is

RCF = 1.12 × 10−3 × R × (RPM)2   (19)

with the rotor radius R expressed in metres. The rotor radius R for the 1195-A rotor used is 0.087 m. The maximum speed for this rotor is 18000 RPM, corresponding to an RCF of 31500 × g. Samples were generally centrifuged with the centrifuge thermostat at 298 K.

Colloidal gold solutions. For the study of the sedimentation of gold nanospheres (Section 3) we used standard aqueous solutions of colloidal gold from commercial sources (BBI Solutions, UK, and Sigma-Aldrich, France) and samples synthesised and characterised according to literature procedures.[51] All colloids are stabilised with negatively charged carboxylate ligands, and the particles have negative zeta potential. The samples were diluted with pure water where necessary. Typical optical densities at the extinction maximum were in the range 0.3–1. The carboxylate ligand layers around the particles are thin compared to the particle diameter (d > 20 nm), and do not contribute significantly to the buoyant mass.
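For illustration, the photometric relation (17) and the speed conversion (19) can be written as small helper functions. This is our own sketch, not code from the published work; the intensity values are synthetic.

```python
import numpy as np

def optical_density(I_raw, I_dark, I0, OD_base=0.0):
    """Eqns. (16)-(17): background-corrected optical density profile."""
    I = I_raw - I_dark
    return np.log10(I0 / I) - OD_base

def rcf_from_rpm(rpm, R=0.087):
    """Eqn. (19): RCF = 1.12e-3 * R * RPM^2, rotor radius R in metres."""
    return 1.12e-3 * R * rpm**2

# maximum speed of the 1195-A rotor
print(rcf_from_rpm(18000))  # ~31500 x g, as quoted in the text

# a synthetic two-pixel intensity profile: blank region and absorbing region
I_raw = np.array([1000.0, 110.0])   # raw counts
OD = optical_density(I_raw, I_dark=10.0, I0=990.0)
print(OD)   # first pixel: OD 0 (blank); second pixel: OD close to 1
```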
For smaller particles (< 10 nm) and larger ligands such effects may become significant, and will then show up as a deviation from the idealised predicted behaviour, resulting in a change in effective buoyant mass and in the effective diffusion and sedimentation coefficients. Furthermore, solvation effects on sedimentation may become significant, as the shell of solvent molecules around the object influences its hydrodynamic behaviour.[4–6]

DNA-linked dimers. The dimers were constructed from 13 nm gold nanoparticles, synthesised according to the published protocol.[52, 53] Briefly, a solution of sodium tetrachloroaurate (50 mL, 1 mM) was heated to 100 °C under stirring. Once boiling, a solution of sodium citrate (5 mL, 2 wt.%) was added. After appearance of the typical red colour, boiling and stirring were maintained for 15 additional minutes before letting the solution cool down to room temperature. To stabilise the AuNPs, BSPP was added (15 mg) and the solution was left to stir overnight. TEM images were obtained with a Hitachi H7000 transmission electron microscope, operating at 75 kV bias. All sample preparations involved deposition and evaporation of a specimen droplet on a carbon film-coated 400 mesh copper grid. TEM analysis is shown in the Supporting Information, Fig. S2.

Dimers of 13 nm AuNPs were synthesised using DNA hybridisation.[54] BSPP-coated AuNPs of 13 nm (5 pmol) were flocculated using NaCl and centrifuged for 5 min at RCF 25 000 × g (Eppendorf centrifuge 5417R, rotor FA-45-24-11, 8.8 cm radius). The supernatant was taken off and the particles were re-dispersed in 100 µL of buffer (20 mM phosphate, 5 mM NaCl). DNA single strands S1 and S2 (15 pmol each) were added to separate nanoparticle solutions to achieve a DNA/particle ratio of 3:1.
A solution of BSPP (10 µL, 1 mg/20 µL) was added and the reaction mixture was shaken for 1 h, in order to deprotect the thiol group on the DNA and to allow its attachment to the particles. The AuNPs were then centrifuged for 15 min at 25 000 × g and re-dispersed in hybridisation buffer (50 µL, 6 mM phosphate, 80 mM NaCl). After mixing the two types of particles (S1 and S2 strands), a complementary DNA strand S3 (2.5 pmol) was added to create the dimers. Hybridisation was realised by heating the solution to 70 °C and leaving it to cool down slowly and gradually to room temperature. The dimers were purified by agarose gel electrophoresis (1.75% agarose gel in 0.5x TBE, 1 h at 90 V). After extraction from the gel the solution was centrifuged for 10 min at 25 000 × g and the dimers were redissolved in Milli-Q water.

Acknowledgements

This work was supported by Dstl (UK) in the framework of the France-UK Ph.D. programme. MW acknowledges funding by the Agence Nationale de la Recherche (France), grant ANR-2010-JCJC-1005-1 (COMONSENS).

References
[1] M. D. Haw. J. Phys.: Condens. Matter 2002, 14, 33 7769.
[2] P. Ball. Chemistry World 2005, 2 38.
[3] J. Perrin. J. Phys. Theor. Appl. 1910, 9 5.
[4] J. W. Williams, K. E. Van Holde, R. L. Baldwin, H. Fujita. Chem. Rev. 1958, 58, 4 715.
[5] T. M. Laue, W. F. Stafford III. Annu. Rev. Biophys. Biomol. Struct. 1999, 28, 1 75.
[6] J.-J. Huang, Y. J. Yuan. Phys. Chem. Chem. Phys. 2016, 18, 17 12312.
[7] J. T. Li, K. D. Caldwell. Langmuir 1991, 7, 10 2034.
[8] M. Andersson, K. Fromell, E. Gullberg, P. Artursson, K. D. Caldwell. Anal. Chem. 2005, 77, 17 5488.
[9] V. Sharma, K. Park, M. Srinivasarao. Proc. Nat. Acad. Sci. 2009, 106, 13 4981.
[10] K. Park, H. Koerner, R. A. Vaia. Nano Lett. 2010, 10, 4 1433.
[11] L. Scarabelli, M. Coronado-Puchau, J. J. Giner-Casares, J. Langer, L. M. Liz-Marzán. ACS Nano 2014, 8, 6 5833.
[12] F. Liebig, R. M. Sarhan, C. Prietzel, A. Reinecke, J. Koetz. RSC Adv. 2016, 6, 40 33561.
[13] E. C. Cho, Q.
Zhang, Y. Xia. Nat. Nanotechnol. 2011, 6, 6 385.
[14] J. M. Zook, V. Rastogi, R. I. MacCuspie, A. M. Keene, J. Fagan. ACS Nano 2011, 5, 10 8070.
[15] R. P. Carney, J. Y. Kim, H. Qian, R. Jin, H. Mehenni, F. Stellacci, O. M. Bakr. Nat. Commun. 2011, 2, May 335.
[16] K. L. Planken, H. Cölfen. Nanoscale 2010, 2, 10 1849.
[17] B. Demeler, T.-L. Nguyen, G. E. Gorbet, V. Schirf, E. H. Brookes, P. Mulvaney, A. O. El-Ballouli, J. Pan, O. M. Bakr, A. K. Demeler, B. I. Hernandez Uribe, N. Bhattarai, R. L. Whetten. Anal. Chem. 2014, 86, 15 7688.
[18] J. Walter, K. Löhr, E. Karabudak, W. Reis, J. Mikhael, W. Peukert, W. Wohlleben, H. Cölfen. ACS Nano 2014, 8, 9 8871.
[19] M. J. Urban, I. T. Holder, M. Schmid, V. Fernandez Espin, J. Garcia de la Torre, J. S. Hartig, H. Cölfen. ACS Nano 2016, 10 7418.
[20] T. Detloff, T. Sobisch, D. Lerche. Part. Part. Syst. Charact. 2006, 23 184.
[21] T. Detloff, T. Sobisch, D. Lerche. Powder Technol. 2007, 174 50.
[22] E. Ibrahim, S. Hampel, J. Thomas, D. Haase, A. U. B. Wolter, V. O. Khavrus, C. Täschner, A. Leonhardt, B. Büchner. J. Nanoparticle Res. 2012, 14 1118.
[23] J. Walter, T. Thajudeen, S. Süβ, D. Segets, W. Peukert. Nanoscale 2015, 7 6574.
[24] Ž. Krpetić, A. M. Davidson, M. Volk, R. Lévy, M. Brust, D. L. Cooper. ACS Nano 2013, 7, 10 8881.
[25] A. Knauer, A. Thete, S. Li, H. Romanus, A. Csáki, W. Fritzsche, J. M. Köhler. Chem. Eng. J. 2011, 166, 3 1164.
[26] C. M. Alexander, J. C. Dabrowiak, J. Goodisman. J. Coll. Interf. Sci. 2013, 396 53.
[27] C. M. Alexander, J. Goodisman. J. Coll. Interf. Sci. 2014, 418 103.
[28] A. I. López-Lorente, M. Sieger, M. Valcárcel, B. Mizaikof. Anal. Chem. 2014, 86 783.
[29] V. Prigiobbe, S. Ko, C. Huh, S. L. Bryant. J. Coll. Interf. Sci. 2015, 447 58.
[30] M. Stevens, C. A. Páraga, I. C. Cuthill, J. C. Partridge, T. S. Troscianko. Biol. J. Linn. Soc. 2007, 90, 2 211.
[31] J. E. Garcia, A. G. Dyer, A. D. Greentree, G. Spring, P. A. Wilksch. PLoS ONE 2013, 8, 11 e79534.
[32] T. Schwaebel, O.
Trapp, U. H. F. Bunz. Chem. Sci. 2013, 4, 1 273.
[33] M. Mason, W. Weaver. Phys. Rev. 1924, 23 412.
[34] O. Lamm. Ark. Mat. Astron. Fys. 1929, 21B 1.
[35] P. H. Brown, P. Schuck. Comp. Phys. Commun. 2008, 178, 2 105.
[36] P. Schuck. Biophys. J. 2000, 78, 3 1606.
[37] W. Weaver. Phys. Rev. 1926, 27, 4 499.
[38] K. E. Van Holde, R. L. Baldwin. J. Phys. Chem. 1958, 62, 6 734.
[39] J. R. G. Navarro, M. H. V. Werts. Analyst 2013, 138 583.
[40] J. B. Falabella, T. J. Cho, D. C. Ripple, V. A. Hackley, M. J. Tarlov. Langmuir 2010, 26, 15 12740.
[41] A. R. Clapp, E. R. Goldman, H. Mattoussi. Nat. Protoc. 2006, 1, 3 1258.
[42] B. Kowalczyk, I. Lagzi, B. A. Grzybowski. Curr. Opinion Coll. Interf. Sci. 2011, 16, 2 135.
[43] N. Johnston, L. G. Howell. Phys. Rev. 1930, 35 274.
[44] U. Guler, S. Suslov, A. V. Kildishev, A. Boltasseva, V. M. Shalaev. Nanophotonics 2015, 4, 3 269.
[45] M. H. V. Werts, V. Raimbault, R. Texier-Picard, R. Poizat, O. Français, L. Griscom, J. R. G. Navarro. Lab. Chip 2012, 12 808.
[46] C. A. Schneider, W. S. Rasband, K. W. Eliceiri. Nat. Methods 2012, 9 671.
[47] D. Coffin. dcraw. URL http://www.cybercom.net/~coffin/dcraw/.
[48] K. J. Millman, M. Aivazis. Comp. Sci. Eng. 2011, 13, 2 9.
[49] S. Van Der Walt, S. C. Colbert, G. Varoquaux. Comp. Sci. Eng. 2011, 13, 2 22.
[50] J. D. Hunter. Comp. Sci. Eng. 2007, 9 90.
[51] N. G. Bastús, J. Comenge, V. Puntes. Langmuir 2011, 27, 17 11098.
[52] J. Turkevich, P. C. Stevenson, J. Hillier. Discuss. Faraday Soc. 1951, 11, c 55.
[53] J. Turkevich, P. C. Stevenson, J. Hillier. J. Phys. Chem. 1953, 57, 7 670.
[54] A. Heuer-Jungemann, R. Kirkwood, A. H. El-Sagheer, T. Brown, A. G. Kanaras. Nanoscale 2013, 5, 16 7209.

Supporting Information for Part. Part. Syst. Charact. 2017, 1700095
DOI: 10.1002/ppsc201700095

The sedimentation of colloidal nanoparticles in solution and its study using quantitative digital photography

Johanna Midelet, Afaf H. El-Sagheer, Tom Brown, Antonios G. Kanaras and Martinus H. V.
Werts*

S.1 Synthesis of DNA-linked gold nanosphere dimers

Figure S1 – Schematic of the assembly of DNA-linked dimers and subsequent 'click' chemistry.

Table S1 – Sequences of single DNA strands S1, S2 and S3.

Figure S2 – Citrate-stabilised gold nanoparticles used for the construction of DNA-linked dimers. a) TEM image of 13±2 nm spherical gold nanoparticles. Scale bar is 100 nm. b) Corresponding size distribution histogram, N = 500 particles.

S.2 Finite-difference solver for the Mason-Weaver equation

S.2.1 Dimensionless Mason-Weaver equation

A numerical solver has been designed for the dimensionless version of the Mason-Weaver equation. Conversion between the Mason-Weaver equation and its dimensionless form can be achieved using:

ζ = z/z_0,  z_0 = D/(sg)   (S1)

and

τ = t/t_0,  t_0 = D/(s2g2)   (S2)

The aim is now to obtain the time-evolving spatial concentration distribution c(ζ,τ) obeying the dimensionless Mason-Weaver equation (S3), starting from an arbitrary initial concentration profile c(ζ, 0).

∂c/∂τ = ∂2c/∂ζ2 + ∂c/∂ζ   (S3)

∂c/∂ζ + c = 0  at  ζ = 0 and ζ = ζ_max   (S4)

A finite-difference scheme for numerically solving this equation has been proposed previously,[26] but unfortunately, in our hands, this did not yield a working computer code. In particular, the proposed time coordinate transformation leads to insurmountable numerical problems in our implementation. Here, we propose a different finite-difference approximation that yields a stable, robust numerical solver, which moreover does not suffer from the mass conservation problems reported[26] for the previously proposed scheme.

S.2.2 Crank-Nicolson scheme

Time and space are discretised into N + 1 and J + 1 grid points, respectively. Space is on an evenly spaced grid between 0 and ζ_max.

ζ_j = j∆ζ  (j = 0, 1 ... J)   (S5)

∆ζ = ζ_max/J   (S6)

Time may be on an evenly or unevenly spaced grid. We use an exponentially expanding grid, going from 0 to τ_max.

τ_n = exp(kn) − 1  (n = 0, 1 ... N),  k = ln(τ_max + 1)/N   (S7)

We make the following Crank-Nicolson style finite-difference approximations for the PDE, where c_j^n is the value of c(ζ_j, τ_n).

∂c/∂τ → γ(c_j^{n+1} − c_j^n)   (S8)

∂2c/∂ζ2 → α(c_{j+1}^n − 2c_j^n + c_{j−1}^n + c_{j+1}^{n+1} − 2c_j^{n+1} + c_{j−1}^{n+1})   (S9)

∂c/∂ζ → β(c_{j+1}^n − c_{j−1}^n + c_{j+1}^{n+1} − c_{j−1}^{n+1})   (S10)

where (noting the factors 2 and 4 in α and β)

γ = 1/(τ_{n+1} − τ_n),  α = 1/(2(∆ζ)2),  β = 1/(4∆ζ)   (S11)

With these finite-difference approximations, and after rearrangement of the terms, the discrete Mason-Weaver equation can be written as follows (j = 1, 2 ... J−1).

(−α + β)c_{j−1}^{n+1} + (γ + 2α)c_j^{n+1} + (−α − β)c_{j+1}^{n+1} = (α − β)c_{j−1}^n + (γ − 2α)c_j^n + (α + β)c_{j+1}^n   (S12)

The boundary conditions are set as follows. At ζ = 0 we choose

c → (1/4)(c_0^n + c_0^{n+1} + c_1^n + c_1^{n+1})   (S13)

∂c/∂ζ → 2β(c_1^n − c_0^n + c_1^{n+1} − c_0^{n+1})   (S14)

Approximation of c using (S13) (instead of simply taking the values at c_0) leads to much better behaviour in terms of mass conservation (the sum over all c_j), in particular at longer times. After substitution and rearrangement this results in

(−2β + 1/4)c_0^{n+1} + (2β + 1/4)c_1^{n+1} = (2β − 1/4)c_0^n + (−2β − 1/4)c_1^n   (S15)

At ζ = ζ_max the equivalent choice was made (using backward difference instead of forward difference, naturally). The equations (discretised PDE and BCs) are assembled into a matrix equation involving tridiagonal matrices. Numerically solving this equation yields the c_j^{n+1} from the c_j^n, i.e. the concentration profile at time τ_{n+1} using the concentration profile from time τ_n. Repeating the process, starting from the initial condition at τ_0, yields the evolution of the concentration profile, obeying the Mason-Weaver equation.

S.2.3 Implementation

The finite-difference scheme was implemented using Python (with numpy for array manipulation and sparse matrix routines from scipy). For brevity, we only give the solver code without any plotting, processing or storage of results.
It consists of a single loop that generates the successive concentration profiles c_j^n at each τ_n, starting from the initial conditions at τ_0.

# -*- coding: utf-8 -*-
import numpy as np
from scipy import sparse
from scipy.sparse import linalg

"""
Solver for the adimensional Mason-Weaver equation.

The solution is calculated on an exponentially expanding time grid
(N+1 points), and an evenly spaced space grid (J+1 points).

Input:
    zeta_max (float): adimensional height of the cell
    tau_end (float): calculate solution from tau=0 to tau=tau_end
    J (int): number of space points minus 1 on (linear) grid
    N (int): number of time points minus 1 on (exponential) grid

Output:
    The solution is stored in a two-dimensional array 'c'.
    The space grid points are given by the vector 'zeta'.
    The time grid points are given by the vector 'tau'.

code tested with Python 3.5.2, Anaconda 4.1.1 (64-bit)
"""

def _tridiamatrix(J, ldiagelem, cdiagelem, rdiagelem,
                  cstart, rstart, lend, cend):
    """utility function that generates a sparse tridiagonal
    matrix from given elements"""
    ldiag = np.empty(J+1)
    cdiag = np.empty(J+1)
    rdiag = np.empty(J+1)
    ldiag.fill(ldiagelem)
    cdiag.fill(cdiagelem)
    rdiag.fill(rdiagelem)
    cdiag[0] = cstart
    rdiag[1] = rstart
    ldiag[-2] = lend
    cdiag[-1] = cend
    diag = [ldiag, cdiag, rdiag]
    N = J+1
    return sparse.spdiags(diag, [-1, 0, 1], N, N, format="csr")

# set calculation parameters
zeta_max = 10.          # height of cell
tau_end = 10.           # end tau
J = 400                 # number of space points (-1)
N = 200                 # number of time points (-1)
c_init = np.ones(J+1)   # initial condition

# define time and space grids
k_tau = np.log(tau_end + 1)/N
tau = -1.0 + np.exp(k_tau*np.arange(0, N+1))
dltzeta = zeta_max/J
zeta = np.linspace(0, zeta_max, J+1, endpoint=True)

# create c array for storing result
c = np.zeros((J+1, N+1))
c[:, 0] = c_init
c_n = c_init

# loop generating c^{n+1} from c^{n} starting from c^0
for n in range(0, N):
    alpha = 1/(2*(dltzeta**2))
    beta = 1/(4*dltzeta)
    gamma = 1/(tau[n+1] - tau[n])
    # finite difference diagonals Left Hand Side
    ldiagelem = -alpha + beta
    cdiagelem = gamma + 2*alpha
    rdiagelem = -alpha - beta
    # boundary conditions LHS
    cstart = -2*beta + 0.25
    rstart = 2*beta + 0.25
    lend = -2*beta + 0.25
    cend = 2*beta + 0.25
    # create LHS tridiagonal matrix
    LHSmat = _tridiamatrix(J, ldiagelem, cdiagelem, rdiagelem,
                           cstart, rstart, lend, cend)
    # finite difference diagonals Right Hand Side
    ldiagelem = alpha - beta
    cdiagelem = gamma - 2*alpha
    rdiagelem = alpha + beta
    # boundary conditions RHS
    cstart = 2*beta - 0.25
    rstart = -2*beta - 0.25
    lend = 2*beta - 0.25
    cend = -2*beta - 0.25
    # create RHS tridiagonal matrix
    RHSmat = _tridiamatrix(J, ldiagelem, cdiagelem, rdiagelem,
                           cstart, rstart, lend, cend)
    # construct RHS vector
    RHSvec = RHSmat * c_n   # c_n contains concentration profile
    # SOLVE the matrix equation, giving c^{n+1} (c_next)
    c_next = linalg.spsolve(LHSmat, RHSvec)
    # store c^{n+1} into solution matrix
    c[:, n+1] = c_next
    # we swap the vectors containing c_n and c_next via cswap
    # we cannot simply re-assign c_n = c_next
    cswap = c_n
    c_n = c_next
    c_next = cswap

# plot example
import matplotlib.pyplot as plt
plt.figure(1)
plt.clf()
for i in [0, 50, 100, 150, 200]:
    plt.plot(zeta, c[:, i], label='tau='+str(tau[i]))
plt.ylim(0, 3)
plt.xlabel('$\\zeta$')
plt.ylabel('$c$')
plt.legend()
plt.show()

S.2.4 Validation of the numerical solution

The numerical solver was tested extensively for a wide
range of ζ_max and τ. Typical values for N and J were on the order of several hundred grid points. Numerical mass conservation and convergence of the gradient to the analytical Boltzmann equilibrium solution were verified. An example of a numerical solution is given in Figure S3, which also illustrates that the numerical solution faithfully converges to the analytical equilibrium distribution, for τ = 1.4 τ_sed.

Figure S3 – Left: Numerical solution (N = 200, J = 400) to the dimensionless Mason-Weaver equation with ζ_max = 10 and τ going from 0 to 30. Right: Comparison of the analytical equilibrium profile and the numerical solution at longer times, demonstrating (1) that the numerical solution converges to the analytical equilibrium (Boltzmann) distribution and (2) that at 1.4 τ_sed the system is virtually at equilibrium.

Mass conservation is observed in the numerical scheme, as illustrated in Figure S4. A slight increase of the total numerical mass is observed, but the numerical solution does not diverge at longer simulation times, as a result of our choice for the boundary conditions, cf. Equation (S13).

Figure S4 – Evolution of the total mass in the numerically simulated system (N = 200, J = 400, ζ_max = 10, τ = 0 ... 30), obtained by summing all space grid points. A slight increase (< 2%) of the total numerical mass is observed, but the total mass does not diverge for long simulation times.

S.3 Sedimentation parameters for gold nanospheres in water

Table S2 – Calculated diffusion (D) and sedimentation (s) coefficients for gold nanoparticles in water at two different temperatures (viscosity 0.89 mPa.s at 298 K, 1.56 mPa.s at 277 K).

              298 K                    277 K
diam.   D             s          D             s
(nm)    10−12 m2s−1   10−9 s     10−12 m2s−1   10−9 s
13      37.8          0.193      19.9          0.110
20      24.5          0.457      13.0          0.260
40      12.3          1.83       6.48          1.04
50      9.82          2.86       5.18          1.62
60      8.18          4.12       4.32          2.34
80      6.14          7.32       3.24          4.16
100     4.91          11.4       2.59          6.50
150     3.27          25.7       1.73          14.6

Table S3 – Values of the equilibrium gradient length scale z_0 (in mm) for spherical gold nanoparticles in water, at two different temperatures (room temperature and fridge).

diam. (nm)   z_0 (mm), 298 K   z_0 (mm), 277 K
13           19.9              18.5
20           5.147             5.08
40           0.684             0.636
50           0.350             0.325
60           0.203             0.188
80           0.0855            0.0794
100          0.0438            0.0407
150          0.0130            0.0121

Table S4 – Calculated concentration factor B for spherical gold nanoparticles in water at 277 K (left columns) and 298 K (right columns), for cell heights (z_max) of 1 mm and 1 cm.

             277 K             298 K
diam. (nm)   1 mm     1 cm     1 mm     1 cm
13           1.03     1.29     1.03     1.27
20           1.10     2.29     1.09     2.18
40           1.99     15.7     1.90     14.6
50           3.22     30.7     3.03     28.6
60           5.34     53.1     4.97     49.4
80           12.6     126      11.7     117
100          24.6     246      22.9     229
150          83.0     830      77.1     771

Table S5 – Calculated sedimentation times t_sed (in hours) for gold nanoparticles in water for a cell with z_max = 1 cm. A good rough estimate for the time to reach equilibrium is t_equil ≈ 1.4 t_sed. These times are directly proportional to cell height.

diam. (nm)   t_sed (h), 298 K   t_sed (h), 277 K
13           1465               2580
20           619                1090
40           155                272
50           99.0               174
60           68.8               121
80           38.7               68.1
100          24.8               43.6
150          11.0               19.4

S.4 The time needed to reach sedimentation equilibrium

In the main text we state that a simple but robust approximation for the time to reach sedimentation equilibrium is 1.4 times the time t_sed needed by a particle to traverse the liquid column from top to bottom at its terminal sedimentation velocity. This approximation enables a simple estimate of the time needed for centrifugation of nanoparticles at a certain centrifugal acceleration.
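As a cross-check (added here for illustration; it is not part of the original supplement), the entries of Tables S2 and S5 can be regenerated from the Stokes-Einstein diffusion coefficient and the Stokes sedimentation coefficient of a sphere:

```python
import math

kB = 1.380649e-23   # Boltzmann constant, J/K

def gold_sphere_in_water(diam_m, T, eta, rho_p=19300.0, rho_f=1000.0):
    """D (Stokes-Einstein) and s (Stokes drag) for a gold sphere in water."""
    r = diam_m / 2
    D = kB * T / (6 * math.pi * eta * r)        # m^2/s
    s = 2 * (rho_p - rho_f) * r**2 / (9 * eta)  # seconds
    return D, s

# 20 nm diameter at 298 K, eta = 0.89 mPa s
# (cf. Table S2: D = 24.5e-12 m^2/s, s = 0.457e-9 s)
D, s = gold_sphere_in_water(20e-9, T=298.0, eta=0.89e-3)

# sedimentation time for z_max = 1 cm (cf. Table S5: 619 h at 298 K)
t_sed_h = 0.01 / (s * 9.81) / 3600
print(round(D * 1e12, 1), round(s * 1e9, 3), round(t_sed_h))
```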
A more sophisticated approach to this question is known in the ultracentrifugation literature.[38] This approach by Van Holde and Baldwin[38] yields the time to arrive near equilibrium within a 'distance' determined by a parameter ε. The choice for this parameter made in [38] is based on the difference in particle concentration between the top and the bottom of the cell, ∆c = c(0) − c(z_max).

ε = 1 − ∆c(t)/∆c_eq   (S16)

At the beginning, when the concentration is homogeneous, ε = 1. As sedimentation equilibrium is reached, ε will tend to zero. A close approach to equilibrium is typically obtained for ε < 0.02. The time to come within this 'distance' ε from equilibrium can then be approximated with the following expression.

t_equil ≈ (z_max2/D) F(α)   (S17)

F(α) = −(1/(π2U(α))) ln( π2U2(α)ε / (4[1 + cosh(1/(2α))]) )   (S18)

U(α) = 1 + 1/(4π2α2)   (S19)

The time to reach equilibrium is expressed as a function of α, which, in the context of the present work, is given by

α = k_B T/(m_b g z_max)   (S20)

An interesting point is that the parameter α can also be expressed as the ratio of the length scale of the equilibrium gradient z_0 to the height of the liquid column z_max.

α = k_B T/(m_b g z_max) = z_0/z_max   (S21)

It is thus a measure of how the equilibrium gradient 'fits' in the height of the liquid column. For 'oversized' containers (where all particles will finally be concentrated near the bottom of the container), the attainment of equilibrium for an initially homogeneous distribution will be governed by the time needed for sedimentation, whereas containers much smaller than the equilibrium gradient will not show much of the gradient, i.e. when α becomes very large, the gradient will be undetectable. Equation (S17) may be expressed as the ratio of t_equil and t_sed. It can be shown that this ratio is a pure function of α. It thus depends only on the ratio z_0/z_max, not on D and s individually.
t_equil/t_sed = F(α)/α   (S22)

It is important to note that Equation (S17) is an approximation for α > 0.1,[38] and that it is not valid at smaller values. For small α our approximation t_equil ≈ 1.4 t_sed is valid and safe. For α > 0.1, we find that the time predicted by Eqn. (S17) is always shorter than the rough 1.4 t_sed estimate (see Figure S5).

Figure S5 – Left: The ratio of the 'time-to-equilibrium' t_equil and the 'sedimentation time' t_sed as a function of the parameter α. Curves (a) are calculated using Eqn. (S17), valid for α > 0.1. We chose ε = 0.018, such that the maximum of the curve is at 1.4. Curves (b) use the constant 1.4 ratio from this work. Right: Absolute sedimentation times in a 2 cm high water column, calculated for spherical gold particles. T = 277.15 K was used. Small values of α (left) correspond to large particle diameters (right).

We note that our numerical solver enables full simulation of the sedimentation process. This can be used to study in more detail the evolution of the concentration gradient and the attainment of equilibrium. A different metric for the distance to equilibrium may be used instead of the difference between the two outermost points, such as the root-mean-square deviation over the entire curve, or a metric that is representative of the particular measurement used. We performed numerical simulations of the attainment of sedimentation equilibrium for different values of α. The numerical solver allows us to study also the case of α < 0.1, not covered by Equation (S17). Figure S6 shows how the 'distance to equilibrium' ε (Eqn. S16) evolves over time (relative to the sedimentation time t_sed). For all values of α, equilibration is near completion at t ≈ 1.4 t_sed, even for the 'worst case' α = 0.1. As predicted by Eqn. (S17), equilibrium is attained relatively faster (t < 1.4 t_sed) for larger α (smaller particles).
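The behaviour shown in Figure S5 can be reproduced directly from Eqns. (S18), (S19) and (S22). The sketch below (our own, with ε = 0.018 as in Figure S5) confirms that the ratio t_equil/t_sed is close to its maximum of ∼1.4 at α = 0.1 and decreases for larger α:

```python
import math

def U(alpha):
    """Eqn. (S19)."""
    return 1.0 + 1.0 / (4.0 * math.pi**2 * alpha**2)

def F(alpha, eps=0.018):
    """Eqn. (S18), valid for alpha > 0.1."""
    u = U(alpha)
    arg = (math.pi**2 * u**2 * eps
           / (4.0 * (1.0 + math.cosh(1.0 / (2.0 * alpha)))))
    return -math.log(arg) / (math.pi**2 * u)

# ratio t_equil / t_sed = F(alpha)/alpha, Eqn. (S22)
for a in (0.1, 0.3, 1.0):
    print(a, F(a) / a)
# at alpha = 0.1 the ratio is close to 1.4,
# and it decreases for larger alpha, as in Figure S5
```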
Figure S6 – Approach to sedimentation equilibrium, obtained using the numerical Mason-Weaver solver, for different values of $\alpha$. Equilibration is always virtually complete at $t = 1.4\,t_{\mathrm{sed}}$. For large $\alpha$, the time to equilibrium becomes shorter.

This brief analysis shows that $t_{\mathrm{equil}} \approx 1.4\,t_{\mathrm{sed}}$ is useful as a rough approximation. It never underestimates the time to equilibrium. However, it does overestimate the time to equilibrium for the smaller particles, in which case Equation (S22) becomes relevant.

S.5 Experimental test of the centrifugation parameters recommended by the Mason-Weaver model

Figure S7 – Centrifugation of gold nanospheres in water, for 30 min at the centrifugal acceleration recommended by the Mason-Weaver model. Diameters: (a) 20 nm, (b) 50 nm, (c) 80 nm, (d) 150 nm. Photographs on the left: centrifuge vials before and after centrifugation. Relative centrifugal force: (a) 4931 × g, (b) 789 × g, (c) 308 × g, (d) 88 × g. UV-visible extinction spectra on the right: sample before centrifugation (black) and after (blue) re-dispersal of the liquid pellet (5 % of the initial volume); red: supernatant (95 % of the initial volume).

Figure S8 – Centrifugation of 20 nm diameter gold nanospheres in water, for 30 min, at 6000 × g, i.e. 20 % higher than the Mason-Weaver recommendation.

A novel method to remotely measure food intake of free-living individuals in real time: the remote food photography method

Corby K. Martin1*, Hongmei Han1, Sandra M. Coulon1, H. Raymond Allen1, Catherine M. Champagne1 and Stephen D.
Anton2
1Pennington Biomedical Research Center, 6400 Perkins Road, Baton Rouge, LA 70808, USA
2Department of Aging and Geriatric Research, University of Florida, PO Box 112610, Gainesville, FL, USA

(Received 12 December 2007 – Revised 25 March 2008 – Accepted 28 April 2008 – First published online 11 July 2008)

The aim of the present study was to report the first reliability and validity tests of the remote food photography method (RFPM), which consists of camera-enabled cell phones with data transfer capability. Participants take and transmit photographs of food selection and plate waste to researchers/clinicians for analysis. Following two pilot studies, adult participants (n 52; BMI 20–35 kg/m2 inclusive) were randomly assigned to the dine-in or take-out group. Energy intake (EI) was measured for 3 d. The dine-in group ate lunch and dinner in the laboratory. The take-out group ate lunch in the laboratory and dinner in free-living conditions (participants received a cooler with pre-weighed food that they returned the following morning). EI was measured with the RFPM and by directly weighing foods. The RFPM was tested in laboratory and free-living conditions. Reliability was tested over 3 d and validity was tested by comparing directly weighed EI to EI estimated with the RFPM using Bland–Altman analysis. The RFPM produced reliable EI estimates over 3 d in laboratory (r 0·62; P<0·0001) and free-living (r 0·68; P<0·0001) conditions. Weighed EI correlated highly with EI estimated with the RFPM in laboratory and free-living conditions (r > 0·93; P<0·0001). In two laboratory-based validity tests, the RFPM underestimated EI by −4·7 % (P=0·046) and −5·5 % (P=0·076). In free-living conditions, the RFPM underestimated EI by −6·6 % (P=0·017). Bias did not differ by body weight or age. The RFPM is a promising new method for accurately measuring the EI of free-living individuals. Error associated with the method is small compared with self-report methods.
Digital photography: Food intake: Energy intake: Measurement: Self-report

The gold standard for measuring food or energy intake (EI) in free-living humans is the doubly labelled water (DLW) method. DLW provides an accurate measure of total daily energy expenditure and, during a period of energy balance, total daily energy expenditure is equal to EI (1,2). When a large energy deficit is present during the DLW period, however, it is difficult to obtain an accurate (valid) estimate of an individual's short-term EI using DLW, even if changes in energy stores are considered (3). This limitation is noteworthy, since researchers and clinicians frequently require an estimate of EI during diets or periods of energy restriction. Additional limitations of the DLW method include: (1) cost; (2) availability; (3) its inability to provide important information about the type and micro- and macronutrient composition of foods ingested. Nevertheless, seemingly few valid and reliable alternatives for estimating EI are available. Self-report methods are frequently used to collect EI data, including 24 h food recall and pen-and-paper food records. When estimating EI with 24 h food recall, a trained individual interviews the participant about his/her food and beverage consumption over the previous 24 h. This method relies on the ability of the participant to accurately recall the types and amounts of foods consumed during the previous 24 h, and it assumes that these foods are representative of habitual EI. Consequently, this method is subject to error (4–6). For example, EI estimated with 24 h recall in New Zealand's National Nutrition Survey resulted in significant underestimation of EI, particularly among women and obese participants (5), and 24 h recall with multiple-pass methodology resulted in significant underestimation of EI in a sample of African-American women diagnosed with type 2 diabetes (6).
Efforts to improve the accuracy of 24 h recall have been disappointing. For example, financial incentives were not found to improve diet recall (7). Food records are another self-report method frequently used to estimate food intake. When using food records, participants estimate the portion sizes of foods that they eat and record this portion size and a description of the food on the food record, which is typically pen and paper. The accuracy of this method has been questioned (8,9). Tightly controlled studies that compared self-reported EI to EI measured with DLW indicate that individuals significantly under-report their EI when using food records (10,11). Moreover, overweight and obese individuals tend to under-report EI to a greater degree than lean individuals (10), and individuals tend to selectively under-report fat intake (12). Different methods have been used to try to improve the accuracy of food records, but the validity of these methods remains questionable.

* Corresponding author: Dr Corby K. Martin, fax +1 225 763 3045, email Corby.Martin@pbrc.edu
Abbreviations: DLW, doubly labelled water; EI, energy intake; EMA, ecological momentary assessment; ICC, intraclass correlation coefficient; PBRC, Pennington Biomedical Research Center; PDA, personal digital assistant; RFPM, remote food photography method.
British Journal of Nutrition (2009), 101, 446–456 doi:10.1017/S0007114508027438 © The Authors 2008
Downloaded from https://www.cambridge.org/core, Carnegie Mellon University, on 06 Apr 2021 at 01:16:43, subject to the Cambridge Core terms of use.
For example, an individualised approach to food records that considered variability in EI and estimated energy requirements resulted in underestimates of EI of 27 % or more (13), and a method that included visual aids to improve portion-size estimates underestimated EI by 14 % with marked variability (14). Software applications housed on hand-held devices such as personal digital assistants (PDA) have been developed to reduce the burden of keeping a food record and to improve their accuracy. Nevertheless, food records kept on PDA appear no more accurate than 24 h recall (15) or traditional food records (16) and do not improve the accuracy of self-reported EI (17). Importantly, Beasley et al. (15) noted that the largest source of error in estimated EI resulted from participants' poor estimation of portion size. Individuals have difficulty estimating portion size even after training, and these errors negatively affect EI estimates (18). Our laboratory found that participants more accurately estimated portion size after extensive training, though a large degree of variability in portion-size estimates remained (19). Together, these findings indicate that methods are needed that do not rely on the participant to estimate portion sizes. Methods to measure EI that do not rely on the participant to estimate portion size are limited. Direct observation of food selection and plate waste by trained observers results in accurate EI estimates (20,21), but this method is not applicable to free-living conditions. The digital photography of foods method (22,23), however, could be modified for free-living conditions. When using this method, the plate of foods selected by an individual is photographed and the individual's plate waste is photographed following the meal. Reference or standard portions of known quantities of the foods are also photographed.
In the laboratory, trained registered dietitians use these photographs to estimate the portion size of food selection and plate waste by comparing these photographs to the standard portion photographs. These estimates are entered into a computer application that calculates the weight (g), energy (kJ or kcal) and macro- and micronutrients of food selection, plate waste, and food/EI based on a US Department of Agriculture database (24). The digital photography method has been found to be highly reliable and accurate when used to measure the EI of adults (22,23) and children (25) in cafeteria settings. Portion-size estimates from digital photography correlate highly with weighed portion sizes (r > 0·90; P<0·0001) and mean differences between directly weighing foods and digital photography are minimal (<6 g) (23). Agreement is high among trained registered dietitians who estimate portion sizes using digital photography; for example, intraclass correlation coefficients (ICC) are consistently >0·90 (25). To our knowledge, only a few studies have used digital photographs to estimate EI in free-living conditions. A research group in Japan used a PDA with a digital camera to collect EI data in free-living conditions. The method results in EI estimates comparable with weighed food records. Nevertheless, these studies relied on samples of young lean college women who were majoring in food and nutrition, and the criteria (food records) are prone to error (26,27). When used in a more diverse sample of males and females, the method resulted in EI estimates that were significantly lower than estimates from weighed records, and under-reporting in males was associated with obesity (28). Nevertheless, these important studies demonstrate the likely utility of such technology. The purpose of the present paper is to describe the development of the remote food photography method (RFPM) and to report the first reliability and validity tests of this method.
The RFPM relies on the validated digital photography method (22,23) to analyse food photographs and estimate EI, but the photographs are collected in free-living conditions with a camera-enabled cell phone and are sent to the researchers in near real time via a cellular network. Two pilot studies and the main study, which includes tests of reliability and validity, are described. It was hypothesised that initial reliability and validity data would suggest that the RFPM is a viable method for measuring EI in free-living conditions. The three studies reported here were conducted at the Pennington Biomedical Research Center (PBRC; Baton Rouge, LA, USA). All applicable institutional and government regulations concerning the ethical use of human volunteers were followed. All participants provided written informed consent and the research was approved by the Institutional Review Board of the PBRC.

Pilot study 1

The purpose of pilot study 1 was to overcome the primary limitation of adapting the digital photography method for free-living humans, which is the need for a photograph of a weighed standard portion from the dining location. When the digital photography method is utilised in the cafeteria, a photograph is taken of a standard portion of weighed food. In the laboratory, trained registered dietitians estimate EI by comparing the photographs of food selection and plate waste to the standard portion photograph. When photographs are collected in free-living conditions, it is unfeasible for participants to weigh and take a photograph of a standard portion of each food that they eat.

Procedures

To overcome this limitation, we created a large database of standard portion photographs from previous studies and we took new standard portion photographs of foods that are frequently consumed (for example, cereal) but were under-represented in the database.
This database is called the 'archive' and it includes over 2100 photographs of different foods, and over 250 photos of different portion sizes of the same foods. Each food is linked to energy and nutrient information from the US Department of Agriculture database (24). In rare cases when a food item is not represented in the US Department of Agriculture database, data from the manufacturer is utilised (for example, food label, website). During pilot study 1, we simulated food selection and plate waste in the laboratory by taking photographs of foods with a PDA (Palm Pilot, Zire 72; Palm Inc., Sunnyvale, CA, USA) equipped with a digital camera. We took photographs of thirty-one different foods that were grouped together to simulate sixteen meals (for example, hamburger, ketchup, French fries, soda). Each simulated meal consisted of a mean of 3·88 foods; hence, each food was represented twice with two different portion sizes. For each simulated meal, the amount of food selected and plate waste was determined randomly. Food selection ranged from 35 to 800 % of the standard portion, and plate waste ranged from 0 to 90 % of the amount of food selected. The amount of food in each photograph was carefully weighed and recorded.
Data analytic plan

To test if use of a standard portion photograph from the archive facilitated reliable and valid/accurate estimates of EI, trained registered dietitians estimated food selection and plate waste using two methods in the following order: (1) without photographs of a standard portion; (2) with photographs of a standard portion from the archive. For each of the two methods, we compared the registered dietitians' estimates of food selection to the actual amount of food that was selected with dependent t tests. Similarly, plate waste and EI from each method were compared with weighed plate waste and EI. To test inter-rater reliability, two registered dietitians estimated EI for all of the foods (100 % over-sample) and ICC were calculated. We predicted that the registered dietitians' EI estimates would not differ significantly from EI measured by directly weighing foods when a standard-portion photograph from the archive was used, but the estimates would differ significantly when no photograph from the archive was used.

Results

Inter-rater agreement between the two trained registered dietitians who estimated EI was high with and without the archive (Table 1); therefore, the registered dietitians' estimates were averaged. Estimated food selection and EI differed significantly from weighed food selection and EI when the archive was not utilised, but this difference was not significant when the archive was utilised (Table 1). Estimated plate waste did not differ from weighed plate waste for either method. Estimated EI only differed from weighed EI by −8·2 % on average when the archive was utilised.

Discussion

The first pilot study demonstrated that trained registered dietitians can reliably and accurately estimate EI with standard-portion photographs from the archive. Estimated food selection, plate waste and EI differed by −8·2 % on average from directly weighed food when the archive was used.
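The two statistics in this analytic plan, the dependent t test against weighed values and the inter-rater ICC, can be sketched in a few lines of pure Python. This is an illustration with hypothetical numbers: the paper does not state which ICC form was computed, so a one-way random-effects ICC(1,1) is shown here.

```python
import math

def paired_t(x, y):
    # Dependent (paired) t statistic: tests whether the mean
    # within-pair difference is zero; df = n - 1.
    d = [a - b for a, b in zip(x, y)]
    n = len(d)
    mean = sum(d) / n
    var = sum((v - mean) ** 2 for v in d) / (n - 1)
    return mean / math.sqrt(var / n)

def icc_oneway(ratings):
    # One-way random-effects ICC(1,1): 'ratings' holds one list of
    # [rater1, rater2, ...] scores per item (here, per food).
    n, k = len(ratings), len(ratings[0])
    grand = sum(sum(r) for r in ratings) / (n * k)
    row_means = [sum(r) / k for r in ratings]
    msb = k * sum((m - grand) ** 2 for m in row_means) / (n - 1)
    msw = sum((v - m) ** 2 for r, m in zip(ratings, row_means) for v in r) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

# Hypothetical example: two dietitians scoring three foods in perfect agreement.
print(icc_oneway([[120.0, 120.0], [86.0, 86.0], [97.0, 97.0]]))  # 1.0
```

In practice one would use a statistics package for the p-values; the sketch only shows where the reported ICC and t values come from.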
Table 1. Comparison of food selection, plate waste and energy intake (EI) estimated with the remote food photography method with weighed EI when a standard-portion photograph from the archive was and was not utilised during the estimation procedure (pilot study 1). Values are mean (SEM).

                         ICC†     95 % CI     Est. EI, kJ  Weighed EI, kJ  Est. EI, kcal  Weighed EI, kcal  Diff. of means (%)   t      df   P
No standard-portion photo was used
  Food selection         0·93***  0·81, 0·98  502 (59)     548 (59)        120 (14)       131 (14)          −8·4                −2·21  61   0·01
  Plate waste            0·94***  0·84, 0·98  142 (25)     146 (21)        34 (6)         35 (5)            −2·9                −0·35  61   0·73
  Energy intake          0·91***  0·76, 0·97  360 (46)     406 (50)        86 (11)        97 (12)           −11·3               −2·30  61   0·03
Standard-portion photo was used
  Food selection         0·95***  0·86, 0·98  523 (59)     548 (59)        125 (14)       131 (14)          −4·6                −1·12  61   0·27
  Plate waste            0·92***  0·81, 0·97  151 (25)     146 (21)        36 (6)         35 (5)            +2·9                0·51   61   0·61
  Energy intake          0·92***  0·79, 0·97  372 (46)     406 (50)        89 (11)        97 (12)           −8·2                −1·58  61   0·12

ICC, intraclass correlation coefficient. *** P<0·0001. † ICC with 95 % CI represent agreement between the two registered dietitians who estimated EI (100 % over-sample).

Pilot study 2

Participants and procedures

Pilot study 2 was a feasibility study to determine if free-living people could use the PDA to take pictures of their food selection and plate waste. Male and female adults (aged 18–54 years) with BMI 19–35 kg/m2 (inclusive) were enrolled. Participants were asked to take pictures of their food selection, including energy-containing beverages, and plate waste for four consecutive days and to label the pictures with a brief description of the foods. When participants returned the PDA to the researchers, they were asked about their satisfaction with the method and factors that hindered their ability to take photographs of their foods. Trained registered dietitians estimated EI using the digital photography method. During this process, they recorded problems with the pictures that limited their ability to estimate EI from the photographs. No data analyses were planned for this pilot and feasibility study, which served the purpose of identifying barriers to collecting digital photography data in free-living conditions and analysing the data.

Results

Participant characteristics for pilot study 2 are depicted in Table 2. The sample was predominantly Caucasian (67 %) and female (79 %). Pilot study 2 demonstrated that free-living individuals can use a PDA to take pictures of their food selection and plate waste and also identified factors that inhibit application of the method to free-living conditions. Moreover, the study identified ways to modify procedures to overcome these factors. First, participants occasionally forgot to take photographs of their food selection and plate waste.
To address this problem, ecological momentary assessment (EMA) (29) technology was adopted and alarms were installed on the Palm Pilots to remind participants to take photographs of their foods. Second, the researchers could not review the data until the participants returned the PDA at the end of the 4 d study. Therefore, systematic mistakes in taking the pictures could not be corrected, and these mistakes negatively affected data quality for the entire study. To address this problem, future research relied on cellular phones equipped with digital cameras and data transfer capabilities that allowed photographs to be sent to the researchers in near real time. Third, many photographs were difficult to analyse due to poor lighting; consequently, the new data collection devices (cell phones) included a flash, and computer software is being developed to digitally enhance photographs. Fourth, participants occasionally provided poor descriptions of foods in pictures due to the time required to type the description. To correct this problem, participants were instructed to quickly record voice (or text) messages with the cell phone to describe the foods. Finally, participants required experiential training with the data collection devices to take good photographs; therefore, we created a 20–30 min training paradigm.

Discussion

The results of the second pilot study provided important information about factors that inhibit data collection in free-living conditions, and the study identified ways to overcome these limitations. The results of this study were instrumental in guiding development of the RFPM and addressing factors that affected data quality.

Main study

The aim of the main study was to finalise the development and procedures for the RFPM and to obtain reliability and validity data from laboratory and free-living conditions.

Measures

Remote food photography method.
The procedures for the RFPM were finalised based on the results of the two pilot studies and consisted of the following:
(1) Participants were trained to use the cell phones to take pictures of their food selection and plate waste, label the pictures, and send these pictures to the researchers over the cellular network;
(2) Participants received and were asked to respond to four to six automated prompts that reminded them to take photographs and to send the photographs to the researchers. These prompts utilised EMA (29) principles and consisted of emails and text messages;
(3) Participants were provided with a telescoping pen to standardise the distance of the camera from the food and were instructed to take photographs at a 45° angle;
(4) Participants were instructed to record their EI using traditional pen-and-paper methods or a voice recording on the phone in case of technology failure (we also interviewed participants at the end of the data collection period to name foods that were difficult to identify or were poorly described).
Participants were trained on these procedures during a 20–30 min individual experiential training session during which they showed mastery of the procedures before being dismissed. The phones used during the research were Motorola i860 phones (Motorola, Inc., Schaumburg, IL, USA), which were equipped with 1·3-megapixel cameras and 4× digital zoom. Data analysis/estimation of EI followed the validated digital photography of foods method (22,23) and is outlined in the Introduction of the present paper. Participants completed a form before data collection that asked them to rate their comfort level with using a cell phone (1 = not at all comfortable; 5 = extremely comfortable). After data collection, participants completed a form that asked

Table 2.
Baseline characteristics and sex and race distribution of study participants (mean values with their standard errors, or numbers and percentages)

                       Age (years)  Height (cm)  Weight (kg)  BMI (kg/m2)  Male     Female   Caucasian  African American  Other
                       Mean (SEM)   Mean (SEM)   Mean (SEM)   Mean (SEM)   n (%)    n (%)    n (%)      n (%)             n (%)
Pilot study 1          –            –            –            –            –        –        –          –                 –
Pilot study 2 (n 42)   33·9 (1·6)   166·8 (1·7)  70·7 (1·9)   25·5 (0·7)   9 (21)   33 (79)  28 (67)    12 (28)           2 (5)
Main study (n 50)      32·4 (1·5)   170·6 (1·4)  77·2 (1·9)   26·5 (0·5)   23 (46)  27 (54)  35 (70)    15 (30)           0 (0)

about their satisfaction with the RFPM, and their satisfaction with sending the photographs to the researchers (1 = extremely dissatisfied; 6 = extremely satisfied). Participants also rated their perceived ease of using the RFPM (1 = very difficult; 6 = extremely easy), and if they would rather use the RFPM or pen-and-paper records to record intake. Collection of these data began after we started the trial; therefore, data were only available on a subset of the study sample.

Weighed energy intake. EI measured by directly weighing foods served as the gold standard criterion for EI and was used to test the validity of the RFPM over 3 d. EI was measured under two conditions. First, participants ate meals in the Ingestive Behavior Laboratory of the PBRC. Second, a group of participants were provided with pre-weighed food in a cooler and were asked to eat this food in their natural environment for dinner.
The participants returned the cooler the following morning and the remaining food was weighed to calculate EI. Participants used the RFPM in the laboratory and their natural environment.

Participants

Fifty-two participants were randomised and two participants failed to complete the study. Inclusion criteria were: healthy male or female aged 18–54 years with BMI 20–35 kg/m2 (inclusive). Exclusion criteria were: use of medications that influenced eating behaviour, diagnosis of chronic illness, tobacco use, aversion to study foods, and, for females, irregular menstrual cycle and pregnancy. The inclusion and exclusion criteria were lenient to recruit a diverse sample that is representative of people who participate in research or seek weight-loss treatment, increasing the generalisability of the study.

Random assignment and procedures

All participants completed training in the RFPM. Participants were randomly assigned (1:1 ratio) to the dine-in or take-out group. On each test day, all participants consumed a standard breakfast (Table 3). In the dine-in condition, participants completed 3 d of food intake testing, during which lunch and dinner were eaten in the laboratory. In the take-out condition, participants completed 3 d of food intake testing, but they only ate lunch in the laboratory. Dinner was eaten out of the cooler in free-living conditions. Participants used the RFPM during all lunch and dinner meals; therefore, two EI measures were collected: (1) weighed EI; (2) EI estimated with the RFPM. In the dine-in and take-out conditions, the same types and amounts of foods were provided (Table 3) and test days were 2–7 d apart. Excess food was provided to ensure plate waste and provide a conservative test of the RFPM, since the presence of plate waste creates more opportunity for error and is not always present in people's natural environment, i.e. many people eat all of their portions.
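The weighed ('weigh-back') criterion reduces to grams eaten multiplied by each food's energy density. A minimal sketch, using energy densities from Table 3 but hypothetical plate-waste amounts:

```python
def weighed_ei_kj(foods):
    # foods: (grams_served, grams_returned, kJ_per_gram) per food item;
    # EI is the sum over foods of grams eaten times energy density.
    return sum((served - returned) * kj_per_g for served, returned, kj_per_g in foods)

# Hypothetical cooler dinner: spaghetti with Prego sauce (3021 kJ per
# 565 g serving, Table 3) and tropical fruit salad (2008 kJ per 732 g).
dinner = [
    (565.0, 150.0, 3021.0 / 565.0),
    (732.0, 400.0, 2008.0 / 732.0),
]
print(round(weighed_ei_kj(dinner)))  # 3130 kJ eaten
```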
Data analytic plan

The reliability of the RFPM was examined by calculating ICC for EI measured with the RFPM and directly weighing foods over the 3 d of testing in both free-living and laboratory conditions. Agreement among the three registered dietitians who scored the photographs was examined with ICC (27 % of meals were over-sampled). For the following analyses, EI data were averaged over the 3 d of testing. Three series of analyses were conducted to test the validity of the RFPM. First, EI estimated with the RFPM was compared with weighed EI measured in the Ingestive Behavior Laboratory. These analyses relied on data from the dine-in group (their lunch and dinner data were combined). A second laboratory-based test of the RFPM relied on the take-out group's lunch data. Third, the take-out group's dinner data were analysed to test the validity of the RFPM in free-living conditions. The second and third series of analyses allowed examination of the accuracy of the RFPM in laboratory v. free-living conditions with the same sample of participants. For each series of analyses, Pearson correlation coefficients were calculated between the RFPM and weigh-back method, and the Bland–Altman technique (30) was used to determine if: (1) the methods differ significantly from each other; (2) the accuracy of the RFPM varies with the amount of food eaten (EI). The validity of the RFPM was also examined as a function of body mass or weight (kg), age and sex. Error was calculated by determining the percentage difference between EI estimated with the RFPM and directly weighed foods. Error was regressed against body weight with and without sex in the model to test for significance of slopes. A similar analysis was conducted with age.
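The Bland–Altman step can be sketched as follows: the per-meal method difference is compared against the pairwise mean, the mean difference is the bias, and bias ± 1·96 SD of the differences gives the 95 % limits of agreement. Names and numbers below are illustrative, not study data:

```python
import math

def bland_altman(method_a, method_b):
    # Returns (bias, lower LoA, upper LoA) for paired measurements,
    # where the limits of agreement are bias +/- 1.96 * SD of differences.
    diffs = [a - b for a, b in zip(method_a, method_b)]
    n = len(diffs)
    bias = sum(diffs) / n
    sd = math.sqrt(sum((d - bias) ** 2 for d in diffs) / (n - 1))
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Illustrative: RFPM estimates v. weighed EI (kJ) for three meals.
rfpm = [900.0, 1100.0, 1000.0]
weighed = [1000.0, 1150.0, 1050.0]
bias, lo, hi = bland_altman(rfpm, weighed)
print(round(bias, 1))  # negative bias: the first method underestimates
```

Whether the bias varies with intake is then checked by inspecting (or regressing) the differences across the pairwise means.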
These analyses were conducted for each of the three samples outlined in the previous paragraph, namely: (1) the combined lunch and dinner data from the dine-in group; (2) the take-out group's lunch data, which were collected in the laboratory; (3) the take-out group's dinner data that were collected in free-living conditions.

Results

Participant characteristics for the main study are depicted in Table 2. The sample consisted of 54 % females and 70 % Caucasians.

Ecological momentary assessment and missing data. A total of 157 EMA prompts were sent near dinner time to participants in the take-out group and all were received, though thirty-seven (23·6 %) were delivered at the incorrect time (prompts were sent via an email server that delayed delivery of some messages). Participants responded to 118 (98·3 %) of the 120 prompts that were delivered on time. Each group, dine-in and take-out, consumed 150 meals that were measured with the RFPM and by directly weighing foods. For the dine-in group, all 150 meals were consumed in the Ingestive Behavior Laboratory. When using the weigh-back method, data from three meals were not analysed due to technical error or protocol violations, including eating only certain ingredients from the sandwiches, which made it impossible to obtain weighed intake. Four meals were not analysed due to poor picture quality or missing pictures. The take-out group consumed 150 meals (seventy-five meals in the laboratory and seventy-five meals in free-living conditions). In the laboratory, data from seven meals were not analysed: one meal had missing pictures; three meals had poor-quality pictures; three meals involved protocol violations. In free-living conditions, data from eleven meals were not analysed: five meals had poor-quality pictures; six
Table 3. Description (energy and macronutrient content) of the foods served during breakfast, lunch and dinner during the main study

                                                Serving size (g)           Energy (kJ)       Energy (kcal)  Fat (g)         Protein (g)     Carbohydrate (g)
Breakfast
  Nutri-Grain® bar (one bar)                    37·0                       586               140            3               1               26
  Skimmed milk (one carton)                     230·8                      377               90             0               8·1             11·4
  Sun-Maid® raisins (one box)                   43·3                       540               129            0               1·3             34
  Totals                                        311·1                      1502              359            3               10·4            71·4
Lunch*
  Ham sandwiches (three hoagies†)               708·0                      4858              1161           36·8            81·4            123·2
  Roast beef sandwiches (three hoagies†)        708·0                      5243              1253           36·1            105·5           120·4
  Turkey sandwiches (three hoagies†)            708·0                      4653              1112           28·3            89·2            120·4
  Ruffles® reduced-fat potato chips             122·8                      2544              608            26·0            8·7             82·4
  Famous Amos® chocolate chip cookies           224·0                      4845              1158           54·0            15·5            154·6
  Totals                                        1054·8                     12 042 to 12 631  2878 to 3019   108·3 to 116·8  105·6 to 129·7  357·4 to 360·2
Dinner
  Spaghetti noodles with Prego sauce            300 g noodles, 265 g sauce 3021              722            12·4            19·1            131·1
  Spaghetti noodles with Alfredo sauce          300 g noodles, 265 g sauce 3858              922            45·5            19              97·7
  Arezzo® meatballs (ten meatballs)             480                        2021              483            42              16·6            8·4
  Pepperidge Farm® Texas Toast with cheese (four slices)  192              3012              720            44              16              68
  Dole® tropical fruit salad (one container)    732                        2008              480            0               0               126
  Pepperidge Farm® iced brownie (four brownies) 296                        5021              1200           48·4            14·4            189·2
  Totals                                        2830                       15 920            3805           179·9           66·1            522·7

* Participants selected and were served one of the three sandwich choices for lunch.
† Elongated rolls.
meals had protocol violations. Protocol violations included not eating the food provided by the researchers due to eating other foods.

Reliability of the remote food photography method. Agreement was high among the three registered dietitians who used photographs from the RFPM to estimate EI. The ICCs for food selection, plate waste and EI were 0·99 (95 % CI 0·99, 0·99), 0·91 (95 % CI 0·87, 0·94) and 0·88 (95 % CI 0·81, 0·91), respectively. The ICCs for intake (kJ) of fat, carbohydrate and protein were 0·92 (95 % CI 0·88, 0·94), 0·85 (95 % CI 0·77, 0·89) and 0·85 (95 % CI 0·79, 0·90), respectively. These values are consistent with inter-rater agreement when the digital photography method is used in cafeteria settings (22,23,25). EI estimated with the RFPM was reliable over the 3 d of testing. Based on the dine-in group's combined lunch and dinner data, the RFPM produced reliable estimates of EI in laboratory conditions (ICC 0·62; P < 0·0001). Moreover, data from the dinner meal of the take-out group indicated that the RFPM produced reliable estimates of EI in free-living conditions (ICC 0·68; P < 0·0001). These values are consistent with ICCs calculated from directly weighing food intake in the laboratory over a number of days (31). In the present study, the ICCs for food intake measured in the laboratory were 0·68 and 0·70 for lunch and dinner, respectively (P < 0·0001).

Validity of the remote food photography method. Participants took and sent photographs of good quality to the researchers. Fig. 1 contains representative pictures from the study.
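The inter-rater agreement above is expressed as intraclass correlation coefficients. As a rough illustration of the underlying computation (a sketch only; the authors' actual analysis software is not specified here), a two-way random-effects, single-measure ICC can be derived from a targets-by-raters matrix of energy estimates:

```python
def icc2_1(ratings):
    """Two-way random-effects, single-measure ICC (ICC(2,1)).

    ratings: list of rows, one row per target (e.g. meal photograph),
             one column per rater (e.g. dietitian).
    """
    n = len(ratings)      # number of targets
    k = len(ratings[0])   # number of raters
    grand = sum(sum(row) for row in ratings) / (n * k)
    row_means = [sum(row) / k for row in ratings]
    col_means = [sum(ratings[i][j] for i in range(n)) / n for j in range(k)]
    ss_total = sum((x - grand) ** 2 for row in ratings for x in row)
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)
    ss_err = ss_total - ss_rows - ss_cols
    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n
    )
```

Perfect agreement among raters yields an ICC of 1·0, and systematic offsets between raters reduce the coefficient.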
The first series of analyses tested the validity of the RFPM in laboratory conditions and included the dine-in group's combined lunch and dinner data. EI estimated with the RFPM correlated highly with weighed EI, though the RFPM significantly underestimated EI (−368 kJ (−88 kcal); −4·7 %; Table 4). Bland–Altman analysis indicated that this bias was consistent over different levels of EI (Fig. 2). The second series of analyses tested the validity of the RFPM in laboratory conditions by analysing the take-out group's lunch data. EI estimated with the RFPM correlated highly with weighed EI, and the RFPM non-significantly underestimated EI (−151 kJ (−36 kcal); −5·5 %; Table 4). Bland–Altman analysis indicated that this bias was consistent over different levels of EI (Fig. 3). The third series of analyses tested the validity of the RFPM in free-living conditions by analysing the take-out group's dinner data. EI estimated with the RFPM correlated highly with weighed EI, and the RFPM significantly underestimated EI (−406 kJ (−97 kcal); −6·6 %; Table 4). Bland–Altman analysis indicated that this bias was consistent over different levels of EI (Fig. 4).

Validity of the remote food photography method as a function of body mass (kg), age and sex. Regression analysis indicated that there was no significant association between body weight (kg) and the RFPM error (percentage difference between EI estimated with the RFPM and weighed EI) with (adjusted R² −0·07 to 0·08; P values > 0·45) or without (adjusted R² −0·04 to 0·01; P values > 0·30) sex in the models. Age also was not associated with error (adjusted R² −0·07 to 0·08; P values > 0·65). Error did not differ significantly between men and women in the take-out group at lunch (t(21) = −0·73; P = 0·47) or dinner (t(22) = −0·75; P = 0·46). Error was significantly larger for women (−9·2 %) compared with men (−0·7 %) in the dine-in group (t(23) = −2·08; P = 0·05).

Satisfaction ratings.
Thirty-five participants had data on their comfort level (1 = not at all comfortable; 5 = extremely comfortable) with using cell phones before data collection. Thirty (85·7 %) participants rated their comfort level as 4 or higher. After data collection, forty-seven participants had satisfaction data (1 = extremely dissatisfied; 6 = extremely satisfied): thirty-seven (78·8 %) participants rated their satisfaction with the RFPM as 5 or higher and forty (85 %) participants rated their satisfaction with sending photographs to the researchers as 5 or higher. Last, forty-four (93·6 %) participants rated the ease of use of the RFPM as 5 or higher, and almost all participants (forty-four; 93·6 %) indicated that they would rather use the RFPM than pen and paper to record food intake.

Discussion

The RFPM was found to produce reliable and valid estimates of EI in both laboratory and free-living conditions. The degree of error for the RFPM was very small (−4·7 to −6·6 %) when compared with the error of 37 % or more associated with self-report methods (10–12), and the error was similar in both laboratory and free-living conditions. Moreover, error associated with the RFPM was not found to vary by the amount of energy/food eaten or body weight, which is a significant liability of self-report methods (5).

Fig. 1. Representative pictures of food selection (a) and plate waste (b) taken by a participant and sent to the researchers via the wireless network.
Error also did not differ by age, and only one of three comparisons found that error differed by sex (this comparison relied on data from the laboratory). The majority of participants rated their satisfaction with the RFPM and with sending pictures to the researchers

Table 4. Summary of analyses to test the validity of energy intake (EI) estimated with the remote food photography method (RFPM) compared with directly weighed foods in laboratory and free-living conditions (main study)†

Sample (location of data collection)     | RFPM EI, kJ (SEM) | Weighed EI, kJ (SEM) | RFPM EI, kcal (SEM) | Weighed EI, kcal (SEM) | Difference, kJ (SEM) | Difference, kcal (SEM) | Pearson r | t     | df | P     | Difference of means (%)
EI (lunch and dinner, dine-in group), laboratory | 7448 (469) | 7816 (435)        | 1780 (112)          | 1868 (104)             | −368 (174)           | −88 (42)               | 0·93***   | −2·10 | 24 | 0·046 | −4·7
EI (lunch, take-out group), laboratory           | 2590 (222) | 2741 (209)        | 619 (53)            | 655 (50)               | −151 (81)            | −36 (19)               | 0·93***   | −1·86 | 22 | 0·076 | −5·5
EI (dinner, take-out group), free-living         | 5707 (481) | 6113 (439)        | 1364 (115)          | 1461 (105)             | −406 (159)           | −97 (38)               | 0·95***   | −2·57 | 23 | 0·017 | −6·6

*** P < 0·0001.
† EI estimated with the RFPM was compared with weighed EI with Pearson correlations and dependent t tests.

Fig. 2. Bland and Altman analysis comparing the remote food photography method (RFPM) with weighed energy intake (EI) in the laboratory.
Lunch and dinner data from the dine-in group were included in these analyses. The RFPM bias in estimating EI was consistent over different levels of EI (R² 0·03; adjusted R² −0·01; P = 0·39).

Fig. 3. Bland and Altman analysis comparing the remote food photography method (RFPM) with weighed energy intake (EI) in laboratory conditions by analysing lunch data from the take-out group. The RFPM bias in estimating EI was consistent over different levels of EI (R² 0·03; adjusted R² −0·02; P = 0·45).

favourably and almost all (93·6 %) participants indicated that they would rather use the RFPM compared with pen-and-paper records to record food intake. Thus, the RFPM appears to be a promising new method for estimating group and individual EI.

Implications of the findings. The RFPM has important implications for both research and clinical practice. Obtaining accurate estimates of food and macronutrient intake from free-living individuals is important for researchers who investigate the nutritional adequacy of the diet and interventions to modify food intake and body weight. Additionally, the RFPM allows individuals to receive virtually immediate feedback about their energy and micro- and macronutrient intake from professionals in a remote location. Individuals can obtain clinical and dietary recommendations based on food intake data that are reviewed in near real time, and they do not need to go to a clinic to receive this feedback.
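The Bland and Altman analyses referred to in Figs. 2–4 plot the difference between the two methods against their mean. The bias, 95 % limits of agreement and the trend of the bias across intake levels can be sketched as follows (a minimal illustration with made-up numbers, not the study data):

```python
def bland_altman(method_a, method_b):
    """Return bias, 95 % limits of agreement, and the slope of the
    difference-vs-mean regression (a non-zero slope would indicate
    that bias varies with the magnitude of intake)."""
    n = len(method_a)
    diffs = [a - b for a, b in zip(method_a, method_b)]
    means = [(a + b) / 2 for a, b in zip(method_a, method_b)]
    bias = sum(diffs) / n
    sd = (sum((d - bias) ** 2 for d in diffs) / (n - 1)) ** 0.5
    loa = (bias - 1.96 * sd, bias + 1.96 * sd)
    # ordinary least-squares slope of the differences on the means
    mx = sum(means) / n
    slope = sum((m - mx) * (d - bias) for m, d in zip(means, diffs)) / \
        sum((m - mx) ** 2 for m in means)
    return bias, loa, slope

# Hypothetical EI pairs (kJ): one method consistently 300 kJ below the other
rfpm = [6700, 7400, 8100, 5600, 9000]
weighed = [7000, 7700, 8400, 5900, 9300]
bias, loa, slope = bland_altman(rfpm, weighed)
```

A slope near zero in the difference-vs-mean regression corresponds to the finding that the RFPM bias was consistent over different levels of EI.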
The RFPM is a promising method to estimate EI with innovative technology. Nevertheless, our experiences during these studies remind us of the complexities of collecting EI data from free-living individuals and the barriers that negatively affect data quality, as described in the following paragraphs. Therefore, the RFPM continues to be modified to address these barriers, and these modifications include the following. First, the RFPM relies on the ability of participants to remember to take photographs of their food selection and plate waste. The use of EMA methodology provided automated prompts reminding participants to take photographs, but missing data are unavoidable. To reduce missing data, it is clear that secondary methods are needed to capture EI data when the participant forgets to take photographs or when photographing food is not possible. In current and future studies, we include explicit instructions for participants to record food intake with pen and paper or to use the phone to make voice recordings describing the food and portion size. Additionally, we monitor incoming data and contact participants immediately if their data are of poor quality or if photographs are missing. Participants have responded positively to these procedures, since the procedures indicate that the researchers are invested in data collection and that participants are accountable. Second, participants' food descriptions do not always convey important details about the food item, such as the method of preparation. Therefore, when participants return the cell phone to the researchers, they answer questions about foods that are difficult to identify and that have poor descriptions. During this meeting, the participant also provides the researchers with pen-and-paper records or voice recordings that were used to record EI in the case of technology failure. Third, a considerable amount of personnel time is required to schedule EMA prompts, monitor incoming data, etc.
Therefore, the methodology is being further modified and computer applications are being developed to: (1) automatically deliver EMA prompts and modify the prompts based on the participants' adherence; (2) monitor incoming data and alert the researcher and participant when data appear to be missing, for example, when an odd number of photographs is received; (3) digitally enhance the quality and lighting of photographs; (4) automatically estimate the amount of food represented in pictures using computer imaging algorithms. These aims are part of ongoing research and development to facilitate data collection in free-living conditions where participants eat ad libitum and to minimise under-recording or participants failing to take and send photographs. Also, the computer imaging techniques being developed will determine the angle and distance of the camera phone from the food, which will facilitate accurate estimates of EI even when foods are eaten off plates that vary in size. To accomplish this aim, participants will include a standard-size object in the photograph, for example, a dollar bill or six-inch ruler. An additional aim of the ongoing research is to determine if participants under-eat when they use the RFPM. Under-eating is known to occur when food intake is monitored using other methods, for example, Goris et al. (12).

Strengths and limitations. The study has important strengths, including a diverse study sample and excellent retention (only two participants failed to complete the protocol). Additionally, unlike in many other studies, participants considered to be under-reporters were not excluded from the data analysis. Last, the RFPM was tested in controlled laboratory and free-living conditions in the same sample (the take-out group), which allows direct comparisons of accuracy between laboratory and free-living conditions. The study has important limitations that also must be considered.
First, participants in the take-out group of the main study collected data during free-living conditions, though they ate food from a cooler provided by the researchers. Consequently, the variety of food was limited and not necessarily representative of their habitual daily intake. Although these conditions were contrived, they were necessary since other criterion measures of EI during free-living conditions, for example, DLW, were not feasible for this study. Second, EI was only measured for 3 d.

Fig. 4. Bland and Altman analysis comparing the remote food photography method (RFPM) with weighed energy intake (EI) in free-living conditions by analysing the dinner data from the take-out group. The RFPM bias in estimating EI was consistent over different levels of EI (R² 0·08; adjusted R² 0·04; P = 0·18).

Conclusions

The studies reported here describe the development of, and initial support for, the reliability and validity of the RFPM, as well as high user satisfaction ratings. Ongoing research is refining the methodology and testing the RFPM in free-living conditions against EI measured with DLW. Our findings suggest that the RFPM is a promising new method for estimating EI in free-living conditions.
Acknowledgements

This work was partially supported by the National Institutes of Health and the National Science Foundation (National Institute of Diabetes and Digestive and Kidney Diseases (NIDDK) grant 1 K23 DK068052-01A2; National Heart Lung and Blood Institute, National Institute on Aging, and National Science Foundation grant 1 R21 AG032231-01) and a Clinical Nutrition Research Unit Center grant (1P30 DK072476) entitled 'Nutritional Programming: Environmental and Molecular Interactions' sponsored by the NIDDK. The content is solely the responsibility of the authors and does not necessarily represent the official view of the National Institutes of Health. There are no conflicts of interest associated with the study. C. K. M. was the study principal investigator and laboratory director; H. H. conducted the statistical analyses; S. M. C. was the laboratory manager who collected and managed data; H. R. A. developed the digital photography computer application and data collection software; C. C. supervised the dietary aspects of the protocol; S. D. A. was involved in study design, collection of data, and results interpretation.

References

1. Livingstone MB & Black AE (2003) Markers of the validity of reported energy intake. J Nutr 133, Suppl. 3, 895S–920S.
2. Schoeller DA (1990) How accurate is self-reported dietary energy intake? Nutr Rev 48, 373–379.
3. de Jonge L, DeLany JP, Nguyen T, Howard J, Hadley EC, Redman LM & Ravussin E (2007) Validation study of energy expenditure and intake during calorie restriction using doubly labeled water and changes in body composition. Am J Clin Nutr 85, 73–79.
4. O'Neil PM (2001) Assessing dietary intake in the management of obesity. Obes Res 9, Suppl. 5, 361S–366S, 373S–374S.
5. Pikholz C, Swinburn B & Metcalf P (2004) Under-reporting of energy intake in the 1997 National Nutrition Survey. N Z Med J 117, U1079.
6.
Samuel-Hodge CD, Fernandez LM, Henriquez-Roldan CF, Johnston LF & Keyserling TC (2004) A comparison of self-reported energy intake with total energy expenditure estimated by accelerometer and basal metabolic rate in African-American women with type 2 diabetes. Diabetes Care 27, 663–669.
7. Hendrickson S & Mattes R (2007) Financial incentive for diet recall accuracy does not affect reported energy intake or number of underreporters in a sample of overweight females. J Am Diet Assoc 107, 118–121.
8. Beaton GH, Burema J & Ritenbaugh C (1997) Errors in the interpretation of dietary assessments. Am J Clin Nutr 65, 1100S–1107S.
9. Tran KM, Johnson RK, Soultanakis RP & Matthews DE (2000) In-person vs telephone-administered multiple-pass 24-hour recalls in women: validation with doubly labeled water. J Am Diet Assoc 100, 777–783.
10. Schoeller DA, Bandini LG & Dietz WH (1990) Inaccuracies in self-reported intake identified by comparison with the doubly labeled water method. Can J Physiol Pharmacol 68, 941–949.
11. Bandini LG, Schoeller DA, Cyr HN & Dietz WH (1990) Validity of reported energy intake in obese and nonobese adolescents. Am J Clin Nutr 52, 421–425.
12. Goris AH, Westerterp-Plantenga MS & Westerterp KR (2000) Undereating and underrecording of habitual food intake in obese men: selective underreporting of fat intake. Am J Clin Nutr 71, 130–134.
13. Rennie KL, Coward A & Jebb SA (2007) Estimating under-reporting of energy intake in dietary surveys using an individualised method. Br J Nutr 97, 1169–1176.
14. Koebnick C, Wagner K, Thielecke F, et al. (2005) An easy-to-use semiquantitative food record validated for energy intake by using doubly labelled water technique. Eur J Clin Nutr 59, 989–995.
15. Beasley J, Riley WT & Jean-Mary J (2005) Accuracy of a PDA-based dietary assessment program. Nutrition 21, 672–677.
16.
Yon BA, Johnson RK, Harvey-Berino J, Gold BC & Howard AB (2007) Personal digital assistants are comparable to traditional diaries for dietary self-monitoring during a weight loss program. J Behav Med 30, 165–175.
17. Yon BA, Johnson RK, Harvey-Berino J & Gold BC (2006) The use of a personal digital assistant for dietary self-monitoring does not improve the validity of self-reports of energy intake. J Am Diet Assoc 106, 1256–1259.
18. Blake AJ, Guthrie HA & Smiciklas-Wright H (1989) Accuracy of food portion estimation by overweight and normal-weight subjects. J Am Diet Assoc 89, 962–964.
19. Martin CK, Anton SD, York-Crowe E, Heilbronn LK, VanSkiver C, Redman LM, Greenway FL, Ravussin E, Williamson DA & for the Pennington CALERIE Team (2007) Empirical evaluation of the ability to learn a calorie counting system and estimate portion size and food intake. Br J Nutr 98, 439–444.
20. Comstock EM, St Pierre RG & Mackiernan YD (1981) Measuring individual plate waste in school lunches. Visual estimation and children's ratings vs. actual weighing of plate waste. J Am Diet Assoc 79, 290–296.
21. Wolper C, Heshka S & Heymsfield SB (1995) Measuring food intake: an overview. In Handbook of Assessment of Methods for Eating Behaviors and Weight-related Problems, pp. 215–240 [D Allison, editor]. Thousand Oaks, CA: Sage Publications.
22. Williamson DA, Allen HR, Martin PD, Alfonso A, Gerald B & Hunt A (2004) Digital photography: a new method for estimating food intake in cafeteria settings. Eat Weight Disord 9, 24–28.
23. Williamson DA, Allen HR, Martin PD, Alfonso AJ, Gerald B & Hunt A (2003) Comparison of digital photography to weighed and visual estimation of portion sizes. J Am Diet Assoc 103, 1139–1145.
24. United States Department of Agriculture & Agricultural Research Service (2000) Continuing Survey of Food Intakes by Individuals, 1994–1996, 1998. Washington, DC: USDA & ARS.
25.
Martin CK, Newton RL Jr, Anton SD, Allen HR, Alfonso A, Han H, Stewart T, Sothern M & Williamson DA (2007) Measurement of children's food intake with digital photography and the effects of second servings upon food intake. Eat Behav 8, 148–156.
26. Wang DH, Kogashiwa M & Kira S (2006) Development of a new instrument for evaluating individuals' dietary intakes. J Am Diet Assoc 106, 1588–1593.
27. Wang DH, Kogashiwa M, Ohta S & Kira S (2002) Validity and reliability of a dietary assessment method: the application of a digital camera with a mobile phone card attachment. J Nutr Sci Vitaminol (Tokyo) 48, 498–504.
28. Kikunaga S, Tin T, Ishibashi G, Wang DH & Kira S (2007) The application of a handheld personal digital assistant with camera and mobile phone card (Wellnavi) to the general population in a dietary survey. J Nutr Sci Vitaminol (Tokyo) 53, 109–116.
29. Stone AA & Shiffman S (1994) Ecological momentary assessment (EMA) in behavioral medicine. Ann Behav Med 16, 199–202.
30. Bland JM & Altman DG (1986) Statistical methods for assessing agreement between two methods of clinical measurement. Lancet i, 307–310.
31. Martin CK, Williamson DA, Geiselman PJ, Walden H, Smeets M, Morales S & Redmann S Jr (2005) Consistency of food intake over four eating sessions in the laboratory. Eat Behav 6, 365–372.
work_5oc24ht65ng7dpflx52glf3aea ---- Fine-scale monitoring of fish movements and multiple environmental parameters around a decommissioned offshore oil platform: A pilot study in the North Sea

Ocean Engineering (Contents lists available at ScienceDirect; journal homepage: www.elsevier.com/locate/oceaneng)

Fine-scale monitoring of fish movements and multiple environmental parameters around a decommissioned offshore oil platform: A pilot study in the North Sea

Toyonobu Fujii*, Alan J. Jamieson
Oceanlab, School of Biological Sciences, University of Aberdeen, Main Street, Newburgh, Aberdeenshire, AB41 6AA, UK

ARTICLE INFO

Keywords: Artificial reefs; Environmental monitoring; Offshore oil/gas platforms; Underwater observatory; Fish movement; North Sea

ABSTRACT

A new underwater monitoring system was constructed using time-lapse photography and a suite of oceanographic instruments to characterise the dynamic relationships between changing environmental conditions, biological activities and the physical presence of offshore infrastructure. This article reports the results from a pilot study on fine-scale monitoring of fish movements in relation to changes in multiple environmental parameters observed at an offshore oil platform in the North Sea. Temporal changes in the number of saithe Pollachius virens were readily observed, with a strong indication of a diurnal rhythm of vertical movements.
Key environmental parameters such as temperature, salinity, currents, tidal cycle, illumination, chlorophyll and dissolved oxygen also varied spatially (i.e. at different depths) and/or temporally. If the monitoring system is to be deployed systematically at multiple offshore locations for longer durations as appropriately controlled experiments, this approach may greatly help understand the influence of redundant offshore man-made structures on the marine ecosystem.

1. Introduction

Any decisions on the issues of decommissioning of offshore infrastructure will need to be made based on an array of selection criteria including, but not limited to, environmental, health and safety, financial, socioeconomic, technological and political considerations (Fowler et al., 2014; Wilkinson et al., 2016). In the North Sea, the question of how aging offshore infrastructure is utilised by different marine fauna has become increasingly important in terms of the environmental considerations. This is because recent studies have suggested that the physical presence of offshore platforms, together with associated subsea infrastructure such as pipelines, wellheads and manifolds, may have beneficial effects on present-day ecosystem functioning as they may serve as artificial reefs that attract marine life (Stachowitsch et al., 2002; Whomersley and Picken, 2003), including species of conservation importance, e.g. the cold-water coral Lophelia pertusa (Gass and Roberts, 2006), and thereby increase the number of economically important fishes in the proximity of these foundations (Love and Westphal, 1990; Stanley and Wilson, 1991, 1997; Fabi et al., 2004; Love and York, 2005; Love et al., 2006; Jablonski, 2008). The North Sea has long been a vital ground for the exploitation of natural resources, supporting one of the world's most active fisheries as well as oil and gas exploration, which has led to the installation of over 500 offshore platforms across the region, primarily since the 1960s.
In this region, commercially important fishes such as saithe Pollachius virens, cod Gadus morhua and haddock Melanogrammus aeglefinus have been known to show coherent patterns in their local distributions, where significantly higher numbers of individuals can be found in the immediate vicinity of offshore structures when compared with surrounding open soft-bottom areas (Valdemarsen, 1979; Aabel et al., 1997; Løkkeborg et al., 2002; Soldal et al., 2002; Fujii, 2015). Although there is also a growing body of evidence to confirm that a variety of fish species aggregate around artificial hard structures in marine environments worldwide (e.g. Bohnsack and Sutherland, 1985; Picken and McIntyre, 1989; Aabel et al., 1997; Stanley and Wilson, 1997; Baine, 2001; Løkkeborg et al., 2002; Soldal et al., 2002; Fabi et al., 2004; Love and York, 2005; Wilhelmsson et al., 2006), the causality of such attraction effects has not yet been satisfactorily identified. Several mechanisms may be responsible for the increase, for example, enhanced food availability (e.g. Page et al., 2007), shelter from predation (e.g. Hixon and Beets, 1989) or a reference point (e.g. Soria et al., 2009), but it still remains unclear whether the fish individuals merely concentrate around offshore artificial structures from surrounding areas or whether such effects can facilitate the reproductive ability of fish populations and thereby produce any net increase in fish stock sizes overall. To ultimately determine the ecological consequences of alternative decommissioning options for obsolete offshore infrastructures, it would be necessary to design appropriate ecological field experiments which allow identification of both the causal mechanisms of the attraction effect and the temporal nature of fish movements in association with the physical presence of offshore structures. However, there has been a lack of fundamental information on how the life cycle and/or the temporal variations in the movements of fish populations around offshore structures are related to any changes in key environmental factors (environmental cues) such as water temperature, salinity, currents, tidal cycle, light intensity (illumination), surface primary production (chlorophyll a), prey availability, dissolved oxygen and so on, which in turn hampers progress in advancing research in this field. The aim of this article is to describe the design of a newly developed underwater monitoring system which uses time-lapse digital photography and a suite of oceanographic instruments in an attempt to provide sufficient background measurements to characterise the dynamic relationships between changing environmental conditions and biological activities in association with offshore oil/gas platforms at fine temporal resolution. Using the monitoring system, a pilot study was conducted at three different sampling depths in the immediate vicinity of an offshore oil platform in the North Sea, the results of which are presented thereafter.

http://dx.doi.org/10.1016/j.oceaneng.2016.09.003
Received 20 January 2016; Received in revised form 30 June 2016; Accepted 6 September 2016; Available online 16 September 2016.
* Corresponding author. E-mail address: t.fujii@abdn.ac.uk (T. Fujii).
Ocean Engineering 126 (2016) 481–487. 0029-8018/© 2016 The Authors. Published by Elsevier Ltd. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by/4.0/).
Based on the findings from the pilot study, together with the relevant literature, implications for the ecological influence of obsolete offshore platforms on the marine ecosystem are discussed. If the fine-scale monitoring system is to be deployed systematically at multiple offshore locations over longer periods of time as appropriately controlled experiments, this approach may provide a useful tool to help identify the precise role of large offshore man-made structures in the ecology of fish populations in the North Sea.

2. Material and methods

2.1. Study site

The pilot study was conducted at BP's Miller platform situated in the northern central North Sea (Fig. 1). The platform was installed in 1991 in a water depth of approximately 103 m on a dense sandy seafloor. The platform has a large steel jacket supporting structure (eight-legged) weighing approximately 18,600 t with a size of 71×55 m at the base on the seabed, tapering to 71×30 m at the top just under the topsides modules. The Miller platform ceased production in September 2007 and the initial phase of decommissioning has already been completed. A detailed decommissioning programme has also been planned for dealing with the topsides, jacket and drill cuttings pile, and the platform is therefore currently being maintained with minimum on-board personnel, representing some of the redundant offshore oil/gas installations found in the region.

2.2. A new underwater monitoring system

An autonomous underwater monitoring system was newly designed and constructed in an attempt to describe and characterise the dynamic relationship between biological activities and an offshore oil platform in relation to fine-scale changes in various environmental parameters. This system comprised a surface float, three identical observatory frames (Fig. 2) and ballast weights (approximately 1400 kg), all attached to a single mooring rope (Fig. 3). Each observatory unit contained a suite of oceanographic instrumentations (Fig.
2), including: (1) a 10-megapixel digital stills camera and strobe (OE14-408 & OE11-442: Kongsberg Maritime, UK) programmed to take time-lapse photographs of the sea floor or the water column and associated fauna at 60 min intervals; (2) a 3D current meter (Aquadopp Current Meter: Nortek, Norway) programmed to measure current speed (m/s) and direction (degrees) at 10 min intervals; (3) a CTD probe (RBRmaestro: RBR Ltd., Canada) programmed to measure salinity, water temperature (°C) and hydrostatic pressure (dbar) at 10 min intervals; (4) a PAR (photosynthetically active radiation) sensor (PAR Quantum 192SA: LI-COR, Inc., US) programmed to measure the photosynthetic photon flux density (light intensity) (μmol/m2/s) at 10 min intervals; (5) a fluorometer (Chlorophyll Fluorometer: Seapoint Sensors, Inc., US) programmed to measure phytopigments (μg/l), which can be related to the organic matter content (chlorophyll a) of the particle load descending from the surface water, at 10 min intervals; and (6) a DO (dissolved oxygen) sensor (Fast Response DO Sensor: Oxyguard, Denmark) programmed to measure dissolved oxygen levels (ml/l) available to the local marine fauna at 10 min intervals.
Fig. 1. Map of the North Sea showing the locations of offshore oil/gas platforms (the filled orange circles) (Source: OSPAR, 2012). The black circle and cross symbol indicate the location of the Miller platform. (For interpretation of the references to color in this figure legend, the reader is referred to the web version of this article.)
Fig. 2. Schematic diagram of the underwater observatory unit designed and constructed for this study.
T. Fujii, A.J. Jamieson Ocean Engineering 126 (2016) 481–487 482
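The instrument suite and its logging intervals can be summarised as a small configuration table. The following Python sketch is illustrative only (the dictionary keys are ours; the intervals come from the text) and computes how many records each instrument logs per day:

```python
# Sampling intervals for each instrument on one observatory frame,
# as described in the text (minutes between records).
SAMPLING_INTERVALS_MIN = {
    "stills_camera": 60,   # time-lapse photographs
    "current_meter": 10,   # current speed and direction
    "ctd": 10,             # salinity, temperature, pressure
    "par_sensor": 10,      # light intensity
    "fluorometer": 10,     # chlorophyll a proxy
    "do_sensor": 10,       # dissolved oxygen
}

def records_per_day(interval_min: int) -> int:
    """Number of records one instrument logs in 24 hours."""
    return 24 * 60 // interval_min

for name, interval in SAMPLING_INTERVALS_MIN.items():
    print(f"{name}: {records_per_day(interval)} records/day")
```

At these rates the camera produces 24 frames per day and each 10-minute sensor 144 records per day, which gives a rough sense of the data volume behind the battery and memory budgets described next.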
Within each observatory frame, each of the following three instrument units has sufficient battery and memory capacity to operate autonomously and independently of the others: (1) the digital stills camera and strobe for up to a maximum of three months; (2) the 3D current meter for up to six months; (3) the remaining oceanographic instruments connected to the RBRmaestro data logger (i.e. the CTD probe, PAR sensor, fluorometer and DO sensor) for up to nine months. Once deployed, the present programme therefore allows the monitoring system to remain under water for approximately three months and, where a longer deployment is desirable, the observatory system can be briefly recovered to the surface for service, calibration and data download between deployments.

2.3. A pilot study

Before implementing full-scale monitoring, a small-scale test deployment was conducted at the Miller platform between 21st and 23rd June 2014 using the above monitoring system, which was anchored to the bottom at one end and tethered to the lower deck of the platform at the other (Fig. 3). The three observatory frames were deployed at depths of approximately 10, 50 and 100 m from the sea surface (“surface”, “mid-depth” and “bottom”, respectively), and each observatory unit had floatation devices connected to its frame to make the entire monitoring system positively buoyant and thus in suspension throughout the water column (Fig. 3). The pilot study was conducted in two consecutive deployments: in the first deployment, only the digital stills cameras were operated, from 12 PM on 21st June till 10 AM on 22nd June (22 h), whereas in the second deployment, the full set of oceanographic instruments was operated, from 4 PM on 22nd June till 4 PM on 23rd June (24 h). During the first deployment, the digital stills cameras took images at the three sampling depths (i.e., 10, 50 and 100 m).
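The two deployment windows described above can be checked with simple date arithmetic; a minimal Python sketch (timestamps taken from the text, assumed to be local time):

```python
from datetime import datetime

# Deployment windows as reported in the text (June 2014).
first_start, first_end = datetime(2014, 6, 21, 12), datetime(2014, 6, 22, 10)
second_start, second_end = datetime(2014, 6, 22, 16), datetime(2014, 6, 23, 16)

first_hours = (first_end - first_start).total_seconds() / 3600
second_hours = (second_end - second_start).total_seconds() / 3600
print(first_hours, second_hours)  # 22.0 24.0
```

Note that with hourly time-lapse photography, a window of n hours yields n + 1 frames per camera when both endpoints are captured, which is worth keeping in mind when reconciling deployment durations with image totals.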
During the second deployment, in addition to the hourly time-lapse digital photography, the observatory system also logged the rest of the environmental measurements at the same three sampling depths.

2.4. Data analysis

Upon recovery of the monitoring system, the images and oceanographic data were downloaded. The images from the stills cameras were analysed manually for species identification and fish counting. As the stills images were recorded in two consecutive deployments, the two fish-count datasets were combined and a time series was constructed to examine temporal patterns in the number of fish individuals, as a proxy for fish movement (rhythm), observed at the three different depths across the two deployments. Subsequently, similar time series plots were constructed for all the oceanographic data over the duration of the second deployment. In addition, trends in current speed and direction were examined for each sampling depth using a stick-plot diagram, where vectors of length (speed) and angle (direction) were placed along a reference line that represents time. These plots were constructed using the ‘oce’ package (Kelley, 2016) in R v.3.3.1 (R Development Core Team, 2016).

3. Results

The stills cameras completed both the first and second deployments successfully at all three depths and took a total of 141 images over durations of 23 and 24 h in the first and second deployments, respectively. Not a single fish was observed at 10 m depth during either deployment (Fig. 4). However, fish, mostly saithe Pollachius virens, were observed in 55.3% and 38.3% of the total images recorded at 50 and 100 m depths, respectively (Figs. 4 and 5).
Fig. 3. Schematic diagram of the underwater monitoring system deployed from the Miller platform.
Fig. 4. Changes in the number of fish individuals observed at three different depths at the Miller platform.
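The reported image totals and occurrence rates can be reconciled with simple arithmetic; a minimal Python sketch (the per-camera frame counts of 23 and 24 come from the Results, and the implied fish-present image counts are a back-calculation of ours, not figures reported by the authors):

```python
depths = 3
frames_per_camera = 23 + 24          # hourly frames across the two deployments
total_images = depths * frames_per_camera
print(total_images)                  # 141, matching the reported total

# Back-calculating how many of the 47 images at each depth contained fish,
# from the reported occurrence percentages.
fish_at_50m = round(0.553 * frames_per_camera)   # ~26 fish-present images
fish_at_100m = round(0.383 * frames_per_camera)  # ~18 fish-present images
print(fish_at_50m, fish_at_100m)     # 26 18
```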
Further, the time series clearly demonstrated that the number of fish individuals observed at the Miller platform peaked sharply at around 3–4 AM and 3–4 PM at 50 and 100 m depths, respectively (Fig. 4), giving an indication of a diurnal rhythm of fish movements operating across different depths. During the second deployment, all the physical conditions were recorded successfully except for the dissolved oxygen data at 50 m depth, due to a sensor failure; time series plots for these environmental variables are shown in Figs. 6 and 7. With respect to light intensity, essentially no photosynthetically active radiation was measured over the second deployment at depths below 50 m at the Miller platform (Fig. 6b). At 10 m depth, however, there was a well-defined change in the intensity of light observed, which peaked around 5 PM on 22nd June, decreased to almost none overnight and increased again, peaking around 1 PM on the following day (Fig. 6b). In contrast, temperatures stayed relatively constant, with average temperatures of 12.93, 7.78 and 7.53 °C at 10, 50 and 100 m depths, respectively, over the duration of the second deployment (Fig. 6c). Although the temperatures were similar between 50 and 100 m depths, the temporal pattern was noticeably more variable at 50 m when compared with that at 100 m depth (Fig. 6c). Time series of both chlorophyll and salinity values showed some distinctive variations between depths and, in both cases, temporal patterns were markedly more variable at 50 m depth in comparison with the other sampling depths (Fig. 6d and e). Dissolved oxygen levels were relatively constant, with average values of 6.11 and 5.09 ml/l at 10 and 100 m depths, respectively, at the Miller platform (Fig. 6f). The temporal patterns of hydrostatic pressure were similar between 10 and 50 m depths, but the amplitude of the range appeared markedly reduced at 100 m depth (Fig. 7a–c).
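Locating diurnal peaks of the kind reported above amounts to finding the maximum of an hourly count series; the following Python sketch uses synthetic, illustrative counts (not the study's data), loosely mimicking an early-morning peak:

```python
def peak_hour(hourly_counts):
    """Return the hour (index) with the maximum count."""
    return max(range(len(hourly_counts)), key=lambda h: hourly_counts[h])

# Illustrative 24-hour series (index 0 = midnight) with a peak at 3 AM,
# loosely mimicking the 3-4 AM peak reported at 50 m depth.
counts_50m = [2, 3, 8, 15, 14, 6, 3, 1, 0, 0, 1, 2,
              1, 0, 1, 2, 3, 2, 1, 1, 0, 1, 2, 2]
print(peak_hour(counts_50m))  # 3, i.e. 3 AM
```

With real data, one would apply this (or a smoothed variant) to the combined fish-count time series at each depth to locate the peak hours objectively rather than by eye.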
Further, the patterns of the current speeds and directions also showed some consistency between depths (Fig. 7d–f), with the average current speeds at the Miller field declining from 0.32 to 0.25 to 0.18 m/s at 10, 50 and 100 m depths, respectively. Thus the observed patterns of hydrostatic pressure corresponded with the observed changes in current speeds and directions at each sampling depth, indicating dynamic coupling between tidal movements and the prevailing oceanographic conditions.

4. Discussion

This study investigated fine-scale temporal variation in the environmental conditions and the occurrence of fish in the immediate vicinity of the Miller oil platform in the North Sea. Temporal changes in the number of a commercially important fish, saithe Pollachius virens, were readily observed by the newly established underwater monitoring system, giving an indication of a diurnal rhythm of vertical movements throughout the water column. With respect to the physical conditions at the Miller platform, temperature, salinity and dissolved oxygen essentially remained constant throughout the study period. However, the intensity of light at 10 m, chlorophyll concentration at 10 and 50 m, and the patterns of hydrostatic pressure and the prevailing currents at all three water depths (i.e., 10, 50 and 100 m) showed a distinctive indication of a diurnal cycle, with the degree of change differing depending on the parameter and depth in question. Temporal changes in temperature, chlorophyll and salinity values were found to be more variable at 50 m than at any other sampling depth at the Miller platform, and this could be attributable to the timing of the sampling in this study because, in the northern part of the North Sea, the water column is generally well mixed in winter, but horizontal thermal stratification occurs in summer, typically at a depth of around 50 m (e.g. Umgiesser et al., 2002; Fujii, 2015).
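The stick plots in Fig. 7 were produced with the R 'oce' package; the underlying vector decomposition can be sketched in Python (an illustration of the technique, not the authors' code, under the assumption that current direction is recorded as a compass bearing toward which the current flows):

```python
import math

def stick_components(speed_ms, direction_deg):
    """Decompose one current observation into east (u) and north (v)
    components, treating direction as a compass bearing (0 deg = north)."""
    theta = math.radians(direction_deg)
    u = speed_ms * math.sin(theta)  # eastward component
    v = speed_ms * math.cos(theta)  # northward component
    return u, v

# A current of 0.32 m/s (the reported average at 10 m depth) flowing due east.
u, v = stick_components(0.32, 90.0)
print(round(u, 2), round(v, 2))  # 0.32 0.0
```

In a stick plot, each (u, v) pair is drawn as a small vector anchored at its timestamp along a horizontal reference line, so tidal rotation of the current appears as a gradual change in stick orientation.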
Overall, the results of the pilot study indicated that there were highly dynamic and distinctive temporal patterns not only in some key environmental parameters but also in fish movement in the immediate vicinity of the Miller platform, even within a very short window of time. However, the question of how and why the observed diurnal rhythm of fish movement is related to any of the fine-scale predictor variables remains unresolved, and such a question could only be answered if the monitoring system were deployed systematically at multiple offshore locations, including both experiments and controls, for longer durations of time (Aguzzi et al., 2012; Doya et al., 2014). Regarding spatial patterns, several research studies have confirmed that commercially important fish species, such as saithe, haddock Melanogrammus aeglefinus and cod Gadus morhua, show coherent patterns in their local distributions, with significantly higher numbers of individuals found in the immediate vicinity of individual offshore platforms ( < ~100–300 m) when compared with surrounding open soft-bottom areas in the North Sea (Valdemarsen, 1979; Aabel et al., 1997; Løkkeborg et al., 2002; Soldal et al., 2002).
Fig. 5. Example images of fish individuals (saithe Pollachius virens) observed around the depth of 50 m at the Miller platform at: (a) 2:00 AM; (b) 3:00 AM; (c) 4:00 AM on 22nd June 2014.
However, movements of fish individuals in association with offshore platforms can be complex and variable over different temporal scales. For example, using hydroacoustic quantification, Soldal et al.
(2002) investigated temporal changes in the concentrations of fish close to the jacket of a decommissioned platform in the North Sea and found that demersal fish, such as saithe and cod, spread throughout the water column during the night, with significantly higher acoustic values repeatedly recorded late at night in July 1998. This is markedly consistent with the findings obtained in this study, indicating that large numbers of saithe individuals may have been undertaking diurnal vertical migration simultaneously across multiple platforms at a wider spatial scale. In addition, Fujii (2016) investigated the feeding habits of saithe across the North Sea and found that the observed spatio-temporal variability in the saithe diet was significantly explained by proximity to the nearest offshore platforms and changes in water temperature, which appear to reflect patterns of euphausiid availability over space and time. Saithe are known to feed preferentially on euphausiids (e.g. Christensen, 1995; Carruthers et al., 2005), which are in turn known to undertake diurnal vertical migration (e.g. Lass et al., 2001). The results from the present study may therefore indicate that the physical presence of offshore structures affects the distribution/availability of zooplankton (i.e. euphausiids) first and thereby influences the feeding/aggregating behaviour of saithe. In order to substantiate such a hypothesis, it would be of great importance for future research to systematically establish appropriately controlled experiments using, e.g., the monitoring system described in this study, or alternative techniques such as a stationary acoustic buoy equipped with a scientific split-beam echosounder (e.g. Godø et al., 2014) deployed at the bottom in close proximity to offshore platforms.
The latter option would permit silent measurements without disturbance of the surface water and thus allow recording with much higher sensitivity than a conventional installation on board a ship, which would be a powerful tool to monitor migration patterns of different fish species, their prey (e.g. zooplankton) and their interactions at fine temporal resolution over a long period of time.
Fig. 6. Time series plots at three different depths for: (a) number of fish individuals; (b) PAR (photosynthetically active radiation); (c) temperature; (d) chlorophyll; (e) salinity; (f) dissolved oxygen, over the duration of the second deployment conducted at the Miller platform.
Using mid-water fish traps, Fujii (2015) also investigated temporal variations in the structure of fish assemblages observed in the vicinity of the Miller platform and found that the relative abundances of saithe, haddock and cod did not vary significantly within each season. However, the species composition and the relative abundances did vary significantly between seasons as well as years. In particular, saithe showed significant differences in their body size distributions between seasons as well as years, suggesting that there was a series of turnovers of individuals in the utilisation of the platform by different age groups across seasons and years, and that their residence time and between-habitat movement could therefore be regulated at seasonal and/or inter-annual scales (Fujii, 2015). On the other hand, using an acoustic tagging system, Jørgensen et al. (2002) examined the residence time and movement of cod at an offshore platform in the North Sea and found that around 50% of the tagged individuals remained in the direct vicinity throughout the 3-month period, while some individuals were registered at the neighbouring platform and approximately 14% of individuals were still detected at the study site 12 months later.
Another fishing study conducted around North Sea platforms also found evidence of seasonal changes in the abundance of fish assemblages around the structures (Løkkeborg et al., 2002). As has been demonstrated in other parts of the world, different fish species have different utilisation patterns of an offshore platform, which may be influenced by physical factors, such as temporal variation in temperature and oceanographic conditions, as well as biological factors, such as prey availability, species-specific sedentary/migratory behaviour and the life cycle stages of individuals (e.g. Bohnsack and Sutherland, 1985; Stanley and Wilson, 1997; Schroeder and Love, 2004; Love et al., 2005; Love et al., 2006; Page et al., 2007; Fujii, 2016). To understand the true role of offshore platforms in the ecology of fish populations, it is vital to identify the biological mechanisms behind such temporal movement patterns in association with large-scale environmental drivers.

5. Conclusions

Offshore oil and gas platforms represent some of the largest man-made constructions installed on the seafloor, and an increasing number of fishers and scientists have become aware that a variety of fish species aggregate in substantial numbers in the proximity of such large sub-sea structures. The majority of these structures have been in place for decades and may well have functioned as artificial reefs, potentially acting as a network of de facto marine protected zones (de Groot, 1982; Osmundsen and Tveterås, 2003; Fujii, 2015). However, an increasing number of existing offshore platforms are approaching the end of their commercial lives and, to date, 7% of existing North Sea installations have already been decommissioned (Royal Academy of Engineering, 2013).
Given the number of offshore platforms currently installed in the North Sea, as well as the possible association between fish movements and the physical presence of offshore structures indicated by the available literature to date, there is a need to fully characterise the relationships between changing environmental conditions and biological activities in association with offshore oil/gas platforms at varying temporal resolutions over longer durations of time in the North Sea. The resulting outcome will not only compensate for the previous lack of knowledge, but also make a major contribution in providing stakeholders with appropriate information on the issues of decommissioning to allow informed decisions to be made. It is clear that further study is needed to better understand the influence of the physical presence of offshore platforms on the life cycle, temporal movements and connectivity of fish populations, as well as possible links between large-scale ecological/environmental processes and the distributional dynamics of fish populations. The results of this pilot study urge further fine-resolution and long-term monitoring to be conducted as appropriately controlled experiments at multiple locations over wider spatial coverage. Accumulation of such knowledge will increase our ability to identify the true ecological consequences of different decommissioning alternatives, as well as to facilitate effective spatial management of marine ecosystems in the North Sea.
Fig. 7. Changes in hydrostatic pressure recorded at the Miller platform at: (a) surface (~10 m); (b) mid-depth (~50 m); (c) bottom (~100 m). Stick-plots for the patterns in the water currents with the speed and direction indicated by each vector along the time reference line recorded at: (d) surface (~10 m); (e) mid-depth (~50 m); (f) bottom (~100 m).
Acknowledgements

The authors would like to thank OSPAR for providing data on offshore structures in the North Sea, and Imants G. Priede (University of Aberdeen), Stewart Chalmers (University of Aberdeen), John Polanski (University of Aberdeen), Thomas O’Donoghue (University of Aberdeen), Michelle Horsfield (BP), Anne Walls (BP), Peter Evans (BP), Alwyn Mcleary (BP) and all the crew members of the Miller platform for invaluable advice and support in conducting this research project. This work was coordinated by Oceanlab, University of Aberdeen and supported by the BP Fellowship in Applied Fisheries Programme.

References

Aabel, J.P., Cripps, S.J., Jensen, A.C., Picken, G., 1997. Creating Artificial Reefs from Decommissioned Platforms in the North Sea: a Review of Knowledge and Proposed Programme of Research. Offshore Decommissioning Communications Project, London.
Aguzzi, J., Company, J.B., Costa, C., Matabos, M., Azzurro, E., Manuel, A., Menesatti, P., Sardà, F., Canals, M., Delory, E., Cline, D., Favali, P., Juniper, K.S., Furushima, Y., Fujiwara, Y., Chiesa, J.J., Marotta, L., Bahamon, N., Priede, I.G., 2012. Challenges to the assessment of benthic populations and biodiversity as a result of rhythmic behaviour: video solutions from cabled observatories. Oceanogr. Mar. Biol. 50, 233–284.
Baine, M., 2001. Artificial reefs: a review of their design, application, management and performance. Ocean Coast. Manag. 44, 241–259.
Bohnsack, J.A., Sutherland, D.L., 1985. Artificial reef research: a review with recommendations for future priorities. Bull. Mar. Sci. 37, 11–39.
Carruthers, E.H., Neilson, J.D., Waters, C., Perley, P., 2005. Long-term changes in the feeding of Pollachius virens on the Scotian Shelf: responses to a dynamic ecosystem. J. Fish. Biol. 66, 327–347.
Christensen, V., 1995. A model of trophic interactions in the North Sea in 1981, the year of the stomach. Dana 11, 1–28.
de Groot, S.J., 1982.
The impact of laying and maintenance of offshore pipelines on the marine environment and the North Sea fisheries. Ocean Manag. 8, 1–27.
Doya, C., Aguzzi, J., Pardo, M., Matabos, M., Company, J.B., Costa, C., Milhaly, S., 2014. Diel behavioral rhythms in the sablefish (Anoplopoma fimbria) and other benthic species, as recorded by deep-sea cabled observatories in Barkley canyon (NEPTUNE-Canada). J. Mar. Syst. 130, 69–78.
Fabi, G., Grati, F., Puletti, M., Scarcella, G., 2004. Effects on fish community induced by installation of two gas platforms in the Adriatic Sea. Mar. Ecol. Prog. Ser. 273, 187–197.
Fowler, A.M., Macreadie, P.I., Jones, D.O.B., Booth, D.J., 2014. A multi-criteria decision approach to decommissioning of offshore oil and gas infrastructure. Ocean Coast. Manag. 87, 20–29.
Fujii, T., 2015. Temporal variation in environmental conditions and the structure of fish assemblages around an offshore oil platform in the North Sea. Mar. Environ. Res. 108, 69–82.
Fujii, T., 2016. Potential influence of offshore oil and gas platforms on the feeding ecology of fish assemblages in the North Sea. Mar. Ecol. Prog. Ser. 542, 167–186.
Gass, S.E., Roberts, J.M., 2006. The occurrence of the cold-water coral Lophelia pertusa (Scleractinia) on oil and gas platforms in the North Sea: colony growth, recruitment and environmental controls on distribution. Mar. Pollut. Bull. 52 (5), 549–559.
Godø, O.R., Handegard, N.O., Browman, H.I., Macaulay, G.J., Kaartvedt, S., Giske, J., Ona, E., Huse, G., Johnsen, E., 2014. Marine ecosystem acoustics (MEA): quantifying processes in the sea at the spatio-temporal scales on which they occur. ICES J. Mar. Sci. 71, 2357–2369.
Hixon, M.A., Beets, J.P., 1989. Shelter characteristics and Caribbean fish assemblages: experiments with artificial reefs. Bull. Mar. Sci. 44, 666–680.
Jablonski, S., 2008. The interaction of the oil and gas offshore industry with fisheries in Brazil: the “Stena Tay” experience. Braz. J. Oceanogr. 56, 289–296.
Jørgensen, T., Løkkeborg, S., Soldal, A.V., 2002. Residence of fish in the vicinity of a decommissioned oil platform in the North Sea. ICES J. Mar. Sci. 59, 288–293.
Kelley, D., 2016. Package ‘oce’. 〈https://cran.r-project.org/web/packages/oce/oce.pdf〉.
Lass, S., Tarling, G.A., Virtue, P., Matthews, J.B.L., Mayzaud, P., Buchholz, F., 2001. On the food of northern krill Meganyctiphanes norvegica in relation to its vertical distribution. Mar. Ecol. Prog. Ser. 214, 177–200.
Løkkeborg, S., Humborstad, O., Jørgensen, T., Soldal, A.V., 2002. Spatio-temporal variations in gillnet catch rates in the vicinity of North Sea oil platforms. ICES J. Mar. Sci. 59, 294–299.
Love, M.S., Schroeder, D.M., Lenarz, W.H., 2005. Distribution of bocaccio (Sebastes paucispinis) and cowcod (Sebastes levis) around oil platforms and natural outcrops off California with implications for larval production. Bull. Mar. Sci. 77, 397–408.
Love, M.S., Schroeder, D.M., Lenarz, W., MacCall, A., Bull, A.S., Thorsteinson, L., 2006. Potential use of offshore marine structures in rebuilding an overfished rockfish species, bocaccio (Sebastes paucispinis). Fish. Bull. 104, 383–390.
Love, M.S., Westphal, W., 1990. Comparison of fishes taken by a sportfishing party vessel around oil platforms and adjacent natural reefs near Santa Barbara, California. Fish. Bull. 88, 599–605.
Love, M.S., York, A., 2005. A comparison of the fish assemblages associated with an oil/gas pipeline and adjacent seafloor in the Santa Barbara Channel, southern California Bight. Bull. Mar. Sci. 77, 101–117.
Osmundsen, P., Tveterås, R., 2003. Decommissioning of petroleum installations–major policy issues. Energy Policy 31, 1579–1588.
Page, H.M., Dugan, J.E., Schroeder, D.M., Nishimoto, M.M., Love, M.S., Hoesterey, J.C., 2007. Trophic links and condition of a temperate reef fish: comparisons among offshore oil platform and natural reef habitats. Mar. Ecol. Prog. Ser. 344, 245–256.
Picken, G.B., McIntyre, A.D., 1989.
Rigs to reefs in the North Sea. Bull. Mar. Sci. 44, 782–788.
R Development Core Team, 2016. R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing, Vienna, Austria.
Royal Academy of Engineering, 2013. Decommissioning in the North Sea. Royal Academy of Engineering, London.
Schroeder, D.M., Love, M.S., 2004. Ecological and political issues surrounding decommissioning of offshore oil facilities in the Southern California Bight. Ocean Coast. Manag. 47, 21–48.
Soldal, A.V., Svellingen, I., Jørgensen, T., Løkkeborg, S., 2002. Rigs-to-reefs in the North Sea: hydroacoustic quantification of fish in the vicinity of a “semi-cold” platform. ICES J. Mar. Sci. 59, 281–287.
Soria, M., Dagorn, L., Potin, G., Fréon, P., 2009. First field-based experiment supporting the meeting point hypothesis for schooling in pelagic fish. Anim. Behav. 78, 1441–1446.
Stachowitsch, M., Kikinger, R., Herler, J., Zolda, P., Geutebrück, E., 2002. Offshore oil platforms and fouling communities in the southern Arabian Gulf (Abu Dhabi). Mar. Pollut. Bull. 44, 853–860.
Stanley, D.R., Wilson, C.A., 1991. Factors affecting the abundance of selected fishes near oil and gas platforms in the northern Gulf of Mexico. Fish. Bull. 89, 149–159.
Stanley, D.R., Wilson, C.A., 1997. Seasonal and spatial variation in the abundance and size distribution of fishes associated with a petroleum platform in the northern Gulf of Mexico. Can. J. Fish. Aquat. Sci. 54, 1166–1176.
Umgiesser, G., Luyten, P.J., Carniel, S., 2002. Exploring the thermal cycle of the Northern North Sea area using a 3-D circulation model: the example of PROVESS NNS station. J. Sea Res. 48, 271–286.
Valdemarsen, J.W., 1979. Behavioural Aspects of Fish in Relation to Oil Platforms in the North Sea. ICES CM, 1979/B: 27.
Whomersley, P., Picken, G.B., 2003. Long-term dynamics of fouling communities found on offshore installations in the North Sea. J. Mar. Biol. Assoc. U. K. 83, 897–901.
Wilhelmsson, D., Malm, T., Öhman, M.C., 2006. The influence of offshore windpower on demersal fish. ICES J. Mar. Sci. 63, 775–784.
Wilkinson, W.B., Bakke, T., Clauss, G.F., Clements, R., Dover, W.D., Rullkötter, J., Shepherd, J.G., 2016. Decommissioning of large offshore structures – The role of an Independent Review Group (IRG). Ocean Eng. 113, 11–17.
Fine-scale monitoring of fish movements and multiple environmental parameters around a decommissioned offshore oil platform: A pilot study in the North Sea
Haryono, Rivkeh Y., Sprajcer, Madeline A. and Keast, Russell S. J. 2014, Measuring oral fatty acid thresholds, fat perception, fatty food liking, and papillae density in humans, Journal of Visualized Experiments, vol. 88, article e51236, pp. 1-12. DOI: 10.3791/51236

This is the published version. ©2014, Journal of Visualized Experiments. Reproduced by Deakin University under the terms of the Creative Commons Attribution Non-Commercial Licence (https://creativecommons.org/licenses/by-nc/4.0/). Available from Deakin Research Online: http://hdl.handle.net/10536/DRO/DU:30064911

Journal of Visualized Experiments (www.jove.com), June 2014 | 88 | e51236

Video Article
Measuring Oral Fatty Acid Thresholds, Fat Perception, Fatty Food Liking, and Papillae Density in Humans
Rivkeh Y. Haryono1, Madeline A. Sprajcer1, Russell S. J. Keast1
1 School of Exercise and Nutrition Sciences, Deakin University
Correspondence to: Russell S. J. Keast at russell.keast@deakin.edu.au
URL: http://www.jove.com/video/51236
DOI: 10.3791/51236
Keywords: Neuroscience, Issue 88, taste, overweight and obesity, dietary fat, fatty acid, diet, fatty food liking, detection threshold
Date Published: 6/4/2014
Citation: Haryono, R.Y., Sprajcer, M.A., Keast, R.S.J. Measuring Oral Fatty Acid Thresholds, Fat Perception, Fatty Food Liking, and Papillae Density in Humans. J. Vis. Exp. (88), e51236, doi:10.3791/51236 (2014).

Abstract

Emerging evidence from a number of laboratories indicates that humans have the ability to identify fatty acids in the oral cavity, presumably via fatty acid receptors housed on taste cells.
Previous research has shown that an individual's oral sensitivity to fatty acid, specifically oleic acid (C18:1), is associated with body mass index (BMI), dietary fat consumption, and the ability to identify fat in foods. We have developed a reliable and reproducible method to assess oral chemoreception of fatty acids, using a milk and C18:1 emulsion, together with an ascending forced choice triangle procedure. In parallel, a food matrix has been developed to assess an individual's ability to perceive fat, in addition to a simple method to assess fatty food liking. As an added measure, tongue photography is used to assess papillae density, with higher density often being associated with increased taste sensitivity.

Video Link

The video component of this article can be found at http://www.jove.com/video/51236/

Introduction

Excessive dietary fat consumption is a potential contributor to weight gain1-3, and obesity has become a modern-day global epidemic. Research suggests higher levels of fat intake, particularly as part of an ad libitum diet, may be associated with a higher BMI2,3; however, the factors influencing dietary fat consumption and preference are far from clear. Searching for the mechanisms which underlie fat consumption is therefore an obvious goal, and of particular interest is an oral mechanism responsible for fat detection, commonly termed 'fatty acid taste'2.

From an evolutionary standpoint, the taste system presumably served as a gatekeeper of the digestive system, guiding the consumption of energy-dense nutrients and the expulsion of potentially toxic compounds4. The sense of taste is elicited through specialized taste receptor cells which are distributed within three types of tongue papillae: fungiform, foliate, and circumvallate papillae, which can each contain up to several hundred taste buds5.
Beyond the widely accepted five prototypical tastes (sweet, salty, sour, bitter, and umami), there has been growing suggestion of an oral mechanism for detecting fat, or more likely its breakdown products, fatty acids6. Previous research has consistently shown that fatty acids can be detected in the oral cavity over a range of concentrations7-11, despite fatty acid not being a 'taste' in the traditional sense, as it has no single discernible perceptual quality associated with it (e.g., sweetness)12.

Work from our laboratory has highlighted functional implications of impaired fatty acid chemoreception, namely for body weight and dietary fat consumption. Those who are less able to detect fatty acids (hyposensitive) appear to have a higher body mass index (BMI) and consume more energy9, and a relationship between oral fatty acid sensitivity and dietary fat consumption has also been observed: fatty acid hyposensitive individuals have been shown to consume more animal fats, including meat, high-fat dairy, and fatty spreads, all of which have been implicated as contributors to weight gain13. Additionally, individuals who are more sensitive to fatty acids appear to be better at differentiating between samples with varying fat contents9. While other research groups have failed to find similar associations10,14,15, this growing area of research remains intriguing.

These individual differences in fatty acid chemoreception appear to be somewhat modulated by environmental factors, including diet. Habitual fat intake has been associated with impaired oral fatty acid chemoreception and, consequently, a heightened preference for and increased consumption of dietary fat16.
In addition to gustatory adaptation, the gastrointestinal (GI) tract also appears responsive to such changes in fat intake17, and impaired GI fatty acid sensitivity may be implicated in an inability to generate the appropriate satiety signaling responses which discourage excess energy consumption18. Beyond environmental factors, fatty acid chemoreception may also be dictated by genetic or physiological differences between individuals, including the density of fungiform papillae (and presumably taste receptors) on an individual's tongue19. Higher density of fungiform tongue papillae has been linked to heightened oral sensitivity for numerous orally detected compounds including 6-n-propylthiouracil (PROP)20, sugar21, and salt22, while others have also noted an association with creamy perception23. PROP supertasters (who presumably have a higher number of fungiform papillae) are able to distinguish high-fat from low-fat salad dressings24 and can discriminate the fat content and creaminess of dairy foods more accurately than non-tasters23,25. At this stage, however, the relationship between fungiform papillae density and oral fatty acid 'taste' detection is unknown.

At the basis of human oral fatty acid chemoreception research is the application of various sensory techniques. Identifying individual variability in oral fatty acid detection is a major focus and largely depends on the determination of fatty acid detection thresholds, that is, the lowest concentration at which a fatty acid can be detected in solution9.
While the specific testing method and stimulus vehicle used varies across the literature and between research groups, the typical procedure involves presenting a participant with a set of emulsified fatty acid and control (no fatty acid) solutions and asking them to identify the 'odd' sample. Here we present an established, reliable and reproducible method for threshold determination10 using emulsified milk solutions and an ascending forced choice triangle procedure.

The extent to which oral fatty acid sensitivity influences diet, namely consumption of fatty foods, and the ability to perceive fat in foods is also of interest, and here we report on two additional established techniques to further extend our understanding of fatty acid chemoreception. Fatty food liking can be assessed by providing individuals with samples of commercially available foods in both regular and low-fat options and asking them to indicate their liking of each16. For fat perception, a fat ranking task has been developed by our laboratory, designed to evaluate an individual's ability to detect fat in custard, a typical food matrix16. To evaluate genetic or physiological differences between individuals, a commonly used tongue photography method involves staining, photographing and quantifying fungiform papillae26. While the use of this technique in fatty acid research is in its infancy, wider application, especially across lean and overweight/obese population groups, may help to identify inherent causes of excess fat consumption.

Protocol

The following techniques have been approved for use by the Deakin University Human Research Ethics Committee.

1. Demographics and Anthropometry

1. Record demographic information from participants, including date of birth and gender.
2. At baseline (and at other study points if the study design is temporal), take height and weight measurements.
Ensure that participants have taken off shoes, heavy jackets or other clothing items, and have removed any heavy items from their pockets.
   1. Measure the participant's height using a stadiometer. Record measurements to the nearest cm.
   2. Weigh participants using dedicated scales. Record weight to the nearest g.
3. Calculate BMI using the equation: weight (kg) / height² (m²). From this, participants are categorized according to standard BMI definition values: healthy 18.5-25 kg/m², overweight 25-30 kg/m², or obese >30 kg/m² 27.

2. Producing Samples for Oral Fatty Acid Threshold Assessment

1. Use non-fat UHT milk as the base for fatty acid taste threshold assessment. The product can be purchased and stored in bulk if necessary, and will keep unopened for up to 6 months or until it has reached its expiry. Prepare 2 types of vehicles: a vehicle with added fatty acid and a control vehicle. The volume of solutions prepared for testing will depend on participant numbers; the following protocol provides typical amounts for 2 participants.
2. Prepare a base milk solution to use for both the control and fatty acid vehicles by placing 5% w:v food grade gum arabic (e.g., 100 g per 2 L of milk) into a 3 L glass beaker. If required, hydrate the gum prior to use (this will vary depending on the gum manufacturer).
3. Add 0.01% w:v EDTA to the gum to prevent oxidation (e.g., 200 mg per 2 L of milk).
4. Allocate approximately 1 L of non-fat milk per participant (e.g., for 2 participants, use 2 L of milk) and pour into the beaker.
5. Using a laboratory grade mixer with an emulsor screen, homogenize the solution at 12,000 rpm for 2 min. Set the solution aside.
6. Prepare the fatty acid solutions using food grade C18:1. Oxidation can be assessed through gas chromatography if necessary.
7. Prepare a series of 13 variants of the fatty acid vehicle (UHT non-fat milk) with increasing concentrations of C18:1 (0.02, 0.06, 1, 1.4, 2, 2.8, 3.8, 5, 6.4, 8, 9.8, 12, and 20 mM).
To do this, label 250-ml glass beakers with each concentration.
8. Add 5% liquid paraffin to each beaker (e.g., 5 ml paraffin per 100 ml of milk solution).
9. Based on C18:1 concentration, add the appropriate amount of C18:1 to each beaker (see Table 1).

C18:1 concentration (mM)    μl/100 ml
0.02                        0.56
0.06                        1.9
1                           31.5
1.4                         44.1
2                           63.1
2.8                         88.4
3.8                         119.9
5                           157.8
6.4                         202
8                           250
9.8                         309
12                          380
20                          631.2

Table 1. Example C18:1 volumes per 100 ml of solution. Increasing volumes (μl per 100 ml) of C18:1 are used to prepare the series of 13 emulsions for oral fatty acid threshold testing.

10. Following use, fill the C18:1 container with N2 to minimize oxidation and store below 4 °C.
11. Add the base milk solution to each fatty acid beaker to a total volume of 100 ml. Set aside.
12. Using the remaining base solution, prepare the control vehicle. In a 2 L glass beaker, combine 5% of the remaining volume in liquid paraffin (e.g., 35 ml liquid paraffin in a final volume of 750 ml) with the remaining base solution.
13. Homogenize the control vehicle for 30 sec per 100 ml. This step is conducted prior to homogenizing the fatty acid solutions to prevent contamination with C18:1.
14. Homogenize each fatty acid vehicle, beginning with the lowest concentration, for 30 sec per 100 ml.
15. Sanitize the homogenizer both prior to and following testing.
16. As the homogenization process can raise the temperature of the solutions, check the temperature of the control and C18:1 samples with a thermometer. Serve all samples at RT (20 °C).
17. Milk samples must be freshly prepared on the same day as testing. Taste each solution prior to testing to assess freshness and suitability.
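As a cross-check on Table 1, the dosing volumes can be recomputed from the target molar concentrations. The molar mass and density below are typical literature values for oleic acid, not taken from the protocol itself, so treat this as an illustrative sketch; it reproduces Table 1 to within rounding for most rows.

```python
# Sketch: volume of neat C18:1 (oleic acid) to add per 100 ml of milk
# solution for a target concentration. Molar mass (~282.46 g/mol) and
# density (~0.895 g/ml at 20 °C) are assumed literature values.
MOLAR_MASS = 282.46   # g/mol (assumed)
DENSITY = 0.895       # g/ml at 20 °C (assumed)

def c18_1_volume_ul(conc_mM, batch_ml=100.0):
    """Microlitres of C18:1 needed for `batch_ml` of solution at `conc_mM`."""
    moles = conc_mM / 1000.0 * batch_ml / 1000.0   # mmol/L -> mol in batch
    grams = moles * MOLAR_MASS
    return grams / DENSITY * 1000.0                # ml -> µl

for c in (0.02, 0.06, 1, 1.4, 2, 2.8, 3.8, 5, 6.4, 8, 9.8, 12, 20):
    print(f"{c:>5} mM -> {c18_1_volume_ul(c):7.1f} µl per 100 ml")
```

For example, 1 mM works out to roughly 31.6 µl per 100 ml, matching the 31.5 µl listed in Table 1.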
Note: Depending on the volume required, solution preparation will take a minimum of 60 min.

3. Oral Fatty Acid Threshold Testing

1. Ensure that participants have refrained from eating or drinking (including coffee, gum, mouthwash, etc.) for at least 1 hr prior to testing.
2. Minimize non-taste cues by conducting testing under red lighting with participants wearing nose clips.
3. Use the ascending forced choice triangle procedure to determine oral fatty acid thresholds. Label 30-ml plastic portion cups with a three-digit identification number. Provide each participant with a set of three 20-ml solutions in random order: two control vehicles and one fatty acid vehicle with the lowest concentration of C18:1 (0.02 mM).
4. To determine a participant's oral fatty acid threshold, instruct the participant to taste each solution from left to right and expectorate into a sink. Ask participants not to swallow the samples.
5. Ask the participant to identify which of the 3 samples is 'odd' or 'different'; if they are unsure, they must guess (forced choice).
6. Have participants rinse their mouth with deionized water after each set of samples.
7. If the 'odd' sample is correctly identified, provide the participant with a second set of 3 solutions (2 control and 1 fatty acid solution in random order) at the same fatty acid concentration. If incorrectly identified, provide the participant with a second set of 3 solutions, but with the next highest concentration of C18:1 (0.06 mM).
8. Continue this procedure until the participant correctly identifies the 'odd' sample 3x in a row at the same concentration. The concentration at which they are able to do this is recorded as the participant's C18:1 detection threshold. See Figure 1 for a graphical representation of this process.
9. Based on detection threshold, characterize participants as hypersensitive or hyposensitive to C18:1.
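The stopping rule in steps 3-8 can be sketched as a simple loop. The `identify_odd` callback below is hypothetical, standing in for a live tasting trial; moving up one concentration after any incorrect answer and requiring three consecutive correct answers at one concentration follow the protocol description above.

```python
# Concentration series from the protocol (mM)
CONCS_MM = [0.02, 0.06, 1, 1.4, 2, 2.8, 3.8, 5, 6.4, 8, 9.8, 12, 20]

def detection_threshold(identify_odd):
    """Return the concentration identified correctly 3x in a row,
    or None if the participant exhausts the series."""
    i, streak = 0, 0
    while i < len(CONCS_MM):
        if identify_odd(CONCS_MM[i]):   # participant picks the odd sample
            streak += 1
            if streak == 3:             # three in a row -> threshold found
                return CONCS_MM[i]
        else:                           # incorrect -> step up, reset streak
            i += 1
            streak = 0
    return None

# Simulated participant who reliably detects C18:1 at >= 2 mM
print(detection_threshold(lambda c: c >= 2))   # -> 2
```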
In line with previous literature, hypersensitive individuals can detect C18:1 at concentrations <3.8 mM, while hyposensitive individuals require concentrations >3.8 mM.
Note: Depending on the number of incorrect answers, the testing procedure can take 10-30 min to complete.

Figure 1. Ascending forced choice triangle procedure used for determining fatty acid detection thresholds. Participants are provided with three solutions (two control solutions and one C18:1 solution at a given concentration) and asked to identify the 'odd' sample. If correct, participants are given a second set of samples with the same C18:1 concentration. If incorrect, the participant is provided with another set of samples with a higher concentration of C18:1. This procedure continues until three 'odd' solutions are correctly identified at a given concentration. This point is deemed the individual's 'fatty acid detection threshold'.

4. Fat Ranking Task

This task involves participants tasting four samples of instant custard, each with a different fat content (0, 2, 6, and 10%), and ranking them in order of perceived ascending fat concentration.

1. Prepare 1 batch of custard using non-fat instant vanilla custard powder according to packet instructions. Mix 2 tablespoons custard powder, 1 tablespoon sugar, and 2 cups non-fat milk in a microwave safe bowl. If the suggested product is unavailable, it may be substituted with a similar non-fat instant product (e.g., cook and serve custard).
2. Using high power, heat the mixture in a 1,400 W microwave in 30 sec intervals for a total of approximately 5 min, or until thick. This may vary depending on the brand of custard and the wattage of the microwave used. Allow the custard to cool.
3. Label four 500-ml kitchen bowls (or similar) with the fat percentages.
4. Divide the custard into 4 separate 100-g batches.
5. Add 0, 2, 6, and 10% vegetable oil to each bowl to achieve the desired fat content (e.g., in a 100 g batch, add 0 ml, 2 ml, 6 ml, and 10 ml vegetable oil to achieve the respective fat percentages) and combine. Stir each sample well to ensure all ingredients are completely amalgamated.
6. Label four 30-ml plastic portion cups with randomized three-digit numbers. Fill the portion cups with 20 g of each custard (1 type of custard per cup).
7. Refrigerate samples prior to testing and serve cold (4 °C).
8. Carry out testing under red lights to minimize visual cues.
9. Have participants taste, swallow and rank the 4 custards from perceived lowest to highest fat content; participants receive a score out of 5 depending on their responses.
10. Scoring for this task is shown in Table 2.
Note: Approximate preparation time for the custard samples is 30 min. The fat ranking task should take no longer than 10 min to complete.

Ranking order    Score
0, 2, 6, 10      5
2, 0, 6, 10      4
0, 2, 10, 6      3
0, 6, 2, 10      2
0, 6, 10, 2      1
6, 0, 2, 10      1
2, 10, 6, 0      1
2, 6, 0, 10      0
6, 2, 10, 0      0
0, 10, 2, 6      0

Table 2. Fat ranking task scoring. Participants are given 4 samples of custard with 0, 2, 6, or 10% fat added. Participants are asked to rank the samples from lowest to highest fat content and score 0 to 5 points (5 being the maximum).

5. Fatty Food Liking

1. Prepare small samples (5-20 g) of both regular and low-fat options of commercially available foods.
Foods include regular and low-fat versions of: cream cheese (served on a cracker), chocolate mousse, cheese, dry biscuits, peanut butter dip (served on a piece of carrot), mayonnaise, salad dressing (served on a slice of cucumber), and yogurt.
2. Label each sample with a random three-digit number for identification.
3. Present samples in a random order to prevent order effects.
4. Instruct participants to taste each sample individually. Food items are ingested, but participants can eat as much or as little of each sample as they desire.
5. Have participants rate how much they like or dislike each sample. Measure liking using a 100-mm hedonic generalized magnitude scale (gLMS; see Figure 2) ranging from strongest imaginable dislike to strongest imaginable like. Record liking by placing a vertical line at the point which represents the participant's like or dislike of the food.

Figure 2. Hedonic gLMS. The hedonic gLMS30,31 used to assess liking of both regular and low-fat commercially available foods. Participants taste and rate each sample and place a vertical line at the point which best represents their like, or dislike, of the sample.

6. Non-taste inputs are not minimized for this task, so carry it out under normal lighting and do not have participants wear nose clips.

6. Tongue Photography

1. Set up a camera and tripod for photography. Regular indoor lighting is sufficient.
2. Set the camera to macro mode (or similar) for close-up photography.
3. Use a hole punch to create a 6-mm diameter circle in a 1.5 cm x 1.5 cm (or similar) square of filter paper. Label the paper with the participant's identification number.
4. In a 50-ml beaker, combine blue food coloring with deionized water at a 1:20 ratio. Only a small amount is required per participant.
5. Pour 30 ml food grade ethanol into a 50-ml beaker for tweezer sterilization.
6. Using masking tape, mark a 20 cm x 30 cm rectangle on the side of the testing table (this should be regular desk height), as shown in Figure 3a.

Figure 3. A) Tongue photography setup. Demonstration of the table setup required prior to tongue photography. B) Tongue photography. Demonstration of the tongue photography method.

7. Have participants place their elbows on the marked corners of the rectangle, rest their chin in their palms and comfortably protrude their tongue, using the lips to steady this position (Figure 3b). The participant must remain in this position for the duration of the testing.
8. Using a rectangular (1.5 cm x 3 cm) strip of filter paper, briefly dry the bottom portion of the tongue.
9. Dip a cotton bud into the food coloring/water solution and transfer a small amount of dye onto the anterior dorsal surface of the tongue, immediately right of the midline and close to the tip (see Figure 4). Dry the tongue a second time with filter paper.
10. Dry the ethanol-sterilized tweezers with a paper towel and use them to place the pre-labeled 1.5 cm x 1.5 cm filter paper onto the participant's tongue, with the 6-mm hole over the blue food coloring (see Figure 4).
11. Using flash, take three digital photographs of the participant's tongue. For confidentiality, ensure only the participant's mouth and tongue are visible.
12. Remove the filter paper from the participant's tongue with tweezers that have again been sterilized in food grade ethanol. Upload the photographs to photo editing software and, using the zoom function, count all visible fungiform papillae.
13. Differentiate fungiform papillae from other papillae as larger, mushroom-shaped, elevated structures.
They do not take on the dye solution as strongly, and as such appear much lighter in color.
Note: Tongue photography should take no longer than 10 min to complete.

Figure 4. Quantifying fungiform papillae density. Location of the 6-mm area for fungiform papillae assessment. Using photo editing software, numerical labels indicate each fungiform papilla.

Representative Results

The methods detailed above are important, as emerging evidence has indicated that impaired fatty acid chemoreception in the oral cavity and GI tract may be associated with increases in BMI and the development of obesity17. Several studies have used the described protocols to investigate oral fatty acid detection, and our recent publication has shown the method is both reliable and reproducible10. Studies using this methodology have been able to reliably determine individuals' oral fatty acid detection thresholds by identifying the point at which participants are able to detect a difference between milk samples9. After three testing sessions utilizing this protocol, Stewart et al.9 found that the mean detection threshold for C18:1 was 2.2 ± 0.1 mM, with detection thresholds ranging from 1-6.4 mM (see Figure 5). More recently, we have established a C18:1 detection threshold range from 0.26-12 mM (mean: 2.64 ± 0.7 mM)10. These results support the notion that fatty acids can be detected in the oral cavity, and that marked individual differences in sensitivity to C18:1 exist. Based on these results, we are able to classify individuals as hypersensitive or hyposensitive to C18:1. Hypersensitive individuals are able to correctly identify C18:1 at concentrations <3.8 mM, while hyposensitive subjects require concentrations >3.8 mM.
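A minimal sketch of this classification, using the 3.8 mM cutoff stated above (how a threshold of exactly 3.8 mM is grouped is not specified in the text; treating it as hyposensitive here is an assumption):

```python
CUTOFF_MM = 3.8   # sensitivity cutoff from the text

def classify(threshold_mM):
    """Label a C18:1 detection threshold (mM) per the stated cutoff.
    A threshold of exactly 3.8 mM is grouped as hyposensitive (assumption)."""
    return "hypersensitive" if threshold_mM < CUTOFF_MM else "hyposensitive"

# Illustrative thresholds within the reported 0.26-12 mM range
print([classify(t) for t in (0.26, 2.8, 5.0, 12.0)])
# -> ['hypersensitive', 'hypersensitive', 'hyposensitive', 'hyposensitive']
```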
Research from our laboratory has found oral sensitivity to C18:1 is associated with dietary fat consumption and BMI (Figure 6), whereby C18:1 hyposensitive individuals consume more saturated and animal fats and have a higher BMI13. Interestingly, in a study conducted by Stewart and Keast16, consuming a low-fat diet resulted in increased sensitivity to C18:1 for both lean and overweight participants (Figure 7). However, this study also found that when participants consumed a high-fat diet, lean individuals had reduced sensitivity to C18:1, while overweight individuals had no change in taste sensitivity (Figure 8). This suggests that habitually consuming a high-fat diet, which is more likely for overweight individuals, may result in attenuated fatty acid chemoreception28. However, as there were no differences in baseline sensitivity between lean and overweight participants, these results may indicate that lean individuals are simply more susceptible to dietary changes regarding fat intake. They may also suggest that it was the specific intervention (high- vs low-fat) that influenced outcomes, rather than habitual diets, which may or may not have differed from the intervention diet. Nevertheless, this study indicates that there are some fundamental differences between lean and overweight individuals regarding fatty acid taste sensitivity, which require further investigation.

Figure 5. C18:1 taste detection thresholds. Marked variability has been shown in sensitivity to C18:1, with participants able to detect C18:1 across a range of concentrations (1 mM-6.4 mM).

Figure 6. C18:1 taste detection thresholds and association with BMI.
An association between the ability to detect C18:1 and body composition has been shown, whereby those with higher detection thresholds (hyposensitive individuals) have significantly higher BMI values (P = 0.002, r² = 0.467).

Figure 7. C18:1 detection thresholds following a low fat diet. Following 4-week consumption of a low fat diet, C18:1 detection thresholds increased for both lean and obese individuals.

Figure 8. C18:1 detection thresholds following a high fat diet. Following 4-week consumption of a high fat diet, lean individuals displayed reduced sensitivity to C18:1 (P = 0.006), while overweight individuals demonstrated no change (P = 0.609).

Similar to the impact of diet on detection thresholds for fatty acids, there is research to suggest that food liking may be plastic and altered by exposure. For example, it appears that a high fat diet increases preference for higher fat products, with the opposite occurring following consumption of a low fat diet16. However, these changes have not been consistent in the literature. It appears that changes to preferences are mediated by the length of time the individual has been adhering to a high or low-fat diet. Specifically, Mattes29 found significant changes in participant food preferences after 12 weeks on a reduced fat diet, while Stewart and Keast16 found only sporadic and marginal changes after four weeks on a similar diet. Consuming a high fat diet altered participant preferences for yogurt, with preferences for low-fat yogurt increasing, contrary to expected results (Baseline (BL): 19.44 ± 5.73, Week 4 (WK4): 21.94 ± 5.21, P = 0.046). Further, after four weeks on a low-fat diet, preferences for low-fat butter increased in all participants (BL: 6.23 ± 4.26, WK4: 7.32 ± 3.04, P = 0.046).
Preferences for low-fat yogurt increased for lean participants only (BL: 2.51 ± 3.26, WK4: 3.68 ± 4.94, P = 0.07), while preferences for low-fat mousse decreased for all participants (P = 0.01).

The ability to detect fat in foods is assessed by asking participants to taste and rank a series of custards with differing fat contents. Fat perception is identified based on how well participants are able to rank the samples. Fat perception has been known to change with diet; for example, following a low fat diet has resulted in improvements in participants' performance in correctly identifying and ranking the degree of fat in each custard sample11. Furthermore, it appears that there is an association between sensitivity to C18:1 and the identification and ranking of fat content4. Indeed, individuals who were hypersensitive to C18:1 performed significantly better on the fat ranking task (4.3 ± 0.6) compared to hyposensitive individuals (2.3 ± 0.1, P = 0.02) (scores are out of a maximum of five)9. This indicates that individuals who are more sensitive to fatty acids were also better at differentiating between the four varying fat concentrations in custard. While there was a trend for performance to improve after consuming the low-fat diet for four weeks, this was not a significant change (BL: 1.3 ± 0.3, WK4: 2 ± 0.3, P = 0.077)16.

Papillae density, i.e., the number of papillae (and hence taste buds) present on the tongue, varies between individuals and is indicative of taste function. Higher fungiform papillae density has been linked to heightened taste sensitivity for compounds including sucrose21 and the bitter substance PROP20. Tongue papillae density, as determined by tongue photography, varies considerably between subjects.
For example, Zhang et al.21 found significant individual differences between participants, with concentrations ranging from 7.07 ± 0.35/cm² to 233.43 ± 0.00/cm² (data for a single participant), whereas others26 have found a mean fungiform papillae concentration of 156.00 ± 5.86/cm². Further, papillae can differ significantly in structure between individuals, with variation in height, width, and shape21, though there is limited evidence regarding the possible implications of these differences. Given previous findings linking papillae number with taste sensitivity, it is plausible that a similar relationship may also exist for oral fatty acid sensitivity, whereby those who are more orally sensitive to fatty acids may have a higher density of taste papillae and therefore a higher number of oral fatty acid receptors. While this association is yet to be established, it presents a novel area of research that may help identify the underlying mechanisms guiding the overconsumption of fat.

Discussion

The techniques described for the determination of oral fatty acid thresholds, fatty food liking, and tongue papillae density have been validated and used in a number of published works in recent years, and we suggest that oral fatty acid threshold assessment, the fat ranking task and fatty food liking be performed in duplicate at each relevant time point in a study. There has been some discussion regarding the optimum method for assessing detection thresholds32. In particular, the composition of solutions used varies between laboratories, as does the methodology itself. We believe the fatty acid used in this protocol, C18:1, is a generally representative and easy-to-use fatty acid, as opposed to other fatty acids, including linoleic acid (C18:2) and lauric acid (C12:0), which have been used previously9. C18:1 is commonly found in the food supply, is liquid at RT (unlike C12:0), and is more resistant to oxidation than C18:2 (9).
C18:1 has also been shown to provide reliable data across multiple testing sessions, and it is highly correlated with C18:2 and C12:010. Furthermore, C18:1 has been examined extensively throughout the relevant literature, and is thus more helpful for comparisons. A major point of difference between the protocol outlined in the present paper and procedures used in other laboratories is the vehicle used for presenting fatty acid stimuli and the systematic approach by which detection thresholds are determined. The two major fatty acid vehicles used within the literature are non-fat milk10,17 and water emulsions6. While both have demonstrated efficacy for fatty acid threshold determination, participants may be more likely to identify the taste of fat within milk; it is unusual to taste fatty acids in water, which may result in lower external validity for studies utilizing a water base. Non-fat milk provides a vehicle for fatty acid chemoreception without compromising validity. Although these two methods are yet to be directly compared in the literature, it is known that fatty acids are poorly soluble in water33. As a result of fatty acid solubility in milk-based solutions, this emulsion could both be kept longer and be more homogeneous than water-based solutions, though this is yet to be confirmed. When implementing this methodology, it is important to note that free fatty acids may be naturally present in milk34; consequently, the product should be used well within its expiry to prevent the increase of free fatty acids (which develop with age) and their potential interference with taste threshold performance. Successful preparation of the solutions depends on numerous factors. Firstly, the order in which the 'ingredients' are added is imperative. Vehicle preparation steps should carefully follow those outlined earlier to ensure proper vehicle composition and a stable emulsion. Secondly, temperature must be controlled.
Each sample must be presented to participants at RT to ensure participants do not detect the 'odd sample' due to factors other than 'taste'. Finally, all samples must be correctly homogenized for the suggested period of time. While an emulsion of fatty acids in non-fat milk is more effective than a water-based one, there is still a chance of emulsion separation within the sample. The specific testing method used in oral fatty acid threshold determination must also be considered. Two sensory-based methods have been commonly described in the literature: the ascending forced choice triangle procedure and the staircase method35. The ascending forced choice triangle methodology is an established method for taste threshold determination and can be considered useful for several reasons, including the fact that, unlike the staircase method, the ascending method begins with the lowest concentration of C18:1 (0.02 mM) and increases until the participant is able to detect the presence of fatty acid in solution9. Conversely, the staircase method involves increasing or decreasing the fatty acid concentration from a predetermined midpoint11. However, starting a threshold determination at a point above threshold may cause a desensitization of response, impairing one's tasting ability. Further, the ascending method has a lower probability of random chance influencing results (3.7%) compared with the staircase method (11.1%)11. As such, the ascending forced choice triangle method, combined with non-fat milk as the taste vehicle, appears to be an effective means of accurately determining oral thresholds. Food acceptance or liking testing is one of the more straightforward assessments performed within sensory research, and as such few problems tend to arise. However, the type of liking scale used is an important consideration.
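The ascending forced-choice triangle logic described above lends itself to a short procedural sketch. This is an illustrative implementation only: the concentration series (beyond the 0.02 mM starting point given in the text) and the stopping criterion of three consecutive correct identifications are assumptions, not a transcription of the published protocol.

```python
import random

# Illustrative ascending C18:1 concentration series (mM).  The text gives
# only the 0.02 mM starting point; the remaining steps are assumed here.
CONCENTRATIONS_MM = [0.02, 0.06, 1.0, 1.4, 2.0, 2.8, 3.8, 5.0, 6.4, 8.0, 9.8, 12.0]

def ascending_3afc_threshold(identifies_odd_sample, n_required=3):
    """Ascending forced-choice triangle procedure: at each concentration,
    present triangles (two control samples + one fatty-acid sample) and
    require `n_required` consecutive correct identifications of the odd
    sample before calling that concentration the detection threshold.
    A single miss moves testing up to the next concentration."""
    for conc in CONCENTRATIONS_MM:
        correct_run = 0
        while correct_run < n_required:
            odd_position = random.randrange(3)   # randomize where the odd sample sits
            if identifies_odd_sample(conc, odd_position):
                correct_run += 1
            else:
                break                            # failed at this level; ascend
        if correct_run == n_required:
            return conc
    return None                                  # nothing detected within the series
```

With a simulated participant who reliably detects C18:1 at or above 2.0 mM, for example `lambda conc, odd: conc >= 2.0`, the procedure ascends through the sub-threshold levels and stops at 2.0 mM.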
In this case, a hedonic gLMS is the most effective, as it has good discriminatory power and is easy for participants to use36. The end points of the hedonic gLMS are labeled with the descriptors 'strongest imaginable dislike' and 'strongest imaginable like', and participants evaluate liking against all hedonic experiences, not solely foods30,31. This is effective in controlling for the ceiling effects produced by standard 9-point scales, as all experiences are considered and compared. Further, the hedonic gLMS is better able to demonstrate individual variance, as the scale is broader36. Food acceptance testing itself may be limited by the foods presented, in that we only present two options per food type. Further research may include several more brands or types of each food, each with differing fat contents, or perhaps specifically made products where fat content can be controlled and is the only variable. It is important to note that the interpretation of all data must be done with caution. While a potential link between liking, preference, and intake is plausible and intriguing, the results are generated within a laboratory environment and there may be limits to the applicability of these findings to real-world situations. Assessing papillae density through tongue photography is a more difficult process, with specific steps that must be taken in order to produce appropriate and applicable results. In particular, it is important to identify the correct papillae type. Three types of taste papillae are visible on the human tongue: fungiform, foliate, and circumvallate4. Fungiform papillae can, however, be easily distinguished as mushroom-shaped structures26, and are generally the papillae that are recorded during sensitivity assessments.
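For reference, counts made inside a circular sampling template can be interconverted with the per-cm2 densities reported in this literature. A minimal helper, assuming a 6-mm-diameter circular template:

```python
import math

def papillae_per_cm2(count, template_diameter_mm=6.0):
    """Convert a papillae count made inside a circular sampling template
    (6-mm diameter assumed here, matching the 'per 6-mm area' figures used
    in this literature) into a density in papillae/cm^2."""
    radius_cm = template_diameter_mm / 10.0 / 2.0
    area_cm2 = math.pi * radius_cm ** 2   # ~0.283 cm^2 for a 6-mm circle
    return count / area_cm2
```

Counts of 5 and 60 papillae within a 6-mm circle correspond to roughly 18 and 212 papillae/cm2 respectively, the same order of magnitude as the per-cm2 densities quoted from Zhang et al.21 and Shahbake et al.26.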
Fungiform papillae tend to range in concentration from 5-60 per 6-mm area37 (depending on sensitivity), though there have been studies indicating that some individuals may have upwards of 230 papillae in the same area21. The type of camera used is fundamental to obtaining appropriate results and may account for this variability. Prior to the use of digital photography in this area, videomicroscopy was the gold standard for identifying and recording papillae density. However, it has been determined that the same level of identification is possible using an appropriate digital camera26. Further, digital photography takes only several minutes, whereas videomicroscopy may take up to an hour26. Digital photography also has the potential to be far less costly and more portable, which may be helpful for use with various participant groups26. Finally, while we aim to measure fungiform papillae density for associations with oral fatty acid detection, we suggest that taste thresholds for the five prototypical tastes also be determined in parallel. Given the previous linkage between papillae density and taste function, this could serve as an additional 'checking measure' that may add integrity to the data, especially given that this is a new area of research. The area of oral chemoreception research, particularly regarding fatty acids, is an emerging one, and as such it is important for all research to be performed to a high standard, preferably with the use of consistent protocols to allow for direct comparisons.
Disclosures
The authors have nothing to disclose.
Acknowledgements
The authors would like to acknowledge the support of the Australian National Health and Medical Research Council and Deakin University. The work performed at the Deakin University Sensory Laboratory was supported by National Health and Medical Research Council Grant 1043780 (RSJK) and Horticulture Australia Limited BS12006 (RSJK).
References
1. Bray, G. A., Paeratakul, S., & Popkin, B. M.
Dietary fat and obesity: a review of animal, clinical and epidemiological studies. Physiol Behav. 83, 549-555 (2004).
2. Shikany, J. M. et al. Is Dietary Fat "Fattening"? A Comprehensive Research Synthesis. Crit Rev Food Sci Nutr. 50, 699-715 (2010).
3. Maskarinec, G. et al. Trends and Dietary Determinants of Overweight and Obesity in a Multiethnic Population. Obesity. 14, 717-726, doi:10.1038/oby.2006.82 (2006).
4. Bachmanov, A. A., & Beauchamp, G. K. Taste receptor genes. Annu Rev Nutr. 27, 389 (2007).
5. Chandrashekar, J., Hoon, M. A., Ryba, N. J., & Zuker, C. S. The receptors and cells for mammalian taste. Nature. 444, 288-294 (2006).
6. Chale-Rush, A., Burgess, J. R., & Mattes, R. D. Evidence for human orosensory (taste?) sensitivity to free fatty acids. Chem Senses. 32, 423-431 (2007).
7. Mattes, R. D. Oral detection of short-, medium-, and long-chain free fatty acids in humans. Chem Senses. 34, 145-150 (2009).
8. Mattes, R. D. Accumulating evidence supports a taste component for free fatty acids in humans. Physiol Behav. 104, 624-631, doi:10.1016/j.physbeh.2011.05.002 (2011).
9. Stewart, J. E. et al. Oral sensitivity to fatty acids, food consumption and BMI in human subjects. Br J Nutr. 104, 145 (2010).
10. Newman, L. P., & Keast, R. S. J. The test-retest reliability of fatty acid taste thresholds. Chemosens Percept. doi:10.1007/s12078-013-9143-2, published online May (2013).
11. Tucker, R. M., & Mattes, R. D. Influences of repeated testing on nonesterified fatty acid taste. Chem Senses. 38, 325-332, doi:10.1093/chemse/bjt002 (2013).
12. Mattes, R. D. Is there a fatty acid taste? Annu Rev Nutr. 29, 305-327, doi:10.1146/annurev-nutr-080508-141108 (2009).
13. Stewart, J. E., Newman, L. P., & Keast, R. S. J. Oral sensitivity to oleic acid is associated with fat intake and body mass index. Clin Nutr. 30, 838-844, doi:10.1016/j.clnu.2011.06.007 (2011).
14. Mattes, R. D.
Oral thresholds and suprathreshold intensity ratings for free fatty acids on 3 tongue sites in humans: implications for transduction mechanisms. Chem Senses. 34, 415-423, doi:10.1093/chemse/bjp015 (2009).
15. Kamphuis, M. M., Saris, W. H., & Westerterp-Plantenga, M. S. The effect of addition of linoleic acid on food intake regulation in linoleic acid tasters and linoleic acid non-tasters. Br J Nutr. 90, 199-206 (2003).
16. Stewart, J. E., & Keast, R. S. Recent fat intake modulates fat taste sensitivity in lean and overweight subjects. Int J Obes. doi:10.1038/ijo.2011.155 (2011).
17. Stewart, J. E. et al. Marked differences in gustatory and gastrointestinal sensitivity to oleic acid between lean and obese men. Am J Clin Nutr. 93, 703-711 (2011).
18. Stewart, J. E., Feinle-Bisset, C., & Keast, R. S. J. Fatty acid detection during food consumption and digestion: Associations with ingestive behavior and obesity. Prog Lipid Res. 50, 225-233 (2011).
19. Miller, I. J., & Reedy, F. E. Variations in human taste bud density and taste intensity perception. Physiol Behav. 47, 1213-1219 (1990).
20. Delwiche, J. F., Buletic, Z., & Breslin, P. A. Relationship of papillae number to bitter intensity of quinine and PROP within and between individuals. Physiol Behav. 74, 329-337 (2001).
21. Zhang, G. H. et al. The relationship between fungiform papillae density and detection threshold for sucrose in the young males. Chem Senses. 34, 93-99, doi:10.1093/chemse/bjn059 (2009).
22. Doty, R. L., Bagla, R., Morgenson, M., & Mirza, N. NaCl thresholds: relationship to anterior tongue locus, area of stimulation, and number of fungiform papillae. Physiol Behav. 72, 373-378 (2001).
23. Hayes, J. E., & Duffy, V. B.
Revisiting sugar-fat mixtures: sweetness and creaminess vary with phenotypic markers of oral sensation. Chem Senses. 32, 225-236, doi:10.1093/chemse/bjl050 (2007).
24. Tepper, B. J., & Nurse, R. J. Fat perception is related to PROP taster status. Physiol Behav. 61, 949-954 (1997).
25. Tepper, B. J., & Nurse, R. J. PROP taster status is related to fat perception and preference. Ann N Y Acad Sci. 855, 802-804 (1998).
26. Shahbake, M., Hutchinson, I., Laing, D. G., & Jinks, A. L. Rapid quantitative assessment of fungiform papillae density in the human tongue. Brain Res. 1052, 196-201, doi:10.1016/j.brainres.2005.06.031 (2005).
27. World Health Organisation. Global Database on Body Mass Index. BMI classification. (2006).
28. Astrup, A. et al. Obesity as an adaptation to a high fat diet: Evidence from a cross sectional study. Am J Clin Nutr. 59, 350-355 (1994).
29. Mattes, R. D. Fat preference and adherence to a reduced-fat diet. Am J Clin Nutr. 57, 373-381 (1993).
30. Duffy, V. B. et al. Food preference questionnaire as a screening tool for assessing dietary risk of cardiovascular disease within health risk appraisals. J Am Diet Assoc. 107, 237-245 (2007).
31. Duffy, V. B. Surveying food/beverage liking: A tool for epidemiological studies to connect chemosensation with health outcomes. Ann NY Acad Sci. 1170, 558-568 (2009).
32. Running, C. A., Mattes, R. D., & Tucker, R. M. Fat taste in humans: Sources of within- and between-subject variability. Prog Lipid Res. 52, 438-445, doi:10.1016/j.plipres.2013.04.007 (2013).
33. Ralston, A. W., & Hoerr, C. W. The solubilities of the normal saturated fatty acids. J Org Chem. 7, 546-555 (1942).
34. Parodi, P. Milk fat in human nutrition. Australian Journal of Dairy Technology. 59, 3-59 (2004).
35. Pepino, M. Y., Love-Gregory, L., Klein, S., & Abumrad, N. A. The fatty acid translocase gene CD36 and lingual lipase influence oral sensitivity to fat in obese subjects. J Lipid Res. 53, 561-566, doi:10.1194/jlr.M021873 (2012).
36.
Lawless, H. T., Popper, R., & Kroll, B. J. A comparison of the labeled magnitude (LAM) scale, an 11-point category scale and the traditional 9-point hedonic scale. Food Qual Prefer. 21, 4-12, doi:10.1016/j.foodqual.2009.06.009 (2010).
37. Bartoshuk, L. M. Hedonic gLMS: a new scale that permits valid hedonic comparisons. Florida, USA, (2010).

High-Speed Direct-Detection Electron Detector for the TEAM Project
P. Denes,* M. Battaglia,** D. Contarato,* D. Doering,* P. Giubilato,*** D. Gnani,* B. Krieger* and V. Radmilovic*
* Lawrence Berkeley National Laboratory, Berkeley, CA 94720
** Department of Physics, University of California Santa Cruz, CA 95064
*** Dipartimento di Fisica, Universita` degli Studi, Padova, I-35131, Italy
Monolithic CMOS Active Pixel Sensors (APS), first described in 1967 [1],[2], are widely used today in both low- and high-end digital photography. The same technology can, with suitable modification, be used as a direct detector of electrons [3],[4]. For the TEAM (Transmission Electron Aberration-corrected Microscope) project [5], a 400 frame-per-second APS-based detector has been developed. This detector was engineered through a sequence of design optimizations, including precise modeling of electron interactions in the detector, radiation hardening [6],[7], and improvements in imaging modes [8]. The detector has a pixel pitch of 9.5 µm, with a Point Spread Function (PSF) of roughly 8 µm at 300 keV. In a "cluster counting" mode, the PSF can be reduced to 2 µm, as shown in FIG. 1. The operation of the TEAM detector, together with initial experimental results, will be described.
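The "cluster counting" mode mentioned above works by grouping the pixels fired by a single incident electron and replacing each cluster with its intensity-weighted centroid, which is how the effective PSF can fall below the charge-diffusion footprint. Below is a minimal sketch of that idea, with an assumed noise threshold and 4-connectivity; it is not the actual TEAM processing chain.

```python
from collections import deque

def cluster_centroids(frame, threshold=10):
    """Find connected clusters of above-threshold pixels in a frame
    (a list of rows of ADC counts) and return the intensity-weighted
    centroid of each cluster: one (row, col) estimate per incident
    electron.  4-connectivity; the threshold is an assumed noise cut."""
    rows, cols = len(frame), len(frame[0])
    seen = [[False] * cols for _ in range(rows)]
    centroids = []
    for r in range(rows):
        for c in range(cols):
            if frame[r][c] > threshold and not seen[r][c]:
                # Flood-fill one cluster starting from this seed pixel.
                queue, pixels = deque([(r, c)]), []
                seen[r][c] = True
                while queue:
                    y, x = queue.popleft()
                    pixels.append((y, x, frame[y][x]))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < rows and 0 <= nx < cols \
                                and frame[ny][nx] > threshold and not seen[ny][nx]:
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                total = sum(w for _, _, w in pixels)
                centroids.append((sum(y * w for y, _, w in pixels) / total,
                                  sum(x * w for _, x, w in pixels) / total))
    return centroids
```

For a hit whose charge spreads over two pixels weighted 30 and 10, the centroid lands a quarter of a pixel toward the weaker pixel, i.e., at sub-pixel precision rather than at the pixel pitch.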
Current developments in technology more advanced than that used for TEAM will also be described.
References
[1] G. Weckler, "Operation of p-n junction photodetectors in a photon flux integrating mode", IEEE J. Solid-State Circuits, Vol. SC-2, p. 65, 1967.
[2] P. Noble, "Self-scanned image detector arrays", IEEE Trans. Electron Devices, Vol. ED-15, p. 202, 1968.
[3] N. H. Xuong et al., "First use of a high-sensitivity active pixel sensor array as a detector for electron microscopy", Proc. SPIE, Vol. 5301 (2004) 242.
[4] A. R. Faruqi, D. M. Cattermole and C. Raeburn, Nucl. Instrum. Meth., Vol. 513 (2003) 317.
[5] C. Kisielowski et al., "Detection of Single Atoms and Buried Defects in Three Dimensions by Aberration-corrected Electron Microscopy with 0.5 Å Information Limit", Microscopy and Microanalysis, Vol. 14, 454-462.
[6] M. Battaglia et al., "A Rad-hard CMOS Active Pixel Sensor for Electron Microscopy", Nucl. Instrum. Meth., Vol. 598 (2009) 642.
[7] M. Battaglia et al., "CMOS Pixel Sensor Response to Low Energy Electrons in Transmission Electron Microscopy", Nucl. Instrum. Meth., Vol. 605 (2009) 350.
[8] M. Battaglia et al., "Cluster Imaging with a Direct Detection CMOS Pixel Sensor in Transmission Electron Microscopy", Nucl. Instrum. Meth., Vol. 608 (2009) 363.
doi:10.1017/S1431927610062215 Microsc. Microanal. 16 (Suppl 2), 2010 © Microscopy Society of America 2010
FIG. 1. Point Spread Function in the TEAM detector as a function of electron energy. Points represent measured values in "integrating" and "counting" imaging modes, and curves represent predictions from simulation.
WORKSHOP PAPER
Workshop 3: Novel approaches for estimating portion sizes
WL Wrieden1 and NC Momen2
1 Health Services Research Unit, University of Aberdeen, Aberdeen, UK and 2 MRC Human Nutrition Research, Cambridge, UK
European Journal of Clinical Nutrition (2009) 63, S80–S81; doi:10.1038/ejcn.2008.71
An essential step in measuring energy and nutrient intake is the quantification of the portion size of each food item recorded. One of the first comments made in this group discussion was that, in comparison with other areas of science, dietary assessment has not progressed much in the last couple of decades. A 'gold standard' method for estimation of portion size is still elusive. Previously used and established approaches to ascertain portion sizes include:
• Estimates of 'small/medium/large'
• Weighed diaries
• Food portion-size books
• Food portion photographs
• Food models
• Household measures
• PETRA scales
Each method has its advantages and disadvantages, which were discussed in the workshop and are outlined briefly below.
Small/medium/large
Advantage
• Reduces the amount of variation in the number of possible answers.
Disadvantages
• What one respondent considers to be a 'large' portion may not be large to another; the respondents' reports are based on what is normal for them.
• May be useful to assess variation in portion sizes and intakes in individuals on different days, for example, assessing variation between intake on weekdays and weekends.
• Unlikely to give an accurate assessment of portion size.
Comment
• The method can be improved by giving examples of what is meant by 'small', 'medium' or 'large' using descriptors.
Weighed diaries
Advantage
• Gives accurate portion sizes by their very nature.
Disadvantages
• The method is extremely time-consuming and relies on the respondents to remember to weigh their food before they start eating, and afterwards if they have leftovers.
• Problems can occur if the respondent takes second helpings and forgets to record them.
• There is a high possibility of reactivity with this method, that is, the respondents change how much (and what) they are eating because they are making a record of it.
Books on food portion size
Advantage
• The current booklet on portion size used in the UK is based on data from the National Diet and Nutrition Survey and is therefore representative of the British population.
Disadvantages
• Owing to the time lag between data collection for the survey and publication of the results, these data may not be contemporary.
• Small, medium and large portions are not stated for every food.
Correspondence: Dr WL Wrieden, Health Services Research Unit, University of Aberdeen, Aberdeen, UK. E-mail: w.l.wrieden@abdn.ac.uk
European Journal of Clinical Nutrition (2009) 63, S80–S81 © 2009 Macmillan Publishers Limited All rights reserved 0954-3007/09 $32.00 www.nature.com/ejcn
Food portion photographs
Advantage
• Photographs can help the respondents visualize how much they ate by comparing the amount of food eaten with portions presented in a number of photographs.
Disadvantages
• Photographs tend to guide choices regarding portion size, and the data are then categorized into only those portions given in the photographs.
• They are more difficult to use for meals when the respondent did not prepare the food, and when the food items are mixed or cover each other; for example, in a chicken curry the sauce covers and 'hides' the chicken and some of the rice.
Food models
Advantages
• Similar to photographs, they can assist respondents in visualizing their portion sizes.
• It can be easier to estimate the portion sizes of beverages from models compared with food photographs.
Disadvantages
• Food models may also guide choices regarding portion size, subject to the models available.
• Similar to photographs, they may be difficult to use for composite meals.
Household measures
Advantage
• Using standard measures such as cups or tablespoons should mean that more reliable portions are given.
Disadvantage
• There is variation in what people consider a household measure and in whether this is level or heaped.
Comment
• Data can be improved by including pictures of household measures and providing instructions on how to record portion sizes using these measures.
PETRA scales
Advantage
• These voice-recording scales had the benefits of weighed diaries (portions were accurate), with the added advantage that respondents were not required to write anything down.
Disadvantages
• The method relied on people remembering to weigh everything and making a full verbal record of what they ate.
• The method was expensive and required technical expertise to convert the data to a usable format.
Future directions for the estimation of portion sizes
The use of till receipts and digital cameras was identified as offering the potential to improve the estimation of portion sizes. Till receipt information assists with the coding of single-consumption products and ready meals. The advent of the Internet has brought easier access to product information, making it possible to check products without necessarily having to buy them, although sometimes this is still necessary.
Digital cameras have become much more affordable at the lower end of the product range, and they offer the potential to complement dietary records and assist with the portion-size estimation process if a ruler or any other marker that indicates the scale is included in the photograph. Using respondents' pictures to code diaries also reduces reliance on an accurate description of the food. The group discussed the need for more research, using comparisons with a number of other currently acceptable methods for the estimation of portion size. The disadvantages of using digital cameras are that they require a certain level of dexterity and technical knowledge and thus may be unsuitable in some populations, for example, in older people. The method also relies on people remembering to photograph leftovers and second helpings.
Conclusion
Estimation of portion sizes is a challenging area of dietary assessment. Contemporary information on portion sizes commonly eaten in a population is very valuable, especially given the current hypothesis that portion sizes are increasing. The methods to estimate portion sizes have not advanced in recent years. Manufacturers' information is now more widely available on the Internet, which can help the estimation of portion size; till receipt information may also be useful for a limited number of foods. Advances in technology, such as digital photography and more sophisticated approaches (such as those discussed in a parallel workshop on technology), have the potential to improve the estimation of portion size in dietary assessment. Young people are likely to respond positively to such technologies.
Disclosure
The authors have declared no financial interests.
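The ruler-as-scale-marker approach discussed under future directions reduces to a simple proportion between a known length and its measured pixel length. A hypothetical helper, assuming the ruler lies roughly in the same plane as the food so that one scale factor applies to both:

```python
def length_from_photo_mm(object_px, ruler_px, ruler_mm=100.0):
    """Convert a length measured in image pixels to millimetres using a
    ruler of known length captured in the same photograph.  Assumes the
    ruler and the food sit in approximately the same plane; strong
    perspective or tilt would require a fuller calibration."""
    return object_px * (ruler_mm / ruler_px)
```

For example, a plate spanning 900 px beside a 100-mm ruler spanning 450 px works out to about 200 mm across, from which a portion area or volume could then be estimated.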
Research Article
Monitoring Instantaneous Dynamic Displacements of Masonry Walls in Seismic Oscillation Outdoors by Monocular Digital Photography
Guojian Zhang,1,2 Guangli Guo,1,2 Chengxin Yu,3 Long Li,2,4 Sai Hu,2 and Xue Wang2
1 NASG Key Laboratory of Land Environment and Disaster Monitoring, China University of Mining and Technology, Daxue Road 1, Xuzhou 221116, Jiangsu, China
2 School of Environmental Science and Spatial Informatics, China University of Mining and Technology, Daxue Road 1, 221116 Xuzhou, Jiangsu, China
3 Business School, Shandong Jianzhu University, Fengming Road 1000, 250101 Ji'nan, Shandong, China
4 Department of Geography, Vrije Universiteit Brussel, Pleinlaan 2, 1050 Brussels, Belgium
Correspondence should be addressed to Guangli Guo; guo gli@126.com
Received 18 May 2018; Revised 24 July 2018; Accepted 26 July 2018; Published 7 August 2018
Academic Editor: Arkadiusz Zak
Copyright © 2018 Guojian Zhang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Understanding the development of cracks in masonry walls can provide insight into their capability for earthquake resistance. The crack development is characterized by the displacement difference of the adjacent positions on masonry walls. In seismic oscillation, the instantaneous dynamic displacements of multiple positions on masonry walls can warn of crack development and reflect the propagation of the seismic waves. For this reason, we proposed a monocular digital photography technique based on the PST-TBP (photographing scale transformation-time baseline parallax) method to monitor the instantaneous dynamic displacements of a masonry wall in seismic oscillation outdoors.
The seismic oscillation was simulated by impacting a suspended steel plate with a hammer and by the simulation software ANSYS (analysis system), for comparative analysis. The results show that it is feasible to use a hammer to impact a suspended steel plate to simulate the seismic oscillation, as the stress concentration zones of the masonry wall model in ANSYS are consistent with the positions of destruction on the masonry wall, and that the crack development of the masonry wall in the X-direction could be characterized by a sinusoid-like curve, which is consistent with previous studies. The PST-TBP method can improve the measurement accuracy as it corrects the parallax errors caused by the change of intrinsic and extrinsic parameters of a digital camera. South of the test masonry wall, the measurement errors of the PST-TBP method were shown to be 0.83 mm and 0.84 mm in the X- and Z-directions, respectively, and in the west, the measurement errors in the X- and Z-directions were 0.49 mm and 0.44 mm, respectively. This study provides a technical basis for monitoring the crack development of real masonry structures in seismic oscillation outdoors to assess their safety, and it has significant implications for improving the construction of masonry structures in earthquake-prone areas.
1. Introduction
The magnitude-7.9 Wenchuan earthquake of 2008 occurred along the Longmenshan Fault with an 11-degree epicenter intensity [1]. The earthquake was felt across China and was profiled as the strongest earthquake in China since 1949. Statistics show that this destructive earthquake knocked down ∼6,500,000 houses and destroyed ∼23,000,000. More than 80% of the structures affected were masonry houses [2]. Rapid house collapse led to numerous inhabitants being buried and thus killed. The catastrophic consequences have offered clear evidence of the poor performance of those masonry structures during the earthquake attack.
Since then, Chinese authorities at all levels have been implementing strict policies and rules on the resistance of buildings to earthquakes. The seismic resistance of masonry structures has therefore become a hotspot once again in recent years in China and beyond.
Hindawi Mathematical Problems in Engineering, Volume 2018, Article ID 4316087, 15 pages, https://doi.org/10.1155/2018/4316087
A crack in masonry structures caused by earthquake load is a common problem that needs to be solved. Monitoring crack development in masonry walls has important significance in the study of the earthquake resistance capability of masonry structures [3]. As seismic oscillation is an extremely complex process, the crack development mechanism of masonry structures is not clear, and there is no precise mathematical model to describe its change characteristics. Examining the current literature, most scholars adopt a structure test to study crack development in a masonry structure [4]. A few analyze masonry structures with an earthquake or dynamic load considered in a model. As finite element models emerged, Arne [5] and Pietro [6] successfully simulated masonry structures in seismic oscillation. However, the finite element method gradually becomes invalid for studying the crack development mechanism as the cracks in the masonry walls grow. The finite element method can reproduce the process of seismic oscillation well and reflect the influence of strain rate when a shaking table shakes, and it is useful for studying masonry walls in seismic oscillation.
Some researchers have also used shaking tables to study the failure pattern, dynamic response, and collapse mechanism of masonry structures [7]. Using shaking tables for large-scale masonry structures is, however, challenging due to the limited bearing capacity and size of the tables, and a shaking table is too expensive to be a routine means of studying the crack development of masonry walls in seismic oscillation. This study therefore used a hammer impacting a suspended steel plate to simulate seismic oscillation in the field. Few techniques and methods are available to monitor the crack development of masonry walls in seismic oscillation in the field, and those currently available cannot continuously monitor the instantaneous dynamic displacements of masonry walls in seismic oscillation in the field. This problem can be solved, however, by applying monocular digital photography. Monocular digital photography, which combines photogrammetric techniques with information technology, presents an opportunity to monitor the instantaneous dynamic displacement of engineering structures by adopting a nonmetric digital camera to observe multiple points simultaneously and to capture the instantaneous displacement of a deformable object [8]. Although digital photography has not been as popular for engineering structures as in other fields such as biomechanics, chemistry, biology, and architecture, many pioneering applications have demonstrated its underestimated potential, e.g., for monitoring bridge structures [9–11], structural cracks [12–15], and masonry wall displacements with the digital image correlation (DIC) method [3, 16–20]. Moreover, the DIC method has also been used to monitor strain development on bricks [21]. These examples demonstrate that it is feasible to monitor masonry walls with monocular digital photography to examine the crack development pattern in seismic oscillation.
The DIC method, which is popular for studying the crack development of a masonry wall in the laboratory, needs a special light source for the test. The light cannot be projected evenly onto the structure when monitoring large engineering structures, resulting in low measurement accuracy; the accuracy is even lower outdoors, where higher light quality is required. The objective of this study is to propose monocular digital photography based on the PST-TBP (photographing scale transformation-time baseline parallax) method [22, 23] to monitor the instantaneous dynamic displacements of masonry walls in seismic oscillation outdoors. Based on the monitoring data obtained with this technique, we analyzed the influence of the S-wave (shear wave) on masonry walls in seismic oscillation and investigated the relationship between the crack development pattern and the seismic wave propagation on the masonry wall. Monocular digital photography based on the PST-TBP method can even be used to monitor a real masonry house in seismic oscillation to warn of possible danger.

2. Monocular Digital Photography

Both the DLT (direct linear transformation) method [24] and the PST-TBP (photographing scale transformation-time baseline parallax) method [25] can be used to process the image sequences. The DLT method requires at least six reference points, which should be evenly distributed and encircle the target. In addition, their spatial coordinates need to be accurate enough to allow calculation of the spatial coordinates of the deformation points [26]. These requirements are, however, difficult to satisfy outdoors. As a result, the DLT method is often used to monitor small objects in the laboratory. The PST-TBP method was therefore used in this study. In this method, the reference points are used to match a zero image with successive images, and their placement is not as strict as in the DLT method.
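As a concrete illustration of the DLT constraint count (two linear equations per control point, eleven parameters, hence at least six points), the following sketch sets up and solves the standard 11-parameter DLT system by least squares. The camera parameters and control-point coordinates here are synthetic values invented for illustration, not the calibration data of this study.

```python
import numpy as np

# Hypothetical 11 DLT parameters (L1..L11) of a synthetic camera.
L_true = np.array([800.0, 10.0, 5.0, 400.0,
                   8.0, 820.0, 3.0, 300.0,
                   0.001, 0.002, 0.0005])

def project(point, L):
    """Map a 3D point to pixel coordinates with the 11 DLT parameters."""
    X, Y, Z = point
    w = L[8] * X + L[9] * Y + L[10] * Z + 1.0
    u = (L[0] * X + L[1] * Y + L[2] * Z + L[3]) / w
    v = (L[4] * X + L[5] * Y + L[6] * Z + L[7]) / w
    return u, v

# Six non-coplanar control points: the minimum the DLT method requires.
controls = np.array([[0, 0, 0], [1.2, 0, 0], [0, 1.1, 0],
                     [0, 0, 0.9], [1.0, 1.0, 0.2], [0.8, 0.3, 1.1]])
pixels = np.array([project(p, L_true) for p in controls])

# Each control point contributes two linear equations in the 11 unknowns.
A, b = [], []
for (X, Y, Z), (u, v) in zip(controls, pixels):
    A.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z]); b.append(u)
    A.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z]); b.append(v)
L_est, *_ = np.linalg.lstsq(np.asarray(A), np.asarray(b), rcond=None)

# With exact observations, the recovered parameters reproject the controls
# back onto the observed pixel coordinates.
residual = max(np.hypot(*(np.array(project(p, L_est)) - q))
               for p, q in zip(controls, pixels))
```

With real, noisy image measurements one would use more than six control points and inspect the residual before trusting the solution.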
It is not essential to obtain the spatial coordinates of the reference points, because the objective is to obtain the relative displacements of the targets.

2.1. Distortion Correction of Nonmetric Digital Cameras. The distortion of the camera used in photography contributes to a decrease in measurement accuracy [27, 28]. The distortion error is linear near the center of the images at a short photographing distance when the digital camera and the monitored object do not move in the test field [29]. We therefore adopted a grid-based method [30] to eliminate the distortion of the digital camera (detailed in Section 2.3). Figure 1 illustrates that the distortion error moves a feature point on the monitored object from Position A to Position A' in the camera's view, with ΔX and ΔZ the corresponding horizontal and vertical components. The steps to correct the distortion error by a grid are as follows. First, a grid was fixed at a photographing distance S from a Sony-350 digital camera (Figure 2 and Table 1); the photographing distance was recorded, and the corresponding photos of the grid were taken. Second, we compared the highest-quality photo of the grid with the real, distortion-free grid and carefully observed feature points on the monitored object, such as point A, to analyze the size and direction of the distortion. Third, the camera was moved along the photographing direction and the first and second steps were repeated. Last, a mathematical expression for eliminating the distortion was formulated.

Figure 1: Analysis of distortion error through a 50 cm × 50 cm grid.

Table 1: Technical parameters of the Sony-350 camera used in the study.
Type: Sony DSLR A350 (Sony-350); Sensor: CCD; Sensor scale: 23.5 × 15.7 mm; Focal length: 35 mm (27–375); Active pixels: 4592 × 3056 pixels.

After correction, the DLT method was used to assess the measurement accuracy of the digital camera. For ease of calculation, we measured the spatial coordinates of the reference points and deformation points in relation to the total station position in the laboratory (Table 2). Taking the top left corner of an image as the origin of the pixel coordinate system, we recorded their pixel coordinates in relation to the origin (Table 3). The DLT method was used to calculate the spatial coordinates of deformation points U0 and U1 based on the coordinates in Tables 2 and 3. Their differences were obtained by comparing the actual coordinates of U0 and U1 with their calculated coordinates (Table 4); the actual coordinates were the spatial coordinates of the deformation points monitored by a total station, and the calculated coordinates were those computed by the DLT method. The maximal and minimal measurement errors were 2 mm and 0 mm, respectively, with an average measurement error of 1 mm, suggesting that the digital camera used in this study meets the accuracy requirements of deformation observation.

Table 2: Spatial coordinates (/m) of reference points C0 to C7.
Point no.   C0       C1       C2       C3       C4       C5       C6       C7
X           108.343  109.739  110.144  110.144  109.492  108.862  108.456  108.713
Y           95.770   95.804   95.957   95.957   97.022   97.199   96.194   96.487
Z           99.534   99.542   99.540   99.540   99.568   99.240   99.225   99.384

2.2. Principle of Photographic Scale Transformation. The photographing scale at a given position changes with the photographing distance (the distance from that position to the photographing center) [31, 32]. Figure 3 shows a schematic diagram of a CCD (charge-coupled device) camera capturing images at different photographing distances H3 and H4.
According to Figure 3, the relationship between pixel counts and distances can be described by

\frac{H_1}{H_2+H_3} = \frac{N}{D_1}, \qquad \frac{H_1}{H_2+H_4} = \frac{N}{D_2}  (1)

In general, H3 and H4 are meter-sized, while H2 is centimeter-sized. If H2 is ignored when the camera is far from the monitored object, (1) can be expressed as

\frac{H_1}{H_3} = \frac{N}{D_1}, \qquad \frac{H_1}{H_4} = \frac{N}{D_2}  (2)

From (2), we have

D_2 = \frac{H_4}{H_3} \cdot D_1  (3)

Assume that M1 and M2 are the photographing scales of the reference plane and the object plane, respectively. According to (3), we have

M_2 = \frac{H_4}{H_3} \cdot M_1  (4)

Namely,

M_2 = \Delta_{PSTC} \cdot M_1  (5)

where \Delta_{PSTC} is the photographing scale transformation coefficient and \Delta_{PSTC} = H_4/H_3.

2.3. Photographing Scale Transformation-Time Baseline Parallax Method. If no errors exist in the measurement, the horizontal and vertical displacements on the object plane of a deformation point are given by

\Delta X_{PST} = M \cdot \Delta_{PSTC} \cdot \Delta P_x = M_{PST} \cdot \Delta P_x, \qquad \Delta Z_{PST} = M \cdot \Delta_{PSTC} \cdot \Delta P_z = M_{PST} \cdot \Delta P_z  (6)

where \Delta X_{PST} and \Delta Z_{PST} are the horizontal and vertical displacements on the object plane of a deformation point, \Delta P_x and \Delta P_z are the horizontal and vertical displacements on the image plane of a deformation point, M is the photographing scale on the reference plane, and M_{PST} is the photographing scale on the object plane. \Delta P_x and \Delta P_z contain parallax errors.

Figure 2: A Sony-350 camera used for the monocular digital photography ((a) front view; (b) side view).

Table 3: Pixel coordinates (/pixel) of reference points (C0 to C7) and deformation points (U0 and U1).
Point no.     C0    C1    C2    C3    C4    C5    C6    C7    U0    U1
Photo 1  X    205   428   581   958   1540  1864  782   1120  429   1541
         Z    687   468   437   451   490   924   1057  796   700   718
Photo 2  X    144   1179  1486  1716  1918  1631  487   907   619   1543
         Z    572   495   491   530   605   1101  948   799   643   826

Table 4: Accuracy assessment for deformation points U0 and U1.
Name   Actual coordinates/m   Calculated coordinates/m   Differences/mm
U0-X   108.825                108.826                    1
U0-Y   95.887                 95.888                     1
U0-Z   99.441                 99.440                     1
U1-X   109.067                109.065                    2
U1-Y   96.935                 96.934                     1
U1-Z   99.394                 99.394                     0
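Numerically, the scale transformation in (3)–(6) is a single multiplicative factor. A minimal sketch, using made-up distances and scales rather than the study's measurements:

```python
# Sketch of the photographing scale transformation; all values below are
# illustrative, not measurements from the study.
H3 = 10.0   # photographing distance to the reference plane (m)
H4 = 14.3   # photographing distance to the object plane (m)
M1 = 1.0    # photographing scale on the reference plane (mm/pixel)

delta_PSTC = H4 / H3      # scale transformation coefficient, per (5)
M2 = delta_PSTC * M1      # photographing scale on the object plane, per (4)

# Pixel displacements on the image plane, scaled to the object plane per (6).
dPx, dPz = 2.0, -1.5      # displacements on the image plane (pixels)
dX_PST = M1 * delta_PSTC * dPx
dZ_PST = M1 * delta_PSTC * dPz
```

The same factor converts both the photographing scale and any pixel displacement, which is why the method needs only the two photographing distances rather than a full recalibration.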
The PST-TBP method was, however, used to eliminate the errors caused by the change of the intrinsic and extrinsic parameters of a digital camera. The PST-TBP method consists of three steps. First, the TBP method is used to obtain the displacements on the reference plane of a deformation point; these displacements contain the parallax errors caused by the change of the intrinsic and extrinsic parameters of the digital camera. Second, the reference points are used to match a zero image with the successive images to eliminate the parallax errors, yielding the corrected displacements on the reference plane. Last, the real displacements of a deformation point are obtained as its corrected displacements on the reference plane multiplied by the photographing scale transformation coefficient. The details are as follows.

In the first step, when a point on the object plane moves from A to B (Figure 4), the horizontal and vertical displacements on the reference plane of a deformation point are given by

\Delta X = M \cdot \Delta P_x, \qquad \Delta Z = M \cdot \Delta P_z  (7)

where \Delta X and \Delta Z are the horizontal and vertical displacements on the reference plane of a deformation point, \Delta P_x and \Delta P_z are the horizontal and vertical displacements on the image plane of a deformation point, and M is the photographing scale on the reference plane. \Delta P_x and \Delta P_z contain parallax errors.

In the second step, some points are laid at a stable position around the camera to form a reference plane that is perpendicular to the photographing direction, and the parallax is eliminated by differencing a zero image with each successive image based on the reference plane.
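These three steps amount to a least-squares plane fit over the reference points, a subtraction, and two scalings, as formalized in (8)–(11) below. A minimal sketch with invented image coordinates and parallaxes (none of the values are from the study):

```python
import numpy as np

# Image-plane coordinates (x, z) of six reference points on the stakes
# (invented values for illustration).
ref_xz = np.array([[100.0, 600.0], [400.0, 580.0], [900.0, 610.0],
                   [1500.0, 590.0], [1800.0, 620.0], [1200.0, 605.0]])
# Their measured image-plane parallaxes (Px, Pz) between the zero image
# and a successive image (also invented).
ref_P = np.array([[1.1, -0.6], [1.3, -0.5], [1.6, -0.7],
                  [2.0, -0.4], [2.2, -0.8], [1.8, -0.5]])

# Fit the parallax planes Px = ax*x + bx*z + c and Pz = az*x + bz*z + d.
A = np.column_stack([ref_xz[:, 0], ref_xz[:, 1], np.ones(len(ref_xz))])
coef_x, *_ = np.linalg.lstsq(A, ref_P[:, 0], rcond=None)
coef_z, *_ = np.linalg.lstsq(A, ref_P[:, 1], rcond=None)

def parallax(x, z):
    """Parallax (Px, Pz) predicted by the reference plane at (x, z)."""
    return (coef_x[0] * x + coef_x[1] * z + coef_x[2],
            coef_z[0] * x + coef_z[1] * z + coef_z[2])

# Correct the measured pixel displacements of a deformation point.
x_u, z_u = 800.0, 700.0   # image coordinates of the deformation point
dPx, dPz = 5.0, 3.0       # measured pixel displacements (with parallax)
Px_u, Pz_u = parallax(x_u, z_u)
dPx_c, dPz_c = dPx - Px_u, dPz - Pz_u

# Scale to the reference plane (M, mm/pixel), then to the object plane
# with the coefficient delta_PSTC = H4/H3.
M, delta_PSTC = 1.43, 1.2
dX_real = delta_PSTC * M * dPx_c
dZ_real = delta_PSTC * M * dPz_c
```

The fit needs at least three non-collinear reference points; using more, as here, averages out measurement noise in the parallax estimates.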
Figure 3: Schematic diagram of photographing scale transformation. H1 is the focal length of the CCD camera; H2 is the distance between the optical origin (o) and the front end of the CCD camera; D1 on the reference plane and D2 on the object plane are the real-world lengths formed by the view field of the CCD camera at photographing distances H3 and H4, respectively; and N is the maximal pixel number in a horizontal scan line of the image plane, which is fixed and known a priori, irrespective of the photographing distance.

The reference plane in Figure 4 consists of six reference points labeled C0–C5 (at least three reference points are required), and the reference plane equations can be expressed as

P_x = a_x x + b_x z + c, \qquad P_z = a_z x + b_z z + d  (8)

where (x, z) and (P_x, P_z) are the image plane coordinates and the image plane parallaxes of a reference point, respectively; (a_x, b_x) and (a_z, b_z) are the parallax coefficients in the X- and Z-directions, respectively; and (c, d) are the constant parallax coefficients in the X- and Z-directions, respectively. After correcting the pixel displacements on the image plane of a deformation point based on the parallaxes of the reference plane, the corrected pixel displacements on the image plane of the deformation point are obtained:

\Delta P'_x = \Delta P_x - P_x, \qquad \Delta P'_z = \Delta P_z - P_z  (9)

where (\Delta P'_x, \Delta P'_z) and (\Delta P_x, \Delta P_z) are the corrected pixel displacements and the measured pixel displacements on the image plane of a deformation point, respectively. Then, we obtain the corrected displacements on the reference plane of the deformation point:

\Delta X' = M \cdot \Delta P'_x, \qquad \Delta Z' = M \cdot \Delta P'_z  (10)

Figure 4: Photographic scale transformation-time baseline parallax (PST-TBP) method.
where (\Delta X', \Delta Z') are the corrected displacements on the reference plane of a deformation point.

In the last step, the real displacements on the object plane of a deformation point are given by

(\Delta X_{PST})' = \Delta_{PSTC} \cdot \Delta X', \qquad (\Delta Z_{PST})' = \Delta_{PSTC} \cdot \Delta Z'  (11)

where (\Delta X_{PST})' and (\Delta Z_{PST})' are the real displacements on the object plane of a deformation point.

To improve data processing speed, a data processing toolkit was developed in Microsoft Visual C++ 6.0 that synchronizes the measurement of the pixel coordinates of deformation points with the data processing. The procedure of the toolkit is detailed in Figure 5. The PST-TBP method reduces to the IM-TBP (image matching-time baseline parallax) method when \Delta_{PSTC} is 1 [33].

3. Masonry Wall Test in Seismic Oscillation

3.1. Test Preparation. Following the study of Sun [34], a masonry wall with an aspect ratio of 0.67 was used in our test. According to the test conditions and the corresponding design specifications [35], we adopted the Yishun Yiding method [36] to build a masonry wall (1.2 m × 0.8 m × 0.24 m) with bricks (240 mm × 115 mm × 53 mm) and cement mortar of a grade between M2.5 and M5.0 (Figure 6(a)) at a construction site where the foundation was being treated. The foundation soil was undisturbed, yellow, compacted silty clay of strong compressive strength.
A pit (2.1 m × 1.4 m × 0.2 m) was dug in the foundation, and a manganese steel plate (2.3 m × 1.5 m × 0.15 m) was placed above the pit. The manganese steel plate, above which the masonry wall was built, hardly generated plastic deformation when impacted by the iron hammer, owing to its strong elasticity and strength. The impact hammer was a 25-kg iron cylinder (50 cm in height and 12 cm in diameter) whose upper portion was welded with an arc steel bar as a hook. Circular targets were uniformly distributed on the bricks of the masonry wall without obscuring the connections between the bricks, with diameters kept below 53 mm whenever possible. Figure 6(b) gives an overview of the test field, showing the relative positions of the digital cameras and the masonry wall. In addition, awls were used to fix the steel plate in the test. Stakes were constructed at a stable place on both sides of the masonry wall, and reference points were laid on the stakes. Six Sony-350 digital cameras were used: two each on the north and south sides of the masonry wall and one each on the east and west sides. They captured the instantaneous dynamic displacements of the masonry wall when the 25-kg hammer fell freely and hit the steel plate.

Figure 5: Flowchart of data processing: open the zero photograph and each successive photograph and obtain their scale coefficients; locate the deformation points by the Hough transform; eliminate the parallax of the deformation points by image matching-time baseline parallax; obtain the deformation values on the reference plane; filter the deformation data to eliminate random noise; and obtain the actual deformation with the photographing scale transformation coefficient, repeating until every group is calculated.

3.2. Test Process.
As shown in Figure 7, we used a hammer impacting a steel plate to simulate seismic oscillation. The test procedure was as follows: (1) The impact hammer was tied to a rope and lifted to 0.3 m above the ground, and the digital cameras were adjusted to shoot the masonry wall clearly. (2) The digital cameras shot the masonry wall to produce a zero image. (3) The digital cameras shot the masonry wall while the 25-kg hammer fell freely and impacted the steel plate, completing one test group. (4) The impact hammer was then lifted to the next of the following heights above the ground: 0.6 m, 0.9 m, 1.2 m, 1.5 m, 1.8 m, 2.1 m, 2.4 m, 2.7 m, 3.0 m, 3.3 m, 3.6 m, 3.9 m, 4.2 m, 4.5 m, 4.8 m, 5.1 m, 5.4 m, and 5.4 m, and Step (3) was repeated for each height.

4. Numerical Simulation of Masonry Wall in Seismic Oscillation

As this study aims to simulate and analyze the failure process of a masonry wall and its crack development, a global continuous model was adopted. The constitutive model of the masonry material is based on Material Model 3, the plastic kinematic model, in LS-DYNA. In this study, the masonry wall was characterized by MU10-strength-grade fired common brick, M7.5-strength-grade mortar, a material density of 1700 kg/m3, an elastic modulus of 2.704 × 10^9 Pa, a Poisson ratio of 0.2, and a compressive strength of 1.69 MPa. Its bottom base was set to be rigid when we established a 3D (three-dimensional) solid model of the masonry wall (Figure 8). Contact and collision were considered in the calculation: we selected Single Face, automatic single surface contact (ASSC), set the static and dynamic friction coefficients to 0.1, and left the other parameters at the default values given by ANSYS.
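The horizontal excitation defined by the seismic wave expression labeled (12) below can be evaluated directly as an acceleration time history. A minimal sketch; the 0.5-unit sampling step is an illustrative choice, not a value from the paper:

```python
import math

# Acceleration of the simulated seismic wave, per the expression (12).
def alpha(t):
    """Acceleration of the seismic wave at time t."""
    return (math.sin(0.1 * t) * (1.0 - math.cos(0.02 * t))
            * (math.sin(t) + math.sin(1.1 * t) + math.sin(1.2 * t)))

# Sample the excitation at a 0.5-unit step over the first 60 time units.
samples = [alpha(0.5 * k) for k in range(121)]
```

Because every factor vanishes at t = 0, the excitation starts from rest and its envelope grows slowly through the sin(0.1t) and (1 − cos(0.02t)) terms.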
Then, we imposed horizontal seismic waves on the masonry wall; the expression of the seismic wave used in this study is

\alpha = \sin(0.1t)\,(1 - \cos(0.02t))\,(\sin t + \sin(1.1t) + \sin(1.2t))  (12)

where \alpha is the acceleration of the seismic wave and t is the time of seismic wave propagation.

5. Data Analysis and Discussion

5.1. Comparing the Results Obtained through the Field Test and the Numerical Simulation. After the test, the masonry wall was damaged (Figure 9). The east-down part of the masonry wall developed longitudinal through-cracks (Line 1). Diagonal cracks (Line 2 and Line 3) emerged along the diagonals of the masonry wall. Crossing cracks (Line 4 and Line 5) occurred on the west of the masonry wall. Masonry damage was caused mainly by the shear stress created by the impact hammer; cracks develop on a masonry wall when the shear stress reaches a certain level. As shown in Figure 9(b), the stress concentration zones of the masonry wall were at its top left corner, top right corner, central region, two diagonals, and right side, and particularly at the bottom left corner. These stress concentration zones did develop cracks. For example, U33 and U34 in Figure 9(a) represent red zones in Figure 9(b), U0 represents the yellow zone in the top right corner of the masonry wall, and Line 1 represents the turquoise zone on the right side of the masonry wall. These results indicate that our test is consistent with the numerical simulation and that it is feasible to use the hammer impacting the suspended steel plate to simulate seismic oscillation.

Figure 6: Test setup ((a) brick masonry wall, taken on November 9, 2008; (b) illustration of the test field, showing the relative positions of the six cameras, the masonry wall, the steel plate, the hammer, the four stakes, and the four awls).

Figure 7: Monitoring a masonry wall in seismic oscillation in the field.

5.2.
Analysis of Measurement Accuracy. Reference points were labeled C0–C11 on stakes 3 and 4 beside the masonry wall, and deformation points were labeled U0–U38 on the south side of the masonry wall (Figure 10). To assess the measurement accuracy of the PST-TBP method, we selected six reference points, C0–C2 and C6–C8. In theory, these points do not move during the monitoring process, which means that their displacements should be zero. However, the displacements of these reference points obtained by the PST-TBP method were not zero, and these displacements can be considered the measurement accuracy of the monocular digital photography used in this study. As the photographic scale is 1.43 mm/pixel, Tables 5 and 6 show that the average measurement accuracy in the X- and Z-directions was 0.58 pixels (0.83 mm) and 0.59 pixels (0.84 mm), respectively. On the west side, the average measurement accuracy in the X- and Z-directions was 0.49 mm and 0.44 mm, respectively [33]. Thus, the PST-TBP method improves the measurement accuracy of a digital camera.

Figure 8: A 3D (three-dimensional) solid model of the masonry wall, created in ANSYS.

Table 5: Measurement accuracy (/pixel) in the X-direction.
Point name          C0    C1    C2    C6    C7    C8
Standard deviation  0.57  0.65  0.64  0.53  0.46  0.64
Average             0.58

Figure 9: Masonry wall after seismic oscillation ((a) the masonry wall damaged after the test; (b) shear stress simulation with ANSYS).

Note that in the DIC method, a balance does exist between high spatial resolution and accuracy, because the basic principle, the gray level distribution, of the DIC
method decides that the tradeoff for enhanced accuracy is a reduced spatial resolution, and the spatial resolution is the subset size [20]. The proposed method, however, is based on the principle of correspondence point matching; in our paper, the correspondence points are the reference points on the stakes. Once the positions of the reference points and the camera are determined, the measurement accuracy of the PST-TBP method is determined. The measurement accuracy of the PST-TBP method increases with the camera resolution when the photographing distance is fixed, and it has nothing to do with the spatial resolution (the subset size). The two methods differ in their basic principles of handling the images; thus, we cannot compare them in terms of spatial resolution (the subset size), and in the proposed technique there is no functional relationship between the spatial resolution (the subset size) and the accuracy, as there is in the DIC method.

Figure 10: The distribution of selected monitoring points on the masonry wall and the stakes.

Table 6: Measurement accuracy (/pixel) in the Z-direction.
Point name          C0    C1    C2    C6    C7    C8
Standard deviation  0.51  0.62  0.70  0.47  0.81  0.45
Average             0.59

The proposed method provides a new way to monitor the crack development of a masonry wall in seismic oscillation by the PST-TBP (photographing scale transformation-time baseline parallax) method outside the laboratory, and this paper aims to explore its feasibility. In our test, we did not use the DIC method to monitor the masonry wall in seismic oscillation; thus, we cannot directly compare the proposed method with the DIC method in terms of measurement accuracy. We consulted the literature extensively to find applications of the DIC method outdoors.
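As an aside, the pixel-to-millimetre conversion behind the accuracy figures in Tables 5 and 6 is a simple average-and-scale computation, reproduced here from the tabulated standard deviations and the stated photographic scale of 1.43 mm/pixel:

```python
# Convert the per-point standard deviations of Tables 5 and 6 into the
# average accuracy in pixels and millimetres (scale: 1.43 mm/pixel).
SCALE = 1.43  # mm per pixel

std_x = [0.57, 0.65, 0.64, 0.53, 0.46, 0.64]  # Table 5, C0-C2 and C6-C8
std_z = [0.51, 0.62, 0.70, 0.47, 0.81, 0.45]  # Table 6, C0-C2 and C6-C8

def accuracy(stds):
    """Average standard deviation in pixels and its millimetre equivalent."""
    mean_px = round(sum(stds) / len(stds), 2)
    return mean_px, round(mean_px * SCALE, 2)

acc_x = accuracy(std_x)   # (0.58, 0.83): matches the values in the text
acc_z = accuracy(std_z)   # (0.59, 0.84): matches the values in the text
```

The millimetre figures quoted in Section 5.2 (0.83 mm and 0.84 mm) follow directly from rounding the averaged pixel deviations before scaling.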
A few researchers, such as Michel Küntz and Tiago Ramos, have used the DIC method to monitor engineering structures outside a laboratory, but they did not report the measurement accuracy of the DIC method outdoors. The DIC method is very sensitive to outdoor light conditions, and changes in external light intensity have a great influence on the measurement precision; the accuracy of the DIC method outside a laboratory is therefore much lower than in a laboratory, although no detailed data on it are available. In future tests, we will use both methods to monitor masonry walls in seismic oscillation and compare the proposed method with the DIC method with respect to accuracy.

5.3. Masonry Wall Deformation Caused by the S-Wave. The seismic wave in the test is a body wave that consists of a P-wave (pressure wave) and an S-wave (shear wave). The shear wave can cause shear deformation and brittle fracture in a masonry structure. Furthermore, the S-wave is divided into an SH-wave (shear horizontal wave) and an SV-wave (shear vertical wave). This paper therefore discusses the influence of the SH-wave and the SV-wave on the masonry wall based on the test results.

5.3.1. Masonry Wall Deformation Caused by an SH-Wave. The SH-wave caused the masonry wall to move in the horizontal direction, which is perpendicular to the direction of S-wave propagation.

Figure 11: Both sides of the masonry wall ((a) the west side; (b) the east side).

Figure 12: Deformation curves for the masonry wall ((a) the west side; (b) the east side).
We therefore chose the west and east sides of the masonry wall to examine the influence of the SH-wave on the masonry wall. As shown in Figure 11, the east side was near the hypocenter (the position impacted by the hammer), while the west side was far from the hypocenter. We drew deformation curves of the masonry wall for both sides to study the degree of damage along the direction of SH-wave propagation (Figure 12). X0, X1, X2, X3, X4, and X5 in Figure 12 represent the displacements of U0, U1, U2, U3, U4, and U5 in the horizontal direction, respectively. According to Figure 12, the displacements of the deformation points on the east side of the masonry wall remained within the elastic range, and there was good integrity between the brick and the mud. However, plastic deformation occurred on the other side (i.e., the west side of the masonry wall), far from the hypocenter, as the intensity of the seismic oscillation increased, and the integrity between the brick and the mud was destroyed. The destruction from the SH-wave was gradually enhanced along its propagation direction.

5.3.2. Masonry Wall Deformation Caused by the SV-Wave. The SV-wave caused the masonry wall to move in the vertical direction, which is perpendicular to the direction of S-wave propagation. To study the propagation characteristics of the SV-wave in the masonry wall, we divided the masonry wall into five layers (Figure 13) and drew wave profiles (Figure 14) of the deformation points for each layer. Figure 14 shows that the displacements decreased gradually from east to west in the first and second layers and that the displacements fluctuated as sinusoid or cosine curves from east to west in the third, fourth, and fifth layers, particularly in the fifth layer in Figure 14(b). In conclusion, the SV-wave

Table 7: Maximum displacement differences (/mm) in the Z-direction between the adjacent layers.
Adjacent layers  U31-U32  U33-U34  U21-U22  U18-U19  U10-U11  U7-U8
Difference       4.31     6.02     3.15     4.33     3.06     2.65

Table 8: Relative deformation (/mm) in the X-direction.
Test  U34    U33    U27    U22    U21    U0    U3    U2    U4
15    -4.22  -5.7   -4.39  -1.6   2.65   3.76  2.45  2.4   1.07
16    -5.84  -8.69  -7.25  -1.53  2.79   4.18  4.14  2.72  -0.18
17    -3.98  -9.83  -7.17  -0.07  4.11   8.03  6.92  3.93  -0.13
18    -4.02  -9.77  -9.86  -1.32  4.39   8.44  7.05  5.6   1.34
19    -4.16  -10.3  -9.29  -0.67  6.1    8.71  9.79  6.51  1.58

Figure 13: Layers on the masonry wall.

heavily affects not only the side near the hypocenter but also the third, fourth, and fifth layers of the masonry wall. In addition, we chose the displacement data of tests 16, 17, 18, and 19 to calculate the displacement differences between the adjacent layers; the maximum displacement differences are listed in Table 7. The figures show that the SV-wave destroyed the stability between some bricks and mud, such as that between U31 and U32, U33 and U34, U21 and U22, U18 and U19, and U10 and U11. The masonry wall then developed cracks and brittle failure at these locations. Thus, the SV-wave caused the longitudinal through-cracks on the side near the hypocenter and the diagonal cracks along the diagonals of the masonry wall.

5.4. Crack Development of the Masonry Wall in Seismic Oscillation. In this section, the south and west sides of the masonry wall were selected to examine their crack development. Figure 15 shows the positions of the deformation points on the south side of the masonry wall before (red) and after (blue) the seismic oscillation tests. We reduced the scale of the masonry wall from 1200 mm × 800 mm to 60 mm × 80 mm to show the crack development of the masonry wall clearly.
For narrative convenience, we define the X-direction as the position along the length of the masonry wall and the Z-direction as the position along the height of the masonry wall; a point (X0, Z0) is at position X0 along the length and Z0 along the height of the masonry wall. Figure 15 also clearly shows the direction of movement of each point on the masonry wall. Crack development is therefore determined by the relative movement of adjacent deformation points. For example, in Figure 15(b), point (0, 60) moves to the right and point (0, 40) moves to the left; cracks will therefore develop between point (0, 60) and point (0, 40). In Figure 15(a), point (0, 20) moves 8.69 mm to the left and point (10, 20) moves 2.97 mm to the left; the displacement difference between them is 5.72 mm, and cracks will therefore develop between them. To clearly understand the relationship between the seismic wave propagation and the crack development of the masonry wall, we focused on the following deformation points around the cracks: U0, U2, U3, U4, U21, U22, U27, U33, and U34. As tests 15 to 19 gradually destroyed the masonry wall, we selected their displacement data (Tables 8 and 9) for study; Figure 16 was drawn from these data. In Figure 16(a), it is striking that the shape of the wave profile of the cracks in the X-direction is very similar to a sinusoid or cosine curve, and the similarity increases with the seismic oscillation intensity. This phenomenon is consistent with the results from a shaking table [20] and further suggests that it is feasible to simulate seismic oscillation by using a hammer to impact a suspended steel plate. In Figure 16(b), the wave profiles of the cracks in the Z-direction fluctuate markedly: they move up and down around a horizontal line, which conforms to the characteristics of seismic wave propagation.
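The adjacency rule just described (a crack is expected where adjacent points move in opposite directions or show a large displacement difference) can be sketched as a simple check. The 3 mm threshold below is an illustrative choice, not a value from the paper:

```python
# Crack criterion from the relative movement of adjacent deformation
# points; the 3 mm threshold is illustrative, not from the paper.
def crack_between(d1, d2, threshold=3.0):
    """Flag a likely crack between two adjacent points from their signed
    displacements (mm; negative means leftward)."""
    opposite = d1 * d2 < 0                 # moving in opposite directions
    return opposite or abs(d1 - d2) > threshold

# The example in the text: point (0, 20) moves 8.69 mm to the left and
# point (10, 20) moves 2.97 mm to the left; the difference is 5.72 mm.
diff = abs(-8.69 - (-2.97))
```

Applied to the example from Figure 15(a), the difference of 5.72 mm exceeds the threshold, so a crack is flagged between the two points, matching the observation in the text.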
The wave profiles of the cracks therefore represent well the relationship between the crack development of the masonry wall and the seismic wave propagation. On the west side, we plotted deformation curves of the deformation points. Figure 17(a) shows that the maximal displacement difference for the obvious horizontal crack that developed between U0 and U4 is 2.56 mm, while Figure 17(b) shows that the maximal displacement difference for the obvious vertical crack that developed between U0 and U4 is 4.31 mm. Moreover, slight horizontal cracks developed at the position between U2 and U5 and the position between U1 and U4. These findings are consistent with the destroyed masonry wall (Figure 11(a)): position A (Figure 11(a)) developed obvious vertical and horizontal cracks, and positions B and C

Table 9: Relative deformation (/mm) in the Z-direction.
Test  U34    U33   U27   U22   U21   U0     U3    U2     U4
15    8.7    4.15  6.76  6.77  5.09  7.8    7.03  6.77   7.27
16    9.2    4.67  5.83  7.23  5.57  9.58   8.75  7.08   8.98
17    9.34   5.01  5.09  6.63  5.17  9.85   8.48  8.46   8.5
18    11.71  5.69  6.79  9.61  6.46  10.39  9.7   10.85  9.98
19    12.45  7.86  7.59  7.61  7.33  11.51  9.38  10.54  11.09

Figure 14: Wave profile of
SV-wave in each layer. 12 Mathematical Problems in Engineering 0 10 20 30 40 50 60 70 80 Po si tio n al on g th e he ig ht o f m as on ry w al l ( m m ) 0 10 20 30 40 50 60 70−10 Position along the length of masonry wall (mm) (a) Test 16 0 10 20 30 40 50 60 70 80 Po si tio n al on g th e he ig ht o f m as on ry w al l ( m m ) 0 10 20 30 40 50 60 70−10 Position along the length of masonry wall (mm) (b) Test 17 0 10 20 30 40 50 60 70 80 Po si tio n al on g th e he ig ht o f m as on ry w al l ( m m ) 0 10 20 30 40 50 60 70−10 Position along the length of masonry wall (mm) (c) Test 18 0 10 20 30 40 50 60 70 80 Po si tio n al on g th e he ig ht o f m as on ry w al l ( m m ) 0 10 20 30 40 50 60 70−10 Position along the length of masonry wall (mm) (d) Test 19 Figure 15: Positions of deformation points before and after seismic oscillation test. developed slight horizontal cracks. In addition, the crack on C occurred in test 14, the vertical and horizontal cracks on A occurred in test 16 and 19, respectively, and the crack on B occurred in test 19. Thus, in seismic oscillation the shear failure first develops in the middle-lower portion of a masonry wall. Then, the bonding between the bricks and the mud is invalid in the middle-upper portion of a masonry wall. Last, shear failure occurs in the middle of a masonry wall. Based on the test results, we propose some preliminary suggestions to reinforce a masonry wall. Figure 18 shows that construction columns 1 and 2 are set on the two sides of a masonry wall to prevent slip failure, and solid piers 1 and 2 replace the initial bricks easily damaged by the SH-wave, and solid flagstones 1 and 2 replace the initial bricks to minimize the SV-wave influence on the masonry wall. Note that in the field we used a hammer to impact a suspended steel plate to simulate seismic oscillation. There is no comparative study with reference to this methodology demonstrated elsewhere in the reported research. 
However, it appears feasible, as several test phenomena are consistent with results from numerical simulation and shaking tables. Moreover, the suggestions for reinforcing masonry walls are restricted to the test example in this study, and future tests are required to prove their feasibility.

6. Conclusions

This study proposes a new technique to study masonry walls under seismic oscillation outdoors. In this technique, a freely falling hammer impacts a suspended steel plate to simulate seismic oscillation propagating in a masonry wall outdoors. We then used monocular digital photography based on the PST-TBP (photographing scale transformation-time baseline parallax) method to monitor the crack development of the masonry wall outdoors. In addition, the field test results are compared with the simulation results produced by ANSYS. The following conclusions are drawn:

(1) It is feasible to use a hammer impacting a suspended steel plate to simulate seismic oscillation in the field. Stress concentration zones simulated by ANSYS are consistent with the crack development positions of the destroyed masonry wall. The shape of the wave profile of the cracks in the horizontal direction is very similar to a sinusoid or cosine curve, which agrees with the test results from a shaking table.

[Figure 16: Wave profiles of the cracks for tests 15-19; panel (a) shows deformation in the X-direction and panel (b) deformation in the Z-direction (mm) at points U34, U33, U27, U22, U21, U0, U3, U2, and U4 along the length of the masonry wall.]

[Figure 17: Deformation curves for the west side of the masonry wall for tests 14-19; panel (a) shows deformation in the X-direction and panel (b) in the Z-direction (mm) at points U0-U5.]

(2) Monocular digital photography based on the PST-TBP method can meet the accuracy requirement for monitoring masonry walls under seismic oscillation outdoors. The average measurement accuracies of Camera 3 in the X- and Z-directions are 0.83 mm and 0.84 mm, respectively; those of Camera 5 are 0.49 mm and 0.44 mm.

(3) Diagonal cracks and longitudinal through cracks may occur along the diagonal of a masonry wall and on the side near the hypocenter, respectively. The destruction from the SH-wave is gradually enhanced along its propagation direction, and cracks may occur on the side of a masonry wall far from the hypocenter due to SH-waves. SV-waves seriously affect the central bottom portion of a masonry wall without decreasing intensity, but they affect the upper portion with intensity that decreases from the position near the hypocenter to the position far from it.

(4) The crack development pattern of a masonry wall under seismic oscillation is consistent with the seismic wave propagation, because the shape of the wave profile of the cracks in the horizontal direction is very similar to a sinusoid or a cosine curve.

[Figure 18: Illustration of masonry wall reinforcement: construction columns 1 and 2 at the two sides of the masonry wall, solid piers 1 and 2, and solid flagstones 1 and 2.]
This study used monocular digital photography based on the PST-TBP method to conduct innovative research into the relationship between crack development and seismic wave propagation in a masonry wall, and it provides a technical basis for monitoring the instantaneous dynamic displacements of masonry structures under seismic oscillation outdoors in order to warn of possible danger. The study also has significant implications for the improved construction of masonry structures in earthquake-prone areas.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

The study was supported by the National Natural Science Foundation of China (Grant no. 51674249) and the Science and Technology Project of Shandong Province, China (Grant no. 2010GZX20125). The authors thank the management department of a construction site in Liaocheng City (Shandong) for authorizing their fieldwork and thank Su Liu for the numerical simulation in the study.

References

[1] C. J. Xu, Q. B. Fan, Q. Wang, S. M. Yang, and G. Y. Jiang, "Postseismic deformation after the 2008 Wenchuan earthquake," Survey Review, vol. 46, no. 339, pp. 432-436, 2014.
[2] P. Cui, X.-Q. Chen, Y.-Y. Zhu et al., "The Wenchuan Earthquake (May 12, 2008), Sichuan Province, China, and resulting geohazards," Natural Hazards, vol. 56, no. 1, pp. 19-36, 2011.
[3] S. Tung, M. Shih, and W. Sung, "Development of digital image correlation method to analyse crack variations of masonry wall," Sadhana, vol. 33, no. 6, pp. 767-779, 2008.
[4] D. P. Abrams and T. J. Paulson, "Modeling earthquake response of concrete masonry building structures," ACI Structural Journal, vol. 88, no. 4, pp. 475-485, 1991.
[5] A. Hillerborg, M. Modéer, and P.-E. Petersson, "Analysis of crack formation and crack growth in concrete by means of fracture mechanics and finite elements," Cement and Concrete Research, vol. 6, no. 6, pp. 773-781, 1976.
[6] P. Bocca, A. Carpinteri, and S. Valente, "Fracture mechanics of brick masonry: size effects and snap-back analysis," Materials and Structures, vol. 22, no. 5, pp. 364-373, 1989.
[7] M. Giaretton, D. Dizhur, and J. M. Ingham, "Dynamic testing of as-built clay brick unreinforced masonry parapets," Engineering Structures, vol. 127, pp. 676-685, 2016.
[8] H.-G. Maas and U. Hampel, "Photogrammetric techniques in civil engineering material testing and structure monitoring," Photogrammetric Engineering and Remote Sensing, vol. 72, no. 1, pp. 39-45, 2006.
[9] R. N. Jiang, D. V. Jáuregui, and K. R. White, "Close-range photogrammetry applications in bridge measurement: literature review," Measurement, vol. 41, no. 8, pp. 823-834, 2008.
[10] G. Abdelsayed, B. Bakht, and L. G. Jaeger, "Soil-steel bridges: design and construction," Conduits, 1993.
[11] C. Forno, S. Brown, R. A. Hunt, A. M. Kearney, and S. Oldfield, "Measurement of deformation of a bridge by Moiré photography and photogrammetry," Strain, vol. 27, no. 3, pp. 83-87, 2008.
[12] Y. Chen, "Detection technology of structural cracks: amplification digital photograph and real-time transmission," Construction Quality, 2012.
[13] D. Lecompte, J. Vantomme, and H. Sol, "Crack detection in a concrete beam using two different camera techniques," Structural Health Monitoring, vol. 5, no. 1, pp. 59-68, 2006.
[14] M. Küntz, M. Jolin, J. Bastien, F. Perez, and F. Hild, "Digital image correlation analysis of crack behavior in a reinforced concrete beam during a load test," Canadian Journal of Civil Engineering, vol. 33, no. 11, pp. 1418-1425, 2006.
[15] J.-F. Destrebecq, E. Toussaint, and E. Ferrier, "Analysis of cracks and deformations in a full scale reinforced concrete beam using a digital image correlation technique," Experimental Mechanics, vol. 51, no. 6, pp. 879-890, 2011.
[16] B. Ghiassi, J. Xavier, D. V. Oliveira, and P. B. Lourenço, "Application of digital image correlation in investigating the bond between FRP and masonry," Composite Structures, vol. 106, pp. 340-349, 2013.
[17] T. Ramos, A. Furtado, S. Eslami et al., "2D and 3D digital image correlation in civil engineering - measurements in a masonry wall," Procedia Engineering, vol. 114, pp. 215-222, 2015.
[18] H.-L. Nghiem, M. Al Heib, and F. Emeriault, "Method based on digital image correlation for damage assessment in masonry structures," Engineering Structures, vol. 86, pp. 1-15, 2015.
[19] A. H. Salmanpour and N. Mojsilovic, "Application of digital image correlation for strain measurements of large masonry walls," in Proceedings of the Asia Pacific Congress on Computational Mechanics, 2013.
[20] R. Ghorbani, F. Matta, and M. A. Sutton, "Full-field deformation measurement and crack mapping on confined masonry walls using digital image correlation," Experimental Mechanics, vol. 55, no. 1, pp. 227-243, 2015.
[21] B. Ghiassi, J. Xavier, D. V. Oliveira, A. Kwiecien, P. B. Lourenço, and B. Zajac, "Evaluation of the bond performance in FRP-brick components re-bonded after initial delamination," Composite Structures, vol. 123, pp. 271-281, 2015.
[22] Z. Guojian, L. Shengzhen, Z. Tonglong, and Y. Chengxin, "Exploring of PST-TBPM in monitoring bridge dynamic deflection in vibration," in Proceedings of the IOP Conference Series: Earth and Environmental Science, vol. 108, 2018.
[23] C. Mingzhi, Z. Yongqian, H. Hua, Y. Chengxin, and Z. Guojian, "Exploring of PST-TBPM in monitoring dynamic deformation of steel structure in vibration," in Proceedings of the IOP Conference Series: Earth and Environmental Science, vol. 108, 2018.
[24] C. Mingzhi, Y. ChengXin, X. Na, Z. YongQian, and Y. WenShan, "Application study of digital analytical method on deformation monitor of high-rise goods shelf," in Proceedings of the IEEE International Conference on Automation and Logistics, pp. 2084-2088, 2008.
[25] G. Zhang, S. Liu, T. Zhao, and C. Yu, "Exploring of PST-TBPM in monitoring bridge dynamic deflection in vibration," in Proceedings of the IOP Conference Series: Earth and Environmental Science, 2018.
[26] R. R. P. D. Samuel, Close-Range Photogrammetry, Springer, Berlin, Germany, 2014.
[27] K. Fujimoto, J. Sun, H. Takebe, M. Suwa, and S. Naoi, "Shape from parallel geodesics for distortion correction of digital camera document images," Journal of Electronic Imaging, pp. 286-290, 2007.
[28] H. Yanagi and H. Chikatsu, "Factors and estimation of accuracy in digital close range photogrammetry using digital cameras," Journal of the Japan Society of Photogrammetry, vol. 50, no. 1, pp. 4-17, 2011.
[29] X. U. Fang, The Monitor of Steel Structure Bend Deformation Based on Digital Photogrammetry, Editorial Board of Geomatics & Information Science of Wuhan University, 2001.
[30] J. I. Jeong, S. Y. Moon, S. G. Choi, and D. H. Rho, "A study on the flexible camera calibration method using a grid type frame with different line widths," in Proceedings of the SICE Conference, vol. 2, pp. 1319-1324, 2002.
[31] M.-C. Lu, C.-C. Hsu, and Y.-Y. Lu, "Distance and angle measurement of distant objects on an oblique plane based on pixel variation of CCD image," in Proceedings of the 2010 IEEE International Instrumentation and Measurement Technology Conference, I2MTC 2010, pp. 318-322, Austin, TX, USA, May 2010.
[32] C.-C. J. Hsu, M.-C. Lu, and Y.-Y. Lu, "Distance and angle measurement of objects on an oblique plane based on pixel number variation of CCD images," IEEE Transactions on Instrumentation and Measurement, vol. 60, no. 5, pp. 1779-1794, 2011.
[33] G. Zhang, C. Yu, and X. Ding, "Analyzing crack development pattern of masonry structure in seismic oscillation by digital photography," in Proceedings of the IOP Conference Series: Earth and Environmental Science, vol. 108, 2018.
[34] Y. Sun, H. Yang, and S. Jiang, "Evaluation of seismic performance and tolerant deformation on existing brick masonry buildings," Shenyang Jianzhu Daxue Xuebao, vol. 29, no. 5, pp. 861-867, 2013.
[35] Ministry of Housing and Urban-Rural Development of the People's Republic of China, Code for Design of Masonry Structures, China Building Industry Press, 2011.
[36] Y. Zhao, Y. Wu, and C. Yu, "Research on instantaneous dynamical deformation monitoring of the masonry structure based on artificial earthquake motion," International Journal of Hybrid Information Technology, vol. 8, no. 10, pp. 241-252, 2015.

----

* Faculty of Economics, University of Belgrade

JEL CLASSIFICATION: M10, M30

ABSTRACT: One of the most important questions faced by business leaders in the strategic management process is the choice of timing for launching new products/technologies and entering new markets. There are two options: to be a pioneer or to be a follower. Both have advantages and risks. Pioneers often have higher profitability, greater market share, and a longer business life, but the relative success of each strategy depends on several factors, both internal and external (the pace of evolution of technology and markets).
KEY WORDS: pioneer, follower, first-mover advantage

Đorđe Kaličanin*   DOI:10.2298/EKA08177089K

A QUESTION OF STRATEGY: TO BE A PIONEER OR A FOLLOWER?

COMMUNICATIONS

1. Introduction

The question of first-mover advantage has long been a subject of intense discussion among economists and business people. It is recognized that first entry into a market rewards pioneers, the main initial reward being the largest market share (Urban et al., 1986). This reward decreases over time as new firms enter, forcing pioneers to take actions to increase and/or defend market share. However, market share is often not highly correlated with other important performance measures (profitability, survival, or sales growth) in new industries, and this has led to changes in business strategy (VanderWerf, Mahon, 1997). Historically, the advantages of being a pioneer have been promoted to a much greater extent than the risks (Lieberman, Montgomery, 1998). The actual outcome depends on the initial resources of the pioneer, as well as on the resources and capabilities subsequently developed in response to those of the followers. Environmental change certainly provides opportunities to first-movers, but firms must initially possess the organizational skills and resources to capitalize on such opportunities (Kerin et al., 1992). Additionally, the question of whether to be a pioneer or a follower is of increasing importance in the modern age of information and uncertainty. Speed of entry into any market is always of great competitive importance. Perhaps an enterprise intends to develop a new product or a novel technology to generate new products, creating a new industrial sector or even a whole new industry. Alternatively, a company may want to be first to enter a hitherto unexploited national or regional market. In all cases, companies are trying to beat the competition.
This is superficially similar to an athletic contest, in which every runner tries to finish first; but in business the race never ends, and competitive advantage has to be continuously maintained. A temporary innovation-based monopoly has to be transformed into a sustainable long-term process or product. However, there are companies that consciously choose to follow rather than innovate, believing this to be the more advantageous strategy. Pioneering and first-mover advantage can be achieved in several ways: 1) by making new products, 2) by using a new process, or 3) by entering a new market (Heiens et al., 2003). In all cases, pioneers create new market demand for their products/services and continue to satisfy that demand before other enterprises enter the same market. It has also been claimed that first-mover advantage can be established through new advertising campaigns, the initiation of price changes, and the adoption of new distribution techniques. The first two cases (new product or new process) fall into the category of technological pioneering. Technological pioneering refers to the development and commercialization of an emerging technology in pursuit of profits and growth (Zahra et al., 1995: 144). The third case is described as market pioneering, where an enterprise with established products and technology is the first to enter a new market. Both technological pioneers and market pioneers, through their timing and actions, are first-movers. Other companies are followers, who can be divided into early and late followers (or entrants): early followers enter the market soon after the pioneer, while late followers enter after more time has elapsed. In any company, the choice of whether to be a pioneer or a follower must be incorporated in the strategic planning process. A pioneer strategy is compatible with the choice of a broad or a focused differentiation strategy.
A follower strategy is compatible with the choice of a broad or a focused low-cost strategy (analogous to Kaplan, Norton, 2004).

2. Advantages and Risks of Pioneer and Follower Strategies

Pioneers gain advantage by making first moves in technology, product, or marketing innovation. These advantages are called first-mover advantages. Other enterprises are followers; they aim to maximize late-mover advantages and to minimize late-mover disadvantages. First-mover advantages are: 1) owning the positive image and reputation of being a pioneer; 2) reduction of total costs through control of new technology and of supply and distribution channels; 3) the creation of a base of loyal customers; and 4) the ability to make imitation by competitors as difficult as possible (Thompson, Strickland, 2003: 193). Pioneers try to build a competitive advantage by being first in a new field. Building a competitive advantage is a time-consuming process (time is the horizontal axis in Figure 1), called the build-up period. This period can be minimized if an enterprise already has the necessary resources in place and the customer response is positive and rapid. Build-up periods can be longer if demand growth is weak, if the technology takes several years to perfect, or if it takes time to establish manufacturing capacity. The size of the advantage is shown on the vertical axis. It can be large (as in pharmaceuticals, where patents result in a substantial advantage), or small (as
The overall magnitude of pioneer competitive advantage depends on the extent to which followers: 1) benefit from the difference between innovation costs and imitation costs, 2) exploit saving innovation costs, 3) capitalize on the pioneer’s mistakes, 4) benefit from economies of scope, and 5) influence/change consumer preferences (Kerin et al, 1992: 47). If followers are successful, the period of competitive advantage erosion begins. Figure 1. Building and Eroding of Competitive Advantage (Source: Thompson, Strickland, 2003: 186.) Generally, the greatest first-mover advantage is in being the market leader and in maximizing early revenues from the products/services. Thanks to the initial temporary monopoly from innovation, pioneers are in a position to charge higher prices for their products, and thus maximise profits. This strategy is popularly called “skimming the cream”. Pioneers can also create new sources of revenue through licensing, but this generally produces the lowest rewards for entrepreneurship. In most cases, pioneers continue to hold the biggest market share even after the entry of followers into the same market. High profitability can be maintained through high entry barriers in the form of resource control (control of technology, A QUESTION OF STRATEGY: TO BE A PIONEER OR A FOLLOWER? 93 locations, managers and key employees).Also, the inevitable learning curve can positively influence the pioneer’s market share. These advantages are summarized in the teachings of one of the greatest military theorists ever, Chinese general Sun Tzu. Sun Tzu wrote: “Generally, whoever comes first to the battlefield, and awaits the enemy, will be rested; but, whoever comes last and has to gostraight into battle, will be tired. So, one who is skilled in war constrains the others but is not constrained by the others” (Cu Sun, 2005: 81). High switching costs also benefit the pioneer. There is inevitably a cost to the customer to switch to a follower’s product. 
Naturally, a pioneer has to continuously improve its own product, in case followers offer an alternative whose enhanced performance outweighs the switching costs. The main first-mover disadvantages (Lieberman, Montgomery, 1988) are: 1) free-rider benefits to followers, 2) market and technological uncertainties, 3) unforeseen changes in technology or customer needs, and 4) incumbent inertia, which results in the gradual updating of existing technology rather than the adoption of new and improved technologies. Several factors influence the size of the risk of being a pioneer. For instance, the R&D (research and development) costs involved in innovation can be so high that they may not be recovered in revenues. Logically, the risks associated with a completely new product are greater than those associated with incremental product changes (Min et al., 2006). Another difficult task faced by pioneers is creating primary demand for a completely new, and therefore non-branded, product: educating customers about a novel product can be very expensive. Nevertheless, pioneers usually have longer market lives than their followers. In contrast, followers try to create selective demand for a specific brand. In their case, the technological standard has been established by the pioneer, so they avoid these costs. Bandwagon effects can increase the risk to followers. Bandwagons appear when companies make strategic decisions, such as new-product development, technological innovation, or even an acquisition, in response to the actions of other companies (McNamara et al., 2008). Institutional bandwagon pressures appear when non-adopters are pressured to mimic the actions of early adopters to avoid appearing different. Competitive bandwagon pressures appear when non-adopters fear they may be disadvantaged. Bandwagon companies often base decisions on informal information, ignoring the criteria used in rational decision-making processes.
Very often heuristics are used: "Company XYZ has an excellent track record, so if we follow them we will not make a mistake." Networks are currently envisaged as the company environments in which value for customers is created. Consequently, when discussing new management challenges, we speak of networks competing against networks rather than companies competing against companies (Cares, 2006: 40-41). This has implications for the choice between pioneer and follower strategies. New products created through the effective cooperation of networked members are more likely to dominate new markets. There are also associated risks. In the context of networks achieving pioneering advantages, consider innovation ecosystems. Innovation ecosystems are forms of strategic alliances focused on new technology and new product development. Through them, firms combine their individual offerings (in the form of parts) into a coherent, customer-facing solution (in the form of a final product). The total risk in these innovation ecosystems consists of initiative risk, interdependence risk, and integration risk (Adner, 2006). Initiative risk is the type of risk inherent in every new venture: there are many unknowns in the launch of any new product or service, both technical (does it do what it is supposed to do?) and marketing-related (is it acceptable to the customer?). Interdependence risk is associated with the uncertainties of coordinating the relevant contributors. We can estimate the joint probability that the different partners will be able to satisfy their commitments within a specific time frame. For instance, suppose that each of three suppliers has a 0.9 probability of success in developing its part of the whole product. In that case, the probability of launching the new product on time is the product of the probabilities of successful completion by every partner, i.e. 0.9 x 0.9 x 0.9 = 0.729, or about 73%.
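The joint-probability reasoning above can be sketched directly; this is an illustrative calculation (the function name is ours, not from Adner), assuming the partners succeed or fail independently, and using the hypothetical 0.9 figures from the example.

```python
# Illustrative sketch of interdependence risk: the chance of an on-time
# launch is the product of each partner's probability of delivering,
# assuming the partners' outcomes are independent.
from math import prod

def on_time_probability(partner_probabilities):
    """Joint probability that every partner delivers on time."""
    return prod(partner_probabilities)

# Three suppliers, each with a 0.9 probability of delivering its part:
p = on_time_probability([0.9, 0.9, 0.9])
print(round(p, 3))  # 0.729, i.e. roughly a 73% chance of an on-time launch
```

Note how quickly the joint probability falls as partners are added: with five such suppliers it would drop below 60%.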
The final type is integration risk, which concerns the uncertainties of the adoption process across the value chain. Time is required for all intermediaries in the value chain to become aware of the product, sample it, and place orders. Logically, the more intermediaries there are, the bigger the integration risk to customer acceptance and loyalty. For instance, suppose a flat-screen TV manufacturer needs eight months to bring a new screen to production, end consumers need four months to become aware of a new product before they start to buy it in significant numbers, and, realistically, suppliers need six months to develop inputs for the flat-screen manufacturer. If we add two months for the distributors to stock the product and train the sales force, the integration period will be 20 months (8 + 4 + 6 + 2). If the flat-screen manufacturer is able to allocate additional resources and reduce its development time by 50%, a saving of four months will be made: the total integration period will then be 16 months (4 + 4 + 6 + 2), and there is a greater chance of achieving the product-launch target.

3. Conditions for Implementing Pioneer and Follower Strategies

The resource-based view of the company is a concept often explored in the search for competitive advantage. It holds that competitive advantage is a positive consequence of the exploitation of resources that are valuable, rare, and difficult to imitate. How long the advantage lasts is not simply a question of the passage of time, nor is it strongly connected with the patent period; it is a function of the time spent creating a pool of superior resources. The resources needed for a pioneer to succeed are different from those required by a follower. In the case of technology, success often depends on radical changes, which are frequently the consequences of scientific discoveries.
Accordingly, large, financially strong companies with well-developed and well-funded R&D programmes are best positioned to be technological pioneers. However, technological pioneering will only gain market leadership if it is followed by successful commercialization; innovative technological pioneering and market leadership do not always go hand in hand. Examples of success include Gillette in safety razors and Sony in personal stereos. Other companies failed in the commercialization of technological innovation, e.g. Xerox in fax machines, eToys in Internet retailing (Suarez, Lanzolla, 2005), Bowmar in calculators, and EMI in scanners (Mittal, Swami, 2004). Examples of followers gaining greater market share than pioneers include Seiko in quartz watches and Matsushita in VHS VCRs (Mittal, Swami, 2004). Market leadership for pioneers is a consequence of a temporary monopoly based on innovation. This needs to be defended and improved through carefully formulated and implemented strategies for innovation, distribution, pricing, and promotion. Increased investment in advertising and price cutting are the most frequently used ways of retaining market share (Urban et al., 1986). If a technological pioneer lacks marketing competencies, early followers will soon erode the initial advantage. In contrast, the critical success factors for followers are strong competencies in production and marketing. Followers can gain advantages in cases when they
R&D projects result in product characteristics valued by customers who are prepared to pay for them. Companies that invest in things that are technically interesting but not valued by customers are "over-inventing" (Stewart, O'Brien, 2005). The choice of whether to be a pioneer or a follower is a strategic decision. The correct approach is to start by accurately establishing the current and future internal capabilities of the company. The choice depends on the likely length of the first-mover advantage. This period is determined not only by the internal resources of the company but also by industry dynamics (Suarez, Lanzolla, 2005) or environmental dynamics (Suarez, Lanzolla, 2007). Industry (environmental) dynamics are created by the interaction of two important factors: the pace of technological evolution and the pace of market evolution. Both factors are external and beyond the control of any single company. The pace of technological evolution refers to the number of performance enhancements over time. Some technologies, such as computer processors, evolve in a series of incremental improvements. Other technologies, such as digital photography, evolve in sudden, disruptive bursts, with little or no connection to earlier technology (digital photography actually started to displace film). The faster or more disruptive technological evolution is, the more difficult it is for any enterprise to control. The pace of market evolution refers to market penetration, i.e. the number of customers who have bought the product in a specific time period. The markets for automobiles and fixed telephones evolved more slowly than the markets for VCRs and cellular telephones. Fixed telephones took more than 50 years to reach 70% of households; cellular telephones achieved the same level in less than two decades (Suarez, Lanzolla, 2005: 123).
Regarding the pace of market evolution, the greater the difference between the old and new product, the greater the uncertainty about the pace of market growth and the number of market segments. The combined effects of market and technological change determine a company's chances of achieving a first-mover advantage (see Figure 2).

Figure 2. The Combined Effects of Market and Technological Change (Source: Suarez, Lanzolla, 2005: 124)

There are four possible combinations. "Calm Waters" represents the situation where both the technology and the market evolve slowly. The gradual evolution of both factors allows pioneers to create long-lasting dominant positions. The slow pace of technological evolution makes it difficult for followers to differentiate their products, while the slow pace of market evolution helps pioneers to create, defend and develop new market segments. Scotch tape is an original product of the very innovative company 3M, which has maintained its first-mover advantage. In this combination, resources are of less importance for defending competitive advantage; one of the more relevant is brand, while others may be physical assets, location and financial resources. "The market leads and technology follows" describes situations where the technology evolves gradually but the market grows quickly. It is very likely that first-mover advantage will be short-lived (as in the case of the Howe sewing machine after Singer entered the market). First-mover advantage can be sustained through superior design, marketing, branding and production capacity (using all these strategies, Sony maintained a long-lasting advantage with the Walkman, the first personal stereo). "Technology leads and the market follows" is the situation where the market evolves gradually but the technology evolves rapidly. Success in these circumstances requires significant R&D effort backed by strong finances.
This is exemplified by digital cameras. Sony launched the first digital camera, the Mavica, but sales remained stagnant for a decade. After many performance upgrades, sales finally started to increase, and throughout this period Sony kept its leadership position in the US market. "Rough Waters" describes situations where both the technology and the market evolve rapidly. Sustaining first-mover advantage is very difficult, as it requires superior resources in R&D, production, marketing and distribution, and pioneers become vulnerable very quickly. A good example is Netscape, creator of the first widely adopted Internet browser in 1994; today, Microsoft's Internet Explorer is dominant. Netscape nevertheless created wealth for its shareholders, since AOL paid around $10 billion to the owners. On the other hand, Intel is an example of a company that has strongly defended its leadership position through significant and focused investment in resources, as well as acquisitions. From the above, we can conclude that, in some situations, the creation of first-mover advantage can be time-consuming, expensive and ultimately unsustainable. In such situations, it is wiser to be a follower. The success of a market-pioneer strategy, in the sense of rapid penetration of a new market before competitors, depends on the identification of challenges and risks specific to the targeted market. This includes analyzing company resources, predicting market growth, and forecasting future relationships with competitors. The emphasis today is on emerging markets: markets in states that have adopted the economic model of the developed countries in the past few decades. China, India and Brazil are the largest, but similar characteristics are found in the Southeastern European countries, including the Republic of Serbia. Emerging markets are characterized by uncertainty. Pioneers have to minimize these uncertainties, and profits are often lower than in other markets.
Followers usually enter when market conditions stabilize, the infrastructure is in place and customers are educated. One way to minimize the risk is to enter through a joint venture with a local partner. Such partners can possess important resources, such as knowledge of local markets and distribution channels (Cui, Lui, 2005). Successful penetration of emerging markets often depends on the image of the pioneer's country of origin. Customers are usually prepared to pay a premium for products which come from developed countries with high reputations, especially the USA, Japan or Germany (Gao, Knight, 2007). This preference is most marked for products associated with a specific country, e.g. Swiss watches or Scotch whisky. Country images are not the creation of individual companies; they rely on the concerted efforts of national government agencies and individuals over a long period.

4. Conclusion

The process of strategic business planning consists of making a series of choices: generic strategy, method of growth, diversification strategy, product-market segment selection, etc. One of the most important is the choice of the timing of market entry: when to develop and commercialize new products and technologies, and when to enter new markets. There are two options: to be a pioneer or to be a follower. Followers also have to decide whether to be early or late entrants. All options have advantages and risks. Research suggests followers may fare slightly better than pioneers but, logically, the outcome should depend on the specific characteristics of each new market (Lieberman, Montgomery, 1998: 1122). The magnitude and durability of pioneer competitive advantage drive first-mover strategy. In most cases, pioneers enjoy benefits in the form of a greater market share; but market share is only one performance measure.
Planners must also consider other, possibly more relevant, factors such as profitability or sales growth. Concentrating on shareholder focus and value-based measurements will increase the chance of making the correct strategic choice. Most current strategic management approaches emphasize the importance of possessing strong internal resources and competencies for implementation. The resource base determines the choice of strategy: a pioneer strategy implies greater R&D and financial resources, while a follower strategy implies strength in marketing and production. This is of great importance when allocating limited resources. Understanding internal resources and competencies is the first pillar of strategic analysis. The duration of the first-mover advantage period depends on both internal and external factors. The latter are beyond the control of any single enterprise. An understanding of external constraints on entry order and lead times is the second pillar of strategic analysis. Two external factors are especially important: the pace of technological change and the pace of market evolution. Rapid changes in both generally reduce the period of first-mover advantage. In the current Internet era, both technological and market evolution happen almost in real time. Internet-based technologies promote the emergence of new businesses and the transformation of old ones. This constant creation of market-entry opportunities presents new challenges for strategic management in the 21st century, and potential areas for future research.

LITERATURE

Adner, R. (2006), "Match Your Innovation Strategy to Your Innovation Ecosystem", Harvard Business Review (April): 98-107.
Cares, J. (2006), "Battle of the Networks", The HBR List: Breakthrough Ideas for 2006, Harvard Business Review, pp. 40-41.
Cu Vu, Sun (1952), Veština ratovanja, Vojno delo, Beograd (translation)
Cu, Sun (2005), Sveobuhvatno umeće ratovanja, Alnari, Beograd (translation)
Cui, G. and H.
Lui (2005), "Order of Entry and Performance of Multinational Corporations in an Emerging Market: A Contingent Resource Perspective", Journal of International Marketing, Vol. 13, No. 4, pp. 28-56.
Frawley, T. and J. Fahy (2006), "Revisiting the First-Mover Advantage Theory: A Resource-Based Perspective", The Irish Journal of Management, pp. 273-295.
Gao, H. and J. Knight (2007), "Pioneering Advantage and Product-Country Image: Evidence from an Exploratory Study in China", Journal of Marketing Management, Vol. 23, No. 3-4, pp. 367-385.
Gilbert, J.T. (1994), "Choosing an Innovation Strategy: Theory and Practice", in: Thompson, A.A. and A.J. Strickland III (1998), Readings in Strategic Management, tenth edition, McGraw-Hill, Boston
Heiens, R., Pleshko, L. and R. Leach (2003), "Examining the Effects of Strategic Marketing Initiative and First-Mover Efforts on Market Share Performance", The Marketing Management Journal, Vol. 14, Issue 1, pp. 63-70.
Kaplan, R.S. and D.P. Norton (2004), Strategy Maps: Converting Intangible Assets into Tangible Outcomes, Harvard Business School Press
Kerin, R.A., Varadarajan, P.R. and R.A. Peterson (1992), "First-Mover Advantage: A Synthesis, Conceptual Framework, and Research Propositions", Journal of Marketing, Vol. 56 (October), pp. 33-52.
Lieberman, M.B. and D.B. Montgomery (1988), "First-Mover Advantages", Strategic Management Journal, Vol. 9, No. S1, pp. 41-58.
Lieberman, M.B. and D.B. Montgomery (1998), "First-Mover (Dis)Advantages: Retrospective and Link with the Resource-Based View", Strategic Management Journal, Vol. 19 (December), pp. 1111-1126.
McNamara, G.M., Haleblian, J. and B. Johnson Dykes (2008), "The Performance Implications of Participating in an Acquisition Wave: Early Mover Advantages, Bandwagon Effects, and the Moderating Influence of Industry Characteristics and Acquirer Tactics", Academy of Management Journal, Vol. 51, No. 1, pp. 113-130.
Min, S., Kalwani, M.U. and W.T. Robinson (2006), "Market Pioneer and Early Follower Survival Risks: A Contingency Analysis of Really New Versus Incrementally New Product-Markets", Journal of Marketing, Vol. 70 (January 2006), pp. 15-33.
Mittal, S. and S. Swami (2004), "What Factors Influence Pioneering Advantage of Companies?", Vikalpa, Vol. 29, No. 3 (July-September), pp. 15-33.
Stewart, T.A. and L. O'Brien (2005), "Execution without Excuses, Interview with Michael Dell and Kevin Rollins", Harvard Business Review (March): 102-111.
Suarez, F. and G. Lanzolla (2005), "The Half-Truth of First-Mover Advantage", Harvard Business Review (April): 121-127.
Suarez, F. and G. Lanzolla (2007), "The Role of Environmental Dynamics in Building a First Mover Advantage Theory", Academy of Management Review, Vol. 32, No. 2, pp. 377-392.
Thompson, J. and A. Strickland (2003), Strategic Management: Concepts and Cases, McGraw-Hill/Irwin
Urban, G.L., Carter, T., Gaskin, S. and Z. Mucha (1986), "Market Share Rewards to Pioneering Brands: An Empirical Analysis and Strategic Implications", Management Science, Vol. 32, No. 6, pp. 645-659.
VanderWerf, P.A. and J.F. Mahon (1997), "Meta-Analysis of the Impact of Research Methods on Findings of First-Mover Advantage", Management Science, Vol. 43, No. 11, pp. 1510-1519.
Zahra, S.A., Nash, S. and D.J. Bickford (1998), "Transforming Technological Pioneering into Competitive Advantage", in: Thompson, A.A. and A.J. Strickland III, Readings in Strategic Management, tenth edition, McGraw-Hill, Boston

Research Article

A Comparative Study of Clinical vs. Digital Exophthalmometry Measurement Methods

Thaís de Sous Pereira,1 Cristina Hiromi Kuniyoshi,2 Cristiane de Almeida Leite,1 Eloisa M. M. S. Gebrim,2 Mário L. R. Monteiro,1 and Allan C.
Pieroni Gonçalves 1

1 Laboratory of Investigation in Ophthalmology (LIM 33), Division of Ophthalmology, University of São Paulo Medical School, São Paulo, Brazil
2 Department of Radiology, University of São Paulo Medical School, São Paulo, Brazil

Correspondence should be addressed to Allan C. Pieroni Gonçalves; allanpieroni75@gmail.com

Received 27 September 2019; Revised 6 February 2020; Accepted 14 February 2020; Published 23 March 2020

Academic Editor: Enrique Mencía-Gutiérrez

Copyright © 2020 Thaís de Sous Pereira et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Background. A number of orbital diseases may be evaluated based on the degree of exophthalmos, but there is still no gold standard method for the measurement of this parameter. In this study we compare two exophthalmometry measurement methods (digital photography and clinical) with regard to reproducibility and the level of correlation and agreement with computed tomography (CT) measurements. Methods. Seventeen patients with bilateral proptosis and 15 patients with normal orbits were enrolled. Patients underwent orbital CT, Hertel exophthalmometry (HE) and standardized frontal and side facial photographs taken by a single trained photographer. Exophthalmometry measurements with HE, the digital photographs and axial CT scans were obtained twice by the same examiner and once by another examiner. The Pearson correlation coefficient (PCC) was used to assess correlations between methods. Validity between methods was assessed by mean differences, inter-/intraclass correlation coefficients (ICCs), and Bland–Altman plots. Results. Mean values were significantly higher in the proptosis group (34 orbits) than in the normal group (30 orbits), regardless of the method.
Within each group, mean digital exophthalmometry measurements (24.32 ± 5.17 mm and 18.62 ± 3.87 mm) were significantly greater than HE measurements (20.87 ± 2.53 mm and 17.52 ± 2.67 mm), with a broader range of standard deviation. Inter-/intraclass correlation coefficients were 0.95/0.93 for clinical, 0.92/0.74 for digital, and 0.91/0.95 for CT measurements. Correlation coefficients between HE and CT scan measurements in both groups of subjects (r = 0.84 and r = 0.91, p < 0.05) were greater than those between digital and CT scan measurements (r = 0.61 and r = 0.75, p < 0.05). On the Bland–Altman plots, HE showed better agreement with CT measurements than the digital photograph method in both groups studied. Conclusions. Although photographic digital exophthalmometry showed strong correlation and agreement with CT scan measurements, it still performs worse than, and is not as accurate as, clinical Hertel exophthalmometry. This trial is registered with NCT01999790.

1. Background

Exophthalmometry is the assessment of the anteroposterior position of the globe in the orbit relative to the orbital rim. Though a number of orbital diseases may be evaluated based on the degree of exophthalmos, there is still no gold standard method for the measurement of this parameter. One of the most widely used methods is Hertel exophthalmometry (HE) [1–3]. However, despite the ease and convenience afforded by this method, the problems of reproducibility and standardization of measurements remain unsolved [1, 3, 4]. As shown by several authors, computed tomography (CT) correlates well with HE while providing greater accuracy [5–8], but the high cost, exposure to radiation, and the need for repeated measurements severely restrict the use of this technology for routine exophthalmometry.
Digital photography is a simple, noninvasive method of measurement, with the additional advantage that measurements are documented and the images are easily shared online. Photographic techniques have been used to evaluate palpebral position [9, 10], but to our knowledge, only one previous study has compared the methods of photography, clinical exophthalmometry, and CT in the measurement of exophthalmos. The authors found the correlation between the methods to be weak, possibly due to the fact that the study was multicentric and involved different photographers and evaluators, compromising reliability and reproducibility [8]. In the present investigation, we compared digital and clinical exophthalmometry to radiological exophthalmometry (CT) in a sample of patients from a single center, employing a single trained photographer. We also determined the level of agreement and reproducibility of the respective measurements.

(Hindawi, Journal of Ophthalmology, Volume 2020, Article ID 1397410, 6 pages, https://doi.org/10.1155/2020/1397410)

2. Methods

This was an observational, cross-sectional, and descriptive study conducted at a hospital-based tertiary-level ophthalmology and otorhinolaryngology outpatient clinic in São Paulo, Brazil. The study protocol complied with the tenets of the Declaration of Helsinki and was approved by the Institutional Review Board of the University of São Paulo Medical School, and all participants gave their informed written consent.
Between May 2016 and August 2017, 17 patients with bilateral proptosis (defined as clinical exophthalmometry readings greater than 20 mm) due to orbital diseases were recruited. We also recruited 15 patients with normal orbits who had recently been submitted to CT scanning for nasal evaluation. The inclusion criteria were: (i) age above 21 years, (ii) absence of ocular abnormalities such as degenerative myopia, microphthalmos, and anophthalmic socket, (iii) absence of orbital abnormalities such as previous fractures and congenital defects, (iv) good patient collaboration, and (v) absence of previous orbital, strabismus, or eyelid surgery. Patients with poor-quality images (head tilted or eyes not in primary gaze) were excluded.

2.1. Clinical Exophthalmometry. Patients were submitted to clinical exophthalmometry twice by one examiner (senior faculty) and once by a second examiner (senior faculty). All measurements were taken in the primary position, with the patient standing up and with the eyes at the same level as the examiner's eyes. The exophthalmometer (Oculus Inc., Dutenhofen, Germany) was fitted with a double mirror, without prism. The base was recorded and maintained for all three measurements.

2.2. Radiological Exophthalmometry. No later than 4 weeks after the ophthalmologic examination, patients were submitted to multidetector CT scanning of the orbit (Brilliance 16, Philips Medical Systems, the Netherlands) without intravenous contrast. Axial scanning was performed with the patient in dorsal decubitus and with the head parallel to the Frankfurt plane. Patients were instructed to keep their eyes open and static in the primary position of gaze. The acquisition parameters were 120 kV, 200 mAs; detector setting 16 × 0.75 mm; slice thickness 1.5 mm; and increment 0.7 mm.
After acquisition, images were processed and analyzed with the dedicated workstation software (Extended Brilliance Workspace (EBW), Philips Medical Imaging, Best, the Netherlands). The images were examined by a head-and-neck radiologist and by a second reader, both of whom were blinded to the clinical condition of the patient. Having selected the full-orbit image with the greatest intraocular lens thickness, a line was drawn from the zygomatic rims to the anterior surface of the cornea in order to measure exophthalmos (Figure 1(c)) [8, 11]. The images of the right and left orbits were evaluated independently. The reliability of the measurements was assessed by repeating them on the same CT images 6 months later.

2.3. Digital Photography Exophthalmometry. Standardized frontal and side-view photographs (Canon PowerShot SX530 HS) of each subject were taken by a single trained ophthalmologist. The patient was positioned in a chair against a blue background, with the head aligned in primary gaze and parallel to the photographer (Figure 1(a)). A surgical pen mark was made on the anterior border of the lateral orbital rim at the height of the lateral canthus. The photograph also included a 12 mm diameter circular sticker for digital calibration. The digital images were processed and analyzed by two readers with the assistance of custom software developed in Matlab (MathWorks, Natick, MA) by Garcia and colleagues [12]. One of the readers repeated the measurements on a different day. Based on the side-view photograph, exophthalmos was defined as the distance from the lateral orbital rim to the corneal vertex (Figure 1(b)).

2.4. Statistical Analysis. The statistical analysis was performed in the R language [13]. The data generated with the three methods of exophthalmometry (clinical, radiological, and digital) in both groups (normal and proptosis) were compared with paired t-tests.
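The calibration step in Section 2.3 (a 12 mm sticker in the frame providing a pixel-to-millimetre scale) can be sketched as follows. This is an illustrative Python sketch, not the authors' Matlab software; the function names and pixel values are hypothetical:

```python
# Convert a pixel distance measured on the side-view photograph into
# millimetres, using a reference sticker of known diameter in the frame.
# Illustrative sketch only; names and values are hypothetical.

def px_to_mm_scale(sticker_diameter_px, sticker_diameter_mm=12.0):
    """Millimetres represented by one pixel in the image plane."""
    return sticker_diameter_mm / sticker_diameter_px

def exophthalmos_mm(rim_to_vertex_px, mm_per_px):
    """Lateral orbital rim mark to corneal vertex, in millimetres."""
    return rim_to_vertex_px * mm_per_px

scale = px_to_mm_scale(sticker_diameter_px=240.0)  # sticker spans 240 px
print(round(exophthalmos_mm(rim_to_vertex_px=373.0, mm_per_px=scale), 2))  # 18.65
```

A single in-frame reference of known size is a common way to calibrate distances in 2D photographs, with the caveat that the scale only holds where the reference and the measured structure lie at a similar distance from the camera.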
Intraclass and interclass correlation coefficients (ICC) were used to assess the consistency, or reproducibility, of the measurements. Radiological exophthalmometry (CT) was the modality against which the other two methods were compared. Pearson's correlation coefficient was used to evaluate the association between the different modalities, while the agreement of clinical and digital with radiological exophthalmometry was assessed with Bland–Altman plots. The level of statistical significance was set at 5% (p < 0.05).

3. Results

Thirty-four orbits with proptosis and 30 orbits without proptosis (normal group) were included in the study. Table 1 shows the clinical, radiological, and digital exophthalmometry measurements of the normal and proptosis groups, expressed as mean values ± standard deviation (SD). Mean values were significantly higher in the proptosis group than in the normal group, regardless of the method. Within each group, clinical exophthalmometry yielded significantly smaller mean values (control −0.58 mm; proptosis −0.82 mm) than radiological exophthalmometry. Moreover, mean digital exophthalmometry values were greater (control +0.52 mm, p = 0.276; proptosis +2.63 mm, p < 0.05) than mean radiological exophthalmometry values. The level of intra- and interclinician agreement of the clinical measurements was 0.93 and 0.95, respectively. The intra- and interclass correlation coefficients were, respectively, 0.95 and 0.91 for radiological measurements and 0.74 and 0.92 for digital measurements. A stronger correlation was observed between clinical and radiological measurements in both the proptosis group and the control group (Pearson correlation coefficient: r = 0.84 and r = 0.91, respectively; both p < 0.05). Somewhat weaker, although still significant, correlations were found between digital and radiological measurements (Pearson r = 0.61 and r = 0.75, respectively; both p < 0.05) (Figure 2).
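The Bland–Altman agreement analysis used in this study (bias and 95% limits of agreement between two methods) can be sketched as below. The study's analysis was done in R; this is a generic Python illustration with made-up paired values, not the study's data:

```python
# Bland-Altman agreement: bias (mean difference) and 95% limits of
# agreement (bias +/- 1.96 * SD of the differences) between two methods.
# The paired values below are invented for illustration.
from statistics import mean, stdev

def bland_altman(method_a, method_b):
    """Return (bias, lower_loa, upper_loa) for paired measurements."""
    diffs = [a - b for a, b in zip(method_a, method_b)]
    bias = mean(diffs)
    half_width = 1.96 * stdev(diffs)  # stdev = sample standard deviation
    return bias, bias - half_width, bias + half_width

clinical = [17.0, 18.5, 20.0, 21.5, 23.0]      # e.g. Hertel readings (mm)
radiological = [17.6, 19.1, 20.4, 22.3, 23.5]  # e.g. CT readings (mm)

bias, lower, upper = bland_altman(clinical, radiological)
print(round(bias, 2))  # -0.58 (clinical reads lower than CT on average)
```

Narrower limits of agreement indicate better interchangeability of the two methods, which is how the width of the intervals reported in the Results is interpreted.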
When comparing clinical and radiological measurements on the Bland–Altman plot [14], the 95% limits of agreement (LOA) were −3.12 and 1.96 mm for the control group and −4.49 and 2.85 mm for the proptosis group. When comparing digital and radiological measurements, the LOA were −4.52 and 5.56 mm for the control group and −5.41 and 10.68 mm for the proptosis group. In other words, clinical measurements showed better agreement with CT, especially in the normal-orbits group (Figure 3).

4. Discussion

Exophthalmometry is an important tool in the evaluation and follow-up of orbital diseases. A variety of instruments have been used to measure proptosis [7, 15, 16], including radiological exophthalmometry, which was shown to correlate well with HE [6, 7, 16]. However, the use of CT solely for exophthalmometry should be avoided due to radiation exposure and high costs. HE is still the most widely used method, although readings, reproducibility, and accuracy are affected by the device type and examiner skill. In our study, the readings were made by a single senior faculty member and repeated by another senior faculty member in order to calculate interobserver variability [1, 4]. With all three measuring methods, mean values were significantly higher in the proptosis group than in the control group. In addition, accuracy remained unchanged as the degree of proptosis increased. Previous studies have yielded inconsistent results in this regard: some have found accuracy to be negatively associated with the degree of proptosis [2, 17], while others have not [8]. The high levels of reliability and reproducibility observed in this study (ICC = 0.93 for clinical measurements; ICC = 0.95 for radiological measurements) match the results of several other studies [3, 4, 16]. Due to the high intra- and interclass correlation of the CT measurements (an indication of good reproducibility and consistency), CT was chosen as the gold standard to which the other methods were compared.
CT also correlated well with HE. Both observations are compatible with those of previous studies [6–8, 16]. Because patients are in the supine position during CT imaging and standing upright during HE reading, CT measurements are expected to be lower than Hertel readings. However, the opposite was observed in our study. This nevertheless coincides with Bingham's findings for the Oculus exophthalmometer, the same device used in this study [8]. The lower estimation of the HE method may overcome the position bias in measurement. On the other hand, the digital measurements from photographs made in the upright position were higher than the radiological measurements.

Figure 1: (a) Standardized frontal patient photography. (b) Side-view photography with the digital measurement method. (c) Radiological exophthalmometry method.

Digital exophthalmometry is a recent method of measuring the axial globe position. Some authors using photography to evaluate palpebral position have reported good results in patients with Graves orbitopathy [9, 10]. The upsides of using photography instead of CT or Hertel exophthalmometry are noninvasiveness and high availability. The downsides are the requirement of good standardization, including gaze, light, and camera parameters (position, zoom, and aperture). Furthermore, unlike HE, measurements are not promptly available. We found a lower intraclinician correlation for digital than for clinical and radiological measurements, indicating lower reproducibility, possibly due to inaccurate rim-edge markings or photography inconsistency. Digital measurements also displayed higher mean values, broader ranges and standard deviations (especially in the proptosis group), and lower Pearson correlation indices to CT when compared with clinical measurements. On the Bland–Altman plot, the LOA were larger in the proptosis group than in the control group in all method comparisons. They were also larger for digital vs.
CT measurements than for clinical vs. CT measurements in both groups. These findings suggest that measurements were less reliable (greater variation) in the proptosis group and when using digital photography. Our study was carried out in a single center, employing a single trained photographer and specific custom software for the measurements. It also has some limitations, such as the small sample size. Although the correlation indices observed in our study were better than the indices reported elsewhere [8], the level of agreement was below our expectations.

Table 1: Clinical, radiological, and digital exophthalmometry means and standard deviations (SD) for each group (normal and proptosis), and intra- and interclass correlation coefficients (ICC) of each exophthalmometry method.

Method                 Normal group (30 orbits)   Proptosis group (34 orbits)   ICC (intra)   ICC (inter)
Clinical (Hertel)      17.52 ± 2.67 mm            20.87 ± 2.53 mm               0.93          0.95
Radiological (CT)      18.10 ± 3.09 mm            21.69 ± 2.91 mm               0.95          0.91
Digital photography    18.62 ± 3.87 mm            24.32 ± 5.17 mm               0.74          0.92

Figure 2: (a–d) Correlations between clinical and digital photographic measurements versus CT measurements. (a) Proptosis group, clinical vs. radiological; (b) normal group, clinical vs. radiological; (c) proptosis group, digital vs. radiological; (d) normal group, digital vs. radiological.

5. Conclusions

Digital photography exophthalmometry was associated with greater variance and lower correlation and agreement with radiological measurements when compared to clinical exophthalmometry.
In spite of improved photography standardization and measurement consistency, digital exophthalmometry is still not accurate enough to supplant Hertel exophthalmometry.

Abbreviations
HE: Hertel exophthalmometry
CT: Computed tomography
SD: Standard deviation
LOA: Limits of agreement
ICC: Intraclass and interclass correlation coefficients

Data Availability
The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request. Requests for access to these data should be made to allanpieroni75@gmail.com.

Consent
All participants gave their written informed consent.

Disclosure
The study was approved by the Institutional Review Board of the University of São Paulo Medical School.

Conflicts of Interest
The authors declare that they have no conflicts of interest.

Authors' Contributions
T.S.P. and C.H.K. contributed to the acquisition, analysis, and interpretation of data; A.C.P.G., E.M.M.S.G., and C.A.L. made substantial contributions to the conception, design, interpretation of data, and writing of the work; and M.L.R.M. contributed to the conception of the work and substantively revised it.

Acknowledgments
This work was supported by grants from CAPES – Coordenação de Aperfeiçoamento de Pessoal de Nível Superior, Brasília,
radiological: bias 0.52 mm, 95% LOA –4.52 to 5.56.

Figure 3: (a–d) Bland–Altman plots comparing clinical and digital photographic measurements versus CT measurements.

Brazil, and CNPq-Conselho Nacional de Desenvolvimento Científico e Tecnológico (No. 308172/2018-3), Brasília, Brazil. The funding organizations had no role in the design or conduct of this research.

References

[1] D. C. Musch, B. R. Frueh, and J. R. Landis, "The reliability of Hertel exophthalmometry," Ophthalmology, vol. 92, no. 9, pp. 1177–1180, 1985.
[2] Y. Vardizer, T. T. J. M. Berendschot, and M. P. Mourits, "Effect of exophthalmometer design on its accuracy," Ophthalmic Plastic & Reconstructive Surgery, vol. 21, no. 6, pp. 427–430, 2005.
[3] M. B. Kashkouli, B. Beigi, M. M. Noorani, and M. Nojoomi, "Hertel exophthalmometry: reliability and interobserver variation," Orbit, vol. 22, no. 4, pp. 239–245, 2003.
[4] A. K. C. Lam, C.-f. Lam, W.-k. Leung, and P.-k. Hung, "Intra-observer and inter-observer variation of Hertel exophthalmometry," Ophthalmic and Physiological Optics, vol. 29, no. 4, pp. 472–476, 2009.
[5] I. T. Kim and J. B. Choi, "Normal range of exophthalmos values on orbit computerized tomography in Koreans," Ophthalmologica, vol. 215, no. 3, pp. 156–162, 2001.
[6] N. Ramli, S. Kala, A. Samsudin, K. Rahmat, and Z. Zainal Abidin, "Proptosis-correlation and agreement between Hertel exophthalmometry and computed tomography," Orbit, vol. 34, no. 5, pp. 257–262, 2015.
[7] M. J. Hauck, J. P. Tao, and R. A. Burgett, "Computed tomography exophthalmometry," Ophthalmic Surgery, Lasers, and Imaging, vol. 42, pp. 1–4, 2010.
[8] C. M. Bingham, J. A. Sivak-Callcott, M. J. Gurka et al., "Axial globe position measurement," Ophthalmic Plastic and Reconstructive Surgery, vol. 32, no. 2, pp. 106–112, 2016.
[9] D. T. Edwards, G. B.
Bartley, D. O. Hodge, C. A. Gorman, and E. A. Bradley, "Eyelid position measurement in Graves' ophthalmopathy," Ophthalmology, vol. 111, no. 5, pp. 1029–1034, 2004.
[10] H. A. Miot, "Comparative evaluation of oculometric variables in Graves' ophthalmopathy," Clinics, vol. 64, no. 9, pp. 885–889, 2009.
[11] R. D. Gibson, "Measurement of proptosis (exophthalmos) by computerised tomography," Australasian Radiology, vol. 28, no. 1, pp. 9–11, 1984.
[12] G. H. Milbratz, D. M. Garcia, F. C. Guimarães, and A. A. V. Cruz, "Multiple radial midpupil lid distances: a simple method for lid contour analysis," Ophthalmology, vol. 119, no. 3, pp. 625–628, 2012.
[13] R Core Team, R: A Language and Environment for Statistical Computing, R Foundation for Statistical Computing, Vienna, Austria, 2017.
[14] J. M. Bland and D. G. Altman, "Statistical methods for assessing agreement between two methods of clinical measurement," Lancet, vol. 327, no. 8476, pp. 307–310, 1986.
[15] K. Knudtzon, "On exophthalmometry; the result of 724 measurements with Hertel's exophthalmometer on normal adult individuals," Acta Psychiatrica Scandinavica, vol. 24, no. 3-4, pp. 523–537, 1949.
[16] M. Segni, G. B. Bartley, J. A. Garrity, E. J. Bergstralh, and C. A. Gorman, "Comparability of proptosis measurements by different techniques," American Journal of Ophthalmology, vol. 133, no. 6, pp. 813–818, 2002.
[17] B. R. Frueh, F. Garber, R. Grill, and D. C. Musch, "Positional effects on exophthalmometer readings in Graves' eye disease," Archives of Ophthalmology, vol. 103, no. 9, pp. 1355–1356, 1985.
work_5xxea4outbcvxadqfppbieir7m ----

Beginnings: The First Hardware Digital Filters
Leland B. Jackson
IEEE Signal Processing Magazine, November 2004

Fate may tap you on the shoulder anywhere, including an airplane. In December 1965, while flying to Atlanta for Christmas, I ran into Edward E. David, then executive director for Communications Research at Bell Labs and later science advisor to President Nixon. Ed was also from Atlanta, and I had met him in 1961 when I worked at Bell Labs as a Massachusetts Institute of Technology (MIT) co-op student in Henry S. McDonald's department. James F. Kaiser was another member of Hank's department, and during the Kennedy administration, Jim and I had been known as "the other JFK and LBJ." On the plane, Ed offered me a job in Hank's department with the understanding that I would attend Stevens Institute of Technology part time for my Sc.D. and, as an adjunct professor at Stevens, Ed would be my thesis advisor. Thus, on that fateful plane trip began my career in digital signal processing. When I reported to work in August 1966, Hank and Jim introduced me to an exciting new area called digital filtering, based largely on earlier results from the field of sampled-data control systems. Jim had recently published the first book with an extensive chapter on digital filters [1]. Hank was a true visionary and a salesman, and he could already see the tremendous impact that this new technology would have on communication systems in

[Author biography] Leland Jackson was born on 23 July 1940 in Atlanta, Georgia, and grew up there. He obtained the S.B. and S.M. degrees from the Massachusetts Institute of Technology (1963) and the Sc.D.
degree from the Stevens Institute of Technology (1970). He has been a member of the technical staff with Bell Laboratories (1966–1970) and vice president of engineering with Rockland Systems Corporation (1970–1974). Since 1974, Dr. Jackson has been with the Department of Electrical and Computer Engineering, University of Rhode Island, where he is currently a professor of electrical engineering. His research work has focused on quantization effects in digital filters and parametric signal modeling. Dr. Jackson coauthored the books Digital Filters and Signal Processing (1986; 1989; 1996) and Signals, Systems, and Transforms (1991). For his work in finite word length design and hardware implementation of digital signal processing systems, he received the Technical Achievement Award of the IEEE Signal Processing Society (1983). Mixing professional and personal interests was easy for Dr. Jackson, who has been the audio engineer for the digital recordings of his wife Diana's pipe organ music. An accomplished organist, she has had more than 20,000 plays of her music on the web, including works by Bach, Widor, and Vierne, all wearing his engineering fingerprint. In recent years, he has become interested in genealogy and was surprised to learn that his eighth great grandfather, Sir Anthony Jackson, was imprisoned by Cromwell in the Tower of London between 1651–1659, and that his fifth and sixth great grandfathers were Quakers who immigrated to Pennsylvania in 1717. The Jacksons have also traveled to far-away destinations, where the inherited adventurous spirit of his forefathers must have been their guide: East Africa, Australia, New Zealand, Peru, Costa Rica, the Galapagos Islands, and the Amazon River. At home in Rhode Island, Leland Jackson enjoys sailing, hiking, digital photography, and reading historical novels. He is also looking forward to reading more bedtime stories to his grandchildren, now that they have moved to a new house next door.
For the "DSP History" column, Dr. Leland Jackson tells the story of his pioneering work in digital filter hardware implementations with simplicity and interleaved bits of humor. Enjoy!
—Adriana Dumitras and George Moschytz, "DSP History" column editors (adrianad@ieee.org, moschytz@isi.ee.ethz.ch)

dsp history

the future. He had been telling anyone he could buttonhole at Bell Labs about these new developments (and with his boundless energy and enthusiasm, that was just about everyone), and he was pushing management to devote more R&D resources to this area. He decided that, to demonstrate the possibilities, he would select an analog unit in current use in the Bell System, and we would build a prototype digital equivalent. His choice was the touch-tone receiver (TTR), the unit in the local telephone central office that decodes the multifrequency tones used for dialing, because it contained every type of frequency-selective filter: low pass, high pass, band pass, and band rejection. The original block diagram of that TTR is shown in Figure 1. Thus, my first assignment was to design and build an all-digital TTR. Hank was excited about the latest advance in integrated-circuit (IC) technology, transistor-transistor logic (TTL), which was rapidly replacing the older diode-transistor logic (DTL) and resistor-transistor logic (RTL). We could buy four two-input gates, two four-input gates, two flip-flops, or two full adders in a single 14-pin dual-inline IC package! We could even get an 8-b shift register in a single package. Hank realized that, because of the parallelism inherent in digital filters, we could use bit-serial arithmetic and still be fast enough to keep up with the data rate in real time.
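Bit-serial arithmetic processes words LSB first, one bit per clock, so a single full adder plus a one-bit carry store suffices to add two words as they stream through shift registers. A behavioral sketch of the idea (illustrative Python, not the original TTL design):

```python
def bit_serial_add(a_bits, b_bits):
    """Add two LSB-first bit streams using one full adder and a carry flip-flop."""
    carry = 0
    out = []
    for a, b in zip(a_bits, b_bits):
        s = a ^ b ^ carry                      # full-adder sum bit
        carry = (a & b) | (carry & (a ^ b))    # full-adder carry out, stored for next clock
        out.append(s)
    return out

def to_bits(n, width):
    """Serialize a nonnegative integer LSB first."""
    return [(n >> i) & 1 for i in range(width)]

def from_bits(bits):
    """Reassemble an LSB-first bit stream into an integer."""
    return sum(b << i for i, b in enumerate(bits))

# 13 + 22 in 8-b words, one bit per "clock"
print(from_bits(bit_serial_add(to_bits(13, 8), to_bits(22, 8))))  # prints 35
```

One adder per filter section, time-shared across the bits of each word, is what made the parallelism of the digital filter affordable in small TTL packages.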
In fact, using later very large scale integration (VLSI) terminology, our circuits were flow simple, cell simple, completely pipelined, locally connected, and thus systolic [2]. The sampling rate was 10 kHz, with 10-b rounding of multiplication products, for a basic bit rate of 100 kb/s. The TTL bit-serial circuits could run at least an order of magnitude faster, so we multiplexed the filters by eight; i.e., by interleaving the samples from different filter inputs into a basic second-order section, lengthening the shift registers comprising each delay (z^-1) by a factor of eight, and cycling through the filter coefficients from a read-only memory, we could realize eight low-pass or bandpass filters with a single second-order section. Similarly, two second-order high-pass filters and two sixth-order bandstop filters were multiplexed into a single section. Hank and I have a patent on this digital-filter multiplexing scheme, and I have another on the implementation of saturation arithmetic in digital filters. I am aware of earlier fixed clutter-rejection digital filters in moving-target-indicator radars, but to my knowledge, this demonstration TTR was the first realization of a fully programmable digital filter in hardware form. It's interesting how we discovered the existence of overflow oscillations in fixed-point digital filters and thus the need for overflow detection and saturation arithmetic. The TTR was built in a single 6-in-high chassis with wire-wrapped cards mounted vertically. A fluorescent light was mounted over the chassis. The unit would be operating properly with a sampled sinusoid displayed on the oscilloscope, but if we turned on the light, the signal would break up into a seemingly random, full-scale oscillation. Turning the unit off and then on again to reset the delays (z^-1), the sinusoid would reappear and normal operation would resume. We tried this over and over, and were perplexed at what was going on.
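The multiplexing scheme described above — one second-order section whose delays are lengthened to n stages while a ROM cycles through the per-channel coefficients — can be sketched behaviorally as follows (a hypothetical floating-point Python model; the original used 10-b fixed point and bit-serial TTL logic):

```python
from collections import deque

class MultiplexedBiquad:
    """One second-order section time-shared among n interleaved channels."""

    def __init__(self, coeffs):
        # coeffs[k] = (b0, b1, b2, a1, a2) for channel k, playing the role
        # of the coefficient read-only memory.
        self.coeffs = list(coeffs)
        n = len(self.coeffs)
        self.d1 = deque([0.0] * n)   # each delay z^-1 lengthened to n stages
        self.d2 = deque([0.0] * n)
        self.k = 0                   # channel index, cycled every sample

    def step(self, x):
        b0, b1, b2, a1, a2 = self.coeffs[self.k]
        w1, w2 = self.d1[0], self.d2[0]      # this channel's delayed states
        w = x - a1 * w1 - a2 * w2            # direct-form-II feedback
        y = b0 * w + b1 * w1 + b2 * w2       # feedforward
        self.d1.rotate(-1); self.d1[-1] = w  # shift the lengthened registers
        self.d2.rotate(-1); self.d2[-1] = w1
        self.k = (self.k + 1) % len(self.coeffs)
        return y

# Two interleaved "filters": unity gain and a gain of 2.
f = MultiplexedBiquad([(1, 0, 0, 0, 0), (2, 0, 0, 0, 0)])
print([f.step(x) for x in [1, 1, 1, 1]])  # prints [1.0, 2.0, 1.0, 2.0]
```

Feeding interleaved samples x0[0], x1[0], ..., x0[1], x1[1], ... through one section yields the interleaved outputs of n independent filters, which is how eight bandpass filters shared a single piece of hardware.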
▲ 1. A digital touch-tone receiver. Notations A-D, HPF, BPF, LPF, BRF, HWR, and LIM stand for analog-to-digital converter, high-pass filter, band-pass filter, low-pass filter, band-rejection filter, half-wave rectifier, and limiter, respectively. (Multiplexed units are enclosed in dotted lines.)

However, we quickly realized that the inherently nonlinear overflow characteristic of twos-complement arithmetic, coupled with the feedback in a recursive digital filter, could easily produce oscillations for
Huang, “Human-computer multimodal interface,” Proc. IEEE, vol. 91, no. 9, pp. 1267–1271, 2003. [4] F. Flippo, A. Krebs, and I. Marsic, “A frame- work for rapid development of multimodal interfaces,” in Proc. Int. Conf. on Multimodal Interfaces (ICMI 2003), Vancouver, BC, pp. 109–116, 2003. [5] M. Kaur, M. Tremaine, N. Huang, J. Wilder, Z. Gakofski, F. Flippo, and C. Mantravadi, “Where is it? Event synchronization in gaze- speech input systems,” in Proc. Int. Conf. on Multimodal Interfaces (ICMI 2003), Vancouver, BC, pp. 151–158, 2003. [6] S. Oviatt, R. Coulton, S. Tomko, and B. Xiao, “Toward a theory of organized multimodal integration patterns during human-computer interaction,” in Proc. IEEE Int. Conf. on Multimodal Interfaces (ICMI 2003), Vancouver, BC, pp. 44–51, 2003. IEEE SIGNAL PROCESSING MAGAZINENOVEMBER 2004 81 certain initial conditions. Electro- magnetic radiation generated by turning on the fluorescent light was generating the required initial con- ditions. Intuitively, we added over- flow detection and saturation circuits to the feedback loops of the filters, which fixed the problem, and our colleagues Ebert et al. [3] sub- sequently proved that this indeed precluded overflow oscillations in second-order sections. Separately, I also investigated and modeled the small-scale limit cycles caused by multiplication rounding [4] and the tradeoff of roundoff noise versus dynamic range in fixed-point digital filters [5]. Another groundbreaking idea of Hank McDonald was the use of delta modulation to implement analog-to- digital (A/D) conversion. Our homemade A/D converter consisted of a simple delta-modulator, followed by an up/down counter and a “leaky” accumulator. The contents of the counter were transferred out at regular intervals, with the counter then being reset, to produce a differ- ential pulse code modulated (DPCM) signal. 
The DPCM signal was, in turn, accumulated (with a slight "leak" for stability) to produce the desired pulse code modulated input. This made a very simple and effective A/D converter. If only we'd thought of delta-sigma converters! A paper on our approach to digital-filter implementation was published in IEEE Transactions on Audio and Electroacoustics (the forerunner of IEEE Transactions on Signal Processing) in September of 1968 [6]. If you look at my picture at the end of that 1968 article, you will see a very serious young man gazing blankly into space. This picture was taken on 6 June 1968, and that evening I proposed to my wife. Evidently, the enormity of what I was contemplating had begun to dawn on me!

References

[1] J.F. Kaiser, "Digital filters," in System Analysis by Digital Computer, J.F. Kaiser and F.F. Kuo, Eds. New York: Wiley, 1966, pp. 218–285.
[2] L.B. Jackson, Digital Filters and Signal Processing, 3rd ed. Boston, MA: Kluwer, 1996.
[3] P.M. Ebert, J.E. Mazo, and M.G. Taylor, "Overflow oscillations in digital filters," Bell Sys. Tech. J., vol. 48, pp. 2999–3020, 1969.
[4] L.B. Jackson, "An analysis of limit cycles due to multiplication rounding in recursive digital filters," in Proc. 7th Allerton Conf. Cir. Sys. Th., 1969, pp. 69–78.
[5] L.B. Jackson, "On the interaction of roundoff noise and dynamic range in digital filters," Bell Sys. Tech. J., vol. 49, pp. 159–184, 1970.
[6] L.B. Jackson, J.F. Kaiser, and H.S. McDonald, "An approach to the implementation of digital filters," IEEE Trans. Audio Electroacoustics, vol. AU-16, pp. 413–421, 1968.

work_5zann4gdgfcavee3iipvwa6vve ---- PII: S1098-2140(00)00067-9

Exemplars

This section presents interviews with evaluators whose work can illustrate, in a specific evaluation study, the application of different models, theories, and principles described in evaluation literature.
These interviews should also exemplify the choices that evaluators make in the course of planning and conducting an evaluation. Entries in this section will begin with a brief description of the evaluation to provide a foundation for the interview. The focus will be on the dialogue between the section editor and the evaluator to learn more about the choices made, the models followed, the constraints and opportunities that emerged as the study progressed, and the evaluator's assessment of the study itself. The section editor will provide commentary as needed to point out important attributes of the evaluation itself and to highlight connections between the evaluator's comments and relevant literature or issues in the field of evaluation. Please send any suggestions about evaluators/evaluations that might be the subject of this column to the section editor, Jody Fitzpatrick, at jfitzpat@carbon.cudenver.edu.

Summary of the STEP Evaluation
DAVID FETTERMAN

INTRODUCTION

David Fetterman, the past president of the American Evaluation Association and Director of the MA Policy Analysis and Evaluation Program at Stanford University, is the founder and primary developer of the empowerment approach to evaluation. The American Journal of Evaluation has published several articles by Dr. Fetterman on this approach. In 1997, however, Dr. Fetterman undertook a complex, three-year evaluation of the Stanford Teacher Education Program (STEP), and chose to follow a different model. In an effort to learn more about the practice of major model builders in evaluation, this column will focus on the STEP evaluation and the choices Dr. Fetterman made in conducting that study. This project was selected to illustrate Dr. Fetterman's work because the evaluation continues to receive much attention both at Stanford and nationally in the teacher education arena and because Dr. Fetterman felt that readers would learn, as he did, from the process of this study.
David Fetterman ● Stanford University, School of Education, Room 333 Cubberly, 485 Lasuen Hall, Stanford, CA 94305-3096.

American Journal of Evaluation, Vol. 21, No. 2, 2000, pp. 239–241. All rights of reproduction in any form reserved. ISSN: 1098-2140. Copyright © 2000 by American Evaluation Association.

SUMMARY OF THE STEP EVALUATION

The president of Stanford University, Gerhard Casper, requested an evaluation of the Stanford Teacher Education Program (STEP). The first phase of the evaluation was formative, designed to provide information that might be used to refine and improve the program. It concluded at the end of the 1997–1998 academic year. Findings and recommendations from this phase of the evaluation were reported in various forms, including a formal summer school evaluation report (Fetterman, Dunlap, Greenfield, & Yoo, 1997), over 30 memoranda, and various informal exchanges and discussions. The second stage of this evaluation was summative in nature, providing an overall assessment of the program (Fetterman, Connors, Dunlap, Brower, Matos, & Paik, 1999). The final report highlights program evaluation findings and recommendations, focusing on the following topics and issues: unity of purpose or mission, curriculum, research, alumni contact, professional development schools/university school partnerships, faculty involvement, excellence in teaching, and length of the program. Specific program components also were highlighted in the year-long program evaluation report, including admissions, placement, supervision, and portfolios. (See the STEP web site for copies of all evaluation reports: http://www.stanford.edu/~davidf/step.html.)

THE METHODOLOGY

The evaluation relied on traditional educational evaluation steps and techniques, including a needs assessment; a plan of action; data collection (interviews, observations, and surveys); data analysis; and reporting findings and recommendations.
Data collection involved a review of curricular, accreditation, and financial records, as well as interviews with faculty and students, and observations of classroom activity. Informal interviews were conducted with every student in the program. Focus groups were conducted with students each quarter and with alumni from the classes of '95, '96, and '97. More than 20 faculty interviews were conducted. Survey response rates were typically high (90% to 100%) for master teachers, current STEP students, and alumni. Data collection also relied on the use of a variety of technological tools, including digital photography of classroom activity, Web surveys, and evaluation team videoconferencing on the Internet. Data analysis was facilitated by weekly evaluation team meetings and frequent database sorts. Formal and informal reports were provided in the spirit of formative evaluation. Responses to preliminary evaluation findings and recommendations were used as additional data concerning program operations. (A detailed description of the methodology is presented in Fetterman, Connors, Dunlap, Brower, & Matos, 1998.)

BRIEF DESCRIPTION OF STEP

STEP is a 12-month teacher education program in the Stanford University School of Education, offering both a master's degree and a secondary school teaching credential. Subject area specializations include English, languages, mathematics, sciences, and social studies. The program also offers a Cross-cultural, Language, and Academic Development
Students enter the academic year with a nine-month teaching placement, which begins in the fall quarter under the supervision of a cooperating teacher and field supervisor. Students also are required to take the School of Education master’s degree and state-required course work throughout the year. The program administration includes a faculty sponsor, director, placement coordinator, student services coordinator, lead supervisor, field supervisors, and a program assistant. In addition, the program has a summer school coordinator/liaison and part-time undergraduate and doctoral students. REFERENCES Fetterman, D. M., Connors, W., Dunlap, K., Brower, G., Matos, T., & Paik, S. (1999). Stanford Teacher Education Program 1997–98 Evaluation. Stanford, CA: Stanford University. Fetterman, D. M., Connors, W., Dunlap, K., Brower, G., & Matos, T. (1998). Stanford Teacher Education Program 1997–98 Evaluation Report. Stanford, CA: School of Education, Stanford University. Fetterman, D. M., Dunlap, K., Greenfield, A., & Yoo, J. (1997). Stanford Teacher Education Program 1997 Summer School Evaluation Report. Stanford, CA: Stanford University. 241Exemplars work_5zmjks7ilffhxo6k7pfxnrwik4 ---- 178 M E D IJ S K E S T U D IJ E M ED IA S TU D IE S 2 01 6 . 7 . ( 14 ) . 17 8 -1 89 PRIKAZI I ANOTACIJE BOOK REVIEWS Ruxandra Boicu, Silvia Branea and Adriana Ștefănel (eds) POLITICAL COMMUNICATION AND EUROPEAN PARLIAMENTARY ELECTIONS IN TIMES OF CRISIS: PERSPECTIVES FROM CENTRAL AND SOUTH-EASTERN EUROPE Palgrave Macmillan, London, 2017, 295 pp ISBN 978-1-137-58590-5 Political Communication is the lifeblood of democracies. Communication is particularly important where democracy is in flux, fragile, or indeed in crisis as is the case across many of the newer, Central and South-Eastern European (CSE) EU Member nations. This edited collection of empirical research papers focuses on understanding democratic processes through a study of the election campaign communication and its impact. 
The authors exploit the opportunities for comparative research offered by the 2014 European parliamentary election and so are able to encompass an exploration of the campaigns in Bulgaria, Croatia, the Czech Republic, Hungary, Lithuania, Poland, Romania and Slovakia. While this does not represent all nations, it offers insights across countries of varying sizes, population densities, and cultural and political experiences, and so provides an overview of the electoral terrain of the area. The collection is usefully divided into four sections. The first section, 'Media Coverage and Political Marketing', of six chapters, explores electioneering strategies and tactics, with the main focus being on how media outlets covered the campaign. The second section broadens the focus of tactics to explore the question of whether these represent second-order campaigns. The three chapters collectively find mixed results, suggesting the European parliamentary contests can be both a tactical testing ground and a contest of marginal importance. The third section develops this debate to explore how contests are framed around either a European or a national political agenda. The four chapters in this section similarly offer mixed findings, where some parties focus purely on national issues while others promote a European agenda. The final section, entitled 'Ideological premises, candidate recruitment, and vote results', contains five chapters which link tactics to outcomes; here, the issues of the role the elections play within democratic culture are explored. The volume highlights a variety of challenges for the development of democracy, raising important questions for scholars of elections. In particular, the role of media, with coverage largely focusing on the mainstream broadcast media, is explored, and the studies show the varying interest, style of coverage and editorialisation which occurs.
With these media consistently being the primary source of information on politics for many citizens, and particularly those least interested in politics, this focus is justified and insightful, offering a salutary reminder of the potential political power of a minority of editors and, in some cases, the political actors to which they align. Similarly, we also see a range of questions posed regarding engagement with the European project, an issue that many of the nations of the EU face. European parliamentary elections can often be used as a platform to attack the idea of the EU and so lead to the election of right-wing Eurosceptic party representatives. This phenomenon undermines the functioning of the European parliament and so the relationship between the EU, its member nations and their citizens. While this proves problematic within countries with fragile and at points corrupt democratic institutions, it also undermines the external forces of democracy which attempt to raise the average level of democratic standards across the continent. In raising these issues, and many others relating to the conduct, reporting and impact of election campaigns across the CSE nations, this collection is important for understanding the progress of democratization in a more global perspective. While the EU has its own specific challenges, the issues relating to the conduct of campaigns and the reporting of politics by media are universal to democracies globally. Overall, this collection of essays offers unique snapshots of national contests within nations facing challenges for the embedding of democracy. Each of the chapter authors provides the reader with rich insights into the diversity of the campaigns and the common issues and challenges faced by campaign managers, journalists and voters. The studies also adopt a multi-disciplinary perspective, so the volume interweaves work from political marketing, psephology, media analysis and electoral sociology to offer a rich series of insights into the study of this election. These insights make Boicu et al.'s volume an important addition to the fields of election campaigning, the CSE nations, and broader EU politics, as well as offering lessons for all scholars interested in the interplay between elections and democratic culture.
The studies also adopt a multi-disciplinary perspective, so the volume interweaves work from political marketing, psephology, media analysis and electoral sociology to offer a rich series of insights into the study of this election. These insights make Boicu et al’s volume an important addition to the fields of election campaigning, the CSE nations, broader EU politics as well as offering lessons for all scholars interested in the interplay between elections and democratic culture. The rich coverage and insightful data within the essays raise particularly important questions for consideration for the study of the future of democracy in Europe. Hence this work represents a must read for scholars and practitioners in this important field of human endeavour and the book a timely and welcome contribution to the Palgrave Political Communication and Campaigning series. Darren G. Lilleker Toril Aalberg, Frank Esser, Carsten Reinemann, Jesper Strömbäck and Claes H. de Vreese (eds) POPULIST POLITICAL COMMUNICATION IN EUROPE Routledge, London, 2017, 402 pp ISBN 978-1-138-65479-2 Populism has been increasingly pervasive phenomenon that has reached its crescendo with the vote in the UK to exit the EU and the victory of Donald Trump in the USA. The term has been used in public discourse to describe a plethora of different political actors (candidates, parties, movements) who come both from political left and right and who claim to speak in the name of the people and against the political elites. Common to all of them is the use of simple and direct language that is aiming at the common sense of average people and that is rejecting the intellectualism of elites. Contrary to the fluidity of colloquial uses of the term, scholars have been struggling to introduce more structure and clarity into the research of populism. 
The growing amount of literature on populism has been primarily focused on providing a solid definition of the phenomenon, identifying and measuring different types of populism, discussing its relation to democracy and understanding causes for its rise. One of the key academic debates evolves around the issue of ideology: is it justified to treat populism as a coherent political ideology or is populism just a political communication style? The book Populist Communication in Europe edited by Toril Aalberg, Frank Esser, Carsten Reinemann, Jesper Strömbäck and Claes H. de Vreese focuses on communicative aspects 180 M E D IJ S K E S T U D IJ E M ED IA S TU D IE S 2 01 6 . 7 . ( 14 ) . 17 8 -1 89 PRIKAZI I ANOTACIJE BOOK REVIEWS of populism. This dimension has been unjustifiably neglected in the research of populism even though “communication plays a significant role in the rise of populism” (p. 3). Aside from the discursive nature of populism, which has been the focus of numerous studies, authors of this book provide several other reasons why populist communication should be given serious attention in the studies of populism. First, populist actors are dependent on the media that serve as their ”oxygen of publicity”; second, media give attention to populists because they represent attractive and ‘sellable’ media products, as was the case with Donald Trump; third, populist communication is not to be studied only with political actors but also with the media who gladly adopt populist frames when reporting about certain issues, such as immigration; finally, mainstream leaders also resort to populist rhetoric when reaching out for voters (p. 4-5). The book provides 24 country cases and groups them geographically. In the introductory part the authors outline key discussions in existing research of populism. They align with the scholars who consider populism a ‘thin’ ideology (e.g. 
Mudde, 2004; Freeden, 1996) and they single out three key elements of populist communication: references to the people, anti-elitism and anti-group messages (p. 15). 'Anti-group messages' are also known in the literature as references to 'dangerous others' (see Grbeša and Šalaj, 2016). These 'dangerous groups' pose a threat to the efforts to bring power back to the people. For instance, resentment towards immigrants who might take away jobs from the domestic populace, or hostility towards ethnic and religious minorities who threaten the way of life of the domestic people, are common features of right-wing populism. On the other hand, when we talk about left populism, dangerous others may also be detected among financial elites. The authors lay down their conceptualization of populism by proposing that the construction of "the people" should be regarded as the central component of populist messages, while anti-elitism and the identification of "anti-out-groups" should serve as "optional additional elements" (p. 24). Various combinations of these elements can point to "different types of populism" (p. 24). The second part of the book focuses on Northern Europe and contains the cases of Denmark, Finland, Norway and Sweden. The third part deals with Western Europe and contains the cases of Austria, Belgium, Germany, Ireland, the Netherlands, Switzerland and the United Kingdom. The fourth part is about populist communication in Southern Europe, specifically in France, Greece, Israel, Italy, Portugal and Spain. The fifth part is concerned with Eastern Europe and contains the cases of Bosnia and Herzegovina, Croatia, the Czech Republic, Hungary, Poland, Romania and Slovenia. The final, sixth part brings the conclusions. Stanyer et al. first sum up the results of country-specific analyses and conclude that "in Europe, populist actors are often equated with extreme-right, anti-immigration attitudes and nationalism" (p. 356).
However, populist actors position themselves across the political spectrum, from right to left, which is why Stanyer et al. conclude that "a clear, distinctive ideology is (...) not a distinctive feature of populist political actors across Europe" (p. 356). Esser et al., in their concluding remarks, come up with a challenging threefold conceptualization of "media populism". The first dimension is "populism by the media", which captures the way in which the media engage in "their own kind of populism" (p. 367). This perspective builds on the in-built media "anti-establishment bias" detected in several analysed countries (p. 356). The second dimension is "populism through the media" (p. 369), which focuses on the attractiveness of populist messages and the inclination of the media to promote populist players to catch the attention of the audiences. The third dimension is "populist citizen journalism" (p. 371), which refers to the tendency of the media to open the gate to populism through readers' comments, which are often populist in nature. Finally, in the last chapter of the book, Reinemann et al. sketch the uses and effects of populist political communication (p. 386), calling for integrated research on populist messages and their effects. This book is a valuable contribution to the study of populist political communication. Its biggest value, aside from being a very useful and thorough overview of the state of populism in Europe, is that it points to a number of communication-related topics that have been emerging in the studies of populism but have so far not been given adequate scholarly attention. Although all concluding chapters build on comparative findings, they do not say much about differences between groups of countries with a similar socio-political background or, for that matter, between countries with a similar background.
This type of comparative insight might have contributed to the discussion on the foundations and nature of populism. Nevertheless, this does not undermine the quality of this important reading.

References
>Freeden, Michael (1996) Ideologies and Political Theory: A Conceptual Approach. Oxford: Clarendon Press.
>Grbeša, Marijana and Šalaj, Berto (2016) Textual Analysis of Populist Discourse in 2014/2015 Presidential Election in Croatia. Contemporary Southeastern Europe 3 (1): 106-127.
>Mudde, Cas (2004) The Populist Zeitgeist. Government and Opposition 39 (4): 541-563.

Marijana Grbeša

Kristin Skare Orgeret and William Tayeebwa (eds)
JOURNALISM IN CONFLICT AND POST-CONFLICT CONDITIONS: WORLDWIDE PERSPECTIVES
NORDICOM, University of Gothenburg, Göteborg, 2016, 202 pp
ISBN: 978-91-87957-24-6

The book Journalism in Conflict and Post-Conflict Conditions: Worldwide Perspectives, edited by Kristin Skare Orgeret and William Tayeebwa, consists of ten articles on media reporting and journalists' experiences from different conflict and post-conflict areas. Skare Orgeret states that the aim of this book "is to provide both empirical and theoretical input to the discussions of the role of journalism and media in conflict and post-conflict situations and in the often rather muddy waters between them" (p. 16). The first chapter of the book, Elisabeth Eide's "Afghanistan: Journalism in pseudo post-conflict. A clash of definitions?", explores the concept of post-conflict using Afghanistan as an example. Eide discusses the development of journalism as an institution and analyses the stories published by two important websites – NAI (Supporting Open Media in Afghanistan) and IWPR (Institute for War and Peace Reporting) – in 2014. Both media organizations "seem to strike a balance between reporting ongoing conflicts and war activities and reporting which takes citizens' concerns in the everyday seriously", seeking a peaceful solution in a war-ridden country (p.
36). In her chapter "Justified mission? Press coverage of Uganda's military intervention in the South Sudan conflict" (Chapter Two), Charlotte Ntulume discusses neighbouring Uganda's press coverage of the Uganda People's Defence Forces' (UPDF) involvement in the conflict in South Sudan, using frame theory, indexing and the notion of the fourth estate. The majority (72%) of manifest frames portrayed the UPDF's involvement in the war as justified (p. 48), and 85% of the sources identified in the coverage were official sources (p. 55). "The New Vision and Daily Monitor coverage of the UPDF intervention in the South Sudan conflict in the early days following the deployment toed the government line on the justifications" (p. 57). In the chapter "Who's to blame for the chaos in Syria? The coverage of Syria in Aftenposten, with the war in Libya as doxa" (Chapter Three), Rune Ottosen and Sjur Øvrebø examine how the civil war in Syria can be discussed as a post-crisis crisis. Using the dichotomy between war journalism and peace journalism, the article analyses the frames used in reporting on the gas attack in Ghouta and examines which of the parties involved were blamed for its escalation. Ottosen and Øvrebø claim that Aftenposten's coverage leans towards a war journalism approach, with some elements of peace journalism, and mostly blames the Assad regime for the attacks (p. 68). The peace journalism frames were found in only 15% of the articles. In 12 of the 72 examined articles, where the word "Libya" is mentioned in the text, there is a willingness to draw historical lines to the Libyan war as at least a contributing factor to the war in Syria.
Chapter Four “Framing peace building: Discourses of United Nations radio in Burundi” by William Tayeebwa discusses how in its post-conflict, peace-building operations in Africa the United Nations has been accused of promoting the Western model of “liberal peace building” as opposed to exploring alternative approaches proposed by national actors. Tayeeba uses the “conflict-sensitive-journalism” and “peace building” concepts to analyse six selected programs broadcast from national RTNB, from October to December 2009. Tayeeba states that principles of conflict-sensitive reporting and peace journalism found in the analysed content is the kind of model of journalistic practice that should see conflict-prone countries (such as Burundi) transition to a stable nation that aspires to play a more prominent role in the peace-building architecture (p. 95). Chapter five “Women making news – conflict and post-conflict in the field” is based on the experiences of female reporters and journalists from seven countries around the world. “Kristin Skare Orgeret discusses what challenges and opportunities women journalists face when covering conflict related issues either at home or in a foreign context where gender roles may be very different from those of their home country” (p. 20). First, the author states the role of journalists in the conflict area and gives a definition of two central strands of feminism – equality feminism (or liberal feminism) which focuses on the basic similarities between men and women, and difference feminism (or cultural feminism) which accentuates the inherent differences between men and women. Female reporters interviewed in this chapter agreed that men and women cover the war and conflict differently. An awareness of cultural norms and practices is considered to be particularly important along with the need to fight the victimisation of women. 
Through two case studies, Chapter Six, "Experiences of female journalists in post-conflict Nepal" by Samiksha Koirala, explores the participation of women journalists in Nepali media, including their experience of reporting during and after the conflict. Babita Basnet felt isolated during the first few years she worked and, as the only female reporter, she was often questioned about the credibility of her reports. Maina Dhital, "being one of the few women working in Kantipur, was often subject to gossip and subtle harassment" (p. 123). Data presented in this chapter show a 100% increase of women journalists in Kathmandu-based media, from 12% in 2005 to 24% in 2014. "The case studies presented suggest that women journalists in media organisations (and female sources) are more highly valued than previously. It seems that the growing number of female journalists make women feel more comfortable with their occupation." (p. 125) In Chapter Seven, "Intercultural Indigenous Communication of the Indigenous Communities of Cauca in the Context of the Armed Conflict", Henry Caballero Fula explains and analyses the emergence of a diverse indigenous media in Colombia, answering questions of representation, debate and indigenous autonomy. Local media groups, some with their own radio stations, were formed in the 1990s, giving a voice to the community and their organisations. Caballero Fula argues that "when it comes to a position diverging from that of their organisation and/or community, a healthy aspiration for a collective indigenous communication would be to generate an internal debate to seek a solution" (p. 144).
Claiming that the Norwegian media were not capable of producing independent, quality journalism on the significance and meaning of the killings in relation to the conflict and the peace process in Colombia, Roy Krøvel states that "Global and Local Journalism and the Norwegian Collective Imagination of 'Post-Conflict' Colombia" (Chapter Eight) seeks to contribute to a better understanding of the relationship between Norwegian foreign policy and Norwegian journalism. The chapter explores how global media made sense of the assassinations of indigenous leaders in Toribio. In contrast to the indigenous media, which put the assassinations in historical and social context, media around the world used frames and narratives considered to be suitable for their audience (p. 148). "Media coverage was entirely subsumed within the dominating narratives and discourses produced by the state department and the NGOs. Only much later did a few critical voices begin to be heard." (p. 156) Krøvel concludes that education, knowledge and willingness are an important part of the production of creative and critical journalism on peace-related issues. Chapter Nine, "Improving post-conflict journalism through three dances of trauma studies" by Elsebeth Frey, explores the possibilities for interaction between post-conflict journalism and trauma studies. The study is based on qualitative in-depth interviews with two Tunisian journalists and two Norwegian journalists. Using Newman and Nelson's framework of three tensions, Frey shows how the concepts of crisis journalism, conflict-sensitive journalism and post-conflict journalism may overlap. The three tensions, or so-called dances, are based on traumatic stress studies: the dance of approach and avoidance, the dance of fragmentation and integration, and the dance of resilience and vulnerability.
Frey states that the knowledge learned from these three dances "could mean a more realistic perspective on post-conflict situations, and a better understanding of how journalism may help to strengthen resilience in people and society" (p. 184). Anne-Hege Simonsen's "Moving forward, holding on. The role of photojournalistic images in the aftermath of crisis" shows how, in post-conflict situations, photographs may work as triggers of collective as well as individual emotions, with their power depending on where in the post-conflict process their users find themselves and how far the process of negotiating the past has come (p. 20). Simonsen argues that the reading of an image is always individual and, using the example of Richard Drew's "falling man", photographed on 11 September, points out that the impact and the role of photography depend on the audience and the moment of publishing. "Photographic meaning is never fixed, but fluid and multivocal, and photographic agency is thus best studied through concrete photographs in concrete contexts." (p. 198) The book offers an insight into the role of the media in post-conflict areas, "to report on the conflict but also to build a consensus on the way out of it" (p. 16), along with a discussion of the changing conditions for the journalistic profession in post-conflict areas. "At the same time, the contributions problematise the concept of post-conflict and powerfully illustrate that the phase between war/conflict and peace is neither unidirectional nor linear, as the use of the concept sometimes seems to imply." (p. 16)

Zrinka Viduka

Jan Fredrik Hovden, Gunnar Nygren and Henrika Zilliacus-Tikkanen (eds)
BECOMING A JOURNALIST.
JOURNALISM EDUCATION IN THE NORDIC COUNTRIES
Nordicom, Göteborg, 2016, 334 pp
ISBN 978-91-87957-34-5

Students' motivation for journalism studies, their views on journalism as a profession and on themselves as future journalists have lately been the focus of scientific research. Ever-changing technology and trends have made journalism a dynamic profession, with circumstances and prospects that differ significantly from the situation just 10 years ago. All this justifies further research in terms of gathering valuable data for understanding, as well as developing and improving, not just journalism studies but the practice as well. As the editors point out, the book is "strongly rooted in the Nordic Conferences for Journalism Teachers" and the "research tradition on journalism education in the Nordic countries", which "by 2012 had become the largest survey of this kind in the world", since almost five thousand students at thirty institutions participated. This amount of data and experience should be of interest to a far-reaching (scientific) audience. The book is divided into four sections that group individual papers (chapters) by different authors. The first one, entitled A Nordic model, offers four chapters, including an introduction written by the editors Jan Fredrik Hovden, Gunnar Nygren and Henrika Zilliacus-Tikkanen. Their chapter provides an insight into the "Nordic model" of journalism education – a product of similarities between the Nordic countries in terms of history, media systems and education, as well as intense cooperation between (journalism) educators. The second paper, written by Elin Gardeström and entitled "Educating Journalists. The Who, When, How and Why of Early Journalism Programmes in the Nordic Countries", gives a comparison of journalism education systems in the Nordic countries, but also discusses the origins of this education and the way it developed through the years. The paper "'We All Think the Same'.
Internship, Craft and Conservation” written by Ida Willig is about Danish journalism education and the specificity of a relatively few formal educational institutions and the focus on journalism as a craft. The fourth paper “New Times, New Journalists? Nordic Journalism Students Entering an Age of Uncertainty” by Jan Fredrik Hovden and Rune Ottosen gives a summary of a series of surveys conducted with journalism students (almost five thousand students in Nordic countries). The second section is entitled Professional (re)orientations and consists of six papers. The first paper “Journalism Education and the Profession. Socialization, Traditions and Change” M E D IJ S K E S T U D IJ E M ED IA S TU D IE S 2 01 6 . 7 . ( 14 ) . 17 8 -1 89 185 PRIKAZI I ANOTACIJE BOOK REVIEWS by Gunnar Nygren presents the results of two surveys conducted in Poland, Sweden and Russia to compare attitudes towards professional values and integrity between students and journalists. The paper “Perfect Profession. Swedish Journalism Students, Their Teachers, and Educational Goals” written by Gunilla Hultén and Antonia Wiklund provides insight into both students (survey about their future professional roles) and teachers’ reflections (interviews about their professional attitudes). “From Politics on Print to Online Entertainment? Ideals and Aspirations of Danish Journalism Students 2005-2012” by Jannie Møller Hartley and Maria Bendix Olsen is a survey about students’ ideal workplace and roles, and “Finnish Journalism Students. Stable Professional Ideals and Growing Critique of Practice” by Henrika Zilliacus-Tikkanen, Jaana Hujanen and Maarit Jaakkola questions students’ motives for studying journalism along with their perceptions of the profession. Ulrika Andersson’s paper “The Media Use of Future Journalists and How it Changes During Journalism Education” deals with media forms students use and whether it changes as they progress in their education. 
The last paper in the second section, written by Erik Eliasson and Maarit Jaakkola, is entitled "More Mobile, More Flexible. Students' Device Ownership and Cross-Media News Consumption, and Their Pedagogical Implications" and examines students' media habits to see how educational institutions could adapt their teaching to build upon such habits. The third section, Meeting the challenges, offers six papers. The first, "The Gap. J-School Syllabus Meets the Market", written by Arne H. Krumsvik, talks about the development of journalism school syllabi in Norway. Why many women leave journalism just a few years into the job is discussed in Hege Lamark's paper "Women Train In – and Out of – Journalism". The paper "Burdens of Representation. Recruitment and Attitudes towards Journalism among Journalism Students with Ethnic Minority Backgrounds", written by Gunn Bjørnsen and Anders Graver Knudsen, deals with the problems students and young journalists with minority backgrounds face when entering the journalistic profession. Terje Skjerdal and Hans-Olav Hodøl's paper "Tackling Global Learning in Nordic Journalism Education. The Lasting Impact of a Field Trip" analyses the curricular profiles of journalism programmes in the Nordic countries to trace the footprint of global learning, and "Dialogues and Difficulties. Transnational Cooperation in Journalism Education" by Kristin Skare Orgeret looks at the possible involvement of global learning in journalism through international cooperation. The last paper in the section bears a similar title to the book itself: "Becoming Journalists. From Engaged to Balanced or from Balanced to Engaged?" Written by Roy Krøvel, it deals with the understanding of the role of journalism education. The fourth section is entitled Meeting the field and consists of four papers. Jenny Wiik's paper "Standardized News Providers or Creative Innovators?
A New Generation of Journalists Entering the Business" provides an answer to how journalism (student) interns perceive the practices of professional journalism. "Is This a Good News Story? Developing Professional Competence in the Newsroom" by Gitte Gravengaard and Lene Rimestad questions the differences (discrepancies) between the practices students learn during their education and at the beginning of their internship. The paper "Internal Practical Training as a Teaching Method for Journalism Students" by Hilde Kristin Dahlstrøm focuses on internal practice training as an essential pedagogical tool for the training of journalism students. The last paper, "Developing Journalism Skills through Informal Feedback Training" by Astrid Gynnild, discusses the use of a new feedback tool to develop journalism skills. The importance of the book lies in the breadth of the topics tackled and all the collected experiences from different Nordic countries. It certainly provides interesting reading for any journalism teacher, professional or student.

Dunja Majstorović

Viktorija Car, Miroljub Radojković and Manuela Zlateva (eds)
REQUIREMENTS FOR MODERN JOURNALISM EDUCATION – THE PERSPECTIVE OF STUDENTS IN SOUTH EAST EUROPE
Konrad Adenauer Stiftung, Sofia, 2016, 207 pp
ISBN 978-3-95721-255-9

Twenty-five years after the seismic changes brought by the fall of communism in Eastern Europe and the Western Balkans, the question of adequate media policy and journalistic standards remains highly relevant in all of the countries in the region. Quality journalistic education seems one of the key potential solutions for improving the situation and solving some of these issues.
The programmes of journalistic education at universities in these countries therefore present one of the potential ways to educate the ethical and professional journalists of the next generation – but they might also represent an obstacle to evolution if these programmes are old-fashioned, out-of-date, or focused on the wrong topics and problematic perspectives. The new book Requirements for Modern Journalism Education – The Perspective of Students in South East Europe therefore addresses a highly relevant and topical issue by examining the processes in five different countries in SEE: Albania, Bulgaria, Croatia, Romania, and Serbia. There are similar patterns from the past: in most of these countries, the education of journalists developed only after World War II, and in some cases even decades after that. During those years, education in all of these countries was under the extremely tight control of the ruling communist parties, emphasizing the role of journalists as 'social-political workers' (as in Yugoslavia) and as part of a larger propaganda system. Only after the fall of communism in the 1990s did more open systems of journalistic education develop. There were certainly important differences between the countries, with Albania and Romania representing some of the harshest examples of hard-line communist systems, which did not leave space for any sort of democracy, open discussion, watchdog function or other characteristics of critical journalism. On the other hand, journalism and journalism education in Croatia and Serbia, as former parts of Yugoslavia, were more open to critical perspectives and Western influences, due to the softer version of socialist rule. These countries thus had some similar starting points in 1990 and 1991, but also faced different historical and political circumstances which led to quicker or slower democratization, both in politics and in media and journalism.
This is also reflected in today's situation, with Croatia, Romania, and Bulgaria being members of the European Union, while Serbia and Albania are not. However, many of the issues in their media and journalism are similar. The five national reports and the final comparative chapter show many similar problems, including political capture, high levels of corruption, a lack of transparency in many aspects of society etc., resulting in media that are connected to or often intertwined with political actors, particularly ruling parties; media that are financially weak and thus more prone to political or economic (advertising) pressure; and journalists that have a weak social and financial position in society, are often threatened, or function less as watchdogs and more as PR representatives of partial interests. This specific position of journalists and the media also results in specific expectations among young students who decide to study journalism: a relatively high percentage of them are seeking fame and recognition and would like to work on television, but very few of them mention financial motives for their study and profession. A number of countries report the feminization of the profession, which can also be seen as a result of the weak financial position of today's journalists in these, but also in many other, countries. And in most of the countries, post-graduate students show a higher level of scepticism about the potential or power of their profession, confirming that older students with more practice in the media identify the weak position of journalists and their media compared to the owners, advertisers, or politicians.
The research in the five countries also shows some similar problems of today's journalistic education – for example, in all of the analysed countries, one of the key issues identified by students is a technical one: modern, up-to-date media equipment that would allow students to produce content on their own. In all of these countries, students emphasize the need for more equipment and for more practical knowledge or experience. In all of these countries, state-owned universities generally have a weaker economic position; however, some countries (Croatia, Albania) also show that private faculties or universities did not provide quality education or receive positive feedback from their students, due to a lack of staff and low quality. It thus seems that private faculties, at least in the area of journalistic education, did not reach an adequate level that would enable the high-quality education of new generations of journalists, showing that state-owned universities remain highly relevant in keeping or raising the standard and influencing the future of the countries' journalism. All five country studies use the same research framework, based on a questionnaire among undergraduate and post-graduate students, looking at their opinions of and experience with their studies. The book thus offers a significant contribution and insight into the complex picture of today's media and journalism, but particularly of future generations of journalists and what can be expected from them. It places these answers in the context of each country's history and wider social and political framework, explaining national particularities and experiences. It also shows the differences from the 'enthusiastic' 1990s, when the promise of democratic change led to extremely high and obviously quite naïve expectations about the brave new world of media and journalism in the new democracies.
Todays’ picture is much more down to earth, showing the potential but also the wider obstacles (from finances to political system and international influences, including the role of EU and international organizations and donors). The area of journalistic education thus represents the area of continuous changes and continuous need for research. And of course a research of students and their perceptions and expectations represents one part of the wider educational landscape. A further research on other issues would also contribute to the SWOT analysis of journalistic education: what are the strengths and weaknesses, what are opportunities and troubles? Semi-structured in-depth interviews with specific students might provide additional insight and provide clues to improvement. A research among the journalistic professors 188 M E D IJ S K E S T U D IJ E M ED IA S TU D IE S 2 01 6 . 7 . ( 14 ) . 17 8 -1 89 PRIKAZI I ANOTACIJE BOOK REVIEWS and educators would be another potential area of further research, showing the problems and dilemmas that educators face (some aspects are mentioned in the research, but to hear about these issues from educators themselves would enable more complex picture). And a research among the faculty managers, financiers (state, donors) would offer an insight into the management and financial aspects of journalistic education. Finally, the perspectives and experiences of media companies who later employ or offer contracts to new journalists would add another stone to the mosaic of different perceptions with journalistic education in South East Europe – a more practical aspect that is often controversial and ethically problematic (due to political or economic pressures and controversial practices or relationships), but would still help researchers (and wider society) to gain new set of results and opinions. 
The book on journalistic education in South East Europe, researching the attitudes and expectations of students, represents a significant contribution to such a wider picture. It is highly important that this sort of research (and reflection) from the region continues to be produced, as the media and journalism itself are affected by economic, political, and technological changes and disruptions, changing the role of future journalists, their identity, and particularly their expected knowledge.

Marko Milosavljević

Edgar Gómez Cruz and Asko Lehmuskallio (eds)
DIGITAL PHOTOGRAPHY AND EVERYDAY LIFE: EMPIRICAL STUDIES ON MATERIAL VISUAL PRACTICES
ECREA, Routledge, London and New York, 2016, 296 pp
ISBN 978-1-138-89981-0

The book Digital Photography and Everyday Life: Empirical studies on material visual practices does exactly what its title asserts. It investigates practices by people around the world that revolve around the use of photography in the digital age. Comprising three parts – "Variance in use in everyday photography", "Cameras, connectivity and transformed localities" and the final section "Camera as the extension of the photographer" – this interdisciplinary book has the visual social sciences as the common denominator for all of its chapters. As the editors Edgar Gómez Cruz and Asko Lehmuskallio write in the introduction, the core of this volume is in the "pictorial practices" that ground the use of digital photography in everyday life. It is no novelty that sight is our most important sense (Jenks, 2002)2. What is new, however, is the massive proliferation of images in our world, whether on posters, billboards and ads or in the digital realm – on various websites, applications and social media especially. As Paolo Favero states in this book, every two minutes our contemporary world produces more images than the entire 19th century accumulated throughout its course (p. 209)!
Possessing a camera-equipped phone is the norm nowadays and, coincidentally, the onset of 2016 had the iconic Kodak launch its first primarily camera-oriented smartphone, dubbed the Kodak Ektra. Besides social media such as Facebook and the primarily visual Instagram, special websites have constituted online communities that revolve around photography: Flickr, Tumblr and National Geographic's Your Shot, to name the most popular ones. The BBC, the British public service media, has also converged its services and now offers Visual Radio Production and short Instagram news videos. In such a mediascape, where the visual has asserted itself quietly but abundantly as the norm, this book is a welcome collection of texts that begins to uncover the usage and meaning of photographs in our daily lives. Ethnography is an often-mentioned method here, as case studies on pictorial practices from Ireland, Austria, Tanzania, Portugal and the United Kingdom are presented by the authors, each with a different take, a different niche on the usage of photography. The chapters of this book analyse the shaping of long-distance relationships by communicating visually on messaging platforms and applications, the notions of space and locality related to photography, the selfie culture that has taken the world by storm, as well as the political and activist dimensions of picture taking/making but also of the mere presence of the camera itself.

2 Jenks, Chris (2002) The Centrality of the Eye in Western Culture: An Introduction, pp. 11-47 in Jenks, Chris (ed.) Visual Culture. Zagreb: Naklada Jesenski i Turk, Hrvatsko sociološko društvo.
The notion of age is touched upon as well: while it would seem common sense to assume that the elderly cannot quite keep up with the technological pace (the 'digital natives' versus the 'digital immigrants'), research in this book uncovers that this is not exactly the case. It also investigates how digital photography communicates a myriad of topics, from raising awareness of mental health issues to selling clothes on Facebook. It is quite common (and taken for granted) that people keep pictures in family photo albums, hang them on the wall and post them on Instagram. Besides the academic community and media studies and visual culture scholars, this book is the right one for every person who has ever pondered why people take pictures of their meals served in restaurants and post them online. Scientific, yet cleverly written and straightforward, it arrives at just the right time, as interest in the visual that surrounds us has never been higher. A wonderfully exciting read. Emil Čančar
work_63goykx5obetnmnievg35mgww4 ---- FROM PERSPECTIVE DRAWING TO LOW COST PHOTOGRAMMETRY: APPLICATION IN ARCHITECTURAL STUDIES M. Zammel1 1 ENAU, National School of Architecture and Urban Planning, Rue El Quods, Sidi Bou Said, Tunisia. meriem.zammel@gmail.com Commission II KEY WORDS: Descriptive geometry, Perspective drawing, Low cost photogrammetry, 3D. ABSTRACT: Nowadays, digital tools are booming.
The evolution of the technologies used in archaeology and in architecture is such that we are moving more and more towards low-cost, time-saving methods. Indeed, today we can use free photogrammetry software to make 3D surveys of architectural buildings; fifteen years ago this required the 3D laser scanner, a very expensive technique. As part of the reform of the second-year course in descriptive geometry and perspective drawing at the National School of Architecture and Urban Planning of Tunisia, an experiment was conducted to introduce low-cost photogrammetry using the free software 123D Catch by Autodesk, 3DF Zephyr, or Scann 3D on the mobile phone. The objective of this course is to have students experiment with different representation techniques, such as photogrammetry and perspective drawing, using a mobile phone, a digital camera, or a pen. The object of the representation is Tunisian cultural heritage, such as the archaeological site of Carthage. 1. AIM OF THE COURSE 1.1 What is descriptive geometry? Descriptive geometry was invented by the French mathematician and engineer Gaspard Monge (1746-1818), who developed its principles. It is the study of the representation of space on a plane: its aim is to represent spatial figures through plane projections. Descriptive geometry is useful for architects and engineers in order to represent space and architectural buildings in plan and elevation. This course develops imagination in students' minds; indeed, the precision of drawings made with pen and rule makes them practise accurate representation. Descriptive geometry has remained part of architectural studies since the 19th century, and one may wonder whether it is still useful given the appearance of digital tools and 3D. We can say that descriptive geometry develops the imagination, which is useful in front of a screen when building 3D models. Moreover, the rigour it requires is important for architectural drawings.
Figure 1 - One point in plan projection. A new representation technique appears when a new method is found to represent a point in space (Saint Aubin, 1992). The course programme starts with the representation of a point by its coordinates (x, y, z), then the representation of a line, the intersection of two straight lines, the intersection of two planes, the intersection of volumes, and perspective drawing (Aubert, 1996). This course was a little abstract, so we introduced digital 3D tools and photogrammetry in order to apply it to architectural buildings, especially cultural heritage with its history and knowledge. 1.2 Perspective drawing The perspective drawing is made from a single observer with two vanishing points. This perspective allows us to visualise the monument in 3D. Using this representation method is important for understanding the fundamentals of 3D. Figure 2 - Method of perspective drawing. The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Volume XLII-2/W17, 2019. 6th International Workshop LowCost 3D – Sensors, Algorithms, Applications, 2–3 December 2019, Strasbourg, France. This contribution has been peer-reviewed. https://doi.org/10.5194/isprs-archives-XLII-2-W17-427-2019 | © Authors 2019. CC BY 4.0 License. 1.3 Multiple representation The representation of each architectural element has been obtained from multiple data sources. The representation of architecture is no longer merely a two-dimensional drawing; it is now possible to represent and describe each part of it using 3D modelling. Image-based modelling of the principal elements of the building was carried out in order to represent complex geometric surfaces, textures, and materials. Multi-representation (De Luca, 2006) aims to use different kinds of representation in order to show the final result from different data sources. In this exercise, students use low-cost photogrammetry, perspective drawing with a pen, and manual survey. 2.
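The two classical projections of descriptive geometry (plan and elevation) and the central projection behind perspective drawing can be sketched numerically. The following is a minimal illustration, not part of the course material: the function names and the example point are assumptions for the sake of the sketch, and a simple one-point central projection stands in for the course's graphical two-point construction.

```python
# Minimal sketch (hypothetical helper names): projecting a point P = (x, y, z)
# as descriptive geometry does, plus a simple central (perspective) projection.

def plan_projection(p):
    """Horizontal (top-view) projection onto the ground plane: drop the height z."""
    x, y, z = p
    return (x, y)

def elevation_projection(p):
    """Vertical (front-view) projection: drop the depth y."""
    x, y, z = p
    return (x, z)

def perspective_projection(p, d):
    """Central projection onto the picture plane y = d, with the observer's
    eye at the origin looking along +y (similar triangles: scale by d/y)."""
    x, y, z = p
    if y <= 0:
        raise ValueError("point must be in front of the observer")
    return (d * x / y, d * z / y)

corner = (2.0, 4.0, 3.0)                      # a building corner, in metres
print(plan_projection(corner))                # (2.0, 4.0)
print(elevation_projection(corner))           # (2.0, 3.0)
print(perspective_projection(corner, d=1.0))  # (0.5, 0.75)
```

The plan and elevation together encode the full (x, y, z) of the point, which is exactly why Monge's two projections suffice to reconstruct space.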
INNOVATION IN THE COURSE: THE RESULTS 2.1 Site presentation The Carthage site has been on the UNESCO World Heritage List since 1979. The experiment covered the site museum of Carthage on Byrsa Hill and an aristocratic Roman villa, the villa of the aviary. The site museum of Carthage The Carthage National Museum is a national museum in Byrsa, Tunisia. Along with the Bardo National Museum, it is one of the two main local archaeological museums in the region. The edifice sits atop Byrsa Hill, in the heart of the city of Carthage. Founded in 1875, it houses many archaeological items from the Punic era and other periods. The Carthage National Museum is located near the Cathedral of Saint-Louis of Carthage. It allows visitors to appreciate the magnitude of the city during the Punic and Roman eras. Some of the best pieces found in excavations are limestone and marble carvings depicting animals, plants and even human figures. Of special note is a marble sarcophagus of a priest and priestess from the 3rd century BC, discovered in the necropolis of Carthage. The museum also has a noted collection of masks and jewellery in cast glass, Roman mosaics including the famous "Lady of Carthage", and a vast collection of Roman amphorae. It also contains numerous local items from the period of the Byzantine Empire, as well as objects of ivory. The villa of the aviary The villa of the aviary is located in the park of the Roman villas in Carthage. This villa is the main element of the park, thanks to the quality of the restoration carried out in 1960 (Ennabli, Slim, 1993). The name of the villa comes from the aviary mosaic marked by the presence of birds among leaves (Picard, 1951), which occupies the garden in the middle of the viridarium, the heart of a courtyard surrounded by a portico of pink marble columns.
Figure 3 - Picture of the villa of the aviary. Figure 4 - Mosaic of the villa of the aviary. 2.2 The manual drawing and perspective Using perspective methods, students can also use colors in order to represent texture and materials. On the basis of the manual architectural survey, a 2D representation (plans, elevations) was produced before going on with the 3D modelling of each architectural element. In order to experiment with representation techniques based on the manual survey, as well as 3D modelling and image-based modelling, we chose statues from the villa of the aviary as well as small architectural details. Figure 5 - Perspective drawing with colors of the villa of the aviary.
2.3 Teaching low cost photogrammetry Photogrammetric restitution allows a set of coordinates expressed in model space to be extracted from the pictures (Grussenmeyer et al., 2001). Students are invited not only to make perspective drawings with a pen but also to do low-cost photogrammetry using free software. Whereas a perspective drawing is made from a single observer, photomodelling uses a succession of views: digital photographs taken all around the object. Low-cost free software such as 123D Catch by Autodesk, 3DF Zephyr, and Scann 3D, used for automatic photomodelling, can render the 3D scene, recover the camera positions, and produce 3D models that are accurate and true to reality. The cameras are calibrated in order to place the architectural object in the 3D scene. A similar study, entitled "Descriptive geometry meets computer vision" (Stachel, 2006), explains the link between descriptive geometry and image-based modelling.
What is innovative in this paper is that we use free, low-cost software with automatic photogrammetry. The methodology developed in the course begins with taking digital photographs all around the objects with a mobile phone in the free software Scann 3D; the software shows red and green points on the screen when the images are calibrated and in the right position, and the 3D model then appears on the phone. The well-calibrated photographs can then be transferred from the mobile phone to a laptop and processed with the free software 3DF Zephyr in order to obtain 3D models, point-cloud models, and videos. Figure 6 - Photomodelling of a statue in the site museum of Carthage using Scann 3D. Figure 7 - Photomodelling of capitals in the site museum of Carthage using 3DF Zephyr. Figure 8 - Photomodelling of statues of the villa of the aviary using 123D Catch by Autodesk. Figure 9 - Photomodelling of a house in the site museum of Carthage using 3DF Zephyr. 3. PERSPECTIVES 3.1 Image-based modelling of the Tunisian cultural heritage In this experiment our aim is to represent the whole of Tunisian cultural heritage using low-cost photogrammetry. Tunisia is rich in cultural heritage, such as archaeological sites and medinas. We have many students each year, so we can change the experimental site and work on different sites. Our aim is to build internet archives of Tunisian cultural heritage sites. 3.2 Internet archives Our purpose is then to share these photomodels of Tunisian cultural heritage on Sketchfab, which publishes photomodels on the internet so that people can view the objects.
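The geometric core of such photomodelling tools — recovering a 3D point from its images in two calibrated views — can be sketched as a linear (DLT) triangulation. This is a minimal, self-contained illustration under synthetic camera matrices, not the internal algorithm of Scann 3D, 3DF Zephyr, or 123D Catch; the intrinsics, baseline, and point are all assumed for the example.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two calibrated views.
    P1, P2 are 3x4 projection matrices; x1, x2 are the pixel coordinates
    (u, v) of the same point in each image."""
    # Each view contributes two linear constraints A X = 0 on the
    # homogeneous point X, derived from u = (P X)_0 / (P X)_2, etc.
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # Least-squares solution: right singular vector of the smallest
    # singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

# Synthetic example: two cameras one metre apart observe the point (1, 2, 10).
K = np.diag([800.0, 800.0, 1.0])                                    # intrinsics
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])                   # camera 1
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])   # camera 2

X_true = np.array([1.0, 2.0, 10.0, 1.0])
x1 = P1 @ X_true; x1 = x1[:2] / x1[2]
x2 = P2 @ X_true; x2 = x2[:2] / x2[2]

print(triangulate(P1, P2, x1, x2))  # ≈ [1. 2. 10.]
```

With noise-free synthetic projections the point is recovered exactly; real pipelines add feature matching, bundle adjustment, and dense reconstruction on top of this step.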
This low-cost free software lets people share photomodels of objects all over the world, which can be useful for information technology. 3.3 Memory Working on cultural heritage aims to bring knowledge about history and culture and to share the common memory of people around the world. Architectural and archaeological buildings are not limited to their representation; they also carry knowledge (De Luca, 2006). This knowledge is useful for new generations. CONCLUSION There is not one but multiple representations. With the evolution of representation techniques in the domain of architecture, other innovative methods will appear. This experiment with low-cost photogrammetry was very interesting for the students, who were very happy to use their mobile phones, the computer, and free software. With such digital tools we can also obtain 3D models that are accurate and true to reality. In future versions of the course we will introduce lasergrammetry once it becomes low cost, because it would capture the whole monument more precisely. Building internet archives of cultural heritage in Tunisia and around the world is a worthwhile challenge for documenting the past. REFERENCES Aubert, J., 1996. Dessin d'Architecture à partir de la géométrie descriptive. Editions de la Villette, Paris, France, 163 p. Cardinale, T., Valva, R., and Lucarelli, M., 2013. Advanced representation technologies applied to the temple of Neptune, the sphinx and the metope in the archaeological park of Paestum. Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci., XL-5/W1, 35-41, https://doi.org/10.5194/isprsarchives-XL-5-W1-35-2013. Casu, P. and Pisu, C., 2013. Photo modelling and cloud computing application in the survey of late gothic architectural elements.
International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Volume XL-5/W1, 2013. 3D-ARCH 2013 - 3D Virtual Reconstruction and Visualization of Complex Architectures, 25-26 February 2013, Trento, Italy. De Domenico, F., 2006. De l'acquisition des données spatiales à la représentation architecturale : le cas des vestiges du théâtre antique d'Arles. Journal MIA, Vol. 0, n°1. De Luca, L., 2002. Relevé et représentation de l'architecture : le capitole de Dougga. Rapport de mission Dougga, UMR 694 MAP. De Luca, L., 2006. Structuration sémantique et multi-représentation en architecture. Virtual Retrospect, Biarritz. De Luca, L., 2009. La photomodélisation architecturale : relevé, modélisation et représentation d'édifices à partir de photographies. Eyrolles, 264 p. Grussenmeyer, P., Hanke, K., and Streilein, A., 2001. Photogrammétrie architecturale. Chapitre du livre Photogrammétrie numérique, Editions Lavoisier-Hermès. Saint Aubin, J., 1992. Le relevé et la représentation de l'architecture. Paris : Association Etudes, Loisirs et Patrimoine, Documents et méthodes. Scann 3D, Development Team, 2017. Open source software. Sketchfab, Development Team, 2017. Open source software. Stachel, H., 2006. Descriptive geometry meets computer vision - the geometry of multiple images. International Conference on Geometry and Graphics, Salvador. Zammel, M., 2014. Apport des outils numériques dans l'étude des théâtres de l'Afrique romaine. Actes du colloque Patrimoine et Horizons, Gammarth, Tunisia. Zammel, M., 2015. Photo modeling and cloud processing: application in the survey of the Roman theatre of Uthina (Tunisia) architectural elements. CAA conference (Computer Applications and Quantitative Methods in Archaeology), 29 March - 3 April. Zammel, M., 2015. Outils d'aide à l'étude architecturale des théâtres romains d'Afrique : approche comparative basée sur les nouvelles technologies de l'information et de la communication.
Sarrebruck, Presses académiques francophones. 3DF Zephyr, Development Team, 2017. Open source software.

work_645yod4irrg7pnzqoo35vcrkha ---- 200407PPA_Bassett.qxd RESTORING TETRACYCLINE-STAINED TEETH WITH A CONSERVATIVE PREPARATION FOR PORCELAIN VENEERS: CASE PRESENTATION Joyce Bassett, DDS* Brad Patrick, BSc† Pract Proced Aesthet Dent 2004;16(7):481-486 Tetracycline exposure in utero and in early childhood often results in intrinsic tooth staining that varies in severity based upon the timing, duration, and form of tetracycline administered. Traditionally, dental aesthetics compromised by tetracycline staining have been restored with modalities requiring aggressive tooth preparation. In this case involving a patient with extremely severe staining of healthy and aesthetically shaped dentition, a conservative tooth preparation strategy and porcelain veneers were utilized to preserve tooth shape and arch form while restoring natural color. Learning Objectives: This case report presents a minimally invasive approach for the restoration of tetracycline-stained teeth. Upon reading this article, the reader should: • Understand the requirements for material selection when restoring severely discolored dentition. • Recognize the minimally invasive preparation technique that could be used for optimal results. Key Words: tetracycline, minimally invasive, porcelain, veneers * Private practice, Scottsdale, Arizona. †Owner, Patrick Dental Studio, Laguna Beach, California.
Joyce Bassett, DDS, 4921 East Bell Road, Suite 206, Scottsdale, AZ 85254 Tel: 602-867-2888 • Fax: 602-867-2878 • E-mail: drmouthy@aol.com CONTINUING EDUCATION Tetracyclines are a group of broad-spectrum antibiotics originally found in Streptomyces bacteria and used in treating many common infections. When administered after the second trimester of pregnancy through the age of 12 years (the period during which permanent teeth are still forming), the antibiotic is deposited within these forming teeth and intrinsic staining may result.1-5 Severity of discoloration depends upon the type of tetracycline administered, the dosage, and the duration of exposure to the drug. Regardless of severity, the result is an unaesthetic appearance of the dentition, although the health of the teeth is not compromised. Tetracycline Staining The period of dental formation during which teeth are most vulnerable to tetracycline staining is from the fourth month in utero through the fifth month postpartum.6 In addition to developing fetuses and children under age 13, infants of breast-feeding mothers to whom the drug is administered can be affected.3,7 When the drug is administered in courses, discoloration is generalized and band-like; extended use results in a more homogeneous appearance. Different tetracyclines result in different color effects (eg, a slate grey color results from chlortetracycline exposure, whereas a creamy discoloration is common with exposure to oxytetracycline).8,9 Color tends to become browner with sun (ultraviolet) exposure, with anterior teeth more susceptible to light-induced color changes than posterior teeth.
Routine vital bleaching procedures cannot satisfactorily remove dark tetracycline staining.10 When staining is limited to the gingival third of the tooth and the patient has a lower smile line, cosmetic treatment may not be indicated because the lip will cover the discolored band. When banding exists in the incisal or middle third and/or the patient has a high smile line, however, the resulting poor aesthetics are most effectively addressed with restorative procedures that mask the dark intrinsic color. Traditionally, these measures have included full-coverage crowns, composite bonding, and porcelain veneers following aggressive preparation of the tooth structure. The results often cannot simulate natural tooth coloration because the underlying darkness (eg, from dentin, metal, or opaquers) can affect the translucent effect of the porcelains.10 Figure 1. Severe tetracycline staining is evident throughout the aesthetic zone in both the maxilla and mandible. Figure 2. A caliper is used to ensure 1.5 mm of incisal reduction. Figure 3. Depth cutters are used as a guide for initial facial reduction; a stump shade is taken for communication with the ceramist regarding gingival color. Figure 4. Gingival margins are prepared with a minimal chamfer line placed in enamel. An alternative aggressive method of treatment is elective endodontic therapy with internal bleaching. Though the results are dramatic, the purposeful removal of healthy pulps is considered too radical despite the cosmetic improvement.11,12 The following case presentation demonstrates the restorative protocol used for restoration of severely tetracycline-stained teeth using a minimally invasive approach for delivery of porcelain veneers. Case Presentation A 43-year-old male patient presented with severe staining of all the teeth in the aesthetic zone (Figure 1).
The staining was caused by exposure to tetracycline for the treatment of severe ear infections from birth until four months of age.6 The patient also exhibited a high "dental IQ" developed over a long and fruitless history of consultations in search of an acceptable restorative approach. Despite the extreme discoloration of his teeth, the patient had rejected prior treatment plans in order to preserve the existing tooth structures. The patient recognized that the aggressive preparation required for conventional restorative options would remove large amounts of tooth structure and might deliver unnatural aesthetics. By the time he arrived in the author's practice, the patient had delayed treatment in order to secure a minimally invasive technique that delivered natural aesthetics. Because of his previous rejection of treatment, the patient's anterior teeth had never been treated prosthetically and presented in virgin condition, making a conservative preparation design possible and desirable. Discoloration was pronounced in the middle third of all the teeth and, to a lesser degree, in the incisal third. The shade of the gingival third was nearly ideal in the maxilla, but stained in the mandible. The patient's resolve regarding reduction of tooth structure exerted a decisive influence on the treatment planning, diagnostics, material selection, tooth preparation, provisionalization, and final restoration of the case. It also necessitated an exceptionally thorough and detailed collaboration between the clinician and the ceramist that included a trial of alternative restorative materials. Figure 8. Note the arch form discrepancy and the saturation of color into the gingival third of the mandible. Figure 5. Utilizing a football-shaped diamond, discolored tetracycline banding is removed from the middle third of the tooth. Figure 6.
A finishing diamond (8868-016, Brasseler, Savannah, GA) is utilized to smooth all transition areas and refine all margins. Figure 7. The Luxatemp B1 (Zenith DMG, Englewood, NJ) maxillary provisional is viewed against the severely stained mandibular teeth. The patient's goal of preserving existing natural tooth shape, maxillary arch form, and most of the mandibular arch form necessitated meticulous collaboration between the clinician and the ceramist to develop a treatment plan that was both clinically sound and technically viable. Mounted casts in centric relation with face-bow, occlusal evaluation, and complete digital photography conforming to AACD protocols were prepared for communication with the laboratory and the patient. The clinician and ceramist conferred verbally and electronically (ie, e-mail correspondence, e-mail transmittal of digital photography), and five pretreatment consultations were held with the patient. The ceramist often participated in patient conferences via telephone conferencing. It was agreed that because of the desirable shape and orientation of the patient's natural dentition, a diagnostic waxup was not necessary. To determine whether the discoloration could be masked with minimal tooth preparation, a trial veneer was placed on a single tooth outside the aesthetic zone. The #12(24) premolar was reduced by approximately 0.7 mm to expose negative color in the dentin, and a shade match was obtained. Blanks of three different restorative materials (A0++ Authentic, Microstar, Lawrenceville, GA; Empress II ingot 50, Vivadent, Amherst, NY; and d.Sign Brilliant Dentin White, Ivoclar, Amherst, NY) were tried on the prepared tooth.
In order to identify the material that would achieve complete masking with the least amount of porcelain and the most conservative tooth preparation, each material was successively thinned by the ceramist at chairside and tried in by the dentist until discoloration from the prepared tooth became evident. Complete masking was attained with 0.6-mm thickness of a pressed ceramic material (ie, A0++ Authentic, Microstar, Lawrenceville, GA). The preparation requirements for use of the material conformed to the patient's expectations, and the treatment was accepted. Preparation Preparation requirements differed somewhat in the two arches because discoloration was more severe in the middle and cervical thirds of the mandibular teeth and because the patient desired a slight correction of the mandibular arch form. In the maxilla, the incisal edges were reduced by 1.5 mm (Figure 2) to create sufficient restorative space to achieve lifelike translucency. Depth cutters were used for the facial reductions (Figure 3), and interproximal contacts were maintained in order to preserve tooth width, shape, and contour. Negative space in the buccal corridor was maintained in conformance with the patient's wishes. The natural color was nearly ideal in the cervical third, and a traditional feldspathic veneer preparation was utilized. Figure 9. The mandibular preparations remove dark banding and correct the misaligned arch form without opening contacts. Figure 10. Try-in of all ceramic units finds no show-through of the middle band except in teeth #7(12) through #10(22), where reduction was insufficient in the middle facial preparation. Figure 11. In the laboratory, the ceramist defers fabrication of final restorations for teeth #7 through #10 because of concern that preparations might not be sufficient for complete masking of the tetracycline staining.
Approximately 0.5 mm of enamel was removed, and a definitive chamfer line was placed in enamel (Figure 4).13 In the middle third of the teeth, a red football-shaped finishing diamond (368EF-023, Brasseler, Savannah, GA; 1923F, Microcopy, Kennesaw, GA) was utilized to remove the 0.6 mm of discolored tooth structure necessary to attain complete masking with the pressed ceramic restorative material (Figure 5). The more aggressive preparation in the middle third was necessitated by the degree of staining in this area. All transition areas were smoothed and margins refined utilizing a finishing diamond (8868-016, Brasseler, Savannah, GA) (Figure 6). Maxillary full-arch impressions (eg, Honigum, Zenith DMG, Englewood, NJ; Take 1, Kerr/Sybron, Orange, CA) and maxillary provisionals (Luxatemp B1, Zenith DMG, Englewood, NJ) were fabricated utilizing a silicone matrix. In the mandible, tetracycline staining was more severe in the middle and cervical thirds of the anterior teeth (Figure 7). The same principles of preparation design used in the maxillary arch were applied to the mandibular teeth; however, a more extensive middle- and cervical-third reduction was prepared. Additionally, the patient desired a slight realignment of the arch form from teeth #23(32) through #26(42) (Figure 8), where tooth #24(31) was in facial version to #25. To correct the alignment without changing tooth width or form, the facial incisal third of #24(31) was removed and brought to the lingual, and the lingual incisal third of #25 was brought to the facial. The distal incisal corner of tooth #26 was more modestly brought to the lingual to complete the realignment (Figure 9). Full-arch impressions (Honigum, Zenith DMG, Englewood, NJ) of the mandibular teeth and a face-bow (Stators, Ivoclar, Amherst, NY) were taken. Registration bites were obtained with and without a stick bite lined up with the patient's eyes.
To aid in communication with the ceramist regarding final shade selection for the veneers, numerous digital photographs were taken with shade tabs adjacent to the prepared stumps. Provisional restorations (eg, Luxatemp B1, Zenith DMG, Englewood, NJ; BioTemps, Glidewell Laboratories, Newport Beach, CA) were bonded (Tempbond Clear, Kerr, Orange, CA; TempoCem, Zenith DMG, Englewood, NJ), and further digital photographs were taken with the provisionals in place. Prior to cementation, and after the veneers had been tried in for proximal continuity and marginal fit, the veneers were color evaluated with water-soluble try-in gels (3M Try-In Gel, 3M ESPE, St. Paul, MN) (Figure 10). Due to insufficient reduction in the middle third of teeth #7(12) through #10(22), the pressed ceramic did not block out the dark zone (Figure 11). Utilizing a silicone index (Sil-tech, Ivoclar, Amherst, NY), the clinician removed sufficient additional tooth structure (0.5 mm) to achieve complete masking (Figure 12). The four mandibular incisors were temporized; the remaining final restorations were cemented with laser-cured 3M Luting Cement (3M ESPE, St. Paul, MN) (Figure 13). One week later, the final four restorations were delivered. The final result exhibited excellent incisal translucency and characterization, round incisal edges, youthful incisal embrasures, and natural color (Figures 14 and 15). Figure 12. The final preparation of teeth #7 through #10 achieves a sufficient amount of reduction to block color. Figure 13. The final result achieves the goal of eliminating the tetracycline staining while preserving the aesthetic shape, form, and alignment of the patient's natural teeth. Figure 14. The final result exhibits excellent incisal translucency and youthful incisal embrasures.
Conclusion

The aforementioned case demonstrated that even extreme tetracycline staining can be aesthetically and conservatively treated, provided that the following key principles of treatment are observed:

• Thorough patient consultation. The patient's determination to preserve his natural smile in this case established the parameters for planning the case. Understanding and respecting patient needs and desires are fundamental.
• Careful material selection. The technical capabilities of restorative materials differ. Testing alternative restorative products verified the feasibility of the patient's ideals and allowed selection of the most appropriate material.
• Meticulous laboratory collaboration. The technical capabilities and requirements of the restorative materials determined preparation design. Observing these was possible only with complete involvement of the ceramist.
• Elimination of a dual interface. Tetracycline-stained teeth have traditionally been etched and bonded with resin in the dark-banded zone before veneering, a technique that presents technical difficulties (eg, the resin composite sometimes separates from the tooth when the provisional restoration is removed) and does not predictably mask color. It also requires more severe tooth preparation in order to create space for two restorative materials. Utilizing a single interface simplified the restorative process, increased operator control, and improved predictability.

When treatment was completed, the patient was satisfied and reassured that postponing treatment during his long and frustrating quest for a conservative treatment to preserve his smile was justified by the final result.

References
1. Wallman IS, Hilton HB. Teeth pigmented by tetracycline. Lancet 1962;1:827-829.
2. Weymann J, Porteous JR. Discolouration of the teeth probably due to administration of tetracyclines: A preliminary report. Brit Dent J 1962;113:51-54.
3.
British National Formulary. London, UK: BMJ Books; March 1999;37:254-256.
4. Toaff R, Ravid R. Tetracyclines and the teeth. Lancet 1966;2(7457):281-282.
5. Cohlan SQ. Tetracycline staining of teeth. Teratology 1977;15(1):127-129.
6. Watts A, Addy M. Tooth discolouration and staining: A review of the literature. Brit Dent J 2001;190(6):309-316.
7. Genot MT, Golan HP, Porter PJ, Kass EH. Effect of administration of tetracycline in pregnancy on the primary dentition of the offspring. J Oral Med 1970;25(3):75-79.
8. Moffitt JM, Cooley RO, Olsen NH, Hefferren JJ. Prediction of tetracycline-induced tooth discoloration. J Am Dent Assoc 1974;88(3):547-552.
9. van der Bijl P, Pitigoi-Aron G. Tetracyclines and calcified tissues. Ann Dent 1995;54(1-2):69-72.
10. Cavanaugh RR, Croll TP. Bonded porcelain veneer masking of dark tetracycline dentinal stains. Pract Periodont Aesthet Dent 1994;6(1):71-79.
11. Anitua E, Zabalegui B, Gil J, Gascon F. Internal bleaching of severe tetracycline discolorations: Four-year clinical evaluation. Quint Int 1990;21(10):783-788.
12. Aldecoa EA, Mayordomo FG. Modified internal bleaching of severe tetracycline discoloration: A 6-year clinical evaluation. Quint Int 1992;23(2):83-89.
13. Nixon R. Building natural tooth color into porcelain laminate veneers. Pract Periodont Aesthet Dent 1990;2(4):22-26.

Practical Procedures & Aesthetic Dentistry, Vol. 16, No. 7

1. At what stage of life are tetracycline stains developed?
a. Throughout the adolescent years only.
b. After the second trimester of pregnancy through the age of 12 years.
c. In utero only.
d. None of the above.

2. Tetracycline staining is initiated by:
a. The administration of broad-spectrum antibiotics originally found in Streptomyces.
b. The use of tetracyclines used in the treatment of many common infections.
c. Administration of tetracycline antibiotics after the second trimester of pregnancy through the age of 12 years.
d. All of the above.

3. According to the aforementioned article, a variety of porcelain materials were selected based on an initial try-in to ensure proper:
a. Biocompatibility of the porcelain to the tooth structures.
b. Concealment of the dark, tetracycline-stained tooth structures beneath the proposed veneers.
c. Shade match with the patient's natural tooth color. Care was taken to place dark striations throughout the proposed veneers for an accurate match.
d. All of the above.

4. Teeth are most vulnerable to tetracycline staining:
a. From the fourth month in utero through the fifth month postpartum.
b. From the fourth month in utero through birth.
c. Only during breast-feeding.
d. Only in utero.

5. Routine vital bleaching procedures are sufficient in removing tetracycline staining. Anterior teeth are more susceptible to light-induced color changes than posterior teeth.
a. The first statement is true, the second statement is false.
b. The first statement is false, the second statement is true.
c. Both statements are true.
d. Both statements are false.

6. Which of the following can be affected by tetracycline staining:
a. Developing fetuses.
b. Children under age 13.
c. Infants of breast-feeding mothers.
d. All of the above.

7. In reference to color effects, which of the following is false?
a. A slate grey color results from chlortetracycline exposure.
b. A creamy discoloration is common with exposure to oxytetracycline.
c. Color depends on the age when a person is exposed to tetracycline.
d. Color tends to become browner with sun (ultraviolet) exposure.

8. Following aggressive preparation of the tooth structure, the restoration of tetracycline-stained teeth has included:
a. Full-coverage crowns.
b. Composite bonding.
c. Porcelain veneers.
d. All of the above.

9.
Conservative results cannot generally simulate natural tooth coloration because the underlying darkness can affect the restoration's translucency. Underlying darkness is defined as all of the following EXCEPT:
a. Dentin.
b. Gingival pathology.
c. Metal.
d. Opaquers.

10. An alternative aggressive method of treatment is elective endodontic therapy with internal bleaching. Despite the dramatic results, many clinicians prefer more conservative treatment due to:
a. Time and cost of treatment.
b. The removal of healthy pulps.
c. The pain the patient endures during treatment.
d. Temporary results.

To submit your CE Exercise answers, please use the answer sheet found within the CE Editorial Section of this issue and complete as follows: 1) Identify the article; 2) Place an X in the appropriate box for each question of each exercise; 3) Clip the answer sheet from the page and mail it to the CE Department at Montage Media Corporation. For further instructions, please refer to the CE Editorial Section. The 10 multiple-choice questions for this Continuing Education (CE) exercise are based on the article: "Restoring tetracycline-stained teeth with a conservative preparation for porcelain veneers: Case presentation," by Joyce Bassett, DDS, and Brad Patrick, BSc. This article is on Pages 481-486. CONTINUING EDUCATION (CE) EXERCISE NO. 18

Histopathology and surgical anatomy of patients with primary hyperparathyroidism and calcium phosphate stones

Andrew E. Evan1, James E. Lingeman2, Fredric L. Coe3, Nicole L. Miller4, Sharon B. Bledsoe1, Andre J. Sommer5, James C. Williams1, Youzhi Shao1 and Elaine M.
Worcester3

1Department of Anatomy and Cell Biology, Indiana University School of Medicine, Indianapolis, Indiana, USA; 2International Kidney Stone Institute, Methodist Clarian Hospital, Indianapolis, Indiana, USA; 3Department of Medicine, University of Chicago, Chicago, Illinois, USA; 4Department of Urology, Vanderbilt University, Nashville, Tennessee, USA; and 5Department of Chemistry and Biochemistry, Miami University, Oxford, Ohio, USA

Using a combination of intra-operative digital photography and micro-biopsy, we measured renal cortical and papillary changes in five patients with primary hyperparathyroidism and abundant calcium phosphate kidney stones. Major tissue changes were variable papillary flattening and retraction, dilation of the ducts of Bellini, and plugging of the inner medullary collecting ducts and ducts of Bellini with apatite deposits. Some of the papillae in two of the patients contained plentiful large interstitial deposits of Randall's plaque, and where the deposits were most plentiful we found overgrowth of attached stones. Hence, this disease combines features previously described in brushite stone formers – dilation, plugging of ducts, and papillary deformity – with the interstitial plaque and stone overgrowth characteristic of routine idiopathic calcium oxalate stone formers, suggesting that these two patterns can coexist in a single patient.
Kidney International (2008) 74, 223–229; doi:10.1038/ki.2008.161; published online 30 April 2008

KEYWORDS: ultrastructure; infrared analysis; Randall's plaque; renal biopsies; collecting duct plugging; attached stones

Using a combination of intra-operative digital photography and micro-biopsy, we have described gross and histopathological renal papillary and cortical changes in patients with brushite1 and apatite stones and distal renal tubular acidosis (dRTA),2 and in patients with calcium oxalate (CaOx) stones from obesity bypass procedures,3 cystinuria,4 and idiopathic CaOx stone formation (ICSF).3,5 In all but ICSF, we found (a) a mixture of inner medullary collecting duct (IMCD) and duct of Bellini (BD) plugging with apatite or, in the case of cystinuria, IMCD apatite and BD cystine crystals; (b) variable papillary flattening and atrophy; (c) interstitial fibrosis; and (d) epithelial cell loss. In ICSF, by contrast, we found only interstitial apatite deposits, the so-called Randall's plaque,6 with otherwise normal renal tissue. Of crucial importance, stones appear to grow on the papillary surfaces of ICSF patients at sites of sub-urothelial plaque,5,6 whereas such overgrowth was not found by us in any of the other aforementioned conditions.

Primary hyperparathyroidism (HPT) with stones spans a spectrum from CaOx to calcium phosphate (CaP) stones;7,8 we would expect patients with HPT who form CaP stones to have a surgical anatomy and renal histopathology similar to those with brushite and apatite stones. However, to date this prediction has not been tested. We report here the surprising fact that among five patients with HPT and predominantly CaP stones, we found both IMCD crystal plugging with associated papillary injury and abundant plaque with attached stones. This is the first disease instance thus far in which stones attached to plaque coexist with IMCD plugging.

RESULTS

Clinical information

All but one patient was female, and all had formed multiple stones (Table 1).
The stones from these patients that were available for analysis were predominantly CaP; however, for patient 1 we had only one stone, which was analyzed as predominantly CaOx. Parathyroidectomy was performed on the day of, or within a few weeks after, the percutaneous nephrolithotomy at which the renal biopsies reported here were obtained. Hypercalcemia had been present for months to years (Table 1), was of moderate degree (Table 2), and was accompanied by elevated serum PTH levels. Pre-parathyroidectomy 24-h urine studies were available in only two cases (Table 2) and showed the expected hypercalciuria. A parathyroid adenoma was found at surgery in all five patients and successfully removed. In four cases we were able to obtain postoperative serum calcium values (Table 2). Post-parathyroidectomy urine calcium values were obtained from all five patients and were normal, as were supersaturations with respect to CaOx and CaP. Urine pH, a principal determinant of CaP supersaturation, was not alkaline except in patients 2 and 3 (Table 2); the high pH value in patient 3 indicates probable contamination with a urea-splitting organism, which was confirmed by high ammonia (53 mEq/day) and low magnesium (1.5 mg/day) and phosphate (0.114 g/day) levels. Notably, serum creatinine values were slightly elevated and estimated glomerular filtration rates reduced.

(Received 14 November 2007; revised 14 January 2008; accepted 13 February 2008; published online 30 April 2008. This study was funded by NIH PO1 DK56788. Correspondence: Andrew E. Evan, Department of Anatomy and Cell Biology, Indiana University School of Medicine, 635 Barnhill Drive, MS 5055S, Indianapolis, IN 46223, USA. E-mail: evan@anatomy.iupui.edu)
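Table 2 reports MDRD-calculated eGFR values. As a hedged illustration of how such values follow from the serum data shown (assuming the classic 4-variable MDRD study equation with the 186 coefficient; the footnote says only "MDRD-calculated", so the exact variant is an assumption):

```python
def mdrd_egfr(scr_mg_dl: float, age_yr: float, female: bool, black: bool = False) -> float:
    """Classic 4-variable MDRD study equation (186 coefficient).

    Returns eGFR in ml/min/1.73 m^2. Whether this exact variant was used
    in the paper is an assumption; other calibrations use 175.
    """
    egfr = 186.0 * scr_mg_dl ** -1.154 * age_yr ** -0.203
    if female:
        egfr *= 0.742  # sex factor
    if black:
        egfr *= 1.212  # race factor in the original MDRD equation
    return egfr

# Patient 1 pre-PTX (female, age 49, creatinine 1.1) -> ~56
# Patient 2 pre-PTX (male, age 53, creatinine 0.8) -> ~107
```

With these inputs the sketch reproduces the tabulated pre-parathyroidectomy values of 56 and 107 ml/min/1.73 m² to within rounding.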
Table 1 presents the total number of stones found in each patient and the net percentages of CaOx, hydroxyapatite/brushite (HA/BR), and struvite for all analyzed stones. A further breakdown of the number of stones analyzed per patient and the composition of each stone is presented here. Patient 1 had four stones analyzed, with the following mineral compositions: 42% HA and 58% CaOx; 73% HA and 27% CaOx; 18% HA and 82% CaOx; 1% HA and 99% CaOx. Patient 2 had two stones analyzed: 100% BR; 100% BR. Patient 3 had five stones analyzed: 32% HA and 68% CaOx; 20% BR, 30% HA, and 50% CaOx; 60% HA and 40% CaOx; 100% BR; 50% HA and 50% CaOx. Patient 4 had two stones analyzed: 87% HA and 13% CaOx; 100% BR. Patient 5 had four stones analyzed: 90% HA and 10% CaOx; 45% HA and 55% struvite; 37% HA and 63% CaOx; 89% HA and 11% CaOx.

Table 1 | Clinical characteristics of biopsied patients

Case | Sex | Age at first stone | Stones | ESWL | PNL | Total procedures | Age at biopsy | Age at PTX | Duration of known high serum Ca | CaOx (%) | HA/BR (%) | Struvite (%)
1 | F | 48 | 4/2 | 0 | 2/1 | 2/1 | 49 | 49 | 43.5 years | 67 | 33/0 | 0
2 | M | 52 | 5/? | 1/? | 2/1 | 5/? | 53 | 53 | <12 months | 0 | 0/100 | 0
3a | F | 29 | 11/7 | 0 | 3/1 | 13/5 | 34 | 34 | 5 years | 28 | 34/38 | 0
4 | F | 30 | 3/2 | 1/1 | 1/1 | 4/2 | 37 | 37 | 2 years | 7 | 43/50 | 0
5 | F | 33 | 3/1 | 0 | 1/1 | 3/1 | 34 | 34 | 3 months | 21 | 65/0 | 14

BR, brushite; CaOx, calcium oxalate; ESWL, extra-corporeal shock wave lithotripsy before biopsy (total/biopsied kidney); HA, hydroxyapatite; PNL, percutaneous nephrolithotomy before biopsy (total/biopsied kidney); Procedures, ureteroscopy, laser lithotripsy, nephrectomy, ESWL, PNL (total/biopsied kidney); PTX, parathyroidectomy; Stones, total stones/stones from biopsied kidney; stone composition, the net % including all analyzed stones.
aThis case had nephrectomy 1.5 years before biopsy, for non-function.

Table 2 | Selected laboratory data for biopsied patients

Case | Collection period | Serum calcium (mg per 100 ml) | PTH (pg/ml) | Creat (mg per 100 ml) | eGFR (ml/min/1.73 m²) | Urine pH | Volume (l/day) | Ca/Creat (mg/g) | SSCaOx | SSCaP
1 | Pre-PTX | 11.0 | 172 | 1.1 | 56 | ND | ND | ND | ND | ND
1 | Post-PTX | 9.3 | 76 | 1.2 | 51 | 5.58 | 1.32 | 107 | 4.4 | 0.14
2 | Pre-PTX | 12.2 | 233 | 0.8 | 107 | 6.90 | 2.41 | 174 | 7.5 | 2.4
2 | Post-PTX | 9.8 | ND | 1.0 | 83 | 6.89 | 2.12 | 118 | 5.7 | 2.1
3 | Pre-PTX | 11.2 | 149 | 1.2 | 55 | ND | ND | ND | ND | ND
3 | Post-PTX | ND | ND | ND | ND | 8.54 | 0.94 | 11 | 1.2 | 0.1
4 | Pre-PTX | 10.8 | 188 | 0.9 | 75 | 5.65 | 1.92 | 261 | 11.9 | 1.1
4 | Post-PTX | 9.4 | 25 | ND | ND | 5.97 | 1.28 | 75 | 7.1 | 0.8
5 | Pre-PTX | 11.3 | 143 | 1.5 | 42 | ND | ND | ND | ND | ND
5 | Post-PTX | 8.1 | ND | 1.7 | 37 | 5.93 | 1.53 | 88 | 5.4 | 0.6

Ca/Creat, ratio of urine calcium to creatinine concentrations from 24-h collections (normal value is up to 140 mg/g); Creat, creatinine; eGFR, MDRD-calculated glomerular filtration rate; post-PTX, post-parathyroidectomy; pre-PTX, pre-parathyroidectomy; PTH, parathyroid hormone; SS, supersaturation. Serum and urine were not collected on the same days.

Renal surgical pathology

Patient 1 displayed the least amount of papillary pathology (Figure 1a), consisting of scattered white Randall's plaque in amounts similar to those found in normal people, that is, less than 1%.9 Even so, two papillae of this same patient (Figure 1b) showed marked retraction and markedly dilated BD with protruding crystal plugs. All the papillae of patients 2 and 3 (Figures 2 and 3) showed the advanced changes – retraction, BD dilation, and plugging – that were seen in only some papillae of patient 1. In addition, these two patients had yellow plaque, the known gross morphological appearance of IMCD apatite plugging1 (Figures 2a and d and 3a and b).
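Table 1's "net %" column summarizes the per-stone analyses listed above. The paper does not state the aggregation rule; the sketch below assumes a simple unweighted mean over a patient's analyzed stones, which reproduces the tabulated values for most patients (patient 5 is shown), though mass-weighting or rounding could explain residual discrepancies:

```python
from collections import defaultdict

def net_composition(stones):
    """Unweighted mean mineral percentages across one patient's analyzed stones.

    `stones` is a list of dicts mapping mineral -> percent for one stone
    (percentages within each stone sum to 100). The averaging rule is an
    assumption; the paper reports only the resulting "net %".
    """
    totals = defaultdict(float)
    for stone in stones:
        for mineral, pct in stone.items():
            totals[mineral] += pct
    return {mineral: total / len(stones) for mineral, total in totals.items()}

# Patient 5's four analyzed stones, as listed in the text:
patient5 = [
    {"HA": 90, "CaOx": 10},
    {"HA": 45, "struvite": 55},
    {"HA": 37, "CaOx": 63},
    {"HA": 89, "CaOx": 11},
]
# net_composition(patient5) -> HA 65.25, CaOx 21.0, struvite 13.75,
# which rounds to the 65 / 21 / 14 reported in Table 1.
```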
Large amounts of white (Randall's) plaque were also observed (Figure 2b and c), as were attached stones in some areas of white plaque (Figures 2a and c and 3a and b). Attached stones were on the same papillae in which BD were plugged with crystals (Figure 2a and c). Figure 3 shows an attached stone on a papilla obtained from patient 3 prior to removal (Figure 3a); during removal, exposing a region of white plaque deep within the site of stone attachment (Figure 3b); after removal (Figure 3c); and a micro-computed tomography (μCT) image of the same stone (Figure 3d). μCT analysis reveals the mineral composition of the stone to be mixed, the whitish areas being apatite and the gray areas CaOx dihydrate. Patients 4 and 5 displayed the most advanced changes (Figure 4). Papillae were flattened, with numerous dilated BD (Figure 4a and b), occasionally with crystalline plugs (arrow). The visual impression that larger amounts of white plaque were present on the papillae of patients 2 and 3 than on the papillae of the other three patients was confirmed by digital imaging: the percent of papillary surface covered by white plaque in patients 2 and 3 exceeded that of the other three patients by over 10-fold (Table 3).

Histopathological findings

Patient 1, who had a predominantly CaOx stone admixed with 33% apatite, had a few very large IMCD plugs (Figure 5a and b) with little plaque. Patient 3, who had brushite and apatite stones, showed a large amount of IMCD deposit on high-resolution CT and histopathology (Figure 5c and d), along with abundant plaque. Patient 2, who also had brushite-containing stones, showed a similar amount of IMCD plugging and abundant plaque (Figure 5e and f) as patient 3; this patient, as noted, along with patient 3, had stones attached to plaque (Figure 2a and c). We found, by high-resolution CT and light microscopy, that patients 4 and 5 (not illustrated) had a moderate amount of BD plugging and dilation, but lacked abundant white plaque and attached stones. Pathological changes were extreme in some areas (Figure 6a). Plugging of IMCD and BD with mineral led to loss of cells and interstitial fibrosis in patient 3. Very heavy deposits of Randall's plaque were observed in patient 2 (Figure 6b, Table 3), with the expected preservation of interstitial integrity. In patient 1 (Figure 6c), a single massive intra-tubular plug (center of panel), visualized by high-resolution CT in the lower left quadrant of the panel, led to total loss of epithelial cells. The mineral phase reaches to the surface of the basement membrane (lower right quadrant of panel). A ring of fibroblasts surrounds the plugged duct. The number of IMCD and BD deposits per biopsy differed widely among our patients (Table 3). Three patients (patients 1, 4, and 5) had mineral deposits extending from the inner medulla through the outer medulla to the cortex (Table 3). In the outer medulla of patient 5 (Figure 7a), deposits are in the outer medullary collecting ducts and extend into the cortical collecting ducts (Figure 7b); the same changes in patients 1 and 4 are not illustrated. We determined the composition of intratubular deposits in all five papillary and three cortical and outer medullary sections using micro-Fourier transform infrared spectrometry; in every case, deposits were biological apatite (Figure 8).

Figure 1 | Range of papillary pathology in one patient (patient 1). Most papillae were normal appearing (a) with small regions of Randall's plaque. Note the presence of biopsy forceps at upper right. Two papillae contained markedly dilated BD with crystalline plugging (b, arrow) and were retracted.

Figure 2 | Coexistence of attached stones and BD plugging on the same papillae (patient 2). (a) Areas of yellow plaque (IMCD crystal deposit, double arrowheads) and white interstitial (Randall's) plaque (single arrowhead) are found on one papilla. One dilated BD has a crystalline plug (arrow) protruding from its opening. A stone attached at an area of white plaque (within the white box) is magnified in the inset at the upper right; the stone is outlined within the inset by a black dotted overlay and the plaque border is indicated with an arrow. (b) Large areas of white plaque (single arrowhead) are evident on another papilla of this patient. (c) On another papilla a stone (asterisk) is attached to the papilla near an area of white plaque (single arrowhead). (d) On yet another papilla a dilated BD with plugging (arrow) is near the yellow plaque.

Figure 3 | Attached stone prior to, during, and after removal (patient 3). (a) An attached stone (asterisk) is near areas of yellow (double arrowhead) and white (single arrowhead) plaque. (b) The stone in panel (a) is being removed by a basket during percutaneous nephrolithotomy (asterisk); it was attached to an area of white plaque (single arrowhead) deep beneath the stone surface. This stone was also overlying a dilated BD (double arrows) and is adjacent to a large crystalline plug (small arrows) protruding from another dilated BD. (c) Light-microscopy image of the stone (2 mm) in panel (b), after removal. Many large CaOx dihydrate crystals (arrows) cover the urinary surface of the stone. (d) μCT analysis of the same stone reveals a mixture of apatite (white regions) and CaOx dihydrate (gray regions).
In the two cases (patients 2 and 3) with abundant interstitial plaque and attached stones, the mineral phase of the plaque was biological apatite (Figure 8).

Figure 4 | Severe papillary changes. Patient 5 (a) and patient 4 (b). The most severe changes consisted of papillary retraction and flattening associated with numerous dilated BD (double arrows), some with crystalline plugs (arrow); white plaque is absent.

Table 3 | Summary of pathological changes

Patient | Papillary deformity (%) | White plaque (%) | Yellow plaque | Attached stones | IMCD/BD deposits | Cortical deposits
1 | <50 | 0.28 | Some | No | 2±1 | Yes
2 | >50 | 15.9 | Some | Yes | 25±2 | No
3 | >50 | 2.8 | Some | Yes | 24±3 | No
4 | >50 | 0.15 | Some | No | 10±1 | Yes
5 | >50 | 0.17 | Some | No | 6±1 | Yes

BD, duct of Bellini; IMCD, inner medullary collecting duct; IMCD/BD deposits, number of individual intra-luminal deposits per square millimeter of tissue, ±s.e.m. Percent papillary deformity refers to the fraction of papillae visualized at the time of surgery with deformity; white plaque is expressed as percent of papillary surface covered by white plaque; the previously published value for mean plaque coverage measured in four patients with no stones was 0.5%.9

Figure 5 | Relative densities of IMCD deposits and interstitial plaque. μCT imaging (a, c, e) and histopathology (b, d, f). (a) Patient 1 had a few huge deposits (arrows) in IMCD by μCT analysis. (b) One of the large deposits (single arrow) in patient 1 is associated with loss of epithelium (decalcified light-microscopic section, double arrow); Randall's plaque is not present. In patient 3 (c, d) and patient 2 (e, f), many IMCD deposits (arrows) are associated with Randall's plaque (arrowheads). Patients 4 and 5 had a moderate amount of IMCD deposits (not shown). Original magnification ×30 (b); ×150 (d, f). Panels (d) and (f) are Yasue staining images; panel (b) is an image of toluidine blue staining.
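The white-plaque percentages in Table 3 come from pixel counts of traced intraoperative images (plaque pixels divided by total papillary pixels; see Materials and Methods). A minimal sketch of that ratio, using hypothetical 0/1 masks in place of the traced prints:

```python
def percent_plaque_coverage(papilla_mask, plaque_mask):
    """Percent of papillary surface covered by white plaque.

    Both arguments are 2D nested lists of 0/1 flags with identical shape:
    papilla_mask marks pixels inside the outlined papilla; plaque_mask
    marks pixels outlined as white (Randall's) plaque. Plaque pixels are
    counted only where they fall within the papilla outline.
    """
    papilla_px = 0
    plaque_px = 0
    for row_pap, row_plq in zip(papilla_mask, plaque_mask):
        for in_papilla, in_plaque in zip(row_pap, row_plq):
            papilla_px += in_papilla
            plaque_px += in_papilla * in_plaque
    if papilla_px == 0:
        raise ValueError("papilla mask is empty")
    return 100.0 * plaque_px / papilla_px

# Toy 4x4 example: 8 papilla pixels, 1 of them plaque -> 12.5% coverage
papilla = [[0, 1, 1, 0],
           [1, 1, 1, 1],
           [0, 1, 1, 0],
           [0, 0, 0, 0]]
plaque = [[0, 0, 0, 0],
          [0, 1, 0, 0],
          [0, 0, 0, 0],
          [0, 0, 0, 0]]
```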
DISCUSSION

All five of our patients exhibited the changes that were seen in patients with brushite stones, that is, IMCD and BD plugging with apatite crystals; loss of epithelial cell integrity, peri-tubular interstitial inflammation, and fibrosis about the affected tubules; and papillary retraction and dilation of BD and their openings.1 However, there are some differences. Only in patient 1 did we observe the extreme of BD dilation found regularly in brushite disease. Patients 2 and 3 had a larger fraction of IMCD involved than the patients with brushite stones whom we have reported previously.1 In contrast, our patients with dRTA had a very high fraction of IMCD involved with crystal plugging,2 and somewhat less dramatic tubule dilation than that in patients with brushite stones. So one might say these five HPT patients share some traits of both brushite disease and dRTA. Likewise, patients with brushite stones had a very wide range of papillary involvement within individual patients, ranging from retraction and scarring to virtually normal, whereas in dRTA almost all papillae are quite abnormal; our HPT patients are more like those with brushite stones in having a wide range of changes. On the basis of the above distinctions, the findings from five HPT patients are similar to those from brushite disease and dRTA patients, but more closely resemble brushite disease than dRTA. As patient 1 had only a net 33% of apatite in the stones that we had analyzed, she could have been classified clinically as having CaOx stones. This finding from this one patient suggests that in HPT, unlike ICSF, CaOx stones may well be associated with the histopathology usually observed in patients with CaP stones.
This conjecture cannot be tested until we biopsy more CaOx stone-containing patients with HPT.

Figure 6 | Higher magnification reveals degree of cell loss and interstitial fibrosis. (a) Massive IMCD plugging (arrows) is associated with epithelial cell loss and severe fibrosis (patient 3). (b) Areas of abundant interstitial plaque (patient 2, arrowheads) were not associated with cell loss or interstitial fibrosis. (c) A large deposit in patient 1 (μCT in lower left inset) lies within an IMCD lumen (transmission electron microscopy, arrow); tubule cells are absent and crystals are apposed directly to the basement membrane (double arrow, detailed in lower right inset). Surrounding interstitium is fibrotic (arrowheads). Original magnification ×500 (a, b); ×1600 (c); and ×4800 (right lower inset).

Figure 7 | Mineral extends into the outer medulla and cortex in patients 1, 4, and 5. Yasue-positive deposits were found in outer medullary collecting ducts (a, arrows) and in the cortical collecting ducts (b, arrow) of patient 5; the same changes in patients 1 and 4 are not illustrated. Original magnification ×300 (a) and ×200 (b).

Figure 8 | Micro-Fourier-transform infrared spectrometer analysis of intra-luminal deposits and interstitial plaque from patient 3. Traces: calcium oxalate and hydroxyapatite standards, intraluminal crystals, interstitial crystals, and tissue with embedding media (x axis: wavenumbers, 4000–580 cm⁻¹). Intra-luminal and interstitial crystals show a spectral band matching that of the hydroxyapatite standard, but no bands of the CaOx standard. Tissue with the embedding medium, a control, displays bands in common with neither standard.
One might mention in this regard that patients with CaOx stones who have enteric hyperoxaluria have thus far also shown IMCD plugging with apatite, and are the only other instance of CaOx stones accompanied by crystals in the epithelial cell compartments.3 Unlike all four prior contrast groups with IMCD crystal plugging – patients with brushite stones, enteric hyperoxaluria with CaOx stones, dRTA, and cystinuria – these HPT patients can have some of their stones attached to Randall's plaque (patients 2 and 3). That they can have abundant plaque is not surprising, in that hypercalciuria is well known in HPT10 and is a main pathophysiological correlate of plaque abundance.9 Patient 2 was in fact hypercalciuric (Table 2), and plaque abundance was highest in patients 2 and 3, who had attached stones. For reasons yet to be uncovered, HPT can produce both pathologies, intratubular plugging and stone overgrowth on plaque, whereas no other disease yet described has them in combination. We could speculate that urine pH and volume, the other two factors controlling plaque abundance,9 permit plaque in HPT and not in the other conditions involving IMCD plugging; this study cannot pursue the matter further owing to the lack of sufficient pre-parathyroidectomy urine studies. Our published reports have thus far included 10 patients with brushite stones and 5 with dRTA.1,2 In those publications, we did not choose to quantify whether stones were or were not attached, although this was fully documented at the time of our intraoperative observations. Among 82 fully studied papillae from the 10 patients with brushite stones, we found 22 stones, excluding the index stones that led to the surgery. Of these, none were attached to plaque; rather, they were sub-urothelial or attached to plugs of crystal growing out of the mouths of dilated BD.
No stones were found attached to the papillae of our 5 dRTA patients, our 5 patients with cystine stones,4 or our 4 patients with obesity bypass stones,3 all of whom manifested IMCD apatite plugging. We offer these observations to put the present results in as quantitative a perspective as possible at this time. Why IMCD plugging occurs in our HPT patients is an unsettled matter. In patients with brushite stones, hypercalciuria is more marked and urine pH is higher than in patients with CaOx stones,11 so CaP supersaturation is increased markedly and would be expected to foster CaP crystallization in urine, IMCD, and BD. The same is true for dRTA. We presume that both factors (high urine pH and calcium) were present in these HPT patients, but we did not have the opportunity to study them before curative surgery; postoperative recordings need not reflect conditions when HPT was present. More studies of HPT are clearly needed. We have found virtually no prior information about renal tissue changes in primary HPT in patients who have stones; PubMed searches in fact revealed only three publications on the subject, and only when we relaxed the requirement for stone formation. All three publications describe patients with extreme hypercalcemia.12–14 Changes included calcifications in glomeruli, mitochondria, and numerous tubular segments. Serum calcium values ranged from 16 to 22 mg per 100 ml, far exceeding those of our patients.

MATERIALS AND METHODS

Patients

We studied five patients with primary HPT and stones who required percutaneous nephrolithotomy at this institution (International Kidney Stone Institute, Methodist Clarian Hospital, Indianapolis, IN, USA) during the past 5 years (Table 1) and who consented to participate in the study. Patients were not selected. Clinical history was obtained along with reviews of old records to obtain stone analyses and the type and number of stone procedures (Table 1).
Clinical laboratory studies

Two 24-h urine samples were collected while patients were on their free-choice diet and were not on medications. In the urine we measured volume, pH, and levels of calcium, oxalate, citrate, phosphate, uric acid, sodium, potassium, magnesium, sulfate, and ammonia, and calculated supersaturation with respect to CaOx, BR, and uric acid using methods detailed elsewhere.11 Routine clinical blood measurements were made on blood samples obtained for clinical purposes.

Biopsy protocol and plaque area determination

During percutaneous nephrolithotomy, all papillae were digitally imaged as described elsewhere.9 Biopsies were taken from one upper-pole, one inter-polar, and one lower-pole papilla, and from the cortex. Using intraoperative recordings,1 the total surface area of each papilla was measured, and one of us outlined the areas of white (Randall's) plaque as well as the entire papilla on one set of prints. The white plaque and total papillary areas were converted to numbers of pixels, yielding the ratio of plaque to total papillary pixels, or percent coverage with plaque. No biopsy site inspected intraoperatively displayed significant hemorrhage, and no postoperative complications related to the biopsy procedures occurred in any patient. The study was approved by the Institutional Review Board Committee for Clarian Health Partners (no. 98-073).

Tissue analysis

General. Fifteen papillary and five cortical biopsies were studied using light and transmission electron microscopy. All biopsy (cortical and papillary) specimens were immersed in 5% paraformaldehyde in 0.1 mol/l phosphate buffer (pH 7.4).

Light microscopy. Papillary and cortical biopsies were dehydrated through a series of graded ethanol concentrations to 100% ethanol prior to being embedded in a 50/50 mixture of Paraplast Xtra (Fisher Scientific, Itasca, IL, USA) and Peel-away Micro-Cut (Polysciences Inc., Warrington, PA, USA).
Serial sections were cut at 4 µm and stained by the Yasue metal substitution method for calcium histochemistry,3 and with hematoxylin and eosin for routine histological examination. An additional set of serial sections was cut at 7 µm for infrared analysis.

Infrared. Reflectance–absorption spectra were collected with a Perkin-Elmer Spotlight 400 infrared imaging microscope interfaced to a Perkin-Elmer Spectrum One Fourier transform infrared spectrometer. The system uses a 100 × 100 µm, liquid-nitrogen-cooled, mercury cadmium telluride (HgCdTe) detector. Samples were analyzed using an aperture size suitable for the size of the sample being studied (for example, 20–50 µm in diameter). Each spectrum collected represents the average of 64 or 128 individual scans at a spectral resolution of 4 cm⁻¹. A clean area on the low-E slide was employed to collect the background spectrum.

Attenuated total internal reflection spectra were collected with the same Perkin-Elmer Spotlight 400 infrared imaging microscope and Spectrum One spectrometer. The standard germanium internal reflection element was employed in conjunction with an aperture of 100 × 100 µm or 50 × 50 µm. Because attenuated total internal reflection measurement is essentially an immersion measurement, the sampled area using these apertures is 25 × 25 µm or 12.5 × 12.5 µm. Each spectrum collected represents the average of 64 or 128 individual scans at a spectral resolution of 4 cm⁻¹. A clean potassium chloride surface was used to collect the background spectrum.

Transmission electron microscopy.
The 5-mm biopsy specimens of the renal papilla were divided into 1-mm blocks and routinely processed for transmission electron microscopy before being embedded.3 All thin sections were examined with an FEI Tecnai G2 12 BioTwin transmission electron microscope (FEI, Hillsboro, OR, USA) equipped with an AMT XR-60 digital CCD system (AMT Corp.).

µCT. All papillary biopsies underwent µCT analysis with the SkyScan-1072 (SkyScan, Aartselaar, Belgium) high-resolution desktop µCT system, allowing nondestructive mapping of the location and size of the crystalline deposits within a biopsy specimen. This µCT system can generate a tissue window so that both the mineral deposit and the tissue organization are seen at the same time. For this protocol, biopsies were quickly dipped in a 1:10 dilution of Hypaque (50%; Nycomed Inc., Princeton, NJ, USA) in phosphate-buffered saline, coated with a thin layer of paraffin, and mounted in the center of a small chuck that was then locked into place in the machine. The sample was positioned in the center of the beam and the system configuration was set at 35 kV, 209 µA, 180° rotation, with flat-field correction. Images were recorded to CDs and reconstructed with SkyScan's Cone-Reconstruction software. These images were then reconstructed into three-dimensional images with SkyScan's CTAn + CTVol software, which allowed us to properly orient each biopsy for subsequent light-microscopy analysis. Three separate scans from each patient were used to determine the number of sites of intraluminal deposits per square millimeter, except for patient 5, for whom paraffin sections were used.

DISCLOSURE

All the authors declared no competing interests.

REFERENCES

1. Evan AP, Lingeman JE, Coe FL et al. Crystal-associated nephropathy in patients with brushite nephrolithiasis. Kidney Int 2005; 67: 576–591.
2. Evan AP, Lingeman J, Coe F et al. Renal histopathology of stone-forming patients with distal renal tubular acidosis.
Kidney Int 2007; 71: 795–801.
3. Evan AP, Lingeman JE, Coe FL et al. Randall's plaque of patients with nephrolithiasis begins in basement membranes of thin loops of Henle. J Clin Invest 2003; 111: 607–616.
4. Evan AP, Coe FL, Lingeman JE et al. Renal crystal deposits and histopathology in patients with cystine stones. Kidney Int 2006; 69: 2227–2235.
5. Matlaga BR, Coe FL, Evan AP et al. The role of Randall's plaques in the pathogenesis of calcium stones. J Urol 2007; 177: 31–38.
6. Evan AP, Coe FL, Lingeman JE et al. Mechanism of formation of human calcium oxalate renal stones on Randall's plaque. Anat Rec (Hoboken) 2007; 290: 1315–1323.
7. Odvina CV, Sakhaee K, Heller HJ et al. Biochemical characterization of primary hyperparathyroidism with and without kidney stones. Urol Res 2007; 35: 123–128.
8. Pak CY, Poindexter JR, Adams-Huet B et al. Predictive value of kidney stone composition in the detection of metabolic abnormalities. Am J Med 2003; 115: 26–32.
9. Kuo RL, Lingeman JE, Evan AP et al. Urine calcium and volume predict coverage of renal papilla by Randall's plaque. Kidney Int 2003; 64: 2150–2154.
10. Parks J, Coe F, Favus M. Hyperparathyroidism in nephrolithiasis. Arch Intern Med 1980; 140: 1479–1481.
11. Parks JH, Worcester EM, Coe FL et al. Clinical implications of abundant calcium phosphate in routinely analyzed kidney stones. Kidney Int 2004; 66: 777–785.
12. Henegar JR, Coleman JP, Cespedes J et al. Glomerular calcification in hypercalcemic nephropathy. Arch Pathol Lab Med 2003; 127: E80–E85.
13. Kashitani T, Makino H, Nagake Y et al. Two cases of hypercalcemic nephropathy associated with primary hyperparathyroidism. Nippon Jinzo Gakkai Shi 1993; 35: 1189–1194.
14. Yu JM, Pyo HJ, Choi DS et al. A case of primary hyperparathyroidism with hypercalcemic nephropathy in children. J Korean Med Sci 1994; 9: 268–272.
Kidney International (2008) 74, 223–229

A Multidomain Approach to Assessing the Convergent and Concurrent Validity of a Mobile Application When Compared to Conventional Methods of Determining Body Composition

Eric V. Neufeld 1,2, Ryan A. Seltzer 1,3,*,†, Tasnim Sazzad 1 and Brett A. Dolezal 1

1 Airway & Exercise Physiology Research Laboratory, David Geffen School of Medicine, Los Angeles, CA 90095, USA; eneufeld8@ucla.edu (E.V.N.); sazzad.tasnim@gmail.com (T.S.); bdolezal@mednet.ucla.edu (B.A.D.)
2 Donald and Barbara Zucker School of Medicine at Hofstra/Northwell, Hofstra University, Hempstead, NY 11549, USA
3 School of Medicine, Stanford University, Stanford, CA 94305, USA
* Correspondence: rseltzer@stanford.edu
† These authors contributed equally to this work.

Received: 7 October 2020; Accepted: 28 October 2020; Published: 29 October 2020

Sensors 2020, 20, 6165; doi:10.3390/s20216165

Abstract: Determining body composition via mobile application may circumvent limitations of conventional methods. However, the accuracy of many technologies remains unknown. This investigation assessed the convergent and concurrent validity of a mobile application (LS) that employs 2-dimensional digital photography (LS2D) and 3-dimensional photonic scanning (LS3D). Measures of body composition including circumferences, waist-to-hip ratio (WHR), and body fat percentage (BF%) were obtained from 240 healthy adults using LS and a diverse set of conventional methods: Gulick tape, bioelectrical impedance analysis (BIA), and skinfolds. Convergent validity was consistently high, indicating that these methods vary proportionally and can thus reliably detect changes despite individual measurement differences. The span of the Limits of Agreement (LoA) using LS was comparable to the LoA between conventional methods. LS3D exhibited high agreement with Gulick tape in the measurement of WHR, despite poor agreement for the individual waist and hip circumferences. For BF%, LS2D exhibited high agreement with the BIA and skinfold methods, whereas LS3D demonstrated low agreement. Interestingly, the low inferred bias between LS3D and dual-energy x-ray absorptiometry (DXA) using existing data suggests that LS3D may have high agreement with DXA. Overall, the suitability of LS2D and LS3D to replace conventional methods must be judged against an individual user's criteria.

Keywords: anthropometry; digital health; waist-to-hip ratio; body composition; body fat percentage; validity; health monitoring

1. Introduction

Lifestyle-related metabolic diseases constitute some of the most pervasive yet preventable ailments in the developed world. An underlying symptom that substantially increases the risk of developing these conditions is excess body weight and obesity [1–3]. Although many pharmacological, surgical, and behavioral interventions exist to combat obesity, it remains a prominent and expanding public health issue [4]. Effectively combatting obesity requires a multidisciplinary and personalized approach, which is increasingly intertwined with the rapidly advancing field of digital health technology.

Mobile applications and wearable devices are increasingly used to track a wide variety of physiological parameters. Many of these are aimed at the prevention and mitigation of metabolic disease, such as using heart rate and energy expenditure to promote physical activity and healthy dietary intake [5–9]. This trend is in line with the current vast expansion in the use of digital health technology for non-contact medical evaluation during the COVID-19 pandemic [10–12]. However, not all digital health technology utilizes scientifically established guidelines, and those that do may still lack accuracy or precision [13–16]. Technological advancement naturally outpaces rigorous validation research; thus, the quality of many health-related mobile applications remains unknown.
Monitoring various measurements of body composition can promote a reduction in obesity and a decreased risk of metabolic disease; however, many conventional measurements are expensive, cumbersome, or best performed by a trained provider. Self-weighing on a consistent basis, arguably the simplest method of quantifying body composition, improves weight outcomes without adverse psychological effects [17,18]. Although body weight can be easily converted into a body mass index (BMI), this method has limited clinical utility due to its poor sensitivity and overall simplicity [19–21]. In addition to body weight, body composition measures closely linked to metabolic diseases include waist circumference, waist-to-hip ratio (WHR), and body fat percentage (BF%) [22–24]. Although helpful in providing a fuller picture of body habitus and characterizing disease risk, many of these measures require substantial training, have limited efficacy in certain populations, or are prohibitively expensive. For example, measuring tapes can be simplistic and inefficient, dual-energy x-ray absorptiometry (DXA) is expensive and often inaccessible, and bioelectrical impedance analysis (BIA) can exhibit poor accuracy in certain populations while requiring stringent pretesting standards [25].

Digital anthropometry via mobile application has the potential to determine body composition while circumventing several limitations of conventional techniques. LeanScreen (LeanScreen®, version 8.6; PostureCo, Inc., Trinity, FL, USA) is a mobile application that provides a cluster of body composition measures through novel, non-contact 2D digital photography (LS2D) and 3D photonic scanning (LS3D). While existing evidence suggests these 2D [26] and 3D technologies are valid measurement tools, recent studies have called the accuracy and utility of LeanScreen (LS) into question [27]. Specifically, Macdonald et al.
found a consistent and significant underestimation of BF% as determined by LS2D technology (version 6.0) compared to DXA [27]. For LS2D, they reported a combined male-female (n = 148) bias of −3.3 ± 3.6 with limits of agreement (LoA) spanning 13.99 (−10.26 to 3.73). However, they also concluded that LS2D exhibits high inter-rater and intra-rater reliability.

The objective of this investigation was to evaluate both the convergent and concurrent validity of LS2D and LS3D technology in determining body composition (body circumferences, WHR, and BF%) when compared to a diverse set of conventional methods (Gulick tape, BIA, and skinfolds). This breadth of direct comparison provides a higher degree of confidence and transparency for individuals seeking to compare LeanScreen against their preferred method of determining body composition.

2. Methods

2.1. Participants

Our sample consisted of 153 men and 87 women (mean ± SD: age: 20.8 ± 2.3 years; height: 172.3 ± 9.8 cm; weight: 69.5 ± 12.2 kg; BMI: 23.3 ± 2.8 kg/m²). The majority were visibly healthy undergraduate students who engaged in routine moderate-to-vigorous exercise (7.0 ± 3.9 h/week). Participants comprised a multitude of races, including Caucasian (n = 113), African American (n = 6), Asian (n = 80), and non-white Hispanic (n = 25). Some individuals reported mixed descent, such as Caucasian/Asian (n = 11) and Caucasian/Hispanic (n = 5). Participants 18–40 years old were recruited using flyers posted on the UCLA campus and by email. Exclusion criteria included: (i) pregnancy, (ii) metal or silicone body implants, and (iii) cancer diagnosis or receiving radiological treatment. Each participant reported to the UCLA Airway & Exercise Physiology Research Laboratory for a single 30-min session.
During this session, three body composition measures (body circumferences, waist-to-hip ratio (WHR), and body fat percentage (BF%)) were collected using conventional (criterion) methods and LeanScreen to enable calculation of concurrent and convergent validity (Table 1). The UCLA Institutional Review Board approved this study (IRB#11-003190), and all participants provided informed consent prior to enrollment.

Table 1. List of conventional and LeanScreen methods used for determining each measure of body composition.

Body Composition Measure | Conventional Methods | LeanScreen Methods
Body Circumferences | Gulick tape | LeanScreen 3D (LS3D)
Waist-To-Hip Ratio (WHR) | Gulick tape | LeanScreen 2D (LS2D); LeanScreen 3D (LS3D)
Body Fat Percentage (BF%) | Skinfolds; Bioelectrical Impedance (BIA) | LeanScreen 2D (LS2D); LeanScreen 3D (LS3D)

2.2. Measurements with Conventional Methods and LeanScreen

Each measure of body composition (body circumferences, WHR, and BF%) was collected using conventional methods and LeanScreen photographic or 3D-scanning methods (Table 1). All measurements for a participant were collected in a single session while the participant wore minimal clothing: fitted shorts and shirtless for men, fitted shorts and a sports bra for women. Gulick tape body circumferences and skinfold BF% were collected by one of four trained examiners. Interrater reliability of these four examiners was excellent, with an intraclass correlation coefficient (ICC) of 0.99 (95% CI: 0.99–1.00) based on a single-rating, absolute-agreement, one-way random-effects model. Conventional body circumferences were measured using Gulick tape (precision: 0.01 cm) at eight anatomical sites. Prior to collecting each measurement, the anatomical site was marked with athletic tape (RockTape, Campbell, CA, USA) to ensure the circumferences were measured at the same location.
• Neck: inferior to the laryngeal prominence
• Waist: most narrow portion between the xiphoid process and umbilicus
• Umbilicus: directly in line with the umbilicus
• Hips: at the widest point and across the most prominent portion of the buttocks
• Left Humerus: midway between the left acromion and olecranon processes
• Right Humerus: midway between the right acromion and olecranon processes
• Left Femur: midway between the left inguinal crease and the proximal border of the patella
• Right Femur: midway between the right inguinal crease and the proximal border of the patella

Conventional BF% methods included skinfolds and BIA. Skinfolds at nine anatomical sites (chest, abdomen, thigh, bicep, tricep, subscapular, suprailiac, midaxillary, and calf) were collected using a Lange skinfold caliper (Cambridge Scientific Industries, Inc., Cambridge, MD, USA) [28]. Three measurements were collected at each location (precision: 1 mm) and averaged for the site, and BF% was calculated using modernized skinfold equations based on sex and race [29,30]. During the BIA measurement, each participant stood upright on the BIA instrument (R20; InBody Co., Seoul, Korea) with the ball and heel of each foot on two metallic footpads while holding a handgrip with both hands pronated and perpendicular to the floor. The participant gripped the handle fully, with the palm on one electrode and the thumb resting on the top of the unit's other electrode. To ensure accuracy, participants adhered to the standard pre-measurement BIA guidelines recommended by the American Society of Exercise Physiologists [31].

2.3. LeanScreen Methods

For both the LS2D and LS3D exams, each participant stood upright with arms outstretched, remained motionless, and avoided exaggerated breaths. During the LS2D exam, the participant's anterior and right lateral photographs were collected using a tablet (iPad 2; Apple Inc., Cupertino, CA, USA). The vertical position of the camera was ensured based on feedback from the LS application.
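The skinfold-to-BF% conversion described above used modernized, sex- and race-specific equations [29,30] that are not reproduced in this paper. Purely as an illustration of the general form of such calculations, the classic Jackson-Pollock 3-site equation for men with Siri's body-density conversion can be sketched as follows; the site choice and coefficients here are an assumption for illustration, not the study's actual equations:

```python
def body_density_jp3_male(chest_mm: float, abdomen_mm: float, thigh_mm: float,
                          age_yr: float) -> float:
    """Jackson-Pollock 3-site body density estimate for men (illustrative only)."""
    s = chest_mm + abdomen_mm + thigh_mm  # sum of the three skinfolds, in mm
    return 1.10938 - 0.0008267 * s + 0.0000016 * s**2 - 0.0002574 * age_yr

def siri_percent_fat(body_density: float) -> float:
    """Siri two-compartment equation converting body density to BF%."""
    return 495.0 / body_density - 450.0
```

For example, skinfolds of 15, 18, and 12 mm at age 25 give a body density near 1.069 and a BF% of roughly 13%.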
The images were analyzed to determine the count of vertical pixels. A ratio of known linear distance to known pixels was derived from the subject's known height and known linear distance to the backdrop. In both images, anatomical points were identified for each body area to be measured. Each body area required four points to be identified, two from each photograph. The four points were used to construct two lines as the major and minor axes of an ellipse, and multiple circular formulas were applied to create an array of estimated circular circumferences. The four points were also used to construct a rectangular plane, where additional circular formulas were employed to produce ellipses inscribed within and circumscribed around the rectangular plane. The circumferences of these ellipses were combined into the array of estimated circular circumferences, which was then applied through proprietary algorithms to measure WHR (precision: 0.1) and BF% (precision: 0.1%).

During the LS3D exam, the examiner captured the volume of the participant by maneuvering the same tablet as in LS2D, equipped with a 3D photonic scanner (Structure Sensor; Occipital, Inc., San Francisco, CA, USA), around the participant. Circumferential measurements of a 3D polygonal mesh model were computed by defining an infinitely sized 2D plane that completely intersects the 3D model at a desired location. The intersection provides an array of 3D planar polygonal primitives (triangles and quadrilaterals), the 3D vertices of those primitives, and the intersection angles of the 2D plane with each primitive and vertex. The array of primitives was interrogated, and the angle of intersection with each primitive was applied to calculate the linear distance across the primitive. The summation of all calculated linear distances provided the circumferential measurement of the 3D model at the 2D plane of intersection.
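The two geometric constructions described above can be illustrated with a brief sketch: an ellipse-perimeter estimate from a major and minor axis (the LS2D case) and a plane-mesh intersection whose segment lengths are summed into a circumference (the LS3D case). This is a minimal independent illustration, not PostureCo's proprietary implementation; Ramanujan's approximation is assumed here as one plausible "circular formula":

```python
import math

def ellipse_perimeter(a: float, b: float) -> float:
    """Ramanujan's approximation for the perimeter of an ellipse
    with semi-axes a and b (one plausible 'circular formula')."""
    return math.pi * (3 * (a + b) - math.sqrt((3 * a + b) * (a + 3 * b)))

def slice_circumference(vertices, triangles, z0: float) -> float:
    """Sum the lengths of the segments where the plane z = z0 cuts a
    closed triangle mesh, yielding the cross-sectional circumference."""
    total = 0.0
    for i, j, k in triangles:
        pts = []
        for p, q in ((i, j), (j, k), (k, i)):
            za, zb = vertices[p][2], vertices[q][2]
            if (za - z0) * (zb - z0) < 0:  # this edge crosses the cutting plane
                t = (z0 - za) / (zb - za)
                pts.append(tuple(vertices[p][m] + t * (vertices[q][m] - vertices[p][m])
                                 for m in range(3)))
        if len(pts) == 2:  # the triangle contributes one intersection segment
            total += math.dist(pts[0], pts[1])
    return total
```

As a sanity check, a circle (a = b = r) reduces to 2πr, and slicing a unit-cube mesh at mid-height yields a circumference of 4.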
The resulting circumferential measurements were then applied through the aforementioned proprietary algorithms to output body circumferences (precision: 1 cm), WHR (precision: 0.01), and BF% (precision: 0.1%).

2.4. Data Analysis

We assessed convergent validity and concurrent validity across each pair of proxy methods used to obtain a body composition measure (Table 1). Convergent validity reflects whether two proxy measurements capture the same underlying construct, while concurrent validity reflects the correspondence between a proxy measurement and a criterion method when captured simultaneously [32,33]. Convergent validity, an indication that two proxy measurements change proportionally relative to an underlying construct, must be high for the two proxies to provide similar results when used as an outcome in a research study. Based on Carlson and Herdman (2012), convergent validity occurs between two measurements when the lower bound of the 95% confidence interval (CI) of the Pearson correlation coefficient (r) is greater than 0.70. Concurrent validity, on the other hand, indicates that values obtained from two different proxy measures agree, not merely change together as with convergent validity.

Concurrent validity was assessed at two levels: across an entire group and for an individual. Group agreement between measurement pairs was assessed using bias and limits of agreement (LoA) [34]. Bias measures how concurrent measurements differ on average across an entire sample, with systematic bias occurring when the 95% confidence interval does not include zero [35]. The 95% LoA indicate the outer extremes of the possible differences between two measurements based on the sample population [36]. Histograms of all differences between measurements were visually inspected for a normal distribution before calculating bias or LoA.
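The group-level statistics described above (the Fisher z 95% CI on Pearson's r used for the convergent-validity criterion, and the Bland-Altman bias with 95% LoA) can be sketched as follows. The authors performed their analyses in R; this Python version is an independent illustration, not their code:

```python
import math
import numpy as np

def pearson_with_ci(x, y, z_crit: float = 1.96):
    """Pearson r with a Fisher z-transform 95% CI; the convergent-validity
    criterion checks whether the CI's lower bound exceeds 0.70."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    r = np.corrcoef(x, y)[0, 1]
    se = 1.0 / math.sqrt(len(x) - 3)  # SE of the z-transformed correlation
    z = math.atanh(r)
    return r, math.tanh(z - z_crit * se), math.tanh(z + z_crit * se)

def bland_altman(a, b, z_crit: float = 1.96):
    """Bias (mean difference) and 95% limits of agreement between two methods."""
    d = np.asarray(a, float) - np.asarray(b, float)
    bias = d.mean()
    sd = d.std(ddof=1)
    return bias, bias - z_crit * sd, bias + z_crit * sd
```

With these helpers, a method pair would pass the convergent-validity criterion when the second returned value of `pearson_with_ci` is greater than 0.70.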
Individual agreement between two measurements was assessed with the percentage of participants whose measurements were within the calculated reliable change index (RCI). This clinically useful value indicates when two measures applied to the same person are statistically different from each other [37]. There are multiple methods of setting the RCI for comparison, all of which depend on the data available. The RCI for the body circumference comparisons was set using the intraclass correlation coefficient (ICC) method within each comparison [37]. The RCI for all BF% comparisons was calculated using the ICC method applied to a comparison of the conventional methods (BIA and skinfolds). This provides a standard comparison against which to assess all other comparisons to conventional methods.

The conventional methods were used as criterion standards for the concurrent validation of LS2D and LS3D. In the analysis of BF%, concurrent validity was assessed relative to the concurrent comparison between the two conventionally collected criterion standards. This enables a fair comparison when evaluating the performance of proxy methods such as LS to measure BF%. Percent agreement, the proportion of participants whose measures fall within the bounds of the RCI, was calculated for each concurrent method. For example, an 80% agreement between two concurrent methods means that 8 out of 10 people tested using both methods would have a numerical result that is not statistically significantly different from the other method. All statistical analyses were performed in R (version 3.5.1; R Foundation for Statistical Computing, Vienna, Austria) using the packages "psych" and "BlandAltmanLeh" [38,39].

3. Results

Almost all measurement pairs across conventional (criterion) and LS methods demonstrated high convergent validity. In the assessment of WHR, LS3D demonstrated high concurrent validity with the Gulick tape method.
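The ICC-based reliable change index described in the Methods above follows the Jacobson-Truax construction; the paper does not spell out the exact variant beyond the ICC method, so this sketch assumes the common form (SEM derived from a baseline SD and the ICC, with a 1.96 critical value):

```python
import math

def reliable_change_index(baseline_sd: float, icc: float, z_crit: float = 1.96) -> float:
    """Jacobson-Truax RCI threshold: differences larger than this are
    statistically distinguishable from measurement error."""
    sem = baseline_sd * math.sqrt(1.0 - icc)  # standard error of measurement
    s_diff = math.sqrt(2.0) * sem             # SE of a difference of two measures
    return z_crit * s_diff

def percent_agreement(method_a, method_b, rci: float) -> float:
    """Share of paired measurements whose absolute difference is within the RCI."""
    pairs = list(zip(method_a, method_b))
    within = sum(1 for a, b in pairs if abs(a - b) <= rci)
    return 100.0 * within / len(pairs)
```

A perfectly reliable instrument (ICC = 1) yields an RCI of 0, so any nonzero difference would count as real change; lower reliability widens the threshold.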
With regard to BF%, LS2D demonstrated concurrent validity of similar magnitude to the concurrent validity between the criterion standards themselves. However, the systematic overestimation of BF% by LS3D relative to the criterion standards resulted in lower concurrent validity.

3.1. Body Circumferences

When comparing the LS3D and Gulick tape methods for measuring body circumferences (Table 2), convergent validity was excellent. Group biases were generally small, with the largest limits of agreement corresponding to the locations with the largest biases. Individual agreement between the methods was poor for most locations. Of note, there were several output errors by LS3D in the assessment of both the right and left humerus circumferences that resulted in nonviable, non-geometric measurements. This tended to occur in very thin subjects, towards whom our sample was biased given the population from which it was drawn. As a result, the sample size is reduced for the humerus circumference comparisons between LS3D and Gulick tape.

Table 2. Validity measures of body circumferences as measured by Gulick tape and LS3D.

Location | Pearson Coefficient r [95% CI] | Bias [95% CI] | 95% Limits of Agreement | % Agreement ** Male | % Agreement ** Female | % Agreement ** Total
Neck (cm) | 0.85 [0.80, 0.88] | 0 [−1, 1] | (−4, 4) | 74 (4) | 78 (5) | 75 (3)
Umbilicus (cm) | 0.91 [0.89, 0.93] | 2 [1, 3] * | (−4, 8) | 62 (4) | 51 (6) | 58 (3)
Waist (cm) | 0.96 [0.94, 0.97] | 2 [−6, 6] | (−2, 6) | 46 (4) | 46 (6) | 46 (3)
Hip (cm) | 0.89 [0.86, 0.91] | 2 [1, 3] * | (−4, 8) | 69 (4) | 70 (5) | 69 (3)
Left Humerus (cm) | 0.85 [0.80, 0.89] | 0 [−1, 1] | (−4, 4) | 62 (5) | 76 (6) | 67 (4)
Right Humerus (cm) | 0.85 [0.80, 0.89] | 0 [−1, 1] | (−4, 4) | 67 (5) | 69 (6) | 68 (4)
Left Femur (cm) | 0.90 [0.87, 0.92] | 2 [1, 3] * | (−2, 6) | 53 (5) | 42 (6) | 49 (4)
Right Femur (cm) | 0.92 [0.89, 0.94] | 2 [1, 3] * | (−2, 5) | 56 (4) | 50 (6) | 54 (4)

* Statistically significant bias between the measurement methods. ** RCI = 2 cm for all measures except Hip, which has RCI = 3 cm.
The minimum criterion for convergent validity between Gulick tape and LS3D was met for each body circumference location. Even the most conservative estimate of the correlation coefficient (the lower bound of the 95% CI) met or exceeded 0.8 at all 8 sites, and convergent validity for the waist was very high (r > 0.94).

There was no statistically significant group bias at 4 of the 8 body circumference locations (Neck, Waist, Left Humerus, Right Humerus). Group bias was statistically different from zero between Gulick tape and LS3D at the other 4 locations (Umbilicus, Hip, Left Femur, and Right Femur); at each of these locations, LS3D measured 2 cm higher than Gulick tape on average. The largest limits of agreement occurred at the Umbilicus and Hip, which also had significant biases, with upper limits 8 cm higher as measured by LS3D relative to Gulick tape.

Individual agreement between LS3D and Gulick tape was moderate for the neck, for which 75% of the participants were within the RCI of 2 cm (Table 2, Figure 1). However, individual agreement between LS3D and Gulick tape was low for the 7 other locations, ranging from 46 to 69% agreement within 2 cm of Gulick tape (Table 2). RCIs were 2 cm for all locations except the Hip, which had an RCI of 3 cm (Figure 1).

Figure 1. Scatter plots comparing circumferences at the 8 different anatomical sites as measured by LS3D (x-axis) and Gulick tape (y-axis). The dashed lines represent the RCIs, which are 2 cm for all measures except at the hip (RCI = 3 cm). The solid line is the unity line. Males are represented as filled circles, while females are represented as open triangles.

3.2. Waist-To-Hip Ratio

In the determination of WHR, LS3D outperformed LS2D and demonstrated high convergent and concurrent validity with Gulick tape. LS2D did not meet the 0.7 threshold for convergent validity when using the conservative lower bound of the 95% CI of the Pearson coefficient, and it had a statistically significant group bias of 0.02 with LoA ranging from −0.05 to 0.10 (Table 3). Individual agreement between LS2D and Gulick tape was high, with 85% of participants falling within the RCI of 0.06 (Figure 2).
In contrast, LS3D showed high convergent validity, very little bias, and narrow limits of Sensors 2020, 20, 6165 7 of 15 agreement ranging from −0.06 to 0.06 (Table 3). Individual agreement between LS3D and Gulick tape was also high with 87.1% of participants falling within the RCI of 0.04 (Figure 2). Table 3. Validity measures of WHR as measured by Gulick tape, LS2D, and LS3D. Waist-to-Hip Ratio Measurements Convergent Validity Concurrent Validity Pearson Coefficient r [95% CI] Bias [95% CI] 95% Limits of Agreement Individual % Agreement ** Male Female Total Gulick and LeanScreen 2D 0.73 [0.66, 0.78] 0.02 [0.01, 0.03] * (−0.05, 0.10) 86.5 (2.8) 82.3 (4.3) 85.0 (2.4) Gulick and LeanScreen 3D 0.81 [0.75, 0.85] 0.00 [−0.01, 0.01] (−0.06, 0.06) 87.8 (2.7) 85.5 (4.0) 87.1 (2.2) *—indicates statistically significant bias between the measurement methods. **—RCIs were 0.06 for LS2D and 0.04 for LS3D. Sensors 2020, 20, x FOR PEER REVIEW 7 of 15 3.2. Waist-To-Hip Ratio In the determination of WHR, LS3D outperformed LS2D and demonstrated high convergent and concurrent validity with Gulick tape. LS2D did not meet the 0.7 threshold for convergent validity if using the conservative lower bound of the 95% CI of the Pearson coefficient, and had a statistically significant group bias of 0.02 with LoA ranging from −0.05 to 0.10 (Table 3). Individual agreement between LS2D and Gulick tape was high with 85% of participants falling within the RCI of 0.06 (Figure 2). In contrast, LS3D showed high convergent validity, very little bias, and narrow limits of agreement ranging from −0.06 to 0.06 (Table 3). Individual agreement between LS3D and Gulick tape was also high with 87.1% of participants falling within the RCI of 0.04 (Figure 2). Table 3. Validity measures of WHR as measured by Gulick tape, LS2D, and LS3D. 
Figure 2. Scatter plots comparing WHR as measured by LS2D, LS3D, and Gulick tape. The dashed lines represent the RCIs, which are 0.06 for LS2D and 0.04 for LS3D. The solid line is the unity line. Males are represented as filled circles, while females are represented as open triangles.

3.3. Body Fat Percentage

Convergent validity was acceptable for all comparisons between methods. Concurrent validity for the two conventional standards showed a small group bias with wide LoA and individual agreement in approximately 7 out of 10 participants. Concurrent validity for LS2D and the two conventional methods equaled or exceeded that of the conventional comparison. Concurrent validity for LS3D was lower than the conventional comparison as a result of a consistent overestimation of BF%. Of note, due to equipment malfunction in the late phase of data collection, fewer subjects were assessed with BIA than with skinfolds—accounting for the sample size discrepancy observed in Figure 3.
For the two criterion standard methods of BF%—skinfolds and BIA—there was high convergent validity (0.82) and a small, but statistically significant, bias of 1.7% for the group (Table 4). LoA between skinfolds and BIA spanned 14.7% (−5.7 to 9.0) for the group. The RCI based on ICCs between the conventional methods was 4.4% and was set as the standard for all subsequent comparisons between LS and conventional methods. Individual agreement between the conventional methods indicated that 68.1% (approximately 7 out of 10) of participants would not measure differently between methods, which corresponds to the significant bias observed (Figure 3).

Figure 3. Scatter plots comparing BF% as measured by LS2D, LS3D, skinfolds, and BIA. The dashed lines represent the RCI, which was 4.4% for all comparisons. The solid line is the unity line. Males are represented as filled circles, while females are represented as open triangles.
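The group-level statistics reported throughout this section (bias as the mean of the paired differences, and 95% LoA as bias ± 1.96 times the SD of the differences, following Bland and Altman [34,36]) can be reproduced with a short sketch. The paired values below are illustrative placeholders, not study data.

```python
import statistics

def bland_altman(a, b):
    """Bias (mean difference) and 95% limits of agreement for two methods.

    `a` and `b` are paired measurements of the same quantity.
    """
    diffs = [x - y for x, y in zip(a, b)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)                # sample SD of the differences
    loa = (bias - 1.96 * sd, bias + 1.96 * sd)  # 95% limits of agreement
    return bias, loa

# Hypothetical paired BF% readings (illustrative values only)
skinfolds = [18.2, 24.5, 31.0, 15.8, 27.3]
bia       = [17.0, 25.1, 28.4, 14.9, 26.0]
bias, (lower, upper) = bland_altman(skinfolds, bia)
print(f"bias = {bias:+.1f}%, 95% LoA = ({lower:.1f}, {upper:.1f})")
```

The "span" of the LoA quoted in the text (e.g., 14.7% for skinfolds vs. BIA) is simply the upper limit minus the lower limit.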
In the comparison of BF% as determined by BIA and LS2D, the threshold for convergent validity was met (0.77) and a small, but statistically significant, bias of 1.8% (LS2D overestimated relative to BIA) existed for the group (Table 4). LoA between BIA and LS2D spanned 16.6% (−6.5, 10.1) for the group. Individual agreement between BIA and LS2D indicated that 65.6% (approximately 7 out of 10) of participants would not measure differently between methods, which corresponds to the significant bias observed (Figure 3). Group bias, span of LoA, and individual agreement between BIA and LS2D were all similar to the concurrent validity between the criterion standards. Table 4.
Validity measures of body fat percentage as determined by LS2D, LS3D, skinfolds, and BIA.

Comparison | Pearson r [95% CI] | Bias [95% CI] | 95% LoA | Male % Agreement (SD) | Female % Agreement (SD) | Overall % Agreement (SD) **
BIA and Skinfolds | 0.86 [0.82, 0.90] | 1.7 [1.1, 2.3] * | (−5.7, 9.0) | 65.3 (4.9) | 72.1 (5.4) | 68.1 (3.7)
BIA and LeanScreen2D | 0.82 [0.77, 0.87] | 1.8 [1.2, 2.4] * | (−6.5, 10.1) | 58.9 (5.0) | 75.0 (5.3) | 65.6 (3.7)
Skinfolds and LeanScreen2D | 0.83 [0.78, 0.86] | 0.5 [0.0, 1.0] | (−6.8, 7.8) | 74.7 (3.6) | 82.1 (4.2) | 77.4 (2.7)
BIA and LeanScreen3D | 0.82 [0.76, 0.86] | 4.8 [4.1, 5.5] * | (−3.7, 13.3) | 38.3 (5.0) | 47.7 (6.2) | 42.1 (3.9)
Skinfolds and LeanScreen3D | 0.82 [0.77, 0.86] | 3.4 [2.9, 3.9] * | (−4.1, 10.8) | 57.7 (4.0) | 69.2 (5.2) | 61.7 (3.2)

*—indicates statistically significant bias between the measurement methods. **—RCI = 4.4% for all measures.

In the comparison of BF% as determined by skinfolds and LS2D, the threshold for convergent validity was met (0.78) and no significant bias existed in the group (Table 4). LoA between skinfolds and LS2D spanned 14.6% (−6.8, 7.8) for the group. Individual agreement between skinfolds and LS2D indicated that 77.4% (approximately 8 out of 10) of participants would not measure differently between methods, which corresponds to the absence of significant bias observed (Figure 3). Group bias and span of LoA between skinfolds and LS2D were smaller, and individual agreement was higher, than the same measures of concurrent validity between conventional methods.

In the comparison of BF% as determined by BIA and LS3D, the threshold for convergent validity was met (0.76) and a statistically significant bias of 4.8% (LS3D overestimated relative to BIA) existed for the group (Table 4). LoA between BIA and LS3D spanned 17.0% (−3.7, 13.3) for the group.
Individual agreement between BIA and LS3D indicated that 42.1% (approximately 4 out of 10) of participants would not measure differently between methods, which corresponds to the significant bias in the group (Figure 3). Compared to the measures of concurrent validity between conventional methods (skinfolds and BIA), group bias was three times larger, LoA were wider, and individual agreement was lower between BIA and LS3D.

In the comparison of BF% as determined by skinfolds and LS3D, the threshold for convergent validity was met (0.77) and a statistically significant bias of 3.4% (LS3D overestimated relative to skinfolds) existed for the group (Table 4). LoA between skinfolds and LS3D spanned 14.9% (−4.1, 10.8) for the group. Individual agreement between skinfolds and LS3D indicated that 61.7% (approximately 6 out of 10) of participants would not measure differently between methods, which corresponds to the significant bias in the group (Figure 3). Compared to the measures of concurrent validity between conventional methods, group bias was two times larger, LoA were slightly wider, and individual agreement was lower between skinfolds and LS3D.

4. Discussion

This investigation examined the validity of novel 2D digital photography (LS2D) and 3D photonic scanning (LS3D) software compared to conventional criterion methods of determining body composition. The RCI based on the conventional-methods comparison was set as an objective maximum threshold of agreement for all BF% comparisons. In the determination of BF%, LS2D exhibited higher agreement than LS3D with BIA and skinfold methods. However, a comparison with reference literature suggests LS3D has high agreement with DXA [40]. In the assessment of WHR, LS3D demonstrated high agreement relative to Gulick tape despite poor agreement on absolute waist and hip circumferences. We recommend choosing either LS2D or LS3D as a proxy for BF% based on the criterion desired by the user.
Future investigations comparing LS3D to DXA are needed to confirm the potential agreement between these methods. Our validity measures are congruent with existing literature comparing conventional methods used to estimate BF%. In two comparisons to DXA—the current gold standard—Chen et al. reported that BIA underestimated DXA by 3.7% with LoA that spanned 16.4% (n = 711), and MacDonald et al. reported that the Department of Defense equations underestimate DXA by 4.8% with LoA that spanned 13.5% (n = 148) [27,40]. In our comparison between conventional methods, BIA underestimated skinfolds by 1.7% with LoA spanning 14.7% (n = 163). In our various comparisons between LS and conventional methods, biases ranged from 0.5% (LS2D vs. skinfolds) to 4.8% (LS3D vs. BIA), and LoA spans ranged from 14.6 to 17.0%. It is notable that the reported span of the LoA for BF% is up to 16.4% between conventionally accepted methods. Pearson correlation coefficients were comparable to the values of 0.82–0.83 reported in Marx et al. [41]. To our knowledge, no comparison for individual agreement is available, so we reference the 7 out of 10 participant agreement between our conventional methods, BIA and skinfolds. WHR outcomes are also in a comparable range to previous work by Marx et al. [41].

The suitability of LS2D and LS3D to replace conventional BF% methods must be guided by the user's preferred method of measurement. LS2D exhibited higher concurrent validity with the skinfolds and BIA methods than LS3D. As such, if the user prefers skinfolds as their standard, LS2D would be an appropriate replacement due to its lack of bias, similar LoA, and high individual agreement (8 of 10 participants agreeing). Similarly, if the user prefers BIA, LS2D would be an appropriate replacement due to its lower bias and comparable LoA relative to other comparisons of conventional methods (BIA vs. DXA, BIA vs. skinfolds).
If absolute agreement is not required, the high convergent validity (>0.82) indicates that LS2D is able to reliably detect a change in BF% with either skinfolds or BIA as the standard method of measurement. LS2D overestimated BIA in the current study (1.8%) but was reported as underestimating DXA (3.3%) by Chen et al., suggesting that LS2D may approximate DXA better than BIA. However, the direct comparison between LS2D and DXA reported in MacDonald et al. indicates a similar agreement of BIA and LS2D with DXA.

In the measurement of BF%, LS3D exhibited lower concurrent validity than LS2D when compared to conventional methods. The overestimation by LS3D of 4.8% (relative to BIA) and 3.4% (relative to skinfolds), as well as the poor individual agreement (4 of 10 participants relative to BIA and 6 of 10 participants relative to skinfolds), indicates that LS3D is a poor replacement for these criterion measures when absolute agreement with BF% measures is required by the user. However, LS3D would be an appropriate replacement for determining group means in research and for reliably detecting a change in BF%, given its strong convergent validity (>0.76 and >0.77) and comparable LoA with both BIA and skinfold methods [32].

When evaluating our results in the context of a well-powered comparison of BIA and DXA, circumstantial evidence suggests LS3D may approximate DXA. Chen et al. compared BF% measured by a similar 8-electrode BIA (Tanita BC-418) and DXA in a sample of 711 Chinese participants [40]. When our data are superimposed with Chen et al. (Figure 4), their similarities become apparent—LS3D and DXA both measure higher values than BIA with a similar bias (3.7% vs. 4.8%) and comparable span of LoA (16.4% and 17.0%). The inferred bias of LS3D relative to DXA is approximately 1%, which indicates that, although LS3D does not agree well with BIA or skinfolds, it may have high agreement with DXA.
This assumes that the BIA devices in each study perform similarly in the populations tested. Both were 8-electrode devices, which allow a multi-segmental approach to measuring fat in the human body based on five heterogeneous cylinders with separately measured resistances. We previously demonstrated the superior reliability of an 8-electrode BIA device compared to a 4-electrode device [42]. Future work assessing the accuracy of LS3D should include a comparison to DXA, which is considered most accurate.

Figure 4. Bland-Altman plots comparing BF% as measured by BIA and LS3D (current study) and overlaid with BIA and DXA (Chen et al.) [40]. Both studies, which are well powered, show a similar bias of DXA and LS3D to BIA and similar span of LoA. Color and formatting in each study are preserved, with lighter colors representing Chen et al.
In contrast to concurrent validity, convergent validity was consistently high across all comparisons in this study, excluding WHR as measured by LS2D and Gulick tape. This is despite use of the most conservative estimate—the lower bound of the 95% confidence interval for the Pearson coefficient. High convergent validity indicates that almost all proxy method pairs were indeed measuring the same underlying construct. In practice, this means these methods vary proportionally with one another and can reliably detect changes.

In the measurement of WHR, a clinically useful body composition measure, LS3D performed much better than LS2D when compared to the Gulick tape. LS3D exhibited no bias, narrow LoA (−0.06 to 0.06), and high individual agreement (9 out of 10 participants agreeing). Notably, the high WHR validity occurred despite LS3D having low agreement with the Gulick tape on absolute waist and hip circumference measurements, indicating high proportional accuracy between measurement sites. In contrast, LS2D demonstrated a small but statistically significant overestimation of WHR—albeit with high individual agreement (85%). In practice, for the acquisition of WHR, LS3D is a reasonable substitute for Gulick tape despite low concurrent validity on the circumference measurements individually. This advantage is compounded by the practical ease of using a 3D scanner compared to the often-cumbersome nature of a physical tape measure.

Interestingly, our results indicate a gender discrepancy in the determination of BF%. Across all method pair comparisons in the assessment of concurrent validity, female subjects demonstrated higher individual agreement (Table 4).
However, this gender effect was largely attenuated in the assessment of circumferences (Table 2) and reversed in the assessment of WHR (Table 3). One possible etiology of this discrepancy might be differences in clothing worn—males wore only fitted shorts, while females wore fitted shorts and a sports bra. Further investigation into gender discrepancies in digital anthropometry may be warranted.

We have attempted to provide a thorough interpretation of convergent and concurrent validity that extends beyond group inference to expectations for individual measurements when using LS as a proxy measure for body composition. Although the correlation coefficient is an excellent indicator of the convergent form of construct validity for an entire group of measurements, it does not provide information about agreement and concurrent validity for a group or individual measurement [35]. Similarly, bias (mean difference) provides information about how a given method will perform across a sample of similar size and characteristics; if the bias is consistent, the proxy method can be adjusted by subtracting the mean difference from the proxy measurements [43]. Although more applicable than bias for understanding individual measurements, the LoA, when reported, are not interpreted in the context of standard measurement comparisons. The LoA are an estimate of the "extreme values" within which nearly all difference pairs will fall [36]. In a previous comparison using LS2D, the "wide LoA" of 14.0% was cited as one of the reasons for not recommending its use as a proxy for BF% [27]. However, we found that the spans of the LoA in MacDonald et al., this study, and others using LS were the same as or sometimes even lower than the LoA between conventional measures of BF%. To compensate for the lack of inference on individuals, we calculated the percent agreement based on thresholds of the RCI [37].
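The RCI-thresholded agreement count used in this study can be sketched as follows. The RCI formulation shown (SEM derived from the ICC, scaled by 1.96·√2) is one common reliable-change-style formulation assumed for illustration, and the paired values are hypothetical, not study data.

```python
import math

def rci_from_icc(sd, icc, z=1.96):
    """Reliable-change-style index: SEM = SD * sqrt(1 - ICC),
    RCI = z * sqrt(2) * SEM (an assumed formulation for this sketch)."""
    sem = sd * math.sqrt(1.0 - icc)
    return z * math.sqrt(2.0) * sem

def percent_agreement(a, b, rci):
    """Share of paired measurements whose difference lies within +/- RCI."""
    within = sum(1 for x, y in zip(a, b) if abs(x - y) <= rci)
    return 100.0 * within / len(a)

# Hypothetical BF% pairs evaluated against the study's 4.4% RCI threshold
method_a = [18.2, 24.5, 31.0, 15.8, 27.3, 22.0]
method_b = [17.0, 25.1, 28.4, 21.9, 26.0, 30.1]
print(percent_agreement(method_a, method_b, rci=4.4))  # 4 of 6 pairs agree
```

Each pair either agrees or disagrees against the a priori threshold, so the result can be stated directly as a proportion of participants.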
This approach determines whether a data point agrees or disagrees based on an a priori maximum acceptable threshold, with results that can be conveyed in clinically useful language such as "8 out of 10 participants will measure the same as the BIA method" [35]. We set the threshold for agreement in this study to 4.4 BF%, which is the RCI calculated using the ICC method between BIA and skinfolds, two accepted methods [37]. Indexing the RCI to a comparison between two conventional methods provided a common standard to be applied for all BF% comparisons. Our group and MacDonald et al. used similar methods with a priori thresholds set at 3 BF% and 4 BF%, respectively [27,42]. These thresholds, which were stricter than the conventional methods themselves could achieve in our analysis, appear aspirational for all proxy BF% calculations.

The most apparent limitation of this study was the lack of DXA as a criterion comparison in the assessment of BF%. DXA is widely considered the gold standard for assessing BF%, though it is not without drawbacks—namely its cost and scarcity. While DXA was not feasible in this study, we previously demonstrated concurrent validity of the R20 BIA instrument when compared to DXA, albeit in a small convenience sample [42]. We chose BIA and skinfolds as conventional methods in this study due to their ubiquity of use—thus making our results more relevant to the typical user. Future studies should investigate the convergent and concurrent validity between LS technology and DXA directly.

5. Conclusions

Despite tremendous advancement in digital anthropometry over the past decade, the field remains in its infancy. Yet its potential as a viable solution to the cumbersome nature of traditional measurement methods should not be underestimated, especially in the context of personal fitness and health monitoring. In most cases, there exists a trade-off between accuracy and practicality across anthropometric tools.
In the current study, we demonstrate that LS2D has high convergent and concurrent validity for BF% when compared to BIA and skinfolds, while LS3D has lower measures of concurrent validity. In comparison with another large study, LS3D may agree well with DXA, but this needs further verification in a well-designed study. In contrast, LS3D exhibited high concurrent validity with Gulick tape in the assessment of WHR, while LS2D exhibited lower concurrent validity. Because gold-standard instrumentation is unlikely to become more compact, simple, and affordable for everyday use, efforts must be aimed at increasing the validity of existing modalities such as digital photography and photonic scanning.

Author Contributions: Conceptualization, E.V.N., B.A.D., R.A.S.; Methodology, E.V.N., R.A.S., T.S.; Software, E.V.N., R.A.S., T.S.; Validation, E.V.N., R.A.S.; Formal Analysis, E.V.N., R.A.S.; Investigation, E.V.N., R.A.S., T.S.; Data Curation, E.V.N., R.A.S.; Writing—Original Draft Preparation, R.A.S., E.V.N.; Writing—Review & Editing, E.V.N., R.A.S., T.S.; Supervision, B.A.D. All authors have read and agreed to the published version of the manuscript.

Funding: This research received no external funding.

Acknowledgments: We thank Bradley Davidson, Biomechanically, LLC for his assistance in methodology and data analysis.

Conflicts of Interest: This study did not receive any funding. The authors declare that there are no conflicts of interest.

References

1. Burke, G.L.; Bertoni, A.G.; Shea, S.; Tracy, R.; Watson, K.E.; Blumenthal, R.S.; Chung, H.; Carnethon, M.R. The impact of obesity on cardiovascular disease risk factors and subclinical vascular disease. Arch. Intern. Med. 2008, 168, 928. [CrossRef] [PubMed]
2. Golay, A.; Ybarra, J. Link between obesity and type 2 diabetes. Best Pract. Res. Clin. Endocrinol. Metab. 2005, 19, 649–663. [CrossRef] [PubMed]
3.
Benjamin, E.J.; Blaha, M.J.; Chiuve, S.E.; Cushman, M.; Das, S.R.; Deo, R.; De Ferranti, S.D.; Floyd, J.; Fornage, M.; Gillespie, C.; et al. Heart disease and stroke statistics—2017 update: A report from the American Heart Association. Circulation 2017, 135. [CrossRef] [PubMed]
4. Flegal, K.M.; Kruszon-Moran, D.; Carroll, M.D.; Fryar, C.D.; Ogden, C.L. Trends in obesity among adults in the United States, 2005 to 2014. JAMA 2016, 315, 2284. [CrossRef] [PubMed]
5. Coppetti, T.; Brauchlin, A.; Müggler, S.; Attinger-Toller, A.; Templin, C.; Schönrath, F.; Hellermann, J.; Lüscher, T.F.; Biaggi, P.; Wyss, C.A. Accuracy of smartphone apps for heart rate measurement. Eur. J. Prev. Cardiol. 2017, 24, 1287–1293. [CrossRef]
6. Yan, B.P.; Chan, C.K.; Li, C.K.; To, O.T.; Lai, W.H.; Tse, G.; Poh, Y.C.; Poh, M.Z. Resting and postexercise heart rate detection from fingertip and facial photoplethysmography using a smartphone camera: A validation study. JMIR mHealth uHealth 2017, 5, e33. [CrossRef]
7. Brinkløv, C.F.; Thorsen, I.K.; Karstoft, K.; Brøns, C.; Valentiner, L.; Langberg, H.; Vaag, A.A.; Nielsen, J.S.; Pedersen, B.K.; Ried-Larsen, M. Criterion validity and reliability of a smartphone delivered sub-maximal fitness test for people with type 2 diabetes. BMC Sports Sci. Med. Rehabil. 2016, 8, 31. [CrossRef]
8. Schoeppe, S.; Alley, S.; Van Lippevelde, W.; Bray, N.A.; Williams, S.L.; Duncan, M.J.; Vandelanotte, C. Efficacy of interventions that use apps to improve diet, physical activity and sedentary behaviour: A systematic review. Int. J. Behav. Nutr. Phys. Act. 2016, 13, 127. [CrossRef]
9. Yang, H.; Kang, J.-H.; Kim, O.; Choi, M.; Oh, M.; Nam, J.; Sung, E. Interventions for preventing childhood obesity with smartphones and wearable device: A protocol for a non-randomized controlled trial. Int. J. Environ. Res. Public Health 2017, 14, 184. [CrossRef]
10. Scott, B.K.; Miller, G.T.; Fonda, S.J.; Yeaw, R.E.; Gaudaen, J.C.; Pavliscsak, H.H.; Quinn, M.T.; Pamplin, J.C.
Advanced digital health technologies for COVID-19 and future emergencies. Telemed. e-Health 2020. [CrossRef]
11. Sust, P.P.; Solans, O.; Fajardo, J.C.; Peralta, M.M.; Rodenas, P.; Gabaldà, J.; Eroles, L.G.; Comella, A.; Muñoz, C.V.; Ribes, J.S.; et al. Turning the crisis into an opportunity: Digital health strategies deployed during the COVID-19 outbreak. JMIR Public Health Surveill. 2020, 22, e19106. [CrossRef] [PubMed]
12. Inkster, B.; O'Brien, R.; Selby, E.; Joshi, S.; Subramanian, V.; Kadaba, M.; Schroeder, K.; Godson, S.; Comley, K.; Vollmer, S.J.; et al. Digital health management during and beyond the COVID-19 pandemic: Opportunities, barriers, and recommendations. JMIR Ment. Health 2020, 7, e19246. [CrossRef] [PubMed]
13. Patel, M.S.; Asch, D.A.; Volpp, K.G. Wearable devices as facilitators, not drivers, of health behavior change. JAMA 2015, 313, 459. [CrossRef] [PubMed]
14. Guo, Y.; Bian, J.; Leavitt, T.; Vincent, H.K.; Vander Zalm, L.; Teurlings, T.L.; Smith, M.D.; Modave, F. Assessing the quality of mobile exercise apps based on the American College of Sports Medicine guidelines: A reliable and valid scoring instrument. J. Med. Internet Res. 2017, 19, e67. [CrossRef]
15. Jo, E.; Lewis, K.; Directo, D.; Kim, M.J.; Dolezal, B.A. Validation of biofeedback wearables for photoplethysmographic heart rate tracking. Sport Sci. Med. 2016, 15, 540–547.
16. Lee, J.; Finkelstein, J. Consumer sleep tracking devices: A critical review. Stud. Health Technol. Inf. 2015, 210, 458–460.
17. Shieh, C.; Knisely, M.R.; Clark, D.; Carpenter, J.S. Self-weighing in weight management interventions: A systematic review of literature. Obes. Res. Clin. Pract. 2016, 10, 493–519. [CrossRef]
18. Zheng, Y.; Klem, M.L.; Sereika, S.M.; Danford, C.A.; Ewing, L.J.; Burke, L.E. Self-weighing in weight management: A systematic literature review. Obesity 2015, 23, 256–265. [CrossRef]
19. Banack, H.R.; Wactawski-Wende, J.; Hovey, K.M.; Stokes, A.
Is BMI a valid measure of obesity in postmenopausal women? Menopause 2018, 25, 307–313. [CrossRef]
20. Romero-Corral, A.; Somers, V.K.; Sierra-Johnson, J.; Thomas, R.J.; Collazo-Clavell, M.L.; Korinek, J.E.; Allison, T.G.; Batsis, J.A.; Sert-Kuniyoshi, F.H.; Lopez-Jimenez, F. Accuracy of body mass index in diagnosing obesity in the adult general population. Int. J. Obes. 2008, 32, 959–966. [CrossRef]
21. Wickramasinghe, V.P.; Cleghorn, G.J.; Edmiston, K.A.; Murphy, A.J.; Abbott, R.A.; Davies, P.S.W. Validity of BMI as a measure of obesity in Australian white Caucasian and Australian Sri Lankan children. Ann. Hum. Biol. 2005, 32, 60–71. [CrossRef] [PubMed]
22. Zhu, S.; Wang, Z.; Heshka, S.; Heo, M.; Faith, M.S.; Heymsfield, S.B. Waist circumference and obesity-associated risk factors among whites in the third National Health and Nutrition Examination Survey: Clinical action thresholds. Am. J. Clin. Nutr. 2002, 76, 743. [CrossRef] [PubMed]
23.
Dalton, M.; Cameron, A.J.; Zimmet, P.Z.; Shaw, J.E.; Jolley, D.; Dunstan, D.W.; Welborn, T.A.; AusDiab Steering Committee. Waist circumference, waist-hip ratio and body mass index and their correlation with cardiovascular disease risk factors in Australian adults. J. Intern. Med. 2003, 254, 555–563. [CrossRef] [PubMed]
24. Zeng, Q.; Dong, S.-Y.; Sun, X.-N.; Xie, J.; Cui, Y. Percent body fat is a better predictor of cardiovascular risk factors than body mass index. Braz. J. Med. Biol. Res. 2012, 45, 591–600. [CrossRef]
25. Lee, S.Y.; Gallagher, D. Assessment methods in human body composition. Curr. Opin. Clin. Nutr. Metab. Care 2008, 11, 566–572. [CrossRef] [PubMed]
26. Boland, D.M.; Neufeld, E.V.; Ruddell, J.; Dolezal, B.A.; Cooper, C.B. Inter- and intra-rater agreement of static posture analysis using a mobile application. J. Phys. Ther. Sci. 2016, 28, 3398–3402. [CrossRef] [PubMed]
27. Macdonald, E.Z.; Vehrs, P.R.; Fellingham, G.W.; Eggett, D.; George, J.D.; Hager, R. Validity and reliability of assessing body composition using a mobile application. Med. Sci. Sports Exerc. 2017, 49, 2593–2599. [CrossRef] [PubMed]
28. American College of Sports Medicine. ACSM's Guidelines for Exercise Testing and Prescription, 9th ed.; Wolters Kluwer/Lippincott Williams & Wilkins Health: Philadelphia, PA, USA, 2014.
29. O'Connor, D.P.; Bray, M.S.; McFarlin, B.K.; Sailors, M.H.; Ellis, K.J.; Jackson, A.S. Generalized equations for estimating DXA percent fat of diverse young women and men. Med. Sci. Sports Exerc. 2010, 42, 1959–1965. [CrossRef] [PubMed]
30. Davidson, L.E.; Wang, J.; Thornton, J.C.; Kaleem, Z.; Silva-Palacios, F.; Pierson, R.N.; Heymsfield, S.B.; Gallagher, D. Predicting fat percent by skinfolds in racial groups. Med. Sci. Sports Exerc. 2011, 43, 542–549. [CrossRef]
31. Heyward, V. ASEP methods recommendation: Body composition assessment. J. Exerc. Physiol. Online 2001, 4, 1–12.
32. Carlson, K.D.; Herdman, A.O.
Understanding the impact of convergent validity on research results. Organ. Res. Methods 2012, 15, 17–32. [CrossRef] 33. Portney, L.G.; Watkins, M.P. Foundations of Clinical Research: Applications to Practice; Pearson/Prentice Hall: Upper Saddle River, NJ, USA, 2009. 34. Bland, J.M.; Altman, D.G. Statistical methods for assessing agreement between two methods of clinical measurement. Lancet 1986, 1, 307–310. [CrossRef] 35. Giavarina, D. Understanding bland altman analysis. Biochem. Med. 2015, 25, 141–151. [CrossRef] 36. Bland, J.M.; Altman, D.G. Measuring agreement in method comparison studies. Stat. Methods Med. Res. 1999, 8, 135–160. [CrossRef] [PubMed] 37. Stolarova, M.; Wolf, C.; Rinker, T.; Brielmann, A. How to assess and compare inter-rater reliability, agreement and correlation of ratings: An exemplary analysis of mother-father and parent-teacher expressive vocabulary rating pairs. Front. Psychol. 2014, 5, 509. [CrossRef] [PubMed] 38. Revelle, W.R. psych: Procedures for Personality and Psychological Research. Published Online. 2017. Available online: https://www.scholars.northwestern.edu/en/publications/psych-procedures-for-personality- and-psychological-research (accessed on 6 September 2020). 39. Lehnert, B. BlandAltmanLeh: Plots (Slightly Extended) Bland-Altman Plots Version 0.3.1 from CRAN. Available online: https://rdrr.io/cran/BlandAltmanLeh/ (accessed on 6 September 2020). 40. Chen, K.T.; Chen, Y.Y.; Wang, C.W.; Chuang, C.L.; Chiang, L.M.; Lai, C.L.; Lu, H.K.; Dwyer, G.B.; Chao, S.P.; Shih, M.K.; et al. Comparison of standing posture bioelectrical impedance analysis with DXA for body composition in a large, healthy Chinese population. PLoS ONE 2016, 11, e0160105. 
[CrossRef] http://dx.doi.org/10.1038/ijo.2008.11 http://dx.doi.org/10.1080/03014460400027805 http://www.ncbi.nlm.nih.gov/pubmed/15788355 http://dx.doi.org/10.1093/ajcn/76.4.743 http://www.ncbi.nlm.nih.gov/pubmed/12324286 http://dx.doi.org/10.1111/j.1365-2796.2003.01229.x http://www.ncbi.nlm.nih.gov/pubmed/14641796 http://dx.doi.org/10.1590/S0100-879X2012007500059 http://dx.doi.org/10.1097/MCO.0b013e32830b5f23 http://www.ncbi.nlm.nih.gov/pubmed/18685451 http://dx.doi.org/10.1589/jpts.28.3398 http://www.ncbi.nlm.nih.gov/pubmed/28174460 http://dx.doi.org/10.1249/MSS.0000000000001378 http://www.ncbi.nlm.nih.gov/pubmed/28719493 http://dx.doi.org/10.1249/MSS.0b013e3181dc2e71 http://www.ncbi.nlm.nih.gov/pubmed/20305578 http://dx.doi.org/10.1249/MSS.0b013e3181ef3f07 http://dx.doi.org/10.1177/1094428110392383 http://dx.doi.org/10.1016/S0140-6736(86)90837-8 http://dx.doi.org/10.11613/BM.2015.015 http://dx.doi.org/10.1177/096228029900800204 http://www.ncbi.nlm.nih.gov/pubmed/10501650 http://dx.doi.org/10.3389/fpsyg.2014.00509 http://www.ncbi.nlm.nih.gov/pubmed/24994985 https://www.scholars.northwestern.edu/en/publications/psych-procedures-for-personality-and-psychological-research https://www.scholars.northwestern.edu/en/publications/psych-procedures-for-personality-and-psychological-research https://rdrr.io/cran/BlandAltmanLeh/ http://dx.doi.org/10.1371/journal.pone.0160105 Sensors 2020, 20, 6165 15 of 15 41. Marx, R.; Porcari, J.P.; Doberstein, S.; Mikat, R.; Ryskey, A.; Foster, C. Ability of the leanscreen app to accurately assess body composition. Comp. Exerc. Physiol. 2017, 13, 59–66. [CrossRef] 42. Dolezal, B.A.; Lau, M.J.; Abrazado, M.; Storer, T.W.; Cooper, C.B. Validity of two commercial grade bioelectrical impedance analyzers for measurement of body fat percentage. J. Exerc. Physiol. Online 2013, 16, 74–83. 43. Choudhary, P.K.; Nagaraja, H.N. Measuring agreement in method comparison studies—a review. Adv. Rank. Sel. Mult. Comp. Reliab. 2007, 2802, 215–244. 
work_6aflx4xxqbfr7mp2hr2bxxxice ---- A Colourimetric Vacuum Air-Pressure Indicator

Yusufu, D., & Mills, A. (2019). A Colourimetric Vacuum Air-Pressure Indicator. The Analyst, 144(20), 5947-5952. https://doi.org/10.1039/c9an01507h
Published in: The Analyst
Document Version: Peer reviewed version
Copyright 2019 Royal Society of Chemistry.
A Colourimetric Vacuum Air-Pressure Indicator

Dilidaer Yusufu and Andrew Mills*
Department of Chemistry and Chemical Engineering, Queen's University Belfast, David Keir Building, Stranmillis Road, Belfast, BT9 5AG, UK
e-mail: andrew.mills@qub.ac.uk

Abstract
A colourimetric vacuum air-pressure indicator is described, based on the very low level of CO2 in air. The indicator uses the pH indicator dye, ortho-cresolphthalein, OCP, which is violet coloured in its deprotonated form and colourless when protonated. When the violet coloured OCP anion is ion-paired with the tetrabutylammonium cation, the product is readily dissolved in a non-aqueous solution containing the polymer ethyl cellulose to create an ink which, when cast and allowed to dry, responds to levels of CO2 well below that in air, i.e. << 0.041%; the indicator's halfway colour-changing point is at 0.062 atm of air at 22 °C, which is interesting in that in food vacuum packaging the pressure in the pack is usually ca. 0.04 atm. The indicator can be used as a qualitative and quantitative indicator of vacuum air pressure. The latter requires the use of digital photography, coupled to RGB colour analysis, in the analysis of the indicator's colour. As with most CO2 indicators, the indicator's response is temperature sensitive, with ΔH = 78 ± 5 kJ mol-1. The indicator's 90% response and recovery times to a cycle of vacuum and air were 16.2 and 2.7 min, respectively.
The efficacy of the indicator as a vacuum-package integrity indicator for food packaging is illustrated and other potential applications are discussed briefly. This is the first reported example of an ink-based, inexpensive vacuum air-pressure indicator.

Keywords: indicator; vacuum; packaging; carbon dioxide; low pressure; colourimetric

1. Introduction
Vacuum packaging (VP) is a method of packaging which removes air from the package prior to sealing and, amongst other things, is commonly used in wholesale and retail food packaging (especially of fresh and processed meat and fish, cheese, chocolate, sweets and many different dried goods, such as seaweed and rice); after packaging, the pressure inside the pack is typically ca. 0.04 atm.1 VP can extend the shelf-life of foods by days, although more often weeks and, in some cases, months.1 In addition, vacuum packagers can be found in most household and restaurant kitchens because of their low cost and ease of use. VP is also used for pharmaceutical and medical products, electronic components, such as semiconductors, microchips, memory chips, sub panels, motherboards, PLCs and RAM,2 and coins and collectables.3 Interestingly, despite its widespread use, there is no quick, simple, inexpensive method for measuring the vacuum pressure inside such packages and so no routine way to assess package integrity after VP.
As a consequence, VP quality control is usually limited to periodic sampling of the package line by the packager, often as infrequently as one in every 300-400 packages, and there is little or no subsequent testing of package integrity as the package makes its way to the retailer and then consumer.1

It has been demonstrated that it is possible to use reversible luminescence-based oxygen indicators to assess the vacuum level in VP,4,5 but these utilise relatively expensive (typically $4-30 each6) luminescent indicator 'dots' or strips and costly excited-state lifetime measuring equipment.6 As a consequence, the use of oxygen indicators in food packaging, for example, is largely limited to packaging research.4,5 Although a number of different colourimetric O2 indicator strips have been reported, mainly based on redox indicators,7 they are all irreversible, as noted by Wang and Wolfbeis in their seminal review on O2 indicators and, as a consequence, cannot be used for monitoring O2 levels quantitatively and so cannot be used for monitoring vacuum air pressures.8

As a consequence, there is a real need for an inexpensive, quick, easy to use, reversible colour-based vacuum air-pressure indicator which allows both: (i) a qualitative assessment of vacuum package integrity by eye and (ii) a quantitative assessment of the pressure inside the package, by digital photography and a colour analysis App. The former feature will reassure the consumer regarding packaging integrity and absence of tampering, whilst the latter will improve quality control along the distribution chain from packager to retailer.

The novel approach presented here towards generating a colour-based vacuum air-pressure indicator is through the detection of the CO2 that is in air. Obviously, there is not much CO2 in air, with the average level of CO2 in air being ca. 411 ppmv (i.e.
0.041%), but what there is could be used to provide a measure of the ambient air pressure using a very sensitive CO2 indicator. It is assumed here that the vacuum-packaged product, be it a food stuff, medical product etc., does not itself alter the CO2 level in the package. Thus, this new vacuum air-pressure indicator would not be appropriate if the product respired, such as in the case of vacuum-packed meat, for example.6 However, as we shall see, it would be appropriate for non-respiring food stuffs, such as most dry goods.

In the early 90's this group developed a number of different CO2 indicators derived from solvent-based inks that contained an ion-pair of a pH-indicating dye anion, D-, with a lipophilic quaternary ammonium cation, Q+, such as the tetraoctyl ammonium cation, so that the ion-pair's formula was: D-Q+xH2O;9,10 note: like many such ion-pairs, D-Q+xH2O retains a few molecules of water even in non-aqueous solvents. Due to the phase transfer nature of the ion-pair, D-Q+xH2O readily dissolves in a solvent-based ink and when allowed to dry produces a coloured, water-insoluble plastic film that responds to CO2, as if the dye were dissolved in water. The key process can be summarised as follows:

    D-Q+xH2O + CO2 ⇌ HCO3-Q+(x-1)H2O.DH        (1)
     Colour A                  Colour B

where D-Q+xH2O and HCO3-Q+(x-1)H2O.DH are the deprotonated (D-) and protonated (HD) forms of the pH-indicating dye, D.
It follows from the above that

    R = [HD]/[D-] = α × %CO2        (2)

where R is a measure of the transformation of the dye from the deprotonated to protonated form due to the presence of CO2, [HD] and [D-] are the concentrations of Q+HCO3-.HD.(x-1)H2O and Q+D-.xH2O, respectively, and α is a proportionality constant (units: %-1), which provides a measure of the sensitivity of the CO2-sensitive optical sensor under test. Most importantly here, note that it can be shown that the value of α depends upon, amongst other things, the value of the pKa of the dye, increasing with increasing pKa, and the level of additional base present, OH-Q+xH2O, decreasing with increasing [OH-Q+xH2O].11 As a consequence, the sensitivity of the CO2 indicator can be varied, i.e. tuned, by using different pH-indicating dyes with different pKa values.11

Solvent-based CO2-sensitive inks and films dominate the field since, unlike their water-based counterparts, they dry very quickly, and so are more conducive to printing, are not prone to dye leaching by water and are largely insensitive to changes in humidity.

Following on from the initial work of this group, many different CO2 indicators have been reported,12-15 using the same ion-pair technology, but with different dyes, phase transfer agents (often tetrabutyl ammonium hydroxide, TBAOH) and encapsulating polymers (such as ethyl cellulose (EC), silicone and poly vinyl alcohol). However, in almost all cases, this work has focussed on the detection of super-ambient levels of CO2, most notably: ca.
5% for the detection of CO2 in breath, as exemplified by the Easy Cap II detector16 and >10% for the detection of CO2 in modified atmosphere packaging (MAP), as in Insignia Technologies' After Freshness Indicator.17,18 1-5% CO2-responding ion-pair indicators usually employ moderately high pKa dyes, such as meta-cresol purple (MCP; pKa = 8.28) or Cresol Red (CR; pKa = 7.95), whereas >10% CO2-responding indicators use pH-dyes with a lower pKa, such as Phenol Red (PR; pKa = 7.52). It follows that in order to make a super-sensitive CO2 indicator for vacuum air-pressure work, a dye with a much higher pKa is required.11 For this purpose, here we use ortho-cresolphthalein (OCP), which is violet coloured in its deprotonated (D-) anionic form and colourless in its protonated neutral lactone (i.e. DH) form and has a pKa of 9.32. Using such an indicator, this paper describes the preparation and characterisation of the first colourimetric vacuum air-pressure indicator.

2. Experimental

Materials
Unless otherwise stated, all chemicals were purchased from Sigma Aldrich in the highest purity available. All solutions were prepared fresh, and all aqueous solutions were made up using double distilled and deionised water.
The Sigma Aldrich safety data sheet for OCP provides useful details regarding the handling and disposal of this dye and, under the heading 'toxicological information', notes that 'no component of this product present at levels ≥ 0.1% is identified as probable, possible or confirmed human carcinogen by IARC'.19

The super-sensitive CO2 ink containing OCP was prepared as follows: 0.1 g of ortho-cresolphthalein (the pH indicator dye), 2 g methanol (MeOH), 5 ml tetrabutylammonium hydroxide (TBAH) 1 M in methanol (the base, OH-Q+xH2O, which ensures the dye is converted into a lipophilic, deprotonated ion-paired form, D-Q+xH2O, and which provides excess base and so additional control of the value of α)11, 0.5 ml tributyl phosphate (TBP, the plasticiser which aids the rate of diffusion of the CO2 into and out of the indicator film)9,20 and 5 g of an ethyl cellulose (EC) solution, comprising 10 g of EC in a 80/20 (v/v) mix of toluene and EtOH (EC; the water-insoluble polymer encapsulation medium which prevents dye leaching if any water is present),21 were mixed together in a 30 ml jar and stirred for 2 hours using a magnetic flea. The ink was then cast as a thin film (ca. 80 μm when wet and ca. 24 μm dry) onto 50 μm PET film using a K-bar 7 coater;22 the dry film is colourless in air (due to the presence of 0.041% CO2). MeOH is used here, and in most reported CO2 indicator preparations,12-15 as one of the solvents, since the quaternary base is usually sold as a methanol-based solution. Given its low boiling point, and the fact that the indicator is well dried before use, it is unlikely the MeOH, or the other solvents used in the preparation of the ink film, present any significant health risk. However, it is worth
However,  it  is  worth  mentioning the OCP indicator also works well when covered with a thin layer of Sellotape  which, if required, would then ensure the ink film does not make direct contact with the  packaged product.    Methods  All UV‐vis absorbance measurements were made using an Agilent Technologies CARY 60 UV‐ vis  spectrophotometer.    All  digital  photographs  were  taken  using  a  Cannon  600D  digital  camera  and  all  digital  images  were  processed  for  their  red,  green  and  blue  colour  space  values (i.e. RGB values) using the freely available photo‐processing software, Image J.23  In  one part of the work, a DVP vacuum technology, model: LC.4, vacuum pump was used to  evacuate the spectrophotometric cell containing the indicator.  In another part, a vacuum  chamber, a model: DP1.5 chamber supplied by Applied Vacuum Engineering, was used to  monitor simultaneously; (i) the ambient vacuum pressure (measured using a digital vacuum  meter, a Keller‐Druk Digital manometer, model: LEO Record, placed inside the chamber) and  (ii) the colour of the indicator (through digital photography).  All food was packaged using a  commercial  food  vacuum  packager  made  by  Orved,  model:  Multiple  315.    Unless  stated  otherwise, all work was carried out at 22 oC.    6    3. Result and Discussion  Qualitative analysis  As  noted  above  the  super‐sensitive  CO2  ink  containing  OCP,  the  'vacuum  indicator',  was  colourless in air, but when placed in a 1 cm spectrophotometer cell and evacuated using a  vacuum pump, the film quickly developed a striking violet colour, as illustrated by the digital  images taken of the indicator in figure 1 (a).  
The vacuum in the cell (< 0.002 atm) was then slowly released, over 2 h, allowing ambient air to enter the cell and bring with it sufficient CO2 so as to bleach the indicator, via reaction (1), given the deprotonated and protonated forms of OCP are violet and colourless, respectively. Both the photographed colour of the indicator and its UV-vis absorption spectrum were monitored during this bleaching process and the results of this work are illustrated in figure 1(a) and (b), respectively. These results show that the vacuum indicator can be used at least as a qualitative indicator of sub-ambient, i.e. low, air pressures.

Fig. 1: (a) Digital photographs of the OCP vacuum indicator in an evacuated cuvette as, from left to right, air is slowly (over 2 h) allowed in and (b) simultaneous recording of the UV-vis absorption spectrum of the vacuum indicator (from top to bottom). The insert diagram is a plot of the real absorbance, A, vs the apparent absorbance, A′, vide infra.

Digital and spectrophotometric analysis
Recent work by this group and others has shown that,24-26 for simple colour-changing systems, such as here, digital photography coupled with colour analysis, using red, green and blue colour space values (RGB) obtained from freely available software such as ImageJ,23 can be used to generate apparent absorbance values, A′, which are directly proportional to their real absorbance value, A, counterparts. Apparent absorbance values are calculated using either the red, green or blue component values (ranging from 0-255) of the digital image of the indicator, i.e. RGB(red), RGB(green) and RGB(blue).
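This RGB-to-apparent-absorbance conversion can be sketched in a few lines of Python. The log(255/channel) form is the one the authors give for the red channel (their eqn (3)); the function name and the example channel values below are purely illustrative:

```python
import math

def apparent_absorbance(channel_value):
    """Apparent absorbance A' from a single 8-bit colour channel (0-255),
    A' = log10(255 / channel), by analogy with the Beer-Lambert law:
    255 plays the role of the 100% transmission reference."""
    if not 0 < channel_value <= 255:
        raise ValueError("channel value must be in (0, 255]")
    return math.log10(255 / channel_value)

# A fully bleached (colourless) indicator reflects strongly in the red
# channel, so RGB(red) -> 255 and A' -> 0; as the violet colour develops,
# RGB(red) falls and A' rises.
print(apparent_absorbance(255))          # 0.0
print(round(apparent_absorbance(128), 3))
```

Any colour-picker App that reports raw RGB values per pixel could feed this calculation directly, which is the ease-of-analysis point the authors make below.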
In the case of the OCP vacuum indicator, the RGB(red) component varied most significantly with P and so its values, derived from the digital images illustrated in figure 1(a), were used to calculate the apparent absorbance values, A′, for the vacuum indicator, using the expression:

    A′ = log{255/RGB(red)}        (3)

The direct relationship between the real, A, and apparent, A′, absorbance values for the vacuum indicator is demonstrated by the good fit to a straight line of the data in the insert plot in figure 1(b). In this plot the values of A were derived from the UV-vis absorption spectral data in the main diagram of figure 1(b) and the values of A′, the apparent absorbance values, were derived from colour analysis, and eqn (3), of the associated digital photographs in figure 1(a). The straight-line nature of this plot suggests that simple digital colour photography, coupled with colour analysis, can be used to probe the vacuum pressure response characteristics of the vacuum indicator, instead of the much more expensive and cumbersome method of UV-vis spectroscopy. Note that the digital image colour analysis reported here was carried out using the free software ImageJ,23 but it can also be achieved using a mobile phone and a colour analysis App, of which there are many, as they are often used as an indoor decorating tool to help identify and reproduce a particular colour, as in the case of paint, for example.27-29 This ease of analysis is an important feature if the vacuum indicator is to be used routinely as a quantitative method for measuring vacuum pressure, as well as (by eye) a qualitative method to identify if the vacuum in the pack is still present or not.

Quantitative analysis
The absorbance at the λmax of D-, i.e.
589 nm, of the OCP dye in the vacuum indicator film, A, is a measure of the concentration of [D-] and can be used, via eqns (1) and (2), to calculate a value for R, which is related directly to %CO2, since:

    R = (A0 − A)/(A − A∞) = [HD]/[D-] = α × %CO2        (4)

where A0 is the value of the absorbance due to the pH-indicating dye at λmax(D-) when %CO2 = 0 (i.e. when all the dye is in its deprotonated form) and A∞ is the absorbance of the film when all of D is in its protonated (colourless) form, i.e. as HD, i.e. when %CO2 = ∞; the latter absorbance is assumed here to be that measured when the %CO2 is that of air, i.e. 0.041%, and the film is colourless. In this work the source of CO2 is ambient air, and so it follows that %CO2 is proportional to the air pressure in the cell (or package), P, and so eqn (4) can be rewritten:

    R = (A0 − A)/(A − A∞) = α′ × P        (5)

where α′ is a measure of the sensitivity of the indicator (units: atm-1). Note also that, since the real absorbance, A (from spectrophotometric measurements), is proportional to the apparent absorbance, A′ (from photographic measurements), eqn (5) can be re-written as:

    R = (A′0 − A′)/(A′ − A′∞) = α′ × P        (6)

The above expression suggests that digital photography, coupled with colour analysis, can be used to provide a measure of the ambient vacuum air pressure, P. In order to test this idea, the vacuum pressure indicator was placed in a vacuum chamber which had a clear glass top, so the indicator could be photographed, along with a digital manometer, and evacuated down to ca.
0.002 atm, at which point the originally colourless vacuum indicator was rendered a bright violet colour. Air was then very slowly allowed into the system and the vacuum pressure (monitored using the digital manometer) and indicator colour (monitored by digital photography) were recorded as the air pressure increased, so that the vacuum indicator returned to its colourless (protonated) form. As noted earlier, all this work was carried out at 22 °C and the digital images of this experiment collected over this time are illustrated in figure 2(a). RGB colour analysis of each of these digital images, coupled with eqn (3), allowed the calculation of the apparent absorbance value, A′, of the vacuum indicator at each pressure, P. This data was used to construct the straight-line plot of R, calculated using eqn (6), vs P, illustrated in figure 2(b) (solid black dots) for the VP indicator at 22 °C, from the gradient of which a value for α′ of 16.1 ± 0.9 atm-1 was calculated. A measure of the pressure sensitivity of the vacuum indicator is the value of 1/α′, since this corresponds to the pressure at which the indicator will have lost half its initial violet colour, i.e. the point when R = 1 and [D-] = [HD]; for the vacuum indicator 1/α′ = 0.062 atm at 22 °C. Given a typical vacuum-packaged food product will be at ca. 0.04 atm,1 this value of α′, and so the vacuum indicator itself, appear well-suited for monitoring the pressure inside a vacuum-packed product, so that the indicator is able to reveal any subsequent loss of vacuum pressure, due to a pin-hole leak say, after the initial vacuum packaging process.
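The calibration just described, eqn (6) with α′ = 16.1 atm⁻¹ at 22 °C, turns an apparent absorbance reading into a pressure estimate. The sketch below shows that arithmetic; the limiting values A′0 (full vacuum) and A′∞ (colourless, ambient air) are illustrative assumptions, not values quoted in the paper, and the function name is hypothetical:

```python
ALPHA_PRIME = 16.1  # atm^-1 at 22 C, gradient of R vs P reported in the paper

def pressure_from_apparent_absorbance(a, a0=0.90, a_inf=0.0):
    """Estimate air pressure P (atm) from apparent absorbance a via
    R = (A'0 - a) / (a - A'inf) = alpha' * P   (eqn (6)).
    a0 and a_inf are assumed limiting absorbances (hypothetical values)."""
    r = (a0 - a) / (a - a_inf)
    return r / ALPHA_PRIME

# At the half-colour point, a is midway between a0 and a_inf, so R = 1 and
# P = 1/alpha' = 0.062 atm: the halfway colour-changing point in the abstract.
print(round(pressure_from_apparent_absorbance(0.45), 3))  # 0.062
```

Only the ratio R matters, so the method is insensitive to the overall exposure of the photograph provided all three of A′, A′0 and A′∞ are measured under the same lighting.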
The straight-line plot illustrated in figure 2(b) suggests that, provided the temperature is known, a photographic image of the vacuum indicator inside a vacuum-packaged product can be used to provide a measure of the ambient vacuum pressure, P, inside the pack, i.e. the vacuum indicator can be used for quantitative analysis of ambient vacuum air pressure in a vacuum-packaged product.

Fig. 2: (a) Digital photographs of the OCP vacuum indicator in an evacuated chamber with pressure gauge as, from left to right, air is slowly allowed in at 22 °C; and (b) plot of R vs P for the same system, where R values were calculated using eqn (6) and apparent absorbance values derived from colour analysis of the images of the VP indicator at 17 (red dots), 22 (black dots) and 28 °C.

Temperature Sensitivity
The same experiments reported above, for probing the quantitative analysis properties of the vacuum indicator at 22 °C, were also carried out at 28 and 17 °C and the subsequent plots of R vs P are also illustrated in figure 2(b). From the gradients of these straight-line plots, values of α′ of 9.1 ± 0.8 atm-1 and 29.9 ± 0.7 atm-1 were calculated for 28 and 17 °C, respectively. Since α′ is an equilibrium constant, it is expected, from the van't Hoff equation, that a plot of ln(α′) vs 1/T would yield a straight line with a gradient = −ΔH/R, where ΔH is the enthalpy change associated with reaction (1), with OCP as the pH-indicating dye. Not surprisingly, therefore, a plot of ln(α′) vs 1/T, using the three values of α′ reported above, yielded a good straight line, with a gradient that revealed a value for ΔH of 78 ± 5 kJ mol-1, which is in line with those reported for other CO2-sensitive colourimetric indicators (i.e.
32-77 kJ mol-1).

Response and recovery
The response and recovery characteristics of the vacuum indicator were briefly explored by placing the indicator in a vacuum chamber and exposing it to a cycle of a relatively low (< 0.001 atm) air pressure (rendering it violet coloured) followed by ambient air pressure, 1 atm (rendering it colourless), whilst at the same time monitoring the colour changes via digital photography. The apparent absorbance values, A′, associated with these colour changes were determined from the digital images, and the results of this study are illustrated in figure 3 and reveal 90% response and recovery times for the vacuum indicator, i.e. t90 (colourless to violet; air to vacuum) and t90 (violet to colourless; vacuum to air), of 16.2 and 2.7 min, respectively. These timescales appear reasonable given the suggested application of the indicator, namely as a package integrity indicator for vacuum-packaged food.

Fig. 3: Variation in the apparent absorbance (A′) of the OCP vacuum indicator as a function of time as it was exposed to a cycle of low (< 0.001 atm) and high (1 atm) air pressure, from which t90 (response) and t90 (recovery) values of 16.2 and 2.7 min, respectively, were calculated.

Testing with a vacuum-packaged food product
Finally, the vacuum indicator was stuck to a piece of white card which was then placed inside a store-bought bag of vacuum-packaged risotto rice.30 The product, plus indicator, was then evacuated using a commercial food vacuum packager and, as expected, the indicator rapidly turned bright violet and remained at that colour until opened, when it was immediately rendered colourless.
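Returning briefly to the temperature-sensitivity analysis above: the van't Hoff estimate of ΔH can be reproduced numerically from the three reported α′ values by an ordinary least-squares fit of ln(α′) against 1/T. This is a sketch; only the α′ values and temperatures are taken from the paper, and the fit itself is ours:

```python
import math

# (T in K, alpha' in atm^-1) at 17, 22 and 28 C, as reported in the paper
data = [(290.15, 29.9), (295.15, 16.1), (301.15, 9.1)]
R_GAS = 8.314  # molar gas constant, J mol^-1 K^-1

x = [1.0 / t for t, _ in data]         # 1/T
y = [math.log(a) for _, a in data]     # ln(alpha')
xm, ym = sum(x) / len(x), sum(y) / len(y)
slope = (sum((xi - xm) * (yi - ym) for xi, yi in zip(x, y))
         / sum((xi - xm) ** 2 for xi in x))

# alpha' rises as T falls, so the ln(alpha') vs 1/T gradient is positive;
# its magnitude times the gas constant gives the enthalpy change quoted
# in the paper (78 +/- 5 kJ mol^-1).
delta_h = slope * R_GAS  # J mol^-1
print(round(delta_h / 1000, 1), "kJ/mol")
```

The three points line up well, which is consistent with the good straight line the authors report for their van't Hoff plot.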
From colour analysis of the digital photograph of the OCP indicator in the vacuum-packed rice, and eqn (3), an apparent absorbance value, A′, of 0.38 was calculated, which in turn was used to calculate a value of R of 1.37 which, using the calibration graph illustrated in figure 2, indicates that the vacuum pressure inside the pack was 0.085 ± 0.005 atm. This estimated pressure compared very well with that measured by the digital vacuum meter, 0.084 atm, when the latter was vacuum-packed under the same conditions. Photographs of the package plus indicator before and after opening, and the vacuum-packed digital vacuum meter, are shown in figure 4.

Fig. 4: Digital photograph of the vacuum-packed risotto rice with OCP indicator, before (left) and after opening (middle), and vacuum-packed digital vacuum meter.

Conclusions
The above work shows that it is possible to create a vacuum pressure indicator for both quantitative (as in figure 3) and qualitative applications, using a very sensitive CO2 indicator, based on the pH indicator dye, OCP. Such an indicator may well find an application in vacuum packaging, for which at present there does not exist a simple, inexpensive method for 100% quality control. Although the example illustrated in figure 4 is for vacuum-packaged food, other vacuum-packaged goods, such as pharmaceuticals, medical instruments and electronic components, may also benefit from this technology, as may related technologies, such as pressure-sensitive paints. This is the first reported colourimetric plastic film vacuum indicator and many more are likely to follow, given that they can be readily tuned to respond to different air pressures by using different pH-indicating dyes, with different pKa values, and different base concentrations.11

References
1. R. Perdue, in The Wiley Encyclopedia of Packaging Technology, ed. K. L.
Yam, John Wiley & Sons, New Jersey, 2009, pp. 1259-1264.
2. Electronic vacuum packaging for product protection, https://amactechnologies.com/products/electronics/, August 2019.
3. Wikipedia, Vacuum packing, https://en.wikipedia.org/wiki/Vacuum_packing, August 2019.
4. M. Smiddy, N. Papkovskaia, D. Papkovsky and J. Kerry, Food Research International, 2002, 35, 577-584.
5. M. Smiddy, M. Fitzgerald, J. Kerry, D. Papkovsky, C. O'Sullivan and G. Guilbault, Meat Science, 2002, 61, 285-290.
6. S. Banerjee, C. Kelly, J. P. Kerry and D. B. Papkovsky, Trends in Food Science & Technology, 2016, 50, 85-102.
7. A. Mills, Chemical Society Reviews, 2005, 34, 1003-1011.
8. X.-d. Wang and O. S. Wolfbeis, Chemical Society Reviews, 2014, 43, 3666-3761.
9. A. Mills, Q. Chang and N. McMurray, Analytical Chemistry, 1992, 64, 1383-1389.
10. A. Mills and K. Eaton, Quimica Analitica, 2000, 19, 75-86.
11. A. Mills and Q. Chang, Analytica Chimica Acta, 1994, 285, 113-123.
12. A. Mills, in Sensors for environment, health and security, ed. M. I. Baraton, Springer, Dordrecht, Netherlands, 2009, pp. 347-370.
13. S. M. Borisov, M. C. Waldhier, I. Klimant and O. S. Wolfbeis, Chemistry of Materials, 2007, 19, 6187-6194.
14. C. McDonagh, C. S. Burke and B. D. MacCraith, Chemical Reviews, 2008, 108, 400-422.
15. S. Neethirajan, D. Jayas and S. Sadistap, Food and Bioprocess Technology, 2009, 2, 115-121.
16. Nellcor™ Adult/Pediatric Colorimetric CO2 Detector, https://www.medtronic.com/covidien/en-gb/products/intubation/nellcor-adult-pediatric-colorimetric-co2-detector.html, August 2019.
17. Insignia Technologies Ltd., https://www.insigniatechnologies.com/home.php?video=1, August 2019.
18. D. Yusufu, C. Y. Wang and A. Mills, Food Packaging and Shelf Life, 2018, 17, 107-113.
19. Safety data sheet, https://www.sigmaaldrich.com/catalog/product/aldrich/c85778?lang=en&region=GB, August 2019.
20. A. Mills and L.
Monaf, Analyst, 1996, 121, 535-540.
21. A. Mills and Q. Chang, Sensors and Actuators B: Chemical, 1994, 21, 83-89.
22. K Hand Coater, https://www.rkprint.com/products/k-hand-coater/, August 2019.
23. ImageJ, https://imagej.nih.gov/ij/, August 2019.
24. D. Yusufu and A. Mills, Sensors and Actuators B: Chemical, 2018, 273, 1187-1194.
25. A. Lapresta-Fernández and L. F. Capitán-Vallvey, Analyst, 2011, 136, 3917-3926.
26. D. L. Williams, T. J. Flaherty, C. L. Jupe, S. A. Coleman, K. A. Marquez and J. J. Stanton, Journal of Chemical Education, 2007, 84, 1873-1877.
27. ColorMeter RGB Hex Color Picker and Colorimeter by White Marten, https://itunes.apple.com/us/app/colormeter-rgb-hex-color-picker-and-colorimeter/id713258885?mt=8, August 2019.
28. Color Card and RGB Color Meter by NStart MITech, https://itunes.apple.com/us/app/color-card-and-rgb-color-meter/id1297107041?mt=8, August 2019.
29. Color Mate - Convert and Analyze Colors by David Williames, https://itunes.apple.com/us/app/color-mate-convert-and-analyze-colors/id896088941?mt=8, August 2019.
30. Gran Gallo Risotto rice, http://www.risogallo.co.uk/products/white-rice/gran-gallo-risotto-rice/#, August 2019.

Swinburne Research Bank
http://researchbank.swinburne.edu.au
Gye, L. (2007). Picture this: the impact of mobile camera phones on personal photographic practices. Originally published in Continuum: Journal of Media & Cultural Studies, 21 (2) June, 279-288. Available at: http://dx.doi.org/10.1080/10304310701269107
Copyright © 2007 Taylor and Francis. This is the author's version of the work. It is posted here with the permission of the publisher for your personal use. No further distribution is permitted. If your library has a subscription to this journal, you may also be able to access the published version via the library catalogue.
Accessed from Swinburne Research Bank: http://hdl.handle.net/1959.3/39130

Picture This: the Impact of Mobile Camera Phones on Personal Photographic Practices

Lisa Gye

The convergence of the camera and mobile phone has proved to be highly popular. This should come as no surprise to anyone interested in both the history of mobility and the history of photography.1 As media archaeologist Erkki Huhtamo has asserted, the first mobile medium proper was arguably amateur or personal photography (Huhtamo, 2004). Camera phones are not, however, just another kind of camera. Located as they are in a device that is not only connected to the telecommunications grid but that is usually carried with us wherever we go, camera phones are both extending existing personal imaging practices and allowing for the evolution of new kinds of imaging practices. Given the centrality of personal photography to processes of identity formation and memorialization, changes to the ways in which we capture, store, and disseminate personal photographs through the use of devices like camera phones will have important repercussions for how we understand who we are and how we remember the past. The aim of this paper is to look at the current social uses of personal photography and to consider the impact that camera phones will have on these uses. I will examine the ways in which camera phones are enabling new modes of personal photography, which will extend the role that photographs play in our lives.

The Social Uses of Personal Photography

In order to understand the relationship between existing personal photographic practices and new practices emerging from camera phone usage, we need to understand why people take photographs in the first place. Surprisingly, despite its tremendous popularity as a practice, personal photography has received relatively scant attention from scholars. (There are a few important studies in this field.
See, for example, Sontag, 1978; Bourdieu, 1990; Barthes, 1980; Chalfen, 1987; Hirsch, 1997). One study (Van House et al., 2004) does provide us with a useful set of categories that can help us to understand the reasons why people take personal photographs: in order to construct personal and group memory; in order to create and maintain social relationships; and for the purposes of self-expression and self-presentation. I will first examine each of these categories as they relate to older analogue photographic practices before looking at the ways in which these categories apply to new practices emerging out of camera phone culture.

Constructing Personal and Group Memory

The introduction of celluloid film and easy-to-use box cameras in the late 1880s sparked what Huhtamo describes as a ‘camera epidemic’ in Victorian society (Huhtamo, 2004, website). But this epidemic was far from spontaneous. As Pierre Bourdieu observed in his now famous sociological study of 1965, Photography: a Middle-brow Art, the desire to photograph is not a given—it is socially constructed and culturally specific. Writers who have tracked the evolution of personal photography seem to concur that the rise in its popularity can be directly attributed to the emergence of a correlation in the public imagination between photographic practice and private memorialization. Leading this change in public perception was Kodak Eastman—the premier manufacturer of portable photographic devices in the late nineteenth and early twentieth century. As Liberty Walton notes: By the turn of the century, Kodak no longer promoted the camera’s instantaneous capabilities that were a novelty in the 1888 promotions. Instead, the idea of the snapshot’s value as an aid to memory was promoted. The idea that photography could be used to capture and save moments is evident in Eastman’s advertising campaign, containing such slogans as ‘ . . .
a means of keeping green the Christmas memories.’ 1903: ‘A vacation without a Kodak is a vacation wasted.’ 1904: ‘Where there’s a child, there should the Kodak be. As a means of keeping green the Christmas memories, or as a gift, it’s a holiday delight.’ 1905: ‘Bring your Vacation Home in a Kodak.’ 1907: ‘In every home there’s a story for the Kodak to record—not merely a travel story and the story of summer holidays, but the story of Christmas, of the winter evening gathering and of the house party.’ 1909: ‘There are Kodak stories everywhere’. (Walton, 2002, pp. 26–38) Personal snapshot photography took off on the back of such rhetoric. As Bourdieu notes, as ‘a private technique, photography manufactures private images of private life . . . Apart from a tiny minority of aesthetes, photographers see the recording of family life as the primary function of photography’ (1990, p. 30). Evidence of the importance of photography to private memorialization is all around us—in our own collections and albums. It can also be heard in the often-told story of survivors of natural (or unnatural) disasters—after securing the children and the pets, one must also ensure that the family photographs are rescued as well. Personal photographs not only bind us to our own pasts—they bind us to the pasts of those social groups to which we belong. The documentation of social groups through photography reinforces our connections to others. Photographs are often the sutures that bind the narratives of group memory. As Giuliana Bruno argues: In a post-modern age, memories are no longer Proustian madeleines, but photographs. The past has become a collection of photographic, filmic or televisual images. We, like the replicants [of Blade Runner ], are put in the position of reclaiming a history by means of its reproduction. (Bruno, 1990, p. 
183)

Creating and Maintaining Social Relationships

‘Photos reflect social relationships but they also help to construct and maintain them’ (Van House et al., 2004, p. 7). Exchanging and sharing personal photographs is integral for the maintenance of relationships. One important function of personal photography, one that extends its existence as a material prosthesis for personal memory, is the role it plays as an aid to storytelling. Sharing memories through the creation of narratives around them plays an integral role in the construction and maintenance of personal relationships. As Richard Chalfen argues, personal photography is ‘primarily a medium of communication’ (Chalfen, 1991, p. 5). People share personal photographs in many ways. They send copies of photographs through the mail (and now e-mail) to distant loved ones. They frame the photographs for display in their homes and workplaces. They attach them to pinboards and fridges. They post them to photosharing sites such as Flickr (http://www.flickr.com), Photobucket (http://photobucket.com) or Kodakgallery (http://www.kodakgallery.com). However, while in the strictest sense of the word these activities involve sharing, in reality sharing requires some dialogue and annotation. These activities emphasize display and can operate without any dialogue or annotation at all. But when they do function in silence, their ability to build and maintain social relationships is somewhat diminished. Photographic albums, on the other hand, function by virtue of presence. Not only do they make the subjects of the photographs present to us in the form of their image, as Barthes famously noted, but they require, if they are to enact their primary function, the presence of a narrator and a listener (see Barthes, 1980). The performative act of showing and telling is integral to the ritualized use of personal photography. Not that this, as we all know, is always a pleasant experience.
As Martha Langford points out: Looking at another person’s snapshots, slides, home movies or tapes can indeed be killing: presentations are rarely of short duration, and repetition seems endemic to the genre. The real-life domestic experience is loaded with compensatory pleasures—intimacy, conviviality, emotional investment and perhaps a slice of cake. Inside stories frame the pictures, animating even the most stilted of studio portraits with family secrets and subversive tales. (Langford, 2001, p. 5)

Our preference for face-to-face sharing, coupled with our resistance to annotation, as demonstrated in research undertaken by Nancy Van House and others, has implications for digital photography generally and more specifically for mobile photography (see, in particular, Van House et al., 2004). This will be discussed in more detail below. For now it is enough to note that photography is integral to the creation and maintenance of social relationships and that these relationships are reinforced by activities such as the narration of stories that accompany face-to-face photosharing.

Self-expression and Self-presentation

As an expressive medium, photography has been both celebrated and condemned. While artistic photography has had to fight against the prejudices of the art establishment to become accepted as a legitimate artistic practice, personal photography has been more heartily embraced as an important mode of self-expression. As Susan Sontag notes: ‘[n]obody takes the same picture of the same thing’, so ‘photographs are evidence not only of what’s there but of what an individual sees, not just a record but an evaluation of the world’ (Sontag, 1978, p. 88). Individuals are engaged at all levels of personal photographic production—from the taking of the snapshots and now, with digital photography, to their processing and dissemination. The emphasis on user control is a long-standing feature of personal photography of all kinds.
Ease of use was a key selling point for Kodak Eastman in the early twentieth century and remains so for all kinds of cameras today. Kodak Eastman advertising assured its customers that anyone could be a photographer—even women and children! (for examples of the ‘Eastman version’, see Collins, 1990). What do the photographs we take tell the world about who we are? Presumably, that our view of the world is unique and interesting and that, by virtue of this, so are we. At least that is what we hope. This is related to, but different from, the use of personal photography for self-presentation. Self-presentation relates more to those photographs we take or display of ourselves, our family, our friends, our possessions, our pets, and so on. Photographs which are taken or used for self-presentation reflect the view of our selves that we want to project out into the world.

The Impact of Camera Phones on the Social Uses of Personal Photography

Camera phones participate in the same kind of economy of photography as that just discussed. They can be used to take photographs that help to construct personal and group memories, that maintain and develop social relationships and that allow their users to express and present themselves to the world. However, there are certain affordances specific to the camera phone which mean that these functions are somewhat altered and which allow for new forms of photographic practice. By examining each of these categories separately we may be able to discern the ways in which camera phones reinforce and extend existing photographic practices and the ways in which they create new practices.

Constructing Personal and Group Memory

The fact that we tend to carry our mobile camera phones with us wherever we go means that we now have increased opportunities for taking photographs.
As Dong-Hoo Lee notes in her study of the use of camera phones by women in Korea, ‘with camera phones, taking pictures is experienced as an everyday activity’ (Lee, 2005, website). However, there are certain features of mobile imaging that have prevented the camera phone thus far from becoming the imaging device of choice at important moments in our lives that we may wish to memorialize. The poor resolution of camera phone images has, until recently, meant that significant life events such as weddings, births and so on are still photographed using cameras rather than camera phones. As many images from events such as these end up as prints that can be archived or shared, image resolution is important. Mobile camera phones suffer from similar problems to digital cameras—the very immaterial nature of the technology works against our usual ways of working with personal photographs. If personal photographs operate as a medium of communication, enabling shared conversations and storytelling, then digital camera technologies work against some of the enabling techniques of this practice. For instance, sharing photographs, even when they are in albums, is often a tactile and sensual experience. Photographs are passed between participants who are invited to inspect them more closely. As Sit et al. note: These forms of interaction are well supported with printed photographs in communal spaces, such as gatherings around the kitchen table or the living room sofa. In contrast, this naturalness of interaction has not been duplicated with digitally formatted photos published online. (Sit et al., 2005, p. 1) Creating the same kinds of interaction with digital images on mobile devices is made even more difficult by their location in a highly personalized device. While we may show photos to people on our mobile devices, we are usually reluctant to hand the device over to someone who may not understand the interface and so we keep them at a distance from the images under examination. 
The construction of personal and group memories through photography is inscribed within an oral tradition which, ironically given its primary function, is not facilitated as yet by the mobile camera phone. Efforts are currently being made to allow users to annotate their camera photos and send them to online repositories or directly to family and friends. Nokia’s Lifeblog offers users a commercial and domesticated version of the moblog—as Gerard Goggin points out, Nokia itself carefully avoids the term ‘moblog’ because of its association with technical digital culture, preferring instead to use phrases such as ‘mobile sharing’ and ‘life sharing’ (Goggin, 2005). Nokia is keen to promote Lifeblog as a family-friendly platform and its advertising emphasizes the family connection. Expounding the benefits to consumers, Nokia’s publicists write that: Nokia Lifeblog provides a simple method of capturing your daily experiences and unforgettable moments, like a child’s birth or a friend’s wedding and storing them all in one place. The memories you want to share can be easily posted to the web. The blogs can be accessed by the family, friends and colleagues via a password-protected area, or can be available for general access. (Nokia, 2005)

However, even annotated Lifeblogs are unable to promote the kinds of two-way conversations that are integral to the popularity and endurance of personal photography as a social practice for the construction of personal and group memory.

Creating and Maintaining Social Relationships

While any form of photosharing is bound to assist in the creation and maintenance of social relationships, the kinds of photos that are most often taken with mobile camera phones are those that reinforce the user’s individuality rather than their ties to other groups. Research conducted by Daisuke Okabe and Mizuko Ito clearly supports this idea.
They argue that: the social function of the camera phone differs from the social function of the camera in some important ways. In comparison to the traditional camera, most of the images taken by camera phone are short-lived and ephemeral. The camera phone is a more ubiquitous and lightweight presence, and is used for more personal, less objectified viewpoint and sharing among intimates. Traditionally, the camera would get trotted out for special excursions and events—noteworthy moments bracketed off from the mundane . . . The camera phone tends to be used more frequently as a kind of archive of a personal trajectory or viewpoint on the world, a collection of fragments of everyday life. (Okabe & Ito, 2006) This supports Daniel Palmer’s argument that the Nokia moment is far more intimate than the Kodak moment (Palmer, 2005). It is also reflected in the kinds of images that are captured using mobile camera phones. Okabe and Ito have observed that ‘[w]ithin the broader ecology of personal record-keeping and archiving technologies, camera phone images occupy a niche that is more personal, fleeting, and commonplace’ (Okabe & Ito, 2006). Camera phones and other mobile multimedia, according to Ilpo Koskinen, tend to participate in aesthetics of banality. That is, the images captured with these devices tend to focus on the mundane, trivial aspects of everyday life. Koskinen argues that ‘[p]eople capture ordinary things in immediate life and share them with friends and acquaintances in monadic clusters that become even more emotionally and relationally more self-reliant than before’ (Koskinen, 2005, p. 15). Other studies have pointed to a distinction between captured camera phone images that are affective and captured camera phone images that are functional (Kindberg et al., 2005). Under the taxonomy that they developed as a result of their research, Kindberg et al. 
argue that affective images can be taken to enhance a mutual experience or to share an experience with someone who is absent, either in the moment or later. Affective images can also be taken to be used for personal reflection, rather than to share with others. Interestingly, almost half of the photos taken by the participants in their study fell into this category, something which supports Okabe and Ito’s assertion that the camera phone is primarily a personal imaging device. Similarly, functional images could be divided into three categories: images that support a mutual task, images that serve as a reminder either to the self or others to perform a remote task, or images that serve as a personal reminder to perform a practical, individual task. Regardless of whether images being captured on camera phones are affective or functional, most researchers studying camera phone use agree that the images themselves are far more individualized, mundane and everyday than much of the personal photography that preceded it. While the images may be more individualized, the connectivity of camera phones presents us with new ways to maintain our social relationships through photosharing. Instantaneous photosharing can lead to a practice that is peculiar to mobile camera phones and which Ito describes as ‘intimate, visual co-presence’ (Ito, 2005). Just as text messaging has allowed people to remain in perpetual contact, Ito argues that by keeping in touch through picture messaging camera phone users are able to create a shared visual space—a sense of presence created through visual intimacy. Mark Federman takes this argument one step further by claiming that: One of the most important effects of massively multi-way, instantaneous and ubiquitous communications is pervasive proximity. We experience everyone to whom we are connected—and conceivably everyone to whom we are potentially connected—as if they are exactly next to us. 
The effect is that of hundreds, or thousands, or millions of people coming together in zero space, so that there is no perceptible distance between them. (Federman, 2006)

As appealing as this may seem, reality suggests that this kind of scenario is not widespread. Sharing images through services such as MMS (Multimedia Messaging Services) has not yet taken off in most parts of the world; much of the sharing that takes place with camera phones still occurs in face-to-face situations rather than through the sending of images. This is partly because of the cost of sending MMS. However, it is also because users are unclear about such things as handset compatibility and how to send the images (see, for example, Kindberg et al., 2005).

Self-expression and Self-presentation

While intimate, visual co-presence may yet be some way off, there is no doubt that the increasing pervasiveness of personal camera technologies, exemplified by the mobile camera phone, is leading to a change in the way in which we visualize the world. As Van House et al. argue: ‘Ready access to imaging encourages people to see the world “photographically”—as images, and to see beauty and interest in the everyday. And easy internet-based sharing creates an audience’ (Van House et al., 2005). Mobile camera phones certainly increase our ability to express ourselves and to present our unique view of the world to others. However, the transitory nature of camera phone images means that self-expression is shifting away from ‘this is what I saw then’ to ‘this is what I see now’. The photo message in particular participates in this economy of presence. As Mark Federman argues, ‘[p]hotographs, clippings, scrapbooks, and even works of art and sculpture as forms of cultural memory give way to ephemeral artefacts that exist for a brief instant in the span of time, as a sharing of experience itself’ (Federman, 2006, p. 5).
Seeing the world photographically is also, as Susan Sontag points out, in itself somewhat problematic. As she reminds us, ‘[w]hile there appears to be nothing that photography can’t devour, whatever can’t be photographed becomes less important’ (Movius, 1975). Translated to aspects of public life such as news and current affairs, the increasing use of mobile camera images by mainstream media organizations, which in itself is an interesting phenomenon, serves to remind us that the term ‘newsworthy’ has now become interchangeable with the term ‘pictureworthy’. If an event is unable to be captured by an imaging device, its value is diminished. Hence the somewhat perverse desire that appears to have developed in Western cultures to document all aspects of our lives lest we are deemed not to exist. This point leads us to ask what kinds of resistances have appeared to the use of such pervasive imaging devices. Despite its incredible popularity as a medium, misuses of photography and intrusive photographic practices have long been of concern to people with an interest in both personal privacy and the public sphere. Huhtamo reminds us that this goes back as far as the introduction of portable cameras in the late nineteenth century: Their activities developed into a kind of distributed panopticon—anybody anywhere could be the target of a snapshot. Caricaturists often interpreted such intrusions as sexually motivated (the pleasure beach being a favourite setting), but the camera also seemed to have a de-humanizing effect on the person carrying it. In a telling cartoon a group of ladies are seen pointing their cameras at a man hanging from a branch of a tree over water, struggling for his life. Instead of terror or empathy, the faces of the ladies show excitement about the photogenic event; as can be expected, none of them makes the slightest effort to run for rescue. 
(Huhtamo, 2004)

This distrust of the roaming photographer and his/her panoptic technology is resurfacing today in the current distrust of the mobile camera phone. While phone and camera manufacturers scramble to find ways to promote the use of mobile camera phones for the documentation of private life, public concern about the inappropriate use of these technologies is escalating. These two competing discourses are set to shape the evolution of the mobile camera phone and its attendant practices and software. Finally, revisiting Koskinen’s argument with regard to the banality of mobile imaging, one might also argue that the intensely intimate nature of camera phone imagery reinforces a movement towards what Richard Sennett has called the ‘tyranny of intimacy’ (Sennett, as quoted in Palmer, 2003). Drawing on Sennett, Palmer argues: We are permeated with narcissism, not as self-love but in terms of the exclusive reference to ourselves, which asks: What does this event mean to me? The narcissistic subject searches for and expects ‘real’ and intense experiences only within the framework of their own needs and expectations. (Palmer, 2005, p. 163)

By reinforcing the intensely personal, camera phones may also participate in this narrow economy of self, one that becomes almost asocial in its emphasis on the individual and its refusal of the more broadly social.

Conclusion

There is no doubt that mobile camera phones are having an impact on the established ways in which we record and archive our personal and group memories, create and maintain social relationships and express and present ourselves to our friends, family and the world. As camera resolutions improve and as users become more comfortable sharing their photographs through MMS, photosharing sites and camera technologies such as Bluetooth, we will undoubtedly see many more changes to the ways in which we undertake these activities. Research in this area is, at best, nascent.
Solid empirical work needs to be undertaken if we are to really understand the current and potential impacts of camera phone technologies on our culture and our lives. For now we can say that camera phones are set to extend our way of looking at the world photographically and in doing so bring changes to how we understand ourselves and that world.

Note

[1] According to a published report by ABI Research, by 2008 more than one billion camera phones will be in service in markets throughout the world. While sales numbers in Australia are difficult to locate, analysts have noted that four years after Sharp and J-Phone introduced the first camera phone to the Japanese market, 85 per cent of handsets sold in Japan in 2005 had a built-in camera. In Korea, the number is closer to 99 per cent. See ‘Mobile phone imaging: opportunities for driving usage of camera phones through click/send/print’ (ABI Research, 2005, http://www.abiresearch.com/products/market_research/Mobile_Phone_Imaging), ‘Camera phones: disruptive innovation for imaging’ (Market and trend report, 5th Version of 11 Oct. 2004, http://www.eurotechnology.com/store/camera-phone/) and ‘Camera phones to get 99% of local market in 2005—Korea’ (Camera Phone Reviews, 22 Nov. 2004, http://www.livingroom.org.au/cameraphone/archives/camera_phones_to_get_99_of_local_market_in_2005_korea.php).

References

Barthes, R. (1980) Camera Lucida: Reflections on Photography, Random House, London.
Bourdieu, P. (1990) Photography: a Middle-brow Art, Polity Press, Cambridge.
Bruno, G. (1990) ‘Ramble city: postmodernism and Blade Runner’, in Alien Zone, ed. A. Kuhn, Verso Books, New York, pp. 183–195.
Chalfen, R. (1987) Snapshot Versions of Life, Bowling Green State University Popular Press, Bowling Green, OH.
Chalfen, R. (1991) Turning Leaves: the Photographic Collections of Two Japanese American Families, University of New Mexico Press, Albuquerque.
Collins, D. (1990) The Story of Kodak, Abrams, New York.
Federman, M.
(2006) ‘Memories of now’, Receiver, 15, p. 4. Available at: http://www.receiver.vodafone.com/15/articles/pdf/15_08.pdf (accessed 23 Mar. 2007).
Goggin, G. (2005) ‘“Have fun and change the world”: moblogging, mobile phone culture and the Internet’, Paper delivered at BlogTalk Downunder, Sydney, 20 May. Available at: http://incsub.org/blogtalk/?page_id=119 (accessed 23 Mar. 2007).
Hirsch, M. (1997) Family Frames: Photography, Narrative, and Postmemory, Harvard University Press, Cambridge, MA.
Huhtamo, E. (2004) ‘An archaeology of mobile media’, Keynote address, ISEA. Available at: http://www.isea2004.net/register/view_attachment.php?id=6230.
Ito, M. (2005) ‘Intimate visual co-presence’, Position paper for the Seventh International Conference on Ubiquitous Computing, Tokyo, 11–14 September 2005. Available at: http://www.spasojevic.org/pics/PICS/ito.ubicompos.pdf (accessed 23 Mar. 2007).
Kindberg, T., Spasojevic, M., Fleck, R. & Sellen, A. (2005) ‘The ubiquitous camera: an in-depth study of camera phone use’, IEEE Pervasive Computing, vol. 4, no. 2, Apr.–Jun.
Koskinen, I. (2005) ‘Seeing with mobile images: towards perpetual visual contact’, in A Sense of Place: The Global and the Local in Mobile Communication, ed. K. Nyiri, Passagen Verlag, Vienna.
Langford, M. (2001) Suspended Conversations: the Afterlife of Memory in Photographic Albums, McGill-Queen’s University Press, Québec City.
Lee, D. (2005) ‘Women’s creation of camera phone culture’, Fibreculture Journal, no. 6. Available at: http://journal.fibreculture.org/issue6/issue6_donghoo.html
Movius, G. (1975) ‘An interview with Susan Sontag’, Boston Review, Jun. Available at: http://bostonreview.net/BR01.1/sontag.html (accessed 23 Mar. 2007).
Nokia. (2005) ‘Lifeblog backgrounder’, Press document. Available at: http://www.nokia.com.
Okabe, D. & Ito, M.
(2006) ‘Everyday contexts of camera phone use: steps toward technosocial ethnographic frameworks’, in Mobile Communication in Everyday Life: an Ethnographic View, eds J. Ho¨flich & M. Hartmann, Frank & Timme, Berlin. Palmer, D. (2003) ‘The paradox of user control’, in Digital Art and Culture Melbourne 2003 Conference Proceedings, RMIT, Melbourne, pp. 167–172. Palmer, D. (2005) ‘Mobile exchanges’, Paper presented at the Vital Signs conference, ACMI, Melbourne, 8 Sept. Sit, R. Y., Hollan, J. D. & Griswold, W. D. (2005) ‘Digital photos as conversational anchors’, Proceedings of the 38th Hawaii International Conference on System Sciences, Hawaii, 3–6 January 2005, IEEE Hawaii, Available at: http://ieexplore.ieee.org/iel5/9518/30166/01385459.pdf?amumber¼1385459 (accessed 23 Mar. 2007). Sontag, S. (1978) On Photography, Farrar, Straus & Giroux, New York. Van House, N., Davis, M., Ames, M., Finn, M. & Viswanathan, V. (2005) ‘The uses of personal networked digital imaging: an empirical study of cameraphone photos and sharing’, Paper presented at CHI 2005, Portland, Oregon, 2–7 Apr. Van House, N. A., Davis, M., Takhteyev, Y., Ames, M. & Finn, M. (2004) ‘The social uses of personal photography: methods for projecting future imaging applications’, Available at: http://www.sims.berkeley.edu/- , vanhouse/vanhouseetal2004b.pdf. Walton, L. (2002) ‘How shall I frame myself?’, British Columbia Historical News, vol. 35, no. 4, pp. 26–38. work_6cvpz6kjxne2pbgq2vknhhywrm ---- [PDF] Characterizing lunch meals served and consumed by pre-school children in Head Start. | Semantic Scholar Skip to search formSkip to main content> Semantic Scholar's Logo Search Sign InCreate Free Account You are currently offline. Some features of the site may not work correctly. DOI:10.1017/S1368980013001377 Corpus ID: 5277919Characterizing lunch meals served and consumed by pre-school children in Head Start. 
T. Nicklas, Y. Liu, J. Stuff, J. Fisher, J. Mendoza & C. O'Neil (2013) 'Characterizing lunch meals served and consumed by pre-school children in Head Start', Public Health Nutrition, vol. 16, no. 12, pp. 2169–2177.

Abstract. OBJECTIVE: To examine the variability of food portions served and consumed by African-American and Hispanic-American pre-school children attending Head Start. DESIGN: Cross-sectional. SETTING: Food consumption by pre-schoolers (n 796) enrolled in sixteen Head Start centres in Houston, Texas (51% boys, 42% African-American, mean age 4 years) was assessed during 3 d of lunch meals using digital photography. Descriptive statistics and multilevel regression models, adjusting for classroom and …
work_6drhumhdwzg5zooyli62tmdgpy ---- The Development of Indicator Cotton Swabs for the Detection of pH in Wounds

Sensors, Article

The Development of Indicator Cotton Swabs for the Detection of pH in Wounds

Cindy Schaude 1, Eleonore Fröhlich 2, Claudia Meindl 2, Jennifer Attard 3, Barbara Binder 4 and Gerhard J. Mohr 1,*
1 JOANNEUM RESEARCH Forschungsgesellschaft mbH-Materials, Franz-Pichler-Straße 30, A-8160 Weiz, Austria; Cindy.Schaude@joanneum.at
2 Medical University of Graz, Center for Medical Research, Stiftingtalstraße 24, A-8010 Graz, Austria; Eleonore.Froehlich@klinikum-graz.at (E.F.); Claudia.Meindl@klinikum-graz.at (C.M.)
3 Green Chemistry Centre of Excellence, University of York, York YO10 5DD, UK; ja1024@york.ac.uk
4 Department of Dermatology and Venereology, Medical University of Graz, Auenbruggerplatz 8, A-8036 Graz, Austria; Barbara.Binder@medunigraz.at
* Correspondence: Gerhard.Mohr@joanneum.at; Tel.: +43-316-876-3404
Academic Editor: W. Rudolf Seitz
Received: 24 April 2017; Accepted: 1 June 2017; Published: 12 June 2017

Abstract: Indicator cotton swabs have been developed in order to enable faster, less expensive, and simpler information gathering on wound status. Swabs are normally used for cleaning the wound, but here they were covalently functionalized with indicator chemistry. Thus, they in principle enable simultaneous wound cleaning and wound pH detection. Using an indicator dye with a color change from yellow to red, combined with an inert dye of blue color, a traffic-light color change from green to red is induced when pH increases.
The indicator cotton swabs (ICSs) show a color change from green (appropriate wound pH) to red (elevated wound pH). This color change can be interpreted by the naked eye as well as by an optical color measurement device in order to obtain quantitative data based on the CIE L*a*b* color space. Two types of swabs have been developed—indicator cotton swabs ICS1 with a sensitive range from pH 5 to 7 and swabs ICS2 with a sensitive range from 6.5 to 8.5. The swabs are gamma-sterilized and the effect of sterilization on performance was found to be negligible. Furthermore, cytotoxicity testing shows cell viability and endotoxin levels to be within the allowable range.

Keywords: visual indicator; wound pH; pH indicator; sensor swabs; cotton swabs; traffic-light response

1. Introduction
Currently, the evaluation of wounds is a qualitative rather than a quantitative issue, and accordingly, the effect of local treatment on wound healing is not well quantified. One important parameter is wound size [1], which is monitored by size scaling methods or via photography. Furthermore, coloration and wetness of wounds are described [2,3]. The intensity of pain felt by patients is another assessment method [4]. Lastly, smell may be evaluated by a nurse, but this is difficult to describe reproducibly [5], although recent approaches attempt to make use of electronic noses [6]. Size, color, wetness, and smell are regularly reported because these parameters are easily attainable, while detection of wound colonization by specific bacteria requires much more time-consuming and costly methods such as immunoassays [7] and polymerase chain reaction [8]. Alternatively, the determination of pH in wounds may be a relevant biomarker to monitor wound healing and to foster the proper treatment of chronic wounds. If so, then the therapy can be adjusted accordingly, e.g., if the pH is too high for efficient healing, then distinct treatment to change wound pH should be applied [9].
Sensors 2017, 17, 1365; doi:10.3390/s17061365; www.mdpi.com/journal/sensors

Currently, wound pH measurement is essentially limited to electrochemical sensors [10]. The pH electrode is certainly the first choice for pH measurements, and attempts have been made to correlate wound healing with changes in wound pH [11]. Unfortunately, the pH electrode measures protons exclusively in an aqueous environment and is defined for pure water containing only small amounts of salts [12]. Wound liquids do not conform to this specification, as they contain salts and biomolecules (e.g., sodium and potassium chloride, creatinine, glucose, lysozyme, matrix metalloproteinases, and proteins) in varying concentrations [13]. Another issue concerning the use of electrodes with humans is that patients do not want to have direct contact with electric current. Finally, patients do not want to be exposed to electrodes that have previously been used for other patients, even if they have been sterilized beforehand. Nevertheless, experimental data confirm that wounds do not heal properly at a pH above 8 [14–16]. Optical sensors have comparable limitations to electrodes in that they are, e.g., affected by ionic strength and the composition of matrix materials [17]. However, they are advantageous over electrodes with respect to their price. While it is quite expensive to discard each pH electrode after use, it is much cheaper to discard an optical sensor layer after use. This is why pH indicator paper has become so widely used [18]. It is cheap, and consequently a single-use item, and it gives very rapid visual information, albeit of course with limited accuracy.
Optical sensor layers for the detection of wound pH have already been presented [19,20] and have shown that the healing of split wounds is accompanied by a continuous decrease in pH [21]. Accordingly, while the pH of a split wound was 8.56 on the first day, the pH decreased within 14 days to a value of 6.23 [21]. The optical measurements were performed via fluorescence imaging, which gave accurate pH values over the whole area of the skin, not only at specific locations as in the case of pH electrodes. Unfortunately, the chemically unstable fluorescent dyes were only sterilizable in ethanol solutions, and not by standard techniques such as gamma irradiation and ethylene oxide treatment. Furthermore, some patients may not wish to be evaluated via the camera-type devices necessary for fluorescence measurements. In contrast, they are already accustomed to being treated with cotton swabs for wound cleaning. Based on the above considerations, we have decided to develop a simpler approach to determine wound pH. The test system is a combination of cotton swabs used for cleaning wounds and pH indicator strips used for pH measurement. In order to make this system useful for doctors and nursing personnel, it has to be cheap, the measurement must be visible and easily interpretable, and of course the material must be sterilizable and non-toxic. Accordingly, we developed pH indicator cotton swabs where the indicator chemistry is covalently immobilized to avoid contamination of wounds. Then, the effect of gamma sterilization on the response of the indicator cotton swabs was evaluated, and the cytotoxicity of the swabs was tested. Finally, we performed preliminary measurements in a simulated environment, e.g., the pH of wet wound dressings and the pH of horse serum samples were measured.

2. Materials and Methods

2.1. Materials
The chemicals for coloring the cellulose-based cotton swabs and the buffers for spectral evaluation (sodium carbonate, sodium hydroxide, concentrated sulfuric acid, sodium dihydrogen phosphate, disodium hydrogen phosphate, and hydrochloric acid, all of analytical reagent grade), RINGER tablets for the preparation of RINGER'S solution, as well as disodium 1-amino-9,10-dioxo-4-[3-(2-sulfonatooxyethylsulfonyl)anilino]-anthracene-2-sulfonate, also known as Remazol Brilliant Blue R (RBBR), were from Aldrich (Vienna, Austria). The pH indicator dyes 2-fluoro-4-[4-(2-hydroxyethanesulfonyl)-phenylazo]-6-methoxyphenol (GJM-492) and 4-[4-(2-hydroxyethanesulfonyl)-phenylazo]-2,6-dimethylphenol (GJM-503) were from Joanneum Research Forschungsgesellschaft mbH (Weiz, Austria). The sterile cotton swabs (Cod.6100/SG/CS) with a length of 150 mm, a plastic shaft, and a Rayon head were from Nuova Aptaca S.R.L. (Canelli, Italy).

2.2. Fabrication of Indicator Cotton Swabs
In a typical immobilization procedure, 50 mg portions of GJM-492 or GJM-503 were treated with 0.5 g of concentrated sulfuric acid for 30 min at room temperature [22,23]. This converted the 2-hydroxyethylsulfonyl group of the respective indicator dye into the sulfonate. Then, the mixture was poured into 360 mL of distilled water and neutralized with 1.0 mL of a 30% sodium hydroxide solution. At this stage, 40 mg of the inert dye RBBR in 40 mL of water was added, followed by the addition of 12.5 g of sodium carbonate in 100 mL of water and 2.5 mL of a 30% sodium hydroxide solution. The cotton swabs were placed into this dyeing bath. Upon the addition of sodium hydroxide and sodium carbonate, the dye sulfonates were converted into vinylsulfonyl derivatives and coupled via Michael addition to the hydroxyl groups of the cellulose.
After 30 min, the colored cotton swabs were removed from the dyeing bath and washed with copious amounts of distilled water until a green color was obtained, indicating the full removal of the alkaline reaction medium. The combination of GJM-492 with RBBR gave indicator cotton swabs type 1 (ICS1), while the combination of GJM-503 with RBBR gave indicator cotton swabs type 2 (ICS2).

2.3. Measurements
A glass microelectrode pH meter (Hanna Instruments) was used to measure the pH of the buffered solutions. A 0.1 mol·L−1 phosphate buffer composed of sodium dihydrogen phosphate and disodium hydrogen phosphate was used. In order to reach pH values outside the normal buffer range, 1.0 mol·L−1 aqueous sodium hydroxide or 1.0 mol·L−1 hydrochloric acid were added. In order to represent a more realistic sample, horse serum (donor herd, USA origin, sterile-filtered, suitable for cell culture, suitable for hybridoma) from Sigma was used and was adjusted in pH using 6 N hydrochloric acid. The indicator swabs were dipped into the buffer solution for 10 s and afterwards placed under a color measurement device at 20 ± 2 °C. The color changes were given in a* values, which represent the green–red axis (negative values indicate green while positive values indicate magenta) of the L*a*b* color space, where dimension L* represents lightness, and a* and b* represent the color-opponent dimensions [24]. The pKa values were calculated by taking the a* values of the indicator cotton swabs and plotting them against pH, fitting the corresponding data with the Boltzmann fit of OriginPro 8.6 G, and calculating the points of inflection of the resulting sigmoidal graphs. The fit function was also used to calculate pH values from a* data of horse serum. Due to the smaller size of the cotton swabs relative to the illumination area of the device, black non-reflective uncoated paper was chosen as a standard color background for the cotton swabs.
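The Boltzmann fit used here to extract pKa (the inflection point x0 of the sigmoidal a*-versus-pH curve) can be reproduced outside OriginPro. The sketch below uses SciPy's `curve_fit` on illustrative, synthetic calibration data; the parameter values are assumptions chosen only to resemble the magnitude of swab readings, not the authors' measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def boltzmann(pH, A1, A2, x0, dx):
    """Boltzmann sigmoid: a* runs from plateau A1 (acidic) to A2 (alkaline);
    x0 is the inflection point, i.e. the apparent pKa; dx sets the slope."""
    return A2 + (A1 - A2) / (1.0 + np.exp((pH - x0) / dx))

# Illustrative calibration data (assumed values, not measured ones)
pH = np.linspace(4.0, 10.0, 13)
a_star = boltzmann(pH, A1=-8.65, A2=15.86, x0=5.85, dx=0.46)

# Fit the sigmoid and read off the inflection point as the apparent pKa
popt, _ = curve_fit(boltzmann, pH, a_star, p0=[-10, 15, 6.5, 0.5])
A1, A2, x0, dx = popt
print(f"apparent pKa (inflection point): {x0:.2f}")
```

With noiseless synthetic data the fit simply recovers the generating parameters; on real swab readings the same call would return the apparent pKa with its covariance.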
For experiments with wound dressings (Mepilex from Mölnlycke Health Care (Vienna, Austria), AQUACEL Extra from ConvaTec (Vienna, Austria), Suprasorb A from Lohmann & Rauscher (Vienna, Austria)), the above phosphate buffers and Ringer solution were adjusted to the appropriate pH by the addition of aqueous sodium hydroxide and hydrochloric acid.

2.4. Determination of Endotoxin
Detection of endotoxin was performed in compliance with ISO 10993-1 and ISO 10993-12 [25,26]. Exactly 0.2 g of the cotton swab per mL of pyrogen-free water (i.e., three pieces of cotton, including the 1.5 cm long plastic stick to which the cotton is attached, in 1.5 mL) was extracted for 24 ± 1 h at 37 ± 1 °C. Dilutions of this extract were also prepared with pyrogen-free water, and the PYROGENT Plus 200 test (sensitivity = 0.06 EU mL−1, Lonza, Walkersville, MD, USA) was used for endotoxin detection. Each sample dilution was tested in duplicate, and the different endotoxin standards with E. coli strain 055:B5 in triplicate. One hundred microliters of standards, water, or samples together with 100 µL of the reconstituted lysate were added to each tube and placed at 37 ± 1 °C in a non-circulating water bath. After 1 h (±2 min) of incubation, each tube was examined. The reaction of each tube was recorded as either positive or negative. A positive reaction was characterized by the formation of a firm gel, which remained intact when the tube was inverted (vertical rotation of 180°). A negative reaction was characterized by the absence of a solid clot after inversion. For conversion of the qualitative data into a quantification of endotoxin content, the equation supplied by the producer was used.

2.5. Cytotoxicity Screening of Eluates/Extracts of Cotton Swab Components
MRC-5 cells, fibroblasts derived from normal lung tissue of a 14-week-old male fetus [27], were used for testing.
Cells were cultured in 175 cm2 culture flasks (Costar Corning) in Minimal Essential Medium (MEM, Thermo Fisher Scientific, Vienna, Austria) + Earle's salts, 10% fetal bovine serum (Thermo Fisher Scientific), 2 mM L-glutamine, 2% penicillin/streptomycin at 37 ± 1 °C in 5% CO2, and subcultured at regular intervals. Eluates from the test materials were obtained by incubation of 0.2 g·mL−1 of the cotton swab (i.e., five pieces of cotton, including the 1 cm long plastic stick to which the cotton is attached, incubated in 2.5 mL) per mL of MEM Earle's salts, 10% fetal bovine serum (Thermo Fisher Scientific), and 2 mM L-glutamine for 24 ± 1 h at 37 ± 1 °C. The extraction was carried out in compliance with ISO 10993-5 and ISO 10993-12 [26,28]. To obtain the subconfluent cultures required for cytotoxicity testing, 16,000 MRC-5 cells were seeded per well of a 96-well plate 24 h prior to exposure to the eluates. Pure eluates and dilutions were added to the cells and exposed for 24 h. Plain polystyrene particles with a diameter of 20 nm (Thermo Fisher Scientific, Vienna, Austria) were used as the positive control, and plain polystyrene particles with a diameter of 200 nm (Thermo Scientific) as the negative control. The CellTiter 96® AQueous Non-Radioactive Cell Proliferation Assay (Promega, Mannheim, Germany) was used for testing. The tetrazolium compound [3-(4,5-dimethylthiazol-2-yl)-5-(3-carboxymethoxyphenyl)-2-(4-sulfophenyl)-2H-tetrazolium, inner salt; MTS] and the electron coupling reagent (phenazine methosulfate; PMS) solution supplied in the assay kit were thawed, 100 µL of the PMS solution was mixed with 2 mL of MTS solution, and 20 µL of the combined MTS/PMS solution was added to 100 µL of each well. Plates were incubated for 2 h at 37 ± 1 °C in 5% CO2 in a cell incubator. Absorbance was read at 490 nm on a plate reader (SPECTRA MAX plus 384, Molecular Devices, Wals-Siezenheim, Austria).
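The viability read-out derived from these A490 measurements reduces to a simple blank-corrected ratio against the untreated control (Equation (1)). A minimal sketch, with illustrative absorbance values rather than measured data:

```python
def dehydrogenase_activity(a_sample: float, a_blank: float, a_control: float) -> float:
    """Percent dehydrogenase activity relative to untreated controls,
    blank-corrected as in Equation (1)."""
    return 100.0 * (a_sample - a_blank) / (a_control - a_blank)

# Illustrative plate-reader absorbances at 490 nm (assumed, not measured)
activity = dehydrogenase_activity(a_sample=0.80, a_blank=0.10, a_control=1.10)
print(f"{activity:.1f}% of control")  # values below 70% indicate a cytotoxic effect
```

The 70% threshold applied to this ratio is the cytotoxicity criterion of the European and American guidelines cited in the text.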
In parallel, cells were viewed by brightfield microscopy to confirm the MTS data. Dehydrogenase activity was used as an indicator for cell viability and was calculated according to the following equation:

Dehydrogenase activity (%) = 100 × (A490nm,sample − A490nm,blank) / (A490nm,control − A490nm,blank) (1)

The indication of a cytotoxic effect according to European and American guidelines for the biological evaluation of medical devices is a dehydrogenase activity of less than 70% compared to untreated controls (solvent controls). The above absorbance measurements were not compromised by any absorbance of the indicator dyes, because no leaching of the dyes into the eluates was observed.

2.6. Cytotoxicity Screening in Direct Contact with Tips of Cotton Swabs
MRC-5 cells were seeded at densities of 300,000/well in 6-well plates 24 h prior to the experiment to reach confluence. For the evaluation of cotton swabs in direct contact with cells, cells were exposed to two cotton swabs/well in order to cover ~1/7th of the growth area [29]. A copper foil (Glas-per-Klick, Berlin, Germany) served as the positive control and PVC foil (cell culture plastic ware, GE Healthcare, Vienna, Austria) as the negative control. The controls and test samples (i.e., two pieces of cotton including the 1 cm long plastic stick to which the cotton is attached) were placed on the cell layer for 24 h at 37 ± 1 °C in 5% CO2. After the removal of the controls and the samples, cells were viewed in phase contrast to evaluate cell density and morphology. Subsequently, the cells were stained with 0.2% crystal violet (Merck, Darmstadt, Germany) for 20 min at RT and washed. The cell coverage on the plate and the cellular morphologies were recorded.

3. Results and Discussion

3.1. Choice of Dyes and Evaluation of the pH Indicator Cotton Swabs Using a Color Measurement Device
Two types of indicator swabs have been prepared: the first type made from indicator dye GJM-492 and inert dye RBBR (termed ICS1), and the second type made from indicator dye GJM-503 and inert dye RBBR (termed ICS2) (Figure 1). The indicator dyes GJM-492 and GJM-503 were chosen for their pKa values when immobilized to transparent cellulose layers, which were found to be 6.1 and 7.7, respectively [30]. These values appear to be appropriate for pH measurements in wounds, considering that the sensitive range of the dyes covers 1.5 pH units above and below the pKa. Furthermore, both dyes show color changes from yellow to red when going from acidic to alkaline pH. This color change is not that easily discernible by the human eye. However, the addition of a blue pH-insensitive dye (RBBR) in the dyeing process converts the originally yellow-to-red color change of both GJM-492 and GJM-503 into a more logical green-to-red color change. Consequently, the two types of indicator swabs can both be evaluated visually through their color change from green to red as pH increases. For this evaluation, the indicator swabs have been placed in phosphate buffer solutions of different pH and the color changes were recorded via digital photography (Figure 2).

Figure 1. Chemical structures of the indicator dyes 2-fluoro-4-[4-(2-hydroxyethanesulfonyl)-phenylazo]-6-methoxyphenol (GJM-492) and 4-[4-(2-hydroxyethanesulfonyl)-phenylazo]-2,6-dimethylphenol (GJM-503), respectively, and the inert blue dye, Remazol Brilliant Blue R (RBBR).

Figure 2. Color changes of indicator cotton swabs type 1 (ICS1) (a) and indicator cotton swabs type 2 (ICS2) (b) upon exposure to different pH buffers.

Visual evaluation gives rather crude information on the actual pH, because essentially three different colors—green, orange, and red—can be visualized within three pH values.
However, in order to evaluate the color changes more quantitatively, and at the same time provide a convenient measurement method, a hand-held color measurement device was used for optical evaluation. The device gives readings of L*, a*, and b* values, which are more reliable than the RGB values of smartphone cameras. In preliminary studies with the two types of indicator swabs, it became clear that the L* and b* values did not give reliable readings for evaluation, because L* reports luminosity/brightness rather than color changes and b* (blue to yellow) does not cover the color changes from green to red. However, a* (green to red) gave reliable readings at all pH values and was therefore used to correlate the measured color changes with pH (Figure 3). As the indicator swabs (approx. 14 × 3–5 mm) were smaller than the illumination area of the measurement device (15 mm diameter), a black background was used for the measurements; tests with a white background gave an a* signal magnitude only half of that obtained with a black background. To obtain a measure for the sensitive range, pKa values were calculated from the readings using a Boltzmann fit.

Figure 3. Calibration graphs of sterilized ICS1 (a) and ICS2 (b) upon exposure to different pH buffers. Five different swabs were used for error bar calculation of a* values (see also Table 1). The fitted calibration functions are a* = 15.86 + (−8.65 − 15.86)/(1 + exp((pH − 5.85)/0.46)), R² = 0.997, for ICS1 and a* = 18.56 + (−13.99 − 18.56)/(1 + exp((pH − 7.34)/0.46)), R² = 0.998, for ICS2.

3.2. Sensitivities of ICS1 and ICS2, and the Effect of Sterilization

Both types of indicator cotton swabs were exposed to pH buffer solutions; afterwards, their a* values were detected using the color measurement device. Characterization of the swabs was performed before and after sterilization with 25 kGy gamma irradiation.
This was done in order to evaluate any possible effect of sterilization on the performance of the indicator cotton swabs, as sterilization is known to compromise the performance of sensing materials [31]. The pKa values for the non-sterilized indicator swabs ICS1 and ICS2 are given in Table 1, together with data for the sterilized swabs, showing that 25 kGy gamma irradiation does not affect the indicator performance: the non-sterilized and sterilized indicator swabs have comparable pKa values. Thus, it can be concluded that the indicator dyes function properly both before and after the sterilization process. Furthermore, the standard deviation in pKa across five different indicator swabs, compared with that of the same indicator swab measured five times, is acceptable. There is a notable shift in pKa between the indicator dyes measured on transparent cellulose foils (GJM-492: 6.1; GJM-503: 7.7) and the indicator cotton swabs. We attribute this to the higher dye content on the swabs, a concentration effect that has already been observed to shift the pKa of pH indicator dyes significantly [32,33].

Table 1. pKa values of the indicator cotton swabs before and after gamma sterilization (standard deviation in brackets).

                               ICS1 (Not Sterilized)   ICS1 (Sterilized)   ICS2 (Not Sterilized)   ICS2 (Sterilized)
Five swabs measured once       5.89 (0.07)             5.85 (0.06)         7.38 (0.11)             7.34 (0.05)
One swab measured five times   5.75 (0.11)             5.87 (0.07)         7.38 (0.09)             7.37 (0.03)

3.3. Temperature Effect on Sensitivity

A relevant issue for optical sensors in general is cross-sensitivity to temperature changes. Therefore, we evaluated the indicator cotton swabs at three different temperatures: 20, 30, and 40 °C. Table 2 shows that there is a small yet significant increase in pKa with temperature. This contrasts with findings for the commonly used triphenylmethane dye phenol red, whose pKa decreases by 0.1 pH units when the temperature is raised from 25 to 37 °C [34].
We currently do not have an explanation for this finding, but the effect in both cases is approximately 0.1 pH units, which, for this application, is a minor effect, although not one to be ignored.

Table 2. Effect of temperature on the pKa value of the dyes in the indicator cotton swabs ICS1 and ICS2 (standard deviation in brackets).

              ICS1 (n = 10)   ICS2 (n = 10)
pKa at 20 °C  5.86 (0.22)     7.31 (0.26)
pKa at 30 °C  5.92 (0.14)     7.39 (0.31)
pKa at 40 °C  5.90 (0.08)     7.43 (0.13)

3.4. Toxicity Testing of the Cotton Swabs According to ISO Guidelines

Medical devices should not cause adverse effects in the human body. Regulatory bodies such as the American Society for Testing and Materials (ASTM), the Food and Drug Administration (FDA), and the International Organization for Standardization (ISO) provide guidelines for such testing. ISO 10993 specifies the procedures for cytotoxicity testing of eluates and of samples in direct contact with cells. Contamination of the samples with bacteria or components of the bacterial cell wall (endotoxin) has to be excluded, as these are pyrogenic; the pyrogenic activity of endotoxin is much higher than that of most other pyrogenic substances. The Limulus Amebocyte Lysate (LAL) assay is an established alternative to testing the pyrogenic effect of endotoxin in white rabbits and is recommended by the European Medicines Agency for evaluating medical devices for bacterial endotoxins [35]. Endotoxin in the aqueous extract of the sample activates coagulase in the blood cells (amoebocytes) of the horseshoe crab [36]. The activated enzyme hydrolyzes specific bonds within a clotting protein (coagulogen) that is also present in the lysate. Once hydrolyzed, the resultant coagulin self-associates and forms a gelatinous clot. The LAL clotting test can be used for all types of samples. The initial rate of activation is determined by the concentration of endotoxin.
The United States Pharmacopeia pyrogen standard for medical devices that contact circulating blood or lymph requires <20 endotoxin units (EU)/device or <5 EU/mL [37]. In order to provide realistic testing conditions for the indicator cotton swabs, not only the cotton itself but also the adjacent part of the plastic stick was evaluated, as this part might also come into contact with the wound during manipulation. Extracts of the originally packaged, sterile, uncolored cotton swabs did not induce clotting of the amoebocyte lysate, indicating endotoxin levels below the detection threshold of the assay (0.06 EU/mL). For comparison, originally uncolored sterile cotton swabs were removed from the original package, packaged again, and gamma sterilized (termed manipulated swabs). Contamination with endotoxin occurred during manipulation of these cotton swabs and could not be removed by gamma irradiation; decontamination is a well-known problem because it is very difficult to remove endotoxin from samples [38]. Extracts of the pH-sensitive sterilized cotton swabs ICS1 and ICS2, diluted 1:5, also did not induce clotting. According to this testing, the indicator cotton swabs ICS1 and ICS2 exhibited <5 EU/mL, which is below the threshold for devices in contact with circulating blood or lymph. Since the endotoxin levels in all investigated samples (original sterile cotton swabs, cotton swabs removed from the package and sterilized again, and sterilized ICS1 and ICS2) were below the accepted threshold, additional measures to reduce endotoxin contamination are not required. In the second step, in vitro cytotoxicity screening is recommended for the identification of adverse cellular effects. Cell damage and cell death induced by leaching toxins can lead to an inflammatory response with recruitment of various cell types to the wound, and continuous or prolonged leaching of toxic substances will delay wound healing.
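The dilution arithmetic behind the gel-clot results above can be made explicit. The sketch below is our own illustration, not assay software: the numeric limits are taken from the text, but the helper function and its name are hypothetical. A negative clot at a 1:n dilution bounds the endotoxin content of the undiluted extract by the assay sensitivity times n, which can then be compared against the quoted USP limit.

```python
# Illustrative interpretation of a negative LAL gel-clot result.
# Constants come from the text; the helper itself is a hypothetical sketch.

LAL_SENSITIVITY_EU_PER_ML = 0.06  # detection threshold of the assay
USP_LIMIT_EU_PER_ML = 5.0         # devices contacting circulating blood/lymph

def endotoxin_upper_bound(dilution_factor: float) -> float:
    """A negative clot at a 1:n dilution bounds the undiluted extract
    at (assay sensitivity * n) EU/mL."""
    return LAL_SENSITIVITY_EU_PER_ML * dilution_factor

# Uncolored swab extracts were tested undiluted (n = 1);
# ICS1/ICS2 extracts were tested at a 1:5 dilution (n = 5).
for sample, n in [("uncolored swab extract", 1), ("ICS1/ICS2 extract", 5)]:
    bound = endotoxin_upper_bound(n)
    verdict = "passes" if bound < USP_LIMIT_EU_PER_ML else "fails"
    print(f"{sample}: < {bound:g} EU/mL, {verdict} USP limit")
```

Even the weaker bound obtained at the 1:5 dilution (0.3 EU/mL) sits well below the 5 EU/mL limit, consistent with the conclusion drawn in the text.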
Cytotoxicity testing on cell lines shows, in many cases, good correlation with animal assays; it is frequently more sensitive than animal studies and as predictive of acute toxicity in humans as rodent in vivo studies [39]. The ISO 10993-5 guideline for cytotoxicity testing specifies evaluation of eluates and of samples in direct contact with cells. The total amount of protein, DNA, cell number, or the enzymatic activity of a cell population can serve as readout parameters for cell viability. The bioreduction of a tetrazolium dye into formazan is a generally accepted technique for the assessment of cell viability [40]. Detection with the CellTiter 96® AQueous Non-Radioactive Cell Proliferation Assay uses a novel tetrazolium compound, MTS, and the electron coupling reagent PMS. MTS is bioreduced by cellular dehydrogenases into a formazan product that is soluble in the tissue culture medium and can be quantified by photometric measurement. Eluate testing was performed with the original sterile uncolored cotton swabs, with the original cotton swabs removed from the package and sterilized again via gamma irradiation, and with ICS1 and ICS2 after gamma sterilization. Significant decreases in viability were only observed after exposure of MRC-5 cells to the positive control and to ICS1 (Figure 4). This decrease, to 75 ± 7% of the untreated cells, is however not interpreted as cytotoxic according to ISO 10993-5, as only decreases in dehydrogenase activity below 70% are considered so [28].

Figure 4. Changes in cell viability after exposure to the positive control and to eluates of the samples (pure and diluted 1 + 1) for 24 h.

Finally, evaluation of the effects after direct contact between cells and biomedical devices identifies the presence of potentially leachable toxic materials. Damage to the cell layer is detected by colorimetric dyes and cell morphology. Cell detachment from the plate is recorded after 24 h of exposure.
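The ISO 10993-5 acceptance criterion applied to the eluate results above reduces to a simple threshold on viability relative to the untreated control. The sketch below is our own illustration: the 70% threshold is from the guideline as quoted in the text, while the background-corrected absorbance formula is the common way MTS/MTT readouts are normalized, assumed here rather than taken from the paper.

```python
# Illustrative ISO 10993-5 classification from MTS absorbance readings.

CYTOTOX_THRESHOLD_PERCENT = 70.0  # below 70% of untreated control = cytotoxic

def relative_viability(a_sample: float, a_control: float, a_blank: float) -> float:
    """Viability (% of untreated control) from background-corrected
    formazan absorbances, as commonly done for tetrazolium readouts."""
    return 100.0 * (a_sample - a_blank) / (a_control - a_blank)

def is_cytotoxic(viability_percent: float) -> bool:
    """Apply the ISO 10993-5 threshold quoted in the text."""
    return viability_percent < CYTOTOX_THRESHOLD_PERCENT

# The ICS1 eluate reduced viability to 75 +/- 7% of control,
# so it is not classified as cytotoxic under this criterion.
print(is_cytotoxic(75.0))
```

Note that with a standard deviation of 7%, individual replicates of the ICS1 eluate could fall on either side of the 70% line, which is why the morphology observations after direct contact (next paragraph) matter for the overall conclusion.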
A confluent cell layer was observed in cultures of untreated cells and of cells exposed to the repackaged and gamma-irradiated (manipulated) cotton swabs, to ICS1, and to ICS2 (Figure 5a,d,e,f). In contrast, cell density was much lower in cultures exposed to the positive control (Figure 5c), where cells were rounded and detached from the plastic support. The absence of abnormal cell morphology after direct contact with ICS1 (no indication of cell detachment, rounded cells, necrosis, etc.), taken together with the eluate experiments described above, suggests the absence of cytotoxic effects for ICS1. Decreases in dehydrogenase activity can be caused by several mechanisms and are not specific to cytotoxicity [41]; potential reasons include inhibition of proliferation, induction of apoptosis, and interference with mitochondrial metabolism. Additional studies would be required to identify the underlying mechanism.

Figure 5. Images of untreated controls (a) and cells positioned in the vicinity of the samples: negative control (b), positive control (c), manipulated cotton swab (d), ICS1 (e), and ICS2 (f). Healthy MRC-5 cells show the elongated form of normal fibroblasts. Upon damage they round up and eventually detach from the plastic surface (c). Scale bar: 50 µm.

3.5. Testing of Wound Dressings and Horse Serum Using ICS1 and ICS2

Initially, indirect evaluation of wounds was considered, in an attempt to avoid contact between the wound and the indicator cotton swabs. Hence, it was decided to measure the pH in the wet wound dressing rather than in the wound itself. Before starting such a procedure on patients, several functional wound dressings were assessed for their effect on the pH of (a) phosphate buffer solutions and (b) Ringer solutions, to confirm that the dressings would not affect wound pH. The pH of the phosphate buffer and Ringer solutions was measured before soaking the wound dressings; after soaking, the pH in each wound dressing was measured by pressing a pH electrode into the wet dressing (Table 3).

Table 3.
Measurement of pH in phosphate buffer and Ringer solution before and after soaking wound dressings with them.

pH of Solutions Measured by a pH Electrode   Mepilex   AQUACEL Extra   Suprasorb A
Phosphate buffer (pH 6.0)                    6.6       6.5             6.3
Phosphate buffer (pH 7.0)                    7.3       7.5             6.6
Phosphate buffer (pH 8.0)                    8.2       7.9             6.5
Ringer solution (pH 6.0)                     7.1       5.1             5.2
Ringer solution (pH 7.0)                     7.5       5.0             5.4
Ringer solution (pH 8.0)                     7.7       5.1             5.6

Not surprisingly, the composition of the wound dressing had a significant effect on the pH inside the dressing. It is thought that adjusting the pH in the wound establishes an environment appropriate for improved wound healing [42]. Accordingly, Mepilex seems to establish a slightly alkaline pH in phosphate buffer and in Ringer solution (6.6–8.2); the data for the Ringer solution (7.1–7.7) are presumably more meaningful, because a wound will not have the buffer capacity of a phosphate buffer. AQUACEL and Suprasorb A seem to establish a slightly acidic environment in the Ringer solution (around 5.0–5.6). Consequently, this preliminary experiment indicates that it is not justified to draw conclusions about the pH inside the actual wound from the pH in the wet dressing. Rather than measuring the pH indirectly, a pH measurement therefore has to be performed directly in the wound, after removal of the functional dressing. We performed additional experiments with horse serum to provide more realistic samples for pH measurements. The sigmoidal calibration function was calculated using phosphate buffered solutions of different pH (Figure 3). Then, a* values for horse serum, adjusted to different pH levels by the addition of 6 N hydrochloric acid, were measured for both ICS1 and ICS2 (Table 4). Horse serum represents a wound-like environment, as it is viscous, has a yellow color, and contains proteins.
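The conversion from a measured a* value to a pH estimate, as used for Table 4, follows from inverting the sigmoidal (Boltzmann) calibration of Figure 3. The sketch below uses the ICS1 fit parameters quoted with Figure 3; the function names and code structure are ours, for illustration only.

```python
import math

# ICS1 calibration parameters from the Boltzmann fit in Figure 3:
#   a* = A2 + (A1 - A2) / (1 + exp((pH - pKa)/dx))
A1, A2, PKA, DX = -8.65, 15.86, 5.85, 0.46

def a_star_from_ph(ph: float) -> float:
    """Forward calibration: predicted a* reading at a given pH."""
    return A2 + (A1 - A2) / (1.0 + math.exp((ph - PKA) / DX))

def ph_from_a_star(a_star: float) -> float:
    """Inverse calibration: pH estimate from a measured a* value.

    Solving the Boltzmann equation for pH gives
    pH = pKa + dx * ln((A1 - a*) / (a* - A2)),
    valid only for readings strictly between A1 and A2.
    """
    return PKA + DX * math.log((A1 - a_star) / (a_star - A2))

# At pH = pKa the reading is the midpoint (A1 + A2)/2 = 3.605,
# and the inverse function recovers the pH.
print(a_star_from_ph(5.85), ph_from_a_star(3.605))
```

In practice the color measurement device reports a* directly, so only the inverse function is needed; readings outside the open interval (A1, A2) fall outside the sensitive range of the swab and yield no pH estimate.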
In the case of ICS1, the calculated pH values correlate well with the pH of the horse serum measured by the pH electrode, while in the case of ICS2 the calculated values are slightly too high, presumably owing to the intrinsic coloration of the horse serum, which may affect the a* value. Although both swabs show color changes from green to red, the spectra of the two dyes differ significantly, with different absorbance maxima for the base form (GJM-492: 492 nm; GJM-503: 503 nm), causing differences in the effect of the yellow color [43]. Eventually, a compensation measurement for the color of the real analyte sample (wound fluid) will have to be devised for practical applications.

Table 4. Measurement of pH in horse serum using ICS1 and ICS2.

pH of Horse Serum Measured by a pH Electrode   ICS1 *        ICS2 *
6.01                                           6.00 (0.06)   6.39 (0.04)
6.62                                           6.62 (0.18)   6.81 (0.02)
8.41                                           8.29 (0.36)   8.54 (0.30)

* Average of three measurements, standard deviation in brackets.

4. Conclusions

The pH-sensing system can give an indication of the progress of wound healing, because wounds do not heal properly at a pH above 8. We have successfully colored cotton swabs with the indicator and inert dyes to achieve two functions, namely (a) cleaning the wound of its exudate and (b) simultaneous pH determination. Thus, the swabs may give valuable information on the healing process and can act as an early indicator of possible pathogenic processes. The next step will involve characterization of the indicator cotton swabs in a clinical study, to establish proper measurement and calibration procedures for practical application by clinical personnel, as well as to evaluate the correlation between pH and healing progress.

Acknowledgments: This work was supported by the Research Studios Austria project 844724 "SmartColorTextiles" of the Austrian Research Promotion Agency (FFG). This support is gratefully acknowledged.

Author Contributions: G.J.M., C.S., E.F.
and B.B. conceived and designed the experiments; C.S. and C.M. performed the experiments; G.J.M., C.S., and E.F. analyzed the data; C.S. and C.M. contributed reagents/materials/analysis tools; G.J.M., E.F., and J.A. wrote the paper.

Conflicts of Interest: The authors declare no conflict of interest. The founding sponsors had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References

1. Thawer, H.A.; Houghton, P.E.; Woodbury, M.G.; Keast, D.; Campbell, K. A comparison of computer-assisted and manual wound size measurement. Ostomy Wound Manag. 2002, 48, 46–53.
2. Siddall, S.S. Wound healing. An assessment tool. Home Healthc. Nurse 1983, 1, 35–40. [CrossRef]
3. Lipsky, B.A.; Hoey, C. Topical antimicrobial therapy for treating chronic wounds. Clin. Infect. Dis. 2009, 49, 1541–1549. [CrossRef] [PubMed]
4. De Laat, E.H.; Scholte op Reimer, W.J.; van Achterberg, T. Pressure ulcers: Diagnostics and interventions aimed at wound-related complaints: A review of the literature. J. Clin. Nurs. 2005, 14, 464–472. [CrossRef] [PubMed]
5. Fleck, C.A. Palliative dilemmas: Wound odour. Wound Care Can. 2006, 4, 10–14.
6. Casalinuovo, I.A.; Di Pierro, D.; Coletta, M.; Di Francesco, P. Application of electronic noses for disease diagnosis and food spoilage detection. Sensors 2006, 6, 1428–1439. [CrossRef]
7. Worsley, G.J.; Attree, S.L.; Noble, J.E.; Horgan, A.M. Rapid duplex immunoassay for wound biomarkers at the point-of-care. Biosens. Bioelectron. 2012, 34, 215–220. [CrossRef] [PubMed]
8. Pirnay, J.-P.; De Vos, D.; Duinslaeger, L.; Reper, P.; Vandenvelde, C.; Cornelis, P.; Vanderkelen, A. Quantitation of Pseudomonas aeruginosa in wound biopsy samples: From bacterial culture to rapid 'real-time' polymerase chain reaction. Crit. Care 2000, 4, 255–261. [CrossRef] [PubMed]
9. Percival, S.L.; McCarty, S.; Hunt, J.A.; Woods, E.J.
The effects of pH on wound healing, biofilms, and antimicrobial efficacy. Wound Repair Regen. 2014, 22, 174–186. [CrossRef] [PubMed]
10. Khan, M.A.; Ansari, U.; Ali, M.N. Real-time wound management through integrated pH sensors: A review. Sens. Rev. 2015, 35, 183–189. [CrossRef]
11. Gethin, G. The significance of surface pH in chronic wounds. Wounds UK 2007, 3, 52–56.
12. Parra, J.L.; Paye, M.; The EEMCO Group. EEMCO guidance for the in vivo assessment of skin surface pH. Skin Pharmacol. Appl. Skin Physiol. 2003, 16, 188–202. [CrossRef] [PubMed]
13. Cutting, K.F. Wound exudate: Composition and functions. Br. J. Commun. Nurs. 2003, 8, 4–9. [CrossRef]
14. Wilson, I.A.; Henry, M.; Quill, R.D.; Byrne, P.J. The pH of varicose ulcer surfaces and its relationship to healing. J. Vasc. Dis. 1979, 8, 339–342.
15. Schneider, L.A.; Korber, A.; Grabbe, S.; Dissemond, J. Influence of pH on wound-healing: A new perspective for wound-therapy. Arch. Dermatol. Res. 2007, 298, 413–420. [CrossRef] [PubMed]
16. Eberlein, T.; Abel, M.; Wild, T.; Riesinger, T. Ergebnisse zur In-time-, bed-side-Messung von pH-Wert und Wundtemperatur. In Proceedings of the 1. Wund-D.A.CH Dreiländerkongress, Friedrichshafen, Germany, 10–12 October 2013; Abstract No. 30.
17. Van der Schueren, L.; De Clerck, K.
Coloration and application of pH-sensitive dyes on textile materials. Color. Technol. 2012, 128, 82–90. [CrossRef]
18. Kumar, P.; Honnegowda, T.M. Effect of limited access dressing on surface pH of chronic wounds. Plast. Aesthet. Res. 2015, 2, 257–260. [CrossRef]
19. Steyaert, I.; Vancoillie, G.; Hoogenboom, R.; De Clerck, K. Dye immobilization in halochromic nanofibers through blend electrospinning of a dye-containing copolymer and polyamide-6. Polym. Chem. 2015, 6, 2685–2694. [CrossRef]
20. Kassal, P.; Zubak, M.; Scheipl, G.; Mohr, G.J.; Steinberg, M.D.; Murković Steinberg, I. Smart bandage with wireless connectivity for optical monitoring of pH. Sens. Actuators B Chem. 2017, 246, 455–460. [CrossRef]
21. Meier, R.J.; Schreml, S.; Wang, X.D.; Landthaler, M.; Babilas, P.; Wolfbeis, O.S. Simultaneous photographing of oxygen and pH in vivo using sensor films. Angew. Chem. Int. Ed. 2011, 50, 10893–10896. [CrossRef] [PubMed]
22. Mohr, G.J.; Müller, H. Tailoring colour changes of optical sensor materials by combining indicator and inert dyes and their use in sensor layers, textiles and non-wovens. Sens. Actuators B Chem. 2015, 206, 788–793. [CrossRef]
23. Schaude, C.; Meindl, C.; Fröhlich, E.; Attard, J.; Mohr, G.J. Developing a sensor layer for the optical detection of amines during food spoilage. Talanta 2017, 170, 481–487. [CrossRef] [PubMed]
24. Capeletti, L.B.; Dos Santos, J.H.Z.; Moncada, E. Dual-target sensors: The effect of the encapsulation route on pH measurements and ammonia monitoring. J. Sol-Gel Sci. Technol. 2012, 64, 209–218. [CrossRef]
25. International Organization for Standardization. ISO 10993-1. Biological Evaluation of Medical Devices—Part 1: Evaluation and Testing in the Risk Management Process; International Organization for Standardization: Geneva, Switzerland, 2009.
26. International Organization for Standardization. ISO 10993-12.
Biological Evaluation of Medical Devices—Part 12: Sample Preparation and Reference Materials; International Organization for Standardization: Geneva, Switzerland, 2007.
27. Jacobs, J.P.; Jones, C.M.; Baille, J.P. Characteristics of a human diploid cell designated MRC-5. Nature 1970, 227, 168–170. [CrossRef] [PubMed]
28. International Organization for Standardization. ISO 10993-5. Biological Evaluation of Medical Devices—Part 5: Tests for In Vitro Cytotoxicity; International Organization for Standardization: Geneva, Switzerland, 2009.
29. Van Tienhoven, E.A.E.; Korbee, D.; Schipper, L.; Verharen, H.W.; De Jong, W.H. In vitro and in vivo (cyto)toxicity assays using PVC and LDPE as model materials. J. Biomed. Mater. Res. Part A 2006, 78, 175–182. [CrossRef] [PubMed]
30. Mohr, G.J.; Müller, H.; Bussemer, B.; Stark, A.; Carofiglio, T.; Trupp, S.; Heuermann, R.; Henkel, T.; Escudero, D.; Gonzalez, L. Design of acidochromic dyes for facile preparation of pH sensor layers. Anal. Bioanal. Chem. 2008, 392, 1411–1418. [CrossRef] [PubMed]
31. Zajko, S.; Klimant, I. The effects of different sterilization procedures on the optical polymer oxygen sensors. Sens. Actuators B Chem. 2013, 177, 86–93. [CrossRef]
32. Wolfbeis, O.S. Fiber Optic Chemical Sensors and Biosensors; CRC Press: Boca Raton, FL, USA, 1991.
33. Schaude, C.; Mohr, G.J. Indicator washcloth for detecting alkaline washing solutions to prevent dermatitis patients and babies from skin irritation. Fash. Text. 2017, 4, 7. [CrossRef]
34. Sendroy, J.; Rodkey, F.L. Apparent dissociation constant of phenol red as determined by spectrophotometry and by visual colorimetry. Clin. Chem. 1961, 7, 646–654. [PubMed]
35. Committee for Medicinal Products for Human Use. EMEA/CHMP/BWP/452081/2007. Guideline on the Replacement of Rabbit Pyrogen Testing by an Alternative Test for Plasma Derived Medicinal Products; European Medicines Agency: London, UK, 2009.
36. United States Pharmacopeia. USP 34-NF 29, <87> Biological Reactivity Tests, In Vitro—Direct Contact Test; The United States Pharmacopeia: Rockville, MD, USA, 2011.
37. United States Pharmacopeia. United States Pharmacopeial Convention USP 23; The United States Pharmacopeia: Rockville, MD, USA, 1995.
38. Csako, G.; Tsai, C.M.; Hochstein, H.D.; Elin, R.J. The concentration, physical state, and purity of bacterial endotoxin affect its detoxification by ionizing radiation. Radiat. Res. 1986, 108, 158–166. [CrossRef] [PubMed]
39. Ekwall, B. Overview of the final MEIC results: II. The in vitro–in vivo evaluation, including the selection of a practical battery of cell tests for prediction of acute lethal blood concentrations in humans. Toxicol. In Vitro 1999, 13, 665–673. [CrossRef]
40. Scudiero, D.A.; Shoemaker, R.H.; Paull, K.D.; Monks, A.; Tierney, S.; Nofziger, T.H.; Currens, M.J.; Seniff, D.; Boyd, M.R. Evaluation of a soluble tetrazolium/formazan assay for cell growth and drug sensitivity in culture using human and other tumor cell lines.
Cancer Res. 1988, 48, 4827–4833. [PubMed]
41. Berridge, M.V.; Herst, P.M.; Tan, A.S. Tetrazolium dyes as tools in cell biology: New insights into their cellular reduction. Biotechnol. Annu. Rev. 2005, 11, 127–152. [PubMed]
42. Uzun, M.; Anand, S.C.; Shah, T. The effect of wound dressings on the pH stability of fluids. J. Wound Care 2012, 21, 88–95. [CrossRef] [PubMed]
43. Escudero, D.; Trupp, S.; Bussemer, B.; Mohr, G.J.; Gonzalez, L. Spectroscopic properties of azobenzene-based pH indicator dyes: A quantum chemical and experimental study. J. Chem. Theory Comput. 2011, 7, 1062–1072. [CrossRef] [PubMed]

© 2017 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

Volume 132, 2015, pp.
277–287
DOI: 10.1642/AUK-14-107.1

RESEARCH ARTICLE

Digital photography quantifies plumage variation and salt marsh melanism among Song Sparrow (Melospiza melodia) subspecies of the San Francisco Bay

Sarah A. M. Luttrell,1* Sara T. Gonzalez,2 Bernard Lohr,1 and Russell Greenberg3†

1 Department of Biological Sciences, University of Maryland Baltimore County, Baltimore, Maryland, USA
2 Department of Ecology and Evolutionary Biology, Cornell University, Ithaca, New York, USA
3 Smithsonian Migratory Bird Center, Smithsonian Conservation Biology Institute, Washington, District of Columbia, USA
† This author is deceased
* Corresponding author: manor1@umbc.edu

Submitted May 13, 2014; Accepted October 13, 2014; Published December 17, 2014

ABSTRACT
Local adaptation is often implicated as a driver of speciation and diversity, but measuring local variation within a species can be difficult. Many taxa endemic to salt marshes exhibit a phenotypic trait called salt marsh melanism, in which salt marsh endemics have a darker or grayer integument than their freshwater congeners. The repeated occurrence of salt marsh melanism across distantly related taxa in similar environments suggests a role for local selection in maintaining this trait. We quantitatively explored variation in plumage characteristics for four subspecies of the Song Sparrow (Melospiza melodia) in the San Francisco Bay area. These subspecies are restricted to habitats of varying salinity and climate, and are considered a classic example of ecologically based variation on a local scale. To analyze plumage color, we employed a digital photographic technique which was quantitative, able to deal with pattern variation, and independent of a particular visual system. Although no single plumage measure distinguished among all four subspecies, combining the measures allowed reliable assignment of most specimens.
Using a discriminant analysis with five measures of plumage color, we were able to classify 75% of specimens to the correct subspecies, well above the 25% correct classification expected due to chance. The three subspecies inhabiting more saline environments (M. m. pusillula, M. m. samuelis, and M. m. maxillaris) were either darker (lower luminance) or grayer (lower red dominance) than the inland subspecies M. m. gouldii, supporting the pattern of salt marsh melanism observed in other taxa. Keywords: digital photography, plumage evolution, salt marsh melanism, Song Sparrow, subspecies La fotografı́a digital cuantifica variación en plumaje y melanismo asociado con pantanos salobres entre subespecies de Melospiza melodia de la bahı́a de San Francisco RESUMEN La adaptación local frecuentemente está implicada como promotor de especiación y diversidad, pero la medición de la variación local dentro de una especie puede ser difı́cil. Muchos taxones endémicos de pantanos salobres exhiben un rasgo fenotı́pico llamado melanismo de pantanos salobres, que consiste en que presentan integumentos más oscuros o más grises que sus congéneres de aguas dulces. La existencia repetida del melanismo de pantanos salobres en taxones lejanamente relacionados de ambientes similares sugiere que selección local mantiene este rasgo. Exploramos cuantitativamente la variación en las caracterı́sticas del plumaje en cuatro subespecies de Melospiza melodia en el área de la bahı́a de San Francisco. Estas subespecies están restringidas a hábitats de salinidad y clima variados, y son consideradas un ejemplo clásico de variación ecológica a escala local. Empleamos una técnica cuantitativa de fotografı́a digital para analizar el color del plumaje que tiene en cuenta la variación en patrón y es independiente de un sistema visual particular. 
Aunque ninguna medida del plumaje distinguió entre las cuatro subespecies por si sola, la combinación de las medidas permitió una asignación confiable de la mayoría de los especímenes. Usando un análisis discriminante con cinco medidas del color del plumaje pudimos clasificar 75% de los especímenes en la subespecie correcta, muy por encima del 25% de clasificaciones correctas esperadas por azar. Las tres subespecies que habitan los ambientes más salobres (M. m. pusillula, M. m. samuelis, y M. m. maxillaris) fueron más oscuras (menor luminancia) o más grises (dominancia de rojos bajos) que la subespecie continental M. m. gouldii, lo que sustenta el patrón de melanismo de pantanos salobres observado en otros taxones. Palabras clave: evolución del plumaje, fotografía digital, melanismo de pantanos salobres, subespecies. © 2015 American Ornithologists' Union. ISSN 0004-8038, electronic ISSN 1938-4254. Direct all requests to reproduce journal content to the Central Ornithology Publication Office at aoucospubs@gmail.com

INTRODUCTION

Quantifying geographic variation within a species has occupied biologists for over a century. Since Darwin first synthesized his ideas on heritable variation and selection, we have been attempting to incorporate the individual variety of organisms into the typological classification system of Linnaeus. One way of doing so has been to use subspecies and trinomial names for geographically partitioned variation within a species. Originally, the standard method for identifying subspecies was through careful, but subjective, observation of morphological differences, primarily based on geographically partitioned variation in overall size and color. New methods for demarcating distinct groups, especially genetic analyses, have altered our ideas of what constitutes local variation.
Molecular advances have given us additional tools with which to evaluate taxonomy, but the morphologically based taxonomic hypotheses of our predecessors are not explicitly wrong; they are merely based on a different set of evidence. In addition to our current knowledge of gene flow and population histories, a rigorous reevaluation of the morphological features used by traditional taxonomists may lead us to a fuller, more exciting understanding of the processes creating and maintaining local diversity. Song Sparrows (Melospiza melodia) are highly morphologically variable, with 25 currently recognized subspecies (Patten and Pruett 2009). We focus particularly on the populations in and around the San Francisco Bay, because they exhibit highly localized plumage variation in stable populations that are clustered within a 100-km radius of one another (Grinnell 1913, Marshall 1948b). The 4 San Francisco Bay subspecies are found in the 3 embayments within the San Francisco Bay, as well as in the surrounding upland (Figure 1), with narrow, stable intergrade zones between populations (Marshall 1948a). The last American Ornithologists' Union checklist that covered subspecies (American Ornithologists' Union 1957) recognized these taxa as subspecies, as did a relatively recent revision of Song Sparrow taxonomy (Patten and Pruett 2009), so we accept them as the starting point for our analyses in this paper (as follows): Melospiza m. gouldii is found in the uplands surrounding all three portions of the bay area; M. m. pusillula inhabits the surrounds of the south San Francisco Bay; M. m. samuelis is located around San Pablo Bay; and M. m. maxillaris is found around Suisun Bay (Marshall 1948a, Patten and Pruett 2009). All subspecies will hereafter be referred to solely by their trinomials gouldii, pusillula, samuelis, and maxillaris.
Like most North American birds, the San Francisco Bay subspecies were first described based on plumage variation (although Grinnell [1909] noted the relatively large bill of M. m. maxillaris). Marshall (1948a) conducted an expansive qualitative analysis of color which involved sorting over 2,000 individuals into plumage classes, and found that different plumage morphotypes dominated the different Song Sparrow subspecies of the region. However, Marshall’s (1948a) work was limited by the technology of his time, when the most objective method for analyzing color variation was by matching skins to color cards, a method that is inherently observer-biased. Since Marshall (1948a), many studies have evaluated these populations for shape, size, and genetics, but there has been no quantitative exploration of variation in the trait that originally defined the San Francisco Bay Song Sparrows, plumage. Song Sparrows have been extensively evaluated for patterns of neutral genetic variation, with particular focus on populations in western North America (Zink and Dittmann 1993, Fry and Zink 1998, Chan and Arcese 2002, 2003, Patten et al. 2004, Pruett et al. 2008a, 2008b, Patten and Pruett 2009, Wilson et al. 2011). The resulting body of work suggests that the extensive phenotypic variation in this species complex is likely due to nonneutral processes. Song Sparrows show mitochondrial haplotype sharing among current morphological subspecies, which is likely due to recent shared ancestry among subspecies (Zink and Dittmann 1993, Fry and Zink 1998). In contrast, nuclear microsatellite analyses have revealed mixed support for genetically distinct subspecies, with roughly half of the western subspecies corresponding to genetic groups, and with evidence for as many as 4–12 immigrants per generation in some populations (Chan and Arcese 2002, Pruett et al. 2008a, 2008b, Wilson et al. 2011). 
Among the San Francisco Bay populations specifically, only 1–2% of genetic variation is explained by subspecies, with evidence of microspatial genetic structuring in patch sizes of 2–10 km (Chan and Arcese 2002, Wilson et al. 2011). Given the extensive evidence for shared ancestry and gene flow among named subspecies, it is remarkable that phenotypic variation persists, and yet 25 Song Sparrow subspecies can be quantitatively diagnosed based on morphological measurements of size and shape, as well as plumage color and pattern (Chan and Arcese 2003, Patten and Pruett 2009). Taken together with the observation that most morphological subspecies of Song Sparrow roughly correspond to biome distributions in North America (Patten and Pruett 2009), these results suggest that the Song Sparrow complex warrants continued scrutiny, although it is unclear whether the force maintaining phenotypic diversity is selective or an expression of phenotypic plasticity. The maintenance of phenotypic variation in the face of ongoing and recent gene flow among Song Sparrow populations is especially intriguing in the case of the San Francisco Bay populations. In this system, four subspecies inhabit distinct areas of the bay with varying salinity, and salinity is correlated with variation in body shape and size among subspecies (Chan and Arcese 2003). Given this pattern, Song Sparrows of the San Francisco Bay may provide insights into the relationship between coloration and the upland–salt marsh ecotone. Salinity is one of the major factors driving changes in species composition and biotic diversity within and among marsh communities (Malamud-Roam et al. 2006, Engels 2010).
Adaptations to a specific saline environment may effectively isolate populations, even among closely related taxa such as the King Rail–Clapper Rail (Rallus elegans–Rallus crepitans) complex along the United States Gulf Coast, where a genetic cline exists that coincides with the change from fresh to saline marsh (Maley 2012). Beyond physiological adaptations to salinity (Goldstein 1999, 2006), vertebrate taxa in salt marshes show variation in the coloration of the integument compared with their freshwater marsh congeners, a phenomenon known as salt marsh melanism (Grinnell 1913, Von Bloeker 1932, Greenberg and Maldonado 2006). In salt marsh melanism, salt marsh taxa are grayer or blacker than their upland relatives (Greenberg and Droege 1990). Qualitatively, Song Sparrow populations of the San Francisco Bay area show a trend toward salt marsh melanism. However, no one has attempted to quantify the pattern and luminosity of the Song Sparrow subspecies because their highly patterned plumage is difficult to systematically sample using standard spectrophotometric measures. The objective of this study was to quantify plumage variation and patterns among four subspecies of Song Sparrow in the San Francisco Bay, to test whether the populations were statistically diagnosable based on plumage traits, and to determine whether the salt marsh populations were darker or had more black in their plumage as would be predicted if salt marsh melanism were present in this group.

METHODS

We used museum specimens from the Smithsonian Institution National Museum of Natural History, the California Academy of Sciences, and the University of California–Berkeley Museum of Vertebrate Zoology to evaluate color in Song Sparrows of the San Francisco Bay. All specimens were adult birds, both male and female, collected during the nonbreeding season from 1876 to 1963. The distributions of the coastal marsh subspecies are quite restricted, but gouldii has an extensive range in central California.
For the purposes of this study, we selected gouldii specimens from counties surrounding the San Francisco Bay, including Alameda, Contra Costa, Marin, Napa, San Francisco, San Mateo, Santa Clara, Santa Cruz, and Sonoma.

Photographic Methods

We applied standardized digital photography analysis for evaluating the back and wing plumage of all specimens (McKay 2013). Using an unbiased measure of color analysis, such as digital photography, is crucial when considering the biological relevance of population variation, because the human visual system is different from the visual systems of many other animals (Stevens et al. 2007, Osorio and Vorobyev 2008). Often, human bias is avoided by using a spectrophotometer to analyze the specific wavelengths present in a color. However, obtaining an average spectrophotometric measure for an animal is most useful when testing a large, uniform patch of color. Many birds have finely patterned plumage, with breaks in color smaller than the diameter of the light beam sensed by the spectrophotometer. Spectrophotometric measures of these patterns are subject to serious error. To account for the fine patterning in the plumage of Song Sparrows and still obtain an accurate and unbiased measure of their overall color, we developed a technique using standardized digital photographs and the image processing software ImageJ (Abràmoff et al. 2004).

FIGURE 1. Historic ranges of the San Francisco Bay Song Sparrow (Melospiza melodia) subspecies (range extents from Marshall 1948a). Historic ranges represent maximum distribution during the last century. The present extent of suitable marsh habitat in all embayments has been severely reduced (California Wetlands Monitoring Workgroup 2013).
We photographed the dorsal and lateral portions of each specimen against a gray color standard (QpCard 101; QpCard, Helsingborg, Sweden). We used a Canon EOS Rebel Xsi digital camera, with a 60 mm macro lens, an MR-14EX ring flash set to 1/16 power, and a tripod to stabilize the camera at 0.44 m focal distance from the specimen (Canon USA, Melville, NY, USA). The camera was set in manual mode with settings of ISO 100, f/11, and 1/250 s, and images were saved as RAW files. The RAW files of each photograph were standardized to the gray standard by linearizing the red, green, and blue channels, and by adjusting the brightness of each image. The color channels were linearized by selecting a small portion of the gray standard in the image and then adjusting the red, blue, and green sliders in ImageJ to equalize the channels. The brightness was adjusted so that the white square on the gray standard card was set to a luminance value of 230 (± 1.5) and the gray square had a luminance value of 100 (± 1.5). From the standardized images we created both grayscale and color histograms (red, blue, and green) for each body region of each bird. We used the polygon feature to select a portion of the plumage (either the back, including the scapulars, or the wing, including the primaries and secondaries) for each bird. We recorded means and standard deviations from the histograms of each color channel, as well as the total number of pixels in each bin of the grayscale histogram. From these raw histogram values we calculated luminance (mean luminance in grayscale, indicative of total lightness of plumage), red dominance (mean red luminance / [mean blue luminance + mean green luminance]; a measure of how rusty the plumage is), and percent black (percentage of pixels below a threshold luminance value of 20) for the back and wing.
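The three region summaries just described are simple functions of the per-pixel data. A minimal sketch in Python (the authors derived these values from ImageJ histograms; the function name and the equal-weight grayscale conversion here are our assumptions, not the paper's exact pipeline):

```python
from statistics import mean

def plumage_metrics(pixels, black_threshold=20):
    """Summarize a plumage region given per-pixel (R, G, B) values on 0-255.

    Grayscale luminance is approximated as the simple mean of the three
    channels (an assumption; ImageJ also offers weighted conversions).
    """
    lum = [(r + g + b) / 3 for r, g, b in pixels]
    mean_r = mean(p[0] for p in pixels)
    mean_g = mean(p[1] for p in pixels)
    mean_b = mean(p[2] for p in pixels)
    return {
        # total lightness of the plumage patch
        "luminance": mean(lum),
        # how rusty the patch is: red relative to blue + green
        "red_dominance": mean_r / (mean_b + mean_g),
        # percentage of pixels darker than the black threshold
        "percent_black": 100 * sum(l < black_threshold for l in lum) / len(lum),
    }
```

The default threshold of 20 matches the luminance cutoff for "black" used in the text.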
Using our luminance data we calculated the coefficient of variation (CV) of luminance as a measure of the overall patterning of the plumage (solid-colored plumage will have a low CV, highly contrasting patterns will have a high CV).

Statistical Methods

We visually examined plumage measures of luminance, red dominance, percent black, and CV of luminance in two body regions (back and wing) using histograms and bivariate scatter plots. We found that percent black of the wing was uninformative (most values were below 1% for all subspecies). Red dominance of the back and wing and percent black of the back were not normally distributed. To obtain a normal distribution, we transformed red dominance of the wing and back using a modified log transformation (x′ = log10(x − 1.2)), and percent black using a square root arcsine transformation (x′ = arcsin √x). All of our statistical analyses were performed using the transformed data in R (R Development Core Team 2010) unless stated otherwise. We began by testing for differences in population means among subspecies of Song Sparrow from the San Francisco Bay area, as any two populations whose means do not differ cannot be diagnosed. Initially, we tested for correlations of red dominance with luminance and percent black with luminance, because the overall amount of light reflected by the feathers should affect measures of both red dominance and percent black. As expected, both the data for red dominance of the back and percent black of the back were tightly correlated with luminance, but a partial correlation analysis revealed that percent black of the back and red dominance of the back were not correlated with each other when luminance was controlled (Pearson r = 0.015, P = 0.88). To account for a correlation between luminance and our two color measures, we included luminance as a covariate when testing for variation in population means of red dominance and percent black.
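The CV and the two normalizing transformations above can be written out directly. A sketch in Python (the authors worked in R; the constants are as printed in the text after repairing the extraction damage, so the modified log form assumes input values greater than 1.2):

```python
import math
from statistics import mean, pstdev

def cv(values):
    # coefficient of variation of luminance: patterning measure
    # (low CV = solid color, high CV = contrasting pattern)
    return pstdev(values) / mean(values)

def modified_log(x):
    # x' = log10(x - 1.2), the modified log transform as printed;
    # defined only for x > 1.2
    return math.log10(x - 1.2)

def arcsine_sqrt(p):
    # x' = arcsin(sqrt(p)), the square root arcsine transform
    # for a proportion p in [0, 1]
    return math.asin(math.sqrt(p))
```

Percent black values would be converted to proportions (divided by 100) before the arcsine transform.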
We used ANOVAs to test for population mean differences in measures of luminance, and ANCOVAs with luminance as a covariate to test for population mean differences in measures of red dominance and percent black. For the ANCOVAs, we included interaction terms (color measure * subspecies) in our models to test for heterogeneity of slopes among the groups. The interaction term was only significant for the percent black of the back model. A visual inspection of the regressions revealed that this was caused by the maxillaris data (Figure 2A). We removed maxillaris from the percent black of the back analysis, and the interaction term was no longer significant. All nonsignificant interaction terms were removed from the model. We determined which pairwise mean differences were contributing to significant ANOVA and ANCOVA results using a Tukey’s Honest Significant Difference (Tukey’s HSD) post hoc analysis. Distinguishing among population means demonstrates that the named subspecies may represent phenotypically distinct groups based on plumage. A useful way to test population differentiation is through population diagnos- ability, or the ability to accurately identify the population of origin for any given individual based on one or more character traits. In order to test for diagnosability among the four subspecies of San Francisco Bay Song Sparrow, we began by employing a discriminant analysis. Discriminant analysis selects traits as predictors of group membership, designs orthogonal functions to discriminate among groups, and then can be used to test the ability of those predictors to assign members into the correct group. We used the untransformed data in a Wilks’ lambda method discriminant analysis in Statistica Version 11 (StatSoft 2012), with prior probabilities weighted by original group size, to identify orthogonal functions that predicted subspecies identity, and a jackknifed classification matrix to assign members to subspecies groups. 
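A jackknifed classification matrix works by holding out each specimen, refitting the classifier on the remaining specimens, and then assigning the held-out bird. The loop below is a sketch of that leave-one-out procedure using a nearest-centroid rule as a simplified stand-in for the Wilks' lambda discriminant functions (the actual analysis was run in Statistica); all names here are illustrative:

```python
from statistics import mean

def jackknife_classify(samples, labels):
    """Leave-one-out ('jackknifed') classification.

    Returns the overall correct-assignment rate and a confusion dict
    keyed by (true_label, predicted_label).
    """
    def centroid(rows):
        # per-trait mean of a group of specimens
        return [mean(col) for col in zip(*rows)]

    def dist2(a, b):
        # squared Euclidean distance between trait vectors
        return sum((u - v) ** 2 for u, v in zip(a, b))

    correct = 0
    confusion = {}
    for i, (x, true_label) in enumerate(zip(samples, labels)):
        # rebuild the groups without the held-out specimen i
        groups = {}
        for j, (s, lab) in enumerate(zip(samples, labels)):
            if j != i:
                groups.setdefault(lab, []).append(s)
        # assign the held-out specimen to the nearest group centroid
        pred = min(groups, key=lambda lab: dist2(x, centroid(groups[lab])))
        confusion[(true_label, pred)] = confusion.get((true_label, pred), 0) + 1
        correct += pred == true_label
    return correct / len(samples), confusion
```

Because each specimen is never part of its own training set, the resulting rate is a less optimistic estimate than resubstitution accuracy, which is the point of jackknifing the matrix.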
Four individuals were excluded from the discriminant analysis because they were missing one body region measurement. Finally, we performed a pairwise test of diagnosability by using a diagnosability index (D-statistic; Patten and Unitt 2002, Patten 2010) based on the "75% rule": that at least 75% of the distribution of population 2 must lie to the right of 99% of the distribution of population 1, and vice versa, for the populations to be diagnosable. In this case we tested individual plumage measurement variables separately.

RESULTS

After Bonferroni correction for multiple tests, we found significant mean differences among subspecies in 5 of the 6 plumage characters that we examined: percent black of the back, red dominance of the wing, luminance of the back, luminance of the wing, and CV of luminance of the back. We found significant differences in overall luminance among the subspecies (back ANOVA: F = 30.2, df = 3, P << 0.001; wing ANOVA: F = 6.1, df = 3, P < 0.001; Figures 3A, 3B). Tukey's HSD post hoc analyses showed that, for luminance of the back, all pairwise comparisons were significant except for samuelis–pusillula and maxillaris–pusillula (Figure 3A), while for luminance of the wing there were pairwise differences between maxillaris and gouldii (Figure 3B). We also found significant differences in the CV of back luminance (ANOVA, F = 30.2, df = 3, P << 0.001; Figure 3C). Tukey's HSD for CV of back luminance showed that all pairwise differences were significant except for samuelis–maxillaris. We used ANCOVAs, rather than ANOVAs, to evaluate percent black of back and red dominance, because each measure was affected by luminance.
Percent black of the back (ANCOVA, F = 45.1, df = 2, P < 0.001; maxillaris excluded) and red dominance of the wing (ANCOVA, F = 5.6, df = 3, P = 0.001) were both significantly different among subspecies. Tukey's HSD post hoc analyses showed that, for percent black of the back, pairwise comparisons were significant between gouldii and pusillula and between gouldii and samuelis. Pairwise comparisons for red dominance of the wing were significant for gouldii–pusillula. Only one measure, red dominance of the back, was not significantly different among any subspecies pair (ANCOVA, F = 2.3, df = 3, P = 0.089). See Table 1 for all pairwise comparison P-values. In addition to identifying mean differences among populations, we evaluated the distinctiveness of populations using all traits at once in a discriminant analysis (number of variables in model = 6, Wilks' lambda = 0.110, F(18,263) = 17.30, P < 0.001). Three canonical discriminant functions were identified, but only the first two were significant (Table 2). An assignment test using the canonical functions from the discriminant analysis assigned 75% of the Song Sparrow specimens to the correct subspecies. The ability of the discriminant analysis to classify specimens to the correct group membership varied among subspecies. The discriminant analysis correctly assigned 100% of the gouldii, 60% of the maxillaris, 76% of the samuelis, and 59% of the pusillula specimens (Table 3,

FIGURE 2. Linear regressions of luminance against percent black and red dominance in each subspecies of Song Sparrow (Melospiza melodia) used for ANCOVA analysis. Solid lines represent M. m. maxillaris, long dashes represent M. m. gouldii, short dashes represent M. m. samuelis, and dashes–dots represent M. m. pusillula. (A) The relationship between luminance and percentage of black in the back plumage in M. m. maxillaris differs from that of the other three subspecies, which violates assumptions of an ANCOVA analysis. As a result, data on M. m.
maxillaris were removed from the ANCOVA analysis of percentage of black in the back plumage. (B) The relationship between luminance and red dominance of the back plumage is similar for all subspecies. Coupled with the information that M. m. maxillaris does not show a similar relationship between luminance and percent black of the back, we suggest that M. m. maxillaris may be overall more gray than the other three subspecies under consideration.

Figure 4). Of the 17 incorrect assignments, in no case was an individual assigned to a subspecies from the wrong habitat (i.e. upland rather than salt marsh). Our follow-up tests using the D-statistic (Patten and Unitt 2002) found no cases in which a pairwise comparison among subspecies groups was diagnosable using any of the original plumage measurement variables.

DISCUSSION

Based on plumage color measurements of the back and wing, three phenotypic groups of San Francisco Bay Song Sparrow were distinguishable (gouldii, maxillaris, and pusillula–samuelis). Distinctiveness varied among subspecies. For example, the gouldii and maxillaris populations had significant pairwise mean differences when compared with other populations, and gouldii was readily distinguishable from all other populations, being correctly classified 100% of the time. In contrast, we found only one significant pairwise mean difference between pusillula and samuelis (CV of luminance), but were able to distinguish samuelis from other populations 76% of the time using an assignment test. Our assignment estimates represent an upper bound on the likelihood of correctly classifying subspecies based on plumage color alone. Using a training subset of data that was different from the data used in the assignment test would have provided a more stringent test of assignment.
However, given our sample size, we chose to use all data for both training and classification, and to jackknife the classification matrix. Salt marsh populations could be readily distinguished from freshwater marsh populations based on plumage alone (100% correct discrimination of upland vs. salt marsh subspecies), while discrimination within salt marsh populations likely will require the inclusion of other measures, such as beak size. Indeed, Chan and Arcese (2003) showed that the San Francisco Bay Song Sparrow subspecies are diagnosable based on beak and body size, with an overall correct classification rate of 78%. Furthermore, their canonical analysis readily discriminated pusillula from samuelis, the two groups that our plumage measures had the most difficulty discriminating between. Overall, the most useful traits in distinguishing among populations were luminance, CV of luminance (our measure of patterning), and percentage of black on the back. The upland subspecies gouldii was significantly lighter and less patterned than the salt marsh subspecies, and, among the salt marsh subspecies, maxillaris was the darkest. Although we were not able to statistically compare maxillaris with the other subspecies for the trait percent black of back because the data did not comply with the ANCOVA assumption of equal heterogeneity of slopes (Figure 2A), this unsuitability for statistical comparison is in itself interesting. The regression of luminance against

FIGURE 3. Means and 95% confidence intervals for three ANOVA plumage analyses of Song Sparrow (Melospiza melodia) subspecies. (A) Luminance of the back varies significantly among subspecies (P << 0.001), with M. m. gouldii being lighter in coloration than the three tidal-marsh subspecies. (B) Luminance of the wing plumage differs significantly among subspecies (P < 0.001), but only M. m. maxillaris and M. m. gouldii exhibit significant pairwise differences. The upland subspecies M. m.
gouldii has the lightest coloration of all subspecies based on back and wing luminance measurements. (C) Coefficient of variation (CV) of luminance of the back varies significantly among subspecies (P << 0.001), with M. m. gouldii being much less variable than the tidal-marsh subspecies. We used CV as a measure of patterning; thus, M. m. gouldii are less patterned than the other San Francisco Bay Song Sparrow subspecies.

percent black of the back plumage in maxillaris showed a steeper negative slope than that for the other subspecies, while the relationship between luminance and red dominance was the same among maxillaris and the other subspecies (Figure 2B). This suggests that the lower luminance values for maxillaris were due to increased amounts of gray plumage (which has equal reflectance in the red, green, and blue channels, and therefore does not affect red dominance); gray plumage is lighter in coloration than our threshold for black plumage. Therefore, the darkness of maxillaris is the result of an overall grayer plumage compared with other subspecies, rather than an increase in black plumage. The amount of rust in

TABLE 1. Summary of mean differences in plumage characteristics for four subspecies of Song Sparrow (Melospiza melodia) using ANOVA and ANCOVA analyses. Significant differences based on a Bonferroni-corrected P-value of P = 0.008 are indicated by an asterisk (*).
Test                         F      df   P-value     Pairwise comparison     Tukey's HSD P-value
ANCOVA percent black back    45.1   2    <<0.001*    gouldii–pusillula       0.001*
                                                     samuelis–pusillula      0.087
                                                     samuelis–gouldii        0.001*
ANCOVA red dominance back    2.3    3    0.086       —                       —
ANCOVA red dominance wing    5.6    3    0.001*      gouldii–pusillula       0.006*
                                                     maxillaris–pusillula    0.064
                                                     samuelis–pusillula      0.107
                                                     maxillaris–gouldii      0.454
                                                     samuelis–gouldii        0.319
                                                     samuelis–maxillaris     0.996
ANOVA luminance of back      30.2   3    <<0.001*    gouldii–pusillula       0.001*
                                                     maxillaris–pusillula    0.023
                                                     samuelis–pusillula      0.382
                                                     maxillaris–gouldii      0.001*
                                                     samuelis–gouldii        0.001*
                                                     samuelis–maxillaris     0.001*
ANOVA luminance of wing      6.1    3    <0.001*     gouldii–pusillula       0.264
                                                     maxillaris–pusillula    0.133
                                                     samuelis–pusillula      0.999
                                                     maxillaris–gouldii      0.001*
                                                     samuelis–gouldii        0.207
                                                     samuelis–maxillaris     0.131
ANOVA CV of back luminance   30.2   3    <<0.001*    gouldii–pusillula       0.001*
                                                     maxillaris–pusillula    0.008*
                                                     samuelis–pusillula      0.042
                                                     maxillaris–gouldii      0.001*
                                                     samuelis–gouldii        0.001*
                                                     samuelis–maxillaris     0.928

TABLE 2. Standardized coefficients for each plumage trait in three orthogonal functions that define variation in plumage color among San Francisco Bay Song Sparrow (Melospiza melodia) subspecies. We identified the orthogonal functions using a Wilks' lambda method discriminant analysis in Statistica Version 11 (StatSoft 2012). Functions 1 and 2 are significant at P < 0.05 (indicated by *), and together explain 98.4% of the variation in plumage color among subspecies.

Variable                                       Function 1   Function 2   Function 3
Red dominance back                               −0.135        0.018        0.593
Percent black back                                0.580       −1.241       −0.111
Luminance back                                    1.376       −1.363       −0.010
CV luminance back                                −1.124       −0.110        0.000
Red dominance wing                               −0.327       −1.138        0.998
Luminance wing                                   −0.041       −1.276        0.866
Eigenvalue                                        5.317        0.317        0.091
Cumulative proportion of variation explained      0.929        0.984        1.000
P-value                                          <0.001*      <0.001*       0.078

TABLE 3.
Each individual Song Sparrow (Melospiza melodia) specimen measured for this study was blindly assigned to 1 of 4 predicted subspecies groups based on plumage color alone, using a discriminant function assignment test with prior probabilities for group membership weighted by original group size and a jackknifed classification matrix. See Table 2 for orthogonal functions used in this analysis. Overall, 75% of the specimens were assigned to the correct subspecies, with 100% of the upland subspecies (M. m. gouldii) being assigned correctly. Total number of specimens for each subspecies used in this analysis are indicated in the rightmost column.

                                Predicted group membership
Subspecies    Percent correct   pusillula   maxillaris   gouldii   samuelis   Total
pusillula           59              13           6           0          4       23
maxillaris          60               5          15           0          2       22
gouldii            100               0           0          30          0       30
samuelis            76               4           4           0         19       27

the plumage, measured by red dominance, contributed relatively little to differentiating among populations. Red dominance of the back plumage was not significantly different between any pair of subspecies, while red dominance of the wing plumage was only significantly different between gouldii and pusillula, with gouldii having less rusty wings. The salt marsh subspecies pusillula and samuelis had no significant pairwise mean differences in our measures of plumage coloration. However, both Marshall (1948b) and Patten and Pruett (2009) noted that pusillula is the only Song Sparrow subspecies with yellow ventral plumage. We did not include any ventral plumage measures in our analyses, so the inclusion of this body region likely would increase the potential for diagnosability of pusillula vs. samuelis. It is important to note that group distinctions based on mean differences are not equivalent to diagnosability.
Indeed, despite 100% of our upland birds being classified correctly based on our discriminant analysis, we found no cases in which plumage measures were able to diagnose pairwise comparisons between groups based on a diagnosability index (Patten and Unitt 2002). Thus, the plumage measures in our study by themselves were not sufficient to diagnose any of the four subspecies. Despite extensive work demonstrating the phenotypic distinctiveness of Song Sparrow populations (Marshall 1948b, Chan and Arcese 2003, Patten and Pruett 2009, this study), among the western Song Sparrow subspecies, only one subspecies boundary (M. m. heermanni–M. m. fallax) shows morphological divergence that is unequivocally correlated with genetic divergence (Patten et al. 2004). The overall pattern of genetic evidence in Song Sparrows suggests a recent divergence (Fry and Zink 1998), likely with continued gene flow among many populations, including the San Francisco Bay populations (Chan and Arcese 2002, Pruett et al. 2008a, 2008b). Our data, in conjunction with the data of Chan and Arcese (2003), support the hypothesis that phenotypic differences in plumage, beak size, and body size in Song Sparrow populations are either rapidly evolving, being maintained in the face of gene flow, or both. Plumage differentiation without marked neutral genetic differentiation has been shown in a number of bird subspecies (e.g., Seutin et al. 1995, Greenberg et al. 1998, Baker et al. 2003, Ödeen and Björklund 2003, Milá et al. 2007, Antoniazza et al. 2010), but mechanisms for the maintenance of plumage divergence are untested, and likely vary among bird taxa. In contrast, the selective pressures that may be acting on beak and body size have been more thoroughly studied. For example, selection acting on beak and body size has been classically shown in the Galapagos finches as a result of feeding ecology (Grant and Grant 1995). 
Additionally, Greenberg and Danner (2012) have shown that beak size in Song Sparrow subspecies is correlated with annual summer temperatures, and it has been further demonstrated that increased beak size in M. m. atlantica allows for the dissipation of "dry" heat while reducing water loss (Greenberg et al. 2012). It is possible that these morphological differences are phenotypically plastic, rather than rapidly evolving traits. However, in a common garden experiment, Greenberg and Droege (1990) found no evidence of plasticity in plumage or beak morphology of a coastal marsh-endemic relative of the Song Sparrow, the Coastal Plain Swamp Sparrow (Melospiza georgiana nigrescens). The plumage patterns identified in our study are consistent with salt marsh melanism, with the upland subspecies being lighter in coloration. This result is supported by the finding of Chan and Arcese (2003) that morphological measurements of San Francisco Bay Song Sparrows vary with salinity. We propose two potential mechanisms for selection of salt marsh melanism in San Francisco Bay Song Sparrows and other salt marsh melanistic taxa. The first is background matching to avoid predation. Tidal marshes have grayer mud than freshwater marshes, because the relatively low oxygen content in the water causes iron to be present as dark iron sulfides, rather than rusty iron oxides (Greenberg and Droege 1990). No studies have tested support for this mechanism, but the occurrence of salt marsh melanism in small mammals and reptiles as well as birds lends credence to the idea that salt marsh melanism is adaptive via some broadly applicable selective pressure such as predator avoidance. The second possible mechanism is increased melanism to resist feather degradation by bacteria, a selective pressure that is specific to birds. Bacillus licheniformis, a naturally occurring feather-degrading bacterium, is found on more individuals and at higher concentrations in the Coastal Plain Swamp Sparrow, a tidal marsh inhabitant, than on inland Eastern Swamp Sparrow (Melospiza georgiana georgiana) populations (Peele et al. 2009). B. licheniformis is a highly salt-tolerant bacterium, and increased melanin concentration in feathers slows the rate at which it can degrade feathers in vitro (Goldstein et al. 2004). Increased melanin concentration in feathers of salt marsh birds would act as a mechanism to resist bacterial degradation of feathers. The Song Sparrow subspecies maxillaris may be the key to distinguishing between these two hypotheses of selection. It has a steeper negative correlation between luminance and percent black than any of the other three San Francisco Bay subspecies, probably due to an increased amount of dark gray plumage (Figure 2A). M. m. maxillaris also inhabits the least saline of the three bays making up the greater San Francisco Bay.

FIGURE 4. A scatterplot of the first two discriminant functions (of five measures of plumage color) demonstrates the reliability of distinguishing the upland Song Sparrow (Melospiza melodia) subspecies M. m. gouldii from the three saltmarsh subspecies (M. m. maxillaris, M. m. pusillula, and M. m. samuelis). Discriminant factor 1 is principally a measure of the luminance (lightness of coloration) and patterning of the back plumage, while discriminant factor 2 is mainly a measure of overall luminance (lightness of coloration) of the individual. Open circles represent M. m. gouldii, closed circles represent M. m. maxillaris, open triangles represent M. m. pusillula, and stars represent M. m. samuelis.
Von Bloeker (1932) noted a similar trend among brackish marsh-inhabiting mammals, and argued that background matching with darker soils, rather than salinity, was the selective force maintaining melanism. Specifically testing the San Francisco Bay Song Sparrows for bacterial levels and measuring contrast with background substrate could help to distinguish between these hypotheses. Alternatively, differences in melanin deposition could result from pleiotropic effects of genes related to osmoregulation. The mechanisms involved in osmoregulation by salt marsh passerines are generally unknown, although one study suggests that modification of the surface area of the intestinal tract may be a key innovation (reviewed in Goldstein 2006). A number of birds inhabiting salt marshes appear to have evolved a tolerance to the ingestion of saline water (Bartholomew and Cade 1963, Poulson 1969, Sabat et al. 2003), including M. m. pusillula, but not its nearest upland conspecific, M. m. gouldii (Basham and Mewaldt 1987). Establishing a link between integument color and physiological adaptation to salinity would be difficult, but worth investigating. Regardless of the mechanism behind salt marsh melanism, our study demonstrates that Song Sparrow subspecies in the San Francisco Bay area are another example of this phenomenon. We used digital photography to analyze color, which allowed for a quantitative and objective analysis of highly patterned plumage through calibrated measurement equipment, without bias toward a particular visual system (Stevens et al. 2007, McKay 2013). Plumage traits such as percent black, luminance, and CV of luminance reliably differentiated the upland gouldii from the tidal marsh maxillaris, pusillula, and samuelis subspecies. Upland birds were on average lighter and less rusty in coloration, and had less contrasting patterning than tidal marsh birds.
We hypothesize that the plumage (this study) and morphological differences (Chan and Arcese 2003, Patten and Pruett 2009) demonstrated by Song Sparrows in the San Francisco Bay are the result of local selection on phenotype, especially between upland and tidal marsh populations. Our findings are in agreement with the work of Chan and Arcese (2003), which showed that salinity was correlated with morphological variation in this same group. Furthermore, extensive work on other taxa across upland vs. saltmarsh ecosystems shows that salinity is highly correlated with changes in species composition and diversity in mosquitos, King and Clapper rails, and plants (Roberts and Irving-Bell 1997, Engels 2010, Maley 2012, Snedden and Steyer 2013). These differences might result from selective pressures for tidal marsh birds to match the grayer background substrate (Greenberg and Droege 1990), from an increase in melanism as a defense against feather-degrading bacteria (Peele et al. 2009), or from a pleiotropic effect related to the physiological demands of osmoregulation in a salt marsh environment. Further studies are needed to provide direct evidence for the mechanism of phenotypic differentiation, but the pattern remains consistent, with salt marsh melanism seen in a wide variety of tidal marsh vertebrates (Greenberg and Maldonado 2006). Many subspecies of birds were originally described based on geographic variation in plumage patterns, but this regional phenotypic variation has often been dismissed by neutral genetic studies that find no clear differentiation among subspecies. Close evaluation of phenotypic variation, to determine when this variation represents diagnosable differences, is clearly warranted.

ACKNOWLEDGMENTS

We thank the curators and staff of the National Museum of Natural History, California Academy of Sciences, and Museum of Vertebrate Zoology for making specimens available for our use.
Development of the digital imaging system was assisted by Gerhard Hofmann, Melvin Wachowiak, and Elizabeth Keats Webb. We also thank the University of Maryland Baltimore County Ecology and Evolution Journal Club, Raymond Danner, and Kevin E. Omland for reviewing early drafts, and two anonymous reviewers for their helpful comments on the manuscript.

LITERATURE CITED

Abràmoff, M. D., P. J. Magalhães, and S. J. Ram (2004). Image Processing with ImageJ. Biophotonics International 11:36–42.
American Ornithologists’ Union (1957). Check-list of North American Birds, fifth edition. Lord Baltimore Press, Baltimore, MD, USA.
Antoniazza, S., R. Burri, L. Fumagalli, J. Goudet, and A. Roulin (2010). Local adaptation maintains clinal variation in melanin-based coloration of European Barn Owls (Tyto alba). Evolution 64:1944–1954.
Baker, J. M., E. López-Medrano, A. G. Navarro-Sigüenza, O. R. Rojas-Soto, and K. E. Omland (2003). Recent speciation in the Orchard Oriole group: Divergence of Icterus spurius spurius and Icterus spurius fuertesi. The Auk 120:848–859.
Bartholomew, G. A., and T. J. Cade (1963). The water economy of land birds. The Auk 80:504–539.
Basham, M. P., and L. R. Mewaldt (1987). Salt water tolerance and the distribution of south San Francisco Bay Song Sparrows. The Condor 89:697–709.
California Wetlands Monitoring Workgroup (2013). EcoAtlas. http://www.ecoatlas.org
Chan, Y., and P. Arcese (2002). Subspecific differentiation and conservation of Song Sparrows (Melospiza melodia) in the San Francisco Bay region inferred by microsatellite loci analysis. The Auk 119:641–657.
Chan, Y., and P. Arcese (2003). Morphological and microsatellite differentiation in Melospiza melodia (Aves) at a microgeographic scale. Journal of Evolutionary Biology 16:939–947.
Engels, J. G. (2010).
Drivers of marsh plant zonation and diversity patterns along estuarine stress gradients. Ph.D. dissertation, University of Hamburg, Hamburg, Germany.
Fry, A. J., and R. M. Zink (1998). Geographic analysis of nucleotide diversity and Song Sparrow (Aves: Emberizidae) population history. Molecular Ecology 7:1303–1313.
Goldstein, D. L. (1999). Patterns of variation in avian osmoregulatory physiology and their application to questions in ecology. In Proceedings of the 22nd International Ornithological Congress, August 1998, Durban, South Africa (N. J. Adams and R. H. Slotow, Editors), BirdLife South Africa, Johannesburg, pp. 1417–1426.
Goldstein, D. L. (2006). Osmoregulatory biology of saltmarsh passerines. In Terrestrial Vertebrates of Tidal Marshes: Evolution, Ecology, and Conservation (R. Greenberg, J. E. Maldonado, S. Droege, and M. V. MacDonald, Editors). Studies in Avian Biology 32:110–118.
Goldstein, G., K. R. Flory, B. A. Browne, S. Majid, J. M. Ichida, and E. H. Burtt, Jr. (2004). Bacterial degradation of black and white feathers. The Auk 121:656–659.
Grant, P. R., and B. R. Grant (1995). Predicting microevolutionary responses to directional selection on heritable variation. Evolution 49:241–251.
Greenberg, R., V. Cadena, R. M. Danner, and G. Tattersall (2012). Heat loss may explain bill size differences between birds occupying different habitats. PLOS One 7:e40933. doi:10.1371/journal.pone.0040933
Greenberg, R., P. J. Cordero, S. Droege, and R. C. Fleischer (1998). Morphological adaptation with no mitochondrial DNA differentiation in the Coastal Plain Swamp Sparrow. The Auk 115:706–712.
Greenberg, R., and R. M. Danner (2012). The influence of the California marine layer on bill size in a generalist bird. Evolution 66:3825–3835.
Greenberg, R., and S. Droege (1990). Adaptations to tidal marshes in breeding populations of the Swamp Sparrow. The Condor 92:393–404.
Greenberg, R., and J. E. Maldonado (2006). Diversity and endemism in tidal-marsh vertebrates.
In Terrestrial Vertebrates of Tidal Marshes: Evolution, Ecology, and Conservation (R. Greenberg, J. E. Maldonado, S. Droege, and M. V. MacDonald, Editors). Studies in Avian Biology 32:32–53.
Grinnell, J. (1909). Three new Song Sparrows from California. University of California Publications in Zoology 5:265–269.
Grinnell, J. (1913). The species of the mammalian genus Sorex of west-central California with a note on the vertebrate palustrine fauna of the region. University of California Publications in Zoology 20:179–205.
Malamud-Roam, K. P., F. P. Malamud-Roam, E. B. Watson, J. N. Collins, and B. L. Ingram (2006). The quaternary geography and biogeography of tidal marshes. In Terrestrial Vertebrates of Tidal Marshes: Evolution, Ecology, and Conservation (R. Greenberg, J. E. Maldonado, S. Droege, and M. V. MacDonald, Editors). Studies in Avian Biology 32:11–31.
Maley, J. M. (2012). Ecological speciation of King Rails (Rallus elegans) and Clapper Rails (Rallus longirostris). Ph.D. dissertation, Louisiana State University, Baton Rouge, LA, USA.
Marshall, J. T., Jr. (1948a). Ecologic races of Song Sparrows in the San Francisco Bay region: Part I. Habitat and abundance. The Condor 50:193–215.
Marshall, J. T., Jr. (1948b). Ecological races of Song Sparrows in the San Francisco Bay region: Part II. Geographic variation. The Condor 50:233–256.
McKay, B. D. (2013). The use of digital photography in systematics. Biological Journal of the Linnean Society 110:1–13.
Milá, B., T. B. Smith, and R. K. Wayne (2007). Speciation and rapid phenotypic differentiation in the Yellow-rumped Warbler Dendroica coronata complex. Molecular Ecology 16:159–173.
Ödeen, A., and M. Björklund (2003). Dynamics in the evolution of sexual traits: Losses and gains, radiation and convergence in Yellow Wagtails (Motacilla flava). Molecular Ecology 12:2113–2130.
Osorio, D., and M. Vorobyev (2008). A review of the evolution of animal colour vision and visual communication signals.
Vision Research 48:2042–2051.
Patten, M. A. (2010). Null expectations in subspecies diagnosis. Ornithological Monographs 67:35–41.
Patten, M. A., and C. L. Pruett (2009). The Song Sparrow, Melospiza melodia, as a ring species: Patterns of geographic variation, a revision of subspecies, and implications for speciation. Systematics and Biodiversity 7:33–62.
Patten, M. A., and P. Unitt (2002). Diagnosability versus mean differences of Sage Sparrow subspecies. The Auk 119:26–35.
Patten, M. A., J. T. Rotenberry, and M. Zuk (2004). Habitat selection, acoustic adaptation, and the evolution of reproductive isolation. Evolution 58:2144–2155.
Peele, A. M., E. H. Burtt, Jr., M. R. Schroeder, and R. S. Greenberg (2009). Dark color of the Coastal Plain Swamp Sparrow (Melospiza georgiana nigrescens) may be an evolutionary response to occurrence and abundance of salt-tolerant feather-degrading bacilli in its plumage. The Auk 126:531–535.
Poulson, T. L. (1969). Salt and water balance in Seaside and Sharp-tailed sparrows. The Auk 86:473–489.
Pruett, C. L., P. Arcese, Y. Chan, A. Wilson, M. A. Patten, L. F. Keller, and K. Winker (2008a). The effects of contemporary processes in maintaining the genetic structure of western Song Sparrows (Melospiza melodia). Heredity 101:67–84.
Pruett, C. L., P. Arcese, Y. Chan, A. Wilson, M. A. Patten, L. F. Keller, and K. Winker (2008b). Concordant and discordant signals between genetic data and described subspecies of Pacific Coast Song Sparrows. The Condor 110:359–364.
R Development Core Team (2010). R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing, Vienna, Austria. http://www.R-project.org/
Roberts, D. M., and R. J.
Irving-Bell (1997). Salinity and microhabitat preferences in mosquito larvae from southern Oman. Journal of Arid Environments 37:497–504.
Sabat, P., J. M. Fariña, and M. Soto-Gamboa (2003). Terrestrial birds living on marine environments: Does dietary composition of Cinclodes nigrofumosus (Passeriformes: Furnariidae) predict their osmotic load? Revista Chilena de Historia Natural 76:335–343.
Seutin, G., L. M. Ratcliffe, and P. T. Boag (1995). Mitochondrial DNA homogeneity in the phenotypically diverse Redpoll Finch complex (Aves: Carduelinae: Carduelis flammea-hornemanni). Evolution 49:962–973.
Snedden, G. A., and G. D. Steyer (2013). Predictive occurrence models for coastal wetland plant communities: Delineating hydrologic response surfaces with multinomial logistic regression. Estuarine, Coastal and Shelf Science 118:11–23.
StatSoft (2012). STATISTICA (Data Analysis Software System), version 11. http://www.statsoft.com
Stevens, M., C. A. Párraga, I. C. Cuthill, J. C. Partridge, and T. S. Troscianko (2007). Using digital photography to study animal coloration. Biological Journal of the Linnean Society 90:211–237.
Von Bloeker, J. C., Jr. (1932). Three new mammals from salt marsh areas in southern California. Proceedings of the Biological Society of Washington 45:131–137.
Wilson, A. G., P. Arcese, Y. L. Chan, and M. A. Patten (2011). Micro-spatial genetic structure in Song Sparrows (Melospiza melodia). Conservation Genetics 12:213–222.
Zink, R. M., and D. L. Dittman (1993). Gene flow, refugia, and evolution of geographic variation in the Song Sparrow (Melospiza melodia). Evolution 47:717–729.

----

AperTO - Archivio Istituzionale Open Access dell'Università di Torino (AperTO, the institutional open-access repository of the University of Turin)

Original Citation: Clinical characteristics influence screening intervals for diabetic retinopathy.
Published version: DOI:10.1007/s00125-013-2989-7
Availability: This is the author's manuscript. This version is available at http://hdl.handle.net/2318/139376
The final publication is available at Springer via http://dx.doi.org/10.1007/s00125-013-2989-7

CLINICAL CHARACTERISTICS INFLUENCE SCREENING INTERVALS FOR DIABETIC RETINOPATHY.
Running head: Screening intervals in diabetic retinopathy

Massimo Porta 1, Mauro Maurino 1, Sara Severini 1, Elena Lamarmora 1, Marina Trento 1, Elena Sitia 1, Eleonora Coppo 1, Alessandro Raviolo 1, Stefania Carbonari 1, Marcello Montanaro 1, Lorenza Palanza 1, Paola Dalmasso 2, Franco Cavallo 2
1 Diabetic Retinopathy Centre, Department of Medical Sciences, and 2 Department of Public Health and Paediatrics, University of Turin, Italy

Corresponding Author: Prof. Massimo Porta, MD PhD, Department of Medical Sciences, University of Turin, Corso AM Dogliotti 14, 10126 Torino, Italy. Tel.
+39-011-6632354 Fax +39-011-6334515 e-mail: massimo.porta@unito.it
Abstract word count: 235
Main text word count: 3351

ABSTRACT

Aims/hypothesis: Most guidelines recommend annual screening for diabetic retinopathy (DR), but resource limitations and the slow progression of DR suggest that longer recall intervals should be considered if patients have no detectable lesions. This study aimed to identify the cumulative incidence and time of development of referable DR in patients with no DR at baseline, classified by clinical characteristics.

Methods: Analysis of data collected prospectively over 20 years in a teaching hospital-based screening clinic according to a consensus protocol. The cumulative incidence, time of development and relative risk of developing referable retinopathy over the 6 years following a negative screening for DR were calculated in 4320 patients, stratified according to age at onset of diabetes <30 or ≥30 years, being on insulin treatment at the time of screening, and known duration of diabetes <10 or ≥10 years.

Results: The 6-year cumulative incidence of referable retinopathy was 10.5% (95% CI: 9.4%, 11.8%). Retinopathy progressed within 3 years to referable severity in 6.9% (95% CI: 4.3%, 11.0%) of patients with age at onset ≥30 years who were on insulin treatment and had a known disease duration of 10 years or longer. The other patients, especially those with age at onset <30 years, on insulin and with <10 years' duration, progressed more slowly.

Conclusions/Interpretation: Screening can be repeated safely at 2-year intervals in any patient without retinopathy. Longer intervals may be practicable, provided all efforts are made to ensure adherence to standards in procedures and to trace and recall non-attenders.

Key words: Diabetic retinopathy, screening, retinal screening, blindness prevention, type 2 diabetes, type 1 diabetes.
ABBREVIATIONS
DR, Diabetic Retinopathy
ETDRS, Early Treatment Diabetic Retinopathy Study
OO-IT, Older-Onset Insulin-Treated
OO-NIT, Older-Onset Non-Insulin-Treated
YO-IT, Younger-Onset Insulin-Treated
YO-NIT, Younger-Onset Non-Insulin-Treated

INTRODUCTION

Unless treated before the appearance of symptoms, diabetic retinopathy (DR) may lead to severe visual loss (1). Consequently, recommendations to screen for asymptomatic sight-threatening DR have been issued in many countries (2-5). Most guidelines recommend that retinal examination is performed annually in people with diabetes (2,3), but resources for repeated yearly checks are in short supply and the progression of DR may be slow enough to consider longer intervals when patients have no detectable lesions. A cohort study in Liverpool suggested that patients with type 2 diabetes and no retinopathy may be safely seen every 3 to 5 years (6), and an econometric simulation based upon U.S. data concluded that screening may not be cost-effective unless performed every 2-3 years in type 2 patients without DR and at low risk of developing any (7). Another study suggested that adolescents with type 1 diabetes may also be screened every other year (8). More recent studies support the notion that 2-3 years between screenings are safe in patients without retinopathy (9-12). This paper reports on an analysis of screening data collected over 20 years in a teaching hospital-based diabetes clinic according to the European Working Party protocol to Screen for DR (4) and its implementation document, the Field Guide-Book (5). The European protocol had been validated by independent investigators (13,14) and reported to reduce referrals to a low-vision clinic by one-third over 5 years (15). The specific aims were to evaluate the cumulative incidence and time of development of referable DR in patients with negative screening and different clinical characteristics.
METHODS

The Diabetic Retinopathy Centre is a facility dedicated to screening for DR within the outpatient diabetes clinic of Turin's main teaching hospital. It offers screening to patients from inside and outside the clinic. Since its staff includes retinal specialists, it also functions as a tertiary referral centre, though patients with sight-threatening DR are normally seen by the specialists without going through a formal screening procedure. Data from 35,545 screening episodes [19,864 (55.88%) males; 15,861 (44.12%) females] performed in 12,074 patients [6,751 (55.91%) males and 5,323 (44.09%) females] between 1/1/1991 and 31/12/2010 were analysed. The individuals screened were almost all Caucasian, with a few patients of African, Asian or South American origin included in the latest years. Data were collected prospectively using dedicated software, SEE (Save Eyes in Europe), which had been specifically designed to record episodes according to the European screening protocol (16). All study participants gave their informed consent and the investigations were carried out in accordance with the Declaration of Helsinki as revised in 2000 (http://www.wma.net/e/policy/b3.htm). Until May 2000, screening was by direct and indirect ophthalmoscopy performed by diabetes specialists and colour photography on 35 mm slide film (Kodak Elite 200 ASA) using Kowa Pro-I and Kowa Pro-II fundus cameras (2,237 patients, 5,328 episodes). From June 2000, patients were screened by non-mydriatic digital fundus photography (Canon NM45CR) and the images processed with the EyeCap software (Haag-Streit, Koeniz, Switzerland) (9,837 patients, 30,217 episodes). Photographs were taken by trained medical or nursing personnel. Grading was performed by diabetes specialists, after specific training, according to the European Working Party recommendations (4,5). Patients were assessed at retinal photography and formally graded later.
Feedback on referrals was by direct discussion with the consultant ophthalmologists working in the DR Centre. Doubtful cases were discussed on pictures alone and patients not requiring referral were re-graded accordingly. Patients with mild non-proliferative retinopathy not requiring referral (microaneurysms only, isolated larger haemorrhages and/or isolated cotton wool spots), equivalent to ETDRS level ≤35 (17), were given re-screening appointments. Those with moderate non-proliferative retinopathy requiring referral (association of the above lesions in higher number and/or within one disc diameter of the centre of the fovea) or worse (pre-proliferative, proliferative, photocoagulated DR, advanced diabetic eye disease with or without macular involvement), equivalent to ETDRS level >35 (17), were referred to an ophthalmologist for further assessment and treatment, as required. For patient classification, DR severity in the worst eye was considered. Yearly follow-ups in the same patients were calculated as screening episodes within multiples of 12±6 months after the first visit. Hence, follow-up screening episodes were considered to be at 1 year if they fell within 7-18 months of the first visit, 2 years if within 19-30 months, and so forth.

Comparison of ophthalmoscopy + 35 mm photography and digital photography.

Since no formal trial was run to compare ophthalmoscopy + 35 mm photography versus non-mydriatic digital photography, the detection rates of DR using these two methods were assessed by two independent approaches:

1. The prevalence of all gradings in patients consecutively screened for the first time in the 9 months before 22 May 2000 (n=544) was compared with that of all patients first screened over the 9 months after changeover (n=622), assuming that there was no change in the prevalence of the various
There was no difference in the distributions of DR [No DR: 321 (59.01%) vs 347 (55.79%); mild DR: 78 (14.34%) vs 95 (15.27%); referable DR: 134 (24.63%) vs 169 (27.17%); non gradable: 11 (2.02%) vs 11 (1.77%)] (p= 0.68 - Chi- square).. 2. the diagnoses of 317 patients who were screened with both methods, first within 9 months before May 2000 and then re-screened over the 9 months following changeover were compared, the assumption being that very little progression of DR would occur in this group. There was a minor trend to more DR over the second examination but no significant differences were observed between the distributions of DR detected by the two methods in the same population (p = 0.14) (chi-square). Kappa-statistics showed an agreement index K=0.75 (p<0.001) when comparing absence of DR [n=150 (49.02%) before and 141 (46.08%) after changeover] vs any DR [n=156 (50.98%) and 165 (53.92%), respectively], and a weighted K=0.81 (p<0.001) when comparing absence of DR vs mild [n=68 (22.2%) before and 71 (23.2%) after changeover] vs any other more severe (referral-requiring) DR [n=88 (28.76%) and 94 (30.72%), respectively]. Pictures of 4 (1.26%) and, respectively, 9 (2.84%) patients were ungradable before and after changeover. Quality assessment of digital photographs. Digital photographs of macular and nasal fields were assessed for quality and judged Good, Sufficient for grading if not worse than Standard 14 of the ETDRS protocol (17) or Insufficient. Photographic fields were judged Centred, Partially Centred if the disc was within one disc diameter of the desired position, or Non-Centred. Out of 11,359 eyes thus assessed, 80.2% macular fields and 77.9% nasal fields were of good quality, 16.7% and 19.4% were sufficient for grading, respectively, and only 3.1% and 2.7% were unreadable. More than 99% photographic fields were at least partially centred. 
Quality of images was influenced by lens opacities and pharmacologic mydriasis, though not by centring (data not shown).

Patient classification

At the time of first screening, the patients were divided into younger-onset (YO), if age at diagnosis of diabetes was <30 years, and older-onset (OO) if it was ≥30 years, and further stratified into insulin-treated (IT), either alone or with oral agents, and non-insulin-treated (NIT), i.e. on diet only or diet and tablets. Data from all patients so stratified who were screened at baseline and at least once within the following 6 years were analysed. In total, follow-up was available for 4320 patients with no detectable DR at first visit. Of these, 2934 (67.9%) were OO-NIT (1712 males, 58.4%; age 62.1±9.7 years; known duration of diabetes 5.9±6.6 years), 689 (16%) were OO-IT (373 males, 54.1%; age 58.4±12.9; duration 8.5±8.1), 671 (15.5%) were YO-IT (347 males, 51.7%; age 22.2±11.7; duration 8.8±8.1) and 26 (0.6%) were YO-NIT (13 males, 50.0%; age 39.0±15.4; duration 16.1±13.1). Because of limited numbers, the YO-NIT group was not considered further in this work. The other 3 groups were further subdivided into patients with <10 or ≥10 years known duration of diabetes. In total, 2247 OO-NIT<10yrs, 687 OO-NIT≥10yrs, 426 OO-IT<10yrs, 263 OO-IT≥10yrs, 432 YO-IT<10yrs and 239 YO-IT≥10yrs without retinopathy at their initial screening were included.

Statistics

Clinical and demographic differences at baseline were assessed with the χ2 test or ANOVA, as appropriate. Cumulative incidence rates of DR were calculated using the product-limit method, with standard error (SE) according to Greenwood and 95% confidence interval (CI) computed as ±1.96 x SE. Patients who had not developed DR contributed person-years of follow-up until their last screening visit. Differences among subgroups were tested using the log-rank or Wilcoxon (Breslow) statistic.
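The product-limit (Kaplan-Meier) calculation with Greenwood standard errors described above can be sketched in a few lines. This is an editorial illustration with invented follow-up data, not the study's STATA analysis; the cumulative incidence reported in the paper is 1 minus the survival value computed here.

```python
import math

def kaplan_meier(times, events):
    # Product-limit survival estimate with Greenwood standard errors.
    # times  : follow-up time for each subject (any consistent unit)
    # events : 1 if the endpoint (e.g. referable DR) occurred, 0 if censored
    # Returns (time, survival, 95% CI low, 95% CI high) at each event time.
    data = sorted(zip(times, events))
    out, surv, greenwood = [], 1.0, 0.0
    for t in sorted({t for t, e in data if e}):
        n_i = sum(1 for ti, _ in data if ti >= t)        # at risk just before t
        d_i = sum(1 for ti, e in data if ti == t and e)  # events at t
        surv *= 1 - d_i / n_i
        if n_i > d_i:
            greenwood += d_i / (n_i * (n_i - d_i))
        se = surv * math.sqrt(greenwood)                 # Greenwood SE
        out.append((t, surv,
                    max(surv - 1.96 * se, 0.0),
                    min(surv + 1.96 * se, 1.0)))
    return out

# 6 hypothetical patients: events at years 2, 3, 3; censored at years 1, 4, 6
for row in kaplan_meier([2, 1, 3, 3, 4, 6], [1, 0, 1, 1, 0, 0]):
    print([round(x, 3) for x in row])
```

With this toy data, survival drops to 0.8 at year 2 and to 0.4 at year 3, i.e. a 3-year cumulative incidence of 60%.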
An interval-censoring Weibull regression model was used to estimate hazard ratios (HR) and corresponding 95% CI according to the potential prognostic variables (subgroup and known duration of diabetes). Due to a partial violation of the basic assumption of Cox's model (proportional hazards), we chose the Weibull model, which proved the best-fitting by the Akaike Information Criterion (AIC) when compared with the other parametric models (Gompertz and exponential). The statistical significance level was set at 0.05. Statistical analyses were performed using STATA 12.1. RESULTS Over the 6 years following the first screening episode, the incidence rate of referable DR was higher among the OO-IT (2.74 cases per 100 person-years, 95%CI: 2.23,3.37) than the OO-NIT (1.64, 95%CI: 1.45,1.85) or the YO-IT (1.90, 95%CI: 1.50,2.41). Table 1 shows the cumulative incidence of referable or worse DR over the 6 years following a first screening in which patients had no detectable retinopathy, divided by subgroups. Being on insulin treatment and having been diagnosed 10 years earlier or more were both associated with higher incidence of referable DR (p<0.001). The average times and 95% CI needed for 5% of the patients in the different subgroups to develop referable retinopathy were 56 months (95%CI 49,64) for OO-NIT with <10 years known duration, 33 (95%CI 23,51) for OO-NIT with ≥10 years known duration, 41 (95%CI 24,57) for OO-IT with <10 years known duration, 27 (95%CI 15,38) for OO-IT with ≥10 years known duration, 60 (95%CI 45,79) for YO-IT with <10 years duration, and 39 (95%CI 22,51) for YO-IT with ≥10 years duration. None of the subgroups reached 5% cumulative incidence of referable retinopathy within 2 years of a negative screening, whereas the OO-NIT≥10 years and OO-IT≥10 years did so within 3 years. Consequently, the relative risk of developing referable retinopathy within 3 years of a first screening was calculated for all subgroups. 
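Incidence rates per 100 person-years such as those quoted above are simply events divided by accumulated follow-up time, scaled by 100. A minimal sketch follows; the event and person-year counts are invented for illustration, and the paper does not state which CI method it used for rates, so the log-transformed Poisson interval shown here is a common choice rather than necessarily the authors' method.

```python
import math

def incidence_rate(events, person_years):
    """Incidence rate per 100 person-years with a log-transformed 95% CI.

    Assumes the event count is Poisson-distributed, so on the log scale the
    interval is rate * exp(+/- 1.96 / sqrt(events)).
    """
    rate = 100.0 * events / person_years
    factor = math.exp(1.96 / math.sqrt(events))
    return rate, rate / factor, rate * factor

# Hypothetical subgroup: 50 incident cases over 2000 person-years of follow-up.
rate, lo, hi = incidence_rate(50, 2000)
print(f"{rate:.2f} per 100 person-years (95% CI {lo:.2f}, {hi:.2f})")
```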
Table 2 shows that, compared to the OO-NIT<10 years duration group, taken as reference, both OO-NIT and OO-IT with 10 or more years known duration had more than twice the risk of developing referable retinopathy. In contrast, the YO-IT with less than 10 years duration had a 72% reduced risk of developing referable retinopathy within the same time frame. DISCUSSION To evaluate the potential for sight-threatening DR to develop in a real-life screening scenario, all records collected with tight observance of the 1990 European Working Party recommendations were analysed to determine the cumulative incidence and risk of developing referable DR over the 6 years following a screening episode in which no DR had been detected. Patients without retinopathy at first screening appeared to carry a negligible risk of developing lesions requiring referral over the following year, but 2.1% developed them within 2 years, and 3.2% after 3 years. The 6-year cumulative incidence of referable or worse DR was 10.5%. In the UKPDS, 17.5% of T2DM patients with no DR at first examination reached an ETDRS level of <35/35 or worse after 6 years (18). However, not all patients with this grading on the ETDRS scale would have been defined as requiring referral in our screening context, where they might simply be re-screened at shorter intervals. In addition, all UKPDS patients had newly diagnosed type 2 diabetes at baseline, and 4-field 30° stereoscopic retinal photography was used. For reasons of cost and practicality, stereo retinography is not recommended for screening purposes, and our photographic protocol is based upon the EURODIAB procedure, which had previously been validated and found to perform as well as the ETDRS in detecting both mild and more severe DR (19). The goal of screening is to identify eyes with sight-threatening DR before symptoms occur, so that photocoagulation or other treatments can be applied in a timely and appropriate manner (20). 
Data from Sweden (14) and Iceland (20) show that while very few people with type 1 diabetes progress to blindness if properly screened, patients with type 2 diabetes may still develop severe visual impairment, mostly due to macular disease (21). Both the American Diabetes Association (2) and NICE (3), among others, recommend that all diabetic patients be screened yearly. The 1990 European Working Party had recommended: “Examine at diagnosis and at least two-yearly thereafter, at least annually if DR appears” (4,5). However, the desirability of frequent controls has to be balanced against the high patient throughput and limited facilities available in most clinics. A prospective study of 20,570 systematic screening episodes in Liverpool (6) suggested that patients with T2DM and no retinopathy can be re-screened every 5 years, and those with mild DR every year, to retain a 95% chance of remaining free of sight-threatening DR. However, those authors conceded that 3-year intervals may be more viable in real life. That study used 3-field 50° photography and a somewhat different DR classification, but a statistical approach similar to ours, though considering different variables. The cumulative incidence of sight-threatening DR in individuals with no retinopathy at baseline after 5 years of follow-up was 3.9%. Such figures are lower than those reported in this paper, but the definition used in Liverpool for sight-threatening DR (6 or more cotton-wool spots, venous changes, IRMAs) was more severe than our definition of referable DR. 
The econometric simulation published by Vijan et al (7) considered intervals of 1 to 5 years in a model population aged over 40, defined from data from the NHANES-III (22) population study and, for progression of retinopathy, from the UKPDS (18), DRS (23) and ETDRS (24), and suggested that screening may not be cost-effective unless carried out every 2 or even 3 years in DR-free patients who are older and in fairly good metabolic control. However, some of the assumptions made in that study, e.g. the population base and screening performed by ophthalmologists, may not apply to the settings tested in Liverpool or Turin. More recently, Agardh et al (9) recommended 3-year screening intervals based on their case series, in which only 1 out of 1,322 patients with type 2 diabetes without DR at baseline had developed a condition (macular oedema) requiring laser treatment within that time frame. Their patients had an average known duration of type 2 diabetes of 6 years, were mostly on diet or oral agents and were in good glycaemic control (HbA1c 6.4±1.4%). Chalk et al (11) developed a simulation model based upon a National Health Service series in the UK and concluded that 2 years would be a safe re-screening interval. Thomas et al (12), in South Wales, analysed nearly 50,000 patients with no DR at first screening and at least 1 further screening within the following 4 years. Similarly to this paper, they subdivided patients with type 2 diabetes into those on insulin treatment or not, and with less or more than 10 years known duration. Although reporting a higher cumulative incidence of referable retinopathy than in our population, they also argued for screening intervals longer than one year, with the possible exception of patients on insulin treatment and with ≥10 years duration. 
The stages of DR defined as referable in their paper (preproliferative or worse) were more advanced than ours, which does not help to explain their higher incidence rate, and, similarly to us, they did not collect data on glycated haemoglobin or blood pressure. Finally, Aspelund et al (10) proposed a fully personalised algorithm which, applied to a population of 5,199 Danish patients followed for 20 years, suggested a mean screening interval of 29 months, although that included patients with DR at baseline. The algorithm takes into account not only duration and type of diabetes but also HbA1c, blood pressure and presence of retinopathy at the previous visit, which commands shorter intervals. With reference to type 1 diabetes, one study suggested that 2-yearly screening may also be safe in DR-free adolescents with reasonable metabolic control, due to their rare progression to sight-threatening forms (8). Absence/presence of mild retinopathy in one or both eyes at two consecutive screening episodes has also been proposed as a risk indicator for developing sight-threatening DR in a UK-based population in which no stratification was made for type of diabetes or current treatment (25). Strengths of this study are its large real-world population base, the strictness with which data were prospectively collected and retinopathy consistently graded according to a validated consensus procedure developed more than 20 years ago, and the long follow-up. Internal procedures assured uniformity of the grading process through training of the operators and their continuous feedback with the senior diabetes (MP) and ophthalmic (MM) specialists, who worked in the programme for the entire 20-year period. Overall quality of retinal photographs was satisfactory, with low rates of ungradable pictures, in which case the patients underwent full eye examination. 
Possible problems are selection bias, the switch-over of screening methods in 2000 without a formal assessment of their sensitivity and specificity, and the lack of data on metabolic and blood pressure control in the patients screened. The Diabetic Retinopathy Centre offers screening to diabetic patients from inside and outside the hospital where it is based. Although it also functions as a tertiary referral centre, patients with sight-threatening DR are not subjected to formal screening and would not have been included in this analysis of people without DR at first examination. The indirect comparisons described in Research Design and Methods suggest that the two approaches yielded equivalent results and argue against the possibility that the combined use of ophthalmoscopy and 35 mm colour photography may lead to a higher detection rate of minimal, non-referable, retinopathy than digital photography alone (26). In addition, onset of referable DR was the outcome of this study, and its lesions arguably pose even fewer problems in detection than those of mild retinopathy. As also pointed out in the Liverpool Study (6), data on HbA1c and blood pressure, although major determinants of DR progression, are not usually collected in a general screening setting like ours, which provides a service to different diabetes units and general practitioners. HbA1c results were from different laboratories, not standardised, and blood pressure could not be measured consistently, due to time, personnel and space constraints. In conclusion, although risk charts may result in a more personalised approach to screening intervals by taking multiple variables into account (10), knowledge of diabetes duration and type of glucose-lowering treatment is easily obtainable information that may suffice to provide useful guidance when planning re-screening appointments. 
In particular, this paper confirms that screening can be repeated safely at 2-year intervals in any patient with type 1 or 2 diabetes and no retinopathy, giving a 95% probability of remaining free of referable lesions according to the same standard adopted by previous reports (6, 12). It also shows that DR progresses more rapidly to referable severity in patients with type 2 diabetes on insulin treatment and with 10 years or longer known disease duration. On the other hand, patients with shorter duration can potentially be seen even less frequently, e.g. at 3-year intervals, though prudence is always of the essence, considering that information on duration of type 2 diabetes is often imprecise. In addition, a word of caution concerns the sensitivity of most screening programmes, which is around 80-90% (26), meaning that 1 out of 5-10 diagnoses of no DR may be false negatives, and those patients may be given hazardously delayed appointments as a result. Thirdly, programming checks at excessively delayed intervals may convey to patients the impression that retinopathy is unimportant, and recalling people who do not attend appointments given 3 or more years earlier may be problematic. Since no standardized procedure exists for grading digital retinal photographs, this same exercise should be carried out in any other programme where extended screening intervals are proposed, and careful quality assurance needs to be in place to ensure that there is no drift in grading and no persistently poor graders. All efforts should be made to ensure the highest adherence to standards and to put effective methods in place for tracing and recalling patients who do not attend re-screening appointments. ACKNOWLEDGEMENTS AND FUNDING The Diabetic Retinopathy Centre was established thanks to funds provided by the Compagnia di San Paolo, Turin. No specific grant was applied for to develop the analysis described in this paper. 
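The false-negative arithmetic in the caution above can be made explicit: with sensitivity s, a fraction 1 − s of the patients who actually have DR receive a "no DR" result at a given round. A short sketch (all numbers hypothetical, chosen only to illustrate the scale of the problem):

```python
def missed_dr_cases(n_screened, dr_prevalence, sensitivity):
    """Expected number of true DR cases wrongly labelled 'no DR' in one round."""
    true_cases = n_screened * dr_prevalence
    return true_cases * (1.0 - sensitivity)

# Hypothetical round: 10,000 patients screened, 25% with some DR,
# 85% test sensitivity (mid-range of the 80-90% cited above).
missed = missed_dr_cases(10_000, 0.25, 0.85)
print(f"about {missed:.0f} DR cases would be missed")
```

At 80-90% sensitivity, 1 − s is 10-20%, i.e. between 1 in 10 and 1 in 5 of the patients who truly have DR; an extended re-screening interval compounds the delay for exactly these patients.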
DUALITY OF INTEREST The authors have no conflict of interest to declare. CONTRIBUTION STATEMENT MP planned the study, researched the data and wrote the manuscript; MMa collaborated in the acquisition, analysis and interpretation of the data and revised the manuscript; SS, EL, MT, ES, EC, AR, SC, MMo and LP collected and researched the data and revised the manuscript; PD and FC analysed the data, contributed to their interpretation and reviewed the manuscript. All authors approved the final version of the manuscript.
REFERENCES
1. Antonetti DA, Klein R, Gardner TW (2012) Diabetic retinopathy. N Engl J Med 366:1227-1239.
2. American Diabetes Association (2012) Standards of medical care in diabetes–2012. Diabetes Care 35(Suppl 1):S11-S63.
3. NICE (2002) Management of type 2 diabetes – screening and early management [article online]. Available from http://nice.org.uk/nicemedia/pdf/diabetesretinopathyguideline.pdf. Accessed 3 March 2013.
4. Retinopathy Working Party (1991) A protocol for screening for diabetic retinopathy in Europe. Diabet Med 8:263-267.
5. Kohner EM, Porta M (1992) Screening for Diabetic Retinopathy in Europe: a Field Guide-Book. World Health Organization, Regional Office for Europe, Copenhagen.
6. Younis N, Broadbent DM, Vora JP, Harding SP (2003) Incidence of sight-threatening retinopathy in patients with type 2 diabetes in the Liverpool Diabetic Eye Study: a cohort study. Lancet 361:195-200.
7. Vijan S, Hofer TP, Hayward RA (2000) Cost-utility analysis of screening intervals for diabetic retinopathy in patients with type 2 diabetes mellitus. JAMA 283:889-896.
8. Maguire A, Chan A, Cusumano J et al (2005) The case for biennial retinopathy screening in children and adolescents. Diabetes Care 28:509-513.
9. Agardh E, Tarabat-Khani P (2011) Adopting 3-year screening intervals for sight-threatening retinal vascular lesions in type 2 diabetic subjects without retinopathy. Diabetes Care 34:1318-1319.
10. Aspelund T, Thornórisdóttir O, Olafsdottir E et al (2011) Individual risk assessment and information technology to optimise screening frequency for diabetic retinopathy. Diabetologia 54:2525-2532.
11. Chalk D, Pitt M, Vaidya B, Stein K (2012) Can the retinal screening interval be safely increased to 2 years for type 2 diabetic patients without retinopathy? Diabetes Care 35:1663-1668.
12. Thomas RL, Dunstan F, Luzio SD et al (2012) Incidence of diabetic retinopathy in people with type 2 diabetes mellitus attending the Diabetic Retinopathy Screening Service for Wales: retrospective analysis. BMJ 344:e874.
13. Gibbins RL, Owens DR, Allen JC, Eastman L (1998) Practical application of the European Field Guide in screening retinopathy by using ophthalmoscopy and 35 mm retinal slides. Diabetologia 41:59-64.
14. Agardh E, Agardh CD, Hansson-Lundblad C (1993) The five-year incidence of blindness after introducing a screening programme for early detection of treatable diabetic retinopathy. Diabet Med 10:555-559.
15. Backlund LB, Algvere PV, Rosenqvist U (1994) New blindness in diabetes reduced by more than one-third in Stockholm County. Diabet Med 14:732-740.
16. Sivieri R, Rovera A, Porta M (1995) SEE (Save Eyes in Europe): the London protocol in software. Giornale Italiano di Diabetologia 15(Suppl):37-38.
17. Early Treatment Diabetic Retinopathy Study Research Group (1991) Grading diabetic retinopathy from stereoscopic colour fundus photographs – an extension of the modified Airlie House classification. ETDRS Report No 10. Ophthalmology 98:786-806.
18. Stratton I, Kohner EM, Aldington SJ et al (2001) UKPDS 50: risk factors for incidence and progression of retinopathy in type II diabetes over 6 years from diagnosis. Diabetologia 44:156-163.
19. Aldington SJ, Kohner EM, Meuer S, Klein R, Sjølie AK (1995) Methodology for retinal photography and assessment of diabetic retinopathy: the EURODIAB IDDM complications study. Diabetologia 38:437-444.
20. Stefansson E, Bek T, Porta M, Larsen N, Kristinsson JK, Agardh E (2000) Screening and prevention of diabetic blindness. Acta Ophthalmol Scand 78:374-385.
21. Hansson-Lundblad C, Holm K, Agardh CD, Agardh E (2002) A small number of older type 2 diabetic patients end up visually impaired despite regular photographic screening and laser treatment for diabetic retinopathy. Acta Ophthalmol Scand 80:310-315.
22. Plan and operation of the Third National Health and Nutrition Examination Survey, 1988-94, series 1: programs and collection procedures (1994) Vital Health Stat 32:1-407.
23. Diabetic Retinopathy Study Research Group (1981) Photocoagulation treatment of proliferative diabetic retinopathy: clinical application of Diabetic Retinopathy Study (DRS) findings. DRS Report No 8. Ophthalmology 88:583-600.
24. Early Treatment Diabetic Retinopathy Study Research Group (1991) Early photocoagulation for diabetic retinopathy. ETDRS Report No 9. Ophthalmology 98:766-785.
25. Stratton IM, Aldington SJ, Taylor DJ, Adler AI, Scanlon PH (2013) A simple risk stratification for time to development of sight-threatening diabetic retinopathy. Diabetes Care 36:580-585.
26. Hutchinson A, McIntosh A, Peters J et al (2000) Effectiveness of screening and monitoring tests for diabetic retinopathy. A systematic review. Diabet Med 17:495-506.
Table 1 - Cumulative incidence and 95% Confidence Interval (percent) of referable DR observed in patients with no DR at baseline, according to baseline characteristics. Each entry gives cumulative incidence % (95% CI) [numbers at risk], by time from first screening.
Subgroup | Year 1 | Year 2 | Year 3 | Year 4 | Year 5 | Year 6
OO-NIT<10 yrs | 0.72 (0.44,1.17) [2247] | 1.59 (1.14,2.22) [2162] | 2.49 (1.89,3.28) [1866] | 3.68 (2.89,4.67) [1531] | 5.54 (4.51,6.81) [1281] | 7.77 (6.45,9.34) [1061]
OO-IT<10 yrs | 0.96 (0.36,2.53) [426] | 2.64 (1.42,4.85) [403] | 3.61 (2.11,6.17) [321] | 6.37 (4.12,9.80) [263] | 8.32 (5.57,12.32) [219] | 15.13 (10.97,20.69) [174]
OO-NIT≥10 yrs | 2.06 (1.23,3.46) [687] | 3.50 (2.34,5.22) [648] | 5.12 (3.64,7.18) [572] | 6.25 (4.55,8.56) [476] | 8.95 (6.73,11.86) [390] | 11.86 (9.12,15.34) [308]
OO-IT≥10 yrs | 1.91 (0.80,4.54) [263] | 3.59 (1.88,6.79) [249] | 6.87 (4.25,11.00) [219] | 11.48 (7.79,16.75) [173] | 14.23 (9.95,20.13) [141] | 21.13 (15.43,28.57) [112]
YO-IT<10 yrs | 0.23 (0.03,1.64) [432] | 0.47 (0.12,1.87) [422] | 0.75 (0.24,2.33) [378] | 3.28 (1.76,6.07) [319] | 5.47 (3.30,9.01) [257] | 7.77 (4.94,12.12) [195]
YO-IT≥10 yrs | 1.27 (0.41,3.87) [239] | 2.61 (1.18,5.73) [233] | 4.04 (2.12,7.63) [214] | 6.16 (3.61,10.41) [194] | 11.48 (7.67,16.99) [165] | 17.18 (12.23,23.84) [142]
Table 2 - Risk of developing referable diabetic retinopathy 3 years after a negative screening test. 
Subgroup | Hazard Ratio | p | 95% CI
OO-NIT<10 | Reference | |
OO-NIT≥10 | 2.22 | 0.001 | 1.42, 3.45
OO-IT<10 | 1.41 | 0.273 | 0.76, 2.59
OO-IT≥10 | 2.75 | 0.001 | 1.57, 4.83
YO-IT<10 | 0.28 | 0.032 | 0.09, 0.90
YO-IT≥10 | 1.74 | 0.110 | 0.88, 3.43
work_6ku4cjcubbcbxeftek34ytf7ke ---- Visual Anthropology. Publication details, including instructions for authors and subscription information: http://www.informaworld.com/smpp/title~content=t713654067. Terence Wright, Reader in Media Arts, University of Luton, UK. To cite this Article: Wright, Terence (1998) 'Systems of representation: Towards the integration of digital photography into the practice of creating visual images', Visual Anthropology, 11: 3, 207-220. DOI: 10.1080/08949468.1998.9966751. Online publication date: 17 May 2010. 
Visual Anthropology, Vol. 11, pp. 207-220 © 1998 OPA (Overseas Publishers Association) Amsterdam B.V. Reprints available directly from the publisher. Photocopying permitted by license only. Published under license under the Harwood Academic Publishers imprint, part of The Gordon and Breach Publishing Group. Printed in India. Systems of Representation: Towards the Integration of Digital Photography into the Practice of Creating Visual Images Terence Wright This paper aims to set the foundations for an integration of digital photography into the broader framework of visual representation. The current climate seems to be marked by a preoccupation with contrasting the digital with the analog image. An alternative cross-cultural approach is proposed, employing "systems of representation" characterized by the wide range of strategies for communicating through the visual image that can be found in the anthropology of art. These take into account the optical principles of depiction and their cultural determinants. The paper aims to place the practice of the digital generation and manipulation of photographs at a point of convergence with a variety of other means of transcribing the three-dimensional world onto a two-dimensional flat surface. INTRODUCTION In this paper I want to argue that the introduction of digital manipulation to photography, rather than creating a rupture from existing practices in visual representation (for example, bringing about the "death of photography" [Robins 1995: 29]), has in practice brought about a more gradual shift of emphasis. This is most clear if digital photography is considered in the broader framework of visual representation. 
Indeed digital imagery only appears to have brought about the "radical and permanent displacement of photography" [Mitchell 1992: 19] from the relatively limited viewpoint of an evolutionary approach to the history of art and a developmental view of culture. The alternative position, taking a wider global overview of the generation of visual imagery, would mean that digital manipulation would not be limited to occupying a disruptive position in the trajectory of Western art. I intend to show that the broader perspective that encompasses "world art" points to computer-manipulated photography achieving a greater potential through its integration, rather than its opposition, to existing visual practices: a gradual incorporation, rather than the heralded revolution in image-making. [TERENCE WRIGHT is Reader in Media Arts at the University of Luton, UK, where he teaches photography and visual representation. His published work and research concentrate on photography, digital imagery and visual anthropology. He is completing The Photography Handbook for Routledge.] Alongside other issues concerning the impact of technology on traditional cultural practices, technological change in visual representation is not only inseparable from economic, social and environmental issues, but results in "visual syncretism". In such cases visual representation can be viewed, not as static, but as adaptive systems responding to changes that occur in the fields of technology and existing representational practices. For example, the work of Gutman [1982] on Indian photography, which entails the compression of space and unique forms of composition; and that of Sprague [1978], which has aimed to show how certain African photographs are "coded in Yoruba" and contain information "about their cultural values and their view of the world". 
DIGITAL PHOTOGRAPHY In the field of art criticism, since 1915, the artist's expressionist agenda has been supplanted by formalist and structuralist theory, which has increased the potential to locate any particular visual image within a general field of relational systems. According to the Russian Formalist critics [Lemon and Reiss 1965], images in poetry have never really changed; what has changed is the form of the poem. And if we are to apply this view to photography we might suggest that, despite technical innovations in photographic equipment and materials, the subject matter of photographs (and that of pictorial representation in general) has changed relatively little. We always have relied on (and continue to rely on) the camera to produce portraits, landscapes, images of war, etc. [Eastlake 1857: 442]. Nonetheless, the style of these images has changed over the years, and the photographer's approach to subject-matter has shifted with the tide of social and cultural change, as well as with the introduction of technical innovation. Photography's latest technical development has been the introduction of digital imagery. The manipulation of the visual array of the photograph that the "new technology" affords has broken down the distinction between the "mechanical" representation and that brought about by "human agency": the photographic and the chirographic (to use psychologist James Gibson's [1979: 272] term) becomes blurred: there is no longer a clear separation of the image "captured" by the camera from the "progressive trace" [Gibson op. cit.] drawn by the hand. In addition, for the present and the immediate future, we find ourselves in a situation where most viewers are likely to be unfamiliar with the procedures involved in the processing of digital photographs. 
So Alfred Gell's [1992: 50] proposal that the photographer only gains prestige when "the nature of his photographs is such as to make one start to have difficulties conceptualizing the process which made them achievable with the familiar apparatus of photography" will not necessarily hold true for digital photography. For the moment, at least, the spectator's attitude to the digital image is likely to remain indeterminate or undefined. Rather than considering "pure" digital photography in contrast to "pure" analog photography, the most interesting area of innovation currently is where the two practices converge. In taking a retrospective view of technical innovation in photography, we can see that it is rarely the case that a new process has immediately supplanted another. A process is introduced (or marketed) and taken up by a few individuals who see the potential and are prepared to take the risk. There can then follow a rapid expansion characterised by everyone "jumping on the bandwagon", followed by a very gradual tailing off, which may have been caused by the introduction of the next innovation. This process of technological change and innovation has been illustrated by Kubler's notion of the "battle-ship-shaped" curves [Kubler 1962], which describe how one social phenomenon runs concurrently with, and then takes over from, another. In the early years of photography, the Daguerreotype, introduced in the early 1840s, spread rapidly over the next fifteen years or so, but had then contracted towards 1860. In the meantime the collodion (wet-plate) process had been introduced and was well established and in extensive use from around 1855 to the early 1880s, falling out of mainstream practice towards the turn of the century, as it became increasingly superseded by 1875's gelatin plate, and so on. For example, in the United States, S. 
Rush Seibert recalled the introduction of collodion photography, in tandem with the simultaneous practice and gradual decline of the former Daguerreotype process. Collodion "was immediately made a success and Daguerreotypes were laid aside in many establishments, although I continued to make them at intervals between 1840 and 1874" [Busey 1900: 93]. The change from the Daguerreotype process to that of the collodion involved a move from the once-produced unique image to the multiple reproducibility possible from a negative. It has been suggested that the digital image's most radical departure from photography as we knew it lies in the rejection of the negative, yet the retention of extensive reproduction. In the case of photographs taken with digital cameras, there is no "original" image and all subsequent copies (unless deliberately altered) will be identical to the first. Scholars can often trace back through a family tree of editions or manuscripts to recover an original, a definitive version, but the lineage of an image file is usually untraceable, and there may be no way to determine whether it is a freshly captured, unmanipulated record or a mutation of a mutation that has passed through many unknown hands. [Mitchell 1992: 50] Nonetheless the issue of the "original" art object regarding the manuscript has been discussed by Wollheim [1968: 22]. He questions whether the original artwork is the production of the opera, say, or, in the case of a novel, James Joyce's manuscript. He goes on to question how much we can change the production before it ceases to be the same opera or the "original" idea. Or should we assume that we base our judgement on the initial concept of the writer? Wollheim points out that in literature, music and the performing arts reference to the "original" version is not significant. 
It is only in the visual arts that the loss of the original (for example, the physical commodity of Leonardo's painting The Mona Lisa) results in a "lost work". So, similarly, the lack of an original in digital imagery may result in a shift of emphasis from the art-object to the art-concept, focusing attention on the underlying principles of creating the visual image as well as its social and cognitive functions.

REALISM AND REPRESENTATION

Human beings have been communicating and representing their world by means of visual images for the last 35,000 years. The fact that so many pictures (tens of thousands) have survived from the Paleolithic period suggests that the activities of painting and drawing were widespread. By 10,000 BC, the activity of creating depictions had developed independently in locations as far apart as Australia, Africa, the Near East and Europe. In all these cases, we can safely assume that the images produced had some sort of communicative function, which in itself implies that visual images were understood by the members of the cultures that produced them, yet from a contemporary viewpoint our own understanding of this work is limited. The early cave paintings may have been formative attempts at creating a realistic record, perhaps used as simulations for "virtual reality" hunting exercises,1 or have functioned in broader educational roles aiding recognition and recall. They could have had magical significance intended to increase the productivity of the hunt, where they may have operated as totemic religious icons. Or perhaps they were able to liven up day-to-day life [Ucko and Rosenfeld 1967]. Although the exact purposes of these images remain obscure, pictures in general, with their changing functions over the span of history, have formed an integral part of human culture.
Among other uses, images have been employed as conveyors of information, symbols of devotion and sites of social interaction, or have provided means of discovering aspects of physical as well as psychological worlds. As Anthony Forge [1966: 23] suggests, we should "regard art and 'visual communication' as something more than a decorative icing on the heavy cake of social, economic and linguistic structures." From today's point of view, visual images have retained their central role in contemporary life. In their symbolic role, as road signs, they guide us through the urban environment or, as video images, they can offer us an apocalyptic new realism by transmitting a cruise missile's eye view as it reaches its target [Ritchin 1991]. During the time-span from the Upper Paleolithic to the present day, we have seen a wide range of styles, principles and criteria involved in the production and reception of these images. Traditionally, art historians2 have regarded early artistic endeavor as amounting to crude attempts at transcribing a three-dimensional world onto a flat, two-dimensional surface. According to this view, artistic representation has "evolved" in its accuracy and complexity to result in a gradually developing record of the ways that society and the environment were viewed by those cultures. Not only did late Nineteenth Century theorists make much of superficial similarities to the images of the so-called "primitive" cultures of today, but also, in support of theories of "recapitulation", Paleolithic art was identified with the paintings and drawings of children. This view, with its roots in Nineteenth Century evolutionism, held that the development of visual depiction culminated in the pinnacle of realism characterised by systems of mechanical reproduction such as photography, movie film and today's experiments in virtual reality.
Layton [1991: 3] describes this as the notion of "a single grand movement towards the art of the Renaissance or industrial society". The popular notion that seeing is believing had always afforded special status to the visual image. So when the technology, in the form of photography, was developed, it was considered not only to provide a record of vision; the fact that it was able to produce the image as a permanent, tangible object accounted for the extent of the medium's social and cultural impact. The chemical fixing of the image enabled the capture of what was considered to be a natural phenomenon: the visual array projected in the camera obscura. In short, it "reproduced with a perfection unattainable by the ordinary methods of drawing and painting, equal to nature itself...".3 The invention of photography during the early Nineteenth Century offered the promise of a truthful visual record that (it was assumed) did not rely upon human intervention. Photography not only produced images that were based on the rationale of linear perspective, characteristic of Western visual representation, but the camera was considered to function by the same principles as the human eye. Throughout the history of visual representation, questions have been raised concerning the supposed accuracy (or otherwise) of the visual image, as well as its status in society. Ideas concerned with how we perceive the world, and how this affects the status of its pictorial representations, have been central concerns from the time of Plato to the present-day technical revolution of the new media communications. Theories of vision and representation have pursued interdependent trajectories, influencing each other throughout the history of Western culture.
Indeed, Wartofsky [1980] has maintained that the beliefs derived from representational systems are central to determining a culture's theory of visual perception. In many cases where visual imagery is used to help realise concepts—in such fields as architecture, engineering and graphic communication—the type of visualisation provided by the camera is not appropriate and at worst can be totally misleading. The artist and designer have a range of representational strategies to fulfil the requirements of the task in hand. However, the use of a visual "system" does not depend upon function alone. There may be social, cultural, philosophical or religious criteria that can play an important role in determining the outcome of a representation. This is not limited to "style" alone, but can have a deep-seated basis in a culture's values and codes of behavior. All these can determine the scope and limitations of representational practice, as well as the choice of options available to the "artist". The photographic image was held to be an achievement of a sophisticated culture, thought able to produce "automatically" the type of image that artists had struggled throughout the centuries to acquire the manual, visual and conceptual skills to create. In this developmental scheme of things, every form of picture-making that had gone before, including the visual arts of "other" cultures, amounted to more or less approximate attempts at scaling the representational heights gained by the Western world. According to some Nineteenth Century theorists, just as children learned how to draw, starting with "primitive" scribbling and developing into sophisticated adult representations, so the representations of "others" were seen as mirrors of cultural and racial development. For example, the Victorian psychologist Sully [1895: 385] proclaimed, "it is...
incontestable that a number of characteristic traits in children's drawings are reflected in those of untutored savages". From this viewpoint, Western art demonstrated the development of the "correct" ways of viewing the world. In summary, the general regard the West has had for "other" cultures is reflected in the ways that Western scholarship has regarded the imagery of those cultures. In contrast to this view, modern scholarship has increasingly looked to visual representation as an activity totally integrated into the fabric of culture. As Morphy [1989: 1] has pointed out, this "sophisticated analytic approach coincided with changes in attitude to contemporary indigenous societies and the realisation of the complexities of their conceptual systems". From our contemporary viewpoint, upon taking a sideways glance at "other" (non-Western) cultures, we not only find that the production of visual representations is a universal activity, but that, despite the proliferation of photography and other lens-based media, there too exists a vast range of ways and means for transcribing aspects of experience as two-dimensional representations. Nevertheless, despite this long history and widespread practice of visual representation, relatively little is known about how visual images are actually able to communicate. Until recently, the issue appeared fairly straightforward, in that mainstream visual communication has been concerned with the pursuit of uncomplicated theories of "realism". Although this trend has continued through such developments as photography, movie film, television, holography and contemporary initiatives in "virtual reality", innovations in computer technology have given rise to new forms of visual representation.
At the very same time, the medium of television continues on a course of rapid global expansion, establishing the electronic camera in a central role within the "universal" medium of communication. The digital age has also led to an increased emphasis on the visual over the traditional, written forms of communication. This dramatic renewal of interest emphasises an urgency to obtain a greater understanding of how visual images communicate, as well as their scope and potential to adapt to future technological and cultural change.

SYSTEMS OF REPRESENTATION

In the foregoing section, reference is made to such terms as "depiction" and "visual representation and communication" in favor of using the term "art". This paper proposes a shift of emphasis away from art historical studies of visual imagery to a wider theory and history of "visual representation", which can include a broader spectrum of Western categories of architectural and engineering drawings, film and photographs, art and design, as well as a global diversity of picture-making traditions and new schema emerging with today's "digital culture". The "system of representation" also suggests that forms of visual image-making can be regarded as systematic—in that identifiable principles and criteria are involved in their production—and, in their cultural contexts, they serve social and cognitive functions. They can act as sites for human interaction as well as providing the means for understanding our environmental, political and cultural worlds. Moreover, for many "traditional" cultures the concept of "art" does not exist. For example, in Australian Aboriginal culture, the activity of making pictures forms an integral part of religious ritual and other day-to-day activities. The existence of "art" (as a specialism) and "artists" (as its specialists) is very much a Western preoccupation.
And Flores [1985: 35] maintains that our experience of the "High Art" of Western culture fosters the creation of artificial divisions between the "representational" activities of other cultures. Many non-literate cultures recognise little distinction between "art" and "craft", and the Western notion of the pursuit of creating an object that is "beautiful", in contrast to making the "functional" object, has equally little relevance. And if we look to the performing arts for a comparison, we find that anthropologists have encountered great difficulty in differentiating between "performance", "ritual" and the enactment of "myth". In the case of visual representation, Küchler [1987: 238] describes the problem of using the Western term "art" for those visual representations which form part of a broader ceremonial activity: "The art is known under the indigenous term as malangan. It is a collective term for sculptures and dances as well as for the mortuary ceremony and ceremonial exchange". One solution to such dilemmas is Silver's [1979] conception of ethnoart, which attempts to adopt and employ those terms and concepts used by a particular culture. But even as far as Western artistic output is concerned, theorists such as Kubler [1962] have suggested that we might turn our attention to a "History of Things", whereby all manufactured objects can be regarded as "art". For our purposes this would present too broad a brush, for the purpose of this enquiry is to examine a phenomenon which essentially concerns visual representation in two dimensions. A central concern of visual representation is that the usual purpose of images is to "re-present" something other than themselves: some other "reality". For Munn [1973], artistic production is guided by structural principles which reflect cultural patterns. In these instances, visual images are able to reflect and promote the abstract social structures and concerns of a particular culture.
Here the approach of Lévi-Strauss is extremely pertinent. He recognises little distinction between the behavioral and the ideational, which means that the performance of social and ritual activities (behavioral) is deeply entwined with myth and symbolism (ideational). So whether the activity is dance or painting—what we might call the formal aspects of behavior—the immediate observable structures cannot be regarded in isolation from the context of expression and the underlying symbolism—the deeper cultural or generative structures. The problem of finding the appropriate terminology can be compounded by the lack of any universal or cross-cultural criteria for the interpretation and evaluation of artworks, whereby the Western avant-garde approach with its revisionist agenda runs in strong contrast to the relative conservatism of its ethnic counterparts. Indeed, the terms "innovation" and "creativity" assume very different roles in different cultures. Furthermore, over the past hundred and twenty years, the practice of making images, particularly under the heading of "art", has made it its business deliberately to revise and challenge its own traditions of practice. Weitz [1956: 439] characterises the process as "a decision on someone's part to extend or to close the old or invent a new concept (for example, 'It's not a sculpture, it's a mobile.')." While we may live in a culture that has, for the last five hundred years at least, pursued verisimilitude, other cultural traditions have not been so concerned with the representation of an "external" reality. For example, the Abelam of New Guinea have created carvings which do "look like" animals and birds, yet there seems to be little preoccupation with representational issues, other than how the carving operates within self-contained traditions of practice.
The principles and criteria here are purely formal, so in response to questions concerning representational meaning or significance: "... the answers to questions are always in the form—'It is the way to do it', or 'This is the way our ancestors did it' or 'This is the most powerful (supernatural) way to do it'." [Forge 1966: 23] Similar questions concerning realism and formalism have come to play central roles in contemporary representation. Meanwhile, Firth finds in traditional Tikopian culture a contrast between the naturalistic and the abstract where "the bird of naturalistic form was of less ritual weight than its abstract presentation, the 'sacred creature'. The image of the bird in geometrical projection carried more emotional loading than the more literal presentation of it" [Firth 1992: 27]. Other systems involve multi-referential meaning, whereby any one particular element of an image can signify any number of meanings. This may be open to the misinterpretation of a developmental view of art wherein artists of other cultures and periods have striven, with only varying degrees of success, to produce the sort of image that is produced by the camera. However, as has been noted by Boas [1927: 221-50], the "split representations" of animals produced by the North-West Coast Indians are not failures in perspective, but operate by a very different "system". The artist must include the numerous symbolic features of the animal, which contain details of totemic groupings and information regarding individuals' social rank and status [Layton 1991: 153].

COMPUTER IMAGERY AND VISUALISATION

The advent of the digital image has led to a greater need to understand not only how visual imagery provides information about human culture, but also how it places renewed emphasis upon the functioning of the human mind in the perception of the environment and its visual images.
Digital processing provides new models of visual perception and challenges the veracity of the visual image. As human culture has increased in its complexity, it is becoming more and more evident that a one-size-fits-all mode of representation is less and less of a viable option. Visual representation is not only inextricably linked to cultural criteria, but abides by its own principles and internal logic. For example, we should regard Mediaeval art as "not a childish or irrational way of recording visual experience, for our eye does not dwell on a single point, but moves, and we move and a procession of objects passes before it" [Clark 1949: 29]. And while we may strive towards greater realism, aiming for exact reproductions or creations of virtual realities, our concepts and criteria for verisimilitude have always been governed by cultural requirements and aspirations.4 There is a need to address the importance of cultural differences in media representations, in particular the variety of approaches to the psychology and anthropology of visual communication that have occurred over the past fifty years. Traditionally, in the psychology of visual perception, studies have been polarised between unproblematic realism and conventionalism, neither of these directly addressing the scope and limitations of visual representation. Similarly, anthropological studies, seeing visual representation as having little relevance to "social facts", took theoretical approaches to visual images which were identified with those of material culture. These studies became restricted by diffusionist assumptions to dwelling on issues of post-production cataloging. At the same time, the institutions and practices of the Fine Arts have categorised and marginalised the esthetic schemes of "other" cultures as "Ethnic Arts" or "Primitive Art" [Hiller 1991].
However influential they have proved for Western artists, they are frequently regarded as the product of basic craft skills and as "primitive" in nature: critical evaluation in this area might be described as having been limited to "Primitive Formalism". Since the 1960s not only have indigenous art-forms attained new importance and self-consciousness for minority groups, but also, with the widening of access to the media, cultural traditions, styles and influences are becoming increasingly significant [Graburn 1976].

CASTING LIGHT

Photography provides a useful "embarkation point" from which to address the significant issues arising from the broad range of schemes and systems of visual representation that exist throughout the world's cultures. Yet if photography has not entirely achieved the status of the "universal language" as propounded by the photographer August Sander [1933], "Even the most isolated Bushman could understand a photograph of the heavens", it might still be considered a universal system of representation. We normally expect a photograph to offer us an accurate and straightforward two-dimensional representational image that has a fairly close correspondence to the ways we perceive events in the world. Furthermore, it is often considered that the mechanical nature of the camera accounts for the "automatic" transcription of the three-dimensional world into pictorial form. Our identification of the scope and limitations of photographs can provide the theoretical foundations for addressing other, less familiar, systems of representation. Photography is based on a projection system, whereby the light rays emitted by an object and scene are cast onto a two-dimensional surface: a principle observed by Aristotle as far back as 320 BC [Ross 1927; Eder 1945: 36]. However, the casting of light has played important roles in human history well beyond photography.
It was not only central to some of the principles that governed the construction of ancient earthworks such as Stonehenge, but the shadow cast by an object on a flat surface—the origin of the orthographic projection—has been a basic source of picture-making since the Paleolithic. This relatively simple form of image-making, which does not rely upon the camera, has the advantage of not being subject to some of the problems that arise from photographic hardware: for example, lens distortions and foreshortening. Images produced by this means are not restricted by the notion of a frame, but are bounded only by the expanse of an irregular picture surface: a cave wall, for instance. In addition, the camera obscura is "architecturally dependent", in that the phenomenon is most likely to be observed only by people who dwell in geometrically consistent, flat-surfaced buildings (although, during an eclipse, images of the sun can apparently be cast onto the ground through the foliage of trees [Minnaert 1993]). Nevertheless, it is one thing to observe a "natural optical phenomenon"; it is quite another for a culture to decide to incorporate it into a system of visual representation, let alone give it a central role. "Other" cultures have adopted systems of representation that employ different criteria. Japanese painting, for example, has employed multiple viewpoints and the oblique projection system. Other representational strategies, such as cave art, emphasise other pictorial criteria, for instance that the image has no definable boundaries nor anything resembling a frame.
So we can consider the tradition of representing the world through the boundary of a rectangle as being a peculiarly Western urban phenomenon: "Chinese painters like Chou Ch'en never considered that they should portray nature as if it were seen through a window, and they never felt bound to the consistency of the fixed viewpoint demanded of their Western counterparts" [Edgerton 1980: 187]. However, although orthographic projection involves a one-to-one mapping, producing an image that does not vary in size or shape from the object it represents, the system does pose a number of restrictions upon the artist: it is only through recording the sitter's profile that a recognisable portrait can be obtained, and it is usual for the "Egyptian style" to be adopted for a clear depiction of the full figure. For the sake of argument, in the orthographic system of representation the sun's rays, upon striking an object on the Earth, can be considered as parallel. This means that, irrespective of the distance of the object from the picture surface, the depicted object will always be projected at the same size as the original. Two points are of central importance to this system: only to an extremely limited degree is it possible to place objects in pictorial space, and the representation itself has no central viewing point and exists without having any integral concern for the viewer. This contrasts with the role of linear perspective systems in creating dramatic illusions, such as Andrea Pozzo's painted ceiling in Sant' Ignazio in Rome, discussed at length by Pirenne [1970]. At face value, it would seem that a type of "technological determinism" is responsible for the formation of pictures, whereby it is a culture's theory of vision that determines its modes of visual representation. And we can see there is indeed a close analogy between Plato's ideas regarding the nature of the world and the prevailing canons of visual representation.
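The contrast between the two projection systems can be stated numerically: orthographic projection simply discards depth, so projected size is invariant with distance from the picture surface, while central (perspective) projection converges on a viewpoint and scales by focal length over depth. A minimal Python sketch, with coordinates and focal length invented for illustration:

```python
def orthographic(point):
    # Parallel rays: depth (z) is simply discarded, so projected
    # size is independent of distance from the picture surface.
    x, y, z = point
    return (x, y)

def perspective(point, f=1.0):
    # Central projection: rays converge on a single viewpoint, so
    # projected size shrinks in proportion to distance (f / z).
    x, y, z = point
    return (f * x / z, f * y / z)

def height(project, top, bottom):
    # Projected height of a vertical edge under a given projection.
    return project(top)[1] - project(bottom)[1]

# A one-unit-tall edge placed at two different depths.
near_top, near_bottom = (0.0, 1.0, 2.0), (0.0, 0.0, 2.0)
far_top, far_bottom = (0.0, 1.0, 20.0), (0.0, 0.0, 20.0)

# Orthographic: the same projected height at any depth.
assert height(orthographic, near_top, near_bottom) == 1.0
assert height(orthographic, far_top, far_bottom) == 1.0

# Perspective: the edge ten times further away projects ten times smaller.
assert abs(height(perspective, near_top, near_bottom)
           - 10 * height(perspective, far_top, far_bottom)) < 1e-9
```

The absence of any viewpoint parameter in the orthographic function mirrors the point made above: the representation has no central viewing point and no integral concern for the viewer.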
Plato was expounding his philosophical analogy of shadows cast on the wall of the cave at a point of transition in visual imagery from systems based on orthographic projection to those of oblique projection. Two thousand years later, Kepler's theory of vision, as well as the visual arts of the Sixteenth and Seventeenth Centuries, were closely associated with Descartes' philosophical standpoint. Furthermore, much of our contemporary computer technology, by means of virtual reality head-sets for instance, is devoted to simulating our current understanding of perceptual input. The image of the camera obscura, in turn, was considered to be a very close approximation to that which we actually see, and the chemical fixing of that image enabled the capture of what might be considered a natural phenomenon. However, the important point for photography is that a theory of pictorial representation evolved which had a firm basis in the current understanding of the optical and physical mechanisms of vision. In our present age of computer technology we have inherited this tradition of developing representational systems that aim to replicate our current understanding of visual processes. A significant change in perceptual theory occurred in the 1950s in the work of the psychologist James J. Gibson. During World War II, it was Gibson's encounter with the problem of landing aircraft on moving carriers that led to his rethinking of visual perception. Indeed, such an approach to perception, derived from flight simulation, has taken on additional contemporary relevance in the creation of virtual environments [Reingold 1991: 143-44]. The development of Gibson's theory is most clearly described in his Ecological Approach to Visual Perception [1979].
The existing theories of perception did not provide an adequate account of how organisms could find their way around their environments. In particular, it is Gibson's concept of the active exploratory perceiver that is most relevant to notions of computer interactivity.

CONCLUSION

This paper has suggested that the assimilation of digital imagery into existing practices of visual representation has shifted the emphasis away from the notion of "traditional" media: photography, painting, drawing, etc., to a broader consideration of systems of representation. These are characterised by the range of picture-making systems, together with their integration into the particular social and cognitive roles, that are found in visual representations functioning in "other" cultures and different historical periods. In this context, photography offers a relatively limited range of projection systems for transcribing three-dimensional space. Rather than limiting photography's ability to record a "truthful" image, computer manipulation has the potential to broaden the repertoire of the photographic system and to enrich photography's scope and ability to describe the visual world. The dichotomy of photographic "truth or lies" does not arise—photographs have always been subject to mis-representations—or "infelicities", as the ex-newspaper editor Harold Evans [1978: 227] rather coyly has described them. In 1992 William Mitchell pronounced that "from the moment of its sesquicentennial in 1989 photography was dead" [1992: 20]. This statement was no doubt intended to echo Delaroche's 1839 pronouncement at the announcement of the invention of photography: "from today painting is dead". If there are any parallels to be drawn, or lessons to be learned, over the century and a half, the statement is incorrect. Painting did not die in 1839, nor did photography die in 1989.
Nonetheless, painting was never to be quite the same again—and, like painting, we find our whole conceptual outlook through the medium of photography has been irrevocably changed. Photography too will no doubt seek new applications and a modified role. The encroachment made on professional practice by the digital image means that photography can no longer be regarded as a "window on the world"—but then it never really was.

NOTES

1. E.H. Gombrich [1950: 23]: "these primitive hunters thought that if they only made a picture of their prey—and perhaps belaboured it with their spears or stone axes—the real animals would also succumb to their power."
2. Gombrich, for example, [1950: 20] suggests, "All that is needed is the will to be absolutely honest with ourselves and see whether we, too, do not retain something of the 'primitive' in us."
3. Quote from Gay-Lussac's report to the Chamber of Peers, 30 July 1839, from Eder [1945: 242].
4. For example, Kühn [1923] has suggested that illusionism in the arts results from social systems based upon exploitation and consumption.

REFERENCES

Boas, Franz 1927 Primitive Art. New York: Dover. [1955 reprint]
Busey, Samuel C. 1900 Early History of Daguerreotype in the City of Washington. Columbia Historical Society Records, 3: 81-95.
Clark, Kenneth 1949 Landscape into Art. Harmondsworth: Pelican.
Eastlake, Elizabeth 1857 Photography. Quarterly Review, 101: 442-68.
Eder, Josef Maria 1945 History of Photography. Edward Epstean, trans. New York: Columbia University Press.
Edgerton, S.Y. 1980 The Renaissance Artist as Quantifier. In The Perception of Pictures. M.A. Hagen, ed. New York: Academic Press.
Evans, Harold 1978 Pictures on a Page. London: Heinemann.
Firth, Raymond 1992 Art and Anthropology. In Anthropology, Art and Aesthetics. Jeremy Coote and Anthony Shelton, eds. Pp. 15-39. Oxford: Clarendon Press.
Flores, T. 1985 The Anthropology of Aesthetics.
Dialectical Anthropology, 10: 27-41.
Forge, J. Anthony W. 1966 Art and Environment in the Sepik. Proceedings of the Royal Anthropological Institute for 1965: 23-31. London: Royal Anthropological Institute.
Gell, Alfred 1992 The Technology of Enchantment and the Enchantment of Technology. In Anthropology, Art and Aesthetics. Jeremy Coote and Anthony Shelton, eds. Pp. 40-46. Oxford: Clarendon Press.
Gibson, James J. 1979 The Ecological Approach to Visual Perception. Boston: Houghton Mifflin.
Gombrich, Ernst H. 1950 The Story of Art. London: Phaidon.
Graburn, Nelson H.H., ed. 1976 Ethnic and Tourist Arts: Cultural Expressions of the Fourth World. Berkeley: University of California Press.
Gutman, Judith M. 1982 Through Indian Eyes: 19th and Early 20th-century Photography from India. New York: Oxford University Press.
Hiller, Susan, ed. 1991 The Myth of Primitivism: Perspectives on Art. London: Routledge.
Kubler, George 1962 The Shape of Time: Remarks on the History of Things. New Haven: Yale University Press.
Küchler, Susan 1987 Malangan: Art and Memory in a Melanesian Society. Man, 22: 238-55.
Kühn, Herbert 1923 Die Kunst der Primitiven. Munich: Delphin-Verlag.
Layton, Robert 1991 The Anthropology of Art. 2nd ed. London: Granada.
Lemon, L.T., and R.J. Reiss, eds. 1965 Russian Formalist Criticism: Four Essays. Lincoln: University of Nebraska.
Minnaert, M.G.J. 1993 Light and Colour in the Outdoors. L. Seymour, trans. and rev. New York: Springer-Verlag.
Mitchell, William J. 1992 The Reconfigured Eye: Visual Truth in the Post-photographic Era. Cambridge, Mass.: MIT Press.
Morphy, Howard, ed. 1989 Animals into Art. London: Unwin-Hyman.
Munn, Nancy D. 1973 Walbiri Iconography. Ithaca: Cornell University Press.
Pirenne, Maurice H. 1970 Optics, Painting & Photography. Cambridge: Cambridge University Press.
Reingold, Harold 1991 Virtual Reality.
New York: Touchstone.
Ritchin, Fred 1991 The End of Photography as We Have Known It. In Photovideo: Photography in the Age of the Computer. Paul Wombell, ed. London: Rivers Press.
Robins, Kevin 1995 Will Image Move Us Still. In The Photographic Image in Digital Culture. M. Lister, ed. London: Routledge.
Ross, W.D., trans. 1927 The Works of Aristotle Translated into English, Volume VII—Problemata. Oxford: Clarendon Press.
Sander, August 1933 Photography as a Universal Language. A. Halley, trans. Massachusetts Review, 19: 674-75 [1978].
Silver, H.R. 1979 Ethnoart. Annual Review of Anthropology, 8: 267-307.
Sprague, S. 1978 Yoruba Photography: How the Yoruba See Themselves. African Arts, 12(1): 52-59.
Sully, J. 1895 Studies in Childhood. London: Longmans.
Ucko, Peter J., and A. Rosenfeld 1967 Palaeolithic Cave Art. London: Weidenfeld and Nicholson.
Wartofsky, Marx W. 1980 Visual Scenarios: the Role of Representation in Visual Perception. In The Perception of Pictures, Vol. II. M.A. Hagen, ed. New York: Academic Press.
Weitz, Morris 1956 The Role of Theory in Aesthetics. Journal of Aesthetics and Art Criticism, 15: 27-35.
Wollheim, Richard 1968 Art and its Objects, an Introduction to Aesthetics. New York: Harper & Row.
work_6nkt7f2l7zgcdgdcllsep4t4zq ----
EDITORIALS
The Clinical Site-Reading Center Partnership in Clinical Trials
RONALD P. DANIS
Former U.S. President Martin Van Buren once wrote, "It is easier to do a job right than to explain why you didn't." This dictum is applicable to all aspects of clinical trials. The clinical site-reading center partnership focuses on collection of high-quality, standardized, unbiased imaging data.
High-quality data collection by clinical sites according to standardized methodology has been central to the majority of studies leading to the most exciting advancements in ophthalmic patient care. The current model of multicenter clinical trials employing coordinating centers and reading centers has evolved partly to promote a high level of data quality through training, certification, and monitoring of clinical site data collection. This Editorial will briefly review the roles of reading centers in clinical trials, the purpose of imaging protocols, certification, and image quality monitoring, quality control within reading centers, clinical site responsibilities, and the impact of new technology in large clinical trials. As residents in ophthalmology during the 1980s at the University of Wisconsin, we were steeped in the importance of standardized independent evaluation to minimize bias and variability in clinical trial assessments. The seminal reports from the Diabetic Retinopathy Study, Early Treatment Diabetic Retinopathy Study, Wisconsin Epidemiologic Study of Diabetic Retinopathy, Diabetic Retinopathy Vitrectomy Study, and Macular Photocoagulation Study had radically changed the standard of care for diabetic retinopathy and neovascular macular degeneration, and they relied heavily upon photographic assessments from reading centers. Later, however, in the role of clinical investigator recruiting patients for clinical trials from a busy retina subspecialty practice, I was shocked to learn the minute details specified and required by reading centers – photographic field definition, stereoscopy, film type (sometimes by film lot number), processing by certified labs, labeling and sorting of slides, etc. Meeting these requirements was sometimes a frustrating burden upon the clinic staff and patients and sometimes seemed downright obsessive.
Accepted for publication Jul 15, 2009. From the Department of Ophthalmology and Visual Sciences, University of Wisconsin – Madison, Madison, Wisconsin. Inquiries to Ronald P. Danis, Department of Ophthalmology and Visual Sciences, University of Wisconsin – Madison, 406 Science Drive, Suite 400, Madison, WI 53711; e-mail: rdanis@rc.ophth.wisc.edu. © 2009 by Elsevier Inc. All rights reserved. 0002-9394/09/$36.00 doi:10.1016/j.ajo.2009.07.017
My liberation came with the gradual appreciation of the relationship between process and procedure at the site and data quality. These minute requirements existed because the experience of the reading center taught that systems can fail without them. This renewed my appreciation for the clinical research coordinator and photographers who maintained the high level of data quality from our site despite my occasional impatience. Our staff understood Van Buren's quip better than I did. My role has changed again and now, as Director of the University of Wisconsin-Madison Fundus Photograph Reading Center, I have the opportunity to respect and admire the many clinical investigators and their staff who are able to meet the requirements of clinical research with enthusiasm. Such sites share in common the attributes of adequate staff time dedicated to clinical research, thorough familiarity with the study protocol and procedures, and experience. Ophthalmic reading centers are diverse and include centers for interpretation of optic nerve head topography, visual fields, corneal endothelial images, and a variety of retinal images. Reading centers provide image evaluations that are uniform across clinical sites by evaluators who are easily masked to treatment assignment and other clinical information.
They have the potential to employ more detailed observations than what might be possible in clinics and to develop disease classifications from them; a prime example has been the Early Treatment Diabetic Retinopathy Study (ETDRS) diabetic retinopathy severity scale from stereoscopic color photographs,1 which has been used as a major outcome in multiple important trials of diabetes complications. The requirements for image analysis in clinical trials are specific to each study. The importance of morphology outcomes for a study weighs in the balance. If the primary outcome is a functional assessment such as vision, there may be no need for a reading center to evaluate images. In studies where morphology is the primary outcome, the need for masked standardized independent assessment should be carefully considered. Imaging data may be useful for the development of disease classifications, hypothesis generation, and subgroup analyses. Masked independent evaluation to control bias is particularly important when the treatment benefit is small, a function which often can be more easily and reliably performed in a reading center environment. The problem of variability in clinical assessments can be partially overcome through the use of trained observers using a strict protocol in a centralized setting to promote uniformity.2,3 Historically, reading centers have excelled in the interpretation of clinical images in a manner somewhat analogous to the physician observer but with greater detail and complexity than is possible in the clinic. The reading center evaluation may be repeated and the reproducibility measured; an extremely difficult process to replicate in the clinic setting.
Knowing the reproducibility of measurements is important when statistically analyzing imaging data and in judging significance of the data from a clinical perspective.2,3 Reading centers continually test and report the reproducibility of grading for each data item. A systematic quality control program is necessary as part of the grading process. The reading center prepares protocols for image capture and submission, trains the site staff on these procedures, and certifies the imagers. The imaging protocol is detailed and written clearly so as to standardize data collection as much as possible. Digital imaging requires the protocol to be specific to the make and model of the equipment because most camera and optical coherence tomography (OCT) manufacturers in ophthalmology have developed proprietary software that produces unique non-standardized file formats. Certification is the formal process by which the clinical site staff demonstrate understanding and ability to adhere to the study protocol before the first patient is enrolled. The certification process may be quite simple for some imaging methods, and very difficult for others. Inexperienced imagers may make multiple attempts and even then fail to become certified – a source of frustration for all involved. Because of the pressures on modern clinical practices for increased efficiency and productivity in the face of declining revenue, the clinical site staff member pressed into the role of imager is sometimes incompletely trained for clinical trial work. Clinical practice demands for imaging, which emphasize capture of the abnormalities of particular interest in an individual patient, are often different from the rigorous standard protocols typically used in clinical research. The diligence required by the imager to become certified for clinical trial protocols, with feedback and advice from the reading center, often improves the quality of the work performed for patient care purposes.
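As an aside, the grading reproducibility that reading centers report for categorical data items is often summarized with a chance-corrected agreement statistic such as Cohen's kappa. The following minimal sketch illustrates the computation; the severity levels and values are invented for the example and are not taken from this editorial or any study:

```python
# Illustrative sketch only: Cohen's kappa, a chance-corrected agreement
# statistic of the kind used to summarize grading reproducibility.
# The grades below are invented for the example.
from collections import Counter

def cohens_kappa(grades_a, grades_b):
    """Chance-corrected agreement between two gradings of the same images."""
    assert len(grades_a) == len(grades_b) and grades_a
    n = len(grades_a)
    # Observed proportion of images given the same grade both times.
    observed = sum(a == b for a, b in zip(grades_a, grades_b)) / n
    # Agreement expected by chance from each pass's marginal frequencies.
    freq_a, freq_b = Counter(grades_a), Counter(grades_b)
    expected = sum(freq_a[g] * freq_b.get(g, 0) for g in freq_a) / n ** 2
    return (observed - expected) / (1 - expected)

# Two grading passes over the same ten images (hypothetical severity levels).
first_pass = [35, 43, 35, 47, 53, 35, 43, 61, 35, 47]
second_pass = [35, 43, 35, 43, 53, 35, 43, 61, 35, 47]
kappa = cohens_kappa(first_pass, second_pass)  # ≈ 0.86
```

A kappa of 1.0 indicates perfect agreement, while 0 indicates agreement no better than chance; reporting such a statistic per data item is one way to make grading variability visible, as described above.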
Reading centers also monitor and report upon image quality throughout the course of a trial in order to identify and help correct problems quickly should they occur. Image quality issues are a common source of variability and missing data in clinical trials. For instance, if images of an eye at one visit show retinal neovascularization, but at the next visit the image is poorly focused or the photographic field does not capture the area in question, there may be a spurious "disappearance" of the neovascularization that mimics the therapeutic effect of an intervention. If the images are severely flawed (fortunately, a rare event), the reading center may not be able to produce data for those visits. Missing data can have a deleterious impact upon the statistical analyses and results in a clinical trial, and, if there is enough missing data, the outcomes of the trial may be called into question.4 An error in obtaining baseline images may severely limit the usefulness of imaging data on that patient. Of particular interest from an image quality standpoint is the transition from color film fundus photography to digital color fundus images in clinical trials. Because digital camera chips handle illumination, contrast, and color balance quite differently from film,5 obtaining images similar to film quality with digital cameras has been challenging. Post hoc image enhancement at the reading center can improve the illumination, color balance, and contrast of substandard digital images.5 This may help preserve continuity of grading with historical film data sets and minimize variability within studies using both film and digital photography. The best solution is for the imagers at the clinical sites to educate themselves regarding the assessment and modification of tonal balance and how this is accomplished with the equipment available at the clinic to produce the highest quality photographs – this is beneficial for both research and clinical care purposes.
Technological advances have led to instruments that make automated measurements that formerly were clinical assessments. For example, macular edema assessment by OCT is clearly a better method for measurement of retinal thickness at the center of the macula than stereoscopic color photographs or the eye of the clinician. In general, the human mind/eye remains (for the moment) superior to software for purposes of lesion classification in complex images under a variety of quality conditions, for solving quality issues, and for handling unusual disease presentations. Looking forward, it is inevitable that automated lesion detection, classification, and measurement will become increasingly reliable and used more frequently in the clinic setting. Is there a future role for reading centers in the context of increasing technologic innovation and automated measurement? Actually, new imaging technology (eg, OCT, fundus autofluorescence) seems to increase, rather than decrease, the demand for reading center services. This is in part attributable to the uncertainty of how best to classify disease with the new methodology, and to identify and rectify new imaging quality issues that present with new technology. Even if the only morphologic outcome variable to be analyzed is an automated measurement obtained directly from an instrument at the clinical site, a reading center may be of value for certification and quality control of imaging. The evidence base upon which medical care of patients is founded is increasingly dependent upon the data from carefully designed and conducted multicenter clinical trials.6 Ocular imaging analysis in support of such trials must be of demonstrable high quality. Without this assurance, a study runs the risk of running afoul of Van Buren's credo. In the end, it is the diligent efforts of clinical site investigators and staff, and the patients that donate time and their own resources, that create the foundation of ophthalmology clinical research.
THE AUTHOR INDICATES NO FINANCIAL SUPPORT OR FINANCIAL CONFLICT OF INTEREST. The author was involved in design and conduct of study; collection of data; management, analysis, and interpretation of data; and preparation, review, and approval of manuscript. Institutional Review Board approval was not needed for this study. The author wishes to thank Dr Matthew D. Davis, University of Wisconsin-Madison, Madison, Wisconsin, for critical review of this manuscript.
REFERENCES
1. Early Treatment Diabetic Retinopathy Study Research Group. Grading diabetic retinopathy from stereoscopic color fundus photographs – an extension of the modified Airlie House classification. ETDRS Report No. 10. Ophthalmology 1991;98:786-806.
2. Lachin JM. The role of measurement reliability in clinical trials. Clin Trials 2004;1:553-566.
3. Ederer F. Methodological problems in eye disease epidemiology. Epidemiol Rev 1983;5:51-66.
4. Minckler DS, Vedula SS, Li TJ, Mathew MC, Ayyala RS, Francis BA. Aqueous shunts for glaucoma. Cochrane Database Syst Rev 2006:CD004918.
5. Hubbard LD, Danis RP, Neider MW, et al. Brightness, contrast, and color balance of digital versus film retinal images in the age-related eye disease study 2. Invest Ophthalmol Vis Sci 2008;49:3269-3282.
6. Ferris FL. Clinical trials – more than an assessment of treatment effect: LXV Edward Jackson Memorial Lecture. Am J Ophthalmol 2009;147:22-32.e21.
AMERICAN JOURNAL OF OPHTHALMOLOGY, VOL. 148, NO. 6, DECEMBER 2009
work_6oeto6a4zra3hgtf6i2bjpad5e ----
Vertical contact tightness of occlusion comparison between orofacial myalgia patients and asymptomatic controls: a pilot study | Semantic Scholar
DOI: 10.1177/0300060518782346 | Corpus ID: 54433444
Vertical contact tightness of occlusion comparison between orofacial myalgia patients and asymptomatic controls: a pilot study
@article{Qi2018VerticalCT, title={Vertical contact tightness of occlusion comparison between orofacial myalgia patients and asymptomatic controls: a pilot study}, author={K. Qi and Y. Xu and Shao-Xiong Guo and Wei Xiong and M. Wang}, journal={The Journal of International Medical Research}, year={2018}, volume={46}, pages={4952 - 4964} }
K. Qi, Y. Xu, +2 authors M. Wang. Published 2018. The Journal of International Medical Research.
Objective: The association between occlusal contact and orofacial pain remains unclear. The aim of this study was to detect occlusal contact tightness by using a new method and to compare differences between patients and asymptomatic controls.
Methods: Fifteen female patients with orofacial myalgia and fifteen age- and sex-matched asymptomatic controls were enrolled. Occlusal contacts were recorded by making bite imprints. The numbers, sizes, and distributions of the contacts were detected by…
work_6olzxeyqzrb5jgbwhfgq3t7woy ----
Amazon.com: Physical Origins of Time Asymmetry (9780521568371): J.
Zurek (Editor) Be the first to review this item | (0) List Price: $110.00 Price: $101.54 & this item ships for FREE with Super Saver Shipping. Details You Save: $8.46 (8%) In Stock. Ships from and sold by Amazon.com. Gift-wrap available. Only 1 left in stock--order soon (more on the way). Want it delivered Tuesday, March 20? Order it in the next 33 hours and 32 minutes, and choose One-Day Shipping at checkout. Details 13 new from $97.54 8 used from $65.00 FREE Two-Day Shipping for students on millions of items. Learn more Book Description Publication Date: March 29, 1996 | ISBN-10: 0521568374 | ISBN-13: 978-0521568371 In the world about us, the past is distinctly different from the future. More precisely, we say that the processes going on in th world about us are asymmetric in time or display an arrow of time. Yet this manifest fact of our experience is particularly diffi explain in terms of the fundamental laws of physics. Newton's laws, quantum mechanics, electromagnetism, Einstein's theory gravity, etc., make no distinction between past and future - they are time-symmetric. Reconciliation of these profoundly confl facts is the topic of this volume. It is an interdisciplinary survey of the variety of interconnected phenomena defining arrows o time, and their possible explanations in terms of underlying time-symmetric laws of physics. Frequently Bought Together Customers buy this book with The Direction of Time (Dover Books on Physics) by Hans Reichenbach Paperback Price For Both: $111.55 Hello. Sign in to get personalized recommendations. New customer? Start here. Shop All Departments Sign in 21 used & new Have one to sell? Formats Amazon Price New from Used from Hardcover -- -- $54.99 Paperback $101.54 $97.54 $65.00 Amazon.com: Physical Origins of Time Asymmetry (9780521568371): J... http://www.amazon.com/Physical-Origins-Time-Asymmetry-Halliwell/... Стр. 
1 из6 18.03.2012 17:42 + Show availability and shipping details Editorial Reviews Review "...a veritable fireworks of ideas in computation, physics and cosmology. Some of the discussion is also recorded, and this add considerably to the reader's understanding and enjoyment of the book....The distinction of the participants and the intrinsic in of the topics discussed make this a book that should be available to all physicists as well as to students of cognate subjects." P Landsberg, Nature "...[provides] an in-depth introduction to a fascinating set of inter-related topics about the nature of time. It is written at a le that is sure to be stimulating to a sophisticated theorist while still accessible to a young graduate student." Gino Segre, Physi Today "...sure to plant the seeds for further lines of investigation on this intriguing and very fundamental set of issues." Robert Wai Foundations of Physics Book Description We say that the processes going on in the world about us are asymmetric in time or display an arrow of time. Yet this manifes of our experience is particularly difficult to explain in terms of the fundamental laws of physics. This volume reconciles these profoundly conflicting facts. Product Details Paperback: 536 pages Publisher: Cambridge University Press (March 29, 1996) Language: English ISBN-10: 0521568374 ISBN-13: 978-0521568371 Product Dimensions: 9.7 x 6.9 x 1.1 inches Shipping Weight: 1.9 pounds (View shipping rates and policies) Average Customer Review: Be the first to review this item Amazon Best Sellers Rank: #2,483,140 in Books (See Top 100 in Books) Would you like to update product info, give feedback on images, or tell us about a lower price? Customer Reviews There are no customer reviews yet. Customers Who Bought This Item Also Bought The Direction of Time (Dover Books on Physics) by Hans Reichenbach (7) $10.01 Amazon.com: Physical Origins of Time Asymmetry (9780521568371): J... 
http://www.amazon.com/Physical-Origins-Time-Asymmetry-Halliwell/... Стр. 2 из6 18.03.2012 17:42 Video reviews Amazon now allows customers to upload product video reviews. Use a webcam or video camera to record and upload reviews to Amazon. Inside This Book (learn more) First Sentence: How come time? It is not enough to joke that "Time is nature's way to keep everything from happening all at once. page Key Phrases - Statistically Improbable Phrases (SIPs): (learn more) strong thermodynamic arrow, evolved sheet, statistical entropy relative , observed time asymmetries, erasure cost decoherence functional, final density matrices, predictability sieve, cosmological particle creation, conditional algorithmic information, probability sum rules, low entropy initial state, generalized quantum mechanics perturbed sheets, approximate decoherence, decohering sets, quasiclassical domain, decoherence timescale thermodynamical arrow, time asymmetry, pure initial state, particle creation processes, connected wormholes preferred observables Key Phrases - Capitalized Phrases (CAPs): ( learn more) New York, World Scientific, Cambridge University Press, Los Alamos, Nuclear Physics, Redwood City, University of California P Murray Gell-Mann, Physics Letters, Physics Today, Princeton University Press, Addison Wesley, Physical Society of Japan Bayesian Networks, Canadian Conference, Enrico Fermi, National Science Foundation, New Jersey, North Holland Claudio Teitelboim, International Symposium, Light of New Technology, Oxford University Press, Robert Griffiths New! Books on Related Topics | Concordance | Text Stats Browse Sample Pages: Front Cover | Table of Contents | First Pages | Back Cover | Surprise Me! Search Inside This Book: Citations (learn more) This book cites 100 books: Complexity, Entropy and the Physics of Information by Wojciech H. 
Zurek on 9 pages The Direction of Time (Dover Books on Physics) by Hans Reichenbach on 7 pages The Physical Basis of The Direction of Time by H. Dieter Zeh on 6 pages Physics of Time Asymmetry (Campus, 334) by P. C. W. Davies on 6 pages Quantum Cosmology and Baby Universes: Proceedings by S. Coleman on 6 pages See all 100 books this book cites 44 books cite this book: Maxwell's Demon 2: Entropy, Classical and Quantum Information, Computing by Harvey Leff on 5 pages Directions in General Relativity: Volume 2: Proceedings of the 1993 International Symposium, Maryland: Papers in Honor of Brill by B. L. Hu on page 26, page 288 , and page 289 The Physics of Communication: Proceedings of the Xxii Solvay Conference on Physics Delphi Lamia, Greece 24 - 29 Novembe by I. Antoniou on page 136 , page 216, and page 217 Amazon.com: Physical Origins of Time Asymmetry (9780521568371): J... http://www.amazon.com/Physical-Origins-Time-Asymmetry-Halliwell/... Стр. 3 из6 18.03.2012 17:42 Maxwell's Demon 2 by Harvey Leff Discusses: erasure cost conditional algorithmic information decoherence timescale The Undivided Universe by David Bohm Discusses: decoherence functional decohering sets Physical Society of Japan Complexity, Entropy and the Physics of Information by Wojciech H. Zurek Discusses: decoherence functional conditional algorithmic information approximate decoherence Science and Ultimate Reality by John D. Barrow Discusses: predictability sieve decohering sets decoherence timescale › Explore product tags Science and Ultimate Reality: Quantum Theory, Cosmology, and Complexity by John D. Barrow on page 120 400 Open Systems and Measurement in Relativistic Quantum Theory: Proceedings of the Workshop held at the Istituto Italiano pe Studi Filosofici, Napoli, April 3-4, 1998 (Lecture Notes in Physics) by Heinz-Peter Breuer on page 168, and See all 44 books citing this book Books on Related Topics (learn more) Tag this product ( What's this?) 
work_6osz6qa6nzgdvjl3xli347nwlu ----
STUDY PROTOCOL Open Access
A multilevel intervention to increase physical activity and improve healthy eating and physical literacy among young children (ages 3-5) attending early childcare centres: the Healthy Start-Départ Santé cluster randomised controlled trial study protocol
Mathieu Bélanger1,2,3, Louise Humbert4, Hassan Vatanparast5, Stéphanie Ward1,2, Nazeem Muhajarine6, Amanda Froehlich Chow4, Rachel Engler-Stringer6, Denise Donovan2, Natalie Carrier7 and Anne Leis6*
Abstract
Background: Childhood obesity is a growing concern for public health. Given that a majority of children in many countries spend approximately 30 h per week in early childcare centers, this environment represents a promising setting for implementing strategies to foster healthy behaviours for preventing and controlling childhood obesity. Healthy Start-Départ Santé was designed to promote physical activity, physical literacy, and healthy eating among preschoolers. The objectives of this study are to assess the effectiveness of the Healthy Start-Départ Santé intervention in improving physical activity levels, physical literacy, and healthy eating among preschoolers attending early childcare centers.
Methods/Design: This study follows a cluster randomized controlled trial design in which the childcare centers are randomly assigned to receive the intervention or serve as usual care controls.
The Healthy Start-Départ Santé intervention comprises interlinked components aiming to enable families and educators to integrate physical activity and healthy eating in the daily lives of young children by influencing factors at the intrapersonal, interpersonal, organizational, community, physical environment and policy levels. The intervention period, spanning 6-8 months, is preceded and followed by data collections. Participants are recruited from 61 childcare centers in two Canadian provinces, New Brunswick and Saskatchewan. Centers eligible for this study have to prepare and provide meals for lunch and have at least 20 children between the ages of 3 and 5. Centers are excluded if they have previously received a physical activity or nutrition promoting intervention. Eligible centers are stratified by province, geographical location (urban or rural) and language (English or French), then recruited and randomized using a one-to-one protocol for each stratum. Data collection is ongoing. The primary study outcomes are assessed using accelerometers (physical activity levels), the Test of Gross Motor Development-II (physical literacy), and digital photography-assisted weighted plate waste (food intake).

* Correspondence: Anne.Leis@usask.ca
6 Department of Community Health & Epidemiology, College of Medicine, University of Saskatchewan, Health Sciences E Wing, 104 Clinic Place, Saskatoon, SK S7N 2Z4, Canada
Full list of author information is available at the end of the article

© 2016 Bélanger et al. Open Access. This article is distributed under the terms of the Creative Commons Attribution 4.0 International License.
Bélanger et al. BMC Public Health (2016) 16:313, DOI 10.1186/s12889-016-2973-5

Discussion: The multifaceted approach of Healthy Start-Départ Santé positions it well to improve the physical literacy and both dietary and physical activity behaviors of children attending early childcare centers. The results of this study will be of relevance given the overwhelming prevalence of overweight and obesity in children worldwide.

Trial registration: NCT02375490 (ClinicalTrials.gov registry).

Keywords: Childhood obesity, physical activity, physical literacy, food intake, eating habits, preschool, population health intervention

Background

Childhood obesity is one of the greatest challenges facing public health in the 21st century [1–3]. Weight in early childhood closely predicts weight in later childhood [4–6], and children under six who are obese are four times more likely to become obese adults than normal-weight children [7, 8]. Further, being overweight during childhood increases the likelihood of compromised emotional, psychological and social well-being [9]. Many factors come together to produce obesity, but it is primarily due to an imbalance between energy intake and energy expenditure [10]. Eating habits are established early in childhood and usually persist for many years [11]. Data show that only 29 % of Canadian preschool-aged children meet recommendations for fruit and vegetable intake and 23 % for grain products [12].
Further, 79 % of 4-5 year olds consume food of little nutritional value (e.g., chips, French fries, candy, chocolate, soft drinks, cake and cookies) at least once a week [12], and other studies have demonstrated that empty calories make up as much as 40 % of their total caloric intake [13, 14]. Similarly, a recent review found that children in early childcare centers (ECC) have low levels of physical activity, and that they are sedentary for much of the time [15]. It was estimated that children in childcare settings accumulate an average of only 7 to 13 min of moderate to vigorous physical activity during the course of a 7 h day [16]. Moreover, recent data show that many of them have poor physical literacy [17–21]. These data are troubling given that sedentary behaviour and physical activity levels track over time [22–26].

Interventions designed to improve the physical activity and nutrition of preschoolers are needed to prevent and control childhood obesity [1, 27–29]. Since more than half of Canadian preschoolers spend an average of 29 h a week in ECC [30], this environment is a prime setting for implementing an array of strategies to foster healthy behaviours [31–36]. However, two systematic reviews on obesity prevention in children under 5 years, one on interventions [35] and the other on preventive policies, practice and interventions in ECC [36], reported limited success in improving physical activity levels, dietary behaviour, or body composition. The authors suggest that the least successful interventions focused only on one or two outcomes, while the most successful interventions positively influenced factors such as knowledge, abilities and competence. According to the authors, this means that interventions should be grounded in comprehensive behaviour change models, be multifaceted and sustained over time [35].
Few interventions have focused on physical activity and eating behaviours simultaneously; therefore future interventions should target both behaviours [36]. It can be concluded that interventions promoting healthy weights in children should encompass a broad spectrum of concerted actions and be based on the best available knowledge from research and practice [16, 37]. Healthy Start-Départ Santé, a multilevel intervention that targets physical activity, physical literacy, and healthy eating in preschoolers, was developed following these principles. The aim of the current study is to lead a comprehensive evaluation of the Healthy Start-Départ Santé intervention using an experimental research design.

The intervention

Healthy Start-Départ Santé uses a population health approach to promoting physical activity and healthy eating among English- and French-speaking preschoolers between 3 and 5 years old who attend ECCs (e.g., licenced childcares or preschools). The population health approach posits that to positively influence population-level health outcomes, interventions must take into account the wide range of health determinants [38], recognise the importance and complexity of potential interplay among these determinants, and reduce social and material inequities [39]. Further, they must rely on the best evidence available, stimulate intersectoral collaborations, and provide opportunities for all potential stakeholders to be meaningfully engaged from the onset to their deployment [39]. Several models based on the population health approach have been developed to steer interventions [40–42] and, similar to ecological models [43], they call for interventions to include a series of concerted actions capable of targeting all levels of influence such as the intrapersonal (biological and psychological), interpersonal (social and cultural), organizational, community, physical environment and political levels.
Healthy Start-Départ Santé includes strategies for each level of influence. Supported by local funding, and then by Phase I of the Public Health Agency of Canada Innovation Strategy (2008-2012), Healthy Start-Départ Santé was developed by researchers, community groups, educators, parents, and government representatives, pilot tested, and adapted to diverse contexts. In particular, Healthy Start-Départ Santé was linguistically and culturally adapted to cater to both official linguistic groups in Canada, which is important since it has been documented that to be effective, interventions must be tailored and implemented using several different socio-linguistic perspectives as befitting the target population [44].

The mission of Healthy Start-Départ Santé is to encourage and enable families and educators to integrate physical activity and healthy eating in the daily lives of young children. Specifically, Healthy Start-Départ Santé attempts to influence factors at the intrapersonal (e.g., eating and physical activity behaviour of children), interpersonal (e.g., educators and parents), organizational (e.g., ECC), community (e.g., community organization involvement), physical environment and political levels (e.g., built environment and policies). The intervention is composed of six interlinked components which are presented in more detail in Fig. 1.
These components include: 1) intersectoral partnerships conducive to participatory action that leads to promoting healthy weights in communities and ECC; 2) the Healthy Start-Départ Santé implementation manual for educators on how to integrate healthy eating and physical activity in their centre; 3) customized training, role modelling and monitoring of Healthy Start-Départ Santé in ECC; 4) the evidence-based resource, LEAP-GRANDIR [16], which contains material for both families and educators; 5) supplementary resources from governmental partners; and 6) a knowledge development and exchange (KDE) and communication strategy involving social media and web-resources to raise awareness and mobilize grassroots organizations and communities.

Healthy Start-Départ Santé is delivered over 6-8 months and includes a partnership agreement, an initial training session which orients ECC staff to the concepts, the implementation manual and the use of resources, on-going support and monitoring over time, one tailored booster session, and a family day to celebrate the ECC's success at the end of the intervention.

Fig. 1 Healthy Start components

Study objectives

It is hypothesized that, in comparison to usual practice, exposure to the Healthy Start-Départ Santé intervention will lead to improved opportunities for physical activity and healthy eating, and hence to increased physical activity and healthier eating among children. The specific study objectives are to assess whether:

1. The Healthy Start-Départ Santé intervention leads to increases in opportunities for physical activity and healthy eating in ECC through improved knowledge, attitudes and self-efficacy of the educators and directors;

2. The Healthy Start-Départ Santé intervention leads to increases in physical activity levels and healthy eating behaviors among preschoolers, in turn promoting healthy weights;

3.
The Healthy Start-Départ Santé intervention leads to improvements in physical literacy among preschoolers.

Methods/Design

Study design

The Healthy Start-Départ Santé evaluation follows a delayed cluster randomized controlled trial design in which the ECCs are randomly assigned to receive the intervention or serve as usual practice controls. The intervention spans a period of 6–8 months. This period is both preceded and followed by data collections (Fig. 2). Control sites are given the option of receiving the intervention once their participation in the evaluation has been completed. Data collection takes place over three years, such that approximately a third of the ECCs recruited will complete the study each year. This study protocol received ethics approval from Health Canada, the University of Saskatchewan, and the Université de Sherbrooke.

Target population and sampling

Participants for the Healthy Start-Départ Santé project are recruited from 61 French and English ECCs in two Canadian provinces: New Brunswick and Saskatchewan. Province-specific recruitment and data collection are carried out through a study coordinating centre established in each province. Both coordinating centres follow the same protocols. Specifically, a registry of all licenced ECCs in both provinces is obtained. From this list we exclude any ECC which has already received a physical activity or nutrition promoting intervention in the past. This avoids underestimating the effect of the Healthy Start-Départ Santé intervention. To be included in the study, an ECC has to prepare and provide meals for lunch. This is required for assessing the quality of foods being served and for measuring nutritional intake. For feasibility and efficiency reasons, the number of children attending the ECC also serves as an exclusion criterion; centers with fewer than 20 children between the ages of 3 and 5 are not considered.
Information on these criteria is obtained from governmental partners or through a brief telephone-administered questionnaire. In order to obtain a final sample of at least 735 children, we estimated that a minimum of 15 children aged 3 to 5 would need to be recruited in each of 62 ECCs. A minimum of 20 children per centre was established to account for a 60 % participation rate (recruiting 60 % of eligible children in 62 ECCs would give a total of approximately 744 participants). Despite planned measures to minimize losses to follow-up, this issue is inevitable in a longitudinal study of this nature. Based on pilot work, we anticipate an attrition rate of approximately 5 %, which will provide adequate statistical power to conduct the planned analyses. The target of 735 children was based on estimates that 700 (735 - 5 %) participants divided into two groups of 350 participants will provide 80 % power to detect a 10 % between-group difference in outcome, considering a within-group standard deviation of 40 %, α = 0.05, an intra-class correlation of 0.02, and collinearity between the intervention and other explanatory variables expressed by an estimated multiple correlation of 0.15.

Fig. 2 Data collection timeline for the Healthy Start study

For recruiting purposes, the ECCs are stratified according to province, geographical location (urban and rural) [45] and their respective school district (English or French) (Fig. 3). This stratification process ensures relatively homogenous strata for comparisons, since curriculum can differ between ECCs under different district authorities and the ECC environment can be influenced by its geographical location [46, 47]. Once stratification is completed, eligible ECCs are randomly selected through a sequence generated using Stata SE statistical software (StataCorp, College Station, TX) according to the above-mentioned strata.
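The sample-size reasoning above can be checked with a standard two-sample calculation inflated by the design effect for clustering and a variance-inflation factor for covariate collinearity. The sketch below is a hypothetical reconstruction (the protocol does not state its exact formula); the function name is ours, the parameters are those quoted in the text, and 15 children per cluster is assumed from the per-centre recruitment minimum.

```python
from statistics import NormalDist


def cluster_rct_n_per_arm(delta=10.0, sd=40.0, alpha=0.05, power=0.80,
                          cluster_size=15, icc=0.02, r_covariates=0.15):
    """Approximate per-arm sample size for a two-arm cluster RCT.

    Hypothetical re-derivation of the protocol's figures using a
    two-sided z-test, a design effect for clustering, and a
    variance-inflation factor for correlated covariates.
    """
    z = NormalDist()
    z_a = z.inv_cdf(1 - alpha / 2)                       # two-sided alpha
    z_b = z.inv_cdf(power)                               # target power
    n_simple = 2 * (z_a + z_b) ** 2 * (sd / delta) ** 2  # per arm, no clustering
    deff = 1 + (cluster_size - 1) * icc                  # design effect
    vif = 1 / (1 - r_covariates ** 2)                    # covariate inflation
    return n_simple * deff * vif


print(round(cluster_rct_n_per_arm()))  # prints 329
```

Under these assumptions the requirement comes out near 330 children per arm, consistent with the 350 per arm (700 analysable of 735 recruited) targeted in the protocol.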
Given the large area Saskatchewan represents (652 000 km2), it was decided to carry out the study with selected ECCs in the central region in year 1, and in the South and central-North in years 2 and 3. Selected ECCs are contacted, provided with information, and invited to participate in the project. Subsequently, ECC directors are telephoned to answer their questions and to confirm their participation while securing the parents' board support. The recruited ECCs are then sent consent forms. Centres which decline participation or with fewer than five children recruited are replaced by other randomly selected ECCs from the same stratum. Once an ECC provides final consent, it is randomly allocated to the intervention or usual practice (control) arm. A one-to-one randomization protocol is applied to each stratum. Invitation packages describing the study and seeking parental consent are sent to parents of all age-eligible children. ECCs return completed parental consent forms to the provincial coordinating centres. Following recruitment of one ECC and its children in the usual practice arm, it was found that it had the same director and shared staff with a nearby ECC that had been recruited in the intervention arm. Given that the risk of contamination was regarded as certain in this situation, the two ECCs are considered as one intervention-arm ECC, which explains the final sample of 61 ECCs enrolled in the study.

Main outcome measures

To assess the objectives specified earlier, each of the measures below is administered to participants in the intervention group before and after the intervention period. The measures are administered at the same times to participants in the control group.

Physical activity level

Children's physical activity levels are obtained using an Actical accelerometer worn during attendance at the ECC for five consecutive weekdays.
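Accelerometer output of this kind is typically summarised by comparing each recorded epoch against an intensity cut-point and totalling the time spent at or above it. The sketch below shows the general bookkeeping only: the function name and the cut-point value are hypothetical, and the study's actual cleaning and scoring follow a separately published procedure rather than this simplified rule.

```python
def mvpa_minutes(epoch_counts, cutpoint=100, epoch_seconds=15):
    """Minutes of moderate-to-vigorous physical activity (MVPA) from
    accelerometer epoch counts.

    `epoch_counts` holds one activity count per epoch (15 s here);
    `cutpoint` is a hypothetical counts-per-epoch threshold, not the
    study's validated value.
    """
    active_epochs = sum(1 for c in epoch_counts if c >= cutpoint)
    return active_epochs * epoch_seconds / 60.0


# 12 epochs (3 minutes of wear); 8 epochs reach the threshold
print(mvpa_minutes([120, 80, 200, 150, 90, 300, 110, 95, 400, 130, 20, 105]))  # prints 2.0
```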
Accelerometers represent an objective and valid method of measuring physical activity in preschoolers [48–50]. The Actical, an omnidirectional accelerometer, was demonstrated to have higher intra- and inter-instrument reliability than other accelerometers [51]. It correlates at r = 0.89 with directly measured oxygen consumption [50]. For this study, accelerometer data are recorded in 15 s intervals. Data will be cleaned and managed using the procedure recommended by Statistics Canada [52, 53] through a series of publicly available SAS codes adapted for this type of study [54].

Educators' physical activity levels are measured with pedometers (New Lifestyles SW-200 DIGI-WALKER). The SW-200 pedometers, commonly used in applied research, have consistently been shown to be among the most accurate at counting steps in controlled laboratory settings [55, 56], and have demonstrated acceptable reliability in real-life settings [57]. Educators are given a log in which to record the number of steps taken each day during work at the ECC for the same five consecutive weekdays.

Physical literacy

Physical literacy and gross motor skills of children are measured using the Test of Gross Motor Development (TGMD-II). The TGMD-II is a standardized test designed to assess the gross motor functioning of children aged 3 through 10 years [58]. It evaluates two subtests of skills: locomotor (run, hop, gallop, leap, horizontal jump and slide) and object control (ball skills such as striking a stationary ball, stationary dribble, catch, kick, overhand throw, and underhand roll). Of the 12 skills included in the TGMD-II, three (slide, striking a stationary ball and stationary dribble) are not assessed as they were found particularly difficult and rarely performed adequately by children under six years in pilot work. This was decided in consultation with the developers of the TGMD-II. Each child is videotaped while completing the skills.
Videos are then reviewed by trained researchers who score two trials for each subtest. Moderate to strong correlations were reported between the TGMD-II and the Comprehensive Scales of Student Abilities, and an inter-rater correlation coefficient of 0.98 indicates good reliability [58].

Quality of food served to children

Menus and recipes for mixed dishes are collected from each ECC. From these, the types and amounts of meals and snacks served are recorded. Information on food provided is standardized in terms of category and food group as well as number and portion size served.

Food consumed by children

Food intake in ECCs is assessed using the digital photography-weighted plate waste method on two consecutive days. Food consumed is measured by weighing each food served to participants with a manually calibrated digital scale (inStyle, 16 × 22 × 1.5 cm, Home Hardware Stores Limited, Canada, 2009) and simultaneously taking a digital picture of the food, and the plate ware on which it was served, with an "ASUS Memo Pad HD7" (ME173X model, Android US, China, 2013). Any leftovers of those foods are then weighed and pictured again at the end of the meal. This is done for every serving. Pictures are taken at a distance of 62 cm and at a 45° angle [59]. This method has been extensively used in studies of school-aged children [60–62] and is considered the most precise measurement of dietary intake [63, 64]. Digital photography has recently been used to characterize meals served and consumed at preschools and it is correlated at 0.92 (p < 0.0001) with the weighing method [59].

Fig. 3 Flow diagram of selection and randomisation process for the Healthy Start study.
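The weigh, photograph, and reweigh procedure just described reduces to simple per-serving arithmetic: the amount eaten is the served weight minus the leftover weight, summed over the servings a child received. A minimal sketch follows; the function name and the example weights are hypothetical, and the study's nutrient breakdowns are then derived with nutritional-analysis software, which is not reproduced here.

```python
def consumed_grams(served_g, leftover_g):
    """Amount eaten per serving: weight served minus weight left over."""
    if leftover_g > served_g:
        # A leftover heavier than the serving signals a weighing error.
        raise ValueError("leftover cannot exceed amount served")
    return served_g - leftover_g


# Total intake across the servings one child received at a meal,
# as (served, leftover) pairs in grams (illustrative values only).
meal = [(150.0, 40.0), (85.0, 0.0), (200.0, 125.0)]
total = sum(consumed_grams(s, l) for s, l in meal)
print(total)  # prints 270.0
```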
Information on the amount of calories, various food groups, macronutrients, vitamins and minerals consumed is derived by subtracting the weight of the leftover of a given food from the weight served. For foods containing multiple food groups, trained research assistants use the pictures to estimate the proportion of each food group consumed. This step is assisted by the Food Processor nutritional analysis software (Food Processor, Esha version 10.10.00).

Eating habits and nutritional risks

The Nutrition Screening Tool for Every Preschooler (NutriSTEP), designed and validated in both English and French for 3–5 year-old Canadian children of different ethnicities, is used to assess eating habits and nutrition problems in children [65]. This screening tool consists of 17 questions and takes parents or primary caregivers about five minutes to complete. Assessment of nutritional risk using the NutriSTEP has been validated against registered dietitians' nutrition assessments (r = 0.49, p = 0.01) [66]. An intraclass correlation of 0.89 (p < 0.001) indicates good test-retest reliability, and the majority of items have kappa agreement values ranging from adequate (κ > 0.5) to excellent (κ > 0.75) [66]. NutriSTEP includes a validated semi-quantitative food frequency questionnaire to assess the usual intake of fast foods as well as the four main food groups according to Canada's Food Guide to Healthy Eating: fruits and vegetables, milk and alternatives, meat and alternatives, and grain products. This enables us to estimate the contribution of meals consumed in ECCs to the total intake of the four main food groups.

Early childcare centre environment

Dietary and physical activity practices and policies in ECC are measured using items from the Nutrition and Physical Activity Self-Assessment of Child Care (NAP SACC) [67–69]. The 55 items of the NAP SACC retained for this study collect information on 5 components of nutrition (25 items assessing: feeding environment, feeding practices, menus and variety, education and professional development, policy) and 5 components of physical activity (30 items assessing: time provided, indoor and outdoor play environment, educator practices, education and professional development, policy) [70]. Each item is attributed a score between 0 and 3 points. The NAP SACC is filled out independently by two research assistants in each ECC over the course of 2 consecutive days. Differences in item scores are discussed between the research assistants, who then come to a consensus. The NAP SACC showed excellent inter-rater reliability, with 65 % of items having 100 % agreement and all other items having agreement scores exceeding 71 %.

Knowledge, attitudes, practices and self-efficacy of educators and directors are measured through a self-administered questionnaire. The educator questionnaire collects information on educator practices related to nutrition (12 items, e.g., role modelling healthy food choices) and physical activity (6 items, e.g., incorporating physical activity into classroom routines and transitions) [68], on their self-perceived knowledge of fundamental movement skills (4 items, e.g., level of knowledge of activities that incorporate motor skills in the daily activities of preschoolers), on their self-efficacy (e.g., rate your level of confidence in …) to promote physical activity (1 item), and to teach fundamental movement skills (1 item) [71, 72]. Educators also provide information on their physical activity level through the International Physical Activity Questionnaire-Short Form [73], food choices based on an adapted version of the NutriSTEP, and sociodemographic information (including their age group, level of education, and the length of time that they have been working as an educator).
Complementary measures or co-variables

Beyond the main measures, which relate directly to the objectives, the following measures are also administered at the beginning and the end of the study to account for potentially confounding factors in the analyses and to assess secondary outcomes.

Sociodemographic information

Parents provide sociodemographic information through a self-reported questionnaire. Questions in the questionnaire are drawn from Statistics Canada's Canadian Community Health Survey [74]. The questionnaire provides information on parents' education levels (highest level of education completed by each parent), family structure (number of siblings and family status) and socioeconomic level (estimate of total family revenues, before deductions, in the last 12 months). Parents also report their physical activity levels and food choices based on the same questions as the educator questionnaire. Parents' perceptions of their child's physical activity abilities are assessed through 4 items of the Perceptions of Physical Activity Importance and their Children's Ability Questionnaire (PPPAICAQ) [75]. The NutriSTEP questionnaire is also completed by parents.

Body composition

Height, weight and waist circumference of children are measured using a standardized protocol [76]. Two measures of height (SECA Stadiometer – Model 213) to the nearest 0.1 cm, weight (SECA Scale – Model 761) to the nearest 0.2 kg, and waist circumference to the nearest 0.1 cm are obtained for each participant. If discrepancies greater than 0.5 cm for height and waist circumference or 0.2 kg for weight are observed between the two measures, a third measure is obtained. The average of the two closest measures is recorded.
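The duplicate-measurement rule just described (average two readings if they agree within tolerance, otherwise take a third and average the two closest) can be sketched as follows. The function name and example values are hypothetical; the tolerance would be 0.5 for height and waist circumference in cm, or 0.2 for weight in kg, as stated above.

```python
def record_measure(m1, m2, m3=None, tolerance=0.5):
    """Apply the protocol's duplicate-measurement rule.

    If the first two readings agree within `tolerance`, record their
    average; otherwise a third reading is required and the two closest
    of the three are averaged.
    """
    if abs(m1 - m2) <= tolerance:
        return (m1 + m2) / 2
    if m3 is None:
        raise ValueError("discrepancy exceeds tolerance; a third measure is needed")
    # Pick the pair of readings with the smallest gap and average it.
    pairs = [(abs(a - b), a, b) for a, b in ((m1, m2), (m1, m3), (m2, m3))]
    _, a, b = min(pairs)
    return (a + b) / 2


print(round(record_measure(102.4, 102.6), 2))         # prints 102.5
print(round(record_measure(102.4, 103.3, 102.5), 2))  # prints 102.45
```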
Body mass index (BMI), calculated as the ratio of weight (kg) to squared height (m²), will be used to determine if children are overweight or obese by following the International Obesity Task Force age-adjusted thresholds as recommended [77].

Training of research assistants

Before every data collection period, research assistants are given a full day of training on data collection and entry to ensure that all procedures are standardized across sites. Research assistants are trained specifically on how to collect anthropometric data, how to conduct plate waste analysis, and how to complete the environmental scan. They also practice all fundamental movement patterns so that they can demonstrate them correctly, and review how to properly videotape children's movements. Databases are also presented, and research assistants are trained on how to enter all data collected. Provincial study coordinators oversee all aspects of data collection and data entry to ensure protocols are applied as intended.

Data analysis

As a first step, each variable's distribution will be examined in order to manage obvious errors such as outliers or questionable input. If necessary, appropriate transformations will be applied. If there are systematic missing variables, these will be filled using multiple imputation methods. After the initial data cleaning and processing, descriptive analyses will be used to characterise the two groups of participants and their context. Analyses of the main study objectives will adhere to the intention-to-treat principle. Multiple linear regression will be used to determine if, in comparison to participants in the usual care group, participants exposed to Healthy Start-Départ Santé had greater increases in ECC-provided opportunities for physical activity, in ECC-provided opportunities for healthy eating, in physical activity levels, and in healthy eating behaviours, and if they had greater improvements in physical literacy.
A multilevel approach will be used in each model to account for clustering that could result from children attending the same ECC. Potentially confounding variables (i.e., age, parental education, household income, etc.) will be included in the models.

Limitations

Educator turnover is common in ECC, meaning that new educators in centres offering the Healthy Start-Départ Santé intervention may not receive the intended training. Two elements of the intervention nevertheless reduce this possibility. First, one booster session per centre is offered as part of the intervention to reinforce the training of educators who have been there since the beginning and to ensure that others are trained to implement Healthy Start-Départ Santé. Second, Healthy Start-Départ Santé is designed to be integrated into centres' everyday functioning so that all employees should be exposed to it naturally. Also, children's physical activity level and dietary intake can vary from one day to the next. This could cause the data to be non-representative of typical behaviour. Data collection over five consecutive days for physical activity and two consecutive days for dietary behaviour will help minimise this risk.

Trial status

Participant recruitment began in autumn 2013. Intervention delivery and data collection began at the same time and are ongoing. This trial is registered through the ClinicalTrials.gov registry (Clinical Trials ID = NCT02375490; Protocol ID = 6282-15-2010/3381056-RSFS).

Discussion

Framed within the population health approach and based on six interlinked components, the Healthy Start-Départ Santé study is positioned to improve the physical literacy and both dietary and physical activity behaviours of children attending ECC. The results of this study will be of particular relevance given the overwhelming prevalence of overweight and obesity in children worldwide, especially in North America.
The current study has numerous strengths, including the use of a randomized controlled trial design, longitudinal follow-up, objective measures of the main outcomes, and a population-based sample with sufficient subjects. In addition to a rigorous scientific approach, Healthy Start-Départ Santé is culturally adapted for the two official linguistic groups in Canada and targets multiple levels of influence. The fact that it is based on strong multi-sectorial collaborations with a variety of stakeholders improves its feasibility and the potential of its wide-spread implementation post-evaluation. To date, few intervention studies have applied multifaceted methods to obesity prevention in preschoolers. Healthy Start-Départ Santé is among the first to capitalize on a combination of multiple partner involvement and interventions on multiple socioecological levels with over 6 months of follow-up in order to affect two behaviours which are fundamental causes of obesity.

Healthy Start-Départ Santé is also well set up to enable epidemiological investigations. For example, we will use Healthy Start-Départ Santé data to quantify the relative importance of different components of the population health model in explaining different levels of involvement in the physical activity and the various dietary behaviours of preschoolers. Similarly, the data will enable investigating how different determinants may interact to influence behaviour change in young children, which to date has not been explored extensively. This combination of effectiveness testing and epidemiological analyses will allow gaining insights into processes and events that lead to behavioural changes in preschoolers. All of this will guide the refinement of the current intervention and the development of new ones in order to increase physical activity and improve dietary behaviour and physical literacy among young children.
Abbreviations
ECC: Early childcare centers; KDE: Knowledge development and exchange; NAP SACC: Nutrition and physical activity self-assessment for child care; NutriSTEP: Nutrition screening tool for every preschooler; TGMD-II: Test of gross motor development.

Competing interests
The authors declare that they have no competing interests.

Authors' contributions
Development of the intervention was led by AL, LH, and NM, and development of the evaluation protocol was led by AL, MB, LH, HV, and NM. AL acts as the study's principal investigator. AL and MB each oversee the study in their respective province. Physical activity components are led by MB and NM, nutrition components are led by HV, SW, RES, and NC, and physical literacy components are led by LH and AFC. DD provides guidance in relation to the public health approach. MB, SW, and AL wrote the first draft of this manuscript. All of the authors reviewed the manuscript critically for important intellectual content and approved the final version being submitted for publication.

Acknowledgments
The authors are grateful for the important contributions of Gabrielle Lepage-Lavoie, project manager, and Roger Gautier, director of the Réseau Santé en Français de la Saskatchewan, for leading the implementation of Healthy Start-Départ Santé. We also thank the directors of ECC, educators, parents, and children for their valued collaboration. This study represents Phase II of the Healthy Start-Départ Santé project. It is financially supported by a grant from the Public Health Agency of Canada (# 6282-15-2010/3381056-RSFS), a research grant from the Consortium national de formation en santé (# 2014-CFMF-01), and a grant from the Heart and Stroke Foundation of Canada (# 2015-PLNI).
Amanda Froehlich Chow was funded through a postdoctoral fellowship from the Saskatchewan Health Research Foundation, and Stéphanie Ward was funded through a Canadian Institutes of Health Research Charles Best Canada Graduate Scholarship Doctoral Award and a Gérard-Eugène-Plante Doctoral Scholarship from the Faculty of Medicine and Health Sciences at the Université de Sherbrooke.

Author details
1Department of Family Medicine, Université de Sherbrooke, 18 avenue Antonine-Maillet, Moncton, NB E1A 3E9, Canada. 2Centre de formation médicale du Nouveau-Brunswick, 18 avenue Antonine-Maillet, Moncton, NB E1A 3E9, Canada. 3Vitalité Health Network, 330 Université Avenue, Moncton, NB E1C 2Z3, Canada. 4College of Kinesiology, University of Saskatchewan, 7 Campus Drive, Saskatoon, SK S7N 5B2, Canada. 5College of Pharmacy and Nutrition / School of Public Health, University of Saskatchewan, 104 Clinic Place, Saskatoon, SK S7N 0Z2, Canada. 6Department of Community Health & Epidemiology, College of Medicine, University of Saskatchewan, Health Sciences E Wing, 104 Clinic Place, Saskatoon, SK S7N 2Z4, Canada. 7École des sciences des aliments, de nutrition et d'études familiales, Faculté des sciences de la santé et des services communautaires, Université de Moncton, 18 avenue Antonine-Maillet, Moncton, NB E1A 3E9, Canada.

Received: 4 February 2016 Accepted: 19 March 2016

ORIGINAL RESEARCH ARTICLE Open Access
The surgical plane for lingual tonsillectomy: an anatomic study

Eugene L. Son1*, Michael P. Underbrink1, Suimin Qiu2 and Vicente A.
Resto1

Abstract
Background: The presence of a plane between the lingual tonsils and the underlying soft tissue has not been confirmed. The objective of this study is to ascertain the presence and characteristics of this plane for surgical use.
Methods: Five cadaver heads were obtained for dissection of the lingual tonsils. Six permanent sections of previous tongue base biopsies were reviewed. Robot-assisted lingual tonsillectomy was performed using the dissection technique from the cadaver dissection.
Results: In each of the 5 cadavers, an avascular plane was revealed deep to the lingual tonsils. Microscopic review of the tongue base biopsies revealed a clear demarcation between the lingual tonsils and the underlying minor salivary glands and muscle tissue. This area was relatively avascular. Using the technique described above, a lingual tonsillectomy using TORS was performed, with findings similar to those of the cadaver dissections.
Conclusions: A surgical plane for lingual tonsillectomy exists and may prove to have a role in lingual tonsillectomy with TORS.
Keywords: Lingual tonsil, Surgical plane, Transoral robotic surgery, Lingual tonsillectomy

Background
The base of tongue was once a difficult area to operate on because of problems with exposure. With innovations in endoscope technology, transoral laser microsurgery, and transoral robotic surgery (TORS) with the da Vinci Surgical System manufactured by Intuitive Surgical, Inc., access to the tongue base has become more feasible. In the base of tongue, the lingual tonsils are an important target for surgery. There are two main indications for lingual tonsillectomy. The first indication is lingual tonsil hypertrophy (LTH), which can contribute to obstructive sleep apnea (OSA) in pediatric and adult patients. LTH, among other functional or fixed areas of obstruction in the upper aerodigestive tract, is a target for sleep surgery.
In pediatric patients, LTH can be primary or secondary following tonsillectomy and adenoidectomy [1]. The second indication is squamous cell carcinoma of unknown primary (SCCUP) in the head and neck [2]. There has been an increase in the incidence of human papilloma virus (HPV)-related oropharyngeal squamous cell carcinoma [3]. In a large number of SCCUP cases with negative clinical and radiographic evidence of a primary tumor, the primary is most commonly found in the palatine and lingual tonsils. More recently, performing palatine tonsillectomy and lingual tonsillectomy has identified the primary tumor in 75–90 % of cases, allowing patients to receive less radiation and thereby fewer side effects [3, 4]. Lingual tonsillectomy techniques include cold dissection, electrocautery, coblation, carbon dioxide (CO2) laser, and microdebrider [5–9]. Many of these methods ablate or disrupt the microarchitecture of the lingual tonsils. In SCCUP, carpet resection of the lingual tonsils requires the desired tissue to be left intact for diagnosis under a microscope by a pathologist. Lingual tonsillectomy techniques have been described, but no evidence of a surgical plane of dissection is available. Some have described the presence of a potential capsule, and some have stated there is none. Joseph et al. described a layer of fibrous tissue that sometimes delineates this tissue

* Correspondence: eson85@gmail.com
1Department of Otolaryngology - Head and Neck Surgery, University of Texas Medical Branch, 301 University Boulevard, Galveston, TX 77555, USA
Full list of author information is available at the end of the article
© 2016 Son et al.
Son et al. Journal of Otolaryngology - Head and Neck Surgery (2016) 45:22. DOI 10.1186/s40463-016-0137-3. Open Access (Creative Commons Attribution 4.0 International License).

from the tongue but no definite capsule [10]. Multiple authors in the early 20th century described a potential plane as "a basement membrane analogous to the capsule of the faucial tonsils," but not as delineated or developed as that tissue [11–14]. Dundar et al. describe the lingual tonsil as having no capsule [5]. Lin and Koltai, in their case series of coblation lingual tonsillectomy in 26 pediatric patients, describe no clear demarcation between the lingual tonsils and the tongue musculature, although the change in tissue quantities becomes readily apparent [1]. We believe that deep to the lingual tonsils, a relatively avascular plane made up of connective tissue exists. This potential surgical plane may be utilized in lingual tonsillectomy for the aforementioned indications.

Methods
This study was approved by the institutional review board of the University of Texas Medical Branch. Five fresh cadaveric heads were procured for the purpose of this study. The cadavers ranged in age from 86 to 101 years and included 2 females and 3 males.
Cadaver dissection
The tongues from the 5 cadaver heads were removed. Five complete dissections of the lingual tonsils were performed with scissors and forceps. First, a midline incision is made from the foramen cecum to the vallecula. An incision is also made immediately posterior to the circumvallate papillae. The anterior medial edge of one side of the lingual tonsils was grasped with forceps, and dissection with Iris scissors was performed in a lateral and posterior direction until the lingual tonsils were removed en bloc on each side. Grossly, the muscle was identified as darker red in color with striations present. The dissection was carried to the border of the lateral pharyngeal wall and posteriorly to the edge of the vallecula. Care was taken to remove the lymphoid tissue while keeping the underlying muscle intact. Digital photography was performed during the dissections.

Histological sectioning
Six archived permanent sections of base of tongue biopsy specimens from the Department of Pathology at the University of Texas Medical Branch were reviewed. Three cases with no malignancy detected and three cases with dysplasia were chosen for digital photography under magnification. These specimens were from base of tongue biopsies of patients with SCCUP and had been fixed in formalin, processed for paraffin embedding, and stained with hematoxylin and eosin. The specimens were reviewed for histological characterization of the lingual tonsils.

Robot-assisted lingual tonsillectomy
A 52-year-old male with SCCUP in the left neck underwent robot-assisted lingual tonsillectomy as well as bilateral palatine tonsillectomy. A Feyh-Kastenbauer mouth gag was used for access to the tongue base. The da Vinci Surgical System was brought into the surgical field with visualization of the lingual tonsils. The 30-degree angled endoscope was placed in the midline, and the two working robotic arms were placed in the appropriate positions.
A Maryland dissector was used on the right arm and a monopolar cautery hook on the left arm for the left-sided lingual tonsillectomy. The robot was then used to assist in taking the lingual tonsil tissue down to the muscle layer, starting medially and dissecting laterally. A midline incision is made from the foramen cecum to a point anterior to the median glossoepiglottic fold. An incision is also made immediately posterior to the circumvallate papillae. The anterior medial edge of one side of the lingual tonsils was grasped, and dissection was performed in a lateral and posterior direction until the lingual tonsils were removed as one specimen for each side. Care was taken to remove the lymphoid tissue while keeping the underlying musculature intact. The specimens were then sent to the pathology department for histological review.

Results and discussion
Cadaver dissection
In all five cadaver dissections, the plane between the lingual tonsils and the underlying musculature was identified and used for excision of the lingual tonsils. This plane of dissection easily separated the lingual tonsils from the underlying tongue musculature. No grossly visible vessels or nerves were encountered during the dissections (Fig. 1).

Fig. 1 Gross dissection of the lingual tonsils. The left image shows the lingual tonsils before dissection; the right image shows the lingual tonsils dissected and reflected posteriorly.

Histological sectioning
In these biopsies of the lingual tonsils, there was a relatively less vascular or avascular area between the lingual tonsil and the underlying minor salivary glands and muscle tissue in all cases (Figs. 2 and 3). Between the lingual tonsil and the submucosal muscular tissue at the base of the tongue, a distinct space or line is demonstrated in both benign (Fig. 2) and premalignant (Fig. 3) cases.
The space indicated by the drawn line is less vascular or avascular; along it, the squamous mucosa with the lingual tonsil is distinct from the minor salivary glands and the muscle. The minor salivary glands and muscle are intimately admixed, especially in the superficial portion of the muscle. The presence of submucosal edema (Fig. 2a) may exaggerate the space, and the presence of dysplasia associated with a peritumoral lymphocytic infiltrate may obscure the space between the lingual tonsil and the underlying muscle, as shown in Fig. 3c. However, in all conditions, dissection between the two layers is feasible based on this histological study.

Robot-assisted lingual tonsillectomy
The procedure was performed uneventfully. No grossly visible vessels or nerves were encountered during the dissection (Fig. 4). The patient had a normal post-operative course without any complications. Microscopic examination of the specimen showed microscopic foci of squamous cell carcinoma which a random biopsy of the base of tongue would have missed.

Fig. 2 Permanent sections of base of tongue biopsies with benign pathology. The blue line demarcates the surgical plane. Lingual tonsils (LT), minor salivary gland tissue (MS), and muscle (MU) are labeled. Submucosal edema exaggerates the plane in (a) compared with no edema in (b) (hypervascular) and (c).

Fig. 3 Permanent sections of base of tongue biopsies with premalignant pathology. The blue line demarcates the surgical plane. Lingual tonsils (LT), minor salivary gland tissue (MS), and muscle (MU) are labeled. The plane is preserved in the presence of dysplasia in (a) and (b) (most defined plane). However, a peritumoral lymphocytic infiltrate obscures the space between the lingual tonsil and the underlying muscle in (c).

The palatine tonsils have a well-described plane separating them from the surrounding oropharyngeal musculature. There
has been only speculation about the existence of a surgical plane for the lingual tonsils, which has been described only anecdotally in the current literature. In our study, we show a relatively avascular plane deep to the lingual tonsils in cadaver dissection, in histologic sections, and in vivo. In the five dissections performed, there was a space deep to the tonsils that offered less resistance to dissection. In Fig. 1, there is an uneven layer deep to this plane after dissection, representing the minor salivary glands associated with the muscle layer, as seen in the histological sections. In this plane, no grossly visible neurovascular structures were appreciated in any specimen.

This plane may make cold dissection of the tonsils feasible under direct visualization or with robot assistance, with potentially decreased use of cautery, minimal bleeding, and decreased post-operative morbidity. This plane may also be utilized instead of ablative techniques, for example in pediatric patients with obstructive sleep apnea, to prevent secondary hypertrophy of the lingual tonsils after tonsillectomy and adenoidectomy.

Decreased use of cautery for dissection and hemostasis may provide better tissue diagnosis when lingual tonsillectomy is performed to elucidate the diagnosis of an unknown head and neck primary tumor. In the case described, the diagnosis was made using a carpet resection of the lingual tonsils with our technique. Random biopsies of the base of tongue may miss a diagnosis, altering the consequent treatment.

Conclusions
The lingual tonsils have recently become more accessible to surgical intervention for diagnostic and treatment purposes. There is an avascular plane for dissection deep to the lingual tonsils and superficial to the underlying minor salivary glands and lingual musculature.
This plane may be utilized in surgical resection in appropriate patients to potentially decrease post-operative morbidity and increase diagnostic yield. Further studies in human subjects utilizing this plane will need to be performed in the future.

Fig. 4 Intra-operative photograph of lingual tonsillectomy during TORS. The lingual tonsil is grasped and reflected posteriorly.

Abbreviations
CO2: carbon dioxide; HPV: human papilloma virus; LTH: lingual tonsil hypertrophy; OSA: obstructive sleep apnea; SCCUP: squamous cell carcinoma of unknown primary; TORS: transoral robotic surgery.

Competing interests
The authors declare that they have no competing interests.

Authors' contributions
ES performed the dissection and drafted and edited the manuscript. MU performed the surgery and edited the manuscript. SQ prepared and interpreted the histology. VA conceived the study and edited the manuscript. All authors read and approved the final manuscript.

Acknowledgement
No other acknowledgments.

Author details
1Department of Otolaryngology - Head and Neck Surgery, University of Texas Medical Branch, 301 University Boulevard, Galveston, TX 77555, USA. 2Department of Pathology, University of Texas Medical Branch, 301 University Boulevard, Galveston, TX 77555, USA.

Received: 6 January 2016 Accepted: 30 March 2016

References
1. Lin AC, Koltai PJ. Persistent pediatric obstructive sleep apnea and lingual tonsillectomy. Otolaryngol Head Neck Surg. 2009;141:81–5.
2. Mehta V et al. A new paradigm for the diagnosis and management of unknown primary tumors of the head and neck: a role for transoral robotic surgery. Laryngoscope. 2013;123:146–51.
Dundar A et al. Lingual tonsil hypertrophy producing obstructive sleep apnea. Laryngoscope. 1996;106:1167–9. 6. Abdel-Aziz M et al. Lingual tonsils hypertrophy; a cause of obstructive sleep apnea in children adenotonsillectomy: Operative problems and management. Int J Pediatr Otorhinolaryngol. 2011;75:1127–31. 7. Robinson S et al. Lingual tonsillectomy using bipolar radiofrequency plasma excision. Otolaryngol Head Neck Surg. 2006;134:328–30. 8. Bower CM. Lingual tonsillectomy. Oper Tech Otolaryngol. 2005;16:238–41. 9. Kluszynski BA, Matt BH. Lingual tonsillectomy in a child with obstructive sleep apnea: a novel technique. Laryngoscope. 2006;116:668–9. 10. Joseph M, Reardon E, Goodman M. Lingual tonsillectomy: A treatment for inflammatory lesions of the lingual tonsil. Laryngoscope. 1984;94:179–84. 11. Cohen HB. The lingual tonsils: General considerations and its neglect. Laryngoscope. 1917;27:691–700. 12. Waugh JM. Lingual quinsy. Trans Am Acad Ophthalmol Otolaryngol. 1923;89:726–31. 13. Hoover WB. The treatment of the lingual tonsil and lateral pharyngeal bands of lymphoid tissue. Surg Clin North Am. 1934;14:1257–69. 14. Elfman LK. Lingual tonsils. Laryngoscope. 1949;59:1016–25. • We accept pre-submission inquiries • Our selector tool helps you to find the most relevant journal • We provide round the clock customer support • Convenient online submission • Thorough peer review • Inclusion in PubMed and all major indexing services • Maximum visibility for your research Submit your manuscript at www.biomedcentral.com/submit Submit your next manuscript to BioMed Central and we will help you at every step: Son et al. 
Noninvasive Imaging of Melanoma with Reflectance Mode Confocal Scanning Laser Microscopy in a Murine Model

Daniel S. Gareau1,2, Glenn Merlino3, Christopher Corless1,2, Molly Kulesz-Martin1,2 and Steven L. Jacques1,2

A reflectance-mode confocal scanning laser microscope (rCSLM) was developed for imaging early-stage melanoma in a living mouse model without the addition of exogenous contrast agents. Lesions were first located by surveying the dorsum with a polarized light camera, then imaged with the rCSLM. The images demonstrated two characteristics of melanoma in this animal model: (1) melanocytes and apparent tumor nests in the epidermis at the stratum spinosum in a state of pagetoid spread and (2) architectural disruption of the dermal–epidermal junction. The epidermal melanocytes and apparent tumor nests had a high melanin content, which caused their reflectance to be fivefold greater than the surrounding epidermis. The rCSLM images illustrate the difference between normal skin and sites with apparent melanoma. This imaging modality shows promise to track the progression of melanoma lesions in animal models.
Journal of Investigative Dermatology (2007) 127, 2184–2190; doi:10.1038/sj.jid.5700829; published online 26 April 2007

INTRODUCTION
Reflectance-mode confocal microscopy (rCSLM) offers a means to image mouse skin in vivo by exploiting scattering from microscopic biological variations in refractive index. The light-scattering properties of cutaneous tissues provide optical contrast for imaging the presence and spatial distribution of pigmented melanoma against the background of healthy tissue in a pigmented murine model, the hepatocyte growth factor/scatter factor transgenic mouse (HGF/B6) (Noonan et al., 2001). Components of skin whose refractive index is higher than the bulk refractive index of epidermis (nepi = 1.34) (Rajadhyaksha et al., 1999), such as keratin in stratum corneum (n = 1.51) (Rajadhyaksha et al., 1999), hydrated collagen (n = 1.43) (Wang et al., 1996), and melanin (n = 1.7) (Vitkin et al., 1994), can be imaged with backscattered light. Conventional wide-field microscopy illuminates and images a large volume simultaneously, so thin histological sections must be prepared to observe structural, cellular, and subcellular details. Optical sectioning in rCSLM blocks multiply scattered light so the image of the tissue in the plane of focus remains sharp despite light scattered above and below that plane. Confocal microscopy is limited in depth to the ballistic regime where photons propagate unscattered to the focus, backscatter from the focus toward the microscope, and escape the tissue without scattering. At deeper depths, the low level of light owing to multiply scattered photons becomes the optical noise floor for the image, specifying the practical depth limit for rCSLM imaging. The imaging depth range of rCSLM in this work (50–100 μm) was limited primarily by the laser wavelength used (488 nm). As mouse epidermis is thin (~15 μm, Figure 3), even enlarged epidermis (~40 μm, Figure 4b) associated with tumors can be imaged fully.
By comparison, imaging with rCSLM in human skin (Rajadhyaksha et al., 1999) with 830-nm laser light encounters 1.7-fold less scattering and 4-fold less optical absorption (Jacques, 1998). The imaging depth is increased to 250 μm, which sufficiently images human epidermis (60–100 μm). The long-term goal of this work is to contribute to ongoing efforts to "humanize" the mouse melanoma model such that melanoma onset in the mouse model better mimics early-stage human melanoma, where tumors originate in the interfollicular epidermis and invade locally downward through the epidermal–dermal junction (DEJ) rather than originating in the deeper dermis as in current mouse models. In human skin, melanomas are characterized by polymorphic (multilobed) melanocytes (Greger et al., 2005). One goal of this work was to survey the features of this animal model and identify characteristic structures that occur only in melanoma and not in normal skin (Figure 7). The rCSLM images can detect the early progression of melanoma in the subepidermal layer and its violation of the

ORIGINAL ARTICLE
2184 Journal of Investigative Dermatology (2007), Volume 127 © 2007 The Society for Investigative Dermatology
Received 22 August 2006; revised 30 January 2007; accepted 9 February 2007; published online 26 April 2007
1Department of Dermatology, Oregon Health & Science University, Portland, Oregon, USA; 2Department of Biomedical Engineering, Oregon Health & Science University, Portland, Oregon, USA and 3Laboratory of Cell Regulation and Carcinogenesis, National Cancer Institute, Bethesda, Maryland, USA
Correspondence: Dr Daniel S. Gareau, Department of Dermatology, Memorial Sloan Kettering Cancer Center, Dermatology, 160 E. 66th, New York, New York 10022, USA.
E-mail: dan@dangareau.net
Abbreviations: DEJ, epidermal–dermal junction; HGF, hepatocyte growth factor; rCSLM, reflectance-mode confocal scanning laser microscope

DEJ by revealing melanocytic structures in this well-characterized animal model of UV-induced melanoma (Noonan et al., 2001). Melanoma can be characterized by the high reflectance from melanocytes and strong attenuation within tumors. Melanin granules (~30 nm diameter within melanosomes) have a refractive index of 1.7 (Vitkin et al., 1994) compared with the surrounding cytoplasm of 1.35 (Brunsting and Mullaney, 1974). Therefore, melanin granules scatter light, providing a strong endogenous contrast agent for rCSLM imaging of melanocytes (Rajadhyaksha et al., 1995). The two key features of melanoma imaged by rCSLM were (1) pagetoid melanocytes and tumor nests in the epidermis and (2) altered skin ultrastructure described as the disruption of the DEJ. The ability of rCSLM to image the development of these features suggests that time-course imaging may elucidate the dynamically invasive nature of melanoma lesions in this mouse model.

RESULTS
Excised samples were fixed in formalin, sectioned, and stained using hematoxylin and eosin. Samples stained with a histological counter stain for iron pigment showed that the pigment was in fact melanin. A melanin-bleach method revealed subcellular detail in melanoma cells, verifying the atypical nuclei in tumor cells. Immunohistochemical staining with the antibody PEP8H specified the melanocyte antigen dopachrome tautomerase (DCT) and verified the presence of melanocytes. Figure 1 shows an immunohistochemically stained tumor biopsy. Figure 2a and b shows a typical experiment where a lesion is identified and imaged over 3 weeks. The nodular tumor is indicated both with and without involvement of the surrounding dermis. This rapid nodular growth developed in approximately half of the observed tumors.
Eight lesions identified by polarized imaging showed suspicious areas of uneven confocal reflectance in the epidermis and at the DEJ. Five normal sites (Figure 3) were characterized in a sagittal view (image of a plane perpendicular to the surface) by a relatively uniform reflectance with the absence of highly reflective structures. As a measure of dermal reflectance uniformity, the maximum contrast (brightest voxel value divided by dimmest) was 1.3 ± 0.2. Normal epidermis was about 15 μm thick (one or two cell layers) based on the histological image (Figure 3a). Collagen reflectance in the underlying dermis was uniform and the DEJ was relatively flat. Melanocytes were sparse among keratinocytes, yet frequent enough to give the skin a dark tone to the eye. Melanocytes accounted for less than 1% of epidermal cells as observed by rCSLM and histology. In contrast, melanoma lesions were well populated with pleomorphic melanocytes and melanocyte nests. Melanoma lesions contained high levels of melanin and were located repeatedly as dark regions in wide-field images and bright regions in subepidermal microscopic images.

Figure 1. The immunohistochemical stain for dopachrome tautomerase verifies that the tumor is a melanoma. Epidermal melanocytes are shown with arrows.
Figure 2. Digital photography of tumors. Digital photograph of (a) early-stage tumor and (b) late-stage tumor 2 weeks later. Bar = 4 mm.
Figure 3. Normal skin. Figure of normal skin, correlating histology (a) with confocal microscopy of normal skin in (b) sagittal view. (c) A set of en face images taken at various depths (z = 0.5–40.5 μm) on a different normal skin site.
Malignant tumors presented nodular regions of high reflectance and thickened epidermis and often surrounded hair follicles. Figure 4a and b shows a sagittal view. rCSLM features common to confocal images of tumors (Figure 4b and c) included melanocytic cells migrating upward into the epidermis. Figure 4c shows a series of en face images progressing from the surface of the skin through the epidermis into the dermis. To the eye, the tone of the normal skin on the melanoma-induced HGF/B6 mouse was not much lighter than the tone in the melanoma lesions or the normal pigmented tissue, although the confocal microscopy clearly showed an increased presence of melanotic features with strong backscattering of light in the tumors. The putative melanoma cells were large, abundant, and irregularly shaped. In Figure 5, two axial profiles (vertical white lines, Figure 4) of reflectance are plotted versus depth, one intersecting an epidermal melanocyte and the other just adjacent. Calibration (equation 2) was applied to the data to yield reflectance units. At the tissue surface (z = 16 μm, Figure 5) the water/stratum corneum interface reflectivity was Rmeasured = 0.0013. The Fresnel reflectance predicted from an interface

Figure 4. Melanoma. (a) Histology with an iron counter stain shows that the pigment is not iron. This late-stage tumor has ulcerated. The insets show (left) the epidermal thickening (left to right) and (right) the epidermal melanocytes indicated with arrows. In the confocal images (b, sagittal; c, en face, z = 1–65 μm) the malignant tumor is identified by bright areas of high melanin density located in single epidermal melanoma cells and at larger structures of these cells at the DEJ. HF, hair follicle (hair has been Nair'd™), 50 μm in diameter. M, epidermal melanocytes.
GC, granular cells with dark nuclei beneath the stratum corneum. Cells in the granular layer within the epidermis appear with dark nuclei, which backscatter less light than the surrounding cytoplasm/cell wall/extracellular matrix. DEJ, dermal–epidermal junction. EM, irregular groups of polymorphic melanocytes at DEJ. The white lines at x = 132 μm mark the axial z-profiles that are analyzed in Figure 5.

Figure 5. The axial reflectance profile through one melanocytic cell, relative to surrounding epidermis. Circles represent the data from the solid white line in Figure 4b, diamonds represent data from the dashed white line. Centered at z = 16 μm, the reflectance of the stratum corneum (SC) is 1.3 × 10^-3. Beneath the SC, the bulk tissue reflectance decay is fit with an exponential, 0.00018 exp[-0.068 (z - 24)]. Centered at z = 45 μm, an epidermal melanocyte's measured peak reflectance is m = 2.3 × 10^-4, which is 1.87 × 10^-4 above the epidermal background at z = 45 μm (4.3 × 10^-5). The decaying exponential least-squares error fit to the data, which is not sensitive to data points in the SC (z < 24 μm) or in the melanocyte (40 < z < 48 μm), represents the background reflectance of the epidermis.

of water (nH2O = 1.33) and stratum corneum (n = 1.51) is Rtheoretical = 0.0040. The difference between Rtheoretical and Rmeasured is probably because of roughness of the stratum corneum. At z = 45 μm, the reflectance of the melanocyte (Figure 5) was Rmel = 0.00023 and the reflectance of the background was only Repi = 0.000043. The melanocyte stands out from the background epidermis by a factor of Rmel/Repi = 5.3. In addition to the axial decay characterization described above, an en face analysis was used to compare populations of tumor characteristics.
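The quoted Fresnel estimate (Rtheoretical = 0.0040 for the water/stratum corneum interface) follows directly from the normal-incidence Fresnel formula R = ((n1 - n2)/(n1 + n2))^2; a short sketch in Python (the function name is ours, not the paper's):

```python
def fresnel_reflectance(n1: float, n2: float) -> float:
    """Normal-incidence Fresnel reflectance of a planar interface."""
    return ((n1 - n2) / (n1 + n2)) ** 2

# Water (n = 1.33) against stratum corneum (n = 1.51), as in the text.
r_theory = fresnel_reflectance(1.33, 1.51)
print(round(r_theory, 4))  # → 0.004, the quoted Rtheoretical

# The measured surface reflectance (0.0013) was lower than this,
# attributed in the text to roughness of the stratum corneum.
r_measured = 0.0013
print(round(r_theory / r_measured, 1))  # theory is roughly 3x the measurement
```

The same formula with water (1.33) against glass (1.52) gives the 0.0044 calibration value used later in the Materials and Methods.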
Tumor cells and nests were characterized by directly comparing their reflectance to that of the laterally surrounding epidermis. Five features (melanocytic cells or tumor nests) were picked from Figure 4c along with the corresponding five adjacent normal areas. Figure 6a shows the same en face images as in Figure 4c replotted with the tumor features marked. A 3 × 3 voxel (1.5 × 1.5 μm) square region centered on the points picked as tumor and normal was averaged to yield the reflectance of tumor (Rt) and normal (Rn) tissue, respectively. In Figure 6a, black open circles indicate normal sites and asterisks indicate tumor sites. Figure 6b shows the paired points, Rt vs Rn, for the tumor and normal sites of Figure 6a. The average reflectances shown for the five pairs represent the mean and standard deviation of the nine-voxel region. Although the reflectance variability within a particular tissue was large because of the natural texture of the tissue, the mean reflectance level was consistently larger for the tumor (Rt ≈ 5.2 Rn). Table 1 lists the mean ratio of melanocyte reflectance (Rt) to epidermal reflectance (Rn). For the five tumors imaged, the value of Rt/Rn was 5.0 ± 1.6, which agreed with a simple model, Rt/Rn = 5.2 (equation 1). Table 1 also includes the results from a separate tumor on the same animal and three tumors on a separate animal (images not shown). Figure 7 compares en face confocal images of tumors versus normal tissue. In general, the characteristic tumor structures were strongly scattering. Two distinct forms of involvement were seen. (1) In the epidermis, atypical melanocytes and tumor nests were observed in the tumor where only normal granular cells presented in the normal tissue. The melanocytic lesions in the mouse epidermis exhibited pagetoid spreading, characteristic of human intraepidermal melanoma cells.
(2) At the basement membrane, where the DEJ was flat and continuous in healthy tissue, tumors presented irregularity where the architecture of the DEJ was disrupted.

DISCUSSION
This report illustrates our attempt to image melanoma and characterize malignancy in early-stage tumors. It was a challenge to follow lesions on a living animal and prepare histology of the same region with precision. The endogenous landmarks used, such as hair follicles, proved insufficient to reliably and consistently correlate the confocal microscopy with the histology. Exogenous markers such as tattooing should be pursued. Comprehensive imaging for detection of

Figure 6. The contrast of melanoma. (a) Five paired tumor (*) and normal (o) sites were chosen at various depths (z = 1–65 μm). (b) The reflectance at the five tumor locations is shown as a function of their normal counterpart's reflectance (overlaid line: Rt = 5.3 Rn).

Table 1. The contrast between atypical tumor features and background tissue

Tissue site     Mean Rt/Rn   SD Rt/Rn
1. Figure 5     5.0          1.6
2. Not shown    4.7          0.7
3. Not shown    6.7          1.8
4. Not shown    6.3          1.0
5. Not shown    5.3          0.7

SD, standard deviation. The reflectance of tumor features (epidermal melanocytes or tumor cell nests), Rt, is divided by the reflectance of the normal surrounding tissue, Rn. Each result, the mean and SD (n = 5 features per site for each of five tissue sites on two animals), represents the ratio Rt/Rn. The five features per site were a mixture of melanocytes and tumor nests.
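The paired-site measurement behind Table 1 reduces to averaging a 3 × 3 voxel neighborhood around each picked coordinate and taking the tumor/normal ratio. A sketch of that bookkeeping with NumPy, using a synthetic en face frame rather than the paper's data (array values and coordinates are illustrative only):

```python
import numpy as np

def mean_reflectance(image: np.ndarray, row: int, col: int) -> float:
    """Average a 3x3 voxel (1.5 x 1.5 um) region centered on (row, col)."""
    patch = image[row - 1 : row + 2, col - 1 : col + 2]
    return float(patch.mean())

rng = np.random.default_rng(0)
# Synthetic 512 x 512 frame: dim epidermal background (~4.3e-5, as in
# Figure 5) plus one bright square standing in for a tumor nest.
frame = rng.normal(4.3e-5, 5e-6, size=(512, 512))
frame[100:110, 200:210] += 1.9e-4  # feature roughly 5x the background

r_t = mean_reflectance(frame, 105, 205)  # tumor site
r_n = mean_reflectance(frame, 300, 300)  # adjacent normal site
print(round(r_t / r_n, 1))               # ratio on the order of 5
```

Averaging nine voxels, as the authors did, suppresses the single-voxel texture noise that otherwise dominates the comparison.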
epidermal tumors might include confocal mosaics (Chow et al., 2006) of a large square region (~2 × 2 cm), marked by tattoo on younger animals over time, with corresponding polarized light images (Jacques et al., 2000, 2002; Gareau et al., 2005). This report has concentrated on illustrating two features of apparent melanoma: (1) the presence of melanocytic cells and tumor nests in the epidermis indicative of pagetoid spread and (2) disruption of the DEJ. The epidermal melanocytes and tumor nests were characterized by bright reflectance because of melanin. The relative reflectance of a melanoma cell vs background epidermis was measured to be 5.3. Five tumors additionally studied (Table 1) showed a relative reflectance of 5.6 ± 0.9. In general, the images of tumors contained a high degree of heterogeneity in rCSLM images compared with their normal counterparts. Using these refractive indices of 1.51 for keratin, 1.7 for melanin granules, and 1.35 for background epidermis, the Fresnel reflectance (Hecht, 2002) predicted from a plane of melanin or keratin within epidermis is

R = ((nepi - n)/(nepi + n))^2   (1)

which yields Rker = 0.0024 and Rmel = 0.014, and the ratio Rmel/Rker is 5.8. This ratio is comparable to the fivefold ratio of melanocyte reflectance relative to keratinocyte reflectance. The simple calculation of Fresnel reflectance from a planar interface is probably too simple to model keratinocytes and melanocytes accurately. Scattering also depends on the small particle size of keratin fibrils and melanin granules and the larger size of keratin aggregates and melanosomes. Nevertheless, the observed ratio of fivefold higher reflectivity for melanocytes relative to keratinocytes is consistent with the strong reflectivity expected from melanin granules. The refractive index difference of melanin granules and keratin fibrils relative to the surrounding epidermis is the expected basis of optical contrast for imaging of melanoma.
MATERIALS AND METHODS

Animals
The HGF/B6 melanoma model (Noonan et al., 2001) developed at the National Cancer Institute and George Washington University was used in this study. Genetically engineered mice overexpressed HGF/scatter factor, making them susceptible to melanoma induced by UV radiation on the back (Noonan et al., 2001). Mice used in this study had a pigmented C57BL/6 genetic background. The UV-irradiated HGF/B6 mouse developed melanoma through a series of stages, starting with multiple skin lesions that appeared first as small tumors (<1 mm diameter, Figure 8) followed by a progressive swelling of the dermis (Figure 2). Mice with tumors that grew to 1 cm in diameter were immediately killed and imaged; all other imaging was done in vivo. All animal studies were approved by the Oregon Health and Science University Institutional Animal Care and Use Committee. Hair was removed chemically (Nair™). Tumors on the lower back were imaged to avoid motion from the heart and lungs. The underside of the mice, which had not developed melanoma through UV-induced radiation, was imaged as a control. Figure 8 shows an early-stage lesion. Multiple early-stage lesions were followed through tumor development. About half of the early-stage lesions became enlarged and spread laterally. The results presented in this paper constitute a subset of the laterally spreading lesions versus normal skin. One-year-old animals from previous collaborators' experiments were used to minimize overall animal use. Lesions were identified by eye and then imaged with a polarized wide-field microscope (Jacques et al., 2000, 2002; Gareau et al., 2005) to identify lesions that were superficial and hence likely to present pagetoid melanocytes. Animals were placed on a metal plate the size and shape of a standard microscope slide, with the tumor of interest

Figure 7. The characteristics of melanoma. (a–i) Tumor images versus (j–l) normal images.
(a–c) Irregular epidermal melanocytes (M) in the epidermis and hair follicles (H). (d–f) A melanoma tumor nest (M) and hair follicle (H). (g–i) Disruption of the DEJ is characterized by its broken appearance. (j, k) Healthy epidermis presents granular cells with dark nuclei. (l) Approximately 10 μm below the healthy epidermis, the healthy DEJ presents as relatively uniform and intact.

Figure 8. Digital photograph of dorsal melanoma tumor (center). Millimeter markings show the tumor's diameter to be about 0.7 mm. The animals had already developed lesions as large as 5 mm in diameter, but also had early-stage lesions (less than 1 mm diameter), which were deemed early lesions and chosen for imaging.

centered over a 2-mm diameter hole in the plate. Optical coupling between the objective lens and the skin was achieved with a drop of saline solution. The animal was immobilized in a least invasive manner with 10 wrappings of an elastic string (Spider-Thread™, Redwing Tackle, Ontario, Canada), which is commonly used for fixing bait to fishing hooks. This sufficiently stabilized the skin to minimize movement artifacts because of heartbeat and breathing. The three-dimensional field of view (x, y, and z = 260, 253, and 80 μm, respectively) took about 15 minutes to acquire. The animal (36 g typical weight) was anesthetized during each 45-minute imaging session by a ketamine/xylazine cocktail (0.5 ml intraperitoneally, adjusted for animal weight, age, and tumor load).

rCSLM
An rCSLM incorporating reflectance and fluorescence channels was designed and assembled. The fluorescence mode capabilities were designed for other experiments and not used in this report.
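The roughly 15-minute acquisition time quoted above is consistent with the scan parameters reported for the system (512 × 512 voxels per en face frame at a 25 kHz voxel rate, with 80 depth steps); a quick arithmetic check:

```python
# Scan parameters as reported for this rCSLM.
voxels_per_frame = 512 * 512
voxel_rate_hz = 25_000          # 25 kHz voxel acquisition rate

frame_s = voxels_per_frame / voxel_rate_hz
print(round(frame_s, 1))        # 10.5 seconds per en face image, as quoted

stack_s = frame_s * 80          # 80 z-slices, 1 um apart (an 80 um stack)
print(round(stack_s / 60, 1))   # ~14 minutes, i.e. "about 15 minutes"
```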
The rCSLM used a 488-nm (blue) argon-ion laser; x- and y-axis scanning mirrors; an original magnification ×60 water-dipping objective lens (0.90 NA Olympus LUMPlanFl); a photomultiplier tube (Hamamatsu Photonics, 5773-01); a data acquisition board (National Instruments, 6062E); a z-axis motorized stage (Applied Scientific Instrumentation, Eugene, OR, LS50A) for supporting the animal; Labview software to control the system; and a Gateway laptop computer (Microsoft Windows 2000 operating system). A relay lens system magnified the image to project the central lobe of the Airy function (Rajadhyaksha and Gonzalez, 2003) to be 1.5 times larger than the 50-μm diameter pinhole for confocally matched gating (Wilson and Sheppard, 1984). The axial resolution limit measured for the system was 1.25 μm. The scanning mirrors provided x–y scans (512 × 512 volume elements (voxels), 25 kHz voxel acquisition rate, 10.5 seconds per image) at each depth z in the tissue. The motorized stage advanced the animal 1 μm along the z-axis before each x–y scan, 80 times (512 × 512 × 80 voxels). Post-processing of the data was carried out using MATLAB™ software. To express voxel values in the units of optical reflectance, calibration was achieved by imaging the water/glass coverslip interface with a neutral density filter (optical density = 1.0) attenuating the laser and equating this measurement to the Fresnel reflectance for a planar water/glass interface with a refractive index mismatch, R = ((n1 - n2)/(n1 + n2))^2 = 0.0044, for water (n1 = 1.33) and glass (n2 = 1.52). The reflectance (R) of the mouse skin measured without the neutral density filter was calculated based on the confocal signal in volts from the mouse (Vm) and from the water/glass interface (Vwg):

R = (Vm / Vwg) × 0.0044 × 10^(-OD)   (2)

Typical values of R for the C57/B6 mouse skin were 10^-5–10^-4. Voxel values in the confocal images in this report are presented as the log of the data, log10(R), over the range 10^-5 < R < 10^-3.
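The calibration of equation 2 and the logarithmic display mapping are simple to express in code. The sketch below uses our own variable names and made-up illustrative voltages, and reads the garbled printed equation as R = (Vm/Vwg) × 0.0044 × 10^(-OD), i.e. the neutral-density attenuation of the reference measurement is compensated once:

```python
import numpy as np

R_WATER_GLASS = 0.0044  # Fresnel reflectance, water (n=1.33) / glass (n=1.52)

def calibrate(v_mouse: float, v_wg: float, od: float = 1.0) -> float:
    """Equation 2: reflectance from confocal voltages.

    v_wg was recorded through a neutral-density filter of optical
    density `od`, so the calibration factor compensates for it.
    """
    return (v_mouse / v_wg) * R_WATER_GLASS * 10.0 ** (-od)

def display_value(r: float, r_min: float = 1e-5, r_max: float = 1e-3) -> float:
    """log10 display mapping over the range 1e-5 < R < 1e-3."""
    return float(np.log10(np.clip(r, r_min, r_max)))

# Hypothetical voltages: mouse signal 5% of the ND-attenuated
# water/glass reference signal.
r = calibrate(v_mouse=0.05, v_wg=1.0, od=1.0)
print(f"{r:.1e}")                    # 2.2e-05, inside the typical 1e-5..1e-4 range
print(round(display_value(r), 2))    # -4.66 on the log display scale
```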
This graphical display allocates the dynamic range in the image to optimally include the range of reflectance of the tissue.

Experimental protocol
The back of each animal, which had been exposed to the tumor-inducing UV radiation, was examined for tumor growth. Anesthesia was followed by digital photography (Panasonic DMC-FZ20), as shown in Figure 8, then imaging with a wide field-of-view polarized-light imaging system (Figure 9; Gareau et al., 2005) that aided in finding early superficial lesions. After selecting lesions using the polarized images, animals were immobilized on the metal plate, which was placed on the rCSLM stage. The water-dipping objective lens of original magnification ×60 was coupled to the skin surface from below using phosphate-buffered saline. Lesions were imaged one to three successive times over a 1-month period, using landmarks of biological features such as tumor size, shape, and relative location as well as hair follicle location to keep track of the lesions during successive imaging sessions. At the last time point of in vivo imaging, the tumors were excised for histopathology with the position and orientation landmarks of the tumor noted.

CONFLICT OF INTEREST
The authors state no conflict of interest.

ACKNOWLEDGMENTS
This work was supported by the National Institutes of Health, EB000224 (SLJ), CA98893 (MKM), and CA69533 (OHSU Cancer Institute). We thank Drs. Alon Scope, Frances Noonan, Ed De Fabo, and Miriam Anver for critical review of this paper. We also thank Dr. Miriam Anver and Carolyn Gendron for histopathology.

Figure 9. Polarized image of dorsal melanoma tumors. (a) Normal light image. (b) Polarized light image, based on the difference between two images, one through a linear polarizer oriented parallel to the polarized illumination and the second cross-polarized perpendicular to the illumination.
The superficial lesion (S) appeared black in both normal-light and polarized-light images, whereas the deeper lesion (D) appears black only in normal-light images. Bar = 2 mm.

REFERENCES
Brunsting A, Mullaney P (1974) Differential light scattering from spherical mammalian cells. Biophys J 14:439–53
Chow SK, Hakozaki H, Price DL, Maclean AB, Deerinck TJ, Bouwer JC et al. (2006) Automated microscopy system for mosaic acquisition and processing. J Microscopy 222:76–84
Gareau DS, Lagowski J, Rossi V, Viator JA, Merlino G, Kulesz-Martin M, Jacques SL (2005) Imaging melanoma in a murine model using reflectance-mode confocal scanning laser microscopy and polarized light imaging. J Investig Dermatol Symp Proc 10:164–9
Greger A, Koller S, Kern T, Massone C, Steiger K, Richtig E et al. (2005) Diagnostic applicability of in vivo confocal laser scanning microscopy in melanocytic skin tumors. J Invest Dermatol 124:493–8
Hecht E (2002) Optics. New Jersey: Pearson Education
Jacques SL (1998) Spectrum used in online class tutorial. http://omlc.ogi.edu/classroom/ece532/class3/muaspectra.html
Jacques SL, Roman J, Lee K (2000) Imaging superficial tissues with polarized light. Lasers Surg Med 26:119–29
Jacques SL, Ramella-Roman JC, Lee K (2002) Imaging skin pathology with polarized light. J Biomed Opt 7:329–40
Noonan FP, Recio JA, Takayama H, Duray P, Anver MR, Rush WL et al. (2001) Neonatal sunburn and melanoma in mice. Nature 413:271–2
Rajadhyaksha M, Gonzalez S (2003) Real-time in vivo confocal fluorescence microscopy. In: Handbook of biomedical fluorescence (Mycek MA, Pogue B, eds). New York: Marcel Dekker, 143–80
Rajadhyaksha M, Gonzalez S, Zavislan JM, Anderson RR, Webb RH (1999) In vivo confocal scanning laser microscopy of human skin II: advances in instrumentation and comparison with histology.
J Invest Dermatol 113:293–303
Rajadhyaksha M, Grossman M, Esterowitz D, Webb RH, Anderson RR (1995) Video-rate confocal scanning laser microscopy for human skin: melanin provides strong contrast. J Invest Dermatol 104:946–52
Vitkin IA, Woolsey J, Wilson BC, Anderson RR (1994) Optical and thermal characterization of natural (sepia officinalis) melanin. Photochem Photobiol 59:455–62
Wang X, Milner TE, Chang MC, Nelson JS (1996) Group refractive index measurement of dry and hydrated type I collagen films using optical low-coherence reflectometry. J Biomed Opt 1:212–6
Wilson T, Sheppard CJR (1984) Theory and practice of scanning optical microscopy. New York: Academic Press
European Journal of American Studies, Reviews 2010-1

J. M. Gratale on Jane Chapman's Issues in Contemporary Documentary

Electronic version
URL: http://journals.openedition.org/ejas/7817
ISSN: 1991-9336
Publisher: European Association for American Studies

Electronic reference
« J. M. Gratale on Jane Chapman's Issues in Contemporary Documentary », European Journal of American Studies [Online], Reviews 2010-1, document 1, Online since 04 March 2010, connection on 01 May 2019. URL: http://journals.openedition.org/ejas/7817
This text was automatically generated on 1 May 2019.
Creative Commons License

1 Jane Chapman, Issues in Contemporary Documentary. Polity Press, 2009. pp. 210. ISBN: 978-0-7456-4009-9.

2 The prevalence of images in contemporary society and culture is a patent reality. From digital photography to cinematic film, and cell phone imaging to YouTube, the production and circulation of the image is becoming more and more diffused.
In conjunction with such developments is the fact that the mediums and technologies which 'deliver' these images are constantly diversifying and improving in capabilities. Where does all this leave the documentary? Has the documentary outgrown its appeal in a market consumed with reality TV programming, news and surveillance footage, and Hollywood 3-D blockbuster films? This book by Jane Chapman represents a timely engagement with a range of issues intimately linked to the documentary genre. While she does not directly address the growing appeal of the multifarious visual mediums mentioned above, Chapman makes a compelling case that contemporary documentary occupies a vital space in the global social milieus of the present.

3 In under two hundred pages Chapman competently assesses the state of current documentary film practice. The range of issues she addresses includes the following: definitions of relevant terminology, modes of representation, the tension between objectivity and subjectivity, censorship, the authorial voice, reflexivity, audience response, and questions relating to ethics. Each of these areas forms an individual chapter of the book, in addition to an introduction and a concluding section. While it is essentially an introductory volume, Chapman's methodology and the book's content reveal something more, which I would identify as the hidden strengths of Issues in Contemporary Documentary. For example, one of the 'necessities' of an introductory text is to provide an historical backdrop for the subject matter at hand. So the reader might assume a chapter or two of Chapman's book would be devoted to the historical evolution of the documentary genre over the past hundred years or so. The author, however, avoids this formulaic convention and in its place opts to incorporate historical context relevant to specific thematic issues. As she indicates, she has "used documentary history selectively to explain context rather than attempt to present an all-embracing chronological evolution" (2).

4 In place of a historically oriented discourse, Chapman utilizes a number of case studies. These case studies, or documentary vignettes, give the book a highly appealing quality and provide an insight into contemporary documentary practices at the international level. From documentaries which deal with 9/11 to the war in Iraq, and issues of sexuality and gender to anthropologically focused treatments, Chapman's case studies not only contextualize particular themes, but also provide the reader with some access to various theoreticians and their theoretical perspectives. These theoretical insights coupled with critical analyses of selected documentaries provide a much-needed interpretive dimension to the book. For example, the author considers the 2007 conspiracy documentary Zeitgeist. After a brief summary and helpful commentary from other authors, Chapman identifies the key weaknesses of the documentary. Her comments are effective and to the point. In the case of Zeitgeist, she states the following: the "legitimate questions about what happened on 9/11, and about corruption in financial and religious organizations, are all undermined by the film's determined effort to maximize an emotional response at the expense of reasoned argument" (173). This sort of focused analysis clarifies and illustrates the theoretical underpinnings of the book, and provides the reader with ample exposure to the application of theory.

5 Another intriguing aspect of Chapman's volume is its sustained engagement with questions relating to a documentary's intent and purpose. For example, the issue of truth and its depiction in a documentary comes up repeatedly throughout the work.
Her position on such matters is that "the camera is incapable of simply delivering an unmediated reproduction of truth: the camera is by definition an instrument of visual mediation" (23). Mediation and the objectivity/subjectivity dichotomy are central to the documentary genre. Common public understanding seems to assume that a documentary is a factual depiction of an event, issue, or personality which is somehow 'objectively' represented, as one might view journalistic investigative reporting. Chapman repeatedly stresses that the constructed elements of a documentary require acknowledgement. As she argues, any "suggestion that documentary provides an unmediated reflection of the real world should be challenged: images provide evidence of the real world as a subjective interpretation in the same way that a painting or piece of writing does…" (27). Therefore, an inherent tension exists within documentary film-making which is less of an issue in other similar mediated visual forms.

6 This problematic (or tension) is perhaps responsible for the continued appeal of documentary; and this takes us to one further strength of the book. Chapman successfully manages to incorporate the advances and developments of the digital revolution and illuminate their linkage to documentary practices. Essentially Chapman suggests that documentary is "alive and well" in the early twenty-first century despite the proliferation of alternate forms of visual culture. In fact what has been occurring is an increased ability to create a 'documentary' and have it circulate through the Internet, bypassing the conventional institutional configuration of documentary production. For example, through uploading and downloading on the Internet, the genre, some would argue, has become more democratized. The contemporary visual landscape is increasingly composed of social-activist-oriented documentaries.
As Chapman points out, the "emergence of cheaper and more flexible technology allows anybody to make a documentary, to edit their films quickly, and to promote them on the Internet" (71).

7 The new mediums in which documentary is formatted and the platforms on which it is consumed clearly represent an aspect of change in the genre. But as Chapman warns the reader, there is an equally strong current of continuity in documentary film-making along the thematic lines discussed throughout her work, including the authorial voice, audience response, objectivity/subjectivity, ethics, as well as other related issues. And just as these discursive tensions were a key characteristic in the past and continue to be in the present, so too will they remain in the future. Chapman's book thus performs a very critical function in bridging past and present while simultaneously noting not only the overlapping redundancies of documentary practices but also their breaks and discontinuities. Therefore, any reader who is interested in an informed analysis and scholarly account of contemporary documentary should not hesitate to consult this impressive volume by Jane Chapman.

Joseph Michael Gratale, The American College of Thessaloniki

work_6siq5l4pyjh7fl6re4e7njcvq4 ---- Photographies

Daniel Rubinstein and Katrina Sluis, Department of Arts Media and English, Faculty of Human Sciences, London South Bank University, London SE1 0AA, UK. Publisher: Routledge (Informa Ltd). Online publication date: 01 March 2008; downloaded (free access) 3 October 2008. To cite this article: Rubinstein, Daniel and Sluis, Katrina (2008) 'A LIFE MORE PHOTOGRAPHIC', Photographies, 1:1, 9–28. DOI: 10.1080/17540760701785842
Daniel Rubinstein and Katrina Sluis

A LIFE MORE PHOTOGRAPHIC
Mapping the networked image

Twenty-two years since the arrival of the first consumer digital camera, Western culture is now characterized by ubiquitous photography. The disappearance of the camera inside the mobile phone has ensured that even the most banal moments of the day can become a point of photographic reverie, potentially shared instantly. Supported by the increased affordability of computers, digital storage and access to broadband, consumers are provided with new opportunities for the capture and transmission of images, particularly online where snapshot photography is being transformed from an individual to a communal activity. As the digital image proliferates online and becomes increasingly delivered via networks, numerous practices emerge surrounding the image's transmission, encoding, ordering and reception. Informing these practices is a growing cultural shift towards a conception of the Internet as a platform for sharing and collaboration, supported by a mosaic of technologies termed Web 2.0. In this article we attempt to delineate the field of snapshot photography as this practice shifts from primarily being a print-oriented to a transmission-oriented, screen-based experience. We observe how the alignment of the snapshot with the Internet results in the emergence of new photographies in which the photographic image interacts with established and experimental media forms – raising questions about the ways in which digital photography is framed institutionally and theoretically.
Introduction

Recent changes in the production, distribution, consumption and storage of images caused by the merging of photography with the Internet have had a notable effect on varied and diverse social and cultural processes and institutions including medicine (Pap, Lach, and Upton), journalism (Colin; Gillmor), law enforcement (Cascio), tourism (Noah, Seitz, and Szeliski), space exploration (Lanzagorta) and fine art. 1 In all these areas digital imaging causes shifts in the way bodies are imagined and perceived, wars are fought, people are monitored, works of art valorized and the public informed of all of the above. Photography is now ingrained in so many processes that a scholar of photography must also be highly informed about industries and institutions that traditionally have had little to do with the study of photography. Conversely, researchers in the fields of cultural anthropology (Okabe and Ito), informatics and human–computer interaction (Van House et al.; Kindberg et al., "How and Why") are increasingly concerned with the importance of digital photography to their fields of study.

Photographies Vol. 1, No. 1, March 2008, pp. 9–28. ISSN 1754-0763 print / ISSN 1754-0771 online. © 2008 Taylor & Francis. DOI: 10.1080/17540760701785842

In this article we choose to focus on only one area of photography, which we consider to be key in understanding the shifts which are occurring in our perception of the medium. Popular photography was, for several decades, the focus of studies by cultural theorists and historians, curators and artists. But it is only with the dissemination of personal photography online that "Kodak culture" (Chalfen 8–18), augmented by "Nokia culture", became distributed and shared on a scale comparable with news or commercial photography.
The distribution and sharing of snapshots online highlights a paradoxical condition that characterizes snapshot photography: it is both ubiquitous and hidden. Since the beginning of the twentieth century the snapshot has been the archetypal readymade image: placeholder for memories, trophy of sightseeing, produced in their millions by ordinary people to document the rituals of everyday life. And yet despite being the most mass-produced photographic product, the snapshot has remained highly private, concealed from the public eye, and quite often an invisible image. When snapshots do appear in public, whether in the context of fine art exhibitions and publications or in scientific journals, they are often presented as "found images" – stripped of notions of authorship or details about the original purpose of the image, its subjects and the circumstances of its creation. Even as the anonymous snapshot is used extensively as a metaphor or a sign in works of fine art from Jeff Wall to Nan Goldin, from Christian Boltanski to Gerhard Richter, these artists seem to be filling a gap left by the absence of genuine, real-life snapshots in the public domain. Looked at as a genre, snapshot photography seems to have many imitators but no recognized originals, many admirers but no masterpieces, many iconoclasts but no icons. Inverting this paradigm, the 2007 exhibition "We Are All Photographers Now!" at the Musée de l'Elysée, Lausanne, Switzerland, 2 responded to the way in which the photography of "ordinary people" has achieved visibility and popularity, challenging the way in which photography is framed and consumed. Photo-sharing and social networking sites now provide a platform for photographers to deliver their images to locations where millions can view them simultaneously. Being in the right place with the right phone is now enough to make you a photojournalist, or give you access to gallery wall space.
Snapshots now appear not only in web-based family albums and diaries but also literally cover the face of the Earth: augmented by geographic coordinates, they are superimposed onto screen-based online maps of the world. 3

Photography is dead, long live photography

If one looks back at the brief history of digital photography it becomes very clear that the issues that bothered critics and historians twenty years ago are significantly different from the questions we may need to ask now. For many scholars, the most pressing issues were those concerning the digital image's ability to represent the Real (Mitchell 23–57; Ritchin 36; Rosler 50–56). The malleability of digital photographs was then seen by many as the central element of the digital revolution and caused some to herald the "death of photography", shattering the privileged status of the photograph as "objective" truth (for accounts of this period see Robins 29–50). Instead, we now see that the power of the photograph to document is not diminished due to digital technology. From CCTV stills, traffic control and monitoring systems, to photo-reportage, the digital image plays a major role as "evidence". The low-resolution, pixelated appearance of early camera phone photographs and video clips is now an accepted part of the syntax of truthful and authentic reportage in the same way that the grainy black and white photograph once was. The speed with which these highly compressed JPEGs are transmitted and amalgamated into news media is an indication of the acceptance of the explicitly digital image into the structure of news reporting, while emergent practices such as citizen journalism and sousveillance (Mann, Fung, and Lo 177) rely on the instant distribution that the networked camera facilitates.
A typical example of this shift is the camera phone image taken by Adam Stacey on his way out of the underground on the morning of the 7/7 London bombings. 4 Alongside other camera phone images, his picture, rather than the photographs taken later that day by photojournalists, became iconic of the incident. Significantly, the picture that appeared on major news sites was a self-portrait of Stacey, one hand covering his nose and mouth, with the tube carriage in the background. Here, the camera phone provides the means of reporting from the perspective of a participant in the event, the ergonomics of the telephone even allowing for easy inclusion of the photographer in the picture. Compare that to the position of the photojournalist, whose professional ethics dictate the position of the detached observer, assisted by the adoption of bulky photographic equipment and long-range lenses which create a physical separation between subject and object. The mass-amateurization of photography and its renewed visibility online signal a shift in the valorization of photographic culture. If, in the past, the arena of public photography was dominated by professional practitioners, currently the work of specialists is appearing side by side with images produced by individuals who don't have the same professional investment in photography. 5 As a result, the roles of the professional photographic image and that of the snapshot are changing.

The early years of digital photography

During the first years of the "digital revolution", digital technology was largely inserted into the framework of existing traditional photographic practice. Through the 1990s the dominant shift was marked by replacing the technology of the analogue photograph (film, chemical processing, darkroom practices) with the technology of digital capture and image manipulation. But these technological changes did not radically alter the economy of production and storage of photographic images.
The arrival of digital imaging did not revolutionize popular photography but caused gradual shifts in the habits of hobbyists and middle-class amateurs who bought computers, scanners and ink-jet printers but used them within the old paradigms of analogue photography. The photographic darkroom and the photo lab were replaced by Photoshop and a colour printer. The ability to make prints without the need for a home darkroom, and the ease with which old, faded prints could be improved or restored, convinced many photographers to swap the photo lab for a domestic digital set-up. But during the first stages of penetration of digital photography into the amateur market (1990s) "going digital" was not about the acquisition of a digital camera – which was at that time an expensive tool beyond the reach of all but the richest dilettantes. Instead, the flatbed scanner or the more specialized film scanner became the central hardware of the digital "lightroom", as it provided a way of digitizing film negatives and old prints, correcting and restoring them beyond what was previously possible in the darkroom, and printing them on inkjet paper which mimicked photographic emulsion. The digital print was considered a compromise: not as good as a darkroom print, but an acceptable surrogate. Here the print persisted. Imaging software of that decade also simulated the tools and techniques of photographic craft; Photoshop was the software of choice, with its array of familiar darkroom tools for "dodging" and "burning", sharpening and blurring. Image management software (Thumbs Up, ACDSee) employed the metaphor of a light-box to display rows of images presented as a sheet of mounted transparencies.
Even when digital cameras became more affordable for the consumer market, the promise of immediacy that digital photography offered was frustrated by unsuitable methods for instant image sharing. Showing digital photographs to family and friends relied on being physically gathered around a single computer's screen – and on the presence of a computer-literate person to operate the software. Whilst sending photographs by email was possible and indeed practised, there were significant barriers to its uptake. Internet access in the 1990s was characterized by slow and expensive modem connections, accompanied by the popular adoption of low-capacity web-based email accounts. The practice of sending large attachments was quite risky, potentially condemning the recipient to an overflowing mailbox or terminal boredom whilst waiting for images to appear onscreen. Meanwhile, the publishing of images online was either a complicated or costly process, usually requiring a website domain, a hosting subscription, and a web designer or computer-savvy friend.

From print-based to screen-based photography

The advent of affordable, consumer-oriented digital cameras introduced amateur photographers to several technological innovations which contributed to dramatic changes in popular photographic practices. In 1995, the first digital consumer camera with a screen made it possible to preview an image before it was taken (Tatsuno 36). In addition to the screen, the digital camera also acquired a delete button, which provided a way of erasing unwanted shots from memory. With these two innovations, digital technology addressed the two significant barriers to engagement with photography: the delay between taking a picture and viewing it, 6 and the cost of each exposed frame (Bourdieu 78). From now on it became possible to engage with photography in a remarkably different way.
The ability to take a picture, look at the screen, readjust the composition and correct the camera settings until the image is perfect created an environment of accelerated learning which gave amateurs the tools to compete with professionals. In the world of still film cameras, years of training were required to mentally transform a 3D view into a 2D plane, and translate light and colour into photographic (greyscale or colour) values in order to visualize the scene the way it would look in print. Ansel Adams famously attributed great importance to this skill:

I can not overemphasize the importance of continuous practice in visualization, both in terms of image values […] and image management […]. We must learn to see intuitively as the lens/camera sees and to understand how the negative and printing papers will respond. It is a stimulating process and one with great creative potential. (Adams 7; our emphasis)

The little screen at the back of a digital camera made it possible to see intuitively as the lens/camera sees without years of training, dramatically narrowing the gap between the professional photographer and the amateur. The ability to delete an image immediately after it was taken has intriguing consequences for the kinds of photographs that are left after the "on-the-fly" editing. Whatever seems imperfect, unflattering, or meaningless at the time the picture was made is now in danger of being deleted immediately in order to free some space in the camera memory for future, presumably better, photo opportunities. The delete button promises a set of selected and more perfect images while at the same time threatening a death blow to the traditional role of the photograph as memento and keepsake.
The ability to edit in camera means that pictures that are deemed unsuccessful disappear for ever, thus eliminating the possibility of returning to them after months or years to discover redeeming qualities that compensate for their apparent imperfections. The delete button reduces the chances of discovering hidden truth in photographs: a blurred face that becomes a poignant representation of absence and loss; a bad expression that turns into a cherished quality; closed eyes that reflect the proximity of death; a stranger in the background that becomes a lover or a friend. Today, the overwhelming majority of personal photographs are destined never to appear on paper. As computer processing power has increased exponentially, and as the price of storage has dropped, the ability to accumulate tens of thousands of images has become a reality for the vernacular photographer. For the accidental archivist, the familiar trope of the dusty shoebox stuffed full of neglected prints has been reinvented as the hard disk cluttered by files. In 2005 one Internet survey stated that 11 per cent of respondents had more than 10,000 digital photos, whilst the largest group (27 per cent) had between 1,001 and 5,000 digital photos ("Do You Have 10,000 Digital Photos?"). Traditional models of handling photography had no way of coping with such a dramatic increase in the volume of photographic production. 7 As the photographic print becomes an unwieldy vehicle for sharing this image explosion, the snapshot is now commonly viewed via the screen of the camera, mobile phone or computer. As Daisuke Okabe has noted, currently the most fluid and immediate sharing of images happens when images are shared directly via the camera's screen (2, 9).
In recent years, the camera screen has grown in size from an electronic viewfinder into a portable viewing frame, designed not only to compose and review but to view, edit and share photographs without resorting to computers 8 or photo-labs 9 (see figure 1). Some consumer cameras now have screens which mimic the scale of a small photographic print, colonizing the entire back of the camera and replacing the camera controls with a responsive touch screen. The seductive, shiny screen has also become a marketing tool for the camera; product shots frequently highlight the back, rather than the front, of the camera, drawing attention to a large screen with a bright picture on it. A similar development can be observed in the way in which the design of higher-end camera phones and some PDAs almost eliminates the keypad, and therefore sacrifices some of the functionality, in order to maximize the screen.

The emergence of the digital lifestyle

As our viewing practices shift towards the screen, the photograph appears within the same space as other digitized information and entertainment. Christian Metz, in "Photography and Fetish", points out the difference between a (traditional) still photograph and a movie film in terms of its "socialised unit of reading" or lexis (155). He goes on to observe that "… the photographic lexis, a silent rectangle of paper, is much smaller than the cinematic lexis" (155). But viewed on the computer screen, the amateur/family photograph occupies the same space as the video game, the film trailer, the newspaper and the artwork in a virtual museum. It becomes part of an endless stream of data, disassociated from the origins of the snapshot in the personal, the ostensibly real, and private life.
In 2005, Photobucket was registering 1.3 million images being uploaded to its servers each day, which were then being replicated to over 500,000 other websites ("Fun Statistics"). Flickr recorded its 100 millionth photo upload in 2006 (Champ). Here, we observe a transformation similar in magnitude to the one that John Tagg describes in The Burden of Representation, when the invention of halftone plates in the 1880s "enabled the economical and limitless reproduction of photographs" (55–56). The ease with which images replicate and transmit across telecommunications networks modifies the economies of photographic images just as drastically.

FIGURE 1 Camera screen image. Reproduced with permission of e-Photographia.com.

Intriguingly, there are protracted similarities between the present digital photography revolution and the events that shaped amateur photography at the turn of the nineteenth century. As John Tagg points out, all the technological innovations of George Eastman, the founder of Kodak, would have amounted to little if he had not thought to re-brand photography in a way which made it appealing to "a whole stratum of people who had never before taken a photograph" (54). Similarly, the technological innovations that made storing limitless numbers of images possible on cheap hard disks and memory cards, and fast and economical distribution of photographs through a high-speed Internet connection, would not have amounted to a restructuring of the place of photography in society if they had not been augmented by a shift in the marketing of computers. At the axis of this second digital revolution in photography 10 was the re-branding of the home computer as the centre of the "digital lifestyle".
Championed by both Microsoft ("Microsoft Digital Lifestyle") and Apple (Redman), this concept aims to situate the computer at the heart of family life, replacing the television and the sound system, the coffee table, 11 the phone, the family album and the slide projector. Part of this re-branding exercise earmarked photography, together with music and video, as central to the "fun" things that the computer can do. As part of the "digital lifestyle", a new generation of consumer photographic tools (iPhoto, Picasa) avoids references to the darkroom and to the photographic skills and practices of old. Gone are the metaphors of the light-box and the filing cabinet, replaced instead by features more common to video software. Now, one is able to "scroll through" long sequences of photographs by navigating a timeline: as you navigate, the images flicker one after another. A whole year's worth of pictures can flash in front of your eyes in a matter of seconds. When Steve Jobs, Apple's CEO, unveiled the first version of iPhoto in 2001, he referred to the "chain of pain" involved in downloading photographs from the camera to the computer (Steinberg 1). In contrast with the past, Apple's new software was offering a "zero configuration" environment for photography in which the camera model was recognized automatically, images were stored according to date and time, and photos were "shared" as slideshows, photo books and webpages. The new breed of photographic applications did not emphasize the manipulation and editing of images: these were activities requiring time to learn and execute, and were made almost unnecessary by the very high "success rate" of compact digital cameras. By discreetly eliminating references to craftsmanship and specialist knowledge from digital photography software, photography is incorporated into the suite of friendly multimedia applications designed to appeal to every computer user.
This re-branding of photography occurred in tandem with a revolution in mobile telecommunications. The disappearance of the camera inside the telephone bonded photography to the most important device of personal communications that ever existed – the mobile phone. As Kristóf Nyíri observes:

Combining the option of voice calls with text messaging, MMS, as well as e-mail, and on its way to becoming the natural interface through which to conduct shopping, banking, booking flights, and checking in, the mobile phone is obviously turning into the single unique instrument of mediated communication, mediating not just between people, but also between people and institutions, and indeed between people and the world of inanimate objects. (Nyíri 2)

In the light of Nyíri's observations, it is not surprising that the influence of the camera phone on contemporary culture is the subject of extensive research by scholars in the field of human–computer interaction (HCI). Such studies respond to the demand of telecommunications conglomerates to know more about social uses of mobile photography with the view of developing additional services that will further deepen the bond between consumers and their phones (see, for example, Van House et al.; Kindberg et al., "How and Why"). Based mostly on the analysis of focus groups, these studies look at the things people "do" with their camera phones. Given the methodology, it comes as no surprise that most findings indicate that the main uses of camera phone photography are highly social. In the now celebrated analysis by Van House et al., "The Uses of Personal Networked Digital Imaging: An Empirical Study of Cameraphone Photos and Sharing", the four main uses of camera phone photography are creating and maintaining social relationships, constructing personal and group memory, self-presentation, and self-expression (1845).
What people do with photographs after they are taken is also the subject of acute attention. The emerging field of personal information management (PIM) addresses the personal photographic archive as another form of data which needs to be understood, managed and retrieved more efficiently. Rodden and Wood (409) have observed the ways in which pictures are stored, annotated, sent, shared and archived. Elsewhere, algorithms are being developed in order to assist in the creation of image collections which archive themselves (Naaman et al. 180–81). Surveying this growing literature on camera phones and photo sharing it becomes quite clear that the field is dominated by research that is not troubled by questions concerning the role of representation or the power structures which surround photography. Whilst references to the canon of critical writing on photography may appear in the occasional footnote, it is still remarkable that the new wave of works on photography (see, for example, Van House et al.; Van House and Davis; Okabe and Ito; Kindberg et al.) can do without the persistent questions about representation that fascinated writers on photography for decades. Before we lament the indifference of these researchers to the theories of photography, it is worth remembering that the photograph that occupied the mind of Barthes is a different object to the photograph that Okabe, Van House and Kindberg write about. Where Barthes was turning the rustling pages of his family album, Okabe observes a formation of pixels on a 2×2 inch screen of a telephone. Where Sekula interprets the significance of the strips of light and dark in Stieglitz’s fine print, Van House deals with an image that becomes illegible binary data at the press of a button. 
As image data become a viable substitute for the printed snapshot, we see the material structures which supported the storage and display of personal photography (shoebox, album, photo frame) being sustained by a range of different practices and forms. For an increasing number of consumers, the archival and sharing practices which surrounded the print are now provided via the transmission of photographs to networked locations. Through the relocation of the image collection online, consumers are able to mitigate data anxiety by outsourcing backing-up responsibilities to the companies which maintain the massive server farms hosting the images (‘‘The Downside of Digital Snaps’’). Significantly, whether located online or contained on a home PC, the digital snapshot collection now takes the form of a database. Borrowing Manovich’s definition, ‘‘they appear as collections of items on which the user can perform various operations – view, navigate, search’’ (Language of New Media 219). With the emergence of the photo-sharing platform, the photographs of millions of individuals are now contained within online databases connected to each other by hyperlink, tag, or search term. Within this context, the consumption of personal photography has become intimately linked with the software interfaces which mediate its display on-screen.

The social life of the networked image

The popularity of photo sharing needs to be considered alongside the processes that shape the World Wide Web, particularly in recent years, when notions of ‘‘community’’, ‘‘social’’, ‘‘friend’’, ‘‘free’’ (as in free account) and ‘‘fun’’ have been reshaped through the rise of social networking and Web 2.0. Though often regarded as a problematic and controversial term, ‘‘Web 2.0’’ was coined in 2004 to describe shifts in the way in which ‘‘software developers and end users use the web as a platform’’ (O’Reilly). 
Commentators observed that successful websites were appearing which, instead of providing an interface for the navigation and display of interlinked documents, mimicked the functionality and interactive possibilities more commonly found in desktop software applications. In terms of photography, it suddenly became possible to modify content online without programming skills; one could upload, rotate, annotate, distribute and organize images by interacting directly with the webpage itself. Whilst in the 1990s photo-sharing sites simply functioned as add-ons for online print-finishing services, the new generation of sites such as SmugMug, Buzznet, Zoto, and Flickr (launched in 2004) functioned as interfaces which facilitated a playful engagement with one’s own snapshots and those uploaded by others. The photo-sharing platform, like the software that supports blogging, makes the process of updating one’s page simple and intuitive. However, unlike a blog, photo sharing does not require the labour of writing regular entries, nor does it demand the continual activity of a social networking site; yet it offers its members many of the benefits of both. As such, photo sharing provides a flexible model of participation which allows for regular updates in the form of online photo journals, but also accommodates users who only want to upload a few images as a permanent photo gallery, or those who use photo sharing as a backup solution for the image collection on their PC. At the same time, a major appeal of photo sharing is the ability to connect with others not through writing but by posting images. Van House notes in her study of Flickr users that many have given up blogging because it is ‘‘too much work’’ and now favour the photograph as a more convenient way of sharing their experiences (2720). 
The practices of moblogging (blogging with a mobile phone) and photoblogging (blogging with photographs rather than text) further exploit the way in which mobile phone images have become a kind of visual speech – an immediate, intimate form of communication that replaces writing. The networking of the snapshot provides something which vernacular photographers have always lacked: a broad audience. Don Slater has noted how marginalized the practice of looking at, as opposed to taking, snapshots has been, quoting a 1982 survey which stated that 60 per cent of respondents and their families looked at their family snaps once a year or less (Slater 138). Single images uploaded to a photo-sharing site can accumulate thousands of viewings and long strings of comments. Whilst an invitation to someone else’s ‘‘slide night’’ of holiday snaps was once something to be avoided at all costs, the photo-sharing environment encourages a prolonged engagement with the image, where the act of viewing other people’s images online becomes a form of leisure and a social activity. Writing in 1995, Slater noted the way in which ‘‘actively using domestic photographs as opposed to taking them … is marginal because it is not structured into a leisure event’’ (140). Within a photo-sharing platform, the viewing of photographs is now constructed as a creative pursuit, involving remixing, captioning and commenting upon images. At the same time, traditions of collecting, archiving, and scrapbooking have been re-branded under the marketing buzzword ‘‘life caching’’: a consumer ‘‘mega trend’’ coined by Trendwatching.com (‘‘Life Caching: An Emerging Consumer Trend’’). 
Consumers, or ‘‘anyone with even a tiny amount of creative talent’’, are also now re-branded as members of a ‘‘Generation C’’, for whom the production and manipulation of digital media ‘‘Content’’ is both ‘‘Creative’’ and inseparable from the consumption of digital storage, media players, and camera phones (‘‘Generation C: An Emerging Trend and New Business Opportunity’’). The photo-sharing interface provides a range of built-in features designed to make the viewing of photographs into a concrete, traceable activity, which is a source of anticipation each time a user logs on to their account. Jean Burgess suggests that within Flickr these interface features reward users for participation: ‘‘At the most basic level, each action of uploading an image contains a potential reward – there is always the possibility that someone will view and enjoy it; the reward is delivered in material form if another user leaves a comment or marks the image as a favourite’’ (Burgess 140). Viewer involvement can extend from leaving a comment at the bottom of the page to attaching notes to specific areas of the image, thereby making viewing into a creative activity that has the potential to support or subvert the intentions of the photographer. The face of the photograph can become a site of struggle between interpretations by various users, while at the same time generating layers of text that can be read by software and used as a resource in search algorithms. 12

The visibility of the networked image

A quick visit to the Flickr homepage reveals that over 3,000 images have been uploaded in the last minute. 13 Within this avalanche of images, the practice of tagging one’s photos acts as a strategy for preventing them from disappearing from view. Tagging systems are a central feature of photo-sharing sites such as Flickr and SmugMug which promote the community features of their interfaces. 
The practice of tagging involves the addition of freely chosen words to an image – resulting in a bottom-up subjective categorization system known as a folksonomy. In a photo-sharing context, tagging serves a dual function of helping to classify a personal collection of images, and making the images available to search enquiries within the photo-sharing site. As such, tagging is both a part of personal image management and at the centre of the social aspects of photo sharing. Within the Flickr environment, the practice of tagging is linked to the popularity and visibility of the image. Images which are highly ranked in search results may have been tagged with up to seventy-five keywords (the maximum allowed) through which they have attracted hundreds of hits and numerous comments. Within photo sharing, the practice of tagging becomes part of a strategy for self-promotion that allows the individual to rise above the anonymity of most users. The reliance on tagging for organization and retrieval of images is an indication of the importance of textuality for online photographic procedures. Photo sharing is therefore not just a portal for photographs but an amalgamation of mutually dependent visual and textual practices. Matt Locke has observed that annotation ‘‘creates a kind of intimacy around the photograph, capturing some of the ‘murmur of laughing voices’ that surrounded their creation’’ (391). Tagging, commenting, titling and annotating of images are essential elements of participation in the social aspects of photo sharing which play a role in creating communities of users interested in specific images. Tagging provides a substantially different way of viewing and interacting with personal photography. Batchen states ‘‘… when we … touch an album and turn its pages, we put the photograph in motion, literally in an arc through space and metaphorically in a sequential narrative’’ (49). 
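The dual function described above – organizing one’s own collection while exposing it to site-wide search – is, in data-structure terms, an inverted index that maps each freely chosen tag to the set of images carrying it. The following is a minimal sketch of that structure; all names are hypothetical and illustrative, not Flickr’s actual implementation:

```python
# Sketch of a folksonomy as an inverted index: each freely chosen tag
# maps to the set of photo ids that carry it. Illustrative only.
from collections import defaultdict

class TagIndex:
    def __init__(self):
        self._index = defaultdict(set)  # tag -> set of photo ids

    def tag(self, photo_id, *tags):
        """Personal image management: attach freely chosen words to a photo."""
        for t in tags:
            self._index[t.lower()].add(photo_id)

    def search(self, tag):
        """Social side: any user's query returns every photo with this tag."""
        return set(self._index.get(tag.lower(), set()))

index = TagIndex()
index.tag("p1", "cat", "holiday")
index.tag("p2", "cat")
print(sorted(index.search("cat")))  # ['p1', 'p2']
```

The same index that lets the owner retrieve their ‘‘holiday’’ pictures also answers every other user’s query for that tag, which is why tagging sits simultaneously at the centre of personal image management and of photo sharing’s social aspects.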
As a form of new media, the hyperlinked image enables the possibility of non-linear navigation, creating an environment where images can be connected and displayed according to an array of different categories. Significantly, as a tag is actually a hyperlink created by the user, tagging systems more closely resemble Vannevar Bush’s conception of the Memex – where user-created links form loose trails between different documents (Johnson 121). Within Flickr, the simple addition of the tag ‘‘cat’’ to an image immediately connects the image to 100,000 photos of other cats, which can be called to the screen with a single click (see figure 2). In this respect, tagging subverts any attempt to impose narrative order on the snapshot collection, and calls into question a snapshot’s specificity or individual mark of identity. As a process it acts to join images together as communal pools of tourist snaps, sunsets and babies. Tagging is one system which rewards users by providing a tool for search and retrieval of photographs, while at the same time making large collections of photographs legible to other software. Tagging is crucial in helping computers to make meaningful selections of images according to their content or emotional significance. This stands in contrast to mechanically captured metadata: information which has been added by the camera concerning the technical context of the image (e.g. camera make, exposure). By assigning tags to their images, users are in essence describing their photographs in a way that the computer can understand. The inability of computers to interpret pixel information in a way which would allow automatic cataloguing of photographs forces computer scientists to develop systems in which humans assist computers in ‘‘seeing’’ photographs. 
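The contrast between camera-written technical metadata and human-assigned semantic tags can be made concrete. The record below is a hypothetical sketch: field names and the camera model are invented for illustration, not actual EXIF fields:

```python
# Two metadata layers attached to a networked snapshot: technical data
# written automatically by the camera versus semantic tags added by people.
photo = {
    "pixels": b"...",                     # raw image data, opaque to search
    "machine_metadata": {                 # captured by the camera itself
        "camera_make": "ExampleCam",      # hypothetical model name
        "exposure": "1/250",
        "taken_at": "2007-10-16T17:15:00",
    },
    "user_tags": ["wedding", "family"],   # meaning supplied by humans
}

def matches(photo, query):
    # A search engine can only "see" what humans have described:
    # nothing in the pixels or the exposure time says "wedding".
    return query in photo["user_tags"]

print(matches(photo, "wedding"))  # True
print(matches(photo, "joy"))      # False
```

The machine layer records how the image was made; only the human layer records what it means, which is why tagging, and not capture, is what renders the photograph retrievable.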
One example of such ‘‘human–computer’’ collaboration is ‘‘Google Image Labeler’’, 14 an online game in which players score points while labelling elements of photographs presented to them by the software. Despite the fact that the only reward for the human players is the score they accumulate against others, the game is so addictive that it was estimated that it would take Google only several months to catalogue all the images on its servers (von Ahn and Dabbish 319). As a means for giving machines the ability to interpret an image, metadata provides ‘‘a new paradigm to ‘interface reality’’’ (Manovich, ‘‘Metadata’’), providing a means for the image to escape its original context. Stripped of their interfaces, photo-sharing sites function as vast databases of indexed photographs which can be remixed and remapped online as mashups. Hackers and programmers interested in new ways of navigating and visualizing images now create alternative interfaces which pull together images with maps, texts, ratings, newsfeeds and other content online. In this new context, the currency of the snapshot ceases to lie in its narrative or mnemonic value, in its indexicality, or in its status as a precious object. Instead, these practices illustrate the way in which the networked image is data, that is: visual information to be analysed and remapped to new contexts via algorithms. With the mashup tools provided by ‘‘Yahoo! Pipes’’ it becomes possible to ‘‘read’’ The Guardian as a sequence of images pulled from Flickr.

FIGURE 2 Flickr screenshot. Reproduced with permission of Yahoo! Inc. © 2007 by Yahoo! Inc. YAHOO! and the YAHOO! logo are trademarks of Yahoo! Inc. Image credit: Annabel Blair. 
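This kind of mashup can be sketched as a three-stage pipeline: extract keywords from a headline, treat each keyword as a tag query, and emit the resulting image sequence. Everything below is a toy stand-in; a real mashup would call a feed parser and the photo-sharing site’s search API:

```python
# Sketch of reading a news feed as a sequence of tagged images.
# STOPWORDS and PHOTO_DB are toy stand-ins for automated content
# analysis and a photo-sharing site's tag index.
STOPWORDS = {"the", "a", "in", "of", "to"}

PHOTO_DB = {  # hypothetical tag -> image URLs
    "election": ["img/e1.jpg"],
    "flood": ["img/f1.jpg", "img/f2.jpg"],
}

def keywords(headline):
    """Crude content analysis: the significant words of a headline."""
    return [w for w in headline.lower().split() if w not in STOPWORDS]

def images_for(headline):
    """Tag search: replace the headline with images sharing its keywords."""
    result = []
    for word in keywords(headline):
        result.extend(PHOTO_DB.get(word, []))
    return result

print(images_for("Flood in the capital"))  # ['img/f1.jpg', 'img/f2.jpg']
```

The headline never reaches the reader; it survives only as a query that summons other people’s snapshots, which is precisely the sense in which the networked image functions as data.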
In this example, a news feed undergoes automated content analysis to generate a sequence of keywords, which are then used to pull images out of Flickr via a tag search (‘‘Guardian’s Newsblog thru Flickr’’).

I am the camera

The mass appeal of the camera phone as a platform for digital photography (Nyı́ri 1) could be partly explained by the promise to fulfil a desire for unmediated photography: photography that takes place without the intervention of the camera. As Erkki Huhtamo observes, photography was the first mass medium that was susceptible to miniaturization; an inventory of nineteenth-century photographic apparatus includes bow tie cameras, bowler hat and walking stick cameras, and suitcase and book cameras. In the twentieth century, subminiature cameras found their way into finger rings, pocket watches, mechanical pencils and pens (Huhtamo 1). Beyond addressing the voyeuristic urge to be able to photograph without being noticed, these devices indicate a wish for photography with everyday objects instead of a camera, and prefigure contemporary developments in the field of wearable electronics. This desire is motivated not only by the wish to make the act of photography invisible and mobile but also by the fantasy of blurring the boundaries between the act of living and the act of taking photographs. While examining photoblogging practices, Cohen identifies the yearning for photography without photography during an interview with a photoblogger called Ed, who expresses the wish to ‘‘… go around recording, taking pictures by [pause] blinking …’’ (891). Cohen goes on to explain that Ed’s desire is to ‘‘augment his body with the means to generate photographs as he lives; remove duration from the process of taking a photograph; remove the need to reach out and grasp a separate physical device in order to fix the image’’ (892). 
A camera inside a telephone seems like something that might have appeared in a Victorian catalogue of detective cameras – minute, invisible and much more convenient to operate discreetly than a camera concealed in a bowler hat. Eliminating the camera from the practice of photography removed a barrier to spontaneous image capture, allowing anyone with a telephone to participate in the documentation of their immediate environment. The ability to take photographs without becoming a photographer is appealing not only because it makes photography less technological but also because, with the absence of the camera, the photographer does not become an observer but remains intimately connected to the subject of photography. At the same time, the act of wearing a camera at all times opens up a different relationship to space, turning everything in one’s immediate environment into a potential subject for a snapshot.

Digital image abundance

Whilst the traditional album provides a discrete framework for displaying a limited selection of images, photo-sharing websites exist as spheres of image abundance. Accordingly, we see our attention shift from the singular photographic image to image sequences: the image ‘‘pool’’, the ‘‘slideshow’’, the ‘‘photostream’’, the image ‘‘feed’’. At the same time, images from camera phones and digital cameras are not ‘‘frozen moments in time’’ in the way photographs used to be understood. A recent offering from one of the leading camera manufacturers is a 6-megapixel camera that captures sixty full-resolution frames per second (‘‘Casio Developing 300 fps CMOS Based Camera’’); one can only wonder what the ‘‘decisive moment’’ means in these circumstances, and what the difference is between photography and video (beyond the fact that photography now has more frames per second). 
But even if the technological gadgetry does not seduce us, we are still left with an endless number of images available from photo-sharing websites. This inexhaustible stream makes it difficult to develop an intimate relationship with a single image. The assurance of infinite scopic pleasure online encourages a restless, continual search in which the present image, exciting as it is, is only a cover for the next, potentially more promising and thrilling. Caterina Fake, Flickr’s co-founder, argues that ‘‘the nature of photography now is it’s in motion … It doesn’t stop time anymore, and maybe that’s a loss. But there’s a kind of beauty to that, too’’ (Harmon). The possibility of snapshot photography not as composed of static, physical objects but as something more akin to live transmission is also seen in the emergence of screen-based electronic photo frames. The digital photo frame mimics the traditional photo frame, but replaces the print with a flat screen, displaying a constant stream of digital images within a familiar 8×10 proportion. Recent models are marketed for their ability to integrate wirelessly with photo-sharing sites, using RSS to pull image sequences directly down to the mantelpiece. Here, personal photography is imagined as an image ‘‘feed’’: the image is presented as a shifting sequence, able to update itself dynamically within the frame as new images are posted by the user online.

Return of the anonymous snapshot

Within this flow of images the value of a single photograph is diminished, replaced by the notion of a stream of data in which both images and their significances are in a state of flux. Disassociated from its origins, identified only by semantic tags and placed in a pool with other images that share similar metadata, the snapshot’s resonance is dependent on the interface which mediates our encounter with it. 
Corby and Baily, referring to earlier work by Johnson, explain: ‘‘While often presented as some form of untainted fact, statistical visualizations, database interfaces, etc., act as both a membrane for access, and a culturally organized surface that formulates perception of underlying data and informational structures. Simply put, there is no natural connection between the data and its representational form, other than the fact it is digital material’’ (Corby and Baily 113). Stripped of its original context, the personal photograph appears to be ‘‘authorless’’ and can function as a highly versatile vessel for ideological narratives, from news reports to fine art installations to programming experiments. The lack of significance represented by the authorless snapshot now has more to do with its belonging to a class of images that share similar metadata than with photography’s intrinsic polysemy. Put another way, transmitted over networks, the snapshot image signifies an absence of meaning; it is the ambient visual background against which visual narratives are told, distributed and consumed. The work of Paul Frosh (concerned with the online image banks through which stock advertising photographs are now accessed) is significant in this context because he develops a model of analysis for images which are intentionally made to be unseen. In ‘‘Rhetorics of the Overlooked’’ his analysis focuses on the generic images of ‘‘smiling, white middle-class families at the beach, well-groomed businessmen shaking hands, romantic young couples kissing’’ (175) manufactured by the stock photography industry, which he contrasts with the attention-seeking, highly visible and dramatic advertising images which attract most consumer and critical attention. 
As Frosh puts it: ‘‘… I hope to resurrect the significance of the ordinary, the unremarkable and the overlooked in our understanding of how many (if not most) advertising images communicate, and to replace the isolated object of the consumer-critic’s specular interest with an unremarkable but enveloping visual environment’’ (Frosh 173). The distinction Frosh makes between the ‘‘isolated object’’ and the ‘‘visual environment’’ when talking about stock photography has clear implications for the way networked vernacular photography can be understood as ocular ‘‘white noise’’: ‘‘Stock photography … emits the ‘background noise’ of consumer cultures: vast numbers of similar images which are repeatedly produced and preformed as ordinarily familiar and ordinarily desirable’’ (191). Similarly, the networked snapshot is overlooked not simply because it is bland, banal and repetitious but also because it is a non-object. And this is not just in the familiar sense that photographs have always had an insecure presence as objects, being signifiers that we tend to look through rather than look at. Within online networks the individual snapshot is stripped of the fragile aura of the photographic object as it becomes absorbed into a stream of visual data. By giving up the attributes of a photograph as a unique, singular and intentional presence, the networked snapshot is becoming difficult to comprehend with the conceptual tools of visual literacy and photographic theory. The comparative silence of photographic theorists in regard to vernacular photography online could, in part, be due to this. By taking on the appearance of a snapshot, the networked image is camouflaged as a non-political, non-significant and non-ideological site that does not merit textual analysis. This is perhaps a source of the persistence and power of the networked image. 
Invisibility, of course, is not without its benefits; not only does it help to evade the analysis, criticism and deconstruction that are the fate of the louder, more visible images, but through being unnoticed, vernacular images appear normative, all-encompassing, and inherently benign. In its capacity as readymade, mass-produced and slightly silly, the snapshot perpetuates the notion of the world going about its business in a natural way. The practice of tagging, which results in millions of images identified with ‘‘holiday’’, ‘‘party’’, ‘‘wedding’’, ‘‘family’’, reinforces a sense of identity and unity which overwhelms differences and distinctions. These images advance a sense of uniform, global satisfaction with the way things are without being called to account for their lyrical promotion of ‘‘universal human nature’’ (Barthes, Mythologies 101). The self-image of the deprived, the cut-off, the bombed-out does not exist online because the rhetoric of personal photography is anchored in a sense of individual and social identity and the pathos of control over the means of image making. Within the context of the networked snapshot, this means access to the Internet, to electricity and to mobile telephone networks. The great talent of the online snapshot is to make specific historical conditions appear natural and universal. What Paul Frosh says about the stock image rings true of the vernacular photograph too: ‘‘it erases indexical singularity, the uniqueness of the instance, in favor of uniformity and recurrence – the systematic iconic repetition of staged image types’’ (189). Through the semantic mechanisms of tagging and metadata, the specificity of each online snapshot is obliterated by the way in which a single hyperlinked keyword can group together thousands of disparate images. Can 4,150,058 photographs tagged with ‘‘party’’ be wrong? 
Notes

1 Members of the public were invited to contribute to the exhibition ‘‘How We Are Now: Photographing Britain’’ at Tate Britain, London, UK, 22 May–2 September 2007, by submitting photographs to a Flickr group.
2 ‘‘We Are All Photographers Now!’’, Musée de l’Elysée, Lausanne, Switzerland, 8 February–20 May 2007.
3 See examples at the Panoramio website, Flickr’s image map, and Woophy.
4 See the image in its original context, posted by Alfie Dennen to his blog at Stacey’s request.
5 In studies by Van House et al. (‘‘The Uses of Personal Networked Digital Imaging’’ 1856) and Okabe and Ito it is suggested that the camera phone has enabled the freedom to explore new paradigms of visual storytelling and personal expression.
6 Roland Barthes sums up this sentiment in Camera Lucida: ‘‘I am not a photographer, not even an amateur photographer: too impatient for that: I must see right away what I have produced[.]’’ (9).
7 Only a generation ago the average number of photographs taken by a family during one year is estimated to have been three to four rolls of film (King 9; Chalfen 14).
8 At the same time, the marketing of recent camera phones suggests that the mobile phone is now being re-constructed as a mobile multimedia computer. The latest phones from Nokia are described in promotional literature as ‘‘multi-media devices’’ and elsewhere as ‘‘multi-media computers’’ (‘‘Nokia Introduces the Next Story in Video with the Nokia N93’’).
9 For an evaluation of on-camera sharing practices, see E. Salwen, ‘‘Beyond Chimping’’, AfterCapture Magazine June/July 2007.
10 In this respect 2004 perhaps marks the beginning of this shift: it was a year of massive growth for digital cameras, and the year in which sales of camera phones outstripped sales of digital cameras (which in turn outsold film cameras) (Raymond). 
In the same year, the term Web 2.0 was coined (O’Reilly), and Google revolutionized online storage with the introduction of 1GB email accounts.
11 See Microsoft’s multimedia coffee table covered in Popular Mechanics.
12 Compare Flickr’s ‘‘interestingness’’, a ranking algorithm for seeking out the ‘‘best’’ images on their servers.
13 Flickr website (accessed 16 Oct. 2007, 5:15 p.m.).
14 To play Google Image Labeler, visit the game’s website.

Works cited

Adams, A. The Negative. Boston: Little, Brown, 1994.
Barthes, R. Camera Lucida: Reflections on Photography. Trans. Richard Howard. London: Vintage, 1993.
———. Mythologies. London: Vintage, 1993.
Batchen, G. Forget Me Not: Photography and Remembrance. New York: Princeton Architectural Press, 2004.
Bourdieu, P. Photography: A Middle-brow Art. Cambridge: Polity, 1990.
Burgess, J. ‘‘Vernacular Creativity and New Media.’’ Diss. Queensland U of Technology, 2007. 1 Oct. 2007.
Cascio, J. ‘‘The Rise of the Participatory Panopticon.’’ Worldchanging.com. 4 May 2005. 2 Apr. 2007.
‘‘Casio Developing 300 fps CMOS Based Camera.’’ Digital Photography Review. 31 Aug. 2007. 1 Oct. 2007.
Chalfen, R. Snapshot Versions of Life. Bowling Green, OH: Popular Press, 1987.
Champ, H. ‘‘100,000,000th.’’ Weblog post. Flickr Blog. 15 Feb. 2006. 10 Aug. 2007.
Cohen, K. ‘‘What Does the Photoblog Want?’’ Media, Culture & Society 27 (2005): 883–901.
Colin, C. Citizen Journalist: Cameraphones, Photoblogs, and Other Disruptive Technologies. Indianapolis: New Riders, 2005.
Corby, T., and G. Baily. ‘‘System Poetics and Software Refuseniks.’’ Network Art: Practices and Positions. Ed. Tom Corby. London: Routledge, 2006. 109–27.
Dennen, A. ‘‘London Underground Bombing, Trapped.’’ Weblog entry. Alfie’s Moblog. 7 July 2005. 20 Sept. 2007.
‘‘Do You Have 10,000 Digital Photos? Survey Says You’re Not Alone.’’ Business Wire. 13 Dec. 2006. 12 Apr. 2007. 
‘‘The Downside of Digital Snaps.’’ Sydney Morning Herald Online. 17 Sept. 2007. 1 Oct. 2007.
Frosh, P. ‘‘Rhetorics of the Overlooked: On the Communicative Modes of Stock Advertising Images.’’ Journal of Consumer Culture 2 (2002): 171–96.
‘‘Fun Statistics.’’ Weblog entry. Photobucket Blog. 3 Mar. 2005. 6 May 2007.
‘‘Generation C: An Emerging Trend and New Business Opportunity.’’ Trendwatching.com. 2 Oct. 2007.
Gillmor, D. We the Media: Grassroots Journalism by the People, for the People. Sebastopol, CA: O’Reilly, 2004.
‘‘Guardian’s Newsblog thru Flickr.’’ Yahoo Pipes. 19 Feb. 2007. 1 Mar. 2007.
Harmon, A. ‘‘We Simply Can’t Stop Shooting.’’ International Herald Tribune. 7 May 2005. 5 May 2007.
Huhtamo, E. ‘‘An Archaeology of Mobile Media.’’ Keynote address, ISEA, 2004. 2 Aug. 2007.
Johnson, S. Interface Culture: How New Technology Transforms the Way We Create and Communicate. New York: HarperCollins, 1997.
Kindberg, T., et al. ‘‘How and Why People Use Camera Phones.’’ Technical report. HP Labs and Microsoft Research, 2004.
———. ‘‘The Ubiquitous Camera: An In-depth Study of Camera Phone Use.’’ IEEE Pervasive Computing 4.2 (2005): 42–50.
King, B. ‘‘Photo-consumerism and Mnemonic Labor: Capturing the Kodak Moment.’’ Afterimage 21 (1993): 9–13.
Lanzagorta, M. ‘‘Interactive Visualization of a High-resolution Reconstruction of the Moon.’’ Computing in Science and Engineering 4.6 (2002): 78–82. 18 Oct. 2007.
‘‘Life Caching: An Emerging Consumer Trend and Related New Business Ideas.’’ Trendwatching.com. 2 Oct. 2007.
Locke, M. ‘‘Let Us Grind Them into Dust! The New Aesthetics of Digital Archives.’’ Hybrid: Living in Paradox. Ars Electronica 2005. Ed. Gerfried Stocker and Christine Schöpf. Ostfildern-Ruit: Hatje Cantz, 2005. 390–93.
Mann, S., J. Fung, and R. Lo. ‘‘Cyborglogging with Camera Phones: Steps towards Equiveillance.’’ Proceedings of the 14th Annual ACM International Conference on Multimedia, Santa Barbara, CA, USA, October 23–27, 2006. Multimedia ’06. New York: ACM. 177–80.
Manovich, L. The Language of New Media. Cambridge, MA: MIT P, 2001.
———. ‘‘Metadata, Mon Amour.’’ Homepage. 2002. 1 Oct. 2007.
Metz, C. ‘‘Photography and Fetish.’’ The Critical Image: Essays on Contemporary Photography. Ed. Carol Squires. Seattle: Bay, 1990. 155–64.
‘‘Microsoft Digital Lifestyle.’’ Microsoft Corporation, 2007. 20 Sept. 2007.
Mitchell, W. J. The Reconfigured Eye: Visual Truth in the Post-photographic Era. Cambridge, MA: MIT P, 1992.
Naaman, M., et al. ‘‘Leveraging Context to Resolve Identity in Photo Albums.’’ Proceedings of the 5th ACM/IEEE-CS Joint Conference on Digital Libraries, Denver, CO, USA, June 7–11, 2005. JCDL ’05. New York: ACM. 178–87.
Snavely, N., S. Seitz, and R. Szeliski. ‘‘Photo Tourism: Exploring Photo Collections in 3D.’’ ACM Transactions on Graphics 25.3 (2006): 1–11. 1 Sept. 2007.
‘‘Nokia Introduces the Next Story in Video with the Nokia N93.’’ Nokia Global. 25 Apr. 2006. 2 Oct. 2007.
Nyı́ri, K. ‘‘The Mobile Phone in 2005: Where Are We Now?’’ Proceedings of Seeing, Understanding, Learning in the Mobile Age, Hungarian Academy of Sciences, Budapest, 28–30 April 2005. 3 Sept. 2007.
O’Reilly, T. ‘‘What Is Web 2.0? Design Patterns and Business Models for the Next Generation of Software.’’ O’Reilly Network. 30 Sept. 2005. 5 Oct. 2007.
Okabe, D. ‘‘Emergent Social Practices, Situations and Relations through Everyday Camera Phone Use.’’ Paper presented at Mobile Communication and Social Change, 18–19 Oct. 2004, Seoul, Korea. 2 Jun. 2007.
Okabe, D., and M. Ito. ‘‘Camera Phones Changing the Definition of Picture-worthy.’’ Japan Media Review. 29 Aug. 2003. 5 Apr. 2006.
Pap, S., E. Lach, and J. Upton. ‘‘Telemedicine in Plastic Surgery: E-Consult the Attending Surgeon.’’ Plastic & Reconstructive Surgery 110.2 (2002): 452–56.
Raymond, E. ‘‘22.8 Million Digital Cameras Sold in 2004 So Far; Up 43% from 2003.’’ Digital Camera Info. 7 Aug. 2007.
Redman, R. ‘‘Jobs at Macworld: Apple Driving Digital Lifestyle.’’ Channel Web Network. 17 July 2002. 28 Sept. 2007.
Ritchin, F. In Our Own Image: The Coming Revolution in Photography. New York: Aperture, 1990.
Robins, K. ‘‘Will the Image Move Us Still?’’ The Photographic Image in Digital Culture. Ed. Martin Lister. London: Routledge, 1995. 29–50.
Rodden, K., and K. R. Wood. ‘‘How Do People Manage Their Digital Photographs?’’ Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, Ft. Lauderdale, FL, USA, April 5–10, 2003. CHI ’03. New York: ACM, 2003. 409–16. 2 Feb. 2007.
Rosler, M. ‘‘Image Simulations, Computer Simulations: Some Considerations.’’ Ten.8 Digital Dialogues: Photography in the Age of Cyberspace 2.2 (1991): 52–63.
Slater, D. ‘‘Domestic Photography and Digital Culture.’’ The Photographic Image in Digital Culture. Ed. Martin Lister. London: Routledge, 1995. 129–46.
Steinberg, D. H. ‘‘At MacWorld, Jobs Shows iPhoto & New iMac.’’ O’Reilly Mac Developer Center. 1 Sept. 2002. 12 Aug. 2007.
Tagg, J. The Burden of Representation. New York: Palgrave Macmillan, 1988.
Tatsuno, K. ‘‘Current Trends in Digital Cameras and Camera-phones.’’ Science and Technology Trends 18 (2006): 36–44. 24 Sept. 2007.
Van House, N. ‘‘Flickr and Public Image-sharing: Distant Closeness and Photo Exhibition.’’ CHI ’07 Extended Abstracts on Human Factors in Computing Systems (CHI 2007), San Jose, CA, USA, April 28–May 3, 2007. New York: ACM, 2007. 2717–22.
Van House, N., and M. Davis. 
‘‘The Social Life of Cameraphone Images.’’ Proceedings of the Pervasive Image Capture and Sharing: New Social Practices and Implications for Technology Workshop (PICS 2005) at the Seventh International Conference on Ubiquitous Computing (UbiComp 2005) in Tokyo, Japan. 2 Sept. 2007. . Van House, N., et al. ‘‘The Uses of Personal Networked Digital Imaging: An Empirical Study of Cameraphone Photos and Sharing.’’ Extended Abstracts of the Conference on Human Actors in Computing Systems (CHI 2005), Portland, Oregon, April 2–7, 2005. New York: ACM, 2005. 1853–56. 2 Apr. 2007. . von Ahn, L., and L. Dabbish. ‘‘Labeling Images with a Computer Game.’’ Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, Vienna, Austria, April 24– 29, 2004. CHI ’04. New York: ACM, 2004. 319–26. 11 Oct. 2007. . Daniel Rubinstein is a Senior Lecturer in the Department of Arts, Media and English at London South Bank University where he directs a BA in Digital Photography. Previously he studied history at Tel-Aviv University and photography at London College of Printing and has written about camera phone photography. Currently he is working on a PhD thesis concerned with the relationship between photography, philosophy and culture. Address: Department of Arts Media and English, Faculty of Human Sciences, London South Bank University, Borough Road, London SE1 0AA, UK. [email: rubinsd@lsbu.ac.uk] Katrina Sluis is a Senior Lecturer in the Department of Arts, Media and English at London South Bank University where she co-ordinates the BA (Hons) Digital Media Arts. Her teaching and scholarly interests include cultures of mobile multimedia, critical theory and digital art. As a practising artist, she works with painting, photography and digital media to explore materiality, archiving and transmission in relation to the photographic image. Address: Department of Arts Media and English, Faculty of Human Sciences, London South Bank University, Borough Road, London SE1 0AA, UK. 
[email: sluiskp@lsbu.ac.uk]

----

Abstract

This study examined survey data from professional credentialed members of the American Art Therapy Association and 8 follow-up interviews to determine how art therapists adopt or reject technology and/or new digital media for therapeutic use with their clients. Using Rogers's (2003) "diffusion of innovation" model, the author identified a two-stage process of media adoption used when respondents were introduced to new media with potential artistic or therapeutic applications. The Media Adoption Stage Model described in this article is an iterative process of selection, experimentation, and reevaluation of art media based on their properties. The findings have implications for art therapy, art therapy education, and personal use of technology.

The recent emergence of digital artistic media has provided researchers with a unique opportunity to study how therapists determine whether and how new image-based media can be implemented with clients to promote positive therapeutic outcomes. This article identifies the stages of the media adoption process used by art therapists based on a survey of professional practitioners. Research by Peterson, Stovall, Elkins, and Parker-Bell (2005) formulated the term Digital Imagery Technology (DIT) to denote a digital computer-based device or software program that can be used to produce art. As DIT continues to evolve, research into the adoption of these technologies by art therapists becomes increasingly important.

Throughout the history of art therapy the adoption of innovations has resulted in the use of new, improved, and safer media for treatment. Toxic chemicals such as mercury and lead have been removed from art materials and replaced with safer alternatives (Jacobs & Milton, 1994).
New innovations like Polaroid and digital cameras originally were adopted for phototherapy because they did not require film to be sent out for development (Wolf, 2007). Once certain media are adopted for use, they can continue to be utilized, be replaced with better materials, or be discontinued altogether. The decision-making process behind the adoption, modification, continuance, or discontinuance of art materials is an essential element of treatment, as it reflects trends and alterations in the therapeutic tools that art therapists present to their clients. What has not been studied in depth, however, is the thought process that art therapists undergo as they determine whether a new medium has potential as a therapeutic tool. This study focused on two research questions. First, how do art therapists determine whether to adopt or reject existing and emerging DITs as expressive therapeutic tools? Second, do therapists progress through an identifiable decision-making process when presented with a new medium that may have artistic and/or therapeutic applications with clients?

Review of the Literature

The first question examined by this study was whether the adoption of DIT for use in art therapy with clients differs from the adoption of any other innovation. The diffusion model developed by Rogers (2003) is applicable across multiple disciplines, from anthropology to marketing, and was a suitable research perspective for this study. However, the model by itself fails to address the use of an innovation with another person and thus is insufficient to explain how therapists decide to use DIT with their clients. To assist in the creation of a suitable model for the context of art therapy, I focused on the adoption process and adopter categories.

Adoption Process

The adoption process for media innovation comprises five stages: awareness, interest, evaluation, trial, and adoption (Rogers, 2003).
Awareness is the stage in which an individual is exposed to an innovation but lacks sufficient information to decide whether it is useful. When interested in the innovation, he or she first will seek more information about it and then will evaluate the innovation by applying it to present and anticipated situations, deciding whether to try it or not. After trying out the idea the individual finally reaches the adoption stage, in which he or she decides whether to continue or to discontinue use of the innovation. Rogers's adoption model was important for this study because it explained how therapists arrived at personal knowledge and use of an innovation. However, it did not explain how that knowledge could be transferred to their clients.

Art Therapy: Journal of the American Art Therapy Association, 27(1) pp. 26-31 © AATA, Inc. 2010

The Media Adoption Stage Model of Technology for Art Therapy
Brent Christian Peterson, Williamsburg, VA

Editor's note: Brent Christian Peterson, PhD, is an art therapist and graduate of the Florida State University Art Education and Art Therapy doctoral program. Contact brentphd@gmail.com to obtain a copy of the survey instrument or to correspond with the author. The author would like to thank Drs. David Gussak, Marcia Rosal, Penelope Orr, and Sande Milton for their guidance. This study was funded in part by a grant received from The Florida State University Office of Graduate Studies.

Adopter Categories

All users in a system do not adopt a new innovation at the same time. Adopter categories identify when an individual implements a new innovation. Rogers's (2003) categories consist of innovators, early adopters, members of the early majority, members of the late majority, and laggards. For an adoption process model to be credible and useful it has to apply to innovators as well as to laggards.
Dewey (1980) stated that the technological arts are an art discipline to the extent that they "carry over into themselves something of the spontaneity of the automatic arts" (p. 227). The automatic arts are based on the use of the body, such as singing and dancing, rather than on external media. Therefore, digital media such as computer-generated graphics and digital photography (both of which are considered by many as acceptable but much debated art forms) provide a precedent for the inclusion of DIT as an art-making tool because it can meet aesthetic expectations. For art therapists to use technology in the therapeutic sense, the technological object has to be accepted as an art-making tool as well as seen as appropriate for treatment. Gussak and Nyce (1999) argued that adoption often does not take place because the market has not produced the tools art therapists need or want. Often therapists have to use tools created for other professions, in an adaptive function, to treat their clients.

The use of imagery is an essential and fundamental component of art therapy treatment. Because art therapists are also artists, they have personal experience with most media and processes, which is how they become acquainted with such media and help others learn to use them comfortably (Rubin, 2010). Many potentially useful technologies for art therapists fall into the categories of DIT and HIT, or Health Information Technology. These technologies include but are not limited to electronic health records, e-mail communication, clinical alerts and reminders, computerized provider order entry, computerized decision support systems, hand-held computers, electronic information resources technology, and electronic monitoring systems for therapy. Personal experience with DIT and HIT remains important because the educational standards of the American Art Therapy Association (2007) do not require coursework on the uses of technology in art therapy treatment.
Therefore, art therapy students may not be learning technology; their educational programs may need to revisit its relevance to contemporary techno-cultural contexts (Kapitan, 2007). Art therapists who use technology generally have had little formal training (Orr, 2006). Asawa (2009) found that emotional factors such as anxiety were additional barriers to the adoption of technology. Thus, there is a need for research to determine how technology can best be integrated into art therapy education and treatment practices.

Therapists have been adopting technology for use in mental health treatment since the invention of the telephone. Murphy (2003) reviewed the historical adoption of telephones, recording equipment, and computer technologies for use in psychological practice. He concluded that psychologists had an initial resistance to the application of these technologies due to ethical concerns and the need to modify standard practices. The first wave of technology adoption consisted of office utilities such as telephones, fax machines, copy machines, and computers for billing systems and word processing use. The second wave included computer assessment and interviewing programs. After the initial resistance, however, the relative advantages of technology motivated widespread adoption and application. Close attention has been given to the impact computers have on professional treatment practices (Austin, 2009; Kapitan, 2007; Klorer, 2009; Potash, 2009). Computers are assisting in the treatment of obsessive–compulsive disorder (Greist et al., 2002), traumatic illness (Collie & Cubranic, 2002), anxiety and depression (Proudfoot et al., 2003), aphasia (Wallesch & Johannsen-Horbach, 2004), speech and language therapy (Mortley, Wade, & Enderby, 2004), and children with medical illnesses (Thong, 2007). These authors have found that computers often have specific properties that are appealing for the treatment of their patients' specific needs.
For example, people with speech delays can now use a microphone to speak into a computer and have their speech analyzed without the presence of a speech therapist. This method allows them to receive feedback any time they are willing to practice. As more sophisticated software and hardware programs are developed, new technologies likely will have an even greater impact on health care practitioners and the services they render.

The process by which DIT becomes adopted for art therapy treatment is not well documented. The use of technology in art therapy has met with resistance (Asawa, 2009; Thong, 2007) or has proceeded in large part outside of graduate art therapy coursework. This study explored how personal experiences with HIT and DIT factored into a therapist's decision to implement a particular technology as a therapeutic medium with clients.

Method

The Florida State University Human Subjects Committee approved the mixed methods study. I used a survey instrument to sample credentialed professional members of the American Art Therapy Association (AATA) in order to obtain direct reports of personal experiences with media adoption from experienced practitioners. Next I purchased a list directly from AATA containing the contact information of randomly selected credentialed professional members. Of the 1,000 professional AATA members on the list, 785 met all the contact information criteria needed for inclusion in the study and each was assigned a survey ID number. I sent an e-mail to each of the respondents informing them of the study and inviting them to participate. E-mail delivery confirmation notices were not received for 51 participants, who were immediately mailed a survey packet.

Participants completed the survey using either postal mail or the Internet for two reasons. First, the method by which each participant completed the study provided insights into his or her use of technology.
Second, two formats for completing the survey were offered in order to increase the response rate. The use of mail surveys was deemed essential for this study so as not to alienate individuals who do not use technology. The postal mail survey and online survey were identical.

PETERSON 27

Of the 785 total participants, 136 completed and returned the survey. This resulted in a response rate of 17.4%. I assigned each participant an Overall Technology Adoption Score (OTAS) based on his or her responses to the close-ended survey items. Interview participants were chosen from this sample by means of their OTAS and written survey responses, both of which were used to identify them as representative members of adopter categories. Maximum variation sampling was the purposeful sampling technique utilized to determine which participants would be interviewed. This sampling technique allowed me to identify responses from individuals in the innovator, early majority, and laggard adopter categories (Rogers, 2003).

Eight participants were chosen for follow-up interviews. I conducted interviews with 3 participants identified as "innovators," 3 participants identified as "laggards," and 2 participants identified as members of the "early majority" (Rogers, 2003). Each interview was conducted over the telephone. With each participant's consent, I recorded our conversation and then transcribed and coded the interview data for patterns and themes using semantic content analysis (Lemke, 2005). This method helped determine whether agreement pertaining to the use of, and reasons for using, DIT could be established among interview participants.

A single database was generated by combining the surveys that were returned via the Internet and postal mail. Qualitative information from the postal surveys was typed and then verified for accuracy through peer review. I then used the corrected spreadsheet for computer-based statistical analyses (SPSS) to organize and summarize the data.
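The article does not publish the formula behind the OTAS or the exact cutoffs used to place respondents into adopter categories. Purely as an illustration, the sketch below assumes the OTAS is a simple sum of close-ended item scores and assigns categories by rank using Rogers's (2003) standard adopter shares (2.5% innovators, 13.5% early adopters, 34% early majority, 34% late majority, 16% laggards); the function names and sample data are hypothetical, not the author's.

```python
# Illustrative sketch only: the study does not report the OTAS formula.
# Assumption: OTAS = sum of close-ended item scores; categories are
# assigned by rank using Rogers's (2003) cumulative adopter shares.

def otas(item_scores):
    """Overall Technology Adoption Score as a simple sum (assumed)."""
    return sum(item_scores)

def assign_adopter_categories(scores):
    """Rank respondents by OTAS (highest first) and bucket them by
    cumulative share: 2.5% innovators, 16% early adopters, 50% early
    majority, 84% late majority, 100% laggards."""
    cutoffs = [(0.025, "innovator"), (0.16, "early adopter"),
               (0.50, "early majority"), (0.84, "late majority"),
               (1.00, "laggard")]
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    n = len(ranked)
    categories = {}
    for rank, (respondent, _score) in enumerate(ranked):
        fraction = (rank + 1) / n  # respondent's cumulative position
        for cutoff, label in cutoffs:
            if fraction <= cutoff:
                categories[respondent] = label
                break
    return categories

# Hypothetical close-ended responses for five respondents.
surveys = {"R01": [5, 5, 4, 5], "R02": [3, 2, 2, 3], "R03": [1, 1, 2, 1],
           "R04": [4, 4, 3, 4], "R05": [2, 2, 1, 2]}
scores = {rid: otas(items) for rid, items in surveys.items()}
print(assign_adopter_categories(scores))
```

With only five hypothetical respondents, no one falls inside the 2.5% innovator band; the top scorers land in the early majority, which is one reason a reasonably large sample is needed before the category labels become meaningful.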
Results

Participants provided feedback that illuminated why technology adoption takes place in their art therapy practices. They stated that a client's response to a form of DIT was the basis for their decision as to whether that DIT was an effective therapeutic medium. Respondents also identified two criteria for determining a client's response: ease of use and "trialability" (the degree to which a new product is capable of being tried on a limited basis). The art therapists stated that a medium must be simple enough for the client to learn how to use it effectively. It must also have a trial-and-error quality that allows a client to explore the medium's possibilities safely. Some participants asserted that DIT provided clients with an opportunity to learn new skills, which positively affected the clients' self-esteem. The evaluation of DIT as a therapeutic tool was reported to be no different than the evaluation of nondigital tools. Almost all participants agreed that if a medium could safely produce a desirable change in a client, then it warranted inclusion in art therapy treatment. The fact that a medium was digital or nondigital was less relevant than its capacity to produce change.

Participants listed therapeutic tasks for which they use DIT that were not listed in the survey. Clients who cannot type on a keyboard use voice recognition software to create stories and journals. Therapists and people with disabilities have adopted DIT for the creation of digital imagery and motion pictures. In addition, the participants observed that computer technology was effective for individuals who did not want to get messy during their art therapy treatment. DIT also was seen as being effective for people with disabilities — especially those with mobility disabilities — who may need to connect with others via e-mail, web cams, and online communities.
The results demonstrate that some art therapists have found that DIT has expanded treatment options.

Adoption Process

The survey results indicated that the decision to adopt new technology was most often influenced by replacement discontinuance, meaning the replacement of one innovation with a superior one (Rogers, 2003). The study identified four main examples: (a) e-mail replacing telephone calls, (b) digital photography replacing various forms of artwork storage and archiving, (c) assistive technologies replacing traditional art media, and (d) computer-based word processing replacing handwriting and typewriters. Art therapists make conscious decisions to replace old practices with new ones when they find them to be advantageous. For example, many participants replaced film cameras with digital ones because they found the latter to be more effective for framing photos using the camera's LCD screen. Digital photography also allowed for easy editing, was more cost effective, provided immediate access to the images, and was not dependent on chemicals and darkrooms. Interviewees often based their decisions to adopt an innovation upon the capacity of the DIT to improve on established media.

Adoption Factors

I identified several factors that influenced the adoption of DIT and HIT. Among them was cost. Respondents stated that a digital single lens reflex (SLR) camera was the device that was most desired but was difficult to obtain due to its high price. They viewed the digital SLR camera as the ideal combination of convenience, image quality, and continuity with existing photographic knowledge. Participants who worked in medical settings were an exception: they reported that they were able to purchase various forms of technology due to larger material budgets and existing widespread technology adoption in their field.

Cost also appeared as an adoption deterrent for those who had access to DIT but chose not to use it with clients.
Art therapists often had to use personal funds to buy technology that they wanted to use with their clients. One participant stated, "I am not letting my young clients touch my brand new digital camera." Many art therapists reported owning art materials that they reserved for their own personal use. Whether that material was expensive oil paint or a digital camera, therapists may not have presented every tool that they had to their clients, based on a desire to protect or reserve some media for themselves. Thus, personal adoption did not always lead to client use.

The value of DIT in providing the therapists and/or their clients with new capabilities was an additional adoption factor. A digital camera and computer provide many of the features once only found in darkrooms. Art therapists can crop, change the exposure, and modify the contrast and brightness of an image without the need of chemicals. Digital technology also allows individuals to change color photos to black and white, add special effects, and digitally add or remove portions of an image, each of which fostered adoption and continued client use. Art therapists in different adopter groups were found to use different features of the digital darkroom. Innovators often edited and removed portions of the image, used more than one program to edit the same image, and developed custom filters for their work. Those in the early majority group often cropped or turned color images to black and white, and removed red eye from images. Laggards mostly limited themselves to selecting the images they wanted to print; at times they also made simple crops and one-click software edits to images. Each adopter category consisted of art therapists with varying degrees of technological skill related to image manipulation. Nonetheless, each fulfilled the same desire of gaining control over the editing processes of their digital imagery.
An art therapist's occupation was an additional media adoption factor. For example, there was a significantly greater use of LCD projectors and digital camcorders by art therapy educators. These two devices were used for presentations, teaching, and providing student interns with feedback on their presentations and therapy sessions. Art therapists teaching in higher education, as compared to art therapy practitioners, used additional and/or specific technologies. These significant differences between art therapy educators and the general survey population offer insight into how individuals in related occupations might use similar forms of technology out of common necessity. This was also true of participants who worked with special needs populations: these art therapists were using assistive technologies more often than those who were not working with clients with special needs. The occupational tasks that respondents carried out were strong indicators of the forms of DIT and HIT they had adopted.

Forced adoption, which is the requirement to use HIT despite a person's desire not to do so, was found to be a common adoption factor. Although no participant reported being forced to use HIT with their clients as part of their treatment, forced adoption tasks did include computerized recordkeeping and billing, online teaching among art therapy educators, and online proposal submission processes for professional conference presentations. Participants who worked in medical, educational, and correctional settings reported having to use technology more often than those who worked in studio art settings and in private practice. For some, technology adoption began as a mandate of their employers.

Art therapists based much of their decision to adopt or reject DIT on their clients' responses to those media.
For most, there was no difference in how they determined whether to adopt a traditional material such as clay or a digital camera as an artistic medium. The medium's inherent qualities were the key component of the decision-making process (Orr, 2005; Thong, 2007). Whether DIT is an effective art medium had less to do with the fact that a device was digital than with whether a given medium's inherent properties could be implemented therapeutically with clients. Thus, the model for the adoption of DIT and traditional media should be identical, because adoption was based primarily on media properties.

Discussion: Media Adoption Stage Model

A media adoption stage model (MASM) was created from the survey results to address the question of whether art therapists progress through an identifiable decision-making process when presented with new media that may have professional, artistic, and/or therapeutic applications (Figure 1). The results indicate that art therapists progress through a two-stage diffusion of Rogers's (2003) innovation model for new media in art therapy treatment. Art therapists first proceed through Stage I Adoption, which comprises the five stages in Rogers's innovation–decision process: knowledge ➛ persuasion ➛ decision ➛ implementation ➛ confirmation. During the knowledge stage participants became aware that an innovation existed, either by chance or because they were looking to solve a need. In the persuasion stage art therapists weighed the relative advantages of the device, along with its work-related compatibility, complexity, and novelty. Those with favorable attitudes toward previously adopted forms of DIT and HIT were more likely to proceed quickly through the persuasion stage with higher adoption rates than those without this experience.
Media adoption then progressed to the decision stage, where a decision was made to reject an item outright or to actively engage in activities that assisted in deciding whether to adopt or reject an innovation. This stage of the decision-making process was very important because ownership, access, and use of a device were not required for its rejection and were factors that helped to negate its adoption. In the implementation stage adoption transferred from a mental concept to physical activity, and uses for the device were determined and then implemented. Finally, during the confirmation stage, art therapists confirmed their adoption of the medium and then chose to continue adoption or to reject the medium. Once rejected, an art therapist might revisit the medium for future adoption, for example, by considering a newer version of a device that might have more desirable features. Once the art therapist personally adopted a medium, it could then be incorporated into practice as an expressive art-making tool.

The MASM demonstrates that the adoption process of media for clients is an extension of the art therapist respondents' personal adoption processes (Figure 1). The study found that respondents bridge personal media adoption (Stage I Adoption) with a client-focused secondary adoption process (Stage II Adoption). This "media properties bridge" between the personal and therapeutic adoption of media represents the process whereby the therapist determined that a medium possessed inherent qualities that could lead to therapeutic applications. This determination was based on the art therapist's experiences with a medium as well as knowledge gained from outside sources such as books, journals, or presentations. The bridge analogy is appropriate for two reasons.
First, a bridge denotes that information can go back and forth across it; in the case of media adoption, experience and feedback loops alter whether the therapist will continue to view the medium as an expressive tool. Second, the bridge connects the two stages of adoption; they are not independent of one another. Once an art therapist's experience with a medium crosses the media properties bridge, Stage II Adoption begins.

Stage II Adoption of the MASM consists of three iterative stages: decision II, implementation II, and confirmation II. The media properties bridge serves a function in Stage II Adoption that is similar to the persuasion stage of Stage I Adoption, because the therapist must be persuaded by a medium's inherent properties to introduce it to clients as a therapeutic tool. The decision II stage, in which a therapist determines that a medium's inherent properties may have therapeutic applications, leads to the development of treatment tasks using that medium. Some art therapists, for example, reported that they may compare a new medium's properties with another, similar medium, such as comparing a digital camera with a film camera. If it was determined that the medium was not suitable for treatment, it was rejected and returned across the media properties bridge to the confirmation stage of Stage I Adoption, where it remained available for reevaluation. Once the art therapist approved a medium, he or she proceeded to implementation II, the next stage of decision making: the therapist carries out treatment with clients and then decides on continued use. The final stage, confirmation II, was dependent upon the clients' responses, the therapist's comfort level with specific media, and the medium's relative advantage over already implemented media. At the end of Stage II, the art therapist returns to the beginning of the decision-making process for a subsequent evaluation of the medium's properties.
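One way to read the two-stage flow just described is as a small state machine: Stage I ends in personal adoption or rejection, the media properties bridge gates entry into Stage II, and an unfavorable client response sends the medium back to Stage I confirmation for reevaluation. The sketch below is this editor's illustration of that reading, not the author's model as published; the function name and the boolean predicates standing in for the therapist's judgments are hypothetical.

```python
# Illustrative sketch of the Media Adoption Stage Model as a state machine.
# Stage names follow the article's text; the boolean predicates are
# stand-ins for the therapist's judgments at each decision point.

def masm(persuaded, personally_adopted, crosses_bridge, favorable_response):
    """Trace one pass through the model; returns (visited stages, outcome)."""
    trace = ["knowledge", "persuasion"]
    if not persuaded:
        # Rejected during Stage I; ownership or use is not required to reject.
        return trace + ["decision"], "rejected (Stage I)"
    trace += ["decision", "implementation", "confirmation"]
    if not personally_adopted:
        return trace, "rejected (Stage I, available for reevaluation)"
    # Media properties bridge: inherent qualities must suggest therapeutic use.
    if not crosses_bridge:
        return trace, "personal use only"
    trace += ["media properties bridge", "decision II",
              "implementation II", "confirmation II"]
    if favorable_response:
        return trace, "continued therapeutic use"
    # An unfavorable client response returns the medium across the bridge
    # to Stage I confirmation, where it remains available for reevaluation.
    return trace, "returned across bridge (Stage I confirmation)"

stages, outcome = masm(True, True, True, True)
print(outcome)
```

The explicit return paths make the model's key asymmetry visible: rejection in Stage II does not discard the medium, it only parks it back at Stage I confirmation, so the same medium can later be re-adopted for a different client.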
If the medium has a favorable response with clients, the therapist implements it further. If clients have a less than favorable response, the therapist rejects the medium as a therapeutic tool and returns it across the media properties bridge to the confirmation stage of Stage I Adoption. Stage II Adoption is tailored to individual clients and is not necessarily universal, in that a therapist can reject the use of a medium with one client and accept it with another. Therapists can also accept or reject a medium for use with all clients.

Conclusion

The Media Adoption Stage Model was formulated by integrating Rogers’s (2003) diffusion model, research from the fields of mental health and art education, and the survey and interview data from this study. The MASM coincides with the therapist’s responsibility to protect his or her clients. Stage I Adoption presents the opportunity to personally experiment with a medium to determine its properties and personal uses. Only after therapists feel confident in their personal use of a medium — despite the possibility that a client may have greater proficiency with it — does it become implemented with clients. Therapists then proceed to ascertain the medium’s therapeutic potential and reevaluate its applications, which allows the therapists to continue to adopt or reject the medium. Factors such as cost, new capabilities, occupation, and forced adoption were each found to play distinctive roles in overall technology adoption.

Digital media have the potential to become a staple of art therapy treatment. Clinicians and their clients continue to own these types of tools in greater numbers, which leads to greater usage as well as applications that differ from the way they use traditional media such as clay and paints. Having access to these tools at home and in the clinic can create a therapeutic continuum that extends from the office to the home and back again.
Figure 1. Media Adoption Stage Model

At the same time, a greater use of technology can result in the loss of some of the therapeutic advantages that come with messier art materials. However, clean materials are more inviting to clients who do not desire a tactile somatic experience. These findings have implications for both art therapy education and clinical practice. Educational programs may need to adjust instruction on the ethical use of art media for students who may have greater familiarity with DIT than their instructors and less bias toward their application. Such bias appears to be related more to art therapists’ personal and professional experiences with technology than to its inherent properties. Although some art therapists do not yet consider DIT in the same light as traditional media, it is already becoming a standard tool in art therapy education, public relations, information dissemination, and treatment. The future of technology in art therapy will be complex but unmistakable. Digital information technology has made its mark on the field of art therapy, its only limits bound by ethics and the imagination.

References

American Art Therapy Association. (2007). Masters education standards. Retrieved December 29, 2009, from http://www.americanarttherapyassociation.org/upload/masterseducationstandards.pdf
Asawa, P. (2009). Art therapists’ emotional reactions to the demands of technology. Art Therapy: Journal of the American Art Therapy Association, 26(2), 58–65.
Austin, B. (2009). Renewing the debate: Digital technology in art therapy and the creative process. Art Therapy: Journal of the American Art Therapy Association, 26(2), 83–85.
Collie, K., & Cubranic, D. (2002). Computer-supported distance art therapy: A focus on traumatic illness. Journal of Technology in Human Services, 20, 155–171.
Dewey, J. (1980). Art as experience. New York, NY: Pedigree.
Greist, J. H., Marks, I. M., Baer, L., Kobak, K. A., Wenzel, K. W., Hirsch, M. J., … Clary, C. M. (2002). Behavior therapy for obsessive–compulsive disorder guided by a computer or by a clinician compared with relaxation as a control. Journal of Clinical Psychiatry, 63, 138–145.
Gussak, D., & Nyce, J. (1999). The art of art therapy may be toxic. Art Therapy: Journal of the American Art Therapy Association, 11(4), 271–277.
Jacobs, J., & Milton, I. (1994). To bridge art therapy and computer technology: The visual toolbox. Art Therapy: Journal of the American Art Therapy Association, 16(4), 194–195.
Kapitan, L. (2007). Will art therapy cross the digital divide? Art Therapy: Journal of the American Art Therapy Association, 24(2), 50–51.
Klorer, P. G. (2009). The effects of technological overload on children: An art therapist’s perspective. Art Therapy: Journal of the American Art Therapy Association, 26(2), 80–82.
Lemke, J. L. (1998). Analyzing verbal data: Principles, methods, problems. In B. J. Fraser & K. G. Tobin (Eds.), International handbook of science education (Vol. 2, pp. 1175–1190). Retrieved from http://academic.brooklyn.cuny.edu/education/jlemke/papers/handbook.htm
Mortley, J., Wade, J., & Enderby, P. (2004). Superhighway to promoting a client-therapist partnership? Using the Internet to deliver word-retrieval computer therapy, monitored remotely with minimal speech and language therapy input. Aphasiology, 18, 193–211.
Murphy, M. J. (2003). Computer technology for office-based psychological practice: Applications and factors affecting adoption. Psychotherapy: Theory, Research, Practice, Training, 40(1–2), 10–19.
Orr, P. P. (2005). Technology media: An exploration for “inherent qualities.” The Arts in Psychotherapy, 32(1), 1–11.
Orr, P. P. (2006). Technology training for future art therapists: Are we meeting their needs? Art Therapy: Journal of the American Art Therapy Association, 23(4), 191–196.
Peterson, B., Stovall, K., Elkins, D., & Parker-Bell, B. (2005). Art therapists and computer technology. Art Therapy: Journal of the American Art Therapy Association, 22(3), 139–149.
Potash, J. S. (2009). Fast food art, talk show therapy: The impact of mass media on adolescent art therapy. Art Therapy: Journal of the American Art Therapy Association, 26(2), 52–57.
Proudfoot, J., Swain, S., Widmer, S., Watkins, E., Goldberg, D., Marks, I., … Gray, J. A. (2003). The development and beta-test of a computer-therapy program for anxiety and depression: Hurdles and lessons. Computers in Human Behavior, 19, 277–289.
Rogers, E. M. (2003). Diffusion of innovations (5th ed.). New York, NY: Free Press.
Rubin, J. A. (2010). Art therapy: An introduction. New York, NY: Routledge.
Thong, S. A. (2007). Redefining the tools of art therapy. Art Therapy: Journal of the American Art Therapy Association, 24(2), 52–58.
Wallesch, C. W., & Johannsen-Horbach, H. (2004). Computers in aphasia therapy: Effects and side-effects. Aphasiology, 18(3), 223–228.
Wolf, R. I. (2007). Advances in phototherapy training. The Arts in Psychotherapy, 34(2), 124–133.

work_6tde7xvtjvcipm2mk6jkl3fezq ---- Quantitative conjunctival provocation test for controlled clinical trials. | Semantic Scholar

Sárándi, I., Claßen, D. P., Astvatsatourov, A., Pfaar, O., Klimek, L., Mösges, R., & Deserno, T. M. (2014). Quantitative conjunctival provocation test for controlled clinical trials. Methods of Information in Medicine, 53(4), 238–244. DOI: 10.3414/ME13-12-0142. Corpus ID: 24859675.

Abstract: BACKGROUND: The conjunctival provocation test (CPT) is a diagnostic procedure for the assessment of allergic diseases. Photographs are taken before and after provocation increasing the redness of the conjunctiva due to hyperemia. OBJECTIVE: We propose and evaluate an automatic image processing pipeline for objective and quantitative CPT. METHOD: After scale normalization based on intrinsic image features, the conjunctiva region of interest (ROI) is segmented combining thresholding, edge…

Cited by (7):
1. Gloistein, C., Astvatsatourov, A., Allekotte, S., & Mösges, R. (2015). Digitally analyzed conjunctival redness: Does repeated conjunctival provocation intrinsically cause local desensitization of the eye? International Archives of Allergy and Immunology.
2. Pfaar, O., Claßen, D. P., Astvatsatourov, A., Klimek, L., & Mösges, R. (2018). Reliability of a new symptom score in a titrated quantitative conjunctival provocation test supported by an objective photodocumentation. International Archives of Allergy and Immunology.
3. Sirazitdinova, E., Gijs, M., Bertens, C. J. F., Berendschot, T., Nuijts, R., & Deserno, T. M. (2019). Validation of computerized quantification of ocular redness. Translational Vision Science & Technology.
4. Grzella, A.-N., Schleicher, S., … Mösges, R. (2019). Liposomal eye spray is as effective as antihistamine eye drops in patients with allergic rhinoconjunctivitis induced by conjunctival provocation testing. International Archives of Allergy and Immunology.
5. Fauquert, J.-L., Jędrzejczak-Czechowicz, M., … Leonardi, A. (2017). Conjunctival allergen provocation test: Guidelines for daily practice. Allergy.
6. Flotho, P., Bhamborae, M., … Strauss, D. (2020). Multimodal data acquisition at SARS-CoV-2 drive through screening centers: Setup description and experiences in Saarland, Germany. medRxiv.
7. Handels, H., & Ingenerf, J. (2014). Medical informatics, biometry and epidemiology: Recent developments and advances. Methods of Information in Medicine.

References (showing 1–10 of 23):
1. Bista, S., Sárándi, I., Dogan, S., Astvatsatourov, A., Mösges, R., & Deserno, T. M. (2013). Automatic conjunctival provocation test combining Hough circle transform and self-calibrated color measurements. Medical Imaging.
2. Dogan, S., Astvatsatourov, A., … Mösges, R. (2013). Objectifying the conjunctival provocation test: Photography-based rating and digital analysis. International Archives of Allergy and Immunology.
3. Horak, F., Berger, U., Menapace, R., & Schuster, N. (1996). Quantification of conjunctival vascular reaction by digital imaging. The Journal of Allergy and Clinical Immunology.
4. Fukushima, A., & Tomita, T. (2009). Image analyses of the kinetic changes of conjunctival hyperemia in histamine-induced conjunctivitis in guinea pigs. Cornea.
5. Fieguth, P., & Simpson, T. (2002). Automated measurement of bulbar redness. Investigative Ophthalmology & Visual Science.
6. Yoneda, T., Sumi, T., Takahashi, A., Hoshikawa, Y., Kobayashi, M., & Fukushima, A. (2011). Automated hyperemia analysis software: Reliability and reproducibility in healthy subjects. Japanese Journal of Ophthalmology.
7. Riechelmann, H., Epple, B., & Gropper, G. (2003). Comparison of conjunctival and nasal provocation test in allergic rhinitis to house dust mite. International Archives of Allergy and Immunology.
8. Owen, C., Ellis, T., & Woodward, E. (2004). A comparison of manual and automated methods of measuring conjunctival vessel widths from photographic and digital images. Ophthalmic & Physiological Optics.
9. Deserno, T. M., Kaupp, A., Effert, R., & Meyer-Ebrecht, D. (1994). Automatic strabometry by Hough-transformation and covariance-filtering. Proceedings of the 1st International Conference on Image Processing.
10. Haak, D., Gehlen, J., Jonas, S., & Deserno, T. M. (2014). OC ToGo: Bed site image integration into OpenClinica with mobile devices. Medical Imaging.
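The truncated abstract above outlines the pipeline's core idea: segment a conjunctiva region of interest and quantify its redness before and after provocation. The snippet below is a toy, hedged illustration of threshold-based redness scoring only; the pixel data, threshold, and function names are invented, and the published method additionally uses scale normalization and edge-based segmentation.

```python
# Toy illustration of threshold-based ROI redness scoring; synthetic pixels and
# an arbitrary threshold, NOT the published pipeline.

def relative_redness(pixel):
    """Relative redness of one (R, G, B) pixel: R / (R + G + B)."""
    r, g, b = pixel
    total = r + g + b
    return r / total if total else 0.0

def redness_score(image, threshold=0.5):
    """Mean relative redness over the ROI, here crudely taken to be all
    pixels whose relative redness exceeds `threshold`."""
    roi = [relative_redness(p) for row in image for p in row
           if relative_redness(p) > threshold]
    return sum(roi) / len(roi) if roi else 0.0

before = [[(120, 100, 100), (200, 60, 40)],
          [(110, 105, 100), (190, 70, 50)]]
after = [[(180, 60, 50), (220, 40, 30)],
         [(170, 70, 60), (210, 50, 40)]]

# Hyperemia after provocation should raise the score:
assert redness_score(after) > redness_score(before)
```

Comparing the before/after scores for the same eye gives a single quantitative readout per provocation step, which is the kind of objective measure the paper argues for over subjective grading.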
work_6zuztheu6nannmreu5pzrwrciu ---- Counting Actinic Keratosis – Is Photographic Assessment a Reliable Alternative to Physical Examination in Clinical Trials? | Acta Dermato-Venereologica

Sinnya, S., O'Rourke, P., Ballard, E., Tan, J. M., Morze, C., Sahebian, A., Hames, S. C., Prow, T. W., Green, A. C., & Soyer, H. P. Counting actinic keratosis – Is photographic assessment a reliable alternative to physical examination in clinical trials? Acta Dermato-Venereologica, Vol. 95, Issue 5 (Short Communication). DOI: 10.2340/00015555-2040. Abstract is missing (Short Communication).

work_6zyfcl32tbhrbj3s3jvzwxolj4 ---- Faculty & Research - Harvard Business School

HBS Book: Reimagining Capitalism in a World on Fire. By Rebecca Henderson. Free market capitalism is one of humanity’s greatest inventions and the greatest source of prosperity the world has ever seen. But this success has been costly. Capitalism is on the verge of destroying the planet and destabilizing society as wealth rushes to the top. The time for action is running short.

The Air War Versus the Ground Game: An Analysis of Multi-Channel Marketing in U.S. Presidential Elections. By Lingling Zhang and Doug J. Chung. Marketing Science 39, no. 5 (September-October 2020): 872-892. This study jointly examines the effects of television advertising and field operations in U.S. presidential elections, with the former referred to as the “air war” and the latter as the “ground game.” Specifically, the study focuses on how different campaign activities—personal selling in the form of field operations and mass media advertising by the candidate and outside sources—vary in their effectiveness with voters who have different political predispositions.

Health Care Initiative: Private and Social Returns to R&D: Drug Development and Demographics. By Efraim Benmelech, Janice Eberly, Dimitris Papanikolaou and Joshua Krieger. Investment in intangible capital—in particular, research and development—increased dramatically since the 1990s. However, output and measured productivity growth remains sluggish in recent years. One potential reason is that a significant share of the increase in intangible investment is geared toward consumer products such as pharmaceutical drugs that are not included in measured economic output. We document that a significant fraction of total R&D spending in the U.S. economy is done by pharmaceutical firms and is geared to developing drugs for the elderly.

Featured Case: Best Buy's Corie Barry: Confronting the COVID-19 Pandemic. By William W. George and Amram Migdal. This case examines the leadership of Corie Barry, the new CEO of Best Buy, with a focus on actions the company took in 2020 to adapt to the COVID-19 pandemic. The case includes a history of Best Buy’s strategy and leadership, including the transitions between the company’s founder and the subsequent four CEOs. In particular, the career trajectory of CEO Corie Barry is described in detail.
Featured Case: Rosalind Fox at John Deere. By Anthony Mayo and Olivia Hull. Rosalind Fox, the factory manager at John Deere’s Des Moines, Iowa plant, has improved the financial standing of the factory in the three years she’s been at its helm. But employee engagement scores—which measured employees’ satisfaction with working conditions and enthusiasm about their work—have remained lackluster. As the first Black female factory manager to lead the plant, Fox considers how to build stronger bonds with her staff, who are mostly white men. The case describes how Fox took charge and established her credibility while building and nurturing a diverse leadership team.

HBS Working Knowledge: Fairness or Control: What Determines Elected Local Leaders’ Support for Hosting Refugees in Their Community? By Kristin Fabbe, Eleni Kyrkopoulou, Konstantinos Matakos, and Asli Unan. Local politicians are not adamantly opposed to setting up host sites for refugees in their municipalities. However, they want a fair process to ensure that interaction between refugees and residents is limited, gradual, and mediated. Most importantly, local politicians want to control those interactions.

HBS Working Paper: Working from Home during COVID-19: Evidence from Time-Use Studies. By Thomaz Teodorovicz, Raffaella Sadun, Andrew L. Kun and Orit Shaer.
We analyzed the results from an online time-use survey that collected data on 1,192 knowledge workers in two waves, a pre-pandemic wave collected in August/2019 (615 participants) and a post-pandemic wave collected in August/2020 (577 participants). Our findings indicate that the forced transition to WFH created by the COVID pandemic was associated with a drastic reduction in commuting time, and an increase in time spent in work and/or personal activities. HBS Working Paper Working from Home during COVID-19: Evidence from Time-Use Studies By: Thomaz Teodorovicz, Raffaella Sadun, Andrew L. Kun and Orit Shaer We assess how the sudden and widespread shift to working from home during the pandemic impacted how knowledge workers allocate time throughout their working day. We analyzed the results from an online time-use survey that collected data on 1,192 knowledge workers in two waves, a pre-pandemic wave collected in August/2019 (615 participants) and a... Initiatives & Projects U.S. Competitiveness The U.S. Competitiveness Project is a research-led effort to understand and improve the competitiveness of the United States. The project is committed to identifying practical steps that business leaders can take to strengthen the U.S. economy. →All Initiatives & Projects Seminars & Conferences Apr 06 06 Apr 2021 Katja Seim, Yale School of Management Apr 07 07 Apr 2021 Thomas (Toto) Graeber, Harvard Business School →More Seminars & Conferences Recent Publications Does Observability Amplify Sensitivity to Moral Frames? Evaluating a Reputation-Based Account of Moral Preferences By: Valerio Capraro, Jillian J. Jordan and Ben Tappin 2021 | Working Paper | Faculty Research A growing body of work suggests that people are sensitive to moral framing in economic games involving prosociality, suggesting that people hold moral preferences for doing the “right thing”. What gives rise to these preferences? 
Here, we evaluate the explanatory power of a reputation-based account, which proposes that people respond to moral frames because they are motivated to look good in the eyes of others. Across four pre-registered experiments (total N = 9,601), we investigated whether reputational incentives amplify sensitivity to framing effects. Studies 1-3 manipulated (i) whether moral or neutral framing was used to describe a Trade-Off Game (in which participants chose between prioritizing equality or efficiency) and (ii) whether Trade-Off Game choices were observable to a social partner in a subsequent Trust Game. These studies found that observability does not significantly amplify sensitivity to moral framing. Study 4 ruled out the alternative explanation that the observability manipulation from Studies 1-3 is too weak to influence behavior. In Study 4, the same observability manipulation did significantly amplify sensitivity to normative information (about what others see as moral in the Trade-Off Game). Together, these results suggest that moral frames may tap into moral preferences that are relatively deeply internalized, such that the power of moral frames is not strongly enhanced by making the morally-framed behavior observable to others.
Citation: Capraro, Valerio, Jillian J. Jordan, and Ben Tappin. "Does Observability Amplify Sensitivity to Moral Frames? Evaluating a Reputation-Based Account of Moral Preferences." Working Paper, January 2021.

Why Do Successful Women Feel So Guilty? By Debora Spar. June 2012, Editorial, The Atlantic.
Citation: Spar, Debora. "Why Do Successful Women Feel So Guilty?" The Atlantic (June 28, 2012).

Large-Scale Field Experiment Shows Null Effects of Team Demographic Diversity on Outsiders' Willingness to Support the Team. By Edward H. Chang, Erika L. Kirgios and Rosanna K. Smith. 2021, Article, Journal of Experimental Social Psychology. Demographic diversity in the United States is rising, and increasingly, work is conducted in teams. These co-occurring phenomena suggest that it might be increasingly common for work to be conducted by demographically diverse teams. But to date, in spite of copious field experimental evidence documenting that individuals are treated differently based on their demographic identity, we have little evidence from field experiments to establish how and whether teams are treated differently based on their levels of demographic diversity. To answer this question, we present the results of a preregistered, large-scale (n=9496) field experiment testing whether team demographic diversity affects outsiders' responses to the team. Participants were asked via email to donate money to support the work of a team that was described and depicted as demographically diverse, or not. Even though the study was well-powered to detect even small effects (i.e., differences of less than 1.5 percentage points in donation rates), we found no significant differences in people's willingness to donate to a more diverse versus a less diverse team. We also did not find moderation by participant gender, racial diversity of the participant's zip code, or political leaning of the participant's zip code, suggesting that the lack of a main effect is not due to competing mechanisms cancelling out a main effect. These results suggest past research on the effects of demographic diversity on team support may not generalize to the field, highlighting the need for additional field experimental research on people's responses to demographically diverse teams.
Citation: Chang, Edward H., Erika L. Kirgios, and Rosanna K. Smith. "Large-Scale Field Experiment Shows Null Effects of Team Demographic Diversity on Outsiders' Willingness to Support the Team." Art. 104099. Journal of Experimental Social Psychology 94 (May 2021).
Does Observability Amplify Sensitivity to Moral Frames? Evaluating a Reputation-Based Account of Moral Preferences. By Valerio Capraro, Jillian J. Jordan and Ben Tappin. 2021, Article, Journal of Experimental Social Psychology.
Citation: Capraro, Valerio, Jillian J. Jordan, and Ben Tappin. "Does Observability Amplify Sensitivity to Moral Frames? Evaluating a Reputation-Based Account of Moral Preferences." Journal of Experimental Social Psychology 94 (May 2021).
How to Build a Life: The Hidden Toll of Remote Work By: Arthur C. Brooks April 1, 2021 | Article | The Atlantic Citation Related Brooks, Arthur C. "How to Build a Life: The Hidden Toll of Remote Work." The Atlantic (April 1, 2021). Glass Half-Broken: Shattering the Barriers That Still Hold Women Back at Work By: Colleen Ammerman and Boris Groysberg 2021 | Book | Faculty Research Why does the gender gap persist and how can we close it? For years women have made up the majority of college-educated workers in the United States. In 2019, the gap between the percentage of women and the percentage of men in the workforce was the smallest on record. But despite these statistics, women remain underrepresented in positions of power and status, with the highest-paying jobs the most gender-imbalanced. Even in fields where the numbers of men and women are roughly equal, or where women actually make up the majority, leadership ranks remain male-dominated. The persistence of these inequalities begs the question: Why haven't we made more progress? In Glass Half-Broken, HBS Gender Initiative director Colleen Ammerman and Boris Groysberg reveal the pervasive organizational obstacles and managerial actions—limited opportunities for development, lack of role models and sponsors, and bias in hiring, compensation, and promotion—that create gender imbalances. Bringing to light the key findings from the latest research in psychology, sociology, organizational behavior, and economics, Ammerman and Groysberg show that throughout their careers—from entry-level to mid-level to senior-level positions—women get pushed out of the leadership pipeline, each time for different reasons. Presenting organizational and managerial strategies designed to weaken and ultimately break down these barriers, Glass Half-Broken is the authoritative resource that managers and leaders at all levels can use to finally shatter the glass ceiling. Citation Purchase Related Ammerman, Colleen, and Boris Groysberg. 
Glass Half-Broken: Shattering the Barriers That Still Hold Women Back at Work. Boston: Harvard Business Review Press, 2021.

Building Cities' Collaborative Muscle
By: Jorrit De Jong, Amy C. Edmondson, Mark Moore, Hannah Riley-Bowles, Jan Rivkin, Eva Flavia Martínez Orbegozo and Santiago Pulido-Gomez
Spring 2021 | Article | Stanford Social Innovation Review (website)
The most pressing social problems facing cities today require multiagency and cross-sector solutions. We offer tools and techniques to facilitate the process of diagnosing and solving problems by breaking down silos to build up cities.
De Jong, Jorrit, Amy C. Edmondson, Mark Moore, Hannah Riley-Bowles, Jan Rivkin, Eva Flavia Martínez Orbegozo, and Santiago Pulido-Gomez. "Building Cities' Collaborative Muscle." Stanford Social Innovation Review (website) (Spring 2021).

Utilizing Time-driven Activity-based Costing to Determine Open Radical Cystectomy and Ileal Conduit Surgical Episode Cost Drivers
By: Janet Baack Kukreja, Mohamed A. Seif, Marissa W. Merry, James R. Incalcaterra, Ashish M. Kamat, Colin P. Dinney, Jay B. Shah, Thomas W. Feeley and Neema Navai
April 2021 | Article | Urologic Oncology: Seminars and Original Investigations
Objectives: Patients undergoing radical cystectomy represent a particularly resource-intensive patient population. Time-driven activity-based costing (TDABC) assigns time to events, and costs are then based on the people involved in providing care for specific events. To determine the major cost drivers of radical cystectomy care, we used a TDABC analysis for the cystectomy care pathway.
Subjects and methods: We retrospectively reviewed a random sample of 100 patients out of 717 eligible patients undergoing open radical cystectomy and ileal conduit for bladder cancer at our institution between 2012 and 2015. We defined the cycle of care as beginning at the preoperative clinic visit and ending with the 90-day postoperative clinic visit.
TDABC was carried out with construction of detailed process maps. Capacity cost rates were calculated and the care cycle was divided into 3 phases: surgical, inpatient, and readmissions. Costs were normalized to the lowest cost driver within the cohort.
Results: The mean length of stay was 6.9 days. Total inpatient care was the main driver of cost for radical cystectomy, making up 32% of the total costs. Inpatient costs were mainly driven by inpatient staff care (76%). Readmissions were responsible for 29% of costs. Surgery was 31% of the costs, with the majority derived from operating room staff costs (65%).
Conclusion: The major driver of cost in a radical cystectomy pathway is the inpatient stay, closely followed by operating room costs. Surgical costs, inpatient care and readmissions all remain significant sources of expense for cystectomy, and efforts to reduce cystectomy costs should be focused in these areas.
Kukreja, Janet Baack, Mohamed A. Seif, Marissa W. Merry, James R. Incalcaterra, Ashish M. Kamat, Colin P. Dinney, Jay B. Shah, Thomas W. Feeley, and Neema Navai. "Utilizing Time-driven Activity-based Costing to Determine Open Radical Cystectomy and Ileal Conduit Surgical Episode Cost Drivers." Urologic Oncology: Seminars and Original Investigations 39, no. 4 (April 2021).

In The News
05 Apr 2021 St.
Louis Post-Dispatch: Nicklaus: America needs a more resilient medical supply chain, but self-sufficiency isn't the answer (Re: Willy Shih)
05 Apr 2021 The Harbus: Leadership Truths that Transcend the Pandemic (By: Rosabeth Moss Kanter)
04 Apr 2021 Quartz: How to support employee mental health from every level of the firm (Re: Tsedal Neeley)
02 Apr 2021 Harvard Business School: Salary Negotiations: A Catch-22 for Women (Re: Julian Zlatev)

Mapping the spatio-temporal distribution of key vegetation cover properties in lowland river reaches, using digital photography

Authors: Veerle Verschoren1*, Jonas Schoelynck1, Kerst Buis1, Fleur Visser2, Patrick Meire1, Stijn Temmerman1

1. University of Antwerp, Department of Biology, Ecosystem Management Research Group, Universiteitsplein 1C, B-2610 Wilrijk, Belgium.
2.
University of Worcester, Institute of Science and the Environment, Henwick Grove, Worcester WR2 6AJ, UK

*Corresponding author
UA - Campus Drie Eiken
Ecosystem Management Research Group
Universiteitsplein 1, Building C, C1.20
B-2610 Wilrijk, Belgium
verschoren.veerle@hotmail.com
Tel +32 3 265 22 52
Fax +32 3 265 22 71

Abstract
The presence of vegetation in stream ecosystems is highly dynamic in both space and time. A digital photography technique is developed to map aquatic vegetation cover at species level, with a very high spatial and a flexible temporal resolution. A digital single-lens-reflex (DSLR) camera mounted on a handheld telescopic pole is used. The low-altitude (5 m) orthogonal aerial images have a low spectral resolution (Red-Green-Blue), high spatial resolution (~1.9 pixels cm-2, ~1.3 cm length) and flexible temporal resolution (monthly). The method is successfully applied in two lowland rivers to quantify four key properties of vegetated rivers: vegetation cover, patch size distribution, biomass and hydraulic resistance. The main advantages are that the method is: (i) suitable for continuous and discontinuous vegetation covers, (ii) of very high spatial and flexible temporal resolution, (iii) relatively fast compared to conventional ground survey methods, (iv) non-destructive, (v) relatively cheap and easy to use, and (vi) based on widely available software for which similar open source alternatives exist. The study area should be less than 10 m wide, and the prevailing light conditions and water turbidity levels should be sufficient to look into the water. Further improvements of the image processing are expected in the automatic delineation and classification of the vegetation patches.
Key words: macrophytes, vegetation cover, very high spatial resolution, flexible temporal resolution

Introduction
The presence of aquatic vegetation in river ecosystems tends to be highly variable in space and time. Because of the importance of vegetation in fluvial ecosystems, there is a need to efficiently map and monitor this variability. The study described in this paper presents a method for detailed mapping of the dynamic vegetation patterns in rivers.

Macrophytes, or aquatic plants, have different growth forms: exclusively submerged, submerged with floating leaves, exclusively floating or emergent. They occur in single-species beds with a continuous cover or in a discontinuous composition of multiple species. The interaction between vegetation and water flow leads to spatial patterns of vegetation patches at reach scale, river sections of 100 to 200 m (Schoelynck et al. 2012). A macrophyte patch can be defined as an area covered by vegetation, which has a finite spatial extent that is larger than an individual shoot but smaller than the entire reach. The size of these vegetation patches varies strongly, from a few square decimetres to a few square metres (Gurnell et al. 2006; Sand-Jensen et al. 1999). The size of the individual leaves ranges from several square centimetres to several square decimetres. In temperate mid-latitude climate zones, the development of these vegetation patches has an annual cycle with abundant plant growth in the growing season followed by die-back (Battle and Mihuc 2000; Menendez et al. 2003).

These dynamic growth processes result in frequent changes in key properties of vegetated rivers, including vegetation cover, patch size distribution, biomass and hydraulic resistance. These properties in turn affect stream processes such as nutrient cycling (Dhote and Dixit 2009; Krause et al. 2011; Seitzinger et al.
2006), the transport of dissolved matter and the retention of particulate matter (Cordova et al. 2008; Horvath 2004; Lamberti et al. 1989), bedload sediment transport (Gibbins et al. 2007) and drift of macro-invertebrates (Extence et al. 1999).

The first of the key properties, macrophyte cover, is an essential parameter used for monitoring of fluvial ecosystems. Macrophytes are, for example, used as a quality parameter in the assessment of the ecological status of surface water for the Water Framework Directive in Europe (EU 2000). This assessment takes into account the number of species and species abundance. The second key property, the frequency distribution of patch sizes, can be used to investigate spatial self-organisation in river ecosystems. Spatial self-organisation in rivers is the process where large-scale patterns develop from disordered initial conditions through small-scale feedbacks between plants and the water flow (Lejeune et al. 2004; Rietkerk et al. 2004; Schoelynck et al. 2012). The process is important for ecosystem functioning, since self-organised ecosystems have a higher resilience and resistance to environmental change and a higher productivity compared to homogeneous ecosystems (van de Koppel et al. 2008). Schoelynck et al. (2012) showed the presence of spatial self-organisation of macrophyte patches in lowland rivers. They demonstrated that the size distribution of macrophyte patches can be described by a power-law relationship, which is an indication of self-organisation (Newman 2005; Scanlon et al. 2007). Thirdly, biomass is a crucial parameter in many ecological studies, for example for the calculation of mass balances or the quantification of nutrient fluxes (Borin and Salvato 2012; Dinka et al. 2004). The parameter values will depend on vegetation extent and species composition.
Finally, the hydraulic resistance of a river reach is influenced by obstructions like aquatic vegetation, bed material, the meandering of the river and irregularities in its cross-sections (Chow 1959). Macrophytes increase the hydraulic resistance, which leads to reduced stream velocities and increased water levels upstream (De Doncker et al. 2009b). A direct effect of increased water levels is a higher risk of flooding. The effect of macrophytes on the hydraulic resistance is threefold: through vegetation density (e.g. biomass (De Doncker et al. 2009b)), plant characteristics (e.g. growth form (Bal et al. 2011)) and spatial distribution (e.g. cross-sectional blockage (Green 2005b)). In general, high biomass, stiff plants and large cross-sectional blockage all lead to a higher resistance to water flow, which is expressed by a higher Manning roughness coefficient (n) (Chow 1959; Madsen et al. 2001; Vereecken et al. 2006). Recently, more detailed hydrodynamic models have been developed which incorporate such plant features (Verschoren et al. 2016).

To quantify the above-mentioned vegetation parameters and use them for monitoring, modelling and management of river processes, a method is needed that can efficiently map the dynamic patchiness of macrophytes in rivers with a very high spatial (subcentimetre) and flexible temporal resolution. The detection of fine-scale details in structure, texture and pattern on very high spatial resolution image data allows identification of macrophytes up to species level (Bryson et al. 2013; Visser et al. 2013). Properties like biomass and hydraulic resistance depend strongly on species composition and need flexible temporal resolution (e.g. monthly) data acquisition to catch seasonal variation.
Low-altitude image data collection seems the most suitable method to obtain high spatial and flexible temporal resolution data while minimizing time and cost (Carter et al. 2005; Legleiter 2003).

High resolution low-altitude image data collection techniques have proved to be suitable for many ecological studies in intertidal marine environments, with spatial extents between 0.01 - 1 ha and resolutions ranging between 0.5 - 5 cm. Examples are patterns of algae distribution (Guichard 2000), biophysical control of benthic diatom films and macroalgae (van den Wal 2014), the distribution of eelgrass and blue mussel (Barrell and Grant 2015), and terrain models of intertidal rocky shores (Bryson 2013). However, images were mostly obtained at low tide, while study sites were not inundated. Due to the absorption of light in water (Visser et al. 2013), limited spatial resolution or high costs (Flynn and Chapra 2014; Husson et al. 2014; Shuchman et al. 2013), it is only relatively recently that more studies started looking at mapping aquatic vegetation in submerged environments, including rivers and lakes (Anker et al. 2014; Silva et al. 2008; Villa et al. 2015). Hyperspectral remote sensing has been successfully used to measure river morphology (Tamminga et al. 2015), to map invasive aquatic vegetation in a delta (Hestir et al. 2008) and submerged macrophytes and green algae in rivers (Anker et al. 2014). However, these hyperspectral images are costly and/or have too low a spatial resolution (~1-3 m) to be applied in small streams (stream width <10 m) (Shuchman et al. 2013).

Recent efforts have been undertaken to obtain low-cost, high spatial resolution (subdecimetre to submetre) images, but with a low spectral range. At a resolution of 25 cm, Flynn and Chapra (2014) mapped submerged aquatic vegetation and green algae in small lowland rivers and lakes, and Nezlin et al.
(2007) mapped algae and mussels on tidal flats. Higher spatial resolution images were obtained by Husson et al. (2014) (5.6 cm) and Anker et al. (2014) (4 cm) to record aquatic vegetation. However, these resolutions are often still too coarse to distinguish different macrophyte species, which sometimes requires assessment of the shape of individual leaves. A common recommendation from several of the aforementioned studies is that images should be taken under optimal conditions, e.g. no diffuse light, sun at its highest position, clear water, no ripples. However, this almost never occurs in reality, which further limits the applicability of the method and is an additional reason why this technique has not yet become mainstream in river ecosystem research: it is difficult to look into a river through a camera lens (Visser et al. 2013).

In this paper we present a rapid and cost-effective digital aquatic vegetation cover photography technique based on orthogonal low-altitude images with a very high (subcentimetre) spatial resolution and the flexibility to collect data frequently (monthly or more often) under optimal weather and scene illumination conditions (no diffuse light and the sun at its highest position). We use the collected images to map the spatial distribution of aquatic vegetation at species level in two river ecosystems (± 200 m river reaches) and we demonstrate how the maps are suitable to monitor four key properties of vegetated lowland rivers, namely vegetation cover, patch size distribution, biomass and hydraulic resistance.

Materials and methods

Study area
The data were collected in 2013 in two lowland rivers in the north-east of Belgium: the Zwarte Nete and the Desselse Nete (51° 15’ 3.45” N, 5° 4’ 54.27’’ E) (Fig. 1).
Both rivers are characterised by extensive plant growth in summer and are surrounded by pasture, which limits overhanging and other riparian vegetation. The rivers have a low suspended matter concentration (< 50 mg L-1) and the substrate consists of sand (median grain size of 167 µm). The Zwarte Nete has a mean width of 4.4 m, water depth ranges between 0.5 - 0.6 m and discharge between 0.2 - 0.5 m3 s-1. A reach of 187 m (821 m2) was mapped where multiple plant species were present. The Desselse Nete is slightly larger, with a mean width of 5.4 m, a mean water depth of 0.6 - 0.7 m and a mean discharge between 0.3 - 0.6 m3 s-1. Here a reach of 180 m (1123 m2) was selected, dominated by a single submerged species with floating leaves: Potamogeton natans (L.). The following species were present in one or both reaches: the submerged species Callitriche obtusangula (Le Gall), Myriophyllum spicatum (L.), Potamogeton pectinatus (L.), Ranunculus peltatus (L.), Sagittaria sagittifolia (L.) and Sparganium emersum (L.), the emergent species Typha latifolia (L.), and riparian vegetation (not identified to species level). No exclusively floating species were present.

Image collection
The images were collected with a Nikon D300s DSLR camera with a crop sensor (Nikon Corporation, 2009). As inherent to most unmodified cameras, images consisting of three broad spectral bands are obtained (RGB): a blue (400 - 500 nm), a green (500 - 600 nm) and a red band (600 - 700 nm). The files were compressed as JPEG (fine) with an image dimension of 4288 x 2848 pixels and an image size of 12.3 megapixels. The camera was equipped with a Tokina AT-X 116 Pro DX (11-16 mm, F2.8) wide angle lens that has a large field-of-view and a distortion of 0.6% (Dxomark). The zoom was set to the widest possible angle and the focus at infinity.
The camera was attached with a ball head to a handheld telescopic pole to take low-altitude images of the water surface at nadir (Fig. 2a). The lower end of the pole was placed at the river bank. The pole was tilted so that the camera was positioned above the centre line of the river at a height of approximately 5 m above the water surface (Fig. 2b). The camera was remotely operated from a laptop (tethered capture), which also provided live view to ensure correct positioning of the image footprint. Both river banks had to be visible in each image. No polarization filter was used, as this was not thought to have an effect with the camera at nadir position. Camera ISO was set to 200 to minimize noise, with a variable aperture to achieve a fast shutter speed (Pekin and Macfarlane 2009). The images generally covered an area of 10 m (along the stream) x 6.5 m (across) (Fig. 3a and 3b).

Multiple images were collected at monthly intervals covering the entire reaches of both rivers from April to September 2013. The distance between two consecutive images was 4 m to ensure sufficient overlap (~30 % overlap). Data were collected on clear days around noon to achieve optimal illumination conditions. The angle between the sun and the camera is approximately 40° between 11 a.m. and 1 p.m. (summertime) in Belgium. Several ground control points (GCPs) were positioned along the reaches to allow georeferencing of the image mosaics. Both reaches are bounded upstream and downstream by small bridges, which were included as GCPs. Geographic coordinates for the GCPs were obtained with a dGPS (Trimble R4 GNSS, Eersel, NL) with an accuracy of 1 cm. The exact coordinates of the river banks were measured once with an electronic theodolite (Total Station, Sokkia set 510k, Capelle a/d Ijssel, NL) with a spatial interval of 2 to 3
The coordinates of the river bank were considered as complementary GCPs, which are clearly 192 visible on the images. 193 194 Spatio-temporal vegetation cover 195 Three steps are needed to create vegetation maps at species level: (i) image dehazing and stitching 196 by month and reach, (ii) georeferencing of image mosaics, and (iii) manual delineation of 197 vegetation patches. 198 Firstly, haze was removed from the images with the Autopano Giga (v. 3.0, Kolor, Francin, FR) 199 software using the Neutralhazer Light Anti-Haze plug-in. The software was then used to create 200 image mosaics along the full river reaches, using image matching algorithms to match up 201 overlapping photographs. For around 10 % of the images the matching process seemed to be 202 affected by reflection, movement of the vegetation with the river current and a homogeneous 203 riparian margin. In these cases we manually added extra control points at matching locations in 204 both images. This hardly affected the time to stitch. The image mosaics were exported as a JPEG. 205 This protocol was repeated for the images of both reaches and for each month. Secondly, in ArcGIS 206 (v. 10.1, ESRI Inc, Redlands, USA) the image mosaics were georeferenced using a spline 207 10 transformation. It should be noted that the GCPs were not present in all images that formed a 208 mosaic. An example of georeferenced image mosaics is given in Fig 3c and 3d. Thirdly, polygons 209 were drawn manually delineating the vegetation patches. Advantages and limitations of this 210 approach are extensively discussed at the end of this paper. Patches consisted of a single species 211 and had a minimum size of 2 dm2. For each polygon the type of species was determined from the 212 image (Fig. 3e and 3f). The surface area of each polygon was calculated and summed to obtain the 213 total vegetation cover per reach and per species type. 
The manual image classification was validated against independent field measurements of vegetation presence. A conventional grid method (Anker et al. 2014; Champion and Tanner 2000) was used to estimate macrophyte cover on the ground. A rectangular grid of 2.88 by 0.88 m (36 by 11 cells of 0.08 by 0.08 m) was placed at a fixed location monthly in both streams on the same days the images were collected. The presence of macrophytes in each cell was recorded and determined to species level. The image data were resampled to 0.08 m resolution, with each cell coded according to the dominant species. The overall accuracy was calculated by comparing the species in each cell of both grids with a true or false evaluation. This was done per month per river. The relative cover for each vegetation class is given for the months with a cover accuracy of less than 95 %.

Patch size distribution
We tested whether the frequency distribution of patch sizes can be approximated by a power-law relationship. We therefore used the inverse cumulative distribution, which gives the probability that a patch size (S) is larger than or equal to s (Newman 2005; Scanlon et al. 2007):

P(S ≥ s) ~ s^(-β)   (Eq. 1)

with s the size of a patch and β the power-law exponent. A power-law relationship in this context means that the sizes of patches vary strongly, with many small patches and relatively few large patches. R version 3.2.0 (R Core Team 2014) was used to fit a standard least squares regression on the log-transformed data.

Biomass
A conversion factor between cover and biomass can be obtained from the literature (e.g. Flynn et al. 2002; Madsen and Adams 1989). However, the required input data were not available for the species in our study area; therefore the four dominant species in both rivers (C. obtusangula, M. spicatum, P. natans, S.
emersum) were sampled monthly to obtain the monthly biomass:cover conversion factor (Tab. 7). Vegetation samples were collected on the date of image acquisition, downstream from the studied reaches so as not to disturb the natural growth of the vegetation within the study reaches. Each month, three replicates per species were sampled by manually removing the above-ground vegetation in a quadrant of 0.5 m x 0.5 m that was placed upon a monospecific vegetation patch. The samples were oven dried (at 70 °C for 48 h) and weighed afterwards (dry weight, DW). It has to be noted that in May 2013 no sample could be taken for C. obtusangula; therefore the average of the April and June values was used to estimate the biomass in that month. The total cover per species per month was obtained through the image analysis. The biomass (gDW) per species was then calculated monthly by multiplying the species-specific biomass:cover conversion factor (gDW m-2) with the corresponding cover (m2). The biomass values were summed for the whole reach and divided by the total surface of the reach to obtain the total biomass (gDW m-2), averaged over all species and over the whole river reach. Since three replicates were taken, the total biomass consists of three values.

The applied image analysis method aims to quantify vegetation cover in a non-destructive way. However, the validation of the total biomass required mowing of all the vegetation and is therefore a destructive method. We only had the opportunity to use the mowing method in August. On 26 and 28 August 2013 the entire reach in the Desselse Nete and the Zwarte Nete, respectively, was mechanically mowed by cutting most vegetation just above the sediment and removing it from the river. All mowed vegetation from both reaches was immediately weighed (fresh weight, FW).
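The cover-to-biomass conversion described above (per-species cover multiplied by the monthly biomass:cover factor, summed and normalized by the reach surface) can be sketched as follows. All numbers are illustrative placeholders, not the measured values of Tab. 7.

```python
# Sketch of the cover-to-biomass conversion: per species, multiply the mapped
# cover (m2) by the monthly biomass:cover conversion factor (gDW m-2), sum,
# and divide by the reach surface to get a reach-averaged biomass.
# All numbers are illustrative, not the values of Tab. 7.
def reach_biomass(cover_m2, factor_gdw_per_m2, reach_area_m2):
    """Reach-averaged biomass in gDW m-2."""
    total_gdw = sum(cover_m2[sp] * factor_gdw_per_m2[sp] for sp in cover_m2)
    return total_gdw / reach_area_m2

cover = {"S. emersum": 120.0, "M. spicatum": 45.0}    # m2, from the vegetation maps
factor = {"S. emersum": 80.0, "M. spicatum": 150.0}   # gDW m-2, from the field samples
print(round(reach_biomass(cover, factor, 821.0), 1))  # 19.9 gDW m-2 for an 821 m2 reach
```

In the study itself this calculation is repeated with each of the three replicate conversion factors, so the reach total comes with a mean and standard error rather than a single value.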
A representative subsample of the mowed biomass, consisting of a mixture of all species, was transported to the lab. The subsample was weighed (FW), dried at 70 °C for 48 h and reweighed (DW). This enabled us to determine a conversion factor between FW and DW for the biomass of the entire reach. R 3.2.0 was used to perform a one-sample t-test of the difference between the total biomass obtained by the mowing method (one value) and by the image method (mean with standard error based on three values).

Hydraulic resistance
The hydraulic resistance of rivers can be expressed as a Manning coefficient (Chow 1959). The commonly used equation to calculate the Manning coefficient is based on hydraulic parameters and is applicable in vegetated and non-vegetated rivers (Eq. 2, Tab. 1) (Chow 1959). The equation uses the cross-sectional area, discharge, hydraulic radius and water level slope. The water discharge was measured upstream of both reaches on the same days the images were taken, using an electromagnetic flow meter (Valeport model 801, Totnes, UK), and calculated by the velocity-area method (Bal and Meire 2009). Simultaneously, the water level was measured with two pressure sensors (Eickelkamp, Geisbeek, NL) placed in the water column near the bridges bordering the reach upstream and downstream, recording at 20 min intervals with an accuracy of 0.5 cm. The elevation difference between the pressure sensors was measured with an RTK-GPS. The water levels
Based on the data of the surface area 283 coverage of Green (2005a) we found an empirical relationship (Eq. 3, Tab. 1) between the Manning 284 coefficient and the vegetation cover. De Doncker et al. (2009a) fitted an equation (Eq. 4, Tab. 1) 285 based on measurements of the biomass (gDW m-2) and the Manning coefficient. These empirical 286 relationships (Eq. 3 and Eq. 4) are easy to use, but have a limited application potential. They don’t 287 account for the species composition and the horizontal and vertical distribution of the vegetation and 288 are derived for a specific study area. The general Manning coefficient (Eq. 2) is used to validate the 289 empirical equations (Eq. 3 and Eq. 4). 290 Results 291 Between 86 and 115 images (~ 1.9 pixels cm-2, ~ 1.3 cm edge length) were taken per reach from 292 which 41 to 56 were selected to construct the image mosaic. The images collection took around 293 one hour per reach per sampling campaign. Reduced illumination of the submerged vegetation 294 target for the April and September data due to low sun angles, made macrophytes less visible in 295 the images. Delineation of the vegetation patches was still possible but the vegetation cover may 296 have been underestimated. Processing of the images took around two days for months with a low 297 vegetation abundance (< 30%) and around three days for months with a high vegetation cover (> 298 30%). 299 300 14 Spatio-temporal vegetation cover 301 The total vegetation cover and partial species cover is given per month for the two reaches (Fig. 302 4). In the Zwarte Nete, the total vegetation cover increases from April to August and suddenly 303 decreases in September due to the scheduled mowing event on 28 August 2013. The dominant 304 species in the Zware Nete are S. emersum and M. spicatum during the sampling period (Fig. 4a). 305 The natural development of the vegetation cover in the Desselse Nete is different. 
The growth was disturbed by an extra mowing activity on 25 June 2013 for management and safety regulations. Two months later, a scheduled mowing event took place on 26 August 2013. P. natans is the most abundant species in the Desselse Nete each month and recovered completely 8 weeks after the first mowing event (Fig. 4b).
The validation of the image method with the ground survey showed that the accuracy of species identification is very high (> 97 %) in the study reach dominated by a single species (Desselse Nete) (Tab. 2). These high values are due to the relatively simple composition of the vegetation patches, where the whole reach is covered by a single species. On the contrary, the accuracy is lower (> 59 %) in the river with a heterogeneous composition of multiple species, particularly in months when the vegetation patches are developing (June and July). The accuracy with which the exact location of vegetation patches can be determined is thus limited in those months.
For the months with a cover accuracy of less than 95 %, the relative cover of each vegetation class is given separately in Tab. 3. The difference in cover between the ground survey method and the image method for each vegetation class is less than 12 %. This means that the cover per vegetation class agrees well between both methods.

Patch size distribution
In total 262 vegetation patches were mapped in August in the Zwarte Nete, of which 143 were C. obtusangula patches. The surface area of these patches ranged between 0.04 m2 and 2.76 m2. The size frequency distribution of the patches is plotted on a double logarithmic scale (Fig. 5). A significant power-law relationship was found for the upper part of the distribution (least squares regression on the log-transformed data; p < 0.001, R2 = 0.99; 59 % of the data).

Biomass
The total biomass per reach is estimated with the image analysis method on a monthly basis (Tab. 4).
The monthly conversion factors are given in Tab. 7. The mowed vegetation was immediately weighed (FW) and converted to dry weight using the measured FW:DW conversion factor of 10.3. The total biomass (gDW m-2) obtained by the image analysis method does not significantly differ from the biomass (gDW m-2) obtained by the mowing method. The one-sample t-tests gave p-values of 0.797 and 0.198 for the Zwarte Nete and Desselse Nete, respectively.

Hydraulic resistance of vegetated rivers
Variation of the Manning coefficient over time is shown for the Zwarte Nete and Desselse Nete in Fig. 6. In the Zwarte Nete, the Manning coefficient based on hydraulic data (Eq. 2) increases from April to August and decreases in September to values similar to those of April. The Manning coefficients of the Zwarte Nete calculated with the empirical equations (Eq. 3 and Eq. 4) are in good agreement. The largest difference is found in August, with values of 0.26, 0.30 and 0.20 for Eq. 2, Eq. 3 and Eq. 4, respectively. The Manning coefficient based on hydraulic data (Eq. 2) varies between 0.03 and 0.17 in the Desselse Nete. The empirically based Manning coefficients
This method has six main advantages: (i) it can be applied in rivers with any kind of vegetation cover; (ii) it has a very high spatial resolution, around 1.9 pixels cm-2 (~1.3 cm edge length), and a very flexible temporal resolution, with the frequency only dependent on the availability of suitable weather conditions; (iii) it is relatively fast, requiring two to three days to collect and process the data of a 180 m reach; (iv) it is non-destructive, in contrast to other methods where sampling is involved; (v) the equipment is relatively cheap, with a one-time cost of approximately €2000 for the camera, lens, control software and memory card; (vi) the software used to process the data is widely available, and similar open source alternatives exist. Tab. 5 shows the performance of the current method in comparison to five other commonly used remote sensing approaches with optical imagery. The spectral range and spectral resolution depend on the sensor for all platforms mentioned in Tab. 5. Manned aircraft imaging can have a wide range of spectral resolutions, from very narrow-band hyperspectral imagery to one very broad band for a panchromatic image. Similarly for satellite imagery, sensors with a high spectral resolution are available. However, these images are of low spectral quality and low spatial resolution. Hyperspectral sensors with a high spectral resolution are available for unmanned aerial vehicles but can only be mounted on larger vehicles and do not achieve the high spatial resolution that can be obtained with RGB cameras. The current method is particularly suitable for studies in river reaches which are difficult to access and require high spatial resolution. In addition, limited technical training is required to pre- and post-process the images. The method can be used in its current form in relatively small study areas for monitoring, modelling and management purposes.
Applying this method in larger study areas would require further automation of image collection, e.g. by attaching the camera to an Unmanned Aerial Vehicle (UAV) (Husson et al. 2014; Tamminga et al. 2015), and of image classification, e.g. by applying the OBIA method (Visser et al. 2016).
The image data collection requires suitable light and site conditions. The water needs to be clear (i.e. ideally < 1 m deep with low turbidity) (Visser et al. 2013), and the water velocity should be low to limit stem motion (i.e. ideally < 1 m s-1) (Franklin et al. 2008). These site conditions are similar to the requirements for the occurrence of macrophytes in the first place (Riis and Biggs 2003). However, the water can be temporarily less clear after storm events. In this case it is recommended to wait a few days until the concentration of suspended sediment is reduced. Light intensity should be sufficient to penetrate the surface and illuminate the submerged macrophytes. The angle between the sun and the camera should be around 45° to minimize sun glint and maximize the light availability in the water. The time of image collection depends on the latitude of the study area; in Belgium (latitude 52°), for example, this is around noon, between 11 a.m. and 1 p.m. summertime. The image collection can only take place under these specified good weather conditions. This limits the data collection frequency, but very high frequency data is rarely needed for monitoring vegetation. Techniques currently under development may in the near future allow the removal of remaining surface reflection (Hardesty 2015). Other requirements are related to the study area itself. The rivers and streams should be relatively small, i.e. <10 m wide, which is the equivalent of the spatial extent covered by one image, and at least one river bank should be accessible and stable enough to position the pole.
Yet these limitations to the study area can be overcome by attaching the camera to an Unmanned Aerial Vehicle (UAV) (Husson et al. 2014; Tamminga et al. 2015) or to a helium balloon, or by attaching the pole to the bow of a boat (Lirman and Deangelo 2007). This makes it possible to collect similar resolution data from close to the water surface of larger rivers. However, helium balloons need to be sufficiently big to carry a DSLR camera, which makes them rather impractical and, in the long run, quite expensive platforms (due to the cost of helium). UAVs are a good alternative since battery life is improving year on year. Currently the only disadvantages of a rotary-winged UAV platform are (i) the need for training to fly the vehicle, which may be costly; (ii) the purchase and insurance of a suitable quality UAV and camera; (iii) the transport of larger UAVs. UAVs are therefore the platform of choice for further development of the method proposed in this paper.

The image processing as it was done in this study works well, yet improvements are possible in the delineation and identification of the vegetation patches. This study used a manual interpretation based on expert judgement, which is a sound method to separate between different species (Husson et al. 2014), because the manual delineation and identification uses many image elements such as size, shape, shadow, colour, texture, pattern, location and surroundings (Colwell 1960; Tempfli et al. 2009). However, observer bias can still be present, since this method makes use of manual decision rules concerning the exact edge of the vegetation patches. In the study reach dominated by a single species the accuracy is very high (>97 %). These high values are due to the relatively simple composition of the vegetation patches, where the whole reach is covered by a single species (Desselse Nete).
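The per-grid-cell accuracy comparison between the image method and the ground survey can be sketched as follows. This is a minimal illustration of the idea, not the study's code, and the species labels in the example are hypothetical.

```python
def identification_accuracy(image_labels, ground_labels):
    """Percentage of grid cells in which the image method and the ground
    survey assign the same class (a species or bare sediment)."""
    if len(image_labels) != len(ground_labels):
        raise ValueError("both surveys must cover the same grid cells")
    matches = sum(img == gs for img, gs in zip(image_labels, ground_labels))
    return 100.0 * matches / len(image_labels)
```

Applied to the 396 grid cells of a reach per month, this kind of comparison would yield the monthly accuracy values of the type reported in Tab. 2.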
In contrast, the accuracy is lower (>59 %) in the river with a heterogeneous composition of multiple species (Zwarte Nete), particularly in months when the vegetation patches are developing (June and July). If we compare the relative cover of each vegetation class between the image method and the ground survey, differences are less than 12 %. The image method proved suitable to estimate the relative cover of each vegetation class in rivers with both continuous and discontinuous vegetation cover. However, it is difficult to map the exact location of all vegetation patches in rivers with heterogeneous vegetation cover. This is due to the movement of the vegetation patches by the flowing water and the relatively simple image processing.

Another limitation is the detection of rare species which are normally not abundant; e.g. C. obtusangula was detected by the ground survey in June and July in the Zwarte Nete but not by the image method. The last limitation is the separation of multi-layered plant communities; e.g. P. natans was classified as S. emersum in August (Desselse Nete), while only a few leaves of S. emersum were present on top of P. natans. Similar limitations were found by Anker et al. (2014). From the images, plant growth form (submerged, submerged with floating leaves, emergent) can be easily recognized, as well as species identification up to genus level. A classification up to species level is possible, but requires knowledge of the species present in the reach. This information can simply be obtained during the collection of the images at the field site. Automatic classification methods based on variation in spectral signatures of different vegetation types could not be used to automatically delineate and identify vegetation patches under these specific circumstances.
The varying incidence of light, the prevailing sub-optimal light conditions during the sampling campaign and the submergence depth of the vegetation all caused complications for automated species detection (Visser et al. 2013). We acknowledge this drawback of the manual image processing, which increases the cost of data processing and may make this method less cost-effective. Attention should be given to reducing phenological (space and time) differences in the classification to make this technique suitable for long-term monitoring. Two solutions have been proposed: (i) convert the Red-Green-Blue colours to the green chromatic coordinate (G/[R + G + B]); (ii) use the 90th percentile of all daytime values within a three-day window around the centre day (Dronova 2017; Sonnentag et al. 2012). However, it may not be straightforward to apply similar algorithms to submerged aquatic vegetation, where the relative variation in Red-Green-Blue values at any point can differ due to water depth differences. Alternative image analysis approaches such as object-based image analysis (OBIA) are less reliant on spectral information and may mitigate such conditions; however, applications of such approaches in submerged environments are still in a developmental stage (Visser et al. 2016). OBIA is currently applied in other ecosystems. For example, Laba et al. (2010) used a maximum-likelihood classification in tidal marshes, which resulted in a classification accuracy between 45 and 77 %. In offshore submerged environments, OBIA-based approaches have so far achieved good results for mapping coarse vegetation and substrate classes. For example, the extent of seagrass habitat was mapped by Baumstark et al. (2016), showing a slightly higher accuracy using OBIA (78 %) compared to photo-interpretation (71 %).
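The green chromatic coordinate conversion mentioned above (solution i) is straightforward to compute per pixel; a minimal sketch with NumPy, guarding against division by zero on black pixels:

```python
import numpy as np


def green_chromatic_coordinate(rgb):
    """Per-pixel gcc = G / (R + G + B) for an array of shape (..., 3).

    Fully black pixels (R + G + B == 0) are returned as 0 to avoid
    division by zero.
    """
    rgb = np.asarray(rgb, dtype=float)
    total = rgb.sum(axis=-1)
    return np.divide(rgb[..., 1], total, out=np.zeros_like(total), where=total > 0)
```

The coordinate normalizes out overall brightness, which is why it is used to suppress illumination differences in repeat photography; whether it also compensates for depth-dependent attenuation under water is, as noted above, an open question.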
The image analysis method proved suitable for measuring the spatio-temporal vegetation cover, which is a primary parameter for monitoring vegetated ecosystems. For instance, within the Water Framework Directive it is essential for long-term monitoring of vegetation abundance (Hering et al. 2010). Changes in abundance and location of the vegetation were derived directly from the image data. For example, the regrowth capacity of P. natans was high after the mowing event in June, and pre-mowing cover values were reached within 8 weeks, which is similar to other macrophyte species (Bal et al. 2006). Other, more conventional methods to estimate vegetation cover range from fast methods with a high observer bias due to expert judgement (Tansley scaling method, based on 5 classes) to more detailed scaling methods, which have a higher accuracy but are more time-consuming and require substantial expert knowledge (Braun-Blanquet scaling method, based on 9 classes (Blanquet 1928), and Londo scaling method, based on at least 21 classes (Londo 1976)). These methods have two main disadvantages. Firstly, abundance class errors are difficult to correct, even with substantial expert knowledge (Wiederkehr et al. 2015). Secondly, the classification of the cover makes use of discontinuous class scales, which are less accurate and can hamper data analyses. Hence the image analysis method fulfils the requirement of a more objective quantification of the cover, with a continuous cover scale and high spatial and flexible temporal resolution.

The cover maps were also used in this study to investigate the presence of spatial self-organisation of macrophytes in lowland rivers. A significant power-law relationship in the frequency distribution of the patch sizes was found, which is an indication of spatial self-organisation (Newman 2005; Scanlon et al. 2007).
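A power-law check of this kind can be sketched as below. This is an illustrative variant, not the study's analysis: it regresses the log of the empirical complementary cumulative distribution (rather than binned frequencies) against log patch area, restricted to the upper part of the distribution, and is demonstrated on synthetic Pareto-distributed areas.

```python
import numpy as np


def fit_power_law_tail(areas, tail_fraction=0.59):
    """Least-squares power-law fit to the upper tail of a patch-size
    distribution, on log-transformed data.

    tail_fraction selects the upper part of the sorted areas (59 % here,
    echoing the share of the data used in the text).
    Returns (slope, R^2) of the log-log regression.
    """
    areas = np.sort(np.asarray(areas, dtype=float))
    n = len(areas)
    # Empirical complementary cumulative frequency P(X >= x_i).
    ccdf = 1.0 - np.arange(n) / n
    start = int(n * (1.0 - tail_fraction))
    x = np.log10(areas[start:])
    y = np.log10(ccdf[start:])
    slope, intercept = np.polyfit(x, y, 1)
    y_hat = slope * x + intercept
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return slope, 1.0 - ss_res / ss_tot
```

On genuinely power-law-distributed patch areas the log-log fit is close to a straight line, giving a strongly negative slope and an R^2 near 1.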
This is in agreement with the study of Schoelynck et al. (2012), who investigated the spatial self-organisation of macrophytes in the same reach of the Zwarte Nete in 2008. In that study the exact location of all vegetation patches was determined using an electronic theodolite. It took roughly three weeks to map the whole reach, which is much slower than the new method, where we needed 1 hour to collect the images and two to three days to process the data. Obtaining spatial information on vegetation is thus much faster compared to conventional methods.

From the cover data, biomass can be derived using simple non-destructive cover:biomass conversion factors. These conversion factors can be determined for the specific field site or obtained from the literature (e.g. Madsen and Adams (1989); Flynn et al. (2002)). The biomass (gDW m-2) estimated by the image analysis method was compared to the biomass obtained from the scheduled mowing method. The biomass obtained by the two methods does not significantly differ for either of the two reaches. The relatively small differences may be attributed to inaccuracies in both methods. During the scheduled mowing, the biomass could have been slightly overestimated if non-plant materials like sediment, stones and dead wood were removed too, adding to the total fresh weight, or underestimated if not all the vegetation was removed. However, we only assessed the biomass in a month with high biomass. A higher relative difference in biomass might be expected when less biomass is present, but this would result in low absolute differences. The image analysis method may also have certain flaws and uncertainties involving the estimation of the species-specific biomass obtained from the plots. The within-species variation of the biomass may not be fully captured by three replicates (e.g.
by depth variance of the river and of the vegetation). The image analysis method does not account for variability in the density. Classic methods of biomass estimation are based on destructive measures of the biomass (mowing, harvesting), which disturb the follow-up of natural vegetation development during the growth season (Wood et al. 2012).

The Manning coefficient based on empirical relationships differs from the one based on hydraulic data by less than 23 % in the Zwarte Nete and less than 37 % in the Desselse Nete. The empirical relationships do not account for the species composition or the horizontal and vertical distribution of the vegetation, which are different in the two rivers and are major determining factors of the hydraulic resistance of the reach. The Zwarte Nete is dominated by submerged vegetation, and this vegetation type has similar effects on the hydraulic resistance as the vegetation used to construct Eq. 3 and Eq. 4. The Desselse Nete is dominated by the floating species P. natans, which is a more open species that concentrates the majority of the biomass near the water surface, leading to a limited interaction with the water flow: rivers with macrophytes can have a 2- to 7-fold increase in resistance for floating (Green 2005a) and submerged (Bal and Meire 2009) species, respectively, compared to rivers without vegetation. The same vegetation biomass or cover will therefore result in a lower hydraulic resistance. Detailed 2D hydrodynamic models can be used to quantify more accurately the hydraulic resistance created by the vegetation, based on plant density, species characteristics and spatial distribution of the vegetation (Verschoren et al. 2016). Accurate 2D spatio-temporal vegetation cover data, as obtained by the digital cover photography technique, are indispensable to calibrate and validate these models.
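The three Manning formulations compared in this section (Eq. 2 from hydraulic data, Eq. 3 from vegetation cover, Eq. 4 from biomass; see Table 1) can be sketched as follows. The coefficients are taken directly from the equations in Table 1; the functions themselves are an illustrative transcription, not the authors' implementation.

```python
import math


def manning_hydraulic(A, Q, R, S):
    """Eq. 2: n = (A/Q) * R^(2/3) * S^(1/2), with A cross-sectional area
    (m2), Q discharge (m3 s-1), R hydraulic radius (m), S water level
    slope (m m-1)."""
    return (A / Q) * R ** (2.0 / 3.0) * math.sqrt(S)


def manning_from_cover(cover_pct):
    """Eq. 3: empirical fit of n to vegetation cover (%)."""
    return 0.0438 * math.exp(0.0200 * cover_pct)


def manning_from_biomass(biomass_gdw_m2):
    """Eq. 4: empirical fit of n to biomass (g DW m-2)."""
    return 0.4628 - 0.3998 * math.exp(-0.0047 * biomass_gdw_m2)
```

For instance, a cover of 97.5 % gives n ≈ 0.31 via Eq. 3, while an unvegetated channel (cover and biomass of 0) gives the baseline values 0.0438 and 0.0630 for Eq. 3 and Eq. 4, respectively.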
The spatial distribution of the vegetation is a direct input to these models. These models therefore account for the exact location of all vegetation patches and the different plant characteristics of all species. This is a major leap forward for engineers and water managers in the fine-tuning of hydrodynamic models of vegetated rivers.

Conclusions
We successfully applied a digital cover photography technique based on orthogonal aerial images with a very high spatial (subcentimetre) and flexible temporal (monthly) resolution. The produced vegetation maps were used to assess four key properties of vegetated lowland rivers which are important for monitoring, modelling and management: spatio-temporal variation in vegetation cover, patch size distribution, biomass and hydraulic resistance.
The main limitations are related to the study area itself, which should be limited in size, and to the prevailing light conditions, which should be sufficient to look into the water. Improvements in the image processing lie in the automatic delineation and classification of the vegetation patches.

Acknowledgements
The funding for this research was provided partly by the Research Fund Flanders (FWO, project no. G.0290.10) via the multidisciplinary research project ‘Linking optical imaging techniques and 2D-modelling for studying spatial heterogeneity in vegetated streams and rivers’ (University of Antwerp and University of Ghent) and partly by the Province of Antwerp, Departement Leefmilieu, Dienst Integraal Waterbeleid (Report number ECOBE – 014 – R179). V.V. thanks the Institute for the Promotion of Innovation through Science and Technology in Flanders (IWT-Vlaanderen) for personal research funding. J.S. is a postdoctoral fellow of FWO (project no. 12H8616N).

References
Anker, Y., Y. Hershkovitz, E. Ben Dor & A. Gasith, 2014.
Application of Aerial Digital Photography for Macrophyte Cover and Composition Survey in Small Rural Streams. River Res Appl 30(7):925-937.
Apollo Mapping, 2016. RapidEye. https://apollomapping.com/imagery/medium-resolution-satellite-imagery.
Bal, K. D. & P. Meire, 2009. The Influence of Macrophyte Cutting on the Hydraulic Resistance of Lowland Rivers. J Aquat Plant Manage 47:65-68.
Bal, K. D., E. Struyf, H. Vereecken, P. Viaene, L. De Doncker, E. de Deckere, F. Mostaert & P. Meire, 2011. How do macrophyte distribution patterns affect hydraulic resistances? Ecol Eng 37(3):529-533.
Bal, K. D., S. Van Belleghem, E. De Deckere & P. Meire, 2006. The re-growth capacity of sago pondweed following mechanical cutting. J Aquat Plant Manage 44:139-141.
Barrell, J. & J. Grant, 2015. High-resolution, low-altitude aerial photography in physical geography: A case study characterizing eelgrass (Zostera marina L.) and blue mussel (Mytilus edulis L.) landscape mosaic structure. Prog Phys Geog 39(4):440-459.
Battle, J. M. & T. B. Mihuc, 2000. Decomposition dynamics of aquatic macrophytes in the lower Atchafalaya, a large floodplain river. Hydrobiologia 418:123-136.
Baumstark, R., R. Duffey & R. Pu, 2016. Mapping seagrass and colonized hard bottom in Springs Coast, Florida using WorldView-2 satellite imagery. Estuarine, Coastal and Shelf Science 181:83-92.
Blanquet, B., 1928. Pflanzensoziologie. Grundzüge der Vegetationskunde, 1st ed. Berlin.
Borin, M. & M. Salvato, 2012. Effects of five macrophytes on nitrogen remediation and mass balance in wetland mesocosms. Ecol Eng 46:34-42.
Bryson, M., M. Johnson-Roberson, R. J. Murphy & D. Bongiorno, 2013. Kite Aerial Photography for Low-Cost, Ultra-high Spatial Resolution Multi-Spectral Mapping of Intertidal Landscapes. Plos One 8(9).
Carter, G. A., A. K. Knapp, J. E. Anderson, G. A. Hoch & M. D. Smith, 2005.
Indicators of plant species richness in AVIRIS spectra of a mesic grassland. Remote Sens Environ 98(2-3):304-316.
Champion, P. D. & C. C. Tanner, 2000. Seasonality of macrophytes and interaction with flow in a New Zealand lowland stream. Hydrobiologia 441(1-3):1-12.
Chow, V. T., 1959. Open-channel hydraulics. McGraw-Hill, New York, USA.
Colwell, R., 1960. Manual of photographic interpretation. Washington, DC.
Cordova, J. M., E. J. Rosi-Marshall, J. L. Tank & G. A. Lamberti, 2008. Coarse particulate organic matter transport in low-gradient streams of the Upper Peninsula of Michigan. J N Am Benthol Soc 27(3):760-771.
De Doncker, L., P. Troch, R. Verhoeven, K. Bal, N. Desmet & P. Meire, 2009a. Relation between Resistance Characteristics Due to Aquatic Weed Growth and the Hydraulic Capacity of the River Aa. River Res Appl 25(10):1287-1303.
De Doncker, L., P. Troch, R. Verhoeven, K. Bal, P. Meire & J. Quintelier, 2009b. Determination of the Manning roughness coefficient influenced by vegetation in the river Aa and Biebrza river. Environ Fluid Mech 9(5):549-567.
Dhote, S. & S. Dixit, 2009. Water quality improvement through macrophytes: a review. Environ Monit Assess 152(1-4):149-153.
Dinka, M., E. Agoston-Szabo & I. Toth, 2004. Changes in nutrient and fibre content of decomposing Phragmites australis litter. Int Rev Hydrobiol 89(5-6):519-535.
Dronova, I., 2017. Environmental heterogeneity as a bridge between ecosystem service and visual quality objectives in management, planning and design. Landscape and Urban Planning 163:90-106.
EU, 2000. Directive 2000/60/EC. A framework for the Community action in the field of water policy (the EU Water Framework Directive). The European Parliament and the European Council of Ministers, Brussels, Belgium.
Extence, C. A., D. M. Balbi & R. P. Chadd, 1999.
River flow indexing using British benthic macroinvertebrates: A framework for setting hydroecological objectives. Regul River 15(6):543-574.
Flynn, K. F. & S. C. Chapra, 2014. Remote Sensing of Submerged Aquatic Vegetation in a Shallow Non-Turbid River Using an Unmanned Aerial Vehicle. Remote Sens-Basel 6(12):12815-12836.
Flynn, N. J., D. L. Snook, A. J. Wade & H. P. Jarvie, 2002. Macrophyte and periphyton dynamics in a UK Cretaceous chalk stream: the River Kennet, a tributary of the Thames. Sci Total Environ 282:143-157.
Foody, G. M., 2002. Status of land cover classification accuracy assessment. Remote Sens Environ 80(1):185-201.
Franklin, P., M. Dunbar & P. Whitehead, 2008. Flow controls on lowland river macrophytes: A review. Sci Total Environ 400(1-3):369-378.
Gibbins, C., D. Vericat & R. J. Batalla, 2007. When is stream invertebrate drift catastrophic? The role of hydraulics and sediment transport in initiating drift during flood events. Freshwater Biol 52(12):2369-2384.
Green, J. C., 2005a. Comparison of blockage factors in modelling the resistance of channels containing submerged macrophytes. River Res Appl 21(6):671-686.
Green, J. C., 2005b. Modelling flow resistance in vegetated streams: review and development of new theory. Hydrol Process 19(6):1245-1259.
Guichard, F., E. Bourget & J. P. Agnard, 2000. High-resolution remote sensing of intertidal ecosystems: A low-cost technique to link scale-dependent patterns and processes. Limnol Oceanogr 45(2):328-338.
Gurnell, A. M., M. P. van Oosterhout, B. de Vlieger & J. M. Goodson, 2006. Reach-scale interactions between aquatic plants and physical habitat: River Frome, Dorset. River Res Appl 22(6):667-680.
Hardesty, L., 2015. Removing reflections from photos taken through windows: new algorithm exploits multiple reflections in individual images to distinguish reflection from transmission.
Massachusetts Institute of Technology.
Hering, D., A. Borja, J. Carstensen, L. Carvalho, M. Elliott, C. K. Feld, A. S. Heiskanen, R. K. Johnson, J. Moe, D. Pont, A. L. Solheim & W. Van De Bund, 2010. The European Water Framework Directive at the age of 10: A critical review of the achievements with recommendations for the future. Sci Total Environ 408(19):4007-4019.
Hestir, E. L., S. Khanna, M. E. Andrew, M. J. Santos, J. H. Viers, J. A. Greenberg, S. S. Rajapakse & S. L. Ustin, 2008. Identification of invasive vegetation using hyperspectral remote sensing in the California Delta ecosystem. Remote Sens Environ 112(11):4034-4047.
Horvath, T. G., 2004. Retention of particulate matter by macrophytes in a first-order stream. Aquat Bot 78(1):27-36.
Husson, E., O. Hagner & F. Ecke, 2014. Unmanned aircraft systems help to map aquatic vegetation. Appl Veg Sci 17(3):567-577.
Krause, S., D. M. Hannah, J. H. Fleckenstein, C. M. Heppell, D. Kaeser, R. Pickup, G. Pinay, A. L. Robertson & P. J. Wood, 2011. Inter-disciplinary perspectives on processes in the hyporheic zone. Ecohydrology 4(4):481-499.
Laba, M., B. Blair, R. Downs, B. Monger, W. Philpot, S. Smith, P. Sullivan & P. C. Baveye, 2010. Use of textural measurements to map invasive wetland plants in the Hudson River National Estuarine Research Reserve with IKONOS satellite imagery. Remote Sensing of Environment 114:876-886.
Lamberti, G. A., S. V. Gregory, L. R. Ashkenas, R. C. Wildman & A. D. Steinman, 1989. Influence of Channel Geomorphology on Retention of Dissolved and Particulate Matter in a Cascade Mountain Stream. Us for Serv T R Psw 110:33-39.
Legleiter, C. J., 2003. Spectrally driven classification of high spatial resolution, hyperspectral imagery: A tool for mapping in-stream habitat. Environ Manage 32(3):399-411.
Lejeune, O., M. Tlidi & R. Lefever, 2004.
Vegetation spots and stripes: Dissipative structures in arid landscapes. Int J Quantum Chem 98(2):261-271.
Lirman, D. & G. Deangelo, 2007. Geospatial video monitoring of benthic habitats using the Shallow-Water Positioning System (SWaPS). Oceans-Ieee:1176-1180.
Londo, G., 1976. The decimal scale for releves of permanent quadrats, vol 33.
Madsen, J. D. & M. S. Adams, 1989. The Distribution of Submerged Aquatic Macrophyte Biomass in a Eutrophic Stream, Badfish Creek: the Effect of Environment. Hydrobiologia 171(2):111-119.
Madsen, J. D., P. A. Chambers, W. F. James, E. W. Koch & D. F. Westlake, 2001. The interaction between water movement, sediment dynamics and submersed macrophytes. Hydrobiologia 444(1-3):71-84.
Masek, J. G., D. J. Hayes, M. J. Hughes, S. P. Healey & D. P. Turner, 2015. The role of remote sensing in process-scaling studies of managed forest ecosystems. Forest Ecol Manag 355:109-123.
Menendez, M., O. Hernandez & F. A. Comin, 2003. Seasonal comparisons of leaf processing rates in two Mediterranean rivers with different nutrient availability. Hydrobiologia 495(1-3):159-169.
Mitchell, J. J., R. Shrestha, L. P. Spaete & N. F. Glenn, 2015. Combining airborne hyperspectral and LiDAR data across local sites for upscaling shrubland structural information: Lessons for HyspIRI. Remote Sens Environ 167:98-110.
NASA, 2016. MODIS, Moderate Resolution Imaging Spectroradiometer. http://modis.gsfc.nasa.gov.
Newman, M. E. J., 2005. Power laws, Pareto distributions and Zipf's law. Contemporary Physics 46(5):323-351.
Nezlin, N. P., K. Kamer & E. D. Stein, 2007. Application of color infrared aerial photography to assess macroalgal distribution in an eutrophic estuary, upper Newport Bay, California. Estuar Coast 30(5):855-868.
Pekin, B. & C. Macfarlane, 2009.
Measurement of Crown Cover and Leaf Area Index Using Digital Cover Photography and Its Application to Remote Sensing. Remote Sens-Basel 1(4):1298-1320.
R Core Team, 2014. R: A language and environment for statistical computing.
Rango, A., A. Laliberte, K. Havstad, C. Winters, C. Steele & D. Browning, 2010. Rangeland Resource Assessment, Monitoring, and Management Using Unmanned Aerial Vehicle-Based Remote Sensing. Int Geosci Remote Se:608-611.
Rietkerk, M., S. C. Dekker, P. C. de Ruiter & J. van de Koppel, 2004. Self-organized patchiness and catastrophic shifts in ecosystems. Science 305(5692):1926-1929.
Riis, T. & B. J. F. Biggs, 2003. Hydrologic and hydraulic control of macrophyte establishment and performance in streams. Limnol Oceanogr 48(4):1488-1497.
Sand-Jensen, K., K. Andersen & T. Andersen, 1999. Dynamic properties of recruitment, expansion and mortality of macrophyte patches in streams. Int Rev Hydrobiol 84(5):497-508.
Satellite Imaging Corporation, 2016. Ikonos Satellite Sensor. http://www.satimagingcorp.com/satellite-sensors/ikonos/.
Scanlon, T. M., K. K. Caylor, S. A. Levin & I. Rodriguez-Iturbe, 2007. Positive feedbacks promote power-law clustering of Kalahari vegetation. Nature 449(7159):209-212.
Schoelynck, J., T. De Groote, K. Bal, W. Vandenbruwaene, P. Meire & S. Temmerman, 2012. Self-organised patchiness and scale-dependent bio-geomorphic feedbacks in aquatic river vegetation. Ecography 35(8):760-768.
Seitzinger, S., J. A. Harrison, J. K. Bohlke, A. F. Bouwman, R. Lowrance, B. Peterson, C. Tobias & G. Van Drecht, 2006. Denitrification across landscapes and waterscapes: A synthesis. Ecol Appl 16(6):2064-2090.
Shuchman, R. A., M. J. Sayers & C. N. Brooks, 2013. Mapping and monitoring the extent of submerged aquatic vegetation in the Laurentian Great Lakes with multi-scale satellite remote sensing. J Great Lakes Res 39:78-89.
Silva, T. S.
F., M. P. F. Costa, J. M. Melack & E. M. L. M. Novo, 2008. Remote sensing of aquatic vegetation: theory and applications. Environ Monit Assess 140(1-3):131-145.
Sonnentag, O., K. Hufkens, C. Teshera-Sterne, A. M. Young, M. A. Friedl, B. H. Braswell, T. Milliman, J. O'Keefe & A. D. Richardson, 2012. Digital repeat photography for phenological research in forest ecosystems. Agricultural and Forest Meteorology 152:159-177.
Tamminga, A., C. Hugenholtz, B. Eaton & M. Lapointe, 2015. Hyperspatial Remote Sensing of Channel Reach Morphology and Hydraulic Fish Habitat Using an Unmanned Aerial Vehicle (UAV): A First Assessment in the Context of River Research and Management. River Res Appl 31(3):379-391.
Tempfli, K., N. Kerle, G. Huurneman & L. Janssen, 2009. Principles of remote sensing. Enschede, NL.
U.S. Department of the Interior & U.S. Geological Survey, 2016. Landsat Missions. http://landsat.usgs.gov.
van de Koppel, J., J. C. Gascoigne, G. Theraulaz, M. Rietkerk, W. M. Mooij & P. M. J. Herman, 2008. Experimental Evidence for Spatial Self-Organization and Its Emergent Effects in Mussel Bed Ecosystems. Science 322(5902):739-742.
van der Wal, D., J. van Dalen, A. Wielemaker-van den Dool, J. T. Dijkstra & T. Ysebaert, 2014. Biophysical control of intertidal benthic macroalgae revealed by high-frequency multispectral camera images. J Sea Res 90:111-120.
Vereecken, H., J. Baetens, P. Viaene, F. Mostaert & P. Meire, 2006. Ecological management of aquatic plants: effects in lowland streams. Hydrobiologia 570:205-210.
Verschoren, V., D. Meire, J. Schoelynck, K. Buis, K. D. Bal, P. Troch, P. Meire & S. Temmerman, 2016. Resistance and reconfiguration of natural flexible submerged vegetation in hydrodynamic river modelling. Environ Fluid Mech 16(1):245-265.
Villa, P., M. Bresciani, R. Bolpagni, M. Pinardi & C. Giardino, 2015.
A rule-based approach for mapping macrophyte communities using multi-temporal aquatic vegetation indices. Remote Sens Environ 171:218-233.
Visser, F., C. Wallis & A. M. Sinnott, 2013. Optical remote sensing of submerged aquatic vegetation: Opportunities for shallow clearwater streams. Limnologica 43(5):388-398.
Visser, F., K. Buis, V. Verschoren & J. Schoelynck, 2016. Mapping of submerged aquatic vegetation in rivers from very high-resolution image data, using object-based image analysis combined with expert knowledge. Hydrobiologia, doi:10.1007/s10750-016-2928-y.
Wiederkehr, J., C. Grac, M. Fabregue, B. Fontan, F. Labat, F. Le Ber & M. Tremolieres, 2015. Experimental study of uncertainties on the macrophyte index (IBMR) based on species identification and cover. Ecol Indic 50:242-250.
Wood, K. A., R. A. Stillman, R. T. Clarke, F. Daunt & M. T. O'Hare, 2012. Measuring submerged macrophyte standing crop in shallow rivers: A test of methodology. Aquat Bot 102:28-33.

Tables
Table 1: Overview of the equations used to calculate the Manning coefficient, n (s m-1/3). Eq. 2 is used to calculate the Manning coefficient, with A (m2) cross-sectional area, Q (m3 s-1) discharge, R (m) hydraulic radius and S (m m-1) water level slope, for which all parameters are measured in both reaches of the study area. Eq. 3 and Eq. 4 are empirical relationships between the Manning coefficient and the vegetation cover (%) and between the Manning coefficient and the biomass (g DW m-2), respectively. All parameters are derived from the digital maps.

Reference | Equation | Number
Chow (1959) | n = (A/Q) * R^(2/3) * S^(1/2) | Eq. 2
Green (2005) | n = 0.0438 * exp(0.0200 * cover) | Eq. 3
De Doncker et al. (2009) | n = 0.4628 - 0.3998 * exp(-0.0047 * biomass) | Eq. 4

Table 2: The accuracy (%) of the species identification of the image method compared to the ground survey method per month per river.
The accuracy is based on species level; for each grid cell (n=396) the species is compared between 742 the image method and the ground survey method. 743 Month April May June July August September Zwarte Nete 100 100 66.4 59.6 84.8 93.7 Desselse Nete 100 100 100 100 97.0 100 744 745 32 Table 3:Percentage vegetation cover (%) estimated by the image method and the ground survey method (GS) for June, 746 July, August and September in the Zwarte Nete. 747 Month June July August September Method GS Image GS Image GS Image GS Image C. obtusangula 2.3 0.0 2.0 0.0 2.5 2.5 0.0 0.0 M. spicatum - - - - - - - - P. pectinatus 1.3 0.0 25.5 24.0 5.1 0.0 - - S. emersum 32.3 29.6 62.1 59.3 86.4 97.5 4.6 6.1 Riparian vegetation - - - - - - - - Bare sediment 64.1 70.4 10.4 16.6 1.0 0.0 95.5 94.0 748 33 Table 4:Total biomass (gDW m-2) per month in both rivers. The biomass is estimated by the image analysis method 749 and by mowing method when all vegetation was removed and weighed. 750 Month April May June July August September Zwarte Nete Image 0 3.3 ± 0.1 11.4 ± 3.0 56.8 ± 5.9 187.5 ± 34.2 10.8 ± 1.3 Mowing - - - - 193.3 - Desselse Nete Image 0.9 65.8 ± 8.8 101.6 ± 26.4 36.7 ± 7.9 150.4 ± 24.4 13.8 ± 3.5 Mowing - - - - 123.6 - 751 34 Table 5: Comparison of the current method with fiveother remote sensing approaches using optical imagery and ground level visual survey. The features where the 752 current method performs good are highlighted in bold. 753 Spatial resolution (pixel edge length) Temporal resolution Spectral region Operation cost Collection cost Spatial extent Weather dependency Knowledge requirements (obtaining, processing) This study < 1 cm Flexible RGB Low (man hours, consumables) Low m² Low (sun) Low Kite, blimp and balloon photography (Barrell and Grant 2015; Bryson et al. 2013; Guichard et al. 2000) < 5 cm Flexible RBG NIR Low (man hours, consumables) Medium m² Medium (sun, wind speed) Medium Unmanned aerial vehicles (Rango et al. 
2010) 1-10 cm (dependent on sensor and flight height) Flexible RBG NIR High (training, man hours, post-processing) Medium m² - hm² Medium (sun, wind speed) High Manned aircraft imaging 0.3 - 5m (dependent on flying height) Flexible RGB NIR MIR High (plane charter, post-processing) High m² - km² High (sun, sky conditions) High Freely available satellite images (NASA 2016; U.S. Department of the Interior and U.S. Geological Survey 2016) > 5 m Fixed Several/year (dependent on location and resolution) RGB NIR MIR 0 0 > 1 km² High (sun, sky conditions) Medium Commercial satellite images (Apollo Mapping 2016; Satellite Imaging Corporation 2016) 0.5-5 m Fixed 14-100 days (dependent on location and resolution) RGB NIR MIR 0 High > 1 km² High (sun, sky conditions) Medium Ground level visual survey Variable Flexible - High - m² None Low 754 35 Figures 755 756 Figure 1: The location of the study area is indicated with a black dot in the North East of Belgium. Insert: the location 757 of Belgium in Europa is shown in dark grey. 758 759 36 760 (a) (b) Figure 2: Illustrations of the image collection in the field. (a) The DSLR camera is attached with a ball head to a 761 handheld telescopic pole to take orthogonal images. (b) One person holds the pole with camera tilted in order to 762 position the camera at a height of 5 m above the water surface. A second person checks with a live view on a laptop 763 that both river banks are visible on each image and takes the images with tethered capture. 764 765 37 (a) (b) (c) (d) (e) (f) → → ↓ → ↓ → 38 Figure 3: Examples are given of the image collection, processing and analysis in the (a, c, e) Zwarte Nete and the (b, 766 d, e) Desselse Nete on the 13th of August 2013. Illustrations are shown of (a, b) individual images taken with a DSLR 767 camera attached to a pole, (c, d) a plan view of a part of the image mosaic, (e, f) vegetation map with colors indicating 768 the species and the location of the ground survey (black rectangular). 
The water flow direction is indicated with an 769 arrow. 770 771 39 772 773 774 Figure 4: Vegetation cover per species per month for the reach in the (a) Zwarte Nete and (b) Desselse Nete. The 775 colors of the bars refer to the species, the same colors for the species as in Fig. 3 are used (submerged species: red-776 yellow, floating species: green, emerged species: blue). The total vegetation cover per month is added in italics. 777 778 40 779 Figure 5: The inverse cumulative distribution of the patch sizes of C. obtusangula plotted on a double logarithmic 780 scale. A power-law relationship is added with β = 0.6 of Eq.1 (p<0.001; R² = 0.99). 781 782 41 783 784 Figure 6: Manning coefficient in function of time for the (a) Zwarte Nete and (b) Desselse Nete. For the validation 785 the Manning coefficient is calculated with Eq. 2() based on field measurents, Table 1. The Manning coefficient is 786 calculated with Eq. 3 ( ) and Eq. 4 (), these are empirical relationships with cross-sectional blockage and biomass, 787 respectively see Table 1. 788 789 https://www.google.be/url?sa=i&rct=j&q=&esrc=s&source=images&cd=&cad=rja&uact=8&ved=0CAcQjRxqFQoTCNPXzJW67cgCFYbTFAodZGMLDw&url=https://www.knutselkraam.nl/papier-karton/vierkant-karton-27-x-135-cm-hsg-220-gr/vierkant-donker-grijs-220-gr/&psig=AFQjCNF6KWoadENbhQYIQe7V-LOuPvyvvw&ust=1446406477967479 42 Appendix 790 Table 6: Overview of the measured hydraulic data per river per month. These values are used to calculate the Manning 791 coefficient with Eq. 2. 
                             April   May     June    July    August  September
Zwarte Nete
  Discharge (m3 s-1)         0.28    0.5     0.23    0.25    0.2     0.46
  Cross-sectional area (m2)  1.06    1.39    1.28    1.97    2.36    1.69
  Hydraulic radius (m)       0.35    0.43    0.37    0.43    0.51    0.39
  Water level slope (m m-1)  0.0007  0.0007  0.0009  0.0012  0.0013  0.0005
Desselse Nete
  Discharge (m3 s-1)         0.45    0.61    0.33    0.39    0.32    0.61
  Cross-sectional area (m2)  1.43    1.63    1.76    1.90    2.65    2.32
  Hydraulic radius (m)       0.38    0.33    0.44    0.46    0.57    0.52
  Water level slope (m m-1)  -       0.0005  0.0009  0.0006  0.0009  0.0008

Table 7: The biomass:cover conversion factor, mean ± standard error (g m-2), is measured per month for C. obtusangula, S. emersum and P. natans (n=3). Note that no replicates were taken in April, so no standard error is given.

                Apr.   May           Jun.          Jul.          Aug.          Sept.
C. obtusangula  28.5   NA            114.8 ± 37.7  123.7 ± 8.6   238.4 ± 50.3  354.9 ± 41.0
P. natans       146.0  116.6 ± 15.9  172.8 ± 45.3  209.8 ± 48.3  174.5 ± 19.5  190.6 ± 67.6
S. emersum      1.1    4.6 ± 1.7     49.9 ± 8.5    85.3 ± 14.0   202.2 ± 62.0  64.0 ± 10.7
work_7aa2axdawfdf7pwbqly56kdupu ---- Digital Photography with Flash and No-Flash Image Pairs

Georg Petschnigg, Maneesh Agrawala, Hugues Hoppe, Richard Szeliski, Michael Cohen, Kentaro Toyama
Microsoft Corporation

Figure 1 (panels: Flash, No-Flash, Detail Transfer with Denoising; details: Orig. top, Detail Transfer bottom): This candlelit setting from the wine cave of a castle is difficult to photograph due to its low-light nature. A flash image captures the high-frequency texture and detail, but changes the overall scene appearance to cold and gray. The no-flash image captures the overall appearance of the warm candlelight, but is very noisy. We use the detail information from the flash image to both reduce noise in the no-flash image and sharpen its detail. Note the smooth appearance of the brown leather sofa and crisp detail of the bottles. For full-sized images, please see the supplemental DVD or the project website http://research.microsoft.com/projects/FlashNoFlash.

Abstract

Digital photography has made it possible to quickly and easily take a pair of images of low-light environments: one with flash to capture detail and one without flash to capture ambient illumination. We present a variety of applications that analyze and combine the strengths of such flash/no-flash image pairs. Our applications include denoising and detail transfer (to merge the ambient qualities of the no-flash image with the high-frequency flash detail), white-balancing (to change the color tone of the ambient image), continuous flash (to interactively adjust flash intensity), and red-eye removal (to repair artifacts in the flash image).
We demonstrate how these applications can synthesize new images that are of higher quality than either of the originals.

Keywords: Noise removal, detail transfer, sharpening, image fusion, image processing, bilateral filtering, white balancing, red-eye removal, flash photography.

1 Introduction

An important goal of photography is to capture and reproduce the visual richness of a real environment. Lighting is an integral aspect of this visual richness and often sets the mood or atmosphere in the photograph. The subtlest nuances are often found in low-light conditions. For example, the dim, orange hue of a candlelit restaurant can evoke an intimate mood, while the pale blue cast of moonlight can evoke a cool atmosphere of mystery.

When capturing the natural ambient illumination in such low-light environments, photographers face a dilemma. One option is to set a long exposure time so that the camera can collect enough light to produce a visible image. However, camera shake or scene motion during such long exposures will result in motion blur. Another option is to open the aperture to let in more light. However, this approach reduces depth of field and is limited by the size of the lens. The third option is to increase the camera's gain, which is controlled by the ISO setting. However, when exposure times are short, the camera cannot capture enough light to accurately estimate the color at each pixel, and thus visible image noise increases significantly.

Flash photography was invented to circumvent these problems. By adding artificial light to nearby objects in the scene, cameras with flash can use shorter exposure times, smaller apertures, and less sensor gain and still capture enough light to produce relatively sharp, noise-free images. Brighter images have a greater signal-to-noise ratio and can therefore resolve detail that would be hidden in the noise in an image acquired under ambient illumination.
Moreover, the flash can enhance surface detail by illuminating surfaces with a crisp point light source. Finally, if one desires a white-balanced image, the known flash color greatly simplifies this task.

As photographers know, however, the use of flash can also have a negative impact on the lighting characteristics of the environment. Objects near the camera are disproportionately brightened, and the mood evoked by ambient illumination may be destroyed. In addition, the flash may introduce unwanted artifacts such as red eye, harsh shadows, and specularities, none of which are part of the natural scene. Despite these drawbacks, many amateur photographers use flash in low-light environments, and consequently, these snapshots rarely depict the true ambient illumination of such scenes.

Today, digital photography makes it fast, easy, and economical to take a pair of images of low-light environments: one with flash to capture detail and one without flash to capture ambient illumination. (We sometimes refer to the no-flash image as the ambient image.) In this paper, we present a variety of techniques that analyze and combine features from the images in such a flash/no-flash pair:

Ambient image denoising: We use the relatively noise-free flash image to reduce noise in the no-flash image. By maintaining the natural lighting of the ambient image, our approach creates an image that is closer to the look of the real scene.

Flash-to-ambient detail transfer: We transfer high-frequency detail from the flash image to the denoised ambient image, since this detail may not exist in the original ambient image.

White balancing: The user may wish to simulate a whiter illuminant while preserving the "feel" of the ambient image. We exploit the known flash color to white-balance the ambient image, rather than relying on traditional single-image heuristics.

Continuous flash intensity adjustment: We provide continuous interpolation control between the image pair so that the user can interactively adjust the flash intensity. The user can even extrapolate beyond the original ambient and flash images.

Red-eye correction: We perform red-eye detection by considering how the color of the pupil changes between the ambient and flash images.

While many of these problems are not new, the primary contribution of our work is to show how to exploit information in the flash/no-flash pair to improve upon previous techniques¹. One feature of our approach is that the manual acquisition of the flash/no-flash pair is relatively straightforward with current consumer digital cameras. We envision that the ability to capture such pairs will eventually move into the camera firmware, thereby making the acquisition process even easier and faster.

One recurring theme of recent computer graphics research is the idea of taking multiple photographs of a scene and combining them to synthesize a new image. Examples of this approach include creating high dynamic range images by combining photographs taken at different exposures [Debevec and Malik 1997; Kang et al. 2003], creating mosaics and panoramas by combining photographs taken from different viewpoints [e.g. Szeliski and Shum 1997], and synthetically relighting images by combining images taken under different illumination conditions [Haeberli 1992; Debevec et al. 2000; Masselus et al. 2002; Akers et al. 2003; Agarwala et al. 2004]. Although our techniques involve only two input images, they share the similar goal of synthesizing a new image that is of better quality than any of the input images.

2 Background on Camera Noise

The intuition behind several of our algorithms is that while the illumination from a flash may change the appearance of the scene, it also increases the signal-to-noise ratio (SNR) in the flash image and provides a better estimate of the high-frequency detail.
As shown in Figure 2(a), a brighter image signal contains more noise than a darker signal. However, the slope of the curve is less than one, which implies that the signal increases faster than the noise and so the SNR of the brighter image is better. While the flash does not illuminate the scene uniformly, it does significantly increase scene brightness (especially for objects near the camera) and therefore the flash image exhibits a better SNR than the ambient image.

As illustrated in Figure 2(b), the improvement in SNR in a flash image is especially pronounced at higher frequencies. Properly exposed image pairs have similar intensities after passing through the imaging system (which may include aperture, shutter/flash duration, and camera gain). Therefore their log power spectra are roughly the same. However, the noise in the high-ISO ambient image is greater than in the low-ISO flash image because the gain amplifies the noise. Since the power spectrum of most natural images falls off at high frequencies, whereas that of the camera noise remains uniform (i.e. assuming white noise), noise dominates the signal at a much lower frequency in the ambient image than in the flash image.

¹ In concurrent work, Eisemann and Durand [2004] have developed techniques similar to ours for transferring color and detail between the flash/no-flash images.

3 Acquisition

Procedure. We have designed our algorithms to work with images acquired using consumer-grade digital cameras. The main goal of our acquisition procedure is to ensure that the flash/no-flash image pair capture exactly the same points in the scene. We fix the focal length and aperture between the two images so that the camera's focus and depth-of-field remain constant. Our acquisition procedure is as follows:

1. Focus on the subject, then lock the focal length and aperture.
2. Set exposure time Δt and ISO for a good exposure.
3. Take the ambient image A.
4. Turn on the flash.
5. Adjust the exposure time Δt and ISO to the smallest settings that still expose the image well.
6. Take the flash image F.

A rule of thumb for handheld camera operation is that exposure times for a single image should be under 1/30 s for a 30mm lens to prevent motion blur. In practice, we set the exposure times for both images to 1/60 s or less so that under ideal circumstances, both images could be shot one after another within the 1/30 s limit on handheld camera operation. Although rapidly switching between flash and non-flash mode is not currently possible on consumer-grade cameras, we envision that this capability will eventually be included in camera firmware. Most of the images in this paper were taken with a Canon EOS Digital Rebel.

We acquire all images in a RAW format and then convert them into 16-bit TIFF images. By default, the Canon conversion software performs white balancing, gamma correction and other non-linear tone-mapping operations to produce perceptually pleasing images with good overall contrast. We apply most of our algorithms on these non-linear images in order to preserve their high-quality tone-mapping characteristics in our final images.

Registration. Image registration is not the focus of our work and we therefore acquired most of our pairs using a tripod setup. Nevertheless we recognize that registration is important for images taken with handheld cameras since changing the camera settings (i.e. turning on the flash, changing the ISO, etc.) often results in camera motion. For the examples shown in Figure 11 we took the photographs without a tripod and then applied the registration technique of Szeliski and Shum [1997] to align them.

Figure 2: (a-left) Noise vs. signal for a full-frame Kodak CCD [2001]. Since the slope is less than one, SNR increases at higher signal values. (b-right) The digital sensor produces similar log power spectra for the flash and ambient images.
However, the noise dominates the signal at a lower frequency in the high-ISO ambient image than in the low-ISO flash image.

While we found this technique works well, we note that flash/no-flash images do have significant differences due to the change in illumination, and therefore robust techniques for registration of such image pairs deserve further study.

Linearization. Some of our algorithms analyze the image difference F − A to infer the contribution of the flash to the scene lighting. To make this computation meaningful, the images must be in the same linear space. Therefore we sometimes set our conversion software to generate linear TIFF images from the RAW data. Also, we must compensate for the exposure differences between the two images due to ISO settings and exposure times Δt. If A'^Lin and F^Lin are the linear images output by the converter utility, we put them in the same space by computing:

A^Lin = A'^Lin * (ISO_F * Δt_F) / (ISO_A * Δt_A).   (1)

Note that unless we include the superscript Lin, F and A refer to the non-linear versions of the images.

4 Denoising and Detail Transfer

Our denoising and detail transfer algorithms are designed to enhance the ambient image using information from the flash image. We present these two algorithms in Sections 4.1 and 4.2. Both algorithms assume that the flash image is a good local estimator of the high-frequency content in the ambient image. However, this assumption does not hold in shadow and specular regions caused by the flash, and can lead to artifacts. In Section 4.3, we describe how to account for these artifacts. The relationships between the three algorithms are depicted in Figure 3.

4.1 Denoising

Reducing noise in photographic images has been a long-standing problem in image processing and computer vision. One common solution is to apply an edge-preserving smoothing filter to the image such as anisotropic diffusion [Perona and Malik 1990] or bilateral filtering [Tomasi and Manduchi 1998]. The bilateral filter is a fast, non-iterative technique, and has been applied to a variety of problems beyond image denoising, including tone-mapping [Durand and Dorsey 2002; Choudhury and Tumblin 2003], separating illumination from texture [Oh et al. 2001] and mesh smoothing [Fleishman et al. 2003; Jones et al. 2003].
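As a concrete illustration of the linearization step above, the exposure compensation of Eq. 1 amounts to a single rescaling. The sketch below is ours, not the authors' code; the function name is hypothetical and the inputs are assumed to be linear float images (or scalars):

```python
import numpy as np

def match_exposure(a_lin_prime, iso_a, dt_a, iso_f, dt_f):
    """Eq. 1: scale the linear ambient image A'_Lin so that its exposure
    matches the flash image F_Lin, compensating for the different ISO
    settings and exposure times of the two shots."""
    return a_lin_prime * (iso_f * dt_f) / (iso_a * dt_a)

# Example: ambient shot at ISO 1600, 1/30 s; flash shot at ISO 100, 1/60 s.
a_lin = match_exposure(np.array([0.1, 0.4]), 1600, 1 / 30, 100, 1 / 60)
```

After this step, differences such as F^Lin − A^Lin isolate the light contributed by the flash rather than the camera settings.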
The bilateral filter is a fast, non-iterative technique, and has been applied to a variety of problems beyond image denoising, including tone-mapping [Durand and Dorsey 2002; Choudhury and Tumblin 2003], separating illumination from texture [Oh et al. 2001] and mesh smoothing [Fleishman et al. 2003; Jones et al. 2003]. Our ambient image denoising technique also builds on the bilat- eral filter. We begin with a summary of Tomasi and Manduchi’s basic bilateral filter and then show how to extend their approach to also consider a flash image when denoising an ambient image. Bilateral filter. The bilateral filter is designed to average together pixels that are spatially near one another and have similar intensi- ty values. It combines a classic low-pass filter with an edge- stopping function that attenuates the filter kernel weights when the intensity difference between pixels is large. In the notation of Durand and Dorsey [2002], the bilateral filter computes the value of pixel 𝑝 for ambient image 𝐴 as: 𝐴𝑝 𝐵𝑎𝑠𝑒 = 1 𝑘(𝑝) � 𝑔𝑑(𝑝′ − 𝑝) 𝑔𝑟�𝐴𝑝 − 𝐴𝑝′� 𝐴𝑝′ 𝑝′∈Ω , (2) where 𝑘(𝑝) is a normalization term: 𝑘(𝑝) = � 𝑔𝑑(𝑝 ′ − 𝑝) 𝑔𝑟�𝐴𝑝 − 𝐴𝑝′� 𝑝′∈Ω . (3) The function 𝑔𝑑 sets the weight in the spatial domain based on the distance between the pixels, while the edge-stopping function 𝑔𝑟 sets the weight on the range based on intensity differences. Typi- cally, both functions are Gaussians with widths controlled by the standard deviation parameters 𝜎𝑑 and 𝜎𝑟 respectively. We apply the bilateral filter to each RGB color channel separately with the same standard deviation parameters for all three chan- nels. The challenge is to set 𝜎𝑑 and 𝜎𝑟 so that the noise is averaged away but detail is preserved. In practice, for 6 megapixel images, we set 𝜎𝑑 to cover a pixel neighborhood of between 24 and 48 pixels, and then experimentally adjust 𝜎𝑟 so that it is just above the threshold necessary to smooth the noise. 
For images with pixel values normalized to [0.0, 1.0] we usually set 𝜎𝑟 to lie between 0.05 and 0.1, or 5 to 10% of the total range. However, as shown in Figure 4(b), even after carefully adjusting the parame- ters, the basic bilateral filter tends to either over-blur (lose detail) or under-blur (fail to denoise) the image in some regions. Joint bilateral filter. We observed in Section 2 that the flash image contains a much better estimate of the true high-frequency information than the ambient image. Based on this observation, we modify the basic bilateral filter to compute the edge-stopping function 𝑔𝑟 using the flash image 𝐹 instead of 𝐴. We call this technique the joint bilateral filter2: 𝐴𝑝 𝑁𝑅 = 1 𝑘(𝑝) � 𝑔𝑑(𝑝′ − 𝑝) 𝑔𝑟�𝐹𝑝 − 𝐹𝑝′� 𝐴𝑝′ 𝑝′∈Ω , (4) where 𝑘(𝑝) is modified similarly. Here 𝐴𝑁𝑅 is the noise-reduced version of 𝐴. We set 𝜎𝑑 just as we did for the basic bilateral filter. Under the assumption that 𝐹 has little noise, we can set 𝜎𝑟 to be very small and still ensure that the edge-stopping function 𝑔𝑟�𝐹𝑝 − 𝐹𝑝′� will choose the proper weights for nearby pixels and therefore will not over-blur or under-blur the ambient image. In practice, we have found that 𝜎𝑟 can be set to 0.1% of the total range of color values. Unlike basic bilateral filtering, we fix 𝜎𝑟 for all images. The joint bilateral filter relies on the flash image as an estimator of the ambient image. Therefore it can fail in flash shadows and specularities because they only appear in the flash image. At the edges of such regions, the joint bilateral filter may under-blur the ambient image since it will down-weight pixels where the filter straddles these edges. Similarly, inside these regions, it may over- blur the ambient image. We solve this problem by first detecting flash shadows and specu- lar regions as described in Section 4.3 and then falling back to basic bilateral filtering within these regions. Given the mask 𝑀 2 Eisemann and Durand [2004] call this the cross bilateral filter. 
Figure 3: Overview of our algorithms for denoising, detail transfer, and flash artifact detection. produced by our detection algorithm, our improved denoising algorithm becomes: 𝐴𝑁𝑅 ′ = (1 − 𝑀)𝐴𝑁𝑅 + 𝑀 𝐴𝐵𝑎𝑠𝑒. (5) Results & Discussion. The results of denoising with the joint bilateral filter are shown in Figure 4(c). The difference image with the basic bilateral filter, Figure 4(d), reveals that the joint bilateral filter is better able to preserve detail while reducing noise. One limitation of both bilateral and joint bilateral filtering is that they are nonlinear and therefore a straightforward implementation requires performing the convolution in the spatial domain. This can be very slow for large 𝜎𝑑. Recently Durand and Dorsey [2002] used Fourier techniques to greatly accelerate the bilateral filter. We believe their technique is also applicable to the joint bilateral filter and should significantly speed up our denoising algorithm. 4.2 Flash-To-Ambient Detail Transfer While the joint bilateral filter can reduce noise, it cannot add detail that may be present in the flash image. Yet, as described in Section 2, the higher SNR of the flash image allows it to retain nuances that are overwhelmed by noise in the ambient image. Moreover, the flash typically provides strong directional lighting that can reveal additional surface detail that is not visible in more uniform ambient lighting. The flash may also illuminate detail in regions that are in shadow in the ambient image. To transfer this detail we begin by computing a detail layer from the flash image as the following ratio: 𝐹𝐷𝑒𝑡𝑎𝑖𝑙 = 𝐹 + 𝜀 𝐹𝐵𝑎𝑠𝑒 + 𝜀 , (6) where 𝐹𝐵𝑎𝑠𝑒 is computed using the basic bilateral filter on 𝐹. The ratio is computed on each RGB channel separately and is inde- pendent of the signal magnitude and surface reflectance. The ratio captures the local detail variation in 𝐹 and is commonly called a quotient image [Shashua and Riklin-Raviv 2001] or ratio image [Liu et al. 2001] in computer vision. 
Figure 5 shows that the advantage of using the bilateral filter to compute 𝐹𝐵𝑎𝑠𝑒 rather than a classic low-pass Gaussian filter is that we reduce haloing. At low signal values, the flash image contains noise that can generate spurious detail. We add 𝜀 to both the numerator and denominator of the ratio to reject these low signal values and thereby reduce such artifacts (and also avoid division by zero). In practice we use 𝜀=0.02 across all our results. To transfer the detail, we simply multiply the noise-reduced ambient image 𝐴𝑁𝑅 by the ratio 𝐹𝐷𝑒𝑡𝑎𝑖𝑙. Figure 4(e-f) shows examples of a detail layer and detail transfer. Just as in joint bilateral filtering, our transfer algorithm produces a poor detail estimate in shadows and specular regions caused by the flash. Therefore, we again rely on the detection algorithm described in Section 4.3 to estimate a mask 𝑀 identifying these regions and compute the final image as: 𝐴𝐹𝑖𝑛𝑎𝑙 = (1 − 𝑀) 𝐴𝑁𝑅 𝐹𝐷𝑒𝑡𝑎𝑖𝑙 + 𝑀 𝐴𝐵𝑎𝑠𝑒 . (7) With this detail transfer approach, we can control the amount of detail transferred by choosing appropriate settings for the bilateral filter parameters 𝜎𝑑 and 𝜎𝑟 used to create 𝐹𝐵𝑎𝑠𝑒. As we increase these filter widths, we generate increasingly smoother versions of 𝐹𝐵𝑎𝑠𝑒 and as a result capture more detail in 𝐹𝐷𝑒𝑡𝑎𝑖𝑙. However, with excessive smoothing, the bilateral filter essentially reduces to a Gaussian filter and leads to haloing artifacts in the final image. Results & Discussion. Figures 1, 4(f), and 6–8 show several examples of applying detail transfer with denoising. Both the lamp (Figure 6) and pots (Figure 8) examples show how our detail transfer algorithm can add true detail from the flash image to the ambient image. The candlelit cave (Figure 1 and 7) is an extreme case for our algorithms because the ISO was originally set to 1600 and digitally boosted up to 6400 in a post-processing step. 
In this (a) No-Flash Flash (b) Denoised via Bilateral Filter (c) Denoised via Joint Bilateral Filter (d) Difference (e) Detail Layer (f) Detail Transfer Figure 4: (a) A close-up of a flash/no-flash image pair of a Belgian tapestry. The no-flash image is especially noisy in the darker regions and does not show the threads as well as the flash image. (b) Basic bilateral filtering preserves strong edges, but blurs away most of the threads. (c) Joint bilateral filtering smoothes the noise while also retaining more thread detail than the basic bilateral filter. (d) The difference image between the basic and joint bilateral filtered images. (e) We further enhance the ambient image by transferring detail from the flash image. We first compute a detail layer from the flash image, and then (f) combine the detail layer with the image denoised via the joint bilateral filter to produce the detail-transferred image. Figure 5: (left) A Gaussian low-pass filter blurs across all edges and will therefore create strong peaks and valleys in the detail image that cause halos. (right) The bilateral filter does not smooth across strong edges and thereby reduces halos, while still capturing detail. case, the extreme levels of noise forced us to use relatively wide Gaussians for both the domain and range kernels in the joint bilateral filter. Thus, when transferring back the true detail from the flash image, we also used relatively wide Gaussians in compu- ting the detail layer. As a result, it is possible to see small halos around the edges of the bottles. Nevertheless, our approach is able to smooth away the noise while preserving detail like the gentle wrinkles on the sofa and the glazing on the bottles. Figure 7 shows a comparison between a long exposure reference image of the wine cave and our detail transfer with denoising result. In most cases, our detail transfer algorithm improves the appear- ance of the ambient image. 
However, it is important to note that the flash image may contain detail that looks unnatural when transferred to the ambient image. For example, if the light from the flash strikes a surfaces at a shallow angle, the flash image may pick up surface texture (i.e. wood grain, stucco, etc.) as detail. If this texture is not visible in the original ambient image, it may look odd. Similarly if the flash image washes out detail, the ambient image may be over-blurred. Our approach allows the user to control how much detail is transferred over the entire image. Automatically adjusting the amount of local detail transferred is an area for future work. 4.3 Detecting Flash Shadows and Specularities Light from the flash can introduce shadows and specularities into the flash image. Within flash shadows, the image may be as dim as the ambient image and therefore suffer from noise. Similarly, within specular reflections, the flash image may be saturated and lose detail. Moreover, the boundaries of both these regions may form high-frequency edges that do not exist in the ambient image. To avoid using information from the flash image in these regions, we first detect the flash shadows and specularities. Flash Shadows. Since a point in a flash shadow is not illuminated by the flash, it should appear exactly as it appears in the ambient image. Ideally, we could linearize 𝐴 and 𝐹 as described in Section 3 and then detect pixels where the luminance of the difference image 𝐹𝐿𝑖𝑛 − 𝐴𝐿𝑖𝑛 is zero. In practice, this approach is confound- ed by four issues: 1) surfaces that do not reflect any light (i.e. with zero albedo) are detected as shadows; 2) distant surfaces not reached by the flash are detected as shadows; 3) noise causes non- zero values within shadows; and 4) inter-reflection of light from the flash causes non-zero values within the shadow. 
The first two issues do not cause a problem since the results are the same in both the ambient and flash images and thus whichever image is chosen will give the same result. To deal with noise and inter-reflection, we add a threshold when computing the shadow mask by looking for pixels in which the difference between the linearized flash and ambient images is small:

$$ M^{Shad} = \begin{cases} 1 & \text{when } F^{Lin} - A^{Lin} \le \tau_{Shad} \\ 0 & \text{otherwise.} \end{cases} \tag{8} $$

Figure 7: We captured a long exposure image of the wine cave scene (3.2 seconds at ISO 100) for comparison with our detail transfer with denoising result. We also computed average mean-square error across the 16 bit R, G, B color channels between the no-flash image and the reference (1485.5 MSE) and between our result and the reference (1109.8 MSE). However, it is well known that MSE is not a good measure of perceptual image differences. Visual comparison shows that although our result does not achieve the fidelity of the reference image, it is substantially less noisy than the original no-flash image.

Figure 6: An old European lamp made of hay. The flash image captures detail, but is gray and flat. The no-flash image captures the warm illumination of the lamp, but is noisy and lacks the fine detail of the hay. With detail transfer and denoising we maintain the warm appearance, as well as the sharp detail.

We have developed a program that lets users interactively adjust the threshold value τ_Shad and visually verify that all the flash shadow regions are properly captured. Noise can contaminate the shadow mask with small speckles, holes and ragged edges. We clean up the shadow mask using image morphological operations to erode the speckles and fill the holes. To produce a conservative estimate that fully covers the shadow region, we then dilate the mask.

Flash Specularities.
We detect specular regions caused by the flash using a simple physically motivated heuristic. Specular regions should be bright in F^Lin and should therefore saturate the image sensor. Hence, we look for luminance values in the flash image that are greater than 95% of the range of sensor output values. We clean, fill holes, and dilate the specular mask just as we did for the shadow mask.

Final Merge. We form our final mask M by taking the union of the shadow and specular masks. We then blur the mask to feather its edges and prevent visible seams when the mask is used to combine regions from different images.

Results & Discussion. The results in Figures 1 and 6–8 were generated using this flash artifact detection approach. Figure 8 (top row) illustrates how the mask corrects flash shadow artifacts in the detail transfer algorithm. In Figure 1 we show a failure case of our algorithm. It does not capture the striped specular highlight on the center bottle and therefore this highlight is transferred as detail from the flash image to our final result. Although both our shadow and specular detection techniques are based on simple heuristics, we have found that they produce good masks for a variety of examples. More sophisticated techniques developed for shadow and specular detection in single images or stereo pairs [Lee and Bajcsy 1992; Funka-Lea and Bajcsy 1995; Swaminathan et al. 2002] may provide better results and could be adapted for the case of flash/no-flash pairs.

5 White Balancing

Although preserving the original ambient illumination is often desirable, sometimes we may want to see how the scene would appear under a more “white” illuminant. This process is called white-balancing, and has been the subject of much study [Adams et al. 1998]. When only a single ambient image is acquired, the ambient illumination must be estimated based on heuristics or user input.
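The shadow mask of Eq. (8), the saturation test, and the feathered final merge can be sketched in a few lines of NumPy. This is an illustrative reimplementation, not the authors' code: the tiny pure-NumPy 3x3 morphology helpers, the assumption of sensor values normalized to [0, 1], and all thresholds and radii are stand-ins.

```python
import numpy as np

def binary_dilate(mask, iters=1):
    """3x3 binary dilation via shifted ORs (stand-in for a morphology library)."""
    m = mask.copy()
    for _ in range(iters):
        p = np.pad(m, 1)
        m = (m | p[:-2, 1:-1] | p[2:, 1:-1] | p[1:-1, :-2] | p[1:-1, 2:]
               | p[:-2, :-2] | p[:-2, 2:] | p[2:, :-2] | p[2:, 2:])
    return m

def binary_erode(mask, iters=1):
    """Erosion expressed as dilation of the complement."""
    return ~binary_dilate(~mask, iters)

def shadow_mask(F_lin, A_lin, tau_shad=0.05):
    """Eq. (8): flash shadows are pixels the flash barely brightens.
    Open (erode then dilate) to remove speckles, then dilate once more
    for a conservative mask, as described in the text."""
    m = (F_lin - A_lin) <= tau_shad
    m = binary_dilate(binary_erode(m))   # morphological opening
    return binary_dilate(m)              # conservative final dilation

def specular_mask(F_lin, frac=0.95):
    """Flag flash pixels near sensor saturation (values assumed in [0, 1])."""
    return binary_dilate(F_lin > frac)

def feathered_mask(shadow, specular, radius=2):
    """Union of the two masks, blurred with a separable box filter to
    feather its edges and hide seams in the final merge."""
    m = (shadow | specular).astype(float)
    k = np.ones(2 * radius + 1) / (2 * radius + 1)
    m = np.apply_along_axis(lambda v: np.convolve(v, k, mode='same'), 0, m)
    return np.apply_along_axis(lambda v: np.convolve(v, k, mode='same'), 1, m)
```

The soft mask M would then blend the two results, reverting to the basic bilateral output inside flash artifacts: final = (1 − M) · detail-transferred + M · bilateral-only.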
Digital cameras usually provide several white-balance modes for different environments such as sunny outdoors and fluorescent lighting. Most often, pictures are taken with an “auto” mode, wherein the camera analyzes the image and computes an image-wide average to infer ambient color. This is, of course, only a heuristic, and some researchers have considered semantic analysis to determine color cast [Schroeder and Moser 2001].

A flash/no-flash image pair enables a better approach to white balancing. Our work is heavily inspired by that of DiCarlo et al. [2001], who were the first to consider using flash/no-flash pairs for illumination estimation. They infer ambient illumination by performing a discrete search over a set of 103 illuminants to find the one that most closely matches the observed image pair. We simplify this approach by formulating it as a continuous optimization problem that is not limited by this discrete set of illuminants. Thus, our approach requires less setup than theirs.

We can think of the flash as adding a point light source of known color to the scene. By setting the camera white-balance mode to “flash” (and assuming a calibrated camera), this flash color should appear as reference white in the acquired images. The difference image Δ = F^Lin − A^Lin corresponds to the illumination due to the flash only, which is proportional to the surface albedo at each pixel p. Note that the albedo estimate Δ has unknown scale, because both the distance and orientation of the surface are unknown. Here we are assuming either that the surface is diffuse or that its specular color matches its diffuse color. As a counter-example, this is not true of plastics. Similarly, semi-transparent surfaces would give erroneous estimates of albedo.

Figure 8: (top row) The flash image does not contain true detail information in shadows and specular regions. When we naively apply our denoising and detail transfer algorithms, these regions generate artifacts as indicated by the white arrows. To prevent these artifacts, we revert to basic bilateral filtering within these regions. (bottom row) The dark brown pot on the left is extremely noisy in the no-flash image. The green pot on the right is also noisy, but as shown in the flash image it exhibits true texture detail. Our detail transfer technique smoothes the noise while maintaining the texture. Also note that the flash shadow/specularity detection algorithm properly masks out the large specular highlight on the brown pot and does not transfer that detail to the final image.

Since the surface at pixel p has color A_p in the ambient image and the scaled albedo Δ_p, we can estimate the ambient illumination at the surface with the ratio:

$$ C_p = \frac{A_p}{\Delta_p}, \tag{9} $$

which is computed per color channel. Again, this estimated color C_p has an unknown scale, so we normalize it at each pixel p (see inset Figure 9). Our goal is to analyze C_p at all image pixels to infer the ambient illumination color c. To make this inference more robust, we discard pixels for which the estimate has low confidence. We can afford to do this since we only need to derive a single color from millions of pixels. Specifically, we ignore pixels for which either |A_p| < τ₁ or the luminance of Δ_p < τ₂ in any channel, since these small values make the ratio less reliable. We set both τ₁ and τ₂ to about 2% of the range of color values. Finally, we compute the ambient color estimate c for the scene as the mean of C_p for the non-discarded pixels. (An alternative is to select c as the principal component of C, obtained as the eigenvector of CᵀC with the largest eigenvalue, and this gives a similar answer.)
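As a sketch of this estimation procedure (and of the per-channel scaling of Eq. (10) that follows), the steps can be written in NumPy as below. Function names, the mean-of-channels luminance proxy, and the epsilon guards are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def estimate_ambient_color(A_lin, F_lin, tau1=0.02, tau2=0.02):
    """Eq. (9) sketch: per-pixel ratio C_p = A_p / Delta_p, normalized per
    pixel, with low-confidence pixels discarded, then averaged into a
    single ambient color estimate c. A_lin, F_lin: HxWx3 linear images."""
    delta = F_lin - A_lin                                  # flash-only term, ∝ albedo
    C = A_lin / np.maximum(delta, 1e-8)                    # per-channel ratio
    C = C / (np.linalg.norm(C, axis=-1, keepdims=True) + 1e-8)  # unknown scale -> normalize
    keep = (A_lin.mean(axis=-1) >= tau1) & np.all(delta >= tau2, axis=-1)
    return C[keep].mean(axis=0)

def white_balance(A, c):
    """Eq. (10): divide each channel by the estimated ambient color."""
    return A / c
```

On a synthetic scene lit by a single colored ambient light plus a white flash, this recovers the ambient color direction exactly, and dividing by it makes every surface gray up to the unknown albedo scale.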
Having inferred the scene ambient color c, we white-balance the image by scaling the color channels as:

$$ A_p^{WB} = \frac{1}{c}\, A_p. \tag{10} $$

Again, the computation is performed per color channel.

Results & Discussion. Figure 9 shows an example of white balancing an ambient image. The white balancing significantly changes the overall hue of the image, setting the color of the wood table to a yellowish gray, as it would appear in white light. In inferring ambient color c, one could also prune outliers and look for spatial relationships in the image C. In addition, the scene may have multiple regions with different ambient colors, and these could be segmented and processed independently. White-balancing is a challenging problem because the perception of “white” depends in part on the adaptation state of the viewer. Moreover, it is unclear when white-balance is desirable. However, we believe that our estimation approach using the known information from the flash can be more accurate than techniques based on single-image heuristics.

6 Continuous Flash Adjustment

When taking a flash image, the intensity of the flash can sometimes be too bright, saturating a nearby object, or it can be too dim, leaving mid-distance objects under-exposed. With a flash and non-flash image pair, we can let the user adjust the flash intensity after the picture has been taken. We have explored several ways of interpolating the ambient and flash images. The most effective scheme is to convert the original flash/no-flash pair into YCbCr space and then linearly interpolate them using:

$$ F^{Adjusted} = (1 - \alpha)\, A + \alpha\, F. \tag{11} $$

To provide more user control, we allow extrapolation by letting the parameter α go outside the normal [0,1] range. However, we only extrapolate the Y channel, and restrict the Cb and Cr channel interpolations to their extrema in the two original images, to prevent excessive distortion of the hue. An example is shown in Figure 10.
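The interpolation of Eq. (11), with Y-only extrapolation and chroma clamping, can be sketched as follows. This is an illustrative NumPy version; clamping Cb/Cr to the per-pixel extrema of the two originals is one reading of "restrict the Cb and Cr channel interpolations to their extrema," and the channel layout (Y, Cb, Cr in the last axis) is an assumption.

```python
import numpy as np

def adjust_flash(A_ycc, F_ycc, alpha):
    """Eq. (11): interpolate (or extrapolate) between the ambient and flash
    images in YCbCr. Only Y extrapolates freely; Cb and Cr are clamped to
    their per-pixel extrema in the two originals to avoid hue distortion."""
    out = (1.0 - alpha) * A_ycc + alpha * F_ycc
    lo = np.minimum(A_ycc, F_ycc)
    hi = np.maximum(A_ycc, F_ycc)
    out[..., 1:] = np.clip(out[..., 1:], lo[..., 1:], hi[..., 1:])  # Cb, Cr only
    return out
```

For α in [0, 1] this is plain linear interpolation; for α outside that range, luminance keeps extrapolating while chroma stays within the range spanned by the pair.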
7 Red-Eye Correction

Red-eye is a common problem in flash photography and is due to light reflected by a well vascularized retina. Fully automated red-eye removal techniques usually assume a single image as input and rely on a variety of heuristic and machine-learning techniques to localize the red eyes [Gaubatz and Ulichney 2002; Patti et al. 1998]. Once the pupil mask has been detected, these techniques darken the pixels within the mask to make the images appear more natural.

We have developed a red-eye removal algorithm that considers the change in pupil color between the ambient image (where it is usually very dark) and the flash image (where it may be red). We convert the image pair into YCbCr space to decorrelate luminance from chrominance and compute a relative redness measure as follows:

$$ R = F_{Cr} - A_{Cr}. \tag{12} $$

Figure 9: (left) The ambient image (after denoising and detail transfer) has an orange cast to it. The insets show the estimated ambient illumination colors C and the estimated overall scene ambience. (right) Our white-balancing algorithm shifts the colors and removes the orange coloring.

Figure 10: An example of continuous flash adjustment (α = −0.5, 0.0 (no-flash), 0.33, 0.66, 1.0 (flash), 1.5). We can extrapolate beyond the original flash/no-flash pair.

We then initially segment the image into regions where:

$$ R > \tau_{Eye}. \tag{13} $$

We typically set τ_Eye to 0.05 so that the resulting segmentation defines regions where the flash image is redder than the ambient image and therefore may form potential red eyes. The segmented regions also tend to include a few locations that are highly saturated in the Cr channel of the flash image but are relatively dark in the Y channel of the ambient image. Thus, if μ_R and σ_R denote the mean and standard deviation of the redness R, we look for seed pixels where:

$$ R > \max\left[\,0.6,\ \mu_R + 3\sigma_R\,\right] \quad \text{and} \quad A_Y < \tau_{Dark}. \tag{14} $$

We usually set τ_Dark = 0.6.
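Eqs. (12)-(14), together with the 80%-luminance darkening described next in the text, can be sketched as follows. This is an illustrative NumPy version: the chroma channels are assumed centered at 0 (so zeroing them yields gray), the channel layout is (Y, Cb, Cr), and the region-size and eccentricity checks are omitted.

```python
import numpy as np

def redeye_seeds(A_ycc, F_ycc, tau_eye=0.05, tau_dark=0.6):
    """Eqs. (12)-(14): redness map R, candidate regions, and seed pixels."""
    R = F_ycc[..., 2] - A_ycc[..., 2]          # Cr difference (Eq. 12)
    regions = R > tau_eye                      # Eq. 13
    seeds = (R > max(0.6, R.mean() + 3.0 * R.std())) \
            & (A_ycc[..., 0] < tau_dark)       # Eq. 14
    return R, regions, seeds

def correct_redeye(F_ycc, pupil_mask):
    """Darken pupil pixels to a gray at 80% of their luminance."""
    out = F_ycc.copy()
    out[pupil_mask, 0] *= 0.8                  # 80% of the luminance value
    out[pupil_mask, 1] = 0.0                   # zero chroma -> gray
    out[pupil_mask, 2] = 0.0
    return out
```

On a toy image with a single red pupil, the seed test isolates exactly that pixel and the correction darkens it while discarding its red chroma.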
If no such seed pixels exist, we assume the image does not contain red-eye. Otherwise, we use these seed pixels to look up the corresponding regions in the segmentation and then apply geometric constraints to ensure that the regions are roughly the same size and elliptical. In particular, we compute the area of each region and discard large outliers. We then check that the eccentricity of the region is greater than 0.75. These regions then form a red-eye pupil mask. Finally, to correct these red-eye regions, we use the technique of Patti et al. [1998]. We first remove the highlights or “glints” in the pupil mask using our flash specularity detection algorithm. We then set the color of each pixel in the mask to the gray value equivalent to 80% of its luminance value. This approach properly darkens the pupil while maintaining the specular highlight, which is important for maintaining realism in the corrected output.

Results & Discussion. Figure 11 illustrates our red-eye correction algorithm with two examples. The second example shows that our algorithm performs well even when the red-eye is subtle. In both examples our algorithm is able to distinguish the pupils from the reddish skin. Moreover, the specular highlight is preserved and the eye shows no unnatural discoloration. Both of these examples were automatically aligned using the approach of Szeliski and Shum [1997]. Since color noise could invalidate our chromaticity comparison, we assume a relatively noise-free ambient image, like the ones generated by our denoising algorithm.

8 Future Work and Conclusions

While we have developed a variety of applications for flash/no-flash image pairs, we believe there remain many directions for future work. In some cases, the look of the flash image may be preferable to the ambient image. However, the flash shadows and specularities may be detracting.
While we have developed algorithms for detecting these regions, we would like to investigate techniques for removing them from the flash image. Flash shadows often appear at depth discontinuities between surfaces in the scene. Using multiple flash photographs it may be possible to segment foreground from background. Raskar et al. [2003] have recently explored this type of approach to generate non-photorealistic renderings.

Motion blur is a common problem for long-exposure images. It may be possible to extend our detail transfer technique to de-blur such images. Recent work by Jia et al. [2004] is beginning to explore this idea.

Figure 11: Examples of red-eye correction using our approach. Although the red eye is subtle in the second example, our algorithm is still able to correct the problem. We used a Nikon CoolPix 995 to acquire these images.

While our approach is designed for consumer-grade cameras, we have not yet considered the joint optimization of our algorithms and the camera hardware design. For example, different illuminants or illuminant locations may allow the photographer to gather more information about the scene. An exciting possibility is to use an infrared flash. While infrared illumination yields incomplete color information, it does provide high-frequency detail, and does so in a less intrusive way than a visible flash.

We have demonstrated a set of applications that combine the strengths of flash and no-flash photographs to synthesize new images that are of better quality than either of the originals. The acquisition procedure is straightforward. We therefore believe that flash/no-flash image pairs can contribute to the palette of image enhancement options available to digital photographers. We hope that these techniques will be even more useful as cameras start to capture multiple images every time a photographer takes a picture.
Acknowledgements

We wish to thank Marc Levoy for mentoring the early stages of this work and consistently setting an example for scientific rigor. Steve Marschner gave us many pointers on the fine details of digital photography. We thank Chris Bregler and Stan Birchfield for their encouragement and Rico Malvar for his advice. Mike Braun collaborated on an early version of these ideas. Georg would like to thank his parents for maintaining Castle Pyrmont, which provided a beautiful setting for many of the images in the paper. Finally, we thank the anonymous reviewers for their exceptionally detailed feedback and valuable suggestions.

References

ADAMS, J., PARULSKI, K., AND SPAULDING, K., 1998. Color processing in digital cameras. IEEE Micro, 18(6), pp. 20-30.
AGARWALA, A., DONTCHEVA, M., AGRAWALA, M., DRUCKER, S., COLBURN, A., CURLESS, B., SALESIN, D. H., AND COHEN, M., 2004. Interactive Digital Photomontage. ACM Transactions on Graphics, 23(3), in this volume.
AKERS, D., LOSASSO, F., KLINGNER, J., AGRAWALA, M., RICK, J., AND HANRAHAN, P., 2003. Conveying shape and features with image-based relighting. IEEE Visualization 2003, pp. 349-354.
CHOUDHURY, P., AND TUMBLIN, J., 2003. The trilateral filter for high contrast images and meshes. In Eurographics Rendering Symposium, pp. 186-196.
DEBEVEC, P. E., AND MALIK, J., 1997. Recovering high dynamic range radiance maps from photographs. ACM SIGGRAPH 97, pp. 369-378.
DEBEVEC, P., HAWKINS, T., TCHOU, C., DUIKER, H., SAROKIN, W., AND SAGAR, M., 2000. Acquiring the reflectance field of the human face. ACM SIGGRAPH 2000, pp. 145-156.
DICARLO, J. M., XIAO, F., AND WANDELL, B. A., 2001. Illuminating illumination. Ninth Color Imaging Conference, pp. 27-34.
DURAND, F., AND DORSEY, J., 2002. Fast bilateral filtering for the display of high-dynamic-range images. ACM Transactions on Graphics, 21(3), pp. 257-266.
EISEMANN, E., AND DURAND, F., 2004. Flash photography enhancement via intrinsic relighting.
ACM Transactions on Graphics, 23(3), in this volume.
FLEISHMAN, S., DRORI, I., AND COHEN-OR, D., 2003. Bilateral mesh denoising. ACM Transactions on Graphics, 22(3), pp. 950-953.
FUNKA-LEA, G., AND BAJCSY, R., 1995. Active color and geometry for the active, visual recognition of shadows. International Conference on Computer Vision, pp. 203-209.
GAUBATZ, M., AND ULICHNEY, R., 2002. Automatic red-eye detection and correction. IEEE International Conference on Image Processing, pp. 804-807.
HAEBERLI, P., 1992. Synthetic lighting for photography. Grafica Obscura, http://www.sgi.com/grafica/synth.
JIA, J., SUN, J., TANG, C.-K., AND SHUM, H., 2004. Bayesian correction of image intensity with spatial consideration. ECCV 2004, LNCS 3023, pp. 342-354.
JONES, T. R., DURAND, F., AND DESBRUN, M., 2003. Non-iterative feature preserving mesh smoothing. ACM Transactions on Graphics, 22(3), pp. 943-949.
KANG, S. B., UYTTENDAELE, M., WINDER, S., AND SZELISKI, R., 2003. High dynamic range video. ACM Transactions on Graphics, 22(3), pp. 319-325.
KODAK, 2001. CCD Image Sensor Noise Sources. Application Note MPT/PS-0233.
LEE, S. W., AND BAJCSY, R., 1992. Detection of specularity using color and multiple views. European Conference on Computer Vision, pp. 99-114.
LIU, Z., SHAN, Y., AND ZHANG, Z., 2001. Expressive expression mapping with ratio images. ACM SIGGRAPH 2001, pp. 271-276.
MASSELUS, V., DUTRE, P., AND ANRYS, F., 2002. The free-form light stage. In Eurographics Rendering Symposium, pp. 247-256.
OH, B. M., CHEN, M., DORSEY, J., AND DURAND, F., 2001. Image-based modeling and photo editing. ACM SIGGRAPH 2001, pp. 433-442.
PATTI, A., KONSTANTINIDES, K., TRETTER, D., AND LIN, Q., 1998. Automatic digital redeye reduction. IEEE International Conference on Image Processing, pp. 55-59.
PERONA, P., AND MALIK, J., 1990. Scale-space and edge detection using anisotropic diffusion. IEEE Transactions on Pattern Analysis and Machine Intelligence, 12(7), pp. 629-639.
RASKAR, R., YU, J.,
AND ILIE, A., 2003. A non-photorealistic camera: Detecting silhouettes with multi-flash. ACM SIGGRAPH 2003 Technical Sketch.
SCHROEDER, M., AND MOSER, S., 2001. Automatic color correction based on generic content-based image analysis. Ninth Color Imaging Conference, pp. 41-45.
SHASHUA, A., AND RIKLIN-RAVIV, T., 2001. The quotient image: class based re-rendering and recognition with varying illuminations. IEEE Transactions on Pattern Analysis and Machine Intelligence, 23(2), pp. 129-139.
SWAMINATHAN, R., KANG, S. B., SZELISKI, R., CRIMINISI, A., AND NAYAR, S. K., 2002. On the motion and appearance of specularities in image sequences. European Conference on Computer Vision, pp. I:508-523.
SZELISKI, R., AND SHUM, H., 1997. Creating full view panoramic image mosaics and environment maps. ACM SIGGRAPH 97, pp. 251-258.
TOMASI, C., AND MANDUCHI, R., 1998. Bilateral filtering for gray and color images. IEEE International Conference on Computer Vision, pp. 839-846.

work_7dtuo44lmrfeffbjz5dnkj6rr4 ----

Études photographiques, 31 | Printemps 2014
Fictions du territoire/Partager l'image/La question documentaire
The conversational image: New uses of digital photography
André Gunthert. Translator: Fatima Aziz
Electronic version: http://journals.openedition.org/etudesphotographiques/3546 (ISSN: 1777-5302)
Publisher: Société française de photographie
Printed version: Date of publication: 20 March 2014. ISBN:
9782911961311. ISSN: 1270-9050.
Electronic reference: André Gunthert, « The conversational image », Études photographiques [Online], 31 | Printemps 2014, online since 22 October 2015. URL: http://journals.openedition.org/etudesphotographiques/3546

The conversational image
New uses of digital photography
André Gunthert. Translation: Fatima Aziz

1 Steven Spielberg’s cinematographic adaptation of Philip K. Dick’s short story, Minority Report (2002), is renowned for the credibility of its technological predictions. Depicting the world of 2054 on the basis of expert propositions, it is famous for its anticipation of tactile interfaces. Besides the visualization of mental images, it predicts the widespread use of retinal scanning for surveillance and targeted advertising.

2 The seriousness of this forecasting exercise renders all the more remarkable its myopia about what became, soon after, ordinary visual practices in developed countries. In the movie, private uses of the image are limited to traditional print photography, holographic films and interactive video chat.

3 A few years later, these predictions seemed greatly outdated. It did not take half a century, but three to five years, for tools of audiovisual communication (Skype 2.0, 2005) or tactile interfaces (Apple iPhone, 2007) to arrive. On the other hand, no one could have imagined the rise of multimedia messaging services, parodies or visual conversations on social media. The present has overtaken the future, and Minority Report today seems trapped in a Foucauldian perception of the image as an instrument of control and domination.
4 Unanticipated uses, more avant-garde than anything that could have been imagined at the dawn of the 21st century, have changed our visual practices and established themselves with self-evident force. But unpredictability provides a valuable historical indication. Unlike the automobile, aviation, or television (subsequent extensions of the horse carriage, navigation, or radio), the development of innovations such as photography, cinema, or the disc depends on appropriative mechanisms determined by users’ choices. The same is true for the conversational image, an unexpected product resulting from the meeting of digitized visual content and documented interaction.

From fluid image to connected photography

5 In the early 1990s, the digitization of photography was described both as a revolution and as a disaster. Many specialists, following a traditional technical approach, considered the transition to pixels the ruin of indexicality and injudiciously announced the end of our belief in the truth of images [1].

6 As writing transformed language into information, granting it invaluable qualities of preservation, reproduction and transmission, so digitization, by reducing the materiality of images, grants them both plasticity and new mobility. As digital files become easy to copy and manipulate, the iconic object becomes a fluid image.

7 This first stage of the digital transition had serious implications for the image industry: the end of processing laboratories, simplification of procedures, proliferation of digital databases, falling prices. Yet, despite a considerable technological leap, there has been a remarkable continuity of forms and uses. For twenty years, the digital transition only marginally affected visual practices. Contrary to the most somber predictions, newspapers continue publishing illustrated reports, and parents still photograph their children.
Like an automobile that had swapped its combustion engine for an electric motor, photography preserved its most essential functions. The visual has not been devastated; it has rather prosaically undergone a rapid rationalization [2].

8 Like Minority Report, many experts expected the arrival of new visual tools to be accompanied by a shift towards the animated image, more attractive, and a disaffection for photography. Amateur video practices have certainly made important headway [3]. However, photographs remain by far the most shared content.

9 It is difficult to compare in absolute terms the numbers of photos and videos shared on social media, because video is often accounted for in broadcasting hours. Facebook no longer provides regular statistics on video uploads, suggesting weak growth. In 2010, with half a billion registered Facebook members, available statistics indicated a monthly upload of 2.5 billion photographs against only 20 million video uploads, or 125 times fewer (from the point of view of amateur production, the difference is intensified by the large proportion of rebroadcasted footage, whereas photographs are rich in self-produced content).

10 It seems that the still image draws its advantage from its greater fluidity compared to video, which is penalized by file weight, download time and restricted formats. Less universal than photographs, videos can only be viewed in an environment with an appropriate technical reading device. In contrast, a JPEG file or an animated GIF [4] has the advantage of being displayed in any environment, on a web browser as well as on a platform, a mobile phone or a tablet.

11 Between 2008 and 2011, the landscape changed unexpectedly. It was not a camera that caused this crucial change, but a mobile phone manufactured by a computer brand: the Apple iPhone, designed by Steve Jobs [5] to provide wide access to web features (especially its 3G version, available from 2008), which heralds the advent of connected photography [6].
Smartphones rapidly outsold cameras in all developed countries. In France, in 2011, while 4.6 million cameras were marketed (more than twice the amount sold towards the end of the 1990s), smartphone sales soared to 12 million units [7].

12 The adaptation of photography to the mobile phone had existed since the first camphones, available in Japan in 2000. But the power conferred by this conjunction with the 3G standard (UMTS), equivalent to the transition from modem to broadband, opened the way for the full implementation of visual practices.

13 This development has transformed the smartphone into a universal camera. Taking a camera with oneself once suggested the anticipation of a picture-worthy occasion. Instead, the phone that one takes along for its communication or playing functions allows permanent access to photography [8]. Photographic occasions are limited to a set of codified events, outside of which shooting is barely tolerated [9]. Only tourism or exotic situations could justify intensive photographic activity. By granting the possibility of capturing every moment of our life, mobile phones transform each of us into a tourist of the everyday, ready to snap pictures at any moment. This new skill is manifested in particular through the press publishing amateur photographs or footage of catastrophes or outstanding accidents (London bombings in 2005, Virginia Tech shootings in 2007, the Hudson River crash landing in 2009, etc.).

14 But this metamorphosis is not limited to the production of images. Connected photography results from the association between the smartphone and communication networks: instant messaging or social media, to which images can be transferred immediately through elementary operations. Although this combination represents only a fraction of amateur practices, it stands as an emblematic step, the symbol of the second revolution of digital images.
15 To be able to share a photograph in real time with an individual or a group of friends, a capacity once reserved to wired agencies, profoundly modifies its uses. During this initial period, the picture quality offered by smartphones regressed compared to that of compact cameras. Under these conditions, the choice of a mobile instead of a camera, and the sharp rise in production on this support, indicate that users find an advantage in connected photography. The qualitative deficit is largely offset by the usefulness of the new image practices, and particularly by the increased ability to display them via social media.

16 Facebook, the largest of them, opened to the public in 2006 and considerably improved its interface for image display between 2009 and 2011, facilitating the integration of visual files and giving them better visibility. Now, taking a photograph is not enough; what matters is being able to show, comment on and repost it. The first choice for exhibiting auto-photography, Facebook logically became the most important collection of images in the world (more than 250 billion photos uploaded as of September 2013 [10]). Despite the recent decline in the platform's popularity, it will remain the main historic site in the spread of the connected image.

17 Sharing one's photographs or commenting on them was already possible on Flickr, since 2004. But the specialized platform is still today a space for discussions focused on images. The breakthrough brought by Facebook has been to propose a general environment, equipped with maximum functionality, structured not by specific interests but more fundamentally through interaction between real people. As noted by Pierre Bourdieu, the uses of amateur photography remain essentially social [11]. On Facebook, discussion focuses on all aspects of existence. Images are not mobilized primarily for
Images are not mobilized primarily for The conversational image Études photographiques, 31 | Printemps 2014 3 their aesthetic qualities, but because they document life, participate in the game of self- presentation and serve referential purposes. 18 This revolution of despecialization fundamentally changes the traditional photographic paradigm based on technics, primacy of the shoot, materiality, and objectivity of the image. While visual recording once formed an autonomous strongly identified universe, what characterizes it now is its integration into multipurpose systems. The delay of camera manufacturers, who are reluctant to convert their materials into connected tools and equip them with communication features only in dribs and drabs12, is indicative of the magnitude of change. For the first time in its history, photography has become a niche practice within the vast universe of electronic communication. 19 One can compare this integration to the miniaturization process that affected the clock industry between 15th and 19th century, bringing the church steeple’s timepiece in lounges and then into the garment pocket. Gaining at every step in availability, the time function evolves and transforms: « The small sized clock that resulted from it, domestic or personal, had all together another quality and significance than public and monumental mechanisms. The possibility of use both private and universal laid the foundations of a discipline of time, as opposed to an obedience to time », as explains the historian David Landes13. 20 Having become a component among others of the communication world, does photography not risk disappearing ? Quite contrary. If the photo taking possibility is integrated in other devices, it would be unthinkable to conceive a communicating tool without a camera, or a digital environment lacking in visual display. Embedded in every connected object, the photographic function has become autonomous. 
It has gained in universality and in appropriability, accomplishing better than ever its promise of democratizing visual production. As with the clock, the integration of photography, still in its early stages, nonetheless announces a surpassing of this original function. Beyond the widespread production of images, what follows is a revolution in their uses.

Utility of Conversation

21 The overwhelming number of images is commonly deplored, and this proliferation is associated with progress in reproducibility. But is technical determinism the only parameter of this increase? It can be explained more satisfactorily by the increased usefulness of photographs. That is what the observation of connected uses suggests.

22 While the first period of the static web was characterized as a "society of authors,"14 the capability of symmetric interaction promoted by Web 2.0 leads instead to describing online publishing activity as a conversation.15 Extensively studied in pragmatics and ethnomethodology, oral exchange structured by turn-taking is considered the foundation of sociability: "That is where a child learns to speak, where a foreigner socializes by integrating within a new group (…), where a social relationship is built, where the language system is established and transformed."16

23 The orderly, symmetrical, open and cumulative interaction that characterizes instant messaging and online exchange is indeed similar to the egalitarian sociability of conversation. The integration of the image into this economy represents a remarkable development of its functionalities, identified by Jean-Samuel Beuscart, Dominique Cardon, Nicolas Pissard and Christophe Prieur in their study on Flickr.17 Rather than conversations about photos, the study posits, the web has encouraged conversation with photos.

24 The ability to use an image as a message was, however, not born with digital tools.
This property, for example, is offered by the illustrated postcard, whose use experienced a marked rise from the late nineteenth century. If we admit this correspondence into the conversational genre, its combination of images enables us to observe a primitive state of this elaboration, evidently at a slower pace. Although industrial production required reliance on standardized sceneries and situations, used postcards provide invaluable examples for an archaeology of visual conversation.

25 In its digital version, visual conversation appears in email and online forums, then on the multimedia messaging systems (MMS, Multimedia Messaging Service) that accompanied the first generation of camphones. The Sharp J-SH04 model, commercialized at $500 in Japan in October 2000, used the J-Phone network, which allowed the sharing of photographs among subscribers. An intermediate stage, in the mid-2000s, was offered by "moblogging," the sharing of photos taken with a camphone on a blog, a precursor of instant publishing on social network sites. If there are two uses of connected photography, one concerning private conversation, the other public or semi-public conversation, the porosity between these different spaces, encouraged by digital fluidity, must be noted.

26 The recently baptized selfie, a form of contextualized auto-photography, is perhaps the oldest identifiable use of the connected image. The onset of camphones in Japan followed in the footsteps of the purikura phenomenon,18 miniature self-portraits shot by young Japanese in special photo booths, which allowed multiple decorations to be added and the portraits to be collected. The first Sharp camphone model included a small mirror on the front, an original device to facilitate auto-photography. Advertisements of the time leave no doubt: the device had been designed by the manufacturer to allow auto-photography at arm's length, thanks to a short focal lens.
27 If these functions were initially imagined only as gadgets, it is in a more dramatic manner that the transformation of uses initiated by the connected image first appeared. On July 7, 2005, between 8:50 am and 9:47 am, four bombs carried by terrorists blew up three underground stations and a bus in London, causing 56 deaths and 700 casualties. While the media could not get inside the subway, Sky News broadcast at 12:35 pm an image shot in the immediate vicinity of the attack: a photograph taken with a camphone by a user, Adam Stacey, at 9:25 am in the tunnel leading to King's Cross and sent as an electronic message to multiple recipients.

28 Although this picture shows a face, it is not a portrait in terms of the pictorial tradition. And even if circumstances imposed its public broadcast,19 its initial sharing belongs to private conversation. Thanks to the immediacy of its communication, Adam Stacey's photograph, shot at the request of a friend accompanying him who wanted to inform his relatives, has first and foremost the utilitarian function of rapidly transmitting an observation.

29 If the documentary vocation is an integral part of the history of visual recording, it generally concerns special uses: scientific, media or industrial. In private photography, the usefulness of images remains essentially symbolic: the preservation of memories or the writing of family history. Examples of practical uses, such as documenting an accident for an insurance report, are attested since the early twentieth century, but they remain discrete forms, which did not interest observers and are described by no history or sociology of amateur photography.

30 Yet some technical innovations, such as the instant development process proposed by Polaroid, by accelerating the availability of images, have participated in highlighting the practical usefulness of photography and have given rise to a wide range of constative uses.
The same goes for the instant transmission of the connected image, which opens photography to the universe of communication. One significant example of its actual application is the 2005 selfie of Flickr cofounders Stewart Butterfield and Caterina Fake. Entitled "Hi Mom", the photograph published on Flickr is accompanied by an indication of its use: "This was sent to my parents as I was talking to them on the phone so they could see the view of where we were standing."

31 Available studies on new communication practices suggest an unprecedented extension of their practical uses.20 By adding a visual dimension to exchanged data, the image makes it possible to provide indications about a situation (arrival or presence in a place, use of a means of transport…), appearance checks (testing an outfit, a new haircut, physical appearance…), but also countless other pieces of practical information (the purchase of a commodity, the ingredients of a recipe, the state of a building, etc.) that photography allows one to record or transmit more quickly than a written message.21 The connected image lends itself especially to the regular exchange of signals intended to maintain friendly or amorous relationships. It can also serve political or activist objectives, such as the photographs of gatherings during the Arab Spring movement, immediately shared as an appeal to join the demonstrations.22

32 The extreme variety of these applications shows a rapid adaptation of connected tools, as well as the development of a new skill: the ability to translate a situation into visual form, so as to propose it as a brief statement, often personal or playful, a form of reinterpretation of reality which recalls the "invention of the everyday" dear to Michel de Certeau.23

Visibility of a "barbaric taste"

33 Connected photography cannot exist without recipient(s).
Beyond their first-level utility, communication systems also give images the function of conversation shifters or dialogical units, so that they acquire a second-level meaning as expressive forms. In private interaction, the restricted audience and the familiarity of participants encourage implicitness, contextual games or transgression.24 On social networks, public visibility brings the resources of collective practices: participative interpretation through the series of comments generated in response to visual content, or choral construction through the reuse and repetition of a pattern transformed into a meme,25 both of which indicate the social productivity of visual forms.

34 As a sign of its success, visual conversation tends to become autonomous through tools of image collection and reposting, such as Tumblr (2007) or Pinterest (2010), where republishing and diffusion are the principal resources for providing content value. A platform committed to the connected image, such as Instagram (2010), enables the elaboration of collaborative responses to common events: a meteorological phenomenon or a cultural occasion, celebrated by generating and displaying photographic productions whose display takes the form of a collective game.

35 Conversely, the integration of images into conversation benefits from the systems that reward participation on social networks. Exhibition and public appreciation build the critical, social or aesthetic legitimacy of auto-photography. They also encourage the autonomy of interpretation, which is necessary to reduce the ambiguity of images.26

36 While widely sharing new visual practices, the major social networks also give them unprecedented visibility and contribute to their viral spread. A satirical video posted in December 2012 on the website CollegeHumor parodies one of the songs of the band Nickelback to make fun of the different modes of connected photography.27
Photos of meals, feet, cats, aircraft wings, filters, selfies, etc.: the song draws up a long list of motifs repeated on Facebook or Twitter timelines. This excellent parody shows that all these visual forms are indeed identified as so many independent genres.

37 Reviewing the characteristics of private photography of the early twentieth century, Marin Dacos noted that a great part of amateur photographs reproduced the models of studio photography or advertisements.28 By rewarding a bouquet of visual practices with a certificate of recognition, the CollegeHumor video suggests that we are now witnessing the reverse phenomenon. As memes or recommendations, private iconography benefits from the transition that has seen social network sites take the place of traditional media as cultural influencers. Through their mediation, vernacular productions reach the rank of identifiable and reproducible models.

38 This new visibility also appears in negative reactions. In 2013, the choice of "selfie" as word of the year by the Oxford Dictionaries editors was greeted by critical media commentaries condemning the saturation of the web by this narcissistic exercise of connected self-portraiture.29 Through the condemnation of an excessive presence, this reception in fact testifies to the normative character that the genre is about to acquire.

39 When Michel de Certeau tried to approach "ordinary culture", he expressed his embarrassment at being confronted with the "virtual invisibility" of practices "that were hardly manifested through their products".30 Instead, the visibility conferred by the large social networks on individual expression reverses the dynamics of the production of norms. Formerly, the popular classes copied the behavior of celebrities. Now, celebrities and world leaders reproduce the models of common people by conforming to the rules of the selfie.
40 One can regret this promotion of "barbaric taste", to quote Kant's expression echoed by Bourdieu,31 through social network sites, the mediators of ordinary culture. But is the opposition of good and bad taste not, in this case, the wrong way to pose the question? While visual or musical practices encourage an approach inspired by art history, which highlights the author's creativity and postulates the self-sufficient character of expressive motivation, linguistic analysis proposes a neutral description of processes. Conversation, unlike artistic creativity, is an autonomous practice in which even expressivity holds communicational and social value.32 In this context, the new visual practices cannot be analyzed only through the grid of aesthetics.

41 The victory of use over content is particularly flagrant with Snapchat (2011), a mobile visual messaging application that proposes photos that are erased seconds after consultation. The semi-public visibility of conversation and the ephemeral nature of the iconic message have contributed to the success of this medium among the young, who use it as regularly as SMS. By programming the disappearance of photographs, Snapchat adds a ludic dimension, but also an additional freedom for users, encouraging them to use it in an informal and relaxed manner. The application clearly demonstrates the desertion of the masterpiece and of elaborate production in favor of conversation in action. Already widely noticeable on most social networks, this shift suggests describing the ordinary practices of images as a new language.

42 Like the advent of cinema or television, that of the conversational image profoundly transforms our visual practices. Photography was an art and a medium. We are contemporaries of the moment when it attains the universality of a language.
Integrated via versatile tools into connected systems, visual forms have become powerful shifters of private and public conversations. The part individuals can play in their production and interpretation contributes to the rapid development of formats and uses. The visibility conferred by social network sites accelerates their diffusion and gives rise to self-made norms. The appropriation of visual language shows a reinvention of the everyday. Furthermore, the extension of the usefulness of images poses specific problems for their analysis. If the semiotics of visual forms had hitherto relied on a narrow range of presupposed contexts, deemed identifiable by formal analysis alone, the diversity of these new applications requires us to turn to an ethnography of uses.

NOTES

1. William J. Mitchell, The Reconfigured Eye: Visual Truth in the Post-Photographic Era, Cambridge (Mass.) and London: MIT Press, 1992; Pierre Barboza, Du photographique au numérique. La parenthèse indicielle dans l'histoire des images, Paris: L'Harmattan, 1996.

2. Sylvain Maresca, Dominique Sagot-Duvauroux, "Photographie(s) et numérique(s). Du singulier au pluriel" (paper delivered at the conference "Travail et création artistique en régime numérique", Avignon, May 27, 2011), La vie sociale des images, May 5, 2011: http://culturevisuelle.org/viesociale/2791.

3. In 1997, 14% of people aged 15 or over said they had made films or videos. In 2008, the figure was 27%, i.e. almost twice as many. See Olivier Donnat, Les Pratiques culturelles des Français à l'ère numérique. Enquête 2008, Paris: La Découverte, 2009. This confirms the importance of fluidity: shorter videos experienced the strongest growth.

4. Introduced in 1991, JPEG (Joint Photographic Experts Group) is a type of compressed file that is used for most still images online.
An animated GIF (Graphics Interchange Format), which has been in the public domain since 2004, can be displayed in a given environment as a looped sequence of images in the same file.

5. Walter Isaacson, Steve Jobs, New York: Simon & Schuster, 2011.

6. Edgar Gómez Cruz, Eric T. Meyer, "Creation and control in the photographic process. iPhones and the emerging fifth moment of photography", Photographies, vol. 5, no. 2, 2012.

7. "Le cycle de vie d'une photo à l'ère numérique", a survey carried out by Ipsos in 2011, in: SIPEC, September 2011.

8. "In the eyes of the country dweller, the urbanite is someone who succumbs to a sort of perceptual 'any-old-thing-ism'. And this attitude is incomprehensible to him because he works on an implicit philosophy of photography according to which only certain objects, on certain occasions, merit being photographed." Pierre Bourdieu, "The social definition of photography", Photography: A Middle-Brow Art (trans. Shaun Whiteside), Cambridge: Polity Press, 1990 [1965].

9. Nancy Van House et al., "The uses of personal networked digital imaging. An empirical study of cameraphone photos and sharing", Extended Abstracts of the Conference on Human Factors in Computing Systems (CHI 2005), New York: ACM Press, 2005.

10. "Every day, there are more than 4.75 billion content items shared on Facebook (including status updates, wall posts, photos, videos and comments), more than 4.5 billion 'Likes', and more than 10 billion messages sent. More than 250 billion photos have been uploaded to Facebook, and more than 350 million photos are uploaded every day on average." A Focus on Efficiency, Facebook/Ericsson/Qualcomm whitepaper, September 16, 2013, pdf: https://fbcdn-dragon-a.akamaihd.net/hphotos-ak-prn1/851575_520797877991079_393255490_n.pdf.

11. Pierre Bourdieu, Photography: A Middle-Brow Art, op. cit.

12.
It was in 2012 that Samsung, Apple's main smartphone competitor, produced the first "smart cameras" equipped with Wi-Fi: the hybrid NX range and the compact EX2F. The same year, Nikon adopted Android for its Coolpix S800c.

13. David Landes, Revolution in Time: Clocks and the Making of the Modern World, Cambridge (Mass.): Harvard University Press, 1983.

14. Bernard Stiegler, "Situations technologiques de l'autorité cognitive à l'ère de la désorientation", seminar "Technologies Cognitives et Environnements de Travail", May 12, 1998 (cited in: Valérie Beaudouin, "De la publication à la conversation. Lecture et écriture électroniques", Réseaux, no. 116, 2002).

15. Valérie Beaudouin, op. cit.

16. Lorenza Mondada, "La question du contexte en ethnométhodologie et en analyse conversationnelle", Verbum, 28-2/3, 2006 [2008]. On this point, my thanks go to Jonathan Larcher.

17. Jean-Samuel Beuscart, Dominique Cardon, Nicolas Pissard and Christophe Prieur, "Pourquoi partager mes photos de vacances avec des inconnus ? Les usages de Flickr", Réseaux, no. 154/2, 2009.

18. Jon Wurtzel, "Taking pictures with your phone", BBC News, September 18, 2001: http://news.bbc.co.uk/2/hi/science/nature/1550622.stm. Developed by Atlus and Sega, the first purikura booths appeared in Tokyo in 1995.

19. André Gunthert, "Tous journalistes ? Les attentats de Londres ou l'intrusion des amateurs", in: Gianni Haver (ed.), Photo de presse. Usages et pratiques, Lausanne: Editions Antipodes, 2009: http://www.arhv.lhivic.org/index.php/2009/03/19/956.

20. Olivier Aïm, Laurence Allard, Joëlle Menrath, Hécate Vergopoulos, "Vie intérieure et vie relationnelle des individus connectés. Une enquête ethnographique": Fédération Française des Télécoms, PowerPoint presentation, September 2013: http://www.fftelecoms.org/sites/fftelecoms.org/files/contenus_lies/vie_interieure_vie_relationnelle_mai_2013.pdf.

21.
According to comScore, in August 2013 14.3% of European smartphone users (155 million) sent a photo of a product from a shop to a friend, to offer or request information. This is slightly more than the total number of those who sent text messages or made telephone calls (14%) for the same purpose; see Ayaan Mohamud, "1 in 7 European smartphone owners make online purchases via their device", comScore, October 21, 2013: http://www.comscore.com/Insights/Press_Releases/2013/10/1_in_7_European_Smartphone_Owners_Make_Online_Purchases_via_their_Device.

22. Azyz Amami, "Photographier la révolution tunisienne" (paper presented at the conference "Photographie, internet et réseaux sociaux", Rencontres d'Arles, July 8, 2011), in: L'Atelier des icônes, July 9, 2011, on-line audio: http://culturevisuelle.org/icones/1860.

23. Michel de Certeau, The Practice of Everyday Life (trans. Steven Rendall), Berkeley and Los Angeles: University of California Press, 1984.

24. Tim Kindberg, Mirjana Spasojevic, Rowanne Fleck, Abigail Sellen, "I saw this and thought of you. Some social uses of camera phones", Extended Abstracts of the Conference on Human Factors in Computing Systems (CHI 2005), New York: ACM Press, 2005; Gaby David, "The intimacy of strong ties in mobile visual communication", Culture visuelle, April 22, 2013: http://culturevisuelle.org/corazonada/2013/04/22/the-intimacy-of-strong-ties-in-mobile-visual-communication/.

25. A meme is a motif whose viral proliferation takes the form of an appropriable game based on decontextualization.
See André Gunthert, "La culture du partage ou la revanche des foules", in: Hervé Le Crosnier (ed.), Culturenum. Jeunesse, culture et éducation dans la vague numérique, Caen: C&F Editions, 2013.

26. Fatima Aziz, "Visual Transactions. Facebook, an Online Resource for Dating", Études photographiques, no. 31, spring 2014: https://etudesphotographiques.revues.org/3490.

27. "Look at this Instagram (Nickelback Parody)", December 3, 2012: http://www.collegehumor.com/video/6853117/look-at-this-instagram-nickelback-parody.

28. Marin Dacos, "Regards sur l'élégance au village. Identités et photographies, 1900-1950", Études photographiques, no. 16, May 2005: http://etudesphotographiques.revues.org/728.

29. Sherry Turkle, "The Documented Life", The New York Times, December 15, 2013: http://www.nytimes.com/2013/12/16/opinion/the-documented-life.html.

30. Michel de Certeau, op. cit.

31. Immanuel Kant, The Critique of Judgment, 1790; Pierre Bourdieu, op. cit.

32. Catherine Kerbrat-Orecchioni, L'Enonciation. De la subjectivité dans le langage, Paris: Armand Colin, 1980.

ABSTRACTS

Favored by connected tools and social media, the second revolution of digital photography is that of the conversational uses of the image. Like the advent of cinema or television, this mutation profoundly transforms our visual practices. Photography was an art and a medium. We are contemporaries of the moment when it attains the universality of a language. Integrated via versatile tools into connected systems, visual forms have become powerful shifters of private and public conversations. The part individuals can play in their production and interpretation contributes to a rapid development of formats and uses. The visibility conferred by social network sites accelerates their diffusion and gives rise to self-made norms. The appropriation of visual language makes us witness a reinvention of the everyday.
The conversational image, Études photographiques, 31 | Printemps 2014. Sections: From fluid image to connected photography; Utility of Conversation; Visibility of a "barbaric taste".

work_7kukcsywsvhjtah47vrsd5onlq ----

Sir,

How reliable are slit lamp biomicroscopy measurements of anterior segment structures?

Ophthalmologists use the standard slit lamp graticule scale to measure ocular structures in everyday clinical practice. The measurement of posterior pole structures has been particularly well documented.
1,2 Specialist interest has focused on modifying the slit lamp for anterior segment measurements.3-6 All these techniques are either available only in specialist centres or require modifications to the standard slit lamp biomicroscope. The unmodified slit lamp scale is used in clinical practice to measure ocular surface lesions, such as corneal ulcers and melts, Mooren's ulcers, filtration surgery bleb size, and response to treatment, often by different individuals. A recent study on pterygium management showed that one of the most important factors for evaluating severity was its size.7 The aim of the present study was to test the reliability of methods of ocular surface measurement, as there is no standardised published technique.

Methods and results

A model eye (Altomed, Tyne & Wear, UK) was marked using rectangles cut from paper to mimic ocular surface lesions (Fig. 1). One of these lesions was on the flat surface (F), whilst lesions C1 and C2 were on the curved surface of the model eye. C1 was on the central portion of the radius of curvature, whilst C2 was non-central and termed 'limbal'. The marked eye was mounted onto a standard slit lamp biomicroscope (Haag Streit 900, Berne, Switzerland). Measurements were taken by ophthalmologists of all grades in our department on two separate occasions 1 week apart on the same slit lamp. They did so by focusing the slit lamp beam onto the centre of the lesion and adjusting the height of the beam. Their measurements were recorded by an observer, but kept masked from the ophthalmologists to avoid bias. Lesions F and C1 had to be measured with the slit lamp beam and objectives aligned axially. For C2, the lesion peripheral to C1, measurements could be taken by varying the angle of the beam, mimicking how these clinicians would normally measure such corneal limbal lesions. The angle of the beam was recorded. Table 1 shows the measurements taken by 16 clinicians, with repeat measurements by 13.
The mean slit lamp measurement (± 95% CI) for F was 4.1 ± 0.1 mm on the first occasion and 4.0 ± 0.2 mm on the second. For C1 the measurements were 6.6 ± 0.1 mm and 6.5 ± 0.1 mm. The non-central lesion C2 was measured at beam angles ranging from 0° to 65°. The mean height was 6.0 ± 0.3 mm and 5.8 ± 0.3 mm. The confidence interval is larger for the latter sets of measurements, indicating a greater variability in this measuring technique (p < 0.05, Student's t-test). To compare the variability between the first and second set of measurements the standard error of the mean of the differences was calculated at 0.11, 0.08 and 0.21 for F, C1 and C2 respectively. The paired t-test showed that there was no significant difference between the mean measurements for each lesion from week 1 to week 2 (0.5 > P > 0.01).

Fig. 1. The model eye. (A) A central lesion on the flat surface (lesion F). (B) Lesions on the curved surface, one of which is central (lesion C1), the other limbal (lesion C2).

Eye (2001) 15, 539-564 © 2001 Royal College of Ophthalmologists

Table 1. Measurements of ocular surface lesions. Three lesions (F, flat surface, central; C1, curved surface, central; C2, curved surface, limbal) were measured by 16 ophthalmologists. For C2 the angle of the slit lamp beam could be changed.

First measurements:

Operator   Lesion F height (mm)   Lesion C1 height (mm)   Lesion C2 height (mm)   Lesion C2 angle of beam (deg)
 1         4.30                   6.80                    5.90                    30.00
 2         4.30                   6.40                    5.95                     0.00
 3         4.25                   6.70                    6.00                     0.00
 4         3.75                   6.80                    5.60                    30.00
 5         4.20                   6.80                    8.00                     0.00
 6         3.95                   6.80                    5.60                     0.00
 7         3.75                   6.80                    5.90                     0.00
 8         4.00                   6.10                    5.80                    20.00
 9         4.10                   6.80                    5.70                    30.00
10         4.30                   6.70                    6.10                    20.00
11         4.30                   6.70                    5.90                    30.00
12         4.00                   6.20                    5.50                    35.00
13         4.20                   6.50                    5.75                    65.00
14         4.35                   -                       -                       -
15         4.30                   6.55                    6.10                    45.00
16         4.25                   6.55                    5.85                     0.00
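The summary statistics above can be checked by recomputation from the transcribed Table 1 data. The sketch below is an illustrative recalculation, not part of the original letter: it recomputes the mean ± 95% CI for lesion F on the first occasion (the t critical value 2.131 for 15 degrees of freedom is taken from a t-table), and adds a generic one-way random-effects ICC(1,1) helper of the kind used for intra-observer reliability. The `icc_1_1` name and the example pairs are hypothetical, since the published table does not link second-occasion rows to operator numbers.

```python
import statistics

# Lesion F, first-occasion measurements by the 16 ophthalmologists (mm),
# transcribed from Table 1.
f_first = [4.30, 4.30, 4.25, 3.75, 4.20, 3.95, 3.75, 4.00,
           4.10, 4.30, 4.30, 4.00, 4.20, 4.35, 4.30, 4.25]

n = len(f_first)
mean = statistics.mean(f_first)
sem = statistics.stdev(f_first) / n ** 0.5   # standard error of the mean
t_crit = 2.131                               # t(0.975, df = 15), from a t-table
half_width = t_crit * sem                    # 95% CI half-width

print(f"mean = {mean:.1f} mm, 95% CI = ±{half_width:.1f} mm")
# prints: mean = 4.1 mm, 95% CI = ±0.1 mm (matching the reported 4.1 ± 0.1 mm)


def icc_1_1(pairs):
    """One-way random-effects ICC(1,1) for two ratings per observer.

    pairs: list of (first, second) measurements, one tuple per observer.
    Returns (MSB - MSW) / (MSB + (k - 1) * MSW) with k = 2 ratings.
    """
    n_subj, k = len(pairs), 2
    grand = sum(sum(p) for p in pairs) / (n_subj * k)
    means = [sum(p) / k for p in pairs]
    # Between-subject and within-subject mean squares (one-way ANOVA)
    msb = k * sum((m - grand) ** 2 for m in means) / (n_subj - 1)
    msw = sum((x - m) ** 2
              for p, m in zip(pairs, means) for x in p) / (n_subj * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)


# Hypothetical week-1/week-2 pairs; close agreement gives an ICC near 1.
example = [(4.3, 4.2), (4.0, 4.1), (3.8, 3.7), (4.3, 4.4), (4.1, 4.0)]
print(round(icc_1_1(example), 2))
```

With operator-linked pairs for each lesion, the same helper would yield intra-observer coefficients of the kind the letter reports (0.22, 0.23 and 0.04).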
The 'true' height of each of the lesions was measured using digital Vernier callipers. This measurement was taken whilst the paper rectangles were on the model eye, to avoid the need for correction for its radius of curvature. The values were 4.00, 6.75 and 6.20 mm for F, C1 and C2. It is noteworthy that for one set of measurements for C2, the slit lamp mean (± 95% CI) and the true height do not coincide. Interclass correlation coefficients (ICC) were computed to indicate intra-observer variability on measuring the same lesion twice, a high ICC indicating strong reliability. The ICC was 0.22 and 0.23 for the measurements taken axially (F and C1), but the limbal lesion had an ICC of 0.04. Hence a greater amount of variability occurs within one person's measurements when the beam angle is altered.

Table 1 (continued). Second measurements:

Lesion F height (mm)   Lesion C1 height (mm)   Lesion C2 height (mm)   Lesion C2 angle of beam (deg)
3.70                   6.60                    5.80                    35.00
4.00                   6.70                    6.10                     0.00
4.15                   6.60                    5.70                    45.00
3.80                   6.70                    5.70                    55.00
4.00                   6.70                    5.95                     0.00
4.10                   6.40                    5.65                    10.00
4.40                   6.50                    5.80                    32.00
3.60                   6.30                    6.90                    25.00
4.10                   6.70                    5.80                    40.00
3.40                   6.20                    5.20                    45.00
4.40                   6.80                    5.80                    50.00
4.10                   6.60                    5.85                    15.00
4.00                   6.15                    5.15                    50.00

Comment

The results from this study show that measurements of ocular surface lesions by different observers using a standard slit lamp are reproducible. The mean measurements were reliable both as a group and from one occasion to the next. However, an individual's measurements are less reliable when the beam angle is altered than when it is set to 0°, on a flat or curved
Though standardised methods for posterior segment measurements have been described,l,2 there have been no previous reports of evaluation of techniques for the anterior segment. The results of this study show that all ocular surface lesions should be measured with the beam and objectives aligned axially. This is important as pathological lesions encountered in the eye do not often have such clear margins as the lesions on the model eye. Future developments may well see the widespread use of serial digital photography. Where such technology is not available, the technique of axial slit lamp measurement is the most reliable. We thank Dr Mario Cortina-Borjay, Department of Statistics, Oxford University, for assistance with the statistical analysis. References 1. Garway-Heath E , Rudnicka AR, Lowe T, Foster PI, Fitzke FW, Hitchings RA. Measurement of optic disc size: equivalence of methods to correct for ocular magnification. Br J Ophthalmol 1 998;82:643--9. 2 . Garway-Heath E, Poinooswamy D, Wollstein G, Viswanathan A, Kamal D, Fontana L, Hitchings RA. Inter- and intraobserver variation in the analysis of optic disc images: a comparison of the Heidelberg retina tomograph and computer assisted planimetry. Br J Ophthalmol 1 999;83 :664-9. 3 . Ritch R, Galvao-Filho R, Liebmann JM. A slit-lamp eyepiece micrometer calibrated at 50-micron intervals. Ophthalmic Surg Lasers 1 999;30:326. 4 . Lerman S, Hockwin O. Measurement of anterior chamber diameter and biometry of anterior segment by Scheimpflug photography. Am Intraocular Implant Soc J 1 985;1 1 : 1 49-5 1 . 5. Cook C, Koretz JF. Methods to obtain quantitative p arametric descriptions of the optical surfaces of the human crystalline lens from Scheimpflug slit-lamp images. I . Image processing methods. J Opt Soc Am A 1 998; 1 5 : 1 473-85. 6. Clark B, Lowe RF. Slitlamp measurement of anterior chamber geometry. Ophthalmologic a 1 9 74;1 68:58--74. 7. Twelker J, Bailey IL, Mannis MJ, Satariano W A . 
Evaluating pterygium severity. Cornea 2000;19:292–6.

Mandeep S. Sagoo
Raman Malhotra
Peter H. Constable
Division of Ophthalmology
Royal Berkshire Hospital
Reading, UK

Mandeep S. Sagoo (correspondence)
Oxford Eye Hospital
Radcliffe Infirmary
Woodstock Road
Oxford OX2 6HE, UK
Tel: +44 (0)1865 311185
e-mail: sagoo@doctors.org.uk

Sir,

Fine needle aspiration biopsy: an investigative tool for iris metastasis

A case report of a metastatic small cell carcinoma of the lung to the iris diagnosed by fine needle aspiration cytology is presented.

Case report

A 75-year-old woman, a known case of bilateral age-related macular degeneration, was referred to our clinic with a lesion on her right iris. She had previously received chemotherapy for small cell carcinoma of the lung. On examination the visual acuity was 6/9 in the right eye and counting fingers close to the face in the left eye. There was a raised amelanotic lesion at 3 o'clock on the pupillary border in the right eye (Fig. 1a). There was no ectropion uveae or any localised lens opacities. Fundus examination revealed multiple drusen in the right eye. The left eye had a disciform scar. Intraocular pressures were normal in both eyes. To determine the nature of the iris lesion, iris fluorescein angiography was performed, which showed initial hypofluorescence followed by late hyperfluorescence (Fig. 1b, c). Ultrasound biomicroscopy showed a well-defined nodular lesion arising from the iris stroma (Fig. 1d). A diagnosis of an amelanotic iris melanoma/iris metastasis was made. In order to confirm the diagnosis a fine needle aspiration biopsy was performed, which was consistent with small cell carcinoma of the lung. The patient was subsequently referred to the oncologist for further management. She died 2 months later.

Comment

Ocular metastases are the most common intraocular tumour, with the uveal tract being the most common site of metastasis.
1–3 Microscopic intraocular lesions have been found in 5–10% of all patients dying of cancer.1 Iris metastasis, a rare presentation of disseminated malignant disease,4,5 commonly presents as a solid amelanotic mass in the inferior quadrant.6 Iritis, localised lens opacities, spontaneous hyphaema and glaucoma are the other presentations, making iris metastasis difficult to …

Fig. 1. (a) Anterior segment photograph showing the lesion. (b) Anterior segment fluorescein angiogram showing early hyperfluorescence. (c) Anterior segment fluorescein angiogram showing late hyperfluorescence. (d) Ultrasound biomicroscopy showing the extent of the lesion.

work_7mlqdnb33vfu3npulw4ede44km ---- OTT-68469-the-use-of-allogenic-platelet-gel-in-the-management-of-chemo

© 2015 Di Costanzo et al. This work is published by Dove Medical Press Limited, and licensed under Creative Commons Attribution – Non Commercial (unported, v3.0) License. The full terms of the License are available at http://creativecommons.org/licenses/by-nc/3.0/.
Information on how to request permission may be found at: http://www.dovepress.com/permissions.php

OncoTargets and Therapy 2015:8 401–404
Case Report
http://dx.doi.org/10.2147/OTT.S68469

Use of allogeneic platelet gel in the management of chemotherapy extravasation injuries: a case report

Gaetano Di Costanzo,1 Giovanna Loquercio,1 Gianpaolo Marcacci,2 Vincenzo Iervolino,1 Stefano Mori,3 Arnolfo Petruzziello,1 Pasquale Barra,1 Carmela Cacciapuoti1

1Transfusion Service, Department of Haematology, National Cancer Institute "G Pascale" Foundation, IRCCS, Naples, Italy; 2Hematology-Oncology and Stem Cell Transplantation Unit, National Cancer Institute "G Pascale" Foundation, IRCCS, Naples, Italy; 3Department of Surgery, Melanoma – Soft Tissues – Head and Neck – Skin Cancers, National Cancer Institute "G Pascale" Foundation, IRCCS, Naples, Italy

Abstract: Allogeneic platelet (PLT) gel can be a valid supportive measure in the management of chemotherapy extravasation injuries. We report the case of a 58-year-old patient with multiple myeloma enrolled for high-dose chemotherapy and autologous stem cell transplantation. As pretransplant therapy, the patient received induction therapy with bortezomib, adriblastina, and desametazone. A port was inserted in a vein on the back of the hand. After three cycles, the patient reported rapid development of redness, pain, and necrotic tissue in the left hand, and a diagnosis of extravasation was made. The patient presented a raw area on the back of the hand caused by cytotoxic/chemotherapeutic drug leakage due to malposition of the venous access device. The skin ulcer was debrided, and the wound was reconstructed with a combination of a local random rotational flap and an abdominal skin graft. Two weeks later, a 20% skin flap necrosis was observed.
In the context of wound healing, topical platelet-rich plasma gel is able to accelerate the regeneration and repair of tissue, so we set out to assess PLT gel efficacy in this case. The PLT gel was applied topically once every 5 days, for a duration of 60 days on average. There were no adverse reactions observed during the topical therapy. Complete wound healing was observed after 12 PLT-rich plasma applications. No ulcer recurrence was noted in the patient during the follow-up period of 2–19 months.

Keywords: growth factors, platelet gel, chemotherapy, management, extravasation

Introduction

Extravasation is an important complication in cancer patients under chemotherapy.1–4 Currently, chemotherapy extravasation management remains controversial,3 and there is no definitive standard procedure to solve the problem. We report the management of a patient with massive skin necrosis after extravasation of bortezomib, adriblastina, and desametazone. In addition to surgical debridement of necrotic tissue and skin grafting, we used platelet (PLT) gel, rich in growth factors (PDGF, EGF, VEGF, and TGF-α and -β), to stimulate healing of the skin ulceration and wound closure.5

Case report

A 58-year-old man with multiple myeloma,6 Durie–Salmon stage III,7 was enrolled for high-dose chemotherapy and autologous stem cell transplantation. As pretransplant therapy, the patient received induction therapy with bortezomib, adriblastina, and desametazone. A port was inserted on the back of the hand. The patient reported rapid development of redness and pain in the left hand, 2 weeks after the completion of the third cycle. The patient presented a raw area on the back of the hand because of leakage of cytotoxic/chemotherapeutic drugs from the malposition of the venous access device.
The skin ulcer was debrided, and the wound was reconstructed with a local random rotational flap and an abdominal skin graft. Two weeks later, a 20% skin flap necrosis was observed. In the context of wound healing, numerous studies and clinical findings have demonstrated that topical, nontransfusional platelet-rich plasma gel is able to accelerate the regeneration and repair of tissue through the action of the various growth factors within the alpha granules of the PLTs.8–12 PLT gel treatment was used on the basis of the biological and clinical results reported in the literature. According to the current standard approach,13 lesion management included the surgical removal of necrotic material and cleansing of the margins of the wound bed before PLT gel application (Figure 1). After a moist saline dressing, the wound was covered with allogeneic platelet-rich plasma (PRP) and fatty gauze.

Correspondence: Giovanna Loquercio, Department of Haematology, National Cancer Institute "G Pascale" Foundation, IRCCS, Via Mariano Semmola, 80131, Napoli, Italy. Tel +39 81 590 3433; Fax +39 81 545 3560; Email giovyloquercio@libero.it
In our case, a blood component (thrombin) was produced from allogeneic whole blood using a "homemade" system. The PLT gel was prepared on the day of the treatment. Samples of whole blood (40 mL in total) were collected in 10-mL acid-citrate-dextrose vacutainers (Becton Dickinson Labware, Franklin Lakes, NJ, USA) from periodic donors. The whole blood was centrifuged at 180 g for 10 minutes to obtain concentrated erythrocytes and PRP. The PRP was centrifuged again at 1,800 g for 10 minutes to separate the PLT concentrate from the PLT-poor plasma. To activate the homologous PRP and to accelerate the gelling process, thrombin was prepared by adding calcium gluconate to the PLT-poor plasma (ratio 0.2:1 mL). After 15–40 minutes of incubation at 37°C, the product was centrifuged at 1,800 g for 10–15 minutes. One milliliter of thrombin-containing supernatant and 0.50 mL of ionized Ca++ were added to the previously separated PRP in a Petri dish (Falcon, Becton Dickinson Labware) and mixed until a gelatinous mixture was obtained (2–5 minutes). The whole procedure was performed under a laminar-flow hood (Faster Bio48). The nonhealing ulcer measured 3×4 cm (Figure 1). Three days after further debridement, the wound was covered with allogeneic PRP (Figure 2A). The PLT gel was applied topically once every 5 days. The healing time was 60 days on average. The wound healed completely after 12 applications (Figure 3). The presence of granulation tissue was observed and recorded by digital photography after the second application of PLT gel. Figure 1 illustrates the ulcer before the treatment; Figures 2B and 3 show the same lesion after 20 days and 60 days, respectively. No adverse reactions were observed during the topical therapy. No ulcer recurrence was noted during the follow-up period of 2–19 months.

Figure 1. Skin lesion after surgical debridement of necrotic tissue.

Figure 2. (A) First application of platelet gel.
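The two centrifugation steps above are specified as relative centrifugal forces (180 g and 1,800 g). As an aside, not part of the authors' protocol: when reproducing such a step on a centrifuge whose display reads in rev/min, the required rotor speed follows from the standard relation RCF = 1.118 × 10⁻⁵ × r(cm) × rpm². The 10 cm rotor radius used below is a hypothetical example value; the actual radius depends on the instrument.

```python
import math

# Convert a target relative centrifugal force (in multiples of g) into the
# rotor speed (rev/min) needed at a given rotor radius, via
# RCF = 1.118e-5 * r_cm * rpm^2.
def rpm_for_rcf(rcf_g, radius_cm):
    return math.sqrt(rcf_g / (1.118e-5 * radius_cm))

# Hypothetical 10 cm rotor radius; check the centrifuge manual for the real value.
for step, rcf in [("PRP separation", 180), ("PLT concentration", 1800)]:
    print(f"{step}: {rcf} g at 10 cm -> {rpm_for_rcf(rcf, 10):.0f} rpm")
```

Because RCF scales with rpm squared, the tenfold jump from 180 g to 1,800 g requires only about a √10 (roughly 3.2×) increase in rotor speed.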
(B) Skin photograph 20 days after the start of therapy.

Discussion

Accidental extravasation of chemotherapy into surrounding tissue is a frequent event; its rate is estimated at between 0.1% and 6%.14–16 Treatment of extravasation depends on the quantity extravasated, the delay until therapy is started, and the size of the resulting necrotic injury. Historically, a number of local treatments have been used, such as dimethyl sulfoxide,17,18 cooling, and intralesional injection of corticosteroids,19 with either no proven benefit or even a detrimental effect. However, if the condition is missed, the consequences may be dramatic, with massive necrosis and ensuing tissue destruction. Here we have described the case of a patient with multiple myeloma and severe skin necrosis induced by chemotherapy, who was treated with PLT gel. PLT gel rapidly repaired the ulceration damage, blocked the progression of the lesion, reduced the intensity of pain, and restored the patient's ability to move the hand.
Greppi et al demonstrated the efficacy of PLT gel in treating recalcitrant ulcers in geriatric and hypomobile patients with chronic skin ulcers unresponsive to previous treatment with advanced medications.13 A meta-analysis of PRP as an advanced wound therapy in hard-to-heal acute and chronic wounds found that it significantly favored complete healing.20,21 This process is regulated by PLTs, not only through their hemostatic function but also through their ability to repair and regenerate damaged tissues.22–27 These mechanisms are regulated by cytokines and growth factors released by activated PLTs. The cytokines and growth factors contained within PLT α-granules act via endocrine, paracrine, and autocrine mechanisms, binding to the tyrosine kinase-activated membrane receptors found on the various tissue effectors, thereby regulating chemotaxis, cell proliferation, angiogenesis, and the synthesis and degradation of extracellular matrix proteins.28–30 Although in several clinical studies topical therapy seems to exhibit no clear adjuvant effect on wound healing,31,32 based on our experience we suggest that the use of PLT gel, together with conventional therapies, could be considered an effective treatment for the management of chemotherapy-induced damage and tissue necrosis in oncologic patients.

Acknowledgment

We acknowledge the assistance of the technical and nursing staff: Antonio Mattiello, Vincenza Passante, and Assunta Coppola.

Patient consent

Patient consent was obtained for this study.

Disclosure

The authors declare no conflicts of interest in this work.

References

1. Ener RA, Meglathery SB, Styler M. Extravasation of systemic hemato-oncological therapies. Ann Oncol. 2004;15:858–862.
2. Sauerland C, Engelking C, Wickham R, Corbi D. Vesicant extravasation part I: mechanisms, pathogenesis, and nursing care to reduce risk. Oncol Nurs Forum. 2006;33(6):1134–1141.
3. Schulmeister L. Extravasation management: clinical update. Semin Oncol Nurs.
2011;27(1):82–90.
4. Schrijvers DL. Extravasation: a dreaded complication of chemotherapy. Ann Oncol. 2003;14(suppl 3):iii26–iii30.
5. Iervolino V, Di Costanzo G, Azzaro R, et al. Platelet gel in cutaneous radiation dermatitis. Support Care Cancer. 2013;21(1):287–293.
6. Greipp PR, San Miguel J, Durie BG, et al. International staging system for multiple myeloma. J Clin Oncol. 2005;23:3412–3420.
7. Durie BG, Salmon SE. A clinical staging system for multiple myeloma. Correlation of measured myeloma cell mass with presenting clinical features, response to treatment, and survival. Cancer. 1975;36:842–854.
8. Balbo R, Avonto I, Marenchino D, Maddalena L, Menardi G, Peano G. Platelet gel for the treatment of traumatic loss of finger substance. Blood Transfus. 2010;8(4):255–259.
9. Chen TM, Tsai JC, Burnouf T. A novel technique combining platelet gel, skin graft, and fibrin glue for healing recalcitrant lower extremity ulcers. Dermatol Surg. 2010;36(4):453–460.
10. Picardi A, Lanti A, Cudillo L, et al; Rome Transplant Network. Platelet gel for treatment of mucocutaneous lesions related to graft-versus-host disease after allogeneic hematopoietic stem cell transplant. Transfusion. 2010;50(2):501–506.
11. Sclafani AP, Romo T 3rd, Ukrainsky G, et al. Modulation of wound response and soft tissue ingrowth in synthetic and allogeneic implants with platelet concentrate. Arch Facial Plast Surg. 2005;7(3):163–169.
12. Smrke D, Gubina B, Domanović D, Rozman P. Allogeneic platelet gel with autologous cancellous bone graft for the treatment of a large bone defect. Eur Surg Res. 2007;39(3):170–174.
13. Greppi N, Mazzucco L, Galetti G, et al. Treatment of recalcitrant ulcers with allogeneic platelet gel from pooled platelets in aged hypomobile patients. Biologicals. 2011;39:73–80.
14. Schulmeister L, Camp Sorrell D. Chemotherapy extravasation from implanted ports. Oncol Nurs Forum. 2000;27(3):531–538.
15. Schulmeister L. Managing vesicant extravasations. Oncologist.
2008;13:284–288.

Figure 3. The previous ulceration photograph 60 days posttreatment showing complete closure of the lesion and re-epithelialized tissue with no inflammation.

16. Yildizeli B, Laçin T, Batirel HF, Yüksel M. Complication and management of long-term central venous access catheters and ports. J Vasc Access. 2004;5(4):174–178.
17. Bertelli G, Gozza A, Forno GB, et al. Topical dimethylsulfoxide for the prevention of soft tissue injury after extravasation of vesicant cytotoxic drugs: a prospective clinical study. J Clin Oncol. 1995;13:2851–2855.
18. Dorr RT, Alberts DS. Failure of DMSO and vitamin E to prevent doxorubicin skin ulceration in the mouse. Cancer Treat Rep. 1983;67:499–501.
19. Dorr RT, Alberts DS, Chen HS.
The limited role of corticosteroids in ameliorating experimental doxorubicin skin toxicity in the mouse. Cancer Chemother Pharmacol. 1980;5:17–20.
20. Carter MJ, Fylling CP, Parnell LK. Use of platelet rich plasma gel on wound healing: a systematic review and meta-analysis. Eplasty. 2011;11:e38.
21. de Leon JM, Driver VR, Fylling CP, et al. The clinical relevance of treating chronic wounds with an enhanced near-physiological concentration of platelet-rich plasma gel. Adv Skin Wound Care. 2011;24(8):357–368.
22. Cooper DM, Yu EZ, Hennessey P, Ko F, Robson MC. Determination of endogenous cytokines in chronic wounds. Ann Surg. 1994;219:688–691.
23. Everts PA, Brown MC, Hoffmann JJ, et al. Platelet-rich plasma preparation using three devices: implications for platelet activation and platelet growth factor release. Growth Factors. 2006;24:165–171.
24. Frechette JP, Martineau I, Gagnon G. Platelet-rich plasmas: growth factor content and roles in wound healing. J Dent Res. 2005;84:434–439.
25. Mazzucco L, Medici D, Serra M, et al. The use of autologous platelet gel to treat difficult-to-heal wounds: a pilot study. Transfusion. 2004;44:1013–1018.
26. Okuda K, Kawase T, Momose M, et al. Platelet-rich plasma contains high levels of platelet-derived growth factor and transforming growth factor-beta and modulates the proliferation of periodontally related cells in vitro. J Periodontol. 2003;74:849–857.
27. Tarroni G, Tessarin C, De SL, et al. [Local therapy with platelet-derived growth factors for chronic diabetic ulcers in haemodialysis patients]. G Ital Nefrol. 2002;19:630–633. Italian.
28. Anitua E, Sanchez M, Nurden AT, Nurden P, Orive G, Andia I. New insights into and novel applications for platelet-rich fibrin therapies. Trends Biotechnol. 2006;24:227–234.
29. Borzini P, Mazzucco L. Platelet gels and releasates. Curr Opin Hematol. 2005;12:473–479.
30. Verheul HM, Jorna AS, Hoekman K, Broxterman HJ, Gebbink MF, Pinedo HM.
Vascular endothelial growth factor-stimulated endothelial cells promote adhesion and activation of platelets. Blood. 2000;96:4216–4221.
31. Senet P, Bon FX, Benbunan M, et al. Randomized trial and local biological effect of autologous platelets used as adjuvant therapy for chronic venous leg ulcers. J Vasc Surg. 2003;38(6):1342–1348.
32. Stacey MC, Mata SD, Trengove NJ, Mather CA. Randomised double-blind placebo controlled trial of topical autologous platelet lysate in venous ulcer healing. Eur J Vasc Endovasc Surg. 2000;20(3):296–301.

work_7mrdm6gyefbn7ng3heqguiti4m ---- Snow-cover depth, distribution and duration data from northeast Greenland obtained by continuous automatic digital photography
[The body of this article survives in the source only as mis-encoded glyphs and could not be recovered; the only legible fragments are figure-caption markers and Cambridge Core download footers dated 06 Apr 2021.]
��� � �� � �� ��� � � � ����� � � � � ���� � ���������� � ��� �� ���� ���� � � ���$ � �� � ����� ���� � � ���� ��� � ��� �� �� � ��� � � ������ G�� � �� ������ � �� ������ �� ��� �� ��� � � � ��� � �� � � � ��� � ������ � ������ � ���� ���� � � ;�� � � ���� �� �� � � �� �� � �� � � � "* � �=" � 7 � ���� ������ � � ��� ����� � � ����� � � �������� ���� ���� � ��� �� ���� � � ���� � � �� � ��� �� ��� � ��� �� ��� � ���� ��� �� ��� � ��� � ���� � �� � ��� �� �� � ����� �� �� � ���� �� ��� ���' �$(�% #��������) ���� � ��� � �� � �� � �� ���� � � �� ���� � � ��� � �� � � ����� � ������ � � � �� ��� � ������ �� � ��� � � ���� "* :� �� �="5 +�� �� �� � � � ������� � ��$� � � � � ��� � �� �� ����� ��� ���� �� � � ������� � ��� � ����� �� �� � ��$� �� � � ����� �� �� � ���� � � �� ����� ���� �� � � �� ���� �� �� � �� �� � � ��$� ��� � � � � � �� � ���� ��� ���� � � � � � �� �� � ����� � ������ � ���� � %@��� <, ����� � ���$ ��������� � ������ �� ���� � � �� � ��$�� ��������� ���� � � � �� � ��� ����� "��#=# ���� ��!����� �������� ���� '�1'= *!��&���//.� !������� ��� ���� ���0��� ����������# ������� ����93��0+!����� � ��������� �� ����� �� "*< Downloaded from https://www.cambridge.org/core. 06 Apr 2021 at 01:16:25, subject to the Cambridge Core terms of use. https://www.cambridge.org/core �� � ������ � ������� ��� �� ������ � ������ � ��� ���� �� � � ���� ���� � �� "&="5 :� �� �� � � � ���� � � ��� �� � ���� . �� ��$ ��� � � � �� ������� � � ��� �� �� � �� � ��� �� ��� � � � � � ���� :� 7< :� �� � � ��$ � �������� � �� � ������ � ��� %@��� <,� � � �� � ���� ��� � � ����� �� ������ � �� �� �� ������ 5=") � �=" �� �� � ���� 7) :� �� �� �� � ���� ������ � � �� ������ � �� � �� � 7< :� �� � %@��� < ��� &,� @��� ���� ������ ��� �� � 7) :� �� � ���� �� � � ���� � � 7=' � �=" ��� . 
����� :� � � � � �� � � < � �="�� 7) :� �� � ��� �������� � ��� � ���� �� ��� � � � �� 7) :� �� � � � ����� %@���&,�� � � ����� � ���� 7) ��� 7' :� �� � � �� � ��� ���� � ��� ���� ��� �� ���� � ��� ���� ��� � ��$ � �� � ������� � �� ��� ��� � �� ������ �� ������ ������ � � � ��� �� ��� �� ���� � � ��� �� ������ ������ � �� ��� �� ���� �� ���� � � ���������� �� �� � ��� ��� ���� � � ��� � �� �� � � �� � ������� ���� �������� � �� � �� 7< :� �� �� >� � � � �� ������ � � � ��$� ' ��� )� ���� � � ���� �� � �� 5* �� �� 7 ���� ���� 7' � 7< :� �� �� >� �� � ���� 5�' ���� � ���� ��� � ��� � ��� � � �� � ���� � ��� ���� � ��� � 5�& �� � ���� 7< � 7& :� �� �� ���� � ������� � ��$ '� � � � ��� �"�7 � �� ����� � � � � ��$ ) ���� � � ��� �� � 4* ��� @��� 7' � 7< :� �� � �� � 7* �� �� ���� ������ � � �� ��� � ����� �� � ���$�� � � ��$ 7� �������� �� ������ � ��$� �� �� ���� �� � ��� � ��� �� ��� ���� � � ��� �� � ���� 7 ��� ��������� ���� � � ������ � ���� � ���� ������ � �� � �� ���� � � �� � ������ � � ���� � ��� ����� � � ��� %@��� <, � ��� � ������ � � � �� � ��� ��� ���� �� �� � ������� � � �� �� �������� �� ������ � � � � ���� � �� ����� �� �� ��� ������ � � �� ������ � � � � ��$ )� ���� � � ��� � ��� � �� )* �� � <* �� � � ���� 7& � 75 :� �� �� �� � �������� ���� > � ���� �� �� ��� ��� �� � ������ � ���� 75=)* :� �� � � � ���� �� � ��� ������ "* � �="� ��� � ��� ""* �� �� ���� �� 7 +�� �� �� :� � �� 4 +�� �� � ��� ���� � � ���� �� ���� "7* �� ��� � ��$� ��� �� ������ � �� � ���� ��� � � ���� �� � ���� ��� � � ����� �� �� �") � �="��� � ��� ������ �� 74�. 
���� � :� ��� � ��� �� ������ � ���$�� � � ��$ 7� ���� � � � �� � �� � � � �� � 7* �� ���� 7< :� �� � � � �� < +�� �� �� > ��� �� ������ � � �� ���� � � � �� � ��$ ����� �� � "* +�� �� �� � � ���� � � � �� � <* ��� 6� ����� �� �� ���� �� �� �� � � � ������ ��� ���� �� � �������� � ��� ��� �� �$ � �� � ��� � ���� ��� � � ���� �� � ���� ��� � � ����� ��� ������ � �� � ������ 5 +�� �� �� � �� � ��$ ��� ���� � �������� � � ��") +�� �� �� ������ � � ����� �� � ���� ������ ��� �� � �� �� � �� � *(� �� � �� � ���� %@���<,�� �� ��� �� �� � � �������� ���� � �� �� � ��� � �� �$ � � �� ������ ��� �� � � � ������ ��� ���� ���� �� � � � � �� �� ��� � �� ������ � ��� ���� ��� �� � ��� � ���$� �� � ����� F� � 7) +�� �� �� � � ���$ � ���� � ���� ������ � � � �� �� ����� � ���� � ��$ .� �� �� � � ���� ��� ��� � ������ �� �� � ��$� �� 7. H������� � � ���� � � �� ���� �� ���� �� ��� �$ � �� � ��� � ���$� � ��� �� � ������ � ���� � � �� �� � �� � � ��$ <� � � ���� ��$� �� ��� �� � ���� . ��� �� � ��$ . ��� ���� � �������� � �� :� " @ ������ ����� �� � ���� ��� � � ��� � � � �� ���� �� � ��$ <� � � �� 7 @ ������ � �� ��� � � � �� � ��� ���� �� � ��$� � �� ��� ��� <* �� ���� ��� �� �������� ��� ��$� �� ������ @ ������ %@���5,� ��� �� 7. @ ������ � ��$ < � ��� ���� � ���� � �� ����� ��������� � � ����� � ���� 7'=7. 
@ ������� � � �� � � �� ������ � ������ ��� ���� � �� �"�� ������ � � �� � � 7 �� �� ���� ������ �� �� ���� :� �� ����� ��� ��� �� ��� ���� � ���� � �� ����� ��� � � �� � �������� � � ��� � � �� ��� �� �� � "* H�� � ���'�!�*$� � �#���� � � ���� ��� �� � � � ��� 3�� � ������ ������ � �� � � ����� %@��� 7, �� ��� ��� �� �� #��$ �� �� �� ��� ���� ��� �� ��� � ���� ��� � �� � ������� � ���� �� ��C�� ��� �� �� ��� ��� �� � � � ��� 3�� � %� ��� ���� ��"445�,�� � ��� � � � �� � � ���� �� ��� �������� � ���� ��� � �� � �� � 3�� � �� � �� �� �� ��� �� ������ ��� �� �� �������� � ���� ��� �� � ���� ��� � ��� � �� #��$ �� �� �� �� ��� � �� � � ����� �� ������ � ���� ��� �� ���� � � � � � ��� 3�� � � � ��$� %@���7, ��� � �� ��� � ��� "445844 � ����� >� � ������ ���� ��� 3�� � ���� ��� � � ������ ���� 7."���� � ��� � �� � 75* ���� � �� � �� � � � �� �� 3 ��7 �!��! 3�� ���� ��� � �� � ���� 75* � �)7. ���� � �� � �� � � � ������ � ��� �� � �� ���� � "444� >� ���� ������ � 3�� � ��� ������ � �� � ���� �7)5 ���� � �� � ���� � � �)7. ���� � ��� � ���� �� >� ��� �� � ����� 3�� � � � � � ��$� . ��� <� ���� ��� � �� � ���� �7)5 ���� � "&* ����� � � ��� ����� � � ���� � � � � � ��� �� ���� ��� � �� � ������ ���� ��� � ���� � ���� � �� ������ ��� 7<* ���� ������ ��� �"445844� � � � �� �� �� � �� ������ � ��� ���� ���� ��� �� �� � �� � ������ � � �� � ������ � ��� �� �� � ��� �� ���$�� � � ��$ '� � ����� ������� � ��� ���$�� � � ��$ )� ��� �� � ��� �� � �� � � ��$ 7� +� �� � � �� �� ���$�� ��� ���� � �� � ��� ������ � � �� ������� ��� � ����� �� � � ��$� . ��� <� �� � ���� �� ��� ��� � �� � ���� � ��� ���� � �� ��������� ���3�� � ��� � � �� ������ � �� � � ��� � �� ���� �� ���� � �� � � � ������ � �� � � ��$ � �� ���� 3�� � � � � � ��$� ' ��� . %@���4,� "��#.# ���� ��!����� �������� ��� ''"�&�� ���//.� ��� ������ 0��� ���0��� �# *��� :� �� � ���� ��� ! 
��� 0 � !���������&����� �� ���0 �� ��� ���� � ��� 0�����# "*& ������� ����93��0+!����� � ��������� �� ����� �� Downloaded from https://www.cambridge.org/core. 06 Apr 2021 at 01:16:25, subject to the Cambridge Core terms of use. https://www.cambridge.org/core � ������ � �� � � ���������� � ���� "444 ���� � �� �� � � � ��� � �� 5 � � �� �"444� ��$��� � � � �� ��� �� � ��� � � �� ������ �� ����� ' � ��� � ��� � �� � � ����� � �$� +��#�� ($��($!��*$� � ���$ �� � � � � ��� � � � ��� �� �� ������ ��� � �� � �� ���� � � � ��� ���� ��� ��� ���� ��� ���� � ����� �� � �� ����� �� ��� �� ���� � ����� � � ����� �� � � � �� �� ������ � �� � ������ � ��� ����� �� �� �� � ������� ��� �� �������� ���� �� ��� ����� �� ��� �� ���� �� #��$ �� �� �� � � ������� � � � � � ��� ���������� �������� ��� � � �� � �� � � ������ < � �=" %� 7 � �� ,� ?��� ����� ���� ��� ��� �$ � �� � ���� �� �� �� �5 � �="� ��� ���� �� � � � ���� ���� �� ���� �� �� ������ �� �� >� � � �� ��� � �� ������ � � � ������ � ���� �� �� �� � '* �� �=" �� � � � ���� � ������ �������� ���� � � ������ ���� ��� � � � ��� 3�� �� ������ � ���$�� ��� �������� � � ��� 7."=75* ����� ���� � �� � �� ������ ��� �)7. ���� �� � � �� ������ � �� �� �������� �� ������ � � �� ������������ � 3�� � � ���� � ���� ��� � �� � 7)5="&* ����� G � ����� ��� � ���� ��� �� ���� � ��� �� � �� ������ � ���� � ������ � � ������� ��� � � �� �� �� ��� ������ � ���� � � �� ��� �� � � ��� ����� � � ���� � � �� � ����� ��� �� �� � ".. 
���� �� �������� � ���� ��� � � � ������ � ��� �� �� �������� �� � � ?�$ � ��� � �� �� �� �������� �� ������ � �������� � ���� ��� ��� � ��� � �� �� � 4* ���� ������ � � � ������� � �� � � � �� � ��� ����� � � ��� �� ��� ��� ��� � ��� �� � ��� ����� ����� �� ��� �� ��� ���� ��� ���� ���� ��� �� ��� � ���� �� � ��� ��� � ���� ����� �� �� ���� � ��� ��� ��� �� �� �� � �� �� ����� � ���� ��� � �� � �� �������� � ���� ��� � ����� �� � � ��� ������� � �������� �� ��� � � �� � ��� �� � �� �� � ������� � ���� �� � �� � � � � �� � � � � � ��� �� �� ���� ��� � �� � �� �� � ��� ������ � � ��� �� � ����� ��� �� ��� ����� ������� ��� � ��� �� ���������)� %?�� �� ��� � ���� "445, � ��� �� ��� � ��� �� ���� ���� � �� � �� ���� � � � ������ � ��� ���� ��� ��� ���� ���� �� ��� �� � � ����� � ��� � �� � �� � � �� ��� ������ �� ��� �� � ��� ����� ����� �� �� ��� �� � �� �� ���$� >� �� � �� ���� � � ��� ���� ��� �� ������ � �� ������ ������ �� �� �� � � ���� � � ���� �� � � ��$� �� � � ����� � ������ � � ��� � �� �� ���� � � �� ������ ������ �� �� �� ��� � ��� � ��� � ��� �� �� ������ ��� �� � ���� � �� � �� �� � "*(� �� ������� � �� �� �� ���� �� �� �� � �� ����� ��� ��� � �� � � ����� � ������ � �� ��� �"445844� ��� � �� ���� � �� �� � � ����� �� ��� ������ � � � � ��� � ���� �� �� �� �� �� �� �� G�� � � ��$� ��� � ��� ������ � ���� � � � � �� �� � ��� ��� � ��� � �� ��� �� ?�$ ��� � � �� � � �� � ��� �� ���� � � � ����� �� � ������� ��� � � ���� ������ �� � �� � � � � �� ��� �� ���� � ��� ��� � � ��� �� � ��� � ������ /�� � � �� ������ � �� �� �� ������ $��� �� �� �� � � �� � ��� � �� ��� �� ��� � ��� ����� ����� �� �� �������� �� ��� � �� � �� �� �� � �� ��� �� ������ ����� ���� �� ����� �� ������ � ������ � ���� �� � ���� ���� � � �� ��� ���� ��� ��� ���� ���� 6� ����� �� ���� � �� �� � � ����� � ����� � � �� � � � �� �� ��� �� ����� �� � ��� � � � ���� � #!,��'+$��$&$��� ;� � � ��$� �� � � � � � ������ F� 6�� ����� >�� � � �� ! 
����� �� F��� ��� � �� ��� � �� �� � � ��� �� �� � � ��� � � � � �� � � ����� ��� � ��� �� ��� �� 3� ��� �� ���� � ��� �� ��� � ������� �������� � � ������ 6� � ��� ���� �� � �� � >�� � � �� ! ����� �� F��� ��� � �� ��� � �� �� ����� � F �� #A�: $��� � ������ � � ��� ����� ���� ����� �� � � ������� �� � � ��� �� ���$��� �� #��$ �� �� �� �� � �� � �� /�� � ��� �$ � �� ��>�I� !� � ��� �� � ��� � ���� ���� �� +��$� !� � ���� ��� �� ��� ���� � �� #��$ �� � �� � � ��� � � ��� ����� �� ;� ���� � >�� � � �� ! ����� �� F��� ��� � �� ��� � �� �� � �� ���$ �� ��� �� � � ��� �� 11�������� �� �� ���� �� ��� �9 6 ����� ��� � ���� ���� �� � �� �� �� �� � J � ������� ��� �22 �� � � ��� ���C � 11� ��� �� ?���� ���� 9 >� ��� ���� ��� � ����$� ����� � ����� � ����� �� �� ��� ��� ����� ���� �� �22 ���� � �� ����� ��� �� � � ��� ������ � �� ��� �������� ��� �� �� ��� �� � ��� �� ���������� ��� ��� � ���� >�� � ��� ���� �� !� � ���� � F��� ��� � ����� � �� ��� ���� %F+>�, $��� � ������ � � �� �� ����� � � > ��� ��� � ��� � �� ��� �� � >�� � � �� ! ����� �� F��� ��� � �� ��� � �� �� ������ � �� ��� ����� ��� �� >!� ��������� �� >�������$ � � �� ��� � ��� �� � � �� �$-$�$�!$� � ��� ���� �� G� G�"44<� +��� ��� ������ ���� �� � ��� � ��� � � �� � � � ��� ���� � � ��� ���� �� ��� ����# > ������ �'� %6 ��� ����, � ��� ���� �� G� G� "445��11?� >� �� 22 ���� ��� �� ��� � �� ��� �� !� � ���� >���!���� .%<,�&"4=&75� � ��� ���� �� G� G�"445�� +��� ��� ����� ��� ���� �� � �� ������� ��� � � ��� � �� ��� �� !� � ���� ( ��� 3�� # ?��!����� , �� ����� /0%5,� &."=&<*� � ��� ���� �� G� G�"444� �� �� �� � ���� ����� �� �� !� � ����� � �� ������ �� ��9 #��$ �� �� ��� ���$� �� ���� � �# @# ����#� 11�""&="7"� ?�� ��� !� A� ��� ;� � ����"445� � ����� ������� ��� ��� ���� � ������ @# � !���#�22%"'5,� '45=."<� ���� � ;�� ��� "444� % !���&��� �!�����! 
work_7njmuvogzzeu5kkzocodan64om ---- Suture fixation of dislocated endothelial grafts
M Anandan1 and M Leyland2

Abstract
Purpose: To share the experience of suturing displaced endothelial grafts in three patients who developed this complication following posterior endothelial keratoplasty.
Methods: Prospective evaluation of three patients who underwent surgical revision by suture fixation for a dislocated endothelial graft. The surgeries were performed immediately after the complication was noted during post-op visits, and patients were followed up for 6 months. Assessments at follow-up visits included visual acuity, pachymetry, endothelial cell count, topography, and corneal slit-lamp digital photography.
Results: No significant postoperative complications were encountered in any of the three eyes after graft suturing. Vision was worse than hand movements in all patients following graft dislocation and improved to an average of 20/50 at 6 months post-op. In two of the three eyes the endothelial cell count fell from 2600 cells preoperatively to 1251 and 988 cells, respectively. Mean topographic astigmatism after suture removal was 2.4 D.
Conclusion: Suture fixation of early endothelial graft dislocation appears to be a good alternative for dislocated endothelial discs, as it provides a good refractive and visual outcome without conversion to a standard full-thickness penetrating keratoplasty.
Eye (2008) 22, 718–721; doi:10.1038/sj.eye.6703000; published online 7 March 2008

Keywords: posterior endothelial keratoplasty; Descemet-stripping endothelial keratoplasty; corneal graft; graft dislocation; endothelial graft suturing

Introduction
Endothelial failure is routinely treated by penetrating keratoplasty (PK), with established efficacy but slow visual rehabilitation. Posterior endothelial keratoplasty replaces a lamella of endothelium/Descemet's membrane, and offers rapid visual recovery by retention of host stroma. Although longer-term outcomes are not yet known, as this is a relatively new technique, early complications such as graft dislocation, graft rejection, and endothelial failure have been reported. Early graft slippage or dislocation is one of the most common postoperative challenges and requires revision surgery.1 We describe three cases of recurrent donor lamellar dislocation into the anterior chamber, successfully treated by air-bubble repositioning and suture fixation. The surgeries were performed immediately after the complication was noted during post-op visits, and patients were followed up for 6 months. Assessments at follow-up visits included visual acuity, pachymetry, endothelial cell count, topography, and corneal slit-lamp digital photography.

Surgical procedure
All the patients underwent an uneventful, uncomplicated Descemet-stripping endothelial keratoplasty. The donor button was prepared by manual stromal dissection in an air-filled artificial chamber (Melles' technique). Descemet's stripping was performed aiming for a diameter of approximately 9 mm. A 9-mm folded donor disc was placed through a 5-mm scleral incision. A small amount of viscoelastic was used, only on the endothelial side of the disc, before folding. Air was injected into the AC to help unfold the donor tissue and appose it against the recipient cornea.
When found dislocated, all patients underwent repositioning of the graft on the first postoperative day under the operating microscope. Sterile air injected through the paracentesis was used to refloat the graft into position. All three eyes described here dislocated again after refloating and were then sutured in place as described below.

CASE SERIES
Received: 24 October 2006
Accepted in revised form: 12 September 2007
Published online: 7 March 2008
Presented as a poster at the Royal College of Ophthalmology Congress, 2006.
1Oxford Eye Hospital, The John Radcliffe Hospital, Oxford, UK
2Royal Berkshire Hospital NHS Foundation Trust, Reading, UK
Correspondence: M Anandan, Oxford Eye Hospital, West Wing, The John Radcliffe Hospital, Headley Way, Headington, Oxford, OX3 9DU. Tel: 01865 741166. E-mail: maghizh@hotmail.com

All the surgeries were performed by a single surgeon (ML) in the first postoperative week, immediately after noting the complication (recurrent dislocation of the graft). A paracentesis was made and the donor disc centred using forceps or a Sinskey hook. It was then floated into apposition with the cornea on a sterile air bubble. 10-0 nylon sutures (Alcon, 665567M, UK) were then placed, passing from the peripheral cornea inwards, then through the donor tissue, and continuing through the full thickness of the cornea. The sutures were tied with adjustable knots, and tension was adjusted once all sutures were in place. Four to six sutures were placed until the graft appeared secure, with good adhesion formed in all 360° of the edge of the interface. The anterior chamber was reformed at this stage, again with air, to provide enough tamponade to avoid Descemet's folds prior to tying-off.
The sutures were then rotated to bury the knots. It was necessary to ensure that the last movement of the suture was a rotation away from the graft, thus pulling the graft out towards the peripheral cornea. In two eyes, four sutures were placed; in one eye, six were placed. The procedure took about 15 min and was performed under topical anaesthesia with an operating microscope. Sutures were removed 6 weeks after surgery (Figures 1 and 2: pre- and post-suture removal).

Results
The initial pathology in all three patients was Fuchs' endothelial dystrophy. The patient ages at surgery were 76, 79, and 60 years, respectively; all three were left eyes. No postoperative complications occurred. Clinically, the graft appeared clear under the slit lamp in all three cases, but one eye had Descemet's folds at 3 months of follow-up (Figure 3). Vision was worse than hand movements in all patients following graft dislocation and improved to an average of 20/50 at 6 months post-op. At 6 months of follow-up, the average central corneal thickness was 595 μm compared with a preoperative average of 685 μm (measured prior to the initial lamellar graft procedure). In one of the three cases an endothelial cell count was not obtainable, but in the other two cases the cell count fell to 1251 and 988 cells compared with a preoperative cell count of 2600 in both donor corneas. This represents a loss of 55.1% during the surgery and postoperative period. The average age of the donor tissue was 80.3 years. Mean topographic astigmatism after suture removal was 2.4 D. The visual acuities, pachymetry, astigmatism, and endothelial cell counts at 6 months are presented in Table 1.

Discussion
Posterior keratoplasty is a new surgical technique that may be valuable in treating patients with corneal decompensation secondary to endothelial dysfunction.

Figure 1 Sutured endothelial graft.
Figure 2 After suture removal.
Figure 3 Descemet's folds after suture removal.
Suture fixation of dislocated endothelial grafts M Anandan and M Leyland 719 Eye

It selectively replaces diseased endothelium with healthy donor tissue. Over the past several years, this surgery has undergone extensive modifications, and published reports on several small series have suggested distinct advantages over standard full-thickness PK.1–5 The advantages include improved surface topography with the elimination or reduction of irregular astigmatism, improved predictability of postoperative corneal refractive power with better intraocular-lens calculation, and retention of good endothelial cell densities and function.6–8 All these advantages have resulted in faster visual rehabilitation for the patient.9 However, every surgery has its own risks and complications. Graft dislocation, rejection, graft failure, glaucoma, and cataract have all been noted.1,10

Successful posterior lamellar keratoplasty requires the donor disc to self-adhere during and after surgery. Possible causes of dislocation are thought to be:
(1) residual viscoelastic material in the interface;
(2) improper position of the donor disc peripheral edge, such that it lies posterior to the edge tissue of the recipient bed;
(3) delayed endothelial pump function;
(4) primary graft failure;
(5) donor disc placed upside down.10

Even after meticulous attention to the above-mentioned factors, dislocation can still happen and may have adverse effects on the graft: Terry and Ousley10 reported a significantly lower average endothelial-cell count at 6 months post-op (1534±366 cells/mm2) in eyes that had donor dislocation than in eyes that did not (2166±411 cells/mm2). One way of managing this complication postoperatively is to reposition the graft under air, as described by Terry,10 but one or more causes of dislocation may persist, and dislocation therefore recurs once the air bubble has disappeared.
Other options for managing dislocations are to replace the endothelial graft with a new one or to convert to a full-thickness PK. All three of our patients underwent an uneventful, uncomplicated Descemet-stripping endothelial keratoplasty (DSEK). The donor disc was manually dissected using an artificial anterior chamber to approximately the same size as the diameter of the resection of the recipient's stroma. We did not encounter any problems during dissection of the donor or recipient cornea; this was later confirmed by normal corneal pachymetry postoperatively in all three cases. All the dislocations were noted immediately after surgery and were repositioned on the first postoperative day under air, but were found dislocated again the next day once the air bubble used to hold the graft in place had disappeared. We suspect delayed endothelial pump function of unknown cause as the reason for recurrent dislocation. With the patients' informed written consent, we sutured the dislocated graft in place until the graft appeared secure, with good adhesion formed in all 360° of the edge of the interface. This required four sutures in two of our patients and six in the remaining patient. The aim of the suturing was to engage the peripheral edge of the donor tissue to support it in place until the endothelial pump function took over. The sutures were removed 6 weeks after surgery. In the Price et al1 series, a donor detachment rate of 50% was noted in the first 10 cases, which fell to 13% in the next 126 cases and 3% in the final 64 cases.1 The reduction in dislocation rate was attributed to the surgeons' learning curve and also to various other techniques to remove fluid from the donor–recipient graft interface. In the Terry and Ousley10 series of 98 eyes that underwent DLEK (deep lamellar endothelial keratoplasty), four patients were found to have dislocation of the graft on the first postoperative day.
Mean best spectacle-corrected vision in patients who had graft dislocation was 20/46, and the average endothelial-cell count was 1534±366 cells/mm2 at 6 months after surgery. In various other published series, the best-corrected vision following lamellar endothelial replacement ranges from 20/40 to 20/50, which compares well with our average of 20/50.4,7,11 We found the postoperative endothelial-cell loss was 55.1% compared with the preoperative donor tissue. Melles et al11 found 43.1% endothelial-cell loss at 6 months in one series following a 5-mm-incision posterior lamellar keratoplasty. However, in another series, Terry and Ousley7 reported 24.1% endothelial loss at 6 months post-op. In both these series, the endothelial count was from non-dislocated grafts. The increased endothelial loss seen in our patients may be:
(1) a cause of dislocation due to an inadequate endothelial pump;
(2) a result of recurrent dislocation (all eyes were sutured after a failed repositioning with air bubble);
(3) a result of tissue damage from donor manipulation during repositioning and suturing.
We recommend suturing the graft after the first repositioning for dislocation, so as to avoid repeated dislocation and further endothelial loss. As we do not have any data comparing endothelial loss after repositioning under an air bubble with primary suturing after the first dislocation, we cannot yet recommend this procedure to treat the initial dislocation.

Table 1 Visual acuity, pachymetry, astigmatism, donor graft age, and endothelial cell count at 6 months after suturing the graft

           Best-corrected VA   Pachymetry, μm     Topographic       Endothelial cell count   Donor graft
           (pre-op–post-op)    (pre-op–post-op)   astigmatism (D)   (pre-op–post-op)         age (years)
Patient 1  20/80–20/60         668–639            1.75              2600–1251                86
Patient 2  20/60–20/40         663–517            3.5               2600–unattainable        81
Patient 3  20/80–20/60         724–630            2                 2600–988                 74
In the Melles et al4 series of seven eyes following PELK (posterior endothelial lamellar keratoplasty), the average postoperative astigmatism was reported as 1.54 D. Looking at the stability of topography in 20 eyes following DLEK, Ousley and Terry8 found the average topographic astigmatism at 1 year to be 2.3±1.1 D and at 2 years to be 2.4±1.1 D. This compares well with our series of three cases, with an average topographic astigmatism of 2.4 D after suture removal. The sutures simply held the endothelium in place during the initial few weeks and did not have any adverse effect on astigmatism, either in the immediate post-op period or after suture removal. One of the patients developed Descemet's folds, which remained even after suture removal (Figure 3). Fortunately, this did not affect the visual outcome significantly. Corneal graft folds have been noted before following posterior endothelial keratoplasty, even without suturing.12 Suture traction could have directly contributed to the endothelial folds, but improper centration within the recipient bed or an inadvertently undersized recipient bed diameter could also have been responsible. Using penetrating sutures through the endothelial graft could also increase the chance of endophthalmitis and epithelial ingrowth, especially if the sutures become loose prior to removal. We did not encounter any such complications in our very small case series. By suturing the dislocated endothelial graft we believe that we avoided additional major surgery for the patient, retained the advantage of rapid postoperative recovery, and prevented wastage of corneal donor tissue. The patients still benefited from the early visual rehabilitation of posterior endothelial keratoplasty, as opposed to having to convert to a standard PK. We have had to suture only three eyes to date, and there have been no failures with this technique. To our knowledge there are no published data on suture fixation of endothelial grafts.

References
1 Price Jr FW, Price MO.
Descemet's stripping with endothelial keratoplasty in 200 eyes: early challenges and techniques to enhance donor adherence. J Cataract Refract Surg 2006; 32(3): 411–418.
2 Terry MA. A new approach for endothelial transplantation: deep lamellar endothelial keratoplasty. Int Ophthalmol Clin 2003; 43(3): 183–193.
3 Melles GR, Eggink FA, Lander F, Pels E, Rietveld FJ, Beekhuis WH et al. A surgical technique for posterior lamellar keratoplasty. Cornea 1998; 17(6): 618–626.
4 Melles GR, Lander F, van Dooren BT, Pels E, Beekhuis WH. Preliminary clinical results of posterior lamellar keratoplasty through a sclerocorneal pocket incision. Ophthalmology 2000; 107(10): 1850–1856.
5 Melles GR, Lander F, Nieuwendaal C. Sutureless, posterior lamellar keratoplasty: a case report of a modified technique. Cornea 2002; 21(3): 325–327.
6 Terry MA, Ousley PJ. In pursuit of emmetropia: spherical equivalent refraction results with deep lamellar endothelial keratoplasty (DLEK). Cornea 2003; 22(7): 619–626.
7 Terry MA, Ousley PJ. Small-incision deep lamellar endothelial keratoplasty (DLEK): six-month results in the first prospective clinical study. Cornea 2005; 24(1): 59–65.
8 Ousley PJ, Terry MA. Stability of vision, topography, and endothelial cell density from 1 year to 2 years after deep lamellar endothelial keratoplasty surgery. Ophthalmology 2005; 112(1): 50–57.
9 Terry MA, Ousley PJ. Rapid visual rehabilitation after endothelial transplants with deep lamellar endothelial keratoplasty (DLEK). Cornea 2004; 23(2): 143–153.
10 Terry MA, Ousley PJ. Deep lamellar endothelial keratoplasty: early complications and their management. Cornea 2006; 25(1): 37–43.
11 Van Dooren B, Mulder PG, Nieuwendaal CP, Beekhuis WH, Melles GR. Endothelial cell density after posterior lamellar keratoplasty (Melles' techniques): 3 years follow-up. Am J Ophthalmol 2004; 138(2): 211–217.
12 Faia LJ, Baratz KH, Bourne WM. Corneal graft folds: a complication of deep lamellar endothelial keratoplasty.
Arch Ophthalmol 2006; 124: 593–595.

Suture fixation of dislocated endothelial grafts. M Anandan and M Leyland. Eye.

----

SECONDARY LYMPHEDEMA AFTER HEAD AND NECK CANCER THERAPY: A REVIEW

A. Anand, D. Balasubramanian, N. Subramaniam, S. Murthy, S. Limbachiya, S. Iyer, K. Thankappan, M. Sharma

Departments of Head and Neck Oncology (AA, DB, NS, SM, SL, SI, KT) and Plastic and Reconstructive Surgery (SI, MS), Amrita Hospital, Kochi, India. Lymphology 51 (2018) 109-118.

ABSTRACT

Secondary head and neck lymphedema (SHNL) is a chronic condition affecting patients who have undergone treatment for head and neck cancers. It results from the disruption of normal lymphatic flow by surgery and/or radiation. The incidence of SHNL varies between 12% and 54% of all patients treated for head and neck cancer, but it is still commonly under-diagnosed in routine clinical practice. Despite awareness of this condition, treatment has been difficult because definitive staging, diagnostic, and assessment tools are still under development. This review examines the evidence, standards of management, and deficiencies in the current literature related to SHNL, with the aim of optimizing the management of these patients and improving their quality of life.

Keywords: cancer, head and neck lymphedema, diagnosis, imaging, treatment, MLD, CDT, quality-of-life

Swelling caused by impaired tissue drainage resulting from lymphatic dysfunction, with accumulation of fluid in the interstitial spaces, is called lymphedema (1). Primary lymphedema results from an inherent developmental anomaly of the lymphatic system, whereas secondary lymphedema results from damage to the lymphatic system caused by surgery, radiotherapy, trauma, infection, or other systemic disorders.
Lymphedema visible to the clinician is referred to as 'external', whereas that affecting the mucosal surfaces of the body is 'internal'; however, these conditions are not independent and can occur concomitantly in some patients. Secondary head and neck lymphedema (SHNL) is a significant complication of treatment for head and neck cancer (HNC), but it has not gained widespread recognition. Reasons include: (i) less than 50% of patients treated for HNC develop SHNL, and the treating clinician's priority lies more with oncological outcomes; and (ii) most patients with complex tumors of the head and neck are treated in large tertiary centers, so few clinicians routinely encounter HNL. With the evolution of newer aggressive multimodality treatment strategies, the increased incidence of human papillomavirus-related cancers, and improved survival outcomes, patients often live longer and develop late functional sequelae of treatment. This lymphedema is often progressive (2-4). The literature suggests that the incidence of secondary lymphedema after HNC treatment varies from 12% to 54%. This wide variation in the incidence of SHNL may reflect differences in grading criteria, variations in the structures assessed for manifestations of lymphedema (e.g., internal vs external), differences in the duration of follow-up, and differences in cancer treatment regimens among the studies (5,6).

Permission granted for single print for individual use. Reproduction not permitted without permission of Journal LYMPHOLOGY.

PATHOPHYSIOLOGY

A functional lymphatic system is crucial and serves many important functions, such as regulation of tissue fluid homeostasis, removal of cellular debris, immune cell trafficking, lipid absorption, and transport from the gastric system. A systematic fluid exchange mechanism occurring at the blood capillary–interstitial–lymphatic interface coordinates all these functions.
Four forces interact to drive this capillary filtration, namely capillary pressure, negative interstitial pressure, interstitial fluid colloid osmotic pressure, and plasma colloid osmotic pressure. Variation in any of these can lead to edema. Other factors, such as alteration of either the extrinsic or intrinsic propulsion mechanisms (e.g., fibrosis impeding muscle movement) or of lymphatic structures (e.g., neck dissection or radiation-induced fibrosis of nodes), can decrease lymph flow, impede fluid egress, and result in lymphedema (7). Surgery decreases both the carrying capacity and the efficiency of the transport mechanism. The head and neck region requires substantial lymph drainage to maintain vital function and has about 300 lymph nodes, roughly a third of those in the body. Although chemotherapy was historically not thought to be a risk factor for lymphedema, it compounds the radiation-induced damage to the lymphatic system. Recently published literature points to an association between the taxane group of chemotherapeutic agents and lymphedema, though the exact causative mechanism is not yet understood. The acute effects of radiotherapy are inflammatory-mediated and cytokine-propelled, and usually subside. The final factor in lymphedema severity is the extent of tissue damage. When tissue damage is more severe, the repair process is pathogenic, with resultant over-production of the extracellular matrix and a permanent fibrotic scar. This occurs when new collagen synthesis by myofibroblasts exceeds the degradation rate, and it places cancer survivors at risk for late-effect fibrosis and/or lymphedema (7). The disease manifests in stages, the first being heaviness or tightness with no visible edema, progressing to visible edema that is non-pitting, and subsequently to edema that pits on pressure.
The final stage is fibrosis and serious functional impairment, such as impaired speaking, swallowing, or breathing, poor cosmesis, and even impaired vision.

SECONDARY HEAD AND NECK LYMPHEDEMA AND ITS IMPACT ON QUALITY OF LIFE

The effects of SHNL are far from only cosmetic. Significant lymphedema of the face, mouth, and neck can result in severe functional disturbances in communication (speaking, reading, writing, and hearing), alimentation, and respiration (8). Severe head and neck lymphedema may also impede ambulation when vision is impaired. In extreme cases, respiratory obstruction may require a tracheostomy (9,10). Laryngectomized patients may experience difficulty with stomal access for hygiene purposes, respiration, and management of a tracheo-esophageal voice prosthesis. Intra-oral and pharyngeal edema can impede swallowing safety and efficiency (11,12) and may mandate a gastrostomy tube for feeding. The psychological effects of facial disfiguration may be grave, including frustration, embarrassment, and depression due to both functional and cosmetic changes (9,13). The treatment of SHNL is essential for the rehabilitation of these deficits and improvement of the patient's quality of life (13,14). Deng et al (8) have reported on the quality of life (QoL) of 103 HNC patients who were ≥3 months post treatment. The variables assessed included severity of internal and external lymphedema, physical and psychological symptoms, functional status, and overall QoL. The severity of internal and external lymphedema correlated with both physical and psychological symptoms. Patients with more severe external lymphedema were more likely to have decreased neck movement, and the combined effects of external and internal lymphedema severity were associated with hearing impairment and decreased QoL.
Overall severity correlated well with symptom burden, functional status, and QoL (8).

ASSESSMENT OF HNL

Until recently, the lack of a common evaluation algorithm impeded the reporting and staging of lymphedema. Measurement of SHNL is important, as it forms an integral part of diagnosis, management planning, and monitoring the progress of treatment. There is wide variation in the assessment tools used among the institutions currently managing the disease. Deng et al (15) have reviewed the various assessment tools and divided them into three main groups, as follows.

Patient Reported Outcomes (PRO)

These have unfortunately been vastly ignored. The majority of patient-reported outcome measures take into account the symptom burden associated with HNC; however, outcomes pertaining specifically to SHNL have not been included in any of these measures. The Lymphedema Symptom Intensity & Distress Survey-Head & Neck (LSIDS-H&N) is the only known existing PRO measure directed specifically at head and neck lymphedema (16). It is a 65-item symptom tool and was developed via a rigorous instrument development process. Deng et al reported that: (a) the LSIDS-H&N was feasible to administer, readable, and easy to use; (b) content validity was supported by expert panel review; and (c) initial testing indicated that the tool captured critical and unique symptoms related to lymphedema (17). The authors concluded that further psychometric testing of the tool in larger sample size studies was needed.

Clinician Reported Outcome Measures (CRO)

Various CRO measuring tools have been described for external lymphedema, with Földi's scale being the predominant measurement tool. The MD Anderson Cancer Center Lymphedema Scale (MDACCLS), modified from Földi's scale, is another measuring tool. However, validity and reliability data are not available.
Other common head and neck specific scales for grading head/neck lymphedema are the Common Terminology Criteria for Adverse Events (CTCAE) (18) and the American Cancer Society (ACS) Scale. Deng et al, comparing these scales, found the following: (1) none of these measures has been validated; (2) each failed to capture important features of external lymphedema; and (3) none captures the edema and fibrosis that coexist in some patients (18). Lymphedema and fibrosis are now considered two different pathophysiologic entities associated with HNC treatment which may co-exist in the same patient, with one driving the progression of the other; there is therefore a need to measure the two problems differently and scale them appropriately (19). Internal lymphedema measurements are largely based on both physical and functional assessment. The Radiation Therapy Oncology Group and the European Organization for Research and Treatment of Cancer (RTOG/EORTC) system (20) and the Late Effects Normal Tissues-Subjective Objective Management Analytic (LENT-SOMA) (21) systems have been used for grading internal laryngeal edema, but they have neither been validated nor do they consider other mucosal sites such as the tongue or pharynx. Patterson's scale uses endoscopic visualization to grade edema of eleven structures and two spaces in the pharynx and larynx (22) and has good intra-rater reliability (weighted kappa = 0.84) and moderate inter-rater reliability (weighted kappa = 0.54). A weakness of the Patterson scale, however, is that it fails to capture important anatomical sites (e.g., tongue) in the oral cavity that may develop internal edema. Also, the inter-rater reliability of the Patterson Scale needs to be improved for clinical use.
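The reliability figures quoted for the Patterson scale are weighted kappa statistics computed from paired ratings. The following is a minimal sketch, not taken from the source, of Cohen's linearly weighted kappa for two raters grading the same structures on an ordinal scale; the function and variable names are illustrative.

```python
from collections import Counter

def weighted_kappa(rater_a, rater_b, categories):
    """Cohen's linearly weighted kappa for two raters' ordinal grades.

    categories: the ordered grade levels (at least two), e.g. [0, 1, 2, 3].
    """
    k = len(categories)
    idx = {c: i for i, c in enumerate(categories)}
    n = len(rater_a)
    obs = Counter(zip(rater_a, rater_b))   # observed joint frequencies
    marg_a = Counter(rater_a)              # marginal counts per rater
    marg_b = Counter(rater_b)

    def w(i, j):
        # linear agreement weight: 1 on the diagonal, falling off with distance
        return 1 - abs(i - j) / (k - 1)

    p_obs = sum(w(idx[a], idx[b]) * cnt for (a, b), cnt in obs.items()) / n
    p_exp = sum(
        w(idx[a], idx[b]) * marg_a[a] * marg_b[b]
        for a in categories
        for b in categories
    ) / (n * n)
    return (p_obs - p_exp) / (1 - p_exp)
```

Two identical rating vectors give kappa = 1, and chance-level agreement gives values near 0, which is how thresholds such as "good" (0.84) and "moderate" (0.54) are interpreted.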
Technical Measurement Tools

Tape measurements

The MDACCLS (9) utilizes tape measurements of various facial and neck landmarks in the management of lymphedema (Fig. 1). The inter-rater reliability of these measurements was studied in the ALOHA trial by Purcell et al (23), who reported excellent inter-rater reliability for 3 of the 4 tape measurements (Fig. 2).

Digital photography

Digital photography offers an excellent subjective method that can be used to assess the progress of treatment. Exacting methodology must be used to ensure that the photos are taken in the same positions so that accurate comparisons can be made.

Fig. 1. The MD Anderson Cancer Centre Lymphedema Scale utilizes tape measurements to assess and follow treatment for head and neck lymphedema. Multiple points are included (see lines overlaid on subject's face), including those for facial circumference (diagonal from chin to crown of head and submental from < 1 cm in front of ear with vertical tape alignment) and point to point (mandibular angle to mandibular angle and tragus to tragus).

Fig. 2. Locations of measurement points on the neck include: 1) upper neck circumference, taken at the highest point inferior to the mandible; 2) length from ear to ear, measured from the inferior junction of ear lobe and face on the left to the inferior junction of ear lobe and face on the right, intersecting a point 8 cm inferior to the lower lip edge; and 3) lower neck circumference, taken at the lowest possible point superior to the angle of the neck and shoulder.

Moisture meter D (MMD)

Another promising objective assessment of SHNL is the tissue dielectric constant (TDC), which can be assessed using the MoistureMeterD (MMD; Delfin Technologies Ltd, Kuopio, Finland).
TDC reflects the content of local tissue water and is sensitive to both free and bound water contained within the tissue volume being measured. The MMD generates an ultrahigh-frequency electromagnetic wave of 300 MHz, which is transmitted into a coaxial line and further into an open-ended coaxial probe. The probe is placed in contact with the skin and the electromagnetic wave is transmitted to a specified depth in the area of tissue beneath the probe. A portion of the electromagnetic energy is absorbed by tissue water and the remainder is reflected back into the coaxial line. The amount of signal reflected represents the TDC, which is used as an index of local tissue water. As a reference, pure water has a TDC value of 78.5. MMD readings range from 1 to 80, with a higher reading indicating the presence of more swelling. Purcell et al (23) reported excellent inter-rater and intra-rater reliability for MMD use in the measurement of SHNL. The MMD discriminated well between patients with head and neck lymphedema and healthy controls (p < .001). The correlation between MMD score and SHNL level ratings was significant, indicating convergent validity. The trial confirmed the potential of the MMD as an objective measurement tool for the diagnosis and assessment of SHNL.

Imaging

As a measurement tool, imaging is not reliable in the evaluation of HNL. Lymphangiography, lymphoscintigraphy, and near-infrared fluorescence imaging may aid in functional mapping of the lymphatic system, but for detection of tissue changes, other imaging modalities such as CT, MRI, and ultrasound may be used. Compared with CT and MRI, high-resolution ultrasonography is a noninvasive, harmless, and inexpensive technique for visualizing the dermal and subcutaneous tissue (24-34). Studies have reported that high-resolution ultrasonography is a sensitive method for assessing changes in skin/soft tissue edema in the breast cancer population.
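To make the TDC numbers described under "Moisture meter D" concrete: since pure water has a TDC of 78.5, a raw MMD reading can be expressed as a fraction of the pure-water value, and paired readings from an affected and an unaffected site can be compared as a ratio. This sketch is not part of the MMD software, and any diagnostic threshold on the ratio would be hypothetical and unvalidated.

```python
PURE_WATER_TDC = 78.5  # reference value for pure water, as quoted in the text

def percent_of_pure_water(tdc_reading):
    """Express a raw MMD reading (range 1-80) as a percentage of the
    pure-water TDC, giving a rough index of local tissue water content."""
    return 100.0 * tdc_reading / PURE_WATER_TDC

def inter_side_ratio(affected_tdc, contralateral_tdc):
    """Ratio of affected-site to contralateral-site TDC; values above 1
    suggest more tissue water on the affected side. No validated cut-off
    is given in the source."""
    return affected_tdc / contralateral_tdc
```

For example, a reading of 45 on a swollen cheek against 30 on the unaffected side gives an inter-side ratio of 1.5.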
Strain elastography is a newer imaging technique that allows noninvasive estimation and imaging of the tissue elasticity distribution within biological tissues using conventional real-time ultrasound equipment and elastography software. Conceptually, elastography measures tissue elasticity, potentially adding significantly to our ability to measure the fibrotic component of lymphedema in an objective manner, but it requires validation.

TREATMENT OF SHNL

Manual Lymphatic Drainage (MLD)

MLD was developed by the Danish massage therapist Emil Vodder for the treatment of chronic sinusitis in the 1930s (35). It was later adapted to the treatment of lymphedema. MLD consists of a series of gentle, circular massage strokes applied to the skin to promote increased lymphatic flow (36,37). Through this technique the lymphatic fluid is directed towards the normal lymphatic system. MLD is now used as part of complete decongestive therapy.

Complete Decongestive Therapy (CDT)

Although the MLD technique decreases lymphedema by directing the fluid to be drained to the normal area, the already malformed interstitial space and altered lymphatic system result in rapid redevelopment of lymphedema. Földi is credited with combining the MLD technique with compression bandages and physical exercises, along with skin care, labeling it CDT, which is the standard of care in the management of lymphedema (38). Traditional CDT is typically provided by a certified lymphedema therapist in two phases: a primary intensive phase of outpatient treatment provided 3-5 days weekly over a period of 2-4 weeks, and a subsequent maintenance phase which begins as treatment transitions from the outpatient setting to the home environment.
The basic components of the program continue to be emphasized; however, the performance of the program becomes the responsibility of the patient or caregiver (39). Daily adherence to a home treatment program may be required for the remainder of the patient's life, depending on the severity of the edema. In the early stages, patients are encouraged to carry out Simple Lymphatic Drainage (SLD), a modified form of MLD, at least once a day, not only to enhance the benefits of other forms of lymphedema treatment but also to help accurate recall of the routine and technique. The basic goals of CDT are to decongest the edematous region, prevent refilling of the tissues, and promote improved drainage. MLD relieves the edema, and exercises combined with compression bandaging enhance the movement of lymph to adjacent areas with intact drainage. Smith et al from the MD Anderson Cancer Center have reported their 6-year experience with the HNL CDT program in 1,202 patients. Most patients (62%) had soft, reversible pitting edema (MDACC Stage 1b). Treatment response was evaluated in 733 patients after receiving therapy; 439 (60%) improved after complete decongestive therapy. Treatment adherence independently predicted complete decongestive therapy response (p<0.001) (40).

Pharmacotherapy

The use of drugs in the treatment of lymphedema is still experimental. One of the commonest agents used is selenium. Bruns et al have suggested a short-lived positive effect of sodium selenite on SHNL caused by radiotherapy, alone or in combination with surgery. Thirty-six patients with SHNL (20 of whom had severe internal lymphedema) received 350 µg/m2 body surface area of sodium selenite orally daily (up to 500 µg per day) for a period of 4-6 weeks after radiotherapy; 75% of the patients had an improvement of the score by one stage or more. Self-assessed QoL using the visual analogue scale improved significantly after selenium treatment, with a reduction of 4.4 points (p < 0.05) (41,42).
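The dosing rule in the Bruns et al regimen (350 µg of sodium selenite per m² of body surface area daily, capped at 500 µg per day) can be sketched as a small helper. The Du Bois body-surface-area formula used here is a common clinical estimate but is an assumption, not something specified in the source; the function names are illustrative.

```python
def dubois_bsa_m2(height_cm, weight_kg):
    """Du Bois body surface area estimate in m^2 -- a common clinical
    formula, assumed here; the source text does not specify one."""
    return 0.007184 * height_cm ** 0.725 * weight_kg ** 0.425

def sodium_selenite_daily_dose_ug(bsa_m2, dose_per_m2=350.0, daily_cap=500.0):
    """Daily oral sodium selenite dose per the regimen described:
    350 ug per m^2 of body surface area, capped at 500 ug per day."""
    return min(dose_per_m2 * bsa_m2, daily_cap)
```

For a patient with a BSA of 1.8 m², the uncapped dose would be 630 µg, so the 500 µg daily ceiling applies.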
Anti-inflammatory drugs (corticosteroids, D-penicillamine, colchicine, etc.), vascular targeted therapies (pentoxifylline, hyperbaric oxygen therapy, ACE inhibitors), and anti-oxidants (liposomal superoxide dismutase, vitamin E) are some of the drugs/techniques that have also been tried, in both pre-clinical and clinical studies, for the management of radiation-induced fibrosis, with promising results. Their role in the management of SHNL, however, is doubtful, although fibrosis is a continuum of the process (43-63).

Low Level Light Therapy (LLLT)

Lee et al (64) have reported the use of LLLT in the treatment of HNL. It is the application of light (usually a low-power laser or light-emitting diode (LED)) to promote tissue repair, reduce inflammation, reduce edema, and induce analgesia. The laser or LED device typically emits light in the red and near-infrared spectrum (600 nm-1000 nm), the power output is usually in the range of 1-500 µW, and the irradiance is generally in the range of 5 µW-5 W/cm2. Treatment time is typically in the range of 30-60 seconds per point, and most pathologies require the treatment of multiple points (65). Treatments can be weekly, though more frequent treatments may be more effective, up to a maximum after which effectiveness decreases. For acute and post-operative pathologies, one treatment may be all that is necessary, but for chronic pain, degenerative conditions, and lymphedema, ten or more sessions may be necessary. Although LLLT has been used more routinely in breast cancer-related lymphedema, the efficacy of its use in SHNL has not been proven.

Surgical Management of SHNL

Lymphatico-venous anastomosis has been reported for the treatment of SHNL not responding to MLD techniques. Mihara et al (66) have reported the technique and outcomes.
Functional and dilated lymph vessels were identified using pre- and intra-operative fluorescent lymphography, with indocyanine green dye for near-infrared fluorescence labeling, allowing reliable anastomosis of the lymph vessel with the venous circulation (superficial temporal vein). A super-microsurgical anastomosis technique is used because the diameters of the lymph vessel and vein are approximately 0.3-0.5 mm. Capillary lymph vessels in the head and neck region have fewer valves compared with lymph vessels in the limbs, and lymph flows relatively freely. A follow-up of 8-12 months after anastomosis is needed to investigate changes in the postoperative course, as the therapeutic effect may not appear rapidly. Similar techniques and results have also been reported by Ayestaray et al (67). A lymphatic bridge is the last option in the most severe cases of SHNL. It has been reported in patients who were completely functionally impaired with no feasible therapeutic options. A pedicled (usually deltopectoral) flap draining into a normal lymphatic system is harvested and bridged into the upper part of the lymphedematous area (usually the cheek), and CDT or MLD is initiated so that lymph flows from the collected area to the normal lymphatic system (axilla) through the pedicled flap (68).

CONCLUSION

SHNL is a common morbidity associated with head and neck cancer treatment. There is an urgent need to validate the various evaluation tools and standardize measurements. Comprehensive decongestive therapy is the current treatment of choice in patients with SHNL, with medical therapies a possible adjunct and surgery a potential last resort.

CONFLICT OF INTEREST AND DISCLOSURE

All authors declare that no competing financial interests exist and have no disclosures.

REFERENCES

1. Földi M, E Földi: Lymphostatic diseases. In: Földi's Textbook of Lymphology for Physicians and Lymphedema Therapists. Strossenruther, RH, K. Kubic (Eds).
2nd Edition, Munich, Germany: Urban and Fischer; 2006. pp. 224-240.
2. Horner, MJ, LA Ries, M Krapcho, et al: SEER Cancer Statistics Review, 1975-2006, National Cancer Institute. Bethesda, MD.
3. Buentzel, J, M Glatzel, R Muecke, et al: Influence of amifostine on late radiation-toxicity in head and neck cancer-a follow-up study. Anticancer Res. 27 (2007), 1953-1956.
4. Kubicek, GJ, F Wang, E Reddy, et al: Importance of treatment institution in head and neck cancer radiotherapy. Otolaryng. Head Neck Surg. 31 (2009), 172-176.
5. Deng, J, SH Ridner, BA Murphy: Lymphedema in patients with head and neck cancer. Oncol. Nurs. Forum. 38 (2011), E1-E10.
6. Bruns, F, O Micke, M Bremer: Current status of selenium and other treatments for secondary lymphedema. J. Support Oncol. 1 (2003), 121-130.
7. Ridner, SH: Pathophysiology of lymphedema. Sem. Oncol. Nurs. 29 (2013), 4-11.
8. Deng, J, BA Murphy, MS Dietrich, et al: Impact of secondary lymphedema after head and neck cancer treatment on symptoms, functional status, and quality of life. Head Neck 35 (2013), 1026-1035.
9. Smith, BG, JS Lewin: The role of lymphedema management in head and neck cancer. Curr. Opin. Otolaryngol. Head Neck Surg. 18 (2010), 153-158.
10. Withey, S, P Pracy, F Vaz, et al: Sensory deprivation as a consequence of severe head and neck lymphoedema. J. Laryngol. Otol. 115 (2001), 62-64.
11. Piso, DU, A Eckardt, A Liebermann, et al: Early rehabilitation of head-neck edema after curative surgery for orofacial tumors. Amer. J. Phys. Med. Rehab. 80 (2001), 261-269.
12. Murphy, BA, J Gilbert: Dysphagia in head and neck cancer patients treated with radiation: Assessment, sequelae, and rehabilitation. Sem. Rad. Oncol. 19 (2009), 35-42.
13. Poulsen, MG, B Riddle, J Keller, et al: Predictors of acute grade 4 swallowing toxicity
in patients with stages III and IV squamous carcinoma of the head and neck treated with radiotherapy alone. Rad. Oncol. 31 (2008), 253-259.
14. Penner, JL: Psychosocial care of patients with head and neck cancer. Semin. Oncol. Nurs. 25 (2009), 231-241.
15. Deng, J, SH Ridner, JM Aulino, et al: Assessment and measurement of head and neck lymphedema: state-of-the-science and future directions. Oral Oncol. 51 (2015), 431-437.
16. Deng, J, SH Ridner, BA Murphy, et al: Preliminary development of a lymphedema symptom assessment scale for patients with head and neck cancer. Support. Care Can. 20 (2012), 1911-1918.
17. U.S. Department of Health and Human Services, National Institutes of Health, National Cancer Institute. Common Terminology Criteria for Adverse Events v4.0 (CTCAE); 2010.
18. Deng, J, SH Ridner, MS Dietrich, et al: Assessment of external lymphedema in patients with head and neck cancer: A comparison of four scales. Oncol. Nurs. Forum. 40 (2013), 506-506.
19. Deng, J, SH Ridner, N Wells, et al: Development and preliminary testing of head and neck cancer related external lymphedema and fibrosis assessment criteria. Eur. J. Oncol. Nurs. 19 (2015), 75-80.
20. Budach, V, A Zurlo, JC Horiot: EORTC Radiotherapy Group: Achievements and future projects. Euro. J. Can. 38 (2002), 134-137.
21. Denis, F, P Garaud, E Bardet, et al: Late toxicity results of the GORTEC 94-01 randomized trial comparing radiotherapy with concomitant radiochemotherapy for advanced-stage oropharynx carcinoma: Comparison of LENT/SOMA, RTOG/EORTC, and NCI-CTC scoring systems. Inter. J. Rad. Oncol. Bio. Phys. 55 (2003), 93-98.
22. Patterson, JM, A Hildreth, JA Wilson: Measuring edema in irradiated head and neck cancer patients. Ann. Otol. Rhinol. Laryngol. 116 (2007), 559-564.
23. Purcell, A, J Nixon, J Fleming, et al: Measuring head and neck lymphedema: The "ALOHA" trial. Head Neck 38 (2016), 79-84.
24.
ISL: The diagnosis and treatment of peripheral lymphedema: 2016 Consensus Document of the International Society of Lymphology. Lymphology 49 (2016), 170-184.
25. Mellor, RH, NL Bush, AW Stanton, et al: Dual-frequency ultrasound examination of skin and subcutis thickness in breast cancer-related lymphedema. Breast J. 10 (2004), 496-503.
26. Tassenoy, A, J De Mey, F De Ridder, et al: Postmastectomy lymphoedema: Different patterns of fluid distribution visualised by ultrasound imaging compared with magnetic resonance imaging. Physiotherapy 97 (2011), 234-243.
27. Lee, JH, BW Shin, HJ Jeong, et al: Ultrasonographic evaluation of therapeutic effects of complex decongestive therapy in breast cancer-related lymphedema. Ann. Rehab. Med. 37 (2013), 683-689.
28. Suehiro, K, N Morikage, M Murakami, et al: Significance of ultrasound examination of skin and subcutaneous tissue in secondary lower extremity lymphedema. Ann. Vasc. Dis. 6 (2013), 180-188.
29. Devoogdt, N, S Pans, A De Groef, et al: Postoperative evolution of thickness and echogenicity of cutis and subcutis of patients with and without breast cancer-related lymphedema. Lymph. Res. Bio. 12 (2014), 23-31.
30. Hacard F, Machet L, Caille A, et al: Measurement of skin thickness and skin elasticity to evaluate the effectiveness of intensive decongestive treatment in patients with lymphoedema: A prospective study. Skin Res. Tech. 20 (2014), 274-281.
31. Naouri, M, M Samimi, M Atlan, et al: High-resolution cutaneous ultrasonography to differentiate lipoedema from lymphoedema. Brit. J. Derm. 163 (2010), 296-301.
32. Kim, W, SG Chung, TW Kim, et al: Measurement of soft tissue compliance with pressure using ultrasonography. Lymphology 41 (2008), 167-177.
33. Solbiati, L, G Rizzatto: Ultrasound of Superficial Structures: High Frequencies, Doppler and Interventional Procedures. Churchill Livingstone; 1995.
34.
Lau, JC, CW Li-Tsang, YP Zheng: Application of tissue ultrasound palpation system (TUPS) in objective scar evaluation. Burns 31 (2005), 445-452.
35. Kasseroller, RG: The Vodder School: The Vodder method. Cancer 83 (1998), 2840-2842.
36. Chikly, BJ: Manual techniques addressing the lymphatic system: Origins and development. J. Am. Osteopath. Assoc. 105 (2005), 457-464.
37. Vodder, E: Vodder's lymph drainage. A new type of chirotherapy for esthetic prophylactic and curative purposes. Asthetische Medizin. 14 (1965), 190-191.
38. Földi, M, E Földi: Practical instructions for therapists-manual lymph drainage according to Dr E. Vodder. In: Földi's Textbook of Lymphology for Physicians and Lymphedema Therapists. Strossenruther, RH, S Kubic (Eds.) 2nd ed. Munich, Germany: Urban & Fischer. 2006 pp. 526-546.
39. Földi, M, E Földi: Lymphostatic diseases. In: Földi's Textbook of Lymphology for Physicians and Lymphedema Therapists. Strossenruther, RH, S Kubic (Eds.) 2nd ed. Munich, Germany: Urban & Fischer. 2006 pp. 677-683.
40. Smith, BG, KA Hutcheson, LG Little, et al: Lymphedema outcomes in patients with head and neck cancer. Otolaryn. Head Neck Surg. 152 (2015), 284-291.
41. Bruns, F, J Büntzel, R Mücke, et al: Selenium in the treatment of head and neck lymphedema. Med. Prin. Pract. 13 (2004), 185-190.
42. Micke, O, F Bruns, R Mücke, et al: Selenium in the treatment of radiation-associated secondary lymphedema. Inter. J. Rad. Oncol. Bio. Phys. 56 (2003), 40-49.
43. Gross, NJ, KR Narine, R Wade: Protective effect of corticosteroids on radiation pneumonitis in mice. Radiat. Res. 113 (1988), 112-119.
44. Gross, NJ, NO Holloway, KR Narine: Effects of some nonsteroidal anti-inflammatory agents on experimental radiation pneumonitis. Radiat. Res. 127 (1991), 317-324.
45.
Abergel, A, C Darcha, M Chevallier, et al: Histological response in patients treated by interferon plus ribavirin for hepatitis C virus-related severe fibrosis. Eur. J. Gastro. Hepato. 16 (2004), 1219-1227.
46. Ziesche, R, E Hofbauer, K Wittmann, et al: A preliminary study of long-term treatment with interferon gamma-1b and low-dose prednisolone in patients with idiopathic pulmonary fibrosis. NEJM 341 (1999), 1264-1269.
47. Peter, RU, P Gottlöber, N Nadeshina, et al: Interferon gamma in survivors of the Chernobyl power plant accident: New therapeutic option for radiation-induced fibrosis. Int. J. Radiat. Oncol. Bio. Phys. 45 (1999), 147-152.
48. Gottlöber, P, M Steinert, W Bähren, et al: Interferon-gamma in 5 patients with cutaneous radiation syndrome after radiation therapy. Int. J. Radiat. Oncol. Bio. Phys. 50 (2001), 159-166.
49. Steen, VD, TA Medsger, GP Rodnan: D-penicillamine therapy in progressive systemic sclerosis (scleroderma): A retrospective analysis. Ann. Int. Med. 97 (1982), 652-659.
50. Rambaldi, A, G Iaquinto, C Gluud: Anabolic-androgenic steroids for alcoholic liver disease: A Cochrane review. Am. J. Gastro. 97 (2002), 1674-1681.
51. Selman M, G Carrillo, J Salas, et al: Colchicine, D-penicillamine, and prednisone in the treatment of idiopathic pulmonary fibrosis: A controlled clinical trial. Chest 114 (1998), 507-512.
52. Entzian, P, M Schlaak, U Seitzer, et al: Antiinflammatory and antifibrotic properties of colchicine: Implications for idiopathic pulmonary fibrosis. Lung 175 (1997), 41-51.
53. Samlaska, C, E Winfield: Pentoxifylline: Clinical review. J. Am. Acad. Dematol. 30 (1994), 603-621.
54. Lefaix, JL, S Delanian, MC Vozenin, et al: Striking regression of subcutaneous fibrosis induced by high doses of gamma rays using a combination of pentoxifylline and α-tocopherol: An experimental study. Inter. J. Radiat. Oncol. Bio. Phys. 43 (1999), 839-847.
55.
Werner-Wasik M, Madoc-Jones H: Trental (pentoxifylline) relieves pain from postradia- tion fibrosis. Inter. J. Radiat. Oncol. Bio. Phys. 125 (1993), 757-758. 56. Futran ND, Trotti A, Gwede C: Pentoxifylline in the treatment of radiation-related soft tissue injury: Preliminary observations. Laryngo- scope 107 (1997), 391-395. 57. Okunieff, P, E Augustine, JE Hicks, et al: Pent- oxifylline in the treatment of radiation-induced fibrosis. J. Clin. Oncol. 22 (2004), 2207-2213. 58. Delanian, S, R Porcher, S Balla-Mekias, et al: Randomized, placebo-controlled trial of combined pentoxifylline and tocopherol for regression of superficial radiation-induced fibrosis. J. Clin. Oncol. 21 (2003), 2545-2550. 59. Pasquier, D, T Hoelscher, J Schmutz, et al: Hyperbaric oxygen therapy in the treatment of radio-induced lesions in normal tissues: A literature review. Radiother. Oncol. 72 (2004), 1-3. 66. Carl, UM, JJ Feldmeier, G Schmitt, et al: Hyperbaric oxygen therapy for late sequelae in women receiving radiation after breast-con- serving surgery. Int. J. Radiat. Oncol. Bio. Phys. 49 (2001), 1029-1031. 61. Gothard, L, A Stanton, J MacLaren, et al: Non-randomised phase II trial of hyperbar- ic oxygen therapy in patients with chronic arm lymphoedema and tissue fibrosis after radiotherapy for early breast cancer. Radiother. Oncol. 70 (2004), 217-224. 62. Pritchard, J, P Anand, J Broome, et al: Double-blind randomized phase II study of hy- perbaric oxygen in patients with radiation-in- duced brachial plexopathy. Radiother. Oncol. 58 (2001), 279-286. 63. Ward, WF, A Molteni, CH Ts’ao, et al: Func- tional responses of the pulmonary endothelium to thoracic irradiation in rats: Differential modification by D-penicillamine. Int. J. Radi- at. Oncol. Bio. Phys. 13 (1987), 1505-1513. Permission granted for single print for individual use. Reproduction not permitted without permission of Journal LYMPHOLOGY. 118 64. 
Deepak Balasubramanian, MD
Department of Head and Neck Surgery and Oncology
Amrita Hospital
Ponnekkara PO
Kochi - 682 041, India
Telephone: +91 8089089887
Email: dr.deepak.b@gmail.com

work_7otrbquo2ffwfmfkjtypljpcw4 ---- Clinicians and their cameras: policy, ethics and practice in an Australian tertiary hospital.

DOI: 10.1071/AH12039 Corpus ID: 5872900
K. Burns, S. Belton. Australian Health Review 37(4) (2013), 437-41.
Medical photography illustrates what people would prefer to keep private, is practiced when people are vulnerable, and has the power to freeze a moment in time. Given it is a sensitive area of health, lawful and ethical practice is paramount. This paper recognises and seeks to clarify the possibility of widespread clinician-taken medical photography in a tertiary hospital in northern Australia, examining the legal and ethical implications of this practice. A framework of Northern Territory law…
work_7pcrmilfhveudkw7e6sileabu4 ---- Beauty and the Biologic: Artistic Documentation of Scientific Breakthrough in Psoriasis

DOI: 10.1159/000439583 Corpus ID: 14293066
J. Maul, S. Carraro, Johanna Stierlin, M. Geiges, A. Navarini. Case Reports in Dermatology 7 (2015), 249-252.
The making of wax moulages was an exclusive and sought-after art that was primarily used for teaching, but also to document clinical and laboratory research during the first half of the 20th century. Applying the technique of moulage-making to document a case of psoriasis improvement for posterity, a moulage of the trunk of a patient with psoriasis vulgaris was taken prior to treatment with biologics - adalimumab, a TNF-α antagonist - and again 3 months after adalimumab was first given. Our…

work_7rbrry34l5fi5neoyqxiprrasi ---- Integrity of Scientific Imagery: Photoshop v.4 versus v.5

Michael Shaffer, University of Oregon

Having installed Photoshop 5 after using version 4 for some time, I think that all scientists who believe the integrity of their imagery is important should understand what Photoshop 5 brings with it: the issue of "color space" or "color profiles". Whereas users could rely on the integrity of their image being retained with Photoshop 4, this changes with version 5, and in some cases rather severely. For example, an image bitmap which might represent elemental spatial distribution data should be considered to have a gamma of unity. Photoshop 4 would represent this data for editing via the display monitor's gamma compensation only. That is, the integrity of the pixel values would remain, and visual compensation happens between data and display. Photoshop 5, on the other hand, will modify pixel values in accordance with either a desired "working" color space or an anticipated "target" color space. A good example of this would be if version 4 is used to create a linear gradient of grays.
Examining the histogram shows a flat distribution of pixel values, 0 to 255. Now, if the gradient is saved and loaded into version 5's default working space (caveats below) and the histogram is examined, a very different set of pixel values is found. If this file is now saved, the original data is lost. The degree of change is for the most part a move from "simple data" (gamma = 1) to a color space gamma (gamma could be anything). The changes also reflect color gamuts associated with either the display or the target, but what is most noticeable is a compensation for gamma, for which the default is 2.2.

Adobe has addressed this problem in the sense that the earliest distributions of version 5 installed "color profiling" behind your back without asking you to understand it. An available patch (version 5.02) and current distributions now provide warnings, and choices for defining the desired default color space or turning it off. Still, there are many virtues of color space if it is understood. My message here shouldn't be taken as a suggestion that every scientist should avoid Photoshop 5; just be aware of this issue. My suggestions would be to:

(1) Always keep the original image files intact.
(2) Keep the installation of Photoshop version 4.
(3) If version 4 isn't available, install only the latest distribution of version 5.02, or install the patch (5 to 5.02) as soon as possible.
(4) Turn off "color space" conversion when opening files, and use "profiling" only when a particular target color gamut and gamma is desired.
(5) Do not rely on the manual for an understanding of color space and profiles. Adobe provides much more information at their web site: http://www.adobe.com/supportservice/custsupport/TECHGUIDE/PSHOP/main.html. Also, investigate with the aid of 3rd party texts, such as Real World Photoshop 5 (Addison-Wesley, 1999) by Bruce Fraser and David Blatner, and Essentials of Digital Photography by Akira Kasai, Russell Sparkman, and Elizabeth Hurley (New Riders Publishing, 1997).
work_7vntzamhl5d3rnufyxw53hhkwy ---- Anaesthetic eye drops for children in casualty departments across south east England

It is a common practice to use topical anaesthetic drops to provide temporary relief and aid in the examination of the eyes when strong blepharospasm precludes thorough examination. Ophthalmology departments usually have several types of these - for example, amethocaine, oxybuprocaine (benoxinate), and proxymetacaine. The duration and degree of discomfort caused by amethocaine is significantly higher than that of proxymetacaine,1 2 whilst the difference in discomfort between amethocaine and oxybuprocaine is minimal.2 When dealing with children, therefore, it is recommended to use proxymetacaine drops.1

It was my experience that Accident & Emergency (A&E) departments tend to have less choice of these drops. This survey was done to find out the availability of different anaesthetic drops, and the preference for paediatric use given a choice of the above three. Questionnaires were sent to 40 A&E departments across south east England. Thirty-nine replied, of which one department did not see any eye casualties. None of the 38 departments had proxymetacaine. Twenty units had amethocaine alone and 10 units had oxybuprocaine alone. For paediatric use, these units were happy to use whatever drops were available within the unit. Eight units stocked both amethocaine and oxybuprocaine; four of these were happy to use either of them on children and four used only oxybuprocaine. One of the latter preferred proxymetacaine but had to contend with oxybuprocaine due to cost issues.

Children are apprehensive about the instillation of any eye drops. Hence, it is desirable to use the least discomforting drops, such as proxymetacaine. For eye casualties, in the majority of District General Hospitals, A&E departments are the first port of call.
Hence, A&E units need to be aware of the benefit of proxymetacaine and stock it for paediatric use.

M R Vishwanath
Department of Ophthalmology, Queen Elizabeth Queen Mother Hospital, Margate, Kent; m.vishwanath@virgin.net

doi: 10.1136/emj.2003.010645

References
1 Shafi T, Koay P. Randomised prospective masked study comparing patient comfort following the instillation of topical proxymetacaine and amethocaine. Br J Ophthalmol 1998;82(11):1285–7.
2 Lawrenson JG, Edgar DF, Tanna GK, et al. Comparison of the tolerability and efficacy of unit-dose, preservative-free topical ocular anaesthetics. Ophthalmic Physiol Opt 1998;18(5):393–400.

Training in anaesthesia is also an issue for nurses

We read with interest the excellent review by Graham.1 An important related issue is the training of the assistant to the emergency physician. We wished to ascertain whether use of an emergency nurse as an anaesthetic assistant is common practice. We conducted a short telephone survey of the 12 Scottish emergency departments with attendances of more than 50 000 patients per year. We interviewed the duty middle grade doctor about usual practice in that department. In three departments, emergency physicians will routinely perform rapid sequence intubation (RSI), the assistant being an emergency nurse in each case. In nine departments an anaesthetist will usually be involved or emergency physicians will only occasionally perform RSI. An emergency nurse will assist in seven of these departments.

The Royal College of Anaesthetists2 have stated that anaesthesia should not proceed without a skilled, dedicated assistant. This also applies in the emergency department, where standards should be comparable to those in theatre.3 The training of nurses as anaesthetic assistants is variable and is the subject of a Scottish Executive report.4 This consists of at least a supernumerary in-house program of 1 to 4 months.
Continued professional development and at least 50% of working time devoted to anaesthetic assistance follow this.4 The Faculty of Emergency Nursing has recognised that anaesthetic assistance is a specific competency. We think that this represents an important progression. The curriculum is, however, still in its infancy and is not currently a requirement for emergency nurses (personal communication with L McBride, Royal College of Nursing). Their assessment of competence in anaesthetic assistance is portfolio based and not set against specified national standards (as has been suggested4). We are aware of one-day courses to familiarise nurses with anaesthesia (personal communication with J McGowan, Southern General Hospital). These are an important introduction, but are clearly incomparable to formal training schemes.

While Graham has previously demonstrated the safety of emergency physician anaesthesia,5 we suggest that when anaesthesia does prove difficult, a skilled assistant is of paramount importance. Our small survey suggests that the use of emergency nurses as anaesthetic assistants is common practice. If, perhaps appropriately, RSI is to be increasingly performed by emergency physicians,5 then the training of the assistant must be concomitant with that of the doctor. Continued care of the anaesthetised patient is also a training issue1 and applies to nurses as well. Standards of anaesthetic care need to be independent of its location and provider.

R Price
Department of Anaesthesia, Western Infirmary, Glasgow: Gartnavel Hospital, Glasgow, UK

A Inglis
Department of Anaesthesia, Southern General Hospital, Glasgow, UK

Correspondence to: R Price, Department of Anaesthesia, 30 Shelley Court, Gartnavel Hospital, Glasgow, G12 0YN; rjp@doctors.org.uk

doi: 10.1136/emj.2004.016154

References
1 Graham CA. Advanced airway management in the emergency department: what are the training and skills maintenance needs for UK emergency physicians?
Emerg Med J 2004;21:14–19.
2 Guidelines for the provision of anaesthetic services. Royal College of Anaesthetists, London, 1999. http://www.rcoa.ac.uk/docs/glines.pdf.
3 The Role of the Anaesthetist in the Emergency Service. Association of Anaesthetists of Great Britain and Ireland, London, 1991. http://www.aagbi.org/pdf/emergenc.pdf.
4 Anaesthetic Assistance. A strategy for training, recruitment and retention and the promulgation of safe practice. Scottish Medical and Scientific Advisory Committee. http://www.scotland.gov.uk/library5/health/aast.pdf.
5 Graham CA, Beard D, Oglesby AJ, et al. Rapid sequence intubation in Scottish urban emergency departments. Emerg Med J 2003;20:3–5.

Ultrasound Guidance for Central Venous Catheter Insertion

We read Dunning's BET report with great interest.1 As Dunning himself acknowledges, most of the available literature concerns the insertion of central venous catheters (CVCs) by anaesthetists (and also electively). However, we have found that this data does not necessarily apply to the critically ill emergency setting, as illustrated by the study looking at emergency medicine physicians,2 in which ultrasound did not reduce the complication rate.

The literature does not distinguish between potentially life-threatening complications and those with unwanted side effects. An extra attempt or prolonged time for insertion, whilst unpleasant, has a minimal impact on the patient's eventual outcome. However, a pneumothorax could prove fatal to a patient with impending cardio-respiratory failure. Some techniques - for example, the high internal jugular approach - have much lower rates of pneumothorax. Furthermore, some techniques use an arterial pulsation as a landmark. Such techniques can minimise the adverse effect of an arterial puncture, as pressure can be applied directly to the artery.
We also share Dunning's doubt in the National Institute for Clinical Excellence (NICE) guidance's claim that the cost per use of an ultrasound could be as low as £10.3 NICE's economic analysis model assumed that the device is used 15 times a week. This would mean sharing the device with another department, clearly unsatisfactory for most emergency situations. The cost per complication prevented would be even greater (£500 in Miller's study, assuming 2 fewer complications per 100 insertions). Finally, the NICE guidance is that ''appropriate training to achieve competence'' is undertaken. We are sure that the authors would concur that the clinical scenario given would not be the appropriate occasion to ''have a go'' with a new device for the first time.

In conclusion, we believe that far more important than ultrasound-guided CVC insertion is the correct choice of insertion site, avoiding those significant risks which the critically ill patient would not tolerate.

M Chikungwa, M Lim
Correspondence to: M Chikungwa; mchikungwa@msn.com

doi: 10.1136/emj.2004.015156

Accepted for publication 27 May 2004

References
1 Dunning J, Williamson J. Ultrasonic guidance and the complications of central line placement in the emergency department. Emerg Med J 2003;20:551–552.
2 Miller AH, Roth BA, Mills TJ, et al. Ultrasound guidance versus the landmark technique for the placement of central venous catheters in the emergency department. Acad Emerg Med 2002;9:800–5.
3 National Institute for Clinical Excellence. Guidance on the use of ultrasound locating devices for placing central venous catheters. Technology appraisal guidance no 49, 2002. http://www.nice.org.uk/cat.asp?c=36752 (accessed 24 Dec 2003).

Emerg Med J 2005;22:608–610 www.emjonline.com

Patients' attitudes toward medical photography in the emergency department

Advances in digital technology have made use of digital images increasingly common for the purposes of medical education.1 The high turnover of patients in the emergency department, many of whom have striking visual signs, makes this an ideal location for digital photography. These images may eventually be used for the purposes of medical education in presentations, and in book or journal format.2 3 As a consequence patients' images may be seen by the general public on the internet, as many journals now have open access internet sites. From an ethical and legal standpoint it is vital that patients give informed consent for use of images in medical photography, and are aware that such images may be published on the world wide web.4 The aim of this pilot study was to investigate patients' attitudes toward medical photography as a guide to consent and usage of digital photography within the emergency department.

A patient survey questionnaire was designed to answer whether patients would consent to their image being taken, which part(s) of their body they would consent to being photographed, and whether they would allow these images to be published in a medical book, journal, and/or on the internet. All patients attending the minors section of an inner city emergency department between 1 January 2004 and 30 April 2004 were eligible for the study. Patients were included if aged over 18 and having a Glasgow coma score of 15.
Patients were excluded if in moderate or untreated pain, needed urgent treatment, or were unable to read or understand the questionnaire. All patients were informed that the questionnaire was anonymous and would not affect their treatment. Data was collected by emergency department Senior House Officers and Emergency Nurse Practitioners. 100 patients completed the questionnaire. The results are summarised below:

Q1 Would you consent to a photograph being taken in the Emergency Department of you/part of your body for the purposes of medical education? Yes 84%, No 16%. 21% replied Yes to all forms of consent, 16% replied No to all forms of consent, while 63% replied Yes with reservations for particular forms of consent.

Q2 Would you consent the following body part(s) to be photographed (head, chest, abdomen, limbs and/or genitalia)? The majority of patients consented for all body areas to be photographed except for genitalia (41% Yes, 59% No), citing invasion of privacy and embarrassment.

Q3 Would you consent to your photo being published in a medical journal, book or internet site? The majority of patients gave consent for publication of images in a medical journal (71%) or book (70%), but were more likely to refuse consent for use of images on internet medical sites (47% Yes, 53% No or unsure).

In determining the attitudes of patients presenting in an inner city London emergency department regarding the usage of photography, we found that the majority of patients were amenable to having their images used for the purposes of medical education. The exceptions to this were the picturing of genitalia and the usage of any images on internet medical sites/journals. The findings of this pilot study are limited to data collection in a single emergency department in central London. A particular flaw of this survey is the lack of correlation between age, sex, ethnicity, and consent for photography. Further study is ongoing to investigate this.
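Since the pilot reports simple proportions from n = 100 respondents, the sampling uncertainty behind the headline figures is easy to quantify. The sketch below is our addition (the percentage inputs are taken from the survey results above; Wilson score intervals are one standard choice for binomial proportions):

```python
import math

def wilson_ci(successes: int, n: int, z: float = 1.96):
    """95% Wilson score interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z ** 2 / n
    centre = (p + z ** 2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2))
    return centre - half, centre + half

# Headline figures from the survey (n = 100 respondents).
for label, yes in [("consent to any photograph", 84),
                   ("consent for genitalia", 41),
                   ("consent for journal publication", 71),
                   ("consent for internet publication", 47)]:
    lo, hi = wilson_ci(yes, 100)
    print(f"{label}: {yes}% (95% CI {100 * lo:.0f}%-{100 * hi:.0f}%)")
```

Each interval spans roughly 7-10 percentage points either side of the estimate, which supports the authors' caution that this is a pilot rather than a definitive survey.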
There have been no studies published about patients' opinions regarding medical photography to date. The importance of obtaining consent for publication of patient images and concealment of identifying features has been stressed previously.5 This questionnaire study emphasises the need to investigate patients' beliefs and concerns prior to consent.

A Cheung, M Al-Ausi, I Hathorn, J Hyam, P Jaye
Emergency Department, St Thomas' Hospital, UK

Correspondence to: Peter Jaye, Consultant in Emergency Medicine, St Thomas' Hospital, Lambeth Palace Road, London SE1 7RH, UK; peter.jaye@gstt.nhs.uk

doi: 10.1136/emj.2004.019893

References
1 Mah ET, Thomsen NO. Digital photography and computerisation in orthopaedics. J Bone Joint Surg Br 2004;86(1):1–4.
2 Clegg GR, Roebuck S, Steedman DJ. A new system for digital image acquisition, storage and presentation in an accident and emergency department. Emerg Med J 2001;18(4):255–8.
3 Chan L, Reilly KM. Integration of digital imaging into emergency medicine education. Acad Emerg Med 2002;9(1):93–5.
4 Hood CA, Hope T, Dove P. Videos, photographs, and patient consent. BMJ 1998;316:1009–11.
5 Smith R. Publishing information about patients. BMJ 1995;311:1240–1.

Unnecessary Tetanus boosters in the ED

It is recommended that five doses of tetanus toxoid provide lifelong immunity and 10 yearly doses are not required beyond this.1 National immunisation against tetanus began in 1961, providing five doses (three in infancy, one preschool and one on leaving school).2 Coverage is high, with uptake over 90% since 1990.2 Therefore, the majority of the population under the age of 40 are fully immunised against tetanus. Td (tetanus toxoid/low dose diphtheria) vaccine is often administered in the Emergency Department (ED) following a wound or burn based upon the patient's recollection of their immunisation history. Many patients and staff may believe that doses should still be given every 10 years.
During summer 2004, an audit of tetanus immunisation was carried out at our department. The records of 103 patients who had received Td in the ED were scrutinised and a questionnaire was sent to each patient's GP requesting information about the patient's tetanus immunisation history before the dose given in the ED. Information was received for 99 patients (96% response). In 34/99, primary care records showed the patient was fully immunised before the dose given in the ED. One patient had received eight doses before the ED dose and two patients had been immunised less than 1 year before the ED dose. In 35/99, records suggested that the patient was not fully immunised; however, in this group few records were held before the early 1990s and it is possible some may have had five previous doses. In 30/99 there were no tetanus immunisation records. In 80/99 no features suggesting the wound was tetanus prone were recorded. These findings have caused us to feel that some doses of Td are unnecessary. Patients' recollections of their immunisation history may be unreliable. We have recommended that during working hours, the patient's general practice should be contacted to check immunisation records. Out of hours, if the patient is under the age of 40 and the wound is not tetanus prone (as defined in DoH guidance1), the general practice should be contacted as soon as possible and the immunisation history checked before administering Td. However, we would like to emphasise that wound management is paramount, and that where tetanus is a risk in a patient who is not fully immunised, a tetanus booster will not provide effective protection against tetanus. In these instances, tetanus immunoglobulin (TIG) also needs to be considered (and is essential for tetanus prone wounds).
In the elderly and other high-risk groups—for example, intravenous drug abusers—the need for a primary course of immunisation against tetanus should be considered, not just a single dose, and follow up with the general practice is therefore needed. The poor state of many primary care immunisation records is a concern and this may argue in favour of centralised immunisation records or a patient electronic record to protect patients against unnecessary immunisations as well as tetanus.

T Burton, S Crane
Accident and Emergency Department, York Hospital, Wigginton Road, York, UK
Correspondence to: Dr T Burton, 16 Tom Lane, Fulwood, Sheffield, S10 3PB; tomandjackie@doctors.org.uk
doi: 10.1136/emj.2004.021121

References
1 CMO. Replacement of single antigen tetanus vaccine (T) by combined tetanus/low dose diphtheria vaccine for adults and adolescents (Td) and advice for tetanus immunisation following injuries. In: Update on immunisation issues PL/CMO/2002/4. London: Department of Health, August 2002.
2 http://www.hpa.org.uk/infections/topics_az/tetanus.

Accepted for publication 25 February 2004
Accepted for publication 12 October 2004
Accepted for publication 20 October 2004

BOOK REVIEWS

and Technical Aspects be comprehensive with regards to disaster planning? Can it provide me with what I need to know? I was confused by the end of my involvement with this book, or perhaps overwhelmed by the enormity of the importance of non-medical requirements such as engineering and technical expertise in planning for and managing environmental catastrophes. Who is this book for? I am still not sure. The everyday emergency physician? I think not.
It serves a purpose in being educational about what is required, in a generic sort of way, when planning for disasters. Would I have turned to it last year during the SARS outbreak? No. When I feared a bio-terrorism threat? No. When I watched helplessly the victims of the latest Iranian earthquake? No. To have done so would have been to participate in some form of voyeurism on other people's misery. Better to embrace the needs of those victims of environmental disasters in some tangible way than rush to the book shelf to brush up on some aspect of care which is so remote for the majority of us in emergency medicine.

J Ryan
St Vincent's Hospital, Ireland; j.ryan@st-vincents.ie

Neurological Emergencies: A Symptom Orientated Approach
Edited by G L Henry, A Jagoda, N E Little, et al. McGraw-Hill Education, 2003, £43.99, pp 346. ISBN 0071402926

The authors set out with very laudable intentions: they wanted to get the ''maximum value out of both professional time and expensive testing modalities''. I therefore picked up this book with great expectations—the prospect of learning a better and more memorable way of dealing with neurological cases in the emergency department. The chapter headings (14 in number) seemed to identify the key points I needed to know, and the size of the book (346 pages) indicated that it was possible to read. Unfortunately, things did not start well. The initial chapter on basic neuroanatomy mainly used diagrams from other books; the end result was areas of confusion where the text did not entirely marry up with the diagrams. The second chapter, dealing with evaluating the neurological complaint, was better and had some useful tips. However, the format provided a clue as to how the rest of the book was to take shape—mainly text and lists. The content of this book was reasonably up to date, and if you like learning neurology by reading text and memorising lists then this is the book for you. Personally, I would not buy it.
I felt it was a rehash of a standard neurology textbook and missed a golden opportunity of being a comprehensible text on emergency neurology, written by emergency practitioners for emergency practitioners.

P Driscoll
Hope Hospital, Salford, UK; peter.driscoll@srht.nhs.uk

Emergency Medicine Procedures
E Reichmann, R Simon. New York: McGraw-Hill, 2003, £120, pp 1563. ISBN 0-07-136032-8

This book has 173 chapters, allowing each chapter to be devoted to a single procedure, which, coupled with a clear table of contents, makes finding a particular procedure easy. This will be appreciated mostly by the emergency doctor on duty needing a rapid "refresher" for infrequently performed skills. ''A picture is worth a thousand words'' was never so true as when attempting to describe invasive procedures. The strength of this book lies in the clarity of its illustrations, which number over 1700 in total. The text is concise but comprehensive. Anatomy, pathophysiology, indications and contraindications, equipment needed, technicalities, and complications are discussed in a standardised fashion for each chapter. The authors, predominantly US emergency physicians, mostly succeed in refraining from quoting the ''best method'' and provide balanced views of alternative techniques. This is well illustrated by the shoulder reduction chapter, which pictorially demonstrates 12 different ways of reducing an anterior dislocation. In fact, the only notable absentee is the locally preferred Spaso technique. The book covers every procedure that one would consider in the emergency department and many that one would not. Fish hook removal, zipper injury, contact lens removal, and emergency thoracotomy are all explained with equal clarity. The sections on soft tissue procedures, arthrocentesis, and regional analgesia are superb. In fact, by the end of the book, I was confident that I could reduce any paraphimosis, deliver a baby, and repair a cardiac wound.
However, I still had nagging doubts about my ability to aspirate a subdural haematoma in an infant, repair the great vessels, or remove a rectal foreign body. Reading the preface again, I was relieved: the main authors acknowledge that some procedures are for "surgeons" only and are included solely to improve the understanding by "emergentologists" of procedures that may present with late complications. These chapters are unnecessary, while others would be better placed in a pre-hospital text. Thankfully, they are relatively few in number, with the vast majority of the book being directly relevant to emergency practice in the UK. Weighing approximately 4 kg, this is undoubtedly a reference text. The price (£120) will deter some individuals, but it should be considered an essential reference book for SHOs, middle grades, and consultants alike. Any emergency department would benefit from owning a copy.

J Lee

Environmental Health in Emergencies and Disasters: A Practical Guide
Edited by B Wisner, J Adams. World Health Organization, 2003, £40.00, pp 252. ISBN 9241545410

I have the greatest admiration for doctors who dedicate themselves to disaster preparedness and intervention. For most doctors there will, thank god, rarely be any personal involvement in environmental emergencies and disasters. For the others who are involved, the application of this branch of medicine must be some form of ''virtual'' game of medicine, lacking in visible, tangible gains for the majority of their efforts. Reading this World Health Organization publication, however, has changed my perception of the importance of emergency planners, administrators, and environmental technical staff. I am an emergency physician, blinkered by measuring the response of interventions in real time: is the peak flow better after the nebuliser? Is the pain less with intravenous morphine?
But if truth be known, it is the involvement of public health doctors and emergency planners that makes the biggest impact in saving lives worldwide, as with doctors involved in public health medicine. This book served to demonstrate to me my ignorance on matters of disaster responsiveness. But can 252 pages of General Aspects

work_7y4nug7wmfahpauqg7y2x3p7vi ---- Comparison of photographic and conventional methods for tooth shade selection: A clinical evaluation

DOI: 10.4103/jips.jips_342_16. Corpus ID: 23416571
Juzer S. Miyajiwala, M. Kheur, Anuya H Patankar, Tabrez Lakha. The Journal of the Indian Prosthodontic Society 2017;17:273–281.

Aim: This study aimed to compare three different methods used for shade selection, i.e., visual method, spectrophotometer, and digital photography method. Materials and Methods: Fifty participants were selected from the Out Patient Department of Prosthodontics. Presence of the maxillary right central incisor with no history of any restorative or endodontic procedures was the primary inclusion criterion.
The shade of the right maxillary central incisor was determined using all the three shade…
work_a3h4mv72erd5nli34hvton4cyq ---- Eye in the Sky: Using UAV Imagery of Seasonal Riverine Canopy Growth to Model Water Temperature

Hydrology (Article)

Ann Willis * and Eric Holmes
University of California, Davis Center for Watershed Sciences, Davis, CA 95616, USA; ejholmes@ucdavis.edu
* Correspondence: awillis@ucdavis.edu; Tel.: +1-530-867-9807
Received: 30 November 2018; Accepted: 4 January 2019; Published: 9 January 2019

Abstract: Until recently, stream temperature processes controlled by aquatic macrophyte shading (i.e., the riverine canopy) were an unrecognized phenomenon. This study aims to address the question of the temporal and spatial scale of monitoring and modeling that is needed to accurately simulate canopy-controlled thermal processes. We do this by using unmanned aerial vehicle (UAV) imagery to quantify the temporal and spatial variability of the riverine canopy and subsequently develop a relationship between its growth and time. Then we apply an existing hydrodynamic and water temperature model to test various time steps of canopy growth interpolation and explore the balance between monitoring and computational efficiencies versus model performance and utility for management decisions. The results show that riverine canopies modeled at a monthly timescale are sufficient to represent water temperature processes at a resolution necessary for reach-scale water management decisions, but not local-scale.
As growth patterns were more frequently updated, negligible changes were produced by the model. Spatial configurations of the riverine canopy vary interannually; new data may need to be gathered for each growth season. However, the risks of inclement field conditions during the early growth period are a challenge for monitoring via UAVs at sites with access constraints.

Keywords: water temperature; thermal regime; UAV; riverine canopy; management; model; aquatic vegetation

1. Introduction

Stream temperature is a widely studied feature of freshwater aquatic ecosystems [1,2]. Stream temperature regulates organisms' metabolism, growth, phenology, survival, food webs, and community structure [1–3]. Water temperature changes profoundly affect stream ecology, including nutrient processing capacity and food webs [4,5]. In addition, macroinvertebrates and other ectothermic organisms will move in both space and time as their preferred thermal regimes shift to increasingly constrained habitats [6,7]. Because stream temperature is more closely correlated with air temperature than with discharge, streams are generally expected to warm with climate change. However, buffering sources are expected to moderate stream temperature into the foreseeable future [3,8]. While the significance of stream temperature and changes to thermal regimes are widely appreciated, the processes underlying unregulated thermal regimes are less well defined [2,9]. Thermal regime is a phrase used to describe patterns of magnitude, timing, duration, and frequency of change in a stream's water temperature patterns [2]. Thermal landscapes are the spatial distribution of thermal regimes and are a product of the unique interactions between geography, hydrology, meteorology, climate, and myriad characteristics of the stream itself and its surrounding features [2].
Hydrology 2019, 6, 6; doi:10.3390/hydrology6010006 www.mdpi.com/journal/hydrology

The past few decades have seen an increase in empirical studies about the complex interactions that fundamentally control thermal regimes [2,3]. Though monitoring methods have rapidly advanced, water temperature data quality ranges widely, making it hard to determine the underlying processes controlling thermal regimes [3,9,10]. Shade is considered a major "second-order" control—behind first-order climate and hydrologic processes—on large-scale thermal regimes [9]. The distinction between shade and cover is important, as each refers to different elements in thermal regimes. In this paper, shade refers to the amount of solar radiation reduction that results from cover over an area. Canopy refers to the amount of physical cover over an area. For example, while a quadrat of a stream's water surface might be 20% covered by riparian or riverine canopies, the light reduction in those covered areas (i.e., shade) might be 80%. Stream temperature dynamics associated with shade have been long recognized, though predominately in the context of riparian shading and the effects of forestry practices [11] and, to a lesser extent, snow and ice [12]. Canopy cover has been associated with net cooling [13,14] and reduced sensible and latent heat exchange [14,15]. In addition to cooling via riparian canopies [8], riverine canopies that result from emergent aquatic vegetation have shown solar radiation reductions comparable to those achieved by riparian canopies [16,17]. A canopy that produces ≥70% shade is the objective for temperature control [14,18].
Predictive (also called deterministic) water temperature models can provide useful insights into thermal processes [2,9]. Early modeling studies emphasized the seasonal relevance of riparian shade to water temperature dynamics. In particular, the leaf-out and leaf-drop transitions in the riparian zone were simulated using coarse assumptions as an early example of process-based water temperature modeling [19]. Predictive models also possess several advantages over more simplified statistical modeling. Statistical models can be useful to explore questions with limited data [20,21]. However, statistical models that cover broad spatial scopes overcome data limitations by relying on underlying process assumptions. Often, these process assumptions assume a close correlation between air and stream temperature [8,20]—a sometimes erroneous assumption and an unreliable simplification when studying streams affected by human activities [2,3]. Statistical models have limited abilities to identify specific mechanisms in the temperature process and can be unreliable as surrogates for streams that differ in either space or time [9]. Predictive models also have significant disadvantages. Such models tend to be data-intensive, restricting their application to larger spatial scales [9]. When simulating riparian (or riverine) vegetation, the data provided to the model must be of a resolution equal to or better than the representation provided in the model. Historically, this has resulted in either coarse representations of canopy cover [16], time-intensive manual mapping [22], or expensive data collection methods such as light detection and ranging (LiDAR) [23]. Finally, predictive stream modeling methods are not widely adopted by managers, suggesting that currently available methods need improvement to become more widely accessible [24].
Because stream temperature is more closely correlated with air temperature than with discharge [25], streams are generally expected to warm with climate change [26–29]. However, buffering sources are expected to moderate stream temperature into the foreseeable future [3,8]; shade is predicted to be a potentially significant buffer [14,22,25]. Though temperatures in cold streams are projected to increase less than warm streams, streams with small temperature changes may see a large biological response if they are located near warm-edge or cold-edge boundaries of thermal niches [8]. Cold-water ecosystems in California, such as the Shasta watershed, are considered the lower boundary for species such as salmon [30]. Given the more frequent extreme thermal conditions expected with climate change and their subsequent effects on temperate species [31], understanding the mechanisms with which to mitigate those events is critical to the viability of cold-water ecosystems. Extensive temperature modelling and analysis has occurred in the lower Shasta watershed to explore past and current stream conditions as they relate to salmon [16,32–34]. Big Springs Creek is a spring-fed tributary to the Shasta River that influences the quality and extent of cold water habitat for tens of kilometers downstream of its confluence [34]. Strategic investments in habitat restoration have partially offset warming due to previous land and water use decisions [16,32] and may further mitigate expected warming due to climate change [8]. The primary objective of these investments has been to improve oversummering habitat quality and extent for federally and state-listed threatened coho salmon (Oncorhynchus kisutch). The result of these investments has been the fundamental shift of factors that control the thermal regime from meteorological conditions [34] to a reach-scale riverine canopy created by aquatic plants [16].
However, as the stream is located in an area where rangeland is the predominant land use [32], additional questions remain regarding how stream flows may be managed to enhance desirable instream water temperature conditions. The objective of this study is to explore the temporal and spatial scale of monitoring and modeling that is needed to accurately simulate thermal processes controlled by the riverine canopy and support management decisions. We do this by using unmanned aerial vehicle (UAV) imagery to quantify the temporal and spatial variability of the riverine canopy, and subsequently develop a relationship between its growth and time. Then we apply an existing hydrodynamic and water temperature model using the refined canopy data and explore the results for changes in accuracy. The results of this study will help identify the balance between monitoring and computational efficiencies versus model performance and utility that are needed to support management decisions in streams where shade plays a major role in water temperature processes.

2. Materials and Methods

2.1. Study Site and Period

The study site included the 3.7 km reach of Big Springs Creek from the outlet of Big Springs Dam to the confluence with the Shasta River (Figure 1). The study period occurred between 1 April 2017 and 30 September 2017, during which the growing season coincided with the irrigation season on the ranches surrounding the creek. In addition, data were used from a previous flight in August 2015 to assess the spatial variability of interannual riverine canopy growth. Site access to the reach upstream from Site 2 (river kilometer (rkm) 2.7) extending to Site 1 (rkm 3.7) was limited to 1 day per month, with access dates negotiated with the landowner at least 6 weeks prior to the proposed sampling dates. Once agreed upon, access dates could not be rescheduled.
Figure 1. A map of the study area. Sites 1–4 were used to test water temperature modeling performance. Approximate centroid UTM coordinates are 10T 547,987.38 m E, 4,605,393.41 m N.

2.2. Riverine Canopy Surveys

Riverine canopy surveys were completed using a 3DR Solo quadcopter, which was modified to attach a Canon PowerShot S100 digital camera. A flight path of 84 north–south oriented transects covering 22.5 linear kilometers was flown from an altitude of 104 m at a flight speed of 16 km per hour to achieve an approximate image side and end overlap of 70%. Control points were established at 36 locations by monumenting 13 cm2 bolts upside-down in concrete, then surveying the top of the bolt using a Topcon Hiper V Real Time Kinematic GPS unit with 5 mm horizontal and 10 mm vertical accuracy.
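As a quick sanity check on these flight parameters, the exposure spacing implied by the stated ground speed and shutter interval can be computed directly. The sketch below uses only the 16 km/h speed, the 5-s camera interval, and the ~70% end overlap given in the text; the required along-track image footprint is a back-of-the-envelope inference, not a value reported by the study.

```python
# Back-of-the-envelope check of exposure spacing for the survey flights.
# Inputs from the text: 16 km/h ground speed, 5-s shutter interval, ~70%
# end overlap. The minimum along-track footprint is inferred.

speed_ms = 16.0 * 1000.0 / 3600.0   # 16 km/h in m/s (~4.44 m/s)
spacing_m = speed_ms * 5.0          # ground distance between exposures

overlap = 0.70
# With 70% end overlap, consecutive images may advance only 30% of the
# image footprint, so the footprint must be at least spacing / (1 - overlap).
footprint_m = spacing_m / (1.0 - overlap)

print(round(spacing_m, 1))    # 22.2
print(round(footprint_m, 1))  # 74.1
```

In other words, at this speed and interval each photo is taken roughly every 22 m of flight line, which is consistent with high-overlap photogrammetric mapping from ~104 m altitude.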
Control point targets were created using 0.6 m2 wood boards that were painted white, then marked with 5 cm black lines across the diagonals, and finally had a 1.3 cm hole drilled in the center of the board. The targets were then mounted on the monumented bolts (targets were held in place using a washer and nut both above and below the board) to help visual identification of the control points in the UAV imagery. The camera was programmed to take images at a 5-s interval. Images were reviewed and adjusted for brightness, then stitched together using Agisoft Photoscan Professional (v 1.2.6, St. Petersburg, Russia). Completed Photoscan models were georeferenced by identifying the control point targets in the individual photos. Orthomosaic and digital elevation model (DEM) layers, each with a 0.05 m resolution, were then created and exported as georeferenced tiff files. The spatial and temporal variability of riverine canopy growth was analyzed using supervised image classification and analysis of the tiff files in ArcMap 10.5 (Redlands, CA, USA). First, the orthomosaic image was clipped to include only the wetted channel. Also, polygons of a willow stand were made to mask the area from analysis of canopy extent due to aquatic macrophytes. Then, training samples of 40–50 merged polygons were created for each of two classes: open water and emergent aquatic vegetation. These training samples were then used to classify the clipped orthomosaic image of Big Springs Creek and to estimate the percent area covered by emergent aquatic vegetation. Misclassification was determined by extracting the classified image raster pixels in the training samples. Temporal changes were explored by comparing the percent area covered from one survey to the next during the 2017 monitoring period. Spatial changes were explored by comparing August surveys from 2015 and 2017, and analyzed to identify cover class areas that remained consistent. 2.3. 
Water Temperature Modeling Once the temporal and spatial trends were analyzed, an existing hydrodynamic and water temperature model of Big Springs Creek [16] was used to simulate water temperature conditions given various frequencies of canopy growth interpolation. RMA-11 is a Fortran-based, proprietary model that has been applied in various water temperature studies [9,16,35]. Riverine canopy surveys were used to develop element classes in the unstructured grid that represented various amounts of canopy cover using numerical representations of shade and roughness; no other changes to the grid were made, so that changes in model performance would be due solely to the refined aquatic vegetation data. The cover in each element was determined by extracting the classified image raster pixels of each class (open and cover) for the area covered by the element and calculating the percent cover for each element at each of the four survey dates. Weekly changes in percent cover for each model element were determined using a linear interpolation between survey points. To simplify the computational requirements of the numerical water temperature model, element cover types were binned into classes representing no cover (0–10% covered areas in the classified images), 20% cover (10–30%), 40% cover (30–50%), 60% cover (50–70%), 80% cover (70–90%), and full cover (90–100% cover). A histogram of element classes was developed to review the trend of cover classes through the simulation period. Shade for each element class was calculated by assigning the empirically observed solar radiation for covered areas (i.e., 12% of solar radiation was measured in covered areas [16]) to the proportion of the element that represented its cover class, plus full solar radiation to the remaining area (Equation (1)).
element shade = (percent cover × 0.12) + (percent open × 1) (1) Roughness for each element class was similarly calculated, using empirically-based values for this site [16] Equation (2): element roughness = (percent cover × 0.31) + (percent open × 0.07) (2) The water temperature model was run for a continuous period between 1 June and 15 August to simulate three different time-step adjustments to the riverine canopy: weekly, bi-weekly and monthly. Shade and roughness values were updated to reflect new values at the start of each step, with no smoothing applied. Results of each simulation were analyzed at four locations (Figure 1) using mean bias, mean absolute error (MAE) and root mean square error (RMSE), keeping with performance criteria developed for management decision-making applications [16,32]. RMSE was particularly useful as it remains unbiased by seasonal cycles [9]; given the expected seasonal dynamics of canopy growth, controlling for seasonally-derived bias is a critical feature of this study. 3. Results 3.1. Riverine Canopy Surveys Due to the long lead time necessary for scheduling access to the study site, field conditions were not always conducive to UAV flights. Inclement weather or wind speeds greater than 16 km per hour prevented riverine canopy surveys for the months of April and September. During the remaining visits, UAV flights of the entire reach were completed over 2 days (Table 1).
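The element-update workflow described in the methods (linear interpolation of percent cover between surveys, binning into discrete cover classes, and Equations (1) and (2)) can be sketched in Python. This is an illustrative reconstruction, not the authors' code: function names are invented, and the survey dates are approximated as day-of-year values taken from the flight dates in Table 1.

```python
import numpy as np

# Percent cover from the four survey flights (Table 1); dates approximated
# as day-of-year values (~22 May, 19 Jun, 23 Jul, 15 Aug) - an assumption.
SURVEY_DAYS = np.array([142, 170, 204, 227])
SURVEY_COVER = np.array([38.0, 53.0, 71.0, 74.0])

def interpolated_cover(day):
    """Linearly interpolate percent cover between survey dates."""
    return float(np.interp(day, SURVEY_DAYS, SURVEY_COVER))

def cover_class(percent):
    """Bin percent cover into the model's discrete classes:
    0 (0-10%), 20 (10-30%), 40 (30-50%), 60 (50-70%),
    80 (70-90%), 100 (90-100%)."""
    edges = np.array([10, 30, 50, 70, 90])
    classes = [0, 20, 40, 60, 80, 100]
    return classes[int(np.searchsorted(edges, percent, side="right"))]

def element_shade(percent_cover):
    """Equation (1): covered area transmits 12% of solar radiation."""
    p = percent_cover / 100.0
    return p * 0.12 + (1.0 - p) * 1.0

def element_roughness(percent_cover):
    """Equation (2): roughness 0.31 under cover, 0.07 in open channel."""
    p = percent_cover / 100.0
    return p * 0.31 + (1.0 - p) * 0.07

# Weekly update for one element, e.g. day 186 (early July): interpolate,
# bin, then assign the binned class's shade and roughness to the element.
cls = cover_class(interpolated_cover(186))
print(cls, element_shade(cls), element_roughness(cls))
```

Binning before computing shade and roughness mirrors the simplification described in the text: the model carries only six cover configurations per element rather than a continuous cover value.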
The supervised classification was able to distinguish between open channel and emergent plants (i.e., canopy) for all orthomosaic images produced via UAV monitoring (Figure 2). An analysis of misclassified pixels showed that the training samples were sufficient to classify cover type with 1.3% and 4.0% misclassification for open channel and emergent plant classes, respectively. Figure 2. An example of the (a) orthoimagery; (b) results of the supervised classification, using data gathered during Survey 2 in June 2017. Table 1. A summary of the survey dates and percent cover of Big Springs Creek.
Survey     Flight Dates        Canopy Cover (%)   Canopy Cover (m2)
Survey 1   22–23 May 2017      38                 61,668
Survey 2   19–20 June 2017     53                 87,776
Survey 3   23–24 July 2017     71                 117,348
Survey 4   15–16 August 2017   74                 121,583

Canopy cover changed both temporally and spatially in Big Springs Creek. Temporally, cover increased from 38% in May 2017 to 74% in August 2017 (an increase of 36 percentage points), with the largest change occurring between June and July (18 percentage points) and the smallest change occurring between July and August (4 percentage points) (Table 1). For the spatial analysis, differences in the flight path used in 2015 resulted in poor image resolution or lack of coverage at the margins of the orthomosaic. Thus, of the total area surveyed in 2017, 16% (25,989 m2) could not be compared to data from the 2015 flight. Of the remaining area, the cover remained consistent over 66% (108,352 m2) of the stream from August 2015 to August 2017, while 16% (26,765 m2) shifted from the canopy to open channel and 2% (3197 m2) shifted from the open channel to canopy (Table 2).

Table 2. A summary of the percent area that shifted cover classes from August 2015 to August 2017.

Class Change             Area (m2)   Area (%)
Canopy to open channel   25,989      16
Open channel to canopy   3197        2
No change                108,352     66
Area not analyzed        26,765      16

3.2. Water Temperature Modeling The histogram of the element classes shows that during the beginning of the simulation period, element classes were dominated by areas with cover ≤40% (Figure 3). As the simulation period progressed, classes were dominated by areas with cover ≥60%.
While most classes showed steady trends either increasing or decreasing their frequency, the element class that represented 60% cover initially occurred more frequently, then declined. From its peak frequency, the 60% coverage class saw a net 30% transition to greater coverage classes. The highest and lowest cover classes also showed indications of a plateau at the end of the modeled period, while areas in the 60% and 80% covered classes showed steady decreasing and increasing trends, respectively. Regardless of the frequency with which canopy growth was simulated, the water temperature model produced results that met the performance criteria (Table 3). Simulated water temperatures were generally warmer than observed water temperatures, as shown by the positive mean bias across all simulations. For all simulations, mean bias increased through Site 3, then decreased towards the mouth, Site 4. This gradual increase in mean bias, then decline, suggests that better representation of local features such as groundwater inflow volumes may be necessary to apply the model for more refined management objectives. Mean absolute error (MAE) remained consistent at each site (with the expected exception of the boundary condition), showing no substantial changes in accuracy as the model progressed through the study area. Root mean square error (RMSE) also remained well within the 1.5*MAE threshold for all sites and simulations, indicating no anomalous, large errors. Table 3. A summary of performance results for each simulation of water temperatures given various frequencies of interpolated canopy growth. All performance metrics are measured in °C.
Site   River Kilometer (rkm)   Weekly Growth             Biweekly Growth          Monthly Growth
                               Mean Bias  MAE b  RMSE c  Mean Bias  MAE   RMSE   Mean Bias  MAE   RMSE
1 a    3.7                     0.0        0.0    0.0     0.0        0.0   0.0    0.0        0.0   0.0
2      2.6                     0.3        0.9    1.2     0.3        0.9   1.1    0.4        0.9   1.2
3      1.7                     0.6        0.8    1.0     0.7        0.8   1.0    0.7        0.9   1.1
4      0.0                     0.3        0.7    0.9     0.4        0.7   0.8    0.5        0.7   0.9

a Boundary condition; b Mean absolute error; c Root mean square error.

Figure 3. The distribution of element cover classes through the model simulation period.

A closer review of the modeled water temperatures compared to observed water temperatures shows where the likely sources of error occurred. Given the comparable performance of each simulation, only plots for the weekly growth simulation are presented and discussed. Additional plots of the biweekly and monthly growth simulations are presented in Appendix A. Though the performance metrics show a mean bias of up to 0.6 °C, the plots of modeled to observed water temperatures show that daily maximum water temperatures are generally overestimated while daily minimum water temperatures are well replicated (Figure 4). Site 1 represents a boundary condition of the model and is defined by the observed data at that site. At Sites 2 and 3, maximum water temperatures are generally over-estimated, a trend that remains consistent throughout the simulation period.
At Site 2, observed water temperatures from 5 June to 13 June appear anomalous when compared to the other sites, suggesting that the observed data may not be an accurate record, contributing to the larger error at that site. By Site 4, the modeled diurnal water temperatures better match the observed record, with better agreement of both daily maximum and minimum water temperatures. However, agreement of modeled to observed daily maximum water temperatures declines towards the end of the simulation. Sites 2–4 also show periodic underestimates of daily minimum water temperatures that occur coincidentally with the shift in coverage represented in the model.
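The error metrics reported in Table 3, and the RMSE versus 1.5*MAE screen discussed above, can be sketched as follows. This is a hedged illustration, not the authors' analysis code; the array names and toy data are invented.

```python
import numpy as np

# Performance metrics for modeled vs. observed water temperature series:
# mean bias (positive = model runs warm), mean absolute error, and root
# mean square error, plus the RMSE <= 1.5 * MAE check for anomalous errors.

def mean_bias(modeled, observed):
    """Average signed error; positive values indicate a warm model."""
    return float(np.mean(modeled - observed))

def mae(modeled, observed):
    """Mean absolute error."""
    return float(np.mean(np.abs(modeled - observed)))

def rmse(modeled, observed):
    """Root mean square error; penalizes large individual errors."""
    return float(np.sqrt(np.mean((modeled - observed) ** 2)))

def no_anomalous_errors(modeled, observed):
    """True when RMSE stays within the 1.5 * MAE threshold."""
    return rmse(modeled, observed) <= 1.5 * mae(modeled, observed)

# Toy series with a uniform +0.5 degC warm bias (illustrative data only).
observed = np.array([14.0, 16.0, 18.0, 16.0])
modeled = observed + 0.5
print(mean_bias(modeled, observed), mae(modeled, observed),
      rmse(modeled, observed))
```

Because the toy bias is uniform, all three metrics coincide at 0.5 °C; a record with a few large spikes would instead push RMSE above the 1.5*MAE screen, which is why that ratio flags anomalous errors.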
Similar results are shown in the bi-weekly (Figure A1) and monthly (Figure A2) results. While the more frequent updates to canopy growth introduce more frequent errors for several hours after the new cover is introduced, results for the diurnal extremes (i.e., daily maximums and minimums) appear to be better simulated. Figure 4. Plots of the modeled (black line) water temperature versus observed (red line) water temperatures for the weekly canopy growth simulation. 4. Discussion As recently as the 1990s, local human activities were widely viewed as the dominant influence on thermal regimes in streams across the globe, rather than large-scale climate change [11]. By the early aughts, though, climate change was rapidly identified as a major driver of stream temperature changes [3]. While many management strategies focus on water quantity, the relationship between riparian canopies and climate change has long been recognized as an issue of greater consequence to thermal regimes [11]. Results from this and previous studies suggest that riverine canopies may play a similarly influential role in mitigating the predicted effects of climate change [25].
The findings from this study have broader implications for three facets of water temperature management for cold-water ecosystems: extending canopy monitoring methodology, as shown by combining UAV and digital photography technology; the important role of riverine canopies in thermal regimes; and water temperature modeling for large-scale versus local management objectives. 4.1. UAV Survey Methods The results of this study show that UAV survey methods provide an efficient approach (both in terms of equipment cost and person-time) to gather near-census data quantifying cover over a mid-sized stream.
Prior to the use of UAVs for canopy cover monitoring, methods were limited in the spatial extent they could cover, given the low elevations from which surveys were flown, as well as by the cost and time needed to perform the surveys. As recently as 2017, imagery taken from a height of 2–5 m above a stream's water surface was considered aerial imagery [36,37]. Those survey altitude limits further constrained the size of stream that could be surveyed; at the upper bound of those aerial surveys, study sites were limited to streams up to 10 m wide [36]. This study shows how utilizing the flight ability of UAVs greatly expands the area that can be surveyed: in 2 days, full surveys of Big Springs Creek were completed, which covered 3.7 km and included stream reaches as wide as 300 m [34]. In addition, this study shows how digital photography methods used for low-elevation surveys can be extended by combining digital photography with UAV technology. Alternative methods that have been developed to assess cover, such as riparian surveys using canopy densitometers [38], are impractical for riverine canopies, where emergent plants grow through and remain near the water surface. Digital photography provides a cost-effective approach that has been previously utilized for aquatic plant mapping at lower elevations [36]; this study illustrates how similar methods are successful at higher survey elevations, extending this method to a wide range of stream sizes. Despite its classification as "low-spectral resolution," the three-band (red-green-blue) survey was sufficient for classification accuracy, which makes this method a cost-effective alternative to others that use multi-spectral imaging. Some limitations of this study can be overcome by standardizing the flight paths and survey extent of UAVs.
A comparison of 2015 to 2017 data illustrated the need for wide spatial margins for survey areas, as well as the value in establishing repeatable flight paths to ensure reproducible and comparable survey areas. In addition, access limitations also highlighted how sites with more access flexibility are better suited to UAV monitoring, which requires dependable field conditions that typically are only well-forecast several days in advance. Finally, this method may benefit from additional ground truthing by manually surveying randomly selected in-stream areas using an RTK or other comparable methods. 4.2. Riverine Canopy Growth Understanding the thermal regime processes that will buffer against predicted climate change is critical to conserving and managing cold-water ecosystems. The results of this study show that riverine canopy processes can be well-characterized using monthly datasets and how its influence in the thermal regime becomes stronger as the canopy more fully develops throughout the growing season. Early in the growing season, the riverine canopy is dominated by relatively low-coverage areas: This period coincides with previous observations of annual maximum water temperatures in thermal regimes controlled by riverine canopies [16], when fish like juvenile coho may be more vulnerable to elevated water temperatures [39]. Improving resolution around early season growth may be critical, as previous work has shown that water in and above submerged vegetation may be more sensitive to solar radiation than open-channel flow [40,41]. As such, additional management actions may be necessary to mediate annual maximum water temperatures during those early growth periods. As the growing season progresses, there’s a transition across mid-coverage element areas as low-coverage areas become high coverage areas. 
Interestingly, the lack of a plateau in the 60% and 80% cover classes suggests that additional growth may be occurring and that the surveys completed in mid-August did not catch the transition from growth to senescence. Previous studies show that biomass continued to increase into September [16]. Extending the surveys later in the year would help quantify the peak coverage provided by the riverine canopy, as well as the timing of when the canopy begins to senesce. As well as extending the data describing the seasonal trends of canopy growth, additional data to quantify the shade provided by the canopy would further improve understanding of those thermal regimes. As warm temperatures and dry conditions extend later into the year [42], understanding the full potential of the riverine canopy to act as a buffer against these conditions is critical to understanding the potential management challenges for thermally-sensitive ecosystems. Once the riverine canopy transitions into predominantly fuller coverage, its advantages over riparian shade as a solar radiation buffer are clear. Riparian shade needs both longer time frames and spatial scales to achieve similar effectiveness to riverine canopies [16,17]. Tree height and shape, channel width and shape (i.e., straight or meandering), and channel orientation are all factors that limit the effectiveness of riparian shade [14,25]; riverine canopies have no limitations analogous to these riparian features. Despite these drawbacks of riparian canopies, research into the relationship between riparian cover and stream temperatures suggests useful considerations for future work. Microclimate changes due to extensive cover may shift energy fluxes in the heat budget and fluxes that are generally negligible in less densely covered reaches may become more influential in the overall thermal regime [14,43].
Examining the effects of canopy-controlled thermal regimes should include an analysis of daily extreme (i.e., maximum and minimum) water temperatures to ensure that sufficient minimum temperature conditions are maintained in streams targeted for salmonid or other cold-water species recovery. Also, the ability of aquatic plants to colonize 70% of the channel is consistent with the findings of other riparian studies for the cover extent needed to affect both temperature control [14,18] and macroinvertebrate recovery [18]. Those findings are confirmed by the results of this and previous studies, which show that seasonal water temperatures in Big Springs Creek begin to cool in late June/early July [16], when the riverine canopy covers nearly 70% of the stream surface. Given the larger stream-orders that may be affected by riverine canopies, versus riparian shade, it would be useful to determine the geographic extent of these types of streams, as restoration of the riverine canopy process could influence water temperatures on the reach-scale [34,44] and mitigate climate change [25]. On a reach-scale, regression equations have been used to identify predictor variables for water temperature, such as riparian vegetation [15], and could be useful tools to explore whether the riverine canopy-controlled thermal regime is representative of a class of rivers. Such findings could have important implications for mitigating the effects of climate change, as canopy-controlled thermal regimes may result in cooler stream temperatures than currently observed in spite of predicted climate warming [14,25]. In addition, while this study focuses on the relationship between riverine canopies and thermal regimes, other studies have shown strong relationships between aquatic plants and channel hydraulics [45–47].
However, because aquatic plants senesce each year, the role of riverine canopies and their seasonal effects on physical salmonid habitat, and, by extension, salmonid life history strategies, may show an interesting contrast to studies that focus on large woody debris and other semi-permanent features for cover and velocity utilization by juvenile coho [48,49]. 4.3. Water Temperature Modeling Finally, water temperature modeling is used to transform improved monitoring using UAV technology and the improved understanding of the role riverine canopies play in the thermal regime into a potential management tool. While model results show that monthly interpolated canopy growth is sufficient to model water temperatures, the performance metrics and comparative plots suggest that there is additional room for improvement. The negligible improvement that followed more refined temporal resolution of canopy growth suggests that further improvement is more likely to result from better representation of other processes in the thermal regime. Such processes include better representation of substantial groundwater inflows to the creek, both in the overall quantity and distribution of flows among discrete groundwater sources. Due to the dominant role that groundwater plays in spring-fed stream thermal regimes [50,51], additional work is recommended to improve the understanding of the conductive and advective heat flux through the stream bed. Such work would also help clarify the issue of potential shifts in dominant heat flux processes given the microclimate effects of canopy cover. Also, the model showed some short-term (e.g., over a period of hours) sensitivity to the periodic update of canopy cover, and could benefit from additional refinement such as transitional smoothing between cover configurations. 
As such, the model is better suited for large-scale management objectives (e.g., managing water temperature conditions that are exported to the reach-scale habitat in the downstream Shasta River), but requires refinement before it could be confidently applied to managing the local habitat within Big Springs Creek. Additional work that explores model performance in response to more refined grid structures would help illustrate the balance between computational efficiency and the data required to accurately simulate the heat exchange processes dictated by the riverine canopy. Models developed at fine spatial scales can be particularly useful for understanding the relationship between ecosystem dynamics and Hydrology 2019, 6, 6 11 of 14 water temperature processes [9]. Also, while the decision to use a proprietary model was influenced by considerable investments made in previous stages of this research, publicly available models would allow for more transparency. Future stages, particularly those with the objective of evaluating management decisions, should weigh the benefit of using currently available models against the desirability for more transparent, and potentially transferable, modeling methods. Water resource and fisheries managers need to make decisions based on the thermal regime of a stream [19], which may be controlled by factors other than stream flow or air temperature. In these cases, deterministic modeling may be necessary when longer-term datasets are unavailable, particularly where novel thermal processes have been identified. Future studies may want to explore statistical relationships between riverine cover and stream temperature to develop management tools that are less data-intensive than deterministic models. Author Contributions: Conceptualization, A.W. and E.H.; Methodology, A.W. and E.H.; Software, A.W. and E.H.; Validation, A.W. and E.H.; Formal Analysis, A.W. and E.H.; Investigation, A.W. and E.H.; Resources, A.W. 
and E.H.; Data Curation, A.W. and E.H.; Writing-Original Draft Preparation, A.W. and E.H.; Writing-Review & Editing, A.W.; Visualization, A.W. and E.H.; Supervision, A.W.; Project Administration, A.W. Funding: This research received no external funding. Acknowledgments: We'd like to thank the Nature Conservancy and Irene Busk for providing access to their properties and permission to use the study site; Devon Lambert for his considerable field assistance; Ian King for his willingness to answer questions about implementing RMA; Sarah Yarnell for donating her UAV equipment for the surveys; and Carson Jeffres for his willingness to let E.H. volunteer time and expertise for this study in addition to his valuable role in other projects. We'd also like to thank two anonymous reviewers for their thoughtful and detailed reviews, which considerably improved this manuscript. Conflicts of Interest: The authors declare no conflict of interest.

Appendix A

Figure A1. Plots of the modeled (black line) water temperature versus observed (red line) water temperatures for the biweekly canopy growth simulation.

Figure A2. Plots of the modeled (black line) water temperature versus observed (red line) water temperatures for the monthly canopy growth simulation.

References
1. Caissie, D. The thermal regime of rivers: A review. Freshw. Biol. 2006, 51, 1389–1406.
2. Steel, E.A.; Beechie, T.J.; Torgersen, C.E.; Fullerton, A.H. Envisioning, quantifying, and managing thermal regimes on river networks. BioScience 2017, 67, 506–522.
3. Webb, B.W.; Hannah, D.M.; Moore, R.D.; Brown, L.E.; Nobilis, F. Recent advances in stream and river temperature research. Hydrol. Process. 2008, 22, 902–918.
4. Davis, J.; Baxter, C.; Rosi-Marshall, E.; Pierce, J.; Crosby, B. Anticipating stream ecosystem responses to climate change: Toward predictions that incorporate effects via land–water linkages. Ecosystems 2013, 16, 909–922.
5. Woodward, G.; Perkins, D.M.; Brown, L.E. Climate change and freshwater ecosystems: Impacts across multiple levels of organization. Philos. Trans. R. Soc. Lond. B Biol. Sci. 2010, 365, 2093–2106.
6. Harper, M.P.; Peckarsky, B.L. Emergence cues of a mayfly in a high-altitude stream ecosystem: Potential response to climate change. Ecol. Appl. 2006, 16, 612–621.
7. Isaak, D.J.; Wenger, S.J.; Young, M.K. Big biology meets microclimatology: Defining thermal niches of ectotherms at landscape scales for conservation planning. Ecol. Appl. 2017, 27, 977–990.
8. Isaak, D.J.; Wenger, S.J.; Peterson, E.E.; Ver Hoef, J.M.; Nagel, D.E.; Luce, C.H.; Hostetler, S.W.; Dunham, J.B.; Roper, B.B.; Wollrab, S.P. The NorWEST summer stream temperature model and scenarios for the western U.S.: A crowd-sourced database and new geospatial tools foster a user community and predict broad climate warming of rivers and streams. Water Resour. Res. 2017, 53, 9181–9205.
9. Dugdale, S.J.; Hannah, D.M.; Malcolm, I.A. River temperature modelling: A review of process-based approaches and future directions. Earth Sci. Rev. 2017, 175, 87–113.
10. Hannah, D.M.; Garner, G. River water temperature in the United Kingdom: Changes over the 20th century and possible changes over the 21st century. Prog. Phys. Geogr. 2015, 39, 68–92.
11. Webb, B. Trends in stream and river temperature. Hydrol. Process. 1996, 10, 205–226.
12. Moore, R.D. Stream temperature patterns in British Columbia, Canada, based on routine spot measurements. Can. Water Resour. J. 2006, 31, 41–56.
13. Johnson, S.L. Factors influencing stream temperatures in small streams: Substrate effects and a shading experiment. Can. J. Fish. Aquat. Sci. 2004, 61, 913–923.
14. Garner, G.; Malcolm, I.A.; Sadler, J.P.; Hannah, D.M. The role of riparian vegetation density, channel orientation and water velocity in determining river temperature dynamics. J. Hydrol. 2017, 553, 471–485.
15. Moore, R.; Spittlehouse, D.; Story, A. Riparian microclimate and stream temperature response to forest harvesting: A review. JAWRA J. Am. Water Resour. Assoc. 2005, 41, 813–834.
16. Willis, A.D.; Nichols, A.L.; Holmes, E.J.; Jeffres, C.A.; Fowler, A.C.; Babcock, C.A.; Deas, M.L. Seasonal aquatic macrophytes reduce water temperatures via a riverine canopy in a spring-fed stream. Freshw. Sci. 2017, 36, 508–522.
17. Kalny, G.; Laaha, G.; Melcher, A.; Trimmel, H.; Weihs, P.; Rauch, H.P. The influence of riparian vegetation shading on water temperature during low flow conditions in a medium sized river. Knowl. Manag. Aquat.
Ecosyst. 2017, 418, 5. [CrossRef] 18. Rutherford, C.J.; Meleason, M.A.; Davies-Colley, R.J. Modelling stream shade: 2. Predicting the effects of canopy shape and changes over time. Ecol. Eng. 2018, 120, 487–496. [CrossRef] 19. Sinokrot, B.A.; Stefan, H.G. Stream temperature dynamics: Measurements and modeling. Water Resour. Res. 1993, 29, 2299–2312. [CrossRef] 20. Benyahya, L.; Caissie, D.; St-Hilaire, A.; Ouarda, T.B.; Bobée, B. A review of statistical water temperature models. Can. Water Resour. J. 2007, 32, 179–192. [CrossRef] 21. Schabenberger, O.; Gotway, C.A. Statistical Methods for Spatial Data Analysis; CRC Press: Boca Raton, FL, USA, 2017. 22. Trimmel, H.; Weihs, P.; Leidinger, D.; Formayer, H.; Kalny, G.; Melcher, A. Can riparian vegetation shade mitigate the expected rise in stream temperatures due to climate change during heat waves in a human-impacted pre-alpine river? Hydrol. Earth Syst. Sci. 2018, 22, 437–461. [CrossRef] 23. Loicq, P.; Moatar, F.; Jullian, Y.; Dugdale, S.J.; Hannah, D.M. Improving representation of riparian vegetation shading in a regional stream temperature model using lidar data. Sci. Total Environ. 2018, 624, 480–490. [CrossRef] [PubMed] 24. McGrath, E.; Neumann, N.; Nichol, C. A statistical model for managing water temperature in streams with anthropogenic influences. River Res. Appl. 2017, 33, 123–134. [CrossRef] 25. Wondzell, S.M.; Diabat, M.; Haggerty, R. What matters most: Are future stream temperatures more sensitive to changing air temperatures, discharge, or riparian vegetation? JAWRA J. Am. Water Resour. Assoc. 2018. [CrossRef] 26. Van Vliet, M.; Ludwig, F.; Zwolsman, J.; Weedon, G.; Kabat, P. Global river temperatures and sensitivity to atmospheric warming and changes in river flow. Water Resour. Res. 2011, 47. [CrossRef] 27. Arora, R.; Tockner, K.; Venohr, M. Changing river temperatures in northern Germany: Trends and drivers of change. Hydrol. Process. 2016, 30, 3084–3096. [CrossRef] 28. 
Ptak, M.; Choiński, A.; Kirviel, J. Long-term water temperature fluctuations in coastal rivers (southern Baltic) in Poland. Bull. Geogr. Phys. Geogr. Ser. 2016, 11, 35–42. [CrossRef] 29. Null, S.E.; Viers, J.H.; Deas, M.L.; Tanaka, S.K.; Mount, J.F. Stream temperature sensitivity to climate warming in California’s Sierra Nevada: Impacts to coldwater habitat. Clim. Chang. 2013, 116, 149–170. [CrossRef] 30. Moyle, P.B.; Lusardi, R.A.; Samuel, P.; Katz, J. State of the Salmonids: Status of Ccalifornia’s Emblematic Fishes 2017; University of California: Davis, CA, USA, 2017; p. 579. 31. Vasseur, D.A.; DeLong, J.P.; Gilbert, B.; Greig, H.S.; Harley, C.D.; McCann, K.S.; Savage, V.; Tunney, T.D.; O’Connor, M.I. Increased temperature variation poses a greater risk to species than climate warming. Proc. R. Soc. Lond. B Biol. Sci. 2014, 281, 20132612. [CrossRef] 32. Null, S.E.; Deas, M.L.; Lund, J.R. Flow and water temperature simulation for habitat restoration in the Shasta River, California. River Res. Appl. 2010, 26, 663–681. 
[CrossRef] http://dx.doi.org/10.1177/0309133314550669 http://dx.doi.org/10.1002/(SICI)1099-1085(199602)10:2<205::AID-HYP358>3.0.CO;2-1 http://dx.doi.org/10.4296/cwrj3101041 http://dx.doi.org/10.1139/f04-040 http://dx.doi.org/10.1016/j.jhydrol.2017.03.024 http://dx.doi.org/10.1111/j.1752-1688.2005.tb04465.x http://dx.doi.org/10.1086/693000 http://dx.doi.org/10.1051/kmae/2016037 http://dx.doi.org/10.1016/j.ecoleng.2018.07.008 http://dx.doi.org/10.1029/93WR00540 http://dx.doi.org/10.4296/cwrj3203179 http://dx.doi.org/10.5194/hess-22-437-2018 http://dx.doi.org/10.1016/j.scitotenv.2017.12.129 http://www.ncbi.nlm.nih.gov/pubmed/29268220 http://dx.doi.org/10.1002/rra.3057 http://dx.doi.org/10.1111/1752-1688.12707 http://dx.doi.org/10.1029/2010WR009198 http://dx.doi.org/10.1002/hyp.10849 http://dx.doi.org/10.1515/bgeo-2016-0013 http://dx.doi.org/10.1007/s10584-012-0459-8 http://dx.doi.org/10.1098/rspb.2013.2612 http://dx.doi.org/10.1002/rra.1288 Hydrology 2019, 6, 6 14 of 14 33. Willis, A.D.; Campbell, A.M.; Fowler, A.C.; Babcock, C.A.; Howard, J.K.; Deas, M.L.; Nichols, A.L. Instream flows: New tools to quantify water quality conditions for returning adult Chinook salmon. J. Water Resour. Plan. Manag. 2015, 04015056. [CrossRef] 34. Nichols, A.L.; Willis, A.D.; Jeffres, C.A.; Deas, M.L. Water temperature patterns below large groundwater springs: Management implications for coho salmon in the Shasta River, California. River Res. Appl. 2014, 30, 442–455. [CrossRef] 35. Lowney, C.L. Stream temperature variation in regulated rivers: Evidence for a spatial pattern in daily minimum and maximum magnitudes. Water Resour. Res. 2000, 36, 2947–2955. [CrossRef] 36. Verschoren, V.; Schoelynck, J.; Buis, K.; Visser, F.; Meire, P.; Temmerman, S. Mapping the spatio-temporal distribution of key vegetation cover properties in lowland river reaches, using digital photography. Environ. Monit. Assess. 2017, 189, 294. [CrossRef] [PubMed] 37. Clark, P.E.; Johnson, D.E.; Hardegree, S.P. 
A direct approach for quantifying stream shading. Rangel. Ecol. Manag. 2008, 61, 339–345. [CrossRef] 38. Kelley, C.E.; Krueger, W.C. Canopy cover and shade determinations in riparian zones 1. JAWRA J. Am. Water Resour. Assoc. 2005, 41, 37–046. [CrossRef] 39. Richter, A.; Kolmes, S.A. Maximum temperature limits for chinook, coho, and chum salmon, and steelhead trout in the Pacific Northwest. Rev. Fish. Sci. 2005, 13, 23–49. [CrossRef] 40. Rutherford, J.C.; Macaskill, J.B.; Williams, B.L. Natural water temperature variations in the lower Waikato River, New Zealand. N. Z. J. Mar. Freshw. Res. 1993, 27, 71–85. [CrossRef] 41. Clark, E.; Webb, B.; Ladle, M. Microthermal gradients and ecological implications in Dorset rivers. Hydrol. Process. 1999, 13, 423–438. [CrossRef] 42. Leung, L.R.; Qian, Y.; Bian, X.; Washington, W.M.; Han, J.; Roads, J.O. Mid-century ensemble regional climate change scenarios for the western United States. Clim. Chang. 2004, 62, 75–113. [CrossRef] 43. Fabris, L.; Malcolm, I.A.; Buddendorf, W.B.; Soulsby, C. Integrating process-based flow and temperature models to assess riparian forests and temperature amelioration in salmon streams. Hydrol. Process. 2018, 32, 776–791. [CrossRef] 44. Bartholow, J. Estimating cumulative effects of clearcutting on stream temperatures. Rivers 2000, 7, 284–297. 45. Bal, K.; Struyf, E.; Vereecken, H.; Viaene, P.; De Doncker, L.; de Deckere, E.; Mostaert, F.; Meire, P. How do macrophyte distribution patterns affect hydraulic resistances? Ecol. Eng. 2011, 37, 529–533. [CrossRef] 46. O’Hare, J.; O’Hare, M.; Gurnell, A.; Dunbar, M.; Scarlett, P.; Laize, C. Physical constraints on the distribution of macrophytes linked with flow and sediment dynamics in british rivers. River Res. Appl. 2011, 27, 671–683. [CrossRef] 47. Green, J.C. Velocity and turbulence distribution around lotic macrophytes. Aquat. Ecol. 2005, 39, 1–10. [CrossRef] 48. Lacey, R.J.; Millar, R.G. 
Reach scale hydraulic assessment of instream salmonid habitat restoration 1. JAWRA J. Am. Water Resour. Assoc. 2004, 40, 1631–1644. [CrossRef] 49. McMahon, T.E.; Hartman, G.F. Influence of cover complexity and current velocity on winter habitat use by juvenile coho salmon (oncorhynchus kisutch). Can. J. Fish. Aquat. Sci. 1989, 46, 1551–1557. [CrossRef] 50. Kurylyk, B.L.; Moore, R.D.; MacQuarrie, K.T. Scientific briefing: Quantifying streambed heat advection associated with groundwater–surface water interactions. Hydrol. Process. 2016, 30, 987–992. [CrossRef] 51. Caissie, D.; Luce, C.H. Quantifying streambed advection and conduction heat fluxes. Water Resour. Res. 2017, 53, 1595–1624. [CrossRef] © 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/). http://dx.doi.org/10.1061/(ASCE)WR.1943-5452.0000590 http://dx.doi.org/10.1002/rra.2655 http://dx.doi.org/10.1029/2000WR900142 http://dx.doi.org/10.1007/s10661-017-6004-5 http://www.ncbi.nlm.nih.gov/pubmed/28550516 http://dx.doi.org/10.2111/07-012.1 http://dx.doi.org/10.1111/j.1752-1688.2005.tb03715.x http://dx.doi.org/10.1080/10641260590885861 http://dx.doi.org/10.1080/00288330.1993.9516547 http://dx.doi.org/10.1002/(SICI)1099-1085(19990228)13:3<423::AID-HYP747>3.0.CO;2- http://dx.doi.org/10.1023/B:CLIM.0000013692.50640.55 http://dx.doi.org/10.1002/hyp.11454 http://dx.doi.org/10.1016/j.ecoleng.2010.12.018 http://dx.doi.org/10.1002/rra.1379 http://dx.doi.org/10.1007/s10452-004-1913-0 http://dx.doi.org/10.1111/j.1752-1688.2004.tb01611.x http://dx.doi.org/10.1139/f89-197 http://dx.doi.org/10.1002/hyp.10709 http://dx.doi.org/10.1002/2016WR019813 http://creativecommons.org/ http://creativecommons.org/licenses/by/4.0/. 
A Digital Photography Framework Enabling Affective Awareness in Home Communication

Olivier Liechti
Information Systems Laboratory, 1-4-1 Kagamiyama, Higashi-Hiroshima 739, Japan
olivier@isl.hiroshima-u.ac.jp

Tadao Ichikawa
Information Systems Laboratory, 1-4-1 Kagamiyama, Higashi-Hiroshima 739, Japan
ichikawa@isl.hiroshima-u.ac.jp

ABSTRACT
By transforming the personal computer into a communication appliance, the Internet has initiated the true home computing revolution. As a result, Computer Mediated Communication (CMC) technologies are increasingly used in domestic settings, and are changing the way people keep in touch with their relatives and friends. This article first looks at how CMC tools are currently used in the home, and points at some of their benefits and limitations. Most of these tools support explicit interpersonal communication, by providing a new medium for sustaining conversations. The need for tools supporting implicit interaction between users, in more natural and effortless ways, is then argued for. The idea of affective awareness is introduced as a general sense of being in touch with one's family and friends. Finally, the KAN-G framework, which enables affective awareness through the exchange of digital photographs, is described. Various components, which make the capture, distribution, observation and annotation of snapshots easy and effortless, are discussed.
Keywords: Affective awareness, ambient media, awareness, calm technology, Computer Mediated Communication, digital camera, home photography, implicit social interaction, social role of home photography, ubiquitous computing, World Wide Web

1 INTRODUCTION

The so-called "personal" computer made its appearance in the home more than two decades ago. It seems, however, that the true revolution was initiated only much more recently [Venkatesh 1996]. Among other factors, it is the addition of communication features and the wide popularization of the Internet that explain the profound social impact of home computing. Access to the Net has transformed a device that had no real domestic function into an appliance extensively used for information access, entertainment and interpersonal communication. As a result, the personal computer is becoming more meaningful for the household, used both by the entire family and for home-related tasks (as opposed to work-related tasks performed at home). In this light, it might very well have effects as dramatic as those of previous technologies, such as the television or the telephone.

Emerging communication technologies are deeply affecting the way people "socialize", in other words the way people meet, keep in touch and interact with each other. While this is true at work, it is also true at home. People are offered new ways to maintain relationships with their family and friends, new ways to establish contacts with local and global communities, etc. But while the social impact of Computer Mediated Communication (CMC) technologies is generally acknowledged, it is not clear whether that impact is mostly positive or negative. Some praise the Internet as a medium that removes economic, spatial and temporal constraints and thus allows people to extend their social networks. Others fear that mediocre computer-based communication may lead to social isolation and reduce well-being.
In any case, it is very important that engineers and computer scientists take into account the observations and recommendations of ethnographers and sociologists studying the impact of new technologies in the field. Adopting a multi-disciplinary approach should help make the best use of a unique and pervasive communication infrastructure. As a matter of fact, the Internet is a very flexible platform, on top of which it is possible to build a wide diversity of tools supporting social interaction. The challenge is to figure out what these tools could and should be, and how they could and should be used by the general population. It will take time, imagination and effort to address this challenge, and to truly leverage the potential of global and ubiquitous networking.

In this article, our goal is more modestly to look at existing CMC technologies used in the home, to point at some of their limitations, and to discuss a number of ideas that would overcome these limitations. While recognizing the merits of current technologies, we argue for the need to augment the functional spectrum they cover. We introduce the idea of affective awareness, and discuss the envisioned benefits of tools supporting lightweight, effortless and emotional interpersonal communication through shared experiences and non-verbal communication. We explain how such technologies require going beyond the traditional personal computer, and beyond traditional human-computer interaction (seen as a one-to-one, exclusive dialogue). An illustration of these ideas is then proposed with the design of KAN-G, an interpersonal communication framework based on the exchange of digital photographs. The KAN-G framework consists of a variety of hardware and software components, which support the capture, distribution, observation and annotation of photographs.
The foremost motivation for designing the framework has been an observation of the social role played by "traditional" home photography, and a reflection on how emerging digital photography technologies may affect this role. Family snapshots are seen as very powerful artifacts that create and strengthen affective and emotional links between people. Digital imaging technologies have the potential to make it easier to create, share and observe photographs, and thus to amplify the interpersonal communication function of home photography. We believe, however, that achieving this goal requires the careful design of "calm" systems [Weiser and Brown July 1996] that are natural to use, that are merged with architectural spaces [Junestrand and Tollmar 1998, Rijken 1999, Wisneski, et al. 1998], and that privilege aesthetic and artistic quality [Padula and Amanda 1999].

The remaining sections of this article are organized as follows. In Section 2, the current use of Computer Mediated Communication (CMC) technologies in domestic environments is discussed. The functions and interaction modes supported by existing tools are described, and some of their problems and limitations are pointed out. A number of field studies on the social impact of home computing are then mentioned, and important implications for the design of communication technologies are highlighted. The idea of affective awareness is finally introduced and placed in the context of related work. The discussion stresses that tools enabling affective awareness should not replace, but augment, traditional technologies such as electronic mail. Section 3 illustrates the previously introduced ideas with the KAN-G framework, designed to support affective awareness through electronically mediated home photography. The social role of "traditional" home photography is discussed first, with references to visual anthropology.
It is then explained how a number of emerging digital imaging technologies are likely to affect and amplify this role. The components of the framework are reviewed, and it is explained how they support the capture, distribution, viewing and annotation of digital photographs. It should be noted that the implementation of the KAN-G framework is still in progress. Prototypes for the components described here have been implemented, but their integration has not been fully completed yet. The main objective of this article is thus not to report on the implementation, but rather on the design of the framework. More importantly, it is to draw attention to emerging domestic applications for digital photography and to foster research in this area.

2 COMPUTER MEDIATED COMMUNICATION IN THE HOME

Computer Mediated Communication covers a broad spectrum of technologies, from electronic mail to the WWW, from real-time video-conferencing to MUDs and MOOs. Such technologies support very different functions, and enable very different interaction modes. Most of them were developed to support work-related processes, but are now also used to support tasks within households. But because the requirements of work and domestic environments are very different, it is not clear whether current CMC tools really fit the needs of home users. On the contrary, it is likely that CMC performed in the home will raise a need for tools that do not have any equivalent in work environments. This is particularly the case for technologies that put an emphasis on entertainment and fun. In this section, we first review some of the most popular CMC technologies used in the home and point at some of their limitations. We then look at a number of field studies on the social impact of home computing and highlight important implications for technology designers and computer scientists.
We finally introduce the idea of affective awareness and review some related work in the Computer Supported Cooperative Work (CSCW) and Human-Computer Interaction (HCI) literature.

2.1 Current use of CMC technologies

The applications and services that have been developed on top of the Internet infrastructure can be categorized in different ways. The first way is to distinguish between interpersonal and mass communication tools. Interpersonal communication tools support the direct exchange of information between two (or a small number of) participants. They are thus electronic surrogates for face-to-face, paper mail or telephone communication. Mass communication tools allow the dissemination of information from one publisher to a possibly large number of consumers. In that sense, they are comparable to traditional media such as the television, the radio and the press. A fundamental difference with Internet technologies, however, is that the barriers to entry are extremely low. Therefore, and for the first time, virtually everyone has access to mass media as an information publisher.

The distinction between interpersonal and mass communication technologies is in fact quite loose. The WWW, for example, is generally seen as a mass communication medium. But when a site is designed to be visited by a small number of people (e.g. a Web site published by a family, mainly visited by the members and friends of that family), its main role is to support interpersonal communication. Conversely, when electronic mail (primarily seen as an interpersonal technology) is used in conjunction with large distribution lists, it becomes similar to a mass communication tool.

2.1.1 Electronic mail

At home as at work, electronic mail is certainly the most common CMC technology. In domestic environments, it is essentially used to sustain conversations (with family, friends and people met on-line) and interactions with commercial and administrative entities (e.g.
with customer support services or the tax office). Several factors can explain the wide popularity of email. First, it is relatively easy to use and does not require a long learning curve. Everybody in the family thus has access to the medium, without any particular requirement of technical expertise or time. Second, email is already ubiquitous, as almost everybody now has an email address. The technology can thus be used to interact with a very large number of people. Third, email is an interesting alternative to traditional communication tools (e.g. paper mail, telephone), which among other benefits offers low cost and immediate delivery.

One particularly interesting application for email is exemplified by the "kids-at-college" scenario, where parents can more easily keep in touch with their children when they leave home. Writing an email requires much less effort than writing a letter, buying a stamp and finding a mailbox. Writing an email is also cheaper than making a long-distance phone call. These factors and others contribute to increasing the level of communication between the members of a family who do not live together. This is seen as a positive social impact of the technology, as it contributes to strengthening social relationships.

This, however, does not mean that electronic mail is the ultimate interpersonal communication tool and that no alternative or extension should be sought. For instance, one problem is that keeping up with electronic mail is often very tedious and requires a lot of time. As a consequence, people often reduce either the frequency (e.g. from daily to weekly) or the quality (e.g. limiting messages to weather reports) of their messages. But these apparently insignificant messages are far from useless. In many cases, what is important is not so much the content of the message, but rather the simple fact that the message has been sent.
The message, by its mere transmission, connects the sender to the receiver. Sending an email is in some cases very similar to waving or smiling at someone. In other words, the real function of email is sometimes not to support the exchange of explicit messages (i.e. conversations), but rather to convey a general sense of being connected to each other (what we will define later as affective awareness). But if this is the case, then other tools should be developed to support this function more appropriately. The KAN-G framework, described in Section 3, was developed precisely with that idea in mind: instead of exchanging mediocre textual messages, people could maintain mutual affective awareness by exchanging i) photographs and ii) impressions of these photographs. This would present the advantage of being at the same time less constraining (taking a snapshot takes less effort than writing a few paragraphs) and emotionally richer (looking at someone's smile conveys a stronger impression than reading a weather report). Electronic mail should of course not be abandoned, as there will always be a need to exchange factual information. But electronic mail should be augmented and used in conjunction with other tools that focus on lightweight and informal communication.

As a matter of fact, there are already examples of technologies which have roots in electronic mail, but which are particularly targeted at non-professional users. One example is the PostPet mail software (http://www.sony.com.sg/postpet/), tremendously popular in Japan. Although it can be used as a standard POP3 client, PostPet has a unique user interface and functions inspired by the famous "Tamagochi" artificial creatures. The main window represents a room, in which lives a small animal (e.g. a pink bear). When the user sends an email to a friend, it is the cute creature who gets the envelope, travels across cyberspace and enters the receiver's living room. At this point, the two creatures play with each other for a while, for the great enjoyment of the user. This application illustrates that an important aspect of CMC, when it is performed at home, is entertainment and fun.

Another example is the electronic greeting card service offered by several Internet companies (for example, http://www.bluemountain.com). Users can choose among various categories (birthday, Mother's Day, friendship, etc.) and send annotated drawings to their friends, who can view them with a standard Web browser. Sending an electronic postcard is very similar to sending a paper postcard. The main function is not the communication of factual information, but the expression that the sender is thinking of the receiver. One limitation of electronic postcards is that they require the receiver to use a PC to look at them. This requires too much time and is too difficult for some people (e.g. older people who have never used and will never use a computer). A better idea would be to have special displays (maybe on the fridge, or on a screen phone), where the postcards would automatically be shown after delivery. This would make the whole system much easier and more natural to use.

2.1.2 The World Wide Web

The World Wide Web is another very popular application with home users, and it can be used in two different ways. First, households can act as information consumers. This is the case when kids search for information for their homework, when parents do on-line shopping, and when the whole family surfs the Web for entertainment purposes. The Web has often been described as a potential substitute for television. One difference, however, is that watching television can be a group activity (several people can easily watch the same TV screen, share an experience and exchange comments), whereas surfing the Web is more often an individual activity. The "Let's browse" collaborative agent [Lieberman, et al.
1999] has been an attempt to make Web browsing a collaborative and co-located activity.

Second, households can use the WWW as information producers. Increasingly, home users can design and maintain their own Web site. This has become both easy and cheap, with the availability of GUI design tools and the possibility to get free storage space at various Internet portal sites. Very often, people take this as an opportunity to introduce their family to the world, to give information about their hobbies, and almost always to share a few photographs. Sometimes, the family Web site is also used to support interaction around a specific event. For example, Web sites are often created to announce and remember a wedding celebration. Before the wedding, useful information is offered to the guests (maps, accommodation, schedule, etc.). After the wedding, pictures, short comments and stories are added to the site.

When a Web site is created by a family, the popular term "home page" seems particularly appropriate. The Web site appears to be a digital extension of the physical dwelling. As such, it could become a social place where the family could interact with visitors. This is, however, still rarely the case, for several reasons. The first reason is that it is difficult to grasp the activity occurring within a Web site, for example to track visitors in real time. The publishers of a Web site therefore do not know when visitors are arriving, and what they are doing on the site. Sometimes they do not even know whether visitors are coming at all. The second reason is that the physical and electronic worlds are still very much separated from each other. As a consequence, when people are engaged in activities in the physical world (cooking, reading books, etc.), they are not aware of what is happening in the digital world. The visitors of the digital home are therefore not perceptible to the inhabitants of the physical home, which prevents much interaction between them.
We have discussed these problems in a previous article [Liechti, et al. 1999], and introduced the metaphor of a window bridging the physical and computational worlds. When this window is open, users hear different sounds when different events are detected on the site (e.g. they hear bird chirps or dog barking when particular pages are accessed by particular people). A related approach has been proposed in [Benford, et al. 1996], with the idea of a virtual foyer (on the WWW) connected to the real foyer of an organization. Finally, it should be noted that some families have already attempted to create a bridge between their real and virtual homes, by putting a Webcam in their living room (i.e. a camera which periodically takes pictures that are then visible on a Web site). This, however, only creates a link from the real world towards the virtual world, and not in the opposite direction. While the use of Webcams is interesting, it raises obvious privacy issues.

2.1.3 On-line communities

Another popular CMC application is the support for online communities. Newsgroups, multi-user virtual worlds (e.g. MUDs and MOOs), and more recently WWW portal sites are some of the technologies that are used for this purpose. Although some are, many online communities are not work-oriented. They can be interest-focused (e.g. pregnant women or guitar players), but can also be bound to geographical communities (e.g. foreigners living in Tokyo or residents of the San Francisco Bay Area). The first category offers a good example of how the Internet offers new possibilities to meet people. Such communities remove not only spatial and temporal barriers, but also psychological barriers. For example, shy people find it easier to initiate interaction when they are on-line and protected by some anonymity. Of course, one might argue that this kind of interaction is artificial, shallow and not as gratifying as "real" interactions.
The image of the asocial teenager, spending all his time in front of his computer, easily comes to mind. But there are cases where people really benefit from on-line interaction, as reported in [Rheingold 1993]. On-line communities sometimes allow people to share their problems with others and to get some comfort. This interesting aspect has been discussed in [Preece 1998], with the description of empathic communities. In these, participants are primarily interested in sharing feelings and supporting each other, as opposed to exchanging factual information. According to the author, current CMC tools are not very well suited to support empathic communication. The usual interaction mechanism is often limited to text, which lacks many of the channels used to convey feelings and compassion in face-to-face communication. Accordingly, there is a need for designing better user interfaces, with the potential to improve the quality of life for thousands of people. An interesting aspect of the geographically-bound communities is that they make it easier to augment on-line interaction with face-to-face interaction at some later point. For example, after having met interesting people in a discussion forum, it is possible (and cost-effective) to meet them "in real life". This is not as easy when people from Europe and Japan meet in a subject-focused community.

2.1.4 Internet presence and instant messaging

Finally, two related technologies that have recently received considerable attention in the press are Internet Presence (IP) and Instant Messaging (IM). Although they have existed for a very long time (on time-sharing and UNIX systems), these mechanisms are now used by millions of people on personal computers connected to the Internet. Internet Presence makes it possible to know when a group of people (often called the buddy list) are connected to the Internet, and to have an idea of their status (e.g. busy, available, etc.).
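As a minimal illustration of this presence mechanism, the sketch below models a buddy list whose contacts expose a coarse status that others can watch. The names (`Buddy`, `BuddyList`) and status vocabulary are our own illustrative assumptions, not taken from any particular IM product:

```python
from dataclasses import dataclass

# Hypothetical sketch of an Internet Presence "buddy list": each contact
# exposes a coarse status (offline, online, available, busy, away) that
# the list owner can query at any time.

@dataclass
class Buddy:
    name: str
    status: str = "offline"

class BuddyList:
    def __init__(self):
        self._buddies = {}

    def add(self, name):
        self._buddies[name] = Buddy(name)

    def set_status(self, name, status):
        self._buddies[name].status = status

    def who_is(self, status):
        # Which contacts currently show the given status?
        return sorted(b.name for b in self._buddies.values()
                      if b.status == status)

buddies = BuddyList()
for name in ("alice", "bob", "chris"):
    buddies.add(name)
buddies.set_status("alice", "available")
buddies.set_status("bob", "busy")
print(buddies.who_is("available"))  # ['alice']
```

Richer status fields (mood, current activity) of the kind discussed below could be added to the `Buddy` record without changing the query mechanism.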
Instant Messaging then allows users to send short messages to these people. Instant messages usually appear immediately on the receiver's display (unlike email, the messages are not stored in a mailbox). An interesting aspect of IM systems is that they support some level of awareness between users. A window dynamically reflects the status of the user's contacts, which conveys a sense of approachability. To some extent, it gives the impression that one's "buddies" are right there behind the screen, almost reachable. This is particularly interesting for home users. The members of a family who do not live together can "feel" closer to each other, on a continuous basis. There seems however to be room for much further development of IP and IM. For instance, status information is currently very limited (e.g. offline, online, available, busy, away). If IM is to be used to create an artificial sense of "living together", then additional information could be beneficial (e.g. mood, an idea of the current activity, etc.).

2.2 Field studies and their implications

The social implications of home computing have interested a number of scholars for many years. Behaviors and usage patterns have been examined in field studies, and important observations have been made. The implications for industry, policy makers and technology designers are important. Some of these studies are now reviewed, and observations important for the design of digital communication technologies are highlighted.

In 1985, Vitalari, et al. [Vitalari, et al. 1985] examined the impact of home computers on time allocation patterns, by analyzing the behavior of some 282 users. They observed that the introduction of the PC in the home increased the time spent alone and decreased the time spent with family and friends. It was however not clear whether this was a short-term or a long-term impact; in the latter case, it might be hypothesized that computers can lead to social isolation.
The authors also observed a shift from pleasure-oriented to task-oriented activities. But it should be noted that at the time of the study, there were still very few applications targeted at home users, and the home computer was merely a work-oriented tool used at home. A very interesting point raised by the authors is that the traditional human-computer interaction paradigm plays a key role in the time reallocation process: "The time allocation phenomenon in computing is especially poignant with respect to personal computers, where personal, in operational terms, means a one-to-one real-time interaction with the machine. [...] The popular example of the personal computer "widow" exemplifies this dilemma."

The problem we see here is that interacting with a personal computer generally requires the user to be isolated from the rest of the household, both physically and mentally. The first reason is that the computer is often in a room used by individuals (an office, a bedroom) and not in a room used by groups (the kitchen, the living room). In other words, the PC user is often in a room where others do not spend much time, and is thus often alone. The situation would be quite different if the computer were in a public space, for example the living room. The user would then share the physical environment with the other family members and be less isolated. There are several reasons why PCs are not placed in public spaces more often, including factors such as poor aesthetic quality, unavailable space and noise disturbance.

Observation 1. Placing home computing technologies in public spaces, such as the kitchen or the living room, could reduce the risk of social isolation for users. This requires a careful design of the devices, which have to integrate nicely into the architectural space.

The second reason is that when using a computer, it is difficult to do anything else at the same time.
The hands, the eyes, the whole body and the entire attention of the user are generally mobilized by the interaction. A good illustration of this problem is that many people feel very tired after using a computer for several hours. For some tasks (e.g. writing a report with word processing software), it is quite difficult to overcome this problem. But in some cases, it seems that the interaction between the user and the computer could be achieved more naturally, not necessarily requiring the use of a keyboard and a mouse. The distribution of electronic postcards, mentioned earlier, provides a good example of this issue. It seems that having the postcards automatically shown on displays scattered within the home (on walls, desks, phones, etc.) is an advantageous alternative to the current situation. In this case, people would not have to dedicate a significant amount of time to the task of consulting incoming mail (going to the PC room, switching the machine on, launching the browser, fetching the URL, shutting everything down, coming back), but would rather use the technology continuously and seamlessly, without really noticing it.

Observation 2. Human-computer interaction mechanisms that do not monopolize the mental and physical attention of users could reduce the risk of social isolation. This requires the design of multi-modal user interfaces, where interaction is supported through voice, gesture and other means.

In another study, published in 1996, Venkatesh compared the situation of home computing in the early 1980s and the 1990s. He explained how a number of factors had delayed the "home computer revolution" announced with the first appearance of computers in the home. The addition of communication features and the availability of software supporting household tasks (entertainment, education, financial management) are two factors that explain the greater acceptance and significance of home computers in the 1990s.
Another critical factor is the understanding that technological innovation can happen in the home, and does not have to happen at work and then be transferred into the home:

Observation 3. Computing technologies do not have to be transferred from work to domestic environments. Instead, they should be developed with a consideration of the particular needs of home users.

Another observation made by the author was that to gain acceptance within the home, computers had either i) to compete with existing technologies (e.g. the phone) and to support existing tasks more efficiently, or ii) to enable new activities that were not possible without computers. For the designers of digital communication technologies, it means that radically different ways to support interaction between people should be sought:

Observation 4. Digital communication technologies should not only mimic traditional communication media. Instead, they should enable functions that were not possible without a pervasive communication infrastructure.

Another study with important implications is the HomeNet field trial. A research group at Carnegie Mellon University observed the on-line behavior of more than 70 households during their first years of Internet use. The first question, addressed in [Kraut, et al. 1998], was to find out what home users really want to use the Internet for. Is it for information acquisition and entertainment, or is it for interpersonal communication? To answer this question, the respective use of a Web browser (used for access to information and entertainment) and of an electronic mail client (used for interpersonal communication) were compared over time. The collected data suggest that users generally prefer email over the Web. In other words, the use of the Internet at home seems to be motivated first by interpersonal communication, and not by information access.
In the opinion of the authors, however, there has been a tendency so far to focus primarily on information access and entertainment services: "The commercial development of Internet services and public policy initiatives to date have probably over-emphasized information and entertainment and under-emphasized interpersonal communication." The implication for computer scientists is that an effort should be made to improve existing interpersonal communication applications. It also means that it is worth investigating new forms of interpersonal CMC, as there is a strong demand for them:

Observation 5. Interpersonal communication is what people really want to use the Internet for. Therefore, a special effort should be made to improve existing interpersonal communication technologies, and to propose new ones.

Another question examined in the HomeNet study is also the title of an article [Kraut, et al. 1998]: "Internet paradox: a social technology that reduces social involvement and psychological well-being?". Because it is not clear whether the social impact of the Internet is mostly positive or negative, the authors collected data to identify a possible correlation between Internet use, social involvement and psychological well-being. Social involvement was measured by family communication, size of the local social network, size of the distant social network and social support. Psychological well-being was measured in terms of loneliness, stress and depression. The findings of the research indicate that greater use of the Internet was associated with declines in social involvement, and with increases in loneliness and depression. In order to explain these results, the authors made two hypotheses. First, the negative impact could be due to the fact that Internet use requires abandoning other activities, such as reading books or talking with family members.
This is the same problem as mentioned before, in the analysis by Vitalari et al. of the impact of home computing. Second, the authors argue that current CMC tools do not allow social interactions that are as good as face-to-face or telephone interactions. They thus suggest that a considerable effort should be made to develop better interpersonal communication technologies on the Internet.

2.3 Affective awareness

A particularly interesting application of CMC in home settings is to offer efficient ways for people to keep in touch with their families and friends. The notion of "keeping in touch" is very general and has very different aspects. It seems that current technologies primarily support one of these aspects, which is the idea of conversation. In other words, they support the explicit exchange of factual information. This means that to keep in touch with someone, it is necessary i) to explicitly initiate an interaction, ii) to encode a message (usually by typing text), and finally iii) to transmit this message. The pace at which messages are sent back and forth between the participants depends very much on the technology. With email they might be exchanged once a day. With synchronous text-based communication (e.g. IRC, MUDs, chat rooms), they might be exchanged every few seconds. In any case, this kind of interaction always requires some effort from the participants, at least to establish the communication. This effort, however small it may be, represents a barrier to communication and potentially reduces the amount of social interaction.

When people share the same house, the situation is very different. For example, consider what happens in the kitchen, when a family is having breakfast. Even when they are not talking, people have many ways to interact with each other. They can easily guess each other's emotional condition and have an idea of each other's daily schedules.
Body language, dress code and facial expressions are some of the many natural channels that convey this information. In this situation, people are interacting tacitly: they do not have to explicitly exchange messages (i.e. talk) to feel "in touch" with each other. The important thing is that they do not have to make any effort to interact with each other: sharing the same physical space is sufficient.

A recent survey [Philips 1998] was conducted to get an understanding of communication patterns between middle-school-age children and their parents. The results showed that 58% of the parents and 73% of the children said that they spend less than an hour a day in conversation. Moreover, 27% of the parents and 46% of the children said that they spend less than 30 minutes a day in conversation. This was interpreted negatively by the authors of the study, who then proposed different ways to increase the level of communication between parents and kids. But a more encouraging way to look at these results is to point out that conversation is not the only mechanism through which parents and kids build affective relationships. Therefore, digitally mediated social interaction should not only be supported through conversation, but also by other means.

What we call affective awareness is a general sense of being close to one's family and friends. It seems that affective awareness is best achieved when people are engaged in shared experiences, especially when these experiences affect their emotional state. Watching a funny TV show, observing paintings at an art gallery, enjoying a good meal and playing games are all examples of activities that, when performed together, make people feel close to each other. It seems that CMC technologies that recreate this kind of interaction would be very beneficial to users. First, because they would not require much effort, they would be used frequently and would increase social interaction within distributed families.
Second, because they would not require explicit interaction between a user and a computer, they would avoid many of the problems mentioned in the previous section (in particular the risk of social isolation caused by extended periods of computer use). As a matter of fact, similar issues have already been studied in work environments. Distributed working groups have very similar problems, in particular the difficulty of maintaining a context for communication. The notion of awareness has been extensively discussed in the Computer Supported Cooperative Work (CSCW) literature [Dourish and Bellotti 1992, Dourish and Bly 1992, Gutwin and Greenberg 1998, Isaacs, et al. 1996, Tollmar, et al. 1996]. More specifically, peripheral awareness has been identified as the awareness that people can maintain about each other, seamlessly and without effort, when they are close to each other. A number of systems have been developed to artificially recreate peripheral awareness within distributed groups. Media spaces [Bly, et al. 1993, Pedersen and Sokoler 1997], ambient user interfaces [Wisneski, et al. 1998] and calm technologies [Weiser and Brown 1996] are all related to this idea and have been an inspiration for our work. An important aspect of these approaches is that human-computer interaction is not achieved through a conventional keyboard-display installation. Instead, the computing environment is more closely integrated with the architectural space, and surrounds the users. Sounds and images, but also temperature and tangible artifacts [Ishii and Ullmer 1997], are used to create an interface between the physical and digital environments. Once again, this seems particularly valuable for addressing the issues raised in the previous section, as it does not require users to be cut off from the social environment when they are using the CMC technology.
It is interesting to note that because media space technologies rely on high-speed and continuous network connections, they have long been confined to office settings. But continuous high-speed Internet connections are becoming a reality for home users, which will make it possible to develop similar systems for domestic use.

3 AFFECTIVE AWARENESS THROUGH DIGITAL HOME PHOTOGRAPHY

The previous section has argued for the need to augment existing CMC technologies with tools that support affective awareness. The benefits of applications that support implicit, rather than explicit, social interaction have been suggested. Engaging remote participants in shared experiences that have an emotional effect on them seems to be a promising way to achieve this goal. In this section, it is explained how home photography provides a context for enabling affective awareness in digital environments. The social role of traditional home photography is discussed first, with references to visual anthropology. A description is then made of the KAN-G framework, which integrates hardware and software components supporting the capture, distribution, observation and annotation of photographs. The framework supports affective awareness between two categories of users: photographers and watchers. An affective link is created between them, in two directions. First, watchers are connected to photographers when they receive their snapshots. At this point, the reaction of the watchers is captured and notified back to the photographers, who in turn feel connected to them. This two-way process is best achieved when the components that display the photographers' snapshots and the watchers' feedback are integrated in the living environment, and do not require the users to interact with personal computers. The description of the KAN-G framework is illustrated with a number of emerging digital imaging technologies, such as digital cameras, on-line services and innovative displays.
3.1 Social role of home photography

Photography can be considered a visual medium, an art form and a scientific instrument. Furthermore, photography supports a range of social functions. For example, photographs allow people to keep a memory of history, and they allow them to discover other cultures. The study of these functions has been of foremost interest for many scholars, particularly for visual anthropologists [Ruby 1981]. Home photography is a particular type of photography, concerned with the creation and utilization of pictorial artifacts in the context of the home. In a broad definition, it includes snapshots, family albums, slides, home movies and home videotapes [Chalfen 1987]. Increasingly, it also includes digital artifacts and supporting technologies, such as digital cameras and on-line photo albums.

Richard Chalfen is a visual anthropologist who has been particularly interested in studying the functions of home photography. His ethnographic studies are useful to get a better understanding of how and why people take, display and share pictures. They also have important implications for the design of digital imaging technologies, particularly when these are used to support social interaction. Looking at Japanese home media as a popular culture [Chalfen 1997], Chalfen describes several uses of photographs and highlights differences between Japanese and occidental cultures. Questions that have interested him were, in his own words: "When and where, under what conditions have Japanese people made photographs and, in turn, looked at them, displayed them or otherwise showed them to other people? In turn, how do Japanese people understand and value their own photograph collections?" In his discussion, Chalfen describes five examples of photograph use. The first example is referred to as shared photographs, and highlights the fact that people often like to share pictures with their relatives, friends and colleagues.
The second example is called household photography, and relates to the display of photographs within the home (on walls, furniture, etc.). An interesting observation is that there are generally few pictures displayed in Japanese homes, unlike in occidental homes. This illustrates the fact that the functions of photography depend very much on cultural factors. The third example is work photography, which is about snapshots both i) taken and ii) displayed at work. Again, there are significant differences between Japan and occidental countries: Japanese tend to take more pictures at work, but at the same time tend to display snapshots at work less often (one reason is that there is a greater separation between private and public life). The fourth example is wallet photography, which indicates a common practice of always carrying a few snapshots (usually family snapshots). Finally, the fifth example is tourist photography.

The observations made by Chalfen are interesting for several reasons. First, they clearly show that photographs are social artifacts that can trigger affective processes. This is why people display photographs at home and at work, and this is why people carry photographs in their wallets. Looking at someone's photographic portrait naturally leads one to think about this person. It often recalls memories and emotionally affects the viewer, provoking for example happiness, sadness or anger, depending on the situation. Second, they show that sharing photographs is a social process that connects people to each other. Particularly in Japan, there is a social obligation to regularly exchange snapshots with relatives, friends and coworkers. After a party or a trip, for example, people often exchange reprints in which they appear together. In this case, snapshots have a function of "social currency" and serve to strengthen relationships and to reinforce group cohesion.
There are many other situations that illustrate the social function of snapshot exchange. Think for example of grandparents receiving pictures of their grandchildren. In this situation, they are likely to feel in touch with them for two reasons. First, looking at the snapshots naturally leads them to think about their children and grandchildren. Second, the fact that the photographs were sent tells them that their children are thinking about them, that they are not forgetting them. This is very comforting, and is an important positive outcome of the exchange. As a matter of fact, this observation has been a foremost motivation for the design of the KAN-G framework. A related phenomenon that should be mentioned is the tremendous success of print clubs (pronounced "purikura") in Japan. Located on every street corner, these machines are used by high school students to take photographs of their smiling faces, later decorated with cute drawings and printed on small stickers. These stickers are then exchanged among friends and organized in large collections. What is interesting about print clubs is that they are computing devices that stress the social function of photographs.

In another study [Chalfen 1998], Chalfen looked at the differences between still and video images. More precisely, he examined first how effective each of them is at recalling memories. Second, he looked at how each of them is qualitatively perceived by home users. These questions were answered on the basis of interviews conducted with 30 teenagers. The participants were asked to evaluate the relative merits of using still and video imaging media for home photography. The first observation was that video could recreate the past with greater fidelity. The amount of information is much larger (there is also sound), and it is thus better suited to reproduce an ambiance. In other words, watching a video gives a sense of "being there".
Participants in the survey generally agreed that video brought back more memories than still pictures, and that it made it possible to relive experiences of the past. But more interestingly, the study also suggested that this did not mean that videotapes were preferred over snapshots and photo albums; on the contrary. Clearly, there was an idea that in many cases, less information is better. People explained that they preferred still pictures, because they had to use their imagination to remember the people and the events captured in them. When watching a video, the past is recreated, but the viewer is passive and does not think about anything that is not in the video. On the contrary, observing a still picture requires an effort of imagination, and triggers a process that is not bounded by what is represented in the picture. This idea was mentioned several times, for example in the following interview excerpts: "When I look at a [still] picture of somebody, let's say someone I haven't seen in a long time or something, I think about everything I did with that person and everything that person said instead of that one movement which on video is usually quite unimportant in a home video." "Like still pictures, like they're like, you can look at them and you can like gather what you want. But videotape, like, you can't like, you can't really imagine anything about the situation."

Another reason why people prefer still pictures over videotapes has to do with the social interactions they generate when they are watched by a group of people. In the case of video, people are usually watching a TV screen in silence, and the level of communication between them is quite low. On the contrary, when a group of people watch still pictures and photo albums, they very often use them as a context to tell stories and exchange comments. In other words, it seems that still pictures are better suited to generate interaction between people.
This has interesting implications for the design of visual CMC technologies. At first, it could seem that video-conferencing is the ultimate interpersonal communication tool. Indeed, it provides rich interaction channels, and almost recreates the conditions of a face-to-face meeting. But maybe, as is the case with memory and still vs. video images, there are situations where "less is better". When people join a video-conference, they have to communicate, they have to talk to each other. This means that when they do not have anything particular to say, or when they have to do something that does not allow them to talk at the same time, the communication is interrupted. On the other hand, a CMC tool based purely on the exchange of still photographs can generate interesting intellectual and affective processes. The person who receives a snapshot remembers and imagines people and events, and is emotionally affected by that. Another advantage of this approach is that it does not raise the privacy issues of video-based awareness systems. The point we are trying to make is that it is not necessary to have an explicit, synchronous communication medium to create affective links between people. Very often, the only thing that is needed is to trigger a mental process with an artifact (e.g. a photograph) or a tangible event (e.g. a sound indicating that a person has smiled when looking at one of your photographs). This idea provides a foundation for the KAN-G framework, which we now introduce.

3.2 The KAN-G framework

KAN-G is best defined as an interpersonal communication framework supporting affective awareness through home photography. It has been named after the Japanese word kanji, meaning impression or feeling. This suggests that the goal of the framework is not to support the exchange of explicit messages, but rather to convey a general sense of being "in touch" among users.
This goal is achieved through the exchange of i) photographs and ii) reactions of watchers to these photographs. The capture, distribution and observation of snapshots are performed with a number of digital imaging devices, distributed across the Internet. The interaction between users of the system is based on the following observations:

• "Receiving and watching a photograph from a person connects me to that person."

• "Knowing that a person is watching my photographs, and knowing what this person thinks of them, connects me to that person."

Accordingly, as illustrated in Figure 1, there are two categories of KAN-G users: photographers and watchers. Photographers use digital cameras to take snapshots that they distribute to watchers (usually their relatives and friends). Watchers observe the snapshots; their reaction is captured and notified back to the photographers. KAN-G supports the distribution of photographs via publication channels (i.e. buffers of limited size, to which pictures are successively pushed). Photographers can define several channels (e.g. "Family", "Funny pics", "Food"), and can decide to distribute a particular photograph to one or several of these channels. Symmetrically, watchers can subscribe to the channels they are interested in. One motivation for using channels, as opposed to large photo collections that grow continuously, is to make the system more dynamic. Because channels are limited buffers, it is likely that changes will be noticed more easily. This should make the system more interesting for watchers, and reinforce the impression that there is activity in the system. This is important, because the activity is interpreted as the manifestation of social interactions.

Figure 1. Affective awareness in KAN-G.

The architecture that supports these functions is depicted in Figure 2.
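The channel mechanism described above, a bounded buffer with subscribed watchers, can be sketched as follows. This is our own minimal interpretation of the idea, not actual KAN-G code; names such as `Channel` and `publish` are illustrative:

```python
from collections import deque

# Minimal sketch of a KAN-G-style publication channel: a bounded buffer
# to which a photographer pushes snapshots, and to which watchers
# subscribe. When the buffer is full, the oldest snapshot falls off,
# which keeps the channel dynamic.

class Channel:
    def __init__(self, name, capacity=5):
        self.name = name
        self._photos = deque(maxlen=capacity)  # oldest entries are evicted
        self._watchers = set()

    def subscribe(self, watcher):
        self._watchers.add(watcher)

    def publish(self, photo):
        self._photos.append(photo)
        return sorted(self._watchers)  # watchers who should be notified

    def snapshots(self):
        return list(self._photos)

family = Channel("Family", capacity=3)
family.subscribe("grandma")
for p in ("p1", "p2", "p3", "p4"):
    family.publish(p)
print(family.snapshots())  # ['p2', 'p3', 'p4'] -- 'p1' was pushed out
```

The bounded `deque` directly captures the motivation stated above: because old pictures disappear, any change in the buffer is easy for a watcher to notice.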
First, photographers take pictures with a KAN-G digital camera, and possibly add comments to them (title, short description, etc.). Some contextual information (e.g. time, and location if a GPS sensor is used) is automatically added by the camera. Photographers then use the digital camera to initiate the publication process (i.e. to specify which pictures should be published to which KAN-G channels). When this is done, the pictures have to be transferred to a KAN-G server, which maintains the state of the publication channels. If the camera is equipped with wireless Internet access, this step can be performed directly. If not, the user has to go through a KAN-G kiosk. This means that the storage media (e.g. Picture Card or SmartMedia) has to be removed from the camera and inserted in a device connected to the Internet (e.g. a PC with a PCMCIA card reader). This device will then establish the connection with the KAN-G server and transfer the pictures. Watchers receive and observe these snapshots with different tools (e.g. Web-based interfaces and ubiquitous displays), which get the photographs from a KAN-G server. These tools are also used to make comments and express emotions (e.g. laugh at a photograph). The feedback from the watchers is finally notified to the photographers with various tools, called awareness monitors. These make it possible for photographers to virtually hear their friends laugh or cry when they observe their pictures. In this scenario, there is no direct, explicit communication between photographers and watchers. We nevertheless believe that an affective link has been created between them.

3.2.1 Digital cameras

A very important component in the KAN-G framework is the device that is used to create the snapshots: the digital camera. Recent digital cameras can truly be described as information appliances: they integrate hardware components, an operating system, communication capabilities and a user interface.
Moreover, their purpose is to produce, process and (temporarily) store information: not only graphical information, but also contextual metadata such as the time, location or title of a photograph. Some digital cameras even integrate a programming environment, which makes them a unique platform for developing innovative photo-centric applications. Real estate, insurance and medicine are some of the professional fields that benefit from these applications, particularly because they enable automatic classification and efficient retrieval.
Figure 2. The KAN-G framework architecture (digital camera: take and annotate snapshots, initiate distribution by selecting channels, distribute directly if possible; kiosk: transfer photographs and metadata; KAN-G server: maintain the state of channels, receive pictures from photographers, serve pictures to watchers, monitor access to channels, distribute access notifications; watcher tools: WWW interface, ubiquitous displays, emotion and feedback capture; awareness monitor: who is looking, at which picture, and with what impression).
A very interesting device is the Locatio, recently introduced by Epson in the Japanese consumer market. This appliance is at the same time a PDA, a mobile phone and a digital camera, and is equipped with location sensors (GPS). It is also interesting to note that many prototypes of wearable computers integrate a digital camera, for example [Mann 1997] and [Healey and Picard 1998]. Such devices are likely to change the behavior of home photographers. Low cost, the immediate availability of the snapshots, and ergonomics that allow a camera to be carried at all times (e.g. when it is embedded in one's glasses [Mann 1997]) should increase the quantity of photographs taken. Besides, the electronic nature of the medium, combined with the Internet, makes it extremely cheap and easy to duplicate and share the pictures with others.
But while it is already possible to exchange digital photographs (e.g. by attaching them to email messages), there is a clear need for better solutions. When designing the KAN-G framework, one of our main requirements was that initiating the distribution of photographs should be as effortless as possible; otherwise, people would not do it regularly. This is similar to the problem of keeping up with one's email or updating one's Web site, which was discussed in Section 2.1.1. It therefore seemed a good idea to implement this function directly on the camera, which has two advantages. First, the task can be performed in context, i.e. as soon as the snapshot has been taken and with a single task-oriented tool. Second, the task can be performed very rapidly, in a matter of seconds (no need to find a PC and wait for it to boot up). Most of the currently available digital cameras offer a limited number of functions, accessible via a user interface generally composed of an LCD display, a few buttons and a few switches. More interesting are the cameras, currently including models from Kodak and Minolta, that use the Digita operating system [Digita]. Digita is a proposed standard operating system for imaging devices. It includes a menu-driven user interface, various sub-systems and, most interestingly, a scripting language. Using this language, it is possible to extend the functionality of the camera and to implement interesting applications. Digita scripts can control the hardware (e.g. zoom in and out, take a shot), write text files (but, unfortunately, not yet read them) and control GUI widgets (e.g. option lists, informative messages, text input). They also have read/write access to a number of data fields storing metadata for each picture (e.g. location, title). The scripts are organized in hierarchical menus, which the user can easily navigate with a four-way arrow button.
(Footnote 3: the Epson Locatio, http://www.i-love-epson.co.jp/locatio/index.html)
As part of the KAN-G framework, we implemented two Digita scripts that run on the Kodak DC260 camera. They are used to indicate which pictures should be published, and to which channels. Screenshots of the digital camera screen are reproduced in Figures 3, 4 and 5; they show the user interface at different steps during the execution of the scripts. The first script, "Set channel", lets the user define which pictures should be published. The first step consists of going through the snapshots and marking those that are to be published (Figure 3). When this is done, the user selects the "Set channel" script in the menu (Figure 4), and then chooses one of the predefined channels in a scrolling list (Figure 5). The script then traverses the list of marked pictures and, for each of them, updates a metadata field with a reference to the selected channel. A completion message is finally displayed on the screen. This process can be repeated several times before the actual publication of the photographs. When the user has performed this first task, the second script can be invoked. The "Export to channels" script traverses the list of photographs stored in the camera and checks the content of the metadata field assigned by the "Set channel" script. During this process, the script generates an XML document [Bosak 1997], which specifies how the photographs should be published. This document also contains the metadata (e.g. time, title, etc.) that was created on the camera (either automatically or as specified by the user). The XML document is used when the snapshots are actually transferred from the camera to a KAN-G server, either directly or via a KAN-G kiosk.
Figure 3. Marking snapshots for publication.
Figure 4. Selecting the Digita script.
Figure 5. Selecting the KAN-G channel.
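The logic of the two scripts can be sketched in a few lines. Digita's scripting language is proprietary, so this is an illustrative Python re-creation of the flow; the metadata field and XML element names are our own assumptions, not the format actually produced on the camera:

```python
import xml.etree.ElementTree as ET

def set_channel(pictures, marked, channel):
    """Emulate the 'Set channel' script: tag each marked picture's
    metadata with the selected channel (field name is illustrative)."""
    for pic in pictures:
        if pic["name"] in marked:
            pic.setdefault("channels", []).append(channel)

def export_to_channels(pictures):
    """Emulate the 'Export to channels' script: walk the pictures
    and emit an XML publication manifest (element names are ours)."""
    root = ET.Element("publication")
    for pic in pictures:
        for channel in pic.get("channels", []):
            photo = ET.SubElement(root, "photo",
                                  name=pic["name"], channel=channel)
            ET.SubElement(photo, "title").text = pic.get("title", "")
    return ET.tostring(root, encoding="unicode")

pics = [{"name": "dsc001.jpg", "title": "Picnic"},
        {"name": "dsc002.jpg", "title": "Sunset"}]
set_channel(pics, marked={"dsc002.jpg"}, channel="Family")
manifest = export_to_channels(pics)
# The manifest describes only dsc002.jpg, published to "Family".
```

Splitting the task in two, as the real scripts do, lets the user mark and re-mark pictures over several sessions before a single export step produces the manifest.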
3.2.2 Servers, channels and kiosks
In the KAN-G framework, pictures are not sent directly from photographers to watchers (as would be the case with email attachments, for example). Instead, they are published on channels managed by servers: photographers push information onto these channels, and watchers pull information from them. KAN-G servers therefore have two essential functions. On the one hand, they must accept photographs and maintain the state of channels; on the other hand, they must handle requests from watchers and give them access to the channels. A KAN-G server has been implemented by extending the functionality of an HTTP server with a collection of Java servlets. Java servlets are one approach to serving dynamic content on the Web; in other words, they are an alternative to CGI scripts, with several advantages over them. Because KAN-G servers are built as extensions of HTTP engines, they are directly accessible on the WWW. This seems very important, as it makes the system readily accessible to almost everyone. Using a standard Web browser is one way to browse through KAN-G channels. While it certainly is a required one, we will explain later that it might not be the best one. In the near future, many digital cameras will be able to establish wireless Internet connections, which will make it possible to send photographs directly from a camera to a KAN-G server. Because this is not the case yet, we decided to introduce kiosks into the architecture. After initiating the publication of photographs, i.e. after running the two Digita scripts, the user simply extracts the storage media from the camera and inserts it into a kiosk. In our prototype installation, the role of the kiosk is played by a PC with a PCMCIA card reader. The software running on the kiosk reads the XML document generated by the script, which specifies which photographs to fetch.
It also reads a special file stored on the picture card that identifies the KAN-G server on which the user owns an account. Finally, a connection is established with the server and the photographs listed in the XML document are pushed to the specified channels. Although it introduces an extra step and some delay, the kiosk keeps the publishing process simple and does not require much effort from the user (all the necessary information has been gathered beforehand, directly on the camera). A number of commercial services already support some of the functions enabled by KAN-G servers. Kodak PhotoNet Online [Kodak] and FujiFilm.net [FujiFilm], for example, allow users to store digital photographs in on-line photo albums, which can then be browsed by family and friends. Visitors can also order reprints and various accessories (mugs, T-shirts, etc.). An interesting point is that when people send a regular film to be developed and printed, they can tick an option to have the pictures digitized and published on-line. This is very important, because it makes publication very easy for the user: no need to handle a scanner, no need to use FTP, etc. On-line photo services have limitations that we have tried to overcome in KAN-G. First, it is difficult to know when new pictures are published by a photographer: it is necessary to check the Web site periodically, which requires a substantial effort. Second, photographers do not know whether their family and friends are watching the pictures, nor what they think of them. Third, the only way to look at the pictures is to use a Web browser (and thus a PC). It seems that other ways of performing this task would be more appropriate, as we suggested earlier with the electronic postcards application (Section 2.2).
Figure 6. Web-based interface for browsing KAN-G channels.
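The kiosk step described above (parse the manifest produced on the camera, read the server account file, push each listed photograph) can be sketched as follows. This is an illustrative Python fragment with element names, keys and the server hostname invented for the example; the actual on-card formats are not specified here:

```python
import xml.etree.ElementTree as ET

def kiosk_upload(manifest_xml, server_config, push):
    """Sketch of the kiosk: parse the XML manifest, find which
    server the user has an account on, and push each listed
    photograph to its channel via the supplied transfer function."""
    server = server_config["server"]
    uploaded = []
    for photo in ET.fromstring(manifest_xml).iter("photo"):
        push(server, photo.get("channel"), photo.get("name"))
        uploaded.append(photo.get("name"))
    return uploaded

manifest = ('<publication>'
            '<photo name="dsc002.jpg" channel="Family"/>'
            '</publication>')
sent = []
kiosk_upload(manifest, {"server": "kan-g.example.org"},
             push=lambda srv, ch, name: sent.append((srv, ch, name)))
# sent now records one push of dsc002.jpg to the "Family" channel.
```

Passing the transfer function in as a parameter keeps the sketch independent of any particular network protocol, just as the real kiosk could transfer pictures over plain HTTP.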
It should also be pointed out that digital photo collections, whether accessible on-line or on CD/DVD, raise a number of research issues in terms of efficient organization, annotation, browsing and retrieval. A very interesting system is FotoFile [Kuchinsky, et al. 1999], which makes the manual annotation of photographs easier. This is achieved by several means, in particular by the use of a narrative structure to organize the photographs. Because the collection organization process is transformed into a storytelling process, it is more enjoyable for the users.
3.2.3 Tools for watchers
Because KAN-G servers are HTTP servers, standard Web browsers can be used to observe photographs published in KAN-G channels. Different user interfaces can be developed for this purpose, from very simple ones (a Web page that shows the last photograph pushed onto a channel) to very complex ones (where JavaScript and DHTML are used to create interactive behaviors). An example of such a user interface is shown in Figure 6. On the left side, thumbnails of the pictures in a channel are displayed. When the user clicks on one of them, it is displayed at a larger size at the center of the window. On the right, a textual description of the picture (which was entered on the digital camera) is displayed. Finally, the user can click on different buttons situated below the photograph. "Smaller" and "Larger" change the size of the center photograph. More interestingly, "Laugh", "Applause" and "Booh!" are used to collect feedback from the watcher. When one of these buttons is clicked, a message is sent to the KAN-G server, and the photographer who published the photograph receives a notification (allowing him or her, for example, to virtually hear the watcher laugh or applaud). This user interface was only developed as a proof of concept, and more subtle ways to collect feedback from the watcher should be examined.
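The feedback path just described (a watcher's button press becomes a message to the server, which queues a notification for the photographer) can be sketched as follows. The prototype used Java servlets; this is a minimal Python sketch of the server-side bookkeeping, with all class, method and user names our own:

```python
class FeedbackRelay:
    """Illustrative sketch of the feedback path: watcher reactions
    are queued per photographer until an awareness monitor polls
    for them."""
    def __init__(self):
        self.pending = {}   # photographer -> list of notifications

    def feedback(self, photographer, photo, watcher, reaction):
        note = {"photo": photo, "watcher": watcher, "reaction": reaction}
        self.pending.setdefault(photographer, []).append(note)

    def poll(self, photographer):
        """Called by the photographer's awareness monitor; drains
        and returns the pending notifications."""
        return self.pending.pop(photographer, [])

relay = FeedbackRelay()
relay.feedback("alice", "beach.jpg", watcher="Grandma", reaction="Laugh")
notes = relay.poll("alice")
# notes holds Grandma's reaction; a second poll would return nothing.
```

Queuing on the server is one simple way to decouple the moment a watcher reacts from the moment the photographer's monitor is reachable.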
Web browsers provide a practical solution for watching pictures published in KAN-G channels: access to the WWW is ubiquitous and does not require particular hardware or software. It thus makes it possible for anyone to start using the system. Furthermore, a Web interface is sometimes perfectly suitable. This is particularly true when the watcher wishes to browse through collections of photographs; in this case, it is acceptable to use a personal computer, a keyboard and a mouse to watch and comment on the photographs. But providing only this kind of interface is not enough. In Section 2, we saw how using personal computers presents a risk of social isolation. Two reasons for this were that using a PC often requires being alone, and that traditional human-computer interaction monopolizes all of the user's physical and intellectual faculties. The motivation for designing the KAN-G framework was to increase the level of social interaction among people who do not live together. This goal should not, of course, be achieved at the cost of decreasing the level of interaction among the people who do live together: a worse situation would result if a user spent too much time exploring on-line photo collections, alone, and was consequently unable to interact with the members of the household. One solution to this problem consists in providing additional interfaces, through which users can interact with the system in more seamless ways. These interfaces should also enable processes in which the user can remain passive and does not have to provide input to the system (e.g. watching TV as opposed to browsing the Web). Whenever possible, these interfaces should be integrated into the architectural space of the home: in furniture, on walls, etc. Indeed, the home is already populated with a wide range of displays. These include televisions and computer displays, but also screen phones, wall-mounted panels, etc.
In the future, information will be displayed on digital wallpaper, mirrors and windows. Very often, the displays found in domestic environments are used occasionally for a specific purpose, but are inactive for long periods. This seems like a waste of interactive display real estate, which could, for example, be used by CMC technologies, and in particular by systems supporting social awareness between people. In our case, these displays could be used to show the photographs published in KAN-G channels. As an example, Figure 7 shows a device that is used by one of the authors in his kitchen. It is a Toshiba Libretto palmtop computer that is never shut down (it is therefore always immediately available). A few comments are worth making about how this device is used on a daily basis. First, it is in the kitchen and is generally used when people are at the table (before a meal, for example). This means that interaction with the device generally happens when family members are together. Second, the keyboard is rarely used (this requires user interfaces such as the one described below), and there is no mouse; the pointing device is integrated into the palmtop, on the right side of the display. All this makes it possible to use the device almost like a book, in very natural body positions. Finally, it is used exclusively as a communication device, mainly for (non-professional) email and Web browsing. Interaction with the device often initiates social processes within the household. A good example is the conversations between the author and his wife that are triggered by incoming emails (the activity of writing emails is also sometimes done jointly).
Figure 7. KAN-G monitor used in the kitchen.
This device is perfectly suited to be integrated into the KAN-G framework, where it could play the role of a digital picture frame.
Digital picture frames are very similar to traditional picture frames: they come in different shapes and sizes, can be moved around, etc. The difference between the two is that the image displayed in the frame is digital and can therefore easily and automatically be changed. It is interesting to note that Sony has already introduced a digital picture frame in the consumer market. While this device is still expensive and does not have a network connection (the pictures are transferred via Memory Stick storage media), it is a promising device for home photography applications. Digital imaging devices used to display photographs require careful user interface design. The first thing to consider is that their main function is not to support the exploration of a photo collection (as was the case with the Web-based interface). On the contrary, it is to automatically display photographs without requiring the intervention of the user and, additionally, to gather some feedback from the watchers. Ideally, this feedback should be captured automatically. This would require the use of sensors (e.g. camera, microphone) and software to recognize emotions. This is of course a difficult problem, where many issues remain open; nevertheless, a lot of research is being done in this area [Moriyama and Ozawa 1999, Nakatsu, et al. 1999, Picard 1997]. As long as automatic emotion recognition is not possible, the user interface should make it as easy as possible for the user to give feedback. In particular, it should not require the use of a keyboard. A prototype was built to provide an example of such a user interface. A snapshot, given in Figure 8, shows this tool. While it is difficult to convey its dynamic and interactive behavior with a static image, here is a brief description of the software:
• In the lower part of the display, photographs published in KAN-G channels scroll from left to right.
In this way, the user can remain passive and simply watch the incoming photographs.
• In the upper left corner of the panel, the user can express emotions inspired by the photographs. This is done by moving a circular cursor (with the mouse or another pointing device) within a pie chart that shows six different emotions (happiness, surprise, anger, disgust, fear and sadness). The intensity of the emotion is variable and is determined by how long the user presses the mouse button before releasing it (visual feedback is given by the expanding and shrinking size of the cursor).
• When an emotion has been expressed, visual feedback is given in the upper right portion of the screen, as vertically scrolling text. The size of the words indicates the intensity of the feeling expressed by the user (a large size indicates a strong feeling, a small size a moderate one).
• When feedback has been gathered via this user interface, it is notified to the person who published the photograph. This function is supported by a messaging infrastructure integrated with the KAN-G framework.
Figure 8. A KAN-G watcher monitor (photographs scroll horizontally; user feedback is captured by mouse press and release; visual feedback scrolls vertically, with strong feelings shown in large text and weak feelings in small text).
(Footnote 4: the Sony digital picture frame, http://www.sel.sony.com/SEL/consumer/ss5/home/digitalimagingmavicartmcamera/digitalimagingmavicartmcamera/phd-a55_specs.shtml)
3.2.4 Awareness monitors for photographers
The previous components of the KAN-G framework support the flow of information from photographers to watchers. Their goal is to make the capture, distribution and visualization of photographs as easy and natural as possible. This is important to guarantee that users will not perceive the picture-sharing activity as a burden, and will go through this process regularly and over the long term. While sharing photographs on the Internet is already possible (e.g. via email or Web sites), it requires considerable time and effort.
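The emotion-capture widget described above (a six-sector pie chart for the emotion, press duration for the intensity) reduces to two small mapping functions. This is an illustrative Python sketch; the sector ordering and the saturation time are our assumptions, not measurements from the prototype:

```python
EMOTIONS = ["happiness", "surprise", "anger", "disgust", "fear", "sadness"]

def emotion_from_angle(angle_degrees):
    """Map a cursor position in the pie chart to an emotion:
    each emotion occupies a 60-degree sector (ordering assumed)."""
    sector = int(angle_degrees % 360 // 60)
    return EMOTIONS[sector]

def intensity_from_press(duration_seconds, max_seconds=2.0):
    """Map how long the mouse button was held to an intensity in
    [0, 1], saturating at max_seconds (the scale is illustrative)."""
    return min(duration_seconds, max_seconds) / max_seconds

# A press at 95 degrees held for one second: a half-strength "surprise".
feedback = {"emotion": emotion_from_angle(95),
            "intensity": intensity_from_press(1.0)}
```

The same intensity value can then drive the size of the scrolling feedback text, large for strong feelings and small for weak ones.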
Moreover, it requires some technical knowledge that is not necessarily mastered by the average consumer. All this contributes to limiting the amount of digital photographs exchanged by home users. Another original aspect of the KAN-G framework is that it enables a flow of information in the opposite direction, from watchers towards photographers. When photographers know that their relatives and friends are watching their pictures, they can feel connected to them. An even richer experience is achieved when they have some idea of what these people felt when watching the pictures. Supporting this function first requires capturing the feedback from the watchers, as explained before. It then requires special tools that notify photographers about the social activity generated by their snapshots. These tools are called awareness monitors in the KAN-G architecture. What was said about the tools used to display photographs also applies to awareness monitors: they should ideally not require standard PC installations, but rather be integrated more gracefully into the living environment. Here again, the idea of peripheral awareness is central: photographers should be able to interact with the system (i.e. to receive access and feedback notifications) while they are doing other tasks. Awareness monitors can take many different forms, and their design is limited only by imagination. In simple cases, they might use audio and visual signals to signal the activity occurring in the system. In more elaborate scenarios, they might also involve tangible and ambient channels (physical objects, temperature, light, etc.). We developed tools based on both audio [Liechti, et al. 1999] and visual [Liechti and Ichikawa 1999] interfaces for the more general case of notifying activity occurring within Web sites.
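One building block that keeps such notification interfaces peripheral is to give each notification a visual salience that ramps up, holds, and decays over time instead of demanding immediate attention. The sketch below is ours, not part of the cited tools, and all timings are arbitrary:

```python
def notification_alpha(t, fade_in=1.0, hold=5.0, fade_out=2.0):
    """Opacity (0..1) of a notification over time since it arrived:
    fade in, stay fully visible for a while, then fade out.
    All durations (in seconds) are illustrative defaults."""
    if t < 0:
        return 0.0
    if t < fade_in:
        return t / fade_in                       # ramping up
    if t < fade_in + hold:
        return 1.0                               # fully visible
    if t < fade_in + hold + fade_out:
        return 1.0 - (t - fade_in - hold) / fade_out  # decaying
    return 0.0                                   # gone

# Half-way through the fade-in, fully visible at 3 s, half-faded at 7 s.
samples = [notification_alpha(t) for t in (0.5, 3.0, 7.0, 10.0)]
```

A display loop would simply evaluate this function each frame and composite the notification photograph at the resulting opacity.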
These interfaces could easily be integrated into the KAN-G framework, as accessing and commenting on a personal Web site is very similar to accessing and commenting on personal photographs. Another prototype was also developed specifically to fit the needs of KAN-G awareness monitors. This tool uses the new Java 2D API, which among other features supports semi-transparency. When a notification is received by the awareness monitor, the photograph that is being looked at, as well as a caption indicating the identity of the watcher, fade in on the display. If the notification specifies an emotion, a sound is also played by the device. After a while, the photograph and the caption fade out. Using this tool, it is thus possible to hear someone laugh across cyberspace. In a second step, it is possible to look at the device to find out who is laughing, and why (i.e. what picture is being looked at). This seems like a very natural process that does not require any effort from the user. It should be noted that this particular user interface is particularly suitable for synchronous notifications. It seems important, however, to also support asynchronous notifications. For example, it should be possible for the user to ask the system questions like:
• What was the activity during the last 3 days?
• What was the activity generated by this group of photographs?
• What was the activity generated by this group of people?
While we have many ideas for addressing these needs, they have not been implemented yet and are left for future work.
4 CONCLUSION
This article has examined the role of Computer Mediated Communication in domestic environments. Existing technologies have been reviewed, and some of their benefits and limitations have been described. Related ethnographic studies on the social impact of home computing have been mentioned, and important implications for technology designers have been highlighted.
The discussion has emphasized that most current technologies support explicit social interaction, by creating a new medium for conversations. While the merits of conversations are recognized, the need to also support implicit social interaction has been raised. The notion of affective awareness has been introduced and defined as a general sense of feeling in touch with relatives and friends. It has also been explained how engaging people in shared experiences that emotionally affect them is one way to create affective awareness between them. Finally, KAN-G has been described as an interpersonal communication framework that supports affective awareness through digital home photography. As motivation, the social role of home photography has been discussed, with references to visual anthropology. It has then been explained how the KAN-G framework integrates a range of hardware and software components that make the capture, distribution, observation and annotation of photographs natural and effortless.
ACKNOWLEDGEMENTS
This work was supported by the Japanese government with a Monbusho scholarship awarded to the first author. The authors are thankful to Eastman Kodak for providing them with the Digital Science DC260 digital camera used to implement the prototype system.
REFERENCES
1. S. Benford, C. Brown, G. Reynard, and C. Greenhalgh, “Shared Spaces: Transportation, Artificiality, and Spatiality”, in proceedings of ACM Conference on Computer Supported Cooperative Work (CSCW'96), Cambridge, MA, 1996.
2. S. Bly, S. Harrison, and S. Irwin, “Media Space: Bringing People Together in a Video, Audio and Computing Environment”, Communications of the ACM, vol. 36, 1993, pp. 28-47.
3. J. Bosak, “XML, Java and the future of the Web”, World Wide Web Journal, vol. 2, pp. 219-227, 1997.
4. R. Chalfen, “Family Photograph Appreciation: Dynamics of Medium, Interpretation and Memory”, Communication and Cognition, 1998.
5. R.
Chalfen, “Japanese Home Media as Popular Culture”, in proceedings of Japanese Popular Culture Conference, Victoria, British Columbia, Canada, 1997.
6. R. Chalfen, Snapshot Versions of Life. Bowling Green, OH: Bowling Green State University Popular Press, 1987.
7. Digita, Digita Operating System for imaging devices, http://www.flashpoint.com/
8. P. Dourish and V. Bellotti, “Awareness and Coordination in Shared Workspaces”, in proceedings of ACM Conference on Computer Supported Cooperative Work (CSCW'92), Toronto, Ontario, 1992.
9. P. Dourish and S. Bly, “Portholes: Supporting Awareness in a Distributed Work Group”, in proceedings of ACM SIGCHI Conference on Human Factors in Computing Systems (CHI'92), Monterey, CA, 1992.
10. FujiFilm, “FujiFilm.net”.
11. C. Gutwin and S. Greenberg, “Design for individuals, design for groups: tradeoffs between power and workspace awareness”, in proceedings of ACM Conference on Computer Supported Cooperative Work (CSCW'98), Seattle, WA, 1998.
12. J. Healey and R. W. Picard, “StartleCam: A Cybernetic Wearable Camera”, in proceedings of IEEE International Symposium on Wearable Computers, 1998.
13. E. A. Isaacs, J. C. Tang, and T. Morris, “Piazza: A Desktop Environment Supporting Impromptu and Planned Interactions”, in proceedings of ACM Conference on Computer Supported Cooperative Work (CSCW'96), Cambridge, MA, 1996.
14. H. Ishii and B. Ullmer, “Tangible Bits: Towards Seamless Interfaces between People, Bits and Atoms”, in proceedings of ACM SIGCHI Conference on Human Factors in Computing Systems (CHI'97), Atlanta, GA, 1997.
15. S. Junestrand and K. Tollmar, “The Dwelling as a Place for Work”, in proceedings of First International Workshop on Cooperative Buildings (CoBuild'98), Darmstadt, Germany, 1998.
16. Kodak, “Kodak PhotoNet Online”.
17. R. Kraut, T. Mukhopadhyay, J. Szczypula, S. Kiesler, and W.
Scherlis, “Communication and Information: Alternative Uses of the Internet in Households”, in proceedings of ACM SIGCHI Conference on Human Factors in Computing Systems (CHI'98), Los Angeles, CA, 1998.
18. R. Kraut, M. Patterson, V. Lundmark, S. Kiesler, T. Mukophadhyay, and W. Scherlis, “Internet paradox: a social technology that reduces social involvement and psychological well-being?”, American Psychologist, vol. 53, pp. 1017-1031, 1998.
19. A. Kuchinsky, C. Pering, M. L. Creech, D. Freeze, B. Serra, and J. Gwidzka, “FotoFile: A Consumer Multimedia Organization and Retrieval System”, in proceedings of ACM SIGCHI Conference on Human Factors in Computing Systems (CHI'99), Pittsburgh, PA, 1999.
20. H. Lieberman, N. W. Van Dyke, and A. S. Vivacqua, “Let's Browse: A Collaborative Web Browsing Agent”, in proceedings of International Conference on Intelligent User Interfaces, Redondo Beach, CA, 1999.
21. O. Liechti and T. Ichikawa, “A Visual Interaction Mechanism for Increasing Awareness on the WWW”, in proceedings of IEEE International Symposium on Visual Languages, Tokyo, Japan, 1999.
22. O. Liechti, M. Sifer, and T. Ichikawa, “A Non-obtrusive User Interface for Increasing Social Awareness on the World Wide Web”, Personal Technologies, vol. 3, pp. 22-32, 1999.
23. S. Mann, “An historical account of the 'WearComp' and 'WearCam' inventions developed in 'Personal Imaging'”, in proceedings of IEEE International Symposium on Wearable Computers, 1997.
24. T. Moriyama and S. Ozawa, “Emotion Recognition and Synthesis System on Speech”, in proceedings of International Conference on Multimedia Computing and Systems (ICMCS'99), Florence, Italy, 1999.
25. R. Nakatsu, A. Solomides, and N. Tosa, “Emotion Recognition and Its Application to Computer Agents with Spontaneous Interactive Capabilities”, in proceedings of International Conference on Multimedia Computing and Systems (ICMCS'99), Florence, Italy, 1999.
26. M. Padula and R.
Amanda, “Art Teams Up with Technology through the Net”, Interactions, vol. 6, 1999, pp. 40-50.
27. E. R. Pedersen and T. Sokoler, “AROMA: abstract representation of presence supporting mutual awareness”, in proceedings of ACM SIGCHI Conference on Human Factors in Computing Systems (CHI'97), Atlanta, GA, 1997.
28. Philips, Let's Connect Survey, http://www.philipsconsumer.com/letsconnect/
29. R. W. Picard, Affective Computing. Cambridge: MIT Press, 1997.
30. J. Preece, “Reaching Out Across the Web”, Interactions, vol. 5, 1998, pp. 32-43.
31. H. Rheingold, The Virtual Community. Reading, MA: Addison-Wesley, 1993.
32. D. Rijken, “Information in Space: Explorations in Media and Architecture”, Interactions, vol. 6, 1999, pp. 44-57.
33. J. Ruby, “Seeing Through Pictures: the Anthropology of Photography”, Camera-Lucida: The Journal of Photographic Criticism, pp. 19-32, 1981.
34. K. Tollmar, O. Sandor, and A. Schoemer, “Supporting Social Awareness @Work: Design and Experience”, in proceedings of ACM Conference on Computer Supported Cooperative Work (CSCW'96), Cambridge, MA, 1996.
35. A. Venkatesh, “Computers and Other Interactive Technologies for the Home”, Communications of the ACM, vol. 39, 1996, pp. 47-54.
36. N. P. Vitalari, A. Venkatesh, and K. Gronhaug, “Computing in the Home: Shifts in the Time Allocation Patterns of Households”, Communications of the ACM, vol. 28, 1985, pp. 512-522.
37. M. Weiser and J. S. Brown, “Designing Calm Technology”, PowerGrid Journal, version 1.01, July 1996.
38. C. Wisneski, H. Ishii, A. Dahley, M. Gorbet, S. Brave, B. Ullmer, and P. Yarin, “Ambient Displays: Turning Architectural Space into an Interface between People and Digital Information”, in proceedings of First International Workshop on Cooperative Buildings (CoBuild'98), Darmstadt, Germany, 1998.

Degloving and skin realignments. Bassam Kh. Al-Abbasi, Ahmad M. Hamodat. Ann Coll Med Mosul June 2013 Vol.
39 No. 1, p. 89.

Degloving and skin realignments or dorsal dartous flap technique in management of isolated penile torsion in pediatrics
Bassam Kh. Al-Abbasi (a), Ahmad M. Hamodat (b)
From the (a) Department of Surgery, Nineveh College of Medicine, University of Mosul, and (b) Al-Khansaa Teaching Hospital, Mosul.
Correspondence: Bassam Kh. Al-Abbasi, Department of Surgery, Nineveh College of Medicine, University of Mosul, Mosul, Iraq. Email: bassamalabbasi@yahoo.com.
(Ann Coll Med Mosul 2013; 39 (1): 89-93). Received: 13th Jun. 2012; Accepted: 4th Mar. 2013.
ABSTRACT
Objective: To evaluate the proper technique for the management of penile torsion in pediatric patients in relation to the degree of torsion.
Patients and methods: From February 2008 to December 2010, 54 patients were assessed for the degree of penile torsion at the pediatric surgery center of Al-Khansaa Teaching Hospital in Mosul. The angle of torsion was assessed using a digital photograph of the penis and classified into three grades: mild, with a 15-30 degree angle of torsion; moderate, with a 45-90 degree angle; and severe, with a 100-170 degree angle. Two techniques were used for repairing the torsion: the degloving and skin realignment technique for the mild condition, and the dorsal dartous flap technique for the moderate and severe types. All operations were done as day-case procedures. No catheter was used.
Results: Forty-five patients (83%) were discovered accidentally while being assessed for circumcision or other problems. Thirty patients (55%) were in the first year of life. Fifty patients (93%) had a counterclockwise direction of torsion (to the left), while only 4 patients (7%) had a clockwise direction (to the right). Thirty-five patients (65%) were classified as mild torsion, 16 patients (29.5%) had a moderate degree of torsion, and only 3 patients (5.5%) had a severe degree.
Degloving and realignment of skin were used for the mild condition in 35 patients (65%) while dorsal dartous flap technique was confined for moderate (29.5%) and both procedures used for severe type patients (5.5%). Conclusion: Simple realignment technique during circumcision was enough to manage the mild degree, while in severe degree, dorsal dartous flap rotation seems to be more effective. There were no complications, and good cosmetic results were obtained. Keywords: Penile torsion, degloving, dorsal dartous flap. ((Dorsal dartous flapتقنيت لحب الجلذ وإعادة ترتيبه أو جلذ ظهري األطفال لتىاء القضيب عنذإفً عالج حاالث الخالصت .نرٕاءدرخح اإل حسةاألطفال عُذنرٕاء انقضٍة إ عالجنرحذٌذ انطرٌقح انًُاسثح انًسرخذيح ن :الهذف نرٕاء انقضٍة فً يركس خراحح األطفال إيرٌضا نذرخح 45، ذى ذقٍٍى 8000 كإٌَ األٔلإنى 8002 شثاطيٍ :المرضى والطرق درخاخ، تسٍظ 3سرخذاو صٕرج رقًٍح نهقضٍة ٔذصٍُفٓا إنى إنرٕاء تذى ذقٍٍى زأٌح اإل .فً يسرشفى انخُساء انرعهًًٍ فً انًٕصم درخح يٍ 070-000نرٕاء ٔشذٌذ يع زأٌح درخح يٍ اإل 00-54نرٕاء، يعرذل يع زأٌح درخح يٍ اإل 30 - 04يع زأٌح نرٕاء إعادج خٍاطرّ نهحاالخ انرً ذحًم درخح إضٍة ثى ذقٍُح نحة خهذ انق نرٕاء،ٍٍ يٍ انرقٍُاخ إلصالذ اإلرثُإسرخذيد إٔ .نرٕاءاإل دٌٔ حٌٕيٍ حخرٌد كدراحأخًٍع انعًهٍاخ .جٔانشذٌذ حنًعرذنانهحاالخ (dorsal dartous flap) ، ٔذقٍُح خهذ ظٓريحتسٍط .حهٍم، نى ذسدم أٌح يضاعفاخخراء قسطرج اإلإانى حٔدٌٔ انحاخ انى يثٍد فً انًسرشفى، حانحاخ فً ٕا٪( كا44َيرٌضا ) ثالثٌٕ .نخراٌ أٔ غٍرْا يٍ انًشاكما عُذ ح٪( تانصذف23رتعٍٍ يرٌضا )أكرشاف خًسح ٔإذى :النتائج فقظ يٍ 5فً حٍٍ أٌ نرٕاء )إنى انٍسار(ذداِ عقارب انساعح يٍ اإلإنذٌٓى عكس (٪03)يرٌضا 40انسُح األٔنى يٍ انعًر، Bassam Kh. Al-Abbasi, Ahmad M. Hamodat Degloving and skin realignments.. 90 Ann Coll Med Mosul June 2013 Vol. 39 No. 1 نرٕاء تسٍظ، فً حٍٍ أٌ إ٪( ذصُف عهى أَٓا 54يرٌضا ) خًس ٔثالثٌٕ انًٍٍٍ(. 
ذداِ عقارب انساعح )إنىإ٪( نذٌٓى 7انًرضى ) سرخذيد ذقٍُح نحة ٔإعادج إٔ .ج٪( درخح شذٌذ4,4فقظ يٍ انًرضى ) 3نرٕاء ٔ إل٪( نذٌٓى درخح يعرذنح يٍ ا80,4يرٌضا ) 05 (dorsal dartous flap) خهذ ظٓريقرصرخ ذقٍُح إ٪( فً حٍٍ 54يرٌضا ) 34نرٕاء انثسٍظ عُذ ذرذٍة اندهذ نحاالخ اإل .٪(4,4) جخراءْا نًرضى انحاالخ انشذٌذإ٪(، ٔكال انرقٍُرٍٍ ذى 80,4نهًعرذل ) خراؤْا خالل عًهٍح انخراٌ تًا فٍّ انكفاٌح نعالج درخح خفٍفح يٍ إتسٍطح ًٌكٍ حعادج ذرذٍة خهذ انقضٍة ذقٍُإهحة ٔان :الخالصت أكثر فعانٍح، يع عذو حصٕل أي dorsal dartous flap) كاَد ذقٍُح خهذ ظٓري ) جشذٌذتًٍُا فً انحاالخ ان ،نرٕاء انقضٍةإ .يضاعفاخ َٔرائح ذدًٍهٍح خٍذج enile torsion is an axial rotational defect of the penile shaft (1,2) . It's usually in the counter clockwise direction, that is to the left side (3,4) . The true incidence of penile torsion is not known and it is not a very common deformity (5) . Few data exist in the literature about penile torsion and the exact etiology is unclear (6- 9) . It can be seen independently or in association with other penile anomalies such as hypospadias or chordee (1,4) . Some authors believe that the primary defect is due to abnormal skin attachment, others believe that it is caused by asymmetrical development of corpora cavernosa (6) . In 1992 Slawin and Neglar described a technique of removing an ellipse of corporal tissue to correct torsion (10) . In 2000 Bolgrano et al used modified Nesbit teqnique (asymmetric tunica albuginea excision) (11) . In 2004 Fisher and Parker described a technique using dorsal dartous flap rotation (12) . In 2009 Brent described a new technique for penile torsion correction using diagonal corporal plication suture (4) . To date this was the first study dealing with the problem of penile torsion in Mosul city, our aim of this study is to evaluate penile torsion in pediatrics and the proper surgical techniques used for correction in relation to degree of torsion in comparison with other literatures. 
PATIENTS AND METHODS From February 2008 to December 2010; 54 boys with isolated penile torsion were prospectively evaluated and operated upon in pediatric surgery center at Al- Khansaa Teaching Hospital, their age ranging between 1-10 years. Most of these boys discovered accidentally at time of circumcision, others presented in association with other problems, abnormal shape and urinary stream noticed by parents or post circumcision. The angle of torsion (degree) was assessed on digital photography of the penis using AutoCAD program used by the architectures on A-P view depends on 2- land marks, the slit of the meatus in relation to scrotal median raphee (Fig.1), and according to that our patients classified into three grades of torsion: 1. Mild with15-30 degree. 2. Moderate with 45-90 degree. 3. Severe with 100-170 degree. Operations were done by two surgeons (the authors). Two techniques were used (based on surgeon preference) for correction of torsion depending on the degree of torsion and angle of rotation. Degloving of the penile skin by dividing the adhesions along the entire penile shaft, and repositioning the skin bringing the twisted median raphee to its straight direction for mild degree (15- 30). The dorsal dartous flap technique was used for the moderate and severe degree, in which the dartous flap after its dissection from the dorsal penile skin, was rotated around the side of the penile shaft opposite to the direction of rotation and attached to its ventral aspect, this creates a rotational force that counterbalance that of penile torsion (Fig. 2). Assessment for corrective criteria done per operatively by noticing the slit of the urethral meatus in one line with the scrotal raphee (Fig. 3). All procedures were done as day cases, no complications were recorded and no catheter used during both procedures. All our patients were followed for about one year. 
RESULTS Forty five patients (83%) were discovered accidentally while assessing for circumcision or other problems (inguinal hernia, hydrocele and undescended testes), while only seven patients (13%) with moderate and severe degree were P Degloving and skin realignments.. Bassam Kh. Al-Abbasi, Ahmad M. Hamodat Ann Coll Med Mosul June 2013 Vol. 39 No. 1 91 referred because of abnormal shape of penis and/ or urinary stream direction. Two patients (4%) presented post circumcision. Thirty patients (55%) were in the first year of life, 15 patients between 1-2 years and 9 patients were 4-10 years of age. Fifty patients (93%) with counter clockwise direction of rotation (to the left), and only four patients (7%) with clockwise rotation (to the right). Mild rotation found in 35 patients (65%), 16 with moderate rotation (29.5%), and 3 with severe degree of rotation (5.5%). Degloving and skin realignment were used for correction of mild degree in 35 patients, while dorsal dartous flap technique was used for moderate degree in 16 patients, both techniques were combined to correct three patients with severe degree of torsion to achieve full correction of the rotation (Table 1). Figure 1 & 2. Severe penile torsion with 131 degree torsion using auto CAD program. Figure 3. Creation of dartuos flap. Figure 4. Post correction appearance (meatal slit and median raphee in one line) Table 1. Degree of torsion and procedures used for each. % Technique No. Degree of torsion 65% Degloving & skin realignment 35 Mild (15-30) 29% Dorsal dartous flap 16 Moderate (45-90) 5.5% Combined 3 Severe (100-170) 100% 54 Total DISCUSSION Penile torsion is a rare anomaly specially when isolated. On reviewing the published reports of the penile torsion repair in the last three decades, it Bassam Kh. Al-Abbasi, Ahmad M. Hamodat Degloving and skin realignments.. 92 Ann Coll Med Mosul June 2013 Vol. 39 No. 1 was very low both in number of the reports and number of cases included in these reports (5,13,14) . 
We report 54 cases of different grades which is regarded as large series in comparison to others, Ashraf Hussain et al reported 13 boys with isolated penile torsion over two years duration (9) , while Abou Zied et al reported 19 boys with hypospadias associated with penile torsion over 1.5 years (1) , Lizhou et al in 2006 described his technique for correction of penile torsion on 17 boys (6) . This difference in reporting this problem might be attributed to under diagnosis of this mild asymptomatic condition, because most of the paramedical staff and some surgeons are not familiar with this problem. This is supported in our study because 83% of our cases were diagnosed accidentally on clinical examination preceding circumcision and we believe that if the parents, medical, and paramedical staff (who use to do circumcision in the hospital or clinics) were aware of penile torsion, the number of cases might be more. Torsion of penis can vary in severity ranging from 30 degree in mild cases to 180 degree in severe cases (15) , it can be caused by abnormal skin attachment (1) , however that might not be the only causative actor since torsion is not always corrected by penile degloving (16) . These facts are proofed in our study and other studies since 35% of our patients with moderates and severe degree of torsion corrected by dartous flaps while those with mild torsion (65%) were corrected with skin realignment technique alone this confirm that there are other causative factors play a role in the etiology other than abnormal skin attachment. 
In most reports the methods of measuring the degree of torsion is not clarified, Pierrot and Mutharagan described a method using sterile small protractor with modification for better adjustment (7) , Abou Zied et al measured the angle of rotation on a digital photograph of the penis using software program used by the radiologist for image analysis which can be applied pre and post operatively and provides an objective evaluation for the corrective surgery (1) . In our center we don’t have such program, but instead of that we used the AutoCAD program that is used by the architectures to measure and assess the angle of rotation which applied on a digital photograph pre and post operatively (Fig. 1&2). In addition to that we depended on two landmarks to assess the results of corrective surgery per operatively, in which the long axis of the meatal slit became in same line with the scrotal median raphee, this will provide an excellent intraoperative sign of full correction (Fig. 4). Congenital penile torsion is a benign condition which my need no treatment especially for its mild conditions (17) . In our locality circumcision is done routinely for religious purposes and we use to assess every patient who underwent circumcision for any abnormality including penile torsion and if it is present we take the permission from the parents for correction, that’s why all patients even with mild degree torsion corrected at time of circumcision. In our study we applied the degloving and skin reattachment technique for 65% with mild degree with 100% correction and no complications, this agrees with most authors who described this technique to be only effective for mild form of penile torsion (6,12,17) . However Tryfonas et al reported satisfactory result by applying this technique for more severe degree by suturing the skin in an over corrected position (18). The dorsal dartous flap has been widely used by the urologist to cover their urethroplasty. 
In 2004 Fisher and Parker described this technique to correct counterclockwise penile torsion (12) . We found this technique very effective in correcting moderate to severe degree of torsion. We observed its effect in correcting penile torsion and covering the urethroplasty in Snodgrass technique for hypospadias repair applied on our patients, and from this observation we started to manage moderate and severe types of isolated penile torsion and rotating the flap around the side of penile shaft opposite to the direction of torsion in 35% of patients. Dorsal dartous flap technique provided 80% correction in Abuo Zied series (1) , and 100% in Fisher and Park series (12) , while only 64% achieved complete resolution of penile torsion in the series of Bauer and Kogan in 2009 (19) . In our study we recorded 100% correction in both techniques, but we combined both techniques in managing severe degree of penile torsion to reach 100% correction. This difference in success rate might be related to technical application of the procedures in relation to degree of torsion. Degloving and skin realignments.. Bassam Kh. Al-Abbasi, Ahmad M. Hamodat Ann Coll Med Mosul June 2013 Vol. 39 No. 1 93 However to avoid the risk of under or over correction the amount of flap rotation should be determined with respect to degree of torsion and still some final adjustment should be made during skin closure to gain high success rate (1) . CONCLUSION Isolated penile torsion is not a rare problem. Pre- operative assessment and measuring the angle of rotation is valuable to decide the proper technique for correction. Simple degloving and skin realignment is very effective in managing mild degree at time of circumcision; while dorsal dartous flap rotation is more suitable for moderate and severe degree. Sometimes combination of both procedures gives 100% correction in severe condition, with no complications and good cosmetic results. REFERENCES 1. Abou Zeid A, Soliman H. 
Penile torsion: an over looked anomalies with distal hypospadias. Annals Pediatr. Surg. 2010;6:93-97. 2. Patrick J, John M. Abnormalities of the urethra, penis and scrotum. In Jay L, James A, Arnold G, Eric W. editors. Pediatric surgery. Sixth edition. MOSBY Elsevier 2006.p.1899-1908. 3. Jack S. Penile torsion. In: David F, John P, Howard M. editors. Operative pediatric urology. Second edition. Churchill Livingstone; 2002. p. 275-285. 4. Brent W. Penile torsion correction by diagonal corporal placation sutures. International Braz J Urol. 2009;35(1):56-59. 5. Redman IF, Bissada NK. One stage correction of chordee and 180 – degree penile torsion. Urology. 1976;7:632-3. 6. Zhou L, Mei H, Hwang AH, Xie H, Hardy BE. Penile torsion repaire by suturing tunica albuginea to the pubic periostium. J Pediat. Surg. 2006; 41:E7-9. 7. Pierrot S, Muthuragan S. Incidance and predictive factors of isolated neonatal penile glanular torsion. J Pediatr Urol 2007; 3:495-9. 8. Bar Yousef Y, Binyamini J, Matzkin H, et al .Degloving and realignment –simple repair for isolated penile torsion. Urology 2007;69:369-71. 9. Hussain A, Nagib I. Penile degloving and skin reattachment technique for repaire of penile torsion, our experience. Egypt J. Plast. Reconstr. Surg. 2007;31(1): 19-23. 10. Slawin KM, Nagler HM. Treatment of congenital penile curvature with penile torsion: A new twist. J Urol.1992;147(1):152. 11. Belgrano E, Linguori G, Trombeta C , Siracusano S. Correction of complex penile deformities by modified Nesbit procedure, asymmetric tunica albuginea excision. Eur. Urol. 2000;38(2):172-6. 12. Fisher C, Parker M. Penile torsion repair using dorsal dartous flap rotation. J Urol. 2004;17(5):1903. 13. Azmy A, Eckstein H.B. Surgical correction of tortion of the penis. Br. J Urol. 1981; 53 (4):378. 14. Culp O S. Struggle and triumphs with hypospadias and associated anomalies :review of 400 cases. J Urol. 1996;96(3):339. 15. Shaeer O. 
Torsion of the penis in adults: prevalence and surgical correction .J Sex Med 2008;5:735. 16. Bhat A, Bhat MP, Saxena G. Correction of penile torsion by mobilization of urethral plate and urethra. J Pediatr. Urol. 2009;5:451-457. 17. Elder JS. Anomalies of the genitalia in boys and their surgical management. In: Wein AJ. Campbelle Walsh Urology 9 th edition. W. B. Saunders; 2007.p. 3754-60. 18. Tryphonas GI, Klokkaris A, Sveromis M, et al. Torsion of penis: a comparative study between two procedures of skin derotation. Pediat. Surg. Int. 1995;10: 359-61. 19. Bauer R, Kogan BA. Modern technique for penile torsion repair. J Urol. 2009;182:286-91. work_a544zezgkfey5gojssuyeyy2si ---- International Journal of Management Sciences and Business Research, Nov-2018 ISSN (2226-8235) Vol-7, Issue 11 International Journal of Management Sciences and Business Research, Nov-2018 ISSN (2226-8235) Vol-7, Issue 11 http://www.ijmsbr.com Page 27 A Business Enterprise Resilience Model to Address Strategic Disruptions Author’s Details: (1) Hassan Ahmed Hassan Mohamed- Ph.D. Student Affiliation: Faculty of Computers and Information Cairo University Title: Business Architect (2) Prof. Galal Hassan Galal-Eldeen-Faculty of Computers and Information Cairo University 1 Abstract Resilient business enterprises are able to survive strategic disruptions like technology disruptions and come back as more successful. They succeed because they have resilient characteristics and apply resilience strategies. Based on a case study analysis, this paper builds a business enterprise resilience model that guides the business enterprises to build the resilience capabilities that enable them to survive during strategic disruptions. The proposed model guides the business enterprise to instil in its architecture the design characteristics of resilience that make it ready to respond to disruption. 
The model uses the resilience strategies of mitigation, adaptation, and transformation and applies them to three enterprise levels; the operating model level, the competitive strategy level, and the business model level. The mitigation strategy moves the operating model to the efficiency frontier. The adaption strategy recovers the enterprise from the impacts of the strategic disruptions. The transformation strategy transforms the enterprise business model totally. Key Words: disruption; strategic disruption; resilience; mitigation; adaptation; transformation; operating model; competitive strategy; business model; resilience characteristic; resilience capability; resilience strategy; 2 Introduction We live in a world of change and disruptions. When they happen, the typical response is, "Who would have thought this will happen?". Whether the economy is strong or weak, competition is fiercer than ever, and change comes faster than ever; and if a business wants to survive difficult times, it has to prepare itself to be able to make the right shift at the right time in response to disruptions and changes (Bossidy and Charan 2002). Disruptions can be rooted in new technologies, new disruptive business models, the emergence of new regulatory and market forces, or changes in the availability of resources (Fiksel 2003). Some of these disruptions can be game-changing phenomena and cause storms that threaten the business enterprises going through those storms. These kinds of disruptions are called strategic disruptions (Schwartz and Randall 2007). An example of such a strategic disruption is the digital photography technology that threatened the core businesses of both enterprises, Fujifilm and Kodak (Komori 2015). Business enterprises going through these kinds of storms are not equal in their approach to dealing with them and not equal in the results they ended up with after going through the storms; some succeeded while some failed. 
For e.g., Fujifilm succeeded while Kodak failed to face the digital photography disruption (Komori 2015). EMC succeeded facing the disruption of the new storage technologies and customer preference change in favour of low tier low cost storage solutions, while Sun Microsystems failed to face the disruption of the technology bubble burst and the associated change in customer preference in favour of open low cost solutions (Bossidy and Charan 2002). Successful enterprises build resilience capabilities to prepare for such strategic disruptions using resilient approaches (Hamel and Välikangas 2003). A resilient approach is not concerned with stabilizing business enterprises quickly under small shocks, but rather, it is concerned with making business enterprises continuously survive large strategic disruptions in the long term. A resilient approach is concerned with surviving different strategic disruptions through continuously monitoring, interpreting, and adapting to sustainable trends that cause business enterprises to permanently lose the profitability and growth of their core businesses (Hamel and Välikangas 2003). 2.1 The Concept of Resilience Resilience (with its roots in the Latin word resilio) means to adapt and “bounce back” from a disruptive event (Longstaff, Armstrong, et al. 2010). Similarly, it is the capacity of a system to absorb disturbance, undergo change, and retain the same essential functions, structure, identity, and feedbacks (Holling 1973). (Holling 1973) differentiated between two types of resilience, ecological resilience, and engineering resilience. In the view of ecological resilience, the system seeks survival facing large disruption. While, in the view of engineering resilience, the system seeks stability facing small disruptions. In the same way, (Fiksel 2003) differentiates between two types of systems, the resistant system, and resilient system. 
What (Fiksel 2003) called resistant system is atypical of the engineered highly controlled system. Resistant systems are designed to resist small disruptions and return back in a very short time to their equilibrium states, but they are not designed to survive large disruptions. A bridge would be an example of an engineered highly controlled system; the bridge can face small perturbation like wind or earthquake and return back in a short amount of time to its equilibrium state. On the other hand, what (Fiksel 2003) called resilient system is a system that is adaptive and transformative. When a resilient system faces large disruption, it does not necessarily return to a specifically stated equilibrium state, but it is capable of surviving and keeping its structure and services. In a resilient system like human society, people may have diverse livelihoods that give them options for responding to change. In the western Indian ocean region, for e.g., fishers from households with more diverse livelihood portfolios that included non-fishing activities were more able to consider leaving a fishery that was in decline (Cinner, McClanahan, et al. 2012). Not only does such livelihood flexibility increase the resilience of individual households, but it also reduces the pressure on the parts of the system producing a particular service, thereby enhancing the resilience of that system service (Ellis 2000). International Journal of Management Sciences and Business Research, Nov-2018 ISSN (2226-8235) Vol-7, Issue 11 http://www.ijmsbr.com Page 28 Since business enterprises are complex adaptive systems that are subject to large disruptions from their internal and external environments (DOOLEY 2002), they lend themselves more to the view of ecological resilience than to the view of engineering resilience, when we analyze situations in which these business enterprises face large and sustainable strategic disruptions. 
In this kind of situations, business enterprises are exposed to disruptive forces that threaten the identity and very existence of these enterprises, and the biggest concern is to survive or to persist using the concepts of ecological resilience (Holling 1973). Within the ecological resilience view, a system like a business enterprise can exist in one of several basins of attractions called regimes. The system shifts from one basin of attraction or regime to another if it passes the threshold of a controlling variable (Holling 1973). Figure 1: Basin of attractions A threshold of a controlling variable is the level or amount of a change of that controlling variable, that causes a change in critical feedback, causing the system to self-organize along a different trajectory towards a different attractor (Walker and Meyers 2004). In spite that complex adaptive systems like business enterprises are affected by many variables; they are usually driven by only a handful of key controlling variables (Walker and Meyers 2004). This is an important concept that is used to create and execute strategies to respond to disruptions. For e.g., if we want to prevent the system from flipping into another regime, we should prevent crossing the thresholds of the systems‟ controlling variables. 2.2 Research Objective Strategic disruptions are game-changing phenomena. They do not happen very frequently, but when they occur, the rules of the game that were previously in place no longer apply (Schwartz and Randall 2007). Strategic disruptions could include the introduction of new technologies, the emergence of new regulatory and market forces, or changes in the availability of resour ces (Fiksel 2003). When a business enterprise faces a strategic disruption, its core business crumbles, and its very existence is on the brink. This is clear for e.g., in the case of Fujifilm facing the disruption of the digital age (Komori 2015). 
Business enterprises need to continuously anticipate and adjust to trends that can permanently impair the earning power of their core businesses. They need to build resilience capabilities to prepare for strategic disruptions. They also need to develop strategies and execute actions when being inside the storms of these strategic disruptions. We call business enterprises that monitor trends, build resilient capabilities, and execute resilience strategies; by resilient business enterprises. There is a strong need to understand, learn, and develop a resilience model that captures how successful, resilient business enterprises prepare for and act during strategic disruptions in a way that ensures survivability of these resilient business enterprises. This need is clear when we look at the difference of results between Fujifilm and Kodak. Both enterprises faced the same disruption, the digital photography that impacted their core film businesses. After the storm, Fujifilm became a much more successful company with diversified business, ranging from optical devices to radiopharmaceuticals, while Kodak filed for bankruptcy in 2012 (Komori 2015). Both companies saw the digital disruption and executed strategies in responses to it, but one succeeded and the other failed. This points clearly to a gap in having a clear resilience model that stitches together strategies and actions in a way that enables the enterprise to survive the storm successfully. This work aims at building a business enterprise resilience model that addresses this gap by learning from a successful, resilient business enterprise. 3 Methodology We conducted a search into the strategies and actions taken by Fujifilm enterprise that made it successfully survive the disruption of the digital photography and the digital age in general. The digital disruption impacted Fujifilm‟s photo film core business and all its associated products and services (Gavetti, Tripsas, et al. 2007). 
Facing the digital disruption and its associated decline in the global demand for colour film, Fujifilm responded by a series of strategies and actions with specific time patterns and with specific sequence and mix. We searched those strategies and actions, categorized them, looked at what was effective and what was not, looked at what was there for those that worked to succeed, and investigated the relations between these strategies and action. The main text we analysed is the book written by the Fujifilm president (Komori 2015) describing Fujifilm‟s view of the disruption and articulating the different strategies and action taken in response to the disruption. In addition to the main text, we conducted several online search queries of “Fujifilm digital crisis,” “Fujifilm inside the storm,” “Fujifilm vs. Kodak,” and “Fujifilm survived,” to collect the articles written about the digital disruption that impacted Fujifilm and how it responded to the disruption. We filtered these articles to focus only on those that articulated the specific strategies and actions taken by Fujifilm in response to the disruption, along with the viewpoints of what made those strategies and actions succeed. Articles that were not International Journal of Management Sciences and Business Research, Nov-2018 ISSN (2226-8235) Vol-7, Issue 11 http://www.ijmsbr.com Page 29 focusing on the specific strategies and actions taken by Fujifilm were not selected. This method resulted in a book and a set of articles that constitute the text for the qualitative analysis as shown in Table 1. Title of Text Innovating out of the crisis: How Fujifilm survived (and Thrived) as its core business was vanishing (Komori 2015) Fujifilm: A second foundation (Gavetti, Tripsas, et al. 2007) Kodak's downfall wasn't about technology (Anthony 2016) How Fujifilm survived – Sharper Focus (K.N.C. 
2012) FUJIFILM‟S “MOMENT”: DISRUPTION, ADAPTATION, AND HEALTH SYSTEM TRANSFORMATION (Johnson 2015) How Fujifilm survived the digital age with an unexpected makeover (Ng 2017) Table 1: Summary of the analysed text 3.1 Results We conducted a directed content analysis (Hsieh and Shannon 2005) to the text. Guided by the ecological resilience theory, we began by a set of initial coding categories. We coded the text based on this initial set of coding categories, but any text that could not be categorized with the initial coding scheme would be given a new code (Forman and Damschroder 2007). We used the directed content analysis method because we believe that, the ecological resilience theory is a powerful theory to explain how complex adaptive systems like business enterprises (DOOLEY 2002) behave under large perturbations (Walker and Meyers 2004). The ecological resilience theory will give a structure to the resilience approach by providing relationships between concepts and metaphors to explain the behaviour of complex adaptive concepts under large perturbations. Figure 2 shows the coded categories. 
Figure 2: Categories and Themes Table 2 shows the list of categories and their parents: International Journal of Management Sciences and Business Research, Nov-2018 ISSN (2226-8235) Vol-7, Issue 11 http://www.ijmsbr.com Page 30 Category Parent Number of coded text segments Demand Shrinking 15 Disruption Parent (9) Disruption Anticipation Disruption 12 Disruption Monitoring Disruption 8 Disruption Opportunities Disruption 4 Specific Disruption Threat Disruption 6 Symptoms of Disruption Disruption 5 Disruption Diagnosis Disruption 6 Enterprise Capabilities 43 Mitigation 12 Adaption 30 Enterprise Parent Failed Enterprise Enterprise 7 Non-Resilient Enterprise Enterprise 6 Resilient Enterprise Enterprise 6 Successful Enterprise Enterprise 3 Strategy Parent Growth Strategy Strategy 7 Spin-off Strategy Strategy 3 Merger/Acquisition Strategy 6 Transformation Parent (6) Business Model Transformation Transformation 30 Response to Disruption Parent (7) Effective Response Strategy Response to Disruption 4 Ineffective Response Strategy Response to Disruption 9 Resilience Characteristics Parent Diversification Resilience Characteristics 23 Learning and Innovation Resilience Characteristics 15 Disruption Factors to Prepare for Resilience Characteristics 2 Enterprise Values Resilience Characteristics 7 Execution 9 Leadership 2 Culture 5 What is the goal? 5 Total Number of Coded Text Segments 332 Table 2: Categories List 4 Discussion We organized the categories and themes into four components that composed what we called a business enterprise resilience model. x (figure 3). The following discussion shows how the business enterprise resilience model organize the actions and strategies that a resilient business enterprise prepares for and acts during strategic disruptions. 
Figure 3: Business Enterprise Resilience Model International Journal of Management Sciences and Business Research, Nov-2018 ISSN (2226-8235) Vol-7, Issue 11 http://www.ijmsbr.com Page 31 4.1 Strategic Disruptions The first component of the model as shown in figure 2 is the external and internal forces that may have the dynamics to create strategic disruptions. We live in a world of change and disruptions. Whether the economy is strong or weak, competition is fiercer than ever, and change comes faster than ever; and if a business wants to survive difficult times, it has to understand the dynamics of the external and internal forces, anticipate trends that may cause strategic disruptions, prepare itself to respond, then make the right shift in response to these strategic disruptions (Bossidy and Charan 2002). Out of the many disruptions that businesses face, a strategic disruption has three key elements that differentiate it from the run-of- the-mill disruptions that are so common in today's complex world (Schwartz and Randall 2007): It has an important impact on an organization; because it challenges the conventional wisdom "the official future," it is difficult to convince others to believe that the surprise is even possible; and it is hard to imagine what can be done in response. Strategic disruptions are game-changing phenomena. They do not happen very frequently, but when they occur, the rules of the game that were previously in place no longer apply (Schwartz and Randall 2007). Examples of strategic disruptions are; introduce new technologies, the emergence of new regulatory, the entrance of new competitor, the introduction of new business model, and changes in the availability of resources (Fiksel 2003). 4.2 System State The second component of the model as shown per figure 3 is the system state. According to the theory of ecological resilience, the system state is represented by a basin of attraction that represents the current regime of the system. 
The basin of attraction is a stable domain in which the system has specific characteristics and delivers specific services (Folke, Carpenter, et al. 2010). Complex adaptive systems like business enterprises are affected by many variables. However, they are usually driven by only a handful of key controlling variables. When the states of these controlling variables are within specific ranges, the feedback forces controlling the behaviour of the system ensure that it stays in a specific regime. The ranges are bounded by what are called thresholds. When the states of the controlling variables move outside the specific ranges (meaning, they cross the thresholds), the feedback forces controlling the behaviour of the system change, and the system shifts to another regime. This happens suddenly and in a very short time. After crossing the controlling variable thresholds, it is usually extremely difficult for the system to return to its original regime (Walker and Meyers 2004). In the case of business enterprises, the most critical controlling variable is the total demand for the products and services of their core businesses. When the total demand shrinks to the point that the control variable threshold is crossed, the regime of the business enterprise shifts to an unprofitable basin of attraction, from which the business enterprise will not be able to recover. For example, Kodak and Fujifilm both suffered a devastating decline in total demand for the products and services of their film core businesses when the digital age disruption gained momentum. That is why the enterprise has to monitor the trends and understand the forces that will impact the controlling variable of total demand. The business enterprise can take actions that mitigate the impact on the total demand variable, recover after the impact happens, or transform itself intentionally into a totally new profitable regime.
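The threshold dynamic described above can be illustrated with a minimal simulation. This is our own sketch, not taken from the cited literature: a single controlling variable ("total demand") is held in a profitable regime by a stabilising feedback while it stays above a threshold, and flips irreversibly to an unprofitable regime once the threshold is crossed. All names and numbers are hypothetical assumptions for illustration only.

```python
# Minimal illustrative sketch (assumption, not from the cited literature):
# one controlling variable ("total demand") governs the regime. While demand
# stays above a threshold, a stabilising feedback pulls it back toward a
# healthy level; once the threshold is crossed, the regime flips and the
# decline becomes self-reinforcing. All numbers are hypothetical.

def step(demand, threshold, regime):
    """Advance one period; the regime flips irreversibly below the threshold."""
    if regime == "profitable" and demand < threshold:
        regime = "unprofitable"           # threshold crossed: sudden regime shift
    if regime == "profitable":
        demand += 0.1 * (100.0 - demand)  # stabilising feedback toward demand = 100
    else:
        demand *= 0.9                     # self-reinforcing decline in the new regime
    return demand, regime

demand, regime = 100.0, "profitable"
for period in range(20):
    shock = 12.0 if period < 10 else 0.0  # a sustained external disruption
    demand = max(0.0, demand - shock)
    demand, regime = step(demand, threshold=40.0, regime=regime)

print(regime)  # → unprofitable
```

Note the asymmetry the model produces: removing the shock after the threshold has been crossed does not restore the old regime, which mirrors the point in the text that recovery after a regime shift is usually extremely difficult.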
4.3 Design Characteristics of Resilient Systems

The third component of the model, as shown in figure 3, is the set of design characteristics that make the enterprise a resilient enterprise. We can design business enterprises to absorb disruptions better, operate under a wider variety of conditions, and shift more fluidly from one circumstance to the next (Hills 2000). These characteristics enable the enterprise to apply the required resilience strategies to survive and persist when facing strategic disruptions (Reeves, Levin, et al. 2016). Figure 4 shows these design characteristics.

Figure 4: Design Characteristics of Resilient Systems

Diversity and redundancy in systems provide options for responding to change and disturbance and for dealing with uncertainty and surprise (Walker, Gunderson, et al. 2006). It is specifically response diversity, in combination with functional redundancy, that is important for maintaining system services in the face of disturbance and ongoing change (Walker, Gunderson, et al. 2006). Connectivity in systems generally facilitates the flow of energy, material or information necessary for the resilience of system services. The strength and structure of connectivity may safeguard system services against a disturbance either by facilitating recovery or by locally constraining the spread of a disturbance (Nyström and Folke 2001). Monitoring and managing the control variables requires understanding the forces that underlie the different system configurations and their associated levels of these control variables (Holling 2001). If the current system configuration is a preferable regime, the strategy typically focuses on avoiding changes in feedbacks and controlling variables that could cause the system to cross a critical threshold into another regime.
On the other hand, if the system is locked into an undesirable regime, it may be necessary to weaken the feedbacks that keep it there, to restore a previous regime or transform the system to an entirely new regime (Folke, Carpenter, et al. 2010). One example of a resilient enterprise is EMC, which faced a decline in the total demand for its high-tier, expensive, and proprietary storage solutions. EMC responded by reinforcing counter forces that kept the total demand checked within its threshold; it did this by increasing its customer base by selling lower-cost, lower-tier, and open solutions (Bossidy and Charan 2002). Fujifilm transformed itself and shifted to a new regime as a newly diversified business in which its film core business became a small fraction of its total business (Komori 2015). A business enterprise cannot be resilient against all possible types of disruptions, since this is economically impossible (May, Levin, et al. 2008). The enterprise has to find a way to understand the uncertainties that may define the trajectory of its future, and then design the resilience characteristics that make it resilient against these uncertainties. Scenario planning is a tool that can help in this regard (Schoemaker 1991). Scenario planning is a disciplined method for articulating the possible futures that may evolve, taking into consideration the most critical uncertainties that drive these scenarios (Schoemaker 1991). Knowledge of complex systems like business enterprises is always partial and incomplete, so for the system to be resilient, it must have the capacity for continuous learning (Holling 1996). Creating, testing and designing experiments to explore alternative options is an important way to support learning and innovation and to enhance the resilience of the enterprise. Resilient business enterprises build in-house core capabilities that are valuable, rare, inimitable and non-substitutable (Barney 1991).
These core capabilities will be the base for transformation, built on diversifying their uses and applications. For example, Fujifilm's capability in nanotechnology for applying chemicals to film was carried over to applying cosmetics to facial skin.

4.4 Resilient Strategies

Folke, Carpenter, et al. (2002) introduced three kinds of resilient strategies: mitigation, adaptation, and transformation. They are used by systems based on the available time and the level of control over the disruption, as shown in figure 5.

Figure 5: Resilient Strategies

Mitigation strategy (figure 6) is the capacity to initiate counter forces that keep the control variables checked within their thresholds or delay crossing these thresholds. This prevents or delays the expected impactful changes in the structure and critical feedbacks that cause the system to flip into an alternate, undesirable stability regime (Walker and Meyers 2004). As an example, Fujifilm launched research on raising the film's level of light sensitivity so that a flash was unnecessary. Also, the grain was made even smaller, increasing resolution. The goal was to produce an image from photo film that was far superior to anything from digital technology. Fujifilm did this to extend the life of its photosensitive materials business by raising analog image quality to a level beyond digital reach. This strategy acted as a counterforce and kept the total demand for photo film at a reasonable level, giving Fujifilm precious needed time to launch other strategies (Komori 2015).

Figure 6: Mitigation Strategies

Adaptation strategy (figure 7) in this context represents the capacity to adjust responses to changing external drivers, controlling variables and internal processes, and thereby allow for a return to the current trajectory (stability domain).
It takes the system into a temporary recovery state in which adaptive responses work to recross the control variable thresholds, return to the current regime, and then move away from the control variable thresholds (Walker and Meyers 2004).

Figure 7: Adaptation Strategies

Transformability (figure 8) is the capacity to cross thresholds into new development trajectories. It is the capacity of the system to literally transform itself into a different kind of system. Transformability becomes very important when a system is in a stable regime that is considered undesirable, and it is either impossible, or getting progressively harder, to engineer a 'flip' to the original or some other regime of that same system. The system will have a different identity (Folke, Carpenter, et al. 2010).

Figure 8: Transformation Strategies

Resilience strategies are implemented through changes to the architecture of business enterprises and through strategies that are executed by these business enterprises in their markets. Changes to the business enterprise happen at three cascaded configuration levels: the business model level, the competitive strategy level, and the operating model level (Teece 2010). Each level gives a context to the next level, as per figure 9.

Figure 9: Enterprise Configuration Levels

The first, foundational, and highest level is the business model level. Whenever a business enterprise is established, it either explicitly or implicitly employs a particular business model that describes the design or architecture of the value creation, delivery, and capture mechanisms it employs. The essence of a business model is in defining the manner by which the enterprise delivers value to customers, entices customers to pay for value, and converts those payments to profit.
It thus reflects management's hypothesis about what customers want, how they want it, and how the enterprise can best organize to meet those needs, get paid for doing so, and make a profit (Teece 2010). A good business model yields value propositions that are compelling to customers, achieves advantageous cost and risk structures, and enables significant value capture by the business that generates and delivers products and services. Designing a business model correctly, and figuring out and then implementing commercially viable architectures for revenues and for costs, are critical to enterprise success (Fisken and Rutherford 2002). The second level is the competitive strategy level, which is the creation of a unique and valuable position involving a different set of coordinated activities (Porter 1985). Selecting a competitive strategy is a more granular exercise than designing a business model (Chesbrough and Rosenbloom 2002). Competitive strategy protects the competitive advantage that results from the design and implementation of business models, as it creates various isolating mechanisms to prevent the business model from being undermined through imitation by competitors or disintermediation by customers (Harreld, O'Reilly, et al. 2007). The third level is the operating model level, which depicts how the business operates through its process architecture. Business processes describe the work performed by all resources involved in creating outcomes of value for customers and other stakeholders. The operating model depicts how the business model and business strategy are operationalized and executed. The business enterprise's operating model captures the work done by the enterprise on a daily basis (Winter and Fischer 2006).
When a resilient business enterprise faces a strategic disruption that impacts a critical controlling variable, such as the total demand for its products and services or the ability of the enterprise to meet this total demand, it applies a mix of the resilience strategies of mitigation, adaptation, and transformation. Each of these three resilience strategies has a mission and delivers specific types of outcomes within the larger scheme of responding to the strategic disruption. Each of these three resilience strategies is implemented through changes at one or more of the three enterprise configuration levels: the business model level, the competitive strategy level, and the operating model level, as shown in figure 10.

Figure 10: Resilience strategies used by resilient enterprises facing strategic disruptions

The mitigation strategy, in the context of addressing strategic disruptions by business enterprises, is applied to either reverse or at least slow down the impact of a strategic disruption on the critical controlling variables of a business enterprise. The resilient business enterprise applies the resilience mitigation strategy by changing the operating model of the business enterprise. Taking the operating model of the business enterprise to its efficiency frontier to maximize the return on business operations, reduce cost, and ensure resource availability is the primary intention of the mitigation strategy. Changing the operating model in this way has two outcomes: the first is reversing or slowing down the negative impact of the strategic disruption on the critical controlling variables, and the second is accumulating more resources that will be needed if a subsequent transformation phase happens.
The adaptation strategy, in the context of addressing strategic disruptions by business enterprises, is applied to recover from the impact of a strategic disruption on the critical controlling variables of a business enterprise. This happens after the negative impact gains momentum, putting the current regime of the business enterprise in a very critical position. Usually, resilient business enterprises do not wait that long unless there is no way to reverse the trend; they deal with this by smoothly phasing out or shrinking the current regime and, in parallel, launching a transformation strategy. The resilient business enterprise applies the resilience adaptation strategy by changing the competitive strategy level of the business enterprise. The resilient business enterprise applies a "scaling down" strategy to match the impact of the strategic disruption on the critical controlling variable. The goal of the adaptation strategy is to survive the impact, minimize cost, liquidate the released resources and add them to the resource base needed during the transformation strategy phase. The transformation strategy, in the context of addressing strategic disruptions by business enterprises, is applied to deliberately design a switch of the business enterprise to a new regime. The resilient business enterprise applies the resilience transformation strategy by changing the business model level of the business enterprise. The activities done within both the mitigation and adaptation strategies enable the resilient enterprise to survive the impact and accumulate the required resources for the transformation strategy to work. The goal of the transformation strategy is for the resilient enterprise to redesign its business model by reconfiguring its accumulated resources and capabilities and by using its stock of innovations and experiments, applying them to create and deliver different kinds of value to different areas of the marketplace.
The transformation strategy shakes the very foundation of the enterprise, transforms it into a different kind of enterprise, and changes its identity (Folke, Carpenter, et al. 2010).

4.5 Fujifilm's Application of the Resilience Strategies

Fujifilm is an enterprise that faced the strategic disruption of the digital age. Its main competitor, Kodak, also knew that the winds of the digital age were blowing, but at the end of the storm, Fujifilm had transformed itself into a newly diversified enterprise while Kodak failed. Fujifilm anticipated the future and was quick to adapt. Figure 11 summarizes the resilience model of Fujifilm, which explains how it transformed itself in response to the strategic disruption of the digital age.

Figure 11: Resilience Model for Fujifilm

The Strategic Disruption – The Digital Age

Fujifilm faced a big threat that would require a fundamental change in the organization. That threat was the digital age and the radical transformation in the photography market that accompanied it. By the beginning of the 1980s, industry watchers were already predicting that silver-based photosensitive materials would one day be an endangered species. In Fujifilm's principal imaging fields of photography, printing and medical, the first signs of digitalization had already begun to appear. The digital age drew steadily nearer and nearer to Fujifilm's core (Komori 2015). Fujifilm anticipated that the digital age would be different. It would be a world in which Fujifilm's proprietary technical expertise (the photography technology built up over the years, including the high-precision coating of chemicals on film) would no longer be relevant (Komori 2015).

The Mitigation Strategy – Move to the Efficiency Frontier

The mitigation strategy was to extend the life of the photosensitive materials business by raising analog image quality to a level beyond digital reach. The fact was that photosensitive materials using a silver halide base still had a good deal of room for improvement.
Fujifilm launched research on raising the film's level of light sensitivity so that a flash was unnecessary. Also, the grain was made even smaller, increasing resolution. The goal was to produce an image from photo film that was far superior to anything from digital technology. The mitigation strategy of extending the life of the photosensitive materials business delayed the impact of the approaching digitization strategic disruption. Demand decreased at a much slower rate than in the rest of the industry, and this gave Fujifilm the critically needed time to redesign and reorganize.

The Adaptation Strategy – Scale Down

The photographic film business is built atop a giant industry infrastructure. Fujifilm had large-scale factories in Japan, the United States, and the Netherlands, as well as photofinishing labs in one hundred fifty locations throughout the world. Maintaining facilities on this scale was extremely costly. Once sales started to drop, they dropped without stopping, and deficits led to more deficits. All these costs had to be reduced without a second thought. Fujifilm decided not to abandon the photographic film market totally; instead, it reorganized the business to ensure a stable flow of profit. This necessarily involved some serious downsizing to create a smaller, more flexible business that was in keeping with current demand. Fujifilm implemented serious structural reforms, in which it reorganized the photographic film business, including Fujifilm's global, large-scale manufacturing plants and sales organizations, research centers, and photofinishing labs (Komori 2015).

The Transformation Strategy – Business Model Transformation

Fujifilm anticipated that the digital world would be a world of ruthless price-cutting.
Even though Fujifilm had succeeded in producing a digital camera and had come to terms with digitalization, those milestones were not enough to recapture the former profitability of the film market. Fujifilm had to create a highly profitable core business in its place. Fujifilm developed an inventory of its technical stock, its technological seeds. It compared these seeds with the demands of international markets. It then mapped its technology seeds to markets (Komori 2015). The enterprise applied its technical capabilities and diversified into the businesses of Digital Imaging, Optical Devices, Highly Functional Materials, Graphics Systems, Document Solutions, and Healthcare (Komori 2015).

5 Conclusions and Future Work

This study introduces a resilience model that explains how resilient business enterprises survive strategic disruptions. The model proposes designing the enterprise for resilience by applying the design characteristics of resilience, including diversity, redundancy, connectedness, control variable monitoring, scenario planning, learning, and developing core enterprise capabilities. Building the enterprise for resilience prepares it to execute the resilience strategies. The resilience model suggests monitoring trends and anticipating potential strategic disruptions. The resilience approach uses the resilience strategies of mitigation, adaptation, and transformation and executes them at the right times and in the right combinations in a way that enables resilient business enterprises to face, survive and thrive during deep strategic disruptions. The model applies these strategies at three levels: the operating model level, the competitive strategy level, and the business model level. The mitigation strategy moves the operating model to the efficiency frontier. The adaptation strategy scales down the business and recovers from the impacts of the strategic disruptions.
The transformation strategy changes the business model, which transforms the enterprise into a new identity. One area of future work is to categorize strategic disruptions and to create a model for understanding the forces that underlie their dynamics. Another area of future work is to investigate an approach for changing the enterprise configuration at the three levels: the operating model, the competitive strategy, and the business model.

6 Bibliography

i. Anthony, S. (2016). "Kodak's downfall wasn't about technology." Harvard Business Review.
ii. Barney, J. B. (1991). "Firm Resources and Sustained Competitive Advantage." Journal of Management 17: 99-120.
iii. Bossidy, L. and R. Charan (2002). Execution: The Discipline of Getting Things Done. Crown Business, New York.
iv. Chesbrough, H. and R. S. Rosenbloom (2002). "The role of the business model in capturing value from innovation: evidence from Xerox Corporation's technology spin-off companies." Industrial and Corporate Change 11(3): 529-555.
v. Cinner, J. E., et al. (2012). "Comanagement of coral reef social-ecological systems." Proceedings of the National Academy of Sciences 109(14): 5219-5222.
vi. Dooley, K. J. (2002). Organizational Complexity. International Encyclopedia of Business and Management, M. Warner (ed.), London: Thompson Learning.
vii. Ellis, F. (2000). "The determinants of rural livelihood diversification in developing countries." Journal of Agricultural Economics 51(2): 289-302.
viii. Fiksel, J. (2003). "Designing Resilient, Sustainable Systems." Environmental Science and Technology.
ix. Fisken, J. and J. Rutherford (2002). "Business models and investment trends in the biotechnology industry in Europe." Journal of Commercial Biotechnology 8(3): 191.
x. Folke, C., et al. (2002). "Resilience and sustainable development: building adaptive capacity in a world of transformations." Ambio 31: 437-440.
xi. Folke, C., et al. (2010).
"Resilience Thinking: Integrating Resilience, Adaptability and Transformability." Ecology and Society 15(4).
xiii. Forman, J. and L. Damschroder (2007). Qualitative content analysis. Empirical Methods for Bioethics: A Primer, Emerald Group Publishing Limited: 39-62.
xiv. Gavetti, G., et al. (2007). "Fujifilm: A second foundation." Harvard Business School Case #9-807-137.
xv. Gunderson, L. H., et al. (2002). "A summary and synthesis of resilience in large-scale systems." SCOPE – Scientific Committee on Problems of the Environment, International Council of Scientific Unions 60: 249-266.
xvi. Hamel, G. and L. Välikangas (2003). "The Quest for Resilience." Harvard Business Review.
xvii. Harreld, J. B., et al. (2007). "Dynamic capabilities at IBM: Driving strategy into action." California Management Review 49(4): 21-43.
xviii. Hills, A. (2000). "Revisiting Institutional Resilience as a Tool in Crisis Management." Journal of Contingencies and Crisis Management 8(2): 109-118.
xix. Holling, C. S. (1973). "Resilience and Stability of Ecological Systems." Annual Review of Ecological Systems (4).
xx. Holling, C. S. (1996). "Engineering resilience vs. ecological resilience." Engineering Within Ecological Constraints: 31-43.
xxi. Holling, C. S. (2001). "Understanding the Complexity of Economic, Ecological, and Social Systems." Ecosystems (4): 390-405.
xxii. Hsieh, H.-F. and S. E. Shannon (2005). "Three approaches to qualitative content analysis." Qualitative Health Research 15(9): 1277-1288.
xxiii. Johnson, D. W. (2015). Fujifilm's "Moment": Disruption, Adaptation and Health System Transformation. 4sightHealth.
Retrieved from http://www.4sighthealth.com/fujifilms-moment-disruption-adaptation-and-health-system-transformation/.
xxiv. K.N.C. (2012). How Fujifilm survived – Sharper focus. The Economist. Retrieved from https://www.economist.com/schumpeter/2012/01/18/sharper-focus.
xxv. Komori, S. (2015). Innovating Out of Crisis: How Fujifilm Survived (and Thrived) as Its Core Business Was Vanishing. Stone Bridge Press, Inc.
xxvi. Longstaff, P. H., et al. (2010). "Building Resilient Communities: A Preliminary Framework for Assessment." Homeland Security Affairs VI(3).
xxvii. May, R. M., et al. (2008). "Complex systems: Ecology for bankers." Nature 451(7181): 893-895.
xxviii. Ng, D. (2017). "How Fujifilm survived the digital age with an unexpected makeover." Channel NewsAsia. Retrieved from https://www.channelnewsasia.com/news/business/how-fujifilm-survived-the-digital-age-with-an-unexpected-makeove-7626418.
xxix. Nyström, M. and C. Folke (2001). "Spatial resilience of coral reefs." Ecosystems 4(5): 406-417.
xxx. Porter, M. E. (1985). Competitive Advantage: Creating and Sustaining Superior Performance. Free Press, New York.
xxxi. Reeves, M., et al. (2016). "The biology of corporate survival." Harvard Business Review 94(1): 2.
xxxii. Schoemaker, P. J. (1991). "When and how to use scenario planning: a heuristic approach with illustration." Journal of Forecasting 10(6): 549-564.
xxxiii. Schwartz, P. and D. Randall (2007). Ahead of the Curve: Anticipating Strategic Surprise. Monitor Group.
xxxiv. Teece, D. J. (2010). "Business models, business strategy and innovation." Long Range Planning 43(2): 172-194.
xxxv. Walker, B., et al. (2006). "A Handful of Heuristics and Some Propositions for Understanding Resilience in Social-Ecological Systems." Ecology and Society 11(1): 13.
xxxvi. Walker, B. and J. A. Meyers (2004). "Thresholds in Ecological and Social-Ecological Systems: a Developing Database." Ecology and Society 9(2).
xxxvii. Winter, R. and R. Fischer (2006).
Essential Layers, Artifacts, and Dependencies of Enterprise Architecture. Proceedings of the 10th IEEE International Enterprise Distributed Object Computing Conference Workshops (EDOCW'06), IEEE.

INHERENT COMPLEXITY: ROOM CRITERIA SELECTION AND ACCESSIBLE ACCOMMODATION INFORMATION FORMAT PROVISION FOR PEOPLE WITH DISABILITIES

Please reference as: Darcy, S. (2010). Inherent complexity: Disability, accessible tourism and accommodation information preferences. Tourism Management, 31(6), 816-826.

Inherent complexity: Disability, accessible tourism and accommodation information preferences.

Simon Darcy
Faculty of Business
University of Technology, Sydney
P.O. Box 222, Lindfield NSW 2070
Ph: 61 2 9514-5100
simon.darcy@uts.edu.au

ABSTRACT

Studies have identified constraints with the way that accessible accommodation information is documented and marketed. Yet no research has investigated the criteria that people with disabilities determine as 'important' to selecting accommodation, or their preferences for how this information is presented. This paper presents the results of a survey to determine the relative importance of room selection criteria through the development of a 55-item Hotel Accessibility Scale. Four information formats were then presented to ascertain the preferences of the respondents.
The results suggest that while socio-demographic variables offered some insight into criteria selection, the most significant explanations for criteria selection and information preferences were the dimensions of disability and the level of support needs. The preferred format for provision of accessible accommodation information was a combination of textual description, floorplans and digital photography. The management implications suggest that detailed information provision using this format has benefits for accommodation stock yield and social sustainability.

Keywords: Hotel Accessibility Scale; accessible tourism; disability; accommodation

1 INTRODUCTION

A great deal of research has investigated consumer selection criteria for hotels, evaluation of service quality and benchmarking determinants that may contribute towards hotel selection (e.g. Bell & Morey, 1996; Callan, 1998; Hsieh, Lin, & Lin, 2008; Nash, Thyne, & Davies, 2006; Warnken, Bradley, & Guilding, 2005). At the same time, there has been a series of well-documented constraints and problems that people with disabilities (PwD) encounter with accessible tourism accommodation (Access For All Alliance (Hervey Bay) Inc, 2006; Bi, Card, & Cole, 2007; Daniels, Drogin Rodgers, & Wiggins, 2005; Darcy, 1998, 2002a; Innes, 2006; Tantawy, Kim, & SungSoo, 2004). These issues are not confined to Australia but are a universal experience of PwD wanting to travel. Based on Australian and international academic research, the major issues identified were that accessible accommodation information is poorly documented, not detailed enough and not room specific, and that accessible rooms do not have an amenity equal to that of nondisabled rooms. From a supply perspective (Darcy, 2000; O'Neill & Ali Knight, 2000), owners and managers do not recognise disability as a market and, hence, do not promote the rooms in a manner appropriate for PwD to make an informed choice about their accommodation needs.
In addition, accommodation managers report low occupancy of the accessible rooms and that non-disabled customers do not like using accessible rooms (Australian Hotels Association, 1998; Davis, 1981; Healey, 2008). As suggested by Packer, McKercher and Yau (2007), there is a complex interplay between the individual, the tourism context and the environment, where, in this case, little is understood about the criteria that consumers regard as being important to their choice of accessible accommodation. Further, there has been no research investigating the ways in which the criteria should be presented through accommodation management information systems. This research seeks to redress the situation.

2 LITERATURE

It has been noted that tourism experiences for PwD involve more than access issues (Shelton & Tucker, 2005; Stumbo & Pegg, 2005; Yau, McKercher, & Packer, 2004). Yet, for people with mobility disabilities, a foundation of any tourism experience is having accessible destinations (Israeli, 2002) and locating appropriate accommodation from which to base oneself while travelling (Darcy, 2002a). Quite simply, to stay a night away from their normal residence requires appropriate accommodation that allows access to a bedroom and bathroom as a base for their stay. As shown in Figure 1, Dwyer and Darcy (2008) use the Australian National Visitor Survey demographic data to identify the statistically significant differences between the comparative travel patterns of PwD and the nondisabled. While day trips occur at the same level (p = .992), the nondisabled travel at a 21% higher rate for overnight stays (p = .000) and a 52% higher rate for overseas travel (p = .000). Other studies have identified that problems with finding accessible accommodation during the travel planning stage were a significant constraint to overnight and overseas travel.
Figure 1: Comparative travel patterns between PwD and the nondisabled

Two studies specifically identified the relative degree of impairment, mobility aid used and level of independence as significant influences on tourism requirements and accommodation choice (Burnett & Bender-Baker, 2001; Darcy, 2002a). Studies in Australia (Access For All Alliance (Hervey Bay) Inc, 2006; Darcy, 1998; Market and Communication Research, 2002; Murray & Sproats, 1990) and overseas (Burnett & Bender-Baker, 2001; Daniels et al., 2005; Harris Interactive Market Research, 2003; Shaw & Coles, 2004; Turco, Stumbo, & Garncarz, 1998) have shown that PwD have indicated that there are serious constraints and problems with locating accessible accommodation. Intertwined with locating accessible accommodation is the planning of the trip, accessing information, negotiating directly with providers or, less frequently for PwD, engaging travel agents (Darcy, 1998; McKercher, Packer, Yau, & Lam, 2003). These complexities are further compounded by the way that information is documented, promoted and marketed by the accommodation sector in particular. In the Australian context, these studies have been validated by the Human Rights and Equal Opportunity Commission (HREOC) complaints cases and Federal court actions (Human Rights and Equal Opportunity Commission, 2006) taken by PwD against accommodation providers. A number of studies have reviewed these issues through examining the actions taken under disability discrimination legislation (Darcy, 2002b; Darcy & Taylor, 2009; Goodall, 2002; Goodall, Pottinger, Dixon, & Russell, 2004; Miller & Kirk, 2002; Shaw, 2007).

2.1 Access and the Built Environment

In the Australian context, the government regulates all aspects of the built environment through legislation, codes, standards and development control processes (Bates, 2006; Stein & Farrier, 2006).
Central to the environmental planning process is that the Building Codes of Australia (Australian Building Codes Board, 1996) must address access and mobility through the Australian Standards (AS1428 Parts 1-4) (Standards Australia, 1992a, 1992b, 2001, 2002). Within the building codes, tourism accommodation is referred to as Class 3 development, where a proportion of accommodation (approximately 5%) is required to be accessible for PwD. Australian Standards AS1428 for access and mobility have specific requirements for this class of accommodation. The requirements for the built environment operate in parallel to the Disability Discrimination Act, 1992 (DDA), which makes it illegal to treat a person differently before the law because of their disability. Similar approaches to the built environment and disability discrimination are found in most Western nations (e.g. the ADA in the United States and the DDA in the United Kingdom). The outcome of the building codes and the disability discrimination legislation is that Australians with disabilities have a right to accessible accommodation that must meet stringent access criteria. The accessibility requirements of AS1428 involve thousands of detailed measurements and protocols that shape the built environment. In the past, this complex information was interpreted through the Australian Automobile Association (AAA) AAA Tourism accreditation and presented in their mainstream accommodation directories through a dual access icon rating system – wheelchair “independent access” or wheelchair “access with assistance” (see Darcy, 2007). The iconic representations are based on the Australian Council for Rehabilitation of the Disabled (ACROD) assessment tool for hotels and motels developed for AAA Tourism (ACROD, 1994, 1999), which is, in turn, based on AS1428. This system was withdrawn from the accommodation directories because the third-party assessment criteria are being reviewed (AAA Tourism, 2006).
While no explanation is given as to the reason for the review, there had been a series of HREOC complaint cases about the accessibility of tourism accommodation (HREOC, 2006). To this point in time, no replacement access information has been provided by this organisation. Disability and access requirements are dynamic and evolving, in the same way that the spirit and intent of the DDA surpassed the previous conceptualisations of the mobility, hearing, vision and cognitive dimensions of access. The effect of the DDA on the Building Codes of Australia and AS1428 created an ‘uncertainty’ in the development processes from an industry perspective. After intense lobbying, the Australian Building Codes Board (2004b) entered into a process with the Commonwealth Attorney General's Department and the HREOC (2004) to harmonise the DDA with the Building Codes through the development of a Draft Disability Standard for Access to Premises (Commonwealth Attorney General's Dept., 2004, 2008). While agreement exists as to what constitutes accessibility across the four dimensions of access, there is significant industry resistance over the level of compliance and the number of access rooms to be included within tourist accommodation (Darcy, 2004; Innes, 2006). Part of the concern involves the perceived cost of access inclusions and the relative occupancy of current accessible accommodation stock (Australian Hotels Association, 1998). However, as previous research and the HREOC complaint cases have identified, a great deal of the ‘disabled rooms’ built as accessible accommodation does not comply with AS1428. It has been suggested that these breakdowns in compliance are an aggregation of professional misunderstanding at the planning, design, construction and operation phases of development. Yet, the processes developed for the Sydney 2000 Olympic and Paralympic Games have shown that with political will the accessibility of the built environment can be radically improved (Cashman & Darcy, 2008).
Currently there is also an undertaking by Tourism Australia and all state tourism organisations (STOs) to work towards including accessible accommodation on the Australian Tourism Data Warehouse (ATDW) (Tully, 2006). The ATDW ‘provides a central distribution and storage facility for tourism product and destination information. The information is compiled in a nationally agreed format and electronically accessible by operators, wholesalers, retailers and distributors for use in their web sites and booking systems’ (Australian Tourism Data Warehouse, 2006). It has approximately 22,000 product listings that are distributed to Tourism Australia's online website, the STOs' websites and a series of commercially operated websites. Yet, this opportunity is absent for accessible tourism operators who have good access provisions but have no agreed format to list on the ATDW. As there is no accessible tourism information available electronically from the ATDW, day-trippers, domestic tourists and international tourists with access requirements are effectively excluded from the benefits of electronically accessing the designated premier search engines of the national tourism organisation (NTO) and the STOs.

2.2 Accommodation Research

Given the recent action by AAA Tourism and the position of the Draft Disability Standards for Access to Premises, it is an opportune time for the accommodation sector to take stock of accessible accommodation. Yet, outside of the identification of accommodation as a constraint to travel, very little Australian or overseas research has been done on the importance of criteria choice for accessible tourism accommodation. Israeli's (2002) preliminary work on site accessibility provided an understanding of the importance of the seven components that need to come together to make a site accessible.
The Australian and overseas studies identified the constraints to accessible accommodation provision as:
• the lack of accessible accommodation;
• accessible accommodation that did not comply with the access standards;
• the importance of accommodation to trip satisfaction;
• problems locating accessible accommodation even when it did exist; and
• the level, detail and accuracy of information about accommodation being inadequate (Access For All Alliance (Hervey Bay) Inc, 2006; Darcy, 1998; Market and Communication Research, 2002; Murray & Sproats, 1990).

Relatively few quantitative studies of the tourism experiences of PwD have been undertaken. Almost all of these studies involved mobility disability, and a number of significant differentiators have been noted that provide insights into whether PwD travel, the sectors that they interact with and the relative levels of accessibility. Darcy (1998, 2002a) identified impairment, independence, level of support needs and mobility aid as being statistically significant determinants of where a person stayed and how often they travelled. He also found that the demographic variables of income, age and lifestyle circumstances had a significant effect on accommodation choice. Burnett & Bender-Baker's (2001) study on the travel criteria of people with mobility disabilities included four criteria for accessible accommodation and found that the level of support needs was a significant differentiator of PwD travel criteria. They also found that gender, age, income, marital status and employment status were significant components. With the specifics of accommodation, they found that over two thirds of respondents would travel more if they felt welcome at accommodations and over 70% said they would travel more if they could locate accessible accommodation more easily.
Further, they identified seven changes that PwD would make to improve their stay in the future: make floor surfaces easy to push on; extend or motorise drape pulls; widen hallways; change the direction in which doors swing open; place light switches close to the bed; place phones close to the bed; and reduce the amount of furniture. The two previous supply-side perception studies concluded that accommodation managers did not understand the access features of their rooms or provide any level of detailed information beyond whether an establishment had a ‘disabled room’ (O'Neill & Ali Knight, 2000; Tantawy et al., 2004). Supporting these findings, a market research study that looked at what PwD most wanted in the way of product development concluded that accurate and detailed information about accommodation was a prerequisite for determining their destination of choice (Market and Communication Research, 2002). Further, in 2005 the Sustainable Tourism Cooperative Research Centre funded a workshop to set a national research agenda for disability tourism that involved stakeholders from all sectors of the industry, government and disability advocacy. Two of the outcomes have direct relevance, in that one of the key areas of recommendation was the improvement of information provision generally and, with specific reference, in the accommodation sector (Darcy, 2006). A number of recent studies have used quantitative approaches to ascertain accessibility and attitudinal barriers to transport, accommodation, hospitality and attractions in the USA and China (Avis, 2005; Bi et al., 2007). Statistically, each of these studies produced different yet significant results, with Avis suggesting that gender and age provided some explanation for the different levels of accessibility required by the group, and Bi, Card and Cole suggesting that functional ability was a major influence on the perceived accessibility of accommodation.
While the cultural context may provide some explanation for these differences, the statistical results may be tempered because the accessibility scales for the industry sector camouflage the complexity and interplay between disability, the environment and tourism (Packer et al., 2007). Reducing the accessibility of any industry sector to a single scale measure is fraught with difficulty due to the multivariate nature of any of the sectors' accessibility considerations. For example, Israeli (2002) identified some seven basic considerations for destination site accessibility.

2.3 Seniors with Access Needs

The other market segment with a nexus to PwD is seniors with access needs. There is a well-established nexus between ageing and increasing levels of disability and access needs over the lifespan (Australian Bureau of Statistics, 2004; World Health Organization, 2007). While a great deal is known about the senior traveler in Australia and overseas (Fleischer & Pizam, 2002; Glover & Prideaux, 2009; Horneman, Carter, Wei, & Ruys, 2002; Queensland Office of Ageing, 1998; Ruys & Wei, 1998), relatively few studies have provided a detailed account of the constraints faced by the group (Fleischer & Pizam, 2002). Yet, each of the studies recognizes some inherent constraints and facilitators to tourism, with a proportion of senior travelers having specific access requirements and accommodation being a significant issue. A study of the accommodation needs of mature travelers by Ruys & Wei (1998) identified that a proportion of senior travelers had accommodation access needs. In their study, five major dimensions were identified as important to mature travelers: safety, convenience, security, service, and comfort and recreation. The five dimensions included 44 criteria, many of which PwD would interpret as central to access and mobility as outlined by AS1428.
The study concluded by recognising the accommodation sector's ageing client base and suggested that changes to design and planning could improve the peace of mind and satisfaction of senior travelers.

2.4 Summary

In summary, the background context and literature clearly identify that accessible tourism is both an issue and a significant emerging market that the global tourism industry must plan to address sooner rather than later. The Australian literature on accessible tourism identifies that there have been significant issues with respect to locating, gaining reliable information about and having satisfying accessible accommodation experiences. The contemporary Australian situation has seen three convergences that make an investigation of access room criteria and accessible tourism information provision timely: the withdrawal of the AAA Tourism assessment of accessible accommodation; the work of the NTO and STOs to operationalise access within the ATDW; and the recent identification of accessible accommodation information as a strategic research agenda. To this point in time, no research has been published that has tested which access room criteria are important to PwD and which formats of accommodation information provision are acceptable to them. As such, research related to this issue is clearly warranted.

3 RESEARCH QUESTIONS

1. What are the key selection criteria for accessible rooms on which PwD make decisions about whether the rooms suit their access needs?
a. Are there any differences between the respondents' preferences based on the demographic variables collected as well as disability type and level of independence?
2. Which of the three industry-standard and one innovative accessible accommodation information formats do PwD prefer?
a. Are there any differences between the respondents' preferences based on the demographic variables collected as well as disability type and level of independence?
3.
Is there congruence between the information presented and the respondents' assessment of the accessible rooms?

4 RESEARCH DESIGN

The research design employed an online questionnaire to draw a sample of PwD through an electronic snowballing technique (Dillman, 2000; Veal, 2006). Specifically, it targeted the population of PwD who use accessible rooms designated under the Building Code of Australia while travelling. This population was then asked to rate the relative importance of room criteria for their accommodation choice and to indicate their information format preferences. The sample was drawn from over a hundred disability, seniors and government organisations through an electronic snowballing technique that took place over the second half of 2007. An information notice about the research was formulated and circulated electronically to the organisations with a link to the online questionnaire. The organisations then provided the notice to their members through either: direct e-mail; inclusion within electronic or hard copy newsletters; placement on their website notices; or distribution through some other means. This form of electronic snowballing has proved successful in previous research.

4.1 The Hotel Accessibility Scale

Taking direction from Ruys and Wei (1998), recent overseas research (Europe for All, 2007), the Building Code of Australia (Australian Building Codes Board, 1996, 1997, 2004a, 2004b), the Draft Standards for Access to Premises (Commonwealth Attorney General's Dept., 2004, 2008) and the referenced Australian Standards for Access and Mobility (Standards Australia, 1992a, 1992b, 2001, 2002), the Hotel Accessibility Scale (HAS) was developed to test the relative importance of room criteria for PwD. Some 55 individual items were tested on a five-point Likert scale from 1 ‘not at all important’ to 5 ‘very important’.
The HAS was tested for internal reliability (Cronbach Alpha coefficient), subjected to a principal component analysis to ascertain the relative grouping of items and tested for between-group differences through the Chi-square test for independence, independent samples t-tests and ANOVA (sociodemographic, disability and level of support need variables) (Pallant, 2007).

4.2 The questionnaire

An online and a paper-based questionnaire were then prepared. The questionnaire was also prepared in alternative information formats for people who were blind or visually impaired, including an option for Braille, large print and alternative completion through the provision of phone assistance. The questionnaire consisted of five parts: demographic profile; impairment specific profile; accommodation attributes; accommodation information preferences; and travel patterns. These five parts were organised into 35 questions collecting approximately 180 variables, including three open-ended qualitative questions. A copy of the instrument is included with the online version of the article.

4.3 Accommodation information preferences

Together with industry assistance (Accor & Youth Hostels Australia), a preliminary assessment of hotel accommodation stock in Sydney was undertaken to determine hotels and accessible accommodation of ‘best practice’. Access audits of the premises and of the best accessible rooms in the establishments were then conducted based on AS1428 (Standards Australia, 2001) and universal design principles (Preiser & Ostroff, 2001). Information was then prepared in three industry-standard formats and a fourth, innovative format. The formats drew on iconic, textual, spatial and digital photographic approaches. Specifically, the access information was presented in the four following ways:
1. AAA Tourism access icons (Australian Automobile Association, 2005);
2. textual presentation (Australian Quadriplegic Association, 2002; Fodor's, 1996);
3.
textual and spatial presentation (Cameron, 2000; City of Melbourne, 2006);
4. textual, spatial and digital photography (Eichhorn, Miller, Michopoulou, & Buhalis, 2008; Europe for All, 2007).

4.4 Product testing

A number of respondents (n = 6) who answered the online questionnaire were recruited to product test the accessible accommodation to see whether the information preference formats matched the actual rooms. After a site visit, the respondents were first individually interviewed and then brought together for a focus group. The interviews were taped and transcribed for analysis, while the focus group was facilitated and a note taker documented the emergent points of discussion.

4.5 Limitations

While the sampling method of electronic snowballing is an efficient means of contacting organisations of PwD and those with access needs, there are limitations to the method with respect to those who have access to the internet and those members who regularly check their organisational website or their electronic or hard copy publications. Further, as noted in the discussion of the sample characteristics below, the electronic snowballing technique may have created a level of non-completion (1070 people responded with 566 fully completed), as professionals associated with disability, building and the accommodation sector who were not the primary population for the study took the opportunity to review the research instrument online without completing the questionnaire. However, this did not compromise the integrity of the study because non-completed questionnaires were excluded from the analysis. A number of these people contacted the research team and provided insight into accessible accommodation that was used in another part of the research.

5 FINDINGS

5.1 Sample

Over 1070 people responded to the survey, with 566 fully completed questionnaires used for the analysis.
An extensive profile of sociodemographic and psychographic variables was collected, together with respondents' travel patterns, accommodation preferences and information sources. Of these, 58 percent were female and 42 percent male, with a relatively even distribution across age groups. The dominant lifestyle groups were midlife singles, older working couples, younger singles living at home and older non-working couples. The sample was well educated, with 48 percent having a University qualification and 20 percent TAFE educated. The majority of people were full-time (33%) or part-time (17%) employed, with 24 percent retired or receiving a pension. Over 75 percent were Australian-born, with a low affiliation to other cultural or ethnic groups (8%). In comparison to the Australian national statistics on disability (Australian Bureau of Statistics, 2004), the sample has a higher proportion of people with mobility disabilities, similar proportions of people with vision, hearing and cognitive disabilities, and an under-representation of those with mental health disabilities. This was expected, as accessible accommodation standards are focused on those with mobility, vision and hearing disabilities. The respondents identified 1077 dimensions of access, indicating that many respondents had multiple dimensions of disability. Of these people, 39 percent identified as being independent or having low support needs, 25 percent medium support needs and 36 percent high or very high support needs. Table 1 presents a breakdown of the sample indicating the statistical significance of the relationship for the sociodemographic variables by the level of support needs using either independent t-test or ANOVA.
Table 1: Sociodemographic variables by Level of Support Needs

Demographic (p value)             Ind%  Low%  Medium%  High%  Very High%
Gender (0.851)
  Male                             17    23    28       22     10
  Female                           16    28    24       20     11
Age (0.006)
  Under 35                         21    22    22       20     14
  36–60                            17    25    23       23     11
  60+                              10    31    39       16      3
Employment (0.000)
  Not in employment                 8    23    32       24     13
  Employed                         25    28    20       19      8
Country of Birth (0.007)
  Australia                        14    26    25       23     12
  Overseas                         25    25    30       17      3
Education (0.000)
  Primary/Secondary                 9    17    28       27     19
  Trade/TAFE                       23    28    31       11      7
  Uni/Postgrad                     18    30    23       22      6
Dimension of Access (0.000)
  Power WC/scooter                  6    18    22       40     13
  Manual WC                        26    28    28       16      1
  Other mobility                   16    35    30       15      4
  Vision/Hearing/Cognitive/other   10    19    25       20     26

5.2 Internal Consistency and Principal Component Analysis (PCA)

The Hotel Accessibility Scale's (HAS) Cronbach Alpha coefficient of .965 indicates excellent internal consistency for the 55 items of the scale. As the HAS was part of scale development, there are no directly comparable studies. However, there are commonalities between PwD and seniors in that many have access needs due to the increasing acquisition of disability as people age. Hence, their travel behavior and needs have connections to accessible tourism. As such, the Ruys & Wei (1998) study was a valuable starting point for the development of the HAS. The HAS was then subjected to a principal component analysis (PCA) using SPSS Version 16. Prior to performing the PCA, the suitability of the data for PCA was assessed. Inspection of the correlation matrix revealed the presence of most coefficients of .3 and above. The Kaiser-Meyer-Olkin Measure of Sampling Adequacy value of 0.969 exceeds the recommended value of .6 (Kaiser 1970, 1974) and Bartlett's Test of Sphericity was statistically significant at the 99 percent level (p < .001), supporting the factorability of the correlation matrix (Bartlett 1954). The PCA revealed the presence of eight components with Eigenvalues exceeding 1, explaining 65 percent of the variance.
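The reliability and factorability checks reported above (Cronbach's Alpha and the Kaiser-Meyer-Olkin measure) are standard computations. As a minimal sketch of what they involve, the following Python fragment computes both on simulated five-point Likert responses; the data, item count and function names are illustrative assumptions, not the study's SPSS output.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x n_items) Likert matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()      # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)        # variance of scale totals
    return (k / (k - 1)) * (1 - item_vars / total_var)

def kmo(items: np.ndarray) -> float:
    """Kaiser-Meyer-Olkin measure of sampling adequacy."""
    corr = np.corrcoef(items, rowvar=False)
    inv = np.linalg.inv(corr)
    # partial correlations derived from the inverse correlation matrix
    d = np.sqrt(np.outer(np.diag(inv), np.diag(inv)))
    partial = -inv / d
    np.fill_diagonal(corr, 0.0)
    np.fill_diagonal(partial, 0.0)
    return (corr ** 2).sum() / ((corr ** 2).sum() + (partial ** 2).sum())

# toy data: 200 simulated respondents, 10 correlated five-point items
rng = np.random.default_rng(0)
latent = rng.normal(size=(200, 1))
responses = np.clip(np.round(3 + latent + rng.normal(scale=0.8, size=(200, 10))), 1, 5)

print(round(cronbach_alpha(responses), 3))
print(round(kmo(responses), 3))
```

On a real response matrix, values close to the reported .965 (alpha) and .969 (KMO) would likewise indicate excellent internal consistency and sampling adequacy.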
Inspection of the screeplot revealed a clear break at the third component, with the elbow continuing down to the eighth component; this was confirmed using Cattell's (1966) scree test. A Parallel Analysis (using Monte Carlo PCA) was then undertaken, which showed six components with Eigenvalues exceeding the corresponding criterion values for a randomly generated data matrix of the same size (55 variables x 566 respondents). The six components were retained as the principal components and explained 60 percent of the variance. To aid in the interpretation of the six components, Direct Oblimin rotation was performed. The rotated solution revealed the presence of simple structure (Thurstone 1947), with all six components showing a number of strong loadings and all variables loading substantially on only one component. Table 2 presents the Pattern Matrix to explain the relative component groupings as well as the component Cronbach Alpha coefficients, which were all above .7, indicating good internal consistency for each component. The components can be named along functional lines based on AS1428, as indicated in Table 2.

Table 2: Component Structure of Accommodation Criteria Selection (Pattern Matrix loadings ≥ .300)

Component 1: Core Mobility (alpha = .941)
17. Flexi bed configuration .768; 19. Bed height .748; 18. Under bed clearance .694; 20. Firm mattress .621; 37. Clear circulation in bathroom .594; 38. Table/kitchen bench clearance .551; 48. Low pile carpet .544; 32. Handheld shower head .524; 30. Roll in shower .492; 36. Toilet seat height .486; 35. Accessible vanity unit .457; 14. Clear circulation space .417; 49. Extra linen .396; 33. Lever water taps .386; 21. All controls visible from bed .361; 16. Bar fridge for medication .303

Component 2: Hearing & Vision (Communication) (alpha = .893)
24. Non audible door bell .830; 25. Access to TTY .818; 22. Teletext decoders .812; 6. Alternative format guest info .710; 23. Phone with vol control and alert .709; 26. Internet access .378; 43. Illuminated switches .353

Component 3: Ambulant (Safety) (alpha = .870)
29. Grab rails in bathroom .829; 8. Handrails throughout .784; 31. Bench in shower .645; 9. Seats near the lift .593; 27. Non-slip bathroom floor .496; 28. Call button in bathroom .478; 45. Room near lift .386; 12. Easily operated door handles .333

Component 4: Service & Security (alpha = .893)
47. Room service -.310; 53. Dietary consideration -.313; 5. Clear signage -.323; 42. Luggage assistance -.383; 11. In room temperature control -.508; 39. Can do customer service attitude -.510; 40. Orientation to the room -.520; 46. Emergency phone in lift -.537; 55. Alarm system -.540; 41. Evacuation orientation -.607; 44. Well lit public areas -.659

Component 5: Amenity (comfort/recreation) (alpha = .814)
51. Gym access .784; 50. Pool access .757; 52. Self serve laundry .696; 54. Complimentary newspaper .497; 34. Adjustable magnifying mirror .458

Component 6: Supplementary Mobility (alpha = .889)
4. Split level reception desk .696; 2. Intercom at accessible height .670; 3. Independent access entrance .624; 13. Height of switches and controls .578; 10. Rooms of equal level of comfort .510; 15. Reachable in room tea/coffee .476; 7. Continuous accessible path .452

Note: Eigenvalues for each component: Component 1 = 21.274, Component 2 = 5.011, Component 3 = 2.085, Component 4 = 1.977, Component 5 = 1.599 and Component 6 = 1.440.

5.3 Between Group Variance

Within the structure of the online questionnaire, the demographic variables of gender, age, life cycle, country of birth, cultural or linguistic background, highest education, current work status and geographic location were collected.
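The component-retention step described in the findings (comparing observed eigenvalues against those from randomly generated data of the same dimensions) is Horn's parallel analysis. The following is a minimal sketch on simulated data with a known two-factor structure; it illustrates the technique only and is not a reproduction of the study's Monte Carlo PCA, and all names and data here are invented.

```python
import numpy as np

def parallel_analysis(data: np.ndarray, n_sims: int = 100,
                      percentile: float = 95, seed: int = 0) -> int:
    """Count components whose observed correlation-matrix eigenvalues exceed
    the chosen percentile of eigenvalues from random data of the same shape."""
    rng = np.random.default_rng(seed)
    n, k = data.shape
    observed = np.sort(np.linalg.eigvalsh(np.corrcoef(data, rowvar=False)))[::-1]

    random_eigs = np.empty((n_sims, k))
    for i in range(n_sims):
        sim = rng.normal(size=(n, k))
        random_eigs[i] = np.sort(
            np.linalg.eigvalsh(np.corrcoef(sim, rowvar=False)))[::-1]
    threshold = np.percentile(random_eigs, percentile, axis=0)

    # retain components until the first observed eigenvalue falls below threshold
    keep = 0
    for obs, thr in zip(observed, threshold):
        if obs > thr:
            keep += 1
        else:
            break
    return keep

# toy example: 300 cases, 12 variables driven by two block-structured factors
rng = np.random.default_rng(1)
f = rng.normal(size=(300, 2))
loadings = np.zeros((2, 12))
loadings[0, :6] = rng.uniform(0.6, 0.9, size=6)   # factor 1 -> first six items
loadings[1, 6:] = rng.uniform(0.6, 0.9, size=6)   # factor 2 -> last six items
data = f @ loadings + rng.normal(scale=0.7, size=(300, 12))
print(parallel_analysis(data))
```

Components are retained only while their observed eigenvalue exceeds the random-data percentile, which is generally more conservative than the eigenvalue-greater-than-1 rule — consistent with the study retaining six components rather than eight.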
There were also disability-specific variables collected: dimension of access (disability type); level of support needs; aids used; and equipment requirements. The sociodemographic variables were tested for between-group differences against the relative importance of access room criteria, producing statistically significant results at the 95 percent level (p < .05). Table 3 presents the statistically significant criteria from independent samples t-tests for gender (11 items) and country of birth (8 items), and from ANOVA for age (7 items), employment status (18 items) and highest education level (19 items). As Table 3 shows, a number of the criteria were identified as being significant across three or four of the sociodemographic variables. While not shown in Table 3 due to the number of items identified, it was disability type (47 items) and level of support needs (38 items) where the greatest between-group variance was explained (ANOVA p < .01). The results demonstrate that there was significant variation in criteria preferences between people with mobility, vision, hearing, cognitive and multiple dimensions of disability, suggesting a highly individualised discourse of access.

Table 3: Criteria preferences across sociodemographic variables where p < .05 (listed in order of significance)

Gender (11 items): easily operated door handles; nonslip bathroom floor; grab rails in bathrooms; bench in shower; table/kitchen bench clearance; #can do customer service attitude; luggage assistance; ^well lit public areas; #room near lift; emergency phone in lift; #dietary considerations.

Age (7 items): accessible parking; reachable tea and coffee; controls accessible from bed; ^well lit public areas; #room near lift; ^pool access; gym access.

Country of Birth (8 items): bar fridge for medication; ^flexible bed configurations; bed height; phone with volume control; #well lit public areas; extra linen; ^pool access; #dietary considerations.

Employment (18 items): independent access entrance; split-level reception desk; clear circulation space; reachable in room tea and coffee; #flexible bed configurations; bed height; roll in shower; hand held shower hose; lever water taps; toilet seat height; #can do customer service attitude; orientation to the room; room near lift; low pile carpet; ^pool access; gym access; self-service laundry; #dietary considerations.

Highest Education (19 items): rooms of equal level of comfort; in room temperature control; bar fridge for medication; ^flexible bed configurations; under bed clearance; non audible alarm; adjustable magnifying mirror; clear circulation in the bathroom; table/kitchen bench clearance; #can do customer service attitude; orientation to room; evacuation orientation; illuminated switches; ^well lit public areas; #room near lift; emergency phone in lift; extra linen; ^pool access; self serve laundry.

^ Denotes four occurrences across the sociodemographic variables
# Denotes three occurrences across the sociodemographic variables

The criteria preferences had a further level of complexity when the level of support needs was overlaid. This complexity included the specialist equipment that respondents travelled with, the interaction with attendants for personal care assistance and the dynamics of the group with whom people travelled. In particular, group-related purposes required multiple accessible rooms, with some groups requiring up to 20 accessible rooms for sport and advocacy events, whereas most hotels in Australia have between one and three accessible rooms.

5.4 Access information preferences

The results for access room criteria can be further contextualised through reviewing the access information preferences for accessible accommodation.
As Table 4 reveals, respondents were asked to rank their preferences (from 1, first preference, to 4, fourth preference), with the main preference being digital photography with floorplan and textual description (70%; mean = 1.54), followed by textual with floorplan (15%; mean = 2.14), text only (6%; mean = 2.79) and the AAA icon (9%; mean = 3.53). Figure 2 presents the mean plots for each dimension of access and shows the commitment to this format by people with mobility and multiple disabilities in aggregate, whereas there was a markedly higher mean for people with vision and cognitive disabilities. Not surprisingly, people who were blind or vision impaired did not find the digital photography useful for their purposes but found the rich text description very helpful. People with cognitive disabilities did not have the same needs for visual or spatial orientation.

Table 4: Accessible accommodation information preference (n = 566; rank 1 = first preference)

Format              Mean   Std. Dev   Variance
1. AAA icon         3.53   .954       .909
2. Textual          2.79   .695       .483
3. Floorplan        2.14   .720       .519
4. Digital images   1.54   .943       .888

Figure 2: Mean preference for digital photography, floorplan and textual by dimension of access

The level of support needs showed a similar pattern: the higher the support needs, the lower the mean (e.g. high need mean = 1.42). However, there was an anomaly in that people with very high support needs had a higher mean (mean = 1.51). Further analysis revealed that those with very high support needs were people with multiple disabilities, rather than the more homogeneous high-needs group who predominantly had mobility disabilities. There were no statistically significant differences between sociodemographic groups for information preference. Outside of these observations, what becomes apparent is a relatively homogeneous preference for the digital photography that included floor plans and textual information.
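The ordering in Table 4 follows from averaging each respondent's 1-4 rankings per format, with the lowest mean rank marking the most preferred format. A small illustrative sketch of that aggregation (the three responses below are invented, not survey data):

```python
from statistics import mean

# Hypothetical rankings: 1 = first preference ... 4 = fourth preference.
responses = [
    {"AAA icon": 4, "textual": 3, "floorplan": 2, "digital images": 1},
    {"AAA icon": 4, "textual": 2, "floorplan": 3, "digital images": 1},
    {"AAA icon": 3, "textual": 4, "floorplan": 2, "digital images": 1},
]

def preference_order(responses):
    """Mean rank per format, sorted most preferred (lowest mean) first."""
    formats = responses[0].keys()
    means = {f: mean(r[f] for r in responses) for f in formats}
    return sorted(means.items(), key=lambda kv: kv[1])

order = preference_order(responses)
```

With the real data this yields the Table 4 ordering: digital images (1.54) through to the AAA icon (3.53).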
The product testing confirmed that the most important information for people determining the appropriateness of accommodation for their needs was the consideration of the bedroom and the detailed criteria of the bathroom. Interviews and a focus group with questionnaire respondents (n = 6) who product tested the information provision at the hotel suggested that the photographs needed to address these areas, rather than the general accessibility of the hotel, as they are the most critical considerations in deciding to stay at an accommodation. They were willing to overlook the general accessibility of the public areas of the hotel if they could be assured that the room and bathroom had the access criteria that they needed. While the individuals in this part of the research all had mobility disabilities, each identified particular room criteria that they regarded as essential to their decision-making process. Each said the level of detail provided was essential as it identified the particular information that they sought. Further, while the textual description and the floorplan provided a solid basis, the digital photography provided a visual reinforcement that confirmed the accessibility for their needs. Half of the respondents went on to say that they would have liked more photos of the room/bathroom and fewer of the general property, as it was the accessibility of the room that was most important to their decision-making. People with ambulant disabilities were particularly interested in detailed information and photos of the position of the handrails. Handrails were critical not only to their mobility but also as a contributing component to ensuring their safety from slips and falls.

6 DISCUSSION

6.1 Research Question 1: Relative importance of room criteria

As stated in the findings, the HAS has proved to be a valid and reliable instrument on which to gauge the relative importance of room criteria for PwD and those with access needs.
The relative importance of the 55 room criteria, however, was dependent principally on disability type and level of independence. These findings further support Burnett and Bender-Baker's (2001) findings on the implications for market segmentation of this group. However, the findings extend this work by moving beyond the mobility dimension of disability to include the other groups in the sample: this research included those with vision, hearing and cognitive disabilities, and those with multiple disabilities. Too often PwD are seen as a homogenous whole or tightly confined to the definitional categories identified by the World Health Organization (2001). Yet, as the respondents in this study identified, many had multiple disabilities that presented a further complexity to understanding consumer needs. While most major disability data sources recognise this fact (e.g. Australian Bureau of Statistics, 2004), most research is still operationalised within tightly confined definitions. The sociodemographic variables that provided further insight into access needs are now examined. Table 3 presented the common room criteria that were statistically significant across gender, age, country of birth, employment situation and highest level of education. When these are considered in isolation, whether an individual criterion is statistically significant may not be an important outcome in itself. However, when the commonalities across the sociodemographic variables are examined, a number of important themes emerge. First, gender and age individually were the least statistically significant. Yet, further analysis showed some interesting considerations. For example, older women with disabilities travelled less than other groups and identified safety and security components as the most important criteria outside of the accessibility criteria. Second, employment situation and highest level of education had the highest commonality for amenity (comfort/recreation) components.
When this was analysed further, those with full-time employment and tertiary qualifications regarded these considerations more highly than other groups. This could be explained through their awareness of their legal rights, their demand for access to all areas of the premises, and their expectation of a higher level of customer service than other groups. This explanation is supported in the literature regarding affluent baby boomers and their expectations of tourism participation (Cleaver & Muller, 2002; Glover & Prideaux, 2009; Muller & Cleaver, 2000). This would suggest that disability considerations should be included within further senior and baby boomer research, as there is a significant relationship between disability and ageing (Australian Bureau of Statistics, 2004).

6.2 Research Question 2: Accessible accommodation information format preference

This research has reinforced that people requiring accessible accommodation require information at a level of detail not previously considered by the accommodation sector specifically and, by inference, the tourism industry as a whole. Their experiences, and the evidence identified through complaint cases brought under disability discrimination legislation, have been far from satisfactory (Darcy, 2002b; Darcy & Taylor, 2009; Goodall, 2002; Goodall et al., 2004; Miller & Kirk, 2002; Shaw, 2007). Hence, the respondents in this study identified the innovative approach to information provision, which brought together the textual elements of AS 1428 for access and mobility, the socio-spatial floorplan and digital photography to provide a triangulation of data, as a distinct advance. This triangulation of access information allows individuals to make informed choices as to whether the accessible tourism accommodation is accessible for their needs. This is an important finding in itself, as expectation management is critical to customer satisfaction (Gnoth, 1997; Oliver & DeSarbo, 1988; Rodríguez del Bosque, San Martín, & Collado, 2006).
An accommodation provider would be better placed by providing detailed information in an appropriate format, so that a realistic expectation of the accessibility can be formed by the individual. An individual may decide that the premises cannot meet their needs. This is far better for both the person and the premises than an unrealistic expectation created by poor information provision; in that circumstance, the customer would be upset by their expectation not being met and would pass on their dissatisfaction through bad word of mouth. What became apparent in the qualitative parts of the questionnaire, and in the interviews and focus group with the respondents who tested the product, was that each individual had idiosyncratic elements of the room criteria that they regarded as important. These varied for individuals based on their sociodemographics, disability type, level of independence and the equipment that they used. As an outcome, certain criteria were valued by certain groups, with clear delineation between power wheelchair users, manual wheelchair users, those with ambulant disabilities, those who are blind or vision impaired, those who are Deaf or hearing impaired, and a number of other groups. The top 10 criteria mean scores for each of these groups varied significantly. As evidenced by HREOC (2006) disability discrimination cases, one of the clear challenges for organisations, given that the internet has become one of the main forms of distribution for the industry, is that this type of online information must comply with W3C international protocols, which also include the provision of alternative information formats. A series of structural exclusions in online information provision have been well documented in the tourism literature (Foggin, Cameron, & Darcy, 2004; Gutierrez, Loucopoulos, & Reinsch, 2005; Williams, Rattray, & Grimes, 2006).
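One of the W3C requirements alluded to here, text alternatives for images, can be checked mechanically. Below is a minimal illustrative sketch using only the Python standard library; the checker and the sample markup are assumptions for illustration, not part of the study, and real WCAG conformance involves far more than alt text.

```python
from html.parser import HTMLParser

class AltTextAudit(HTMLParser):
    """Collects <img> tags lacking a non-empty alt attribute -- the text
    alternative that lets screen-reader users (e.g. guests who are blind
    or vision impaired) still receive the access information."""
    def __init__(self):
        super().__init__()
        self.missing = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            a = dict(attrs)
            if not a.get("alt"):
                self.missing.append(a.get("src", "(no src)"))

def images_missing_alt(html):
    auditor = AltTextAudit()
    auditor.feed(html)
    return auditor.missing

# Hypothetical accessible-room page fragment: one image with a text
# alternative, one without.
page = (
    '<img src="bathroom.jpg" alt="Roll-in shower with grab rails and bench">'
    '<img src="floorplan.png">'
)
```

Running `images_missing_alt(page)` flags `floorplan.png`, the image whose information would be lost to a guest relying on a screen reader.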
To operationalise this research would require an organisational commitment to the W3C accessibility standards and an upfront commitment to collecting and verifying the information. The outcomes for the organisation would be significant, in that it would gain a competitive advantage in marketing to a group of people who are rarely commercially marketed to (DePauw & Gavron, 2005; Reedy, 1993). Word-of-mouth and disability advocacy networks provide a low-cost marketing opportunity for a low upfront investment. Another approach would be for premises providing accessible accommodation to market collaboratively in conjunction with a government tourism marketing authority, a not-for-profit or an industry association. An excellent example of this type of collaborative marketing has occurred between the Deaf community and the Australian Hotel Motel and Accommodation Association, after coming to an agreement on inclusive provisions for the Deaf and hearing impaired. Members who comply with the inclusive provisions are marketed through an online website by the Deafness Forum (Deafness Forum & HMAA, 2005).

6.3 Research Question 3: Congruence of information to the room

The six respondents who product tested provided a great depth of meaning beyond the statistical results. Another paper is in preparation dealing with the qualitative results of this question, as well as the perceptions of non-disabled hotel guests of accessible rooms and the supply-side perspective of hotel management on their product. What was striking in the interviews and focus groups was what critical disability studies explains as the 'embodied ontology' (Shakespeare & Watson, 2001). While social constructionist approaches to disability emphasise the importance of providing an enabling environment and welcoming attitudes, the individual's embodiment is crucial to their particular identification of specific room criteria.
This can be explained in that certain provisions are regarded as important for the access requirements of a particular group (e.g. a continuous pathway for mobility disabilities), but individuals have specific requirements based on their impairment (embodiment), level of independence and interaction with the environment (Packer, McKercher & Yau, 2007). Within the accessible accommodation context, this creates an inherent complexity in managing the impairment-related considerations. Yet, the implications for management are relatively straightforward once the detailed information systems are in place: a person needs to be considered as an individual with their own needs. This is not an earth-shattering outcome, but it is an important one in that, rather than responding to a wheelchair user or the frail aged or the blind or the Deaf, the interaction must go beyond group access needs to engage with the individual as one would with any other customer.

7 APPLICATION OF RESULTS

This research has provided a greater empirical understanding of the access considerations of PwD in hotel accommodation. In particular, it has highlighted the complex level of information required for people to make an informed decision about their accommodation needs. The research suggests that previous attempts to create an iconography or rating system for accessible accommodation were misguided. A radical simplification of the high level of detail presented in the Building Code of Australia and AS 1428 for access and mobility is not possible without compromising the detail required by PwD using accessible accommodation. In particular, the digital photography and floor plans provide a socio-spatial context to decision-making, through which individuals can develop a better understanding of the spaces that they are to use. In the case of accommodation, it is the detailed criteria associated with the room and the bathroom that are critical.
While access has a group context based on dimension of access, there is also an individual access discourse: people expressed their desire for detailed information, visual reinforcement and an understanding of the spatial dimension of the room as important elements on which to make an informed decision for their access needs. The resulting access discourse places a weighting on which of these criteria are crucial for each individual to make an informed decision and, hence, the criteria and their weighting varied between individuals. The more detailed the information on accommodation within clearly defined criteria, the more appropriate, effective and efficient the organisational response for presenting accommodation information for accessible rooms. The efficiency of this approach is that information for the accessible rooms is compiled once and can then be continually disseminated through online sources, hard copy and individual requests coming through the reservation procedures. Of course, this requires organisations to have developed an access culture and a continual process of disability awareness training (Daruwalla & Darcy, 2005). Unsatisfactory experiences have significant implications for the individual through the stress and anxiety created, while the premises then have to deal with a dissatisfied customer who will provide poor word-of-mouth and may take a disability discrimination action against the premises. This research potentially offers the tourism industry a better means by which to collect, collate, market and promote accessible accommodation information to PwD to improve expectation management. Two access information templates provide the accommodation sector with an understanding of how this can be accomplished (Darcy & Cameron, 2008; Europe for All, 2007).
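Once the audited criteria for a room are compiled, they can be matched against each enquiry at reservation time. The following is a minimal sketch of such a check; the room features and the guest's criteria are invented examples drawn loosely from Table 3, not an implemented system.

```python
def room_meets_needs(room_criteria, essential, preferred=()):
    """Compare a room's audited access criteria with one individual's
    weighted criteria: 'essential' items decide the match outcome,
    'preferred' items only inform it."""
    missing_essential = sorted(set(essential) - set(room_criteria))
    missing_preferred = sorted(set(preferred) - set(room_criteria))
    return not missing_essential, missing_essential, missing_preferred

# Hypothetical audit of one accessible room:
room = {"roll in shower", "grab rails in bathrooms", "hand held shower hose",
        "bed height", "clear circulation space"}

ok, missing, nice_to_have = room_meets_needs(
    room,
    essential={"roll in shower", "grab rails in bathrooms"},
    preferred={"bench in shower"},
)
```

The design choice mirrors the paper's point: the room data are compiled once, while the essential/preferred split is supplied per individual, so the same audit serves very different access discourses.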
Further, there are many other benefits to improving accessible information systems, including improving the economic and social sustainability of enterprises (Eichhorn et al., 2008). This research offers the potential to contribute to the neglected area of social sustainability, which has until recently been a poor third in relation to environmental and economic sustainability, through its contribution to the development of inclusive practices and a more enabling accommodation sector. Currently, many premises with accessible rooms do not even represent to the public that these rooms exist. This is economically inefficient for the premises and socially inefficient for PwD. Diversity is recognised as an area of competitive advantage in globalised business practice (Harvey & Allard, 2005), but disability has had relatively less inclusion within organisational diversity strategies than other areas (e.g. gender, ethnicity and sexuality). While the sample was adequate for the purposes of this research, larger samples of the different disability groups should be the focus of further research. This research was largely carried out in the Australian context, with some limited involvement of respondents from other nations. It is recognised that there are cultural contexts to disability that should be researched further in tourism.

8 CONCLUSION

In summary, the research has the potential to contribute to a business case for accessible tourism accommodation by allowing a much more detailed understanding of the consumer needs of PwD. In particular, by using the outcomes of this research the accommodation sector may implement a new system of information collection, presentation, marketing and promotion that will be more effective and efficient in the management of accessible accommodation stock.
The significant business outcome of a new system of knowledge management would be a contribution towards improved occupancy for accessible accommodation in the future. This would be achieved while more effectively meeting the expectations of this consumer group, so that they can make decisions on the accessibility of a premises for their access needs and individual access discourse. Operationalising this research within the accommodation sector offers an opportunity for corporations and governments to gain a competitive advantage. It is recognised that there is an upfront cost associated with carrying out the access audits, formatting information and establishing W3C-compliant online environments. However, this upfront cost is small in comparison to the benefits gained. Lastly, this research has significant international implications in relation to understanding and meeting the challenges of ageing and disability-related issues and the social sustainability of the global accommodation sector.

REFERENCES

AAA Tourism. (2006). Withdrawal of accessibility rating icons. Retrieved 8 August, 2006
Access For All Alliance (Hervey Bay) Inc. (2006). Survey Into the Barriers Confronted By Tourists With Disabilities - When Making Travel Arrangements, Finding Accommodation and Visiting Tourist Venues. Hervey Bay: Access For All Alliance (Hervey Bay) Inc.
ACROD. (1994). Building Access - AAA Accommodation Checklist. Canberra: ACROD.
Australian Automobile Association. (2005). Accommodation guide. Retrieved 10 August, 2005, from http://www.accommodationguide.com.au/
Australian Building Codes Board. (1996). Building Code of Australia. Canberra: CCH Australia.
Australian Building Codes Board. (1997). Provisions for People with Disabilities (RD97/01). Canberra: Australian Building Codes Board.
Australian Building Codes Board. (2004a). Disability Standards for Access to Premises: Draft Regulation Impact Statement. Canberra: Australian Building Codes Board.
Australian Building Codes Board. (2004b). Draft Access Code for Buildings (Press release). Canberra: Australian Building Codes Board.
Australian Bureau of Statistics. (2004). Disability, Ageing and Carers: Summary of Findings, 2003 (Cat. No. 4430.0). Canberra: Australian Bureau of Statistics.
Australian Council for Rehabilitation of Disabled (ACROD) Ltd. (1999). Room 206 - Accommodating travellers with disabilities (4th ed.). Retrieved 17 May, 2002, from www.acrod.org.au/access/room206.htm
Australian Hotels Association. (1998). Catering for Guests with Disabilities: Survey of AHA Members. Canberra: AHA.
Australian Quadriplegic Association. (2002). Access Sydney: the Easy Guide to Easy Access (1st ed.). Sydney: Australian Quadriplegic Association.
Australian Tourism Data Warehouse. (2006). About the Australian Tourism Data Warehouse. Retrieved 12 July, 2006, from http://www.atdw.com.au/
Avis, A., Card, J. A., & Cole, S. T. (2005). Accessibility and attitudinal barriers encountered by travelers with physical disabilities. Tourism Review International, 8, 239-248.
Bates, G. M. (2006). Environmental law in Australia (6th ed.). Chatswood, N.S.W.: LexisNexis.
Bell, R. A., & Morey, R. C. (1996). Purchase situation modeling: The case of hotel selection criteria for corporate travel departments. Journal of Travel Research, 35(1), 57-63.
Bi, Y., Card, J. A., & Cole, S. T. (2007). Accessibility and attitudinal barriers encountered by Chinese travellers with physical disabilities. International Journal of Tourism Research, 9, 205-216.
Burnett, J. J., & Bender-Baker, H. (2001). Assessing the travel-related behaviors of the mobility-disabled consumer. Journal of Travel Research, 40(1), 4-11.
Callan, R. J. (1998). Attributional analysis of customers' hotel selection criteria by U.K. grading scheme categories. Journal of Travel Research, 36(3), 20-34.
Cameron, B. (2000). Easy Access Australia (2nd ed.). Kew, Vic: Kew Publishing.
Cashman, R., & Darcy, S. (2008). Benchmark Games: The Sydney 2000 Paralympic Games. Petersham, NSW: Walla Walla Press in conjunction with the Australian Centre for Olympic Studies.
City of Melbourne. (2006). Accessing Melbourne. Melbourne: City of Melbourne.
Cleaver, M., & Muller, T. E. (2002). The socially aware baby boomer: Gaining a lifestyle-based understanding of the new wave of ecotourists. Journal of Sustainable Tourism, 10(3), 173-190.
Commonwealth Attorney General's Dept. (2004). Draft Disability Standards for Access to Premises (Buildings) 200X. Canberra: AGPS.
Commonwealth Attorney General's Dept. (2008). Disability (Access to Premises - Buildings) Standards. Retrieved 2 December, 2008, from http://www.ag.gov.au/premisesstandards
Daniels, M. J., Drogin Rodgers, E. B., & Wiggins, B. P. (2005). "Travel tales": An interpretive analysis of constraints and negotiations to pleasure travel as experienced by persons with physical disabilities. Tourism Management, 26(6), 919-930.
Darcy, S. (1998). Anxiety to Access: The Tourism Patterns and Experiences of New South Wales People with a Physical Disability. Sydney: Tourism New South Wales.
Darcy, S. (2000). Tourism Industry Supply Side Perceptions of Providing Goods and Services for People with Disabilities. Sydney: Report to New South Wales Ageing and Disability Department.
Darcy, S. (2002a). Marginalised participation: Physical disability, high support needs and tourism. Journal of Hospitality and Tourism Management, 9(1), 61-72.
Darcy, S. (2002b, 16-18 May). People with disabilities and tourism in Australia: A human rights analysis. Paper presented at the Tourism and Well Being - 2nd Tourism Industry and Education Symposium, Jyvaskyla, Finland.
Darcy, S. (2004). Practice note: Harmony and certainty? The status of the draft Access to Premises Standard. Annals of Leisure Research, 7(2), 158-167.
Darcy, S. (2006). Setting a research agenda for accessible tourism. In C. Cooper, T. De Lacy & L. Jago (Eds.), STCRC Technical Report Series (pp. 48). Available from http://www.crctourism.com.au/BookShop/BookDetail.aspx?d=473
Darcy, S. (2007, 11-14 February). A methodology for assessing class three accessible accommodation information provision. Paper presented at Tourism - Past Achievements, Future Challenges, Manly Pacific Novotel, Manly - Sydney, Australia.
Darcy, S., & Cameron, B. (2008). Accessible Accommodation Assessment Template [Software template]. Sydney: University of Technology, Sydney and Easy Access Australia.
Darcy, S., & Taylor, T. (2009). Disability citizenship: An Australian human rights analysis of the cultural industries. Leisure Studies, 28(4), 375-398.
Daruwalla, P. S., & Darcy, S. (2005). Personal and societal attitudes to disability. Annals of Tourism Research, 32(3), 549-570.
Davis, E. (1981). Handicapped persons full participation in tourism - CEO of the Australian Hotels Association. Paper presented at the International Year of the Disabled - Tourism Seminar, Sydney Opera House.
Deafness Forum, & HMAA. (2005). Accommodation Industry Voluntary Code of Practice for the Provision of Facilities for the Deaf and Hearing Impaired. Sydney: HMAA.
DePauw, K. P., & Gavron, S. J. (2005). Disability and Sport (8th ed.). Champaign, IL: Human Kinetics.
Dillman, D. A. (2000). Mail and Internet Surveys: The Tailored Design Method. New York: John Wiley.
Dwyer, L., & Darcy, S. (2008). Chapter 4 - Economic contribution of disability to tourism in Australia. In S. Darcy, B. Cameron, L. Dwyer, T. Taylor, E. Wong & A. Thomson (Eds.), Technical Report 90040: Visitor Accessibility in Urban Centres (pp. 15-21). Gold Coast: Sustainable Tourism Cooperative Research Centre.
Eichhorn, V., Miller, G., Michopoulou, E., & Buhalis, D. (2008). Enabling access to tourism through information schemes? Annals of Tourism Research, 35(1), 189-210.
Europe for All. (2007). Tourism providers' reports: The Europe for All Self-Assessment Questionnaire for owners/managers of hotels and self-catering establishments & The Europe for All Photo and Measurement Guide. Available from http://www.europeforall.com/tourismProviders.seam?conversationPropagation=end&conversationId=162076
Fleischer, A., & Pizam, A. (2002). Tourism constraints among Israeli seniors. Annals of Tourism Research, 29(1), 106-123.
Fodor's. (1996). Fodor's Great American Vacations for Travelers with Disabilities (2nd ed.). New York: Fodor's Travel Publications Inc.
Foggin, S. E. A., Cameron, B., & Darcy, S. (2004, 6-9 July). Towards barrier free travel: Initiatives in the Asia Pacific region. Paper presented at Developing New Markets for Traditional Destinations - Travel and Tourism Research Association (TTRA) Canada Conference 2003, Ottawa.
Glover, P., & Prideaux, B. (2009). Implications of population ageing for the development of tourism products and destinations. Journal of Vacation Marketing, 15(1), 25-37.
Gnoth, J. (1997). Tourism motivation and expectation formation. Annals of Tourism Research, 24(2), 283-304.
Goodall, B. (2002, 16-18 May). Disability discrimination legislation and tourism: The case of the United Kingdom. Paper presented at the Tourism and Well Being - 2nd Tourism Industry and Education Symposium, Jyvaskyla, Finland.
Goodall, B., Pottinger, G., Dixon, T., & Russell, H. (2004). Heritage property, tourism and the UK Disability Discrimination Act. Property Management, 22(5), 345-357.
Gutierrez, C. F., Loucopoulos, C., & Reinsch, R. W. (2005). Disability-accessibility of airlines' Web sites for US reservations online. Journal of Air Transport Management, 11(4), 239-247.
HarrisInteractive Market Research. (2003). Research among adults with disabilities - travel and hospitality. Chicago: Open Doors Organization.
Harvey, C. P., & Allard, M. J. (Eds.). (2005). Understanding and Managing Diversity: Readings, Cases and Exercises (International ed.). New Jersey: Pearson Prentice Hall.
Healey, B. (2008, 29-31 October). The Australian Hotel Association position: Current status and future of tourist accommodation for people with disabilities. Paper presented at Creating Inclusive Communities - Conference of the Association of Consultants in Access, Australia, Hyatt Regency, Adelaide.
Horneman, L., Carter, R. W., Wei, S., & Ruys, H. (2002). Profiling the senior traveler: An Australian perspective. Journal of Travel Research, 41(1), 23.
Hsieh, L.-F., Lin, L.-H., & Lin, Y.-Y. (2008). A service quality measurement architecture for hot spring hotels in Taiwan. Tourism Management, 29(3), 429-438.
Human Rights and Equal Opportunity Commission. (2006). Disability Discrimination Act Complaints Cases Register and Decisions. Retrieved January, 2002, from http://www.hreoc.gov.au/disability_rights/decisions/
Innes, G. (2006, 21 January). Building access and no holiday for the disabled. The Daily Telegraph.
Israeli, A. (2002). A preliminary investigation of the importance of site accessibility factor for disabled tourists. Journal of Travel Research, 41(1), 101-104.
Market and Communication Research. (2002). People with Disabilities: A Market Research Report. Brisbane: Tourism Queensland - Special Interest Tourism Unit.
McKercher, B., Packer, T., Yau, M. K., & Lam, P. (2003). Travel agents as facilitators or inhibitors of travel: Perceptions of people with disabilities. Tourism Management, 24(4), 465-474.
Miller, G. A., & Kirk, E. (2002). The Disability Discrimination Act: Time for the stick? Journal of Sustainable Tourism, 10(1), 82-88.
Muller, T. E., & Cleaver, M. (2000). Targeting the CANZUS baby boomer explorer and adventurer segments. Journal of Vacation Marketing, 6(2), 154-169.
Murray, M., & Sproats, J. (1990). The disabled traveller: Tourism and disability in Australia. Journal of Tourism Studies, 1(1), 9-14.
Nash, R., Thyne, M., & Davies, S. (2006). An investigation into customer satisfaction levels in the budget accommodation sector in Scotland: A case study of backpacker tourists and the Scottish Youth Hostels Association. Tourism Management, 27(3), 525-532.
O'Neill, M., & Ali Knight, J. (2000). Disability tourism dollars in Western Australia hotels. FIU Hospitality Review, 18(2), 72-88.
Oliver, R. L., & DeSarbo, W. S. (1988). Response determinants in satisfaction judgments. Journal of Consumer Research, 14(4), 495.
Packer, T. L., McKercher, B., & Yau, M. (2007). Understanding the complex interplay between tourism, disability and environmental contexts. Disability & Rehabilitation, 29(4), 281-292.
Pallant, J. F. (2007). SPSS Survival Manual: A Step by Step Guide to Data Analysis Using SPSS for Windows (Version 15) (3rd ed.). Crows Nest, N.S.W.: Allen & Unwin.
Preiser, W. F. E., & Ostroff, E. (2001). Universal Design Handbook. New York: McGraw-Hill.
Queensland Office of Ageing. (1998). Not Over the Hill - Just Enjoying the View: A Close-up Look at the Seniors Market for Tourism in Australia. Brisbane: Seniors Card Tourism Scheme - Office of Ageing, Department of Families, Youth and Community Care.
Reedy, J. (1993). Marketing to Consumers with Disabilities: How to Identify and Meet the Growing Market Needs of 43 Million Americans. Chicago, Ill: Probus Pub Co.
Rodríguez del Bosque, I. A., San Martín, H., & Collado, J. (2006). The role of expectations in the consumer satisfaction formation process: Empirical evidence in the travel agency sector. Tourism Management, 27(3), 410-419.
Ruys, H., & Wei, S. (1998). Accommodation needs of mature Australian travellers. Australian Journal of Hospitality Management, 5(1), 51-59.
Shakespeare, T., & Watson, N. (2001). The social model of disability: An outdated ideology? In S. N. Barnartt & B. Mandell Altman (Eds.), Exploring Theories and Expanding Methodologies (Vol. 2, pp. 9-28). Stamford: JAI Press.
Shaw, G. (2007). Disability legislation and empowerment of tourists with disability in the United Kingdom. In A. Church & T. Coles (Eds.), Tourism, Power and Space (pp. 83-100). London: Routledge.
Shaw, G., & Coles, T. (2004). Disability, holiday making and the tourism industry in the UK: A preliminary survey. Tourism Management, 25(3), 397-403.
Shelton, E. J., & Tucker, H. (2005). Tourism and disability: Issues beyond access. Tourism Review International, 8(3), 211-219.
Standards Australia. (1992a). AS 1428.2 - Design for Access and Mobility - Enhanced and Additional Requirements - Buildings and Facilities (Rev. ed.). North Sydney, NSW: Standards Australia.
Standards Australia. (1992b). AS 1428.3 - Design for Access and Mobility - Requirements for Children and Adolescents with Physical Disabilities. North Sydney, NSW: Standards Australia.
Standards Australia. (2001). AS 1428.1 - Design for Access and Mobility - General Requirements for Access - New Building Work. Homebush, NSW: Standards Australia.
Standards Australia. (2002). AS/NZS 1428.4 - Design for Access and Mobility - Tactile Indicators. North Sydney, NSW: Standards Australia.
Stein, P. L., & Farrier, D. (2006). The Environmental Law Handbook: Planning and Land Use in NSW (4th ed.). Sydney: UNSW Press.
Stumbo, N. J., & Pegg, S. (2005). Travelers and tourists with disabilities: A matter of priorities and loyalties. Tourism Review International, 8(3), 195-209.
Tantawy, A., Kim, W. G., & SungSoo, P. (2004). Evaluation of hotels to accommodate disabled visitors. Journal of Quality Assurance in Hospitality & Tourism, 5(1), 91-101.
Tully, J. (2006). Accessible Tourism Update Working Party. Retrieved 1 September, 2006
Turco, D. M., Stumbo, N., & Garncarz, J. (1998). Tourism constraints - People with disabilities. Parks and Recreation Journal, 33(9), 78-84.
Veal, A. J. (2006). Research Methods for Leisure and Tourism: A Practical Guide (3rd ed.). Essex, England: Pearson Education Limited.
Warnken, J., Bradley, M., & Guilding, C. (2005). Eco-resorts vs. mainstream accommodation providers: An investigation of the viability of benchmarking environmental performance. Tourism Management, 26(3), 367-379.
Williams, R., Rattray, R., & Grimes, A. (2006). Meeting the on-line needs of disabled tourists: An assessment of UK-based hotel websites. International Journal of Tourism Research, 8(1), 59.
World Health Organization. (2001). International Classification of Functioning, Disability and Health (ICIDH-2). Geneva: World Health Organization.
World Health Organization. (2007). Global Age-friendly Cities Guide. Retrieved from http://www.who.int/ageing/age_friendly_cities/en/index.html
Yau, M. K.-s., McKercher, B., & Packer, T. L. (2004). Traveling with a disability: More than an access issue. Annals of Tourism Research, 31(4), 946-960.
This is an Open Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/by-nc/4.0/), which permits noncommercial re-use, distribution, and reproduction in any medium, provided the original work is properly cited. For commercial re-use, please contact journals.permissions@oup.com

© The Author 2014. Published by Oxford University Press on behalf of The Gerontological Society of America.

Research Article

Training Older Adults to Use Tablet Computers: Does It Enhance Cognitive Function?

Micaela Y. Chan, MS,* Sara Haber, PhD, Linda M. Drew, PhD, and Denise C. Park, PhD
Center for Vital Longevity, School of Behavioral and Brain Sciences, University of Texas at Dallas.

*Address correspondence to Micaela Y. Chan, Center for Vital Longevity, School of Behavioral and Brain Sciences, 1600 Viceroy Drive, Suite 800, Dallas, TX 75235. E-mail: mchan@utdallas.edu

Received February 20, 2014; Accepted April 28, 2014
Decision Editor: Rachel Pruchno, PhD

Abstract

Purpose of the Study: Recent evidence shows that engaging in learning new skills improves episodic memory in older adults. In this study, older adults who were computer novices were trained to use a tablet computer and associated software applications.
We hypothesize that sustained engagement in this mentally challenging training would yield a dual benefit of improved cognition and enhancement of everyday function by introducing useful skills.

Design and Methods: A total of 54 older adults (age 60-90) committed 15 hr/week for 3 months. Eighteen participants received extensive iPad training, learning a broad range of practical applications. The iPad group was compared with 2 separate controls: a Placebo group that engaged in passive tasks requiring little new learning; and a Social group that had regular social interaction, but no active skill acquisition. All participants completed the same cognitive battery pre- and post-engagement.

Results: Compared with both controls, the iPad group showed greater improvements in episodic memory and processing speed but did not differ in mental control or visuospatial processing.

Implications: iPad training improved cognition relative to engaging in social or nonchallenging activities. Mastering relevant technological devices has the added advantage of providing older adults with technological skills useful in facilitating everyday activities (e.g., banking). This work informs the selection of targeted activities for future interventions and community programs.

Keywords: Cognitive intervention, Engagement, Cognitive training, Cognitive aging, Technology, iPad

As the proportion of older adults increases in society, it is of increasing economic and social importance to understand how to maintain the health of the aging mind. In 2010, the Alzheimer’s Association reported that an intervention that delays progression toward Alzheimer’s disease by five years would reduce the rate of national diagnosis by nearly 45%, resulting in very significant health and financial benefits (Alzheimer’s Association, 2010).
Although both cognitive training (e.g., Anguera et al., 2013; Basak, Boot, Voss, & Kramer, 2008; Schmiedek, Lovden, & Lindenberger, 2010) and engaging in cognitively challenging activities (e.g., Carlson et al., 2008; Stine-Morrow, Parisi, Morrow, & Park, 2008; Tranter & Koutstaal, 2008) have been linked to cognitive improvement, most of the research to date has focused on cognitive training. Cognitive training and lifestyle engagement have differing approaches to cognitive facilitation: cognitive training targets specific domains with the expectation that improvements will be observed in that domain, and potentially transfer to other cognitive tasks and domains. In contrast, cognitive engagement interventions rely on the stimulation provided by activities that are novel for an individual and are broadly demanding of executive function, episodic memory, and reasoning (Park, Gutchess, Meade, & Stine-Morrow, 2007).

Cite as: The Gerontologist, 2016, Vol. 56, No. 3, 475-484. doi:10.1093/geront/gnu057. Advance Access publication June 13, 2014.

One reason for the limited research on engagement compared with cognitive training has been the cost and complexity of testing participants for prolonged periods in experimentally controlled real-world environments. Additionally, it is difficult to randomly assign participants to different experimental conditions not of their choosing and retain them over prolonged periods of time. Nevertheless, it is critical that we begin to understand what types and amounts of activities constitute “healthy behavior for the mind,” particularly given the urgency of the problem as baby boomers are reaching old age.
The notion that cognitive engagement is protective or supportive of cognition with age is supported by evidence that individuals who report high participation in mentally stimulating activities (e.g., reading, chess) show less age-related cognitive decline (Wilson et al., 2003, 2005) and have a decreased risk of Alzheimer’s disease compared with those who participate less (Wilson, Scherr, Schneider, Tang, & Bennett, 2007). However, it is difficult to disentangle causal relationships in these studies. It is not clear whether engagement enhances cognition or, alternatively, if individuals who are cognitively healthy engage in activities that are more cognitively demanding. There are only a few studies that have attempted to disentangle this issue by experimentally manipulating engagement level. For example, Tranter and Koutstaal (2008) introduced a group of older adults between the ages of 60 and 75 years to a wide range of mentally stimulating activities that involved social group meetings, reading, music, and problem solving. They found that, when compared with a control group, the experimental group showed greater gains on a measure of fluid intelligence, suggesting that engaging in mentally stimulating activities for a short period is indeed beneficial to cognition. In a study by Stine-Morrow and colleagues (2008), older adults participated in the Senior Odyssey program, which fostered an engaged lifestyle for 20 weeks by facilitating team-based problem-solving competitions that relied on cognitive processes such as working memory, processing speed, visuospatial processing, and reasoning in a community setting. When compared with a control group, participants in the program showed improvement on a composite measure of fluid cognitive ability. Another program, Experience Corps, had older adults partner with elementary school students, to whom they taught literacy skills, library support, and classroom etiquette (Carlson et al., 2008).
Not only could the older adults benefit from the newly established relationships with students, but they also evidenced improvements in executive functioning and memory. Both Senior Odyssey and Experience Corps are community-based programs that include the potential for social, personal, and cognitive benefits, and thus have the potential to enrich lives as well as enhance cognitive function.

Most recently, Park and colleagues (2013) had older adults participate in cognitively demanding leisure activities such as learning to quilt and learning digital photography for 15 hr a week for more than three months. The study (referred to later in this article as the “Synapse Project”) was based on a theoretical distinction between “productive” and “receptive” engagement (Park et al., 2007). Productive engagement involves activities that require significant cognitive challenge and self-initiated processing, resulting in sustained activation of working memory, episodic memory, and reasoning. For example, learning new computer software, learning a new language, or engaging in acquiring dance routines would be productive engagement. Park and Reuter-Lorenz (2009) have proposed that engagement in such active mental challenge for a sustained period promotes the formation of “neural scaffolds,” that is, supportive neural circuitry that provides a source of additional neural resource compensating for age-related brain shrinkage and neural degradation. There is a large literature suggesting that older adults indeed show such compensatory neural activity compared with young, particularly in frontal cortex (e.g., Gutchess et al., 2005). Although this study does not include brain imaging, the scaffolding model provides a strong conceptual framework for understanding the mechanism that operates when productive engagement improves cognition.
The Synapse Project (Park et al., 2013) had three productive engagement groups: learning to quilt, learning digital photography, or learning a combination of both. In contrast to productive engagement, receptive engagement involves activities that rely on existing knowledge and familiar activities that have low knowledge acquisition demands, such as completing word stems or playing games of chance. The Synapse Project had two receptive engagement groups: a Placebo group, where participants worked alone on activities low on working memory and episodic memory demand by doing tasks that required only knowledge or low cognitive effort; and a Social group that engaged in social, group-based activities but no formal learning or training. At the end of the three-month Synapse Project intervention, the three productive groups showed significant improvement in episodic memory relative to the receptive groups (Park et al., 2013), providing experimental evidence for this theoretical distinction.

This Study

As noted, there are few studies of engagement and cognition in older adults. In this study, we focused on the impact of training older adults in a novel technology that required sustained cognitive challenge to further test the hypothesis that productive engagement enhances cognitive function in older adults. Specifically, older adults who were computer novices were trained to become proficient users of a tablet computer using the iPad, which can be flexibly employed to perform many tasks associated with daily living. Training in new technology was chosen because mastery in technology among older adults has been shown to increase independence in old age and improve perceived life quality (e.g., Czaja, Guerrier, Nair, & Landauer, 1993; Mynatt & Rogers, 2001).
Therefore, the goal of the iPad intervention was to investigate a novel form of engagement not previously studied in the literature that had high cognitive demands. Given the scant literature that exists, we wanted to determine if a qualitatively different task from those studied earlier, that nevertheless met the criteria for productive engagement, would show facilitation effects relative to receptive engagement conditions. In addition, iPad training had the added advantage of providing older adults with new ways to accomplish tasks that are relevant for maintaining independence in older adulthood, such as shopping, banking, communication, and securing medical care. Specifically, the wide range of available software applications (apps) for the iPad and their diverse uses provides a nearly endless way for older adults not only to learn challenging new activities, but to tailor the learning to an individual’s real-life needs. The portability and usability (e.g., touch screen, adjustable font, or icon size) of tablet computers provide easy access to computer technology for older adults who have a wide range of motor and visual abilities.

In this study, we recruited participants with little or no computer experience to commit at least 15 hr each week to a combination of group classes, homework assignments, and other activities using the iPad. Participants were exposed to a structured curriculum for 5 hr each week in a learning environment with a highly knowledgeable instructor and were required to spend an additional minimum of 10 hr each week working on detailed assignments related to the weekly curriculum. The intervention required these novice users who were learning this new technology to engage in sustained activation of reasoning, executive function, and memory, with a new task or learning challenge presented as soon as a particular skill was mastered.
We designed the iPad activity schedule to mirror the structure of activities from the Synapse Project, which, as noted earlier, included digital photography, quilting, social, and placebo groups (Park et al., 2013). Because of the cost and time demand inherent in engagement intervention studies, data from the two receptive engagement groups (Social and Placebo) in Park et al., 2013, were also used in this study as comparison control groups (see Methods section). The dual use of the receptive groups was planned a priori for both this study as well as for the Synapse Project (Park et al., 2013).

Methods

Participants

The full sample across the three conditions (iPad and two control groups) consisted of 54 older adults. Eighteen of these participants comprised the iPad intervention group, and there were 18 participants included from each of the Synapse control groups, which were matched on age, education, and gender to the iPad participants. All participants were community dwelling and were between the ages of 60 and 90 years with a high school education or greater. In addition, the participants were fluent in English, spent less than 10 hr a week outside the home on volunteer or work activities, and also had limited experience with computers and no experience with tablet computers. Additional eligibility requirements included a minimum score of 20/40 on the Snellen eye chart (Snellen, 1863) after correction, a score of 26 or greater on the Mini-Mental State Examination (Folstein, Folstein, & McHugh, 1975), and no history of major psychiatric or neurologic disorders.

Recruitment

The participants for the iPad intervention were recruited simultaneously with the Synapse Project. An eligible applicant for both projects was randomly assigned to either the iPad intervention or the Synapse Project.
The Synapse Project was a very large intervention with more than 250 participants and six different experimental groups, and had been an ongoing project for almost three years when the iPad intervention was initiated in 2011. As noted earlier, the iPad intervention was designed with a nearly identical structure to the Synapse groups, which allowed control participants in Synapse to also serve as control participants for the iPad intervention. Importantly, the recruitment procedures and screening criteria were the same across the two studies. Recruitment was conducted via advertisements, mass mailings, and community postings. All participants attended an enrollment meeting where details of the study and the requirement of random assignment to conditions were explained to them.

Participants communicated freely with one another within each of the three study groups, but had no communication or exposure across groups. Since the Synapse Project was designed to be a much larger project than the iPad intervention, there were more participants in the Synapse control groups than in the iPad intervention. There were 39 participants in the original Synapse Placebo control condition and 36 in the Synapse Social control condition compared with the 18 participants in the iPad intervention. To equate the numbers for the three groups, the 18 participants from the iPad intervention who completed the program were matched on age, education, and gender to 18 participants from the Social and Placebo control conditions in the Synapse Project (Park et al., 2013). Participants for the three resulting groups did not differ in their demographic characteristics (Table 1).
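The group-equivalence checks summarized in the Table 1 note (analysis of variance for continuous demographics, chi-square tests for categorical ones) can be sketched as below. This is a minimal illustration with synthetic data, not the study's analysis code: the group sizes and rough means follow the text, but the individual values and the female/male counts are invented.

```python
# Sketch of demographic-equivalence testing across three groups of n = 18
# (iPad, Placebo, Social). Data are synthetic; only the procedure matches.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical ages drawn around the reported overall mean (74.74, SD 6.13)
ages = [rng.normal(74.7, 6.1, 18) for _ in range(3)]
f_stat, p_age = stats.f_oneway(*ages)  # one-way ANOVA on a continuous variable

# Hypothetical female/male counts per group (rows = groups, cols = categories)
counts = np.array([[13, 5], [15, 3], [15, 3]])
chi2, p_sex, dof, _ = stats.chi2_contingency(counts)

print(f"ANOVA on age: F = {f_stat:.2f}, p = {p_age:.3f}")
print(f"Chi-square on gender: chi2 = {chi2:.2f}, df = {dof}, p = {p_sex:.3f}")
```

A nonsignificant result on every such test is what licenses the "ns" column in Table 1, i.e., the matched groups are treated as demographically equivalent at baseline.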
Attrition

Of the original 25 participants who were recruited for the iPad intervention, 7 participants failed to complete the full intervention and posttesting: 6 dropped out due to serious health or personal issues (e.g., new diagnosis of cancer, seriously ill spouse) and 1 was excluded due to insufficient hours logged in the program despite reminders. Those who dropped out did not significantly differ from the retained sample in age (t = 1.043, p = .315) and education (t = −0.451, p = .664). Note that those who dropped out were not from a specific age group (age range = 67-80 years) or education level (education in years range = 13-18). Of the original Synapse participants in the Placebo condition, all participants were retained. For the Social condition, 12 dropped out, and of those, 7 withdrew from the study due to health reasons, and 5 due to an inability to commit enough time.

Study Overview

The iPad intervention program consisted of planned activities that required continuous cognitive challenge by engaging novice tablet computer users in structured lessons and assignments, which involved constant new learning in the use of numerous applications for the device. In contrast, both the Placebo and Social groups engaged in receptive activities that required no structured learning or training, and minimal cognitive challenge. The three conditions—the iPad, Placebo, and Social groups—are fully described in the Productive Engagement Condition and Receptive Engagement Conditions sections below.

Productive Engagement Condition

iPad group

Participants attended a 10-week program, where they were required to spend at least 15 hr each week learning a new set of skills associated with the iPad. This included two 2.5-hr training classes that were held at the Synapse site each week, while the remaining 10 hr were spent working on homework assignments.
All classes were taught by the same instructor and the activities followed a detailed curriculum. The instructor was available during business hours at the Synapse site and students could consult with the instructor as well as work with one another in the space. The first week of classes focused on learning the basic functions and navigation of the iPad (e.g., hardware controls, software settings, volume) and discovering the variety of apps available. Subsequent weeks were organized by theme, where participants learned the function and use of apps related to that theme for 1 week. For example, for one theme, “Connectivity and Social Networking,” participants learned how to “follow” each other on Twitter (Twitter Inc., 2012), upload photos, and play games that use social networks as platforms, such as Words with Friends (Zynga Inc., 2009). Another theme, “Health and Finance,” focused on having participants explore apps that could provide tips and resources on health and track different types of financial resources. Besides interacting with fellow participants in the iPad intervention, participants learned how they could use apps to connect with their grandchildren and friends as well. Throughout the program, participants chronicled their experiences with entries in journaling apps such as ScrapPad (Album tArt LLC, n.d.). To maintain participation and monitor program adherence, participants filled in a log documenting the amount of time they spent on the iPad each week. A detailed curriculum can be found in the Supplementary Appendix. The participants in the iPad intervention spent a mean total of 219.76 hr over the 10-week period (standard deviation [SD] = 27.67), averaging considerably more than the 15 hr per week minimum. The instructor was available to the participants all week during business hours, and participants frequently spent time working with the instructor and each other outside of training hours.
Receptive Engagement Conditions

Placebo group

Participants completed cognitive activities for 15 hr per week that were low in cognitive demand, frequently relied on world knowledge, and involved no active skill acquisition. Activities included playing games of chance, watching movies, completing knowledge-based word puzzles, reading popular articles from informative magazines, and listening to classical music or to National Public Radio (NPR) shows. All activities were performed at home, so this group received minimal social stimulation. Participants in this condition came to the research site once a week and met with a group leader. They were assigned 5 hr of activities from a “core curriculum” that were common to all participants in the Placebo group. Then, each participant selected 10 additional hours of similar activities from what we called the “brain library.” This library contained a wide variety of DVDs, CDs, and magazines that were comprised of five categories: humor (e.g., comedy DVD), learning (e.g., magazines), music (CDs), puzzles and games (e.g., crossword puzzles), and classic movies. Participants were told that the activities were designed to facilitate cognitive improvement with resources that were readily available (e.g., TV, radio, and the library).

Table 1. Demographic Information

                      Total          iPad            Placebo         Social          Significance
N                     54             18              18              18              —
Age                   74.74 (6.13)   74.89 (6.49)    74.50 (5.79)    74.83 (6.44)    ns
Years of education    15.63 (2.40)   15.28 (2.67)    15.44 (2.31)    16.17 (2.23)    ns
Female, %             79.6           72.2            83.3            83.3            ns
Minority, %           18.5           27.8            16.7            11.1            ns
Total program hours   —              219.88 (27.58)  226.22 (28.04)  226.97 (24.92)  ns

Note: Mean differences were tested with analysis of variance for continuous variables, and with Chi-square tests for categorical variables. Standard deviation is in parentheses. ns = not significant.
Participants logged the time they spent in a diary and also completed descriptive questions about the tasks they completed to verify compliance. We note that this group (and other groups from the Synapse Project) completed 12 weeks of participation and were then tested during Weeks 13 and 14. They spent a total of 226.22 hr across the 12 weeks (SD = 28.04).

Social group

The Social group was designed to replicate the camaraderie and social interactions that occurred in a group learning setting such as that experienced by the iPad group, while excluding an active learning environment. Similar to the other two groups, the Social group was required to spend a minimum of 15 hr in social activities, with 5 hr prescribed for all, and 10 that were selected by participants. The prescribed activities were organized around weekly structured topics, such as travel, art, or history, and were heavily reliant on existing knowledge rather than learning new or novel information. Like the iPad group, participants in the Social group attended 2.5-hr structured sessions twice a week. These sessions involved discussion of the weekly topic and included sharing memories, stories, and possessions that were related to the topic, and sometimes a field trip to a local community facility related to the topic (e.g., art museum). In addition to the two weekly sessions, participants were given a list of activities to choose from each week and selected a minimum of 10 hr in activities that were relatively low in cognitive demand and included things like recipe exchanges, covered dish luncheons, watching situation comedies together, and playing games with a low level of cognitive challenge. The activities were designed to be respectful of the older adults’ maturity and function but minimized activities that had high working memory, reasoning, or episodic memory requirements (e.g., playing bridge or chess).
Like the Placebo group, these participants had 12 weeks of participation and then were tested in Weeks 13 and 14. They spent a mean total of 226.97 hr in the study (SD = 24.92). We note here that although the iPad group had a 10-week intervention compared with the 12 weeks for the Placebo and Social groups, the total hours spent during the entire program were comparable across all three groups (iPad: M = 219.76, SD = 27.67; Placebo: M = 226.22, SD = 28.04; Social: M = 226.97, SD = 24.92). The difference in weeks was a result of constraints on space and the availability of the iPad instructor. Nevertheless, total time in the intervention was equated, as participants in the iPad group spent more time per week for fewer weeks.

Cognitive Testing

All participants completed the same battery of cognitive and psychosocial testing both before and after the training period. The participants were compensated $100 for completing pretesting and $140 for completing posttesting. The assessment protocol was the same for pretest and posttest and, whenever possible, posttesting for each participant was administered by the same tester, on the same day of the week, and at the same time of day as their pretest session. Testing included both paper-and-pencil and computerized tasks. All computer tasks were conducted on Dell desktop computers running Windows XP, using a Wacom touch-screen monitor. Each testing session was conducted by a trained tester who was not involved in the intervention training and was blind to group assignment.

The tasks included in the analysis are organized by constructs that were derived from the original Synapse Project (Park et al., 2013), which had a large enough sample size to verify both construct reliability and test-retest reliability of the grouped cognitive measures. A summary of the four constructs and the tasks associated with them is as follows:

1. Processing speed was measured using the Digit Comparison Task (Salthouse & Babcock, 1991).
Participants made same/different judgments in a fixed interval about digit strings. There were three levels of task difficulty, with number of correct comparisons at each level as indicators of the speed construct.

2. Mental control was measured using the Cogstate Identification Task (http://www.cogstate.com) and three modified versions of the Flanker task: Flanker Center Letter, Flanker Center Symbol, and Flanker Center Arrow (Eriksen & Eriksen, 1974). The Cogstate Identification Task measures attention and the three Flanker tasks measure the ability to suppress or inhibit attention to a salient feature of the presented stimuli.

3. Episodic memory was measured using the Modified Hopkins Verbal Learning Task (HVLT; Brandt, 1991) and the Cambridge Neuropsychological Test Automated Battery (CANTAB) Verbal Recognition Memory (Robbins et al., 1994). For both tasks, participants studied lists of words. Three measures of recall were used as indicators of the construct (immediate recall from HVLT and CANTAB, and delayed 20-min recall from HVLT).

4. Visuospatial processing was measured by a shortened version of the Raven’s Progressive Matrices (Raven, Raven, & Court, 1998), CANTAB Stockings of Cambridge (Robbins et al., 1994), and CANTAB Spatial Working Memory (Robbins et al., 1994). The first two are measures of visuospatial reasoning and the third task measures visuospatial working memory.

Results

Overview of Analyses

The aim of the analyses was to determine whether cognitive performance, as a result of the iPad intervention, improved more from pretest to posttest than performance in the two control conditions (Social and Placebo).
Cognitive Constructs

To create the four cognitive constructs, we followed the procedures described in The Advanced Cognitive Training for Independent and Vital Elderly trial (Ball et al., 2002) by first creating a normalized distribution of the target dependent variables from each measure by pooling together pretest and posttest scores and then applying a rank-ordered Blom transformation (Blom, 1958). Then, a composite score for each construct was created by averaging the transformed scores associated with the appropriate construct measures. Cronbach’s alpha (α) was calculated to test the internal consistency of each construct, and all showed high consistency as shown in Table 2.

Intervention Analysis

Initial pretest performance of the three groups did not significantly differ across all four constructs, all F < 1.9, p = ns (see Table 3 for pretest analysis of variances [ANOVAs]). To evaluate the effects of the interventions on cognitive performance, we conducted a mixed ANOVA on each cognitive construct with Group as a between-subjects variable (iPad, Social, and Placebo) and Time (pretest or posttest) as the within-subject variable. After the overall mixed ANOVA was completed, additional follow-up testing was performed to further evaluate constructs where a significant Group × Time interaction was observed. In addition to the ANOVAs, we calculated the net effect size of each of the intervention groups as conducted by Ball and colleagues (2002). Specifically, the net effect is represented by the gain in performance (from pretest to posttest) normalized by pretest sample variance using the following formula:

[(Bi_post − Bi_pre) − (Bp_post − Bp_pre)] / s_pre

where s_pre is the standard deviation at pretest, Bi_pre and Bi_post represent the pre- and posttest Blom-transformed scores for the intervention group, respectively, and Bp_pre and Bp_post represent the pre- and posttest Blom-transformed scores for the control group, respectively.
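The construct-scoring and effect-size steps described above can be sketched as follows. This is a minimal sketch assuming numpy and scipy are available; the scores are simulated, the group sizes merely echo the study's n = 18, and the helper name `blom` is ours, not from the published analysis code.

```python
# Sketch: pool pre/post scores, apply a rank-ordered Blom (1958)
# transformation toward normality, then compute the net effect size
#   [(Bi_post - Bi_pre) - (Bp_post - Bp_pre)] / s_pre.
# All data below are synthetic; only the procedure follows the text.
import numpy as np
from scipy import stats

def blom(scores):
    """Rank-based Blom transformation: Phi^-1((rank - 3/8) / (n + 1/4))."""
    ranks = stats.rankdata(scores)
    return stats.norm.ppf((ranks - 3.0 / 8.0) / (len(scores) + 0.25))

rng = np.random.default_rng(1)
n = 18
pre_i, post_i = rng.normal(50, 10, n), rng.normal(55, 10, n)  # intervention
pre_p, post_p = rng.normal(50, 10, n), rng.normal(51, 10, n)  # control

# Pool all scores so both time points share a single transformation
pooled = blom(np.concatenate([pre_i, post_i, pre_p, post_p]))
bi_pre, bi_post = pooled[:n], pooled[n:2 * n]
bp_pre, bp_post = pooled[2 * n:3 * n], pooled[3 * n:]

s_pre = np.concatenate([bi_pre, bp_pre]).std(ddof=1)  # pretest SD
net_effect = ((bi_post.mean() - bi_pre.mean())
              - (bp_post.mean() - bp_pre.mean())) / s_pre
print(f"net effect size: {net_effect:.2f}")
```

In this framing, the control group's pre-to-post change is subtracted out, so the net effect isolates the gain attributable to the intervention in pretest-SD units.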
Although no detectable differences were observed in pretest cognition scores, we further evaluated the impact of pretest scores on gains by conducting analyses of covariance (ANCOVAs), with cognitive change scores (pretest − posttest) as the dependent variable, Group as the between-subjects variable, and pretest score as the covariate. This allowed us to observe group differences in change scores while controlling for differences in cognition at pretest.

Results

The results yielded evidence for greater improvement over time in the iPad intervention than in the control groups for processing speed and episodic memory. Specifically, the overall ANOVA on processing speed resulted in a main effect of Time (F(1,51) = 7.43, p = .009) and a Group × Time interaction (F(2,51) = 4.35, p = .018). Follow-up comparisons showed that the iPad group improved in processing speed more over time than both the Placebo group (F(1,34) = 5.80, p = .022) and the Social group (F(1,34) = 8.35, p = .007). We found similar significant effects for episodic memory, with a main effect of Time (F(1,51) = 42.23, p < .001) and a Group × Time interaction (F(2,51) = 7.31, p = .002).

Table 2.
Reliability for Cognitive Construct Measures

Cognitive construct     | Measure                                  | Dependent variable                                     | Composite reliability
Processing speed        | Digit Comparison                         | Total correct on trials with three items               | .86
                        |                                          | Total correct on trials with six items                 |
                        |                                          | Total correct on trials with nine items                |
Mental control          | Cogstate Identification                  | Log RT to a 2-forced-choice decision                   | .81
                        | Flanker Center Letter                    | RT for incongruent trials that follow congruent trials |
                        | Flanker Center Symbol                    | RT for incongruent trials that follow congruent trials |
                        | Flanker Center Arrow                     | RT for incongruent trials that follow congruent trials |
Episodic memory         | CANTAB Verbal Recall Memory              | Total correct on immediate free recall                 | .75
                        | Hopkins Verbal Learning Task (immediate) | Total correct on trials 1, 2, and 3                    |
                        | Hopkins Verbal Learning Task (delayed)   | Total correct after a 20-min delay                     |
Visuospatial processing | Modified Raven's Progressive Matrices    | Accuracy out of 18 items                               | .69
                        | CANTAB Stockings of Cambridge            | Problems solved in the minimum number of moves         |
                        | CANTAB Spatial Working Memory            | Between errors (a); Strategy score (a)                 |

Notes: Composite reliabilities were calculated using Cronbach's alpha (α). (a) Higher scores reflect worse performance. RT = reaction time.

The Gerontologist, 2016, Vol. 56, No. 3

Again, the interaction was significant because the iPad group improved more over time than both the Placebo group (F(1,34) = 10.44, p = .003) and the Social group (F(1,34) = 12.22, p = .001). The Placebo and Social groups did not differ in their change over time in processing speed or episodic memory (F < 2.32, p = ns). These effects, except for processing speed between iPad and Placebo, remained significant after correcting for multiple comparisons with a Bonferroni correction. No significant effects were observed for the mental control or visuospatial processing constructs.
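The Bonferroni correction used for the follow-up comparisons can be sketched as follows. Assuming the three pairwise group contrasts reported above as the comparison family, the raw p = .022 (iPad vs. Placebo, processing speed) adjusts to .066, which is consistent with it being the one effect reported as not surviving correction:

```python
def bonferroni_adjust(p_values):
    # Multiply each raw p-value by the number of comparisons m,
    # capping the adjusted value at 1.0.
    m = len(p_values)
    return [min(1.0, p * m) for p in p_values]
```

With `bonferroni_adjust([0.022, 0.007, 0.003])`, only the first contrast exceeds the .05 threshold after adjustment.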
Supplementing the ANOVA results, the net effect sizes associated with processing speed and episodic memory in the iPad group were congruent with the statistical results. The net effect sizes for the four constructs for the relevant group contrasts (iPad vs. Placebo, iPad vs. Social, Placebo vs. Social) are reported in Table 4. The mean normalized gain scores for all four constructs in the three groups are shown in Figure 1. In addition, to further explicate the facilitation effects that we observed in the iPad condition for processing speed and episodic memory, we present individual gain scores for each participant as a function of Group in Figure 2. In a final analysis, ANCOVAs were performed for each cognitive construct with pretest score as the covariate. As in the earlier analysis, we found a significant effect for processing speed (F(2,50) = 4.34, p = .018) and episodic memory (F(2,50) = 6.279, p = .004), but not for mental control or visuospatial processing (F < 1.7, p = ns). These results confirmed that differences in pretest scores did not drive the observed Group × Time interactions reported previously.

Discussion

The main finding of this study was that participation in the iPad intervention resulted in enhanced performance on two cognitive constructs (processing speed and episodic memory) compared with both a Social control and a Placebo control. The results showed that productive engagement, which requires sustained mental effort, is more supportive of two major cognitive constructs in older adults than receptive engagement, which consists of less cognitively demanding activities in which little new learning and skill acquisition takes place. Although some individuals in the two receptive control groups also experienced some cognitive improvements (Figure 2), the iPad group showed significantly more improvement over time.
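The intuition behind the ANCOVA's pretest adjustment is that each change score is first stripped of its linear dependence on the pretest score, and the adjusted scores are then compared across groups. A minimal sketch of that residualization step (not the authors' actual analysis code), using simple ordinary least squares:

```python
def residualize(y, x):
    # Ordinary least-squares residuals of y after removing the linear
    # effect of the covariate x (here: change scores adjusted for pretest).
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = my - slope * mx
    return [yi - (intercept + slope * xi) for xi, yi in zip(x, y)]
```

Comparing group means of the residualized change scores approximates the covariate-adjusted group effect that the ANCOVA tests formally.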
The increases in the control groups could be due to repeated-testing effects, but it is also possible that the control interventions produced slight cognitive enhancements. In light of the large body of evidence that even healthy older adults experience age-related declines across multiple facets of cognition (Park & Shaw, 1992; Park et al., 1996; Salthouse, 1996; Salthouse & Babcock, 1991), one major goal of interventions is to improve or maintain cognition in order to promote independence and quality of life. The results of this study add to the sparse body of literature suggesting that engagement in mentally challenging everyday activities can be supportive of cognition. Specifically, previous studies such as Experience Corps (Carlson et al., 2008) and the Synapse Project (Park et al., 2013) both found that older adults showed enhanced memory performance post-intervention, which is also the strongest finding of this study. Importantly, this study was driven by the theoretical distinction between productive and receptive engagement, which predicts that not all types of engagement are equally beneficial to cognition (Park et al., 2007).

Table 3. Pretest and Posttest Cognitive Construct Scores, and Pretest ANOVA

Cognitive construct     | Time | iPad           | Placebo        | Social         | F     | p
Processing speed        | Pre  | −0.065 (1.04)  | 0.154 (0.801)  | −0.088 (0.831) | 0.401 | .671
                        | Post | 0.205 (1.15)   | 0.097 (0.718)  | −0.201 (0.761) |       |
Mental control          | Pre  | 0.066 (0.510)  | 0.011 (0.880)  | −0.076 (0.945) | 0.147 | .863
                        | Post | 0.111 (0.508)  | 0.169 (0.885)  | 0.241 (0.804)  |       |
Episodic memory         | Pre  | −0.258 (0.592) | 0.020 (0.945)  | 0.238 (0.840)  | 1.72  | .190
                        | Post | 0.397 (0.460)  | 0.165 (0.826)  | 0.471 (0.700)  |       |
Visuospatial processing | Pre  | 0.231 (0.708)  | −0.232 (0.684) | 0.013 (0.751)  | 1.89  | .161
                        | Post | 0.415 (0.640)  | 0.029 (0.730)  | 0.064 (0.683)  |       |

Note: Values are mean Blom-transformed scores (SD); F and p refer to the pretest ANOVA. ANOVA = analysis of variance.

Interventions that rely on sustained cognitive challenge

Table 4.
Net Effect Sizes of Cognitive Constructs

Cognitive construct     | iPad vs. Placebo | iPad vs. Social | Placebo vs. Social
Processing speed        | .43              | .37             | −.06
Mental control          | −.35             | −.14            | .20
Episodic memory         | .52              | .62             | .11
Visuospatial processing | .18              | −.11            | −.29

Note: Net effect sizes represent gain in performance (from pretest to posttest) normalized by pretest sample variance.

Figure 1. Mean standardized gain scores for iPad, Placebo, and Social. Error bars: ±1 SE.

Figure 2. Individual gain score (pretest adjusted to 0) for tasks with significant differences.

(and typically novelty) will be more facilitative than non-cognitively challenging activities. Park and colleagues (2013) previously demonstrated that two other forms of productive engagement, learning digital photography and/or quilting, facilitated enhanced episodic memory performance. The iPad intervention differed considerably in format and substance from quilting or photography, but shared the requirement that individuals engage in considerable new learning, mental challenge, and self-initiated processing. One important direction for future research is to assess whether an increasing degree of cognitive challenge facilitates greater cognitive improvement. Measuring the rated difficulty of engaging activities, or manipulating cognitive load during engagement, would be an important next step in evaluating the role of mental challenge in facilitating cognitive health.

Another direction for future research would be to determine whether engagement promotes neural scaffolding. As noted earlier, the Scaffolding Theory of Aging and Cognition (STAC) model (Park & Reuter-Lorenz, 2009) proposes that neural scaffolding, the recruitment of additional neural circuits to compensate for declining brain structures, develops in response to continuous engagement with novel and cognitively challenging tasks.
As an example, recent evidence demonstrated that older adults who experienced cognitive gains after video game training also showed more efficient neural function, reflected in enhanced electroencephalography (EEG) signals in regions associated with cognitive control (Anguera et al., 2013). We recognize that more research is needed to confirm the underlying brain mechanisms that may facilitate the cognitive improvements observed in this study, but note that the STAC model provides a theoretical framework for understanding the cognitive changes that resulted from the engagement intervention. Future studies incorporating neuroimaging are likely to provide evidence of the mechanisms underlying the enhancement effects associated with productive engagement.

Importantly, we note that programs similar to the iPad intervention, which focus on a broad lifestyle engagement approach, could easily be implemented in a community setting. In this study, participants met for group classes at a research site resembling a community center. The program not only consisted of a core curriculum organized by themes and related apps, but also encouraged participants to discover and interact with apps that were personally relevant. As shown in a recent evaluation of a community-based computer training program (Czaja, Lee, Branham, & Remis, 2012), the advantage of programs similar to the iPad intervention is that they can be effectively implemented with community volunteers.

Finally, we note that the overall experience of those who participated in the iPad intervention was extremely positive, and, according to the postcompletion survey, all 18 participants obtained a tablet device after the completion of the program (either by purchase or as a gift).
Therefore, facilitation effects could be maintained or enhanced following the iPad training; however, future research should collect longitudinal data from follow-up cognitive testing to quantify the long-term retention of benefits after the intervention.

Conclusion

In sum, the iPad intervention was designed to facilitate cognitive improvements and to offer comprehensive training on a cutting-edge technology that could easily be implemented among community-dwelling older adults. Participants were able to access the wide variety of services and activities available through the apps, both during and after the completion of the program. Thus, the program was successful at improving cognitive performance through productive engagement and provided the added benefit of technological mastery.

Supplementary Material

Please visit the article online at http://gerontologist.oxfordjournals.org.

Funding

This research was supported by a National Institute on Aging grant (5R01AG026589-02): Active Interventions for the Aging Mind, and a National Institute on Aging American Recovery and Reinvestment Act (ARRA) Administrative Supplement grant (RFA NOT-OD-09-056 to D. C. Park).

Acknowledgements

The authors thank Katie Berglund for serving as the training instructor in the project, and Michele Sauerbrei and Carol Fox for assisting her. The authors also thank Janice Gebhard, Whitley Aamodt, Carol Bardhoff, and Marcia Wood for testing participants, and Joanne Pratt for her assistance in project discussions and meetings.

References

Album tArt LLC. (n.d.). ScrapPad - Scrapbook for iPad. Retrieved from https://itunes.apple.com/us/app/scrappad-scrapbook-for-ipad/id353143273?mt=8
Alzheimer's Association. (2010). Changing the trajectory of Alzheimer's disease: A national imperative. Chicago, IL: Author.
Anguera, J. A., Boccanfuso, J. S., Rintoul, J. L., Al-Hashimi, O., Faraji, F., Janowich, J., … Gazzaley, A. (2013).
Video game training enhances cognitive control in older adults. Nature, 501, 97–101. doi:10.1038/nature12486
Ball, K., Berch, D. B., Helmers, K. F., Jobe, J. B., Leveck, M. D., Marsiske, M., … Willis, S. L. (2002). Effects of cognitive training interventions with older adults. Journal of the American Medical Association, 288, 2271–2281. doi:10.1001/jama.288.18.2271
Basak, C., Boot, W. R., Voss, M. W., & Kramer, A. F. (2008). Can training in a real-time strategy video game attenuate cognitive decline in older adults? Psychology and Aging, 23, 765–777. doi:10.1037/a0013494
Blom, G. (1958). Statistical estimates and transformed beta-variables. New York: Wiley.
Brandt, J. (1991). The Hopkins Verbal Learning Test: Development of a new memory test with six equivalent forms. Clinical Neuropsychologist, 5, 125–142. doi:10.1080/13854049108403297
Carlson, M. C., Saczynski, J. S., Rebok, G. W., Seeman, T., Glass, T. A., McGill, S., … Fried, L. P. (2008). Exploring the effects of an "everyday" activity program on executive function and memory in older adults: Experience Corps. The Gerontologist, 48, 793–801. doi:10.1093/geront/48.6.793
Czaja, S. J., Guerrier, J. H., Nair, S. N., & Landauer, T. K. (1993). Computer communication as an aid to independence for older adults. Behaviour & Information Technology, 12, 197–207. doi:10.1080/01449299308924382
Czaja, S. J., Lee, C. C., Branham, J., & Remis, P. (2012). OASIS connections: Results from an evaluation study. The Gerontologist, 52, 712–721. doi:10.1093/geront/gns004
Eriksen, B. A., & Eriksen, C. W. (1974).
Effects of noise letters upon the identification of a target letter in a nonsearch task. Perception & Psychophysics, 16, 143–149. doi:10.3758/BF03203267
Folstein, M. F., Folstein, S. E., & McHugh, P. R. (1975). "Mini-mental state": A practical method for grading the cognitive state of patients for the clinician. Journal of Psychiatric Research, 12, 189–198.
Gutchess, A. H., Welsh, R. C., Hedden, T., Bangert, A., Minear, M., Liu, L. L., & Park, D. C. (2005). Aging and the neural correlates of successful picture encoding: Frontal activations compensate for decreased medial-temporal activity. Journal of Cognitive Neuroscience, 17, 84–96. doi:10.1162/0898929052880048
Mynatt, E. D., & Rogers, W. A. (2001). Developing technology to support the functional independence of older adults. Ageing International, 27, 24–41. doi:10.1007/s12126-001-1014-5
Park, D. C., Gutchess, A. H., Meade, M. L., & Stine-Morrow, E. A. (2007). Improving cognitive function in older adults: Nontraditional approaches. The Journals of Gerontology, Series B: Psychological Sciences and Social Sciences, 62, 45–52.
Park, D. C., Lodi-Smith, J., Drew, L., Haber, S., Hebrank, A., Bischof, G. N., & Aamodt, W. (2013). The impact of sustained engagement on cognitive function in older adults: The Synapse Project. Psychological Science, 25, 103–112. doi:10.1177/0956797613499592
Park, D. C., & Reuter-Lorenz, P. (2009). The adaptive brain: Aging and neurocognitive scaffolding. Annual Review of Psychology, 60, 173–196. doi:10.1146/annurev.psych.59.103006.093656
Park, D. C., & Shaw, R. J. (1992). Effect of environmental support on implicit and explicit memory in younger and older adults. Psychology and Aging, 7, 632–642. doi:10.1037/0882-7974.7.4.632
Park, D. C., Smith, A. D., Lautenschlager, G., Earles, J. L., Frieske, D., Zwahr, M., & Gaines, C. L. (1996). Mediators of long-term memory performance across the life span. Psychology and Aging, 11, 621–637. doi:10.1037/0882-7974.11.4.621
Raven, J., Raven, J.
C., & Court, J. H. (1998). Manual for Raven's Progressive Matrices and Vocabulary Scale. San Antonio, TX: The Psychological Corporation.
Robbins, T. W., James, M., Owen, A. M., Sahakian, B. J., McInnes, L., & Rabbitt, P. (1994). Cambridge Neuropsychological Test Automated Battery (CANTAB): A factor analytic study of a large sample of normal elderly volunteers. Dementia and Geriatric Cognitive Disorders, 5, 266–281. doi:10.1159/000106735
Salthouse, T. A. (1996). The processing-speed theory of adult age differences in cognition. Psychological Review, 103, 403–428.
Salthouse, T. A., & Babcock, R. L. (1991). Decomposing adult age differences in working memory. Developmental Psychology, 27, 763–776. doi:10.1037/0012-1649.27.5.763
Schmiedek, F., Lovden, M., & Lindenberger, U. (2010). Hundred days of cognitive training enhance broad cognitive abilities in adulthood: Findings from the COGITO Study. Frontiers in Aging Neuroscience, 2, 27. doi:10.3389/fnagi.2010.00027
Snellen, H. (1863). Probebuchstaben zur Bestimmung der Sehschaerfe. Utrecht, Netherlands: PW van de Weijer.
Stine-Morrow, E. A. L., Parisi, J. M., Morrow, D. G., & Park, D. C. (2008). The effects of an engaged lifestyle on cognitive vitality: A field experiment. Psychology and Aging, 23, 778–786. doi:10.1037/a0014341
Tranter, L. J., & Koutstaal, W. (2008). Age and flexible thinking: An experimental demonstration of the beneficial effects of increased cognitively stimulating activity on fluid intelligence in healthy older adults. Neuropsychology, Development, and Cognition, 15, 184–207. doi:10.1080/13825580701322163
Twitter Inc. (2012). Twitter (Version 4.1.3). Retrieved from https://itunes.apple.com/us/app/twitter/id333903271?mt=8
Wilson, R. S., Barnes, L. L., Krueger, K. R., Hoganson, G., Bienias, J. L., & Bennett, D. A. (2005). Early and late life cognitive activity and cognitive systems in old age. Journal of the International Neuropsychological Society, 1, 400–407.
doi:10.1017/S1355617705050459
Wilson, R. S., Bennett, D. A., Bienias, J. L., Mendes De Leon, C. F., Morris, M. C., & Evans, D. A. (2003). Cognitive activity and cognitive decline in a biracial community population. Neurology, 61, 812–816. doi:10.1212/01.WNL.0000083989.44027.05
Wilson, R. S., Scherr, P. A., Schneider, J. A., Tang, Y., & Bennett, D. A. (2007). Relation of cognitive activity to risk of developing Alzheimer disease. Neurology, 69, 1911–1920. doi:10.1212/01.wnl.0000271087.67782.cb
Zynga Inc. (2009). Words with Friends (Version 4.12). Retrieved from https://itunes.apple.com/us/app/words-with-friends-free/id321916506?mt=8

INTRODUCTION

Physical injury to the skin triggers a well-coordinated series of events leading to wound closure. Wound healing occurs in three phases: inflammation, proliferation and remodeling. The inflammatory phase involves hemostatic activation (vasoconstriction, platelet aggregation and clot formation), vasodilation, inflammatory cell migration, phagocytosis and production of inflammatory mediators. In the proliferative phase, granulation tissue is formed, wound edges contract and epithelial cells migrate to cover the wound surface. During remodeling, tissue structure is reorganized to most closely resemble normal skin. This phase involves the disappearance of the majority of inflammatory cells and the deposition and maturation of proteoglycans and collagens that replace transient matrix elements.
Many soluble factors such as inflammatory cytokines (tumor necrosis factor [TNF]-α, interleukin [IL]-1β, IL-6), chemokines, and growth factors (platelet-derived growth factor [PDGF], epidermal growth factor [EGF], fibroblast growth factor [FGF], transforming growth factor [TGF]-β, keratinocyte growth factor [KGF7]) have been shown to regulate the wound healing process (1). Moreover, reactive oxygen and nitrogen species (ROS and RNS, respectively) are abundantly produced during wound healing and regulate various aspects of the process (2).

Mol Med 20:363-371, 2014 | El-Hamoly et al.

Activation of Poly(ADP-Ribose) Polymerase-1 Delays Wound Healing by Regulating Keratinocyte Migration and Production of Inflammatory Mediators

Tarek El-Hamoly,1,2,3* Csaba Hegedűs,1* Petra Lakatos,1 Katalin Kovács,1,4 Péter Bai,1,5,6 Mona A El-Ghazaly,2 Ezzeddin S El-Denshary,3 Éva Szabó,7 and László Virág1,4

1Department of Medical Chemistry, Faculty of Medicine, University of Debrecen, Debrecen, Hungary; 2Drug Radiation Research Department, National Centre for Radiation Research and Technology, Atomic Energy Authority, Cairo, Egypt; 3Department of Pharmacology & Toxicology, Faculty of Pharmacy, Cairo University, Cairo, Egypt; 4MTA-DE Cell Biology and Signaling Research Group, Debrecen, Hungary; 5MTA-DE Lendület Laboratory of Cellular Metabolism Research Group, Debrecen, Hungary; 6Research Center for Molecular Medicine, University of Debrecen, Debrecen, Hungary; 7Department of Dermatology, Faculty of Medicine, University of Debrecen, Debrecen, Hungary

Poly(ADP-ribosyl)ation (PARylation) is a protein modification reaction regulating diverse cellular functions ranging from metabolism, DNA repair and transcription to cell death. We set out to investigate the role of PARylation in wound healing, a highly complex process involving various cellular and humoral factors.
We found that topically applied poly(ADP-ribose) polymerase (PARP) inhibitors 3-aminobenzamide and PJ34 accelerated wound closure in a mouse model of excision wounding. Moreover, wounds also closed faster in PARP-1 knockout mice as compared with wild-type littermates. Immunofluorescent staining for poly(ADP-ribose) (PAR) indicated increased PAR synthesis in scattered cells of the wound bed. Expression of interleukin (IL)-6, tumor necrosis factor (TNF)-α, inducible nitric oxide synthase and matrix metalloproteinase-9 was lower in the wounds of PARP-1 knockout mice as compared with control, and the expression of IL-1β, cyclooxygenase-2, TIMP-1 and TIMP-2 was also affected. The level of nitrotyrosine (a marker of nitrating stress) was lower in the wounds of PARP-1 knockout animals as compared with controls. In vitro scratch assays revealed significantly faster migration of keratinocytes treated with 3-aminobenzamide or PJ34 as compared with control cells. These data suggest that PARylation by PARP-1 slows down the wound healing process by increasing the production of inflammatory mediators and nitrating stress and by slowing the migration of keratinocytes.

Online address: http://www.molmed.org
doi: 10.2119/molmed.2014.00130

*TE-H and CH are shared first authors.
Address correspondence to László Virág, Department of Medical Chemistry, Faculty of Medicine, University of Debrecen, Nagyerdei krt 98. H-4032 Debrecen, Hungary. Phone: +36-52-412-345; Fax: +36-52-412-566; E-mail: lvirag@med.unideb.hu.
Submitted July 4, 2014; Accepted for publication July 8, 2014; Epub (www.molmed.org) ahead of print July 8, 2014.

Understanding the complex regulatory circuitries operating in normal or pathological wound healing is of utmost importance for the successful treatment of wound healing abnormalities. These include impaired wound healing in diabetes, systemic sclerosis and chronic leg ulcers.
The treatment of chronic, nonhealing wounds represents a major health issue, with treatment costs estimated at 25 billion USD per year in the United States alone (3). Current modern or experimental treatments include dressings impregnated with collagenase, DNase, PDGF or nitric oxide-releasing compounds. In spite of the high number of novel and innovative approaches to cure nonhealing wounds, the efficient treatment of these diseases still represents an unmet medical need.

On the basis of the available literature, PARylation appeared to be a likely candidate for a regulator of the wound healing process (see below). PARylation is a covalent protein modification carried out by poly(ADP-ribose) polymerase (PARP) enzymes. PARPs cleave off nicotinamide from NAD+ and attach the remaining ADP-ribose moieties to suitable protein acceptors. By cleaving many NAD+ molecules and adding further ADP-ribose units to the protein-proximal first residue, these enzymes build a large, branched poly(ADP-ribose) (PAR) polymer on proteins. The polymer is degraded mostly by PAR glycohydrolase (PARG). PARP-1, -2 and -3 are DNA-damage-sensitive enzymes that become activated by DNA breaks. For example, PARP-1, the main PARP enzyme, has been shown to become activated by oxidative stress-induced DNA damage and to facilitate DNA break repair. Moreover, PARP-1 is a cofactor of nuclear factor (NF)-κB, the master regulator of the transcription of inflammatory mediators. PARylation also regulates a wide variety of molecular events and cellular functions including genome organization, replication, protein stability and cell death.

On the basis of the involvement of many of these PARylation-related reactions (oxidative stress response, expression of inflammatory cytokines and chemokines, cell proliferation and migration) in the wound healing process, we set out to investigate the role of PARylation in a mouse model of excision wounding.
MATERIALS AND METHODS

Animals

Animal experiments were approved by the Institutional Animal Care and Use Committee of the University of Debrecen (protocol number 9/2008/DEMÁB) and were carried out in accordance with the EU guidelines on animal protection (4). Adult male BALB/c mice (6 to 8 wks old) were used for assessing the effect of the PARP inhibitors 3-aminobenzamide (3-AB) and PJ34 (Sigma-Aldrich, St. Louis, MO, USA) on wound closure. In addition, male (6 to 8 wks old) homozygous PARP-1 knockout mice (PARP-1–/–) and their respective wild-type (PARP-1+/+) littermates (5) were also used. All animals were housed individually under controlled conditions with a 12-h light-dark cycle and were allowed free access to water and diet.

Wound Healing Model

Wound incision was performed as described previously in (6). Mice were anaesthetized with isoflurane (Abbott Animal Health, Abbott Park, IL, USA). The fur was removed from the back of the mice with a commercial depilatory cream, and the hairless skin was wiped with 70% ethanol. Mice were kept warm during anesthesia and surgery using a heat lamp. Two full-thickness 4-mm punch biopsies were made on the dorsal surface of BALB/c mice, and the wound beds were wiped with povidone-iodine to prevent infection. The PARP inhibitors PJ34 and 3-AB were applied locally to one of the wounds in hydrophilic cream at 100 μmol/L and 50 μmol/L concentrations, respectively. The control wound was treated with hydrophilic cream only. Treatments were repeated daily for 7 d, and wounds were covered with a surgical dressing to keep the cream in place. Wounding and treatment of PARP-1–/– and PARP-1+/+ mice (n = 6 for each genotype) were performed in the same way as detailed above.

Quantification of Wound Closure

Wounds from individual mice were digitally photographed daily for 7 d. All photographs were taken from an angle perpendicular to the wound, with a sterile ruler placed adjacent to the wounds for standardization (7).
The wound area was quantified using NIH ImageJ software (NIH, Bethesda, MD, USA), and wound closure was expressed as the ratio of the actual wound area to the initial wound size.

Tissue Harvesting

Tissue samples were collected from a separate cohort of mice (n = 6 per time point) on d 0, 1, 2, 4 and 8 after wounding. Under isoflurane anesthesia, wound tissue was harvested using a circular blade (6 mm). After the wounds had been collected, the animals were euthanized. One of the two wound samples from each animal was processed for histology while the other was snap frozen in liquid nitrogen for RNA and protein analysis (8).

Histological Examination

PAR was detected by immunostaining as formerly described in (9). In brief, immunofluorescent staining was performed on formalin-fixed skin tissues using a PAR-specific monoclonal antibody (clone 10H; Enzo Life Sciences, Farmingdale, NY, USA) and a fluorescein-based mouse-on-mouse immunodetection kit (Vector Laboratories, Peterborough, UK). Nuclei were counterstained with DAPI (Invitrogen, Life Technologies [Thermo Fisher Scientific Inc., Waltham, MA, USA]), and stainings were visualized with a Zeiss Axiolab fluorescent microscope.

RNA Extraction and Real-Time Quantitative PCR Analysis (RT-qPCR)

RT-qPCR was carried out as described in (10). In brief, total RNA from frozen skin tissue was isolated with TRIzol
By use of a high capacity-RT kit (Applied Biosystems [Thermo Fisher Scientific Inc.]), RT was performed in a mixture of 2 μL RT buffer, 0.8 μL dNTP, 2 μL random primer, 1 μL reverse transcriptase and diethylpyrocarbonate (DEPC) H2O up to 10 μL. PCR cycles (Veriti 96-Well Ther- mal Cycler, Thermo Fisher Scientific Inc.) were set as follows: 25°C for 10 min, 42°C for 60 min, 95°C for 5 min, 94°C for 5 min, followed by 35 cycles 94°C for 30 s, 67°C for 1 min and 74°C for 1 min. SYBR Green (Applied Biosys- tems [Thermo Fisher Scientific Inc.]) qPCR reactions were run in ABI 7500 thermal cycler (Applied Biosystems [Thermo Fisher Scientific Inc.]). Se- quences of primers are given in Supple- mentary Table 1. Expression values were normalized to the control gene glycer- aldehyde 3-phosphate dehydrogenase (GAPDH) expression. Western Blot Analysis The method of lysate preparation and immunoblotting was carried out as previ- ously reported by Kioka et al. (11). Briefly, frozen skin tissue (approximately 5 mg) was homogenized in 300 μL RIPA buffer (20 mmol/L Tris-HCL pH7.4, 150 mmol/L NaCl, 1 mmol/L EDTA, 1% Triton-X100, 1% sodium deoxycholate, 0.1% SDS) with PMSF (1 mmol/L) and protease inhibitor cocktail (5 mmol/L) added immediately before use. Protein concentrations were determined using BCA kit (Pierce, Rock- ford, IL, USA). Proteins were separated on 8% SDS- polyacrylamide gels, and then transferred onto nitrocellulose mem- branes. Nonspecific protein binding was blocked by incubating the membranes in 5% w/v BSA in Tris-buffered saline with Tween® 20 (TBS-T; 50 mmol/L Tris-HCl, pH 7.6, 150 mmol/L NaCl, and 0.1% v/v Tween-20). Membranes were incubated overnight at 4°C with mouse monoclonal anti-nitrotyrosine antibody (Cayman, Ann Arbor, MI, USA) or rabbit polyclonal anti-FGF antibody (Abcam, Cambridge, UK) at dilutions of 1:1000 and 1:500, re- spectively. 
After washes, membranes were incubated with either HRP-conjugated anti-mouse or anti-rabbit antibody at a dilution of 1:10,000 for 1 h at room temperature. Detection of β-actin with a mouse monoclonal antibody (Sigma-Aldrich; 1:5000 dilution, 1-h room-temperature incubation) served for normalization. Protein bands were detected using the ECL Plus kit (Pierce), and densitometry was carried out using NIH ImageJ software.

Scratch Assay

Confluent cultures of primary human keratinocytes seeded in 96-well plates were scratched with a small pipette tip using a Tecan Freedom EVO liquid-handling robot (Tecan Group Ltd., Männedorf, Switzerland). Damaged cells were removed by washing with PBS solution. A triplicate of freshly wounded samples was dried and used as control (0% healed samples). Cells were treated with PJ34 or 3-AB and incubated for 48 h. The cell culture medium was then replaced with Coomassie staining solution, and cells were stained for 20 min. Samples were then washed twice with PBS, and photos were taken under a Zeiss AxioVision microscope (Carl Zeiss Microscopy GmbH, Jena, Germany). Images were analyzed with TScratch software. The ratio of covered area was compared with that of freshly wounded samples.

Statistical Analysis

All values are presented as means ± SEM. Statistical analysis of experimental data was performed with one-way analysis of variance (ANOVA), followed by the Dunnett test for multiple comparisons. Results were considered statistically significant when p < 0.05. All supplementary materials are available online at www.molmed.org.

Figure 1. Two full-thickness cutaneous wounds (4-mm diameter) were made on the back of male BALB/c mice, and wounds were evaluated by daily digital photography. Wounds were treated topically with either vehicle (hydrophilic cream, left wound) or PJ34 (100 μmol/L in hydrophilic cream; right wound) for 7 consecutive d, starting 24 h after excision. The line graph represents the mean percent change relative to original wound size ± SEM (n = 8). *P < 0.05, **P < 0.01, ***P < 0.001: significantly different between PJ34-treated (solid line) and vehicle-treated (dashed line) wounds at the same time point.
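Both wound readouts above reduce to simple area ratios: in vivo, the wound area expressed as a percentage of the initial wound (as plotted in Figure 1), and in vitro, the fraction of the scratch re-covered relative to freshly wounded controls. A minimal sketch of the two calculations (function and variable names are illustrative, not taken from the paper's analysis scripts; areas would come from ImageJ or TScratch measurements):

```python
def relative_wound_area(daily_areas_mm2):
    # Express each day's wound area as a percentage of the initial (d 0)
    # area, mirroring the ImageJ-based quantification of wound closure.
    initial = daily_areas_mm2[0]
    return [100.0 * area / initial for area in daily_areas_mm2]


def percent_closed(open_area_t0, open_area_t):
    # Scratch assay: share of the original cell-free scratch area that has
    # been re-covered by migrating keratinocytes (0% = fresh wound).
    return 100.0 * (open_area_t0 - open_area_t) / open_area_t0
```

For example, a wound shrinking from 12 mm2 to 3 mm2 corresponds to 25% of the original area remaining, and a scratch whose open area drops from 1.0 to 0.25 (arbitrary units) is 75% closed.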
Wounds were treated topically with either vehicle (hydrophilic cream, left wound) or PJ34 (100 μmol/L in hydrophilic cream; right wound) for 7 consecutive days, starting 24 h after excision. The line graph represents the mean percent change relative to original wound size ± SEM (n = 8). *P < 0.05, **P < 0.01, ***P < 0.001: significantly different between PJ34-treated (solid line) and vehicle-treated (dashed line) wounds at the same time point.

RESULTS

To investigate the role of PARylation in wound healing, we set up a mouse model of excision wounding with two wounds made on the back of each mouse. One of the wounds on each mouse was treated with a hydrophilic cream, whereas the other wound was covered with hydrophilic cream containing the PARP inhibitor 3-AB (50 μmol/L). We followed wound closure over time and found that 3-AB significantly accelerated the closure of wounds (Figure 1).

To confirm our results obtained with 3-AB, we also applied another PARP inhibitor, PJ-34. Similarly to 3-AB, the application of PJ-34 also stimulated the wound-healing process (Figure 2).

Since PARP-1 is responsible for the bulk of PAR synthesis in oxidative stress situations such as wound healing, we hypothesized that the effects of PARP inhibitors are mainly due to inhibition of PARP-1. Indeed, wounds on PARP-1 knockout animals healed faster than wounds on wild-type mice (Figure 3), confirming that activation of PARP-1 leads to delayed wound healing.

Expression of PARP-1 mRNA increased between d 0 and d 4, but declined by d 8 (Figure 4A). Basal expression of PARP-2 was much lower than that of PARP-1, and it also showed a moderate increase up to d 4 (Figure 4B). PARylation, however, is not primarily regulated at the expression level but mostly at the activity level. Appearance of PAR, the product of the PARP-catalyzed reaction, in the wound bed (Figure 4C) indicates that PARylation indeed takes place during wound healing.
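The wound-closure readout behind Figures 1–3 is each wound's daily photographed area expressed as a percentage of its own day-0 area, summarized as mean ± SEM across animals. A minimal sketch of that calculation is below; the function names and the example numbers are illustrative assumptions, not taken from the authors' analysis pipeline:

```python
# Sketch of the wound-closure quantification: daily wound areas (mm^2,
# from digital photographs) converted to percentages of the day-0 area,
# then summarized as mean +/- SEM across animals. Illustrative only.

def percent_of_original(areas_mm2):
    """Convert a day-by-day series of wound areas into percentages of
    the day-0 area (the y-axis of the closure curves)."""
    day0 = areas_mm2[0]
    if day0 <= 0:
        raise ValueError("day-0 wound area must be positive")
    return [100.0 * a / day0 for a in areas_mm2]

def mean_sem(values):
    """Mean and standard error of the mean across animals, matching
    the 'mean +/- SEM' reporting in the figures."""
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / (n - 1)  # sample variance
    return mean, (var / n) ** 0.5

# Hypothetical series for one wound and hypothetical day-4 group data
# (% of original area, n = 4 wounds per group):
print(percent_of_original([12.6, 9.1, 6.3]))
vehicle = [72.0, 68.0, 75.0, 70.0]
treated = [51.0, 48.0, 55.0, 50.0]
print(mean_sem(vehicle), mean_sem(treated))
```

In the study itself, group differences in these percentages were then tested by one-way ANOVA followed by Dunnett's test, as stated under Statistical Analysis.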
Inflammation is an essential phenomenon accompanying the wound-healing process. Excision wounding triggered the expression of various inflammatory mediators such as IL-1β, TNF-α, IL-6, cyclooxygenase-2 (COX-2) and inducible nitric oxide synthase (iNOS) (Figure 5A–E). PARP-1 deficiency affected the expression of genes encoding most of these inflammatory mediators, with the most significant reduction seen in IL-6, TNF-α and iNOS expression. Overproduction of nitric oxide may lead to the formation of peroxynitrite, a powerful protein-nitrating agent. Protein nitration was most prominent on a 57-kDa protein of the wound homogenate. Consistent with lower iNOS expression in PARP-1 knockout mice, the intensity of tyrosine nitration was also lower in the wound lysates of PARP-1-deficient animals (Figure 5F).

Figure 2. Two full-thickness cutaneous wounds (4-mm diameter) were made on the back of male BALB/c mice and wounds were evaluated by daily digital photography. Wounds were treated topically with either vehicle (hydrophilic cream, left wound) or 3-AB (50 μmol/L in hydrophilic cream; right wound) for 7 consecutive days, starting 24 h after excision. Each time point represents the mean of eight measurements ± SEM (*P < 0.05, **P < 0.01, ***P < 0.001).

Figure 3. Full-thickness cutaneous wounds (4-mm diameter) were made on the back of male PARP-1–/– and wild-type control mice and wounds were evaluated by daily digital photography. Wound areas (percentage of original wound area) are presented as mean ± SEM (n = 6 for each group). (*P < 0.05)

Matrix metalloproteinases (MMPs) and their tissue inhibitors (TIMPs) also play important roles in wound healing (12). We have determined the expression of MMP-9, TIMP-1 and TIMP-2 mRNA in homogenates of wounds from wild-type and PARP-1 knockout mice.
Expression of MMP-9, TIMP-1 and TIMP-2 increased until d 4 but declined by d 8 (Supplementary Figure S1). The expression profile of TIMP-2 in PARP-1 knockout mice did not significantly differ from that of wild-type mice. However, expression of MMP-9 on d 1 and d 4 was significantly lower in the PARP-1-deficient animals, whereas the expression of TIMP-1 in PARP-1 knockout mice was lower only on d 4.

Proliferation and migration of fibroblasts as well as keratinocytes are important for granulation tissue formation and epithelialization. In in vitro scratch assays, inhibition of PARylation by either 3-aminobenzamide or PJ-34 facilitated migration of primary human keratinocytes and thus the repopulation of the scratched surface (Figure 6).

DISCUSSION

The wound-healing process requires a precisely orchestrated series of tightly controlled cellular events and has four main phases: hemostasis, inflammation, proliferation and remodeling. Any disease condition (for example, diabetes or circulatory problems) or external factor (for example, cytostatic drugs) disturbing this highly complex process may lead to the development of chronic nonhealing wounds. On the basis of the pleiotropic biological functions of PARylation by PARP-1, we hypothesized that it may regulate wound healing. Our main finding is that PARP-1 activity slows down the wound-healing process. Both the PARP-1 knockout phenotype and treatment with PARP inhibitors accelerated the healing of excision wounds. Accompanying accelerated wound healing, a suppression of inflammatory cytokines (IL-1β, TNF-α, IL-6), COX-2, MMP-9 and iNOS could also be observed. The somewhat surprising finding of accelerated wound healing upon genetic inactivation of the parp-1 gene or inhibition of PARP activity may be due to a combination of effects on various cellular events required for wound healing, as discussed below.
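The scratch-assay readout referenced above (and shown in Figure 6) is the fraction of the wounded area repopulated after 48 h, with freshly wounded, stained wells serving as the 0%-healed reference. A minimal sketch of that quantification follows; the binary pixel masks and helper names are assumptions for illustration, not the actual TScratch implementation:

```python
# Sketch of the scratch-assay quantification: percent of the scratch
# repopulated after 48 h, relative to the open area in a freshly wounded
# (0% healed) reference image. Toy 1-D masks stand in for real images.

def covered_fraction(mask):
    """Fraction of pixels classified as cell-covered (1) in a binary mask."""
    return sum(mask) / len(mask)

def percent_healed(sample_mask, fresh_mask):
    """Repopulated share of the scratch, taking a freshly wounded,
    Coomassie-stained sample as the 0%-healed baseline."""
    open_at_t0 = 1.0 - covered_fraction(fresh_mask)   # scratch area at t = 0
    regained = covered_fraction(sample_mask) - covered_fraction(fresh_mask)
    return 100.0 * max(regained, 0.0) / open_at_t0

# Toy "images": 1 = cell-covered, 0 = open scratch.
fresh   = [1, 1, 0, 0, 0, 0, 1, 1]   # 50% covered at t = 0
treated = [1, 1, 1, 0, 0, 1, 1, 1]   # 75% covered after 48 h
print(percent_healed(treated, fresh))  # 50.0 -> half the scratch closed
```

In practice the masks would come from thresholding the stained micrographs; the ratio-to-fresh-control design is what lets the assay report 0% healing for the dried reference triplicate.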
One possible explanation underlying the stimulatory effect of PARP inhibition or PARP-1 deficiency may be improved cell metabolism or maintenance of general cellular “health.” It has been demonstrated in a large number of cellular and animal models characterized by oxidative stress that PARP-1 becomes activated by DNA breakage caused by ROS and RNS (13). Intense or sustained PARP-1 activation may lead to a compromised cellular energetic state due to depletion of NAD (the substrate of PARPs) and ATP. Compromised cellular energetics may lead to cell dysfunction or even to (necroptotic) cell death (14). One plausible hypothesis explaining our current findings is that PARP-1-mediated cell dysfunction may disturb the complex network of wound-healing regulation. ROS-induced keratinocyte or fibroblast injury has repeatedly been demonstrated to occur via PARP activation (15–17), and it cannot be excluded that this pathway also operates in excision wounds. In fact, detection of PAR polymer in the wound bed (Figure 4C) illustrates that this might indeed be the case.

Figure 4. Relative expression of (A) PARP-1/GAPDH mRNA (in PARP-1+/+ mice) and (B) PARP-2/GAPDH mRNA (in PARP-1+/+ and PARP-1–/– mice) was determined from wound tissue samples using RT-qPCR at different time points. Error bars represent SEM of six independent samples. (**P < 0.01). (C) Immunofluorescent staining of formalin-fixed, paraffin-embedded skin tissues (wound beds) for PAR formation (green). Nuclei were counterstained with DAPI (blue).

Slowing down wound healing by PARP activation may even serve a good purpose: considering the role of PARylation in single-strand
break sensing (and consequently repair), it may slow down cell proliferation (required for granulation tissue formation and epithelialization) and thus provide a sufficient amount of time for the DNA repair machinery to restore the integrity of DNA.

Another interesting possibility is that a suppressed inflammatory response in PARP-1 knockout animals or in PARP inhibitor-treated mice may result in accelerated wound healing. The role of inflammation in wound healing is somewhat controversial, but most reports underline the beneficial effects of antiinflammatory therapies. One common feature of most nonhealing wounds is persistent inflammation, which is mostly due to macrophage dysregulation leading to a dominance of the M1 (inflammatory) phenotype over the M2 (wound-healing) phenotype (18). Moreover, application of mesenchymal stem cells, a type of multipotent stem cell with a strong antiinflammatory effect, has repeatedly been shown to improve wound healing (19–21). Furthermore, scarless fetal wound healing is also characterized by a reduced inflammatory response (22), indicating that inflammation affects not only the dynamics but also the quality of wound healing.

Our finding that expression levels of TNF-α and, to a lesser extent, IL-1β are lower in PARP-1 knockout mice may be one of the factors underlying improved wound healing. It has been suggested that targeting these two inflammatory cytokines may be beneficial for the wound-healing process. For example, Mori et al. (23) reported accelerated wound healing in TNF receptor–deficient mice. Moreover, infliximab, a chimeric monoclonal antibody against TNF-α used to treat autoimmune diseases, has been found to improve healing of chronic wounds (24). Interestingly, despite promising opportunities, the therapeutic potential of targeting IL-1β in inflammatory conditions (for example, autoimmune diseases) has not yet been explored in as much detail as that of TNF-α.
As for wound healing, it has been reported that disruption of IL-1β signaling by knocking out IL-1 receptor type I improved the quality of wound healing (lower levels of collagen and improved restoration of normal skin architecture), identifying IL-1β signaling as a potential target in wound healing (25). Moreover, blocking the Nod-like receptor protein 3 (NLRP-3)–caspase-1–IL-1β pathway was also demonstrated to improve wound healing in diabetic mice (18). Of note, expression of IL-1β in our current model did not differ very much between wild-type and PARP-1 knockout animals, suggesting that IL-1β is not the main factor contributing to the effects of PARP-1 inhibition/knockout in this model.

Figure 5. Expression of inflammatory cytokines (A–C), iNOS (D) and COX-2 (E) was quantified using RT-qPCR. Expression values were normalized to GAPDH. (F) Immunoblot analysis of nitrotyrosine in PARP-1+/+ and PARP-1–/– mice. All blots were normalized to β-actin. Values are expressed as mean ± SEM (n = 6). (*P < 0.05, **P < 0.01, ***P < 0.001)

IL-6, the level of which was also found to be lower in PARP-1 knockout mice (Figure 5), is also considered a key inflammatory cytokine. However, its role in wound healing appears to be opposite to those of TNF-α and IL-1β. For example, treatment with IL-6 augmented wound healing in immunosuppressed mice (26). In line with this, Lin et al. (27) reported delayed wound healing in IL-6-deficient mice. Thus, reduced IL-6 levels are not likely to explain the accelerated wound healing in PARP-1-deficient mice.

Here we also report reduced expression of iNOS in the wounds of PARP-1-deficient mice. Expression of iNOS is regarded as a general feature of the inflammatory response.
Its role in wound healing, however, appears to be controversial, with iNOS deficiency reported either to delay (28) or to have no effect on wound healing (29), depending on the type of wounds investigated. Impaired collagen accumulation has also been observed in iNOS-deficient mice (30), further strengthening the hypothesis that iNOS regulates wound healing. Nitration of proteins was also found to occur in the wound tissue, with nitration of an unknown 57-kDa protein being less intense in the PARP-1-deficient tissue (see Figure 5). PARP inhibition or PARP-1 deficiency has been shown to reduce both oxidative and nitrative stress in different animal models of tissue injury and inflammation, ranging from arthritis to sepsis, ischemia-reperfusion or burn injury (13,31). Nitric oxide (NO) and superoxide need to be produced in near-equimolar amounts to form peroxynitrite, one of the most prominent nitrating agents. PARP inhibition has been shown to reduce production of both NO and ROS in various models of tissue injury and inflammation, and this was shown to result in suppressed nitration of proteins, lipids and DNA (13). Decreased ROS production may also suppress activation of NF-κB, the master regulator of the transcription of inflammatory mediators, which may culminate in reduced production of inflammatory cytokines and chemokines (see above). Protein–protein interaction between NF-κB and PARP-1 may also be important, but direct stimulation of NF-κB by PARP-1 may only be relevant in PARP-1-deficient animals and not in mice treated with PARP inhibitors, since the interaction between these two proteins does not require PARP-1 activity (32).

Matrix metalloproteinases (MMPs) are also important, especially in the remodeling phase of wound healing. Here we found reduced expression of MMP-9 in the wounds of PARP-1-deficient mice, and expression of some TIMPs was also dysregulated, but to a lesser extent.
In line with our finding, Kauppinen and Swanson (33) reported that PARP inhibition suppressed MMP-9 expression in TNF-α-stimulated microglial cells. Increased MMP-9 has been suggested to predict poor wound healing in diabetic foot ulcers (34). Therefore, we think it is possible that suppressed MMP-9 expression may have contributed to the accelerated wound healing in PARP-1-deficient mice.

An interesting finding in our current paper was that keratinocytes migrated faster in the presence of a PARP inhibitor. Repopulation of the scratched area requires both proliferation and cell migration. The role of PARP-1/PARylation in cell proliferation is somewhat controversial. On the one hand, PARP-1 has been shown to be part of the multiprotein replication complex (35), while on the other hand various studies have identified it as an inhibitor of replication (36) and a negative regulator of cell proliferation (37). Its role in cell migration has not yet been systematically investigated. Thus, identifying the critical molecular control points of accelerated proliferation/migration of PARP-inhibited cells requires further investigation.

Figure 6. PARP inhibition stimulates keratinocyte migration. Confluent cultures of keratinocytes were scratched with a pipette tip fixed to a liquid-handling robot arm, followed by washing and incubation of cells with medium (control), with PJ-34 (10 μmol/L) or with 3-AB (3 mmol/L). Microphotographs were taken after 48 h. Representative images are presented (top) and repopulation of the scratched areas was also quantified by image analysis.

We should note a limitation of our study: while the various stages of wound healing may take several weeks to be completed, our study focused on the initial events taking place in the first days after wounding.
Thus, to get a comprehensive view of the role of PARylation in the complete wound-healing process, we should also consider factors such as angiogenesis and remodeling. Of note, PARP-1 has been shown to be required for both angiogenesis (38–40) and fibrosis (41). These observations may have implications for the later stages of wound healing, which might be worth investigating.

Overall, the data presented here are in line with the plethora of previous studies reporting the inflammation-promoting and tissue-damage–mediating roles of PARylation/PARP-1 described in diverse biological models, ranging from various forms of shock, sepsis and ischemia-reperfusion injury (13) to arthritis (15), contact hypersensitivity (42), experimental allergic encephalomyelitis (16) and many others (13). Common features of PARP inhibition/knockout in these in vivo models include suppression of the expression of inflammatory mediators, reduced migration of inflammatory cells to the site of inflammation, inhibition of oxidative and nitrative stress, and prevention of cell dysfunction and cell death. Some of these changes might also be relevant in explaining our current findings.

CONCLUSION

In summary, we report here that excision wounds heal faster if PARylation is inhibited or the PARP-1 gene is knocked out, indicating that PARylation by PARP-1 delays wound healing. Owing to the high complexity of the regulatory circuitries involved in wound healing, it is impossible to trace the effect of PARylation and PARP-1 on wound healing to a single molecular event. On the basis of our current data, reduced expression of inflammatory mediators, with special regard to TNF-α, IL-1β and iNOS, suppressed nitrative stress and accelerated keratinocyte migration may be the key elements underlying accelerated wound healing in PARP-1-deficient animals and in mice treated with PARP inhibitors.
ACKNOWLEDGMENTS

This research was supported by the European Union and the State of Hungary, cofinanced by the European Social Fund in the framework of the TÁMOP 4.2.4.A/2-11/1-2012-0001 National Excellence Program. Direct costs of this study were supported by the Hungarian Science Research Fund (OTKA K82009, K112336 and K108308) and by the Faculty of Medicine, University of Debrecen (Bridging Fund).

DISCLOSURE

The authors declare they have no competing interests as defined by Molecular Medicine, or other interests that might be perceived to influence the results and discussion reported in this paper.

REFERENCES

1. Behm B, Babilas P, Landthaler M, Schreml S. (2012) Cytokines, chemokines and growth factors in wound healing. J. Eur. Acad. Dermatol. Venereol. 26:812–20.
2. Bryan N, Ahswin H, Smart N, Bayon Y, Wohlert S, Hunt JA. (2012) Reactive oxygen species (ROS)—a family of fate deciding molecules pivotal in constructive inflammation and wound healing. Eur. Cell. Mater. 24:249–65.
3. Staylor A. (2009) Wound care devices: growth amid uncertainty. MedTech Insight. 11:32–47.
4. Directive 2010/63/EU of the European Parliament and of the Council of 22 September 2010 on the protection of animals used for scientific purposes. 2010 O.J. L 276/33.
5. de Murcia JM, et al. (1997) Requirement of poly(ADP-ribose) polymerase in recovery from DNA damage in mice and in cells. Proc. Natl. Acad. Sci. U. S. A. 94:7303–7.
6. Muller-Decker K, Hirschner W, Marks F, Furstenberger G. (2002) The effects of cyclooxygenase isozyme inhibition on incisional wound healing in mouse skin. J. Invest. Dermatol. 119:1189–95.
7. Sullivan SR, et al. (2007) Topical application of laminin-332 to diabetic mouse wounds. J. Dermatol. Sci. 48:177–88.
8. Lim Y, Levy M, Bray TM. (2004) Dietary zinc alters early inflammatory responses during cutaneous wound healing in weanling CD-1 mice. J. Nutr. 134:811–6.
9. Munoz-Gamez JA, et al. (2009) PARP-1 is involved in autophagy induced by DNA damage.
Autophagy. 5:61–74.
10. Szabo E, et al. (2001) Peroxynitrite production, DNA breakage, and poly(ADP-ribose) polymerase activation in a mouse model of oxazolone-induced contact hypersensitivity. J. Invest. Dermatol. 117:74–80.
11. Kioka N, et al. (2010) Crucial role of vinexin for keratinocyte migration in vitro and epidermal wound healing in vivo. Exp. Cell Res. 316:1728–38.
12. Gill SE, Parks WC. (2008) Metalloproteinases and their inhibitors: regulators of wound healing. Int. J. Biochem. Cell Biol. 40:1334–47.
13. Virag L, Szabo C. (2002) The therapeutic potential of poly(ADP-ribose) polymerase inhibitors. Pharmacol. Rev. 54:375–429.
14. Virag L, Robaszkiewicz A, Rodriguez-Vargas JM, Oliver FJ. (2013) Poly(ADP-ribose) signaling in cell death. Mol. Aspects Med. 34:1153–67.
15. Szabo C, et al. (1998) Protection against peroxynitrite-induced fibroblast injury and arthritis development by inhibition of poly(ADP-ribose) synthase. Proc. Natl. Acad. Sci. U. S. A. 95:3867–72.
16. Scott GS, Hake P, Kean RB, Virag L, Szabo C, Hooper DC. (2001) Role of poly(ADP-ribose) synthetase activation in the development of experimental allergic encephalomyelitis. J. Neuroimmunol. 117:78–86.
17. Virag L, et al. (2002) Nitric oxide-peroxynitrite-poly(ADP-ribose) polymerase pathway in the skin. Exp. Dermatol. 11:189–202.
18. Mirza RE, Fang MM, Weinheimer-Haus EM, Ennis WJ, Koh TJ. (2014) Sustained inflammasome activity in macrophages impairs wound healing in type 2 diabetic humans and mice. Diabetes. 63:1103–14.
19. Falanga V, et al. (2007) Autologous bone marrow-derived cultured mesenchymal stem cells delivered in a fibrin spray accelerate healing in murine and human cutaneous wounds. Tissue Eng. 13:1299–312.
20. Chen JS, Wong VW, Gurtner GC. (2012) Therapeutic potential of bone marrow-derived mesenchymal stem cells for cutaneous wound healing. Front. Immunol. 3:192.
21. Singer NG, Caplan AI. (2011) Mesenchymal stem cells: mechanisms of inflammation. Annu.
Rev. Pathol. 6:457–78.
22. Zgheib C. (2014) Fetal skin wound healing is characterized by a reduced inflammatory response. Adv. Wound Care (New Rochelle). 3:344–55.
23. Mori R, Kondo T, Ohshima T, Ishida Y, Mukaida N. (2002) Accelerated wound healing in tumor necrosis factor receptor p55-deficient mice with reduced leukocyte infiltration. FASEB J. 16:963–74.
24. Streit M, Beleznay Z, Braathen LR. (2006) Topical application of the tumour necrosis factor-alpha antibody infliximab improves healing of chronic wounds. Int. Wound J. 3:171–9.
25. Thomay AA, et al. (2009) Disruption of interleukin-1 signaling improves the quality of wound healing. Am. J. Pathol. 174:2129–36.
26. Gallucci RM, et al. (2001) Interleukin-6 treatment augments cutaneous wound healing in immunosuppressed mice. J. Interferon Cytokine Res. 21:603–9.
27. Lin ZQ, Kondo T, Ishida Y, Takayasu T, Mukaida N. (2003) Essential involvement of IL-6 in the skin wound-healing process as evidenced by delayed wound healing in IL-6-deficient mice. J. Leukoc. Biol. 73:713–21.
28. Yamasaki K, et al. (1998) Reversal of impaired wound repair in iNOS-deficient mice by topical adenoviral-mediated iNOS gene transfer. J. Clin. Invest. 101:967–71.
29. Most D, Efron DT, Shi HP, Tantry US, Barbul A. (2002) Characterization of incisional wound healing in inducible nitric oxide synthase knockout mice. Surgery. 132:866–76.
30. Park JE, Abrams MJ, Efron PA, Barbul A. (2013) Excessive nitric oxide impairs wound collagen accumulation. J. Surg. Res. 183:487–92.
31. Avlan D, Unlu A, Ayaz L, Camdeviren H, Nayci A, Aksoyek S. (2005) Poly (ADP-ribose) synthetase inhibition reduces oxidative and nitrosative organ damage after thermal injury. Pediatr. Surg. Int. 21:449–55.
32. Hassa PO, Covic M, Hasan S, Imhof R, Hottiger MO.
(2001) The enzymatic and DNA binding activity of PARP-1 are not required for NF-kappa B coactivator function. J. Biol. Chem. 276:45588–97.
33. Kauppinen TM, Swanson RA. (2005) Poly(ADP-ribose) polymerase-1 promotes microglial activation, proliferation, and matrix metalloproteinase-9-mediated neuron death. J. Immunol. 174:2288–96.
34. Liu Y, et al. (2009) Increased matrix metalloproteinase-9 predicts poor wound healing in diabetic foot ulcers. Diabetes Care. 32:117–9.
35. Simbulan-Rosenthal CM, et al. (1996) The expression of poly(ADP-ribose) polymerase during differentiation-linked DNA replication reveals that it is a component of the multiprotein DNA replication complex. Biochemistry. 35:11622–33.
36. Eki T. (1994) Poly (ADP-ribose) polymerase inhibits DNA replication by human replicative DNA polymerase alpha, delta and epsilon in vitro. FEBS Lett. 356:261–6.
37. Pagano A, Metrailler-Ruchonnet I, Aurrand-Lions M, Lucattelli M, Donati Y, Argiroffo CB. (2007) Poly(ADP-ribose) polymerase-1 (PARP-1) controls lung cell proliferation and repair after hyperoxia-induced lung damage. Am. J. Physiol. Lung Cell. Mol. Physiol. 293:L619–29.
38. Rajesh M, et al. (2006) Pharmacological inhibition of poly(ADP-ribose) polymerase inhibits angiogenesis. Biochem. Biophys. Res. Commun. 350:352–7.
39. Tentori L, et al. (2007) Poly(ADP-ribose) polymerase (PARP) inhibition or PARP-1 gene deletion reduces angiogenesis. Eur. J. Cancer. 43:2124–33.
40. Pyriochou A, Olah G, Deitch EA, Szabo C, Papapetropoulos A. (2008) Inhibition of angiogenesis by the poly(ADP-ribose) polymerase inhibitor PJ-34. Int. J. Mol. Med. 22:113–8.
41. Mukhopadhyay P, et al. (2014) Poly (ADP-ribose) polymerase-1 is a key mediator of liver inflammation and fibrosis. Hepatology. 59:1998–2009.
42. Bai P, et al. (2009) Poly(ADP-ribose) polymerase mediates inflammation in a mouse model of contact hypersensitivity. J. Invest. Dermatol. 129:234–8.
work_abzojutcpjehrmrg6wnnwlouam ---- Dendritic Spine Changes in Medial Prefrontal Cortex of Male Diabetic Rats Using Golgi-Impregnation Method

Arch Iranian Med 2007; 10 (1): 54 – 58

Mohammad-Taghi Joghataie PhD*, **, Mehrdad Roghani PhD•***, Mohammad-Reza Jalali MD‡, Tourandokht Baluchnejadmojarad PhD†, Maryam Sharayeli MD‡

Background: Neuropathy is one of the major complications contributing to morbidity in patients with diabetes mellitus. The effect of diabetes on the brain has not been studied much, and no gross abnormality has been found in the central nervous system of patients with diabetic neuropathy. This study was conducted to evaluate the time-dependent structural changes in the medial prefrontal cortex of male diabetic rats using the Golgi impregnation method.
Methods: Male Wistar rats were randomly divided into control and diabetic groups. For induction of diabetes, a single dose of streptozotocin (60 mg/kg) was injected intraperitoneally. At the end of the first and second months, the rats were transcardially perfused with a solution of phosphate buffer containing paraformaldehyde, and the Golgi impregnation method was used to evaluate the changes of dendritic spines in the medial prefrontal cortex.
Results: There was a significant reduction in the mean density of pyramidal neuron dendritic spines in layers II and III of the medial prefrontal cortex only after 2 months in the diabetic group compared to age-matched controls (P < 0.05).
Conclusion: Diabetes induces a reduction in the spine density of apical dendrites of the medial prefrontal cortex only in two-month diabetic rats.

Archives of Iranian Medicine, Volume 10, Number 1, 2007: 54 – 58.
Keywords: Diabetes mellitus • Golgi staining • medial prefrontal cortex • pyramidal neuron • rat

Original Article

Authors' affiliations: Department of Anatomy, *Iran University of Medical Sciences, **University of Social Welfare and Rehabilitation, Department of Physiology, ***School of Medicine, Shahed University, †Iran University of Medical Sciences, ‡Department of Pathology, School of Medicine, Shahed University, Tehran, Iran. •Corresponding author and reprints: Mehrdad Roghani PhD, Department of Physiology, Shahed University, Dehkadeh St., Keshavarz Blvd., Tehran, Iran.

Introduction

Neuropathy is one of the major complications contributing to morbidity in patients with diabetes mellitus. Diabetes leads to a wide range of peripheral neuronal deficits such as reduced motor nerve conduction velocity, impaired sciatic nerve regeneration, axonal shrinkage in association with reduced neurofilament delivery, and deficient anterograde axonal transport.1 – 2 In rats with diabetes experimentally induced by streptozotocin (STZ), the nerve damage is similar in many ways to the nerve degeneration seen in human diabetic neuropathy.3 In addition, pathologic studies have suggested that diabetes is one of the risk factors for senile dementia of the Alzheimer type.4 Many studies of the relationship between diabetes and peripheral neuropathy have been done to date, but the effects of diabetes on the brain have been studied far less, and no gross abnormality has been found in the central nervous system of patients with diabetic neuropathy.5 On the other hand, diabetes mellitus is accompanied by disturbances in learning, memory, and cognitive skills in humans and experimental animals.6 The medial prefrontal cortex (MPC) has traditionally been implicated in attentional processes, working memory, and behavioral flexibility.7 Recently, some noticeable morphologic changes in dendritic spines of the cerebral cortex of 4-week diabetic rats have been reported.5 Therefore, this study was conducted to
evaluate time-dependent structural changes in the MPC of male diabetic rats using the Golgi-impregnation method.

Tel: +98-218-8964792, Fax: +98-218-8966310, E-mail: mehjour@yahoo.com. Accepted for publication: 3 May 2006.

Materials and Methods

Thirty-two male albino Wistar rats (Pasteur Institute, Tehran, Iran) weighing 290 – 320 g (10 – 12 weeks old) were housed in an air-conditioned colony room on a light/dark cycle (20 – 22°C and 30 – 40% humidity) and supplied with a standard pelleted diet and tap water ad libitum. The animals were randomly divided into four groups (1- and 2-month controls and diabetics), each containing at least 8 animals. Diabetes was induced by a single intraperitoneal injection of STZ (60 mg/kg) dissolved in cold 0.9% saline solution immediately before use. Diabetes was verified by a serum glucose level higher than 250 mg/dL using the glucose oxidation method (glucose oxidase kit, Zistchimie, Tehran).

Golgi-impregnation method

At the end of the first and second months, the Golgi-impregnation method was used to evaluate the changes of dendritic spines in the MPC region of the control and age-matched diabetic rats.8 Briefly, the animals were perfused transcardially with a solution of 0.1 M phosphate buffer containing 4% paraformaldehyde (pH 7.4). After removal, the forebrains were incubated in 1% potassium dichromate, 1% mercury chloride, 0.8% potassium chromate, and 0.5% potassium tungstate in distilled water at 20ºC for 14 – 16 days. After rinsing with distilled water, the brains were incubated in 1% lithium hydroxide and 15% potassium nitrate in distilled water at 20ºC for two more days.
The forebrains were cut on a freezing microtome (Leica, Germany) at −10ºC at a thickness of 50 µm (the blocks were first soaked in graded concentrations of 10% and 30% sucrose in 0.1 M phosphate buffer, overnight and for at least 2 days, respectively). The sections were rinsed free-floating in double-distilled water, dehydrated in a graded ethanol series, cleared with xylene, mounted onto gelatinized slides, and then coverslipped under Entellan.

Data analysis

Spines were counted on pyramidal neurons in layers II – III of the MPC. Spines were readily identifiable at a magnification of 800 using an Olympus light microscope with a digitalized photography facility. For a blinded assessment, all of the slides were coded before quantitative analysis, and the code was not broken until the analysis was completed. Pyramidal neurons were defined by the presence of a basilar dendritic tree, a distinct single apical dendrite, and dendritic spines. The spines were identified based on the morphologic criteria for mushroom and thin spines. Only protrusions perpendicular to the dendritic shaft that possessed a clear neck and bulbous head were counted. These spine types made up approximately 80 – 85% of the spine population. For each animal, spines were counted on 10 – 15 neurons, and only cells that exhibited dark and consistent impregnation throughout the cell body and dendritic tree were evaluated; relative isolation from neighboring impregnated neurons was an additional inclusion criterion. For each selected cell, the number of spines on at least four segments of the apical dendritic tree was determined. No primary dendrites were analyzed, and all of the segments selected for analysis were located 100 – 150 µm away from the cell body and not located at the terminal of a dendrite. Computer-assisted tracings were done for each dendritic segment, and the length was determined using Image tool analysis software.
The data were expressed as mean values of spine densities (number of spines per 100 µm) for each animal. For statistical analysis, the unpaired Student's t-test was used to compare the control and diabetic groups. All data were expressed as mean ± SEM, and P < 0.05 was considered significant.

Results

The body weight of the diabetic and normal rats was not different at the beginning of the study. After induction of diabetes, there was a significant reduction in this parameter in the diabetic group after 1 (P < 0.005) and 2 months (P < 0.001) (Figure 1). There was no significant difference in serum glucose level between the control and diabetic groups before the study, but the diabetic group showed a significant increase in serum glucose level at the end of the first and second months (P < 0.001) (Figure 2). Light microscopic examination of Golgi-impregnated tissue of the MPC revealed reliable and consistent staining throughout layers II and III. Tracing of the apical dendrites of pyramidal neurons of the MPC revealed significant changes in the number of dendritic spines in the diabetic rats compared with the age-matched control group after 2 months (Figure 3). Quantitative analysis of the spine density of these neurons clearly revealed a significant decrease in the number of these spines (Figure 4), suggesting that spine and dendritic morphology may be sensitive to stressful conditions such as diabetes. In addition to the changes in the number of spines per selected length of dendrite, the shape of the remaining spines appeared to differ between the diabetic and control rats. Dendritic spines of the diabetic rats showed fewer and less pronounced protrusions compared with controls (Figure 3). In contrast, observation of the cell body area showed no significant differences between the diabetic and control rats.
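The per-animal spine-density summary and the unpaired group comparison described in the Data analysis section can be sketched as follows. This is a minimal illustration with hypothetical spine counts and densities, not data from this study; only the definitions (spines per 100 µm, unpaired Student's t-test with a pooled standard deviation) follow the paper.

```python
from statistics import mean, stdev

def spine_density(spine_counts, segment_lengths_um):
    """Mean spine density (spines per 100 µm) over a cell's dendritic segments."""
    densities = [100.0 * n / length
                 for n, length in zip(spine_counts, segment_lengths_um)]
    return mean(densities)

def unpaired_t(group_a, group_b):
    """Unpaired Student's t statistic using a pooled (equal-variance) SD."""
    na, nb = len(group_a), len(group_b)
    pooled_var = ((na - 1) * stdev(group_a) ** 2
                  + (nb - 1) * stdev(group_b) ** 2) / (na + nb - 2)
    return (mean(group_a) - mean(group_b)) / (pooled_var * (1 / na + 1 / nb)) ** 0.5

# Hypothetical per-animal mean densities (spines/100 µm), one value per rat.
control = [52.1, 49.8, 54.0, 51.2, 50.5, 53.3, 48.9, 52.7]
diabetic = [44.3, 46.1, 42.8, 45.0, 43.7, 41.9, 44.8, 43.2]
print(f"t = {unpaired_t(control, diabetic):.2f}")
```

The t statistic would then be compared against the t distribution with n_a + n_b − 2 degrees of freedom (here 14) to obtain the P value reported in the Results.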
Discussion

Although severe peripheral neuropathy has been reported in diabetic patients, the diabetic brain has not been studied extensively and its possible dysfunctions remain to be clarified. According to existing data, patients diagnosed as having Alzheimer's disease have a relatively high frequency of diabetes mellitus.9 However, no significant differences in the severity of Alzheimer-type pathologies, such as senile plaques or neurofibrillary tangles, have been observed between diabetic and control subjects.9 In addition, diabetics show impaired cognitive performance compared with age-matched control subjects.6 It seems that diabetes induces impairment of cognitive performance. The decrease in learning and cognitive abilities may not be restricted to disorders like Alzheimer's disease,2 and diabetics also show impaired cognitive performance compared with age-matched control subjects.6 In this study, body weight, food intake, and water intake of the diabetic rats were significantly different from those of the control rats. Cell number and possible morphologic changes in the brains of the diabetic rats were also studied. Previous studies failed to find any changes in neuronal cell number using Nissl staining.5 Using Golgi staining in our study, morphologic changes were evident in the MPC area of the cortex in the diabetic rats after two months.

Figure 1. Body weight of the control and diabetic rats at different months (*P < 0.005, **P < 0.001).
Figure 2. Serum glucose level of the control and diabetic rats at different months (*P < 0.001).
Similar morphologic changes have been observed in the hippocampus at the time of delayed neuronal death induced by transient global cerebral ischemia in rodents.10 The initiating event that leads to delayed neuronal cell death is possibly neuronal excitation caused by glutamate and subsequent calcium influx into the cell. Abnormal influxes of glutamate and/or calcium might also occur in the diabetic brain, because the expressions of calbindin, synaptophysin, and syntaxin, which are proteins related to calcium binding or synaptic secretion, are reduced in such brains.6 Their level indicates the degree of expression of glutamate, which initiates the release of other neuronal transmitters. Unusual influxes of glutamate and/or calcium might cause the morphologic changes in the diabetic brain,11 as observed in this study. The neuronal cell death induced by ischemia is due to a failure of the recovery process following excitatory damage to this particular neuronal circuitry.12 Lack of several neurotrophic factors, such as brain-derived neurotrophic factor (BDNF) or nerve growth factor (NGF), is also considered to cause neuronal cell death after ischemia.13 The reduced level of BDNF in the diabetic hippocampus may induce some neuronal dysfunctions and morphologic changes.14

Figure 3. Photomicrograph of Golgi-stained dendritic spines from pyramidal neurons in layers II – III of the MPC in the control (A) and age-matched diabetic (B) rats two months after the study (scale bar = 25 µm).
Figure 4. The mean spine density per selected length on the apical dendrites from pyramidal neurons of the medial prefrontal cortex of the intact and age-matched diabetic rats after one and two months. Vertical bars represent SEM values. Asterisk indicates significant difference relative to untreated controls (P < 0.05).

In conclusion, to the best of our knowledge the
present report provides the first evidence that diabetes induces a reduction in the spine density of apical dendrites of the MPC in two-month diabetic rats. Further studies are warranted to investigate the detailed mechanisms that lead to these abnormalities.

Acknowledgment

This study was supported by a grant from the Research Council of the University of Social Welfare and Rehabilitation, Tehran, Iran. The authors sincerely appreciate the collaboration of Ms. F. Ansari (School of Medicine, Shahed University).

References

1 Baydas G, Nedzvetskii VS, Nerush PA, Kirichenko SV, Yoldas T. Altered expression of NCAM in hippocampus and cortex may underlie memory and learning deficits in rats with streptozotocin-induced diabetes mellitus. Life Sci. 2003; 73: 1907 – 1916.
2 Sima AA, Li ZG. The effect of C-peptide on cognitive dysfunction and hippocampal apoptosis in type 1 diabetic rats. Diabetes. 2005; 54: 1497 – 1505.
3 Reagan LP, McEwen BS. Diabetes, but not stress, reduces neuronal nitric oxide synthase expression in rat hippocampus: implications for hippocampal synaptic plasticity. Neuroreport. 2002; 13: 1801 – 1804.
4 Popovic M, Biessels GJ, Isaacson RL, Gispen WH. Learning and memory in streptozotocin-induced diabetic rats in a novel spatial/object discrimination task. Behav Brain Res. 2001; 122: 201 – 207.
5 Nitta A, Murai R, Suzuki N, Ito H, Nomoto H, Katoh G, et al. Diabetic neuropathies in brain are induced by deficiency of BDNF. Neurotoxicol Teratol. 2002; 24: 695 – 701.
6 Parihar MS, Chaudhary M, Shetty R, Hemnani T. Susceptibility of hippocampus and cerebral cortex to oxidative damage in streptozotocin-treated mice: prevention by extracts of Withania somnifera and Aloe vera. J Clin Neurosci. 2004; 11: 397 – 402.
7 Heidbreder CA, Groenewegen HJ.
The medial prefrontal cortex in the rat: evidence for a dorso-ventral distinction based upon functional and anatomical characteristics. Neurosci Biobehav Rev. 2003; 27: 555 – 579.
8 Fujioka T, Sakata Y, Yamaguchi K, Shibasaki T, Katok H, Nakamura S. The effects of prenatal stress on the development of hypothalamic paraventricular neurons in fetal rats. Neuroscience. 1999; 92: 1079 – 1088.
9 Harris Y, Gorelick PB, Freels S, Billingsley M, Brown N, Robinson D. Neuroepidemiology of vascular and Alzheimer's dementia among African-American women. J Natl Med Assoc. 1995; 87: 741 – 745.
10 Shigeno T, Mima T, Takakura K, Graham DI, Kato G, Hashimoto Y. Amelioration of delayed neuronal death in the hippocampus by nerve growth factor. J Neurosci. 1991; 11: 2914 – 2919.
11 Wisniewski K, Fedosiewicz-Wasiluk M, Holy ZZ, Car H, Grzeda E. Influence of NMDA, a potent agonist of glutamate receptors, on behavioral activity in 4-week streptozotocin-induced diabetic rats. Pol J Pharmacol. 2003; 55: 345 – 351.
12 Artola A, Kamal A, Ramakers GM, Biessels GJ, Gispen WH. Diabetes mellitus concomitantly facilitates the induction of long-term depression and inhibits that of long-term potentiation in hippocampus. Eur J Neurosci. 2005; 22: 169 – 178.
13 Lupien SB, Bluhm EJ, Ishii DN. Systemic insulin-like growth factor-I administration prevents cognitive impairment in diabetic rats, and brain IGF regulates learning/memory in normal adult rats. J Neurosci Res. 2003; 74: 512 – 523.
14 Izumi Y, Yamada KA, Matsukawa M, Zorumski CF. Effects of insulin on long-term potentiation in hippocampal slices from diabetic rats. Diabetologia. 2003; 46: 1007 – 1012.

----

Bleaching Stained Arrested Caries Lesions: In vivo Clinical Study

Sarah S.
Al-Angari1, Mashael AlHadlaq2, Noor Abahussain3, Njood AlAzzam4

1 Department of Restorative Dental Science, College of Dentistry, King Saud University, Riyadh, Saudi Arabia
2 Pediatric Dentistry Department, National Guard Hospital, Riyadh, Saudi Arabia
3 Pediatric Dentistry Department, College of Dentistry, King Saud University, Riyadh, Saudi Arabia
4 Department of Prosthetic Dental Science, College of Dentistry, King Saud University, Riyadh, Saudi Arabia

Address for correspondence: Sarah S. Al-Angari, BDS, MSD, PhD, Department of Restorative Dental Sciences, College of Dentistry, King Saud University, P.O. Box 60169, Riyadh 11545, Saudi Arabia (e-mail: Sangari@ksu.edu.sa).

Objective: Conservative approaches to esthetically treat stained arrested caries lesions (s-ACLs) have not been explored in clinical studies. This study aims to investigate the efficacy of an in-office dental bleaching agent, as a conservative approach, to esthetically treat s-ACLs.
Materials and Methods: Twelve patients (n = 46 surfaces) who presented with s-ACLs were treated with 40% hydrogen peroxide (in-office bleaching protocol; 20 minutes × 3). Color values were measured using a spectrophotometer (CIE L*a*b*), aided by digital photography to assess visual color change clinically. Measurements were taken for each specimen at baseline and immediately after bleaching.
Statistical Analysis: The color change calculated before and after bleaching for each dental substrate was analyzed using the paired t-test (α = 0.05).
Results: The bleached s-ACLs had a significant increase in L* values (p < 0.001) and a significant decrease in both a* (p = 0.001) and b* (p = 0.007) values, indicating lighter color improvement (bleaching efficacy).
The baseline means of the L*, a*, and b* values were 61.5, 2, and 15.4, respectively, and after bleaching were 67.7, 1.4, and 13.3, respectively, with a mean increase in ∆E of >7.9, which resulted in a visible clinical stain improvement: orange/light brown stains were removed completely, while gray/black stains improved to a lesser extent.
Conclusion: Significant color improvement was observed when the in-office bleaching protocol (40% hydrogen peroxide) was used on orange/brown s-ACLs. However, it showed lesser improvement in gray/black s-ACLs.

Keywords ► arrested caries lesions ► bleaching ► color change ► esthetics ► hydrogen peroxide

DOI https://doi.org/10.1055/s-0040-1716317 ISSN 1305-7456. © 2020. European Journal of Dentistry. This is an open access article published by Thieme under the terms of the Creative Commons Attribution-NonDerivative-NonCommercial-License, permitting copying and reproduction so long as the original work is given appropriate credit. Contents may not be used for commercial purposes, or adapted, remixed, transformed or built upon. (https://creativecommons.org/licenses/by-nc-nd/4.0/) Thieme Medical and Scientific Publishers Pvt. Ltd., A-12, 2nd Floor, Sector 2, Noida-201301 UP, India
Eur J Dent 2021;15:127–132. Original Article. Published online: 2020-09-08.

Introduction

The evolution of dental materials and diagnostic tools, particularly in the assessment of early caries lesions, has directed modern dental practice toward minimally invasive dentistry.
It focuses on the detection of caries lesions in their early stages to successfully preserve the dental biological tissues and minimize surgical intervention.1 Based on this concept, incipient caries lesions have been approached by preventive measures that are considered simple, cost-effective options to halt or minimize the caries process.2,3 Remineralization is one form of the minimally invasive approach that can either occur naturally from the surrounding environment (i.e., saliva, biofilm) or be induced by therapeutic agents (i.e., antibiotics, fluoride, calcium, and phosphate-based materials) which precipitate into the dental structure, resulting in an arrested caries lesion (ACL).3 ACLs are inactive, highly mineralized surfaces which are usually characterized by a dark, unesthetic discoloration.4 Discoloration is related to stain incorporation within the lesion during the remineralization process, such as dietary pigments, metallic ions, amino acids from proteolytic processes, organic debris, and chromogenic bacteria.4,5 In such a clinical scenario, some patients would psychologically refuse the dark discolorations and demand esthetic measures to mask the stains.6 The common clinical esthetic approach to eliminate stains associated with ACLs is surgical intervention,7 which may improve the esthetic outcome but results in unnecessary removal of highly mineralized tooth structure for the sake of esthetics.8 Furthermore, inadequate restorations are susceptible to plaque accumulation, secondary caries, sensitivity, and iatrogenic dentistry.9,10 We have introduced dental bleaching as a minimally invasive approach to eliminate or improve the stains of ACLs, as it is considered safe, effective, economical, and predictable.11-13
Previous studies have shown promising esthetic results both in a clinical case report and in an in vitro study.11-13 To the best of our knowledge, no clinical study has evaluated the effect of bleaching agents on stained ACLs (s-ACLs). This study aimed to investigate the efficacy of an in-office dental bleaching agent (40% hydrogen peroxide) as a non-invasive esthetic treatment approach to treat s-ACLs.

Materials and Methods

Study Design
In this clinical study, 12 patients were recruited (n = 46 surfaces) with s-ACLs (pit and fissure surfaces) and subjected to an in-office bleaching protocol (40% hydrogen peroxide). The factor studied was dental bleaching, and the study outcome was color change (∆E) measured at two time points, at baseline and after bleaching. Color measurements were taken by spectrophotometry (L*a*b* values) and digital photography.

Ethical Aspects and Criteria for Inclusion and Exclusion
The clinical study was conducted in full ethical accordance with the World Medical Association Declaration of Helsinki, 1964, and after the protocol was approved by the Institutional Review Board (IRB # E-18-3173). The study was conducted at the restorative department at King Saud University College of Dentistry. Twenty-five subjects were informed about the study protocol along with the possible risks and benefits. After ensuring understanding, a written consent form was obtained prior to the screening examination (soft and hard tissues) to determine eligibility for enrollment based on the inclusion and exclusion criteria. Twelve patients (age range: 20–45 years; mean age: 32 years) fitted the inclusion criteria: adults (older than 18 years), medically fit, seeking regular dental bleaching of their vital teeth, and presenting with s-ACLs.
The exclusion criteria were children (younger than 18 years), medically compromised patients, teeth with active caries, crowns, or restorations, periodontal disease, previous bleaching treatment, and patients who did not have s-ACLs.

Dental Bleaching Efficacy Test
Prior to the bleaching procedure, three examiners were calibrated using patients from the dental clinic who were not enrolled in the present study;14 their kappa score showed an interexaminer reliability of 0.97. All teeth surfaces received a thorough examination, followed by complete isolation of all the targeted teeth along with the maxillary teeth (from the first right premolar to the first left premolar) using liquid dam to protect the gingival tissue; the teeth were then cleaned with pumice (Pumice Preppies, Whip Mix, Louisville, Kentucky, United States) and washed. The baseline color of the s-ACLs was measured spectrophotometrically (VITA Easyshade V, VITA Zahnfabrik Products, Bad Säckingen, Germany) along with digital photography (Canon EOS 550D, Canon, Tokyo, Japan). The bleaching procedure was performed using 40% hydrogen peroxide (pH 6.0–8.5) (Opalescence Boost, Ultradent Products, South Jordan, Utah, United States) in accordance with the manufacturer's instructions. The bleaching gel was delivered onto the labial surfaces of the maxillary teeth (from the right first premolar to the left first premolar) and the occlusal surfaces of the s-ACLs using a brush and left on the teeth for 20 minutes. The bleaching agent was then wiped off with a cotton roll and reapplied twice (20 minutes per cycle) in the same session, resulting in a total of 60 minutes of application time. After bleaching, the teeth surfaces were thoroughly washed, the liquid dam was removed, and the teeth were kept moist just before taking the final color values for each specimen.
Color Assessment
L*, a*, and b* (Commission Internationale de l'Eclairage) values were taken for each specimen at baseline and immediately after the last bleaching cycle. All measurements were repeated three times, the means of the L*, a*, and b* values were taken, and the color difference (∆E) was calculated using the following equation, representing color changes after bleaching (∆E bleaching = bleaching − baseline):

∆E = ([∆L*]^2 + [∆a*]^2 + [∆b*]^2)^(1/2)

where ∆E represents the amount of color difference, the ∆L* coordinate represents an increase (positive direction, white) or decrease (negative direction, black) in lightness, the ∆a* coordinate represents redness (positive direction) or greenness (negative direction), and the ∆b* coordinate represents yellowness (positive direction) or blueness (negative direction) of the surface.

Statistical Analysis
The Shapiro–Wilk test was used to assess the normality of the values, which appeared to be normal, after which the color change (∆E) calculated before and after bleaching for each dental substrate was analyzed using the paired t-test (α = 0.05). Statistical analysis was performed using SPSS version 25.0 (IBM SPSS Statistics; IBM, Armonk, New York, United States). Prior to the study, calculations performed using SPSS showed that with a planned sample size of 42, the study was designed to have 90% power, assuming a 5% significance level and a standard deviation of 2.0 based on a previous study.12

Results
The results showed a significant mean change (p ≤ 0.007) in all of the color coordinate values (∆L*, ∆a*, and ∆b*) after bleaching. The mean changes in the ∆L*, ∆a*, and ∆b* coordinates were 6.3, −0.6, and −2.0, respectively.
The increase in the L* values (improved color lightness), along with the decrease in the a* (greenness) and b* (blueness) coordinates (leading to lighter shades), indicated the efficacy of the in-office treatment, as it resulted in a perceptible improvement in the color lightness of the s-ACLs. The L*, a*, and b* numeric values at baseline and postbleaching and their mean values are shown in ►Table 1. The bleached s-ACLs had ∆E values that ranged from 1.1 to 24.7, with a mean of 7.9 and a standard deviation of 5.1, which is considered clinically noticeable as it exceeded the known clinically perceptible value of ∆E ≥3.3. The visual color change (∆E) of some of the s-ACLs at baseline and postbleaching is shown in ►Fig. 1.

Discussion
Introducing bleaching as a minimally invasive approach to treat s-ACLs has shown promising esthetic results in previous in vitro studies, which either eliminated or minimized the discoloration.11-13 However, their major limitation was the inability of laboratory settings to adequately simulate the complex biological processes involved in creating s-ACLs, underscoring the importance of conducting clinical studies before drawing clinical conclusions. To our knowledge, this study is the first attempt to clinically investigate the efficacy of an in-office bleaching protocol as a noninvasive approach to improving the color outcome of s-ACLs. It is considered clinically significant as it improves the color outcome and reduces the necessity of surgical intervention, and consequently iatrogenic dentistry.11-13 This study was conducted on 12 healthy subjects (who met the inclusion criteria) with a total of 46 teeth surfaces with s-ACLs.
The color was evaluated objectively, to eliminate subjective errors, using the CIE L*a*b* color system, which is a standard method to characterize colors based on human perception.6,15 The bleaching efficacy was verified based on the mean color change represented in the values of ∆E and the L*, a*, and b* coordinates. The ∆E represented the average color difference between baseline and after bleaching, which is of clinical significance when it equals or exceeds a value of 3.3.6 Digital photography was used as an adjunct subjective measure to demonstrate the clinical color outcome. To better represent the clinical condition of a patient seeking professional bleaching, an in-office bleaching agent (40% hydrogen peroxide) was used, as dentists have more control over the bleaching procedure and the factor of patient compliance is eliminated.16 The in-office bleaching protocol is time efficient and site specific, avoids accidental gel ingestion, and has immediate results.17 Furthermore, high concentrations of hydrogen peroxide penetrate deeper into the histological structure of the dental tissue and increase the oxidative power, which results in a fast bleaching outcome.18 This study has shown that s-ACLs subjected to the in-office bleaching protocol (40% hydrogen peroxide) exhibited a significant mean color change in most of the teeth (∆E 7.9), which exceeded the clinically perceptible color range of 3.3, indicating the effectiveness of our bleaching treatment. The color difference value (∆E) is based on the changes in the L*, a*, and b* color coordinates.
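As an illustration of this computation, the CIE76 ∆E formula can be applied to the group-mean coordinates reported in Table 1. This is a minimal sketch in Python; note that the study computed ∆E per specimen and then averaged, so the value obtained from the means is not the reported mean ∆E of 7.9.

```python
import math

# Group-mean L*, a*, b* coordinates from Table 1 of this study.
baseline = {"L": 61.5, "a": 2.0, "b": 15.4}
bleached = {"L": 67.7, "a": 1.4, "b": 13.3}

def delta_e(before, after):
    """CIE76 color difference: Euclidean distance in L*a*b* space."""
    return math.sqrt(sum((after[k] - before[k]) ** 2 for k in ("L", "a", "b")))

de = delta_e(baseline, bleached)
# A difference of 3.3 or more is taken as clinically perceptible.
print(f"dE = {de:.2f}, perceptible: {de >= 3.3}")
```

Applied to the means, this gives ∆E ≈ 6.6, smaller than the reported mean of 7.9, because the mean of per-specimen distances is always at least the distance between the group means; both values comfortably exceed the 3.3 perceptibility threshold.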
Our study had a significant increase in the lightness values (L*) and a decrease in the blueness (b*) and greenness (a*) values, which directed the ∆E toward the lighter color scale, indicating color improvement.19 It is noteworthy that although most of the stains (orange/brown) had a significant color improvement and were efficiently bleached, some stains (gray/black) bleached to a lesser extent. This variation in the color outcome (∆E 1.1–24.7) can be explained by the different stain histories of the ACLs (chemical composition, type, and depth), associated with the variety of patients enrolled in the study. Moreover, it is related to the bleaching mechanism of action. During the bleaching process, oxidative free radicals can easily break down the conjugated double bonds in organic stains (nonmetallic: orange/brown), as these have a small molecular weight and are more water soluble, yielding smaller, single-bonded molecules with altered optical properties that reflect a lighter color.17 However, nonorganic stains (metallic stains: gray/black) tend to have a larger molecular weight and are less water soluble, making it difficult for the bleaching agent to penetrate deeply enough to oxidize the chromogenic stain molecule, resulting in a less satisfactory bleaching result.20,21 The depth of the stains in the ACLs is an important factor, as stains confined within the dentin (deep stains) have been shown to be more difficult to bleach.11,12 This is justified by the progression of caries lesions within the dental tissue, which is faster in dentin than in enamel due to structural and compositional differences; this, in turn, results in more stain incorporation into dentin than enamel, making it easier to stain and harder to bleach.22

Table 1 The color coordinates (∆L*, ∆a*, and ∆b*) mean values (standard deviation) at baseline and after bleaching

Coordinate | Baseline   | Bleaching  | Change     | p-Value
∆L*        | 61.5 (5.9) | 67.7 (5.7) | 6.3 (5.2)  | < 0.001
∆a*        | 2.0 (1.2)  | 1.4 (1.4)  | −0.6 (1.1) | 0.001
∆b*        | 15.4 (4.6) | 13.3 (4.3) | −2.0 (4.9) | 0.007

Abbreviations: ∆L*, lightness, value of 0 (black) to 100 (white); ∆a*, redness (positive direction) or greenness (negative direction); ∆b*, yellowness (positive direction) or blueness (negative direction).

Existing literature reports that the bleaching efficacy may increase with several bleaching attempts or with longer bleaching times.23-26 Thus, metallic and deep stains might need further in-office bleaching treatment (more than one visit) or at-home bleaching for longer periods to produce clinically satisfactory results. However, careful consideration must be exercised in view of the possible increased incidence of sensitivity. Despite the advantages of dental bleaching agents and their proven effect on color improvement, they may cause morphological alterations within the dental tissue.27-29 The oxidation–reduction reaction of the bleaching agent causes degradation of the organic and inorganic dental matrices, resulting in areas of demineralization, depressions, reduced microhardness, and increased surface roughness.30-33 These surface changes may lead to sensitivity, plaque accumulation, and the possibility of initiating caries lesions.29 Therefore, dental clinicians should consider using protective remineralizing agents such as fluoride, calcium, amorphous calcium phosphate, and hydroxyapatite immediately after the bleaching procedure.

Fig. 1 Color improvement (∆E bleaching = bleaching − baseline) of stained arrested caries lesions treated with 40% hydrogen peroxide bleaching agent at baseline and after bleaching.
This study measured the color outcome objectively using a spectrophotometer, supplemented by clinical photographs, to accurately quantify the results. Although the spectrophotometer tip diameter was in some cases larger than the stains, it remains the best available tool for measuring color, and it was able to detect differences before and after bleaching, as the numerical values of lighter stains were distinctly different from those of dark stains. In our study, teeth were hydrated with saliva to remineralize them after the end of the bleaching cycle, as studies have reported that exposure to saliva during bleaching treatment minimizes enamel sensitivity and susceptibility to further demineralization and reduces surface restaining, thereby preserving the bleaching result.5,12,27,28,34

In clinical cases where the bleached ACLs do not achieve the optimum esthetic outcome, subsequent surgical intervention would need to remove less dental tissue to mask the stains, resulting in a more conservative restoration.11-13,21 In such a scenario, dentists should consider the effect of immediately bonding composite resin restorations to the bleached surface.35-40 Because residual oxygen from the bleaching agent interferes with resin polymerization and weakens shear bond strength, it may affect the longevity of the restoration; dentists should therefore wait at least 2 weeks before bonding.35-37

Overall, the in-office bleaching protocol can improve the esthetics of s-ACLs in a fast, time-efficient, and simple manner; however, the outcome depends on the type and depth of the staining involved. It is therefore essential to understand the stain history (nonmetallic versus metallic) that contributed to the development of s-ACLs in order to reach an optimum bleaching outcome.
Conclusion

s-ACLs treated with an in-office bleaching protocol (40% hydrogen peroxide) showed clinically significant color improvement (∆E > 7.9) in organic stains (orange/brown) and lesser improvement in metallic stains (gray/black). Clinicians should consider bleaching as a first-line, effective, safe, and conservative approach to improving the color outcome of s-ACLs.

Conflict of Interest

None declared.

Acknowledgments

The authors extend their appreciation to the Deanship of Scientific Research at King Saud University for funding this work through the Undergraduate Research Support Program (URSP-4-19-1). They would also like to convey their sincere gratitude to Dr. Emad Masuadi for his generous help with the statistical analysis.

References

1 White JM, Eakle WS. Rationale and treatment approach in minimally invasive dentistry. J Am Dent Assoc 2000;131(Suppl):13S–19S
2 Contreras V, Toro MJ, Elías-Boneta AR, Encarnación-Burgos A. Effectiveness of silver diamine fluoride in caries prevention and arrest: A systematic literature review. Gen Dent 2017;65(3):22–29
3 González-Cabezas C, Fernández CE. Recent advances in remineralization therapies for caries lesions. Adv Dent Res 2018;29(1):55–59
4 Watts A, Addy M. Tooth discolouration and staining: A review of the literature. Br Dent J 2001;190(6):309–316
5 Chen YH, Yang S, Hong DW, Attin T, Yu H. Short-term effects of stain-causing beverages on tooth bleaching: A randomized controlled clinical trial. J Dent 2020;95:103318
6 Hafez R, Ahmed D, Yousry M, El-Badrawy W, El-Mowafy O. Effect of in-office bleaching on color and surface roughness of composite restoratives. Eur J Dent 2010;4(2):118–127
7 Holmgren C, Gaucher C, Decerle N, Doméjean S. Minimal intervention dentistry II: part 3. Management of non-cavitated (initial) occlusal caries lesions–non-invasive approaches through remineralisation and therapeutic sealants.
Br Dent J 2014;216(5):237–243
8 Sundfeld RH, Sundfeld-Neto D, Machado LS, Franco LM, Fagundes TC, Briso AL. Microabrasion in tooth enamel discoloration defects: three cases with long-term follow-ups. J Appl Oral Sci 2014;22(4):347–354
9 Ababnaeh KT, Al-Omari M, Alawneh TN. The effect of dental restoration type and material on periodontal health. Oral Health Prev Dent 2011;9(4):395–403
10 Boitelle P. Contemporary management of minimal invasive aesthetic treatment of dentition affected by erosion: Case report. BMC Oral Health 2019;19(1):123
11 Al-Angari SS, Hara AT. A conservative approach to esthetically treat stained arrested caries lesions. Quintessence Int 2016;47(6):499–504
12 Al-Angari SS, Lippert F, Platt JA, et al. Bleaching of simulated stained-remineralized caries lesions in vitro. Clin Oral Investig 2019;23(4):1785–1792
13 Al-Angari SS, Lippert F, Platt JA, et al. Dental bleaching efficacy and impact on demineralization susceptibility of simulated stained-remineralized caries lesions. J Dent 2019;81:59–63
14 Meireles SS, Demarco FF, dos Santos IdaS, Dumith SdeC, Bona AD. Validation and reliability of visual assessment with a shade guide for tooth-color classification. Oper Dent 2008;33(2):121–126
15 Ceci M, Viola M, Rattalino D, Beltrami R, Colombo M, Poggio C. Discoloration of different esthetic restorative materials: A spectrophotometric evaluation. Eur J Dent 2017;11(2):149–156
16 Kose C, Calixto AL, Bauer JR, Reis A, Loguercio AD. Comparison of the effects of in-office bleaching times on whitening and tooth sensitivity: a single blind, randomized clinical trial. Oper Dent 2016;41(2):138–145
17 Alqahtani MQ. Tooth-bleaching procedures and their controversial effects: a literature review. Saudi Dent J 2014;26(2):33–46
18 Bortolatto JF, Pretel H, Floros MC, et al. Low concentration H(2)O(2)/TiO_N in office bleaching: a randomized clinical trial. J Dent Res 2014;93(7, Suppl):66S–71S
19 Ahrari F, Akbari M, Mohammadpour S, Forghani M.
The efficacy of laser-assisted in-office bleaching and home bleaching on sound and demineralized enamel. Laser Ther 2015;24(4):257–264
20 Horst JA, Ellenikiotis H, Milgrom PM. UCSF protocol for caries arrest using silver diamine fluoride: rationale, indications and consent. Pa Dent J (Harrisb) 2017;84(1):14, 16–26
21 Kwon SR, Kurti SR Jr, Oyoyo U, Li Y. Effect of light-activated tooth whitening on color change relative to color of artificially stained teeth. J Esthet Restor Dent 2015;27(Suppl 1):S10–S17
22 Chen Z, Cao S, Wang H, et al. Biomimetic remineralization of demineralized dentine using scaffold of CMC/ACP nanocomplexes in an in vitro tooth model of deep caries. PLoS One 2015;10(1):e0116553
23 Carlos NR, Pinto A, do Amaral F, França F, Turssi CP, Basting RT. Influence of staining solutions on color change and enamel surface properties during at-home and in-office dental bleaching: an in situ study. Oper Dent 2019;44(6):595–608
24 Matis BA, Cochran MA, Franco M, Al-Ammar W, Eckert GJ, Stropes M. Eight in-office tooth whitening systems evaluated in vivo: a pilot study. Oper Dent 2007;32(4):322–327
25 Marson FC, Sensi LG, Vieira LC, Araújo E. Clinical evaluation of in-office dental bleaching treatments with and without the use of light-activation sources. Oper Dent 2008;33(1):15–22
26 Sulieman M, MacDonald E, Rees JS, Newcombe RG, Addy M. Tooth bleaching by different concentrations of carbamide peroxide and hydrogen peroxide whitening strips: An in vitro study. J Esthet Restor Dent 2006;18(2):93–100, discussion 101
27 Heshmat H, Ganjkar MH, Miri Y, Fard MJ. The effect of two remineralizing agents and natural saliva on bleached enamel hardness. Dent Res J (Isfahan) 2016;13(1):52–57
28 Farawati FAL, Hsu SM, O'Neill E, Neal D, Clark A, Esquivel-Upshaw J.
Effect of carbamide peroxide bleaching on enamel characteristics and susceptibility to further discoloration. J Prosthet Dent 2019;121(2):340–346
29 Sasaki RT, Catelan A, Bertoldo EdosS, et al. Effect of 7.5% hydrogen peroxide containing remineralizing agents on hardness, color change, roughness and micromorphology of human enamel. Am J Dent 2015;28(5):261–267
30 Moraes RR, Marimon JL, Schneider LF, Correr Sobrinho L, Camacho GB, Bueno M. Carbamide peroxide bleaching agents: effects on surface roughness of enamel, composite and porcelain. Clin Oral Investig 2006;10(1):23–28
31 Zimmerman B, Datko L, Cupelli M, Alapati S, Dean D, Kennedy M. Alteration of dentin-enamel mechanical properties due to dental whitening treatments. J Mech Behav Biomed Mater 2010;3(4):339–346
32 Kutuk ZB, Ergin E, Cakir FY, Gurgan S. Effects of in-office bleaching agent combined with different desensitizing agents on enamel. J Appl Oral Sci 2019;27
33 Omar F, Ab-Ghani Z, Rahman NA, Halim MS. Nonprescription bleaching versus home bleaching with professional prescriptions: which one is safer? A comprehensive review of color changes and their side effects on human enamel. Eur J Dent 2019;13(4):589–598
34 Sa Y, Sun L, Wang Z, et al. Effects of two in-office bleaching agents with different pH on the structure of human enamel: an in situ and in vitro study. Oper Dent 2013;38(1):100–110
35 Halabi S, Matsui N, Nikaido T, Burrow MF, Tagami J. Effect of office bleaching on enamel bonding performance. J Adhes Dent 2019;21(2):167–177
36 Da Silva Machado J, Cândido MS, Sundfeld RH, De Alexandre RS, Cardoso JD, Sundefeld ML. The influence of time interval between bleaching and enamel bonding. J Esthet Restor Dent 2007;19(2):111–118, discussion 119
37 Kiomarsi N, Arjmand Y, Kharrazi Fard MJ, Chiniforush N. Effects of erbium family laser on shear bond strength of composite to dentin after internal bleaching.
J Lasers Med Sci 2018;9(1):58–62
38 Gouveia THN, Públio JDC, Ambrosano GMB, Paulillo LAMS, Aguiar FHB, Lima DANL. Effect of at-home bleaching with different thickeners and aging on physical properties of a nanocomposite. Eur J Dent 2016;10(1):82–91
39 Vieira HH, Toledo JCJ Jr, Catelan A, et al. Effect of sodium metabisulfite gel on the bond strength of dentin of bleached teeth. Eur J Dent 2018;12(2):163–170
40 Kavitha M, Selvaraj S, Khetarpal A, Raj A, Pasupathy S, Shekar S. Comparative evaluation of superoxide dismutase, alpha-tocopherol, and 10% sodium ascorbate on reversal of shear bond strength of bleached enamel: an in vitro study. Eur J Dent 2016;10(1):109–115

----

STORM CLOUDS AND SILVER LININGS: RESPONDING TO DISRUPTIVE INNOVATIONS THROUGH COGNITIVE RESILIENCE

Jim Dewald and Frances Bowen, Haskayne School of Business, University of Calgary. Published as: Dewald, J. and Bowen, F. E. (2010), "Storm clouds and silver linings: responding to disruptive innovations through cognitive resilience", Entrepreneurship Theory and Practice, 34(1): 197-218. Wiley-Blackwell. http://hdl.handle.net/1880/48241

Abstract

Incumbent firms facing disruptive business-model innovations must decide whether to respond through inaction, resistance, adoption, or resilience. We focus on resilient responses to simultaneous perceived threat and opportunity by managers of small incumbent firms. Using cognitive framing arguments, we argue that risk experience moderates perceptions of opportunity, whereas perceived urgency moderates situation threat.
We test our framework in the real estate brokerage context, where small incumbents face considerable challenges from disruptive business-model innovations such as discount brokers. Analysis of data from 126 real estate brokers broadly confirms our framework. We conclude with implications of our research for small business incumbents.

INTRODUCTION

When digital photography was initially developed, it represented an exciting and innovative opportunity for aspiring entrepreneurs. At the same time, digital photography represented a disruptive technology that posed tremendous challenges for the incumbent film icon, Kodak. Digital photography struck at the very heart of the Kodak business model of producing, selling, and processing film, a model that could become redundant through digital substitution. For over a decade, Kodak has struggled to find ways to respond to this significant disruption, with little or no success, succumbing to the realization that new business-model adoption is confronted with multiple barriers, none more significant than managers' cognitive barriers to change (Kim & Mauborgne, 2005; Voelpel, Leibold, Tekie, & Von Krogh, 2005). Recently, the cognitive perspective has been emphasized as an important explanation for entrepreneurial opportunity generation (Baron & Ward, 2004; Mitchell et al., 2007). We turn this research on its head and present a cognitive perspective on managers' responses to disruptive business-model innovation. From the manager's point of view, the key question is how to respond to new entrepreneurial business models: whether through inaction (Charitou & Markides, 2003), proactive resistance (Markides, 2006), adoption (Christensen & Raynor, 2003), or resilience (Sutcliffe & Vogus, 2003).
Further, we extend the usual focus on the responses of large incumbents, such as hub-and-spoke airlines' difficulty in effectively responding to the low-cost carrier model, or traditional steel manufacturers' loss of market dominance to minimills, to consider how managers of small incumbent firms choose to respond to new business models. The Kodak story is representative of an increasingly common situation of rapidly changing business environments created by the introduction of disruptive business-model innovations (Charitou & Markides, 2003; Christensen & Raynor, 2003; Markides, 2006). The challenge for managers is to find ways to adopt disruptive business-model innovations in order to prosper through, and at times survive, the pending environmental change, a concept referred to as organizational resilience (Sutcliffe & Vogus, 2003). While large firms have more resources and scope of expertise, this challenge is particularly difficult for small incumbent firms, which are more resource constrained and so less able to absorb environmental shocks (Dewald, Hall, Chrisman, & Kellermanns, 2007; Jarillo, 1989; Klaas, McClendon, & Gainey, 2000). On the other hand, while resource limitations constrain managers of small incumbent firms, they are able to develop organizational resilience more easily than corporate decision-makers, as they are less bound by corporate roles and contexts that reward caution and asset protection (Markides & Geroski, 2004; Corbett & Hmieleski, 2007). In this sense, resilience is a theme that links closely to entrepreneurial studies of opportunity identification (Shepherd & DeTienne, 2005) and cognition (Baron, 2006). Whether small or large, it is difficult for any organization invested in 'old ways' to abandon those known ways in favor of unproven new technologies or business models.
Charitou and Markides' (2003) study of 98 companies that had faced disruptive business-model innovations demonstrated that a firm's motivation to respond was a key determinant of firm response. However, they fail to explain where the firm's motivation is drawn from, leaving a gap between individual managers' cognitive resilience and motivation. Hirschman (1970) argues that firms respond to their customers' actions, which is at odds with more recent findings by Christensen (1997), who argues that customers and many existing stakeholders, including employees, are embedded in the inertia of reliable but old ways. Either way, the influences, positive or negative, of various stakeholders both internal and external to the firm are incorporated within managerial cognition. Drilling further down from organizational resilience to the managerial decision-making unit of analysis, we note that Sutcliffe and Vogus (2003) argue that the resilient manager has a rare ability to simultaneously manage organizational change and stability, consistent with Tushman and O'Reilly's (1996) description of the "ambidextrous manager". However, despite the importance of resilience in the face of environmental change (cf. Gittell, Cameron, Lim, & Rivas, 2006), specific attributes or indicators of organizational resilience, in particular those associated with a manager's cognitive intentions, have not been clearly delineated. Hence, in understanding resilience to environmental shifts, we are faced with two primary questions: which factors determine an incumbent firm's response to disruptive innovations? And, as organizational actions are an outcome of managerial actions, which factors influence the cognitive intentions of managers of small incumbent firms facing a pending disruption? In this paper, we derive and test a framework that focuses primarily on the second question.
Thus, our research expands Charitou and Markides' (2003) approach by addressing individual-level attributes and perceptions that influence managerial cognition and determine managerial motivations (Atkinson, 1957). We begin with a review of the relevant literature on organizational resilience and related fields. From this review, we develop a framework that describes the specific attributes of cognitive resilience. These attributes are then incorporated into hypotheses and tested using primary data collected from 126 managers of small incumbents in the real estate brokerage industry. Our results are discussed, and conclusions are provided, including suggestions for managers of small incumbent firms and future research in this area.

A COGNITIVE RESILIENCE FRAMEWORK

Organizational resilience is a relatively new field of research (Lengnick-Hall & Beck, 2005). Unfortunately, the boundaries of organizational resilience have been ill-defined and wide-ranging, spanning studies of stubborn maintenance of previous routines in defiance of pending environmental change (Edmondson, 1999), maintenance of positive changes under challenging conditions (Weick, Sutcliffe, & Obstfeld, 1999), prosperity in the face of targeted industry threats (Gittell et al., 2006), and a capacity to adjust organizational routines to adapt to untoward events (Lengnick-Hall & Beck, 2005; Sutcliffe & Vogus, 2003). These perspectives are consistent with psychological studies of resilience, which focus on the ability of individuals to adapt (Masten & Reed, 2002) and grow (Richardson, 2002) in the face of adversity. Here, we focus on organizational resilience as an organizational capacity to adopt new routines and processes to address the threats and opportunities arising from disruptive business-model innovation. Organizational resilience is manifested through both cognitive and behavioral resilience.
Cognitive resilience is a decision-making intention based on decision-makers' ability to "notice, interpret, analyze, and formulate responses" to pending environmental change (Gittell et al., 2006). Behavioral resilience represents the action of implementing the formulated response or intentions developed through cognitive resilience. Disruptive business-model innovations represent a specific form of environmental change, described by Markides (2006) as a redefinition of product or service attributes in a manner that is generally perceived as inferior to incumbent product or service providers (Charitou & Markides, 2003; Christensen & Raynor, 2003; Kim & Mauborgne, 2005). While disruptive business models often incorporate disruptive technologies, adopters need not rely on the discovery of new products or services (Markides, 2006). The challenge for incumbents is that in adopting the business model of their new entrepreneurial competitors, they run the risk of damaging their existing business and undermining their existing business model (Charitou & Markides, 2003). Furthermore, adopters need to recognize an opportunity to capitalize on the innovation, raising the question of what cognitive factors enable some decision-makers but not others to notice and respond to such changes in their environment (Mitchell et al., 2007). This is particularly challenging when the new business model might be perceived as both a threat and an opportunity to the incumbent. Recognizing recombinations as a form of innovation is nothing new (cf. Schumpeter, 1934), and adding the dimension of perceived inferiority links to a depth of literature on disruptive technologies, initiated by Christensen and Bower (1996).
The study of disruptive technologies has over time evolved into a consideration of potential disruptions in a myriad of fields, including medical procedures (Christensen, Bohmer, & Kenagy, 2000), global competitiveness (Hart & Christensen, 2002), and newspaper advertising (Gilbert, 2001). Christensen and Raynor (2003) provided a comprehensive list of 75 historic disruptions, including business-model innovations in airlines (Southwest), customer relationship management (Salesforce.com), fast food (McDonald's), car manufacturing (Ford, Toyota), retailing (Wal-Mart, Staples, Amazon), stock brokerage (Charles Schwab), computer manufacturing (Dell), and education (University of Phoenix). In each of these situations, available technologies were applied to business-model innovations. The Kodak story is an example of how the widespread use of a well-known technology, digital photography, provided the entrepreneurial opportunity for the development of a new business model. Combining the research on organizational resilience with research on disruptive innovations, and more specifically disruptive business-model innovations, we have developed a framework of cognitive resilience (see Figure 1). In complex and highly uncertain environments, managers of small incumbent firms are more likely to use a heuristic-based rather than a systematic, rational process to help them navigate change (Mitchell et al., 2007). The decision context is an important source of the cognitive schemas that aid the framing of cognitive heuristics (Corbett & Hmieleski, 2007). Managers' intentions are particularly driven by the extent to which a given competitive situation is perceived as a threat or an opportunity for the firm.

------------------
Insert Figure 1
------------------

Situation threat represents the manager's perception of an exogenous external threat, such as the introduction of a disruptive business-model innovation.
Firm opportunity represents the manager's inward assessment of the opportunity presented to the firm if it were to adopt a disruptive business-model innovation. To illustrate our framework, we consider the context of real estate brokerage. Disruptive business-model innovations are taking hold in the real estate brokerage industry in many areas (Miceli, Pancak, & Sirmans, 2007; Rowley, 2005). Information sharing of real estate property listings has shifted the value network from information control and organization to service (NAR, 2003), providing ample entrepreneurial opportunities for both new firms and incumbents. There are three basic categories of new business models in the real estate brokerage industry. The most drastic is complete disintermediation of the brokerage industry through for-sale-by-owner ('FSBO') models, which have gained momentum through online advertising. The discount brokerage model offers targeted services for a reduced fee, transferring some of the work to consumers, including the initial home search using electronic data sources. Finally, the corporate model bundles additional services, such as utility connections and legal fees. The FSBO and discount models clearly fit the definition of disruptive business-model innovations, since they involve recombination of existing activities which cumulatively provide a service inferior to the traditional business model, albeit at a reduced fee. While the FSBO and discount models are significantly different value propositions, they both represent threats and opportunities to small incumbents. Since both of these disruptive business-model innovations can spur a similar range of cognitive reactions in managers, we treat them together in this paper. If the manager perceives little or no threat from the introduction of disruptive business-model innovations, and further anticipates little or no firm opportunity from adopting the disruption, then we expect that no action will be taken (Quadrant 1).
As Charitou and Markides (2003) suggest, ignoring the innovation is a legitimate response by incumbent firms, particularly when the new business model targets different customers, offers different value propositions, and requires different skills and competences. With respect to the real estate brokerage industry, legally only licensed Realtors can facilitate a sale, unless the sale is facilitated directly by the owner. This exclusion is used by homebuilders to allow unlicensed employees to act as sales representatives, thereby cutting the brokerage industry out of these transactions. In response, many brokerage firms recognize that they do not have the builder relationships, knowledge of the building process, and ability to follow through on warranty concerns, so by and large they have ignored this specific opportunity. In the next section, we use theories of cognitive framing to illustrate the quadrants of our framework in the real estate context, and propose formal hypotheses on managers' responses to disruptive business-model innovations in a small incumbent context.

Cognitive Framing, Risk Experience, and Urgency

Responses to Perceived Threat

Two contrasting schools of situation framing permeate the management literature: prospect theory (Kahneman & Tversky, 1979) and issues interpretation (Dutton & Jackson, 1987). In both schools, framing is malleable and subject to individual or organizational perceptions. For instance, an identical organizational situation can be viewed as negative (i.e., the need to avoid the possible loss of an existing competitive advantage) or positive (i.e., the need to pursue an opportunity in order to gain a new competitive advantage). Paradoxically, each school predicts an opposite outcome from negatively framed situations. Prospect theory predicts a risk-seeking response (the 'certainty effect'), while issues interpretation predicts a risk-averse response ('threat rigidity').
George, Chattopadhyay, Sitkin, and Barden (2006) argue that both theories apply, proposing that prospect theory responses link to a potential gain or loss of resources, while issues interpretation responses link to a potential gain or loss of control. They do, however, acknowledge and attempt to address the reality that resources and control often travel together, and in any event it is difficult to distinguish whether resource or control risks are central to a given situation. Prospect theory was developed to challenge expected utility theory (Friedman & Savage, 1948). Empirical tests of prospect theory confirm the 'certainty effect', wherein negative framing results in risk-seeking behaviors, while positive framing yields risk-averse behaviors (Casey, 1994; Kühberger, 1998; Mittal & Ross, 1998; Mukherji & Wright, 2002; Puto, 1989; Qualls & Puto, 1987; Sanders, 2001; Wang, 2004; Wang, Simons, & Bredart, 2001; Wiseman & Gomez-Mejia, 1998). Cognitive-based framing is at the center of explanations of prospect theory findings, with a recent meta-analysis concluding that framing is a "reliable phenomenon" (Kühberger, 1998: 23). Further, individual cognition is explicitly incorporated into prospect theory, as the cognitive-based "manipulation of the reference point is clearly effective in framing" (1998: 36). Like prospect theory, issues interpretation (Dutton & Jackson, 1987) relies on framing losses and gains around a cognitively constructed reference point. While the certainty effect predicts that negative framing will lead to risk-seeking behavior, issues interpretation research indicates the opposite: that a 'threat-rigid' response to negative framing will lead to risk-averse behavior. We argue that there are two specific differences between prospect theory and issues interpretation that explain this contradiction: the origin of framing, and the defining nature of risk. In issues interpretation, framing is socially constructed.
The perception process is dynamic, involving either a central decision maker, a highly trusted individual within the strategic decision-making team, or a consensus among the members of the strategic decision-making team (Dutton & Jackson, 1987: 77). In the prospect theory literature and its empirical tests, by contrast, framing is embedded in the wording of the question. Hence, the origins and subsequent development of issues interpretation are distinctively different from those of prospect theory. In the issues interpretation process, a chain of events starts with the decision maker(s) categorizing (labeling) strategic issues as either opportunities or threats, which "affects the subsequent cognitions and motivations of key decision makers, these, in turn, systematically affect the process and content of organizational actions" (1987: 77). Opportunity labeling implies a positive situation with expected gains and control, while threat labeling implies a negative situation with expected losses and little control. Due to the central influence of the decision maker(s) in labeling strategic issues based on their understanding of developments in the industry, we hypothesize that issues interpretation fits decision-making in small incumbents more appropriately than prospect theory. The second distinction between prospect theory and issues interpretation relates to the definition and use of 'risk'. Prospect theory is grounded in pure risk: knowing the available outcomes and the probability of those outcomes occurring, without knowing the actual outcome (Knight, 1921). This is similar to the risk of rolling dice or flipping a coin. Issues interpretation, on the other hand, and more specifically the concept of threat rigidity, addresses uncertain or ambiguous environments (McCrimmon & Wright, 1986).
In organizational settings, decision-making is mired in uncertainty, and managers are unable to decouple an uncertain future from deterministic calculations of risk probability. Thus, issues interpretation provides a better framework for understanding strategic decision-making wherein managers of small incumbent firms must interpret uncertain or ambiguous changes without knowing the full range of outcomes and probabilities. Prospect theory and issues interpretation predict opposite responses to negative framing. Issues interpretation relies on cognitive formulation of framing, which is consistent with organizational settings. Further, issues interpretation incorporates uncertainty within the determination of risk-averse behavior, which is consistent with risk-oriented strategic decision-making. Hence, we argue that, all things being equal, negative framing on its own will encourage risk-averse responses such as proactive resistance (Quadrant 2 in Figure 1) to disruptive business-model innovations (Charitou & Markides, 2003). An example of proactive resistance in the heavily regulated real estate brokerage industry is lobbying regulators and proactively working to amend legislation to protect incumbent business models. Proactive resistance is a sanctioned and encouraged action by the National Association of Realtors (NAR). In a 2005 memorandum to state affiliates, NAR urged its members to press for "state laws that are designed to replace competition with regulation", adding that "Realtors have the right to lobby for legislative and regulatory action – even if the effect of such action would be anti-competitive" (Wall Street Journal, 2005: A8). Several states, including Missouri, Texas, Illinois, Oklahoma, Iowa, Utah, Florida, and Alabama (Wall Street Journal, 2005), have instituted minimum service standards.
The minimum standards include requirements to receive and present offers, which are aimed specifically at attacking the discount brokerage models that provide a limited service, such as listing without presentation or negotiation services, for a relatively small flat fee. We therefore propose the following hypothesis, which is consistent with Quadrant 2 of our framework (Figure 1): Hypothesis 1 – A manager of a small incumbent firm’s increased perception of situation threat arising from a disruptive environmental change will be positively related to their intention to proactively resist a disruptive business-model innovation. Responses to Opportunity There is both intuitive and theoretical support for the capability-based perspective that opportunity framing is consistent with a willingness to adopt disruptive business-model innovations (Charitou & Markides, 2003; Christensen & Raynor, 2003; Markides, 2006). Researchers have considered many theories of how managers recognize the value in new opportunities, including financial potential (Schumpeter, 1934; Shepherd & DeTienne, 2005), prior knowledge (Shane, 2000), alertness (Ardichvili, Cardozo, & Ray, 2003) and managerial cognition (Baron, 2006). For our research, the ‘how’ is less important than understanding why, or what factors motivate managers to formulate cognition-based intentions as a first step toward adopting disruptive business models. Hypothesis 1 is based on the expectation that threat framing will result in proactive resistance to disruptive business-model innovations. The contrary view is that resistance is myopic (Levinthal & Warglien, 1999), particularly if the business-model innovation is inevitable due to external forces such as new customer demands (Christensen & Raynor, 2003). 
A manager of a small incumbent firm who expects the inevitable changeover may perceive benefits, including being an early adopter of a disruptive business-model innovation, even though it requires significant resource reconfiguration (Lavie, 2006). Some managers will distinguish between the threat posed by external factors, and the opportunity available through adoption of new innovative ways (Gilbert, 2003; Lavie, 2006). We expect that organizations possessing skills, resources, or capabilities expected to form a source of competitive advantage will select strategic options that facilitate the exploitation of that opportunity (Barney, 1991). In Figure 1, Quadrant 3 managers primarily perceive an opportunity for the firm, which stimulates an interest in pursuing the disruptive business-model innovation. This combination of high firm opportunity and low situation threat is likely to reside with ‘early adopters’ who sense the benefits of the disruption in advance of others in the industry. In the real estate brokerage industry, there are a few early adopter firms, some of which are new to the industry. However, many of the new ventures are led by entrepreneurial broker-managers who have made the shift from incumbent firms to start new ventures. In Canada, Realty Sellers was among the first discount realtors, headed by Stephen Moranis, a well-established Realtor and previously president of the nation’s largest real estate board. As Charitou and Markides (2003) point out, adoption can take at least two forms, depending on whether the firm is "playing two games at once" by spinning out a new venture internally, or embracing the new model completely and scaling it up. Both of these forms of adoption occur when managers perceive more opportunity than threat. 
Hence, our second hypothesis, consistent with Quadrant 3 of our framework, is as follows: Hypothesis 2 – A manager of a small incumbent firm’s increased perception of firm opportunity arising from a disruptive environmental change will be positively related to their intention to adopt a disruptive business-model innovation. Cognitive Resilience: Simultaneous Threat and Opportunity Finally, we introduce the paradox of high situation threat and high firm opportunity (Quadrant 4 in Figure 1). Charitou and Markides (2003) do not provide a specific response for this situation, and we argue that managers solve this paradox through cognitive resilience. In other words, while the high threat would normally cause incumbents to proactively resist disruptive business-model innovations, a high sense of firm opportunity encourages the manager to consider the benefits of adoption. Adoption may occur through acquisition of a disruptive competitor or direct adoption of disruptive business-model practices. The core contribution of our paper is to examine why managers in small incumbents might choose different resilient responses in this high-threat and high-opportunity situation. We use literature on cognitive framing to show the importance of risk experience and urgency as moderators in managers’ intentions to adopt disruptive business models. If both Hypotheses 1 and 2 are supported, then a contradiction exists between the threat response (to resist) and the opportunity response (to adopt). Gilbert and Bower (2002) explored this contradiction in earnest, applying issues interpretation principles to their study of the newspaper industry facing disruptive business-model innovations. The authors developed a matrix of responses to disruptive changes, anchored by an independent framing of (1) the resource allocation process and (2) the venture management process. 
Resource allocation process framing occurs in advance of venture management framing (Gilbert, 2003), creating a response paradox wherein threat framing at the resource allocation process attracts resources, but opportunity framing provides the control, gains, and positive situation for effective response to disruptive shocks. By de-coupling the response matrix into two time periods, Gilbert and Bower argued that it is possible to isolate the decision-making into two independent actions – one associated with threat framing of the resource allocation intentions, and the other based on opportunity framing associated with venture management. In other words, the firm justifies the resource allocation by recognizing the inherent threat posed by the disruption, and then spins off a new venture mandated to pursue the disruption as an opportunity (Christensen, 1997; Christensen & Bower, 1996; Markides, 2006). This two-staged approach would appear to necessitate a complex and unlikely combination of manipulated framing and ideological flip-flopping, and indeed the results are at best mixed (Charitou & Markides, 2003). We contend that cognitive resilience provides a more reasonable response to the high-threat and high-opportunity paradox. The threat of disruptions is exogenous to the firm, and hence quite independent of the firm’s framing of opportunity associated with firm resources. In other words, while threat rigidity arguments emphasize the human nature to resist risky change, resilient managers are able to bridge the dominant threat reaction to consider a reasoned evaluation of the opportunity available based on firm capabilities. Hence, through a resilient response, it is possible to resolve the disruptive ‘dilemma’, described by Markides as the conflict between new and existing ways (2006: 21). 
We contend that previous research indicates that critical developmental experiences (Krueger, 2007), such as risk-based experience, and perceptions of urgency will further moderate the intentions of a resilient manager (Vlaar, DeVries, & Willenborg, 2005). The secondary matrix we incorporated within Figure 1 examines the variety of resilient responses in a high-threat and high-opportunity context, and focuses on the moderating impact of risk experience and urgency on resilience. If the manager has a negative risk experience and perceives low urgency of the disruption (Quadrant 4a), we expect them to defer a decision, attending to more pressing priorities while keeping an eye out for ways to gain the necessary experience. When managers perceive negative risk experience coupled with a high sense of urgency (remembering that the threat perception is high), they will set a priority to acquire the necessary experience (Quadrant 4b). For example, in the real estate brokerage industry, managers might hire an experienced manager from an internet-based business. If urgency is low but risk experience is positive, the manager will monitor the situation, keeping a watchful eye toward selecting the most effective time to adopt the disruption (Quadrant 4c). Our primary interest is in Quadrant 4d, where risk experience is positive and urgency is high, and where we expect that the manager will formulate intentions to adopt the disruption. 
Sitkin and Pablo (1992) developed a risk propensity model that integrates both individual and situational factors, finding that: (1) risk behaviour is a reflection of risk propensity interacting with risk perception (an individual indicator), (2) risk propensity is derived from three individual factors (risk preference, inertia, and outcome history), and (3) risk perception is determined by five situational factors (problem framing, problem domain familiarity, top management team heterogeneity, social influence, and organizational control systems). Risk propensity is driven largely by risk outcome history (Pablo, 1997), or what we term risk experience, supporting the intuitive prospect that favorable experience in making risky decisions will enhance the small incumbent manager’s risk propensity. Notwithstanding the relative differences in characteristics of risky decisions, Pablo (1997) found that positive experiences realized through previous risky decision-making will reinforce future risk propensity. Although the manager may not have faced a decision as risky or significant as adopting a new business model, positive past experience is expected to increase propensity to take on larger risks, an intuitive and empirically supported notion (Pablo, 1997). Adopting a new business model might involve the reallocation of critical resources and reconfiguration of capabilities, and can impact the very survival of the business. Managers will draw on their experience when facing unfamiliar risky propositions, and we therefore expect that risk experience will moderate the response of resilient strategic decision-makers facing disruptive business-model innovations. Hypothesis 3 – The relationship between a manager of a small incumbent firm’s increased perception of firm opportunity and intention to adopt a disruptive business-model innovation is moderated by positive risk experience, such that positive risk experience increases the likelihood of intention to adopt. 
Comparative studies indicate that innovation-induced industry change is idiosyncratic (cf. Cooper & Schendel, 1976). Managers often face gestation periods that are unpredictable, ex ante, and beyond the control of the incumbent. This uncertainty is further intensified with complex change such as a business-model adoption, which requires process evolution, and possibly acquisition, integration, and elimination of certain firm capabilities (Eisenhardt & Martin, 2000; Lavie, 2006). Ainslie and Haslam (1992) argue that managers will put off addressing major decisions in favour of less important initiatives, until there is an imminent cost to avoidance. While gestation periods cannot be predicted with any degree of certainty, managers have industry-specific knowledge of technologies and markets and will therefore tend to formulate their own estimates of the urgency of the response needs posed by disruptive business-model innovations. Even where an industry is experiencing an ongoing exogenous shock that presents a high situational threat, managers may not perceive this threat as immediate. For instance, alternative energy sources pose a significant, but less than urgent, threat to the energy industry. Wireless technologies pose a significant, but less than urgent, threat to cable companies. Hence, we expect that a manager’s intentions to adopt disruptive business-model innovations will be moderated by their perception of the urgency associated with the need to respond. Hypothesis 4 – The relationship between a manager of a small incumbent firm’s increased perception of situation threat and intention to adopt a disruptive business-model innovation is moderated by an increased perception of urgency, such that high urgency increases the likelihood of intention to adopt. RESEARCH METHOD The most critical criterion in our selection of an appropriate field of study was timing. 
Charitou and Markides (2003) emphasize the importance of timing, noting that there is a stage in the evolution of disruptive business-model innovations when incumbents recognize that "the new ways of playing the game are in conflict with the established ways" (57). With respect to the real estate brokerage industry, at the time of our data collection in 2005, the National Association of Realtors (NAR) had already issued many reports indicating to its almost 1.0 million real estate brokerage members that the old ways would not suffice in the future (NAR, 2003). Technological advances had already taken hold and opened the path for new brokerage business models that offer either fewer services for reduced fees, or increased services for fees comparable to current rates. Disruptive business-model innovations had already gained legitimacy, as evidenced by NAR statements such as "…in the next three to five years, consolidation of firms and the shift in power from the independent contractor agent to the real estate firm will reinforce each other to alter the landscape of the real estate brokerage industry" (NAR, 2003, p. 48). It was clear to real estate brokers at the time of our study that the ‘new ways of playing the game’ were surely in conflict with the established ways. Real estate brokers are usually either independent owners of their firms or franchisees of large real estate firms such as Remax International. Independent owners have the freedom to set their own business model, choosing their own variant of the traditional full-service, new reduced-fee or enhanced-service models. In a franchise relationship there are term limits on franchises, and each franchise operation requires, by statute, a broker as manager. Brokers in a franchise relationship may not have much control over the business model of their overall franchise parent, but have the freedom to sell their current franchise and step out on their own or franchise with another real estate firm. 
Thus the field of study is both relevant and appropriate, as residential real estate brokers, by provincial statute, are the key decision-makers in small incumbent firms facing imminent environmental changes from disruptive FSBO or discount business models. A mail-in survey was sent to approximately 1,100 members (exact numbers were not provided by the administrators) of a real estate brokerage regulatory association in the Canadian province of Alberta. The survey was targeted at residential brokers, who represent the largest contingent of the association membership. Unfortunately, the association did not have a segregated list of residential brokers, and was only able to provide rough estimates of the proportion of members who primarily act as residential real estate brokers, estimated at 85% to 90% of the membership list. To assist in the survey design, two industry representatives and a senior administrator of the association were asked to complete the instrument and comment on any language or structure concerns. Specifically, they were asked to provide their opinion as to whether the questions ‘made sense’ in the context of the residential real estate brokerage industry. The surveys were then delivered in sealed envelopes to the association, and mailed by the association in order to preserve membership confidentiality. Due to the association’s confidentiality concerns, reminders could not be sent, limiting the number of responses. Survey responses were treated as anonymous, so no geographical or other identifying statistics could be captured beyond those requested on the questionnaire. Responses were received from 140 participants, representing approximately 15% of the population. Questionnaires from 14 respondents were excluded from our analysis as substantially incomplete, leaving a sample of 126. 
Although the responses were anonymous, some general information was captured to assess the extent of potential non-response bias within the sample. Comparing the data to other research data (AREA, 2004) indicates that non-response bias based on realtor gender or type of brokerage is not a serious threat to our study. The split between urban and rural clientele was 90/10 in our sample versus 91/9 for the AREA study, and respondents in both studies were predominantly male (83% in this study versus 69% in the AREA study). Our sample consisted of independent brokers (70%), franchise operators (28%) and corporate brokers (11%).1 Variables Our empirical analysis involves 9 variables, each measured through self-report questionnaire items. The measures are mostly based on a five-point scale with both anchor and mid-point references. To enhance reliability, most variables combine two or more measured items, with the total combined scores divided by the number of items measured, thereby resulting in a composite score between 1 and 5. In substantially complete questionnaires, occasional missing fields were replaced by mean values, unless noted otherwise. The variables are described in detail below. Cronbach alpha values were determined to measure the reliability of all multi-item variables.
------------------
Insert Table 1
------------------
1 The total is more than 100%, due to some respondents answering both franchise and independent (7%), and respondents checking all three (2%).
Dependent variables. We measured two separate dependent variables to assess the extent to which managers intended to resist or adopt discounted fees, an important disruptive business-model innovation in this industry. As Table 1 indicates, respondents were asked how likely they would be to lobby the authorities to protect the industry (resist), and whether they would be likely to include discounted fees in their service offering (adopt). Responses were scored on a 5-point Likert scale. 
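The composite-scoring and reliability procedure described above can be sketched as follows. This is a minimal illustration only: the item values are hypothetical, not the study's survey data, and the function names are our own.

```python
import numpy as np

def composite_scores(item_matrix):
    """Composite variable from 1-5 Likert items (rows = respondents,
    columns = items): missing fields (np.nan) are replaced by the item
    mean, then item scores are averaged, keeping the composite on 1-5."""
    item_means = np.nanmean(item_matrix, axis=0)
    filled = np.where(np.isnan(item_matrix), item_means, item_matrix)
    return filled.mean(axis=1)

def cronbach_alpha(item_matrix):
    """Cronbach's alpha reliability for a multi-item variable:
    alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    k = item_matrix.shape[1]
    item_vars = item_matrix.var(axis=0, ddof=1)
    total_var = item_matrix.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical three-item scale for five respondents; respondent 2
# skipped the second item (np.nan), so it is mean-imputed.
items = np.array([[4.0, 4.0, 5.0],
                  [2.0, np.nan, 2.0],
                  [5.0, 4.0, 5.0],
                  [3.0, 3.0, 3.0],
                  [1.0, 2.0, 1.0]])
scores = composite_scores(items)
alpha = cronbach_alpha(np.where(np.isnan(items),
                                np.nanmean(items, axis=0), items))
```

Items that move together across respondents, as in this toy matrix, yield a high alpha, which is the sense in which the α values reported below indicate internal consistency.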
In our study, adoption and resistance were not significantly correlated at the 0.05 level, indicating that these measures are capturing distinct intentions. Independent variables. We measured firm opportunity by three indicators designed for this study, asking questions on the extent to which discount brokerage was a new opportunity for the firm, and the extent to which the public and customers were encouraging the firm to adopt new business models (α = 0.64). We measured situation threat using three indicators of the extent to which the discount model is a threat to the brokerage industry and the extent to which alternative models such as FSBO would threaten incumbents’ profits (α = 0.61). Moderating variables. We measured urgency with a single open-ended question. The responses to the question on how long, if ever, it will be before commission rates are reduced in order to meet customer demands were coded to reflect the relative urgency of the pending disruption. Imminent adoption would reflect a gestation period that would clearly indicate a time period shorter than the period required by the incumbent to adopt the disruptive business model. The adoption period can vary depending on the capability reconfiguration needs of the incumbent (Lavie, 2006) from a few months to a few years. Discussions with industry representatives indicated that a comfortable period to adjust a business model would be 2 to 4 years. Hence, we classified responses under 4 years as a relatively ‘short’ gestation period, and 5+ years as a relatively ‘long’ gestation period. Missing items were coded at the mid-point of 3 out of 5 in order to position non-respondents between the polar extremes of a perceived imminent adoption and a long or non-existent gestation period. We used measures suggested by Pablo (1997) to measure risk experience. In that study, the reliability of the measures was 0.87, compared to 0.94 found here. Control variables. 
Three control variables were included to address rival theories. Age of respondents was measured and regressed against the dependent variable as a test of the theory that younger managers would be less rigid in their cognitive frames, due to a lack of institutionalized belief in the old business model. Gender was included as a control variable to determine if there was any distinction between the risk preferences of male and female small business managers (Langowitz & Minniti, 2007; Bird & Brush, 2002). Finally, firm performance was included to control for the learning-from-performance-feedback effect, where managers in firms performing below the aspiration level are more likely to adopt strategic changes or innovations than those exceeding expectations (Cyert & March, 1963; Greve, 2003). We measured firm performance using three items, with a very high degree of internal consistency (Cronbach alpha = 0.91). The measures are all set in the current period, inquiring as to volume, profit, and profit-per-transaction measures in relation to expectations. Industry representatives indicated that these three measures reflect brokerage performance. We also included direct measures of items for use in testing the influence of social desirability (Greenwald & Satow, 1970) and negative affectivity (Watson, Clark, & Tellegen, 1988). Specific tests were undertaken with these variables, and are described below. The descriptive statistics and correlations for all variables are indicated in Table 2. All variables were tested, and determined to follow a normal distribution, with the exception of negative affectivity. The impact of a non-normal distribution for this variable is discussed in the next section.
------------------
Insert Table 2
------------------
Analysis One of the challenges of empirical studies employing cognitive-based decision-making models is that, by definition, cognition can only be measured by direct inquiry. 
Unfortunately, the use of self-report measures for both independent and dependent variables introduces concerns with respect to common method bias (Armitage & Conner, 2001). We addressed this concern through two primary techniques: (1) design procedures, and (2) statistical controls (Podsakoff, MacKenzie, Lee, & Podsakoff, 2003). Various instrument and process design procedures were incorporated into the research design, including guaranteeing anonymity (to address social desirability), use of industry-specific language (to reduce item ambiguity), reversing pole anchors on item indicators (to improve attentiveness), separating predictor and criterion variables within the instrument design (to reduce response bias), and directly measuring both social desirability and negative affectivity. Although an independent source for the dependent variable would be desirable, self-report is unavoidable, as intentions can only be measured by asking the manager. With respect to statistical analysis, the social desirability and negative affectivity variables are included in the correlation matrix. Bivariate correlation analysis confirmed that there were no unexpected relationships among the variables tested and either social desirability or negative affectivity (Kline, Sulsky, & Rever-Moriyama, 2000). We used OLS-based multiple regression, using SPSS version 14.0, to test the models. Multiple regression is preferred for statistical modeling where independent variables are expected to have a direct effect on the dependent variable. In addition, hierarchical regression models allowed us to observe incremental effects of adding variables to the model.
------------------
Insert Table 3
------------------
Table 3 shows the standardized OLS multiple regression results of the control, independent and moderating variables regressed on the alternative strategies of proactive resistance and adoption. Our results show support for all four hypotheses. 
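The hierarchical, moderated-regression logic described above can be illustrated on synthetic data. The numbers below are assumptions for illustration, not the study's survey data: a main-effects model is fitted first, then a product (interaction) term is added, as in the opportunity × risk-experience moderation of Hypothesis 3.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 300

# Synthetic respondents (standardized scores, not the survey data):
# adoption intention contains a genuine opportunity x risk-experience
# interaction, mimicking the moderation proposed in H3.
opportunity = rng.normal(size=n)
risk_exp = rng.normal(size=n)
adoption = (0.4 * opportunity + 0.2 * risk_exp
            + 0.5 * opportunity * risk_exp
            + rng.normal(scale=0.5, size=n))

def ols(y, *predictors):
    """Least-squares coefficients, intercept first."""
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

# Hierarchical steps: main effects first, then add the interaction term.
step1 = ols(adoption, opportunity, risk_exp)
step2 = ols(adoption, opportunity, risk_exp, opportunity * risk_exp)
```

In this sketch, the coefficient on the product term in the second step recovers the simulated moderation effect; analogously, a significant interaction coefficient in the paper's moderated model is what would support H3 and H4.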
Situation threat is positively and significantly related with proactive resistance (supporting H1 in model 2), while opportunity is positively and significantly associated with adoption (supporting H2 in model 4). Model 5 shows support for the moderated relationships hypothesized in H3 and H4. Focusing on the external dimension, decision-makers are even more likely to intend to adopt when the disruption is seen as urgent, for a given level of situation threat. Alternatively, based on an internal focus, decision-makers are more likely to intend to adopt when they have had a positive experience with risky situations in the past, given a particular perception of firm opportunity. The demographic control variables were non-significant, indicating that the decision-maker’s age and gender were not related to intentions of either resistance or adoption. Interestingly, the firm performance coefficient was positive and significant in the unmoderated models of adoption (models 3 and 4). For small incumbents, the learning-from-performance-feedback effect seems to be outweighed by the availability of slack resources, which might encourage experimentation with new business models (Cyert & March, 1963). IMPLICATIONS AND CONCLUSIONS Managers of small incumbent firms show cognitive resilience when they form intentions based on their ability to notice, interpret, analyze, and formulate responses to high-threat and high-opportunity situations. Disruptive business models introduce threats to existing ways, but also opportunities for new sources of competitive advantage (Markides, 2006). Cognitive resilience enables managers to look past the storm clouds of disruptive change to see the silver linings of opportunity. Incumbent real estate brokers face considerable challenges from the threat of discount service providers (Miceli, Pancak, & Sirmans, 2007). 
Discount models are taking hold quickly in the US, with entrepreneurial firms such as Iggy’s House, which offers free listing services, and Buyside, which acts only as a buyer agent and rebates 70% of any conventional split commission to the buyer, showing impressive growth in listings and revenue (Cook, 2007). The challenge for small incumbents is to know when to proactively resist or adopt the new discounted business models, given their limited resources, current capabilities, and the danger that adoption might undermine their traditional full-service model (Charitou & Markides, 2003). In our study, we found support for cognitive framing explanations of the likelihood of resistance and adoption. Specifically, we found support for the issues interpretation, or threat rigidity, response, which predicted increased likelihood of resistance when managers perceive business-model innovation as a threat, and increased likelihood of adoption when the innovation is perceived as an opportunity. We also found evidence that urgency moderates situation threat, and that risk experience moderates firm opportunity, in predicting intentions to adopt. Thus real estate brokers respond based on whether they predominantly perceive discount models as a threat or a business opportunity. Our findings suggest that once the moderating effects of risk experience and urgency are included, the main effects of firm opportunity and situation threat on intention to adopt are non-significant. This may indicate that managers separately align their internal and external perceptions. Looking internally, real estate brokers evaluate their firm’s capabilities, and hence opportunities, through the lens of their own risk experience. Looking externally, they evaluate the threat of discount models through their perception of urgency. 
In this way, our research indicates that cognitive resilience grows out of a simultaneous, rather than a time-delayed, two-step, threat-opportunity assessment as in Gilbert and Bower’s (2002) response paradox. We suggest that cognitive resilience depends on a simultaneous internal and external evaluation of the situation, and not a staged process. Our study of the real estate brokerage industry serves as a reminder that not all incumbents are large, established firms. The example of Kodak’s failure to respond to the digital photography revolution illustrates the importance of cognitive inertia in large established firms, and how managers in corporate contexts are more conditioned to consolidate or exploit existing business models rather than create new markets (Markides & Geroski, 2004; Kim & Mauborgne, 2005). Our findings suggest that managers in small incumbent firms are also likely to proactively resist adoption if they see the new model as a threat. One explanation for this is that smaller incumbents are generally operating closer to the survival level than large firms, and so managers are more likely to refer to the survival level in assessing their risk preferences than in larger firms (March & Shapira, 1992). Entrepreneurship researchers emphasize that cognitive framing may differ between entrepreneurial and corporate settings (Corbett & Hmieleski, 2007). We would encourage further investigation of cognitive framing in an intermediate context, that of small incumbent firms. Implications for Small Incumbents Our research holds at least three practical implications for small incumbents. First, standard issues interpretation threat rigidity arguments indicate that small incumbents will demonstrate the ‘deer in the headlights’ response to disruptive business-model innovations. We found evidence that this leads to proactive resistance by small incumbents in the short term. 
While the new business model is not a direct competitor providing a comparable service, or aiming at the same market, urgency remains low and small incumbents can defer a decision or monitor the market. However, if the disruptive business model becomes well established in the marketplace, it can lead to shifting customer expectations. For example, small boutique clothing retailers have relied on customers’ desire to try on clothing to proactively resist changes to pricing and distribution of designer clothing introduced by online retailers. Initially the online model was aimed at a more price-conscious market segment, and so deferral or monitoring by incumbents was successful. Over time, shifting customer expectations about wide stock availability, flexible return policies, transparent pricing and even price flexibility through online auctions are increasing the urgency of this online retailing threat. As urgency increases, small incumbent firms ultimately face the need to adopt (by, for example, launching their own online store), acquire (by partnering to gain online experience), or accept a reduced market position. Second, by framing situations as both high threat and high opportunity, resilient managers in small incumbents are able to position their organizations for adoption when the time is right. Even with negative risk experience and low urgency, the decision to defer (Quadrant 4a) is much more resilient than rigid proactive resistance (Quadrant 2). With positive risk experience or high urgency, the strategic repertoire available is wider, leading to more creative responses. An implication for managers of small incumbents is that they should look for ways to gain positive risk experience by, for example, sharing positive learning experiences within their strategic team or business partner networks. They should also try to find ways to keep urgency low. 
One way to do this is to clearly delineate the small incumbent’s offering from the new business model, as independent bookstores have done by emphasizing the social experience of their stores contrasted with the remote delivery of online retailers such as Amazon.com. Third, small incumbents might find reassurance that once risk experience and urgency are taken into account, the availability of resources is not a significant predictor of likelihood of adoption. Thus small incumbents should not fear limited resources as a barrier to adopting new models. Instead, they should focus on gaining positive risk experience and minimizing urgency as outlined above. Directions for Future Research We extended Charitou and Markides’ (2003) study of responses to disruptive innovation by using cognitive framing to explain the origins of firm motivation. By focusing on small incumbents, we were able to make more direct connections between the cognitive perceptions of individual decision-makers and motivation to respond at the firm level than is realistic in the large incumbent context. Further, we used cognitive framing to predict when small incumbents would exhibit each of the five responses to disruptive innovation. While our research focused on the intention to adopt, we would encourage others to test the other outcomes in our framework. Specifically, we posit that acquisition of capabilities (Quadrant 4b) and monitoring disruptive innovations (Quadrant 4c) are alternative forms of resilience, based on combinations of risk experience and urgency. These outcomes remain to be empirically tested, and would be a valuable extension to our work. Mitchell et al. 
(2007), in their introduction to a Special Issue of this journal on Cognition, focused on the central question of entrepreneurial cognition research: "How do entrepreneurs think?" Our research contributes directly to this central question by exploring the situational, organizational and individual factors that combine to generate a cognitive response. We encourage researchers to extend beyond isolated considerations of individual motivation and to develop more comprehensive models that incorporate direct environmental influences on entrepreneurial cognition. Our findings also contrast with George et al. (2006), who focused on responses to potential gains and losses of resources as opposed to control. Rather than separating resources from control, we incorporated both in our general questions on perceived threat and opportunity, and focused on risk experience and urgency. Integrating our study with George et al.'s raises the question of whether risk experience and urgency might relate to gains or losses of resources as opposed to control. We expect that risk experience might relate more closely to control, whereas urgency might relate to resources, but we did not test these conjectures in this study. Future studies might also juxtapose George et al.'s explanation for adoption and ours, and ask which factors dominate adoption decisions: urgency and risk experience, or the different dimensions of gains and losses. Finally, we would encourage research assessing whether our findings are due to idiosyncrasies in our research context. More specifically, several of our measures were created to address the real estate context, and the reliability of some variables was less than desired. These measures should be further refined in future studies.
We selected the real estate brokerage industry because of the imminent threat posed by discount models, and because independent or franchisee brokers are considered the primary managers and strategic decision-makers in small incumbent real estate brokerages. Since our survey, pressures from discount models have intensified, as the success of firms such as Iggy's House and Buyside attests. While we expect our findings to be robust both across contexts where the potentially disruptive business model becomes a success and ones where the new business model ultimately fails, we would be interested to see this conjecture empirically tested. We would also encourage our cognitive resilience framework to be tested in the large incumbent context, or in small business contexts where there are no franchisees. Our paper contributes to our understanding of innovation, specifically by identifying effective incumbent strategic responses to disruptive business-model innovations. Our study of intention to adopt business-model innovations in the real estate brokerage industry helps to answer how incumbents can both accept the risk of new ways and abandon their old ways. We found that managers' cognitive resilience is vital in meeting this crucial challenge.

REFERENCES

Ainslie, G., & Haslam, N. (1992). Hyperbolic discounting. In G. Loewenstein & J. Elster (Eds.), Choice over time (pp. 57-92). New York: Russell Sage Foundation.
Ardichvili, A., Cardozo, R., & Ray, A. (2003). A theory of entrepreneurial opportunity identification and development. Journal of Business Venturing, 18, 105-123.
AREA. (2004). Industry viability study - Final report. Prepared by the Alberta Real Estate Association & Overview Business Consulting Inc.
Armitage, C.J., & Conner, M. (2001). Efficacy of the theory of planned behavior: A meta-analytic review. British Journal of Social Psychology, 40, 471-499.
Atkinson, J.W. (1957). Motivational determinants of risk-taking behavior. Psychological Review, 64, 359-372.
Barney, J. (1991). Firm resources and sustained competitive advantage. Journal of Management, 17(1), 99-120.
Baron, R.A. (2006). Opportunity recognition as pattern recognition: How entrepreneurs "connect the dots" to identify new business opportunities. Academy of Management Perspectives, 20(1), 104-119.
Baron, R.A., & Ward, T.B. (2004). Expanding entrepreneurial cognition's toolbox: Potential contributions from the field of cognitive science. Entrepreneurship Theory and Practice, 28(6), 553-573.
Bird, B., & Brush, C. (2002). A gendered perspective on organizational creation. Entrepreneurship Theory and Practice, 26(3), 41-65.
Casey, J.T. (1994). Buyers' pricing behavior for risky alternatives: Encoding processes and preference reversals. Management Science, 40(6), 730-749.
Charitou, C.D., & Markides, C.C. (2003). Responses to disruptive strategic innovation. MIT Sloan Management Review, 44(2), 55-63.
Christensen, C.M. (1997). The innovator's dilemma: When new technologies cause great firms to fail. Boston, MA: Harvard Business Press.
Christensen, C.M., Bohmer, R., & Kenagy, J. (2000). Will disruptive innovations cure health care? Harvard Business Review, 78(5), 102-105.
Christensen, C.M., & Bower, J. (1996). Customer power, strategic investment, and the failure of leading firms. Strategic Management Journal, 17(3), 197-218.
Christensen, C.M., & Overdorf, M. (2000). Meeting the challenge of disruptive change. Harvard Business Review, 78(2), 67-76.
Christensen, C.M., & Raynor, M.E. (2003). The innovator's solution: Creating and sustaining successful growth. Boston, MA: Harvard Business Press.
Cook, J. (2007). New online real estate firm moves into Washington. Seattle Post-Intelligencer, June 15, 2007, accessed August 17, 2007 at http://seattlepi.nwsource.com/business/320108_buyside18.html
Corbett, A.C., & Hmieleski, K.M. (2007). The conflicting cognitions of corporate entrepreneurs. Entrepreneurship Theory and Practice, 31(1), 103-121.
Cyert, R.M., & March, J.G. (1963). A behavioral theory of the firm. Englewood Cliffs, NJ: Prentice-Hall.
Dewald, J.R., Hall, J., Chrisman, J.J., & Kellermanns, F.W. (2007). The governance paradox: Preferences of small vulnerable firms in the homebuilding industry. Entrepreneurship Theory and Practice, 31(2), 279-297.
Dutton, J.E., & Jackson, S.E. (1987). Categorizing strategic issues: Links to organizational action. Academy of Management Review, 12(1), 76-90.
Edmondson, A. (1999). Psychological safety and learning behavior in work teams. Administrative Science Quarterly, 44, 350-383.
Eisenhardt, K.M., & Martin, J.A. (2000). Dynamic capabilities: What are they? Strategic Management Journal, 21, 1105-1121.
Friedman, M., & Savage, L.J. (1948). The utility analysis of choices involving risk. The Journal of Political Economy, 56(4), 279-304.
George, E., Chattopadhyay, P., Sitkin, S.B., & Barden, J. (2006). Cognitive underpinnings of institutional persistence and change: A framing perspective. Academy of Management Review, 31(2), 347-385.
Gilbert, C.G. (2001). A dilemma response: Examining the newspaper industry's response to the internet. Doctoral dissertation, Harvard University.
Gilbert, C. (2003). The disruptive opportunity. MIT Sloan Management Review, 44(4), 27-32.
Gilbert, C., & Bower, J.L. (2002). Disruptive change: When trying harder is part of the problem. Harvard Business Review, 80(5), 95-101.
Gittell, J.H., Cameron, K., Lim, S., & Rivas, V. (2006). Relationships, layoffs, and organizational resilience: Airline industry responses to September 11. The Journal of Applied Behavioral Science, 42(3), 300-329.
Greenwald, H.J., & Satow, Y. (1970). A short desirability scale. Psychological Reports, 27, 131-135.
Greve, H.R. (2003). Organizational learning from performance feedback: A behavioral perspective on innovation and change. Cambridge, UK: Cambridge University Press.
Hart, S.L., & Christensen, C.M. (2002). The great leap: Driving innovation from the base of the pyramid. MIT Sloan Management Review, 44(1), 51-56.
Hirschman, A.O. (1970). Exit, voice, and loyalty: Responses to decline in firms, organizations, and states. Cambridge, MA: Harvard University Press.
Jarillo, J.C. (1989). Entrepreneurship and growth: The strategic use of external resources. Journal of Business Venturing, 4, 133-147.
Kahneman, D., & Tversky, A. (1979). Prospect theory: An analysis of decision under risk. Econometrica, 47(2), 263-292.
Kim, W.C., & Mauborgne, R. (2005). Blue ocean strategy: How to create uncontested market space and make the competition irrelevant. Boston, MA: Harvard Business School Press.
Klaas, B.S., McClendon, J., & Gainey, T.W. (2000). Managing HR in small and medium enterprise: The impact of professional employer organizations. Entrepreneurship Theory and Practice, 25, 107-124.
Kline, T.J.B., Sulsky, L.M., & Rever-Moriyama, S.D. (2000). Common method variance and specification errors: A practical approach to detection. The Journal of Psychology, 134(4), 401-421.
Knight, F.H. (1921). Risk, uncertainty and profit. Boston, MA: Houghton Mifflin.
Krueger, N.F. (2007). What lies beneath? The experiential essence of entrepreneurial thinking. Entrepreneurship Theory and Practice, 31(1), 123-138.
Kühberger, A. (1998). The influence of framing on risky decisions: A meta-analysis. Organizational Behavior and Human Decision Processes, 75(1), 23-55.
Langowitz, N., & Minniti, M. (2007). The entrepreneurial propensity of women. Entrepreneurship Theory and Practice, 31(3), 341-364.
Lavie, D. (2006). Capability reconfiguration: An analysis of incumbent responses to technological change. Academy of Management Review, 31(1), 153-174.
Lengnick-Hall, C.A., & Beck, T.E. (2005). Adaptive fit versus robust transformation: How organizations respond to environmental change. Journal of Management, 31(5), 738-757.
Levinthal, D.A., & Warglien, M. (1999). Landscape design: Designing for local action in complex worlds. Organization Science, 10(3), 342-357.
March, J.G., & Shapira, Z. (1992). Variable risk preferences and the focus of attention. Psychological Review, 99(1), 172-183.
Markides, C. (2006). Disruptive innovation: In need of a better theory. Journal of Product Innovation Management, 23, 19-25.
Markides, C.C., & Geroski, P.A. (2004). Fast second: How smart companies bypass radical innovation to enter and dominate new markets. San Francisco, CA: Jossey-Bass.
Masten, A.S., & Reed, M.G.J. (2002). Resilience in development. In C.R. Snyder & S.J. Lopez (Eds.), Handbook of positive psychology (pp. 74-88). Oxford, UK: Oxford University Press.
MacCrimmon, K.R., & Wehrung, D.A. (1986). Taking risks: The management of uncertainty. New York, NY: The Free Press.
Miceli, T.J., Pancak, K.A., & Sirmans, C.F. (2007). Is the compensation model for real estate brokers broken? Journal of Real Estate Finance and Economics, 35, 7-22.
Mitchell, R.K., Busenitz, L.W., Bird, B., Gaglio, C.M., McMullen, J.S., Morse, E.A., & Smith, J.B. (2007). The central question in entrepreneurial cognition research 2007. Entrepreneurship Theory and Practice, 31(1), 1-27.
Mittal, V., & Ross, W.T. (1998). The impact of positive and negative affect and issue framing on issue interpretation and risk taking. Organizational Behavior and Human Decision Processes, 76(3), 298-324.
Mukherji, A., & Wright, P. (2002). Reexamining the relationship between action preference and managerial risk behaviors. Journal of Managerial Issues, 14(3), 314-330.
NAR (2003). The future of real estate brokerage: Challenges and opportunities for Realtors. Chicago, IL: National Association of Realtors.
Pablo, A.L. (1997). Reconciling predictions of decision making under risk: Insights from a reconceptualized model of risk behavior. Journal of Managerial Psychology, 12(1), 4-20.
Podsakoff, P.M., MacKenzie, S.B., Lee, J.Y., & Podsakoff, N.P. (2003). Common method biases in behavioral research: A critical review of the literature and recommended remedies. Journal of Applied Psychology, 88(5), 879-903.
Puto, C.P. (1987). The framing of buying decisions. Journal of Consumer Research, 14, 301-315.
Qualls, W.J., & Puto, C.P. (1989). Organizational climate and decision framing: An integrated approach to analyzing industrial buying decisions. Journal of Marketing Research, 26, 179-192.
Richardson, G.E. (2002). The metatheory of resilience and resiliency. Journal of Clinical Psychology, 58, 307-321.
Rowley, J. (2005). The evolution of internet business strategy: The case of UK estate agency. Property Management, 23, 217-226.
Sanders, W.G. (2001). Behavioral responses of CEOs to stock ownership and stock option pay. Academy of Management Journal, 44(3), 477-492.
Schumpeter, J. (1934). The theory of economic development: An inquiry into profits, capital, credit, interest, and the business cycle. Cambridge, MA: Harvard University Press.
Shane, S. (2000). Prior knowledge and the discovery of entrepreneurial opportunities. Organization Science, 11(4), 448-469.
Shepherd, D.A., & DeTienne, D.R. (2005). Prior knowledge, potential financial reward, and opportunity identification. Entrepreneurship Theory and Practice, 29(1), 91-112.
Sitkin, S.B., & Pablo, A.L. (1992). Reconceptualizing the determinants of risk behavior. Academy of Management Review, 17(1), 9-38.
Sutcliffe, K.M., & Vogus, T.J. (2003). Organizing for resilience. In K.S. Cameron, J.E. Dutton, & R.E. Quinn (Eds.), Positive organizational scholarship: Foundations of a new discipline (pp. 94-110). San Francisco, CA: Berrett-Koehler.
Tushman, M.L., & O'Reilly, C.A. (1996). Ambidextrous organizations: Managing evolutionary and revolutionary change. California Management Review, 38(4), 8-30.
Vlaar, P., De Vries, P., & Willenborg, M. (2005). Why incumbents struggle to extract value from new strategic options: Case of the European airline industry. European Management Journal, 23(2), 154-169.
Voelpel, S., Leibold, M., Tekie, E., & Von Krogh, G. (2005).
Escaping the red queen effect in competitive strategy: Sense-testing business models. European Management Journal, 23(1), 37-49.
Wall Street Journal. (2005). The realtor racket. Editorial, Wall Street Journal (Eastern Edition), August 12, 2005, p. A8.
Wang, X.T. (2004). Self-framing of risky choice. Journal of Behavioral Decision Making, 17, 1-16.
Wang, X.T., Simons, F., & Bredart, S. (2001). Social cues and verbal framing in risky choice. Journal of Behavioral Decision Making, 14(1), 1-15.
Watson, D., Clark, L.A., & Tellegen, A. (1988). Development and validation of brief measures of positive and negative affect: The PANAS scales. Journal of Personality and Social Psychology, 54(6), 1063-1070.
Wiseman, R.M., & Gomez-Mejia, L.R. (1998). A behavioral agency model of managerial risk taking. Academy of Management Review, 23(1), 133-153.
Wright, G., & Goodwin, P. (2002). Eliminating a framing bias by using simple instructions to 'think harder' and respondents with managerial experience: Comment on 'Breaking the frame'. Strategic Management Journal, 23, 1059-1067.

FIGURES & TABLES

TABLE 1 - Description of Variables (measured items, response anchors, and Cronbach's α)

Dependent variables
- Resist Change: "You will lobby the authorities to ensure that the industry is protected from discounted fees." (Not at all Likely - Very Likely)
- Adopt Change: "You will adopt a new business model that includes the option of discounted fees." (Not at all Likely - Very Likely)

Independent variables
- Firm Opportunity (α = .64): "The discount brokerage model is a new opportunity for you" (Strongly Disagree - Strongly Agree); "The public wants Realtors to limit their role/service offering" (Not True At All - Absolutely True); "Your customers want to play a more direct role in the real estate process"
- Situation Threat (α = .61): "The discount brokerage model is a threat to the real estate brokerage industry" (Strongly Disagree - Strongly Agree); "In the coming years, FSBO (for sale by owner) will grow to represent a larger share of the market"; "In the next five years, it is likely that profits will shrink"

Moderating variables
- Urgency: "In your opinion, how long, if ever, will it be before commission rates are reduced in order to meet customer demands?" (open-ended)
- Risk Experience (α = .94): "Think back to a significant business situation in the past when you took the more risky alternative: How pleased were you with the outcome?" (Not At All - Totally); "Overall how would you rate the outcome?" (Very Negative - Very Positive); "How would you classify the result?" (Complete Failure - Complete Success)

Control variables
- Age: "Please indicate your age." (open-ended)
- Gender: "Please indicate your gender." (Female or Male)
- Firm Performance (α = .91): "Your transaction volume in the current year will be"; "Your profit in the current year will be"; "Your profit per transaction in the current year will be" (Well Above Projections - Well Below Projections)

TABLE 2 - Descriptive Statistics and Pearson Correlation Values
Each row gives the variable, its mean and standard deviation, then correlations with the preceding variables (1, 2, 3, ...) in order:
1. Resist Change (2.2, 1.3)
2. Adopt Change (2.2, 1.2): -.08
3. Age (54.4, 8.2): -.11, -.05
4. Gender (1.8, 0.4): .04, -.09, .24**
5. Firm Performance (7.7, 2.6): -.05, .20*, .13, .03
6. Firm Opportunity (6.9, 2.5): -.14, .44**, .04, -.01, .08
7. Situation Threat (8.7, 2.7): .19*, .13, .00, -.10, .43**, -.02
8. Urgency (2.8, 1.4): -.06, .45**, -.04, -.08, .19*, .42**, .27**
9. Risk Experience (10.8, 2.8): -.04, .17, -.04, .08, -.22*, .20*, -.23**, -.03
10. Social Desirability (2.1, 0.7): -.09, .07, -.04, .02, -.04, .14, -.13, .08, .19*
11. Negative Affectivity (1.3, 0.4): .11, .02, -.01, .03, .29**, .01, .44**, .24**, -.05, .10
* p<.05, ** p<.01; N = 126

TABLE 3 - Standardized Regression Tests
Dependent variables: Proactive Resistance (Models 1-2) and Adoption (Models 3-5); coefficients listed in model order as reported:
- Age: -.12, -.12, -.06, -.08, -.07
- Gender: .06, .10, -.08, -.05, -.04
- Firm Performance: -.03, -.16, .22*, .18*, .17
- Situation Threat: .28**, -.11
- Firm Opportunity: .42**, .01
- Situation Threat × Urgency: .33**
- Firm Opportunity × Experience: .37*
- Adjusted R²: -.007, .050, .030, .203, .273
* p<.05, ** p<.01; N = 126

Figure 1 - Cognitive Resilience Framework. [Framework crossing Situation Threat × Urgency with Firm Opportunity × Risk Experience: Quadrant 1 = No Action; Quadrant 2 = Proactive Resistance; Quadrant 3 = Proactive Adoption; Quadrant 4 = Resilience, subdivided into 4a Defer, 4b Acquire, 4c Monitor, 4d Adopt.]

Chianucci F, Cutini A (2012). Digital hemispherical photography for estimating forest canopy properties: current controversies and opportunities. iForest 5: 290-295. Review Paper - doi: 10.3832/ifor0775-005 ©iForest – Biogeosciences and Forestry

Introduction

Accurate and reliable measures of forest canopy are crucial to a wide range of studies including hydrology, carbon and nutrient cycling, and global change (Chen et al. 1997, Macfarlane et al. 2007c). For this reason, forest canopy properties are widely used in many long-term research programs and in monitoring the status of forest ecosystems (Cutini 2003, Macfarlane 2011). In addition, the availability of observations of forest canopy properties such as leaf area index and forest light conditions is essential to calibrate remotely-sensed information based on airborne and satellite data (Rich 1990, Wang et al. 2004).
However, direct measurements of forest canopy are particularly challenging to obtain, owing to the inherent difficulties of making direct measurements in forest, high levels of spatial and temporal variability, and the difficulty of generalizing local measurements to the landscape scale (Chen & Cihlar 1995, Chen et al. 1997, Macfarlane et al. 2007b). Harvesting of trees for direct measurements in forest is labor intensive, destructive, time consuming and costly, and cannot be applied to large areas (Chen et al. 1997). Alternative and less destructive methods based on tree allometry and litterfall have been developed in order to measure forest canopy properties. Nevertheless, even these methods are labor intensive, time-consuming and not error-free because of their site- and species-dependency (Breda 2003). By contrast, remotely sensed vegetation indexes have novel potential but still need cross-calibration by means of ground-based observations (Wang et al. 2004). As a consequence, indirect measures of forest canopy properties using ground-based instruments have long been implemented, as documented by the rich literature (e.g., Chen et al. 1997, Cutini et al. 1998, Breda 2003, Jonckheere et al. 2004, Macfarlane et al. 2007b). Indirect methods enable estimation of forest canopy properties from measurements of radiation transmission through the canopy, making use of radiative transfer theory (Ross 1981). However, indirect methods, including the LAI-2000 Plant Canopy Analyzer (Li-Cor, Lincoln, Nebraska) and the AccuPAR ceptometer (Decagon Devices, Pullman, WA), two of the most commonly used devices, are hindered by the complexity of forest canopy architecture (Chen et al. 1997) and by the high cost of the instruments (Macfarlane et al. 2007c). Since the first approach provided by Evans & Coombe (1959), hemispherical photography (also known as fish-eye photography) has long been used for the indirect optical measurement of forest canopy.
However, because of significant obstacles involving film cameras (i.e., lack of software, time-consuming acquisition and processing procedures), film hemispherical photography has been progressively forsaken (Breda 2003). Advances in digital photographic technology have led to a renewal of interest in photographic methods for indirectly quantifying forest canopy. So far, hemispherical photography is the most widely used of several photographic techniques. Fish-eye photography enables characterization of forest canopy by means of photographs taken looking upward (or, in some cases, looking downward) through an extreme wide-angle lens (Jonckheere et al. 2004). The method has many advantages over the other indirect methods. It is rapid, inexpensive and readily available; a hemispherical image provides a permanent record of the geometric distribution of gap fraction, which is generally used to calculate forest light regimes and canopy properties such as canopy openness, leaf area index and leaf angle distribution. Hence, hemispherical photography greatly expands the number of canopy properties that can be estimated, compared with the other indirect methods. In spite of the recent improvements in digital photography, significant obstacles to the adoption of digital hemispherical photography still remain; accurate and meaningful estimates of forest canopy properties with digital hemispherical photography are hindered by different critical steps regarding image acquisition and software processing; thus, an adequate field collection and image processing procedure is required to approach the standard of an ideal device (Jonckheere et al. 2004). The purpose of this contribution is to briefly introduce some of the major drawbacks of the digital hemispherical photography method.
Given that the different controversies of digital hemispherical photography have usually been treated separately, this contribution is aimed to: (i) provide a basic foreground of digital hemispherical photography, in order to outline the strengths and weaknesses of the method; (ii) give an updated framework of the main procedures recently proposed to overcome the technical problems of digital hemispherical photography; (iii) provide a reliable field measurement and image processing protocol for canopy description and analysis, particularly regarding sampling strategy.

© SISEF http://www.sisef.it/iforest/ - iForest (2012) 5: 290-295

(1) Consiglio per la Ricerca e la Sperimentazione in Agricoltura - Centro di Ricerca per la Selvicoltura, v. S. Margherita 80, I-52100 Arezzo (Italy); (2) Department for Innovation in Biological, Agro-Food, and Forestry systems, University of Tuscia, v. San Camillo de Lellis, I-01100 Viterbo (Italy)
Corresponding author: Francesco Chianucci (francesco.chianucci@entecra.it)
Received: Sep 13, 2012 - Accepted: Nov 13, 2012
Communicated by: Roberto Tognetti

Digital hemispherical photography for estimating forest canopy properties: current controversies and opportunities
Francesco Chianucci (1-2), Andrea Cutini (1)

Hemispherical photography has been used since the 1960s in forest ecology. Nevertheless, specific constraints related to film cameras have progressively prevented widespread adoption of this photographic method. Advances in digital photographic technology hold great promise to overcome the major drawbacks of hemispherical photography, particularly regarding field techniques and image processing aspects.
This contribution is aimed to: (i) provide a basic foreground of digital hemispherical photography; (ii) illustrate the major strengths and weaknesses of the method; (iii) provide a reliable protocol for image acquisition and analysis, to get the most out of using hemispherical photography for canopy properties extraction.
Keywords: Digital Hemispherical Photography, Fisheye Lens, Leaf Area Index, Radiative Transfer, Foliage Clumping

Foreground to Digital Hemispherical Photography

The first hemispherical lens was developed by Hill (1924) to study cloud formation. The first approach to fish-eye photography in forestry was then provided by Evans & Coombe (1959), who used hemispherical photography to describe the light environment under forest canopy. Anderson (1964, 1971) used fish-eye photography to calculate the direct and scattered components of solar radiation from visible sky directions. Subsequently, film hemispherical photography was used for a long time to estimate forest canopy properties (Bonhomme et al. 1974, Anderson 1981, Chan et al. 1986, Wang & Miller 1987). However, technical and theoretical obstacles involving many time-consuming steps have progressively prevented the widespread adoption of film hemispherical photography (Breda 2003, Macfarlane et al. 2007c). More recently, advances in digital photographic technology and image processing software have led to a renewal of interest in digital hemispherical photography for indirect quantification of forest canopy properties (Breda 2003, Macfarlane et al. 2007c, Jarčuška et al. 2010). Digital cameras have greatly simplified the process of image capture and processing, when compared with film cameras (Macfarlane 2011).
In addition, over the last few years, numerous commercial software packages, as well as freeware programs for canopy analysis, have been developed (Frazer et al. 1999, Jonckheere et al. 2005, Jarčuška 2008). Recent studies confirmed the accuracy of digital hemispherical photography in estimating forest canopy properties (Englund et al. 2000, Jonckheere et al. 2005, Leblanc et al. 2005, Macfarlane et al. 2007a, Ryu et al. 2010a). Moreover, new photographic techniques have been tested recently, confirming the high potentiality of digital photography (Macfarlane et al. 2007b, Ryu et al. 2010b, Chianucci & Cutini 2013).

Theoretical background

Hemispherical photography is a method that measures the gap fraction at multiple zenith angles. Gap fraction is computed by applying the Beer-Lambert law (eqn. 1):

P(θ) = exp[-G(θ) · LAI / cos(θ)]    (eqn. 1)

where LAI is the leaf area index, θ is the zenith angle of view, and G(θ), named the G-function, corresponds to the fraction of foliage projected on the plane normal to the zenith direction. In theory, by measuring the gap fraction at multiple zenith angles it is possible to simultaneously determine both LAI and the foliage angle distribution function (Macfarlane et al. 2007b). The forest light environment is also derived from gap fraction. The method makes the following assumptions:
• Leaves are randomly distributed within the canopy;
• Individual leaf size is small compared with the canopy, and thereby with the sensor field of view;
• Foliage is black, namely it does not transmit light;
• Leaves are azimuthally randomly oriented.

Current controversies and corrective strategies

Photographic exposure

Photographic exposure affects the magnitude of the canopy gap fraction (Zhang et al. 2005). The importance of exposure control is well documented, since automatic exposure has been demonstrated to prevent accurate and reliable estimates of the gap fraction (Chen et al. 1991, Macfarlane et al. 2000, Zhang et al. 2005).
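As a numerical illustration of eqn. 1, gap fractions measured in several zenith rings can be inverted to an effective (clumping-uncorrected) leaf area index. One standard inversion is Miller's (1967) integral; the sketch below, with an illustrative function name and simulated values of our own, assumes a spherical leaf angle distribution (G = 0.5) to generate the test data:

```python
import numpy as np

def effective_lai(zenith_deg, gap_fraction):
    """Effective LAI from gap fractions P(theta) via Miller's (1967) integral:
    LAI = 2 * Int_0^{pi/2} -ln(P(theta)) * cos(theta) * sin(theta) dtheta."""
    theta = np.radians(np.asarray(zenith_deg, dtype=float))
    integrand = (-np.log(np.asarray(gap_fraction, dtype=float))
                 * np.cos(theta) * np.sin(theta))
    # trapezoidal integration over the sampled zenith range
    return 2.0 * np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(theta))

# Hypothetical canopy obeying eqn. 1 with G(theta) = 0.5 and true LAI = 5
zen = np.linspace(0.5, 89.5, 180)                    # zenith angles (degrees)
gaps = np.exp(-0.5 * 5.0 / np.cos(np.radians(zen)))  # simulated P(theta)
print(round(effective_lai(zen, gaps), 2))            # ~4.96 (range truncation)
```

With clumped foliage the same inversion returns an effective LAI that underestimates the true value, which is why the clumping corrections discussed later are needed.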
Images taken with automatic exposure underestimate gap fraction in open canopies, while they overestimate gap fraction in medium-high density canopies (Zhang et al. 2005); as a consequence, exposure needs to be set manually. The optimum exposure for hemispherical photography would be the one which makes the sky appear as white as possible, while providing the best contrast between canopy and sky (Chen et al. 1991). Adequate exposure can be approximately determined in two steps:
• reference exposure is measured in a clearing (open sky), with aperture set to provide adequate depth of field;
• subsequently, exposure is set to overexpose the image (generally by 1-3 stops of the shutter speed) relative to the open sky reference (Macfarlane et al. 2000, Zhang et al. 2005, Macfarlane et al. 2007c), with the aperture unchanged.
This exposure setting makes the sky appear white, providing satisfactory contrast between canopy and sky, and is not influenced by stand density (Zhang et al. 2005).

Gamma function

Unlike film cameras, image sensors in digital cameras have the advantage of responding linearly to light (Zhang et al. 2005). However, in order to simulate the non-linear behavior of the human eye, the in-camera software applies a logarithmic transformation by means of the gamma function (Cescatti 2007). The gamma function describes the relation between actual light intensity during photography and the resulting brightness value of the pixel (Wagner 1998). A gamma value of 1.0 denotes an image that accurately reproduces actual light intensity (Macfarlane et al. 2007c). Digital cameras typically have gamma values between 2.0 and 2.5. The main effect of this correction is to lighten the midtones, thus resulting in worse estimates of canopy light transmittance (Cescatti 2007). Some studies found that gamma correction strongly affects forest canopy properties estimates with both film and digital cameras (Wagner 1998, Leblanc et al.
2005, Macfarlane et al. 2007c); consequently, back-correcting the gamma function of the images to 1 is recommended.

Pixel classification

The optimal light intensity (brightness value) from a digital color or grey-level image is generally used as a threshold value to distinguish pixels belonging to sky or canopy, thus producing a binary image (Wagner 1998, Jonckheere et al. 2004, Jonckheere et al. 2005, Cescatti 2007). Some authors suggested using the blue channel instead of the grey levels of the RGB image, because the foliage elements have a much lower reflectivity and transmittance in the blue region of the visible electromagnetic spectrum (Leblanc et al. 2005, Zhang et al. 2005, Macfarlane 2011). Previously, pixel classification was performed manually. Some software packages such as GLA (Gap Light Analyzer - Frazer et al. 1999) still employ interactive manual thresholding. However, some studies pointed out that manual thresholding can be a relevant source of error due to its subjectivity (Rich 1990, Jonckheere et al. 2004, Cescatti 2007, Jarčuška et al. 2010). As a consequence, different automatic, objective, operator-independent thresholding methods have been proposed to replace manual thresholding (Ishida 2004, Jonckheere et al. 2005, Nobis & Hunziker 2005, Macfarlane 2011), while commercial software packages (i.e., Winscanopy) have typically developed automatic pixel classification algorithms (Macfarlane 2011). A detailed analysis of the different classification methods falls outside the scope of this contribution (for a more complete description, see Wagner & Hagemeier 2006, Macfarlane 2011). However, Macfarlane et al. (2007c) noted that correcting images for the camera's gamma function and correcting the gap fraction distribution for foliage clumping have a greater effect on leaf area index derived from digital hemispherical photography than the classification method chosen.
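The two processing steps above (gamma back-correction, then a single global threshold on the blue channel) can be sketched as follows. The gamma value and threshold are illustrative assumptions only; in practice the camera's actual gamma and an automatically selected threshold would be used, and the function name is ours:

```python
import numpy as np

def sky_mask(rgb, gamma=2.2, threshold=0.5):
    """Classify sky pixels in an 8-bit RGB hemispherical image:
    1. take the blue channel (best foliage/sky contrast);
    2. back-correct the camera's gamma so values are ~linear in intensity;
    3. apply one global threshold: True = sky, False = canopy."""
    blue = rgb[..., 2].astype(float) / 255.0
    linear = blue ** gamma          # undo gamma encoding (gamma = 1 afterwards)
    return linear >= threshold

# Hypothetical 2x2 image: top row bright sky, bottom row dark foliage
img = np.zeros((2, 2, 3), dtype=np.uint8)
img[0, :, 2] = 240   # sky-like blue values
img[1, :, 2] = 60    # canopy-like blue values
print(sky_mask(img).mean())   # fraction of sky pixels in the toy image: 0.5
```

The mean of the binary mask within a zenith annulus is the measured gap fraction P(θ) for that annulus, which feeds the inversions discussed in the next section.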
In addition, Macfarlane (2011) even found that none of the more complicated classification methods available for image processing (including classification procedures from remotely-sensed imagery) yielded results that greatly differed from a simple global binary threshold classification.

iForest (2012) 5: 290-295 291 © SISEF http://www.sisef.it/iforest/

P(θ) = exp(−G(θ) LAI / cos(θ))

Digital hemispherical photography for forest canopy estimation

Leaf area index estimates

Three main sources of discrepancy are commonly recognized when digital hemispherical photography is used to estimate forest leaf area index.
(a) Digital hemispherical photography estimates a plant area index, rather than actual leaf area index, due to the contribution of woody elements (Breda 2003). Deciduous forests allow the estimation of woody area index from optical sensors, as it can be estimated from the gap fraction during the leafless period (Cutini et al. 1998, Leblanc 2008). For evergreen broadleaved and coniferous species, the woody material can be estimated from destructive sampling (Leblanc 2008), or from tabulated woody-to-total area ratios (Chen et al. 1997).
(b) Another source of discrepancy is the clumping of foliage (Breda 2003, Macfarlane et al. 2007c). Foliage clumping (Ω) strongly affects the canopy gap fraction, according to the Beer-Lambert law (eqn. 2, as modified by Nilson 1971):

P(θ) = exp(−G(θ) Ω LAI / cos(θ))

To overcome the limit of a non-random distribution of foliage within the canopy, some commercial software for hemispherical image analysis (e.g., Winscanopy) calculates clumping indexes from an analysis of the gap size distribution (Chen & Cihlar 1995), from the gap fraction distribution of a number of azimuth segments for each annulus of the hemisphere (Lang & Xiang 1986, Van Gardingen et al. 1999), or by combining these two approaches (Leblanc et al. 2005). Van Gardingen et al.
(1999) demonstrated that correcting for foliage clumping can reduce this underestimation by up to 15% compared with conventional analysis of hemispherical photography, which can result in an underestimate of 50% of the leaf area index derived from harvesting. Another advantage of fish-eye photography is that the instrument enables assessment of both within- and between-crown clumping effects, which results in greater accuracy in LAI retrieval in dense canopies, when compared with the LAI-2000 PCA (Chianucci & Cutini 2013).
(c) Even though an apparent advantage of fisheye photography is that LAI and the extinction coefficient (k) are simultaneously estimated [the G-function is related to the extinction coefficient by G(θ) = k · cos(θ)], previous studies found that the foliage angle distribution calculated from hemispherical photography is sensitive to canopy structure (Chen & Black 1991, Macfarlane et al. 2007a). As such, the foliage angle distribution calculated from fish-eye images should be treated with caution. To overcome this limit, an alternative is to measure the gap fraction at a single zenith angle of θ = 57.5°, given that the extinction coefficient at this angle is largely independent of the foliage angle distribution (Bonhomme & Chartier 1972). Some software packages (e.g., Winscanopy and CAN-EYE) allow the 57.5° analysis of fish-eye images.

Protocol for image acquisition and hemispherical software image analysis

In order to provide clear and concise suggestions to get the most out of digital cameras for estimating forest canopy properties, a hypothetical application of digital hemispherical photography is illustrated, with the example of the compact camera Nikon CoolPix 4500, equipped with the FC-E8 fish-eye lens converter, and the Winscanopy software. Camera setup and software analysis were set according to Macfarlane et al. (2007c).
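The 57.5° shortcut described above can be sketched as a small inversion of the gap fraction relation. Assuming G(57.5°) ≈ 0.5, as implied by Bonhomme & Chartier (1972), and an optional clumping index Ω (function name illustrative):

```python
import math

def lai_from_gap_fraction_57(p_gap, clumping=1.0):
    """Invert P(θ) = exp(-G(θ) · Ω · LAI / cos θ) at θ = 57.5°,
    where G(57.5°) ≈ 0.5 roughly independently of the foliage angle
    distribution. `clumping` is Ω (1.0 = randomly distributed foliage)."""
    if not 0.0 < p_gap < 1.0:
        raise ValueError("gap fraction must be in (0, 1)")
    theta = math.radians(57.5)
    g = 0.5  # assumed extinction-related G value at 57.5 degrees
    return -math.cos(theta) * math.log(p_gap) / (g * clumping)
```

For example, a measured gap fraction of 0.2 at 57.5° gives LAI ≈ 1.73 for random foliage, and a larger value once a clumping index Ω < 1 is applied.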
A compact camera was chosen mainly because the Nikon CoolPix models have been very popular in this field, and the performance of these cameras, as well as of other compact camera models, has been thoroughly investigated. For instance, Frazer et al. (2001) compared film photography with the 2.1 Megapixel Coolpix 950, Inoue et al. (2004) compared the effect of quality and image size in two different Coolpix models (990 vs. 900), Leblanc et al. (2005) used both the Coolpix 990 and 5000 in boreal forests, and Englund et al. (2000) tested the effect of image quality using the Coolpix 950. These researchers found that little or no difference exists between TIFF and JPEG images from the same camera, but that image size can influence canopy property estimates. Recently, DSLR (Digital Single Lens Reflex) cameras have become much more affordable and their resolution has greatly increased (Pekin & Macfarlane 2009), but thorough appraisals using DSLR cameras are still poorly documented; hence, generalization over canopy measurement procedures using DSLR cameras cannot yet be achieved. However, we refer to the work of Pekin & Macfarlane (2009) for a detailed analysis of the effects of quality, image size, file format and ISO in both the Coolpix 4500 and the DSLR Nikon D80.

Sampling strategy

Sampling strategy is a key issue when performing ground measurements that need to be representative of the whole canopy (Weiss et al. 2004). The number of images and the spatial location of shots define the sampling strategy. Canopy and vegetation type, spatial variability, plot area, sensor angle of view and distance to the edge of the stand can greatly influence the accuracy of a sampling design (Chason et al. 1991). It is best to consider a sampling protocol designed for the canopy type being measured. Canopy height is the first factor which should be considered.
As a rule of thumb, the distance between the sensor and the nearest leaf should be at least four times the width of the leaf. As a consequence, the use of upward-pointing fish-eye images in short canopies such as grassland and agricultural crops should be carefully evaluated (Leblanc 2008). The distance between the lens and the canopy may be too short, and the part of the canopy covered by the field of view of the camera may not be representative of the spatial distribution of the canopy (Liu & Pattey 2010). When this situation occurs, a downward-looking camera orientation is a reliable and practical alternative for agricultural crops and grassland (Demarez et al. 2008, Garrigues et al. 2008, Liu & Pattey 2010). A downward-pointing camera can also be used to separate understory vegetation from top canopy vegetation in a forest stand. Canopy spatial variability is a major factor affecting sampling strategies. For closed and randomly distributed canopies, a grid of sample points is usually a suitable strategy (Law et al. 2001), even though predetermined sample locations may require several adjustments, in that the presence of leaves immediately above the sensor may block the entire view at low zenith angles. By contrast, Leblanc et al. (2002) proposed sampling along 70 m transects over boreal and temperate forests, with measurements every 10 m. In the case of regular tree distributions, e.g., plantations of trees in evenly spaced rows, the adoption of a crisscross array scheme is recommended to ensure sampling under trees, thus avoiding bias from sampling inter-row gaps (Chen et al. 1997); the sample distance should be proportional to the range of distances between rows (Weiss et al. 2004). Accurate sampling in open and heterogeneous canopies is more challenging to achieve. Gap fraction is greatly influenced by clumping, especially in heterogeneous canopies (see eqn. 2).
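The grid and transect schemes just described can be sketched as simple point generators; the helper names and plot geometry are illustrative, not taken from any cited protocol:

```python
def grid_points(width, height, nx, ny):
    """Regular grid of camera positions inside a width x height plot,
    offset by half a cell so points stay away from the plot edges."""
    dx, dy = width / nx, height / ny
    return [(dx * (i + 0.5), dy * (j + 0.5))
            for j in range(ny) for i in range(nx)]

def transect_points(length=70.0, step=10.0):
    """Positions along a transect, e.g. a 70 m transect sampled every
    10 m as in Leblanc et al. (2002)."""
    n = int(length // step) + 1
    return [round(k * step, 6) for k in range(n)]
```

For a crisscross scheme in row plantations, the same idea applies with the sample spacing tied to the inter-row distance, as recommended above.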
Moreover, clumping occurs at different scales, from shoot level (within crown) to stand level (between crowns). This multiscale nature makes it hard to quantify foliage clumping (Ryu et al. 2010a). Irrespective of the method used to estimate gap fraction, in most applications gap fraction is given only in terms of zenith angle, since an assumption of azimuthal symmetry is generally used (Van Gardingen et al. 1999, Leblanc 2008). This assumption implies that such standard techniques should be limited to homogeneous canopies. It is well known that conifer needles are not randomly arranged in space, and radiation penetration models assuming a homogeneous canopy will underestimate the transmittance of a conifer canopy.

Chianucci F & Cutini A - iForest 5: 290-295

Hemispherical photography enables assessment of both within- and between-crown clumping (for more details, see the section "Leaf area index estimates"). As such, the incorporation of clumping is strongly recommended, when available from software outputs (Tab. 1). Again, heterogeneous canopies require more repetitions (images) than homogeneous canopies to achieve good spatial sampling. Image-processing software also allows masking parts of the hemisphere, in order to reduce the field of view, which may improve spatial representation in heterogeneous canopies, i.e., by including dense and sparse regions of a heterogeneous canopy in separate images. The masking procedure can also be used in mixed forests, in order to sample clusters of different species in different images. Masking can also be used to prevent undesired parts of the image from being analyzed (e.g., sun glint, the operator, etc.). As previously outlined, use of a downward-pointing camera enables analysis of the understory, and even allows separating this component from top-of-canopy elements. In the case of a taller understory, Rich et al.
(1999) suggested using a tall folding monopod with a self-leveling mount to sample the top of the canopy, or, alternatively, using a ladder. Use of a ladder also enables measuring the canopy at different heights, which can be useful in tropical wet forests. Other sampling difficulties arise from measurements on single trees, because indirect methods are poorly suited to single plants (Cutini & Varallo 2006). However, the Hemiview software offers specific options for measurements on single trees (Rich et al. 1999). The LAI-2000 user's manual offers similar suggestions, which can also be adapted to hemispherical photography. Specific precautions should be adopted on slopes, such as holding the lens normal to the ground; e.g., a self-leveling tripod is provided with the Winscanopy equipment. Some authors have also suggested corrective methods to account for slope effects in the analysis (Walter & Torquebiau 2000, Schleppi et al. 2007).

Image acquisition

Hemispherical images should be collected in summer, under fully developed canopy conditions, and under a uniform overcast sky, or alternatively close to sunrise or sunset (Leblanc et al. 2005); both these sky conditions provide perfectly diffuse light, thus avoiding the interference of direct sunlight, which can cause errors of up to 50% (Welles & Norman 1991). Images should be collected as fine-quality, maximum-resolution JPEGs, with the lens set to F1, which produces circular images. The F2 lens setting produces a full-frame fish-eye image instead of a circular image, the full-frame format having a better resolution than the circular one. However, only recent releases of the Winscanopy software (since version 2006a) have implemented analysis of full-frame images; so far, canopy analysis has usually been performed only on circular images (Macfarlane et al. 2007b). The camera must be aligned to magnetic north and pointed upward by means of a self-leveling tripod.
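The stop-based overexposure used throughout this protocol (1-3 stops relative to the open-sky reading, two stops in the Coolpix example) can be expressed as a one-line helper; each stop of shutter speed doubles the exposure time at fixed aperture:

```python
def overexposed_shutter(open_sky_shutter_s, stops=2):
    """Shutter speed (in seconds) that overexposes by `stops` relative
    to the open-sky reference: each stop doubles the exposure time,
    with the aperture held fixed."""
    return open_sky_shutter_s * (2 ** stops)
```

For instance, an open-sky reading of 1/500 s becomes 1/125 s beneath the canopy at two stops of overexposure.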
The aperture must be set to the minimum (f/5.3) and, with the camera in aperture-priority (A) mode, the exposure must be recorded in an adjacent clearing. Subsequently, the mode must be changed to manual (M) and the shutter speed must be lowered by two stops in comparison to the exposure metered in the clearing (Zhang et al. 2005). Exposure should be measured regularly beneath the canopy using a spot light meter, in order to check for possible changes in sky conditions during image acquisition (Macfarlane et al. 2007c). Different exposures can be promptly collected by setting exposure bracketing, which automatically adjusts the shutter speed from the starting exposure set by the operator (i.e., the open sky reference). On the other hand, digital cameras which can save image files in RAW format, such as DSLRs, allow varying the exposure after image acquisition.

Software image analysis

The gamma function of the images must be back-corrected to 1 prior to hemispherical software image analysis. Given that the Nikon CoolPix 4500 has a gamma function of approximately 2.2 (Leblanc et al. 2005), the original images must be adjusted with the gamma correction set to 0.45 (1/2.2), using a standard image manipulation program such as Irfanview (Macfarlane et al. 2007b). In the blue band of the electromagnetic spectrum, the foliage appears darker than in the other bands, thus minimizing the interference of multiple scattering in the canopy and chromatic aberration (Zhang et al. 2005). In addition, in diffuse sky conditions, the sky is saturated in the blue band, and thus appears white in the 8-bit blue channel (Leblanc 2008). As such, the blue channel of the images should be used in the canopy analysis to achieve the optimal brightness value (thresholding). Images must be sharpened (medium), to enhance the contrast between sky and canopy, and then analyzed using hemispherical image analysis software. Winscanopy enables automatic pixel classification of the images, thus avoiding human input. In addition, Winscanopy enables correction for foliage clumping, which can significantly reduce the underestimation of leaf area index in clumped canopies (Lang & Xiang 1986, Van Gardingen et al. 1999, Jonckheere et al. 2004, Chianucci & Cutini 2013). A zenith angle range of 0-70° and 8 azimuth segments should be adequate for the image analysis (Macfarlane et al. 2007c).

Tab. 1 - Main characteristics of the most widespread hemispherical image processing software packages.

Winscanopy (Regent Instruments Inc., Quebec, Canada) - Pixel classification: automatic and interactive (manual); Availability: commercial; LAI methods: 57.5° (Bonhomme & Chartier 1972), LAI-2000 (Miller 1967), generalized LAI-2000, hybrid (see Leblanc et al. 2005), ellipsoidal (Norman & Campbell 1989); Clumping index: Chen & Cihlar (1995), Lang & Xiang (see Van Gardingen et al. 1999).
GLA (Cary Institute of Ecosystem Studies, Millbrook, New York, US) - Pixel classification: manual; Availability: free; LAI methods: LAI-2000 (Miller 1967); Clumping index: no.
CAN-EYE (INRA, French National Institute of Agronomical Research) - Pixel classification: automatic and interactive (manual); Availability: free; LAI methods: 57.5° (Bonhomme & Chartier 1972), LAI-2000 (Miller 1967); Clumping index: Lang & Xiang (see Van Gardingen et al. 1999).
HemiView (Delta-T Devices Ltd., Cambridge, UK) - Pixel classification: manual; Availability: commercial; LAI methods: LAI-2000 (Miller 1967); Clumping index: no.
Hemisfer (WSL, Swiss Federal Institute for Forest, Snow and Landscape Research) - Pixel classification: automatic and interactive (manual); Availability: commercial; LAI methods: 57.5° (Bonhomme & Chartier 1972), LAI-2000 (Miller 1967), generalized LAI-2000, ellipsoidal (Norman & Campbell 1989); Clumping index: Chen & Cihlar (1995), Lang & Xiang (see Van Gardingen et al. 1999).

Comparison of software packages

The most popular commercial software packages are Winscanopy and Hemiview. Their standard systems include a digital camera, a calibrated fish-eye lens and a self-leveling tripod.
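As a rough sketch of the azimuth-segment approach of Lang & Xiang (1986) listed in Tab. 1, the clumping index for one zenith annulus can be written as the ratio of the logarithm of the mean gap fraction to the mean of the log gap fractions across segments; this is a simplified reading of the method, not the exact implementation used by any package:

```python
import math

def lang_xiang_clumping(segment_gap_fractions):
    """Clumping index for one zenith annulus, following the logarithmic
    gap averaging of Lang & Xiang (1986):
    omega = ln(mean P) / mean(ln P).
    omega = 1 for randomly distributed foliage, < 1 when gaps are
    concentrated in a few azimuth segments (clumped canopy)."""
    p = [max(x, 1e-6) for x in segment_gap_fractions]  # avoid log(0)
    mean_p = sum(p) / len(p)
    mean_log = sum(math.log(x) for x in p) / len(p)
    return math.log(mean_p) / mean_log
```

With eight equal segment gap fractions the index is 1 (no detectable clumping), while a canopy whose gaps are concentrated in half the segments yields a value below 1, increasing the retrieved LAI once applied as Ω in eqn. 2.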
Free software packages, such as GLA (Gap Light Analyzer - Frazer et al. 1999) and CAN-EYE, are also available for hemispherical image analysis. Most scientific studies concerning hemispherical photography use methods based on the determination of an optimal threshold (Hemiview, GLA, Winscanopy). Moreover, most of these studies have focused on forest canopies (Demarez et al. 2008). CAN-EYE is also widely used in agricultural environments, because of its ability to perform different pixel classification procedures, as compared with the thresholding method, thereby allowing analysis of downward-looking images (Demarez et al. 2008). Tab. 1 lists the main characteristics of some of the most widely used software packages.

Conclusive considerations

Despite uncertainties due to the image acquisition and processing steps, digital photography holds great promise for estimating forest canopy properties, on account of its speed, ready availability and low cost, which enable widespread use of the method. Photography even shows good potential to replace other indirect methods, due to its ability to simultaneously provide several parameters characterizing solar radiation and forest canopy properties (Chen et al. 1997). In addition, unlike other methods, a hemispherical photograph can be interpreted as a map of canopy openings (or, conversely, of canopy closure) relative to the location from which the image is taken, which can be inspected to provide insight into heterogeneity within a canopy and to compare different canopies at different sites (Rich et al. 1999). Last but not least, aside from scientific purposes, photography can be suitably applied to management and monitoring issues, i.e., routine canopy property estimation.
Recent advances in digital photographic equipment, such as higher-resolution cameras and better quality lenses, combined with robust and efficient image processing routines and software packages, are bringing digital photography to a mature stage, where the field techniques and image processing steps are no longer significant obstacles limiting its application (Macfarlane 2011).

Acknowledgements

This research was supported by the RI.SELV.ITALIA Research Program 3.1 - "Silviculture, productivity and conservation of forest ecosystems" research project and by the Research Program D.M. 19477/7301/08 - "Maintenance of collections, databases, and other activities of public interest" funded by the Italian Ministry of Agriculture and Forest Policies.

References

Anderson MC (1964). Studies of the woodland light climate I. The photographic computation of light condition. Journal of Ecology 52: 27-41. - doi: 10.2307/2257780
Anderson MC (1971). Radiation and crop structure. In: "Plant Photosynthetic Production. Manual of Methods" (Sestak Z, Catsky J, Jarvis PG eds). Junk, The Hague, The Netherlands, pp. 77-90.
Anderson MC (1981). The geometry of leaf distribution in some south-eastern Australian forests. Agricultural Meteorology 25: 195-205. - doi: 10.1016/0002-1571(81)90072-8
Bonhomme R, Chartier P (1972). The interpretation and automatical measurement of hemispherical photographs to obtain sunlit foliage area and gap frequency. Israel Journal of Agricultural Research 22: 53-61.
Bonhomme R, Varlet-Grancher C, Chartier M (1974). The use of hemispherical photographs for determining the leaf area index of young crops. Photosynthetica 8: 299-301.
Breda NJ (2003). Ground-based measurements of leaf area index: a review of methods, instruments and current controversies. Journal of Experimental Botany 54 (392): 2403-2417. - doi: 10.1093/jxb/erg263
Cescatti A (2007). Indirect estimates of canopy gap fraction based on the linear conversion of hemispherical photographs.
Agricultural and Forest Meteorology 143 (1-2): 1-12. - doi: 10.1016/j.agrformet.2006.04.009
Chan SS, McCreight RW, Walstad JD, Spies TA (1986). Evaluating forest vegetative cover with computerized analysis of fisheye photographs. Forest Science 32: 1085-1091.
Chason JW, Baldocchi DD, Huston MA (1991). A comparison of direct and indirect methods for estimating forest canopy leaf area. Agricultural and Forest Meteorology 57 (1-3): 107-128. - doi: 10.1016/0168-1923(91)90081-Z
Chen J, Black T (1991). Measuring leaf area index of plant canopies with branch architecture. Agricultural and Forest Meteorology 57 (1-3): 1-12. - doi: 10.1016/0168-1923(91)90074-Z
Chen J, Black T, Adams R (1991). Evaluation of hemispherical photography for determining plant area index and geometry of a forest stand. Agricultural and Forest Meteorology 56 (1-2): 129-143. - doi: 10.1016/0168-1923(91)90108-3
Chen J, Cihlar J (1995). Quantifying the effect of canopy architecture on optical measurements of leaf area index using two gap size analysis methods. IEEE Transactions on Geoscience and Remote Sensing 33 (3): 777-787. - doi: 10.1109/36.387593
Chen JM, Rich PM, Gower ST, Norman JM, Plummer S (1997). Leaf area index of boreal forests: theory, techniques, and measurements. Journal of Geophysical Research 102 (D24): 29429. - doi: 10.1029/97JD01107
Chianucci F, Cutini A (2013). Estimation of canopy properties in deciduous forests with digital hemispherical and cover photography. Agricultural and Forest Meteorology 168: 130-139. - doi: 10.1016/j.agrformet.2012.09.002
Cutini A (2003). Litterfall and leaf area index in the CONECOFOR permanent monitoring plots. Journal of Limnology 61: 62-68.
Cutini A, Matteucci G, Mugnozza GS (1998). Estimation of leaf area index with the Li-Cor LAI 2000 in deciduous forests. Forest Ecology and Management 105 (1-3): 55-65. - doi: 10.1016/S0378-1127(97)00269-7
Cutini A, Varallo A (2006).
Estimation of foliage characteristics of isolated trees with the Plant Canopy Analyzer LAI-2000. Current Trends in Ecology 1: 49-56.
Demarez V, Duthoit S, Baret F, Weiss M, Dedieu G (2008). Estimation of leaf area and clumping indexes of crops with hemispherical photographs. Agricultural and Forest Meteorology 148 (4): 644-655. - doi: 10.1016/j.agrformet.2007.11.015
Englund SR, O'Brien JJ, Clark DB (2000). Evaluation of digital and film hemispherical photography for predicting understorey light in a Bornean tropical rain forest. Agricultural and Forest Meteorology 97: 129-139.
Evans GD, Coombe DE (1959). Hemispherical and woodland canopy photography and the light climate. Journal of Ecology 47: 103-113. - doi: 10.2307/2257250
Frazer GW, Canham CD, Lertzman K (1999). Gap light analyzer (GLA), version 2.0. Imaging software to extract canopy structure and gap light transmission indices from true-color fisheye photographs: user's manual and program documentation. Simon Fraser University, Burnaby, BC, Canada, pp. 36.
Frazer GW, Fournier RA, Trofymow J, Hall RJ (2001). A comparison of digital and film fisheye photography for analysis of forest canopy structure and gap light transmission. Agricultural and Forest Meteorology 109 (4): 249-263. - doi: 10.1016/S0168-1923(01)00274-X
Garrigues S, Shabanov N, Swanson K, Morisette J, Baret F, Myneni R (2008). Intercomparison and sensitivity analysis of Leaf Area Index retrievals from LAI-2000, AccuPAR, and digital hemispherical photography over croplands.
Agricultural and Forest Meteorology 148 (8-9): 1193-1209. - doi: 10.1016/j.agrformet.2008.02.014
Hill R (1924). A lens for whole sky photographs. Quarterly Journal of the Royal Meteorological Society 50: 227-235. - doi: 10.1002/qj.49705021110
Inoue A, Yamamoto K, Mizoue N, Kawahara Y (2004). Effects of image quality, size and camera type on forest light environment estimates using digital hemispherical photography. Agricultural and Forest Meteorology 126 (1-2): 89-97. - doi: 10.1016/j.agrformet.2004.06.002
Ishida M (2004). Automatic thresholding for digital hemispherical photography. Canadian Journal of Forest Research 34: 2208-2216. - doi: 10.1139/x04-103
Jarčuška B (2008). Methodological overview to hemispherical photography, demonstrated on an example of the software GLA. Folia Oecologica 35: 66-69.
Jarčuška B, Kucbel S, Jaloviar P (2010). Comparison of output results from two programmes for hemispherical image analysis: Gap Light Analyser and WinScanopy. Journal of Forest Science 56 (4): 147-153.
Jonckheere I, Fleck S, Nackaerts K, Muys B, Coppin P, Weiss M, Baret F (2004).
Review of methods for in situ leaf area index determination: Part I. Theories, sensors and hemispherical photography. Agricultural and Forest Meteorology 121 (1-2): 19-35. - doi: 10.1016/j.agrformet.2003.08.027
Jonckheere I, Nackaerts K, Muys B, Coppin P (2005). Assessment of automatic gap fraction estimation of forests from digital hemispherical photography. Agricultural and Forest Meteorology 132 (1-2): 96-114. - doi: 10.1016/j.agrformet.2005.06.003
Lang ARG, Xiang Y (1986). Estimation of leaf area index from transmission of direct sunlight in discontinuous canopies. Agricultural and Forest Meteorology 37 (3): 229-243. - doi: 10.1016/0168-1923(86)90033-X
Law B, Van Tuyl S, Cescatti A, Baldocchi D (2001). Estimation of leaf area index in open-canopy ponderosa pine forests at different successional stages and management regimes in Oregon. Agricultural and Forest Meteorology 108 (1): 1-14. - doi: 10.1016/S0168-1923(01)00226-X
Leblanc SG (2008). DHP-TRACWin manual, version 1.03. Natural Resources Canada, Saint-Hubert, Quebec, Canada, pp. 29.
Leblanc SG, Chen JM, Fernandes R, Deering DW, Conley A (2005). Methodology comparison for canopy structure parameters extraction from digital hemispherical photography in boreal forests. Agricultural and Forest Meteorology 129 (3-4): 187-207. - doi: 10.1016/j.agrformet.2004.09.006
Leblanc SG, Fernandes R, Chen JM (2002). Recent advancements in optical field leaf area index, foliage heterogeneity, and foliage angular distribution measurements. In: Proceedings of "IGARSS 2002", Toronto (Canada), 24-28 June 2002.
Liu J, Pattey E (2010). Retrieval of leaf area index from top-of-canopy digital photography over agricultural crops. Agricultural and Forest Meteorology 150 (11): 1485-1490. - doi: 10.1016/j.agrformet.2010.08.002
Macfarlane C (2011). Classification method of mixed pixels does not affect canopy metrics from digital images of forest overstorey. Agricultural and Forest Meteorology 151 (7): 833-840.
- doi: 10.1016/j.agrformet.2011.01.019
Macfarlane C, Coote M, White DA, Adams MA (2000). Photographic exposure affects indirect estimation of leaf area in plantations of Eucalyptus globulus Labill. Agricultural and Forest Meteorology 100 (2-3): 155-168. - doi: 10.1016/S0168-1923(99)00139-2
Macfarlane C, Arndt SK, Livesley SJ, Edgar AC, White DA, Adams MA, Eamus D (2007a). Estimation of leaf area index in eucalypt forest with vertical foliage, using cover and fullframe fisheye photography. Forest Ecology and Management 242 (2-3): 756-763. - doi: 10.1016/j.foreco.2007.02.021
Macfarlane C, Grigg A, Evangelista C (2007b). Estimating forest leaf area using cover and fullframe fisheye photography: thinking inside the circle. Agricultural and Forest Meteorology 146 (1-2): 1-12. - doi: 10.1016/j.agrformet.2007.05.001
Macfarlane C, Hoffman M, Eamus D, Kerp N, Higginson S, Mcmurtrie R, Adams M (2007c). Estimation of leaf area index in eucalypt forest using digital photography. Agricultural and Forest Meteorology 143 (3-4): 176-188. - doi: 10.1016/j.agrformet.2006.10.013
Miller J (1967). A formula for average foliage density. Australian Journal of Botany 15: 141-144. - doi: 10.1071/BT9670141
Nilson T (1971). A theoretical analysis of the frequency of gaps in plant stands. Agricultural Meteorology 8: 25-38. - doi: 10.1016/0002-1571(71)90092-6
Nobis M, Hunziker U (2005). Automatic thresholding for hemispherical canopy-photographs based on edge detection. Agricultural and Forest Meteorology 128 (3-4): 243-250. - doi: 10.1016/j.agrformet.2004.10.002
Norman JM, Campbell GS (1989). Canopy structure. In: "Plant physiological ecology: field methods and instrumentation" (Pearcy RW, Ehleringer JR, Mooney HA, Rundel PW eds). Chapman and Hall, London, pp. 301-325.
Pekin B, Macfarlane C (2009). Measurement of crown cover and leaf area index using digital cover photography and its application to remote sensing. Remote Sensing 1 (4): 1298-1320.
- doi: 10.3390/rs1041298
Rich PM (1990). Characterizing plant canopies with hemispherical photographs. Remote Sensing Reviews 5 (1): 13-29. - doi: 10.1080/02757259009532119
Rich PM, Wood J, Vieglais DA, Burek K, Webb N (1999). HemiView manual, revision no. 2.1. Delta-T Devices Ltd.
Ross J (1981). The radiation regime and architecture of plant stands. Junk, London, pp. 391.
Ryu Y, Nilson T, Kobayashi H, Sonnentag O, Law BE, Baldocchi DD (2010a). On the correct estimation of effective leaf area index: does it reveal information on clumping effects? Agricultural and Forest Meteorology 150 (3): 463-472. - doi: 10.1016/j.agrformet.2010.01.009
Ryu Y, Sonnentag O, Nilson T, Vargas R, Kobayashi H, Wenk R, Baldocchi DD (2010b). How to quantify tree leaf area index in an open savanna ecosystem: a multi-instrument and multi-model approach. Agricultural and Forest Meteorology 150 (1): 63-76. - doi: 10.1016/j.agrformet.2009.08.007
Schleppi P, Conedera M, Sedivy I, Thimonier A (2007). Correcting non-linearity and slope effects in the estimation of the leaf area index of forests from hemispherical photographs. Agricultural and Forest Meteorology 144 (3-4): 236-242. - doi: 10.1016/j.agrformet.2007.02.004
Van Gardingen P, Jackson G, Hernandez-Daumas S, Russell G, Sharp L (1999). Leaf area index estimates obtained for clumped canopies using hemispherical photography. Agricultural and Forest Meteorology 94 (3-4): 243-257. - doi: 10.1016/S0168-1923(99)00018-0
Wagner S (1998). Calibration of grey values of hemispherical photographs for image analysis. Agricultural and Forest Meteorology 90 (1-2): 103-117. - doi: 10.1016/S0168-1923(97)00073-7
Wagner S, Hagemeier M (2006). Method of segmentation affects leaf inclination angle estimation in hemispherical photography. Agricultural and Forest Meteorology 139 (1-2): 12-24. - doi: 10.1016/j.agrformet.2006.05.008
Walter JMN, Torquebiau EF (2000).
The computation of forest leaf area index on slope using fisheye sensors. Comptes Rendus de l'Académie des Sciences, Série III Sciences de la Vie, vol. 323, pp. 801-813.
Wang YS, Miller DR (1987). Calibration of the hemispherical photographic technique to measure leaf area index distributions in hardwood forests. Forest Science 33: 110-126.
Wang Y, Woodcock CE, Buermann W, Stenberg P, Voipio P, Smolander H, Häme T, Tian Y, Hu J, Knyazikhin Y, Myneni RB (2004). Evaluation of the MODIS LAI algorithm at a coniferous forest site in Finland. Remote Sensing of Environment 91 (1): 114-127. - doi: 10.1016/j.rse.2004.02.007
Weiss M, Baret F, Smith GJ, Jonckheere I, Coppin P (2004). Review of methods for in situ leaf area index (LAI) determination. Part II. Estimation of LAI, errors and sampling. Agricultural and Forest Meteorology 121 (1-2): 37-53. - doi: 10.1016/j.agrformet.2003.08.001
Welles JM, Norman JM (1991). Instrument for indirect measurement of canopy architecture. Agronomy Journal 83 (5): 818. - doi: 10.2134/agronj1991.00021962008300050009x
Zhang Y, Chen JM, Miller JR (2005). Determining digital hemispherical photograph exposure for leaf area index estimation. Agricultural and Forest Meteorology 133 (1-4): 166-181.
- doi: 10.1016/j.agrformet.2005.09.009
SENSORY SYSTEMS DISORDERS - Cost Studies PSS5 BEVACIZUMAB VERSUS RANIBIZUMAB FOR AGE-RELATED MACULAR DEGENERATION (AMD): A BUDGET IMPACT ANALYSIS Zimmermann I, Schneiders RE, Mosca M, Alexandre RF, do Nascimento Jr JM, Gadelha CA Ministry of Health, Brasília, DF, Brazil OBJECTIVES: The use of intravitreal injection of vascular endothelial growth factor inhibitors is an effective treatment for AMD, and trials have shown similar clinical effects of bevacizumab and ranibizumab. The aim of this study was to estimate the budget impact for the Brazilian Ministry of Health (MoH) of recommending ranibizumab instead of bevacizumab for AMD. METHODS: We did a deterministic budget impact analysis, from the MoH perspective, comparing the use of ranibizumab and bevacizumab for wet AMD. The target population was estimated by extrapolating epidemiologic data to the Brazilian population. Data on dosage, administration and fractioning were extracted from the literature.
Prices were obtained from the Brazilian regulatory agency, applying potential discounting benefits. This analysis did not consider the cost of the fractioning process because it will be assumed by the states and not by the MoH. RESULTS: The considered price of the ranibizumab vial was US$ 962.86 (fractioning is not an option). In contrast, a 4 mL vial of bevacizumab would cost US$ 410.86 (US$ 5.14 per 0.05 mL dose, resulting in 80 doses/vial). Therefore, the expenses of one year on ranibizumab would be about US$ 11,554.37, versus about US$ 61.63 for bevacizumab (12 injections for both). Thus, the use of ranibizumab instead of bevacizumab for treating 467,600 people would be associated with a US$ 5,374,007,960.48 budget impact. The sensitivity analyses also demonstrated a budget impact of US$ 3,097,416,007.65 and US$ 5,287,555,101.51 (1 dose/vial and 20 doses/vial, respectively). CONCLUSIONS: Although not a label indication, bevacizumab has been widely adopted in clinical practice. As presented above, even with inefficient fractioning methods, the use of bevacizumab would bring substantial savings to MoH resources. Although preserving the sterility of the solution is a real-world concern, stability studies have shown that the solution's characteristics are maintained through adequate handling and storage. PSS6 COST-OF-ILLNESS OF CHRONIC LYMPHOEDEMA PATIENTS IN HAMBURG AND SUBURBAN REGION Purwins S1, Dietz D1, Blome C2, Heyer K1, Herberger K1, Augustin M1 1University Clinics of Hamburg, Hamburg, Germany, 2University Medical Center Hamburg, Hamburg, Germany OBJECTIVES: Chronic lymphedema is of particular interest from the socioeconomic point of view, since it is accompanied by high costs, disease burden and a permanent need for medical treatment. The economic and social impact can increase if complications such as erysipelas and ulcers develop. Therefore, the cost-of-illness of patients with lymphoedema or lipoedema should be known.
METHODS: Patients with chronic primary or secondary lymph- or lipoedema of the upper or lower limbs, with at most 6 months of disease duration, were enrolled in an observational, cross-sectional study in Hamburg and surroundings (population of approximately 4 million inhabitants, 90% of whom are insured in the statutory health insurance (SHI) and 10% in private insurance). Standardized clinical examinations and patient interviews were carried out. The oedemas were documented via digital photography, along with further available patient data. Resource utilizations were collected. From the societal perspective, direct medical, non-medical and indirect costs were computed. RESULTS: A total of 348 patients were enrolled and interviewed. 90.8% of them were female, and the mean age was 57.3 ± 14.5 years. Mean annual costs per lymphoedema were €8121. These costs consisted of 58% direct (€4708) and 42% indirect (€3413) costs. The SHI accounted for about €5552 in expenses and the patient for €494.20 in out-of-pocket costs. Subgroup analyses on (a) arm vs leg oedema and (b) primary vs secondary vs lipo-lymphoedema did not show significant differences in costs. The main cost drivers in this study were medical treatment and disability costs. CONCLUSIONS: The treatment of patients with chronic lymphoedema is associated with high direct and indirect costs. PSS7 C-REALITY (CANADIAN BURDEN OF DIABETIC MACULAR EDEMA OBSERVATIONAL STUDY): 6-MONTH FINDINGS Barbeau M1, Gonder J2, Walker V3, Zaour N1, Hartje J4, Li R1 1Novartis Pharmaceuticals Canada Inc., Dorval, QC, Canada, 2St. Joseph's Health Care, London, ON, Canada, 3OptumInsight, Burlington, ON, Canada, 4OptumInsight, Eden Prairie, MN, USA OBJECTIVES: To characterize the economic and societal burden of Diabetic Macular Edema (DME) in Canada. METHODS: Patients with clinically significant macular edema (CSME) were enrolled by ophthalmologists and retinal specialists across Canada.
Patients were followed over a 6-month period, combining prospective data collected during monthly telephone interviews and at sites at months 0, 3 and 6. Visual acuity (VA) was measured and DME-related health care resource information was collected. Patient health-related quality of life (HRQOL) was measured using the National Eye Institute Visual Functioning Questionnaire (VFQ-25) and the EuroQol Five Dimensions (EQ-5D). RESULTS: A total of 145 patients [mean age 63.7 years (range: 30-86 yrs); 52% male; 81% Type 2 diabetes; mean duration of diabetes 18 years (range: 1-62 yrs); 72% bilateral CSME] were enrolled from 16 sites across 6 provinces in Canada. At baseline, the mean VA was 20/60 (range: 20/20-20/800) across all eyes diagnosed with CSME (249 eyes). Sixty-three percent of patients had VA severity in the eye diagnosed with DME (worse seeing eye if both eyes diagnosed) of normal/mild vision loss (VA 20/10 to ≥ 20/80), 10% moderate vision loss (VA < 20/80 to ≥ 20/200), and 26% severe vision loss/nearly blind (VA < 20/200). At month 6, the mean VFQ-25 composite score was 79.6, the mean EQ-5D utility score was 0.78, and the EQ visual analogue scale (VAS) score was 71.0. The average 6-month DME-related cost per patient was $2,092 across all patients (95% confidence interval: $1,694 to $2,490). The cost was $1,776 for patients with normal/mild vision loss, $1,845 for patients with moderate vision loss, and $3,007 for patients with severe vision loss/nearly blind. CONCLUSIONS: DME is associated with limitations in functional ability and quality of life. In addition, the DME-related cost is substantial to the Canadian health care system. PSS8 NON-INTERVENTIONAL STUDY ON THE BURDEN OF ILLNESS IN DIABETIC MACULAR EDEMA (DME) IN BELGIUM Nivelle E1, Caekelbergh K1, Moeremans K1, Gerlier L2, Drieskens S3, Van dijck P4 1IMS Health HEOR, Vilvoorde, Belgium, 2IMS Health, Vilvoorde, Belgium, 3Panacea Officinalis, Antwerp, Belgium, 4N.V.
Novartis Pharma S.A., Vilvoorde, Belgium OBJECTIVES: To study real-life patient characteristics, treatment patterns and costs associated with DME and visual acuity (VA) level. METHODS: The study aimed to recruit 100 patients distributed evenly over 4 categories defined by last measured VA. One-year retrospective data were collected from medical records. Annual direct costs were calculated from resource use in medical records and official unit costs (€ 2011). Self-reported economic burden was collected via the Short Form Health and Labour Questionnaire (SF-HLQ). Indirect costs (€ 2011) included personal expenses and caregiver burden (SF-HLQ). RESULTS: Thirteen Belgian ophthalmologists recruited 32, 12, 14 and 6 DME patients for VA categories ≥20/50, 20/63-20/160, 20/200-20/400 and <20/400 respectively. VA was stable during the study in 86% of patients. Recruitment for the lower VA categories was difficult due to long-term vision conservation with current treatments, lack of differentiation between the lowest categories in medical records, and discontinuation of ophthalmologist care in the lowest categories. 75% of patients had bilateral DME. 68% were treated for DME during the study, of which 60% in both eyes. 50% received photocoagulation, 33% intravitreal drugs. Less than 4% of patients had paid work; 17% received disability replacement income. Total direct medical costs in patients receiving active treatment ranged from €960 (lowest VA) to €3,058. 59% of direct costs were due to monitoring and vision support, 39% to DME treatment. Indirect cost trends were less intuitive due to small samples and large variations. Annual costs, grouped by the 2 highest and 2 lowest VA levels, were respectively €114 and €312 for visual aids, and €407 and €3,854 for home care. CONCLUSIONS: The majority of DME patients had bilateral disease. Except for the lowest VA, direct medical costs increased as VA decreased. Indirect costs were substantially higher at lower VA levels.
Low sample sizes in some categories did not allow statistical analysis of cost differences. PSS9 COST OF BLINDNESS AND VISUAL IMPAIRMENT IN SLOVAKIA Psenkova M1, Mackovicova S2, Ondrusova M3, Szilagyiova P4 1Pharm-In, Bratislava, Slovak Republic, 2Pharm-In, spol. s r.o., Bratislava, Slovak Republic, 3Pharm-In, Ltd., Bratislava, Slovak Republic, 4Pfizer Luxembourg SARL, Bratislava, Slovak Republic OBJECTIVES: To measure the burden of the disease and provide a basis for health care policy decisions. METHODS: The analysis was performed based on several data sources. Data on the prevalence of bilateral blindness and visual impairment were obtained from the official Annual Report on the Ophthalmic Clinics Activities. Cost analysis was performed from the Health and Social Insurance perspective and reflects the real costs of health care payers in 2010. Information on health care and social expenditure was obtained from State Health and Social Insurance Funds. As detailed data on expenditures were not always available in the necessary structure, the missing data were collected in a retrospective patient survey. Both direct and indirect costs were evaluated and divided by cost type and level of visual impairment. For the estimation of indirect costs, the human capital method was used. The patient survey was conducted on a randomly collected, geographically homogeneous sample of 89 respondents from all over Slovakia. RESULTS: A total of 17,201 persons with bilateral blindness or visual impairment were identified in 2010. Total yearly expenditures were €63,677,300. Direct costs accounted for only 7% (€4,468,112) of total costs, and most of them were caused by hospitalisations (€4,001,539) and medical devices (€307,739). The indirect costs amounted to €59,209,188. The highest share was loss of productivity (69%), followed by disability pensions (17%) and compensation of medical devices (14%).
CONCLUSIONS: Evidence of cost-effectiveness must be demonstrated in order to obtain reimbursement in Slovakia. According to the Slovak guidelines, indirect costs are accepted only in exceptional cases. Indirect costs of blindness and visual impairment account for more than two thirds of total costs and therefore should be considered in health care policy evaluations. PSS10 ECONOMICAL BURDEN OF SEVERE VISUAL IMPAIRMENT AND BLINDNESS – A SYSTEMATIC REVIEW Köberlein J1, Beifus K1, Finger R2 1University of Wuppertal, Wuppertal, Germany, 2University of Bonn, Bonn, Germany OBJECTIVES: Visual impairment and blindness pose a significant burden in terms of costs on the affected individual as well as society. In addition to a significant loss of quality of life associated with these impairments, a loss of independence leading to increased dependence on caretakers and inability to engage in income-generating activities add to the overall societal cost. As there are currently next to no data capturing this impact available for Germany, we conducted a systematic review of the literature to estimate the costs of visual impairment and blindness for Germany and close this gap. METHODS: A systematic literature search of the main medical and economic information databases was conducted from January-April A569 VALUE IN HEALTH 15 (2012) A277–A575 EUROGRAPHICS 2009 / P. Alliez and M. Magnor Short Paper Depth of Field in Plenoptic Cameras T. Georgiev1 and A. Lumsdaine2 1Adobe Systems, Inc. San Jose, CA USA 2Indiana University, Bloomington, IN USA Abstract Certain new algorithms used by plenoptic cameras require focused microlens images. The range of applicability of these algorithms therefore depends on the depth of field of the relay system comprising the plenoptic camera.
We analyze the relationships and tradeoffs between camera parameters and depth of field and characterize conditions for optimal refocusing, stereo, and 3D imaging. Categories and Subject Descriptors (according to ACM CCS): Image Processing And Computer Vision [I.4.3]: Imaging Geometry— 1. Introduction Capture and display of 3D images is becoming increasingly popular with recent work on 3D displays, movies and video. It is likely that high-quality 3D photography and image processing will eventually replace current 2D photography and image processing in applications like Adobe Photoshop. Fully 3D or "integral" photography was first introduced by Lippmann [Lip08], and improved throughout the years by many researchers [Ive28, IMG00, LH96, Ng05]. The use of film as a medium for integral photography restricted its practicality. However, the approach found new life with digital photography. Initial work by Adelson [AW92], along with further improvements by Ng [Ng05] and Fife [FGW08], held out significant promise that the plenoptic camera could be the 3D camera of the future. One impediment to realizing this promise has been the limited resolution of plenoptic cameras. However, recent results [LG08] suggest that a "Plenoptic 2.0" camera can capture much higher resolution based on appropriate focusing of the microlenses. In this modified plenoptic camera, the microlenses are focused on the image created "in air" by the main camera lens. In this way each microlens works together with the main lens as a relay system, forming on the sensor a true image of part of the photographed object. In this short paper we analyze the parameters of the modified plenoptic camera for the purpose of achieving optimal focusing and depth of field for 3D imaging. We propose a setting where the two possible modes of relay imaging can be realized at the same time. Our experimental results demonstrate that such parameters work in practice to generate a large depth of field. 2.
The two modes of focusing of the Plenoptic camera We treat the plenoptic camera as a relay system: the main lens creates a main image in the air, and this main image is then remapped to the sensor by the microlens array. Depending on where the microlens array is located relative to the main image, we have two different modes of operation: Keplerian or Galilean. 2.1. Keplerian mode In this mode the main image is formed in front of the microlens array. If the distance from the microlenses to the main image is a, and the distance from the microlenses to the sensor is b, a perfectly focused system satisfies the lens equation 1/a + 1/b = 1/f, where f is the focal length of the microlens. See Figure 1. We define M = a/b (1) as the inverse magnification. Let's observe that M needs to satisfy M > 2, because each point needs to be imaged by at least two different microlenses in order to have stereo parallax information captured. © The Eurographics Association 2009. T. Georgiev & A. Lumsdaine / Depth of Field Figure 1: Microlens imaging in Keplerian mode. The main lens (above) forms a main image in front of the microlenses. Figure 2: Microlens imaging in a Galilean camera. Only rays through one microlens are shown. Substituting a from (1) into the lens equation produces b = ((M + 1)/M) f (2) We see that the distance b from the microlens to the sensor is required to be in the range f ≤ b < (3/2) f. (3) 2.2. Galilean mode When the main lens is focused to form an image behind the sensor, the image can be treated as virtual, and it can still be focused onto the sensor. In this case the lens equation becomes −1/a + 1/b = 1/f. Definition (1) and the requirement M > 2 remain the same. The imaging geometry is represented in Figure 2. In place of (2) and (3) we derive b = ((M − 1)/M) f (4) and (1/2) f < b ≤ f. (5) Both cases are represented in Figure 3. Horizontal lines
represent integer values of M starting from 2 and increasing to infinity when approaching the focal plane from both sides. These are the locations behind the lens where perfectly in-focus images of inverse magnification M are formed, according to formulas (2) and (4). In the traditional plenoptic camera the sensor is placed at the focal plane. Figure 3 shows where it would have to be placed in the two types of modified plenoptic cameras for different values of M, if the image is perfectly focused. Figure 4: The depth of field within which a camera is in focus (i.e. blur smaller than a pixel) is related to pixel size. 3. Depth of Field It is well known that a camera image is in focus within a range of distances from the lens, called the depth of field. At any distance beyond that range the image is defocused, and can be described by its blur radius. We consider a digital image to be in focus if this blur radius is smaller than the pixel size p. The depth of field x is related to the aperture diameter D, which is often expressed in terms of the F-number F = b/D, by the following relation: x = pF. (6) It can be derived by considering similar triangles in Figure 4. Using formulas (2) and (4) we can verify that in both cases, if the sensor is placed at distance f from the lens, the depth of field x = |b − f| would satisfy M = f/x (7) Example: As an example of how (6) and (7) can be used, consider the camera of Ng [Ng05], in which the sensor is placed at the focal plane of the microlenses. The parameters are p = 9µ, F = 4, f = 500µ. Using (6) we compute x = 36µ. In other words, the image is in focus within 36µ of the sensor, on both sides. Also, from (7) we compute the inverse magnification M = 14. Using M, together with formulas (1), (2) and (4), we compute two distances aK and aG: aK = (M + 1) f (8) aG = (M − 1) f.
(9) If the main image is at either of these locations (7 mm from the microlenses), the image on the sensor will be in focus. We see that within about 7 mm of the microlens array there is a zone of poor focusing. Anything beyond that zone, and all the way to infinity, is perfectly in focus. This explains the observation of [LG08] that the same camera can be used for Galilean and Keplerian imaging. 4. Effects of Wave Optics Due to diffraction effects, the image blur p depends on the F-number [Goo04]. For simplicity we consider 1D cameras (equivalently, square apertures), in which case the blur and F-number are related according to p = λF (10) This is well known in photography. Using (6) we have x = λF² (11) Substituting F from (10) in (6) gives us x = p²/λ, and using (7) we derive a new formula for the lowest M at which microlenses are still in focus: M = λf/p². (12) Example: The camera described by Ng [Ng05] can be improved by adding apertures to the microlenses so that the depth of field is increased. What are the optimal magnification and aperture diameter? Assume λ = 0.5µ. From (12) we get M = 3 (instead of 14), and from (10) we get F = 18 (instead of 4). This is a significant improvement because now everything not within x = 2mm from the microlenses is in focus. The camera works at the same time in Keplerian and Galilean mode! If the goal was "refocusability" of the lightfield images, now everything is fully refocusable, except for a small 2mm region around the microlenses. Our camera prototype, which works at F = 10, is described in Section 5. Our last theoretical result is probably the most interesting one. We find the following relationship between the maximum possible number of pixels N in a microimage and the size of a pixel, p, assuming in-focus imaging in Galilean and Keplerian mode at the same time: N = p/λ. (13) To derive it, assume that the size of the microimage is half the focal length.
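As a quick numeric check, relations (2), (4), (6), (7) and (12) can be evaluated for the examples above. The Python sketch below uses our own function names and assumes lengths in microns; it is illustrative, not code from the paper.

```python
# Numeric sketch of the focusing and diffraction relations.
# All lengths in microns; function names are ours, not the paper's.

def sensor_distance(M, f, keplerian=True):
    """Microlens-to-sensor distance b: eq. (2) (Keplerian) or eq. (4) (Galilean)."""
    return (M + 1) / M * f if keplerian else (M - 1) / M * f

def depth_of_field(p, F):
    """Eq. (6): blur stays below one pixel within x = p*F of the sensor."""
    return p * F

def lowest_in_focus_M(f, p, lam=0.5):
    """Eq. (12): lowest inverse magnification still in focus, M = lam*f/p**2."""
    return lam * f / p ** 2

# Ng [Ng05] camera: p = 9, F = 4, f = 500
x = depth_of_field(9.0, 4.0)        # 36: in focus within 36 microns of the sensor
M = 500.0 / x                       # eq. (7): M = f/x, about 13.9
a_K = (M + 1) * 500.0               # eq. (8): main image ~7.4 mm in front
a_G = (M - 1) * 500.0               # eq. (9): virtual image ~6.4 mm behind

# Diffraction-limited redesign (lam = 0.5): M drops to ~3 and F = p/lam = 18,
# so the poorly focused zone shrinks to about (3 + 1) * 500 microns = 2 mm.
print(round(lowest_in_focus_M(500.0, 9.0), 2), 9.0 / 0.5)   # 3.09 18.0
```

Equation (13) follows the same pattern: N = p/λ pixels per microimage at M = 2, e.g. N = 20 for p = 10µ.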
This corresponds to a camera lens aperture of F = 2, which is realistic for most cameras. Then the number of pixels is N = f/(2p). Substituting in (12), M = 2Nλ/p. Since the minimal (and the best!) value of M is 2, we obtain (13). We can define multiplicity M as the number of times a world point is seen in different microimages. If our goal is to achieve lowest multiplicity and many pixels in individual microimages, we need big pixels. Example: The sensor of Keith Fife [FGW08] uses very small pixels, under 1µ. According to formula (13), if we want low multiplicity, the microimage would be only 2 pixels! We argue that this is not optimal for a combined Galilean and Keplerian camera that is everywhere in focus. Contrary to common intuition, small pixels are not the solution that would produce large depth of field! This might be a nontrivial result. The simple explanation would be that small pixels require a big aperture in order to minimize diffraction, and a big aperture causes shallow depth of field. Looking at formula (13), the optimal way to achieve microimages of many pixels is to make those pixels big compared to the wavelength. For example, if the pixel size is p = 10µ and λ = 0.5µ, then N = 20. Formula (12) gives us the focal length for such a camera, f = 400µ (at M = 2). The apertures on the microlenses must be D = 20µ. 5. Experimental Results We have implemented a camera with the main goal of achieving large depth of field. For practical reasons, besides depth of field we needed to have reasonable sensitivity to light (speed). As a good tradeoff we chose for our microlens apertures an about 2 times lower ("faster") F-number of F = 10 instead of the theoretical F = 18. Two stereo views of the captured scene have been generated with our system. The virtual camera can be synthetically focused on any object. The stereo views above have been generated from the main image inside our camera, observed through the microlens array. In this particular case, the "Optical System" is mapped behind the microlenses as a virtual image and observed in Galilean mode. In Figure 6 we can observe the sharp imaging of our system due to the large depth of field. The main lens of the camera was focused exactly on the text at the top of the "EG" book. Consequently, the main image of this area falls in the region of bad focusing, within
In this particular case, the “Optical System” is mapped behind the microlenses as a virtual image and ob- served in a Galilean mode. In Figure 6 we can observe the sharp imaging of our system due to the large depth of field. The main lens of the camera was focused exactly on the text at the top of the “EG” book. Consequently, the main im- age of this area falls in the region of bad focusing, within c© The Eurographics Association 2009. T. Georgiev & A. Lumsdaine / Depth of Field Figure 5: Crossed-eyes stereo rendered from our lightfield. The synthetic camera is focused on the Fluid Dynamics book. Figure 6: Galilean imaging. We see part of the text “Opti- cal” repeated and not inverted in each microimage. Figure 7: Microimages at the top of the “EG” book. 2mm from the microlenses. Some microimages from this re- gion are shown in Figure 7. In the main image inside our camera, the Fluid Dynam- ics book is mapped in front of the microlenses, in Keplerian mode (see Figure 8). Note the sharp imaging of our system due to the large depth of field. References [AW92] ADELSON T., WANG J.: Single lens stereo with a plenoptic camera. IEEE Transactions on Pattern Analysis and Figure 8: Keplerian imaging. We see part of the text “Third Edition” inverted in each microimage. Machine Intelligence (1992), 99–106. [FGW08] FIFE K., GAMAL A. E., WONG H.-S. P.: A 3mpixel multi-aperture image sensor with 0.7um pixels in 0.11um cmos. In IEEE ISSCC Digest of Technical Papers (February 2008), pp. 48–49. [Goo04] GOODMAN J. W.: Introduction to Fourier Optics, 3rd ed. Roberts and Company, 2004. [IMG00] ISAKSEN A., MCMILLAN L., GORTLER S. J.: Dynam- ically reparameterized light fields. ACM Trans. Graph. (2000), 297–306. [Ive28] IVES H. E.: A camera for making parallax panorama- grams. Journal of the Optical Society of America 17, 4 (Dec. 1928), 435–439. [LG08] LUMSDAINE A., GEORGIEV T.: Full Resolution Light- field Rendering. Tech. rep., Adobe Systems, January 2008. 
[LH96] LEVOY M., HANRAHAN P.: Light field rendering. Proceedings of the 23rd annual conference on Computer Graphics and Interactive Techniques (Jan 1996). [Lip08] LIPPMANN G.: Epreuves reversibles. Photographies integrales. Academie des sciences (March 1908), 446–451. [Ng05] NG R.: Fourier slice photography. Proceedings of ACM SIGGRAPH 2005 (Jan 2005). A NOVEL SUB-PIXEL MATCHING ALGORITHM BASED ON PHASE CORRELATION USING PEAK CALCULATION Junfeng Xie a, Fan Mo a b, Chao Yang a, Pin Li a d, Shiqiang Tian a c a Satellite Surveying and Mapping Application Center of China, NO.1 Baisheng Village, Beijing - xiejf@sasmac.cn, yangc@sasmac.cn b Information Engineering University, No.62 Kexue Road, Zhengzhou - surveymofan@163.com c Chang'an University, Middle-section of Nan'er Huan Road, Xi'an - 835301221@qq.com d Liaoning Project Technology University, People Street, Fuxin - 1076760488@qq.com
Lastly, three experiments are taken to verify the accuracy and efficiency of the presented method. Excellent results show that the presented method is better than traditional phase correlation matching methods based on surface fitting in these aspects of accuracy and efficiency, and the accuracy of the proposed phase correlation matching algorithm can reach 0.1 pixel with a higher calculation efficiency. 1. INTRODUCTION Image matching, in the field of digital photography, is one of the important research topics (ARMIN GRUEN, 2012), and its accuracy directly restricts the development of photogrammetry to full digital photogrammetry, also influences the geometry accuracy of the subsequent geometric processing. Phase correlation matching converts stereo images to frequency domain through Fourier transform, and acquires tie points through the processing of frequency domain information (Kuglin, C.D, 1975). Compared with the traditional cross- correlation and other high-precision image matching method, phase correlation matching has better accuracy and reliability (T. Heid, 2012). Except being applied in digital photography, as the advantage of phase correlation matching, and it has been applied to other areas such as medical imaging (W. S. Hoge), computer vision (K. Ito, 2004) and environmental change monitoring (S. Leprince, 2007), etc. Classic phase correlation matching can attain pixel precision, at present, and we can improve the pixel precision to sub-pixel precision through three kinds of optimization strategy, such as the fitting interpolation method (Kenji TAKITA, 2003), the singular value decomposition method (Xiaohua Tong, 2015) and the local upward sampling method. 
However, the fitting function attained by least-squares estimation is easily affected by side-lobe energy, which brings a large amount of calculation; singular value decomposition of the cross-power spectrum causes some phase-unwrapping ambiguity, making it unable to attain the exact offsets because of cumulative systematic error; and the local upward sampling method is limited by the sampling ratio. This paper presents a high-precision sub-pixel matching method, based on the symmetrical distribution of energy about the peak and computed through peak calculation, to enhance matching accuracy. The method builds on and improves traditional phase correlation matching, with the peak location obtained by calculation from the inherent geometry. Lastly, mismatching points are rejected by a window constraint. The underlying theory is simple, but experiments on simulated data confirm that the algorithm attains high matching precision with little calculation. 2. METHOD 2.1 Classic Phase Correlation Matching Image matching based on the phase correlation method employs the Fourier transform to convert the images to be matched to the frequency domain for cross-correlation. It uses only the phase information of the power spectrum of the image blocks in the frequency domain, and reduces the effect of image content such as pixel values, so it has good reliability. The principle of the phase correlation algorithm is based on the shift property of the Fourier transform: two image blocks that differ only by a translation exhibit a linear phase difference in the frequency domain. When offsets Δx and Δy exist between image block g and image block f: g(x, y) = f(x + Δx, y + Δy) (1) Transforming both sides of formula (1) by the Fourier transform and applying the shift property, the result can be expressed as: G(u, v) = F(u, v) e^(i(uΔx + vΔy)) (2) where G and F are the Fourier transform matrices of images g and f respectively. The normalized cross-power spectrum can be acquired from formula (2): Q(u, v) = F(u, v)G*(u, v) / |F(u, v)G*(u, v)| = e^(−i(uΔx + vΔy)) (3) The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Volume XLI-B1, 2016, XXIII ISPRS Congress, 12–19 July 2016, Prague, Czech Republic. This contribution has been peer-reviewed. doi:10.5194/isprsarchives-XLI-B1-253-2016
The cross-power spectral function can be acquired by formula (2).             , , , , , i u x v y F u v G u v Q u v e F u v G u v         (3) The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Volume XLI-B1, 2016 XXIII ISPRS Congress, 12–19 July 2016, Prague, Czech Republic This contribution has been peer-reviewed. doi:10.5194/isprsarchives-XLI-B1-253-2016 253 mailto:835301221@qq.com Where, “ * “ is matrix dot product and G is conjugate function of G . Inverse Fourier transform can be used in cross-power spectrum above and Dirac function  ,x y   can be obtained in place  ,x y  :        , , i u x v yx y IFT Q u v IFT e        (4) Where, IFT is the function of inverse fast Fourier transform. If two blocks of image show the same area, peek value of the pulse function can be calculated in the place  ,x y  , and other value, around the peak value, will be far less than the peek value and close to zero. 2.2 Principle of Peak Calculation Classical phase correlation matching only can attain pixel precision according to acquire the path and row of the maximum value in matrix of impulse function. This paper presents a sub-pixel phase correlation matching method, which calculates power peak value base on the symmetrical distribution around energy peak. In the algorithm, the order of value around the peak determines formula, so there are two different conditions to calculate the peak point. When the peak value is greater than the back value, peak calculation algorithm sketch is shown in Figure 1:   1 x 2x 3x X Y 3 y 2 y 1 y x P 1 l 2 l A B C Figure 1. Peak calculation sketch Where,  2 2,B x y is peak point in matrix of pulse function, and between its surrounding points  1 1,A x y and  3 3,C x y with 1 3 y y . 
Draw a straight line l1 from C to B and a vertical line through B; the angle between the two lines is θ. According to the principle that the energy in the matrix of the pulse function is symmetrically distributed, there must be a line l2 symmetrical to l1 about the vertical line through the true peak point P. Therefore, draw a straight line l2 from A at the angle 90° − θ and let l2 and l1 meet at point P; the difference x − x2 between the abscissa x of P and the abscissa x2 of B is the sub-pixel offset. The two geometric relationships visible in the figure, written as equal ratios of the coordinate differences of A, B, C and P, constitute formula (5). Formula (5) can be deduced and simplified as:

2(y3 − y2)x² + (10y2 − 9y1 − y3)x + (3y1 + 12y2 − 9y3) = 0   (6)

This has the general form:

ax² + bx + c = 0   (7)

with a = 2(y3 − y2), b = 10y2 − 9y1 − y3 and c = 3y1 + 12y2 − 9y3. Its solution is:

x = (−b ± √(b² − 4ac)) / (2a)   (8)

The root lying in the interval x1 ≤ x ≤ x2 is the one required. Similarly, when the points A(x1, y1) and C(x3, y3) satisfy y1 < y3, an analogous quadratic in y1, y2 and y3 is obtained as formula (9); its solution x follows from formulas (7) and (8), and the root lying in the interval x2 ≤ x ≤ x3 is the one required. In addition, when the points A(x1, y1) and C(x3, y3) satisfy y1 = y3, the images to be matched are regarded as having no offset, and Δx = 0.

2.3 Matching Window Constraint

The window used to constrain the result is designed as 3 × 1 or 1 × 3 (set according to row or column). After the sub-pixel matching result has been obtained, the window constraint can be used to control the matching value. The sketch of the window constraint is shown in Figure 2.
Figure 2. Window constraint sketch (positions x1, x2, x3).

When the value at x3 is far less than the value at x1, and the value at x1 is close to the peak value at x2, as shown in Figure 2, it follows from the geometry that the distance by which x deviates from x2 must be less than 0.5. This constraint can therefore be used to control the fractional part of the sub-pixel value, and it serves as the strategy for removing mismatched points in sub-pixel matching.

The sub-pixel matching algorithm based on phase correlation is implemented by peak value calculation; its workflow is shown in Figure 3 (the process of phase correlation matching using peak calculation). Stereo images L and R provide the image data blocks f and g, and F and G are obtained by the two-dimensional fast Fourier transform (with a Hamming window applied). The cross-power spectrum Q is obtained through formula (3), and δ is obtained by the inverse fast Fourier transform (FFT) of Q. The integer part of the offset is then obtained as the location of the maximum value in the matrix of the impulse function, the peak value calculation yields the fractional part, and erroneous values are removed by the window constraint, giving the final sub-pixel matching result.

3. THE EXPERIMENT AND ANALYSIS

Three experiments are designed to verify the effectiveness of the presented algorithm.
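The workflow up to the integer-pixel peak (formulas (1)–(4) and the maximum of the impulse-function matrix) can be sketched in NumPy. This is a minimal sketch: the 64 × 64 block size, the synthetic shift of (3, 5) and the random test pattern are illustrative and not taken from the paper.

```python
import numpy as np

def phase_correlation(f, g, eps=1e-12):
    """Integer-pixel phase correlation of two equal-sized image blocks.

    Returns the impulse-function matrix (inverse FFT of the normalised
    cross-power spectrum, formulas (3)-(4)) and the location of its peak.
    """
    F = np.fft.fft2(f)
    G = np.fft.fft2(g)
    cross = F * np.conj(G)                      # F(u,v) * conj(G(u,v))
    Q = cross / np.maximum(np.abs(cross), eps)  # normalised cross-power spectrum
    delta = np.real(np.fft.ifft2(Q))            # impulse-function matrix
    peak = np.unravel_index(np.argmax(delta), delta.shape)
    return delta, peak

# Synthetic check: shift a random 64 x 64 block by (3, 5) with wrap-around.
# Under this sign convention the impulse appears at the negative shift
# modulo N, so the offset is recovered as (N - peak) mod N.
rng = np.random.default_rng(0)
f = rng.random((64, 64))
g = np.roll(f, shift=(3, 5), axis=(0, 1))
_, peak = phase_correlation(f, g)
offset = tuple((f.shape[i] - peak[i]) % f.shape[i] for i in range(2))
print(offset)  # (3, 5)
```

With a window function (e.g. a Hamming window) applied to the blocks first, as the paper describes, the impulse becomes slightly broadened but more robust against boundary effects.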
The presented algorithm is first compared with the curved-surface-fitting phase correlation matching algorithm on simulated data, in terms of accuracy and speed, to verify its effectiveness. The simulated data are obtained by down-sampling, so the absolute matching precision of the presented algorithm can be measured. Lastly, multi-spectral images of ZY-3, the first civilian stereo mapping satellite of China, are tested to verify the effectiveness of the method for the detection of satellite attitude jitter.

3.1 Experiment I

High-resolution ZY-3 remote sensing images are selected as raw images; the spatial resolution of the panchromatic image is 2.1 metres. The test images are simulated by down-sampling the raw images at several rates. A point (X, Y) in the raw image is taken as the starting point of the simulated image, and image "A" with 1000m × 1000m pixels is cropped. Similarly, image "B" with 1000m × 1000m pixels is cropped from the point (X + 1, Y + 1) of the same raw image. Images A and B are separately down-sampled to 1000 × 1000 pixels, giving images a and b. Theoretically, the offset between image a and image b is (1/m, 1/m), where m is the sampling rate. Four simulated image pairs with different ground features are chosen, for m = 3, 5, 10 and 20.

Figure 3. Simulated images (image a and image b for m = 3, 5, 10, 20).

The images of Figure 3 are processed both with sub-pixel phase correlation matching based on curved-surface fitting and with sub-pixel phase correlation matching based on peak calculation; the matching results are shown in Tables 1 and 2. According to these results, the stability and accuracy of the curved-surface-fitting phase correlation matching algorithm are worse than those of the phase correlation matching algorithm using peak calculation; the accuracy of the presented algorithm reaches 0.1 pixel and is mostly better than 0.05 pixel.
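The construction of the simulated pairs can be sketched as follows. Block averaging is assumed as the down-sampling kernel, since the paper does not state which resampling method it uses, and the random "raw image" merely stands in for the ZY-3 data.

```python
import numpy as np

def downsample(img, m):
    """Down-sample a 2-D array by an integer rate m using block averaging
    (assumed kernel; the paper does not specify its resampling method)."""
    h, w = img.shape
    img = img[:h - h % m, :w - w % m]       # trim to a multiple of m
    hh, ww = img.shape
    return img.reshape(hh // m, m, ww // m, m).mean(axis=(1, 3))

def simulated_pair(raw, x, y, n, m):
    """Crop two (n*m x n*m) blocks whose origins differ by one raw pixel
    and down-sample both by m; the true offset between the resulting
    n x n images is then (1/m, 1/m) pixels."""
    a = downsample(raw[x:x + n * m, y:y + n * m], m)
    b = downsample(raw[x + 1:x + 1 + n * m, y + 1:y + 1 + n * m], m)
    return a, b

rng = np.random.default_rng(1)
raw = rng.random((220, 220))          # stand-in for a raw ZY-3 image
a, b = simulated_pair(raw, 10, 10, 20, 5)
print(a.shape, b.shape)  # (20, 20) (20, 20)
```

Because the one-pixel raw offset becomes 1/m after down-sampling, the known ground truth lets the absolute sub-pixel matching error be measured directly, as in Tables 1 and 2.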
Table 1. Results of the curved-surface-fitting phase correlation matching algorithm

      Matching error (%)
m     (0, 0.05) pixel   (0.05, 0.1) pixel   (0.1, 1) pixel
3     43.2              43.5                13.3
5     20.7              56.6                22.7
10    59.2              36.4                 4.4
20    79.7              19.6                 0.7

Table 2. Results of the phase correlation matching algorithm adapted by peak calculation

      Matching error (%)
m     (0, 0.05) pixel   (0.05, 0.1) pixel   (0.1, 1) pixel
3     79.2              16.3                 4.5
5     66.9              23.6                 9.5
10    72.8              21.8                 5.4
20    75.6              20.9                 3.5

3.2 Experiment II

To verify the speed advantage of the presented algorithm, the same matching window (32 × 32) and calculation window (3 × 3) are adopted for both methods. The speed difference of the two algorithms lies mainly in the calculation of the peak value: curved-surface fitting requires a least-squares fit, so its calculation is very complicated. Its matrix calculations involve a large number of single-precision floating-point operations, with more than six hundred multiplications and more than five hundred additions. The algorithm presented in this paper, in contrast, needs only simple calculations: fewer than thirty multiplications and fewer than twenty-five additions. In theory, the presented algorithm therefore requires a far smaller amount of calculation than the curved-surface-fitting phase correlation matching algorithm.
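The small operation count is easy to see: once a, b and c have been formed from the three samples y1, y2, y3, formulas (7)–(8) reduce to solving one quadratic and keeping the root inside the admissible interval. A minimal sketch (the coefficients passed in stand for those of formula (6) or (9); the numeric example is illustrative):

```python
import math

def root_in_interval(a, b, c, lo, hi):
    """Solve a*x**2 + b*x + c = 0 (formulas (7)-(8)) and return the real
    root lying in [lo, hi], or None if no root falls in that interval."""
    if abs(a) < 1e-15:          # degenerate case: the equation is linear
        if abs(b) < 1e-15:
            return None
        x = -c / b
        return x if lo <= x <= hi else None
    disc = b * b - 4.0 * a * c
    if disc < 0:                # no real root
        return None
    sq = math.sqrt(disc)
    for x in ((-b + sq) / (2 * a), (-b - sq) / (2 * a)):
        if lo <= x <= hi:
            return x
    return None

# x**2 - 3x + 2 = 0 has roots 1 and 2; only the root in [1.5, 2.5] is kept.
print(root_in_interval(1.0, -3.0, 2.0, 1.5, 2.5))  # 2.0
```

One square root plus a handful of multiplications and additions per point is consistent with the operation counts claimed above.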
The experiment applies image data blocks from 100 × 100 pixels up to 900 × 900 pixels, nine blocks in total (increasing by 100 pixels each time), and contrasts the speed of the presented algorithm with that of curved-surface-fitting phase correlation. The computing environment is an Intel second-generation Core i7-2820QM processor at 2.3 GHz, running single-threaded. The algorithm was written on the Visual Studio 2010 platform using MFC. Because the Fourier transform of phase correlation matching needs a large amount of calculation, the computation time of the presented algorithm is subtracted from that of the other algorithm in order to reflect the relative calculation speed. The time difference is shown in Figure 4.

Figure 4. Time difference (horizontal axis: image size in pixels; vertical axis: consumed time in milliseconds).

As shown in Figure 4, the advantage in processing speed of the proposed algorithm becomes more obvious as the image block size increases. The two experiments above show that the algorithm presented in this paper achieves better matching accuracy, higher stability and a smaller amount of calculation time.

3.3 Experiment III

The multi-spectral sensor of ZY-3 consists of four parallel sensors for four bands (blue, green, red and near-infrared), and each sensor has three staggered CCD arrays, as shown in Figure 4.

Figure 4. Installation relationship of the CCD arrays (bands B1–B4, with stagger distances of 152 and 128 pixels).

The tiny physical distance between the parallel CCD arrays, which lie in the same scanning column in the direction of flight, will produce a parallax between the matched images if there is a certain attitude jitter in the satellite platform (Tong X, 2014). We apply raw multi-spectral images without any geometric rectification to detect the attitude jitter based on the physical arrangement described above.
Experiment III tests the practical feasibility of the presented algorithm. The test images, shown in Figure 5, are a blue band image and a green band image, respectively.

Figure 5. Experimental images: a) blue band image; b) green band image.

We apply the presented algorithm to the stereo images, and a large number of tie-points are obtained pixel by pixel. The dense pixel offsets between image a and image b in line and column form parallax maps in the cross-track and along-track directions, shown in Figure 6.

Figure 6. Parallax images: a) cross-track; b) along-track.

In these parallax images, a periodic change caused by satellite platform jitter is clearly visible. The regularity in the cross-track parallax is better than in the along-track parallax; the main reason may be that the along-track direction is affected by more factors. We process each parallax image by averaging the parallax of each line to obtain a two-dimensional curve; the two resulting parallax curves are shown in Figure 7.

Figure 7. Parallax curves: a) cross-track; b) along-track.

An attitude jitter of approximately 0.67 Hz (considering the imaging line time of 0.8 ms) is detected in the multi-spectral stereo images of ZY-3. Detecting the platform jitter requires a matching accuracy of 0.1 pixel, so the presented matching method is able to detect it. Experiment III thus verifies that the presented algorithm reaches the required matching accuracy for actual image products.
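The last step, from the dense parallax map to a jitter frequency, can be sketched as follows. Reading the dominant frequency from a simple periodogram is an assumed analysis step, not stated in the paper, and the synthetic 0.67 Hz parallax map below is purely illustrative.

```python
import numpy as np

def jitter_frequency(parallax, line_time):
    """Estimate the dominant jitter frequency from a parallax map.

    Each image line is first averaged to one parallax value (the curves of
    Figure 7); the dominant frequency is then read from the periodogram.
    line_time is the imaging line time in seconds.
    """
    curve = parallax.mean(axis=1)        # mean parallax of each line
    curve = curve - curve.mean()         # remove the constant offset
    spectrum = np.abs(np.fft.rfft(curve))
    freqs = np.fft.rfftfreq(curve.size, d=line_time)
    return freqs[np.argmax(spectrum[1:]) + 1]   # skip the DC bin

# Synthetic map: 0.2-pixel parallax oscillating at 0.67 Hz, 0.8 ms line time.
line_time = 0.8e-3
t = np.arange(125000) * line_time               # 100 s of imaging lines
curve = 0.2 * np.sin(2 * np.pi * 0.67 * t)
parallax = np.tile(curve[:, None], (1, 8))      # 8 columns, same jitter
print(round(jitter_frequency(parallax, line_time), 2))  # 0.67
```

Averaging across the columns first suppresses matching noise that is uncorrelated between lines, which is why the line-mean curve shows the periodicity more clearly than the raw map.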
4. CONCLUSION

This paper presents a novel sub-pixel phase correlation algorithm based on peak calculation in the matrix of the pulse function; it takes the geometric relationships into account to derive the corresponding mathematical formulas. The amount of calculation is smaller than in other methods, since no least-squares adjustment is needed, and the theory is simple but complete. The experimental results show that the accuracy of the algorithm presented in this paper can reach 0.1 pixel, meeting the needs of high-precision matching applications.

ACKNOWLEDGEMENTS

This work was supported by the special fund for the public welfare of the surveying and mapping industry (No. 201412001, No. 201512012), the natural science research fund (No. 41301525, No. 41571440), the scientific research plan for academic and technical youth leaders of the State Bureau of Surveying and Mapping (No. 201607), the youth science and technology project of surveying and mapping (No. 1461501900202), and the Major Projects of the High Resolution Earth Observation System.

REFERENCES

Gruen, A., 2012. Development and status of image matching in photogrammetry. The Photogrammetric Record, 27(137), pp. 36-57.

Fan, D., Shen, E., Li, L. et al., 2013. Small baseline stereo matching method based on phase correlation. Journal of Geomatics Science and Technology, 30(2), pp. 154-157.

Stone, H. S., Orchard, M. T., Chang, E.-C. et al., 2001. A fast direct Fourier-based algorithm for subpixel registration of images. IEEE Transactions on Geoscience and Remote Sensing, 39(10), pp. 2235-2243.

Ito, K., Nakajima, H., Kobayashi, K. et al., 2004. A fingerprint matching algorithm using phase-only correlation. IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences, E87-A(3), pp. 682-691.

Takita, K., Higuchi, T., 2003. High-accuracy subpixel image registration based on phase-only correlation. IEICE Transactions on Fundamentals, E86-A(8), pp. 1925-1934.

Kuglin, C. D., Hines, D. C., 1975. The phase correlation image alignment method. Proceedings of the IEEE International Conference
on Cybernetics and Society, pp. 163-165.

Leprince, S., Barbot, S., Ayoub, F. et al., 2007. Automatic and precise orthorectification, coregistration, and subpixel correlation of satellite images, application to ground deformation measurements. IEEE Transactions on Geoscience and Remote Sensing, 45(6), pp. 1529-1558.

Heid, T., Kääb, A., 2012. Evaluation of existing image matching methods for deriving glacier surface displacements globally from optical satellite imagery. Remote Sensing of Environment, 118, pp. 339-355.

Hoge, W. S., 2003. A subpixel identification extension to the phase correlation method [MRI application]. IEEE Transactions on Medical Imaging, 22(2), pp. 277-280.

Tong, X., Ye, Z., Xu, Y. et al., 2015. A novel subpixel phase correlation method using singular value decomposition and unified random sample consensus. IEEE Transactions on Geoscience and Remote Sensing, 53(8), pp. 4143-4156.

Tong, X., Ye, Z., Xu, Y. et al., 2014. Framework of jitter detection and compensation for high resolution satellites. Remote Sensing, 6(5), pp. 3944-3964.

Perspective

Systematics and the future of biology

Edward O. Wilson
Museum of Comparative Zoology, Harvard University, 26 Oxford Street, Cambridge, MA 02138-2902

Biology is a science of three dimensions. The first is the study of each species across all levels of biological organization, molecule to cell to organism to population to ecosystem. The second dimension is the diversity of all species in the biosphere. The third dimension is the history of each species in turn, comprising both its genetic evolution and the environmental change that drove the evolution. Biology, by growing in all three dimensions, is progressing toward unification and will continue to do so.
A large part of the future of biology depends on interdisciplinary studies that allow easy travel across the three dimensions. Molecular and cellular biology, the subdisciplines of maximum support and activity today, occupy the two lowest levels of biological organization. They are focused on the first dimension in a small set of model species, selected primarily for their ease of culturing and the special traits that make them amenable for different kinds of analysis. The triumph of molecular and cellular biology has been the documentation of one of the two overarching principles of biology: that all living phenomena are obedient to the laws of physics and chemistry. The other overarching principle of biology is that all living phenomena originated and evolved by natural selection. That, in turn, has been the triumph of organismic and evolutionary biology. Viewed in the framework of the history of biology, the subdisciplines of molecular and cellular biology are in the natural history period of their development. This perhaps surprising characterization can be clarified with the following metaphor. The cell is a system consisting of a very large number of interacting elements and processes. It can be compared to a conventional ecosystem, such as a lake or forest, in this sense: Researchers are discovering the anatomy and functions of the vast array of molecules, the equivalent of the species of plants and animals in ecosystems, that compose the cell. These scientists are the Humboldts, the Darwins, the Mayrs, and other explorer-naturalists in a new age, and of a new kind. Mercifully free of mosquito bites and blistered feet, they press forward into the unmapped regions at the lowest levels of biological organization. They are not in the business of creating fundamental principles, which they take mostly from physics and chemistry. Their spectacular success has come instead from technology invented and applied with creative genius.
They render visible, by crystallography, immunology, genic substitution, and other methods, the anatomy and functions of the ultramicroscopic inhabitants of the cell, which are otherwise entirely beyond the range of the unaided human senses. They aim and can be expected in time to join with researchers in other subdisciplines of biology to develop the fundamental principles of biological organization. There remains the rest of biology, the vast and mostly unexplored upper reaches of the first dimension (levels of organization), and the little known second dimension (diversity) and third dimension (evolution). These domains do not belong to the past of biology, nor are they antiquated and declining in any manner or to any degree whatsoever, as is sometimes misperceived. They are, in large part, the future of biology. Probably about 10% of species on the planet have been discovered, and of these only a tiny fraction — about 1% — have been addressed with more than a cursory anatomical description. Keep in mind that each species is unique in its genotype, proteome, behavior, history, and environmental relationship to other species. Until we learn more about the immense array of Earth's little known and entirely unknown species, how they came into being, and what they are doing in the biosphere, the development of the rest of biology, including molecular and cellular biology, will be vastly incomplete. A further consequence of the imbalance is that the relation of humanity to the rest of the biosphere will stay largely uncharted territory. And the means to save and manage the living environment for the long term will remain very much a guessing game. The proportionate shortfall of the disciplines can be expressed in practical terms as follows. A large part of the success of molecular and cellular biology is due to their relevance to medicine. In public perception and support, they are virtually married to medicine.
Hence, molecular and cellular biology are rich not so much because they have been successful; rather, they are successful because they have been rich. What needs to be appreciated for the future of organismic and evolutionary biology in practical terms is that where molecular and cellular biology are vital to personal health, organismic and evolutionary biology are vital to environmental health, and thence also to personal health. The exploration of life on this little known planet needs to be made the equivalent of the Human Genome Project. It should be an all-out effort that sets the global biodiversity census as a goal with a timetable and not just a result eventually to be reached. In fact, the technology now exists to speed exploratory and monographic systematics by at least an order of magnitude. This technology includes high-resolution digital photography with computer-aided automontaged depth of focus for even the smallest and hence most highly magnified specimens. As a practicing systematist, I am certain that the first order of business must be to thoroughly photograph representatives of all species for which either type specimens or indisputably authenticated substitutes exist. The images can then be published on the Internet for immediate access on command, available to anyone, at any time, anywhere in the world. Such virtual type collections can in most cases remove most of the time-consuming necessity of visiting museums or securing loans of often fragile specimens. Combined with online reproductions of the original published descriptions and earlier monographs, they will accelerate the identification of vast numbers of specimens now backed up in unprocessed collections around the world. They will also allow the rapid preparation of revisionary monographs, local biodiversity analyses, and the publication of made-to-order field guides. This paper results from the Arthur M.
Sackler Colloquium of the National Academy of Sciences, "Systematics and the Origin of Species: On Ernst Mayr's 100th Anniversary," held December 16-18, 2004, at the Arnold and Mabel Beckman Center of the National Academies of Science and Engineering in Irvine, CA. E-mail: ewilson@oeb.harvard.edu. © 2005 by The National Academy of Sciences of the USA. 6520-6521, PNAS, May 3, 2005, vol. 102, suppl. 1. www.pnas.org/cgi/doi/10.1073/pnas.0501936102

Systematics is now in the early stages of a technology-driven revolution. A conference of biodiversity and informatics leaders held at Harvard University in 2001 agreed that by using new methods already available and including genomics, systematists have the capacity to complete or nearly complete a global biodiversity survey within a single human generation—25 years—at about the cost of the Human Genome Project. To give a better feel for the magnitude of this biological moon shot, consider that whereas very roughly 10% of Earth's species (out of, say, 15-20 million extant) have been discovered and named in the past 250 years since Linnaeus inaugurated the hierarchical and binomial system of classification, now it seems possible that the remaining 90% can be so covered in 1/10th of that time. If such a goal seems out of reach, consider that perhaps 1 million species of all kinds, mostly eukaryotes, have type specimens in good enough condition for electronic republication. The New York Botanical Garden has already processed and put online the types of 90,000 plant species, and Harvard's Museum of Comparative Zoology is well on its way to processing about 28,000 insect species. Thus, even in its earliest stages, at only two institutions, the republication program has already reached the hypothetical 10% level. A new global systematics initiative is under way on three fronts.
The first is the all-species program, which aims to measure the full breadth of biodiversity in all of the three recognized domains, the Bacteria, Archaea, and Eucarya, by using new and future technology. The next is the Encyclopedia of Life, expanding the all-species program by providing an indefinitely expansible page for each species and containing information either directly available or by linkage to other databases. The final and rapidly growing body of knowledge is the Tree of Life, the reconstructed phylogeny of life forms in ever finer detail, with particular reliance on genomics. The upper levels of biological organization, from organism to ecosystem, the mapping and analysis of biodiversity, and the development of the Tree of Life all of the way from genes to species will eventually amount to most of biology. These proportionately still-neglected domains, therefore, offer intellectual stock of substantial growth potential to universities and other research-oriented organizations that invest in them now. It is still relatively easy to provide leadership at the cutting edge of biology extended beyond the molecular and cellular levels of a few species. The cost would be low, and the returns to scale incalculably great.

Willdenowia 41 – 2011

Sabrina Rilke & Ulrike Najmi: FloraGREIF – virtual guide and plant database as a practical approach to the flora of Mongolia

Abstract

Rilke S. & Najmi U.: FloraGREIF – virtual guide and plant database as a practical approach to the flora of Mongolia. – Willdenowia 41: 371-379. December 2011. – Online ISSN 1868-6397; © 2011 BGBM Berlin-Dahlem. Stable URL: http://dx.doi.org/10.3372/wi.41.41217

FloraGREIF is a web-based collaborative project on the flora of Mongolia.
It presents a plant database and a virtual herbarium as an introduction to the flora of Mongolia and is intended to be used as a digital information system providing taxonomic and biogeographical data. Moreover, it offers a virtual research environment which allows scientific online cooperation. The project builds on the ongoing long-term research cooperation between Germany and Mongolia. It brings together large herbarium collections and modern online communication facilities. The website is a dynamic system with two basic hierarchy levels. On the record level, as many records as possible are included for each taxon. A record is represented by location data, digital scans of herbarium specimens and images of living plants and their habitats. The taxon level presents information about a taxon, such as a short morphological description, taxonomic comments and hints for reliable identification. Plant data can be explored in a targeted way under various aspects, as well as by browsing through the material to obtain an overview. The information can be downloaded at any time and place.

Additional key words: virtual research environment, digital herbarium specimens, plant identification, plant photo database, biodiversity informatics, webGIS

1 Institute of Botany and Landscape Ecology, University of Greifswald, 17487 Greifswald, Germany; *e-mail: sabrina.rilke@gmx.de (author for correspondence). 2 Computer Centre, University of Greifswald, 17487 Greifswald, Germany; e-mail: ulrike.najmi@uni-greifswald.de

Introduction

Located in an extremely continental position on the edge of northern Central Asia, Mongolia is populated by circumboreal and Eurasian flora elements in its northern part and dominated by desert and semidesert vegetation elements in the south. Vegetation zones from taiga to desert can be found here. Although agricultural use is not widespread, increasing nomadic grazing is affecting the sensitive steppe and desert flora intensively. Nearly 30 % of the sparse population of only 2.9 million people live nomadically or seminomadically. Pasture livestock increased from 25.8 million head in 1990 to 33.6 million head in 1999 (NSOM 2003). This continues to cause serious changes in the vegetation. Recent projects (e.g. Schickhoff 2007) are studying the process of degradation in detail and are developing proposals to protect and preserve the natural resources. In the field of botany and plant sociology, Mongolian-German research cooperation dates back to the GDR (German Democratic Republic) era and is still very fruitful today. The joint research during expeditions and at special research stations has resulted in international conferences, many publications (Gubanov & Hilbig 1993; Dorofejuk & Gunin 2000), as well as various additions to the plant inventory, the description of new species and phytochemical analyses. Geobotanical and ecological investigations yielded detailed vegetation descriptions for different parts of the country and an extensive review of the Mongolian vegetational units and their site conditions, as summarised by Hilbig (1995). Support was also given
Schickhoff 2007) are studying the process of degrada- tion in detail and are developing proposals to protect and preserve the natural resources. in the field of botany and plant sociology, mongo- lian-German research cooperation dates back to the GDr (German Democratic republic) era and is still very fruit- ful today. The joint research during expeditions and at special research stations has resulted in international con- ferences, many publications (Gubanov & Hilbig 1993; Dorofejuk & Gunin 2000), as well as various additions to the plant inventory, the description of new species and phytochemical analyses. Geobotanical and ecological in- vestigations yielded detailed vegetation descriptions for different parts of the country and an extensive review of the mongolian vegetational units and their site conditions as summarised by Hilbig (1995). Support was also given 1 institute of botany and landscape ecology, University of Greifswald, 17487 Greifswald, Germany; *e-mail: sabrina.rilke@gmx. de (author for correspondence). 2 Computer Centre, University of Greifswald, 17487 Greifswald, Germany; e-mail: ulrike.najmi@uni-greifswald.de 372 rilke & najmi: FloraGreiF – virtual guide and plant database to the flora of mongolia for the protection of nature in mongolia, especially for planning and working out a concept for national parks, the establishment of protected areas and the work on red lists (knapp & Tschimed-Otschir 2001; Dulamsuren & al. 2005). Hilbig (2006) deals with the intensive floristic research of German botanists in mongolia in collabo- ration with their mongolian partners during the last 40 years. in 1982, the publication of the series “explorations into the biological resources of mongolia” (Stubbe & al. 2007) was begun and up to now ten issues have been released. They include comprehensive issues as well as those dedicated to a particular topic. 
FloraGREIF

Sizable collections of Mongolian plants have been brought together in the course of the Mongolian-German research cooperation. The herbarium of the University of Greifswald (GFW) holds some 5000 specimens, the herbarium of the University of Halle (HAL) about 10 500 specimens and the herbarium of the Plant Genetics Institute in Gatersleben (GAT) about 10 000 specimens, to name only the three largest. Collections of plant and habitat photographs, which are growing rapidly thanks to digital photography, as well as vegetation and occurrence data, are further existing resources complementing the herbarium specimens. Given, on the other hand, the lack of a modern Flora of Mongolia, the idea was developed to create a digital information system that integrates electronically the various data sources, from herbarium specimens and photographs to taxonomic and geobotanical information, and at the same time serves as a digital working platform. Under the name FloraGREIF, this perspective became a joint project of the Institute of Botany and Landscape Ecology, the Institute of Geography and Geology and the Computer Centre of the University of Greifswald, funded by the DFG (German Research Foundation) from 2007 to 2010. During these three years, a basic system was established, which has been improved continually since then. Maintenance and sustainability, involving server uptime, hardware updates and database backups, are secured by the Computer Centre of the University of Greifswald. As a result, FloraGREIF (2008+) contains a digital database with taxonomic, biogeographical and ecological information accessible via the Internet. It allows easy species identification by means of visual comparison of scans of reliably identified herbarium specimens, macro photographs with taxonomic characteristics relevant for species identification, and photographs of living plants in their natural environment. The digitised specimens may also serve as a source for further taxonomic revisions. By means of a webGIS application, the distribution data of the species permit spatial evaluation of distribution and habitat. Recently, a 'Herb-Scan' device was purchased by the University of Greifswald, and at some time in 2012 the entire Mongolian herbarium material at GFW will be available online as part of FloraGREIF. Hence FloraGREIF provides a comprehensive digital information system for taxonomic and biogeographic data and image resources and offers a web-based virtual research environment which allows scientific online cooperation across the globe. It uses rapidly evolving standard technologies available all over the world, such as
The digitised specimens may serve also as a source for further taxonomic revisions. by means of a webGiS application, distribution data of the species permit spatial evaluation of distribution and habi- tat. recently, a ‘Herb-Scan’ device has been purchased by the University of Greifswald and at some time in 2012 the entire mongolian herbarium material at GFW will be available online as part of FloraGreiF. Hence FloraGreiF provides a comprehensive digi- tal information system for taxonomic and biogeographic data and image resources and offers a webbased virtual research environment which allows scientific online co- operation across the globe. it uses rapidly evolving stan- dard technologies available all over the world, such as Fig. 1. Data access in FloraGreiF, greatly simplified. 373Willdenowia 41 – 2011 digital photography to document plants, GPS systems to record exact coordinates for localities and the internet to disseminate information, making our findings accessible to all and can therefore be easily implemented every- where. System and datastructure The system is completely based on open source com- ponents: mySQl (database), Zoomify express (soft- ware used to minimise loading time for large image files), HTml (display), PHP (script programming lan- guage), and apache-Webserver with linux (Fig. 1). The FloraGreiF content management system can be reused for other projects as well. We will share the source code with any institution interested in non-commercial use of the system. metadata are stored in the database, whereas image files are located in the file system. The database is queried for each search request; thus the user will always receive the latest updates. internet users may query the information system online free of charge. The results will be displayed in the web interface; authorised users may download data as excel or pdf files. editors themselves can add new data via the web in- terface instead of submitting it to an administrator. 
This functionality is already operational and will be tested by external users in the near future. At the record level, a data set consists of a herbarium specimen (with scans, photos, or both) or proof of the occurrence of a taxon by a photograph, together with a description of the location and the date of the find. A scan of a herbarium specimen as a TIFF file comprises c. 200 MB. Therefore, we use the free software Zoomify Express to prepare the files for online presentation. This special technique splits the large file systematically into tiles; processing can take place as a batch process. This digital compression technology makes quick access to the scans of the herbarium specimens possible. Photo files can also be uploaded through the web interface. A timestamp is added to achieve unique file names, and thumbnails are created automatically. Location data, in as much detail as available, are registered for each record. The given coordinates can be checked with the webGIS functionality. Location data will be used for the planned compilation of distribution maps. Record data sets can be edited in the public area or in an internal area. Records in the public area are immediately available online; records in the internal area are visible to authorised users only. That way, editors can import their field lists, but also treat and publish the collected records individually. Taxon data are edited in the public area only; therefore, the user will always receive the current state of information on the website. Record sets and species information are linked dynamically (Fig. 2). Each record is identified as a taxon (species, subspecies or genus); an uncertain identification can be indicated. The following details are registered for each taxon: name, author,

Fig. 2. Schematic diagram of data input in FloraGreiF.
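The tiling idea behind Zoomify Express (splitting one very large scan into small tiles that can be fetched individually) can be illustrated with a little tile-grid arithmetic. This is only a simplified sketch of the principle, not Zoomify's actual multi-resolution pyramid format:

```python
def tile_grid(width, height, tile=256):
    """Return (left, top, w, h) boxes covering a width x height image.

    Edge tiles are clipped, so every pixel belongs to exactly one box.
    """
    boxes = []
    for top in range(0, height, tile):
        for left in range(0, width, tile):
            boxes.append(
                (left, top, min(tile, width - left), min(tile, height - top))
            )
    return boxes
```

A viewer then only requests the handful of tiles visible at the current zoom level, instead of downloading the whole c. 200 MB scan.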
source, important synonyms, short description, taxonomic comments, habitat, status of endemism, Red List status, distribution and, if applicable, the link to the species in the online version of the Flora of China (Wu & Raven 1994+).

Rilke & Najmi: FloraGreiF – virtual guide and plant database to the flora of Mongolia

Editors can add new data manually, record by record or taxon by taxon, or import mass data in prepared lists as an Excel/CSV file. Values are tested for plausibility before being imported: plant name against the existing taxa list, locality against the existing locality list, data type by string or number, and field length. Each import list is given a unique identifier. That way, the editors can enter new data sets as well as update existing ones. The database is structured in a way that allows both practical treatment and straightforward modification (Fig. 3). While preparing FloraGreiF, we discussed data models implemented in related projects. The Berlin Model is a concept of a database-driven application that clearly separates nomenclature from taxonomy, as implemented in the project MoreTax (Berendsohn 2003). It is capable of representing even different taxonomic views and nomenclatural rules. In the context of the practical approach of our information system, this structure is too complex to meet the needs of our application. The focus of the hands-on project FloraGreiF is on the given comprehensive checklists enhanced by the latest floristic and taxonomic literature. The taxon datasets are cross-checked in most instances with established systems: IPNI (IPNI 2004+) and NCU (Greuter & al. 1993). Only NCU is checked automatically by running a script. The FloraGreiF project implements a simplified taxon structure: each data set represents a species or subspecies. Each species can be linked to additional information, e.g. a comparative table of the important characteristics for diverse and hard-to-identify genera.
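The plausibility checks applied to import lists (plant name against the existing taxa list, locality against the locality list, data type by string or number, and field length) might look roughly like this; the field names and the numeric check on the collection number are illustrative assumptions, not FloraGreiF's actual validation code:

```python
def validate_import_row(row, known_taxa, known_localities, max_len=255):
    """Return a list of problems for one row of an Excel/CSV import list.

    An empty list means the row passes all plausibility checks.
    """
    errors = []
    # Plant name must match the existing taxa list.
    if row.get("taxon") not in known_taxa:
        errors.append("unknown taxon: %r" % row.get("taxon"))
    # Locality must match the existing locality list.
    if row.get("locality") not in known_localities:
        errors.append("unknown locality: %r" % row.get("locality"))
    # Data type check: the collection number must be numeric.
    if not str(row.get("coll_number", "")).isdigit():
        errors.append("coll_number must be numeric")
    # Field length check for all string fields.
    for field, value in row.items():
        if isinstance(value, str) and len(value) > max_len:
            errors.append("field %s too long" % field)
    return errors
```

Running such checks before insertion keeps a bulk import from silently polluting the shared taxa and locality lists.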
Furthermore, editors can enter information on a genus or a family, such as descriptions and remarks on the treatment. For further reference, the editor's name is also registered. Technically, record and taxon are linked by the allocation table "flora_identifications". The flag 'current' indicates whether an entry is the current or a revised identification. Each record is stored as a data set in the table "flora_records" with the following information: identified as, collected by, determined/revised/confirmed by, date, locality, habitat, presentation remarks, flowering status. Details on the locality, such as country, province, district, etc., are stored separately in the table "flora_locality"; specifications on the herbaria are stored in the table "flora_index_herbariorum". Whether a record is public or internal is indicated by a true/false flag on each data set. Metadata on images, such as the name of the photographer and a description of the contents, are stored in the table "flora_photos". An image is allocated to a record with a locality. Furthermore, images can be allocated to localities to illustrate the habitat. Images can also be allocated to a taxon; currently, this is in use only for herbarium sheet scans.

Fig. 3. Data structure of FloraGreiF, greatly simplified.

User management is basically role management. Each user action, such as enter new record/taxon, edit one's own record, edit others' records, edit taxon, delete record and delete taxon, is defined as a capability. The system administrator is responsible for organising the capabilities into roles that are assigned to the users. The data output is organised as HTML in the web front end by default. All public data, including taxon, record and image data, can be downloaded this way. In the editor's area, the user may download all data as Excel files for further treatment or import, and as PDF files for printing.
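The role of the 'current' flag in the "flora_identifications" allocation table (picking the valid identification of a record among possibly several revisions) can be sketched as follows; the dictionary layout and the date fallback are hypothetical, meant only to illustrate the idea:

```python
def current_identification(identifications):
    """Return the taxon name of the identification flagged 'current'.

    If no entry carries the flag, fall back to the most recent
    identification by date (ISO date strings sort chronologically).
    """
    flagged = [i for i in identifications if i.get("current")]
    if flagged:
        return flagged[0]["taxon"]
    return max(identifications, key=lambda i: i["date"])["taxon"]
```

Keeping revised identifications instead of overwriting them is what lets the digitised specimens serve as a source for later taxonomic revisions.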
The user can specify a search request by providing detailed parameters. These parameters are encoded in the URL, e.g. http://greif.uni-greifswald.de/floragreif/?flora_search=record&fam=ephedraceae, which shows all records for the family Ephedraceae. This facilitates linking to FloraGreiF from external web pages, e.g. from the given family in Wikipedia (2011). Right from the start, FloraGreiF has been designed as a web application. This offers a variety of possibilities for online cooperation with experts worldwide, such as revising herbarium specimens, editing taxon nomenclature and descriptions, sharing comments on taxa, analysing and updating distribution information and identifying plant images.

Browse and search interface

Online users can browse through the treatments by family and genus name. The user immediately receives an overview of the occurring taxa, including the genera belonging to each family and the species in each genus, the available records, images and further remarks on the family or genus, as well as the editor's name. Icons characterise the images by their types: scan of herbarium specimen, photo of living plant, photo of plant in its habitat (Fig. 4). Moreover, users can search the database targeted at the levels taxon – record – image (Fig. 5). Search results are well arranged; levels of information are linked to each other. Thus, the user can quickly navigate from taxon to record or image data and vice versa.

Fig. 4. Web search with FloraGreiF: browse content.

Search results are generated dynamically from the current status of the database. Therefore, users will always receive the latest information.
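Because every search is encoded in the URL, external pages can deep-link into FloraGreiF simply by building a query string. A minimal sketch; only the flora_search and fam parameters come from the example above, and any further keys are assumed to work analogously:

```python
from urllib.parse import urlencode, urlsplit, parse_qs

BASE = "http://greif.uni-greifswald.de/floragreif/"

def search_url(**params):
    # Encode the search parameters into the URL, as in
    # ?flora_search=record&fam=ephedraceae
    return BASE + "?" + urlencode(params)

url = search_url(flora_search="record", fam="ephedraceae")
```

Since the whole search state lives in the query string, such a link can be bookmarked or embedded in a Wikipedia article and will always return the current database content.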
The information system can be searched with the following parameters:

(1) Taxon data:
• Name: family, genus, species (including synonyms)
• Growth form: annual, perennial, shrub, tree
• Special features (in preparation): medicinal plant
• Status: endemic, subendemic
• Red List status: rare, relict, extinct
• Distribution area (geobotanical units acc. to Grubov 1952, 1955, modified): Khubsgul, Khentei, Khangai, Mongol-Daurian, Great Khingan, Khobdo, Mongolian Altai, Middle Khalkha, East Mongolia, Depression of Great Lakes, Valley of Lakes, East Gobi, Gobi-Altai, Dzungarian Gobi, Transaltai Gobi, Alashan Gobi
• Habitat (frequent terms for habitat description are listed)

(2) Record data:
• Collector / coll. number
• Determined by, tested/revised by
• Herbarium
• Flowering status (phenology): vegetative, flowering, fruiting
• Habitat
• Location

(3) Images (always related to a record):
• Type: scan of herbarium specimen, species or macro photo, habitat photo
• Photographer

Floristic content and statistics

Currently, 2870 species of vascular plants are included in the "Virtual guide and plant database to the flora of Mongolia" (FloraGreiF 2011). The species inventory is based on Gubanov's (1996) checklist, which contains 2823 taxa in 662 genera from 128 families. It updates and broadens Grubov's (1982) essential field guide, which provides a very practical key and concise descriptions for 2239 species. This taxonomic backbone was gradually expanded to the current size from publications and through revisions, e.g. Gubanov (1999), Ebel & Rudaya (2002), Neuffer & al. (2003), Dulamsuren (2004), Friesen & al. (2006) and Scholz (2010). For all species, the name and source of the name, synonyms from the relevant literature, status of endemism, typical habitat and information on distribution are available. In the case of difficult or deviating taxon names, the nomenclature was checked using the available literature and cross-checking with IPNI (2004+).
The nomenclature of genera was checked against NCU (Greuter & al. 1993) but accepted according to the editors' concepts. The families are treated according to APG III (Stevens 2001+). The distribution of the species in Mongolia is recorded in the geobotanical units (phytogeographical regions) according to Grubov & Yunatov (1952; Grubov 1955, modified) and shown in layers, depicted as red distribution areas (Fig. 6), using the webGIS functionality. 1009 species, 35 % of the flora, have been edited in depth: scans of herbarium specimens and photos are available for 827 species, and descriptions and comments are available for another 297 species. Currently, 6553 species photos are included. They show habit, leaves, inflorescences, flowers and fruits, and thus focus on identification features. This goes along with our intention to visualise the most important key features in order to add to existing floras of Mongolia. For an exemplary comprehensive treatment of 44 families, the available material from B, GAT, GFW, HAL, JE, KAS, LE and OSBU (abbreviations following Thiers 2008+) was studied to be able to give a concise description, details about easily confused species, and comments. For that purpose, the following sources were used: Flora of Central Asia (Grubov 1963–2008), Flora SSSR (Komarov 1934–63), the floras of adjacent Siberia (Malyshev & al. 1988–2003) and the Russian Far East (Kharkevich 1985–96), as well as the Flora of China (Wu & Raven 1994+). An instructive herbarium specimen of each species, with as many identification features as possible, was chosen from the herbarium material at hand and is presented as a high-resolution scan. Overview tables of related species groups (e.g. Stipa, Carex, Suaeda and Tamarix) or thematic views (e.g. water plants) are linked to the corresponding species.

Fig. 5. Targeted web search with FloraGreiF: parameters.
The edited families are: Adoxaceae, Alliaceae, Apiaceae (p.p.), Apocynaceae, Asclepiadaceae, Asteraceae (p.p.), Balsaminaceae, Biebersteiniaceae, Butomaceae, Callitrichaceae, Cannabaceae, Ceratophyllaceae, Chenopodiaceae, Cynomoriaceae, Cyperaceae, Dipsacaceae, Elaeagnaceae, Empetraceae, Ephedraceae, Fabaceae (p.p.), Frankeniaceae, Geraniaceae, Haloragaceae, Hippuridaceae, Hypecoaceae, Hypericaceae, Juncaginaceae, Malvaceae, Menispermaceae, Menyanthaceae, Najadaceae, Nymphaeaceae, Orobanchaceae, Paeoniaceae, Papaveraceae, Pinaceae, Poaceae (p.p.), Polygonaceae, Ranunculaceae, Rhamnaceae, Tamaricaceae, Thymelaeaceae and Verbenaceae. These families were chosen with regard to their practical value. Families including genera dominating the vegetation (e.g. Artemisia, Stipa, Anabasis and Suaeda), key species for arid habitats or salty soils, and critical groups that are hard to determine (e.g. Ephedra, Artemisia, Cichorieae, Chenopodiaceae, Poaceae) were given priority. Since we were supported by leading taxonomists for some critical taxon groups, FloraGreiF provides thoroughly revised material of these plant groups for Mongolia. Up to now, H. Freitag (Suaeda, Kassel), N. Friesen (Allium, Osnabrück), K. F. Günther (Apiaceae, Jena), P. Hanelt (Papaveraceae, Gatersleben), N. Kilian (Cichorieae p.p., Berlin), P. Kuss (Pedicularis, Wien), M. Maier-Stolte & H. Freitag (Ephedra, Kassel), D. Podlech (Astragalus, München), E. von Raab-Straube (Saussurea, Berlin), H. Scholz (Eragrostis, Berlin) and R. Wisskirchen (Polygonaceae, Bonn) have contributed to our work.

Presented records

6152 plant records are currently online, with 997 scans of herbarium specimens, 5837 species photos, 734 macro photos and 734 habitat photos (date: September 2011). Most pictures were taken by M. Schnittler, M. Stubbe, M. Kretschmer, A. Zemmrich, F. Joly, M. Vesper and S. Rilke.
The combination of digitised herbarium specimens and macro photos of living plants collected at the same location, stemming from our additional expeditions, is especially valuable. Because mature fruit is often missing in herbarium material, these excursions took place as late in the year as possible for phenological reasons. This particularly holds true for Asteraceae, Boraginaceae, Brassicaceae, Chenopodiaceae and Fabaceae, for which the essential distinguishing characteristics are to be found in the fruit.

Further content

734 records additionally provide one or more photos of the specimens in their natural habitat, so the user can gain an overview of the species' environment. 127 endemic and 154 subendemic plant species are listed for the flora of Mongolia according to Gubanov (1996) and can be queried in FloraGreiF by choosing 'targeted search', entering the taxon search parameter and ticking the status checkbox: endemic or subendemic (Fig. 5). Further information about the endemic plants of the Altai mountains is given by Pyak & al. (2008) but is not yet provided in this database. 97 rare, 23 relict and 2 extinct species are registered according to the Mongolian Red Book (Shiirevdamba & al. 1997) and can be found by 'targeted search' (see above). The cited literature is searchable. It consists of all literature citations in FloraGreiF, supplemented by the bibliography on the vegetation of Mongolia (Zemmrich 2007). About 200 citations on the vegetation and 120 citations on the flora of Mongolia are included.

WebGIS

Spatial analysis (webGIS) is facilitated by combinations of digital topographical maps, elevation models and digital thematic maps (e.g. vegetation zones, soil zones and satellite data on the vegetation cover). The web client is OpenLayers, open source software that has been modified to meet the project's requirements. The web mapping service Moskito-GIS is compliant with OGC (Open Geospatial Consortium).
These standards specify methods, tools and services for data management (including definition and description): acquiring, processing, analysing, accessing, presenting and transferring geographic data in digital form between different users, systems and locations for geographic information (Fig. 6). Features available are: (1) location of the selected record in (a) a base map layer, (b) a map of provinces, (c) satellite images, (d) a map of geobotanical units, (e) vegetation zones and (f) a topographic map; (2) distribution of the selected species in vegetation zones. Additional features to be realised in the near future are the display of record locations in maps, additional map layers (geographical regions, topographic maps 1:500 000), additional point data layers (e.g. habitat photo sites to choose from the map) and a gazetteer service.

Next steps

Broadening the content base of FloraGreiF is a long-term perspective. Next steps concern the inclusion of already gathered records from the Ulaanbaatar Herbarium in the information system and the integration of digitised collections located in Görlitz (collection by K. Wesche) and Osnabrück (OSBU). Currently in preparation is the implementation of interactive identification keys in FloraGreiF. Since the prerequisites are already implemented, it is planned to open FloraGreiF to a wider community for direct data access, enabling experts in the field to add and edit taxon or record data in FloraGreiF online. Further revisions will be added continually to the taxon and record database, e.g. Caryophyllaceae, Zygophyllaceae and Fabaceae. Currently, the system is organised along the basic taxonomic levels of family, genus, species and subspecies. In the future, this structure will be expanded to include the intermediate ranks subfamily, section and tribe.
The export of specimen data will be extended from PDF (for printing) and Excel (for further treatment) to the standard ABCD (GBIF) using the BioCASe protocol.

Fig. 6. Example of webGIS display with overlays and base layers of distribution data in FloraGreiF: distribution of Stipa grandis.

Acknowledgements

We would like to thank the following people and institutions for making FloraGreiF possible: the project leaders Martin Schnittler and Reinhard Zölitz; the project members Jörg Hartleib, Susanne Starke and Anne Zemmrich (all Greifswald); the herbaria B, GAT, HAL, JE, KAS, LE, OSBU, W and their curators for making collections available directly, on loan or as digital images; the Mongolian Academy of Sciences, Ulaanbaatar (M. Urgamal & I. Tuvshintogtokh); University Khovd (D. Oyuunchimeg); and the DFG for project funding from 1.7.2007 to 5.11.2010.

References

Berendsohn W. (ed.) 2003: MoreTax. Handling factual information linked to taxonomic concepts in biology. – Schriftenreihe Vegetationsk. 39.
Dorofejuk N. I. & Gunin P. D. 2000: Bibliograficheskij ukazatel literatury po rezultatam issledovanij Sovmestnoj Rossijsko-Mongolskoj kompleksnoj biologicheskoj ekspeditsii RAN i ANM (1967–95 gg.). – Biol. Res. Prir. Uslov. Mongolii 41.
Dulamsuren C. 2004: Floristische Diversität, Vegetation und Standortbedingungen in der Gebirgstaiga des Westkhentej, Nordmongolei. – Ber. Forschungszentrums Waldökosysteme, A, 191.
Dulamsuren C., Solongo B. & Mühlenberg M. 2005: Comments of the red data book of endangered plant species of Mongolia. – Mongol. J. Biol. Sci. 5(2): 43–48.
Ebel A. L. & Rudaya N. A. 2002: Zametki po flore Zapadnoy Mongolii [Notes on the flora of Western Mongolia, Russian]. – Turczaninowia 5(1): 32–42.
Friesen N., Fritsch R. & Blattner F. 2006: Phylogeny and new intrageneric classification of Allium L. (Alliaceae) based on nuclear ribosomal DNA ITS sequences. – Aliso 22: 372–395.
FloraGreiF 2008+: Virtual guide and plant database to the
flora of Mongolia. – University of Greifswald; published at http://greif.uni-greifswald.de/floragreif/
Greuter W., Brummit R. K., Farr E., Kilian N., Kirk P. M. & Silva P. C. (ed.) 1993: NCU-3. Names in Current Use for extant plant genera. – Regnum Veg. 129; published online at http://www.bgbm.fu-berlin.de/iapt/ncu/genera/NCUGQuery.htm
Grubov V. I. & Yunatov A. A. 1952: Osnovnye osobennosti flory Mongolskoy Narodnoy Respubliki v svyazi s eyo rayonirovaniem [Main peculiarities of the flora of the Mongolian People's Republic in the framework of its spatial classification]. – Bot. Zhurn. 37: 45–64.
Grubov V. I. 1955: Konspekt flory Mongolskoj Narodnoj Respubliki. – Trudy Mong. Komissii 67.
Grubov V. I. 1982: Opredelitel' sosudistych rastenij Mongolii 1–2. – Leningrad: Nauka [English translation, Key to the vascular plants of Mongolia, Enfield: Science Publishers, 2001].
Grubov V. I. (ed.) 1963–2008: Rastenija Central'noj Azii po materialam Botanicheskogo instituta im. V. L. Komarova 1–15. – Leningrad / St Petersburg: Nauka [English translation, Plants of Central Asia: plant collections from China and Mongolia 1–14a, Enfield: Science Publishers, 1999–2007].
Gubanov I. A. 1996: Konspekt flory Vneshney Mongolii (sosudistye rasteniya) [Conspectus of flora of Outer Mongolia (vascular plants)]. – Moskva: Valang.
Gubanov I. A. 1999: Dopolneniya i ispravleniya k "Konspektu flory Vneshney Mongolii (sosudistye rasteniya)" [Additions and corrections to the "Conspectus of flora of Outer Mongolia (vascular plants)"]. – Turczaninowia 2(3): 19–23.
Gubanov I. A. & Hilbig W. 1993: Bibliographia phytosociologica: Mongolia II. – Excerpta Bot., B, 30: 63–117.
Hilbig W. 1995: The vegetation of Mongolia. – Amsterdam: SPB Academy.
Hilbig W. 2006: Der Beitrag deutscher Botaniker an der Erforschung von Flora und Vegetation in der Mongolei. – Feddes Repert. 117: 321–366.
IPNI 2004+: The International Plant Names Index. – Published at http://www.ipni.org
Kharkevich S. S. (ed.) 1985–96: Sosudistye rastenija sovetskogo Dal'nego Vostoka 1–8. – St Petersburg: Nauka [English translation, Vascular plants of the Russian Far East 1, Enfield: Science Publishers, 2003].
Knapp H. D. & Tschimed-Otschir B. 2001: Naturschutz in der Mongolei. – In: Konold W., Böcker R. & Hampicke U. (ed.), Handbuch Naturschutz und Landschaftspflege 5. Erg. Lfg. 6/01: 1–15. – Landsberg: Ecomed.
Komarov V. L. (ed.) 1934–63: Flora SSSR 1–30. – Leningrad: Nauka [English translation, Jerusalem: Israel Program for Scientific Transl., later vols Washington, D.C.: Smithsonian Inst.; Dehra Dun: Bishen Singh Mahendra Pal Singh; Königstein: Koeltz, 1963–2004].
Malyshev L. I., Krasnoborov I. M. & Polozhij A. V. 1988–2003: Flora Sibiri 1–11. – Novosibirsk: Nauka [English translation: Krasnoborov I. M. (ed.), Flora of Siberia 1–11, Enfield: Science Publishers, 2000–06].
NSOM [National Statistical Office of Mongolia] (ed.) 2002: Mongolian Statistical Yearbook 2001. – Ulaanbaatar: Mongol Oron.
Neuffer B. A., German D. & Hurka H. 2003: Contribution to the knowledge of the flora of the Mongolian Altai II. – Feddes Repert. 114: 632–637.
Pyak A. I., Shaw S. C., Ebel A. L., Zverev A. A., Hodgson J. G., Wheeler C. D., Gaston K. J., Morenko M. O., Revushkin A. S., Kotukhov Y. A. & Oyunchimeg D. 2008: Endemic plants of the Altai mountain country. – Hampshire: Wild Guides.
Schickhoff U. 2007: Project "Pastoral ecosystems in western Mongolia". – Published at http://mongolia.uni-hamburg.de/welcome.html
Scholz H. 2010: Infraspecific diversity of Eragrostis minor (Gramineae) in Central Asia. – Pp. 83–87 in: Timonin A. C., Barykina R. P., Zernov A. S., Lotova L. I., Novikov V. S., Pimenov M. G., Ploshinskaya M. E. & Sokoloff D. D. (ed.), XII Moscow Plant Phylogeny Symposium dedicated to the 250th anniversary of Professor Georg Franz Hoffmann, Proceedings. – Moscow: KMK.
Shiirevdamba Ts., Shardarsuren O., Erdenejav G., Amgalan Ts. & Tsetsegmaa Ts. (ed.) 1997: Mongolian red book. – Ulaanbaatar: National University of Mongolia.
Stevens P. F. 2001+: Angiosperm Phylogeny Website. – Published at http://www.mobot.org/MOBOT/research/APweb/
Stubbe A., Kaczensky P., Wesche K., Samjaa R. & Stubbe M. (ed.) 2007: Exploration into the biological resources of Mongolia 10. – Halle: Martin-Luther-Universität.
Thiers B. 2008+ [continuously updated]: Index Herbariorum: a global directory of public herbaria and associated staff. – New York Botanical Garden; published at http://sweetgum.nybg.org/ih/
Wikipedia 2011: Wikipedia, die freie Enzyklopädie. – Published at http://de.wikipedia.org/
Wu Z. Y. & Raven P. H. (ed.) 1994+: Flora of China. – Beijing: Science Press; St Louis: Missouri Botanical Garden; published also online at http://hua.huh.harvard.edu/china/mss/welcome.htm
Zemmrich A. 2007: Vegetation-ecological investigations of rangeland ecosystems in Western Mongolia. The assessment of grazing impact at various spatial scale levels. – Dissertation, Universität Greifswald.
https://biointerfaceresearch.com/
Article, Volume 11, Issue 1, 2021, 7422–7430
https://doi.org/10.33263/BRIAC111.74227430

The Effect of Myrtus, Honey, Aloe vera and Pseudomonas Phage Treatment on Infected Second Degree Burns: in vivo Study

Soheil Khaleghverdi 1, Abouzar Karimi 2, Roshanak Soltani 3, Reza Zare 4,*

1 Department of Basic Sciences, East Tehran Branch, Islamic Azad University, Tehran, Iran
2 Microbiology Laboratory, Research Section of Azmoon Salamat Iranian Co., Guilan, Iran
3 Science and Research Branch, Islamic Azad University, Tehran, Iran
4 Faculty of Chemistry, Bu-Ali Sina University, Hamadan, Iran
* Correspondence: rzare81@gmail.com; Scopus Author ID 14122640500

Received: 18.05.2020; Revised: 8.06.2020; Accepted: 12.06.2020; Published: 16.06.2020

Abstract: Much research has been devoted to finding a suitable dressing for wound healing that also achieves a considerable microbial reduction in burn wounds. In this study, the healing effect of a mixed herbal ointment (containing Myrtus, honey, Aloe vera and a Pseudomonas phage) on the healing process of second-degree burn wounds infected with Pseudomonas aeruginosa was evaluated. For this purpose, a hot metal square piece (4×2 cm, 50 g) was applied using a standard burning technique, with the pressure on the skin kept the same for all animals; the wounds were then infected with Pseudomonas aeruginosa. Rats were randomly divided into 2 groups.
Group 1 was treated with the mixed herbal ointment, and group 2 received no treatment (control group). Treatment was applied daily, and sampling was performed weekly for three consecutive weeks (days 7, 14 and 21). 10% formalin was used for tissue fixation. Wound healing in the test and control groups was investigated by macroscopic and microscopic methods using Hematoxylin-Eosin staining and wound contraction evaluation (Image J software). Macroscopic findings showed that wound contraction in the mixed herbal ointment group was significantly higher than in the control group over the 21 days. Hematoxylin-Eosin staining revealed that epithelialization was considerably more complete in the mixed herbal ointment group than in the control group. Also, neovascularization was significantly higher in the mixed herbal ointment group. The comparative results demonstrated that the mixed herbal ointment group differed significantly (P<0.05) from the non-treated (control) group. Therefore, the mixed herbal ointment is suggested as a suitable candidate for the treatment of second-degree burn wounds infected with Pseudomonas aeruginosa.

Keywords: Honey; sesame oil; Pseudomonas aeruginosa; burn; wound healing.

© 2020 by the authors. This article is an open-access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

1. Introduction

Skin, the largest organ of vertebrates, is a soft tissue covering the outer surface of the body and consists of two layers, the epidermis and the dermis. One may also include the subcutaneous tissue, the so-called hypodermis, which attaches the two upper layers to underlying tissues such as bone and muscle. Many well-known structures, such as blood and lymphatic vessels, sweat glands and hair follicles, are located in the dermis and hypodermis.
When the dermis, the main part of the skin where hair follicles and sweat glands are located, is injured, the complex process of healing is triggered automatically. Healing comprises a series of defined phenomena in a highly organized process involving several agents. For the reconstruction of the injured tissue, parenchymal and connective tissue cells must proliferate and migrate to the right place, so healing is a dynamic physiological procedure [1-4]. Wound healing is classically described as a dynamic, coordinated cascade of four overlapping steps: hemostasis (bleeding stop), inflammation, proliferation (granulation tissue formation) and regeneration (remodeling) [1, 5]. After hemostasis, several cell types contribute to the inflammatory phase, which promotes the proliferation of different cells and collagen synthesis. Thereafter, the wounded tissue starts restructuring; the collagen microfibrils and the new epidermis (the outermost layer) mature, and new fibroblasts and vessels are generated [5, 6]. Abnormal conditions such as dysregulated inflammation, excess free radicals, decreased angiogenesis and reduced collagen deposition may delay this normally well-organized process and result in situations such as non-healing chronic wounds [7-10]. Burns are a particularly problematic insult to the wound healing cascade and entail several quality-of-life problems (physical and psychological) as well as mortality. As an emergency and acute disorder, burn wound healing is increasingly addressed in the medical literature in both developed and developing countries, including burn injuries during pregnancy.
At present, many studies are searching for novel dressings with greater healing potential and antibacterial properties than silver sulfadiazine, the current gold standard [11-16]. To avoid clinical and environmental consequences and expenses, organic and natural materials have attracted the attention of researchers in this field. Honey may be the earliest natural treatment for wounds; it has been highly appreciated in several traditional medical references, to the extent that it is even mentioned in the Holy Quran more than 14 centuries ago [17-19]. In the traditional medicine literature, honey has been employed for treating respiratory, urinary, gastrointestinal and skin diseases, including ulcers, wounds, eczema, psoriasis and dandruff. Today, it is confirmed that honey promotes healing and scar fading by enhancing tissue regeneration and diminishing inflammation, edema and exudation [20]. A considerable portion of current scientific studies also addresses the characteristics of medicinal herbal products, which have been used conventionally for burn wound healing for centuries all over the world [21-24]. Many of these studies confirm the efficiency of medicinal plants in burn wound treatment. For instance, sesame oil has been studied alone or in mixtures (e.g., with camphor and honey) for its medicinal effects on the debridement of necrotic burns [2, 25]. In the present study, a mixed herbal ointment (Myrtus, honey, Aloe vera and a Pseudomonas phage) was used to examine its healing effect on second-degree burns infected with Pseudomonas aeruginosa in vivo, in comparison with control subjects (not treated with the herbal ointment).

2. Materials and Methods

2.1. Animals.

In this study, 12 Wistar male rats (250-300 g, 2-3 months) were randomly divided into test and control groups, which were monitored for three weeks and sampled on days 7, 14 and 21.
They were all maintained in a sheltered standard environment (20-25 °C with 65-75% humidity) under the supervision of a veterinarian. During the experiment, the rats were fed with usual rat chow and tap water. Each group was kept in a separate cage under a 12/12-hour light/dark cycle. Studies on both groups were performed under the same conditions.

2.2. Burn wound model.

The rats were anesthetized with xylazine (10 mg/kg) and ketamine hydrochloride (35 mg/kg). A deep second-degree burn wound was created according to Tanideh et al. using a hot plate (a metal square piece, dimensions: 4×2 cm, 50 g) at a fixed temperature [24]. Briefly, the dorsum skin was shaved using an electrical clipper and burned using the plate. The tool was warmed for 5 min in hot water and left in contact with the skin for 10 seconds with equal pressure. Then, the second-degree burn wounds were infected with Pseudomonas aeruginosa (1.5×10⁴ CFU/ml) by subcutaneous injection. All surgical procedures on laboratory animals, their maintenance conditions and the subsequent experimental procedures were carried out in accordance with Ethics Committee requirements.

2.3. Treatment.

The mixed herbal ointment was prepared by mixing Myrtus and Aloe vera (20 ml) with 20 ml of honey. Then, 2.5 ml of the optimum phage dilution was added, and the pH was adjusted to a value suitable for the skin. Subsequently, group 1 was treated with the mixed herbal ointment (including Myrtus, honey, Aloe vera and Pseudomonas phage), while group 2 was not treated and served as the control group. Treatment of the burn wounds with the mixed herbal ointment/Pseudomonas phage was performed daily. Photographic sampling was done on days 7, 14 and 21.

2.4. Macroscopic observation.
Response to the treatment was evaluated by photographing the wound sites and analyzing the images with ImageJ software. The rate of wound contraction was assessed by digital photography under general anesthesia on days 7, 14, and 21 of the treatment. 2.5. Microscopic observations. The histologic criteria used to determine treatment efficacy included epithelization and neovascularization, as well as regeneration of fibroblasts and epidermis, angiogenesis, and the levels of collagen, inflammation, and hair follicles in the tissue samples examined by microscopy. For histological studies, the selected rats were sacrificed in a chloroform-containing jar. Tissue samples from the wound area were then taken, transferred onto glass microscope slides, and stained with Hematoxylin-Eosin [26-32]. Tissue sections were evaluated under a light microscope. The separation time of the remaining scabs and the scar size were analyzed from the photographs. All measurements of healing were performed by a single designated observer. 2.6. Statistical analysis. Wound size was measured using ImageJ software. The non-parametric Friedman test was used to compare the mixed herbal ointment/Pseudomonas phage and control groups (P < 0.05). 3. Results and Discussion 3.1. Macroscopic analysis. During this study, wound contraction was more rapid in the mixed herbal ointment group than in the control group. Wound contraction in the mixed herbal ointment group was significant on days 14 and 21 in comparison with the control group (Figure 1). The mixed herbal ointment also caused the scabs to fall off and the wounds to heal in a shorter time than in the control group.
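The wound-contraction measure described above (areas extracted from calibrated photographs in ImageJ) is conventionally expressed as the percentage reduction of wound area relative to day 0. The sketch below is an illustration of that standard computation, not code from the paper; the function name is ours, and the example values follow the Table 1 trajectories.

```python
def contraction_pct(initial_area, current_area):
    """Percent wound contraction relative to the initial (day 0) area.

    Areas may be in any consistent unit, e.g. cm^2 measured in ImageJ
    from photographs calibrated with the paper ruler in each frame.
    """
    if initial_area <= 0:
        raise ValueError("initial area must be positive")
    return 100.0 * (initial_area - current_area) / initial_area

# Illustrative values following Table 1 (day 0 -> day 21):
print(contraction_pct(8.0, 0.4))   # treated-like trajectory -> 95.0
print(contraction_pct(8.0, 3.3))   # control-like trajectory -> 58.75
```

A larger percentage indicates more complete closure; comparing the two trajectories at each time point mirrors the macroscopic comparison reported in the Results.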
Wound size was measured with ImageJ software in both the control and mixed herbal ointment groups. Table 1 shows that wound contraction was considerably greater in the mixed herbal ointment group than in the control group on days 14 and 21. Figure 1. Macroscopic evaluation of the control and mixed herbal ointment groups. Group 1: control, Group 2: the mixed herbal ointment. Table 1. Wound size over time in the control and mixed herbal ointment/pseudomonas phage groups.

Day        0      7      14     21
Group 1    8      7      5.1    3.3
Group 2    8      6.1    2.3    0.4

*Results are based on three independent experiments. Non-parametric Friedman test, P < 0.05. Group 1: control, Group 2: the mixed herbal ointment. Figure 2. Histological epithelization. Group 1: control, Group 2: the mixed herbal ointment (magnification ×40). 3.2. Microscopic analysis. The results demonstrated considerable epithelization in the mixed herbal ointment group on days 7, 14, and 21 in comparison with the control group. The histological epithelization is shown in Figure 2. In addition, neovascularization was significantly higher in the mixed herbal ointment group than in the control group after 7, 14, and 21 days (Figure 3). However, in terms of overall wound healing, no significant difference was observed between the test and control groups (after 7, 14, or 21 days). Figure 3. Histological neovascularization. Group 1: control, Group 2: the mixed herbal ointment (magnification ×40). The results showed that angiogenesis peaked on day 7; the mixed herbal ointment group had a significantly higher angiogenesis level than the control group (results obtained from three independent tests, p < 0.05) (Table 2).
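The non-parametric Friedman test named in the Methods can be computed without a statistics package; the sketch below implements the textbook rank statistic (blocks of repeated measurements, one value per treatment/condition, with average ranks for ties). The example arranges the Table 1 wound sizes with the two groups as blocks and the four measurement days as treatments; this arrangement is for illustration only and is not necessarily the authors' exact analysis, which was run on their replicate data.

```python
def friedman_statistic(blocks):
    """Friedman chi-square statistic.

    blocks: list of blocks (e.g., subjects or replicates), each a list of
    k measurements, one per treatment/condition. Returns the statistic,
    which is approximately chi-square with k-1 degrees of freedom.
    """
    n, k = len(blocks), len(blocks[0])
    rank_sums = [0.0] * k
    for block in blocks:
        order = sorted(range(k), key=lambda j: block[j])
        ranks = [0.0] * k
        i = 0
        while i < k:  # assign average (mid) ranks to tied values
            j = i
            while j + 1 < k and block[order[j + 1]] == block[order[i]]:
                j += 1
            avg_rank = (i + j + 2) / 2.0  # ranks are 1-based
            for m in range(i, j + 1):
                ranks[order[m]] = avg_rank
            i = j + 1
        for j in range(k):
            rank_sums[j] += ranks[j]
    return 12.0 / (n * k * (k + 1)) * sum(r * r for r in rank_sums) - 3.0 * n * (k + 1)

# Table 1 wound sizes on days 0, 7, 14, 21 (groups as blocks):
print(friedman_statistic([[8, 7, 5.1, 3.3], [8, 6.1, 2.3, 0.4]]))  # -> 6.0
```

With real per-animal replicates, each rat would be one block and the time points (or treatment groups) the conditions; the resulting statistic is then compared against the chi-square distribution with k-1 degrees of freedom at the 0.05 level used in the paper.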
The results also show that the number of fibroblasts was significantly higher in the mixed herbal ointment group on days 7 and 14 than in the control group; no significant difference was observed on day 21. The greatest increase in fibroblast numbers occurred on day 14. Microscopic evaluation of hair follicle density, epidermis depth, collagen content, and inflammation level (based on the inflammatory cells present) identified statistically significant differences between the mixed herbal ointment and control groups (Table 2). Measurement of collagen in the tissue samples showed a statistically significant increase in the mixed herbal ointment group during wound healing, with the greatest collagen content observed on day 21. Epidermal measurement was used as an effective indicator of the extent of wound healing; after the epidermis formed, its size was measured and compared between the mixed herbal ointment and control groups. Another quality criterion of wound healing is the regeneration of hair follicles. According to our results, follicular regrowth was not seen on days 7 and 14, but new hair follicles were observed on day 21; the control and mixed herbal ointment groups showed an average follicle number of 3 and 8, respectively (results based on three independent tests, p < 0.05). Table 2. Statistical comparison of fibroblasts, epidermis, blood vessels, collagen, inflammation, and hair follicles in each group on days 7, 14, and 21. Results based on three independent and similar tests (P < 0.05).
Group 1: control, Group 2: the mixed herbal ointment.

Group    Day   Fibroblasts    Collagen     Angiogenesis   Hair follicles   Inflammation    Epidermis
Group 1  7     2300±41.41     0.58±0.002   1.56±0.43      0±0              2.44±0.693      10±1.36
Group 1  14    5150±30.6      1.15±0.51    2.43±0.12      0±0              1.77±0.234      25±2.12
Group 1  21    1769±19.55     0.87±0.002   1.49±0.30      3±0.012          0.3±0.741       72±2.21
Group 2  7     17800±40.39    2.65±0.012   7.34±0.278     0±0              1.65±0.554      49±1.35
Group 2  14    30100±49.30    4.66±1.15    6.72±0.20      0±0              0.77±0.00043    141±6.00
Group 2  21    8800±29.56     5.32±0.714   2.31±0.66      7±0.2            0.34±0.00013    179±16.32

Burns are among the most common injuries, particularly in children. Recovery from burn injuries, and avoidance of serious health consequences, depends on the cause and degree of the injury; the most serious burns require immediate emergency medical care to prevent complications and death. Burn injuries are categorized into three degrees based on the severity of the damage to the skin: first-degree wounds are the least problematic, and third-degree injuries are the most severe. First-degree burns are superficial, with red, non-blistered skin. Second-degree burns involve blisters and partial-thickness damage extending from beneath the epidermis into the dermis. Finally, third-degree burns involve full-thickness damage with a white, leathery appearance [17, 19, 21, 33]. Much research has focused on the effect of natural ingredients and ointments on second-degree burn wounds. For example, Somboonwong et al. showed that Aloe vera could promote healing of second-degree burn wounds [33].
Afshar et al. also showed that topical application of Emu oil to second-degree burns in Balb/c mice could positively affect wound healing and the recovery of hair follicles at the wound margins. According to their report, topically applied Emu oil was associated with a slow healing process and prolonged inflammation, yet it increased the number of hair follicles beyond what was seen in the untreated control group. Emu oil treatment also led to the appearance of more active and mature hair follicles in several layers and to increased fibrogenesis and collagen synthesis [34]. Chen et al. compared the effects of silver nanoparticle, 1% silver sulfadiazine cream, and vaseline dressings on second-degree burn wounds in patients with superficial and deep injuries. Using swab bacterial cultures taken before and after dressing changes, they showed that the silver nanoparticle dressing could reduce the risk of wound infection and accelerate wound healing [35]. In another study, Khorasani et al. determined that saffron (Crocus sativus) pollen extract cream can efficiently accelerate the healing of hot-water-induced burns; their results were reported in comparison with silver sulfadiazine (SSD) on second-degree burn injuries in rat models [36]. Gupta et al. performed a retrospective review of the records of patients with first- and second-degree burns admitted to their institution over 5 years. Considering several factors, including the burn-dressing interval; the site, percentage, degree, and depth of burns; the duration of healing; and the remaining scar, they concluded that honey dressing achieved faster wound sterilization and greater wound healing [17]. Akhoondinasab et al. compared the effects of Robaxin, Rimojen, SSD, and Aloe vera on the healing of both second- and third-degree burn injuries in rat models.
Digital photography and assessment of histological parameters (PMN, epithelialization, fibrosis, and angiogenesis) showed that Aloe vera displayed the most considerable performance in wound healing; Aloe vera treatment also ameliorated the wounds faster than SSD [37]. In the present study, to evaluate the healing effect of the mixed herbal ointment/Pseudomonas phage on second-degree burn wounds, the histologic parameters measured included epithelization, neovascularization, regeneration of fibroblasts and epidermis, angiogenesis, collagen synthesis, duration of inflammation, and regeneration of hair follicles. The wounds were also infected with Pseudomonas aeruginosa in order to assess the potential of the mixed herbal ointment treatment for wound sterilization. Based on our observations, epithelization and neovascularization were higher in the mixed herbal ointment group than in the control group. The present results also showed faster burn wound contraction in the mixed herbal ointment group than in the control group over the study period. Additionally, fibroblast proliferation, regeneration of the epidermis and blood vessels, collagen synthesis, hair follicle recovery, and resolution of inflammatory reactions proceeded further and faster in the rats treated with the mixed herbal ointment than in those that received no treatment. 4. Conclusions Considering the above results, it can be concluded that the mixed herbal ointment/Pseudomonas phage is a promising candidate herbal drug for healing second-degree burn wounds infected with Pseudomonas aeruginosa. This conclusion is supported by the healing times observed in the macroscopic evaluation and by the microscopic analysis. Funding This research received no external funding. Acknowledgments This research has no acknowledgments.
Conflicts of Interest The authors declare no conflict of interest. References 1. Boateng, J.S.; Matthews, K.H.; Stevens, H.N.; Eccleston, G.M. Wound healing dressings and drug delivery systems: a review. J Pharm Sci 2008, 97, 2892-923, https://doi.org/10.1002/jps.21210. 2. Karami, A.; Tebyanian, H.; Barkhordari, A.; Motavallian, E.; Soufdoost, R.S.; Nourani, M.R. Healing effects of ointment drug on full-thickness wound. C. R. Acad. Bulg. Sci 2019, 72, 123-129, https://doi.org/10.7546/CRABS.2019.01.16. 3. Shakeri, F.; Tebyanian, H.; Karami, A.; Babavalian, H.; Tahmasbi, M.H. Effect of Topical Phenytoin on Wound Healing. Trauma Monthly 2016, 22, e35488, https://doi.org/10.5812/traumamon.35488. 4. Esfahanizadeh, N.; Motalebi, S.; Daneshparvar, N.; Akhoundi, N.; Bonakdar, S. Morphology, proliferation, and gene expression of gingival fibroblasts on Laser-Lok, titanium, and zirconia surfaces. Lasers Med Sci 2016, 31, 863-73, https://doi.org/10.1007/s10103-016-1927-6. 5. Soufdoost, R.S.; Mosaddad, S.A.; Salari, Y.; Yazdanian, M.; Tebyanian, H.; Tahmasebi, E.; Yazdanian, A.; Karami, A.; Barkhordari, A. Surgical Suture Assembled with Tadalafil/Polycaprolactone Drug-Delivery for Vascular Stimulation Around Wound: Validated in a Preclinical Model. Biointerface Res Appl Chem 2020, 10, 6317-6327, https://doi.org/10.33263/BRIAC105.63176327. 6. Dickinson, L.E.; Gerecht, S. Engineered Biopolymeric Scaffolds for Chronic Wound Healing. Front Physiol 2016, 7, 341, https://doi.org/10.3389/fphys.2016.00341. 7. Hassani, H.; Khoshdel, A.; Sharifzadeh, S.R.; Heydari, M.F.; Alizadeh, S.; Aghideh, A.N. TNF-α and TGF-β level after intraoperative allogeneic red blood cell transfusion in orthopedic operation patients. Turk J Med Sci 2017, 47, 1813-1818, https://doi.org/10.3906/sag-1508-36. 8. Sharif, F.; Steenbergen, P.J.; Metz, J.R.; Champagne, D.L. Long-lasting effects of dexamethasone on immune cells and wound healing in the zebrafish.
Wound Repair Regen 2015, 23, 855-65, https://doi.org/10.1111/wrr.12366. 9. Tebyanian, H.; Mirhosseiny, S.H.; Kheirkhah, B.; Hassanshahian, M.; Farhadian, H. Isolation and Identification of Mycoplasma synoviae From Suspected Ostriches by Polymerase Chain Reaction, in Kerman Province, Iran. Jundishapur J Microbiol 2014, 7, e19262, https://doi.org/10.5812/jjm.19262. 10. Heidari, M.F.; Arab, S.S.; Noroozi-Aghideh, A.; Tebyanian, H.; Latifi, A.M. Evaluation of the substitutions in 212, 342 and 215 amino acid positions in binding site of organophosphorus acid anhydrolase using the molecular docking and laboratory analysis. Bratisl Lek Listy 2019, 120, 139-143, https://doi.org/10.4149/BLL_2019_022. 11. Karami, A.; Tebyanian, H.; Goodarzi, V.; Shiri, S. Planarians: an In Vivo Model for Regenerative Medicine. Int J Stem Cells 2015, 8, 128-33, https://doi.org/10.15283/ijsc.2015.8.2.128. 12. Khomarlou, N.; Aberoomand-Azar, P.; Lashgari, A.P.; Tebyanian, H.; Hakakian, A.; Ranjbar, R.; Ayatollahi, S.A. Essential oil composition and in vitro antibacterial activity of Chenopodium album subsp. striatum. Acta Biol Hung 2018, 69, 144-155, https://doi.org/10.1556/018.69.2018.2.4. 13. Taherian, A.; Fazilati, M.; Moghadam, A.T.; Tebyanian, H. Optimization of purification procedure for horse F(ab´)2 antivenom against Androctonus crassicauda (Scorpion) venom. Trop J Pharm Res 2018, 17, 409-414, https://doi.org/10.4314/tjpr.v17i3.4. 14.
Babavalian, H.; Latifi, A.M.; Shokrgozar, M.A.; Bonakdar, S.; Tebyanian, H.; Shakeri, F. Cloning and expression of recombinant human platelet-derived growth factor-BB in Pichia Pink. Cell Mol Biol (Noisy-le-grand) 2016, 62, 45-51, https://doi.org/10.14715/cmb/2016.62.8.8. 15. Mosaddad, S.A.; Tahmasebi, E.; Yazdanian, A.; Rezvani, M.B.; Seifalian, A.; Yazdanian, M.; Tebyanian, H. Oral microbial biofilms: an update. Eur J Clin Microbiol Infect Dis 2019, 38, 2005-2019, https://doi.org/10.1007/s10096-019-03641-9. 16. Seifi Kafshgari, H.; Yazdanian, M.; Ranjbar, R.; Tahmasebi, E.; Mirsaeed, S.R.G.; Tebyanian, H.; Ebrahimzadeh, M.A.; Goli, H.R. The effect of Citrullus colocynthis extracts on Streptococcus mutans, Candida albicans, normal gingival fibroblast and breast cancer cells. J Biol Res 2019, 92, 8201, https://doi.org/10.4081/jbr.2019.8201. 17. Gupta, S.S.; Singh, O.; Bhagel, P.S.; Moses, S.; Shukla, S.; Mathur, R.K. Honey dressing versus silver sulfadiazene dressing for wound healing in burn patients: a retrospective study. J Cutan Aesthet Surg 2011, 4, 183-7, https://doi.org/10.4103/0974-2077.91249. 18. Israili, Z.H. Antimicrobial properties of honey. Am J Ther 2014, 21, 304-23, https://doi.org/10.1097/MJT.0b013e318293b09b. 19. Subrahmanyam, M. A prospective randomised clinical and histological study of superficial burn wound healing with honey and silver sulfadiazine. Burns 1998, 24, 157-61, https://doi.org/10.1016/s0305-4179(97)00113-7. 20. Hussein, S.Z.; Mohd Yusoff, K.; Makpol, S.; Mohd Yusof, Y.A. Gelam Honey Inhibits the Production of Proinflammatory Mediators NO, PGE(2), TNF-alpha, and IL-6 in Carrageenan-Induced Acute Paw Edema in Rats. Evid Based Complement Alternat Med 2012, 2012, https://doi.org/10.1155/2012/109636. 21. Amini, M.; Kherad, M.; Mehrabani, D.; Azarpira, N.; Panjehshahin, M.R.; Tanideh, N. Effect of Plantago major on burn wound healing in rat. J Appl Anim Res 2010, 37, 53-56, https://doi.org/10.1080/09712119.2010.9707093. 22.
Noroozi-Aghideh, A.; Kheirandish, M. Human cord blood-derived viral pathogens as the potential threats to the hematopoietic stem cell transplantation safety: A mini review. World J Stem Cells 2019, 11, 73-83, https://doi.org/10.4252/wjsc.v11.i2.73. 23. Noroozi-Aghideh, A.; Kashani khatib, Z.; Naderi, M.; Dorgalaleh, A.; Yaghmaie, M.; Paryan, M.; Alizadeh, S. Expression and CpG island methylation pattern of MMP-2 and MMP-9 genes in patients with congenital factor XIII deficiency and intracranial hemorrhage. Hematology 2019, 24, 601-605, https://doi.org/10.1080/16078454.2019.1654181. 24. Tanideh, N.; Rokhsari, P.; Mehrabani, D.; Mohammadi Samani, S.; Sabet Sarvestani, F.; Ashraf, M.J.; Koohi Hosseinabadi, O.; Shamsian, S.; Ahmadi, N. The Healing Effect of Licorice on Pseudomonas aeruginosa Infected Burn Wounds in Experimental Rat Model. World J Plast Surg 2014, 3, 99-106. 25. Anilakumar, K.R.; Pal, A.; Khanum, F.; Bawa, A.S. Nutritional, medicinal and industrial uses of sesame (Sesamum indicum L.) seeds-an overview. Agric conspec sci 2010, 75, 159-168. 26. Tebyanian, H.; Karami, A.; Motavallian, E.; Aslani, J.; Samadikuchaksaraei, A.; Arjmand, B.; Nourani, M.R. Histologic analyses of different concentrations of TritonX-100 and Sodium dodecyl sulfate detergent in lung decellularization. Cell Mol Biol (Noisy-le-grand) 2017, 63, 46-51, https://doi.org/10.14715/cmb/2017.63.7.8. 27. Tebyanian, H.; Karami, A.; Motavallian, E.; Samadikuchaksaraei, A.; Arjmand, B.; Nourani, M.R. Rat lung decellularization using chemical detergents for lung tissue engineering. Biotech Histochem 2019, 94, 214-222, https://doi.org/10.1080/10520295.2018.1544376. 28. Tebyanian, H.; Karami, A.; Motavallian, E.; Aslani, J.; Samadikuchaksaraei, A.; Arjmand, B.; Nourani, M.R. A Comparative Study of Rat Lung Decellularization by Chemical Detergents for Lung Tissue Engineering. Open Access Maced J Med Sci 2017, 5, 859-865, https://doi.org/10.3889/oamjms.2017.179. 29.
Soufdoost, R.S.; Yazdanian, M.; Tahmasebi, E.; Yazdanian, A.; Tebyanian, H.; Karami, A.; Nourani, M.R.; Panahi, Y. In vitro and in vivo evaluation of novel Tadalafil/β-TCP/Collagen scaffold for bone regeneration: A rabbit critical-size calvarial defect study. Biocybern Biomed Eng 2019, 39, 789-796, https://doi.org/10.1016/j.bbe.2019.07.003. 30. Esfahanizadeh, N.; Daneshparvar, P.; Takzaree, N.; Rezvan, M.; Daneshparvar, N. Histologic Evaluation of the Bone Regeneration Capacities of Bio-Oss and MinerOss X in Rabbit Calvarial Defects. Int J Periodontics Restorative Dent 2019, 39, e219-e227, https://doi.org/10.11607/prd.4181. 31. Esfahanizadeh, N.; Yousefi, H. Successful Implant Placement in a Case of Florid Cemento-Osseous Dysplasia: A Case Report and Literature Review. J Oral Implantol 2018, 44, 275-279, https://doi.org/10.1563/aaid-joi-D-17-00140. 32. Daneshparvar, H.; Sadat-Shirazi, M.S.; Fekri, M.; Khalifeh, S.; Ziaie, A.; Esfahanizadeh, N.; Vousooghi, N.; Zarrindast, M.R. NMDA receptor subunits change in the prefrontal cortex of pure-opioid and multi-drug abusers: a post-mortem study.
Eur Arch Psychiatry Clin Neurosci 2019, 269, 309-315, https://doi.org/10.1007/s00406-018-0900-8. 33. Somboonwong, J.; Thanamittramanee, S.; Jariyapongskul, A.; Patumraj, S. Therapeutic effects of Aloe vera on cutaneous microcirculation and wound healing in second degree burn model in rats. J Med Assoc Thai 2000, 83, 417-25. 34. Afshar, M.; Ghaderi, R.; Zardast, M.; Delshad, P. Effects of Topical Emu Oil on Burn Wounds in the Skin of Balb/c Mice. Dermatol Res Pract 2016, 2016, 6419216, https://doi.org/10.1155/2016/6419216. 35. Chen, J.; Han, C.M.; Lin, X.W.; Tang, Z.J.; Su, S.J. [Effect of silver nanoparticle dressing on second degree burn wound]. Zhonghua Wai Ke Za Zhi 2006, 44, 50-2. 36. Khorasani, G.; Hosseinimehr, S.J.; Zamani, P.; Ghasemi, M.; Ahmadi, A. The effect of saffron (Crocus sativus) extract for healing of second-degree burn wounds in rats. Keio J Med 2008, 57, 190-5, https://doi.org/10.2302/kjm.57.190. 37. Akhoondinasab, M.R.; Akhoondinasab, M.; Saberi, M. Comparison of healing effect of aloe vera extract and silver sulfadiazine in burn injuries in experimental rat model. World J Plast Surg 2014, 3, 29-34.

work_ar7h2ss5qbfsbi6636rq4vdd4a ---- TELEMEDICINE AND e-HEALTH Volume 13, Number 6, 2007 © Mary Ann Liebert, Inc. DOI: 10.1089/tmj.2007.9971 Original Research The Effects of TeleWound Management on Use of Service and Financial Outcomes RILEY S. REES, M.D., and NOURA BASHSHUR, M.H.S.A. ABSTRACT This study investigated the effects of a TeleWound program on the use of service and financial outcomes among homebound patients with chronic wounds. The TeleWound program consisted of a Web-based transmission of digital photographs together with a clinical protocol.
It enabled homebound patients with chronic pressure ulcers to be monitored remotely by a plastic surgeon. Chronic wounds are highly prevalent among chronically ill patients in the United States (U.S.). About 5 million chronically ill patients in the U.S. have chronic wounds, and the aggregate cost of their care exceeds $20 billion annually. Although 25% of home care referrals in the U.S. are for wounds, less than 0.2% of the registered nurses in the U.S. are wound care certified. This implies that the majority of patients with chronic wounds may not be receiving optimal care in their home environments. We hypothesized that TeleWound management would reduce visits to the emergency department (ED), hospitalization, length of stay, and visit acuity. Hence, it would improve financial performance for the hospital. A quasi-experimental design was used. A sample of 19 patients receiving this intervention was observed prospectively for 2 years. This was matched to a historical control group of an additional 19 patients from hospital records. Findings from the study revealed that TeleWound patients had fewer ED visits, fewer hospitalizations, and shorter lengths of stay than the control group. Overall, they incurred lower costs. The results of this clinical study are striking and provide strong encouragement that a single provider can achieve positive clinical and financial outcomes using a telemedicine wound care program. TeleWound was found to be a credible modality for managing pressure ulcers at lower cost and with possibly better health outcomes. The next step in this process is to integrate the model into daily practice at bellwether medical centers to determine programmatic effectiveness in larger clinical arenas. Wound Care Center, Department of Plastic Surgery, University of Michigan Health System, Ann Arbor, Michigan.
INTRODUCTION This study was aimed at investigating the effects of a TeleWound program on the use of service and financial outcomes among homebound patients with chronic wounds. The TeleWound program was internally developed at the University of Michigan Health System (UMHS), jointly designed by the Telemedicine Resource Center and Department of Plastic Surgery. The program allows homebound patients with pressure ulcers to receive distance care and remote monitoring from their plastic surgeon. It utilizes a store-and-forward system and consists of Web-based digital photography and standardized clinical protocols. Measures of use of service include frequency of emergency department (ED) visits, outpatient clinic visits, hospitalization and length of hospital stay, number of outpatient clinic contacts (other than visits), and outpatient visit acuity. Financial outcome measures were limited to the UMHS's direct and indirect costs. MATERIALS AND METHODS Chronic wounds are highly prevalent among chronically ill patients in the United States (U.S.). Typically, the cost of care for such patients is quite substantial. Overall, it has been estimated that about 5 million chronically ill patients in the total population of the U.S. have chronic wounds, and the aggregate cost of care for them exceeds $20 billion annually. Moreover, this cost increases with advancing age at the rate of about 10% per year.1 Another estimate places the number of patients with chronic wounds at 6 million, overall, or nearly 2% of the total U.S. population.2 Pressure ulcers occur frequently among hospitalized patients. It has been estimated that about 2.5 million hospitalized patients receive treatment for pressure ulcers each year.3 Their care is very costly.
For instance, in 1999, the estimated total cost of caring for hospital-acquired pressure ulcers was between $2.2 and $3.6 billion per year.3 The cost of care for a single hospitalized patient with a pressure ulcer ranged between $5,000 and $40,000.4–6 The estimated cost for surgical closure of a pressure sore ranged from $75,000 to $90,000. Often, the hospital does not get fully reimbursed for this cost.1 Hence, the average hospital in the U.S. incurs between $400,000 and $700,000 in direct costs to treat pressure ulcers annually, and a large portion of this cost is not reimbursable.3,7 There is empirical evidence that demonstrates a strong association between pressure ulcers and hospital length of stay.8 Under the prospective payment system, the required care for these patients places hospitals at a significant financial disadvantage, particularly the loss of revenue as a result of the prolonged hospitalization.1 The problem is all the more acute among the elderly. Indeed, 70% of all pressure ulcers occur among patients who are 70 years of age or older.8 Additionally, pressure ulcers greatly increase the risk for osteomyelitis of the pelvis, septicemia,8 and cellulitis.9 For instance, nearly two thirds, or 65%, of elderly patients hospitalized with hip fractures develop pressure ulcers.10 Among other investigators, Allman et al. found that pressure ulcers constitute a "significant predictor of both length of hospital stay and total hospital costs,"4,11,12 as well as an increased risk of amputation and death.3,4,11,12 Several factors contribute to the high cost of treatment for pressure ulcers. These include nursing time, physician time, surgical procedures (flaps and debridement), and, of course, longer hospitalization for complications. There is also the added cost of expensive devices and products, such as specialty beds, pressure-relieving devices, pharmacotherapy, and rehabilitation. The experience can be painful and disfiguring.
In addition to the physical symptoms, afflicted patients may suffer from the sequelae of low self-esteem, embarrassment, and "body image disturbances." Indeed, the problem degrades quality of life and functional performance among patients. This becomes all the more serious when considering that such patients often experience slow recovery because of having comorbidities. Often, they become dependent on a caregiver to change their dressings and to help them with basic activities of daily living. In turn, caregivers have to devote time and energy to care for these patients, who tend to be elderly parents. The burden on caregivers is substantial, and it can disrupt the normal routines of nuclear families.13 Since many of these patients require professional help in their home environments, there was a concomitant increase in the demand for homecare services.14 From the wound care perspective, this increased the demand for home health services. However, it was not a boon for home health agencies that have to operate under the Prospective Payment System because the increased demand did not generate additional revenue.15 Nonetheless, the actual increase in the number of homecare patients with acute and chronic wounds resulted in the prominence of the homecare setting for the care of these patients.16 For instance, between 2000 and 2005, the average number of home health visits per patient was expected to increase from 65 to 82.17 Yet overall reimbursement has been declining. Dansky et al. reported that home health agencies are currently being reimbursed at levels 2% lower than 1993–1994 levels.18,19 Nearly a quarter of home care referrals in the U.S. are for wounds.20 Homecare agencies complain about the deficit they incur in treating such patients.
Although their cost ranges between $8,000 and $30,000 per year, their reimbursement is limited to about $2,500 on a national level.20 Nurses who are specialty trained in wound care can reduce the cost of care by applying their knowledge, skill, and efficient use of resources.20,21 Indeed, it has been demonstrated that effective and timely treatment of chronic wounds is based on high-quality, standardized, "community-based specialty care",20,22,23 and that chronic wounds heal more rapidly in the home setting when the care is provided by specially trained nurses, as compared to their counterparts.24 However, less than 0.2% of the registered nurses in the U.S. are wound care certified. This implies that the vast majority of such patients do not receive optimal care in their home environments.20 Description of the intervention The TeleWound program consists of a Web-based transmission of digital photographs and a clinical protocol that enables homebound patients who have chronic pressure ulcers to be monitored remotely by a plastic surgeon who is a specialist in wound care management. These patients have been receiving their normal care from their plastic surgeon when they visit the office. However, care in their homes has been provided by a home health agency. Those participating in the TeleWound program are visited by the TeleWound nurse in their homes. During these visits, the nurse takes a digital photograph of each wound, as shown in Figure 1, and completes a standardized wound assessment form, which captures detailed data such as (but not limited to) wound location, size, stage, drainage, odor, presence of exposed bone, "feel" of bone if exposed, percent necrotic tissue, dressing regime, and products used. There are also two open-ended sections, one where the TeleWound nurse documents her overall assessment, and another section for communication/questions from the nurse to the physician.
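The structured assessment described above can be modeled as a simple record that travels with each photograph. The sketch below is purely illustrative: the actual UMHS form fields and transmission platform are not published in this article, so every field name here is a hypothetical stand-in for the categories the text lists (location, size, stage, drainage, odor, exposed bone, percent necrotic tissue, dressing regime, plus the two open-ended sections).

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class WoundAssessment:
    """One TeleWound visit record; all field names are illustrative."""
    patient_id: str
    wound_location: str
    stage: int                          # pressure ulcer stage I-IV
    length_cm: float
    width_cm: float
    drainage: str
    odor_present: bool
    exposed_bone: bool
    percent_necrotic: float             # 0-100
    dressing_regime: str
    photo_files: List[str] = field(default_factory=list)
    nurse_assessment: str = ""          # open-ended section 1
    questions_for_physician: str = ""   # open-ended section 2

    def area_cm2(self) -> float:
        # crude length x width estimate commonly used in wound care
        return self.length_cm * self.width_cm

# Example record for a single visit:
visit = WoundAssessment(
    patient_id="p01", wound_location="sacrum", stage=3,
    length_cm=4.0, width_cm=2.0, drainage="serous",
    odor_present=False, exposed_bone=False,
    percent_necrotic=10.0, dressing_regime="foam",
    photo_files=["wound_p01_visit3.jpg"],
)
print(visit.area_cm2())  # -> 8.0
```

Bundling the closed-ended fields, the free-text sections, and the photo references in one record mirrors the store-and-forward design: the nurse completes the record in the home, and the surgeon later reviews it as a single file.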
The information is relayed on a secure Web-based platform to the manager of the program, who arranges the complete file for the physician. The digital camera chosen was the Nikon CoolPix 4500 (Nikon, Melville, NY), plus a Nikon Cool-Light S-1. The Cool-Light is a ring light that attaches directly to the camera. It ensures consistent lighting for all images, because the ambient light in the homecare environment is often unpredictable. (Flash is never used because it tends to "wash out" the wound color.)
Digital photography follows an explicit but simple protocol, including specifications such as angling the camera "head on" to the wound, taking the photo between 10 and 18 inches away from the wound, placing a paper ruler and wound identifier tag in close proximity to the wound, and capturing the wound, tag, and ruler in each photo. The tag is preprinted with a large red circle. This red circle is the same hue and intensity as healthy wound bed tissue and is used to compare the color of the actual wound to a standardized color. The nurse also makes sure the wound is centered in the photograph, without shadows, and uses a contrasting background. After executing this task, the nurse reviews the photographs. If they are found inadequate, new photographs are obtained. All photographs and protocols are transmitted over a Virtual Private Network within 24 hours.

FIG. 1. Digital transmission of TeleWound photograph.

In-home TeleWound visits are conducted by the nurse either weekly or biweekly, depending on the stability of the wound. The surgeon reviews the information within 36 hours of receiving it and makes treatment decisions based on the data, except when alerted to an emergency situation by the nurse or manager.
The surgeon determines whether a change of order is indicated or whether a clinic visit or surgical debridement should be scheduled. The surgeon also responds to the nurse's questions and comments, which were relayed in the open-ended sections of the wound assessment form.

STUDY DESIGN

The leading hypothesis in this study posits that the TeleWound management program will result in reduced frequency of visits to the ED, reduced hospitalization and length of stay, reduced visit acuity, and improved financial performance. Moreover, it is expected that the longer patients participate in the TeleWound program, the greater the effects.
A quasiexperimental design was chosen as the most feasible approach to test this multipart hypothesis. It consists of an improved nonequivalent control group design, in which the experimental (TeleWound) cases are observed prospectively, whereas the control group is a matched historical group. The improvement over the traditional nonequivalent group design derives from matching cases with comparable controls. The TeleWound group consisted of 19 patients who gave informed consent to participate in the study. Of those who were asked to participate, two refused after the start of the project; hence, the total pool consisted of 21 patients. In order to ensure the reliability of the analysis, cases were included in the study only if they participated in the TeleWound program for a minimum of 3 months. Moreover, for analytic purposes, the experimental (TeleWound) group of 19 subjects was further subdivided into "established" patients (those who participated in the program for 12 months or more from the start of the project) and "nonestablished" patients (those who participated for less than 12 months). The experience of the subset of "established" patients was examined separately because its members were expected to show a larger effect than the group as a whole.
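The 3-month inclusion minimum and the 12-month "established" threshold described above amount to a simple classification rule; a sketch (the function name is ours, not the study's):

```python
def classify_participation(months_enrolled: float) -> str:
    """Subgroup a TeleWound participant by months in the program.

    Under 3 months: excluded from analysis (the study's inclusion minimum).
    3 to under 12 months: "nonestablished".
    12 months or more: "established".
    """
    if months_enrolled < 3:
        return "excluded"
    if months_enrolled >= 12:
        return "established"
    return "nonestablished"
```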
The TeleWound group consisted of 11 males and 8 females; the "established" subset among them consisted of 7 males and 4 females. Nineteen controls were matched by wound type, comorbidities, distance of residence to clinic, and payer mix. These cases were selected from hospital records. All received their wound care from the same surgeon, but not necessarily from the same home health agency. Distance from home was calculated by entering the home address and the address of the clinic into MapQuest®, which computed the mileage for each address. Visit acuity was abstracted from the electronic medical record, where it is systematically coded. The two groups were not matched by age. However, all subjects were patients of a single provider and received treatment in the same outpatient clinic. The data for the TeleWound group were collected prospectively for a period of 2 years, whereas the data for the control group were abstracted from the electronic master records for the same period.

The dependent variables: use of service and financial performance

Measures of service use were straightforward. They included number of ED visits, number of outpatient clinic visits, number of inpatient hospitalizations, length of stay, number of outpatient clinic contacts, and level of outpatient visit acuity. This latter variable was abstracted from hospital financial records. A "clinic contact" is defined as any type of contact (phone calls, nursing notes, etc.) with the plastic surgeon or plastic surgery nurse that was recorded in the electronic medical record. Financial outcome measures were based on inpatient and outpatient health system direct and indirect costs. These data covered 2 full calendar years, 2004 and 2005. Factors such as wound healing time were not measured in the study.
DATA ANALYSIS AND FINDINGS

The analysis of the data consisted of comparing the utilization experiences of the two groups, TeleWound (experimental) and control, over a 2-year period. A further assessment is made for the subset of "established" patients in order to test the secondary hypothesis that experience in the TeleWound program tends to increase the magnitude of the effect. The significance of the difference between the TeleWound and control groups is ascertained by the χ2 statistic and its associated p value. The value of χ2 was calculated using the standard formula. Although several of the dependent variables were continuous, we had to collapse their categories because of the small sample size.

ED visits

For the 2 years combined, patients in the TeleWound group made a total of 19 ED visits, whereas the control group made a total of 39 such visits. The averages for the two groups were 0.84 and 2.05, respectively; that is, on average, patients in the TeleWound group made less than 1 ED visit over the 2-year period, whereas those in the control group averaged more than 2 visits. This trend is even more dramatic among established patients: 9 of the 11 in the TeleWound group made no ED visits and only 2 had 1 or more visits, whereas the reverse is true of the control group (data are shown in Table 1).
Number of ED visits: TeleWound patients are much less likely to use the ED than the control group, and the TeleWound group uses the ED less frequently. The TeleWound program intervention is positively related to absence of ED use and negatively related to number of ED visits. These effects increase the longer patients participate in the program. The differences between the two groups are statistically significant.

TABLE 1. EMERGENCY DEPARTMENT VISITS BY TELEWOUND AND CONTROL GROUPS

                  Total sample                          Established patients only
                  No visits  1+ visits  Total  Mean     No visits  1+ visits  Total  Mean
TeleWound group      11          8        19   0.84         9          2        11   0.45
Control group         5         14        19   2.05         2          9        11   2.82
                  χ2 = 3.89, p = 0.049                  χ2 = 8.91, p = 0.003

Outpatient visits

The TeleWound group made more outpatient visits than the control group (a total of 211 versus 190, or an average of 11.12 versus 10.0). However, these differences were not statistically significant as measured by the χ2 test. Among established patients, the TeleWound group made fewer outpatient visits than did the control group. Again, the differences were not statistically significant. Hence, this part of the hypothesis was not substantiated by the data (Table 2).

TABLE 2. OUTPATIENT CLINIC VISITS BY TELEWOUND AND CONTROL GROUPS

                  Total sample                               Established patients only
                  ≤10 visits  ≥11 visits  Total  Mean        ≤10 visits  ≥11 visits  Total  Mean
TeleWound group        8          11        19   11.12            7           4        11    9.36
Control group         10           9        19   10.00            3           8        11   13.09
                  χ2 = 0.42, p = 0.746                       χ2 = 2.93, p = 0.087

Hospitalization

We employed 2 measures of hospitalization in this study: number of hospital admissions and length of stay.

TABLE 3. HOSPITALIZATIONS BY TELEWOUND AND CONTROL GROUPS
(Column headings give number of hospitalizations.)

                  Total sample                           Established patients only
                  0    1–3   4–6   ≥7   Total  Mean      0–3   ≥4   Total  Mean
TeleWound group   5     8     6     0    19    2.63       9     2    11    1.91
Control group     1     6     7     5    19    4.89       2     9    11    6.09
                  χ2 = 8.03, p = 0.045                   χ2 = 8.91, p = 0.003
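The χ2 values and p values reported in Tables 1 through 6 can be reproduced from the printed 2×2 counts. A minimal stdlib sketch (Pearson χ2 without continuity correction, which is what matches the reported figures; the 1-df p value uses the fact that a chi-square variate with 1 df is a squared standard normal):

```python
import math

def chi_square_2x2(a, b, c, d):
    """Pearson chi-square and 1-df p value for the 2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    row1, row2 = a + b, c + d
    col1, col2 = a + c, b + d
    chi2 = 0.0
    for observed, row, col in ((a, row1, col1), (b, row1, col2),
                               (c, row2, col1), (d, row2, col2)):
        expected = row * col / n
        chi2 += (observed - expected) ** 2 / expected
    p = math.erfc(math.sqrt(chi2 / 2))  # P(chi-square with 1 df >= chi2)
    return chi2, p

# Table 1, total sample: ED visits (no visits vs. one or more)
chi2, p = chi_square_2x2(11, 8, 5, 14)   # chi2 ~ 3.89, p ~ 0.049
```

The established-patient column checks out the same way: `chi_square_2x2(9, 2, 2, 9)` gives χ2 ≈ 8.91, p ≈ 0.003, matching Table 1.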
For the 2 years, the TeleWound group had a total of 50 admissions, whereas the number for the control group was nearly double (93 admissions). When examined in more detail (as shown in Table 3), no patients in the TeleWound group were admitted more than 6 times, as compared to 5 patients in the control group in that category. The difference between the two groups is statistically significant. The established patients demonstrated this difference even more starkly: the average for the TeleWound group of established patients was nearly 2 admissions (1.91), whereas the average for the control group was more than threefold that (6.09).
The same trends were observed with regard to length of stay (data shown in Table 4). The TeleWound group spent a total of 399 days in the hospital over the 2-year period, whereas the control group spent a total of 732 days. Stated differently, the patients in the TeleWound group spent an average of 21 days in the hospital during 2004 and 2005, whereas patients in the control group spent an average of 38.53 days. Among established patients the difference was even more substantial: 7 of the 11 established patients in the TeleWound group spent 6 days or less in the hospital, as compared to only 1 in the control group with the same length of stay.

TABLE 4. LENGTH OF HOSPITAL STAY BY TELEWOUND AND CONTROL GROUPS FOR BOTH 2004 AND 2005

                  Total sample                          Established patients only
                  ≤5 days  ≥6 days  Total  Mean         ≤6 days  ≥7 days  Total  Mean
TeleWound group     10        9       19   21.00            7        4      11   12.45
Control group        9       16       19   38.53            1       10      11   48.18
                  χ2 = 5.73, p = 0.017                  χ2 = 7.07, p = 0.008

Clinic contacts

The program intervention was not designed to reduce contact between patients and clinicians; it was designed to reduce the cost of caring for complex, chronic wounds. Indeed, all patients with chronic wounds (in both the TeleWound and control groups) were encouraged to contact the clinic when they had a problem or a question, and they were not charged for such contacts. We wanted to ascertain whether the 2 groups differed in terms of this variable. The findings confirmed the similarity of the 2 groups, as shown in Table 5.

TABLE 5. OUTPATIENT CONTACTS BY TELEWOUND AND CONTROL GROUPS

                  Total sample                                   Established patients only
                  ≤15 contacts  ≥16 contacts  Total  Mean        ≤15 contacts  ≥16 contacts  Total  Mean
TeleWound group         8            11         19   16.11             5             6         11   14.73
Control group          11             8         19   13.63             4             7         11   17.00
                  χ2 = 0.95, p = 0.330                           χ2 = 0.19, p = 0.665

Visit acuity

We assessed differences in visits between the TeleWound and control groups at 3 levels: high, medium, and low acuity. No differences between the 2 groups were observed among those having either high- or low-acuity visits. On the other hand, there were significant differences among those having medium-acuity visits. This finding implies that the program had no effect on the intensity of care during outpatient visits when the patients had either serious or small problems; it did make a difference for those in between. However, that effect disappeared among established patients. (Data shown for medium acuity only, Table 6.)

TABLE 6. OUTPATIENT VISIT ACUITY BY TELEWOUND AND CONTROL GROUPS, MEDIUM ACUITY ONLY

                  Total sample                             Established patients only
                  ≤5 visits  ≥6 visits  Total  Mean        ≤5 visits  ≥6 visits  Total  Mean
TeleWound group       4          13       19   6.21            3          7        11   5.64
Control group         9           5       19   4.53            5          4        11   5.45
                  χ2 = 5.24, p = 0.022                     χ2 = 0.44, p = 0.508

Financial outcomes

In order to assess financial outcomes, we examined "all cost data" generated by study patients for calendar years 2004 and 2005, for both groups, TeleWound and control. "All cost data" is defined as any cost related to any and all care these patients received at our health system, both inpatient and outpatient, regardless of whether or not the care was wound related. Direct costs are those costs driven by and directly attributable to each individual patient. They are variable in nature, since they are generated by rendering care to that individual, per episode of care. Direct costs are essentially "controllable." On the other hand, indirect costs are those costs spread across all patients, and are fixed in nature.

TABLE 7. TOTAL FINANCIAL OUTCOMES (n = 19)

All financial data
                     Total direct  Direct costs  Total indirect  Indirect costs
                        costs      per patient       costs        per patient
Inpatient
  Year 04 TeleWound    433,970       22,841         242,874         12,783
  Year 05 TeleWound    122,011        6,422          64,323          3,385
  Total               $555,981      $29,262        $307,197        $16,168
  Year 04 Non-TW       459,809       24,200         272,055         14,319
  Year 05 Non-TW       425,687       22,405         237,289         12,489
  Total               $885,496      $46,605        $509,344        $26,808
Outpatient
  Year 04 TeleWound    118,257        6,224          55,052          2,897
  Year 05 TeleWound    121,752        6,408          52,389          2,757
  Total               $240,009      $12,632        $107,441         $5,654
  Year 04 Non-TW        43,575        2,293          24,959          1,314
  Year 05 Non-TW        28,386        1,494          16,863            888
  Total                $71,961       $3,787         $41,822         $2,202

Wound-related care only
                     Total direct  Direct costs  Total indirect  Indirect costs
                        costs      per patient       costs        per patient
Inpatient
  Year 04 TeleWound     61,506        3,237          34,100          1,795
  Year 05 TeleWound     30,677        1,615          16,299            858
  Total                $92,183       $4,852         $50,399         $2,653
  Year 04 Non-TW       193,311       10,174         114,757          6,040
  Year 05 Non-TW       106,265        5,593          58,122          3,059
  Total               $299,576      $15,767        $172,879         $9,099
Outpatient
  Year 04 TeleWound     26,826        1,412          15,343            808
  Year 05 TeleWound     11,746          618           6,671            351
  Total                $38,572       $2,030         $22,014         $1,159
  Year 04 Non-TW        18,670          983          11,345            597
  Year 05 Non-TW         8,818          464           6,416            338
  Total                $27,488       $1,447         $17,761           $935

Data shown in Table 7 reveal that both indirect and direct inpatient costs were considerably less for the TeleWound group as compared to the control group. Inpatient direct costs per patient also decreased from year 1 to year 2 for the TeleWound group, from $22,841 to $6,422, whereas they stayed relatively stable from year to year for the control group, from $24,200 to $22,405.
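Per-patient figures in Table 7 are simply group totals divided by the group size (n = 19). A quick spot-check of the inpatient "all financial data" direct-cost rows:

```python
# Spot-check of Table 7 per-patient averages (all financial data,
# inpatient direct costs). Totals are the two yearly figures summed.
N = 19  # patients per group

telewound_total = 433_970 + 122_011   # 2004 + 2005
control_total = 459_809 + 425_687

telewound_per_patient = round(telewound_total / N)   # 29,262
control_per_patient = round(control_total / N)       # 46,605
```

The same division reproduces the other per-patient columns, e.g. 433,970 / 19 ≈ 22,841, the 2004 TeleWound figure.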
On the other hand, outpatient direct costs per patient for the TeleWound group remained almost the same, $6,224 versus $6,408, whereas they declined for the control group from $2,293 to $1,494. The reason for the decline among the control group is not clear.
We also examined cost data related to wound management only, including services provided by any department or service unit within the health system. In all likelihood, we were able to account for all costs, because all these patients were regular clients of the health system. However, we cannot be certain that no other costs were incurred if other providers were used. These data are also presented in Table 7.
All costs incurred for wound management only declined for both groups. However, the TeleWound group experienced a slightly greater decline. On the inpatient side, the TeleWound group experienced a decline from an average of $3,237 to $1,615, whereas the average for the control group declined from $10,174 to $5,593. The most notable differential exists between the total direct inpatient costs for the TeleWound group versus the control group; the difference is staggering ($92,183 compared to $299,576, respectively). Similar trends of decline are observed for outpatient costs.
If we consider the total per-patient costs of wound-related care (inpatient plus outpatient) for the 2 groups, the advantage is still held by the TeleWound group for both types of costs. The TeleWound group incurred an average total direct cost of $6,882, and the control group incurred an average of $17,214. For total indirect costs, the TeleWound group incurred an average of $3,811, and the non-TeleWound group incurred an average of $10,034. For both types of costs, the advantage lies with the TeleWound group.
Finally, we examined the financial outcomes for the established patients only. This is the subset of 11 established cases in the TeleWound group who were also matched with the control group. We examined total financial outcomes related to all care received at the health system as well as financial outcomes related to wound management only. It may be recalled that we expected the established patients to show a greater effect from the intervention.

TABLE 8. FINANCIAL OUTCOMES FOR ESTABLISHED PATIENTS ONLY (n = 11)

All financial data
                     Total direct  Direct costs  Total indirect  Indirect costs
                        costs      per patient       costs        per patient
Inpatient
  Year 04 TeleWound    198,518       18,047         109,778          9,980
  Year 05 TeleWound     17,291        1,572           9,068            824
  Total               $215,809      $19,619        $118,846        $10,804
  Year 04 Non-TW       226,320       20,575         128,731         11,703
  Year 05 Non-TW       385,392       35,036         214,707         19,519
  Total               $611,712      $55,610        $343,438        $31,222
Outpatient
  Year 04 TeleWound     50,572        4,597          16,577          1,507
  Year 05 TeleWound     25,406        2,310          12,685          1,153
  Total                $75,978       $6,907         $29,262         $2,660
  Year 04 Non-TW        34,035        3,094          19,542          1,777
  Year 05 Non-TW        20,057        1,823          12,129          1,103
  Total                $54,092       $4,917         $31,671         $2,880

Wound-related care only
                     Total direct  Direct costs  Total indirect  Indirect costs
                        costs      per patient       costs        per patient
Inpatient
  Year 04 TeleWound     30,003        2,728          16,325          1,484
  Year 05 TeleWound     16,105        1,464           8,343            758
  Total                $46,108       $4,192         $24,668         $2,242
  Year 04 Non-TW       110,906       10,082          62,879          5,716
  Year 05 Non-TW        95,181        8,653          52,645          4,786
  Total               $206,087      $18,735        $115,524        $10,502
Outpatient
  Year 04 TeleWound     11,814        1,074           7,217            656
  Year 05 TeleWound      6,049          550           3,475            316
  Total                $17,863       $1,624         $10,692           $972
  Year 04 Non-TW        16,754        1,523          10,220            929
  Year 05 Non-TW         7,179          653           5,368            488
  Total                $23,933       $2,176         $15,588         $1,417

Among established TeleWound patients, the average inpatient direct cost for all care declined from $18,047 in 2004 to $1,572 in 2005, whereas the control group experienced a substantial increase, from $20,575 in 2004 to $35,036 in 2005.
The decline in the average cost of all outpatient care was relatively similar between the TeleWound and control groups (as shown in Table 8). However, overall, the TeleWound group had a substantial advantage over the control group. The comparisons of their total costs for the 2 years combined are as follows: the total average direct cost (inpatient and outpatient) for the TeleWound group was $26,526, versus $60,528 for the control group. Average indirect costs were $13,464 for the TeleWound group and $34,101 for the control group.
These differences become even more dramatic when we examine wound management costs only. Average inpatient direct costs for the TeleWound group were $4,192 for both years, whereas the control group incurred an average of $18,735 per patient. Indirect costs were also markedly different: the TeleWound group incurred an average of $2,242 per patient, and the control group incurred an average cost of $10,502.
If we consider the total per-patient inpatient and outpatient costs of wound-related care for the 2 groups, a clear advantage is held by the TeleWound group for both types of costs. The TeleWound group incurred an average total direct (inpatient plus outpatient) cost of $5,816, and the control group incurred an average of $20,911. For total indirect (inpatient and outpatient) costs, the TeleWound group incurred an average of $3,215 and the non-TeleWound group incurred an average of $11,919.

CONCLUSION

The urgent needs of our aging population, in particular nursing home residents, prisoners, and Veterans Hospital patients, provide strong incentives to develop innovative methods to deliver clinically appropriate and cost-effective wound care. Despite the proliferation of wound care centers, patients with pressure ulcers and associated multiple comorbidities are often poorly suited for management at these sites. Hence, we investigated the appropriateness of telemedicine for in-home wound management.
Telemedicine is a particularly attractive strategy for those with spinal cord injuries, because their transportation is always difficult, time consuming, and expensive.
The leading hypothesis for this study posited that subjects managed with a combination of traditional wound care plus telemedicine would have superior financial outcomes compared with subjects managed with a traditional "only see the doctor in person" approach. We utilized a case-controlled study design in which patients in the control group were carefully matched for wound type, comorbidities, distance of residence to clinic, and payer mix. Primary outcome variables included services provided and financial performance. The subjects were followed for between 12 and 24 months. A single provider managed all the outpatient visits and attended all of the telemedicine sessions to evaluate the wounds.
The results of this clinical study are striking and provide strong encouragement that a single provider can effect positive clinical and financial outcomes using a telemedicine wound care program. For example, the telemedicine group had 50% fewer emergency department visits than the control group. Furthermore, the subjects in the telemedicine group averaged less than 1 emergency department visit each during the 2-year period of the study. This is a particularly important statistic, because presentation in an emergency room with a pressure ulcer and fever virtually guarantees hospital admission.
The analysis of the data provides insight into the impact of an emergency visit on hospitalization. In the telemedicine group, hospitalization occurred half as many times as in the control group. Although all subjects in the trial required hospitalization, the incidence was threefold higher in the control group. The most telling statistic is the length of stay during hospitalization, because it is an indicator of acuity of illness and is a significant driver of cost.
Days of hospitalization in the telemedicine group were little more than half those of the controls (399 versus 732 days, a 45% reduction). Thus, the data show that telemedicine wound care lowers the admission rate as well as the actual length of stay for subjects with pressure ulcers. Clearly, fewer hospital days would increase revenue margins for hospitals using the prospective payment system for pressure ulcers.
Good management of subjects in the trial came at a small price. The paradigm shift from on-site clinic care to remote telemedicine visits required that the telemedicine group be "seen" more frequently than the controls. Although the difference was not statistically significant, it does imply more oversight from the clinician, because the telemedicine group was evaluated weekly via digital photography and nursing assessments. This factor led to the success of the TeleWound program as a credible modality to manage pressure ulcers at lower cost and possibly with better health outcomes. Cost savings were even more dramatic on the inpatient side.
The impact of this study will be a function of technology use by clinicians who manage wounds. Surgeons are particularly well suited for this task because they aggressively debride wounds and have short segments of time to devote to reviewing telemedicine photos and nursing assessments. Hospitals will profit immensely from this approach, because hospital beds are at a premium and their goals are to maximize revenue, access, and quality of care. The next step in this process is to integrate the model into daily practice at bellwether medical centers to determine programmatic effectiveness in larger clinical arenas.

REFERENCES

1. Ablaza V, Fisher J. Telemedicine and wound care management. Posted 1998. Available at http://www.rubic.com/articles/article2.html (Last accessed December 2006.)
2. Available at http://www.criticalcaresystems.com/woundcare/patients/overview.html (Last accessed December 2006.)
3. Beckrich K, Aronovitch SA.
Hospital-acquired pressure ulcers: A comparison of costs in medical vs. surgical patients. Nurs Econ 1999;17:263–271.
4. Allman RN, Laprade CA, Noel LB, et al. Pressure sores among hospitalized patients. Ann Intern Med 1986;105:337–342.
5. Kerstein M, Gemmen E, van Rijswijk L. Cost and cost effectiveness of venous and pressure ulcer protocols of care. Dis Manage Health Outcomes 2001;9:651–663.
6. Alterescu V. The financial costs of inpatient pressure ulcers to an acute care facility. Decubitus 1989;2:14–23.
7. Whittington K, Patrick M, Roberts J. A national study of pressure ulcer prevalence and incidence in acute care hospitals. JWOCN 2000;27(4):209–215.
8. Thomas DR. Pressure ulcers. In: Cassel CK, Cohen HJ, Larson EB, Meier DE, Resnick NM, Rubenstein LZ, Sorenson LB, eds. Geriatric Medicine, 3rd ed. New York: Springer, 1997:767–785.
9. Making health care safer: A critical analysis of patient safety practices. Evidence report/technology assessment, number 43. AHRQ publication no. 01-E058, July 2001. Rockville, MD: Agency for Healthcare Research and Quality. Available at http://www.ahrq.gov/clinic/ptsafety/ (Last accessed December 2006.)
10. Agency for Health Care Policy and Research (AHCPR) Panel for the Prediction and Prevention of Pressure Ulcers in Adults. Pressure ulcers in adults: Prediction and prevention. Clinical practice guideline, number 3, AHCPR publication no. 92-0047. Rockville, MD: Agency for Health Care Policy and Research, Public Health Service, U.S. Department of Health and Human Services, May 1992.
11. Allman RM. Pressure ulcers among the elderly. N Engl J Med 1989;320:850.
12. Ablaza V, Fisher J. Wound care via telemedicine: The wave of the future. Posted 1999. Available at http://www.rubic.com/articles/article1/html (Last accessed December 2006.)
13. Berlowitz DR, Wilking SVB. The short-term outcome of pressure sores. J Am Geriatr Soc 1990;38:748–752.
14. Salyer J. Wound management in the home: Factors influencing healing.
Home Health Care Nurse 1988;6:24–33.
15. Kinsella A. Internet-based wound care. Home Care Automation Report 2002;7–8.
16. Piper B. Wound prevalence, types and treatment in home care. Advances in Wound Care 1999. Available at http://www.findarticles.com/p/articles/mi_qa3964/is_1999904/ai_n8841992/print (Last accessed December 2006.)
17. Mauser E. Medicare home health initiative: Current activities and future direction. Health Care Financing Rev 1997;18:275–291.
18. George J. Cuts hit home health hard. Philadelphia Business J 1998;3:42.
19. Dansky K, Palmer L, Shea D, Bowles K. Cost analysis of telehomecare. Telemed J e-Health 2001;7(3):225–232.
20. Wooton R, Dimmick S, Kvedar J. Home Telehealth: Connecting Care Within the Community. London: Royal Society of Medicine Press Ltd, 2006.
21. Kaufman MW. The WOC nurse: Economic, quality of life, and legal benefits. Dermatol Nurs 2001;9:153–159.
22. Bourne V. Community nurses' view of leg ulcer treatment. Prof Nurse 1999;15:21–24.
23. Flanagan N, Rotchell L, Fletcher J. Community nurses', home carers', and patients' perceptions of factors affecting venous leg ulcer recurrence and management of services. J Nurs Manag 2001;9:153–159.
24. Arnold N, Weir D. Retrospective analysis of healing in wounds cared for by ET nurses versus staff nurses in a home setting. J Wound Ostomy Continence Nurs 1994;21:156–160.

Address reprint requests to:
Noura Bashshur, M.H.S.A.
Department of Plastic Surgery
University of Michigan Health System
Domino's Farms, Lobby A, P.O. Box 441
24 Frank Lloyd Wright Drive
Ann Arbor, MI 48106
E-mail: nourab@umich.edu

This article has been cited by: 1. Caroline Chanussot-Deprez, José Contreras-Ruiz. 2009. Telemedicine in wound care. International Wound Journal 5:5, 651–654. [CrossRef] http://dx.doi.org/10.1111/j.1742-481X.2008.00478.x
Clim Res 39: 175–177, 2009. doi: 10.3354/cr00829. Published September 10, 2009.

Phenology has emerged from the shadows to become a major component of climate change studies. In part this is because the phenological responses of species to temperature, particularly in plants, are very strong. Phenological change is relatively easy to identify, especially in comparison with changes in distribution, fecundity, population size, morphology, etc. Even with the relatively modest levels of climate warming experienced so far, phenological change has become very evident. Phenology formed a large part of the evidence on climate impacts in the most recent IPCC report (Rosenzweig et al. 2008). ISI Web of Knowledge reports that publications containing 'Phenolog*' as a topic have risen from 162 in 1990 to 1099 in 2008 (with an unexplained outlier of only 70 in 2001). The proportion of these that also contain 'Climate' as a topic has risen steadily (Fig. 1), emphasising the growing importance of phenology in climate studies.
The advantages of phenology include that it is relatively simple to record, that it is robust to differences in collection protocols, and that extensive archive material exists. The latter may not yet be fully identified, and is certainly not completely available in digital form. Long-term phenological series have been recorded in Japan and China; the former has a record of cherry flowering going back to 705 AD (Menzel & Dose 2005). Europe, too, has a long tradition in phenology; Theophrastus (ca. 371 to ca. 287 BC), a student of Aristotle, produced a Calendar of Flora in Athens, Greece (Stillingfleet 1762). There are probably a number of archaeological records with phenological data yet to be discovered.
Collaborative European networks have several precursors, the most widespread of which was the coordination of records by Professors Ihne and Hoffmann from Giessen, Germany between 1883 and 1941 (Nekovar et al. 2008).
The current Special of Climate Research (CR) represents one of the outputs of COST 725 of the 'European Cooperation in the field of Scientific and Technical Research', funded by the European Science Foundation. For the 5 years ending in 2009, COST 725, 'Establishing a European Phenological Data Platform for Climatological Applications', brought together researchers from 27 European countries, enabling them to meet twice a year; it also funded scientific exchange visits and encouraged collaboration.
An early output of COST 725 was an extensive analysis of >125 000 phenological series for a standardised time period (Menzel et al. 2006). Previous to this, papers summarising phenological change relied on published studies, potentially biased in favour of those showing significant change, whereas Menzel et al. (2006) summarised all available data series from across Europe. This meta-analysis was featured prominently in the IPCC report and suggested average advances in phenological phenomena of 7.5 d in the period 1971–2000. Furthermore, countries that had experienced greater temperature increases were associated with greater phenological advances.
A book summarising the history and current status of phenology in Europe (Nekovar et al. 2008) is also an output of COST 725. This is the first time that such an extensive summary has been compiled, and it will remain a valuable reference for phenological researchers for many years to come. A major component of COST 725 was the construction of a European database of phenological observations. Despite the differences in languages, protocols, and species, this database, hosted by ZAMG (Zentralanstalt für Meteorologie und Geodynamik) in Vienna, Austria, is nearing completion. Proposals are being considered that would enable it to be kept up to date.
This CR Special presents 9 studies based on the work of 36 scientists from 11 countries.
INTRODUCTION: European cooperation in plant phenology. Tim H. Sparks1,*, Annette Menzel2, Nils Chr. Stenseth3. 1 68 Girton Road, Girton, Cambridge CB3 0LN, UK; 2 Centre of Life and Food Sciences Weihenstephan, Chair of Ecoclimatology, Technische Universität München, Am Hochanger 13, 85354 Freising, Germany; 3 Centre for Ecological and Evolutionary Synthesis (CEES), Department of Biology, University of Oslo, PO Box 1066, Blindern, 0316 Oslo, Norway. *Email: thsparks@btopenworld.com. OPEN ACCESS. Contribution to CR Special 19 ‘European plant phenology’. © Inter-Research 2009 · www.int-res.com. Rutishauser et al. (2009, this Special) look at some very long data series from Europe, examine correlations between them and investigate temporal changes in both trends and temperature responsiveness. Phenologists tend to work with climatic data that are readily available: either monthly mean shade temperatures or daily temperature accumulations. Whenever climate change is considered, it is usually monthly or annual mean temperatures that are examined. Sparks et al. (2009a, this Special) examine changes in a number of potentially important summaries of daily temperature data, including thresholds and temperature accumulations. Maps of changes in European temperatures illustrate the spatial context. Kalvāne et al. (2009, this Special) show how recent temperature changes have expressed themselves as phenological changes in the Baltic countries of Latvia and Lithuania. The influence of the North Atlantic Oscillation and of precipitation is also examined. The subsequent study by Sparks et al. (2009b, this Special) looks at phenological change in flowering within Europe’s last remaining primeval lowland forest. Change is apparent in this pristine environment, reminding us that the consequences of a changing climate extend beyond those areas directly modified by humans. Ziello et al.
(2009, this Special) study changes in phenology with increasing altitude in the European Alps. A strong relationship between higher altitude and delayed phenology was found, and recent advances in flowering dates may have been greater at higher altitude. Estrella et al. (2009, this Special) use 36 000 data series to examine changes in phenology in relation to location, phase timing and human population density. Phenological events are divided into different categories, and differences in the strength of phenological advance in the different categories were revealed. Schleip et al. (2009, this Special) use Bayesian methods to examine phenological changes in 2600 European data series. This approach confirmed that recent phenological change has not been linear; rather, it has been abrupt, associated with rising temperature. Phenological trends were most marked in NW Europe. Technological applications of phenology are considered in the final 2 studies. Ahrends et al. (2009, this Special) investigate the use of digital photography on flux towers and the relationship between phenology and gross primary productivity. Different parts of the forest canopy were identified for examination of the development of individual tree species. Karlsen et al. (2009, this Special) use satellite imagery on an 8 km grid to examine the beginning and end of the growing season in Fennoscandia. They compare these data with local records of Betula phenology and conclude that changes in the growing season over the study area are heterogeneous, but average a lengthening of the growing season by 6 days per decade. COST 725 has made a substantial contribution to phenological collaboration between European countries. The importance of phenological records is now acknowledged by some countries and organisations that had considered cutting back phenological programmes. New schemes, for example in the Republic of Ireland and Sweden, have been inspired by this work.
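Several of the studies above relate onset dates to daily temperature accumulations rather than monthly means. The standard accumulation is a growing-degree-day (GDD) sum; the sketch below shows the idea on two synthetic spring warming ramps (the 5°C base, the 100-GDD requirement and the linear series are illustrative assumptions, not values from any of the papers):

```python
def growing_degree_days(tmean_by_day, base=5.0):
    """Sum of daily mean temperature excess above a base threshold."""
    return sum(max(0.0, t - base) for t in tmean_by_day)

def day_threshold_reached(tmean_by_day, target_gdd, base=5.0):
    """First day-of-year on which the accumulated excess reaches target_gdd."""
    total = 0.0
    for doy, t in enumerate(tmean_by_day, start=1):
        total += max(0.0, t - base)
        if total >= target_gdd:
            return doy
    return None  # threshold never reached

# Two toy spring ramps; the +2 degC series mimics a uniformly warmer climate.
cool = [0.1 * d for d in range(1, 151)]
warm = [t + 2.0 for t in cool]

d_cool = day_threshold_reached(cool, 100.0)
d_warm = day_threshold_reached(warm, 100.0)
print(d_cool, d_warm)  # the warmer series reaches 100 GDD earlier
```

In this toy series a uniform 2°C warming brings the threshold date forward by about three weeks, the kind of advance that long phenological records register.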
The prospects for increased phenological research have never looked so encouraging. LITERATURE CITED Ahrends HE, Etzold S, Kutsch WL, Stoeckli R and others (2009) Tree phenology and carbon dioxide fluxes: use of digital photography for process-based interpretation at the ecosystem scale. Clim Res 39:261–274 Estrella N, Sparks TH, Menzel A (2009) Effects of temperature, phase type and timing, location, and human density on plant phenological responses in Europe. Clim Res 39:235–248 Kalvāne G, Romanovskaja D, Briede A, Bakšienė E (2009) Influence of climate change on phenological phases in Latvia and Lithuania. Clim Res 39:209–219 Karlsen SR, Høgda KA, Wielgolaski FE, Tolvanen A, Tømmervik H, Poikolainen J, Kubin E (2009) Growing-season trends in Fennoscandia 1982–2006, determined from satellite and phenology data. Clim Res 39:275–286 Menzel A, Dose V (2005) Analysis of long-term time series of the beginning of flowering by Bayesian function estimation. Meteorol Z 14:429–434 Menzel A, Sparks TH, Estrella N, Koch E and others (2006) European phenological response to climate change matches the warming pattern. Glob Change Biol 12:1969–1976 Nekovar J, Koch E, Kubin E, Nejedlik P, Sparks T, Wielgolaski FE (eds) (2008) The history and current status of plant phenology in Europe. COST Action 725. COST, Brussels [Fig. 1. Percentage of papers associated with the ISI Web of Knowledge with the topic ‘Phenolog*’ that also contain ‘climate’ as a topic has been rising steadily (2009 incomplete). The percentage for 2001 may be an outlier, as the total number (denominator) of papers with the topic ‘Phenolog*’ was unusually low] Rosenzweig C, Karoly D, Vicarelli M, Neofotis P and others (2008) Attributing physical and biological impacts to anthropogenic climate change.
Nature 453:353–357 Rutishauser T, Schleip C, Sparks TH, Nordli Ø and others (2009) Temperature sensitivity of Swiss and British plant phenology from 1753 to 1958. Clim Res 39:179–190 Schleip C, Sparks TH, Estrella N, Menzel A (2009) Spatial variation in onset dates and trends in phenology across Europe. Clim Res 39:249–260 Sparks TH, Aasa A, Huber K, Wadsworth R (2009a) Changes and patterns in biologically relevant temperatures in Europe 1941–2000. Clim Res 39:191–207 Sparks TH, Jaroszewicz B, Krawczyk M, Tryjanowski P (2009b) Advancing phenology in Europe’s last lowland primeval forest: non-linear temperature response. Clim Res 39:221–226 Stillingfleet B (1762) Miscellaneous tracts relating to natural history, husbandry and physick; to which is added the calendar of flora. Dodsley, London Ziello C, Estrella N, Kostova M, Koch E, Menzel A (2009) Influence of altitude on phenology of selected plant species in the Alpine region (1971–2000). Clim Res 39:227–234
Epilepsia, 48(3):531–538, 2007. Blackwell Publishing, Inc. © 2007 International League Against Epilepsy. Distribution of Auditory and Visual Naming Sites in Nonlesional Temporal Lobe Epilepsy Patients and Patients with Space-Occupying Temporal Lobe Lesions. ∗Marla J. Hamberger, ‡Shearwood McClelland, III, †Guy M. McKhann, II, ∗Alicia C. Williams, and †Robert R.
Goodman. ∗Department of Neurology, and †Department of Neurological Surgery, College of Physicians and Surgeons, Columbia University, New York, New York; and ‡Department of Neurosurgery, University of Minnesota Medical School, Minneapolis, Minnesota, U.S.A. Summary: Purpose: Current knowledge regarding the topography of essential language cortex is based primarily on stimulation mapping studies of nonlesional epilepsy patients. We sought to determine whether space-occupying temporal lobe lesions are associated with a similar topography of language sites, as this information would be useful in surgical planning. Methods: We retrospectively compared the topography of auditory and visual naming sites in 25 nonlesional temporal lobe epilepsy patients (“nonlesional”) and 18 patients with space-occupying lesions (“lesional”) who underwent cortical language mapping before left temporal resection. Results: Both groups exhibited a similar pattern of auditory naming sites anterior to visual and dual (auditory–visual) naming sites; no group differences were specific to auditory or visual naming sites. However, significantly fewer lesional (10 of 18) compared with nonlesional patients (21 of 25) exhibited any naming sites in the temporal region (p = 0.04). Although the proportion of naming sites on the superior temporal gyrus was similar between groups, naming sites were found on the middle temporal gyrus in 13 of 25 nonlesional patients, yet in only one of 18 lesional patients (p = 0.002). Across groups, patients with visual naming sites were older than patients without visual naming sites identified (p = 0.02). Conclusions: The precise location of essential language cortex cannot be reliably inferred from anatomic landmarks or patient-related variables.
As time constraints are a common quandary in stimulation mapping, the different patterns reported here for patients with and without space-occupying lesions can be used to guide mapping strategies, thereby increasing the efficiency by which positive naming sites are identified. Key Words: Cortical language mapping—Cortical stimulation—Epilepsy—Naming sites—Temporal lobe. Accepted September 23, 2006. Address correspondence and reprint requests to Dr. M.J. Hamberger at The Neurological Institute, 710 West 168th Street, Box 100, New York, NY 10032, U.S.A. E-mail: mh61@columbia.edu. doi:10.1111/j.1528-1167.2006.00955.x. The temporal lobes are a common locus for neuropathology requiring surgical intervention, including two thirds of medically intractable partial seizures with demonstrable foci, many vascular malformations and nearly 40% of gliomas (Laws et al., 1986; Semah et al., 1998; Moran et al., 1999; Yeh et al., 2005; Zhao et al., 2005). One challenge of temporal lobe surgery is to remove a sufficient amount of pathologic tissue, without removing or damaging tissue that is critical for normal language function. Although the Wernicke area, loosely considered to represent the posterior portion of the superior temporal gyrus, serves as a general landmark for eloquent cortex supporting language, research in language localization has shown considerable interindividual variability in the location of essential language cortex (Penfield and Roberts, 1959; Ojemann, 1983a). Consequently, surgical resection involving the language-dominant temporal region often requires pre-resection, stimulation-based cortical language mapping (Black and Ronner, 1987; Ojemann et al., 1989). Mapping is performed either extraoperatively, by using permanently implanted electrode grids and strips, or intraoperatively, by using electrodes placed on the cortical surface after craniotomy (Luders et al., 1987; Ojemann, 1990).
Cortical sites at which electrical stimulation impedes performance on one or more language tasks are considered essential for normal language function. These positive sites (and 1 cm of adjacent cortex) are typically then spared from resection, with the goal of preserving postoperative language abilities (Ojemann and Dodrill, 1981; Hermann et al., 1999). The clinical population composing most stimulation-mapping studies has been nonlesional epilepsy patients, and the task most often used to identify essential language cortex has been visual object naming (Ojemann, 1983b; Hermann and Wyler, 1988). Within the parameters of this population and this particular task, naming sites have been considered to be located in the posterior portion of the temporal region, primarily on the superior temporal gyrus (STG) and middle temporal gyrus (MTG), with some suprasylvian representation. Recent work in stimulation mapping using auditory description naming (e.g., “a pet that purrs”), in addition to visual naming, has shown that auditory naming sites (i.e., sites at which stimulation impairs auditory but not visual naming) are generally located anterior to visual naming sites or “dual” sites (i.e., sites at which stimulation disrupts both auditory and visual naming) (Hamberger et al., 2001). Similar to that found for visual naming sites, removal of or encroachment on auditory naming sites is associated with postoperative word-finding decline (Hamberger et al., 2005). Given this history of stimulation language mapping, relatively little is known regarding the topographic representation of visual naming sites in patients with space-occupying lesions, and no published studies address the topography of auditory naming sites in this population. Such information would be useful in guiding surgical planning and mapping strategies, particularly given the time constraints inherent in intraoperative mapping.
In this retrospective study, we sought to examine and compare the cortical distribution of auditory and visual naming sites in nonlesional temporal lobe epilepsy patients and patients with space-occupying temporal lobe lesions. We hypothesized that space-occupying lesions might displace both auditory and visual naming sites outside the lateral temporal region, where language sites have been most frequently identified in nonlesional epilepsy patients. Specifically, we reasoned that lesional patients would have fewer positive naming sites in lateral temporal cortex compared with those demonstrated in nonlesional patients. To address this hypothesis, we retrospectively analyzed the topography of auditory and visual naming sites in 25 nonlesional and 18 lesional patients who underwent cortical language mapping before left temporal surgical resection. METHODS Subjects A series of 43 consecutive patients who underwent cortical language mapping before left temporal surgical resection and met inclusion criteria was included in this study. Subjects were required to be left hemisphere language dominant, and either native English speakers or to have learned English before age 5 years. Left hemisphere language dominance was identified by intracarotid amobarbital testing, functional imaging, and/or intraoperative identification of language sites. Twenty-five of these patients had temporal lobe epilepsy and were categorized as “nonlesional,” in that they had no space-occupying lesions (13 with medial temporal sclerosis as defined by MRI, 12 with no abnormality on MRI), and 18 patients, categorized as “lesional,” had space-occupying temporal lobe lesions. Lesions consisted of 11 temporal tumors (two oligodendrogliomas, three gangliogliomas, one astrocytoma, one glioblastoma multiforme, two dysembryoplastic neuroepithelial tumors, one mixed oligoastrocytoma, and one low-grade glioma), five cavernous malformations, and two arteriovenous malformations.
All patients with nonlesional temporal lobe pathology had medically intractable seizures, as did eight of 18 patients with temporal lobe lesions. Twenty-six patients underwent intraoperative language mapping before resection, all at Columbia University Medical Center (CUMC). The remaining 17 patients (16 nonlesional) underwent extraoperative language mapping via subdural electrodes, nine at CUMC and seven at New York University Medical Center (NYU). Demographic and clinical information for the two groups was as follows: age at surgery (nonlesional, 35.6; SD, 11.5; lesional, 30.9; SD, 12.6; p = 0.21), years of education (nonlesional, 14.6; SD, 3.0; lesional, 15.0; SD, 3.1; p = 0.69), and age at onset (onset of recurrent seizures or detection of lesion; nonlesional, 16.8 years; SD, 11.3; lesional, 27.8 years; SD, 13.9; p = 0.007). The group difference in onset age is addressed later. Preoperative scores for Verbal IQ (VIQ) (Wechsler, 1997), Visual naming (Boston Naming Test; Kaplan et al., 1983), and Auditory naming (Hamberger and Seidel, 2003) were available for 23 nonlesional and 13 lesional patients (see Results). Electrodes For patients evaluated intraoperatively (17 lesional, nine nonlesional), perisylvian sites were stimulated by using a bipolar stimulator with 2-mm-diameter ball contacts separated by 5 mm (Ojemann Cortical Stimulator; Radionics, Burlington, MA, U.S.A.). The sites were chosen based on gyral/vascular anatomy and spaced <10 mm apart. Electrode positions were documented by using digital photography and schematic diagrams. For patients who underwent extraoperative mapping (one lesional, 16 nonlesional), a 64-contact (i.e., eight-by-eight) grid array, with 5-mm-diameter electrodes embedded in Silastic with center-to-center interelectrode distances of 1 cm (Ad-Tech, Racine, WI, U.S.A.), was positioned over the temporal/perisylvian region (trimmed as needed to conform to the covered area).
The exposed cortical surface and grid position were documented by digital photography and schematic diagrams. Initial schematics were drawn by the surgeon intraoperatively, while looking directly at the brain surface. Digital photographs were then used postoperatively to refine the diagrams. Additionally, subdural electrode positions were verified by skull radiographs postoperatively. The STG and MTG, >3.5 cm to ∼8.5 cm from the temporal pole, were reliably mapped in all 43 patients. This was determined empirically by segmenting the STG, MTG, inferior temporal gyrus (ITG), and suprasylvian region into centimeter-wide sections from the temporal pole and comparing the number of sites tested within each section between groups. Although many patients also had anterior temporal (i.e., ≤3.5 cm from the temporal pole) and suprasylvian mapping, only the areas that were comparably mapped in all patients were included in the analyses described later. Mapping procedures All patients performed both auditory and visual naming tasks. All auditory and visual naming stimuli were administered to patients within 1 to 4 months before surgery. Auditory and visual items, selected from previously published stimuli, were equated for word frequency and were similar in difficulty level (Hamberger and Seidel, 2003). Only items that patients successfully completed at baseline were administered during cortical mapping (items associated with word-retrieval errors at baseline could not be used to identify stimulation-related errors during mapping). For epilepsy patients, mapping was performed while antiepileptic drug (AED) levels were in the therapeutic range to minimize afterdischarges and seizure activity. Extraoperative language mapping was conducted after video-EEG monitoring to identify the seizure-onset zone. Testing was conducted during electrical stimulation applied to adjacent electrodes.
When results were positive, each electrode was studied individually and referenced to a remote electrode in “silent cortex.” All available sites along lateral temporal cortex, as well as parietal sites in the perisylvian area, were stimulated. Patients who underwent intraoperative mapping were initially anesthetized with propofol. Language mapping began after craniotomy/dural opening, electrocorticography, and stimulation to determine the threshold for afterdischarges. Several practice trials were conducted to ensure an adequate level of patient responsiveness, and that naming ability was at the level the patient demonstrated at baseline, preoperatively. Stimulation sites were primarily in the vicinity of the anticipated resection, as determined by the presence of a lesion or intracranial EEG evidence of seizure onset. If no visual naming cortex was identified, additional perisylvian sites were tested in an attempt to positively identify visual naming cortex (rather than relying on negative responses alone). Sites were tested with a bipolar stimulator, as described earlier. Stimulation mapping followed well-established methods (Ojemann, 1983a, 1991). For both intra- and extraoperative mapping at CUMC, a constant-current stimulator (Ojemann Cortical Stimulator; Radionics, Inc., Burlington, MA, U.S.A.) delivered a biphasic square waveform at a frequency of 60 Hz with a 2-ms pulse duration and delivered amperage ranging from 3 to 15 mA during intraoperative mapping. Mapping at NYU was conducted by using a Grass Instruments S-12 cortical stimulator (Quincy, MA, U.S.A.) with a biphasic square waveform at a frequency of 50 Hz with a 0.3-ms pulse duration, with amperage ranging from 3 to 15 mA. Afterdischarge levels were determined by increasing amperage until an afterdischarge was elicited, with an upper limit of 15 mA.
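For intuition about what these parameters deliver, the charge per pulse phase is simply current × pulse width (and mA × ms conveniently equals µC). The sketch below runs that arithmetic for the two stimulators described; the assumption that the quoted pulse duration applies to a single phase of the biphasic waveform is ours, for illustration only.

```python
def charge_per_phase_uC(current_mA: float, phase_ms: float) -> float:
    """Charge per pulse phase in microcoulombs: Q = I * t (mA x ms = uC)."""
    return current_mA * phase_ms

# CUMC protocol: 60 Hz, 2-ms pulses, up to 15 mA (values from the text)
cumc_max = charge_per_phase_uC(15, 2.0)
# NYU protocol: 50 Hz, 0.3-ms pulses, up to 15 mA
nyu_max = charge_per_phase_uC(15, 0.3)
# Pulses delivered over a maximum 10-s stimulation at 60 Hz
pulses = 60 * 10
print(cumc_max, nyu_max, pulses)  # 30.0 uC, 4.5 uC, 600 pulses
```

The order-of-magnitude difference in per-phase charge between the two protocols is one reason afterdischarge thresholds are titrated per site rather than fixed across centers.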
Amperage for stimulation was set at 0.5 to 1 mA below that which elicited an afterdischarge (or 15 mA), which was determined for each site individually. Results reported here are from trials during which no afterdischarges were elicited. At least two trials of both visual and auditory naming were conducted at each site. If results were ambiguous or the patient was temporarily inattentive, additional trials were administered. For visual naming, patients were shown line drawings of common items (e.g., bench, helicopter), and for auditory naming, patients heard oral descriptions of concrete items (e.g., “What a king wears on his head”). For visual naming, patients began with the phrase “This is a” to enable differentiation between speech arrest and anomia, whereas, for auditory naming, patients were instructed to name the target item. To reduce differences in duration of cortical stimulation across tasks, the auditory stimuli were limited to those that contained a maximum of eight words and could be presented clearly within 4 s. Additionally, the requirement for patients during visual naming to articulate the carrier phrase (i.e., This is a ———) before naming the pictured object further balanced the stimulus-processing and stimulation-duration times among tasks. For each task, electrical stimulation began immediately before presentation of pictures or auditory descriptions and lasted for a maximum of 10 s, but terminated immediately on the patient’s production of a correct response. For both tasks, patients were instructed to respond as quickly as possible. Sites were considered critical for task performance if the patient could not name target items during stimulation, but provided correct responses on cessation of stimulation. When one of two trials was performed inaccurately, another two trials were administered. Sites were considered critical for task performance only when responses to both of these two trials were incorrect.
Sites at which this further testing resulted in 50% accuracy were not considered critical for task performance. Additionally, if it became apparent during the procedure that naming abilities had deteriorated to a level below that demonstrated at baseline, mapping was discontinued. Data analysis For each patient, the location of electrode sites was determined by intraoperative digitized photographs and schematic drawings, and supplemented by postoperative skull radiographs. Naming sites from each patient were plotted on a schematic of the temporal lobe region and coded to indicate whether auditory, visual, or both auditory and visual naming were disrupted by stimulation. The topographic distribution and proportion of auditory and visual naming sites in the STG versus the MTG were compared in lesional and nonlesional patients by using χ2 analysis or Fisher’s exact test, depending on cell sizes. Although some patients had more extensive mappings that included anterior and inferior temporal cortex, group comparisons of the topography of naming sites included only the STG and MTG regions that were reliably mapped in all patients. T tests were used for comparisons of demographic and patient-related data. Pearson correlations were performed to assess the relation between the number of naming sites identified per patient and patient-related variables. Data from auditory and visual naming were analyzed separately, and subsequently grouped together when no differences were found in the modality-specific results. RESULTS Overall, patients in the nonlesional group were more likely to have naming sites identified relative to patients in the lesional group (Fig. 1). Of the 25 nonlesional patients, at least one positive naming site was found in 21 patients, whereas, of the 18 lesional patients, only 10 patients exhibited any positive naming sites (p = 0.04).
No group differences were noted with respect to the number of patients who exhibited auditory (p = 0.63) or visual (p = 0.10) naming sites. Although fewer sites were identified in the lesional compared with the nonlesional group, the spatial pattern of auditory naming sites anterior to visual naming sites was similar and significant in both groups (nonlesional, p = 0.001; lesional, p = 0.013). However, the superior/inferior distribution of positive naming sites (both auditory and visual) differed. Specifically, 17 of 25 nonlesional patients (25 sites) and 10 of 18 lesional patients (18 sites) had sites identified on the STG (p = 0.40), whereas 13 of 25 nonlesional patients (23 sites) yet only one of 18 lesional patients (one site) had sites identified on the MTG (p = 0.002). Although most lesional patients were tested intraoperatively and most nonlesional patients were tested extraoperatively, no group differences were found in the number of sites tested in lesional (mean, 12.4; SD, 6.5) versus nonlesional patients (mean, 14.2; SD, 5.7; p = 0.35). As noted, analyses of mapping results were limited to the temporal areas that were reliably mapped in both groups. Thus the number of naming sites identified is not merely a function of the number of sites or the size of the region tested. Additionally, no significant differences were seen in the number of naming sites identified in patients with (mean, 2.2; SD, 1.9) versus without epilepsy (mean, 1.7; SD, 3.1; p = 0.53). To determine whether the presence of naming sites in the lesional group was related to lesion location, lesion locations were classified as “anterior” if the posterior margin of the lesion was <5 cm from the temporal pole, and as “posterior” if the anterior margin of the lesion was >5 cm from the temporal pole. Of the six patients with anterior lesions, three had positive naming sites, and of the 12 patients with posterior lesions, seven had positive naming sites (p = 1.0).
Thus to the extent we were able to quantify lesion location, this did not account for the presence or absence of positive naming sites. Analysis of lesion type also showed no consistent relation with presence or absence of naming sites (p = 0.67). As noted earlier, age at onset was significantly earlier in the nonlesional group. Although onset age showed no systematic correlation with the number of naming sites identified (r = −0.04; p = 0.82), it should be noted that correlations involving the number of naming sites are likely limited by the restricted range. The number of naming sites identified per patient ranged from none to 10, yet with a mean of 2.1 and a median of 2. T tests comparing patients with and without any naming sites identified (likely a more valid means of analyzing these data) showed no significant differences in onset age (p = 0.79), VIQ (p = 0.23), or education level (p = 0.26). However, patients with sites identified were older at the time of surgery (mean age, 36.8 years; SD, 12.1) than were patients with no sites identified (mean age, 25.8; SD, 7.8; p = 0.006). This difference was specific to the presence of visual (not auditory) naming sites (visual naming sites found: mean age, 37.7; SD, 10.7; visual naming sites not found: mean age, 29.2; SD, 12.2; p = 0.02). As noted earlier, age at the time of surgery was comparable between lesional and nonlesional groups (p = 0.23). Preoperative scores on verbal measures were available for 23 nonlesional patients and 13 lesional patients (Table 1). As shown, VIQ and visual naming were significantly stronger in the lesional group; auditory naming was similar in direction, but the group difference was not significant. Neither VIQ (r = 0.04; p = 0.80) nor Visual naming (r = 0.17; p = 0.33) correlated with the number of auditory or visual naming sites identified.
Additionally, no differences in verbal scores were noted between patients who did and did not have naming sites identified (VIQ: p = 0.23; Visual naming: p = 0.66; Auditory naming: p = 0.24). Of note, VIQ scores for both lesional and nonlesional patients were well within the average range, whereas visual and auditory naming scores were below the average range in both groups, with impaired naming performance in the nonlesional group.

TABLE 1. Verbal test scores

                      Lesional       Nonlesional    p Value
VIQ                   106.2 (10.0)   94.7 (14.1)    0.021
Visual Naming (BNT)   52.9 (8.6)     46.2 (8.8)     0.040
Auditory Naming       47.6 (3.9)     45.2 (4.4)     0.133

Values expressed as mean (SD). VIQ, verbal IQ; Visual Naming maximum score, 60; Auditory Naming maximum score, 50.

[FIG. 1. Topographic distribution of auditory (solid circles) and visual (open circles) naming sites in nonlesional epilepsy patients (A) and patients with space-occupying temporal lobe lesions (B).]

DISCUSSION Previous work in stimulation language mapping describes, primarily, the topography of visual naming sites in nonlesional epilepsy patients. In this study, we sought to elucidate the cortical distribution of both auditory and visual naming sites in both nonlesional epilepsy patients and in patients with space-occupying lesions in the temporal lobe. Consistent with our hypothesis, we found that lesional patients were less likely to have any naming sites identified in lateral temporal cortex. Additionally, whereas naming sites in nonlesional patients were scattered equally across the STG and MTG, naming sites were rarely found on the MTG in lesional patients, with most naming sites clustered in the posterior portion of the STG. In both groups, however, auditory naming sites were generally anterior to visual naming sites, concordant with previous findings (Hamberger et al., 2001).
The consistency in the anterior/posterior distribution of auditory and visual naming sites across groups is not surprising, as this maintains the spatial relation between auditory and visual association areas. The lower proportion of lesional patients with positive naming sites is consistent with results of a previously reported series that included 40 patients with temporal gliomas and 83 nonlesional epilepsy patients (Haglund et al., 1994). However, unlike our findings, this previous series found a lower percentage of STG naming sites in the lesional group and a lower proportion of MTG relative to STG naming sites in both groups. Although our use of auditory naming provided more positive sites overall, the use of auditory naming in our study does not appear to account for these differences, as no group differences in topography were specific to auditory versus visual naming. One possible explanation for the overall fewer positive sites in the lesional group is that as a space-occupying lesion expands, existing sites might be altered or eliminated (Haglund et al., 1994). It should be noted, however, that stimulation mapping is limited spatially by clinical and patient-related factors (see later) or by the extent of subdural grid coverage. Thus it is possible that in some patients, naming sites may have been located in unexpected areas that were not tested. Additionally, it is infrequent that patients who undergo stimulation mapping require remapping at a subsequent time; however, repeated mapping would potentially address the question of changes in language site location over time. Although stimulation mapping is the standard of care for clinical purposes, functional imaging carries the advantage of repeatable testing and has been used to demonstrate changes in both intra- and interhemispheric language representation after a stroke (Thulborn et al., 1999).
The literature on language reorganization after left hemisphere insult is notable for controversy regarding the nature and location of pathology or injury associated with cortical reorganization. As we limited our study to left hemisphere–dominant patients, we were interested specifically in intrahemispheric language organization in these two groups. Whereas some investigators have concluded that epileptogenic tissue does not typically displace language cortex (De Vos et al., 1995; Duchowny et al., 1996), others report otherwise (Rausch et al., 1991; Devinsky et al., 1993; Schwartz et al., 1998). In addition to nature and location of insult, patient-related factors such as IQ, age at onset, and educational attainment have been shown to correlate with location and quantity of positive naming sites (Ojemann, 1983a; Devinsky et al., 1993). Although both VIQ and age at onset differed significantly between our lesional and nonlesional groups (i.e., higher VIQ and later onset age in the lesional group), we found no systematic relation between VIQ or onset age and the presence/absence of positive naming sites, consistent with other reports (Duchowny et al., 1996; Liegeois et al., 2004). The more superior/posterior representation of naming sites in the lesional group is concordant with theories that the maintenance of left hemisphere dominance in patients with left temporal lesions is accomplished by reorganization of speech to adjacent posterior structures in the left hemisphere (Rasmussen and Milner, 1977) that reach functional maturation and commitment later in childhood (Fennell et al., 1977; Satz et al., 1988, 1990). Our findings are also consistent with results of a relatively recent functional imaging study, suggesting that either physical displacement or intrahemispheric reorganization of language occurs in patients with space-occupying lesions (Stowe et al., 2000).
Although it is possible that the development of a space-occupying lesion merely pushes aside normal brain tissue, resulting in fewer sites identified within the region studied, several investigators have reported the presence of functional tissue (including language cortex) within clearly tumor-infiltrated brain regions (Ojemann et al., 1996; Skirboll et al., 1996; Duffau et al., 2004). Thus the relation between functional and abnormal tissue does not appear to be straightforward (Haglund et al., 1994). An alternative explanation to consider, particularly given the lower baseline verbal scores and earlier onset age in the nonlesional group, is that the pattern of positive sites in the lesional group is actually a closer approximation of normal language organization than that observed in the nonlesional group. Accordingly, the location of naming sites in the posterior portion of the STG is consistent with the purported location of the Wernicke area inferred from studies of stroke patients (Brust et al., 1976; Mazzocchi and Vignolo, 1979), which, although lacking in resolution, likely reflects normal language organization. In keeping with this line of thought, the broad distribution of naming sites in nonlesional epilepsy patients might reflect the development of additional naming sites, perhaps in an attempt to compensate for damage to the original language area caused by chronic epileptiform activity. It is also reasonable to take a more theoretic stance and consider what positive naming sites might actually represent. In the current data set, we found that individuals with positive (visual) naming sites were older than individuals without any naming sites identified. Although we tend to infer structure/function relations from positive naming sites, we can state with certainty only that stimulation at a particular location disrupts the task at hand.
Thus it is possible that both chronic epilepsy and aging induce changes in cortical function such that normal function is more readily disturbed by stimulation (i.e., increasing the likelihood of finding positive naming sites). This perspective is most consistent with the position that the lesional group provides a closer representation of "normal" organization than does the nonlesional group.

A limitation of the current study is the relatively restricted region of cortex that was compared between groups. Although analysis of a broader region of cortex would have been preferable, practical limitations, such as time constraints, patient fatigue and cooperation, clinical concerns, and IRB regulations, render such data difficult to obtain. The main limitation of stimulation mapping, in general, is that it is invasive; only pathologic populations can be studied. Thus the patterns identified in the two populations studied here might both represent abnormal variants: one due to chronic, irritative electrophysiologic activity, the other due to structural displacement. In accordance with this, the finding of weak baseline performance on both visual and auditory naming in both groups suggests that neither group is entirely "normal," at least from a functional perspective. Perhaps discerning both the consistencies and discrepancies between stimulation mapping data from patient populations and functional imaging data from normal subjects by using similar tasks might help elucidate the normal cortical organization of language. Given the inconsistencies in the literature on language localization after left temporal insult, it is reasonable to conclude that the precise localization of essential language cortex cannot be reliably inferred from anatomic landmarks or from demographic or patient characteristics.
The findings reported here demonstrate two general patterns that can be used to guide the search for language cortex in patients with and without space-occupying lesions. Stimulation mapping, which remains the gold standard for identifying essential language cortex, is a time-intensive, yet also a time-constrained, procedure. It is hoped that the current results will increase the efficiency by which positive sites are identified.

Acknowledgment: We thank Jesse Brand and John Y. Chen for assistance with data management, Dr. Werner Doyle for neurosurgical information, Dr. Kenneth Perrine for patient information, and Drs. William T. Seidel and Frank Gilliam for editorial comments. This work was supported by NIH grant R01 NS 35140 (M.J.H., S.M. III).

REFERENCES

Black P, Ronner S. (1987) Cortical mapping for defining the limits of tumor resection. Neurosurgery 20:914–919.
Brust JC, Shafer SQ, Richter RW, Bruun B. (1976) Aphasia in acute stroke. Stroke 7:167–174.
De Vos KJ, Wyllie E, Glecker C, Kotogal P, Comair Y. (1995) Language dominance in patients with early childhood tumors near left hemisphere language areas. Neurology 45:349–356.
Devinsky O, Perrine K, Llinas R, Luciano DJ, Dogali M. (1993) Anterior temporal language areas in patients with early onset of TLE. Annals of Neurology 34:727–732.
Duchowny M, Jayakar P, Harvey AS, Resnick T, Alvarez L, Dean P, Levin B. (1996) Language cortex representation: effects of developmental versus acquired pathology. Annals of Neurology 40:31–38.
Duffau H, Velut S, Mitchell MC, Gatignol P, Capelle L. (2004) Intraoperative mapping of the subcortical visual pathways using direct electrical stimulations. Acta Neurochirurgica (Wien) 46:265–269.
Fennell EB, Bowers D, Satz P. (1977) Within-modal and cross-modal reliabilities of two laterality tests among left handers. Perceptual Motor Skills 45:451–456.
Haglund M, Berger M, Shamseldin M, Lettich E, Ojemann GA.
(1994) Cortical localization of temporal lobe language sites in patients with gliomas. Neurosurgery 34:567–576.
Hamberger M, Seidel WT. (2003) Auditory and visual naming tests: normative and patient data for accuracy, response time and tip-of-the-tongue. Journal of the International Neuropsychology Society 9:479–489.
Hamberger M, Goodman RR, Perrine K, Tammy T. (2001) Anatomical dissociation of auditory and visual naming in the lateral temporal cortex. Neurology 56:56–61.
Hamberger M, Seidel WT, McKhann GM, Perrine K, Goodman RR. (2005) Brain stimulation reveals critical auditory naming cortex. Brain 128:2742–2749.
Hermann BP, Wyler AR. (1988) Comparative results of dominant temporal lobectomy under general or local anesthesia: language outcome. Journal of Epilepsy 1:127–134.
Hermann BP, Perrine K, Chelune GJ, Barr W, Loring DW, Strauss E, Trenerry MR, Westerveldt M. (1999) Visual confrontation naming following left ATL: a comparison of surgical approaches. Neuropsychology 13:3–9.
Kaplan EF, Goodglass H, Weintraub S. (1983) The Boston Naming Test. 2nd ed. Lea & Febiger, Philadelphia.
Laws ER Jr, Taylor WF, Bergstralh EJ, Okazaki H, Clifton MB. (1986) The neurosurgical management of low-grade astrocytoma. Clinical Neurosurgery 33:575–588.
Liegeois F, Connelly A, Cross JH, Boyd SF, Gadian DG, Vergha-Khadem F, Baldeweg T. (2004) Language reorganization in children with early-onset lesions of the left hemisphere: an fMRI study. Brain 127:1229–1236.
Luders H, Lesser R, Dinner R. (1987) Commentary: chronic intracranial recording and stimulation with intracranial electrodes. In Engel J (Ed) Surgical treatment of the epilepsies. Raven Press, New York, pp. 297–321.
Mazzocchi F, Vignolo L. (1979) Localisation of lesions in aphasia: clinical-CT scan correlations in stroke patients. Cortex 15:627–653.
Moran NF, Fish DR, Kitchen N, Shorvon S, Kendall BE, Stevens JM. (1999) Supratentorial cavernous haemangiomas and epilepsy: a review of the literature and case series.
Journal of Neurology, Neurosurgery and Psychiatry 66:561–568.
Ojemann GA. (1983a) Brain organization for language from the perspective of electrical stimulation mapping. Behavioral Brain Research 6:189–230.
Ojemann GA. (1983b) The intrahemispheric organization of human language derived with electrical stimulation techniques. Trends in Neuroscience 6:184–189.
Ojemann GA. (1990) Organization of language cortex derived from investigations during neurosurgery. Seminars in Neuroscience 2:297–305.
Ojemann GA. (1991) Cortical organization of language. Journal of Neuroscience 11:2281–2287.
Ojemann GA, Dodrill CB. (1981) Predicting postoperative language and memory deficits after dominant hemisphere anterior temporal lobectomy by intraoperative stimulation mapping. Conference Proceedings, American Association of Neurological Surgeons: Paper 38, pp. 76–77.
Ojemann GA, Ojemann J, Lettich E, Berger M. (1989) Cortical language localization in left-dominant hemisphere: an electrical stimulation mapping investigation in 117 patients. Journal of Neurosurgery 71:316–326.
Ojemann JG, Miller JW, Silbergeld DL. (1996) Preserved function in brain invaded by tumor. Neurosurgery 39:253–258.
Penfield W, Roberts L. (1959) Speech and brain mechanisms. Princeton University Press, Princeton, NJ.
Rasmussen T, Milner B. (1977) The role of early left-brain injury in determining lateralization of cerebral speech functions. New York Academy of Science 299:355–379.
Rausch R, Boone K, Ary CM. (1991) Right-hemisphere language dominance in temporal lobe epilepsy: clinical and neuropsychological correlates. Journal of Clinical and Experimental Neuropsychology 13:217–231.
Satz P, Strauss E, Whitaker H. (1990) The ontogeny of hemispheric specialization: some old hypotheses revisited. Brain and Language 38:596–614.
Satz P, Strauss E, Wada J, Orsini DL.
(1988) Some correlates of intra- and interhemispheric speech organization after left focal brain injury. Neuropsychologia 26:345–350.
Schwartz TH, Devinsky O, Doyle W, Perrine K. (1998) Preoperative predictors of anterior temporal language areas. Journal of Neurosurgery 89:962–970.
Semah F, Picot MC, Adam C, Broglin D, Arzimanoglou A, Bazin B, Cavalcanti D, Baulac M. (1998) Is the underlying cause of epilepsy a major prognostic factor for recurrence? Neurology 51:1256–1262.
Skirboll SS, Ojemann GA, Berger MS, Lettich E, Winn HR. (1996) Functional cortex and subcortical white matter located within gliomas. Neurosurgery 38:678–684.
Stowe LA, Go IG, Pruim J, den Dunnen W, Meiners LC, Paans AMJ. (2000) Language localization in cases of left temporal lobe arachnoid cysts: evidence against interhemispheric reorganization. Brain and Language 75:347–358.
Thulborn KR, Carpenter PA, Just M. (1999) Plasticity of language-related brain function during recovery from stroke. Stroke 30:749–754.
Wechsler D. (1997) Wechsler Adult Intelligence Scale–III manual. The Psychological Corporation, New York.
Yeh SA, Ho JT, Lui CC, Huang YJ, Hsiung CY, Huang EY. (2005) Treatment outcomes and prognostic factors in patients with supratentorial low-grade gliomas. British Journal of Radiology 78:230–235.
Zhao J, Wang S, Li J, Qi W, Sui D, Zhao Y. (2005) Clinical characteristics and surgical results of patients with cerebral arteriovenous malformations. Surgical Neurology 63:156–161.

Clinical Dentistry. Med Oral Patol Oral Cir Bucal 2006;11:E88-93.

Digital diagnosis records in orthodontics.
An overview

Vanessa Paredes (1), José Luis Gandia (2), Rosa Cibrián (3)
(1) Assistant Professor, Department of Orthodontics
(2) Professor, Department of Orthodontics
(3) Professor, Department of Physiology, School of Medicine, University of Valencia, Spain

Correspondence:
Dra. Vanessa Paredes
Av. Blasco Ibáñez 20-15, Valencia 46010, Spain
Phone 00.34.96.369.94.44
Fax 00.34.96.339.06.96
E-mail: clinicaparedes@medynet.com

Received: 15-09-2005. Accepted: 11-12-2005.

Paredes V, Gandia JL, Cibrián R. Digital diagnosis records in orthodontics. An overview. Med Oral Patol Oral Cir Bucal 2006;11:E88-93. © Medicina Oral S. L. C.I.F. B 96689336 - ISSN 1698-6946

ABSTRACT
Digital technology is becoming, day by day, a more important part of most clinical activities, and orthodontists are accordingly adding digital technology to their orthodontic records. In this article we outline the advantages and disadvantages of the use of digital photography and digital radiography, as well as one of the latest developments: digital study casts. We also present the current situation in our country regarding the number of dentists who use these digital records routinely.

Key words: Digital photography, digital radiography, digital dental casts.

RESUMEN
Digital technology is now a reality that is increasingly taking hold in all clinical settings, and orthodontists are therefore also digitalizing their diagnostic orthodontic records. In this work we assess the advantages and disadvantages of the use of digital radiography, digital photography and the latest addition, digitized study models. Based on previous surveys, we show the current situation in our country regarding the number of professionals who use these digital records systematically.

Palabras clave: Digital photography, digital radiography, digital study model.
Indexed in: Index Medicus / MEDLINE / PubMed; EMBASE, Excerpta Medica; Indice Médico Español; IBECS.
© Medicina Oral S.L. Email: medicina@medicinaoral.com
Article in Spanish: http://www.medicinaoral.com/medoralfree01/v11i1/medoralv11i1p88e.pdf

thus saving time for the orthodontist. In a Spanish survey (1), 39 per cent of responders use a digital camera, another 39 per cent still use a traditional one, while 23 per cent use both digital and traditional cameras. In the United States (2), 28 per cent use a digital camera while 48 per cent still use a traditional one. Before acquiring a digital camera, we describe some options for orthodontists ready to incorporate digital photography into their practices (7,8):
• In orthodontics, if you expect a very high quality image, a top-end camera will be necessary.
• If you have an analogue photographic system, it is better to change just the body of the camera instead of buying a new one.
• If you have no experience at all in digital photography, it is better to buy a mid-range camera rather than a very sophisticated one.
• Since digital technology is advancing substantially and decreasing in price every day, it is better to buy a mid-range price camera.
• Consult the Internet and good references in order to be aware of what is new in digital photography.

DISADVANTAGES OF DIGITAL PHOTOGRAPHY
Digital photography has several disadvantages to keep in mind (3):
• Camera prices are still high, although they are decreasing every day (3,4).
• Digital images can be retouched, and so will not serve medico-legal requirements the way the traditional negative does (3).
• Since digital quality and technology keep advancing, today's digital cameras will be obsolete in a few years (3).

DIGITAL RADIOGRAPHY
As with digital photography, there is a slow but steady move towards digital radiography, which represents a big improvement in quality.

ADVANTAGES OF DIGITAL RADIOGRAPHY
With respect to digital radiography, the advantages are:
• Using digital radiography, the image is obtained instantly, saving time and allowing the professional to make a diagnosis immediately.
• Digital images are now of more than adequate quality and are certainly comparable to conventionally obtained radiographs.
• Ease of use by the operator.
• The reduction in radiation is around 70%.
• No chemicals or film are needed for digital radiography.
• The brightness, contrast and saturation of the radiograph can be altered, which can make identification of anatomic tissues easier for the professional.
• Software can be used to manage these radiographs and to localize and place cephalometric points easily and automatically (Figure 2).

51.4 per cent of the Spanish responders (1) use a cephalometric program for orthodontic diagnosis, while 28 per cent of the

INTRODUCTION
Orthodontic records have always been very important in orthodontics, since they are a basic diagnostic tool which informs us about the patient's occlusion. This information is very useful for making and planning a correct diagnosis and orthodontic treatment. These records can be divided into three main groups: radiographs, photographs and study dental casts. They must be taken before, sometimes during, and after every orthodontic treatment. Intraoral and extraoral photographs, study dental casts, and panoramic and lateral radiographs are the most useful orthodontic records, as shown in a recent survey of Spanish orthodontists (1), with similar results to another survey in the USA (2).
Traditionally, conventional radiographs and photographs were made on regular photographic or radiographic paper, while study dental casts were made of stone. Nowadays, however, there is a big move towards digital orthodontic records.

DIGITAL PHOTOGRAPHY
Digital photography has been the first orthodontic record to change from traditional to digital. Digital imaging has enjoyed enormous technological advancement in the past five years, during which digital camera sales have overtaken those of traditional cameras (3). Digital orthodontic extraoral and intraoral photos can also be instantly integrated into the practice's software and viewed all together on the same screen (Figure 1).

ADVANTAGES OF DIGITAL PHOTOGRAPHY
The advantages of digital photography over traditional photography are (3-6):
• The ability to view the image as soon as it has been taken, both on the camera screen and on the PC, allowing the doctor or operator to rectify it, repeat it, or show it to patients in order to motivate them.
• The absence of film, slides and processing costs is very welcome for everybody.
• The ability to store records electronically is useful, since after a number of years in practice the space needed to store a large number of picture records is significant.
• Image copies can be made automatically and easily at no cost.
• Digital photos are suited for immediate data transmission anywhere, for instance to a colleague, with the advantage of keeping the originals.
• There is no dust, scratching or damage of slides with time, although it is necessary to make backup copies frequently.
• Digital records allow more complete confidentiality, as the number of people involved in the processing and storage procedure is reduced.
• Digital records are easily and automatically incorporated into lectures, oral communications or PC presentations for teaching purposes.
• Any competent assistant can be trained to take digital photos,

Americans (2) do.

Fig. 1. Extraoral and intraoral digital photography of an orthodontic patient.
Fig. 2. Digital lateral measured radiography.
Fig. 3. A dental cast being digitalized with a conventional scanner.
Fig. 4. Upper dental cast image digitalized inside the digital program.
Fig. 5. A and B: Dental cast indirect and direct teeth measurements with the digital program.

DIGITAL STUDY MODELS
Any orthodontist who has been in practice for a number of years experiences a storage-space problem in his office; storing models is also a time-consuming procedure. For this reason, the additional space-saving advantage of digital models is a welcome advance (9-11). Begole (12) was one of the first authors to introduce a computer program, using dental cast photos, to aid the direct analysis of study models as a clinical diagnostic tool. Rudge (13) devised another computer system using an electronic X-Y reader in order to relate changes in the dentition to orthodontic treatment.
At the same time, Yen (14) proposed a simple computer program using a study model photocopy. This program predicts "required space" and compares it to "available space". The analysis also includes Bolton ratios, intercanine distances, intermolar distances, arch length and arch discrepancy. The introduction of digital imaging in our society, and especially in orthodontics, permits the use of either a regular scanner or a digital camera to digitalize stone dental casts, and of a computer program prepared to use the obtained 2D digital images to measure variables such as mesiodistal tooth size, arch length, intercanine distance or intermolar distance. Rivero et al (15) used a regular scanner to digitalize dental stone casts, while Carter et al (16) introduced another new digital method to measure tooth-arch width, length and perimeter and to evaluate longitudinal arch changes with age. Gouvianakis et al (17) were among the first authors to use a digital camera linked to a computer program to photograph and measure stone dental casts, while Trankmann et al (18) measured dental models with digitalizing tables. Ho and Freer (19) designed a computer program that permits direct input of tooth-width measurements per patient from study casts using digital calipers instead of traditional ones. Mok and Cooke (20) compared the reproducibility of mesiodistal total tooth widths and arch perimeter values on plaster casts given by sonic digitization and by digital calipers. Redmond (9,10) introduced a new computerized system in which impressions are sent to the company, where they are scanned and the results e-mailed back to the orthodontist. This new system allows five simultaneous views of the model in 3D, can be incorporated into computer patient records, and eliminates the need for model storage. Tomassetti (21) compared three digital measuring methods with the traditional method for calculating the Bolton index and concluded that the digital methods were much quicker.
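Yen's comparison of "required space" against "available space" reduces to a simple arch-length discrepancy calculation: required space is the summed mesiodistal width of the measured teeth, and a negative result indicates crowding. The sketch below is our own illustration (the function name and sample values are hypothetical, not taken from any of the cited programs):

```python
def arch_length_discrepancy(available_space_mm, mesiodistal_widths_mm):
    """Available space (measured arch perimeter segment) minus required
    space (sum of mesiodistal tooth widths). Negative -> crowding."""
    required = sum(mesiodistal_widths_mm)
    return available_space_mm - required

# Hypothetical anterior segment: six tooth widths, 35.0 mm available.
widths = [5.5, 6.0, 7.0, 7.0, 6.0, 5.5]          # sums to 37.0 mm required
discrepancy = arch_length_discrepancy(35.0, widths)  # 35.0 - 37.0 = crowding
```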
Garino (22) also compared dental arch measurements between stone and digital casts, concluding that the digital program was as easy and accurate as the traditional method, with the additional advantage of saving storage space. In McKeown's study (23), tooth dimensions were compared between index patients with severe hypodontia, their relatives with a full complement of teeth, and a control group. All formed teeth were imaged buccally and occlusally from study models with a digital camera linked to a computer. The camera was mounted horizontally above the model on an adjustable rod, with the lens focused parallel to the tooth surface and parallel to the long axis of the clinical crown for occlusal views. Tran (24) compared Little irregularity index results obtained with traditional calipers and with a digital program, concluding that the digital system was a very good option for this measurement. However, most of the digital programs reviewed give tooth measurement information as a group and not individually. In the present study, we introduce a new, fast and accurate 2D computer-aided system, already introduced and validated by us (25), to measure mesiodistal tooth size, intercanine and intermolar distance and arch perimeter, and to calculate overall and anterior Bolton ratios, arch discrepancy and tooth asymmetries automatically. All the study casts were digitized with a conventional scanner (Figure 3). Before making any measurement, however, it is important to use an accurate and easy calibration system to obtain the real dimensions of the dental casts in millimetres. With the mouse as the user interface of the digital method, we marked the points of the mesiodistal size of each permanent tooth on the image of the casts. The software designed for this purpose, which we have tested and found accurate and reliable (25), determines dental measurements in millimetres automatically. From these data, we were able to predict the remaining tooth measurements (Figure 4, 5A and 5B).
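The measurement chain just described (calibrate the scanned image to millimetres, mark mesial and distal points, derive widths and Bolton ratios) can be sketched as follows. This is a minimal illustration under our own assumptions (point coordinates in pixels, a ruler of known length included in the scan for calibration); it is not the authors' software. The Bolton norms mentioned in the comments (overall ≈ 91.3%, anterior ≈ 77.2%) are the standard published mean values.

```python
import math

def mm_per_pixel(ref_p1, ref_p2, known_mm):
    """Calibration factor from two points a known distance apart
    (e.g., ruler marks included in the scan)."""
    return known_mm / math.dist(ref_p1, ref_p2)

def mesiodistal_width(mesial_px, distal_px, scale_mm_per_px):
    """Mesiodistal tooth size in mm from two marked image points."""
    return math.dist(mesial_px, distal_px) * scale_mm_per_px

def bolton_ratio(mandibular_widths, maxillary_widths):
    """Bolton ratio (%): summed mandibular over summed maxillary widths.
    12 teeth per arch -> overall ratio (norm ~91.3%);
    6 anterior teeth  -> anterior ratio (norm ~77.2%)."""
    return 100.0 * sum(mandibular_widths) / sum(maxillary_widths)

scale = mm_per_pixel((0, 0), (400, 0), 40.0)         # 40 mm ruler spans 400 px
w = mesiodistal_width((120, 80), (120, 165), scale)  # one tooth width, in mm
```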
ADVANTAGES OF DIGITAL STUDY MODELS
• These virtual casts can be kept in digital format, eliminating the storage problem with study models in our offices.
• Digital images can be enlarged, making it easy to localize anatomic points.
• Digital photos are suited for immediate data transmission, for instance to a colleague over the Internet for an orthodontic diagnosis.
• Digital study casts can be shown to patients in order to motivate them in their treatment.
• Measurements can be made on digital casts in an easy, accurate and automatic way.
• Digital casts and their measurements can be accessed at any time and from any distance for diagnostic, clinical and information purposes.

In Spain, an orthodontics survey (1) reported that digital programs for dental casts are used by just 10%.

DISADVANTAGES OF DIGITAL STUDY MODELS
• If the dental casts are poor, the digitalized images can be altered during digitalization, even if a good calibration system is used.
• Sometimes, some dental images in mixed dentition are difficult to recognize and measure.
• Digitalizing dental casts is a laborious process that always has to be carried out under the same conditions.

Nowadays, digital study models can be viewed from any angle, turned through 360º in all planes of space, and even opened to allow upper and lower models to be viewed separately. Measurements can be carried out to allow space analyses to be conducted (26-28).

REFERENCES
1. Paredes V, Brusola C, Gascón J, Torres V, Gandia JL, Cibrian R. Resultados de una encuesta realizada entre ortodoncistas exclusivos españoles en el año 2003. Rev Esp Ortod 2004;34:225-33.
2. Sheridan JJ. The reader's corner. J Clin Orthod 2000;34:593-7.
3.
Fernández-Boza J. Fotografía digital: ventajas e inconvenientes. Rev Esp Ortod 2004;34:335-41.
4. Fernández-Boza J. El equipamiento para la fotografía digital. Rev Esp Ortod 2005;35:75-84.
5. Sandler PJ, Murray AM, Bearn D. Digital records in orthodontics. Dent Update 2002;29:18-24.
6. Christensen GJ. Important clinical uses for digital photography. J Am Dent Assoc 2005;136:77-9.
7. Costa A, Fernández-Bozal J. ¿Qué cámara me compro? Rev Esp Ortod 2005;35:155-9.
8. McKeown HF, Murray AM, Sandler PJ. How to avoid common errors in clinical photography. J Orthod 2005;32:43-54.
9. Redmond WR. Digital models: a new diagnostic tool. J Clin Orthod 2001;6:386-7.
10. Redmond WR, Redmond WJ, Redmond MJ. Clinical implications of digital orthodontics. Am J Orthod Dentofacial Orthop 2002;117:240-1.
11. McGuinness NJ, Stephens CD. Storage of orthodontic study models in hospital units in the U.K. J Orthod 1992;19:227-32.
12. Begole EA, Cleall JF, Gorny HC. A computer program for the analysis of dental models. Comput Prog Biomed 1979;10:261-70.
13. Rudge SJ. A computer program for the analysis of study models. Eur J Orthod 1982;4:269-73.
14. Yen CH. Computer-aided space analysis. J Clin Orthod 1991;25:236-8.
15. Rivero JC, Ochandiano S, Carreño A, Jimenez S. Uso tridimensional del oclusograma en el plan de tratamiento ortodóncico. Ortodoncia Española 1998;38:42-50.
16. Carter GA, McNamara JA Jr. Longitudinal dental arch changes in adults. Am J Orthod Dentofacial Orthop 1998;114:88-99.
17. Gouvianakis D, Drescher D. Der tertiäre Unterkieferengstand in Abhängigkeit von Behandlungsbeginn und Methodik. Fortschritte der Kieferorthopädie 1987;48:407-15.
18. Trankmann J, Mohrmann G, Themm P. Vergleichende Untersuchungen der Stützzonenprognose. Fortschritte der Kieferorthopädie 1990;51:189-94.
19. Ho CTC, Freer TJ. A computerized tooth-width analysis. J Clin Orthod 1999;33:498-503.
20. Mok KH, Cooke MS.
Space analysis: a comparison of sonic digitization (DigiGraph Workstation) and the digital caliper. Eur J Orthod 1998;20:653-61.
21. Tomassetti JJ, Taloumis LJ, Denny JM, Fischer JR. A comparison of 3 computerized Bolton tooth-size analyses with a commonly used method. Angle Orthod 2001;71:351-7.
22. Garino F, Garino GB. Comparison of dental arch measurement between stone and digital casts. World J Orthod 2002;3:250-4.
23. McKeown HF, Robinson DL, Elcock C, Al-Sharood M, Brook AH. Tooth dimensions in hypodontia patients, their unaffected relatives and a control group measured by a new image analysis system. Eur J Orthod 2002;24:131-41.
24. Tran AM, Rugh JD, Chacon JA, Hatch JP. Techno Bytes: Reliability and validity of a computer-based Little irregularity index. Am J Orthod Dentofacial Orthop 2003;123:349-51.
25. Paredes Gallardo V. Tesis Doctoral. Desarrollo de un nuevo método digital para la medición y predicción de tamaños dentarios: Aplicaciones para determinar alteraciones en el índice de Bolton. Universidad de Valencia. 2003.
26. Harrel WE, Hatcher DC, Bolt RL. In search of anatomic truth: 3-dimensional digital modelling and the future of orthodontics. Am J Orthod Dentofacial Orthop 2002;122:325-30.
27. Kusnoto B, Evans CA. Reliability of a 3D surface laser scanner for orthodontic applications. Am J Orthod Dentofacial Orthop 2002;122:342-8.
28. Richmond S. Recording dental casts in three dimensions. Am J Orthod Dentofacial Orthop 1987;92:199-206.

work_ayzcepugvbbvdntqrtzpavinjm ---- [PDF] Wound Areas by Computerized Planimetry of Digital Images: Accuracy and Reliability | Semantic Scholar
DOI: 10.1097/01.ASW.0000350839.19477.ce — Corpus ID: 13752557

Wound Areas by Computerized Planimetry of Digital Images: Accuracy and Reliability
H. Mayrovitz, Lisa B. Soontupe. Advances in Skin & Wound Care 2009;22:222-229.

BACKGROUND: Tracking wound size is an essential part of treatment. Because a wound's initial size may affect apparent healing rates, its surface area (S) and its surface area-to-perimeter (S/P) ratio are useful to document healing. Assessments of these parameters can be made by computerized planimetry of digital images using suitable software. OBJECTIVE: Because different caregivers often evaluate wounds and because measurement time is important, the objective of this study was to determine…
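The planimetry described in the abstract reduces, at its core, to polygon geometry on a traced wound boundary: area via the shoelace formula, perimeter via summed edge lengths, and S/P from the two. A minimal sketch of that idea only (the calibration factor and trace points are made up; this is not the software used in the study):

```python
# Minimal sketch: wound surface area (S), perimeter (P), and S/P ratio from a
# traced boundary polygon, scaled by an assumed mm-per-pixel calibration.
from math import hypot

def shoelace_area(points):
    """Unsigned polygon area from (x, y) vertices (shoelace formula)."""
    n = len(points)
    s = sum(points[i][0] * points[(i + 1) % n][1]
            - points[(i + 1) % n][0] * points[i][1] for i in range(n))
    return abs(s) / 2

def perimeter(points):
    """Sum of edge lengths around the closed polygon."""
    n = len(points)
    return sum(hypot(points[(i + 1) % n][0] - points[i][0],
                     points[(i + 1) % n][1] - points[i][1]) for i in range(n))

MM_PER_PX = 0.2  # assumed image calibration, mm per pixel
trace_px = [(0, 0), (40, 0), (50, 30), (20, 45), (-5, 25)]  # traced boundary

S = shoelace_area(trace_px) * MM_PER_PX ** 2   # mm^2
P = perimeter(trace_px) * MM_PER_PX            # mm
print(round(S, 1), round(P, 1), round(S / P, 2))
```

In practice the calibration comes from a ruler or marker of known size placed in the photographed field, which is why image quality and scale markers matter so much to planimetric accuracy.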
work_azmfroy4jzeotjfiky3xqepj5e ----

Research Article
An Assessment of the Validity of an Audio-Video Method of Food Journaling for Dietary Quantity and Quality
Emily Jago,1 Alain P. Gauthier,2 Ann Pegoraro,3 and Sandra C. Dorman4
1 Laurentian University, Sudbury, Canada
2 Director, Centre for Rural and Northern Health Research, Laurentian University, Sudbury, Canada
3 School of Human Kinetics, Laurentian University, Sudbury, Canada
4 Director, Centre for Research in Occupational Safety and Health, Laurentian University, Sudbury, Canada
Correspondence should be addressed to Sandra C. Dorman; sdorman@laurentian.ca
Received 14 November 2018; Accepted 10 February 2019; Published 26 March 2019
Academic Editor: Iris Iglesia
Copyright © 2019 Emily Jago et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Objective. To validate an audio-video (AV) method of food journaling, in a free-living scenario, compared to direct, weighed food assessment. Design and Setting. Data were collected in a cafeteria.
Meals, selected by participants (n = 30), were documented using the AV method: participants video-recorded their tray while audio-recording a description of their selected meal, after which the research team digitally weighed each food item and created an itemized diary record of the food. Variables Measured. Data from the AV method and from the weighed food diaries were transcribed and entered into a nutrition software analysis program (Nutribase Pro 10.0). Nutrient outputs were compared between the two methods, including kilocalories, macronutrients, and selected micronutrients. Analyses. Using mean scores for each variable, Wilcoxon signed-rank tests and Spearman's correlation coefficients were conducted. The interclass correlation coefficient (ICC) was calculated for absolute agreement between the two methods to assess interrater reliability. Results. With the exception of vitamin E and total weight, nutrient values were highly correlated between methods and were statistically significant given alpha = 0.05, power = 0.95, and effect size of 0.70. Conclusions. The AV method may be a meaningful alternative to diary recording in a free-living setting.

Hindawi Journal of Nutrition and Metabolism, Volume 2019, Article ID 9839320, 8 pages. https://doi.org/10.1155/2019/9839320

1. Introduction

Novel methods for assessing nutrient intake in the free-living setting are needed to manage food-related health challenges [1]. The most accurate measure of dietary intake is direct observation and prospective recording of weighed foods [2]. This gold-standard method requires that each item of food be weighed and recorded prior to (pre-meal) and following consumption (post-meal), where the researcher weighs the plate with any leftover food items. This results in the valid and reliable quantification of dietary intake and permits retrospective calculation of nutritional intake (i.e., kilocalories, macro- and micronutrients). However, this method is time-consuming and expensive to execute (e.g., participant/patient training) in research studies and in clinical settings. Furthermore, there is considerable participant burden, and the mere act of keeping such a detailed, weighed food record by participants/parents can become an intervention in and of itself [3].

Historically, the principal methods for assessing dietary intake have included 24-hour recall and food frequency questionnaires (FFQs), but both have been deemed faulty [4]. In fact, Dhurandhar et al. reported that self-reported intakes of energy are regularly used in health research "despite the fact that self-report questionnaires have been repeatedly shown to be seriously flawed [4]" (p. 1110). Three-day food diaries, despite limitations, remain the best option. This method requires participants to record, in detail, all foods and beverages consumed during a three-day period, ideally every second day: two days during the week and one day during the weekend, to capture variability [5]. Limitations include (i) compliance: participants tire of recording food diaries, which is why people are generally recommended not to exceed three days of collection, because compliance rates diminish after this timeframe [6]; (ii) self-reporting bias [7]; (iii) poor ability to estimate serving sizes [8]; (iv) the act of writing down one's food fundamentally changes our eating patterns [8]; and (v) the literacy of the person collecting the food data will also fundamentally affect the data provided [8]. The use of food journaling has become mainstream and has been studied at length in the last decade; however, few advances in this methodology have resolved these limitations.
The exception is the use of photography, although this method still requires written documentation of food items [9]. The advent of digital photography has provided advantages that include its low cost, limited participant burden, and rapid data collection [10]. Additionally, it is possible to extend the method to collect data on populations [11]. Significantly, people have already incorporated this application into their lives, as photographing and sharing food photos has become commonplace [12]. Several groups have tested the photographic method and found it to be reliable for food assessment and preferred by participants in research studies [13–18]. Taken together, the literature supports the use of digital photography, provided that the picture resolution is of good quality and under scenarios where the entire meal can be seen [16]. However, in cases where visual estimation was insufficient to determine food choices, food diary data were still required [16]. In 2017, a novel method of assessing food intake in a free-living setting was reported, which removed the requirement of written journaling; specifically, it used audio-video (AV) food recording [19]. This AV method was employed amongst wildland firefighters with the goal of understanding food consumption patterns during fire deployment, while eliminating barriers in food data collection. In particular, to achieve compliance amongst participants, it was critical that the food data collection not be overly laborious or time-consuming (i.e., written journal records), nor rely on participant memory [20]. Robertson et al. reported that the AV method was beneficial because it could be completed at any time, in any location, and did not impede participant work tasks; written journaling would have been difficult given the nature of their work and the inclement weather and field conditions. The AV method employed by Robertson et al.
builds on the principle of the photographic dietary record; that is, the video image provides (presumably) equivalent data to a digital photo, but with the benefit that the written journal is no longer required, since participants instead included an audio dictation of the meal and any hidden or unseen ingredients while video-recording [19]. Given that previous research has validated the photographic method for measuring food intake, compared to visual estimation [9, 10], it suggests that video-recording would provide similar results and that the AV method may be a novel, alternative method for estimating food intake via direct observation [21]. However, to our knowledge, the AV method has not yet been validated in the literature. Doing so is meaningful, given the potential applications for this method; specifically, 91% of the global population has access to portable, personal devices capable of AV recording (i.e., cell phones) [22]. Audio-video food journaling could therefore allow people, globally, in a free-living environment, to readily track their food and better understand their food consumption habits. In addition, written journaling requires a level of literacy; the AV method removes this constraint, increasing participant pools and potentially reaching people previously unable to contribute food data to the research literature [23]. Given the technological advancement in these devices, including high-resolution video capabilities, we are poised for rapid increases in available, personal food data for analysis, leading to broad-based opportunities for mobile phone interventions designed to support specific components of evidence-based treatments relating to food and health [24, 25]. Therefore, the purpose of this paper was to assess the AV method of recording meals in a free-living scenario, in comparison to direct, weighed food assessment, the gold standard.

2. Methods

2.1. Participants.
Participants were recruited, at random, from a university-based cafeteria. After selecting their meal choices, participants were approached by researchers until a total of n = 30 participants (female: 18; male: 12) agreed to participate. Forty-seven people were approached, and 17 chose not to participate. Thirty participant meals, including a mix of lunch and breakfast, were documented over a four-hour period in one day. All meals were included in the data analysis. Each participant received an incentive of $10.00 for agreeing to participate. Results of this study are based solely on the foods selected, so no personal data were collected; participant food data were assigned a participant ID number. All participants provided written, informed consent prior to participation, and this study was approved by the Institutional Research Ethics Board (REB#201606100).

2.2. Study Design. Data were collected in January 2017, in a cafeteria setting in a medium-sized university in Canada. Each meal was selected by the participant prior to study recruitment, resulting in a range of portion sizes and items chosen per meal.

2.3. AV Method. Participants were provided with an iPod touch© (3rd generation) and were asked to AV record the food they selected for their meal; i.e., while video-recording their tray, they provided a verbal description of their food, including a listing of contents (e.g., mustard). The participants were asked if they understood the method and could demonstrate this during a trial recording. Thirty meals were AV recorded in the form of mp4 files. Afterward, in the laboratory, a researcher and a registered dietician independently reviewed the AV recordings and created a diary listing of food items for each meal, estimating portion sizes per item indicated in the recording.

2.4. Weighted Method. The research assistants digitally weighed participant meals, using sanitary methods.
Specifically, the participant was asked to place their food on a clean plate resting on a weigh scale (plate type and weight were premeasured so they could be subtracted from the total weight). Each food item was listed and given a weighed value.

3. Nutrient Analysis

Participant meals were entered into NutriBase Pro, a software program used by registered dieticians and researchers to analyze the contents of foods and provide an overview of the micro- and macronutrients contained within the meals selected by the participants. Each participant meal was coded and entered into the program three times: (1) AV estimates from the researcher; (2) AV estimates from the registered dietician; and (3) weighed food items (gold standard).

3.1. Audio-Video Method. A researcher explained the AV method using a script, and once a participant understood the method, they moved forward in the protocol. Participants selected the video option under the camera setting on the iPod touch©, pressing "start" to begin taking a video of the meal they had selected; while speaking directly into the device, participants dictated the quantities and name of each item on their plate (e.g., one apple) or by volume (e.g., one cup of white rice). In the case of complex items, such as a breakfast sandwich, individual components were listed; participants would dictate that the sandwich included one fried egg, one white English muffin, one slice of cheddar cheese, and one tablespoon of ketchup. The participants were briefly trained on the AV method over a period of approximately 3 to 5 minutes and were asked to demonstrate a "trial" AV recording prior to commencing with a real-time AV recording. The participants were asked if they understood how to perform the AV method correctly, and any clarification required was provided. The AV method requires some level of technological literacy; however, participants in this study did not struggle with this aspect of participation.

3.2. Weighted Method.
One lead researcher and two research assistants completed all of the data collection and were trained prior to interacting with participants, including in sanitary methods of weighing food items. Researchers weighed individual food items on a StarFrit© food scale. Each item was recorded on a coded log sheet, which included a list of items for each selected meal, the weight in grams (g), subject ID, and the date and time of the AV recording and weighing. Two research assistants confirmed the weight of each item before it was recorded. The item identified on the coded log sheet, the same as identified by the AV method, was used for data entry.

3.3. Nutrient Analysis. All items recorded were entered into the NutriBase Pro nutrition software program. NutriBase draws its nutrient data from the Canadian Nutrient File produced by Health Canada [26]. First, the researcher created "client profiles" for each of the video-recordings, using AV subject ID coding to identify each profile. Next, the researcher searched for food items using the "food item search" function and selected the food item from a list of Canadian foods available in NutriBase Pro. At this point, the researcher entered the serving size, repeating this for each food item listed in the AV recording. This develops a nutrient profile with macro- and micronutrient values per food item and for the meal as a whole. This procedure was repeated with the estimates provided by (1) the researcher and (2) the registered dietician. There were no discrepancies between item lists. Next, the researcher followed the same procedure as above, but for the weighed food items. Since the researcher recorded weights during data collection and completed data entry, participant information was coded, and the researcher waited seven days to enter the data in an attempt to prevent recall bias [27]. Again, the researcher developed individual "client profiles" for each of the participants, coded by subject ID.
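The per-item workflow above (look up a food, apply the recorded serving size, sum into a per-meal profile) can be sketched with a toy per-100 g nutrient table. The numbers below are illustrative placeholders only, not Canadian Nutrient File values, and the function is not part of NutriBase:

```python
# Illustrative sketch of the per-item nutrient roll-up: each food's profile is
# stored per 100 g, scaled by the weighed serving, and summed per meal.
# Nutrient numbers are placeholders, not Canadian Nutrient File data.
PER_100G = {
    "fried egg":      {"kcal": 196, "protein_g": 13.6, "fat_g": 14.8},
    "english muffin": {"kcal": 227, "protein_g": 8.9,  "fat_g": 1.7},
    "cheddar cheese": {"kcal": 403, "protein_g": 24.9, "fat_g": 33.1},
}

def meal_profile(items):
    """items: list of (food_name, weighed_grams) -> summed nutrient dict."""
    totals = {"kcal": 0.0, "protein_g": 0.0, "fat_g": 0.0}
    for name, grams in items:
        for nutrient, per100 in PER_100G[name].items():
            totals[nutrient] += per100 * grams / 100
    return {k: round(v, 1) for k, v in totals.items()}

# A weighed breakfast-sandwich entry, component by component (grams assumed).
breakfast = [("fried egg", 46), ("english muffin", 57), ("cheddar cheese", 28)]
print(meal_profile(breakfast))
```

The same roll-up runs three times per meal in the study design: once with weighed grams and once each with the researcher's and dietician's estimated serving sizes.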
The researcher searched for food items using the "food item search" function and selected the measured serving size. None of the items included in data collection were prepackaged, so the researcher relied solely on developing tailored Personal Food Items (PFI) in the NutriBase Pro 10.0 software.

3.4. Statistical Analysis. Prior to data collection, a sample size calculation was performed with the following inputs: type I error, 5%; power, 90%; effect size, 100 kcal; two-sided test. All data are shown as mean ± standard error of the mean (SE). The Shapiro–Wilk test for normality was conducted, and the outputs indicated that the variables were not normally distributed; thus, nonparametric tests were used to compare the weighing methods. Spearman's correlation coefficient was calculated to determine the strength of the relationship between the actual weight of the meals and the averaged AV estimations from the researcher and the registered dietician. Variable averages were computed for the researcher and registered dietician in order to best capture the weight of each variable, given the large amount of variance between participants. Spearman's correlation coefficient was also used to compare the researcher and registered dietician data. Data were expressed as mean ± standard error of the mean (SE), calculated for continuous variables, and compared using the Wilcoxon matched-pairs signed-rank test, which compared the weighed and AV-recorded methods of dietary measurement. This was followed by the interclass correlation coefficient (ICC) for absolute agreement between the two methods to assess interrater reliability. SPSS statistical package version 20 (SPSS Inc., Chicago, IL) was used for all statistical analyses, reporting significance levels at alpha = 0.05, power = 0.95, and effect size of 0.70.

4.
Results

Table 1 indicates the mean ± standard deviation for the selected meals as estimated by the registered dietician and the researcher from the AV recordings; Spearman correlations were calculated. We chose to average these values, since the correlation between them was high. The averaged estimates from the registered dietician and researcher are compared to the actual weight of the meals in Table 2. The ICC estimates and their 95% confidence intervals were calculated based on a mean-rating (k = 3), consistency, two-way mixed-effects model. Table 2 presents the weight measured for each variable compared to the averaged estimate computed from the researcher and registered dietician values. The results indicate that the researcher and registered dietician ICC scores were highly correlated for macronutrients, kilocalories, and some micronutrients, including sodium, potassium, calcium, zinc, and iron. Throughout all analyses, it was consistently noted that both total weight and vitamin E did not yield significant results; all other variables analyzed provided statistically significant results (alpha = 0.05, power = 0.95, and effect size of 0.70). As seen in Figure 1, the researcher and registered dietician both overestimated portion sizes, resulting in higher macronutrient and kilocalorie outputs per meal compared to the outputs from the actual weight of the food items.

5. Discussion

This validation study was aimed at assessing a novel method of calculating dietary intake in a free-living setting, which is more practical than previous methods.
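The ICC model named above (two-way mixed effects, consistency, mean of k ratings) was computed in SPSS; as an illustration of the underlying formula only, ICC(C,k) = (MS_rows − MS_error)/MS_rows, here is a minimal pure-Python sketch. The function name and the per-meal kilocalorie values are made up for the example, not study data:

```python
# Minimal sketch of a two-way, consistency, average-measures ICC computed
# from an n-subjects x k-raters matrix: ICC(C,k) = (MS_rows - MS_err)/MS_rows.

def icc_consistency_avg(scores):
    """scores: list of n rows (subjects/meals), each a list of k ratings."""
    n, k = len(scores), len(scores[0])
    grand = sum(sum(row) for row in scores) / (n * k)
    row_means = [sum(row) / k for row in scores]
    col_means = [sum(scores[i][j] for i in range(n)) / n for j in range(k)]
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)
    ss_tot = sum((x - grand) ** 2 for row in scores for x in row)
    ss_err = ss_tot - ss_rows - ss_cols          # residual (interaction) SS
    ms_rows = ss_rows / (n - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / ms_rows

# Hypothetical per-meal kilocalories: weighed, researcher, dietician columns.
meals = [
    [420, 450, 440],
    [610, 650, 660],
    [300, 330, 310],
    [880, 940, 900],
    [515, 560, 540],
]
print(round(icc_consistency_avg(meals), 3))
```

Because the consistency form removes systematic rater offsets as a column effect, the uniform overestimation reported for the AV raters does not by itself depress these ICC values; the Wilcoxon tests are what expose that bias.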
The results from this study suggest that the audio-video method is a valid way of providing visual and audio information about food selection, allowing a researcher or RD to make accurate estimations of food to determine energy consumption; it replaces the laborious nature of food journaling, since it is correlated with precise weights of food items.

5.1. Total Weight. The AV videos were assessed after data were collected to provide serving size estimates of the items recorded for each meal. After the data were entered into Nutribase Pro by both the researcher and the registered dietician, outputs were averaged, and descriptive statistics were then run on those averaged values. This is likely the cause of the discrepancy between the actual weight and the video estimations by the researcher and registered dietician.

5.2. Kilocalories and Nutrients. Unlike other cafeteria-based studies [20], participants in the present study served their own meals, none of which were preportioned by trained cafeteria staff. Likewise, a list of ingredients and cooking methods was not provided by the cafeteria management company, and each meal was presented to the researchers after selection by the participant. Given that the normal application of the gold-standard method would not have access to these conditions, we wanted to compare the nutrient values produced from assessing the AV method to those from the weighed method. The Spearman correlation demonstrates a moderate to very good level of correlation between the two diary methods (r values from 0.50 to 0.75 indicate moderate to good correlation, and r values from 0.75 to 1 indicate very good to excellent correlation), suggesting that the nutrient values produced from assessing the AV method are valid, and the mean standard deviation reflects that participants selected varying levels of total calories per meal.
This is critical because it demonstrates that the AV method is able to capture differences in portion size, resulting in a range of caloric values per meal, even in a free-living setting.

Table 1: Software analysis data for estimates by researcher and registered dietician for macro- and micronutrients.

Variable             Researcher (mean±SD)   Registered dietician (mean±SD)   Spearman correlation
Total weight (g)     411.6±139.05           456.5±195.68                     0.599
Calories (kcal)      697.5±352.64           794.5±388.88                     0.774
Energy (kJ)          2823.4±1314.16         3255.0±1689.37                   0.781
Protein (g)          31.5±15.30             30.8±14.79                       0.667
Carbohydrates (g)    74.4±44.09             85.5±54.33                       0.803
Fiber (g)            5.8±4.75               7.0±5.78                         0.798
Fat (g)              29.8±19.52             35.1±19.25                       0.675
Saturated fat (g)    7.7±5.01               8.1±3.67                         0.731
Trans fat (g)        2.9±9.46               0.7±7.49                         0.513
Vitamin A (µg)       230.7±220.19           239.9±204.65                     0.551
Vitamin C (mg)       20.7±19.20             23.9±27.21                       0.453
Vitamin D (mg)       52.3±72.59             57.4±69.62                       0.533
Vitamin E (mg)       5.2±3.84               5.1±3.52                         0.596
Calcium (mg)         215.7±228.16           215.7±174.19                     0.778
Magnesium (mg)       83.0±51.71             83.2±48.57                       0.670
Potassium (mg)       916.4±551.26           978.3±700.62                     0.769
Sodium (mg)          1355.8±893.43          1203.2±695.19                    0.817
Iron (mg)            4.3±1.82               4.7±2.15                         0.745
Zinc (mg)            3.4±2.10               3.6±1.95                         0.818
Folate (µg)          114.0±71.15            124.7±81.12                      0.671

The ability to accurately estimate portion sizes is important to understanding the actual calories consumed, reducing the margin of error in a food journal. The researcher and registered dietician estimated portion sizes in this study because it is known that trained persons can estimate portion sizes more accurately than nontrained persons [28]; in the present study, participants inaccurately estimated portion sizes or commented in their recordings that they did not know how to assess portion sizes, saying "I'm not exactly sure, maybe 1 cup, could be more though".
Some researchers have recommended portion size training methods for participants to help them accurately estimate portion sizes, resulting in improved portion size estimation accuracy [28, 29]. Since weight gain is directly attributable to an overconsumption of calories relative to energy expenditure, it is critical to improve current methods of portion size estimation by the general public in a free-living setting. One solution to this problem is the introduction of computer-based estimation using image recognition software. In this study, both the researcher and the registered dietician overestimated portion sizes, reinforcing the need for unbiased, computer-based estimation in the free-living setting, where weighing individual food items is not an option.

Table 2: Macro- and micronutrient comparisons between methods.

                     Weighed (mean±SD)   AV method (R/RD) (mean±SD)   ICC     Wilcoxon signed-ranks
Total weight (g)     359.3±121.79        434.1±167.36                 0.793   Z = −3.013, p = 0.003
Calories (kcal)      593.2±265.74        746.0±370.76                 0.813   Z = −3.198, p = 0.001
Energy (kJ)          2477.8±1110.64      3039.2±1501.76               0.808   Z = −2.900, p = 0.004
Protein (g)          29.4±14.49          31.2±15.04                   0.891   Z = −1.386, p = 0.166
Carbohydrates (g)    63.2±30.07          79.9±49.21                   0.793   Z = −2.865, p = 0.004
Fiber (g)            5.0±2.89            6.4±5.27                     0.851   Z = −2.779, p = 0.005
Fat (g)              25.4±14.81          32.5±19.39                   0.808   Z = −2.922, p = 0.003
Saturated fat (g)    6.4±4.23            7.9±4.34                     0.900   Z = −2.916, p = 0.004
Trans fat (g)        1.6±6.25            1.8±5.10                     0.804   Z = −0.743, p = 0.457
Vitamin A (µg)       229.1±216.03        235.3±212.42                 0.875   Z = −0.761, p = 0.447
Vitamin C (mg)       17.7±18.04          22.3±23.21                   0.823   Z = −1.395, p = 0.163
Vitamin D (mg)       48.7±66.92          54.9±71.11                   0.859   Z = −2.847, p = 0.004
Vitamin E (mg)       3.1±2.76            5.1±3.68                     0.586   Z = −2.392, p = 0.017
Calcium (mg)         202.1±223.47        215.7±201.18                 0.911   Z = −2.222, p = 0.026
Magnesium (mg)       68.9±24.05          83.1±50.14                   0.778   Z = −1.564, p = 0.118
Potassium (mg)       785.2±350.49        947.3±625.94                 0.821   Z = −1.224, p = 0.221
Sodium (mg)          1114.8±627.64       1279.5±794.31                0.880   Z = −1.142, p = 0.254
Iron (mg)            3.8±1.85            4.5±1.99                     0.885   Z = −2.596, p = 0.009
Zinc (mg)            3.3±2.08            3.5±2.03                     0.976   Z = −2.147, p = 0.032
Folate (µg)          105.9±68.35         119.4±76.16                  0.933   Z = −2.006, p = 0.045

[Figure 1: Kilocalories and macronutrients (total weight, kilocalories, protein, carbohydrate, fat) by each method: weighed, researcher-dietician average, researcher estimate, and dietician estimate.]

Journal of Nutrition and Metabolism

Some aspects of the present study could be considered limitations and merit future methodological modifications. First, since participants tend to estimate portion sizes inaccurately, we did not rely on, nor ask for, participant estimates. Since one aim of the study was to validate the AV method, the researcher and registered dietician estimated portion sizes, because their estimates are known to be more accurate [28]. However, there is a cost associated with requiring a registered dietician or trained individual to evaluate food items from the AV recordings and use a nutrient database. Nutrient database subscriptions come at a cost and are unlikely to be purchased and used by untrained individuals. One alternative is for individuals to use free, online nutrient data-sharing programs, such as "My FitnessPal." In a clinical setting, registered dieticians perform analyses and have fee-for-service appointments to provide nutrition counseling. The cost of these services is typically paid by the patient or client or through their health coverage. Compared to written food journals, the same or similar costs would likely be incurred, given that a registered dietician or trained individual would still need to enter the food values into a nutrient database to determine nutrient outputs. Second, the food in the present study was captured in a cafeteria setting, which differs from the home and other free-living environments.
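The comparisons in Table 2 pair each weighed value with its AV-method estimate and report an intraclass correlation coefficient (ICC) plus a Wilcoxon signed-rank Z. As an illustration of how such paired-method agreement statistics are computed, the sketch below hand-rolls both in plain Python. The ICC(2,1) form and the toy data are assumptions for illustration; the paper does not state which ICC variant or software settings were used.

```python
import math

def icc_2_1(x, y):
    """ICC(2,1): two-way random effects, absolute agreement, single measure,
    for n paired observations from two methods/raters."""
    n, k = len(x), 2
    grand = sum(x + y) / (n * k)
    row_means = [(a + b) / 2 for a, b in zip(x, y)]
    col_means = [sum(x) / n, sum(y) / n]
    msr = k * sum((m - grand) ** 2 for m in row_means) / (n - 1)  # subjects
    msc = n * sum((m - grand) ** 2 for m in col_means) / (k - 1)  # methods
    sst = sum((v - grand) ** 2 for v in x + y)
    mse = (sst - msr * (n - 1) - msc * (k - 1)) / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

def wilcoxon_z(x, y):
    """Two-sided Wilcoxon signed-rank test via the normal approximation
    (adequate for moderate n; exact tables are better for tiny samples).
    Returns (z, p)."""
    d = [b - a for a, b in zip(x, y) if b != a]  # drop zero differences
    d.sort(key=abs)
    n = len(d)
    w_plus, i = 0.0, 0
    while i < n:  # assign average ranks over ties in |d|
        j = i
        while j < n and abs(d[j]) == abs(d[i]):
            j += 1
        avg_rank = (i + 1 + j) / 2
        w_plus += avg_rank * sum(1 for t in range(i, j) if d[t] > 0)
        i = j
    mu = n * (n + 1) / 4
    sigma = math.sqrt(n * (n + 1) * (2 * n + 1) / 24)
    z = (w_plus - mu) / sigma
    p = math.erfc(abs(z) / math.sqrt(2))  # two-sided tail probability
    return z, p

# Toy paired data (grams), standing in for one nutrient's weighed vs AV values.
weighed = [100.0, 200.0, 300.0, 400.0, 500.0]
av_est = [110.0, 215.0, 290.0, 430.0, 520.0]
icc = icc_2_1(weighed, av_est)
z, p = wilcoxon_z(weighed, av_est)
```

The pattern in Table 2 (ICC near 1 alongside a significant positive Z, as for calories) is exactly what this pairing produces: strong rank agreement between methods despite a systematic overestimation.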
In a cafeteria, participants can only select foods that are made available to them, and signage related to, and presentation of, food items may influence a participant's food selection, which may not reflect "typical" foods in their diet. The AV method was designed based on previous success with photographic food journals. When compared to traditional three-day food records, the use of digital photography for assessing food choices has been shown to be preferred amongst participants [17] and as accurate as real-time estimates of food [18]. There is, however, an inherent challenge to using this methodology: it is difficult or impossible to determine food contents when the food is not readily visible. Gauthier et al. [9] overcame this problem by having participants list the items in their meals; however, this created challenges similar to those found with written journaling. The use of audio-video recording of meals therefore aimed to reduce the burden of participation and facilitated food selection such that participants may also have had decreased awareness of the food assessment process. Participants "chatted" into the iPod, highlighting those components of the meals which they perceived would be difficult to view. They were not required to estimate portion sizes, nor were they required to itemize the foods in their meal. Therefore, for some meals, participants merely verbalized their beverage. We identify several additional advantages to this method, including (i) using the same tools already employed for digital photography; (ii) the excellent resolution of iPods, iPhones, and other devices that already exist; (iii) video adds dimension to the photos (i.e., several angles can be captured simultaneously, while freeze-framing for portion analysis); and (iv) audio-video allows the participant to speak directly to the researcher describing the meal and its components, rather than keeping a diary, and the audio can be transcribed verbatim.
In addition, since participants are not being asked to estimate serving sizes or counts, they may be depicting real choices more accurately, rather than modifying behaviour; this requires further research to determine. Lastly, people are already photographing their food and electronically sending food pictures, suggesting that they are comfortable with this type of data collection methodology. As some may find dictating information uncomfortable in a public setting, developing image recognition software should be explored, as it would eliminate the need to dictate food items and unseen items.

6. Implications for Future Research and Practice

In the clinical setting, registered dieticians frequently ask patients to keep a food diary to assess their food behaviours, to help advise them about food choices [30], or as a method of self-regulation or self-monitoring in an effort to improve eating patterns. Understanding energy expenditure and energy intake is complicated for the average person, and so one goal of utilizing a food journal is to keep track of approximate portions of food and, more specifically, calories, fat, protein, and carbohydrate; however, multiple challenges are experienced with this approach. First, patients tire of recording food diaries [6]; the AV method reduces participant burden and therefore may improve extended food collections. Second, patients tend to be biased when self-reporting [7]; the AV method reduces the capacity to do this, as it literally "shows" the foods selected. Third, patients are poor at estimating their serving sizes [8]; although this study demonstrates that even the experts are not perfect at estimating size, given the discrepancy in estimations made, it is more likely that the registered dietician or researcher would generally estimate serving sizes better than the untrained client.
Fourth, given that food journaling is one method for weight loss, we know that writing down one's food fundamentally changes eating patterns [8]; we do not know whether this is true for AV journaling. Lastly, the AV method bypasses literacy problems in food data collection [8]. As indicated, the AV method requires some level of technological literacy; however, following a brief tutorial, all participants were able to perform the AV method as required. Overall, this method provides significant advantages over written food journals and should be considered for future research and clinical applications. Accurate diet analysis of macro- and micronutrient intake allows healthcare providers, including registered dieticians, to provide accurate counseling to improve and maintain health. In fact, AV journaling may provide the clinician with additional information which could be used to help counsel the patient (e.g., the time of day when the meal is consumed).

7. Conclusion

Diet intervention is a critical community issue for many Canadians; a new method of food journaling that includes an easy-to-use application for the general public in a free-living environment and enhances communication between clients and healthcare providers is needed. The AV method allows participants with limited health literacy and language and literacy barriers to participate in meal recording for diet analysis for the purpose of dietetic consultation, and it is clinically comparable to the gold-standard weighed method. How the AV method compares to written journaling is not yet known and should be explored in future research, given that written journaling is the most common type of diet recording used in practice [31].

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no conflicts of interest.
Acknowledgments

The authors would like to thank the registered dieticians Charley-Ann Dinnes and Victoria Gionotti and the study participants for their time on this project, as well as the Laurentian University Research Fund (LURF) (#20160810) for financial support of this project. The Laurentian University Research Fund was obtained prior to data collection.

References

[1] S. C. Gorber, M. Tremblay, D. Moher, and B. Gorber, "A comparison of direct vs. self-report measures for assessing height, weight and body mass index: a systematic review," Obesity Reviews, vol. 8, no. 4, pp. 307–326, 2007.
[2] L. T. Ptomey, E. A. Willis, J. J. Honas et al., "Validity of energy intake estimated by digital photography plus recall in overweight and obese young adults," Journal of the Academy of Nutrition and Dietetics, vol. 115, no. 9, pp. 1392–1399, 2015.
[3] L. Small, K. Sidora-Arcoleo, L. Vaughan, J. Creed-Capsel, K.-Y. Chung, and C. Stevens, "Validity and reliability of photographic diet diaries for assessing dietary intake among young children," Infant, Child, & Adolescent Nutrition (ICAN), vol. 1, no. 1, pp. 27–36, 2009.
[4] N. V. Dhurandhar, D. Schoeller, A. W. Brown et al., "Energy balance measurement: when something is not better than nothing," International Journal of Obesity, vol. 39, no. 7, pp. 1109–1113, 2014.
[5] A. S. Kolar, R. E. Patterson, E. White et al., "A practical method for collecting 3-day food records in a large cohort," Epidemiology, vol. 16, no. 4, pp. 579–583, 2005.
[6] A.-K. Illner, H. Freisling, H. Boeing, I. Huybrechts, S. Crispim, and N. Slimani, "Review and evaluation of innovative technologies for measuring diet in nutritional epidemiology," International Journal of Epidemiology, vol. 41, no. 4, pp. 1187–1203, 2012.
[7] L. S. Freedman, J. M. Commins, J. E.
Moler et al., "Pooled results from 5 validation studies of dietary self-report instruments using recovery biomarkers for energy and protein intake," American Journal of Epidemiology, vol. 180, no. 2, pp. 172–188, 2014.
[8] T. Boehl, "Linguistic issues and literacy barriers in nutrition," Journal of the American Dietetic Association, vol. 107, no. 3, pp. 380–383, 2007.
[9] A. P. Gauthier, B. T. Jaunzarins, S.-J. MacDougall et al., "Evaluating the reliability of assessing home-packed food items using digital photographs and dietary log sheets," Journal of Nutrition Education and Behavior, vol. 45, no. 6, pp. 708–712, 2013.
[10] S. C. Dorman, A. P. Gauthier, M. Laurence, L. Thirkill, and J. L. Kabaroff, "Photographic examination of student lunches in schools using the balanced school day versus traditional school day schedules," Infant, Child, & Adolescent Nutrition (ICAN), vol. 5, no. 2, pp. 78–84, 2013.
[11] C. K. Martin, H. Han, S. M. Coulon, H. R. Allen, C. M. Champagne, and S. D. Anton, "A novel method to remotely measure food intake of free-living individuals in real time: the remote food photography method," British Journal of Nutrition, vol. 101, no. 3, pp. 446–456, 2008.
[12] K. Patrick, W. G. Griswold, F. Raab, and S. S. Intille, "Health and the mobile phone," American Journal of Preventive Medicine, vol. 35, no. 2, pp. 177–181, 2008.
[13] J. A. Higgins, A. L. LaSalle, P. Zhaoxing et al., "Validation of photographic food records in children: are pictures really worth a thousand words?," European Journal of Clinical Nutrition, vol. 63, no. 8, pp. 1025–1033, 2009.
[14] C. K. Martin, R. L. Newton, S. D. Anton et al., "Measurement of children's food intake with digital photography and the effects of second servings upon food intake," Eating Behaviors, vol. 8, no. 2, pp. 148–156, 2007.
[15] M. Parent, H. Niezgoda, H. H. Keller, L. W. Chambers, and S.
Daly, "Comparison of visual estimation methods for regular and modified textures: real-time vs digital imaging," Journal of the Academy of Nutrition and Dietetics, vol. 112, no. 10, pp. 1636–1641, 2012.
[16] C. K. Martin, T. Nicklas, B. Gunturk, J. B. Correa, H. R. Allen, and C. Champagne, "Measuring food intake with digital photography," Journal of Human Nutrition and Dietetics, vol. 27, no. S1, pp. 72–81, 2013.
[17] M. Swanson, "Digital photography as a tool to measure school cafeteria consumption," Journal of School Health, vol. 78, no. 8, pp. 432–437, 2008.
[18] D. A. Williamson, H. R. Allen, P. D. Martin, A. J. Alfonso, B. Gerald, and A. Hunt, "Comparison of digital photography to weighed and visual estimation of portion sizes," Journal of the American Dietetic Association, vol. 103, no. 9, pp. 1139–1145, 2003.
[19] A. H. Robertson, C. Larivière, C. R. Leduc et al., "Novel tools in determining the physiological demands and nutritional practices of Ontario fire rangers during fire deployments," PLoS One, vol. 12, no. 1, Article ID e0169390, 2017.
[20] A. S. Olafsdottir, A. Hörnell, M. Hedelin, M. Waling, I. Gunnarsdottir, and C. Olsson, "Development and validation of a photographic method to use for dietary assessment in school settings," PLoS One, vol. 11, no. 10, Article ID e0163970, 2016.
[21] D. A. Williamson, H. R. Allen, P. D. Martin, A. Alfonso, B. Gerald, and A. Hunt, "Digital photography: a new method for estimating food intake in cafeteria settings," Eating and Weight Disorders - Studies on Anorexia, Bulimia and Obesity, vol. 9, no. 1, pp. 24–28, 2004.
[22] J. P. Tregarthen, J. Lock, and A. M. Darcy, "Development of a smartphone application for eating disorder self-monitoring," International Journal of Eating Disorders, vol. 48, no. 7, pp. 972–982, 2015.
[23] S. March, E. Torres, M.
Ramos et al., "Adult community health-promoting interventions in primary health care: a systematic review," Preventive Medicine, vol. 76, no. S1, pp. S94–S104, 2015.
[24] D. D. Luxton, R. A. McCann, N. E. Bush, M. C. Mishkind, and G. M. Reger, "mHealth for mental health: integrating smartphone technology in behavioral healthcare," Professional Psychology: Research and Practice, vol. 42, no. 6, pp. 505–512, 2011.
[25] D. C. Mohr, S. M. Schueller, E. Montague, M. N. Burns, and P. Rashidi, "The behavioral intervention technology model: an integrated conceptual and technological framework for eHealth and mHealth interventions," Journal of Medical Internet Research, vol. 16, no. 6, p. e146, 2014.
[26] Pro., N., Version 10.0, Cybersoft, Phoenix, AZ, USA, 2012.
[27] E. Hassan, "Recall bias can be a threat to retrospective and prospective research designs," Internet Journal of Epidemiology, vol. 3, no. 2, 2006.
[28] L. M. Trucil, J. C. Vladescu, K. F. Reeve, R. M. DeBar, and L. K. Schnell, "Improving portion-size estimation using equivalence-based instruction," The Psychological Record, vol. 65, no. 4, pp. 761–770, 2015.
[29] N. L. Hausman, J. C. Borrero, A. Fisher, and S. Kahng, "Improving accuracy of portion-size estimations through a stimulus equivalence paradigm," Journal of Applied Behavior Analysis, vol. 47, no. 3, pp. 485–499, 2014.
[30] Y. J. Yang, M. K. Kim, S. H. Hwang, Y. Ahn, J. E. Shim, and D. H. Kim, "Relative validities of 3-day food records and the food frequency questionnaire," Nutrition Research and Practice, vol. 4, no. 2, pp. 142–148, 2010.
[31] R. K. Johnson, "Dietary intake - how do we measure what people are really eating?," Obesity Research, vol. 10, no. S11, pp. 63S–68S, 2002.
work_b2tcu3il7jfsro2wb3ud7atevq ---- Digital versus film Fundus photography for research grading of diabetic retinopathy severity. | Semantic Scholar
DOI: 10.1167/iovs.09-4803 Corpus ID: 207561236
@article{Li2010DigitalVF, title={Digital versus film Fundus photography for research grading of diabetic retinopathy severity.}, author={H. Li and L. Hubbard and R. Danis and A. Esquivel and Jose F. Florez-Arango and N. Ferrier and E. Krupinski}, journal={Investigative ophthalmology & visual science}, year={2010}, volume={51 11}, pages={5846-52}}
H. Li, L. Hubbard, R. Danis, A. Esquivel, Jose F. Florez-Arango, N. Ferrier, E. Krupinski. Published 2010, Investigative Ophthalmology & Visual Science.
PURPOSE To assess agreement between digital and film photography for research classification of diabetic retinopathy severity.
METHODS Digital and film photographs from a 152-eye cohort with a full spectrum of Early Treatment Diabetic Retinopathy Study (ETDRS) severity levels were assessed for repeatability of grading within each image medium and for agreement on ETDRS discrete severity levels, ascending severity thresholds, and presence or absence of diabetic retinopathy index lesions…

work_b5zgiblkzfbidjmfsc3cmnbuou ---- Page not available (HTTP 404; the web page address used may be incorrect).
work_b74gnkd46rcpthmt5zuowlykye ---- Quantitative phenological observations of a mixed beech forest in northern Switzerland with digital photography

Hella Ellen Ahrends,1 Robert Brügger,1 Reto Stöckli,2,3 Jürg Schenk,1 Pavel Michna,1 Francois Jeanneret,1 Heinz Wanner,1 and Werner Eugster4
Received 12 November 2007; revised 2 April 2008; accepted 23 July 2008; published 9 October 2008.

[1] Vegetation phenology has a strong influence on the timing and phase of global terrestrial carbon and water exchanges and is an important indicator of climate change and variability. In this study we tested the application of inexpensive digital visible-light cameras in monitoring phenology. A standard digital camera was mounted on a 45 m tall flux tower at the Lägeren FLUXNET/CarboEuropeIP site (Switzerland), providing hourly images of a mixed beech forest. Image analysis was conducted separately on a set of regions of interest representing two different tree species during spring in 2005 and 2006. We estimated the date of leaf emergence based on the levels of the extracted red, green and blue colors. Comparisons with validation data were in accordance with the phenology of the observed trees. The mean error of observed leaf unfolding dates compared with validation data was 3 days in 2005 and 3.6 days in 2006. An uncertainty analysis was performed and demonstrated moderate impacts on color values of changing illumination conditions due to clouds and illumination angles. We conclude that digital visible-light cameras could provide inexpensive, spatially representative and objective information with the required temporal resolution for phenological studies.

Citation: Ahrends, H. E., R. Brügger, R. Stöckli, J. Schenk, P. Michna, F. Jeanneret, H. Wanner, and W.
Eugster (2008), Quantitative phenological observations of a mixed beech forest in northern Switzerland with digital photography, J. Geophys. Res., 113, G04004, doi:10.1029/2007JG000650.

1. Introduction

[2] Phenology is the study of recurring biological events in the biosphere and the causes of their timing [Lieth, 1976]. Historically, phenological studies have been performed in agriculture to document events such as plant emergence, fruiting and harvest. In recent decades phenology has become recognized as an important integrative method for assessing the impact of climate variability and climate change on ecosystems [Menzel, 2002; Sparks and Menzel, 2002]. Recent global warming has had significant effects on the seasonality of ecosystems [Badeck et al., 2004; Chuine et al., 2004; Peñuelas and Filella, 2001; Zhang et al., 2007]. Time series analyses of selected variables such as green-up, maturity, senescence and dormancy yield valuable information about ecosystem responses to climate and are widely used in climatological and ecological models [Cleland et al., 2007; Reed et al., 1994; Schwartz, 1994]. Plant phenology is strongly connected to the gas and water exchange of ecosystems [Davis et al., 2003; Knohl et al., 2003; Moore et al., 1996]. Shifts in phenology can therefore significantly affect the global carbon and water cycle [Baldocchi et al., 2005; Churkina et al., 2005; Piao et al., 2007]. Consequently, a knowledge and understanding of these phenological processes is needed for the parameterization of models used in climate predictions [Arora and Boer, 2005; Lawrence and Slingo, 2004a, 2004b; Lu et al., 2001].

[3] Phenological ground observations span several decades, sometimes up to centuries [Rutishauser et al., 2007], but they are often observer-biased [Kharin, 1976; Menzel, 2002]. Additionally, there is a significant decline in long-term observation sites due to a lack of volunteers for phenological field work. For two decades satellite remote sensing has been providing a globally integrated view of vegetation phenological states. However, this method still heavily depends on ground-based measurements for validation. Moreover, satellite images often have limited temporal and spatial coverage due to clouds, aerosols and other atmospheric- or sensor-related characteristics [e.g., Ahl et al., 2006; Fisher et al., 2006; Studer et al., 2007; Zhang et al., 2004, 2006]. Within the framework of the COST Action 725 ("Establishing a European Phenological Data Platform for Climatological Applications"), our project investigates the application of ground-based, commercially available digital cameras in observational procedures and quality assurance of phenological monitoring.

JOURNAL OF GEOPHYSICAL RESEARCH, VOL. 113, G04004, doi:10.1029/2007JG000650, 2008
1 Institute of Geography, University of Bern, Bern, Switzerland. 2 Climate Analysis, Climate Services, Federal Office of Meteorology and Climatology MeteoSwiss, Zurich, Switzerland. 3 Department of Atmospheric Science, Colorado State University, Fort Collins, Colorado, USA. 4 Institute of Plant Sciences, ETH Zürich, Zurich, Switzerland. Copyright 2008 by the American Geophysical Union.

[4] Overall, the adoption of standard digital visible-light cameras in ecological research has been slow but recently
In an early approach, Brandhorst and Pinkhof [1935] compared flowering and leaf development of common park trees with dated analog photography, and emphasized the use of photography for objective phenological monitoring. Vertical skyward wide-angle photography has been successfully used for monitoring changing light conditions in forests, Leaf Area Index (LAI) and canopy-closure estimation [Rich, 1988, 1990; Rich et al., 1993]. This technique was first used for forests by Evans and Coombe [1959] and is currently widely applied [Jonckheere et al., 2004; Nobis and Hunziker, 2005; Pellika, 2001]. Vertical downward photography for the quantification of parameters such as vegetation fraction, LAI and biomass was performed by Vanamburg et al. [2006], Zhou et al. [1998], Zhou and Robson [2001], and Behrens and Diepenbrock [2006]. Other approaches analyzed vertical vegetation structures [Zehm et al., 2003] and directional reflectance distributions of vegetation targets [Dymond and Trotter, 1997] with multiple digital images. The link between leaf pigmentation and digital images was established by Kawashima and Nakatani [1998], who estimated the chlorophyll content in leaves using a video camera. Net CO2 uptake of moss was analyzed by Graham et al. [2006] with an RGB camera, based upon the changes in reflected visible light (VIS) during moss drying and hydrating. The observation of phenological phases was first tested by Adamsen et al. [1999], who analyzed wheat senescence with a digital camera. Sparks et al. [2006] used monthly fixed-date, fixed-subject photographs to examine the correlation between plant phenology and mean monthly meteorological data. Digital camera images, phenology and satellite-based data were jointly analyzed by Fisher et al. [2006], who used multiple photographs, visually classified by independent observers, as validation data for satellite model estimates of phenological development.
The latest research in webcam-based phenological monitoring was published by Richardson et al. [2007]. Digital webcam images were successfully used for spring green-up tracking of a forest and jointly analyzed with FAPAR (fraction of incident photosynthetically active radiation absorbed by the canopy), broadband Normalized Difference Vegetation Index (NDVI) and the light-saturated rate of canopy photosynthesis, inferred from eddy covariance measurements at a flux tower site.

[5] Despite this pioneering work, the application of digital image analysis in vegetation phenology is still a young field. Previous studies mainly documented the conceptual use of area-integrated digital camera images in vegetation monitoring. The spatial and temporal uncertainty of digital images for continuous objective monitoring of phenological processes is still largely unknown. Species-specific image analysis, comparable with traditional phenological observations in a mixed forest canopy, has not yet been conducted. In this study we conducted species-specific phenological observation using digital photography, incorporating an uncertainty analysis, for a managed mixed forest in northern Switzerland. We focused on the forest spring phenology in 2005 and 2006, and aimed to identify leaf unfolding dates of the two dominant tree species, beech (Fagus sylvatica L.) and ash (Fraxinus excelsior L.). The automated observation of different species presented in our study adds a further level of complexity and requires the separation of the phenological signal of single species from that of their surroundings. Given that within a mixed canopy the phenology of individual trees is locally adapted to environmental conditions such as light or tree age [e.g., Kikuzawa, 2003], we observed three ash and two beech trees using the camera. Additionally, we observed spring green-up of a mixed forested region located a few kilometers from the camera.
Extending the work of previous studies, we conducted an uncertainty analysis and a species-dependent phenological observation including a year-to-year comparison, and interpreted the camera signal in detail. We were therefore able to examine both the potential and the limitations of this novel observation method.

2. Materials and Methods

2.1. Site Description

[6] The Lägeren research site is located at 47°28′49″N and 8°21′05″E at 682 m a.s.l. on the south slope of the Lägeren mountain, approximately 15 km northwest of Zürich, Switzerland. The south slope of the Lägeren mountain marks the boundary of the Swiss Plateau, which is bordered by the Jura and the Alps. Since 1986 the Lägeren site has been a permanent station of the Swiss air quality monitoring network (NABEL) [Burkard et al., 2003]. A 45 m tall flux tower provides micrometeorological data at a high temporal resolution. Routine CO2 and H2O flux measurements as a contribution to the FLUXNET/CarboEuropeIP network started in April 2004 [Eugster et al., 2007]. The mean annual temperature is 8°C, the mean annual precipitation is 1200 mm, and the vegetation period is 170-190 days. The natural vegetation cover around the tower is a mixed beech forest. The western part is dominated by broad-leaved trees, mainly beech (Fagus sylvatica L.) and ash (Fraxinus excelsior L.). In the eastern part beech and Norway spruce (Picea abies (L.) Karst.) are dominant. The forest stand has a relatively high diversity of species, ages, and diameters [Eugster et al., 2007].

[7] The vegetation cover within the camera's field of view predominantly consisted of beech, ash and silver fir (Abies alba Mill.). Understory vegetation varies, but during the study period it was dominated by bear garlic (Allium ursinum L.) and beech saplings.

2.2.
Technical Setup

[8] On the uppermost platform of the Lägeren flux tower a standard digital 5-megapixel camera (NIKON Coolpix 5400, CCD sensor) was connected to a Linux-based computer and mounted in a weatherproof enclosure. The camera has provided hourly digital raw images (2592 × 1944 pixels, 12 bit color resolution) of the Lägeren forest since autumn 2004. This camera was chosen because of its high sensor resolution, good quality optics and its ability to store images in uncompressed raw format. Quantitative image analysis requires the original and complete image information. Spatial or color compression (e.g., JPEG) by the camera's complex and proprietary image processing algorithms can potentially lead to information loss and nonlinearities [Stevens et al., 2007]. Raw files contain the actual pixel data captured by the camera sensor, before any processing has been applied. Furthermore, in the raw format, information such as white balance, saturation, color space or tonality is included as metadata and can be adjusted manually by the user following image capture. The field of view of the camera is approximately 60° wide and the view angle is tilted 25° below the horizon. A sample image is presented in Figure 1.

[9] Image capture was controlled by custom Perl (Practical Extraction and Report Language) scripts and the open-source software gphoto2 (http://gphoto.org). To cope with diurnally and seasonally changing illumination conditions, the camera was operated in automatic exposure and aperture mode. The camera was pointed toward the west, looking at the southern slope of the Lägeren mountain. The sun therefore moved from behind to in front of the camera over the course of a day. As shown in Figure 1, a color calibration panel was included in the camera's field of view, but it was not used in the image analysis since its reflection values were saturated on sunny days.

2.3.
Image Analysis

[10] The NIKON raw image format (NEF) was processed into standard TIFF (Tagged Image File Format) without performing white balance, without changes in color space, and without applying any automatic correction methods or filters, maintaining the original image information. The TIFF images (linearly scaled to 48 bit color resolution) provided a time series of digital data in the visible part of the electromagnetic spectrum (approximately 400-700 nm) at an hourly temporal resolution (excluding the dark hours). This scheduling was chosen so as to provide an adequate temporal resolution under limited data storage capacity.

[11] Data analysis was based on 64 midday images for 2005 and 67 images for 2006 between Day of Year (DOY) 101 (11 April) and 170 (19 June). Only the images taken near local noon were used, in order to minimize angular effects of the forest canopy's hemispherical directional reflectance function (HDRF) [Chen et al., 2000]. This selection was supported by an uncertainty analysis examining the influence of diurnal illumination changes on camera-based phenology. Images with raindrops on the camera lens, images of very foggy days or days with snow cover were excluded from the time series analysis. Image analysis was conducted separately on each region in a set of regions of interest (ROIs) representing single tree species (Figure 1). Each ROI covered a specific set of species, and within each of these there was variation in vegetation during spring due to changing leaf coverage: closer trees masked those further away during green-up. As an extension to previous studies, our analysis here explores the effects of these overlaying signals. Table 1 shows the different species included within the different ROIs. Regions of interest were named after the dominant tree species. We observed the phenology of three ash and two beech trees.
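The near-noon image selection described in paragraph [11] can be sketched in a few lines. This is a minimal illustration, not the authors' code; the helper name pick_midday_image and the use of Python's datetime are our assumptions:

```python
from datetime import datetime

def pick_midday_image(timestamps):
    """From one day's hourly capture times, pick the one closest to local
    noon (to minimize angular effects of the canopy's HDRF)."""
    return min(timestamps, key=lambda t: abs(t.hour * 60 + t.minute - 12 * 60))

# One day of hourly captures during daylight hours (07:00-15:00).
day = [datetime(2005, 4, 11, h, 0) for h in range(7, 16)]
print(pick_midday_image(day))  # 2005-04-11 12:00:00
```

In practice the same filter would run per day over the whole archive, with the excluded days (rain on the lens, fog, snow) removed beforehand.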
The Beech 2 ROI and the Beech 3 ROI were mainly covered by the same tree, Beech 2 representing its upper crown and Beech 3 representing the lower crown and part of the understory vegetation. Beech trees generally show successive leafing, starting at the lower parts of the crown and moving up to the top of the crown [e.g., Kikuzawa, 1983, 1989, 2003]. Therefore we compared leafing dates of the Beech 2 ROI with those from the Beech 3 ROI. The Background Forest ROI was chosen in order to measure spatially integrated phenology characteristics of a heterogeneous area and to study atmospheric disturbance effects at a larger distance from the camera.

[12] The image's color values (red, green and blue) were extracted and averaged across each ROI at daily intervals. Several vegetation indices have been developed to describe biomass amount and vegetation status. Since the camera's spectral sensitivity is limited to the visible part of the spectrum, we used the relative brightness of the green channel (GF: the ratio of the mean raw green value to the sum of the mean raw red, green and blue values):

GF = G / (G + R + B)

[13] The exact spectral sensitivity of the NIKON Coolpix 5400 was not revealed by the manufacturer. The GF was computed from unsmoothed and noninterpolated mean color values for each ROI and was a suitable index for recording changes in vegetation state during spring. GF values were then plotted over time to describe the changing phenology in the observed ROIs (Figure 2). Leaf emergence dates were determined (a) automatically, using first and second derivatives (ΔGF/Δt and Δ²GF/Δt²) describing curvature changes (data not shown), and (b) by visual interpretation of the curvature shape. Generally leaf emergence is represented by the maximum value of the second derivative, that is, by the starting date of a significant GF increase (inflection point). The derivative methodology is very responsive to noise in the unsmoothed GF time series.
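The green-fraction index and the derivative-based detection of leaf emergence described above can be sketched as follows. This is an illustrative reconstruction on a synthetic sigmoid green-up curve, not the authors' code; the function names and curve parameters are our assumptions:

```python
import numpy as np

def green_fraction(r, g, b):
    """GF = G / (G + R + B), from the mean raw color values of an ROI."""
    return g / (r + g + b)

def leaf_emergence_doy(doy, gf):
    """Leaf emergence date as the DOY where the second derivative of the
    GF series is largest, i.e. the start of the significant GF increase."""
    d1 = np.gradient(gf, doy)   # dGF/dt
    d2 = np.gradient(d1, doy)   # d2GF/dt2
    return doy[np.argmax(d2)]

# Synthetic daily GF series: flat before green-up, sigmoid rise around DOY 120.
doy = np.arange(101, 171)
gf = 0.33 + 0.05 / (1.0 + np.exp(-(doy - 120) / 3.0))

print(green_fraction(60.0, 90.0, 50.0))  # 0.45
print(leaf_emergence_doy(doy, gf))       # shortly before DOY 120
```

On real, unsmoothed GF series the second derivative amplifies noise, which is why the study falls back to visual curve interpretation for the Ash 1 and 2 ROIs; smoothing the series before differentiating is a common remedy.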
It was successfully applied to the Beech 1, 2 and 3, Background Forest and Ash 3 ROIs. For the Ash 1 and 2 ROIs, leaf emergence dates were derived visually from the GF curvature shape, as there was interference in the data from the earlier greening of background vegetation and understory (see the following section).

Figure 1. Sample camera image (24 May 2005) and regions of interest (ROIs) for image analysis. A color calibration panel facing south (right) is included in all images. ROIs are named after the dominant tree species.

[14] Validation data were obtained from daily visual image interpretation combined with direct phenological observations of sample trees performed by the Forestry Office Wettingen. In 2006 the percentage of foliation in the upper, middle and lower parts of Ash 3 and Beeches 1, 2 and 3 was documented between DOY 115 (25 April 2006) and 124 (4 May 2006) (data not shown). Leaf unfolding dates for the Ash 1 and 2 ROIs were visually estimated from imagery (Table 2). In 2005 validation data for all ROIs were visually estimated. Comparisons of field observations from 2006 with 2006 imagery served as a reference. The human eye is readily able to differentiate between different stages of leaf emergence, justifying visual image interpretation ("visual leaf unfolding" in Table 2). Visual interpretation of Background Forest phenology resulted in higher uncertainty.

2.4. Uncertainty Analysis

[15] Pixel values, and therefore GF values and leaf emergence dates, are influenced by different biotic and abiotic factors. Uncertainties and variations on daily and hourly timescales are thought to be mainly a result of abiotic factors such as changing ambient illumination conditions and wind influences.

[16] It can be assumed that the tree positions in the images changed due to tower and tree movements on windy days.
Wind influences and natural leaf development led to different leaf inclination angles, which may have influenced the levels of the extracted color values. We did not consider these influences within this study. To quantify uncertainties we focused on ambient illumination conditions. They affect results because of (a) changing radiation due to clouds and humidity, (b) the diurnally and seasonally changing illumination angle, and (c) the limited dynamic range of the camera. Due to clouds and other atmospheric influences, such as water vapor content, the fraction of direct radiation changes over time. Its impact on photosynthetic activity and surface albedo was described, e.g., by Wang et al. [2002], Rocha et al. [2004] and Gamon et al. [2006]. Also, dependencies of vegetation index values on illumination angles are well known [Holben, 1986]. Among others, Holben and Kimes [1986], Jackson et al. [1990] and Schaaf and Strahler [1994] have shown that surface anisotropic properties have a significant influence on surface reflectance measurements.

Table 1. Regions of Interest Chosen for Image Analysis and the Main Vegetation Types Covered by the ROI(a)

  Region of Interest   Number of Pixels   Main Vegetation Cover
  Ash 1                 58696             ash, mixed background vegetation of the lower forest
  Ash 2                 73924             ash, beech
  Ash 3                 50530             ash, silver fir
  Beech 1               87048             beech, ash
  Beech 2              137960             beech, silver fir, understory
  Beech 3               21021             beech, understory
  Background Forest     61313             beech, ash, maple

  (a) ROI, regions of interest.

Figure 2. Green fraction (G/(G + R + B)) development during spring (DOY 100-170) for (a) beech-dominated ROIs in 2005 and (d) 2006, (b) ash-dominated ROIs in 2005 and (e) 2006, and (c) the Background Forest ROI in 2005 and (f) 2006. Solid vertical lines show mean GF-based leaf unfolding dates of the dominant tree species and dotted lines show mean unfolding dates provided by the validation data.
The effect of the illumination angle is dependent on vegetation type, varies with spectral wavelength and is difficult to quantify [Qi et al., 1995]. Using sample images we analyzed the variance due both to cloud conditions and to changing illumination angles, and quantified the impacts on the GF.

[17] We attempted to quantify the impact of changing fractions of diffuse radiation on the GF by comparing vegetation indices of (visually classified) "cloudy" and "sunny" images. We compared eight pairs of sunny and cloudy images from successive days taken at midday in the middle and at the end of the growing season. Except for the radiation, stable environmental conditions were assumed for the picture pairs. GF values of the images with higher diffuse radiation fractions ("cloudy" days: DOY 144, 155, 163, 168, 209, 229, 234 in 2005) were subtracted from those with a high direct radiation fraction ("sunny" days: DOY 145, 154, 162, 167, 169, 208, 228, 235 in 2005). For each ROI and each pair of images the relative change of the GF was calculated (values normalized with the GF of the "sunny" images):

ΔGF [%] = (1 - GF_cloudy / GF_sunny) * 100

[18] Moreover, we hypothesized that changing diurnal illumination angles could have a significant impact on image pixel values and thus also on the GF. To test this assumption we calculated the GF variation over five sunny days in the middle and at the end of the growing season of 2006 (DOY 163, 175, 181, 198 and 211). Mean RGB values were extracted from hourly digital images between 7:30 and 15:30 h.

3. Results

3.1. Temporal Spectral Response of the GF

[19] Figure 2 shows the GF curves and GF-derived leaf emergence dates for 2005 and 2006. The GF showed a characteristic curve shape, similar to spring leaf emergence phenology, and could therefore be used as a surrogate index for phenological phases.
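The pairwise cloudy/sunny comparison of section 2.4 can be illustrated numerically. The GF values below are hypothetical, chosen only to fall in a plausible range; they are not the measured Lägeren data:

```python
import numpy as np

# Hypothetical paired midday GF values for one ROI on successive days.
gf_sunny  = np.array([0.360, 0.358, 0.355])  # high direct radiation
gf_cloudy = np.array([0.356, 0.357, 0.352])  # high diffuse radiation

# Relative change, normalized by the sunny-day GF:
# dGF [%] = (1 - GF_cloudy / GF_sunny) * 100
dgf = (1.0 - gf_cloudy / gf_sunny) * 100.0

print(round(float(dgf.mean()), 2), round(float(dgf.max()), 2))  # 0.75 1.11
```

Averaging these per-pair changes over all eight image pairs, per ROI, yields the mean and maximum ΔGF values of the kind reported in Table 3.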
Generally the values showed species-specific seasonality. Short-term variations (the noise component) were similar for each ROI.

[20] The Ash 1 ROI and the Ash 2 ROI displayed two consecutive pronounced rises in spring. The first rise was due to the leaf emergence of trees or understory vegetation in the background with earlier green-up dates, whereas the second increase was caused by the leaf flush of the ash. Figure 4 shows enlarged image data as an example of the Ash 2 ROI in spring 2006. This ROI was strongly influenced by a beech growing behind the ash, which had an earlier leaf emergence and was partially visible in the space between the stem and bare branches of the ash tree in the foreground.

[21] Figure 3 shows the red and blue color values normalized to the overall brightness (red and blue fractions) for the ROIs Beech 1 and Ash 1 in 2005. The values were similar in 2006 (data not shown). For both species, the blue fraction (BF) decreased simultaneously with the increase of the GF during green-up, while the red fraction (RF) remained approximately constant.

[22] After complete leaf expansion, when leaves were getting thicker, drier and darker and reached their delayed full photosynthetic activity [e.g., Morecroft et al., 2003; Schulze, 1970], the GF decreased in all ROIs, particularly in 2006. The BF slightly increased and the RF decreased at that time.

[23] In 2006 the GF was generally lower than it had been in 2005. RF and BF values of the Background Forest ROI were highly scattered, but the curve characteristics were similar to those of the beech-dominated ROIs (data not shown). Moreover, the Background Forest reflection values were higher than those of the regions next to the tower (data not shown). Since the camera adjusted exposure and aperture automatically, the background generally appeared brighter than the foreground, especially with high diffuse radiation fractions.
At increasing distances from the camera, higher atmospheric absorption and scattering of light occur [Janeiro et al., 2006]. Therefore the GF of the Background Forest ROI showed a less consistent curve compared to the other ROIs.

3.2. Validation of the GF

[24] Dates for the leaf emergence of the dominant tree species generally agreed well with the validation data (Table 2). ROIs with similar dominant tree species showed similar GF curve shapes. GF curves in 2005 were more consistent than in 2006 (Figure 2). The estimation of transition dates in 2006 was more difficult due to irregularities in the GF curve. The mean error of the observed onset compared with the validation data was 3 days in 2005 and 3.6 days in 2006. The maximum disagreement between validation and GF-based estimates was eight days in 2005 and seven days in 2006, for the Ash 2 ROI (Figure 4). GF-based dates for leaf unfolding for the beech-dominated ROIs showed better agreement with validation data than those for the ash-dominated ROIs.

Table 2. Dates for Leaf Unfolding in 2005 and 2006 at the Lägeren Research Site(a)

                      ------------- 2005 -------------    ------------- 2006 -------------
  ROI                 Visual    GF-Based   Abs. Diff.     Visual    GF-Based   Abs. Diff.
                      (DOY)     (DOY)      (days)         (DOY)     (DOY)      (days)
  Ash 1               138 (V)   140        2              128 (V)   131        3
  Ash 2               131 (V)   139        8              124 (G)   131        7
  Ash 3               129 (V)   133        4              124 (V)   131        7
  Beech 1             121 (V)   119        2              115 (G)   114        1
  Beech 2             120 (V)   119        1              116 (G)   114        2
  Beech 3             119 (V)   119        0              115 (G)   113        2
  Background Forest   112 (V)   116        4              111 (V)   114        3
  Mean error (days)                        3                                   3.6

  (a) Signals derived from digital imagery (GF-based leaf unfolding) and validation data (visual leaf unfolding) for the dominant vegetation type in each ROI are shown. Validation data are based on visual image interpretation (V) or on ground observations (G).
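The mean errors reported above (3 days in 2005, 3.6 days in 2006) can be reproduced directly from the Table 2 columns:

```python
# Visually observed vs. GF-based leaf unfolding dates (DOY) from Table 2,
# in row order: Ash 1-3, Beech 1-3, Background Forest.
visual_2005 = [138, 131, 129, 121, 120, 119, 112]
gf_2005     = [140, 139, 133, 119, 119, 119, 116]
visual_2006 = [128, 124, 124, 115, 116, 115, 111]
gf_2006     = [131, 131, 131, 114, 114, 113, 114]

def mean_abs_error(observed, estimated):
    """Mean absolute difference in days between two date series."""
    return sum(abs(o - e) for o, e in zip(observed, estimated)) / len(observed)

print(mean_abs_error(visual_2005, gf_2005))            # 3.0
print(round(mean_abs_error(visual_2006, gf_2006), 1))  # 3.6
```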
Because of missing images between DOY 116 and 119 in 2005, caused by a technical system failure, the derivation of the exact date for beech leaf unfolding was not possible. This does not affect the general finding that in both years the GF values in beech-dominated ROIs increased slightly earlier than indicated by the validation data. Possibly, this demonstrates the high objectivity and accuracy of the imagery for the observation of budbreak; in field studies it was difficult to observe the exact date of first leaf appearance. However, the GF values may additionally be influenced by early leaf unfolding of beech saplings in the understory vegetation. In contrast, GF values for ash-dominated ROIs increased after leaf unfolding was observed in the field. As noted above, the GF already increased for the ash-dominated ROIs when other vegetation components such as the understory or beech trees were greening up. Therefore, after leaf emergence of the ash tree, the GF values only increased once ash leaves started covering branches, stem and other previously nongreen parts of the ROI. This is also the most likely cause of the greater disagreement between the ash-dominated regions and the validation data.

[25] The mean date of leaf unfolding was five days earlier in 2006 than in 2005 for beech-dominated ROIs and six days earlier for ash-dominated ROIs. For the Background Forest ROI leaf emergence started two days earlier in 2006. Maximum GF values of the observed ROIs were reached between the middle and the end of May in both years. Within species, the GF-based dates for the start of the growing season were very similar. The delayed green-up of the Ash 1 ROI and the earlier green-up of the Ash 3 ROI in 2005 agreed well with validation data. In 2006 leaf emergence in the lower crown layers, represented by the Beech 3 ROI, happened earlier than in the upper crown layers, represented by the Beech 2 ROI.
In 2005 this difference between the Beech 2 and the Beech 3 ROIs did not show up in the GF values, probably because of the missing images between DOY 116 and 119.

Figure 3. Red and blue color fractions for (a) the Beech 1 ROI, (b) the Ash 1 ROI, and (c) the Background Forest ROI, plotted over time (day of year) for 2005. The solid vertical line shows the start of growing season date estimated from GF values.

Figure 4. Enlarged images of the Ash 2 ROI. Successive masking of the beech growing behind the ash: (a) no leaves (14 April 2006), (b) beech leaves unfolded (4 May 2006), and (c) ash masks beech (8 June 2006).

3.3. Uncertainty Analysis

[26] To quantify the impact of the diffuse radiation fraction on the GF values we compared eight pairs of sunny and cloudy images. The resulting mean and maximum estimated GF changes for each ROI are given in Table 3. As the chosen images do not represent extremely contrasting sky conditions, the values represent average uncertainty rather than maximum uncertainty. The impact was highest for the furthest ROIs, such as the Background Forest and Ash 1. Their greater distances from the camera made these parts of the images more susceptible to the effects of water vapor absorption and scattering processes (particularly on cloudy days with higher air humidity). Therefore, these parts generally appeared brighter, resulting in a higher impact on the GF. Also, the spatial variability caused by stronger contrasts between shadowed and sunny areas seemed to be compensated for by the averaging of color values over the observed ROIs.

[27] To test the impact of changing illumination angles we calculated the diurnal course of GF values over five sunny days of 2006 between 7:30 and 15:30 h (Figure 5). Generally, GF values first increased and then decreased again. Maximum GF values were reached before midday, except for the Beech 3 ROI (Figure 5b).
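The statistic used to quantify this diurnal variation, reported per day in Table 4, is the coefficient of variation: 100 times the standard deviation divided by the mean of one day's hourly GF values. A minimal sketch with hypothetical numbers (not measured data):

```python
import numpy as np

def cv_percent(gf_hourly):
    """Coefficient of variation (%) of one day's GF values: 100 * std / mean."""
    gf = np.asarray(gf_hourly, dtype=float)
    return 100.0 * gf.std() / gf.mean()

# Hypothetical hourly GF values (07:30-15:30) for one ROI on a sunny day:
# a late-morning maximum followed by a decline, as in Figure 5.
gf_day = [0.352, 0.356, 0.360, 0.362, 0.361, 0.358, 0.354, 0.350, 0.347]
print(round(cv_percent(gf_day), 2))
```

Values around 1-3%, as in Table 4, correspond to this kind of smooth rise and fall over the day.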
To quantify the impact, the coefficient of variation (in percent) was calculated for each ROI. The largest variations were found in the Beech 3 ROI (Table 4). This ROI was significantly affected by shadow effects. All ROIs showed significant dependencies on the illumination angle. This supports our choice of using only noontime images. The computed values displayed not only effects of daily changes in illumination angles, but also revealed an effect of the seasonal shift in the sun's position. For ash-dominated ROIs and the Background Forest ROI, GF values generally decreased from June to July. Beech-dominated ROIs first showed a decrease between DOY 163 and 175, and afterwards remained constant. Furthermore, varying radiation values, leaf inclination angles and environmental conditions influenced the result. GF values were generally more sensitive to illumination angles than to variations in the diffuse radiation fraction.

Table 3. Mean and Maximum Relative Change of the Green Fraction Due to Clouds in 2005(a)

  ROI                 Mean ΔGF (%)   Maximum ΔGF (%)
  Ash 1               1.2            2.7
  Ash 2               0.5            1.3
  Ash 3               0.6            1.8
  Beech 1             0.6            1.1
  Beech 2             0.5            1.2
  Beech 3             0.8            1.8
  Background Forest   1.5            3.7

  (a) Results for each ROI (region of interest) are obtained by pairwise comparison of 16 images with different illumination conditions. GF, green fraction.

Figure 5. Variation of the green fraction (G/(G + R + B)) during DOY 163 (12 June), 175 (24 June), 181 (30 June), 198 (17 July) and 211 (30 July) for each ROI.

4. Discussion

[28] The image-based estimates of leaf emergence dates suggest that camera-based observation of a forest canopy provides temporally accurate and objective phenological information at the species level. In this study, we focused on the estimation of the leaf flushing date of two deciduous tree species. Image data provide a variety of information about vegetation development.
However, without knowing the spectral sensitivity of the camera's sensor, image color values do not allow us to draw firm conclusions with respect to biogeochemical processes such as photosynthetic efficiency [Kira and Kumura, 1985]. Nevertheless, the GF was found to be a reliable measure for the timing of biophysical processes such as leaf emergence and expansion in spring. Maximum GF values represent the vegetation cover fraction and maximum canopy closure within the ROI. Therefore the GF allows, within a specific uncertainty due to mixed species, objective statements about phenology and leaf emergence rates. These estimates can be compared with data obtained by ground-based field studies. Moreover, the GF describes optical leaf color, such as leaf darkening due to chlorophyll accumulation [e.g., Bray and Sanger, 1961; Bray et al., 1966] and changes in the leaf surface due to maturity and aging processes [e.g., Ito et al., 2006]. However, the decrease in the GF after complete leaf expansion, which was observable in all ROIs, may have been additionally influenced by seasonal changes of midday sun angles, as described in our uncertainty analysis.

[29] Owing to the mixed background signal from the variety of species, GF values did not provide a method for estimating the leaf area index [Chen and Black, 1992] for the ROIs. We found a generally lower green fraction in 2006. Field studies showed that 2006 was a mast year for the beech and possibly the ash. Fruiting beech trees generally have more transparent crowns because a proportion of the buds have developed into flowers and the leaf size is reduced [Innes, 1994]. This could be a possible explanation for the lower GF levels in these ROIs. However, the observed similarity of short-term GF variations demonstrates the larger influence of environmental conditions, e.g., illumination conditions. Therefore, it is also plausible that the lower GF values were caused by abiotic factors.
[30] Not only species-specific but also individual leaf unfolding dates could be observed. Individual trees experience different green-up dates due to genetic differences and their position within the forest canopy. The GF-based leaf unfolding date was up to one week later for the Ash 1 ROI in 2005 compared to the other ash-dominated ROIs. This ROI was mainly covered by a relatively free-standing ash tree, compared with the other ash trees within the camera's field of view. Although the delayed leaf unfolding could not be shown with GF values for 2006, the observation is consistent with the validation data. Our results agree with studies by Brügger et al. [2003] on the phenological variability within one species due to genetic differences and the different social positions of individual trees. Successive leafing processes moving from the lower to the upper parts of the foliage could be observed for beeches during field work but only showed up in the image data for 2006. The mean error for the estimation of the leaf unfolding dates was larger than the variation in the leaf unfolding dates for the different parts of the crown.

[31] Our study also found less consistent GF curves for the Background Forest ROI, located a few kilometers from the camera, compared to the other ROIs, which were next to the tower. The higher noise component of the GF for this ROI made an automated detection of phenological transition dates much more difficult. However, since the Background Forest ROI covers a set of different tree species, GF values probably represent the maximum time span of leaf development for the sampled trees.

[32] The GF is strongly influenced by the different species included within the ROIs. Therefore the installation of phenological cameras should be performed cautiously with respect to the different species included in the camera's field of view. Observation of understory green-up with camera images provides useful additional information.
For instance, since phenological studies with satellite images based on the NDVI [e.g., Tucker, 1979] are strongly influenced by the earlier green-up of understory and saplings, analyses of the images' color values can be used for objective satellite data validation. However, comparisons with satellite data should be performed with respect to the timing of phenological processes rather than as a quantitative comparison. For a realistic validation, and for the comparison of imagery from different cameras, the spectral calibration of the camera model in use would be required [Stevens et al., 2007].

[33] In recent years eddy covariance measurements have been performed worldwide at flux tower sites for calculating local and regional carbon dioxide and water balances. In this context, shifts in phenology could significantly affect annual carbon uptake and water cycling [e.g., Baldocchi et al., 2005; Churkina et al., 2005; Gu et al., 2003; Keeling et al., 1996; Morecroft et al., 2003; Niemand et al., 2005; Piao et al., 2007; White et al., 1999]. Therefore Baldocchi et al. [2005] suggested installing video cameras at flux tower sites for continuous monitoring of the canopy state. Since photographic cameras have a much better image resolution and quality than video cameras, our study suggests that still digital cameras may be better suited for this purpose. However, since measurements of the net ecosystem carbon dioxide exchange are strongly related to leaf gas exchange, and therefore to photosynthetic activity and biomass, comparisons with GF values from standard digital images should also be based on the timing of phenological phases
GF Coefficient of Variation in 2006 Due to Changing Illumination Angle Over the Day a Region of Interest DOY 163 DOY 175 DOY 181 DOY 198 DOY 211 Ash 1 1.9 1.4 1.1 1.2 1.2 Ash 2 2.0 1.5 1.2 0.4 1.1 Ash 3 2.0 1.5 1.2 0.9 0.9 Beech 1 1.7 1.4 1.0 0.6 0.9 Beech 2 1.9 1.7 1.7 1.1 1.1 Beech 3 2.8 2.5 3.0 2.0 1.8 Background Forest 2.0 1.8 1.4 0.8 1.0 a GF variation was calculated over 5 sunny days in the middle and at the end of the growing season. G04004 AHRENDS ET AL.: PHENOLOGY BY USE OF DIGITAL PHOTOGRAPHY 8 of 11 G04004 rather than on quantitative data. We believe that the instal- lation of digital cameras can be used to bridge the gap between CO2 flux measurements at ecosystem scale and satellite-based vegetation monitoring at a regional scale. Continuous time series of digital images of the forest canopy could complement terrestrial monitoring of gas and water exchanges at forest sites. [34] We found a considerable sensitivity of the GF to illumination conditions, mainly to changing sun angles. The interpretation of the result of the uncertainty analysis is difficult due to complex canopy structures [Leuchner et al., 2007] and the different geometric positions of the trees relative to the sun and the camera within the canopy at midday. Further research with respect to the quantification of these influences is needed. Changing fractions of differ- ent tree species over time also have to be considered. Illumination conditions need to be quantified as a base for comparisons. Visual estimates of tree phenology for ground truthing may be hampered by the same light and visibility problems as digital camera image analysis. Weather con- ditions are rarely uniform and especially fog, sun angle and general brightness have a significant influence on the color sensitivity of the human eye. 
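The illumination-induced variability summarized in Table 4 is a coefficient of variation (CV), the standard deviation expressed as a percentage of the mean. It can be reproduced in a few lines; the GF values below are invented for illustration, not taken from the study.

```python
import statistics

def coefficient_of_variation(values):
    """CV in percent: sample standard deviation divided by the mean."""
    return 100.0 * statistics.stdev(values) / statistics.mean(values)

# Hypothetical GF values sampled at several hours of one sunny day
gf_over_day = [0.40, 0.41, 0.405, 0.398, 0.41]
print(round(coefficient_of_variation(gf_over_day), 1))  # 1.4, same order as Table 4
```

A CV of one to three percent, as in Table 4, thus corresponds to sub-percent absolute scatter in the GF over the course of a day.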
Considering all these aspects together, it becomes clear that, although there is moderate uncertainty in the GF values, GF curves can be used to detect leaf emergence dates that, based on the validation data, are typically accurate to within a few days.

5. Conclusions

[35] We found that consumer-grade digital cameras offer the possibility of monitoring phenology with high temporal and spatial accuracy with respect to the phenological state of leaf emergence of individual deciduous trees. Our study clearly showed that changing illumination conditions introduce a moderate uncertainty in phenological estimates. By choosing pictures taken at a particular hour every day to monitor vegetation development, this uncertainty can be minimized. Furthermore, our results suggest that species-dependent phenological observations in mixed forests are successful if the overlapping signals of the different species covered by a specific analyzed ROI are detected and separated. The camera should be mounted at an appropriate distance from the observed canopy to minimize scatter in the color values, which hampers automated detection of phenological transition dates.

[36] Based on this case study for a European mixed forest, we anticipate that a network of digital cameras could provide inexpensive, spatially representative, and objective information with the required temporal resolution for phenological applications at the species level and for process-based ecosystem research. Future studies should analyze the application of digital camera images at sites with different dominant vegetation types, such as grassland or agricultural sites. From a technical viewpoint, noise removal, algorithm development (e.g., by optimized filtering methods) to automate the detection of phenological phases, and the standardization of these algorithms for public use should receive special attention.
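One simple way to automate the detection of a leaf-unfolding date from a GF curve is an amplitude-threshold rule: flag the first day on which the curve crosses a fixed fraction of its seasonal amplitude. This is a generic stand-in for the curve-based detection discussed above, not the authors' exact algorithm, and all values below are synthetic.

```python
def leaf_unfolding_doy(doys, gf, threshold_frac=0.5):
    """First day of year (DOY) at which GF crosses a fraction of its
    seasonal amplitude.  A simple illustrative detector, not the
    method used in the paper.
    """
    lo, hi = min(gf), max(gf)
    threshold = lo + threshold_frac * (hi - lo)
    for doy, value in zip(doys, gf):
        if value >= threshold:
            return doy
    return None

# Synthetic spring green-up curve sampled every 5 days
doys = list(range(100, 160, 5))
gf = [0.33, 0.33, 0.34, 0.36, 0.41, 0.44,
      0.46, 0.47, 0.47, 0.47, 0.47, 0.47]
print(leaf_unfolding_doy(doys, gf))  # 120
```

In practice the GF series would first be smoothed (e.g., by the optimized filtering mentioned above) so that day-to-day illumination noise does not trigger false crossings.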
Since these aspects are related to data processing and not to data acquisition, we are convinced that phenological monitoring with digital cameras is a suitable method for use in a network of automated phenological observation sites.

[37] Acknowledgments. This project is funded by the Federal Department of Home Affairs FDHA: State Secretariat for Education and Research SER, SBF CO5.0032. We are grateful to the National Center of Competence in Research on Climate (NCCR Climate) for supporting this project. This publication was supported by the Foundation Marchese Francesco Medici del Vascello. We acknowledge Paul Messerli for publication sponsorship. We are grateful to Philipp Vock (Forestry Office, Wettingen) for field work and validation data.

References
Adamsen, F. J., P. J. Pinter, E. M. Barnes, R. L. LaMorte, G. W. Wall, S. W. Leavitt, and B. A. Kimball (1999), Measuring wheat senescence using a digital camera, Crop Sci., 39, 719–724.
Ahl, D. E., S. T. Gower, S. N. Burrows, N. V. Shabanov, R. B. Myneni, and Y. Knyazikhin (2006), Monitoring spring canopy phenology of a deciduous broadleaf forest using MODIS, Remote Sens. Environ., 104, 88–95.
Arora, V. K., and G. J. Boer (2005), A parameterization of leaf phenology for the terrestrial ecosystem component of climate models, Global Change Biol., 11, 39–59.
Badeck, F. W., A. Bondeau, K. Böttcher, D. Doktor, W. Lucht, J. Schaber, and S. Sitch (2004), Responses of spring phenology to climate change, New Phytol., 162, 295–309.
Baldocchi, D. D., et al. (2005), Predicting the onset of net carbon uptake by deciduous forests with soil temperature and climate data: A synthesis of FLUXNET data, Int. J. Biometeorol., 49(6), 377–387.
Behrens, T., and W. Diepenbrock (2006), Using digital image analysis to describe canopies of winter oilseed rape (Brassica napus L.) during vegetative developmental stages, J. Agron. Crop Sci., 192, 295–302.
Brandhorst, A. L., and M.
Pinkhof (1935), Exact determination of phytophenological stages, Acte Phaenol., 3, 101–109.
Bray, J. R., and J. E. Sanger (1961), Light reflectivity as an index of chlorophyll content and production potential of various kinds of vegetation, Proc. Minn. Acad. Sci., 29, 222–226.
Bray, J. R., J. E. Sanger, and A. L. Archer (1966), The visible albedo of surfaces in central Minnesota, Ecology, 47(4), 524–531.
Brügger, R., M. Dobbertin, and N. Krauchi (2003), Phenological variation of forest trees, in Phenology: An Integrative Environmental Science, edited by M. D. Schwartz, pp. 255–268, Kluwer Acad., Dordrecht, Netherlands.
Burkard, R., P. Bützberger, and W. Eugster (2003), Vertical fogwater flux measurements above an elevated forest canopy at the Lägeren research site, Switzerland, Atmos. Environ., 37, 2979–2990.
Chen, J. M., and T. A. Black (1992), Defining leaf area index for non-flat leaves, Plant Cell Environ., 15(4), 421–429.
Chen, J. M., X. Li, T. Nilsonn, and A. Strahler (2000), Recent advances in geometrical optical modelling and its applications, Remote Sens. Rev., 18, 227–262.
Chuine, I., P. Yiou, N. Viovy, and B. Seguin (2004), Grape ripening as a past climate indicator, Nature, 432, 289–290.
Churkina, G., D. Schimel, B. H. Braswell, and X. Xiao (2005), Spatial analysis of growing season length control over net ecosystem exchange, Global Change Biol., 11, 1777–1787.
Cleland, E. E., I. Chuine, A. Menzel, H. A. Mooney, and M. D. Schwartz (2007), Shifting plant phenology in response to global change, Trends Ecol. Evol., 22(7), 357–365.
Davis, K. J., P. S. Bakwin, C. Yi, B. W. Berger, C. Zhao, R. M. Teclaw, and J. G. Isebrands (2003), The annual cycles of CO2 and H2O exchange over a northern mixed forest as observed from a very tall tower, Global Change Biol., 9, 1278–1293.
Dymond, J. R., and C. M. Trotter (1997), Directional reflectance of vegetation measured by a calibrated digital camera, Appl. Opt., 36(18), 4314–4319.
Eugster, W., K. Zeyer, M. Zeeman, P.
Michna, A. Zingg, N. Buchmann, and L. Emmenegger (2007), Nitrous oxide net exchange in a beech dominated mixed forest in Switzerland measured with a quantum cascade laser spectrometer, Biogeosciences Discuss., 4, 1167–1200.
Evans, G. C., and D. E. Coombe (1959), Hemispherical and woodland canopy photography and the light climate, J. Ecol., 47, 103–113.
Fisher, J. I., J. F. Mustard, and M. A. Vadeboncoeur (2006), Green leaf phenology at Landsat resolution: Scaling from the field to the satellite, Remote Sens. Environ., 100, 265–279.
Gamon, J. A., Y. Cheng, H. Claudio, L. MacKinney, and D. A. Sims (2006), A mobile tram system for systematic sampling of ecosystem optical properties, Remote Sens. Environ., 103, 246–254.
Graham, E., M. P. Hamilton, B. D. Mishler, P. W. Rundel, and M. H. Hansen (2006), Use of a networked digital camera to estimate net CO2 uptake of a desiccation-tolerant moss, Int. J. Plant Sci., 167(4), 751–758.
Gu, L., W. M. Post, D. Baldocchi, T. A. Black, S. B. Verma, T. Vesala, and S. C. Wofsy (2003), Phenology of vegetation photosynthesis, in Phenology: An Integrative Environmental Science, edited by M. D. Schwartz, pp. 467–485, Kluwer Acad., Dordrecht, Netherlands.
Holben, B. N. (1986), Characteristics of maximum-value composite images from temporal AVHRR data, Int. J. Remote Sens., 7(11), 1417–1434.
Holben, B., and D. Kimes (1986), Directional reflectance response in AVHRR red and near-IR bands for three cover types and varying atmospheric conditions, Remote Sens. Environ., 19, 213–236.
Innes, J. L. (1994), The occurrence of flowering and fruiting on individual trees over 3 years and their effects on subsequent crown conditions, Trees (Berl.), 8, 139–150.
Ito, A., H. Muraoka, H. Koizumi, N. Saigusa, S. Murayama, and S.
Yamamoto (2006), Seasonal variation in leaf properties and ecosystem carbon budget in a cool-temperate deciduous broad-leaved forest: Simulation analysis at Takayama site, Japan, Ecol. Res., 21, 137–149.
Jackson, R. D., P. M. Teillet, P. N. Slater, G. Fedosejevs, M. F. Jasinski, J. K. Aase, and M. S. Moran (1990), Bidirectional measurements of surface reflectance for view angle corrections of oblique imagery, Remote Sens. Environ., 32, 189–202.
Janeiro, F. M., F. Wagner, and A. M. Silva (2006), Visibility measurements using a commercial digital camera, paper presented at Conference on Visibility, Aerosols, and Atmospheric Optics, Assoc. for Aerosol Res., UNIQA Group Austria, Vienna, Austria, 4–6 Sept.
Jonckheere, I., S. Fleck, K. Nackaerts, B. Muys, P. Coppin, M. Weiss, and F. Baret (2004), Review of methods for in situ leaf area index determination, Part I. Theories, sensors and hemispherical photography, Agric. For. Meteorol., 121, 19–35.
Kawashima, S., and M. Nakatani (1998), An algorithm for estimating chlorophyll content in leaves using a video camera, Ann. Bot. (Lond.), 81, 49–54.
Keeling, C. D., J. F. S. Chin, and T. P. Whorf (1996), Increased activity of northern vegetation inferred from atmospheric CO2 measurements, Nature, 382, 146–149.
Kharin, N. G. (1976), Mathematical models in phenology, J. Biogeogr., 3, 357–364.
Kikuzawa, K. (1983), Leaf survival of woody plants in deciduous broad-leaved forests. 1. Tall trees, Can. J. Bot., 61, 2133–2139.
Kikuzawa, K. (1989), Ecology and evolution of phenological pattern, leaf longevity and leaf habit, Evol. Trends Plants, 3, 105–110.
Kikuzawa, K. (2003), Phenological and morphological adaptations to the light environment in two woody and two herbaceous plant species, Funct. Ecol., 17(1), 29–38.
Kira, T., and A. Kumura (1985), Dry matter production and efficiency in various types of plant canopies, in Plant Research and Agroforestry, edited by P. A. Huxley, pp. 347–364, Int. Counc. for Res.
in Agrofor., Nairobi, Kenya.
Knohl, A., E. Schulze, O. Kolle, and N. Buchmann (2003), Large carbon uptake by an unmanaged 250-year-old deciduous forest in Central Germany, Agric. For. Meteorol., 118, 151–167.
Lawrence, D. M., and J. M. Slingo (2004a), An annual cycle of vegetation in a GCM. Part I: Implementation and impact on evaporation, Clim. Dyn., 22(2–3), 87–105.
Lawrence, D. M., and J. M. Slingo (2004b), An annual cycle of vegetation in a GCM. Part II: Global impacts on climate and hydrology, Clim. Dyn., 22(2–3), 107–122.
Leuchner, M., A. Menzel, and H. Werner (2007), Quantifying the relationship between light quality and light availability at different phenological stages within a mature mixed forest, Agric. For. Meteorol., 142, 35–44.
Lieth, H. H. (1976), Contributions to phenology seasonality research, Int. J. Biometeorol., 20(3), 197–199.
Lu, L., R. A. Pielke Sr., G. Liston, W. Parton, D. Ojima, and M. Hartman (2001), Implementation of a two-way interactive atmospheric and ecological model and its application to the central United States, J. Clim., 14, 900–919.
Menzel, A. (2002), Phenology: Its importance to the global change community, Clim. Change, 54, 379–385.
Moore, K. E., D. R. Fitzjarrald, R. K. Sakai, M. L. Goulden, J. W. Munger, and S. C. Wofsy (1996), Seasonal variation in radiative and turbulent exchange at a deciduous forest in central Massachusetts, J. Appl. Meteorol., 35, 122–134.
Morecroft, M. D., V. J. Stokes, and J. I. L. Morison (2003), Seasonal changes in the photosynthetic capacity of canopy oak (Quercus robur) leaves: The impact of slow development on annual carbon uptake, Int. J. Biometeorol., 47, 221–226.
Niemand, C., B. Köstner, H. Prasse, T. Grünwald, and C. Bernhofer (2005), Relating tree phenology with annual carbon fluxes at Tharandt forest, Meteorol. Z., 14(2), 197–202.
Nobis, M., and U. Hunziker (2005), Automatic thresholding for hemispherical canopy-photographs based on edge detection, Agric. For.
Meteorol., 128, 243–250.
Pellika, P. (2001), Application of vertical skyward wide-angle photography and airborne video data for phenological studies of beech forests in the German Alps, Int. J. Remote Sens., 22(14), 2675–2700.
Peñuelas, J., and I. Filella (2001), Responses to a warming world, Science, 294, 793–795.
Piao, S., P. Friedlingstein, P. Ciais, N. Viovy, and J. Demarty (2007), Growing season extension and its impact on terrestrial carbon cycle in the Northern Hemisphere over the past 2 decades, Global Biogeochem. Cycles, 21, GB3018, doi:10.1029/2006GB002888.
Qi, J., M. S. Moran, F. Cabot, and G. Dedieu (1995), Normalization of sun/view angle effects using spectral albedo-based vegetation indices, Remote Sens. Environ., 52, 207–217.
Reed, B. C., J. F. Brown, D. VanderZee, T. R. Loveland, J. W. Merchant, and D. O. Ohlen (1994), Measuring phenological variability from satellite imagery, J. Veg. Sci., 5(5), 703–714.
Rich, P. M. (1988), Video image analysis of hemispherical canopy photography, in First Special Workshop on Videography, edited by P. W. Mausel, pp. 84–95, American Soc. for Photogramm. and Remote Sens., Terre Haute, Ind.
Rich, P. M. (1990), Characterizing plant canopies with hemispherical photography, in Instrumentation for Studying Vegetation Canopies for Remote Sensing in Optical and Thermal Infrared Regions, edited by N. S. Goel and J. M. Norman, Remote Sens. Rev., 5(1), 13–29.
Rich, P. M., D. B. Clark, D. A. Clark, and S. Oberbauer (1993), Long-term study of solar radiation regimes in a tropical wet forest using quantum sensors and hemispherical photography, Agric. For. Meteorol., 65, 107–127.
Richardson, A. D., J. P. Jenkins, B. H. Braswell, D. Y. Hollinger, S. V. Ollinger, and M. Smith (2007), Use of digital webcam images to track spring green-up in a deciduous broadleaf forest, Oecologia, 152, 323–334.
Rocha, A. V., H. B. Su, C. S. Vogel, H. P. Schmid, and P. S.
Curtis (2004), Photosynthetic and water use efficiency responses to diffuse radiation by an aspen-dominated northern hardwood forest, For. Sci., 50(6), 793–801.
Rutishauser, T., J. Luterbacher, F. Jeanneret, C. Pfister, and H. Wanner (2007), A phenology-based reconstruction of inter-annual changes in past spring seasons, J. Geophys. Res., 112, G04016, doi:10.1029/2006JG000382.
Schaaf, C. B., and A. H. Strahler (1994), Validation of bidirectional and hemispherical reflectances from a geometric-optical model using ASAS imagery and pyranometer measurements of a spruce forest, Remote Sens. Environ., 49, 138–144.
Schulze, E. (1970), Der CO2-Gaswechsel der Buche, Flora, 159, 177–232.
Schwartz, M. (1994), Monitoring global change with phenology: The case of the spring green wave, Int. J. Biometeorol., 38(1), 18–22.
Sparks, T. H., and A. Menzel (2002), Observed changes in seasons: An overview, Int. J. Climatol., 22, 1715–1725.
Sparks, T. H., K. Huber, and P. J. Croxton (2006), Plant development scores from fixed-date photographs: The influence of weather variables and recorder experience, Int. J. Biometeorol., 50, 275–279.
Stevens, M., C. A. Párraga, I. C. Cuthill, J. C. Partridge, and T. S. Troscianko (2007), Using digital photography to study animal coloration, Biol. J. Linn. Soc. Lond., 90, 211–237.
Studer, S., R. Stöckli, C. Appenzeller, and P. L. Vidale (2007), A comparative study of satellite and ground-based phenology, Int. J. Biometeorol., 51(5), 405–414.
Tucker, C. J. (1979), Red and photographic infrared linear combinations for monitoring vegetation, Remote Sens. Environ., 8, 127–150.
Vanamburg, L. K., M. J. Trlica, R. M. Hoffer, and M. A. Weltz (2006), Ground based digital imagery for grassland biomass estimation, Int. J. Remote Sens., 27(5), 939–950.
Wang, S., S. G. Leblanc, R. Fernandes, and J. Cihlar (2002), Diurnal variation of direct and diffuse radiation and its impact on surface albedo, IEEE Int. Geosci. Remote Sens. Symp., 6, 3224–3226.
White, M.
A., S. W. Running, and P. E. Thornton (1999), The impact of growing-season length variability on carbon assimilation and evapotranspiration over 88 years in the eastern U.S. deciduous forest, Int. J. Biometeorol., 42, 139–145.
Zehm, A., M. Nobis, and A. Schwabe (2003), Multiparameter analysis of vertical vegetation structure based on digital image processing, Flora, 198, 142–160.
Zhang, X., M. A. Friedl, C. B. Schaaf, and A. H. Strahler (2004), Climate controls on vegetation phenological patterns in northern mid- and high latitudes inferred from MODIS data, Global Change Biol., 10, 1133–1145.
Zhang, X., M. A. Friedl, and C. B. Schaaf (2006), Global vegetation phenology from Moderate Resolution Imaging Spectroradiometer (MODIS): Evaluation of global patterns and comparison with in situ measurements, J. Geophys. Res., 111, G04017, doi:10.1029/2006JG000217.
Zhang, X., D. Tarpley, and J. T. Sullivan (2007), Diverse responses of vegetation phenology to a warming climate, Geophys. Res. Lett., 34, L19405, doi:10.1029/2007GL031447.
Zhou, Q., and M. Robson (2001), Automated rangeland vegetation cover and density estimation using ground digital images and a spectral-contextual classifier, Int. J. Remote Sens., 22(17), 3457–3470.
Zhou, Q., M. Robson, and P. Pilesji (1998), On the ground estimation of vegetation cover in Australian rangelands, Int. J. Remote Sens., 19(9), 1815–1820.

H. E. Ahrends, R. Brügger, F. Jeanneret, P. Michna, J. Schenk, and H. Wanner, Institute of Geography, University of Bern, Hallerstrasse 12, CH-3012 Bern, Switzerland. (ahrends@giub.unibe.ch)
W. Eugster, Institute of Plant Sciences, ETH Zürich, Universitätsstrasse 2, CH-8092 Zürich, Switzerland.
R. Stöckli, Climate Analysis, Climate Services, Federal Office of Meteorology and Climatology MeteoSwiss, Krähbühlstrasse 58, CH-8044 Zürich, Switzerland.
work_b7uvvcnqvzh6tep5x3ebsc3hxi ---- None

work_bdoynydqwnag5g22pkr2mipvj4 ---- An outbreak of human external ophthalmomyiasis due to Oestrus ovis in southern Afghanistan.
DOI:10.1086/588046 Corpus ID: 205986868
@article{Dunbar2008AnOO, title={An outbreak of human external ophthalmomyiasis due to Oestrus ovis in southern Afghanistan.}, author={J. Dunbar and B. Cooper and T. Hodgetts and Halabi Yskandar and P. V. van Thiel and S. Whelan and J. Taylor and David R Woods}, journal={Clinical infectious diseases : an official publication of the Infectious Diseases Society of America}, year={2008}, volume={46 11}, pages={ e124-6 } }
J. Dunbar, B. Cooper, +5 authors David R Woods. Published 2008. Clinical infectious diseases : an official publication of the Infectious Diseases Society of America.
Oestrus ovis is the most common cause of human ophthalmomyiasis, and infection is often misdiagnosed as acute conjunctivitis. Although it typically occurs in shepherds and farmers, O. ovis ophthalmomyiasis has also been reported in urban areas. We report the first case study of O. ovis infection from Afghanistan.
work_bflf3b2frrgunnuoc4qxp6z4zu ---- untitled
Lack of Lower Extremity Hair Not a Predictor for Peripheral Arterial Disease P eripheral arterial disease (PAD) afflicts 8 to 12 mil-lion Americans, but nearly 75% of them areasymptomatic.1 Physicians rely on history and physical examination to determine which patients re- quire further evaluation. Physical findings that have been associated with arterial disease include a unilaterally cool extremity, skin atrophy and lack of hair, and abnormal pedal pulses, among others.2 The disease spectrum ranges from exertional calf pain to chronic limb ischemia ne- cessitating amputation. The suspicion of arterial disease often leads to further examination of the lower extrem- ity vascular supply. Measurement of the ankle-brachial index (ABI) is a noninvasive method for detecting PAD and is about 95% sensitive and specific when the diag- nostic cutoff is 0.9.3 In general, the accepted ABI for the presence of PAD is lower than 0.9, and that for severe disease is lower than 0.7. The present observational case-control study was un- dertaken based on the clinical observation that many men seem to have hairless lower extremities. Our goal was to determine whether this physical sign is a predictor of PAD. Methods. After obtaining institutional review board ap- proval, we enrolled 50 subjects from Hershey Medical Cen- ter in the study. Twenty-five control subjects were re- cruited from various outpatient clinics and had documented normal ABI measurements (�0.9). Twenty-five subjects with PAD were recruited from the vascular clinic and had either an ABI lower than 0.9 or abnormal lower extremity arterial duplex findings. Subjects with ABIs lower than 0.9 due to disease other than PAD were excluded. Subjects with diabetes who had abnormal ABIs were included in the disease group. Due to arterial calcifica- tion, the vessels in subjects with diabetes may be less com- pressible and so might generate falsely elevated indices. 
Thus, the vascular disease of patients with diabetes is likely worse than the measured value.

Lower extremity hairs were counted on all subjects. First, a measurement was taken from the anterior tibial tuberosity to the proximal portion of the lateral malleolus. The distance was divided by 3, and hairs were counted at a location one-third of the distance proximal to the lateral malleolus. Scissors were used to trim hairs at this location to several millimeters in length. Temporary black hair dye was then applied to the area for approximately 1 minute. Excess dye was removed, and we took 2 pictures of the area using a magnified digital photography technique, which involved pressing the camera lens against the skin to make full contact while the photograph was taken. All photographs were taken with a Nikon D80 camera (Nikon USA Inc, Melville, New York), stored on a memory card, and uploaded to a computer where Photoshop (Adobe Systems Inc, San Jose, California) was used to crop them to standard dimensions of 2572 × 1564 pixels. Hair count analyses were performed, and data were categorized as either leg hair present (1 or more hairs present in the examined field) or leg hair absent (no hairs present in the examined field). This assessment was performed on data from each of the 50 subjects. Statistical analysis was then completed using a χ2 analysis.

Results. Of the 50 patients recruited for this study, 25 had existing PAD, and 25 were healthy controls (Table). Subjects in the control group had a mean age of 65 years (age range, 50-80 years). Those in the PAD group had a mean age of 75 years (age range, 55-88 years). Sixty-four percent of patients with PAD had absent leg hair, and 40% of patients without PAD had absent leg hair (Table). Using χ2 analysis, we found no statistically significant relationship between disease presence and absence of lower extremity hair (P = .09).

Comment.
Peripheral arterial disease involves atherosclerotic occlusions in the arterial system distal to the aortic bifurcation.4 It is mainly a disorder of advancing age, and one’s risk of PAD is increased by cigarette smoking, diabetes, hypercholesterolemia, and hypertension.4 Because many patients are asymptomatic, physicians must recognize the early signs and take appropriate action. The goal of the present study was to determine whether the absence of lower extremity hair is a useful predictor of PAD. No statistically significant difference was found between the numbers of diseased patients without leg hair (n = 16) and control patients without leg hair (n = 10) (P = .09), suggesting that a lack of lower extremity hair is not useful as a solitary predictor of disease. Therefore, we believe that it is best to consider this examination finding in the context of a patient’s overall presentation and risk factors for PAD.

Our study has several limitations. The sample size was only 50 patients. In addition, no demographic information (including the presence of comorbidities such as diabetes, hypertension, or smoking) was recorded.

Table. Presence of Lower Extremity Hair in Patients With and Without PAD

Lower Extremity Hair    With PAD,a No. (%) (n = 25)    Without PAD, No. (%) (n = 25)
Present                 9 (36)                         15 (60)
Absent                  16 (64)                        10 (40)

Abbreviation: PAD, peripheral arterial disease.
a By χ2 analysis, no statistically significant relationship was found between disease presence and absence of lower extremity hair (P = .09).

Jonathan Kantor, MD, MSCE
(REPRINTED) ARCH DERMATOL/VOL 145 (NO. 12), DEC 2009 WWW.ARCHDERMATOL.COM 1456
©2009 American Medical Association. All rights reserved.

Accepted for Publication: June 19, 2009.
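The χ2 result quoted in the letter can be checked directly from the counts in the Table (hair present 9 vs 15, hair absent 16 vs 10). The sketch below is a plain-Python illustration of a Pearson chi-square test on a 2×2 table without continuity correction, not the authors' original analysis; the helper name `chi_square_2x2` is mine:

```python
import math

def chi_square_2x2(a, b, c, d):
    """Pearson chi-square (no continuity correction) for the 2x2 table
    [[a, b], [c, d]]; returns (statistic, two-sided p-value for 1 df)."""
    n = a + b + c + d
    row1, row2 = a + b, c + d
    col1, col2 = a + c, b + d
    stat = 0.0
    for obs, r, col in ((a, row1, col1), (b, row1, col2),
                        (c, row2, col1), (d, row2, col2)):
        exp = r * col / n                 # expected count under independence
        stat += (obs - exp) ** 2 / exp
    # For 1 degree of freedom, P(chi2 > stat) = erfc(sqrt(stat / 2))
    p = math.erfc(math.sqrt(stat / 2))
    return stat, p

# Counts from the Table: hair present/absent in the PAD and control groups
stat, p = chi_square_2x2(9, 15, 16, 10)
print(round(stat, 2), round(p, 2))  # roughly 2.88 and 0.09
```

With Yates continuity correction the statistic shrinks and the P value grows, so the non-significant conclusion would be unchanged.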
Author Affiliations: Department of Dermatology, Penn State College of Medicine, Hershey, Pennsylvania (Mr Brueseke and Drs Macrino and Miller); and Department of Radiology, Western Pennsylvania Hospital, Pittsburgh (Dr Macrino).
Correspondence: Dr Miller, Penn State College of Medicine, 500 University Dr, HU 14, Hershey, PA 17033-0850 (jmiller4@hmc.psu.edu).
Author Contributions: All authors had full access to all the data in the study and take responsibility for the integrity of the data and accuracy of the data analysis. Study concept and design: Brueseke, Macrino, and Miller. Acquisition of data: Brueseke, Macrino, and Miller. Analysis and interpretation of data: Brueseke, Macrino, and Miller. Drafting of the manuscript: Brueseke, Macrino, and Miller. Critical revision of the manuscript for important intellectual content: Brueseke and Miller. Administrative, technical, and material support: Brueseke, Macrino, and Miller.
Financial Disclosure: None reported.
Funding/Support: This study was supported in part by The Pennsylvania State University Department of Dermatology.
Role of the Sponsors: The sponsor had no role in the design or conduct of the study or in the collection, analysis, or interpretation of the manuscript.

1. American Heart Association. PAD quick facts. http://www.americanheart.org/presenter.jhtml?identifier=3020248. Accessed March 31, 2007.
2. Sontheimer DL. Peripheral vascular disease: diagnosis and treatment. Am Fam Physician. 2006;73(11):1971-1976.
3. Hummel BW, Hummel BA, Mowbry A, Maixner W, Barnes RW. Reactive hyperemia vs. treadmill exercise testing in arterial disease. Arch Surg. 1978;113(1):95-98.
4. Meijer WT, Grobbee DE, Hunink MG, Hofman A, Hoes AW. Determinants of peripheral arterial disease in the elderly: the Rotterdam study. Arch Intern Med. 2000;160(19):2934-2938.
COMMENTS AND OPINIONS

Association Between Thin Melanomas and Atypical Nevi in Middle-aged and Older Men Possibly Attributable to Heightened Patient Awareness

We read with interest the article “Melanoma in Middle-aged and Older Men” by Swetter et al.1 As the authors noted, men with atypical nevi presented with thinner melanomas than those who lacked atypical nevi. According to the study data, median tumor thickness in men with atypical nevi was 0.6 mm, whereas the median thickness was 1.15 mm in men without atypical nevi (P = .02). The authors suggest that men with atypical nevi may have greater knowledge and awareness of melanoma risk, resulting in earlier detection of their melanomas. Another explanation, suggested by Liu et al,2 is that patients with atypical or increased numbers of moles have more indolent melanomas and thus present with thinner tumors.

Methods. To reconcile these alternate explanations, we analyzed the New York University (NYU) database of patients with melanoma prospectively enrolled from 1972 through 1982, many years prior to our colleagues’ publication of the melanoma ABCD rule (asymmetry, borders, colors, and diameter ≥6 mm)3 and during an era of much less public awareness of the importance of early melanoma detection. Each patient in the NYU cohort was assessed for numerous clinical factors, including number of nevi.4 However, these patients were enrolled before the significance of atypical nevi was recognized as a risk factor for melanoma, so counts of atypical nevi were not recorded for any patient in the database. Multiple studies, including Roush and Barnhill5 and Nordlund et al,6 have found that individuals with atypical nevi have a higher number of total nevi. These publications suggest that an analysis of number of nevi and median tumor thickness is comparable to the analysis of atypical nevi and tumor thickness performed by Swetter et al.1

Results.
The accompanying Table and box plot (Figure) summarize data from all men older than 40 years in our cohort (n = 419) and show that tumor thickness did not vary significantly with the number of moles (P > .99 in the Kruskal-Wallis nonparametric analysis of variance test). These data suggest that melanomas arising in patients with increased numbers of nevi are not inherently more indolent than melanomas arising in patients with an average (or less than average) number of nevi.

Comment. Although these data contrast with those of Swetter et al,1 taken together these findings suggest that increased public awareness and educational efforts may have led to earlier detection of melanoma. Swetter et al demonstrated that men who were aware of melanoma, understood the importance of skin examinations, and showed an overall interest in their health were more likely to present with thinner tumors. At our own institution, we have noted a substantial decrease in tumor thickness

Table. Tumor Thickness by Number of Moles

Group    Patients, No.    Moles, No.    Tumor Thickness, Median, mm
1        9                0             1.40
2        328              1-25          1.60
3        85               26-100        1.40
4        36               >100          1.85

Taylor J. Brueseke, BS
Sheri Macrino, MD
Jeffrey J. Miller, MD

Procedia - Social and Behavioral Sciences 123 (2014) 28-34
1877-0428 © 2013 The Authors. Published by Elsevier Ltd. Selection and peer-review under responsibility of the Organizing Committee of TTLC2013.
doi: 10.1016/j.sbspro.2014.01.1394

ScienceDirect - TTLC 2013

Perception of Cambridge A-Level students with respect to their technology engagement

Dr. Purushothaman Ravichandran*
Kolej Yayasan UEM, Lemabah Beringin, Perak

Abstract

This research is an attempt to capture students’ technology engagement that fosters meaningful learning among students of Cambridge A-Level programs at Kolej Yayasan UEM (KYUEM), Malaysia. Three hundred and ninety students were asked to answer an online questionnaire, which was designed using Google Docs, and the results generated were further analysed using SPSS 11.5. The hypothesis testing showed no significant difference in the usage of computers in leisure activities, such as Facebook, blogging, electronic mail and internet browsing, at home and in college. Further, results showed that the majority of the students make good use of the internet and electronic mail facilities at college. However, the proportion of students who agreed or disagreed that computers were accessible to them whenever needed within the college campus was observably equal.

Keywords: Technology engagement; blogging; Google Docs

1. Background

This study examines the perceptions and preparedness of A-Level students at Kolej Yayasan UEM, Malaysia, by capturing the students’ engagement with technology in their learning activities. Extensive published evidence affirms that engagement and motivation are critical elements in student success and learning. Researchers agree that engaged students learn more, retain more, and enjoy learning activities more than students who are not engaged (Dowson and McInerney, 2001; Hancock and Betts, 2002; Lumsden, 1994). Many school-level studies have identified higher levels of student engagement as important predictors of scores on standardized achievement tests,

* Corresponding author.
Tel.: +6 017-654-2121; fax: +6 03-64-60-1234
E-mail address: computerravi@hotmail.com

Available online at www.sciencedirect.com
Open access under CC BY-NC-ND license.

classroom learning and grades, and student persistence (National Research Council, 2000). Therefore, based on these research notions, students’ technology engagement is conclusively defined as the level of students’ technology ability that fosters students’ participation and intrinsic interest in their meaningful learning activities within a school environment. Thus, a conceptual model design of students’ technology engagement in the context of this study is shown in Fig. 1.

Fig. 1. Conceptual model design of students’ technology engagement

2. Literature Review

2.1. Students’ technology engagement and learning environment

Student engagement is an important factor for student motivation during their learning process. The more students are motivated to learn, the more likely it is that they will be successful in their efforts. Many factors influence student motivation. These include teacher motivation, pedagogical strategies, availability of learning tools, technology support and a good learning environment. However, to sustain the motivation gained by the students, it is important that the school environment in which their learning process evolves be in line with the students’ expectations. Therefore, it is mandatory for each student to get such support from the school where his learning begins to emerge.
The most common forms of engagement by Australian children have been found to be electronic mail and information searches (Aisbett, 2001). Similarly, Livingstone and Bober (2004) reported United Kingdom youth (9 to 19 year olds) as using the internet to communicate, for peer-to-peer interaction and to seek information. Also, a report published by the National School Board Association (2007) found that 96 percent of youth in this age range have used social networking tools at some time, with their average engagement with them rivaling time spent watching TV at 9 hours a week. Yet perhaps the most stunning statistic of their study is that the topic of most conversation at these sites is education: 60 percent of the students surveyed said they use the sites to talk about education topics, and more than 50 percent use them to talk about specific schoolwork. In another study, Look (2005) reviewed 219 studies on the use of technology in education and consistently found that students in technology-rich environments experienced positive effects on achievement in all subject areas. Thus, this study attempts to find students’ technology engagement that fosters students’ participation and intrinsic interest in their meaningful learning activities within the school environment.

3. Research Methodology

Since this study used a quantitative research design, the study is descriptive and sets out to describe behaviour by measuring certain variables. Previous research questionnaires that had been used to capture students’ and teachers’ perceptions of technology usage in the college were compared with some universally accepted research questionnaires, such as Becta, to finalize the final version of the questionnaires used in this study. The data obtained from the respondents of the online questionnaire were then analyzed using SPSS version 11.5.

4.
Population and Sampling Technique

The surveyed population in this study consisted of all students undergoing their A-Levels at a residential college in Malaysia. A total of 390 students were asked to respond to an online questionnaire. A purposive sampling method was used, as it is a sampling method in which elements are chosen based on the purpose of the study. Purposive sampling may also involve studying the entire population of some limited group. Thus, all the first year and second year A-Level students at Kolej Yayasan UEM, Malaysia were involved in this study.

5. Research Instrument

Although 4 different questionnaires were used among 4 different groups, in the context of this study, this article focuses on the information gathered from the students’ questionnaire only. The students’ questionnaire consisted of 11 main questions, which had sub-level options, and one open-ended question. Likert rating scales with five levels were used, with options ranging from strongly disagree to agree. The questions for students focused on (1) student perceptions of the frequency with which specific technologies are used in their learning activities and (2) student perceptions of the impact of technology on their learning activities. Thus, the data captured from the respondents of the online questionnaire would serve as one of the benchmarks for implementing an ICT strategic planning process for the college. Further, this would also enable the college to prioritize efforts needed to establish a technology-based learning system that would be appropriate for the pedagogical community.

6. Research Questions

The main research question for this study is to investigate how far the present student community is able to engage in meaningful learning with the current level of technological support provided by the college.
From this main question emerged the following sub-questions:

Is there any difference between students’ perception of using computers and its influence on their learning activities in college?
Is there any difference between students’ perception of using computers for learning activities in college and its use at home?
Is there any difference in students’ perception of using computers in college with respect to gender?

6.1. Engagement of students in different computer related learning activities at KYUEM

Table 1 summarizes the usage of different computer applications/activities that students are engaged in while they are at KYUEM. The results show that more than 50% of the students use computers at KYUEM mainly to browse the internet, to check their e-mails, to find information on Wikipedia, to access their Facebook accounts and to make Word documents/PowerPoint presentations/Excel spreadsheets. Further, it was observed that the usage of Microsoft Word, PowerPoint, blogging and Bing was slightly higher among the female students. On the other hand, it was observed that there was slightly higher participation of male students in usage of the internet, e-mails, Wikipedia, Facebook, Excel, online project collaboration, digital photography/film making and database usage. Table 3 presents data on the involvement of students in different activities while they use computers at KYUEM.

Table 1.
Common usage of computers at KYUEM

Application                    Male      Female    Overall
WWW                            96.80%    95.90%    96.30%
Email                          95.20%    94.60%    94.90%
Wikipedia                      94.40%    93.90%    94.10%
Facebook                       95.20%    91.20%    93.00%
Microsoft Word                 91.20%    92.60%    91.90%
Others                         78.40%    86.50%    82.80%
PowerPoint                     80.80%    83.10%    82.10%
Excel                          55.20%    54.10%    54.60%
Blogging                       46.40%    48.00%    47.30%
Online collaboration project   46.40%    41.20%    43.60%
Digital photography            45.60%    36.50%    40.70%
Database                       39.20%    38.50%    38.80%
Bing                           29.60%    33.80%    31.90%
Digital film making            34.40%    27.70%    30.80%

7. Hypothesis Testing

7.1. Students’ perception of using computers and its influence on their learning activities

Table 2 shows the proportion of students who support, reject, or are unable to comment on the idea that the use of computers has improved/influenced their learning activities. It was observed that the majority of students (those who agreed) significantly support the idea that computers are essential in their learning process and help them improve their learning activities by making them more engaging and interesting. Further, it was observed that close to one-fifth of the students disagreed that they make good use of email and the internet at home; however, this proportion was significantly less than the proportion of students who agreed that students make good use of emails and the internet at KYUEM. Lastly, 50% of the students agreed that they usually get access to computers at KYUEM whenever they need to, compared to 45% of the students who disagreed. However, the difference in the proportion of users who agree and disagree on the accessibility of computers was not statistically significant. Thus, we conclude that computers are essential in the learning process and improve it by making it more interesting. Also, the majority of the students make good use of the internet and e-mail facility.
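The Z-values and p-values reported in Tables 2, 3 and 4 are consistent with pooled two-proportion z tests. The paper does not report raw respondent counts, so the sketch below assumes roughly 273 respondents (125 male and 148 female); these assumed figures, and the helper `two_prop_z`, are mine, chosen because they approximately reproduce the published statistics:

```python
import math

def two_prop_z(p1, n1, p2, n2):
    """Pooled two-proportion z test; returns (z, two-sided p-value)."""
    x1, x2 = p1 * n1, p2 * n2                  # implied success counts
    pooled = (x1 + x2) / (n1 + n2)             # pooled proportion
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p = math.erfc(abs(z) / math.sqrt(2))       # two-sided normal p-value
    return z, p

# Assumed sample sizes (not stated in the paper): ~273 respondents in all.
z, p = two_prop_z(0.9194, 273, 0.8498, 273)    # word processing: college vs home
print(z, p)                                    # near the published 2.55, 0.0109
z, p = two_prop_z(0.9520, 125, 0.9122, 148)    # Facebook use: male vs female
print(z, p)                                    # near the published 1.29, 0.1975
```

That the assumed counts recover the tabulated statistics to two decimal places suggests this is the right family of tests, but the exact respondent numbers remain an inference, not a reported fact.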
However, the proportion of students who agreed or disagreed that computers were accessible to them whenever needed was observably equal.

Table 2. Students’ perception of using computers and its influence on their learning activities

Statement                                                       Agree     Disagree   Unable to comment   Z-value   p-value
It is essential that I use computers in my learning.            95.24%    2.56%      2.20%               21.66     <0.0001
Using computers at KYUEM improves my learning.                  91.58%    4.03%      4.39%               20.48     <0.0001
When I use computers at KYUEM, learning is more interesting.    90.48%    5.49%      4.03%               19.87     <0.0001
I make good use of e-mail facility at KYUEM.                    79.85%    15.75%     4.40%               14.99     <0.0001
I make good use of the internet at KYUEM.                       87.55%    9.89%      2.56%               18.15     <0.0001
However, there was no significant difference in the usage of computers in leisure activities such as facebook, blogging, email and internet browsing at home and at KYUEM. Table 3.Students’ perception of using computers for learning activities in college and its usage at home KYUEM user proportion Home user proportion Z-Value p-value Word processing** 91.94% 84.98% 2.55 0.0109 PowerPoint 82.05% 78.39% 1.07 0.2827 WWW 96.34% 93.77% 1.38 0.1670 Online collaboration** 43.59% 25.27% 4.50 <0.0001 Digital photography** 40.66% 32.23% 2.05 0.0408 Digital filmography** 30.77% 17.58% 3.60 0.0003 Email 94.87% 92.31% 1.22 0.2213 Blogging 47.25% 42.12% 1.21 0.2282 Facebook 93.04% 89.01% 1.65 0.0995 Others** 82.78% 68.13% 3.98 <0.0001 7.3. Student’s perception of using computers in college with respect to gender The proportion of male and female students involved in different computer related learning activities is summarized in Table 4. The results indicate that the proportion of male and female students engaged in different learning activities involving computers at KYUEM do not differ significantly. Thus, we conclude that there is no difference in the proportion of male and female students engaged in different learning activities involving computers at KYUEM. Table 4. 
Students’ perception of using computers for learning activities in college with respect to gender

Activity               Male      Female    Z-value   p-value
Microsoft Word         91.20%    92.57%    -0.41     0.6792
Excel                  55.20%    54.05%    0.19      0.8497
PowerPoint             80.80%    83.11%    -0.50     0.6205
Database               39.20%    38.51%    0.12      0.9077
E-mail                 95.20%    94.59%    0.23      0.8212
WWW                    96.80%    95.95%    0.37      0.7082
Online Collaboration   46.40%    41.22%    0.86      0.3895
Digital Photography    45.60%    36.49%    1.53      0.1267
Digital Film making    34.40%    27.70%    1.19      0.2323
Blogging               46.40%    47.97%    -0.26     0.7954
Facebook               95.20%    91.22%    1.29      0.1975
Wiki                   94.40%    93.92%    0.17      0.8661
Bing                   29.60%    33.78%    -0.74     0.4598
Others                 78.40%    86.49%    -1.76     0.0779

8. Conclusion

The majority of the students make good use of the internet and e-mail facility. However, the proportion of students who agreed or disagreed that computers were accessible to them whenever needed was observably equal. Further, the difference in the proportion of users who agree and disagree on the accessibility of computers at KYUEM was not statistically significant. On the other hand, there was no significant difference in the usage of computers in learning activities involving PowerPoint, www, email, blogging and Facebook among students at home and at KYUEM. The proportions of male and female students involved in different computer related learning activities at KYUEM showed no difference. Digital skills, however, divide into very different sub-skills, of which only some are important and used in school. What seems to be confirmed by the results is that the students’ informal learning of ICT and their experiences in using ICT are far more attractive than what the school can typically offer. It is therefore high time for the staff and management to invest more time and resources to provide more meaningful technology engagement amongst the students of KYUEM.

References

Anderson, T. (2003).
Modes of interaction in distance education: Recent developments and research questions. In M. Moore (Ed.), Handbook of Distance Education (pp. 129-144). Mahwah, NJ: Erlbaum.
Aisbett, K. (2001). The Internet at Home. Sydney: Australian Broadcasting Authority. URL: http://www.aba.gov.au/newspubs/documents/InternetAtHome.pdf Accessed 7 August 2006.
Barron, A., Kemker, K., Harmes, C., & Kalaydjian, K. (2003). Large-scale research study on technology in K-12 schools: Technology integration as it relates to the national technology standards. Journal of Research on Technology in Education, 35(4), 489-507.
Bober, M., & Livingstone, S. (2004). UK children go online: Surveying the experiences of young people and their parents. London: London School of Economics and Political Science.
Bullock, D. (2004). Moving from theory to practice: an examination of the factors that pre-service teachers encounter as they attempt to gain experience teaching with technology during field placement experiences. Journal of Technology and Teacher Education, 2, 211-237.
Russell, Bebell, O'Dwyer & O'Connor (2003).
Connell, J., & Wellborn, J. G. (1991). Competence, autonomy, and relatedness: A motivational analysis of self-system process. In M. R. Gunnar & L. A. Sroufe (Eds.), Self process in development: Minnesota Symposium on Child Psychology (Vol. 2, pp. 167-216).
Dahlgren, G. (1997). Strategies for reducing social inequities in health: visions and reality. In: Ollila E., Koivusalo M., Partonen T. (eds), Equity in Health Through Public Policy. Helsinki: STAKES (National Research and Development Center for Health and Welfare).
Doering, A., Hughes, J., & Huffman, D. (2003). Preservice teacher: Are we thinking with technology? Journal of Research on Technology in Education, 35, 342-361.
Dowson, M., & McInerney, D. M. (2001). Psychological parameters of students’ social and work avoidance goals: A qualitative investigation. Journal of Educational Psychology, 93(1), 35-42.
Fishman, B. (2000).
How activity fosters CMC tool use in classrooms: Re-inventing tools in local contexts. Journal of Interactive Learning Research, 11(1), 3-27.
Grove, K., Strudler, N., & Odell, S. (2004). Mentoring toward technology use: Cooperating teacher practice in supporting student teachers. Journal of Research on Technology in Education, 37(1), 85-109.
Grove, S.J., Carlson, L., & Dorsch, M.J. (2007). Comparing the application of Integrated Marketing Communication (IMC) in magazine ads across product type and time. Journal of Advertising, 36(1), 37-54.
Gordon, R.J. (2000). “Does the ‘New Economy’ Measure Up to the Great Inventions of the Past?” Working Paper 7833, NBER.
Hancock, V., & Betts, F. (2002, April 1). Back to the future: Preparing learners for academic success in 2004. Learning and Leading with Technology, 29(7), 10-14.
Johnson, M. K., Crosnoe, R., & Elder, G., Jr. (2001). Students’ attachment and academic engagement: The role of race and ethnicity. Sociology of Education, 74, 318-340.
Kennedy, C. (2000). Implications for new pedagogy in higher education: Can online technology enhance student engagement & learning? (Doctoral dissertation, University of California at Berkeley). Retrieved from ERIC database. (ED443382)
Look, D. (2005). Discussion Paper: Impact of Technology on Education, PUSD Excellence Committee, December 2005. Available from http://pleasanton.k12.ca.us/Superintendent/Downloads/Technology.pdf
Lumsden, L.S. (1994). Student Motivation to Learn. Educational Resources Information Center, Digest Number 92.
McKendrick, J. H., & Bowden, A. (1999). Something for everyone? An evaluation of the use of audio-visual resources in geographical learning in the UK. Journal of Geography in Higher Education, 23(1), 9-20. Retrieved June 18, 2001 from Academic Search Elite on GALILEO: http://www.galileo.peachnet.edu
Mumtaz, S. (2000). Factors affecting teachers’ use of information and communications technology: A review of the literature.
Journal of Information Technology for Teacher Education, 9(3), 319-342.
National Research Council (2000). How people learn: Brain, mind, experience, and school. Washington, DC: National Academies Press.
National School Board Association (2007). Creating & connecting: Research and guidelines on online social and educational networking. Alexandria, VA.
Newman, D. (1992). Technology as support for school structure and school restructuring. Phi Delta.
Putnam, R., & Borko, H. (2000). What do new views of knowledge and thinking have to say about research on teacher learning? Educational Researcher, 29(1), 4-15.
Roblyer, M.D. (2004). If technology is the answer, what’s the question? Research to help make the case for why we use technology in teaching. Technology and Teacher Education Annual, 2004. Charlottesville, VA: Association for the Advancement of Computing.
Rockman et al. (2000). ‘Source: The Laptop Program Research’. Available from http://rockman.com/projects/laptop/
Savery, J. (2002). Faculty and student perceptions of technology in teaching. The Journal of Interactive Online Learning, 1(2). http://www.ncolr.org/jiol/issues/PDF/1.2.5.pdf (Retrieved 2 December 2005).
Skinner, E.A., & Belmont, M.J. (1993). Motivation in the classroom: Reciprocal effects of teacher behavior and student engagement across the school year. Journal of Educational Psychology, 85(4), 571-581.
Smerdon, Becky A. (1999). “Engagement and Achievement Differences Between African American and White High School Students”. Research in the Sociology of Education and Socialization, 12, 103-134.
Smith, B. K., & Blankinship, E. (2000). Justifying imagery: multimedia support for learning through exploration. IBM Systems Journal, 39(3/4), 749-
Turner, J. C., Thorpe, P. K., & Meyer, D.K. (1998). Students' reports of motivation and negative affect: A theoretical and empirical analysis.
Journal of Educational Psychology, 90, 758-771.
Wishart, J., & Blease, D. (1999). Theories underlying perceived changes in teaching and learning after installing a computer network in a secondary school. British Journal of Educational Technology, 30(1), 25-41.

Solar aureoles caused by dust, smoke, and haze

Forrest M. Mims III

The forward scattering of sunlight by atmospheric aerosols causes a bright glow to appear around the Sun. This phenomenon, the simplest manifestation of the solar corona, is called the solar aureole. Simple methods can be used to photograph the solar aureole with conventional and digital cameras. Aureole images permit both a visually qualitative and an analytically quantitative comparison of aureoles caused by dust, smoke, haze, pollen, and other aerosols. Many hundreds of aureole photographs have been made at Geronimo Creek Observatory in Texas, including a regular time series since September 1998. These images, and measurements extracted from them, provide an important supplement to studies of atmospheric aerosols. © 2003 Optical Society of America

OCIS codes: 010.1290, 010.1100, 010.1110, 010.0100, 290.1090.

1. Introduction

The brilliant glow that encircles the Sun is known as the solar aureole. The size and intensity of this circumsolar radiation provide information about the presence of aerosols between the Sun and an observer. Yet the solar aureole is among the least observed optical phenomena in the sky because of its proximity to the Sun and its inherent brightness.

The solar aureole is caused by the forward scattering of sunlight by aerosols. The size range and columnar concentration of aerosols can cause great variation in the size and appearance of the aureole.
Because of changes in the number of particulates in the slant path between an observer and the Sun, the brightness and the apparent diameter of the aureole decrease as the Sun rises and increase as the Sun descends. Dust can cause brilliant, disklike aureoles. Summer haze and smoke cause a more-diffuse aureole. When the sky is heavily polluted by sulfur dioxide emissions, multiple scattering can cause the aureole to fill the entire sky.

Atmospheric scientists have studied solar aureoles for more than a century. The Smithsonian Astrophysical Observatory studied aureoles as part of its campaign to detect fluctuations in the solar constant. Minnaert defined the aureole as “. . . the corona phenomenon in its simplest form” and presented methods for its safe observation.1 Coulson spectroscopically measured solar aureoles in the clear sky over Mauna Loa Observatory, Hawaii, and in smoggy skies over Los Angeles, California.2 More recently, solar aureoles have been studied by use of conventional and digital photography and both hand-held and robotic Sun photometers that automatically scan the Sun and sky. O’Neil and Miller3 made simultaneous measurements of the aureole and optical extinction. Tanaka et al.4 devised a means for calibrating Sun photometers by simultaneous measurements of the direct solar beam and the intensity of the solar aureole. Many authors have discussed in detail the Mie scattering that causes aureoles and diffuses sunlight.

The author (forrest.mims@ieee.org) is with the Geronimo Creek Observatory, 433 Twin Oak Road, Seguin, Texas 78155.
Received 22 January 2002; revised manuscript received 27 March 2002.
0003-6935/03/030492-05$15.00/0
© 2003 Optical Society of America
492 APPLIED OPTICS / Vol. 42, No. 3 / 20 January 2003
Coulson's treatment2 is especially relevant, for he expertly combines theoretical analysis with many experimental observations from various locations, including Mauna Loa Observatory in Hawaii, a site noted for its frequency of remarkably clear skies. My goal in this paper is to stimulate renewed interest in solar aureoles and the information that they contain through simple photographic methods that are easily implemented by professional scientists and students alike. Although digital photography is particularly well suited for this purpose, Sarah A. Mims, a tenth-grade student at New Braunfels Christian Academy, New Braunfels, Texas, has demonstrated that even inexpensive, disposable cameras can be used to acquire excellent images of solar aureoles. She has used solar aureole photographs, satellite imagery, and microscopic dust, which was collected on petroleum jelly–coated microscope slides and on filter paper by a motorized air sampler, to support her contention that the dust, which exhibits birefringence characteristic of quartz sand, reached Texas from the Sahara desert.5
2. Aureole Photography Techniques
Direct sunlight is so intense that photographs of the Sun do not show the solar aureole unless the aureole extends across a substantial portion of the camera's field of view. Proper aureole photography requires that the solar disk be occluded. The solar aureoles presented in popular books are often photographed with objects such as a street light or a flagpole interposed before the solar disk. This method works well when an appropriate object is available, which is often not the case. Moreover, it can be hazardous for one's eyes to make aureole photographs in this fashion. That is so because it is difficult to frame the photograph without inadvertently looking at a portion of the solar disk. For serious aureole photography, a simple hand-held occlusion device is the preferred way to block the direct solar disk.
This method has the added safety advantage of not requiring the photographer to peer through the camera's viewfinder when the camera is being pointed toward the Sun. One can make a simple but effective occlusion device by mounting a spherical ball or disk on the end of a thin rod or stiff piano wire. Both the ball or disk and the support rod should be painted flat black. The ball or disk should have a diameter slightly larger than that of the camera's lens. To make a solar aureole photograph, one places the camera on a flat surface and tilts it toward the Sun until sunlight passing through the viewfinder forms a bright spot of light on the surface below and behind the camera. The spot of light will be surrounded by the camera's shadow. The camera is then aligned until the spot of light is approximately coaxial with that portion of the shadow that approximates the location of the viewfinder with respect to the shadow. The camera is then held in place by one hand with a finger over the shutter button. The occluding device is then held 10 cm or more from the camera until its shadow falls directly over the camera's lens. The shutter button is then pressed. After some practice, this method of aureole photography will yield excellent aureole photographs. The photographs will be even more desirable if the photographer's fingers and hat do not appear in the finished images. Although a literature search did not disclose a description of this method, the principle is so obvious that it is probably known to other aureole photographers. Manually holding the shadow disk or ball in place while also making sure the camera does not move can be somewhat tedious. An alternative method of blocking the Sun is to mount both the camera and the occluding device on a common platform with a backboard on which the camera's shadow will fall when the apparatus is pointed toward the Sun.
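As a sanity check on the occluder geometry described above, the angular size of the ball or disk can be compared with the Sun's apparent diameter of roughly 0.53°. The sketch below is illustrative arithmetic only, not a procedure from the paper; the 4 cm ball diameter and 10 cm occluder distance are assumed example values.

```python
import math

SUN_ANGULAR_DIAMETER_DEG = 0.53  # mean apparent angular diameter of the solar disk

def angular_diameter_deg(diameter_cm: float, distance_cm: float) -> float:
    """Angular diameter subtended by a ball or disk of the given size
    when viewed from the given distance."""
    return math.degrees(2 * math.atan(diameter_cm / (2 * distance_cm)))

def occludes_sun(diameter_cm: float, distance_cm: float) -> bool:
    """True if the occluder appears larger than the solar disk."""
    return angular_diameter_deg(diameter_cm, distance_cm) > SUN_ANGULAR_DIAMETER_DEG

# An assumed 4 cm ball held 10 cm from the lens subtends about 22.6 degrees,
# far larger than the Sun -- which is why the text only requires the occluder
# to be slightly wider than the lens itself.
print(round(angular_diameter_deg(4, 10), 1))
print(occludes_sun(4, 10))
```

In practice the occluder works by shadowing the lens aperture rather than by matching the solar disk on the image, so a comfortable margin like this is expected.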
The shadow device should be mounted coaxially with respect to the lens and at least 10 cm away from the lens. One then aligns the apparatus with respect to the Sun by tilting it until the spot of sunlight is aligned as described above. These methods work well with fixed-focus cameras and automatic cameras with lenses that can be set for infinity. Cameras with autofocus lenses should be set up without the shadow device in place. One does this by pressing the shutter button only until the lens focuses at infinity but not until the shutter opens. The shadow device is then placed between the Sun and the lens, and the shutter button is pressed all the way.
3. Aureole Photographs
Solar aureole photographs appear in surprisingly few publications. Lynch and Livingston6 show the aureole formed on a hazy and on a clear day when the solar disk is blocked by the same street light. The revised edition of Minnaert's classic Light and Color in the Open Air includes several excellent photographs of coronas, including one around an aureole.1 Accompanying this paper are several digital photographs (Fig. 1) that illustrate the great variability in the appearance of the solar aureole that is possible. Figures 1(a) and 1(c)–1(f) were made with a true color (RGB) digital camera with a resolution of 1024 × 1280 pixels (1.3 megapixels). Figure 1(b) is a digitized scan of a 35-mm Kodachrome photograph. Unfortunately, the digital camera does not save the exposure settings. None of the photographs has been enhanced or altered, with the exception of minor cropping of Fig. 1(b). The caption for each photograph includes the aerosol optical thickness (AOT) at 820 nm measured by LED Sun photometer7 to provide a quantitative scale by which to judge each aureole visually. The sequence begins with Fig. 1(a), an unusual photograph of the Sun without an aureole. This image was made from the summit of Mauna Kea in Hawaii at an altitude of 4.3 km on 24 June 2000.
When this image was made, the column water vapor over the mountain was less than 1 mm. This and the very low aerosol burden provided too few scattering agents to produce an aureole, and the sky appeared deep blue almost to the edge of the solar disk. Thus Fig. 1(a) is a portrait of a pristine sky in which molecular Rayleigh scattering of blue wavelengths far exceeds the multispectral Mie scattering caused by particulates. Figure 1(b) is a quite different aureole photograph made from the summit of Mauna Kea on 4 August 1992. Hawaii, and much of the Earth, was overlain by a thick blanket of sulfurous aerosols injected into the stratosphere by the Plinian volcanic eruption of Mount Pinatubo in the Philippines on 15 June 1991. The huge aureole in Fig. 1(b) is in striking contrast to the aureole-free sky shown in Fig. 1(a). An important feature of the aureole in Fig. 1(b) is the brownish ring around its circumference. This is the rarely photographed Bishop's ring, a phenomenon first described by S. E. Bishop, a missionary in Hawaii, shortly after the volcanic eruption of Mount Krakatau in 1883.8 Several fascinating eye-witness accounts of the Bishop's ring and other curious atmospheric optical phenomena that appeared after the Krakatau eruption have been compiled by Simkin and Fiske.9 Meinel and Meinel provide a concise explanation of the Bishop's ring, which they describe as a variant aureole.10 The Bishop's ring in Fig. 1(b) is the only photograph in this paper made with 35-mm film (Kodachrome).
Fig. 1. (a) Aureole-free Sun photographed from the 4.3-km summit of Mauna Kea in Hawaii on 24 June 2000 (AOT at 820 nm = 0.02). The solar disk here and in (c)–(f) is blocked by a small black sphere mounted on a stiff wire. (b) Bishop's ring around the Sun caused by volcanic aerosols in the stratosphere photographed from the summit of Mauna Kea on 4 August 1992 (AOT at 820 nm = 0.37).
The solar disk is blocked by an antenna mast. (c) Solar aureole on a very clear day at Geronimo Creek Observatory, Texas, 14 February 1999 (AOT at 820 nm = 0.04). (d) Extraordinarily rare pollen corona on a clear day at Geronimo Creek Observatory, 17 January 1999 (AOT at 820 nm = 0.07). (e) Broadly diffuse aureole and bluish sky caused by smoke from Mexico advected over Texas on 5 May 2000 (AOT at 820 nm = 0.36). (f) Disk-shaped aureole and grayish sky formed by Sahara dust blown from Africa to Texas on 4 July 2000 (AOT at 820 nm = 0.24).
Figure 1(c) shows a small aureole in an unusually clear winter sky over Geronimo Creek Observatory (29.6° N, 97.9° W) in south-central Texas on 14 February 2000. Summer aureoles at this site are much more pronounced than the one in Fig. 1(c), but they rarely reach the size of those that can cover much of the sky in regions with significant sulfur dioxide emissions from coal-fired power plants or volcanoes such as Mount Kilauea in Hawaii. Figure 1(d) shows an extraordinarily rare pollen corona11 that appeared briefly over Geronimo Creek Observatory on 17 January 1999. The solar aureole is ordinarily a brilliant white. Exceptions occur when the scattering particles are the same size, as in the case of ice crystals and pollen. When an abundance of uniformly sized particles between the Sun and an observer dominates other scattering agencies, the aureole becomes separated into the colorful, concentric diffraction rings around the Sun known as a corona. Coronas formed by ice crystals are much more common than pollen coronas. The corona in Fig. 1(d) is caused by uniformly sized (~22 μm) pollen grains from juniper trees commonly known as mountain cedars (Juniperus ashei Buchholz). During winter the air over central Texas often carries a heavy burden of this pollen, which is notorious for the allergic reactions that it causes.
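The connection between uniform particle size and corona geometry can be illustrated with the textbook small-angle diffraction estimate θ ≈ 1.22λ/d for the angular radius of the first ring. This is a standard approximation supplied here for illustration; it is not a calculation from the paper.

```python
import math

def corona_ring_angle_deg(wavelength_nm: float, particle_diameter_um: float) -> float:
    """Approximate angular radius of the first corona ring from simple
    diffraction theory: theta ~ 1.22 * lambda / d (small-angle)."""
    wavelength_m = wavelength_nm * 1e-9
    diameter_m = particle_diameter_um * 1e-6
    return math.degrees(1.22 * wavelength_m / diameter_m)

# ~22 um juniper pollen (the size quoted in the text) at a mid-visible 550 nm:
theta = corona_ring_angle_deg(550, 22)
print(round(theta, 2))  # roughly 1.75 degrees -- a compact ring close to the Sun
```

The same relation shows why corona rings are colored: the red ring (longer λ) sits farther from the Sun than the blue ring for the same particle size.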
Pollen coronas may be briefly visible on a few days each year or not at all. Considerable smoke from biomass burning in Central America and Mexico is sometimes advected over South Texas. Figure 1(e) is an aureole photograph made through such smoke on 5 May 2000. The presence of thin smoke has little effect on the color of the sky. The sky in Fig. 1(e) retained its blueness, but the smoke was sufficiently thick to give the sky a washed-out appearance. Smoke in the sky over large regions of Brazil during the annual burning season causes the sky to appear brownish or gray. Near Alta Floresta, Brazil, in August and September of 1997 the AOT at 440 nm caused by smoke reached values as high as 7. When the sky is this polluted, stars and planets are invisible by night and the Sun appears as a dim orange disk that can be observed without protective glasses. Seasonal wind storms over North Africa and China can blow dust considerable distances. Texas has often been the recipient of such dust in recent years. Figure 1(f) shows a solar aureole caused by a Sahara dust event on 4 July 2000.
4. Image Analysis
The photographs in Fig. 1 demonstrate that solar aureoles have distinctive features. Thus the experienced observer can, for example, visually differentiate aureoles caused by dust and smoke. For quantitative analysis of aureoles, one can use various computer software packages to digitally scan their images. Previous authors have given angular dimensions and intensities for the solar aureole. Linacre,12 for example, observes that the aureole extends to ~5° from the Sun and includes ~30% of the sky's diffuse radiation when the sky is clear and the Sun is more than ~30° above the horizon. It is now possible for observers to regard Linacre's values as first-approximation estimates and to then analyze digitized aureole images to quickly make angular measurements of their own.
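The digitized-scan idea described here (channel intensity versus angular distance from the Sun) can be sketched as an azimuthally averaged radial profile of an image. The following is a minimal NumPy stand-in for the commercial image-analysis workflow used in the paper; the function name, the assumption that the Sun's pixel position is known, and the plate scale (degrees per pixel) are all hypothetical, and the image here is synthetic.

```python
import numpy as np

def radial_profile(img: np.ndarray, center: tuple, channel: int,
                   deg_per_px: float, max_deg: float = 16.0, step_deg: float = 0.5):
    """Mean intensity of one color channel in annuli of increasing angular
    distance from the Sun's position -- a simple stand-in for a digitized
    aureole scan. Returns (bin start angles, mean intensity per bin)."""
    h, w, _ = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    r_deg = np.hypot(yy - center[0], xx - center[1]) * deg_per_px
    edges = np.arange(0.0, max_deg + step_deg, step_deg)
    profile = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (r_deg >= lo) & (r_deg < hi)
        profile.append(img[..., channel][mask].mean() if mask.any() else np.nan)
    return edges[:-1], np.array(profile)

# Synthetic test image: brightness falling off with distance from a "Sun" at the center.
h = w = 201
yy, xx = np.mgrid[0:h, 0:w]
r = np.hypot(yy - 100, xx - 100)
img = np.zeros((h, w, 3))
img[..., 2] = 255 * np.exp(-r / 40.0)   # blue channel: aureole-like glow

angles, blue = radial_profile(img, (100, 100), channel=2, deg_per_px=0.1)
print(blue[0] > blue[10] > blue[20])  # intensity decreases away from the "Sun"
```

A red/blue ratio like the one plotted in Fig. 3 would follow by running the same profile on channel 0 and dividing the two arrays.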
Figure 2 is a digitized scan of the intensity of blue light from the edge of the solar disk to 16° away from the Sun for the solar aureoles in Figs. 1(a) (Mauna Kea summit), 1(c) (clear sky over Texas), 1(e) (Mexican smoke over Texas), and 1(f) (Sahara dust over Texas). The scans were made with SigmaScan Pro 5.0 image analysis software (SPSS Science), and the results were tabulated on a common spreadsheet. I measured the peak wavelengths designated blue and red by the image analysis software by placing the camera used to make the aureole photographs at the output of a spectrometer (Optronics DMCI-02). Photographs of the output slit were then made at 10-nm intervals in the blue and red wavelengths. The images were exported to a common background and analyzed by the image analysis software. This yielded a peak blue wavelength of 450 nm and a peak red wavelength of 640 nm. The traces in Fig. 2 allow the aureoles to be more precisely defined than would be possible from a simple visual analysis. The nearly flat slope of the digitized trace of the intensity of skylight near the Sun as viewed from Mauna Kea confirms the absence of an aureole. The traces in Fig. 2 also show that the clear sky aureole over Texas (Fig. 1(c)) resembles a Sahara dust aureole more closely than it does the ultraclear sky at Mauna Kea (Fig. 1(a)). The disklike appearance of the Sahara dust aureole (Fig. 1(f)) and the more-diffuse nature of the smoke aureole (Fig. 1(e)) is confirmed by the sharper intensity of the former when it is compared with the latter. Figure 3 plots the red/blue intensity ratios of the aureoles plotted in Fig. 2. The high color ratio caused by the Sahara dust implies larger particle sizes than for the smoke. Estimates of particle size
Fig. 2. Digitized scan of the intensity of blue light from the edge of the solar disk to 16° away from the Sun for the solar aureoles in Figs.
1(a) (Mauna Kea summit), 1(c) (clear sky over Texas), 1(e) (Mexican smoke over Texas), and 1(f) (Sahara dust over Texas).
made by Sun photometers generally rely on interference filters having a bandpass of 10 nm. The effective bandpass of the scans in Figs. 2 and 3 is significantly greater, of the order of 50 nm. Nevertheless, gross particle size estimates from the digitized-scan method show enough promise to deserve study in conjunction with simultaneous Sun photometer observations. The camera used for this study is fully automatic and does not permit changes in the aperture setting or the exposure time. Based on preliminary experience with a more advanced camera that records these settings, successive solar aureole photographs may be recorded with slightly different settings. Therefore a camera with fixed or manually adjustable settings would be best.
5. Conclusions
Solar aureole studies have a long history and continue today through robotic Sun photometer measurements and both conventional and digital photography. Considering that time series of daily solar aureole photographs provide a quick method for experienced observers to visually evaluate and estimate the sky's aerosol burden, the scale of current solar aureole monitoring is small. The photographic method is simple and easily implemented by both students and professional scientists. Photographing solar aureoles by use of the shadow method described here can even become a simple exercise for secondary-school students. Posting catalogs of standardized solar aureole images from around the world on the World Wide Web would permit a much better appreciation of sky conditions than tables of dimensionless optical depth numbers. Adding aureole images to time series plots of AOT would significantly enhance the value of the data by providing a visual link between the data and the appearance of the sky.
It might also elicit a broader audience for such observations, as it is hoped the accompanying aureole photographs have done for this paper.
This study was inspired by the writings of S. E. Bishop, Charles Abbot, Kinsell Coulson, and Aden and Marjorie Meinel. This paper benefitted much from suggestions by two reviewers and discussions with Robert Roosen, John DeLuisi, Kenneth Sassen, W. Patrick Arnott, Brent Holben, John Barnes, Joe Shaw, Robert Evans, and David Brooks. I was assisted in the preparation of this paper by the GLOBE program.
References
1. M. G. J. Minnaert, Light and Color in the Outdoors, revised edition, L. Seymour, ed. (Springer-Verlag, Berlin, 1993), pp. 238–245.
2. K. L. Coulson, Polarization and Intensity of Light in the Atmosphere (Deepak, Hampton, Va., 1988), pp. 350–362.
3. N. T. O'Neil and J. R. Miller, "Combined solar aureole and solar beam extinction measurements. 1. Calibration considerations," Appl. Opt. 23, 3691–3696 (1984).
4. M. Tanaka, T. Nakajima, and M. Shiobara, "Calibration of a sunphotometer by simultaneous measurements of direct-solar and circumsolar radiations," Appl. Opt. 25, 1170–1176 (1986).
5. S. A. Mims, "Stuff in the air: Sahara dust and other aerosols collected in South Texas," presented at the Texas Junior Academy of Science, Texas A&M University, College Station, Texas, 17 April 2002.
6. D. K. Lynch and W. Livingston, Color and Light in Nature (Cambridge U. Press, Cambridge, 1995), p. 32.
7. F. M. Mims III, "Sun photometer with light-emitting diodes as spectrally selective detectors," Appl. Opt. 31, 6965–6967 (1992).
8. S. E. Bishop, "The remarkable sunsets," Nature (London) 29, 259–260 (1884).
9. T. Simkin and R. S. Fiske, Krakatau 1883 (Smithsonian Institution Press, Washington, D.C., 1983), pp. 154–159.
10. A. Meinel and M. Meinel, Sunsets, Twilights, and Evening Skies (Cambridge U. Press, Cambridge, 1983), pp. 79–81.
11. F. M. Mims III, "Solar corona caused by juniper pollen in Texas," Appl.
Opt. 37, 1486–1488 (1998).
12. E. Linacre, Climate and Data Resources (Routledge, London, 1992), pp. 152–153.
Fig. 3. Ratios of the red/blue intensities of the aureoles plotted in Fig. 2. The high color ratio caused by the Sahara dust implies that it comprises larger particle sizes than does smoke.
work_bk67xb2owrgyrlbi2qfdc7snai ---- Digital Photo Kiosk Evaluation
Jacob Solomon and Frank A. Drews, Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 2007, 51: 484. DOI: 10.1177/154193120705100512. Published by SAGE Publications on behalf of the Human Factors and Ergonomics Society.
Digital Photo Kiosk Evaluation
Jacob Solomon and Frank A. Drews
University of Utah, Salt Lake City, Utah
Self-service modules have become an integral part of the economy throughout the world, replacing expensive human operators in many settings. However, usability issues continue to diminish the economic value of these modules. This experiment demonstrates how sound usability principles can be applied to self-service settings to increase the usability of self-service modules. The study compared the usability of two versions of a self-service digital photo kiosk. In one version we replicated a kiosk presently in use and broadly available. The other version of the software incorporated several design principles, such as the use of a metaphor, intended to increase usability and learnability, more specifically to allow for easier navigation.
Participants' performance in completing tasks was measured as a function of speed, accuracy, and the need for human assistance. The results demonstrate that incorporating usability principles can improve usability of self-service modules.
INTRODUCTION
Self-service kiosks are an increasingly important mechanism in the modern business world. Self-service devices are currently being used to retrieve money from a bank account, to check out items from a grocery store, to rent movies, to buy event tickets, and for many other purposes. From a business point of view, self-service allows for bypassing the need for expensive and potentially unreliable human employees. However, there is undoubtedly a concern that perhaps a new cost is created by the use of self-service. That cost is related to the issue of usability. If self-service machines have poor usability, then they are economically unfeasible because of their low acceptance rates by customers. In fact, some machines have such poor usability that they require a human attendant to aid users. In such a circumstance, the technology has failed, because the cost of the machine, the cost of the employee, and the loss of business due to customer dissatisfaction and frustration must all be considered. Digital photo kiosks are a particularly conspicuous example of the problems which exist in self-service modules. As the photo industry transitions from film photography to digital photography, these kiosks are a vital means for photo labs to maintain the profitability of their expensive but high quality photo printing equipment. They are also meant to promote one of the fundamental advantages of digital photography, which is that one can take hundreds of pictures and view them before printing, allowing more control over what gets printed and the quality of prints. However, the complexity of digital photo editing and organization makes its application to a self-service setting challenging for software developers.
Self-service modules have traditionally been reserved for more simple tasks, as is the case with ATMs, snack machines, and gas pumps. Because the task involved in digital photo kiosks is much more complex than traditional self-service modules, there is danger that the mental models users take from more established and simpler applications cannot be adequately applied, with the consequence of poor usability for the customer. Unfortunately, many current photo kiosks use traditional self-service modules as a model for their own design. One example of a self-service module that illustrates these problems is the Fuji Aladdin Digital Photo Kiosk. The Aladdin follows a "step-by-step" model of navigation. From the standpoint of a software engineer, this software is very efficient. Photos are loaded from a storage medium and processed quickly at the cost of allowing the user only very limited freedom to perform complicated or unusual tasks.
[PROCEEDINGS of the HUMAN FACTORS AND ERGONOMICS SOCIETY 51st ANNUAL MEETING—2007, p. 484]
However, the software does not provide the user the vital information needed to confidently navigate through the application. This can be highlighted by the complete lack of information about previous and potential future steps in the procedure. Although the user is figuratively put on a "track" from which they cannot deviate, it is not quite clear which options will be ahead. This lack of preview leads to frequent mistakes and requests for assistance from a human supervisor. The goal of the present study is to demonstrate how human factors design principles can be applied to a digital photo kiosk in order to compensate for the complexity of the task and maintain strong usability. Participants performed tasks on two simulated versions of a kiosk. One version was modeled directly after the Fuji Aladdin Digital Photo Center.
The other version used a similar step-by-step structure; however, it also provided the necessary information needed to navigate successfully, and presented this information in a way consistent with good human factors design principles. The necessary information was provided up front, which is crucial to good interaction (Drews & Westenskow, 2006). Finally, a simple metaphor was also used to help users apply a mental model to the task. Thus, we followed Davidson, Dove, and Weltz's (1998) recommendation that "Usability is strongly tied to the extent to which a user's mental model matches and predicts the action of a system." To provide users with a familiar mental model we applied the metaphor of a photo album to provide guidance. We predicted that participants using the redesigned software would complete their task more quickly and with fewer errors than in the control condition (Aladdin system). Finally, we hypothesized that the redesigned software would have greater learnability due to the application of these principles.
METHODS
The study took place at the University of Utah. Twenty-two male and 19 female undergraduate students were randomly assigned to either the control condition, in which they used the kiosk modeled after the Aladdin, or the experimental condition, where they used the redesigned kiosk. Two tasks had to be completed by the participants. All participants were instructed to complete the same two tasks, regardless of experimental condition. The first and simple task consisted of ordering one 4x6 inch print of all eleven available pictures. After completion, participants quit the kiosk and completed the NASA Task Load Index Survey. Next, participants returned to the kiosk to perform the second and more complicated, but still very common task. This task involved ordering a 4x6 inch print of five pictures containing athletic shoes, and one 8x10 inch print of the picture containing a horseshoe.
The software would give the user an error message any time they attempted to do anything which would misdirect them from the path to completing the specific task. Information about the number of error messages and time required for successful completion was collected. Also, participants were given the option of asking for assistance from the experimenter in completing their task. The experimenter would record the number of requests for assistance. The study used a 2 × 2 design with task difficulty (simple and difficult) as a within-subjects factor and version of the kiosk (Aladdin and redesigned version) as a between-subjects factor. Because we were interested in the impact of learning, we structured the tasks in a fixed sequence, where the difficult task always followed the simpler task. Thus, the effectiveness of the redesigned version was tested by focusing on the interaction between task difficulty and version of the kiosk. As dependent variables, we used the time for successful completion of the task, the number of errors participants made before completion, and the number of requests for assistance.
RESULTS
Task Performance
Time. The analyses revealed that there was no main effect of experimental condition. Thus, overall, participants in both conditions did not differ in terms of their completion time for both tasks. However, consistent with our hypotheses, we found a significant interaction effect between kiosk version and task difficulty (F(1,39) = 6.88; p < .05), indicating that participants benefited significantly
However, the main effect of kiosk version re- vealed a trend (F(1,39) = 3.068; p < .1) indicating that overall, users made fewer errors on the redes- igned kiosk. 6.86 3.5 12.24 3.75 0 2 4 6 8 10 12 14 Aladdin Model Redes igned Model M ea n E rr o rs Simple Task Difficult Tas k Figure 2. Mean errors Help. The last analysis focused on the num- ber of requests for assistance made by participants. A significant main effect of kiosk version was found (F(1,39) = 5.673; p < .05). And consistent with the hypothesis, a significant interaction (F(1,39) = 9.066; p < .05) was also found between task difficulty and kiosk version. Users in general required less assistance on the redesigned kiosk, particularly on the difficult second task. 0.05 0.1 0.48 0.05 0 0.1 0.2 0.3 0.4 0.5 0.6 Aladdin Model Redesigned Model M ea n H el p Simple Task Difficult Tas k Figure 3. Mean errors NASA Task Load Index Survey No significant results were found from the self-reported NASA Task Load Index Survey on any of the six categories (mental workload, physical workload, temporal demand, effort, performance, frustration). DISCUSSION The most important conclusion that can be drawn from the data is that the redesigned kiosk clearly had greater learnability. Users of the redes- igned kiosk improved their performance on the sec- ond more difficult task. Performance decreased on the difficult task by users of the Aladdin kiosk. In particular, users moved much more quickly through the more complicated task after having used the software to perform a simpler task. It appears that the design principles incorporated into the new ki- osk design made the use and the navigation easier for users. Table 1 explains the fundamental differ- ences between the two kiosk designs. 
PROCEEDINGS of the HUMAN FACTORS AND ERGONOMICS SOCIETY 51st ANNUAL MEETING—2007 486 Aladdin Kiosk Redesigned Kiosk 3 step structure- Edit, order, view order summary 3 step structure- Edit, add to album, view album No instructions Instructions, at begining and on each page No metaphor Photo album metaphor Navigation explained by vague button labels Navigation explained by precise button labels Table 1 Ordering photos from the kiosk is a three step process in both kiosk models. The first step is to prepare final images by cropping, adjusting color and fixing red eye. The next step is to select images to be printed in the desired size or sizes. The third step is to review the work and ensure that the order is satisfactory. If it is not, users can return to either of the other two steps. The Aladdin kiosk model does not explain this three step process; users are expected to figure it out as they go. Not only is this three step process not made explicit for the user, but buttons which guide users between steps are vague. Labeling a button which moves the user into the next step as “okay” is not effective when the user has no idea what the next step is (see figure 4). Figure 4. Opening screen in Aladdin kiosk model A logical solution to this lack of preview would be an instruction screen which lists the steps and describes the entire process. This was provided as the first screen in the redesigned kiosk. However, it was believed that merely a set of instructions, provided only once at the beginning, would not be sufficient, and that after a few minutes users would not remember everything from the instructions, and that often they would not read them carefully or in entirety. The photo album metaphor was intended to help users build / apply a mental model of this three step process, so that they would not only navigate the software but actually learn its structure, improv- ing future performance. 
In assembling a real photo album, one would prepare final versions of prints, assign prints to various albums or pages based on size, then view the album to ensure its completion. This is essentially the same three-step process used by the Aladdin kiosk model. However, the application of a metaphor, in addition to written instructions at the beginning of the process and on each subsequent page, allows users to more easily understand the order of subtasks (see also Drews et al., 2007). This is the mental model the metaphor is intended to create. Users working with an effective mental model of the software's structure can more successfully operate the kiosk. The results of this study demonstrate that users of the redesigned kiosk, after completing a simple task, performed a more difficult task with greater ease than the original simple task, while performance on the second task decreased on the Aladdin kiosk. This demonstrates that the metaphor and instructions provided in the redesigned kiosk led to significant improvements in learnability. The photo album metaphor was intended to replicate the success of the "desktop" metaphor in computers and the "shopping cart" metaphor common in online shopping. These metaphors describe an application's virtual storage space, where users can set aside files or information for quick retrieval at a later point. This sense of virtual storage is crucial in a complicated task such as photo editing and printing, as users typically work with more images than they can easily remember or fit on one page. The virtual photo album of the redesigned kiosk is functionally no different from the "order summary" of the Aladdin kiosk. But not only does the Aladdin kiosk fail to inform users that an order summary may be viewed until the end of the process, the term "order summary" does not convey the same sense of location that a "photo album" conveys.
We believe this can account for the decreased performance on the difficult task in the Aladdin kiosk, which requires users to pass through all three steps twice because two print sizes are required. Having presumably learned from performing the simple task that the "order summary" is the last step, users can easily be confused when they cannot order the 8x10 without passing through the order summary and restarting the entire process. We believe the metaphor allowed users to feel confident about navigating between the subtasks without restricting them to a specific order. In other words, we believe the metaphor made the kiosk more explorable, which is crucial to good interaction (Tognazzini, 2007). This also allows the redesigned kiosk to have a more sophisticated and complex workspace than the Aladdin model. In the Aladdin model, every new option is accompanied by a new screen, presumably to simplify decision making and steer users to the correct locations. But since the metaphor provides an identifiable location and structure for users to work from, this is not necessary in the redesigned model. Instead, users can perform multiple functions from the same workspace, since they know where their work is going and how to review it. The kiosk in essence becomes more like a personal computer.

[Figure 5. Ordering prints on the redesigned kiosk.]

The most important conclusion that can be drawn from this study is that kiosk design must take task complexity into account in order to maintain usability. The controlled structure found in the Aladdin kiosk model may be efficient for helping users complete a simple task, but a complex task requires more information and freedom than is provided. The design of self-service modules for complicated tasks should look to usability principles found in more complex software for personal computers.
One of these principles, a metaphor, was applied with great success in this study. Learnability is crucial to the success of self-service. If self-service is to expand into more complex domains, the unavoidable consequence is that users will have to invest greater effort in learning these modules. However, this study shows that good design principles can significantly increase a kiosk's learnability. Users of both kiosks performed the simple task with similar speed, but the data indicate that the time users of the Aladdin kiosk spent wandering through the application by trial and error was spent learning the software on the redesigned kiosk. The difficult task was then performed quickly and easily, while on the Aladdin kiosk it was significantly more difficult. If designers of self-service modules incorporate design principles from personal computer software design, a field already experienced at designing interactive modules for complicated tasks, the burden of task complexity can be minimized and usability increased.

References

Davidson, M.J., Dove, L., and Weltz, J. (1999). Mental models and usability. Cognitive Psychology at DePaul University. Retrieved February 9, 2007, from http://www.lauradove.info/reports/mental%20models.htm

Drews, F.A., Picciano, P., Agutter, J., Syroid, N., Westenskow, D.R., & Strayer, D.L. (2007). Development and evaluation of a just-in-time support system. Human Factors, 49, 543-551.

Drews, F.A., & Westenskow, D.R. (2006). Human-computer interaction in healthcare. In: P. Carayon (Ed.), Handbook of Human Factors and Ergonomics in Healthcare and Patient Safety. Lawrence Erlbaum Associates.

Preece, J., Rogers, Y., and Sharp, H. (2002). Interaction Design: Beyond Human Computer Interaction. New York: John Wiley & Sons.

Tognazzini, B. (2007). First principles of interaction design.
Retrieved February 9, 2007, from http://www.asktog.com/basics/firstPrinciples.html

work_bkarxm3ebjgjbbhsmhndj66w7e ----

078 EARLY SYNOVIAL RESPONSES TO ANTERIOR CRUCIATE LIGAMENT AUTOGRAFTING IN THE OVINE STIFLE JOINT

Osteoarthritis and Cartilage Vol. 17, Supplement 1, S51

ated with cartilage volume loss (p=0.284, rho=-0.60). Similarly, the increase in GCA correlated significantly with less severe cartilage defect (p=0.001, rho=-0.99), joint effusion (p=0.041, rho=-0.89) and BML (p=0.004, rho=-0.97). At week 26, higher PVF significantly (p=0.013, rho=0.95) correlated with more severe meniscal tears while higher CGA correlated (p=0.037, rho=-0.90) with cartilage volume loss. In line with these findings, the evolution of meniscal tears significantly correlated with less osteophytosis (p=0.013, rho=-0.95) and joint effusion (p=0.028, rho=-0.92).

Conclusions: This exploratory study reveals multiple binary associations between a number of joint structural defects and the extent of OA-induced functional disability. Data revealed that PVF and GCA are mainly affected by BML and cartilage defects, whereas meniscal integrity is more affected by gait biomechanics. These results highlight the need for a physiopathologically based statistical analysis strategy to better understand the structure-activity relationships of the injured joint.

078 EARLY SYNOVIAL RESPONSES TO ANTERIOR CRUCIATE LIGAMENT AUTOGRAFTING IN THE OVINE STIFLE JOINT

B.J. Heard, Y. Achari, M. Chung, N.G. Shrive, C.B. Frank. Univ. of Calgary, Calgary, AB, Canada

Purpose: Anterior cruciate ligament (ACL) reconstruction using tendon autografts aims to restore the function of a completely damaged ACL. However, evidence suggests that such grafts may be less than ideal due, in part, to abnormal graft tensioning and perhaps-related post-surgical inflammation.
We have developed a biomechanically idealized ACL autograft model (using the native ACL itself as an immediate replacement, with and without excessive 'graft' tensioning) to study the biological responses to such surgery. The present study focussed on identifying alterations to early markers of synovial inflammation and tissue remodelling proteinases. The hypothesis for this study was that all grafts would induce inflammation, but that overtensioning of ACL grafts would further increase the expression of inflammatory and tissue remodelling biomarkers.

Methods: All surgeries were performed using protocols approved by the animal care committee of the University of Calgary. Fifteen skeletally mature, 3-4 year old female Suffolk-cross sheep were allocated equally into 5 groups: anatomical ACL-core, twist tight ACL-core, twist loose ACL-core, sham, and non-operated controls. The ACL core surgeries were accomplished via arthrotomy to the right stifle joint. The patella was dislocated medially to expose the ACL. The proximal head of the lateral femoral condyle was the entry point for a guide pin that was inserted to mark the femoral insertion of the ACL. A dry nitrogen drill was used to core down to the marked insertion. After the ACL insertion was freed, the core was either a) immediately fixed in place (anatomical), b) pulled 1 mm away from the joint while being twisted 90 degrees and then fixed (twist tight), or c) pushed 1 mm into the joint while being untwisted 90 degrees and then fixed (twist loose). For shams, the core was stopped at the halfway mark between the surface of the proximal femoral condyle and the femoral ACL insertion, a distance of roughly 1.5 cm. The non-operated controls were age matched and housed for the same duration as the experimental subjects. All animals were sacrificed 2 weeks post-injury.
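The gene-expression comparisons in this study rely on real-time RT-PCR, and the fold changes quoted in the results (e.g., an 8-10 fold increase in IL-1β mRNA) are of the kind conventionally derived with the Livak 2^-ΔΔCt method. That method is assumed here purely for illustration, since the abstract does not state the quantification model used:

```python
# Hedged sketch of relative quantification for real-time RT-PCR data
# using the conventional Livak 2^-ddCt method (assumed, not stated in the abstract).

def fold_change(ct_target_treated, ct_ref_treated,
                ct_target_control, ct_ref_control):
    """Relative expression (treated vs. control) by the 2^-ddCt method."""
    d_ct_treated = ct_target_treated - ct_ref_treated   # normalize to a reference gene
    d_ct_control = ct_target_control - ct_ref_control
    dd_ct = d_ct_treated - d_ct_control
    return 2.0 ** (-dd_ct)

# Illustrative Ct values only (not from the study): a target crossing threshold
# three normalized cycles earlier corresponds to a 2^3 = 8-fold increase.
print(fold_change(22.0, 18.0, 25.0, 18.0))  # -> 8.0
```

Under this model, each one-cycle drop in the normalized threshold cycle doubles the estimated transcript abundance, which is the scale on which fold changes like "8-10 fold" are read.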
At dissection, synovium from both left and right stifle joints was isolated and examined for different matrix metalloproteinases, interleukins and lubricin using real-time RT-PCR.

Results: Synovial tissue from the treated joint of the anatomical, twist tight and twist loose core groups all exhibited significant increases in the mRNA levels of the matrix metalloproteinases examined. MMP-1 and MMP-3 mRNA levels exhibited maximum elevation in the twist tight core group, followed by the anatomically placed ACL and twist loose core groups. However, MMP-13 mRNA levels exhibited maximum elevation in the anatomical core group, followed by the twist tight and twist loose groups. Matrix metalloproteinase mRNA levels did not change in either the contralateral limbs of the treated groups or the limbs of the non-operated controls. Investigation of IL-1β mRNA levels revealed an 8-10 fold increase in the three treated groups, with little variation between the groups. Interestingly, IL-6 did not exhibit any change in mRNA levels in any of the groups. Lubricin mRNA levels followed the same pattern as MMP-1 and MMP-3.

Conclusions: The tension of an ACL graft can influence the mRNA levels of certain MMPs, interleukins and lubricin in the synovium, which in turn may influence the structure and biomechanical properties of the graft.

079 A NEW INSULT TECHNIQUE FOR A LARGE ANIMAL SURVIVAL MODEL OF HUMAN INTRA-ARTICULAR FRACTURE

Y. Tochigi, T.E. Baer, P. Zhang, J.A. Martin, T.D. Brown. Univ. of Iowa, Iowa City, IA

Purpose: To model human intra-articular fractures (IAFs) in the porcine hock joint (human ankle analogue) in vivo, a new fracture insult technique/system has been developed. In this technique, a joint is subjected to an injurious transarticular compressive force pulse, so as to replicate the most typical mechanism of human distal tibial "plafond" fracture. Figure 1 shows the custom interface device developed for this technique.
The "tripod" of pins connects the distal impact face to the talus without soft tissue intervention, while the proximal fixator holds the tibial shaft tilted posteriorly. In this "offset" condition, a force pulse applied to the joint causes sudden elevation of vertical shear stresses in the anterior tibial juxta-articular bone. With guidance from a stress-rising sawcut placed at the anterior cortex, well-controlled, reproducible anterior malleolar fractures are created (Figure 2). For an animal model of IAF to be scientifically meaningful, pathophysiological realism of fracture-associated cartilage injury is essential. The purpose of this study was to document the cell-level cartilage pathology introducible using this "offset" fracture impact technique.

[Figure 1. The "tripod" device-to-bone interface system.]

Methods: Four fresh porcine hock specimens, in which chondrocytes were fully viable, were utilized. Of these, two were subjected to fracture insult using the offset impact technique, with a force pulse (30 joules) delivered by a drop-tower device. In the other two, morphologically similar distal tibial simulated fractures were created using a sharp osteotome (non-impact osteotomy control). Macroscopic fracture morphology was recorded by means of digital photography. The fractured distal tibial surface, harvested as osteoarticular fragments, was then incubated in culture medium.
work_bl6eatcw4nhwlkfii5kp7vs5mi ----

Reduction in Serine Protease Activity Correlates with Improved Rosacea Severity in a Small, Randomized Pilot Study of a Topical Serine Protease Inhibitor

Journal of Investigative Dermatology (2014) 134, 1143-1145; doi:10.1038/jid.2013.472; published online 12 December 2013

TO THE EDITOR

Individuals with rosacea express high baseline levels of cathelicidin and kallikrein 5 (KLK5; Yamasaki et al., 2007). The significance of elevated KLK5 is that this trypsin-like serine protease controls the cleavage of the cathelicidin precursor protein into LL-37 (Yamasaki et al., 2006, 2007). LL-37 is both proinflammatory and angiogenic (Koczulla et al., 2003; Lande et al., 2007; Morizane et al., 2012). Based on these findings, we hypothesized that inhibition of KLK5 may improve the clinical signs of rosacea by decreasing LL-37 production. To test this hypothesis, we studied the effects of a known inhibitor of trypsin-like serine protease, ε-aminocaproic acid (ACA). ACA is Food and Drug Administration approved for systemic administration to enhance hemostasis in cases of fibrinolytic bleeding, and is well-tolerated in humans when administered intravenously in doses up to 3 g kg−1.
To determine whether ACA might be useful in rosacea, ACA was first tested in vitro using an EnzChek Protease Assay Kit (Invitrogen) against the proteolytic activity of tissue kallikrein (100 mg ml−1, Sigma-Aldrich, St Louis, MO) or stratum corneum protease activity (SPA) collected from tape-strips (D-100 CuDerm, Dallas, TX) of the facial skin of six healthy subjects. SPA was extracted from tape with 20 mM Tris-HCl (pH 8.0) containing 0.1% Triton X-100. Protease activity was measured at 37 °C for 2 hours after treatment with vehicle, ACA (1 M), or the known selective inhibitor of trypsin-related enzymes, aprotinin (50 mg ml−1, Sigma-Aldrich). Aprotinin strongly suppressed SPA, indicating predominance of trypsin-like proteases in the stratum corneum extract. The inhibitory effect of ACA was similar to that of aprotinin (Figure 1). These combined data suggested that ACA could be effective in inhibiting stratum corneum trypsin-like proteases such as KLK5. Having confirmed that ACA was an effective inhibitor of SPA in vitro, we next sought to determine whether topical application of a cream formulation containing equal weights of cream and ACA solution (2 M) decreased SPA in patients with rosacea. If active, this formulation, called SEI003, would permit us to test the hypothesis that inhibition of SPA improves rosacea. We therefore conducted a proof-of-concept, randomized, double-blind, placebo-controlled study on 11 adult patients with papulopustular rosacea (clinicaltrials.gov identifier NCT01398280). The study was conducted at the University of California, San Diego (UCSD) dermatology clinic in accordance with the Declaration of Helsinki Principles and the regulations of the UCSD Institutional Review Board (reference number 100867). Written informed consent was obtained from all participants.
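Activity in fluorometric protease assays of this kind is read out in relative fluorescent units (RFU), and inhibition is naturally expressed relative to the vehicle control. A minimal sketch of that arithmetic (the RFU values below are illustrative placeholders, not readings from the study):

```python
# Sketch: expressing protease activity relative to a vehicle control
# as percent inhibition. RFU values are illustrative only.

def percent_inhibition(rfu_treated, rfu_vehicle):
    """Percent reduction in fluorescence signal relative to vehicle control."""
    return 100.0 * (1.0 - rfu_treated / rfu_vehicle)

# e.g. a treated well reading 200 RFU against a vehicle well at 1,000 RFU
# corresponds to 80% inhibition of substrate cleavage.
print(percent_inhibition(200.0, 1000.0))
```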
The study was conducted between July 2011 and December 2012, and included 15 subjects with papulopustular rosacea randomized 2:1 to receive either SEI003 (the treatment group) or the base cream homogenized with an equal weight of the ACA vehicle (the placebo group). Subjects, study coordinators, and those performing clinical assessments were blinded. Of the 11 patients originally assigned to the treatment group, three subjects voluntarily withdrew from the study (one patient reported scheduling conflicts and two others were no longer interested in participating), and one was withdrawn from the study after starting doxycycline for ocular rosacea. The study numbers of participants who withdrew from the study were reassigned to new subjects after patient 11 was enrolled. All four patients originally assigned to the placebo group completed the study in its entirety.

[Figure 1. In vitro effects of SEI003 and aprotinin on KLK and human total stratum corneum protease activity. (a) Both SEI003 and aprotinin inhibit KLK activity compared to vehicle alone. SEI003 completely blocks KLK activity for the first 45 minutes of the assay. (b) Compared to vehicle alone, SEI003 and aprotinin both significantly decrease stratum corneum protease activity. KLK, kallikrein; RFU, relative fluorescent units.]

Accepted article preview online 8 November 2013; published online 12 December 2013

Abbreviations: ACA, ε-aminocaproic acid; CEA, Clinician's Erythema Assessment; IGA, Investigator's Global Assessment; KLK, kallikrein; SPA, serine protease activity; UCSD, University of California, San Diego

AM Two et al., Inhibition of Serine Protease in Rosacea (www.jidonline.org)
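A 2:1 allocation of the kind described above can be sketched as a permuted-block randomization. The letter does not specify the mechanism actually used, so blocks of three (two active, one placebo) are assumed here purely for illustration:

```python
# Hedged sketch of 2:1 treatment allocation via permuted blocks of three.
# The block size and mechanism are assumptions for illustration; the study
# does not report its randomization procedure.
import random

def allocate_2_to_1(n_subjects, seed=0):
    """Assign subjects to arms at a nominal 2:1 active:placebo ratio."""
    rng = random.Random(seed)
    allocation = []
    while len(allocation) < n_subjects:
        block = ["SEI003", "SEI003", "placebo"]  # 2:1 within each block
        rng.shuffle(block)
        allocation.extend(block)
    return allocation[:n_subjects]

arms = allocate_2_to_1(15)
print(arms.count("SEI003"), arms.count("placebo"))  # -> 10 5
```

With 15 subjects and complete blocks, the scheme yields exactly 10 active and 5 placebo assignments; the 11:4 split actually reported reflects the withdrawals and replacements described in the text rather than the nominal ratio.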
Patients were instructed to apply their assigned medication to their entire face twice daily for the 12-week duration of the study. Clinical assessments, including the Investigator's Global Assessment (IGA) and the summed total of the 5-point Clinician's Erythema Assessment (CEA) score at five different target sites (left cheek, right cheek, nose, chin, and glabella), as well as digital photography, were completed at baseline. These procedures were repeated 2, 6, 9, and 12 weeks after the baseline visit along with safety monitoring and tape strip sampling for SPA. The average age and baseline IGA and CEA scores for subjects in the two groups were similar (Figure 2a).

[Figure 2 panel titles: subject demographic information; IGA scores at weeks 0 and 12; CEA scores at weeks 0 and 12; serine protease activity in all patients by week; SPA of patients in the SEI003 treatment group over time; SPA of patients in the placebo group over time.]

Figure 2. Response of rosacea patients to SEI003 treatment. (a) Demographic data for participants completing the trial. (b) Stratum corneum protease activity (SPA) levels were not significantly different in the treatment groups at week 2; however, differences at weeks 6, 9, and 12 were statistically significant.
(c) SPA levels of individual subjects in the treatment group tended to decrease over time. (d) SPA levels of individual subjects in the placebo group remained relatively unchanged over time. (e) IGA scores decreased throughout the study for participants in both groups; however, scores were significant only for time and for the interaction between time and treatment group. (f) CEA scores also trended down as participants progressed through the study; however, a two-way analysis of variance of CEA scores for participants in both groups was significant only for time. (g) Decreased erythema is seen in patients in the treatment group at week 12. (h) Decreased erythema is also appreciated in this control subject over time; however, this finding is less pronounced in these patients. Error bars represent mean ± SE of the mean. Treatment group n=7, control group n=4. CEA, Clinician's Erythema Assessment; IGA, Investigator's Global Assessment; KLK, kallikrein; RFU, relative fluorescent units.

The average SPA of the SEI003 treatment group was significantly lower than that of the placebo group at weeks 6, 9, and 12 after the start of therapy. A two-way analysis of variance comparing the SPA of subjects in the two groups at all visits showed that treatment group (P=0.002), time (P=0.002), and the interaction between time and treatment group (P=0.001) all had a significant effect on subjects' SPA (Figure 2b). As this was a small pilot study, and to better illustrate the individual response of SPA over the course of the study, SPA levels were also plotted for each subject over time. SPA levels of the two subjects with the highest baseline SPA had the most pronounced decrease (Figure 2c). In contrast, SPA levels of subjects in the placebo group remained relatively stable during the course of the study (Figure 2d).
Overall improvements in IGA and CEA scores were seen in subjects in both the placebo and SEI003 groups over the 12-week course of the study. Although there were no significant differences in IGA and CEA scores between the two groups at any time point, a correlation between activity to inhibit SPA and clinical improvement was seen, as IGA and CEA scores of subjects in the SEI003 group differed significantly at week 12 compared to baseline (P=0.05, Figure 2e and f). This significant change was not seen in the placebo group (P=0.15, Figure 2e; P=0.98, Figure 2f). These findings are likely related to the small sample size and short duration of the study, as well as a placebo effect from the base cream. Digital photographs taken of participants' faces at the beginning of each visit showed the pronounced improvement in rosacea symptoms in some patients receiving SEI003 (Figure 2g and h). Interestingly, the two subjects with the highest baseline SPA values were among the subjects with the most noticeable changes on digital photography, and both are pictured in Figure 2g. No adverse events were reported during the study. Previous studies have correlated high KLK5 activity with the rosacea disease state (Yamasaki et al., 2007). Recent studies of known rosacea treatments suggest that clinical improvements correlate with decreases in this protease cascade, suggesting that KLK5 has a key role in mediating the inflammatory response seen in rosacea through its ability to activate cathelicidin. A recent open-label study of 15% azelaic acid gel showed that rosacea patients with high baseline SPA had a significant reduction in their SPA by week 16 of treatment, and this correlated with clinical improvement (Coda et al., 2013). Our data show a similar trend, with the subjects with the highest baseline SPA showing maximal improvement.
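The week-0 versus week-12 within-group comparisons quoted above are paired by subject. The letter does not name the statistical test used; a paired t-test on made-up, illustrative scores is sketched here only to show the shape of such an analysis (assuming SciPy is available):

```python
# Hedged sketch of a within-group, paired baseline vs. week-12 comparison.
# Scores below are invented for illustration and are not the study's data.
from scipy.stats import ttest_rel

baseline = [2.0, 2.0, 1.5, 2.5, 1.5, 2.0, 1.5]  # illustrative IGA-like scores, n=7
week12 = [1.0, 1.5, 1.0, 1.5, 0.5, 1.0, 1.0]    # consistently lower at week 12

# ttest_rel pairs each subject's two visits, testing whether the
# mean within-subject change differs from zero.
t_stat, p_value = ttest_rel(baseline, week12)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```

Because the comparison is within subject, each participant serves as their own control, which is what gives a small pilot like this any power at all.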
In another recent study, the antibiotic doxycycline was found to indirectly inhibit SPA and block LL-37 generation in vitro (Kanada et al., 2012). Subsequently, a report of a multi-center clinical trial showed that the therapeutic benefits of doxycycline, when taken as a formulation consisting of both immediate-release and delayed-release beads, also correlated with decreased protease activity and LL-37 (Huang et al., 2013). Our study agrees with the associations observed in these other studies, and further suggests that direct, targeted inhibition of SPA can result in a reduction of erythema and inflammatory papules in some patients with rosacea. In conclusion, this pilot study shows early clinical evidence that directly supports a causative role for high protease activity in the pathogenesis of rosacea. Although we failed to show clear evidence of clinical benefit over placebo for this formulation of serine protease inhibitor, this study was designed only as a proof-of-concept trial with a small number of patients. This work supports the KLK-cathelicidin inflammation hypothesis and the need for an adequately powered clinical trial of the therapeutic benefit of directly targeting SPA in the treatment and control of rosacea.

CONFLICT OF INTEREST

RLG serves as an unpaid member of the advisory board and consultant to Skin Epibiotics. EH is acting chief executive officer and chief medical officer of Skin Epibiotics. Skin Epibiotics is a virtual corporate entity designated to assess and document potential intellectual property associated with active pharmaceutical ingredients the company identifies for use in new dermatologic-related indications. Although SEI003 was developed by RLG and EH while both were employed by UCSD, the product was eventually named SEI003 after Skin Epibiotics.

ACKNOWLEDGMENTS

In vitro analysis described in this work was supported in part by the United States National Institutes of Health (NIH) grant R01-AR052728 to RLG.
Neither Therapeutics nor Skin Epibiotics provided any financial compensation for the study or to any members of the study team, with the exception of EH, who left his position at UCSD for employment opportunities at these companies part-way through the study.

Aimee M. Two1, Tissa R. Hata1, Teruaki Nakatsuji1, Alvin B. Coda1, Paul F. Kotol1, Wiggin Wu1, Faiza Shafiq1, Eugene Y. Huang1,2,3 and Richard L. Gallo1,3

1Division of Dermatology, University of California, San Diego, San Diego, California, USA; 2Therapeutics Clinical Research, San Diego, California, USA; 3Skin EpiBiotics Inc., San Diego, California, USA. E-mail: rgallo@ucsd.edu

REFERENCES

Coda AB, Hata T, Miller J et al. (2013) Cathelicidin, KLK5 and serine protease activity is inhibited during treatment of rosacea with azelaic acid 15% gel. J Am Acad Dermatol 69:570-7

Huang E, DiNardo A, Gallo R et al. (2013) Surface cathelicidin expression is a predictor of treatment success in papulopustular rosacea. J Invest Dermatol 133:S23

Kanada KN, Nakatsuji T, Gallo RL (2012) Doxycycline indirectly inhibits proteolytic activation of tryptic kallikrein-related peptidases and activation of cathelicidin. J Invest Dermatol 132:1435-42

Koczulla R, von Degenfeld G, Kupatt C et al. (2003) An angiogenic role for the human peptide antibiotic LL-37/hCAP-18. J Clin Invest 111:1165-72

Lande R, Gregorio J, Facchinetti V et al. (2007) Plasmacytoid dendritic cells sense self-DNA coupled with antimicrobial peptide. Nature 449:564-9

Morizane S, Yamasaki K, Muhleisen B et al. (2012) Cathelicidin antimicrobial peptide LL-37 in psoriasis enables keratinocyte reactivity against TLR9 ligands. J Invest Dermatol 132:135-43

Yamasaki K, Di Nardo A, Bardan A et al. (2007) Increased serine protease activity and cathelicidin promotes skin inflammation in rosacea. Nat Med 13:975-80

Yamasaki K, Schauber J, Coda A et al. (2006) Kallikrein mediated proteolysis regulates the antimicrobial effects of cathelicidins in skin.
FASEB J 20:2068-80

work_bm4tjkz3qjcqzjfizb7yroinie ----

Short Communication

A low-cost, endoscopic, digital, still and video photography system for ENT clinics

B W HOWES, C REPANOS*

Abstract

Image capture systems that display and record endoscopic images are important for documentation and teaching. We have modified a universal serial bus microscope to couple with most clinical endoscopes used in our practice. This very economical device produces images suitable for teaching, and potentially for clinical use. The implications of this could be significant for teaching, patient education, documentation and the developing world.

Key words: Camera; Imaging; Endoscope; Teaching

Introduction

Image capture systems that display and record endoscopic images are important for documentation and teaching.1 Many ENT clinics have only one purpose-built system, and the majority of clinic rooms have no image recording facilities.
Home-made methods for attaching consumer digital still cameras to endoscopes have been described.2 – 5 Most record only still images, do not provide a ‘live’ image preview and are expensive, and some are large. We have modified a cheap consumer electronic micro- scope to couple with endoscopes that have a standard eye- piece. Video and still images can be displayed and recorded on any universal serial bus (USB) equipped computer (Figure 1) and (Figure 2). Method Webcams are cheap, miniature, digital video cameras con- nected to a computer, which capture images of the user. The VehoTM VMS-001USB microscope (Veho HQ, Hampshire, UK; www.veho-uk.com) has a similar function to a webcam, but is designed for high magnification. We used part of a disposable camera-head drape (Fair- mont Medical# DCC 8007 disposable camera drape (Bayswater, Victoria, Australia)) as a cheap method of attaching the USB microscope to an endoscope. The fitting is suitable for clinical endoscopes equipped with the common standard 32 mm eyepiece. The USB microscope and endoscope adapter required modification. More details are available online at http:// www.usbendoscopy.co.uk. The camera software runs on most operating systems, including the latest ‘netbook’ low cost compact laptop com- puters. When combined with a battery-powered endoscopy light source, the system is exceptionally portable. We asked 30 colleagues to compare video images (of nasendoscopy performed on one of the authors) recorded on the USB microscope with images from a professional image capture system. They reported that the USB system produced images suitable for teaching and clinical work. We do not advise its use as a diagnostic imaging tool, as we are awaiting clearance for clinical purposes from the Medicines and Healthcare Products Regulatory Agency; however, the device has been cleared as safe by FIG. 1 (a) VehoTM VMS 001 USB Microscope. (b) Fairmont Medical 8007 disposable camera drape ( polythene removed). 
(c) Universal serial bus camera head.
Presented at the Group of Anaesthetists in Training meeting, 4 July 2009, Cambridge, UK. Awarded the Association of Anaesthetists of Great Britain and Ireland Registrar's Prize (President's Medal). From the Departments of Anaesthesia and *Otolaryngology, Royal United Bath Hospital, UK. Accepted for publication: 17 September 2009. First published online 22 December 2009. The Journal of Laryngology & Otology (2010), 124, 543-544. © JLO (1984) Limited, 2009. doi:10.1017/S0022215109992428
our hospital's medical equipment department. A video is available online at http://www.vimeo.com/6006462. Conclusions The low-cost USB endoscopy camera described has excellent potential for teaching, patient education, documentation and use in the developing world. The image obtained may be useful for demonstrating pathology in the out-patient clinic and in-patient ENT consultation, and for aiding fibre-optic intubation during anaesthesia. References 1 Tabaee A, Johnson PE, Anand VK. Office based digital video endoscopy in otolaryngology. Laryngoscope 2006;116:1523-5 2 Judd O, Repanos C. Digital photography in otolaryngology. The Otolaryngologist 2007;1:127-31 3 Barr GD. Low cost digital endoscopic photography. J Laryngol Otol 2009;123:453-6 4 Melder PC. Simplifying video endoscopy: inexpensive analog and digital video endoscopy in otolaryngology. Otolaryngol Head Neck Surg 2005;132:804-5 5 Pothier DD, Hajiioannou J, Grant DG, Repanos C, Kalk N. A simple and inexpensive technique of recording operative microscopic and endoscopic video directly to your computer.
Clin Otolaryngol 2007;32:498-500 Address for correspondence: Mr C Repanos, Specialist Registrar in Otolaryngology, Royal United Bath Hospital, Combe Park, Bath BA1 3NG, UK. E-mail: costa@repanos.com Mr C Repanos takes responsibility for the integrity of the content of the paper. Competing interests: None declared
FIG. 2. Example of still images obtained via universal serial bus microscope.
work_bqiyrooukzda3pjikgay2crlky ---- University of Groningen Zebra finch females prefer males with redder bills independent of song rate—a meta-analysis Simons, Mirre J.P.; Verhulst, Simon Published in: Behavioral Ecology DOI: 10.1093/beheco/arr043 IMPORTANT NOTE: You are advised to consult the publisher's version (publisher's PDF) if you wish to cite from it. Please check the document version below. Document Version: Publisher's PDF, also known as Version of record Publication date: 2011 Link to publication in University of Groningen/UMCG research database Citation for published version (APA): Simons, M. J. P., & Verhulst, S. (2011). Zebra finch females prefer males with redder bills independent of song rate—a meta-analysis. Behavioral Ecology, 22(4), 755-762. https://doi.org/10.1093/beheco/arr043 Copyright Other than for strictly personal use, it is not permitted to download or to forward/distribute the text or part of it without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license (like Creative Commons).
Take-down policy: If you believe that this document breaches copyright please contact us providing details, and we will remove access to the work immediately and investigate your claim. Downloaded from the University of Groningen/UMCG research database (Pure): http://www.rug.nl/research/portal. Download date: 06-04-2021
Behavioral Ecology doi:10.1093/beheco/arr043 Advance Access publication 4 May 2011 Original Article Zebra finch females prefer males with redder bills independent of song rate—a meta-analysis Mirre J.P. Simons and Simon Verhulst Behavioural Biology, Center for Life Sciences, University of Groningen, PO Box 11103, 9700 CC Groningen, The Netherlands Male zebra finches display multiple secondary sexual traits such as song and red bill coloration. This color is dependent on carotenoids, which enhance immune function and are antioxidants. A red bill may thus function as an indicator signal. The zebra finch is extensively used in the study of carotenoid-dependent signaling. However, studies of female mate preferences for redder bills show mixed results. Here, we report a meta-analysis of mate-choice studies that reveals that female zebra finches do prefer males with redder bills (r = 0.61), except when there was reduced opportunity for imprinting or when bill color was experimentally manipulated, which both reduced preference for red bills to approximately zero. The latter may either be due to aspects of the experimental design or due to bill color being correlated with another trait such as song rate, as was previously suggested.
We show, however, in a separate meta-analysis on a different set of studies that the correlation between bill coloration and song rate (r = 0.14) was significantly lower than the r = 0.61 between bill color and attractiveness. We conclude, therefore, that the role of bill coloration in mate choice cannot be solely due to an association with song rate. Thus, we conclude that females do prefer males with redder bills when there was sufficient opportunity for sexual imprinting, but to what extent this is causally related to the bill color remains to be established. Key words: bill coloration, mate choice, sexual coloration, sexual signaling, song rate, zebra finch. [Behav Ecol 22:755-762 (2011)] INTRODUCTION Zebra finches (Taeniopygia guttata) exhibit brightly red-colored bills. Sexual traits often signal quality (Grafen 1990; Kotiaho 2001). In male zebra finches, reproductive success (Price and Burley 1994) and physiological indicators of quality such as immune functioning and condition are positively correlated with bill redness (Birkhead et al. 1998; Birkhead et al. 2006; Bolund et al. 2010). Experimentally, an immune challenge (Alonso-Alvarez et al. 2004; Gautier et al. 2008; Cote et al. 2010) and cold exposure (Eraud et al. 2007) have been shown to decrease zebra finch bill coloration. Male zebra finch bill coloration thus exhibits variation that reflects phenotypic quality with respect to physiological state. The redness of sexual ornaments of many species, including the zebra finch's bill, is dependent on carotenoids (McGraw 2004; Olson and Owens 2005; Pike, Blount, Bjerkeng, et al. 2007). Carotenoids have multiple physiological functions, acting as antioxidants and supporting the immune system (Pérez-Rodríguez 2009), but cannot be synthesized by the animal itself, and hence carotenoid availability is determined by the dietary intake.
Because carotenoids may be limiting but physiologically important, carotenoid-dependent sexual ornaments may be particularly suitable as quality indicators, signaling immune functioning and antioxidant capacity (Olson and Owens 1998). Recently, the role of carotenoids as antioxidants has been questioned (Costantini and Moller 2008). However, noncarotenoid antioxidants also increase carotenoid-dependent sexual coloration, suggesting that carotenoid-dependent coloration does signal antioxidant capacity (Bertrand et al. 2006; Perez et al. 2008; Pike, Blount, Lindström, and Metcalfe 2007). If carotenoid-dependent sexual traits do reliably signal antioxidant capacity (Hartley and Kennedy 2004; Pérez-Rodríguez 2009), mate choice for these traits may yield direct and/or indirect fitness benefits, explaining why such traits feature in mate selection. Given that zebra finch bill color is extensively used in the study of carotenoid-dependent sexual signaling and that it reflects phenotypic quality, it is surprising that female preferences for bill color show little consensus among studies (Collins and ten Cate 1996). Furthermore, it has been suggested that the importance of bill color in mate choice is minor and that instead females prefer high song rates (i.e., display rate), which covary with bill color (Collins et al. 1994; Collins and ten Cate 1996), hence resulting in an apparent preference for redder bills in some studies. Here, we combine all female mate-choice studies that we could find that presented the required information, using meta-analysis to test the hypothesis that female zebra finches prefer males with redder bills. Studies that correlate natural variation in male bill coloration with female choice cannot demonstrate causality because choosing females may rely on traits that covary with bill color, resulting in a choice for redder bills. We therefore also included studies that manipulated bill color experimentally in our meta-analysis.
We further tested whether the covariance between song rate and bill color can explain female choice for bill coloration, as hypothesized by Collins and ten Cate (1996), by summarizing the correlation between song rate and bill coloration in a different set of studies. MATERIALS AND METHODS In a meta-analysis, the individual effect size estimates of different studies are weighted by study sample size to combine them into one average effect size. If this average effect size deviates significantly from zero, it can be concluded that overall the null hypothesis can be rejected, and hence this provides an objective synthesis of studies that tested a specific hypothesis. Address correspondence to M.J.P. Simons. E-mail: m.j.p.simons@rug.nl. Received 4 October 2010; revised 11 March 2011; accepted 12 March 2011. © The Author 2011. Published by Oxford University Press on behalf of the International Society for Behavioral Ecology. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com To test whether the correlation between attractiveness and bill color can be attributed to a correlation between bill color and song rate, our approach was to quantify the association between bill color and song rate using meta-analysis on a different set of studies and compare the strength of this correlation with the correlation between the color of a male's bill and his attractiveness. When the association between bill color and attractiveness is significantly stronger than the correlation between bill color and song rate, we can infer that the association between bill color and attractiveness is not solely dependent on an association with song rate.
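The weighting scheme just described can be sketched in code. The authors fitted random-effects models with the metafor package in R; the following is only a simplified fixed-effect Python sketch, not the paper's actual analysis. It pools Pearson r values on the Fisher-z scale, weighting each study by n − 3 (the inverse of the variance of Fisher's z); the example study values are hypothetical.

```python
import math

def fisher_z(r):
    # Fisher's Zr transform; Var(z) = 1 / (n - 3)
    return 0.5 * math.log((1 + r) / (1 - r))

def pooled_r(studies):
    """Pool (r, n) pairs on the Fisher-z scale with weights n - 3,
    then back-transform the mean and its 95% CI to the r scale."""
    w_sum = sum(n - 3 for _, n in studies)
    z_bar = sum((n - 3) * fisher_z(r) for r, n in studies) / w_sum
    se = math.sqrt(1.0 / w_sum)
    lo, hi = z_bar - 1.96 * se, z_bar + 1.96 * se
    return math.tanh(z_bar), (math.tanh(lo), math.tanh(hi))
```

If the back-transformed 95% CI excludes zero, the null hypothesis of no average effect is rejected, which is the logic the paragraph above describes.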
Female mate choice for male bill coloration We searched studies using Google Scholar and "zebra finch," "female choice" or "female mate choice," and "bill" or "beak" color (colour) as search terms, and also checked the references of the retrieved papers for relevant material. Authors were contacted for relevant statistics not reported in papers. When studies did not report raw proportions of choice or test statistics, or the raw data could not be measured from graphs, or we did not succeed in contacting the authors, they could not be used (Immelmann 1959; Weisman et al. 1994). The statistical approach between studies differed, with some reporting the preference for the reddest male and others reporting the relationship between the difference in redness and the resulting female preference. The second approach includes both the effect of the difference between males in redness together with the overall preference for the reddest male. We recommend reporting both in future research to ease comparison between studies. For the purpose of this review, we included both approaches because the rejection of either approach would have resulted in a substantial loss of studies. We preferred the statistic of the preference for the reddest males if both approaches were available. Our own unpublished data were included in the meta-analysis. A classic 2-way choice trial was conducted in our outside aviaries (Haren, The Netherlands) for 2 h, in between which males that could hear but not see each other were switched sides to control for side preferences. Preference was scored as time spent by the female at the side of the aviary of the male relative to the total time spent with either of the 2 males. Bill color was measured by the use of digital photography under controlled camera settings and lighting conditions (Simons MJP, in preparation). The redness of the bill was expressed as hue in HSV (Hue, Saturation, Value) color space.
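Expressing redness as hue in HSV space, as done for the photographic measurements above, can be illustrated with Python's standard colorsys module; the RGB values below are hypothetical, not measurements from the study.

```python
import colorsys

# Hypothetical mean RGB of a bill patch (0-255 per channel)
r, g, b = 200, 60, 40

# colorsys expects channels in [0, 1]; hue is returned in [0, 1)
h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
hue_degrees = h * 360  # redder bills give hue closer to 0 degrees
```

Hue near 0 degrees corresponds to red; as a bill shifts toward orange the hue increases, which is why hue alone can serve as a one-dimensional redness score.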
A key element of meta-analysis is the selection of studies to include, and unfortunately we had to omit the study by Roberts et al. (2007) on methodological grounds because the authors used principal component analysis to analyze spectrophotometric data of bills, which is difficult to compare with the methods applied by the other studies. The methods employed by the other studies were the Munsell method, photography, or photospectrometry, which were all expressed as hue. In the Munsell method, brightness (as well as saturation) is weighted but little in comparison with hue (Burley and Coopersmith 1987). Roberts et al. (2007) found that males with brighter bills (the principal component corresponding to brightness) were preferred by females; this finding can therefore not be interpreted as either positive or negative, but rather as an incentive for further research on which aspects of the light reflected by zebra finch bills are found to be attractive. Experimental studies Some studies manipulated bill color using nail polish, polymer paint, or marker pen of wild-type or white morph birds (Burley and Coopersmith 1987; Collins et al. 1994; Sullivan 1994; Vos 1995). To evaluate the effect of such manipulations, we used a moderator for studies that interfered with the natural appearance of males. Sexual imprinting In birds, sexual imprinting, which can shape mating preferences, seems to be the rule rather than the exception (ten Cate and Vos 1999). In zebra finches, this process continues at least up to 46 days of age (which is the median period of imprinting experimentally shown to be still effective in shaping preference: ten Cate 1987; Vos et al. 1993) and requires close interaction with adult conspecifics (ten Cate et al. 1984). Bill color specifically has been shown to be a trait zebra finches imprint strongly on, to the extent that experimental imprinting conditions can reverse mate preferences with respect to bill color (Weisman et al. 1994; Vos 1995).
The average imprinting opportunity in this set of studies ranged from short, 30-40 days, to long, 48-100 days, and we used these imprinting opportunity ranges as a 2-level categorical moderator in the analysis. This dichotomization was based on the length of the imprinting process, which was evaluated experimentally (ten Cate 1987; Vos et al. 1993). We also report the estimator for a continuous fit, but in further analyses we use dichotomization for 3 reasons. First, it is well known that there are critical phases for imprinting, and hence sexual imprinting is not a linear process. Second, dichotomization allowed us to compare the overall effect size of relatively long-imprinted individuals with the song rate/bill color correlation. Third, it allowed us to include studies where the information provided on the imprinting period was not provided in great detail (Table 1). Three studies isolated chicks from adults at 30-40 days (Balzer and Williams 1998; Blount et al. 2003; Forstmeier and Birkhead 2004); 4 other studies kept chicks with adults, on which they could imprint, for a period which was on average over 48 days (see Table 1). Of these 3 studies, only Forstmeier and Birkhead (2004) reported their imprinting conditions; housing conditions from Balzer and Williams (1998) and Blount et al. (2003) were obtained through correspondence with the authors. Song rate, bill color correlation Search terms in Google Scholar included "zebra finch" and "song" or "display" and "bill" or "beak" color/colour. Authors of studies that measured both song rate and bill coloration, but did not report how they correlated, were contacted. We restricted our analysis to studies that measured song rate without other male competitors present because this confounds song rate with female choice and behavior of the male competitor (this was the reason to omit de Kogel and Prijs 1996). This resulted in a total of 6 studies.
In this analysis, we included studies regardless of the age up to which juveniles could imprint on adults because we are not aware of indications that this affects either bill color or song rate. Statistics Reported statistics were converted into Pearson's r using standard formulas (Rosenthal 1994). Proportions were converted into a χ2 statistic before conversion to r. Pearson's r's were converted into Fisher's Zr's before analysis (Nakagawa and Cuthill 2007). The meta-analyses were performed with the Metafor package (Viechtbauer 2010) in R (R Development Core Team 2009) using random-effects meta-analysis fitted with restricted maximum likelihood. When multiple effect sizes were extracted from one study, we used the weighted average. Each study was weighted by independent sample size n − 3 (Cooper et al. 2009). The studies were examined for within-study pseudoreplication, which occurs when stimulus sets of males or individual females are used repeatedly. The independent sample size is therefore the sample size that could be used without risking pseudoreplication. For example, if 20 females were tested with 5 stimulus sets of males, we used 5 as the independent sample size. Publication bias was investigated using funnel plots.

Table 1. Summary of studies reporting female choice for male bill coloration

Study | Sample size | Independent sample size (corrected for possible pseudoreplication) | Statistic reported | Effect size (r) | Moderating variable | Bill color measurement method and other remarks | Opportunity to imprint on adults
Burley and Coopersmith (1987) | 14 | 4 | 12/14 | 0.87 | Long imprinting | Munsell | Between 34 and 62 days
Burley and Coopersmith (1987) | 24 | 5 | χ2 = 13.35, df = 1 | 0.43 | Long imprinting | Munsell | Between 34 and 62 days
de Kogel and Prijs (1996) | 26 | 9 | t = 4.97, df = 24 | 0.71 | Long imprinting | Munsell | 50 days
Unpublished from our lab | 21 | 21 | t = 3.31, df = 19 | 0.60 | Long imprinting | Digital photography, hue in HSV color space | 100 days
Houtman (1992) | 24 | 24 | F = 10.95, df = 1,22 | 0.58 | Long imprinting | Munsell | Birds were obtained from commercial breeders and the precise rearing conditions were not specified, but we have no reason to assume that chicks were isolated from adults before 46 days of age since, to our best knowledge, commercial breeding usually takes place in aviaries
Roberts et al. (2007) | 48 | 8 | χ2 = 7.28, df = 1 | 0.55 | Not useable; brightness is incomparable with Munsell system data | Principal component of spectrophotometry corresponding to brightness | 62 days
Vos (1995) | 19 | 5 | t = −0.30, df = 17 | −0.07 | Artificial manipulation | Measured from Figure 1a; t-test of arcsine-converted values against hypothesized mean | 55 days
Sullivan (1994) | 11 | 5 | 6/11 | 0.09 | Artificial manipulation | — | 49 days
Burley and Coopersmith (1987) | 25 | 7 | 18/25 | 0.44 | Artificial manipulation | Munsell | Between 34 and 62 days
Collins et al. (1994) | 8 | 8 | t = −1.64, df = 6 | −0.56 | Artificial manipulation | Not available | —
Blount et al. (2003) | 10 | 10 | t = 0.74, df = 8 | 0.25 | Short imprinting | Data obtained from author; t-test of arcsine-converted values against hypothesized mean; color measurement comparable with Munsell with use of the Dulux Trade Colour Palette | 40 days
Balzer and Williams (1998) | 33 | 33 | Z = 0.18, df = 31 | 0.03 | Short imprinting | Munsell | 30 days
Forstmeier and Birkhead (2004) | 77 | 77 | F = 1.465, df = 1,77 | −0.14 | Short imprinting | Munsell | 35 days

HSV, Hue, Saturation, Value.
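The conversions to Pearson's r use the standard formulas (Rosenthal 1994): r = sqrt(t²/(t² + df)) for a t statistic and r = sqrt(F/(F + df_error)) for an F with 1 numerator df. A small Python sketch (the paper's own analysis used R) reproduces several of the effect sizes listed in Table 1:

```python
import math

def r_from_t(t, df):
    # Pearson r from a t statistic, keeping the sign of t
    return math.copysign(math.sqrt(t * t / (t * t + df)), t)

def r_from_f(f, df_error):
    # Pearson r from an F statistic with 1 numerator df
    return math.sqrt(f / (f + df_error))

# Checks against Table 1:
# de Kogel and Prijs (1996): t = 4.97, df = 24   -> r = 0.71
# Collins et al. (1994):     t = -1.64, df = 6   -> r = -0.56
# Houtman (1992):            F = 10.95, df = 1,22 -> r = 0.58
```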
In a funnel plot, publication bias is revealed by an increase in absolute effect size with decreasing sample size. The significance of this relationship was tested using a rank test (Viechtbauer 2010). When studies use different methodology or there are differences between study populations, this induces variability in "true" effect sizes. The resulting heterogeneity between studies can be evaluated using the Q test (Viechtbauer 2010). A significant heterogeneity indicates that there are likely to be moderating variables that explain the variability between studies. The studies that allowed for short- and long-imprinting periods and the experimental studies were coded in one moderating variable with 3 levels. These were nonoverlapping categories because all studies that manipulated bills experimentally (except one that did not report this information) allowed for a long-imprinting period. The estimates reported are the deviation from the group of studies that allowed for a long-imprinting period. RESULTS Do females prefer males with the reddest bill? A total of 11 independent mate-choice studies were obtained (Table 1, Figure 1). Our analysis revealed an average effect size of r = 0.28 (95% confidence interval [CI] 0.0006-0.52), showing that on average, females preferred males with redder bills (z = 1.96, P < 0.05). Heterogeneity was significant, however (Q = 26.4, P = 0.003). The moderator coding for imprinting conditions and experimental bill manipulation explained a significant proportion of this variance (df = 2, Q = 21.93, P < 0.0001). The average effect size of studies that allowed for a long-imprinting period (n = 4) turned out to be significantly higher than the average effect size of the studies (n = 3) with limited opportunity for imprinting (difference: −0.65, z = −4.59, P < 0.0001) and also significantly higher than the average effect size of the experimental studies (n = 4; difference: −0.67, z = −2.58, P < 0.01).
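The rank test for funnel-plot asymmetry applied in this study (Viechtbauer 2010) is based on Kendall's rank correlation between sample size and effect size. A minimal pure-Python sketch of the statistic itself, without tie correction or a P value, for illustration only:

```python
def kendall_tau(xs, ys):
    """Kendall's tau over paired observations: (concordant - discordant)
    divided by the number of pairs. In a funnel-plot asymmetry test,
    xs are sample sizes and ys are effect sizes; a strongly negative tau
    suggests smaller studies report larger effects (possible publication bias)."""
    n = len(xs)
    conc = disc = 0
    for i in range(n):
        for j in range(i + 1, n):
            s = (xs[i] - xs[j]) * (ys[i] - ys[j])
            if s > 0:
                conc += 1
            elif s < 0:
                disc += 1
    return (conc - disc) / (n * (n - 1) / 2)
```

A tau near zero, as reported below for both meta-analyses, indicates no detectable relationship between study size and effect size.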
Less opportunity for imprinting and artificial manipulation thus reduced female preference for red male bill coloration to approximately zero (0.61 − 0.65 or − 0.67 = −0.04 or −0.06).
Figure 1. Effect size of female preference for red bills (r ± 95% CI calculated using independent sample size), ordered within each panel with respect to sample size, with the lowest sample size at the top. Bottom effect size is the average.
When imprinting opportunity (in days) was fitted as a continuous linear moderator (which reduced sample
When we compared this average effect (r ¼ 0.14) with the average effect size of female mate choice for males with red- der bills (r ¼ 0.61), using a Student’s t-test on the average effect sizes with their corresponding standard errors under the assumption of unequal variance, this revealed that the latter was significantly higher (t ¼ 3.23, P ¼ 0.018). Female choice for red bills can therefore not be solely dependent on the correlation between song rate and bill coloration (see MATERIALS AND METHODS). Publication bias Funnel plots (Figure 3) did not reveal a publication bias, but this is difficult to detect with this relatively limited sample of studies. Rank tests for funnel plot asymmetry were nonsignif- icant for both analyses (Kendall’s tau . 20.09, P . 0.70). DISCUSSION Red is preferred Previous studies reported mixed results with respect to the role of male bill color in female mate choice in zebra finches. Using meta-analysis, we show that female zebra finches on average prefer males with redder bills, which is in agreement with the reported link between bill color and phenotypic qual- ity (see INTRODUCTION). Given that bill coloration signals phenotypic quality, female choice for redder bills selects for males with higher phenotypic quality. It has previously been suggested (Collins et al. 1994; Collins and ten Cate 1996) that observed mating preference for males with redder bills might be due to mate selection for song rate, when this is positively associated with bill redness. However, our meta-analysis revealed that the association between bill redness and song rate was not very strong (r ¼ 0.14) and not significant and significantly lower than the reference ef- fect size for preference for redder bills (r ¼ 0.61). This implies that the latter effect cannot be fully explained by the correla- tion between male bill color and song rate. 
A large difference in measurement error between mate choice and song rate could, however, also be responsible for this difference because random measurement error reduces effect sizes. However, measurement error of mate choice is probably higher than that of song rate, given that mate choice is a behavioral trait with relatively low repeatability (Bell et al. 2009) whereas song rate has been reported to be highly repeatable (Birkhead and Fletcher 1995; Forstmeier and Birkhead 2004). Thus, we con- sider it unlikely that the difference between the 2 effect sizes is due to a difference in measurement error. Figure 2 Effect size of (r 6 95% CI) the song rate/ bill coloration rela- tionship ordered from top to bottom with respect to sample size, starting with the lowest sample size. Bottom effect size is the average. Table 2 Summary of studies reporting the correlation between male bill coloration and song rate Study Sample size Statistic reported (effect size, r) Bill color measurement method and other remarks Birkhead and Fletcher (1995) 10 0.05 Munsell Houtman (1992) 22 0.45 Munsell Birkhead et al. (1998) 31 0.13 Munsell, measured and analyzed from their Figure 4 Unpublished from Barbara Tschirren 57 20.25 Hue calculated from spectrophotometry, for methods see Pryke et al. (2001), Tschirren et al. (2009) Bolund et al. (2010) 76 0.26 Munsell Forstmeier and Birkhead (2004) 104 0.20 Munsell Simons and Verhulst • Female zebra finches prefer red bills 759 a t U n ive rsity o f G ro n in g e n o n S e p te m b e r 2 1 , 2 0 1 1 b e h e co .o xfo rd jo u rn a ls.o rg D o w n lo a d e d fro m http://beheco.oxfordjournals.org/ The proportion of variance in female choice explained by male bill coloration is 0.612 ¼ 0.37. The proportion that can- not be due to the song rate/bill color correlation is 0.37 – 0.142 ¼ 0.35. This considerable amount of explained variance increases our confidence in the signaling function of the male zebra finch bill. 
However, it still leaves a considerable part of variance to be explained (0.65 if error is ignored). This means that other (sexual) traits have the potential to be more important in mate choice than bill coloration. Due to the correlative nature of the mate-choice studies, the effect of bill color does not necessarily imply that females discriminate between potential mates using bill coloration. Instead, they may select on traits other than song rate that do covary with bill color. Possible candidates are song content (Holveck and Riebel 2007; Riebel 2009), UV reflectance (Bennett et al. 1996), and chest plumage symmetry (Swaddle and Cuthill 1994), which have all been shown to play a role in zebra finch mate choice. A way to establish to what extent bill color is causally involved in the strong preference for redder bills is to study the effect of manipulated bill color on attractiveness. However, the available experimental studies show no effect on average (Figure 1). There can be different reasons for the conspicuous contrast between the experimental and observational results, including of course, as discussed above, that females base mate choice on other traits that show, however, a fairly strong correlation with bill color. Alternatively, there could be methodological aspects of the experimental studies that explain the negligible effect size, such as the challenge of manipulating bill color while maintaining a fully natural appearance of the bill. Furthermore, even when the manipulation is successful in maintaining the natural appearance of the bill, the manipulation may create a mismatch between bill color and other sexual signals, which in itself may change female perception of the male (e.g., Künzler and Bakker 2001). Lastly, different control groups are missing from the experimental studies, which makes these studies difficult to interpret and compare.
When a female is presented with a male manipulated to be more attractive and an unmanipulated male (as in Burley and Coopersmith 1987), the lack of a sham-manipulated group limits the strict conclusion from such experiments to: female zebra finches prefer or do not prefer artificially manipulated males. When both males are manipulated (as in Collins et al. 1994 and Sullivan 1994), the resulting artificial signal might cause females to behave abnormally if it does not adequately mimic the natural signal (as also argued in Collins and ten Cate 1996). A design that includes nonmanipulated, sham-manipulated, and manipulated-to-be-more or less attractive males could increase our understanding of the causality of female choice for male red bill coloration. Hence, bill color manipulations can show a causal effect of bill color, but failing to reject the null hypothesis can be attributable to the general approach rather than to the absence of causality. Imprinting In zebra finches, mating preferences are at least partly shaped during imprinting, at least up to 46 days of age (ten Cate 1987; Vos et al. 1993), and bill color specifically has been shown to be an important trait in this imprinting process (Weisman et al. 1994; Vos 1995). In agreement with these findings, reduced opportunity for imprinting after the age of 30-40 days resulted in an absence of preference for red male bill coloration. This effect may arise because the preference for red bill ornamentation was not fully imprinted, which could have resulted in reduced kin or sex recognition, which in its turn affected mate choice. Imprinting could have continued after 30-40 days in the groups in which juveniles were kept (personal information from authors), among which bill color had not yet developed from black to reddish (de Kogel 1997), resulting in a preference for juvenile coloration.
Aspects of the imprinting process other than specific imprinting on bill coloration can also be responsible for the differences we find. Reduced kin or sex recognition, the potential result of a short imprinting period, could affect mate-choice decisions in general, resulting in no preference for bill coloration. Thus, our analysis confirms that husbandry practices can critically affect female choice with respect to bill color, as previously suggested (Forstmeier and Birkhead 2004) and also reported for song (Riebel 2000). Note, however, that the imprinting effect we report is based on a limited number of studies, and in our view, this result should above all be seen as a good reason to evaluate this pattern with an experiment. An effect of imprinting will be important in the interpretation of studies on sexual selection in the zebra finch, and we suggest it would be prudent to control and report imprinting conditions in detail in future studies.

Figure 3: Funnel plots of both meta-analyses. Independent sample size is plotted against the effect size for each study.

[Behavioral Ecology, p. 760; downloaded from http://beheco.oxfordjournals.org/ at University of Groningen on September 21, 2011]

Male choice

We have focused on female choice, but zebra finches form stable pair bonds with mutual mate choice (Silcox and Evans 1982; Monaghan et al. 1996). Hence, the evolution of bill color will also depend on male preferences, but unfortunately male choice for female bill color has been investigated in only 2 studies (Burley and Coopersmith 1987; de Kogel and Prijs 1996). Burley and Coopersmith (1987) reported a preference for orange females compared with red, but extreme orange toward yellowish females were avoided. de Kogel and Prijs (1996) reported a nonsignificant preference for females with more orange bills compared with red.
Interestingly, females with more orange bills were found to have increased reproductive output and survival (Price and Burley 1994), suggesting males should prefer females with more orange bills compared with red. However, females with redder bills deposit more carotenoids into their eggs, and increased yolk carotenoids are associated with increased hatching and fledging success (McGraw et al. 2005). Further study is required before conclusions can be reached regarding the association between female bill color and her sexual attractiveness, and hence the role of male mate choice in the evolution of bill color.

CONCLUSIONS

We found a significant overall preference for males with redder bills, and we show that the overall effect is significantly higher than the correlation between song rate and bill coloration. This leads us to reject the hypothesis (Collins et al. 1994; Collins and ten Cate 1996) that the preference for red bill coloration results from this correlation. Additionally, the significant moderating effect of imprinting on female choice for bill coloration warrants experimental testing of this effect.

FUNDING

Netherlands Organization of Scientific Research (NWO) Toptalent grant to M.J.P.S. and an NWO Vici grant to S.V.

Egbert Koetsier and undergraduate students collected the mate-choice data in our laboratory. Jonathan Blount and Barbara Tschirren kindly provided unpublished data, Katharina Riebel alerted us to the importance of imprinting, and Wolfgang Forstmeier, Tony Williams, Tim Birkhead, and Jonathan Blount discussed their methodology with us. We thank Shinichi Nakagawa and Wolfgang Viechtbauer for advice on the meta-analysis. Carel ten Cate provided valuable comments on an earlier version of the manuscript.

REFERENCES

Alonso-Alvarez C, Bertrand S, Devevey G, Gaillard M, Prost J, Faivre B, Sorci G. 2004. An experimental test of the dose-dependent effect of carotenoids and immune activation on sexual signals and antioxidant activity.
Am Nat. 164:651–659.
Balzer AL, Williams TD. 1998. Do female zebra finches vary primary reproductive effort in relation to mate attractiveness? Behaviour. 135:297–309.
Bell AM, Hankison SJ, Laskowski KL. 2009. The repeatability of behaviour: a meta-analysis. Anim Behav. 77:771–783.
Bennett ATD, Cuthill IC, Partridge JC, Maier EJ. 1996. Ultraviolet vision and mate choice in zebra finches. Nature. 380:433–435.
Bertrand S, Faivre B, Sorci G. 2006. Do carotenoid-based sexual traits signal the availability of non-pigmentary antioxidants? J Exp Biol. 209:4414–4419.
Birkhead TR, Fletcher F. 1995. Male phenotype and ejaculate quality in the zebra finch Taeniopygia guttata. Proc R Soc B Biol Sci. 262:329–334.
Birkhead TR, Fletcher F, Pellatt EJ. 1998. Sexual selection in the zebra finch Taeniopygia guttata: condition, sex traits and immune capacity. Behav Ecol Sociobiol. 44:179–191.
Birkhead TR, Pellatt EJ, Matthews IM, Roddis NJ, Hunter FM, McPhie F, Castillo-Juarez H. 2006. Genic capture and the genetic basis of sexually selected traits in the zebra finch. Evolution. 60:2389–2398.
Blount JD, Metcalfe NB, Birkhead TR, Surai PF. 2003. Carotenoid modulation of immune function and sexual attractiveness in zebra finches. Science. 300:125–127.
Bolund E, Martin K, Kempenaers B, Forstmeier W. 2010. Inbreeding depression of sexually selected traits and attractiveness in the zebra finch. Anim Behav. 79:947–955.
Burley NT, Coopersmith C. 1987. Bill color preferences of zebra finches. Ethology. 76:133–151.
ten Cate C. 1987. Sexual preferences in zebra finch males raised by two species: II. The internal representation resulting from double imprinting. Anim Behav. 35:321–330.
ten Cate C, Los L, Schilperoord L. 1984. The influence of differences in social experience on the development of species recognition in zebra finch males. Anim Behav. 32:852–860.
ten Cate C, Vos D. 1999. Sexual imprinting and evolutionary processes in birds: a reassessment. Adv Study Behav. 28:1–31.
Collins SA, ten Cate C. 1996. Does beak colour affect female preference in zebra finches? Anim Behav. 52:105–112.
Collins SA, Hubbard C, Houtman AM. 1994. Female mate choice in the zebra finch—the effect of male beak colour and male song. Behav Ecol Sociobiol. 35:21–25.
Cooper HM, Hedges LV, Valentine JC. 2009. The handbook of research synthesis and meta-analysis. 2nd ed. New York: Russell Sage Foundation.
Costantini D, Moller A. 2008. Carotenoids are minor antioxidants for birds. Funct Ecol. 22:367.
Cote J, Arnoux E, Sorci G, Gaillard M, Faivre B. 2010. Age-dependent allocation of carotenoids to coloration versus antioxidant defences. J Exp Biol. 213:271–277.
Eraud C, Devevey G, Gaillard M, Prost J, Sorci G, Faivre B. 2007. Environmental stress affects the expression of a carotenoid-based sexual trait in male zebra finches. J Exp Biol. 210:3571–3578.
Forstmeier W, Birkhead TR. 2004. Repeatability of mate choice in the zebra finch: consistency within and between females. Anim Behav. 68:1017–1028.
Gautier P, Barroca M, Bertrand S, Eraud C. 2008. The presence of females modulates the expression of a carotenoid-based sexual signal. Behav Ecol Sociobiol. 62:1159–1166.
Grafen A. 1990. Biological signals as handicaps. J Theor Biol. 144:517–546.
Hartley RC, Kennedy MW. 2004. Are carotenoids a red herring in sexual display? Trends Ecol Evol. 19:353–354.
Holveck MJ, Riebel K. 2007. Preferred songs predict preferred males: consistency and repeatability of zebra finch females across three test contexts. Anim Behav. 74:297–309.
Houtman AM. 1992. Female zebra finches choose extra-pair copulations with genetically attractive males. Proc R Soc B Biol Sci. 249:3–6.
Immelmann K. 1959. Experimentelle Untersuchungen über die biologische Bedeutung artspezifischer Merkmale beim Zebrafinken (Taeniopygia castanotis Gould). Zool Jahrb Abt Syst Ökol Geogr Tiere. 86:437–592.
de Kogel CH. 1997.
Long-term effects of brood size manipulation on morphological development and sex-specific mortality of offspring. J Anim Ecol. 66:167–178.
de Kogel CH, Prijs HJ. 1996. Effects of brood size manipulations on sexual attractiveness of offspring in the zebra finch. Anim Behav. 51:699–708.
Kotiaho JS. 2001. Costs of sexual traits: a mismatch between theoretical considerations and empirical evidence. Biol Rev. 76:365–376.
Künzler R, Bakker T. 2001. Female preferences for single and combined traits in computer animated stickleback males. Behav Ecol. 12:681–685.
McGraw K, Adkins-Regan E, Parker R. 2005. Maternally derived carotenoid pigments affect offspring survival, sex ratio, and sexual attractiveness in a colorful songbird. Naturwissenschaften. 92:375–380.
McGraw KJ. 2004. Colorful songbirds metabolize carotenoids at the integument. J Avian Biol. 35:471–476.
Monaghan P, Metcalfe NB, Houston DC. 1996. Male finches selectively pair with fecund females. Proc Biol Sci. 263:1183–1186.
Nakagawa S, Cuthill IC. 2007. Effect size, confidence interval and statistical significance: a practical guide for biologists. Biol Rev Camb Philos Soc. 82:591–605.

[Simons and Verhulst • Female zebra finches prefer red bills, p. 761]

Olson V, Owens I. 1998. Costly sexual signals: are carotenoids rare, risky or required? Trends Ecol Evol. 13:510–514.
Olson VA, Owens IPF. 2005. Interspecific variation in the use of carotenoid-based coloration in birds: diet, life history and phylogeny. J Evol Biol. 18:1534–1546.
Perez C, Lores M, Velando A. 2008. Availability of nonpigmentary antioxidant affects red coloration in gulls. Behav Ecol. 19:967–973.
Pérez-Rodríguez L. 2009. Carotenoids in evolutionary ecology: re-evaluating the antioxidant role. BioEssays. 31:1116–1126.
Pike TW, Blount JD, Bjerkeng B, Lindström J, Metcalfe NB.
2007. Carotenoids, oxidative stress and female mating preference for longer lived males. Proc R Soc B Biol Sci. 274:1591–1596.
Pike TW, Blount JD, Lindström J, Metcalfe NB. 2007. Availability of non-carotenoid antioxidants affects the expression of a carotenoid-based sexual ornament. Biol Lett. 3:353–356.
Price DK, Burley NT. 1994. Constraints on the evolution of attractive traits: selection in male and female zebra finches. Am Nat. 144:908–934.
Pryke SR, Andersson S, Lawes MJ. 2001. Sexual selection of multiple handicaps in the red-collared widowbird: female choice of tail length but not carotenoid display. Evolution. 55:1452–1463.
R Development Core Team. 2009. A language and environment for statistical computing. Vienna (Austria): R Foundation for Statistical Computing.
Riebel K. 2000. Early exposure leads to repeatable preferences for male song in female zebra finches. Proc R Soc B Biol Sci. 267:2553–2558.
Riebel K. 2009. Song and female mate choice in zebra finches: a review. Adv Study Behav. 40:197–238.
Roberts ML, Buchanan KL, Bennett ATD, Evans MR. 2007. Mate choice in zebra finches: does corticosterone play a role? Anim Behav. 74:921–929.
Rosenthal R. 1994. Parametric measures of effect size. In: Cooper H, Hedges L, editors. The handbook of research synthesis. New York: Russell Sage Foundation Publications. p. 231–244.
Silcox AP, Evans SM. 1982. Factors affecting the formation and maintenance of pair bonds in the zebra finch, Taeniopygia guttata. Anim Behav. 30:1237–1243.
Sullivan M. 1994. Discrimination among males by female zebra finches based on past as well as current phenotype. Ethology. 96:97–104.
Swaddle J, Cuthill I. 1994. Female zebra finches prefer males with symmetric chest plumage. Proc R Soc B Biol Sci. 258:267–271.
Tschirren B, Rutstein AN, Postma E, Mariette M, Griffith SC. 2009. Short- and long-term consequences of early developmental conditions: a case study on wild and domesticated zebra finches. J Evol Biol.
22:387–395.
Viechtbauer W. 2010. Conducting meta-analyses in R with the metafor package. J Stat Software. 36:1–48.
Vos DR. 1995. The role of sexual imprinting for sex recognition in zebra finches: a difference between males and females. Anim Behav. 50:645–653.
Vos DR, Prijs J, ten Cate C. 1993. Sexual imprinting in zebra finch males: a differential effect of successive and simultaneous experience with two colour morphs. Behaviour. 126:137–154.
Weisman R, Shackleton S, Ratcliffe L, Weary D, Boag P. 1994. Sexual preferences of female zebra finches: imprinting on beak colour. Behaviour. 128:15–24.

Jaundice Eye Color Index (JECI): quantifying the yellowness of the sclera in jaundiced neonates with digital photography

TERENCE S. LEUNG,1,* FELIX OUTLAW,1 LINDSAY W. MACDONALD,2 AND JUDITH MEEK3
1Department of Medical Physics and Biomedical Engineering, University College London, UK
2Department of Civil, Environmental & Geomatic Engineering, University College London, UK
3The Neonatal Care Unit, Elizabeth Garrett Anderson Wing, University College London Hospitals Trust, UK
*t.leung@ucl.ac.uk

Abstract: The sclera is arguably a better site than the skin to measure jaundice, especially in dark-skinned patients, since it is free of skin pigment (melanin), a major confounding factor. This work aims to show how the yellowness of the sclera can be quantified by digital photography in color spaces including the native RGB and CIE XYZ.
We also introduce a new color metric we call “Jaundice Eye Color Index” (JECI), which allows the yellowness of jaundiced sclerae to be predicted for a specific total serum bilirubin level in the neonatal population. Published by The Optical Society under the terms of the Creative Commons Attribution 4.0 License. Further distribution of this work must maintain attribution to the author(s) and the published article’s title, journal citation, and DOI.

1. Introduction

Neonatal jaundice is a common and sometimes life-threatening condition in the newborn, affecting over 60% of newborn infants [1]. There is an urgent need for cheap, point-of-care tests to measure jaundice, especially in low- and middle-income countries. Jaundice is caused by the accumulation of bilirubin, a breakdown product of red blood cells, in the blood. The yellow-colored bilirubin gives rise to the characteristic yellow discoloration of the skin of jaundiced patients. Since neonatal jaundice usually develops during the first few weeks after birth, when the babies have been sent home, the identification of severe jaundice is often carried out without any specialist equipment and by parents or visiting midwives, who tend to perform visual assessment based on the “yellowness” of the skin. A systematic investigation of the reliability of such visual assessment was carried out by Riskin et al. (2008), involving 5 neonatologists and 17 nurses who visually estimated the bilirubin levels of 1,129 term and preterm infants [2]. They found that although there was a good correlation between the visually estimated and measured total serum bilirubin (TSB) levels (Pearson’s r = 0.752, p < 0.0001), visual assessment was in fact unreliable as a screening tool to detect significant neonatal hyperbilirubinemia. Babies with high TSB levels might be clinically misdiagnosed as low-risk.
Another way to visually assess the severity of jaundice is by observing the cephalocaudal progression, the spread of jaundiced (yellow) skin throughout the body. Kramer first introduced a grading system to quantify neonatal jaundice based on visual assessment of the skin, using grades between 0 and 5 to describe the extent of jaundice progression [3]. This approach assumes that the occurrence of jaundiced skin patches, or dermal icterus, starts from the head and spreads to the hands and feet as jaundice becomes more severe. It relies on the assessor to determine whether a skin region is jaundiced or not (a binary decision) and to provide a maximum extent grade (0–5) to quantify the severity of neonatal jaundice.

Vol. 10, No. 3 | 1 Mar 2019 | BIOMEDICAL OPTICS EXPRESS 1250 | #351912 | https://doi.org/10.1364/BOE.10.001250 | Journal © 2019 | Received 13 Nov 2018; revised 12 Jan 2019; accepted 21 Jan 2019; published 14 Feb 2019

However, when Keren et al. (2009) conducted a study involving 522 term and late preterm babies using this approach, they found that Kramer’s 5-point scale had only a moderate correlation with the measured TSB level, i.e., Spearman’s rho = 0.45 and 0.55 (p = 0.13) for black and non-black babies, respectively [4]. One difficulty with visual assessment of skin color is the presence of melanin, which often obscures the yellowness of the skin. The sclera and the overlying conjunctiva, on the other hand, are free of melanin and have a high affinity for bilirubin. Therefore, they could be better sites for visually assessing the severity of jaundice. Azzuqa and Watchko (2015) carried out a study involving 240 newborn babies whose eyes were examined by two experienced neonatologists to determine whether conjunctival icterus was present (a binary decision) [5].
They found that visible conjunctival icterus, more often than not, indicated a TSB higher than 255 μmol/L, showing that the subjective perception of yellowness in the conjunctiva can identify severely jaundiced newborn babies. Although they also pointed out that “conjunctival icterus” is the more appropriate term, we will adopt the more frequently used term sclera to describe the white part of the eye here, with the understanding that it incorporates both the sclera and its overlying conjunctiva. The aim of this paper is to demonstrate how the yellowness of sclerae in jaundiced newborns can be quantified in color spaces including the native RGB and CIE XYZ, and to discuss the general requirements of successful color metrics for measuring jaundice. We also introduce a new color metric we call “Jaundice Eye Color Index” (JECI), which allows the yellowness of jaundiced sclera to be predicted for a specific TSB level.

2. Methods

2.1 Clinical data collection

Eighty-seven newborn babies were recruited in the outpatient clinic on the Neonatal Unit of UCL Hospital. The postnatal age was between 1 day and 28 days. Their TSB levels were between 17 and 304 µmol/L. A Nikon D3200 digital camera, equipped with a macro lens (60 mm focal length), was used to take digital images of babies’ eyes before they underwent blood tests for TSB as part of their standard care. All images were taken in the same lighting condition and saved in the raw format (NEF, Nikon Electronic Format) to avoid any post-processing. The study was approved by the National Research Ethics Service Committee (London – City Road and Hampstead), and was conducted according to the Declaration of Helsinki. All parents gave informed consent for their babies to enter the study.

2.2 RGB color space

Figure 1(a) depicts a digital image showing 4 balls painted in varying yellowness and 1 white ball. Everyday lighting conditions often give rise to specular reflection and shading, which can affect the quantification of color in a digital image.
To illustrate this, Fig. 1(b) depicts the blue pixel value image of the 5 balls, parts of which were in shadow. Of the three primary colors, blue pixel values provide the highest contrast of yellowness. In the absence of specular reflection and shading, the blue pixel value profile of each of the five balls should have no variation at all. Figure 1(c) shows the same image in blue chromaticity, defined as the blue pixel value divided by the sum of the red, green and blue pixel values. In Fig. 1(d), the profiles of the blue pixel values (solid lines) across balls 1, 3 and 5 indicate a large variation between 0.1 and 1.0. Figure 1(e) displays the same results in terms of the percentage deviation from the median along the profile, which can be as high as 120% for the specular reflection in ball 1, and as low as −75% for the shadow in balls 1, 3 and 5. By comparison, the blue chromaticity profiles (dashed lines) of balls 1, 3 and 5 have much smaller variation, as can be seen in Fig. 1(d) and Fig. 1(e), which also shows that shadowing (<10% deviation from the median) has less influence than specular reflection (>30% deviation from the median) on the blue chromaticity. It is evident that chromaticity has the ability to minimize the effect of varying reflectance caused by specular reflection and shading. However, chromaticity still cannot correct for the spectral influence of the ambient lighting on a digital image. Therefore, the most straightforward way to compare the colors or pixel values of different objects is to do so under the same lighting condition. For analysis purposes, the raw images of babies’ eyes were converted into the standard TIFF format using the open-source program DCRAW (version 9.27 by Dave Coffin, 2016), and Matlab (MathWorks, Inc.) was used to analyze the TIFF images.
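The blue chromaticity defined above is a per-pixel ratio, which is why it is robust to shading: scaling all three channels by a common brightness factor leaves the ratio unchanged. A minimal sketch (in Python rather than the authors' Matlab, with invented pixel values):

```python
def blue_chromaticity(r, g, b):
    """Blue pixel value divided by the sum of the R, G and B pixel values (Section 2.2)."""
    total = r + g + b
    if total == 0:
        raise ValueError("chromaticity is undefined for a pure-black pixel")
    return b / total

# A shaded pixel is (roughly) the same surface seen at lower brightness:
# halving every channel leaves the chromaticity unchanged.
lit    = blue_chromaticity(200, 180, 60)   # illustrative yellowish pixel
shaded = blue_chromaticity(100, 90, 30)    # same pixel at half brightness
```

Specular reflection, which adds rather than scales light, is not cancelled by this ratio, matching the larger residual deviation reported for it above.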
The sclera of each baby was visually identified on the image and the median RGB pixel values were calculated over a region of interest, manually drawn to include the largest continuous area of sclera possible; the only criteria were to avoid obvious specular reflection, blood vessels, and eyelashes. The blue chromaticity was also calculated.

Fig. 1. (a) Original image in RGB. Grayscale images of (b) blue pixel values, and (c) blue chromaticity values. The intensity profiles in (d) show that blue pixel values (solid lines) vary over a much greater range for a given ball than blue chromaticity values (dashed lines). The difference can also be seen in (e), which profiles the percentage deviation from each ball median.

2.3 CIE XYZ color space

Human color perception can be quantified using the CIE XYZ color space, which contains all the colors visible to human eyes [6] and is device-independent, unlike the native RGB space. A white patch was identified from the color checker placed near the baby’s face in each image so that the inverse of the corresponding RGB values could be used as the multipliers for white balance, carried out by the software DCRAW in Matlab. This software also converted the raw image of a baby’s eye into the CIE XYZ space under the D65 illuminant, where the sclera pixels were identified (same procedure as described in section 2.2) and the sclera color was mapped onto the CIE XYZ chromaticity diagram (CIE xy). Each of the 87 data points on the CIE xy diagram shown in Fig. 2 represents the sclera color of a baby. We define a new color metric known as the Jaundice Eye Color Index (JECI):

    JECI = z_D65 − z = z_D65 − 1 + x + y    (1)

where z_D65 = 1 − 0.313 − 0.329 = 0.358 for the D65 illuminant. The D65 white point, marked by a yellow cross in Fig. 2, is located at (0.313, 0.329). JECI can be considered as the orthogonal projection of a data point onto the diagonal axis y = x + m (blue diagonal line in Fig. 2).
For the D65 illuminant, m = y_D65 − x_D65 = 0.329 − 0.313 = 0.016. Along each of the dashed diagonal lines (y = offset − x), which are all perpendicular to the line y = x + 0.016, the values of JECI are the same. JECI is essentially z chromaticity with the constraint of a zero value at the D65 white point. It is sensitive to an object’s yellowness as the z color matching function overlaps with the blue region of the visible spectrum (where bilirubin absorbs strongly) [6]. The 87 data points in Fig. 2 are color-coded, with darker colors corresponding to higher TSB values.

Fig. 2. Sclera colors for 87 newborns plotted on the CIE XYZ chromaticity diagram. The blue line is the axis for the Jaundice Eye Color Index. The TSB of each newborn is color-coded.

Fig. 3. Mean blue chromaticity values versus total serum bilirubin in newborn sclera (n = 87).

3. Results

3.1 RGB color space

Multiple images and regions of interest (e.g. sclera in both eyes) were considered when available. Figure 3 depicts the means and standard deviations of the blue chromaticity for the 87 babies and the corresponding TSB. The correlation coefficient (r) between the mean blue chromaticity and TSB is −0.73 (p<0.01). The correlation is negative because a higher TSB would mean more blue light being absorbed and therefore a lower blue chromaticity. For comparison, the r between the mean blue values and TSB is −0.47 (p<0.01) (results not shown), substantially lower than that for the mean blue chromaticity.

3.2 CIE XYZ color space

Figure 4 shows the scatter plot between TSB and JECI, with r = 0.73 (p<0.01). A linear regression has also been performed, shown as the straight line in Fig. 4.
The regressed line allows us to map TSB to JECI and therefore provides a “typical” scleral yellowness for a particular TSB in the neonatal population (under the D65 illuminant), as shown at the bottom of Fig. 4. In general, the higher the JECI, the more yellow the sclera. It is noted that JECI can also be negative, corresponding to a blueish color, which is often found in newborns with thin sclera. (A zero JECI corresponds to white under the D65 illuminant.)

Fig. 4. The Jaundice Eye Color Index (JECI) versus TSB. The horizontal color bar displays the yellowness of sclera corresponding to a specific TSB under the D65 illuminant.

4. Discussion and conclusion

In our previous publication, we used a multiple linear regression approach to find the relationship between the RGB values of the sclera and the TSB of the newborn babies [7]. The predicted TSB value resulting from the regression is a weighted sum of the RGB values, their squared values, and their cross-products. Due to the large number of parameters involved (9 in total), there is a risk of over-fitting the data, resulting in a technique that may only be accurate for one particular population. The regression approach is based on training data and therefore can be influenced by the TSB itself. In this paper, we have taken a different approach. Instead of trying to find a model that can predict TSB directly, we investigate color metrics that can quantify the yellowness of the sclera. We have considered two training-free color metrics: blue chromaticity in the native RGB space and the newly defined JECI in the CIE XYZ space. These metrics can be compared between different populations (e.g. term/pre-term neonates) even if the relationship between yellowness and TSB is not the same. In our data set, both blue chromaticity and JECI show strong correlation with TSB (−0.73 and 0.73 respectively; p<0.01 in both).
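Equation (1) makes JECI a one-line computation once a color has been expressed in CIE xy chromaticity. A minimal sketch (Python rather than the authors' Matlab; the chromaticity inputs below are invented for illustration):

```python
X_D65, Y_D65 = 0.313, 0.329        # CIE xy of the D65 white point
Z_D65 = 1.0 - X_D65 - Y_D65        # z chromaticity of D65 (= 0.358)

def jeci(x, y):
    """Jaundice Eye Color Index, Eq. (1): JECI = z_D65 - z = z_D65 - 1 + x + y."""
    z = 1.0 - x - y                # z chromaticity of the measured color
    return Z_D65 - z

white   = jeci(X_D65, Y_D65)       # 0 at the D65 white point
yellow  = jeci(0.40, 0.40)         # low z (little blue) -> positive JECI
blueish = jeci(0.30, 0.31)         # high z (more blue) -> negative JECI
```

The sign convention matches the text: yellower sclerae (less blue, lower z) give higher JECI, and a blueish sclera gives a small negative value.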
While the native RGB space is defined relative to a specific camera, the advantage of JECI is that the CIE XYZ space is a reference color space containing all possible colors and is device-independent. A previous study showed that conjunctival icterus was mainly found in jaundiced babies with TSB > 255 μmol/L, as determined by experienced clinicians [5]. Instead of a binary decision, our work has provided evidence that conjunctival icterus is a condition best characterized by a continuous grading scale, since TSB is a quantity that can vary continuously. To this end, we have proposed JECI as the grading scale to quantify the degree of conjunctival icterus (or the yellowness of the sclera). For instance, JECI = 0 corresponds to white sclera and JECI = 0.1 to sclera with a high degree of yellowness. Interestingly, JECI can also be negative, e.g., JECI = −0.02 for the blueish sclera often found in newborns. One smartphone-based technique has used the scleral color to identify jaundiced adult patients with pancreatic cancer [8]. After considering 105 color features with different color spaces, color channels, pixel selection methods and RGB ratios, the study found that the ratio of green to blue channels in the RGB color space is one color metric that best correlates with the data. The green-to-blue ratio is similar to both blue chromaticity and JECI investigated in this paper in terms of their sensitivity to yellowness and normalization against the brightness of the scene. Our work provides further evidence that a normalized blue color metric correlates well with TSB, and a color-science-based justification for this. One useful application of JECI is the regression of JECI against TSB to predict the “typical” yellowness of the sclera in a jaundiced newborn for a given TSB (Fig. 4). JECI can also be used to quantify the scleral yellowness in other jaundiced patient groups, such as adult liver and pancreatic patients.
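The "typical yellowness" mapping is an ordinary least-squares line fit of JECI against TSB. A sketch of that fit using the textbook normal-equation form; the (TSB, JECI) pairs below are invented for illustration and are not the study's data or regression coefficients:

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit of y = a + b*x; returns (intercept a, slope b)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    b = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
    a = mean_y - b * mean_x
    return a, b

# Invented (TSB in umol/L, JECI) pairs for illustration only:
tsb_values  = [20, 100, 180, 260, 300]
jeci_values = [0.000, 0.020, 0.038, 0.061, 0.070]
a, b = fit_line(tsb_values, jeci_values)
typical_jeci = a + b * 250   # "typical" scleral JECI at TSB = 250 umol/L
```

Evaluating the fitted line at a given TSB is what produces the color bar of "typical" scleral yellowness shown in Fig. 4.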
By applying the JECI metric to data sets from various jaundiced populations, researchers can help healthcare professionals visualize how scleral yellowness relates to TSB and, if necessary, have this tailored for certain populations. To quantify the yellowness of the sclera successfully, the color metric should fulfil the following requirements: (i) sensitive to yellowness (produces good contrast between yellows), (ii) normalized to reduce the effect of brightness in the scene, (iii) based on a device-independent color space, (iv) able to capture the full gamut of scleral yellowness, and (v) insensitive to ambient lighting. We are currently developing a diagnostic smartphone app based on JECI. In a related effort, BiliCam has already been developed to diagnose neonatal jaundice based on skin color [9]. Development of diagnostic camera apps is an exciting research area with many promising new use cases being actively investigated, a trend that will only grow in the foreseeable future.

Funding

UCL Grand Challenge Small Grants Scheme (Global Health), the EPSRC-funded UCL Centre for Doctoral Training in Medical Imaging (EP/L016478/1) and the Department of Health’s NIHR-funded Biomedical Research Centre at UCL Hospitals.

Acknowledgements

We thank all the subjects and their parents for participating in this study, and students, nurses and Miranda Nixon for invaluable help with the coordination of the study and data analysis.

Disclosures

The authors declare that there are no conflicts of interest related to this article.

References

1. V. K. Bhutani, G. R. Gourley, S. Adler, B. Kreamer, C. Dalin, and L. H. Johnson, “Noninvasive measurement of total serum bilirubin in a multiracial predischarge newborn population to assess the risk of severe hyperbilirubinemia,” Pediatrics 106(2), E17 (2000).
2. A. Riskin, A. Tamir, A. Kugelman, M. Hemo, and D.
Bader, “Is visual assessment of jaundice reliable as a screening tool to detect significant neonatal hyperbilirubinemia?” The Journal of Pediatrics 152, 782–787 (2008).
3. L. I. Kramer, “Advancement of dermal icterus in the jaundiced newborn,” Am. J. Dis. Child. 118(3), 454–458 (1969).
4. R. Keren, K. Tremont, X. Luan, and A. Cnaan, “Visual assessment of jaundice in term and late preterm infants,” Arch. Dis. Child. Fetal Neonatal Ed. 94(5), F317–F322 (2009).
5. A. Azzuqa and J. F. Watchko, “Bilirubin Concentrations in Jaundiced Neonates with Conjunctival Icterus,” J. Pediatr. 167(4), 840–844 (2015).
6. T. Smith and J. Guild, “The C.I.E. colorimetric standards and their use,” Trans. Opt. Soc. 33(3), 73–134 (1931).
7. T. S. Leung, K. Kapur, A. Guilliam, J. Okell, B. Lim, L. W. MacDonald, and J. Meek, “Screening neonatal jaundice based on the sclera color of the eye using digital photography,” Biomed. Opt. Express 6(11), 4529–4538 (2015).
8. A. Mariakakis, M. A. Banks, L. Phillipi, L. Yu, J. Taylor, and S. N. Patel, “BiliScreen: Smartphone-Based Scleral Jaundice Monitoring for Liver and Pancreatic Disorders,” Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. 1, 1–26 (2017).
9. L. de Greef, M. Goel, M. J. Seo, E. C. Larson, J. W. Stout, J. A. Taylor, and S. N. Patel, “BiliCam: Using Mobile Phones to Monitor Newborn Jaundice,” in Proceedings of the 2014 ACM International Joint Conference on Pervasive and Ubiquitous Computing (2014), pp. 331–342.

Diagnostic Accuracy of Patients in Performing Skin Self-examination and the Impact of Photography

Susan A. Oliveria, ScD; Dorothy Chau, MD; Paul J. Christos, MPH; Carlos A. Charles, MD; Alvin I. Mushlin, MD; Allan C.
Halpern, MD

Objective: To determine the sensitivity and specificity of skin self-examination (SSE) to detect new and changing moles with and without the aid of baseline digital photographs in patients with dysplastic nevi.

Design and Intervention: Patients had baseline digital photography and mole counts of their back, chest, and abdomen and were instructed to perform a baseline SSE. Print copies of the images were provided to the patient. Following the baseline examination, the appearance of existing moles was altered and new moles were created using cosmetic eyeliner. The number of moles altered and/or created totaled approximately 10% of each patient's absolute mole count.

Setting and Patients: Fifty patients with 5 or more dysplastic nevi from the outpatient clinic at Memorial Sloan-Kettering Cancer Center, New York, NY.

Main Outcome Measure: Skin self-examinations with and without access to the baseline photographs to identify the number of new and altered moles.

Results: The sensitivity and specificity of SSE for detection of both altered and new moles without photography were 60.2% and 96.2%, respectively. Skin self-examination with photography yielded a sensitivity and specificity of 72.4% and 98.4%, respectively. The findings were similar when stratified by site (back vs chest or abdomen). The sensitivity and specificity for new moles were higher compared with altered moles.

Conclusions: Access to baseline photography improved the diagnostic accuracy of SSE on the back and chest or abdomen and improved detection of changing and new moles. Our results suggest that baseline digital photography in tandem with SSE may be effective in improving the diagnostic accuracy of patients performing SSE.

Arch Dermatol.
2004;140:57-62

Lesion thickness (Breslow depth) has been identified as the most important prognostic factor for primary cutaneous melanoma, with survival inversely related to lesion thickness.1-4 There is a direct relationship between survival of patients with melanoma and early detection. The 5-year survival rate for patients with melanoma smaller than 1 mm thick is 94%, compared with 50% for melanomas larger than 3 mm thick.5 This finding suggests that the identification and excision of thin lesions may be important in reducing mortality from melanoma.
The American Academy of Dermatology has recommended that individuals practice skin self-examination (SSE) to detect new and/or changing lesions.6 Self-screening is important because self-detection by patients, spouses, and families is the most common way skin cancer is currently detected, even though SSE may not be performed routinely or thoroughly.7-9 Results suggest that SSE is associated with a reduced risk of melanoma, and it is a moderately effective tool for detecting changes in mole size.10-12
During the past decades, atypical nevi (dysplastic nevi) have been identified as the strongest indicators of melanoma risk.13-18 The presence of large numbers of clinically atypical nevi hinders self-examination and professional evaluation.
Because the wholesale excision of these lesions is impractical, the present standard of care for individuals with dysplastic nevi is close observation and excision of changing lesions.19-21 In individuals with large numbers of moles and/or dysplastic nevi, attempts to recognize new or changing lesions are aided by comparison of the clinical examination to pictures of the individual's skin at an earlier point in time.19,22-24 Providing patients with photographs offers a baseline measure and may encourage the patient to carefully watch lesions.25 It has been suggested that patients may be able to better detect changes in their lesions if they have an opportunity to repeatedly view the original lesion with photographs.26 In addition, through the application of computerized image analysis, digital imaging may offer an opportunity to identify new lesions or changes in lesions earlier and more accurately than standard photographically assisted follow-up.

From the Dermatology Service, Memorial Sloan-Kettering Cancer Center (Drs Oliveria, Chau, Charles, and Halpern and Mr Christos), and Department of Public Health, Weill Medical College of Cornell University (Dr Mushlin and Mr Christos), New York, NY; and Department of Dermatology and Cutaneous Surgery, University of Miami, Miami, Fla (Dr Charles). The authors have no relevant financial interest in this article.
(REPRINTED) ARCH DERMATOL / VOL 140, JAN 2004 WWW.ARCHDERMATOL.COM ©2004 American Medical Association. All rights reserved.

The purpose of this study was to determine the sensitivity and specificity of SSE to detect new and changing moles in patients with dysplastic nevi. New and changing moles were artificially created with the use of cosmetic makeup.
We also assessed the impact of making personal baseline digital photographs available to these patients at the time of SSE on diagnostic accuracy (ie, sensitivity and specificity).

METHODS

STUDY PARTICIPANTS
The study was conducted in the outpatient setting of the Pigmented Lesion Clinic of the Dermatology Service at Memorial Sloan-Kettering Cancer Center (New York, NY). Fifty patients 18 years or older with 5 or more clinical dysplastic nevi who were willing to have digital whole-body photography were recruited, and informed consent was obtained. Patients who were visually or physically impaired were not eligible for the study.

CONDUCT OF THE STUDY AND DATA COLLECTION

Baseline Examination
Information was obtained from each participant at baseline using an in-person interview conducted and recorded by a research fellow (D.C.). Specifically, information was collected on age, sex, race/ethnicity, hair color at the age of 18 years, eye color, skin tone, tendency to burn, ability to tan, self-reported mole count, personal and family history of skin cancer, and SSE practices. As part of the baseline data collection, patients were asked to perform SSE with the aid of a full-length (35 × 127 cm) and a hand-held (16 × 19 cm) mirror. Patients who wore corrective glasses were instructed to wear them while performing their SSE. Digital photography and mole counts of the chest, abdomen, and back were performed on each patient by a research fellow (Figure 1).22 Print copies of the images were provided to the patient.

Procedures to Create and Alter Moles
This was an intervention study design whereby patients received the intervention (alteration or creation of moles) and served as their own comparison group. Following the baseline examination, the appearance of existing moles was altered and new moles were created using cosmetic eyeliner that was water soluble and nontoxic.
Four different color shades of eyeliner were available, and the shade closest in color to the patient's typical nevi was used to minimize any color discrimination by the patient. Each patient had approximately 10% of their moles altered and/or created on their back and chest or abdomen. To alter the size and shape of moles, a template was used to convert existing 5-mm moles to slightly more irregularly shaped 7-mm moles. To assess the ability of patients to identify focal changes in the color of moles, a 2-mm, dark brown mark was made in the confines of existing 5-mm moles. A template was used to create new 4-mm moles (Figures 2, 3, 4, and 5). Blindfolding of patients and sham drawing on multiple sites were used to ensure that the patients were unaware of the location of cosmetically altered moles and to preclude tactile recall of the sites of the altered and created moles.

Figure 1. Body areas for digital photography and mole counts.
Figure 2. Unaltered mole.

Patient Assessment of New and Altered Moles
Patients had the blindfolds removed and were then asked to perform SSE (with full-length and hand-held mirrors), first without the aid of baseline digital photographs and subsequently with access to their personal photographs. The research fellow recorded the number and types (new vs altered mole) of changes correctly and incorrectly identified by the patient.

Statistical Analysis
Descriptive statistics were used to characterize the study population. Sensitivity, specificity, the κ statistic, 95% confidence intervals, and P values derived from the κ statistic are presented.
Using the mole as the unit of analysis, the sensitivity and specificity of SSE to detect new and changing moles were calculated for SSE with and without the aid of baseline digital photographs. The reference standard was the lesion count and recorded number of moles changed and/or created.
The κ statistic was used to evaluate the diagnostic accuracy (sensitivity and specificity) of each SSE modality (eg, with and without baseline photography). In this context, κ is a weighted statistic that expresses the desirable properties of the test (eg, low probability of false results). The comparison of the κ statistic between each SSE modality and the resultant P value are based on an approach for the comparison of 2 diagnostic tests, each evaluated against the same gold standard (eg, actual number of moles physically altered and/or created) in the same study sample.27 The positive predictive values (PPVs) and negative predictive values (NPVs) of SSE, with and without the aid of baseline digital photography, were also calculated using the Bayes theorem.28
The diagnostic accuracy of SSE can be affected by numerous factors. Stratified analyses were performed by age, sex, history of melanoma, melanoma risk factors, and SSE practices. We explored the potential effects on SSE accuracy of mole location (back vs chest or abdomen), type of mole detected (newly created moles vs altered existing moles), and total number of moles altered and/or created on the patient (dichotomized at the median; ≤5 vs >5 moles altered or created) by conducting stratified analyses.

RESULTS

The participation rate for this study was 93% (50/54). Characteristics of the 50 patients who were recruited and completed the study are presented in Table 1. There was a total of 3167 moles (median, 50; mean, 63) that contributed to the analyses; 108 moles were altered and 211 new moles were created.
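For concreteness, the diagnostic-accuracy measures described in the Methods (sensitivity, specificity, the κ statistic, and Bayes-theorem predictive values) can be sketched in a few lines of Python. This is an illustrative sketch, not the study's analysis code: the `cohen_kappa` helper shows the standard unweighted κ, whereas the paper uses a weighted variant after Bloch,27 and the example counts are hypothetical; only the sensitivity, specificity, and 10% prevalence passed to `ppv_npv` come from the reported results.

```python
# Hedged sketch of the diagnostic-accuracy measures used in the study.
# tp/fp/fn/tn refer to a 2x2 table of SSE findings vs. the reference
# standard (the recorded number of moles actually altered/created).

def sensitivity(tp, fn):
    """Proportion of truly altered/new moles the patient identified."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """Proportion of unchanged moles the patient correctly passed over."""
    return tn / (tn + fp)

def cohen_kappa(tp, fp, fn, tn):
    """Unweighted kappa: chance-corrected agreement between SSE and the
    reference standard (the paper applies a weighted variant)."""
    n = tp + fp + fn + tn
    p_obs = (tp + tn) / n
    p_exp = ((tp + fn) * (tp + fp) + (fp + tn) * (fn + tn)) / n ** 2
    return (p_obs - p_exp) / (1 - p_exp)

def ppv_npv(sens, spec, prevalence):
    """Positive and negative predictive values via the Bayes theorem."""
    p_positive = sens * prevalence + (1 - spec) * (1 - prevalence)
    ppv = sens * prevalence / p_positive
    npv = spec * (1 - prevalence) / (1 - p_positive)
    return ppv, npv

# Reported accuracy of SSE without photography: sensitivity 60.2%,
# specificity 96.2%, with the prevalence of altered/created moles
# fixed by design at 10%.
ppv, npv = ppv_npv(0.602, 0.962, 0.10)
print(f"PPV = {ppv:.1%}, NPV = {npv:.1%}")
```

As the text cautions, the predictive values move with the assumed prevalence: rerunning `ppv_npv` with a lower prevalence of new or changing moles drives the PPV down even when sensitivity and specificity are unchanged.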
The number of altered or created moles per patient ranged from 2 to 27, based on the criterion of altering 10% of each patient's moles. Fifty-two percent of the patients had 5 or fewer moles altered or created, and 48% had more than 5 moles altered or created. The sensitivity and specificity of SSE for detection of both new and altered moles without photography were 60.2% and 96.2%, respectively, whereas SSE with photography yielded a sensitivity and specificity of 72.4% and 98.4%, respectively (Table 2).
Sex differences were apparent, with men performing better than women without the aid of photographs (Table 3). However, women had a higher sensitivity and specificity of SSE with the use of photographs compared with men. Results showed that patients with more than 5 moles altered or created had significant improvements in diagnostic accuracy with the aid of baseline photographs, although patients with 5 or fewer moles altered or created still gained some benefit from access to photography (Table 4). The stratified analyses suggested that patients with fairer complexions (eg, light skin, eye, and hair color, tendency to burn, and ability to tan) had higher sensitivities both with (76.6%, 82.8%, 79.8%, 76.0%, and 78.7%, respectively) and without (59.9%, 70.5%, 67.5%, 60.5%, and 61.9%, respectively) the aid of photographs compared with patients who did not have these risk factors (with photography: 62.9%, 66.0%, 68.3%, 54.7%, and 66.5%, respectively; without photography: 60.8%, 53.8%, 56.1%, 58.4%, and 58.5%, respectively).

Figure 3. New 4-mm mole.
Figure 4. A 2-mm, dark brown mark within an existing 5-mm mole.
Figure 5. A 5-mm mole changed to an irregularly shaped, 7-mm mole.
There were no similar trends in the analyses stratified by family history of skin cancer or SSE practices. However, patients with a personal history of melanoma had higher sensitivities both with (80.0%) and without (65.8%) the aid of photography compared with patients with no such personal history (with photography: 67.8%; without photography: 56.8%).
We calculated the PPV and NPV for SSE with and without the aid of baseline photography. The stratified estimates of PPV and NPV for SSE with photography ranged from 70% to 90% and from 97% to 99%, respectively. For SSE without the aid of baseline photography, the stratified estimates of PPV and NPV ranged from 54% to 67% and from 96% to 98%, respectively. However, these estimates are highly dependent on the prior selected prevalence of altered or created moles and should therefore be interpreted with caution. The PPV and NPV results could differ in a population with a different prevalence of new and changing moles; in our study, we fixed the prevalence at 10%, since we created the new and changing moles.

COMMENT

This pilot study determined the sensitivity and specificity of SSE to detect new and changing moles in patients with dysplastic nevi and also assessed the impact of making personal baseline digital photographs available to these patients at the time of initial self-examination. The sensitivity of SSE to detect both new and altered moles was 60.2% without photography and increased substantially to 72.4% with the aid of digital photographs. The results suggest that patients had high specificity (few false-positive results) both with and without access to photographs. The specificity was 96.2% without photographs and increased slightly to 98.4% with the aid of photographs.
Berwick et al10 reported that SSE is associated with a reduced risk of melanoma, with the potential for a 63% reduction in mortality.
A prospective study of the effectiveness of SSE is needed; however, the logistic constraints in conducting such a study are obvious. In a preliminary study by Muhn et al,11 the efficacy of SSE to detect changes in mole size was investigated in high-risk patients. The specificity of SSE was 62%, and the sensitivities for detecting 2-mm and 4-mm changes were 58% and 75%, respectively. In a recent study by Dawid et al,12 the ability of patients to identify real changes in melanocytic nevi was evaluated in 251 patients with 1431 melanocytic nevi. The sensitivity to detect enlarging nevi was low (10.9%), whereas the specificity was 99.2%. In a study by Edmondson et al,25 the effect of instant photography as a method for screening melanoma during routine health examinations was assessed. Copies of the prints were given to patients for observation of any changes in the lesions of interest. Although the objective was not to study the effect of providing photographs to patients, the results showed that the possession of a photograph by the patient led to a diagnosis of melanoma in 2 instances.
Although the ultimate end point of interest in a screening study of this type would be melanoma, we defined new lesions and lesion change as intermediate outcomes because we believe these are the most important and relevant for self-directed prescreening and early detection. Limitations of the study are that SSE was not formally taught to the patients and the study was not restricted to patients who could adequately perform SSE. Patients who are taught SSE may receive more benefit. We only assessed the diagnostic accuracy of SSE performed on the back and chest or abdomen, and the results may not be generalizable to different sites on the body.
Also, there can be other subtle changes related to margins, thickness, and texture that are not captured with this simplistic approach of creating new moles and altering existing moles using cosmetic eyeliner.
The design of the intervention for this study differs from the intervention design that would be observed in routine follow-up examination because the study population was a select, highly motivated group composed of patients at high risk for melanoma based on the presence of 5 or more dysplastic nevi. These patients expected an artificial alteration in a lesion, and the alteration and SSE to detect change were performed at the same appointment.

Table 1. Patient Characteristics

Characteristic                                              Patients, No. (%) (N = 50)*
Sex
  Male                                                      20 (40)
  Female                                                    30 (60)
Age, y
  ≤30                                                       13 (26)
  31-40                                                     18 (36)
  41-50                                                     12 (24)
  ≥51                                                       3 (6)
White race                                                  50 (100)
Tendency to burn
  Easily or some                                            43 (86)
  Rarely or never                                           7 (14)
Tendency to tan
  Deep or moderate                                          25 (50)
  Mild or none                                              25 (50)
Eye color
  Blue or green                                             20 (40)
  Hazel or brown                                            30 (60)
Hair color at age 18 y
  Blond or red                                              20 (40)
  Brunette or black                                         30 (60)
Skin tone
  Very fair or fair                                         35 (70)
  Medium or dark                                            15 (30)
Personal history of melanoma
  Yes                                                       22 (44)
  No                                                        28 (56)
Family history of melanoma
  Yes                                                       14 (28)
  No                                                        25 (50)
Family history of basal cell carcinoma
  Yes                                                       18 (36)
  No                                                        21 (42)
Family history of squamous cell carcinoma
  Yes                                                       4 (8)
  No                                                        29 (58)
No. of skin self-examinations in last 4 mo
  0                                                         17 (34)
  1-4                                                       33 (66)
No. of moles altered and/or created for study protocol
  ≤5 (range, 2-5; mean, 4; median, 4)                       26 (52)
  >5 (range, 6-27; mean, 9; median, 7)                      24 (48)

*Some percentages do not total 100% because of missing responses.
Because the assessment of diagnostic accuracy of SSE occurred shortly after the baseline SSE, patients were relying on immediate rather than long-term

Table 2. Sensitivity and Specificity of Skin Self-examination for Identification of Altered (A) and New (N) Moles

                              Without Photography, % (95% CI)          With Photography, % (95% CI)
Mole Category*                Sensitivity        Specificity           Sensitivity        Specificity         P Value†
All A/N moles                 60.2 (54.6-65.6)   96.2 (95.4-96.8)      72.4 (67.1-77.2)   98.4 (97.9-98.8)    <.001
Back A/N moles                57.5 (50.3-64.4)   97.0 (96.1-97.7)      68.5 (61.5-74.8)   98.3 (97.6-98.8)    <.001
Chest or abdomen A/N moles    64.7 (55.4-73.1)   94.8 (93.3-96.0)      79.0 (70.4-85.7)   98.5 (97.6-99.1)    <.001
All new moles                 63.0 (56.1-69.5)   97.8 (97.3-98.3)      75.8 (69.4-81.3)   99.4 (99.1-99.6)    <.001
All altered moles             54.6 (44.8-64.1)   98.3 (97.8-98.7)      65.7 (55.9-74.4)   99.0 (98.6-99.3)    <.001
Moles altered in size‡        56.0 (41.4-69.7)   . . .                 74.0 (59.4-84.9)   . . .               .01
Moles altered in color‡       53.4 (40.0-66.5)   . . .                 58.6 (45.0-71.1)   . . .               .58

Abbreviation: CI, confidence interval.
*N = 50 patients. All A/N moles, n = 319; back A/N moles, n = 200; chest or abdomen A/N moles, n = 119; new moles, n = 211; all altered moles, n = 108; moles altered in size, n = 50; moles altered in color, n = 58; all unaltered moles, n = 3059; back unaltered moles, n = 1922; and chest or abdomen unaltered moles, n = 1137.
†P value for paired comparison of skin self-examination diagnostic accuracy with and without the aid of baseline photography.
‡Patients were not asked if an identified altered mole was specifically altered with respect to size or color. As a result, the specificity of moles altered in size or color could not be calculated.

Table 3.
Sensitivity of Skin Self-examination for Identification of Altered (A) and New (N) Moles According to Sex

                              Men (n = 20), Sensitivity, % (95% CI)                    Women (n = 30), Sensitivity, % (95% CI)
Mole Category*                Without Photography  With Photography  P Value†          Without Photography  With Photography  P Value†
All A/N moles                 63.8 (55.4-71.4)     67.8 (59.6-75.1)  .01               57.1 (49.3-64.5)     76.5 (69.2-82.5)  <.001
Back A/N moles                61.3 (50.6-71.1)     65.6 (54.9-74.9)  .32               54.2 (44.3-63.8)     71.0 (61.3-79.2)  <.001
Chest or abdomen A/N moles    67.9 (53.9-79.4)     71.4 (57.6-82.3)  .001              61.9 (48.8-73.6)     85.7 (74.1-92.9)  <.001
All new moles                 67.8 (57.0-77.0)     68.9 (58.1-78.0)  .09               59.5 (50.2-68.2)     81.0 (72.6-87.3)  <.001
All altered moles             57.6 (44.1-70.2)     66.1 (52.5-77.6)  .04               51.0 (36.5-65.4)     65.3 (50.3-77.9)  .002
Moles altered in size         64.3 (44.1-80.7)     82.1 (62.4-93.2)  .13               45.5 (25.1-67.3)     63.6 (40.8-82.0)  .13
Moles altered in color        51.6 (33.4-69.4)     51.6 (33.4-69.4)  >.99              55.6 (35.6-74.0)     66.7 (46.0-82.8)  .38

Abbreviation: CI, confidence interval.
*Men: all A/N moles, n = 149; back A/N moles, n = 93; chest or abdomen A/N moles, n = 56; all new moles, n = 90; all altered moles, n = 59; moles altered in size, n = 28; and moles altered in color, n = 31. Women: all A/N moles, n = 170; back A/N moles, n = 107; chest or abdomen A/N moles, n = 63; all new moles, n = 121; all altered moles, n = 49; moles altered in size, n = 22; and moles altered in color, n = 27.
†P value for paired comparison of skin self-examination diagnostic accuracy with and without the aid of baseline photography.

Table 4.
Sensitivity of Skin Self-examination for Identification of Altered (A) and New (N) Moles According to Number of Moles

                              ≤5 A/N Moles (n = 26), Sensitivity, % (95% CI)           >5 A/N Moles (n = 24), Sensitivity, % (95% CI)
Mole Category*                Without Photography  With Photography  P Value†          Without Photography  With Photography  P Value†
All A/N moles                 66.0 (56.0-74.9)     70.9 (61.0-79.2)  <.001             57.4 (50.5-64.0)     73.1 (66.6-78.8)  <.001
Back A/N moles                61.7 (48.2-73.7)     65.0 (51.5-76.6)  .05               55.7 (47.1-64.0)     70.0 (61.6-77.3)  <.001
Chest or abdomen A/N moles    72.1 (56.1-84.2)     79.1 (63.5-89.4)  <.001             60.5 (48.6-71.4)     78.9 (67.8-87.1)  <.001
All new moles                 60.0 (48.0-70.9)     70.7 (58.9-80.3)  <.001             64.7 (56.0-72.6)     78.7 (70.7-85.0)  <.001
All altered moles             82.1 (62.4-93.2)     71.4 (51.1-86.1)  .58               45.0 (34.0-56.5)     63.8 (52.2-74.0)  <.001
Moles altered in size         80.0 (51.4-94.7)     80.0 (51.4-94.7)  >.99              45.7 (29.2-63.1)     71.4 (53.5-84.8)  .01
Moles altered in color        84.6 (53.7-97.3)     61.5 (32.3-84.9)  .25               44.4 (30.0-59.9)     57.8 (42.2-72.0)  .11

Abbreviation: CI, confidence interval.
*Five or fewer A/N moles: all A/N moles, n = 103; back A/N moles, n = 60; chest or abdomen A/N moles, n = 43; all new moles, n = 75; all altered moles, n = 28; moles altered in size, n = 15; and moles altered in color, n = 13. More than 5 A/N moles: all A/N moles, n = 216; back A/N moles, n = 140; chest or abdomen A/N moles, n = 76; all new moles, n = 136; all altered moles, n = 80; moles altered in size, n = 35; and moles altered in color, n = 45.
†P value for paired comparison of skin self-examination diagnostic accuracy with and without the aid of baseline photography.
The research fellow who interviewed the patients and altered the patients’ moles was also re- sponsible for recording the patients’ ascertainment of al- tered moles. Hence, there may be the potential for bias related to accuracy ascertainment. However, the re- search fellow was instructed to simply record the pa- tients’ responses with respect to ascertainment. Due to logistical constraints, it was not feasible to use a sepa- rate research fellow for accuracy ascertainment. This study was conducted in an experimental, highly controlled situation and may not represent what would occur in a population-based setting. Sham- altered nevi may not represent what would occur in a real-world setting; however, this was designed as a pilot study. A prospective study is being planned, and the results will allow us to draw firm conclusions about the impact of digital photography on accuracy of SSE in high-risk patients. The patient population was also highly motivated, with a high prevalence of dysplastic nevi (100%), his- tory of malignant melanoma (65.8%), and previous per- formance of SSE (66.0%). These preliminary results likely represent the best case scenario, since our study popu- lation was highly motivated and had the advantage of an immediately antecedent baseline SSE. The availability of baseline digital photographs im- proved the sensitivity and specificity to detect new and altered moles. Our results suggest that baseline digital photography as an adjunct to SSE improves the diagnos- tic accuracy of patients performing SSE. Providing pa- tients with photographs may encourage patients to more carefully monitor their lesions and may enable patients to better detect suspicious changes in their lesions. Accepted for publication May 14, 2003. This study was presented in abstract form at the 62nd Annual Meeting of the Society of Investigative Dermatol- ogy, May 9-11, 2001, Chicago, Ill. 
We are grateful to the patients at Memorial Sloan-Kettering Cancer Center for their participation in this study. We thank Alexis Liebel, MD, for her helpful comments and suggestions during the revision of the manuscript.
Corresponding author: Susan A. Oliveria, ScD, Dermatology Service, Memorial Sloan-Kettering Cancer Center, 1275 York Ave, New York, NY 10021 (e-mail: oliveri1@mskcc.org).

REFERENCES
1. Breslow A. Thickness, cross-sectional areas and depth of invasion in the prognosis of cutaneous melanoma. Ann Surg. 1970;172:902-908.
2. Margolis DJ, Halpern AC, Rebbeck T, et al. Validation of a melanoma prognostic model. Arch Dermatol. 1998;134:1597-1601.
3. Sahin S, Rao B, Kopf AW, et al. Predicting ten-year survival of patients with primary cutaneous melanoma: corroboration of a prognostic model. Cancer. 1997;80:1426-1431.
4. Breslow A. Tumor thickness, level of invasion and node dissection in stage I cutaneous melanoma. Ann Surg. 1975;182:572-575.
5. NIH Consensus conference: diagnosis and treatment of early melanoma. JAMA. 1992;268:1314-1319.
6. Koh HK, Geller AC, Miller DR, Lew RA. The early detection of and screening for melanoma: international status. Cancer. 1995;75:674-683.
7. Weinstock MA. Early detection of melanoma. JAMA. 2000;284:886-889.
8. Koh HK, Miller DR, Geller AC, Clapp RW, Mercer MB, Lew RA. Who discovers melanoma? patterns from a population-based survey. J Am Acad Dermatol. 1992;26:914-919.
9. Miller DR, Geller AC, Wyatt SW, et al. Melanoma awareness and self-examination practices: results of a United States survey. J Am Acad Dermatol. 1996;34:962-970.
10. Berwick M, Begg CB, Fine J, Roush GC, Barnhill RL. Screening for cutaneous melanoma by skin self-examination. J Natl Cancer Inst. 1996;88:17-23.
11. Muhn CY, From L, Glied M. Detection of artificial changes in mole size by skin self-examination. J Am Acad Dermatol. 2000;42:754-759.
12. Dawid M, Pehamberger H, Wolff K, Binder M, Kittler H.
Evaluation of the ability of patients to identify enlarging melanocytic nevi. Arch Dermatol. 2002;138:984-985.
13. Elder DE, Goldman LI, Goldman SC, Greene MH, Clark WHJ. Dysplastic nevus syndrome: a phenotypic association of sporadic cutaneous melanoma. Cancer. 1980;46:1787-1794.
14. Holly EA, Kelly JW, Shpall SN, Chiu SH. Number of melanocytic nevi as a major risk factor for malignant melanoma. J Am Acad Dermatol. 1987;17:459-468.
15. Nordlund JJ, Kirkwood J, Forget BM, et al. Demographic study of clinically atypical (dysplastic) nevi in patients with melanoma and comparison subjects. Cancer Res. 1985;45:1855-1861.
16. Swerdlow AJ, English J, MacKie RM, et al. Benign melanocytic naevi as a risk factor for malignant melanoma. BMJ. 1986;292:1555-1559.
17. Tucker MA, Halpern A, Holly EA, et al. Clinically recognized dysplastic nevi: a central risk factor for cutaneous melanoma. JAMA. 1997;277:1439-1444.
18. Rhodes AR, Harrist TJ, Day CL, Mihm MCJ, Fitzpatrick TB, Sober AJ. Dysplastic melanocytic nevi in histologic association with 234 primary cutaneous melanomas. J Am Acad Dermatol. 1983;9:563-574.
19. Shriner DL, Wagner RFJ, Glowczwski JR. Photography for the early diagnosis of malignant melanoma in patients with atypical moles. Cutis. 1992;50:358-362.
20. Slue WEJ. Total body photography for melanoma surveillance. N Y State J Med. 1992;92:494-495.
21. Tiersten AD, Grin CM, Kopf AW, et al. Prospective follow-up for malignant melanoma in patients with atypical-mole (dysplastic-nevus) syndrome. J Dermatol Surg Oncol. 1991;17:44-48.
22. Halpern AC. The use of whole body photography in a pigmented lesion clinic. Dermatol Surg. 2000;26:1175-1180.
23. Shriner DL, Wagner RFJ. Photographic utilization in dermatology clinics in the United States: a survey of university-based dermatology residency programs. J Am Acad Dermatol. 1992;27:565-567.
24. Slue W, Kopf AW, Rivers JK. Total-body photographs of dysplastic nevi. Arch Dermatol. 1988;124:1239-1243.
25.
Edmondson PC, Curley RK, Marsden RA, Robinson D, Allaway SL, Willson CD. Screening for malignant melanoma using instant photography. J Med Screen. 1999;6:42-46.
26. Hanrahan PF, Hersey P, Menzies SW, Watson AB, D'Este CA. Examination of the ability of people to identify early changes of melanoma in computer-altered pigmented skin lesions. Arch Dermatol. 1997;133:301-311.
27. Bloch DA. Comparing two diagnostic tests against the same 'gold standard' in the same sample. Biometrics. 1997;53:73-85.
28. Vecchio T. Predictive value of a single diagnostic test in unselected populations. N Engl J Med. 1966;274:1171-1173.

Case Report
Successful Technique for Closure of Macular Hole Retinal Detachment Using Autologous Retinal Transplant
Juan Abel Ramirez-Estudillo,1 Geovanni Rios-Nequis,1 Martin Jimenez-Rodríguez,1 Hugo Valdez-Flores,1 and Ximena Ramirez-Galicia2
1 Retina Department, Fundacion Hospital Nuestra Señora De La Luz IAP, Mexico
2 Research Department, Faculty of Medicine, Universidad La Salle, Mexico
Correspondence should be addressed to Martin Jimenez-Rodríguez; martinretina@icloud.com
Received 20 May 2020; Revised 12 October 2020; Accepted 23 October 2020; Published 19 November 2020
Academic Editor: Claudio Campa
Copyright © 2020 Juan Abel Ramirez-Estudillo et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Macular hole retinal detachment (MHRD) for the most part develops in highly myopic eyes. Several surgical methods have been introduced to treat MHRD.
We describe our experience with autologous retinal transplantation in a patient with MHRD. A 49-year-old female presented with a 2-week history of a sudden decrease in the central vision of the right eye (RE). A 3-port, 25-gauge pars plana vitrectomy was performed with ILM dye staining and peeling. Endodiathermy was applied around a 1.5-disc-diameter neurosensory donor site in the superotemporal retina. The graft was cut with standard 25-gauge curved scissors. Perfluoro-n-octane (PFO) was instilled. The free graft was gently handled until it was packed into the macular hole. Two months following the initial PPV, the macular hole was closed, and vision improved from 0.05 to 0.25 logMAR.

1. Introduction
Macular hole retinal detachment (MHRD) mostly develops in highly myopic eyes. Since Gonvers and Machemer reported the use of pars plana vitrectomy (PPV) and gas tamponade in 1982 [1], several surgical methods have been introduced to treat MHRD, such as macular buckling; vitrectomy with gas or silicone oil tamponade, with internal limiting membrane (ILM) peeling or the inverted ILM flap technique; and scleral imbrication [1–6]. In recent years, autologous neurosensory retinal transplantation has been considered as a surgical technique for closure of the refractory macular hole. We describe our experience with the autologous retinal transplant in a patient with MHRD.

2. Case Presentation
A 49-year-old female presented with a 2-week history of a sudden decrease in the central vision of the right eye (RE). The patient had no significant history of systemic disease. At her initial visit, the best corrected visual acuity (BCVA) was 0.05 logMAR in her RE and 1.00 logMAR in her left eye. The slit-lamp examination of the anterior segment and intraocular pressure were normal, and both eyes were phakic. A fundus examination revealed macular hole retinal detachment in her RE; lattice degeneration and atrophic holes were also found.
Case Reports in Ophthalmological Medicine, Volume 2020, Article ID 8830985, https://doi.org/10.1155/2020/8830985
The surgery was performed under general anaesthesia, and we elected to use a no. 41 circumferential scleral buckle because other predisposing lesions were found in the inferior retina. Standard vitrectomy equipment (Constellation, Alcon Surgical, Ft. Worth, TX), a Lumera 700 microscope (Carl Zeiss, Germany) and the NGENUITY 3D visualization system (Alcon Surgical, Ft. Worth, TX) were used. A 3-port, 25-gauge PPV was performed; after posterior hyaloid detachment, the ILM was dyed (TissueBlue, DORC). The ILM peeling was performed on the detached retina in a circular fashion, pulling off approximately 1.0 disc diameter of ILM centered on the foveal defect. We should mention that subretinal migration of brilliant blue was detected; fortunately, when the dye was aspirated, part of the subretinal fluid came out together with the dye (Figure 1). Peripheral vitrectomy with indentation was done before taking the graft. Endodiathermy was applied around a 1.5-disc diameter neurosensory donor site. The harvest location was selected in the more bullous zone of the retina (superior and temporal) in order to avoid touching the choroid. The graft was cut with standard 25-gauge curved scissors to approximately 90% of its final size to keep the tissue stable.
Perfluoro-n-octane (PFO) was instilled to completely cover the donor site; the free graft was torn free and manipulated into the correct position using end-grasping forceps, and the tissue was gently handled until it was packed into the macular hole. A fluid-air exchange was performed, and subretinal fluid was extracted through the area where the graft was taken. Laser endophotocoagulation was applied as retinopexy to the donor site and the predisposing lesions, and 5000-centistoke silicone oil was used as tamponade to finish the surgery. One day after the surgery, the retina was completely attached, and the free graft remained inside the macular hole. Two months following the initial PPV, the macular hole was closed, and optical coherence tomography (OCT) showed integration of the retinal flap and partial restoration of the internal plexiform layer (Figure 2). Best corrected visual acuity improved to 0.25 logMAR at 2 months after the surgery. 3. Discussion A coexistent macular hole has been described in approximately 1% to 4% of cases of rhegmatogenous retinal detachment. According to the pathological mechanisms, several surgical methods have been introduced for treatment; the retinal reattachment rate ranged from 40% to 90% [7–10], but the macular hole closure rate was lower, approximately 40% (range 35%-70%) [10]. In 2015, the technique involving the use of an autologous neurosensory retinal free flap for closure of the refractory myopic macular hole (MH) was described [11]; since then, several series have been published reporting that complete anatomical closure of the MH on OCT can reach up to 87.8%. In this case, a single surgery achieved retinal reattachment and MH closure. Although this type of surgery has been reported [12], the technique with a detached retina has not been described previously.
The presence of retinal detachment was not a problem for the management of the free graft of the neurosensory retina; in fact, in our opinion, the subretinal fluid due to the retinal detachment facilitated the harvest by keeping the neurosensory retina separate from the retinal pigment epithelium and helped avoid touching the choroid, instead of having to create a hydraulic retinal detachment with a 38 or 41 G syringe, which can cause subretinal hemorrhage under the graft or subretinal blood migration causing toxicity. Figure 1: (a) Intraoperative digital photography showing subretinal migration of brilliant blue during dye staining. (b) The internal limiting membrane has been removed and is grasped with the vitreous forceps. Notice the absence of blue coloration at the macular area in the region of internal limiting membrane ablation. (c) Endodiathermy was applied around a 1.5-disc diameter neurosensory retina. (d) Injection of the perfluoro-n-octane at the retinal interface. (e) The graft cuts were linear, giving a square graft. (f) Image showing free graft packing inside the macular hole. (g) A fluid-air exchange was performed, and subretinal fluid was extracted through the area where the graft was taken. (h) Laser around the area where the graft was taken. 5000-centistoke silicone oil was used because the patient was not able to stay in the city; besides, it allowed us to follow the graft on OCT with better image quality than through an SF6 bubble. 4. Conclusions According to the evolution of this case, we believe it is possible to initiate a series of patients with this technique Figure 2: (a) Fundus photo of the RE shows macular hole retinal detachment. (b) Preoperative optical coherence tomography (OCT) demonstrates macular detachment with a full-thickness macular hole. (c) One day after surgery, the foveal contour had improved due to the free graft.
(d) OCT on postoperative day 1: the hyperreflective retinal flap is visualized in place over the MH. (e) Color photograph of the fundus of the right eye two months after the autologous retina transplant shows retinal reattachment. (f) One month postoperatively, there was significant integration of the retinal flap with improved architecture of the inner retinal layers, and some hyperreflective lines show the graft. (g) At 2 months postoperatively, there was significant integration of the retinal flap. in order to consider autologous retinal transplant as a first therapeutic possibility in retinal detachment associated with a macular hole. Data Availability The data used to support the findings of this study are included within the article. Disclosure The research did not receive any specific funding, although the Hospital Nuestra Señora de la Luz made its facilities available for examination, surgery, and patient monitoring. Conflicts of Interest The authors declare no conflict of interest regarding this article. References [1] M. Gonvers and R. Machemer, “A new approach to treating retinal detachment with macular hole,” American Journal of Ophthalmology, vol. 94, no. 4, pp. 468–472, 1982. [2] F. Ando, “Use of a special macular explant in surgery for retinal detachment with macular hole,” Japanese Journal of Ophthalmology, vol. 24, pp. 29–34, 1980. [3] E. H. Ryan, C. T. Bramante, R. A. Mittra et al., “Management of rhegmatogenous retinal detachment with coexistent macular hole in the era of internal limiting membrane peeling,” American Journal of Ophthalmology, vol. 152, no. 5, pp. 815–819.e1, 2011. [4] Z. Michalewska, J. Michalewski, R. A. Adelman, and J. Nawrocki, “Inverted internal limiting membrane flap technique for large macular holes,” Ophthalmology, vol. 117, no. 10, pp. 2018–2025, 2010. [5] E. Ortisi, T. Avitabile, and V.
Bonfiglio, “Surgical management of retinal detachment because of macular hole in highly myopic eyes,” Retina, vol. 32, no. 9, pp. 1704–1718, 2012. [6] Z. Hu, X. Gu, H. Qian et al., “Perfluorocarbon liquid-assisted inverted limiting membrane flap technique combined with subretinal fluid drainage for macular hole retinal detachment in highly myopic eyes,” Retina, vol. 10, no. 2, pp. 140–144, 2016. [7] C.-C. Lai, Y.-P. Chen, N.-K. Wang et al., “Vitrectomy with internal limiting membrane repositioning and autologous blood for macular hole retinal detachment in highly myopic eyes,” Ophthalmology, vol. 122, no. 9, pp. 1889–1898, 2015. [8] X. Li, W. Wang, S. Tang, and J. Zhao, “Gas injection versus vitrectomy with gas for treating retinal detachment owing to macular hole in high myopes,” Ophthalmology, vol. 116, no. 6, pp. 1182–1187.e1, 2009. [9] L. S. Lim, A. Tsai, D. Wong et al., “Prognostic factor analysis of vitrectomy for retinal detachment associated with myopic macular holes,” Ophthalmology, vol. 121, no. 1, pp. 305–310, 2014. [10] F. Ando, N. Ohba, K. Touura, and H. Hirose, “Anatomical and visual outcomes after episcleral macular buckling compared with those after pars plana vitrectomy for retinal detachment caused by macular hole in highly myopic eyes,” Retina, vol. 27, no. 1, pp. 37–44, 2007. [11] D. S. Grewal and T. H. Mahmoud, “Autologous Neurosensory Retinal Free Flap for Closure of Refractory Myopic Macular Holes,” JAMA Ophthalmology, vol. 134, no. 2, pp. 229-230, 2016. [12] D. S. Grewal, S. Charles, B. Parolini, K. Kadonosono, and T. H. Mahmoud, “Autologous retinal transplant for refractory macular holes: multicenter international collaborative study group,” Ophthalmology, vol. 126, no. 10, pp. 1399–1408, 2019.
work_bzsxaktttngklmgvq633phb47a ---- Real-time light dosimetry for intra-cavity photodynamic therapy: Application for pleural mesothelioma treatment HAL Id: inserm-01484376 https://www.hal.inserm.fr/inserm-01484376 Submitted on 7 Mar 2017 To cite this version: Nacim Betrouni, Camille Munck, Wael Bensoltana, Grégory Baert, Anne-Sophie Dewalle-Vignion, et al.. Real-time light dosimetry for intra-cavity photodynamic therapy: Application for pleural mesothelioma treatment. Photodiagnosis and Photodynamic Therapy, Elsevier, 2017, Epub ahead of print. doi:10.1016/j.pdpdt.2017.02.011.
inserm-01484376 https://www.hal.inserm.fr/inserm-01484376 https://hal.archives-ouvertes.fr Real-time light dosimetry for intra-cavity photodynamic therapy: Application for pleural mesothelioma treatment Nacim Betrouni 1,2, Camille Munck 1,2, Wael Bensoltana 1, Grégory Baert 1, Anne-Sophie Dewalle-Vignion 1, Arnaud Scherpereel 2, Serge Mordon 1,2 1 INSERM, U1189, 1, Avenue Oscar Lambret, 59037, Lille Cedex, France 2 Lille University Hospital – 59037 Lille Cedex, France Corresponding author Nacim Betrouni INSERM, U1189 ONCO-THAI 1, Avenue Oscar Lambret 59037 Lille Cedex France Tel: 33. 3. 20.44.67.22 email: nacim.betrouni@inserm.fr Abstract. Complete and homogeneous illumination of the target is necessary for the success of a photodynamic therapy (PDT) procedure. In most applications, light dosimetry is done using detectors placed at strategic locations of the target. In this study we propose a novel approach based on the combination of light distribution modeling with spatial localization of the light applicator for real time estimation and display of the applied dose on medical images. The feasibility of the approach is demonstrated for intrapleural PDT of malignant pleural mesothelioma. Keywords: photodynamic therapy, intra-cavity application, light distribution modeling, dosimetry, real time tracking, imaging. 1- Introduction Photodynamic therapy (PDT) is an emerging technique in oncology. Its mechanism of action is based on the interaction of a photosensitizer (PS) drug, a light source for the PS activation and oxygen in tissues. The combination of these three elements induces chemical reactions leading to the creation of reactive oxygen species (ROS) and tumor cell death. The achievement of the desired therapeutic effect of a PDT procedure relies on the accurate PS concentration distribution and on the continuous and uniform light delivery to the tissues.
This last issue is particularly critical due to the variability in target geometry: solid organs, hollow organs and cavities. For cavities, various types of light applicators have been investigated: bare fibers, which have a directed radiation pattern and are applicable to small lesions; cylindrical diffusers [1] with scattering dome applicators, which have a homogeneously distributed radiation pattern and various active lengths; and, lastly, flat or flexible 2D applicators such as textiles [2] or blankets [3], which provide planar illumination. In addition to these applicators, sensors are used to monitor light application through continuous measurement of the light fluence rate at different locations in the cavity. Most often, these sensors consist of isotropic probes collecting light from every direction [4]. This dosimetry method yields only point measurements of the light distribution, whereas continuous spatial feedback is required to estimate the received dose at each target point. This is particularly the case for Malignant Pleural Mesothelioma (MPM) treatment. Treating MPM remains a challenge, with two main alternatives: palliative chemotherapy for inoperable patients, and multimodal treatment including surgery combined with chemotherapy and radiotherapy for the others. Surgery offers the best chance of survival for this still incurable disease; however, even after the most complete tumor resection, microscopic tumor cells persist, and surgery should be associated with an adjuvant local treatment. PDT appeared as a potential option for an effective intra-operative complement to surgery. After extensive preclinical studies, first clinical results were reported by the University of Bern group [5]. Currently, the most complete studies, with a major impact on survival and minimum toxicity, are certainly those led by the Pennsylvania team [6-8]. The photosensitizer (Porfimer sodium, Photofrin) is administered to the patients 48 hours before a maximal surgical tumor resection.
Then, a light applicator connected to a laser source (wavelength 635 nm) is moved inside the pleural cavity, filled with dilute intra-lipid solution, to illuminate the cavity walls (the target) until a fluence of 20 J.cm-2 is obtained. The light delivery monitoring is achieved by 7 isotropic probes placed at strategic locations in the thorax and connected to a dosimetry system ([9], [4;10]). However, this method does not provide information about the light delivered between these 7 locations. In this study, we describe a light applicator and introduce a new method for light dosimetry during intra-cavity PDT procedures with complete and continuous feedback of the applied light dose. We present the feasibility of this approach on a phantom mimicking an intraoperative thorax cavity. 2- Material and Methods The global approach proposed herein is based on three ideas. First, we establish a unique action model that characterizes the light distribution around the applicator. Second, we use a spatial locator for real time tracking of the applicator movements inside the cavity, in order to accumulate, spatially and temporally, the light dose applied to the target. Third, we use imaging to define the target and to serve as a spatial reference map for the real time display of the applied light dose. 2-1 Light applicator construction and setting The proposed light applicator is an adaptation of the model used in the previous trials [11]. It consists of a cylindrical diffuser held in an endo-tracheal tube in which a carbon tube was inserted to stiffen the tube and to correct its curvature. The cuff of the endo-tracheal tube is filled with intra-lipid solution at a concentration of 0.01% to act as a scattering agent. For the real time localization of the applicator, an electromagnetic 3D tracking device is used to track the applicator movements.
This tracking system (TrackStar, Ascension Technology Corporation, Burlington, VT, USA) is composed of a control unit with plugs to connect a transmitter and up to 4 six-degrees-of-freedom sensors. One sensor (2 mm diameter at the distal end and 1.2 mm diameter for the cable) is fixed just above the cylindrical diffuser (Figure 1.a). It allows the localization of the sensor, and therefore of the light wand, giving a 3D position and orientation with respect to the reference defined by the transmitter with a precision of 0.5 mm. Figure 1.b represents the assembled light wand. 2-2 Characterization of a light distribution model Light distribution around the diffusing tip of the applicator was characterized by direct measurements of the light fluence rate (irradiance, W.cm-2) using sensors, completed with digital photography to define fluence rate isosurfaces. Measurements were done in a black plastic tank, limiting light reflection, filled with 0.01% dilute intralipid. The tip of the applicator was fixed horizontally in the tank through a cable gland, and the optical fiber was connected to a 635 nm medical diode laser (Ceralas®, Biolitec). For the fluence rate measurements, an isotropic probe (Model IP 85, Medlight® S.A., Switzerland), collecting light over a large solid angle, was connected to an optical power meter (Model 841-PE, Newport® Corporation, Irvine, CA, USA). The isotropic probe was fixed vertically in a plastic tube and moved above the diffusing tip of the light wand, using a millimetric-precision benchmark with vertical and horizontal degrees of freedom. A calibration factor was defined to convert the point power measurements (W) into fluence rate values (W.cm-2). It was estimated by correlating pairwise power measures obtained by the isotropic probe and fluence measures obtained by a 1 cm² surface detector (Model PD300, OPHIR, Israel). This method allowed obtaining measurements only at discrete positions around the light applicator.
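The calibration step just described can be sketched numerically: given paired readings from the isotropic probe (power, W) and the 1 cm² surface detector (fluence rate, W.cm-2) taken at the same locations, a least-squares slope through the origin gives the conversion factor. A minimal sketch, with illustrative numbers rather than the authors' data:

```python
def calibration_factor(power_w, fluence_w_cm2):
    """Least-squares slope through the origin: fluence_rate ~= k * power.

    power_w: isotropic-probe readings (W); fluence_w_cm2: surface-detector
    readings (W/cm^2) taken pairwise at the same positions.
    """
    num = sum(p * f for p, f in zip(power_w, fluence_w_cm2))
    den = sum(p * p for p in power_w)
    return num / den

# Illustrative paired readings (made up for the example)
power = [0.010, 0.020, 0.040, 0.080]      # W, isotropic probe
fluence = [0.012, 0.023, 0.047, 0.094]    # W/cm^2, 1 cm^2 surface detector
k = calibration_factor(power, fluence)
# Afterwards any probe reading converts as: fluence_rate = k * power
```

A fit through the origin is the natural choice here because a zero power reading must correspond to zero fluence rate.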
This dosimetry method with the power meter only gave point measurements of the light distribution. In order to obtain a more complete and continuous characterization of the light distribution, a digital photograph was taken above the tank. After spatial and chromatic calibrations of the picture, the measurements done previously and their respective positions were matched with the image to define iso-surfaces, as represented in figure 2. A complete description of the method is provided in Munck et al. [12]. 2-3 Tridimensional Imaging Radiation therapies (radiotherapy and brachytherapy) are always image-based techniques. Imaging allows the targets to be defined, their volumes to be estimated and, in some cases, organs at risk to be spared. It is also used to optimize the ballistics and as a reference map for the dose display. Regarding PDT, imaging is being increasingly used, especially for interstitial applications such as prostate [13], glioblastoma [14] and head and neck [15;16] diseases. As for radiotherapy, the goal is mainly to define the targets to treat. For MPM management, 3D computed tomography (CT) imaging is the reference for pleural evaluation, for tumor staging and for follow-up. We propose to use these images in the dosimetry process of the PDT procedure. First, these images are used to estimate the volume and surface of the pleural cavity. Thereby, the total required fluence and the light application duration can be estimated before the treatment. Another benefit of CT images is to use them as a reference space for the real time light dose display during the treatment. However, as these images are acquired before the procedure, a spatial registration is required to compute the transformation between the pre-operative space and the per-operative space. Fiducial markers consisting of 5 mm diameter capsules filled with paraffin oil can be attached to the patient around the region to be imaged.
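The pre-treatment estimate mentioned above can be illustrated with a back-of-the-envelope calculation, under the simplifying assumption that all emitted power reaches the cavity wall (which ignores losses and overlapping illumination); the surface value below is illustrative:

```python
def min_illumination_time_s(prescribed_fluence_j_cm2, wall_surface_cm2, laser_power_w):
    """Lower bound on the illumination time: total energy to deliver / source power."""
    total_energy_j = prescribed_fluence_j_cm2 * wall_surface_cm2
    return total_energy_j / laser_power_w

# Example: 20 J/cm^2 over an (illustrative) 1200 cm^2 pleural surface with a 5 W source
t_s = min_illumination_time_s(20.0, 1200.0, 5.0)  # 4800 s, i.e. 80 minutes
```

In practice the real duration is longer, since the applicator dwells unevenly and part of the emitted light never reaches the target.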
These markers are multimodality imaging compatible and able to create clear patterns on the images. 2.4 Spatial mapping of light dose In order to compute a 3D light dose (D), the action model established for the applicator is combined with the positions returned by the spatial locator. The dose is matched with the image space using the following transformation: T(Applicator -> Images) = T(Transmitter -> Images) ∘ T(Applicator -> Transmitter) (1) T(Applicator -> Transmitter) is formed using the positions and orientations given by the tracking device and is continuously updated according to the tracking frequency rate, while T(Transmitter -> Images) is the registration transformation that locates the patient with respect to the localization system. It is computed by mapping the coordinates of the fiducial markers on the images to their coordinates in the tracking device space (transmitter space). These last coordinates are obtained using a second sensor that points at them successively. Once the 3D spatial mapping is done, for each voxel point p(x, y, z), the cumulated
First, the 3D positions of the probes are obtained using a spatial locator for each position; the model estimated fluence is extracted from the matrix D (equation 2). These values are referred as Model_Measurements. The values collected by the probes are referred as Ref_Measurements. Six independent experiments (Exp1 to Exp6) were carried out by varying the light applicator movements (speed, trajectory,..) inside the cavity. 3-2 Results Figure 5 shows the accumulated 3D light distribution (matrix D) combined with the CT images to locate the treated and untreated regions. A special color look up table (LUT) gives relative display regarding a reference dose value. Each color represents 10% of this dose. Figures 6 depicts the temporal evolution of the cumulated fluence (J.cm-2) measured by the two methods in one of the 7 intrathoracic locations (probes) respectively for (Exp1, Exp2, Exp5), and (Exp3, Exp4, Exp6). From this figure, it appears that the Ref_Measurements have a linear evolution. This result can be explained by the fact 8 that the probes were continuously collecting light through the intralipid liquid even when the applicator was far from their positions, while the Model_Measurements do not have this ability because the model has a limited volume as defined in section 2.2. For the 7 probes, the mean error between the reference measurements and the estimated measurements was about 10% with under-estimation for the proposed method. Since this under-estimation was constant for the six experiments, a correction factor of 1.1 was established and a new experiment was carried out after correcting the Model_Measurements with this value. Figure 7 presents the temporal evolution of the mean light dose calculated in one position for all experiments after correction. The difference between the two methods measurements decreased below 1%. 4- Discussion The success of a PDT procedure depends on the optimal dosimetric configurations of the photosensitizer and the light. 
The first issue relies on the determination of the PS distribution to the target tissues, which remains particularly difficult to monitor in clinical conditions. In most applications, based on dose escalation trials, a defined concentration is considered as optimal with sufficient and homogeneous distribution in tissues, and limited toxicity. For Photofrin, used in PDT for MPM, a dose of 2mg/ Kg is commonly administered to the patient. The second issue, light delivery, is to apply a sufficient light dose to each target point. The approach proposed herein constitutes a new trend in dosimetry for intrapleural PDT. Indeed, instead of the continuous measurement of the light delivered, using dedicated materiel (probes, spectrometer, power-meter,…), our dosimetry method relies on a unique action model that characterizes the light propagation from the light applicator. The model was established by defining the light distribution profile in the intra-lipid solution. It takes into account the absorption and the diffusion around the applicator but not the multiple scattering that occurs within the intralipid solution. This can explain the systematic under-estimation of the model. Anyway, in this kind of treatment, the procedure is based on a continuous movement of the light source inside the cavity, which means that the time spent at a specific spatial position is short leading to a non significant impact of the light scattering. 9 The second originality of our dosimetry method resides in the real time tracking of light applicator’s movements inside the cavity, using a spatial locator. Thus, by combining the model and the positions, a complete 3D light dose is estimated. Spatial locators were already used in laser applications to track the light applicator movements ([17], [18]) and were proved to be efficient tools to monitor light application, in addition to being safe and clinically suitable. 
Actually, from a clinical point of view, the real time update and display of the applied light dose brings the physician three main benefits. The first is the continuous feedback about the light delivered to the treated area. It improves the homogenization of the light dose by adapting the light delivery to untreated or not completely treated regions. The second benefit comes from the fact that visual assessment of the dosimetry is more natural than the punctual evaluation of discrete measurements. Moreover, the use of a gradient dose display allows a quick appreciation of the dose being delivered. Ultimately, one can obtain images as those used in radiotherapy treatment planning (Figure 8). The last benefit regards the procedure time. The proposed approach reduces the intervention time by avoiding the setting of the measurements probes inside the thorax which take about 15 minutes in te case of intrapleural PDT for MPM. 10 Conflict of interest All the authors have no conflict of interest to declare. 11 References [1] Beyer W. Systems for light application and dosimetry in photodynamic therapy. Journal of Photochemistry and Photobiology B: Biology 36 :153-156 1996 [2] Mordon S, Cochrane C Tylcz JB Betrouni N Mortier L Koncar V. Light emitting fabric technologies for photodynamic therapy. Photodiagnosis Photodyn Ther. 12 (1):1-8 2015 [3] Lianga X, Kundua P Finlay J Goodwin M Zhu T. Maximizing fluence rate and field uniformity of light blanket for intraoperative PDT. Proc SPIE Int Soc Opt Eng.2012 January 21; 8210: .doi:10.1117/12.908493.2012 [4] Zhu TC, Kim MM Liang X Lio B Meo JL Finlay JC Dimofte A Rodriguez C Simone CB Cengel K Friedberg JS. Real-time treatment feedback guidance of pleural PDT. Proc SPIE Int Soc Opt Eng. 8568 1-11 2013 [5] H. B. Ris, H. J. Altermatt, R. Inderbitzi, R. Hess, B. Nachbur, J. C. Stewart, Q. Wang, C. K. Lim, R. Bonnett, M. C. Berenbaum. Photodynamic therapy with chlorins for diffuse malignant mesothelioma: initial clinical results. 
Br J Cancer 64 (1116):1120-1991 [6] Friedberg JS, Mick R, Stevenson J, Metz J, Zhu T, Buyske J, Sterman DH, Pass HI, Glatstein E, Hahn SM. A phase I study of Foscan-mediated photodynamic therapy and surgery in patients with mesothelioma. Ann Thorac Surg. 2003 Mar;75(3):952-9. [7] Friedberg JS, Mick R Culligan M. Photodynamic Therapy and the Evolution of a Lung- Sparing Surgical Treatment for Mesothelioma. Ann Thorac Surg. 91 :1738-1745 2011 [8] Friedberg JS, Culligan MJ, Mick R, Stevenson J, Hahn SM, Sterman D, Punekar S, Glatstein E, Cengel K. Radical pleurectomy and intraoperative photodynamic therapy for malignant pleural mesothelioma. Ann Thorac Surg. 2012 93(5):1658-65. [9] Dimofte A, Zhu TC Jarod C. Finlay. In vivo light dosimetry for pleural PDT. 2009, 71640A In: Kessel DH, editor:1-12 2009. [10] Zhu TC. Dosimetry in pleural photodynamic therapy. J Natl Compr Canc Netw 2012;10:S-60. 10 :60-2012. [11] Friedberg JS, Mick R, Stevenson JP, Zhu T, Busch TM, Shin D, Smith D, Culligan, M, Dimofte A, Glatstein E, Hahn SM. Phase II trial of pleural photodynamic therapy and surgery for patients with non-small-cell lung cancer with pleural spread. J Clin Oncol. 2004 Jun 1; 22(11):2192-201. [12] Munck C., Mordon S., Betrouni N. Illumination profile characterization of a light device for the dosimetry of intra-pleural photodynamic therapy for mesothelioma. Photodiagnosis and Photodynamic Therapy, under press, 2016. [13] Betrouni N, Colin P Puech P Villers A Mordon S. An image guided treatment platform for prostate cancer photodynamic therapy. Conf Proc IEEE Eng Med Biol Soc 370-373 2013 Osaka, Japan [14] Stepp H, Beck T Pongratz T Meinel T Kreth FW Tonn JCh Stummer W. ALA and malignant glioma: fluorescence-guided resection and photodynamic treatment. J Environ Pathol Toxicol Oncol. 26 (2):157-164 2007 [15] Karakullukcu B., van Veen R. Aans J. Hamming-Vrieze O Navran A. Teertstra J van den Boom F. Niatsetski Y. Sterenborg H. Tan B. 
MR and CT Based Treatment Planning formTHPCMediated Interstitial Photodynamic Therapy of Head and Neck Cancer: Description of the Method. Lasers in Surgery and Medicine 45 :517-523 2013 [16] Oakley E., Wrazen B. Bellnier D. Syed Y. Arshad H. Shafirstein G. A New Finite Element Approach for Near Real-Time Simulation of Light Propagation in Locally Advanced Head and Neck Tumors. Lasers in Surgery and Medicine 47 :60-67 2015 12 [17] S.R.Mordon, M. E. Vuylsteke P. Mahieu N. Betrouni. Endovenous laser treatment of the great saphenous vein: Measurement of the pullback speed of the fiber by magnetic tracking. IRBM 34 (3):252-256 2013 [18] Sadick N., Rochon P. Mordon S. Advantages of real-time magnetic tracking of the cannula for controlled laser assisted lipolysis. 2010 ASLMS Annual Conference, Phoenix, AZ, USA, April 16-18, 2010Lasers in Surgery and Medicine, 2010, Supp 22, 65: 1962010. 13 Figures captions Figure 1: The light applicator. (a) The position of the 3D sensor according to the cylindrical diffuser. (b) Assembly of the applicator. Figure 2:. Illumination profile of the applicator when connected to laser emitting 3 W. Figure 3: (Left) Thoracic phantom with surgical opening and isotropic probes placement in the pleural wall. (Right) Photodynamic therapy procedure simulation. Figure 4: 3D Computerized Tomography of the thoracic phantom with patterns created by the fiducial markers. Images are displayed using three orientations: transversal, sagittal and coronal. Figure 5: 3D light dose display combined with CT images of the thorax cavity phantom. Crossing lines (top left viewer (sagittal), top right viewer (coronal) and center right viewer (axial)) indicate the light applicator position inside the cavity. Figure 6: Cumulated fluence measured in one unique position over time by the two methods: our proposed model based method (Model_Measurments) and the reference method based on a probe and a powermeter (Ref_Measurments). Six independent experiments were done. 
(a) Measurements: Exp 1, Exp 2 and Exp 5. (b) Measurements: Exp 4, Exp 3 and Exp 6.
Figure 7: Temporal evolution of the light dose (fluence, J/cm2) measured at one position by the two methods: the reference method based on a dosimeter (power meter) and the model-based method assuming a unique emission profile from the light applicator after correction.
Figure 8: Example of a treatment planning display for lung radiotherapy showing the distribution of radiation doses delivered to the tumor. The prescription dose is 50 Gy.
work_c26ke4uo3jh5rhsjqhdkhsu5oa ---- Left Ventricular Volume Reduction by Radiofrequency Heating of Chronic Myocardial Infarction in Patients With Congestive Heart Failure Octavio A. Victal, MD; John R. Teerlink, MD; Efrain Gaxiola, MD; Arthur W. Wallace, MD, PhD; Sergio Najar, MD; David H. Camacho, MD; Augustin Gutierrez, MD; Gabriel Herrera, MD; Gustavo Zuniga, MD; Fausto Mercado-Rios, MD; Mark B. Ratcliffe, MD Background—Myocardial infarct expansion and left ventricular (LV) remodeling are integral components in the evolution of chronic heart failure and predict morbidity and mortality. Radiofrequency (RF) heating and patch placement of chronic LV aneurysms caused a sustained reduction in LV infarct area and volume in an ovine infarct model. This study evaluated the effect of RF heating and epicardial patch as an adjunct to coronary artery bypass graft on LV volumes in patients with prior myocardial infarction, evidence of akinetic/dyskinetic scar, and LV ejection fraction ≤40%. Methods and Results—Ten patients (3 female; mean age, 64±11 years) scheduled for coronary artery bypass graft were enrolled (Canadian Cardiovascular Society angina class 2.1±1.1; New York Heart Association class 3.1±0.5).
Intraoperative digital photography demonstrated an acute 39% reduction in infarct area (n=5; P=0.01), and transesophageal echocardiograms demonstrated a 16% acute reduction in LV end-diastolic volumes (n=9; P=0.002) after RF treatment. There were no intraoperative or procedure-related postoperative complications, and during an average follow-up of over 180 days, there have been no safety issues. All patients had complete relief of their angina and improvement in exercise tolerance. Serial transthoracic echocardiograms over the 6 months of follow-up after RF treatment demonstrated persistent reductions in LV end-diastolic volume (29%; P<0.0001) and LV end-systolic volume (37%; P<0.0001) with improved ejection fraction (P=0.02). Conclusions—RF heating and patch placement in these 10 patients resulted in acute reduction in infarct area and ventricular volumes that were maintained 180 days after procedure. This technique may reduce the incidence of congestive heart failure and mortality in these patients and warrants investigation in larger clinical trials. (Circulation. 2002;105:1317-1322.) Key Words: heart failure • myocardial infarction • remodeling • surgery • cardiac volume Congestive heart failure afflicts almost 5 million people in the United States, and despite recent advances in pharmacological approaches to its treatment, the 1-year mortality rate remains high.1 Ventricular remodeling plays a central role in the development of heart failure, and multiple experimental and clinical studies have shown that ventricular volumes are the most powerful predictors of mortality.2–7 In addition, the pharmacological agents that have been successful in reducing mortality in this condition have also had significant effects on ventricular remodeling.8–10 These observations have led to the hypothesis that interventions that beneficially influence ventricular remodeling may result in improved survival in patients with left ventricular (LV) dysfunction.
Multiple surgical techniques have evolved based on this hypothesis. These techniques often produce acute volume reductions and, in some cases, improved ejection fractions. Unfortunately, many of these techniques, such as LV aneurysm repair, have at least modest operative mortality and limited potential to be performed in a minimally invasive manner and ultimately result in subsequent LV redilation.11,12 However, the compelling promise of the hypothesis that ventricular volume reduction should be beneficial and the limitations of present surgical approaches have encouraged the search for surgical ventricular volume reduction techniques that have a lower operative mortality and improved long-term results. In a recent study, we (M.B.R., A.W.W., and J.R.T.) demonstrated in an ovine model of myocardial infarction and heart failure that radiofrequency (RF) heating of myocardial infarct scar resulted in acute scar shrinkage and volume reduction that was sustained over a 10-week period.13 RF Received June 20, 2001; revision received January 7, 2002; accepted January 7, 2002. From the Social Securities Hospital, Guadalajara, Mexico and Sections of Cardiology (J.R.T.), Anesthesiology (A.W.W.), and Cardiothoracic Surgery (M.B.R.), San Francisco Veterans Affairs Medical Center and School of Medicine, University of California, San Francisco, Calif. Correspondence to Mark B. Ratcliffe, MD, Cardiothoracic Surgery, 112D, San Francisco Veterans Affairs Medical Center, 4150 Clement St, San Francisco, CA 94121. E-mail mratcliffe@hotmail.com © 2002 American Heart Association, Inc.
Circulation is available at http://www.circulationaha.org DOI: 10.1161/hc1102.105566 1317 Downloaded from http://ahajournals.org on April 5, 2021 remodeling techniques are based on the principle that collagen denatures and contracts when heated above 65°C.14,15 However, RF heating also makes collagen more likely to creep under load,16 a finding that was also confirmed in our previous ovine study.13 Consequently, application of a restraining patch was incorporated into the technique to prevent dilation of the treated scar area in the immediate posttreatment period. The purpose of this pilot study was 3-fold: (1) to demonstrate the feasibility of applying this technique to patients, (2) to assess in a preliminary fashion safety concerns, and (3) to evaluate the effects of this technique on ventricular remodeling. The hypothesis of this study is that RF treatment of myocardial infarct scar with the MyoTech cardiac restoration system (Figure 1; Hearten Medical, Inc) as an adjunct to coronary artery bypass grafting in patients with prior myocardial infarction, evidence of an akinetic/dyskinetic scar, and a LV ejection fraction of ≤40% will produce significant and sustained reductions in left ventricular volume. Methods Patient Selection Patients were enrolled in the study if they required nonemergent coronary artery bypass graft (CABG) surgery and had a prior history of myocardial infarction (≥10 weeks earlier), evidence of chronic dilated LV infarct scar as defined by a LV ejection fraction ≤40%, and an akinetic/dyskinetic scar on echocardiography. Patients were excluded if there was evidence of calcification in the scar or thrombus as identified by echocardiography or contrast ventriculography. All patients gave informed written consent to participate in the study.
The protocol was approved by the ethical and scientific committee of the Hospital de Especialidades del Centro Médico Nacional de Occidente, del Instituto Mexicano del Seguro Social and conformed to international scientific and ethical regulations. Preoperative Evaluations Patients underwent cardiac catheterization with right anterior oblique (RAO) contrast ventriculograms at the referring hospitals as well as a history including New York Heart Association (NYHA) classification, physical examination, chest x-ray, ECG, exercise treadmill (in patients who were able to walk) or dipyridamole stress testing with thallium perfusion imaging, and two-dimensional and Doppler transthoracic echocardiography. Patients still eligible by the inclusion and exclusion criteria were selected to undergo the intraoperative evaluation. Hospital Course A brief repeat history and physical examination were performed, and blood chemistries, complete blood count, and ECG were obtained. After general anesthesia and sternotomy but before bypass and RF heating, a transesophageal echocardiogram (TEE) was obtained in each patient to measure LV volumes. After isolation and inspection of the scar area, the RF device (MyoMend Wand; Hearten Medical, Inc) was then applied to the myocardial scar on the beating heart, without ventriculectomy, at normal body temperature. The desired power, temperature, and application time were selected on the RF generator, and the hand-held wand electrodes were held against the infarcted scar tissue. A thermocouple monitored the temperature of the tissue during treatment to automatically maintain the desired conditions, and the power automatically shut off when the elapsed time had passed. Treatment was repeated in an overlapping, outward spiral pattern to cover the entire infarct zone. After treatment, the MyoGuard Patch (Hearten Medical, Inc) was sutured within the perimeter of the scar to provide reinforcement during the patient's initial recovery.
Intraoperative digital photography of the myocardial infarction scar before and after treatment to measure acute shrinkage was performed in the last 5 patients. In these patients, the patch was initially sized to the pretreatment scar area and then digitally photographed on a flat surface with a ruler. After treatment, the patch was trimmed to the posttreatment scar area and another digital photograph was obtained on the flat surface with the ruler. Planimetry of these areas yielded estimates of pretreatment and posttreatment scar areas. After the treatment was complete, cardiopulmonary bypass was initiated and standard CABG was performed. Posttreatment/post-CABG TEE off cardiopulmonary bypass was performed to measure the acute changes in LV volumes. Postoperatively, patients were monitored for 24 hours with a Holter. Patients were followed daily during their hospital courses, and adverse events were recorded. On postoperative days 1, 3, and 5, physical examinations were performed, and blood chemistries, complete blood counts, and electrocardiograms were obtained. Follow-Up Patients were followed by one of the operating surgeons (O.A.V.) as well as their referring cardiologists for at least 6 months after the procedure. Serial transthoracic echocardiograms were performed on days 10, 30, 90, and 180 after the procedure. At 6 months, repeat RAO contrast ventriculograms as well as history, including New York Heart Association (NYHA) classification, and physical examination, chest x-ray, ECGs, 24-hour Holter examinations, and exercise treadmill stress testing were also performed. Statistical Analyses The single-plane RAO ventriculograms were obtained at different referring sites without uniform calibration, so changes in absolute volumes could not be determined. However, these studies were analyzed by 1 investigator (E.G.), and ejection fractions were calculated by planimetry using the area-length method.
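The area-length ejection-fraction calculation referenced above can be illustrated with a short sketch. This uses the standard single-plane area-length formula V = 8A²/(3πL); it is not code from the study, and the tracing values below are hypothetical.

```python
import math

def area_length_volume(area_cm2: float, long_axis_cm: float) -> float:
    """Single-plane area-length LV volume in mL: V = 8*A^2 / (3*pi*L)."""
    return 8.0 * area_cm2 ** 2 / (3.0 * math.pi * long_axis_cm)

def ejection_fraction(edv_ml: float, esv_ml: float) -> float:
    """Ejection fraction as a percentage: EF = (EDV - ESV) / EDV * 100."""
    return (edv_ml - esv_ml) / edv_ml * 100.0

# Hypothetical planimetered tracings from an RAO ventriculogram (not study data)
edv = area_length_volume(area_cm2=35.0, long_axis_cm=9.0)  # end-diastolic frame
esv = area_length_volume(area_cm2=24.0, long_axis_cm=8.0)  # end-systolic frame
print(round(edv), round(esv), round(ejection_fraction(edv, esv)))
```

Because calibration varied between referring sites, absolute volumes from these studies were not comparable, but the ejection fraction, being a ratio, is insensitive to a uniform scale error.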
All of the serial echocardiograms were analyzed by echocardiography technicians at the local sites, whereas the baseline and 6-month transthoracic echocardiograms were analyzed at an investigator's (J.R.T.) remote core echocardiographic laboratory by a technician in a random, blinded fashion averaging 3 representative beats. All analyses were based on standard biplane apical views and volumes calculated by modified Simpson's rule. Results are expressed as mean±SD unless otherwise specified. Comparisons of baseline to postprocedure values are performed with a paired t test, whereas results from the serial echocardiograms were analyzed with a repeated-measures ANOVA. Statistical significance was considered attained when P<0.05. Figure 1. Photograph of the MyoTech cardiac restoration system with the RF generator, MyoMend RF applicator wand, and MyoGuard polyester patch material. Results Demographics Ten patients (age 63±11 years; 3 women) were enrolled in the study, all of whom successfully completed the procedure and the 6-month follow-up period. The patients on average had a moderate degree of angina (Canadian Cardiovascular Society [CCS] class 2.1±1.1), were 2.8±2.9 years from their most recent myocardial infarction, and had moderate-to-severe heart failure symptoms (NYHA class 3.1±0.5). Half of the patients had hypertension, and diabetes mellitus (4 patients), hypercholesterolemia (3 patients), stroke (2 patients), and severe chronic obstructive pulmonary disease (2 patients) were also present. The patients' medications on admission are listed in Table 1. Preoperative Evaluations The 10 patients had a significant limitation in daily exercise tolerance (1.7±1.0 blocks without stopping). Eight of the patients were able to perform exercise treadmill tests on a modified Bruce protocol for 6.4±3.2 minutes (Table 2).
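The modified Simpson's rule (biplane method of disks) used for the echocardiographic volumes above can be sketched as follows. This is a generic illustration of the standard formula, not the core laboratory's analysis software, and the diameter values are hypothetical.

```python
import math

def biplane_simpson_volume(diam_a4c_cm, diam_a2c_cm, long_axis_cm):
    """Modified Simpson (biplane method of disks) LV volume in mL.

    The cavity is sliced into n elliptical disks along the long axis;
    disk i has orthogonal diameters a_i (apical 4-chamber view) and
    b_i (apical 2-chamber view):
        V = sum_i (pi/4) * a_i * b_i * (L/n)
    """
    if len(diam_a4c_cm) != len(diam_a2c_cm):
        raise ValueError("need matching diameter lists from the two views")
    n = len(diam_a4c_cm)
    h = long_axis_cm / n  # thickness of each disk
    return sum(math.pi / 4.0 * a * b * h
               for a, b in zip(diam_a4c_cm, diam_a2c_cm))

# Sanity check: 20 equal disks of diameter 4 cm over an 8 cm long axis
# reduce to a cylinder, V = (pi/4) * 4 * 4 * 8, about 100.5 mL
print(round(biplane_simpson_volume([4.0] * 20, [4.0] * 20, 8.0), 1))
```

Using two orthogonal apical views makes the estimate far less sensitive to out-of-round cavity geometry than single-plane methods.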
The 2 other patients were unable to exercise because of musculoskeletal disorders. Single-plane RAO contrast ventriculography demonstrated decreased ejection fractions (31±6%). Intraoperative Patients underwent 18±7 applications of the MyoMend wand followed by suturing of the patch in place, for a total treatment time of 40±17 minutes for the RF procedure, patch placement, TEE assessment, and scar sizing. All 10 patients completed the surgical procedure with an average total cardiopulmonary bypass time of 90±33 minutes and cross-clamp time of 51±23 minutes with 2.8±1.2 grafts. There were no intraoperative complications. Intraoperative digital photography was performed in the last 5 patients, demonstrating an immediate 39% reduction in scar area (Figure 2A). TEE volumes measured before and after the operative procedure showed commensurate acute reductions in ventricular volumes (Figure 2B; 16% reduction in LV end-diastolic volume [LVEDVI], 32% reduction in LV end-systolic volume [LVESVI]). There was minimal (r=0.12 to 0.41; P=NS) correlation between various measures of the extent of the acute reduction in ventricular volumes and the extent of scar shrinkage. Using a simple spherical model, a reduction of surface area of 9.7 cm2 from a sphere with a volume of 156 mL (the mean preoperative TEE LVEDV) would result in a commensurate 17-mL volume reduction (less than the observed 30-mL reduction in LV volume). Hospital Course All of the patients did well postoperatively, and postoperative complications were limited and believed to be unrelated to the RF treatment. Complications included the following: 1 patient with transient supraventricular tachycardia; 1 transient disorientation on postoperative day 3 to 5; 2 possible small perioperative myocardial infarctions diagnosed solely by troponin-I levels (2 patients with troponin-I levels of 4.90 ng/mL [normal range, 0 to 1.5 ng/mL] on postoperative days 3 and 5, respectively); and 1 nosocomial pneumonia.
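The simple spherical model quoted above can be checked directly from the sphere formulas V = (4/3)πr³ and A = 4πr². The sketch below is illustrative only; it reproduces a predicted volume loss of roughly 16 mL for a 9.7 cm² surface reduction, in the same range as the ~17 mL cited (small differences come from rounding of the inputs).

```python
import math

def sphere_radius_from_volume(v_ml: float) -> float:
    # V = (4/3) * pi * r^3  ->  r = (3V / (4*pi))^(1/3)
    return (3.0 * v_ml / (4.0 * math.pi)) ** (1.0 / 3.0)

def sphere_volume_from_area(a_cm2: float) -> float:
    # A = 4*pi*r^2  ->  r = sqrt(A / (4*pi)), then V = (4/3)*pi*r^3
    r = math.sqrt(a_cm2 / (4.0 * math.pi))
    return 4.0 / 3.0 * math.pi * r ** 3

edv = 156.0                                    # mean preoperative TEE LVEDV, mL
area = 4.0 * math.pi * sphere_radius_from_volume(edv) ** 2
shrunk = sphere_volume_from_area(area - 9.7)   # remove 9.7 cm^2 of surface
print(round(edv - shrunk, 1))                  # predicted volume loss, mL
```

That the observed 30-mL reduction exceeds this geometric prediction is what motivates the authors' later discussion of measurement limitations and beneficial changes in ventricular geometry.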
Postoperative pulmonary artery thermodilution catheter demonstrated a cardiac output of 3.7±0.4 L/min per m2, and there was no evidence of perioperative low-output state requiring intra-aortic balloon pump nor LV assist device implantation. Nine patients were discharged from the hospital without prolongation of hospital stay, whereas 1 patient with prior pulmonary disease was hospitalized for over 1 month because of respiratory problems unrelated to the procedure.
TABLE 1. Patient Medications
Medication | On Admission | On Discharge | 6-Month Follow-Up
ACE inhibitor (ARB) | 8 | 8 | 4 (1)
β-Blockers | 1 | 1 | 1
Long-acting nitrates | 10 | 7 | 3
Digoxin | 9 | 9 | 0
Loop diuretics | 3 | 3 | 1
Amiodarone | 1 | 1 | 0
Aspirin | 10 | 10 | 10
ARB indicates angiotensin receptor blocker.
TABLE 2. Symptom Assessments and Exercise Tolerance
Measure | Preprocedure | 6-Month Follow-Up | P
Angina (CCS class) | 2.1±1.1 | 0±0 | <0.0001
Congestive heart failure symptoms (NYHA class) | 3.1±0.5 | 1±0 | <0.0001
Walking distance, blocks | 1.7±1.0 | 14.4±8.4 | 0.0015
Exercise treadmill test tolerance, min | 6.4±3.2 | 11.1±3.0 | 0.03
Values are expressed as mean±SD. Preprocedure and 6-month follow-up values are compared with a paired t test.
Figure 2. Acute effect of RF heating on left ventricle. A, Infarct scar area as measured by digital photography before and after RF heating. B, LV volumes before and after RF heating and CABG as measured by intraoperative transesophageal echocardiograms. Results expressed as mean±SEM. Six-Month Follow-Up During an average follow-up of over 180 days, there were no safety-related issues (no hemorrhage, rupture, tamponade, stroke, renal failure, or patch-related infection) and no rehospitalizations for cardiac cause. All patients had complete relief of their angina (CCS class 0), and all patients were NYHA class I.
In addition, the 8 patients able to exercise had a marked improvement in their exercise duration (73% increase to 11.1±3.0 minutes; P=0.03) using the modified Bruce treadmill protocol (Table 2). Repeat single-plane RAO contrast ventriculograms demonstrated significant increases in ejection fraction at 6 months (baseline, 31±6%; 6 months, 39±7%; P=0.01). Analysis of the serial echocardiograms obtained at baseline through 180 days after the procedure revealed the time course of these marked, sustained decreases in both LV end-diastolic (29% reduction at 6 months; repeated-measures ANOVA, P<0.0001) and end-systolic (37% reduction at 6 months; repeated-measures ANOVA, P<0.0001) volumes (Figure 3A) with no evidence of progressive ventricular dilatation, as well as time-dependent increases in LVEF (Figure 3B; 25% increase at 6 months; repeated-measures ANOVA, P=0.02). There was a positive trend toward an additional decrease in LVESVI from day 10 to day 180 after the procedure (P=0.06; repeated-measures ANOVA). Blinded analysis of the baseline and 6-month echocardiograms confirmed these significant reductions, demonstrating a 16% reduction in LVEDVI (Figure 4A; mean difference, 12.1 mL/m2; baseline, 74.5±9.0 versus 180 days after procedure, 62.4±10.9 mL/m2; P=0.002) and a 21% reduction in LVESVI (Figure 4B; mean difference, 9.5 mL/m2; baseline, 46.0±4.6 versus 180 days after procedure, 36.5±7.4 mL/m2; P=0.001) and an increased ejection fraction (mean difference, 3.9; baseline, 37.8±6.6 versus 180 days after procedure, 41.7±4.0; P=0.06). Stroke volume was maintained at all time points. Analysis of the Holters from baseline and at 6 months showed no significant change in either atrial or ventricular ectopy (Figure 5). Discussion This study of ten patients undergoing concurrent coronary artery bypass graft surgery demonstrated that treatment of a chronic myocardial scar with RF heating resulted in acute reduction in infarct scar area with resultant decreases in ventricular volumes.
This acute reduction in ventricular volumes was sustained for at least 6 months and was associated with marked improvement in all measures of symptoms and exercise tolerance. There were no safety-related concerns discovered during over 6 months of follow-up. The primary finding of this study was that RF heating treatment of myocardial infarct scar resulted in significant reductions in ventricular volumes that were sustained over at least 6 months of follow-up. The 39% acute reduction in infarct scar area was nearly identical to the 34% reduction in the sheep infarct model using the same RF system,13 as was the 16% to 37% decrease in LV volumes at 6 months after the RF treatment in patients compared with a 20% to 34% reduction in sheep at 10 weeks after the procedure.13 In both studies, not only was there preservation of the initial relative volume reduction, but there was no evidence of progressive dilation as is typically seen in the setting of large infarctions, Figure 3. Chronic effects of RF heating on LV volumes. Serial echocardiographic volume (A) and ejection fraction (B) measurements over 6 months of follow-up. Results are expressed as mean±SEM. Repeated-measures ANOVA performed to analyze differences through time. Figure 4. LVEDVI and LVESVI volumes indexed for body surface area before and 6 months after RF heating as measured by independent, blinded two-dimensional echocardiograms. Individual cases presented with paired t test comparison. suggesting that the procedure may have interrupted the pathophysiological process of deleterious ventricular remodeling. In fact, there was evidence of progressive decreases in LVESVI from day 10 to day 180 after the procedure, with a stable LVEDVI, suggesting a possible improvement in LV function. These results were clearly statistically significant, but are they clinically relevant?
In the study by White et al5 using contrast ventriculography, a 35-mL increase in LVESV resulted in an 8-fold increase in mortality at 5 years. In the present study, RF heating resulted in up to a 30-mL reduction in LVESV (~18 mL/m2 LVESVI), as measured by biplane echocardiography. Some additional context for this question might be found in the pharmacological heart failure trials. In the first ACE inhibitor trials to assess the effects of captopril on ventricular enlargement, captopril treatment for 6 months demonstrated a 9.4-mL/m2 reduction in LVESVI,17 whereas another study demonstrated an 11-mL (4.3%) decrease from placebo with 1 year of captopril treatment.18 In a substudy of the SOLVD treatment trial,9 enalapril treatment for 1 year resulted in a 26-mL/m2 (17%) decrease in LVEDVI compared with placebo-treated patients. Recent studies with β-blocker therapy in patients with chronic heart failure have shown similar results. In a study by Metra et al,19 the administration of metoprolol or carvedilol for 13 to 15 months resulted in 9% (15 mL/m2) and 12% (20 mL/m2) reductions in LVEDVI. ACE inhibitors and β-blockers obviously have other effects than volume reduction, but the 16% to 37% sustained volume reduction using the RF heating technique in this study was similar to that produced by these agents and supports the hypothesis that these volume reductions may be clinically significant. The principle by which RF heating causes reductions in ventricular volumes is based on the property of collagen to denature and contract at temperatures above 65°C.
The extent of contraction is modified by the amount of cross-linking, age of scar, pH, temperature, load during heating, and water content.14,15,20 Unloaded bovine chordae tendineae contract about 65% when chordae are heated to 85°C; similar findings were observed in tendons from the ovine glenohumeral capsule.20 In this study, epicardial temperature at the treatment site was maintained at 95°C, identical to the temperature in the sheep studies.13 Histological studies in the RF-treated sheep demonstrated evidence of the heating extending 50% of the thickness of the infarct scar. Because all patients are alive and well, there are no histological specimens available for the present study, but the findings in the sheep suggest that more aggressive heating might be able to increase infarct shrinkage even more. Of course there is the legitimate concern that full-thickness heating could result in rupture, but because the procedure is performed in the beating heart with the large heat sink of circulating blood, the thermocouple is able to regulate the temperature such that full-thickness heating can be avoided. The RF heating system was designed to be widely applicable to a large group of patients. There are about 7 million patients with myocardial infarction in the United States alone and about 1 million new myocardial infarctions per year. Present treatment strategies have reduced the death rate by 26%, resulting in an increase in the number of myocardial infarction survivors.21 Although the ratio of nontransmural to transmural myocardial infarctions is increasing22 and most infarcts seen in the operating room no longer are confluent scar, available evidence suggests that RF heating of salt-and-pepper infarcts that are mostly collagen will still benefit despite the myocyte loss. In a study by Nagueh et al,23 hibernating myocardium with more than 25% fibrosis did not recover with revascularization.
Furthermore, the extent of fibrosis correlated well with areas that failed to thicken (r=−0.83, P<0.01) during noninvasive dobutamine echocardiography. Thus, there seems to be a noninvasive way to assess areas that would not improve from revascularization and might benefit from this procedure. In addition, RF heating of ankle ligaments has been performed through an arthroscope,20 and the present RF heating system was designed to be adaptable to minimally invasive cardiac surgical techniques. Because the procedure can be performed off bypass, there is the possibility that future applications could be performed as minimally invasive, stand-alone procedures. This study has some important limitations. First, it is a small study that cannot fully evaluate the safety and efficacy of this technique. RF heating has been used safely in other applications, and in the present study of 10 patients, RF heating was associated with no evidence of any adverse effects during the 1 year of follow-up. However, this study is too small to be confident regarding safety issues. Although the study was designed to enroll consecutive patients meeting the enrollment criteria, the group of patients in this study was selected and may not be representative of the population as a whole. Second, echocardiographic follow-up was limited to 6 months. As noted above, most previous studies of ventricular remodeling have demonstrated additional dilation during a 6-month follow-up period, so we suggest that there should have been some evidence of dilation noted in the present study during this follow-up period. Figure 5. Atrial (A) and ventricular (B) ectopy at baseline and 6 months after RF heating of myocardial scar. Results expressed as mean±SEM. Third, the absence of a control group is an important limitation.
Although most studies suggest that patients with large myocardial infarctions continue to have significant dilation over 6 months, the CABG procedure itself may have resulted in the reduced volumes observed in this study. Unfortunately, the serial changes in ventricular volumes after CABG are relatively unknown, especially in this patient population. However, we suggest that this mechanism is unlikely given the acute volume reductions observed by TEE and the recent studies that show no change in resting ejection fractions in patients similar to those enrolled in this study.24 Finally, this study did not investigate the underlying mechanism for the improvement in ventricular volumes. As noted in the results, the extent of volume reduction was disproportionate to that expected given the reduction in surface area from scar shrinkage. This disparity could be attributable to many factors, including technical limitations in the scar area measurements, which may have underestimated the scar area reduction, or improvements in ventricular geometry or loading characteristics, resulting in beneficial remodeling. The present study is solely hypothesis generating, but it seems to have developed the intriguing hypothesis that RF heating in these patients significantly reduced LV volumes and was safe. This study demonstrates that RF treatment of myocardial infarct scar with the MyoTech system as an adjunct to CABG resulted in significant and sustained reductions in LV volume without significant treatment-related adverse events in patients with prior myocardial infarction, evidence of an akinetic/dyskinetic scar, and LV ejection fraction of ≤40%.
The ability of this treatment to be applied to millions of patients and the potential to be accomplished with minimally invasive techniques suggests that RF heating of infarct scar may be an important technique to reduce ventricular volumes and possibly reduce the incidence of congestive heart failure and mortality in this high-risk group of patients. Larger controlled clinical trials are being planned to measure the effect on ventricular remodeling and patient outcomes. Acknowledgment Dr Teerlink was the recipient of a Veterans Affairs Career Development Award. References 1. Teerlink JR, Massie BM. β-Adrenergic blocker mortality trials in congestive heart failure. Am J Cardiol. 1999;84:94R–102R. 2. Shanoff HM, Little JA, Csima A, et al. Heart size and ten-year survival after uncomplicated myocardial infarction. Am Heart J. 1969;78:608–614. 3. Kostuk WJ, Kazamias TM, Gander MP, et al. Left ventricular size after acute myocardial infarction: serial changes and their prognostic significance. Circulation. 1973;47:1174–1179. 4. Hammermeister KE, DeRouen TA, Dodge HT. Variables predictive of survival in patients with coronary disease: selection by univariate and multivariate analyses from the clinical, electrocardiographic, exercise, arteriographic, and quantitative angiographic evaluations. Circulation. 1979;59:421–430. 5. White HD, Norris RM, Brown MA, et al. Left ventricular end-systolic volume as the major determinant of survival after recovery from myocardial infarction. Circulation. 1987;76:44–51. 6. Pfeffer MA, Braunwald E. Ventricular remodeling after myocardial infarction: experimental observations and clinical implications. Circulation. 1990;81:1161–1172. 7. St John Sutton M, Pfeffer MA, Plappert T, et al. Quantitative two-dimensional echocardiographic measurements are major predictors of adverse cardiovascular events after acute myocardial infarction: the protective effects of captopril. Circulation. 1994;89:68–75. 8.
Pfeffer MA, Braunwald E, Moyé LA, et al. Effect of captopril on mortality and morbidity in patients with left ventricular dysfunction after myocardial infarction: results of the survival and ventricular enlargement trial. The SAVE investigators. N Engl J Med. 1992;327:669–677. 9. Konstam MA, Rousseau MF, Kronenberg MW, et al. Effects of the angiotensin converting enzyme inhibitor enalapril on the long-term progression of left ventricular dysfunction in patients with heart failure: SOLVD investigators. Circulation. 1992;86:431–438. 10. Lechat P, Packer M, Chalon S, et al. Clinical effects of β-adrenergic blockade in chronic heart failure: a meta-analysis of double-blind, placebo-controlled, randomized trials. Circulation. 1998;98:1184–1191. 11. Dor V, Sabatier M, Di Donato M, et al. Late hemodynamic results after left ventricular patch repair associated with coronary grafting in patients with postinfarction akinetic or dyskinetic aneurysm of the left ventricle. J Thorac Cardiovasc Surg. 1995;110:1291–1299. 12. Ratcliffe M. Batista's operation: what have we learned? J Am Coll Cardiol. 2000;36:2115–2118. 13. Ratcliffe MB, Wallace AW, Teerlink JR, et al. Radio frequency heating of chronic ovine infarct leads to sustained infarct area and ventricular volume reduction. J Thorac Cardiovasc Surg. 2000;119:1194–1204. 14. Chen SS, Wright NT, Humphrey JD. Heat-induced changes in the mechanics of a collagenous tissue: isothermal free shrinkage. J Biomech Eng. 1997;119:372–378. 15. Chen SS, Wright NT, Humphrey JD. Heat-induced changes in the mechanics of a collagenous tissue: isothermal, isotonic shrinkage. J Biomech Eng. 1998;120:382–388. 16. Vangsness CT Jr, Mitchell W III, Nimni M, et al. Collagen shortening: an experimental approach with heat. Clin Orthop. 1997;13:267–271. 17. Sharpe N, Murphy J, Smith H, et al. Treatment of patients with symptomless left ventricular dysfunction after myocardial infarction. Lancet. 1988;1:255–259. 18.
Pfeffer MA, Lamas GA, Vaughan DE, et al. Effect of captopril on progressive ventricular dilatation after anterior myocardial infarction. N Engl J Med. 1988;319:80–86. 19. Metra M, Giubbini R, Nodari S, et al. Differential effects of β-blockers in patients with heart failure: a prospective, randomized, double-blind comparison of the long-term effects of metoprolol versus carvedilol. Circulation. 2000;102:546–551. 20. Obrzut SL, Hecht P, Hayashi K, et al. The effect of radiofrequency energy on the length and temperature properties of the glenohumeral joint capsule. Arthroscopy. 1998;14:395–400. 21. American Heart Association. 2001 Heart and Stroke Statistical Update. Dallas, Tex: American Heart Association; 2000. 22. Goldberg RJ, Gore JM, Alpert JS, et al. Non-Q wave myocardial infarction: recent changes in occurrence and prognosis. A community-wide perspective. Am Heart J. 1987;113:273–279. 23. Nagueh SF, Mikati I, Weilbaecher D, et al. Relation of the contractile reserve of hibernating myocardium to myocardial structure in humans. Circulation. 1999;100:490–496. 24. Elhendy A, Cornel JH, van Domburg RT, et al. Effect of coronary artery bypass surgery on myocardial perfusion and ejection fraction response to inotropic stimulation in patients without improvement in resting ejection fraction. Am J Cardiol. 2000;86:490–494.
work_c5umbzkcwfdz7dq7uekl5t6brq ---- Prevalence and risk factors for diabetic retinopathy in rural India. Sankara Nethralaya Diabetic Retinopathy Epidemiology and Molecular Genetic Study III (SN-DREAMS III), report no 2

Rajiv Raman,1 Suganeswari Ganesan,1 Swakshyar Saumya Pal,1 Vaitheeswaran Kulothungan,2 Tarun Sharma1

To cite: Raman R, Ganesan S, Pal SS, et al. Prevalence and risk factors for diabetic retinopathy in rural India.
Sankara Nethralaya Diabetic Retinopathy Epidemiology and Molecular Genetic Study III (SN-DREAMS III), report no 2. BMJ Open Diabetes Research and Care 2014;2:000005. doi:10.1136/bmjdrc-2013-000005

PRÉCIS: The prevalence of DM and DR in a rural south Indian population was 10.1% and 10.3%, respectively. The risk factors for DR were male gender, longer duration of diabetes, higher HbA1c, insulin use, and higher systolic BP.

Received 25 November 2013; Revised 22 April 2014; Accepted 29 April 2014

1 Shri Bhagwan Mahavir Vitreoretinal Services, Sankara Nethralaya, Chennai, Tamil Nadu, India
2 Department of Preventive Ophthalmology, Sankara Nethralaya, Chennai, Tamil Nadu, India

Correspondence to: Dr Tarun Sharma; drtaruns@gmail.com

ABSTRACT
Objective: The study was aimed at estimating the prevalence of type 2 diabetes mellitus and diabetic retinopathy in a rural population of South India.
Design: A population-based cross-sectional study.
Participants: 13 079 participants were enumerated.
Methods: A multistage cluster sampling method was used. All eligible participants underwent comprehensive eye examination. The fundi of all patients were photographed using 45°, four-field stereoscopic digital photography, and additional 30°, seven-field stereo digital pairs were taken for participants with diabetic retinopathy. The diagnosis of diabetic retinopathy was based on Klein's classification.
Main outcome measures: Prevalence of diabetes mellitus and diabetic retinopathy and associated risk factors.
Results: The prevalence of diabetes in the rural Indian population was 10.4% (95% CI 10.39% to 10.42%); the prevalence of diabetic retinopathy, among patients with diabetes mellitus, was 10.3% (95% CI 8.53% to 11.97%).
Statistically significant variables, on multivariate analysis, associated with increased risk of diabetic retinopathy were: gender (men at greater risk; OR 1.52; 95% CI 1.01 to 2.29), use of insulin (OR 3.59; 95% CI 1.41 to 9.14), longer duration of diabetes (>15 years; OR 6.01; 95% CI 2.63 to 13.75), systolic hypertension (OR 2.14; 95% CI 1.20 to 3.82), and poor glycemic control (OR 3.37; 95% CI 2.13 to 5.34).
Conclusions: Nearly 1 of 10 individuals in rural South India, above the age of 40 years, showed evidence of type 2 diabetes mellitus. Likewise, among participants with diabetes, the prevalence of diabetic retinopathy was around 10%, the strongest predictor being the duration of diabetes.

The epidemic of diabetes mellitus (DM), in particular type 2 DM, is assuming significant proportions in developing countries, such as India.1 2 The International Diabetes Federation (IDF) has projected that the number of people with diabetes in India would rise from 65.1 million in 2013 to 109 million in 2035.3 DM, being a lifestyle disease, is on the rise in urban areas; we reported that the prevalence of DM in the population older than 40 years, in urban India, was around 28% (Sankara Nethralaya Diabetic Retinopathy Epidemiology and Molecular Genetic Study (SN-DREAMS) I, report 2).4 5 However, in a study carried out in South India where the population was hybrid, both rural and urban, the prevalence of DM was around 11%.6 The Indian Council of Medical Research-India Diabetes (ICMR-INDIAB) Study, which was carried out in three states (Tamil Nadu, Maharashtra, and Jharkhand) and one union territory (Chandigarh), reported a varied prevalence of diabetes: 10.4% in Tamil Nadu, 8.4% in Maharashtra, 5.3% in Jharkhand, and 13.6% in Chandigarh.1 No epidemiological study estimating the prevalence of DM and diabetic retinopathy (DR) in rural India is available; furthermore, the changing lifestyle and urbanization of rural culture are gradually influencing the rural
population as well.6 7 Therefore, the present study, SN-DREAMS III, a population-based cross-sectional study using multistage random sampling, was designed to estimate the prevalence of DM and DR in rural India and to elucidate risk factors influencing DR.

Key messages
▪ The prevalence of type 2 DM in the rural Indian population: 10.4%.
▪ The prevalence of DR in this cohort: 10.3%.
▪ The prevalence of DR in newly diagnosed DM: 2.8%.
▪ The most significant variable associated with DR: longer duration of DM (>15 years).

BMJ Open Diabetes Research and Care 2014;2:000005. doi:10.1136/bmjdrc-2013-000005. Open Access Research. First published as 10.1136/bmjdrc-2013-000005 on 7 June 2014; downloaded from http://drc.bmj.com/ on April 5, 2021, by guest; protected by copyright.

PATIENTS AND METHODS
The study design and research methodology of SN-DREAMS III are described in detail elsewhere.8 The study was approved by the Institution Review Board, and written informed consent was obtained from the participants according to the Declaration of Helsinki. The details of the study were explained to the patient in the local vernacular language; the translated local language consent form was either signed (literate) or a thumb impression obtained (illiterate).

Study areas and sample size calculations
The study was conducted in the rural areas of the district Kanchipuram and district Thiruvallur, Tamil Nadu, India (figure 1). A multistage cluster sampling method was used. We randomly selected 26 villages, divided into 26 clusters, 13 clusters from each district, and a cluster was defined as having a population of 1200–2000.
The estimated sample size was 11 760, assuming a 2% prevalence of DR based on estimates from previous studies,9 10 keeping a design effect of 2 with a precision of 80% and compliance of 80%.

Diagnosis of diabetes mellitus
The following definitions were used:
▸ Known diabetes: if participants were using antidiabetic agents, either oral or insulin or both, along with dietary recommendations.
▸ Newly diagnosed DM: as a first step, all patients underwent estimation of fasting blood glucose by the capillary method (Accutrend α) in the field, and those noted to have a reading of >100 mg/dL were invited for an oral glucose tolerance test (OGTT, by enzymatic assay) in the mobile van; an OGTT value of ≥200 mg/dL was considered as newly diagnosed diabetes.11
▸ Sight-threatening DR (STDR): STDR was defined as the presence of severe non-proliferative DR (NPDR), proliferative DR (PDR), and clinically significant macular edema (CSME).12

Evaluation of patients in a mobile van
All eligible patients were interviewed by trained bilingual interviewers. All instruments were developed initially in English and later translated into Tamil (the regional spoken language), ensuring that the contents and the meanings were preserved. A comprehensive eye examination was performed in a mobile van equipped with an Early Treatment Diabetic Retinopathy Study (ETDRS) chart, a fundus camera (Carl Zeiss), and other equipment (figure 2); this was done so that a participant need not travel to the city, thereby increasing the compliance rate. The fundi of all patients were photographed using 45°, four-field stereoscopic digital photography; additional 30°, seven-field stereo digital pairs were taken for those who showed any evidence of DR. The diagnosis of DR was based on Klein's classification (modified ETDRS scales).13 The clinical grading of digital photographs was performed by two independent observers (experienced retinal specialists) in a masked fashion (κ=0.82).
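The sample-size statement in the Methods compresses several parameters into one sentence. A minimal sketch of the standard cluster-survey formula is shown below; reading the stated "precision of 80%" as a 20% relative precision around the expected 2% prevalence is an assumption on my part (the paper does not spell this out), but under that reading the formula reproduces the reported figure to within rounding:

```python
def cluster_sample_size(p, rel_precision, deff, compliance, z=1.96):
    """Sample size for a prevalence survey: normal-approximation formula,
    inflated by a design effect and by expected non-compliance."""
    d = rel_precision * p                # absolute precision around p
    n_srs = z**2 * p * (1 - p) / d**2    # simple-random-sampling size
    return n_srs * deff / compliance     # adjust for clustering and non-response

# Assumed reading of the paper's parameters: 2% DR prevalence,
# 20% relative precision, design effect 2, 80% compliance.
n = cluster_sample_size(p=0.02, rel_precision=0.20, deff=2, compliance=0.80)
print(round(n))  # 11765, within rounding of the reported 11 760
```

The names `cluster_sample_size`, `rel_precision`, and `deff` are illustrative, not taken from the study protocol.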
Figure 1 Study areas.

Step-by-step enumeration and enrollment
Figure 3 shows the step-by-step enrollment of the study population. A house-to-house enumeration was completed for 13 079 participants, aged 40 years or above. Participants with diabetes secondary to other conditions (secondary diabetes) were excluded based on the medical history of comorbid conditions. Likewise, those on medications which could possibly alter the blood glucose or cause changes in the retina mimicking DR were also excluded. Of the 13 079 enumerated participants, 12 172 (93.1%) responded to the estimation of first fasting glucose. Of these 12 172 participants, 2730 (22.4%) were considered to have DM: 1075 participants with known diabetes and 1655 participants with fasting blood sugar ≥100 mg/dL. Of the 1655 participants, 1365 (82.5%) reported for OGTT; 335 (24.5%) were then confirmed to have newly diagnosed DM. So, 1234 participants with DM (899 known; 335 newly diagnosed) had their fundi photographed; 44 participants (34 known; 10 newly diagnosed) were excluded as the fundus pictures were adjudged to be ungradable. Thus, altogether, 1190 participants (865 known; 325 newly diagnosed) were analyzed in the study.

Statistical analysis
Statistical analyses were performed using statistical software (SPSS for Windows V.14.0; SPSS Science, Chicago, Illinois, USA). The results were expressed as mean±SD if the variables were continuous and as percentages if the variables were categorical.
The Student t test for comparing continuous variables and the χ2 test to compare proportions among groups were used. Newly diagnosed participants with diabetes were given a value of 0 for duration of diabetes. Univariate and multivariate logistic regression analyses were performed to study the effect of various risk factors using DR as a dependent variable.

Figure 2 (A) A customized mobile van for comprehensive eye examination in rural areas. (B) Inside the mobile van: recording visual acuity using the ETDRS chart.

RESULTS
Of the 2730 participants with DM, 1075 had known diabetes and 1655 had provisional diabetes (figure 3); these subjects were invited for eye evaluation and OGTT, respectively. Of the 1075 participants with known diabetes, 899 responded to the eye evaluation, and of the 1655 participants with provisional diabetes, 1365 responded to OGTT. Thus, the data included 2264 responders and 466 non-responders; table 1 compares the data between responders and non-responders with regard to mean age, gender, and diabetes status. No statistically significant differences were observed. The age-adjusted and gender-adjusted prevalence of DM in rural India was 10.4% (95% CI 10.39% to 10.42%; table 2). The prevalence was higher in the 50–59 years age group, and no gender difference was observed. The prevalence of any DR was 10.3% (95% CI 8.53% to 11.97%; table 3).
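The reported confidence intervals are consistent with a simple normal (Wald) approximation; this is a check on my part, not the authors' stated method. Using the published counts (122 of 1190 participants with any DR):

```python
def prevalence_ci(cases, n, z=1.96):
    """Point prevalence with a normal-approximation (Wald) 95% CI."""
    p = cases / n
    se = (p * (1 - p) / n) ** 0.5
    return p, p - z * se, p + z * se

# Any DR among the 1190 analyzed participants with diabetes.
p, lo, hi = prevalence_ci(122, 1190)
print(f"{p:.1%} (95% CI {lo:.2%} to {hi:.2%})")  # 10.3% (95% CI 8.53% to 11.98%)
```

This matches the paper's 10.3% (8.53% to 11.97%) up to rounding in the last digit; an exact (Clopper-Pearson) interval would differ slightly.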
The prevalence of any DR was higher among participants with known diabetes (13.1% vs 2.8%; p<0.0001), participants aged between 50 and 69 years (25.4% vs 6.2%; p=0.007), male gender (12.8% vs 8.1%; p=0.008), duration of DM of more than 15 years (37.1% vs 6.3%; p<0.0001), higher glycosylated hemoglobin (HbA1c; 15.4% vs 5.1%; p<0.0001), use of insulin (44% vs 9.5%; p<0.0001), systolic blood pressure of >140 mm Hg (19.8% vs 8.6%; p<0.0001), and diastolic blood pressure of >90 mm Hg (15.5% vs 8.8%; p=0.002). The prevalence of STDR was 3.8% (95% CI 2.70% to 4.86%; table 3). The prevalence of STDR was higher among participants with known diabetes (5.0% vs 0.6%; p<0.0001), participants aged between 50 and 69 years (p=0.001), duration of DM of more than 15 years (p<0.0001), higher HbA1c (6.0% vs 1.5%; p<0.0001), and use of insulin (28% vs 3.8%; p<0.0001). Table 4 shows the univariate and multivariate regression analyses of factors related to any DR.

Figure 3 Flow chart showing a step-by-step enrollment of the study population.

Table 1 Comparison of responders and non-responders in the study population

Variable          Responders (n=2264)   Non-responders (n=466)   p Value
Mean age, years   53.03±9.77            53.28±10.97              0.623
Male, n (%)       953 (42.1)            201 (43.1)               0.717
Female, n (%)     1311 (57.9)           265 (56.9)
KD, n (%)         899 (39.7)            176 (37.8)               0.466
NDD, n (%)        1365 (60.3)           290 (62.2)

KD, known diabetes; NDD, newly diagnosed diabetes.
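The subgroup p values above come from χ2 tests on the underlying counts. As an illustrative check (not the authors' code), the known versus newly diagnosed comparison (113/865 vs 9/325 with any DR) can be recomputed from the 2×2 table without any statistics library:

```python
def chi2_2x2(a, b, c, d):
    """Pearson chi-square statistic for a 2x2 table [[a, b], [c, d]],
    without continuity correction."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Known diabetes: 113 of 865 with DR; newly diagnosed: 9 of 325 with DR.
stat = chi2_2x2(113, 752, 9, 316)
print(round(stat, 1))  # ~27.2, far above the 10.83 critical value for p=0.001
```

A statistic of about 27 on 1 degree of freedom corresponds to p well below 0.0001, consistent with the reported value; the paper's exact figure may differ slightly if a continuity correction was applied.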
In the univariate analysis, increased association was observed in those participants between the age group of 50 and 59 years (OR 2.24) and between 60 and 69 years (OR 2.19), male gender (OR 1.66), duration of DM >15 years (OR 8.74), HbA1c of >7.0 (OR 3.41), use of insulin (OR 7.46), systolic blood pressure (OR 2.62), and diastolic blood pressure (OR 1.90). A multivariate model identified five variables: male gender (OR 1.52), duration of DM of >15 years (OR 6.01), HbA1c >7.0 (OR 3.37), use of insulin (OR 3.59), and systolic blood pressure of >140 mm Hg (OR 2.14). Table 5 shows the relationship between types of DR (non-proliferative, proliferative, and maculopathy) and known versus newly diagnosed DM; non-proliferative DR and diabetic maculopathy were present in a higher proportion in those with known DM. Of the 122 participants with DR, only 3 (2.5%) participants, all with known diabetes, had previously been diagnosed to have DR; all had STDR and had received laser photocoagulation. Table 6 shows the relationship between types of DR (non-proliferative, proliferative, and maculopathy) and the duration of DM; all types of DR were present in higher proportions in those having duration of diabetes of more than 15 years.

Table 2 Age-wise and gender-wise distribution of prevalence of diabetes mellitus

                       n (%)          (95% CI)           p Value
Age group (years)
 Overall (n=12 172)    1234 (10.14)   (9.60 to 10.67)
 Adjusted*             1234 (10.41)   (10.39 to 10.42)
 40–49 (n=5452)        398 (7.30)     (6.60 to 7.99)
 50–59 (n=2801)        399 (14.24)    (12.95 to 15.94)   <0.0001
 60–69 (n=2810)        332 (11.81)    (10.62 to 13.01)
 70+ (n=1109)          105 (9.47)     (7.74 to 11.19)
Gender
 Male (n=5675)         556 (9.80)     (9.02 to 10.57)
 Female (n=6497)       678 (10.44)    (9.69 to 11.18)    0.245

*Adjusted to age and gender as per the Tamil Nadu rural population census of 2001.

Table 3 Prevalence of DR in various subgroups

                        N      DR, n (%)    (95% CI)            p Value    STDR, n (%)   (95% CI)            p Value
Overall                 1190   122 (10.3)   (8.53 to 11.97)                45 (3.8)      (2.70 to 4.86)
Known diabetes          865    113 (13.1)   (10.81 to 15.31)    <0.0001    43 (5.0)      (3.52 to 6.42)      <0.0001
Newly diagnosed         325    9 (2.8)      (0.99 to 4.55)                 2 (0.6)       (−0.23 to 1.47)
Age groups (years)
 40–49                  390    24 (6.2)     (3.77 to 8.53)                 4 (1.0)       (0.03 to 2.03)
 50–59                  390    50 (12.8)    (9.50 to 16.14)     0.007      23 (5.9)      (3.56 to 8.24)      0.001
 60–69                  318    40 (12.6)    (8.94 to 16.22)                17 (5.3)      (2.88 to 7.82)
 >69                    92     8 (8.7)      (2.94 to 14.46)                1 (1.1)       (−1.03 to 3.21)
Gender
 Female                 651    53 (8.1)     (6.04 to 10.24)     0.008      20 (3.1)      (1.74 to 4.40)      0.159
 Male                   539    69 (12.8)    (9.98 to 15.62)                25 (4.6)      (2.86 to 6.42)
Duration group (years)
 <5                     916    58 (6.3)     (4.75 to 7.91)                 19 (2.1)      (1.15 to 2.99)
 5–10                   160    29 (18.1)    (12.16 to 24.10)    <0.0001    11 (6.9)      (2.96 to 10.80)     <0.0001
 10–15                  79     22 (27.8)    (17.97 to 37.73)               9 (11.4)      (4.38 to 18.40)
 >15                    35     13 (37.1)    (21.13 to 53.15)               6 (17.1)      (4.65 to 29.63)
HbA1c
 <7.0                   593    30 (5.1)     (3.30 to 6.82)                 9 (1.5)       (0.54 to 2.50)
 >7.0                   597    92 (15.4)    (12.51 to 18.31)    <0.0001    36 (6.0)      (4.12 to 7.94)      <0.0001
Insulin status
 User                   25     11 (44.0)    (24.54 to 63.46)               7 (28.0)      (10.40 to 45.60)
 Non-user               1165   111 (9.5)    (7.84 to 11.22)     <0.0001    38 (3.3)      (2.24 to 4.28)      <0.0001
Systolic BP
 <140                   1013   87 (8.6)     (6.86 to 10.32)                38 (3.8)      (2.58 to 4.92)
 >140                   177    35 (19.8)    (13.9 to 25.64)     <0.0001    7 (4.0)       (1.08 to 6.82)      0.896
Diastolic BP
 <90                    932    82 (8.8)     (6.98 to 10.62)                34 (3.6)      (2.45 to 4.85)
 >90                    258    40 (15.5)    (11.08 to 19.92)    0.002      11 (4.3)      (1.80 to 6.72)      0.646

BP, blood pressure (mm Hg); DR, diabetic retinopathy; HbA1c, glycosylated hemoglobin (%); STDR, sight-threatening DR.
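The univariate odds ratios in Table 4 can be recovered directly from the counts in Table 3. As a worked check (not the authors' code), the gender comparison (69 of 539 men vs 53 of 651 women with any DR) with a Woolf (log-normal) confidence interval reproduces the published OR 1.66 (95% CI 1.14 to 2.42):

```python
from math import exp, log, sqrt

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio for a 2x2 table [[a, b], [c, d]] with a Woolf
    (log-normal approximation) 95% confidence interval."""
    or_ = (a * d) / (b * c)
    se = sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo, hi = exp(log(or_) - z * se), exp(log(or_) + z * se)
    return or_, lo, hi

# Men: 69 of 539 with any DR; women: 53 of 651 (counts from Table 3).
or_, lo, hi = odds_ratio_ci(69, 539 - 69, 53, 651 - 53)
print(f"OR {or_:.2f} (95% CI {lo:.2f} to {hi:.2f})")  # OR 1.66 (95% CI 1.14 to 2.42)
```

The multivariate ORs cannot be reproduced this way, since they come from a logistic regression adjusted for the other covariates.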
Table 4 Analysis of risk factors for diabetic retinopathy

                   Univariate analysis              Multivariate analysis
                   OR (95% CI)           p Value    OR (95% CI)           p Value
Age (years)
 40–49             1                                1
 50–59             2.24 (1.35 to 3.73)   0.002      1.63 (0.95 to 2.83)   0.075
 60–69             2.19 (1.29 to 3.73)   0.004      1.34 (0.75 to 2.39)   0.319
 >69               1.45 (0.63 to 3.35)   0.381      0.89 (0.36 to 2.19)   0.797
Gender
 Female            1                                1
 Male              1.66 (1.14 to 2.42)   0.009      1.52 (1.01 to 2.29)   0.045
Duration (years)
 <5                1                                1
 5–10              3.27 (2.02 to 5.30)   <0.0001    2.19 (1.29 to 3.73)   0.004
 10–15             5.71 (3.26 to 9.99)   <0.0001    3.91 (2.08 to 7.35)   <0.0001
 >15               8.74 (4.19 to 18.24)  <0.0001    6.01 (2.63 to 13.75)  <0.0001
HbA1c
 <7.0              1                                1
 >7.0              3.41 (2.22 to 5.23)   <0.0001    3.37 (2.13 to 5.34)   <0.0001
Insulin status
 Non-user          1                                1
 User              7.46 (3.31 to 16.83)  <0.0001    3.59 (1.41 to 9.14)   0.007
Systolic BP
 <140              1                                1
 >140              2.62 (1.71 to 4.03)   <0.0001    2.14 (1.20 to 3.82)   0.010
Diastolic BP
 <90               1                                1
 >90               1.90 (1.27 to 2.85)   0.002      1.36 (0.79 to 2.35)   0.273

BP, blood pressure (mm Hg); HbA1c, glycosylated hemoglobin (%).

Table 5 Severity of DR in known versus newly diagnosed DM

DM                        Mild NPDR   p Value   Moderate NPDR   p Value   Severe NPDR   p Value   PDR        p Value   CSME       p Value   Any DR       p Value
Overall (n=1190)          32 (2.7)              54 (4.5)                  24 (2.0)                12 (1.0)             25 (2.1)             122 (10.3)
Known (n=865)             29 (3.4)    0.021     49 (5.7)        0.002     24 (2.8)      0.002     11 (1.3)   0.139     24 (2.8)   0.008     113 (13.1)   <0.0001
Newly diagnosed (n=325)   3 (0.9)               5 (1.5)                   0 (0.0)                 1 (0.3)              1 (0.3)              9 (2.8)

DM, diabetes mellitus; CSME, clinically significant macular edema; NPDR, non-proliferative diabetic retinopathy; PDR, proliferative diabetic retinopathy.

DISCUSSION
In this study, the prevalence of diabetes and DR in a rural South Indian population aged 40 years or more was 10.1% and 10.3%, respectively. The prevalence of STDR was 3.8% (known diabetes: 5% and newly diagnosed: 0.6%). Table 7 shows the prevalence of diabetes and DR among the rural population in India and the Asia-Pacific region.14–23 The previous population-based studies from India, which estimated the prevalence of DR and DM, had based the diagnosis of diabetes on fasting blood sugar.17 18 20 The present study used the gold standard, OGTT, for the diagnosis of diabetes, and standard fundus photography for the diagnosis of DR. The prevalence of diabetes in other Asia-Pacific countries, such as China and Australia, was similar to that in the Indian rural population; however, the prevalence of DR was much higher in those countries (21.4% in Australia and 43.1% in the Chinese rural population).22 23 The relative roles of genetic and lifestyle factors in these ethnic differences remain unexplored. We have earlier reported a nearly threefold prevalence of diabetes and twofold prevalence of DR in the urban population in South India.5 Diamond24 had also described a similar trend of the prevalence of diabetes being higher among the affluent, educated, urban Indians than among the poor, uneducated, rural people. A sedentary lifestyle and unhealthy food preferences could be the possible reasons for this urban–rural divide.25–28 On multivariate analysis, the risk factors for DR were male gender, longer duration of diabetes, poor glycemic control, use of insulin, and higher systolic blood pressure. These risk factors are similar to those reported in the urban population and also in other ethnic populations.5 22 23 The prevalence of any DR was higher in participants with known DM than in those who were newly diagnosed (13.1% vs 2.8%, table 5).
When data of the rural population were compared with those of the urban population, reported earlier, a higher prevalence of any DR in the newly diagnosed was found in urban compared with rural populations (6% vs 2.8%). However, the prevalence of STDR (severe NPDR, PDR, CSME) was quite low in those with newly diagnosed diabetes (2/325, 0.6%, table 5); a low prevalence of 0.4% was also noted in the urban Indian population.5 Others have also noted a somewhat low prevalence, ranging from around 2% to 3%, in this subset.20 29–33 The strength of this study is that it uses OGTT for the diagnosis of diabetes, and photography and standard grading techniques for detecting DR. The study used a unique customized mobile van with all the equipment required for standard collection of clinical data, which also allowed easy access to rural areas. Furthermore, the study is representative of a large population, and the results could be extrapolated to the rural population of Tamil Nadu. In view of the lack of previous reports on the rural prevalence of DR and STDR, this study is of importance. The study limitations included the potential of bias in ascertaining history-related variables and the absence of a cause–effect relationship, this being a cross-sectional study; longitudinal studies would be needed to establish such a relationship. On extrapolating the data on the South Indian population of Tamil Nadu based on the population projections of the Census of the Government of India,34 the estimated population with diabetes in rural Tamil Nadu over the age of 40 years would be nearly 9.5 lakhs, and those with DR would be nearly 1 lakh.

Table 6 Severity of diabetic retinopathy in relation to duration of diabetes mellitus

Duration (years)   Mild NPDR   p Value   Moderate NPDR   p Value   Severe NPDR   p Value   PDR       p Value   CSME       p Value   Any DR      p Value
<5                 16 (2.7)    0.001     23 (3.9)        <0.001    6 (1.0)       <0.0001   4 (0.7)   0.008     13 (2.2)   0.046     49 (8.3)    <0.001
5–10               5 (3.1)               13 (8.1)                  6 (3.8)                 5 (3.1)             5 (3.1)              29 (18.1)
10–15              6 (7.6)               7 (8.9)                   8 (10.1)                1 (1.3)             1 (2.5)              22 (27.8)
≥15                2 (5.7)               6 (17.1)                  4 (11.4)                1 (2.9)             4 (11.4)             13 (37.1)

CSME, clinically significant macular edema; NPDR, non-proliferative diabetic retinopathy.

Table 7 Prevalence of diabetes mellitus (DM) and diabetic retinopathy (DR) among the rural population in India and the Asia-Pacific region

Location         Year   Population      Sample   Age   Assessment of DM   Prevalence of DM (%)   Prevalence of DR (%)
India
 India14         1993   Population      467      >40   OGTT               4.90                   Not reported
 India15         1999   Population      6091     >40   OGTT               1.74                   Not reported
 India16         2003   Self-reported   3949     >50   FBS>140            Nil                    20.30
 India17         2004   Population      4917     >40   PPBS>180           4.40                   5.30
 India18         2005   Population      4535     >30   FBS≥110            13.20                  Not reported
 India19         2006   Self-reported   26 519   >30   Known DM           Nil                    17.60
 India20         2006   Population      25 969   >30   FBS>126            8.50                   10.20
 India21         2007   Self-reported   5212     >50   Known DM           5.10                   26.80
 Present study   2010   Population      13 079   >40   OGTT               10.41                  10.3
Asia-Pacific
 China22         2009   Population      6830     >30   FBS>126            6.90                   43.10
 Australia23     2007   Population      1608     >40   HbA1c              4.90                   21.40

FBS, fasting blood sugar; HbA1c, glycosylated hemoglobin; OGTT, oral glucose tolerance test; PPBS, postprandial blood sugar.
It is also important to note that of the 865 participants with known diabetes, only 3 were previously known to have DR; in the rest, DR was newly detected on eye examination. This information has a great impact on public health awareness programmes, highlighting the need for regular eye examinations when educating people with DM. It is heartening that we could identify several patients with STDR among participants with known or newly detected DM. These data stress the need for regular diabetic screening programmes not only in urban areas but also in rural areas in India.

Contributors: SG, SSP, and RR researched the data. SG, VK, and RR wrote the manuscript and researched the data. TS reviewed/edited the manuscript. RR and TS contributed to the discussion.
Funding: This work was supported by the Jamshetji Tata trust, Mumbai, India.
Competing interests: None.
Patient consent: Obtained.
Ethics approval: Vision Research Foundation, Chennai.
Provenance and peer review: Not commissioned; externally peer reviewed.
Data sharing statement: No additional data are available.
Open Access: This is an Open Access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 3.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/3.0/

REFERENCES 1. Anjana RM, Pradeepa R, Deepa M, ICMR-INDIAB Collaborative Study Group, et al. Prevalence of diabetes and prediabetes (impaired fasting glucose and/or impaired glucose tolerance) in urban and rural India: phase I results of the Indian Council of Medical Research-INdia DIABetes (ICMR-INDIAB) study. Diabetologia 2011;54:3022–7. 2. Ramachandran A, Snehalatha C, Ma RC. Diabetes in South-East Asia: an update for 2013 for the IDF Diabetes Atlas.
BMJ Open Diabetes Research and Care 2014;2:000005. doi:10.1136/bmjdrc-2013-000005. Epidemiology/health service research.
Prevalence and risk factors for diabetic retinopathy in rural India. Sankara Nethralaya Diabetic Retinopathy Epidemiology and Molecular Genetic Study III (SN-DREAMS III), report no 2
work_c5zmlih54jd7dmph2m45y2aesm ---- Contributors
Bridges: A Jewish Feminist Journal, Volume 11, Number 1, Spring 2006, pp. 130-135 (Article). Published by Bridges Association. DOI: https://doi.org/10.1353/brd.2006.0006
CONTRIBUTORS
JUDITH ARCANA
Judith Arcana is a Jewish anarchist feminist witch and writer. Her many political, literary, and spiritual influences include Emma Goldman, Grace Paley and Starhawk. Judith’s work is forthcoming soon in Poetica, BuffaloCarp, Women’s Lives and Big Water. Her newest book is reviewed in this issue: What if your mother (Chicory Blue Press, 2005) is a collection of poems and monologues about motherhood; the focus is on issues that are rarely examined with the necessary complexity: abortion, adoption, biotechnology, et al.
LILI ARTEL
“My roots are in New York City but I’ve flourished creatively in the East Bay area of California. An octogenarian, my original intent was to be a writer, particularly of short stories. My fiction has appeared in anthologies and literary journals in California and Oregon. My poetry was published in the 25th Anniversary issues of Room of One’s Own, a Canadian feminist literary journal in 2003.
A late bloomer, in my fifties in the last century I opened a second creative door to become a sculptor with a retrospective show of 35 years of art work, set for Dec. 1-17, 2005 at the Sun Gallery in my hometown, Hayward, CA. Both in my art and writing there’s an irreverent Jewish/feminist voice.”
MASKIT BENDEL
Maskit Bendel is the director of projects in the Occupied Territories for Physicians for Human Rights-Israel. She was born and raised in Jerusalem, and currently lives in Tel Aviv. She writes, “my father was a military man, and I grew up in a very Zionist family. I served in the Israel Defense Force and took the course of living of the average Israeli youth. My awareness of issues of gender and inequality rose at a very late stage in my life, while working for PHR. Through learning more about the Palestinian society, I opened my eyes to what is taking place inside my own society.” Bendel has a degree in textile design and is completing a master’s degree in cultural research at Tel Aviv University. She is also studying for a degree in law, intending to focus on human rights and international law.
SARAH WERTHAN BUTTENWIESER
Sarah Werthan Buttenwieser, a writer, community activist and mother of three, was raised with feminism and without much religion. In hopes her kids’ choices about faith are informed, her family belongs to Beit Ahavah, a fledgling Reform congregation in Florence, MA. She loves that their congregation meets in a church.
SUSAN H. CASE
Susan H. Case is a college professor in New York. She has recent work in, or forthcoming in: Eclipse, Georgetown Review, Karamu, Pebble Lake Review, Saranac Review, Slant, Tar Wolf Review, The Comstock Review and The GW Review, among others. She is the author of The Scottish Café (Slapering Hol Press, 2002), a collection of poems about mathematics in Poland right before and during the Holocaust.
These poems have been translated into both Polish and Ukrainian and both translators are currently looking for foreign publishers.
CLAIRE DRUCKER
Claire Drucker is a Jewish bisexual mom living in Sebastopol, CA. She teaches creative writing to adults at a local junior college and poetry to children in the public schools. Claire has been published in Diner, California Quarterly, Phoebe, Eclipse, Women Artists Datebook, Epiphany and Controlled Burn.
SUSAN EISENBERG
A poet with a theater background, Susan Eisenberg takes multiple approaches to a subject. She is the author of We’ll Call You If We Need You: Experiences of Women Working Construction (nonfiction) and Pioneering (poetry); and creator of the installation NOT on a SilVER PLATTER. She’s currently working on a poetry/photo/nonfiction project about women with lupus. She teaches at the University of Massachusetts Boston.
JYL LYNN FELMAN
Performance Artist/Visiting Assistant Professor (University of Massachusetts, Amherst) Jyl Lynn Felman is the author of Hot Chicken Wings, Cravings, Never A Dull Moment, and the forthcoming Performing Cultural Rituals. She’s performed in Cuba, England, Canada, Australia, and the Czech Republic and is currently touring with Terri Schiavo, Inc. and Silicone Valley. Check out her website: www.jyllynnfelman.com or reach her at drjlfelman@aol.com
ELAINE FOX
Elaine Fox is a physician, Board Certified in Internal Medicine and Geriatrics, and she is also a Fellow of the American College of Physicians. After working for several years in Pittsburgh and then a time as Medical Director of a nursing home started for Holocaust survivors in Queens, NY, she practiced medicine in Southampton, NY for 17 years. She is married to Oliver Steindecker, an abstract artist, and together they faced and still face the challenge of raising their two daughters in a community that is distinctly non-Jewish. At present, Dr.
Fox is on the boards of Physicians for a National Health Program - NY Metro Chapter and the Long Island Coalition for a National Health Plan, serving as liaison between the two groups. She is also pursuing a graduate degree in Public Health at SUNY Stony Brook, maintaining that it is the health care system itself which is her patient now.
LISA LINK
“I am an artist, educator and mother of two young children and I live and work (at several jobs) in Boston. I love my position as the digital photography teacher and graphic designer for the Crimson Summer Academy, http://www.crimsonsummer.harvard.edu. My series of Jewish feminist montages have traveled nationally for over fourteen years and are in the collections of the Center for the Study of Political Graphics in Los Angeles and the Fogg Art Museum at Harvard University. I also have completed public art projects and really enjoy collaborating with others especially on cross-disciplinary projects and welcome mail! lisalink@yahoo.com.”
SHERYL LUNA
Sheryl Luna’s collection of poetry, Pity the Drowned Horses, was published by the University of Notre Dame Press (2005). It won the first Andres Montoya Poetry Prize sponsored by The Institute for Latino Studies. Her maternal grandfather was Jewish. Many of her poems deal with women’s rights and women’s lives along the U.S./Mexican border.
KAREN MANDELL
Karen Mandell’s novel Repairs and Alterations deals with a family’s secrets concerning the Holocaust. Her novel Tumbling Down concerns the breakup of a family and their attempts at reconciliation, with the help of several historical figures who make their way into the 21st century. She won the Poetry Society of America/Oil of Olay Contest in 2004 and second place in the Muriel Craft Bailey Contest through the Comstock Review. She lives in Needham, Massachusetts, with her husband, Fred, an artist, and they’re extremely grateful to watch the ways in which their children’s lives unfold.
ELLEN MEEROPOL
Ellen Meeropol recently left her career as a nurse practitioner to spend more time writing and to work in an independent bookstore. Her activism and writing focus on the impact of political activism on children and the impact of illness on medical providers. Previous fiction has been published in Portland Magazine, The Women’s Times, Pedestal, and Patchwork Journal. She lives in Western Massachusetts.
MYRA MNIEWSKI
Myra Mniewski is a poet and translator who lives and works in New York City. She is currently the director of Yugntruf-Youth for Yiddish, a worldwide organization of Yiddish-speaking and Yiddish-learning young adults founded in 1964. For more, www.yugntruf.org.
ROSE RAPPOPORT MOSS
“I was born Jewish in apartheid South Africa, and at the dining room table heard my parents compare laws against blacks in South Africa with laws against Jews in Lithuania. As a schoolgirl, I rebelled against teachers trying to instill ladylike manners and compliance in an unjust society. I came to the United States in 1964 and found my past illuminated in the anti-war and feminist movements here.” Moss is the author of two novels and numerous short stories and teaches creative writing in the Nieman Program at Harvard. Her webpage is: www.rosemosswriter.com.
GRETCHEN PRIMACK
Gretchen Primack’s recent publication credits include The Paris Review, Prairie Schooner, Field, Tampa Review, Rhino, and Cimarron Review, and her manuscript Fiery Cake has been shortlisted for several prizes. She lives in the delightfully Jewish feminist-rich Hudson Valley with many beloved animals and a beloved human.
RENEE GAL PRIMACK
Renee Gal Primack is a feminist, bisexual, cancer treatment survivor, mama, almost-wife, almost-step-mom, sister, daughter yet orphan, Jew, underemployed, reluctant Mid-Westerner. She holds a B.A. in Religious Studies and a Masters in Philosophy, neither of which prepared her for all the years doing Web production, but did teach her how to think.
Now if that only could get her a decent paying job.
L. A. REED
L. A. Reed (pen name) is a beginning writer in her 50s; an artist, poet, singer, scientist, and raiser of butterflies, who likes to have fun as well as make change. She chooses to use her pen name in situations where her identity as a disabled person who is poor could have additional complications with publicity of her lesbian, mental health and abuse survivor identities, which involve her family. “Coming to terms with abuse in Jewish families is not always easy, in light of our difficult history. I am hopeful as I, and others, find the courage to talk about these problems, we will find ways to heal ourselves, as well as our families.”
WILLA SCHNEBERG
Willa Schneberg received the 2002 Oregon Book Award In Poetry for In The Margins of The World, Plain View Press. Her next collection of poetry “Storytelling in Cambodia” is forthcoming from Calyx Books, Spring ’06. She judged the 15th Annual Reuben Rose Poetry Competition sponsored by Voices Israel, and went to Israel in December 2004 to participate in the Awards Ceremony. She is the originator and coordinator of the Oregon Jewish Writers Series at the Oregon Jewish Museum, Portland, where her clay sculpture of Judaica has been exhibited. She is a congregant of P’nai Or in Portland and a member of Brit Tzedek V’Shalom.
PENELOPE SCAMBLY SCHOTT
Penelope Scambly Schott’s most recent books are a documentary narrative poem, The Pest Maiden: A Story of Lobotomy (2004), and a collection of poems, Baiting the Void (2005). Although she was not raised Jewish she has always been a person of the book.
BRENDA SEROTTE
“‘The Last Kaddish’ was written after a year of mourning for my mother. We had a rocky relationship, but shared one thing always: A passionate love of Ladino, our ancient Spanish language, still spoken today by Sephardi Jews around the world.
After her death, I completed a memoir, The Fortune Teller’s Kiss (The University of Nebraska Press), and a book of poetry, The Blue Farm (Ginninderra Press), which explore in depth and celebrate my Sephardi heritage.” Brenda Serotte teaches world literature and writing at Nova Southeastern University in Fort Lauderdale, Florida. She is at work on a novel about Peru. Find her website at www.BrendaSerotte.com.
STEPHANIE WAXMAN
Stephanie Waxman co-produced “Miracle At Midnight” (starring Sam Waterston and Mia Farrow) for ABC, the story of the Danish resistance movement. She taught at Hebrew Union College for 15 years and currently teaches in the UCLA Writers’ Program. Her fiction has appeared in numerous literary journals and her story “Perfection” (The Bitter Oleander) was nominated for a Pushcart Prize. She is the author of the internationally published What Is A Girl? What Is A Boy? and Growing Up Feeling Good: A Child’s Introduction to Sexuality. Her most recent book is A Helping Handbook—When A Loved One Is Critically Ill.
JESSICA WEISSMAN
Jessica Weissman resides in the DC area and is involved in various aspects of feminist spirituality there. She has been a Jew and a feminist all her life, a philosopher and a computer programmer most of her life, and hopes to remain an active writer for the rest of her life.
work_c76pcwvviraude6uy74uwdgawm ---- Chemosensors | Free Full-Text | Dipsticks with Reflectometric Readout of an NIR Dye for Determination of Biogenic Amines | HTML
Open Access Article
Dipsticks with Reflectometric Readout of an NIR Dye for Determination of Biogenic Amines
by Sarah N. Mobarez 1,2, Nongnoot Wongkaew 1, Marcel Simsek 1, Antje J. Baeumner 1 and Axel Duerkop 1,*
1 Institute of Analytical Chemistry, Chemo- and Biosensors, University of Regensburg, Universitätsstraße 31, 93053 Regensburg, Germany
2 Chemistry Department, Faculty of Science, Ain Shams University, El-Khalyfa El-Mamoun Street, El-Abaseya, Cairo 11566, Egypt
* Author to whom correspondence should be addressed.
Chemosensors 2020, 8(4), 99; https://doi.org/10.3390/chemosensors8040099
Received: 6 August 2020 / Revised: 7 October 2020 / Accepted: 9 October 2020 / Published: 14 October 2020
(This article belongs to the Section Optical Chemical Sensors)
Abstract
Electrospun nanofibers (ENFs) are remarkable analytical tools for quantitative analysis since they are inexpensive, easily produced in uniform homogenous mats, and provide a high surface area-to-volume ratio. Taking advantage of these characteristics, a near-infrared (NIR) dye was doped as chemosensor into ENFs of about 500 nm in diameter electrospun into 50 µm thick mats on indium tin oxide (ITO) supports.
The mats were made of cellulose acetate (CA) and used as a sensor layer on optical dipsticks for the determination of biogenic amines (BAs) in food. The ENFs contained the chromogenic amine-reactive chameleon dye S0378, which is green and turns blue upon formation of a dye-BA conjugate. This SN1 reaction of the S0378 dye with various BAs was monitored by reflectance measurements at 635 nm, where the intrinsic absorption of biological material is low. The difference of the reflectance before and after the reaction is proportional to BA levels from 0.04–1 mM. The LODs are in the range from 0.03–0.09 mM, concentrations that can induce food poisoning but are not recognized by the human nose. The calibration plots of histamine, putrescine, spermidine, and tyramine are very similar, suggesting the use of the dipsticks to monitor the total sample BA content. Furthermore, the dipsticks are selective to primary amines (both mono- and diamines) and show low interference towards most nucleophiles. A minute interference of proteins in real samples can be overcome by appropriate sample pretreatment. Hence, the ageing of seafood samples could be monitored via their total BA content, which rose up to 21.7 ± 3.2 µmol/g over six days of storage. This demonstrates that optically doped NFs represent viable sensor and transducer materials for food analysis with dipsticks.
Keywords: dipstick; biogenic amine; electrospun nanofibers; NIR dye; food analysis; reflectometry
1. Introduction
Biogenic amines (BAs) are important compounds that can determine the quality of food [1]. High levels of amines in food are produced by bacterial decarboxylation of amino acids and have been recognized as an important cause of seafood intoxication [2]. Hence, determining the concentration of BAs in fresh food is essential to assess its freshness.
Papageorgiou and co-workers gave a review of food and beverage products that should be regularly tested since they contain BAs or may develop certain contents over (storage) time [3]. This raises interest in research on rapid and inexpensive optical detection methods and tools like dipsticks to determine BAs in food, not only as individual concentration levels but also as a sum parameter. According to the European Food Safety Authority (EFSA), the U.S. Food and Drug Administration (FDA), as well as the World Health Organization (WHO), there are limits for BA concentrations in food to control food quality. Histamine is one of the most bioactive and toxic BAs. It exists in the majority of foods and plays an important role in food intolerances [4]. If the histamine concentration in food exceeds 500 mg/kg, there is a high risk of food poisoning [3]. In addition, 5–10 mg of histamine might induce skin irritation, rashes, dilatation of peripheral blood vessels resulting in hypotension and headache, or contractions of intestinal smooth muscles causing diarrhea and vomiting [5]. Furthermore, 10 mg is regarded as a borderline for toxicity, 100 mg can result in medium toxic responses, and 1000 mg is considered very toxic [3]. Optical detection of BAs is challenging because they are weak absorbers of visible light, as most of them lack conjugated aromatic π-electron systems. The solution to this problem can be labeling or derivatization of BAs with chromophores or fluorophores. Additionally, in food analysis the analyte has to be extracted from a complex matrix. Therefore, the combination of separation techniques such as GC, HPLC, or capillary electrophoresis with optical, electrochemical, or mass spectrometric detection after BA derivatization was used recently [6,7,8,9,10,11]. Finally, ELISAs [12] are also used for BA determination in food samples.
Those are highly selective and sensitive on the one hand, but expensive and time-consuming and require highly trained staff on the other. In order to overcome these limitations, reasonably fast, low-cost, and portable chemo- and biosensors are desired for rapid on-site analysis of BAs in food [13]. Frequently, chromogenic and fluorogenic dyes, such as acid-base indicators [14,15,16], porphyrins [17], phthalocyanines [18], chameleon dyes [19,20], coumarin derivatives [21], azo dyes [22,23], or nanomaterials are applied for optical BA determination. Cellulose-based microparticles bonded with a pH-indicator and a blue reference dye yielded slow (1.5 h) traffic-light-responding colorimetric sensors [16]. A very recent concept combined two layers for BA sensing, one for colorimetry and one for laser desorption ionization mass spectrometry [24]. Another study described the highly specific and sensitive detection of histamine in mackerel using thin-layer chromatography with visualization by spraying the sheets with ninhydrin and diazonium reagents [25]. Furthermore, an array of five pH-indicators was shown to respond quickly (10 min) and to differentiate between isobutylamine, triethylamine, and isopentylamine at ppm concentrations via RGB readout using a mobile phone [15]. Mobile phones have become increasingly popular in food sensing in recent years [26]. In most cases, array sensors require additional chemometric data treatment to decipher the individual BA and its concentration from the response received by multiple receptors of low selectivity [15,27]. Unlike arrays, dipsticks need only a one-point readout and are therefore much quicker and easier to read out, delivering their response much faster. The development of dipsticks is beneficial since they are practical, simple, portable, and easy to use, and thus do not require trained staff. Moreover, they have a much lower cost compared to instrumental methods of analysis [14].
In addition, colorimetric sensing of BAs using dipsticks provides a simple response based on a color change, which yields a yes/no answer besides the quantitative analysis [19,28]. Direct sensing of BAs was recently carried out using filter paper-based dipsticks containing an amine-reactive chromogenic probe and a reference dye. Quantitative determination of BAs could be successfully achieved either visually, based on a color change, or via luminescence. Digital images of the luminescence of sensor spots were taken, and the BA concentrations were derived from the red-to-green intensity ratio via ImageJ software [19]. As an alternative sensor concept to paper as support for the hydrogel carrying the sensing matrix, electrospun nanofibers on ITO sheets were employed. Dipsticks containing these nanofiber mats showed an up to six-fold higher sensitivity compared to those based on hydrogel sensor membranes containing the same dye [29]. This is due to the high surface area-to-volume ratio and the high porosity of electrospun nanofibers. Moreover, the electrospun nanofibers were designed such that they were counter-charged with respect to BAs in order to achieve an additional enrichment effect. Reflectometric detection of optical sensors is of interest since it is a fast and simple technique. Reflectometric sensors require a light source (which can be an inexpensive LED), a filter to select the detection wavelength, and a detector (which can be a low-cost digital camera). Evaluation of data can then be carried out with freely available software. Hence, a complete sensor (array), including detection equipment, can be obtained for a few hundred US-$ or less. BA sensing was carried out reflectometrically with a digital camera, and the data were analyzed using the color space of the International Commission on Illumination (CIE) system [16]. These sensors were applied for quantitation of various amines and ammonia produced during food ageing in food packages.
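The quantitation workflow behind such reflectometric readouts (measure the reflectance before and after the reaction, take the difference ΔR, map it to concentration via a linear calibration, and estimate the detection limit from blank replicates) can be sketched as follows. This is an illustrative sketch only; the calibration points, blank readings, and the sample value are fabricated numbers, not data from the paper or the cited studies:

```python
# Illustrative sketch (not from the paper): linear calibration of the
# reflectance difference dR = R_before - R_after against BA concentration,
# followed by a 3*sigma/slope detection limit estimate.

def linear_fit(x, y):
    """Ordinary least-squares fit y = a + b*x; returns (intercept a, slope b)."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    a = my - b * mx
    return a, b

# Hypothetical calibration standards (mM) and reflectance differences at 635 nm
conc = [0.05, 0.1, 0.25, 0.5, 0.75, 1.0]
dR   = [0.021, 0.042, 0.105, 0.210, 0.315, 0.420]   # fabricated values

intercept, slope = linear_fit(conc, dR)

# LOD = 3 * standard deviation of blank replicates / slope of the calibration
blank_dR = [0.0009, 0.0011, 0.0010, 0.0012, 0.0008]
mean_blank = sum(blank_dR) / len(blank_dR)
sd_blank = (sum((b - mean_blank) ** 2 for b in blank_dR)
            / (len(blank_dR) - 1)) ** 0.5
lod = 3 * sd_blank / slope   # in mM

# An unknown sample is then quantified by inverting the calibration line:
sample_dR = 0.15
c_sample = (sample_dR - intercept) / slope   # concentration in mM
```

The same structure applies whether ΔR comes from a dedicated reflectometer or from the red channel of a digital camera image; only the raw-signal extraction step changes.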
Fish freshness was also monitored reflectometrically using a colorimetric sensor for creatine, an indicator of fish ageing [30]. In order to create dipsticks for food analysis with improved features, the following novel features were implemented: (a) a sensor layer containing only one dye instead of two, to simplify the dipsticks compared to earlier research [29]; (b) reflectometry for one-step detection (instead of the two-step detection process formerly required with digital photography evaluation [19]); and (c) an NIR chromogenic dye to reduce the effects of self-absorption and scatter upon measuring real samples. Therefore, for the first time, an NIR dye reactive to BAs was embedded into a mat of electrospun nanofibers to form reflectometric dipstick sensors. The electrospun nanofibers made from cellulose acetate (CA) are prepared by a simple standard electrospinning procedure and contain the cyanine dye S0378, which absorbs at 800 nm. CA is stable over a wide range of pH values and contains many hydroxyl (OH) and ester groups, which render it highly hydrophilic. This allows easy access of polar analytes like BAs to the S0378 chemosensor embedded inside the nanofibers. Additionally, CA can easily be electrospun into layers several tens of µm thick, as is common in optical chemical sensors. Primary amines react with the dye by an SN1 nucleophilic substitution mechanism, which is accompanied by a color change from green to blue. Hence, the concentration of BAs can be determined by reflectance detection, which is a simple and fast readout. The equal response towards monoamines and diamines makes the dipstick an ideal tool for determination of the total amine content (TAC) in real samples, which was demonstrated by monitoring the ageing of shrimp samples over time.

2. Materials and Methods

2.1. Materials

S0378 was from FEW Chemicals (www.few.de). The buffer N-cyclohexyl-2-aminoethanesulfonic acid (CHES) was from Roth (www.carlroth.de).
Spermidine (SPR), putrescine (PUT), and histamine (HIS) were purchased from Sigma-Aldrich (www.sigmaaldrich.com), all as hydrochloride salts. Tyramine (TYR), cysteine (CYS), triethylamine (TEA), and dimethylamine, each as free base, were from Sigma, Mann Research Laboratories, Merck, and Fluka, respectively. Cellulose acetate (CA) (Mw 30,000 Da, 39.8 wt% acetyl content), human serum albumin (HSA), and all organic solvents (methanol, acetic acid, and acetone) were obtained from Sigma-Aldrich. All chemicals were of analytical grade. Indium tin oxide (ITO) coated on polyethylene terephthalate (PET) with a surface resistivity of 60 Ω/sq and 127 µm thickness was purchased from Sigma-Aldrich. Stock solutions of the BAs (10.0 mM) were prepared in CHES buffer (pH 9.7). Standard solutions of the BAs were freshly prepared by diluting the stock solutions with CHES buffer. CHES buffer (5.00 mM) was prepared by dissolving solid CHES (0.1036 g) in 100.0 mL of deionized water. The pH of the CHES buffer was adjusted with sodium hydroxide solution (1.00 M, from Merck (www.merckgroup.com)).

2.2. Apparatus

The electrospinning was performed using a home-built electrospinning setup (see Figure S1) with an iseg T1 CP300p high-voltage power supply (www.iseg-hv.com) and a syringe pump (Legato 180 from KD Scientific, www.kdscientific.com). Fiber mat thickness, fiber diameter, and images for pore size determination were obtained with a scanning electron microscope (Zeiss/LEO 1530, Germany) at 5.0 kV. Samples for SEM were cut with a pair of scissors and sputtered with gold for 30 s (≈7 nm layer thickness). For the study of fiber morphology, fibers were randomly selected from different regions of the SEM image. The fiber diameters within the field of view were quantified with the ImageJ analysis software (ImageJ, National Institutes of Health). Within the same SEM image, pore regions were randomly analyzed.
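As a sketch of the pore-size metric used in this image analysis: ImageJ's (maximum) Feret diameter is the longest distance between any two points on a particle outline. A minimal re-implementation is shown below; the outline coordinates and the pixel scale are made-up illustrations, not data from this study.

```python
import numpy as np

def feret_diameter(outline_xy):
    """Maximum Feret diameter: largest pairwise distance between 2-D outline points (in pixels)."""
    pts = np.asarray(outline_xy, dtype=float)
    diff = pts[:, None, :] - pts[None, :, :]        # all pairwise difference vectors
    return np.sqrt((diff ** 2).sum(axis=-1)).max()  # longest caliper distance

# A hypothetical elongated pore outline (pixel coordinates):
outline = [(0, 0), (4, 1), (6, 3), (5, 5), (1, 4)]
d_px = feret_diameter(outline)

# Convert pixels to µm with the SEM scale bar (assumed 0.05 µm/pixel here):
print(f"Feret diameter: {d_px * 0.05:.2f} µm")
```

For large outlines, the same maximum-caliper distance is usually computed on the convex hull of the outline for speed; the brute-force version above is sufficient to define the metric.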
Pores surrounded by fiber networks were outlined manually and analyzed with ImageJ, where Feret's diameter is reported as the pore size. A fraction of the sample prepared for pore size determination was cut and mounted on a 90°-tilted high-profile SEM stub for determining the fiber mat thickness. Within the same sample, different locations (at least three) were imaged and analyzed for mat thickness. Image analysis was performed using the SEM software (Zeiss). Reflectance spectra were acquired with an AB2 luminescence spectrometer with a 150 W xenon light source. The reference for the reflectance spectra was MgSO4. The light was guided to the dipstick by a Y-shaped bifurcated optical fiber in a fiber holder, and the samples were illuminated at an average illumination angle of 33°. The inner central fiber bundle at the tip of the bifurcated optical fiber collects the light reflected from the dipstick and guides it back into the spectrometer. Two wavelengths (650 and 635 nm) were used for illumination, and the dipsticks were placed in home-made black plastic holders below the tip of the optical fiber to ensure a flat and reproducible positioning of the dipsticks with respect to the incident light beam. The distance between the dipstick and the tip of the optical fiber is small (3 mm) to increase the amount of collected reflected light and improve the sensitivity. The measurements were done in the synchronous mode of the spectrometer, i.e., the emission was measured at the same wavelength as the excitation. The bandpasses were 4 nm for both the excitation and the emission monochromator. The absorbance spectra were acquired on a Varian Cary 50 Bio photometer in quartz cells. The emission spectra were obtained using an AB2 luminescence spectrometer equipped with a cell holder with a 90° arrangement of excitation and emission beams and no fiber optic. The excitation wavelength was 600 nm, and the bandpasses for excitation and emission were 4 nm and 8 nm, respectively.

2.3.
Electrospinning of S0378-CA Fiber Mats

The polymer CA (0.720 g) and S0378 (30.00 mg) were dissolved in a mixture of 3.00 mL of acetic acid and 1.00 mL of acetone. This spinning dope was first sonicated at room temperature for 30 min, then at 40 °C for another 30 min, and finally stirred for about 1 h until the mixture was completely homogeneous. The spinning dope was protected from light with aluminum foil during stirring, storage, and electrospinning. The S0378-CA electrospun nanofibers were fabricated using an electrospinning setup with the following parameters: spinning dope in a 5 mL plastic syringe (covered with aluminum foil); voltage, 17 kV; flow rate, 0.002 mL/min; tip-to-collector distance, 11 cm; spinning time, 15 min (see Table S1). An ITO sheet (7 × 5 cm) was used as the supporting material. The electrospun fiber mats deposited on ITO should be stored in a desiccator.

2.4. Preparation of Dipsticks and BA Determination

The dipsticks with S0378-CA nanofibers (Ø = 8 mm) were cut from the ITO sheets with a toggle press from BERG & SCHMID (www.bergundschmid.de) and placed on black, solvent-resistant plastic positioners. Then, 10.00 μL of either BA solution (in CHES buffer, pH 9.7) or an extract from the real sample were added onto the spots. The color change of the dipsticks was developed at 130 °C for 30 min in an oven. The reflectance was measured twice for each dipstick, before and after the reaction with BAs.

2.5. Preparation of Real Samples

The histamine concentration in real samples was determined using methanolic extraction and a matrix-matched calibration curve. The extraction follows the AOAC method 35.1.32 [31] with slight modifications. Shrimp samples were purchased from a local supermarket and stored at −80 °C. A 10.0 g portion was mixed with 50 mL of methanol in a beaker and homogenized in a blender at the highest speed for 2 min. The homogenate was transferred into a conical flask and heated at 60 °C in a water bath for 30 min.
Carrez solution I (0.19 M potassium ferrocyanide) and Carrez solution II (1.05 M zinc acetate) were prepared in distilled water. For protein precipitation, 2 mL of each Carrez solution were added to the shrimp extract before filtration of the homogenate. The extract was filtered twice through a porcelain Buchner funnel with blue-ribbon filter paper (Schleicher und Schüll: 5893, www.whatman.com) to yield a clear solution. Then, different volumes were taken from the sample extract to achieve a suitable dilution factor on each ageing day, and histamine solution was added to reach concentrations of 0–600 μM of added histamine after dilution to 500 μL overall volume with CHES buffer (5.00 mM; pH 9.7). After that, 10.00 μL aliquots were added onto the dipsticks, and the reflectance of each dipstick was measured as indicated above.

3. Results and Discussion

3.1. Choice of Dye

The main idea of this work was to produce the first NIR-dye-doped electrospun nanofiber mats that act as sensor layers in dipsticks for visual and reflectometric evaluation of BAs. Upon reaction with BAs (from real samples), the embedded dye shows a color change from green (λabs,max = 800 nm) to blue (see Figure 1 and Figure 2). The S0378 dye was chosen because of its chameleon property, and the reaction with a BA (e.g., tyramine) occurs in CHES buffer at pH 9.7. This pH is required to warrant that a free electron pair is present at the amino group of the BA for a nucleophilic attack at the electrophilic carbon atom in the center of the π-electron system of the cyanine dye. The electron-withdrawing chloro group is then replaced by the electron-donating amino group of the BA in an SN1 reaction. This inexpensive cyanine dye was formerly used as a long-wavelength-absorbing protein label [32]. Additionally, the CA fibers assist in the response of the electrospun nanofiber mat, as they are anionic at pH 9.7 and thus can attract cationic BAs from the sample.
Furthermore, the emission maxima of S0378 at 663 nm and at 820 nm strongly decrease upon conjugation with BAs, as presented in Figure S2. Hence, the determination of BAs could be done either by fluorimetry or by reflectometry. As the use of a simple detection method suitable for the evaluation of dipsticks was one of the major aims of this work, reflectometry based on the color change was chosen. The reflectance spectra of the S0378 dye (spun into a fiber mat of CA polymer) show an overall decrease when reacted with tyramine (Figure 3). The sharp peaks between 400 nm and 525 nm are characteristic peaks of the xenon excitation lamp and hence instrumental artifacts. Therefore, a much longer detection wavelength closer to the absorption maximum of S0378 is advisable for the reflectance measurements. This is further supported by the fact that the self-absorption of biological matter strongly decreases at longer wavelengths, which in turn can lead to higher reflectance. Since food extracts were to be measured and the reproducibility of the reflectance change is much higher at longer wavelengths, 650 nm was chosen for method optimization and 635 nm for quantitative reflectance detection in real samples.

3.2. Choice of Nanofiber Materials, Conditions of Spinning, Reaction Temperature and Time, and Fiber Morphology

The spinning dope used for the electrospinning of the nanofibers was prepared by dissolving the S0378 dye and the CA polymer in a solvent mixture of acetic acid and acetone. The electrospinning conditions, the polymer, and the solvents used resemble those of an earlier study [29], except for the dye concentration and the spinning time. The concentration of the dye molecules was increased to enhance the sensitivity of the reflectance measurements. It was also tested whether reducing the spinning time by half (to 15 min) still yields a stable fiber mat and avoids detachment of the mat from the ITO sheet.
Hence, the spinning time was varied between 15 min and 30 min, and the dye concentration in the spinning dope between 7.5 mg/mL and 15 mg/mL. The effects on the change of the reflectance signal after reaction of the fiber mat with histamine are depicted in Figure 4. Lower dye concentrations lead to lower changes of reflectance (ΔR). In all combinations of dye concentration and spinning time, ΔR increases with increasing concentration of HIS. On comparing ΔR at 0.8 mM of HIS with ΔR in the absence of HIS, there is no large difference among the four combinations of dye concentration and spinning time. Therefore, a dye concentration in the spinning dope of 7.5 mg/mL and a spinning time of 15 min were chosen to save dye and time in the preparation of the sensor mats deposited on the dipstick. Other spinning conditions did not provide a larger dynamic range in the calibration plots for the determination of BAs. Humidity was controlled and kept in a range of 40%–55%. No effect on fiber quality was observed if the humidity remained in that range. The reaction of the S0378-CA fiber mat with BAs on the dipsticks was allowed to develop at various temperatures after addition of histamine solution in CHES buffer (pH 9.7) to accelerate the color change. Figure 5 shows the reaction between the dye embedded inside the electrospun nanofibers and histamine carried out at three temperatures (70, 100, and 130 °C) for 30 min. The highest change of the reflectance is observed at 130 °C. This was expected, because it was known from earlier literature [32] that the SN1 reaction of S0378 with, e.g., amino side chains of proteins is not fast, even in solution. In order to obtain a wide detection range and a more reproducible response of the dipsticks, the reaction with BAs was carried out at 130 °C in all subsequent measurements. Figure S3 shows the response of the dipsticks after different development times (10, 20, and 30 min at 70 °C).
A development time of 30 min provided the highest change of reflectance and hence the best sensitivity. No effect of humidity on the detected reflectance change was noticed if the humidity was in a range of 30%–65%. Characterization of the fiber morphology with respect to fiber mat thickness, pore size, pore density, and diameter of the S0378-doped nanofibers prior to their reaction with BAs was performed on scanning electron microscope (SEM) images. An example of a nanofiber mat is displayed in Figure 6. The thickness of the resulting fiber mat was determined to be 50.7 ± 8.4 μm. The pore size is 2.80 ± 0.15 μm, as determined by the Feret diameter (example given in Figure S4) from 1330 pores. In addition, the density of pores is very high (≈1.94 × 10⁵ pores/mm²), thus providing a large surface area for interaction with BAs (see Table S1). The average diameter of the fibers is 496 ± 318 nm (n = 82). The fibers are not round but have the overall shape of two fibers fused with one another (see Figure S5a). Hence, there is a short and a long axis to describe the fiber size (see Figure S6), and the average ratio between the diameters along the short and the long fiber axis is 0.62. This also explains the large standard deviation of the fiber diameter. The surface of the fibers is not smooth but rather undulated. After the reaction with 1.0 mM of tyramine in CHES buffer, the fibers became considerably thinner, with average diameters of 165 ± 124 nm (n = 82), but still retained a rough surface (see Figure S5b).

3.3. Visible Color Change of Dipsticks

The S0378-CA fiber mats on the dipsticks are greenish and turn blue upon the reaction with a primary biogenic amine (like histamine in aqueous solution of pH 9.5–10). This is due to the SN1 nucleophilic substitution of the chlorine atom by the nitrogen atom of the BA. The blue color becomes more intense with increasing BA concentration, as displayed in Figure 7.
This color change is most visible starting from a BA concentration of 0.2 mM, which nicely coincides with the BA concentrations (0.3–1 mM) in foods that can induce serious health problems [19] but cannot be detected by the human nose. Reflectance detection based on the color change was chosen to determine BAs because it has a low instrumental demand and reasonable sensitivity. With an appropriate device at hand, in-field use of the dipsticks seems feasible, eventually even with less qualified personnel, at a later stage of application. Furthermore, reflectometry is the typical and well-established method for the evaluation of dipsticks, and the vision of the human eye is also based on the perception of reflected light.

3.4. Assay Procedure for Quantitation of BAs

Nanofiber mat circles (Ø = 8 mm) were cut from the fiber mat sheets with a toggle press and placed on a positioning device made of black, solvent-resistant plastic to absorb stray light. The dipsticks were illuminated with red light by a Y-shaped bifurcated optical fiber at a 33° illumination angle, and the original reflectance of each dipstick was measured as illustrated in Figures S7 and S8. A series of BA solutions of different concentrations was prepared and added onto the dipsticks for calibration. The real sample extracts were delivered to the dipsticks as methanolic/aqueous mixtures using the standard addition method. After addition of the BA solution onto the dipstick, the liquid spread over the whole dipstick area. The dipsticks were then transferred to glass slides to allow the reaction with the S0378 dye embedded in the fibers to occur in an oven at 130 °C for 30 min. Then, the reflectance of all dipsticks was measured again. The reflectance of each dipstick was measured twice, before and after the reaction with BAs. This accounts for potential differences of the reflectance of individual dipsticks originating from minute differences in the thickness of the fiber mats and from mispositioning.
The percentage of the reflectance change was calculated by dividing the absolute difference of the dipstick reflectance before and after the reaction by the reflectance before the reaction, multiplied by 100%. The reflectance change (ΔR [%]) was then plotted against the concentration of the BA.

3.5. Calibration and Sensitivity

The effect of different BA concentrations on the reflectance of the dipsticks was studied for four biogenic amines (spermidine, tyramine, putrescine, and histamine). Their calibration plots are presented in Figure 8 and Figure 9. Eleven concentrations (blank, 0.010, 0.020, 0.040, 0.080, 0.10, 0.20, 0.40, 0.60, 0.80, and 1.0 mM) were used to cover the whole dynamic range of the dipsticks until saturation of the change of reflectance was reached. Four replicates were used for each concentration. The calibration curves for all BAs are not linear but rather resemble saturation curves. The signal increase is steeper at lower concentrations (0–0.1 mM) than at higher concentrations (0.2–0.6 mM) and reaches saturation at 0.6 mM for all of the tested BAs except histamine. The dynamic ranges are 0.040–0.60, 0.080–0.60, 0.10–0.60, and 0.10–1.0 mM for SPR, TYR, PUT, and HIS, respectively. The LOD is the concentration corresponding to the blank signal plus three standard deviations. The LODs were found to be 30, 30, 80, and 90 μM for SPR, TYR, PUT, and HIS, respectively. The dynamic ranges and LODs of all BAs are summarized in Table 1. All BAs have a similar response and sensitivity, even though one could expect a higher sensitivity for the polyamines like putrescine and spermidine. Obviously, the average distance to the next proximate S0378 dye (embedded in either the same or a neighboring fiber) is larger than the average length of putrescine or spermidine, respectively, so that no additional reaction occurs.
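The ΔR [%] calculation and the blank-plus-3σ LOD criterion described above can be expressed compactly as follows; all reflectance values and calibration points below are illustrative numbers, not the measured data of this study.

```python
import numpy as np

def delta_R(R_before, R_after):
    """Reflectance change in percent: |R_before - R_after| / R_before * 100."""
    return abs(R_before - R_after) / R_before * 100.0

# Example: reflectance drops from 0.82 to 0.55 after the reaction with a BA.
print(f"dR = {delta_R(0.82, 0.55):.1f} %")

# LOD criterion: the concentration whose signal equals blank mean + 3*SD(blank).
blank_dR = np.array([2.1, 2.4, 1.9, 2.2])            # replicate blank dR values
threshold = blank_dR.mean() + 3 * blank_dR.std(ddof=1)

# Hypothetical low-concentration calibration points (c in mM, dR in %):
c  = np.array([0.010, 0.020, 0.040, 0.080, 0.10])
dR = np.array([2.5,   3.1,   4.4,   7.0,   8.3])
lod = np.interp(threshold, dR, c)   # invert the (monotonically increasing) calibration
print(f"LOD = {lod * 1000:.0f} µM")
```

Since the real calibration curves are saturation-shaped rather than linear, inverting the measured curve point-by-point, as `np.interp` does here, is safer than extrapolating a straight-line fit.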
Moreover, the steric hindrance of the secondary amino group of a S0378-spermidine conjugate is obviously too high to allow access to the rigid conjugated π-system of a neighboring NIR dye. In addition, a further SN1 reaction of the secondary amino group with another S0378 molecule seems to be prevented. The very similar response of the NIR dye to all BAs is very beneficial for its use in dipsticks. These are intended as screening tools for the determination of the overall content of BAs in a sample, to unveil a suspicious one that would then be further inspected, e.g., with HPLC-MS. In this case, a very similar change of reflectance of the dipsticks in response to all BAs is beneficial for determining the sum content of all BAs, irrespective of their chemical structure. This also reduces potential errors that might arise from an over-response of the S0378 dye towards diamine BAs. The concentration range of BAs in foods that might be potentially dangerous for health is between 0.3 and 1.0 mM. The LODs and working ranges of the dipsticks are even sufficient to detect concentrations lower than 0.3 mM (Table 1). This is an advantage, because uncritical concentrations of BAs, as, e.g., naturally occurring in seafood, are also accessible. Moreover, one can follow the increase of the concentration of BAs during the ageing of seafood using a suitable dilution (see Section 3.7 on the quantitation of BAs in real samples). This makes the dipsticks suitable tools to control food freshness or spoilage.

3.6. Selectivity

The selectivity of the S0378-CA nanofibers was tested with the following substances: dimethylamine (DMA), triethylamine (TEA), human serum albumin (HSA), and cysteine (CYS) (Figure S9). These particular substances were chosen to test the selectivity of the response of the dipsticks towards secondary amines, tertiary amines, proteins, and thiols (in the absence of BAs). Being nucleophiles, these could interfere in the reaction of S0378 with a BA.
The effect of these interferents on the reflectance of the dipsticks was studied as a function of their concentration (Figure 10). No significant interference on the reflectance is observed for DMA and TEA. The dye does not react with secondary or tertiary amines but only with primary amino groups. This is probably due to the higher steric hindrance of DMA and TEA and the low accessibility of the carbenium ion located between the four methyl groups at the two indole moieties of S0378. The embedding of the dye inside the fibers of the CA hydrogel should also contribute here. CYS only slightly decreases the reflectance response of the dipsticks in general but has no concentration-dependent effect. Although CYS has a primary amino group, this group is still largely protonated at the pH of the reaction (9.7), because its pKa is 10.77 [33]. This means that deprotonated (and hence nucleophilic) CYS is present only to a minute degree. CYS therefore mostly exists as a zwitterion in which the amino group is not available to react. Although the thiol group could act as an interfering nucleophile and react with the NIR dye, it obviously is less reactive under the conditions of the dipstick reaction. HSA shows an interference only at relatively high concentrations (0.04–0.2 mM). This could be expected, because it has various primary amino groups that could react with the dye. On the other hand, the CA polymer of the dipsticks particularly hinders the access of larger macromolecules like proteins. Therefore, the reaction of free amino groups of HSA is only possible at relatively high concentrations of the protein. At cHSA > 0.2 mM, the fibers are destroyed. The reason for this is still unknown.
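The protonation argument for CYS above can be checked with a short Henderson-Hasselbalch calculation, using the pKa value cited in the text:

```python
# Fraction of cysteine amino groups that are deprotonated (and hence
# nucleophilic) at a given pH, via the Henderson-Hasselbalch equation:
# fraction = 1 / (1 + 10^(pKa - pH))
def deprotonated_fraction(pKa, pH):
    return 1.0 / (1.0 + 10.0 ** (pKa - pH))

f = deprotonated_fraction(10.77, 9.7)   # pKa of the CYS amino group [33]
print(f"{f * 100:.1f} % deprotonated")  # only a minor fraction is reactive
```

With pKa = 10.77 and pH 9.7, only about 8% of the CYS amino groups are unprotonated, consistent with the minor, concentration-independent response observed for CYS.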
The interference of proteins, however, might be relevant for the use of the dipsticks in real samples, but the common sample pretreatments for the determination of BAs, which use Carrez solutions (this work), methanol [20,31,33], or trichloroacetic acid [19], warrant quantitative removal of proteins prior to use of the dipstick. Additionally, the selectivity of the dipsticks towards tyramine in the presence of HSA as interferent was tested. HSA (0.04 mM) was added to solutions with increasing concentrations of TYR, and ΔR [%] was measured and compared to the ΔR of TYR solutions of the same concentrations without HSA. The addition of HSA up to 0.04 mM does not affect the calibration plot of tyramine, as is obvious from Figure 11. Obviously, the reaction rate of S0378 inside the fibers with primary amines is considerably higher than the reaction rate with the amines of proteins, due to steric hindrance. Hence, the dipsticks react selectively to primary (biogenic) amines as found in food samples, provided that an appropriate sample preparation is applied.

3.7. Quantitation of BAs in Real Samples

Finally, the response of the dipsticks to BAs in shrimp samples was tested over a 6-day storage period at room temperature. The extraction of the shrimp samples follows the AOAC method 35.1.32 [31], and standard additions were done. The methanolic extraction procedure works best in that it is fast, simple, and eliminates the interference of proteins to a major degree. Proteins denature and precipitate in methanol and can be removed by filtration. Carrez solution I (potassium ferrocyanide) and Carrez solution II (zinc acetate) were added to the shrimp extract before filtration to completely precipitate the proteins and avoid any interference. Histamine was chosen as the standard to be added, because it is the major BA present in seafood samples [28]. For each ageing day, a suitable dilution factor was used to remain within the dynamic range of the dipsticks for the determination of HIS.
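The standard-addition evaluation used for the shrimp extracts (linear fit of ΔR against added HIS, extrapolation to zero signal, then back-calculation through the dilution factor and the 10.0 g / 50 mL extraction) can be sketched as follows. The data points and the dilution factor below are hypothetical illustrations, not the measured values of this study.

```python
import numpy as np

# Hypothetical standard-addition series for one ageing day:
added_uM = np.array([0.0, 100.0, 200.0, 400.0, 600.0])  # added HIS, µM
dR       = np.array([8.0, 11.1, 14.0, 20.2, 26.0])      # reflectance change, %

slope, intercept = np.polyfit(added_uM, dR, 1)
# Extrapolation to dR = 0 gives the HIS concentration in the diluted extract:
c_extract_uM = intercept / slope

# Back-calculate to µmol HIS per gram of shrimp, assuming a dilution factor
# of 10 for this ageing day and the 10.0 g / 50 mL methanolic extraction:
dilution, V_mL, m_g = 10.0, 50.0, 10.0
tac = c_extract_uM * 1e-6 * dilution * (V_mL * 1e-3) / m_g * 1e6  # µmol/g
print(f"extract: {c_extract_uM:.0f} µM, sample: {tac:.1f} µmol HIS/g")
```

Since the calibration of the dipsticks is only linear over part of the dynamic range, the dilution factor must be chosen per ageing day so that all standard-addition points stay within the linear region, as described in the text.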
The development of the concentration of the BAs during the ageing of shrimp at room temperature, as derived from the standard addition plots, is given in Figure S10. The R² values of all linear fits of the real samples exceed 0.96 (Table 2). These correlation coefficients are good, considering that dipsticks are always less reproducible than instrumental analytical methods [19,29]. With respect to shrimp ageing at room temperature, the increase of the HIS concentrations found can also be translated into a total content of biogenic amines (TAC). The TAC is then expressed in equivalents of histamine (µmol HIS/g sample). Since histamine is the major biogenic amine occurring in seafood, its concentration is representative of the TAC [34]. Furthermore, the mean molar mass of the BAs found in food roughly equals that of histamine. The histamine concentrations found in shrimp during ageing at room temperature were 7.54 ± 0.96, 12.8 ± 0.8, and 21.7 ± 3.2 µmol/g (n = 4, each) on days 0, 1, and 6, respectively, as indicated in Table 2. These results (Figure S11) agree well with the ageing profiles of real shrimp samples found in earlier work [35].

4. Conclusions

Chromogenic dipsticks based on electrospun CA nanofiber mats doped with the S0378 dye are introduced for the reflectometric determination of the content of biogenic amines (BAs) in food samples. S0378 is a chameleon amine-reactive dye which changes its color from green to blue when conjugated to primary amino groups. Hence, the reaction of the dipsticks with BAs was monitored via reflectance measurements. The dipsticks can also be used for a qualitative yes/no analysis by the naked eye for the detection of potentially dangerous concentrations of biogenic amines. Various biogenic amines, such as histamine, tyramine, putrescine, and spermidine, all show an equal reflectance response.
The selectivity of the dipsticks towards primary amines over secondary and tertiary amines is very good, and only high protein concentrations may interfere. For real samples, this is unimportant, since the sample preparation usually involves steps for protein precipitation. Quantitative analysis of BAs in shrimp samples stored at room temperature successfully delivered a typical profile of the BA concentration upon ageing of the sample, in agreement with previous studies. Hence, these dipsticks show that electrospun nanofibers are very useful sensor materials for monitoring food freshness with an innovative and simple device.

Supplementary Materials

The following are available online at https://www.mdpi.com/2227-9040/8/4/99/s1, Figure S1: Image of the electrospinning setup with voltage supply and syringe pump, Figure S2: Change of the fluorescence emission of 5 μM S0378 dye upon binding to 5 μM tyramine at 80 °C over time in CHES buffer, Figure S3: Effect of reaction time on the reflectance of the dipstick at 650 nm for HIS solutions of 0, 0.02, 0.04, 0.1, 0.2, 0.4, and 0.8 mM, Figure S4: Representative sample for calculation of the pore size as indicated by the Feret diameter, Figure S5: Morphology of S0378-CA nanofibers electrospun for 15 min as taken by SEM, Figure S6: Sketch of a cross-sectional view of fused S0378-CA nanofibers, Figure S7: Instrumental setup used for reflectometric measurements, Figure S8: Close-up view of the dipsticks on the sample holder during detection, Figure S9: Structures of the potential interferents tested with the dipsticks, Figure S10: Calibration plots obtained from the change of the reflectance upon ageing of shrimp at room temperature using standard additions and the dilution factors given in Table 2 on days 0, 1, and 6, Figure S11: Bar plot of the TAC of shrimp ageing at room temperature over 6 days, Table S1: Spinning conditions and fiber (mat) characterization data.

Author Contributions

Conceptualization, A.D.
and A.J.B.; methodology, A.D. and S.N.M.; software, S.N.M., N.W., and M.S.; validation, S.N.M. and A.D.; formal analysis, S.N.M., A.D., N.W., and M.S.; investigation, S.N.M.; resources, A.J.B. and A.D.; data curation, S.N.M.; writing—original draft preparation, S.N.M.; writing—review and editing, S.N.M. and A.D.; visualization, S.N.M. and A.D.; supervision, A.D. and A.J.B.; project administration, A.D.; funding acquisition, University of Regensburg and DAAD. All authors have read and agreed to the published version of the manuscript.

Funding

S.N.M. received support from the German Academic Exchange Service (DAAD) through a German-Egyptian Research Long-Term Scholarship, No. 91614688.

Conflicts of Interest

The authors declare no conflict of interest.

References

Oliveira, J.; Mantoanelli, F.; Moreira, L.; Elisabete, G.; Pereira, A. Dansyl chloride as a derivatizing agent for the analysis of biogenic amines by CZE-UV. Chromatographia 2020, 83, 767–778.
Chong, C.Y.; Bakar, F.A.; Russly, A.R.; Jamilah, B.; Mahyudin, N.A. The effects of food processing on biogenic amines formation. Int. Food Res. J. 2011, 18, 867–876.
Papageorgiou, M.; Lambropoulou, D.; Morrison, C.; Kłodzińska, E.; Namieśnik, J.; Płotka-Wasylka, J. Literature update of analytical methods for biogenic amines determination in food and beverages. TrAC Trends Anal. Chem. 2018, 98, 128–142.
Kettner, L.; Seitl, I.; Fischer, L. Evaluation of porcine diamine oxidase for the conversion of histamine in food-relevant amounts. J. Food Sci. 2020, 85, 843–852.
Dong, H.; Xiao, K. Modified QuEChERS combined with ultra high performance liquid chromatography tandem mass spectrometry to determine seven biogenic amines in Chinese traditional condiment soy sauce. Food Chem. 2017, 229, 502–508.
Önal, A. A review: Current analytical methods for the determination of biogenic amines in foods. Food Chem.
2007, 103, 1475–1486.
Mohammed, G.I.; Bashammakh, A.S.; Alsibaai, A.A.; Alwael, H.; El-Shahawi, M.S. A critical overview on the chemistry, clean-up and recent advances in analysis of biogenic amines in foodstuffs. TrAC Trends Anal. Chem. 2016, 78, 84–94.
Erim, F.B. Recent analytical approaches to the analysis of biogenic amines in food samples. TrAC Trends Anal. Chem. 2013, 52, 239–247.
He, L.; Xu, Z.; Hirokawa, T.; Shen, L. Simultaneous determination of aliphatic, aromatic and heterocyclic biogenic amines without derivatization by capillary electrophoresis and application in beer analysis. J. Chromatogr. A 2017, 1482, 109–114.
Li, D.W.; Liang, J.J.; Shi, R.Q.; Wang, J.; Ma, Y.L.; Li, X.T. Occurrence of biogenic amines in sufu obtained from Chinese market. Food Sci. Biotechnol. 2019, 28, 319–327.
Vanegas, D.C.; Patiño, L.; Mendez, C.; de Oliveira, D.A.; Torres, A.M.; Gomes, C.L.; McLamore, E.S. Laser scribed graphene biosensor for detection of biogenic amines in food samples using locally sourced materials. Biosensors 2018, 8, 42.
Huisman, H.; Wynveen, P.; Nichkova, M.; Kellermann, G. Novel ELISAs for screening of the biogenic amines GABA, glycine, β-phenylethylamine, agmatine, and taurine using one derivatization procedure of whole urine samples. Anal. Chem. 2010, 82, 6526–6533.
Danchuk, A.I.; Komova, N.S.; Mobarez, S.N.; Doronin, S.Y.; Burmistrova, N.A.; Markin, A.V.; Duerkop, A. Optical sensors for determination of biogenic amines in food. Anal. Bioanal. Chem. 2020, 412, 4023–4036.
Xiao-Wei, H.; Zhi-Hua, L.; Xiao-Bo, Z.; Ji-Yong, S.; Han-Ping, M.; Jie-Wen, Z.; Li-Min, H.; Holmes, M. Detection of meat-borne trimethylamine based on nanoporous colorimetric sensor arrays. Food Chem. 2016, 197, 930–936.
[Google Scholar] [CrossRef] Bueno, L.; Meloni, G.N.; Reddy, S.M.; Paixão, T.R.L.C. Use of plastic-based analytical device, smartphone and chemometric tools to discriminate amines. RSC Adv. 2015, 5, 20148–20154. [Google Scholar] [CrossRef] Schaude, C.; Meindl, C.; Fröhlich, E.; Attard, J.; Mohr, G.J. Developing a sensor layer for the optical detection of amines during food spoilage. Talanta 2017, 170, 481–487. [Google Scholar] [CrossRef] Roales, J.; Pedrosa, J.M.; Guillén, M.G.; Lopes-Costa, T.; Pinto, S.M.A.; Calvete, M.J.F.; Pereira, M.M. Optical detection of amine vapors using ZnTriad porphyrin thin films. Sens. Actuators B Chem. 2015, 210, 28–35. [Google Scholar] [CrossRef] Banimuslem, H.; Hassan, A.; Basova, T.; Esenpinar, A.A.; Tuncel, S.; Durmuş, M.; Gürek, A.G.; Ahsen, V. Dye-modified carbon nanotubes for the optical detection of amines vapours. Sens. Actuators B Chem. 2015, 207, 224–234. [Google Scholar] [CrossRef] Steiner, M.S.; Meier, R.J.; Duerkop, A.; Wolfbeis, O.S. Chromogenic sensing of biogenic amines using a chameleon probe and the red-green-blue readout of digital camera images. Anal. Chem. 2010, 82, 8402–8405. [Google Scholar] [CrossRef] Khairy, G.M.; Azab, H.A.; El-Korashy, S.A.; Steiner, M.S.; Duerkop, A. Validation of a fluorescence sensor microtiterplate for biogenic amines in meat and cheese. J. Fluoresc. 2016, 26, 1905–1916. [Google Scholar] [CrossRef] Lee, B.; Scopelliti, R.; Severin, K. A molecular probe for the optical detection of biogenic amines. Chem. Commun. 2011, 47, 9639–9641. [Google Scholar] [CrossRef] [PubMed] Mohr, G.J. A tricyanovinyl azobenzene dye used for the optical detection of amines via a chemical reaction in polymer layers. Dye Pigment. 2004, 62, 77–81. [Google Scholar] [CrossRef] Mastnak, T.; Lobnik, A.; Mohr, G.J.; Finšgar, M. Indicator layers based on ethylene-vinyl acetate copolymer (EVA) and dicyanovinyl azobenzene dyes for fast and selective evaluation of vaporous biogenic amines. Sensors 2018, 18, 4361. 
[Google Scholar] [CrossRef] [PubMed] Siripongpreda, T.; Siralertmukul, K.; Rodthongkum, N. Colorimetric sensor and LDI-MS detection of biogenic amines in food spoilage based on porous PLA and graphene oxide. Food Chem. 2020, 329, 127165. [Google Scholar] [CrossRef] Yu, H.; Zhuang, D.; Hu, X.; Zhang, S.; He, Z.; Zeng, M.; Fang, X.; Chen, J.; Chen, X. Rapid determination of histamine in fish by thin-layer chromatography-image analysis method using diazotized visualization reagent prepared with: P -nitroaniline. Anal. Methods 2018, 10, 3386–3392. [Google Scholar] [CrossRef] Nelis, J.L.D.; Tsagkaris, A.S.; Dillon, M.J.; Hajslova, J.; Elliott, C.T. Smartphone-based optical assays in the food safety field. TrAC Trends Anal. Chem. 2020, 129, 115934. [Google Scholar] [CrossRef] Wojnowski, W.; Kalinowska, K.; Majchrzak, T.; Płotka-Wasylka, J.; Namieśnik, J. Prediction of the biogenic amines index of poultry meat using an electronic nose. Sensors 2019, 19, 1580. [Google Scholar] [CrossRef] Basavaraja, D.; Dey, D.; Varsha, T.L.; Thodi, F.; Salfeena, C.; Panda, M.K.; Somappa, S.B. Rapid Visual Detection of Amines by Pyrylium Salts for Food Spoilage Taggant. ACS Appl. Bio Mater. 2020, 3, 772–778. [Google Scholar] [CrossRef] Yurova, N.S.; Danchuk, A.; Mobarez, S.N.; Wongkaew, N.; Rusanova, T.; Baeumner, A.J.; Duerkop, A. Functional electrospun nanofibers for multimodal sensitive detection of biogenic amines in food via a simple dipstick assay. Anal. Bioanal. Chem. 2018, 410, 1111–1121. [Google Scholar] [CrossRef] Fazial, F.F.; Tan, L.L.; Zubairi, S.I. Bienzymatic creatine biosensor based on reflectance measurement for real-time monitoring of fish freshness. Sens. Actuators B Chem. 2018, 269, 36–45. [Google Scholar] [CrossRef] AOAC Official. Methods of Analysis, 16th ed.; AOAC: Washington, DC, USA, 1995. [Google Scholar] Gorris, H.H.; Saleh, S.M.; Groegel, D.B.M.; Ernst, S.; Reiner, K.; Mustroph, H.; Wolfbeis, O.S. 
Figure 1. Illustration of the formation of electrospun fiber mats containing the S0378 dye, the chemical reaction of S0378 with BAs and the detection of the response of the dipstick. Unreacted nanofibers containing the S0378 dye are electrospun to form a sensor mat on ITO (left). Upon reaction with a biogenic amine (BA), a blue conjugate between S0378 and the BA is formed (top right). For quantitation of the BA, the color change of the nanofiber mat is read by reflectometry (bottom right).

Figure 2. Absorption spectra of 5 μM S0378 dye upon binding to 5 μM tyramine at 80 °C over time in CHES buffer.

Figure 3. Reflectance spectra of a mat of S0378-CA nanofibers on a dipstick in the absence of amine (red) and after reaction with 1 mM tyramine (black) (n = 3).

Figure 4. Effect of dye concentration and spinning time on the change of reflectance of the dipstick at 650 nm upon reaction with HIS solutions of 0, 0.01, 0.02, 0.04, 0.08, 0.1, 0.2, 0.4, 0.6, 0.8 and 1 mM (n = 4) at 70 °C.

Figure 5. Effect of reaction temperature on the reflectance of the dipstick at 650 nm to HIS solutions of 0, 0.02, 0.04, 0.1, 0.2, 0.4 and 0.8 mM concentration in CHES buffer.

Figure 6. Morphology of S0378-CA nanofibers electrospun for 15 min, as imaged by SEM at 2500-fold magnification.

Figure 7. Visible color change of dipsticks with S0378-CA nanofibers reacted with histamine solutions of different concentrations at 130 °C for 30 min. (1) blank solution (CHES buffer); (2) 0.020; (3) 0.040; (4) 0.060; (5) 0.080; (6) 0.10; (7) 0.20; (8) 0.40; (9) 0.60; (10) 0.80; (11) 1.0 mM of histamine.

Figure 8. Calibration plot of the reflectometric response of the dipsticks to spermidine and tyramine (illumination wavelength = 635 nm) (n = 4).

Figure 9. Calibration plot of the reflectometric response of the dipsticks to putrescine and histamine (illumination wavelength = 635 nm) (n = 4 (PUT), n = 6 (HIS)).

Figure 10. Selectivity of the dipstick towards DMA, TEA, HSA and CYS (n = 4; illumination wavelength = 635 nm).

Figure 11. Effect of the HSA interferent (0.04 mM) on the calibration plot of dipsticks with TYR (illumination wavelength = 635 nm) (n = 4).

Table 1. Dynamic range and LOD derived from the reflectometric response of the dipsticks towards various BAs.

BA     LOD (mM)    Dynamic Range (mM)
SPR    0.030       0.040–0.60
TYR    0.030       0.080–0.60
PUT    0.080       0.10–0.60
HIS    0.090       0.10–1.0

Table 2. TAC expressed as histamine concentration as determined in shrimp over various days of ageing at room temperature.
635 nm    Dilution Factor    Intercept    Slope    TAC (µmol/g)    SD of TAC (%)    R²
day 0     1:10               11.97        79.38    7.54 ± 0.96     12.7             0.969
day 1     1:10               18.21        71.37    12.8 ± 0.8      6.02             0.986
day 6     1:20               12.52        57.61    21.7 ± 3.2      14.7             0.962

Publisher's Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

Chemosensors, EISSN 2227-9040, Published by MDPI.
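The TAC values in Table 2 are derived from per-day linear calibration functions (the intercept and slope columns) together with the dilution factor applied to the shrimp extract. The following is a minimal sketch of how such a conversion could work, assuming the reflectometric signal follows signal = intercept + slope · c (with c in mM, i.e., µmol/mL) and that TAC is the interpolated concentration corrected for dilution and normalised to extract volume and sample mass. The function names and the volume/mass parameters are illustrative assumptions, not the authors' code; the extract volume and sample mass are not reported in this excerpt.

```python
def concentration_from_signal(signal, intercept, slope):
    """Invert a linear calibration signal = intercept + slope * c (c in mM)."""
    if slope == 0:
        raise ValueError("slope must be non-zero")
    return (signal - intercept) / slope


def tac_umol_per_g(signal, intercept, slope, dilution_factor, extract_ml, sample_g):
    """Hypothetical total amine content in umol/g: the interpolated
    concentration (mM = umol/mL) corrected for the dilution applied before
    measurement and normalised to the mass of sample extracted."""
    c_mm = concentration_from_signal(signal, intercept, slope)
    return c_mm * dilution_factor * extract_ml / sample_g


# Example with the day-0 calibration parameters from Table 2: a signal
# constructed to correspond to 0.1 mM interpolates back to 0.1 mM.
c = concentration_from_signal(11.97 + 79.38 * 0.1, intercept=11.97, slope=79.38)
```

Because the normalisation terms are not given here, the sketch only illustrates the calibration-inversion step; the tabulated µmol/g values additionally depend on how much shrimp was extracted into how much buffer.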
work_caungjrpgnh43gnxnuio6cuyb4 ----

City, University of London Institutional Repository

Citation: Walsh, M.J. & Baker, S.A. (2016). The selfie and the transformation of the public–private distinction. Information, Communication & Society, doi: 10.1080/1369118X.2016.1220969

This is the accepted version of the paper. This version of the publication may differ from the final published version.

Permanent repository link: http://openaccess.city.ac.uk/16340/
Link to published version: http://dx.doi.org/10.1080/1369118X.2016.1220969

Copyright and reuse: City Research Online aims to make research outputs of City, University of London available to a wider audience. Copyright and Moral Rights remain with the author(s) and/or copyright holders. URLs from City Research Online may be freely distributed and linked to.

City Research Online: http://openaccess.city.ac.uk/ publications@city.ac.uk

The selfie and the transformation of the public-private distinction

Dr. Michael J. Walsh* and Dr. Stephanie Alice Baker

School of Government and Policy, Faculty of Business, Government and Law, and Academic Fellow at the Institute of Governance and Policy Analysis, University of Canberra, Canberra, Australia
School of Arts and Social Sciences, City University London, London, United Kingdom

Dr. Michael J. Walsh
University of Canberra
University Drive, Bruce, ACT 2617
Faculty of Business, Government and Law, Building 11
Michael.Walsh@canberra.edu.au

Dr. Stephanie Alice Baker
City University London
Social Sciences Building
Whiskin Street
London EC1R 0JD
Office: D624
Stephanie.Baker@city.ac.uk

Biographical note: Dr Michael Walsh is an Assistant Professor in the Faculty of Business, Government and Law and Academic Fellow at the Institute for Governance and Policy Analysis at the University of Canberra. Michael was awarded his Ph.D. from Monash University in 2011 and researches in the areas of the sociology of the senses, social interaction and technology.

Biographical note: Dr Stephanie Alice Baker joined the Department of Sociology as a lecturer in 2014. She received a B.A. (Hons, 1st Class) from the University of Sydney in 2006 and a Ph.D. (awarded without revision) from the University of Western Sydney in 2010. Since completing her doctorate she has held research positions at the Indian Institute of Technology – Bombay, the University of Western Sydney and Goldsmiths, University of London, as well as a visiting lectureship at the University of Greenwich, UK. She also serves as a representative of the European Sociological Association's Emotions Research Network.

*Corresponding author

ABSTRACT: The selfie is a contemporary form of self-portraiture, representing a photographic image of the human face. The selfie is created for the purpose of reproduction and to communicate images visually with others from a distance. The proliferation of web 2.0 technologies and mobile smart phones enables users to generate and disseminate images at an unprecedented scale. Coupled with the increasing popularity of social media platforms, these technologies allow the selfie to be distributed to a wide audience in close to real time.
Drawing upon Erving Goffman's approach to the study of face-to-face social interaction, this article presents a discussion of the production and consumption of the selfie. We draw upon Goffman's dramaturgical approach to explore how the 'presentation of self' occurs in the context of a selfie. Next, we consider how the selfie as a form of visual communication holds critical implications for mediated life online as individuals go about doing privacy. We conclude by reflecting on the role of the selfie and its impact on the boundaries between public and private domains in contemporary social life.

Keywords: Erving Goffman; Face; Privacy; Photography; Selfie

1. Introduction

Contemporary society is enamoured with the visual. Through the visual we are connected to one another in a manner seldom paralleled by other sensory stimulation. With visual communication reigning supreme in late modernity, some theorists have characterised contemporary society in terms of the so-called 'visual turn' (Jay, 1994). Considerable developments in cultural and technological practices have facilitated changes in the 'form and fluidity of new media technologies' that permit 'a succession of new forms of visual experience' (Spencer, 2011, p. 10). Three technological innovations have been fundamental to these developments: the emergence of web 2.0 technologies, as well as the affordability and accessibility of mobile smart phones and digital cameras, which enable users to generate, capture and disseminate images instantly and en masse (see also Hand, 2012). Coupled with the increasing popularity of social media platforms, these technological innovations allow the selfie to be distributed in close to real time to a networked society. These practices have fundamentally altered routine modes of social interaction (communication, privacy and public behaviour), making the notion of 'the visual' both timely and sociologically significant.
Drawing upon Erving Goffman's approach to the study of face-to-face social interaction, we present a theoretical discussion of the production and consumption of the selfie. We adopt Goffman's dramaturgical approach to public and private photographs to explore how the 'presentation of self' occurs in the context of a 'selfie' in contemporary society. This paper develops Goffman's understanding of the self by providing a definition of the selfie and examining its relationship to visual culture. Next, we discuss the implications this shift toward life mediated online has for individuals as they go about doing privacy within our contemporary media ecology. Embedded in social media sites, selfies are user-generated content frequently produced and consumed by young people (Döring et al., 2016, p. 956). Finally, we contend that the selfie provides insight into the shifting boundaries between public and private domains in contemporary social life (Kumar & Makarova, 2008; Meyrowitz, 1986; Sheller & Urry, 2003; Zerubavel, 1979, 1982; Ford, 2015).

In this article we consider how the selfie is connected to larger social transformations. To achieve this objective we draw on Goffman's (1979) ideas of the public and private photograph in Gender Advertisements, using this work as a framework to explore the interactional implications of disseminating photographic images of the self in the 'age of the digital prosumer' (Ritzer & Jurgenson, 2010). 1 Although writing before the advent of the digital, Goffman's ideas are highly relevant when considering the social significance of the selfie. While much literature has been written on the self in the digital world, Goffman's legacy provides a solid sociological foundation through which to understand the phenomenon. Through his performative lens we become attuned to the interactional implications of representing the self visually through the selfie. We agree with Jamieson (2013, p. 28) when she states that 'interactionist accounts of the self can remain fit for a theoretical purpose in a digital age'. For, despite the need to consider the distinct ways in which the self develops in physical and digital domains, it would be premature to divorce self-formation online from face-to-face communication (see also Robinson, 2007). According to Goffman, copresence 2 is the defining aspect of society. This is indicated in Goffman's first book, Presentation of Self in Everyday Life (1959), and in his final public engagement, where he indicated his life's work concerned a fascination and engagement with the idea of the 'interaction order'. The interaction order is the defining feature of the human condition. Individuals are conditioned to be in the immediate presence of others: 'in other words, that whatever they are, our doings are likely to be, in the narrow sense, socially situated' (Goffman, 1983, p. 2). Although the selfie is a digital phenomenon that concerns the 'circulation of self-generated digital photographic portraiture, spread primarily via social media' (Senft & Baym, 2015, p. 1588), visual interaction remains a vital part of this practice.

1 Although never originally framed as a work of visual sociology, Gender Advertisements now arguably occupies a special place in the pantheon of visual sociology (Smith, 2010, p. 174).

2. Antecedents of the Selfie: Goffman on photographs

The selfie is defined by three interrelated components: 1. the self-capturing and reproduction of the visual; 2. the portraiture of the human face; and 3. created for the purpose of sharing. 3 Although the first two aspects of the selfie have considerable precedence in the visual arts, it is arguably the third component – the dissemination and sharing of the photographic image and its rapidity – that is most intriguing.
Notwithstanding photography’s entanglement in digital media perhaps the most important element of photographic communication is that it captures social action and fixes it in place. To put this in Goffman’s terms, the photographic image becomes lodged within a distinct spatial-temporal frame, rendering the fleeting static: The rendition of structurally important social arrangements and ultimate beliefs which ceremony fleetingly provides the senses, still photography can further condense, omitting temporal sequences and everything else except static visual arrays. And what is caught is fixed into permanent accessibility, becoming something that can be attended anywhere, for any length of time, and at moments of one’s own choosing. (Goffman, 1979, p. 10) This permanent accessibility of the visual allows the photo to become a technology not only of remembrance and nostalgia, but also a technology that feeds into the transformation occurring between public and private social life. Goffman’s formulation echoes Sontag’s (1977, p. 15) meditation on the photograph: ‘[p]recisely by slicing out this moment and freezing it, all photographs testify to time’s relentless melt’. It is the rendering of social action into the permanently accessible photo that has implications for its insertion into situations that it may not usually originate or be party to. Of importance here is Goffman’s suggestion that photographs can be segregated into two classes: public and private. For Goffman, private pictures are those ‘designed for display within the intimate social circle of persons featured in them’ (1979, p. 10). These are pictures that the individual takes to commemorate occasions, relationships, achievements, and life-turning events. What Goffman is describing is the notion of the photographic image as keepsake, which is representative of 2 See Goffman’s (1983, p. 2) written Presidential Address to the American Sociological Association (not presented due to a terminal illness). 
Here he coins the term ‘interaction order’ to describe the overall guiding focus during his career: ‘My concern over the years has been to promote acceptance of this face-to-face domain as an analytically viable one – a domain which might be titled, for want of any happy name, the interaction order – a domain whose preferred method of study is microanalysis. My colleagues have not been overwhelmed by the merits of the case.’ 3 These three elements are defined in greater detail in the following section. 5 photography in the analogue age as a means for autobiographical remembering rather than as a way of experiencing ‘live’ communication (van Dijck, 2008, p. 58). Yet even though these images are labelled ‘private’, they may be capturing events occurring in public, such as rituals and ceremonies. According to Goffman, such images are private because they hold significance and remain part of the domestic ceremonial life. They contain portraiture that makes striking to the senses what otherwise becomes overlooked in everyday life. As a result, private photographs allow the individual to furnish their surroundings in a ‘self- enhancing’ way. Writing in the late seventies, Goffman noted the use of domestic photography as a key part of furnishing domestic space. Csikszentmihalyi and Rochberg- Halton similarly suggest that ‘more than any other object in the home, photos serve the purpose of preserving and enhancing the memory of personal ties. In their ability to arouse emotion there is no other type of object that can surpass them’ (1981, p. 69). Statements such as these parallel Adorno’s (2002, p. 274) contention that through the practice of listening to one’s own record collection flattering ‘virtual photographs’ of their owners can be obtained whereby the record collection affirms the owner as they ‘could also be just as well preserved’ and hence, a form of ‘self-worship can thus be accomplished’ (Goffman, 1979, p. 11). 
In this regard, Goffman, like Adorno, understood the role material culture played in furnishing domestic space to affirm the self throughout everyday life (see also Pink, 2006). Though the constituent materiality of the photograph has evolved, the role of photographic representation as it pertains to self-affirmation and self-presentation remains elemental.

If private images are used predominantly for domestic purposes, then public pictures can be said to be designed for a wider audience: 'an anonymous aggregate of individuals unconnected to one another by social relationship and social interaction' (1979, p. 11). The public dissemination of the photograph is important to this distinction. Here 'print is usually not the final form, only a preliminary step in some type of photo-mechanical reproduction in newspapers, magazines, books, leaflets, or posters' (Goffman, 1979, p. 11). Print in this sense refers to the process that renders printed images from a still film camera. By extension we can augment this to include digital dissemination. While significant changes have occurred in the intervening period, the public picture for Goffman contains images that are commercial in nature, news-related (matters of current scientific, social and political concern), instructional (for example, those images found in medical textbooks), human interest stories (whereby those pictured are anonymous and candidly taken) and, finally, personal publicity pictures. This last category is said to be 'designed to bring before the public a flattering portrait of some luminary, whether political, religious, military, sporting, theatrical, literary, or – where a class elite still functions and is publicized – social' (Goffman, 1979, p. 11). This dichotomy of the private and public photograph is an intriguing one. Like most dichotomies, what is particularly revealing is when one considers the possibility of seepage between the two categories.
As with most social behaviour that can be symbolically situated as public or private, photographs possess the potential to traverse the thinly veiled boundary.

Photographic portraiture represents a rather significant social invention, for, even apart from its role in domestic ritual, it has come to provide a low and very little guarded point in the barrier that both protects and restrains persons of private life from passing over into public recognition. (Goffman, 1979, p. 11)

Goffman’s statement is suggestive of the interactional implications the photograph can possess, even prior to its integration in digital media. The photograph is identified as an object that can seep through or ‘melt’ the boundary demarcating public and private spheres. Portraiture, in this sense, even in its traditional role of furnishing domestic space, can be seen to represent an irrevocable break in the ‘patterns of access to information’ to the domestic and public (Meyrowitz, 1986, p. 37; see also Zerubavel, 1982). This way of understanding the social significance of the photograph resonates with Joshua Meyrowitz’s use of the term ‘information’, not in the sense of facts, but rather social information; that is, the things that people are capable of knowing about the behaviour and actions of themselves and others, and the things learnt about others in acts of communication. The photograph provides access to a distinct social space that individuals would otherwise not be able to gain access to. Contemporary photographic practices intensify this underlying ability to connect people and information together, not merely due to the mobile nature of the photo but because of the mobility of the mechanism (the smartphone) that enables photographs to be distributed quickly and en masse. The ease with which the individual can now capture themselves photographically and then digitally disseminate these images close to real-time to a networked public is significant.
This plays into the altering role of photographic practices in the twentieth century and beyond. As van Dijck argues, the almost ubiquitous sharing of images is part of a larger transformation in which

…the self becomes the centre of a virtual universe made up of information and spatial flows; individuals articulate their identity as social beings not only by taking and storing photographs to document their lives, but by participating in communal photographic exchanges that mark their identity as interactive producers and consumers of culture. (van Dijck, 2008, pp. 62-3)

Understanding the production and consumption of the selfie is required in order to assess how the digital self-portrait plays into the transformation of public and private social life. Prior to discussing these implications for individuals as they go about doing privacy, a consideration of the nature of the selfie and its relationship to visual culture is first necessary.

3 Face-work and photography: the selfie as proliferating portraiture

The ‘selfie’ is a self-captured photograph of the human face created for the purpose of dissemination. The word ‘selfie’ formally entered the English lexicon in 2013, when the term became Oxford Dictionaries’ Word of the Year (OD Blog). The term was first detected in an Australian online forum post in 2002, describing a photo of an injured lip belonging to a user dubbed ‘Hopey’ (Liddy, 2013). Despite the increasing popularity of the selfie, the practice of ‘taking a selfie’, in terms of self-photography and self-portraiture, has a long history. The ubiquity of digital technologies – smartphones, digital cameras and so on – has had an inexorable impact on the proliferation of this mode of digital self-portraiture (Döring et al., 2016).
The proliferation of these digital technologies is synonymous with the infiltration of visual devices, now considered a central aspect of contemporary culture more generally, where digital imaging and photography have become mundane accompaniments to communication and connection practices in everyday life (Hand, 2012, p. 11). [Footnote 4: The public dissemination of ‘the selfie’ occurs differently across social media platforms. Facebook, for example, tends to favour the selfie as a mode of documentation, representing significant moments in one’s life. Mobile messaging applications, such as WhatsApp or Snapchat, by contrast, tend to treat the selfie as a transient update of one’s daily life.] http://www.oxforddictionaries.com/definition/english/selfie Importantly, the selfie is distinct from traditional photographic self-portraiture in terms of technique and framing, but also in its ostensibly spontaneous and casual nature that has become embedded in everyday life (Saltz, 2014). Traditional photographic portraiture is perceived by viewers as staged, whereas the selfie is presented by producers as impulsive, notwithstanding that its production may actually require 20 photographs before capturing the ‘right’ presentation worthy of dissemination. Although representing the self visually in the form of digital photos has been accentuated in contemporary society, self-portraiture is not a new phenomenon (Tifentale, 2014). What is absent from much sociological literature is a definition of the selfie as a social practice. Rather than narrowly viewing the selfie as a narcissistic mode of self-expression (for example, see Schonfeld, 2013; Mehdizadeh, 2010), we contend along with others (Forth, 2015) that such interpretations are unnecessarily reductive and fail to consider how this form of visual communication is implicated in the wider transformation of public and private social life.
Such reductive understandings also fail to give credence to the interactional nature of the self. Examining the interactional aspects of the selfie provides insight into these acts of visual communication, but also into the selfie as a symbol of social transformation. The selfie is a self-photograph. Visual reproduction is intrinsic to this definition, with the selfie taken by an individual who is the subject of the photograph, typically rendered by a hand-held digital device. The subject of the photograph is usually an individual – or group of individuals (also referred to as a ‘groupie’) – who aims to capture a portrait of the person(s), therefore focusing specifically on the human face (or part of the human face). Such photographic images are also digitally disseminated for the purposes of sharing (sometimes consensually, although not always). We contend that these three components are essential features of the selfie: 1. the self-capturing and reproduction of the visual; 2. the photographic portraiture of the human face; 3. creation for the purpose of dissemination. Together these three features constitute the defining qualities of the selfie. All selfies are therefore photographs, though not all photographs are selfies (Feifer, 2015). In the current digital age, the reproduction of visual images occurs at an unprecedented rate (Hand, 2012). The way in which images are shared has led to a change in the social function of photography. As van Dijck (2008) notes, with the rise of western digital photography the traditional role of using photographs as a form of autobiographical remembering has become outmoded: the photo as keepsake is no longer the primary use of photography (see also Döring & Gundolf, 2006). The increased capacity to disseminate images via handheld devices online means that the picture becomes the preferred idiom in mediated communication practices (van Dijck, 2008, p. 58). Visual communication takes on greater prominence. As Hand (2012, p.
1) contends, where once we imagined the digital future would entail digital simulation and virtual reality, now arguably the opposite appears true: ‘the visual publicization of ordinary life in a ubiquitous photoscape’. The idiom of the picture and its use as a way of connecting and communicating with friends, acquaintances and colleagues signals this heightened role of photography. The widespread use of the cameraphone ushers in new ways for individuals to communicate visually at a distance. This digital practice not only alters the way we communicate, it alters our attitude towards the visual and specifically the photograph itself. As van Dijck identifies in the case of cameraphone technology:

Cameraphone pictures are a way of touching base: ‘Picture this, here! Picture me, now!’ The main difference between cameraphones and the single-purpose camera is the medium’s ‘verbosity’ – the inflation of images inscribed in the apparatus’s script. When pictures become a visual language conveyed through the channel of a communication medium, the value of individual pictures decreases while the general significance of visual communication increases. (2008, p. 62)

The unique quality of the visual and, specifically, the act of looking is central here. Writing in the early twentieth century, Georg Simmel proposed that the connection between, and interaction among, individuals that lies in the act of individuals looking at one another is ‘perhaps the most direct and purest interaction that exists’ (1997, p. 111). Psychological research supports these observations, emphasising the role of nonverbal communication in human interaction (Knapp et al., 2013). A glance, gesture or wink by a certain person underscores why visual phenomena, though subtle, are revealing sociological sites of analysis. 5 It also signals the importance of the face, especially when considering its centrality in photography and, by extension, the selfie.
It is through the face that we know the world and read other people’s intentions. Human culture assigns the face a special role in representing the self. When individuals are mourned and remembered, it is the face that is used as a referent. The role of a death mask as an impression taken from the face of a corpse is suggestive of the importance attributed to the face as a way for the idea of the individual to endure. With regard to capturing images of the face in digital form, the selection of the human face as the subject is noteworthy. The human face is the greatest instrument of communication that individuals use throughout everyday life, revealing thought, feeling and emotion (Ekman and Friesen, 2003). In terms of communication, the face demonstrates appearance. As Simmel (1997, p. 113) contends, the face ‘viewed as an organ of expression is, as it were, of a completely theoretical nature. It does not act, like the hand, the foot or the entire body, it never supports the inner or practical behaviour of people, but rather it only tells others about it’ (author’s emphasis). It is this demonstration that renders the face impractical and yet fundamental to human communication. The significance of the face as a tool of expression can be demonstrated when examining its role in providing communication materials when situated within the presence of other individuals. As Smith suggests (2010, p. 165), Goffman’s key ideas about social behaviours in public build on Simmel’s ideas concerning visual interaction. Goffman is similarly aware of the impractical though important application of the face as a literal and metaphorical instrument of communication.
The face provides the ‘ultimate behavioural materials’ that include, though are not limited to, ‘the glances, gestures, positioning, and verbal statements that people continuously feed into the situations, whether intended or not’ (Goffman, 1967, p. 1). [Footnote 5: Yet, despite the significance of the visual, visual sociology has for various reasons suffered from a lack of legitimacy in terms of its social science credentials, with the image’s subjectivity and specificity rendering this preference for the visual seemingly invalid for scientific sociology (Pink, 2007, p. 12). This is also somewhat of an irony when we consider the historical synchronicity of the beginnings of photography and sociology. As Berger and Mohr (1982, p. 99) state: ‘The camera was invented in 1839. Auguste Comte was just finishing his Cours de Philosophie Positive. Positivism and the camera and sociology grew up together. What sustained them all as practices was the belief that quantifiable facts, recorded by scientists and experts, would one day offer man such total knowledge about nature and society that he would be able to order them both’.] [Footnote 6: For a discussion on the metaphorical importance of face see Goffman (1967, pp. 5-45).] This organ of expression is the mechanism by which we provide ‘external signs of orientation and involvement – states of mind and body not ordinarily examined with respect to their social organisation’ (1967, p. 1). The face conveys an individual’s involvement in a particular situation and therefore is one key resource individuals use in the collective manufacturing of the self. 7 As Goffman (1959, p. 2) indicates, an individual – primarily through the face – communicates expression in a given situation in two overarching manners: 1. expressions that are ‘given’, whereby the person uses ‘verbal symbols and their substitutes’ (such as facial expressions) ‘to convey the information that he and the others are known to attach to these symbols’ and 2.
expressions that are ‘given off’, therefore involving a ‘wide range of action that others can treat as symptomatic of the actor, the expectation being that the action was performed for reasons other than the information conveyed in this way’ (Goffman, 1959, p. 2). Understanding the role of the face in communication rituals in both this narrow and broader sense is required. Though our focus tends to fixate on the narrow, it is the wider form of communication as ‘given off’, or the more ‘theatrical and contextual kind, the non-verbal, presumably unintentional kind, whether this communication be purposely engineered or not’ (Goffman, 1959, p. 4), that Goffman was particularly gifted at illuminating. Both classes of expression are used in assessments of others and determine the extent to which these expressions cohere with the definition of a situation: ‘[t]ogether the participants contribute to a single over-all definition of the situation which involves not so much a real agreement as to what exists but rather a real agreement as to whose claims concerning what issues will be temporarily honoured’ (Goffman, 1959, pp. 9-10). The selfie, in light of situationally appropriate expression, raises implications about the nature of digital portraiture because although the face presented is one that appears spontaneous and casual in appearance, it is actually the result of crafting, deliberate framing and ultimately embodied practices (Frosh, 2015, p. 1614). As a photograph the selfie omits ‘temporal sequences’ but, further than this, it provides for the shaping and rendering of the image by the subject. The primed quality of the selfie offers the producer the ability to refine expressions that in Goffman’s terms are visually ‘given’ and conversely restricts, if not quashes, expressions ‘given off’.
However, though digital photography is often claimed to have led to an artifice associated with the visual, digitization did not cause manipulability; retouching has always been inherent in photography (van Dijck, 2008, p. 66). What is crucial is that digital photography provides an increased opportunity for the practice of reviewing and retouching visual reproductions. And this may partially explain the ambivalence and moralising often associated with critiques of the selfie; as a form of visual communication it radically restricts the viewer’s ability to assess the worth of the individual presented in the selfie (as when compared with face-to-face interaction), while enabling the producer an increased capacity to craft the face presented. Considering the centrality of the face as an organ of expression, and the verbosity and ubiquitous nature of the photograph as a heightened communication medium, the marriage of the face and the image in the form of the selfie is in one sense unremarkable. [Footnote 7: Following Randall Collins (1986, p. 107), we agree that Goffman’s account of the self is aptly placed in the traditions of social anthropology in which individuals enact social rituals in order to maintain the normative order of society: “Thus, unlike Mead, Thomas, and Blumer, the self in Goffman is not something that individuals negotiate out of social interactions: it is, rather, the archetypal modern myth. We are compelled to have an individual self, not because we actually have one but because social interaction requires us to act as if we do.” For an extensive discussion on this point see also Cahill (1998).] As with other forms of mechanical reproduction (e.g. sound reproduction), the reproduction of the image in the form of the photograph brings human expression closer, collapsing the tyranny of physical distance though equally reducing the uniqueness of individual experience.
This allows, as Walter Benjamin accurately perceives, the drawing closer together of individuals, even though they may be separated by physical distance:

The need to bring things spatially and humanly ‘nearer’ is almost an obsession today, as is the tendency to negate the unique or ephemeral quality of a given event by reproducing it photographically. There is an ever-growing compulsion to reproduce the object photographically, in close-up. (Walter Benjamin in Sontag, 1977, pp. 190-1)

We contend an important precursor of the selfie is the ‘close-up’. The term describes a camera technique that provides the ultimate framing of the expressive qualities of the human face. The portrait shares qualities of the close-up, using lighting to maximise the expressive qualities of the face (see Keating, 2006). The close-up, as Benjamin indicates, quashes the fleeting nature of human expression. As Sontag (1977, p. 115) similarly indicates: ‘[n]o one would dispute that photography gave a tremendous boost to the cognitive claims of sight, because – through close-up and remote sensing – it so greatly enlarged the realm of the visible’. At the same time, it paradoxically enables a sense of closeness or intimacy at a distance. This arguably marks qualities of both portraiture and the close-up that find their way into the contemporary practice of the selfie. The close-up represents a pivotal moment in the development of modern photographic communication because it reveals potent social information about the presentation of self. Yet rather than simply revealing the true representation of a subject, it expresses a ‘truth’ that is presented through the face. As Finkelstein contends: The common use of the close-up asserts the existence of a ruminative interior or self; the camera is the device giving insight into secrets. Through the close-up, thoughts are made visible. The actor’s facial expressions transport the audience into the deep interior of the mind.
The close-up uses the eyes as ‘the windows’ to the concealed personality. Suddenly the interior becomes exteriorized; certain gestures and subtle movements – a tear, a quiver of the lip, a slight smile – are signs from the interior of unmediated, true emotion. These fine facial movements caught by the close-up camera shot suggest authenticity, as if the realm of meaning behind the visible social surface is indeed the real world. (2007, p. 4, our emphasis)

If the close-up provides the communicative possibility for depicting the deep interior of an actor’s mind in film, it follows that when this technique is incorporated into everyday practice, it similarly is a way of depicting the presentation of ‘true’ or authentic emotion. Therefore, by enlarging the ‘realm of the visible’ the close-up provides the viewer with a sense of greater scrutiny of what is being viewed, though equally and counterintuitively restricts the viewer’s ability to scan for what they perceive as authentic when compared with face-to-face interactions. The visual depiction of the close-up of the face restricts, if not quashes, expressions that are ‘given off’ because both filmic production and the selfie are highly manufactured. Nonetheless it is the suggestion or successful performance of authenticity that is central. The selfie as a means of communicating emotion is crucial. To contemplate the social function of the photograph, and especially how the new form of the self-photograph in its digitized state is consumed, we are required to consider how images are ‘read’ or acted upon by actors. Taking our cue from Goffman: ‘[t]o consider photographs – private and public – it is necessary, apparently, to consider the question of perception and reality, and it is necessary to control somehow the systematic ambiguities that characterize our everyday talk about pictures’ (1979, pp. 11-12). It is this perception of the image that we consider in the following section.

4. Consuming the selfie: behavioural regard and doing privacy

We use the idea of behavioural regard to assist in our explanation of the selfie. The concept is used to indicate the extent to which individuals display deference or acknowledge one another when entering into another’s presence. Providing ‘regard’, or modifying one’s behaviour when physically copresent with another individual, is especially relevant to public situations. For example, in locations involving public transport individuals typically display an implicit type of acknowledgement to one another. Commuters will demonstrate that they are aware of each other’s presence while simultaneously withdrawing to ensure that they do not appear overly curious or interested towards one another (Goffman, 1963, p. 84; Schivelbusch, 1977, pp. 82-92). Behavioural regard in this situation can be understood by the term ‘civil inattention’ (Goffman, 1963, p. 84). The following kinds of cues and communicational gestures may occur: a non-assuming glance; a change of posture to accommodate the movement of commuters when finding seats; repositioning the body and limbs to provide for extra space; and a non-assertive alteration in facial expression (Walsh, 2009, p. 50). Civil inattention is therefore not ‘disattention’, but a mode of interaction that implicitly reassures other individuals that one is not obviously interested in their affairs (Lofland, 1989, p. 463). This subtle form of interaction is symbolic of behavioural regard. It signals that participants have settled upon a single definition of the situation; all commuters in this situation tacitly agree to impose as minimally as possible on one another. Through employing behavioural regard, individuals demonstrate the adoption of expected forms of social behaviour that reinforce the way in which a situation is defined. Any behaviour (or material culture) that interferes with or contradicts the situation’s definition will be actively resisted.
This is because individuals strive for a ‘normative organisation’ of the social situation, in the face of a multitude of potential disruptions (Goffman, 1959, p. 254). This means that when an individual is commuting alone, they will expect to be socially isolated and not engage in prolonged conversation with others. This social convention means that when a passenger attempts to engage in animated conversation with another commuter with whom they have no pre-existing relationship, the latter is likely to experience this form of interaction as contravening civil inattention. The concept of behavioural regard is particularly useful when considering empirical cases of interplay between the public and the private (Waskul, 2016). The single-sex public bathroom is another case in point. These locations are accessible to all members of a given sex and therefore, for all intents and purposes, considered ‘public’. They are regions freely accessible to members of the community and ‘in the main, all persons have legal access’ (Lofland, 1973, p. 19). Equally, this setting is a place where individuals are actively encouraged to be ‘private’. This is because, as Kumar and Makarova (2008, p. 330) contend, numerous activities in public, such as prayer, the contemplation undertaken in an art gallery or listening to privatised music on a train, represent situations that constitute private activity while located in public (see also Cahill et al., 1985; Goffman, 1971; Karp, 1973; Lofland, 1973). However, like the situations above, and certainly for the public bathroom, these public settings possess social conventions around their use and are regulated by social norms about how individuals should perform and afford other users degrees of situational privacy. Users of these environments are required to demonstrate regard to other users of these locations. Cahill et al.
(1985) in their pioneering study describe how this ritual operates:

Clearly, a toilet stall is a member of this sociological family of ecological arrangements. Sociologically speaking, however, it is not physical boundaries, per se, that define a space as a stall but the behavioural regard given such boundaries. For example, individuals who open or attempt to open the door of an occupied toilet stall typically provide a remedy for this act, in most cases a brief apology such as ‘whoops’ or ‘sorry’. By offering such a remedy, the offending individual implicitly defines the attempted intrusion as a delict and thereby affirms his or her belief in a rule that prohibits such intrusions (Goffman, 1971, p. 113). In this sense, toilet stalls provide occupying individuals not only physical protection against potential audiences but normative protection as well. (Cahill et al., 1985, p. 36)

As with other physical barriers and perceived symbolic boundaries (i.e., not necessarily physical), the separations between stalls (doors and walls) are treated as though they cut off communication materials more than they actually do. In this sense, ‘the work walls do, they do in part because they are honoured or socially recognised as communication barriers’ (Goffman, 1963, p. 152). They provide actors with ‘both physical and normative shields behind which they can perform potentially discrediting acts’ (Cahill et al., 1985, p. 37). It is the act of acknowledging such ‘shields’ and responding in a way deemed appropriate that we can describe as behavioural regard. While the concept derives from interaction between individuals, behavioural regard is also partly the province of material culture: doors, stalls and arguably photographs. As walls and doors function in a bathroom to provide cues to interactants in this setting, so too photographs provide cues that viewers perceive and interpret. Goffman’s explanation of public and private photographs is indicative of this.
As public photographs are identified as being produced for an ‘anonymous aggregate of individuals unconnected to one another by social relationship and social interaction’, the implication is that these images are ‘diverse in function and character’ and serve to furnish the world symbolically for particular purposes beyond the immediate confines of persons known to the producer. The vast majority of pictures that one encounters are public. When we traverse the city, view magazines and newspapers or surf the web, the majority of images we are exposed to are produced for public consumption; intended for individuals unknown to one another. Private photographs, on the other hand, are recorded ‘for display within the intimate social circle of persons featured in them’ (Goffman, 1979, p. 10). Here photos are produced for the sake of exhibition and specifically for those known to one another. These categories of photograph imply a distinct kind of behavioural regard. When viewing images produced for public consumption, such viewing is deemed appropriate and usual. We are required to pay little respect or veneration towards the depiction of the people we view, because we have no direct social connection to them. When viewing private photos, on the other hand, the context and relationship to the producer of the photograph is significant. The producer’s relationship to the consumer of the image determines the type of regard the image requires. For example, when viewing a family portrait of an acquaintance, respect and deference are highly likely to be afforded to the image. This is especially true when the object is viewed in a domestic setting. In addition to the display of deference, control is another aspect of the private photograph. In the analogue age, private photos were only viewable by individuals sanctioned by their relationship to the producer of the photograph.
To sanction viewing of a private photograph was relatively straightforward: one was required to gain physical access to the photograph (displayed within a spatial location). It may have been affixed to an internal wall, placed on a mantel or tucked away in a shoe box. To gain access to photographs, one had to gain access to the social setting in which the photograph was situated. Regulating the sanctioned viewing of private photographs becomes increasingly difficult with digital media and the introduction of ubiquitous digital photography. 8 It allows for the distribution of images, enabling close to instantaneous dissemination via personal handheld devices. The medium of the image becomes increasingly shareable and increases the possibility of seepage across this ‘very little guarded point in the barrier that both protects and restrains persons of private life from passing over into public recognition’ (Goffman, 1979, p. 11). Indeed, part of the popularity of digital photography is the increased command over the outcome of the image that allows for its manipulability, and this increased manipulability ‘renders them vulnerable to unauthorised distribution’ (van Dijck, 2008, p. 58). The possibility of sanctioned viewing of private images is increased and, equally, this is true for non-sanctioned viewings. Accounts of the prevalence of online sexual activity experiences and state prevention strategies highlight this point. For example, Döring (2014), in her analysis of the sexting risk prevention literature, explores consensual adolescent sexting and indicates that the topic is framed in predominantly two ways: as deviant behaviour associated with a range of different risks (where images are viewed by parties that are not sanctioned) and, less frequently, as normal intimate communication in romantic and sexual relationships in the digital age (2014, p. 2).
In a social environment where online sexual activity is recorded to be fairly common (see Döring et al., 2015) and there is an increasing ‘normalcy of sexting’, especially among adults (and teenage populations) in romantic relationships (Döring, 2014, p. 6; see also Perkins et al., 2014; Salter, 2015), the importance of behavioural regard in relation to the consumption of private photographs is exacerbated. In the context of sexting the role of deference to the photograph is significant. It reinforces that although aspects of sexting (the sending and receiving of sexually-laden images) are not novel, what is new is ‘the use of the cell phone to do so [send and receive images] and ease with which one can engage in sexting’ (Weisskirch & Delevi, 2011, p. 1697). [Footnote 8: See Hand (2012, p. 11): ‘In embracing the term ‘ubiquitous’, then, I am not referring simply to images: I suggest that the discourse, technologies and practices of photography have become radically pervasive across all domains of contemporary society’.] [Footnote 9: Döring’s (2014, p. 1) definition of sexting is useful: ‘Sexting is a 21st century neologism and portmanteau of “sex” and “texting” that refers to the interpersonal exchange of self-produced sexualized texts and above all images (photos, videos) via cell phone or the internet’ (emphasis in original).] The ease of producing, sharing and consuming private photographs increases and signifies the heightened role of photographic communication practices in everyday life (van Dijck, 2008, p. 62). Accordingly, this situation intensifies issues concerning the unauthorised distribution of private images and expectations around privacy; in other words, the manner in which behavioural regard is provided to the photograph.
What therefore is at stake in our contemporary media ecology is the individual’s relationship to privacy; privacy not in the sense of an individual’s claim to a particular space (à la civil inattention), but rather their claim to keep private the objects and representations that symbolise them. Producers of photographs are now required to consider and actively manage how to maintain control over, and sanction the viewing of, private photographs. Managing control over photographs is an increasingly fundamental aspect of performing privacy. As Albury’s study of digital media suggests, young men and women have a heightened awareness of the need to manage their online presence. In the case of the selfie, Albury reports that her participants were ‘highly conscious of privacy and that not all selfies were made to be shared’ (Albury, 2015, p. 1735). Some selfies in this context were made specifically with the intention of keeping them private, while others were there to be shared as a way of expressing the self and communicating ‘to others one’s location and interests at a certain point in time’ (Albury, 2015, p. 1736). What is integral here is the spectrum between concealment and disclosure. Christina Nippert-Eng (2010), in her empirical exploration of privacy, argues that rather than being an absolute state, privacy is something that individuals are required to act out and undertake throughout the everyday. How individuals go about revealing themselves publicly impacts on their capacity to remain inaccessible (see also Zerubavel, 1979, 1982): …creating pockets of accessibility is an important way in which we try to achieve a few pockets of inaccessibility too. Making some part of ourselves accessible to some people in some times and places actually helps us get away with denying them access to other parts or at other places and times. 
We might agree that ‘you can know this about me, you can use this thing of mine, and you can reach me here, at this time, and in this way’, in part to better insist ‘but I won’t let you know this about me, you can’t touch this, and you need to leave me alone now’. (Nippert-Eng, 2010, p. 6) Privacy is therefore interactionally contingent (see also Dellwing, 2012). How an individual goes about constructing privacy, and works towards achieving it in collaboration with others, is central. The contingent nature of privacy leaves it susceptible to violation, or to instances where, in the case of the selfie, unsanctioned consumption is possible. This becomes acute when we also consider the increased pace of sharing visual culture, where even an accidental or forced revealing, for even a brief moment, can render what is deemed private, public. Nonetheless, the selfie can be characterised as a moment in which one reveals a certain image of oneself in one situation only to deny others access in another context.10 The producer of the selfie can command what others view and when they view it, in order to withhold or seclude other parts of the self. This is certainly indicated in a recent empirical account of the selfie whereby participants in Albury’s study indicate they took ‘private selfies’. These are photographic images that were perceived to be created only for the producer of the image, rather than the public.11 These images were understood as ‘ordinary, or at least unremarkable, practice albeit somewhat risky, with several participants expressing concern that friends, parents, or teachers might find private selfies on unlocked phones’ (Albury, 2015, p. 1736).

10 In this regard, the selfie can be viewed as part of the counter play between the two interests that Simmel (1950, p. 344) identifies as permeating the entirety of human interaction: concealing and revealing.
Concealing and revealing are significant aspects of the selfie, and they underscore why reducing the selfie to narcissistic display fails to understand the wider context. The desire to conceal visual representations of the self, and to disclose these at other times, is suggestive of the increasing role of impression management as it becomes delegated onto material culture. Individuals have always attempted to control the information they reveal to one another when situated in face-to-face interactions through their conduct and expression, and this certainly persists in digital interactions. However, the extent to which this conduct and expression is viewable is restricted (in a Goffmanian sense) and, as discussed in the previous section, is arguably more susceptible to manipulability and priming by the producer of the selfie. The advent of the digital has increased the ability to insert private materials into public situations, making acute the ‘low and very little guarded point in the barrier that both protects and restrains persons of private life from passing over into public recognition’ (Goffman, 1979, p. 11). But what occurs when a private selfie is viewed without consent? The literature that frames sexting as deviant behaviour underscores the risks and severe legal ramifications (in countries such as Australia and the U.S.A.) associated with producing and consuming these photographs, especially when produced by adolescents (Döring, 2014, p. 7).12 Yet this danger is not a natural one. Rather, it is manifest in light of legal structures that fail to understand the increasing role that photographic sharing practices play in contemporary romantic relationships (Weisskirch & Delevi, 2011). In contrast is the example of revenge porn, whereby private sexualised photographs are publicly shared without consent, in a vindictive manner, with intent to damage the reputations of those depicted (Goldberg, 2014, p. 18). 
This too is an example of the non-consensual circulation of a private photograph, and it raises a question about the type of regard a viewer, such as a friend or parent, experiences when inadvertently consuming this image. The consumption of the photograph in this case is non-sanctioned and therefore transgresses the division between public and private that social convention usually manages to keep distinct. This reinforces the extent to which privacy is interactionally contingent and only collaboratively achievable. These instances play into larger transformations of public and private social life. Typically, however, the selfie does not occupy this extreme. The selfie is generally represented as the consensual circulation of self-photography. As a relatively new communicative practice, the social protocols and regard provided to the selfie are still being formed, and as such there is much ambivalence displayed around the consumption of selfies for purposes of bodily display and gendered expression (Albury, 2015, p. 1743; see also Döring, Reif, & Poeschl, 2016). Arguably, the production and consumption of the selfie lead to a greater ambiguity. While social ambiguity is certainly possible across all media technologies that transfer communication to different temporal and spatial settings, the selfie – with its emphasis on the communicative instrument of the human face – takes this ambiguity to a new level.

11 In one sense, this point highlights why it is the self-capturing component of a selfie that makes it unique. These pictures are not necessarily intended for public dissemination (although media analysis of celebrities may indicate this).

12 Lee et al. (2013) indicate that moral anxieties concerning the practice of sexting have rendered policy making around this issue almost impossible in an Australian context; young people are silenced in public and media discourses while also being the recipients of harsh legal consequences.

5. 
Conclusion: the selfie and the transformation of public and private life

The selfie challenges the traditional demarcation between ‘public’ and ‘private’ social life. The division between these spheres can be pointed to in numerous analyses of contemporary social life. The boundary between private and public carries powerful normative implications (Nippert-Eng, 2007; Sheller and Urry, 2003; Weintraub, 1997, p. 1; Zerubavel, 1979, 1982), especially in relation to how (i.e., in what modes of interaction) and where (i.e., in which social situations) people interact with one another in everyday life. At its most basic, the public-private distinction is characterised by several divisions: first, a division between what is hidden or withdrawn versus what is revealed or accessible; and second, a division between what is individual versus what is collective (Weintraub, 1997, pp. 4–5). Although the boundary is ‘porous and ambiguous’ (Madanipour, 2003, p. 66), it still functions to allow for a ‘realm of private self-expression and intimacy buffered from the larger world of politics and a sense of belonging to a larger community that expresses obligations to all its members, even if they are strangers’ (Wolfe, 1997, pp. 187–88). It is in this context that the selfie, as a mode of self-expression (as a sometimes private and at other times public activity), comes into its own. Individuals segment social activities, locations, behaviours and experiences into ‘public’ and ‘private’ categories (Kumar and Makarova, 2008; Meyrowitz, 1986; Sheller and Urry, 2003; Zerubavel, 1979, 1982). For most of the nineteenth and early twentieth centuries, the separation between the two remained relatively distinct. This division has now arguably begun to unravel, becoming increasingly blurred (Ford, 2015, p. 54). 
As Kumar and Makarova explain: One might not be sure where to put the stress – on the private overwhelming the public, or the public saturating the private – but the general perception, here as elsewhere, is of a fundamental shifting of boundaries or, even more significantly, of the increasing difficulty of recognising any boundary at all. (2008, p. 326) We argue the selfie is part of this segmentation of social activities, but to categorise it as either public or private is inaccurate. Indeed, the selfie is subject to the increasing ambiguity that is creeping into the public-private division; the practice of taking selfies is emblematic of the blurring of the boundary between public and private social life. The proliferation of the selfie as a form of visual expression is connected to the increasing desire to communicate an ideal image of the self. The desire to stay connected regardless of one’s location renders the division between public and private less significant. As discussed, photographs form part of this desire as they represent, in Goffman’s (1979, p. 11) words, a ‘low and very little guarded point in the barrier that both protects and restrains persons of private life from passing over into public recognition’. But rather than a case of transgression between the private and the public photograph, producers and consumers of the selfie appear not to pay the same deference to the boundary as did previous producers and consumers of standard photographs. The selfie appears to straddle this dichotomy of public and private photograph, decreasing the clarity of the division between public and private social life; the role and character of sharing social information is transitioning. Although some commentators might argue this reduces or symbolises the elimination of privacy altogether, we argue that in actual fact individuals still aspire to privacy, albeit in more nuanced and socially contingent ways (see also Salter, 2015, p. 14). 
The selfie, situated within these larger social transformations, is associated not only with the production and consumption of the visual, but specifically with the elevation of photography as a heightened communication medium. By drawing upon the work of Goffman and using his approach to the study of face-to-face social interaction, we have been able to understand more clearly the interactional nature of privacy and the significance of how self-presentation occurs when the stage adopted is the ‘selfie’.

References

Adorno, T. (2002). The curves of the needle. In Richard Leppert (Ed.), Essays on music: Theodor W. Adorno (pp. 271–276). Berkeley: University of California Press.
Albury, K. (2015). Selfies, sexts, and sneaky hats: young people’s understandings of gendered practices of self-representation. International Journal of Communication 9, 1734–1745.
Berger, J., & Mohr, J. (1982). Another way of telling. New York: Vintage International.
Cahill, S., Distler, W., et al. (1985). Meanwhile backstage: public bathrooms and the interaction order. Urban Life 14(1), 33–58.
Cahill, S. (1998). Toward a sociology of the person. Sociological Theory 16(2), 131–148.
Collins, R. (1986). The passing of intellectual generations: reflections on the death of Erving Goffman. Sociological Theory 4 (Spring), 106–113.
Csikszentmihalyi, M., & Rochberg-Halton, E. (1981). The meaning of things: domestic symbols and the self. Cambridge: Cambridge University Press.
Dellwing, M. (2012). Moving armies of stop signs. Philosophy of the Social Sciences 43(2), 225–245.
Döring, N. (2014). Consensual sexting among adolescents: risk prevention through abstinence education or safer sexting? Cyberpsychology: Journal of Psychosocial Research on Cyberspace 8(1), article 9.
Döring, N., Daneback, K., Shaughnessy, K., Grov, C., & Byers, E. (2015). Online sexual activity experiences among college students: a four-country comparison. Archives of Sexual Behavior [Epub ahead of print].
Döring, N., Reif, A. 
& Poeschl, S. (2016). How gender-stereotypical are selfies? A content analysis and comparison with magazine adverts. Computers in Human Behavior 55, 955–962.
Döring, N., & Gundolf, A. (2006). Your life in snapshots: mobile weblogs. Knowledge, Technology and Policy 19(1), 80–90.
Ekman, P., & Friesen, W. V. (2003). Unmasking the face: a guide to recognizing emotions from facial clues. Los Altos: Ishk.
Feifer, J. (2015). Is this a selfie? The New York Times: Opinion, July 22, 2015. <http://mobile.nytimes.com/2015/07/22/opinion/is-this-aselfie.html?_r=1&referrer=> Accessed 30 July 2015.
Finkelstein, J. (2007). The art of self invention: image and identity in popular visual culture. New York: I.B. Tauris & Co.
Ford, S. (2015). Reconceptualising the public/private distinction in the age of information technology. Information, Communication and Society 14(4), 550–567.
Frosh, P. (2015). The gestural image: the selfie, photography theory, and kinesthetic sociability. International Journal of Communication 9, 1607–1628.
Goffman, E. (1959). The presentation of self in everyday life. New York: Anchor Books.
Goffman, E. (1963). Behavior in public places: notes on the social organization of gatherings. New York: Free Press of Glencoe.
Goffman, E. (1967). Interaction ritual: essays in face-to-face behavior. Chicago: Aldine Publishing Company.
Goffman, E. (1971). Relations in public: microstudies of the public order. New York: Basic Books.
Goffman, E. (1979). Gender advertisements. New York: Harper & Row.
Goffman, E. (1983). The interaction order: American Sociological Association, 1982 presidential address. American Sociological Review 48(1), 1–17.
Goldberg, M. (2014). The war against revenge porn. The Nation 299(6), 18–20.
Hand, M. (2012). Ubiquitous photography. Cambridge: Polity Press.
Jamieson, L. (2013). 
Personal relationships, intimacy and the self in a mediated and global digital age. In Kate Orton-Johnson & Nick Prior (Eds.), Digital sociology: critical perspectives. London: Palgrave Macmillan.
Jay, M. (1994). Downcast eyes: the denigration of vision in twentieth-century French thought. Ewing, NJ: University of California Press.
Karp, D. (1973). Hiding in pornographic bookstores: a reconsideration of the nature of urban anonymity. Urban Life and Culture 1(4), 427–451.
Keating, P. (2006). From the portrait to the close-up: gender and technology in still photography and Hollywood cinematography. Cinema Journal 45(3), 90.
Knapp, M., Hall, J., & Horgan, T. (2013). Nonverbal communication in human interaction. Boston: Cengage Learning.
Kumar, K., & Makarova, E. (2008). The portable home: the domestication of public space. Sociological Theory 26(4), 324–343.
Lee, M., Crofts, T., Salter, M., Milivojevic, S., & McGovern, A. (2013). ‘Let’s get sexting’: risk, power, sex and criminalisation in the moral domain. International Journal for Crime, Justice and Social Democracy 2(1), 35–49.
Liddy, M. (2013). This photo, posted on ABC Online, is the world’s first known ‘selfie’. Australian Broadcasting Corporation News. <http://www.abc.net.au/news/2013-11-19/this-photo-is-worlds-first-selfie/5102568>
Lofland, L. (1973). A world of strangers: order and action in urban public space. New York: Basic Books.
Lofland, L. (1989). Social life in the public realm: a review. Journal of Contemporary Ethnography 17(4), 453–482.
Madanipour, A. (2003). Public and private spaces of the city. London: Routledge.
Mehdizadeh, S. (2010). Self-presentation 2.0: narcissism and self-esteem on Facebook. Cyberpsychology, Behavior, and Social Networking 13(4), 357–364.
Meyrowitz, J. (1986). No sense of place: the impact of electronic media on social behavior. New York: Oxford University Press.
Nippert-Eng, C. (2010). Islands of privacy. Chicago: University of Chicago Press.
Perkins, A., Becker, J., Tehee, M., & Mackelprang, E. (2014). 
Sexting behaviors among college students: cause for concern? International Journal of Sexual Health 26, 79–92.
Pink, S. (2006). Home truths: gender, domestic objects and everyday life. Oxford: Berg.
Pink, S. (2007). Doing visual ethnography. London: SAGE Publications.
Ritzer, G., & Jurgenson, N. (2010). Production, consumption, prosumption: the nature of capitalism in the age of the digital ‘prosumer’. Journal of Consumer Culture 10(1), 13–36.
Robinson, L. (2007). The cyberself: the self-ing project goes online, symbolic interaction in the digital age. New Media & Society 9(1), 93–110.
Salter, M. (2015). Privates in the online public: sex(ting) and reputation on social media. New Media & Society, 1–17.
Saltz, J. (2014). Art at arm’s length: a history of the selfie. Vulture, 26 January.
Schivelbusch, W. (1977). The railway journey: trains and travel in the 19th century. New York: Urizen Books.
Schonfeld, Z. (2013). Of course selfies are narcissistic. So what? The Wire: News from the Atlantic. <http://www.thewire.com/culture/2013/11/of-course-selfies-are-narcissistic/355432/>
Senft, T., & Baym, N. (2015). What does the selfie say? Investigating a global phenomenon. International Journal of Communication 9, 1588–1606.
Sheller, M., & Urry, J. (2003). Mobile transformations of 'public' and 'private' life. Theory, Culture & Society 20(3), 107–125.
Simmel, G. (1950). The sociology of Georg Simmel: translated, edited and with an introduction by Kurt H. Wolff. New York: The Free Press.
Simmel, G. (1997). Sociology of the senses. In David Frisby and Mike Featherstone (Eds.), Simmel on culture: selected writings (pp. 109–120). London: Sage.
Smith, G. (2010). Reconsidering gender advertisements: performativity, framing and display. In Michael Jacobsen (Ed.), The contemporary Goffman (pp. 165–184). 
New York: Routledge.
Sontag, S. (1977). On photography. London: Penguin Books.
Spencer, S. (2011). Visual research methods in the social sciences. New York: Routledge.
Tifentale, A. (2014). The selfie: making sense of the “masturbation of self-image” and the “virtual mini-me”. Selfiecity.
van Dijck, J. (2008). Digital photography: communication, identity, memory. Visual Communication 7(1), 57–76.
Walsh, M. (2009). Portable music device use on trains: a ‘splendid isolation’? Australian Journal of Communication 36(1), 49–59.
Waskul, D. (2016). Going to the bathroom. In Dennis Waskul and Philip Vannini (Eds.), Popular culture as everyday life (pp. 145–154). New York: Taylor & Francis.
Weintraub, J. (1997). Theory and politics of the public/private distinction. In J. Weintraub & K. Kumar (Eds.), Public and private in thought and practice: perspectives on a grand dichotomy. Chicago: The University of Chicago Press.
Weisskirch, R. S., & Delevi, R. (2011). “Sexting” and adult romantic attachment. Computers in Human Behavior 27, 1697–1701.
Zerubavel, E. (1979). Private time and public time: the temporal structure of social accessibility and professional commitments. Social Forces 58(1), 38–58.
Zerubavel, E. (1982). Personal information and social life. Symbolic Interaction 5(1), 97–109.

----

NEW EXHIBITION EXAMINES THE SOCIAL LIFE OF BRONZES THROUGH THE CENTURIES

Martin Powers*

Abstract: This is a short review of the “Mirroring China’s Past” exhibition at the Art Institute of Chicago (February 25th–May 13th, 2018). For reference, the exhibition catalog is: Tao Wang 汪涛 et al. Mirroring China’s Past: Emperors, Scholars, and Their Bronzes [吉金鑒古]. The Art Institute of Chicago, 2018. Distributed by Yale University Press, New Haven and London. 
The Mirroring China’s Past exhibition at the Art Institute of Chicago (February 25th–May 13th, 2018) marks a new watershed in the history of Early China exhibitions, being the most important display of Bronze-Age artifacts from China since the Great Bronze Age of China show in 1980. From the moment visitors enter the show they will be struck by the astounding quality of the works on display. This was made possible by the fact that roughly fifty works were donated by the Palace Museum Beijing, with some thirty coming from the Shanghai Museum of Art. In addition, you will find choice pieces from the major museum collections of Chinese art in the States, including the Metropolitan Museum of Art, the Nelson-Atkins, the Minneapolis Museum of Art and many more, not to mention the Art Institute’s own Buckingham collection. Even specialists will be surprised by little-known but exquisite works from private collections. The show, however, is by no means limited either to bronzes or to Early China. This exhibition treats viewers to the evolving afterlives of bronze vessels in multiple media, including ceramic, bamboo, lacquer, cloisonné, rubbings, multi-media works, and various forms of painting and print dating from Song times (960–1279) up to the present. Needless to say, the level of organization, diplomacy, erudition, imagination, and attention to detail required for such a show is itself astounding. Most of the credit for that achievement goes to Tao Wang, Pritzker Chair of Asian Art, along with his staff and strong administrative backing from the Art Institute and its community of supporters. © The Society for the Study of Early China and Cambridge University Press 2018. Early China (2018) vol. 41, pp. 453–456. doi:10.1017/eac.2018.6 *Martin Powers, 包華石, University of Michigan; email: mpow@umich.edu. Downloaded from https://www.cambridge.org/core, subject to the Cambridge Core terms of use, available at https://www.cambridge.org/core/terms. 
Of course, the impact of even high-quality objects can be compromised without adequate display, but here again this exhibition sets a new standard. The cases, tailor-made for the show, are understated so as not to divert attention from the objects. At the same time, they are capacious, lending every artifact ample room. Except for objects displayed along the wall, each work can be examined comfortably from all sides. Moreover, wherever possible, the works are placed so that viewers can see at least some of the inscriptions. In keeping with this object-centered aesthetic, the labels are minimalist, identifying the collection, the object, its date and little more. The labels do find amplification in short videos providing viewers with basic information about when, why, and how these objects were made. The process of bronze casting, for instance, is nicely revealed in one video, while others provide context. Still, in this show it is the objects that tell the story. Viewers are invited to enjoy, and to connect the dots. Those desiring greater depth will find it in a catalog offering multiple essays by Tao Wang along with other top-ranking specialists in the field.1 In the end each visitor to the show will develop her own narrative, depending upon her experience and educational background, but everyone will recognize that the objects on display, arranged in a specific and meaningful fashion, have an important tale to tell. Regular visitors to art shows will appreciate the inventiveness manifest in the transformation of classical forms over time, while provincials will not find the slavish copies they had been taught to expect from Orientals. 
The astute will observe how objects made for royal sacrifice gradually came to be incorporated into the lives of educated but otherwise ordinary people. By the twelfth century men and women both were collecting classical vessels as masterpieces of artistry, while classicizing styles began to appear on objects of ordinary use even in the homes of middle-income families. Fans of European art will find parallels and contrasts with Western Classicism. In ancient times Classical statues also were intended for ritual purposes. By Roman times they were displayed in the homes of wealthy patricians, less sacred than before, but still replete with religious meaning. Early modern kings competed to build the finest collections of Classical art and were aped in this by the nobility. By the eighteenth century, we find middle-class collectors, while classicizing styles in clothing and furnishings also made their way into the homes of educated if otherwise ordinary people, both women and men.

1. Published as Tao Wang et al., Mirroring China’s Past: Emperors, Scholars, and Their Bronzes (New Haven: Yale University Press, 2018).

For this reviewer the genius of the show lies in its ability to reveal how artists in China could reinvent the classical past in ever more sophisticated ways, moving from “antiquity to plaything” as Tao Wang puts it. Collecting itself sets the stage for this transformation by ripping a ritual object, such as a Raphael Madonna or Tang Bodhisattva, from its sacred setting and putting it on display in a collector’s home. 
Comparable transformations occurred countless times in Song China when scholars invited friends over to view their art collection, and create new art, as seen in Ma Yuan’s 馬遠 Composing Poetry on a Spring Outing (Nelson-Atkins Museum of Art, Kansas City). Just as the Pre-Raphaelites, Picasso, or Stravinsky breathed contemporary sensibilities into classical styles, this show reveals how artists in China repeatedly made the past contemporary. In both traditions, the key was to de-familiarize the past by inserting well-known forms into an unfamiliar context or medium. This forced viewers to rethink the hallowed object, as well as its tradition, in a new and more personal light. Some objects on display show how a collector could insert himself into a famous masterpiece. More complex are the multimedia works created by artists such as Wu Changshi 吳昌碩 (1844–1927). These scrolls juxtapose rubbings of ancient artifacts – highly accurate records of an object’s texture and design – with calligraphic commentary and contemporary painting. Wu Changshi’s playful paintings are especially inventive, treating the rubbing as if it were a fictive image of a palpable thing. In such ways Chinese artists blended historical fact and personal fantasy in one multi-media object, forcing the viewer to recognize the constructed, historical nature of all the media on display. The show concludes with Hong Hao’s 洪浩 contemporary exercises in art that take the history of art as their subject, blending painting and digital photography to insert the past into the present moment. The final act on the road to self-reflexive modernity is the last work in the show, Tai Xiangzhou’s 大香洲 rendering in ink of Minister Xi’s bucket (you) from Shang times. Close inspection reveals the marks of brush and ink, yet the work is truly “photographic” in its hyperrealism. Juxtaposing the two antipodes of modernist criticism, it challenges us to question both ends of the scale. 
There is one item I would improve, though it in no way detracts from the show’s many contributions. Principally, the didactic panels comparing China with other parts of the world do not display the level of scholarship evidenced in the catalog. For one, they compare apples and oranges: for example, the evolution of bronze vessels in China with social or political developments elsewhere. The portrayal of European history in particular is highly romanticized, while even the most significant institutional innovations in China are suppressed. Presumably this task was given to a research assistant or staff person; one could not expect scholars engaged in writing the catalog to attend to this detail. In the end the achievements of this show and its scholarship are secure, and I would urge anyone interested in the social life of things to view this exhibition and to get the catalog. Either one will change the way you think about China, and its past.

包華石

提要: 芝加哥藝術美院《吉金鑒古》展覽的評論。

Keywords: bronze vessels, Art Institute of Chicago, art collecting in China, the social life of things 青銅器、芝加哥藝術美院、中國的藝術收藏史、文物的社會生活 
----

RESEARCH ARTICLE

Medicinal Plants in Local Health Traditions (LHTs): Dharmanagar Sub-division, Tripura, India

N Shiddamallayya1, Binod B Dora2, Anjana Janardhanan3, Gyati Anku4, T Borah5, Ashish K Tripathi6, Chinmay Rath7, Anupam K Mangal8, Narayanam Srikanth9, Kartar S Dhiman10

ABSTRACT

Aim: Tripura is a small state in the northeastern part of India. Its floral biodiversity plays an important role in the traditional system of treatment among the tribal and rural population. Research on the use of medicinal plants in ethnomedicobotany and on the formulations of the study area is limited. The Medico Ethno Botanical Survey (MEBS) team of the Regional Ayurveda Research Institute (RARI), Itanagar, documented the local health traditions (LHTs) of traditional healers in rural and tribal pockets of Dharmanagar sub-division, North district, Tripura.

Materials and methods: The MEBS was conducted in tribal pockets and villages of the Dharmanagar, Damchera, and Panisagar Ranges of the Dharmanagar forest sub-division, North Tripura district, Tripura. Local health traditions were documented through discussion and interview with traditional healers in the prescribed format, along with global positioning system (GPS) locations and digital photographs of the healers, the plant raw drugs used in the traditional medicine, and the prepared formulations. Medicinal plants were identified using local and regional floras, followed by processing, mounting, and preservation. The documented information was processed for scientific validation, and Ayurvedic names were assigned to the medicinal plants. 
Results and discussion: The MEBS documented six folk claims involving six medicinal plants in the prescribed formats to conserve traditional knowledge. The data are presented systematically as botanical name, family, Sanskrit name, part used, morphological description of the plant, method of formulation, indication, and information on the folk healer.
Conclusion: Folk healers of Dharmanagar sub-division, Tripura, collect medicinal plants from the surrounding area and commonly use them in the treatment of Sarpa Danstra (snake bite), Kamala (jaundice), Stanyajanan (galactagogue), Udarasula (acute abdomen), Alpamutrata (oliguria), and Vrana (fresh wound). Further scientific validation is required to establish their therapeutic benefits and to enable large-scale medicine production.
Clinical significance: The MEBS team noticed that some medicinal plants are used in the treatment of human diseases, such as Alstonia scholaris (L.) R. Br. in snake bite, Cuscuta reflexa Roxb. in jaundice, Euphorbia hirta L. as a galactagogue, Scoparia dulcis L. in acute abdomen, Sida acuta Burm. f. in oliguria, and Mikania micrantha Kunth in fresh wounds. These plants need to be studied in detail in order to harvest the maximum benefit for mankind.
Keywords: Dharmanagar, Folk claim, MEB survey, Traditional, Tripura.
Journal of Drug Research in Ayurvedic Sciences (2019): 10.5005/jdras-10059-0089

Introduction

Health is an important index of a society's progress toward the development of a nation. Individuals, families, and society together contribute toward a healthy nation. The local lifestyle regimens and biodiversity of the study area have a great impact on the inhabitants and on human life. Treatment of health disorders by the traditional system plays an important role among tribal and rural populations. The traditional Indian system of treating diseases is considerably cheaper and more economical.
Most of the herbs used in single or compound drug formulations are collected from the surrounding forest. The area selected for the study was ruled by several emperors, and the name Dharmanagar derives from the name of the zamindar Dharmajeet Singh, who served under the king of Tripura. About 70% of the geographical area comprises hilly terrain covered by dense forests, and about 25% is plain. The terrain is mostly undulating and hilly, with small water streams, rivers, and fertile valleys. The forest is an integral part of the life of the local population. The use of fresh, wild medicinal herbs from the foothills of Dharmanagar is unique in its action. The use of genuine plant materials is an important factor in the treatment of diseases and the maintenance of good health. Tripura is a small landlocked state belonging to the northeastern part of India. The varied tribes of Tripura live in the foothills of the forests and depend on the flora and fauna for medications, food, shelter, and ritual practices. Dharmanagar is in a strategic location and is well connected with different parts of Tripura state. It is vital to document the medicinal flora that benefits a large group of the population in the treatment of health issues. Several studies have been conducted to explore the traditional knowledge of the healing practices of the tribal and rural people of Tripura. © The Author(s). 2019 Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by-nc/4.0/), which permits unrestricted use, distribution, and non-commercial reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.
1–5Regional Ayurveda Research Institute, Itanagar, Arunachal Pradesh, India
6–10Central Council for Research in Ayurvedic Sciences, New Delhi, India
Corresponding Author: N Shiddamallayya, Regional Ayurveda Research Institute, Itanagar, Arunachal Pradesh, India, Phone: +91 9449644341, e-mail: snmathapati@gmail.com
How to cite this article: Shiddamallayya N, Dora BB, Janardhanan A, et al. Medicinal Plants in Local Health Traditions (LHTs): Dharmanagar Sub-division, Tripura, India. J Drug Res Ayurvedic Sci 2019;4(4):185–191.
Source of support: CCRAS, New Delhi
Conflict of interest: None
Nineteen herbs used against gastrointestinal disorders and as antihemorrhagics by three tribes of North Tripura, namely the Halam, Tripuri, and Chakma, have been documented.1 Forty-eight folk medicinal plants of natural origin are used to treat different disease conditions, in 20 single and 28 compound formulations, by the rural traditional healers of North district, Tripura.2 Reports of Medico Ethno Botanical Surveys (MEBS) in the remote hills, forests, and rural areas of Tripura are available, with information collected from tribes regarding antifertility, pain, and fever remedies prepared from medicinal plants available in their surrounding forests.3,4 Similarly, listings and preliminary documentation of medicinal plants used by tribes for the treatment of human illness and suffering have also been produced.5,6 Some researchers have worked with the Tripuri, Reang, Chakma, Manipuri, and Koloi tribal communities and have listed the medicinal plants used for the treatment of various human health disorders.7–12 A few researchers from the scientific community have conducted
ethnomedicinal surveys and listed the medicinal plants, together with the uses and formulations, employed by the tribal, nontribal, and rural populace of Tripura state.13–17 Reviews of research documenting the ethnomedicinal plants and formulations of Dharmanagar, Tripura state, do not cover the medicinal plants in detail. Therefore, the Regional Ayurveda Research Institute (RARI), Itanagar, conducted an MEBS to document the LHTs of traditional healers in rural and tribal pockets of the Dharamanagar forest sub-division of North district for the present study.

Study Area

The study area is predominantly covered with mountains and valleys and has a stable climate throughout the year. The mountain ranges in the north experience heavy rainfall, while the south lies closer to the coastal area. The district is currently bordered by Karimganj district of Assam in the east, Unakoti district in the west, Bangladesh in the north, and Mizoram in the south. Dharmanagar sub-division of North Tripura lies at 24° 22′ 0.01″ N and 92° 10′ 0.01″ E (Fig. 1).

Fig. 1: Documentation of local health traditions in the forest foothills of Dharmanagar sub-division, North Tripura district, Tripura

Materials and Methods

A systematic MEBS was conducted during 2018–19 to document the local health traditions (LHTs) in tribal pockets and villages of the Churaibari and Sanicheraa forest beats of Dharamanagar Range, the Rahum Chera and Khedachera forest beats of Damchera Range, and the Bagbassa and Haplang forest beats of Panisagar Range of the Dharmanagar forest sub-division, North Tripura district, Tripura. The information was collected and documented through discussion and interview with the traditional healers available in the study area, as per the LHTs format provided by the Central Council for Research in Ayurvedic Sciences (CCRAS), Ministry of AYUSH, Government of India, along with global positioning system (GPS) locations and digital photographs of each healer (with name and address), of the plant raw drugs used in the traditional medicine, and of the prepared formulations. Collected medicinal plants were identified using digital photographs of the plants and local and regional floras, and descriptions were provided.18–20 The nomenclature of plant species was updated as per the International Code of Botanical Nomenclature (ICBN) principles and rules.21 Identified specimens were processed, mounted, and preserved in the internationally recognized herbarium with the acronym ARRI (Ayurveda Regional Research Institute) at the Regional Ayurveda Research Institute, Itanagar, and the nomenclature of species was updated as per the ICBN principles and rules.22,23 Ayurvedic names were assigned to the collected plants and to the diseases with reference to the classical literature.24,25 The collected folk claims were validated against the classical and scientific literature.26–29

Results

A Medico Ethno Botanical Survey was conducted to document the LHTs through interviews of traditional healers in tribal and nontribal habitats, and six folk claims involving six medicinal plants were documented in the prescribed formats to conserve traditional knowledge (Fig. 2). The collected data are presented systematically and are self-explanatory: botanical name, family, Sanskrit name, part used, morphological description of the plant, method of preparation, dose and duration, indication, and information on the folk healer.

Figs 2A to D: Glimpse of documentation of local health traditions (LHTs) at Dharamanagar sub-division, North Tripura District, Tripura

Botanical name: Alstonia scholaris (L.) R. Br. (Acc. No. ARRI-3064; GPS data: 25° 58′27″ N 91° 32′95″ E)
Family: Apocynaceae
Sanskrit name: Saptaparni
Part used: Root bark
Indication: Sarpa danstra (snake bite)
Morphological description of the plant: A perennial, large, evergreen, tall tree with an umbrageous crown. Leaves in whorls, shining above, pale beneath, with bitter milky juice and short petioles; bark grey, rough; branches whorled; young branchlets copiously lenticellate; flowers greenish-white; fruit with two long follicles; seeds oblong with brown hairs at both ends.
Method of preparation, dose, and duration: Fresh root bark is ground with water to prepare the required quantity of fine paste, which is applied for half an hour. Similarly, the root is also used to prepare 50 mL of fresh juice, which is administered orally three times in the first hour.
Information of folk healer: Porsat Bum Halam, Age 96, Male, Zoitang, Panisagar Range, North Tripura District.

Botanical name: Cuscuta reflexa Roxb. (Acc. No.
ARRI-3851; GPS data: 24° 22′42″ N 92° 13′51″ E)
Family: Convolvulaceae
Sanskrit name: Amaravalli
Part used: Root
Indication: Kamala (jaundice)
Morphological description of the plant: A perennial, rootless, leafless, twining parasite; stem long, yellowish green, closely twining, branched, glabrous; flowers in solitary or umbellate clusters in racemes, sometimes dotted with red; fruit a globose-ovoid capsule; seeds black, albuminous, with a small embryo.
Method of preparation, dose, and duration: The root is crushed and mixed with water to prepare 100 mL of fresh juice, which is administered orally three times a day.
Information of folk healer: Porsat Bum Halam, Age 96, Male, Zoitang, Panisagar Range, North Tripura District.

Botanical name: Euphorbia hirta L. (Acc. No. ARRI-3575; GPS data: 27° 08′01″ N 88° 23′33″ E)
Family: Euphorbiaceae
Sanskrit name: Dugdhika
Part used: Whole plant
Indication: Stanyajanana (galactagogue)
Morphological description of the plant: An annual herb, hispid with long, often yellowish, crisped hairs; stems terete with ascending, quadrangular branches; leaves opposite; inflorescence of many male florets surrounded by female florets in axillary or terminal involucres; capsules minute; seeds reddish brown.
Method of preparation, dose, and duration: The whole plant is used to prepare 100 mL of decoction by adding water. The decoction is administered orally three times a day for 4 days.
Information of folk healer: Porsat Bum Halam, Age 96, Male, Zoitang, Panisagar Range, North Tripura District.

Botanical name: Mikania micrantha Kunth (Acc. No. ARRI-2760; GPS data: 25° 39′18″ N 94° 09′14″ E)
Family: Compositae
Sanskrit name:
Part used: Leaf
Indication: Vrana (fresh wound)
Morphological description of the plant: An annual, glabrous, extensive twiner; leaves ovate-triangulate, petiolate, pubescent; heads four-flowered, axillary, white; achenes blackish-brown; pappus reddish.
Method of preparation, dose, and duration: A sufficient quantity of fresh leaves is used to prepare a fine paste, which is applied to the affected area every 2 hours until the wound heals.
Information of folk healer: Abdul Basir, Age 41, Male, Halflong Beat, Panisagar Range, North Tripura District.

Botanical name: Scoparia dulcis L. (Acc. No. ARRI-3754; GPS data: 24° 15′05″ N 92° 15′28″ E)
Family: Plantaginaceae
Sanskrit name:
Part used: Leaf
Indication: Udarasula (acute abdomen)
Morphological description of the plant: An annual, erect, much-branched undershrub; hairy; leaves simple, opposite, glandular, with short petioles; flowers small, white, axillary or racemose; capsule globose; seeds many-angled.
Method of preparation, dose, and duration: A decoction is prepared with fresh leaves, and 100 mL is administered orally once daily for 3 days before food.
Information of folk healer: Lalchung Monihalan, Age 39, Male, Thangnang, Panisagar Range, North Tripura District.

Botanical name: Sida acuta Burm. f. (Acc. No. ARRI-3616; GPS data: 27° 15′00″ N 88° 36′16″ E)
Family: Malvaceae
Sanskrit name: Bala (bheda)
Part used: Leaf
Indication: Alpamutrata (oliguria)
Morphological description of the plant: An annual, branched, hairy undershrub; leaves linear, serrate, glabrous on both sides; flowers axillary, in clusters of two to three; fruit 5–6 mm, toothed on the dorsal margin, with two awns; seeds smooth, black.
Method of preparation, dose, and duration: Fresh leaves are used to prepare a decoction, and 100 mL of the prepared decoction is administered orally thrice daily before food for 1 week.
Information of folk healer: Lalchung Monihalan, Age 39, Male, Thangnang, Panisagar Range, North Tripura District.
Discussion

Plants have their own phytochemical properties and play a very important role in curing various health disorders. Alstonia scholaris (L.) R. Br. (Saptaparni) has properties such as tikta and kasaya rasa, laghu and snigdha guna, usna virya, and katu vipaka, and is used as kushtaghna and raktasodhaka.26 The plant has shown antivenom activity in recent research,27,28 which is why it is used in cases of snake bite. Sida acuta Burm. f. (Bala bheda) is tridoshahara, deepana, and sothahara, and is useful in fever and daha (burning sensation of the body). It is regarded as sheeta and kashaya rasa pradhana and is useful in nervous and urinary diseases.29 The plant's antiurolithiatic activity has been reported in an animal model,30 and it is therefore helpful in cases of oliguria. Similarly, the herb Cuscuta reflexa Roxb. (Amaravela) is bitter in taste, expectorant, carminative, anthelmintic, purgative, and diuretic. It purifies the blood and cleanses the body. It reduces inflammation (Sotha) and is useful in jaundice (Yakrit vikara) and in pain of the muscles and joints.31 Its hepatoprotective activity has been demonstrated in an animal model.32 It is widely used in diseases of the liver and spleen, as it acts on the blood. The plant Euphorbia hirta L. (Dugdhika) is chiefly used by nursing mothers when the production of milk is inadequate.31 An aqueous extract of the plant has been studied in an animal model to establish its galactagogue effect.33 Healers commonly recommend this plant to increase lactation. A decoction of the herb Scoparia dulcis L. is used in the acute abdomen.31 Ethanolic extracts of the whole herbs of Scoparia dulcis L. and Ficus racemosa L. showed analgesic activity in Swiss albino mice.
This study helps to confirm its use in the treatment of the acute abdomen.34 Mikania micrantha Kunth is widely used in the treatment of fresh wounds, perhaps owing to its anticoagulation and granulation properties.35 An experimental study of fibroblast cells by the starch assay, and the external application of an ointment prepared from leaf juice powder with Vaseline (1:1 w/w), have shown wound healing activity.36,37

Conclusion

The use of medicinal plants in traditional healing by the folk healers of Dharmanagar, Tripura, is very scattered, and the plants involved are very common on revenue and forest lands. They are commonly used in the treatment of Sarpa Danstra (snake bite), Kamala (jaundice), Stanyajanan (galactagogue), Udarasula (acute abdomen), Alpamutrata (oliguria), and Vrana (fresh wound). Apart from Mikania micrantha Kunth, all the plants are referred to in the classical literature. Much research has been carried out on selected medicinal plants, but only very limited study has been devoted to luxuriantly growing wild weed plants. It is time to find alternative sources of raw drugs for the pharmaceutical industry to meet the demand for medicines. It is necessary to study the listed medicinal plants scientifically, to validate the claims made by the healers and to reach the entire needy population.

Clinical Significance

The MEBS team noticed that some medicinal plants are used in the treatment of human diseases, such as Alstonia scholaris (L.) R. Br. in snake bite, Cuscuta reflexa Roxb. in jaundice, Euphorbia hirta L. as a galactagogue, Scoparia dulcis L. in acute abdomen, Sida acuta Burm. f. in oliguria, and Mikania micrantha Kunth in fresh wounds. These plants are to be studied in detail in order to harvest the maximum benefit for mankind. The scientific validation of these medicinal plants may allow them to serve as primary drugs or as substitutes for rare and endangered medicinal plants in the preparation of disease-specific formulations.
Acknowledgment

The authors are thankful to the Central Council for Research in Ayurvedic Sciences, New Delhi, for financial support and encouragement; to the local rural and tribal populace and the Department of Forest, Tripura state, for their cooperation and guidance during the MEBS; and to RARI, Itanagar, for administrative support and for the technical assistance of the field attendant and the driver who helped the team to travel from Arunachal Pradesh to Tripura.

References

1. Das S, Choudhury D. Plants used against gastro-intestinal disorders and as anti-hemorrhagic by three tribes of north Tripura district, Tripura, India: a report. Ethnobotanical Leaflets 2010;14:467–478.
2. Choudhury J, Bora D, Baruah D, et al. Portrayal of folk medicinal practices among the indigenous people of North Tripura district of Tripura. Int J Res Ayurv Pharm 2014;5(4):480–488. DOI: 10.7897/2277-4343.05499.
3. Das B, Talukdar AD, Choudhury MD. A few traditional medicinal plants used as antifertility agents by ethnic people of Tripura, India. Int J P Pharmac Sci 2014;6(3):47–53.
4. De B, Debnath M, Saha S, et al. Herbal approach of tribal of Tripura-a state of North east India to get relief from pain and fever. Proceedings on traditional healing practices in North East India, North Eastern Institute of Folk Medicine (NEIFM), Pasighat, Arunachal Pradesh. 2019;16:163–166.
5. Majumdhar K, Datta BK. Folklore herbal formulations by the Tripura. Proceedings on traditional healing practices in North East India, North Eastern Institute of Folk Medicine (NEIFM), Pasighat, Arunachal Pradesh. 2019;15:155–162.
6. Roy D, Datta BK. Traditional medicines in Tripura. Proceedings on traditional healing practices in North East India, North Eastern Institute of Folk Medicine (NEIFM), Pasighat, Arunachal Pradesh. 2019;17:167–169.
7. Das HB, Majumdar K, Dutta BK, et al. Ethnobotanical uses of some plants by Tripuri and Reang tribes of Tripura.
Nat Prod Radiance 2009;8(2):172–180.
8. Shil S, Choudhury MD. Ethnomedicinal importance of pteridophytes used by Reang tribe of Tripura, North East India. Ethnobotanical Leaflets 2009;13:634–643.
9. Kar S, Dutta BK. A glimpse of the traditional uses of plants by Koloi subtribe of Tripura. J Botan Soc Bengal 2015;69(2):147–152.
10. Guha A, Chakma D. Traditional usage of ethno-medicinal plants among the Chakma community of Tripura, India. Global J Pharmac 2015;9(4):377–384.
11. Singh SP, Shrivastava P. A study on some plants used by Tripuri and Reang tribes of Tripura using ethnobotany in India. Int J Adv Res Innovat Ideas Edu 2017;4(4):2395–4396.
12. Guha A, Chowdhury S, Noatia K. Traditional knowledge and biodiversity of ethno medicinal plants used by the ethnic people of Tripura, North East India. 2nd Int Online Confer Biolog Sci, India 2018. 74–81.
13. Majumdar K, Datta BK. A study on ethnomedicinal usage of plants among the folklore herbalist and Tripuri medical practitioners: Part-II. Nat Prod Rad 2007;6(1):66–73.
14. Dey P, Bardalai D, Kumar NR, et al. Ethnomedicinal knowledge about various medicinal plants used by the tribes of Tripura. J Pharma Phytochem 2012;4(6):297–302.
15. Das S, Choudhury MD. Ethno medicinal uses of some traditional medicinal plants found in Tripura, India. J Med Plants Res 2012;6(35):408–414.
16. Majumdhar K, Saha R, Datta BK, et al. Medicinal plants prescribed by different tribal and non-tribal medicine men of Tripura state. Indian J Tradition Know 2006;5(4):559–562.
17. Debbarma M, Pala NA, Kumar M, et al. Traditional knowledge of medicinal plants in tribes of Tripura in NorthEast India. Tradition Complemen Alternat Med 2017;14(4):156–168. DOI: 10.21010/ajtcam.v14i4.19.
18. Deb DB. The flora of the Tripura state Vol (I).
New Delhi: Today and Tomorrow's Printers and Publishers; 1981.
19. Deb DB. The flora of Tripura state Volume (II). New Delhi: Today and Tomorrow's Printers and Publishers; 1983.
20. Singh KP, Singh DK, Choudhury PR, et al. Flora of Mizoram Volume-I. Botanical Survey of India (Ministry of Environment and Forest); 2002.
21. Bennet SSR. Name Changes in Flowering Plants of India and Adjacent Regions. Triseas Publications; 1987.
22. Lawrence GHM. Taxonomy of Vascular Plants. Second Indian Reprint. Calcutta: Oxford and IBH Publishing Co; 1969.
23. Jain SK, Rao RR. A Hand Book of Field and Herbarium Methods. New Delhi: Today and Tomorrow's Printers and Publishers; 1977.
24. Gurudeva MR. Botanical and Vernacular Names of South Indian Plants. Divyachandra Prakashana; 2001.
25. Sharma SK. Medicinal Plants Used in Ayurveda. Rashtriya Ayurveda Vidyapeeth (National Academy of Ayurveda); 1998.
26. Chopra RN, Nayar SL, Chopra IC. Glossary of Indian Medicinal Plants. New Delhi, India: Council of Scientific and Industrial Research; 1980.
27. Sarkhel S, Ghosh R. Preliminary screening of aqueous Alstonia scholaris Linn bark extract for antivenom activity in experimental animal model. J Dent Med Sci 2017;16(5):120–123.
28. Meenatchisundaram S, Parameswari G, Subbraj T, et al. Anti-venom activity of medicinal plants – a mini review. Ethnobotanical Leaflets 2008;12:1218–1220.
29. Kirtikar KR, Basu BD, An ICS. Indian Medicinal Plants. Vol. I. New Connaught Place, Dehradun: Bishen Singh Mahendra Pal Singh; 1980.
30. George MM, Adiga S, Avin S, et al. Evaluation of diuretic and antiurolithiatic properties of ethanolic extract of Sida acuta Burm f. in Wistar albino rats. Int J Pharm Pharmaceut Sci 2016;8(5):122–126.
31. Kirtikar KR, Basu BD, An ICS. Indian Medicinal Plants. Vol. III. New Connaught Place, Dehradun: Bishen Singh Mahendra Pal Singh; 1980.
32. Saini P, Mithal R, Menghani E. A parasitic medicinal plant Cuscuta reflexa: an overview. Int J Sci Eng Res 2015;6(12):951–959.
33.
Koko BK, Konan AB, Kouacou FKA, et al. Galactagogue effect of Euphorbia hirta (Euphorbiaceae) aqueous leaf extract on milk production in female Wistar rats. J Bioscien Med 2019;7(09):51–65. DOI: 10.4236/jbm.2019.79006.
34. Zulfiker A, Rahman MM, Hossain MK, et al. In vivo analgesic activity of ethanolic extracts of two medicinal plants - Scoparia dulcis L. and Ficus racemosa Linn. Biology and Medicine 2010;2(2):42–48.
35. Ajit KD, Dutta BK, Sharma GD. Medicinal plants used by different tribes of Cachar district, Assam. Indian J Tradition Knowledge 2008;7(3):446–454.
36. Sarimah MN, Mizaton HH. The wound healing activity of Mikania micrantha ethanolic leaf extract. J Fundament Appl Sci 2018;10(6):425–437.
37. Sahu DS, Upadhyaya SN, Bora M, et al. Acute dermal toxicity and wound healing activity of Mikania micrantha ointment in rats. Indian J Appl Res 2019;9(1):4–6.
work_cc7csiuszbaonowso6rluzg4lq ---- When technological discontinuities and disruptive business models challenge dominant industry logics: Insights from the drugs industry
Valerie Sabatier a,⁎, Adrienne Craig-Kennard b, Vincent Mangematin a
a Grenoble Ecole de Management, F-38000 Grenoble, France
b NyX Bio Technologies, F-75003 Paris, France
Article history: Received 10 November 2010; Received in revised form 28 November 2011; Accepted 10 December 2011; Available online 15 January 2012
Abstract: An industry's dominant logic is the general scheme of value creation and capture shared by its actors. In high technology fields, technological discontinuities are not enough to disrupt an industry's dominant logic. Identifying the factors that might trigger change in that logic can help companies develop strategies to enable them to capture greater value from their innovations by disrupting that logic. Based on analyzing the changes that biotechnologies and bioinformatics have brought to the drug industry, we identify and characterize three triggers of change that can create disruptive business models. We suggest that, in mature industries experiencing strong discontinuities and high technological uncertainty, entrants' business models initially tend to fit into the industry's established dominant logic and its value chains remain unchanged. But as new technologies evolve and uncertainty decreases, disruptive business models emerge, challenging dominant industry logics and reshaping established value chains. © 2011 Elsevier Inc.
All rights reserved.
Keywords: Dominant logic; Business model; Industry life cycle; Drug industry; Technological discontinuities

1. Introduction

Biotechnology and bioinformatics have brought strong technological discontinuities to the traditional ways of discovering and developing drugs. Research in technology innovation and management offers multiple definitions of terms around innovation and technology management [101]. Technological discontinuities are “those rare, unpredictable innovations which advance a relevant technological frontier by an order-of-magnitude and which involve fundamentally different product or process design” [7] but – surprisingly – those that have occurred in the drug industry seem (thus far) to have reinforced rather than challenged the positions of industry incumbents: the overall industry logics have not really changed, either in how business is done, or in how diseases are prevented or cured. Scholars have argued that technological discontinuities lead to industry shake-outs that can nullify incumbents' competitive advantages [42,78,79]. An emblematic case was that of digital photography [10,63], where the technological discontinuities disrupted the dominant logic of the entire photographic industry and led to the reshaping of its value chain. We define the value chain as “the linked set of value-creating activities all the way through from basic raw material sources for component suppliers to the ultimate end-use product delivered into the final consumer's hands” [38], and in this case, its reshaping allowed new competitors to enter the industry who introduced new ways of both creating and capturing value. Prahalad and Bettis [73] originally defined dominant logic at the firm level as “the way in which managers conceptualize the business and make critical resource allocation decisions”. A dominant logic can keep organizations focused on the road ahead, or it may act as a set of blinkers, restricting managers' peripheral vision [72].
Technological Forecasting & Social Change 79 (2012) 949–962. ⁎ Corresponding author at: Grenoble Ecole de Management, F-38000 Grenoble, France. Tel.: +33 4 76 70 65 28. E-mail address: valerie.sabatier@grenoble-em.com (V. Sabatier). doi:10.1016/j.techfore.2011.12.007
Dominant industry logics evolve and change over time, influencing how strategists conceive their business models and – in some cases – their company business model portfolios [82]. The evolution of dominant logics in high-tech industries has been recognized as being driven by the technologies involved [1]. Industries follow general lifecycles from emergence to maturity [3], which are sometimes disrupted by technological discontinuities that may lead either to the industry's decline or to a new emergent phase [1]. However, the drug industry, which has faced several waves of technological discontinuities, does not seem to follow that path when technological discontinuities occur [4,32,46,75], which calls into question the notion of the drivers of evolution in technology-based industries. But when technological discontinuity does not lead to disruption of the dominant logic, what other forces lead to such change? The aim of this article is twofold: to provide an understanding of the engines that drive the evolution of industry logics, and to propose a complement to current theories [65,93,99] by suggesting that technological discontinuities are not the only trigger for industry evolution. We argue that the convergence of business models from different industries can lead to challenges to dominant logics.
While technological discontinuities can initiate industry evolution, business model innovation can also play a central role in driving change in dominant industry logics: so we examine how and why new business models emerge. The pharmaceutical industry has experienced several waves of technological discontinuities, any of which could potentially have led to the emergence of new industry logics. This paper analyzes the triggers of the evolution of the drug industry's dominant logic by interviewing industry experts and analyzing the business models of new entrants. Our findings contribute to understanding the boom, bust and recovery of biotechnology and bioinformatics by following the stories of those promising technologies that encouraged stakeholders to believe in a drug industry revolution. For years, entrepreneurial firms failed to deliver the expected financial and scientific performance, partly because they found it difficult to fit their business models into existing dominant industry logics in profitable ways [4,14,57]. But now, by testing new business models, young entrepreneurial entrants are renewing the promise of their new technologies. The article first explains the concepts of dominant industry logic and of business models, and provides insights (based on industry lifecycle theory) into the effects of technological discontinuities on mature industries. We then describe our data collection and analysis methods, consider the drug industry's established dominant logic, and analyze the business models of seven young bioinformatics companies. Next, we outline the triggers for change in the industry's dominant logic – new healthcare philosophies, new patterns of collaboration, and new modes of network orchestration – and finally discuss our findings and the links between industry evolution and business model innovation.

2. Theoretical foundations

2.1. The dominant logic of an industry

Prahalad and Bettis have drawn on Kuhn's work on the notion of a paradigm – "a way of defining and managing the world and a basis of action in that world" [49] – to argue that managers make critical resource allocation decisions within the framework of a 'dominant logic' [73]. The authors originally developed this concept at the firm level, first from diversification-driven and then from environmentally-driven organizational change approaches [12]. They argue that actors evolving in the same industry develop similar mental maps of that industry, and that this dominant industry-level logic can be seen as a "mind set or a world view or conceptualization of the business and the administrative tools to accomplish goals and make decisions in that business" [73]. So the dominant logic provides a general framework within which industry firms conceive what their customers want and define how best to serve their needs, and thus – depending on what opportunities they detect – design their strategies and business models. This shared logic guides the perceptions of top managers and leaders about how best to create and capture value in the industry, and so which business models will enable their company to be profitable – but they also risk becoming overly dependent on such mental models of their competitive landscape, leading to 'cognitive inertia' [44]. Phaal et al. [67] identify three components of a dominant logic at the industry level: value context, value creation and value capture. The value context is the industrial landscape within which opportunities occur for creating and capturing value, and value creation refers to "the competences and capabilities used by organizations to generate products and services": the competencies have technology or knowledge-based components, while the capabilities are rooted more in processes and business routines [56].
And value capture refers to "the mechanisms and processes used by organizations to appropriate value through delivering products and services" ([67]: 223). Von Krogh et al. [96] also suggest a strong relationship between the dominant industry logic as perceived by top managers and their firms' performance.

2.2. Business models

The business model concept – a hot topic in research today [9] – originates with practitioners in the late 1990s, and is seen as distinct from strategy: "strategy refers to the choice of business models through which the firm will compete in a marketplace" [17]. Teece [91] argues that business models translate leaders' anticipations: "a business model reflects management's hypothesis about what customers want, how they want it, and how an enterprise can best meet those needs, and get paid for doing so". In his definition, a business model is organized around the hypothesis of what customers want, so the unit of analysis of a business model is its value proposal. Demil and Lecocq [21] also argue that a business model refers to the articulation between different areas of a firm's activity designed to produce a value proposition for customers. In practice several different value propositions may coexist within a specific industry, each of which may dictate the use of different business models based on services or products offered by firms at different steps of the industry's value chain. The changes in managers' perceptions of their firm's opportunities will influence the continuous evolution of the business models it
work_ccij2g7jhza53em6nhcgiyogrq ---- Design of a High Speed Architecture of MQ-Coder for JPEG2000 on FPGA

(IJACSA) International Journal of Advanced Computer Science and Applications, Vol. 8, No. 6, 2017 – www.ijacsa.thesai.org

Design of a High Speed Architecture of MQ-Coder for JPEG2000 on FPGA

Taoufik Salem Saidani, Department of Computer Sciences, Faculty of Computing & Information Technology, Northern Border University, Rafha, Saudi Arabia
Hafedh Mahmoud Zayani, Department of Information System, Faculty of Computing & Information Technology, Northern Border University, Rafha, Saudi Arabia

Abstract—Digital imaging is omnipresent today.
In many areas, digitized images replace their analog ancestors such as photographs or X-rays. The world of multimedia makes extensive use of image transfer and storage. The volume of these files is very high, and the need to develop compression algorithms to reduce their size has long been felt. The JPEG committee has developed a new standard in image compression that now also has the status of an International Standard: JPEG 2000. The main advantage of this new standard is its adaptability: whatever the target application, whatever the resources or available bandwidth, JPEG 2000 adapts optimally. However, this flexibility has a price: the complexity of JPEG2000 is far higher than that of JPEG. This increased complexity can cause problems in applications with real-time constraints. In such cases, the use of a hardware implementation is necessary. In this context, the objective of this paper is the realization of a JPEG2000 encoder architecture satisfying real-time constraints. The proposed architecture is implemented using programmable chips (FPGAs) to ensure its effectiveness in real time. Optimizations of the renormalization module and the byte-out module are described in this paper. Besides, the reduction in computational steps effectively minimizes the time delay and hence allows a high operating frequency. The design was implemented targeting a Xilinx Virtex 6 and an Altera Stratix FPGA. Experimental results show that the proposed hardware architecture achieves real-time compression of video sequences at 35 fps at HDTV resolution.

Keywords—MQ-Coder; high speed architecture; FPGA; JPEG2000; VHDL

I. INTRODUCTION

The current development of computer networks and the dramatic increase in the speed of processors reveal many new potentialities for digital imaging. Whether in the medical, commercial or military field, new applications are emerging, each with its specificities. The JPEG Group has developed a new, more flexible and better image encoding standard: JPEG2000 [1].
It is built around a wide range of image compression and display tools. This makes the algorithm appealing to many applications, whether for Internet broadcasting, medical imaging or digital photography [2]. The main JPEG2000 coding steps are shown in Fig. 1. Several features are available for encoding, such as progressive quality and/or resolution reconstruction, fast random access to compressed image data, and the ability to encode different regions of the image, called regions of interest (ROI).

Fig. 1. Overview of JPEG2000 coding process.

The JPEG2000 standard can be broken down into several successive blocks. The original image is cut into tiles after the component transformation. All the tiles are then transformed into wavelets (transformation with or without loss), independently of each other [3]. The wavelets used in the JPEG2000 standard are bi-orthogonal, that is to say, different wavelets are used for decomposition and reconstruction. Two types of bi-orthogonal wavelets are used: the Daubechies 9/7 and Le Gall 5/3 wavelets [4], [5]. These two wavelets are chosen according to the type of compression desired, lossless or lossy. The Le Gall 5/3 wavelets, which perform a reversible transform, are used for lossless compression. The Daubechies 9/7 wavelets, which perform an irreversible transform, are used only for lossy compression. The coefficients of the code-block undergo quantization, and the quantized coefficients are decomposed into bit planes. The quantization minimizes the number of bits necessary for coding the coefficients supplied by the preceding block, by retaining only the minimum number of bits making it possible to obtain a certain quality level [6], [7]. Based on the wavelet decomposition technique, JPEG2000 is very different from previous standards and has many advantages that will allow it to be adopted in a wide range of applications, or even to be extended to video encoding.
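Since the 5/3 path must be exactly invertible for lossless coding, the lifting formulation can be illustrated with a short Python sketch. The function names and the simplified mirror boundary handling are ours; this is an illustration of the integer lifting scheme, not the standard's full Annex F procedure.

```python
def legall53_forward(x):
    """One-level 1-D Le Gall 5/3 lifting: returns (low-pass s, high-pass d).

    Integer arithmetic only, so the transform is reversible; a simplified
    mirror extension handles the borders (even-length input assumed).
    """
    n = len(x)
    assert n % 2 == 0 and n >= 2
    xm = lambda i: x[i] if i < n else x[2 * (n - 1) - i]   # mirror right edge
    half = n // 2
    # predict step: detail coefficients at odd positions
    d = [x[2 * i + 1] - (x[2 * i] + xm(2 * i + 2)) // 2 for i in range(half)]
    dm = lambda i: d[i] if i >= 0 else d[0]                # mirror left edge
    # update step: smoothed coefficients at even positions
    s = [x[2 * i] + (dm(i - 1) + d[i] + 2) // 4 for i in range(half)]
    return s, d

def legall53_inverse(s, d):
    """Undo the update step, then the predict step, in reverse order."""
    half = len(s)
    dm = lambda i: d[i] if i >= 0 else d[0]
    even = [s[i] - (dm(i - 1) + d[i] + 2) // 4 for i in range(half)]
    em = lambda i: even[i] if i < half else even[half - 1]
    odd = [d[i] + (even[i] + em(i + 1)) // 2 for i in range(half)]
    out = []
    for e, o in zip(even, odd):
        out += [e, o]
    return out
```

Because both steps use the same floor divisions, the inverse undoes them exactly, which is what makes the 5/3 path suitable for lossless compression.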
In contrast, this type of compression requires much more computational power than the original JPEG process, which makes software implementations unsuitable when very fast processing is required. Fig. 2 shows the comparison between JPEG and JPEG2000 in terms of performance; we note that the performance of JPEG2000 is greater than that of the JPEG standard.

Fig. 2. Comparison between JPEG and JPEG2000 in terms of performance.

Due to its many successive processes, JPEG2000 requires more computing power than JPEG to achieve similar encoding and decoding speeds. A hardware solution is therefore indispensable for fast applications. The JPEG2000 image compression standard was created to meet the new requirements arising from the diversification of applications in the multimedia field. The many features that it offers breathe new life into this sector. However, they have led to an increase in the complexity of the algorithm compared with existing standards. Faced with this complexity, a hardware encoder is the solution that can satisfy the real-time constraints of certain applications. This paper presents an FPGA-based accelerator core for JPEG2000 encoding, and a comparison with various FPGA implementations is provided. The contributions of this work are as follows:

1) The proposed high-speed, efficient MQ-coder architecture modifies the probability estimation (Qe) representation to minimize memory consumption. The modification in probability estimation reduces the bitwise representation to 13 bits.

2) Owing to the smaller memory occupation, the time and power required for hardware-based JPEG2000 compression are reduced. Thereby, the operating speed is improved (higher operating frequency) with the help of the proposed MQ encoder for real-time image processing.
3) The minimization of the bitwise representation in the proposed MQ-coder architecture reduces the count of memory elements to 32 × 13 = 416, which further preserves silicon (Si) area in compact chip development.

4) The optimization of the Renormalization and Byteout modules helps speed up the proposed architecture.

The remainder of this paper is organized into six sections. After the introduction, Section 2 details the JPEG2000 MQ encoder. Previously proposed hardware architectures for the MQ-coder are described in Section 3. Section 4 describes the proposed hardware architecture of the MQ coder. In Section 5, experiments and results are detailed. Finally, this paper is concluded in Section 6.

II. JPEG2000 MQ-CODER

The arithmetic coder used in JPEG2000, called the MQ encoder, takes as inputs the binary values D and the associated contexts CX resulting from the preceding step of binary modeling of the coefficients, in the order of the coding passes. Fig. 3 shows the arithmetic encoder inputs and outputs.

Fig. 3. The inputs and outputs of MQ-Coder.

Rather than representing the intervals associated with the probabilities of "0" or "1", it was chosen to represent the data using the LPS (Less Probable Symbol) and MPS (More Probable Symbol) symbols, representing the occurrence probabilities of the minority and majority symbols respectively. Obviously, it is necessary to keep track of which of the two values, "0" or "1", is currently the minority symbol. The current interval is thus represented by the unit interval, which is then divided into two sub-intervals corresponding to the minority and majority symbols. From a representation point of view, the LPS is always given the lower interval. Each binary decision, represented by a bit, subdivides the interval recursively; the divisions follow Elias coding, with the probability estimate split between MPS and LPS. The binary sequence from the MQ coder is divided into a number of packets.
Each of them contains the bit-stream corresponding to the same component, the same resolution level, the same quality layer and the same spatial zone of the resolution level. The spatial areas of each resolution level are called precincts. Each of the packets is preceded by a header containing information that very precisely identifies the data conveyed by this packet. Four different progression orders are defined in JPEG 2000. They make it possible, during decoding, to obtain in priority either the data of the same component, or those of the same resolution level, or those of the same quality layer, or those of the same spatial zone of the image. In JPEG2000, the arithmetic coder is realized by means of an index table. The table represents the LPS probability estimate (Qe). For each input pair (decision, context), we look up the most probable symbol in a variable containing the different states. As each state is represented in the index table, the context can be associated with an index into the table. On its side, the decoder has a replica of the index table, which makes it possible to carry out the decoding.

The Finite State Machine (FSM) with 47 states clearly defines the structure of the Probability Estimation Table. The number of calculations needed to obtain the coding information and the utilization of resources are high, which degrades hardware performance. The hardware modeling of the MQ-coder has the following limitations:

5) Low clock frequency.

6) High consumption of LUTs and registers.

7) High hardware resource requirements.

The large number of context and decision pairs in the MQ encoder turns the parallel operation into a serial operation; the new architecture addressing this is called the high-speed MQ encoder architecture.
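The index-table mechanism described above can be sketched in Python as follows. Only the first six of the 47 (Qe, NMPS, NLPS, SWITCH) rows of the standard's probability estimation table are reproduced, and the class and method names are ours; the sketch shows how a context walks through the table, not the full coder.

```python
# Truncated excerpt of the standard's 47-row probability estimation table.
QE_TABLE = [
    # (Qe,    NMPS, NLPS, SWITCH)
    (0x5601, 1,  1,  1),
    (0x3401, 2,  6,  0),
    (0x1801, 3,  9,  0),
    (0x0AC1, 4,  12, 0),
    (0x0521, 5,  29, 0),
    (0x0221, 38, 33, 0),   # index 5; the full table continues to index 46
]

class ContextState:
    """Per-context state: table index I(CX) and current MPS sense."""

    def __init__(self):
        self.index = 0
        self.mps = 0

    def qe(self):
        """Current LPS probability estimate for this context."""
        return QE_TABLE[self.index][0]

    def after_mps_renorm(self):
        """An MPS was coded and renormalization occurred: move to NMPS."""
        self.index = QE_TABLE[self.index][1]

    def after_lps(self):
        """An LPS was coded: possibly flip the MPS sense, then move to NLPS."""
        qe, nmps, nlps, switch = QE_TABLE[self.index]
        if switch:
            self.mps = 1 - self.mps
        self.index = nlps
```

Each coded symbol thus only updates a small per-context index, which is why the estimator maps naturally onto a ROM plus a small RAM in hardware.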
The storage of transformed coefficients in a code block consumes many registers, which leads to a large flip-flop (FF) requirement. Hence, reducing the code block based on the context-pair probability estimation reduces the number of lookup tables (LUTs) and slice registers, which leads to less memory consumption. The motivation behind the research work proposed in this paper is the reduction in memory, time and power consumption obtained by reducing the size of the bitwise representation.

III. RELATED WORKS

The main advantage of JPEG 2000 is to combine most of these qualities, allowing it to be used in a very wide range of applications. This flexibility, coupled with a very high compression efficiency, unfortunately has a price. Moreover, some applications have real-time aspects which impose very high throughput constraints. An architecture composed of three stages is proposed by Mei et al. in [5]. When implemented on an APEX20K FPGA board, it operates at 37.27 MHz. In this architecture, if the MPS state occurs then two symbols are coded simultaneously, otherwise a single symbol is coded. Shi et al. [8] proposed an MQ-coder hardware core that can process two symbols. This architecture is based on the following hypothesis: a maximum of two shifts occurs when there is a renormalization operation. The architecture proposed in [9] is composed of three blocks. The first block is responsible for initializing register A to 0x8000, register C to 0, the MPS(CX) table to 0 and the index table in the ILT RAM, and for performing all arithmetic operations. The second block is used to shift register A and register C and to decrement the counter CT by 1. If CT = 0, the third block is activated and register B is emitted as compressed data. The proposed architecture was implemented on a Stratix FPGA and works at a frequency of 83.271 MHz. The complexity of the JPEG 2000 algorithm is a problem for these real-time applications.
In [10], the author indicates that, given current technology, it is not possible for purely software implementations to respect the constraints imposed by these real-time applications. This is the reason why a growing number of companies and researchers are interested in (partially) hardware implementations of the standard, in which the computing resources have been optimized and the memory requirements reduced. Below we give an overview of the hardware realizations to date and the results obtained. As part of the PRIAM project, Thales Communications has developed an implementation of a JPEG 2000 encoder on an MPC74XX processor; this is studied in [11]. The MPC74XX processor is based on a PowerPC architecture (a RISC-type processor) to which is added a vector calculation unit called AltiVec. This allows multiple data sets to be processed in parallel in a single instruction. Unlike the other blocks in the decoding chain, the entropy coder, due to its non-systematic behavior, is complex to optimize by means of vector instructions. This realization gives overall very good results, but the entropy coder, requiring 400 cycles per 8-bit pixel, is truly the "bottleneck" of the system. Bonaldi [12] has worked on the creation of a mixed software-hardware encoder. The platform used is the ARM-Virtex card of the DICE unit. In particular, an input rate of 6.6 Mbps is supported for the entropy encoder. Moreover, everything concerning the formation of the bit-stream is carried out in software, on the ARM. This co-design approach is very judicious and is widely supported by the literature. The Amphion company offers an ASIC encoder-decoder available since 2003 [13]. Amphion announces speeds of 480 Mbps for encoding and 160 Mbps for decoding.
This implementation has interesting characteristics, such as few constraints on the format of the input images, a division of tasks between hardware and software, and an architecture compatible with the AMBA bus, which allows easy integration into other systems. Analog Devices [14] offers the ADV-JP2000. This circuit operates at a maximum frequency of 20 MHz and includes a 5/3 wavelet transform (but no 9/7) and an entropy encoder. The circuit is not fully compliant with the standard. The ADV-JP2000 offers two modes of operation: encode and decode. In the encode mode it accepts a single tile and generates the stream of code-blocks conforming to the standard. The ADV-JP2000 communicates via an asynchronous protocol but also allows an interrupt mode. Finally, the circuit supports DMA mode. Zhang et al. [15] proposed an architecture composed of four stages and three parts (P1, P2, P3). P1 is implemented in Stage 1 to determine the new value of Qe when A < 0x8000. P2 is called in Stage 2 and Stage 3, where it updates Reg_A and Reg_C and also performs the arithmetic and shift operations. Finally, P3 is used in Stage 4 to realize the bit stuffing when the counter CT is equal to 0. The processing frequency of this architecture was 110 MHz on an Altera FPGA card.

IV. PROPOSED ARCHITECTURE

The proposed architecture of the encoder is shown by the block diagram of Fig. 4. The pairs (CX, D) are received by the MQ coder as input, and a sequence of bytes called ByteOutReg is provided as output. This architecture consists of two parts: the part predicting the probability of the symbol to be coded, composed of 2 RAMs (ICX, MPS) and 4 ROMs (NMPS, NLPS, Switch, Qe), and the coding part, which is composed of a state machine. The four ROMs are not updated during the coding operation.
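The dispatch between the two coding paths can be modeled in a few lines of Python. The names mirror the signals in Fig. 4 (LPS_en, the ICX and MPS RAMs); the zero initialization of the index RAM is a simplification of ours, since the standard assigns specific initial indices to some contexts.

```python
NUM_CONTEXTS = 19            # 19 contexts, as used by the JPEG2000 bit modeler

def make_context_rams():
    """ICX RAM (table indices) and MPS RAM, as initialized in Initenc.
    All zeros here for simplicity; the standard presets a few contexts."""
    icx_ram = [0] * NUM_CONTEXTS
    mps_ram = [0] * NUM_CONTEXTS
    return icx_ram, mps_ram

def lps_enable(mps_ram, cx, d):
    """LPS_en signal: high when decision D is the less probable symbol."""
    return int(d != mps_ram[cx])

def dispatch(mps_ram, cx, d):
    """Select the coding path for one (CX, D) pair."""
    return "CODELPS" if lps_enable(mps_ram, cx, d) else "CODEMPS"
```

This is exactly the comparison the prediction part performs per pair before handing control to the state machine.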
The pairs (CX, D) are first read sequentially. The CX context is then placed on the address bus of the ICX RAM and the MPS RAM, and the values of I(CX) and MPS(CX) are read. The I(CX) index is then delivered to the four ROMs. MPS(CX) is compared with signal D, which generates the LPS_en signal. If this signal is equal to one, the CODELPS state is carried out; otherwise the CODEMPS state takes place. The updating of the ICX RAM depends essentially on the signal Ren_out: this signal is set to one if renormalization is carried out. The MPS RAM, however, is updated if the LPS_SW signal is equal to one. The probability estimation architecture is shown in Fig. 5.

Fig. 4. MQ-Coder architecture.

Fig. 5. Probability estimation architecture.

We then turn to the coding part, which manages the coding process with a finite state machine in which the various sub-algorithms are represented as states. The outputs depend on the current state and the inputs, and react directly to changes in the inputs. Fourteen states have been set up to describe the MQ encoder process. Fig. 6 shows the MQ-Coder state machine.

Fig. 6. MQ-Coder state machine.

The states used in this state machine are as follows:

8) Repos (idle): This state depends essentially on the input "go"; if go = 1 it switches to the INITENC state, otherwise it remains in the same state.

9) Initenc: In this state, register Reg_A is initialized to 0x8000, register Reg_C to 0 and counter CT to 12. Then the MPS RAM is filled with 0s and the index RAM with the 19 possible values of the context CX. The machine then moves automatically to the Read state. The index/probability tables must be present in memory before coding begins.

10) Read: In this state, the context is read to deduce the value of the corresponding MPS(CX) according to the table initialized in INITENC. Decision D is also read.
11) CODAGE (coding): If decision D = MPS(CX), the machine switches to the CODEMPS state; otherwise it goes to the CODELPS state.

12) CODEMPS: Register Reg_A is reduced by Qe_reg. Then Reg_A is compared with 0x8000. If Reg_A is less than 0x8000, the index is updated according to the NMPS table entry for the context index and the Renorme state is entered. If register Reg_A is greater than 0x8000, the probability Qe_reg is added to register Reg_C and the machine passes to the Finis state.

13) Finis: In this state, if nbpair (the number of (CX, D) pairs read) equals 256, the machine passes to the flush state; otherwise the next state is Read.

14) CODELPS: In this state, register Reg_A is set to Reg_A − Qe_reg. Then, if Reg_A is greater than Qe_reg, register Reg_A takes the value of Qe_reg; otherwise the probability Qe_reg is added to register Reg_C. The interval-inversion condition (conditional exchange) is always checked: if a SWITCH is required, the sense of the MPS is reversed. The index takes a new value according to the NLPS table, and the machine changes to the Renorme state.

15) Renorme: The contents of Reg_A and Reg_C are shifted left by one position. This shift repeats until the value of Reg_A rises above 0x8000. The counter CT, containing the number of shifts of Reg_A and Reg_C, is decremented at each shift. When the counter CT reaches 0 (CT was initially at 13, i.e., 13 left shifts were made on Reg_A and Reg_C), the machine passes to byteout1; if Reg_A is still less than 0x8000, it returns to the Renorme state as soon as the byteout is finished. The optimization of the Renormalization procedure is presented in Fig. 7.
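Assuming these states implement the standard CODEMPS / CODELPS / RENORME flowcharts (ITU-T T.800 Annex C), their register dataflow can be modeled behaviorally in Python. The class below is a sketch of ours: the 47-entry state table is stubbed out by a single fixed Qe and the byte-out is simplified, so it illustrates the interval arithmetic and renormalization only, not the authors' RTL.

```python
class MQSketch:
    """Register-level sketch of the MQ-coder coding path."""

    def __init__(self, qe=0x5601):
        self.qe = qe        # fixed LPS probability estimate (table stub)
        self.a = 0x8000     # interval register (Reg_A), 16 bits
        self.c = 0          # code register (Reg_C)
        self.ct = 12        # shift counter (CT)
        self.out = []       # emitted bytes

    def code_mps(self):
        self.a -= self.qe
        if not (self.a & 0x8000):       # renormalization needed
            if self.a < self.qe:        # conditional exchange
                self.a = self.qe
            else:
                self.c += self.qe
            self.renorme()              # (index would move to NMPS here)
        else:
            self.c += self.qe

    def code_lps(self):
        self.a -= self.qe
        if self.a < self.qe:            # conditional exchange
            self.c += self.qe
        else:
            self.a = self.qe
        self.renorme()                  # (index -> NLPS, possible MPS switch)

    def renorme(self):
        # shift A and C left until A is back above 0x8000, emitting a byte
        # whenever the shift counter runs out
        while True:
            self.a <<= 1
            self.c <<= 1
            self.ct -= 1
            if self.ct == 0:
                self.byteout()
            if self.a & 0x8000:
                break

    def byteout(self):
        # simplified byte-out; the real coder adds 0xFF bit-stuffing
        self.out.append((self.c >> 19) & 0xFF)
        self.c &= 0x7FFFF
        self.ct = 8
```

After every coding step the interval register is renormalized back above 0x8000, which keeps the fixed-point probability estimates meaningful.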
16) Byteout1: This state can be reached in two ways: from the Renorme state when the shift counter CT becomes equal to zero, or at the end of coding when the registers are flushed. The optimization of the Byteout procedure is presented in Fig. 8.

17) FLUSH: This is the state reached towards the end of the encoding (in our case, when nbpair = 256). The FLUSH procedure contains two calls to Byteout1 and two calls to Setbit; hence the idea of subdividing it into three states: the first is flush_1, which ends with a call to byteout1; the second is flush_2 (same principle as flush_1); and the third is flush_3.

a) Flush_1: This state contains two sub-states, the first being Setbit and the second byteout1. First a call is made to the Setbit state, then a shift is applied to register Reg_C, then a call is made to byteout1, and the state ends with a call to flush_2.

i) Setbit: In this state, after the Reg_C shift the machine automatically moves on to byteout1, regardless of the Reg_C register value (i.e., whether Reg_C is lower or higher than TEMPC).

b) Flush_2: This state follows the same principle as flush_1, but this time the machine passes to state flush_3.

c) Flush_3: This state contains the end of the flush, when the first end marker 0xFF has to be inserted. The next state is the idle state (Repos).

Fig. 7. (a) Original RENORME architecture; (b) optimized RENORME architecture.

Fig. 8. (a) Original BYTEOUT architecture; (b) optimized BYTEOUT architecture.

V. EXPERIMENTAL RESULTS

A. Simulation

Simulation of the proposed design, written in VHDL, was carried out with Mentor Graphics tools. The ByteOutReg bytes coincide with those in column B, and the parameters IDCX, MPS, Qe_reg, Reg_A, Reg_C and Reg_CT evolved appropriately. This result has been verified, and we have chosen a sequence to visualize in simulation and explain in parallel.
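Before looking at the waveforms, the Byteout1 behavior, including the 0xFF bit-stuffing that prevents accidental marker codes in the codestream, can be sketched in Python following the standard's BYTEOUT flowchart. The helper names are ours; the byte buffer is truncated to 8 bits just as the hardware register would do.

```python
def byteout(state):
    """One BYTEOUT step on state = {'b': buffered byte or None,
    'c': Reg_C, 'ct': CT, 'out': emitted bytes}."""
    if state["b"] == 0xFF:                  # last byte was FF: stuff a bit
        _emit(state, state["c"] >> 20, 0xFFFFF, 7)
    elif state["c"] < 0x8000000:            # no carry pending
        _emit(state, state["c"] >> 19, 0x7FFFF, 8)
    else:                                   # propagate the carry into B first
        state["b"] += 1
        if state["b"] == 0xFF:              # carry created a new FF
            state["c"] &= 0x7FFFFFF
            _emit(state, state["c"] >> 20, 0xFFFFF, 7)
        else:
            _emit(state, state["c"] >> 19, 0x7FFFF, 8)

def _emit(state, new_b, c_mask, ct):
    # commit the buffered byte, load the next one from Reg_C, clear the
    # transferred bits and reload the shift counter
    if state["b"] is not None:              # first call has nothing to flush
        state["out"].append(state["b"])
    state["b"] = new_b & 0xFF               # 8-bit register truncation
    state["c"] &= c_mask
    state["ct"] = ct
```

After a 0xFF byte only 7 bits are taken from Reg_C (CT = 7), so no two-byte sequence above 0xFF8F can appear inside the compressed data.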
For clarity, Fig. 9 displays the following signals from the simulation flow: the compressed data Byteout_Reg, the index IDCX, the counter Reg_CT, the probability Qe_reg, and the states. Table I summarizes the simulation results of the MQ encoder, namely from decision no. 28 to decision no. 34.

B. Synthesis Results

The proposed design was implemented on Xilinx platforms: XC6SLX75T, XC5LX30T and XC4VLX80 devices, using the Xilinx ISE tools version 14.1. The synthesis results of the architecture are shown in Table II. The proposed MQ encoder design gives the best result, in terms of hardware resources (the number of LUTs, slices and flip-flops consumed) and operating frequency, when implemented on a Virtex 6 platform. Concerning the frequencies obtained, we note that our architecture meets the real-time criteria. The design has a maximum frequency of 423.2 MHz on the Virtex 6 (XC6SLX75T) device.

C. Comparison

A comparative study with other existing designs in the literature has been made; the Virtex 4 XC4VFX140 platform is used for this comparison. The performance comparison of our design with the architecture proposed in [16] is shown in Table III. Our proposed design codes frames in real time at a frequency of 244.475 MHz and requires only 455 slices. The throughput of some MQ coder architectures compared with our proposed architecture is presented in Table IV; it is calculated from the reported symbol consumption rate and operating frequency. It is found that our architecture encodes frames at a frequency 3.31 and 2.29 times higher than that of architectures [16] and [17], respectively. Table V shows the comparisons of logic area, memory requirement, and estimated memory area of several previous works [5], [7], [15]-[20]. The total area of the proposed architecture is less than that of each previous work.
However, the hardware cost of the word-based architecture is larger than that of the proposed architecture. The proposed design can code 40 frames per second for high-definition TV at 1920p when running at 254.84 MHz on a Stratix II.

Fig. 9. Wave simulation for MQ coder architecture.

TABLE. I. EXAMPLE EXTRACTED FROM THE SIMULATION OF THE MQ ENCODER

Symbol | D | IDCX | Reg_A  | Reg_C      | CT | Byteout_Reg
10     | 1 | 28   | 0xE80E | 0x000C6D52 | 7  | 0x84
11     | 0 | 26   | 0x9008 | 0x00636A90 | 4  | 0x84
12     | 0 | 27   | 0xF40E | 0x00C70122 | 3  | 0x84
13     | 0 | 27   | 0xE00D | 0x00C71523 | 3  | 0x84
14     | 1 | 27   | 0xCC0C | 0x00C72924 | 3  | 0xC7
25     | 0 | 25   | 0xA008 | 0x00014920 | 8  | 0xC7
26     | 0 | 25   | 0x8807 | 0x00016121 | 8  | 0xC7

TABLE. II. RESULTS OF SYNTHESIS

Used Platform           | XC6SLX75T  | XC5VLX50T | XC4VLX80
Maximum Frequency (MHz) | 423.2      | 336.304   | 264.2
No. of 4-input LUTs     | 540/343680 | 523/28800 | 766/71680
Total used slices       | 251/687360 | 247/28800 | 396/35840
Total FF slices         | 177/693    | 176/659   | 247/71680

TABLE. III. PERFORMANCE COMPARISON (USED FPGA: XC4VFX140)

                     | Proposed Architecture | [15]
Max. Frequency (MHz) | 244.475               | 185.43
Used slices          | 455                   | 495
Used FF slices       | 292                   | 392
Used 4-input LUTs    | 865                   | 893
Used BRAMs           | 1                     | 2

TABLE. IV. THE THROUGHPUT OF SOME DESIGNS TESTED ON VIRTEX 4 XC4VFX140

Design      | Number of pairs | Frequency (MHz) | Throughput (MS/s)
Design [7]  | 2               | 50.1            | 100.2
Design [21] | 1               | 185.43          | 185.43
Design [18] | 1.23            | 53.92           | 66.38
Design [19] | 2               | 48.3            | 96.6
Proposed    | 1               | 244.475         | 244.475

VI. CONCLUSION

This paper discussed the problems in the real-time implementation of an FPGA-based MQ coder architecture. The MQ-coder, used in both the encoding and decoding stages, performs the probability estimation of the coefficients, and the Renorme and Byteout modules were optimized. Increased computational overhead requires more power and energy consumption; this paper therefore reduces the bitwise computation in order to reduce the number of computational steps.
The minimization of computational steps decreases the power consumption and time delay. The proposed PET architecture reduced the bitwise representation from 13 bits to 12 bits, which reduced the number of memory elements from 416 to 348 compared to the existing MQ-coder architecture; the size of the PET ROM is therefore 1376 bits. An embedded architecture of the MQ coder for JPEG2000 is designed and implemented in this paper. The implementations carried out during this work showed that the proposed MQ encoder architecture operates at a frequency of 423.2 MHz on the Virtex 6 XC6SLX4 device and can code 40 frames per second for the high-definition TV application. The proposed architecture is easily expandable to 2048×1080 resolution video at 45 fps. It can be used in several applications such as Internet broadcasting, medical imaging and digital photography. Moreover, the processing time was improved by about 13.6% in comparison with well-known architectures from the literature.

ACKNOWLEDGMENTS

The authors wish to acknowledge the approval and support of this research study by grant N° CIT-2016-1-6-F-5718 from the Deanship of Scientific Research at Northern Border University, Arar, KSA.

TABLE V. COMPARISON WITH OTHER MQ CODER ARCHITECTURES

Architecture  FPGA family  Device used           Clk (MHz)  No. of LEs  Symbol/Clk  Throughput (MS/s)
[5]           APEX20K      EP20K600EFC672-3      37.27      1256        2           74.54
[7]           Stratix      N/A                   50.10      1596        2           100.2
[15]          Stratix      N/A                   40.53      12649       2           81.6
[20]          Stratix II   EP2S15F484C3          106.2      1321        2           212.4
[22]          APEX20K      EP20K1000EFC672-1X    9.25       14711       1           9.25
[23]          Stratix      N/A                   27.05      761         1           57.05
[24]          Stratix      N/A                   106.02     1267        2           210
[16]          Stratix      EP2S90F1020I4         58.56      1488        2           117
[17]          Stratix      EP1S10B672C6          145.9      824         1           145.9
Proposed      Stratix II   EP2S15F484C3          254.84     603         1           254.84

REFERENCES

[1] JPEG 2000 image coding system, ISO/IEC International Standard 15444-1, ITU-T Recommendation T.800, 2000.
[2] D. S. Taubman and M. W. Marcellin, JPEG2000: Image Compression Fundamentals, Standards, and Practice, 2002.
[3] T. Acharya and P. Tsai, JPEG2000 Standard for Image Compression: Concepts, Algorithms and VLSI Architectures, J. Wiley & Sons, 2005.
[4] JASPER Software Reference Manual, ISO/IEC/JTC1/SC29/WG1 N2415.
[5] K. Mei, N. Zheng, C. Huang, Y. Liu, Q. Zeng, "VLSI design of a high-speed and area-efficient JPEG 2000 encoder," IEEE Transactions on Circuits and Systems for Video Technology 17 (8) (2007) 1065-1078.
[6] L. Horrigue, T. Saidani, R. Ghodhbani, J. Dubois, J. Miteran, M. Atri, "An efficient hardware implementation of MQ decoder of the JPEG2000," Microprocessors and Microsystems 38 (2014) 659-668.
[7] L. Liu, N. Chen, H. Meng, L. Zhang, Z. Wang, H. Chen, "A VLSI architecture of JPEG 2000 encoder," IEEE Journal of Solid-State Circuits 39 (11) (2004) 2032-2040.
[8] Jiangyi Shi, Jie Pang, Zhixiong D, Yunsong Li, "A novel implementation of JPEG2000 MQ-coder based on prediction," International Symposium on Distributed Computing and Applications to Business, Engineering and Science, 2011, pp. 179-182.
[9] K. Sarawadekar and S. Banerjee, "VLSI design of memory-efficient, high-speed baseline MQ coder for JPEG 2000," Integration, the VLSI Journal 45 (2012) 1-8, doi:10.1016/j.vlsi.2011.07.004.
[10] J. Hunter, "Digital cinema reels from motion JPEG 2000 advances," January 2003, http://www.eetimes.com/story/OEG20030106S0034.
[11] C. Le Barz and D. Nicholson, "Real time implementation of JPEG 2000," June 2002.
[12] C. Bonaldi and Y. Renard, "Conception et réalisation d'un codeur JPEG 2000 sur une carte Virtex-ARM," Laboratoire de Microélectronique (DICE), UCL, June 2001.
[13] Amphion, "CS6590 JPEG 2000 codec preliminary product brief," October 2002,
http://www.amphion.com.
[14] D. Taubman and M. W. Marcellin, JPEG2000: Image Compression Fundamentals, Standards and Practice, Kluwer Academic Publishers, Nov. 2001.
[15] K. Liu, Y. Zhou, Y. Song Li, J. F. Ma, "A high performance MQ encoder architecture in JPEG2000," Integration, the VLSI Journal 43 (3) (2010) 305-317.
[16] P. Zhou, Z. Bao-jun, "High-throughput hardware architecture of MQ arithmetic coder," in: 10th IEEE International Conference on Signal Processing (ICSP), 2010.
[17] K. Sarawadekar, S. Banerjee, "An efficient pass-parallel architecture for embedded block coder in JPEG 2000," IEEE Transactions on Circuits and Systems for Video Technology 22 (6) (2011) 825-836.
[18] M. Dyer, D. Taubman and S. Nooshabadi, "Concurrency techniques for arithmetic coding in JPEG2000," IEEE Transactions on Circuits and Systems for Video Technology, vol. 53, 2006, pp. 1203-1213.
[19] K. Sarawadekar and S. Banerjee, "Low-cost, high-performance VLSI design of an MQ coder for JPEG 2000," ICSP 2010, 2010, pp. 397-400.
[20] N. Ramesh Kumar, W. Xiang, Y. Wang, "Two-symbol FPGA architecture for fast arithmetic encoding in JPEG 2000," Journal of Signal Processing Systems 69 (2) (2012) 213-224.
[21] T. Saidani, M. Atri, L. Khriji, R. Tourki, "An efficient hardware implementation of parallel EBCOT algorithm for JPEG2000," Journal of Real-Time Image Processing 11 (2013) 1-12.
[22] Varma, H. Damecharla, A. Bell, J. Carletta, G. Back, "A fast JPEG2000 encoder that preserves coding efficiency: the split arithmetic encoder," IEEE Transactions on Circuits and Systems I: Regular Papers 55 (11) (2008) 3711-3722.
[23] M. Dyer, S. Nooshabadi, D. Taubman, "Design and analysis of system on a chip encoder for JPEG 2000," IEEE Transactions on Circuits and Systems for Video Technology 19 (2) (2009) 215-225.
[24] N. R. Kumar, W. Xiang, Y. Wang, "An FPGA-based fast two-symbol processing architecture for JPEG 2000 arithmetic coding," in: IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2010, pp. 1282-1285.

work_ccjsqzm2j5cpbgji2y43tllcre ----

PHOTO-ELICITATION: USING PHOTOGRAPHS TO READ RETAIL INTERIORS THROUGH CONSUMERS' EYES

Ann Petermans, PHL University College & Hasselt University, Department of Arts and Architecture, Belgium, ann.petermans@phl.be
Anthony Kent, University of the Arts London, London College of Fashion, United Kingdom, a.kent@fashion.arts.ac.uk
Koenraad Van Cleempoel, PHL University College & Hasselt University, Department of Arts and Architecture, Belgium, koenraad.vancleempoel@phl.be

The researchers firstly would like to thank the interior architecture students who assisted in the process of data collection. Secondly, they wish to express their highest gratitude to Prof. dr. Wim Janssens, Prof. dr. Pieter Desmet, dr. Jan Vanrie and interior architect Philippe Swartenbroux for proofreading this article and for their valuable feedback and discussion.

Please send correspondence to Ann Petermans, PHL University College & Hasselt University, Department of Arts and Architecture, Agoralaan Building E, 3590 Diepenbeek, Belgium (ann.petermans@phl.be).

Ann Petermans: PHL University College & Hasselt University, Department of Arts and Architecture, Agoralaan Building E, 3590 Diepenbeek, Belgium. Telephone: 0032 11 24 92 11, Fax 0032 11 24 92 01. Email: ann.petermans@phl.be.
Anthony Kent: University of the Arts London, London College of Fashion, 20 John Prince's Street, London W1G 0BJ, United Kingdom. Email: a.kent@fashion.arts.ac.uk.
Koenraad Van Cleempoel: PHL University College & Hasselt University, Department of Arts and Architecture, Agoralaan Building E, 3590 Diepenbeek, Belgium.
Email: koenraad.vancleempoel@phl.be

ABSTRACT

Researchers studying experiences in retail environments have typically restricted their attention to examining the influence of individual atmospheric variables on customer behavior. In this respect, photographs and video are common environmental simulation techniques. This research approach concerns not only researchers active in consumer culture theory, but also interior architects and retail designers. As holistically inspired practitioners, they maintain that interiors function as 'Gestalt' environments, interacting with their users. Inspired by their viewpoints, in this paper the authors reflect on the use of the inductive, holistically inspired method of photo-elicitation in research concerning experiences in retail environments. In addition, they report on the application of photo-elicitation in two empirical projects. The findings demonstrate the value of photo-elicitation in gaining insight into customer experiences in retail interiors.

Keywords: photo-elicitation, retail design, retail interiors, customer experiences

1. STUDYING RETAIL INTERIORS

With the development of marketing research in co-creativity between producer and consumer (Lusch & Vargo, 2006a, 2006b) and studies of consumer experience, it has become important to re-assess the service environment. The rapid growth of consumption in developing economies, but also the continuing development of retail formats and omni-commerce in more mature markets, further supports the store environment as an important focus for new research and an examination of complementary research methodologies.
The aim of this research is to develop understanding of retail environments as sites of complex visual sensory experiences, and of the application of photo-elicitation as a research methodology. The article is organised as follows: firstly, a discussion of the literature on current research approaches used to obtain knowledge about customer experiences in retail environments; secondly, an explanation of visual methodologies and their application in the social sciences, with photo-elicitation in particular as a methodology for studying customer experiences in retail environments; thirdly, an analysis of the application of photo-elicitation in two empirical projects. Finally, the authors discuss the contribution of photo-elicitation to research methodology, together with managerial implications and future research opportunities.

2. RESEARCH APPROACHES TO GAIN INSIGHT INTO CUSTOMER EXPERIENCES IN RETAIL ENVIRONMENTS

The effect of the store environment on its users is reported in the literatures of environmental psychology, marketing, architecture and design (Greenland & McGoldrick, 1994, 2005). In the past few decades, a range of studies has demonstrated that consumer behavior towards stores and store patronage is influenced, at least to some degree, by the store environment (e.g., Donovan & Rossiter, 1982; Baker, Levy & Grewal, 1992; Baker, Grewal & Parasuraman, 1994; Lam, 2001; Davies & Ward, 2002; Baker, Parasuraman, Grewal & Voss, 2002). There are various possible research approaches to obtaining information about consumers in this respect, and in this section the authors distinguish between positivistic and interpretive research approaches.

Positivistic research approaches

When focusing on the question of how customers experience retail environments, two strands of research are significant.
Firstly, environmental psychology provides a theoretical foundation for emotional responses through the Stimulus-Organism-Response (SOR) model, defined by Mehrabian & Russell (1974) and subsequently developed for retail environments by Donovan and Rossiter (1982), to examine behavior through the Pleasure-Arousal-Dominance model (Fiore & Kim, 2007). Their influence is found in the 'servicescapes' concept, which specifically focuses on customer and employee interactions in the physical environment (Bitner, 1992). Secondly, research in 'atmospherics' (Kotler, 1973) has developed understanding of consumer evaluations of, and subsequent behavior in, the retail environment (Turley & Milliman, 2000). The atmospherics research tradition focuses on the influence of individual variables, mostly controlled by the retailer, on consumer responses (Verhoef, Lemon, Parasuraman, Roggeveen, Tsiros & Schlesinger, 2009). These studies tend to be dominated by positivistic research approaches, typified by verified facts, systems of organisation, law-like generalisations, and tested hypotheses. Data collection via experimentation or surveys, in which questionnaires are among the most common quantitative research tools, thus helps to support or refute theories (Lincoln & Guba, 2000; Creswell, 2003).

Interpretive research approaches

Interpretivism provides an alternative paradigm to post-positivistic, quantitative research design. It aims to gain insights into customer experiences in retail environments, and holds that in retail practice multiple stimuli interact and influence consumers' in-store experiences (Healy, Beverland, Oppewal & Sands, 2007; Sands, 2008; Author 1 & Author 3, 2010). As consumers' overall perceptions of a specific store can also influence their overall preference for that store (Thang & Tan, 2003), there is increased interest in undertaking research in retail settings more 'holistically'.
Inspired by writings originating in phenomenology and Gestalt thinking about designed environments (i.e., a frame of reference wherein a place is considered a totality, a whole where various elements interact so that users experience the space as more than the sum of its constituent parts), various researchers aim to explain how consumers perceive, and are influenced by, the entire in-store atmosphere (Healy et al., 2007; Sands, 2008). The centrality of the consumer in the shopping environment is supported by Consumer Culture Theory (CCT), which explains that existing research on the customer-environment relationship (i.e., in the atmospherics tradition) has often minimized the importance of situation and context (Mariampolski, 1999; Penaloza, 1999; Arnould & Thompson, 2005). Research in this stream of literature focuses on consumers' use of a designed space for the production of personally significant experiences, meanings and purposes (Arnould, Price & Tierney, 1998; Sherry, 1998). Notably, business and management research tends to marginalise the research methodologies of the designers themselves and their holistic, non-linear approach to design problems. It has long been understood that buildings can have a symbolic meaning beyond their instrumental purpose (Rapoport, 1982; Berg & Kreiner, 1990). More overtly, the outcome of the design process in commercial projects that integrate both the exterior and interior of the store is consistent brand communication (Din, 2000).
Hence the relevance of interpretive research (Arnould & Thompson, 2005; Carù & Cova, 2007), in which researchers make use of participant-based methodologies for studying consumer behavior and experiences in designed environments. Despite the clear advantages of positivistic approaches, they seem to have a limited practicality for gaining insight in customers’ impressions about design attributes (Pullman & Robson, 2007). Interpretive approaches, and visual research methodologies in particular, are appropriate when researchers aim to gain insights into customer experiences in designed spaces. Taken into account that the researchers aimed to gain insight into customers’ experiences in selected retail environments, rather than employing positivistic methodologies, they applied the interpretive approach of photo-elicitation. 3. VISUAL RESEARCH METHODOLOGIES The foregoing perspectives demonstrate that visual research can contribute to understanding both the symbolic and physical meanings of the built environment (Schroeder, 1998, 2002; Harper, 2002; Prosser, 2007). Within the field of visual research or visual methodology, different authors have identified four visual research approaches, namely (i) acknowledging images as data themselves, that is, visuals signs and symbols that allow to gain insight in the cultures and people that produced them (ii) often in anthropology, using images as a way to truly document social, cultural and physical processes as they are happening (iii) employing images as stimuli to elicit information from participants whereby the image is produced by someone other than the research participant and (iv) using images to help participants to 6 express their feelings, beliefs and so on either as an aid to verbal narrative, or in place of it. In this respect, the produced visual material functions as a communicative tool whereby the images themselves have been produced by the research participants themselves (Harper, 2002; Warren, 2005). 
This is the approach that the researchers will discuss further in the paper under the term of ‘photo-elicitation’. Since the publication of a text on photography as a research method by Wagner (1979), there has been a continuous growth ‘in handing the camera to those whose lives we wish to explore…because photography offers opportunities for research participants to express their subjectivities as – quite literally – their view of the world’ (Warren, 2005, p. 865). From the first indications of the use of photography as a research method onwards, visual research methods mostly were successful in disciplines with traditional ethnographic histories, namely anthropology and sociology (Berg, 2008). Relatively recent, also researchers in consumer behavior and marketing have starten to become interested in photography as a research method to explore consumers’ perceptions about and experiences in retail environments (e.g., Burt, Johansson & Thelander, 2005, 2007; author 2 & other author, 2009). 3.1 Photo-elicitation Photo-elicitation refers to the method whereby a researcher asks research participants to make photographs that depict some aspect of their experiences (Warren, 2005). Afterwards, the photographs are inserted into a research interview, and act as a communication bridge between the interviewer and the interviewee. The method of photo-elicitationi was proposed for the first time by John Collier in 1957, as a means to overcome practical problems which he encountered in his research. From the 1990s onwards, there has been a steady growth of interest towards using photo-elicitation as a research methodology. This can be clarified by various developments: firstly, the recent rise of interest in photo-elicitation certainly is 7 indebted by researchers’ everyday immersion in multiple visual signs and photographs, which has stimulated Warren (2005, p. 866) to propose that academic research also becomes ‘subject to a visual turn’. 
Secondly, the interest in photo-elicitation can be explained by the ‘post- modern turn’ (Warren, 2005) in social science which provides a methodological underpinning for visual research that overcomes problems in articulation and discourse. Derived from structuration theory, it accepts activity and practice, that everyone has high levels of ‘practical consciousness’ and that people are more knowledgeable than they actually can tell (Giddens, 1984). Thirdly, Warren (2005) indicates that for researchers active in organization studies (particular ethnographic studies) issues such as reflexive practice, subjectivity, and immersion in the worlds they research have become important. The subjective nature of photography lends itself particularly well to answer these calls, as ‘the photograph almost literally acts a lens through which we see what others ‘see’ and importantly, deem important enough to ‘capture’ with a camera’ (Warren, 2005, p. 866). Fourthly, Pink (2007) argues that the visual is a multi-sensory encounter and should be integrated with other sensory modalities. She accepts that photographs are never merely visual but in fact invoke synaesthetic and kinaesthetic effects, as the visual provokes sensory responses. Some elements of human experience are best represented visually and Pink (2007) sees the relationship between visual and other senses as key to understanding how everyday experiences are constituted. After having theoretically reflected on the interpretive, holistically inspired visual research method of photo-elicitation, in the following section the researchers present the results of two empirical research projects wherein they aimed to gain insight in research participants’ in- store experiences with the help of photo-elicitation. 8 4. EMPIRICAL PROJECTS 4.1 Empirical research project one 4.1.1 Research design The first research project was conducted at two food superstores in a large metropolitan area. 
One store had traded for over ten years, the other for five years; both stores were located on out-of-centre sites with large car parks. While the building design differed due to a change in corporate design policy, the principles of display and interior layout were comparable. A sample of twenty shoppers at each store, representing family, older and younger segments of the market, visited the store in the morning, afternoon and evening. All respondents were familiar with the store, and participated in the project during their normal mid-week shopping visit. On arrival at the store, respondents were given instructions to take photographs that expressed their feelings about the store, and which meant something important or significant to them. The instructions were deliberately kept brief to avoid researcher bias (Burt et al., 2005). Each respondent was given a disposable camera, with instructions to use up all the film, typically twenty-seven photos. The photographs could be taken inside and outside the store, including the car park. Thirty-four respondents completed their research task, and the interviews with these respondents took place 7-10 days after the photographs were taken. Each interview took the form of a "guided conversation" (Zaltman, 1996). Each respondent was handed the pack of photographs and asked to sort them into meaningful themes (elements) of their own devising; where they sought guidance, respondents were asked to choose 5-8 themes, although they could have as many or as few as they liked. The elements were expressed as a
In the next stage of analysis, following Reynolds and Gutman’s (1988) laddering technique, only complete responses to each emergent and implicit pole were used to examine higher level values. In the final stage of the research respondents were asked to make a selection from their photographs, and to stick these to a sheet of paper in any way that represented their experience of the store. The montage enabled the respondent to highlight significant photographs or themes and provided a means of referencing and validating the earlier analyses. 4.1.2 Research results The respondents collectively took 816 usable photographs. In the first stage, the respondents sorted them into a total of 227 elements, with a range of 5-9 elements per respondent (mean 6.8). Each element was represented by more than one photograph and typically consisted of 3- 4 photographs. These were recorded and interpretative analysis of the respondents’ descriptions, agreed by two researchers, led to the grouping of photographs sharing common element characteristics. The aim was to identify the significant features of the environment from how the respondents had taken the photos and what they meant to them. Products formed the largest group, and these were sub-divided by their focus on specific attributes, for example ‘meat’, and generic ones, such as ‘favourite products’. Fresh produce and newly cooked (bakery, meat) foods were significantly represented in this group. The third largest cluster consisted of ‘dissatisfied’ elements (20). ‘Dissatisfied’ was specifically defined by the respondents by what was perceived to be missing from the store, notably by absence of products demonstrated in messy displays and presentations. The store environment was the 10 fourth largest group, followed by photographs of layouts and displays (17), these groups were typically identified in neutral or positive terms. 
Respondents approved of neat well-stocked displays, tidy aisles and a general sense of a cornucopia of products, in contrast to ‘dissatisfied’ elements. Trolleys, checkouts and general obstructions were agreed by the researchers as ‘hindrances’ - to be negotiated around - and ‘access’ was assessed separately as a generic issue relating to elements that included customer circulation and shelf heights. Customer service and ‘people’ related to the company’s efforts to look after its customers and formed a separate category to ‘services’, which included general service provision, for example, the café. Turning to the ‘store environment’ group of photographs; these were defined by the spatial and structural arrangement of the store and divided by exterior and interior elements. The respondents’ engagement with the building was almost universally in terms of its functionality, which reflected their personal concerns and interests. External access to the store, from finding an appropriate parking space to negotiating trolleys and pushchairs into the entrance were clearly identified. Trolleys, their accessibility and availability outside the building, their usefulness and ability to cause obstructions were cited, in their own right, as elements by respondents and were implicated in obstructions. The entrance and exit form the store’s boundary, as was demonstrated in both store environments and also elements described as checkouts. The physical margins of the store were typified by security barriers, functional surrounds, and corporate communications through signage and notices. Respondents summarized the areas as ‘boring’ and ‘organised’ in the sense of being controlled, although the checkouts need to be very carefully organized to avoid frustration setting in and to achieve a successful ‘end of visit’ experience. 
Recycling facilities also needed to be treated carefully; for those respondents who had an interest in recycling, the 11 facilities were shown to be inadequate and at worst demonstrated tokenism towards environmental responsibility. In the second stage of the analysis the largest category of constructs concerned products (39) and reflected the number of product photographs sorted in the earlier stage into elements. Respondents explained the photographs’ meaning in terms of choice and availability in the store, and the use of products for different purchasing occasions. The store environment (32) was the second largest category of constructs and included the meanings of space and types of space both inside and outside the building. In this context the category was broadened from the first stage to include store layouts and safety issues concerning the physical environment, and also store signage (having a spatial dimension) as a medium of communicating information. In the laddering phase the respondents’ constructs relating to the physical environment generally defined the store as a physically safe place, operationally functional and ultimately, pleasing to be in. A second theme that emerged was respondents’ criticism towards the store and company, and these were manifested by explaining their experience of practical problems in finding products, and corporately, were suspicious of being manipulated by the store on prices. Three steps were typically achieved in each ladder, and the values were interpreted as my time, which reflected the value of the effective use of time, a stress-free life, which drew on the consumers’ own knowledge and perceptions of organization and space. The remaining values were identified as satisfaction, the fulfillment in achieving shopping objectives and happiness. With further interpretative analysis these were reduced to fundamental emotional states, ‘happy’ and ‘unhappy’. 
Happiness was typified by contentment, but also contemplation about the store, defined by being in a reflective state about what the store is and might be. The 12 two stores were separately analysed, and showed that at the older store A, 33 constructs demonstrated states of happiness and 40 unhappiness, while at B, 51 were happy and 28 unhappy. In examining the built environment, Store B’s location was more spacious, its car park and perimeter easier to negotiate and its larger size, enabled its interior spaces to be stock more products in what was perceived to be a more organized state. The final stage of the analysis demonstrated two approaches, the first explicitly or implicitly described a shopping journey. The car park and access to the store was specifically highlighted as elements by two respondents, but also featured in a number of ‘stories’ in the mapping of the store, as the start and end point of the store visit. This approach involved the creative engagement of the respondent and through the choice of photographs, clearly drew attention to significant elements in the shopping experience. These supported findings from the earlier stages of analysis but also demonstrated the importance of specific details in the store. For instance, one respondent had worked in health and safety and had a heightened awareness of the store almost exclusively in these terms. This approach was more typical of the female respondents. The second approach was characterised by using the photographs to list favourite aspects of the store, and tended to focus on products. This more functional style was preferred by male respondents. 4.2 Empirical research project two 4.2.1 Research design The second research project was conducted at a shoe and fashion store, located in a shopping city in Belgium. The retail environment in its whole is relatively large (it is about 750 square meters). 
This store’s main product offering concerns shoes from well-known brands for men, women and children of different ages, but they also offer clothing and accessories. Thirty- 13 eight customers, of whom half of the sample already was familiar with the store, were invited individually to the retail environment on one of three ordinary weekdays in April 2010. Upon arrival at the entry of the store, the research participants received a digital photo camera and were asked to photograph anything in or outside the store which triggered an experience for them during their store visit, or which made a certain impression on them. As in empirical research project one, instructions to research participants were reduced to avoid researcher bias (Burt et al., 2005). Immediately after their store visit, a researcher interviewed each of the participants individually, whereby he or she organised each interview around the photographs that each of the interviewees had collected. The interviews took place in the store itself, so that participants could elaborate on their photographs while having the possibility to point at details in the store environment itself. In addition, this procedure facilitated participants’ partaking in the research project. In line with recommendations of Fontana & Frey (2000) and Legard, Keegan & Ward (2003), a list of key topics and issues to be covered during the interview was prepared. In each of the interviews, the focus was on the interviewees’ perspectives on and ideas about customer experiences in the selected store case. Taken into account that ‘customer experience’ is an abstract concept to talk about, each interview started at ‘surface’ level (Legard et al., 2003), with general questions about the research participant‘s impressions of the retail environment, its design and its product offerings. 
The interviewer and the research participant then discussed the total sample of photographs which that participant had collected. The interviewer next asked the participant to select three photographs which particularly impressed them or which were the most typical examples of elements in or outside the store which triggered an experience for them during their store visit. The use of follow-up questions allowed us to further explore these specific photographs. Finally, the interview concluded with the interviewer asking the research participant to select and elaborate on one photograph that, in their opinion, best captured their in-store experience. All the interviews were audio-taped and transcribed. In the analysis, the researchers focused on tacking back and forth between the transcripts of the interviews and the respondents' photographs.

4.2.2 Research results

On average, each of the research participants collected 15 photographs to capture their in-store experience (the highest number of photographs per participant was 47, the lowest was 3). Most research participants indicated that they had been able to photograph anything in or outside the store case which made a specific impression on them. When the interviewer asked the participants to select three photographs that mainly captured their in-store experiences, they principally chose photographs which referred to (i) the retail interior, such as the children's department; (ii) elements of the store's retail design, such as the diversity of lighting fixtures used throughout the store; or (iii) particular products, for instance shoes, which particularly appealed to them. When the interviewer ultimately asked the research participant to select one photograph that truly captured their in-store experience, they usually chose an image which referred to the store's retail design.
Analyses of the interview transcripts and the photographs demonstrated that the in-store experiences of the research participants could be captured by two main themes: (a) perceptions and experiences concerning the store's retail design, and (b) the product assortment and visual merchandising in the retail environment.

4.2.2.1 Perceptions and experiences concerning the store's retail design

When research participants reflected on their perceptions and experiences of the store, they often used the adjectives 'chic', 'luxurious', 'contemporary', 'trendy', 'beautiful', 'special' and 'fancy'. The store's retail design was evaluated as being richly coloured, full of contrasts, extraordinary, dark, overwhelming, original, playful and artistic. Apart from the children's department, the colours and materials were mostly evaluated as rather dark. However, none of the interviewed participants seemed to have been disturbed by these choices, as they seemed to agree that they fitted perfectly within the special, heterogeneous retail design. Research participants also often discussed the mix of 'new and old' design elements throughout the store. In their opinion, every corner of the store seems to be designed differently, resulting in an eclectic overall design. While looking at her photographs (photographs 1 and 2), one participant for instance indicated that the retail environment 'is a healthy mixture of … baroque elements, like for instance this chandelier here … while over there, they just used downlighters … it's a mix of different things … a mixture of styles. I think it's eclectic. … It is something different from other stores, which helps this store to differentiate itself from others'.

Please add photographs 1 and 2 here

Most research participants really liked this design approach, as they got the impression of being surprised all the time.
In addition, they got the feeling that almost every customer would find at least one of the in-store areas appealing. Most research participants also referred to the children's department and the cash register and their designs, discussing these in-store areas as eye-catching zones. Throughout the transcripts, it became clear that research participants appreciated these places for their functional qualities, but also because of the appealing, hedonically pleasing way in which they had been designed. Next to eye-catching in-store areas, research participants also discussed physical eye-catching elements which were used in the design of the store case. The numerous lighting fixtures, for instance, all different, which have been used throughout the store, were much discussed. Notably, participants also discussed the presence of an in-store cupboard full of coloured glass bottles, and the presence of a diversity of couches. The interviewees valued their presence not only as eye-catching elements but also because they allow customers to sit at ease in-store, relax for a moment and enjoy the pleasant environment.

4.2.2.2 The product assortment and visual merchandising in the retail environment

Our analyses also revealed that most participants referred to the broad and large product range available in the selected store case. There were products available for men, women and children, ranging from shoes and clothing to accessories. Although most research participants indicated that they particularly liked the broad and large product offerings, some participants complained about what they called 'an abundance of products'. For some of the participants, this abundance also resulted in them evaluating the retail environment as chaotic and rather messy. As one participant for instance said: 'I think there are too many shoes present in a relatively small place'.
Several participants made photographs of the product offerings to give the interviewer an insider's perspective on their in-store experiences and on their evaluation of the broad and large product offerings of the store case. Occasionally, participants also photographed in detail one particular element of the store's product offerings, for instance because they particularly liked or disliked a particular shoe. Some participants, although a minority, also chose such a photograph when asked to select one photograph that truly captured or represented their in-store experience. With regard to the visual merchandising, research participants were particularly attentive to the way products were presented to the customer. The colourful and playfully designed children's department also often came to the fore in this respect. The merchandise and its presentation thus seemed to be highly valued in the participants' perception of the store.

4.3 Discussion of results

In the researchers' view, the results of the empirical projects have demonstrated that undertaking photo-elicitation studies can have several advantages. Firstly, photographs generated by respondents themselves have proven to be valuable 'can-openers' or reminders of store-related perceptions and experiences which could be difficult to identify through traditional positivistic research approaches. In the interviews, the photographs helped respondents to reflect on their experiences and to share their perceptions on various issues. In addition, looking back at photographs can help them remember certain elements (Pullman & Robson, 2007) which they could otherwise, in interviews with words alone, forget. Secondly, interviews combining visual and verbal information do not elicit more information than interviews using words alone, but rather a different kind of information.
In the second empirical project, for instance, various respondents who were already familiar with the store told the researcher that, while walking through the store with the camera in their hands, they paid attention to and photographed aspects which they had never noticed before. This corresponds to the viewpoints of Harper (2002) and Pink (2006), who indicate that photography can deliver insights which are unattainable through text or observation alone. Pullman & Robson (2007, p. 142) use the term 'rich source of information' in this respect. Thirdly, our results demonstrate that photo-elicitation as a method of data collection is unobtrusive and avoids the formal rigidity or strange feeling which people often experience in a traditional positivistic research approach or in a conventional verbal interview situation. This corresponds to Warren's (2005) and Creswell's (2009) findings. As such, photo-elicitation seems to propel not only less hesitant but also more direct responses by the interviewees to the photographs (Emmison & Smith, 2000). Fourthly, in standard in-depth interviews (i.e., limited to the verbal aspect) and in traditional positivistically inspired research approaches, there is the challenge of setting up 'communication' between two persons who rarely share the same background. In photo-elicitation studies, respondents are enticed to talk freely about their perceptions, impressions and experiences from their own viewpoint, while using their own language and their own vocabulary. In this way, respondents are empowered: it is what they make of their own photographs that counts. Collier & Collier (1986) agree, and indicate that they consider combining photographs with open-ended interviews to be a useful tool to obtain knowledge, as they are convinced that nobody knows the situation of the research participants better than the research participants themselves.
Harper (2002) also agrees, and indicates that photographs may lead research participants to new or refreshing views concerning their social existence. As such, 'they are able to deconstruct their own phenomenological assumptions' (Harper, 2002, p. 21). Fifthly, our findings demonstrated that photo-elicitation is an appealing research approach for respondents. Walking through a designed environment while trying to capture their in-store experience is not only a pleasant and appealing 'task' for consumers; it also enables them to make their ideas on a rather abstract topic, in our case a designed environment, concrete. As such, the use of photographs adds to the depth of understanding of consumers' perceptions of and experiences in retail interiors. This corresponds to the viewpoint of Rose (2007), who indicates that photographs are particularly good at capturing the 'texture' of places. Pictures carry a wealth of visual information, can show details in a moment, and can also show things that are hard to describe in writing.

5. CONCLUSION

More than thirty years ago, Markin, Lillis and Narayana (1976, p. 43) observed that '… an environment … is never neutral. The retail store is a bundle of cues, messages and suggestions which communicate to shoppers'. The results of both empirical projects demonstrate that photo-elicitation combined with interviews provides a rigorous research approach that helps to gain in-depth visual and verbal insight into the meanings, perceptions and experiences which consumers have in a retail environment. In terms of implications, photo-elicitation offers researchers, retail managers, interior architects and retail designers the possibility to truly connect with customers and learn what they think about a certain designed environment. Retail managers, interior architects and retail designers all highly value understanding the consumer perspective on a designed environment, as they usually do not learn about this perspective in their daily work.
Therefore, this kind of input can be valuable for their future projects (Author 1 & Author 3, 2011). Understanding the user/consumer perspective on design and its potential implications is crucial for these stakeholders, as design is an important feature in consumer decisions (Norman, 2004). This article reflected on the interpretive method of photo-elicitation in research concerning experiences in retail environments, whereby the findings of two empirical studies were presented. Further research is desirable to substantiate the studies' findings. Firstly, the context in which the studies were conducted was rather limited (i.e., two food superstores and a shoe and fashion store). It would be valuable for future research to explore to what extent the findings can be extended to other retail environments. Secondly, the sampling procedure for both studies was theoretically driven. It seems worthwhile for future research to take other individual characteristics such as shopping orientation, education or occupation into account, as they seem to be important determinants of how people experience retail environments (Carù & Cova, 2007). Thirdly, it seems valuable for future research to examine whether photographing in-store experiences can distort participants' actual experiences, since people do not usually take photographs of themselves or their experiences while going to a store. However, Venkatraman & Nelson (2008) have already indicated that there is no evidence that people engaged in photographing their own experiences distorted those experiences in a particular way. Fourthly, after having experimented with digital as well as non-digital photography, the researchers are clear proponents of digital photography. It is immediate, so that photographs can be discussed much sooner than is the case with non-digital photography.
As such, making use of digital cameras in a photo-elicitation study not only saves time, but also costs less, as no chemical processing is necessary anymore and neither researchers nor research participants have to set an extra date to discuss the photographs in an interview (Warren, 2005). As such, digital photography can be considered an extra aid which researchers can use to help prevent participants from forgetting certain details and to avoid participant fatigue. Finally, it seems worthwhile for future research to elaborate on the use of photo-elicitation in settings other than retail interiors. A museum, for instance, seems to be an interesting case of a designed environment which seems open to the use of photo-elicitation to gain insight into people's experiences.

ACKNOWLEDGEMENTS

The researchers firstly would like to thank the interior architecture students who assisted in the process of data collection. Secondly, they wish to express their highest gratitude to Prof. dr. Wim Janssens, Prof. dr. Pieter Desmet, dr. Jan Vanrie and interior architect Philippe Swartenbroux for proofreading this article and for their valuable feedback and discussion.

REFERENCES

Arnould, E., Price, L. & Tierney, P. (1998). Communicative staging of the wilderness servicescape. The Service Industries Journal, 18(3), 90-115.
Arnould, E. & Thompson, C. (2005). Consumer Culture Theory (CCT): twenty years of research. Journal of Consumer Research, 31(March), 868-882.
Baker, J., Levy, M. & Grewal, D. (1992). An experimental approach to making retail store environmental decisions. Journal of Retailing, 68(4), 445-460.
Baker, J., Grewal, D. & Parasuraman, A. (1994). The influence of store environment on quality inferences and store image. Journal of the Academy of Marketing Science, 22(4), 328-339.
Baker, J., Parasuraman, A., Grewal, D. & Voss, G. (2002). The influence of multiple store environment cues on perceived merchandise value and patronage intentions.
Journal of Marketing, 66(April), 120-141.
Berg, B. (2008). Visual ethnography. In L. Given (Ed.), The Sage Encyclopedia of Qualitative Research Methods (pp. 934-938). Los Angeles: Sage.
Berg, P. & Kreiner, K. (1990). Corporate architecture: turning physical setting into symbolic resources. In P. Gagliardi (Ed.), Symbols and artifacts: views of the corporate landscape (pp. 41-65). New York: Aldine de Gruyter.
Bitner, M. (1992). Servicescapes: the impact of physical surroundings on customers and employees. Journal of Marketing, 56(April), 57-71.
Brauer, G. (2002). Architecture as Brand Communication. Basel: Birkhauser.
Burt, S., Johansson, U. & Thelander, A. (2005). Retail image as seen through consumers' eyes: studying international retail image through consumer photographs of stores. Paper presented at the European Association for Education and Research in Consumer Distribution Conference, July 5-7, University of Lund, Sweden.
Burt, S., Johansson, U. & Thelander, A. (2007). Retail image as seen through consumers' eyes: studying international retail image through consumer photographs of stores. International Review of Retail, Distribution and Consumer Research, 17(5), 447-467.
Carù, A. & Cova, B. (2007). Consuming experience. London: Routledge.
Collier, J. (1957). Photography in anthropology: a report on two experiments. American Anthropologist, 59, 843-859.
Collier, J. & Collier, M. (1986). Visual anthropology: photography as a research method. Albuquerque: University of New Mexico Press.
Creswell, J. (2009). Research design: qualitative, quantitative and mixed methods approaches. London: Sage.
Davies, B. & Ward, P. (2002). Managing retail consumption. Chichester: John Wiley & Sons.
Din, R. (2000). New Retail. London: Conran Octopus.
Donovan, R. & Rossiter, J. (1982). Store atmosphere: an experimental psychology approach. Journal of Retailing, 58, 34-57.
Emmison, M. & Smith, P. (2000). Researching the visual. London: Sage.
Fiore, A. & Kim, J. (2007).
An integrative framework capturing experiential and utilitarian shopping experience. International Journal of Retail & Distribution Management, 35, 421-442.
Fontana, A. & Frey, J. (2000). The interview: from structured questions to negotiated text. In N. Denzin & Y. Lincoln (Eds.), Handbook of Qualitative Research. 2nd edition (pp. 645-672). Thousand Oaks: Sage.
Giddens, A. (1984). The constitution of society. Oxford: Polity Press.
Greenland, S. & McGoldrick, P. (1994). Atmospherics, attitudes and behavior: modelling the impact of designed space. The International Review of Retail, Distribution and Consumer Research, 4(1), 1-15.
Greenland, S. & McGoldrick, P. (2005). Evaluating the design of retail financial service environments. International Journal of Bank Marketing, 23(2), 132-152.
Harper, D. (2002). Talking about pictures: a case for photo-elicitation. Visual Studies, 17(1), 13-26.
Healy, M., Beverland, M., Oppewal, H. & Sands, S. (2007). Understanding retail experiences – the case for ethnography. International Journal of Market Research, 49, 751-779.
Author 2 & other author (2009).
Kotler, P. (1973). Atmospherics as a marketing tool. Journal of Retailing, 49, 48-64.
Lam, S. (2001). The effects of store environment on shopping behaviors: a critical review. Advances in Consumer Research, 28, 190-197.
Legard, R., Keegan, J. & Ward, K. (2003). In-depth interviews. In J. Ritchie & J. Lewis (Eds.), Qualitative research practice. A guide for social science students and researchers (pp. 138-169). London: Sage.
Lincoln, Y. & Guba, E. (2000). Paradigmatic controversies, contradictions, and emerging confluences. In N. Denzin & Y. Lincoln (Eds.), Handbook of qualitative research. Second edition (pp. 163-188). Thousand Oaks: Sage.
Lusch, R. & Vargo, S. (2006a). Service-Dominant Logic: reactions, reflections and refinements. Marketing Theory, 6(3), 281-288.
Lusch, R. & Vargo, S. (2006b).
The Service-Dominant Logic of marketing: dialog, debate and directions. Armonk, NY: M.E. Sharpe.
Mariampolski, H. (1999). The power of ethnography. Journal of the Market Research Society, 41(1), 75-86.
Markin, R., Lillis, C. & Narayana, C. (1976). Social-psychological significance of store space. Journal of Retailing, 52(1), 43-54.
Mehrabian, A. & Russell, J. (1974). An approach to environmental psychology. Cambridge: MIT Press.
Norman, D. (2004). Emotional design: why we love (or hate) everyday things. New York: Basic Books.
Penaloza, L. (1999). Just doing it: a visual ethnographic study of spectacular consumption behavior at Nike Town. Consumption, Markets & Culture, 2(4), 337-400.
Author 1 & Author 3 (2010).
Author 1 & Author 3 (2011).
Pink, S. (2006). The future of visual anthropology: engaging the senses. London: Routledge.
Pink, S. (2007). Doing visual ethnography: images, media and representation in research. London: Sage.
Prosser, J. (2007). Visual methods and the visual culture of schools. Visual Studies, 22(6), 13-30.
Pullman, M. & Robson, S. (2007). Visual methods: using photographs to capture customers' experiences with design. Cornell Hotel and Restaurant Administration Quarterly, 48, 121-144.
Rapoport, A. (1982). The meaning of the built environment: a nonverbal communication approach. London: Sage.
Reynolds, T. & Gutman, J. (1988). Laddering theory, method, analysis, and interpretation. Journal of Advertising Research, 29(2), 11-31.
Riewoldt, O. (2002). Brandscaping: worlds of experience in retail design. Berlin: Birkhauser.
Rose, G. (2007). Visual methodologies. An introduction to the interpretation of visual materials. London: Sage.
Sands, S. (2008). Consumer responses to in-store themed events. Unpublished doctoral dissertation, Monash University, Melbourne, Australia.
Schroeder, J. (1998). Consuming representation: a visual approach to consumer research. In B. Stern (Ed.), Representing consumers: voices, views, and visions (pp. 193-230).
New York: Routledge.
Schroeder, J. (2002). Visual consumption. New York: Routledge.
Sherry, J. Jr. (1998). The soul of the company store: Nike Town Chicago and the emplaced brandscape. In J. Sherry Jr. (Ed.), Servicescapes: the concept of place in contemporary markets (pp. 109-146). Chicago, IL: American Marketing Association.
Thang, D. & Tan, B. (2003). Linking consumer perception to preference of retail stores: an empirical assessment of the multi-attributes of store image. Journal of Retailing and Consumer Services, 10, 193-200.
Turley, L. & Milliman, R. (2000). Atmospheric effects on shopping behavior: a review of the experimental evidence. Journal of Business Research, 49, 193-211.
Venkatraman, M. & Nelson, T. (2008). From servicescape to consumptionscape: a photo-elicitation study of Starbucks in the New China. Journal of International Business Studies, 39, 1010-1026.
Verhoef, P., Lemon, K., Parasuraman, A., Roggeveen, A., Tsiros, M. & Schlesinger, L. (2009). Customer experience creation: determinants, dynamics and management strategies. Journal of Retailing, 85, 31-41.
Wagner, J. (1979). Images of information. London: Sage.
Warren, S. (2005). Photography and voice in critical qualitative management research. Accounting, Auditing & Accountability Journal, 18(6), 861-882.
Zaltman, G. (1996). Metaphorically speaking. Marketing Research, 8(2), 13-20.

PHOTOGRAPHS

Photograph 1

Photograph 2

i In the remainder of this paper, what the researchers label 'photo-elicitation', Wagner (1979) calls 'native image-making', that is, the research approach whereby photographs are generated by research participants themselves to help them express their feelings, beliefs, opinions, etcetera, as a kind of aid to verbal narrative.
416 book reviews

© Henk Schulte Nordholt, 2017 | doi: 10.1163/22134379-17302018
This is an open access article distributed under the terms of the prevailing CC-BY-NC license at the time of publication.

Zhuang Wubin, Photography in Southeast Asia: A Survey. Singapore: NUS Press, 2016, 552 pp. ISBN 9789814722124. Price: SGD 48.00 (hardcover).

This is an extremely rich and valuable anthology of photography in Southeast Asia. Based on brief biographies of a large number of practitioners in the region, the book traces the history of photography from the colonial period into the present.
It explores how a persistent tradition of ‘pictorialism’, or ‘salon pho- tography’ influenced practitioners for a long time, while tracing the emergence of documentary photography and the problematic relationship between pho- tography and visual arts. Another aim of the book is to overcome problematic but entrenched dichotomies between straight photography and conceptual art, the ‘inferior’ snapshot and prestigious art, and egalitarian journalism and elitist art. The book is organized in a very conventional way, presenting his portraits by country (but omitting East-Timor) in order to pay due attention to national particularities. As a result, it is not clear whether the category of Southeast Asian photography holds any meaning, while the role of the Association of Southeast Asian Nations in promoting an overarching regional identity is only mentioned in passing. In the country chapters some general themes emerge within a loosely struc- tured time frame. Introduced by imperial photographers, photography was quickly adopted by dynastic leaders and spread through colonial photo clubs and Chinese photo studios. Siamese kings broke a taboo on making portraits of the rulers by adopting photography in order to create dignified images of themselves, which they shared with their counterparts in Europe, while within Siam postcards and other images of the king became sacred objects. The legacy of pictorialism—staged photos intended to embellish reality, because the cus- tomer did not want ‘plain truth’, but added phantasy—gave photography an aura of elitist high art and gave photography a conservative imprint with a long post-colonial legacy. In Cambodia and elsewhere, colonial photography helped to frame local culture in terms of an ‘authentic essentialism’ that had to be preserved. 
But in Luang Prabang (Laos) several monasteries kept a remarkable collection of more than 34,000 photos of ritual practices covering a period of 120 years, suggesting that photos had become integrated within the Buddhist cosmology. Fortunately, these collections have been digitized with the help of the British Library. During the early years of independence photography helped to propagate national identities by framing images of both modernity and indigenous traditions. Newspapers gave a boost to photo journalism, which in turn stimulated documentary photography.
book reviews 417 Bijdragen tot de Taal-, Land- en Volkenkunde 173 (2017) 389–418
At the same time, various forms of censorship limited the range of activities photo journalists were expected to cover, while the tradition of salon photography also continued without major ruptures. Indonesia showed a remarkable continuity of images of idyllic landscapes and traditional peoples. In South Vietnam, most photographers glossed over the realities of the Vietnam War due to restrictions on portraying battle scenes and wounded soldiers. In Singapore, the continuity of salon photography was reinforced by making references to traditional Chinese painting intended to evoke delightful emotions, which was also reflected in advertisements and documentary photos of Singapore’s remarkable economic development. In Cambodia, photography disappeared during the Khmer Rouge regime, except for the morbid documentation of 6,000 victims in the infamous S-21 prison. Documentary photography also showed engagement with social movements and the urban poor. Here a much more straightforward documentary style emerged, which was, however, still marginal compared to mainstream photography.
In the 1990s censorship was lifted in most of Southeast Asia, and this coincided with the shift from analogue to digital photography, which changed everything. Due to cheap mobile phones and easy dissemination via the internet, photography is now more egalitarian and personalized, the ultimate example of which is the omnipresent selfie. In the 1990s in Southeast Asia, photography began to enter the visual arts (installations, montages, performances, and conceptual art), but it remained controversial because photography was often associated with (inferior) snapshots or with (conservative, elitist) salon photography. Moreover, conceptual artists regarded straightforward documentary photographers primarily as technicians working in an artistic vacuum. More recently, artists using photography have tended to focus on personalized narratives, especially personal experiences of distress, while adding layers of irony, denying established techniques by blurring and deforming the image, and/or questioning authenticity. In doing so, I would argue, Southeast Asian photographers started to meet soulmates from all over the world. While documenting the historical trajectories of photography in Southeast Asia in an admirable way, Zhuang Wubin aimed to re-imagine salon photography, documentary work, street photography, and individualized art work by lifting conceptual borders between these domains and undermining hierarchies that foreground art at the cost of the snapshot. The author relied primarily on text to convey this message, while photos are regrettably rather marginalized in this book, both in terms of numbers and size. I hope that this book will inspire other authors to further explore themes identified by Zhuang Wubin and to make broader comparisons beyond Southeast Asia.
Henk Schulte Nordholt
kitlv/Royal Netherlands Institute of Southeast Asian Studies
schultenordholt@kitlv.nl

work_cgxa3en7sffuncwpncnbuaty5m ----

Clinical Study
The Role of Autologous Dermal Micrografts in Regenerative Surgery: A Clinical Experimental Study
Marco Mario Tresoldi,1,2 Antonio Graziano,3,4 Alberto Malovini,5 Angela Faga,6 and Giovanni Nicoletti1,2,6
1 Plastic and Reconstructive Surgery, Department of Clinical Surgical, Diagnostic and Pediatric Sciences, University of Pavia, Viale Brambilla, 74 Pavia, Italy
2 Plastic and Reconstructive Surgery Unit, Department of Surgery, Istituti Clinici Scientifici Maugeri, Via Salvatore Maugeri, 10 Pavia, Italy
3 Department of Public Health, Experimental and Forensic Medicine, University of Pavia, Via Forlanini 6, Pavia, Italy
4 Sbarro Health Research Organization (SHRO), Temple University, 1900 N 12th St., Philadelphia, PA 19122, USA
5 Laboratory of Informatics and Systems Engineering for Clinical Research, Istituti Clinici Scientifici Maugeri, Via Salvatore Maugeri, 10 Pavia, Italy
6 Advanced Technologies for Regenerative Medicine and Inductive Surgery Research Center, University of Pavia, Viale Brambilla, 74 Pavia, Italy
Correspondence should be addressed to Marco Mario Tresoldi; marcomario.tresoldi@unipv.it
Received 26 March 2019; Revised 3 July 2019; Accepted 4 August 2019; Published 8 September 2019
Guest Editor: Fabio Naro
Copyright © 2019 Marco Mario Tresoldi et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
The aim of the study was the objective assessment of the effectiveness of a microfragmented dermal extract obtained with Rigenera™ technology in promoting the wound healing process in an in vivo homogeneous experimental human acute surgical wound model. The study included 20 patients with 24 acute postsurgical soft tissue losses and a planned sequential two-stage repair with a dermal substitute and an autologous split-thickness skin graft. Each acute postsurgical soft tissue loss was randomized to be treated either with an Integra® dermal substitute enriched with the autologous dermal micrografts obtained with Rigenera™ technology (group A—Rigenera™ protocol) or with an Integra® dermal substitute only (group B—control). The reepithelialization rate in the wounds was assessed in both groups at 4 weeks through digital photography with the software “ImageJ.” The dermal cell suspension enrichment with the Rigenera™ technology was considered effective if the reepithelialized area was higher than 25% of the total wound surface, as this threshold was considered far beyond the expected spontaneous reepithelialization rate. In the Rigenera™ protocol group, the statistical analysis failed to demonstrate any significant difference vs. the controls. The old age of the patients likely influenced the outcome, as the stem cell regenerative potential is reduced in the elderly. A further explanation for the unsatisfying results of our trial might be the inadequate amount of dermal stem cells used to enrich the dermal substitutes. In our study, we used a 1 : 200 donor/recipient site ratio to minimize donor site morbidity. The gross dimensional disparity between the donor and recipient sites and the low concentration of dermal mesenchymal stromal stem cells might explain the poor epithelial proliferative boost observed in our study.
A potential option in the future might be preconditioning of the dermal stem cell harvest with senolytic active principles that would fully enhance their regenerative potential. This trial is registered with trial protocol number NCT03912675 (https://clinicaltrials.gov/ct2/show/NCT03912675?term=NCT03912675&rank=1).
Hindawi Stem Cells International, Volume 2019, Article ID 9843407, 8 pages, https://doi.org/10.1155/2019/9843407
1. Introduction
The dermal extracellular matrix plays a relevant role, both structural and functional, in signalling cell proliferation, development, shaping, function, and migration. The dermis is provided with a relevant pool of both mesenchymal and adnexa-related stem cells. Such properties support the use of dermis-derived extracts to stimulate tissue regeneration [1–3]. Currently, many technologies are available to separate and expand dermis-derived cells to obtain injectable autologous cell suspensions for regenerative purposes [4, 5]. Recently, in the European Union area, cell manipulation underwent restrictive regulations that significantly reduced the availability of cell expansion technology. According to current regulations, any cell manipulation involving enzymatic treatment and cell culture expansion is allowed in Cell Factories only, with a relevant increase of the time and cost burden [6]. Recently, the development of an innovative technology for dermal mechanical microfragmentation, named Rigenera™, allowed the harvest of a filtered available cell pool without any enzymatic manipulation. Such a cell fraction, rich in progenitor cells, was successfully used in the treatment of difficult-to-heal wounds [3, 7–9].
The advantage of this innovative technology is its unrestricted use in any clinical context and setting. Nevertheless, such evidence was demonstrated within the frame of pathologies with heterogeneous aetiology. The aim of the study was the objective assessment of the effectiveness of such a microfragmented dermal extract in promoting the wound healing process in an in vivo homogeneous experimental human acute surgical wound model.
2. Materials and Methods
2.1. Study Design. A prospective randomized controlled open clinical trial was carried out at the Plastic and Reconstructive Surgery Unit of the Istituti Clinici Scientifici Maugeri. Twenty patients (4 females and 16 males), with an age range of 53–93 years (mean 77.80, median 79), were enrolled in the trial over a period of 15 months, from September 2017 to December 2018. The exclusion criteria were wound infection, chemotherapy in the last 6 months, use of corticosteroids or immunosuppressive treatment, and metabolic, endocrine, autoimmune, and collagen diseases. The study included patients with a postsurgical defect in any site of the body with a size range of 4–400 cm². The surgical defect followed an immediate excision of 22 skin cancers, 1 ulcerated actinic keratosis, and 1 chronic difficult-to-heal wound (Table 1). A sequential two-stage repair with a dermal substitute and an autologous split-thickness skin graft was planned. The acute postsurgical soft tissue loss was considered the experimental unit of the study irrespective of the number of wounds per patient. Twenty-four experimental units were enrolled in the trial. Each unit, which fulfilled the entry criteria, was randomized to be treated either with an Integra® dermal substitute enriched with the autologous dermal micrografts obtained with Rigenera™ protocol (group A—Rigenera™ protocol) or with an Integra® dermal substitute only (group B—control). Each group included 12 experimental units.
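The unit-level randomization described above (24 experimental units allocated 12 and 12 to the two arms) can be sketched as a balanced shuffle. This is an illustrative reconstruction only, not the trial's actual allocation procedure; the unit identifiers and the seed are hypothetical.

```python
import random

def randomize_units(unit_ids, seed=None):
    """Balanced 1:1 allocation of experimental units to two arms.

    Shuffles the unit identifiers and assigns the first half to
    group A (Rigenera protocol) and the second half to group B
    (control). Illustrative only; unit IDs are hypothetical.
    """
    rng = random.Random(seed)
    ids = list(unit_ids)
    rng.shuffle(ids)
    half = len(ids) // 2
    return {"A": sorted(ids[:half]), "B": sorted(ids[half:])}

# 24 hypothetical experimental units, as in the trial
allocation = randomize_units(range(1, 25), seed=42)
print(len(allocation["A"]), len(allocation["B"]))  # 12 12
```

Shuffling a fixed list guarantees the exact 12/12 balance reported in the paper, which independent per-unit coin flips would not.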
All of the wounds were planned for a sequential second-stage repair with a split-thickness skin graft at the time of complete engraftment of the Integra® dermal substitute. According to our clinical experience, the time lag between the first and the second surgical stages was around 4 weeks. The expected endpoint was a spontaneous reepithelialization higher than 25% of the total wound area in the group treated with Rigenera™ protocol at 4 weeks, which would contraindicate the second-stage cover with a split-thickness skin graft. The secondary endpoint was the comparison of the reepithelialization rate at 4 weeks after the first surgical stage between the group treated with Rigenera™ protocol and the controls. The reepithelialization rate in the wounds was assessed at each time point of the study through digital photography with the software “ImageJ” (Figure 1). As a wound spontaneously reduces in size, due to a physiologic shrinkage process, the measurement of the reepithelialization rate was referred to the actual total wound size at each time point. A formal informed written consent was obtained from all of the patients, and the study conformed to the Declaration of Helsinki. The trial was approved by the Ethical Committee (protocol number 2142) of the Istituti Clinici Scientifici Maugeri SB SpA IRCCS of Pavia.
2.2. Materials and Methods. The micrografts were obtained by Rigeneracons, a single-use sterile CE-certified Class I medical device able to mechanically disaggregate tissues into micrografts that are immediately available for transfer in the clinical practice [10]. It is made of a plastic container provided with an openable lid divided into two chambers by a stationary stainless steel grid with 100 hexagonal holes. Around each hole, 6 microblades are designed for efficient cutting of hard and soft tissues, allowing a filtration cut-off of about 80 μm. The rotation of Rigeneracons
The upper chamber is provided with a rotating helix forcing the tissue fragments through the grid towards the bottom chamber. The rotation of Rigeneracons is activated by a Rigenera OR-PRO machine (Esacrom, Italy) using a connection adaptor (Adacons Max).

Table 1: Cohort's characteristics. Categorical variables' distribution is described by counts and relative frequencies (%); continuous variables' distribution by median (25th–75th percentiles).
Variable | Distribution
Age (years) | 78.0 (74.5–84.0)
Gender: Females | 4 (17.39%)
Gender: Males | 16 (82.61%)
Localization: Limbs | 7 (30.43%)
Localization: Scalp | 1 (4.35%)
Localization: Face | 15 (65.22%)
Protocol: A | 11 (47.83%)
Protocol: B | 12 (52.17%)
Cause: BCC | 16 (69.57%)
Cause: SCC | 4 (17.39%)
Cause: Other | 3 (13.09%)
T1 area (cm²) | 9.26 (7.06–12.54)
Reepithelialization (%) | 13.94 (11.96–20.85)
Reepithelialization ≥25% | 3 (13.04%)
Reepithelialization <25% | 20 (86.96%)

2.3. The Operative Protocol. The operative protocol consists of different steps:
(a) Choice of the micrograft donor site with preference of the retroauricular region
(b) Gentle blade shaving of the donor site to remove the epidermis and obtain a bare papillary dermis
(c) Harvest of the adequate number of 3 mm punch biopsies from the previously deepithelialized skin; the number of dermal punch biopsies was calculated according to the size of the wound, considering that 1 mm² of the dermal graft was expected to regenerate an epithelial cover up to 2 cm² (Figure 2)
(d) Loading the disposable Rigeneracons with a maximum of 3 dermal samples at a time and addition of 2.5 ml of saline solution (Figures 3 and 4)
(e) Device connection to the rotating machine, operating at 80 rpm for 90 seconds, that provides a mechanical disaggregation of the dermal sample into a suspension containing autologous dermal micrografts (Figure 5)
(f) Aspiration of the micrograft-containing saline solution with a sterile syringe (Figure 6)
(g) Cover of the postsurgical soft tissue loss with Integra® dermal substitute (Figure 7)
(h) Fixation of the dermal substitute with
stitches and gentle imbibition with the saline micrograft suspension (Figure 8)
(i) Infiltration of the micrograft suspension in the perilesional tissues
2.4. The Treatment. The time points of the study were designed as follows:
(i) T0: starting time; includes patient enrollment, signature of informed consent, and randomization.
(ii) T1: the first surgical stage, consisting of skin lesion excision; digital medical photographs and soft-tissue loss measurements were obtained with the use of the “ImageJ” program. Soft tissue loss was covered with an Integra® dermal substitute alone (group B—controls) or enriched with the autologous dermal micrografts (group A—Rigenera™ protocol).
Figure 1: Scheme of the overall study structure. [Flowchart: both arms run from T0 (patient enrollment, informed consent, randomization; 6–14 days to T1) through T1 (surgical procedure, soft-tissue loss measurement, medical photography; 28 days to T2) and T2 (soft-tissue loss measurement); depending on the reepithelialized fraction of the T1 area (0%, <25%, or >25%), wounds proceed to split-thickness skin graft coverage or to follow-up at T3 (within 14 days of T2) and healed wound at T4.]
(iii) T2: 4 weeks after the first surgical stage; digital medical photographs and residual soft-tissue loss measurement were obtained with the use of the software “ImageJ.” In the Rigenera™ protocol group, if the deepithelialized area in the wound was the same as at T1, a split-thickness skin graft was planned; if the reepithelialized area was >25% of the wound area at T1, a follow-up was planned in 2 weeks' time (T3); if reepithelialization was complete, the wound was considered healed and the patient was discharged from the study. In the control group, a split-thickness skin graft cover was carried out.
(iv) T3: digital photographs and residual soft-tissue loss measurement were obtained with the use of the software “ImageJ” in the Rigenera™ protocol group; in the latter group, whatever the extension of the residual deepithelialized area, a split-thickness skin graft was planned at this time; if reepithelialization was complete, the wound was considered healed and the patient was discharged from the study.
(v) T4: complete reepithelialization, 1 week after the split-thickness skin graft cover at T3 in the Rigenera™ protocol group.
2.5. Statistical Methods. The deviation of continuous variable distributions from the normal distribution was assessed by visual inspection of quantile-quantile plots and by the Shapiro test. T1 and final area distributions were normalized by natural logarithm transformation. Continuous variable distributions are described by median and 25th–75th percentiles; categorical variable distributions are described by counts and frequencies (%). The Fisher exact test and the Wilcoxon rank-sum test were applied to compare the categorical and quantitative variables' distributions between protocols. The Spearman test allowed quantifying the strength of the correlation between continuous variables (rho). Statistical procedures were performed with the R statistical software (http://www.r-project.org).
3.
Results
One male patient out of 20, with surgical excision of a squamous cell carcinoma of the scalp, dropped out of the study due to postoperative wound infection. Overall, 23 experimental units (12 in the control group and 11 in the Rigenera™ protocol group) completed the trial. At T2, 4 weeks after the first surgical step, the reepithelialization rate was 12.98% (10.40–17.61) in the control group and 15.14% (12.42–22.03) in the Rigenera™ protocol group (p = 0.607). In the latter group, only one wound out of 11 (9.09%) demonstrated a reepithelialization > 25% of the total wound area, while in the control group, such an outcome was observed in 2 wounds out of 12 (16.67%) (p = 1).
Figure 2: Harvest of the 3 mm punch biopsies from the previously deepithelialized skin.
Figure 3: The harvested skin punch biopsy.
Figure 4: The disposable Rigeneracons loaded with a maximum of 3 dermal samples.
Figure 5: Connection of the disposable Rigeneracons to the Rigenera OR-PRO rotating machine.
Figure 6: Aspiration of the micrograft-containing saline solution with a sterile syringe.
Figure 7: Cover of the postsurgical soft tissue loss with Integra® dermal substitute.
Figure 8: Imbibition of the dermal substitute with the saline micrograft suspension.
4. Discussion
It is common knowledge that the human dermis is a source of stem cells with demonstrated regenerative properties [11–16]. The dermal mesenchymal stromal stem cells display adhesion properties, fibroblast morphology, and osteogenic and adipocyte differentiation. Typically, they express both mesenchymal (α-SMA) and neural (Nestin and βIII-tubulin) cell membrane markers and lack the haematopoietic and endothelial ones (CD31) [13]. Recently, the dermal mesenchymal stromal stem cells were demonstrated to express also the CD105, CD73, and CD90 markers that specifically regulate regeneration in the wound healing process. These cells enhance cell survival and proliferation in the wound site through a fine modulation of the immune and inflammatory response, operated by a finely tuned cascade of local mediators. They definitely play a relevant active role along the inflammatory, proliferative, and remodeling phases, allowing an eventual favorable outcome in the wound healing process. The hair follicle matrix has been demonstrated to host cells that are capable of self-renewal and produce epithelial transient progenitors, thus having attributes of stem cells, too. Stem cells are multipotent, capable of giving rise not only to all the cell types of the hair but also to the epidermis and the sebaceous gland. These cells display a highly sophisticated organization and carry out several functions controlling the shape of the hair follicle. The inner structures are each produced by a distinct, restricted set of precursors occupying a specific position along the proximodistal axis of the matrix. The matrix seems to be organized by two systems working in orthogonal dimensions and controlling two key operations of hair follicle morphogenesis, notably cell diversification and cell behavior [17]. The bulge cell progeny located in the upper follicle has been demonstrated to emigrate into the epidermis and to proliferate, thus contributing to the long-term maintenance of the epidermis [18, 19]. Based on these observations, it has been proposed that the bulge is a major repository of skin keratinocyte stem cells, which may thus be regarded as the ultimate epidermal stem cell [20, 21]. Since stem cells are known to be involved in skin tumor formation [22–27], the coincidence of greater tumor susceptibility with the transient proliferation of the bulge cells is consistent with the hypothesis that the bulge cells are stem cells and indicates that follicular stem cells can give rise to experimentally induced skin cancers [22–25].
Taken together, these data suggest that the bulge is the site of follicular stem cells. The Rigenera™ method allows the harvesting of a dermis-derived autologous cell suspension including a stem cell fraction, ready for use without any cell manipulation process [28]. Several clinical studies demonstrated the effectiveness of the Rigenera™ cell harvesting method in the management of complex wounds, with complete obliteration and reepithelialization of deep soft tissue loss [1–3, 7–9]. In order to objectively assess the effectiveness of the Rigenera™ method, we established a homogeneous experimental fresh surgical wound model providing measurable data that excluded gross experimental bias and fit a rigorous statistical analysis. Rigenera™ provides a fluid cell suspension that may be both injected in deep spaces and applied on superficial soft tissue loss in combination with a dermal substitute [29]. According to our current clinical practice, we deliberately enrolled patients with a planned two-stage soft tissue loss repair using a dermal substitute followed 4 weeks later by a split-thickness skin graft. The enrichment of a dermal substitute with an autologous cell suspension graft was considered a minimal modification of a current and well-established clinical protocol involving a negligible donor site morbidity and, therefore, allowed approval of the trial by the Ethical Committee. Considering our long-term clinical experience in the field [30], the dermal substitute of choice for the study was Integra®, as it was demonstrated to provide an in vitro favorable environment for dermal stromal mesenchymal stem cell engraftment and replication as early as 7 days [29]. The peculiar three-dimensional structure of Integra®, with a controlled 80 μm porous structure, allows a physiological cell adhesion, infiltration, distribution, and proliferation with the preservation of the typical mesenchymal fibroblast morphology.
In our study, the treatment with the dermal cell suspension prepared with the Rigenera™ technology was considered effective if the reepithelialized area was higher than 25% of the total wound surface, as the latter threshold was considered far beyond the expected spontaneous reepithelialization rate. Nevertheless, in the Rigenera™ protocol group, the statistical analysis did not demonstrate any significant difference vs. the controls. The old age of the patients likely influenced the outcome. Indeed, our experimental plan, designed as a two-stage procedure, had to meet ethical requirements, too. Currently, such a procedure is the gold standard in frail elderly patients, who often come to observation for advanced skin cancers requiring extensive demolitions but who are unfit for complex reconstructive procedures [31]. The use of a sequential two-stage reconstructive procedure with a dermal substitute and a split-thickness skin graft in these cases allows for a better functional and cosmetic outcome than a simple one-stage skin graft [32–34] (Figure 9). Undoubtedly, the stem cell regenerative potential is reduced in the elderly. Recent literature reports demonstrate an antiapoptotic action of the senescent cells that prevents the full expression of the regenerative potential in the stem cell pool [35]. In our opinion, a sample pretreatment with specific active principles targeting the senescent cells might be suggested to increase the regenerative potential in the dermal stromal mesenchymal stem cell transfer [36]. The latter specific pretreatment might enhance the full regenerative potential of a minimally invasive cell transfer, making it a specifically convenient procedure for the frail critical patient.
Figure 9: Complete healing of the defect after STSG in group A.
Therefore, reepithelialization of large skin loss in the elderly patient drawing on a minimal dermal fragment might be a realistic option.
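The lack of a significant between-group difference can be checked directly from the responder counts reported in the Results (reepithelialization > 25% in 1 of 11 Rigenera™ protocol wounds vs. 2 of 12 controls, p = 1). The paper's analysis was performed in R; the following stdlib-only Python sketch of the two-sided Fisher exact test is an independent reconstruction that reproduces the reported p value.

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher exact test for a 2x2 table [[a, b], [c, d]].

    Sums the hypergeometric probabilities of all tables with the same
    margins whose probability does not exceed that of the observed table.
    """
    row1, row2, col1 = a + b, c + d, a + c
    n = row1 + row2

    def p_table(k):
        # P(k successes in row 1 | fixed margins), hypergeometric
        return comb(row1, k) * comb(row2, col1 - k) / comb(n, col1)

    p_obs = p_table(a)
    lo, hi = max(0, col1 - row2), min(col1, row1)
    # small tolerance guards against floating-point noise at ties
    return sum(p_table(k) for k in range(lo, hi + 1)
               if p_table(k) <= p_obs * (1 + 1e-9))

# Responders (reepithelialization > 25%) vs. non-responders:
# group A (Rigenera protocol) 1/11, group B (control) 2/12
p = fisher_exact_two_sided(1, 10, 2, 10)
print(round(p, 3))  # 1.0, matching the reported p = 1
```

With these margins every possible table is at least as "extreme" as the observed one, so the two-sided p value is exactly 1.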
A further explanation for the unsatisfying results of our experimental trial might be the inadequate amount of dermal stem cells used to enrich the dermal substitutes. Actually, in our study, we used a 1 : 200 donor/recipient site ratio in order to minimize donor site morbidity. The gross dimensional disparity between the donor and recipient sites and the low concentration of dermal mesenchymal stromal stem cells might explain the poor epithelial proliferative boost observed in our study. Unpublished data from animal experimental trials by our research partner staff would suggest that the optimal donor/recipient site ratio is 1 : 20. Nevertheless, such a ratio would not make the dermal cell suspension transfer a convenient procedure in terms of donor site morbidity vs. a traditional large meshed split-thickness skin graft [37]. Actually, in previous literature reports, the Rigenera™ dermal cell transfer proved to be effective in the difficult-to-heal wound where a split-thickness skin graft was not indicated. Therefore, we suppose that the reported favorable outcome might have been related to an overall change of the wound environment, where a spontaneous reepithelialization might have been related to a nonspecific boost of a torpid wound bed from mesenchymal dermal and epithelial stem cells and matrix-derived factors. Instead, in our opinion, in an acute fresh wound, all of the factors involved in the wound healing process display a maximal expression, thus overshadowing the supposed contribution of the dermal extract as a whole. Indeed, a preconditioning of the dermal cells with a treatment enhancing their regenerative potential might yield a better outcome.
5. Conclusions
The role of the human dermal stem cell regenerative pool in enhancing the wound healing process is well-established knowledge and is leading to an increasing number of promising clinical applications.
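The donor/recipient site ratios discussed above come down to simple area arithmetic. As a back-of-envelope sketch: the 3 mm punch diameter and the 1 mm² to 2 cm² (1 : 200) coverage figure are taken from the operative protocol, while the function itself and the use of the median T1 wound area from Table 1 are illustrative assumptions, not part of the paper's methods.

```python
import math

# From the operative protocol: 3 mm punch biopsies; 1 mm^2 of dermal
# graft was expected to cover up to 2 cm^2 of wound (1:200 ratio).
PUNCH_DIAMETER_MM = 3.0
PUNCH_AREA_MM2 = math.pi * (PUNCH_DIAMETER_MM / 2) ** 2  # ~7.07 mm^2

def punches_needed(wound_area_cm2, donor_recipient_ratio=1 / 200):
    """Number of 3 mm punch biopsies for a given wound area.

    donor_recipient_ratio is donor dermis area over wound area,
    e.g. 1/200 (this study) or 1/20 (the ratio suggested by the
    unpublished animal data cited in the discussion).
    """
    donor_area_mm2 = wound_area_cm2 * 100 * donor_recipient_ratio
    return math.ceil(donor_area_mm2 / PUNCH_AREA_MM2)

# Median T1 wound area in this cohort was ~9.26 cm^2 (Table 1):
print(punches_needed(9.26))          # 1 punch at 1:200
print(punches_needed(9.26, 1 / 20))  # 7 punches at 1:20
```

The ten-fold jump in punch count at 1 : 20 illustrates why the authors judged that ratio inconvenient in terms of donor site morbidity.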
The Rigenera™ technology might promote a spontaneous reepithelialization; nevertheless, even if proved to be effective in stimulating a difficult-to-heal wound by turning a torpid chronic process into an active one, in our experience, it could not demonstrate any improvement in the reepithelialization process of a fresh surgical wound. A potential option in the future might be a preconditioning of the dermal stem cell harvest with senolytic active principles that would fully enhance their regenerative potential. Such a treatment might extend the clinical indications of this minimally invasive, standardized, operator-independent, and easy procedure, specifically suitable for the management of complex and critical cases.
Data Availability
The data used to support the findings of this study are restricted by the Ethical Committee (protocol number 2142) of the Istituti Clinici Scientifici Maugeri SB SpA IRCCS of Pavia in order to protect patient privacy. Data are available for researchers who meet the criteria for access to confidential data from the corresponding author upon request.
Conflicts of Interest
The author Antonio Graziano is a member of the Scientific Division of Human Brain Wave, the company owner of Rigenera™ Technology. The other authors have no conflict of interests to declare.
Acknowledgments
The experiments were supported by the Human Brain Wave srl, supplying Rigeneracons medical devices.
References
[1] F. Svolacchia, F. De Francesco, L. Trovato, A. Graziano, and G. A. Ferraro, “An innovative regenerative treatment of scars with dermal micrografts,” Journal of Cosmetic Dermatology, vol. 15, no. 3, pp. 245–253, 2016.
[2] E. Baglioni, L. Trovato, M. Marcarelli, A. Frenello, and M. A. Bocchiotti, “Treatment of oncological post-surgical wound dehiscence with autologous skin micrografts,” Anticancer Research, vol. 36, no. 3, pp. 975–979, 2016.
[3] F. De Francesco, A. Graziano, L.
Trovato et al., “A Regenerative Approach with Dermal Micrografts in the Treatment of Chronic Ulcers,” Stem Cell Reviews and Reports, vol. 13, no. 1, pp. 139–148, 2017.
[4] W. K. Boss, H. Usal, P. B. Fodor, and G. Chernoff, “Autologous Cultured Fibroblasts: A Protein Repair System,” Annals of Plastic Surgery, vol. 44, no. 5, pp. 536–542, 2000.
[5] S. Maxson, E. A. Lopez, D. Yoo, A. Danilkovitch-Miagkova, and M. A. Leroux, “Concise review: role of mesenchymal stem cells in wound repair,” Stem Cells Translational Medicine, vol. 1, no. 2, pp. 142–149, 2012.
[6] European Commission, Guidelines on Good Manufacturing Practice specific to Advanced Therapy Medicinal Products, 2017.
[7] M. Giaccone, M. Brunetti, M. Camandona, L. Trovato, and A. Graziano, “A new medical device, based on Rigenera protocol, in the management of complex wounds,” Journal of Stem Cell Reviews and Reports, vol. 1, no. 3, p. 3, 2014.
[8] L. Trovato, G. Failla, S. Serantoni, and F. P. Palumbo, “Regenerative Surgery in the Management of the Leg Ulcers,” Journal of Cell Science & Therapy, vol. 7, no. 2, 2016.
[9] M. Marcarelli, L. Trovato, E. Novarese, M. Riccio, and A. Graziano, “Rigenera protocol in the treatment of surgical wound dehiscence,” International Wound Journal, vol. 14, no. 1, pp. 277–281, 2017.
[10] L. Trovato, M. Monti, C. del Fante et al., “A new medical device Rigeneracons allows to obtain viable micro-grafts from mechanical disaggregation of human tissues,” Journal of Cellular Physiology, vol. 230, no. 10, pp. 2299–2303, 2015.
[11] K. Takahashi, K. Tanabe, M. Ohnuki et al., “Induction of pluripotent stem cells from adult human fibroblasts by defined factors,” Cell, vol. 131, no. 5, pp. 861–872, 2007.
[12] R. Vishnubalaji, M. Al-Nbaheen, B. Kadalmani, A. Aldahmash, and T. Ramesh, “Skin-derived multipotent stromal cells–an archrival for mesenchymal stem cells,” Cell and Tissue Research, vol. 350, no. 1, pp. 1–12, 2012.
[13] M. Dominici, K. le Blanc, I.
Mueller et al., “Minimal criteria for defining multipotent mesenchymal stromal cells. The International Society for Cellular Therapy position statement,” Cytotherapy, vol. 8, no. 4, pp. 315–317, 2006. [14] B. Coulomb, L. Friteau, J. Baruch et al., “Advantage of the pres- ence of living dermal fibroblasts within in vitro reconstructed skin for grafting in humans,” Plastic and Reconstructive Surgery, vol. 101, no. 7, pp. 1891–1903, 1998. [15] F. M. Wood, N. Giles, A. Stevenson, S. Rea, and M. Fear, “Characterisation of the cell suspension harvested from the dermal epidermal junction using a ReCell® kit,” Burns, vol. 38, no. 1, pp. 44–51, 2012. [16] R. I. Sharma and J. G. Snedeker, “Paracrine interactions between mesenchymal stem cells affect substrate driven differ- entiation toward tendon and bone phenotypes,” PLoS One, vol. 7, no. 2, article e31504, 2012. [17] E. Legué and J. F. Nicolas, “Hair follicle renewal: organization of stem cells in the matrix and the role of stereotyped lineages and behaviors,” Development, vol. 132, no. 18, pp. 4143–4154, 2005. [18] G. Cotsarelis, T.-T. Sun, and R. M. Lavker, “Label-retaining cells reside in the bulge area of pilosebaceous unit: implications for follicular stem cells, hair cycle, and skin carcinogenesis,” Cell, vol. 61, no. 7, pp. 1329–1337, 1990. [19] R. M. Lavker, S. Miller, C. Wilson et al., “Hair follicle stem cells. Their location, role in hair cycle, and involvement in skin tumor formation,” Journal of Investigative Dermatology, vol. 101, no. 1, pp. S16–S26, 1993. [20] R. M. Lavker, T.-T. Sun, H. Oshima et al., “Hair follicle stem cells,” The Journal of Investigative Dermatology, vol. 8, no. 1, pp. 28–38, 2003. [21] G. Taylor, M. S. Lehrer, P. J. Jensen, T.-T. Sun, and R. M. Lav- ker, “Involvement of follicular stem cells in forming not only the follicle but also the epidermis,” Cell, vol. 102, no. 4, pp. 451–461, 2000. [22] R. J. Morris, S. M. Fischer, and T. J. 
Slaga, “Evidence that a slowly cycling subpopulation of adult murine epidermal cells retains carcinogen,” Cancer Research, vol. 46, no. 6, pp. 3061–3066, 1986. [23] S. J. Miller, Z. G. Wei, C. Wilson, L. Dzubow, T. T. Sun, and R. M. Lavker, “Mouse skin is particularly susceptible to tumor initiation during early anagen of the hair cycle: possible involvement of hair follicle stem cells,” The Journal of Investi- gative Dermatology, vol. 101, no. 4, pp. 591–594, 1993. [24] R. J. Morris, K. Coulter, K. Tryson, and S. R. Steinberg, “Evidence that cutaneous carcinogen-initiated epithelial cells from mice are quiescent rather than actively cycling,” Cancer Research, vol. 57, no. 16, pp. 3436–3443, 1997. [25] R. J. Morris, K. A. Tryson, and K. Q. Wu, “Evidence that the epidermal targets of carcinogen action are found in the inter- follicular epidermis or infundibulum as well as in the hair follicles,” Cancer Research, vol. 60, no. 2, pp. 226–229, 2000. [26] G. Nicoletti, F. Brenta, A. Malovini, O. Jaber, and A. Faga, “Sites of basal cell carcinomas and head and neck congenital clefts: topographic correlation,” Plastic and Reconstructive Sur- gery Global Open, vol. 2, no. 6, article e164, 2014. [27] G. Nicoletti, M. M. Tresoldi, A. Malovini, S. Prigent, M. Agozzino, and A. Faga, “Correlation between the sites of onset of basal cell carcinoma and the embryonic fusion planes in the auricle,” Clinical Medicine Insights: Oncology, vol. 12, pp. 1–5, 2018. [28] V. Purpura, E. Bondioli, A. Graziano et al., “Tissue Charac- terization after a New Disaggregation Method for Skin Micro-Grafts Generation,” Journal of Visualized Experi- ments, vol. 109, no. 109, 2016. [29] T. da Silva Jeremias, R. G. Machado, S. B. C. Visoni, M. J. Pereima, D. F. Leonardi, and A. G. Trentin, “Dermal substi- tutes support the growth of human skin-derived mesenchy- mal stromal cells: potential tool for skin regeneration,” PLoS One, vol. 9, no. 2, article e89542, 2014. [30] G. Nicoletti, M. M. Tresoldi, A. 
Malovini, M. Visaggio, A. Faga, and S. Scevola, “Versatile use of dermal substitutes: a retrospective survey of 127 consecutive cases,” Indian Jour- nal of Plastic Surgery, vol. 51, no. 1, pp. 46–53, 2018. [31] S. Scevola, M. M. Tresoldi, and A. Faga, “Rigenerazione e riabilitazione: esperienza di chirurgia plastica in un centro di riabilitazione,” Giornale Italiano di Medicina del Lavoro ed Ergonomia, vol. 40, no. 2, pp. 97–105, 2018. [32] G. Nicoletti, S. Scevola, and A. Faga, “Bioengineered Skin for Aesthetic Reconstruction of the Tip of the Nose,” Dermato- logic Surgery, vol. 34, no. 9, pp. 1283–1287, 2008. [33] A. Faga, G. Nicoletti, F. Brenta, S. Scevola, G. Abatangelo, and P. Brun, “Hyaluronic acid three-dimensional scaffold for surgi- cal revision of retracting scars: a human experimental study,” International Wound Journal, vol. 10, no. 3, pp. 329–335, 2013. [34] G. Nicoletti, F. Brenta, M. Bleve et al., “Long- termin vivoassessment of bioengineered skin substitutes: a clinical study,” Journal of Tissue Engineering and Regenerative Medicine, vol. 9, no. 4, pp. 460–468, 2015. [35] M. V. Blagosklonny, “Paradoxes of senolytics,” Aging, vol. 10, no. 12, pp. 4289–4293, 2018. [36] J. L. Kirkland, T. Tchkonia, Y. Zhu, L. J. Niedernhofer, and P. D. Robbins, “The clinical potential of senolytic drugs,” Jour- nal of the American Geriatrics Society, vol. 65, no. 10, pp. 2297–2301, 2017. [37] R. Cuomo, L. Grimaldi, B. Cesare, G. Nisi, and C. D’Aniello, “Skin graft donor site: a procedure for a faster healing,” Acta Biomedica, vol. 88, no. 3, pp. 310–314, 2017. 
8 Stem Cells International Hindawi www.hindawi.com International Journal of Volume 2018 Zoology Hindawi www.hindawi.com Volume 2018 Anatomy Research International Peptides International Journal of Hindawi www.hindawi.com Volume 2018 Hindawi www.hindawi.com Volume 2018 Journal of Parasitology Research Genomics International Journal of Hindawi www.hindawi.com Volume 2018 Hindawi Publishing Corporation http://www.hindawi.com Volume 2013 Hindawi www.hindawi.com The Scientific World Journal Volume 2018 Hindawi www.hindawi.com Volume 2018 Bioinformatics Advances in Marine Biology Journal of Hindawi www.hindawi.com Volume 2018 Hindawi www.hindawi.com Volume 2018 Neuroscience Journal Hindawi www.hindawi.com Volume 2018 BioMed Research International Cell Biology International Journal of Hindawi www.hindawi.com Volume 2018 Hindawi www.hindawi.com Volume 2018 Biochemistry Research International Archaea Hindawi www.hindawi.com Volume 2018 Hindawi www.hindawi.com Volume 2018 Genetics Research International Hindawi www.hindawi.com Volume 2018 Advances in Virolog y Stem Cells International Hindawi www.hindawi.com Volume 2018 Hindawi www.hindawi.com Volume 2018 Enzyme Research Hindawi www.hindawi.com Volume 2018 International Journal of Microbiology Hindawi www.hindawi.com Nucleic Acids Journal of Volume 2018 Submit your manuscripts at www.hindawi.com https://www.hindawi.com/journals/ijz/ https://www.hindawi.com/journals/ari/ https://www.hindawi.com/journals/ijpep/ https://www.hindawi.com/journals/jpr/ https://www.hindawi.com/journals/ijg/ https://www.hindawi.com/journals/tswj/ https://www.hindawi.com/journals/abi/ https://www.hindawi.com/journals/jmb/ https://www.hindawi.com/journals/neuroscience/ https://www.hindawi.com/journals/bmri/ https://www.hindawi.com/journals/ijcb/ https://www.hindawi.com/journals/bri/ https://www.hindawi.com/journals/archaea/ https://www.hindawi.com/journals/gri/ https://www.hindawi.com/journals/av/ https://www.hindawi.com/journals/sci/ 
© 2010 Macmillan Publishers Ltd. 1743–6540 Journal of Digital Asset Management Vol. 6, 3, 131–132 www.palgrave-journals.com/dam/

Welcome to Volume 6, Issue 3. The season of spring has sprung, and with it comes much growth in the field and practice of digital asset management. This issue reflects that growth, with many great articles and interviews to stimulate the mind and rejuvenate our interest in DAM.

To begin with, JDAM is pleased to have a book excerpt from Patricia Russotti and Richard Anderson's Digital Photography Best Practices and Workflow Handbook: A Guide to Staying Ahead of the Workflow Curve. Chapter 7, Getting To Work – Image Ingestion, covers the key components of getting digital images from the camera into the computer. The ingestion step is one of the most important steps in a safe and efficient digital image workflow. Performing this step properly provides the best opportunity to automate the digital workflow and ensure the organization and safekeeping of the photographer's work. The authors discuss best practices for the safe transfer of images using dedicated ingestion software, creating a file and folder backup, and much more.

Michael Moon interviews Cheryl McKinnon, Chief Marketing Officer at Nuxeo. This interview provides insights into Nuxeo's market strategy and differentiation, and why DAM is an essential move for the company. Nuxeo announced the availability of Nuxeo DAM, its first digital asset management product, earlier this year. An established vendor of open source Enterprise Content Management since 2000, Nuxeo made the move into DAM largely in response to demand from its diverse install base.
Nuxeo DAM builds upon the strength of the Nuxeo Enterprise Platform, a highly extensible Java-based environment designed specifically for content applications. Nuxeo DAM has been designed with the creative content consumer in mind – offering the key functionality needed to manage multiple renditions of rich content, ease the storage burden, protect against intellectual property or copyright breaches, and enable easy and intuitive search and navigation through media assets.

In less than 3 years, Aaron Fulkerson, CEO of MindTouch, has grown MindTouch from a small open source project into one of the world's most popular open source collaboration platforms. Its wiki-like platform enables tens of millions of users globally to connect and customize enterprise systems, social tools and web services. Michael Moon interviews Aaron and provides a fascinating discussion on the Enterprise 2.0 and open source collaboration markets. In fact, enterprise spending on Web 2.0 technologies is predicted by Forrester to grow strongly over the next 5 years, reaching US$4.6 billion globally by 2013, and MindTouch was cited by Forrester as the best alternative to industry heavyweights Microsoft and IBM.

In a captivating and most timely article, Doug Liles, Group Director, Enterprise Solutions for Southern Graphics Systems, provides insight into Web Services and DAM. With such topics as DAM as Workflow, WCM Integration, Beyond Plug-Ins and the need for standards, Doug makes an impassioned appeal to the DAM marketplace to integrate web services APIs into the core of Digital Asset Management solutions to meet user and application demands.

Randal Luckow, Digital Archivist, manages the Metadata and Digital Media Services Department at Turner Entertainment. In his intriguing article, he argues that when a needed controlled vocabulary does not exist in available taxonomies, it is necessary to develop a new taxonomy based upon original research and/or authoritative sources.
His article presents an example of how an animation-specific taxonomy and subsequent controlled vocabulary was constructed to assist in the successful search and retrieval of animated moving image assets. The outcome of this research was used to assist in the automation of internal workflows used to comply with the Children's Television Act.

To conclude this issue, Michael Moon provides an exciting interview with the Digital Asset Manager of K12 Inc., Henrik de Gyor, on the implementation and daily operations of Digital Asset Management within the education company. K12 Inc. (NYSE: LRN), a technology-based education company, is the nation's largest provider of proprietary curriculum and online education programs to students in kindergarten through high school.

John Horodyski
Managing Editor
5058 St Catherines Street, Vancouver, BC V5W 3E8, Canada.
E-mail: jhorodyskiJDAM@gmail.com

Journal of Digital Asset Management (2010) 6, 131–132. doi: 10.1057/dam.2010.17

The new change equation

It is not the strongest of the species that survives, nor the most intelligent that survives. It is the one that is the most adaptable to change. – Charles Darwin

Change means different things to different people. To the CEO, it may mean increasing profits, cutting costs, or saving the business; to you or me, it may mean no more or no less than keeping or losing our job. That is why change is so profoundly unsettling. And the less control we have over the change, the more unsettling it tends to be. Of course, not every organizational change is job-threatening. A dictionary definition of change is "the act or an instance of making or becoming different; an alteration or modification". This suggests, entirely accurately, that change comes in many shapes and sizes.
Indeed, the word change is used to cover a multitude of situations: everything from the mundane – putting on a pair of clean socks – to the profound physiological alterations that occur during midlife. Organizational change, too, comes in different degrees and guises. I distinguish between four main types:

Temporary change

For a time, it looks as if things are going to change, but the organization reverts to type and nothing happens. Any initiative quickly peters out, often after creating false hope. The organization is simply not ready for change. How often have you seen the Big Bang approach to change, in which considerable time and effort is placed on announcing the forthcoming strategic agenda and how everyone will gain from the benefits – yet life remains the same? In such instances, the illusion of change substitutes for any reality. More damp squib than big bang. Employees feel disappointed and let down. It's something they have heard before. Soon lethargy and mistrust seep in and turn to chronic cynicism. The situation becomes toxic; only radical surgery can fix it.

Incremental or process change

This sort of change aims to provide some small improvements. It is easy and quick to implement, and you get quick returns. The risk of failure is low, but so are the returns in terms of benefits. Incremental change means operating within strict controls to gain efficiencies from your company's system of organization. Fine-tuning a winning formula usually characterizes this type of change. You know the sort of thing. In one study, for example, a call centre in Sunderland increased its productivity by 20 per cent by introducing simple measures that included staff training and the implementation of new software. A separate study of incremental changes and training showed similarly increased sales: the company produced an extra $110.25 per month per sales agent which, for a 500-seat call centre, translated to an estimated $661,500 in extra sales over a year.
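The annual figure quoted above follows directly from the per-agent uplift. As a minimal back-of-envelope check (assuming, as the study implies, that the uplift applies uniformly across all 500 seats):

```python
# Back-of-envelope check of the call-centre figures quoted above.
# Assumption: the $110.25/month uplift applies uniformly to every seat.
uplift_per_agent_per_month = 110.25  # dollars
seats = 500
months = 12

annual_uplift = uplift_per_agent_per_month * seats * months
print(f"${annual_uplift:,.0f}")  # → $661,500
```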
This sort of incremental change is useful – worth having, certainly – but unlikely to transform the organization's competitive position.

© 2008 The Author | Journal compilation © 2008 London Business School. Business Strategy Review, Winter 2008.

Special report: The new change equation. Many leaders who want their companies to change look for a quick and easy formula. Michael Jarrett believes such an approach is flawed from the start. Far better, he argues, to understand how change really happens.

Organizational restructuring

Here, the change focuses on fundamental systems, structures and relationships within the business. The introduction of a new sales force to increase market penetration is a typical example. These changes can take up to a year to embed, depending on the scale. The risks increase, but so do the rewards. Supermarkets that add an online distribution channel are examples of this form of change. The UK retail chain Tesco was among the first to move to an Internet strategy on a large scale in the retail grocery market. In the initial phase, it meant restructuring the company's distribution channels to get the best returns from its existing assets. The company tended to pick groceries from existing stores that acted as local hubs. It had to implement new structures and systems to meet the needs of its new online customers.

Transformational and cultural change

Programmes of this nature aim to redefine the organization's strategic direction, cultural assumptions and identity. Examples include IBM moving from hardware to software, Polaroid moving from film to digital photography, and BT moving from telecoms to becoming an Internet provider. Larger-scale change initiatives such as these yield greater returns, but the risks – and stakes – are also much higher. Nissan was a car company in deep trouble at the beginning of the 2000s.
It had $17 billion of debt and looked like another casualty on the rocks of change. Carlos Ghosn took the company reins and completely transformed it. Within four years, Nissan was one of the world's most profitable automakers. The cultural and strategic changes Ghosn implemented marked a transformational change. He took a deep cut at the heart of the organization's DNA and rewired it to meet the challenges of the fiercely competitive and cost-conscious auto industry.

Readiness for change

In my experience, one of the things that goes wrong with change programmes (again and again!) is that organizations and leaders fail to reconcile or even understand their internal capabilities and the complexity of their external worlds. They either respond to a change in the external environment without thinking of the internal repercussions or attempt to force through changes that make sense internally but no longer fit the context. It doesn't have to be this way. Managers who achieve successful change do something different. They may not consciously know they are doing it, but they are doing it all the same. They are selecting an appropriate change strategy, one that matches their internal capabilities and their external challenges. My research shows that the best predictor of the success or failure of organizational change is readiness for change. What do I mean by this term? Readiness for change applies at the philosophical level – being open to and prepared to embrace change; but it also applies at the practical level. Readiness applies to those organizations that have developed a set of core dynamic and internal capabilities that allow them to adapt when faced by external demands. It is the precursor to those organizations that gain strategic agility.
Basically, successful change is a function of how well an organization's internal capabilities – its management capacity, culture, processes, resources and people – match the requirements of its external environment, the marketplace. The secret to the management of change is not only what happens on the outside – it is how we respond on the inside: as leaders and as organizations. This is the essential lesson of managing change. To make change stick, we must have organizational readiness. In Louis Pasteur's words: "Chance favours the prepared mind." It is also true that organizations with high levels of readiness favour change. So if you want to succeed at introducing change, you need to understand that different situations demand different strategies of change. Simply put, you need to appreciate the change equation: internal capabilities + external environment + strategic leadership = a change strategy.

Look inside

How does a leader successfully implement far-reaching changes across an organization in the face of dramatic external demands? This was a question I asked Richard Ward, who served as CEO of the International Petroleum Exchange (IPE). Richard started his career as a scientist and an academic. His razor-sharp thinking meant he quickly grasped the complexities and rhythm of the business world and was able to spot trends. So, was he fully prepared for what happened when he announced, in spring 2005, that the IPE oil exchange would be changing from "open outcry" on the floor of the exchange to electronic trading using terminals? The change – a seemingly inevitable update given technological advances and increasingly global finance – met with unexpectedly violent opposition.
At one point, Richard found himself seized by the throat and pinned to the wall of the men's toilet. At the other end of the burly hands was one of the traders from the floor of the exchange. He was six feet tall and all he could see was the end of an era. The trader had worked at the exchange boy and man. He was good at his job and made outstanding money. It was his life. The "open outcry" on the floor represented years of tradition and ritual – men in strangely coloured coats, shouting and accepting bids in a cacophony of yells and excitement. But, to say the least, he was not prepared for change. Nor was the organization he was part of. The context was a true reflection of Adam Smith's "invisible hand" of capitalism: information was widely known by all; exchange was at a fair price; there were lots of buyers and sellers. It was a perfect market driven by the animal spirits of supply and demand. Now someone was going to change it all and replace it with electronic placing. Why replace a perfect system with one that was, granted, even more transparent, quicker and easier to do business with, and that allowed instant access to aggregate data? What was Ward thinking? Indeed, many of the traders on the floor rejected the Big Brother changes and regarded the switch to electronic placing as heresy. They saw no advantages in the new system. It would take the heart and soul out of the process, they argued. It meant the end of an era. They announced their intention to fight the change to the bitter end, and they did. Let's be clear. The idea of moving to electronic placing was a good strategic decision. The trends and moves at other major markets, such as the New York Stock Exchange, meant that IPE needed to respond to the times. So, it was a sensible strategy, but waiting in the wings was the potential for it to unravel into chaos and despair. Given the patent need for change and the internal opposition, how did Richard Ward and his team make it work?
Clearly, they had a long haul. Along the way, he successfully navigated two strands of the change strategy. The first was the external environment. Constant vigilance and extensive networks provided him and his small change team with the information and resources they needed to structure the right deal within the current climate of hostile competition, a drive for cost saving, and the onslaught of technology across the world's major bourses. Operating and negotiating with a network of agents, brokers and stakeholders maintained good relationships in the market. Managing the internal capability was also part of his secret. The need for change was properly communicated and understood, thus addressing the initial major concerns. They closed the trading floor, provided more access points through computer terminals to increase the transparency and speed of trading, reduced errors and provided a secure base for the market. He involved internal stakeholders and eventually managed to find the critical mass to make the changes work. The changes took place in a hostile environment, but the top team managed the external and internal worlds of the organization – and produced a successful outcome. The secret to their success: devising a change strategy that aligned and developed their internal capability with the pressures of the external demands. Ward is not alone. He was smart enough to react and correct things, but his experience emphasizes the point that most strategies fail not because of strategic analysis but because of poor implementation. Preparing an organization internally is absolutely essential to the change equation.

Look outside

There are at least five external factors that also affect a change strategy. These may be outside of your direct control, but you can influence them. Essentially, these external dependencies change the rules of the game and the way companies create value. Often, when external factors threaten, the challenge is to change or die.
There are a number of different types of external challenge. They include the following:

Failure to keep up with changes in disruptive technology

For example, Polaroid's failure to respond to the threat of digital photography led directly to the company's decline. Failing to keep pace with changes in your industry can take you by surprise and lead to competitive advantage suddenly disappearing. Look at how IBM lost its advantage in its traditional hardware market. Even so, it is a positive role model for what can be achieved through change – witness its reinvention over the last decade from hardware to consulting.

Reliance or dependency on other organizations for crucial resources or assets

Think of outsourcing: you can find yourself locked into particular situations and expectations in which who owns what and who is responsible may be impossible to establish. This happens more regularly than you might think. A rail company with which I worked had previous and long-standing investments that meant that the infrastructure was slow to respond to new demands in transport. The company couldn't do what it wanted.

New political and legislative demands

Deregulation in the US airline industry led to established companies such as TWA failing to survive. The Sarbanes-Oxley Act of 2002 increased industry concentration among the major US accounting firms. Privatization in some countries is a shock to the system for public-sector organizations.

Underestimating competition from unexpected places

Many petrol stations now offer food, for example, and compete directly with small grocery shops. Microsoft developed the Xbox in part to stop Sony coming into its space through the back door of the online Sony PlayStation.
Microsoft (with some $60 billion in revenue) did the same in bidding for Yahoo against Google, a company that is considerably smaller (around $20 billion) but one that continues to be perceived as a strategic threat. Some of the biggest threats to financial-service organizations in the United Kingdom come from large supermarkets like ASDA, Tesco and Marks & Spencer. These high-street brands can offer retail lending to consumers much more easily than traditional channels can.

Environmental volatility, market and economic trends and other contingencies

The tumult in global stock markets starting last October made it clear just how fast things could change for companies overnight. Yet head-spinning change is always a risk. I recall undertaking a large consulting assignment for a Malaysian oil company at the end of 1995. In just three weeks, the Malaysian ringgit spiralled downward, losing nearly 25 per cent of its value. When there's a mega-shift in the marketplace, individual firms have little or no control over their fortunes; and it is industry or economic shock waves that finally determine those parts of the market that survive and those that die. These are just five of the innumerable external factors that can directly influence a change strategy. While no company or its leaders can alter, for example, the devaluation of a national currency, what's critical is for leaders to be aware of – and be ready to compensate for – such major external events. A ship that leaves port with no plan or provisions for a major storm is a doomed vessel.

Look for leadership

A lack of strategic goals for change is also a major point of concern. Without the big strategic planks and the road map that follows, change is impossible to achieve. The strategic goal sets the compass for change and provides a beacon for the organization to steer by. All of this comes down to the presence or absence of one factor – leadership.
The insurance industry is populated by companies that stand or fall by how well they can manage change. Insurance companies are always asking a standard question: what is the best way to manage large and unpredictable risks? This was precisely the issue that focused the mind and energy of specialists and managers at Swiss Reinsurance, one of the world’s largest reinsurance companies. The company actually rephrased the standard question into one that was directly relevant to their own success: how can unusual risks be underwritten by placing them back on the open financial markets? Hurricanes in Florida or floods in the Indian subcontinent are infrequent but major events. Techniques in alternative risk transfers marked a small but new territory, and the group had restructured to make its mark there. The company set up the Financial Services Business Group (FSBG) in 2001, and an integration programme swiftly followed to capture the benefits as quickly as possible. Most commentators agreed there was scope for these types of products but mobilizing the market would not be easy. Every minute mattered. FSBG set up a small change team to make the transition as smooth as possible. Jacques Aigrain, head of the group, did all the right things to start with: he set out the strategy with his top team and created an organizational structure to draw on the advantages of the group’s core competencies in reinsurance and investment banking. The agenda for change was clear, and everyone started to go through the usual step models of change: creating a sense of urgency, building a guiding coalition, and so on. But it soon became apparent that, in this case, small steps would not allow the company to make the required leap. Bold leadership was required. Aigrain, working with the company’s CEO, John Coomber, did many things. 
Among those that made a difference, the company joined forces with the Centre for Health and the Global Environment of Harvard Medical School and the United Nations Development Programme and actually hosted a conference that brought all major players in the insurance world together. As summarized by referenceforbusiness.com: "World leaders in the fields of business, government, and science met in late 2003 and in the spring of 2004 to discuss, define, and strategize. Among those present were representatives of Swiss Re, the Allianz Group, AON, Goldman Sachs, JP Morgan Chase, Johnson & Johnson, BP, and the Association of British Insurers…. As a result of the discussions the participants stated that they would work to increase their knowledge of these new risks to identify proactive responses. They agreed to work individually and in concert." And, to their credit, Swiss Re did all this while also becoming a better member of society. It committed itself to a 10-year programme to become "greenhouse neutral" by combining emission reduction measures aligned with investment in the World Bank Community Development Fund. One news source declared that Swiss Re thus established itself as the world's largest financial services company to set such a goal. But such leadership was not easy. It never is. On the very page of the Swiss Re website announcing that Aigrain would succeed Coomber as CEO, there is a telltale statement about how the company views the need for constant change as the only way to cope
I cite the bullet-point list to demonstrate that wise leaders plan appropriately for both internal and external factors that can influence a company’s destiny. Or, as Swiss Re says at the bottom of the webpage: “These factors are not exhaustive. We operate in a continually changing environment and new risks emerge continually.” Thus, perhaps the first and most important thing leaders must do is disavow, once and for all, the myth that change is simple to understand and can be managed by logical, incremental steps.

Myth of change

The myth of change is that it can be done in steps. This assumes it is a planned, controlled process. My experience is that major change is interactive, complex and nonlinear, undermining all traditional assumptions of change management. Emotions will run high, as will political machinations. Change is emergent; it cannot be controlled. Forget the books and articles that espouse that change is easily managed. This view is based on fundamental assumptions about the world: stability, certainty, homogeneity, and centralized sources of power and authority. We now live in a fast-changing, post-modernist world; complexity, uncertainty and difference are part of the norm. Sources of power as well as expectations of employees and consumers have shifted; today, emergent, interactive processes yield results. Wise leaders avoid simple-step models. Today the environmental landscape can shift quickly and unexpectedly. Models of change that use recipes provide useful frameworks but are insufficient. They can be static, unresponsive to outside influences and oversimplified. They can miss many subtleties and undercurrents; in some cases, following steps can do more harm than good. Thus, change models need to be contingent upon a firm understanding of the external environment and a grasp of your internal choices.
Change is a function of external dynamics and internal capabilities; and, significantly, success or failure is often determined by the interaction between the two. Strategic leadership must be present or the interaction between external events and internal capabilities will never synchronize into success. There is, indeed, no easy formula for managing change. This, however, is the new change equation.

Michael Jarrett (mjarrett@london.edu) is an Adjunct Professor in Organizational Behaviour at London Business School and a founding partner of Ilyas Jarrett & Company, a research and management consultancy. His book, Changeability, will be published by FT Prentice Hall in 2009.

Factors to watch
● Cyclicality of the reinsurance industry
● Changes in general economic conditions, particularly in our core markets
● Uncertainties in estimating reserves
● The performance of financial markets
● Expected changes in our investment results as a result of the changed composition of our investment assets or changes in our investment policy
● The frequency, severity and development of insured claim events
● Acts of terrorism and acts of war
● Mortality and morbidity experience
● Policy renewal and lapse rates
● Changes in rating agency policies or practices
● The lowering or withdrawal of one or more of the financial strength or credit ratings of one or more of our subsidiaries
● Changes in levels of interest rates
● Political risks in the countries in which we operate or in which we insure risks
● Extraordinary events affecting our clients, such as bankruptcies and liquidations
● Risks associated with implementing our business strategies
● Changes in currency exchange rates
● Changes in laws and regulations, including changes in accounting standards and taxation requirements, and
● Changes in competitive pressures
Source: www.swissre.com

Change can be unsettling and difficult. Lots of managers and executives find it hard to cope with.
That may help explain why 70% of change initiatives fail. Yet being able to change is critical for business success. Changeability shows how businesses can learn to positively thrive on change. It will give you the strategies you need to prepare for change, enabling you to respond to and manage it effectively. And if you can become more adept at handling change than your competitors, you’ll have the winning edge. Dr Michael Jarrett is one of Europe’s leading experts on organizational change. He is an adjunct professor in organizational behaviour at London Business School and a founding partner of Ilyas Jarrett & Co, a research and management consulting firm that advises FTSE 100 companies on change management.

CHANGEABILITY: Why some companies are ready for change – and others aren’t
MICHAEL JARRETT
ISBN: 9780273712893 | RRP: £24.99

“Europe's leading thinker on change.” Business Strategy Review
“Vital reading for all those charged with leading change.” Ron Whatford, Lloyds TSB
“… full of examples and insights that will provoke, delight and educate. A great addition to the library of every thoughtful manager.” Costas Markides, Robert P Bauman Professor of Strategic Leadership, London Business School

Changeability is available to order now with a special 30% discount for Business Strategy Review readers.
You can place your order at http://www.pearsonbooks.com/changeabilitybsr

----

Modeling winter wheat phenology and carbon dioxide fluxes at the ecosystem scale based on digital photography and eddy covariance data☆

Ecological Informatics 18 (2013) 69–78. http://dx.doi.org/10.1016/j.ecoinf.2013.05.003

Lei Zhou a,b, Hong-lin He a,⁎, Xiao-min Sun a, Li Zhang a, Gui-rui Yu a, Xiao-li Ren a,b, Jia-yin Wang a, Feng-hua Zhao a

a Key Laboratory of Ecosystem Network Observation and Modeling, Institute of Geographic Sciences and Natural Resources Research, Chinese Academy of Sciences, Beijing 100101, China
b University of Chinese Academy of Sciences, Beijing 100049, China

Abbreviations: GPP, gross primary production; LUE, light use efficiency; AVHRR, advanced very high-resolution radiometer; MODIS, moderate resolution imaging spectroradiometer; NEE, net ecosystem exchange; APAR, amount of photosynthetic radiation absorbed; VPM, vegetation photosynthesis model; DOY, day of year; ROI, region of interest.

☆ This is an open-access article distributed under the terms of the Creative Commons Attribution-NonCommercial-No Derivative Works License, which permits non-commercial use, distribution, and reproduction in any medium, provided the original author and source are credited.

⁎ Corresponding author. Tel.: +86 10 6488 9599; fax: +86 10 6488 9399. E-mail address: hehl@igsnrr.ac.cn (H. He).

Article history: Received 24 November 2012; received in revised form 8 May 2013; accepted 9 May 2013; available online 23 May 2013.

Keywords: Digital camera; Greenness index; Phenological date; Gross primary production; Winter wheat

Abstract

Recent studies have shown that the greenness index derived from digital camera imagery has high spatial and temporal resolution.
These findings indicate that it can not only provide a reasonable characterization of canopy seasonal variation but also make it possible to optimize ecological models. To examine this possibility, we evaluated the application of digital camera imagery for monitoring winter wheat phenology and modeling gross primary production (GPP). By combining the data for the green cover fraction and for GPP, we first compared 2 different indices (the ratio greenness index (green-to-red ratio, G/R) and the relative greenness index (green to sum value, G%)) extracted from digital images obtained repeatedly over time and confirmed that G/R was best suited for tracking canopy status. Second, the key phenological stages were estimated using a time series of G/R values. The mean difference between the observed phenological dates and the dates determined from field data was 3.3 days in 2011 and 4 days in 2012, suggesting that digital camera imagery can provide high-quality ground phenological data. Furthermore, we attempted to use the data (greenness index and meteorological data in 2011) to optimize a light use efficiency (LUE) model and to use the optimal parameters to simulate the daily GPP in 2012. A high correlation (R² = 0.90) was found between the values of LUE-based GPP and eddy covariance (EC) tower-based GPP, showing that the greenness index and meteorological data can be used to predict the daily GPP. This finding provides a new method for interpolating GPP data and an approach to the estimation of the temporal and spatial distributions of photosynthetic productivity. In this study, we expanded the potential use of the greenness index derived from digital camera imagery by combining it with the LUE model in an analysis of well-managed cropland. The successful application of digital camera imagery will improve our knowledge of ecosystem processes at the temporal and spatial levels. © 2013 Published by Elsevier B.V. All rights reserved.

1. Introduction

Phenology is the study of the timing of recurring biological events and the causes of the changes in this timing produced by biotic and abiotic factors (Lieth, 1976; Zhu and Wan, 1973). Phenological studies have a long tradition in agriculture. The knowledge of the annual timing of crop phenological stages and their variability can help to improve crop yield and food quality by providing dates for timely irrigation, fertilization, and crop protection (Mirjana and Vulić, 2005). In agro-meteorological studies, phenological data are used to analyze crop–weather relationships and to describe or model the phyto-climate (Frank, 2003). Furthermore, phenological data are one of the most important components of ecosystem and dynamic vegetation models (Arora and Boer, 2005; Dragoni et al., 2011). Only accurate descriptions of phenology and canopy development in ecosystem models can produce reasonable carbon budgets at regional and global scales (Gitelson et al., 2012; Richardson et al., 2012). Thus, the ability to accurately model and predict seasonal canopy development is essential.

Generally, 2 methods are used to acquire phenological data: direct observation and satellite-based observation. Phenological field stations that rely on data gathered by direct human observation provide relatively accurate phenological stages (Bowers and Dimmitt, 1994;
Menzel and Fabian, 1999). However, the sparsely distributed stations located in limited geographical areas have poor spatial coverage, and phenological data are inadequate to characterize the continuous development of vegetation (Schwartz et al., 2002; White et al., 2005). Although remote sensing data from sources such as the advanced very high-resolution radiometer (AVHRR), the moderate resolution imaging spectroradiometer (MODIS), and Landsat are used in detecting spatial patterns of global-scale phenology (Ganguly et al., 2010; Heumann et al., 2007; Moulin et al., 1997; Soudani et al., 2012; Zhu et al., 2012), exogenous factors such as atmospheric effects (e.g., cloud contamination, haze) and data processing methods limit the quality of satellite observations and produce variation in the results. For this reason, there remains a crucial need for precise field data that can be used to understand and validate satellite data (Studer et al., 2007). In this context, the most important point is the large gap between spatially integrated information from satellite sensors and point observations of phenological events at the species level (Linderholm, 2006; Schwartz and Reed, 1999).

In the past few years, significant improvements in electronic imaging technologies have made digital camera technology increasingly popular. As a new method for near-surface remote sensing, digital camera technology, which combines the advantages of traditional field phenological observations with remote sensing observations, has shown great potential in monitoring phenological events in various ecosystems including forests and grassland (Fisher et al., 2006; Hufkens et al., 2012; Richardson et al., 2009; Saitoh et al., 2012; Wingate et al., 2008). Recently, the relationship between the timing of phenological events and carbon flux has received increasing attention (Ahrends et al., 2009; Baldocchi et al., 2001; Kindermann et al., 1996).
Previous research has indicated that farmland ecosystems, especially in mid-latitudes, contribute substantially to the regional carbon dioxide budget (Soegaard et al., 2003). As a key parameter of the carbon cycle, GPP can be used to evaluate the effects of climate variation or crop management on food production. However, the lack of an accurate description of phenology and canopy status in ecosystem models restricts the accuracy of GPP simulations. Thus, to develop improved simulation models of carbon flux, we need more accurate seasonal phenological data for the canopy (Chiang and Brown, 2007). Real-time canopy status information extracted from digital camera photography can allow the optimization of GPP models. To the best of our knowledge, data derived from digital camera images have seldom been used in efforts to optimize GPP models (Migliavacca et al., 2011).

As a common method of estimating GPP, the light use efficiency (LUE) model has been used at various scales. Optimizing the LUE model based on canopy greenness data obtained from digital photography and on CO2 flux measured from eddy covariance towers might be useful for several reasons. First, this approach might augment the methodology used to acquire data for ecological studies. Second, it might improve the accuracy of modeling canopy phenology and daily GPP at the ecosystem scale.

In China, phenological stations and conventional phenological data are relatively scarce. Both are associated primarily with natural ecosystems; in contrast, they seldom offer phenological coverage of farmland (Chen et al., 2005). The North China Plain is situated on sediments deposited by the Yellow River and is located between 114–121°E and 32–40°N. It includes 2 metropolitan centers (Beijing and Tianjin) and 5 provinces (Anhui, Hebei, Henan, Jiangsu, and Shandong; Liu et al., 2001).
It is the largest and most important agricultural area of China and is also known as the “Granary of China.” It encompasses an agricultural area of approximately 1.8 × 10⁵ km² (Wu et al., 2006), producing more than 50% of the nation's wheat supply (Kendy et al., 2003). To date, to the best of our knowledge, no attempts have been made to use digital camera imagery to monitor crop phenology and optimize carbon dioxide flux models in China.

In this study, we analyzed time series of color indices of winter wheat obtained from digital camera imagery during 2011 and 2012. Our primary goal is to address the following research questions: (1) Can digital camera imagery be used to characterize the seasonal dynamics of winter wheat? (2) Can digital camera imagery be used as an additional source to evaluate the potential of the LUE model for estimating the daily GPP of winter wheat at the site level? The aim of the study is to create a Digital Camera Phenological Observation Network in China and to increase the understanding of the seasonal covariation between GPP and phenology at the ecosystem scale.

2. Materials and methods

2.1. Site description

The study was conducted at Yucheng Comprehensive Experimental Station (36°27′N, 116°38′E, 20 m elevation), located in the North China Plain within the East Asian Monsoon region. The climate of this region is warm temperate and semi-humid. Over the past 30 years, the mean temperature and mean precipitation of this area were approximately 13.1 °C and 528 mm, respectively. Nearly 60% of the summer precipitation occurs from June through August. The parent soil materials are alluviums from the Yellow River. The soil texture of the root zone is sandy loam (Li et al., 2006). Winter wheat (Triticum aestivum L.) is usually sown in early October and harvested in mid-June. The life cycle of winter wheat is shown in Table 1.

2.2. Camera images and image analysis

Canopy images were collected using a commercial webcam (model 214; Axis Communications, Lund, Sweden) installed in a weatherproof enclosure at a height of 4 m above the ground. The camera featured a Sony Corp. 1/4″ Wfine progressive scan red-green-blue (RGB) silicon charge-coupled device. The red, green, and blue channels had peak sensitivities at wavelengths of 620, 540, and 470 nm, respectively (full technical specifications are available online, http://www.sony.net/Products/SC-HP/datasheet/90203/data/a6811217.pdf). The camera provided half-hourly JPEG images (image resolution of 384 × 288 with 3 color channels of 8-bit RGB color information, i.e., digital numbers ranging from 0 to 255) from 8:30 to 17:00 h local time every day. The image covered an area of approximately 30 m². The raw images were transmitted via wireless networks and automatically stored on a personal computer. The filenames included a date and time stamp for subsequent processing. In this study, we collected 2238 images in 2011 and 1981 images in 2012. On average, 17 images were collected per day; however, unavoidable network connectivity problems resulted in occasional gaps in the webcam data recordings. The data for 19 days (12%) and 21 days (13%) were missing in 2011 and 2012, respectively. The longest gap without a picture was 7 days in 2011 and 15 days in 2012. The image quality was occasionally adversely affected by variable light conditions or by condensation on the window housing the camera. However, no selective editing or artificial enhancement of any of the archived images was performed before the image analysis. This procedure allowed us to compare the results with those obtained from different sites or reported by other studies. The image data were processed using a program written in Matlab (R2009a; The MathWorks, Natick, Mass., USA). The camera images were successively loaded, and the date and time were extracted from the filename.
First, a region of interest (ROI) is normally selected within each image because plant populations can show spatial heterogeneity; however, the winter wheat showed homogeneous growth within the coverage represented by a single photo, so the entire image was selected as the ROI. Second, color channel information (digital numbers: DNs) was extracted from the images and averaged across the ROI for each of the 3 color channels (red DNs, green DNs, and blue DNs). Furthermore, the camera-based greenness indices (G%, G/R) for each photograph were calculated using the following equations (Adamsen et al., 1999; Ahrends et al., 2008):

G% = Green DN / (Red DN + Green DN + Blue DN)   (1)

G/R = Green DN / Red DN,   (2)

where G% and G/R are the greenness indices for the ROI of each photograph and Red DN, Green DN, and Blue DN are the DN values of the red, green, and blue channels.

Table 1. The life cycle of winter wheat.

Crop: Winter wheat. Life cycle: Seeding time, Time of emergence, Trefoil stage, Tillering stage, Green returned stage, Jointing stage, Heading stage, Dough stage, Harvest time.

2.3. Eddy covariance flux measurement and micrometeorological data

An eddy covariance (EC) system and a microclimate gradient measurement system were placed in the center of a large crop field. The EC system consists of a 3-dimensional sonic anemometer and an open-path infrared CO2/H2O analyzer (IRGA, Li-7500; Li-Cor Inc., Lincoln, Nebraska, USA) at a height of 2.80 m; this system measures the fluctuations in wind velocity, temperature, water vapor, and CO2 concentration. All the data were collected continuously at 10 Hz with a Campbell Scientific data logger (model CR5000; Campbell Sci. Inc., Utah, USA), and the 30-min mean data were output (Qin et al., 2005).
The microclimate gradient measurement system includes anemometers (model AR-100; Vector Instruments, UK) and psychrometers (model HPM-45C; Vaisala, Finland) placed at average heights of 2.20 and 3.40 m, respectively. Photosynthetically active radiation was measured using a quantum sensor (Li-190SB; Li-Cor Inc., USA). For a complete description of the arrangement of these instruments, see Li et al. (2006).

2.4. Thresholds for detecting key phenological stages

Key phenological stages are periods in which canopy greenness changes markedly and farmland management has a significant effect on farm crop yield. In this study, the key phenological stages for winter wheat were the greenup stage, the jointing stage, and the harvest time. (1) During the greenup stage, winter wheat begins to green up from winter dormancy, and green leaves rapidly appear. Therefore, the smoothed time profile of the greenness index was assumed to increase rapidly during this stage. The date of the most rapid increase in the greenness index in the time profile was used as the estimate of the onset of the greenup stage. (2) The jointing stage is the stage at which the internodal tissue in the winter wheat leaf begins to elongate and forms a stem. In this phase, the growth phase of the winter wheat changes from vegetative to reproductive. This stage represents a plateau with a high value of the greenness index. Therefore, we defined the date on which the curvature of the greenness index reaches its second maximum as the estimated onset of the jointing stage. (3) The harvest time is the date on which the wheat is reaped. The greenness index gradually decreases as the leaves wither and die. An abrupt decrease in the greenness index follows due to harvesting. Therefore, the date corresponding to the inflection point at which the curvature reaches its last maximum was defined as the estimated harvesting date.
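These three detection rules reduce to locating extrema of the first derivative and the curvature of a smooth fit to the greenness series. A minimal numerical sketch, using the double logistic form of Eq. (3) and the curvature of Eq. (4) (Python for illustration; the authors fitted the curve in Matlab, and the parameter values here are hypothetical, chosen to mimic a winter-wheat G/R trajectory):

```python
# Double logistic greenness curve (Eq. 3) and curvature (Eq. 4),
# evaluated with finite differences to locate the greenup onset
# (the date of the most rapid increase). Parameters are hypothetical.
import math

def g(t, a=1.0, b=0.6, c=6.6, d=0.1, e=-16.0, f=-0.1):
    # Eq. (3): g(t) = a + b / ([1 + exp(c - d*t)] * [1 + exp(e - f*t)])
    # With these signs the first factor drives greenup (midpoint c/d = 66)
    # and the second drives senescence (midpoint e/f = 160).
    return a + b / ((1 + math.exp(c - d * t)) * (1 + math.exp(e - f * t)))

def curvature(t, h=0.5):
    g1 = (g(t + h) - g(t - h)) / (2 * h)            # g'(t), central difference
    g2 = (g(t + h) - 2 * g(t) + g(t - h)) / h ** 2  # g''(t)
    return abs(g2 / (1 + g1 ** 2) ** 1.5)           # Eq. (4)

# Greenup onset: day of year with the steepest increase of g(t)
days = [t / 10 for t in range(300, 1300)]  # DOY 30.0 ... 129.9
greenup = max(days, key=lambda t: g(t + 0.5) - g(t - 0.5))
print(round(greenup))  # close to DOY 66 for these parameters
print(round(curvature(greenup), 4))
```

The jointing and harvest dates would be found the same way, from the second and last local maxima of `curvature(t)` over the season.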
First, the double logistic function was used to fit the time series of the greenness index because this function can express the growth process continuously, as shown in Eq. (3). The curvature (ρ) of the fitted time series for the greenness index was then calculated (Eq. (4)) as follows:

g(t) = a + b / {[1 + exp(c − dt)] [1 + exp(e − ft)]}   (3)

ρ = | g″(t) / [1 + g′(t)²]^(3/2) |,   (4)

where g(t) is the fitted greenness index, t is the driving variable (day of year, DOY), and a through f are parameters of the fitted function. Parameters a and b are the base level (e.g., the dormant season value) of g(t) and the seasonal amplitude of g(t), respectively. Parameters c and d control the phase and slope for the greenup stage, and parameters e and f control the timing and rate of decrease associated with senescence. g′(t) = dg(t)/dt denotes the first derivative of g(t) with respect to t; g″(t) = d²g(t)/dt² denotes the second derivative of g(t) with respect to t.

2.5. LUE model

The LUE model was proposed by Monteith (Monteith, 1972) and has been extensively used to estimate GPP at various temporal and spatial scales (Sims et al., 2005; Turner et al., 2003). This model assumes that carbon fixation is linearly related to LUE and to the amount of photosynthetic radiation absorbed (APAR), which is the product of photosynthetically active radiation (PAR) and the fraction of PAR absorbed (FPAR; Heinsch et al., 2003). The APAR in the LUE model can be estimated from daily meteorology (Veroustraete et al., 2002) or can be obtained through the use of a vegetation index related to photosynthetic efficiency (Gamon et al., 1997). In this study, GI (Greenness Index) was used as a proxy of FPAR.
GPPᵢ = εmax × (α + β × GIᵢ) × PAR × f(VPD) × f(Tmin),   (5)

where εmax is the maximum light use efficiency (gC MJ⁻¹), with an assumed value of 3.47 gC MJ⁻¹ (Yan et al., 2009); f(Tmin) and f(VPD) are linear threshold functions that range between 0 and 1 and that express the effects of suboptimal temperatures and water availability for photosynthesis, respectively; PAR is the incident PAR (MJ m⁻²); and parameters α and β were estimated from the observed daily GPP. The function f(Tmin) used to describe the influence of Tmin is defined as follows:

f(Tmin) = 0, if Tmin ≤ Tmmin
f(Tmin) = (Tmin − Tmmin) / (Tmmax − Tmmin), if Tmmin < Tmin < Tmmax   (6)
f(Tmin) = 1, if Tmin ≥ Tmmax

where Tmin is the observed daily minimum temperature and Tmmax and Tmmin indicate the thresholds between which the constraint varies linearly. The daily indicator function f(VPD) for water availability has a value of 1 if the daily VPD is lower than the minimum threshold (VPDmin) and a value of 0 if the daily VPD is greater than the maximum threshold (VPDmax), above which VPD forces stomatal closure. In this study, the values of Tmmin, Tmmax, VPDmin, and VPDmax were set according to the MODIS GPP algorithm (Heinsch et al., 2003).

The best-fit model parameters were estimated in Matlab using the Levenberg–Marquardt method. The overall accuracy of the fitted models was evaluated in terms of the fitting statistics (R², RMSE) calculated for the observed and modeled data (Janssen and Heuberger, 1995).

3. Results

3.1. Qualitative patterns

A substantial number of images were collected during the 2011 and 2012 winter wheat growing seasons. Distinct changes in the wheat canopy can be observed by comparing the images obtained at different times. For example, the seasonal change during the growing season in 2011 is shown in Fig. 1. During the winter and early spring, no green leaves emerged, and the images primarily showed the color of the soil.
Thus, it could be inferred that the winter wheat was in a dormant stage. The onset of the greenup stage was marked by a change in image coloration. Although yellow was the principal color, green bands began to appear as the leaves emerged; this pattern was clearly visible by day 66 (Fig. 1A). This finding suggested that the winter wheat was in the greenup stage. The images obtained on day 105 (Fig. 1B) showed that the green color had replaced the yellow color. The green coloration continued to increase with the development of the canopy and reached a peak on day 130 (Fig. 1C). Yellow then became increasingly evident and became the principal color on day 163 (Fig. 1D).

3.2. Quantitative patterns

The following analysis yielded a quantitative description of the canopy. First, the daily greenness indices were calculated using the functions specified by Eqs. (1) and (2). The color channels (R, G, B), brightness index, and greenness index showed substantial diurnal variation, and these patterns of diurnal variation were nearly symmetrical (Fig. 2). The daily minimum values for the greenness indices between 10:00 and 15:00 h local time were adopted as the daily greenness index because the minimum value was observed close to noon, when the solar zenith angle is at its minimum. The time series for the color channels (R, G, B) did not show any obvious seasonal trends (Fig. 3A,B), suggesting that the greenness value alone fails to capture the development of the canopy.

Fig. 1. Sample webcam images of winter wheat. (A) day 66, (B) day 105, (C) day 130, (D) day 163 in 2011. ROI is the region of interest.

The canopy status can be accurately described through the use of the greenness index that can best predict canopy development. The correlations between the ancillary data and the greenness indices are shown in Table 2. There was a positive correlation between cover estimation and GPP. The correlation of G/R with the ancillary data was slightly stronger than that of G%. Therefore, the results are further discussed only in terms of G/R.
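The index computation of Eqs. (1)–(2) and the daily-value rule just described (take the minimum of the 10:00–15:00 samples) can be sketched as follows. Python is used for illustration (the authors' processing was done in Matlab), and all DN values and timestamps below are invented:

```python
# G/R and G% from mean ROI digital numbers (Eqs. 1-2), then reduction of
# half-hourly samples to one daily value (the 10:00-15:00 minimum).
from datetime import datetime

def greenness_indices(pixels):
    """pixels: iterable of (red, green, blue) 8-bit digital numbers."""
    pixels = list(pixels)
    n = len(pixels)
    red = sum(p[0] for p in pixels) / n    # mean red DN over the ROI
    green = sum(p[1] for p in pixels) / n  # mean green DN over the ROI
    blue = sum(p[2] for p in pixels) / n   # mean blue DN over the ROI
    return green / (red + green + blue), green / red  # (G%, G/R)

# One hypothetical day of half-hourly images, each a tiny uniform "ROI"
images = [
    (datetime(2011, 3, 7, 9, 30), [(104, 120, 62)] * 4),
    (datetime(2011, 3, 7, 11, 0), [(100, 112, 60)] * 4),
    (datetime(2011, 3, 7, 12, 30), [(100, 108, 60)] * 4),  # near solar noon
    (datetime(2011, 3, 7, 14, 30), [(101, 113, 61)] * 4),
    (datetime(2011, 3, 7, 16, 30), [(105, 124, 64)] * 4),
]
midday_gr = [greenness_indices(px)[1] for ts, px in images if 10 <= ts.hour < 15]
daily_gr = min(midday_gr)  # the daily G/R value
print(round(daily_gr, 2))  # 1.08
```

Only the three 10:00–15:00 samples enter the daily value; the early-morning and late-afternoon images are discarded, mirroring the selection rule above.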
In 2011, G/R (Fig. 3C) began increasing gradually after DOY 60 in conjunction with an increase in temperature and reached its maximum at approximately DOY 110. Over the following weeks (at or near DOY 150), G/R showed a marked decline due to the maturation of the winter wheat. This seasonal variation is in accordance with the development of winter wheat. However, a slight decline in G/R was observed before DOY 60 in 2011 (Fig. 3C). Most likely, this decrease resulted from an increase in soil moisture, because the reflectance of G decreased more markedly than that of R, resulting in decreasing brightness values. In 2012, G/R (Fig. 3D) began to increase gradually after DOY 70 with increasing temperatures and then showed the same trend as in 2011, i.e., reached its minimum value on approximately DOY 160.

The G/R time series for the growing season (1 March to 19 June) was fitted using the curve shown in Eq. (3) (R² = 0.90), and the maximum linear curvature was calculated as shown in Eq. (4). Based on this analysis, the onset dates of the 3 key phenological stages were DOY 66, 105, and 160. The same method was used to obtain the key phenological stages in 2012, as shown in Table 3. The interannual variance of the stages in which greenness reappeared was found to be large. The date at which greenness returned was affected by the temperature, soil moisture, radiation, and sowing dates. The mean difference between the greenness-based phenological dates and the field data was 3.3 days in 2011 and 4 days in 2012. These results further confirmed that the G/R index can be used to obtain relatively accurate phenological stages during winter wheat development.

3.3. Performance of the LUE model

Three LUE models based on different variables were used to model the daily variation in GPP for 2011. The 3 models differed in terms of the impact of different environmental factors on FPAR. Model 1
Model 1 5, (C) day 130, (D) day 163 in 2011. ROI is the region of interest. image of Fig.�1 Fig. 2. Diurnal pattern of RGB brightness levels. G/R and G% for winter wheat at different times of the year. 73L. Zhou et al. / Ecological Informatics 18 (2013) 69–78 assumed that temperature and VPD jointly affected FPAR, model 2 as- sumed that only temperature affected FPAR, and model 3 did not in- clude the limited effects of environmental factors on FPAR. To identify the best model, we computed the parameters for the 3 LUE models based on the 2011 data. To compare the 3 GPP models (GPPmod) with the GPP measurements (GPPobs) for 2011, we then computed the correlation coefficients between the model values and the mea- sured values, as well as the associated RMSE (Table 4). The strong correlations found between the GPPmod values and GPPobs showed that color indices (i.e., GI) can be used effectively in combination with meteorological data to describe GPP (Table 4). The comparison of these correlations (Table 4) shows that model 1 is clearly superior to the other 2 models, particularly in view of the strong agreement between model 1 and the measurements (Fig. 4). Fig. 3. Time series of brightness numbers (A, B), rat To verify the feasibility of different LUE models, we modeled the GPP for the growing season in 2012 based on site-specific data (temperature, VPD, PAR, and greenness indices). The values of param- eters α and β for all 3 models are shown in Table 4. The seasonal dynamics of the GPP (GPPmod) predicted by the LUE model was con- sistent with the observed GPP (GPPobs) values, as shown in a Taylor diagram (Fig. 5). In this diagram, the effectiveness of the 3 models considered in this study is represented by the distance between point A and points B, C, and D. A shorter distance between points indicates a more accurate model. For the 2012 growing season, the diagram clearly shows that point B (model 1) is closest to point A. 
A simple linear regression model also showed good agreement between point A (GPPobs) and point B (GPP obtained from model 1) for the 2012 growing season (Fig. 6, upper right panel). Furthermore, point C (model 2) is also close to point B. These comparisons confirm that model 1 is the most reasonable of the 3 models evaluated.

Table 2
Linear correlation coefficients between ancillary data and greenness indices. GPP was daily averaged from 1 March to 31 May in 2011.

       Cover estimation   GPP
G/R    0.957a             0.76a
G%     0.891a             0.74a

Cover estimation is the percentage of green leaf area in the picture, calculated with the software "SamplePoint" (Booth et al., 2006).
a Significant correlation (P < 0.001).

Table 4
Summary of statistics (determination coefficient, R2; root mean square error, RMSE) for identifying the best-fitting model. The formula of model 1 is Eq. (5); model 2 is Eq. (5) without f(VPD); model 3 is Eq. (5) without f(VPD) and f(T).

          VPD   T     PAR   GI    R2     RMSE   α          β
Model 1   √     √     √     √     0.86   2.5    −0.05748   0.077457
Model 2         √     √     √     0.85   2.7    −0.05770   0.075963
Model 3               √     √     0.77   3.2    −0.04255   0.054904

The GPPobs data tended to show an initial increase with a subsequent decrease; the same trend was evident in the GPPmod data (Fig. 6). Furthermore, we calculated the site-scale GPP using the Vegetation Photosynthesis Model (VPM), which has been used at the Yucheng site (a detailed description of the VPM model and its parameters is given in Yan et al., 2009). This calculation showed that the GPP values from LUE model 1 and from the VPM model differed significantly, especially considering the resolution of the data (Fig. 6).
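The Taylor-diagram comparison above rests on three standard quantities per model: the standard deviation of the modeled series, its correlation with the observations, and the centred RMS difference, which equals the distance from the observation point in the diagram. A minimal computation of these quantities (standard definitions, not code from the paper):

```python
import numpy as np

def taylor_stats(obs, mod):
    """Standard deviation, correlation, and centred RMS difference: the three
    quantities a Taylor diagram encodes. The distance between a model point
    and the observation point equals the centred RMS difference."""
    obs = np.asarray(obs, float)
    mod = np.asarray(mod, float)
    s_o, s_m = obs.std(), mod.std()
    r = np.corrcoef(obs, mod)[0, 1]
    crmsd = np.sqrt(s_o ** 2 + s_m ** 2 - 2.0 * s_o * s_m * r)
    return s_m, r, crmsd
```

Ranking models by crmsd reproduces the "shorter distance to point A is better" reading of Fig. 5.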
The GPP values calculated from the LUE model were represented on a daily scale, whereas the GPP values calculated from the VPM model were represented on an 8-day scale; hence, only 14 values were available from the VPM model for the period from 1 March through 10 June. A statistical analysis showed that the GPP values derived from the LUE model were closer to the GPPobs values than those derived from the VPM model (Table 5, Fig. 6).

4. Discussion

4.1. Effectiveness of digital camera tracking of canopy phenology

Our results suggest that digital camera imagery is well suited for monitoring the development and phenological stages of winter wheat, as reported by many previous studies (Adamsen et al., 1999; Jensen et al., 2007; Lukina et al., 1999). The G/R index derived from the imagery analysis provided reliable information on the daily canopy state. Additionally, it facilitated the continuous and unattended monitoring of the timing and rate of canopy development (Migliavacca et al., 2011), demonstrating the importance of digital camera imagery for canopy tracking. In general, determining the phenological stages of a canopy using "near-surface" remote sensing is difficult because of the high heterogeneity of the canopy. However, the phenological stages obtained using a digital camera in this study were similar to those observed directly (Table 3); this similarity can be explained in 2 ways. First, the study area is located on the alluvial plain of the Yellow River. The terrain is flat, and hydrothermal conditions are consistent throughout the area, resulting in a relatively uniform agro-ecosystem canopy. Second, the digital camera used in this study provided appropriate technical support. The 3 bands of the digital camera (the red, green, and blue channels) had peak sensitivities at wavelengths of 620, 540, and 470 nm, corresponding to the absorptive/reflective/absorptive bands of the vegetation.
Table 3
Phenological dates (day of year) derived from digital camera images (Cam) and dates from field observations (Obs).

       Greenup stage (DOY)     Jointing stage (DOY)    Harvest time (DOY)
       Cam   Obs   Diff        Cam   Obs   Diff        Cam   Obs   Diff
2011   66    62    4           105   102   3           160   163   3
2012   80    73a   7           104   99    5           159   159   0

a The field date was missed and was determined as the first day of a 5-day run of consecutive mean temperatures above 3 °C.

The relative values of reflectance at these wavelengths are normally R540 > R620 and R540 > R470 (Serrano et al., 2000). Soil-forming minerals, water content, organic matter, and texture are known to affect the spectral features of soil in a highly complex manner. The relative values of reflectance found in our study were R620 > R540 > R470 on the soil surface (Stoner and Baumgardner, 1981). Hence, G/R was calculated on the basis of this difference, which describes the contrasting absorptive/reflective characteristics of the different bands between the vegetation canopy and the soil surface. The gap between the vegetation and the soil is thus emphasized by this band transformation, providing an effective source of vegetation information. The use of a digital camera to monitor vegetation phenology has as its primary focus the real-time monitoring of canopy development, namely, the phenological phase (Mirjana and Vulić, 2005), including the various stages of canopy development, such as the jointing stage. G/R was found to be an effective index for tracking the trajectory of the growing season, as previously reported (Adamsen et al., 1999). The seasonal variations in G/R were most likely caused by 2 types of phenological parameters, namely, the leaf area and the physiological pigments (Saitoh et al., 2012).
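The greenness indices themselves are simple functions of the camera's channel brightness values. A minimal sketch of computing G/R and G% from an RGB image, assuming G% is the conventional green fraction G/(R+G+B) (the excerpt names but does not define it) and averaging the channels over the region of interest:

```python
import numpy as np

def greenness_indices(rgb):
    """Mean G/R and green fraction G% from an RGB image array (H x W x 3).
    G% = G / (R + G + B) is the conventional definition (assumed here);
    channel brightness values are averaged over the region of interest."""
    rgb = np.asarray(rgb, dtype=float)
    r = rgb[..., 0].mean()
    g = rgb[..., 1].mean()
    b = rgb[..., 2].mean()
    return g / r, g / (r + g + b)
```

Averaging the digital numbers before forming the ratios keeps the indices robust to pixel-level noise within the ROI.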
During the growth of winter wheat, the leaf area increases rapidly after winter recovery, reaches a maximum at or near the heading stage (at approximately the end of April in the study area), and then declines gradually as the lower leaves begin to die (Chen et al., 2010). The content of physiological pigments is also known to change regularly. These seasonal variations in leaf area and physiological pigments translate into clear dynamic changes in photosynthetic capacity (Muraoka and Koizumi, 2005). For this reason, daily NEE and PAR data were collected using the eddy covariance technique, and the Michaelis–Menten equation was used (Hollinger et al., 2004) to obtain a curve for the theoretical light-saturated rate of canopy photosynthesis (Amax). A positive relationship was found between G/R and Amax (Fig. 7), suggesting that G/R can reflect the phenological activity of the plants at the physiological level.

Fig. 4. Seasonal variation in daily GPP values calculated using model 1 and measured in the field. The hollow circles indicate the modeled values, and the line indicates the field-measured values.

Fig. 5. Pattern statistics between field-measured GPP and GPP estimated using the different models. Letter A stands for field-observed GPP; letters B, C, and D indicate modeled GPP derived from models 1, 2, and 3, respectively.

Table 5
Summary statistics for GPPobs and GPPmod on the basis of 13 samples (the data for 17 May are excluded because the LUE model does not include results for this date).

Statistical variable    Obs GPP   LUE model   VPM model
Mean                    9.05      6.98        6.23
Standard error          2.12      1.85        1.30
Standard deviation      7.66      6.67        4.70
Minimum                 0.90      0.64        0.36
Maximum                 20.66     19.27       14.31

4.2. The usefulness of greenness indices for improving GPP modeling

The daily GPP could be estimated with the LUE model based on a combination of the greenness indices and the meteorological data (Table 4, Fig. 6).
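The Amax inference mentioned above can be sketched as a Michaelis–Menten light-response fit of NEE against PAR. The rectangular-hyperbola parameterization below (with half-saturation constant Km and dark respiration Rd) and the starting values are assumptions; the paper cites Hollinger et al. (2004) for the exact form it used.

```python
import numpy as np
from scipy.optimize import curve_fit

def mm_light_response(par, amax, km, rd):
    """Michaelis-Menten light response: NEE = -Amax*PAR/(Km + PAR) + Rd,
    so photosynthetic uptake saturates at Amax and Rd is respiration."""
    return -(amax * par) / (km + par) + rd

def fit_amax(par, nee):
    """Fit the light-response curve to NEE/PAR pairs and return Amax.
    Starting values are illustrative guesses."""
    popt, _ = curve_fit(mm_light_response, par, nee,
                        p0=[20.0, 500.0, 3.0], maxfev=10000)
    return popt[0]
```

Fitting this curve within a moving window over the season yields the Amax time series that Fig. 7 compares against G/R.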
In 2011, the modeled GPP values were very similar to the observed GPP values after the incorporation of different numbers of constraint factors, as shown by the comparison among the 3 types of LUE models (Table 4). Because model 3 was derived only on the basis of radiation and G/R, its results were closer to the potential photosynthetic activity than were those of the other models, and its relationship (R2) with the observed GPP was relatively weak. When a temperature constraint was included (model 2), the relationship (R2) with the observed GPP improved substantially. However, the inclusion of the VPD constraint (model 1) improved the relationship (R2) with the observed GPP only slightly. This result indicates that temperature had a profound influence on the daily GPP (Table 4). Generally, the principal environmental factors for plant species are temperature, day length, and water availability (Soudani et al., 2012). Soil nutrient content does not represent an important constraint for plants grown under human management (e.g., with irrigation and/or fertilizer). Therefore, temperature is an important environmental factor at this well-managed cropland site; indeed, it is the only important environmental factor controlling daily GPP (Chen and Xu, 2012; Chmielewski and Rotzer, 2002).

Fig. 6. Comparison between field-observed GPP and modeled GPP in 2012. The upper right corner shows the fitted line between LUE model-based GPP and observed GPP. The data between 10 May and 24 May could not be collected for the LUE model. Hollow circles indicate field-observed daily GPP values, solid circles indicate LUE model-based GPP values (model 1), and upper triangles indicate VPM model-based GPP values. The VPM model was run using site-specific data on temperature, PAR, and vegetation indices in 2012.
For 2012, a high correlation was found between the measured data and the simulated data from the 3 models, indicating that the parameters derived from the 2011 data were highly suitable for describing the GPP variation in 2012 (Fig. 5). Although all 3 models showed high correlations, model 1 yielded a more accurate determination of the GPP values for winter wheat in 2012. The simulation results obtained with this model are highly accurate and are superior to those obtained from a simulation using the VPM model and 8-day MODIS data (Yan et al., 2009). The VPM model is based on remote sensing data and was developed to estimate the GPP of terrestrial ecosystems (Xiao et al., 2004); it has been successfully used to estimate GPP in various ecosystems, including forest, grassland (Li et al., 2007), and managed cropland (Yan et al., 2009). In this study, the GPP values based on the LUE model agreed more closely with the ecosystem-scale observed values than did the GPP values based on the VPM model. The principal reason for this difference could be that the greenness indices derived from digital imagery not only have higher spatial and temporal resolution and are less affected by environmental conditions than MODIS data (Richardson et al., 2007; Studer et al., 2007) but also reflect the photosynthetic capacity of the canopy (Graham et al., 2006; Liu and Pattey, 2010; Pekin and Macfarlane, 2009). Additionally, the effects of a water deficit were represented by VPD in the LUE model, whereas they were represented by the land surface water index (LSWI) derived from MODIS reflectance in the VPM model. Although the LSWI is sensitive to the total amount of liquid water in the crop (Chandrasekar et al., 2010), it is easily affected by exogenous factors. The greenness indices were found to be extremely valuable for developing and testing the LUE model for monitoring daily GPP.
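For reference, the LSWI contrasted with VPD above has a standard definition as a normalized difference of near-infrared and shortwave-infrared reflectance (Xiao et al., 2004); a one-line sketch:

```python
def lswi(nir, swir):
    """Land Surface Water Index (Xiao et al., 2004):
    LSWI = (rho_NIR - rho_SWIR) / (rho_NIR + rho_SWIR).
    Wetter canopies absorb more SWIR, raising the index."""
    return (nir - swir) / (nir + swir)
```

Because both reflectances come from satellite observations, the index inherits atmospheric and view-angle noise, which is the "exogenous factors" caveat raised in the text.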
This approach might be useful for developing a new method to interpolate the GPP data regardless of data loss and might furnish a method for estimating the temporal and spatial distributions of photosynthetic productivity (Saitoh et al., 2012).

4.3. Uncertainty and limitations

Previous studies have suggested the use of digital imagery for monitoring seasonal variations in the structure of the plant canopy (Crimmins and Crimmins, 2008; Kawashima and Nakatani, 1998; Liu and Pattey, 2010; Richardson et al., 2007; Sakamoto et al., 2011). In this study, we highlighted the potential use of digital imagery for the development and parameterization of LUE models for winter wheat. Nevertheless, this method, which incorporates RGB color imagery data to obtain vegetation status, is not perfect, owing to several limitations and uncertainties. These limitations need to be addressed to improve the reliability of the time-series monitoring of vegetation status (Bradley et al., 2010; Ide and Oguma, 2010) and the accuracy of modeling daily GPP.

Fig. 7. Seasonal variation of the greenness index (G/R) and Amax (inferred from eddy covariance measurements of surface–atmosphere CO2 exchange). Hollow circles indicate the greenness index; triangles indicate Amax. Lines are best-fit interval smoothing splines (smoothing parameter = 11).

Digital camera imagery has 3 principal limitations. First, the quality of digital camera images can vary. Sonnentag et al. (2011) indicated that different commercial digital cameras installed to observe the same ecosystem provide different information. Moreover, Ide and Oguma (2010) noted a year-to-year drift in color balance and suggested that the color balance varies among cameras produced by different manufacturers. Even if the white balance is fixed in a camera, the inevitable noise due to exposure and weather conditions results in varied greenness indices.
Therefore, calibration protocols and standards need to be developed to ensure uniformity in long-term datasets from multiple sites (Ide and Oguma, 2010; Migliavacca et al., 2011). The second limitation involves the processing of greenness indices. First, different researchers use their own methods to obtain relatively accurate status information about the daily vegetation canopy. For example, Richardson et al. (2009) used mid-day images from one camera to calculate the greenness index, Kurc and Benton (2010) used solar-noon images from 3 identical cameras to calculate the average greenness index, and Sonnentag et al. (2011) used the daily mean values from images obtained between 8:00 and 11:00 h local time. In this study, the digital images were obtained every half hour from 8:30 to 17:00 h. The greenness indices are known to change periodically due to seasonal changes in incident light angle and intensity (Fig. 2). Although a minimum-value composite, which minimizes the effect of viewing geometry on the canopy between 10:00 and 15:00 h, was used in our study to determine the variations in the greenness index data, other approaches need to be explored to determine the best greenness measurement method. Second, outliers produced by unfavorable meteorological conditions must be excluded from the data used to obtain a time series of greenness indices. Different filtering methods can be used to overcome this problem; however, they might introduce new uncertainties. Moreover, in the process of retrieving phenological metrics from the time series of greenness indices, different methods may yield quite different results (White et al., 2009), although we adopted a relatively stable and robust processing method (the double logistic model; Zhu et al., 2012).
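The minimum-value compositing step described above reduces each day's half-hourly samples to a single value. A minimal sketch, assuming the composite simply takes the per-day minimum G/R within the 10:00 to 15:00 window (the excerpt states the window but not the exact reduction):

```python
import numpy as np

def daily_min_composite(hours, doys, gr, start=10.0, end=15.0):
    """Minimum-value composite: for each DOY, keep the minimum G/R among
    the half-hourly samples taken between 10:00 and 15:00 local time.
    Samples outside the window are discarded entirely."""
    hours = np.asarray(hours, float)
    doys = np.asarray(doys)
    gr = np.asarray(gr, float)
    window = (hours >= start) & (hours <= end)
    return {int(d): float(gr[window & (doys == d)].min())
            for d in np.unique(doys[window])}
```

Restricting to the mid-day window removes low-sun-angle samples, and taking the minimum damps the residual diurnal brightness variation seen in Fig. 2.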
The third limitation, which affects the modeling of daily GPP, is that the parameter values, such as VPDmin, VPDmax, Tmmin, and Tmmax, were set equal to the values used with MODIS data to calculate daily gross primary productivity, as given in the User's Guide for the GPP and NPP (MOD17A2/A3) Products. In the future, the accuracy of the modeled data could be increased by introducing mechanistic experiments. These limitations should be addressed to improve the time-series monitoring of long-term phenological data through the use of digital camera imagery and to identify a feasible method of interpolating high temporal-resolution GPP data. In the future, these advances might facilitate the analysis of the interannual and spatial variability of GPP.

5. Summary

This study showed that high-resolution digital camera images provide reliable information on canopy status. An automated analysis of phenological events was performed by determining the curvature of the fitted greenness-index curve to obtain relatively accurate phenological stages. Digital camera imagery expands the scope of phenological observation methods for use in the field and might furnish an effective method of validating remote sensing phenology. The LUE model developed in this study, which combined information derived from color indices and meteorological data, successfully modeled daily GPP and might facilitate the monitoring of spatial and temporal variations in carbon uptake. In conclusion, digital camera imagery can not only provide important information at the site level to improve the understanding of the temporal processes of vegetation canopy dynamics; it can also represent a promising tool for validating different phenological models and hypotheses. However, many limitations still affect image processing, and further efforts are required to address these limitations and establish an optimum method for quantitatively monitoring seasonal crop growth.
Acknowledgments

This study was supported by the National Natural Science Foundation of China (Grant No. 41071251), the "Strategic Priority Research Program — Climate Change: Carbon Budget and Relevant Issues" of the Chinese Academy of Sciences (Grant No. XDA05050600), and the Environmental Protection Public Welfare Industry Targeted Research Fund (Grant No. gyh5031103). We thank the anonymous reviewers for their comments and criticisms, which have helped to improve the paper significantly.

References

Adamsen, F.J., Pinter, P.J., Barnes, E.M., LaMorte, R.L., Wall, G.W., Leavitt, S.W., Kimball, B.A., 1999. Measuring wheat senescence with a digital camera. Crop Science 39 (3), 719–724.
Ahrends, H.E., Brugger, R., Stockli, R., Schenk, J., Michna, P., Jeanneret, F., Wanner, H., Eugster, W., 2008. Quantitative phenological observations of a mixed beech forest in northern Switzerland with digital photography. Journal of Geophysical Research — Biogeosciences 113 (G04004), 1–11.
Ahrends, H.E., Etzold, S., Kutsch, W.L., Stoeckli, R., Bruegger, R., Jeanneret, F., Wanner, H., Buchmann, N., Eugster, W., 2009. Tree phenology and carbon dioxide fluxes: use of digital photography for process-based interpretation at the ecosystem scale. Climate Research 39 (3), 261–274.
Arora, V.K., Boer, G.J., 2005. A parameterization of leaf phenology for the terrestrial ecosystem component of climate models. Global Change Biology 11 (1), 39–59.
Baldocchi, D., Falge, E., Gu, L.H., Olson, R., Hollinger, D., Running, S., Anthoni, P., Bernhofer, C., Davis, K., Evans, R., Fuentes, J., Goldstein, A., Katul, G., Law, B., Lee, X.H., Malhi, Y., Meyers, T., Munger, W., Oechel, W., Paw U, K.T., Pilegaard, K., Schmid, H.P., Valentini, R., Verma, S., Vesala, T., Wilson, K., Wofsy, S., 2001. FLUXNET: A new tool to study the temporal and spatial variability of ecosystem-scale carbon dioxide, water vapor, and energy flux densities.
Bulletin of the American Meteorological Society 82 (11), 2415–2434.
Booth, D.T., Cox, S.E., Berryman, R.D., 2006. Point sampling digital imagery with "SamplePoint". Environmental Monitoring and Assessment 123 (1–3), 97–108.
Bowers, J.E., Dimmitt, M.A., 1994. Flowering phenology of six woody plants in the northern Sonoran Desert. Bulletin of the Torrey Botanical Society 121 (3), 215–229.
Bradley, E., Roberts, D., Still, C., 2010. Design of an image analysis website for phenological and meteorological monitoring. Environmental Modelling and Software 25 (1), 107–116.
Chandrasekar, K., Sesha Sai, M.V.R., Roy, P.S., Dwevedi, R.S., 2010. Land Surface Water Index (LSWI) response to rainfall and NDVI using the MODIS Vegetation Index product. International Journal of Remote Sensing 31 (15), 3987–4005.
Chen, X., Xu, L., 2012. Temperature controls on the spatial pattern of tree phenology in China's temperate zone. Agricultural and Forest Meteorology 154–155, 195–202.
Chen, X., Hu, B., Yu, R., 2005. Spatial and temporal variation of phenological growing season and climate change impacts in temperate eastern China. Global Change Biology 11 (7), 1118–1130.
Chen, S.Y., Zhang, X.Y., Sun, H.Y., Ren, T.S., Wang, Y.M., 2010. Effects of winter wheat row spacing on evapotranspiration, grain yield and water use efficiency. Agricultural Water Management 97 (8), 1126–1132.
Chiang, J.-M., Brown, K.J., 2007. Improving the budburst phenology subroutine in the forest carbon model PnET. Ecological Modelling 205 (3), 515–526.
Chmielewski, F.M., Rotzer, T., 2002. Annual and spatial variability of the beginning of growing season in Europe in relation to air temperature changes. Climate Research 19 (3), 257–264.
Crimmins, M.A., Crimmins, T.M., 2008. Monitoring plant phenology using digital repeat photography. Environmental Management 41 (6), 949–958.
Dragoni, D., Schmid, H.P., Wayson, C.A., Potter, H., Grimmond, C.S.B., Randolph, J.C., 2011.
Evidence of increased net ecosystem productivity associated with a longer vegetated season in a deciduous forest in south-central Indiana, USA. Global Change Biology 17 (2), 886–897.
Fisher, J., Mustard, J., Vadeboncoeur, M., 2006. Green leaf phenology at Landsat resolution: Scaling from the field to the satellite. Remote Sensing of Environment 100 (2), 265–279.
Frank, M.C., 2003. Phenology and agriculture. In: Schwartz, M.D. (Ed.), Phenology: An Integrative Environmental Science. Kluwer Academic Publishers, MA, pp. 505–522.
Gamon, J.A., Serrano, L., Surfus, J.S., 1997. The photochemical reflectance index: an optical indicator of photosynthetic radiation use efficiency across species, functional types, and nutrient levels. Oecologia 112 (4), 492–501.
Ganguly, S., Friedl, M.A., Tan, B., Zhang, X.Y., Verma, M., 2010. Land surface phenology from MODIS: Characterization of the Collection 5 global land cover dynamics product. Remote Sensing of Environment 114 (8), 1805–1816.
Gitelson, A.A., Peng, Y., Masek, J.G., Rundquist, D.C., Verma, S., Suyker, A., Baker, J.M., Hatfield, J.L., Meyers, T., 2012. Remote estimation of crop gross primary production with Landsat data. Remote Sensing of Environment 121, 404–414.
Graham, E.A., Hamilton, M.P., Mishler, B.D., Rundel, P.W., Hansen, M.H., 2006. Use of a networked digital camera to estimate net CO2 uptake of a desiccation-tolerant moss. International Journal of Plant Sciences 167 (4), 751–758.
Heinsch, F.A., Reeves, M., Votava, P., Kang, S., Milesi, C., Zhao, M., Glassy, J., Jolly, W.M., Loehman, R., Bowker, C.F., Kimball, J.S., Nemani, R.R., Running, S.W., 2003. User's Guide: GPP and NPP (MOD17A2/A3) Products, NASA MODIS Land Algorithm.
Heumann, B.W., Seaquist, J.W., Eklundh, L., Jonsson, P., 2007. AVHRR derived phenological change in the Sahel and Soudan, Africa, 1982–2005. Remote Sensing of Environment 108 (4), 385–392.
Hollinger, D.Y., Aber, J., Dail, B., Davidson, E.A., Goltz, S.M., Hughes, H., Leclerc, M.Y., Lee, J.T., Richardson, A.D., Rodrigues, C., Scott, N.A., Achuatavarier, D., Walsh, J., 2004. Spatial and temporal variability in forest–atmosphere CO2 exchange. Global Change Biology 10 (10), 1689–1706.
Hufkens, K., Friedl, M., Sonnentag, O., Braswell, B.H., Milliman, T., Richardson, A.D., 2012. Linking near-surface and satellite remote sensing measurements of deciduous broadleaf forest phenology. Remote Sensing of Environment 117, 307–321.
Ide, R., Oguma, H., 2010. Use of digital cameras for phenological observations. Ecological Informatics 5 (5), 339–347.
Janssen, P.H.M., Heuberger, P.S.C., 1995. Calibration of process-oriented models. Ecological Modelling 83 (1–2), 55–66.
Jensen, T., Apan, A., Young, F., Zeller, L., 2007. Detecting the attributes of a wheat crop using digital imagery acquired from a low-altitude platform. Computers and Electronics in Agriculture 59 (1–2), 66–77.
Kawashima, S., Nakatani, M., 1998. An algorithm for estimating chlorophyll content in leaves using a video camera. Annals of Botany 81 (1), 49–54.
Kendy, E., Gerard-Marchant, P., Walter, M.T., Zhang, Y.Q., Liu, C.M., Steenhuis, T.S., 2003. A soil–water-balance approach to quantify groundwater recharge from irrigated cropland in the North China Plain. Hydrological Processes 17 (10), 2011–2031.
Kindermann, J., Wurth, G., Kohlmaier, G.H., Badeck, F.W., 1996. Interannual variation of carbon exchange fluxes in terrestrial ecosystems. Global Biogeochemical Cycles 10 (4), 737–755.
Kurc, S.A., Benton, L.M., 2010. Digital image-derived greenness links deep soil moisture to carbon uptake in a creosotebush-dominated shrubland. Journal of Arid Environments 74 (5), 585–594.
Li, J., Yu, Q., Sun, X.M., Tong, X.J., Ren, C.Y., Wang, J., Liu, E.M., Zhu, Z.L., Yu, G.R., 2006. Carbon dioxide exchange and the mechanism of environmental control in a farmland ecosystem in North China Plain.
Science in China Series D: Earth Sciences 49, 226–240.
Li, Z.Q., Yu, G.R., Xiao, X.M., Li, Y.N., Zhao, X.Q., Ren, C.Y., Zhang, L.M., Fu, Y.L., 2007. Modeling gross primary production of alpine ecosystems in the Tibetan Plateau using MODIS images and climate data. Remote Sensing of Environment 107 (3), 510–519.
Lieth, H.H., 1976. Contributions to phenology seasonality research. International Journal of Biometeorology 20 (3), 197–199.
Linderholm, H.W., 2006. Growing season changes in the last century. Agricultural and Forest Meteorology 137 (1–2), 1–14.
Liu, J., Pattey, E., 2010. Retrieval of leaf area index from top-of-canopy digital photography over agricultural crops. Agricultural and Forest Meteorology 150 (11), 1485–1490.
Liu, C.M., Yu, J.J., Kendy, E., 2001. Groundwater exploitation and its impact on the environment in the North China Plain. Water International 26 (2), 265–272.
Lukina, E.V., Stone, M.L., Rann, W.R., 1999. Estimating vegetation coverage in wheat using digital images. Journal of Plant Nutrition 22 (2), 341–350.
Menzel, A., Fabian, P., 1999. Growing season extended in Europe. Nature 397 (6721), 659.
Migliavacca, M., Galvagno, M., Cremonese, E., Rossini, M., Meroni, M., 2011. Using digital repeat photography and eddy covariance data to model grassland phenology and photosynthetic CO2 uptake. Agricultural and Forest Meteorology 151, 1325–1337.
Mirjana, R., Vulić, T., 2005. Importance of phenological observations and predictions in agriculture. Journal of Agricultural Sciences 50 (2), 217–225.
Monteith, J.L., 1972. Solar radiation and productivity in tropical ecosystems. Journal of Applied Ecology 9 (3), 747–766.
Moulin, S., Kergoat, L., Viovy, N., Dedieu, G., 1997. Global-scale assessment of vegetation phenology using NOAA/AVHRR satellite measurements. Journal of Climate 10 (6), 1154–1170.
Muraoka, H., Koizumi, H., 2005.
Photosynthetic and structural characteristics of canopy and shrub trees in a cool-temperate deciduous broadleaved forest: Implication to the ecosystem carbon gain. Agricultural and Forest Meteorology 134 (1–4), 39–59.
Pekin, B., Macfarlane, C., 2009. Measurement of crown cover and leaf area index using digital cover photography and its application to remote sensing. Remote Sensing 1 (4), 1298–1320.
Qin, Z., Yu, Q., Xu, S.H., Hu, B.M., Sun, X.M., Liu, E.M., Wang, J.S., Yu, G.R., Zhu, Z.L., 2005. Water, heat fluxes and water use efficiency measurement and modeling above a farmland in the North China Plain. Science in China Series D: Earth Sciences 48, 207–217.
Richardson, A.D., Jenkins, J.P., Braswell, B.H., Hollinger, D.Y., Ollinger, S.V., Smith, M.L., 2007. Use of digital webcam images to track spring green-up in a deciduous broadleaf forest. Oecologia 152 (2), 323–334.
Richardson, A.D., Braswell, B.H., Hollinger, D.Y., Jenkins, J.P., Ollinger, S.V., 2009. Near-surface remote sensing of spatial and temporal variation in canopy phenology. Ecological Applications 19 (6), 1417–1428.
Richardson, A.D., Anderson, R.S., Arain, M.A., Barr, A.G., Bohrer, G., Chen, G., Chen, J.M., Ciais, P., Davis, K.J., Desai, A.R., Dietze, M.C., Dragoni, D., Garrity, S.R., Gough, C.M., Grant, R., Hollinger, D.Y., Margolis, H.A., McCaughey, H., Migliavacca, M., Monson, R.K., Munger, J.W., Poulter, B., Raczka, B.M., Ricciuto, D.M., Sahoo, A.K., Schaefer, K., Tian, H., Vargas, R., Verbeeck, H., Xiao, J., Xue, Y., 2012. Terrestrial biosphere models need better representation of vegetation phenology: results from the North American Carbon Program Site Synthesis. Global Change Biology 18 (2), 566–584.
Saitoh, T.M., Nagai, S., Saigusa, N., Kobayashi, H., Suzuki, R., Nasahara, K.N., Muraoka, H., 2012. Assessing the use of camera-based indices for characterizing canopy phenology in relation to gross primary production in a deciduous broad-leaved and an evergreen coniferous forest in Japan.
Ecological Informatics 11, 45–54.
Sakamoto, T., Shibayama, M., Kimura, A., Takada, E., 2011. Assessment of digital camera-derived vegetation indices in quantitative monitoring of seasonal rice growth. ISPRS Journal of Photogrammetry and Remote Sensing 66 (6), 872–882.
Schwartz, M.D., Reed, B.C., 1999. Surface phenology and satellite sensor-derived onset of greenness: an initial comparison. International Journal of Remote Sensing 20 (17), 3451–3457.
Schwartz, M.D., Reed, B.C., White, M.A., 2002. Assessing satellite-derived start-of-season measures in the conterminous USA. International Journal of Climatology 22 (14), 1793–1805.
Serrano, L., Filella, I., Penuelas, J., 2000. Remote sensing of biomass and yield of winter wheat under different nitrogen supplies. Crop Science 40 (3), 723–731.
http://refhub.elsevier.com/S1574-9541(13)00045-9/rf0215 http://refhub.elsevier.com/S1574-9541(13)00045-9/rf0215 http://refhub.elsevier.com/S1574-9541(13)00045-9/rf0230 http://refhub.elsevier.com/S1574-9541(13)00045-9/rf0230 http://refhub.elsevier.com/S1574-9541(13)00045-9/rf0225 http://refhub.elsevier.com/S1574-9541(13)00045-9/rf0225 http://refhub.elsevier.com/S1574-9541(13)00045-9/rf0225 http://refhub.elsevier.com/S1574-9541(13)00045-9/rf0220 http://refhub.elsevier.com/S1574-9541(13)00045-9/rf0220 http://refhub.elsevier.com/S1574-9541(13)00045-9/rf0220 http://refhub.elsevier.com/S1574-9541(13)00045-9/rf0370 http://refhub.elsevier.com/S1574-9541(13)00045-9/rf0370 http://refhub.elsevier.com/S1574-9541(13)00045-9/rf0370 http://refhub.elsevier.com/S1574-9541(13)00045-9/rf0240 http://refhub.elsevier.com/S1574-9541(13)00045-9/rf0240 http://refhub.elsevier.com/S1574-9541(13)00045-9/rf0240 http://refhub.elsevier.com/S1574-9541(13)00045-9/rf0245 http://refhub.elsevier.com/S1574-9541(13)00045-9/rf0245 http://refhub.elsevier.com/S1574-9541(13)00045-9/rf0245 http://refhub.elsevier.com/S1574-9541(13)00045-9/rf0250 http://refhub.elsevier.com/S1574-9541(13)00045-9/rf0250 http://refhub.elsevier.com/S1574-9541(13)00045-9/rf0250 http://refhub.elsevier.com/S1574-9541(13)00045-9/rf0255 http://refhub.elsevier.com/S1574-9541(13)00045-9/rf0255 78 L. Zhou et al. / Ecological Informatics 18 (2013) 69–78 Sims, D.A., Rahman, A.F., Cordova, V.D., Baldocchi, D.D., Flanagan, L.B., Goldstein, A.H., Hollinger, D.Y., Misson, L., Monson, R.K., Schmid, H.P., Wofsy, S.C., Xu, L.K., 2005. Midday values of gross CO2 flux and light use efficiency during satellite overpasses can be used to directly estimate eight-day mean flux. Agricultural and Forest Meteorology 131 (1), 1–12. Soegaard, H., Jensen, N.O., Boegh, E., Hasager, C.B., Schelde, K., Thomsen, A., 2003. Carbon dioxide exchange over agricultural landscape using eddy correlation and footprint modelling. 
Agricultural and Forest Meteorology 114 (3–4), 153–173. Sonnentag, O., Detto, M., Vargas, R., Ryu, Y., Runkle, B.R.K., Kelly, M., Baldocchi, D.D., 2011. Tracking the structural and functional development of a perennial pepperweed (Lepidium latifolium L.) infestation using a multi-year archive of webcam imagery and eddy covariance measurements. Agricultural and Forest Meteorology 151 (7), 916–926. Soudani, K., Hmimina, G., Delpierre, N., Pontailler, J.Y., Aubinet, M., Bonal, D., Caquet, B., de Grandcourt, A., Burban, B., Flechard, C., Guyon, D., Granier, A., Gross, P., Heinesh, B., Longdoz, B., Loustau, D., Moureaux, C., Ourcival, J.M., Rambal, S., Saint André, L., Dufrêne, E., 2012. Ground-based Network of NDVI measurements for tracking temporal dynamics of canopy structure and vegetation phenology in different biomes. Remote Sensing of Environment 123, 234–245. Stoner, E.R., Baumgardner, M.F., 1981. Characteristic variations in reflectance of surface soils. Soil Science Society of America Journal 45 (6), 1161–1165. Studer, S., Stockli, R., Appenzeller, C., Vidale, P.L., 2007. A comparative study of satellite and ground-based phenology. International Journal of Biometeorology 51 (5), 405–414. Turner, D.P., Ritts, W.D., Cohen, W.B., Gower, S.T., Zhao, M.S., Running, S.W., Wofsy, S.C., Urbanski, S., Dunn, A.L., Munger, J.W., 2003. Scaling Gross Primary Production (GPP) over boreal and deciduous forest landscapes in support of MODIS GPP product validation. Remote Sensing of Environment 88 (3), 256–270. Veroustraete, F., Sabbe, H., Eerens, H., 2002. Estimation of carbon mass fluxes over Europe using the C-Fix model and Euroflux data. Remote Sensing of Environment 83 (3), 376–399. White, M.A., Hoffman, F., Hargrove, W.W., Nemani, R.R., 2005. A global framework for monitoring phenological responses to climate change. Geophysical Research Letters 32 (4), 1–4. 
White, M.A., de Beurs, K.M., Didan, K., Inouye, D.W., Richardson, A.D., Jensen, O.P., O'Keefe, J., Zhang, G., Nemani, R.R., van Leeuwen, W.J.D., Brown, J.F., de Wit, A., Schaepman, M., Lin, X., Dettinger, M., Bailey, A.S., Kimball, J., Schwartz, M.D., Baldocchi, D.D., Lee, J.T., Lauenroth, W.K., 2009. Intercomparison, interpretation, and assessment of spring phenology in North America estimated from remote sensing for 1982–2006. Global Change Biology 15 (10), 2335–2359. Wingate, Lisa, Richardson, A.D., Weltzin, Jake F., Nasahara, Kenlo N., Grace, John, 2008. Keeping an eye on the carbon balance: linking canopy development and net eco- system exchange using a webcam. Fluxletter. 1 (2), 14–17. Wu, D.R., Yu, Q., Lu, C.H., Hengsdijk, H., 2006. Quantifying production potentials of winter wheat in the North China Plain. European Journal of Agronomy 24 (3), 226–235. Xiao, X., Hollinger, D., Aber, J., Goltz, M., Davidson, E.A., Zhang, Q., Moore, B., 2004. Satellite-based modeling of gross primary production in an evergreen needleleaf forest. Remote Sensing of Environment 89 (4), 519–534. Yan, H., Fu, Y., Xiao, X., Huang, H.Q., He, H., Ediger, L., 2009. Modeling gross primary productivity for winter wheat–maize double cropping system using MODIS time series and CO2 eddy flux tower data. Agriculture, Ecosystems and Environment 129 (4), 391–400. Zhu, K.Zh., Wan, M.W., 1973. Phenology. Science Press, Beijing. Zhu, W.Q., Tian, H.Q., Xu, X.F., Pan, Y.Z., Chen, G.S., Lin, W.P., 2012. Extension of the growing season due to delayed autumn over mid and high latitudes in North America during 1982–2006. Global Ecology and Biogeography 21 (2), 260–271. 
http://refhub.elsevier.com/S1574-9541(13)00045-9/rf0260 http://refhub.elsevier.com/S1574-9541(13)00045-9/rf0260 http://refhub.elsevier.com/S1574-9541(13)00045-9/rf0260 http://refhub.elsevier.com/S1574-9541(13)00045-9/rf0260 http://refhub.elsevier.com/S1574-9541(13)00045-9/rf0265 http://refhub.elsevier.com/S1574-9541(13)00045-9/rf0265 http://refhub.elsevier.com/S1574-9541(13)00045-9/rf0270 http://refhub.elsevier.com/S1574-9541(13)00045-9/rf0270 http://refhub.elsevier.com/S1574-9541(13)00045-9/rf0270 http://refhub.elsevier.com/S1574-9541(13)00045-9/rf0270 http://refhub.elsevier.com/S1574-9541(13)00045-9/rf0375 http://refhub.elsevier.com/S1574-9541(13)00045-9/rf0375 http://refhub.elsevier.com/S1574-9541(13)00045-9/rf0375 http://refhub.elsevier.com/S1574-9541(13)00045-9/rf0275 http://refhub.elsevier.com/S1574-9541(13)00045-9/rf0275 http://refhub.elsevier.com/S1574-9541(13)00045-9/rf0280 http://refhub.elsevier.com/S1574-9541(13)00045-9/rf0280 http://refhub.elsevier.com/S1574-9541(13)00045-9/rf0280 http://refhub.elsevier.com/S1574-9541(13)00045-9/rf0285 http://refhub.elsevier.com/S1574-9541(13)00045-9/rf0285 http://refhub.elsevier.com/S1574-9541(13)00045-9/rf0285 http://refhub.elsevier.com/S1574-9541(13)00045-9/rf0290 http://refhub.elsevier.com/S1574-9541(13)00045-9/rf0290 http://refhub.elsevier.com/S1574-9541(13)00045-9/rf0290 http://refhub.elsevier.com/S1574-9541(13)00045-9/rf0300 http://refhub.elsevier.com/S1574-9541(13)00045-9/rf0300 http://refhub.elsevier.com/S1574-9541(13)00045-9/rf0300 http://refhub.elsevier.com/S1574-9541(13)00045-9/rf0295 http://refhub.elsevier.com/S1574-9541(13)00045-9/rf0295 http://refhub.elsevier.com/S1574-9541(13)00045-9/rf0295 http://refhub.elsevier.com/S1574-9541(13)00045-9/rf0380 http://refhub.elsevier.com/S1574-9541(13)00045-9/rf0380 http://refhub.elsevier.com/S1574-9541(13)00045-9/rf0305 http://refhub.elsevier.com/S1574-9541(13)00045-9/rf0305 http://refhub.elsevier.com/S1574-9541(13)00045-9/rf0310 
work_clh4lctsbzbf5ggex42xfmwzwa ---- Polaroid after digital: technology, cultural form, and the social practices of snapshot photography
Buse, P
http://dx.doi.org/10.1080/10304310903363864
Type: Article
This version is available at: http://usir.salford.ac.uk/id/eprint/18795/
Published Date: 2010

Polaroid into digital: Technology, cultural form, and the social practices of snapshot photography

At its Annual Meeting in 1991, the Polaroid Corporation distributed, as part of its Shareholders’ package, a loose sheet devoted to ‘Photo-Document Integration’. Beneath an image depicting a Polaroid camera, a Polaroid print, a scanner, a computer, and a laser printer, the document details how in the future ‘image-dependent businesses’ will rely on ‘converting … images into digital data files that can be easily integrated with other computer data.’ (see Figure 1)1 By any measure it is a melancholy document. It successfully predicts the technological future but cannot see that the full arrival of this future will render the Polaroid image obsolete. In the new media landscape so accurately sketched out by the document, melancholics are of course thin on the ground: one of the pleasant, even narcotic, effects of new media, for those who have access to them, is a forgetfulness about the once new older forms they have replaced.
Indeed, in order for any technology to be acclaimed as a novelty, this forgetfulness is an absolute prerequisite. It is the model of progress implicit in this forgetting that caused Walter Benjamin, in his essay ‘Surrealism,’ to follow André Breton in turning his attention to the ‘outmoded,’ to ‘objects that have begun to be extinct,’ to once-fashionable things ‘when the vogue has begun to ebb from them’ (1999, 210). Polaroid photography is one of those objects on the verge of extinction: the company stopped making film at its Enschede plant in the Netherlands at the end of 2008, and supplies were expected to run out entirely by the end of 2009. It deserves critical attention now, at the moment of its imminent disappearance, not in the form of a requiem, but because it retroactively sheds light on the newer technologies that have apparently surpassed it. While it is hard to argue against the prevailing wisdom that chemically-based photography has been displaced by electronically-based digital image-making, what that change means is still very much up for grabs. There has been no shortage of attempts to tackle this problem, with Tom Gunning most recently, and convincingly, making the case that the shift from chemical to digital has not radically transformed the basic status of the photographic image. Addressing the oft-noted fact that digital photographs can be easily and quickly manipulated, he claims that this does not ultimately have a profound effect on their indexicality (2008, 24), nor on the ‘nearly inexhaustible visual richness’ of the photographic (37). He concludes that ‘Like…earlier transformations in photographic history, the digital revolution will change how photographs are made, who makes them, and how they are used – but they will still be photographs’ (38). The case he makes is compelling and should be influential in debates on what digital alteration means for the ontology of the photographic image.
However, as he admits, he has set aside in his argument the questions of ‘how photographs are made, who makes them, and how they are used’. But can these questions be legitimately set aside, and can images really be so easily separated from practices of image-making? Peter Osborne firmly rejects any attempts to ontologize the photographic on the grounds that any ‘idea of a founding unity of the photographic’ involves the ‘reductive identification of a cultural form with a technology’ (2003, 68). According to Osborne, we cannot simply isolate photography from its social practices, because it will always be an ‘unstable unit[y] of material form and social use’ (65). In other words, it is the wrong question to ask whether the ‘photographic’ has changed, since it does not exist autonomously from its manifestations as cultural form. Taking up the case of Polaroid photography then, this article will argue that the obsolescence of a technology does not necessarily mean the absolute passing of a cultural form, but rather the modification of already existing practices. In considering Polaroid and digital photography together, it has two main aims:

1) To shift debates on digital photography away from their main emphasis on manipulation or alteration of the image. This emphasis has served to isolate images as images alone, severed from their practices of making, an issue which is more difficult to avoid with Polaroid photography, with its distinctive form of image-making.

2) To identify a distinctive snapshot praxis. Polaroid prints, with their white borders, strict size restrictions, and lack of a negative are clearly different materially from other kinds of photograph, and from amateur digital photography.
However, the speed with which the image appears, and the absence of a darkroom or other conventional means of image development are features Polaroid photography shares with digital snapshot photography: an examination of the latter through the prism of the former allows us to see better what is at stake in the new techno-cultural form, but also where its absolute novelty must be qualified. Before these issues are taken up, though, it is necessary to give a brief outline of the development and decline of Polaroid photography.

The end of ‘one-step’ photography

The year in which Polaroid distributed to its shareholders the ‘Photo-Document Integration’ information sheet also marked the death of Edwin Land, inventor of the Polaroid photographic process, and founder, in 1937, of the Polaroid Corporation. The information sheet illustrates a process that would come to be described as ‘convergence,’ but to Edwin Land it would no doubt have been anathema, for it undoes the main principle of image-making that he promoted at Polaroid. When Land announced in 1947 the invention of Polaroid photography, he dubbed it ‘one-step’ photography, because it eliminated a number of steps between exposure and final print, most obviously the process of chemical development of a negative into a positive image in a photo-lab. In an essay describing the process, Land emphasised just how many ‘steps’ his invention had compressed into one by listing the sequence of ‘Conventional Processing’: ‘Expose, develop the negative, rinse, fix, wash, dry, expose the positive through the negative, develop, rinse, fix, wash, dry’ (1947, 62). In these first versions of ‘instant’ photography, the camera operator was still responsible for timing the development of the film, pulling it out of the camera to burst the ‘pod’ of developing reagent, and peeling the useless negative away from the finished print.
Only after twenty-five more years of research in chemistry, optics and electronics, did Land achieve his ultimate aim, in the form of SX-70 technology, which mechanically ejects from the camera a white-bordered image that develops before the eyes of its user. For those who remember Polaroid picture-taking, it is usually this second generation of cameras that define the experience of Polaroid use. Having dispensed with every activity on the part of the photographer except loading the film and releasing the shutter, Land was now satisfied that Polaroid had achieved ‘absolute one-step photography’. As he put it, ‘When you press the electric button a whole series of operations happens in the camera; by the end of one and a quarter seconds after the electric shutter button is touched the camera has done its part, the film is ejected and a whole series of events occur within the film’ (1974, 338). Land numbers the steps reduced to one at between ‘two hundred and five hundred, depending on how you choose to fractionate them’ (338). To actually add steps between the pressing of the shutter button and the production of an image, as is done in ‘Photo-Document Integration’, could only be a step backwards from Land’s point of view. Cheaper versions of SX-70 technology were made available through the 1970s, and one of them, the ‘One-Step’, became the world’s most widely sold camera in the early 1980s (Columbus 1999, 119). After Kodak, Polaroid was now securely the world’s second largest manufacturer of amateur photographic equipment, fiercely protecting its numerous patents, and defeating Kodak in court for infringement of them. Into the 1980s and 1990s it successfully maintained a monopoly over a single lucrative process: the provision of a completed image shortly after the exposure of a film.
But the translation of images into a binary code put paid to that dominance and ultimately ensured the superseding of the original ‘instant photography’ and the decline of the company that nurtured it. The slow poison of ‘new media’ worked throughout the 1990s on Land’s invention, and Polaroid filed for bankruptcy protection in October 2001. It was then bought in August 2002 by a Chicago investment group, One Equity Partners, which sat on its acquisition, while Polaroid continued to produce and sell cameras and film. One Equity then sold it on to Petters Group Worldwide in 2005.2 In an interview with the New York Times in October 2005, the new chairman of Polaroid under Petters, Stewart L. Cohen, claimed that ‘Polaroid still has a multimillion-dollar instant film business that is still profitable,’ but conceded later in the interview that ‘the statistics say there’s not much life left in the business. People just aren’t buying a lot of instant cameras’ (Deutsch 2005). Indeed, he was keener to promote Polaroid’s new brand image as a maker of Plasma TVs and portable DVD players. Or rather than maker, marketer, of these products, since Cohen makes clear in the interview that ‘The most important thing is that Polaroid keep shifting its paradigm away from manufacturing, with its huge fixed costs’ (Deutsch 2005). In other words, a company whose business identity had historically been based on inventing and selling new products that no other company made, and was once known for its ‘maverick laboratory-based corporate structure,’ (Martin Kao 1999, 13) had become just another out-sourcing subsidiary in a crowded marketplace. On February 8, 2008 Polaroid/Petters announced that it would be permanently discontinuing the manufacture of instant film.
In late 2008 Tom Petters, chair of Polaroid’s new parent company, was arrested for financial fraud, and Polaroid, filing for a second time for bankruptcy protection, was purchased in April 2009 by a joint US-Canadian investment group in a fire-sale of Petters’ assets. ‘Polaroid’ still exists, then, and still trades in the field of visual technologies, but for all intents and purposes it is no longer the same entity that was formed in 1937 to manufacture polarizing filters and sold the first ‘one-step’ camera in a Boston department store in 1948. The company was in fact actively researching computer graphics in the 1980s, and early in developments of digital photographic technology, but perhaps held out too many hopes for the continued importance of the ‘hard-copy,’ as the ‘Photo-Document Integration’ memo attests.3 Most recently, it introduced in March 2009 a small format ‘instant mobile printer’, the Polaroid PoGo, but high cost and poor image quality have thus far ensured poor sales. More to the point, the PoGo adds steps to the process of amateur image-making rather than taking them away, as Polaroid photography always set out to do.

Critical context: the Photoshop cul-de-sac

Polaroid’s demise at the hands of digital photography has been noted by many; indeed, the observation is usually extended to all chemically-based photographic technologies. The exact details vary, but it would be difficult to find someone who did not agree in principle with Graham Clarke’s observation that

Despite the difference between a daguerreotype and a polaroid print, the photograph has always been based on a chemical process. Images are now being generated on the basis of electronic processes which fundamentally change the terms by which we relate to the photograph, retrieve, experience, and read it.
(1997, 218-20)

Perhaps in 1997 it was too early to tell, but Clarke did not go on to elaborate how exactly things had changed, how ‘the terms by which we relate to the photograph, retrieve, experience, and read it’ had been modified by digitalisation. Precisely what properties distinguish the old technology from the new one? In what way do these differences impact on the uses of the media? Do they generate new social practices or adapt to existing ones? In the 1990s a number of analysts attempted to answer these questions in relation to the advent of digital photography. While the debate was animated, the results were mixed, largely because of a fixation on the question of image manipulation or alteration. However, asking these questions of popular snapshot photography, and Polaroid photography in particular, helps to reorient and reinvigorate the debate, partly because of Polaroid’s difference from other forms of snapshot photography, but especially because of some key similarities between the instantaneously produced digital image and the Polaroid ‘instant image’ first produced in the 1940s. While advances in new media inevitably attract the most attention for their high-end applications, the lower end of snapshot culture also deserves attention. As was noted earlier with regards to Tom Gunning’s recent intervention, discussion on digital photography, especially at its height in the mid- to late-1990s, tended to focus disproportionately on a single issue: the manipulation of images. Andrew Murphie and John Potts nicely summarize what is generally thought to be at stake in the advent of digital photography:

Photography enjoyed … a truth effect in the nineteenth century and for much of the twentieth; this authentic fit between things and photographs has been undermined by the potential for manipulation residing in digital image-making.
Digital photography challenges accepted notions of representation in ways which some find disturbing, yet others find liberating….image scanners and software programs enabling the easy manipulation of digitised images….The widespread use in the 1990s of digital cameras in journalism removed the guarantee of truth held in the photographic negative. (2003, 75-6)

As Murphie and Potts suggest, for some early commentators, advances in digital image-making, far from being unsettling, were a welcome development, an opportunity to break with the referent once and for all. One of the key celebrants, W.J.T. Mitchell, announced that

Photographs….were comfortably regarded as casually generated truthful reports about things in the real world….But the emergence of digital imaging has irrevocably subverted these certainties, forcing us to adopt a far more wary and more vigilant interpretive stance….Today, as we enter the post-photographic era, we must face once again the ineradicable fragility of our ontological distinctions between the imaginary and the real.
(1992, 225)

Just as Murphie and Potts hedge their bets when invoking the ‘truth-effect’ of conventional photography, so Mitchell, using the passive voice (photographs ‘were…regarded’) distances himself from the notion that photographs ever were in fact ‘truthful reports about things in the real world.’ Nevertheless, this sort of qualification did not prevent him from being chastised on all sides for the supposed naivety with which he welcomed the digital ‘revolution.’ In fact, for a while it was almost obligatory to take him to task in subsequent commentary on digital photography, such was the preoccupation in the mid-1990s with the effect of manipulation, or ‘Photoshopping’ on the ‘truth’ of the image.4 As a result, a good deal of energy was expended on pointing out the obvious fact that chemically-based photos had always been manipulated, so there was nothing particularly new about digital touching-up.5 Equally, necessary reminders were given that photographic images, just like digital images, are far from unmediated in their relation to the world; that they are coded and therefore read, that they are selected and framed and given meaning by context and caption, that an uncritical positivism lies behind the notion that photos are evidence, and that in any case realism is an elaborate ideological construct and not a transparent window onto reality.6 At the end of all these proofs, the conclusion usually drawn is again cautiously of the ‘nothing particularly new’ school and contra Mitchell’s confident assertion that the digital image is a revolutionary new development in the image-world. The caution and scepticism with which these commentators approach digital photography is shared by much writing on digital culture coming out of the academy.
There are two main reasons for this sober stance: 1) a perceived need to act as a balance to the speculative flights of fancy of the popular press and specialist digital hype-merchants; 2) a pulling back from early enthusiasms emanating from the academy on the utopian possibilities of new technologies, before the dot.com bust and before the dreary realities of white-collar e-mail serfdom became clear. Indeed, it is hard these days to find an academic article or book on digital culture that does not distance itself from ‘the cacophonous new media rhetoric’ (Everett and Caldwell 2003, xi) or warn that ‘It is no longer credible…to imagine that digital media is somehow marked by a radical break with traditional media practices’ (Caldwell 2003, 130). One of the best contributors to this genre, Jeffrey Sconce, reflects wryly on the failures of digital culture to fulfil its early promises and notes ‘a disconnect between the increasingly banal applications of digital media in the “real world” and the favored objects of digital study in the academy’ (2003, 181). But his is far from a lone voice. In fact, so eager are the new breed of teetotal new media analysts to throw cold water on the unthinking enthusiasms of the digi-disciples that they tend to shy away from making any strong claims at all about changes inaugurated by digital technologies. If, as Sconce says, the ‘real world’ applications of digital technology are ‘banal,’ then surely it is to the banal that we should be directing our attention. And what could be more banal than the ubiquitous digital snapshot cameras wielded at every birthday party and in front of every tourist attraction; what more commonplace than the presence of phone cameras at every celebrity sighting or public event? On these occasions, is it primarily the possibility of the manipulation of the image which is at stake?
Surely, what matters more is the speed with which the image appears after it has been taken and the fact that its taker no longer makes use of a professional photo-finisher. These are relatively new developments in relation to conventional snapshot photography, but not, of course, in relation to Polaroid image-making.

Polaroid snapshot praxis

In concentrating on the question of manipulation and the supposed changing ‘truth-effect’ of digital images, commentators have focused disproportionately on the finished image, setting aside the practices of making that have led to that image. My case here is that, as far as Polaroid and digital snapshot culture are concerned, the resultant image cannot be considered separately from the practice of its making. What, then, are the key features of Polaroid image-making which distinguish it from other forms of pre-digital photography? There are three7:

1) Speed: the image appears in an ‘instant’
2) The image develops itself: there is no need to have recourse to a private darkroom or professional developing company
3) Uniqueness of the print: the process provides no negative, and therefore is not easily subject to the normal photographic process of multiple reproduction8

The first and the second properties are shared with digitally-produced snapshots, while the third clearly is not. I will return to the third feature at the end of the article, but first I want to consider the implications of the shared features. Speed and instantaneity are of course relative concepts: the first Polaroid prints were ready in a minute, and the standard SX-70 image takes perhaps four to five minutes to stabilise completely. What is important is that the process takes a single step, in Land’s vocabulary, eliminating the delay (of days, weeks, months) that had normally been the experience of the amateur snapshot photographer.
As Polaroid ads enjoined possible users: ‘Take and show party pictures while the fun’s going on’; ‘see results at once…with no intervening delay for processing’.9 Amateur Polaroid and digital image-making could not be further apart in their technologies of production and dissemination, but the speed with which the image appears and the way in which it ‘develops’ inside the camera mean that the former nevertheless anticipates the latter as a practice and a cultural form. One way of gauging the implications of speed and the elimination of the darkroom is to measure Polaroid and digital against the classic model of amateur snapshot practice as outlined by Don Slater. In his analysis of the ways in which snapshot culture contributes to domestic ideology and practices of leisure-time, Slater identifies a fundamental discontinuity. He notes the importance of the ‘family album’ as a device for regulating identity and ordering memory, citing a survey that found that ‘39 per cent of respondents rated their family photos as the possessions they treasure most and would least like to lose’ (1995, 138). However, he goes on to observe that this hypervaluation of the family album sits oddly with our actual use of photographs: the same piece of research indicated that 60 per cent of respondents and their families looked at their family snaps only once a year….Moreover, it is unclear how many people actually organise their photos into anything approximating a family album: most of them remain in the same envelopes in which the processing company returned them. Thus the family album…is hypervalued yet plays little part in everyday life. Taking pictures is a taken for granted part of leisure activities; but looking at them is marginal.
(1995, 138-9) Slater then substitutes the term using for looking, and argues that in consumer culture, although images are increasingly used at the domestic level in the form of ‘home entertainment,’ the images we produce ourselves tend not to be used in a structured way. In this account, then, the snapshot only has an intermittent and retroactive value, reflected upon long after its making, if at all. As part of their push to bring the new generation of camera phones to the UK market, Sony Ericsson screened in 2005 a television advert that suggests a rather different snapshot praxis than the one outlined by Slater. Under the tag-line ‘take your best shot – with a phone!,’ the new Sony K750i was promoted by an improbably handsome young couple strolling along the side of a lake. He is on his phone, ignoring her, when she spots something and snatches the phone out of his hand. It is a water-lily and a dragon-fly. She gets them with the phone, shows him. He’s unimpressed, quickly snapping the next manifestation of nature, a fish leaping to eat the dragonfly. He shows her the result, triumphant, she now downcast. Next a roaring grizzly materializes to devour the fish; he holds the camera paralysed, she pushes down on the button. Thus reconciled in their snapshooting partnership, with beatific looks they admire the resulting image of spontaneous nature, whilst an eagle comes down to lift the bear away. Here there is virtually no gap between ‘taking’ and ‘using’ the snapshot image, if we invoke Slater’s distinction. Rather than a delayed or indefinitely postponed activity, the looking at the image has become coterminous with its taking. In terms of what Slater calls the ‘use’ of images, the digital snapshot has a potent life after its taking, potentially sent on as code to other phones, or as computer attachments. But the basic leisure activity here is surely the near simultaneous pairing of taking and showing, of production and consumption.
According to Slater, writing before the explosion in affordable digital cameras and camera phones, Taking photographs is itself structured…and is regarded as an intrinsic part of other leisure-event-structures: holidays, time-off, special occasions….Using photographs, however, does not fit the bill of leisure event, of a consumer practice or experience. (1995, 141) However, a good half-century before digital cameras closed the gap between taking and using in snapshot photography, Polaroid advertising campaigns had identified the potential for turning this very proximity into what Slater calls a ‘structured activity.’ Under the general slogan ‘Pictures-in-a-minute,’ Polaroid ads in the late 1940s claimed ‘It’s like taking your darkroom on location,’ and emphasised that ‘no other camera would give me a second chance like this,’ meaning that if the picture didn’t work out, you could take another immediately and improve on the result (see Figure 2).10 The ads also suggested possible activities that could be organised around the new product: ‘hold photographic parties with a prize for the best picture made by a guest…enjoy your pictures with friends when they mean the most – while they are still news.’ In 1950, the Polaroid copywriters came up with the catchphrase, ‘You’re the life of the party with a Polaroid Land Camera,’ establishing the long association of the camera with social gatherings (see Figure 3). In its promotional materials, including the cover of the company’s 1954 Annual Report, Polaroid invariably depicted an admiring group huddled around the recently-developed image which has been peeled away from the dead negative (see Figure 4). There they are, absorbed, like incipient digital camera users, in the immediacy of the image-making experience. They are simultaneously subjects and viewers of the photograph, a tableau mirrored in the image they are consuming.
This sort of image of image-consumers became a genre in and of itself in all subsequent representation of Polaroid use. As Nat Trotman has observed, most photographic theories ‘presuppose a certain distance between the act of observing a photograph and the act of taking it,’ but ‘[o]ver the course of a minute, a photograph does not concern remembering or forgetting. Rather, it plays between the lived moment and its reification as an object with its own physical presence. The party Polaroid is not so much an evocation of a past event as an instant fossilization of the present.’ (2002) Trotman does not draw any conclusions about the broader cultural implications of photography becoming ‘an instant fossilization of the present,’ but we could tentatively note two developments in snapshot practice that have resulted from the collapsing of ‘taking’ into ‘using’. On the one hand, there is the potentially collective nature of the activity as signalled by the ‘party snapshot’. A British reviewer of the SX-70 gave this encomium to the camera in 1976: Polaroid Land photography generally is a much more communal pursuit, since the photographer does not need to leave the scene with a promise to send the pictures on later. With the SX-70, that moment of revelation when the picture begins to appear, and which has made so many of us into photographers for life the first time we saw it, is no longer limited to the darkroom and may be shared with any who care to gather round and watch. (Crawley 1976, 1003) The implications are not minor. With the elimination of the amateur’s darkroom or the photo-finishing company, the sway of the ‘expert’ over the making of images is also eliminated. Equally, the ‘communal pursuit’ envisioned here may put paid to the roving eye of the isolated individual photographer, voyeuristic and detached, replaced instead by a more dispersed collective vision.
At the same time as it makes snapshot consumption a more immediate and potentially public activity, the elimination of the darkroom opens up to the casual amateur a range of private practices. Freedom from the monitory gaze of the photo-chemist means that what might have been taboo now becomes picturable. Peggy Sealfon, author of a popular manual of instant photography, euphemistically sums up this popular practice enabled by Polaroid: No longer did picture-takers have to wait a week for local drugstore processing, and no longer did they have to be concerned about the film’s contents passing under the scrutiny of the druggist’s eye. Instant pictures of lovers and spouses became quite common. (1983, 6) The parallels with popular digital snapshooting should be self-evident, although it is a different sort of obscenity that has captured the most attention. Commenting balefully on the explosion in ‘gadgetry of instant objectification,’ the Retort collective read the continuous digital archiving of the present as a symptom of a ‘crisis in time,’ where no real reflection on the past is any longer possible: ‘it has to be recorded, since experience without instant doubling is no experience at all. “Here’s me third from the left at Thanksgiving in Abu Dhabi; and here’s me on top of a pigpile of Terrorists”’ (2005, 182-3). Retort imply that in the relentless drive to instantaneity the digital camera user hardly distinguishes between Thanksgiving and a pigpile, but surely one difference is that the image of a pigpile would never have been sent to a professional photo-finisher by the amateur snapshooter. As for the death of Polaroid at the hands of digital, what I hope has become clear is that it is not simply a case of one technology being displaced by another.
Polaroid technology may be fated to obsolescence, but as William Boddy has argued, ‘digital imaging developed largely through a process of infiltrating existing signifying practices which are already embedded in a diverse set of highly developed cultural forms’ (2004, 68). While there are clearly historical and technical discontinuities between the SX-70 and the camera phone, it is also the case that the modes of using the latter take shape in a cultural field in which there were already existing social practices based around the former. And while digital cameras, particularly in phones, have massively increased the recruitment levels of casual snapshot takers, this process had already been accelerated by Polaroid.

Coda: Polaroid’s digital after-life

I argued at the start of this article that Tom Gunning brackets questions of practice in his discussion of the digital image. It could be argued that in my attention to practices of image-making I have effectively bracketed the question of the image. I said that the third defining feature of Polaroid photography is the uniqueness of the image, that it provides no negative for further mechanical reproduction. Clearly this is a major difference from the digital image, which is infinitely and effortlessly reproducible in countless possible contexts. Whatever commonality there might be between the Polaroid and digital snapshots at the point of their making, they part company radically here. Indeed, this fact – the digital image’s basis in binary code – has led to a whole array of new practices at the level of the dissemination and circulation of photographic images after their making.11 In contrast, the Polaroid image is stubbornly attached to its material support in a way that even conventional negative-based photography never was. Certainly, an SX-70 Polaroid image can be scanned into a computer, but this is only ever a partially complete operation.
When these pictures are scanned in, the built-in white frame, with its wider bottom edge, is invariably included in order to identify the Polaroid image as such. But this frame (the lower part, often used for writing on, houses the pod of developing reagent which bursts as film is ejected) is not strictly speaking part of the image, but rather part of the object. In fact, the SX-70 print is ‘an image that is also a thing’ (Schjeldahl 1987, 9), ‘both sculptural and pictural’ (Van Lier 1983, xii), and any convergence with computers will always leave an untransmittable remainder, because, counter to the plural logic of technical reproducibility, the Polaroid is always only singular. But the situation is by no means straightforward. While fewer and fewer Polaroid prints are being made as the technology gradually disappears, the distinctive white borders are experiencing a striking digital after-life, especially in advertising. Although promotions for products as varied as National Rail, Peugeot (Rugby World Cup, 2007), Manchester Tourist Board, Co-Operative Bank, and Virgin Megastore clearly have not used actual Polaroid technology to produce the images for their campaigns, these images have been enhanced after the fact with the iconic white borders in the simplest of Photoshop operations. The borders are then usually written on (digitally), just as popular practice dictated that the original prints would often be titled at the bottom. Just to drive home that the images of screenwriters in the ad for ScreenwritersStore.com are meant to be read as Polaroids, the simulated prints are ‘stuck’ to the background of the ad with digitally simulated Sellotape. Why do these ads (but it is not just ads – an article in the Times Higher, 4 June 2009, uses simulated Polaroids for purposes of illustration; The New York Times online regularly uses the white border for its picture stories) make use of simulated instant snapshots?
Presumably because Polaroid prints are a useful shorthand for the photograph, any photograph, as a physical object. In each case, we are clearly meant to understand that these are actual prints – photos as objects – that we are looking at, and the Polaroid print, with its white border, leaves us in no doubt about this, whereas a (simulated) snapshot taken from a shoebox and scanned would not necessarily divulge its object-hood so self-evidently. In the National Rail ad promoting ‘days out’, there are nine ‘Polaroid’ images of a red-hatted gnome in different locations – at the seaside, in the Peaks, in front of a castle (see Figure 5). These days such photos would most likely be captured on a camera phone and circulated as digital files, and yet the advertisers choose an antiquated photo-format rather than showing the images, for example, in a series of phones. The ads are composed entirely on the basis of digital imaging technology, and yet they want us to see their images as if they were singular material objects and not just bits of code. The vernacular snapshot format alluded to by these ads is being gradually dematerialized with digital technology, and the digitally produced Polaroid border – a marker of non-convergence – suggests a lingering regret for the passing of the photo as material object. The simulated white border is, therefore, a kind of compensation for the absence of the photo as tangible and tactile.

The research for this article has been generously supported by grants from the Arts and Humanities Research Council UK and the British Academy. Thanks also to Tim Mahoney at the Baker Library, Harvard.

References

Barthes, R. 1977. Image Music Text. London: Fontana.
Benjamin, W. 1999. Selected Writings: Volume 2, 1927-34. Cambridge, MA: Belknap Press of Harvard University Press.
Boddy, W. 2004. New Media and Popular Imagination: Launching Radio, Television, and Digital Media in the United States. Oxford: Oxford University Press.
Buse, P.
2007. Photography Degree Zero: Cultural History of the Polaroid Image. new formations 62: 29-44.
Caldwell, J.T. 2003. Second-Shift Media Aesthetics: Programming, Interactivity, and User Flows. In New Media: Theories and Practices of Digitextuality, ed. A. Everett and J.T. Caldwell, 127-44. London and New York: American Film Institute and Routledge.
Clarke, G. 1997. The Photograph. Oxford: Oxford University Press.
Columbus, N. 1999. Polaroid Milestones. In Innovation / Imagination: 50 Years of Polaroid Photography, ed. N. Columbus, 118-19. New York: Harry N. Abrams.
Crawley, G. 1976. SX-70 Camera and Film – A Review. British Journal of Photography 123, no. 46: 998-1003.
Deutsch, C.H. 2005. Big Picture Beyond Photos. New York Times, October 1, Section C4.
Dubois, P. 1990. L’acte photographique et autres essais, 2nd ed. Brussels: Labor.
Everett, A., and Caldwell, J.T. 2003. Introduction: Issues in the Theory and Practice of Media Convergence. In New Media: Theories and Practices of Digitextuality, ed. A. Everett and J.T. Caldwell, xi-xxx. London and New York: American Film Institute and Routledge.
Goldsmith, A. 1991. Photos Always Lied. Popular Photography 98, no. 11: 68-75.
Gunning, T. 2008. What’s the Point of an Index? Or, Faking Photographs. In Still Moving: Between Cinema and Photography, ed. K. Beckman and J. Ma, 23-40. Durham and London: Duke University Press.
Kember, S. 1996. ‘The Shadow of the Object’: Photography and Realism. Textual Practice 10, no. 1: 145-63.
Krasner, J. 2005a. Minnesota Firm to Acquire Polaroid. Boston Globe, January 8, Section E1.
--------. 2005b. Polaroid Cuts R&D, Digital Plans under New Owner; Firm Is Little More than Brand Name. Boston Globe, August 2, Section C1.
Land, E. 1947. A New One-Step Photographic Process. Journal of the Optical Society of America 37, no. 2: 61-77.
--------. 1974. Absolute One-Step Photography. The Photographic Journal 114: 338-45.
Lister, M. 1995. Introductory Essay.
In The Photographic Image in Digital Culture, ed. M. Lister, 1-26. London: Routledge.
Manovich, L. 2003. The Paradoxes of Digital Photography. In The Photography Reader, ed. L. Wells, 240-49. London: Routledge.
Martin Kao, D. 1999. Edwin Land’s Polaroid: A New Eye. In Innovation / Imagination: 50 Years of Polaroid Photography, ed. N. Columbus, 13-19. New York: Harry N. Abrams.
Mitchell, W.J.T. 1992. The Reconfigured Eye: Visual Truth in the Post-photographic Era. Cambridge, MA: MIT Press.
Murphie, A., and Potts, J. 2003. Culture and Technology. Basingstoke: Palgrave.
Osborne, P. 2003. Photography in an Expanding Field: Distributive Unity and Dominant Form. In Where Is the Photograph?, ed. D. Green, 63-70. Brighton: Photoworks and Photoforum.
‘Photo-Document Integration.’ 1991. 1991 New Products and Technologies folder, 1991 Meeting of Stockholders information packet, May 7, 1991, Polaroid Corporation Collection, Baker Library Historical Collections, Harvard Business School.
Polaroid Corporation Annual Report. 1954.
Retort. 2005. Afflicted Powers: Capital and Spectacle in a New Age of War. London: Verso.
Robins, K. 1995. Will Image Move Still? In The Photographic Image in Digital Culture, ed. M. Lister, 29-50. London: Routledge.
Rubinstein, D., and Sluis, K. 2008. A Life More Photographic: Mapping the Networked Image. Photographies 1, no. 1: 9-28.
Schjeldahl, P. 1987. The Instant Age. In Legacy of Light, ed. C. Sullivan, 8-13. New York: Alfred A. Knopf.
Sconce, J. 2003. Tulip Theory. In New Media: Theories and Practices of Digitextuality, ed. A. Everett and J.T. Caldwell, 179-93. London and New York: American Film Institute and Routledge.
Sealfon, P. 1983. The Magic of Instant Photography. Boston: CBI Publishing.
Slater, D. 1995. Domestic Photography and Digital Culture. In The Photographic Image in Digital Culture, ed. M. Lister, 129-46. London: Routledge.
Tripsas, M., and Gavetti, G. 2000.
Capabilities, Cognition, and Inertia: Evidence from Digital Imaging. Strategic Management Journal 21: 1147-61.
Trotman, N. 2002. The Life of the Party: The Polaroid SX-70 Land Camera and Instant Film Photography. Afterimage 29, no. 6: 10.
Van Lier, H. 1983. The Polaroid Photograph and the Body. In Stefan de Jaeger, xi-xix. Brussels: Poot.
Warriner, W. 1980. Illuminations: Photography and Scientific Discovery. Close-Up 11, no. 2: 2-9.

1 The author consulted in 2007 materials that Polaroid Corporation donated to Baker Library, Harvard Business School in 2006. The collection was largely unprocessed at the time this article was researched and written. The Polaroid archives will henceforth be cited as the Polaroid Corporation Collection.
2 On the sell-off, see Krasner 2005a and Krasner 2005b.
3 As early as 1980 the Polaroid in-house magazine, Close-Up, featured an article with a detailed discussion of ‘Analog Versus Digital’ (see Warriner 1980, 6). See Tripsas and Gavetti 2000 for an account of Polaroid’s decline as a business. Tripsas and Gavetti claim that Polaroid suffered from ‘organizational inertia in the face of radical technological change….despite early investments and leading-edge technical capability in areas related to digital imaging’ (1148).
4 The following all take issue with Mitchell as a central part of their arguments: Kember 1996; Lister 1995; Manovich 2003; Robins 1995. The complaint against Mitchell is fairly uniformly that he exaggerates the extent of change inaugurated by the digital and thereby grants too much historical agency to it. In other words, he is accused of technological determinism.
5 See, for instance, Goldsmith 1991 or Manovich 2003, 244-5. Gunning 2008, 25-6 also returns to this point.
6 Kember 1996 articulates these objections very clearly, and is also good on ‘the paradox of photography’s fading but always mythical realism’ (146). See also Lister 1995.
Although the arguments made by Kember, Lister and others are vital to counteract the still powerful realist stance regarding the photographic image, they do not really progress beyond the ground-breaking case made by Roland Barthes in 1961 (see Barthes 1977). The subsequent chorus in the 1960s and 70s denouncing photography’s ‘effect of the real’ is consummately summarized by Dubois 1990, 31-40.
7 These three properties were first outlined in Buse 2007, 37-8.
8 Mainly due to demand from professional photographers, Polaroid introduced in 1961 Type 55 film, which produced a usable negative. Its film for snapshot purposes remained positive-only, and it is of course not possible to produce a negative from ‘integral’ SX-70 film.
9 August 1949 and November 1951 campaigns in The Camera and Modern Photography.
10 September 1949 and November 1949 campaigns in The Camera and U.S. Camera.
11 See Rubinstein and Sluis 2008 for a survey of such practices and critical literature on them.

work_clx5zqbmxjeoxj3zn2feyqjyny ----

Anaesthetic eye drops for children in casualty departments across south east England

It is a common practice to use topical anaesthetic drops to provide temporary relief and aid in the examination of the eyes when strong blepharospasm precludes thorough examination. Ophthalmology departments usually have several types of these—for example, amethocaine, oxybuprocaine (benoxinate), and proxymetacaine. The duration and degree of discomfort caused by amethocaine is significantly higher than proxymetacaine,1 2 whilst the difference in the discomfort between amethocaine and oxybuprocaine is minimal.2 When dealing with children, therefore, it is recommended to use proxymetacaine drops.1 It was my experience that Accident & Emergency (A&E) departments tend to have less choice of these drops.
This survey was done to find out the availability of different anaesthetic drops, and the preference for paediatric use given a choice of the above three. Questionnaires were sent to 40 A&E departments across south east England. Thirty-nine replied, of which one department did not see any eye casualties. None of the 38 departments had proxymetacaine. Twenty units had amethocaine alone and 10 units had oxybuprocaine alone. For paediatric use, these units were happy to use whatever drops were available within the unit. Eight units stocked amethocaine and oxybuprocaine; four of these were happy to use either of them on children and four used only oxybuprocaine. One of the latter preferred proxymetacaine but had to contend with oxybuprocaine due to cost issues. Children are apprehensive about the instillation of any eye drops. Hence, it is desirable to use the least discomforting drops, like proxymetacaine. For eye casualties, in the majority of District General Hospitals, A&E departments are the first port of call. Hence, A&E units need to be aware of the benefits of proxymetacaine and stock it for paediatric use.

M R Vishwanath
Department of Ophthalmology, Queen Elizabeth Queen Mother Hospital, Margate, Kent; m.vishwanath@virgin.net
doi: 10.1136/emj.2003.010645

References
1 Shafi T, Koay P. Randomised prospective masked study comparing patient comfort following the instillation of topical proxymetacaine and amethocaine. Br J Ophthalmol 1998;82(11):1285–7.
2 Lawrenson JG, Edgar DF, Tanna GK, et al. Comparison of the tolerability and efficacy of unit-dose, preservative-free topical ocular anaesthetics. Ophthalmic Physiol Opt 1998;18(5):393–400.

Training in anaesthesia is also an issue for nurses

We read with interest the excellent review by Graham.1 An important related issue is the training of the assistant to the emergency physician. We wished to ascertain if use of an emergency nurse as an anaesthetic assistant is common practice.
We conducted a short telephone survey of the 12 Scottish emergency departments with attendances of more than 50 000 patients per year. We interviewed the duty middle grade doctor about usual practice in that department. In three departments, emergency physicians will routinely perform rapid sequence intubation (RSI), the assistant being an emergency nurse in each case. In nine departments an anaesthetist will usually be involved or emergency physicians will only occasionally perform RSI. An emergency nurse will assist in seven of these departments. The Royal College of Anaesthetists2 have stated that anaesthesia should not proceed without a skilled, dedicated assistant. This also applies in the emergency department, where standards should be comparable to those in theatre.3 The training of nurses as anaesthetic assistants is variable and is the subject of a Scottish Executive report.4 This consists of at least a supernumerary in-house program of 1 to 4 months. Continued professional development and at least 50% of working time devoted to anaesthetic assistance follow this.4 The Faculty of Emergency Nursing has recognised that anaesthetic assistance is a specific competency. We think that this represents an important progression. The curriculum is, however, still in its infancy and is not currently a requirement for emergency nurses (personal communication with L McBride, Royal College of Nursing). Their assessment of competence in anaesthetic assistance is portfolio based and not set against specified national standards (as has been suggested4). We are aware of one-day courses to familiarise nurses with anaesthesia (personal communication with J McGowan, Southern General Hospital). These are an important introduction, but are clearly incomparable to formal training schemes.
While Graham has previously demonstrated the safety of emergency physician anaesthesia,5 we suggest that when anaesthesia does prove difficult, a skilled assistant is of paramount importance. Our small survey suggests that the use of emergency nurses as anaesthetic assistants is common practice. If, perhaps appropriately, RSI is to be increasingly performed by emergency physicians,5 then the training of the assistant must be concomitant with that of the doctor. Continued care of the anaesthetised patient is also a training issue1 and applies to nurses as well. Standards of anaesthetic care need to be independent of its location and provider.

R Price
Department of Anaesthesia, Western Infirmary, Glasgow: Gartnavel Hospital, Glasgow, UK
A Inglis
Department of Anaesthesia, Southern General Hospital, Glasgow, UK
Correspondence to: R Price, Department of Anaesthesia, 30 Shelley Court, Gartnavel Hospital, Glasgow, G12 0YN; rjp@doctors.org.uk
doi: 10.1136/emj.2004.016154

References
1 Graham CA. Advanced airway management in the emergency department: what are the training and skills maintenance needs for UK emergency physicians? Emerg Med J 2004;21:14–19.
2 Guidelines for the provision of anaesthetic services. Royal College of Anaesthetists, London, 1999. http://www.rcoa.ac.uk/docs/glines.pdf.
3 The Role of the Anaesthetist in the Emergency Service. Association of Anaesthetists of Great Britain and Ireland, London, 1991. http://www.aagbi.org/pdf/emergenc.pdf.
4 Anaesthetic Assistance. A strategy for training, recruitment and retention and the promulgation of safe practice. Scottish Medical and Scientific Advisory Committee. http://www.scotland.gov.uk/library5/health/aast.pdf.
5 Graham CA, Beard D, Oglesby AJ, et al. Rapid sequence intubation in Scottish urban emergency departments. Emerg Med J 2003;20:3–5.
Ultrasound Guidance for Central Venous Catheter Insertion

We read Dunning’s BET report with great interest.1 As Dunning himself acknowledges, most of the available literature concerns the insertion of central venous catheters (CVCs) by anaesthetists (and also electively). However, we have found that this data does not necessarily apply to the critically-ill emergency setting, as illustrated by the study looking at emergency medicine physicians2 where the ultrasound did not reduce the complication rate. The literature does not distinguish between potentially life-threatening complications and those with unwanted side-effects. An extra attempt or prolonged time for insertion, whilst unpleasant, has a minimal impact on the patient’s eventual outcome. However, a pneumothorax could prove fatal to a patient with impending cardio-respiratory failure. Some techniques – for example, high internal jugular vein – have much lower rates of pneumothorax. Furthermore, some techniques use an arterial pulsation as a landmark. Such techniques can minimise the adverse effect of an arterial puncture as pressure can be applied directly to the artery. We also share Dunning’s doubt in the National Institute for Clinical Excellence (NICE) guidance’s claim that the cost-per-use of an ultrasound could be as low as £10.3 NICE’s economic analysis model assumed that the device is used 15 times a week. This would mean sharing the device with another department, clearly unsatisfactory for most emergency situations. The cost per complication prevented would be even greater (£500 in Miller’s study, assuming 2 fewer complications per 100 insertions). Finally, the NICE guidance is that ‘‘appropriate training to achieve competence’’ is
undertaken. We are sure that the authors would concur that the clinical scenario given would not be the appropriate occasion to "have a go" with a new device for the first time. In conclusion, we believe that far more important than ultrasound-guided CVC insertion is the correct choice of insertion site, to avoid those significant risks which the critically ill patient would not tolerate.

M Chikungwa, M Lim
Correspondence to: M Chikungwa; mchikungwa@msn.com

doi: 10.1136/emj.2004.015156

References
1 Dunning J, Williamson J. Ultrasonic guidance and the complications of central line placement in the emergency department. Emerg Med J 2003;20:551-552.
2 Miller AH, Roth BA, Mills TJ, et al. Ultrasound guidance versus the landmark technique for the placement of central venous catheters in the emergency department. Acad Emerg Med 2002;9:800-5.
3 National Institute for Clinical Excellence. Guidance on the use of ultrasound locating devices for placing central venous catheters. Technology appraisal guidance no 49, 2002 http://www.org.uk/cat.asp?c = 36752 (accessed 24 Dec 2003).

Accepted for publication 27 May 2004

608 Emerg Med J 2005;22:608-610 www.emjonline.com
Emerg Med J: first published as 10.1136/emj.2004.021121 on 26 July 2005. Downloaded from http://emj.bmj.com/ on April 5, 2021 by guest. Protected by copyright.

Patients' attitudes toward medical photography in the emergency department

Advances in digital technology have made use of digital images increasingly common for the purposes of medical education.1 The high turnover of patients in the emergency department, many of whom have striking visual signs, makes this an ideal location for digital photography.
These images may eventually be used for the purposes of medical education in presentations, and in book or journal format.2 3 As a consequence, patients' images may be seen by the general public on the internet, as many journals now have open access internet sites. From an ethical and legal standpoint it is vital that patients give informed consent for the use of images in medical photography, and are aware that such images may be published on the world wide web.4

The aim of this pilot study was to investigate patients' attitudes toward medical photography as a guide to consent and usage of digital photography within the emergency department. A patient survey questionnaire was designed to answer whether patients would consent to their image being taken, which part(s) of their body they would consent to being photographed, and whether they would allow these images to be published in a medical book, journal, and/or on the internet.

All patients attending the minors section of an inner city emergency department between 1st January 2004 and 30th April 2004 were eligible for the study. Patients were included if aged over 18 and having a Glasgow coma score of 15. Patients were excluded if in moderate or untreated pain, needing urgent treatment, or unable to read or understand the questionnaire. All patients were informed that the questionnaire was anonymous and would not affect their treatment. Data were collected by emergency department Senior House Officers and Emergency Nurse Practitioners.

100 patients completed the questionnaire. The results are summarised below:

Q1 Would you consent to a photograph being taken in the Emergency Department of you/part of your body for the purposes of medical education?
Yes 84%, No 16%. 21% replied Yes to all forms of consent, 16% replied No to all forms of consent, while 63% replied Yes with reservations for particular forms of consent.
Q2 Would you consent to the following body part(s) being photographed (head, chest, abdomen, limbs and/or genitalia)?
The majority of patients consented to all body areas being photographed except for genitalia (41% Yes, 59% No), citing invasion of privacy and embarrassment.

Q3 Would you consent to your photo being published in a medical journal, book or internet site?
The majority of patients gave consent for publication of images in a medical journal (71%) or book (70%), but were more likely to refuse consent for use of images on internet medical sites (47% Yes, 53% No or unsure).

In determining the attitudes of patients presenting to an inner city London emergency department regarding the usage of photography, we found that the majority of patients were amenable to having their images used for the purposes of medical education. The exceptions to this were the picturing of genitalia and the usage of any images on internet medical sites/journals. The findings of this pilot study are limited to data collection in a single emergency department in central London. A particular flaw of this survey is the lack of correlation between age, sex, ethnicity, and consent for photography. Further study is ongoing to investigate this. There have been no studies published about patients' opinions regarding medical photography to date. The importance of obtaining consent for publication of patient images and concealment of identifying features has been stressed previously.5 This questionnaire study emphasises the need to investigate patients' beliefs and concerns prior to consent.

A Cheung, M Al-Ausi, I Hathorn, J Hyam, P Jaye
Emergency Department, St Thomas' Hospital, UK

Correspondence to: Peter Jaye, Consultant in Emergency Medicine, St Thomas' Hospital, Lambeth Palace Road, London SE1 7RH, UK; peter.jaye@gstt.nhs.uk

doi: 10.1136/emj.2004.019893

References
1 Mah ET, Thomsen NO. Digital photography and computerisation in orthopaedics. J Bone Joint Surg Br 2004;86(1):1-4.
2 Clegg GR, Roebuck S, Steedman DJ. A new system for digital image acquisition, storage and presentation in an accident and emergency department. Emerg Med J 2001;18(4):255-8.
3 Chan L, Reilly KM. Integration of digital imaging into emergency medicine education. Acad Emerg Med 2002;9(1):93-5.
4 Hood CA, Hope T, Dove P. Videos, photographs, and patient consent. BMJ 1998;316:1009-11.
5 Smith R. Publishing information about patients. BMJ 1995;311:1240-1.

Unnecessary Tetanus boosters in the ED

It is recommended that five doses of tetanus toxoid provide lifelong immunity and that further 10-yearly doses are not required beyond this.1 National immunisation against tetanus began in 1961, providing five doses (three in infancy, one preschool and one on leaving school).2 Coverage is high, with uptake over 90% since 1990.2 Therefore, the majority of the population under the age of 40 are fully immunised against tetanus. Td (tetanus toxoid/low dose diphtheria) vaccine is often administered in the Emergency Department (ED) following a wound or burn, based upon the patient's recollection of their immunisation history. Many patients and staff may believe that doses should still be given every 10 years.

During summer 2004, an audit of tetanus immunisation was carried out at our department. The records of 103 patients who had received Td in the ED were scrutinised, and a questionnaire was sent to each patient's GP requesting information about the patient's tetanus immunisation history before the dose given in the ED. Information was received for 99 patients (96% response). In 34/99, primary care records showed the patient was fully immunised before the dose given in the ED. One patient had received eight doses before the ED dose, and two patients had been immunised less than 1 year before the ED dose. In 35/99, records suggested that the patient was not fully immunised. However, in this group few records were held before the early 1990s, and it is possible some may have had five previous doses.
In 30/99 there were no tetanus immunisation records. In 80/99 no features suggesting the wound was tetanus prone were recorded.

These findings have caused us to feel that some doses of Td are unnecessary. Patients' recollections of their immunisation history may be unreliable. We have recommended that during working hours, the patient's general practice should be contacted to check immunisation records. Out of hours, if the patient is under the age of 40 and the wound is not tetanus prone (as defined in DoH guidance1), the general practice should be contacted as soon as possible and the immunisation history checked before administering Td. However, we would like to emphasise that wound management is paramount, and that where tetanus is a risk in a patient who is not fully immunised, a tetanus booster will not provide effective protection against tetanus. In these instances, tetanus immunoglobulin (TIG) also needs to be considered (and is essential for tetanus prone wounds). In the elderly and other high-risk groups, for example intravenous drug abusers, the need for a primary course of immunisation against tetanus should be considered, not just a single dose, and follow up with the general practice is therefore needed.

The poor state of many primary care immunisation records is a concern, and this may argue in favour of centralised immunisation records or a patient electronic record, to protect patients against unnecessary immunisations as well as tetanus.

Accepted for publication 25 February 2004
Accepted for publication 12 October 2004
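The audit's headline figures are internally consistent, as a quick tally shows. This is an illustrative sketch using only the counts reported in the letter above; it introduces no new data.

```python
# Tally of the GP-questionnaire responses reported in the audit above.
fully_immunised = 34      # records showed full immunisation before the ED dose
not_fully_immunised = 35  # records suggested incomplete immunisation
no_records = 30           # no tetanus immunisation records held

responses = fully_immunised + not_fully_immunised + no_records
print(responses)                     # 99 responses received
print(round(responses / 103 * 100))  # 96 -- the 96% response rate quoted
```

The three subgroups sum exactly to the 99 responses received from 103 questionnaires sent, matching the quoted 96% response rate.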
T Burton, S Crane
Accident and Emergency Department, York Hospital, Wigginton Road, York, UK

Correspondence to: Dr T Burton, 16 Tom Lane, Fulwood, Sheffield, S10 3PB; tomandjackie@doctors.org.uk

doi: 10.1136/emj.2004.021121

References
1 CMO. Replacement of single antigen tetanus vaccine (T) by combined tetanus/low dose diphtheria vaccine for adults and adolescents (Td) and advice for tetanus immunisation following injuries. In: Update on immunisation issues PL/CMO/2002/4. London: Department of Health, August 2002.
2 http://www.hpa.org.uk/infections/topics_az/tetanus.

Accepted for publication 20 October 2004

BOOK REVIEWS

Environmental Health in Emergencies and Disasters: A Practical Guide
Edited by B Wisner, J Adams. World Health Organization, 2003, £40.00, pp 252. ISBN 9241545410

I have the greatest admiration for doctors who dedicate themselves to disaster preparedness and intervention. For most doctors there will, thank god, rarely be any personal involvement in environmental emergencies and disasters. For the others who are involved, the application of this branch of medicine must be some form of "virtual" game of medicine, lacking in visible, tangible gains for the majority of their efforts. Reading this World Health Organization publication however has changed my perception of the importance of emergency planners, administrators, and environmental technical staff. I am an emergency physician, blinkered by measuring the response of interventions in real time; is the peak flow better after the nebuliser? Is the pain less with intravenous morphine? But if truth be known it is the involvement of public health doctors and emergency planners that makes the biggest impact in saving lives worldwide, as with doctors involved in public health medicine. This book served to demonstrate to me my ignorance on matters of disaster responsiveness. But can 252 pages of General Aspects and Technical Aspects be comprehensive with regards to disaster planning? Can it provide me with what I need to know? I was confused by the end of my involvement with this book, or perhaps overwhelmed by the enormity of the importance of non-medical requirements such as engineering and technical expertise in planning for and managing environmental catastrophes. Who is this book for? I am still not sure. The everyday emergency physician? I think not. It serves a purpose to be educational about what is required, in a generic sort of way, when planning disasters. Would I have turned to it last year during the SARS outbreak? No. When I feared a bio-terrorism threat? No. When I watched helplessly the victims of the latest Iranian earthquake? No. To have done so would have been to participate in some form of voyeurism on other people's misery. Better to embrace the needs of those victims of environmental disasters in some tangible way than rush to the book shelf to brush up on some aspect of care which is so remote for the majority of us in emergency medicine.

J Ryan
St Vincent's Hospital, Ireland; j.ryan@st-vincents.ie

Neurological Emergencies: A Symptom Orientated Approach
Edited by G L Henry, A Jagoda, N E Little, et al. McGraw-Hill Education, 2003, £43.99, pp 346. ISBN 0071402926

The authors set out with very laudable intentions. They wanted to get the "maximum value out of both professional time and expensive testing modalities". I therefore picked up this book with great expectations: the prospect of learning a better and more memorable way of dealing with neurological cases in the emergency department. The chapter headings (14 in number) seemed to identify the key points I needed to know, and the size of the book (346 pages) indicated that it was possible to read. Unfortunately things did not start well. The initial chapter on basic neuroanatomy mainly used diagrams from other books. The end result was areas of confusion where the text did not entirely marry up with the diagrams. The second chapter, dealing with evaluating the neurological complaint, was better and had some useful tips. However the format provided a clue as to how the rest of the book was to take shape: mainly text and lists. The content of this book was reasonably up to date, and if you like learning neurology by reading text and memorising lists then this is the book for you. Personally I would not buy it. I felt it was a rehash of a standard neurology textbook and missed a golden opportunity of being a comprehensible text on emergency neurology, written by emergency practitioners for emergency practitioners.

P Driscoll
Hope Hospital, Salford, UK; peter.driscoll@srht.nhs.uk

Emergency medicine procedures
E Reichmann, R Simon. New York: McGraw-Hill, 2003, £120, pp 1563. ISBN 0-07-136032-8

This book has 173 chapters, allowing each chapter to be devoted to a single procedure, which, coupled with a clear table of contents, makes finding a particular procedure easy. This will be appreciated mostly by the emergency doctor on duty needing a rapid "refresher" for infrequently performed skills. "A picture is worth a thousand words" was never so true as when attempting to describe invasive procedures. The strength of this book lies in the clarity of its illustrations, which number over 1700 in total. The text is concise but comprehensive. Anatomy, pathophysiology, indications and contraindications, equipment needed, technicalities, and complications are discussed in a standardised fashion for each chapter. The authors, predominantly US emergency physicians, mostly succeed in refraining from quoting the "best method" and provide balanced views of alternative techniques. This is well illustrated by the shoulder reduction chapter, which pictorially demonstrates 12 different ways of reducing an anterior dislocation. In fact, the only notable absentee is the locally preferred Spaso technique. The book covers every procedure that one would consider in the emergency department and many that one would not. Fish hook removal, zipper injury, contact lens removal, and emergency thoracotomy are all explained with equal clarity. The sections on soft tissue procedures, arthrocentesis, and regional analgesia are superb. In fact, by the end of the book, I was confident that I could reduce any paraphimosis, deliver a baby, and repair a cardiac wound. However, I still had nagging doubts about my ability to aspirate a subdural haematoma in an infant, repair the great vessels, or remove a rectal foreign body. Reading the preface again, I was relieved. The main authors acknowledge that some procedures are for "surgeons" only and are included solely to improve the understanding by "emergentologists" of procedures that may present with late complications. These chapters are unnecessary, while others would be better placed in a pre-hospital text. Thankfully, they are relatively few in number, with the vast majority of the book being directly relevant to emergency practice in the UK. Weighing approximately 4 kg, this is undoubtedly a reference text. The price (£120) will deter some individuals, but it should be considered an essential reference book for SHOs, middle grades, and consultants alike. Any emergency department would benefit from owning a copy.

J Lee

obituary

Edulji (Eddy) Sethna
Formerly Consultant Psychiatrist, Hollymoor Hospital, Birmingham

Dr Eddy Sethna was born on 3 December 1925 in Bombay, India. He won two scholarships, which entirely funded his medical school training, and he qualified with MB BS from Bombay in 1951.
Having completed house jobs in Bombay, he became a senior house officer in medicine at the Bury and Rossendale Group of Hospitals in Lancashire in 1954, obtaining his MRCP in 1956. Having specialised in cardiology at the London Heart Hospital and Sefton General Hospital in Liverpool, he also obtained a diploma in tropical medicine and hygiene in preparation for his return to India as a consultant physician to the Jahangir Nursing Home in Poona. He spent 21 months in this post but decided to return to England, where he started his career as a psychiatrist.

Eddy's first psychiatric appointment was as a registrar at St Francis and Lady Chichester Group of Hospitals in Sussex. He did his senior registrar training in Birmingham and was appointed in 1966 as a consultant psychiatrist to All Saints Hospital in Birmingham with an attachment to West Bromwich and District Hospital. He was awarded his MRCPsych in 1971, and in 1976 he became a consultant to Hollymoor Hospital in Birmingham and the Lyndon Clinic in Solihull. He was elected FRCP in 1987, having been elected FRCPsych in 1986. His publications included studies of the benefits of group psychotherapy and refractory depression, but he also had a major interest in phobias.

When asked to organise a registrar training programme, Eddy with typical thoroughness and attention to detail demanded that he be allowed to establish the programme from scratch, ignoring the preconceived ideas of those more senior. Having gained their support, he established the first rotational psychiatric training programme in the country. This scheme was so popular and successful with the trainees that it was adopted by the Royal College of Psychiatrists as their national model for rotational training.

In his early fifties, Eddy returned to his boyhood interest of photography as an antidote to the stresses of his job.
In retirement, he became a leader and inspiration to the legions of amateur photographers taking tentative steps into the field of digital photography. He approached digital photography as he had approached medicine, studying the Adobe Photoshop computer program systematically so that he understood its ever-evolving capabilities. He willingly offered one-to-one teaching sessions, wrote four books (two on paper and two on CD), was instrumental in the formation of the Royal Photographic Society's Digital Imaging Group, was founding chairman of the Eyecon Group, and served as vice-president of the Royal Photographic Society. More recently the Royal Photographic Society awarded Eddy its prestigious Fenton Medal and Honorary Membership in recognition of his huge contribution to photography in the UK. He had numerous acceptances in international exhibitions and took great pride in the gold medal he was awarded shortly before his death in recognition of his creativity.

Eddy died at home of Hodgkin's lymphoma on 29 June 2006, cared for by his wife, Beryl, and daughters, Beverley and Julie, as was his wish. Eddy is survived by his wife, three children and seven grandchildren, whom he adored.

Anne Sutcliffe

doi: 10.1192/pb.bp.106.013813

review

Clinical Handbook of Psychotropic Drugs for Children and Adolescents
K. Z. Bezchlibnyk-Butler & A. S. Virani (eds). Hogrefe & Huber, 2004, US$49.95, 312 pp. ISBN 0-88937-271-3

This handbook is intended as a practical reference book for clinicians. Thus one measure of its success is how useful it is in a busy clinical setting and whether there is any added benefit over the British National Formulary or its international equivalents. The book starts by providing a brief overview of psychiatric disorders in childhood and adolescence. This section is the weakest, because although it provides the 'basic facts' there is not sufficient detail for prescribing clinicians.
I did, however, find myself using this section as a basis for handouts for medical students. The main body of this text is devoted to medications likely to be used in child and adolescent psychiatric practice. Taking the section on antidepressants as an example, it starts with a brief overview of the different classes of available antidepressants and general comments on the use of antidepressants in children and young people. For individual classes of drugs, indications, pharmacology, dosing, pharmacokinetics, onset and duration of action, adverse effects, withdrawal, precautions, toxicity, use in pregnancy, nursing implications, patient instructions and drug interactions are all covered. There are also helpful (and reproducible) patient information leaflets. The information provided is concise and up to date, although in this fast-developing field there is a danger that such texts can become out of date relatively quickly. I found myself regularly referring to the handbook in clinics. The accessible writing style made it easy to share the information with young people and their parents/carers. The only limitation is that the book's American authorship means that it tends to refer to licensing under the US Food and Drug Administration and reflects American practice, which does differ in some respects from contemporary practice in the UK.
Margaret Murphy
Department of Child Psychiatry, Cambridgeshire and Peterborough Mental Health Trust, Ida Darwin Hospital, Cambridge Road, Fulbourn, Cambridge CB1 5EE, email: margaret.murphy@cambsmh.nhs.uk

doi: 10.1192/pb.bp.105.004333

Evaluating the Therapeutic Success of Keloids Treated With Cryotherapy and Intralesional Corticosteroids Using Noninvasive Objective Measures

Hannah Schwaiger, MD,* Markus Reinholz, MD,* Julian Poetschke, MD,† Thomas Ruzicka, MD,* and Gerd Gauglitz, MD, MMS*

BACKGROUND Intralesional corticosteroid injections combined with cryotherapy are considered a first-line therapy for keloids. However, objective evaluation of its efficacy is widely missing.

OBJECTIVE In this study, the authors evaluated the therapeutic benefits of cryotherapy directly followed by intralesional crystalline triamcinolone acetonide injections using ultrasound and a 3D topographic imaging device.

MATERIALS AND METHODS Fifteen patients with keloids were treated with cryotherapy and intralesional injections of triamcinolone acetonide a total of 4 times at intervals of 4 weeks. Objective assessment was performed at each visit.

RESULTS After the last treatment, a significant average reduction of scar volume of 34.3% and an average decrease in scar height of 41.3%, as determined by 3D imaging, was observed compared with baseline. Ultrasound revealed an average reduction of scar height of 31.7% and an average decrease in tissue penetration depth of 37.8% when compared with baseline measurements.

CONCLUSION Objective measurements of relevant keloid characteristics such as height, volume, and penetration depth help in quantifying the therapeutic effect. The observed results confirm that intralesional injections of crystalline triamcinolone acetonide combined with cryotherapy represent a powerful approach to reduce scar height and volume significantly.
The authors have indicated no significant interest with commercial supporters.

*Department of Dermatology and Allergology, Ludwig-Maximilian University, Munich, Germany; †Department of Plastic and Hand Surgery, Klinikum St. Georg gGmbH, Leipzig, Germany

© 2017 by the American Society for Dermatologic Surgery, Inc. Published by Wolters Kluwer Health, Inc. All rights reserved. ISSN: 1076-0512. Dermatol Surg 2018;44:635-644. DOI: 10.1097/DSS.0000000000001427

Excessive scarring is aesthetically disturbing and frequently represents a psychosocial burden for affected patients.1 As keloids often go along with pruritus, contractures, and pain, the need for treatment is apparent and not solely based on cosmetic reasoning.2

Keloids are benign hyperproliferations of dermal connective tissue. Injury of the deep dermis commonly results in scar formation. The physiologic wound healing cascade consists of inflammation, proliferation, and a remodeling phase.3-5 In pathologic scar formation, a prolonged inflammatory phase and some molecular alterations concerning inflammatory pathways are held responsible for excessive scarring.6,7 Keloids commonly appear on the upper trunk, and, in contrast to hypertrophic scars, which may show a similar appearance, they exceed the margins of the original wound. Keloids can occur spontaneously but often show a genetic predisposition. Asian or African individuals are more prone to develop keloids.8-11 Usually, keloid treatment remains more challenging compared with the therapy of hypertrophic scars, because keloids are associated with a high recurrence rate and show no tendency to regress spontaneously.2

Over the years, therapeutic approaches to treat excessive scarring have significantly improved.12-14 Common therapeutic strategies include silicone gel
sheeting, pressure therapy, radiation, and cryosurgery.15-17 New treatment options are intralesional injections of 5-fluorouracil, interferon, and bleomycin for therapy-refractory scars10,18,19 as well as nonablative and ablative lasers.20,21 In clinical practice, the injection of crystalline glucocorticoids alone or in combination with cryotherapy represents a well-proven therapy.15,22-24 Combinational approaches seem to be superior25 to the respective monotherapies. It is assumed that cryotherapy induces changes in microcirculation and apoptosis of fibroblasts in treated scars.15 This procedure results in a localized dermal edema, which facilitates injections into the scar tissue, as larger volumes can be applied more easily, thus enhancing the therapeutic effect.13 Common side effects of cryotherapy are pain, a prolonged healing time because of the induced blistering, and potential depigmentation.14 Triamcinolone acetonide reduces scar thickness by inhibition of the collagen biosynthesis and the proliferation of fibroblasts. Potential side effects include atrophy of the subcutis, the formation of telangiectasia, and pigmentary alterations.2

To date, a variety of articles and studies to evaluate different therapeutic approaches for excessive scarring exist. However, most of them use rather subjective measurements to document therapeutic improvements of respective scar therapies.
Nevertheless, over the last decade, various studies have highlighted ultrasound as an objective tool to determine changes in scar size.4,26-30 More recently, the application of a 3D topographic imaging device (PRIMOSpico) has also been studied in this context.31,32 Based on current data, both tools seem reliable to deliver objective data on changes of physical scar characteristics.27,33

Although the combination of cryotherapy and intralesional triamcinolone acetonide represents a frequently used technique for the treatment of keloids and has been highlighted as a promising approach in current guidelines, only 4 studies exist on this combination.25,34-36 All of them confirmed the effectiveness of intralesional corticosteroids combined with cryotherapy, but none of them applied modern objective measurement methods to assess the therapeutic success. Furthermore, treatment regimens varied, and patients were evaluated before and at the end of treatment, without information on treatment progress. Other studies did not differentiate between hypertrophic scars and keloids. In this study, the authors therefore aimed to evaluate the therapeutic outcome of the guideline-based combination of cryotherapy and corticosteroid injections for keloid treatment using both modern objective imaging solutions.

Materials and Methods

Patients

Fifteen patients were included from the outpatient scar clinic (7 women, 8 men), aged 18 to 54 years (34.5 ± 12.5 years), with medium-sized keloids, as defined by common diagnostic criteria,2 which had existed for an average of 7.8 years (±5.3 years). Fitzpatrick skin types ranged from II to IV. Keloids resulting from trauma, surgery, or acne, or of spontaneous formation, were located mainly on the upper trunk (Table 1). Most patients had received treatments in the past, mainly including moisturizers such as silicones, cryotherapy alone, or laser, which were either unsuccessful or led to a regress of keloid formation.
The patients were enrolled after approval by the Ethics Committee of the Ludwig-Maximilian University, and written informed consent was obtained from each patient. Exclusion criteria were pregnancy, intralesional injections of corticosteroids during the last 6 months, chronic diseases such as diabetes or coagulopathies, and participation in other studies.

Patients were treated 4 times with cryotherapy directly followed by intralesional injections of triamcinolone acetonide into each scar at monthly intervals. Standardization of injection techniques was based on the authors' clinical settings and obtained by using the same drug, same needle size (27 gauge), and same brand of Luer-lock syringes for injections. Keloids were injected by the same experienced physician at each visit. Injection volumes depended on the size of the keloid and were carefully documented; visible blanching of the keloid tissue was considered the end point of injection. In accordance with other studies, national and international guidelines,
The device is commonly applied to measure wrinkles.26,27,38 Furthermore, it is highly suitable to assess pathologic scarring.32,33 Using micromirrors, this 3D topographic imaging device projects a stripe pattern with sinusoidal gray levels, which is captured by a CCD camera. Measuring the gray levels and displacement of this stripe pattern creates a detailed and color-coded height map of the measured surface. These height maps were used to analyze scar height and volume. Sonography Scars were assessed during each visit by using a high- resolution B-image sonogram. To ensure a good penetration depth into the scar combined with a high resolution, a 11-MHz receiving transducer was used in combination with the Logiq P6 Pro (GE Healthcare, Solingen, Germany) and a sufficient amount of ultra- sound gel was applied to avoid pressure on the scar, which may influence measurement. According to the manufacturer, this configuration yields a skin pene- tration depth of up to 40 mm and a lateral resolution of about 158 mm. Sonography was used to analyze scar elevation and cutaneous penetration. Digital Photography Digital photography was used to document the ther- apeutic success. A professional in-house photographer took photos during each visit after obtaining addi- tional written informed consent for this procedure from each patient. Photography was not standardized between visits within the study limitations section. Data Analysis Statistical analyses were performed using the GraphPad Prism 5 Software (GraphPad Software, TABLE 1. 
Patient Characteristics Patient Sex Age Skin Type Etiology Location Scar Age Family History of Keloids Previous Treatment 1 M 21 II Trauma Shoulder/back/ breast 1.5 2 None 2 M 48 III Surgery Breast 4 2 None 3 F 48 II Surgery Breast 10 2 Moisturizing, silicone 4 M 34 IV Spontaneous Shoulder/arms 23 2 None 5 F 21 III Surgery Back/shoulder 2 2 Surgical excision 6 F 46 III Surgery Back/breast 7 2 Moisturizing 7 M 45 II Trauma Breast/shoulder 10 2 None 8 M 34 IV Spontaneous Breast 16 2 None 9 M 29 III Acne Back 4.5 2 Moisturizing, cryotherapy 10 F 50 II Surgery Breast/shoulder 4 + Moisturizing 11 F 28 III Surgery Neck 2 + Moisturizing 12 M 54 II Surgery Breast 12 2 Moisturizing 13 F 20 III Acne Shoulder/back/ breast 5 + Moisturizing 14 F 18 III Spontaneous Shoulder 2 2 Moisturizing, laser 15 M 21 III Spontaneous Shoulder 2 2 Moisturizing S C H W A I G E R E T A L 4 4 : 5 : M A Y 2 0 1 8 637 © 201 by the American Society for Dermatologic Surgery, Inc. Published by Wolters Kluwer Health, Inc. Unauthorized reproduction of this article is prohibited.8 Inc., La Jolla, CA). The data were first analyzed for Gaussian distribution using the D’Agostino and Pearson omnibus normality test. After establishing Gaussian distribution of data, the repeated-measures ANOVA was applied to calculate statistical significance of the results, which were displayed as mean 6 SD. To compare the results of each visit in detail, Bonferroni’s Multiple Comparison Test was used. The significance level was set at a = 0.05. Results The authors evaluated the therapeutic progress of triamcinolone acetonide injections in combination with cryotherapy in a total of 15 patients during each visit directly before the respective treatment. In addi- tion, adverse events and side effects were documented: 8 patients developed telangiectasia, 4 of them showed pigmentation disorders such as reddish brown discoloration of the keloid, and one suffered from temporary ulceration after the fourth injection. 
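The statistical pipeline described above was run in GraphPad Prism. Purely as an illustrative sketch (not the authors' code), the same sequence of checks can be reproduced with SciPy: a D'Agostino–Pearson omnibus normality test, paired comparisons of each follow-up visit against baseline, and a Bonferroni correction. The measurement values below are synthetic and only mimic the reported scale of scar heights.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic scar heights (mm) for 15 patients: baseline and three follow-up
# visits with progressively reduced height (values are illustrative only).
baseline = rng.normal(3.2, 1.2, size=15)
visits = [baseline * f + rng.normal(0, 0.2, size=15) for f in (0.82, 0.70, 0.62)]

# D'Agostino-Pearson omnibus normality test on the baseline measurements
# (scipy warns that the kurtosis test prefers n >= 20, but still computes).
k2, p_norm = stats.normaltest(baseline)

# Paired t-test of each visit against baseline, Bonferroni-corrected:
# multiply each p-value by the number of comparisons and cap at 1.
n_tests = len(visits)
p_corrected = [min(stats.ttest_rel(baseline, v).pvalue * n_tests, 1.0)
               for v in visits]
```

Prism's repeated-measures ANOVA with Bonferroni post-tests is approximated here by Bonferroni-corrected paired t-tests, which is the simplest SciPy-only equivalent for visit-versus-baseline contrasts.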
The injected amounts of triamcinolone acetonide were evaluated for each visit (Table 2). For the first and second injections, the amount of triamcinolone acetonide averaged 0.9 mL per keloid (±0.6 mL for the first injection and ±0.7 mL for the second) and decreased to 0.4 mL (±0.3 mL) at the third visit. At Visit 4, an average amount of 0.5 mL (±0.3 mL) of triamcinolone acetonide was injected. The decreasing amount of crystalline corticosteroid over the treatment sessions resulted from the reduced scar volume.

Digital Photography
Visual assessment of photographs taken before and after treatment showed clear improvement of scar volume and circumference, objectively interpreted by a blinded rater (Figure 1).

3D Topographic Imaging Device
In matching mode, a program of the 3D topographic imaging device, a comparison of the scar before (Figure 2A) and after treatment (Figure 2B) is possible. A highly significant (p < .0001) reduction of surface height after 4 treatment sessions was observed (Figure 3A). After the first treatment and before the second intervention, scar elevation was reduced significantly by 18.3% (±15.1%) (p = .0143) compared with baseline. Subsequent documentation showed a reduction of 29.9% (±17.9%) (p = .2258) after the second, 37.8% (±19.9%) (p = .3391) after the third, and 41.3% (±20.6%) (p = .2075) after the fourth treatment session compared with baseline (Figure 3B). These results were proportional to the calculated loss of scar volume during treatment with triamcinolone acetonide injections (Figure 3C). After the first injection, the average volume was reduced by 13.1% (±10.3%) (p = .017), after the second by 21.7% (±14.0%) (p = .4602), and after the third injection by 30.5% (±16.9%) (p = .5011). By the fourth visit, the relative decrease of height and volume was 34.3% (±18.0%) (p = .4799) compared with baseline (Figure 3D).
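The PRIMOS height maps underlying these measurements rest on phase-shifting fringe projection: the camera records the projected sinusoidal stripe pattern at several known phase offsets, and the surface-induced phase at each pixel is recovered with an arctangent formula and scaled to height. The following is a minimal sketch of the standard four-step algorithm in NumPy, not the vendor's implementation; the height-per-radian calibration factor is a placeholder.

```python
import numpy as np

def phase_from_four_steps(i0, i1, i2, i3):
    """Recover the wrapped phase (radians) from four fringe images
    shifted by 0, pi/2, pi, and 3*pi/2."""
    return np.arctan2(i3 - i1, i0 - i2)

# Synthetic demonstration: a known phase profile along one image row.
phase_true = np.linspace(-3.0, 3.0, 101)
a, b = 100.0, 50.0  # background intensity and fringe modulation
images = [a + b * np.cos(phase_true + k * np.pi / 2) for k in range(4)]

phase_rec = phase_from_four_steps(*images)

# Height is proportional to the (unwrapped) phase; the scale factor
# comes from system calibration and is purely illustrative here.
height_per_radian = 0.05  # mm per radian (placeholder value)
height_map = height_per_radian * np.unwrap(phase_rec)
```

With shifts of 0, π/2, π, and 3π/2, the differences i0 − i2 and i3 − i1 isolate 2b·cos φ and 2b·sin φ, so the background intensity cancels and the arctangent returns the wrapped phase directly.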
Sonography
Ultrasound measurements demonstrated a significant reduction of scar elevation, penetration depth, and volume, as well as a reduction of echo-poor areas corresponding to collagen-rich scar tissue (Figure 4A). As treatment progressed, more echo-rich reflections became visible in the sonogram, indicating a transformation into more homogeneous scar tissue (Figure 4B). By sonography, a decrease of scar height (Figure 5A) and intracutaneous infiltration (Figure 5C) could be demonstrated. After one treatment, an average reduction of scar height of 12.9% (±14.8%) (p = .0134) was observed, after 2 injections 19.8% (±15.9%) (p = .0489), after 3 treatments 26.7% (±18.6%) (p = .1993), and after 4 injections 31.7% (±19.9%) (p = .3992) compared with baseline (Figure 5B). The reduction in subepidermal depth of the scar after one injection was 15.5% (±12.0%) (p = .0018) and after 2 interventions 25.0% (±15.4%) (p = .1438). An average reduction of 33.2% (±17.4%) (p = .1466) after 3 and of 37.8% (±16.7%) (p = .1723) after 4 injections could be observed (Figure 5D).

TABLE 2. Evaluation of Injected Volumes of Triamcinolone Acetonide and Absolute Measurement Results of Scar Height, Volume, and Penetration Depth
Injected amount of TAC in n = 15 keloids:
  Visit I (10–20 mg/mL): 0.9 ± 0.6 mL
  Visit II (10–20–40 mg/mL): 0.9 ± 0.7 mL
  Visits III + IV (20–40 mg/mL): Visit III 0.4 ± 0.3 mL; Visit IV 0.5 ± 0.3 mL
Average height ± SD: before 3.2 ± 1.2 mm; after 2.2 ± 1.2 mm
Average volume ± SD: before 737 ± 417 mm3; after 443 ± 345 mm3
Average embossment ± SD: before 2.5 ± 1.5 mm; after 1.7 ± 1.3 mm
Average penetration depth ± SD: before 2.0 ± 1.3 mm; after 1.4 ± 1.2 mm
TAC, triamcinolone acetonide.

Figure 1. Digital photography of Patient 15 (A and B), Patient 5 (C and D), and Patient 13 (E and F) with keloids before (A, C, and E) and after (B, D, and F) 4 injections of triamcinolone acetonide in combination with cryotherapy.
Figure 2. Skin-tone and color–height–encoded images taken with the 3D topographic imaging device PRIMOSpico using the matching program mode. Patient 2—native scar (A)—was treated 4 times with cryotherapy and triamcinolone acetonide injections (B).

Discussion
In this study, the authors aimed to objectively evaluate the treatment success of the guideline-based combination therapy of cryotherapy and triamcinolone acetonide injections2,14 for keloids by using a 3D topographic imaging device and ultrasound. Based on objective measurements, a significant average reduction of scar volume of 34.3% and an average decrease in scar height of 41.3%, as determined by 3D imaging, was observed after a total of 4 treatment sessions compared with baseline. Ultrasound revealed an average reduction of scar height of 31.7% and an average decrease in tissue penetration depth of 37.8% compared with baseline measurements. More specifically, 5 of 15 patients reached an average reduction of scar height and volume of 50% or more after 4 treatment sessions. An average reduction between 20% and 50% was observed in 5 patients.
The remaining 5 patients did not achieve a sufficient reduction of volume and height, defined as an average reduction of 20% or less after 4 injections of triamcinolone acetonide combined with cryotherapy. Based on this observation, a reduction of scar height by 20% or less after 2 treatment periods constitutes an adequate objective criterion to define resistance to therapy. Treatments within the study were tolerated relatively well, although intralesional corticosteroid injections have shown side effects in up to 63% of patients.11 Adverse events in this setting mostly included formation of telangiectasia (n = 8), which may be easily improved by pulsed dye lasers or intense pulsed light. In-depth assessment of scar therapy is crucial to prevent the implementation of treatment regimens with a considerable risk of side effects disproportionate to the achieved therapeutic success.

Figure 3. Absolute reduction of surface height (A) and volume (C), as well as the relative results of the reduction of scar height (B) and volume (D), as measured with PRIMOSpico.
Figure 4. Ultrasound images of Patient 9 in measuring mode before (A) and after (B) treatment. Clearly visible is the reduction of embossment and penetration depth. In addition, a change from the darker areas before therapy to more homogeneous scar tissue after 4 injections of triamcinolone acetonide and cryotherapy is apparent.
Intralesional corticosteroid injections with supplementary cryotherapy are the first-line therapy for keloids.2,39 In 1977, the combination of corticosteroids and cryosurgery was introduced and detailed information on the method was provided.36 Despite the lack of objective parameters, the promising results led to the establishment of this method. Injections are recommended for at least 3 courses25 or until complete flattening of the scar is reached; a maximum of 8 injections has been proposed.37,40 An interindividual controlled study with 10 patients using calipers and digital photography was performed to determine the effect of cryotherapy and corticosteroid injections in keloids.25 A more objective assessment method was used in a study of 33 patients35: the investigators took a wax pattern of alginate impressions of each keloid to observe changes in volume and surface of the keloids. Patients were treated only once, with either cryotherapy or corticosteroid injections alone or their combination. The results of combination therapy were significantly better than those of either method alone, so the authors proposed a synergistic effect of cryotherapy and intralesional triamcinolone acetonide. The latest study, on 21 patients, was published in 2008 and compared the therapeutic outcome of cryotherapy or corticosteroid injections into the keloid alone with the combination of both.34 A visual analog scale was used to evaluate the treatment results, and it was concluded that the combination therapy was superior. Current guidelines for scar therapy2,13 mention a response rate that varies between 50% and 100% and a recurrence rate between 9% and 50% according to meta-analyses.41–43 These data are based on studies that applied different treatment regimens and diverse evaluation methods: neither objective measurement methods were applied, nor was the combination of cryotherapy and corticosteroid injections into the keloid evaluated.
The 4 studies that observed the therapy results of corticosteroid injections alone were performed in the 1970s or earlier.42–45 Newer study approaches have not yet been included in current publications. Also, some studies did not differentiate between keloids and hypertrophic scars; as the latter commonly regress spontaneously,2 a definite conclusion about the recurrence rate of enhanced scar formation cannot be made.

Figure 5. Evaluation of the absolute reduction of scar height (A) and penetration depth (C) measured with high-resolution B-mode sonogram. The authors also observed relative results of the reduction of scar height (B) and penetration depth into the skin (D).

The wide fluctuation of response and recurrence rates is not only due to the various study approaches but also illustrates that every scar requires an individualized treatment. A standard approach still does not exist. The updated international guidelines for scar therapy therefore favor a combination approach with multiple modalities.46 With a modern 3D topographic imaging device, height, volume, and surface of the scar can be objectively documented and compared with a baseline measurement. 3D topographic imaging devices such as the PRIMOSpico have already provided solid data and valid results in studies evaluating the therapy of hypertrophic scars, keloids, acne scars, and striae distensae.21,26,31,32,38 The various presentation modes are helpful for the 3-dimensional demonstration of treatment progress. The safe and easy handling makes it a promising tool in clinical scar examination. It is well suited for small to medium scars, but in case of extended keloid formation, the 3D imaging unit's field of view may be too small to capture the entire scar.
Another disadvantage is its inability to measure skin types V and VI. As keloids occur more often in Asian or African individuals,9 who present mainly with darker skin types such as IV to VI according to Fitzpatrick,47 ultrasound measurement should be preferred in these patients. Sonography represents a frequently used noninvasive observation tool in the treatment of pathologic scars.4,26–29 In dermatological research, high-frequency ultrasound of 7.5 MHz and above is used. It offers a sufficient penetration depth as well as high resolution and thereby achieves reliable measurement results in scar assessment. In contrast to 3D topographic imaging devices, ultrasound enables the assessment of scar height and penetration depth but not of volume or surface. However, ultrasound is available in nearly every clinical institution, which might facilitate its more frequent use compared with other devices. Based on the study data, both applied objective measurement methods can be recommended as highly suitable for the therapeutic assessment of pathologic scars. For an overall therapeutic assessment, however, standardized questionnaires may be helpful by adding additional relevant parameters to the evaluation,33,48,49 because successful scar treatment also includes the reduction of scar-associated symptoms such as pain, contractures, and pruritus. Established questionnaires such as the Patient and Observer Scar Assessment Scale or the Dermatology Life Quality Index cover important items of augmented scar formation and may therefore be applied for additional assessment of scar therapy.1,49 Limitations of the study include the relatively small number of included patients, the lack of follow-up, and certain shortcomings in the standardization of the injection technique itself.

Conclusion
As corticosteroid injections and cryotherapy are the mainstay of therapy for hypertrophic scars and keloids, evaluating the evidence for this treatment approach is required.
Here, the authors could demonstrate that ultrasound assessment along with noninvasive in vivo 3D topographic imaging measurements can objectively confirm the clinically observed efficacy of combined cryotherapy and intralesional corticosteroid injection for the treatment of keloids, provided the physician considers potential side effects such as neovessel formation and pigmentary changes. Because of their sensitivity, these objective measures may be useful in differentiating responders from nonresponders earlier than conventional assessments and may thus represent promising tools in future scar assessment.

References
1. Reinholz M, Poetschke J, Schwaiger H, Epple A, et al. The dermatology life quality index as a means to assess life quality in patients with different scar types. J Eur Acad Dermatol Venereol 2015;29:2112–9.
2. Nast A, Eming S, Fluhr J, Fritz K, et al. German S2k guidelines for the therapy of pathological scars (hypertrophic scars and keloids). J Dtsch Dermatol Ges 2012;10:747–62.
3. Chike-Obi CJ, Cole PD, Brissett AE. Keloids: pathogenesis, clinical features, and management. Semin Plast Surg 2009;23:178–84.
4. Lumenta DB, Siepmann E, Kamolz LP. Internet-based survey on current practice for evaluation, prevention, and treatment of scars, hypertrophic scars, and keloids. Wound Repair Regen 2014;22:483–91.
5. Gantwerker EA, Hom DB. Skin: histology and physiology of wound healing. Facial Plast Surg Clin North Am 2011;19:441–53.
6. Al-Attar A, Mess S, Thomassen JM, Kauffman CL, et al. Keloid pathogenesis and treatment. Plast Reconstr Surg 2006;117:286–300.
7. Berman B, Maderal A, Raphael B.
Keloids and hypertrophic scars: pathophysiology, classification, and treatment. Dermatol Surg 2016;43:3–18.
8. Shih B, Bayat A. Genetics of keloid scarring. Arch Dermatol Res 2010;302:319–39.
9. Davis SA, Feldman SR, McMichael AJ. Management of keloids in the United States, 1990–2009: an analysis of the national ambulatory medical care survey. Dermatol Surg 2013;39:988–94.
10. Payapvipapong K, Niumpradit N, Piriyanand C, Buranaphalin S, et al. The treatment of keloids and hypertrophic scars with intralesional bleomycin in skin of color. J Cosmet Dermatol 2015;14:83–90.
11. Mustoe TA, Cooter RD, Gold MH, Hobbs FD, et al. International clinical recommendations on scar management. Plast Reconstr Surg 2002;110:560–71.
12. Arno AI, Gauglitz GG, Barret JP, Jeschke MG. New molecular medicine-based scar management strategies. Burns 2014;40:539–51.
13. Gauglitz GG. Management of keloids and hypertrophic scars: current and emerging options. Clin Cosmet Investig Dermatol 2013;6:103–14.
14. Gold MH, Berman B, Clementoni MT, Gauglitz GG, et al. Updated international clinical recommendations on scar management: part 1—evaluating the evidence. Dermatol Surg 2014;40:817–24.
15. Karagoz H, Yuksel F, Ulkur E, Evinc R. Comparison of efficacy of silicone gel, silicone gel sheeting, and topical onion extract including heparin and allantoin for the treatment of postburn hypertrophic scars. Burns 2009;35:1097–103.
16. Beuth J, Hunzelmann N, Van Leendert R, Basten R, et al. Safety and efficacy of local administration of contractubex to hypertrophic scars in comparison to corticosteroid treatment. Results of a multicenter, comparative epidemiological cohort study in Germany. In Vivo 2006;20:277–83.
17. Candy LH, Cecilia LT, Ping ZY. Effect of different pressure magnitudes on hypertrophic scar in a Chinese population. Burns 2010;36:1234–41.
18. Camacho-Martinez FM, Rey ER, Serrano FC, Wagner A.
Results of a combination of bleomycin and triamcinolone acetonide in the treatment of keloids and hypertrophic scars. An Bras Dermatol 2013;88:387–94.
19. Wilson AM. Eradication of keloids: surgical excision followed by a single injection of intralesional 5-fluorouracil and botulinum toxin. Can J Plast Surg 2013;21:87–91.
20. Kim DH, Ryu HJ, Choi JE, Ahn HH, et al. A comparison of the scar prevention effect between carbon dioxide fractional laser and pulsed dye laser in surgical scars. Dermatol Surg 2014;40:973–8.
21. Chapas AM, Brightman L, Sukal S, Hale E, et al. Successful treatment of acneiform scarring with CO2 ablative fractional resurfacing. Lasers Surg Med 2008;40:381–6.
22. Atiyeh BS. Nonsurgical management of hypertrophic scars: evidence-based therapies, standard practices, and emerging methods. Aesthetic Plast Surg 2007;31:468–92; discussion 493–494.
23. Khan MA, Bashir MM, Khan FA. Intralesional triamcinolone alone and in combination with 5-fluorouracil for the treatment of keloid and hypertrophic scars. J Pak Med Assoc 2014;64:1003–7.
24. Davison SP, Dayan JH, Clemens MW, Sonni S, et al. Efficacy of intralesional 5-fluorouracil and triamcinolone in the treatment of keloids. Aesthet Surg J 2009;29:40–6.
25. Yosipovitch G, Widijanti Sugeng M, Goon A, Chan YH, et al. A comparison of the combined effect of cryotherapy and corticosteroid injections versus corticosteroids and cryotherapy alone on keloids: a controlled study. J Dermatolog Treat 2001;12:87–90.
26. Bleve M, Capra P, Pavanetto F, Perugini P. Ultrasound and 3D skin imaging: methods to evaluate efficacy of striae distensae treatment. Dermatol Res Pract 2012;2012:673706.
27. Perry DM, McGrouther DA, Bayat A. Current tools for noninvasive objective assessment of skin scars. Plast Reconstr Surg 2010;126:912–23.
28. Cheng W, Saing H, Zhou H, Han Y, et al. Ultrasound assessment of scald scars in Asian children receiving pressure garment therapy. J Pediatr Surg 2001;36:466–9.
29.
Fraccalvieri M, Sarno A, Gasperini S, Zingarelli E, et al. Can single use negative pressure wound therapy be an alternative method to manage keloid scarring? A preliminary report of a clinical and ultrasound/colour-power-doppler study. Int Wound J 2013;10:340–4.
30. Acosta S, Ureta E, Yanez R, Oliva N, et al. Effectiveness of intralesional triamcinolone in the treatment of keloids in children. Pediatr Dermatol 2016;33:75–9.
31. Friedman PM, Skover GR, Payonk G, Geronemus RG. Quantitative evaluation of nonablative laser technology. Semin Cutan Med Surg 2002;21:266–73.
32. Gauglitz GG, Bureik D, Dombrowski Y, Pavicic, et al. Botulinum toxin A for the treatment of keloids. Skin Pharmacol Physiol 2012;25:313–8.
33. Poetschke J, Schwaiger H, Gauglitz GG. Current and emerging options for documenting scars and evaluating therapeutic progress. Dermatol Surg 2016;43:25–36.
34. Sharma S, Bhanot A, Kaur A, Dewan SP. Role of liquid nitrogen alone compared with combination of liquid nitrogen and intralesional triamcinolone acetonide in treatment of small keloids. J Cosmet Dermatol 2007;6:258–61.
35. Anchlia S, Rao KS, Bonanthaya K, Vohra D. Keloidoscope: in search for the ideal treatment of keloids. J Maxillofac Oral Surg 2009;8:366–70.
36. Ceilley RI, Babin RW. The combined use of cryosurgery and intralesional injections of suspensions of fluorinated adrenocorticosteroids for reducing keloids and hypertrophic scars. J Dermatol Surg Oncol 1979;5:54–6.
37. Trisliana Perdanasari A, Lazzeri D, Su W, Xi W, et al. Recent developments in the use of intralesional injections keloid treatment. Arch Plast Surg 2014;41:620–9.
38. Bloemen MC, van Gerven MS, van der Wal MB, Verhaegen PD, et al. An objective device for measuring surface roughness of skin and scars. J Am Acad Dermatol 2011;64:706–15.
39. Block L, Gosain A, King TW. Emerging therapies for scar prevention. Adv Wound Care (New Rochelle) 2015;4:607–14.
40. Ahuja RB, Chatterjee P.
Comparative efficacy of intralesional verapamil hydrochloride and triamcinolone acetonide in hypertrophic scars and keloids. Burns 2014;40:583–8.
41. Robles DT, Moore E, Draznin M, Berg D. Keloids: pathophysiology and management. Dermatol Online J 2007;13:9.
42. Kiil J. Keloids treated with topical injections of triamcinolone acetonide (kenalog). Immediate and long-term results. Scand J Plast Reconstr Surg 1977;11:169–72.
43. Griffith BH, Monroe CW, McKinney P. A follow-up study on the treatment of keloids with triamcinolone acetonide. Plast Reconstr Surg 1970;46:145–50.
44. Ketchum LD, Robinson DW, Masters FW. Follow-up on treatment of hypertrophic scars and keloids with triamcinolone. Plast Reconstr Surg 1971;48:256–9.
45. Griffith BH. The treatment of keloids with triamcinolone acetonide. Plast Reconstr Surg 1966;38:202–8.
46. Gold MH, McGuire M, Mustoe TA, Pusic A, et al. Updated international clinical recommendations on scar management: part 2—algorithms for scar prevention and treatment. Dermatol Surg 2014;40:825–31.
47. Roberts WE. Skin type classification systems old and new. Dermatol Clin 2009;27:529–33, viii.
48. Reinholz M, Schwaiger H, Poetschke J, Epple A, et al. Objective and subjective treatment evaluation of scars using optical coherence tomography, sonography, photography, and standardised questionnaires. Eur J Dermatol 2016;26:599–608.
49. Poetschke J, Reinholz M, Schwaiger H, Epple A, et al. DLQI and POSAS scores in keloid patients. Facial Plast Surg 2016;32:289–95.

Address correspondence and reprint requests to: Hannah Schwaiger, MD, Department of Dermatology and Allergology, Ludwig-Maximilian University, Frauenlobstr.
9-11, 80337 Munich, Germany, or e-mail: hannah.schwaiger@web.de
work_cp3mahns25etraqttxcs6l7h3y ---- Importance of including borderline cases in trachoma grader certification. DOI:10.4269/ajtmh.13-0658 Corpus ID: 6624278 @article{Gaynor2014ImportanceOI, title={Importance of including borderline cases in trachoma grader certification.}, author={B. Gaynor and A. Amza and Sintayehu Gebresailassie and B. Kadri and B. Nassirou and N. Stoller and S. Yu and Puja A. Cuddapah and J. Keenan and T.
Lietman}, journal={The American journal of tropical medicine and hygiene}, year={2014}, volume={91 3}, pages={ 577-9 } } B. Gaynor, A. Amza, +7 authors T. Lietman Published 2014 Medicine The American journal of tropical medicine and hygiene We assessed trachoma grading agreement among field graders using photographs that included the complete spectrum of disease and compared it with cases where there was consensus among experienced graders. Trained photographers took photographs of children's conjunctiva during a clinical trial in Ethiopia. We calculated κ-agreement statistics using a complete set of 60 cases and then recalculated the κ using a consensus set where cases were limited to those cases with agreement among experienced… Expand View on PubMed europepmc.org Save to Library Create Alert Cite Launch Research Feed Share This Paper 2 Citations Results Citations 1 View All Supplemental Clinical Trials Interventional Clinical Trial Partnership for Rapid Elimination of Trachoma Trachoma, an ocular infection caused by C. trachomatis, is the second leading infectious cause of blindness worldwide. Years of repeated infection with C. trachomatis cause the… Expand Conditions Trachoma Intervention Drug Johns Hopkins University May 2008 - June 2014 Explore Further Discover more papers related to the topics discussed in this paper Figures and Topics from this paper figure 1 Histopathologic Grade Confidence Intervals photograph Certification 2 Citations Citation Type Citation Type All Types Cites Results Cites Methods Cites Background Has PDF Publication Type Author More Filters More Filters Filters Sort by Relevance Sort by Most Influenced Papers Sort by Citation Count Sort by Recency Reliability of Trachoma Clinical Grading—Assessing Grading of Marginal Cases Salman A. Rahman, S. Yu, +9 authors T. 
Lietman Medicine PLoS neglected tropical diseases 2014 2 View 1 excerpt, cites results Save Alert Research Feed Agreement between novice and experienced trachoma graders improves after a single day of didactic training B. P. Prasad, R. Bhatta, +9 authors B. Gaynor Medicine British Journal of Ophthalmology 2015 2 Save Alert Research Feed References SHOWING 1-10 OF 21 REFERENCES SORT BYRelevance Most Influenced Papers Recency Operational evaluation of the use of photographs for grading active trachoma. A. Solomon, R. Bowman, +6 authors D. Mabey Medicine The American journal of tropical medicine and hygiene 2006 17 PDF Save Alert Research Feed Comparison of clinical and photographic assessment of trachoma K. Roper, H. Taylor Medicine British Journal of Ophthalmology 2009 13 Save Alert Research Feed How reliable are tests for trachoma?--a latent class approach. Craig W. See, W. Alemayehu, +7 authors T. Lietman Medicine Investigative ophthalmology & visual science 2011 39 PDF Save Alert Research Feed Reliability of photographs for grading trachoma in field studies. S. West, H. Taylor Medicine The British journal of ophthalmology 1990 19 PDF Save Alert Research Feed Correlation of Clinical Trachoma and Infection in Aboriginal Communities Claude-Edouard C Michel, K. Roper, Magda A. Divena, H. Lee, H. Taylor Medicine PLoS neglected tropical diseases 2011 30 PDF Save Alert Research Feed The Prevalence of Blinding Trachoma in Northern States of Sudan Awad Hassan, J. Ngondi, +10 authors K. Binnawi Medicine PLoS neglected tropical diseases 2011 19 PDF Save Alert Research Feed Trachoma Prevalence and Associated Risk Factors in The Gambia and Tanzania: Baseline Results of a Cluster Randomised Controlled Trial E. Harding-Esch, T. Edwards, +9 authors S. West Medicine PLoS neglected tropical diseases 2010 66 PDF View 1 excerpt, references results Save Alert Research Feed Impact of face-washing on trachoma in Kongwa, Tanzania S. West, B. Muñoz, M. Lynch, A. Kayongoya, H. 
work_cpjcqt6rnbgofjnebuzz7pr5tq ---- Microsoft Word - UBC_2013_fall_beamish_alison.docx

The use of repeat colour digital photography to monitor high Arctic tundra vegetation

by Alison Leslie Beamish
BScH Geography, Queen's University, 2011

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF Master of Science in THE FACULTY OF GRADUATE AND POSTDOCTORAL STUDIES (Geography)

The University of British Columbia (Vancouver)
October 2013
© Alison Leslie Beamish, 2013

Abstract

High Arctic ecosystems are experiencing some of the earliest and most extreme changes in climate as a result of global climate change.
Temperature increases of twice the hemispheric average are initiating changes to terrestrial systems, including shifts in the timing of phenology, aboveground biomass and community composition of Arctic vegetation. Satellite imagery from the last 30 years has shown a greening across tundra ecosystems, with increases in peak productivity and growing season length. A few plot-scale field studies support these large-scale trends, but overall validation at the plot scale is still lacking. Current manual and automated methods for monitoring vegetation at the community and plot scale are both time consuming and employ expensive, sensitive multispectral instrumentation that can be cumbersome to use in Arctic field sites. In this thesis I examine the utility of colour digital photography in monitoring tundra vegetation across four different vegetation communities, inside and outside of passive warming chambers. Colour and infrared photos were taken on one day at peak season in 2010. Relationships between a greenness index derived from colour photographs and biomass data were compared to relationships with NDVI derived from infrared photographs. Results suggest that colour photographs can be used as a proxy for productivity and aboveground biomass in multiple tundra vegetation communities. These data were then used to infer phenological signals at multiple spatial scales from a set of colour photographs taken on six days during the 2012 growing season. Results show higher greenness values due to treatment at the plot scale but not at the individual scale, suggesting greater green biomass in warmed plots. At the individual scale, site differences emerged for two study species (Salix arctica, Dryas integrifolia), suggesting a difference in vegetation vigor due to differences in soil moisture and perhaps competition. The phenological signal was strongest at the species scale due to reduced interference from bare soil, litter and standing water.
Overall, these results show the potential of this methodology for measuring vegetation in the Arctic. Its simplicity, affordability and efficiency give it great potential for use in a vegetation monitoring network in the Arctic.

Preface

The experimental design used in this thesis was established by Dr. G. Henry. The overall study is part of the International Tundra Experiment (ITEX), an international collaboration monitoring tundra ecosystems in Arctic and alpine environments (Henry and Molau 1997). None of the text in this thesis is taken directly from previously published or collaborative work. Wiebe Nijland wrote the photo-processing program used in Chapters 2 and 3, and Samuel Robinson wrote the flower counting algorithm in Chapter 3. Marc Edwards took the digital and infrared images of plots in 2010 as part of his MSc thesis with Dr. Henry. All data analysis in the thesis, including processing of the photographs and the statistical analyses, is my original work. The in-field photography methodology and data collection were of my own design. The environmental data collection was based on protocols established by Dr. Henry for the site and related to those used in ITEX, and was the result of collaborative efforts at Alexandra Fiord during the 2010 and 2012 field seasons.

Table of Contents

Abstract
Preface
Table of Contents
List of Tables
List of Figures
Abbreviations
Acknowledgements
Dedication
1 Introduction
   Overview
   Terrestrial Arctic climate change
   Arctic plants and plant communities
   International Tundra Experiment
   Study site – Alexandra Fiord
   Objectives
2 Method Validation
   Introduction
   Methods
      2.1.1 Study site
      2.1.2 Photography
      2.1.3 Colour digital photo analysis
      2.1.4 Comparison with infrared images
      2.1.5 Biomass index
      2.1.6 Data analysis
   Results
      2.1.7 Overall trends
      2.1.8 Comparison of GEI to NDVI and biomass measurements
   Discussion
      2.1.9 Evaluation of GEI and the advantages of digital photography
   Conclusions
3 Landscape Patterns
   Introduction
   Methods
      3.1.1 Study site
      3.1.2 Photography
      3.1.3 Biomass data
      3.1.4 Colour digital photo analysis
      3.1.5 Peak flower counting program
      3.1.6 Statistical analysis
   Results
      3.1.7 Timing of phenology
      3.1.8 Quantifying greenness
      3.1.9 Modeling GEI
      3.1.10 Peak flower bloom detection
      3.1.11 Descriptive statistics of GEI and environmental data
   Discussion
      3.1.12 Evaluation of photographs in detecting phenological changes
      3.1.13 Evaluation of GEI in detecting vegetative differences
   Conclusions
4 Summary and Synthesis
   Introduction
   Summary of results
   Limitations
   Future research
   Conclusions
References
Appendix
   A.1 RGB data extraction program
   A.2 Flower counting algorithm

List of Tables

Table 2.1 Four moisture-defined vegetation community characteristics.
Table 2.2 Number of flowers in each site representing an average of the number of flowers in each plot. Counts were recorded on DOY 188 in all sites. Flower counts are the sum of the following species in a mature, senescing, or dispersal stage: D. integrifolia, S. arctica, P. radicatum, E. triste, E. schuezerii. Species were chosen for their dominance in the canopy and the showiness of their blooms.
Table 2.3 Results of linear mixed models for all sites and the significance of the warming treatment.
Table 2.4 Simple linear regression results between sites and TLH biomass data for both GEI and NDVI.
Table 2.5 Multiple regression results identifying functional groups that drive NDVI and GEI values. The explanatory variables are the number of live hits of each functional group.
Table 3.1 Four moisture-defined vegetation community characteristics.
Table 3.2 Model comparison of AIC, BIC, log likelihood and warming treatment significance at the plot and species scale. Simple models include treatment as a fixed effect and site as a random effect. Soil moisture and mature leaf models include those variables as random effects because they improve upon the simple model.

List of Figures

Figure 2.1 Vegetation communities photographed in 2010 and 2012. White frame represents a 1 m x 1 m quadrant.
Figure 2.2 Temporal patterns of GEI across the 2012 growing season in the a) Cassiope site, b) Dryas site, c) Meadow site, and d) Willow site. Colours correspond to the warming treatment (red) and control (blue) plots. Vertical lines represent an average date of all plots in each site of major phenological events for all major species followed; mature leaf: first day mature leaf, flower: first day mature flower, senescence: first day leaf senescence.
Figure 2.3 Simple linear regression by site; Dryas site (R2 = 0.12, F54 = 2.62, p = 0.12), Meadow site (R2 = 0.16, F54 = 2.48, p = 0.14), Willow site (R2 = 0.28, F54 = 6.90, p = 0.01). Each point represents one photograph from each plot in each of the three sites.
Figure 2.4 Simple linear regression of GEI and NDVI with TLH biomass by site. Each point represents the TLH for each plot in each site. See Table 3 for regression analysis.
Figure 2.5 Simple linear regressions between average canopy height and each method (NDVI: R2 = 0.21, F54 = 13.38, p < 0.001; GEI: R2 = 0.36, F54 = 30.38, p < 0.001). Each point represents an average of 25 canopy height measurements at each plot in all three sites.
Figure 3.1 Examples of seasonal patterns of green-up from the photographs in each of the four sites.
Figure 3.2 Seasonal patterns of GEI in all sites. Data are fit with a loess curve. Insets represent density plots of GEI values. Vertical lines represent important phenological observations collected in the field.
Figure 3.3 Seasonal patterns of GEI at the plot scale (A: Cassiope site; B: Dryas site; C: Meadow site; D: Willow site). Data are fit with a loess curve. Insets represent density plots of GEI values. Vertical lines represent important phenological observations collected in the field.
Figure 3.4 Seasonal patterns of GEI at the species scale (A: S. arctica; B: D. integrifolia). Data are fit with a loess curve. Vertical lines represent important phenological observations collected in the field.
Figure 3.5 Seasonal patterns of GEI of S. arctica (A) and D. integrifolia (B) in the three study sites. Data are fit with a loess curve.
Figure 3.6 Density plot of GEI values by day at the plot scale in all sites.
Figure 3.7 Density plots of GEI values by day at the plot scale (A: All sites; B: Dryas site; C: Meadow site; D: Willow site).
Figure 3.8 Density plots of GEI values by day (A: S. arctica; B: D. integrifolia).
Figure 3.9 Linear regression between manual and program flower counts in all sites (R2 = 0.99, p < 0.0001).
Figure 3.10 Linear regression between manual and program flower counts (A: Cassiope site, R2 = 0.98, p < 0.0001; B: Dryas site, R2 = 0.91, p < 0.0001; C: Meadow site, R2 = 0.99, p < 0.0001; D: Willow site, R2 = 0.94, p < 0.0001).
Figure 3.11 Boxplot of GEI values by site. Error bars represent the highest/lowest value within 1.5 of the interquartile range.
Figure 3.12 Boxplot of GEI values by species. Error bars represent the highest/lowest value within 1.5 of the interquartile range.
Figure 3.13 Boxplot of A: Total live hit biomass; B: Average canopy height; C: Soil moisture; D: First day mature leaf. Error bars represent the highest/lowest value within 1.5 of the interquartile range.

Abbreviations

AF Alexandra Fiord Lowland
ANOVA Analysis of Variance, a set of statistical techniques to identify sources of variability between groups
DOY Day of Year
ER Ecosystem Respiration
FDML First Day Mature Leaf
FDMF First Day Mature Flower
FDLS First Day Leaf Senescence
FOV Field of View
GEI Greenness Excess Index
GPP Gross Primary Productivity
GEP Gross Ecosystem Productivity
IR Infrared
ITEX International Tundra Experiment
LAI Leaf Area Index
NDVI Normalized Difference Vegetation Index
OTC Open Top Chambers
RGB Red, Green, Blue
TLH Total Live Hit biomass

Acknowledgements

Over the past two years I have met many people to whom I owe gratitude, not only for the planning and completion of this thesis, but also for knowledge gained and my development as a scientist. First I would like to thank Greg Henry for the opportunity to work in the Tundra Ecology lab and for providing me a once-in-a-lifetime experience at Alexandra Fiord. His knowledge and support, both financial and academic, have fueled my desire to continue to pursue academia and my love of the Arctic, for which I am sincerely grateful. Thanks as well to my graduate committee, Roy Turkington and Gary Bradfield, for comments and support through the writing process. I would like to extend a very special thank-you to Wiebe Nijland, whose collaboration made the processing and analysis of my photographs possible. His patience, teaching and guidance were integral to the creation of this thesis. I am indebted to my lab mates Anne Bjorkman, Sam Robinson, and Alison Cassidy for their support academically, in the field and in everyday life. My summer field season and many hours in the office were enhanced greatly by their presence and guidance.
Many thanks to our wonderful field assistants Darcy McNicholl, Doug Curley, and Chris Greyson-Gaito for their hard work in the field. I would like to thank the organizations that made this work possible: the RCMP and PCSP for logistical support, the Qikiqtani Inuit who allowed us to use their land, and NSERC, ArcticNet, and NSTP for funding. My journey to pursue an MSc would also not have been possible without the mentorship and support of Neal Scott at Queen's University, who introduced me to working in the Arctic. The skills and knowledge I learned from him made the transition to graduate school much easier. To the UBC Geography department and my colleagues, thank you for fostering such a unique and supportive environment. A special thanks to my wonderful cohort and friends Aaron Tamminga, Dan McParland, Claudia von Flotow, Eugenie Paul-Limoges, Alison Cassidy, Sam Robinson, Lisa Henault, and Derek vanderKamp.

Dedication

To my loving parents Jim and Gail, this thesis is a product of you fostering a desire to learn, explore, and succeed. To my siblings Laura, Jennifer, and David, you inspire me with your perseverance and love.

1 Introduction

Overview

Average global temperatures have risen at an unprecedented rate over the last century and are expected to continue to rise throughout the 21st century (IPCC 2007). Arctic ecosystems are experiencing some of the earliest and most extreme changes in climate, with temperature increases of twice the hemispheric average (IPCC 2007; Kaufman et al. 2009). These climatic shifts are cascading through terrestrial systems, altering the structure and function of tundra ecosystems (Post et al. 2009). Large-scale shifts in species distribution, cover, and community composition, as well as small-scale shifts in individual plant phenology and reproduction, have already been observed from satellite imagery and field studies (Walker et al. 2006; Bhatt et al. 2010; Elmendorf et al. 2012b).
However, these vegetation changes are complex, with strong site- and species-specific responses through space and time, highlighting the heterogeneous nature of these systems (Walker et al. 2006; Elmendorf et al. 2012a). Monitoring ecological change in the Arctic is important due to the implications of climate change for biodiversity, nutrient and carbon cycling, atmospheric warming, and associated feedbacks. Long-term ecological records in the Arctic are limited, but programs like the International Tundra Experiment (ITEX) have resulted in 20-year vegetation records in some locations. The reliability of long-term datasets is difficult to maintain even with set protocols, and traditional field-based phenology observations are time consuming and expensive in remote areas. However, phenology is recognized as one of the most responsive vegetation traits to climate change in nature (IPCC 2007). Across the Arctic, phenological shifts such as earlier leaf emergence and flowering have been observed as a result of earlier snowmelt and warming temperatures (Myneni et al. 1997; Arft et al. 1999; Wolkovich et al. 2012; Oberbauer et al. 2013). Increasing the volume, accuracy and consistency of phenology data and associated vegetation characteristics will aid in teasing out the complex response of Arctic vegetation to climate change. Digital photography could be a tool to provide consistent and inexpensive phenology and
Automation of vegetation monitoring at the community scale through the implementation of repeat digital photography could achieve a greater volume and consistency of data by reducing human error associated with field-based measures. In this thesis, I examine the utility of colour and infrared photography in monitoring several key characteristics of tundra vegetation in four vegetation communities arrayed along a natural moisture gradient. I explore the use of colour photography in monitoring biomass, productivity and phenology at both the plot and species scale. I also discuss how the results and methodology have potential for implementation across the Arctic. Photography is a powerful visual tool for monitoring and communicating ecological change. Chapter 1 reviews the relevant topics in the literature, describes the study site, and outlines the objectives for the thesis. Chapter 2 compares digital photography and spectral photography in measuring key vegetation characteristics. Chapter 3 examines seasonal productivity signals derived from digital photographs at the plot and species scale. Chapter 4 summarizes and concludes the major findings. Terrestrial Arctic climate change The Arctic is an excellent indicator of climate change as it is dominated by the modern cryosphere (i.e. permafrost, areas of snow, freshwater ice, sea ice, glaciers, ice caps, and ice sheets), the stability of which is dependent on a consistently cold (T < 0°C) thermal regime. Changes to the Arctic tundra biome are important as it plays a critical role in both the global carbon budget and the global energy balance (Bliss 1992; Chapin et al. 2005). Despite having low ecosystem productivity and species diversity, the tundra sequesters a significant amount of atmospheric carbon dioxide (CO2) into organic soils due to low rates of decomposition. 
Furthermore, the predominantly short, patchy        3   vegetation and prolonged snow cover creates a relatively high surface albedo cooling the atmosphere (Callaghan et al. 1995; Chapin et al. 2005). Current warming trends spanning atmospheric, terrestrial, and marine systems are affecting the crucial role they play in regulating the global climate system (Serreze et al. 2000; Chapin et al. 2005). The intimately coupled nature of Arctic cryosphere stability and atmospheric temperatures are initiating cascading changes in Arctic terrestrial ecosystems. This in turn is initiating climate feedbacks with snow, sea ice, permafrost, and vegetation cover, which can both amplify and dampen climate warming through a variety of physical processes. (Hinzman et al. 2005; McGuire et al. 2009; Bhatt et al. 2010). Increases in soil temperature, active layer depth, nutrient cycling, and precipitation and changes in soil moisture are all expected with increasing temperatures (Kattsov and Walsh 2000; Serreze et al. 2007). The direction and magnitude of feedbacks from vegetation change (Cornelissen et al. 2007) are not fully understood. From compositional shifts such as shrub expansion decreasing albedo but increasing carbon storage, to changing rates of litter accumulation and decomposition, the responses of vegetation and associated feedbacks to climate change cannot be generalized easily (Sturm et al. 2001, Chapin et al. 2005, Cornelissen et al. 2007). Measuring biome scale vegetation changes in recent decades has been accomplished with a combination of satellite observations, field measurements and modeling. Satellite imagery up until the 1990s, showed a relatively uniform greening across the Arctic and northern boreal forest biome (Myneni et al. 1997; Zhou et al. 2001; Lucht et al. 2002). Since 1990, greening has become more apparent in Arctic tundra supporting the projected amplification of climate change at high latitudes (Myneni et al. 1997; Bhatt et al. 2010). 
Increases in peak productivity and growing season length derived from satellites over the last 30 years have been validated with field studies (ACIA 2004; Walker et al. 2006; Bhatt et al. 2010; Loranty 2011). These productivity changes have been accompanied by large-scale vegetation changes, including woody shrub expansion in areas of the low Arctic (Myneni et al. 1997; Sturm et al. 2001; Verbyla 2008), and changes in composition and density of herbaceous species across the Arctic (Epstein et al. 2004; Hudson et al. 2011; Elmendorf et al. 2012b). However, changes in community composition, aboveground biomass and the timing of phenology vary significantly across different vegetation communities (Elmendorf et al. 2012a, b). Even the same species in different communities can respond differently to changes in temperature, moisture and nutrients. Observed short-term changes in communities may not reflect long-term trends due to evolutionary history and unique factors that limit growth and dispersal of certain species (Chapin et al. 1996).

Arctic plants and plant communities

The Arctic tundra biome is characterized as the area north of the boreal/taiga tree line and covers more than 7 million km2 (Bliss 1992). A further subdivision between low and high Arctic has been made by Bliss (1992) based on ecological characteristics dictated by latitude and climate. The low Arctic is dominated by "tundra", a generic definition of landscapes with 80-100% vegetation cover. The high Arctic is dominated by "polar desert" and "polar semi-desert" with only 10-30% vegetation coverage. Heterogeneity defines Arctic landscapes, creating a mosaic of vegetation communities driven by variable local climates, hydrology, topography and geology (Bliss 1992). Tundra vegetation communities have been grouped into five broad subzones according to the Circumpolar Arctic Vegetation Map (CAVM; Walker et al.
2005) that span the low and high Arctic: (A) cushion forb, (B) prostrate dwarf shrub, (C) hemi-prostrate dwarf shrub, (D) erect dwarf shrub, and (E) low shrub. Within these broad classifications there are approximately 900 vascular species, only 0.4% of the world’s total (Billings 1997; Bliss 1979). Species richness and diversity are greatest in the low Arctic and decrease along a latitudinal gradient from low to high Arctic (Matveyeva and Chernov 2000), with concurrent decreases in zenith angle, photoperiod, and mean annual temperature. Vegetation also shifts from woody to herbaceous along this gradient. Despite low richness and diversity at the biome scale, at the community scale Arctic tundra boasts a diversity of species comparable to that of temperate grassland and coniferous biomes (Bliss et al. 1981). Morphologically, tundra plants are generally low-biomass species with up to 98% allocation to belowground structures, an adaptation to the harsh climate, limited nutrients and light, and at times extensive herbivory (Callaghan et al. 1999). The low stature of tundra vegetation allows plants to take advantage of the surface boundary layer, which can be between 3 and 8 °C warmer than the surrounding air (Molgaard 1982). This characteristic also minimizes desiccation from wind (Oberbauer and Dawson 1992) and maximizes insulation by snow (Sturm et al. 2005). Many species in the Arctic are adapted to low soil moisture due to limited precipitation. Adaptations include low transpiration rates, high concentrations of carbohydrates, and adjustable water potential for drought resistance (Billings and Mooney 1968). Arctic plants are capable of photosynthesis at low light levels, an important adaptation given the limited growing season length (Chapin and Shaver 1985). Plant growth and reproduction occur quickly after environments become snow free.
This is made possible by the stored carbohydrates in the roots and rhizomes of perennial Arctic plants (Billings and Mooney 1968). Additionally, flower buds and shoots formed in the previous fall can overwinter, giving the plants a head start once snow melts. Tundra species are adapted to the cold, harsh climate of the Arctic: they are slow growing, long lived, and have low fecundity, which may limit their ability to adapt to changes in climate (Callaghan et al. 1995).

International Tundra Experiment

The International Tundra Experiment (ITEX) was established in 1990 as a collaborative network of over 20 sites in both Arctic and alpine ecosystems to study the responses of a diverse selection of tundra ecosystems to high-latitude warming. The main objective was to monitor changes in growth, phenology, and reproduction of circumpolar vascular plant species under environmental manipulations (Henry and Molau 1997). These manipulations included passive warming through the use of open-top chambers (OTCs), extension and reduction of growing season length through snow removal and addition, and increased soil nutrient availability through nutrient addition. The following study examines the responses of tundra plants and plant communities to OTCs, which have been shown to increase near-surface air temperatures by 1 – 3°C, simulating predicted climate change (Henry and Molau 1997; Marion et al. 1997). ITEX has standard protocols for study design, data collection, and measurement techniques, allowing for meta-analyses of these data (Arft et al. 1999; Walker et al. 2006; Elmendorf et al. 2012a, b).

Study site – Alexandra Fiord

All research was conducted at the Alexandra Fiord Twin Glacier lowland on the east-central coast of Ellesmere Island, Nunavut (78°53’N, 75°55’W). The 8 km2 lowland is considerably more vegetated than the surrounding polar desert and semi-desert ecosystems to the east and west.
To the north are the waters of Alexandra Fiord and to the south are the two lobes of the Twin Glacier, which drains the Prince of Wales ice cap. The lowland has two rivers and several creeks and streams that run south to north with the gently north-sloping topography. This periglacial outwash plain is characterized by glacial and permafrost features such as granite outcrops, glacial erratics, frost boils, and sorted polygons (Freedman et al. 1994). Soils are characteristic of recently deglaciated terrain, young and poorly developed; however, they are relatively high in organic matter compared to the surrounding terrain (Muc et al. 1994). Approximately 100 – 200 mm of precipitation falls annually, with 10 – 50 mm of that falling during the growing season (Labine 1994). Average growing season temperatures are approximately 5 – 8°C (Labine 1994; G. Henry, unpublished data). The soil moisture conditions of the lowland strongly affect plant community structure (Muc et al. 1989), and plant communities vary significantly across small spatial scales. A total of 96 vascular plant species have been recorded across the lowland, dominated by the dwarf shrubs Salix arctica, Dryas integrifolia, and Cassiope tetragona, graminoids such as Arctagrostis latifolia, Luzula spp., and Carex spp., and forbs such as Saxifraga oppositifolia, Draba spp., and Papaver radicatum (Ball and Hill 1994). Soil moisture is dictated by snowmelt and localized topography. Drainage is constrained vertically by the continuous permafrost underlying the site and horizontally by topography (Freedman et al. 1994). This study was conducted in four distinct vegetation communities arrayed along the soil moisture gradient, as defined by the Circumpolar Arctic Vegetation Map (CAVM; Walker et al. 2005). The driest site was a xeric-mesic prostrate dwarf-shrub herb tundra dominated by Salix arctica (Willow site), with Dryas integrifolia, Saxifraga oppositifolia, and graminoids such as Poa arctica and Luzula confusa.
This site had approximately 71 g m-2 of standing crop and sandy soils, and experiences the earliest snowmelt of the four study sites (Muc et al. 1994). Two sites were mesic to hydric-mesic prostrate dwarf-shrub herb tundra, dominated by Cassiope tetragona (Cassiope site) and Dryas integrifolia (Dryas site) respectively; each had an estimated aboveground biomass of 190 g m-2 and included important contributions from the graminoids Arctagrostis latifolia and C. misandra. The wettest site was a hydric sedge-moss dwarf-shrub wetland (Meadow site) with 250 – 370 g m-2 of standing crop (Henry et al. 1990). Hill and Henry (2010) noted increases of 145 – 515% in standing crop in the Meadow site from 1980 to 2005. There is surface water flow in the Meadow site through much of the growing season. Sedges (Carex aquatilis stans, C. membranacea, and Eriophorum angustifolium triste) dominate the wet hollows, and dwarf shrubs (S. arctica, D. integrifolia) the drier hummocks (Henry 1998). The growing season of all four communities lasts from early June to mid August. Within each of the four vegetation communities, ten randomly selected 1 m2 plots have been passively warmed using open-top chambers (OTCs), with ten corresponding 1 m2 control plots. This experiment was established in 1992 at Alexandra Fiord, the first ITEX site, resulting in 20 years of passive warming in six vegetation communities.

Objectives

The objective of this thesis was to examine the utility of colour digital photography to detect seasonal patterns of greenness at multiple spatial scales, across different vegetation communities, inside and outside of passive warming chambers in the Canadian high Arctic. This study aims to demonstrate the usefulness of data derived from colour photographs as a proxy for measuring phenology, productivity, and biomass.
Our first objective was to compare greenness data derived from the RGB channels of colour digital photos and NDVI derived from multispectral photos in terms of their correlations with biomass measures. Our second objective was to use RGB data to examine plot-scale differences in phenology between communities along a moisture gradient and responses to long-term experimental warming. A third objective was to use RGB data to examine differences in phenology and responses to long-term warming at the species scale. A final objective was to compare flower counts from digital photographs, using an automated flower-counting algorithm, to manual flower counts in the vegetation communities.

2 Method Validation

Introduction

Seasonal patterns of productivity are an important indicator of ecological and global change. Satellite imagery has shown a greening of Arctic tundra, suggesting an increase in productivity as a result of the changing climate (Myneni et al. 1997; Zhou et al. 2001; Lucht et al. 2002). Warmer spring temperatures have the potential to lengthen the growing season through earlier snowmelt, as well as to increase active layer depth and related microbial activity and nutrient cycling (ACIA 2005; IPCC 2007). These changes are altering, and will continue to alter, primary productivity and potentially the timing of plant phenology in these temperature-sensitive ecosystems (Marchand et al. 2004; Walker et al. 2006). There is evidence from long-term monitoring of satellite-derived vegetation indices, such as the Normalized Difference Vegetation Index (NDVI), that at the biome scale tundra ecosystems are greening in response to climate change (Myneni et al. 1997; Goetz et al. 2005; Bhatt et al. 2010). However, validation of this trend at the plot scale in terms of photosynthetic, phenological, or biomass change is lacking. Large-scale NDVI does not distinguish between changes in quantitative greenness (i.e. growth and infilling) and changes in qualitative greenness (i.e.
vigor) (Marchand et al. 2004). These distinctions have important implications for determining the true mechanisms behind the apparent vegetation change in Arctic tundra. Field-based monitoring of vegetation can be time consuming and, in the High Arctic, expensive. Recording enough measurements to capture the variability in productivity (i.e. CO2 exchange) at the plot scale is made difficult by instrumentation and sampling constraints. Accurate phenological records and biomass measurements take careful and patient work, and destructive biomass harvesting is detrimental to long-term monitoring. Given these issues, a simple, all-in-one methodology to capture plot-scale heterogeneity in all aspects of vegetation change in the high Arctic would be highly advantageous.

Current methodology for assessing the relationship between climate and plant growth varies greatly depending on scale, application of the information, and the ecosystem in question. Plot-based NDVI in Arctic ecosystems has shown strong correlations with Gross Ecosystem Productivity (GEP), Ecosystem Respiration (ER), and aboveground biomass (Boelman et al. 2003, 2005, 2011). Conducting NDVI measurements at the plot scale involves costly and sensitive instrumentation such as spectrometers or multispectral digital cameras. While these instruments provide valuable information, dealing with them in remote and meteorologically variable field locations can be cumbersome. For these reasons, digital photography is emerging as a simple methodology for monitoring productivity variables, including seasonal greenness (i.e. GPP), phenology, and changes in biomass.
The objective of this study was to examine the utility of colour digital photography, compared to infrared imagery, in detecting differences in seasonal patterns of greenness across different moisture-defined vegetation communities and inside and outside of passive warming chambers in the Canadian High Arctic. Digital photography has already been shown to be useful in estimating qualitative (Adamsen et al. 1999) and quantitative changes in biomass (Ewing and Horton 1999; Richardson et al. 2001), as well as for phenological observations (Crimmins and Crimmins 2008; Ide and Oguma 2010; Migliavacca et al. 2011). This study aims to demonstrate the usefulness of RGB data derived from digital photographs as a proxy for measuring productivity, phenology, and biomass. We present a direct comparison of a set of colour and infrared photographs from three sites, representing a natural moisture gradient, taken on one day at peak season in 2010. NDVI (Tucker 1979) and the Greenness Excess Index (GEI; Richardson et al. 2007) were calculated from the infrared and colour photos, respectively, and were compared with one another and with two non-destructive biomass measures. Those results were then used to infer seasonal patterns of vegetation change from a set of colour photographs from the 2012 growing season.

Methods

2.1.1 Study site

The study site is located on the east coast of Ellesmere Island, Nunavut, in the Alexandra Fiord (AF) coastal lowland (78°53’N, 75°55’W). The 8 km2, well-vegetated lowland is classified as a polar oasis due to favorable climatic conditions created by the surrounding topography (Freedman et al. 1994). Average growing season temperatures range between 3 and 8°C and average yearly precipitation is less than 50 mm (Labine 1994). The growing season lasts from early June to the middle of August. The moisture regime of the lowland is controlled by snowmelt and glacial melt water, and soil moisture is highly variable across the lowland.
Variability in soil moisture dictates the plant community types found (Muc et al. 1989, 1994). The study was conducted in four vegetation communities that varied in soil moisture (Table 2.1): a xeric-mesic prostrate dwarf-shrub herb tundra dominated by Salix arctica (Willow site), with sandy soils and the earliest snowmelt of the four study sites (Muc et al. 1989, 1994); a mesic prostrate dwarf-shrub herb tundra dominated by Dryas integrifolia (Dryas site), with important contributions from the evergreen dwarf shrub Cassiope tetragona and the graminoids Arctagrostis latifolia and C. misandra; a mesic prostrate/hemiprostrate dwarf-shrub herb tundra dominated by C. tetragona (Cassiope site), which also included contributions from the evergreen dwarf shrub D. integrifolia and the graminoids A. latifolia and C. misandra; and a hydric sedge-moss dwarf-shrub wetland (Meadow site), with surface water flow over much of the site throughout the growing season, dominated by sedges (Carex membranacea, C. aquatilis stans, and Eriophorum angustifolium triste) in the wet hollows and dwarf shrubs (S. arctica, D. integrifolia) on the drier hummocks (Henry et al. 1990). In 1992, warming experiments were established in each of the communities, consisting of 20 randomly selected 1 m2 plots per community; ten plots were warmed using open-top chambers (OTCs), with ten corresponding 1 m2 control plots. The OTCs passively warm the surface air temperature by 1 – 3°C, similar to increases projected by climate change models (Marion et al. 1997). Marion et al. (1997), Hollister and Webber (2000), and Bokhorst et al. (2013) provide details on the performance of OTCs. The study site is part of the International Tundra Experiment (ITEX), a collaborative network of over 20 Arctic and alpine sites established in 1990 to monitor the effects of experimental and ambient warming on tundra vegetation (Henry and Molau 1997).
Table 2.1 Characteristics of the four tundra communities at Alexandra Fiord

Site | CAVM* classification | Moisture | Aboveground live standing crop (g m-2)** | Major species
Willow | Prostrate dwarf-shrub herb tundra | Xeric | 71 | S. arctica, Luzula confusa, Poa arctica, Papaver radicatum, Oxyria digyna
Cassiope | Prostrate/hemiprostrate dwarf-shrub herb tundra | Mesic | 190 | C. tetragona, D. integrifolia, Papaver radicatum, Oxyria digyna
Dryas | Prostrate dwarf-shrub herb tundra | Mesic | 190 | D. integrifolia, C. tetragona, Papaver radicatum, Oxyria digyna
Meadow | Sedge, moss, dwarf-shrub wetland | Hydric | 132 | Eriophorum angustifolium, E. triste, C. stans, C. membranacea, D. integrifolia

* Circumpolar Arctic Vegetation Map (Walker et al. 2005). ** From Muc et al. (1994).

Figure 2.1 Vegetation communities photographed in 2010 and 2012. The white frame represents a 1 m x 1 m quadrat.

2.1.2 Photography

To examine the effectiveness of digital photography in detecting the influence of temperature and moisture on seasonal productivity over one growing season, plots were photographed with a digital camera (Nikon D40 SLR, Nikon Corporation, Japan) six times over the 2012 growing season, beginning on Day Of Year (DOY) 165 (June 15) and ending on DOY 215 (August 2). Photographs were taken 1 m off the ground at nadir from the same south-facing position on each date. Photos were saved in Nikon raw format at 3,008 x 2,000 pixels and were taken between 0900 h and 1200 h under the same cloudless conditions at each plot to minimize the influence of solar angle and irradiance. Soil moisture was measured at three locations in each plot, three times throughout the growing season. Phenology was monitored every three days, and observations of First Day of Mature Leaf (FDML), First Day of Mature Flower (FDMF), and First Day of Leaf Senescence (FDLS) were recorded.
2.1.3 Colour digital photo analysis

Colour photos from each plot were initially processed using Creative Suite 6 (Adobe Systems Incorporated, 2013) to stack and align the pixels of the six photos from each plot. This was done automatically, and manual adjustments were made where there were discrepancies. Photos were then cropped to a standard 1:1 ratio and exported as individual TIFF files. Some images were excluded if the alignment processing failed. After this pre-processing, pixel values of each channel (Red, Green, and Blue) were extracted from each photo using a function written with the rgdal package (0.8-4; Keitt et al. 2013) in R version 2.15.2 (R Development Core Team 2012). The photographs were transformed into a data frame, and pixel values from each channel were extracted and averaged by photo. To estimate the greenness of each photo and to normalize variations in irradiance among photos, the green ratio (rG) was also calculated and averaged by photo (Eq. (1)):

rG = G / (R + G + B) (1)

where G = green, R = red, and B = blue. Ratios were also calculated for the red (rR) and blue (rB) channels by rearranging Eq. (1). A final index, the Greenness Excess Index (GEI), was calculated using the red, green, and blue ratios (Eq. (2); Richardson et al. 2009):

GEI = 2rG – (rR + rB) (2)

where rG = green ratio, rR = red ratio, and rB = blue ratio. Although rG can be used to track phenological and productivity changes in vegetation, Eq. (2) provides a more sensitive indicator of changes in plant pigment (Ide and Oguma 2010). This is desirable in tundra ecosystems, as changes in vegetation activity (i.e. photosynthesis, flowering, and senescence) can be subtle.

2.1.4 Comparison with infrared images

In order to validate the GEI derived from digital photography, this methodology was compared to the more conventional productivity proxy of infrared imagery.
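The per-photo calculation in Eqs. (1) and (2) can be sketched as follows. This is a minimal Python/NumPy illustration, assuming an image already loaded as an H x W x 3 array; the original analysis used a custom R function with the rgdal package.

```python
import numpy as np

def greenness_indices(img):
    """Compute the mean green ratio and GEI for one RGB image.

    img: H x W x 3 array of R, G, B pixel values (any numeric type).
    Returns (rG, GEI) averaged over the whole photo.
    """
    img = img.astype(float)
    total = img.sum(axis=2)               # R + G + B per pixel
    total[total == 0] = np.nan            # avoid division by zero on black pixels
    rR = np.nanmean(img[..., 0] / total)  # red ratio
    rG = np.nanmean(img[..., 1] / total)  # green ratio, Eq. (1)
    rB = np.nanmean(img[..., 2] / total)  # blue ratio
    gei = 2 * rG - (rR + rB)              # Greenness Excess Index, Eq. (2)
    return rG, gei

# Toy check: a uniform pure-green image gives rG = 1 and GEI = 2
img = np.zeros((4, 4, 3))
img[..., 1] = 255.0
print(greenness_indices(img))
```

Averaging the ratios rather than the raw channel values mirrors the normalization for irradiance described above.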
A sample of colour and infrared (IR) images taken during the 2010 growing season from plots in three of the four sites photographed in 2012 (the Willow, Meadow, and Dryas sites) was compared. Photographs were taken one after the other with a digital camera (Panasonic DMC-LX3) and a portable multispectral vegetation camera (Tetracam ADC, Chatsworth, CA, USA) on one day mid-season in 2010. This resulted in a total of 60 photographs from each camera: 20 (10 warmed, 10 control) in each site. Colour photos were saved in JPEG format at 3776 x 2520 pixels and were taken at similar times and in similar climatic conditions at each plot to minimize the influence of solar angle and irradiance. Infrared images were saved in Tetracam raw format at 2048 x 1536 pixels. Pre-processing was again done using Creative Suite 6: for each photo, non-vegetated areas such as bare ground, rocks, and standing water were manually selected and removed, and the photos were then cropped to a standard 1:1 ratio. RGB data were extracted using the same method described above. Infrared images were processed using Tetracam’s PixelWrench 2.0 software (Tetracam, Chatsworth, CA, USA), which automatically calculates NDVI. As with the colour photos, bare ground, rocks, and standing water were removed from each image before the software calculated NDVI. Four images were removed from the analysis due to overexposure in the infrared image.

2.1.5 Biomass index

Aboveground biomass estimated from total live-hit (TLH) point-intercept data from 2010 was used to compare the GEI and NDVI values derived from the 2010 photographs to biomass. In 2010, the same 60 permanent 1 m2 plots that were photographed were sampled using a modified version of the ITEX point-intercept method (Molau and Mølgaard 1996).
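The NDVI computed automatically by the Tetracam software follows the standard formulation of Tucker (1979); a minimal sketch (Python/NumPy, assuming co-registered red and near-infrared reflectance arrays, not the PixelWrench implementation itself):

```python
import numpy as np

def ndvi(red, nir):
    """Normalized Difference Vegetation Index (Tucker 1979).

    red, nir: arrays of red and near-infrared reflectance.
    Dense green vegetation approaches 1; bare ground is near 0.
    """
    red = np.asarray(red, dtype=float)
    nir = np.asarray(nir, dtype=float)
    denom = nir + red
    denom[denom == 0] = np.nan  # mask pixels with no signal
    return (nir - red) / denom

# Vegetated pixels absorb red light but reflect strongly in the NIR,
# so the first pixel below scores much higher than the second
print(ndvi([0.05, 0.30], [0.45, 0.35]))
```

Masking non-vegetated pixels before averaging, as done in the pre-processing above, prevents bare ground and water from diluting the plot-level value.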
This non-destructive method uses a 1 m2 frame with 10 cm grid spacing to record vegetation type for all layers of the canopy at the 100 intersection points within each plot, and has shown strong correlations with harvest methods (Jonasson 1988; Shaver et al. 2001). In each plot, vegetation was recorded to the species level and data were later combined by functional group. Average canopy height was also recorded from 25 points in the 1 m2 frame per plot.

2.1.6 Data analysis

Linear mixed models were constructed in R version 2.15.2 (R Development Core Team 2012), using the nlme package (Pinheiro and Bates 2000), to model the relationship between 2012 GEI and treatment. Soil moisture and average first day of mature leaf were used as covariates if they improved the model significantly. To account for the repeated measures in the data, day was used as a random effect. Models were constructed at the site level. Important phenological dates were visually compared to the temporal trends of GEI to infer the ability of GEI to detect phenological changes. Simple linear regression was used to examine the relationship between GEI and NDVI, and between the two indices and the TLH biomass data. To explore these relationships further, multiple linear regression was used to identify functional groups influencing increases in NDVI and GEI. An alpha level of 0.05 was used for all statistical tests.

Results

2.1.7 Overall trends

The time series of GEI derived from the 10 plots for each treatment in each site showed varying temporal patterns in greenness and treatment effects, reflecting the heterogeneous responses of the moisture-defined vegetation communities at AF (Figure 2.2). The temporal variation captured by the RGB data follows a logical seasonal green-up pattern and corresponds well to the mean phenological dates for the major species recorded for each site (Figure 2.2).
This is particularly true in the Meadow site, where an obvious mid-season plateau corresponds with the FDMF of the major species, E. triste and D. integrifolia. Both species produce showy white blooms, likely causing the observed plateau. While less obvious, this trend was also seen in the Willow site, where the dominant species S. arctica produces an extensive greyish/red bloom. This trend was not seen in the Dryas or Cassiope sites, likely due to the relatively fewer flowers in those sites (Table 2.2). The Dryas, Willow, and Meadow sites all showed agreement with FDLS, with downward trends in GEI near the end of the growing season. FDML had varying degrees of agreement in each site.

Table 2.2 Total flowers in all plots at each site recorded by manual counting. Counts were recorded on DOY 188 in all sites. Flower counts are the sum of the following species in a mature, senescing, or dispersal stage: D. integrifolia, S. arctica, P. radicatum, E. triste, E. scheuchzeri. Species were chosen for their dominance in the canopy and the showiness of their blooms.

Number of flowers | Willow | Cassiope | Dryas | Meadow
Warmed | 418 | 178 | 196 | 368
Control | 392 | 93 | 159 | 241

The Meadow site had the greatest GEI values, followed by the Willow site and the Dryas site, with the lowest GEI in the Cassiope site. These results are a logical reflection of the vegetation communities present in each of the four sites (Table 2.1). All sites showed greater GEI values in the warmed plots compared to the control plots, though the magnitude and significance of this difference varied by site (Table 2.3). Warming did not appear to accelerate greening across the growing season: warmed plots started greener and stayed greener, likely a result of increased biomass caused by 20 years of passive warming by OTCs.

Figure 2.2 Temporal patterns of GEI across the 2012 growing season in the A) Willow site, B) Cassiope site, C) Dryas site, and D) Meadow site.
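The automated counts compared against the manual counts in Table 2.2 come from an algorithm not detailed in this chapter; a minimal sketch of one plausible approach (hypothetical brightness threshold; pure Python/NumPy) is to flag bright "showy bloom" pixels and count connected clusters:

```python
import numpy as np
from collections import deque

def count_flowers(img, thresh=200):
    """Count connected clusters of bright pixels as candidate flowers.

    img: H x W x 3 RGB array. thresh is a hypothetical cutoff chosen so
    that showy white blooms exceed it in all three channels.
    """
    bright = img.min(axis=2) >= thresh   # "white" if all channels are high
    seen = np.zeros_like(bright, dtype=bool)
    h, w = bright.shape
    count = 0
    for i in range(h):
        for j in range(w):
            if bright[i, j] and not seen[i, j]:
                count += 1                     # found a new flower cluster
                q = deque([(i, j)])
                seen[i, j] = True
                while q:                       # flood-fill the whole cluster
                    y, x = q.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and bright[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            q.append((ny, nx))
    return count

# Synthetic plot image: dark background with two white "blooms"
img = np.zeros((20, 20, 3), dtype=np.uint8)
img[2:4, 2:4] = 255
img[10:13, 14:16] = 255
print(count_flowers(img))  # 2
```

A real implementation would also need colour-specific thresholds (e.g. for the greyish/red S. arctica blooms) and a minimum cluster size to reject noise.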
Colours correspond to the warming treatment (red) and control (blue) plots. Vertical lines represent the average date across all plots in each site of major phenological events for all major species followed; L: first day of mature leaf, F: first day of mature flower, S: first day of leaf senescence.

Results from the linear mixed models showed that when all four sites were grouped together, GEI responded significantly to treatment, and the model was significantly improved by the addition of soil moisture (Table 2.3). When examined by site, a treatment signal was seen in the Willow and Dryas sites (Figure 2.2A). The models for the Willow, Cassiope, and Meadow sites were not significantly improved by the addition of average first day of mature leaf or soil moisture. The addition of first day of mature leaf significantly improved the model for the mesic Dryas site. Variables and interactions that did not significantly improve the models were removed based on AIC values.

Table 2.3 Results of linear mixed models for all sites and the significance of the warming treatment.

Site | Model | AIC | LogLik | Treatment p-value
All | Simple | -1877 | 942.3 |
All | Soil moisture | -1873 | 943.7 | < .0001
Willow | Simple | -624.8 | 316.4 | < .0001
Cassiope | Simple | -855.8 | 431.9 | 0.258
Dryas | Simple | -838.7 | 423.4 |
Dryas | Mature leaf | -822.5 | 416.3 | < .0001
Meadow | Simple | -661.4 | 334.7 | 0.0612

2.1.8 Comparison of GEI to NDVI and biomass measurements

Simple linear regression showed a moderate positive relationship between GEI and NDVI from the 2010 photographs (Figure 2.3). The relationship was moderate due to error caused by the varying performance of each method in each of the sites. The relationship between GEI and NDVI was strongest in the Willow site (R2 = 0.28, F54 = 6.90, p = 0.01), while both the Meadow site (R2 = 0.16, F54 = 2.48, p = 0.14) and the Dryas site (R2 = 0.12, F54 = 2.62, p = 0.12) showed weak correlations between the two methods.
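The index-versus-index comparisons reported here are ordinary least-squares fits; a minimal sketch (Python/NumPy, with synthetic values rather than the study's measurements) of how an R2 like those above is computed:

```python
import numpy as np

def r_squared(x, y):
    """Coefficient of determination for a simple linear fit of y on x."""
    slope, intercept = np.polyfit(x, y, 1)    # least-squares regression line
    pred = slope * x + intercept
    ss_res = np.sum((y - pred) ** 2)          # residual sum of squares
    ss_tot = np.sum((y - np.mean(y)) ** 2)    # total sum of squares
    return 1 - ss_res / ss_tot

# Synthetic NDVI/GEI pairs with some scatter (illustration only)
rng = np.random.default_rng(1)
ndvi_vals = np.linspace(0.2, 0.8, 20)
gei_vals = 0.15 * ndvi_vals + 0.02 + rng.normal(0, 0.01, 20)
print(round(r_squared(ndvi_vals, gei_vals), 2))
```

The F and p statistics reported alongside R2 in the text would come from the same fit via the standard F-test on the regression slope.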
Figure 2.3 Simple linear regression of GEI against NDVI by site: Dryas site (R2 = 0.12, F54 = 2.62, p = 0.12), Meadow site (R2 = 0.16, F54 = 2.48, p = 0.14), Willow site (R2 = 0.28, F54 = 6.90, p = 0.01). Each point represents one photograph from each plot in each of the three sites.

TLH biomass data showed a slightly stronger relationship to NDVI than to GEI when all sites were grouped together (Table 2.4). However, when sites were examined individually, results from the linear regression showed the two indices were highly comparable across sites, with the exception of the Meadow site (Figure 2.3). GEI showed slightly stronger correlations in both the Willow and Dryas sites, while NDVI had a notably stronger correlation in the Meadow site.

Table 2.4 Linear regression results between sites and the TLH biomass index for both GEI and NDVI.

Site | Index | R2 | Fdf | p-value
All | NDVI | 0.36 | 30.38 | <0.001
All | GEI | 0.21 | 13.83 | <0.001
Willow | NDVI | 0.26 | 6.13 | 0.024
Willow | GEI | 0.27 | 6.58 | 0.020
Dryas | NDVI | 0.12 | 2.55 | 0.127
Dryas | GEI | 0.15 | 3.31 | 0.086
Meadow | NDVI | 0.52 | 16.5 | 0.001
Meadow | GEI | 0.33 | 6.45 | 0.025

Figure 2.4 Simple linear regression of GEI and NDVI with TLH biomass by site. Each point represents the TLH for each plot in each site. See Table 2.4 for regression analysis.

Deciduous shrubs and graminoids were identified as important functional groups overall in both indices (Table 2.5). The Willow site had the same significant functional groups for both indices. The Dryas and Meadow sites, however, had varying results between the indices. No significant functional groups were identified in the regression analysis with NDVI in either the Dryas or Meadow site. GEI, however, showed a significant relationship with graminoids in the Dryas site (p = 0.014), and with evergreen shrubs (p = 0.002) and graminoids (p = 0.002) in the Meadow site.

Table 2.5 Multiple regression results identifying functional groups that drive NDVI and GEI values in plots at three high Arctic tundra plant communities.
The explanatory variables are the number of live hits of each functional group.

Site | Index | R2 (full model) | Fdf | p-value | Significant functional groups
All | NDVI | 0.43 | 51 | <0.001 | Deciduous shrubs, graminoids
All | GEI | 0.50 | 53 | <0.001 | Deciduous shrubs, graminoids
Willow | NDVI | 0.49 | 14 | 0.01 | Deciduous shrubs
Willow | GEI | 0.42 | 15 | 0.01 | Deciduous shrubs
Dryas | NDVI | 0.05 | 15 | 0.32 |
Dryas | GEI | 0.32 | 15 | 0.04 | Graminoids
Meadow | NDVI | 0.05 | 12 | 0.35 |
Meadow | GEI | 0.68 | 13 | <0.001 | Evergreen shrubs, graminoids

A final comparison of the indices was made with measures of average canopy height. Both indices showed a moderate positive relationship with average canopy height; however, GEI showed a stronger correlation than NDVI (Figure 2.5).

Figure 2.5 Linear regressions between average canopy height and each method (NDVI: R2 = 0.21, F54 = 13.38, p < 0.001; GEI: R2 = 0.36, F54 = 30.38, p < 0.001). Each point represents an average of 25 canopy height measurements at each plot in all three sites.

Discussion

2.1.9 Evaluation of GEI and the advantages of digital photography

Comparisons between the two indices and the biomass measures indicate that GEI, derived from the colour photographs, may be suitable as a method for monitoring multiple aspects of vegetation change in high Arctic tundra ecosystems, and perhaps in other low-stature vegetation types. Both GEI and NDVI showed very similar relationships to the biomass index data. GEI was able to identify significant functional groups in all sites, suggesting it is more sensitive to green vegetation in the plots than NDVI, as the functional groups identified encompass the major species in the sites. Differing sensitivities of colour and IR images to non-photosynthetic standing litter could explain this trend.
Numerous studies have shown that the presence of standing litter can lead to disproportionate changes in canopy reflectance in the IR band, particularly in sparsely vegetated ecosystems (Huete and Jackson 1987; van Leeuwen and Huete 1996; Laidler and Treitz 2003; Dawelbait and Morari 2008). Disproportionate reflectance of standing litter can mask the reflectance of green vegetation, potentially explaining the lack of significant functional groups for NDVI in the Dryas and Meadow sites, where standing litter was extensive. The strong relationships of GEI with TLH and canopy height, and their similarity to those with NDVI, validate the utility of colour images as a proxy for vegetation productivity. GEI has shown strong correlations (R2: 0.78 – 0.84) with GEP measured by eddy covariance methods in both grassland and temperate forest ecosystems (Ahrends et al. 2009; Migliavacca et al. 2011). Although carbon fluxes at the plot scale in tundra ecosystems tend to be small and variable (Oberbauer et al. 2007), these results highlight the potential for estimating productivity indirectly with this method. The temporal trend of GEI derived from the digital photographs in 2012 provides reliable information on vegetation status. Field-based phenological measures correspond logically with the seasonal patterns derived from the photos. Similar results, with greenness indices derived from colour photographs accurately identifying phenological events down to the species level, have been found in both grassland and forest ecosystems (Ide and Oguma 2010; Migliavacca et al. 2011). GEI was able to identify a warming treatment signal and show that warmed plots were significantly greener than control plots in all but the Cassiope site. This trend is reasonable given the greater aboveground biomass in the warmed plots at the AF lowland and across the tundra biome (Walker et al. 2006; Hudson and Henry 2009; Hill and Henry 2011; Hudson et al. 2011; Elmendorf et al. 2012a).
The lack of a treatment signal in the Cassiope site could be the result of the nature of the vegetation in that site (Hudson and Henry 2010). Cassiope tetragona, an evergreen dwarf shrub, is the dominant vascular plant species, and only the tips of its branches are green. The majority of the branches are covered with dead leaves that remain attached for many years (Rayback and Henry 2006), and there is only a very subtle colour change throughout the season. These subtle changes may be detectable with greater pixel density in the photographs. There is great potential for this method to be utilized extensively in vegetation monitoring at both the landscape and plot scale. Digital cameras are affordable, robust, and easy to use, making them very appealing field instruments. It takes less time and effort in the field to obtain the images than to record detailed observations. The photographs can be taken by one person, while the detailed observations usually require at least two people: one to observe and one to record the data. Using digital cameras to obtain vegetation data, such as GEI, is also more cost effective than using a standard field instrument for NDVI such as the ADC. The cost of implementing ten Nikon D40 (Nikon D40 SLR, Nikon Corporation, Japan; CDN $500) digital cameras in fixed locations to continually monitor vegetation over one growing season is roughly equivalent to the cost of one ADC (Tetracam ADC, Chatsworth, CA, USA; CDN $4800, tetracam.com). A Nikon D40 has twice the megapixels of an ADC (6.0 vs. 3.2), producing finer-resolution images and increasing the amount of RGB data in each photograph. More detailed photographs increase the accuracy of estimating biomass and phenological information. Digital cameras also reduce the amount of human error in the field through automatic focus and exposure adjustments. The ADC was far more difficult to focus and was sensitive to changes in solar illumination, leading to unusable images and a loss of data.
This case study highlights the utility of digital cameras in field-based monitoring of vegetation. It presents a novel opportunity for the implementation, across the tundra biome, of a simple methodology for tracking seasonal productivity, phenology and biomass.

Conclusions

Colour digital photography is able to detect differences in seasonal patterns of greenness across different moisture-defined vegetation communities inside and outside of plots passively warmed using OTCs. Seasonal patterns of GEI derived from 2012 data are logical and correspond well to field-based observations of phenology. Seasonal patterns from 2012 were validated through the similarities between the biomass index-GEI and biomass index-NDVI relationships from the 2010 data. The most important factor in the utility of colour digital photography for monitoring vegetation lies in the ease, robustness, and affordability of operating and implementing this method. We conclude that colour photographs accurately describe vegetation status in high Arctic tundra ecosystems both qualitatively (i.e. phenology) and quantitatively (i.e. biomass, productivity) in a simple and cost-effective way.

3 Landscape Patterns

Introduction

Plant phenology is controlled by climatic conditions and is an important indicator of climate change impacts in terrestrial ecosystems (Schwartz et al. 2006; Cleland et al. 2007; Badeck et al. 2008; Inouye 2008; Høye et al. 2013). Both observational and experimental warming studies have shown that vegetation phenology is sensitive to climate warming (Arft et al. 1999; Wolkovich et al. 2012; Oberbauer et al. 2013). Warming has long been predicted to be earliest and most intense at high latitudes (IPCC 2007), causing changes in vegetation abundance and composition (Elmendorf et al. 2012a), as well as affecting tundra plant phenology in many sites (Høye et al. 2013; Oberbauer et al. 2013).
Expanded monitoring of tundra plant phenology will provide important information about the impacts of current and future climate warming in the Arctic and globally. Over the last 20 years, earlier leaf-out (Myneni et al. 1997; Arft et al. 1999; Oberbauer et al. 2013) and delayed senescence (Marchand et al. 2004; Høye et al. 2013) have been observed in ambient and experimental warming studies across the Arctic. However, changes in the timing of phenology, community composition, and above ground biomass vary significantly across different vegetation communities (Elmendorf et al. 2012a; Oberbauer et al. 2013). Populations of the same species in different vegetation communities can respond differently to changes in temperature, moisture and nutrients (Jones et al. 1999; Kudo and Hirao 2006). At the same study site used in this study, Hudson et al. (2011) demonstrated that the responses of different vegetation communities to 16 years of experimental warming were highly variable: a hydric sedge meadow site showed no significant difference in plant size traits, while a mesic prostrate dwarf-shrub herb tundra site showed significant differences in all traits. These site- and species-specific responses to environmental change highlight the importance of long-term measurements of vegetation at the plot and species scale. Monitoring phenology in remote ecosystems is expensive and time consuming. Traditional field-based tracking of phenology involves patient, detailed observation of individual plants every few days. Reliable, long-term phenological datasets are difficult to maintain even with common protocols such as those used in the International Tundra Experiment (ITEX; Henry and Molau 1997). The IPCC (2007) has recognized phenology as one of the most responsive traits to climate change in nature.
Increasing the number of sites involved, and the accuracy and consistency of phenology data, is important to improve its use as an indicator of climate change and, hence, its inclusion in decision-making processes. Developing efficient and cost-effective methods of monitoring tundra vegetation is therefore advantageous. Repeat colour digital photography is emerging as a simple and effective tool for monitoring vegetation. This methodology has been employed successfully to obtain phenological dates, such as first flower, as well as quantitative data, such as shifts in vegetation greenness, in a variety of temperate and alpine ecosystems (Ide and Oguma 2013; Richardson et al. 2013). Repeat photography is simple to conduct and allows users to easily visualize changes in an ecosystem. This methodology presents a unique opportunity to implement a simple, cost-effective vegetation-monitoring network in the Arctic to accompany existing long-term vegetation and environmental data. The objective of this study was to determine the effectiveness of digital photography in monitoring tundra vegetation at multiple scales. Using colour digital photographs and a vegetation greenness excess index (GEI) derived from the photos, we (1) assessed green-up and senescence signals, (2) compared automated and manual flower counts, and (3) determined differences in above ground biomass and vegetation vigor in response to experimental warming and site moisture status.

Methods

3.1.1 Study site

The study was conducted in the Alexandra Fiord (AF) coastal lowland (78°53'N, 75°55'W), on the eastern coast of Ellesmere Island, Nunavut. Due to the surrounding topography and the resulting favorable climatic conditions, the 8 km2 lowland is considered a polar oasis (Freedman et al. 1994). It is well vegetated compared to surrounding areas, and average yearly precipitation is less than 50 mm. Average growing season temperatures range between 3 and 8°C, and the growing season lasts from early June to mid-August.
The moisture regime is controlled by snowmelt and glacial melt water, and soil moisture is highly variable across the lowland. Plant community types found in the lowland are dictated by the varying soil moisture regimes (Muc et al. 1989). More detailed information on the environmental setting of Alexandra Fiord is found in Svoboda and Freedman (1994). The study was conducted in four vegetation communities that varied in soil moisture (Table 3.1): a xeric-mesic prostrate dwarf-shrub herb tundra dominated by Salix arctica (Willow site), with sandy soils and the earliest snowmelt of the four study sites (Muc et al. 1994); a mesic prostrate dwarf-shrub herb tundra dominated by Dryas integrifolia (Dryas site), with important contributions from the evergreen dwarf shrub Cassiope tetragona and the graminoids Arctagrostis latifolia and Carex misandra; a mesic prostrate/hemiprostrate dwarf-shrub herb tundra dominated by C. tetragona (Cassiope site), which also included contributions from the evergreen dwarf shrub D. integrifolia and the graminoids A. latifolia and C. misandra; and a hydric sedge, moss, dwarf-shrub wetland (Meadow site), with surface water flow in much of the site throughout the growing season, dominated by sedges (Carex membranacea, C. aquatilis stans and Eriophorum angustifolium triste) in the wet hollows and dwarf shrubs (S. arctica, D. integrifolia) on the drier hummocks (Henry et al. 1990). In 1992, warming experiments were established in each of the communities, consisting of 20 randomly selected 1 m2 plots; half the plots were warmed using open-top chambers (OTCs), with the ten corresponding 1 m2 plots left as controls. The OTCs passively warm the surface air temperature by 1–3°C, similar to increases projected in climate change models (Marion et al. 1997). Marion et al. (1997), Hollister and Webber (2000) and Bokhorst et al.
(2013) provide details on the performance of OTCs in general, and Hudson and Henry (2010) and Hudson et al. (2011) describe the OTC temperature effects at Alexandra Fiord. The study site is part of the International Tundra Experiment (ITEX), a collaborative network of over 20 arctic and alpine sites established in 1990 to monitor the effects of experimental and ambient warming on tundra vegetation (Henry and Molau 1997).

Table 3.1 Characteristics of the four tundra communities at Alexandra Fiord

Site      CAVM classification*                             Moisture   Above ground live standing crop (g m-2)**   Major species
Willow    Prostrate dwarf-shrub herb tundra                Xeric      71     S. arctica, Luzula confusa, Poa arctica, Papaver radicatum, Oxyria digyna
Cassiope  Prostrate/hemiprostrate dwarf-shrub herb tundra  Mesic      190    C. tetragona, D. integrifolia, Papaver radicatum, Oxyria digyna
Dryas     Prostrate dwarf-shrub herb tundra                Mesic      190    D. integrifolia, C. tetragona, Papaver radicatum, Oxyria digyna
Meadow    Sedge, moss, dwarf-shrub wetland                 Hydric     132    Eriophorum angustifolium triste, C. stans, C. membranacea, D. integrifolia

* Circumpolar Arctic Vegetation Map (Walker et al. 2006)
** from Muc et al. (1994)

3.1.2 Photography

All plots in each of the four communities were photographed with a digital camera (Nikon D40 SLR, Nikon Corporation, Japan) six times over the growing season in 2012, beginning on June 13 (Day Of Year (DOY) 165) and ending on August 2 (DOY 215). Photographs were taken at nadir, approximately 2 m from the ground, from the same south-facing position on each date. A 1 m x 1 m white frame was positioned on permanent corner markers in the plots for each photograph. Photographs were taken between 0900 h and 1200 h in cloudless conditions to reduce the influence of changes in solar angle or irradiance, and were saved in Nikon raw (NEF) format at 3,008 x 2,000 pixels.
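The day-of-year values used throughout (e.g. DOY 165 for the first photo date) can be recovered from calendar dates with the standard library; a small convenience sketch, not part of the thesis workflow:

```python
from datetime import date

def day_of_year(year, month, day):
    """Ordinal day within the year (January 1 = 1)."""
    return date(year, month, day).timetuple().tm_yday

# 2012 was a leap year, so June 13 falls on DOY 165 and August 2 on DOY 215,
# matching the first and last photo dates given in the text.
print(day_of_year(2012, 6, 13))  # -> 165
print(day_of_year(2012, 8, 2))   # -> 215
```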
Volumetric soil moisture was measured using a Hydrosense II soil-water sensor (Campbell Scientific Inc., Edmonton, Canada) inserted 12 cm into the soil at points in the south, center and north sections of each plot, three times throughout the growing season. Every three days, phenological observations were made on tagged plants of the major species in each site, and important dates such as first day of mature leaf, first day of mature flower, and first day of leaf senescence were recorded.

3.1.3 Biomass data

Above ground biomass was estimated from total live-hit (TLH) point-intercept pin hit data from 2010 (Edwards 2012; Henry et al. unpublished). These data were used to explore covariation with greenness data. The non-destructive TLH method uses a 1 m2 frame with 10 cm grid spacing to record vegetation type for all layers of the canopy at the 100 intersection points within each plot (Molau and Mølgaard 1996). This method has shown strong correlations with harvest methods (Jonasson 1988; Shaver et al. 2001). In each plot, live hits of all vascular species were recorded to the species level, and data were later combined by functional group. Average canopy height was also recorded from 25 points in the 1 m2 frame per plot.

3.1.4 Colour digital photo analysis

The six colour photos from each plot, representing a time series, were initially processed using Creative Suite 6 (Adobe Systems Incorporated, 2013) to stack and align the images. Alignment was done automatically by matching pixels, and manual adjustments were made where there were discrepancies. Photos were excluded from the time series if alignment failed. Stacked photos were cropped to a standard 1:1 ratio and exported as individual TIFF files. After pre-processing, pixel values of each channel (Red, Green, and Blue) were extracted from each photo. Extraction of pixel information was done using a function written with the rgdal package (0.8-4: Keitt et al.
2013) in R version 2.15.2 (R Development Core Team 2012). Pixels from the photos were transformed into data frames and Red, Green, and Blue (RGB) values were extracted and averaged by photo. Using a sample of photos from each site, unobstructed, well-separated individuals of S. arctica, P. radicatum, and D. integrifolia were identified within plots and separated as subsets from the larger photograph. Pixel information was extracted for the subset area using the same function to examine species-specific responses. To estimate the greenness of each photo and to normalize variations in irradiance among photos, the green ratio (rG) was calculated and averaged by photo (Eq. (1)):

rG = G / (R + G + B)    (1)

where G = green, R = red, and B = blue. A ratio was also calculated for the red (rR) and blue (rB) channels by rearranging Eq. (1). A final index, the Greenness Excess Index (GEI), was calculated using the red, green, and blue ratios (Eq. (2); Richardson et al. 2007):

GEI = 2*rG - (rR + rB)    (2)

where rG = green ratio, rR = red ratio, and rB = blue ratio. Although rG can be used to track phenological and productivity changes in vegetation, Eq. (2) provides a more sensitive indicator of changes in plant pigment (Ide and Oguma 2010). This is desirable in tundra ecosystems, as the changes can be subtle.

3.1.5 Peak flower counting program

The number of flowers in an image was counted using an intensity thresholding algorithm (created by Samuel Robinson) implemented in the Image Processing Toolbox in MATLAB version 7.9.0.529 (MathWorks 2013). The original image A(x, y, c) was desaturated to create a flattened matrix of intensity values, I(x, y). A threshold of brightness values, p, was manually chosen to eliminate background colours, creating a binary image B(x, y) such that:

B(x, y) = 1 if I(x, y) >= 255*p; B(x, y) = 0 if I(x, y) < 255*p    (3)

Automated selection of p by Otsu's method (Otsu 1975) proved to be too liberal for the images.
The binary image B(x, y) was then subjected to morphological opening and closing (for examples, see Gonzalez et al. 2009 and Dougherty 1992), in order to remove small, non-continuous bright areas such as leaf litter or plant tags. Finally, the number of connected components in the resulting binary image was counted automatically, giving the total number of bright contiguous areas in each image. Manual flower counts were conducted between DOY 187 (July 5) and 188 (July 6) in the three sites, using the same 1 m x 1 m frame as the photographs, to obtain an estimate of peak flowering. Mature flowers of D. integrifolia, P. radicatum, Eriophorum triste, and E. scheuchzeri were compared to the numbers detected by the algorithm in photographs taken on DOY 188. These species were chosen for the comparison because they have large flowers and are the most obvious and easily identifiable.

3.1.6 Statistical analysis

Descriptive statistics, including mean, median, range, and standard deviation, were calculated for GEI, soil moisture, TLH, average canopy height and first day of mature leaf in each site by treatment. One-way ANOVAs were used to examine differences in TLH and canopy height between sites and treatments. One-way repeated measures ANOVAs were used to examine differences in soil moisture and first day of mature leaf between sites and between treatments within sites. Linear mixed models were constructed in R version 2.15.2 (R Development Core Team 2012), using the lme4 package (lme4; Bates, Maechler and Bolker 2011), to model the relationship between GEI, treatment and environmental variables in each of the three sites and for the two species. Species models include data from a subset of plots with the species of interest present. Warming treatment was used as a fixed effect. For the species models, site was used as a random effect. For the site models, day was used as a random effect to account for repeated measures. The biomass
index (TLH), average canopy height, soil moisture, and average day of mature leaf were added as random effects at all scales only if they improved the model AIC value. Simple two-way ANOVAs were used to test for treatment differences in field-based phenology observations. Simple linear regression was used to examine the relationship between manual and automated flower counts. An alpha of 0.05 was used for all significance testing.

Results

3.1.7 Timing of phenology

Plant growth, flowering and senescence are identifiable in the images and correspond well to field-based phenology observations (Figure 3.1). The first photos, taken on day 165, were dominated by red/brown standing litter with little to no green, with the exception of mosses. Photos taken on day 179 showed a greening and new growth of vegetation penetrating through standing litter. By day 188, the majority of the canopy was green in all sites and the major species were flowering. Over the remaining three sample days (191, 196 and 213) the changes in greenness were subtle and difficult to detect by eye. There was a gradual senescence of all flowers, and the very beginning of leaf senescence was visible in some images by day 213. Seasonal changes in greenness were most pronounced at the species scale and in the more vegetated Meadow and Willow sites.

Figure 3.1 Examples of seasonal patterns of green-up from the photographs in each of the four sites.

3.1.8 Quantifying greenness

Green-up from the first to the second photos was easily detected by the human eye and was matched by the changes in GEI (Figures 3.2 and 3.3). GEI from all sites combined showed an increase from day 165 to 196 and a leveling off by 213, suggesting the beginning of senescence (Figure 3.2). The senescence signal was weak, as most of the vegetation was still green on the final day of photography. Plotting GEI data also revealed a treatment signal at the plot scale but not at the species scale.
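The flower-counting pipeline described in Section 3.1.5 (binarise at 255*p, then count connected bright regions) can be sketched in plain Python. This simplified version omits the morphological opening and closing of the MATLAB original, and the toy intensity array is invented for illustration:

```python
def count_bright_regions(intensity, p):
    """Binarise a greyscale image (pixel is 1 iff I >= 255*p, per Eq. (3))
    and count 4-connected bright regions, each taken as one flower."""
    h, w = len(intensity), len(intensity[0])
    binary = [[1 if intensity[y][x] >= 255 * p else 0 for x in range(w)]
              for y in range(h)]
    seen = [[False] * w for _ in range(h)]
    regions = 0
    for y in range(h):
        for x in range(w):
            if binary[y][x] and not seen[y][x]:
                regions += 1
                stack = [(y, x)]  # flood fill one connected component
                while stack:
                    cy, cx = stack.pop()
                    if (0 <= cy < h and 0 <= cx < w
                            and binary[cy][cx] and not seen[cy][cx]):
                        seen[cy][cx] = True
                        stack.extend([(cy + 1, cx), (cy - 1, cx),
                                      (cy, cx + 1), (cy, cx - 1)])
    return regions

# Two bright blobs (stand-ins for flowers) on a dark background.
toy_image = [[10, 10, 10, 10, 10],
             [10, 250, 250, 10, 10],
             [10, 250, 10, 10, 240],
             [10, 10, 10, 10, 240]]
print(count_bright_regions(toy_image, 0.9))  # -> 2
```

Without the morphological step, isolated bright pixels (litter, plant tags) would each count as a region, which is why the original algorithm applies opening and closing first.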
The magnitude of the difference in GEI between warmed and control plots was greatest in the Willow site. Density plots, i.e. the distribution of GEI values (Figure 3.3 insets), show that the Dryas and Willow sites had more green pixels in the warmed plots than in the control plots.

Figure 3.2 Seasonal patterns of GEI in all sites combined. Data are fit with a loess curve. Insets represent density plots of GEI values. Vertical lines represent important phenological observations collected in the field.

When all sites were grouped together, the green-up and senescence signal was well defined (Figure 3.2). There was a pronounced green-up and senescence signal in the Willow and Meadow sites (Figure 3.3A, D). The pattern was less straightforward in the Cassiope and Dryas sites, as there was very little change in GEI over the growing season, likely a result of the vegetation cover and plant species present (Figure 3.3B, C).

Figure 3.3 Seasonal patterns of GEI at the plot scale (A: Willow site; B: Cassiope site; C: Dryas site; D: Meadow site). Data are fit with a loess curve. Insets represent density plots of GEI values. (L: first day mature leaf, F: first day mature flower, S: first day leaf senescence.) Vertical lines represent important phenological observations collected in the field and averaged across plots for each site.

At the individual scale, there was a strong green-up and senescence signal for both species. The deciduous shrub S. arctica had a stronger signal than the evergreen shrub D. integrifolia (Figure 3.4). There was a strong site-specific response of both species across the three sites (Figure 3.5). For S. arctica, GEI was greatest in the Willow site, where it is the most dominant species, followed by the Meadow site and finally the Dryas site. For D. integrifolia, GEI was greatest in the Meadow site, with no difference between the Dryas and Willow sites.
The seasonal patterns of GEI at all scales are validated by their correspondence to field-based measures of phenology and by examination of density plots of GEI by day (Figures 3.6, 3.7 and 3.8).

Figure 3.4 Seasonal patterns of GEI at the species scale (A: S. arctica; B: D. integrifolia). Data are fit with a loess curve. (L: first day mature leaf, S: first day leaf senescence.) Vertical lines represent important phenological observations collected in the field and averaged for each species across plots in all sites.

Figure 3.5 Seasonal patterns of GEI of S. arctica (A) and D. integrifolia (B) in the three study sites. Data are fit with a loess curve.

When the three sites representing the moisture gradient were grouped together, a double-Gaussian type distribution of GEI values emerged, with the first peak representing bare ground and standing litter and the second peak representing green vegetation, somewhat confounding the vegetation signal (Figure 3.6). However, when the second peak was examined more closely, a logical seasonal shift in GEI density emerged. There was a high density of GEI values close to zero (i.e. not green) past day 165 in the Cassiope, Dryas and Willow sites, due to the species present and the extensive standing litter and bare soil in these sites (Figure 3.7B, C). After day 165, a double-Gaussian type distribution emerged in the Willow site (Figure 3.7A). In the hydric sedge Meadow site there was little to no bare ground present, and GEI density shifted clearly by day (Figure 3.7D). An interesting pattern emerged in the Meadow site, where day 188 was greener than day 191, likely the result of peak flowering masking the green vegetation. Bare soil and standing litter were less of an issue at the species scale, and this was reflected in the density plots (Figure 3.8).
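One simple way to separate the two modes described above (bare ground and litter vs. green canopy) is a two-cluster split of the GEI values. A toy sketch using basic two-means clustering on synthetic numbers, not the kernel-density approach used for the figures:

```python
def two_means(values, iters=20):
    """Split 1-D data into two clusters, initialising the centres at the
    data extremes. Assumes the data actually contain two separated modes."""
    c1, c2 = min(values), max(values)
    for _ in range(iters):
        a = [v for v in values if abs(v - c1) <= abs(v - c2)]
        b = [v for v in values if abs(v - c1) > abs(v - c2)]
        c1, c2 = sum(a) / len(a), sum(b) / len(b)
    return c1, c2

# Synthetic bimodal GEI sample: one mode near 0 (bare ground, litter),
# one near 0.08 (green canopy). Values are invented for illustration.
sample = [-0.01, 0.0, 0.005, 0.01, 0.07, 0.075, 0.08, 0.09]
low, high = two_means(sample)
print(low < 0.02 < high)  # -> True
```

The midpoint between the two recovered centres gives a data-driven cutoff between the "not green" and "green" peaks of the distribution.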
Both species demonstrated increasing GEI as the season advanced, and clearer senescence signals on day 213 than at the plot scale. There was no flower-masking signal at the species scale, as areas with flowers were excluded from the analysis.

Figure 3.6 Density plot of GEI values by sample day at the plot scale in all sites combined.

Figure 3.7 Density plots of GEI values by sample day at the plot scale (A: Willow site; B: Cassiope site; C: Dryas site; D: Meadow site).

Figure 3.8 Density plots of GEI values by day (A: S. arctica; B: D. integrifolia).

3.1.9 Modeling GEI

Linear mixed models (Table 3.2) confirm that warmed plots had significantly greater GEI values at the plot scale in all sites combined (p < 0.0001) and in the Dryas (p < 0.0001) and Willow sites (p < 0.0001). The treatment was not significant at the individual scale (S. arctica: p = 0.533; D. integrifolia: p = 0.128), suggesting no difference in the greenness of individual plants. The addition of soil moisture as a random effect improved the model AIC at the plot scale for all sites combined, and for both species. Mature leaf improved the model AIC at the community scale in the Dryas site.

Table 3.2 Analysis of GEI by linear mixed models. Simple models include treatment as a fixed effect and site as a random effect. Soil moisture and mature leaf models include those variables as random effects because they improve upon the simple model.

Site / species       Model           AIC      logLik   Treatment p-value
All                  Simple          -1877    942.3
                     Soil moisture   -1873    943.7    < 0.001
Willow               Simple          -624.8   316.4    < 0.001
Cassiope             Simple          -855.8   431.8    0.258
Dryas                Simple          -838.7   423.4
                     Mature leaf     -822.5   416.3    < 0.001
Meadow               Simple          -661.4   334.7    0.0612
Salix arctica        Simple          -397.8   202.9
                     Soil moisture   -392.8   203.3    0.5328
Dryas integrifolia   Simple          -607.3   307.7
                     Soil moisture   -604.0   309.0    0.1282

3.1.10 Peak flower bloom detection

The program was able to detect mature flowers of D.
integrifolia, Papaver radicatum, Eriophorum triste, and E. scheuchzeri from digital photographs with high accuracy (Figure 3.9). There was a strong correlation between manual and program counts for all sites. There was no significant difference in performance by site or treatment (Figure 3.10). The Meadow site, dominated by Eriophorum spp., had the greatest number of flowers and the greatest range in flower numbers (0–83).

Figure 3.9 Linear regression between manual and program flower counts of D. integrifolia, Papaver radicatum, E. triste, and E. scheuchzeri in all sites (R2 = 0.99, p < 0.0001).

Figure 3.10 Linear regressions between manual and program flower counts for each of the study sites (A: Willow site, R2 = 0.94, p < 0.0001, flowers of D. integrifolia and P. radicatum; B: Cassiope site, R2 = 0.98, p < 0.0001, flowers of D. integrifolia and P. radicatum; C: Dryas site, R2 = 0.91, p < 0.0001, flowers of D. integrifolia, P. radicatum, and E. triste; D: Meadow site, R2 = 0.99, p < 0.0001, flowers of Eriophorum spp.).

3.1.11 Descriptive statistics of GEI and environmental data

The Meadow site had the greatest mean and standard deviation (0.03 ± 0.02) and range (0.1) in GEI values, and the Cassiope site had the lowest (-0.32 ± 0.006, 0.02) (Figure 3.11). Treatment plots had higher GEI in all sites but, as the linear mixed models showed, the difference was significant only in the Dryas and Willow sites (Table 3.2). In all sites but Cassiope, the treatment plots had greater variance in GEI values than the control plots. At the individual species scale, S. arctica had a greater mean and standard deviation (0.070 ± 0.070) and range (0.30) than D. integrifolia (0.012 ± 0.035) (Figure 3.12). There was no significant difference between treatments for either species at any site (Table 3.2).

Figure 3.11 Boxplot of GEI values by site. Error bars represent the highest/lowest value within 1.5 times the interquartile range.
Figure 3.12 Boxplot of GEI values by species. Error bars represent the highest/lowest value within 1.5 times the interquartile range.

Corresponding to its highest mean GEI, the Meadow site had the greatest TLH (153 ± 34.0) and canopy height (12.2 ± 4.22) (Figure 3.13B, C). The Meadow site had significantly greater TLH (p < 0.0001) and canopy height (p < 0.0001) than the Cassiope, Dryas, and Willow sites. There was no difference in TLH or canopy height between the Dryas and Willow sites. TLH was significantly greater in the warmed plots of the Dryas (p = 0.04) and Willow sites (p < 0.0001). The Willow site had a significantly taller canopy than the Dryas site (p < 0.001). Canopy height was significantly greater in the control plots of the Meadow site (p < 0.0001), but there was no difference between treatments in the Dryas or Willow sites (Figure 3.13C). As expected, the Meadow site had the highest soil moisture (76.7% ± 10.2), significantly greater than all other sites (p < 0.0001). Unexpectedly, the Cassiope site had the lowest soil moisture (21.9% ± 5.52) (Figure 3.13A). The Dryas site had significantly greater soil moisture than both the Willow (p = 0.02) and Cassiope sites (p < 0.0001), and the Willow site had significantly greater soil moisture than the Cassiope site (p < 0.0001). There was no significant difference in soil moisture due to treatment in any of the sites. First day of mature leaf was earliest in the Meadow site (170 ± 1.99), and the Dryas site had the greatest range in first leaf date (12 days) (Figure 3.13D). There was no significant difference in first day of mature leaf among the warmed plots of the different sites. In the control plots, Willow was significantly later than Dryas (p = 0.02) and Cassiope (p < 0.0001), and Meadow was significantly earlier than Dryas (p = 0.002). First day of mature leaf was significantly earlier in the warmed plots of all sites (p < 0.001) except the Cassiope site.
Figure 3.13 Boxplots of A: soil moisture; B: total live hit biomass; C: average canopy height; D: first day mature leaf. Error bars represent the highest/lowest value within 1.5 times the interquartile range.

Discussion

3.1.12 Evaluation of photographs in detecting phenological changes

Phenological observations have been conducted at AF for 20 years using modified ITEX protocols (Molau and Mølgaard 1996). Tagged plants of mainly dominant species have been monitored nearly annually in all vegetation communities inside and outside of warmed plots (Oberbauer et al. 2013). This long-term dataset contains highly valuable ecological information, but changing species of interest, evolving methodology, missing data and observer error have affected the dataset. Other ITEX datasets have been affected by the same issues, and some sites have abandoned the detailed phenological measurements (Oberbauer et al. 2013). The analysis in this chapter demonstrates the potential of photo analysis to reduce the error of field-based phenological observations at the plot scale through automation of the method. Phenological signals of tundra vegetation were detectable at multiple scales using GEI extracted from a small sample of colour digital photographs. As in previous studies in alpine and temperate forest ecosystems, there was an overall agreement between GEI and field-based phenological data, supporting the validity of inferring phenological stage from GEI (Richardson et al. 2007; Ahrends et al. 2008; Graham et al. 2010; Ide and Oguma 2010). The peak flower counts of major species are another example of phenological data extractable from colour photographs. The difference in the number of flowers by site and treatment can provide information about reproductive effort under the warming treatment and different hydrological regimes.
The program performed well in all sites and was in agreement with manual counts, but will need to be improved to identify individual species. Phenological signal strength varied with spatial scale, vegetation community and species. This photographic methodology has largely been applied in well-vegetated temperate forests and alpine meadows using a large landscape field of view (FOV) (Richardson et al. 2007; Crimmins and Crimmins 2008; Ide and Oguma 2010). AF is well vegetated for the High Arctic, but areas of bare soil, rock, standing litter and standing water interfere with the phenological signal at the plot scale at which the photos were taken. This is well demonstrated in the analysis, as the weakest signal was in the Dryas site, where litter and bare soil are extensive. Individual species had the strongest phenological signal because the influence of non-vegetated areas was minimized, as individual plants encompass the entire FOV. Early season temporal patterns of GEI in the Dryas and Willow sites show a different rate of green-up in the treatment plots of these sites. Field-based observations suggest significantly earlier mature leaves in warmed plots of both sites. This was reflected in the time series, though greenness appeared to decrease slightly from day 165 to day 179 in control plots of both sites. This decrease could be the result of green mosses in the canopy on day 165 taking advantage of moisture following snowmelt. Mosses rely on ample surface water, so once the snow and early season saturation dissipate they become susceptible to desiccation, potentially explaining the apparent decrease in GEI (Oechel and Sveinbjörnsson 1978). The lack of a clear temporal signal in green-up could also be the result of the low frequency of photographs early in the season. After day 188, temporal patterns were similar in the Dryas and Willow sites, suggesting no difference in the timing of senescence due to treatment.
A temporal change was observed at the Meadow site after day 188, but field-based observations showed no significant (P > 0.05) difference in the timing of senescence due to treatment. A greater frequency of photos would likely strengthen all phenological signals, and future application of this method at AF will endeavor to achieve this to gain a clearer understanding of the site and species differences.

3.1.13 Evaluation of GEI in detecting vegetative differences

In addition to phenological patterns, the results demonstrate that GEI is able to detect differences in vegetation cover and vigor at the plot and individual scale, respectively. Warmed plots of the Dryas and Willow sites had significantly greater GEI values and a higher density of green pixels, suggesting greater coverage of green vegetation in the photographs due to warming. Previous research in these sites by Hudson et al. (2011) found that experimental warming resulted in larger leaf size and greater height of the dominant plant species. Migliavacca et al. (2011) found strong correlations between GEI, green biomass (r = 0.67) and LAI (r = 0.74) in an alpine ecosystem, and at AF moderate correlations between GEI and both TLH and average canopy height have been found (Table 2.4; Figure 2.5; Beamish et al. unpublished data). This highlights the ability of GEI to detect differences in vegetation cover at the plot scale. The strong site-specific response of S. arctica represents a difference in the greenness of the individual species in each site, not in the amount of greenness (i.e. cover) as seen at the plot scale. Previous research showed that greenness ratios derived from colour digital photographs are highly correlated (r2 = 0.91) with physiological traits such as chlorophyll content (Adamsen et al. 1999). Differences in the growth form of S. arctica driven by soil moisture and the resulting community composition have been documented at AF (Jones et al. 1999; Hudson et al. 2011). The well-drained mesic conditions of the Willow site allow S.
arctica to thrive and grow taller and larger (Walker et al. 2006). This may explain the differences seen between the Willow site and the hydric Meadow site, where conditions are not ideal for S. arctica. The Dryas site has site conditions similar to the Willow site, but it melts out later, and recent flooding of that community has caused a shift towards a sedge- and grass-dominated ecosystem (Henry et al. unpublished). This changing moisture regime and shifting vegetation composition could explain the lower GEI of S. arctica in this site. A recent meta-analysis of experimental warming data highlights the importance of these site- and species-specific responses of vegetation to understanding tundra vegetation change (Elmendorf et al. 2012b). This photo analysis method provides the opportunity to easily monitor species-specific responses to environmental changes in multiple vegetation communities.

Conclusions

We analyzed the phenological signal derived from a small set of repeat colour digital photographs at the plot and individual species scale in a High Arctic ITEX site. GEI derived from the photographs was able to detect a seasonal phenological signal at all spatial scales; the clarity of the signal decreased with increasing spatial scale. GEI was also able to detect differences in the vigor of individual species and in aboveground biomass at the plot scale. Density plots of GEI by day indicated that flowering was detectable at the plot scale and that the senescence signal was strongest at the individual species scale. We also developed an algorithm that was able to accurately predict peak total flower numbers automatically. Soil moisture emerged as the most important environmental factor in modeling GEI. This methodology is simple, affordable and efficient, and has great potential for use in a vegetation monitoring network in the Arctic.
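The flower-counting algorithm referred to above is a MATLAB program (Appendix A.2) built around thresholding and morphological opening and closing. As a rough, hypothetical sketch of the core idea, the Python snippet below treats pixels above a brightness threshold as flower candidates and counts connected components, discarding very small components in place of the morphological opening step; the function name, threshold and area cut-off are illustrative, not the thesis's parameters.

```python
from collections import deque

def count_flowers(gray, threshold=200, min_area=2):
    """Count bright blobs in a grayscale image (list of lists, 0-255).

    Simplified stand-in for the thesis's MATLAB flower counter:
    pixels >= `threshold` are flower candidates; connected components
    smaller than `min_area` are discarded, playing the role of the
    morphological opening step that removes speckle noise.
    """
    rows, cols = len(gray), len(gray[0])
    seen = [[False] * cols for _ in range(rows)]
    count = 0
    for r in range(rows):
        for c in range(cols):
            if gray[r][c] >= threshold and not seen[r][c]:
                # Flood-fill this component, measuring its area
                area, q = 0, deque([(r, c)])
                seen[r][c] = True
                while q:
                    i, j = q.popleft()
                    area += 1
                    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ni, nj = i + di, j + dj
                        if (0 <= ni < rows and 0 <= nj < cols
                                and gray[ni][nj] >= threshold
                                and not seen[ni][nj]):
                            seen[ni][nj] = True
                            q.append((ni, nj))
                if area >= min_area:
                    count += 1
    return count
```

On real photos, the white Dryas and Cassiope flowers are the bright objects this kind of threshold-and-label pass picks out against the darker canopy.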
4 Summary and Synthesis

Introduction

The goal of this project was to determine the utility of colour digital photography in detecting differences in vegetation and vegetation phenology over one growing season across a moisture gradient, and in response to experimental warming. The magnitude and direction of vegetation change due to warming are highly variable (Elmendorf et al. 2012a). Increasing the volume and consistency of vegetation data through automation of data collection will aid in understanding this heterogeneity that defines tundra ecosystems. Photographs were taken in four moisture-defined vegetation communities, in plots that have been passively warmed for the last 20 years and in corresponding control plots. These data were used to answer the following questions: (1) How does greenness data (GEI) from colour digital photographs relate to biomass measures? (2) How do greenness-biomass relationships compare to NDVI-biomass relationships? (3) Can greenness data detect seasonal patterns of green-up and senescence at the plot and species scale? (4) Can greenness data detect differences due to treatment? (5) Can greenness data detect differences due to differences in hydrological regimes? (6) Can we automate flower counting with an algorithm?

Summary of results

GEI derived from digital photography demonstrated moderate positive correlations with TLH and canopy height data. These relationships were very similar to the correlations between NDVI and the biomass measures, suggesting that GEI is a suitable proxy for aboveground biomass and productivity. This is supported by previous research, which has found strong to moderate positive correlations between GEI, live biomass and GPP in both grassland and temperate forests (Ahrends et al. 2009; Migliavacca et al. 2011). When GEI values were plotted in a time series, logical phenological signals of green-up and senescence emerged which corresponded well to field-based phenological observations.
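The correlations summarized above (and the r values quoted from Migliavacca et al. 2011) are Pearson product-moment coefficients. For concreteness, a small self-contained helper is sketched below in Python; the paired series in the test are made-up illustrative numbers, not the thesis's data.

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length series,
    e.g. plot-level GEI values paired with total live hits (TLH)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    # Covariance numerator and the two standard-deviation terms
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)
```

Applied to paired GEI and biomass measurements per plot, a value of r near 0.5-0.7 would correspond to the "moderate positive" relationships described in the text.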
Agreement between phenological stage inferred from GEI and observed phenological stage has been found in a number of ecosystems, including a high alpine meadow (Richardson et al. 2007; Ahrends et al. 2008; Graham et al. 2010; Ide and Oguma 2010). The strength of the phenological signal depended on site and scale. The signal was strongest at the individual species scale because green vegetation encompassed the entire field of view (FOV). The deciduous shrub S. arctica had the strongest green-up and senescence signal. The Cassiope site had the weakest signal due to the dominance of non-green biomass. Differences due to treatment were detectable at the plot scale. The GEI values in warmed plots at the Dryas and Willow sites were significantly higher than in control plots. This was supported by significantly greater TLH in these plots. These results show that GEI is able to detect differences in the amount of green vegetation cover at the plot scale. There was no difference between treatments at the species scale (for S. arctica and D. integrifolia), further supporting that the treatment differences detected at the plot scale reflect quantitative differences in vegetation cover. Though there was a lack of treatment signal at the species scale, there were significant differences in GEI values by site. This is likely the result of differences in the vigor of the species due to different moisture regimes and other soil characteristics, and of the resulting community composition and competition. Adamsen et al. (1999) found strong correlations of greenness ratios derived from digital photographs with physiological traits such as chlorophyll content. Different growth forms of some species have been recorded in the different vegetation communities at AF (Jones et al. 1999; Hudson et al. 2011). The Meadow site had the greatest GEI values, which corresponded to the greatest TLH and canopy height values.
The Meadow site also had the greatest range in GEI values, which is reflected in the time series showing change in greenness over the season at the plot scale. Overall, GEI values were more variable in the warmed plots, suggesting a variable response even from plot to plot within sites. A recent study of tundra vegetation change found strong site- and species-specific responses of tundra vegetation to experimental warming, highlighting the importance of understanding this within-site heterogeneity (Elmendorf et al. 2012a). Peak flower production was accurately estimated from the digital photos using an algorithm; strong correlations between estimated and manual counts were found in all sites. These additional quantitative vegetation data can provide important information about reproductive effort and about differences due to soil moisture and warming.

Limitations

The major limitation of this study was the limited temporal scale of the photo dataset. The six photographs gave a snapshot of changes in seasonal vegetation status but did not provide detailed information on early-season green-up patterns, when changes are most rapid. Increasing the frequency of photographs would greatly improve our ability to infer phenological patterns accurately from RGB data. An additional limitation was the biomass estimates: more detailed biomass information, including belowground biomass, LAI and percent cover, could help increase the accuracy of GEI in estimating biomass and productivity.

Future research

The first priority for future work expanding on the results presented in this thesis is a photo dataset with a higher temporal frequency. Ideally, time-lapse cameras would take multiple photos of each plot each day (or even more frequently) through the growing season, from pre-snowmelt to full leaf senescence.
This would provide a better understanding of snowmelt timing and the resulting green-up and senescence inside and outside of OTCs, as well as differences by site. The addition of a larger spatial dataset, i.e. at the landscape scale, would also add valuable information about how the lowland changes as a mosaic rather than just community by community. This would require cameras at varying elevations, which could be accomplished through the use of balloons or drone aircraft.

Conclusions

The results of this thesis demonstrate the utility of repeat colour digital photography in monitoring tundra vegetation in the High Arctic. The RGB data derived from photographs were able to detect seasonal phenological patterns of green-up and senescence at the plot and individual species scale. They were also able to detect differences in the greenness of vegetation due to differences in cover and differences in vigor. These results, coupled with the ease and accessibility of digital photography, highlight the potential for implementation of a semi-automated monitoring network at Alexandra Fiord and across the Arctic.

References

ACIA (2004). Impacts of a Warming Arctic: Arctic Climate Impact Assessment. Cambridge University Press, Cambridge, UK. 144 pp. ISBN 0521617782.

Adamsen, F. G., Pinter, P. J., Barnes, E. M., LaMorte, R. L., Wall, G. W., Leavitt, S. W., and Kimball, B. A. (1999). Measuring wheat senescence with a digital camera. Crop Science, 39(3), 719–724.

Ahrends, H. E., Etzold, S., Kutsch, W. L., Stoeckli, R., Bruegger, R., Jeanneret, F., Wanner, H., Buchmann, N., Eugster, W., (2009). Tree phenology and carbon dioxide fluxes: use of digital photography for process-based interpretation at the ecosystem scale. Climate Research, 39, 261–274.

Arft, A. M., Walker, M. D., Gurevitch, J. E. A., Alatalo, J. M., Bret-Harte, M.
S., Dale, M., Diemer, M., Gugerli, F., Henry, G. H.R., Jones, M. H., Hollister, R. D., Jonsdottir, I. S., Laine, K., Levesque E., Marion, G. M., Molau, U., Molgaard, P., Nordenhall, U., Raszhivin V., Robinson, C. H., Starr, G., Stenstrom, A., Stenstrom M., Totland, O., Turner, P. L., Walker, L. J., Webber, P. J., Welker, and J. M., Wookey, P. A., (1999). Responses of tundra plants to experimental warming: meta- analysis of the international tundra experiment. Ecological Monographs, 69(4), 491– 511. Badeck, F. W., Bondeau, A., Bottcher, K., Doktor, D., Lucht, W., Schaber, J., and Sitch, S. (2004). Responses of spring phenology to climate change. New Phytologist, 162(2), 295–309. Ball, P. and Hill, N., (1994). Vascular plants at Alexandra Fiord. In: Svoboda, J. and Freedman, B., Ecology of a Polar Oasis: Alexandra Fiord, Ellesmere Island, Canada. Captus University Publications, Toronto, 255-256. Barbour, M. G., Burk, J. H., Pitts, W. D., (1980). Terrestrial plant ecology. California University, Davis, USA. Bhatt, U. S., Walker, D. A., Raynolds, M. K., Comiso, J. C., Epstein, H. E., Jia, G., Gens, R., Pinzon, J. P., Tucker, C. J., Tweedie, C. E., Webber, P. J., (2010). Circumpolar Arctic tundra vegetation change is linked to sea ice decline. Earth Interactions, 14(8), 1–20. Billings, W. D. and Mooney, H. A., (1968). Ecology of arctic and alpine plants. Biological Reviews of the Cambridge Philosophical Society, 43, 481–529. Bliss, L. C., Courtin, G. M., Pattie, D. L., Riewe, R. R., Whitfield, D., and Widden, P. (1973). Arctic tundra ecosystems. Annual Review of Ecology and Systematics, 4,        54   359–399. Bliss, L. C., Heal, O. W., and Moore, J. J., (1981). Tundra ecosystems: a comparative analysis (Vol. 25). Cambridge University Press, Cambridge, UK. Bliss, L. C., and Matveyeva, N. V. (1992). Circumpolar arctic vegetation. Arctic ecosystems in a changing climate: an ecophysiological perspective, 59-89. Boelman, N. T. N., Stieglitz, M. M., Rueth, H. M. 
H., Sommerkorn, M. M., Griffin, K. L. K., Shaver, G. R. G., and Gamon, J. A. J., (2003). Response of NDVI, biomass, and ecosystem gas exchange to long-term warming and fertilization in wet sedge tundra. Oecologia, 135(3), 414–421. Boelman, N. T., Stieglitz, M., Griffin, K. L., and Shaver, G. R., (2005). Inter-annual variability of NDVI in response to long-term warming and fertilization in wet sedge and tussock tundra. Oecologia, 143(4), 588–597. Bokhorst, S., Huiskes, A., Aerts, R., Convey, P., Cooper, E.J., Dalen, L., Erschbamer, B., Gudmundsson, J., Hofgaard, A., Hollister, R.D., Johnstone, J., Jónsdóttir, I.S., Lebouvier, M., Van de Vijver, B., Wahren, C.-H., Dorrepaal, E. (2013). Variable temperature effects of Open Top Chambers at polar and alpine sites explained by irradiance and snow depth. Global Change Biology 19:64-74. Boelman, N. T., Gough, L., McLaren, J. R., and Greaves, H., (2011). Does NDVI reflect variation in the structural attributes associated with increasing shrub dominance in arctic tundra? Environmental Research Letters, 6(3), 035501. Bony, S., Colman, R., Kattsov, V. M., Allan, R. P., Bretherton, C. S., Dufresne, J-L., Hall, A., Hallegatte, S., Holland, M. M., Ingram, W., Randall, D. A., Soden, B. J., Tselioudis, G., and Webb, M. J., (2006). How well do we understand and evaluate climate change feedback processes? Journal of Climate, 19(15), 3445–3482. Chapin III, F. S. and Shaver, G. R. (1985). Arctic: Physiological Ecology of North American Plant Communities. Chapman & Hall, London, UK. Chapin III, F. S., Sturm, M., Serresze, M. C., McFadden, J. P., Key, J. R., Lloyd, A. H., McGuire, A. D., Rupp, T. S., Lynch, A. H., Schimel, J. P., Beringer, J., Chpman, W. L., Epstein, H. E., Euskirachen, E. S., Hinzman, L. D., Jia, G., Ping, C.-L., Tape, K. D., Thompson, C. D. C., Walker, D. A., Welker, J. M. (2005). Role of land-surface changes in Arctic summer warming. Science, 310 (5748), 657–660. Callaghan, T. 
V., Jonasson, S., Nichols, H., Heywood, R. B., and Wookey, P. A. (1995). Arctic Terrestrial Ecosystems and Environmental Change [and Discussion]. Philosophical Transactions of the Royal Society of London. Series A: Physical and Engineering Sciences, 352(1699), 259–276.        55   Callaghan, T. V., Press, M. C., Lee, J. A., Robinson, D. L., and Anderson, C. W., (1999). Spatial and temporal variability in the responses of Arctic terrestrial ecosystems to environmental change. Polar Research, 18(2), 191–197. Cleland, E., Chuine, I., Menzel, A., Mooney, H., and Schwartz, M., (2007). Shifting plant phenology in response to global change. Trends in Ecology and Evolution, 22(7), 357–365. Cornelissen, J. H. C., van Bodegom, P. M., Aerts, R., Callaghan, T. V., van Logtestijn, R. S. P., Alatalo, J., et al. (2007). Global negative vegetation feedback to climate warming responses of leaf litter decomposition rates in cold biomes. Ecology Letters, 10(7), 619–627. Crimmins, M. A., and Crimmins, T. M., (2008). Monitoring plant phenology using digital repeat photography. Environmental Management, 41(6), 949–958. Dawelbait, M., and Morari, F., (2010). Limits and potentialities of studying dryland vegetation using the optical remote sensing. Italian Journal of Agronomy, 3(2), 97– 106. Dougherty E. R. (1992). Binary Opening and Closing. Pages 17–31 in Dougherty E. R., editor. An Introduction to Morphological Image Processing. SPIE Optical Engineering Press, Michigan, USA. Edwards, M. (2012). Effects of long-term experimental warming on three high Arctic plant communities. MSc Thesis. University of British Columbia, Vancouver, British Columbia, Canada. Elmendorf, S. C., Henry, G. H., Hollister, R. D., Björk, R. G., Bjorkman, A. D., Callaghan, T. V., Siegwart Collier, L., Cooper, E. J., Cornelissen, J. H. C., Day, T. A., Fossa, A. M., Gould, W. A., Gretarsdottir, J., Harte, J., Hermanutz, L., Hik, D. S., Hofgaard, A., Jarrad, F., Jonsdottir, S. I., Keuper, F., Klanderud, K., Klein, J. 
A., Koh, S., Kudo, G., Lang, S. I., Loewen, V., May, J. L., Mercado, J., Michelsen, A., Molau, U., Myers-Smith, I., Oberbauer, S. F., Pieper, S., Post, E., Rixen, C., Robinson, C. H., Schmidt, N. M., Shaver, G. R., Stenstrom, A., Tolvanenm A., Totland, O., Troxler, T., Wahren, C-H., Webber, P. J., Welker, J. M., Wookey, P. A., (2012a). Global assessment of experimental climate warming on tundra vegetation: heterogeneity over space and time. Ecology Letters, 15(2), 164–175. Elmendorf, S. C., Henry, G. H., Hollister, R. D., Björk, R. G., Boulanger-Lapointe, N., Cooper, E. J., Cornelissen, J. H. C., Day, T. A., Dorrepaal, E., Elumeeva, T. G., Gill, M., Gould, W. A., Harte, J., Hik, D. S., Hofgaard, A., Johnson, D. R., Johnstone, J. F., Jonsdottir, S. I., Jorgenson, J. C., Klanderud, K., Klein, J. A., Koh, S., Kudo, G., Lara, M., Levesque, E., Magnusson, B., May, J. L., Mercado-Diaz, J. A., Michelsen, A., Molau, U., Myers-Smith, I., Oberbauer, S. F., Onipchenko, V. G., Rixen, C., Schmidt, N. M., Shaver, G. R., Spasojevic, M. J., Þórhallsdóttir, E. Þ., Tolvanen, A.,        56   Troxler, T., Tweedie, C. E., Villareal, S., Wahren, C-H., Walker, X., Webber, P. J., and Wipf, S., (2012b). Plot-scale evidence of tundra vegetation change and links to recent summer warming. Nature Climate Change, 2(6), 453–457. Epstein, H. E., Calef, M. P., Walker, M. D., Stuart Chapin, F., and Starfield, A. M., (2004). Detecting changes in arctic tundra plant communities in response to warming over decadal time scales. Global Change Biology, 10(8), 1325–1334. Ewing, R. P., and Horton, R., (1999). Quantitative color image analysis of agronomic images. Agronomy Journal, 91(1), 148–153. Freedman, B., Svoboda J., and Henry, G. (1994). Alexandra Fiord-an ecological oasis in a polar desert. In: Svoboda, J. and Freedman, B., Ecology of a polar oasis, Alexandra Fiord, Ellesmere Island, Canada. Captus University Publications, Toronto, 1-9. Goetz, S. J. S., Bunn, A. G. A., Fiske, G. J. 
G., and Houghton, R. A. R., (2005). Satellite- observed photosynthetic trends across boreal North America associated with climate and fire disturbance. Proceedings of the National Academy of Sciences, 102(38), 13521–13525. Gonzalez, R. C., Woods, R. E., Eddins, S. L. (2009). Digital image processing using MATLAB (Vol. 2). Gatesmark Publishing, Knoxville, USA. Graham, E. A., Riordan, E. C., Yuen, E. M., Estrin, D., and Rundel, R. W., (2010). Public Internet-connected cameras used as a cross-continental ground-based plant phenology monitoring system. Global Change Biology, 16, 3014–3023. Henry, G. H. R., and Svoboda J., (1994) Comparisons of grazed and non-grazed High- Arctic sedge meadows. In: Svoboda, J. and Freedman, B., Ecology of a Polar Oasis: Alexandra Fiord, Ellesmere Island, Canada. Captus University Publications, Toronto, Canada. Henry, G. H. R., (1998). Environmental influences on the structure of sedge meadows in the Canadian high arctic. Plant Ecology, 134:119–129. Henry, G., and Molau, U. (1997). Tundra plants and climate change: the International Tundra Experiment (ITEX). Global Change Biology, 3(S1), 1–9. Henry, G.H.R., Freedman, B., Svoboda, J. (1990). Standing crop and net production of sedge meadows of an ungrazed polar desert oasis. Canadian Journal of Botany 68:2660-2667. Hill, G. B., and Henry, G. H. R., (2011). Responses of High Arctic wet sedge tundra to climate warming since 1980. Global Change Biology, 17(1), 276–287.        57   Hinzman, L. D., Bettez, N. D., Bolton, W. R., Chapin, F. S., Dyurgerov, M. B., Fastie, C. L., Griffith, B., Hollister, R. D., Hope, C. L., Huntington, H. P., Jensen, A. M., Jia, G. J., Jorgenson, T., Kane, D. L., Klein, D. R., Kofinas, G., Lynch, A. H., Llyod, A. H., McGuire, D., Nelson, F. E., Oechel, W. C., Osterkamp, T. E., Racine, C. H., Romanovsky, V. E., Stone, R. S., Stow, D. A., Sturm, M., Tweedie, C. E., Vourlitis, G. L., Walker, M. D., Walker, D. A., Webber, P. J., Welker, J., M., Winker, K. 
S., Yoshikawa, K., (2005). Evidence and implications of recent climate change in northern Alaska and other arctic regions. Climatic Change, 72(3), 251–298. Hollister, R. D., and Webber, P. J., (2000). Biotic validation of small open-top chambers in a tundra ecosystem. Global Change Biology, 6, 835–842. Høye, T. T., Post, E., Schmidt, N. M., Trøjelsgaard, K., and Forchhammer, M. C. (2013). Shorter flowering seasons and declining abundance of flower visitors in a warmer Arctic. Nature Climate Change.  doi:10.1038/nclimate1909. Hudson, J. M. G., and Henry, G. H. R. (2009). Increased plant biomass in a High Arctic heath community from 1981 to 2008. Ecology, 90(10), 2657–2663. Hudson, J. M. G. and Henry, G. H. R. (2010). High Arctic plant community resists 15 years of experimental warming. Journal of Ecology, 98, 1035-1041. Hudson, J. M. G., Henry, G. H. R., and Cornwell, W. K. (2011). Taller and larger: shifts in Arctic tundra leaf traits after 16 years of experimental warming. Global Change Biology, 17(2), 1013–1021. Huemmrich, K. F., Gamon, J. A., Tweedie, C. E., Oberbauer, S. F., Kinoshita, G., Houston, S., Houston, S., Kuchy, A., Hollister, R. D., Kwong, H., Mano, M., Harazono, Y., Webber, P. J., and Oechel, W. C., (2010). Remote sensing of tundra gross ecosystem productivity and light use efficiency under varying temperature and moisture conditions. Remote Sensing of Environment, 114(3), 481–489. Huete, A. R., and Jackson, R. D. (1987). Suitability of spectral indexes for evaluating vegetation characteristics on arid rangelands. Remote Sensing of Environment, 23, 213–232. Ide, R., and Oguma, H., (2010). Use of digital cameras for phenological observations. Ecological Informatics, 5(5), 339–347. Inouye, D. W. (2008). Effects of climate change on phenology, frost damage, and floral abundance of montane wildflowers. Ecology, 89(2): 353–362. Intergovernmental Panel on Climate Change (IPCC) Change 2007: The Physical Science Basis. 
Contribution of Working Group I to the Fourth Assessment Report of the IPCC (Cambridge Univ. Press, 2007).        58   Jonasson, S., (1988). Evaluation of the point intercept method for the estimation of plant biomass. Oikos, 52, 101–106. Jones, M. H., MacDonald, S. E., Henry, G. H. R. (1999). Sex- and habitat-specific responses of a High Arctic willow, Salix arctica, to experimental climate change. Oikos, 87:129-138. Kattsov, V. M., and Walsh, J. E. (2000). Twentieth-century trends of Arctic precipitation from observational data and a climate model simulation. Journal of Climate, 13, 1362–1370. Kaufman, D. S., Schneider, D. P., McKay, N. P., Ammann, C. M., Bradley, R. S., Briffa, K. R., Miller, G. H., Otto-Bliesner, B. L., Overpeck, J. T., and Vinther, B. M., (2009). Recent warming reverses long-term Arctic cooling. Science, 325(5945), 1236–1239. Kudo, G., and Hirao, A. S. (2006). Habitat-specific responses in the flowering phenology and seed set of alpine plants to climate variation: implications for global-change impacts. Population Ecology, 48:49–58. Labine, C. (1994). Meteorology and climatology of the Alexandra Fiord Lowland. In: Svoboda J, and Freedman, B., Ecology of a Polar Oasis: Alexandra Fiord Ellesmere Island. Captus University Publications, Toronto, 23-40. Laidler, G. J., and Treitz, P., (2003). Biophysical remote sensing of arctic environments. Progress in Physical Geography, 27(1), 44–68. Loranty, M. M., Goetz, S. J., and Beck, P. S., (2011). Tundra vegetation effects on pan- Arctic albedo. Environmental Research Letters, 6(2), 024014. Lucht, W., Prentice, I. C., Myneni, R. B., Sitch, S., Friedlingstein, P., Cramer, W., Bousquet, P., Buermann, W., Smith, B., (2002). Climatic control of the high-latitude vegetation greening trend and Pinatubo effect. Science, 296(5573), 1687–1689. Marchand, F. L., Nijs, I., Heuer, M., Mertens, S., Kockelbergh, F., Pontailler, J. Y., Impens, I., Beyens, L., (2004). 
Climate warming postpones senescence in High Arctic tundra. Arctic, Antarctic, and Alpine Research, 36(4), 390–394. Marion, G. M., Henry, G., Freckman, D. W., Johnstone, J., Jones, G., Jones, M. H., Levesque E., Molau, U., Molgaard, P., Persons, A. N., Svoboda, J., Virginia, R. A., (1997). Open‐top designs for manipulating field temperature in high‐latitude ecosystems. Global Change Biology, 3(S1), 20–32. Matveyeva, N., and Chernov, Y., (2000). Biodiversity of terrestrial ecosystems. The Arctic: environment, people, policy, 233–274.        59   McGuire, A. D., Anderson, L. G., Christensen, T. R., Dallimore, S., Guo, L., Hayes, D. J., Heimann, M., Lorenson, T. D., MacDonals, R. W., Roulet, N., (2009). Sensitivity of the carbon cycle in the Arctic to climate change. Ecological Monographs, 79(4), 523–555. Meyer, G. E., and Neto, J. C., (2008). Verification of color vegetation indices for automated crop imaging applications. Computers and Electronics in Agriculture, 63(2), 282–293. Migliavacca, M., Galvagno, M., Cremonese, E., Rossini, M., Meroni, M., Sonnentag, O., Cogliati, S., Manca, G., Diotri, F., Busetto, L., Cesacatti, A., Colombo, R., Fava, F., Morra di Cella, U., Pari, E., Siniscalco, C., Richardson, A. D., (2011). Using digital repeat photography and eddy covariance data to model grassland phenology and photosynthetic CO 2 uptake. Agricultural and Forest Meteorology, 151(10), 1325– 1337. Molau, U., and Mølgaard, P., (1996). International tundra experiment (ITEX) manual. Danish Polar Center, Copenhagen, Denmark. MØlgaard, P. (1982). Temperature observations in high arctic plants in relation to microclimate in the vegetation of Peary Land, North Greenland. Arctic and Alpine Research. 14, 105–115. Muc, M., Freedman, B., and Svoboda J., (1989). Vascular plant communities of a polar oasis at Alexandra Fiord (79 degrees N), Ellesmere Island, Canada. Canadian Journal of Botany-revue Canadienne De Botanique, 67:1126–1136. Muc, M., J. Svoboda and Freedman B., (1994). 
Soils of an extensively vegetated polar desert oasis, Alexandra Fiord, Ellesmere Island. In: Svoboda, J. and Freedman, B., Ecology of a Polar Oasis: Alexandra Fiord, Ellesmere Island, Canada. Captus University Publications, Toronto, Canada. Myneni, R. B., Keeling, C. D., Tucker, C. J., Asrar, G., and Nemani, R. R. (1997). Increased plant growth in the northern high latitudes from 1981 to 1991. Nature, 386(6626), 698–702. Oberbauer, S. F. and Dawson, T. E. (1992). Water Relations of Arctic Vascular Plants. In: Chapin, F. S., Jefferies, R. L., Reynolds, J. F., Shaver, G. R. and Svoboda, J., Arctic Ecosystems In A Changing Climate: An Ecophysiological Perspective. Academic Press, Inc.: San Diego, California, USA; London, England, UK. Oberbauer, S.F., Elmendorf, S.C., Troxler, T.G., Hollister, R.D., Rocha, A.V., Bret- Harte, M.S., Dawes, M.A., Fosaa, A.M., Henry, G.H.R., Høye, T.T., Jarrad, F.C., Jónsdóttir, I.S., Klanderud, K., Klein, J.A., Molau, U., Rixen, C., Schmidt, N.M., Shaver, G.R., Slider, R.T., Totaland, Ø., Wahren, C.-H., Welker, J.M. (2013).        60   Phenological response of tundra plants to background climate variation using the International Tundra Experiment. Philisophical Transactions of the Royal Society B 368: 20120481. Oberbauer, S. F., Tweedie, C. E., Welker, J. M., Fahnestock, J. T., Henry, G. H. R., Webber, P. J., Hollister, R. D., Walker, M. D., Kuchy, A., Elmore, E., Starr, G., (2007). Tundra CO2 fluxes in response to experimental warming across latitudinal and moisture gradients. Ecological Monographs, 77(2), 221–238. Oechel, W. C., and Sveinbjörnsson, B., (1978). Primary production processes in arctic bryophytes at Barrow, Alaska. In: Oechel, W. C., and Sveinbjörnsson, B., Vegetation and Production of an Alaskan Tundra. Springer-Verlag, New York Inc. 269–298. Otsu, N. (1975). A threshold selection method from gray-level histograms. Automatica, 11(285-296):23-27. Post, E., Forchhammer, M. C., Bret-Harte, M. S., Callaghan, T. 
Appendix

A.1 RGB data extraction program

library(rgdal)  # provides GDAL.open, getRasterData and GDAL.close

get_photo <- function(directory, ...){
    # get the files in the directory and load them into a dataframe
    files <- list.files(directory, pattern="(?i)tif$", full.names=TRUE)
    photos <- data.frame(file=I(files), Av_R=NA, Av_G=NA, Av_B=NA,
                         Index=NA, Index2=NA, Index3=NA)
    for (i in 1:nrow(photos)){
        f <- GDAL.open(photos[i, 'file'])
        r <- getRasterData(f, band=1)  # extracts band 1 = Red
        av_r <- mean(r)
        photos[i, 'Av_R'] <- av_r
        g <- getRasterData(f, band=2)  # extracts band 2 = Green
        av_g <- mean(g)
        photos[i, 'Av_G'] <- av_g
        b <- getRasterData(f, band=3)  # extracts band 3 = Blue
        av_b <- mean(b)
        photos[i, 'Av_B'] <- av_b
        # calculates a normalized green value = rG
        photos[i, 'Index'] <- av_g/(av_r + av_g + av_b)
        # calculates a normalized red value = rR
        photos[i, 'Index2'] <- av_r/(av_r + av_g + av_b)
        # calculates a normalized blue value = rB
        photos[i, 'Index3'] <- av_b/(av_r + av_g + av_b)
        GDAL.close(f)
    }
    return(photos)
}

A.2 Flower counting algorithm

function flowers(~)
%FLOWERS Counts number of light objects (flowers) in digital photos
%Written by Samuel Robinson, May 2013. Adapted from SEEDS program to work
%with image opening and image closing functions.

%% GUI Constructor %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
close all; clc
window_size=[100 100 1000 600];
default_colour=get(0,'defaultUicontrolBackgroundColor');
colourmap=[0.2 0.2 0.2; 1.0 1.0 1.0]; %Colour map for displaying difference objects

%Variables
current_dir='F:\AB_NDVI2012_ALL\flowers\Dryas_control'; %Directory containing photos
filelist=cell(0); %Cell array to store filenames
reviewed=[];      %Logical vector for storing which photos have been reviewed
crop_params=[];   %Matrix for storing cropping parameters.
                  %Column order: [Photolist number, x,y,w,h]
objectnum=[];     %Array for storing numbers of objects in each photo
strelsize=5;      %Variable for structuring element size
threshold=0.9;    %Brightness threshold

%The main figure box (and parent to all other functions in the GUI)
window=figure('Visible','off',...
    'Name','Flower Counter',...
    'NumberTitle','off',...
    'Position',window_size,...
    'MenuBar','none',...
    'Color',default_colour);

%Title text over current_dir box
current_dir_title = uicontrol(window,...
    'Units','characters',...
    'BackgroundColor',default_colour,...
    'Position',[3 44.5 17 1],...
    'String','Current Directory',...
    'Style','text'); %#ok<*NASGU>

%Current directory box
current_dir=uicontrol(window,...
    'Units','characters',...
    'BackgroundColor',[1 1 1],...
    'Position',[3 40 35 4],...
    'String',current_dir,...
    'Style','text');

%Button to change directories
chdir_button = uicontrol(window,...
    'Units','characters',...
    'BackgroundColor',default_colour,...
    'Position',[3 37 34.5 2],...
    'String','Load Directory',...
    'Tag','Changes working directory',...
    'Callback',@chdir_button_push);

%Title text over photolist box
photolist_title = uicontrol(window,...
    'Units','characters',...
    'BackgroundColor',default_colour,...
    'Position',[4 34.8 25.8 1],...
    'String','Photos in current directory',...
    'Style','text');

%Box for list of photos
photolist_listbox = uicontrol(window,...
    'Units','characters',...
    'BackgroundColor',[1 1 1],...
    'Position',[3 20 35.2 14],...
    'String','',...
    'Style','listbox',...
    'Value',1,...       %First thing selected
    'Max',1,'Min',1,... %Max #things selected
    'Callback',@photolist_listbox_click);

%Button to advance photos
next_button = uicontrol(window,...
    'Units','characters',...
    'BackgroundColor',default_colour,...
    'Position',[3 17.5 17 2],...
    'String','Next',...
    'Tag','Next Photo',...
    'Callback',@next_button_push);

%Button to go back a photo
prev_button = uicontrol(window,...
    'Units','characters',...
    'BackgroundColor',default_colour,...
    'Position',[21.5 17.5 16.5 2],...
    'String','Prev',...
    'Tag','Previous Photo',...
    'Callback',@prev_button_push);

%Label for brightness threshold box
threshold_label = uicontrol(window,...
    'Units','characters',...
    'BackgroundColor',default_colour,...
    'Position',[3 15.5 23 1],...
    'ToolTipString','Brightness threshold for subtracted images',...
    'String','Brightness Threshold',...
    'Style','text');

%Textbox for setting threshold values
threshold_textbox = uicontrol(window,...
    'Units','characters',...
    'BackgroundColor',[1 1 1],...
    'Position',[29 15.5 8 1],...
    'ToolTipString','Must be between 0 and 1. Default = 0.9',...
    'String',num2str(threshold),...
    'Style','edit',...
    'Callback',@threshold_check);

%Label for structuring element size box
strelsize_label = uicontrol(window,...
    'Units','characters',...
    'BackgroundColor',default_colour,...
    'Position',[3.8 14 23 1],...
    'ToolTipString','Size of structuring element used to join areas of difference',...
    'String','Structuring Element Size',...
    'Style','text');

%Textbox for setting size of the structuring element used to join areas of
%brightness
strelsize_textbox = uicontrol(window,...
    'Units','characters',...
    'BackgroundColor',[1 1 1],...
    'Position',[29 14 8 1],...
    'ToolTipString','Must be a positive integer. Default = 5',...
    'String','5',...
    'Style','edit',...
    'Callback',@strelsize_check);

%Title text for objects box
objectbox_title = uicontrol(window,...
    'Units','characters',...
    'BackgroundColor',default_colour,...
    'Position',[4 12.5 17 1],...
    'String','OBJECTS FOUND ->',...
    'Style','text'); %#ok<*NASGU>

%Objects found box
objectbox=uicontrol(window,...
    'Units','characters',...
    'BackgroundColor',[1 1 1],...
    'Position',[23 12.5 16 1],...
    'String','0',...
    'Style','text');

%Button to crop photos
crop_button = uicontrol(window,...
    'Units','characters',...
    'BackgroundColor',default_colour,...
    'Position',[3 10 16 2],...
    'String','Crop Photo',...
    'Callback',@crop_button_push);

%Button to save photos
save_button = uicontrol(window,...
    'Units','characters',...
    'BackgroundColor',default_colour,...
    'Position',[23 10 16 2],...
    'String','Save object list',...
    'Tag','Save list to ',...
    'Callback',@save_button_push);

%Console box
console=uicontrol(window,...
    'Units','characters',...
    'BackgroundColor',[1 1 1],...
    'Position',[4 1.3 37 8],...
    'String','Select a directory to begin',...
    'Style','text');

%Picture axes
pic_axes=axes('Parent',window,...
    'Units','characters',...
    'Position',[44.5 1.5 152 44],...
    'TickLength',[0 0],...
    'XTickLabel',{},'YTickLabel',{},...
    'Box','on');

%Makes the entire GUI visible at the end of initialization
set(window,'Visible','on');

%Turns off warnings, so they don't clutter the console window. Errors will
%still be reported.
warning('off','all')

%% Callbacks %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%Called to change directory and populate filelist
function chdir_button_push(~,~)
    set(console,'String','Loading photos...');
    drawnow update;
    newdir=uigetdir(get(current_dir,'String'));
    if newdir~=0
        set(current_dir,'String',newdir); %Changes current_dir to a directory specified in uigetdir (pop-up window)
        files=dir(newdir); %Gets a structure of the files contained in current_dir
        formats={'.jpg';'.tif';'.png';'.bmp'}; %Accepted file formats
        choose=[]; %Files to be chosen
        for i = 3:length(files) %Iterates through files, choosing ones that have the correct extensions (defined in 'formats')
            [~,~,a,~]=fileparts(files(i).name);
            if sum(strcmp(a,formats))>0
                choose=[choose i]; %#ok
            end
        end
        [filelist{1:length(choose)}] = deal(files(choose).name); %Puts file names into cell array "filelist"
        crop_params=[]; %Removes previous cropping parameters
        reviewed=false(length(filelist),1); %Sets entire photo list to "unreviewed"
        objectnum=NaN(length(filelist),1); %Sets number of objects found in each photo to NaN
        set(photolist_listbox,'String',filelist,'Value',1); %Sets the strings in photolist_listbox to the filelist cell array
        refresh_photos; %Refreshes the photo seen in the main axis of the GUI
        set(console,'String',['Set directory to ' get(current_dir,'String') '. ' num2str(length(filelist)) ' photos found.']);
    else
        set(console,'String','No photos found');
        return
    end
end

%Advances photos by 1
function next_button_push(~,~)
    current=get(photolist_listbox,'Value');
    if current<length(filelist)
        set(photolist_listbox,'Value',current+1)
        refresh_photos;
    else
        beep
    end
end

%Goes back by 1
function prev_button_push(~,~)
    current=get(photolist_listbox,'Value');
    if current>1
        set(photolist_listbox,'Value',current-1)
        refresh_photos;
    else
        beep
    end
end

%When photolist_listbox is clicked
function photolist_listbox_click(~,~)
    set(console,'String','Working...');
    refresh_photos;
    set(console,'String',['Photo ' num2str(get(photolist_listbox,'Value')) ' of ' num2str(length(filelist))]);
    drawnow update;
end

%Checks and changes threshold values in threshold_textbox
function threshold_check(~,~)
    temp=get(threshold_textbox,'String');
    temp=str2double(temp);
    if (temp<=0)||(temp>=1)||(isnan(temp))
        msgbox('Enter a number between 0 and 1','Error','error')
        set(threshold_textbox,'String',num2str(threshold));
        return
    end
    threshold=temp;
    refresh_photos;
    set(console,'String','Changed brightness threshold');
end

%Saves the results to a CSV file
function save_button_push(~,~)
    if any(isnan(objectnum)) %Unreviewed photos still hold NaN
        set(console,'String','Some photos have not been reviewed');
        beep
        return
    end
    [filename,pathname]=uiputfile(get(current_dir,'String'));
    if filename==0
        return
    else
        csv=NaN(length(filelist),2);
        csv(:,1)=1:length(filelist);
        csv(:,2)=objectnum;
        csvwrite(fullfile(pathname,[filename '.csv']),csv);
    end
    set(console,'String','Saved object list');
end

%Triggered when a new value is entered for strelsize. Parses new inputs
%and rejects incorrectly formatted entries.
function strelsize_check(~,~)
    temp=get(strelsize_textbox,'String');
    temp=round(str2num(temp)); %#ok<*ST2NM>
    if (isempty(temp))||(temp<0)
        set(console,'String','Enter an integer greater than 1')
        set(strelsize_textbox,'String',num2str(strelsize));
        return
    end
    strelsize=temp;
    set(strelsize_textbox,'String',num2str(strelsize));
    refresh_photos;
    set(console,'String','Changed size of structuring element');
end

%Collects parameters for cropping
function crop_button_push(~,~)
    done=false;
    while done==false
        set(console,'String','Select area to crop');
        params=getrect();
        if isempty(crop_params)||(sum(crop_params(:,1)==get(photolist_listbox,'Value'))~=1) %If cropping parameters don't exist
            crop_params(size(crop_params,1)+1,:)=[get(photolist_listbox,'Value'),params]; %Tags a new set of parameters onto the bottom of crop_params
        else %If cropping parameters do exist
            row=crop_params(:,1)==get(photolist_listbox,'Value'); %Row containing cropping parameters specific to selected photo
            new_params=[get(photolist_listbox,'Value'),crop_params(row,2:3)+params(1:2),params(3:4)]; %New cropping parameters
            crop_params(row,:)=new_params; %Replaces old cropping parameters with new ones
        end
        refresh_photos;
        button = questdlg('Use current cropping selection?','Selection','Yes','No','Redo','Yes');
        switch button
            case 'Yes'
                done=true;
            case 'Redo'
                crop_params(crop_params(:,1)==get(photolist_listbox,'Value'),:)=[];
                refresh_photos;
            case {'No',''}
                crop_params(crop_params(:,1)==get(photolist_listbox,'Value'),:)=[];
                refresh_photos;
                done=true;
        end
    end
end

%% Convenience functions %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%Used to refresh and process sets of photos (uses FLOWERCOUNT below)
function refresh_photos(~,~)
    if isempty(filelist) %If no photos are found
        return %Exits the function
    end
    selected_photo=get(photolist_listbox,'Value'); %Gets position of the selected photo from photolist_listbox
    path=fullfile(get(current_dir,'String'),filelist{selected_photo}); %Creates the full path of the selected photo
    photo=imread(path); %Reads selected photo
    if ~isempty(crop_params)&&sum(crop_params(:,1)==selected_photo)==1 %If matching cropping parameters are found
        photo=imcrop(photo,crop_params(crop_params(:,1)==selected_photo,2:5)); %Crops photo
    end
    thresh=str2double(get(threshold_textbox,'String')); %Threshold of pixels
    num=flowercount(photo,thresh); %Counts number of objects in photo, using threshold provided
    set(objectbox,'String',num); %Displays the number of objects in the objectbox
    objectnum(selected_photo)=num; %Saves the number of objects found
    imshow(photo,'Parent',pic_axes); %Displays photo
    reviewed(selected_photo)=true; %Sets photo as 'reviewed'
    set(console,'String',[num2str(num) ' objects found.']);
end

function count=flowercount(photo,thresh)
% FLOWERCOUNT Number of light objects (flowers) in an image
%
% COUNT=FLOWERCOUNT(PHOTO,THRESH) Uses uint8 image PHOTO, thresholds
% and groups objects using THRESH. Returns the number of light objects
% found within the image as integer COUNT.
%
% PHOTO  uint8 image
% THRESH Brightness threshold to use (0.9 works well)
%
% Written by Samuel Robinson, Spring 2013
    A=im2bw(photo,thresh); %Keeps the parts of the image brighter than THRESH
    A=imopen(A,strel('disk',strelsize));  %Opening removes small bright specks
    A=imclose(A,strel('disk',strelsize)); %Closing joins nearby bright regions
    objects=bwconncomp(A,4); %4-connected components
    count=objects.NumObjects;
end
end

work_cqbpqpavmberhlaalobw6phljy ---- Telecardiology for chest pain management in Rural Bengal – A private venture model: A pilot project
Tapan Sinha *, Mrinal Kanti Das
Introduction: The rural countryside does not get the benefit of proper assessment in ACS because of: 1. Absence of qualified and trained medical practitioners in remote places. 2.
Suboptimal assessment by inadequately trained rural practitioners (RPs). 3. Non-availability of a 24-hour ECG facility. Telecardiology (TC), where available, has low penetration and marginal impact. Much discussion is devoted to fast-tracking the transfer of modern therapy at the CCU, including interventions and CABG; these remain utopian even in urban localities, not to speak of inaccessible, poverty-stricken rural areas. Objectives: To ascertain the feasibility of low-cost TC run on individual initiative and supported by NGOs working in remote areas, extending consultation and treatment to those who can at least afford/approach only very basic medical supportive care. Methods: 1. Site selection: A remote island (267.5 sq km) in South Bengal close to the Sundarban Tiger Reserve, inhabited by about 480,000 people, with about 100,000 aged above 60 years. The nearest road to Kolkata, 75 km away, is accessible only by five ferry services, available from 5 am to 4 pm, across rivers more than two km wide in places. 2. RP training: Classroom trainings organized, and leaflets in the vernacular prepared and distributed in accordance with AHA patient information leaflets. 3. One nodal centre made operational from March 2015 after training three local volunteers (not involved in RP work) for 2 weeks on: a. Single-channel digital recording by BPL 1608T, blood pressure measurement by Diamond 02277, pulse oxygen estimation by Ishnee IN111A and CBS estimation by Easy Touch ET301. b. Digital photography, transmission by email over a 2G network to either of us in Kolkata by Micromax Canvas P666 Android Tab. 4. Patient selection by filling a form: chest pain at rest and exercise, chest discomfort, radiation of pain, shortness of breath, palpitation, sudden sweating, loss of consciousness, hypertension, diabetes, age > 60 yrs. c. Mobile phone alert call to either of us after successful transmission. d.
Review of ECG, BP, R-CBS and oxygen saturation information received by email as a single attachment, as early as possible, on an Apple Air Tab on a 3G network. e. Call back for further history/information and advice. Results: 22 patients were assessed from March to the end of May 2015, all referred by RPs who had undergone training. Two (2) cases of IHD by ECG were noted; both were known cases of inadequately controlled hypertension and diabetes. No case of ACS was received. Two (2) cases of known COAD were received, who were also hypertensive and had infection. Three (3) new cases of hypertension were detected. All other cases were normal for the parameters examined. Medical advice was given, further investigations suggested, and follow-up maintained by RPs. Advice for COAD was given after consultation with colleagues. Average time taken to call back after receiving an alert call was 25 ± 5 min. All calls were received between 11 am and 7 pm. Two cases needed to be shuffled between us for convenience. Statistical analysis was avoided as the number of cases was too small. Observations: 1. Public awareness needs to be increased. 2. Resistance from RPs for pecuniary reasons may be overcome by involving them. 3. It is feasible to operate the system successfully at a nominal cost of only about Rs. 70/- per case, excluding the primary investment in training, instrumentation and service connection. 4. Medical and social impacts remain to be assessed. 5. Multiple nodal centres and the involvement of all health care providers should usher in the long-cherished 'health for all'.

A study of etiology, clinical features, ECG and echocardiographic findings in patients with cardiac tamponade
Vijayalakshmi Nuthakki *, O.A. Naidu
9-227-85 Laxminagar Colony, Kothapet, Hyderabad, India
Cardiac tamponade is defined as hemodynamically significant cardiac compression caused by pericardial fluid.
40 cases of cardiac tamponade were evaluated for clinical features of Beck's triad, ECG evidence of electrical alternans, etiology, and biochemical analysis of pericardial fluid. Cardiac tamponade in these patients was confirmed by echocardiographic evidence apart from clinical and ECG evidence. We had 6 cases of hypothyroidism causing tamponade. Malignancy was the most common etiology, followed by tuberculosis. Only 14 patients had hypotension, which points to the fact that echo showed signs of cardiac tamponade prior to clinical evidence of hypotension and aids in better management of patients. Electrical alternans was present in 39 patients. We found that cases with subacute tamponade did not have hypotension but had echo evidence of tamponade. 37 patients had exudate and 3 patients had transudate effusion. We wanted to present this case series to highlight the fact that electrical alternans has high sensitivity in diagnosing tamponade, that hypothyroidism as an etiology of tamponade is not that rare, and that patients with echocardiographic evidence of tamponade may not always be in hypotension.

Role of trace elements in health & disease
G. Subrahmanyam 1,*, Ramalingam 2, Rammohan 3, Kantha 4, Indira 4
1 Department of Cardiology, Narayana Medical Institutions, Chinta Reddy Palem, Nellore 524 002, India
2 Department of Biochemistry, Narayana Medical Institutions, Chinta Reddy Palem, Nellore 524 002, India
3 Department of Pharmacology, Narayana Medical Institutions, Chinta Reddy Palem, Nellore 524 002, India
4 College of Nursing, Narayana Medical Institutions, Chinta Reddy Palem, Nellore 524 002, India
A cross-sectional, one-time, community-based survey was conducted in an urban area of Nellore. 68 subjects between 20 and 60 years of age were selected using convenience sampling techniques.
Detailed demographic data were collected, and investigations such as lipid profile, blood sugar, ECG, periscope study (to assess vascular stiffness) and trace-element analysis of copper and zinc were done apart from detailed clinical examination. ECG as per Minnesota code criteria was carried out for the diagnosis of coronary artery disease. The diagnostic criteria for hypertension, BMI and hyperlipidemia as per the Asian Indian hypertensive guidelines 2012, and the ICMR guidelines for diabetes mellitus, were followed. Hypertension was present in 39.7%, overweight in 14.7%, obesity in 27.9%, and ECG abnormality as evidence of ischemic heart disease in 8.38%, RBB in 3.38% and sinus tachycardia in 3.38%. 24% had raised blood sugar, and cholesterol was raised in 25% of subjects. The periscope study was done in 48 samples. Vascular stiffness indices such as pulse wave velocity and augmentation index were increased in 75% of subjects. Trace-element analysis for zinc and copper was done for 49 samples. Imbalance of trace elements, either increased or decreased levels, was present in 37 subjects.
Vascular stiffness and alteration
Indian Heart Journal 67 (2015) S121–S133
work_cqtozv7subba5fgsvz5wcpqxza ---- Paper III
Christina A. Pedersen, Richard Hall, Sebastian Gerland, Agnar H. Sivertsen, T.
Svenøe and Christian Haas, "Combined Airborne Profiling over Fram Strait Sea Ice: Fractional Sea-Ice Types, Albedo and Thickness Measurements", submitted to Cold Regions Science and Technology, 2007.

Combined Airborne Profiling over Fram Strait Sea Ice: Fractional Sea-Ice Types, Albedo and Thickness Measurements

C. A. Pedersen a,∗, R. Hall a,1, S. Gerland a, A. H. Sivertsen b, T. Svenøe a, C. Haas c,2
a Norwegian Polar Institute, Tromsø, Norway
b Fiskeriforskning, Tromsø, Norway
c Alfred Wegener Institute for Polar and Marine Research, Potsdam, Germany

Abstract

This paper describes the data collected during an expedition from the marginal ice zone into the multi-year sea ice in the Fram Strait in May-June 2005 to measure the variance in sea-ice types, albedo and thickness, and the techniques used to analyze the data. A combination of methods was used to extract more information from each data set compared to what originally and traditionally is obtained. The principal information from the three methods applied gives the sea-ice types from digital photography, the spectral and broadband reflectance factor from a spectrometer, and the thickness profile from an electromagnetic "bird", with emphasis on using, adapting and combining the different techniques. The digital images were standardized, textural features extracted and a trained neural network used for classification, while the optical measurements were normalized and standardized to minimize effects from the set-up and atmospheric conditions. The fractional sea-ice types proved to have large spatial variability, with average fractions for snow-covered sea ice of 81.0%, thick bare ice 4.0%, thin ice 5.3% and open water 9.6%, i.e. an average ice concentration of 90.3%. The average broadband reflectance factor was 0.73, while the mean sea-ice thickness (including snow) was 2.1 m. Relatively high correlations were found between the measured albedo and sea-ice concentration (0.69); however, the correlation would probably be higher were it not for the possible tilt of the helicopter and offsets in the co-location procedure. The paper also addresses the lessons learned for future fusion of data from large field campaigns.

Key words: sea ice, airborne measurements, albedo, classification

Preprint submitted to Elsevier, 31 October 2007

1 Introduction

Sea ice is a complex heterogeneous cover which plays an important role in the Earth's climate system. To obtain a better understanding of the distribution of sea ice, in situ data are collected during field campaigns. However, scientific operations in the Polar Regions are limited, mainly due to the cost of such operations, ship availability and competition from other scientific programmes for ship-time. Therefore, when opportunities to collect multiple data sets arise, it is important to co-ordinate all activities to ensure not only that as many parameters as possible are covered efficiently, but also that the data can be easily combined and compared for further analysis. This paper describes the data collected during an expedition from the marginal ice zone into the multi-year sea ice in the Fram Strait in May-June 2005 to measure the variance in sea-ice types, albedo and thickness, and the techniques used to analyze the data. A combination of methods was used to extract more information from each data set compared to what originally and traditionally is obtained. The classification of sea-ice types only involved surfaces identified during winter and spring conditions (that is, snow-covered sea ice, bare ice, thin ice and open water). E.g. melt ponds were not included, since the onset of melt had not started at the time of the measurements. However, the provided techniques are quite general, so only minor changes are required to include e.g. melt ponds or other necessary sea-ice types.
A main question addressed is how albedo varies in relation to the type of sea ice. While there is a simple relationship where thick ice has a high albedo and thin ice has a low albedo, this only applies to thin ice covers up to 30 cm thick under cold winter conditions (Laine, 2004). However, under summer conditions in the Arctic Ocean, the correlation between albedo and sea-ice concentration (extent) extracted from remote sensing data is found to be only 0.34 (0.40), with large variability between individual spatial areas (Laine, 2004). For the Northern Hemisphere as a whole, the numbers are 0.56 (0.50). Previous studies on classifying sea-ice types from helicopter images have mostly concerned identifying melt ponds. As part of the Surface Heat Budget of the Arctic Ocean (SHEBA) field experiment, aerial photography and video camera flights were completed between spring and autumn in 1998 (Perovich et al., 2002; Tschudi et al., 2001). Perovich et al. (2002) calculated fractions of ice, new ice, ponds and leads using image processing software and manually selected thresholds based on histograms, while Tschudi et al. (2001) identified melt pond and open water fractions from video images using spectral information in the three RGB bands of the converted images. Other previous studies include Derksen et al. (1997), employing low-level aerial infrared photographs for identifying melt pond fractions, and Fetterer and Untersteiner (1998), utilizing maximum likelihood algorithms to select a threshold intensity to separate pond distribution from ice distribution. More advanced classification tools for detecting sea-ice types were used in studies analyzing Synthetic Aperture Radar (SAR) images.

∗ Corresponding author. Email address: christina.pedersen@npolar.no (C. A. Pedersen).
1 Present address: Kongsberg Satellite Services, Tromsø, Norway.
2 Present address: University of Alberta, Edmonton, Alberta, Canada.
Although the SAR scenes have a much coarser spatial resolution than the aerial photography presented in this paper, some of the techniques applied can be adapted to the photographs. Bogdanov et al. (2005) used a neural network and linear discriminant analysis together with data fusion for automatic classification of SAR sea-ice images. Substantial improvements were gained by fusion of several data types. Texture statistics from the grey-level co-occurrence matrix were used in Barber and Le Drew (1991). A few approaches applied to optical remote sensing data also exist. A data fusion algorithm involving an iterative segmentation procedure on SAR images and extraction of spectral characteristics from AVHRR images resulted in distinguishing between six sea-ice types (Lythe et al., 1999). Markus et al. (2002) used a threshold-based algorithm on individual Landsat bands to distinguish between white ice, bare/wet ice, melt ponds and open water. This paper presents an interfusion of methods for characterizing individual sea-ice types by discriminating between snow-covered sea ice, thick bare ice, thin ice and open water based on the measurements from the airborne sea-ice profile. Digital images, optical reflectance measurements and electromagnetic thickness measurements were combined to obtain a detailed description of the sea-ice physical and optical properties. Sec. 2 gives an overview of the experimental set-up and measurements conducted during the expedition in the Fram Strait in spring 2005. We describe in detail the methods used to analyze the images, including standardization and cross-correlation (Sec. 3.1.1), feature extraction (Sec. 3.1.2) and classification (Sec. 3.1.3). The optical measurements are discussed in Sec. 3.2, and co-location and fusion (including spectral unmixing) of the three datasets in Sec. 3.3. Results and discussions on the individual datasets are given for sea-ice types in Sec. 4.1, reflectance measurements in Sec. 4.2, sea-ice thickness in Sec.
4.3, and data fusion in Sec. 4.4. Conclusions and an outlook, with lessons learned for future fusion of data collected during large field campaigns, are given in Sec. 5.

2 Experiment setup

The Fram Strait is the main passage for sea ice and water leaving the central Arctic Ocean, and is important for the freshwater balance of the ocean and for ocean circulation and convection (Kwok et al., 2004; Vinje, 2001). In May-June 2005, the Norwegian Polar Institute led a field campaign in the Fram Strait (Fig. 1), where three sets of airborne measurements were collected by helicopter (Tab. 1). Since the optical measurements require a clear field of view underneath the helicopter, two separate flights were carried out. The first flight included digital photography and optical measurements, while the second was for the electromagnetic (EM) measurements. For the optical flight, the digital camera and the 8° fore-optics of the spectrometer were mounted on an aluminum plate and fastened to the floor of the helicopter, looking down through a Lexan-glass window (Fig. 2).

Fig. 1. Sea-ice concentration (in percent) for the Fram Strait on 3rd June 2005 after met.no, with Svalbard to the right and Greenland to the left (grey is land area and white is no data). The rectangle marks the investigated area 78.00°-79.05° N and 2.8°-4.8° W.

Method                     Instrument                              Sampling frequency
Fractional sea-ice types   Canon EOS 350D digital camera           5 s
Reflectance                ASD FieldSpec Pro spectrometer          2 s
Ice thickness              Ferra Dynamics electromagnetic "bird"   0.1 s

Table 1. Airborne measurements.

The position, speed and altitude of the helicopter were logged with a Global Positioning System (GPS) receiver, and the altitude and speed of the helicopter were restricted so as to obtain overlapping images. A typical optical

Fig. 2. Set-up for the optical flight with the digital camera and spectrometer fore-optics mounted on the floor of the helicopter.
The camera's and spectrometer's fields of view are shown relative to each other.

flight had an image footprint of 200 m in the flight direction and 150 m across the flight direction, with 50-75 m overlap between successive images. In reality, each pixel in the image footprint is rectangular due to the speed of the helicopter and the exposure time of the camera. A typical footprint for the spectrometer was, for simplicity, assumed to be a circle with a diameter of 15-25 m. As for the pixels, however, the footprint is likely elliptical due to the helicopter's movement during the time taken to conduct a measurement. The reflectance measurements and digital images were co-located post-flight based on GPS time and position. The EM measurements, collected on a separate flight, could not be directly compared to the other measurements, due to a slightly different track and a fast drifting ice cover. The electromagnetic data for deriving ice thickness were processed following standard routines as described in Haas et al. (1997, 2006), giving the distance between the snow surface and the ice underside (ice plus snow thickness equaling the total thickness, hereafter referred to as "ice thickness"). The distance between individual measurement points on the ice is about 3 to 4 m (sampling frequency 10 Hz), and the absolute accuracy of the method is better than 90% (Haas et al., 1997). The spectral albedo is the ratio of reflected to incident irradiance (solar light integrated over the hemisphere), while the spectral reflectance is the ratio of reflected to incident radiance (solar light over a restricted field of view). The quantity we collected, the spectral reflectance factor (SRF), is the ratio of the reflected radiance to the radiation reflected from a perfect, white, diffuse surface (Spectralon) (Nicodemus et al., 1977). The fore-optics of the spectrometer was mounted behind a Lexan window in the helicopter.
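The quoted 15-25 m spectrometer footprints are consistent with simple nadir-viewing geometry. A minimal check in Python; the altitudes used below are illustrative assumptions, not values from the flight logs:

```python
import math

def footprint_diameter(altitude_m, fov_deg=8.0):
    """Ground footprint diameter of a nadir-looking conical field of view.

    A cone with full opening angle fov_deg projects, from altitude h,
    onto a circle of diameter 2 * h * tan(fov/2).
    """
    return 2.0 * altitude_m * math.tan(math.radians(fov_deg / 2.0))

# Illustrative flight altitudes (assumed, not logged):
for h_m in (110, 180):
    print(h_m, round(footprint_diameter(h_m), 1))
```

For the 8° fore-optics, altitudes of roughly 110-180 m give footprint diameters of about 15-25 m, matching the range assumed above.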
After the campaign we realized that the curvature of the Lexan window was acting as a collecting lens in the visible, directing light towards the fore-optic. The Lexan window was also found to have absorption bands at 350-380 nm, at about 1700 nm and above 2200 nm. The reflectance spectra also showed an unexpected peak, in particular at UV wavelengths. The Lexan window probably disturbed the measurements, but the net effect is difficult to assess.

2.1 Description of sea-ice types

The distinction and classification between sea-ice types is not a straightforward task. While the WMO Sea-Ice Nomenclature (Secretary of World Meteorological Organization, 1970) is the accepted reference, it does not easily allow for the slight variations in ice cover that can be required in detailed scientific studies. As a result, individual scientific studies have developed sea-ice classification schemes based on the WMO nomenclature, but modified to account for the many variations observed during field campaigns (Armstrong et al., 1966; Steffen, 1986). For the purpose of this paper, however, all these classification schemes are too detailed. Our measurements were collected in spring, before the onset of melt, and four broad and quite general sea-ice types (including snow cover), as given in Table 2, were identified. The classification of the four types was based on visual observations of surface characteristics (Fig. 3) during the cruise. Snow covered and bare sea ice were separated mainly based on color, since snow has a white appearance compared to the blue-green bare ice. Thin ice covers the broadest range of types, with a wide range in spectral reflectivity. It should be thought of as an intermediate type between thick blue-green ice and open water. Thin ice is separated from the other classes by its grayish color (it also includes the masses consisting of small ice floes partly embedded within the new ice).
Open water is easily separated from the other classes by its dark appearance, due to its relatively constant, very low value of 0.07 over the visual part of the spectrum (Brandt et al., 2005). The open water class also includes dark nilas. The four classes correspond well with ice types chosen for classification elsewhere (Massom and Comiso, 1994), as the unambiguous distinction of more ice types may be difficult. After the onset of melt, however, the picture is quite different, with large areas of melting, wet snow and melt ponds on the ice. The techniques described in the next sections are general, so the inclusion of more sea-ice types is relatively straightforward.

Class index   Description of sea-ice types
I             Snow covered sea ice
II            Thick bare sea ice
III           Thin ice (grey ice and light nilas)
IV            Open water

Table 2. Observed sea-ice types from the marginal ice zone into the multi-year sea ice in spring 2005 in the Fram Strait.

Fig. 3. Sea-ice image example where each of the four sea-ice types is represented. The colors correspond to the spectra in Fig. 5.

3 Data analysis

3.1 Digital photography

The images were originally about 2 MB in size, with an average pixel size equivalent to 0.05 m for a typical helicopter altitude. To reduce the processing time the images were down-sampled by averaging over every 10th pixel, giving a down-sampled image of 230x345 pixels and a resolution of approximately 0.50 m.

3.1.1 Image standardization

The exposure time, aperture opening and white balance parameters of the camera were set to automatic, and the color intensity of the images was therefore scaled according to the amount of light and dark pixels in the image. For example, the snow in an image consisting of only snow (bright pixels) seemed darker than the snow in an image consisting of both snow and open water (bright and dark pixels), as also experienced by others (Derksen et al., 1997).
The brightness was not constant across the images; particularly for snow we experienced darker intensities along the edges due to vignetting. However, this did not cause a major problem and was not corrected for. The white balance in the images was corrected for by standardizing the images according to the following iterative procedure (Fig. 4): The first image with good contrast was selected and scaled to an appropriate range. The sub-images of 100 pixels in the flight direction from two succeeding and overlapping images (the last 100 pixels from the first image and the first 100 pixels from the second image) were normalized and cross-correlated. The maximum in the cross-correlation matrix gave the position where the two images were aligned, or had the best match. The second sub-image was then normalized so that the two overlapping sub-images had the same mean and standard deviation. Due to the angle, tilt and variable speed of the helicopter, the images did not completely overlap in the flight direction, and for some images manual adjustments were required.

3.1.2 Feature selection

Every pixel in an image was classified separately based on 14 features for texture characterization according to Theodoridis and Koutroumbas (1999) (Tab. 3). Features 5-11 were calculated inside a 7 × 7 sliding window of the grey-level indexed image, and provide information related to the grey-level distribution of the image, but do not give information about the relative positions of the various grey levels within the image. Features 12-14 are based on the second-order histogram; they consider pixels in pairs and investigate the relative distance and orientation between them. Maximum discrimination between SAR sea-ice types was obtained when considering the grey-level co-occurrence matrix for parallel pixels with an interpixel distance of one (Barber and Le Drew, 1991), and that approach was followed for features 12-14.
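As an illustration of features 12-14, the following is a minimal sketch (Python with NumPy; the function and variable names are our own, not from the processing chain) of a grey-level co-occurrence matrix for horizontally adjacent pixels at inter-pixel distance one, and the three GLCM features used here:

```python
import numpy as np

def glcm_features(window, levels=8):
    """GLCM contrast, energy and homogeneity for one sliding window.

    Pixel pairs are horizontal neighbours at inter-pixel distance one,
    following the set-up of Barber and Le Drew (1991).
    """
    # Quantize the window to a small number of grey levels
    q = np.minimum((window.astype(float) * levels /
                    (window.max() + 1e-9)).astype(int), levels - 1)
    glcm = np.zeros((levels, levels))
    for i, j in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        glcm[i, j] += 1
    p = glcm / glcm.sum()                          # normalize to probabilities
    ii, jj = np.indices(p.shape)
    contrast = ((ii - jj) ** 2 * p).sum()          # high for rough texture
    energy = (p ** 2).sum()                        # high for uniform texture
    homogeneity = (p / (1.0 + np.abs(ii - jj))).sum()
    return contrast, energy, homogeneity

# A flat, snow-like 7x7 window: zero contrast, maximal energy and homogeneity
c, e, h = glcm_features(np.full((7, 7), 200.0))
print(c, e, h)
```

A uniform window gives contrast 0 and energy and homogeneity 1, while a strongly textured window (e.g. a checkerboard of dark and bright pixels) gives high contrast, which is what makes these features useful for separating smooth snow from broken thin ice.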
The best features for separating between snow covered ice, thick bare ice, thin ice and open water were selected according to Fisher Discriminant Analysis (Johnson and Wichern, 2002). Fisher Discriminant Analysis is a transformation of the multivariate observations from the feature space into the Fisher space, where the linear combination of features achieving maximum separation between the classes is selected. The Fisher discriminant is calculated from feature vectors with known classification labels, which requires a training and test set of data where the class is known.

Fig. 4. The standardizing procedure for obtaining a homogeneous time series of the airborne images. The upper panels show two succeeding and overlapping images with different brightness and contrast. The two sub-images (of 100 pixels width, marked with a frame) were cross-correlated, giving the matrix in the middle left. The black dot marks the maximum in the cross-correlation matrix, giving the best alignment between the sub-images (shown in the middle right). The second sub-image was scaled to have the same mean (µ) and standard deviation (σ) as the first. The bottom image shows the two standardized and overlapping images after the standardization procedure.

The training set is used for constructing the classifier, while the test data are used for testing the performance of the classifier. The test and training data sets were created by manual classification of the four sea-ice types. Each combination of features (choose k out of 14 features, where k = 1, ..., 14, giving 16 383 combinations) was tested by calculating

Feature
1    Red intensity
2    Green intensity
3    Blue intensity
4    Grey-level intensity
5    Mean intensity
6    Variance
7    Skewness
8    Kurtosis
9    Entropy
10   Energy
11   Coefficient of variance
12   GLCM contrast
13   GLCM energy
14   GLCM homogeneity

Table 3. Textural features for sea-ice classification.
Features 5-11 are based on first-order statistics, while features 12-14 come from second-order statistics and the grey-level co-occurrence matrix (GLCM).

the Fisher discriminant, applying the Fisher classification rule (Johnson and Wichern, 2002) and evaluating the total average classification error based on feature vectors with known class membership. The set of features giving the smallest classification error was chosen for further investigation.

3.1.3 Classification

For classification, a feed-forward back-propagation neural network (Haykin, 1999) with 3 layers was used. The first layer has a size (number of neurons) equal to the number of features, the hidden layer has twice as many neurons as there are features, and the output layer has one neuron (separating the four classes on the interval [0,1]). All neurons have the log-sigmoid as activation function. The neural network is trained by presenting feature vectors with known classification labels to the network, and the network updates its weights adaptively to minimize the sum of squared errors and achieve the expected output. Classification based on texture features (calculated over a sliding window) often runs into problems on the edges between classes. For example, an image consisting of a sharp edge between snow covered ice and open water will, in the classified image, often have a small transition zone where intermediate classes (bare ice or thin ice) are detected. Since the median filter is particularly effective at reducing noise while at the same time preserving edge sharpness (Gonzalez and Woods, 1992), the classified images were median filtered (with a filter size equal to the window size used for extracting the texture features). This approach has also been used by others (Tschudi et al., 2001; Derksen et al., 1997).

3.2 Optical measurements

The reflected radiance from the Spectralon was collected before and after the flight; only the reflected surface radiance was collected during the flight.
The reflected surface radiance is affected by the amount of clouds and changes as clouds drift, so variable light conditions result in errors in the SRF (both in its spectral signature and in its absolute value). To reduce the effect of changing light conditions and overcome some of the shortcomings of the set-up, the SRF measurements were normalized with the ratio of the reflectance over a large, homogeneous, snow-covered surface measured both from inside the helicopter when flying and from the ground afterwards, as also done by Allison et al. (1993) for their optical airborne measurements. As the irradiance changed slightly, but notably, during the flight, the SRF measurements were also scaled to minimize that effect. Some aspects, such as the different footprints of the helicopter and ground measurements and the time difference and changing light conditions between them, may be used to question this normalization, but all together we consider it the most reasonable data standardization available.

3.3 Data fusion

The reflectance measurements and images were co-located based on time and position. For each reflectance spectrum the footprint in the image was identified and the fractions of sea-ice types within that footprint calculated (Fig. 5). As the co-location was based on time (resolution 1 s) and the helicopter had a typical speed of 25-30 m s−1, some error in the co-location procedure must be assumed. The angle and tilt of the helicopter change the direction of the spectrometer footprint, and tilt errors can change the albedo (more under clear sky, less under overcast conditions; Allison et al. (1993)). Visual inspection of examples containing measurements over changing surfaces confirmed that these errors were generally small, and no attempt was made to correct for them.

Fig. 5.
An example of the co-location procedure, with the original RGB image (upper panel, left) and the footprint of the spectrometer co-located within the grey-level, down-sampled image (upper panel, right). The classified subset of the image (bottom panel, left) gives fractions of 75.8%, 5.5%, 16.0% and 2.7% for snow covered ice, thick bare ice, thin ice and open water, respectively, with the corresponding characteristic curves (endmembers) for the four sea-ice types (in color) together with the measured and calculated spectral reflectance factor (SRF) (bottom panel, right).

3.3.1 Spectral unmixing

Spectral unmixing is an unsupervised classification technique based on the spectral reflectances, modeling each measured reflectance spectrum as a linear combination of characteristic reference spectra (so-called endmembers). If the endmembers are known, spectral unmixing gives the fraction of each sea-ice type within the spectrometer footprint by solving (Vikhamar, 2003)

f · αch(λ) = r(λ)    (1)

in a least squares manner. Here f is the (m × 4) matrix of fractions of the four sea-ice types for m images, r(λ) is the (m × n) matrix of measured reflectance spectra, and αch(λ) is the (4 × n) matrix of characteristic albedo curves for each sea-ice type, where n is the number of wavelength bands. The endmembers were identified directly from the classified images (the fraction of sea-ice types within the spectrometer footprint in the image) and the spectral reflectance measurements by using inverse spectral unmixing. This was done in a partly iterative manner, by first assuming standard characteristic albedo curves from previous measurements, as suggested by Tschudi et al. (2001). Based on the classified image fractions and the endmembers, an additional measure of the SRF could be calculated by weighting the characteristic spectra with the fractions in the spectrometer footprint, as done in Perovich et al. (2002).
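Eq. (1) can be solved per spectrum with an ordinary least squares fit. A minimal sketch (Python/NumPy) follows; the endmember values are invented for illustration, not the Fram Strait endmembers, and the fractions are clipped and renormalized to keep them physical:

```python
import numpy as np

# Hypothetical endmember spectra alpha_ch over five wavelength bands
# (rows: snow covered ice, thick bare ice, thin ice, open water).
# The values are illustrative only.
alpha_ch = np.array([
    [0.92, 0.90, 0.88, 0.85, 0.80],   # snow covered ice
    [0.70, 0.65, 0.60, 0.55, 0.45],   # thick bare ice
    [0.30, 0.26, 0.22, 0.18, 0.15],   # thin ice
    [0.08, 0.07, 0.07, 0.06, 0.05],   # open water
])

def unmix(r, alpha):
    """Least squares solution of f . alpha = r (Eq. 1) for the sea-ice
    type fractions f of one measured spectrum r."""
    f, *_ = np.linalg.lstsq(alpha.T, r, rcond=None)
    f = np.clip(f, 0.0, None)          # no negative fractions
    return f / f.sum()                 # fractions sum to one

# A spectrum that is exactly 70% snow and 30% open water unmixes back
# to those fractions:
r = 0.7 * alpha_ch[0] + 0.3 * alpha_ch[3]
print(np.round(unmix(r, alpha_ch), 2))
```

With more wavelength bands than classes the system is overdetermined, and the least squares fit also absorbs measurement noise, which is why the fractions recovered from real spectra deviate from the image-derived ones (Sec. 4.1).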
4 Results and discussion

This section presents the measurements collected from the marginal ice zone into the multi-year sea ice in the Fram Strait on 3rd June 2005. This day was chosen because the sky was overcast, with the reasonably stable light required by the optical measurements. All together 592 images, 1 487 spectra and 26 488 thickness signals were standardized and classified (Sec. 3). The airborne measurements were collected from a transect going east-west-north-east for the optics and photography and east-west-east for the EM measurements (Fig. 6). The two east-west transects, seen relative to the ice surface, become more distant towards the east, as the ice in the western Fram Strait drifts relatively fast in a S-SW direction. From 3°W to 4° 36' W the EM bird flight line more or less coincides with the first east-west transect of the optical flight, so these sections were taken out for comparing the sea-ice thicknesses with the findings and characteristics from the optics and photography. Taking the relatively fast ice drift in the western Fram Strait into account, we expect this comparison to be possible only for general ice regime characteristics, not for individual floes at high spatial resolution. The next paragraphs describe the principal information obtained from each instrument, as well as information obtained from spectral unmixing/inverse spectral unmixing and data fusion. The fractional sea-ice types (Sec. 4.1) are the principal information from the neural network classification of the digital images, but were also calculated from spectral unmixing of the optical data and thresholding of the thickness data as proxies for comparison. The spectral and broadband reflectance factors (Sec. 4.2) are the principal information from the optical measurements; optical properties were, however, also calculated from inverse spectral unmixing of the classified images. The sea-ice thickness (Sec. 4.3) can only be calculated from the EM measurements. In Sec.
4.4 the relationships and correlations between fractional sea-ice types, optical properties and sea-ice thickness are investigated.

Fig. 6. Flight tracks for the two helicopter flights on 3rd June 2005 in the Fram Strait. The red track is for the optical and photography measurements, while the green is for the electromagnetic measurements (only the measurements between 3-4.6°W are shown and used; the track, however, extends from 2°-10°W). Keep in mind the time difference: the SAR image is from 07:31 GMT, the optical flight was flown between 07:27-08:19 GMT, and the electromagnetic flight between 11:08-12:32 GMT. The sea ice in the Fram Strait drifts relatively fast in a S-SW direction, so even where the two tracks coincide in position they did not cover the same area relative to the ice.

4.1 Sea-ice types

The test and training data sets (Sec. 3.1.3) were created by manually assigning 120 000 pixels within 23 images to the four sea-ice classes. The best set of features was selected according to Fisher Discriminant Analysis (Sec. 3.1.2) by performing 50 Monte Carlo simulations in which the test and training sets were chosen randomly from the set of classified pixels for each simulation. The best features for separating between the sea-ice classes were found to be the three RGB intensities, the coefficient of variance (standard deviation divided by the mean), the entropy (a measure of histogram uniformity) and the GLCM homogeneity. The mean plus/minus one standard deviation of the RGB intensities was found to separate the four classes completely, with only a slight overlap between thin ice and open water. The coefficient of variance was high for thin ice, and its mean plus/minus one standard deviation separates thin ice from the other classes, while the mean of the entropy plus/minus one standard deviation separates thick bare ice from thin ice. No such simple relationship was found for the GLCM homogeneity.
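The selection principle can be illustrated with a univariate version of the Fisher criterion: the ratio of between-class to within-class scatter. The sketch below (Python/NumPy) is a simplified, one-feature stand-in for the full multivariate Fisher space transformation, and also confirms the number of candidate feature subsets:

```python
import numpy as np
from itertools import combinations

def fisher_score(x, y):
    """Univariate Fisher criterion: between-class scatter divided by
    within-class scatter for feature values x with class labels y.
    (A simplified stand-in for the multivariate Fisher discriminant.)"""
    classes = np.unique(y)
    grand_mean = x.mean()
    s_between = sum((y == c).sum() * (x[y == c].mean() - grand_mean) ** 2
                    for c in classes)
    s_within = sum(((x[y == c] - x[y == c].mean()) ** 2).sum()
                   for c in classes)
    return s_between / s_within

# A feature whose class means are far apart relative to the within-class
# spread scores high, and would be kept:
x = np.array([0.00, 0.10, 1.00, 1.10])
y = np.array([0, 0, 1, 1])
print(fisher_score(x, y))

# Choosing k out of 14 features for k = 1..14 gives 2**14 - 1 subsets:
print(sum(1 for k in range(1, 15) for _ in combinations(range(14), k)))
```

Exhaustively scoring all 16 383 subsets is feasible at this scale; the Monte Carlo repetitions then guard against a lucky train/test split.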
The neural network proved to be extremely efficient at discriminating between the four sea-ice types, with only a 1.06% classification error on the test set. The confusion matrix gives the number of times a feature vector belonging to class i (row) is classified to class j (column), where i, j are the four classes (Table 4). The correctly classified pixels are along the diagonal from upper left to lower right. The test resulted in 98-100% correct classification for the different classes, which is more than sufficient for routine use. Open water is easily distinguished from the other types, with only 0.2% confusion with thin ice. Thick bare ice is most often confused with snow covered ice (1.0%). Large-scale structures (such as large areas of open water or snow covered sea ice) are generally easily identified both in the original and in the classified image (Fig. 5). At smaller scales, the classification is less accurate due to down-scaling and smoothing when calculating the texture features. Errors on the edges between classes are typical (the median filter (Sec. 3.1.3) does not completely remove them), and the consequence is that the intermediate sea-ice types (thick bare ice and thin ice) are over-estimated. The test set results underestimate the classification error, since the pixels in the test set were chosen within larger, relatively homogeneous areas of the individual sea-ice types, and few pixels lay on the edges between classes. For images outside the test set, a larger classification error is expected, particularly for thick bare ice and thin ice covering relatively small areas. Remember that the textural features are averages over a 3.5 m x 3.5 m window; features smaller than 3.5 m x 3.5 m (e.g. wind-shaped formations in snow, small ice floes and blocks, pancake ice etc.) will be removed by the smoothing and not identified.
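The behaviour of the post-classification median filter can be sketched as follows (Python/NumPy, on a toy label image rather than one of the flight images): isolated misclassified pixels are removed, while a sharp snow/open-water edge survives unchanged:

```python
import numpy as np

def median_filter_labels(classified, size=7):
    """Median filter a classified (integer label) image with a window
    equal to the texture-feature window, as done after the neural
    network classification."""
    pad = size // 2
    padded = np.pad(classified, pad, mode="edge")
    out = np.empty_like(classified)
    rows, cols = classified.shape
    for r in range(rows):
        for c in range(cols):
            out[r, c] = int(np.median(padded[r:r + size, c:c + size]))
    return out

# Toy labels: 0 = snow covered ice, 3 = open water.
img = np.zeros((9, 9), dtype=int)
img[:, 5:] = 3          # sharp snow/water edge
img[4, 2] = 2           # one spurious "thin ice" pixel inside the snow
filtered = median_filter_labels(img)
print(filtered[4, 2])                                        # speckle removed
print((filtered[:, :5] == 0).all() and (filtered[:, 5:] == 3).all())  # edge kept
```

This also shows the limitation noted above: a transition band as wide as half the filter window survives the median, which is why edge errors are reduced but not completely removed.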
                   Snow covered ice   Thick bare ice   Thin ice   Open water
Snow covered ice        98.4               1.3            0.2         0.1
Thick bare ice           1.0              98.3            0.5         0.2
Thin ice                 0                 0.6           99.2         0.2
Open water               0                 0              0.2        99.8

Table 4. The confusion matrix for the neural network classification on the test set, when the best feature combination (the three RGB intensities, coefficient of variance, entropy and GLCM homogeneity) was used. The confusion matrix gives the number of times a feature vector belonging to class i (along the rows) is classified to class j (along the columns). The correctly classified pixels are in bold along the diagonal.

The fractional areas of snow covered ice, thick bare ice, thin ice and open water as a function of longitude band show considerable spatial variability, with snow covered ice fractions varying from 0 to 100%, but with a high average ice concentration over the entire profile (Figs. 7a and 6). The two ice classes without snow cover represent only a small portion compared to snow covered ice and open water. In the western part there are more areas of open water. Overall, the average ice concentration (total of snow covered, thick and thin ice) was 90.4%, with average fractions of 81.0% for snow covered sea ice, 4.0% for thick bare ice, 5.3% for thin ice and 9.6% for open water. For comparison, the average sea-ice concentration compiled by the Norwegian Meteorological Institute from remote sensing data (Andersen et al., 2005) was 82.8% (median 83.7%, range 64.0-93.9%) for the twelve 10 km resolution pixels inside the rectangular area of Fig. 1.

Fig. 7. Fractional coverage of open water, thin ice, bare thick ice and snow covered ice as a function of longitude in bands of 0.05°. (a) is the neural network classification from photography, (b) is the neural network classification from photography within the footprint of the spectrometer (only a subset of the image is used), (c) is spectral unmixing of the optical measurements, (d) is the classification based on the EM thickness measurements.
The bottom panel has only three classes (open water (black), thin ice (grey) and thick, snow covered ice (light grey)).

The neural network classification is the principal information from the digital images, and proved to have a very small classification error on the test set; it is therefore taken to represent the true sea-ice classes, while the spectral unmixing of the optical measurements and the thresholding of the ice thickness measurements are taken to be proxies. The spectral unmixing technique overestimates the open water fraction in the west and the thick bare ice in the east (Fig. 7). It has difficulties in detecting thin ice, as clearly seen in Fig. 7, where the thin ice in the west is detected as open water. The correlation coefficient between the fractions from the neural network and spectral unmixing is highest for snow covered ice (0.90) and open water (0.81), whereas for the two intermediate sea-ice classes the correlation coefficient is substantially smaller: only 0.51 for thick bare ice and 0.58 for thin ice. Limitations in the co-location and the tilt of the helicopter are probably responsible for most of the deviations, particularly for the two intermediate types, which cover smaller spatial areas and are thereby more sensitive to small offsets. A scatter plot of the neural network fractions (fNN) against the spectral unmixing fractions (fSU) (Fig. 8) shows a cluster along fNN = 1 for snow covered ice (Fig. 8a), meaning that the spectral unmixing underestimates snow covered ice. For thick bare ice and open water (Figs. 8b and d, respectively) the trend is opposite, with clusters along fNN = 0, implying that the spectral unmixing overestimates those fractions. For thin ice (Fig. 8c) the congestion is along fSU = 0, meaning that the spectral unmixing has problems detecting thin ice. The overall root mean square errors when using spectral unmixing to estimate the fractions are 0.034, 0.027, 0.021 and 0.028 for snow covered ice, thick bare ice, thin ice and open water, respectively.
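The agreement numbers above combine two simple statistics per class. A sketch (Python/NumPy, with toy fraction series rather than the flight data) of how such a comparison is computed:

```python
import numpy as np

def compare_fractions(f_ref, f_proxy):
    """Correlation coefficient and root mean square error between a
    reference fraction series (here: the neural network) and a proxy
    (here: spectral unmixing)."""
    f_ref = np.asarray(f_ref, dtype=float)
    f_proxy = np.asarray(f_proxy, dtype=float)
    rho = np.corrcoef(f_ref, f_proxy)[0, 1]
    rmse = np.sqrt(np.mean((f_ref - f_proxy) ** 2))
    return rho, rmse

# Toy example: the proxy follows the reference with a constant +0.1
# offset, so the correlation is perfect while the RMSE equals the offset:
f_ref = np.array([0.0, 0.2, 0.4, 0.6, 0.8])
rho, rmse = compare_fractions(f_ref, f_ref + 0.1)
print(round(rho, 3), round(rmse, 3))
```

The toy case shows why both numbers are reported: a proxy can track the reference perfectly in shape (high ρ) while still carrying a systematic bias (non-zero RMSE).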
The EM thickness measurements were classified by separating between open water (thickness below 0.05 m), thin ice (thickness between 0.05-0.3 m) and thick, snow covered ice (thickness above 0.3 m). It is not possible to partition the snow and the ice in the EM measurements, since the snow thickness is always included in the total thickness. The fractions from the EM measurements show a different characteristic, with no trend and mostly thick (snow covered) sea ice at all longitudes (Fig. 7d). These fractions cannot be compared directly with the others, as the two flight lines did not coincide in time. If the snow covered and thick ice fractions from the neural network are added up and compared with the thick ice fraction from the EM measurements, the correlation coefficient is only 0.25; the corresponding correlation coefficients for the thin ice and open water fractions are 0.34 and 0.08, respectively.

Fig. 8. Scatter plots of the sea-ice fractions as calculated from the neural network (fNN) and spectral unmixing (fSU). The 1:1 line indicates linear correlation. (a) is for snow covered ice (ρ = 0.90), (b) thick bare ice (ρ = 0.51), (c) thin ice (ρ = 0.58) and (d) open water (ρ = 0.91), where ρ is the correlation coefficient.

4.2 Reflectance

Only the first east-west transect of the optical flight was used, as the light conditions changed too much over time to include all measurements. The broadband reflectance factor (BRF), calculated from the SRF measurements by weighting the spectral reflectance with an appropriate solar irradiance spectrum for cloudy conditions (Grenfell and Perovich, 2004) (hereinafter called the measured BRF), shows relatively high mean values over the entire transect, though higher in the east than in the west (Fig. 9). As the broadband albedo is higher for cloudy than for clear sky (Brandt et al., 2005), this indicates more clouds with time. The average BRF was 0.73 with a standard deviation of 0.33.
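The measured BRF is a weighted spectral average. A minimal sketch (Python/NumPy) follows, with an assumed, made-up irradiance shape standing in for the cloudy-sky spectrum of Grenfell and Perovich (2004):

```python
import numpy as np

def broadband_rf(srf, irradiance):
    """Broadband reflectance factor: the spectral reflectance factor
    weighted by a solar irradiance spectrum on the same (equally
    spaced) wavelength grid."""
    srf = np.asarray(srf, dtype=float)
    irradiance = np.asarray(irradiance, dtype=float)
    return float(np.sum(srf * irradiance) / np.sum(irradiance))

# With a spectrally flat SRF the weighting is irrelevant and the BRF
# equals that flat value, whatever the irradiance spectrum looks like:
srf = np.full(100, 0.73)
irr = np.linspace(1.0, 0.2, 100)   # assumed irradiance shape, for illustration
print(round(broadband_rf(srf, irr), 2))
```

For real spectra the choice of irradiance weighting matters, which is why a cloudy-sky spectrum appropriate to the overcast flight day was used.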
The BRF was also calculated from inverse spectral unmixing (hereinafter called the calculated BRF), and corresponds well with the measured BRF (Fig. 9). However, the calculated BRF does not increase towards the east, since it has an upper threshold value of 0.8711, corresponding to a completely snow covered surface. The scatter plot of measured versus calculated BRF (Fig. 10) shows that the measurements coincide around the 1:1 line, with a correlation coefficient of 0.94. The measured BRF is higher than the calculated BRF for high values (the measured BRF frequently exceeds one), with a weak tendency towards the opposite for small BRF values. If the measured BRF is taken to represent the ground truth reflectance factor, the overall root mean square error for the calculated BRF is 0.048.

Fig. 9. Measured (BRFm) and calculated (BRFc) broadband reflectance factor as a function of longitude in bands of 0.05°.

Fig. 10. Scatter plot of the measured broadband reflectance factor (BRFm) against the calculated broadband reflectance factor (BRFc). The correlation coefficient is 0.94.

The endmembers for the four sea-ice types were calculated from inverse spectral unmixing (Eq. (1)), and have spectral signatures similar to other albedo measurements (Brandt et al., 2005; Grenfell and Perovich, 2004; Gerland et al., 2004). However, the set-up affects the endmembers by giving noisier (jagged) spectra with an unexpected dip at UV wavelengths, a jump at 1100 nm (corresponding to jumps in the transmission coefficient of the Lexan window) and substantial noise at long wavelengths, so the endmember curves were smoothed with a running mean over every 30th wavelength to achieve more realistic curves (Fig. 5). The mean and standard deviation of the BRF were calculated for each sea-ice type by including only the spectra of those spectrometer footprints having a fraction larger than 90% of one sea-ice type (Tab. 5), i.e.
not more than 10% of the pixels within the spectrometer footprint may belong to other classes. For bare thick ice, no spectrometer footprint had a fraction of 90% or more, so the threshold was reduced to 75%; the error in the mean BRF for thick bare ice may therefore be high (despite the low standard deviation in Tab. 5). Overall the BRF corresponds well with literature values for the broadband albedo. The BRF for open water was slightly higher than the corresponding albedo values from Brandt et al. (2005), because the open water was mixed with other types, all having higher BRF. Allison et al. (1993) also experienced higher open water albedos than usual, due to snow covered ice in the vicinity of the open water scene (in that study a remote cosine collector was used, having 90% of its signal coming from a circle with a radius of approximately two times the height of the instrument). The BRF of thin ice was 0.23, corresponding to values for young grey ice (Brandt et al., 2005), but with extremely large standard deviations, evidently from tilt of the helicopter, which gave BRF values as high as for snow covered ice and as low as for open water for the thin ice footprints. Previous measurements show that for bare ice, the reflectance factor takes a lower value than the albedo (Perovich, 1994). The thick ice BRF was, however, higher than what is reported for snow-free first year ice albedo (Brandt et al., 2005), but this is due to mixing with snow covered ice: on average 15% of the area within the footprint was snow covered. The nadir reflectance factor and albedo should be similar at all wavelengths for snow (Perovich, 1994), and this is indeed shown here, where the snow covered sea ice has a BRF well inside the range of expected albedo values for dry snow (Paterson, 2001), and slightly higher than others (Brandt et al., 2005; Grenfell and Perovich, 1984).
Table 5. The mean and standard deviation (σ) of the broadband reflectance factor (BRF). The bottom row gives the number of samples used for the calculations.

                    Snow covered ice   Thick bare ice   Thin ice   Open water
Mean(BRF)           0.86               0.63             0.23       0.09
σ(BRF)              0.22               0.16             0.36       0.16
Number of samples   1058               7                7          99

4.3 Sea-ice thickness

From the total set of ice-thickness data obtained, we know that the thickness distribution at about 79° N exhibits a clear regional gradient from 10° W to 2° W, from thicker ice with a broad thickness distribution to thinner ice with a narrower thickness distribution (Gerland et al., 2006). The modal ice thickness increases from east to west from about 2 m to almost 3 m (Fig. 12, lower panel). Most of the surface along the flight line is covered with sea ice, but leads occur regularly. Few ridges thicker than 6 m were observed. In general, the thickest ridges were found in the western part of the transect, with one ridge reaching a thickness of more than 10 m. However, airborne EM-derived thicknesses can under-estimate ridge thicknesses by a factor of 2 or more (Pfaffling et al., 2006), indicating that the real maximum ridge thickness might be 20 m or more. Longer sections in the profile with no ice thicknesses near zero indicate large ice floes, in agreement with direct visual observations. The probability density functions (pdf) illustrate that the ice is different in the west and east of the investigation area (Fig. 11), which is consistent with the regional trend beyond the section selected for this paper (Gerland et al., 2006). At the marginal ice zone in the east, the modal ice thickness is 1.8 m (Fig. 11a), whereas in the west the distribution indicates thicker ice with the main mode at 2.6 m and an additional prominent mode at 1.1 m (Fig. 11b), indicating multi-year and first-year ice, respectively. The average sea-ice thickness (including snow) was 2.1 m with a standard deviation of 1.3 m.

Fig. 11. Probability density function (pdf) of the total sea-ice thickness (sea ice plus snow) from the two transects 3–3.8° W in (a) and 3.8–4.6° W in (b) from the electromagnetic measurements.

4.4 Data fusion

The combination of the principal information from each instrument clearly shows that variations in measured BRF coincide well with changing sea-ice types (Fig. 12), where high BRF corresponds to large fractions of snow-covered ice and low BRF corresponds to large fractions of open water. Small fractions of the two intermediate ice types, e.g. at 3.7° W, lead to a visible reduction in the BRF. The correlation coefficient between BRF and fractional coverage is 0.72 for snow-covered ice (Fig. 13a) and -0.61 for open water (Fig. 13b), with large scatter of the samples. The BRF is not very dependent on the fractional coverage of thick ice (correlation coefficient of only -0.16), but slightly more so on thin ice (correlation coefficient of -0.30). The latter dependency is assumed to be higher were it not for the tilt of the helicopter.

Fig. 12. (a) Average fractional coverage of the individual sea-ice types from the classified photographs and (b) average broadband reflectance factor (BRF) as a function of longitude for 0.05° longitude bands. (c) Average sea-ice thickness (ice plus snow) measured with the helicopter electromagnetic bird.

The correlation between sea-ice concentration and measured broadband albedo was 0.69. This is substantially higher than the correlations found by Laine (2004) using remote sensing data in the Arctic Ocean and Northern Hemisphere (0.34 and 0.56, respectively). The optical and EM measurements were taken from different transects, consequently with low correlation between sea-ice thickness and snow-covered ice or open-water fractions (correlation coefficients of only 0.01 and -0.06, respectively). The average fraction of open water was 9.6%. In a recent study, Perovich et al. (2007) emphasize the importance of a correct identification of open-water areas due to the solar heating of the sea.

Fig. 13. Scatter plot of broadband reflectance factor (BRF) against fractional snow-covered ice in (a) and fractional open water in (b), with correlation coefficients of 0.72 and -0.61, respectively.

5 Conclusions

In this paper we have presented a method for finding the distribution of ice types along an airborne transect. This not only provides the percentages of individual sea-ice types, the sea-ice albedo and the total sea-ice thickness in the overflown area, but is also a method which allows large amounts of measurements to be compared in a consistent manner. The principal information from the three methods applied gives the sea-ice types from digital photography, the spectral and broadband reflectance factor from the spectrometer, and the total sea-ice thickness from the EM-bird measurements (Fig. 12). Together these three datasets completely describe the complex sea-ice environment: the sea-ice extent is described by combining the three ice types and separating them from open water, the sea-ice volume is calculated as the sea-ice extent multiplied by the thickness, and the energy balance is determined from the optical measurements. As one important result, we can state that all three datasets are required, since, for example, the east-west ice-thickness gradient does not appear in the image or optical observations. Since most of the sea ice is covered by relatively thick snow, and the albedo is completely determined by a snow cover of only a few cm thickness (Allison et al., 1993), snow-covered multi-year ice and first-year ice are difficult, if not impossible, to distinguish without thickness measurements. However, if one dataset is missing (due to lack or failure of instruments), the necessary information can, to some extent, be extracted from the other measurements, but with increased error (Figs. 7 and 9).
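The correlations quoted above (0.72 between BRF and snow-covered ice fraction, -0.61 for open water) are plain Pearson coefficients over the per-longitude-band averages. A minimal sketch, using synthetic stand-ins for the real transect data (the fractions and noise level below are invented):

```python
import random

def pearson(x, y):
    """Pearson correlation coefficient between two binned series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

# Synthetic stand-ins for the 0.05-degree longitude-band averages:
# BRF follows the linear mixing rule (0.86 for snow-covered ice,
# 0.09 for open water) plus small measurement noise.
rng = random.Random(1)
snow_fraction = [rng.uniform(0.5, 1.0) for _ in range(40)]
open_water = [1.0 - s for s in snow_fraction]
brf = [0.86 * s + 0.09 * w + rng.gauss(0.0, 0.02)
       for s, w in zip(snow_fraction, open_water)]

r_snow = pearson(brf, snow_fraction)   # strongly positive, cf. 0.72 in the text
r_water = pearson(brf, open_water)     # strongly negative, cf. -0.61 in the text
```

With real data the scatter (helicopter tilt, co-location offsets) pulls these coefficients away from the idealized values, which is exactly the behaviour described above.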
The average root mean square errors for employing spectral unmixing for sea-ice classification are 0.034, 0.027, 0.021 and 0.028 for snow-covered ice, thick bare ice, thin ice and open water, respectively, while the error for using the inverse spectral unmixing to calculate the broadband reflectance factor (BRF) is 0.048. A corresponding error estimate is not available for the EM measurements. The fractional coverage of sea-ice types can be extracted from all three datasets individually, but note that the neural network uses textural features for classifying the digital images, the spectral unmixing uses optical features for classifying the reflectance measurements, and the thresholded EM data only consider the sea-ice thickness. The average sea-ice fractions for the overflown area were 81.0% for snow-covered ice, 4.0% for thick bare ice, 5.3% for thin ice and 9.6% for open water; the average sea-ice concentration was thus 90.3%. The average BRF was 0.73 with standard deviation 0.33, and the average sea-ice thickness (including snow) was 2.1 m with standard deviation 1.3 m. The average sea-ice volume is thus 2.1 times the area. A relatively high correlation was found between the measured albedo and sea-ice concentration (0.69); the correlation would probably be higher were it not for the tilt of the helicopter and problems with the co-location of measurements. This initial study demonstrates the potential of integrated airborne surveys over sea ice with modern methods. Improvements of the individual set-ups and processing steps will reduce the temporal and spatial bias. This particularly concerns the optical measurements.
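The bookkeeping that turns the type fractions into sea-ice concentration and volume is simple; a sketch using the averages quoted above. Whether to weight the mean thickness by the concentration (as here) or quote the mean thickness times the area directly (as the text's "2.1 times the area") depends on whether the EM average already includes open-water gaps, so the weighting is a modelling choice, not a figure from the paper.

```python
# Average sea-ice type fractions for the overflown area, from the text.
fractions = {
    "snow covered ice": 0.810,
    "thick bare ice": 0.040,
    "thin ice": 0.053,
    "open water": 0.096,
}

# Sea-ice concentration: everything that is not open water.
concentration = sum(f for name, f in fractions.items() if name != "open water")

# Average total thickness (ice plus snow) from the EM measurements, in metres.
mean_thickness_m = 2.1

# Ice volume per unit surveyed area (m^3 per m^2): extent times thickness.
volume_per_area = concentration * mean_thickness_m
```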
Future solutions will also include optimizations of the systems so that all methods can be operated on the same flight; optical sensors will be mounted outside the helicopter to avoid disturbing effects from windows, and the problem introduced by varying incoming solar radiation will be addressed by direct measurements of the incoming radiation in parallel with the nadir reflectance measurements. The co-location procedure will be improved; however, it is not straightforward to save large amounts of data instantaneously, and delays introduce offsets both in time and space between concurrent measurements. Also, the tilt of the helicopter offsets the footprints of the image and spectrometer from the directly underlying field. In addition, storage of raw images would be an important improvement, allowing for standardization and correction of the images regarding shutter opening, exposure time and white balance. Some of these improvements are already under development and will be applied during campaigns as part of projects in the International Polar Year (IPY) 2007-2009. With an improved set-up, large amounts of such measurements processed with the described methodology, providing sea-ice type fractions, albedo, thickness, extent and volume, will be extremely valuable datasets for e.g. validation of general circulation models and remote sensing products. Such an integrated airborne approach is also required for applications with unmanned aerial vehicles (UAVs).

6 Acknowledgments

We thank the captain and crew on board Coastguard K/V Svalbard during the expedition to the Fram Strait in spring 2005, and also Terje Gundersen and Håvard Dahle from Airlift. The Norwegian Polar Institute and the Department of Environment are acknowledged for financing the cruise. We would like to thank O. Pavlova for providing the SSM/I data and O.-M. Olsen for the helicopter tracks on the SAR image.
Fred Godtliebsen and Jan-Gunnar Winther are acknowledged for comments during an early stage of the work. C. A. Pedersen received funding from the Research Council of Norway.

References

Allison, I., Brandt, R., and Warren, S. (1993). East Antarctic Sea Ice: Albedo, Thickness Distribution, and Snow Cover. Journal of Geophysical Research, 98:12417–12429.
Andersen, S., Breivik, L.-A., Eastwood, S., Godøy, Ø., Lind, M., Porcires, M., and Schyberg, H. (2005). Sea Ice Product Manual. Norwegian and Danish Meteorological Institutes.
Armstrong, T., Roberts, B., and Swithinbank, C. (1966). Illustrated Glossary of Snow and Ice. Special publication number 4. Scott Polar Research Institute, Cambridge.
Barber, D. G. and Le Drew, E. F. (1991). SAR Sea Ice Discrimination Using Texture Statistics: A Multivariate Approach. Photogrammetric Engineering & Remote Sensing, 57(4):385–395.
Bogdanov, A. V., Sandven, S., Johannessen, O. M., Alexandrov, V. Y., and Bobylev, L. P. (2005). Multisensor Approach to Automated Classification of Sea Ice Image Data. IEEE Transactions on Geoscience and Remote Sensing, 43(7):1648–1664.
Brandt, R. E., Warren, S. G., Worby, A. P., and Grenfell, T. C. (2005). Surface Albedo of the Antarctic Sea Ice Zone. Journal of Climate, 18:3606–3622.
Derksen, C., Piwowar, J., and LeDrew, E. (1997). Sea-Ice Melt-Pond Fraction as Determined from Low Level Aerial Photographs. Arctic and Alpine Research, 29(3):345–351.
Fetterer, F. and Untersteiner, N. (1998). Observations of Melt Ponds on Arctic Sea Ice. Journal of Geophysical Research, 103(C11):24821–24835.
Gerland, S., Haas, C., Hall, R., Holfort, J., Hansen, E., Løyning, T., and Renner, A. (2006). Spring Sea Ice Thickness in the Western Fram Strait: Preliminary Results. In Wadhams, P. and Amanatidis, G., editors, Arctic Sea Ice Thickness: Past, Present & Future. Proceedings of an international workshop at Rungstedgaard, Denmark, November 2005, Climate Change and Natural Hazard Series 10. European Commission EUR 22416.
Gerland, S., Haas, C., Nicolaus, M., and Winther, J.-G. (2004). Seasonal Development of Structure and Optical Properties of Fast Ice in Kongsfjorden, Svalbard. In Wiencke, C., editor, The Coastal Ecosystems of Kongsfjorden, Svalbard, number 492, pages 26–34. Alfred Wegener Institute for Polar & Marine Research.
Gonzalez, R. C. and Woods, R. E. (1992). Digital Image Processing. Addison-Wesley Publishing Company.
Grenfell, T. and Perovich, D. K. (2004). Seasonal and Spatial Evolution of Albedo in a Snow-Ice-Land-Ocean Environment. Journal of Geophysical Research, 109:1–15.
Grenfell, T. C. and Perovich, D. K. (1984). Spectral Albedos of Sea Ice and Incident Solar Irradiance in the Southern Beaufort Sea. Journal of Geophysical Research, 89(C3):3573–3580.
Haas, C., Gerland, S., Eicken, H., and Miller, H. (1997). Comparison of Sea-Ice Thickness Measurements Under Summer and Winter Conditions in the Arctic Using a Small Electromagnetic Induction Device. Geophysics, 62(3):749–757.
Haas, C., Goebell, S., Hendricks, S., Martin, T., Pfaffling, A., and von Saldern, C. (2006). Airborne Electromagnetic Measurements of Sea Ice Thickness: Methods and Applications. In Wadhams, P. and Amanatidis, G., editors, Arctic Sea Ice Thickness: Past, Present & Future. Proceedings of an international workshop at Rungstedgaard, Denmark, November 2005, Climate Change and Natural Hazard Series 10. European Commission EUR 22416.
Haykin, S. (1999). Neural Networks - A Comprehensive Foundation. Prentice Hall.
Johnson, R. A. and Wichern, D. W. (2002). Applied Multivariate Statistical Analysis. Prentice Hall.
Kwok, R., Cunningham, G. F., and Pang, S. S. (2004). Fram Strait Sea Ice Outflow. Journal of Geophysical Research, 109(C01009).
Laine, V. (2004). Arctic Sea Ice Regional Albedo Variability and Trends, 1982-1998. Journal of Geophysical Research, 109(C06027).
Lythe, M., Hauser, A., and Wendler, G. (1999). Classification of Sea Ice Types in the Ross Sea, Antarctica from SAR and AVHRR Imagery. International Journal of Remote Sensing, 20(15 & 16):3073–3085.
Markus, T., Cavalieri, D. J., and Ivanoff, A. (2002). The Potential of Using Landsat 7 ETM+ for the Classification of Sea-Ice Surface Conditions during Summer. Annals of Glaciology, 34:415–419.
Massom, R. and Comiso, J. C. (1994). The Classification of Arctic Sea Ice Types and the Determination of Surface Temperature Using Advanced Very High Resolution Radiometer Data. Journal of Geophysical Research, 99(C3):5201–5218.
Nicodemus, F. E., Richmond, J. C., Ginsberg, I. W., and Limperis, T. (1977). Geometrical Considerations and Nomenclature for Reflectance. Technical report, U.S. Department of Commerce, National Bureau of Standards.
Paterson, W. S. B. (2001). The Physics of Glaciers. Butterworth Heinemann, third edition.
Perovich, D. K. (1994). Light Reflection from Sea Ice During the Onset of Melt. Journal of Geophysical Research, 99(C2):3351–3359.
Perovich, D. K., Tucker, W. B. III, and Ligett, K. A. (2002). Aerial Observations of the Evolution of Ice Surface Conditions during Summer. Journal of Geophysical Research, 107(C10):8048–8062.
Perovich, D. K., Light, B., Eicken, H., Jones, K. F., Runciman, K., and Nghiem, S. V. (2007). Increasing Solar Heating of the Arctic Ocean and Adjacent Seas, 1979-2005: Attribution and Role in the Ice-Albedo Feedback. Geophysical Research Letters, 34(L19505).
Pfaffling, A., Haas, C., and Reid, J. (2006). Key Characteristics of Helicopter Electromagnetic Sea Ice Thickness Mapping: Resolution, Accuracy and Footprint. In Wadhams, P. and Amanatidis, G., editors, Arctic Sea Ice Thickness: Past, Present & Future. Proceedings of an international workshop at Rungstedgaard, Denmark, November 2005, Climate Change and Natural Hazard Series 10. European Commission EUR 22416.
Secretary of World Meteorological Organization (1970). WMO Sea-Ice Nomenclature. Technical report, World Meteorological Organization.
Steffen, K. (1986). Atlas of the Sea Ice Types - Deformation Processes and Openings in the Ice. Technical report, ETH Geographisches Institut.
Theodoridis, S. and Koutroumbas, K. (1999). Pattern Recognition. Academic Press.
Tschudi, M. A., Curry, J. A., and Maslanik, J. A. (2001). Airborne Observations of Summertime Surface Features and Their Effect on Surface Albedo during FIRE/SHEBA. Journal of Geophysical Research, 106(D14):15335–15344.
Vikhamar, D. (2003). Snow-Cover Mapping in Forests by Optical Remote Sensing. PhD thesis, Faculty of Mathematics and Natural Science, University of Oslo, Norway.
Vinje, T. (2001). Fram Strait Ice Fluxes and Atmospheric Circulation: 1950-2000. Journal of Climate, 14(16):3508–3517.

A Split-Face, Double-Blind, Randomized and Placebo-Controlled Pilot Evaluation of a Novel Oligopeptide for the Treatment of Recalcitrant Melasma

Basil M. Hantash MD PhD (a) and Felipe Jimenez PhD (b)
(a) Elixir Institute of Regenerative Medicine, San Jose, CA
(b) jDerm Consulting, Inc, Seal Beach, CA

Introduction

Melasma is a common cutaneous disorder that presents as patches of darker pigmentation on the cheeks, forehead, upper lip, nose and chin.1 Melasma most commonly affects females of Asian and Hispanic descent having Fitzpatrick phototypes IV and higher, and only a very small percentage of men.2 Moreover, pregnancy appears to be a contributing factor in bringing about the onset of melasma in females, supporting the proposed role of hormones in the regulation of melanogenesis in women.3,4 Various skin-lightening agents such as kojic acid, azelaic acid, ascorbic acid (and its derivatives) and hydroquinone (and its arbutin derivatives) are currently used to treat melasma, but they are either efficacious and cytotoxic or mildly efficacious and non-toxic.5 Hence there is a need for novel compounds that strike a balance between skin-lightening efficacy and toxicity.
The authors have previously demonstrated that a novel proprietary synthetic oligopeptide (Lumixyl™) competitively inhibits both mushroom and human tyrosinase enzymes better than hydroquinone at similar concentrations.6 Moreover, cell culture studies with human melanocytes confirmed that the oligopeptide also inhibited intracellular tyrosinase better than hydroquinone, without cytotoxicity. Given the superior tyrosinase-inhibiting properties of the oligopeptide over hydroquinone, it was hypothesized that the oligopeptide might be useful for the treatment of melasma. The aim of the present translational pilot study was to determine whether twice-daily topical application of 0.01% Lumixyl, in an inert emulsion vehicle, could improve the appearance of recalcitrant melasma after a 16-week course.

Methods

A split-face, double-blind, randomized and placebo-controlled study was conducted to assess the effects of a novel proprietary synthetic oligopeptide (Lumixyl, Emed, Inc., Westlake Village, CA) on the appearance of facial melasma. The secondary objectives were to evaluate the effect of the oligopeptide on the overall facial appearance of study participants and their satisfaction with their improvements, or lack thereof. Lumixyl was synthesized via solid-phase FMOC chemistry and incorporated into an inert oil-in-water emulsion at a concentration of 0.01% (w/w). Both the oligopeptide-containing formula and the vehicle alone were tested on all participants in a randomized, split-face fashion. Volunteers were instructed to wash their face with Dove Soap (Procter & Gamble, Cincinnati, OH), dry thoroughly, and apply a pea-size amount of the oligopeptide-containing formula to one side of the face and vehicle alone to the other side, twice daily for 16 weeks. Compliance with the prescribed treatment regimen was monitored by comparing the weights of the study products at 12 and 16 weeks with baseline weights.
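The weight-based compliance monitoring described above can be expressed as a small check. The per-application dose and the 80% usage threshold below are assumptions for illustration, not figures reported in the study.

```python
# Hypothetical compliance check based on product weights, mirroring the
# protocol above (tube weights compared at 12 and 16 weeks with baseline).
ASSUMED_G_PER_DAY = 0.25 * 2   # assumed pea-size dose (~0.25 g), twice daily

def compliance(baseline_g, current_g, days_elapsed, threshold=0.8):
    """Return (fraction of expected product used, compliant?) for one tube.

    `threshold` is an assumed cut-off: using at least 80% of the expected
    amount counts as compliant in this sketch.
    """
    used = baseline_g - current_g
    expected = ASSUMED_G_PER_DAY * days_elapsed
    frac = used / expected
    return frac, frac >= threshold

# Example tube weighed at the 12-week visit (84 days after baseline).
frac, ok = compliance(baseline_g=60.0, current_g=20.0, days_elapsed=84)
```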
This study was approved by the local institutional review board (IRB) and was conducted following the guidelines of the Declaration of Helsinki. Five healthy female subjects were enrolled in the study. Participants eligible for inclusion in the study were between the ages of 30 and 45, were of Hispanic or Asian descent, and had Fitzpatrick III to IV skin types with moderate-to-severe recalcitrant melasma. Eligible participants had just completed and failed a six-month treatment of twice-daily Tri-Luma® (Galderma, Ft. Worth, TX). Moreover, all subjects demonstrated accentuation of facial pigmentation upon Wood's lamp darkroom examination, consistent with a primarily epidermal melasma location. Grounds for exclusion included pregnancy, use of retinoids or other prescription anti-aging or skin-lightening products over the previous four weeks, having received cosmetic clinical procedures in the last six months, and pre-existing skin diseases that would impair the successful completion of the clinical study. Prospective participants were also deemed ineligible if diagnosed with dermal melasma upon Wood's lamp examination. All participants signed written consent documents prior to participation in the study and were required to be available for longitudinal study over the course of at least four months in order to assure appropriate follow-up. Digital photography was taken at baseline and at 8, 12 and 16 weeks following treatment initiation. Two physicians, blinded to treatment assignment, graded improvement in the appearance of melasma by comparing the 12- and 16-week photographs to baseline, using a 10-point scale (each point equal to a 10% improvement). Participants were asked to rate their improvement based on their own perception, without the aid of any photographs. At 16 weeks, participants and both physicians additionally graded overall facial appearance in a blinded manner using a five-point global assessment scale (0 = no improvement, 1 = 1–25% improvement, 2 = 26–50% improvement, 3 = 51–75% improvement, 4 = 76–100% improvement).

Abstract (Journal of Drugs in Dermatology, August 2009, Volume 8, Issue 8): Melasma is a cutaneous disorder associated with an overproduction of melanin by the tyrosinase enzyme. A proprietary oligopeptide (Lumixyl™) was previously shown to competitively inhibit mushroom and human tyrosinase without the associated toxicity of hydroquinone. The aim of this split-face, randomized, double-blind and placebo-controlled pilot study was to determine the effect of twice-daily topical application of this oligopeptide (0.01% w/w) on moderate, recalcitrant melasma over a 16-week course. Five female participants with Fitzpatrick phototype IV and moderate recalcitrant melasma enrolled and completed the study. Improvement in melasma and overall facial aesthetics, as well as volunteer satisfaction, was measured using 10- and five-point grading scales, respectively. Treatment was well tolerated, with no visible signs of irritation or allergy. All five participants demonstrated statistically significant improvement in the appearance of melasma and overall facial aesthetics, with high patient satisfaction. Results suggest that the oligopeptide may be useful in the treatment of melasma and warrants further evaluation.

© 2009 Journal of Drugs in Dermatology. All Rights Reserved. This document contains proprietary information, images and marks of Journal of Drugs in Dermatology (JDD). No reproduction or use of any portion of the contents of these materials may be made without the express written consent of JDD.
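For the satisfaction scores, the reported t-test can be sketched as follows. The individual scores are reconstructed from the counts given in the Results (treated side: two "very" and three "extremely" satisfied; placebo side: three "not" and two "mildly" satisfied); the subject-by-subject pairing across face sides is an assumption, as is the choice of a paired rather than unpaired test for the split-face design.

```python
import math

# Satisfaction scores at 16 weeks on the 0-4 scale, reconstructed from
# the reported counts; the pairing across face sides is assumed.
treated = [3, 3, 4, 4, 4]   # 3 = very satisfied, 4 = extremely satisfied
placebo = [0, 0, 0, 1, 1]   # 0 = not satisfied, 1 = mildly satisfied

def paired_t(a, b):
    """Paired t statistic: each subject contributes one score per side
    of the face, so the per-subject differences are analyzed directly."""
    d = [x - y for x, y in zip(a, b)]
    n = len(d)
    mean = sum(d) / n
    var = sum((x - mean) ** 2 for x in d) / (n - 1)   # sample variance
    return mean / math.sqrt(var / n)

t = paired_t(treated, placebo)
# With n - 1 = 4 degrees of freedom, |t| > 8.61 corresponds to a
# two-tailed p < 0.001.
```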
Physicians used digital photographs taken at baseline and 16 weeks to evaluate improvement in overall appearance, while participants graded their appearance based on perception alone. Study participants also rated their satisfaction, in a blinded manner, with the visual improvement, or lack thereof, obtained on either side of their face at 16 weeks, using a five-point scale (0 = not satisfied, 1 = mildly satisfied, 2 = moderately satisfied, 3 = very satisfied and 4 = extremely satisfied). All statistical data are presented as mean (± SD). Statistical significance (two-tailed) for melasma improvement scores and global assessments was determined using two-way ANOVA with a Bonferroni post-test. Statistical significance (two-tailed) for satisfaction scores was determined using a t-test. All evaluations were performed using the Prism 5 (GraphPad Software, Inc., La Jolla, CA) statistical software suite.

Results

Five healthy women between the ages of 32 and 42 (mean age 36.8 ± 3.7) were enrolled in and completed the study. Three of the volunteers were of Hispanic descent and two were of Asian descent. All five participants had Fitzpatrick type IV skin and presented with moderate recalcitrant melasma. Upon Wood's light examination, all five subjects demonstrated facial pigment accentuation, consistent with a diagnosis of epidermal rather than dermal melasma. Medical histories demonstrated that all participants had previously failed to show improvement in melasma after six months of treatment with Tri-Luma. All patients subsequently discontinued Tri-Luma and completed a four-week wash-out period prior to beginning treatment in the study. The oligopeptide formula was well tolerated, and participants showed no signs of irritation or allergic reaction. Melasma improvement scores from volunteers and physician graders were in agreement with each other, showing >40% improvement at 12 weeks and >50% improvement at 16 weeks on the oligopeptide-treated side of the face (Figure 1).

FIGURE 1. Melasma improvement scoring, with respect to baseline, was conducted at 12 and 16 weeks by all volunteers (N=5) and two blinded physician assessors. Blinded volunteer (a) and physician (b) and (c) assessments demonstrate that twice-daily oligopeptide treatment is significantly (**P<0.001) more effective at improving melasma than placebo.

FIGURE 2. Subjects were placed on twice-daily topical application of placebo (a) and (b) or the oligopeptide-containing cream (c) and (d), and digital photography was taken at baseline (a) and (c), 8 (d) or 16 (b) weeks. A mean score of 3 on a 10-point scale, or 30% improvement in melasma, was reported by subjects as early as 8 weeks post-treatment initiation. No improvement was detected by 16 weeks post-treatment initiation for the placebo side.
Improvement in melasma did not exceed 4% on the placebo side at any time point, as assessed by both participants and blinded physicians (Figure 1). By eight weeks, patients began reporting noticeable improvement on the oligopeptide-treated side (Figures 2C and 2D), with continued additional benefit at the 12- and 16-week time points. For the placebo side, participant and blinded-physician evaluations of digital photography did not reveal any significant improvement in melasma at 16 weeks post-treatment compared to baseline (Figures 2A and 2B). Global assessment scores also demonstrated good agreement between participants and physician graders, showing >70% improvement in the overall appearance of facial skin on the side treated with the oligopeptide formula (Figure 3). Global assessment scores for the placebo side demonstrated up to a 15% improvement in overall facial appearance. Moreover, volunteer satisfaction scores showed that two of five participants were very satisfied and three of five were extremely satisfied with their appearance on the oligopeptide-treated side (Figure 4). In contrast, two of five participants reported mild satisfaction and three of five said they were not satisfied with their appearance on the placebo-treated side.

Discussion

Previous results obtained in our laboratory indicated that the proprietary oligopeptide inhibited melanin production in normal human melanocyte cultures by competitive inhibition of tyrosinase, without any cytotoxicity.6 The present translational pilot clinical study provides further evidence that topical application of the oligopeptide may also inhibit melanogenesis in vivo, with no apparent irritation to skin. Although the vehicle was not formulated for optimal transdermal delivery of the oligopeptide to the basal epidermis, the present results suggest that a sufficient concentration was able to penetrate the stratum corneum.
This resulted in the observed improvement in the appearance of melasma, as placebo application during the same period led to virtually no change. Moreover, these results suggest that the oligopeptide may be effective for treating recalcitrant melasma in patients who have failed treatment with Tri-Luma or the less efficacious hydroquinone alone.

Conclusion

The present results and high participant satisfaction suggest that the tested oligopeptide may be a safe and useful treatment for melasma that warrants further clinical evaluation.

FIGURE 3. Global assessment scoring was conducted at 16 weeks by the volunteers (n=5) and two blinded physician assessors to determine the overall improvement in facial appearance after using the oligopeptide and placebo treatments. Blinded volunteer and physician assessments indicate that twice-daily oligopeptide treatment was significantly (**P<0.001) more effective at improving the overall appearance of facial skin than placebo.

FIGURE 4. Blinded volunteers (n=5) rated their satisfaction with the improvement in their facial melasma after 16 weeks. Data indicate that volunteers were significantly (***P<0.0001) more satisfied with the improvements brought about by the oligopeptide treatment than with those of the placebo.

Disclosures

Drs. Hantash and Jimenez have no relevant conflicts of interest to disclose.

Address for correspondence
Basil M. Hantash, MD, PhD
Elixir Institute of Regenerative Medicine
5941 Optical Court
San Jose, CA 95138
Phone: (323) 306-4330
Fax: (323) 306-4330
E-mail: basil@elixirinstitute.org

References

1. Grimes PE. Diseases of hyperpigmentation. In: Sams WM, Lynch PJ, eds. Principles and Practice of Dermatology. New York: Churchill Livingstone; 1990:807-819.
2. Grimes PE. Melasma. Etiologic and therapeutic considerations. Arch Dermatol. 1995;131(12):1453-1457.
3. Ferrario E. Observations on cutaneous hyperchromia during pregnancy. Riv Ostet Ginecol. 1962;17:793-816.
4. Takase Y, Watanabe K. Endocrinological study of pigmentation disorders. Horumon To Rinsho. 1967;15(9):705-710.
5. Solano F, Briganti S, Picardo M, Ghanem G. Hypopigmenting agents: An updated review on biological, chemical and clinical aspects. Pigment Cell Res. 2006;19(6):550-571.
6. Abu Ubeid A, Zhao L, Wang Y, Hantash BM. Short-sequence oligopeptides with inhibitory activity against mushroom and human tyrosinase. J Invest Dermatol. 2009: In Press.
Therapy adherence in elderly of Northern Portugal. Poster Presentations, August 2015, e99.

Background: In patients with type 2 diabetes (T2DM), fixed-dose antihyperglycaemic combinations (FDCs) may provide complementary efficacy, reduce tablet burden, and improve compliance. The aim of this study was to assess the bioequivalence and tolerability of 2 strengths of dapagliflozin (DAPA)/metformin extended-release (MET-XR) FDCs versus their individual components (ICs) in healthy subjects.
Material and Methods: This open-label, randomised, 2-way crossover, 4-arm study was conducted in 141 healthy adult Brazilian subjects. Two oral doses (5 mg DAPA/500 mg MET-XR and 10 mg DAPA/1000 mg MET-XR) were evaluated in fed and fasted states.
Results: Under fed and fasting conditions, the 5 mg DAPA/500 mg MET-XR FDC was bioequivalent to its ICs (Table). The 10 mg DAPA/1000 mg MET-XR FDC was bioequivalent to its ICs only in fed patients. Cmax for metformin was not bioequivalent to its ICs (upper 95% CI outside 80%–125%) in fasted patients; this small increase was not considered clinically meaningful, as metformin is recommended to be administered with food. The safety and tolerability of the FDCs were generally similar to those of their ICs; no serious adverse events were reported.
Conclusions: Both DAPA/MET-XR FDCs were bioequivalent to their ICs, except 10 mg DAPA/1000 mg MET-XR in fasted patients, supporting their use in patients with T2DM.
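The acceptance rule behind these results, that the confidence interval of the geometric-mean FDC/IC ratio must fall entirely within 80%–125%, can be expressed as a one-line check. A minimal illustrative sketch (not the study's analysis code; the function name is ours, while the CI values come from the table reported with this abstract):

```python
def is_bioequivalent(ci_lower: float, ci_upper: float,
                     lo: float = 80.0, hi: float = 125.0) -> bool:
    """Average-bioequivalence rule: the confidence interval of the
    geometric-mean ratio (FDC/IC, %) must lie wholly within [lo, hi]."""
    return lo <= ci_lower and ci_upper <= hi

# Metformin Cmax, 10 mg DAPA/1000 mg MET-XR arm, fasted state:
# CI of 109.8-127.5 breaches the 125% upper bound.
print(is_bioequivalent(109.8, 127.5))  # False
# Dapagliflozin Cmax, 5 mg/500 mg arm, fasted: 96.3-113.7 passes.
print(is_bioequivalent(96.3, 113.7))   # True
```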
Table. Geometric mean point estimates of FDC/IC (%) with 90% CIs.

5 mg DAPA/500 mg MET-XR
  Arm 1 (n = 34), Fasted:
    Dapagliflozin  Cmax 104.7 (96.3, 113.7);   AUC0-inf 101.1 (98.0, 104.2)
    Metformin      Cmax 96.7 (87.1, 107.5);    AUC0-inf 101.7 (93.3, 110.8)
  Arm 2 (n = 29), Fed:
    Dapagliflozin  Cmax 96.6 (88.0, 106.0);    AUC0-inf 102.0 (98.9, 105.2)
    Metformin      Cmax 100.9 (95.1, 107.0);   AUC0-inf 104.6 (97.3, 112.4)
10 mg DAPA/1000 mg MET-XR
  Arm 3 (n = 34), Fasted:
    Dapagliflozin  Cmax 103.7 (96.2, 111.7);   AUC0-inf 102.7 (100.7, 104.8)
    Metformin      Cmax 118.3 (109.8, 127.5);  AUC0-inf 112.6 (104.8, 120.9)
  Arm 4 (n = 32), Fed:
    Dapagliflozin  Cmax 91.9 (80.9, 104.4);    AUC0-inf 99.1 (97.0, 101.3)
    Metformin      Cmax 107.1 (102.6, 111.8);  AUC0-inf 98.6 (93.2, 104.3)

3D PHOTOGRAPHY FOR SKIN LESION QUANTIFICATION
G. Hogendoorn1; C. Lemoine1,2; R. Rissmann1,2; and J. Burggraaf1,2 1Centre for Human Drug Research, Leiden, The Netherlands; and 2Leiden University, Leiden, The Netherlands
Introduction: Reliable methods to quantify skin lesions are critical for the evaluation of disease severity and assessment of therapeutic response. Dermatological trials often use two-dimensional digital photography, which has inherent disadvantages. High-resolution three-dimensional (3D) imaging may offer many advantages, such as offline 3D visualization and automatic picture segmentation, resulting in an objective and detailed skin lesion characterization. At present, this technique has not been fully technically and analytically validated, which is a prerequisite for clinical application.
Material and Methods: In this study we investigated the performance and clinical use of the 3D skin-imaging LifeViz™ system (Quantificare, Sophia Antipolis, France) in conjunction with the DermaPix software.
The validation of the LifeViz Micro was conducted with four trained operators who captured a synthetic phantom object on three different skin backgrounds at four time points over a period of one week.
Results: Coefficients of variation for volume measured with the 3D system were 1.0%, 2.6% and 1.4% for inter-operator, skin-background and inter-day variability, respectively. The overall precision of the system was 2.7% for volume, 1.6% for diameter and 4.1% for height. To determine the accuracy of the system, a ruler was photographed, and a mean error of 0.3% (range 0.0–0.8%) was observed. Preliminary data on cutaneous lesions also show low inter-observer variability and accurate images.
Conclusions: This validation study demonstrates that this novel 3D-imaging system is precise and objectively quantifies a phantom object representing a skin lesion. The results support clinical use of this technology, enabling high-resolution computation. The accuracy results are also promising but need to be extended with accuracy assessment of absolute measurements. The preliminary clinical data suggest that this non-invasive imaging technique is suitable for quantitatively measuring characteristics of cutaneous lesions and may be a promising tool in clinical trials.

THERAPY ADHERENCE IN ELDERLY OF NORTHERN PORTUGAL
I.C. Pinto1,3; F. Pereira2; and R. Mateos-Campos3,4 1Center for Research and Intervention in the Elderly, Health School of Polytechnic Institute of Bragança, Portugal (isabel.pinto@ipb.pt); 2Center for Research and Intervention in the Elderly, Polytechnic Institute of Bragança, Portugal (fpereira@ipb.pt); 3School of Pharmacy, University of Salamanca, Spain; and 4INESPO - Innovation Network Spain-Portugal (rmateos@usal.es)
Introduction: The elderly population has been growing significantly, leading to an increased prevalence of chronic diseases and consequent medication use.
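The repeatability figures quoted for the 3D imaging system above are coefficients of variation, i.e. the sample standard deviation of repeated measurements expressed as a percentage of their mean. A minimal sketch (the phantom-volume values are invented for illustration, not the study's data):

```python
import math

def coefficient_of_variation(values):
    """CV (%) = 100 * sample standard deviation / mean, the
    repeatability metric used to validate the 3D imaging system."""
    n = len(values)
    mean = sum(values) / n
    sd = math.sqrt(sum((v - mean) ** 2 for v in values) / (n - 1))
    return 100.0 * sd / mean

# Hypothetical repeated volume measurements (mm^3) of a phantom
# object by four operators; a CV near 1% reflects the kind of
# inter-operator repeatability reported above.
cv = coefficient_of_variation([102.0, 101.2, 103.1, 101.7])
```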
The complex therapies of the elderly can lead to therapy non-adherence, increasing several health risks.
Aim: This study aimed to estimate the prevalence of therapy adherence and associated factors.
Material and Methods: This cross-sectional study was based on a questionnaire incorporating the MAT scale (measure of adherence to therapy), validated for the Portuguese population (Lima, 2001) and based on the Morisky scale, applied to 52 elderly (≥65 years) from northern Portugal. To assess therapy adherence, those whose average adherence levels were ≥5 were classed as adherent. Descriptive statistics were used. The level of association between categories of variables was studied through adjusted residuals (AdR), and the relationship between therapy adherence and the number of medications taken per day was studied using the Mann-Whitney U test, with a significance level of 5%. The study was approved by the Ethics Committee.
Results: The sample consisted mainly of male elderly (61.5% vs. 38.5%), aged between 67 and 98 years (mean 82.71), with 48.1% between 75 and 84 years old. The participants showed high therapy adherence (96.2%). Non-adherence was associated with self-medication (AdR = 4.3), high cholesterol (AdR = 2.9) and chronic pain (AdR = 2.9). The non-adherent elderly also tended to take more drugs per day, although the difference was not statistically significant (P = 0.063).
Conclusions: This study shows that a large proportion of the elderly adhered to the prescribed therapy. Self-medication, high cholesterol, chronic pain and a higher number of different drugs per day seem related to non-adherence.
Key words: Elderly, Therapy adherence, Therapy non-adherence.

Clinical Therapeutics, Volume 37, Number 8S, e100.

A LIMITED NUMBER OF PRESCRIBED DRUGS ACCOUNT FOR THE MAJORITY OF CLINICALLY RELEVANT DRUG INTERACTIONS
J. Holm1,2; B. Eiermann1; E. Eliasson1,2; and B. Mannheimer1,3 1Karolinska Institutet, Stockholm, Sweden; 2Karolinska University Hospital, Stockholm, Sweden; and 3Södersjukhuset, Stockholm, Sweden
Introduction: Drug-drug interactions constitute a predictable and in many cases avoidable cause of adverse drug reactions and therapeutic failure. We conducted a register-based study to investigate the prevalence of prescribed combinations of interacting drugs in the whole Swedish population (Holm et al. Eur J Clin Pharmacol. 2014).
Material and Methods: A retrospective, cross-sectional register study was conducted, covering four months in 2010. Data from the Prescribed Drug Register on all drug prescriptions dispensed in Swedish pharmacies from January 1 to April 30 were linked to the drug-drug interaction database SFINX. The analysis focused on drug interactions classified in the database as clinically relevant but manageable, e.g. by dose adjustment (C-interactions), and clinically relevant interactions that should be avoided (D-interactions). Interactions were categorized according to clinical consequence and drug type, and the prevalences of interacting drug combinations were described. The study was approved by the Regional Ethics Committee.
Results: About half of the population were dispensed at least one drug prescription. The mean (SD) number of dispensed drugs was 3.8 (3.4). About 2.5 million potentially interacting drug combinations were identified in the study population of 9.3 million people. Among detected interactions, 38% were classified as C-interactions and 3.8% as D-interactions. About half of all C- and D-interactions were combinations of drugs with potential to cause therapeutic failure. The 15 most prevalent combinations accounted for 80% of D-interactions. The 10 most prevalent individual drugs were involved in 94% of all D-interactions.
Conclusions: A limited number of drugs and a few specific drug combinations account for the majority of D-interactions, i.e.
clinically relevant interactions that should be avoided, in Sweden. About half of the interacting drug combinations among C- and D-interactions potentially lead to treatment failure.

CHARACTERIZATION OF URACIL CATABOLISM VARIABILITY IN HEALTHY VOLUNTEERS
D. Kummer1,2; B. Rindlisbacher1; S. Fontana3; J. Sistonen1; U. Amstutz1; and C. Largiadèr1 1Institute of Clinical Chemistry, Bern University Hospital, and University of Bern, Bern, Switzerland; 2Graduate School for Cellular and Biomedical Sciences, University of Bern, Bern, Switzerland; and 3Regional Blood Transfusion Service of the Swiss Red Cross, Bern, Switzerland
Uracil catabolism is crucial for the pharmacokinetics of the chemotherapeutic 5-fluorouracil (5-FU), since 5-FU is degraded by the same pathway. Decreased activity of the first catabolizing enzyme, dihydropyrimidine dehydrogenase (DPD), is a major predictor of 5-FU toxicity, with known risk variants in the DPD gene (DPYD) accounting for ~30% of toxicities. However, not all toxicity cases can be explained by DPYD risk variants. To date, the phenotypic variability in the catabolism downstream of DPD by dihydropyrimidinase (DHP) and β-ureidopropionase (bUP), potentially contributing to 5-FU toxicity, has not been investigated. Thus, we aimed to characterize the baseline phenotypic variability of endogenous metabolites and metabolic ratios of the 5-FU catabolism enzymes, and to correlate the phenotype with genetic variation in the DHP and bUP genes (DPYS and UPB1).
Material and Methods: Three variants in DPYS and UPB1 previously associated with 5-FU toxicity were genotyped in 320 healthy volunteers, and their plasma uracil, dihydrouracil (UH2), β-ureidopropionic acid (UPA), and β-alanine (BAL) concentrations were determined by LC-MS/MS.
Results and Conclusions: High inter-individual variability in all metabolic ratios was observed.
Sex-dependent differences were detected at each enzymatic step in the uracil catabolism pathway, with lower metabolite levels (P ≤ 0.007) in women. Moreover, lower UPA/UH2 ratios (P < 0.001) were observed in women, suggesting that reduced 5-fluoro-UH2 catabolism may contribute to the higher fluoropyrimidine toxicity rates observed in females. Furthermore, volunteers carrying the DPYS variant c.265-58T>C had lower UH2 plasma levels (P = 0.033) and higher UPA/UH2 ratios (P = 0.036), and carriers of the UPB1 variant c.1-80C>G showed lower BAL plasma levels (P = 0.004). These initial results are in agreement with the previously observed reduced fluoropyrimidine toxicity in c.265-58C carriers and increased toxicity in carriers of c.1-80G, indicating a possible functional effect related to these variants.

POLYPHARMACY AND POTENTIALLY INAPPROPRIATE MEDICATION IN ELDERLY OF NORTHERN PORTUGAL
I.C. Pinto1,3; F. Pereira2; and R. Mateos-Campos3,4 1Center for Research and Intervention in the Elderly, Health School of Polytechnic Institute of Bragança, Portugal (isabel.pinto@ipb.pt); 2Center for Research and Intervention in the Elderly, Polytechnic Institute of Bragança, Portugal (fpereira@ipb.pt); 3School of Pharmacy, University of Salamanca, Spain; and 4INESPO - Innovation Network Spain-Portugal (rmateos@usal.es)
Introduction: The progressive aging of the population and the increasing prevalence of chronic diseases require the simultaneous use of drugs, leading to polypharmacy and to potential interactions and inappropriate use.
Aim: To characterize polymedicated elderly and related factors, and to identify potential interactions and inappropriate medication in the elderly.
Material and Methods: This cross-sectional study was based on a questionnaire applied to 69 elderly (≥65 years) from northern Portugal. Seniors taking ≥5 drugs were considered polymedicated. The Beers list and the Delafuente classification were used to evaluate the therapy and possible interactions.
Descriptive statistics and a binary regression model were used, with a significance level of 5%. The study was approved by the Ethics Committee.
Results: The sample consisted mainly of males (53.6% vs. 46.4%), aged between 66 and 99 years (mean 82.01), with 65.2% older than 80 years. Most elderly were not polymedicated (58%); on average, 4.61 different drugs were administered per day (maximum = 19), with antihypertensives (36.2%) and antacids (30.04%) the most prescribed. Hypertension and depression increased the risk of polymedication eightfold (P = 0.004) and fivefold (P = 0.011), respectively. Female gender seemed to increase the risk of polypharmacy threefold, although not statistically significantly (P = 0.102); regarding age, the oldest age group (>85 years) seemed to reduce the risk of polypharmacy 0.6-fold, also not statistically significantly. According to the Delafuente classification, 1.4% of the elderly had potential drug interactions (omeprazole and iron salts). According to the Beers list, 5.8% of seniors took drugs classified as having some indications (hydroxyzine, amitriptyline).
Conclusions: Regarding polypharmacy, 42% of the elderly were polymedicated, with an average of about 5 different drugs per day; antihypertensives and antacids were the most prescribed. Hypertension and depression are highly associated with polypharmacy.
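The "eightfold" and "fivefold" risk increases reported above are odds ratios from the binary regression. For a single binary predictor, the odds ratio can be read directly off a 2x2 table; a minimal sketch (the counts are invented for illustration, not the study's data):

```python
def odds_ratio(exposed_cases, exposed_noncases, unexposed_cases, unexposed_noncases):
    """Odds ratio from a 2x2 table: (a/b) / (c/d) = (a*d) / (b*c).
    This is the effect size a binary (logistic) regression reports
    for a single categorical predictor."""
    return (exposed_cases * unexposed_noncases) / (exposed_noncases * unexposed_cases)

# Illustrative counts: 20 of 25 hypertensive elderly polymedicated,
# versus 10 of 30 non-hypertensive elderly polymedicated.
or_hypertension = odds_ratio(20, 5, 10, 20)  # -> 8.0, an "eightfold" increase
```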
Remote Sensing (Article)
Effect of Tree Phenology on LiDAR Measurement of Mediterranean Forest Structure
William Simonson 1,2,*, Harriet Allen 3 and David Coomes 1
1 Department of Plant Sciences, University of Cambridge, Cambridge CB2 3EA, UK; dac18@cam.ac.uk
2 UN Environment World Conservation Monitoring Centre, Cambridge CB3 0DL, UK
3 Department of Geography, University of Cambridge, Cambridge CB2 3EN, UK; hda1@cam.ac.uk
* Correspondence: will.simonson@unep-wcmc.org; Tel.: +44-7734-774-778
Received: 10 March 2018; Accepted: 21 April 2018; Published: 24 April 2018

Abstract: Retrieval of forest biophysical properties using airborne LiDAR is known to differ between leaf-on and leaf-off states of deciduous trees, but much less is understood about the within-season effects of leafing phenology. Here, we compare two LiDAR surveys separated by just six weeks in spring, in order to assess whether LiDAR variables were influenced by canopy changes in Mediterranean mixed-oak woodlands at this time of year. Maximum and, to a slightly lesser extent, mean heights were consistently measured, whether for the evergreen cork oak (Quercus suber) or semi-deciduous Algerian oak (Q. canariensis) woodlands. Estimates of the standard deviation and skewness of height differed more strongly, especially for Algerian oaks, which experienced considerable leaf expansion in the period covered. Our demonstration of which variables are more or less affected by spring-time leafing phenology has important implications for analyses of both canopy and sub-canopy vegetation layers from LiDAR surveys.

Keywords: LiDAR; Mediterranean; forest; phenology

1. Introduction
Airborne laser scanning, or Light Detection and Ranging (LiDAR), is a proven technique for making precise and accurate three-dimensional measurements of forest and other complex vegetation canopies over large spatial extents [1–3].
As such, it is a powerful tool for an increasing range of applications in forestry, ecology and conservation. These include habitat suitability modelling [4–6] and the monitoring of carbon stocks, which is an essential requirement of projects for reducing emissions from deforestation and forest degradation (REDD+) [7,8]. At the heart of LiDAR's effectiveness in modelling vegetation three-dimensional structure is a predictable relationship between return data and the arrangement (e.g., heights and densities) of component parts of the vegetation. However, there is still much to be understood in relation to vegetation–laser pulse interactions [9]. Repeat LiDAR surveys of deciduous/mixed forests in leaf-off and leaf-on states indicate that foliage development has a significant influence on the scatter of LiDAR height measurements, and therefore on the retrieval of information on parameters of interest for individual trees [10] and forest plots [11–15]. Laser penetration through the canopy is greater in leaf-off conditions, providing better information capture on the presence of an understorey (of suppressed trees and shrubs), but potentially less optimal data for the modelling of the forest canopy structure itself [16]. Indeed, a better representation of ground and understorey layers in the leaf-off state has been shown for a Japanese deciduous forest [17], and comparable results were obtained for an English mixed deciduous woodland [16], where as much as 57% of last-return height measurements were from the ground layer (<1 m) and 42.5% from the understorey layer (1–8 m).

Remote Sens. 2018, 10, 659; doi:10.3390/rs10050659; www.mdpi.com/journal/remotesensing

The combination of
leaf-on and leaf-off data can help improve tree species classification, and in one example this has been demonstrated using LiDAR intensity information [12]. Less attention has been given to the more subtle seasonal effects on the LiDAR modelling of evergreen or mixed canopies. Much of the world's forest cover, which amounts to some 30–35% of the land surface, or around 39–45 million km2 [18], is dominated by evergreen trees, including coniferous (principally boreal) and broadleaved evergreens. The latter include the majority of Mediterranean forests and woodlands, whose canopies are composed of evergreen oak (Quercus spp.) and other sclerophyllous trees such as phillyrea, rhamnus and olive. Multi-temporal LiDAR is increasingly employed for the investigation of tree growth, forest patch, vegetation and biomass dynamics (e.g., [19–25]). An understanding of the robustness of LiDAR metrics in the face of seasonal leafing phenology is important for making correct inferences in such studies [26]. In this investigation, we capitalise on a repeat LiDAR survey of an area of mixed oak forest in southern Spain to consider the effect of timing on the LiDAR measurement of tree canopies and understories. We look at whether LiDAR variables differ significantly when captured six weeks apart. The study spans a period in which an evergreen tree species (Quercus suber) is experiencing leaf drop and concurrent new leaf emergence, and a semi-deciduous tree species (Q. canariensis) is moving from a state of partial to full leaf expansion. We consider the implications of such processes on LiDAR variable retrieval for the application of this technology to the study of Mediterranean and other forest ecosystems.
Comparing LiDAR metrics retrieved at two dates six weeks apart in April and May, we test the hypothesis that they will be significantly affected by phenological changes to leafing state, and that such effects will be greater for Quercus canariensis than for Q. suber, because leaf expansion of the former results in significantly less penetration of laser light through the denser mass of May-time foliage.

2. Materials and Methods

2.1. Study Area
The Sierra del Aljibe is a range of mountains rising to 1092 m and protected within the Los Alcornocales Natural Park. It represents an edaphic island of sandstone-derived acidic and nutrient-poor soils surrounded by base-rich soils of limestone, marl and clay. The area contains the most extensive Quercus suber (cork oak) forest in Iberia and the Mediterranean region. This evergreen oak forms mixed and segregated patches with the co-dominant semi-deciduous Algerian oak, Quercus canariensis, which occupies moister valleys and north-facing slopes. Within the Park, our research is focused on a site of mixed closed-canopy forest (approximate area 93 ha) at Tiradero (36°09′38″N, 5°35′25″W; 335–360 m a.s.l.; Figure 1) on a gentle northeast-facing slope. This is a reserve set aside from management and therefore unaffected by the widespread understorey cutting that is commonly associated with cork harvest, pasture improvement and fire control. Herbivory levels are high, however, and there is a small amount of disturbance from the maintenance and use of a marked hiking trail, as well as from scientific research.

2.2. Field Survey
Field sampling of the forest at Tiradero was undertaken in April, May and September 2011. Plant communities were sampled within 28 circular 10 m radius plots in a grid of seven (east–west orientation) by four (north–south), as ground truthing in support of an investigation into topographic and canopy controls on understories [27].
Plots were located using a GPSMAP 60CSx (Garmin, Olathe, KS, USA), with a horizontal accuracy of 3 m; errors in the colocation of field and remotely sensed data had previously been found to be negligible using this system in cork oak woodlands [28]. Vegetation vertical profiles were constructed from plant contacts with a vertical pole at 20 random locations and five height intervals within each plot. Cover abundance values were assigned to all non-graminoid vascular plant species according to the Braun-Blanquet five-point scale [29]. The number of individuals of each tree species within the plot was recorded, and basal area was calculated from diameter-at-breast-height (DBH) measurements for trees of DBH > 10 cm. Whilst these data were collected in support of research into canopy–understorey linkages described by [27], they provide contextual information to help interpret the results of this current study. Data from two plots in particular were used for this purpose (Figure 2 and Section 2.5).

Figure 1. The study area located in the Los Alcornocales Natural Park, which is to be found in the Andalucia region of southwest Spain (inset).

Figure 2. The 20-ha study site (in white), with Quercus suber (left, in yellow) and Quercus canariensis (right, in yellow) 2-ha sub-areas and representative 10-m radius field plots (black circles, see Section 2.2). Digital photography of the tree canopy on 10 April (top) and 22 May 2011 (bottom) is from a Leica RCD105 39-megapixel camera (Wetzlar, Germany).
2.3. LiDAR Survey
Data acquisition for an area encompassing the Tiradero study site took place on 10 April 2011. A total of sixteen east–west overlapping strips of approximate length 8 km and width 600 m were surveyed from an altitudinal range of 929–953 m above ground level. A coverage of approximately 50 km2, in the coordinate space 36°08′–36°11′N, 5°32′–5°37′W (UTM WGS84 zone 30N), was thus achieved.
The Leica ALS50 LiDAR operates at a wavelength of 1064 nm, with a pulse frequency of 86 kHz, a pulse density of c. 2.5 m−2, and a footprint diameter of approximately 13 cm resulting from the combination of instrument operating parameters (Table 1). The vertical placement accuracy of this instrument is <10 cm for the survey flying altitude [30]. Simultaneous GPS measurement was carried out on the ground using a Leica dGPS1200, as an adjustment of permanent GPS station calibration data obtained from Cadiz (SFER) and Ceuta (CEU1). Further LiDAR data were collected on 22 May 2011, i.e., 42 days after the first survey, owing to patchy cloud cover having led to incomplete coverage of another section of the study area. The temporal analysis described here is based on a comparison of the data from one of the new (May) east–west strips and an original (April) strip that it overlapped almost exactly. The viewing angle range for the study area was therefore identical, and at a maximum of 10° was not considered likely to have a major impact on the height-based measurements [31,32]. For both flights, Specim Eagle and Hawk hyperspectral imagers were also in operation, and a Leica RCD105 39-megapixel camera produced digital photographic coverage. The latter images were useful for distinguishing the main areas of Q. suber- and Q. canariensis-dominated canopy, with seasonal true-colour differences reflecting changes in the leafing state of these two trees (Figure 2).

Table 1. Specifications for the LiDAR surveys undertaken in the Tiradero study area, Los Alcornocales, 2011.
                                 April Survey           May Survey
LiDAR sensor                     Leica ALS50            Leica ALS50
Date of deployment               10 April 2011          22 May 2011
Align in                         12:48                  09:16
Ground speed                     135–148 knots          141–150 knots
Flight altitude (above ground)   929–953 m              938–960 m
Pulse rate frequency             85.1–86.1 kHz          86.1–89.9 kHz
Field of view (degrees)          12                     12
Scan frequency                   54.8 Hz                54.8–57.4 Hz
Number of strips                 16 (E–W) + 1 (N–S)     2 (E–W) + 1 (N–S)
Wavelength                       1064 nm                1064 nm
Beam divergence                  0.22 mrad              0.22 mrad
Footprint size                   13 cm                  13 cm
Vertical discrimination          2.8 m                  2.8 m
Detection system                 Four return            Four return

2.4. LiDAR Data Processing
Post-flight processing of the LiDAR data was carried out by NERC's Remote Sensing Group at the Plymouth Marine Laboratory. Modelling of terrain and canopy heights was performed using the software Tiffs 8.0: Toolbox for LiDAR Data Filtering and Forest Studies ([33]; Globalidar 2006–2011). After importing and tiling the data strips, the Tiffs core function is to filter the point cloud into ground and non-ground returns. The morphological filtering method described by [34] is employed; being grid-based, it is computationally efficient and fast. For each tile, the outputs from the filtering process were Digital Elevation Models (DEMs), Digital Surface Models (DSMs) and Object Height Models (OHMs) at a spatial resolution of 5 m, as well as ground, object (vegetation) and combined point data. We employed the DEM from a single LiDAR acquisition (that for April) for both time points, following the precedent of previous multi-temporal studies [35]. The Tiffs software facilitates individual-tree as well as grid-based analyses. Its marker-controlled watershed segmentation method [36] was used to identify treetops and crown boundaries of trees for tiles covering the Tiradero core study area. The software also extracts structural parameters for the trees, calculated from the normalised vegetation heights (all returns).
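The normalisation step that produces object heights from the filtered models amounts to subtracting the terrain model from the surface model, cell by cell (OHM = DSM − DEM). A minimal pure-Python sketch of this step (Tiffs' actual grid processing is more involved; the nodata convention here is our illustrative assumption):

```python
NODATA = -9999.0

def object_height_model(dsm, dem, nodata=NODATA):
    """Subtract terrain (DEM) from surface (DSM) heights, cell by
    cell, giving vegetation heights above ground; nodata cells in
    either input propagate to the output."""
    return [
        [nodata if s == nodata or e == nodata else s - e
         for s, e in zip(srow, erow)]
        for srow, erow in zip(dsm, dem)
    ]

# Two 5 m cells: canopy surfaces at 110.0 m and 112.5 m a.s.l. over
# ground at 100.0 m and 100.5 m give object heights of 10.0 and 12.0 m.
ohm = object_height_model([[110.0, 112.5]], [[100.0, 100.5]])
```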
The statistics calculated were the maximum, mean, standard deviation, quadratic mean, skewness and kurtosis of vegetation heights, together with crown radius and canopy volume (the volume under the OHM for the delineated tree crown [37]). The simultaneous grid-based statistical analysis (GSA) returned the same set of height metrics (excluding crown dimensions) for cells of size 5 m.

2.5. Temporal Analysis
We wanted to compare the effect of acquisition date on the LiDAR measurement of forest structure, at both the tree-crown and 5 m grid-cell level. Metrics were retrieved from the surveys of 10 April 2011 and 22 May 2011, an interval of 42 days. This interval spans a period of significant leaf change for the two dominant trees of the mixed forest study area: the evergreen cork oak Quercus suber undergoes both leaf loss and emergence, whilst the semi-deciduous Q. canariensis is in a state of leaf expansion. The comparisons were made for a 20 ha area of mixed forest, and for two smaller (2 ha) areas contained therein, differing in their predominant canopy type: Quercus suber versus Quercus canariensis (Figure 2). The areas used in the comparisons were drawn as quadrilateral polygons in ArcGIS, and were used to extract the sets of tree- and grid-level data from the shapefiles of wider coverage. For the comparison of LiDAR statistics of individual trees, the package spatstat [38], implemented in the R language (R Development Core Team 2012), was used to match trees identified in one survey with trees identified in the next. Spatstat is designed for analysing spatial point patterns. For two point patterns A and B, its function nncross computes the nearest neighbour in B for each point of A. For our purposes, we applied nncross to associate trees isolated from one LiDAR object height model (OHM) with the assumed same trees isolated from the second LiDAR OHM. Distances between A and B will be small when the same tree is detected in the two surveys.
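The matching step can be pictured as follows: for every treetop detected in the April OHM, find the closest treetop in the May OHM and record the separation. A brute-force Python analogue of spatstat's nncross (an illustrative sketch only; the study used the R implementation):

```python
import math

def nncross(points_a, points_b):
    """For each (x, y) point in A, return (index, distance) of its
    nearest neighbour in B: small distances suggest the same tree
    was detected in both surveys."""
    matches = []
    for ax, ay in points_a:
        dist, idx = min(
            (math.hypot(ax - bx, ay - by), j)
            for j, (bx, by) in enumerate(points_b)
        )
        matches.append((idx, dist))
    return matches

# Two April treetops matched to May treetops (coordinates in metres);
# both offsets fall well inside a 2 m matching threshold.
pairs = nncross([(0.0, 0.0), (5.0, 5.0)], [(0.5, 0.0), (5.0, 6.0)])
```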
Once trees had been matched, the ratio of values for each of the eight LiDAR statistics was calculated and analysed according to the distance between the neighbours (corresponding to the amount of error in the localisation of trees). In the absence of phenology-related changes, ratios are expected to equal 1 (the measured values do not change between surveys) when trees are perfectly matched (A–B distance values are low). Comparable grid-based analyses were conducted based on raster files of 5 m cell size clipped by the 20 ha and 2 ha areas of interest. In this case, ratios of change of four LiDAR variables (maximum, mean, standard deviation and skewness of vegetation heights) were calculated between surveys. The ratio values (April/May), for both grid cells and trees, were tested for significant difference from the value 1 using Wilcoxon signed rank tests for non-normal data.

To aid the interpretation of the results, the same four LiDAR variables compared in the grid-cell analysis were also calculated from the LiDAR vegetation point data extracted for two circular plots of radius 10 m, for which field data on vegetation vertical profile and plant community composition were available (see Section 2.2). The two plots chosen comprised one for each of the two main canopy types, and were the ones nearest to the two 2 ha study areas (Figure 2). The LiDAR point clouds for these plots were inspected using the data exploration tool Treevis (version 0.78, University of Freiburg, Germany), as well as through scatterplots. Such visualisation and inspection of the data can prove highly useful in the detection of patterns and relationships [39].

3. Results

3.1. Selection of Trees

April–May comparisons were made on eight LiDAR metrics calculated for segmented tree crowns (Figure 3), and four metrics for 5 m grid cells (Figure 4). Temporal differences in the metrics increased with offset distance between matched trees in the near-neighbour analysis (Supplementary Materials, Figure S1), although this effect was clearer with some metrics (e.g., mean and quadratic mean) than others (e.g., crown dimensions). Further consideration of the tree data is therefore restricted to trees matched with most certainty, taken to be within 2 m (representing approximately 60% of the total segmented trees, or 2900 comparisons for the 20 ha plot and c. 300 comparisons for the 2 ha plots).

3.2. April–May Comparison of LiDAR Metrics

We predicted that April–May comparisons of LiDAR metrics would reveal significant differences as a result of phenological changes to leafing state, and this was demonstrated for the majority of the metrics by the significant deviation of the April–May ratios from the value 1 (Figures 3 and 4). The variable that differed least between surveys was maximum height: ratio values in both tree and grid analyses were very close to the value of 1 (Figures 3 and 4, Supplementary Materials Figure S1). The tree analysis also revealed only small differences in the measurement of mean height between surveys (Figure 3); for the grid analysis, the May measurements of mean height were higher for the mixed plot and Q. canariensis (Figure 4). For the standard deviation of heights, ratio values > 1 were evident in nearly all comparisons (Figures 3 and 4). Of all metrics included in the tree and grid cell analyses, skewness values were most dissimilar across surveys, and also variable in that dissimilarity. The results at the tree level showed lower skewness in May than in April (all values are negative, hence in this case a lower ratio value corresponds to a more negative value in May). For the four metrics studied in the tree analysis alone, crown radius, crown volume and quadratic mean height had very similar values in the comparisons, whilst kurtosis values increased from April to May (Figure 3).
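As a concrete illustration of the ratio-based testing reported above, the following Python sketch tests whether a set of April/May ratios deviates significantly from the value 1 using a Wilcoxon signed-rank test. The values are synthetic (the study computed these ratios from matched crown and grid-cell metrics), simulating May mean heights that are on average about 5% higher than in April.

```python
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(42)

# Synthetic April mean heights for 200 matched trees, and May values
# that are on average ~5% higher (mimicking leaf expansion).
april_h = rng.uniform(5.0, 12.0, size=200)
may_h = april_h * rng.normal(1.05, 0.03, size=200)

ratios = april_h / may_h  # April/May ratio; expected below 1 here

# Wilcoxon signed-rank test of whether the ratios differ from 1,
# applied to the differences (ratio - 1), as for non-normal data.
stat, p = wilcoxon(ratios - 1.0)
significant = p < 0.05
```

A ratio distribution centred below 1 with modest spread, as here, yields a clearly significant result; perfectly matched trees with no phenological change would instead give ratios scattered symmetrically about 1.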
Figure 3. Bar plots showing LiDAR April/May measurement ratios (with 95% confidence intervals) for eight tree statistics. Results of Wilcoxon signed rank tests of significant difference from the value 1 are given above each bar (** significant at p = 0.01; * significant at p = 0.05; N.S. not significant). Results for the 20 ha mixed Q. canariensis/suber plot (Qc/s), 2 ha Q. canariensis (Qc) and Q. suber (Qs) plots are given.

Figure 4. Bar plots showing LiDAR April/May measurement ratios (with 95% confidence intervals) for four 5 m grid-cell statistics: maximum, mean, standard deviation and skewness. Results of Wilcoxon signed rank tests of significant difference from the value 1 are given above each bar (** significant at p = 0.01; * significant at p = 0.05; N.S. not significant). Results for the 20 ha mixed Q. canariensis/suber plot (Qc/s), 2 ha Q. canariensis (Qc) and Q. suber (Qs) plots are given.

3.3. Canopy Species-Specific Effects

An effect of canopy type was clearly observed in the April–May comparisons (Figures 3 and 4). We hypothesised that, in the case of Quercus canariensis, which undergoes leaf expansion during the study period, the reduced penetration of LiDAR pulses through the canopy in May would lead to greater April–May differences in metric retrieval compared to Q. suber. The results, at both tree and grid-cell level, confirmed this prediction. For the semi-deciduous tree in the tree analysis, the increase in mean height and reduced standard deviation and skewness of heights in May compared to April were more marked than for Q. suber (Figure 3).
This was evident for the 2-ha Q. canariensis plot and the 20-ha plot as a whole, which had a predominance of this tree. For both the tree and grid analyses, skewness of height in Q. canariensis canopies was at least 20% less in May compared to April, the differences for Q. suber being much less (Figures 3 and 4).

In relation to the Q. suber canopy, the April survey was undertaken at the onset of rapid leaf drop, whilst the May survey was carried out during the spring flush of new leaf growth. The digital imagery, with its variation in the hue of green, suggests that full leaf expansion had not been achieved in all trees (Figure 2). As such, the net change in foliage mass and volume is less easy to predict, and the tree and grid analyses produce inconsistent support for either an increase or decrease in interception of the laser pulses at the canopy level. The slightly lower mean height value for May in the grid analysis (Figure 4) would suggest reduced foliage and more penetration of the laser through the canopy, and this is consistent with a higher standard deviation of heights for the second survey. In the tree analysis, skewness of heights shows a significant reduction in May (Figure 3), as does the standard deviation of heights.

The distribution of height measurements for the two example circular plots helps to interpret the differences exhibited by Quercus canariensis and Q. suber (Figure 5 and Table 2). In the Q. suber plot, the abundance of undershrubs to a height of 2 m is captured in both strips. This was mostly composed of the shrubs Erica arborea and Genista triacanthos and the liane Smilax aspera. A relatively unfilled space apparently sits below the Q. suber canopy. For the Q. canariensis plot, a taller, shallower but potentially denser canopy is suggested by the scatterplots. Field records suggest some presence of Q. suber and Phillyrea latifolia, and this is reflected by the scatter of returns at 4 m and above.
A lower understorey of Ruscus aculeatus, Smilax aspera, Myrtus communis and other shrubs is also suggested. Notably, the sub-canopy vegetation (7 m and below) appears less well captured in the May survey. Comparisons of the four LiDAR variables were remarkably consistent for both plots (Table 2), with the ratio value falling in the range 0.95–1.03, with the exception of the skewness of heights for Q. suber, and both skewness and standard deviation of heights for Q. canariensis.

Table 2. Average values of four LiDAR metrics for the two representative 10 m radius circular plots differing by canopy type, as well as ratio values for April–May comparisons.

Canopy Type            Height Statistic (m)    April    May      April–May Ratio
Quercus suber          Maximum                 10.65    10.29    1.03
Quercus suber          Mean                     6.28     5.94    1.06
Quercus suber          Standard deviation       2.80     2.89    0.97
Quercus suber          Skewness                −1.01    −0.91    1.11
Quercus canariensis    Maximum                 14.96    14.85    1.01
Quercus canariensis    Mean                     9.83    10.16    0.97
Quercus canariensis    Standard deviation       3.36     2.93    1.15
Quercus canariensis    Skewness                −1.35    −1.62    0.83

Figure 5. Scatterplots of vegetation heights for two representative circular plots of radius 10 m: Quercus suber dominated plot (top row) and Q. canariensis dominated plot (bottom row). Filled symbols are first returns, and open symbols second and third returns.

4. Discussion

A chief strength of our approach is that the repeat surveys were conducted using the same sensor configuration and flight parameters, and in the same year. Despite its obvious advantages, this has rarely been possible in past multi-temporal LiDAR studies. Data collection in a mixed deciduous woodland in eastern England [16] was separated by two years, as was the case in a study in southeast Norway [11]. The latter investigation also used different sensors at different flying altitudes, requiring a number of complicated compensations in the analysis. Flying altitude can have a significant effect, as a reduction in peak pulse power concentration can delay pulse triggering within vegetation, thereby increasing laser penetration into the foliage and reducing height metric values [40]. An increased interval time, on the other hand, allows tree growth to become a factor.
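The four grid-cell statistics at the centre of these comparisons are straightforward to reproduce from classified vegetation returns. As a minimal sketch (the function name and data are invented; the study derived the statistics from 5 m rasters rather than this ad hoc binning), heights can be binned into square cells and summarised as follows:

```python
import numpy as np
from scipy.stats import skew

def cell_height_metrics(x, y, z, cell=5.0):
    """Bin vegetation-height points (x, y, z) into square cells of the
    given size and return, per cell, the maximum, mean, standard
    deviation and skewness of heights."""
    ix = np.floor(x / cell).astype(int)
    iy = np.floor(y / cell).astype(int)
    metrics = {}
    for key in set(zip(ix, iy)):
        h = z[(ix == key[0]) & (iy == key[1])]
        metrics[key] = {
            "max": h.max(),
            "mean": h.mean(),
            "std": h.std(ddof=1) if h.size > 1 else 0.0,
            "skew": skew(h) if h.size > 2 else 0.0,
        }
    return metrics

# Invented point heights falling in a single 5 m cell: most returns
# cluster near the canopy top, with a few low understorey returns.
x = np.array([1.0, 2.0, 3.0, 4.0, 1.5])
y = np.array([1.0, 2.0, 3.0, 4.0, 2.5])
z = np.array([2.0, 9.5, 10.0, 10.5, 0.5])

m = cell_height_metrics(x, y, z)[(0, 0)]
```

The clustering of returns in the upper canopy gives this example cell a negative skewness, the pattern described for the closed canopies above.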
This investigation has found evidence for the effect of seasonality, specifically short-term spring-time leafing phenology in two typical Mediterranean forest species, on the retrieval of LiDAR variables of value for describing tree crown and forest stand vegetation structural attributes. The specific contribution of the study is therefore its investigation of more subtle effects of tree leafing phenology than the often-studied leaf-on/leaf-off dichotomy. The investigation was opportunistic, taking advantage of an unplanned repeat LiDAR survey, and hence a campaign of field data collection aimed at quantifying the associated phenological changes was not planned. We have, however, been able to draw upon contextual field data to reach some robust conclusions on the effect of seasonal tree leafing processes on the retrieval of LiDAR metrics.

Airborne LiDAR is being increasingly applied in Mediterranean ecosystems (e.g., [41–47]). Knowledge of the effect on LiDAR parameters of seasonal changes to these predominantly evergreen canopies (including climatic influences on leafing phenology) is relevant to the design, data analysis and interpretation of LiDAR surveys in this and other warm-temperate/sub-tropical regions.

Contrasting responses were observed for Quercus canariensis and Quercus suber canopies, in line with our hypothesis. Field observation and comparison of the digital imagery for April and May (Figure 2) indicate that the Quercus canariensis trees develop from a state of partial to full leaf expansion during this period. Under partial leaf expansion, reflectances off branches will presumably represent a relatively high proportion of the LiDAR point cloud. As leaves expand, one can predict that the increased amount of foliage biomass, and canopy closure, will reduce penetration of laser pulses and increase the proportion and concentration of height measurements recorded in the upper strata of the vegetation profile.
Our results confirmed this prediction, with an increased mean height, reduced standard deviation and more negative skewness of heights in May compared to April. These results are analogous to those found for leaf-on/leaf-off conditions. For lowland mixed woodlands in eastern England, a small increase in mean height (13.35 vs. 12.47 m) and decrease in standard deviation of heights (5.10 vs. 5.25 m) were observed in leaf-on conditions [16]. The height distributions of single returns and last-of-many returns have been shown to shift towards the ground in leaf-off conditions within boreal forests [10]; skewness values under leaf-off conditions were more positive for single returns (but not for first or last returns), and the variability of the height distribution tended to increase from leaf-on to leaf-off conditions. In another study, laser interception by the upper parts of a mixed forest canopy was significantly higher in leaf-on conditions [11]; for the first return data, height metrics were 0.33–0.97 m higher under leaf-on canopy conditions. Analogous results were also shown in a comparison between tropical moist forest (TMF) and tropical wet forest (TWF) [48]. At the end of the dry season, leaf loss from canopy-forming trees was pronounced in some TMF areas, and this led to less LiDAR energy being reflected from the upper canopy, thereby reducing the LiDAR median height metric relative to the TWF study area. In an investigation of light availability at different developmental stages of a boreal forest [49], a direct relationship between skewness and the degree of light entering through the crown was established, results that are consistent with the canopy closure between April and May in our study forest. Maximum height was the least variable of the four metrics for Quercus canariensis plots.
The ratio values of ~1 for maximum height measurement for trees and grid cells are reassuring, in terms of reliable measurement of a variable that should be less affected by tree leafing phenology than the others that were calculated. This is again consistent with the results of the studies reviewed here. Leaf-on/leaf-off values in one investigation were 25.31/25.14 m [16]. In another, canopy conditions were found to exert little influence on the maximum height obtained for individual trees, although maximum laser heights of 'first' echoes were higher in birch trees under leaf-off conditions [10]. Meanwhile, plot-level stability of maximum height has been reported [11]. Using percentile heights, a tendency for canopy height to be underestimated in leaf-off conditions has been observed, but only where the forest was dominated by deciduous compound-leaved trees [14].

The tree-level and grid analyses have complemented each other in the comparison of time periods and canopy types. The former provided a number of metrics associated with tree crowns, whilst the latter was unaffected by any error in the tree segmentation process. The results of both give confidence that the LiDAR variables are relatively robust to the seasonality of tree leafing that this study spans. A possible exception is the measurement of skewness, especially in the case of the Q. canariensis canopy, and the reasons for this have already been discussed. Skewness of heights is a useful summary measure of their asymmetry as affected by canopy closure [49] and the relative density of vegetation in the canopy and understorey layers. It can be used, for example, to differentiate natural forest and plantations, with their varying vertical vegetation profiles [50]. The reduced skewness values obtained for Q. canariensis plots during the May survey suggest that this timing is less optimal for capturing information on the presence of understorey vegetation.
This may also be the case for the evergreen Q. suber, though the evidence for this from our study is equivocal. It may be that a survey undertaken later in the summer, with full leafing out of the cork oak trees, would similarly be less useful for the description of layers below the tree canopy.

5. Conclusions

This investigation has provided a unique record of the effect of within-season tree leafing state on the consistency of LiDAR measurement of mixed evergreen/semi-deciduous forests. It provides reassurance that retrieval of standard parameters such as maximum and mean height of trees is robust to a degree of leaf expansion, but that care in the timing of a survey is required when sub-canopy vegetation layers (relevant, for example, to modelling fire fuel loads) are the focus of investigation, particularly for trees of a more deciduous nature. For the area studied in the current investigation, the earlier spring-time canopy state of April was preferable for detecting an adequate signal of the sub-canopy strata, especially in the patches dominated by the deciduous Quercus canariensis. If the modelling of the canopy surface alone is important, the robustness of the maximum height LiDAR metric would suggest that the survey timing is not critical.

Supplementary Materials: The following are available online at http://www.mdpi.com/2072-4292/10/5/659/s1.

Author Contributions: W.S. and D.C. conceived and designed the methodological approach; W.S., H.A. and D.C. undertook the field data collection; W.S. performed the remote sensing data analysis; W.S. wrote the paper and H.A. and D.C. provided inputs and comments on the drafts.

Acknowledgments: Data acquisition was performed by the Airborne Research and Survey Facility, NERC (project EU11/03), and access to the Tiradero study site was arranged by Manuela Malla of the Los Alcornocales Natural Park office. We are grateful for the cooperation of all involved.
Our thanks to Ruben Valbuena for comments on an earlier draft of this paper.

Conflicts of Interest: The authors declare no conflict of interest.

References

1. Lefsky, M.; Cohen, W.B.; Acker, S.A.; Parker, G.G.; Spies, T.A.; Harding, D. Lidar Remote Sensing of the Canopy Structure and Biophysical Properties of Douglas-Fir Western Hemlock Forests. Remote Sens. Environ. 1999, 70, 339–361. [CrossRef]
2. Lim, K.; Treitz, P.; Wulder, M.; St-Onge, B.; Flood, M. LiDAR remote sensing of forest structure. Prog. Phys. Geogr. 2003, 27, 88–106. [CrossRef]
3. Danson, F.M.; Morsdorf, F.; Koetz, B. Airborne and terrestrial laser scanning for measuring vegetation canopy structure. In Laser Scanning for the Environmental Sciences; Heritage, G., Large, A., Eds.; Wiley-Blackwell: Oxford, UK, 2009; pp. 201–219.
4. Broughton, R.K.; Hinsley, S.A.; Bellamy, P.E.; Hill, R.A.; Rothery, P. Marsh Tit Poecile palustris territories in a British broad-leaved wood. Ibis 2006, 148, 744–752. [CrossRef]
5. Graf, R.; Mathys, L.; Bollmann, K. Habitat assessment for forest dwelling species using LiDAR remote sensing: Capercaillie in the Alps. For. Ecol. Manag. 2009, 257, 160–167. [CrossRef]
6. Goetz, S.J.; Steinberg, D.; Betts, M.G.; Holmes, R.T.; Doran, P.J.; Dubayah, R.; Hofton, M. Lidar remote sensing variables predict breeding habitat of a Neotropical migrant bird. Ecology 2010, 91, 1569–1576. [CrossRef] [PubMed]
7. Asner, G.P. Tropical forest carbon assessment: Integrating satellite and airborne mapping approaches. Environ. Res. Lett. 2009, 4, 1–11. [CrossRef]
8. Saatchi, S.S.; Harris, N.L.; Brown, S.; Lefsky, M.; Mitchard, E.T.A.; Salas, W.; Zutta, B.R.; Buermann, W.; Lewis, S.L.; Hagen, S.; et al. Benchmark map of forest carbon stocks in tropical regions across three continents. Proc. Natl. Acad. Sci. USA 2011, 108, 9899–9904. [CrossRef] [PubMed]
9. Korpela, I.; Hovi, A.; Korhonen, L. Backscattering of individual LiDAR pulses from forest canopies explained by photogrammetrically derived vegetation structure. ISPRS J. Photogramm. Remote Sens. 2013, 83, 81–93. [CrossRef]
10. Orka, H.O.; Næsset, E.; Bollandsås, O.M. Effects of different sensors and leaf-on and leaf-off canopy conditions on echo distributions and individual tree properties derived from airborne laser scanning. Remote Sens. Environ. 2010, 114, 1445–1461. [CrossRef]
11. Næsset, E. Assessing sensor effects and effects of leaf-off and leaf-on canopy conditions on biophysical stand properties derived from small-footprint airborne laser data. Remote Sens. Environ. 2005, 98, 356–370. [CrossRef]
12. Kim, S.; McGaughey, R.J.; Andersen, H.-E.; Schreuder, G. Tree species differentiation using intensity data derived from leaf-on and leaf-off airborne laser scanner data. Remote Sens. Environ. 2009, 113, 1575–1586. [CrossRef]
13. Villikka, M.; Packalén, P.; Maltamo, M. The suitability of leaf-off airborne laser scanning data in an area-based forest inventory of coniferous and deciduous trees. Silva Fenn. 2012, 46, 99–110. [CrossRef]
14. Wasser, L.; Day, R.; Chasmer, L.; Taylor, A. Influence of Vegetation Structure on Lidar-derived Canopy Height and Fractional Cover in Forested Riparian Buffers During Leaf-Off and Leaf-On Conditions. PLoS ONE 2013, 8, e54776. [CrossRef] [PubMed]
15. White, J.C.; Arnett, J.T.T.R.; Wulder, M.A.; Tompalski, P.; Coops, N.C. Evaluating the impact of leaf-on and leaf-off airborne laser scanning data on the estimation of forest inventory attributes with the area-based approach. Can. J. For. Res. 2015, 1513, 1498–1513. [CrossRef]
16. Hill, R.A.; Broughton, R.K. Mapping the understorey of deciduous woodland from leaf-on and leaf-off airborne LiDAR data: A case study in lowland Britain. ISPRS J. Photogramm. Remote Sens. 2009, 64, 223–233. [CrossRef]
17. Hirata, Y.; Sato, K.; Shibata, M.; Nishizono, T. The capability of helicopter-borne laser scanner data in a temperate deciduous forest. In Proceedings of the ScandLaser Scientific Workshop on Airborne Laser Scanning of Forests, Umeå, Sweden, 3–4 September 2003; Hyyppä, J., Naesset, E., Olsson, H., Granqvist Phalén, T., Reese, H., Eds.; Institutionen för Skoglig Resurshushållning, Sveriges Lantbruksuniversitet: Umeå, Sweden, 2003; pp. 174–179.
18. Thomas, P.A.; Packham, J.R. Ecology of Woodlands and Forests; Cambridge University Press: Cambridge, UK, 2007.
19. Zhao, K.; Suarez, J.C.; Garcia, M.; Hu, T.; Wang, C.; Londo, A. Utility of multitemporal lidar for forest and carbon monitoring: Tree growth, biomass dynamics, and carbon flux. Remote Sens. Environ. 2018, 204, 883–897. [CrossRef]
20. Vepakomma, U.; St-Onge, B.; Kneeshaw, D. Spatially explicit characterization of boreal forest gap dynamics using multi-temporal lidar data. Remote Sens. Environ. 2008, 112, 2326–2340. [CrossRef]
21. Næsset, E.; Bollandsås, O.M.; Gobakken, T.; Gregoire, T.G.; Ståhl, G. Model-assisted estimation of change in forest biomass over an 11 year period in a sample survey supported by airborne LiDAR: A case study with post-stratification to provide “activity data”. Remote Sens. Environ. 2013, 128, 299–314. [CrossRef]
22. Huang, C.; Wylie, B.; Yang, L.; Homer, C.; Zylstra, G. Derivation of a tasselled cap transformation based on Landsat 7 at-satellite reflectance. Int. J. Remote Sens. 2002, 23, 1741–1748. [CrossRef]
23. Hopkinson, C.; Chasmer, L.; Hall, R.J. The uncertainty in conifer plantation growth prediction from multi-temporal lidar datasets. Remote Sens. Environ. 2008, 112, 1168–1180. [CrossRef]
24. Dubayah, R.O.; Sheldon, S.L.; Clark, D.B.; Hofton, M.A.; Blair, J.B.; Hurtt, G.C.; Chazdon, R.L. Estimation of tropical forest height and biomass dynamics using lidar remote sensing at La Selva, Costa Rica. J. Geophys. Res. 2010, 115, G00E09. [CrossRef]
25. Simonson, W.; Ruiz-Benito, P.; Valladares, F.; Coomes, D. Modelling above-ground carbon dynamics using multi-temporal airborne lidar: Insights from a Mediterranean woodland. Biogeosciences 2016, 13, 961–973. [CrossRef]
26. Jenkins, R.B. Airborne laser scanning for vegetation structure quantification in a south east Australian scrubby forest-woodland. Austral Ecol. 2012, 37, 44–55. [CrossRef]
27. Simonson, W.D.; Allen, H.D.; Coomes, D.A. Overstorey and topographic effects on understories: Evidence for linkage from cork oak (Quercus suber) forests in southern Spain. For. Ecol. Manag. 2014, 328, 35–44. [CrossRef]
28. Simonson, W.D.; Allen, H.D.; Coomes, D.A. Use of an Airborne Lidar System to Model Plant Species Composition and Diversity of Mediterranean Oak Forests. Conserv. Biol. 2012, 26, 840–850. [CrossRef] [PubMed]
29. Sutherland, W.J. Ecological Census Techniques, a Handbook; Cambridge University Press: Cambridge, UK, 1996.
30. Leica. ALS50-II Product Specifications. Available online: http://www.nts-info.com/inventory/images/ALS50-II.Ref.703.pdf (accessed on 4 April 2018).
31. Holmgren, J.; Nilsson, M.; Olsson, H. Simulating the effects of lidar scanning angle for estimation of mean tree height and canopy closure. Can. J. Remote Sens. 2003, 29, 623–632. [CrossRef]
32. Morsdorf, F.; Frey, O.; Meier, E.; Itten, K.I.; Allgöwer, B. Assessment of the influence of flying altitude and scan angle on biophysical vegetation products derived from airborne laser scanning. Int. J. Remote Sens. 2008, 1161, 1387–1406. [CrossRef]
33. Chen, Q. Airborne Lidar Data Processing and Information Extraction. Photogramm. Eng. Remote Sens. 2007, 73, 109–112.
34. Chen, Q.; Gong, P.; Baldocchi, D.; Xie, G. Filtering Airborne Laser Scanning Data with Morphological Methods. Photogramm. Eng. Remote Sens. 2007, 73, 175–185. [CrossRef]
35. Yu, X.; Hyyppä, J.; Kaartinen, H.; Maltamo, M. Automatic detection of harvested trees and determination of forest growth using airborne laser scanning. Remote Sens. Environ. 2004, 90, 451–462. [CrossRef]
36. Chen, Q.; Baldocchi, D.; Gong, P.; Kelly, M. Isolating Individual Trees in a Savanna Woodland Using Small Footprint Lidar Data. Photogramm. Eng. Remote Sens. 2006, 72, 923–932. [CrossRef]
37. Chen, Q.; Gong, P.; Baldocchi, D.; Tian, Y.Q. Estimating Basal Area and Stem Volume for Individual Trees from Lidar Data. Photogramm. Eng. Remote Sens. 2007, 73, 1355–1365. [CrossRef]
38. Baddeley, A.; Turner, R. Spatstat: An R package for analyzing spatial point patterns. J. Stat. Softw. 2005, 12, 1–42. [CrossRef]
39. Cleveland, W.S. Rejoinder: A model for studying display methods of statistical graphics. J. Comput. Graph. Stat. 1993, 2, 361–364.
40. Hopkinson, C. The influence of flying altitude, beam divergence, and pulse repetition frequency on laser pulse return intensity and canopy frequency distribution. Can. J. Remote Sens. 2007, 33, 312–324. [CrossRef]
41. Riaño, D.; Valladares, F.; Condés, S.; Chuvieco, E. Estimation of leaf area index and covered ground from airborne laser scanner (Lidar) in two contrasting forests. Agric. For. Meteorol. 2004, 124, 269–275. [CrossRef]
42. Riaño, D.; Meier, E.; Allgower, B.; Chuvieco, E.; Ustin, S.L. Modeling airborne laser scanning data for the spatial generation of critical forest parameters in fire behavior modeling. Remote Sens. Environ. 2003, 86, 177–186. [CrossRef]
43. Riaño, D.; Chuvieco, E.; Ustin, S.L.; Salas, J.; Rodríguez-Pérez, J.R.; Ribeiro, L.M.; Viegas, D.X.; Moreno, J.M.; Fernández, H. Estimation of shrub height for fuel-type mapping combining airborne LiDAR and simultaneous color infrared ortho imaging. Int. J. Wildl. Fire 2007, 16, 341–348. [CrossRef]
44. García, M.; Riaño, D.; Chuvieco, E.; Danson, F.M. Estimating Biomass Carbon Stocks for a Mediterranean Forest in Central Spain Using LiDAR Height and Intensity Data. Remote Sens. Environ. 2010, 114, 816–830. [CrossRef]
45. García, M.; Riaño, D.; Chuvieco, E.; Salas, J.; Danson, F.M. Multispectral and LiDAR data fusion for fuel type mapping using Support Vector Machine and decision rules. Remote Sens. Environ. 2011, 115, 1369–1379. [CrossRef]
46. Morsdorf, F.; Mårell, A.; Koetz, B.; Cassagne, N.; Pimont, F.; Rigolot, E.; Allgöwer, B. Discrimination of vegetation strata in a multi-layered Mediterranean forest ecosystem using height and intensity information derived from airborne laser scanning. Remote Sens. Environ. 2010, 114, 1403–1415. [CrossRef]
47. Ferraz, A.; Bretar, F.; Jacquemoud, S.; Gonçalves, G.; Pereira, L.; Tomé, M.; Soares, P. 3-D mapping of a multi-layered Mediterranean forest using ALS data. Remote Sens. Environ. 2012, 121, 210–223. [CrossRef]
48. Drake, J.B.; Knox, R.G.; Dubayah, R.O.; Clark, D.B.; Condit, R.; Blair, J.B.; Hofton, M. Above-ground biomass estimation in closed canopy Neotropical forests using lidar remote sensing: Factors affecting the generality of relationships. Glob. Ecol. Biogeogr.
2003, 12, 147–159. [CrossRef] 49. Valbuena, R.; Maltamo, M.; Mehtätalo, L.; Packalen, P. Key structural features of Boreal forests may be detected directly using L-moments from airborne lidar data. Remote Sens. Environ. 2017, 194, 437–446. [CrossRef] 50. Antonarakis, A.S.; Richards, K.S.; Brasington, J. Object-based land cover classification using airborne LiDAR. Remote Sens. Environ. 2008, 112, 2988–2998. [CrossRef] © 2018 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/). http://dx.doi.org/10.1016/j.rse.2011.01.017 http://dx.doi.org/10.1016/j.rse.2010.01.023 http://dx.doi.org/10.1016/j.rse.2012.01.020 http://dx.doi.org/10.1046/j.1466-822X.2003.00010.x http://dx.doi.org/10.1016/j.rse.2016.10.024 http://dx.doi.org/10.1016/j.rse.2008.02.004 http://creativecommons.org/ http://creativecommons.org/licenses/by/4.0/. Introduction Materials and Methods Study Area Field Survey LiDAR Survey LiDAR Data Processing Temporal Analysis Results Selection of Trees April–May Comparison of LiDAR Metrics Canopy Species-Specific Effects Discussion Conclusions References work_ctqt2pidhje2bioowgs6qaxeca ---- Digital imaging to simultaneously study device lifetimes of multiple dye-sensitized solar cells Di gi t a l i m a g i n g t o s i m u l t a n e o u s ly s t u d y d e vi c e lif e ti m e s of m u l t i pl e d y e-s e n s i ti z e d s ol a r c e ll s F u r n e ll, L, H o lli m a n , PJ, C o n n e ll, A, Jo n e s , EW, H o b b s , R, Ke r s h a w, CP, An t h o ny, RVE, S e a r l e , J, Wa t s o n , T a n d M c G e t t r i c k , J h t t p : // d x. d oi. o r g / 1 0 . 
Title: Digital imaging to simultaneously study device lifetimes of multiple dye-sensitized solar cells
Authors: Furnell, L; Holliman, PJ; Connell, A; Jones, EW; Hobbs, R; Kershaw, CP; Anthony, RVE; Searle, J; Watson, T and McGettrick, J
Type: Article
URL: http://usir.salford.ac.uk/id/eprint/56795/
Published Date: 2017
DOI: http://dx.doi.org/10.1039/c7se00015d

Sustainable Energy & Fuels, PAPER. Published on 16 January 2017. Downloaded on 01/02/2017 09:25:34.
Digital imaging to simultaneously study device lifetimes of multiple dye-sensitized solar cells†

Leo Furnell,a Peter J. Holliman,*a Arthur Connell,a Eurig W. Jones,a Robert Hobbs,a Christopher P. Kershaw,a Rosie Anthony,a Justin Searle,b Trystan Watsonb and James McGettrickb

a School of Chemistry, Bangor University, Gwynedd LL57 2UW, UK. E-mail: p.j.holliman@bangor.ac.uk; Fax: +44 (0)1248 370528; Tel: +44 (0)1248 382375
b SPECIFIC, College of Engineering, Swansea University, Baglan IKC, Port Talbot, SA12 7AZ, UK
† Electronic supplementary information (ESI) available. See DOI: 10.1039/c7se00015d

Cite this: DOI: 10.1039/c7se00015d. Received 10th January 2017; Accepted 13th January 2017. rsc.li/sustainable-energy. This journal is © The Royal Society of Chemistry 2017.

In situ degradation of multiple dyes (D35, N719, SQ1 and SQ2) has been investigated simultaneously using digital imaging and colour analysis. The approach has been used to study the air stability of N719 and squaraine dyes adsorbed onto TiO2 films, with the data suggesting this method could be used as a rapid screening technique for DSC dyes and other solar cell components. Full DSC devices have then been tested using either D35 or N719 dyes and these data have been correlated with UV-vis, IR and XPS spectroscopy, mass spectrometry, TLC and DSC device performance. Using this method, up to 21 samples have been tested simultaneously, ensuring consistent sample exposure. Liquid electrolyte DSC devices have been tested under light soaking, including the first report of D35 testing with I−/I3− electrolyte whilst operating at open circuit, short circuit, or under load, with the slowest degradation shown at open circuit. D35 lifetime data suggest that this dye degrades after ca. 370 h light soaking regardless of UV filtering. Control N719 devices have also been light soaked for 2500 h to verify the imaging method, and the N719 device data confirm that UV filtration is essential to protect the dye and the I3−/I− electrolyte redox couple to maintain device lifetime. The data show a direct link between the colour intensity and/or hue of device sub-components and device degradation, enabling "real time" diagnosis of device failure mechanisms.

Introduction

O'Regan and Grätzel reported the pivotal breakthrough in dye-sensitized solar cells (DSCs) in 1991 by sensitizing mesoporous TiO2 with a Ru-bipyridyl dye to enhance photo-current.1 Since then, progress has been made in all aspects of DSC devices,2 leading to several reports of η > 12% (ref. 3–6), with η = 14.7% (ref. 7) the best efficiency yet reported. DSC lifetimes have also been studied, generally by light soaking allied to device performance.8,9 In this context, for DSC devices (or any PV device) to be commercially viable, their lifetime must be >3–5 years for personal chargers and 20–25 years for building-integrated PV.10 To maintain PV performance, all the components must remain stable with time under either indoor or outdoor conditions. For DSC devices, the light-harvesting dye is a key component.11 It has been estimated that this must be able to carry out 100 M turnovers of light absorption and conversion into electricity over a 25 year period.11

Despite its importance for the commercialisation of PV technology, prior work on DSC device stability is limited given the large DSC literature. Those reports which do exist have typically focussed on small numbers of devices of one dye rather than comparative testing of multiple devices. This makes comparing literature reports difficult because testing parameters differ, whilst any variance can be exacerbated by lengthy testing periods. Of the dyes tested in the literature, very different responses have been reported for dyes with different molecular structures, suggesting this is a key factor affecting dye lifetime. For instance, Sommeling et al.
carried out lifetime studies (<1500 h) for the Ru-bipyridyl dye N719 in DSC devices at open circuit exposed to light and/or heat, reporting that ambient light soaking had the least effect on device performance but that exposure to 1 Sun at 85 °C caused the greatest losses. These losses were accompanied by electrolyte bleaching due to loss of I2. Xue et al. have also studied N719 degradation using UV-vis and Raman spectroscopy, whilst DSC devices were studied using IV, IPCE and EIS data.13 The data showed a 20% drop in efficiency after 1074 h UV-filtered Xe-lamp light exposure.13 This study concluded that N719 degradation was the main reason for the drop in device performance. N719 has also been tested under Xe-lamps and thermal cycles by Giustini et al.,14 whilst Bessho et al.15 have extended Ru-bipy dye lifetime by replacing the vulnerable NCS− ligands, achieving η = 10.1%.

Scheme 1 Molecular structures of (a) the triphenylamine dye D35, (b) the ruthenium bipyridyl dye N719 and (c) and (d) the squaraine dyes SQ1 and SQ2, respectively.

The effect of electrolyte bleaching with Z907 has been studied by Mastroianni et al., who showed that image analysis could be used successfully to monitor the progress of electrolyte bleaching.16 Organic DSC dye development has also been increasingly studied in recent times, including indoline,17 half-squaraine18,19 and triphenylamine-based (TPA) dyes.20,21 To date, less has been reported on organic dye lifetimes, although we have previously reported a TPA dye with >1000 h stability21 with an I3−/I− redox couple, and Joly et al.
have also recently reported a TPA dye with high stability after 2200 h light soaking at 65 °C when using I3−/I− in an ionic liquid electrolyte.22 In addition, Tanaka et al. have suggested that decarboxylation of the cyanoacrylic acid linker of D131 is an important degradation mechanism for that dye and that electrolytes containing I2 and amine accelerate this.23 There has also been one recent report of D35 lifetime testing using a Co(II/III) redox couple, which reports the influence of cation co-additives on dye photo-degradation.24 In recent years, the need to harvest longer-wavelength photons has led to interest in squaraine dyes, with several examples, e.g. SQ1 (ref. 25) and SQ2 (ref. 26), showing promise both as infrared-harvesting dyes and also when co-sensitized with N719.27 For squaraine dyes, Paek et al. have reported panchromatic triphenylamine-modified squaraines with 1000 h stability under light soaking at 60 °C,28 whilst Qin et al. have reported 1000 h stability for double-linker squaraines using an ionic liquid electrolyte.29 In addition, Wu et al. have studied squaraine dye degradation under visible light, demonstrating that TiO2 acts as a photo-catalyst for this by cleaving the C=C double bond in the dye.30 In this context, the photocatalytic activity of TiO2 is well known31 and has been studied for water purification and waste degradation.32 Currently, there is a range of dyes which absorb across the visible spectrum. Whilst clearly there is still a need to optimise dye combinations to enhance light harvesting, there is also a need to understand much more about the lifetimes of dyes which absorb from 400–900 nm and under "real-life" working conditions. An image analysis method developed by Asghar et al.33 has been built upon to directly compare multiple dyed samples or devices under the same exposure conditions.
As such, this paper reports the use of digital photography and colour image analysis to quantify real-time dye degradation to study failure mechanisms of dye adsorbed on TiO2 and in full DSC devices. We have chosen red and blue dyes to study DSC dye lifetimes across a wide range of λ in one paper. These dyes have been tested under a range of device operating conditions (i.e. at open circuit, short circuit or under a 10 Ω load), allowing a detailed picture of device lifetime to be formed.

Experimental

Dye choice and sample preparation

The dyes chosen include the triphenylamine dye (D35, Dyenamo), which has previously been reported in DSC devices and has proved effective both for co-sensitisation34 and in solid-state devices35 (Scheme 1). However, to the best of our knowledge there are no literature reports of D35 stability testing using I−/I3− electrolyte. N719 (Dyesol) has been chosen to act as control dye as it has been widely studied, along with the squaraine dyes SQ1 (prepared in house according to Burke et al.25) and SQ2 (Solaronix), which are known to be susceptible to degradation in air (Scheme 1).

Dye-TiO2 testing

A Photosimile 200 light box was used to simultaneously study the dye degradation of multiple dye samples adsorbed onto TiO2 films under constant artificial light, as this had sufficient floor space (40.8 × 58.7 cm), consistent light intensity (5 W m−2, 6000 lux) and an in-built slot for a camera to be placed directly above the samples. A Canon EOS 1100D with an 18–55 VR lens was used to take pictures at regular intervals ranging from 1 to 5 s. Samples were placed on a numbered grid with a white background (Fig.
1) alongside 2 contrast controls: an undyed TiO2 film to give 100% "white" and a black square to give 100% "black". A clock was also added to verify that the time-lapse images were being taken at the correct time intervals. It was important to align the films consistently because a macro within the computer software (Sigma Scan Pro 5) was used to select and analyse red, green, blue (RGB) intensities at fixed points on each film. The macro also analysed the control films to adjust for any changes in the light levels, whilst the black control ensured consistent camera focus during any colour change within the TiO2 films. Dye-TiO2 exposure samples were prepared as follows. TEC15 glass (1.5 × 3 cm, NSG) was cleaned by sonicating in acetone before drying with N2. One layer of P25 TiO2 colloid paste was doctor bladed onto the glass using a single layer of Scotch™ tape as a spacer before sintering at 500 °C. After cooling to 80 °C, the TiO2 films (ca. 7 μm thickness) were immersed in 0.5 mM ethanolic dye solution of SQ1, SQ2 or N719 in tert-butanol : CH3CN (1 : 1 v/v) for 18 h before rinsing with ethanol and drying under N2.

Dye-TiO2 and DSC device lifetime testing

TiO2 DSC device photo-anodes were prepared in the same way as for the dye-TiO2 exposure samples, except that the films were prepared using one layer of active opaque colloid (AO, Dyesol). Platinum paste (PT-1, Dyesol) was deposited onto pre-drilled counter electrodes and sintered at 400 °C. The 2 electrodes were sealed together with Surlyn (25 μm) at 120 °C to produce 1 cm2 devices and electrolyte was injected. N719 and D35 electrolytes were prepared in CH3CN containing I2 (0.05 M) and guanidinium thiocyanate (0.05 M). For D35, tBu4NI (0.6 M), LiI (0.1 M) and 4-tBu-pyridine (0.5 M) were added.34 For N719, 1-methyl-3-propylimidazolium iodide (0.8 mM) and benzimidazole (0.3 mM) were added. DSC devices were light soaked in a Photosimile 200 light box which had been stabilised for 2 h before use (ESI Fig. 1 and 2†).
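The original analysis ran as a macro in Sigma Scan Pro 5. As an illustration only, the same logic (mean RGB over fixed sample patches, rescaled between the black and undyed-TiO2 controls to correct for lamp drift) can be sketched in Python with NumPy; all patch coordinates and pixel values below are invented:

```python
import numpy as np

def mean_rgb(img, box):
    """Mean (R, G, B) over one fixed sample patch of a time-lapse frame.
    img: H x W x 3 array; box: (row0, row1, col0, col1) on the numbered grid."""
    r0, r1, c0, c1 = box
    return img[r0:r1, c0:c1].reshape(-1, 3).mean(axis=0)

def normalise(rgb, white, black):
    """Rescale each channel so the black control reads 0 and the undyed
    TiO2 ("white") control reads 255, correcting for any lamp drift."""
    rgb, white, black = (np.asarray(x, float) for x in (rgb, white, black))
    return np.clip(255.0 * (rgb - black) / (white - black), 0.0, 255.0)

# Hypothetical frame holding one SQ1-like film patch plus the two controls.
frame = np.zeros((60, 20, 3))
frame[0:20] = (70.0, 200.0, 200.0)    # blue-green film: red strongly absorbed
frame[20:40] = (240.0, 240.0, 240.0)  # undyed TiO2 "white" control
frame[40:60] = (10.0, 10.0, 10.0)     # black control square

film = mean_rgb(frame, (0, 20, 0, 20))
white = mean_rgb(frame, (20, 40, 0, 20))
black = mean_rgb(frame, (40, 60, 0, 20))
print(normalise(film, white, black))  # drift-corrected RGB on a 0-255 scale
```

Running the per-patch extraction on every frame of the time-lapse gives one RGB trace per film, directly comparable across samples because all films sit in the same frame.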
Digital images were taken every 20 s using a Canon EOS 1100D camera with an 18–55 mm lens. Image analysis was carried out using a custom-built macro, which analysed each film individually.36 Devices were studied using image analysis and IV testing under standard conditions (1 Sun, AM1.5). After light soaking, selected device electrodes were separated and analysed (UV-vis, ATR-IR, MS, XPS). Other devices that were left sealed had the electrolyte removed and replaced, or had the dye desorbed with 0.1 M tBu4NOH before the device cavity was neutralized with 2 M HCl(aq) and re-dyed according to Holliman et al.37

Fig. 1 Digital image of the grid layout for the lifetime studies of dyed 2 × 2 cm TiO2 films (a) as dyed before light exposure, and after (b) 1 h and (c) 4 h light exposure. Controls were an undyed TiO2 film (top centre, position 0) next to a black square. Top row = SQ1, 2nd row = SQ2, 3rd row = N719 and bottom row = D35.

Results and discussion

Lifetimes of TiO2-sorbed dyes

The lifetimes of two squaraine DSC dyes (SQ1 and SQ2), the Ru-bipy DSC dye N719 and the triphenylamine DSC dye (D35)
Looking rst at the adsorbed SQ1 dye, this shows the fastest rate of bleaching (Fig. 2a) degradation of adsorbed SQ1. When analysing the RGB data, it is important to note that, for a blue- green dye like SQ1, most of the red components will be absor- bed whilst the blue and green components will be strongly re- ected. This can be seen in the initial RGB values (ca. 200) for blue and green for SQ1 whilst the red value is ca. 70. The rapid bleaching of SQ1 can then be seen in the RGB data by the exponential drop in the signal from the red component which, as the complimentary colour to blue, corresponds to a reduction in the blue colour of the dye. Thus, aer 2 h, the red colour has reached a value of 200, which is similar to the blue and green values (which change very little during the measurements on this dye). Hence, aer 2 h, these lms are effectively deemed colourless by this technique (conrmed by visual inspection of Fig. 1b). The rapid discolouration of TiO2-sorbed squaraine dyes in air has been reported previously.30 We have also observed this previously when making SQ1 dye-sensitized solar cells. In fact, only by using a sealed, pump dyeing system have we been able to manufacture squaraine DSC devices with high efficiency such is the rapidity of degradation.27 It should be noted here that the errors of the RGB analysis (ESI Fig. 3, ESI Table 1†) show �0.95 arbitrary units (a.u.) for red, �0.59 a.u. for green and �1.46 a.u. for blue. The higher error for blue is a result of fewer sensors in the camera for the colour blue. This is because camera sensors are designed to mimic human eyes, which have fewer cones to detect blue colour compared to red and green ones.38 SQ2 is also a blue-green dye. However, this dye shows a very different RGB prole over the 4 h test for (Fig. 2b) with a slower loss of the red intensity compared to SQ1 such that the red component only reaches a value of 200 aer ca. 4 h. 
This is interesting because SQ1 and SQ2 possess similar molecular structures, with SQ2 possessing a naphthyl unit on one of the indole moieties compared to a benzene unit for SQ1 (Scheme 1). The naphthyl unit causes a red-shift in the SQ2 absorption of ca. 50 nm relative to SQ1, whilst SQ2 (319 000 dm3 mol−1 cm−1) also possesses a slightly higher ε than SQ1 (292 000 dm3 mol−1 cm−1).26 For these two relatively unstable dyes, the absorption of lower-energy photons and more intense dye colour seem more likely explanations for slower SQ2 degradation than any direct enhancement of chemical stability arising from SQ2's naphthyl unit. By comparison, N719 is a red-brown dye and so Fig. 2c shows that, initially, the blue and green components (RGB pixel values of 140 and 165, respectively) are absorbed more than the red, which is relatively more reflected (RGB value of 180). Over the 4 h measurement, the pixel values for each of the RGB do not drop significantly and do not reach a value of 200. This suggests that little bleaching of N719 takes place over 4 h and hence that N719 is a more stable dye than SQ1 or SQ2 under these conditions. This again is in line with previous reports of N719 stability testing.10,12,39–41 The D35 data show initial RGB values of 190, 100 and 140, respectively, which is in line with this dye being red-orange rather than brown (Fig. 2d). These values remain unchanged over the 4 h test period, suggesting that this dye is also stable under these exposure conditions. In addition, the control (undyed) TiO2 film was exposed and analysed in the same way (ESI Fig. 3†). These data show that the TiO2 colour and light intensity remain stable for the duration of the test.
Looking at the data from the four dyes together, clear differences are observed in the rate of colour change between the dyes, and these changes are in line with previous stability reports for these dyes in full DSC devices. This suggests that digital imaging and analysis of TiO2-sorbed dyes, even over relatively short periods (i.e. 4 h), can be used as a rapid screening method for DSC dye lifetime.

Fig. 3 D35 device lifetime data; (a) η vs. time under 400 W m−2 light-soaking with (solid lines & circles) or without (dashed lines & crosses) a UV-filter. Devices were under a 10 Ω load (red lines) or at open (blue lines) or short circuit (green lines); (b) RGB pixel intensity data vs. time for electrolyte colour without a UV filter and (c) images of D35-dyed TiO2 at 0 h (i), after 330 h light soaking (ii), front of desorbed TiO2 electrode (iii) and back of desorbed TiO2 electrode (iv).

DSC device lifetime testing

Full DSC devices have also been tested using a combination of digital imaging and RGB analysis, I–V testing and forensic analysis. For D35 devices, looking at the RGB data first, Fig. 3 shows initial blue and green RGB values of ca. 60, in line with the absorption of these colours. However, the initial data show that the red component is relatively more reflected, with a pixel value of ca. 100, in line with the fact that D35 is a red dye. Then, between 0 and 150 h, the blue and green components drop slightly whilst the IV data show a slight increase in efficiency (Fig. 3a), which is ascribed to changes in dye:electrolyte interactions due to improved TiO2 pore filling as the devices age. Then, the blue and green pixel values remain fairly constant for the rest of the 660 h of testing. By comparison, the RGB data show a gradual change in the red component of the dye from 0 to ca. 370 h, until it reaches a value similar to the blue and green components (i.e. ca. 60), after which it remains constant.
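Because full degradation corresponds to the red trace converging on the blue and green plateau (the film turning black), that convergence point can be flagged automatically from the RGB traces. A sketch using invented traces shaped like the D35 behaviour described above (plateau ca. 60, red starting ca. 100); the decay constant is assumed, not measured:

```python
import numpy as np

def convergence_time(t, red, green, blue, tol=5.0):
    """First time at which the red trace comes within `tol` pixel counts
    of the mean blue/green level, i.e. all channels are absorbed equally
    and the film reads black. Returns None if it never converges."""
    t = np.asarray(t, float)
    plateau = 0.5 * (np.asarray(green, float) + np.asarray(blue, float))
    gap = np.abs(np.asarray(red, float) - plateau)
    hits = np.where(gap <= tol)[0]
    return None if hits.size == 0 else float(t[hits[0]])

# Illustrative traces: red decays from 100 towards the 60-count plateau.
t = np.linspace(0.0, 660.0, 67)          # one point per 10 h
blue = np.full_like(t, 60.0)
green = np.full_like(t, 60.0)
red = 60.0 + 40.0 * np.exp(-t / 120.0)   # assumed decay constant

print(convergence_time(t, red, green, blue))  # prints 250.0 for these traces
```

Applied to the real traces, such a marker would reproduce the ca. 370 h failure point read off Fig. 3 by eye, and would do so in "real time" as frames arrive.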
This means the red component is being absorbed at a similar level to the blue and green components, and the absorption of all colours of light means the sample has turned black, which is confirmed by visual analysis (Fig. 3c). Overall, these data suggest that the dye has degraded over the 370 h time period. To study this further, light soaking was continued up to 660 h to fully degrade the devices (full data are shown in ESI Fig. 4–6, ESI Table 2†). The RGB data correlate closely with the I–V data, which show a substantial loss of η between 0 and 370 h light exposure (Fig. 3a). The device testing parameters (Voc, Jsc, FF and η) are plotted against time in ESI Fig. 6a.† These data show substantial drops in Jsc between 120 and 370 h, which align closely with the drop in device efficiency over this time period. Whilst the drop in Jsc is slower for some unfiltered devices, these changes are consistent for all the devices tested up to 670 h. By comparison, the Voc drops from ca. 0.8 V, which is observed for all devices between 0 and 120 h, to ca. 0.6 V after 370 h for all unfiltered devices. However, for some UV-filtered devices the Voc increases to ca. 1.0 V after 370 h, although it should be noted that the very small Jsc for these devices means that they are not operating normally by this stage. After 660 h of testing, the Voc appears quite random, between 0.2 and 1.5 V. However, the very low performance of the devices by this time makes accurate IV measurements quite difficult. The FF data reflect this by showing an increasing spread in the data with time. Broadly, the FF values for the UV-filtered devices increase with time until the values exceed 1.0, which is the theoretical maximum. Hence, whilst data for FF > 1.0 are not real, there is an
interesting, opposing trend for the UV-unfiltered data, which drop to between 0.2 and 0.6, suggesting that the electrolyte may also be being photo-oxidized by UV with time, as has been reported previously for other dyes.42 Values for the series and shunt resistance of the devices over time have also been calculated (ESI Fig. 6b†). Initially the devices show shunt resistances (Rsh) of 1.0 to 5.0 kΩ and series resistances (Rs) of 17 to 40 Ω. Both the Rsh and Rs data start to increase between 120 and 370 h, in line with the drop in device performance, with Rs increasing to ∞ for all but one condition (UV-unfiltered under load). For an ideal photo-diode, Rsh = ∞ whilst Rs = 0, because Rs corresponds to the sum of all the device resistance elements.43 Here, the increase in Rs to ∞ suggests a complete breakdown of one or more components in the devices. After testing for 660 h, the devices were split into 2 batches for detailed analysis of the device failure mechanisms. For the first batch, the aim was to study whether the devices could be regenerated by replacing each device component in a step-wise manner. For the second group of devices, the 2 electrodes were separated and detailed analysis carried out on the different device components. Looking at the first batch of devices, the first test was to remove the existing electrolyte, wash the device cavity and add fresh electrolyte. Table 1 shows the Jsc remained the same after new electrolyte had been added, which shows that significant dye degradation had occurred during light soaking, substantially limiting light harvesting. By comparison, the FF improved slightly from 0.34 to 0.45 when fresh electrolyte was added, implying that, whilst some electrolyte degradation had taken place during light soaking, this was reversible when new I3−/I− redox couple was applied. The next device sub-component tested was the D35 dye. Fig.
3c shows the colour change of D35 from the initial red colour (i) to brown aer light soaking (ii). This is conrmed by DRUV-vis spec- troscopy (ESI Fig. 7†) which shows a peak centred at ca. 480 nm for adsorbed D35 which weakens in intensity and broadens aer light soaking (ESI Fig. 7c†). The dye was then desorbed using tBu4NOH(aq) and the desorbed solution was analysed by UV-vis spectroscopy showing almost complete disappearance of the D35 peak at ca. 450 nm indicating chromophore breakdown (ESI Fig. 8†). For the desorbed TiO2, the counter electrode side of the photo-anode shows a white surface (Fig. 3c(iii)) which is conrmed by DRUV-vis spectroscopy (ESI Fig. 7d†). By comparison, the TiO2 photo-anode closest to the working elec- trode (i.e. the side directly exposed during light soaking) is light brown (Fig. 3c(iv)) which is ascribed to photo-degraded dye. This was tested by re-dyeing the photo-anode with Table 1 IV data for a D35 DSC device light soaked at open circuit witho Device FF h (%) Aer 0 h 0.52 4.7 Aer 600 h light soaking 0.34 0.1 Aer new electrolyte 0.45 0.1 D35 dye desorbed 0.28 0.0 Re-dyed with D35 0.34 2.1 Re-sintered & re-dyed 0.30 1.4 Sustainable Energy Fuels D35 dye. Table 1 shows a substantial improvement in device performance on re-dyeing with Jsc and Voc increasing to 9.34 mA cm�2 and 0.66 V compared to 0.67 mA cm�2 and 0.28 V for the light soaked device. However, the Jsc and Voc are lower than in the pre-light soaked devices and the ll factor remains very poor (FF ¼ 0.34). This is ascribed to surface organic matter from degraded dye blocking some dye sorption sites leading to lower Jsc as conrmed by ATR infrared spectroscopy (ESI Fig. 9d†). This organic matter is also believed to increase electron recombination which lowers Voc and increases series resistance by blocking charge transfer to the working electrode which, in turn, lowers FF. 
To test this, and if the reduced Jsc and brown colouration was due to degraded dye occupying dye sorption sites on the TiO2, the photo-anode was re-sintered at 450 �C and re-dyed. However, the data show that the efficiency did not increase (h ¼ 1.4%). Instead, the Jsc dropped slightly to 7.67 mA cm�2 which may reect that repeating the sintering step reduces TiO2 surface area. Unfortunately this would result in fewer dye sorption sites which hides any device improvement which might result from removing degraded dye. In fact, the biggest problem for this device is the low ll factor (FF ¼ 0.30) which is due to the loss of Pt from the counter electrode as shown by XPS analysis (ESI Table 3†). For the second batch of devices, the electrolyte was studied by UV-vis spectroscopy (ESI Fig. 10–12†) showing that UV ltered devices contained 53.0 mM I3 � whilst devices without UV ltering contained 42.5 mM I3 �. This conrmed the image analysis which suggested the electrolyte had not notably changed colour over 660 h. This also conrmed that I3 � loss is not the main reason for D35 device failure. D35 dye was then desorbed from the TiO2 and analysed by thin layer chroma- tography (TLC) and mass spectrometry (MS). TLC shows a signicant change in Rf value for degraded D35 dye compared to pristine D35 (ESI Fig. 13†). This change is similar for D35 degraded with or without UV ltering, suggesting UV light does not affect the degradation pathway. From the MS data, whilst pristine D35 shows an intense molecular ion at ca. 861 amu in line with the literature,44 the aged and desorbed dyes show only a trace of molecular ion and a noticeable loss of higher mass fragments (ESI Fig. 14†). The predominant ion remaining is at ca. 392 amu which corresponds to the central moiety of the dye aer the loss of phenyl ether units from the triphenylamine donor and the nitrile from the linker group. 
XPS has been used to study the surfaces of working and counter electrodes of D35 devices light soaked for 660 h, which had either been ethanol washed or had the dye desorbed with tBu4NOH, and the data correlated with the NIST database.45

Fig. 4 (a) η vs. time for N719-dyed DSC devices light-soaked under a Dyesol® UPTS lamp (400 W m-2) either with (solid lines & circles) or without (dashed lines & crosses) a UV-filter. Devices were held under a 10 Ω load (red lines), open (blue lines) or short circuit (green lines); (b) RGB pixel intensity data vs. time for electrolyte colour in devices without a UV filter; and (c) device images.

This journal is © The Royal Society of Chemistry 2017. http://dx.doi.org/10.1039/C7SE00015D. Published on 16 January 2017.

As expected for TiO2 working electrodes (WE), the solvent washed and dye desorbed samples show Ti 2p signals at 458.3 eV with levels of 20.1 ± 0.6 at% and 22.9 ± 0.8 at%, respectively. These samples also show O 1s peaks at 529.5 eV and 530.8 eV consistent with TiO2 and organic O environments. The increase in the Ti signal is mirrored for O, which increases from 45.4 ± 1.0 at% to 52.0 ± 1.1 at%, in line with dye desorption freeing up TiO2 sorption sites. Both samples show a small N 1s signal. The peak at 399.5 eV is ascribed to surface adsorbed tBu-pyridine observed on the solvent washed sample.46 After dye desorption, the nitrogen peak is shifted to 402.2 eV, which is more consistent with quaternary ammonium salts such as tBu4N+ used as the base for dye desorption. The other major species is C 1s at 284.8 eV, which drops from 33.9 ± 1.4 at% to 23.8 ± 1.7 at% after dye desorption, in line with dye removal from the TiO2 surface.
For the counter electrodes (CE), a Sn 3d signal at 486.5 eV is observed for the FTO substrate, which drops from 15.7 to 6.6 at% on dye desorption. Two O 1s signals are observed at ca. 532.0 eV for SnO2 and organic O, which drop from 35.8 to 27.5 ± 6.2 at% on dye desorption. The high error reflects highly variable peak intensities. Platinum (Pt 4f, 70.5 eV) also drops, from 5.6 to 1.8 ± 0.3 at%, on base treatment. The other main signals are for organic C and N, with 1s peaks at 284.8 and 401.8 eV, respectively. However, these signals increase on dye desorption: C from 36.0 to 55.4 at% and N from 2.9 to 3.4 ± 0.6 at%. These data can be explained by the action of the tBu4NOH desorbing dye from the working electrode and re-depositing this onto the CE as the base is pumped through the device cavity. At the same time, some Pt particles are removed while a number of tBu4N+ ions adsorb to the CE surface. In summary, whilst the XPS clearly shows the changes which occur during dye desorption, the data suggest the WE and CE are not the main reason for the drop in device function. This still suggests that dye degradation is the main problem for D35 devices. N719 devices were also studied as a control experiment to validate the digital imaging and analysis method, by light soaking under a Dyesol UPTS lamp for 2500 h with regular I-V device testing using a solar simulator to track degradation. As shown by the device efficiency data in Fig. 4a, without UV filtration, DSC device efficiency declines between 500 and 1000 h. The same devices were also digitally imaged (ESI Fig. 15-17†) and the RGB intensity data were analysed. These data show very little change in N719 colour for DSC devices regardless of whether they had a UV-filter or not. By comparison, the presence of a UV filter had a considerable influence on electrolyte colour. ESI Fig. 16b† shows a rapid loss of blue colour for the non UV-filtered devices up to ca. 700 h, which is not observed when a UV filter is in place (ESI Fig.
16c†). The loss of electrolyte colour correlates strongly with the loss of device efficiency in UV-unfiltered devices. This confirms that digital imaging can be used as a method to study DSC device lifetimes. Looking at reasons for the drop in device performance, UV degradation of the electrolyte has previously been reported by Carnie et al.,42 who observed that iodine-based electrolytes are prone to failure after extended light exposure without UV filtration. Fig. 4a also shows an increase in efficiency of open circuit cells at 200 h, but not in the cells at short circuit or under load. This is ascribed to stabilisation of the cells as they age. By comparison, the cells at short circuit show no efficiency increase, which suggests degradation dominates for these devices. The cells at open circuit show very little degradation after 2500 h with a UV filter. It is important to note that, when studying device degradation, the first component to fail is the one which determines the overall lifetime. For UV-filtered N719 devices, UV-vis analysis of the aged electrolyte shows that the I3- concentration is unchanged over 2500 h (ESI Fig. 12†). However, UV-vis analysis of N719 electrolyte without UV filtering confirms literature reports showing that I3- drops from 50 mM to 20.7 mM, which validates the colour change shown by the image analysis data. However, I3- is still present, which suggests the situation is more complex than a complete loss of the redox couple. Considering the different I-V parameters (ESI Fig. 17a†), the Jsc for UV-unfiltered N719 devices closely follows the overall efficiency, suggesting this is the key factor affecting these devices.
This is backed up by the Voc, which shows a much more gradual drop over the 2500 h testing period, whilst the FF actually increases for the UV-unfiltered devices between 500 and 1000 h. This suggests that there is sufficient I3- remaining to support the photo-current in these devices. In support of these assertions, the Rsh and Rs data show relatively small changes until after 1000 h of testing (ESI Fig. 17b†). The Rsh increases for UV-filtered devices, but not until after 1000 h, whilst the Rs remains below 40 Ω for all treatments except the UV-unfiltered, open circuit devices, where Rs does reach ca. 70 Ω beyond 2000 h. However, none of the Rs values approach anywhere close to those of the D35 devices, which is in line with all the N719 devices still remaining operational, even though some do drop to η ≈ 1.0%. Mass spectrometry (ESI Fig. 18†) further verifies these data because the expected molecular ion for N719 dye47 is still observed at 593.2450 amu (z = 2) even after 2500 h of light soaking, either with or without UV filtering, confirming that fully operational N719 dye is still present in all devices. Thus, the observed N719 device failure is attributed to a combination of partial dye and redox couple degradation, which is strongly influenced by UV exposure.

Conclusions

This work has shown that digital photography combined with RGB colour intensity analysis can be a powerful tool to study solar cell device lifetimes and potential degradation mechanisms (e.g. light harvester versus electrolyte). Because the method relies on digital photography and software-based data analysis, it is hugely versatile, being applicable to solar cell devices, device sub-components and indeed any sample, material or device that changes colour with time. It is also relatively low cost, requiring only a routine SLR digital camera and a computer. The method is applicable at a wide range of length-scales, from laboratory-scale devices (ca. 1 cm2) to full modules (e.g. 1-2 m2).
In this context, the method can also be used to study large numbers of samples or devices simultaneously. In this paper, we have studied up to 21 DSC devices at a time, but this method means that the main limitation on lifetime testing is the number of devices that can be manufactured rather than the testing method itself. In fact, the main limitation on the sample size or number of samples is the space required versus the camera focal length and spatial resolution. In reality, because very high pixel density digital cameras (>10 mega-pixel) can routinely be purchased at low cost, it is only really space that should limit image analysis. In addition, because the method is fully digitised, it can be easily automated to run on time-scales from seconds to years. For solar cell devices, the main issues currently known to affect lifetimes are light, temperature and humidity exposure. We have shown that this imaging method can be applied to indoor or simulated light exposure and, with a weather-proof camera, it should also be suitable for outdoor testing with appropriate compensation for changing light intensities, which can affect colour. It should also be possible to use digital imaging to monitor accelerated testing (e.g. very high light intensities and/or in combination with temperature and humidity). As such, the method has the potential to be developed as a testing standard for solar cells and modules. Finally, we have also shown that the method can be used to study degradation mitigation strategies (e.g. UV-sensitive electrolyte can be stabilised using a UV filter) to extend DSC lifetimes by prolonging the life of device sub-components. Thus, the colour of the dyed TiO2 has been shown to be an effective indicator of dye and/or electrolyte lifetime, and these data have been corroborated by device performance data and ex situ device analysis using IR, UV-vis and X-ray photoelectron spectroscopy as well as mass spectrometry.
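The RGB colour-intensity tracking at the heart of the method can be illustrated with a minimal sketch. This is an illustrative reconstruction, not the authors' software; the synthetic frames and the region-of-interest handling are assumptions:

```python
import numpy as np

def mean_rgb(image, roi=None):
    """Mean R, G, B intensities of an image (H x W x 3 uint8 array),
    optionally restricted to a region of interest (e.g. the electrolyte-
    filled part of a device), given as a tuple of slices."""
    if roi is not None:
        image = image[roi]
    return tuple(float(image[..., c].mean()) for c in range(3))

# Synthetic stand-in for a time series of device photographs:
# a fading blue channel mimics the loss of electrolyte colour.
frames = [np.full((4, 4, 3), (120, 60, b), dtype=np.uint8) for b in (200, 150, 90)]
blue_trace = [mean_rgb(f)[2] for f in frames]
print(blue_trace)  # [200.0, 150.0, 90.0]
```

In practice each frame would come from a calibrated photograph of the device array, and the per-channel traces would be plotted against light-soaking time, as in Fig. 4b.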
These data show that D35 dye degrades under light exposure in the presence of iodine electrolytes, with or without UV filtering. However, for N719 devices, the dye and iodine-based electrolyte partially fail under UV exposure, but UV filtering can successfully mitigate against this. Comparing data from both dyes suggests that open circuit is the most suitable device arrangement for degradation testing. Hence, in future, simple colour monitoring could be used for rapid screening of PV sub-component and device lifetimes.

Acknowledgements

We gratefully acknowledge funding from EPSRC CASE and Tata Steel (LF), the Welsh Government for Sêr Cymru (EWJ, PJH, RJH, RA) and NRN (CPK), EPSRC EP/M015254/1 (AC, JM), EPSRC/InnovateUK EP/I019278/1 (JS), the EPSRC UK National Mass Spectrometry Facility at Swansea University and Pilkington-NSG for TEC™ glass.

Notes and references

1 B. O'Regan and M. Grätzel, Nature, 1991, 353, 737.
2 A. Hagfeldt, G. Boschloo, L. Sun, L. Kloo and H. Pettersson, Chem. Rev., 2010, 110, 6595.
3 A. Yella, C.-L. Mai, S. M. Zakeeruddin, S.-N. Chang, C.-H. Hsieh, C.-Y. Yeh and M. Grätzel, Angew. Chem., Int. Ed., 2014, 53, 2973.
4 Z. Yao, M. Zhang, R. Li, L. Yang, Y. Qiao and P. Wang, Angew. Chem., Int. Ed., 2015, 54, 5994.
5 Z. Yao, M. Zhang, H. Wu, L. Yang, R. Li and P. Wang, J. Am. Chem. Soc., 2015, 137, 3799.
6 K. Kakiage, Y. Aoyama, T. Yano, K. Oya, T. Kyomen and M. Hanaya, Chem. Commun., 2015, 51, 6315.
7 K. Kakiage, Y. Aoyama, T. Yano, K. Oya, J. Fujisawa and M. Hanaya, Chem. Commun., 2015, 51, 15894.
8 A. Hinsch, J. M. Kroon, R. Kern, I. Uhlendorf, J. Holzbock, A. Meyer and J. Ferber, Progress in Photovoltaics: Research and Applications, 2001, 9, 425.
9 J. Kawakita, Science & Technology Trends, 2010, 5, 70.
10 R.
Harikisun and H. Disilvestro, Sol. Energy, 2011, 85, 1179.
11 M. Grätzel, C. R. Chim., 2006, 9, 578.
12 P. M. Sommeling, M. Späth, H. J. P. Smit, N. J. Bakker and J. M. Kroon, J. Photochem. Photobiol., A, 2004, 164, 137.
13 G. Xue, Y. Guo, T. Yu, J. Guan, X. Yu, J. Zhang, J. Liu and Z. Zou, Int. J. Electrochem. Sci., 2012, 7, 1496.
14 M. Giustini, D. Angelone, M. Parente, D. Dini, F. Decker, A. Lanuti, A. Reale, T. Brown and A. di Carlo, J. Appl. Electrochem., 2012, 43, 209.
15 T. Bessho, E. Yoneda, J.-H. Yum, M. Gugliemi, I. Tavernelli, H. Imai, U. Rothlisberger, M. K. Nazeeruddin and M. Grätzel, J. Am. Chem. Soc., 2009, 131, 5930.
16 S. Mastroianni, I. Asghar and K. Miettunen, Phys. Chem. Chem. Phys., 2014, 16, 6092.
17 T. Horiuchi, H. Miura and S. Uchida, Chem. Commun., 2003, 3036.
18 A. Connell, P. J. Holliman, M. L. Davies, C. D. Gwenin, S. Weiss, M. B. Pitak, P. N. Horton, S. J. Coles and G. Cooke, J. Mater. Chem. A, 2014, 2(11), 4055.
19 A. Connell, P. J. Holliman, E. W. Jones, L. Furnell, C. Kershaw, M. L. Davies, C. D. Gwenin, M. B. Pitak, S. J. Coles and G. Cooke, J. Mater. Chem. A, 2015, 3, 2883.
20 S. Hwang, J.-H. Lee, C. Park, H. Lee, C. Kim, C. Park, M.-H. Lee, W. Lee, J. Lee, K. Kim and C. Kim, Chem. Commun., 2007, 4887.
21 P. J. Holliman, M. Mohsen, A. Connell, M. L. Davies, K. Al-Salihi, M. B. Pitak, G. J. Tizzard, S. J. Coles, R. W. Harrington, W. Clegg, C. Serpa, O. H. Fontes, C. Charbonneau and M. J. Carnie, J. Mater. Chem., 2012, 22(26), 13318.
22 D. Joly, L. Pellejà, S. Narbey, F. Oswald, J. Chiron, J. N. Clifford, E. Palomares and R. Demadrille, Sci. Rep., 2014, 4, 4033.
23 H. Tanaka, A. Takeichi, K. Higuchi, T. Motohiro, M. Takata, N. Hirota, J. Nakajima and T. Toyoda, Sol. Energy Mater. Sol. Cells, 2009, 93, 1143.
24 J. Gao, W. Yang, M. Pazoki, G. Boschloo and L. Kloo, J. Phys. Chem. C, 2015, 119, 24704.
25 A. Burke, L. Schmidt-Mende, S. Ito and M. Grätzel, Chem. Commun., 2007, 234.
26 T. Geiger, S. Kuster, J.-H. Yum, S.-J. Moon, M. K. Nazeeruddin, M. Grätzel and F. Nüesch, Adv. Funct. Mater., 2009, 19, 2720.
27 P. J. Holliman, M. L. Davies, A. Connell, B. Vaca Velasco and T. M. Watson, Chem. Commun., 2010, 46, 7256.
28 S. Paek, H. Choi, C. Kim, N. Cho, S. So, K. Song, M. K. Nazeeruddin and J. Ko, Chem. Commun., 2011, 47, 2874.
29 C. Qin, Y. Numata, S. Zhang, X. Yang, A. Islam, K. Zhang, H. Chen and L. Han, Adv. Funct. Mater., 2014, 24, 3059.
30 T. Wu, T. Lin, J. Zhao, H. Hidaka and N. Serpone, Environ. Sci. Technol., 1999, 33, 1379.
31 A. Fujishima, T. N. Rao and D. A. Tryk, J. Photochem. Photobiol., C, 2000, 1, 1.
32 C. Karunakaran and R. Dhanalakshmi, Sol. Energy Mater. Sol. Cells, 2008, 92, 1315.
33 M. Asghar, M. Kati, M. Simone, H. Janne, H. Vahlman and P. Lund, Sol. Energy, 2012, 86, 331.
34 D. P. Hagberg, X. Jiang, E. Gabrielsson, M. Linder, T. Marinado, T. Brinck, A. Hagfeldt and L. Sun, J. Mater. Chem., 2009, 19, 7232.
35 L. Yang, U. B. Cappel, E. L. Unger, M. Karlsson, K. M. Karlsson, E. Gabrielsson, L. Sun, G. Boschloo, A. Hagfeldt and E. M. J. Johansson, Phys. Chem. Chem. Phys., 2012, 14, 779.
36 T. Watson, P. Holliman and D. Worsley, J. Mater. Chem., 2011, 21, 4321.
37 P. J. Holliman, K. J. Al-Salihi, A. Connell, M. L. Davies, E. W. Jones and D. A. Worsley, RSC Adv., 2014, 4(5), 2515.
38 H. Hofer, J. Carroll, J. Neitz, M. Neitz and D. R. Williams, J. Neurosci., 2005, 25, 9669.
39 F. Nour-Mohammadi, S. D. Nguyen, G. Boschloo, A. Hagfeldt and T. Lund, J. Phys. Chem. B, 2005, 109, 22413.
40 H. T. Nguyen, H. M. Ta and T. Lund, Sol. Energy Mater. Sol. Cells, 2007, 91, 1934.
41 E. Leonardi, S. Penna, T. M. Brown, A. Di Carlo and A. Reale, J. Non-Cryst. Solids, 2010, 356, 2049.
42 M. Carnie, D. Bryant, T. Watson and D. Worsley, Int. J. Photoenergy, 2012, 1.
43 L. Han, N. Koide, Y. Chiba, A. Islam and T. Mitate, C. R. Chim., 2006, 9, 645.
44 H. Ellis, V. Leandri, A. Hagfeldt, G.
Boschloo, J. Bergquist and D. Shevchenko, J. Mass Spectrom., 2015, 50, 734.
45 NIST, X-ray Photoelectron Spectroscopy Database, Version 4.1, National Institute of Standards and Technology, Gaithersburg, 2012, http://srdata.nist.gov/xps/.
46 S. Yu, S. Ahmadi, P. Palmgren, F. Hennies, M. Zuleta and M. Göthelid, J. Phys. Chem. C, 2009, 113, 13765.
47 R. Buscaino, C. Baiocchi, C. Barolo, C. Medana, M. Grätzel, M. K. Nazeeruddin and G. Viscardi, Inorg. Chim. Acta, 2008, 361, 798.

Digital imaging to simultaneously study device lifetimes of multiple dye-sensitized solar cells. Electronic supplementary information (ESI) available.
See DOI: 10.1039/c7se00015d

work_cvirv5gyxfcati3ijhfikmplwe ---- Microsoft Word - TR _was 3.10 Economic analysis of PF_.doc

An Economic Analysis of the Potential for Precision Farming in UK Cereal Production

R. J. Godwin1; T. E. Richards1; G. A. Wood1; J. P. Welsh1; S. M. Knight2
1 Cranfield University at Silsoe, Silsoe, Bedfordshire MK45 4DT, UK; e-mail of corresponding author: r.godwin@cranfield.ac.uk
2 Arable Research Centres, Shuttleworth Centre, Old Warden Park, Biggleswade, Bedfordshire SG18 9EA, UK

Biosystems Engineering, Volume 84, Issue 4, April 2003, Pages 533-545.

Abstract. The results from alternative spatial nitrogen application studies are analysed in economic terms and compared to the costs of precision farming hardware, software and other services for cereal crops in the UK. At current prices the benefits of variable rate application of nitrogen exceed the returns from a uniform application by an average of £22 ha-1. The cost of the precision farming systems ranges from £5 ha-1 to £18 ha-1 depending upon the system chosen for an area of 250 ha. The benefits outweigh the associated costs for cereal farms in excess of 80 ha for the lowest price system to 200-300 ha for the more sophisticated systems.
To be cost effective, a farmed area of 250 ha of cereals, where 30% of the area will respond to variable treatment, requires an increase in crop yield in the responsive areas of between 0.25 t ha-1 and 1.00 t ha-1 (at £65 t-1) for the basic and most expensive precision farming systems respectively.

1 Introduction

The potential benefits of managing crops using precision farming techniques include:
(1) The economic benefit of an increase in crop yield, and/or a reduction in inputs, i.e. seed, fertiliser and agrochemicals, and
(2) The environmental benefit from a more precise targeting of agricultural chemicals.
Over the past decade the technology has become commercially available to enable the farmer both to spatially record the yield from a field (Murphy et al., 1995; Birrell et al., 1996; Stafford et al., 1996) and to vary both seed and fertiliser rates on a site-specific basis. Significant advances have also been made (Miller & Paice, 1998) to permit the spatial control of weeds on a site-specific basis by varying the dose rate of herbicides depending upon the weed density. However, the benefits of either an increase in yield and/or a reduction in fertilisers and agrochemicals have to be offset against the costs of investing in specialist equipment to enable yield maps to be produced and variable applications to be implemented. The aim of this paper is to address these issues. A range of potential benefits has been reported from various combinations of different variable application rate practices. Earl et al. (1996) postulated that a potential benefit of £33.68 ha-1 could be possible by combining variable nitrogen application and targeting subsoiling to headlands for a crop of wheat in the UK, when wheat prices were £125 t-1. Increases in returns exceeding £57 ha-1 ($80 ha-1), when maize seed rates were varied according to soil depth, were reported by Barnhisel et al. (1996).
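The cost/benefit trade-off set out in the abstract reduces to a simple break-even calculation: the extra yield on the responsive area must cover the annualised system cost spread over the whole farm. A minimal sketch using the abstract's own figures (function name and rounding are illustrative):

```python
def breakeven_yield_increase(cost_per_ha, responsive_fraction, grain_price_per_t):
    """Extra yield (t/ha) required on the responsive part of the farm for the
    variable-rate benefit to cover an annualised system cost expressed per
    hectare of the whole farm. Note the farm area cancels out."""
    return cost_per_ha / (responsive_fraction * grain_price_per_t)

# Abstract's scenario: 30% of the area responsive, grain at £65/t
print(round(breakeven_yield_increase(5.0, 0.30, 65.0), 2))   # basic system: 0.26 t/ha
print(round(breakeven_yield_increase(18.0, 0.30, 65.0), 2))  # dearest system: 0.92 t/ha
```

These reproduce the abstract's 0.25-1.00 t ha-1 range to the precision quoted; the farm area cancels because both the cost and the benefit are expressed per hectare of the whole farm.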
Measured benefits in the range of -£11.14 to £74.09 ha-1 (-$6.37 to $42.38 ac-1) were reported by Snyder et al. (1998) on irrigated maize in the USA. Schmerler & Basten (1999) measured an average benefit of £38.60 ha-1 (60 DM ha-1) when growing wheat in a farm scale trial where both seed and agrochemical rates were varied. Studies conducted by James et al. (2000) investigated the benefits of using historic yield data as a guide to varying nitrogen application, for winter barley on a field with both clay loam and sandy loam soil types. Data from the first year's results, i.e. the 1997 harvest reported by Godwin et al. (1999), indicated that an economic benefit of £27.60 ha-1 could be possible. This analysis was based upon spot measurements of the yield rather than those for the complete zone in either soil type; however, it is generally indicative of the potential benefits. A further part of the experiment applied nitrogen based upon the calculated value of the most economic rate of nitrogen (MERN) and nitrogen rate for
Earl et al. (1996) estimated the costs of yield map production and the ability to apply fertiliser on a site-specific basis to be £10.46 ha-1 for an arable area of 250 ha, at 7% interest rate amortised over a 5 year period in the UK. A later estimate of £7.81 ha-1 was made by James (1998), for a similar system, the difference in cost being the reduction in hardware and software cost over the 2 year period and the removal of the DGPS costs. In the same year studies, in the USA, by Snyder et al. (1998) estimated the cost of yield mapping and variable rate equipment, for nitrogen application, for two fields of 49 and 64 ha as £8.50 ha-1 ($11.88 ha-1). Studies by Schmerler and Basten (1999) reported costs of £15.46 ha-1 (49 DM ha-1) for a 7,100 ha German farm, of which 3,900 ha was suitable for site-specific management. The major reason for the higher figures 5 reported by Schmerler and Basten (1999) was the cost of the equipment to variably apply herbicides in addition to the seed rate and fertiliser. This paper: (1) examines the increase in revenues that have been achieved through the use of precision farming practices during a three-year study of 5 fields in cereal production in Southern England, where: (a) the effect of variable nitrogen application to both barley (Welsh et al., 2001a) and wheat (Welsh et al., 2001b) using historic yield map and real time shoot density data have been investigated, (b) significant yield improvements have been made using “real time” nitrogen management in the wheat crop based upon variations in crop canopy (Wood et al., 2001a), and (c) improvements to common crop management problems (e.g. waterlogging, rabbit damage and fertiliser application error, Wood et al. (2000a)) can be identified and corrected. 
(2) estimates of the costs of upgrading farm equipment, at the time of purchase, to a level that enables precision farming techniques to be practised, and (3) compares the costs/benefits and analyses the potential returns from adopting precision farming technology for given farm sizes and levels of variability. 2 Potential benefits from adopting precision farming The potential benefits considered in this paper arise from: (1) studies to determine the spatially optimum nitrogen application rate using a) historic yield, and shoot density (Welsh et al., 2001a & b for barley and wheat respectively), and 6 b) canopy structure (Wood et al., 2001a for the wheat crop), and (2) identifying and correcting common site-specific problems associated with crop management. 2.1 Historic yield Variation of nitrogen according to historic yield trials were conducted over a period of three years in three fields, Trent Field, Twelve Acres and Far Sweetbrier, using both variable and uniform application rate strategies (Welsh et al., 2001a & b). The variable rate strategies were divided up to give the following treatments: 2.1.1 Historic yield 1 (HY1). Where the high yield zone received 25 - 30% more nitrogen; the average yield zone received the standard nitrogen rate; and the low yield zone received 25 - 30% less nitrogen. 2.1.2 Historic yield 2 (HY2). Where the high yield zone received 25 - 30% less nitrogen; the average yield zone received the standard nitrogen rate; and the low yield zone received 25 - 30% more nitrogen. A further two alternative strategies, HY3 and HY4, could be extracted from the data. HY3 received 25 - 30% more nitrogen on the high yielding zone whilst the remainder of the field received the standard application rate and HY4 received 25 - 30% more nitrogen on the low yielding zone with the standard application rate applied on the other areas. 
7 The economic consequences, excluding the costs of variable application equipment, achieved from the HY1, 2, 3 and 4 management strategies, with grain at £65 t-1 and nitrogen at £0.30 kg-1, are presented in Table 1. It can be seen from Table 1 that HY1 and HY2 produced negative returns in 8 and 6 of the 9 trials respectively. Twelve Acres was particularly unresponsive to variable rate application of nitrogen with only the 1998/99 HY4 strategy producing a positive benefit of £4.55 ha-1. The mean benefit recorded in Trent field for HY3 and HY4 strategies of £9.67 and £14.15 ha 1 respectively were a consequence of the crop being grown for malting in 1997/98 and 1998/99. Therefore, to retain the malting quality premium of £10 t-1 the applied nitrogen rate was below that required to produce maximum yield. The mean benefit for each strategy over all fields and all years is presented in Table 2. The following conclusions can be drawn from the data: (1) varying nitrogen fertiliser application according to historic yield strategies HY1 and HY2 has not, in these field trials, produced a positive benefit, (2) the HY3 strategy, of feeding only the higher yielding area, produced a small positive benefit, and (3) feeding the low yielding areas HY4 produced an overall benefit of £7.59 ha-1. 2.2 Shoot density approach Nitrogen application rate was varied according to the density of shoots (i.e. number of shoots m-2) in parallel with the historic yield studies described above (Welsh et al., 2001a and b). The shoot densities being determined in near real time using NDVI data for the 8 field from airborne digital photography calibrated against a number of spot measurements of shoot density (Wood et al, 2001b). Two application strategies were adopted: (1) Shoot density 1 (SD1). Where the areas of high shoot density received 30% more nitrogen; the areas of average shoot density received the standard nitrogen rate; and the areas of low shoot density received 30% less nitrogen. 
(2) Shoot density 2 (SD2). Where the areas of high shoot density received 30% less nitrogen; the areas of average shoot density received the standard nitrogen rate; and the areas of low shoot density received 30% more nitrogen. At the outset of the experiment high and low shoot densities were terms used to express relative shoot densities compared with the field average, which changed from one season to the next. In the second and third years, however, there was little variation in the shoot density along the experimental strip in Twelve Acre, hence a uniform application was made and in Far Sweetbrier there were no relatively high shoot density zones. This naturally led into the more objective approach to nitrogen management based upon absolute values of shoot density using crop canopy management reported in the next section. The economic consequences of managing nitrogen using the shoot density approach are summarised in Table 3, which shows that both SD1 and SD2 strategies produced an overall small positive benefit over the three year period in Trent field. The apparent reversal of crop response to SD1 and SD2 strategies between the 97/98 and both 98/99 and 1999/2000 seasons, in Trent Field, may be explained by the method used to determine the thresholds between high and low shoot densities. The average shoot densities were c. 1400 shoots m-2, c. 950 shoots m-2 and c. 1070 shoots m-2 in the 9 1997/98, 1998/99 and 1999/2000 seasons respectively. If the criteria for defining shoot density in 1997/98 had been applied to 1998/99 and 1999/2000 seasons the high density zone in both seasons would have been re-classified as low and the SD2 strategy would have been the most effective. The variation in establishment was probably due to adverse weather conditions in the 1998/99 season. 
Far Sweetbrier performed marginally above standard farm practice in 1997/98 and below standard for the remaining two years for the SD1 regime and significantly better than the standard farm practice for the SD2 regime in the final two years, with a small reduction in 1997/98. There were no high shoot density areas in the 1998/99 and 1999/2000 seasons, therefore, the SD1 strategy only consisted of removing nitrogen from the low shoot density areas, and hence the poor return in these two seasons. Whereas the SD2 strategy added nitrogen to the low shoot dentistry areas improving their response. The average return of £29.90 ha-1 recorded in Far Sweetbrier for the SD2 strategy implies that this method of management could have advantages. This was true especially in the final 2 years of the experiment when due to the absence of a relatively high shoot density zone, the fertiliser management regime consisted of only applying nitrogen to the relatively low shoot density areas. 2.3 Crop canopy approach Two fields, growing winter wheat, were used to conduct variable nitrogen application trials to investigate potential benefits that may be gained from managing the crop canopy in real time by varying nitrogen application (Wood et al., 2001a). Each field plot comprised a series of either 20 m or 24 m wide field length strips with differing seed rates 10 in alternate strips. Nitrogen was applied uniformly to a 10 m or 12 m wide longitudinal split of the strip at a rate representative of the standard farm practice whilst the remaining 10 m to 12 m strip section received nitrogen at varying rates depending on crop canopy structure. Canopy structure was determined using aerial digital photography (ADP) techniques to measure shoot density and green area index (GAI) as discussed in Wood et al. (2000b). 
Nitrogen application rate was determined according to whether the canopy was observed to be “on-target”, “below-target” or “above-target” according to benchmark figures set out in the Wheat Growth Guide (HGCA, 1998). The differences in the economic performance of variable and standard nitrogen application for all seed rates are summarised in Table 4. It can be seen from Table 4 that the benefits of variably applying nitrogen in comparison with uniform application, based on the standard farm practice, range from -£1 ha-1 (Far Highlands @ 195 plants m-2) up to £60 ha-1 (Onion Field @ 200 plants m-2). The mean benefit over all seed rates in Onion Field was £36.50 ha-1. This benefit is marginally more than the mean benefit obtained from SD2 in Far Sweetbrier, which over a period of 3 seasons averaged a £33.58 ha-1 improvement over the uniform rate of N application for wheat. The overall mean improvement in Onion Field and Far Highlands was £22 ha-1.

3 Estimation of the costs of precision farming systems

3.1 Precision farming monitoring and control systems

A full precision farming (PF) system comprises hardware and software to enable variations in crop yield to be mapped and crop related treatments to be variably applied on a site-specific basis. In reviewing the literature it is apparent that the cost of practising precision farming techniques is dependent on: (1) the level of technology purchased, i.e. a full or partial system, (2) depreciation and current interest rates, and (3) the area of crops managed. To determine a realistic cost for UK conditions an analysis was conducted, based on prices quoted by the main suppliers of precision farming equipment in January 2001. It was apparent that precision farming systems can be split into four main classes, as shown in Table 5. Where: Class 1. Comprises a fully integrated system from an original equipment manufacturer (OEM). Class 2. Comprises a full system from a specialist manufacturer. Class 3.
Comprises a full system, which is a combination of OEM and specialist manufacturer. Class 4. Comprises a basic system from an OEM, comprising the basic elements of a system. Most new combine harvesters sold in the UK can be fitted with yield mapping hardware; however, the degree of integration between the yield mapping system and other components of the combine operating and performance monitoring system varies between manufacturers. The systems range in functionality from fully integrated yield mapping and combine performance monitoring systems, which can be removed from the combine and fitted to tractors or sprayers and include sub-metre DGPS (Class 1 at £11,363), through to low cost partial systems that provide full yield mapping functionality but reduced application rate control functions (Class 4 at £4,500). The remaining two classes are Class 2 (at £14,100), a full precision farming system produced by specialist manufacturers, and Class 3 (at £16,150), which combines parts of Classes 1 and 2: an OEM integrated yield and combine performance monitor with components from specialist manufacturers mounted in either tractors or spray vehicles for variable application rate control. This has the added advantage that the parallel systems enable both harvesting and application control to be undertaken at the same time. The basic system (Class 4 at £4,500), the least cost option, uses a non-differential GPS to provide position information to ±10 m. This provides the operator with the capability to produce yield maps of a slightly lower resolution than those produced using full precision farming systems, but probably sufficiently accurate for most management tasks. Variable application rates are achieved through changing the tractor forward speed whilst maintaining a constant material flow from the applicator in use.
The speed control is achieved by the operator manually attempting to match a target speed displayed on the on-board vehicular computer screen. This provides a limited range over which the application rate can be varied, dependent on the tractor transmission type, but does permit farmers to make initial ventures into precision farming management without a large capital outlay.

3.2 Assumptions used and the basis of the cost calculations

The costs are based on the following assumptions: (1) one set of variable-application crop treatment equipment, i.e. PF-system, can ‘farm’ an identical area to that harvested by the combine, (2) operations involving variable application equipment are not conducted at the same time as combine harvesting (with the exception of Class 3), (3) when multiple PF-systems are used the total area would be divided equally between units, and (4) the farm office would contain a PC capable of running yield and application mapping software; therefore, no allowance has been made in the following sections for the purchase of a new PC. The costs comprise both the capital and associated cost components shown in Table 6.

3.2.1 Combine hardware costs

These reflect the cost of upgrading a new machine, at the time of purchase, from standard specification to a level where yield mapping can be conducted. This includes the yield mapping system, a DGPS unit and software to enable the production of yield maps using the farm PC.

3.2.2 Tractor hardware costs

Costs include the extra hardware required to upgrade a tractor to allow the fitting of either the reprogrammed combine unit or a separate platform from a specialist manufacturer. Generally the tractor hardware consists of mounting brackets and wiring harnesses to connect the on-board vehicular computer to the systems and sensors on the tractor, i.e. a radar unit for measuring true speed.
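The speed-matching rate control used by the basic system can be sketched as follows. With a constant metered flow, the area covered per minute is width × speed / 600 hectares, so the target speed follows directly from the required rate; the flow, width and rate figures in the example are illustrative only.

```python
def target_speed_kmh(flow_kg_min, width_m, rate_kg_ha):
    """Forward speed needed to deliver rate_kg_ha when the applicator
    meters a constant flow_kg_min over a width_m swath.
    Area covered per minute = width * speed / 600 ha, so
    rate = 600 * flow / (width * speed)."""
    return 600.0 * flow_kg_min / (width_m * rate_kg_ha)

# e.g. 80 kg/min through a 24 m spreader at a 200 kg/ha target:
speed = target_speed_kmh(80, 24, 200)  # 10.0 km/h
```

Halving the required rate would double the target speed, which illustrates why the transmission type limits the range over which the rate can be varied.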
3.2.3 Implement hardware costs

Implement hardware covers the control systems required to enable application rates to be varied on a site-specific basis according to some predetermined strategy, i.e. an application map. In the case of integrated removable and reprogrammable yield mapping systems, the implement hardware generally consists of the implement-mounted control units, i.e. motors, metering mechanisms and connecting cables. When non-reprogrammable yield mapping systems are fitted to the combine, this cost also includes those parts of a control platform required to enable application rates to be varied. Implement costs used in this research have been calculated using the extra cost incurred when purchasing a new precision farming equipped model of a particular machine compared to the cost of purchasing the same model in standard configuration.

3.2.4 Software

Software is used on the farm PC to process the raw yield data to produce both yield maps and application maps for controlling the application of seed and chemicals on a site-specific basis. Software is usually included in the price of the hardware supplied for yield or application mapping operations and, therefore, is not quoted as a separate cost item in this report. There are, however, a number of companies that supply different types of software for producing and analysing yield and application maps, typically ranging in cost from being supplied free of charge, for determining fertiliser application rates, up to £2,400 for a complete precision farming management package compatible with a wide range of commercially available systems. The latter would have a cost of £1.37 ha-1 based upon an area of 500 ha and depreciated over 5 years at an interest rate of 8.5%.
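The £1.37 ha-1 figure quoted above is consistent with straight-line depreciation over 5 years plus interest at 8.5% on the initial capital, spread over 500 ha. A minimal sketch of that arithmetic is given below; the costing convention is inferred from the figures rather than stated explicitly in the text.

```python
def annual_cost_per_ha(capital, years, interest_rate, area_ha):
    """Straight-line depreciation plus simple interest on the initial
    capital, expressed per hectare (convention inferred from the text)."""
    depreciation = capital / years          # e.g. 2400 / 5 = £480 pa
    interest = capital * interest_rate      # e.g. 2400 * 0.085 = £204 pa
    return (depreciation + interest) / area_ha

# £2,400 software package over 5 years at 8.5% across 500 ha:
cost = annual_cost_per_ha(2400, 5, 0.085, 500)  # ≈ £1.37 ha-1
```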
3.2.5 Depreciation of the capital cost

Depreciation of the specialist precision farming components has been assumed to be equal to the rate of depreciation of the host machine, using the rates given in the Farm Management Pocketbook (Nix, 2000).

3.2.6 Maintenance costs

The cost of maintenance of the equipment associated with precision farming is difficult to establish, as the equipment is mostly solid state, i.e. electronic circuit boards with no moving parts; these components should, therefore, be more reliable than components with many moving parts, for instance metering mechanisms. However, they are mounted in a hostile environment that is subjected to vibration, dust, moisture and temperature change, which increases the risk of failure. To allow for this, a maintenance cost for the precision farming accessories has been calculated using the same percentage of the initial capital cost as quoted by Nix (2000) for the regular maintenance of the machine to which the equipment is fitted.

3.2.7 Training costs

These have been based on a figure of £300, to cover the salary of the trainee during the time of training and the cost of the course, provided by a leading agricultural machinery manufacturer, amortised over a period of 5 years.

3.2.8 Cost of capital

Capital has been calculated at a rate of 8.5% pa, i.e. base + 2.5% (Nix, 2000), and reflects the cost of the initial investment in the precision farming equipment.

3.3 Precision farming equipment costs

The additional costs associated with purchasing new precision farming equipped machinery, over the cost of purchasing new non precision farming equipped machinery, are summarised in Table 7 for all systems. From these, the annual cost per PF-system is calculated, which can then be compared directly with the benefits.

3.3.1 Annual cost per unit area

This has been calculated for a range of arable areas that could be managed using a single PF-system, i.e.
the vehicle mounted computer used to record yield when fitted to the combine harvester and to control application rate when fitted to the tractor. The results of the analysis for a Class 1 system, based upon a total cost of £11,363, are presented in Fig. 1 for the depreciation of the initial capital cost; the other associated costs are presented in Fig. 2. These figures show very clearly the effect of the area per PF-system on the annual cost of the operation, with the costs becoming asymptotic to the horizontal axis. The figures show that the other associated costs are greater than the cost of depreciation of the capital equipment. The two data sets have been combined in Fig. 3 to show the total annual cost in £ ha-1 for the range of systems. Fig. 3 shows that the cost of the basic system (Class 4), which is significantly cheaper than the full systems, is less than £5 ha-1 for areas above 250 ha, where corresponding values for the full systems range between £12 ha-1 and £18 ha-1. Doubling the area to 500 ha reduces the costs to £2.50 ha-1 and £6 - £9 ha-1 respectively. These figures are higher than those quoted by Earl et al. (1996) and James (1998) for an area of 250 ha, as they include the additional expense of maintenance and training, but are lower than those suggested by Schmerler and Basten (1999) when the economies of scale are considered.

3.4 Other costs

A number of other costs can be associated with the management system for precision farming by providing information on which to base application plans. These are as follows:

3.4.1 Soil texture and soil series

Texture and soil series can be determined by traditional manual surveying techniques from auger samples on an approximate 100 m grid basis, or by the more recently developed electromagnetic induction techniques (Waine, 2000). These are generally based upon a cost per hectare, as given in Table 8, and should be viewed as a “one-off” non-recurrent investment.
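The area effect described above follows from spreading a fixed annual system cost over the managed area. A sketch for the basic (Class 4) system is given below, using the Table 7 conventions (13% depreciation, 8.5% cost of capital, £60 pa training); the assumption that only the 3.5% combine-mounted maintenance rate applies to the basic system is ours, but the results reproduce the under-£5 ha-1 figure at 250 ha.

```python
def class4_annual_cost(capital=4500):
    """Annual cost of the basic (Class 4) system using the Table 7
    conventions: 13% depreciation, 8.5% cost of capital, 3.5%
    maintenance (assumed combine-mounted only) and £60 pa training."""
    return 0.13 * capital + 0.085 * capital + 0.035 * capital + 60

def cost_per_ha(annual_cost, area_ha):
    # A fixed annual cost spread over the area: doubling the area halves it.
    return annual_cost / area_ha

c4 = class4_annual_cost()        # £1,185 pa
cost_250 = cost_per_ha(c4, 250)  # ≈ £4.74 ha-1 (under £5, as in Fig. 3)
cost_500 = cost_per_ha(c4, 500)  # ≈ £2.37 ha-1
```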
3.4.2 Soil nutritional (chemical) status

Nutritional status is determined on a cost per sample basis; current sampling and analysis costs for a range of nutrients are also given in Table 8. This indicates that nitrogen analysis is expensive if undertaken annually with one sample per hectare, and explains why there is great interest in targeting the samples needed for this and similar analyses, as outlined in Thomas et al. (1999).

3.4.3 Crop canopy status

Crop canopy status covers the costs incurred to assist in the management of the crop canopy in near real time using crop reflectance data. This can be determined using either aerial digital photography (ADP) or tractor mounted radiometers (TMR). The hardware required for obtaining remotely sensed data comprises a pair of digital cameras for use in ADP mounted in a light aircraft (Wood et al., 2001b) or a tractor mounted radiometer (Boissard et al., 2001) for collecting near-ground crop reflectance data. The annual depreciation and maintenance costs associated with ADP and TMR equipment have been calculated using the same assumptions as used for calculating the costs of other precision farming system hardware in the earlier sections; these costs are summarised in Table 9. The TMR may also be hired for a season, as indicated in Table 9. For the farm scales in the UK it is most likely that a service provider, agronomy consulting group or a syndicate of farmers would make the substantial investment (c. £15,000) in the digital camera system for ADP. The annual cost of £3,750 would, therefore, be spread over a much larger area than a single farm.
In order to estimate the cost per hectare of acquiring the crop reflectance data using ADP it has been assumed that: (i) each 3 hour flight could cover up to 3,650 ha and that each field would need to be photographed prior to each application of nitrogen at the 3 growth stages in the January – May period, (ii) it is possible to make 2 flights per day, and (iii) weather conditions limit the number of days when images may be obtained. From the experience of this project, the most difficult period for flying was the phase in late April and early May. Taking the worst-case scenario, weather conditions during this period may permit only 2 days of data collection; even so, during this period it would be possible to collect up to 15,000 ha of crop data. The cost of the data collected per flight, as a function of the area per flight, is presented in Fig. 4; this includes the cost of the plane, pilot, cameras and the technicians who perform the image calibration in the field. It can be seen that the cost is almost independent of the area flown above 1000 ha, and at 1500 ha (a typical day’s work for collecting the ground calibration data) would cost £7 ha-1. The cost of the tractor-mounted radiometer (TMR) is more likely to be borne by an individual farmer or a small syndicate of farmers. The TMR could provide similar, but lower resolution, data to ADP for use in producing fertiliser application plans. However, certain models of TMR can be interfaced with the fertiliser distributor to effect real time adjustment of nitrogen application rate according to crop reflectance. The cost per ha has been calculated, as a function of the area managed per radiometer, using the data in Table 9, and is presented in Fig. 5. From Fig. 5 it can be seen that the rented TMR and the farmer-owned TMR used for real time control of nitrogen application rate have respectively the highest and lowest cost per unit area.
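The fixed-plus-variable shape of the Fig. 4 curve can be sketched as a per-flight fixed cost divided by the area flown, plus the £4.85 ha-1 ground calibration charge from Table 9. The fixed per-flight figure below is a hypothetical value backed out of the £7 ha-1 quoted for a 1,500 ha day; it is not given in the text.

```python
# Fixed-plus-variable sketch of the ADP data acquisition cost.
CALIBRATION_PER_HA = 4.85   # ground calibration, £ ha-1 (Table 9)
FIXED_PER_FLIGHT = 3225.0   # hypothetical: (7 - 4.85) * 1500, backed out
                            # of the quoted £7 ha-1 at 1,500 ha

def adp_cost_per_ha(area_ha):
    """Cost per hectare of ADP data for a flight covering area_ha."""
    return FIXED_PER_FLIGHT / area_ha + CALIBRATION_PER_HA
```

As the area per flight grows, the fixed term vanishes and the cost tends towards the calibration charge alone, which is why the curve is nearly flat above 1000 ha.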
The lower cost per unit area of the TMR used for real time control is due to the lower labour requirement for in-field calibration of the radiometer data with the crop canopy, in comparison to that required when application maps are to be produced. The difference between owning (£10.75 ha-1) and renting (£13.75 ha-1) a TMR is £3 ha-1 at 500 ha; however, to be competitive with ADP it would be necessary to manage an area in excess of 1500 ha. The cost of using a TMR for real time control of nitrogen application rate is £5 ha-1 for an area of 500 ha.

4 Breakeven analysis

The breakeven analysis has been based on a benefit of £15 ha-1. This has been calculated by subtracting the £7 ha-1 cost of acquiring ADP data from the £22 ha-1 benefit achieved by varying nitrogen application according to crop needs assessed using real time monitoring of the canopy in Onion Field and Far Highlands. In order to estimate the area per PF-system required to break even, the mean benefit of £15 ha-1 has been compared with the cost of the four different classes of PF-system shown in Fig. 6. It can be seen from Fig. 6 how the increase in system cost increases the area per PF-system required for breakeven at an economic return of £15 ha-1, with the exact areas shown in Table 10. The cost of sampling for soil mineral nitrogen has not been factored into this analysis as it is envisaged that it would be conducted as part of an improved uniform application strategy; the cost would be prohibitive for a sampling density greater than that accounted for by good field practice (MAFF, 2000). This shows that, for a low cost basic system, precision farming can be economically viable for areas in excess of 78 ha, rising to 308 ha for the most expensive system, for fields responding in a similar way to Onion Field and Far Highlands. These areas would have to be increased if the TMR was used to collect reflectance data similar to that from ADP, and reduced if the TMR was used for “real-time control”.
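The breakeven logic above, and the yield-increase sensitivity calculation used later in Section 4.2, both follow from spreading a fixed annual system cost over the managed area. A sketch for the basic (Class 4) system is shown below; the cost components follow the Table 7 conventions, and the maintenance assumption (combine rate only) is ours. The result of c. 79 ha is consistent, to within rounding, with the 78 ha quoted in Table 10.

```python
def breakeven_area(annual_system_cost, net_benefit_per_ha):
    """Area at which the per-ha benefit covers the fixed annual system cost."""
    return annual_system_cost / net_benefit_per_ha

def required_yield_increase(cost_per_ha, responsive_fraction, grain_price):
    """Yield increase needed on the responsive areas to cover the system
    cost (the Section 4.2 sensitivity calculation)."""
    return cost_per_ha / (responsive_fraction * grain_price)

# Net benefit: £22 ha-1 canopy-management benefit less £7 ha-1 ADP cost.
net_benefit = 22 - 7  # £15 ha-1

# Class 4 annual cost: 13% depreciation + 8.5% capital + 3.5% maintenance
# (assumed) on £4,500, plus £60 pa training = £1,185 pa.
class4_annual = (0.13 + 0.085 + 0.035) * 4500 + 60

area = breakeven_area(class4_annual, net_benefit)   # ≈ 79 ha (Table 10: 78 ha)

# With 30% of a 250 ha area responding and grain at £65 t-1:
dy = required_yield_increase(class4_annual / 250, 0.30, 65)  # ≈ 0.24 t ha-1
```

The last figure is in line with the 0.25 t ha-1 quoted for the least expensive system in Section 4.2.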
The average benefits obtained from managing Far Highlands on a site-specific basis (namely £7.75 ha-1) are too low to justify the investment in precision farming equipment for nitrogen management alone; however, combined with the other benefits listed in Table 11 it could prove worthwhile.

4.1 Other economic benefits

Benefits of up to £60 ha-1 have been achieved in these field trials (Onion Field) when varying nitrogen application rate according to crop requirements determined through the use of ADP to measure the crop canopy. However, there are a number of other factors which need to be considered. These factors and their possible economic implications are summarised in Table 11. During the course of this project three examples of the above additional benefits of precision farming were recorded. The cost of water logging, shown as up to £195 ha-1, was calculated using a potential yield reduction of 3 t ha-1 experienced at one of the trial sites following a wet period in the winter of 1998/99 (Wood et al., 2000a). This could have been rectified for a one-off cost of £50 ha-1 (Nix, 2000) for re-moling the site and clearing blocked drain outlets, which could have an economic life in excess of 5 years. Uneven distribution of fertiliser, which had absorbed moisture during transport and storage, resulted in uneven application and a yield penalty of up to 1 t ha-1 during the blanket application phase. This could be corrected by using an alternative source of fertiliser and spreader calibration. The cost of failing to rectify problems involving pH levels was estimated to be up to £7 ha-1, calculated using yield data from one of the trial sites that was affected by sub-optimal pH levels. The collection of these data using yield mapping techniques enables simple cost/benefit analyses to be conducted to ascertain the scale and extent of the problem(s), from which estimates of the cost of correction can be made to compare with the potential long term benefits.
Economic benefits resulting from the site-specific control of herbicide (Rew et al., 1997 and Perry et al., 2001) and fungicide (Secher, 1997) application have been included in Table 11. The reported cost savings for herbicides range between £0.50 and £20.70 ha-1, and were achieved by targeted application using patch-spraying techniques. A statistically significant yield increase of 0.3 t ha-1, equivalent to an increase in revenue of £21.67 ha-1, has been achieved by varying fungicide application rate according to crop canopy density. It can be seen from Table 11 that the potential economic penalties of problems including water-logging due to poor drainage, and poor calibration of fertiliser application equipment, can outweigh the highest increase in benefit achieved during this research from spatially varying nitrogen fertiliser. It is, therefore, imperative that the areas affected and the potential economic consequences of these problems are addressed prior to considering the use of spatially varying nitrogen.

4.2 Sensitivity analysis of field variability

The scale of any benefit obtained from adopting precision farming practices will ultimately depend on the magnitude of the response and the proportion of the field (%) that will respond positively to variable management. The increase in yield required to break even for different levels of field variability has been calculated using the costs based on Class 1 and Class 4 systems and grain at £65 t-1, and is shown in Figs 7a and 7b. The proportion of the field (%) responding positively to variable nitrogen management is based upon data from the shoot density and crop canopy studies, which varied between 12% and 52% of the strip areas, and could be estimated for a particular farm using yield maps. It can be seen from Fig.
7 that for: (1) areas per PF-system greater than approximately 250 ha, and (2) areas of the field likely to produce a positive response to site specific management greater than 30%, the response curves are asymptotic to the horizontal axis. Using these figures as an example shows that minimum yield increases of 1.0 t ha-1 and 0.25 t ha-1 are required for breakeven for the Class 3 (most expensive) and Class 4 (least expensive) systems respectively; larger potential yield benefits would then be profitable.

5 Conclusions

These conclusions are based upon nitrogen and cereal prices at £0.30 kg-1 and £65 t-1 respectively, and upon equipment prices in the UK in January 2001.

(1) At the above prices the benefits of the variable rate application of nitrogen, based upon crop canopy management using aerial digital photography, compared to a standard uniform rate provided an average improvement of £22 ha-1. This was based upon the results of 8 experimental strips in 2 fields of wheat, with a range of seed rates. The fields were located in Southern and South Eastern England and represented soils similar to 30% of the arable growing area of England and Wales.

(2) Applying nitrogen fertiliser based upon the variations in historic yield is not economically justified.

(3) The capital cost of yield mapping and variable application equipment varies from: (a) a basic PF-system (£4,500), which uses a non-differential GPS system with the yield monitor to produce a yield map and a forward speed indicator to advise the operator of the target speed needed to achieve the required application rate, to (b) a DGPS equipped unit with greater spatial accuracy and automatically controlled variable application systems, ranging in cost from £11,500 to £16,000 depending upon the level of integration and compatibility with existing farm machinery. The most expensive system permits both yield mapping and spatially variable application to be undertaken concurrently.
(4) The annual costs per hectare of the above equipment, over a 5 year depreciation period at an interest rate of 8.5%, together with maintenance and training, vary between £4.67 ha-1 and £18.46 ha-1 for the basic and most expensive systems respectively, for an area managed per PF-system of 250 ha. These costs will change in inverse proportion to the area managed per unit.

(5) The benefits outweigh the additional costs of the investment in precision farming systems and services for cereal farms greater than 80 ha for basic low cost systems and 200 - 300 ha for the range of more sophisticated systems. These figures also assume a £7 ha-1 charge for crop canopy images, either from aerial or ground based systems.

(6) The costs of detailed soil analysis prohibit collection from a dense grid of data points, and targeted sampling based upon significant variations in yield or soil type is recommended.

(7) Common problems, such as water logging and fertiliser application errors, can result in significant crop yield penalties. Precision farming can enable these problems to be identified, the lost revenue to be calculated and the resultant impact on the cost/benefit to be determined. This provides a basis from which informed management decisions can be taken.

(8) Work carried out by other researchers indicates that savings in herbicide use of the order of £0.50 to £20.70 ha-1 can be made.

(9) Currently ADP appears to be less expensive than TMR for collecting NDVI data; however, “real time control” of nitrogen based on a TMR system offers the least cost option.

(10) The scale of the benefits obtained from precision farming practices depends upon the magnitude of the response to the corrective/variable treatments and the proportion of the field which will respond.
Typically a farmed area of 250 ha of cereals, where 30% of the area will respond to corrective/variable treatment, requires an increase in yield on the responsive areas of between 0.25 t ha-1 and 1.0 t ha-1 for the basic and the most expensive system respectively. These figures will change in inverse proportion to (i) the size of the area managed with each PF-system and (ii) the percentage of the field that will respond to treatment.

6 Acknowledgements

The authors would like to thank the sponsors of this work, the Home-Grown Cereals Authority, Hydro Agri and AGCO Ltd., for their support, and the contributions made by their collaborators, Arable Research Centres and Shuttleworth Farms. We would also like to thank Dr. David Pullen and Dr. Nicola Cosser for their assistance in developing the research programme and Robert Walker for implementing treatments and harvesting the experiments. Thanks must also be extended to Messrs Dines, Hart, Wilson and Welti, who graciously allowed us to use their fields and gave us much support.

References

Barnhisel R I; Bitzer M J; Grove J H (1996). Agronomic Benefits of Varying Corn Seed Populations: A Central Kentucky Study. In: Precision Agriculture: Proceedings of the 3rd International Conference (Robert P C; Rust R H; Larson W E eds), 957-965, Madison, Wisconsin, ASA, CSSA, SSSA

Birrell S J; Sudduth K A; Borgelt S C (1996). Comparison of Sensors and Techniques for Crop Yield Mapping. Computers and Electronics in Agriculture, 14, 215-233

Boissard P; Boffety D; Devaux J F; Zwaenepoel P; Huet P; Gilliot J-M; Heurtaux J; Troizier J (2001). Mapping of the Wheat Leaf Area From Multidate Radiometric Data Provided by On Board Sensors. In: Proceedings of the 3rd European Conference on Precision Agriculture (Grenier G; Blackmore S eds), 157-162, Montpellier, France

Earl R; Wheeler P N; Blackmore B S; Godwin R J (1996). Precision Farming the Management of Variability.
Landwards, Journal of the Institute of Agricultural Engineers, 51(4), 18-23

Godwin R J; James I T; Welsh J P; Earl R (1999). Managing Spatially Variable Nitrogen – A Practical Approach. Presented at the ASAE/CSAE-SCGR Annual International Meeting, Paper No. 99 1142, Toronto, Ontario, Canada

Godwin R J; Miller P C H (2001). A Review of Technologies for Mapping within Field Variability. Submitted to Biosystems Engineering

Home Grown Cereals Authority (1998). Wheat Growth Guide. Home Grown Cereals Authority, Caledonia House, 223 Pentonville Road, London N1 9HY

James I T (2000). Investigation of Management Strategies for the Variable Rate Application of Nitrogen Fertiliser to Winter Barley in the UK. PhD Thesis (unpublished), Institute of AgriTechnology, Cranfield University, Silsoe, UK

Kachanoski R G; Fairchild G L; Beauchamp E G (1996). Yield Indices for Corn Response to Applied Fertiliser: Application in Site-Specific Crop Management. In: Precision Agriculture: Proceedings of the 3rd International Conference (Robert P C; Rust R H; Larson W E eds), 425-432, Madison, Wisconsin, ASA, CSSA, SSSA

Ministry of Agriculture, Fisheries and Food (2000). Fertiliser Recommendations for Agricultural and Horticultural Crops (RB209). The Stationery Office, Norwich, 77-102

Miller P C H; Paice M E R (1998). Patch Spraying Approaches to Optimise the Use of Herbicides Applied to Arable Land. Journal of the RASE, 159, 70-81

Murphy D P; Schnug E; Haneklaus S (1995). Yield Mapping – A Guide to Improved Techniques and Strategies. In: Proceedings of the Second International Conference on Site-Specific Management for Agricultural Systems (Robert P C; Rust R H; Larson W E eds), 33-47, Bloomington, Minneapolis, USA

Nix J (2000). Farm Management Pocketbook, 31st edition. Imperial College Wye, ISBN 0 86266 225 7

Perry N H; Lutman P J W; Miller P C H; Wheeler H C (2001). A Map Based System for Patch Spraying Weeds (1) Weed Mapping.
In: Proceedings BCPC Crop Protection Conference – Weeds 2001, Brighton, UK (in preparation)

Rew L J; Miller P C H; Paice M E R (1997). The Importance of Patch Mapping Resolution for Sprayer Control. Aspects of Applied Biology, 48, Optimising Pesticide Application, 49-55

Schmerler J; Basten M (1999). Cost/Benefit Analysis of Introducing Site-Specific Management on a Commercial Farm. In: Precision Agriculture ’99, The Second European Conference on Precision Agriculture, 11th – 15th July 1999 (Stafford J V ed), 959-967, Odense, Denmark

Secher B J M (1997). Site Specific Control of Diseases in Winter Wheat. Aspects of Applied Biology, 48, Optimising Pesticide Applications, 49-55

Snyder C; Havlin J; Kluitenberg G; Schroeder T (1999). Evaluating the Economics of Precision Agriculture. In: Proceedings of the Fourth International Conference on Precision Agriculture, 19th – 22nd July 1999 (Robert P C; Rust R H; Larson W E eds), 1621-1632, Madison, Wisconsin, USA

Stafford J V; Ambler B; Lark R M; Catt J (1996). Mapping and Interpreting the Yield Variation in Cereal Crops. Computers and Electronics in Agriculture, 14, 101-119

Welsh J P; Wood G A; Godwin R J; Taylor J C; Earl R; Blackmore S; Knight S (2001a). Developing Strategies for Spatially Variable Nitrogen Application I: Barley. Submitted to Biosystems Engineering

Welsh J P; Wood G A; Godwin R J; Taylor J C; Earl R; Blackmore S; Knight S (2001b). Developing Strategies for Spatially Variable Nitrogen Application II: Wheat. Submitted to Precision Agriculture

Wood G A; Welsh J P; Godwin R J; Taylor J C; Knight S; Carver M F F (2000). Precision Farming: Seed-Rate and Nitrogen Interactions. Home Grown Cereals Authority Crop Management for the Millennium Conference, Cambridge

Wood G A; Welsh J P; Taylor J C; Godwin R J; Knight S (2001a).
Real Time Measures of Canopy Size as a Basis for Spatially Varying Nitrogen at Different Seed Rates in Winter Wheat. Submitted to Biosystems Engineering.
Wood G A; Taylor J C; Godwin R J (2001b). Calibration Methodology for Mapping Within-Field Crop Variability using Remote Sensing. Submitted to Biosystems Engineering.

Tables and Figures for An Economic Analysis of the Potential for Precision Farming in UK Cereal Production
R. J. Godwin1; T. E. Richards1; G. A. Wood1; J. P. Welsh1; S. M. Knight2

Table 1 Economic consequences of the alternative strategies in comparison with the standard farm practice (£ ha-1)

                           1997/98   1998/99   1999/2000    Mean
1 Far Sweetbrier (Wheat)
  HY1                        11.70     -1.95      -26.65   -5.63
  HY2                        37.05     41.60      -14.95   21.23
  HY3                         7.15      0.65       -5.85    0.65
  HY4                         7.80     24.70       -0.65   10.62
2 Trent Field (Barley)
  HY1                        -7.15     -8.45      -17.55  -11.05
  HY2                       -17.55      0.00      -29.25  -15.60
  HY3                         9.10      1.30       11.70    7.37
  HY4                        11.70      9.10       14.95   11.92
3 Twelve Acres (Wheat)
  HY1                       -20.15     -2.60       -0.46   -7.74
  HY2                       -10.40     -3.25      -31.20  -14.95
  HY3                        -1.30     -2.60       -3.25   -2.38
  HY4                        -0.65      7.15       -5.85    0.22

Table 2 Mean economic consequences, from using the historic yield approach, all fields, all years

Strategy   £ ha-1
HY1         -8.14
HY2         -3.11
HY3          1.88
HY4          7.59

Table 3 Economic consequences over standard farm practice achieved using the shoot density approach to determine nitrogen application rate (£ ha-1)

                           1997/98   1998/99   1999/2000    Mean
Far Sweetbrier (Wheat)
  SD1                         7.80    -12.35      -23.40   -9.32
  SD2                        -3.25     57.85       35.10   29.90
Trent Field (Barley)
  SD1                       -29.25     20.80       23.40    4.98
  SD2                        13.65     -5.85       -6.50    0.43

Table 4 Economic comparisons of variable and uniform nitrogen application rates

Target seed rate (seeds m-2)           150   250   350   450
Onion Field
  Establishment (plants m-2)           100   143   177   200
  Gross margin, variable N (£ ha-1)    366   432   434   441
  Gross margin, uniform N (£ ha-1)     349   394   403   381
  Difference (£ ha-1)                   17    38    31    60
Far Highlands
  Establishment (plants m-2)           120   195   240   320
  Gross margin, variable N (£ ha-1)    437   397   406   391
  Gross margin, uniform N (£ ha-1)     417   398   404   381
  Difference (£ ha-1)                   20    -1     2    10

Table 5 Precision farming system cost

PF system type   Hardware cost (£)
Class 1                     11,363
Class 2                     14,100
Class 3                     16,150
Class 4                      4,500

Table 6 Cost components of precision farming

Cost – capital depreciation     Other – associated costs
Combine mounted hardware        Maintenance
Tractor mounted hardware        Training
Implement mounted hardware      Cost of capital
Software                        Bought in services

Table 7 Summary of the cost of precision farming equipment

                               Full system (Class 1 / Class 2 / Class 3)   Basic system (Class 4)
Initial capital cost           £11,363 / £14,100 / £16,150                 £4,500
Cost of capital                8.5%                                        8.5%
Depreciation (all equipment)   13% for 5 yr replacement                    13% for 5 yr replacement
Maintenance
  Combine                      3.5% for 150 hrs use pa                     3.5% for 150 hrs use pa
  Tractor                      8% for 1000 hrs use pa                      0
  Seed drill                   7.5% for 150 hrs use pa                     0
  Fertiliser distributor       7.5% for 150 hrs use pa                     0
Training                       £60 pa (£300 over 5 yr)                     £60 pa (£300 over 5 yr)

Table 8 Typical fixed area costs

One-off cost (£ ha-1):
  Soil surveying (manual)                         25
  Soil surveying (electromagnetic induction)      14
Annual cost (£ sample-1):
  P, K, pH, Mg                                     9
  P, K, pH, Mg + Cu, B                            20
  Available N:
    a) upper, middle and lower samples from a 0.9 m deep core   100
    b) 0.9 m core bulked together                                40

Table 9 Costs associated with acquiring crop reflectance data

                                      TMR purchased   TMR hired   ADP cameras
Hardware cost (£)                            10,000           -        15,000
Annual costs (£)
  Depreciation @ 13%                          1,300           -         1,950
  Maintenance @ 3.5%                            350           -           525
  Cost of capital @ 8.5%                        850           -         1,275
  Rental charges                                  -       4,000             -
Total annual cost (£ pa)                      2,500       4,000         3,750
Cost of ground calibration (£ ha-1)            4.85        4.85          4.85

Table 10 Breakeven area per PF-system for the average returns in Onion Field and Far Highlands in 1999/2000

System type   Breakeven area (ha), mean benefit £15 ha-1
Class 3       308
Class 2       240
Class 1       206
Class 4        78

Table 11 Other economic considerations

Factor                          Implication          Penalty or benefit
Water-logging                   Economic penalty     Up to £195 ha-1
Fertiliser application errors   Economic penalty     Up to £65 ha-1
pH                              Economic advantage   Up to £7 ha-1
Herbicide application1          Economic advantage   Up to £20 ha-1
Fungicide application2          Economic advantage   Up to £22 ha-1
1 after Rew et al. (1997); 2 after Secher (1997)

Fig 1. Depreciation of the initial capital cost of precision farming equipment hardware for a Class 1 system for a range of areas per PF-system: total hardware cost; combine; seed drill; fertiliser distributor; tractor. [Axes: area per PF-system (ha) vs annual cost (£ ha-1).]
Fig 2. Other associated costs of precision farming equipment for a Class 1 system for a range of areas per PF-system: total associated cost; cost of capital; maintenance; training. [Axes: area per PF-system (ha) vs annual cost (£ ha-1).]
Fig 3. Total cost of four different precision farming systems compared to area managed per PF-system: Class 1; Class 2; Class 3; Class 4. [Axes: area per PF-system (ha) vs annual cost (£ ha-1).]
Fig 4. Cost of acquiring crop reflectance data using aerial digital photography. [Axes: area per flight (ha) vs cost (£ ha-1).]
Fig 5. Cost of acquiring crop reflectance data using a tractor mounted radiometer (TMR): TMR rented; TMR farmer owned; TMR real time control. [Axes: area per radiometer (ha) vs cost (£ ha-1).]
Fig 6. Breakeven area per PF-system for each of the 4 classes of precision farming systems for a return of £15 ha-1: Class 1; Class 2; Class 3; Class 4; £15 ha-1 line. [Axes: area per PF-system (ha) vs cost (£ ha-1).]
Fig 7. Field variability sensitivity analysis for (a) Class 3 and (b) Class 4 systems, for 10%, 20%, 30% and 50% of the field area likely to produce a positive response to variable inputs. [Axes: area per PF-system (ha) vs yield increase required in variable parts of the field to achieve breakeven @ £65 t-1 (t ha-1).]

On the evolution of the snow surface during snowfall

H. Löwe,1 L. Egli,1 S. Bartlett,1 M. Guala,1 and C. Manes1

Received 9 August 2007; revised 28 September 2007; accepted 12 October 2007; published 14 November 2007.

[1] The deposition and attachment mechanism of settling snow crystals during snowfall dictates the very initial structure of ice within a natural snowpack. In this letter we apply ballistic deposition as a simple model to study the structural evolution of the growing surface of a snowpack during its formation. The roughness of the snow surface is predicted from the behaviour of the time dependent height correlation function. The predictions are verified by simple measurements of the growing snow surface based on digital photography during snowfall. The measurements are in agreement with the theoretical predictions within the limitations of the model, which are discussed.
The application of ballistic deposition type growth models illuminates structural aspects of snow from the perspective of formation, which has been ignored so far. Implications of this type of growth on the aerodynamic roughness length, density, and the density correlation function of new snow are discussed. Citation: Löwe, H., L. Egli, S. Bartlett, M. Guala, and C. Manes (2007), On the evolution of the snow surface during snowfall, Geophys. Res. Lett., 34, L21507, doi:10.1029/2007GL031637.

1. Introduction

[2] The physical properties of the natural snowpack are of great importance for many geophysical processes such as heat transfer [Kaempfer et al., 2005; Sturm et al., 2002], interaction with the turbulent boundary layer [Lehning et al., 2002], wind erosion [Clifton et al., 2006], and failure and crack propagation [Sigrist and Schweizer, 2007; Heierli and Zaiser, 2006]. However, the ice structure within a natural snowpack is a non-equilibrium system which undergoes metamorphic changes induced by many physical and chemical processes [see, e.g., Arons and Colbeck, 1995; Schweizer et al., 2003, and references therein]. Linking the ice structure to its physical properties represents a major challenge in snow physics.

[3] In contrast to the difficulty of predicting the complete metamorphism dynamics of the ice structure, some striking features of its initial condition can be observed during snowfall even by eye: due to cohesion, a settling snow crystal has the tendency to attach to the snow surface immediately at first contact rather than to be rearranged to a position of minimum potential energy. The immediate attachment of the crystal creates a small overhang of the surface and thus excess pore space directly below the attached crystal. As an obvious consequence of this attachment mechanism the snow surface develops its irregular, rough appearance.
Some degree of order within this irregularity is sometimes revealed when a pattern of bumps with a well defined characteristic length scale is clearly visible on the surface during snowfall. As a less obvious consequence of the attachment mechanism, the ice structure below the snow surface should also inherit its structure from the way in which overhangs are buried by successive, random attachment events. While the initially generated ice structure is influenced very early by the aforementioned metamorphism dynamics [Kaempfer et al., 2005], the deposition process provides the initial condition to the dynamics and therefore deserves special attention.

[4] To the best of our knowledge the characteristics of surface roughness and the ice structure of snow have never been addressed from the most obvious perspective, that of formation itself, which is the purpose of this letter. We stress that the purpose is not to give a comprehensive model of snow deposition for ''operational use''. We rather demonstrate the wide applicability of a simple but well behaved theoretical framework.

2. Model of Snow Deposition

2.1. Ballistic Deposition

[5] Ballistic deposition (BD) was originally introduced as a model of colloidal aggregation. For a comprehensive overview of the theory presented below we refer to Barabási and Stanley [1995] and Meakin [1998]. For illustration of BD we consider the two-dimensional case on a lattice, where the surface height h(i, n) is defined at discrete lattice sites i and discrete times n. At each time step, a particle is released well above a randomly chosen site i. It settles down on a straight vertical trajectory until it encounters an occupied neighboring lattice site. Thus, at each time step the height of the surface is updated according to

h(i, n + 1) = max{h(i + 1, n), h(i − 1, n), h(i, n) + 1}.   (1)

The sticking rule (1) is illustrated in Figure 1a.
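The lattice rule (1) is straightforward to simulate. The sketch below is a minimal illustration; the system size, particle count and the periodic boundary condition are choices of the sketch, not of the paper:

```python
import random

def ballistic_deposition(width, n_particles, seed=0):
    """Grow a 1-D BD surface with the sticking rule
    h(i, n+1) = max(h(i+1, n), h(i-1, n), h(i, n) + 1),
    using periodic boundaries (an illustrative choice)."""
    rng = random.Random(seed)
    h = [0] * width
    for _ in range(n_particles):
        i = rng.randrange(width)
        h[i] = max(h[(i - 1) % width], h[(i + 1) % width], h[i] + 1)
    return h

heights = ballistic_deposition(width=200, n_particles=20_000)
mean_h = sum(heights) / len(heights)
# Each particle raises exactly one column by at least one unit, so any
# excess of sum(heights) over n_particles is pore space trapped under
# overhangs:
porosity = 1.0 - 20_000 / sum(heights)
```

The nonzero porosity of the deposit is a direct consequence of the overhangs created by the sticking rule; the universal scaling of the surface roughness discussed below emerges for large systems and long times.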
The restriction to a lattice can be abandoned, and the generalization of (1) to the three-dimensional case is straightforward [see Meakin, 1998].

2.2. Continuum Description

[6] As an alternative to the particle based approach within BD one may apply a related continuum description. It is widely believed but still a matter of debate [cf. Katzav and Schwartz, 2004, and references therein] that the universal properties of BD on length scales large compared to the particle size can be recovered by a continuum growth model, namely the Kardar-Parisi-Zhang (KPZ) equation [Kardar et al., 1986]. For a continuous surface h(x, t) in three-dimensional space it reads

∂h(x, t)/∂t = ν ∇²h(x, t) + λ |∇h(x, t)|² + ξ(x, t).   (2)

Time is denoted by t and the gradient by ∇. The individual terms in (2) can be attributed to particular surface processes: the term ξ(x, t) = η₀ + η(x, t) constitutes the mass supply. It includes a deterministic growth velocity η₀ and a fluctuating term η(x, t) which is zero-mean, Gaussian white noise in space and time. The diffusion term in (2) includes local surface relaxation processes. Most important for the relation of the KPZ equation to BD is the non-linear term of strength λ. It is the leading-order term for growth normal to the surface [Barabási and Stanley, 1995], which is the analog of an overhang in BD.

GEOPHYSICAL RESEARCH LETTERS, VOL. 34, L21507, doi:10.1029/2007GL031637, 2007
1 WSL, Swiss Federal Institute for Snow and Avalanche Research SLF, Davos, Switzerland.
Copyright 2007 by the American Geophysical Union. 0094-8276/07/2007GL031637$05.00
[7] From the description of a settling snow crystal given in the introduction it is clear that a minimal model of snow deposition should include at least two effects: 1) sticking (which covers both sintering and interlocking) as the origin of excess pore space, and 2) the stochastic nature of the deposition mass flux. Both models, BD and KPZ, include these effects. Since we focus on implications and applications of these effects on snow structure we will apply either of them likewise. Obvious limitations of the model can in principle be investigated within extensions of BD/KPZ: the influence of imperfect sticking (e.g., as a consequence of crystal shapes) can be addressed by competitive growth models [Braunstein and Lam, 2005]; non-uniform particle shapes or sizes have been studied by Silveira and Reis [2007]; oblique particle incidence (e.g., as a result of steady, low winds) has been studied by Meakin and Krug [1992]. In contrast, the dynamical evolution of the ice structure below the surface (metamorphism, stress induced re-arrangement) cannot easily be regarded as a generalization of BD/KPZ.

3. Surface Roughness

[8] In general, surface roughness can be characterized by studying temporal and spatial correlations of the height h(x, t) of a growing surface in terms of its dynamic, two-point correlation function [Barabási and Stanley, 1995]

G(x, t) := \overline{(h(x₀ + x, t) − h(x₀, t))²}.   (3)

Here, x, x₀ denote two-dimensional position vectors on the substrate. Assuming statistical homogeneity and isotropy of the growth process, G(x, t) is independent of x₀ and solely a function of the magnitude x := |x|. The overbar in (3) denotes an ensemble average over realizations of the deposition process. For illustration, the correlation function (3) for BD/KPZ type growth is schematically plotted as a function of x in Figure 1b.
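In practice the ensemble average in (3) is replaced by a spatial average over x₀. The sketch below is an illustrative check of such an estimator, not the paper's analysis code; the random-walk test profile is used only because its roughness exponent (α = 1/2, larger than the KPZ value) is known exactly:

```python
import numpy as np

def height_correlation(h, lags):
    """Estimate G(x) = <(h(x0 + x) - h(x0))^2> for a 1-D height profile,
    replacing the ensemble average of eq. (3) by an average over x0."""
    h = np.asarray(h, dtype=float)
    return np.array([np.mean((h[x:] - h[:-x]) ** 2) for x in lags])

rng = np.random.default_rng(1)
profile = np.cumsum(rng.standard_normal(32768))   # random walk, alpha = 1/2
lags = np.arange(1, 65)
G = height_correlation(profile, lags)

# On a double-logarithmic scale the ascending range has slope 2*alpha:
slope = np.polyfit(np.log(lags), np.log(G), 1)[0]   # close to 1.0 here
```

The same fit applied to the ascending range of a measured G(x, t) yields the roughness exponent α discussed in the following sections.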
Such a behaviour on a double logarithmic scale implies that G(x, t) follows the dynamic scaling form G(x, t) ∼ x^{2α} g(x/t^{1/z}) [Barabási and Stanley, 1995; Meakin, 1998]. Here g(s) is a scaling function which is constant for s ≪ 1 and decreases algebraically, g(s) ∼ s^{−2α}, for s ≫ 1. This implies that G(x, t) ∼ x^{2α} is independent of t if x ≪ t^{1/z} and approaches a constant, G(x, t) ∼ t^{2α/z}, for x ≫ t^{1/z} when plotted as a function of x (cf. Figure 1b). The exponents α, z are universal and commonly referred to as the roughness and dynamic exponent, respectively.

[9] The origin of scaling is the combination of random deposition events with a nearest neighbor sticking rule: initially, different parts of the surface are uncorrelated. After time t, regions of correlated surface heights have formed within a spatial extent x* ∼ t^{1/z} from the sticking rule (1) or, likewise, by the non-linear term in (2). Typical numerical estimates of the scaling exponents α, z for KPZ and BD are found to be in the range 0.3 < α < 0.4 and 1.36 < z < 1.65 [cf. Katzav and Schwartz, 2004, and references therein].

4. Measurements of the Snow Surface

[10] Quantitative roughness measurements of snow surface outlines at different times were carried out by means of digital photography during several snowfalls in winter 2006/07. Pictures were taken using a 7.0 megapixel digital camera and a scaled target which was carefully inserted within the snow (see Figure 2 for an example). In order to avoid the formation of roughness due to i) wind-induced snow erosion, ii) anomalous deposition processes due to a predominant wind direction, or iii) finite size geometry effects, images of the snow surface outlines were taken during snowfalls in the absence of wind, starting from a flat solid surface with dimensions significantly larger than the largest resolved scale, which is 20 cm. The snow roughness measured in these experiments can thus be regarded (in good approximation) solely as the result of the deposition process.
From each image, roughness outlines were identified by defining an optimal grey scale threshold value separating the pixels relating to the snow from those relating to the target. Overall, it was noted that, when inserted into the snowpack, the thin metal target produced very sharp cuts which preserved the shape of the foreground roughness outlines. The accuracy obtained for the roughness elevations depended solely on the camera resolution and the physical size covered by each image. Such an accuracy was estimated to be 0.1 mm, which is fine enough to investigate roughness properties belonging to the sub-crystal scale.

Figure 1. Ballistic deposition: (a) Nearest neighbor sticking for deposition at site i creates an overhang and thus excess pore space which remains inaccessible for other particles. (b) Schematic plot of the correlation function G(x, t) (see equation (3)) for different times t = 1, 10, 100, 1000 (arbitrary units).

Figure 2. Examples of images taken to estimate snow roughness elevations showing (a) an early and (b) a later stage of the evolution. Estimated outlines are displayed in red. The magnification nicely displays an overhang.

[11] As a typical drawback of natural experiments we acknowledge that some observed snowfalls, or parts of them, are not documented here, due to the occurrence of non-negligible wind during the snowfall or to significant changes of the crystal sizes in time. The results presented below are thus representative of the time evolution of the rough snow surface during snowfall where BD/KPZ should be applicable.

5. Results and Discussion

[12] Qualitatively, the measured correlation function G(x, t) (see equation (3)) behaves similarly to what is expected for a BD/KPZ surface. Quantitatively, G(x, t) is plotted for the three sets of experiments as a function of x for different times in Figure 3.
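The column-wise outline extraction described in section 4 can be sketched as follows; the threshold value, image orientation and the sentinel convention are illustrative assumptions, not details from the paper:

```python
import numpy as np

def surface_outline(gray, threshold):
    """Binarise a grey-scale image (row 0 at the top) and return, for
    each pixel column, the row index of the topmost pixel classified
    as snow; columns containing no snow pixel are marked with -1."""
    snow = np.asarray(gray) > threshold
    top = snow.argmax(axis=0)          # first True along each column
    top[~snow.any(axis=0)] = -1        # no snow anywhere in the column
    return top

# Tiny synthetic image: brightness 9 = snow, 0 = dark target.
img = np.array([
    [0, 0, 9, 0],
    [0, 9, 9, 0],
    [9, 9, 9, 0],
    [9, 9, 9, 9],
])
outline = surface_outline(img, threshold=5)   # -> [2, 1, 0, 3]
```

Applied to a real photograph, the extracted outline is the discrete height profile from which G(x, t) is estimated.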
The roughness exponent α is obtained by fitting the curves in the ascending range to a power law. The mean values for each set are given in Figure 3; the individual values are α = 0.36, 0.46, 0.43, α = 0.34, 0.45 and α = 0.37, 0.38 for Figures 3a, 3b and 3c respectively, which are in the range of the predictions from BD/KPZ (see section 3).

[13] In contrast, a reliable estimation of the dynamic exponent z from the standard deviation σ(t) ∼ t^{α/z} [Barabási and Stanley, 1995] is presently unfeasible. The reason for this is the difficulty of encountering a sufficiently long period of persistent, ideal snowfall conditions (long duration, persistent shapes of the crystals, no wind) which would allow the correlation function to be measured at many times and the exponent to be estimated from a single experiment. Likewise, an alternative approach, namely the estimation of z from data collapse of different experiments, could only be achieved by explicitly measuring the number flux of settling crystals: the fundamental length scale l₀ of a single experiment is given by l₀³ = v/n, where v is the velocity of the mean surface height in units of m s⁻¹ and n is the mean number flux of settling crystals in units of m⁻² s⁻¹. By definition l₀³ is the average volume occupied by a single crystal after its incorporation into the snowpack. Thus, l₀ covers branched crystals as well as those for which sticking can be accompanied by interpenetration. For this reason l₀ is an effective crystal size and not the exact one, which is an ambiguous quantity for non-spherical shapes. The fundamental time scale is then given by t₀ = l₀/v. Only by rescaling the variance by l₀ and time by t₀ can all data be combined consistently in a single plot. However, measuring n was clearly beyond the scope of our experimental setup.

[14] For completeness we simply plotted σ for all three experiments as a function of time from the beginning of each experiment in one plot (see Figure 4).
As a guide to the eye we added a line of slope α/z = 0.24. The increase of the variance is also evident from Figure 3 and follows qualitatively that of BD/KPZ type growth (cf. Figure 1b).

6. Applications

[15] We demonstrate below that apparently different aspects involving the structure of snow can be explained from a unifying perspective of BD/KPZ type growth.

6.1. Aerodynamic Roughness Length of New Snow

[16] It has been observed that the aerodynamic roughness length z₀, which determines the logarithmic velocity profile in turbulent boundary layer flows over snow, varies even for apparently similar samples of new snow [Clifton et al., 2006]. By dimensional analysis one would expect z₀ to scale on the standard deviation of the surface height. Such a dependence has also been suggested by Lancaster et al. [1991] for desert surfaces. Hence, variations of the roughness length z₀ of new snow might be attributed to variations of the standard deviation σ as predicted by BD/KPZ.

Figure 3. Correlation function for three different snow events. Arrows indicate increasing time, or increasing heights of the mean snow surface, corresponding to (a) 15, 20, 25 mm, (b) 70, 95 mm, and (c) 50, 80 mm.

6.2. Slope Dependent Density of New Snow

[17] It has been observed that the density of new snow depends on the inclination angle of the slope onto which it is deposited [Endo et al., 1998]. This slope dependence is partly attributed to the mechanism of the deposition process. Such a slope dependence can be predicted by BD/KPZ as follows. For constant vertical mass flux q onto an inclined substrate with slope m = tan θ and inclination angle θ, one can use the KPZ equation (2) to infer v(m) = v(0) + λm² [Barabási and Stanley, 1995] for the velocity of the mean surface height in terms of the strength λ of the non-linear term.
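This slope dependence can be illustrated numerically. All parameter values below are hypothetical, chosen only for a plausible new-snow density, and the mean deposit density is taken as ρ(m) = q/v(m) (mass conservation for a constant vertical flux):

```python
import numpy as np

# Hypothetical parameters, chosen only for illustration:
q = 1.0e-2      # constant vertical mass flux (kg m^-2 s^-1)
v0 = 1.0e-4     # growth velocity of the mean surface, flat substrate (m s^-1)
lam = 2.0e-4    # strength of the non-linear KPZ term (m s^-1)

def density(m):
    """Mean deposit density rho(m) = q / v(m), with v(m) = v(0) + lam*m^2."""
    return q / (v0 + lam * m ** 2)

slopes = np.linspace(0.0, 0.1, 11)   # m = tan(theta), small slopes
rho = density(slopes)

# For small slopes rho(m) is parabolic in m, so a linear fit of rho
# against m^2 recovers lam: d rho / d(m^2) ~ -q*lam/v0^2.
coeffs = np.polyfit(slopes ** 2, rho, 1)
lam_est = -coeffs[0] * v0 ** 2 / q
```

With these numbers ρ(0) = q/v(0) = 100 kg m⁻³, a plausible new-snow density, and the fitted curvature recovers λ to within a few per cent on this noise-free data, which is exactly the experimental route to λ suggested in the text.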
Thus, the velocity of the mean surface height normal to the surface increases when the inclination angle is increased, and the mean density of the deposit, given by ρ(m) = q/v(m), decreases. For small slopes m this amounts to ρ(m) ≈ ρ(0)(1 − λm²/v(0)), where ρ(0) = q/v(0) is the density of the deposit for a flat substrate. On the one hand this can explain the aforementioned claim of Endo et al. [1998]. On the other hand it provides an experimental method to estimate λ by fitting the density as a function of slope to a parabola [Barabási and Stanley, 1995].

6.3. Anisotropy of Density Correlations in New Snow

[18] It has been observed that the ice structure of snow might exhibit an anisotropy [Flin et al., 2004] as a result of metamorphism dynamics under isothermal conditions. A different anisotropy naturally emerges from BD/KPZ solely as a result of the deposition process and irrespective of possible anisotropic crystal shapes. To explain this we will extend recent two-dimensional results on properties of deposits below the growing surface from Katzav et al. [2006].

[19] Fluctuations of the density around the ensemble mean \bar{ρ} in a porous medium are commonly studied by means of the two-point density correlation function

C(x, x₃) := \overline{(ρ(x, x₃) − \bar{ρ})(ρ(0, 0) − \bar{ρ})},   (4)

where x₃ is the vertical component of the three-dimensional lag vector (x, x₃) and again x = (x₁, x₂). We note that the density ρ(x, x₃) has to be understood as a microscopic quantity. It is given by the indicator function of the porous medium, i.e. ρ(x, x₃) = ρ_ice if (x, x₃) lies in the ice phase and ρ(x, x₃) = 0 if (x, x₃) lies in the pore space.
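On a discretised indicator field, the correlation (4) can be estimated by replacing the ensemble average with a spatial average. The sketch below is illustrative only; the array size and ice fraction are arbitrary choices, and the uncorrelated test field serves merely to check the estimator:

```python
import numpy as np

def density_correlation(ice, dx, dz):
    """Estimate C for a 2-D indicator array (1 = ice, 0 = pore) at a
    lag of dx columns (horizontal) and dz rows (vertical), replacing
    the ensemble average of eq. (4) by a spatial average with
    periodic wrapping."""
    rho = np.asarray(ice, dtype=float)
    fluct = rho - rho.mean()               # fluctuation around the mean
    shifted = np.roll(np.roll(fluct, dz, axis=0), dx, axis=1)
    return float((fluct * shifted).mean())

# Uncorrelated test field: C(0, 0) equals the variance p*(1 - p) of the
# 0/1 field, while any non-zero lag gives a value near zero.
rng = np.random.default_rng(0)
ice = (rng.random((64, 64)) < 0.3).astype(int)
c00 = density_correlation(ice, 0, 0)
c53 = density_correlation(ice, 5, 3)
```

Applied to a BD deposit instead of an uncorrelated field, the same estimator would exhibit the anisotropic decay in x versus x₃ predicted by equation (5) below the growing surface.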
Their results can easily be generalized to three dimensional space by employing the scaling form equation (A.11) of Barabási and Stanley [1995] for the structure factor, yielding C x; x3ð Þ x �4 1�að Þ=z 3 f jxj=x 1=z 3 � � : ð5Þ [21] Here, f(s) is a scaling function which vanishes for s � 1 and approaches a constant for s � 1. In other words, for x3 � jxj, that is, in a thin vertical strip the correlation function has an algebraic tail C(x, x3) x3�4(1�a)/z which is missing for x3 � jxj, that is, in a thin horizontal strip. Using, e.g., a = 0.4 and z = 1.6, the power law decay is governed by an exponent �1.5. The anisotropy, a result of formation, becomes manifest in (5) since it is not solely a function of the magnitude (x1 2 + x2 2 + x3 2 ) of the lag. [22] The great importance of density correlation functions stems from the fact that they arise in rigorous expressions for the effective transport and mechanical properties of porous media [Torquato, 2002]. 7. Conclusions [23] With simple experiments and analytical predictions, we suggest that Ballistic Deposition/Kardar-Parisi-Zhang (BD/KPZ) processes i) well describe the growth of the snow surface during snowfall, ii) allow for the interpretation of previously observed behaviour of the density of new snow and iii) can predict density correlations within new snow. Despite the fact that the BD/KPZ model cannot always be applied to natural snowfalls (limitations are discussed), we emphasize its consequences on the aerody- namic roughness length or the anisotropy of snow as a porous medium. It remains an open question, however, how long the signatures of BD/KPZ persist after snowfall. [24] Acknowledgments. We thank Charles Fierz and Michi Lehning for a careful reading of the manuscript. References Arons, E. M., and S. C. Colbeck (1995), Geometry of heat and mass transfer in dry snow: A review of theory and experiment, Rev. Geophys., 33, 463–493. Barabási, A. L., and H. E. 
Stanley (1995), Fractal Concepts in Surface Growth, Cambridge Univ. Press, Cambridge, U. K.
Braunstein, L. A., and C. H. Lam (2005), Exact scaling in competitive growth models, Phys. Rev. E, 72, 026128.
Clifton, A., J.-D. Rüedi, and M. Lehning (2006), Snow saltation threshold measurements in a drifting-snow wind tunnel, J. Glaciol., 52, 585–596.

Figure 4. Standard deviation of the surface as a function of time from start of the experiments. Different symbols correspond to the experiments in Figures 3a, 3b, and 3c, respectively.

Endo, Y., Y. Kominami, and S. Niwano (1998), Dependence of new-snow density on slope angle, Ann. Glaciol., 26, 14–18.
Flin, F., J. B. Brzoska, and B. Lesaffre (2004), Three-dimensional geometric measurements of snow microstructural evolution under isothermal conditions, Ann. Glaciol., 38, 39–44.
Heierli, J., and M. Zaiser (2006), An analytical model for fracture nucleation in collapsible stratifications, Geophys. Res. Lett., 33, L06501, doi:10.1029/2005GL025311.
Kaempfer, T. U., M. Schneebeli, and S. A. Sokratov (2005), A microstructural approach to model heat transfer in snow, Geophys. Res. Lett., 32, L21503, doi:10.1029/2005GL023873.
Kardar, M., G. Parisi, and Y. Zhang (1986), Dynamic scaling of growing interfaces, Phys. Rev. Lett., 56, 889–892.
Katzav, E., and M. Schwartz (2004), What is the connection between ballistic deposition and the Kardar-Parisi-Zhang equation?, Phys. Rev. E, 70, 061608.
Katzav, E., S. F. Edwards, and M. Schwartz (2006), Structure below the growing surface, Europhys. Lett., 75, 29–35.
Lancaster, N., R. Greeley, and K. R. Rasmussen (1991), Interaction between unvegetated desert surfaces and the atmospheric boundary layer: A preliminary assessment, Acta Mech., 2, 89–102.
Lehning, M., P. Bartelt, B. Brown, and C.
Fierz (2002), A physical SNOWPACK model for the Swiss avalanche warning, part III: Meteorological forcing, thin layer formation and evaluation, Cold Reg. Sci. Technol., 35, 169–184.
Meakin, P. (1998), Fractals, Scaling and Growth Far From Equilibrium, Cambridge Univ. Press, Cambridge, U. K.
Meakin, P., and J. Krug (1992), Three-dimensional ballistic deposition at oblique incidence, Phys. Rev. A, 46, 3390–3399.
Schweizer, J., J. B. Jamieson, and M. Schneebeli (2003), Snow avalanche formation, Rev. Geophys., 41(4), 1016, doi:10.1029/2002RG000123.
Sigrist, C., and J. Schweizer (2007), Critical energy release rates of weak snowpack layers determined in field experiments, Geophys. Res. Lett., 34, L03502, doi:10.1029/2006GL028576.
Silveira, F. A., and Aarão Reis (2007), Surface and bulk properties of deposits grown with a bidisperse ballistic deposition model, Phys. Rev. E, 75, 061608.
Sturm, M., D. K. Perovich, and J. Holmgren (2002), Thermal conductivity and heat transfer through the snow on the ice of the Beaufort Sea, J. Geophys. Res., 107(C10), 8043, doi:10.1029/2000JC000409.
Torquato, S. (2002), Random Heterogeneous Materials, Springer, Heidelberg, Germany.

S. Bartlett, L. Egli, M. Guala, H. Löwe, and C. Manes, WSL, Swiss Federal Institute for Snow and Avalanche Research SLF, Flüelastr. 11, CH-7260 Davos Dorf, Switzerland.
(loewe@slf.ch)

British Association of Retinal Screeners (BARS): survey of workload in UK diabetic retinopathy screening programmes

K Shotliff*, G Duncan, R Dewhirst, L Dixon, A Ellingford, R Greenwood, J Mansell, P Mitchell, G Moss; on behalf of the British Association of Retinal Screeners Council

ABSTRACT
The aim of this survey was to determine the number of patients being screened per session in UK diabetic retinal screening programmes and the number of patients' images being graded in stand alone grading sessions. A questionnaire was sent to all members of the British Association of Retinal Screeners asking for information about diabetic retinal screening schemes in which they were involved. Sixty-eight (31%) replied and suggested that an average of 14.4 patients were being screened per session on a fixed site programme, and an average of 15.7 per session with a mobile service. A standard morning session was, on average, 3 hours and 23 minutes long on a fixed site and 3 hours and 14 minutes on a mobile site. A standard afternoon session was, on average, 3 hours and 5 minutes long on a fixed site and 2 hours and 44 minutes long on a mobile site. Those undertaking grading as a stand alone activity screened an average of 39.3 patients per session (ranging from 20–75 patients per session). While the lengths of morning and afternoon screening sessions were relatively consistent, there was more variability in the number of patients whom a stand alone grader would typically grade per session. We believe this range of activity reinforces the importance of a good quality assurance programme to maintain the consistency of the service offered.
Copyright © 2010 John Wiley & Sons. Practical Diabetes Int 2010; 27(4): 152–154

KEY WORDS diabetic retinopathy; screening; workload

Kevin Shotliff, Grant Duncan, Richard Dewhirst, Lorraine Dixon, Angela Ellingford, Richard Greenwood, Jacqueline Mansell, Peter Mitchell, Gary Moss (on behalf of the British Association of Retinal Screeners Council)

*Correspondence to: Dr Kevin Shotliff, Consultant Physician, Beta Cell Diabetes Centre, Chelsea and Westminster Hospital, 369 Fulham Road, London SW10 9NH, UK; e-mail: kevin.shotliff@chelwest.nhs.uk

Received: 9 September 2009. Accepted in revised form: 4 February 2010.

152 Pract Diab Int May 2010 Vol. 27 No. 4 Copyright © 2010 John Wiley & Sons ORIGINAL ARTICLE

Introduction

A structured screening programme for diabetic retinopathy available to all people with diabetes, aiming for the early detection of asymptomatic individuals and allowing for early treatment to try to reduce any loss of vision, is now a well accepted and achievable ideal.1,2 The National Service Framework (NSF) for diabetes suggested diabetic retinopathy screening as an area to target more aggressively, aiming for 100% of people with diabetes being offered annual retinal screening by 2007.1 Primary care trusts (PCTs) and strategic health authorities (SHAs) have been charged with achieving and maintaining this screening programme. The English National Screening Programme for Diabetic Retinopathy (ENSPDR) suggests digital photography is the current method of choice.2 The effectiveness of digital images in comparison to standard 35mm photographs, and their lower technical failure rate, is now well accepted, as is the role of trained professionals grading the images obtained. The ENSPDR provides accepted, structured grading criteria for the images obtained, as well as setting minimum standards for the quality of the image used, the maximum acceptable technical failure rate of the screening system implemented and the need for quality assurance of the whole grading process.2 For individuals grading these retinal images there is an accredited City and Guilds qualification aiming to ensure all individuals, and the grading system within which they work, are up to a certain level of competence – i.e. have a suitable standard of sensitivity and specificity with suitable quality assurance.3,4 We have, however, seen no guidance on how many people one person should be expected to screen whilst trying to achieve these standards, and therefore wished to find out about current practice in UK retinal screening programmes, by asking current members of the British Association of Retinal Screeners (BARS). This may then be helpful in future job planning and when reviewing current grader posts.

Method

There are currently 107 diabetic retinopathy screening programmes in the UK. The membership of BARS consists of individuals with an interest in diabetic retinopathy screening. Most are involved, or have an interest in, the screening of people with diabetes in schemes being undertaken in the UK, as well as a small number of foreign affiliates involved in screening programmes abroad. The UK membership includes retinal graders – such as photographers, optometrists, ophthalmology and diabetes nurses – as well as screening programme managers, PCT/SHA commissioners, general practitioners, diabetologists and ophthalmologists involved in diabetic retinopathy services and clinics into which these screening programmes feed patients with abnormal results.
We sent a questionnaire to all 220 UK members of BARS (175 of whom are registered with BARS as being ‘graders’) asking them about the screening programme that they are involved in, specifically asking if they undertook the image collection and the grading of those images. We asked for information about:
• Length of a standard morning and afternoon screening session.
• Number of people seen/screened per session.
• Whether this included just image collection or primary grading as well.
• Whether this was at a fixed site or as a mobile service.
• If mobile, the time spent travelling.
• If grading was undertaken as a stand alone activity, how many patients' images were looked at per session.

Results
Sixty-eight individuals replied to our questionnaire (31% of those contacted, 39% of those registered with BARS as being ‘graders’) stating that they undertook these activities.
• Fifty-five reported undertaking retinal screening on a fixed site, seeing an average of 14.4 patients per session with a range of 10–26 people (an average of 12.7 per session if this included image collection and grading, and 16.4 per session if image collection alone, as shown in Figure 1).
• Thirty reported undertaking mobile screening with an average travelling time of 48 minutes, seeing an average of 15.7 patients per session, with a range of 11–30 people (an average of 14.8 per session if this included image collection and grading, and 17.1 per session if image collection alone).
• A standard morning session of screening was, on average, 3 hours and 23 minutes long on a fixed site and 3 hours and 14 minutes on a mobile site (see Figure 2).
• A standard afternoon session was, on average, 3 hours and 5 minutes long on a fixed site and 2 hours and 44 minutes long on a mobile site (see Figure 2).
• Those who undertook grading as a stand alone activity graded an average of 39.3 patients' images per session (ranging from 20–75 patients per session, as shown in Figure 1).
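The per-patient grading times that the paper derives from these figures can be checked directly. A minimal sketch (the 3 hour 14 minute session length is the figure the authors use for an average half-day grading session; the helper name is ours):

```python
# Throughput arithmetic from the survey figures above (illustrative only).
SESSION_MIN = 3 * 60 + 14  # average half-day grading session: 194 minutes

def minutes_per_patient(patients_per_session: float) -> float:
    """Average minutes available to grade one patient's images."""
    return SESSION_MIN / patients_per_session

# Mean of 39.3 patients per session -> roughly 5 minutes per patient,
# i.e. close to 12 patients' images graded per hour.
mean_rate = minutes_per_patient(39.3)
# The reported range of 20-75 patients per session spans roughly
# 10 minutes down to about 2.5 minutes per patient.
slowest = minutes_per_patient(20)
fastest = minutes_per_patient(75)
print(f"{mean_rate:.1f} min/patient; range {fastest:.1f}-{slowest:.1f}")
```

These are the "every 5 minutes" and "2.5 to 10 minutes per patient" figures quoted in the article's discussion of stand alone grading.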
Discussion
The eye contains the only easily visible vascular bed affected by diabetes. Helmholtz invented the direct ophthalmoscope in 1851, with the first report of retinal abnormalities in the eyes of a patient with diabetes four years later. Interestingly, these were actually hypertensive changes rather than diabetic, reinforcing the need for a standardised grading scheme and a suitable quality assurance programme.

Digital photography is currently the investigation of choice for retinal screening in a diabetic population. It allows grading of the images obtained to separate those who just need to be recalled for annual screening (e.g. grades R0M0 and R1M0) from those who require referral to an ophthalmologist with a special interest in diabetes for further investigation and treatment; it also highlights those patients in whom we should be more aggressively improving medical parameters such as blood pressure and glycaemic control.

Figure 1. (A) Patients screened per session with no grading at the time of image collection, on a fixed site and using a mobile screening site (x-axis: number of patients screened per session, in bands from 10–14 to 30–34; y-axis: percentage of screeners). (B) Number of patients' images which are graded per session in a stand alone session (x-axis: number of patients graded per session, in bands from 20–34 to 65–75; y-axis: percentage of graders).

There is a great deal of information and advice available relating to the training and the expected level of competence needed by all individuals involved in the screening of people with diabetes for diabetic retinopathy.
However, there is little about the time needed or expected to undertake this activity.2–4 This lack of information on what a standard service should be asking its screeners and graders to do has led to much discussion at several of the recent annual BARS conferences.

The limitations of this study are the relatively low response rate and the fact that not all screening services were represented, with some centres potentially having more than one individual replying on their behalf. The study was not designed to clarify which screening service the individual completing the survey worked for; any further studies in this area should determine this and use an average of the responses from each centre to give a better representation, and should also obtain replies from all diabetic retinopathy screening services to see if these results are consistent across the UK. As shown in Figure 1, the data for the number of patients screened or graded were not normally distributed; without the extreme values they may have shown a more consistent pattern of numbers of individuals being screened and graded. The anonymous data from the different screening and grading centres also made it difficult to determine why there were different lengths of screening sessions, travelling times and stand alone grading being undertaken.

This study does, however, suggest, from the people who did reply, that there are relatively consistent numbers of individuals whom one might expect to be screened in a screening session, and for the length of that session, particularly if the relatively small percentage of higher numbers of patients being screened is excluded. However, there appears to be a wider variation in the time allocated/taken to grade the images obtained when undertaken as a stand alone activity. On average, nearly 12 patients' images were graded per hour by those people who undertake stand alone grading – i.e.
an average of 39.3 images/patients graded, with 3 hours 14 minutes for the average half-day grading session, which equates to one person's images being graded every 5 minutes. However, this varied between centres/individuals from almost 2.5 minutes per patient (i.e. 75 per session) to 10 minutes per patient screened (i.e. 20 per session). While we accept that some people are quicker than others in their ability to grade retinal images, and that some of the software used to obtain the images may be a little slower than others, the importance of accurate grading – however long it takes – for the subsequent management of these people with diabetes goes without saying.

The variability we found in the time allocated to undertake this activity reinforces the importance of a robust quality assurance programme. Such a programme would make sure that this grading is not being rushed due to the workload being given to graders or slowed down by software issues, and would at the same time ensure that the grading still meets national quality standards. We would therefore strongly recommend anyone involved in the management and quality assurance of a retinal screening scheme to look at this area of their service carefully – not just at individual graders but also at the software being used to extract, view and save the images and their reports. This is because, if that explained the difference in the number of patients' eyes graded, then efficiency may be improved without any reduction in quality.

Conflict of interest statement
There are no conflicts of interest.

References
1. www.doh.gov.uk/NSF/diabetes/.
2. www.retinalscreening.nhs.uk.
3. www.eyescreening.org.uk/.
4. www.cityandguilds.com/documents/7359-diabetic-retinopathy-screening.

Figure 2. Reported time taken in minutes for a morning session and an afternoon session of retinal screening using digital photography (x-axis: time in minutes, in bands from 120–149 to 270–299; y-axis: percentage of screeners).

Key points
• The mean number of patients screened on a fixed site was 16.4 patients per session, or 12.7 if this included grading at the same time
• The mean number of patients screened on a mobile site was 17.1 patients per session, or 14.8 if this included grading at the same time
• There is a wide range for the number of patients' images being graded as a stand alone activity by UK graders
work_cyvmtrhykfcqjavzbotgr4yfdy ----

Zurich Open Repository and Archive, University of Zurich, Main Library, Strickhofstrasse 39, CH-8057 Zurich, www.zora.uzh.ch
Year: 2016

Application of a colored multiexposure high dynamic range technique to radiographic imaging: an experimental trial to show feasibility
Eppenberger, Patrick; Marcon, Magda; Ho, Michael; Del Grande, Filippo; Frauenfelder, Thomas; Andreisek, Gustav

Abstract: PURPOSE: The aim of this study was to evaluate the feasibility of applying the high dynamic range (HDR) technique to radiographic imaging to expand the dynamic range of conventional radiographic images using a colored multiexposure approach. MATERIAL AND METHODS: An appropriate study object was repeatedly imaged using a range of different imaging parameters on a standard clinical x-ray unit. An underexposed image (acquired at 80 keV), an intermediate exposed image (110 keV), and an overexposed image (140 keV) were chosen and combined into a 32-bit colored HDR image. To display the resulting HDR image on a regular color display with typically 8 bits per channel, the Reinhard tone mapping algorithm was applied. The source images and the resulting HDR image were qualitatively evaluated by 5 independent radiologists with regard to the visibility of the different anatomic structures using a Likert scale (1, not visible, to 5, excellent visibility). Data were presented descriptively. RESULTS: High dynamic range postprocessing was possible without malalignment or image distortion. Application of the Reinhard algorithm did not cause visible artifacts. Overall, postprocessing time was 7 minutes 10 seconds for the whole process. Visibility of anatomic structures was rated between 1 and 5, depending on the anatomic structure of interest. Most authors rated the HDR image best, ahead of the individual source images.
CONCLUSIONS: This experimental trial showed the feasibility of applying the HDR technique to radiographic imaging to expand the dynamic range of conventional radiographic images using a colored multiexposure approach.

DOI: https://doi.org/10.1097/RCT.0000000000000413
Posted at the Zurich Open Repository and Archive, University of Zurich. ZORA URL: https://doi.org/10.5167/uzh-123808
Journal Article, Published Version
Originally published at: Eppenberger, Patrick; Marcon, Magda; Ho, Michael; Del Grande, Filippo; Frauenfelder, Thomas; Andreisek, Gustav (2016). Application of a colored multiexposure high dynamic range technique to radiographic imaging: an experimental trial to show feasibility. Journal of Computer Assisted Tomography, 40(4):658-662. DOI: https://doi.org/10.1097/RCT.0000000000000413

Application of a Colored Multiexposure High Dynamic Range Technique to Radiographic Imaging: An Experimental Trial to Show Feasibility
Patrick Eppenberger, MD,*† Magda Marcon, MD,† Michael Ho, MD,† Filippo Del Grande, MD,‡ Thomas Frauenfelder, MD, MAS,† and Gustav Andreisek, MD, MBA†

Purpose: The aim of this study was to evaluate the feasibility of applying the high dynamic range (HDR) technique to radiographic imaging to expand the dynamic range of conventional radiographic images using a colored multiexposure approach.
Material and Methods: An appropriate study object was repeatedly imaged using a range of different imaging parameters using a standard clinical x-ray unit. An underexposed image (acquired at 80 keV), an intermediate exposed image (110 keV), and an overexposed image (140 keV) were chosen and combined into a 32-bit colored HDR image. To display the resulting HDR image on a regular color display with typically 8 bits per channel, the Reinhard tone mapping algorithm was applied.
The source images and the resulting HDR image were qualitatively evaluated by 5 independent radiologists with regard to the visibility of the different anatomic structures using a Likert scale (1, not visible, to 5, excellent visibility). Data were presented descriptively.
Results: High dynamic range postprocessing was possible without malalignment or image distortion. Application of the Reinhard algorithm did not cause visible artifacts. Overall, postprocessing time was 7 minutes 10 seconds for the whole process. Visibility of anatomic structure was rated between 1 and 5, depending on the anatomic structure of interest. Most authors rated the HDR image best, ahead of the individual source images.
Conclusions: This experimental trial showed the feasibility of applying the HDR technique to radiographic imaging to expand the dynamic range of conventional radiographic images using a colored multiexposure approach.
Key Words: high dynamic range, radiography, exposure
(J Comput Assist Tomogr 2016;00: 00–00)

Advances in Knowledge
• High dynamic range (HDR) radiographic imaging is feasible.
• Image post-processing was stable without artifacts.
• For most readers, the colored multi-exposure HDR image showed the anatomy best.
Implication for Patient Care
• The colored multi-exposure HDR technique could potentially be applied to standard radiographic and computed tomography imaging in humans.
Summary Statement (Discussion section)
• This experimental trial showed the feasibility of the application of HDR imaging for radiographs.

From digital photography, the concept of achieving images with a higher dynamic range by combining images from multiple exposures, referred to as HDR (high dynamic range) photography, is well known.1–4 Its advantages over standard photographs are good illumination and control of lighting even in difficult lighting situations, which results in more detail and vibrant colors throughout the whole image.
This concept of HDR could also potentially be used to enhance the dynamic range of radiographic images. In conventional radiography, contrast is generated by x-ray attenuation, mainly as a result of the photoelectric and Compton effects, depending on the atomic number and the physical density of the irradiated tissue as well as the energy of the radiation used. The photoelectric effect is typically observed below 100 keV, and radiation absorption is primarily associated with the atomic number of the tissue's material. The Compton effect is typically observed above 100 keV, and radiation absorption is primarily associated with tissue density. The hypothesis is that the dynamic range of radiographic images can be enhanced if the images contain attenuation information not only from a single keV but also from a broader range of absorption spectra below and above 100 keV. The human eye is, however, limited in the perception of high range grayscale images. Human observers are able to discriminate between 700 and 900 simultaneous shades of gray for the available luminance range of current medical displays under optimal conditions. Current monochromatic medical displays are typically capable of displaying between 256 and 1024 gray shades (equivalent to a "bit depth" of 8 to 10 bits).5 For colored images, the range is much broader and includes electromagnetic radiation from approximately 380 to 750 nm. The human visual system is therefore capable of differentiating up to 10 million different colors.6 The absorption maxima of the photoreceptor proteins in the human cone cells lie at approximately 426, 530, and 560 nm, which correspond to the blue, green, and red regions of the visible light spectrum. These colors are referred to as the primary colors. In the RGB model, which the display manufacturing industry has used for decades, every color pixel in a digital image is created through a combination of these 3 primary colors.
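The shade and color counts quoted in this passage are straightforward bit-depth arithmetic; a quick check (the helper name is ours):

```python
# Bit depth -> number of distinct intensity values (illustrative check).

def levels(bits: int) -> int:
    """Number of distinct values representable with the given bit depth."""
    return 2 ** bits

assert levels(8) == 256              # one 8-bit color channel (or 8-bit gray)
assert levels(10) == 1024            # 10-bit monochromatic medical display
assert levels(12) == 4096            # 12-bit grayscale radiograph
assert levels(8) ** 3 == 16_777_216  # 24-bit true color: ~16.7 million colors
```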
Each of these primary colors is often referred to as a "color channel." Current color displays are typically capable of displaying a range of 256 intensity values per channel, resulting in a total of 16.7 million different colors (equivalent to a "bit depth" of 24 bits, also referred to as a 24-bit true-color display).4,7–10 Thus, because of the additional contained information on tissue composition, it seems a reasonable hypothesis that radiologists would prefer colored multiexposure HDR images over conventional radiographic images.

The aim of this experimental trial was to evaluate the feasibility of applying the HDR technique to radiographic imaging to expand the dynamic range of conventional radiographic images using a colored multiexposure approach.

From the *Polyclinic Crossline, Medical Services of the City of Zurich; †Institute for Diagnostic and Interventional Radiology, University Hospital Zurich, University of Zurich, Switzerland; and ‡Department of Radiology, Ospedale Regionale di Lugano, Lugano, Switzerland. Received for publication November 6, 2015; accepted January 31, 2016. Correspondence to: Patrick Eppenberger, MD, Institute for Diagnostic and Interventional Radiology, University Hospital Zurich, University of Zurich, Ramistrasse 100, 8091 Zurich, Switzerland (e-mail: patrick.eppenberger@gmx.ch). The authors declare no conflict of interest. Copyright © 2016 Wolters Kluwer Health, Inc. All rights reserved. DOI: 10.1097/RCT.0000000000000413 TECHNICAL NOTE J Comput Assist Tomogr • Volume 00, Number 00, Month 2016 www.jcat.org

MATERIALS AND METHODS
Study Object
No institutional review board approval was necessary for this prospective experimental trial.
As study object, a fresh fish (red snapper, Lutjanus malabaricus) was chosen. The fish provided tissues with a range of densities that typically can be encountered in diagnostic radiographic imaging, from air-filled spaces up to calcifications.11 No animal was harmed for this study. The fish was bought by one of the authors (P.E.) in a regular grocery store (Globus, Zurich, Switzerland). There was no financial support from the industry for this study. A patent application was filed with the European Patent Office (Munich, Germany).

Image Acquisition and Selection
The study object was repeatedly exposed using a range of different imaging parameters (Table 1, Figs. 1A-I) using a standard clinical x-ray unit (Polydoros LX-80; Siemens Healthcare, Erlangen, Germany). Images were automatically stored in the hospital's picture archiving and communication system (PACS; Impax 6.0; Agfa-Gevaert N.V., Mortsel, Belgium) using a 12-bit grayscale Digital Imaging and Communications in Medicine (DICOM) format. From these series of images, 3 images were chosen to generate a single HDR image. Two authors (P.E., a radiology resident who was originally trained as an industrial graphic designer; G.A., fellowship-trained, board-certified radiologist with 11 years of experience) selected the images in consensus, based on the requirement that the resulting HDR image should contain absorption information from the photoelectric and Compton effects dominating radiation absorption below and above 100 keV, respectively. Finally, an underexposed image acquired at 80 keV and 5.07 mAs, an intermediate exposed image acquired at 110 keV and 3.32 mAs, and an overexposed image acquired at 140 keV and 1.75 mAs were chosen (Table 1).

Image Postprocessing
Image postprocessing was performed by one author (P.E.) using commercially available software (Adobe Photoshop CS5; Adobe Systems Inc, San José, Calif) running on a standard computer (Mac Pro Quad-Core 2.8; Apple Inc, Cupertino, Calif).
The 3 original images, acquired at 80, 110, and 140 keV, were imported into the software and attributed to the 3 standard color channels, red, green, and blue, respectively. A color depth of 16 bits per channel was used, and postprocessed images were stored as individual colored 16-bit per channel Tagged-Image File Format files. The 3 Tagged-Image File Format images were then combined to a single HDR image using the software's dedicated algorithm. The final image had a color depth of 32 bits per channel and was stored as an individual HDR image file, as supported by most HDR image editors including Adobe Photoshop CS5 (4 bytes per pixel, a 1-byte mantissa for each RGB channel, and a shared 1-byte exponent). The latter format allows preservation of all contained image information per pixel (full dynamic range).

TABLE 1. Acquisition of Source Images for Colored Multiexposure HDR Technique (mAs)
80 keV: 1.68 (underexposed) (Fig. 1A); 3.27 (Fig. 1B); 5.07 (used for HDR) (Fig. 1C)
110 keV: 1.71 (Fig. 1D); 3.32 (used for HDR) (Fig. 1E); 5.11 (Fig. 1F)
140 keV: 1.76 (used for HDR) (Fig. 1G); 3.37 (Fig. 1H); 5.17 (overexposed) (Fig. 1I)

FIGURE 1. Series of source images for colored multiexposure HDR technique using a range of different imaging parameters: (A) 80 keV/1.69 mAs, (B) 80 keV/3.27 mAs, (C) 80 keV/5.07 mAs, (D) 110 keV/1.71 mAs, (E) 110 keV/3.32 mAs, (F) 110 keV/5.11 mAs, (G) 140 keV/1.76 mAs, (H) 140 keV/3.37 mAs, and (I) 140 keV/5.17 mAs. (Please also refer to Table 1.)

To display the HDR image on a regular 24-bit true-color display with typically 8 bits per channel, a tone-mapping algorithm had to be applied. The Reinhard algorithm was chosen because it is known from the literature that this algorithm is usually well suited for nonpictorial colored images and that it generates only
little artifacts. The function below represents the local dodging-and-burning operator as proposed by Reinhard et al2,4,10,12:

Ld(x, y) = L(x, y) / (1 + V1(x, y, sm(x, y)))

The luminance of a dark pixel in a relatively bright region will satisfy L < V1, so the operator will decrease the display luminance Ld, thereby increasing the contrast at that pixel in analogy to photographic "dodging." Similarly, a pixel in a relatively dark region will be compressed less and is thus "burned." In either case, the pixel's contrast relative to the surrounding area is increased10 (Fig. 2). Overall, the postprocessing resulted in an 8-bit color image file that was stored using the DICOM format from within the dedicated image processing software (Adobe Photoshop CS5) (Fig. 3).

Image Evaluation
Images were qualitatively evaluated by 2 authors (P.E. and G.A.) in consensus after each postprocessing step, and all artifacts, postprocessing problems, and the file size were noted. The time for postprocessing was noted. The final 8-bit color image was then evaluated along with the source images by 5 radiologists with different levels of experience (1 third-year resident [M.H.], 1 board-certified radiologist subspecialized in breast imaging [M.M.], 1 board-certified radiologist subspecialized in thoracic imaging [T.F.], 1 board-certified radiologist subspecialized in musculoskeletal imaging [G.A.], and 1 general radiologist who is chairman of a large radiology department [F.G.]) with regard to the visibility of the different anatomic structures of the fish using a 5-point grading system (Likert scale) (Fig.
4): 1, not visible (no diagnostic information can be obtained from the images); 2, poor visibility (image quality is heavily degraded due to low contrast and/or artifacts); 3, moderate visibility (image quality is degraded due to low contrast and/or artifacts); 4, good visibility (good contrast and/or slight artifacts); 5, excellent visibility (good contrast and no artifacts). In addition, all radiologists were asked to provide a statement, based on their personal experience, on the possible strengths of the HDR image. Image analysis was performed independently, and readers were blinded to the acquisition parameters of the source images. Descriptive data are presented; no formal statistical analysis was performed because this is only an experimental trial in a single study subject to prove the concept of HDR imaging for radiographs.

RESULTS
The series of images with various imaging parameters could be acquired successfully. After selection of 3 images, HDR postprocessing was possible as described previously, without malalignment or image distortion. Application of the Reinhard algorithm for reducing the images' color depth did not cause image distortion or other visible artifacts. File size (9.55 MB) was small enough to allow smooth postprocessing and data transfer as well as storage to the PACS. Overall postprocessing time was 7 minutes 10 seconds for the whole process. The first step (loading images, applying color channels, and storing them) took 4 minutes 34 seconds, the second step (calculation of the HDR image) took 1 minute 24 seconds, and the final step (applying the Reinhard algorithm and storing the final HDR image) took 1 minute 12 seconds.

FIGURE 2. Process overview: 3 images acquired at 80, 110, and 140 keV, respectively, were mapped to the RGB color channels and combined to a colored HDR image with a much broader dynamic range, which additionally reveals information about tissue properties.
Thus, this process allows a differentiated representation of the photoelectric effect (atomic number) and the Compton effect (tissue density) on a single colored image, in analogy to the visible light spectrum.

FIGURE 3. Colored multiexposure HDR image of a red snapper. The image was generated from source 140-, 110-, and 80-keV radiographic images using dedicated HDR postprocessing.

FIGURE 4. Fish anatomy: 1, swim bladder; 2, otolith organs; 3, gills (with cartilaginous lamellae); 4, heart (1 ventricle, 1 atrium); 5, brain; 6, abdominal cavity (liver and viscera); 7, lateral-line organs; 8, osseous structures (including the skull and spine with upper and lower spinous processes); 9, fins (a, pectoral; b, pelvic; c, dorsal; d, anal; e, caudal); 10, musculature (a, epaxial; b, hypaxial).

Depending on the source images, some structures were already well appreciated on the 3 differently exposed source images (Table 2). Overall, visibility of anatomic structures was rated best by all authors on the colored HDR image, which contained image information from all 3 source images. However, some structures, such as the heart and the brain, were not visible or not well visible on either the HDR or the source images, which was likely because of the specific anatomy of a fish's heart and brain, both of which seem to be inherently difficult to delineate. One author stated that he could delineate the brain, but this was likely based on a different appreciation of the underlying fish anatomy.
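The channel-mapping and tone-mapping steps described in Materials and Methods can be sketched per pixel. This is a simplified illustration, not the authors' Photoshop workflow: it uses the global form of the Reinhard operator, Ld = L / (1 + L), rather than the local dodging-and-burning variant, a plain channel mean as the luminance proxy, and hypothetical function names:

```python
def reinhard(lum: float) -> float:
    """Global Reinhard operator: maps luminance L in [0, inf) to L/(1+L) in [0, 1)."""
    return lum / (1.0 + lum)

def hdr_pixel(p80: float, p110: float, p140: float) -> tuple:
    """Combine one pixel of the 80/110/140 keV exposures (attributed to the
    red/green/blue channels, respectively) and tone-map it to 8 bits per channel."""
    rgb = (p80, p110, p140)
    lum = sum(rgb) / 3.0         # simple luminance proxy
    if lum == 0.0:
        return (0, 0, 0)
    scale = reinhard(lum) / lum  # scaling all channels alike preserves hue
    return tuple(min(255, round(c * scale * 255)) for c in rgb)

# A pixel that is brighter on the 80 keV exposure maps toward red:
print(hdr_pixel(2.0, 1.0, 0.5))
```

With all three exposures equal the result stays gray; differences between the exposures show up as color, which is how the technique encodes the extra tissue information on a 24-bit display.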
In general, all radiologists stated that the delineation of soft tissue structures likely could benefit from the HDR technique, whereas dense structures can already be delineated well on a single (appropriately exposed) source image.

DISCUSSION
This experimental trial showed the feasibility of the application of HDR imaging for radiographs. The HDR technique was originally developed for digital photography, and its main advantage is to increase the dynamic range of an image beyond what is normally possible.

High dynamic range images are characterized by dense image information, and thus different image specifications are usually used compared with regular images. Because HDR images require a far larger range of values, they are commonly encoded in a floating-point number file format, instead of integer values, to represent the single color channels (eg, 0-255 in an 8-bit per pixel range for red, green, and blue). Floating-point numbers (also referred to as exponential notation) are encoded as a decimal number between 1 and 10 multiplied by a power of 10, such as 6.578 × 10^4, as opposed to integers (eg, 0-255 for an 8-bit range or 0-4095 for a 12-bit range). This allows the image to contain information that would otherwise even exceed a 32-bit integer range.

We considered it important that the final images could be stored in the widespread DICOM format. Conventional radiographic images are usually stored in a resolution of 2048 × 2048 pixels (4 megapixels) with 12-bit grayscale, corresponding to a dynamic range of 1 to 4096, independent of the energy at which they were acquired. The DICOM standard is based on Barten's model of the contrast sensitivity function, so that changes in digital values of the display represent equal perceptual steps in lightness based on threshold differences. The grayscale standard display function is defined for the luminance range from 0.05 to 4000 cd/m2.
The minimum luminance corresponds to the lowest practically useful luminance of cathode-ray-tube monitors, and the maximum exceeds the unattenuated luminance of very bright light-boxes used for interpreting x-ray mammography. For the available luminance range of current medical displays, and in optimal conditions, human observers are able to discriminate between 700 and 900 simultaneous shades of gray. Thus, the human eye is limited in the perception of high range grayscale images. This limitation, and the fact that the human visual system is otherwise capable of differentiating up to 10 million different colors, are the reasons why we believe that a previous report by Kanelovitch et al,13 who proposed a method to produce grayscale HDR mammograms, falls short, and why we were seeking colored HDR images.6 The range of electromagnetic radiation that can be detected by the human eye lies in the range of approximately 380 to 750 nm and corresponds to a perceived color range of violet through red. In human cone cells, there are 3 distinct photoreceptor proteins with absorption maxima at 426, 530, and ~560 nm. Their absorbance corresponds to (in fact, defines) the blue, green, and red regions of the visible light spectrum. On the basis of the trichromatic nature of the human eye, the standard solution adopted by industry is to use red, green, and blue as primary colors, using an additive color model. Thus, every color pixel in a digital image is created through a combination of the 3 primary colors: red, green, and blue. Each primary color is often referred to as a "color channel" and can have any range of intensity values specified by its bit depth. Overall, colored images can thus contain a much larger volume of information compared with grayscale images.
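The 32-bit HDR file layout mentioned in Image Postprocessing (a 1-byte mantissa per RGB channel plus a shared 1-byte exponent) is the shared-exponent scheme known from the Radiance RGBE format. A minimal sketch of the idea (function names are ours; real implementations add rounding and clamping):

```python
import math

def rgbe_encode(r: float, g: float, b: float) -> tuple:
    """Pack one float RGB pixel into 4 bytes: 3 mantissas and a shared exponent."""
    m = max(r, g, b)
    if m < 1e-32:
        return (0, 0, 0, 0)
    mantissa, exponent = math.frexp(m)  # m == mantissa * 2**exponent, mantissa in [0.5, 1)
    scale = mantissa * 256.0 / m        # mantissa byte of the brightest channel is >= 128
    return (int(r * scale), int(g * scale), int(b * scale), exponent + 128)

def rgbe_decode(rm: int, gm: int, bm: int, e: int) -> tuple:
    """Expand the 4-byte pixel back to float RGB."""
    if e == 0:
        return (0.0, 0.0, 0.0)
    f = math.ldexp(1.0, e - (128 + 8))  # undo the exponent bias and the 8-bit mantissa scale
    return (rm * f, gm * f, bm * f)

# Round trip: values survive to within the 8-bit mantissa precision.
print(rgbe_decode(*rgbe_encode(1000.0, 500.0, 250.0)))
```

The shared exponent is what lets 4 bytes per pixel span a dynamic range far beyond a 32-bit integer, at the cost of precision in channels much dimmer than the brightest one.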
The additionally applied tone-mapping algorithm in our approach allowed us to represent all the contained information on a regular color display with a dynamic range of 8 bits per channel, without the need for the windowing that is otherwise often necessary to evaluate radiographic images. Soft tissues, in accordance with the predominance of contained elements with a low atomic number, are thereby represented in the blue and green color spectrum, whereas mineralized structures such as bones, or scales in the case of our study object (red snapper, Lutjanus malabaricus), are represented in the yellow-to-red color spectrum.

Some limitations apply to this colored radiographic multiexposure HDR technique. First, summation effects remain unchanged and have to be taken into consideration in a similar manner as in conventional grayscale radiographic images. In theory, the HDR technique could also be applied to cross-sectional imaging techniques such as computed tomography (CT) at different keV levels. Currently, however, we are not aware of a study that has used HDR imaging in CT for medical applications, but our own investigations are planned. A few publications geared toward industrial CT applications suggest the use of HDR algorithms to expand the dynamic range, especially when technical objects consisting of dense materials (eg, metals) are scanned.14 Our study object had a fairly thin diameter of approximately 4.5 cm, which provided optimal conditions for our multiexposure approach. With larger object diameters, however, it should be expected that photoelectric radiation absorption will increase significantly, particularly in the source images acquired at the lowest energy (80 keV). This needs to be taken into account, especially if our approach is to be applied to CT imaging, for example, in humans. Second, another limitation is that many monitors have a limited dynamic range, which is inadequate to reproduce the full range of HDR images. Tone mapping addresses the problem of strong contrast reduction to the displayable range while preserving the image details and color appearance important to appreciate the original content.

TABLE 2. Image Evaluation Using a 5-Point Likert Scale (1, Not Visible, to 5, Excellent Visibility)

Structure | HDR (R1-R5) | 80 keV (R1-R5) | 110 keV (R1-R5) | 140 keV (R1-R5)
Swim bladder | 5 5 5 5 5 | 4 4 5 4 3 | 4 5 5 4 4 | 2 4 4 2 2
Otolith organ | 4 4 3 4 5 | 4 4 4 4 4 | 4 5 4 4 4 | 3 3 5 4 3
Gills (with cartilaginous lamellae) | 5 5 5 5 5 | 2 4 4 1 2 | 3 5 4 3 3 | 4 1 5 4 4
Heart (1 ventricle, 1 atrium) | 1 2 1 1 1 | 1 2 1 1 1 | 1 2 1 1 1 | 1 1 1 1 1
Brain | 1 2 3 1 1 | 1 1 4 1 1 | 1 1 4 1 1 | 1 3 3 1 1
Abdominal cavity (liver and viscera) | 5 5 5 5 5 | 3 4 4 4 3 | 4 4 4 4 4 | 3 2 4 2 2
Lateral-line organ | 5 5 5 5 5 | 2 3 5 3 2 | 3 3 4 2 3 | 4 3 3 3 4
Osseous structures (including skull and spine) | 5 5 5 4 5 | 3 4 5 3 3 | 4 4 s 4 4 | 4 3 4 4 4
Fins (pectoral, pelvic, dorsal, anal, caudal) | 5 5 5 5 5 | 4 5 4 4 4 | 4 4 5 4 4 | 3 1 4 3 3
Musculature (epaxial, hypaxial) | 3 4 5 3 3 | 2 3 4 2 2 | 3 3 3 3 3 | 2 4 3 1 1

R1 indicates third-year resident; R2, radiologist subspecialized in breast imaging; R3, radiologist subspecialized in thoracic imaging; R4, radiologist subspecialized in musculoskeletal imaging; R5, chairman.

Eppenberger et al. J Comput Assist Tomogr, Volume 00, Number 00, Month 2016. www.jcat.org. © 2016 Wolters Kluwer Health, Inc. All rights reserved.
Because the human visual system is more sensitive to relative rather than absolute luminance values (without such an adjustment, small signals would drown in neuronal noise, and large signals would saturate the system), the algorithm proposed by Reinhard et al10 was applied in our study, which mimics the physiology of the human visual system and is recommended in previous literature.12 Finally, to be able to use HDR imaging in clinical routine, full PACS integration of the postprocessing is necessary, if it is not already achieved in-line during image acquisition by, for example, batch processing through a software program. This could be achieved by the different PACS vendors through plug-ins or integrated scanner functionality, and it is important for reducing postprocessing time and for cost-efficient application of the HDR technique to CT examinations.

Potential clinical applications may include the detection of breast cancer, where slight differences in soft tissue density are present that could potentially be better visible in HDR mammographic images.13 Another potential application could be the detection of abnormal soft tissue density, which is typically seen before soft tissue calcification in crystal deposition diseases such as gout or chondrocalcinosis. In lung imaging, HDR images might improve detection of areas of abnormal lung density, as typically seen in lung cancer. Other future perspectives of the HDR technique beyond conventional radiography include potential applications in dual-energy CT, where the anatomy is typically imaged at 2 different voltages,15 as well as dual-energy x-ray absorptiometry.

In conclusion, this experimental trial showed the feasibility of applying the HDR technique to radiographic imaging to expand the dynamic range of conventional radiographic images using a colored multiexposure approach.

REFERENCES

1. Mann S, Picard RW.
Being “undigital” with digital cameras: extending dynamic range by combining differently exposed pictures. In: Proceedings of the IS&T 46th Annual Conference; May 1995; Scottsdale, AZ:422–428.
2. Pardo A, Sapiro G. Visualization of high dynamic range images. IEEE Trans Image Process. 2003;12:639–647.
3. Petschnigg G, Agrawala M, Hoppe H, et al. Digital photography with flash and no-flash image pairs. ACM Trans Graph. 2004;23:664–672.
4. Reinhard E, Stark M, Shirley P, et al. Photographic tone reproduction for digital images. ACM Trans Graph. 2002;21:267–276.
5. Kimpe T, Tuytschaever T. Increasing the number of gray shades in medical display systems—how much is enough? J Digit Imaging. 2007;20:422–432.
6. Judd DB, Wyszecki G. Color in Business, Science, and Industry. 3rd ed. New York, NY: Wiley; 1975.
7. Barten PGJ. Physical model for the contrast sensitivity of the human eye. Hum Vision Vis Process Digit Display. 1992;1666:57–72.
8. Indrajit IK, Verma BS. Monitor displays in radiology: part 1. Indian J Radiol Imaging. 2009;19:24–28.
9. Indrajit IK, Verma BS. Monitor displays in radiology: part 2. Indian J Radiol Imaging. 2009;19:94–98.
10. Reinhard E, Devlin K. Dynamic range reduction inspired by photoreceptor physiology. IEEE Trans Vis Comput Graph. 2005;11:13–24.
11. Johnston J, Fauber T. Essentials of Radiographic Physics and Imaging. St. Louis, MO: Elsevier; 2013:80–92.
12. Park SH, Montag ED. Evaluating tone mapping algorithms for rendering non-pictorial (scientific) high-dynamic-range images. J Vis Commun Image Represent. 2007;18:415–428.
13. Kanelovitch L, Itzchak Y, Rundstein A, et al. Biologically derived companding algorithm for high dynamic range mammography images. IEEE Trans Biomed Eng. 2013;60:2253–2261.
14. Chen P, Han Y, Pan J. High-dynamic-range CT reconstruction based on varying tube-voltage imaging. PLoS One. 2015;10:e0141789.
15. Karcaaltincaba M, Aktas A.
Dual-energy CT revisited with multidetector CT: review of principles and clinical applications. Diagn Interv Radiol. 2011;17:181–194.

Designing a Calculus Mobile

Tom Farmer

Tom Farmer (farmerta@muohio.edu) received his Ph.D. in mathematics at the University of Minnesota in 1976 and ever since then has served on the faculty at Miami University in Oxford, Ohio. In addition to gardening, reading, and digital photography, he continues to enjoy learning mathematics and how to teach it. The present paper grew out of discussions of the center of mass in an honors calculus class.

Problem: Design a mobile as in Figure 1. The parts are horizontal slices of a lamina bounded on the left and right by smooth curves x = f(y) and x = mf(y), where m > 1 is constant. Each part of the mobile is attached to the one above it by a single connector located along the graph of f, and the parts are intended to hang with their horizontal edges horizontal.

Figure 1.

This problem is connected with several topics from the latter part of first-year calculus, including the harmonic series, the center of mass of a lamina, and separable differential equations. The purpose of this paper is to show the connections and, in the end, to solve the problem.

VOL. 33, NO. 2, MARCH 2002 THE COLLEGE MATHEMATICS JOURNAL
Many calculus instructors have used a stack of blocks to illustrate the question of convergence or divergence of a series. Suppose n identical rectangular blocks are stacked as in Figure 2, with the top block extending half its length out from the second block, the second block extending one fourth its length out from the third block, then one sixth, then one eighth, and so on. As n gets large, is there an upper bound on the horizontal distance between the leading edge of the top block and the leading edge of the bottom block? The answer is no, of course. The horizontal distance grows without bound because the series ∑_{n=1}^∞ 1/(2n) diverges.

Figure 2.

An interesting, but somewhat hidden, aspect of the block-stacking demonstration is the fact that the blocks are positioned with a certain center of mass property. For example, the center of mass of the top two blocks (the two blocks being treated as one object) is directly above the edge of the third block. In general, the collective center of mass of the top n blocks lies directly above the edge of the next block for any n; the stack is extending out as far horizontally as possible without toppling over. In discussing this center of mass property, notice that, although the stack of blocks is three dimensional, we need only look at the two-dimensional side view shown in Figure 2. As a planar lamina with constant density 1, the object represented by the side view of the stack has the property that its center of mass (x̄, ȳ) has

x̄ = 1/2 + 1/4 + 1/6 + · · · + 1/(2n),

and this is exactly where the edge of the next rectangle is positioned. This formula for x̄ is easily proved by induction. In the induction step, we need to calculate the moment M_y of n + 1 rectangles about the y-axis. But, using the induction hypothesis, the top n rectangles, treated as one lamina, have moment (mass times distance) equal to n(1/2 + 1/4 + 1/6 + · · · + 1/(2n)).
The added rectangle at the bottom, with its center (1/2 + 1/4 + 1/6 + · · · + 1/(2n)) + 1/2 units from the y-axis, has moment given by this same numerical value since its mass is 1. Thus, the moment of the n + 1 rectangles is the sum

n(1/2 + 1/4 + 1/6 + · · · + 1/(2n)) + (1/2 + 1/4 + 1/6 + · · · + 1/(2n)) + 1/2
= (n + 1)(1/2 + 1/4 + 1/6 + · · · + 1/(2n)) + 1/2
= (n + 1)(1/2 + 1/4 + 1/6 + · · · + 1/(2n) + 1/(2(n + 1))).

It follows that x̄ = M_y/M = 1/2 + 1/4 + 1/6 + · · · + 1/(2(n + 1)), as desired.

A good way to model physically the kinds of problems we wish to consider is to think of a hanging mobile rather than a stack of blocks because then the center of mass property is more apparent. Using a thin but rigid material such as cardboard or foam board, we cut identical rectangles of unit length and hang them in a chain as in Figure 3. The group of n rectangles at the bottom of the chain (for each n) is connected to the lower left corner of the rectangle above by a single connector that is directly above the center of mass of the group. With the connector at this point the group hangs with its edges horizontal and vertical.

Figure 3.

One of the themes of first-year calculus is that regions with curved boundaries can be approximated by unions of rectangles; how about reversing the idea? Imagine that for some large n we construct a mobile consisting of n rectangles of length 1 and width 1/n. We position the mobile in a coordinate system with the upper left hand corner of the top rectangle at 1 on the y-axis and the lower left hand corner of the bottom rectangle at −(1/2 + 1/4 + 1/6 + · · · + 1/(2(n − 1))) on the negative x-axis as in Figure 4.

Figure 4.

Since
n is large, this lamina is suggestive of one that is bounded between smooth curves. We could ask what happens as n approaches infinity in this picture but, instead, we just want to motivate the idea of using a lamina with curved boundaries and a center of mass property consistent with what was used for a union of rectangles (see Figure 5). Thus, we seek a lamina bounded on the left and right by x = f(y) and x = f(y) + 1, where f is a differentiable function on (0, 1), continuous at 1, and with f(1) = 0. The appropriate assumption on the center of mass is that if we make a horizontal cut at y = t, for any t in (0, 1], then the part of the lamina that lies below this line should have center of mass (x̄, ȳ) with x̄ = f(t). What is f? We find it as follows:

f(t) = x̄ = M_y/M = [∫_0^t ((f(y) + f(y) + 1)/2)(1) dy] / [∫_0^t (1) dy] = [∫_0^t (f(y) + 1/2) dy] / t.

Multiplying both sides by t and differentiating with respect to t yields

f(t) + t f′(t) = f(t) + 1/2.

Thus, f′(t) = 1/(2t) and f(t) = (1/2) ln t + C = (1/2) ln t, since f(1) = 0. Given the connection of this problem with the series ∑_{n=1}^∞ 1/(2n), our formula for f is not surprising. However, it doesn't seem to have been easily predictable either.

The region bounded between the graphs of x = f(y) = (1/2) ln y and x = g(y) = (1/2) ln y + 1 (y ∈ (0, 1]) causes a problem when it comes to constructing a physical model: it is unbounded. Of course, we could cut the tail off, but the resulting lamina may not be convincing as a representation of the center of mass property that was promised. This drawback motivates the problem, stated at the outset, in which we require a lamina bounded between the graphs of x = f(y) and x = g(y) = mf(y), with m > 1 and f(0) = g(0) = 0. In order to set up and solve a differential equation satisfied by f, we assume that f is non-negative and twice differentiable on (0, 1) with f′ positive.
We continue to require the center of mass property just as in Figure 5: for any horizontal cut at y = t, with t in (0, 1], the part of the lamina that lies below this line should have center of mass (x̄, ȳ) with x̄ = f(t).

Figure 5.

Since we require x̄ = M_y/M = f(t), then

f(t) = [∫_0^t ((f(y) + mf(y))/2)(mf(y) − f(y)) dy] / [∫_0^t (mf(y) − f(y)) dy] = [(1 + m)/2] [∫_0^t (f(y))^2 dy] / [∫_0^t f(y) dy],

so

f(t) ∫_0^t f(y) dy = [(1 + m)/2] ∫_0^t (f(y))^2 dy.

Taking the derivative with respect to t on both sides,

f′(t) ∫_0^t f(y) dy + (f(t))^2 = [(1 + m)/2] (f(t))^2.

Finally, if we collect terms, isolate the remaining integral, and differentiate once more, then we obtain the differential equation

f(t) = [(m − 1)/2] [2f(t)(f′(t))^2 − (f(t))^2 f′′(t)] / (f′(t))^2

or, equivalently,

(m − 1) f(t) f′′(t) = (2m − 4)(f′(t))^2.    (1)

In order to solve this second-order nonlinear differential equation, we need a trick. Note that (1) can be written as

(m − 1) f′′(t)/f(t) = (2m − 4) (f′(t)/f(t))^2.    (2)

So let u = f′/f and then u′ = [f′′f − (f′)^2]/f^2 = f′′/f − u^2. In this way, (2) becomes (m − 1)(u′ + u^2) = (2m − 4)u^2, giving us the separable first-order equation

u′/u^2 = (m − 3)/(m − 1) = −Q.    (3)

The general solution of (3) is u = 1/(Qt + C) = f′(t)/f(t) (for t > 0). Integrating again yields f(t) = D(Qt + C)^{1/Q}, and the condition f(0) = 0 determines C = 0. Finally, D could be replaced by D/Q^{1/Q} to provide the simpler form f(t) = Dt^{1/Q}. The other boundary of the lamina is g(t) = mf(t) = mDt^{1/Q}, where m and Q are linked by (3) or, equivalently, m = (3 + Q)/(1 + Q). As an example, the lamina in Figure 1 was constructed using Q = 2 and m = 5/3, so the boundary curves are x = √y and x = (5/3)√y.
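As a quick check on the algebra (not part of the original article), the closed-form solution can be verified numerically: for f(t) = D t^(1/Q) with m = (3 + Q)/(1 + Q), the residual of equation (1) should vanish identically. The constants D, Q and the sample points below are arbitrary illustrative choices.

```python
# Numerical check that f(t) = D * t**(1/Q) solves the mobile equation (1):
#   (m - 1) f f'' = (2m - 4) (f')**2,   with m = (3 + Q)/(1 + Q).

def residual(t, D, Q):
    m = (3 + Q) / (1 + Q)                 # m and Q linked as in equation (3)
    p = 1.0 / Q
    f = D * t ** p                        # proposed solution
    f1 = D * p * t ** (p - 1)             # f'(t)
    f2 = D * p * (p - 1) * t ** (p - 2)   # f''(t)
    return (m - 1) * f * f2 - (2 * m - 4) * f1 ** 2

# equation (1) should hold for every t, D, Q, so all residuals are (numerically) zero
checks = [residual(t, D, Q) for t in (0.1, 0.5, 1.0)
          for D in (1.0, 2.5) for Q in (0.5, 1.0, 2.0)]
print(max(abs(r) for r in checks) < 1e-12)  # → True
```

Note that Q = 2 reproduces the Figure 1 example (m = 5/3, f(t) = D√t), and Q = 1 gives the linear case m = 2.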
Other examples of interest may include the cases (a) Q = 1, so m = 2 and the region is bounded by x = Dy and x = 2Dy; and (b) Q = 1/3, so m = 5/2 and the region is bounded by x = Dy^3 and x = (5/2)Dy^3.

Acknowledgment. The author would like to thank Fred Gass for his contributions to this paper.

Brittle Deformation of Solid and Granular Materials with Applications to Mechanics of Earthquakes and Faults

YEHUDA BEN-ZION and CHARLES SAMMIS

1. Introduction

Crustal fault zones are complex regions of localized deformation with fractured, cataclastic and pulverized rocks having evolving geometries and altered rheological properties relative to those of the host material. At some level of fracture density (damage) the cohesive matrix of the material is destroyed and the associated volume becomes granular. The occurrence of seismic ruptures, and healing phases in the interseismic periods, continually modify the damage and granularity. This produces evolution of the elasticity, permeability and geometry of the actively deforming regions. The evolution of fault zone structures leads, in turn, to changes in the properties of dynamic earthquake ruptures, seismic radiation, inter- and post-seismic deformation, and local seismicity patterns. Many fundamental aspects of brittle deformation in crustal rocks remain unsolved. Basic examples include: What are the proper metrics to characterize brittle rock damage and the transitions between damaged rock and granular material? What are the main similarities and differences between the dynamics of damaged rocks and granular media?
What are the characteristics of nonlinear stress–strain behavior of damaged rocks and how can they be modeled quantitatively? How much slip is localized on main rupture surfaces and how much is distributed in the bulk for various fault environments? The 16 papers in this volume provide recent theoretical and observational perspectives that address the above and related issues. Topics include damage rheology models, high-resolution measurements of nonlinear evolving elasticity, high-resolution experiments on frictional instabilities and dynamic ruptures, theoretical and observational results on different dynamic regimes of sheared solids and granular materials, effects of fluids and roughness on fault zone rheology, seismic radiation near fault kinks, and geological and laboratory characterizations of fault surfaces and damaged rocks.

RUBINSTEIN et al. present laboratory results on slip along a frictional interface between two blocks loaded by shear at some height from the interface. Measurements of the evolving true contact area and loadings show sequences of progressively larger arrested rapid slip events and slow fronts. When these traverse the entire surface, dynamic system-size events occur. The observations are accompanied by theoretical analysis using a block-spring model and a discussion of the results in the context of natural earthquakes. ARIAS et al. provide analytical results for the singular static elastic field near a kink in a mode II shear crack. The solutions include power law functions with real exponents that depend on the angle of the kink, the coefficient of static friction and the difference of shear stress on the opposite sides of the corner. Differences of friction or loads across the corner can lead to tendencies to open the kink or close it more tightly. BHAT et al. present a generalized version of a micromechanical damage model based on tensile wing cracks that are nucleated and grow from a specified distribution of initial flaws. The model can explain observed nonlinearities in triaxial laboratory experiments with Westerly granite, and the dependence of strength on loading rate if the polymineralic nature of the granite is accounted for. HAMIEL et al. discuss a second-order strain energy function of a continuum damage model that includes, in addition to the two regular terms of Hookean elasticity, a third non-analytic term. The latter accounts for a nonlinear stress–strain relation, with abrupt changes of the effective elastic moduli upon stress reversal from compression to tension, along with damage- and stress-induced anisotropy. The model is compared with the third-order Murnaghan framework in the context of laboratory results associated with Westerly granite. TenCate reviews nonlinear resonance effects that characterize numerous geomaterials. Application of loadings above a certain threshold, which depends on the material and existing damage, leads to damage increase. In the absence of such loadings, the effective elastic properties recover with time following a log(t) functional form. BEN-ZION et al. provide analytical mean-field results on different dynamic regimes of sheared solids and granular materials based on a model with long range interaction, evolving threshold dynamics and heterogeneities. The results are summarized in a phase diagram spanned by three tuning parameters. The basic dynamic regimes, seen in the response of both solid and granular materials to slow shear loading, are scale-invariant behavior, quasi-periodicity of system-size events, and long-term mode-switching between these two types of response. Changes of cohesion lead to transitions between solid and granular states of material. HAYMAN et al. perform laboratory experiments with photoelasticity on stick–slip events in granular material. At relatively low packing of the material the frequency–size distribution of slip events is approximately power law, while at relatively high packing it is approximately exponential. Increasing packing fraction leads to more periodic behavior, but in all cases there is long-term switching between quasi-periodic and aperiodic responses. The aperiodic events involve major reorganizations of the force chains between particles. KRIM et al. analyze stick–slip events, steady sliding and inertial oscillations in laboratory experiments with a solid in contact with 2D photoelastic disks that are either fixed in a lattice (granular solid) or unconstrained (granular bed). The observed frictional properties depend on the form of contact, sliding speed, and applied vibrations. The effect of packing disorder in the case of the granular bed appears similar to the effect of vibration-induced disorder in the case of the granular solid. MAIR and ABE use a 3D discrete element model to simulate the microstructural evolution of fault gouge. Each grain is composed of several thousand spherical particles connected by breakable elastic bonds. Depending on the boundary conditions, their model produces two types of particle comminution: grain splitting, which leads to a power law grain size distribution, and abrasion, which leads to a bimodal distribution. Splitting is favored at higher normal stress and rougher surfaces, while abrasion is favored by the opposite conditions. GOREN et al. study the coupled mechanical behavior of granular material and fluid. For undrained conditions the behavior is elastic-like, while for well-drained conditions it is viscous-like. In addition to the known cases of liquefaction with high pore pressure and undrained conditions, liquefaction is shown to also occur for drained and initially over-compacted conditions. During liquefaction events, the stress chains are destroyed, the load is supported only by the pore fluid, and the shear resistance vanishes. SAMMIS et al. show that the coefficient of friction of a layer of submicron particles can be strongly influenced by the properties of a few monolayers of adsorbed water. At low slip speeds the particles have time to extrude the adsorbed layer and friction is controlled by rock-on-rock contacts. At intermediate slip speeds there is not sufficient time to extrude the layer and friction drops. At high speeds sufficient heat is generated to vaporize the layer and friction returns to its rock-on-rock value. This explanation is shown to be in rough quantitative agreement with recent laboratory data. ANGHELUTA et al. use a simple model of a viscoelastic fault zone sandwiched between rigid wall rocks to explore the effect of surface roughness on the effective rheology of the layer. Analytical and numerical results indicate that the main effect of wall roughness is to increase the effective viscosity of the system. BISTACCHI et al. analyze the roughness of slip surfaces exhumed from about 10 km depth in the strike–slip Gole Larghe fault zone in Italy. Using LIDAR and digital photography, the roughness is shown to be self-affine over 3–5 orders of magnitude. Results at scales smaller than the net fault slip are anisotropic, implying evolution of roughness with slip. Differences of results for shallow and deeper faults are interpreted to reflect differences between weakening/localization and induration/delocalization processes. SMITH et al. present detailed microstructural observations of slip zones in carbonates (limestone) associated with the Tre Monti normal fault in Italy. The principal slip zone has a 2–10 mm thick ultracataclasite immediately below the slip surface, with internal layering, calcite veins, fluidization structures and grains warped by calcite likely produced during seismic events, and overprinting foliation likely produced during interseismic periods. In the final two papers, HEESAKKERS et al. document the structure of the Pretorius fault at seismogenic depths of about 3 km where it intersects a deep mine in South Africa. Reactivation by a small mining-induced event caused slip on ancient and healed cataclasite localizations. Laboratory measurements of the strength of fault zone rocks and host rocks indicate that reactivation was due to the strong contrast in material properties between the cataclasites and the host rock.

To conclude, earthquakes and fault mechanics involve brittle deformation of solids and granular materials, with a wide range of phenomena that operate over broad ranges of space and time scales. As reflected in the diverse papers collected here, insights into this rich field can come from many approaches, ranging from statistical physics, structural geology and rock mechanics at large scales, to elasticity and non-linear continuum mechanics at intermediate scales, and fracture mechanics, granular mechanics, and surface physics at small scales.

Acknowledgments

We thank the authors of the papers for their contributions and the referees for critical reviews that improved the scientific quality of the volume. The latter include Jean-Paul Ampuero, Gary Axen, Bob Behringer, Eran Bouchbinder, Brett Carpenter, Emily Brodsky, Fred Chester, Karin Dahmen, Karen Daniels, Steve Day, Eric Dunham, Yuri Fialko, Jay Fineberg, Yariv Hamiel, Nick Hayman, Karen Mair, Mike Marder, Julia Morgan, Tom Mitchell, Stefan Nielsen, Hiroyuki Noda, Ze'ev Reches, Francois Renard, Amir Sagy, Norm Sleep, Heather Savage, Chris Scholz, Alexandre Schubnel, Leo Silbert, Steven Smith, Amy Rechenmacher, Thierry Reuschle and Renaud Toussaint.

Department of Earth Sciences, University of Southern California, Los Angeles, CA 90089-0740, USA. E-mail: benzion@usc.edu

Pure Appl. Geophys. 168 (2011), 2147–2149. © 2011 Springer Basel AG. DOI 10.1007/s00024-011-0418-8
(Published online October 1, 2011)

Clinical, histologic and electron microscopic findings after injection of a calcium hydroxylapatite filler

Ellen S Marmur, Robert Phelps & David J Goldberg

Introduction

Soft tissue augmentation has become an enormously popular procedure over the past two decades. As a result, increasing numbers of filler agents have become available. Calcium hydroxylapatite (CaHa) injections represent one of these promising new soft tissue fillers. Fillers are generally classified into four major types: synthetic, xenogeneic, homogeneic, and autogeneic.1 Synthetic fillers include silicone, poly-methylmethacrylate (PMMA) beads and now CaHa (Radiesse; Bioform, Franksville, WI, USA). Xenograft fillers (donor and recipient from different species) include bovine collagen and hyaluronic acid derivatives. Homogeneic fillers (donor and recipient from the same species) include those agents derived from accredited tissue bank human cadaveric tissue. Autogeneic materials (donor and recipient from the same individual) include autologous fat, and autologous collagen and/or dermal fibroblasts. An ideal filler is one which is biocompatible with human tissue, inert, durable, easy to inject, and does not require over- or undercorrection.2 In order to understand the biologic response of human skin to CaHa, we performed a pilot study evaluating the clinical, histologic and electron microscopic ultrastructural changes seen at 1 and 6 months after human skin injection.3,4

Materials and methods

Three female subjects were enrolled in the study after signing informed consent.
Exclusion criteria included prior soft tissue filler implant injection into the treatment sites, active inflammation, pregnancy, and immunodeficiency disorders and/or medication that might obscure the inflammatory response. Each of the three subjects was injected in the postauricular area with 0.1 cc of CaHa gel. The CaHa gel consisted of calcium and phosphate ions in a gel-based aqueous formulation of sodium carboxymethyl cellulose, glycerin, and high purity water. The post- injection site in the postauricular area initially appeared as a 1-cm dermal nodule. This site was measured and photographed to ensure accurate placement of the tissue biopsies at follow-up visits. Authors: Ellen S Marmur1,2 Robert Phelps2 David J Goldberg1,2 1Skin Laser & Surgery Specialists of NY/NJ 2Department of Dermatology, Mount Sinai School of Medicine, NY, NY, USA Accepted 5 January 2005 Keywords: dermal filler – calcium hydroxylapatite BACKGROUND: Calcium hydroxylapa- tite (CaHa) is one of many newly available soft tissue fillers. OBJECTIVE: We have, in this pilot study, evaluated the clinical, histo- logic and electron microscopic ultra- structural changes seen with CaHa at 1 and 6 months after skin injection. METHODS: Each of the three subjects was injected in the postauricular area with 0.1 cc of CaHa gel. A 3-mm punch tissue biopsy was taken at 1 and 6 months post-injection. Biopsies were analyzed by histopathology and electron microscopy. Clinical results after injection of the nasolabial folds were also evaluated. RESULTS: CaHa particles were found to persist at 6 months with evidence of new collagen formation being seen. Patients still showed clinical improvement at this time. CONCLUSION: This study is the first in vivo ultrastructural analysis of the biologic response to CaHa in human skin. CaHa shows clinical, histologic and electron microscopic evidence of persistence at 6 months. 
J Cosmet Laser Ther 2004; 6: 223–226. Correspondence: David J Goldberg, MD, Skin Laser & Surgery Specialists of NY/NJ, 20 Prospect Ave., Suite 702, Hackensack, NJ 07601, USA. © J Cosmet Laser Ther. All rights reserved. ISSN 1476-4172. DOI: 10.1080/147641704100003048. Original Research.

Each subject was also injected in the bilateral nasolabial folds with 0.5 cc of CaHa gel to each fold. Punch tissue biopsies of 3 mm were taken from the postauricular site at 1 and 6 months after the injections. The tissue specimen was then bisected so that one piece was placed in formalin and the other in glutaraldehyde. Biopsies were analyzed by histopathology and electron microscopy for inflammatory cell reaction, fibrosis, ossification, and/or granuloma formation. Clinical results were evaluated by two non-treating physician observers by way of digital photography. Subjects at 6 months were asked if they were very satisfied, moderately satisfied, minimally satisfied or unsatisfied with their clinical results. Objective independent 6-month digital photograph improvement was categorized as significant, moderate, mild or none.

Results

All three treated subjects reported that they were ‘very satisfied’ with the clinical results at 6 months after nasolabial fold injection. The treated areas were soft and natural to feel. No patients reported nodules, redness, bruising, extrusion of gel material, or infection. Both physician observers noted moderate to significant clinical improvement in the nasolabial folds of each treatment patient (Figures 1 and 2). Standard light microscopy of tissue specimens at 1 month post-injection show microspherules of CaHa gel with scant or no inflammatory cell response or fibrosis (Figures 3 and 4).
Figure 1 Pre-injection clinical photograph of nasolabial folds.
Figure 2 Six months post-injection clinical photograph of nasolabial folds.
Figure 3 Thick section light microscopy at 1 month post-injection showing a microspherule engulfed by a histiocyte.
Figure 4 Light microscopic section at 1 month post-injection showing microspherules at the dermal subcuticular junction and a slight increase in histiocytes (arrows).

The CaHa microspherules were distributed freely in the junction between the dermis and subcutis. They appear as smooth, slightly irregular, small pink spherules, measuring 35 µm on average. The minimal early inflammatory response is largely composed of histiocytes. Electron microscopic sections of biopsies taken 1 month after treatment show findings consistent with those of light microscopy (Figure 5). The CaHa microspherules are scattered in the dermal/subcutaneous junction surrounded by pre-existing Type I collagen and scant inflammation. Light microscopic sections at 6 months showed tight aggregates of microspherules with a fibroelastic replacement of the aqueous gel. Multinucleated giant cells (arrow) are seen surrounding each microspherule (Figure 6). No granuloma formation, ossification, or foreign body reactions are evident. The spherules at 6 months are no longer regular and smooth as in the 1-month specimens. However, the CaHa spherules show no tissue migration, remaining well located in the dermal/subcutaneous junction. Light microscopic sections at 6 months also showed microspherules surrounded by thick collagen and histiocytes (arrows) (Figure 7). Electron microscopic sections at 6 months reveal intracellular and extracellular material consistent with calcium particles.
Microspherules appear to be engulfed by tissue macrophages (Figures 8 and 9).

Figure 5 Electron microscopic sections at 1 month showing microspherules and a slight increase in histiocytes.
Figure 6 Light microscopy at 6 months showing tight aggregates of microspherules with a fibroelastic replacement of the aqueous gel and multinucleated giant cells (arrow) surrounding each microspherule.
Figure 7 Thick section light microscopy at 6 months showing microspherules surrounded by thick collagen and histiocytes (arrows).
Figure 8 Electron microscopic section at 6 months showing a microspherule engulfed by a tissue macrophage.
Figure 9 Electron microscopic section at 6 months showing both an intact microspherule and one undergoing a histiocytic-derived catabolic process into smaller particles of calcium (black particles). The phosphate ions are not seen because they are dissolved in the processing of tissue for electron microscopic analysis.

Discussion

CaHa gel is a new and increasingly popular choice in the field of soft tissue augmentation. It fills a clinical niche for those patients who wish to have a longer-lasting filler, albeit one that is not permanent.5 For over 25 years CaHa has been used as a synthetic implant. It is currently FDA cleared for radiographic tissue marking, vocal fold augmentation, and oral maxillofacial defects.6–9 Additional studies have also evaluated the role of CaHa for stress urinary incontinence, vesicoureteral reflux, HIV-associated lipoatrophy, cleft lip/cleft palate augmentation, and soft tissue augmentation of the chin and nasolabial folds.10,11 CaHa has several characteristics that make it a good soft tissue filler choice.
CaHa is composed of 35-µm diameter smooth microspheres suspended in a gel carrier. It is biocompatible with human tissue and safe. The CaHa particles are made of calcium and phosphate ions identical to the mineral portion of bone. CaHa requires no allergy tests, allowing for immediate treatment. It is metabolized through normal homeostatic mechanisms that naturally occur in the body via macrophage phagocytosis, similar to degradation of small fragments of bone. It is durable, lasting 2–7 years in vivo radiographically, with a 2-year shelf-life at room temperature. The longevity of clinical results can be expected to vary with each patient. The characteristics of the CaHa gel carrier are also of note. The gel is an aqueous formulation of sodium carboxymethyl cellulose, glycerin, and high purity water. It has been used as a vehicle for multiple injectable products and is classified as ‘generally recognized as safe’ by the FDA. It is cohesive, with high viscosity and elasticity, allowing minimal implant leakage to the surrounding tissue planes. In addition, the gel formulation offers a 1:1 implant-to-tissue defect correction, allowing physicians to avoid the overcorrection necessary with some fillers. The biology of CaHa also appears to be different from that of many fillers. Animal studies have been conducted to evaluate the long-term safety of CaHa implants (personal communication, Bioform Internal Studies). Microscopic analyses of implant sites at serial time points over a 36-month trial period revealed an early macrophage response to the gel followed by a fibrotic response to the microspherules. This early macrophage activity associated with the sodium carboxymethyl cellulose subsides and is replaced by a fibrous encapsulation of individual microspherules. To our knowledge, our pilot study is the first in vivo ultrastructural analysis of the biological response to CaHa in human skin. Our results are quite similar to those found in animal studies.
However, several points are worth mentioning. We found that the early tissue macrophage response to the filler was minimal. Thus, the clinical fill effect was due to the space-occupying characteristics of the synthetic CaHa spherules and its gel carrier. Over time, an increased histiocytic response was seen, leading to a notable electron microscopic change in the appearance of the CaHa particles. The particles were more particulate and distributed both intra- and extracellularly. With the passage of time, the increase in histiocytes and associated fibroblasts appears to anchor down the microspherules as well as to induce new collagen formation as the aqueous gel is metabolized. These results were consistent among all three patients studied. Although animal studies have suggested that CaHa may last 5 years in tissue, our pilot trial in three subjects only suggests that the human response lasts at least 6 months. In addition, our histologic and electron microscopic findings were seen in the postauricular area. Larger studies are required to determine the response in other human anatomic areas.

Conclusion

Soft tissue augmentation for the treatment of the aging face has undergone a revolution over the past two decades. Our study is the first in vivo ultrastructural analysis of the biological response to CaHa in human skin. It would appear that CaHa is easy to use and shows clinical, histologic and electron microscopic evidence of persistence. Future larger studies will evaluate how long the response to this filler lasts, in a variety of human anatomic areas.

References

1. Ellis DA, Makdessian AS, Brown DJ. Survey of future injectables. Facial Plast Surg Clin North Am 2001; 9: 405–11.
2. Coleman SR. Structural fat grafts: the ideal filler? Clin Plast Surg 2001; 28: 111–19.
3. Overholt MA, Tschen JA, Font RL. Granulomatous reaction to collagen implant: light and electron microscopic observations. Cutis 1993; 51: 95–8.
4. Moody BR, Sengelmann RD.
Topical tacrolimus in the treatment of bovine collagen hypersensitivity. Dermatol Surg 2001; 27: 789–91.
5. Sklar JA, White SM. Radiance FN: a new soft tissue filler. Dermatol Surg 2004; 30: 764–8; discussion 768.
6. Chhetri DK, Jahan-Parwar B, Hart SD, et al. Injection laryngoplasty with calcium hydroxylapatite gel implant in an in vivo canine model. Ann Otol Rhinol Laryngol 2004; 113: 259–64.
7. Nagase M, Chen RB, Asada Y, Nakajima T. Radiographic and microscopic evaluation of subperiosteally implanted blocks of hydrated and hardened alpha-tricalcium phosphate in rabbits. J Oral Maxillofac Surg 1989; 47: 582–6.
8. Velich N, Nemeth Z, Hrabak K, et al. Repair of bony defect with combination biomaterials. J Craniofac Surg 2004; 15: 11–15.
9. Zitzmann NU, Rateitschak-Pluss E, Marinello CP. Treatment of angular bone defects with a composite bone grafting material in combination with a collagen membrane. J Periodontol 2003; 74: 687–94.
10. Van Kerrebroeck P, ter Meulen F, Farrelly E, et al. Treatment of stress urinary incontinence: recent developments in the role of urethral injection. Urol Res 2003; 30: 356–62.
11. Mayer R, Lightfoot M, Jung I. Preliminary evaluation of calcium hydroxylapatite as a transurethral bulking agent for stress urinary incontinence. Urology 2001; 57: 434–8.
work_da66gfciobf3hlnt6kipp5qcxm ----

“Monkey see, monkey do”: Peers’ behaviors predict preschoolers’ physical activity and dietary intake in childcare centers

Stéphanie Ward (a), Mathieu Bélanger (b), Denise Donovan (c), Jonathan Boudreau (d), Hassan Vatanparast (e), Nazeem Muhajarine (f), Anne Leis (g), M Louise Humbert (h), Natalie Carrier (i)

a Centre de formation médicale du Nouveau-Brunswick, Pavillon J.-Raymond-Frenette, 100 rue des Aboiteaux, Université de Sherbrooke, Moncton, New Brunswick, E1A 3E9, Canada, stephanie.ann.ward@usherbrooke.ca
b Department of family medicine, Centre de formation médicale du Nouveau-Brunswick, Pavillon J.-Raymond-Frenette, 100 rue des Aboiteaux, Université de Sherbrooke, Moncton, New Brunswick, E1A 3E9, Canada, mathieu.f.belanger@usherbrooke.ca
c Department of Community Health Sciences, Centre de formation médicale du Nouveau-Brunswick, Pavillon J.-Raymond-Frenette, 100 rue des Aboiteaux, Université de Sherbrooke, Moncton, New Brunswick, E1A 3E9, Canada, denise.donovan@usherbrooke.ca
d New Brunswick Institute for Research, Data and Training, PO Box 4000, 304F Keirstead Hall, University of New Brunswick, Fredericton, New Brunswick, E3B 5A3, Canada, jonathan.boudreau@unb.ca
e School of Public Health, 104 Clinic Place, University of Saskatchewan, Saskatoon, Saskatchewan, S7N 2Z4, Canada, vatan.h@usask.ca
f Department of Community Health and Epidemiology, 107 Wiggins Road, University of Saskatchewan, Saskatoon, Saskatchewan, S7N 2Z4, Canada, nazeem.muhajarine@usask.ca
g Department of Community Health and Epidemiology, 107 Wiggins Road, University of Saskatchewan, Saskatoon, Saskatchewan, S7N 2Z4, Canada, anne.leis@usask.ca
h College of Kinesiology, 97 Campus Drive, University of Saskatchewan, Saskatoon, Saskatchewan, S7N 2Z4, Canada, louise.humbert@usask.ca
i École des sciences des aliments, de nutrition et d’études familiales, Pavillon Jacqueline-Bouchard, 51 Antonine-Maillet Avenue, Université de Moncton, Moncton, New Brunswick, E1A 3E9, Canada,
natalie.carrier@umoncton.ca

Corresponding author: Stephanie Ward, RD, M.Sc., Centre de formation médicale du Nouveau-Brunswick, Pavillon J.-Raymond-Frenette, Université de Sherbrooke, 100 rue des Aboiteaux, Moncton, New Brunswick, E1A 3E9, Canada. Email: stephanie.ann.ward@usherbrooke.ca; phone: 506-863-2273

Word count, text: 3410; abstract: 247. Tables and Figures: 2 tables, 2 figures

Keywords: peer influence; motor activity; diet, food, and nutrition; child, preschool; child day care centers

ABSTRACT

Preschoolers observe and imitate the behaviors of those who are similar to them. Therefore, peers may be important role models for preschoolers’ dietary intake and physical activity in childcare centers. This study examined whether peers’ behaviors predict change in preschoolers’ dietary intake and physical activity in childcare centers over 9 months. A total of 238 preschoolers (3 to 5 years old) from 23 childcare centers in two Canadian provinces provided data at the beginning (October 2013 and 2014) and the end (June 2014 and 2015) of a 9-month period for this longitudinal study. Dietary intake was collected at lunch using weighed plate waste and digital photography on two consecutive weekdays. Physical activity was assessed using accelerometers over five days. Multilevel linear regressions were used to estimate the influence of peers’ behaviors on preschoolers’ change in dietary intake and physical activity over 9 months. Results showed that preschoolers whose dietary intake or physical activity level deviated the most from those of their peers at the beginning of the year demonstrated greater change in their intakes and activity levels over 9 months (all p values <0.05), which enabled them to become more similar to their peers. This study suggests that preschoolers’ dietary intake and physical activity may be influenced by the behaviors of their peers in childcare centers.
Since peers could play an important role in promoting healthy eating behaviors and physical activity in childcare centers, future studies should test interventions based on positive role modeling by children.

INTRODUCTION

Establishing healthy eating and physical activity behaviors in childhood is important as these can persist into adulthood (Bélanger et al., 2015; Mikkilä et al., 2005). Previous studies have focused on parents as agents for healthy eating and physical activity in preschoolers (Beydoun and Wang, 2009; Zecevic et al., 2010). However, childcare centers have been identified as potential key locations for the promotion of healthy eating behaviors and physical activity, as approximately 80% of preschoolers (2 to 5 years old) living in developed countries receive out-of-home care (Organisation for Economic Co-operation and Development, 2013) and spend a considerable amount of their waking hours in childcare centers. For example, 70% of Canadian parents who use childcare services for their children under the age of 4 report using them for at least 30 hours a week (Sinha and Bleakney, 2014). Childcare centers offer many opportunities for children to develop both healthy eating behaviors and physical activity. Children who attend childcare centers on a full-time basis are generally offered lunch and snacks, which can contribute to their daily nutritional requirements (Benjamin Neelon et al., 2011). United States benchmarks for nutrition have suggested that half to two-thirds of children’s nutritional needs should be met while in childcare (Benjamin Neelon et al., 2011). However, many preschoolers consume low amounts of vegetables and fruit, and excessive amounts of saturated fat and added sugars while in childcare centers (Ball et al., 2008; Copeland et al., 2013; Erinosho et al., 2013; Gubbels et al., 2014).
Furthermore, despite opportunities for children to be active inside and outside, studies have consistently shown that sedentary time within childcare centers is typically high, while physical activity levels are typically very low, with children accumulating less than 20 minutes per day of moderate-to-vigorous physical activity (MVPA) during an 8-hour day (Kuzik et al., 2015). Bandura’s theory of observational learning suggests that children can learn new behaviors, or increase or decrease the frequency of a previous behavior, by observing, remembering and replicating the behaviors of those around them (Bandura, 1977). Furthermore, it is suggested that children will be more likely to replicate the behavior of someone who they like, respect or perceive as similar to themselves (Bandura, 1977). Therefore, preschoolers’ eating and physical activity behaviors may be shaped by imitating those of their peers while in childcare centers. However, a recent systematic review concluded that current evidence of a potential relationship between preschoolers’ food intake or physical activity and that of their peers is based on small controlled experimental research, cross-sectional observations, and small pre-post studies (Ward et al., 2016). This review also highlighted the need for longitudinal population-based studies to examine how peers influence these behaviors over time, as recent data suggest that it can take up to 8 months for a health-related behavior to be adopted (Lally et al., 2010). Therefore, this study aimed to assess how peers’ behaviors predict preschoolers’ dietary intake and physical activity in childcare centers over the course of 9 months.

METHODS

Subjects

Participants in the Healthy Start – Départ Santé (HSDS) intervention, a clustered randomized controlled trial conducted over a 9-month period in New Brunswick and Saskatchewan, provided data for this longitudinal study (Bélanger et al., 2016).
All preschoolers (3 to 5 years) attending the childcare center on a full-time basis were eligible to participate in the study. Of the 61 childcare centers that were recruited for the HSDS intervention, dietary data from children who attended 17 childcare centers randomized to the control group in the first two years of the study were available at the time of analysis. Valid physical activity data from children attending 22 of the 23 centers randomized to the control group in the first two years of the study were used for the analysis of physical activity. Procedures used to obtain these data are described below. The HSDS study received approval from the Centre Hospitalier de l’Université de Sherbrooke, the University of Saskatchewan, and Health Canada ethics review boards. All parents or guardians of participating children provided signed informed consent.

Outcome assessments

Dietary intake

Children’s intake in calories, fiber, sugar, fat, sodium, and fruit and vegetables was assessed at lunch on two consecutive weekdays, at baseline (October 2013 and 2014) and endpoint (9 months later, i.e., June 2014 and 2015, respectively) of the same school year, using weighed plate waste and digital photography. These nutrients were chosen based on reports that Canadian children frequently consume foods and beverages that are high in calories, sugar, fat and sodium, and consume insufficient amounts of fiber-rich fruit and vegetables (Garriget, 2007; Health Canada, 2012). The decision to collect dietary data on only two days was based on feasibility and on reports from previous studies which have assessed children’s dietary data in schools and childcare centers over the same number of days (Ball et al., 2007; Kirks and Wolff, 1985). The weighed plate waste method has been shown to be a reliable measurement of dietary intake and has been used in studies among school-aged children (Lee et al., 2001).
First, each food item that was offered at lunch on days of data collection was weighed and photographed before and after each serving. This included the main course, sides, beverages and desserts. Second, the difference in weight between the initial serving and the leftovers was calculated to obtain each child’s specific dietary intake (Jacko et al., 2007). Third, the recipes of the lunches were entered into a nutritional analysis software, Food Processor (version 10.10.00), to analyze each child’s intake in calories, fiber, sugar, fat, sodium, and fruit and vegetables. Pictures were used to validate all data collected and to qualitatively identify the proportion of food items that were served and consumed. The average intake over the course of the two days was then computed for each child. The difference between a child’s dietary intake at endpoint and baseline was calculated to reflect the change in dietary intake of the child over the course of 9 months, and was used as the outcome variable for this study.

Physical activity and sedentary activity

Physical activity was assessed using the Actical accelerometer. The Actical has been shown to be a valid tool for measuring physical activity levels of preschoolers (Pfeiffer et al., 2006). Children wore the accelerometer on the right hip with an elastic belt during childcare hours for five consecutive weekdays. Educators were required to place the accelerometer on the children when they first arrived at the childcare center, and remove it before they went home. Children were asked to wear the accelerometer during the entire day, including nap time. After the measurement period, the accelerometers were collected and sent to the research staff. Accelerometer data were recorded in 15-second epochs, which were used to measure time spent in physical activity and sedentary behavior based on predetermined thresholds validated in preschoolers (Pfeiffer et al., 2006).
Accelerometer counts of less than 25 counts per 15 seconds defined sedentary behavior (which would include nap time), while counts between 25 and 714 per 15 seconds defined light intensity physical activity time (LPA). Moderate-to-vigorous physical activity was defined as 715 counts or more per 15 seconds. Non-wear time was defined as at least 60 consecutive minutes of zero counts. Valid days and hours were determined using the study’s baseline data, following a statistical method described by Rich et al. (Rich et al., 2013). Specifically, the Spearman-Brown prophecy formula and the intraclass correlation coefficient were used to calculate the reliability coefficients (r) of the mean daily counts/minute, and analyses were repeated on data from children who met wear times between one to ten hours (based on typical childcare hours of 7:30 am to 5:30 pm) and wear days between one to five (Monday to Friday). Results demonstrated that using a minimum of two hours and four days as the valid minimum wear time provided acceptable reliability coefficients (r = 78.6%) while maximizing the number of data collection days and sample size (Figure 1). All children’s physical activity data were then standardized to an 8-hour period to control for within- and between-participant wear time variation (Katapally and Muhajarine, 2014). Raw accelerometer data were cleaned and managed using SAS codes adapted for this study (Bélanger and Boudreau, 2015).

Figure 1 here

Peers’ behaviors

For each child, the mean baseline dietary intake and physical activity of all other children in their childcare center was computed (the child’s own dietary intake or physical activity was not included). A variable representing the deviation between a child’s behavior and his or her peers’ was then computed by calculating the difference between the child’s and the mean of peers’ dietary intake or physical activity at baseline.
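The variable-construction steps described above (weighed plate-waste intake, classification of 15-second epochs with the validated cut-points, standardization to an 8-hour day, and the leave-one-out peer deviation) can be sketched as follows. This is a simplified illustration in Python rather than the SAS/R code actually used in the study, and all function and variable names are our own:

```python
# Illustrative sketch of the study's variable construction.
# Cut-points (counts per 15-second epoch) follow Pfeiffer et al. (2006):
#   sedentary < 25, light 25-714, MVPA >= 715.

SED_MAX, LPA_MAX = 24, 714      # inclusive upper bounds per 15-s epoch
EPOCHS_PER_MIN = 4              # four 15-second epochs per minute
STANDARD_DAY_MIN = 8 * 60       # standardize wear time to an 8-hour day

def lunch_intake(served_g: float, leftover_g: float, kcal_per_g: float) -> float:
    """Weighed plate waste: intake is the weight served minus the leftovers,
    converted to kcal with the recipe's energy density."""
    return (served_g - leftover_g) * kcal_per_g

def classify_day(counts: list[int]) -> dict:
    """Minutes of sedentary/LPA/MVPA in one day of 15-s accelerometer epochs,
    standardized to an 8-hour childcare day."""
    sed = sum(1 for c in counts if c <= SED_MAX)
    lpa = sum(1 for c in counts if SED_MAX < c <= LPA_MAX)
    mvpa = sum(1 for c in counts if c > LPA_MAX)
    wear_min = len(counts) / EPOCHS_PER_MIN
    scale = STANDARD_DAY_MIN / wear_min   # wear-time adjustment
    return {"sed": sed / EPOCHS_PER_MIN * scale,
            "lpa": lpa / EPOCHS_PER_MIN * scale,
            "mvpa": mvpa / EPOCHS_PER_MIN * scale}

def peer_deviation(child_id: str, baseline: dict[str, float]) -> float:
    """Child's baseline value minus the mean of all *other* children in the
    same center (leave-one-out peer mean)."""
    peers = [v for k, v in baseline.items() if k != child_id]
    return baseline[child_id] - sum(peers) / len(peers)
```

For instance, a child whose baseline total activity is 300 minutes in a center whose other children average 225 minutes has a peer deviation of +75 minutes; the analyses below then relate such deviations to the child's 9-month change score.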
Confounding variables

Based on directed acyclic graphs and existing literature, age (American Academy of Pediatrics, 2009; Rice and Trost, 2013), sex (Colley et al., 2011; Garriget, 2007), province (Canadian Fitness & Lifestyle Research Institute, 2011a, 2011b; Health Canada, 2005), rurality (Liu et al., 2012) and the number of preschoolers attending the childcare center were identified as potential confounding variables. The children’s age was obtained using parent questionnaires from the HSDS intervention. Rurality of the centers was based on publicly available geospatial information from the Community Information Database, 2006 (Government of Canada’s Rural Secretariat, 2006). Urban locations were identified as census metropolitan areas (CMAs), census agglomerations (CAs) or strong metropolitan influenced zones (MIZ). Regions that were identified as moderate, weak or no MIZ were considered rural locations. The total number of preschoolers attending the childcare center was derived from the number of children reported by the director of each center as being eligible for the study. In order to investigate whether our results could be mediated by the length of time a child had attended the childcare center, parents were asked to provide the month and year their child was enrolled in the current center. Duration of attendance was considered as the time a child had attended the center when baseline data collection was obtained.

Statistical analyses

All statistical analyses were conducted in R, version 3.1.1. To assess whether peers’ behaviors predict a child’s dietary intake and physical activity, bivariate linear regressions were conducted first.
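This bivariate step can be illustrated with a toy least-squares computation in pure Python (the study itself used R; the numbers below are made up by us and are not study data). A negative slope of the 9-month change score on the baseline peer deviation is what movement toward the peer mean looks like:

```python
def ols_slope(x: list[float], y: list[float]) -> float:
    """Ordinary least-squares slope of y on x: cov(x, y) / var(x)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    return sxy / sxx

# Hypothetical children: baseline deviation from the peer mean (kcal)
# and each child's own 9-month change in lunch intake (kcal).
deviation = [100.0, 40.0, 0.0, -40.0, -100.0]
change = [-50.0, -20.0, 0.0, 20.0, 50.0]

beta = ols_slope(deviation, change)
# beta is -0.5 here: the children farthest from their peers change the most,
# and in the direction of the peer mean.
```

With a β of about −0.5, a child 100 kcal above the peer mean at baseline would be predicted to eat roughly 50 kcal less 9 months later, which matches the direction and rough magnitude of the coefficients reported in Table 2.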
Finally, sensitivity analyses were conducted to determine whether the length of time a child had attended the childcare center influenced results, by using data from children whose parents returned the completed parent questionnaire. This variable could only be used in sensitivity analyses given that it was only available for a small portion of participants. In order to visually represent the data, quintiles of the deviation between participants’ and their peers’ baseline dietary intake and physical activity were created.

RESULTS

Of the 1205 children eligible to participate in the HSDS intervention in the first two years of the study, consent was obtained from parents of 730 children (61%). Of these children, 350 attended a childcare center that was randomized to the control group (203 children in NB and 147 children in SK). An average of 23 children were enrolled in the preschool program in each of the childcare centers. A total of 238 children (mean age of 4.0 at the beginning of the study) provided data at both time points; 152 children (52.6% boys) provided baseline and endpoint dietary data, while 199 children (51.3% boys) provided valid accelerometer data. Parent-reported income showed that 82% of children were living in a household which earned $50,000 or more per year. Age-adjusted BMI, dietary intake and physical activity of the children included in the analyses are presented in Table 1.

Table 1.
Age-adjusted BMI, dietary intake and physical activity of participants per day at baseline and endpoint (n=238)

Age-adjusted BMI, N (%):

| | All children (n=238) | Boys (n=123, 52%) | Girls (n=115, 48%) |
| Baseline (n=229) | | | |
| Underweight (BMI <18.5) | 24 (10.5) | 12 (10.2) | 12 (10.8) |
| Healthy weight (BMI 18.5–24.9) | 168 (73.4) | 86 (72.9) | 82 (73.9) |
| Overweight (BMI 25–29.9) | 31 (13.5) | 17 (14.4) | 14 (12.6) |
| Obese (BMI ≥30) | 6 (2.6) | 3 (2.5) | 3 (2.7) |
| Endpoint (n=212) | | | |
| Underweight (BMI <18.5) | 12 (5.7) | 5 (5.1) | 7 (6.8) |
| Healthy weight (BMI 18.5–24.9) | 165 (77.8) | 77 (77.8) | 78 (75.7) |
| Overweight (BMI 25–29.9) | 29 (13.7) | 14 (14.1) | 15 (14.6) |
| Obese (BMI ≥30) | 6 (2.8) | 3 (3.0) | 3 (2.9) |

Dietary intake and physical activity, Mean (SD) [95% CI]:

| | All children | Boys | Girls |
| Dietary intake per day at baseline (n=152) | | | |
| Calories (kcal) | 271.4 (103.0) [255.1, 287.8] | 279.1 (112.0) [254.5, 303.6] | 263.0 (92.0) [241.7, 284.3] |
| Fiber (g) | 2.5 (1.1) [2.3, 2.7] | 2.6 (1.2) [2.3, 2.9] | 2.4 (1.0) [2.2, 2.7] |
| Sugar (g) | 15.3 (13.6) [13.2, 17.5] | 16.3 (14.5) [13.1, 19.5] | 14.3 (12.5) [11.4, 17.1] |
| Fat (g) | 8.3 (3.9) [7.7, 8.9] | 8.7 (4.1) [7.8, 9.6] | 7.9 (3.6) [7.1, 8.7] |
| Sodium (mg) | 457.0 (234.6) [419.7, 494.3] | 449.0 (236.7) [397.1, 500.9] | 465.9 (233.6) [411.9, 519.9] |
| Fruit and vegetables (g) | 64.9 (49.0) [57.1, 72.7] | 67.0 (54.1) [55.1, 78.8] | 62.6 (43.0) [52.7, 72.6] |
| Dietary intake per day at endpoint | | | |
| Calories (kcal) | 320.3 (161.6) [294.7, 346.0] | 323.0 (176.7) [284.3, 361.7] | 317.4 (144.2) [284.1, 350.7] |
| Fiber (g) | 2.9 (1.7) [2.6, 3.2] | 2.9 (1.8) [2.5, 3.3] | 2.8 (1.6) [2.5, 3.2] |
| Sugar (g) | 18.5 (15.2) [16.1, 21.0] | 19.3 (14.8) [16.1, 22.6] | 17.7 (15.8) [14.1, 21.3] |
| Fat (g) | 9.8 (6.3) [8.8, 10.8] | 9.6 (7.0) [8.1, 11.1] | 10.1 (5.5) [8.8, 11.4] |
| Sodium (mg) | 493.9 (376.4) [434.1, 553.8] | 500.9 (390.6) [415.3, 586.5] | 486.2 (362.6) [402.5, 570.0] |
| Fruit and vegetables (g) | 80.5 (59.7) [71.0, 90.0] | 86.6 (67.6) [71.8, 101.4] | 73.6 (49.1) [62.3, 85.0] |
| Physical activity per day at baseline (n=199) | | | |
| Total physical activity (min) | 179.2 (49.3) [172.3, 186.0] | 185.7 (48.3) [176.3, 195.1] | 172.3 (49.7) [162.4, 182.2] |
| MVPA (min) | 11.1 (13.5) [9.2, 13.0] | 10.5 (8.1) [8.9, 12.0] | 11.8 (17.4) [8.3, 15.3] |
| LPA (min) | 168.0 (45.4) [161.7, 174.4] | 175.3 (47.4) [166.0, 184.5] | 160.5 (42.2) [152.1, 168.9] |
| Sedentary time (min) | 300.1 (49.3) [294.0, 307.7] | 294.3 (48.3) [284.9, 303.7] | 307.7 (49.7) [297.8, 317.6] |
| Physical activity per day at endpoint | | | |
| Total physical activity (min) | 177.5 (62.1) [168.9, 186.1] | 183.1 (64.3) [170.6, 195.6] | 171.7 (59.4) [159.8, 183.5] |
| MVPA (min) | 13.9 (11.8) [12.3, 15.5] | 13.5 (9.8) [11.6, 15.4] | 14.3 (13.7) [11.5, 17.0] |
| LPA (min) | 163.6 (56.1) [155.8, 171.4] | 169.5 (59.0) [158.1, 181.0] | 157.4 (52.4) [147.0, 167.8] |
| Sedentary time (min) | 300.7 (62.3) [292.1, 309.4] | 293.4 (64.3) [281.0, 305.9] | 308.3 (59.4) [296.5, 320.2] |

LPA is light intensity physical activity. MVPA is moderate-to-vigorous physical activity. Time/place of the study: October 2013 and 2014 (baseline), and June 2014 and 2015 (endpoint) / Saskatchewan and New Brunswick, Canada.

At baseline, children consumed an average of 271 kcal, 2.5 g of fiber, 15.3 g of sugar, 8.3 g of fat, 457 mg of sodium and 64.9 g of fruit and vegetables during lunchtime. Children accumulated an average of 179.2 minutes of total physical activity during childcare hours (22.4 minutes/hour), 6% of which (11.1 minutes, or 1.4 minutes/hour) were spent in MVPA. Children spent 63% of their time in childcare being sedentary (300.1 minutes, or 37.5 minutes/hour) and 35% in light physical activity (168.0 minutes, or 21 minutes/hour). Although children’s total physical activity and sedentary time remained relatively unchanged at the end of the year, children’s dietary intake of macronutrients increased by 8 to 24% over the 9 months of follow-up. Results from regression analyses suggest that children whose dietary intake differed greatly from their peers at baseline had dietary intakes that were more similar to those of their peers 9 months later (Figure 2, Panel A). In contrast, there was little change between measurement periods in the dietary intake of children who had intakes similar to their peers’ at baseline.
For example, if a child ate 100 calories more than his peers at baseline, that child would have eaten about 50 calories less at endpoint (100 × β = 100 × (-0.502) ≈ -50 calories) (Table 2). Similarly, if a child ate 10 calories less than his peers at baseline, he would have eaten about 5 calories more at endpoint ((-10) × (-0.502) = 5.02). Results remained significant for all outcomes after adjustment for confounders. Figure 2 here

Table 2. Association between the deviation in dietary intake and physical activity of children and that of their peers at baseline, and the difference between the children's own behaviors at baseline and endpoint

Dietary intake: deviation between children's dietary intake and their peers' dietary intake at baseline versus the difference in children's dietary intake between baseline and endpoint. For each outcome, β (95% CI) is given for the univariate linear regression (n=152), the multilevel linear regression for all children (n=152) (a), and the multilevel sensitivity analysis (n=22) (b):

Calories (kcal): -0.497 (-0.764 to -0.229); -0.502 (-0.692 to -0.312); 0.086 (-0.763 to 0.902)
Fiber (g): -0.577 (-0.831 to -0.322); -0.583 (-0.766 to -0.401); -0.371 (-1.026 to -0.054)
Sugar (g): -0.234 (-0.463 to -0.004); -0.236 (-0.400 to -0.074); 0.506 (-0.232 to 1.493)
Fat (g): -0.624 (-0.918 to -0.329); -0.624 (-0.835 to -0.411); 0.038 (-0.886 to 0.910)
Sodium (mg): -0.595 (-0.904 to -0.286); -0.608 (-0.809 to -0.408); 0.239 (-0.720 to 0.854)
Fruit and vegetables: -0.599 (-0.823 to -0.375); -0.602 (-0.764 to -0.440); -0.387 (-0.402 to 0.018)

Physical activity: deviation between children's physical activity and their peers' physical activity at baseline versus the difference in children's physical activity between baseline and endpoint. For each outcome, β (95% CI) is given for the univariate linear regression (n=199), the multilevel linear regression for all children (n=199) (a), and the multilevel sensitivity analysis (n=44) (b):

Total physical activity (min): -0.643 (-0.816 to -0.471); -0.650 (-0.817 to -0.483); -0.652 (-0.941 to -0.362)
MVPA (min): -0.625 (-0.815 to -0.436); -0.601 (-0.783 to -0.419); -0.556 (-0.924 to -0.187)
LPA (min): -0.518 (-0.682 to -0.355); -0.517 (-0.677 to -0.356); -0.558 (-0.871 to -0.244)
Sedentary time (min): -0.610 (-0.785 to -0.434); -0.621 (-0.792 to -0.450); -0.652 (-0.941 to -0.362)

LPA is light intensity physical activity. MVPA is moderate-to-vigorous physical activity. (a) Includes adjustments for province, rurality, age and sex of the child, number of children in the childcare center, and clustering at the level of childcare centers. (b) Sensitivity analyses were conducted using data from children for whom duration of attendance was obtained. Time/place of the study: October 2013 and 2014 (baseline), and June 2014 and 2015 (endpoint) / Saskatchewan and New Brunswick, Canada.

Children who differed the most from their peers in terms of physical activity level at the first measurement period displayed greater changes in their physical activity level between the two time points (Table 2). On average, children who had relatively low physical activity levels at the first measurement increased their physical activity level at the second measurement, whereas children who had relatively high physical activity levels initially decreased their physical activity level over the same period (Figure 2, Panel B). These results remained unchanged following adjustment for confounders. Of the 152 children who provided dietary intake data, length of attendance was available for 22 children. At the time of baseline data collection, these children had attended the childcare center for an average of 749 days (about 2 years), with duration of attendance ranging from 0 to 1219 days. After conducting sensitivity analyses accounting for duration of attendance, regression coefficients remained similar only for fiber and for fruit and vegetables. The small sample size most likely precluded the attainment of statistical significance for the estimated effect of differing from peers at baseline on change in intake over 9 months. Duration of attendance was obtained for 44 of the 199 children who provided physical activity data.
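The analysis behind Table 2 amounts to regressing each child's baseline-to-endpoint change on their baseline deviation from the peer mean, so a β near -0.5 means roughly half of the initial gap closes. A minimal sketch on simulated data (all numbers illustrative, not the study's data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative baseline lunchtime calorie intakes for 150 children.
baseline = rng.normal(271, 60, size=150)

# Each child's deviation from the mean of their peers (group mean here).
deviation = baseline - baseline.mean()

# Simulate a true beta of -0.5: half of each child's initial gap closes,
# plus measurement noise.
change = -0.5 * deviation + rng.normal(0, 20, size=150)

# Univariate least-squares regression of change on baseline deviation.
X = np.column_stack([np.ones_like(deviation), deviation])
intercept, slope = np.linalg.lstsq(X, change, rcond=None)[0]
print(round(slope, 2))  # recovers a slope close to the true -0.5
```

The multilevel models in Table 2 additionally adjust for covariates and for clustering within childcare centers, which a plain least-squares fit like this does not do.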
Mean attendance was 656 days, with a range of 0 to 1308 days. Results from sensitivity analyses were similar to those obtained from the total sample of children, suggesting that peers' behaviors contribute to predicting children's physical activity levels regardless of how long they had attended that childcare center. DISCUSSION To our knowledge, this is the first study to objectively assess how peer behavior predicts change in preschoolers' dietary intake and physical activity over time. Results suggest that peers' dietary intake and physical activity predict children's dietary intake and physical activity over a 9-month period. Specifically, the greater the deviation between children's intake or physical activity and those of their peers at the beginning of the year, the more their behavior changed to become more similar to that of their peers over time. Similar to studies that found that peers influenced school-aged children's physical activity over time (Coppinger et al., 2010) and youth's food intake (Salvy et al., 2009), our findings support the theory of observational learning and suggest that peers should be considered as potential role models for healthy eating and physical activity among preschoolers. Regardless of the quality of the peers' intake, children's dietary intake became similar to their peers' over time. This suggests that if peers have a healthy diet, children whose diet is poor could potentially see improvements over time. However, this hypothesis is contingent on the quality of the foods that are available to children attending childcare centers, and studies have shown that lunches served in childcare centers typically contain low amounts of vegetables and high amounts of added sugars and saturated fat (Erinosho et al., 2013). In order for peers to be role models for healthy eating, healthy foods must be offered by childcare centers or reinforced through nutrition policies when lunch is brought from home.
Our results support previous experimental studies that found that peers influenced children's food choices, preferences and intake (Ward et al., 2016). This finding is promising, particularly in cases where children experience food reluctance (e.g. fussy or picky eaters), a relatively common issue in early childhood (Dubois et al., 2007). As such, peers may be able to inadvertently help these children improve their eating behaviors, especially if childcare educators are able to group food-reluctant children with peers who demonstrate positive eating behaviors at the same table at lunch time. Experimental studies have found that preschoolers were more active in the presence of one or more peers than when they were alone (Barkley et al., 2014; Eaton and Keats, 1981). However, these studies were conducted in laboratory settings with small samples of children (n<70), did not assess the physical activity of the peers, and were not designed to assess the impact of those peers on children's physical activity over time (Barkley et al., 2014; Eaton and Keats, 1981). Results from the current study suggest that the degree to which children are more or less active than their peers can influence their likelihood of becoming more or less active over time. While additional research is needed to confirm our findings, interventions should consider targeting peers as a method for increasing children's physical activity in childcare centers. Building on a previous study that suggested that engaging support from friends may increase physical activity among adolescent girls (Neumark-Sztainer et al., 2003), less active children could be paired with very active peers during games and activities. The development and testing of such interventions is required, as our results highlight the low levels of physical activity observed in childcare centers.
Specifically, children in our study spent 63% of the time in childcare in sedentary activity and only 2% in MVPA, which is almost identical to the results of a study of preschoolers in Alberta, Canada, which found that participants spent 62% of their time in sedentary activity and 4% in MVPA (Kuzik et al., 2015). We questioned whether our findings could be affected by the length of time the child had been exposed to his peers. However, when adjusting for duration of attendance, regression coefficients from sensitivity analyses were similar to those obtained from the main models in the case of physical activity. Although this suggests that peers' influence on other children's behaviors is not affected by length of exposure, it should be tested formally by using a larger sample and evaluating the effect of peer influence among children who are new attendees of a childcare center. Limitations of this study must be acknowledged. Aside from the potential interpretations provided, regression to the mean may also explain some of the observed changes. This is a statistical phenomenon whereby individuals or groups with extreme initial values are likely to have follow-up values that are closer to the overall sample mean (Linden, 2013). Regression to the mean can be mitigated when multiple baseline observations are taken, as this narrows the variability around the true mean (Linden, 2013). As such, the fact that physical activity data were obtained over the course of five days, and that dietary intake data were collected over two days, at both time points may have helped reduce the potential for regression to the mean. This phenomenon still cannot be excluded as a potential explanation for our findings. In addition, two days of dietary intake measurements may not have been enough to accurately measure preschoolers' usual intake, as their appetite can fluctuate from one day to the next.
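Regression to the mean, and why averaging several measurement days dampens it, can be illustrated with a small simulation (all numbers illustrative): children with extreme observed baselines look less extreme at a second measurement even when their true level never changes, and the artificial "drop" shrinks as more days are averaged.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

# Each child has a stable true activity level; daily measurements add noise.
true_level = rng.normal(180, 25, size=n)

def observed_mean(days):
    # Average of `days` noisy daily measurements around the true level.
    noise = rng.normal(0, 40, size=(days, n))
    return true_level + noise.mean(axis=0)

for days in (1, 5):
    baseline = observed_mean(days)
    endpoint = observed_mean(days)
    top_decile = baseline >= np.quantile(baseline, 0.9)
    # Apparent decline among the initially most-active children, even though
    # no true change occurred: pure regression to the mean.
    drop = baseline[top_decile].mean() - endpoint[top_decile].mean()
    print(f"{days} day(s) averaged: apparent drop of {drop:.1f} min")
```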
Furthermore, feasibility issues prevented the assessment of snack intake which could have influenced children’s food intake at lunch. Participants in our study were also of similar age, which did not allow us to assess whether children would be influenced differently by older or younger peers. It is also possible that the arrival or departure of children and educators during the course of the year could have had an impact on children’s behaviors. However, all children within the childcare center would have been exposed to the same risk, thus reducing the impact that this could have had on our findings. Despite these limitations, strengths of this study include the objective measurement of children’s dietary intake and physical activity, the use of a large sample derived from a population-based study and the variability in geographical location, as well as the longitudinal nature of the analyses. Also, while previous studies have used arbitrary cut points for determining valid accelerometer wear time in childcare centers, those used in our study were calculated based on non-discretionary statistical methods. Conclusions This study objectively assessed how peers’ behaviors predicted change in preschoolers’ dietary intake and physical activity over time. Specifically, our study suggests that the greater the deviation between children’s dietary intake or physical activity level and those of their peers at the beginning of the year, the greater their change in dietary intake or physical activity will be over time. Our findings suggest that improving some children’s eating behaviors and physical activity could indirectly result in improving other children’s behaviors in childcare centers. Hence, future studies should consider the role that peers could play in promoting healthy eating behaviors and physical activity of preschoolers in this setting. 
ACKNOWLEDGEMENTS Funding Support: The Healthy Start study is financially supported by a grant from the Public Health Agency of Canada (# 6282-15-2010/3381056-RSFS), a research grant from the Consortium national de formation en santé (# 2014-CFMF-01), and a grant from the Heart and Stroke Foundation of Canada (# 2015-PLNI). SW was supported by a Canadian Institutes of Health Research Charles Best Canada Graduate Scholarships Doctoral Award and by the Gérard-Eugène-Plante Doctoral Scholarship. The funders did not play a role in the design of the study, the writing of the manuscript or the decision to submit it for publication. Declaration of conflict of interest: No conflicts of interest were reported by the authors of this paper. REFERENCES American Academy of Pediatrics, 2009. Caring for your baby and young child: Birth to age 5, 5th ed. Bantam, New York, NY. Ball, S., Benjamin, S., Ward, D., 2008. Dietary intakes in North Carolina child-care centers: are children meeting current recommendations? J. Am. Diet. Assoc. 105, 718–721. Ball, S.C., Benjamin, S.E., Ward, D.S., 2007. Development and Reliability of an Observation Method to Assess Food Intake of Young Children in Child Care. J. Am. Diet. Assoc. 107, 656–661. doi:10.1016/j.jada.2007.01.003 Bandura, A., 1977. Social learning theory. Prentice Hall, Englewood Cliffs, NJ. Barkley, J., Salvy, S., Sanders, G., Dey, S., Von Carlowitz, K., Williamson, M., 2014. Peer influence and physical activity behavior in young children: an experimental study. J. Phys. Act. Health 11, 404–409. Bélanger, M., Boudreau, J., 2015. SAS Code for Actical Data Cleaning and Management [WWW Document]. Cent. Form. médicale du Nouv. URL http://www.mathieubelanger.recherche.usherbrooke.ca/Actical.fr.htm (accessed 10.20.15). Bélanger, M., Humbert, L., Vatanparast, H., Ward, S., Muhajarine, N., Froehlich Chow, A., Engler-Stringer, E., Donovan, D., Carrier, N., Leis, A., 2016.
A multilevel intervention to increase physical activity and improve healthy eating and physical literacy among young children (ages 3-5) attending early childcare centres: the Healthy Start-Départ Santé cluster randomised controlled trial study protocol. BMC Public Health 16, 313–322. Bélanger, M., Sabiston, C., Barnett, T., O'Loughlin, E., Ward, S., Contreras, G., O'Loughlin, J., 2015. Number of years of participation in some, but not all, types of physical activity during adolescence predicts level of physical activity in adulthood: Results from a 13-year study. Int. J. Behav. Nutr. Phys. Act. 12, 76. Benjamin Neelon, S., Briley, M., American Dietetic Association, 2011. Position of the American Dietetic Association: benchmarks for nutrition in child care. J. Am. Diet. Assoc. 111, 607–615. Beydoun, M., Wang, Y., 2009. Parent-child dietary intake resemblance in the United States: Evidence from a large representative survey. Soc. Sci. Med. 68, 2137–2144. Canadian Fitness & Lifestyle Research Institute, 2011a. Physical activity levels of Canadian children and youth in New Brunswick. Canadian Fitness & Lifestyle Research Institute, 2011b. Kids CAN PLAY! Physical activity levels of Canadian Children and youth in Saskatchewan [WWW Document]. URL http://www.cflri.ca/media/node/972/files/CANPLAY Bulletin 2.8 SK - EN.pdf (accessed 2.26.14). Colley, R., Garriguet, D., Janssen, I., Craig, C., Clarke, J., Tremblay, M., 2011. Physical activity of Canadian children and youth: accelerometer results from the 2007 to 2009 Canadian Health Measures Survey. Health Reports 22, 15–23. Copeland, K.A., Benjamin Neelon, S.E., Howald, A.E., Wosje, K.S., 2013. Nutritional quality of meals compared to snacks in child care. Child. Obes. 9, 223–232. doi:10.1089/chi.2012.0138 Coppinger, T., Jeanes, Y.M., Dabinett, J., Vögele, C., Reeves, S., 2010. Physical activity and dietary intake of children aged 9-11 years and the influence of peers on these behaviours: a 1-year follow-up. Eur. J. Clin.
Nutr. 64, 776–781. doi:10.1038/ejcn.2010.63 Dubois, L., Farmer, A., Girard, M., Peterson, K., Tatone-Tokuda, F., 2007. Problem eating behaviors related to social factors and body weight in preschool children: A longitudinal study. Int. J. Behav. Nutr. Phys. Act. 4, 9–18. doi:10.1186/1479-5868-4-9 Eaton, W., Keats, J., 1981. Peer presence and sex differences in motor activity level. Erinosho, T.O., Ball, S.C., Hanson, P.P., Vaughn, A.E., Ward, D.S., 2013. Assessing Foods Offered to Children at Child-Care Centers Using the Healthy Eating Index-2005. J. Acad. Nutr. Diet. 113, 1084–1089. doi:10.1016/j.jand.2013.04.026 Garriguet, D., 2007. Overview of Canadians' Eating Habits. Ottawa, ON. Government of Canada's Rural Secretariat, 2006. Community Information Database - Metropolitan Influence Zone (MIZ) Topology [WWW Document]. Gubbels, J., Raaijmakers, L., Gerards, S.M., Kremers, S.P.J., 2014. Dietary intake by Dutch 1- to 3-year-old children at childcare and at home. Nutrients 6, 304–318. Health Canada, 2012. Do Canadian children meet their nutrient requirements through food intake alone? Health Canada, 2005. ARCHIVED - View Maps for Each Health Indicator [WWW Document]. URL http://www.hc-sc.gc.ca/fn-an/surveill/atlas/map-carte/index-eng.php#nu (accessed 2.26.14). Jacko, C., Dellava, J., Ensle, K., Hoffman, D., 2007. Use of the plate-waste method to measure food intake in children. J. Ext. 45, 6RIB7. Katapally, T., Muhajarine, N., 2014. Towards uniform accelerometry analysis: A standardization methodology to minimize measurement bias due to systematic accelerometer wear-time variation. J. Sport. Sci. Med. 13, 379–386. Kirks, B., Wolff, H., 1985. A comparison of methods for plate waste determinations. J. Am. Diet. Assoc. 85, 328–331. Kuzik, N., Clark, D., Ogden, N., Harber, V., Carson, V., 2015. Physical activity and sedentary behaviour of toddlers and preschoolers in child care centres in Alberta, Canada. Can. J. Public Health Rev. Can.
Santé Publique 106, e178–83. doi:10.17269/CJPH.106.4794 Lally, P., van Jaarsveld, C., Potts, H., Wardle, J., 2010. How are habits formed: Modelling habit formation in the real world. Eur. J. Soc. Psychol. 40, 998–1009. Lee, H., Lee, K., Shanklin, C., 2001. Elementary students' food consumption at lunch does not meet recommended dietary allowance for energy, iron, and vitamin A. J. Am. Diet. Assoc. 101, 1060–1063. Linden, A., 2013. Assessing regression to the mean effects in health care initiatives. BMC Med. Res. Methodol. 13, 119. doi:10.1186/1471-2288-13-119 Liu, J., Jones, S., Probst, J., Merchant, A., Cavicchia, P., 2012. Diet, physical activity, and sedentary behaviors as risk factors for childhood obesity: an urban and rural comparison. Child. Obes. 8, 440–448. Mikkilä, V., Räsänen, L., Raitakari, O., Pietinen, P., Viikari, J., 2005. Consistent dietary patterns identified from childhood to adulthood: the cardiovascular risk in Young Finns Study. Br. J. Nutr. 93, 923–931. Neumark-Sztainer, D., Story, M., Hannan, P., Tharp, T., Rex, J., 2003. Factors Associated With Changes in Physical Activity: A Cohort Study of Inactive Adolescent Girls. JAMA Pediatr. 157, 803–810. Organisation for Economic Co-operation and Development, 2013. PF3.2 Enrolment in childcare and pre-schools, Social policies and data, OECD Family Database. Pfeiffer, K., McIver, K., Dowda, M., Almeida, M., Pate, R., 2006. Validation and calibration of the Actical accelerometer in preschool children. Med. Sci. Sports Exerc. 38, 152–157. Rice, K., Trost, S., 2013. Physical activity levels among children attending family day care. J. Nutr. Educ. Behav. Rich, C., Geraci, M., Griffiths, L., Sera, F., Dezateux, C., Cortina-Borja, M., 2013. Quality Control Methods in Accelerometer Data Processing: Defining Minimum Wear Time. PLoS One 8, 1–8. doi:10.1371/journal.pone.0067206 Salvy, S.J., Howard, M., Read, M., Mele, E., 2009.
The presence of friends increases food intake in youth. Am. J. Clin. Nutr. 90, 282–287. doi:10.3945/ajcn.2009.27658 Sinha, M., Bleakney, A., 2014. Spotlight on Canadians: Results from the General Social Survey, Receiving Care at Home. Catalogue no. 89-652-X. Ward, S., Bélanger, M., Donovan, D., Carrier, N., 2016. Relationship between eating behaviors and physical activity of preschoolers and their peers: a systematic review. Int. J. Behav. Nutr. Phys. Act. 13, 50. doi:10.1186/s12966-016-0374-x Zecevic, C.A., Tremblay, L., Lovsin, T., Michel, L., 2010. Parental Influence on Young Children's Physical Activity. Int. J. Pediatr. 2010, 468526. doi:10.1155/2010/468526 FIGURE CAPTIONS Figure 1. Reliability (%) heatmap and corresponding sample size, by minimum daily wear time and wear days, based on accelerometry data collected in the fall of 2013 (1.5-column fitting, colored image) Figure 2. Deviation between children's average intake in fruit and vegetables (Panel A) and children's average total physical activity (Panel B) in each quintile, and those of their peers at baseline (October 2013 and 2014) and endpoint (June 2014 and 2015). (1.5-column fitting image)

American Journal of Dermatology and Venereology 2017, 6(2): 17-24 DOI: 10.5923/j.ajdv.20170602.01 The Efficacy of Topical Hyaluronic Acid Serum in Acne Scar Patients Treated with Fractional CO2 Laser Shakir J. Alsaedy, Khansaa A. Mosa, Salah H. Alshami* Department of Dermatology, Alkindy Teaching Hospital, Baghdad, Iraq Abstract Acne is a common disorder experienced by up to 80% of people between 11 and 30 years of age and by up to 5% of older adults. It is caused by several factors including increased sebum production, follicular hypercornification, colonization with Propionibacterium acnes, and a lymphocytic and neutrophilic inflammatory response.
Acne scarring can be divided into two main categories, atrophic and hypertrophic scars; atrophic scars are subdivided into three basic types: icepick scars, rolling scars, and boxcar scars. Fractionated carbon dioxide (CO2) laser resurfacing combines the concept of fractional photothermolysis with an ablative 10600-nm wavelength. This technology allows for the effective treatment of acne scars, with reduced recovery periods and a markedly smaller side effect profile compared to traditional CO2 laser resurfacing. Hyaluronic acid is a carbohydrate occurring naturally in all living organisms. Its biological functions include maintenance of the visco-elasticity of liquid connective tissues such as joint synovial and eye vitreous fluid, control of tissue hydration, and water transport. This study is designed to evaluate the efficacy of using topical hyaluronic acid serum in patients with atrophic acne scars after treatment with fractional CO2 laser. Patients and Methods: Thirty-six patients (26 females and 10 males) with moderate to severe atrophic acne scarring were included in this study. The patients were divided randomly into two equal groups; one group was treated with CO2 fractional ablative laser alone and the other group was treated with laser in addition to topical hyaluronic acid serum, in order to compare the final results between the two groups and assess the effect of hyaluronic acid on wound healing and post-operative complications. Therapeutic outcomes were assessed by standardized digital photography. Results: The final results after 3 months, according to the dermatologists' assessment of the level of improvement in response to treatment by CO2 laser combined with topical hyaluronic acid, were as follows: three patients (16.6%) showed excellent improvement, nine patients (50%) significant improvement, five patients (27.7%) moderate improvement and one patient (5.5%) mild improvement in the appearance of the acne scars.
The results were significant as indicated by the P value, which was 0.002. All participants reported mild side effects such as pain, erythema and peeling, and a few patients suffered from edema and severe peeling, but these were all tolerable and transient and lasted for approximately five days. Conclusion: The addition of hyaluronic acid to CO2 laser resurfacing not only enhances the effect of the laser but also provides a faster recovery time and fewer side effects compared to CO2 laser resurfacing alone. Keywords Hyaluronic acid, Acne scars, Fractional CO2 laser 1. Introduction Acne is a chronic inflammatory disease of pilosebaceous follicles, characterized by comedones, papules, pustules, nodules, and often scars. It is caused by several factors including increased sebum production, follicular hypercornification, colonization with Propionibacterium acnes, and a lymphocytic and neutrophilic inflammatory response [1, 2]. Collagen and other tissue damage from the inflammation of acne leads to permanent skin texture changes and fibrosis. Dermal damage is longer lasting and results in an increase or decrease of tissue, and often worsens in appearance with age as a result of normal skin changes. * Corresponding author: salsah71@yahoo.com (Salah H. Alshami) Published online at http://journal.sapub.org/ajdv Copyright © 2017 Scientific & Academic Publishing. All Rights Reserved
Ablative lasers are technologies with high selectivity for water. Therefore, their action takes place mainly on the surface, but the depth of action is correlated with the intensity of the emitted energy and the diameter of the spot used [5]. CO2 lasers, which present lower selectivity for water, besides causing ablation are also capable of producing denaturation in the tissues surrounding the ablation and a non-coagulative thermal stimulus to dermal proteins. CO2 lasers have a double effect: they promote the wound healing process and elicit an amplified production of myofibroblasts and matrix proteins such as hyaluronic acid [6]. Ablative lasers remain the gold standard in skin resurfacing and provide the greatest clinical improvements with the least number of treatments. However, because of the complete vaporization of the epidermis and variable coagulative damage to the dermis, healing time is prolonged and treatments carry significant associated risks [7, 8].

Table 1. Surgical Management of Acne Scars

Scar type                  Proposed management
Icepick <3.5 mm            Punch excision
Icepick >3.5 mm            Elliptical excision
Boxcar                     Punch elevation
Sinus tract or wide base   Skin graft
Rolling                    "Subcision"
Hypertrophic or keloid     Debulking

Hyaluronic acid (hyaluronan, HA) is a carbohydrate, more specifically a mucopolysaccharide, occurring naturally in all living organisms. The unique viscoelastic nature of HA, along with its biocompatibility and non-immunogenicity, has led to its use in a number of clinical applications [9]. HA's viscoelastic matrix can act as a strong biocompatible support material and is therefore commonly used as a growth scaffold in surgery, wound healing and embryology [10].
HA receptors are involved in cellular signal transduction; one receptor family includes CD44 and RHAMM (Receptor for HA-Mediated Motility) [11, 12]. These are considered to be the major HA receptors on most cell types [13]; they have been implicated in regulating cellular responses to growth factors and play a role in cell migration, particularly for fibroblasts and smooth muscle cells [14]. Topical application of HA may result in increased hydration of the stratum corneum and lead to the enhanced topical delivery of a concomitantly applied drug across this skin barrier [15]. Cosmetic uses of hyaluronic acid HA has been extensively utilized in cosmetic products because of its viscoelastic properties and excellent biocompatibility [16]. Application of HA-containing cosmetic products to the skin is reported to moisturize and restore elasticity, thereby achieving an anti-wrinkle effect, albeit no rigorous scientific proof exists to substantiate this claim [17]. HA-based cosmetic formulations or sunscreens may also be capable of protecting the skin against ultraviolet irradiation due to the free radical scavenging properties of HA [18]. HA, either in a stabilized form or in combination with other polymers, is used as a component of commercial dermal fillers in cosmetic surgery. HA degradation products are purported to contribute to scar formation. When hyaluronidase is added to generate HA fragments, there is increased scar formation [19]. These data support the theory that high molecular weight HA promotes cell quiescence and supports tissue integrity, whereas generation of HA breakdown products is a signal that injury has occurred and initiates an inflammatory response [20]. This study is designed to evaluate the efficacy of using topical hyaluronic acid serum in patients with atrophic acne scars after treatment with fractional CO2 laser. 2.
Patients and Methods This is an open therapeutic trial performed at the Beirut Private Center for Laser Treatments in Baghdad, Iraq during the period from April 1st, 2015 to April 1st, 2016. Thirty-six patients (26 females and 10 males) with moderate to severe atrophic acne scarring according to Goodman's qualitative global acne scarring grading system (Table 8) were included in this study. Their ages ranged from 20 to 43 years (mean: 29.69 years, SD ±7.21) and their Fitzpatrick skin types were III-IV. This study was approved by the ethics committee of Al-Kindy Teaching Hospital. Written informed consent was obtained from each patient. Exclusion criteria included known photosensitivity, pregnancy or lactation, inflammatory skin disorders or active herpes infection, history of hypertrophic or keloidal scarring, and the use of isotretinoin or other physical acne treatments over the past 6 months. The use of anti-coagulants or isotretinoin, and any medical illness (e.g. diabetes, chronic infections, blood dyscrasias) that could influence the wound healing process, were also grounds for exclusion. Patients were allowed to continue previous acne medications during the study, except isotretinoin. All patients had mixed types of atrophic acne scars, including ice pick, boxcar, and rolling scars, although one particular type predominated and was therefore used to classify the patients accordingly (Table 2).

Table 2. Patients' Predominant Scar Types

Patients, n (%)   Type of scar
7 (19%)           Significant rolling
14 (40%)          Shallow boxcar
6 (16%)           Deep boxcar
9 (25%)           Icepick scars

The whole procedure was fully explained and thoroughly discussed with the patients, covering the mechanism of laser treatment, the time required for the treatment, the behavior expected after the laser treatment, and the prospects of successful treatment; any unrealistic expectations of the end results were strongly discouraged.
The patients were informed about all risks that may be caused by the laser treatment and about the pre- and post-operative care. The patients were divided randomly into two equal groups; one group was treated with CO2 fractional ablative laser and the other group was treated with laser in addition to topical hyaluronic acid serum, in order to compare the final results between the two groups and assess the effect of hyaluronic acid on wound healing and post-operative complications. The patients in the second group were asked to apply hyaluronic acid serum (1% sodium hyaluronate serum) to the face twice daily for a month prior to the laser peel, in order to condition the skin before the procedure, and for three months after the laser peel. The serum was formulated with 1% high molecular weight (about 1000 kDa), pharmaceutical grade, bioengineered hyaluronic acid. Before the application of the product, the patients were asked to cleanse the skin using a gentle cleanser, and good sun protection was highly encouraged. Prior to each treatment, the face was cleansed with a mild non-abrasive detergent and gauzes soaked in 70% isopropyl alcohol. A topical anesthetic cream (EMLA, a eutectic mixture of the local anesthetics 2.5% lidocaine and 2.5% prilocaine; AstraZeneca LP, Wilmington, DE) was applied under an occlusive dressing for 1 hour and subsequently washed off to obtain a completely dry skin surface. Eyes were protected with opaque goggles. Systemic antiviral therapy (acyclovir 400 mg twice daily) was prescribed for each patient the night before the operation as prophylaxis and for five days postoperatively, along with topical antibiotics and a moisturizing cream; the patients were advised to apply a sunscreen for six weeks.
Three photos were taken before treatment for each patient, of both sides and the front of the face, with a digital camera (Sony DSC-T99 Cyber-shot® Digital Camera, 14.1 megapixel HD), and another set of photos was taken at each post-treatment visit using identical camera settings, lighting, and patient positioning. Systemic antibiotics and antivirals were prescribed for each patient. Fractional CO2 laser was delivered to the areas with treatment parameters that were adjusted based on scar severity and each patient's tolerability. These parameters were:
1. The depth the laser reaches: 1 mm to 7 mm, according to scar depth;
2. The distance between laser beam spots: 1 to 4 mm, according to scar size;
3. The power (laser energy): 30 to 70 mJ, according to scar severity.
The number of passes (laser applications) depended on the depth of the scar (deeper scars required more passes). A smoke evacuator and a forced-air cooling system (Zimmer MedizinSysteme, Cryo version 6) accompanied the procedure to improve patient comfort and compliance. Patients were asked to return for medical assessment two weeks after the operation and were then followed up monthly for 3 months. Therapeutic outcomes were assessed from the standardized digital photographs by the patient himself and by two blinded dermatologists. The dermatologists' evaluation and the patients' self-assessment of the level of improvement used the following five-point scale: 0=no change; 1=slight improvement (0–25%); 2=moderate improvement (26–50%); 3=significant improvement (51–75%); 4=excellent improvement (>75%). The two assessors were blinded to the order of the photographs. The evaluators were asked to perform two actions: first, to identify the photograph that showed the better scar appearance; second, to rate the difference in the severity of the acne scars using the above-mentioned scale.
In addition, the participants were asked to report any cutaneous or systemic side effects associated with the laser treatment, particularly erythema, edema, crusts, erosions, and pain. A 0–3 scale was used to rate the level of discomfort from side effects during and after the procedure: 1 = mild; 2 = moderate; 3 = severe. Analgesics and sunscreens were prescribed post-treatment, and the patients were advised to maintain strict sun protection, particularly during the first ten post-treatment days. Data were analyzed with the chi-square test, and a P value < 0.05 was considered statistically significant; descriptive data are presented as frequencies, percentages, figures, and tables.

3. Results

Thirty-six patients (26 females and 10 males) were included in the study. All patients completed the study, including the 3-month follow-up period. Most patients had mixed types of atrophic acne scars, including ice pick, boxcar, and rolling scars. The patients were divided equally into two groups (18 patients in each group). The first group was treated with fractional ablative CO2 laser alone and the second group was treated with the laser plus topical hyaluronic acid serum. In the second group, according to the patients' self-assessment of improvement, the results were remarkable and showed a high level of satisfaction (Table 3). The results after 3 months of follow-up were as follows: three patients (16.6%) reported excellent improvement, eight patients (44.4%) significant improvement, four patients (22.2%) moderate improvement, and three patients (16.6%) mild improvement in the appearance of the acne scars. The results were significant, as indicated by the P value of 0.001.

Shakir J. Alsaedy et al.: The Efficacy of Topical Hyaluronic Acid Serum in Acne Scar Patients Treated with Fractional CO2 Laser

Table 3.
Patients' Self-Assessment of Level of Improvement in Response to Treatment by CO2 Laser Combined with Topical Hyaluronic Acid

Time    Score 1      Score 2      Score 3      Score 4     P value
1 wk    7 (38.8%)    9 (50%)      2 (11.1%)    -           0.001
4 wk    6 (33.3%)    8 (44.4%)    3 (16.6%)    1 (5.5%)
8 wk    4 (22.2%)    5 (27.7%)    6 (33.3%)    3 (16.6%)
12 wk   3 (16.6%)    4 (22.2%)    8 (44.4%)    3 (16.6%)

Table 4. Dermatologists' Assessment of Level of Improvement in Response to Treatment by CO2 Laser Combined with Topical Hyaluronic Acid

Time    Score 1      Score 2      Score 3      Score 4     P value
1 wk    6 (33.3%)    8 (44.4%)    3 (16.6%)    1 (5.5%)    0.002
4 wk    5 (27.7%)    7 (38.8%)    4 (22.2%)    2 (11.1%)
8 wk    2 (11.1%)    3 (16.6%)    8 (44.4%)    4 (22.2%)
12 wk   1 (5.5%)     5 (27.7%)    9 (50%)      3 (16.6%)

Table 5. Dermatologists' Assessment of Level of Improvement in Response to Treatment by CO2 Laser Alone

Time    Score 1      Score 2      Score 3      Score 4     P value
1 wk    6 (33.3%)    7 (38.8%)    4 (22.2%)    1 (5.5%)    0.002
4 wk    5 (27.7%)    8 (44.4%)    4 (22.2%)    1 (5.5%)
8 wk    3 (16.6%)    6 (33.3%)    6 (33.3%)    3 (16.6%)
12 wk   2 (11.1%)    5 (27.7%)    8 (44.4%)    3 (16.6%)

Table 6. Patients' Self-Assessment of Level of Improvement in Response to Treatment by CO2 Laser Alone

Time    Score 1      Score 2      Score 3      Score 4     P value
1 wk    11 (61.1%)   7 (38.8%)    -            -           0.001
4 wk    9 (50%)      8 (44.4%)    1 (5.5%)     -
8 wk    8 (44.4%)    6 (33.3%)    3 (16.6%)    1 (5.5%)
12 wk   5 (27.7%)    6 (33.3%)    4 (22.2%)    3 (16.6%)

According to the dermatologists' assessment of the patients in the second group (Table 4), the proportion with excellent improvement rose from 5.5% at week 1 to 16.6% after 3 months. The significant-improvement category showed the major response, increasing from 16.6% at week 1 to 50% after 3 months, which is a strong indicator of the overall results. The final results after 3 months were as follows: three patients (16.6%) showed excellent improvement, nine patients (50%) significant improvement, five patients (27.7%) moderate improvement, and one patient (5.5%) mild improvement in the appearance of the acne scars. The results were significant, as indicated by the P value of 0.002.
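To illustrate the kind of chi-square analysis named in the methods, the shift in score distribution between week 1 and week 12 in Table 3 can be tested as a 2 x 4 contingency table. This is our own sketch (implemented from the standard Pearson formula, not the authors' analysis code); the 7.815 critical value is the chi-square cutoff for df = 3 at alpha = 0.05.

```python
# Pearson chi-square test of homogeneity on the Table 3 score counts.
def chi_square_statistic(observed_rows):
    """Pearson chi-square statistic for an r x c contingency table."""
    n_rows = len(observed_rows)
    n_cols = len(observed_rows[0])
    row_totals = [sum(row) for row in observed_rows]
    col_totals = [sum(observed_rows[r][c] for r in range(n_rows))
                  for c in range(n_cols)]
    grand_total = sum(row_totals)
    chi2 = 0.0
    for r in range(n_rows):
        for c in range(n_cols):
            expected = row_totals[r] * col_totals[c] / grand_total
            if expected > 0:
                chi2 += (observed_rows[r][c] - expected) ** 2 / expected
    return chi2


# Counts of patients scoring 1-4 at week 1 and week 12 (Table 3)
week_1 = [7, 9, 2, 0]
week_12 = [3, 4, 8, 3]
chi2 = chi_square_statistic([week_1, week_12])
CRITICAL_DF3_P05 = 7.815  # chi-square critical value, df = 3, alpha = 0.05
print(chi2 > CRITICAL_DF3_P05)  # the shift toward higher scores is significant
```

Note that with several expected cell counts below 5, an exact test would be more defensible in practice; the sketch only illustrates the mechanics.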
The first group of patients (those treated with laser alone) showed a different level of improvement according to the dermatologists' assessment (Table 5) and the patients' self-assessment (Table 6). After three months, the results according to the dermatologists' assessment were as follows: three patients (16.6%) showed excellent improvement, eight patients (44.4%) significant improvement, five patients (27.7%) moderate improvement, and two patients (11.1%) mild improvement in the appearance of the acne scars. The results were significant, as indicated by the P value of 0.002. The results after three months according to the patients' self-assessment were as follows: three patients (16.6%) reported excellent improvement, four patients (22.2%) significant improvement, six patients (33.3%) moderate improvement, and five patients (27.7%) mild improvement. The results were significant, as indicated by the P value of 0.001. The laser treatment was generally well tolerated. All participants experienced treatment-related pain, but no additional anesthesia was needed. All participants reported mild erythema for approximately 2-3 days, but some experienced severe erythema, especially among patients treated with laser alone. Eight patients (44.4%) in the combination group reported edema 24 hours after laser treatment, lasting five days, compared with 83.3% in the laser-alone group, in whom it lasted ten days. Peeling began on the third day and was complete by the fifth day in the majority of patients given the combination treatment, lasting 7 days in 16.6%, whereas it lasted 10 days in patients treated with laser alone. Social activity could resume as early as 7 days after laser treatment combined with hyaluronic acid, while it took 14 days to return to normal social life in patients treated with laser alone.
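As a hedged illustration (our own sketch, not the authors' analysis), the difference in edema rates between the two groups (8/18 with hyaluronic acid vs. 15/18 with laser alone) could be examined with a standard two-proportion z-test using the pooled proportion:

```python
import math


def two_proportion_z(x1, n1, x2, n2):
    """z statistic for H0: p1 == p2, using the pooled proportion."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p2 - p1) / se


# Edema counts: 8 of 18 with HA serum vs. 15 of 18 with laser alone
z = two_proportion_z(8, 18, 15, 18)
print(abs(z) > 1.96)  # exceeds the two-sided 5% critical value
```

With such small groups an exact test would again be preferable; the point is only to show how the reported proportions map onto a formal comparison.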
Other possible adverse events related to laser treatment in general, such as pigmentary alterations (hyperpigmentation), bleeding, vesiculation, crust, scarring, and infection, were not observed (Tables 7-8). Figures 1-4 show sample results from the second group (laser plus hyaluronic acid) at three months compared with the pre-operative state.

Table 7. Adverse Effects Related to Treatment by Fractional CO2 Laser With and Without Topical Hyaluronic Acid

Adverse effect            Without HA, no. (%)    With HA, no. (%)
Treatment-related pain    18 (100%)              18 (100%)
Mild erythema             18 (100%)              18 (100%)
Severe erythema           6 (33.3%)              2 (11.1%)
Edema                     15 (83.3%)             8 (44.4%)
Severe peeling            6 (33.3%)              3 (16.6%)
Bleeding                  -                      -
Hyperpigmentation         -                      -
Vesiculation              -                      -
Crust                     -                      -
Scarring                  -                      -
Infection                 -                      -

Table 8. Recovery Time Related to Treatment by Fractional CO2 Laser With and Without Topical Hyaluronic Acid

Parameter          Recovery time without HA    Recovery time with HA
Erythema           5 days                      3 days
Edema              10 days                     5 days
Peeling            12 days                     6 days
Social activity    14 days                     7 days

Figure 1. Patient 1: Pre- and 3 months post-operatively

4. Discussion

Hyaluronan is a polysaccharide polymer composed of chains of the monosaccharides glucuronic acid and N-acetylglucosamine. A large variety of cells are able to produce hyaluronan; however, the most important for the wound healing response appear to be keratinocytes, fibroblasts, and platelets [21]. The skin normally contains the highest concentration of hyaluronan in the body [22]; however, levels of free hyaluronan in the blood are low because of its rapid clearance by the liver [23].

Figure 2. Patient 2: Pre- and 3 months post-operatively
Figure 3. Patient 3: Pre- and 3 months post-operatively
Figure 4.
Patient 4: Pre- and 3 months post-operatively

During tissue trauma or following infection, hyaluronan accumulates and stimulates immune cells at the injury site to express inflammatory mediators [24-26]; higher levels were also found in newly formed granulation tissue [27]. Hyaluronan is a highly hygroscopic (moisture-retaining) compound [28]. By maintaining a moist wound environment, hyaluronan protects cells from the effects of desiccation and assists in cell movement, both by helping a dividing cell dissociate from its substratum and by providing a hydrated matrix that facilitates easier cell movement [29]. In addition, it affects the migration of fibroblasts to the wound site [30] through interaction with the hyaluronan receptors CD44 and RHAMM, via which it activates intracellular signaling pathways [31]. Atrophic acne scars occur as a consequence of impaired resolution of wound healing after damage to sebaceous follicles during active inflammation [32], and are usually classified into ice pick, rolling, and boxcar scars according to shape and depth [33]. Treatment options include laser skin resurfacing, chemical peeling, dermabrasion/microdermabrasion, tissue augmentation, and various other modalities. Fractional photothermolysis (FP) is a relatively new technology for skin resurfacing, introduced in 2004 [34]. It depends on generating multiple non-contiguous zones of thermal damage (MTZ) in the epidermis and dermis while sparing the viable tissue surrounding each MTZ. The untreated healthy skin remains intact and actually aids the repair process, promoting rapid healing with only a day or two of downtime [35]. FP stimulates epidermal turnover and dermal collagen remodelling, which leads to significant improvements in a variety of scars [36, 37].
This technology has been used in a wide variety of skin conditions, including (but not limited to) photo-aging, periorbital wrinkling, acne scarring, melasma, and other pigmented lesions [38]. The results in the first group were very satisfactory, as nearly 72% of patients showed a moderate to significant response according to the dermatologists' assessment. These results were comparable to those of another study investigating the effect of CO2 laser in treating acne scars, conducted in Iraq a few years earlier [39]. The results are significant, as indicated by the P value of 0.002. The results in the second group were very good, as more than 77% of patients showed moderate to significant improvement. However, the patients' self-assessment was lower than that of the dermatologists (66%); this might be attributed to the fact that patients usually apply more subjective than objective criteria and tend to have higher expectations of the end results than the actual outcome. The results of both assessments (patients and dermatologists) are significant, as indicated by P values of 0.001 and 0.002, respectively. All patients showed some level of improvement, ranging from mild to excellent, after only one session of laser treatment. In fact, mild improvement was observed in only one patient according to the dermatologists' assessment, although according to the patients' self-assessment the results were less remarkable (as mentioned above). This is a strong indicator of the overall results and of the level of improvement noticed by the patients in general. The final outcome of laser treatment is best read 3-6 months post-operatively, the time usually needed for new collagen remodelling [40]; this was the interval we used to follow up our patients and read their end results, which differed significantly from the results at one week post-operatively.
The effect of adding topical hyaluronic acid to laser treatment is evident. According to previous studies investigating the effect of fractional ablative CO2 laser on acne scars, 70% of CO2 laser-treated sites were graded as having moderate to significant improvement of scars [39, 41]. Although those end results were not significantly different from ours after 3 months of follow-up (77% showed moderate to significant improvement), the post-operative pain, edema, erythema, and duration of peeling in our combination group were milder and more tolerable than those associated with CO2 laser treatment alone. In contrast to CO2 laser resurfacing alone, the physiologic changes in wound healing treated with hyaluronan serum allow the skin to recover faster [42]. Pigment alteration, a common side effect of CO2 laser, was not reported in any of our patients receiving hyaluronan, even those who did not adhere to strict sunscreen use. It has been clearly demonstrated that hyaluronic acid not only offers beneficial effects to the skin but also influences the expression of many genes, including those contributing to keratinocyte differentiation and the formation of intercellular tight junction complexes, which are reported to be reduced in aged and photodamaged skin [43]. HA has been shown to promote the migration and maturation of keratinocytes during re-epithelialization, and it has even been proposed as a possible marker of effective healing in soft tissues [44]. Its activity is manifested in granulation tissue, where it is abundantly produced and counters the damage induced by reactive oxygen intermediates [45]. Hyaluronan also has an active role in modifying the inflammatory response: it is thought to moderate inflammation through specific interactions with constituents of the inflammatory response, stabilizing cytokine activity and reducing protease-induced damage [46].
In procedures aiming at aesthetic improvement, patients' perception of the treatment outcome appears to be most important because it has a direct impact on their body image and self-esteem [47]. Good outcomes can be obtained with the CO2 laser alone, but when topical hyaluronic acid serum is added, the results are noteworthy: recovery time is considerably shortened and traditional post-resurfacing sequelae are absent. Consequently, patients can return rapidly to their social or work environment. In general, the addition of hyaluronic acid to CO2 laser resurfacing not only enhances the effect of the laser but also provides faster recovery and fewer side effects compared with CO2 laser resurfacing alone.

5. Conclusions

1. Fractional CO2 photothermolysis can be a safe and effective option for the treatment of acne scars in Iraqi skin.
2. Combination treatment with topical hyaluronic acid serum can constitute a synergistic approach for optimal outcomes.
3. Fractional CO2 photothermolysis plus topical hyaluronic acid serum was associated with substantial improvement in the appearance of all types of acne scars, with fewer side effects.
4. Most patients began to show visible improvement after only one session and, according to the visual assessments of both patients and dermatologists, improvement continued to occur even 3 months after the operation.

REFERENCES

[1] JP Leeming, KT Holland, WJ Cunliffe. The pathological and ecological significance of micro-organisms colonizing acne vulgaris comedones. J Med Microbiol 1985; 20:11-16.
[2] JFB Norris, WJ Cunliffe. A histological and immunocytochemical study of early acne lesions. Br J Dermatol 1988; 118:651-9.
[3] HE Knaggs, DB Holland, C Morris, EJ Wood, WJ Cunliffe. Quantification of cellular proliferation in acne using the monoclonal antibody Ki-67. J Invest Dermatol 1994; 102:89-92.
[4] G Goodman. Post acne scarring: a review.
J Cosmet Laser Ther 2003; 5:77-95. GJ Goodman, JA Baron. The management of postacne scarring. Dermatol Surg 2007; 33:1175-88.
[5] CA Nanni, TS Alster. Complications of carbon dioxide laser resurfacing: an evaluation of 500 patients. Dermatol Surg 1998; 24:315-320.
[6] TS Alster. Cutaneous resurfacing with CO2 and erbium:YAG lasers: preoperative, intraoperative and postoperative considerations. Plast Reconstr Surg 1999; 103:619-632.
[7] MA Trelles. Laser ablative resurfacing for photorejuvenation based on more than a decade's experience and 1200 patients: personal observations. J Cosmet Dermatol 2000; 2:2-13.
[8] J Ruiz-Esparza, OL Gomez de la Torre, JM Barba Gomez. UltraPulse laser skin resurfacing of the hands. Dermatol Surg 1989; 24:69-70.
[9] S Iacob, CB Knudson. Hyaluronan fragments activate nitric oxide synthase and the production of nitric oxide by articular chondrocytes. Int J Biochem Cell Biol 2006; 38:123-33.
[10] BP Toole. Hyaluronan in morphogenesis. J Intern Med 1999; 242:35-40.
[11] G Tzircotis, RF Thorne, CM Isacke. Chemotaxis towards hyaluronan is dependent on CD44 expression and modulated by cell type variation in CD44-hyaluronan binding. J Cell Sci 2005; 118:5119-28.
[12] CM McKee, MB Penno, M Cowman. Hyaluronan fragments induce chemokine gene expression in alveolar macrophages: the role of HA size and CD44. J Clin Invest 1996; 98:2403-13.
[13] B Oertli, X Fan, RP Wuthrich. Characterization of CD44-mediated hyaluronan binding by renal tubular epithelial cells. Nephrol Dial Transplant 1998; 13:271-8.
[14] N Forsberg, N Von Malmborg, K Madsen. Receptors for hyaluronan on corneal endothelial cells. Exp Eye Res 1994; 59:689-96.
[15] V Liguori, C Guillemin, GF Pesce, RO Mirimanoff, J Bernier. Double-blind, randomized clinical study comparing hyaluronic acid cream to placebo in patients treated with radiotherapy. Radiother Oncol 1997; 42:155-61.
[16] MB Brown, C Marriott, GP Martin.
A study of the transdermal drug delivery properties of hyaluronan. In: Willoughby DA, ed. Hyaluronan in Drug Delivery. Royal Society of Medicine Press, London, 1995; 53-71.
[17] EA Balazs, JL Denlinger. Clinical uses of hyaluronan. Ciba Found Symp 1989; 143:265-280.
[18] W Manuskiatti, HI Maibach. Hyaluronic acid and skin: wound healing and aging. Int J Dermatol 1996; 35:539-544.
[19] H Trommer, S Wartewig, R Bottcher. The effects of hyaluronan and its fragments on lipid models exposed to UV irradiation. Int J Pharm 2003; 254:223-234.
[20] PW Noble. Hyaluronan and its catabolic products in tissue injury and repair. Matrix Biol 2002; 21:25-9.
[21] JA Brown. The role of hyaluronic acid in wound healing's proliferative phase. J Wound Care 2004; 13(2):48-51.
[22] L Juhlin. Hyaluronan in skin. J Intern Med 1997; 242(1):61-66.
[23] S Berg, I Jansson, FJ Hesselvik. Hyaluronan: relationship to hemodynamics and survival in porcine injury and sepsis. Crit Care Med 1992; 20(9):1315-1321.
[24] D Jiang, J Liang, PW Noble. Hyaluronan in tissue injury and repair. Annu Rev Cell Dev Biol 2007; 23:435-461.
[25] B Gerdin, R Hillgren. Dynamic role of hyaluronan (HYA) in connective tissue activation and inflammation. J Intern Med 1997; 242(1):49-55.
[26] PH Weigel, GM Fuller, RD LeBoeuf. A model for the role of hyaluronic acid and fibrin in the early events during the inflammatory response and wound healing. J Theor Biol 1986; 119(2):219-234.
[27] O Oksala, T Salo, R Tammi. Expression of proteoglycans and hyaluronan during wound healing. J Histochem Cytochem 1995; 43(2):125-135.
[28] SR King, WL Hickerson, KG Proctor. Beneficial actions of exogenous hyaluronic acid on wound healing. Surgery 1991; 109(1):76-84.
[29] BP Toole. Hyaluronan in morphogenesis. J Intern Med 1997; 242(1):35-40.
[30] IR Ellis, J Banyard, SL Schor. Differential response of fetal and adult fibroblasts to cytokines: cell migration and hyaluronan synthesis. Development 1997; 124(8):1593-1600.
[31] IR Ellis, SL Schor. Differential effects of TGF-beta1 on hyaluronan synthesis by fetal and adult skin fibroblasts: implications for cell migration and wound healing. Exp Cell Res 1996; 228(2):326-333.
[32] DB Holland, AH Jeremy, SG Roberts, DC Seukeran. Inflammation in acne scarring: a comparison of the responses in lesions from patients prone and not prone to scar. Br J Dermatol 2004; 150:72-81.
[33] CI Jacob, JS Dover, MS Kaminer. Acne scarring: a classification system and review of treatment options. J Am Acad Dermatol 2001; 45:109-17.
[34] D Manstein, GS Herron, RK Sink, H Tanner, RR Anderson. Fractional photothermolysis: a new concept for cutaneous remodeling using microscopic patterns of thermal injury. Lasers Surg Med 2004; 34:426-38.
[35] AS Glaich, Z Rahman, LH Goldberg, PM Friedman. Fractional resurfacing for the treatment of hypopigmented scars: a pilot study. Dermatol Surg 2007; 33:289-94; discussion 93-4.
[36] DS Behroozan, LH Goldberg, T Dai, RG Geronemus, PM Friedman. Fractional photothermolysis for the treatment of surgical scars: a case report. J Cosmet Laser Ther 2006; 8:35-8.
[37] TS Alster, EL Tanzi, M Lazarus. The use of fractional laser photothermolysis for the treatment of atrophic scars. Dermatol Surg 2007; 33(3):295-299.
[38] SA Sukal, RG Geronemus. Fractional photothermolysis. J Drug Dermatol 2008; 7:118-122.
[39] MY Saeed. The efficacy and safety of ablative fractional CO2 laser for treatment of acne scar. JSMC 2012; 2(1):29-35.
[40] DJ Goldberg. New collagen formation after dermal remodeling with an intense pulsed light source. J Cutan Laser Ther 2000; 2:59-61.
[41] W Manuskiatti. Comparison of fractional Er:YAG and carbon dioxide lasers in resurfacing of atrophic scars in Asians. Dermatol Surg 2013 Jan; 39(1 Pt 1):111-20.
[42] RD Price, S Myers, IM Leigh.
The role of hyaluronic acid in wound healing: assessment of clinical evidence. Am J Clin Dermatol 2005; 6(6):393-402.
[43] KF Cutting. Wound healing through synergy of hyaluronan and an iodine complex. J Wound Care 2011; 20:424-30.
[44] V Voelcker, C Gebhardt, M Averbeck, A Saalbach. Hyaluronan fragments induce cytokine and metalloprotease upregulation in human melanoma cells in part by signaling via TLR4. Exp Dermatol 2008; 17:100-7.
[45] G Weindl, M Schaller, M Schäfer-Korting, HC Korting. Hyaluronic acid in the treatment and prevention of skin diseases: molecular biological, pharmaceutical and clinical aspects. Skin Pharmacol Physiol 2004; 17:207-13.
[46] HG Wisniewski, J Vilcek. TSG-6: an IL-1/TNF-inducible protein with anti-inflammatory activity. Cytokine Growth Factor Rev 1997; 8(2):143-156.
[47] SJ Al-Saedy, MM Al-Hilo, SH Alshami. Treatment of acne scars using fractional Erbium:YAG laser. Am J Dermatol Venereol 2014; 3(2):43-9.

Evaluation of Digital Photography from Model Aircraft for Remote Sensing of Crop Biomass and Nitrogen Status

E. RAYMOND HUNT JR. (erhunt@hydrolab.arsusda.gov), USDA ARS Hydrology and Remote Sensing Laboratory, Beltsville Agricultural Research Center, Building 007 Room 104, 10300 Baltimore Avenue, Beltsville, MD 20705, USA

MICHEL CAVIGELLI, USDA ARS Sustainable Agricultural Systems Laboratory, Beltsville Agricultural Research Center, Building 001 Room 245, 10300 Baltimore Avenue, Beltsville, MD 20705, USA

CRAIG S. T. DAUGHTRY, JAMES MCMURTREY III, AND CHARLES L. WALTHALL, USDA ARS Hydrology and Remote Sensing Laboratory, Beltsville Agricultural Research Center, Building 007 Room 104, 10300 Baltimore Avenue, Beltsville, MD 20705, USA

Abstract.
Remote sensing is a key technology for precision agriculture to assess actual crop conditions. Commercial, high-spatial-resolution imagery from aircraft and satellites is expensive, so the costs may outweigh the benefits of the information. Hobbyists have been acquiring aerial photography from radio-controlled model aircraft; we evaluated this very-low-cost, very-high-resolution digital photography for use in estimating the nutrient status of corn and the crop biomass of corn, alfalfa, and soybeans. Based on conclusions from previous work, we optimized an aerobatic model aircraft for acquiring pictures using a consumer-oriented digital camera. Colored tarpaulins were used to calibrate the images; there were large differences in digital number (DN) for the same reflectance because of differences in the exposure settings selected by the digital camera. To account for differences in exposure, a Normalized Green–Red Difference Index (NGRDI = (Green DN − Red DN)/(Green DN + Red DN)) was used; this index was linearly related to the normalized difference of the green and red reflectances, respectively. For soybeans, alfalfa, and corn, dry biomass from zero to 120 g m⁻² was linearly correlated with NGRDI, but for biomass greater than 150 g m⁻² in corn and soybean, NGRDI did not increase further. In a fertilization experiment with corn, NGRDI did not show differences in nitrogen status, even though areas of low nitrogen status were clearly visible on late-season digital photographs. Simulations with the SAIL (Scattering of Arbitrarily Inclined Leaves) canopy radiative transfer model verified that NGRDI would be sensitive to biomass before canopy closure and that variations in leaf chlorophyll concentration would not be detectable. There are many advantages of model aircraft platforms for precision agriculture; currently, the imagery is best visually interpreted.
Automated analysis of within-field variability requires more work on sensors that can be used with model aircraft platforms.

Keywords: radio-controlled model aircraft, color imagery, remote sensing, biomass, nutrient status, Normalized Green–Red Difference Index

Precision Agriculture, 6, 359–378, 2005. © 2005 Springer Science+Business Media, Inc. Manufactured in The Netherlands.

Introduction

There is a high potential for remote sensing to be used operationally in precision agriculture (Lu et al., 1997; Moran et al., 1997; Pierce and Nowak, 1999; Barnes et al., 2003; Pinter et al., 2003). However, the adoption of remote sensing by farmers is limited by training, costs, and timely availability of imagery (Robert, 2002). Satellite and airborne sensors either have high cost and high spatial resolution (sufficient for precision agriculture) or low cost and low spatial resolution (Hunt et al., 2002, 2003a). Satellite data may have all of the important attributes for precision agriculture, but may not be useful because of cloud cover during acquisition. Crop nutrient requirements can be estimated from chlorophyll concentration and biomass can be estimated from Leaf Area Index (LAI), both of which can be remotely sensed (Bausch and Duke, 1996; Blackmer et al., 1996b; Daughtry et al., 2000; Scharf et al., 2002; Doraiswamy et al., 2003). Sensors do not have to be technologically advanced; aerial photography with either color or color-infrared film is useful for mapping out biomass and nitrogen requirements (Blackmer et al., 1996a; Scharf and Lory, 2002; Scharf et al., 2002). Furthermore, platforms do not have to be sophisticated; ultralight aircraft (Clevers, 1988; Waring et al., 1995; Booth et al., 2003; Hunt et al., 2003b), blimps (Inoue et al., 2000), powered parasails (Moran et al., 2003; Lopez and Robert, 2003), and helicopters (Hongoh et al., 2001) have been used as high-resolution remote sensing platforms.
Quilter and Anderson (2000) suggested that aerial photography from radio-controlled model aircraft can provide very-low-cost, very-high-resolution imagery. Photographic imagery acquired from model-aircraft platforms has a substantially lower cost for a given spatial resolution compared with airborne and satellite sensors (Hunt et al., 2002). Hunt et al. (2002, 2003a) used color-infrared film with a low-cost automatic camera and found the Normalized Difference Vegetation Index (NDVI) from the model aircraft was comparable with NDVI from advanced sensors. Some of the problems encountered in the preliminary work were film overexposure, lack of spectral and radiometric calibration, and the need for long take-off and landing areas (Hunt et al., 2003a). Radio-controlled model aircraft show promise as a low-cost platform for remote sensing; however, more work is required to optimize the system for operational use. Because of the spectral differences between vegetation and soil at green and red wavelengths, we hypothesized that color imagery would also be useful for determining both biomass and nutrient status, allowing the use of light-weight digital cameras. We also made further improvements to a fixed-wing model aircraft to test its usefulness as a remote sensing platform for precision agriculture.

Materials and methods

Model aircraft

Different types of model aircraft have different capabilities, any one of which can be advantageous or disadvantageous under different circumstances (Hunt et al., 2002, 2003a). Compromises must be made among ease of flying, stability in wind, handling flight failures, distance covered, and takeoff/landing requirements. Aerobatic model aircraft can be flown from mowed, grassy fields and require shorter distances for takeoff and landing compared to other fixed-wing aircraft. We used an Edge 540T
We used an Edge 540T RAYMOND HUNT ET AL.360 fixed wing aircraft (Aeroworks, Inc., Aurora, CO, USA) with a 20-cm 3 two-stroke engine (Webra Mode1lmotoren, AG, Enzeafeld, Austria) and JR electric servo motors (Horizon Hobbies, Champaign, IL, USA), JR radio transmitters, and JR receivers (Figure 1(a)). 1 The model aircraft was modified to reduce mass by removing some redundant structural material and to increase strength by using carbon–fiber-composite replacement parts. The engine muffler was mounted to vent exhaust above the plane, so that the oily exhaust would not deposit residue on the bottom of the fuselage. We constructed a balsa wood box and mounted the box onto the fuselage bottom to hold a camera in place (Figure 1(b)). A JR electric servo motor was attached to the box, which pressed the camera shutter-release button under control by the radio transmitter (Figure 1(b)). Flights were generally made between 0930 and 1130 h, Eastern Daylight Time, as a compromise between wind speed and solar elevations. High solar elevations reduce Figure 1. (a) Radio controlled model aircraft with 1.52 m length and 1.65 m wingspan. (b) Digital camera mounted on the bottom of the aircraft fuselage. EVALUATION OF DIGITAL PHOTOGRAPHY 361 the amount of shadow in an image and early mornings had still wind conditions, with the wind speeds increasing over the day. It would generally take about 45 min after arrival to set up the airplane, fly the field, examine the images, and pack up. Remote sensing We used an Olympus D40 4.1-megapixel color digital camera (Olympus, Inc., Melville, NY, USA). The shutter speed was set at 0.001 s, and the aperture was adjusted by the camera to control light intensity. The lens was set to be wide angle with the focus distance set at infinity. At an altitude of about 200 m, several plots could be covered in one picture, with a pixel size of about 100 mm. 
After the plane landed, images from the digital camera were immediately downloaded to a laptop computer and examined, so that the spatial coverage of the photographs could be determined. The digital pictures (JPEG format) were opened with the ENVI image processing software (Research Systems, Inc., Boulder, CO, USA). The mean digital numbers for the red, green, and blue bands of each plot were determined by including the pixels in a "region of interest" and using the statistical tools in ENVI.

Radiometric and spectral calibration

Exposure settings on the digital camera are selected based on overall light intensity, which varies over time because of changes in solar elevation, atmospheric transmittance, and clouds (Gates, 1980). Radiometric calibration of digital numbers to reflectances corrects for these changes. The original idea was to use colored tarpaulins to calibrate the digital photographs to red, green, and blue reflectances (Moran et al., 2003). However, the camera's field of view was sufficiently small to prevent having the tarpaulins in each photograph, and the camera's exposure settings varied with each aerial photograph, so the calibration determined for one photograph could not be used for another. Five colored tarpaulins (beige, gray, green, red, and black) were used to test the camera's spectral calibration. The spectral bidirectional reflectance factors of the tarpaulins were determined under sunlight using an ASD FR Pro spectroradiometer (Analytical Spectral Devices, Inc., Boulder, CO, USA) and a Spectralon reference panel (Labsphere, Inc., North Sutton, NH, USA). The wavelengths covered by each band of the digital camera were not available from the manufacturer. We used the monochromatic light source from a SPEX 1680 Double Spectrometer (HORIBA Jobin Yvon, Inc., Edison, NJ, USA) and took photographs of the light output from 400 to 700 nm at 10-nm intervals.
The light source intensity was balanced to match the spectral distribution of sunlight (James McMurtrey, Maryn Butcher, and Larry Corp, personal communication). This procedure was replicated twice. The digital numbers were obtained from two locations around the 2-mm slit in each photograph. Digital numbers for each wavelength (DNλ) from 400 to 700 nm were obtained by linear interpolation for each 10-nm interval. Weighted reflectances of the tarpaulins for the red, green and blue bands (Rband) were calculated:

Rband = Σ Rλ DNλ / Σ DNλ   for λ from 400 to 700 nm   (1)

where Rλ is the spectral reflectance at wavelength λ.

Normalized Green–Red Difference Index

Topography (slope and azimuth) affects the surface irradiance (Gates, 1980), so different areas within an image with the same surface reflectance will have different digital numbers. The difference in digital numbers between the near-infrared and red bands is large for vegetation and small for soils (Huete, 2004), but the absolute difference is highly sensitive to variations in irradiance. To account for within-scene and between-date variations in irradiance, Rouse et al. (1974) developed the Normalized Difference Vegetation Index (NDVI), in which the difference in digital numbers between the near-infrared and red bands was normalized by dividing by the sum of the bands. Whereas most consumer-oriented digital cameras have detectors that are capable of detecting near-infrared wavelengths, filters are used to prevent near-infrared light from reaching the detector; hence NDVI cannot be calculated. There are also differences between the green and red bands for vegetation and soil, although these differences are not as large as with near-infrared bands.
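Equation (1) is simply a DN-weighted average of the spectral reflectances over the sampled wavelengths. A minimal sketch of the computation (the arrays are illustrative, not the paper's measured values):

```python
import numpy as np

def weighted_band_reflectance(spectral_reflectance, band_dn):
    """Eq. (1): R_band = sum(R_lambda * DN_lambda) / sum(DN_lambda).

    spectral_reflectance: target reflectance R_lambda at each sampled wavelength
    band_dn: the band's digital numbers DN_lambda at the same wavelengths,
             i.e. the camera's spectral response measured at 10-nm intervals
    """
    r = np.asarray(spectral_reflectance, dtype=float)
    dn = np.asarray(band_dn, dtype=float)
    return np.sum(r * dn) / np.sum(dn)
```

Wavelengths where the band responds strongly thus dominate the band reflectance, which is why the camera-weighted red reflectance of the red tarpaulin (25%) differs from definitions based on fixed wavelength ranges.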
We used the Normalized Green–Red Difference Index (NGRDI) to analyze the images from the digital camera:

NGRDI = (Green DN − Red DN) / (Green DN + Red DN)   (2)

where Green DN and Red DN are the digital numbers of the green and red bands, respectively. The difference between the green and red digital numbers differentiates between plants and soil, and the sum normalizes for variations in light intensity between different images. The possible range of NGRDI is from −1.0 to 1.0, but the actual variation of NGRDI is small for most targets. This index is mathematically related to other remotely sensed indices:

NGRDI = (ratio − 1) / (ratio + 1)   (3)

where the ratio is Green DN/Red DN, so conclusions based on NGRDI also apply to the ratio of Green DN/Red DN.

Alfalfa biomass

Alfalfa (Medicago sativa L. cultivar Cimarron) was planted in 2001 in field NG-1Z at Beltsville to study organic farming practices. The alfalfa was mowed for hay on July 15, 2002, and the radio-controlled model aircraft was flown on August 6, 2002. Cumulative precipitation for 2002 was 339 mm on August 6, which was 296 mm below normal (54 years of record), so there was little re-growth. Twenty-four plots were selected by eye to obtain the largest possible range of biomass, and located with a Precision Lightweight Global Positioning System Receiver (Rockwell-Collins International, Inc., Cedar Rapids, IA, USA), which has about 4-m horizontal accuracy. All plant material within a 0.25-m² frame was cut at soil level on August 8, 2002, dried at 60 °C, and weighed. The single digital image was georegistered using 25 ground control points to an accuracy of 5 m, and 25 pixels centered on the plot position were used to calculate the mean digital numbers of the red, green and blue bands.
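The NGRDI of Eqs. (2) and (3) can be applied per pixel or to plot-mean digital numbers; a minimal sketch that also demonstrates the algebraic equivalence of the two forms (the input values are illustrative only):

```python
def ngrdi(green_dn, red_dn):
    """Eq. (2): Normalized Green-Red Difference Index from digital numbers."""
    return (green_dn - red_dn) / (green_dn + red_dn)

def ngrdi_from_ratio(green_dn, red_dn):
    """Eq. (3): equivalent form based on the ratio Green DN / Red DN."""
    ratio = green_dn / red_dn
    return (ratio - 1.0) / (ratio + 1.0)
```

For example, plot-mean digital numbers of 120 (green) and 100 (red) give NGRDI = 20/220 ≈ 0.091 from either formula; vegetated targets push the index positive, while bare soil tends to be negative.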
Corn and soybean biomass

The USDA Farming Systems Project (FSP) at Beltsville, Maryland, is a long-term cropping systems trial designed to compare the agronomic and ecological performance of two conventional cropping systems and three organic cropping systems (http://www.ba.ars.usda.gov/sasl/research/fsp.html; last accessed November 30, 2004). Full-size farming equipment is used for all crop management operations. The soils are well-drained, moderately well-drained and somewhat poorly drained silt loam Ultisols. Corn (Zea mays L. cultivar Fielders Choice 95+) and soybean (Glycine max L. cultivar Northrup King R-366) were used to examine the relationship between NGRDI and biomass. Plot sizes were 9 m × 111 m. Precipitation in 2003 was well above normal (Table 1), and the waterlogged soils interfered with planting crops in late May and early June. Because of the wet soil conditions, some plots were planted in late June, and for crops planted early, the loss of soil nitrogen via denitrification was apparent in flooded areas. To capture a full range of crop biomass levels, 20 sampling locations in each of corn and soybean were selected by eye to obtain a large range of biomass levels from no-till, conventional-till, and organic system plots. The model aircraft was flown on July 23, 2003, and biomass was measured the following day. At each location, three plants were cut at soil level and put into plastic bags, which were sealed until their wet weights were recorded. Then, plants were transferred to cloth bags, dried at 60 °C, and weighed. Plant population densities were determined at each sampling location by counting the number of plants within a 1-m radius. The center point for each sampling location was determined by measuring the distance from two perpendicular edges of the plot, so under any oblique view from the model aircraft, the location could be determined geometrically on the image without resorting to georegistration of a large number of images.
All of the pixels within a 1-m radius were used to calculate the mean digital numbers.

Corn nitrogen status

To determine when nutrient deficiency can be detected, corn (Zea mays L. cultivar Pioneer hybrid 111894) was planted on May 5, 2003 in field 5–8 at Beltsville, MD, with 20 kg N ha⁻¹ starter nitrogen (Table 1). The original design was to have an early and a late planting (Hunt et al., 2002, 2003a), but the wet weather prevented planting in late May/early June (Table 1). The modified design was a randomized complete block design with two replications of each applied nitrogen level. Each plot was 37 m long and consisted of 48 rows of corn planted 0.76 m apart in an east–west orientation. The plot boundaries were located with a Trimble GPS Total Station 4700 Real-Time Kinematic (Trimble, Inc., Sunnyvale, CA, USA) with horizontal accuracy of 30 mm and vertical accuracy of 50 mm. Corn shoots were cut at all of the plot boundaries so each plot could be identified in an image without image georegistration. The early planting had normal development (Table 1), with the growth stage determined by the leaf-collar method (Ritchie et al., 1993). Plots 1 through 4 and plots 5 through 8 were randomly assigned a nitrogen treatment (Table 2). Sidedress N was applied on July 11, 2003 at 0%, 25%, 50%, and 100% of the recommended rate of 125 kg N ha⁻¹ (Table 2). Before and after sidedress nitrogen, leaf chlorophyll concentration was measured according to Blackmer and Schepers (1995) using a Minolta SPAD-502 chlorophyll meter (Spectrum Technologies, Inc., Plainfield, IL, USA). In each plot, chlorophyll was measured on the topmost leaf with a visible leaf collar, midway from the leaf tip to the leaf collar, for every tenth plant in three different rows.
A general linear model analysis of variance procedure was used to determine separability of plant parameters, followed by a Student–Newman–Keuls multiple range test for assessment of significant mean separations, P ≤ 0.05 (SAS Institute, 1999). The aircraft was flown over the field on seven dates during 2003. All of the pixels within a plot were used to obtain the mean digital numbers. Plant population density of each plot was determined by counting all of the plants in six rows evenly spaced across the plot.

Table 1. Growth stage of corn in field 5–8 using the leaf-collar method (Ritchie et al., 1993)

Date       Day of year   Growth stage(a)   Cumulative precipitation (mm): 2003 / Mean / Departure
May 5      125           Planting          313 / 324 / −11
May 20     140           VE                389 / 377 / 12
May 30     150           V2                460 / 409 / 51
June 5     156           V4                477 / 433 / 44
June 9     160           V5                519 / 445 / 74
June 24    175           V7                616 / 491 / 125
June 30    181           V9                617 / 515 / 102
July 15    196           V14               702 / 561 / 141
July 30    211           VT                765 / 607 / 158
August 8   220           R1                778 / 635 / 143

Mean (1949–2002) and 2003 cumulative precipitation for the Beltsville Agricultural Research Center are shown.
(a) V—vegetative stage with the number of the oldest leaf with a visible leaf collar, E—seed leaf emerging from soil, T—tassel emerging from stalk, R—reproductive stage.

SAIL canopy model analyses

Because the use of NGRDI is not well established for vegetation remote sensing, we used the Scattering by Arbitrarily Inclined Leaves (SAIL) model to predict canopy reflectances for various LAI (Verhoef, 1984). The SAIL model is now used to prototype vegetation indices before testing in the field and to invert spectral reflectances to biophysical parameters (Zarco-Tejada et al., 2003). Spectral reflectances and transmittances of the uppermost, fully expanded corn leaves from the nutrient experiment were measured with the ASD spectroradiometer and a LI-1800-12 integrating sphere (LI-COR, Inc., Lincoln, NE, USA).
Spectral reflectance of the soil in the field was measured with the ASD spectroradiometer. We used other SAIL model parameters, particularly the leaf angle distribution, from Daughtry et al. (2000).

Results and discussion

Aerial photography

The images acquired from the radio-controlled model aircraft were generally in good focus and had good contrast (Figure 2). It was somewhat difficult to get pictures centered over the various fields, but during the average of 8–10 min of flying time, many pictures were taken by the digital camera. Video and timed-picture modes were also available with this camera, but these modes did not provide any advantages. Each usable picture required visible ground control points or plot boundaries so the location could be determined. If none of the pictures covered the area on initial inspection, the field was re-flown. Sudden changes in aircraft position due to wind can be seen as blurriness in some pictures (for example, Figure 2(b)), which was evident only upon close inspection of the image. The aerobatic model aircraft had superior performance in takeoffs and landings, which allowed overflights of more fields compared to the trainer model aircraft used by Hunt et al. (2002, 2003a). The field NG-1Z was flown by both aircraft, but alfalfa was only grown there in 2002, so the two systems could not be compared over the same target. The Farming Systems Project fields (corn and soybean biomass) and field 5–8 (corn nutrient status) could not have been flown by the trainer aircraft used by Hunt et al. (2002, 2003a), because suitable takeoff and landing areas were not available. However, the trainer model aircraft had more stability in the air and required less expertise to fly. The images in Figure 2 were also much improved over the initial attempts with a low-cost automatic camera and color-infrared film (Hunt et al., 2002, 2003a). Most of the reason for the better imagery was the short exposure time (0.001 s). High-quality, high-cost automatic film cameras generally have internal near-infrared sensors, which may expose color-infrared film. Furthermore, the low-cost cameras set the film speed to ISO 100, whereas the actual speed of the color-infrared film was ISO 200, thereby overexposing the color-infrared film (Hunt et al., 2002, 2003a). Another advantage we found for the digital camera was that the images were examined right after landing, which ensured the entire field was covered, whereas color-infrared film had to be developed. It is feasible to adapt digital color cameras with near-infrared filters (Yang et al., 2000; Milton, 2002), but it is difficult to mount these modified cameras onto model aircraft. Four-band sensors with a near-infrared band are generally much heavier and require external power, and therefore are not yet suitable for radio-controlled model aircraft.

Table 2. Plant density and SPAD chlorophyll-meter measurements of corn on two dates in 2003

N level (%)   Plot   Density (plants m⁻²)   Chlorophyll meter, June 24   Chlorophyll meter, August 6
0             2      5.40a                  34.7a,b                      39.4a
0             5      5.84b                  31.1b                        41.8a
25            4      5.40a,b                32.8b                        53.1b,c
25            6      5.36a                  25.4c                        50.7b
50            1      5.64a                  36.7a                        52.6b,c
50            8      5.47a                  24.4c                        49.8b
100           3      5.92b                  36.1a                        55.7c,d
100           7      6.03b                  24.8c                        58.7d

N level was the fraction of 125 kg N ha⁻¹ applied as sidedress. Values followed by the same letter within a column are not significantly different at P ≤ 0.05 using a Student–Newman–Keuls multiple range test.

Figure 2. Digital color images acquired by the radio-controlled model aircraft. (a) Field NG-1Z with alfalfa (August 8, 2002); (b) Farming Systems Project with both corn and soybean plots (July 23, 2003); and (c) field 5–8 with an applied nitrogen experiment (July 30, 2003). In (b), the colored tarpaulins (2.85 m × 2.85 m) were used for camera calibration. In (c), the plot numbers are shown.
Radiometric and spectral calibration

The spectral reflectances at visible wavelengths of each tarpaulin corresponded closely with the color of the five tarpaulins (Figure 3). The spectral sensitivity of the digital camera (Figure 4) generally matched the spectral sensitivity of the normal human eye (Wald, 1968). One notable exception was that the red band of the digital camera was sensitive to blue wavelengths (Figure 4); hence pictures of objects at 425 nm would appear purple rather than violet. Using the definition of the red color as the mean reflectance from 600 to 700 nm, the red-band reflectance of the red tarpaulin was 45%. However, taking the red spectral sensitivity of the human eye to be from 580 to 650 nm (Wald, 1968), the red-band reflectance was 30%. Using the typical remote-sensing definition of spectral sensitivity as the wavelengths at 50% of maximum response, the wavelength range of the red band was 580–660 nm, so the red reflectance was 33%. Using the weighted spectral response of the digital camera (Eq. (1)), the mean red reflectance was only 25% for the red tarpaulin. The weighted green reflectance of the green tarpaulin was 20%. The weighted green and red reflectances for the beige tarpaulin were 27% and 29%, respectively. About half of the images of the colored tarpaulins (see Figure 2(b)) had mean digital numbers greater than 250 for the green and red bands of the green, red, and beige tarpaulins, indicating the camera detector was light-saturated due to the high weighted reflectances. For these cases, the data for the tarpaulins were not analyzed further; however, other areas in these images were generally not light-saturated and contained useful data. For the remaining images of the colored tarpaulins, there was a linear relationship between the digital numbers and the weighted reflectances for each band (data not shown).
Because the camera aperture changed with changing light level, the slope and intercept for a single image differed considerably from those of the other images. Thus, the regression coefficients between digital numbers and reflectances could not be used to calibrate the digital numbers to reflectances even for adjacent images, necessitating an alternative method of radiometric calibration. Normalized indices such as the NGRDI were designed to reduce variation caused by differences in irradiance and exposure. Mean digital numbers of the red and green bands were used to calculate NGRDI for each tarpaulin in the images. There was a linear relationship between the camera NGRDI and the NGRDI calculated for the five colored tarpaulins from the weighted reflectances (Figure 5). Whereas the regression slope and intercept were significantly different from 1.0 and 0.0, respectively, the data fell very close to the 1:1 line (Figure 5). Therefore, NGRDI from the digital numbers were sufficiently close to the values calculated from the reflectances to be used without further radiometric calibration.

Figure 3. Spectral reflectances of five colored tarpaulins.

Figure 4. Spectral response of the digital camera's red, green and blue bands. The monochromatic light source was adjusted to simulate the daylight spectrum.

NGRDI and biomass

For alfalfa, corn, and soybean, NGRDI was linearly related to biomass up to about 120 g m⁻² (Figures 6–8). At higher levels of biomass for corn and soybean, NGRDI saturated at a maximum value, which averaged 0.05 for corn and 0.13 for soybean, respectively (Figures 7 and 8). The maximum amount of biomass in the alfalfa field was low (Figure 6); hence no saturation of NGRDI was observed.

Corn nitrogen status

Early in the season, there were significant differences among the 8 plots in mean chlorophyll content (Table 2).
Plots 1–3 had the highest relative elevation (34.5 m) as determined by global positioning system data, plots 4 and 5 had intermediate elevation (34.0 m), and plots 6–8 had the lowest elevation (33.6 m) and were frequently flooded in 2003. The low chlorophyll contents of plots 6–8 reflected their low nitrogen status. After sidedress nitrogen was applied, mean chlorophyll content was related to application rate for the 0% and 100% treatments (Table 2). There was no significant difference in mean chlorophyll content between the 25% and 50% nitrogen treatments after application of sidedress nitrogen, but the chlorophyll contents were intermediate between the 0% and 100% treatments (Table 2).

Figure 5. Comparison of the Normalized Green–Red Difference Index for five colored tarpaulins between camera digital numbers and weighted spectral reflectances. The solid line is a least-squares regression (Camera NGRDI = 0.018 + 0.87 reflectance NGRDI, R² = 0.96, se(ŷ) = 0.037, P < 0.00001) and the dotted line is the 1:1 line. The intercept is significantly different from 0.0 [95% confidence interval = (0.0073, 0.023)] and the slope is significantly different from 1.0 [95% confidence interval = (0.82, 0.92)]. Data were used only when the digital numbers were <250.

NGRDI started out at a low level, about −0.03, from emergence through the V2 growth stage (Figure 9, Table 1). In July, NGRDI reached a maximum value of 0.02 (Figure 9). In May, soils were wet and dark, so the difference between Green DN and Red DN was small for all plots. Large differences in NGRDI among plots were seen from late June through the beginning of August (Figure 9). There were small but significant differences in plant density among the plots (Table 2), which were not correlated with NGRDI on any date (Figure 9).

Figure 6. Normalized Green–Red Difference Index versus biomass for alfalfa on August 8, 2002.
The solid line is a regression (NGRDI = −0.078 + 0.0011 biomass, R² = 0.47, se(ŷ) = 0.021, P = 0.00024).

Figure 7. NGRDI versus corn biomass at FSP fields on July 23, 2003. The solid line at low biomass is a regression (NGRDI = −0.091 + 0.0016 biomass, R² = 0.88, se(ŷ) = 0.019, P = 0.00015) and the dashed line at high biomass is the mean NGRDI.

On June 24, there was a significant positive correlation between mean chlorophyll content and NGRDI. The 95% confidence interval for the regression intercept was (−0.074, −0.069), the 95% confidence interval for the regression slope was (0.0004, 0.0024), the goodness-of-fit (R²) was 0.56, and the probability (P) for the null hypothesis was 0.033, with only 6 degrees of freedom. However, on August 6, the correlation between chlorophyll content and NGRDI was not significant, with P = 0.063. Because NGRDI is based on the green and red reflectances, it was expected that there would be a strong positive relationship with chlorophyll content on both dates.

Figure 8. NGRDI versus soybean biomass at FSP fields on July 23, 2003. The solid line at low biomass is a regression (NGRDI = −0.071 + 0.00075 biomass, R² = 0.39, se(ŷ) = 0.018, P = 0.022) and the dashed line at high biomass is the mean NGRDI.

Figure 9. Changes of NGRDI over the 2003 growing season for the corn fertilization experiment. The lines indicate the changes of mean NGRDI for the 8 plots.

SAIL model analyses

Reflectances of individual corn leaves decreased at both green and red wavelengths with increasing amounts of applied nitrogen (Figure 10). The differences among leaf reflectances were largest around the 550 nm wavelength in the green, and smallest around the 680 nm wavelength near the chlorophyll absorption maximum (Figure 10). However, there was no spectral response of the digital camera to radiation at 680 nm, and both the red and green bands were sensitive to radiation at 580 nm (Figure 4).
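The regression statistics quoted above (intercept, slope, R²) come from ordinary least squares of NGRDI against biomass or chlorophyll content. A minimal sketch of that fit with NumPy, using made-up sample values rather than the paper's data (significance tests and confidence intervals would additionally need the t distribution, omitted here):

```python
import numpy as np

def fit_index_vs_biomass(biomass, index):
    """Ordinary least-squares fit: index = intercept + slope * biomass.

    biomass: predictor values (e.g. g m^-2)
    index:   response values (e.g. NGRDI)
    Returns (intercept, slope, r_squared).
    """
    biomass = np.asarray(biomass, dtype=float)
    index = np.asarray(index, dtype=float)
    slope, intercept = np.polyfit(biomass, index, 1)  # degree-1 polynomial
    predicted = intercept + slope * biomass
    ss_res = np.sum((index - predicted) ** 2)         # residual sum of squares
    ss_tot = np.sum((index - np.mean(index)) ** 2)    # total sum of squares
    return intercept, slope, 1.0 - ss_res / ss_tot
```

For the saturated high-biomass portion of Figures 7 and 8, the corresponding summary is simply the mean NGRDI rather than a regression line.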
Thus, leaf NGRDI based on weighted reflectances was about 0.05 for the three treatments. For dry soil, NGRDI was about −0.20 using weighted reflectances (Figure 10). When the reflectance data were used as inputs to the SAIL model, the canopy reflectance spectra were dominated by the soil at low LAI and by the leaves at high LAI (Figure 11). NGRDI calculated using weighted reflectances for the red and green bands varied from about −0.20 at the lowest LAI to about 0.05 at an LAI of 5.0 (Figure 12). Significantly, NGRDI was saturated above an LAI of 2, and there were essentially no differences in NGRDI among nitrogen treatments (Figure 12). Biomass and LAI are usually highly correlated during early crop growth, so the SAIL model simulations agreed with the observations of NGRDI at low biomass for alfalfa (Figure 6), corn (Figure 7), and soybean (Figure 8). Similarly, at high biomass, the SAIL model simulations agreed with the observations that NGRDI was saturated for corn (Figure 7) and soybean (Figure 8). The simulations with the SAIL model indicated that there should be no response of NGRDI to differences in chlorophyll content, based on the spectral sensitivity of the digital camera. Chlorophyll content was positively correlated with NGRDI on the June 24 overflight but not on the August 6 overflight. Generally, the growth rate and chlorophyll concentration in corn are correlated with soil nitrogen concentration. Early in the growing season, when biomass was less than 100 g m⁻², areas with high soil nitrogen concentration would have had higher biomass and chlorophyll concentration.

Figure 10. Reflectance of soil and corn leaves at three levels of applied nitrogen. These data were used as inputs to the Scattering by Arbitrarily Inclined Leaves (SAIL) canopy reflectance model.
Whereas biomass was not measured in this experiment, it is reasonable to conclude that the correlation between NGRDI and chlorophyll content on the June 24 overflight was due to an underlying correlation with biomass (Figure 9). In Figure 2(c), plots 2 and 5 (0% applied sidedress N) were distinguishable from the other plots in the image, because these plots were more yellow-green. The yellow-green color was from increased digital numbers in both the red and green bands. The simulations with the SAIL model show that the mean canopy reflectance over the entire visible range of 400–650 nm decreased with increasing applied nitrogen, but only when NGRDI was saturated at high LAI (Figure 13). Whereas colors are usually defined in terms of red, green and blue reflectances, color as perceived by humans is also defined in terms of hue, saturation and intensity (Shevell, 2003). Low chlorophyll contents are related to an increase in intensity, which is indicated by greater mean visible reflectances for a given NGRDI (Figure 13). Therefore, areas of low nitrogen in Figure 2(c) were distinguished visually by the shift in intensity. If the red band of the digital camera were shifted to longer wavelengths (around 680 nm), then NGRDI would be more sensitive to differences in chlorophyll concentration. A single index, such as the NGRDI, is sensitive to many canopy factors, primarily the amount of vegetation, because of the spectral differences between vegetation and soil (Daughtry et al., 2000).

Figure 11. Simulated canopy reflectances from the SAIL model at various Leaf Area Index (LAI) for corn with 100% of the applied nitrogen fertilizer. Simulations from the SAIL model for 50% and 0% applied nitrogen show similar trends.

Figure 12. SAIL model simulations of the Normalized Green–Red Difference Index for corn at various Leaf Area Index (LAI) at three levels of applied nitrogen.
Two or more remote-sensing indices have to be combined to identify areas of low chlorophyll concentration simultaneously with changes in vegetation biomass (Daughtry et al., 2000). Because color digital cameras have only three bands, automated determination of multiple canopy factors (both biomass and nutrient status) from a single image is not feasible. Instead, the pictures would have to be visually interpreted based on qualitative judgments in order to get the maximum amount of information from aerial photographs using a consumer-oriented digital camera.

Conclusions

This study was a comprehensive examination of the use of radio-controlled model aircraft and digital cameras as a very-low-cost, very-high-spatial-resolution alternative for remote sensing in precision agriculture. The results were mixed, because there were limitations imposed by the use of fixed-wing model aircraft and limitations imposed by the use of a consumer-oriented digital camera. The maximum amount of information would be obtained from visual interpretation of the images, particularly regarding crop nitrogen status. Automated digital image processing based on indices usually removes some of the subjectivity involved with interpretation, and hence is an important step in extracting usable information from the imagery. We are not claiming that the technologies used in this study are a panacea for precision agriculture. On the whole, the use of model aircraft and digital cameras overcame many of the problems associated with commercial satellite and airborne imagery; thus these technologies warrant continued investigation and improvement.

Figure 13. SAIL model simulations of mean visible reflectance for corn at various NGRDI at three applied nitrogen levels. Mean reflectance is the average from 400 to 650 nm wavelength. Variations in NGRDI were obtained from variation in Leaf Area Index (LAI).
Based on the development of sophisticated, lightweight sensors used in unmanned airborne vehicles (UAVs) for military operations, major advances using model aircraft or other alternative platforms for remote sensing in precision agriculture should be expected.

Acknowledgments

First, we thank Jonathan Baker for building and flying the radio-controlled model aircraft. The initial work on the use of radio-controlled model aircraft for precision farming was done with a summer program of science, mathematics and engineering, called Imago Excellence/El Ingeniero, for seventh and eighth grade Hispanic students at the Beltsville Agricultural Research Center (BARC), led by Ms. Lucy Negrón-Evelyn. Also, we thank John Schroeder, Anne Conklin, Larry Corp, Maryn Butcher and BARC Farm Operations for their assistance.

Note

1. Mention of commercial products is for information only and does not constitute an endorsement by the USDA Agricultural Research Service.

References

Barnes, E. M., Sudduth, K. A., Hummel, J. W., Lesch, S. M., Corwin, D. L., Yang, C., Daughtry, C. S. T. and Bausch, W. C. 2003. Remote- and ground-based sensor techniques to map soil properties. Photogrammetric Engineering & Remote Sensing 69(6), 619–630.

Bausch, W. C. and Duke, H. R. 1996. Remote sensing of plant nitrogen status in corn. Transactions of the ASAE 39, 1869–1875.

Blackmer, T. M. and Schepers, J. S. 1995. Use of a chlorophyll meter to monitor nitrogen status and schedule fertilization for corn. Journal of Production Agriculture 8, 56–60.

Blackmer, T. M., Schepers, J. S., Varvel, G. E. and Meyer, G. E. 1996a. Analysis of aerial photography for nitrogen stress within corn fields. Agronomy Journal 88, 729–733.

Blackmer, T. M., Schepers, J. S., Varvel, G. E. and Walter-Shea, E. A. 1996b. Nitrogen deficiency detection using reflected shortwave radiation from irrigated corn canopies. Agronomy Journal 88, 1–5.

Booth, D. T., Glenn, D., Keating, B., Nance, J., Cox, S. E. and Barriere, J. P. 2003.
Monitoring rangeland watersheds with very-large-scale aerial imagery. In: Proceedings of the First Interagency Conference on Research in the Watersheds, edited by Kenneth G. Renard (USDA-ARS, Washington, DC), pp. 212–215.

Clevers, J. G. P. W. 1988. Multispectral aerial photography as a new method in agricultural field trial analysis. International Journal of Remote Sensing 9(2), 319–332.

Daughtry, C. S. T., Walthall, C. L., Kim, M. S., de Brown Colstoun, E. and McMurtrey, J. E. 2000. Estimating corn leaf chlorophyll concentration from leaf and canopy reflectance. Remote Sensing of Environment 74, 229–239.

Doraiswamy, P. C., Moulin, S., Cook, P. W. and Stern, A. 2003. Crop yield assessment from remote sensing. Photogrammetric Engineering & Remote Sensing 69(6), 665–674.

Gates, D. M. 1980. Biophysical Ecology (Springer-Verlag, New York, USA).

Hongoh, D., Kajiwara, K. and Honda, Y. 2001. Developing ground truth measurement system using RC helicopter and BRDF model in forest area. In: Proceedings of the 22nd Asian Conference on Remote Sensing (Centre for Remote Imaging, Sensing, and Processing, National University of Singapore, Singapore), Vol. I, pp. 59–64.

Huete, A. R. 2004. Remote sensing of soils and soil processes. In: Remote Sensing for Natural Resource Management and Environmental Monitoring. Manual of Remote Sensing, edited by S. L. Ustin, 3rd edition, Volume 4 (John Wiley & Sons, Inc., Hoboken, NJ, USA), pp. 3–52.

Hunt, E. R. Jr., Daughtry, C. S. T., McMurtrey, J. E., Walthall, C. L., Baker, J. A., Schroeder, J. C. and Liang, S. 2002. Comparison of remote sensing imagery for nitrogen management. In: Proceedings of the Sixth International Conference on Precision Agriculture and Other Precision Resources Management, edited by P. C. Robert, R. H. Rust and W. E. Larson (ASA-CSSA-SSSA, Madison, WI, USA), CD-ROM.

Hunt, E. R. Jr., Daughtry, C. S. T., Walthall, C. L., McMurtrey, J. E. and Dulaney, W. P. 2003a.
Agricultural remote sensing using radio-controlled model aircraft. In: Digital Imaging and Spectral Techniques: Applications to Precision Agriculture and Crop Physiology, edited by T. VanToai, D. Major, M. McDonald, J. Schepers and L. Tapley (ASA-CSSA-SSSA, Madison, WI, USA), pp. 191–199. ASA Special Publication 66.

Hunt, E. R. Jr., Everitt, J. H., Ritchie, J. C., Moran, M. S., Booth, D. T., Anderson, G. L., Clark, P. E. and Seyfried, M. S. 2003b. Applications and research using remote sensing for rangeland management. Photogrammetric Engineering & Remote Sensing 69(6), 675–693.

Inoue, Y., Morinaga, S. and Tomita, A. 2000. A blimp-based remote sensing system for low-altitude monitoring of plant variables: A preliminary experiment for agricultural and ecological applications. International Journal of Remote Sensing 21(2), 379–385.

Lu, Y.-C., Daughtry, C., Hart, G. and Watkins, B. 1997. The current state of precision farming. Food Reviews International 13(2), 141–162.

Lopez, J. R. and Robert, P. C. 2003. Use of unmanned aerial vehicles to gather remote sensing imagery for precision crop management. In: ASA-CSSA-SSSA Annual Meetings Abstracts, November 2–6, 2003, Denver, Colorado (ASA-CSSA-SSSA, Madison, WI, USA), CD-ROM.

Milton, E. L. 2002. Low-cost ground-based digital infra-red photography. International Journal of Remote Sensing 23(5), 1001–1007.

Moran, M. S., Inoue, Y. and Barnes, E. M. 1997. Opportunities and limitations for image-based remote sensing in precision crop management. Remote Sensing of Environment 61, 319–346.

Moran, S., Fitzgerald, G., Rango, A., Walthall, C., Barnes, E., Bausch, W., Clarke, T., Daughtry, C., Everitt, J., Escobar, D., Hatfield, J., Havstad, K., Jackson, T., Kitchen, N., Kustas, W., McGuire, M., Pinter, P., Sudduth, K., Schepers, J., Schmugge, T., Starks, P. and Upchurch, D. 2003. Sensor development and radiometric correction for agricultural applications. Photogrammetric Engineering & Remote Sensing 69(6), 705–718.

Pierce, F. J.
and Nowak, P. 1999. Aspects of precision agriculture. Advances in Agronomy 67, 1–85. Pinter, P. J. Jr., Hatfield, J. L., Schepers, J. S., Barnes, E. M., Moran, M. S., Daughtry, C. S. T. and Upchurch, D. R. 2003. Remote sensing for crop management. Photogrammetric Engineering & Remote Sensing 69(6), 647–664. Quilter, M. C. and Anderson, V. J. 2000. Low altitude/large scale aerial photographs: A tool for range and resource managers. Rangelands 22(2), 13–17. Ritchie, S. W., Hanson, J. J. and Benson, G. O.1993. How a corn plant develops, Special Report No. 48. (Iowa State University of Science and Technology, Cooperative Extension Service, Ames, IA). Robert, P. C. 2002. Precision agriculture: A challenge for crop nutrition management. Plant and Soil 247, 143–149. Rouse, J. W., Haas, R. H., Schell, J. A. and Deering, D. W. 1974. Monitoring vegetation systems in the Great Plains with ERTS. In: Third Earth Resources Technology Satellite–1 Symposium. Volume I: Technical Presentations, NASA SP)351, edited by S. C. Freden, E. P. Mercanti and M. Becker (National Aeronautics and Space Administration, Washington, DC), p. 309–317. SAS Institute Inc. 1999. SAS/STAT User’s Guide, Cary, NC. EVALUATION OF DIGITAL PHOTOGRAPHY 377 Scharf, P. C. and Lory, J. A. 2002. Calibrating corn color from aerial photographs to predict sidedress nitrogen need. Agronomy journal 94, 397–404. Scharf, P. C., Schmidt, J. P., Kitchen, N. R., Sudduth, K. A., Hong, S. Y., Lory, J. A. and Davis, J. G. 2002. Remote sensing for nitrogen management. Journal of Soil and Water Conservation 57, 518–524. Shevell, S. 2003. The Science of Color 2nd Edition (Optical Society of America, Washington, DC, USA). Verhoef, W. 1984. Light scattering by leaf layers with application to campy reflectance modeling: The SAIL model. Remote Sensing of Environment 16, 125–141. Wald, G. 1968. The molecular basis of visual excitation. Nature 219, 800–807. Waring, R. H., Law, B., Goulden, M. L., Bassow, S. L., McCreight, R. 
W., Wofsy, S. C. and Bazzaz, F. A. 1995. Scaling gross ecosystem production at Harvard Forest with remote sensing: A comparison of estimates from a constrained quantum-use efficiency model and eddy correlation. Plant, Cell & Envi- ronment 18, 1201–1213. Yang, C., Everitt, J. H., Bradford, J. M. and Esccibar, D. E. 2000. Mapping grain sorghum growth and yield variations using airborne multispectral digital imagery. Transactions of the ASAE 43(6), 1927– 1938. Zarco-Tejada, P. J., Rueda, C. A. and Ustin, S. L. 2003. Water content estimation in vegetation with MODIS reflectance data and model inversion methods. Remote Sensing of Environment 85, 109–124. RAYMOND HUNT ET AL.378 work_dfftfz35kjgfbmbrphkikioffq ---- Geosci. Instrum. Method. Data Syst., 5, 417–426, 2016 www.geosci-instrum-method-data-syst.net/5/417/2016/ doi:10.5194/gi-5-417-2016 © Author(s) 2016. CC Attribution 3.0 License. Digital photography for assessing the link between vegetation phenology and CO2 exchange in two contrasting northern ecosystems Maiju Linkosalmi1, Mika Aurela1, Juha-Pekka Tuovinen1, Mikko Peltoniemi2, Cemal M. Tanis1, Ali N. Arslan1, Pasi Kolari3, Kristin Böttcher4, Tuula Aalto1, Juuso Rainne1, Juha Hatakka1, and Tuomas Laurila1 1Finnish Meteorological Institute, Helsinki, Finland 2Natural Resources Institute Finland (LUKE), Vantaa, Finland 3Faculty of Biosciences, University of Helsinki, Helsinki, Finland 4Finnish Environment Institute (SYKE), Helsinki, Finland Correspondence to: Maiju Linkosalmi (maiju.linkosalmi@fmi.fi) Received: 7 December 2015 – Published in Geosci. Instrum. Method. Data Syst. Discuss.: 11 March 2016 Revised: 25 July 2016 – Accepted: 3 August 2016 – Published: 12 September 2016 Abstract. Digital repeat photography has become a widely used tool for assessing the annual course of vegetation phe- nology of different ecosystems. 
By using the green chromatic coordinate (GCC) as a greenness measure, we examined the feasibility of digital repeat photography for assessing the vegetation phenology in two contrasting high-latitude ecosystems. Ecosystem–atmosphere CO2 fluxes and various meteorological variables were continuously measured at both sites. While the seasonal changes in GCC were more obvious for the ecosystem that is dominated by annual plants (open wetland), clear seasonal patterns were also observed for the evergreen ecosystem (coniferous forest). Daily and seasonal time periods with sufficient solar radiation were determined based on images of a grey reference plate. The variability in cloudiness had only a minor effect on GCC, and GCC did not depend on the sun angle and direction either. The daily GCC of the wetland correlated well with the daily photosynthetic capacity estimated from the CO2 flux measurements. At the forest site, the correlation was high in 2015 but there were discernible deviations during the course of the summer of 2014. The year-to-year differences were most likely generated by meteorological conditions, with higher temperatures coinciding with higher GCCs. In addition to depicting the seasonal course of ecosystem functioning, GCC was shown to respond to environmental changes on a timescale of days. Overall, monitoring of phenological variations with digital images provides a powerful tool for linking gross primary production and phenology.

1 Introduction

Phenology is an important factor in the ecology of ecosystems. The most distinctive phenomena comprising vegetation phenology are the changes in plant physiology, biomass and leaf area (Migliavacca et al., 2011; Sonnentag et al., 2011, 2012; Bauerle et al., 2012). In part, these changes drive the carbon cycle of ecosystems, and they have various feedbacks to the climate system through effects on surface albedo and aerodynamic roughness, and ecosystem–atmosphere exchanges of various gases (e.g.
H2O, CO2 and volatile organic compounds) (Arneth et al., 2010). Besides leaf area, gas exchange is modulated by seasonal variations in photosynthesis and respiration (Richardson et al., 2013). Globally, these variations contribute to the fluctuations in the atmospheric CO2 concentration (Keeling et al., 1996). In the long term, possible trends in vegetation phenology can have a systematic effect on the mean CO2 level. Phenology further plays a role in the competitive interactions, trophic dynamics, reproductive biology, primary production and nutrient cycling (Morisette et al., 2009). Phenological phenomena are largely controlled by abiotic factors such as temperature, water availability and day length (Bryant and Baird, 2003; Körner and Basler, 2010), and thus they are sensitive to climate change (Richardson et al., 2013; Rosenzweig et al., 2007; Migliavacca et al., 2012).

Several studies have reported an advanced onset of the growing season during recent decades (Linkosalo et al., 2009; Delbart et al., 2008; Nordli et al., 2008; Pudas et al., 2008). An earlier onset of growth has been observed to play a significant role in the annual carbon budget of temperate and boreal forests, while lengthening autumns have a less clear effect (Goulden et al., 1996; Berninger, 1997; Black et al., 2000; Barr et al., 2007; Richardson et al., 2009). This can be explained by the rapid C accumulation that starts as soon as conditions turn favourable for photosynthesis and growth in spring, while the opposing effect, i.e. ecosystem respiration, becomes increasingly important in summer and autumn (White and Nemani, 2003; Dunn et al., 2007).
In general, monitoring of vegetation changes by digital cameras has become feasible with the development of advanced but inexpensive cameras that produce automated and continuous real-time data. It has been shown that simple time-lapse photography can facilitate detection of vegetation phenophases and even the related variations in CO2 exchange (Wingate et al., 2015; Richardson et al., 2007, 2009). This provides new possibilities for monitoring and modelling of ecosystem functioning, for verification of remote sensing products, and for analysis of ecosystem CO2 exchange fluxes and related balances. Especially dynamic vegetation models and simulations of the C cycle could be improved by more accurate information on the timing of budburst and leaf senescence, as simple empirical parameterizations, typically based on degree days or the onset and offset dates of C uptake, are presently used as indicators of the growing season start and end (Baldocchi et al., 2005; Delpierre et al., 2009; Richardson et al., 2013).

Digital cameras produce red-green-blue (RGB) colour channel information, from which different greenness indices can be calculated. For example, canopy greenness has been expressed in terms of the so-called green chromatic coordinate (GCC), which has been related to vegetation activity and further to carbon uptake of forests (Richardson et al., 2007, 2009; Ahrends et al., 2009; Ide et al., 2011) and peatlands (Sonnentag et al., 2011; Peichl et al., 2015). In deciduous forests, the main driver of gas exchange is leaf area that changes rapidly in spring and autumn, which is easy to detect. In evergreen conifer forests the leaf area changes are much smaller, so it is not obvious whether a similar relationship can be established for them. In a peatland environment, repeat images have been used to map the mean greenness of mire vegetation over a wide area (Peichl et al., 2015).
For peatland ecosystems with a heterogeneous vegetation cover, it may be possible to simultaneously detect seasonality effects of different vegetation types. Thus digital repeat images of differentially developing vegetation types could potentially help decompose an integrated CO2 flux observation into components allocated to these vegetation types.

Comparisons of phenological observations made in contrasting ecosystems are needed for highlighting the phenological features that can be extracted from camera monitoring at different sites (Wingate et al., 2015; Keenan et al., 2014; Toomey et al., 2015; Sonnentag et al., 2012). Differences in the ecosystem characteristics may also affect the ideal set-up of cameras and the interpretation of images, for example in conjunction with surface flux data.

The objectives of this study were to (1) evaluate digital repeat photography as a method for monitoring the phenology of boreal vegetation at high latitudes, (2) investigate the differences in the phenology between two adjacent but contrasting ecosystems (pine forest and wetland) located in northern Finland, and (3) assess whether the data obtained from such cameras can support the interpretation of the micrometeorological measurements of CO2 fluxes conducted at the sites.

This paper is structured as follows: Sect. 2 introduces the measurement sites, camera set-up, image analysis, and the CO2 flux and meteorological data employed; Sect. 3 provides the results and related discussion, including tests of the monitoring system and an analysis of the observed phenological development in relation to CO2 exchange; finally, Sect. 4 presents the conclusions emerging from this study.

2 Materials and methods

2.1 Measurement sites

The study sites were located at Sodankylä in northern Finland, 100 km north of the Arctic Circle. They represent two contrasting ecosystems, a Scots pine (Pinus sylvestris) forest (67°21.708′ N, 26°38.290′ E; 179 m a.s.l.)
and an open pristine wetland (67°22.117′ N, 26°39.244′ E; 180 m a.s.l.). The long-term (1981–2010) mean temperature and precipitation within the area are −0.4 °C and 527 mm, respectively (Pirinen et al., 2012). The Scots pine stand is located on fluvial sandy podzol and has a dominant tree height of 13 m and a tree density of 2100 ha−1. The age of the trees within the camera scope is about 50 years. A single-sided leaf area index (LAI) of 1.2 m2 m−2 has been estimated for the stand based on a forest inventory in 2000. The sparse ground vegetation consists of lichens (73 %), mosses (12 %) and ericaceous shrubs (15 %).

The wetland site is located on a mesotrophic fen that represents a typical northern aapa mire. The vegetation at this site mainly consists of low species (Carex spp., Menyanthes trifoliata, Andromeda polifolia, Betula nana, Vaccinium oxycoccos, Sphagnum spp.). There are no tall trees, only some B. pubescens and a few isolated Scots pines. Different types of vegetation are located on drier (strings) and wetter (flarks) parts of the wetland.

The physical surface structure (aerodynamic roughness length) differs between the pine forest and wetland sites. Also, the microclimate and surface exchange of CO2 and sensible and latent heat differ due to different vegetation and soil characteristics.

2.2 Camera set-up

The images analysed in this study were taken automatically with StarDot Netcam SC 5 digital cameras. The set-up included a weather-proof housing and connections to line current and a web server. The pictures were stored in the 8 bit JPEG format every 30 min with 2592×1944 resolution and transferred automatically to a remote server. The daily collecting period varied according to the time of the year, roughly covering the daylight hours.
At the forest site, the cameras were mounted on a tower at two different heights: 29 m (“canopy camera”) and 13 m (“crown camera”). The viewing angle of the canopy camera was 45° from the horizontal plane, while the crown camera was positioned nearly horizontally. The images of the canopy camera covered parts of the forest canopy and some general landscape. The crown camera was focused on individual trees to detect their phenological development (e.g. bud burst, shoot growth, needle shedding) more closely. At the wetland site, the camera was adjusted at an angle of 45° on top of a 2 m pole. This camera mostly observed the ground vegetation, with some B. pubescens and sky also visible in the images. All cameras were placed facing north to minimize lens flare and maximize illumination of the canopy.

2.3 Grey reference plates

At the forest site, grey reference plates were employed to monitor the stability of the image colour channels. The plates were attached to the cameras in such a way that they are visible in every picture. The idea behind the reference plates was to detect possible day-to-day shifts in the colour balance due to changing weather conditions, such as radiation variations. The reference images should also not show any obvious seasonality (Petach et al., 2014). The grey colour of the plates was close to “true grey” in the sense that it has an equal mix of red, green and blue colour components. To achieve this, the reference plates were painted with Tikkurila grey/1948 (RGB values: R=95, G=95, B=95).

2.4 Automatic image analysis

The digital images were analysed with the FMIPROT software that has been designed as a toolbox for image processing for phenological and meteorological purposes (Tanis and Arslan, 2016). FMIPROT calculates the colour fractions for the red, green and blue channels. In the present analysis we use the GCC, defined as

GCC = ∑G / (∑R + ∑G + ∑B),  (1)

Figure 1.
View from the wetland camera. The solid lines indicate four regions of interest defined according to vegetation types. The dashed lines indicate the region of interest that includes all vegetation types except Betula pubescens.

where ∑G, ∑R and ∑B are the sums of the green, red and blue channel digital numbers, respectively, of all pixels comprising an image.

Within each image, it was possible to define limited subareas, or regions of interest (ROIs). The ROI feature of FMIPROT makes it possible to limit the GCC calculation to an area that represents a homogeneous vegetation area. It also provides an option for analysing several subareas within the image simultaneously.

2.5 Selection of the region of interest

At the wetland site, GCC was calculated separately for four different, clearly identifiable vegetation types. These vegetation types were dominated by (1) bog rosemary (Andromeda polifolia) and other shrubs, (2) sedges (Carex spp.) and Sphagnum mosses, (3) big-leafed bogbean (Menyanthes trifoliata), and (4) downy birch (Betula pubescens) (Fig. 1). The first three ROIs also included other ground vegetation, while the fourth ROI was limited to the birch canopy. The GCC values were also analysed from a larger area that includes the first three vegetation types (Fig. 1).

The forest site had two cameras, one zoomed to the crown of a pine tree (Fig. 2) and the other providing a general view of the canopy (Fig. 3). From the general canopy image, three separate ROIs were subjectively selected with the aim of defining similar homogeneous areas of forest canopy (Fig. 3).

2.6 CO2 flux measurements

The ecosystem–atmosphere CO2 exchange was measured at both study sites by the micrometeorological eddy covariance (EC) method. The EC measurements provided continuous data on the CO2 fluxes averaged on an ecosystem scale. The vertical CO2 flux is obtained as the covariance of the
high-frequency (10 Hz) fluctuations of vertical wind speed and CO2 mixing ratio (Baldocchi, 2003). At both sites, the EC measurement systems consisted of a USA-1 (METEK GmbH, Elmshorn, Germany) three-axis sonic anemometer/thermometer and a closed-path LI-7000 (LI-COR, Inc., Lincoln, NE, USA) CO2/H2O gas analyzer. The measurement systems and the data processing procedures have been presented in detail by Aurela et al. (2009).

Figure 2. View from the pine forest crown camera. The line indicates the region of interest.

The CO2 fluxes obtained from the EC measurements represent the net ecosystem exchange (NEE) of CO2, which is the sum of gross photosynthetic production (GPP) by plants and a respiration term that includes both the autotrophic respiration by plants and the heterotrophic respiration by microbes. GPP is typically derived from the NEE data by using a dedicated flux partitioning technique, for example based on nonlinear regressions with photosynthetic photon flux density (PPFD) and air temperature as predictors (Reichstein et al., 2005). Instead of performing such an explicit partitioning, we determined the daily GPP in terms of the gross photosynthesis index (GPI); for details, see Aurela et al. (2001), where a similar index was termed “PI”. GPI indicates the maximal photosynthetic activity in optimal radiation conditions. It is obtained by calculating the differences of the daily averages of the daytime (PPFD > 600 µmol m−2 s−1, a limit that represents light saturation of photosynthesis at our sites) and night-time (PPFD < 20 µmol m−2 s−1) NEE. The resulting GPI scales well with the maximal GPP obtained from a traditional NEE partitioning, despite the day–night differences in respiration.
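The daily GPI computation described above (mean daytime NEE at PPFD > 600 µmol m−2 s−1 differenced against mean night-time NEE at PPFD < 20 µmol m−2 s−1) can be sketched as follows. This is an illustrative sketch, not the authors' processing code; the function name and the sign convention (NEE negative for uptake, GPI reported as night minus day so that photosynthetically active days give positive values) are assumptions of the example:

```python
import math

def daily_gpi(nee, ppfd, day_thresh=600.0, night_thresh=20.0):
    """Gross photosynthesis index (GPI) for one day of half-hourly data.

    nee  -- NEE values (mg CO2 m-2 s-1); negative means net uptake
    ppfd -- photosynthetic photon flux density (umol m-2 s-1)
    Returns the night-time mean minus the daytime mean of NEE, so an
    active day gives a positive GPI (sign convention is an assumption).
    """
    day = [f for f, q in zip(nee, ppfd) if q > day_thresh]
    night = [f for f, q in zip(nee, ppfd) if q < night_thresh]
    if not day or not night:
        return math.nan  # e.g. polar night or missing data: GPI undefined
    return sum(night) / len(night) - sum(day) / len(day)
```

Because only light-saturated daytime periods enter the average, the index tracks maximal photosynthetic activity while remaining robust to gaps in the half-hourly record.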
GPI provides a useful measure especially for depicting the seasonal GPP cycle, but as it is robust against missing data, it also estimates photosynthetic activity during fast changes due to short-term variations in air temperature and humidity (Aurela et al., 2001).

Figure 3. View from the pine forest canopy camera. The lines indicate three regions of interest.

2.7 Meteorological measurements

An extensive set of supporting meteorological variables was measured at both measurement sites, including air temperature and humidity, various soil parameters (temperature, humidity, soil heat flux and water table level) and different radiation components (incoming and outgoing shortwave (SW) radiation, PPFD and net radiation). Here we used the temperature data measured at 3 m height on the wetland (Vaisala, HMP155D) and at 8 m at the forest site (Pentronic, PT100). From the SW radiation measurements (Kipp & Zonen, CM11) we calculated the surface albedo as the proportion of incident radiation that is reflected back to the atmosphere by the underlying surface. In addition, fractional cloud cover (CL) data were available from the nearby observatory.

3 Results and discussion

3.1 Testing the set-up

3.1.1 Effect of environmental conditions on GCC

An accurate GCC observation requires a sufficient illumination level, which was here ensured by selecting only mid-day (10:00–14:00 local winter time) photographs for further analysis. This period was determined on the basis of the GCC of the grey reference plate in different radiation conditions (Supplement, Figs. S1–S3).

The influence of cloudiness on GCC was estimated from the data collected in July 2014. This particular month was selected for the test because July represents the peak growing season (for both radiation levels and LAI), and in July 2014 sunny and cloudy days were equally frequent. Based on the observations of fractional cloud cover (ranging from clear sky with CL=0 to completely cloudy conditions with CL=8), the images were pooled into two contrasting cloudiness groups representing sunny (CL=0–1) and cloudy (CL=7–8) conditions. During the daily period of 10:00–14:00, the differences in the mean GCC between sunny and cloudy conditions were statistically insignificant (Mann–Whitney U test) (Fig. 4). The mean GCC difference between the cloudy and sunny groups was 0.0014 and 0.0011 for the fen and forest, respectively. Sonnentag et al. (2012) found an equivalently small, though in part statistically significant, difference between the diurnal GCC cycles of sunny and overcast situations for their deciduous and coniferous forests.

Figure 4. Mean (± standard deviation shown by error bars) diurnal cycle of GCC during sunny and cloudy conditions observed with (a) the tree crown and (b) the wetland cameras in July 2014.

The dependence of GCC on the solar angle with respect to the ROI was also estimated from the data of July 2014 (Fig. 4). The difference between the minimum and maximum values of the hourly GCC means within the daytime window was 0.0030 and 0.0020 for sunny and cloudy cases, respectively. This is less than 5 % of the seasonal amplitude of the GCC curve (0.069 between May and July) associated with the phenological greening of the fen.
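The sunny/cloudy comparison above can be reproduced with a rank-based test. The sketch below pools mid-day GCC values by cloud-cover octas and applies a Mann–Whitney U test via the normal approximation (no tie correction); the function names and the grouping thresholds mirror the text, but the implementation itself is an assumption, not the authors' code:

```python
import math

def mann_whitney_u(x, y):
    """Two-sided Mann-Whitney U test using the normal approximation
    (no tie correction), adequate for the dozens of mid-day images
    pooled per sky-condition group."""
    nx, ny = len(x), len(y)
    # U = number of (x_i, y_j) pairs with x_i > y_j, counting ties as 0.5
    u = sum(1.0 if xi > yj else 0.5 if xi == yj else 0.0
            for xi in x for yj in y)
    mu = nx * ny / 2.0
    sigma = math.sqrt(nx * ny * (nx + ny + 1) / 12.0)
    z = 0.0 if sigma == 0 else (u - mu) / sigma
    # two-sided p-value from the standard normal distribution
    p = 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))
    return u, p

def compare_sky_conditions(gcc, cloud_octas):
    """Split mid-day GCC values into sunny (CL 0-1) and cloudy (CL 7-8)
    groups, discarding intermediate cloudiness, and test the groups."""
    sunny = [g for g, c in zip(gcc, cloud_octas) if c <= 1]
    cloudy = [g for g, c in zip(gcc, cloud_octas) if c >= 7]
    return mann_whitney_u(sunny, cloudy)
```

A rank-based test is a reasonable choice here because the per-image GCC values need not be normally distributed within a cloudiness group.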
At the forest site, the corresponding values were 0.0022 (sunny) and 0.0012 (cloudy) and, despite the lower annual amplitude (0.024 between May and July), the difference was less than 10 % of the seasonal variation.

3.1.2 Sensitivity of GCC to selection of the region of interest

The sensitivity of the GCC values to the selection of a subarea within an image, i.e. a region of interest, was tested by comparing the GCC calculated for different vegetation patches. In particular, we wanted to examine, on the one hand, whether the forest images are homogeneous and thus insensitive to the ROI definition; on the other hand, the wetland images may provide an opportunity to simultaneously observe various microecosystems incorporated into a single image.

The GCC values of the wetland ROIs defined according to vegetation types showed significant differences in the seasonal cycle, both in the timing of the major changes in spring and autumn and in the magnitude of the maximum GCC (Fig. 5). For example, downy birch had the earliest growth onset, while the big-leafed bogbean had the largest growing-season maximum. While the seasonal patterns of the GCCs of different ROIs can be compared, the same may not be true for the absolute GCC values, which were affected by different viewing angles and distances to the target.

Figure 5. Mean daytime (10:00–14:00, local winter time) GCC of different regions of interest (vegetation types) during the measurement period of May 2014 to October 2015. Wetland refers to the combined ROI shown in Fig. 1. The grey circles indicate the wintertime data that are influenced by an insufficient light level.
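The ROI-masked GCC computation of Eq. (1) that underlies these comparisons can be sketched as follows; representing the image as nested lists of (R, G, B) digital numbers and the ROI as a boolean mask is an assumption of this illustration (FMIPROT's internals are not described in that detail here):

```python
def roi_gcc(image, mask):
    """Green chromatic coordinate (Eq. 1) over a region of interest.

    image -- H x W nested lists of (R, G, B) digital numbers
    mask  -- H x W nested lists, True for pixels inside the ROI
    The channel sums are taken over all ROI pixels before forming
    the ratio, as in Eq. (1).
    """
    r_sum = g_sum = b_sum = 0
    for row_px, row_m in zip(image, mask):
        for (r, g, b), inside in zip(row_px, row_m):
            if inside:
                r_sum += r
                g_sum += g
                b_sum += b
    total = r_sum + g_sum + b_sum
    return g_sum / total if total else float("nan")
```

Note that a neutral grey target such as the reference plate (R = G = B = 95) gives GCC = 1/3 exactly, which is what makes the plate images useful for flagging illumination-induced shifts in the colour balance.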
To gain a better insight into the quantitative differences between different ROIs, these ROI-specific GCC data should be investigated in conjunction with direct vegetation analysis (LAI, biomass) and small-scale (chamber-based) CO2 exchange measurements. For further analysis here we chose to use the larger ROI combining three vegetation types (Fig. 1), which better matches the areally integrating flux measurements.

The daily mean GCC values of the different forest canopy ROIs remained very similar throughout the time series (Fig. 6). The GCC values determined from the crown images differed from those from the camera with a general canopy view, most likely because the cameras had different viewing angles and distances to the object. The contribution of the ground is mixed with the canopy signal, which partially explains why the GCC values in the distant canopy camera images were lower than in the crown camera images. In winter, there was more snow visible behind the canopy in the smaller-scale ROIs. Thus, we decided to use only the images from the crown camera in the further analysis.

Figure 6. Mean daytime (10:00–14:00, local winter time) GCC values of different ROIs from two forest cameras during the measurement period of May 2014 to October 2015. The grey circles indicate the wintertime data that are influenced by an insufficient light level.

3.2 Phenological development

3.2.1 Wetland site

As previously observed by Peichl et al. (2015), at the wetland the growing season is clearly discernible in the development of the GCC data (Fig. 7). GCC started to increase as soon as the wetland vegetation started to assimilate CO2.
This growth onset took place in May after the snowmelt, for which the ground albedo provides a sensitive indicator by quantifying the proportion of incident solar radiation that is reflected back to the atmosphere. However, the onset was preceded by a short period of reduced GCC values, which were associated with the moist and dark soil.

The warm spells during late May and early June in 2014 induced a rapid emergence and growth of annual plants. Despite the later snowmelt that year, by mid-June the growing season had developed much further than in 2015. This difference is clearly visible in the GCC as well as the photosynthetic activity (GPI) data (Fig. 7). The cold period in late June 2014 halted this fast development, which is also well reflected in the GCC data that show a stabilization and even a temporary reduction during that period. GPI shows a similar pattern, highlighting the coherence between the greenness observation and the actual photosynthetic processes.

Following the earlier onset of the growing season in 2014, the peak of plant development was also observed earlier (Fig. 7). However, the magnitude of the GCC maxima during the two years was the same (0.385). From mid-August to mid-September, the rate of GCC decline was approximately the same in 2014 and 2015.

Figure 7. Mean daytime (10:00–14:00, local winter time) GCC (of the ROI shown in Fig. 1b) together with the daily mean air temperature, gross photosynthesis index (GPI) and albedo in 2014–2015 at the wetland site. The triangles indicate the dates of snowmelt (A) and snow appearance (B). The grey circles indicate the wintertime data that are influenced by an insufficient light level.
In mid-September, the slightly higher GCC in 2014 can be attributed to a warm period. By the first sub-zero values in the daily mean temperatures, the GCC had decreased to its minimum value, close to the springtime minimum, and by the snowfall in mid-October it had started increasing towards the level observed for the fully snow-covered conditions in spring.

Previous observations suggest that GPP is well correlated with the GCC of wetlands, especially during spring (Peichl et al., 2015). Our results support these observations, showing a strong relationship between the daily GCC and GPI data (Fig. S4), with correlation coefficients of 0.90 and 0.92 for the snow-free periods in 2014 and 2015, respectively. Especially during the springtime, the match between the GCC and GPI time series was remarkably close during both years, while in the autumn of 2014 GPI lagged slightly behind GCC.

3.2.2 Forest site

Due to the closeness of the measurement sites, the meteorological conditions in the forest were similar to those observed at the wetland (Figs. 7 and 8). However, the onset of photosynthetic activity differed slightly at the beginning of the growing season: the warm days of early May 2015 were not observed at the wetland as a GPI increase due to the absence of annual vegetation right after the snowmelt, while the photosynthesis of boreal trees is triggered as soon as the temperature reaches a sufficient level (Tanja et al., 2003). Thus the growing season in the forest started earlier in 2015 than in 2014, while that was not the case at the wetland. Nevertheless, the warm period in late May–early June 2014 also enhanced the forest growth, and by mid-June both GCC and GPI had surpassed the corresponding level in 2015. The cold period in late June 2014 was again observed as reduced CO2 uptake and an even clearer reduction in GCC than at the wetland.

Figure 8. Mean daytime (10:00–14:00, local winter time) GCC (crown camera) together with the daily mean air temperature (at 18 m), gross photosynthesis index (GPI) and albedo in 2014–2015 at the forest site. The triangles indicate the start dates of visually observed phenological phases (1 – bud burst, 2 – bud growth, 3 – shoot growth, 4 – old needle browning) and snow status (A – snowmelt, B – snow appearance). The grey circles indicate the wintertime data that are influenced by an insufficient light level.

Although in deciduous forests and open wetlands GCC is generally well correlated with the gross ecosystem photosynthesis during the start of the growing season (Peichl et al., 2015; Toomey et al., 2015), for evergreen needleleaf forests it has been reported that such a correlation is often weaker (Toomey et al., 2015; Wingate et al., 2015). In our pine forest, however, the simultaneous development of GCC and photosynthesis was evident during the year with spring data available (Fig. S5).

Similarly to the wetland, the maximum GCC level at the forest site did not differ between 2014 and 2015, but this level was reached slightly earlier in 2014. This was probably due to the higher temperatures during the first part of the growing season. During both years, GCC started decreasing at the same time, i.e. at the end of July. This was slightly earlier than the start of the senescence detected visually (Phase 4 in Fig. 8).
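Coefficients such as the GCC–GPI correlations of 0.90 and 0.92 reported for the wetland are plain Pearson correlations of paired daily series. A minimal sketch follows; the None-based handling of missing days is an assumption of this illustration, not the authors' documented procedure:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation of two paired daily series (e.g. GCC and GPI),
    skipping days where either value is missing (None)."""
    pairs = [(x, y) for x, y in zip(xs, ys)
             if x is not None and y is not None]
    n = len(pairs)
    mx = sum(x for x, _ in pairs) / n
    my = sum(y for _, y in pairs) / n
    sxy = sum((x - mx) * (y - my) for x, y in pairs)
    sxx = sum((x - mx) ** 2 for x, _ in pairs)
    syy = sum((y - my) ** 2 for _, y in pairs)
    if sxx == 0.0 or syy == 0.0:
        return math.nan  # correlation undefined for a constant series
    return sxy / math.sqrt(sxx * syy)
```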
Similarly to the wetland, in 2014 there was a clear phase difference between GCC and GPI, the latter of which stayed at the maximum level until the end of August. In the forest, this may be due to the oldest needles, whose senescence takes place in August, while their photosynthetic capacity has already diminished earlier (Vesala et al., 2005). In both 2014 and 2015, the photosynthetic activity continued until the end of August, but an interannual comparison is not possible here owing to the missing CO2 data in 2015. Nevertheless, in both years GCC decreased to the wintertime level at the beginning of October, at the same time as the daily mean temperature dropped below 0 °C. Our results show that the phenological development of the pine canopy could be accurately monitored with the GCC analysis, even though the GCC changes in the forest were subtler than those observed for the wetland vegetation. This was confirmed by visually identifying the phenological stages of the forest from the crown camera pictures (Fig. 8). In 2014, the cameras were installed too late to detect the bud burst, but the GCC time series was consistent with the observation that the buds started their growth at the beginning of June and remained brown until the beginning of July, when they started to green.

4 Conclusions

We demonstrated the feasibility of digital repeat photography for assessing the link between vegetation phenology and CO2 exchange for two contrasting high-latitude ecosystems. While the seasonal changes in the greenness index GCC are more obvious for those ecosystems where the vegetation is renewed every year (here an open wetland), seasonal patterns can also be observed in evergreen ecosystems (here a coniferous forest). We examined the illumination sensitivity of our digital camera system by analysing the images of a grey reference plate, which was included in the camera view.
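The grey-plate screening can be sketched in a few lines. This is an illustrative reconstruction, not the authors' FMIPROT implementation: the function names and the 0.02 tolerance are our assumptions. The underlying fact is that a neutral grey target has R = G = B, so its green chromatic coordinate GCC = G/(R + G + B) should stay near 1/3 regardless of overall brightness; systematic deviations flag changing illumination or white balance.

```python
# Sketch: screen images by the greenness of a grey reference plate.
# For a neutral grey target R = G = B, so GCC = G / (R + G + B) = 1/3;
# deviations indicate illumination or white-balance problems.

def gcc(r: float, g: float, b: float) -> float:
    """Green chromatic coordinate of mean ROI digital numbers."""
    total = r + g + b
    return g / total if total else 0.0

def plate_is_stable(plate_rgb_means, tol=0.02):
    """True if every plate observation stays within tol of the grey value 1/3."""
    return all(abs(gcc(r, g, b) - 1 / 3) <= tol for r, g, b in plate_rgb_means)

if __name__ == "__main__":
    # Hypothetical mean digital numbers of the plate ROI in three images
    plate = [(120.0, 121.0, 119.0), (80.0, 81.5, 79.0), (60.1, 60.0, 59.9)]
    print(plate_is_stable(plate))  # prints True
```

A strongly green-shifted plate observation, e.g. (100, 140, 100), would fail the check and mark that image as suspect.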
Limited solar radiation restricts the use of images during the wintertime as well as during the night-time. At our sites in northern Finland, the daytime radiation levels were sufficient for image analysis from February to October. During that period, a diurnal window of 10:00–14:00 (local winter time) provides stable GCC data. Our results show that the variability in cloudiness and solar zenith angle during the daytime does not play a significant role in the GCC analysis. However, it would be relevant to investigate the seasonal dependence of GCC on sun elevation, especially for the coniferous forest.

We observed a clear seasonal GCC cycle at both study sites. At the wetland, GCC correlated well with the daily photosynthetic capacity estimated from the ecosystem–atmosphere flux measurements. The interannual variation in GCC was also consistent with the observed CO2 exchange and meteorological conditions. At the forest site, the seasonal GCC cycle correlated well with the flux data in 2015 but showed more deviations during the summer of 2014. For both ecosystems, the correlation between GCC and CO2 exchange was highest during the spring.

In addition to depicting the seasonal course of ecosystem functioning, we showed that GCC responds to environmental changes on a shorter timescale. We observed that at both sites the increase of GCC and photosynthesis ongoing in June ceased during a 2-week-long cold and wet period. For an unknown reason, the GCC values even decreased slightly during that period. It is possible that such a reduction is an artefact caused by wet surfaces, for example, rather than a response to an actual decrease in the chlorophyll concentration in leaves and needles.
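The daytime windowing and the GCC–GPI comparison described above can be sketched as follows. The data layout and the use of a plain Pearson coefficient on daily values are illustrative assumptions rather than the authors' exact processing chain:

```python
from datetime import datetime
from statistics import mean

def daily_midday_gcc(samples, start_h=10, end_h=14):
    """Mean GCC per day within a 10:00-14:00 (local winter time) window.

    samples: iterable of (datetime, gcc) pairs from the image archive."""
    per_day = {}
    for stamp, gcc_value in samples:
        if start_h <= stamp.hour < end_h:
            per_day.setdefault(stamp.date(), []).append(gcc_value)
    return {day: mean(values) for day, values in per_day.items()}

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length daily series."""
    mx, my = mean(x), mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den
```

Pairing the resulting daily GCC series with daily GPI over the snow-free period and applying `pearson` yields coefficients of the kind quoted above (0.90–0.92).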
Due to the low cost of the instrumentation involved, phenology monitoring can be established in a much larger number of locations than ecosystem–atmosphere flux measurements, thus providing a wider geographical basis for improvement of the phenological and photosynthesis components of land surface models that need more calibration and validation. The digital repeat images allow the detection of phenological events, such as shoot elongation and the start of needle growth, that cannot be obtained from CO2 flux measurements alone. Therefore, they should be utilized to enhance the analysis of flux data. Furthermore, as our results show, the seasonal cycle of different vegetation types within the footprint of the flux measurements can be determined. This could help decompose the integrated CO2 flux observations when the distribution of the vegetation types within the area is known.

5 Data availability

The data presented in this study are provided as a supplement to this article. The Supplement related to this article is available online at doi:10.5194/gi-5-417-2016-supplement.

Acknowledgements. This work was supported by the EU: the installation of the cameras and the development of the image processing tool (FMIPROT) was done within the MONIMET Project (LIFE12ENV/FI/000409), funded by the EU Life+ Programme (2013–2017) (http://monimet.fmi.fi).

Edited by: M. Paton
Reviewed by: three anonymous referees

References

Ahrends, H. E., Etzold, S., Kutsch, W. L., Stoeckli, R., Bruegger, R., Jeanneret, F., Wanner, H., Buchmann, N., and Eugster, W.: Tree phenology and carbon dioxide fluxes: use of digital photography for process-based interpretation at the ecosystem scale, Clim. Res., 39, 261–274, doi:10.3354/cr00811, 2009.

Arneth, A., Harrison, S. P., Zaehle, S., Tsigaridis, K., Menon, S., Bartlein, P. J., Feichter, J., Korhola, A., Kulmala, M., O'Donnell, D., Schurgers, G., Sorvari, S., and Vesala, T.: Terrestrial biogeochemical feedbacks in the climate system, Nat.
Geosci., 3, 525–532, 2010.

Aurela, M., Tuovinen, J.-P., and Laurila, T.: Net CO2 exchange of a subarctic mountain birch ecosystem, Theor. Appl. Climatol., 70, 135–148, 2001.

Aurela, M., Lohila, A., Tuovinen, J.-P., Hatakka, J., Riutta, T., and Laurila, T.: Carbon dioxide exchange on a northern boreal fen, Boreal Environ. Res., 14, 699–710, 2009.

Baldocchi, D.: Assessing the eddy covariance technique for evaluating carbon dioxide exchange rates of ecosystems: past, present and future, Glob. Change Biol., 9, 479–492, 2003.

Baldocchi, D. D., Black, T. A., Curtis, P. S., Falge, E., Fuentes, J. D., Granier, A., Gu, L., Knohl, A., Pilegaard, K., Schmid, H. P., Valentini, R., Wilson, K., Wofsy, S., Xu, L., and Yamamoto, S.: Predicting the onset of carbon uptake by deciduous forests with soil temperature and climate data: a synthesis of FLUXNET data, Int. J. Biometeorol., 49, 377–387, 2005.

Barr, A. G., Black, T. A., Hogg, E. H., Griffis, T. J., Morgenstern, K., Kljun, N., Theede, A., and Nesic, Z.: Climatic controls on the carbon and water balances of a boreal aspen forest, 1994–2003, Glob. Change Biol., 13, 561–576, 2007.

Bauerle, W. L., Oren, R., Way, D. A., Qian, S. S., Stoy, P. C., Thornton, P. E., Bowden, J. D., Hoffman, F. M., and Reynolds, R. F.: Photoperiodic regulation of the seasonal pattern of photosynthetic capacity and the implications for carbon cycling, P. Natl. Acad. Sci. USA, 109, 8612–8617, 2012.

Berninger, F.: Effects of drought and phenology on GPP in Pinus sylvestris: a simulation study along a geographical gradient, Funct. Ecol., 11, 33–43, 1997.

Black, T. A., Chen, W. J., Barr, A. G., Arain, M. A., Chen, Z., Nesic, Z., Hogg, E. H., Neumann, H. H., and Yang, P. C.: Increased carbon sequestration by a boreal deciduous forest in years with a warm spring, Geophys. Res. Lett., 27, 1271–1274, 2000.

Bryant, R. G. and Baird, A. J.: The spectral behaviour of Sphagnum canopies under varying hydrological conditions, Geophys. Res.
Lett., 30, 1134–1138, 2003.

Delbart, N., Picard, G., Le Toans, T., Kergoat, L., Quegan, S., Woodward, I., Dye, D., and Fedotova, V.: Spring phenology in boreal Eurasia over a nearly century time scale, Glob. Change Biol., 14, 603–614, 2008.

Delpierre, N., Dufrene, E., Soudani, K., Ulrich, E., Cecchini, S., Boe, J., and Francois, C.: Modelling interannual and spatial variability of leaf senescence for three deciduous tree species in France, Agr. Forest Meteorol., 149, 938–948, 2009.

Dunn, A. L., Barford, C. C., Wofsy, S. C., Goulden, M. L., and Daube, B. C.: A long-term record of carbon exchange in a boreal black spruce forest: means, responses to interannual variability, and decadal trends, Glob. Change Biol., 13, 577–590, 2007.

Goulden, M. L., Munger, J. W., Fan, S. M., Daube, B. C., and Wofsy, S. C.: Measurements of carbon sequestration by long-term eddy covariance: methods and a critical evaluation of accuracy, Glob. Change Biol., 2, 169–182, 1996.

Ide, R., Nakaji, T., Motohka, T., and Oguma, H.: Advantages of visible-band spectral remote sensing at both satellite and near-surface scales for monitoring the seasonal dynamics of GPP in a Japanese larch forest, J. Agr. Meteorol., 67, 75–84, 2011.

Keeling, C. D., Chin, J. F. S., and Whorf, T. P.: Increased activity of northern vegetation inferred from atmospheric CO2 measurements, Nature, 382, 146–149, 1996.

Keenan, T. F., Darby, B., Felts, E., Sonnentag, O., Friedl, M., Hufkens, K., O'Keefe, J. F., Klosterman, S., Munger, J. W., Toomey, M., and Richardson, A. D.: Tracking forest phenology and seasonal physiology using digital repeat photography: a critical assessment, Ecol. Appl., 24, 1478–1489, 2014.

Körner, C.
and Basler, D.: Warming, photoperiods, and tree phenology response, Science, 329, 278–278, 2010.

Linkosalo, T., Häkkinen, R., Terhivuo, J., Tuomenvirta, H., and Hari, P.: The time series of flowering and leaf bud burst of boreal trees (1846–2005) support the direct temperature observations of climatic warming, Agr. Forest Meteorol., 149, 453–461, 2009.

Migliavacca, M., Galvagno, M., Cremonese, E., Rossini, M., Meroni, M., Sonnentag, O., Manca, G., Diotri, F., Busetto, L., Cescatti, A., Colombo, R., Fava, F., Morra di Cella, U., Pari, E., Siniscalco, C., and Richardson, A.: Using digital repeat photography and eddy covariance data to model grassland phenology and photosynthetic CO2 uptake, Agr. Forest Meteorol., 151, 1325–1337, 2011.

Migliavacca, M., Sonnentag, O., Keenan, T. F., Cescatti, A., O'Keefe, J., and Richardson, A. D.: On the uncertainty of phenological responses to climate change, and implications for a terrestrial biosphere model, Biogeosciences, 9, 2063–2083, doi:10.5194/bg-9-2063-2012, 2012.

Morisette, J. T., Richardson, A. D., Knapp, A. K., Fisher, J. I., Graham, E. A., Abatzoglou, J., Wilson, B. E., Breshears, D. D., Henebry, G. M., Hanes, J. M., and Liang, L.: Tracking the rhythm of the seasons in the face of global change: phenological research in the 21st century, Front. Ecol. Environ., 7, 253–260, 2009.

Nordli, O., Wielgolaski, F. E., Bakken, A. K., Hjeltnes, S. H., Mage, F., Sivle, A., and Skre, O.: Regional trends for bud burst and flowering of woody plants in Norway as related to climate change, Int. J. Biometeorol., 52, 625–639, 2008.

Peichl, M., Sonnentag, O., and Nilsson, M. B.: Bringing Color into the Picture: Using Digital Repeat Photography to Investigate Phenology Controls of the Carbon Dioxide Exchange in a Boreal Mire, Ecosystems, 18, 115–131, 2015.

Petach, A., Toomey, M., Aubrecht, D., and Richardson, A. D.: Monitoring vegetation phenology using an infrared-enabled security camera, Agr. Forest Meteorol., 195, 143–151, 2014.
Pirinen, P., Simola, H., Aalto, J., Kaukoranta, J.-P., Karlsson, P., and Ruuhela, R.: Climatological statistics of Finland 1981–2010, Reports 2012:1, Finnish Meteorological Institute, Helsinki, 2012.

Pudas, E., Leppälä, M., Tolvanen, A., Poikolainen, J., Venäläinen, A., and Kubin, E.: Trends in phenology of Betula pubescens across the boreal zone in Finland, Int. J. Biometeorol., 52, 251–259, 2008.

Reichstein, M., Falge, E., Baldocchi, D., Papale, D., Aubinet, M., Berbigier, P., Bernhofer, C., Buchmann, N., Gilmanov, T., Granier, A., Grünwald, T., Havránková, K., Ilvesniemi, H., Janous, D., Knohl, A., Laurila, T., Lohila, A., Loustau, D., Matteucci, G., Meyers, T., Miglietta, F., Ourcival, J.-M., Pumpanen, J., Rambal, S., Rotenberg, E., Sanz, M., Tenhunen, J., Seufert, G., Vaccari, F., Vesala, T., Yakir, D., and Valentini, R.: On the separation of net ecosystem exchange into assimilation and ecosystem respiration: review and improved algorithm, Glob. Change Biol., 11, 1424–1439, 2005.

Richardson, A. D., Jenkins, J. P., Braswell, B. H., Hollinger, D. Y., Ollinger, S. V., and Smith, M.-L.: Use of digital webcam images to track spring green-up in a deciduous broadleaf forest, Oecologia, 152, 323–334, 2007.

Richardson, A. D., Hollinger, D. Y., Dail, D. B., Lee, J. T., Munger, J. W., and O'Keefe, J.: Influence of spring phenology on seasonal and annual carbon balance in two contrasting New England forests, Tree Physiol., 29, 321–331, 2009.

Richardson, A. D., Keenan, T. F., Migliavacca, M., Ryu, Y., Sonnentag, O., and Toomey, M.: Climate change, phenology, and phenological control of vegetation feedbacks to the climate system, Agr. Forest Meteorol., 169, 156–173, 2013.

Rosenzweig, C., Casassa, G., Karoly, D. J., Imeson, A., Liu, C., Menzel, A., Rawlins, S., Root, T. L., Seguin, B., and Tryjanowski, P.: Supplementary material to chapter 1: Assessment of observed changes and responses in natural and managed systems.
Climate Change 2007: Impacts, Adaptation and Vulnerability. Contribution of Working Group II to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change, edited by: Parry, M. L., Canziani, O. F., Palutikof, J. P., van der Linden, P. J., and Hanson, C. E., Cambridge University Press, Cambridge, UK, 2007.

Sonnentag, O., Detto, M., Vargas, R., Ryu, Y., Runkle, B. R. K., Kelly, M., and Baldocchi, D. D.: Tracking the structural and functional development of a perennial pepperweed (Lepidium latifolium L.) infestation using a multi-year archive of webcam imagery and eddy covariance measurements, Agr. Forest Meteorol., 151, 916–926, 2011.

Sonnentag, O., Hufkens, K., Teshera-Sterne, C., Young, A. M., Friedl, M., Braswell, B. H., Milliman, T., O'Keefe, J., and Richardson, A. D.: Digital repeat photography for phenological research in forest ecosystems, Agr. Forest Meteorol., 152, 159–177, 2012.

Tanis, C. M. and Arslan, A. N.: FMIPROT – Finnish Meteorological Institute Image Processing Tool, User manual, available at: http://monimet.fmi.fi/index.php?style=warm&page=FMIPROT, 2016.

Tanja, S., Berninger, F., Vesala, T., Markkanen, T., Hari, P., Mäkelä, A., Ilvesniemi, H., Hänninen, H., Nikinmaa, E., Huttula, T., Laurila, T., Aurela, M., Grelle, A., Lindroth, A., Arneth, A., Shibistova, O., and Lloyd, J.: Air temperature triggers the recovery of evergreen boreal forest photosynthesis in spring, Glob. Change Biol., 9, 1410–1426, 2003.

Toomey, M., Friedl, M., Frolking, S., Hufkens, K., Klosterman, S., Sonnentag, O., Baldocchi, D., Bernacchi, C., Biraud, S. C., Bohrer, G., Brzostek, E., Burns, S. P., Coursolle, C., Hollinger, D. Y., Margolis, H. A., McCaughey, H., Monson, R. K., Munger, J. W., Pallardy, S., Phillips, R. P., Torn, M. S., Wharton, S., Zeri, M., and Richardson, A. D.: Greenness indices from digital cameras predict the timing and seasonal dynamics of canopy-scale photosynthesis, Ecol. Appl., 25, 99–115, 2015.
Vesala, T., Suni, T., Rannik, Ü., Keronen, P., Markkanen, T., Sevanto, S., Grönholm, T., Smolander, S., Kulmala, M., Ilvesniemi, H., Ojansuu, R., Uotila, A., Levula, J., Mäkelä, A., Pumpanen, J., Kolari, P., Kulmala, L., Altimir, N., Berninger, F., Nikinmaa, E., and Hari, P.: Effect of thinning on surface fluxes in a boreal forest, Global Biogeochem. Cy., 19, GB2001, doi:10.1029/2004GB002316, 2005.

White, M. A. and Nemani, R. R.: Canopy duration has little influence on annual carbon storage in the deciduous broadleaf forest, Glob. Change Biol., 9, 967–972, 2003.

Wingate, L., Ogée, J., Cremonese, E., Filippa, G., Mizunuma, T., Migliavacca, M., Moisy, C., Wilkinson, M., Moureaux, C., Wohlfahrt, G., Hammerle, A., Hörtnagl, L., Gimeno, C., Porcar-Castell, A., Galvagno, M., Nakaji, T., Morison, J., Kolle, O., Knohl, A., Kutsch, W., Kolari, P., Nikinmaa, E., Ibrom, A., Gielen, B., Eugster, W., Balzarolo, M., Papale, D., Klumpp, K., Köstner, B., Grünwald, T., Joffre, R., Ourcival, J.-M., Hellstrom, M., Lindroth, A., George, C., Longdoz, B., Genty, B., Levula, J., Heinesch, B., Sprintsin, M., Yakir, D., Manise, T., Guyon, D., Ahrends, H., Plaza-Aguilar, A., Guan, J. H., and Grace, J.: Interpreting canopy development and physiology using a European phenology camera network at flux sites, Biogeosciences, 12, 5995–6015, doi:10.5194/bg-12-5995-2015, 2015.
A comparison of foot arch measurement reliability using both digital photography and calliper methods

Michael B Pohl and Lindsay Farr

Journal of Foot and Ankle Research, 2010, 3:14

Abstract

Background: Both calliper devices and digital photographic methods have been used to quantify foot arch height parameters. The purpose of this study was to compare the reliability of both a calliper device and a digital photographic method in determining the arch height index (AHI).

Methods: Twenty subjects underwent measurements of AHI on two separate days.
On each day, AHI measurements during both sitting and standing were taken using the AHIMS and digital photographic methods by the same single tester. The intra-tester reliability of each measurement technique was assessed using intraclass correlation coefficients (ICC) and the standard error of measurement (SEM). Additionally, the relationship between AHI measurements derived from the two different methods was assessed using a correlation analysis.

Results: The reliability for both the AHIMS and digital photographic methods was excellent, with ICC values exceeding 0.86 and SEM values of less than 0.009 for the AHI. Moreover, the reliability of both measurement techniques was equivalent. There was a strong positive correlation between the AHI values collected using both methods. AHI values calculated using the digital photographic method tended to be greater than those derived using the AHIMS.

Conclusion: Digital photographic methods offer intra-tester reliability equivalent to previously established calliper methods when assessing AHI. While AHI measurements calculated using both methods were highly related, the greater AHI values in the photographic method imply that caution should be exercised when comparing absolute values between the two methods. Future studies are required to determine whether digital photographic methods can be developed with improved validity.

Background

The foot is the site at which external forces are applied to the body. Since the foot then transfers these loads further up the kinetic chain, its structure has often been studied in relation to overuse injuries of the lower extremity [1-3]. In particular, the height of the medial longitudinal arch has become a common measurement used to classify foot structure [4-7]. While radiographic measurements are the gold standard in determining the bony structure of the foot, many research laboratories do not have access to such methods.
The arch height index (AHI) was developed by Williams and McClay [6] to quantify the height of the arch using handheld callipers. Briefly, the AHI is calculated by dividing the height of the dorsum by the truncated foot length (distance from the heel to the first metatarsal head). Although the measurements were stated to be somewhat awkward when performed using handheld callipers, the development of the arch height index measurement system (AHIMS), a mechanical device, improved the ease of taking measurements [8,9]. The measurements of AHI taken using a mechanical device have demonstrated good intra- and inter-tester reliability [8], in addition to validity when compared with equivalent radiographic measurements [6]. However, the reliability has only been quantified using intraclass correlation coefficients. Expressing reliability measurements in terms of coefficients makes it difficult to clinically interpret the results, since the reported reliability units are different from the units of the variable of interest [10]. Therefore, it is desirable to also report reliability within the context of the intended clinical units. While devices such as the AHIMS have been shown to be reliable and valid, they can be costly to buy or construct.

* Correspondence: mbpohl@ucalgary.ca
1 Faculty of Kinesiology, University of Calgary, AB, Canada
Full list of author information is available at the end of the article
© 2010 Pohl and Farr; licensee BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
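The index itself is a simple ratio. A minimal sketch of the calculation (the function name is ours; the example values are the standing AHIMS means reported later in Table 1):

```python
def arch_height_index(dorsum_height_cm: float, truncated_foot_length_cm: float) -> float:
    """AHI = dorsum height at 50 % of total foot length divided by the
    truncated foot length (heel to first metatarsal head), after [6]."""
    if truncated_foot_length_cm <= 0:
        raise ValueError("truncated foot length must be positive")
    return dorsum_height_cm / truncated_foot_length_cm

if __name__ == "__main__":
    # e.g. DH = 6.3 cm over TFL = 18.4 cm
    print(round(arch_height_index(6.3, 18.4), 3))  # prints 0.342
```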
An alternative idea developed recently involved the use of digital photography to assess the height of the arch [5]. Digital photographic techniques potentially offer a highly practical, convenient and cost-effective method of assessing arch structure within a clinical or laboratory setting. Such a technique has been shown to demonstrate good to high levels of intra- and inter-tester reliability as well as validity [5]. However, while the study did include the assessment of dorsum height, the reliability of the AHI was not calculated. Therefore, it is difficult to interpret whether the digital photographic method of assessing arch height is as reliable as the equivalent measurement taken with mechanical calliper devices such as those used by Butler and colleagues [8]. Between-day differences in measurements taken using digital photography may arise from errors in manual digitising and camera placement, in addition to the discrepancies that also afflict calliper measurements, such as participant positioning. However, reliability measurements for the digital photographic technique have only been calculated based on one photograph of the subject [5]. Therefore, the effect of participant and camera positioning between measurements has not been assessed and requires investigation.

In summary, methods of quantifying the arch height of the foot have been proposed using either manufactured calliper devices or digital photography. However, it remains unclear whether the two techniques demonstrate similar levels of between-day reliability. Therefore, the purpose of this study was to compare the intra-tester reliability of determining arch height when using both a calliper device and digital photographic methods. These reliability data will provide confirmation as to whether photographic techniques can calculate AHI with similar reliability to existing calliper methods.
Methods

Subjects

Twenty subjects (6 males and 14 females) volunteered to participate in the study. Subjects were recruited from the University population and the surrounding community. The mean age of subjects was 29.9 ± 5.8 years with a mean weight of 70.4 ± 11.7 kg. The institutional review board approved the study and all subjects provided written informed consent prior to data collection. Subjects were free from lower-extremity injury at the time of testing.

Experimental protocol

Each subject visited the laboratory on two separate days to have measurements taken on their right foot. Prior to the collection of the foot measurements on the first visit, the weight of the subject was recorded. On each day, measurements were taken using both the AHIMS and digital photographic methods by the same tester. The tester had six months of experience using the AHIMS within a clinical setting.

A portable instrument for measuring the AHI was custom-built based on the AHIMS developed by Richards et al. [9]. This device consisted of a heel cup and a series of sliding callipers and rulers (Figure 1). Subjects began seated with their right hip, knee and ankle joints at 90°. Two blocks (thickness = 4.5 cm) were placed under the heel and metatarsal heads of the right foot, leaving the arch unsupported. The left foot was placed 15 cm medial to the right foot on a weighing scale (thickness = 4.5 cm) so that the distal end of the hallux of the left foot was positioned 5 cm behind the heel of the right foot. This ensured a clear view of the medial aspect of the right foot, which was required for the digital photographic method (see below). The AHIMS was then placed so that the heel cup was against the heel of the right foot, and sliding horizontal callipers were used to measure the foot length (FL) and truncated foot length (TFL) (distance from the heel to the first metatarsal head).
A vertical sliding calliper was then positioned at 50% of the FL, and subsequently used to measure the height of the dorsal arch (DH). The AHI was calculated as the ratio DH:TFL [6]. The subject then stood up with their weight equally distributed on both feet (50% WB) and the measurements were repeated. A final set of measurements was also taken with the subject standing with 90% of their body weight distributed on the right foot (90% WB). A load of 90% BW on the right foot was achieved by asking subjects to lift their left foot off the weighing scale, without leaning to either side, until the scale showed that only 10% BW remained on that foot.

Figure 1 The Arch height index measurement device (AHIMS). The heel is placed against the heel cup (A) and the sliding callipers D and C are aligned against the distal phalanx and first metatarsal head respectively. A third calliper (B) is lowered to the dorsal arch at 50% of the FL.

The digital photographic method involved the same subject set-up as described for the AHIMS. As with the AHIMS, blocks were placed under the right foot with the left foot positioned behind on the weighing scale. A small mark was made on the first metatarsal head to enable the identification of this landmark in the photos. A digital camera (Model Powershot A540, Canon, Tokyo, Japan) was positioned on a block (height = 4 cm) at a fixed distance of 55 cm from the medial border of the right foot and 10 cm forward of the back of the heel (Figure 2). The foot-to-camera distance was selected based on pilot testing to ensure that the largest expected foot size could be photographed (men's size 13.5 UK). A calibration photo was first taken in which an object with known distances (10 cm) was positioned in the plane of the medial arch (55 cm from the camera).
The centre of the calibration object was horizontally located approximately perpendicular to the line of view of the camera lens. The calibration object was removed and photos were then taken of the medial aspect of the foot during both sitting (10% WB) and relaxed standing (50% and 90% WB).

All digital photos were then downloaded onto a PC, where they were processed using ImageJ software (NIH, Bethesda, USA). Briefly, this software allowed the digitizing of selected co-ordinates to calculate the foot measurements needed to determine AHI (Figure 3). Co-ordinates were exported from the software as pixels, and the calibration photo allowed the conversion of pixels to cm. To assist with the digitizing of the foot photos, lines were drawn on the image indicating the distal end of the hallux, the most posterior aspect of the posterior heel, and the horizontal supporting surface (Figure 3). The FL was obtained by digitizing points at the distal end of the hallux and the posterior aspect of the heel. The total foot length was then halved to determine 50% of the total foot length. An additional vertical line was then drawn perpendicular from the supporting surface to the dorsum of the foot at 50% of the foot length. The DH was determined by digitizing co-ordinates at the top and bottom of this line. Finally, a co-ordinate on the first metatarsal head was digitized to enable the calculation of TFL. No enhancements or modifications were made to any of the digital images.

Data analysis

To compare the intra-tester reliability of both the AHIMS and photo methods, intraclass correlation coefficients (ICC 3,1) were calculated for the between-day measurements of both techniques [11]. In addition to ICC values, the between-day standard error of measurement (SEM) was also calculated for each method [12]. Both ICC and SEM were calculated for the variables AHI, TFL and DH. All ICC and SEM reliability variables were assessed during both sitting and standing.
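The digitising steps described in the Methods reduce to a few coordinate operations. The sketch below is ours, not the authors' ImageJ workflow: point names follow Figure 3, the pixel values are hypothetical, and the 10 cm calibration object supplies the pixels-to-cm scale.

```python
def px_per_cm(p1, p2, known_cm=10.0):
    """Scale factor from two digitized points a known distance apart
    (the calibration object placed in the plane of the medial arch)."""
    dist_px = ((p1[0] - p2[0]) ** 2 + (p1[1] - p2[1]) ** 2) ** 0.5
    return dist_px / known_cm

def foot_measures(hallux, heel, dorsum_top, dorsum_base, mth1, scale):
    """FL, DH and TFL in cm from digitized pixel co-ordinates (x, y).

    FL: horizontal hallux-heel distance; DH: vertical extent of the line
    raised at 50 % of FL; TFL: horizontal heel to first metatarsal head."""
    fl = abs(hallux[0] - heel[0]) / scale
    dh = abs(dorsum_top[1] - dorsum_base[1]) / scale
    tfl = abs(mth1[0] - heel[0]) / scale
    return fl, dh, tfl

if __name__ == "__main__":
    scale = px_per_cm((610, 540), (710, 540))  # 10 cm spans 100 px -> 10 px/cm
    fl, dh, tfl = foot_measures(hallux=(300, 400), heel=(50, 400),
                                dorsum_top=(175, 336), dorsum_base=(175, 400),
                                mth1=(228, 400), scale=scale)
    print(fl, dh, tfl, round(dh / tfl, 3))  # 25.0 6.4 17.8 0.36
```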
Additionally, the relationship between AHI as measured by the AHIMS and photo methods was examined. A Spearman's rank order correlation was performed between AHI (AHIMS) and AHI (digital photo) during standing (50% WB).

Figure 2 Setup for the digital photographic method. The blocks (A) were placed under the heel and ball of the right foot with the medial border lined up with the near edge. The left foot was placed on the scale (B). The camera was placed on another block (C) a fixed distance from the posterior aspect of the heel (10 cm) and medial aspect of the foot (55 cm). A set square (D) was placed in plane with the medial border of the right foot for one of the digital photos to serve as a calibration object.

Figure 3 Digital photographic image used to calculate FL, TFL and AH. Lines were drawn on the image indicating the distal end of the hallux, the most posterior aspect of the posterior heel, and the horizontal supporting surface. The co-ordinates A-E were digitized and used to calculate the foot measurements. The horizontal distance between A and B gave FL. Point C was placed at the horizontal midpoint between A and B. The vertical distance between C and D represents AH. The horizontal distance between B and E yielded the TFL.

Results

Descriptive statistical values for TFL, DH and AHI for both the AHIMS and photo methods are presented in Table 1. For both measurement techniques, the AHI lowered from sitting to standing. However, there was little difference between the 50% WB and 90% WB standing conditions, with only a 0.004 change in AHI measured. Therefore, reliability data were only presented for the sitting and 50% WB standing conditions. The intra-tester reliability values for foot arch measurements using both methods are shown in Table 2.
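The between-day ICC(3,1) and SEM can be sketched as below. This is a generic two-way consistency formulation for two sessions, with SEM taken as the square root of the error mean square; the exact formulations of references [11] and [12] may differ in detail.

```python
def icc31_and_sem(day1, day2):
    """ICC(3,1) (two-way, consistency) and SEM for n subjects measured
    on two days; SEM is taken here as sqrt(error mean square)."""
    n, k = len(day1), 2
    grand = (sum(day1) + sum(day2)) / (n * k)
    subj_means = [(a + b) / k for a, b in zip(day1, day2)]
    sess_means = [sum(day1) / n, sum(day2) / n]
    ss_total = sum((x - grand) ** 2 for x in list(day1) + list(day2))
    ss_subj = k * sum((m - grand) ** 2 for m in subj_means)
    ss_sess = n * sum((m - grand) ** 2 for m in sess_means)
    bms = ss_subj / (n - 1)                                      # between-subjects mean square
    ems = (ss_total - ss_subj - ss_sess) / ((n - 1) * (k - 1))   # error mean square
    icc = (bms - ems) / (bms + (k - 1) * ems)
    return icc, max(ems, 0.0) ** 0.5

if __name__ == "__main__":
    # A constant day-to-day offset is ignored by the consistency ICC
    print(icc31_and_sem([10.0, 20.0, 30.0], [11.0, 21.0, 31.0]))  # (1.0, 0.0)
```

Feeding in the paired day-1/day-2 AHI values for the 20 subjects would return values on the scale reported in Table 2.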
The mean absolute difference for between-day AHI measurements was less than 0.009 and similar for both the AHIMS and digital photographic techniques. There were no discernible differences between the two measurement techniques in terms of either SEM or ICC values, with both demonstrating excellent reliability. ICCs were in excess of 0.86, and SEM values for the foot measurements used to calculate AHI (TFL and DH) were equal to or less than 0.2 cm.

The results of the Spearman's rank order correlation suggested there was a strong positive relationship between AHI measurements collected using the AHIMS and photographic methods (p < 0.001, ρ = 0.90). The individual subject rankings of AHI (low to high) for each method (AHIMS v digital photo) are listed in Table 3. The absolute difference between the two ranks was ≤ 2 in 16 out of 20 subjects. In general, the AHI values found using the digital photos were greater than the values measured using the AHIMS (Tables 1 and 3).

Discussion

The purpose of this study was to compare the intra-tester reliability of two different methods of assessing static arch measurements. The results suggest that arch measurements calculated using a digital photographic method were of a similar reliability to the same variables derived using a mechanical calliper device (AHIMS). Moreover, both methods demonstrated a high level of reliability when calculating AHI, with ICCs exceeding 0.86 and SEMs of 0.009 or below.

The ICC values of TFL, DH and AHI measured using the AHIMS were in agreement with previous studies that reported ICCs ranging from 0.91 to 0.99. This provides further confirmation that arch measurements can be collected with excellent reliability when using mechanical calliper devices. The mean AHI value collected during standing using the AHIMS was also similar to the mean values reported in the literature using a similar device [7-9].
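For reference, the Spearman coefficient reported above can be computed directly from the paired AHI values. The sketch below is illustrative only and assumes no tied values for simplicity; ties, such as subjects D and J sharing an AHIMS AHI of 0.309 in Table 3, would need mid-rank handling in a real analysis.

```python
# Minimal Spearman's rank-order correlation (illustrative; no tie handling).

def spearman_rho(x, y):
    """Return Spearman's rho for two equal-length sequences without ties."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order, start=1):
            r[i] = rank  # rank 1 = smallest value
        return r

    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    # Classical formula: rho = 1 - 6 * sum(d^2) / (n * (n^2 - 1))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))
```

Perfectly concordant rankings give rho = 1 and perfectly reversed rankings give rho = -1; the observed rho of 0.90 therefore indicates near-identical ordering of subjects by the two methods.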
However, this value was considerably greater than the mean value of 0.292 reported by Williams and McClay [6]. Butler and colleagues [8] postulated that their mean value of 0.340 was greater than that of Williams and McClay [6] due to the two respective studies collecting standing AHI using different amounts of body weight applied to the measured foot (50% WB versus 90% WB respectively). However, the present investigation found no differences between AHI when measured during 50% WB or 90% WB, thus indicating that the two loading conditions produce a similar measurement outcome.

Although values for AHI have not been reported for the digital photographic method before, the good reliability values for dorsum height are in agreement with McPoil and colleagues [5]. However, given that McPoil et al. [5] did not reposition the participant when assessing reliability, we were curious to explore this further.

Table 1: Mean and standard deviation (SD) values of truncated foot length (TFL), dorsum height (DH) and arch height index (AHI) for both the AHIMS and digital photographic techniques.

                        AHIMS               Digital Photo
                        Mean      SD        Mean      SD
Sitting (10% WB)
  TFL (cm)              18.1      0.8       17.4      0.8
  DH (cm)                6.8      0.5        6.7      0.5
  AHI                    0.375    0.020      0.384    0.023
Standing (50% WB)
  TFL                   18.4      0.8       17.7      0.8
  DH                     6.3      0.6        6.4      0.6
  AHI                    0.345    0.025      0.361    0.025
Standing (90% WB)
  TFL                   18.4      0.8       17.8      0.8
  DH                     6.3      0.5        6.4      0.6
  AHI                    0.342    0.024      0.357    0.028

Table 2: Between-day mean absolute differences, standard error of measurement (SEM) and intraclass correlation coefficients (ICC) for both measurement techniques.

                        AHIMS                         Digital Photo
                        Mean diff   SEM     ICC       Mean diff   SEM     ICC
Sitting (10% WB)
  TFL                   0.2         0.2     0.94      0.3         0.2     0.91
  DH                    0.1         0.1     0.94      0.1         0.1     0.93
  AHI                   0.009       0.009   0.87      0.008       0.008   0.88
Standing (50% WB)
  TFL                   0.3         0.2     0.93      0.3         0.2     0.92
  DH                    0.2         0.2     0.94      0.1         0.1     0.95
  AHI                   0.008       0.007   0.92      0.007       0.006   0.94
Indeed, the high reliability of the foot measurements in the present study confirms that the effect of participant positioning between testing sessions was minimal. Moreover, the ICC and SEM values for all foot variables were equivalent to those measured using the AHIMS. This implies that, within the context of a single laboratory, a digital photographic method may be used to measure AHI reliably in the absence of mechanical callipers. This is beneficial given that custom-built calliper devices can be expensive to construct compared to the cost of a digital camera.

There was a strong correlation between AHI measurements taken using the AHIMS and digital photographic methods. Thus, individuals with high and low arches are likely to be identified correctly using either measurement technique. It is perhaps not surprising that both methods were highly correlated, since both have been shown to be highly correlated with equivalent radiographic measurements [5,6]. However, it was noted that mean AHI values measured using digital photos were of a greater magnitude than those recorded using the AHIMS. From the results in Table 1, it would appear that this systematic offset was the result of a shorter TFL being measured in the digital photo method, since DH was similar between the two techniques. It is possible that this was the result of the TFL distance (17-20 cm) exceeding the dimensions of the calibration object (10 cm), which might introduce some calibration error. The improvement of calibration procedures, such as calibrating over a greater horizontal distance or even using multiple calibration objects, has the potential to increase the validity of TFL measurements conducted using a digital camera. Given that the digital photographic method was highly correlated with the AHIMS in terms of AHI, it could be speculated that establishing a different set of norms for the photographic method might be a feasible solution. However, a clinical measurement tool such as AHI is much more useful when results can be confidently compared between multiple clinical and research centres. It is presently unknown how equipment and experimental setup might influence the foot variables derived from the digital photos. While good agreement of AHI values between different laboratories has been reported using the AHIMS [9], inter-laboratory comparisons have not been conducted using digital photographic methods. Studies comparing the results from different laboratories and clinics are warranted, in addition to investigating the influence of different camera placements and calibration procedures.

Table 3: Individual subject rankings based on AHI during 50% WB.

Subject   AHIMS                Digital Photo        Rank Difference
          AHI Value   Rank     AHI Value   Rank
D         0.309        1       0.353        1        0
J         0.309        2       0.363        4       -2
P         0.310        3       0.380        5       -2
O         0.318        4       0.320        2        2
S         0.322        5       0.401        3        2
L         0.328        6       0.359        7       -1
B         0.330        7       0.384        8       -1
H         0.337        8       0.335        6        2
A         0.339        9       0.379       11       -2
C         0.340       10      0.333       15       -5
F         0.344       11      0.397       10        1
N         0.347       12      0.349        9        3
I         0.354       13      0.389       14       -1
R         0.361       14      0.356       12        2
E         0.365       15      0.323       20       -5
T         0.371       16      0.335       16        0
K         0.372       17      0.378       19       -2
Q         0.378       18      0.372       13        5
G         0.379       19      0.330       17        2
M         0.382       20      0.383       18        2

Subjects are listed sequentially from lowest to highest values of AHI as measured using the AHIMS. The numerical rank of each subject's AHI is also listed for the digital photographic method alongside the AHIMS rank. The rank difference was calculated as the digital photographic rank subtracted from the AHIMS rank.

There were some limitations with the current study. Firstly, we only collected intra-tester reliability data.
Therefore, it remains to be seen whether the findings can be generalised between different testers. However, strong inter-tester reliability has been reported previously for both the AHIMS [6,8] and the digital photographic method [5]. Secondly, it is worth noting that the subjects used in the present investigation were lean and asymptomatic, with no notable foot deformities. In cases of pathology, the presence of swelling and deformity may introduce potential error in both the reliability and validity of the measurements taken using both methods. Future work is needed to determine the feasibility of using the AHI measurement in patients with clinical foot pathologies.

Conclusion

In summary, this study demonstrated that AHI calculated using a digital photographic method can be determined reliably. Moreover, this variable can be obtained with equivalent reliability to a previously established method using mechanical callipers. However, AHI values measured using digital photos were of a greater magnitude than those recorded using callipers. Therefore, future studies are needed to establish whether the digital photographic method can be utilised validly for between laboratory/clinic comparisons.

Competing interests

The authors declare that they have no competing interests.

Authors' contributions

MBP developed the rationale for the study. MBP and LF designed the study protocol. LF conducted the data collections and MBP analysed the data. MBP and LF drafted the manuscript. All authors have read and approved the final manuscript.

Acknowledgements

This work was supported in part by the Alberta Heritage Foundation for Medical Research, SOLE Inc, and the University of Calgary Olympic Oval High Performance Fund. The authors gratefully acknowledge the help of Brian Noehren, Chandra Lloyd and Andrea Bachand for their assistance with the project.
Author Details

1 Faculty of Kinesiology, University of Calgary, AB, Canada and 2 Running Injury Clinic, University of Calgary, AB, Canada

References

1. Kaufman KR, Brodine SK, Shaffer RA, Johnson CW, Cullison TR: The effect of foot structure and range of motion on musculoskeletal overuse injuries. American Journal of Sports Medicine 1999, 27:585-593.
2. Williams DS, McClay IS, Hamill J: Arch structure and injury patterns in runners. Clinical Biomechanics 2001, 16:341-347.
3. Pohl MB, Rabbito M, Ferber R: The role of tibialis posterior fatigue on foot kinematics during walking. J Foot Ankle Res 2010, 3:6.
4. Cobb SC, Tis LL, Johnson JT, Wang Y, Geil MD, McCarty FA: The effect of low-mobile foot posture on multi-segment medial foot model gait kinematics. Gait & Posture 2009, 30:334-339.
5. McPoil TG, Cornwall MW, Medoff L, Vincenzino B, Forsberg K, Hilz D: Arch height change during sit-to-stand: an alternative for the navicular drop test. J Foot Ankle Res 2008, 1:3.
6. Williams DS, McClay IS: Measurements used to characterize the foot and the medial longitudinal arch: Reliability and validity. Physical Therapy 2000, 80:864-871.
7. Zifchock RA, Davis I, Hillstrom H, Song JS: The effect of gender, age, and lateral dominance on arch height and arch stiffness. Foot & Ankle International 2006, 27:367-372.
8. Butler RJ, Hillstrom H, Song J, Richards CJ, Davis IS: Arch height index measurement system - Establishment of reliability and normative values. Journal of the American Podiatric Medical Association 2008, 98:102-106.
9. Richards CJ, Card K, Song J, Hillstrom H, Butler R, Davis I: A novel arch height index measurement system (AHIMS): intra- and inter-rater reliability. Proceedings of American Society of Biomechanics Annual Meeting, Toledo 2003.
10. McGinley JL, Baker R, Wolfe R, Morris ME: The reliability of three-dimensional kinematic gait measurements: A systematic review. Gait & Posture 2009, 29:360-369.
11.
Shrout PE, Fleiss JL: Intraclass correlations - uses in assessing rater reliability. Psychological Bulletin 1979, 86:420-428.
12. Portney LG, Watkins MP: Foundations of Clinical Research: Applications to Practice. Second edition. Upper Saddle River: Prentice-Hall, Inc; 2000.

doi: 10.1186/1757-1146-3-14

Cite this article as: Pohl and Farr, A comparison of foot arch measurement reliability using both digital photography and calliper methods. Journal of Foot and Ankle Research 2010, 3:14.

Received: 4 June 2010. Accepted: 14 July 2010. Published: 14 July 2010.

This article is available from: http://www.jfootankleres.com/content/3/1/14

© 2010 Pohl and Farr; licensee BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Title: Digital Attraction: from the real to the virtual in manuscript studies
Author: Peter Ainsworth
Publication: FORUM: University of Edinburgh Postgraduate Journal of Culture and the Arts
Issue Number: 12
Issue Date: Spring 2011
Publication Date: 05/06/2011
Editors: Elysse Meredith & Dorothy Butchard

FORUM claims non-exclusive rights to reproduce this article electronically (in full or in part) and to publish this work in any such media current or later developed.
The author retains all rights, including the right to be identified as the author wherever and whenever this article is published, and the right to use all or part of the article and abstracts, with or without revision or modification, in compilations or other publications. Any later publication shall recognise FORUM as the original publisher.

University of Edinburgh Postgraduate Journal of Culture and the Arts
Issue 12 | Spring 2011

Digital Attraction: from the real to the virtual in manuscript studies
Peter Ainsworth

In a BBC Radio 4 programme broadcast on 26 April 2011 (Tales from the Digital Archive),[1] archaeologist Christine Finn explored some of the ways in which the digital revolution has changed writers' working methods, and the consequential impact that these have had on librarians, curators and conservators. Wendy Cope recently donated an archive of 40,000 emails to the British Library, providing scholars with invaluable material for research on drafts of her poems as they developed. Fay Weldon found herself changing quite easily from pen and paper to computer and mouse, discovering in the process that it changed how she actually wrote her novels. Altering phrases or drafts became much easier, and much cheaper too. Weldon's archive is destined for the University of Indiana. Salman Rushdie's archive went to Emory University in 2006, becoming available to researchers in 2010; it includes 15 years of electronic material from his computer.[2] The British Library has a Curator of Digital Manuscripts, Jeremy John, whose strongroom contains IBM and Macintosh computers, floppies, CDs and other ephemera such as post-it slips (stored in archive boxes alongside the machines they were attached to).
Flaubert scholars have for a long time pored over the consecutive drafts of L'Education sentimentale, and the field of textual genetics has delivered important conclusions.

Footnote 1: I am grateful to the BBC and to producer Marya Burgess for permission to reproduce the material featured in my opening three paragraphs.

Footnote 2: Whilst one may regret so many of our literary authors committing their archives to overseas institutions, these are often the best-equipped to curate them, and digitised material can of course be disseminated much more easily than original material on print or paper.

In the near future scholars will be asking themselves slightly different questions: How did authors use and customise their desktops? What were their characteristic habits and shortcuts? What applications were used, and how were their files organised? How was a key manuscript altered on a given day? Which emails went out that day and to whom? How did a particular phrase evolve?

Digital technology has had implications even for writers who cordially disliked computers, including Ted Hughes. Hughes favoured the typewriter; unlike Rushdie's or Weldon's archive, therefore, his contains no digitally born material; but the British Library's archive of the poet's Devon home will include a 360° panoramic picture of his workspace based on 50 high-resolution photographs showing the books on his shelves, his collection of stuffed birds and an ingenious personal system of wooden blocks for keeping track of his projects in progress.[3]

Digital curation and conservation bring their own challenges: the moment you turn a computer on, you risk changing the dates; the British Library uses a 'write-blocker' connected to a forensic workstation to access material without changing it.
A screen image of an author's desktop and files as left, for example, on the day of his or her demise can be generated from their hard drive using an emulator; the hard drive is in this way not altered by date changes or software version updates. Searches can be conducted to pinpoint and delete private information such as credit card or phone numbers, though curators still have to take ultimate decisions as to what may be publishable.

Footnote 3: Ted Hughes liked to cut and paste, but using paper and sellotape ('an archivist's nightmare').

The relationship between scholar and curator-conservator is becoming acutely important as the 21st century gets into its stride. It always was, of course, just as paper and print will continue to hold the attention of the majority of arts and humanities researchers for generations to come. But the advent of high-resolution digital photography and of computing for the arts and humanities affords not only new opportunities and toolkits, but also the emergence of new kinds of research question.[4] Scholars working in this exciting and intensively collaborative, interdisciplinary and (arguably) cost-effective field are beginning to do things they were not able to before. Space precludes comprehensive exploration of the field, so just two projects are reviewed here, one of which proceeds in part from the other. Both focus on the Chronicles of the 14th-century writer and poet Jean Froissart (?1337-?1404). For reasons that will become clear, we trust, we begin with the later of the two, forming part of the internationally-funded Digging into Data Challenge.[5]

***

In the Spring of 2011 an exhibition at the Invalides in Paris curated by Peter Ainsworth in partnership with the Royal Armouries UK and the national French Musée de l'Armée portrayed aspects of the literary and military culture of the Hundred Years' War.
Items of weaponry from the period were displayed against a backdrop provided by large-scale, high-resolution photographs of miniatures from Froissart's Chronicles.[6] The more practical aspects of manuscript culture featured in the guise of pens, brushes, pigments and other implements used by the scribes and artists responsible for the miniatures found in many manuscripts of the Chronicles, kindly loaned by the Scriptorial museum in Avranches, Normandy. The centrepiece of the exhibition was a display, in two glazed cases facing one another across a darkened room, of two pairs of twin manuscripts. Besançon Municipal Library mss 864 and 865 (comprising respectively Book I and Books II-III of the Chronicles) thus found themselves just yards away from Paris, Bibliothèque nationale de France fonds français mss 2663 and 2664 (Book I and Book II of the Chronicles).

Footnote 4: High-resolution photography is not, of course, a sine qua non for the best in computer-enhanced research in the arts and humanities or social sciences, witness such eminent projects as Old Bailey Proceedings Online (www.oldbaileyonline.org [accessed 04 May 2011]) or Connected Histories (www.connectedhistories.org [accessed 04 May 2011]).

Footnote 5: The Digging into Data Challenge programme (2009-2011) funded eight 'cutting-edge' projects, arousing so much interest that a second round is open at time of writing (closes 16 June 2011). Round two is supported by no fewer than eight international funders: http://www.jisc.ac.uk/news/stories/2011/03/digging.aspx [accessed 04 May 2011]. See also Hannah Fearn's Times Higher Education article, 'Research intelligence – Let's dig a little deeper', THE 28 April 2011, and 'Digging into Data Using New Collaborative Infrastructures Supporting Humanities-based Computer Science Research', First Monday (May 2011).
This was exciting for Froissart scholars, since the four volumes had originally been copied and illustrated in central Paris around 1412-1418 under the direction of bookseller Pierre de Liffol, who seems to have glimpsed a market opportunity for the production of luxury copies of Froissart's Middle French Chronicles (Croenen).[7] The four volumes were displayed open at a fixed point inside sealed cases, but their entire contents could be explored in virtual format via interactive touchscreens nearby, using the Kiosque software specially developed for the purpose at the University of Sheffield by the Department of French and the Humanities Research Institute (Meredith).[8] The Chronicles remain one of the most important prose accounts in French of the Hundred Years' War between France and England and their respective allies; they remain a key source for study of the conflict. Their content forms the basis of the Online Froissart discussed later on in this paper.

Footnote 6: Images from the exhibition can be viewed at http://hrionline.ac.uk/onlinefroissart/ under "About the Project", iv. Related Projects.

Footnote 7: The books were sold mainly to clients in the service of Charles VI of France, though the iconographical emphasis of the illustrations to Besançon BM, ms. 864 testifies to a client with pro-English sympathies: Peter Ainsworth, 'Representing Royalty: Kings, Queens and Captains in Some Early Fifteenth-Century Manuscripts of Froissart's Chroniques', The Medieval Chronicle IV, Editions Rodopi (Amsterdam/Atlanta, 2006), pp. 1-38.

Footnote 8: Kiosque was developed by Dr Mike Meredith working with Peter Ainsworth and Tribal (Sheffield). The virtual versions of the Besançon and Paris manuscripts are part of a corpus comprising more than 6,000 high-resolution image files captured photographically from ten digitised manuscript volumes (2TB of data).
The Sheffield team on the Digging into Data to answer Authorship-Related Questions (DID-ARQ) project engaged above all with the virtual Chronicles, considered not so much as flexible surrogates for the originals locked in their closed display cases or hidden away in the strongrooms of their research libraries,[9] but rather as a supplementary source of data. However, the research began with the originals themselves (collation, codicology, palaeography, etc.). A second phase involved the careful nurturing over many months of good relationships of mutual understanding between the researchers and the conservators and librarians at the various libraries where photography was to take place. Once these were in place, the recruitment of talented specialist photographers[10] led in due course to the capture, under almost identical conditions, of no fewer than ten complete facsimiles (most of which are currently viewable via the Online Froissart), seven of them at an unusually high resolution (500 dpi). Development of fit-for-purpose manuscript viewing and manipulation software, Virtual Vellum, followed; final copyright clearance and permissions were then obtained to cover shared use of the images for the Online Froissart and by the international DID-ARQ consortium. All but two of the virtual manuscripts were shared in this way.

Footnote 9: Froissart manuscripts are today housed in libraries across two continents, from Texas to Toulouse.

Footnote 10: David Cooper, and Colin Dunn (Scriptura Ltd).

Colleagues at Illinois's College of Fine and Applied Arts, in partnership with the National Center for Supercomputing Applications, adopted as their particular focus the virtual manuscripts' decorative and illustrative content, concentrating on the twin manuscripts displayed in Paris in Spring 2010. Art historians working on early fifteenth-century iconography, artists and artistic schools in Paris long ago identified the hand responsible for the primary decoration of Besançon BM, ms. 864 and Paris, BnF f.
fr. ms. 2664 as being that of a disciple of the Rohan Master. The artist responsible for the miniatures of Besançon BM, ms. 865 and Paris, BnF f. fr. ms. 2663, on the other hand, was for many years thought to be a mediocre follower of the Master of the Berry Apocalypse, so called after a copy of the Apocalypse illustrated for the Duke of Berry and housed today at the Pierpont Morgan Library in New York (under ms. shelfmark M.133).[11] More recently, Inès Villela-Petit has argued that the artist, properly identified as the Boethius Master, deserves to be clearly distinguished from the Master of the Berry Apocalypse ("Deux Visions"). The illustrations to Besançon, Bibliothèque Municipale, ms. 864, meanwhile, are to be attributed to a forerunner of the Rohan Master, named the Giac Master for having illustrated a Book of Hours for Jeanne du Peschin, dame de Giac (Villela-Petit, "Les Heures"). DID-ARQ's starting point, therefore, was four manuscript volumes whose miniatures, initial letters and decorative borders were entrusted to two artists' workshops: those of the Giac and Boethius Masters. These artists were particularly favoured by bookseller De Liffol: their handiwork, and the often elegant penmanship of the scribes given the job of copying the texts, are an eloquent testimony to the remarkable activities of book trade artisans in Paris during the first quarter of the fifteenth century.

Footnote 11: This was for many years the view imposed by the eminent scholarship of Millard Meiss (chap. XI, p. 360 et sq.).

The research questions that Illinois elected to explore included the following:

1. How does the application of computer algorithms to the analysis of portrayal of the human face in the manuscript miniatures help scholars to refine the parameters of discriminating features traditionally used by art connoisseurs for characterising the distinctive handiwork of individual artists such as the Giac and Boethius Masters?[12]

2.
What do these computer techniques and their application to the image data reveal about the hands responsible for secondary decoration (e.g. initials and marginal decoration) of these manuscripts?

3. To what extent might our new e-Science techniques assist scholars to refine current knowledge of the human presence behind such broad-brush labels as 'Giac Master' or 'Boethius Master'?

4. Do our procedures suggest the presence and activity of more than one individual actively at work behind these labels?

The Sheffield operation, meanwhile, concentrated on the copying process and its outcome: the writing or script constituting the narrative of the four volumes. This extensive text was copied from exemplars by at least two teams of scribes to whose workshops the unbound quires were sent by Pierre de Liffol.[13] Scholars tend to call these scribes 'A', 'C' or 'G', so anonymous are they for the most part.[14] Scholars have

Footnote 12: The scientific findings of our Illinois colleagues are to be published separately (paper submitted to Digital Humanities Quarterly); in broad terms, two key iconographical elements generally used by art historians as an index of authorship were identified for scrutiny: (i) the faces of queens, kings and other figures within the illuminations; and (ii) representations of armour.

Footnote 13: Unbound quires were also sent out to the illustrators; their circulation and the 'piece work' character of their gradual preparation prior to receiving the attentions of bookbinder and bookseller are key aspects of the book culture of this period, as evidenced in Patrons, Authors and Workshops. Books and Book Production in Paris around 1400, G. Croenen and P. Ainsworth (eds), Peeters, "Synthema" 4 (Louvain - Paris - Dudley MA, 2006). See also Anne D. Hedeman, Translating the Past. Laurent de Premierfait and Boccaccio's De casibus, J. Paul Getty Museum (Los Angeles, 2008).

Footnote 14: There are of course notable exceptions; see for instance M.-H.
Tesnière, ‘Les manuscrits copiés par Raoul Tainguy : un aspect de la culture des grands offciers royaux au début du XVe siècle’, Romania 107 (1986), pp. 282‐368. Wherever possible, the transcriptions of the Online Froissart include marked‐up indications of shifts in hand from one copyist to another (see also: Online Froissart, “Apparatus”, Codicological Descriptions). none the less gradually built up an idea of particular scribal ‘personalities’ by singling out and describing the distinguishing features of their particular hands, and have been able in this way to adduce significant characteristics on which to found conclusions: particular ways of executing certain sequences of letters, such as the ligatures ‘th’ and ‘ch’, the ending ‘-ent’, or certain ways of writing the letters ‘a’ and ‘r’ (against the models furnished by contemporary bookhands taught to all apprentice scribes). DID- ARQ’s archive comprises 10 virtual manuscripts, each of which contains 300,000 words or more, all copied in a bookhand known to specialists as littera cursive libraria. Early in 2011 the Sheffield DID-ARQ team began to test some of the hypotheses just referred to against this digital dataset, applying algorithms referred to below. The key research question (broadly formulated here) underpinning research theorization and practice in this domain was: How might one adduce pertinent e-Science methodologies for the interrogation of such a large-scale database, the better to explore, characterise and circumscribe several particular manifestations (individual scribal hands ‘A’, ‘C’ or ‘G’…) of an attested early 15th-century bookhand (littera cursive libraria) (see Stiennon 284-5; Brown)? 
Initial palaeographical study suggested that it might prove significant to compare letter and word clusters (across our many hundreds of digitized folios) of writing by a postulated scribe ‘C’, for instance, using semi-automated definition of the perimeters of the letter and word shapes; this, it was thought, should help to generate augmented – and more objective – evidence towards the assignment to a particular scribe of responsibility for a given section of a given manuscript (‘X’). Once that scribe’s manual ‘idiolect’ had been so defined, it should become possible to search for his/her activity in other manuscripts (‘Y’ or ‘Z’). There was already some preliminary evidence to suggest that this was happening across the codices of the ‘Pierre de Liffol’ corpus, but it was our expectation that finer and more accurate methodologies based on the virtual manuscripts might confirm the hypothesis and account for it on the basis of more scientifically convincing evidence. Other potentially interesting areas for electronic investigation might include the ductus of the written text (the movement and direction of hand and pen as they make their upward and downward strokes and curves) as realised by a particular scribe, the overall neatness of the particular folio, the characteristic recourse by a scribe to abbreviations, and his or her recurring deployment of consistently similar spelling patterns. Ways to explore these electronically are still under review. The Sheffield DID-ARQ team has begun its investigations by attempting to extract a ‘digital fingerprint’ from the data using Polygonal Models and Shape Recognition. Sobel edge detection (from NCSA’s Image2Learn API) was applied to source images accessed from commonly-shared samples mounted on the consortium’s Medici image library. Line segments were then fitted to edge map data using the expectation-maximization (EM) algorithm. 
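A basic building block for the semi-automated tracing of letter and word perimeters described above is edge detection. The following is a generic, illustrative Sobel gradient sketch in plain Python; it is not the project's actual tooling (the team used NCSA software and HPC resources), but it shows how a greyscale folio image could be turned into an edge map for subsequent line-segment and shape fitting.

```python
# Illustrative Sobel edge detector; not the DID-ARQ project's actual code.

def sobel_edges(img):
    """Sobel gradient magnitude for a 2-D greyscale image given as a list
    of rows of numbers; border pixels are left at zero.

    A naive loop keeps the arithmetic explicit; a real pipeline would use
    a vectorised convolution over much larger images.
    """
    kx = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]   # horizontal-gradient kernel
    ky = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]   # vertical-gradient kernel
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            gx = gy = 0.0
            for di in (-1, 0, 1):
                for dj in (-1, 0, 1):
                    v = img[i + di][j + dj]
                    gx += v * kx[di + 1][dj + 1]
                    gy += v * ky[di + 1][dj + 1]
            out[i][j] = (gx * gx + gy * gy) ** 0.5  # edge strength per pixel
    return out
```

Strong responses along stroke boundaries give the polygonal-model stage the points along which to fit line segments, for example via an EM procedure such as the one the team describes.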
Shape recognition algorithms were subsequently applied to polygonal models to identify letters, words, symbols and patterns. The algorithms will be run in due course on the NCSA’s Petascale High Performance Computing (HPC) machine, Blue Waters, or on the NSF (National Science Foundation) Teragrid HPC resources, since the computation takes many hours of computing time. At time of writing, results are beginning to emerge which appear to offer some solid confirmation of the potential for identifying a steadily more objective digital fingerprint for our hitherto rather shadowy medieval copyists.15 *** All of the above has only been made possible because of digital resources originally obtained for the Online Froissart (Ainsworth and Croenen). Digital photography of the Stonyhurst and Toulouse manuscripts was supported by a Leverhulme Research Fellowship (Peter Ainsworth, 2005-2006). Grants were also obtained from Yorkshire Universities Gift Aid (2006) and the Worldwide Universities Network (2007 and 2008) to support digitisation of manuscripts at the Bibliothèque Royale, Brussels, and the Bibliothèque nationale de France, Paris. In 2010 the Harry Ransom Research Center at Austin, Texas, and the Bibliothèque Royale Albert 1er, Brussels, kindly gave their permission to the project to display their digital images (respectively Austin ms. 48 and Brussels BR mss II 88 and IV 251) alongside those from Besançon, Stonyhurst and Toulouse. The manuscript tradition of the Chronicles is a particularly rich quarry for research into many aspects of the period (social, political and military history, book production, literature and art history), but research on the manuscripts has to date been hampered by difficulties in comparing the original materials, dispersed to libraries across Europe and in the USA. The Online Froissart offers access to the manuscript tradition of the first three Books of Froissart’s Chronicles. 
It delivers complete or partial transcriptions of all 113 surviving manuscripts containing these Books, a new translation into modern English of selected chapters from each of these, several complete high-resolution reproductions of illuminated manuscript copies, and a range of secondary materials. These include codicological descriptions, an index of persons and places, historical and textual commentaries, scholarly essays, a glossary and some commentaries on the manuscript illustrations. Also provided are a number of advanced tools with which to unlock the riches of the resource: a collation tool allowing word-by-word comparisons across witnesses, a search engine for simple and more complex queries, and a transcription viewing mode allowing users to go straight to definition entries in the online Dictionnaire du Moyen Français.16 The DMF benefits in return from the very considerable lexicological material provided via the Online Froissart’s transcriptions. Additional bilateral collaboration is underway towards the production of fully lemmatised glossaries. Another innovative feature is the Online Froissart’s dedicated manuscript viewer (Virtual Vellum, developed by Peter Ainsworth and Mike Meredith) for manipulating the electronic facsimiles. From May 2011 users will be able quickly to pinpoint episodes from each of the three Books of the Chronicles. The Online Froissart project started in October 2007 with two complementary teams based at the Universities of Sheffield and Liverpool, led by Peter Ainsworth and Godfried Croenen.

15 A workshop scheduled for 06 June 2011 at Sheffield’s Humanities Research Institute is set to explore the latest findings and results: http://www.sheffield.ac.uk/content/1/c6/12/03/81/DID-ARQ%20Workshop%20Programme.pdf
It was launched on 31 March 2010 at the end of more than two years’ intensive work and is updated regularly (the latest update being scheduled for late May 2011). Work continues beyond the funded period, with fresh material being added on each occasion. The project website is hosted by the Humanities Research Institute at the University of Sheffield,17 while the online collaborative environment and associated tools used to share content developed at both universities are hosted at the University of Liverpool.

16 Thanks to a programme of workshops held over the period 2008-2011 and jointly funded by the British Academy and the Centre National de la Recherche Scientifique. The project brought together the Online Froissart (Universities of Sheffield and Liverpool), the Christine de Pizan Queen’s Manuscript (Universities of Edinburgh and St Andrews) and the Dictionnaire du Moyen Français (Université de Nancy 2, Laboratoire ATILF: Analyse et Traitement Informatique de la Langue Française). Project title: ‘Middle French and Other Medieval Vernacular Dictionaries’.

*** Much of the expertise underpinning the Digging into Data project discussed at the beginning of this paper originated from research and development begun on the Online Froissart, which in turn benefited from Leverhulme and other grants supporting the high-resolution photography required. 
Neither of these electronically-empowered projects would have happened, however, without the primary resources (the real manuscripts of parchment, paper and ink curated by their equally real conservators and librarians), intensive codicological and palaeographical study of which, conducted by ‘lone scholars’ (Peter Ainsworth, Godfried Croenen and their PhD students and postdoctoral colleagues), alone provided the intellectual and academic foundation for the more pervasively collaborative, interdisciplinary work described here.18 As the Arts and Humanities look to the future and its emerging new challenges, these two kinds of (complementary) academic endeavour – imperfectly termed the individual and the collaborative – need to be kept in thoughtful balance.

17 Credit for designing what is in effect an extremely complex resource is due to Jamie McLaughlin; the overall look of the site is the work of Michael Pidd (both of the Humanities Research Institute).

18 The present paper could not have been written without the input of our colleagues on the DID-ARQ project from Michigan State University, the University of Illinois at Urbana-Champaign and the National Center for Supercomputing Applications (also at Urbana-Champaign); we are extremely grateful for their support and partnership. For details of the other datasets studied within the DID-ARQ consortium (maps and quilts) see: http://www.sheffield.ac.uk/hri/projects/projectpages/did_images/datasets.html

Bibliography

Ainsworth, Peter and Mike Meredith. Virtual Vellum. Accessed 04 May 2011. http://www.shef.ac.uk/hri/projects/projectpages/virtualvellum.html
Ainsworth, Peter, and Godfried Croenen, eds. The Online Froissart. Version 1.2 (2011). Accessed 15 May 2011. http://www.hrionline.ac.uk/onlinefroissart
Ainsworth, Peter. “Representing Royalty: Kings, Queens and Captains in Some Early Fifteenth-Century Manuscripts of Froissart’s Chroniques”. The Medieval Chronicle IV (Amsterdam/Atlanta: Editions Rodopi, 2006): 1-38.
Brown, Michelle P. A Guide to Western Historical Scripts from Antiquity to 1600 (London: The British Museum, 1990).
Connected Histories. Accessed 04 May 2011. http://www.connectedhistories.org/
Croenen, Godfried and Peter Ainsworth, eds. Patrons, Authors and Workshops. Books and Book Production in Paris around 1400. Synthema 4 (Paris: Peeters, 2006).
Croenen, Godfried. “Le Libraire Pierre de Liffol et la Production de Manuscrits Illustrés des Chroniques de Jean Froissart à Paris au début du XVe siècle”. Art de l’enluminure 31 (Paris: Editions Faton, 2009): 14-23.
Dictionnaire du Moyen Français. http://www.atilf.fr/dmf/
Fearn, Hannah. “Research intelligence – Let’s dig a little deeper”. Times Higher Education (28 April 2011).
Hedeman, Anne D. Translating the Past. Laurent de Premierfait and Boccaccio’s De casibus (Los Angeles: J. Paul Getty Museum, 2008).
Meiss, Millard. French Painting in the Time of Jean de Berry: The Limbourgs and their Contemporaries (New York: Thames and Hudson, 1974).
Meredith, Mike, with Peter Ainsworth and Tribal. Kiosque. Accessed 04 May 2011.
Old Bailey Online (Old Bailey Proceedings Online). Accessed 04 May 2011. http://www.oldbaileyonline.org/
Simeone, Michael, Jennifer Guiliano, Rob Kooper, and Peter Bajcsy. “Digging into Data Using New Collaborative Infrastructures Supporting Humanities-based Computer Science Research”. First Monday 16.5 (2011).
Stiennon, Jacques. Paléographie du Moyen Âge. 3rd ed. (Paris: Armand Colin, 1999).
Tales from the Digital Archive. BBC Radio 4. 26 April 2011. Radio.
Tesnière, M.-H. “Les manuscrits copiés par Raoul Tainguy : un aspect de la culture des grands officiers royaux au début du XVe siècle”. Romania 107 (1986): 282-368.
Villela-Petit, Inès. “Deux visions de la Cité de Dieu : le Maître de Virgile et le Maître de Boèce”. Art de l’enluminure 17 (Paris: Editions Faton, 2006): 2-19.
Villela-Petit, Inès. “Les Heures de Jeanne du Peschin, dame de Giac. Aux origines du Maître de Rohan”. Art de l’enluminure 34 (Paris: Editions Faton, 2010): 2-25.

work_dsgpbjl3pndyzefwg27ci4rfvq ---- Treatment of hypertrophic scars and keloids with a fractional CO2 laser: A personal experience (Semantic Scholar)
DOI: 10.3109/14764172.2010.514924
Luca Scrimali, G. Lomeo, Corrado Nolfo, G. Pompili, S. Tamburino, A. Catalani, Paolo Siragò, and Rosario Emanuele Perrotta. “Treatment of hypertrophic scars and keloids with a fractional CO2 laser: A personal experience”. Journal of Cosmetic and Laser Therapy 12 (2010): 218-221.
Abstract: Keloids and hypertrophic scars are both abnormal wound responses in predisposed individuals, but they differ in that keloids extend beyond the original wound and almost never regress, while hypertrophic scars remain within the original wound and tend to regress. [...] This study is based on eight consecutive patients (four females and four males, F:M = 1:1) with a total of 12 keloids.
work_dunscjjbszhn3jfdbyeherefqi ---- News and Reviews. (Semantic Scholar)
DOI: 10.1210/jcem.96.12.zega26
Steve Martin. “News and Reviews.” The Journal of Clinical Endocrinology and Metabolism 96(12) (2011): 26A-27A.
Abstract: This is one of many publications in recent years describing agroforestry in temperate regions. It is made up of a collection of papers from the latest conference in the U.S.A. specifically on agroforestry. The first three and last three chapters look at general aspects, such as concepts, ecology, economics and social issues, while the bulk of the book is made up of five sections, each detailing one of the predefined practices relevant to North America: windbreaks, silvopastoral, alley cropping…

work_dvdfq7x2trbr7kdtotgncxzt4m ---- Effects of Carving Plane, Level of Harvest, and Oppositional Suturing Techniques on Costal Cartilage Warping (Semantic Scholar)
DOI: 10.1097/PRS.0b013e3182958aef
Jordan P. Farkas, M. R. Lee, Chris Lakianhi, and Rod J. Rohrich. “Effects of Carving Plane, Level of Harvest, and Oppositional Suturing Techniques on Costal Cartilage Warping”. Plastic and Reconstructive Surgery 132 (2013): 319-325.
Abstract: Background: Cartilage warping has plagued reconstructive and cosmetic rhinoplasty since the introduction of extra-anatomical cartilage use. With the present level of knowledge, there is no evidence of the warping properties with respect to cartilage harvest and suture techniques and level of rib harvest. This report aims to improve understanding of costal cartilage warping. Methods: The sixth through tenth costal cartilages were harvested from six fresh cadavers aged 54 to 90 years. Warping…

work_dvjvdhrkf5drtgiododl2w4opa ---- Reliability and Validity of a Photographic Method for Measuring Facial Hair Density in Men
Many studies have investigated hair removal or growth prevention treatments, but they often measure hair density using noninvasive methods that are subjective and qualitative.1 Although photographic and digital hair-counting methods have been used, their reliability and validity remain unknown.2 We describe a simple, noninvasive method of hair counting used in a hair growth prevention treatment trial and assess its reliability and validity.
Methods. The data are from the first 14 healthy men consecutively enrolled in a randomized, double-blinded, placebo-controlled trial of a topical agent for hair growth prevention. Eligible subjects were required to shave at least once daily to avoid a beard with hair length visible above the skin line and to have a baseline physician global assessment (PGA) score for hair density of 4 or 5 in the beard area. The PGA was developed by us for the larger clinical trial as a visual analog scale for rating hair density by overall impression (Figure). Subjects were randomized as to which side of their face would receive drug or placebo, which was then applied once daily after shaving to a treatment area within the beard region in a split-face design (Figure). The duration of active treatment was 6 or 8 weeks; subjects were assessed every 2 or 4 weeks for up to 8 to 16 weeks. Subjects did not shave for 48 hours prior to each visit so that they would have enough visible hair for assessment. 
At each visit, the PGA and digital photography of the treatment areas were performed (Figure). The study was approved by the University of Pennsylvania institutional review board. Two of us (J.W. and J.M.S.) independently counted hairs in all photographs to assess interrater reliability (Figure). Five months after the initial measurement, hairs were recounted in all photographs to assess test-retest reliability. We used the intraclass correlation coefficient (ICC) and Spearman ρ correlation to assess reliability. Construct validity was evaluated by comparing hair counts with respect to corresponding PGA ratings using the t test. We conservatively estimated a sample size of 100 photographs with 85% power to detect an ICC of 0.6, assuming a null ICC of 0.4 and α = 0.05. Results. The median age of the subjects was 28 years (interquartile range [IQR], 26-38 years). Eleven subjects were white (79%), and 3 were Asian (21%). All subjects had brown or black hair. A total of 130 photographs were obtained. Hair counts were approximately normally distributed, ranging from 2 to 391. The subject PGA scores were available for 114 photographs and ranged from 2 to 5 (median, 4; IQR, 4-4). Test-retest reliability demonstrated an ICC of 0.90 (95% confidence interval [CI], 0.86-0.93) and a Spearman ρ of 0.88 (95% CI, 0.84-0.92). Interrater reliability demonstrated an ICC of 0.81 (95% CI, 0.74-0.86) and a Spearman ρ of 0.81 (95% CI, 0.75-0.87). In the validity analysis, we included only PGA scores for which there were at least 10 corresponding photographs. Photographs with a PGA score of 3 had a lower mean hair count (mean [SD] count, 195.0 [16.5]) than those with a PGA score of 4 (mean [SD] count, 237.2 [5.8]) (P = .003). Comment. 
Our hair counting method demonstrates excellent interrater and intrarater reliability as well as construct validity based on its ability to discriminate categories of a PGA.3 In contrast to other methods, our approach does not require expensive or specialized equipment. It provides better quantification of hair changes than global assessment scales, which may be too qualitative for clinical trials.1 Moreover, it is less tedious and labor intensive than the manual collection, counting, and weighing of hair.4 Although automated methods such as the TrichoScan (TRICHOLOG GmbH, Freiburg, Germany) have reported high reliability, fully automated approaches are hindered by imperfect algorithms, which can lead to inaccuracy.1,5

Figure. Protocol for hair density assessments. (The image is copyrighted by the University of Pennsylvania, Philadelphia, and is reproduced with permission.)
Treatment area: Using clear transparencies, templates with a cutout of the treatment area with its prespecified size and shape (2.5-centimeter-diameter circle in this study) were constructed. Templates also contained the outline of an anatomic landmark (earlobe, angle of the jaw, and jaw line of the left or right side of the face in this study) in order to maintain consistency of the treatment area (beard region in this study) each time. Templates were used for both drug application and clinical evaluation.
Physician global assessment (PGA) of hair density: With the template in place, the investigator performed a PGA of the treatment area on the following scale: 5 = very dense; 4 = dense; 3 = moderately dense; 2 = sparse; 1 = very sparse; 0 = total alopecia.
Digital photographs and hair counts: With the template in place, digital photographs of the treatment area were taken with flash (Nikon D70S and SB-400, Nikon Corporation, Tokyo, Japan) at a prespecified distance (12 inches in this study) from the area. No special lighting conditions were required; photographs were obtained under fluorescent overhead lighting in the patient examination rooms of a dermatology clinic. Photographs were pooled, randomized, and viewed by blinded investigators at 100% magnification using imaging software (Photoshop CS4, Adobe Corp, San Jose, CA). Using the software “count tool” to mark and keep count of each hair, the total number of hairs at least 1 millimeter long within follicles located in the target area was counted. Hair lengths were measured using the software “ruler” function. Multiple hairs growing from the same follicular unit were considered to be 1 hair.

ARCH DERMATOL/VOL 147 (NO. 11), NOV 2011, WWW.ARCHDERMATOL.COM, p 1328. ©2011 American Medical Association. All rights reserved.

We recognize several limitations. First, hair diameter and length were not evaluated. Second, the camera was not mounted, and the skin in the treatment areas was not marked so as to guarantee the same exact evaluation distance and site every time. The generalizability of our results to areas with different hair density or to people with darker skin is unknown. Finally, additional studies are required to determine if this technique is responsive to true changes in hair density and to compare this method to other approaches such as digital photodermoscopy. Nevertheless, our simple, noninvasive method of hair counting demonstrates excellent reliability and discrimination validity and deserves further evaluation as an assessment tool for hair removal or growth prevention studies. Accepted for Publication: July 10, 2011. 
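The agreement statistics used in this letter (ICC and Spearman ρ between two raters' counts) can be reproduced in a few lines. The sketch below is not the authors' analysis code: it assumes the one-way random-effects ICC(1,1) form (the letter does not state which ICC variant was computed) and uses hypothetical hair counts for illustration.

```python
import numpy as np

def icc_oneway(ratings):
    """One-way random-effects ICC(1,1) for an (n_targets, k_raters) array.

    Illustrative formula choice; the letter does not specify the ICC form used.
    """
    r = np.asarray(ratings, dtype=float)
    n, k = r.shape
    target_means = r.mean(axis=1)
    # Between-target and within-target mean squares from a one-way ANOVA.
    msb = k * ((target_means - r.mean()) ** 2).sum() / (n - 1)
    msw = ((r - target_means[:, None]) ** 2).sum() / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

def spearman_rho(x, y):
    """Spearman rank correlation: Pearson correlation of (tie-averaged) ranks."""
    def rank(v):
        v = np.asarray(v, dtype=float)
        r = np.empty(len(v))
        r[np.argsort(v)] = np.arange(1, len(v) + 1)
        for val in np.unique(v):  # average ranks over tied values
            r[v == val] = r[v == val].mean()
        return r
    return np.corrcoef(rank(x), rank(y))[0, 1]

# Hypothetical counts by two raters over the same five photographs.
rater1 = [10, 52, 118, 200, 391]
rater2 = [14, 49, 120, 205, 380]
icc = icc_oneway(np.column_stack([rater1, rater2]))
rho = spearman_rho(rater1, rater2)
```

With strongly monotone counts like these, ρ comes out as 1.0 and the ICC is close to 1; the letter's observed values (ICC 0.81-0.90, ρ 0.81-0.88) reflect genuine disagreement between raters and between counting sessions.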
Author Affiliations: Departments of Dermatology (Mss Wan and Steinemann and Drs Abuabara, Kurd, Musiek, Vittorio, and Gelfand) and Internal Medicine (Drs Abuabara and Kurd) and Center for Clinical Epidemiology and Biostatistics (Dr Gelfand), University of Pennsylvania Perelman School of Medicine, Philadelphia; and Philadelphia VA Medical Center, Philadelphia (Dr Musiek). Dr Musiek is now with the Department of Dermatology, Washington University, St Louis, Missouri; Ms Steinemann is now with Drexel University College of Medicine, Philadelphia.

Correspondence: Dr Gelfand, 1471 Penn Tower, One Convention Avenue, Philadelphia, PA 19104 (Joel.Gelfand@uphs.upenn.edu).

Author Contributions: Ms Wan and Dr Gelfand had full access to all of the data in the study and take responsibility for the integrity of the data and the accuracy of the data analysis. Study concept and design: Abuabara, Kurd, Vittorio, and Gelfand. Acquisition of data: Wan, Abuabara, Musiek, Steinemann, and Gelfand. Analysis and interpretation of data: Wan and Gelfand. Drafting of the manuscript: Wan. Critical revision of the manuscript for important intellectual content: Wan, Abuabara, Kurd, Musiek, Steinemann, Vittorio, and Gelfand. Statistical analysis: Wan and Gelfand. Obtained funding: Vittorio and Gelfand. Administrative, technical, and material support: Wan, Abuabara, and Kurd. Study supervision: Musiek, Vittorio, and Gelfand.

Financial Disclosure: Dr Vittorio has filed a patent application for the use of DNA polymerase inhibitors in inducing alopecia. Dr Gelfand served as consultant and investigator with Abbott, Amgen, Centocor, Genentech, Novartis, and Pfizer; consultant with Celgene, Covance, Galderma, Shire Pharmaceuticals, and Wyeth; and investigator with Shionogi.
Funding/Support: This study was supported in part by grants from the Department of Dermatology at the University of Pennsylvania (Dr Vittorio), the Edwin and Fannie Gray Hall Center for Human Appearance at the University of Pennsylvania (Dr Vittorio), National Institutes of Health training grant T32-AR07465 (Ms Wan and Dr Musiek), and the Doris Duke Clinical Research Fellowship (Dr Abuabara).

Role of the Sponsors: The sponsors had no role in the design and conduct of the study; in the collection, analysis, and interpretation of data; or in the preparation, review, or approval of the manuscript.

Additional Contributions: Jennifer Goldfarb, RN, Albana Oktrova, and Debbie Leahy, LPN, did an outstanding job coordinating the clinical trial associated with this study.

1. Chamberlain AJ, Dawber RP. Methods of evaluating hair growth. Australas J Dermatol. 2003;44(1):10-18.
2. Hamzavi I, Tan E, Shapiro J, Lui H. A randomized bilateral vehicle-controlled study of eflornithine cream combined with laser treatment versus laser treatment alone for facial hirsutism in women. J Am Acad Dermatol. 2007;57(1):54-59.
3. Shrout PE, Fleiss JL. Intraclass correlations: uses in assessing rater reliability. Psychol Bull. 1979;86(2):420-428.
4. Price VH, Menefee E, Strauss PC. Changes in hair weight and hair count in men with androgenetic alopecia, after application of 5% and 2% topical minoxidil, placebo, or no treatment. J Am Acad Dermatol. 1999;41(5, pt 1):717-721.
5. Van Neste D, Trüeb RM. Critical study of hair growth analysis with computer-assisted methods. J Eur Acad Dermatol Venereol. 2006;20(5):578-583.

VIGNETTES

Connubial Androgenetic Alopecia

Report of a Case. We report herein the case of a 52-year-old, postmenopausal Hispanic woman who developed severe androgenetic alopecia following involuntary exposure to topical testosterone gel used by her spouse for the treatment of hypogonadism.
The patient presented to our clinic with complaints of a 1-year history of hair loss. Results of the pull test were negative; physical examination revealed severe hair thinning involving the crown and the frontotemporal regions. Dermoscopy of the scalp revealed more than 20% hair diameter variation. No other signs of hyperandrogenism were identified, such as hirsutism, acne, or obesity. A diagnosis of androgenetic alopecia with Hamilton pattern was established based on the clinical and dermoscopic findings.

Owing to the abrupt onset and Hamilton pattern, a laboratory workup was performed to exclude endocrine abnormalities. The results revealed high levels of testosterone (146 ng/dL; normal, 2-45 ng/dL; to convert testosterone to nanomoles per liter, multiply by 0.0347) and free testosterone (22.7 pg/mL; normal, 0.2-5.0 pg/mL). To exclude ovarian malignant neoplasm, abdominal and transvaginal ultrasonograms were obtained. The findings were normal.

Further investigation into the patient's history revealed that her spouse had started using a topical testosterone gel (containing 1% testosterone) 18 months previously for the treatment of hypogonadism. He had been applying the topical testosterone once daily (5-g packet) to his upper arm.

Joy Wan, BA; Katrina Abuabara, MD, MA; Shanu K. Kurd, MD, MSCE; Amy Musiek, MD; Jane M. Steinemann, BA; Carmela C. Vittorio, MD; Joel M. Gelfand, MD, MSCE

work_dxmekkkv45havb466wui6taq2i ---- untitled

Using digital photography to implement the McFarland method

J. R. Soc. Interface (2012) 9, 1892-1897. doi:10.1098/rsif.2011.0809. Published online 15 February 2012. Received 22 November 2011; accepted 20 January 2012. *Author for correspondence (lahuerta@uch.ceu.es).

L.
Lahuerta Zamora* and M. T. Pérez-Gracia
Departamento de Química, Bioquímica y Biología Molecular, Facultad de Ciencias de la Salud, Universidad CEU Cardenal Herrera, 46113 Moncada, Spain

The McFarland method allows the concentration of bacterial cells in a liquid medium to be determined by either of two instrumental techniques: turbidimetry or nephelometry. The microbes act by absorbing and scattering incident light, so the absorbance (turbidimetry) or light intensity (nephelometry) measured is directly proportional to their concentration in the medium. In this work, we developed a new analytical imaging method for determining the concentration of bacterial cells in liquid media. Digital images of a series of McFarland standards are used to assign turbidity-based colour values with the aid of dedicated software. Such values are proportional to bacterial concentrations, which allows a calibration curve to be readily constructed. This paper assesses the calibration reproducibility of an intra-laboratory study and compares the turbidimetric and nephelometric results with those provided by the proposed method, which is relatively simple and affordable; in fact, it can be implemented with a digital camera and the public domain software IMAGEJ.

Keywords: colorimetric imaging analysis; digital photography; McFarland method; charge-coupled device; IMAGEJ

1. INTRODUCTION

Microbial concentrations can be determined in various ways, including direct counting, plate counting and measurement of light scattering by bacterial cells in a liquid medium. Although these methods are all non-destructive, the last has the advantage that it is much more expeditious than the other two. Light passing through a microbial suspension is partly absorbed by the microbes and subsequently re-emitted in all directions; as a result, the suspension has a milky appearance under visible light [1].
Measuring the amount of light absorbed (turbidimetry) or scattered (nephelometry) under appropriate conditions allows one to estimate the amount of biomass present in the suspension. McFarland [2] devised a nephelometer for measuring suspended bacteria based on standards optically mimicking bacterial suspensions and obtained by chemical precipitation. Thus, mixing appropriate amounts of sulphuric acid (H2SO4) and barium chloride (BaCl2) produced known amounts of a fine barium sulphate (BaSO4) precipitate with the same light-scattering capacity as a suspension of bacterial cells. Although visual comparison is indeed possible, obtaining precise results entails comparing microbial suspensions and McFarland standards via turbidimetric or nephelometric measurements made at wavelengths over the range of 420-660 nm. Some dedicated commercial instruments have even been designed to provide measurements in 'McFarland units' [3].

This technique has the advantage that the standards are chemical and thus require no incubation, and also that (if visual) no instrument other than one's eyesight is required for comparison. However, it can be directly applied only to Gram-negative bacteria such as Escherichia coli, because other bacteria differ in volume and mass, and hence in their ability to scatter light. Although McFarland standards currently consist of suspensions of latex or titanium dioxide particles [4,5], which are more stable and longer lived than the former precipitates, barium sulphate remains in use for this purpose; in fact, its suspensions have been shown to remain stable for nearly 20 years if stored in tightly sealed tubes at room temperature in the dark [6]. In microbiology, McFarland standards continue to be used as reference suspensions for comparison with bacterial suspensions in liquid media for purposes such as obtaining antibiograms [7] and biochemical testing [8].
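The equivalences in table 1 below imply a simple rule of thumb: each McFarland unit corresponds to roughly 3 x 10^8 CFU ml^-1 for an E. coli-like suspension. A minimal helper makes that conversion explicit (the function name is mine, and the linear equivalence holds only for cells that scatter light like Gram-negative rods):

```python
def mcfarland_to_cfu_per_ml(standard):
    """Approximate bacterial density for a McFarland standard.

    Uses the ~3.0e8 CFU/ml per McFarland unit equivalence of Table 1;
    valid only for cells that scatter light like E. coli.
    """
    return standard * 3.0e8

for std in (0.5, 1, 2, 3, 4):
    print(f"McFarland {std}: ~{mcfarland_to_cfu_per_ml(std):.1e} CFU/ml")
```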
The increasing technical improvement and affordability of digital photography hardware and software [9] have promoted their use in quantitative chemical analyses. Today's digital cameras capture images with a light-sensitive chip called a 'charge-coupled device' (CCD), which plays the same role as photographic film in conventional (chemical) photography. Each cell in a CCD acts as a light-sensitive individual element providing an electrical response to light that can be digitized to build an optical image. Because CCD pixels respond to light intensity, but not to colour, reproducing the three primary colours the human eye can perceive [10] (red, green and blue) requires using a 'colour' CCD. Colour digital images are the additive combination of the three colours, which most of the sensors used in digital cameras obtain by superimposing a mosaic of red, green and blue filters (viz. a Bayer mask) over the pixel array in order to interpolate colour-related information for each individual pixel. A CCD consisting of eight-bit pixels can respond to 2^8 = 256 levels of grey ranging from 0 (black) to 255 (white). This allows each pixel in the red, green or blue channel of a CCD-captured image to be assigned a numerical value from 0 to 255, which can subsequently be used for analytical calibration. As a result, a digital camera can be used as an analytical sensor, because each captured image provides a vast amount of information [9]. Commercial CCD cameras have been available for 30 years, and they have been well appreciated and widely used for analytical purposes.

This journal is © 2012 The Royal Society.
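The RGB split used later in §2.2 (ImageJ's Image > Colour > RGB split) does nothing more than separate each 24-bit pixel into its three 8-bit channel values. A toy pure-Python sketch of that operation (the image data here are invented; the real frames are 2048 x 1536 pixels):

```python
# Toy 2x2 "24-bit" image: each pixel is an (R, G, B) tuple of 8-bit values.
image = [[(200, 180, 150), (190, 170, 140)],
         [(210, 185, 155), (205, 175, 145)]]

def split_rgb(img):
    """Return three 8-bit single-channel images, as ImageJ's RGB split does."""
    channels = []
    for c in range(3):  # 0 = red, 1 = green, 2 = blue
        channels.append([[px[c] for px in row] for row in img])
    return channels

red, green, blue = split_rgb(image)
# Each channel value is one of 2**8 = 256 grey levels (0-255).
assert all(0 <= v <= 255 for row in green for v in row)
print(green)  # the channel the authors used for calibration
```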
Analytical imaging (CCD) methods are increasingly being used in Raman spectroscopy [11], chemiluminescence spectroscopy [12] and transmission electron microscopy [13] at research laboratories. This has promoted the development of dedicated commercial equipment in which analyses are implemented on inductively coupled plasma-atomic emission spectroscopy (ICP-AES) systems [14], mass spectrometers [15] or fast-scan systems [16]. In biological research, CCDs have been used to capture digital images of UV-fluorescing substances for a variety of purposes, including documentation and quantitative analysis in, for example, electrophoretic separations of nucleic acids and proteins, even in in vivo tests in the latter case [17]. In the biotechnology field, digital image analysis techniques have even been used for the in situ characterization of multiphase dispersions [18] and for the monitoring of activated sludge processes [19]. CCDs have also been used to obtain digital images with a view to quantifying chemical species following separation by thin-layer chromatography (TLC) or high-performance thin-layer chromatography (HPTLC) [20-23]. However, in spite of their extensive use, the CCDs intended for analytical work are quite expensive.

On the other hand, the very low-cost, mass-produced digital cameras (based on CCD technology) aimed at the home consumer market have not yet been widely appreciated and have scarcely been used for analytical purposes. These low-cost devices have been employed in various fields, including forensic science [24,25], telemedicine and laboratory analyses. In fact, digital cameras have become an effective, inexpensive alternative to commercially available equipment (scanners) for qualitative and quantitative thin-layer chromatographic analysis [26]. Quantitative imaging analysis with a digital camera and the software IMAGEJ recently proved an effective, affordable choice for fluorimetric measurements in teaching laboratories [27].
This software-hardware combination has also enabled the quantitation of haemoglobin and melanin with a view to assessing erythema and pigmentation in human skin [10]. Digital images obtained with very inexpensive CCD cameras known as 'webcams' have been used to assess colour changes during acid-base titrations and found to provide results on a par (viz. no significant differences at the 95% confidence level) with those of spectrophotometric monitoring [28]. Also, a computer monitor has been successfully used as a light source, and a webcam as a detector, for purposes such as distinguishing wine samples [29]. Even built-in cameras in mobile phones have been used in telemedicine to capture and transfer biotesting results obtained by paper-based microfluidic devices to quantify glucose and proteins [30].

The method reported in this paper, which is similar to one used in previous work and very recently proposed by one of the authors to quantify analytes absorbing visible light in aqueous solutions [31,32], uses digital images of a series of McFarland standards in combination with appropriate software (IMAGEJ) to assign a numerical value to each colour hue (turbidity). Such a value is directly proportional to the concentration of the McFarland standard concerned and can thus be used for calibration. The proposed method performs on a par with the classical turbidimetric and nephelometric methods for this purpose, but has the advantage that it uses much more accessible and inexpensive hardware (a low-cost, mass-produced digital camera for the home consumer market) and software (IMAGEJ, which is in the public domain).

2. MATERIAL AND METHODS

2.1. Material

All reagents used were analytically pure unless stated otherwise. Solutions were prepared from water purified by reverse osmosis and de-ionized to 18 MΩ cm with a Sybron/Barnstead Nanopure II water purification system furnished with a fibre filter of 0.2 μm pore size.
H2SO4 and anhydrous BaCl2 were obtained from Panreac, both in analytical reagent grade. Once prepared, the McFarland standards (3 ml of each one) were transferred to the wells of an Iwaki 3820-024N polystyrene microplate (their holder for photographing; figure 2) with the aid of an Accumax VA-900 micropipette. All photographs were taken with a Nikon Coolpix E995 digital camera and processed with the public domain software IMAGEJ (Windows version), developed by the National Institutes of Health and available for free download at http://rsbweb.nih.gov/ij. Lighting was provided by two parallel fluorescent strips (Philips Master TL-D 36 W/840) located 1.5 m over the microplate (figure 1). The diffusing screen was made with white filter paper from ALBET (LabScience), 60 g m^-2 (in reams), 420 x 520 mm (code RM2504252). The microplate was placed on a piece of black cardboard (NE 30K A4, 180 g) from Hermanos Cebrián, Spain. Absorbance measurements were made with a Spectronic Genesis 20 UV-vis spectrophotometer, and fluorescence measurements with a Perkin Elmer LS50 luminescence spectrometer equipped with the software FL WINLAB. The McFarland value for each standard was obtained with a BD CrystalSpec dedicated nephelometer.

Figure 1. Schematic of the system for obtaining images: (1) static support for microplate and camera, (2) black cardboard, (3) microplate containing the McFarland standards, (4) camera (20 cm above the plate), (5) diffusing screen and (6) fluorescent strips (150 cm above the plate).

Figure 2. Image of a microplate holding McFarland standards and the circular area used to measure the average grey level by IMAGEJ. (Channel, eight-bit, 2048 x 1536 pixels.)

Table 1. Composition and equivalences of the standards in the McFarland series.

  McFarland standard no. | 1.0% anhydrous BaCl2 (ml) | 1% H2SO4 (ml) | approx. bacterial density (x10^8 CFU ml^-1)
  0.5                    | 0.05                      | 9.95          | 1.5
  1                      | 0.1                       | 9.9           | 3.0
  2                      | 0.2                       | 9.8           | 6.0
  3                      | 0.3                       | 9.7           | 9.0
  4                      | 0.4                       | 9.6           | 12.0

2.2. Procedure

All experimental work was performed in five sessions (A-E), involving the operations described below. Solutions containing 1 per cent (w/v) BaCl2 (equivalent to 0.04802 mol l^-1) and 1 per cent (v/v) H2SO4 (equivalent to 0.18010 mol l^-1) were used to prepare a series of McFarland standards; their composition and equivalence in colony-forming units (CFU) per millilitre of microbial suspension are shown in table 1. The standards were prepared in disposable, thread-cap test tubes.

Next, the optimum photographic conditions were established (figure 1). The supports for the McFarland standards were polystyrene microplates, on account of their advantageous geometry and transparency. This facilitated the simultaneous capture of an image of all calibration standards under identical lighting conditions. A volume of 3 ml of each standard was measured with the micropipette and transferred to a plate well. Once an aliquot of every standard had been transferred to the plate, an image was captured and processed with the software IMAGEJ.

The camera was operated as follows: because illuminating with the camera strobe light (flash) would have caused reflections on the solution surfaces, all lighting was provided by fluorescent strips. This makes the procedure easy to implement at virtually any laboratory. The effect of potential reflections of the strips on the solutions was avoided by placing a diffusing screen, made with a sheet of white filter paper, over and around the camera and plate. This setup provided soft lighting; in addition, using the largest aperture (i.e.
the smallest F-number) on the camera lens minimized exposure times, which were set as recommended by the camera in order to avoid too-dark (underexposed) or too-light (overexposed) images. The camera was placed 20 cm from the plate, on a static support, to ensure reproducible framing and shooting under identical conditions: F/5 as lens aperture and 1/2 s as exposure time. The plate was placed on a piece of black cardboard to enhance the milky turbidity of the McFarland standards.

Images were processed with the software IMAGEJ. The operation A > B > C means select command B on menu A and then select subcommand C in command B. First, the original colour image (in jpg format, 24-bit, 2048 x 1536 pixels) was split into three according to its RGB (red, green, blue) values (Image > Colour > RGB split), each split image being in jpg format, eight-bit and 2048 x 1536 pixels. The image for the green channel, which exhibited better linearity than those for the other two, was used to determine the 'grey level' for each standard; this was taken to be the average over a uniform circular area that was selected with the software's drawing tool (mean of 23 000 pixels) in each plate well (figure 2).

The influence of the size of the mentioned circular area was studied by testing five different sizes, namely (in pixels) 230 000 (the maximum allowed by the well size), 100 000, 50 000, 25 000 and 12 500. A linear calibration curve was obtained for each tested size. Only the curve obtained with the maximum tested area showed a smaller slope (0.2728) than the others, among which there were no significant differences (mean slope and s.d.: 0.3188 ± 0.0014; CV = 0.4%). This difference could be attributed to the reflections produced by the illumination near the internal edge of each well, which can be easily observed in figure 2. Finally, a circular area containing 23 000 pixels was selected as the optimum, because it avoids the inclusion of any reflection feature in the measured area.
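The same green-channel, circular-ROI measurement can be sketched outside ImageJ. The following illustration uses synthetic data (the well geometry and pixel values are invented, though the 23 000-pixel ROI matches the one chosen above) to show why an ROI that stays clear of the bright rim reflection returns an unbiased mean grey level, while a larger one does not:

```python
import math

def mean_in_circle(channel, cx, cy, area_px):
    """Mean 8-bit grey level inside a circular ROI of ~area_px pixels."""
    r = math.sqrt(area_px / math.pi)
    vals = [channel[y][x]
            for y in range(len(channel))
            for x in range(len(channel[0]))
            if (x - cx) ** 2 + (y - cy) ** 2 <= r * r]
    return sum(vals) / len(vals)

# Synthetic 200x200 green channel: uniform turbidity value of 120,
# with a bright reflection ring near the well edge (radius 90-99 px).
size = 200
channel = [[120 for _ in range(size)] for _ in range(size)]
for y in range(size):
    for x in range(size):
        if 90 ** 2 <= (x - 100) ** 2 + (y - 100) ** 2 <= 99 ** 2:
            channel[y][x] = 255  # rim reflection artifact

small_roi = mean_in_circle(channel, 100, 100, 23_000)  # r ~ 85.6 px, avoids the rim
big_roi = mean_in_circle(channel, 100, 100, 30_000)    # r ~ 97.7 px, touches the rim
print(small_roi, big_roi)
```

The small ROI returns exactly the turbidity value; the oversized ROI is biased upwards by the reflection, mirroring the slope difference the authors observed with their largest tested area.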
Figure 3. Calibration curve directly obtained with the proposed method (grey level versus McFarland standard).

Figure 4. Linear transformation of the calibration curve of figure 3 via a log-log plot (ln(grey level) versus ln(McFarland standard + 1)).

Table 2. Results obtained with the four instrumental techniques compared (slope of the calibration curve in each working session).

  working session | nephelometer | spectrophotometer | spectrofluorimeter | IMAGEJ
  A               | 1.492        | 0.1263            | 168.1              | 0.3200
  B               | 1.539        | 0.1317            | 160.4              | 0.3304
  C               | 1.378        | 0.1474            | 128.2              | 0.3289
  D               | 1.525        | 0.1424            | 137.4              | 0.3107
  E               | 1.521        | 0.1384            | 140.3              | 0.3450
  s.d.            | 0.07         | 0.008             | 17                 | 0.013
  average slope   | 1.49         | 0.137             | 147                | 0.327
  CV (%)          | 4.7          | 5.8               | 11.6               | 4.0

Then, each McFarland standard was used to make absorbance (625 nm) and fluorescence measurements (λex = λem = 625 nm, with both slits set at their minimum value and signal attenuation at 1%) during each working session. Finally, each solution was assigned a McFarland value by the BD CrystalSpec nephelometer.

3. RESULTS AND DISCUSSION

By way of representative example, this section presents and discusses the results of working session C. As stated in §2.2, the first step in the determinations involved capturing colour images of the McFarland standard series. The images were then processed with the software IMAGEJ for splitting into the three primary channels, and the green one (G) was stored for subsequent processing, as it resulted in more linear calibration curves than the red (R) and blue (B) channels. Each standard was used to measure the average grey level in a circular area spanning 23 000 pixels (figure 2). As can be seen from figure 3, plotting the results exposed a logarithmic trend in them.
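The summary rows of table 2 (average slope, s.d. and CV) can be recomputed directly from the five per-session slopes; the sketch below does so for the IMAGEJ column. Note that the paper's printed CVs appear to come from the rounded mean and s.d. (0.013/0.327 gives 4.0%), so an unrounded recomputation lands at about 3.9%:

```python
from statistics import mean, stdev

imagej_slopes = [0.3200, 0.3304, 0.3289, 0.3107, 0.3450]  # sessions A-E, table 2

m = mean(imagej_slopes)
s = stdev(imagej_slopes)   # sample standard deviation (n - 1)
cv = 100 * s / m           # coefficient of variation, %

print(f"average slope = {m:.3f}, s.d. = {s:.3f}, CV = {cv:.1f}%")
```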
In order to accurately compare the calibration curves with the turbidimetric (spectrophotometric) and nephelometric (spectrofluorimetric) results, which exhibited a linear trend, the logarithmic calibration curve was easily converted into a straight line (figure 4) by a log-log plot. Negative x-values were avoided by adding one unit (+1) to all values. Then, the absorbance of each McFarland standard was measured and a calibration curve run from both these results and those of the spectrofluorimetric and nephelometric measurements of the standards. These operations were all conducted in each working session. Table 2 shows the slopes of the calibration curves obtained with the four instrumental techniques used in each session. Following application of Dixon's Q criterion for rejection of outliers (none was in fact detected), averages and their corresponding standard deviations were calculated and rounded off to the required decimal place by the usual criteria, and the coefficients of variation between the slopes of the calibration curves for each technique were obtained (table 2).

4. CONCLUSIONS

On the basis of the overall results, the proposed method provides results comparable with those of the conventional turbidimetric and nephelometric methods, with correlation coefficients between calibrations exceeding 0.99 in all cases. This imaging method possesses a high repeatability (CV = 4.0% with n = 5), exceeding even that of the nephelometric method used by the dedicated instrument for measuring McFarland standards (CV = 4.7%, n = 5). The proposed method is operationally simple and inexpensive: in fact, all it requires is a digital camera
and a computer as measuring instruments, and both are more flexible, accessible and affordable than the conventional spectrophotometers, spectrofluorimeters or dedicated nephelometers typically used for this purpose. In addition, images can be readily processed with the public domain software IMAGEJ, developed by the National Institutes of Health and freely available on the Internet, which is very user-friendly. In a scenario dominated by increasingly sophisticated and expensive commercial instruments, the proposed method provides an interesting alternative inasmuch as it enables quantitative determinations in teaching laboratories and modest facilities in developing countries, where economic resources for purchasing and maintaining measuring equipment are typically scant.

The authors thank the students Mª Isabel Cano Esteban and María Solana Altabella for their assistance during the experimentation.

REFERENCES

1 Koch, A. L. 1994 Growth measurement. In Methods for general and molecular bacteriology (eds P. Gerhardt, R. G. E. Murray, W. A. Wood & N. R. Krieg), pp. 248-277. Washington, DC: American Society for Microbiology.
2 McFarland, J. 1907 The nephelometer: an instrument for estimating the number of bacteria in suspensions used for calculating the opsonic index and for vaccines. J. Am. Med. Assoc. 49, 1176-1178.
3 BD CrystalSpec nephelometer user's guide. See http://www.bd.com/ds/technicalCenter/inserts/8809791JAA(201010).pdf (accessed May 2011).
4 Roessler, W. G. & Brewer, C. R. 1967 Permanent turbidity standards. Appl. Microbiol. 15, 1114-1121.
5 Pugh, T. L. & Heller, W. 1957 Density of polystyrene and polyvinyl toluene latex particles. J. Colloid Sci. 12, 173-180. (doi:10.1016/0095-8522(57)90004-1)
6 Washington II, J. A., Warren, E. & Karlson, A. G. 1972 Stability of barium sulfate turbidity standards. Appl. Microbiol. 24, 1013.
7 Performance standards for antimicrobial susceptibility testing.
2009 Nineteenth informational supplement. CLSI document no. M100-S19.
8 Murray, P. R., Baron, E. J., Jorgensen, J. H., Landry, M. L. & Pfaller, M. A. 2007 Manual of clinical microbiology, 9th edn. Washington, DC: American Society for Microbiology (ASM) Press.
9 Byrne, L., Barker, J., Pennarun-Thomas, G. & Diamond, D. 2000 Digital imaging as a detector for generic analytical measurements. Trend. Anal. Chem. 19, 517-522. (doi:10.1016/S0165-9936(00)00019-4)
10 Yamamoto, T., Takiwaki, H., Arase, S. & Ohshima, H. 2008 Derivation and clinical application of special imaging by means of digital cameras and ImageJ freeware for quantification of erythema and pigmentation. Skin Res. Technol. 14, 26-34. (doi:10.1111/j.1600-0846.2007.00256.x)
11 Schlücker, S., Schaeberle, M. D., Huffman, S. W. & Levin, I. W. 2003 Raman microspectroscopy: a comparison of point, line, and wide-field imaging methodologies. Anal. Chem. 75, 4312-4318. (doi:10.1021/ac034169h)
12 Goldmann, T., Zyzik, A., Loeschke, S., Lindsay, W. & Vollmer, E. 2001 Cost-effective gel documentation using a web-cam. J. Biochem. Biophys. Methods 50, 91-95. (doi:10.1016/S0165-022X(01)00174-9)
13 Fan, G. Y. & Ellisman, M. H. 2000 Digital imaging in transmission electron microscopy. J. Microsc. 200, 1-13. (doi:10.1046/j.1365-2818.2000.00737.x)
14 Hanley, Q. S., Earle, C. W., Pennebaker, F. M., Madden, S. P. & Denton, M. B. 1996 Charge-transfer devices in analytical instrumentation. Anal. Chem. 68, A661-A667. (doi:10.1021/ac9621229)
15 Koppenaal, D. W., Barinaga, C. J., Denton, M. B., Sperline, R. P., Hieftje, G. M., Schilling, G. D., Andrade, F. J. & Barnes IV, J. H. 2005 MS detectors. Anal. Chem. 77, 418A-427A. (doi:10.1021/ac053495p)
16 Merk, S., Lietz, A., Kroner, M., Valler, M. & Heilker, R. 2004 Time-resolved fluorescence measurements using microlens array and area imaging devices. Comb. Chem. High Throughput Screen 7, 45-54.
17 Chakravarti, B., Loie, M., Ratanaprayul, W., Raval, A., Gallagher, S. & Chakravarti, D. N. 2008 A highly uniform UV transillumination imaging system for quantitative analysis of nucleic acids and proteins. Proteomics 8, 1789-1797. (doi:10.1002/pmic.200700891)
18 Galindo, E., Larralde-Corona, C. P., Brito, T., Córdova-Aguilar, M. S., Taboada, B., Vega-Alvarado, L. & Corkidi, G. 2005 Development of advanced image analysis techniques for the in situ characterization of multiphase dispersions occurring in bioreactors. J. Biotechnol. 116, 261-270. (doi:10.1016/j.jbiotec.2004.10.018)
19 Mesquita, D. P., Dias, O., Amaral, A. L. & Ferreira, E. C. 2009 Monitoring of activated sludge settling ability through image analysis: validation on full-scale wastewater treatment plants. Bioprocess Biosyst. Eng. 32, 361-367. (doi:10.1007/s00449-008-0255-z)
20 Lancaster, M. D., Goodall, M., Bergström, E. T., McCrossen, S. & Myers, P. 2005 Quantitative measurements on wetted thin layer chromatography plates using a charge coupled device camera. J. Chromatogr. A 1090, 165-171. (doi:10.1016/j.chroma.2005.06.068)
21 Lancaster, M. D., Goodall, M., Bergström, E. T., McCrossen, S. & Myers, P. 2006 Real-time image acquisition for absorbance detection and quantification in thin-layer chromatography. Anal. Chem. 78, 905-911. (doi:10.1021/ac051390g)
22 Hayakawa, T. & Hirai, M. 2003 An assay of ganglioside using fluorescence image analysis on a thin-layer chromatography plate. Anal. Chem. 75, 6728-6731. (doi:10.1021/ac0346095)
23 Prosek, M. & Vovk, I. 1997 Reproducibility of densitometric and image analysing quantitative evaluation of thin-layer chromatograms. J. Chromatogr. A 779, 329-336. (doi:10.1016/S0021-9673(97)00442-1)
24 Wagner, J. W. & Miskelly, G. M. 2003 Background correction in forensic photography I. Photography of blood under conditions of non-uniform illumination or variable substrate color. Theoretical aspects and proof of concept. J. Forensic Sci.
48, 593-603.
25 Wagner, J. W. & Miskelly, G. M. 2003 Background correction in forensic photography II. Photography of blood under conditions of non-uniform illumination or variable substrate color. Practical aspects and limitations. J. Forensic Sci. 48, 604-613.
26 Hess, A. V. I. 2007 Digitally enhanced thin-layer chromatography: an inexpensive, new technique for qualitative and quantitative analysis. J. Chem. Ed. 84, 842. (doi:10.1021/ed084p842)
27 Cumberbatch, T. & Hanley, Q. S. 2007 Quantitative imaging in the laboratory: fast kinetics and fluorescence
Lahuerta Zamora and M. T. Pérez-Gracia 1897 D ow nl oa de d fr om h tt ps :/ /r oy al so ci et yp ub li sh in g. or g/ o n 05 A pr il 2 02 1 quenching. J. Chem. Ed. 84, 1319 – 1322. (doi:10.1021/ ed084p1319) 28 Gaiao, E. N., Martins, V. L., Lyra, W. S., Almeida, L. F., Silva, E. C. & Araujo, M. C. U. 2006 Digital image-based titrations. Anal. Chim. Acta 570, 283 – 290. (doi:10.1016/j. aca.2006.04.048) 29 Alimelli, A., Filippini, D., Paolesse, R., Moretti, S., Ciolfi, G., D’Amico, A., Lundstrom, I. & Di Natale, C. 2007 Direct quantitative evaluation of complex substances using computer screen photo-assisted technology: the case of red wine. Anal. Chim. Acta 597, 103– 112. (doi:10.1016/j.aca.2007.06.033) 30 Martinez, A. W., Phillips, S. T., Carrilho, E., Thomas, S. W., Sindi, H. & Whitesides, G. M. 2008 Simple J. R. Soc. Interface (2012) telemedicine for developing regions: camera phones and paper-based microfluidic devices for real-time, off-site diagnosis. Anal. Chem. 80, 3699 – 3707. (doi:10.1021/ ac800112r) 31 Lahuerta Zamora, L., Alemán López, P., Antón Fos, G. M., Martı́n Algarra, R., Mellado Romero, A. M. & Martı́nez Calatayud, J. 2011 Quantitative colorimetric- imaging analysis of nickel in iron meteorites. Talanta 83, 1575 – 1579. (doi:10.1016/j.talanta.2010.11.058) 32 Lahuerta Zamora, L., Mellado Romero, A. M. & Martı́nez Calatayud, J. 2011 Quantitative colorimetric analysis of some inorganic salts using digital photography. Anal. Lett. 44, 1674 – 1682. (doi:10.1080/00032719.2010. 
Quantifying pigment cover to assess variation in animal colouration
Siegenthaler, A, Mondal, D and Benvenuto, C
http://dx.doi.org/10.1093/biomethods/bpx003

Title: Quantifying pigment cover to assess variation in animal colouration
Authors: Siegenthaler, A, Mondal, D and Benvenuto, C
Type: Article
URL: This version is available at: http://usir.salford.ac.uk/id/eprint/41544/
Published Date: 2017

USIR is a digital collection of the research output of the University of Salford. Where copyright permits, full text material held in the repository is made freely available online and can be read, downloaded and copied for non-commercial private study or research purposes.
Please check the manuscript for any further copyright restrictions. For more information, including our policy and submission procedure, please contact the Repository Team at: usir@salford.ac.uk.

METHODS MANUSCRIPT

Quantifying pigment cover to assess variation in animal colouration

Andjin Siegenthaler, Debapriya Mondal, and Chiara Benvenuto*

School of Environment and Life Sciences, University of Salford, Salford M5 4WT, UK

*Correspondence address. School of Environment and Life Sciences, Room 317, Peel Building, University of Salford, Salford M5 4WT, UK. Tel: +44 (0)161-295-5141; E-mail: C.Benvenuto@salford.ac.uk

Abstract

The study of animal colouration addresses fundamental and applied aspects relevant to a wide range of fields, including behavioural ecology, environmental adaptation and visual ecology. Although a variety of methods are available to measure animal colours, only a few focus on chromatophores (specialized cells containing pigments) and pigment migration. Here, we illustrate a freely available and user-friendly method to quantify pigment cover (PiC) with high precision and low effort using digital images, where the foreground (i.e. pigments in chromatophores) can be detected and separated from the background. Images of the brown shrimp, Crangon crangon, were used to compare PiC with the traditional Chromatophore Index (CI). Results indicate that PiC outcompetes CI for pigment detection and transparency measures in terms of speed, accuracy and precision. The proposed methodology provides researchers with a useful tool to answer essential physiological, behavioural and evolutionary questions on animal colouration in a wide range of species.
Keywords: chromatophores; chromatosomes; Chromatophore Index; colour change; colour threshold; Crangon crangon; ImageJ

Introduction

The study of animal colouration and colour patterns is essential to gather a better understanding of how animals visually communicate and how they can match different substrates. Furthermore, this type of study provides important insights on how predation avoidance due to camouflage can drive inter- and intraspecific variation, and how colouration and visual perception are connected (e.g. [1]). A wide range of methods has been developed to measure animal colouration, which can be roughly divided into three categories: (i) spectral quantification of colouration and animal vision [2, 3]; (ii) assessment of colour patterns [4-7]; and (iii) analysis of chromatophores and pigment migration [8-10]. The last method has been used mainly to study animal colour changes [9, 11, 12]. Chromatophores are specialized cells containing pigmented organelles and can be located in the dermis, epidermis, beneath a translucent exoskeleton, deep in muscular tissue or around internal organs [13-15]. In crustaceans, multiple tightly bound chromatophores (of similar or different colours) are combined in a structure called a chromatosome [13, 16]. Many animals can regulate their colour by the dispersal and concentration of pigments within chromatophores (e.g. [12, 17]): colour can be changed over a period of days to months through anabolism and catabolism of pigments and cells (morphological colour change) or within milliseconds to hours via the migration of pigments within chromatophores (physiological colour change) [12]. The concentration or dispersion of pigments reduces or increases their visibility, since less or more surface area is covered by them, respectively [18, 19]. Hogben and Slome [20] described changes in the pigment distribution in the frog Xenopus laevis by classifying chromatophores in five classes (Supplementary Fig.
S1), applying a Melanophore Index (MI) for melanophores (also more generally called a Chromatophore Index (CI) for chromatophores containing pigments other than melanin [20, 21]). Although this method has been extensively used (see Table 1 for some recent examples), concerns have been raised about its degree of subjectivity, statistical validity and labour intensiveness [22, 23]. Here, we describe a new method, PiC (Pigment Cover), to assess the degree of pigment dispersion within chromatophores (or chromatosomes) by measuring the coverage of pigments in defined areas of an animal body, thus allowing us to evaluate colour variations in a quantitative way. The objective of this study is to demonstrate the use and versatility of PiC and compare it to the established CI. To achieve this, both PiC and CI were applied to a database of pictures of the brown shrimp, Crangon crangon (L.), a crustacean characterized by good background-matching abilities [8].

Received: 27 September 2016; Revised: 15 February 2017; Editorial decision: 17 February 2017; Accepted: 2 March 2017
© The Author 2017. Published by Oxford University Press. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted reuse, distribution, and reproduction in any medium, provided the original work is properly cited.
Biology Methods and Protocols, 2017, 1-8. doi: 10.1093/biomethods/bpx003

Material and methods

Protocol to measure PiC

Image acquisition

Measurements on animal colour or pigment migration are usually performed on a specific body region rather than the whole animal [1, 10, 39]. In some cases, e.g.
fish scales [36], the area of interest can be separated from the animal prior to image acquisition, reducing the effects of animal stress on the colour [40]. The specimen should be placed and photographed on a uniform surface (Fig. 1A). Contrast between background and pigments should be as high as possible; overlap with underlying organs should be avoided, if possible [23]. The magnification should be high enough to distinguish individual chromatosomes. If multiple pigments are studied, the collection of multiple images of the same area on different backgrounds might be necessary (see below). To optimize image acquisition, illumination within an image should be uniform and shadows or reflection of light should be avoided. Light conditions are, nevertheless, less constrained than in other methods (e.g. [3, 41]) and colour charts are not required (they can vary in quality and applicability; [2, 3]). Still, standardization of lighting conditions and camera settings will significantly reduce the use of manual adaptations during image analysis (see [41] for more information on the standardization of digital images). In digital photography, images are commonly displayed in a non-linear standard default colour space (sRGB). PiC can be applied to these standard images. For more rigorous and objective image analyses, linear images are often required. If this is the case, sRGB images can be converted to the CIELAB colour space using the 'Color Space Converter' plugin of ImageJ (https://imagej.nih.gov/ij/plugins/color-space-converter.html). A normalization step is advised to slightly enhance the contrast within the images by using the 'Enhance Contrast' command of ImageJ. A slight over-saturation of 1% is advised for improved visual evaluation [48].

Colour threshold

PiC image analysis can be performed with any graphic editor able to perform image segmentation (partitioning an image into sets of pixels) by means of thresholding.
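The computation behind this workflow is simple to state: a threshold separates foreground (pigment) pixels from the background, and PiC is the fraction of pixels selected. As an illustration only, here is a self-contained Python sketch (helper names are ours; this is not ImageJ code) of an iterative intermeans (IsoData-style) threshold on greyscale values, followed by the cover fraction:

```python
def isodata_threshold(values, tol=0.5, max_iter=100):
    """Iterative intermeans (IsoData-style) threshold: start at the global mean,
    then move the threshold to the midpoint of the two class means until stable."""
    t = sum(values) / len(values)
    for _ in range(max_iter):
        lo = [v for v in values if v <= t]
        hi = [v for v in values if v > t]
        if not lo or not hi:  # degenerate image: all pixels fall on one side
            break
        new_t = (sum(lo) / len(lo) + sum(hi) / len(hi)) / 2.0
        if abs(new_t - t) < tol:
            return new_t
        t = new_t
    return t

def pigment_cover(gray_values, threshold):
    """PiC as a fraction: proportion of pixels at or below the threshold
    (dark pigment photographed on a light background)."""
    dark = sum(1 for v in gray_values if v <= threshold)
    return dark / len(gray_values)
```

For a crop in which 40% of the pixels are dark pigment on a light background, pigment_cover returns 0.4; in practice, ImageJ's 'Color Threshold' and 'Analyze Particles' commands perform the equivalent selection and measurement on the full colour image.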
Image segmentation by semi-automatic thresholding is an established method that has been used in a range of biological studies, including crop root length [42], plant signals [43] and cell counts [44], but not specifically on pigment coverage. The methodology described in this section is tailored to the freely available Java-based imaging program ImageJ (1.48v, http://imagej.nih.gov/ij/; [45]; RRID:SCR_003070) because of its ease of use and efficacy, but could easily be adapted to other graphic software. Images need to be cropped to the region of interest and segmented to differentiate foreground (the pigments under study) and background [46]. In ImageJ, sRGB image segmentation is achieved with the 'Color Threshold' function (Fig. 1B), which segments 24-bit RGB images based on pixel values (see the ImageJ user guide; [47]). A range of automatic thresholding algorithms is available in ImageJ. These algorithms perform differently depending on the distribution of pixel values in the image, and the most suitable thresholding algorithm should be selected prior to analysis [42, 48], e.g. using the 'Threshold Check' macro of the BioVoxxel toolbox (http://www.biovoxxel.de/development/, http://fiji.sc/BioVoxxel_Toolbox#Threshold_Check). The sensitivity of the threshold function can be manually adapted using the 'Saturation' and 'Brightness' scroll bars in the colour threshold settings window (Fig. 1B) until the whole area covered by the pigment(s) of interest is selected [44, 49]. Manual alteration of the thresholding level reduces, however, the objectivity of the analysis and should be avoided as much as possible. Specific pigments can also be selected by adapting the 'Hue' scroll bar (Fig.
1B) to the required hue values [43]. For transparency measurements, the 'Hue' scroll bar should be used to select the background colour, to ensure that only the transparent area is selected (the background will be visible through the transparent tissue) and all pigments are ignored. In cases where only one channel of the image is analysed (e.g. CIELAB's L channel or greyscale images), ImageJ's 'Threshold' function can be used in a similar fashion to the 'Color Threshold' function.

Table 1: Selected publications applying the MI of Hogben and Slome [20]

Group       Species                    Area of interest           Topic                  Method  Source
Amphibian   Bufo melanostictus         Dorsal skin                Drug development       MI(a)   [24]
            Hoplobatrachus tigerinus   Isolated dorsal skin cell  Physiology             MI(a)   [25]
            Rana catesbeiana           Dorsal skin                Endocrinology          MI      [26]
            Taricha granulosa          Larva                      UV protection          MI      [27]
            Ambystoma gracile          Larva                      UV protection          MI      [27]
            Ambystoma macrodactylum    Larva                      UV protection          MI      [27]
            Xenopus laevis             Larva                      Developmental biology  MI      [28, 29]
Crustacean  Chasmagnathus granulata    Maxilliped's meropodit     UV protection          CI      [30]
            Palaemonetes argentinus    Dorsal abdomen             UV protection          CI      [30]
            Eurydice pulchra           Not specified              Endocrinology          CI(a)   [31]
            Palaemon pacificus         Dorsal abdomen             Endocrinology          CI      [32, 33]
Reptile     Hemidactylus flaviviridis  Dorsal skin                Drug development       MI(a)   [34]
Teleost     Ctenopharyngodon idellus   Scale                      Physiology             MI      [35]
            Danio rerio                Scale and embryo           Physiology             MI      [36]
            Oncorhynchus mykiss        Scale                      Ecotoxicology          MI      [37]
            Verasper moseri            Base of caudal fin         Developmental biology  MI      [38]

(a) Modified index; MI, Melanophore Index (pigment is melanin); CI, Chromatophore Index (pigment is not melanin).

2 | Siegenthaler et al.

PiC analysis

The area of the selected pigment(s) can be calculated with the 'Analyze Particles' command (Fig. 1C), which measures 'particles' (separate shaped objects) in an image after thresholding has been performed, by scanning the image and outlining the edges of the objects found [47, 50].

Case study

Dark and light pigment measurements and transparency

Five specimens of C. crangon were selected based on visual differences in colour. Their right exopod (the external branch of their tail fan) was photographed under a stereo microscope (Leica S6D) with a Leica DFC295 camera. The tail fan is the most suitable body area of caridean shrimp to be used for monitoring chromatic parameters because: (i) it is very flat; (ii) it has no underlying organs or tissue (and is thus highly transparent); and (iii) it can be photographed while causing minimal stress to the animal [23, 51]. Artificial illumination was provided by two LED spotlights (JANSJÖ; 88 lm; 3000 Kelvin) positioned at either side of the microscope. We adjusted the white balance prior to image collection and allowed the exposure time to be automatically adapted. Images were collected in sequence, on four differently coloured backgrounds (Fig. 2): white for the measurement of dark-coloured (black and sepia-brown) pigments; black for light-coloured (white and yellow) pigments; and green and blue for transparency measurements. Green and blue hues do not occur naturally in C.
crangon [8, 51] and are, therefore, suitable for transparency measurements (both colours were used in order to test which one performs better). To avoid adaptation to the background during the measurements, shrimp were kept for a very short duration only (less than 1 minute) on each background. Images were saved in uncompressed TIFF format (RGB), cropped to 1 mm² and analysed following the protocol described above, using the default thresholding method, based on the IsoData algorithm [52, 53], and manually adapted if needed. We selected the default thresholding algorithm for this experiment since it performed best for the variety of features (dark pigment, light pigment, transparency) tested. For the same photos, we determined the CI, in accordance with the method of Hogben and Slome [20], by classifying all chromatosomes in the selected area individually and averaging their values (see Supplementary Fig. S1 for reference).

Figure 1: Protocol for PiC measurements. This diagram outlines the steps to be performed in ImageJ to determine PiC. See text for details.

Animal pigment cover quantification | 3

Dark PiC and CI comparison

Fifty sRGB images of C. crangon (Fig. 3; obtained from 36 individual shrimp) were selected to represent the range of colouration shown by shrimp (lighter or darker, depending on the substrate where the animals were kept). We tested the robustness of the methodology by selecting images varying in properties such as illumination and picture quality. All images were obtained on a white background and cropped to 1 mm² in the centre of the exopod. Images were analysed for dark pigments, which are the most abundant and evident pigments responsible for dark colouration [8, 51]. Three observers analysed the images with the PiC and CI methods, in random order. Prior to analysis, we applied a threshold check (BioVoxxel toolbox) to a sub-selection of 13 images to determine the optimal thresholding algorithm. Based on the average score of these images, the MaxEntropy algorithm [54] was selected for all images. Manual adaptation was applied as little as possible (on average on 23% of the images, depending on the observer). To test the effect of image linearization and normalization, the 50 sRGB images were transformed to the CIELAB colour space and the L channel was normalized prior to PiC determination. The MaxEntropy thresholding algorithm was applied and, in this case, no manual adaptation was allowed, to eliminate the need for subjective input. PiC values of the sRGB (averaged over the observers) and linearized/normalized images were compared using linear regression.

Data analyses

Inter-observer variation for both dark PiC and CI was tested with Friedman's test. This statistical test was selected because of the non-normally distributed nature of both proportions and ordinal data, and the fact that each image was tested repeatedly. Both PiC (percentage of PiC transformed to a fraction) and CI results were averaged between observers, and a beta regression (betareg R package; [55]) was used to compare the methods. This specific analysis can also be important to predict the results from one method (PiC) when having information from the other (CI). Different link functions (log, log-log and logit) were compared based on the Akaike Information Criterion (AIC).
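As a concrete illustration of the inter-observer test just described, the Friedman statistic can be computed from the per-image scores of the observers. The authors used R and SPSS; the sketch below is a plain-Python equivalent (function name is ours; ties between observers are not handled):

```python
def friedman_statistic(scores):
    """Friedman chi-square statistic for repeated measures.
    scores: one list per image, holding that image's score from each observer.
    Scores are ranked within each image; rank sums per observer are compared."""
    n = len(scores)       # number of images (subjects)
    k = len(scores[0])    # number of observers (treatments)
    rank_sums = [0.0] * k
    for row in scores:
        # rank observers within this image (ties broken arbitrarily in this sketch)
        order = sorted(range(k), key=lambda j: row[j])
        for rank, j in enumerate(order, start=1):
            rank_sums[j] += rank
    return 12.0 / (n * k * (k + 1)) * sum(r * r for r in rank_sums) - 3.0 * n * (k + 1)
```

If one observer scores consistently highest, the statistic is maximal (8.0 for three observers and four images); perfectly balanced rank sums give 0.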
Beta regression is considered a suitable test for non-parametric and bounded data such as proportions [55]. Data analyses were performed with R statistical software v.3.1.2 (RRID:SCR_001905) and IBM SPSS statistics v.20 (RRID:SCR_002865).

Results

Dark and light pigment measurements and transparency

For the five specimens analysed, dark PiC values ranged from 8.9% to 92.1% and light PiC values from 0.8% to 10.6% (Fig. 2). Transparency measurements ranged from 19.8% to 89.2% on a blue background and from 18.1% to 84.3% on a green background (mean difference ± SD: 1 ± 5.9%) and did not significantly differ between the background colours (Wilcoxon signed-ranks test: N = 5, Z = −0.674, P = 0.500). CI, by definition, cannot be calculated for transparency (Fig. 2). When dark pigments were predominant (e.g. shrimps 4 and 5 in Fig. 2), the CI of light pigments could not be calculated, as it was impossible to distinguish the chromatosomes' shape. Furthermore, the high overlap of dark chromatosomes made it impossible to count the number of chromatosomes to calculate the mean dark CI. In these cases, the CI was estimated as 5, the maximum index value.

Figure 2: Pigment and transparency values for cover (%) and CI for five shrimps (C. crangon) on different backgrounds. For each specimen, the right exopod was photographed, always in the same exact position, and then the image was cropped in the centre (selecting 1 mm²). Red areas represent the area selected by the PiC method. NA: CI cannot be calculated. *CI is an estimate.

4 | Siegenthaler et al.

Figure 3: Percentage dark PiC and CI of 50 images (1 mm²) of C. crangon's exopods.
The images show different levels of chromatosome dispersion and represent the range of colouration exhibited by the animals. See text for information on capital letters.

Dark PiC and CI comparison

PiC and CI for all 50 images were calculated (Fig. 3). Dark PiC showed a strong exponential relationship with CI (Fig. 4), and the beta regression confirmed a significant relationship between PiC and CI (coefficient ± SEM: 0.659 ± 0.034; P < 0.0001) with a pseudo-R² value of 0.95. The equation to estimate PiC from a known CI value was modelled as:

Ln (predicted PiC) = −3.362 + 0.659 × CI

The equation is only valid for 1 ≤ CI ≤ 5 and 0 ≤ PiC ≤ 1. According to AIC values, the log link function (AIC: −127) provided a better fit than models with a logit (AIC: −82) or log-log (AIC: −66) link function. In half of the images, the observers were not able to provide a reliable count of the maximally dispersed chromatosomes, necessary to calculate the CI, due to a high level of overlap between the chromatosomes. Above 63 ± 9% PiC, individual chromatosomes overlapped, resulting in unreliable CI estimates; above 80 ± 9% PiC it was not possible to detect any difference based on CI, since all chromatosomes were in the highest category (CI = 5). No problems were encountered during the estimation of PiC, including the darkest images. The observers spent on average 75 ± 5 min calculating the CI and only 18 ± 9 min determining PiC. Results differed significantly among observers for both methods (Friedman's test: CI: N = 50, df = 2, χ² = 11.09, P = 0.04; PiC: N = 50, df = 2, χ² = 18.67, P < 0.001), with an average relative standard deviation over all images of 3% for CI and 6% for PiC. Individual regression parameters were similar among the observers (Supplementary Table S1), and the majority of the variation in the PiC estimates was caused by one observer, who relied on manual adaptation (N = 23) much more than the other observers (N = 6 and N = 5).
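The fitted model can be used directly to convert CI scores from earlier studies into approximate PiC values. A minimal Python sketch of the equation above (function name is ours; the stated validity bounds are enforced):

```python
import math

def predict_pic(ci):
    """Predicted pigment cover (as a fraction) from a mean Chromatophore Index,
    via the beta-regression fit Ln(predicted PiC) = -3.362 + 0.659 * CI."""
    if not 1 <= ci <= 5:
        raise ValueError("fit is only valid for 1 <= CI <= 5")
    return math.exp(-3.362 + 0.659 * ci)
```

A fully concentrated index (CI = 1) predicts a cover of roughly 0.07, while fully dispersed chromatosomes (CI = 5) predict roughly 0.94, matching the logarithmic shape of the fit.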
Linear regression estimates of PiC values of sRGB versus linearized/normalized images showed that both methods produce concordant results (Supplementary Fig. S2; df = 45, R² = 0.995, P < 0.001, slope = 0.948), indicating that the use of sRGB images did not produce significant systematic errors in this case.

Discussion

Animal colouration can be assessed by determining pigment dispersion in individual chromatophores or in multicellular chromatosomes (e.g. [19, 21]). The traditional and widely used CI [20] classifies individual chromatophores or chromatosomes based on their physiological state, indexing their extent of dispersion. As a result, the CI does not provide information on their morphological state (abundance of pigments). Animals with widely spaced, but fully dispersed, chromatosomes (Fig. 3, D) have, consequently, the same maximum index (CI = 5) as animals with a high abundance and overlap of chromatosomes (Fig. 3, H), even though the difference in darkness is visually apparent. This issue has already been considered by Parker [22], who observed catfish with clear differences in darkness not distinguishable by the values of CI (all falling in the maximum category). Methods relying on the measurement of the diameter of the chromatosomes [19, 56] have the same problem, since they also omit morphological variation [22]. PiC combines information on both the distribution and abundance of pigments and is, therefore, able to distinguish physiological differences within the same animal (Fig. 3, A vs. E) and morphological differences between animals with the same physiological chromatosome state (Fig. 3, F vs. G), even in very dark animals (PiC > 80%). The comparison between PiC and CI shows the range where it is possible to transform the values from one method to the other and where PiC is more precise than CI. The logarithmic relationship indicates that the more dispersed the pigments are, the more effective the PiC is in detecting small differences between images compared to the CI. Thresholding methods are considered a more reliable tool for image analysis than human judgement [44]. Nevertheless, the accuracy and objectivity of PiC is influenced by the amount of manual adaptation applied. The database used during this study consisted of images taken under a variety of lighting conditions, to show the wide applicability of PiC. However, automatic thresholding algorithms work best with images taken with identical lighting conditions and camera settings. Manual adaptation of the threshold values, required in cases where the image quality was not optimal (e.g. Fig. 3, B and C), resulted in increased observer variation and subjectivity. In studies where standardization of the images is not possible, extra care should be taken to ensure the objectivity of the study (e.g. observers being made blind to the treatments; between-observer repeatability analysis). These considerations should also be taken into account for the CI. The CI is furthermore less precise in darker animals, and it takes up to four times longer than PiC. This difference in analysis speed is due to the fact that the CI can only be determined by the manual classification of every single chromatosome in the image. Moreover, PiC allows testing for transparency, which is important in studies of colour change [21, 57]. Digital photography is a popular technique in animal colouration research due to its availability, speed, relatively low price and ease of data acquisition [41, 58, 59]. Although there are issues with the use of digital images in animal colour studies [41], most of these relate to the control for variation in lighting conditions and the conversion of images to animal vision systems [41, 59]. Most cameras produce non-linear images (e.g.
sRGB) that generally over- or underestimate light values, and rigorous image analysis methods should include linearization and normalization of these images [41, 59]. PiC focusses on a priori specified pigments and does not rely on the exact colour or observer's vision system. In this method, the difference between foreground and background pixels in an image is more important than the exact colour, thus stable lighting conditions are less relevant for PiC than for methods requiring linearized images. Studies that analyse chromatophores and pigment migration (see Table 1 for examples) usually focus on a limited number of pigments, in high contrast with the background. In these types of studies, PiC can also be used with sRGB images (as shown by the concordant PiC values of sRGB and linearized images reported above), as long as the users are aware of the limitations of the use of non-linear images. In cases where a more precise, objective and rigorous determination of animal colour is required, image normalization and standardization can be performed prior to PiC determination. Standardization of lighting conditions and camera settings is also advised in these cases. Besides being less constrained regarding lighting conditions, PiC is also easy to use and fast in the analysis of large surfaces (as opposed to spectrometry; [3]).

Figure 4: Relationship between CI and dark PiC fraction. Measurements were performed on 50 images of C. crangon (Fig. 3). Mean values and SD for the readings of three observers are given per image. The solid line shows the beta regression fit (with log link function).

The study of animal colouration is a broad field of investigation encompassing molecular, cellular, physiological, behavioural and evolutionary questions [1, 12]. The proposed methodology combines the advantages of digital image acquisition with the power of a free open-source program. PiC is simple to use, can also be easily employed for educational purposes [60] and can be applied in any system where rapid colour change is determined by pigment migration in chromatophores. The brown shrimp's chromatosome system is a widely applicable model, since its physiological factors are well studied and its pigment system is complex and essentially similar to those of vertebrates [8, 13, 61-63]. The proposed method will thus be a useful tool in future investigations on animal colouration as a fast and effective proxy for the interpretation of complex and dynamic biological systems in a wide range of species.

Acknowledgements

We would like to thank Asma Althomali, Clément Dufaut and Héctor Abarca Velencoso for their great help with image analysis; Paul Oldfield and Steve Manning for their assistance with the sampling; and Alexander Mastin for his advice on statistical analyses. Three anonymous reviewers have provided insightful and valuable comments and feedback on a previous version of this manuscript. This project has been funded by the University of Salford and the Mersey Gateway Environmental Trust.

Conflict of interest statement. None declared.
Supplementary data

Supplementary data is available at Biology Methods and Protocols online.

References

1. Stevens M, Rong CP, Todd PA. Colour change and camouflage in the horned ghost crab Ocypode ceratophthalmus. Biol J Linn Soc 2013;109:257-70.
2. Stevens M, Stoddard M, Higham J. Studying primate color: towards visual system-dependent methods. Int J Primatol 2009;30:893-917.
3. White TE, Dalrymple RL, Noble DWA et al. Reproducible research in the study of biological coloration. Anim Behav 2015;106:51-7.
4. Holmes W. The colour changes and colour patterns of Sepia officinalis L. Proc Zool Soc Lond 1940;A110:17-35.
5. Hanlon RT. Cephalopod dynamic camouflage. Curr Biol 2007;17:400-4.
6. Taylor CH, Gilbert F, Reader T. Distance transform: a tool for the study of animal colour patterns. Methods Ecol Evol 2013;4:771-81.
7. Merilaita S, Dimitrova M. Accuracy of background matching and prey detection: predation by blue tits indicates intense selection for highly matching prey colour pattern. Funct Ecol 2014;28:1208-15.
8. Koller G. Über chromatophorensystem, farbensinn und farbwechsel bei Crangon vulgaris. J Comp Physiol A 1927;5:191-246.
9. Perkins EB, Snook T. Control of pigment migration in the chromatophores of crustaceans. Proc Natl Acad Sci USA 1931;17:282-85.
10. Darnell MZ. Ecological physiology of the circadian pigmentation rhythm in the fiddler crab Uca panacea. J Exp Mar Biol Ecol 2012;426-27:39-47.
11. Sköld HN, Aspengren S, Wallin M. Rapid color change in fish and amphibians - function, regulation, and emerging applications. Pigment Cell Melanoma Res 2013;26:29-38.
12. Umbers KDL, Fabricant SA, Gawryszewski FM et al. Reversible colour change in Arthropoda. Biol Rev 2014;89:820-48.
13. Elofsson R, Kauri T. The ultrastructure of the chromatophores of Crangon and Pandalus (Crustacea). J Ultrastruct Res 1971;36:263-70.
14. Bauer RT. Color patterns of the shrimps Heptacarpus pictus and H. paludicola (Caridea: Hippolytidae). Mar Biol 1981;64:141-52.
15. Fujii R. The regulation of motile activity in fish chromatophores. Pigment Cell Res 2000;13:300–19.
16. McNamara JC. Morphological organization of crustacean pigmentary effectors. Biol Bull 1981;161:270–80.
17. Meelkop E, Temmerman L, Schoofs L et al. Signalling through pigment dispersing hormone-like peptides in invertebrates. Prog Neurobiol 2011;93:125–47.
18. Smith HG. The receptive mechanism of the background response in chromatic behaviour of Crustacea. Proc R Soc B 1938;125:250–63.
19. Peter J, Meitei KV, Ali AS et al. Role of histamine receptors in the pigmentary responses of the wall lizard, Hemidactylus flaviviridis. Curr Sci 2011;101:226–29.
20. Hogben L, Slome D. The pigmentary effector system. VI. The dual character of endocrine co-ordination in amphibian colour change. Proc R Soc B 1931;108:10–53.
21. Auerswald L, Freier U, Lopata A et al. Physiological and morphological colour change in Antarctic krill, Euphausia superba: a field study in the Lazarev Sea. J Exp Biol 2008;211:3850–58.
22. Parker GH. Methods of estimating the effects of melanophore changes on animal coloration. Biol Bull 1943;84:273–84.
23. Flores EE, Chien Y-H. Chromatosomes in three phenotypes of Neocaridina denticulata Kemp, 1918: morphological and chromatic differences measured non-invasively. J Crust Biol 2011;31:590–97.
24. Ali SA, Naaz I. Understanding the ultrastructural aspects of berberine-induced skin-darkening activity in the toad, Bufo melanostictus, melanophores. J Microsc Ultrastruct 2015;3:210–19.
25. Ali SA, Salim S, Sahni T et al. 5-HT receptors as novel targets for optimizing pigmentary responses in dorsal skin melanophores of frog, Hoplobatrachus tigerinus. Br J Pharmacol 2012;165:1515–25.
26. Gao L, Yu Z, Meng D et al. Analogue of melanotan II (MTII): a novel melanotropin with superpotent action on frog skin. Protein Pept Lett 2015;22:762–66.
Animal pigment cover quantification | 7

27. Belden LK, Blaustein AR. UV-B induced skin darkening in larval salamanders does not prevent sublethal effects of exposure on growth. Copeia 2002;2002:748–54.
28. Eagleson GW, Selten MM, Roubos EW et al. Pituitary melanotrope cells of Xenopus laevis are of neural ridge origin and do not require induction by the infundibulum. Gen Comp Endocrinol 2012;178:116–22.
29. Eagleson GW, van der Heijden RA, Roubos EW et al. A developmental analysis of periodic albinism in the amphibian Xenopus laevis. Gen Comp Endocrinol 2010;168:302–06.
30. Gouveia GR, Lopes TM, Neves CA et al. Ultraviolet radiation induces dose-dependent pigment dispersion in crustacean chromatophores. Pigment Cell Res 2004;17:545–48.
31. Wilcockson DC, Zhang L, Hastings MH et al. A novel form of pigment-dispersing hormone in the central nervous system of the intertidal marine isopod, Eurydice pulchra (Leach). J Comp Neurol 2011;519:562–75.
32. Meelkop E, Marco HG, Janssen T et al. A structural and functional comparison of nematode and crustacean PDH-like sequences. Peptides 2012;34:74–81.
33. Marco HG, Gäde G. Biological activity of the predicted red pigment-concentrating hormone of Daphnia pulex in a crustacean and an insect. Gen Comp Endocrinol 2010;166:104–10.
34. Ali SA, Meitei KV. Nigella sativa seed extract and its bioactive compound thymoquinone: the new melanogens causing hyperpigmentation in the wall lizard melanophores. J Pharm Pharmacol 2011;63:741–46.
35. Jiang Q, Wong AO. Signal transduction mechanisms for autocrine/paracrine regulation of somatolactin-alpha secretion and synthesis in carp pituitary cells by somatolactin-alpha and -beta. Am J Physiol Endocrinol Metab 2013;304:E176–86.
36. Xu J, Xie FK. Alpha- and beta-adrenoceptors of zebrafish in melanosome movement: a comparative study between embryo and adult melanophores.
Biochem Biophys Res Commun 2011;405:250–55.
37. Lennquist A, Martensson Lindblad LG, Hedberg D et al. Colour and melanophore function in rainbow trout after long term exposure to the new antifoulant medetomidine. Chemosphere 2010;80:1050–55.
38. Yoshikawa N, Matsuda T, Takahashi A et al. Developmental changes in melanophores and their asymmetrical responsiveness to melanin-concentrating hormone during metamorphosis in barfin flounder (Verasper moseri). Gen Comp Endocrinol 2013;194:118–23.
39. Brown FA, Sandeen MI. Responses of the chromatophores of the fiddler crab, Uca, to light and temperature. Physiol Zool 1948;21:361–71.
40. Nguyen N, Sugimoto M, Zhu Y. Production and purification of recombinant somatolactin beta and its effects on melanosome aggregation in zebrafish. Gen Comp Endocrinol 2006;145:182–87.
41. Stevens M, Párraga CA, Cuthill IC et al. Using digital photography to study animal coloration. Biol J Linn Soc 2007;90:211–37.
42. Tajima R, Kato Y. Comparison of threshold algorithms for automatic image processing of rice roots using freeware ImageJ. Field Crops Res 2011;121:460–63.
43. Swanson J-D, Carlson JE, Guiltinan MJ. Use of image analysis software as a tool to visualize non-radioactive signals in plant in situ analysis. Plant Mol Biol Rep 2006;24:105.
44. Drury JA, Nik H, van Oppenraaij RH et al. Endometrial cell counts in recurrent miscarriage: a comparison of counting methods. Histopathology 2011;59:1156–62.
45. Schneider CA, Rasband WS, Eliceiri KW. NIH Image to ImageJ: 25 years of image analysis. Nat Methods 2012;9:671–75.
46. Hartig SM. Basic image analysis and manipulation in ImageJ. Curr Protoc Mol Biol 2013. doi: 10.1002/0471142727.mb1415s102.
47. Ferreira T, Rasband W. ImageJ User Guide. ImageJ/Fiji 1.46. Bethesda, Maryland: National Institutes of Health, 2012.
48. Brocher J. Qualitative and quantitative evaluation of two new histogram limiting binarization algorithms. IJIP 2014;8:17–65.
49. Jensen EC.
Quantitative analysis of histological staining and fluorescence using ImageJ. Anat Rec 2013;296:378–81.
50. Papadopulos F, Spinelli M, Valente S et al. Common tasks in microscopic and ultrastructural image analysis using ImageJ. Ultrastruct Pathol 2007;31:401–07.
51. Brown FA, Wulff VJ. Chromatophore types in Crago and their endocrine control. J Cell Comp Physiol 1941;18:339–53.
52. Ridler TW, Calvard S. Picture thresholding using an iterative selection method. IEEE Trans Syst Man Cybern 1978;8:630–32.
53. Landini G. Auto Threshold (ImageJ). http://fiji.sc/Auto_Threshold (7 March 2016, date last accessed).
54. Kapur JN, Sahoo PK, Wong AKC. A new method for gray-level picture thresholding using the entropy of the histogram. Comput Vision Graph 1985;29:273–85.
55. Cribari-Neto F, Zeileis A. Beta regression in R. J Stat Soft 2010;34:1–24.
56. McNamara JC, Ribeiro MR. The calcium dependence of pigment translocation in freshwater shrimp red ovarian chromatophores. Biol Bull 2000;198:357–66.
57. Sköld HN, Svensson PA, Zejlon C. The capacity for internal colour change is related to body transparency in fishes. Pigment Cell Melanoma Res 2010;23:292–95.
58. Stevens M, Merilaita S. Animal camouflage: current issues and new perspectives. Philos Trans R Soc B 2009;364:423–27.
59. Troscianko J, Stevens M. Image calibration and analysis toolbox: a free software suite for objectively measuring reflectance, colour and pattern. Methods Ecol Evol 2015;6:1320–31.
60. Heggland SJ, Lawless AR, Spencer LW. Visualizing endocrinology in the classroom. Am Biol Teach 2000;62:597–601.
61. Keeble F, Gamble FW. The colour-physiology of higher Crustacea. Philos Trans R Soc B 1904;196:295–388.
62. Elofsson R, Hallberg E. Correlation of ultrastructure and chemical composition of crustacean chromatophore pigment. J Ultrastruct Res 1973;44:421–29.
63. Rao KR. Crustacean pigmentary-effector hormones: chemistry and functions of RPCH, PDH, and related peptides.
Am Zool 2001;41:364–79.

8 | Siegenthaler et al.

University of East London Institutional Repository: http://roar.uel.ac.uk

Author(s): Tiwari, Meera; Sharmistha, Uma
Title: ICTs in Rural India: User Perspective Study of Two Different Models in Madhya Pradesh and Rural Bihar, India
Year of publication: 2008
Citation: Tiwari, M., Sharmistha, U. (2008) 'ICTs in Rural India: User Perspective Study of Two Different Models in Madhya Pradesh and Rural Bihar, India', Science, Technology & Society 13(2), pp. 233–258.
Link to published version: http://dx.doi.org/10.1177/097172180801300204
DOI: 10.1177/097172180801300204

ICTs in Rural India: User Perspective Study of Two Different Models in Madhya Pradesh and Rural Bihar, India

MEERA TIWARI and UMA SHARMISTHA∗

This article presents findings of two user perspective studies on the impact of ICTs in rural India. It is based on fieldwork conducted by the authors in Dhar district of Madhya Pradesh and Madhubani district of Bihar. The first study examines the impact of a state-led ICT initiative. The second looks at the impact of an ICT initiative by a non-government organisation. The article identifies issues critical to enhancing the accessibility of ICT services to the poorest rural households. A comparison is made between the two models in reaching the ICT services to the rural poor.
THE GROWING EXTENSIVE application of ICTs in poverty reduction strategies in India is encouraging. This article provides primary data on two types of ICT delivery mechanisms in rural India. It identifies good practice for ICTs as enablers of development as well as factors impeding the participation of the most disadvantaged. The aim of the primary research is two-fold. First, to explore if ICTs can be deployed to enable the improvement of rural human capital and increase participation in market opportunities. Second, to study which type of delivery mode may be better suited for reaching the ICTs to the most disadvantaged groups for capacity building at the individual, community and societal levels.

Introduction

There is growing consensus in the global community on the positive role of ICTs in the development process. More recently the debate on ICTs for rural development has intensified through the emergence of literature, both conceptual and primary, based on ICT applications in developing countries. The success story of India's ICT sector is well documented; however, research into ICTs in the rural sector is at a rudimentary stage. Thus, the central question arises: what is the impact of ICT-enabled development on the rural sector in India? India is the second fastest growing economy in the world, at over 8 per cent a year, next only to China. Much of the Indian growth is attributed to the rapid expansion of the export-oriented ICT sector. India accounts for almost 17 per cent of the world's population. A majority of these live in the rural sector, and are dependent on agriculture as their main livelihood (Datt and Ravallion 2002; World Bank 2001a). It is also the home to the largest number of poor living under the World Bank's poverty line of US$ 1 a day. An overwhelming majority of these live in the rural sector. In recent literature the views of the proponents of 'growth is sufficient' have been contested and come under intense criticism.
The search for pro-poor growth has gained much support and momentum in the current literature (Baulch and McCulloch 2000; Kakwani and Pernia 2000; Manning 2007; Paternostro et al. 2007; Ravallion 2004). Further, the evidence for the ICT sector, though much in need of further research, suggests that its growth in India may not be pro-poor (Tiwari 2006). The article is, therefore, both timely and critical in exploring whether and how ICT growth is impacting the lives of the rural poor in India. In addition, it investigates the type of ICT-enabled projects that may be more pro-poor in the rural sector. The study is grounded in primary research conducted through field surveys in Dhar district of Madhya Pradesh and Madhubani district of Bihar. In the year 2000 the first ICT-driven development project in India, called Gyandoot, was implemented by the Madhya Pradesh government in Dhar. The second project, Drishtee, is an NGO developed to replicate the Gyandoot model across the country. Madhubani in Bihar is one of the few areas where Drishtee is working for rural development through ICTs. It is envisaged that the findings of the study will further the understanding of the much-contested benefits, direct and indirect, of ICT-based projects in the selected rural areas. The objectives of the study are:

1. to build and assess the profiles of the beneficiary and the non-beneficiary cohorts of ICT-driven rural development projects;
2. to identify causative, impeding and enabling factors in the successful spread of the projects; and
3. to identify which model has a higher reach to the poor.

The article is organised in six sections. A brief introduction is followed by an overview of the current literature on the role of ICTs in developing countries. The context within which the primary research was undertaken is discussed next. Then we move on to a description of the two projects, Gyandoot and Drishtee.
This is followed by a brief description of the socio-economic features of the regions where the projects are operating. The methodology for the study and the primary data analysis are given in section four. Section five presents a discussion on the wider debate on ICTs and rural poverty reduction based on the findings of the primary research. Conclusions of the study are outlined finally.

Context and the Current Literature

Amidst growing evidence that ICT can play a constructive role in development (Chapman et al. 2003; Indjikian and Siegel 2005; Hanna 2008; Heeks 2002; Hongladarom 2004; McNamara 2003; Papaioannou and Dimelis 2007; Tongia et al. 2005; World Bank 2006), this article explores the impact of ICT growth on rural poverty in India. Given the increasing interest in ICT and the centrality of rural poverty reduction on the country's development agenda at the global level, this primary research is timely. The study explores the role of ICT as enablers of development, and how ICT excludes the most disadvantaged in rural India. It also explores the type of delivery mechanism that may be better at reaching the poor. The 'digital divide' underpins much of the ongoing discourse on whether ICT can be harnessed for mitigating poverty in developing countries. The proponents of ICT (UNCTAD 2003) consider them a tool that can be used to provide economic opportunity to the poor and improve human well-being. The World Bank (2001b) identified three critical areas in poverty reduction efforts: opportunity, empowerment and security. Since then vast amounts of resources have been invested (Hanna 2008; UNCTAD 2003: 1) in ICTs in many developing countries. Their growing extensive application in poverty reduction strategies in India is encouraging.
Though in terms of both diffusion of technology and the skills/literacy gap India fares worst and most polarised amidst its competitors, Ireland and Israel, and economic rival China (UNCTAD 2003; World Bank 2001a), India is ahead in the ICT 'diffusion' indicators when compared with countries of South Asia and Sub-Saharan Africa. However, the digital divide can further widen if a large section of the population remains unable to participate in the ICT sector. There is an extensive emerging global literature on the use of ICT in the rural sector of developing countries. The studies cover debates on a wide range of applications from rural manufacturing/enterprise (Duncombe 2006; Galloway and Mochrie 2005; Rhodes 2002; Virkkala 2007), rural connectivity and the rural–urban digital divide (Bowonder and Boddu 2005; Gamage and Halpin 2007; Hartviksen et al. 2002; Huggins and Izushi 2002; Narayanan et al. 2005; Nikam et al. 2004; Ramirez 2007), and rural education (Martin et al. 2001; Misra 2006; Yu and Wang 2006), to name a few. There is also a growing literature on the use of ICTs in agriculture and health services in the rural sector. Most of this literature is based on case studies, highlighting the successful, and in some instances the not so successful, transmission mechanisms of ICT engagement in specific areas. The overall context for ICT-driven services in the rural sector remains positive. Duncombe's (2006) work is of specific relevance to the current study. He studies ICTs and the development discourse within the livelihoods framework, using the case of Botswana. He suggests that the direct benefits of ICT for poverty reduction may be just marginal. A model based on the livelihoods approach is proposed, which can be applied to build the social and political assets of the poor and strengthen the mechanisms that favour them, thus expanding the benefits the poor can derive from ICT-led services.
The present study assesses the benefits of ICT services to examine the impact on livelihoods capital and structures of the user population cohort. Within the context of rural India, ICTs are being applied in numerous sectors to overcome the digital divide to reach the poor in mitigating poverty. The objective is to enhance opportunities for the poor. These include improving access to information and health care, empowering them by increasing their use of government services, and providing security through access to micro-finance (PREM).1 A detailed discussion on the potential role of IT in India's development with a special focus on the rural sector is given in Singh (2002, 2004, 2006). The study identifies extensive pathways of ICT deployment in the Indian rural sector. It then examines both static and dynamic efficiency gains, as well as the envisaged reductions in economic inequality and the social benefits that will follow. Singh (2006) defines static gains as those accruing just the once through efficient use of scarce resources. These include increases in operating efficiency, as well as reduction in transaction costs. The transaction costs here are interpreted as costs of opportunism and rent seeking. He notes that opportunism is a consequence of information asymmetries, as pointed out by Williamson (1981). ICT, by correcting the information asymmetry, can remove opportunism and rent seeking, thus bringing down overall transaction costs. Dynamic gains are considered to be those arising from higher growth, thus improving overall consumption both in the present and the future. According to Singh (2006), ICTs can stimulate innovation, which is an important factor for economic growth within the endogenous growth models. Singh's ICT-led dynamic gains postulate can be scrutinised within the debate on growth and poverty reduction since the focus of his work is on ICTs and rural development.
The relation between growth and poverty remains very contentious in the development discourse. Collier (2007) has refocused the debate to the point that it is the 'quantity' of growth and not the 'quality' that matters to the poor. However, the arguments for the quality to be more important for poverty reduction remain strong (Adelman 2000; Bourguignon and Morrisson 1998; Gallup et al. 1999; Thirtle et al. 2001; Timmer 1997). Where can ICT-led growth be placed in the quantity/quality debate to examine its impact on rural development?2 The ICT literature for the rural sector indicates their use as tools to improve productivity and expand market opportunities. A technology-intensive interface, though, is likely to engage the educated and the skilled, for whom the technology adoption curve would be far less steep as compared with that for education-poor and unskilled cohorts. The quantity/quality debate, therefore, is important for ICT-led initiatives for rural development. Pro-poor ICT initiatives that enable the engagement of the poor either through initial training and support, or facilitate the use of pictographic and local dialect software, offer a more inclusive development platform. The ICT projects analysed in Singh's study are based on primary investigations of numerous3 rural IT initiatives in India. Both demand- and supply-side factors of rural ICT uses are considered, though demand-side arguments are grounded in the conceptualisation of the potential benefits if the ICT implementations are successful. The supply-side analyses are based on the primary work. A useful insight into the applications of ICT captures the potential benefits in education, health, market efficiency, employment, rural development, governance reform and entrepreneurship using both demand- and supply-side constructs. The following are some other projects in India where ICTs are entwined in rural programmes (World Bank 2002).
Gujarat's computerised milk collection ensures fair prices and immediate payments to small farmers, who could neither afford the ten-day lag in payments, nor the hardship imposed through underpayments in the old system. The use of handheld computers provided by the InfoDev-sponsored India Health Care Delivery Project in Andhra Pradesh is intended to cut time spent on collecting and registering data. The freed time allows midwives4 to expand administering immunisations, offer advice on family planning, and educate people on mother and child health programmes. The Swayam Krishi Sangam (SKS), a micro-finance institution in Andhra Pradesh, expects to lower the cost of delivering its services by eliminating paperwork, and reducing errors and fraud. The fieldwork by Kumar (2004) and Meera et al. (2004) examines the performance of ICT projects in agriculture. Kumar (2004) has evaluated the financial sustainability of India's largest rural ICT initiative, known as eChoupals. Meera et al. (2004) have examined three rural projects aimed at improving the information delivery systems in agriculture (Gyandoot, Warana and iKisaan). The eChoupals are distinct in their focus on the agricultural sector, providing the necessary crop and market related information to the farmers. The study concludes that eChoupals can be useful and financially viable, provided these are viewed as tools to enable the exchange of information. Meera et al. (ibid.) found that a majority of the primary users were literate, male, young farmers, though the effective reach of a government project in marginal areas to the illiterate and poor is also noted. The study concluded that the investigated projects were overall beneficial to the farmers. However, an area where much work remains to be done is gender participation in rural ICT projects. The study observed poor engagement of women farmers in all three projects. Further case study-based literature on ICT applications presents an overall encouraging picture.
Bowonder and Boddu (2005) examine the role of public–private partnerships to reduce the digital divide in rural Tamil Nadu. Their study reports a healthy uptake of services by a good proportion of the poor, middle and rich households. While some very positive evidence of the benefits being availed by the communities is provided, the overall study is based on the functioning of the providers. Bowonder et al. (2005) attribute the success of ICTs in the traditional leather industry in Kolhapur to two factors: first, to the re-engineering of the design, manufacturing and marketing processes using ICTs and, second, to the systematic training of the local craftsmen to adopt the technology. This indicates the importance of the appropriate receptivity of users for ICT-led initiatives to be successful. The current literature and primary research on ICTs in rural India appears to be expanding at a much higher rate into the supply-side factors than into the investigation of the demand side. While a typology of ICT information benefits is discussed in Duncombe (2006: 88) and beneficiary case studies are quoted within the Indian literature, investigations into factors impeding or enabling demand, and the study of demand itself, remain sparse. This study addresses the gap by grounding the primary research to capture the perceptions and the understanding of the ICT benefits to users in the rural areas. The objective, as noted earlier, is to study the impact of ICT-driven projects on rural poverty. The investigation seeks to explore if ICT can be deployed to enable the improvement of rural human capital and increase participation in market opportunities. Further, what is the evidence at the grassroots level for ICT as enabler of development as well as enhancer of capacity building at the individual, community and societal levels? The next section presents findings of the fieldwork conducted in the Dhar district of Madhya Pradesh and Madhubani district of Bihar.
The Projects: Gyandoot and Drishtee

In January 2000 a government-owned computer network called Gyandoot was launched in Dhar district of Madhya Pradesh. Gyandoot is considered a pioneering experiment in taking ICT to the rural sector in India. The objective is to improve the accessibility and use of ICT services by the rural poor. It is a state-run initiative with private partnership at the delivery stage of the services. Drishtee, on the other hand, is an NGO, founded in 2003. It has adopted a similar model to Gyandoot in facilitating rural ICT-based service delivery through franchising of kiosks on a revenue-sharing basis. Drishtee has expanded its network to several states: Assam, Chhattisgarh, Bihar, Haryana, Madhya Pradesh, Maharashtra, Tamil Nadu and Uttar Pradesh, with over 1,000 kiosks in all. Within the limited scope of this study just one of the networks of Drishtee, operational in Madhubani district of Bihar, is examined. A brief description of the areas where the research was conducted, the Dhar and Madhubani districts, along with some of their special socio-economic characteristics, follows. The objectives and the delivery mechanisms of the projects are then described in detail. This is followed by a detailed discussion and analyses of the findings of the primary research in the next section. Madhya Pradesh and Bihar are among the four most backward states in India.5 Madhya Pradesh, though, has made good overall progress in the last decade. Among these four states it has the highest literacy of 64.1 per cent, with the national average at 65 per cent, while Rajasthan has 61 per cent, Uttar Pradesh 57.4 per cent and Bihar the lowest at 47.5 per cent (Government of India 2001). Further, Madhya Pradesh was one of the seven states in the country that experienced a growth rate of over 5 per cent during the 1990s.6 In addition, state expenditure in social sectors has improved considerably, accounting for almost 40 per cent of the total in 2000.
There is increasing awareness of the challenges facing the government in the delivery of an effective health service. In recent years there has been a concerted effort to expand the provision of public health services and improve their access in both rural and urban areas. Dhar is one of the fifty-two administrative divisions of the state of Madhya Pradesh. It is primarily an agricultural district, with 62 per cent of its land under cultivation and over 83 per cent of its population residing in the rural sector. It has a rich history, making tourism a strong industry. The district accounts for 3 per cent of the state population of 60.3 million. Literacy, at 52.7 per cent (48 per cent rural and 75 per cent urban), is below the state average of 64.1 per cent (NCAER 2004). The state of Bihar exhibits overall decline and poor progress over much of the past three decades. While there has been a distinct policy shift and input into the development of the state in the last two years, Bihar remains a region characterised by rigid multiple social divisions, and some of the lowest development indicators of health, education, per capita income and infrastructure in the country. The state has the highest proportion of rural population, with almost 90 per cent living in rural areas, and rural literacy of just 44 per cent. Furthermore, 49 per cent of the state's rural population lives below the national poverty line. Of particular concern is the lowest female literacy level in the country, at just 33 per cent (28 per cent rural) (Government of India 2001). Madhubani is one of the thirty-eight administrative divisions in the state and an important agricultural district. The population of the district accounts for around 4 per cent of the state's population of 83 million. Over 95 per cent of the population resides in the rural sector.
The literacy rate of the district is 42.35 per cent (57.26 per cent for males and 26.56 per cent for females), which are lower than the respective rates for the state (ibid.). Gyandoot was launched in Dhar on 1 January 2000, as a pioneering experiment taking ICT into the rural sector. The government-sponsored initiative was set up to use innovative e-governance, e-commerce and e-education to enable development programmes. The specific project objectives were: to improve public access to government services, to improve government functioning by introducing higher levels of accountability and transparency, to bridge the digital divide, to facilitate citizen–government partnerships in social projects, and to enhance livelihood opportunities. An intranet kiosk network is the main delivery mechanism for the various services provided. The information kiosks (soochanalaya) were initially set up in twenty-one village centres of the district. Each information kiosk covers twenty to thirty villages, and a population of between 20,000 and 30,000. The kiosks are run by an operator, known as the 'soochak', who is generally a local graduate (minimum education qualification is tenth standard) selected by the Gyandoot committee and then trained by Gyandoot engineers. The operator remains a private entrepreneur and at no stage is he or she an employee of Gyandoot. A participatory process involving the community, government officials and the Gyandoot team was deployed to select the services offered. Most services carry a fixed charge. The key facilities offered by the kiosks include: online registration of applications for land records, caste, income and domicile certificates; online public grievance redress, and information regarding government programmes; and e-mail facilities for social issues and transparency in government. The specific services provided by Gyandoot kiosks are summarised in Table 1.
TABLE 1
Key Features of Gyandoot and Drishtee

Gyandoot:
- Government initiative
- ICTs for rural development
- Kiosk-based service delivery
- Independent kiosk operators
- No revenue sharing with kiosk operator
- Operates in Madhya Pradesh only
- Services:
  Information and service enhancing: market opportunities, government schemes for development, agriculture-related input and advice, matrimonial, Hindi e-mail and Internet use
  Entitlement enabling: land record certificates, caste certificates, domicile certificates, BPL list, exam results
  Capability enhancing: computer courses

Drishtee:
- Not-for-profit organisation
- ICTs for rural development
- Kiosk-based service delivery
- Independent kiosk operators
- Revenue sharing with kiosk operator
- Operates in Assam, Chhattisgarh, Bihar, Haryana, Madhya Pradesh, Maharashtra, Tamil Nadu and Uttar Pradesh
- Services:
  Capability enhancing: computer courses, Web designing, desktop publishing, accountancy, English speaking, and eye care and glasses
  Insurance services and books: life insurance and motor insurance
  Services: digital photography, SIM cards for mobile phones, photocopying and long distance call facilities (PCO)

Sources: Drishtee (http://www.Drishtee.com, 2007); Gyandoot (2007).

Drishtee started its operation in Madhubani in 2002 and has since established 58 information kiosks. An NGO, though also described as 'a commercial organization, with social objectives' (Singh 2006), Drishtee has attempted to replicate the Gyandoot model of developing IT-enabled services in rural areas. Kiosks based on a revenue-sharing model are the modus operandi of Drishtee. The objectives are to promote the use of ICT applications for the social and economic development of the villagers, ICT centres based on the entrepreneurial and service-driven model of Drishtee, linkages between community needs and the ICT centres, and advisory services related to ICT research and community activities.
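The revenue-sharing franchise arrangement described above can be illustrated with a small sketch. This is a hypothetical illustration, not Drishtee's actual accounting: the service categories and monthly takings below are invented for the example, and only the 5 per cent share of total kiosk revenue paid to Drishtee is taken from the paper.

```python
# Hypothetical sketch of a revenue-sharing kiosk franchise.
# Only the 5 per cent share paid to the franchisor comes from the paper;
# the service prices and monthly volumes below are invented.

def kiosk_monthly_split(service_revenues, franchisor_share=0.05):
    """Return (total revenue, amount to franchisor, amount kept by operator)."""
    total = sum(service_revenues.values())
    to_franchisor = total * franchisor_share
    return total, to_franchisor, total - to_franchisor

# Invented monthly takings (in rupees) for one kiosk:
revenues = {
    "e-governance services": 900,
    "computer courses": 1500,
    "photocopying and PCO": 600,
}

total, to_franchisor, to_operator = kiosk_monthly_split(revenues)
print(total, to_franchisor, to_operator)  # 3000 150.0 2850.0
```

The point of the design is that the operator, as the sole investor, keeps 95 per cent of takings, so the kiosk's viability depends directly on local demand for its services.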
Drishtee's services portfolio comprises e-governance services, licences, banking, land records, market information, matrimonial services, online grievance postings, local transport schedules and commercial services. These are rendered to villagers for a very nominal fee. Independent entrepreneurs, who are the sole investors in establishing these centres, run the kiosks. Drishtee in turn trains these entrepreneurs in running the centres, and assists in networking and in establishing contacts with the relevant officials for the smooth running of the e-governance services. The entrepreneurs pay Drishtee 5 per cent of their total revenue from the kiosk services.

Fieldwork Analysis

One hundred households from each survey area, comprising both users and non-users of the Gyandoot and Drishtee services, were interviewed. Efforts were made to select households from three economic categories: those below the poverty line, marginal and 'comfortable' non-poor households. A comprehensive primary dataset comprising two distinct categories of information, in terms of both methods deployed and content, was obtained from each subject interviewed. The first category comprised general quantitative information on the household members' literacy levels, livelihoods, ownership of economic assets and other demographic indices. The second set contained information on the usage of Gyandoot/Drishtee services and interviewee views on those services. Semi-structured, open-ended questions focusing on the subject's understanding of poverty and its causes were used to obtain primary qualitative data in the second category. The fieldwork focused on the:

1. socio-economic status of the beneficiaries and the non-beneficiaries—to find whether the poorest are able to access the benefits, which groups are least able to participate, and which groups are able to avail themselves of the maximum benefits;

2. nature of the benefits being availed—to study whether these are expanding capabilities (through education, health services and skill enhancement); expanding market opportunities (through enabling access to buyers' and sellers' markets, market information and agriculture-related information); empowering the participants through knowledge and information; or expanding/facilitating entitlements; and

3. usefulness of the services being provided, and the awareness levels of these services in the interviewed cohort.

The purpose of the investigation is to explore whether and how the Gyandoot and Drishtee initiatives are meeting their objectives, and, most importantly, who the beneficiaries are. It is expected that the outcome of this primary research will be three-fold. First, it will further the understanding of the distribution and the type of the benefits of ICT-driven projects in the rural sector. Second, it will identify which mode of delivery may be better suited for reaching the poorest cohorts. Third, it will assist in exploring the linkages between such projects and rural multidimensional poverty. Additionally, it is also expected to provide evidence-based research outcomes showing which of the Gyandoot/Drishtee services are working and which are not, together with some indication of the causative and enabling factors.

Gyandoot Analysis

The spread of users and non-users in the surveyed sample was around 40 and 60 per cent (44 users and 58 non-users) respectively. The data for the first category gives user and non-user profiles in terms of income, education and occupation. The BPL category indicates those below the poverty line and APL represents those above. The poverty line here is the national poverty line of Rs 365 per month (Rs 12 per day), which is approximately the equivalent of the World Bank's USD 1 a day (Purchasing Power Parity [PPP], 1993 prices) poverty line.
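As a quick sanity check on the poverty-line figures quoted above, a minimal sketch; the amounts are taken from the text, and the 30-day month used for the conversion is my assumption:

```python
# Sketch of the poverty-line arithmetic quoted in the text: the national
# line of Rs 365 per month corresponds to roughly Rs 12 per day, the
# approximate rupee equivalent of the World Bank's USD 1-a-day (1993 PPP)
# line. A 30-day month is assumed for the conversion.
monthly_line_rs = 365
daily_line_rs = monthly_line_rs / 30
print(round(daily_line_rs, 2))  # roughly Rs 12 per day
```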
User and Non-user Profiles

As shown in Table 2, the user group comprises a higher proportion of those above the poverty line (80 per cent) than the non-user group (66 per cent). The educational levels for the user group show only 15 per cent of users to be illiterate, compared with 26 per cent of non-users. Large farmer households account for nearly half of the users (49 per cent), while the landless and the medium farmers make up almost two-thirds of the non-users. Overall, those with higher levels of literacy and income are accessing the Gyandoot services more than those with lower literacy and incomes. Causation between the indicators—literacy/educational level, income and occupation—was inconclusive. A number of those above the poverty line and in the large farmer category were also in the illiterate category or had very little formal education. At the same time, a number of interviewees who had up to secondary level education were landless and working as casual labour. This conforms to the wider discourse on factor market imperfections and the presence of distortions in the rural labour markets in developing countries (Bhagawati 1971; Dhar 2007; Krishna et al. 2002; Tiwari 2001). Two other trends emerged regarding the education levels of the interviewees' households. First, almost all female adults from user and non-user households were illiterate, or at best educated up to third standard. Second, on a more encouraging note, all children (male and female) were enrolled in primary or secondary school. Of the total sample surveyed, none were female users, though two of the fifteen functional kiosks were run by women.
TABLE 2
Gyandoot User and Non-user Population Profile

                                User (%) n = 44    Non-user (%) n = 66
Distribution by poverty line
  BPL*                          20                 34
  APL                           80                 66
Distribution by education
  Illiterate                    15                 26
  Primary                       31                 32
  Middle                        16                 19
  Secondary                     20                 12
  Pre-university                 7                  5
  Graduate                       9                  2
  Technical                      2                  4
Distribution by occupation
  Landless                      12                 21
  Marginal farmer/tenant         2                  7
  Medium farmer                 35                 43
  Large farmer                  49                 26
  Other                          2                  3

Source: Author's fieldwork data, 2007.
Note: *Those below the poverty line are given a red book by the state government.

Further analysis of the outcomes noted is given within the context of the wider discussion on ICTs and rural poverty reduction in a later section of the article.

Drishtee Analysis

The proportion of users and non-users in the surveyed sample in the case of Drishtee was around 64 and 36 per cent (64 users and 36 non-users) respectively. The much larger proportion of users is significant given that the overall uptake of Drishtee services is medium to low. A partial explanation is provided by the high use of the digital photography offered at the kiosks. This service, although outside the original remit of the ICT-enabled Drishtee services, is much in demand and provides lucrative returns to the kiosk operator (Rs 25 per picture). As with the Gyandoot data, the data for the first category gives user and non-user profiles in terms of income, education and occupation. The BPL and APL categories, and the poverty line definition, are the same as those used for the Gyandoot fieldwork. Those below the poverty line are given a red book by the state government. The profiles of the two groups (Table 3) show a higher proportion of users above the poverty line (69 per cent) than non-users (60 per cent). The educational levels show 54 per cent of users to be graduates compared to 28 per cent of non-users. Almost half of the users (49 per cent) belong to the 'other' category of occupation. This category is dominant among non-users as well.
The user 'other' category mostly comprises small businessmen/traders, retired government officials and teachers. The non-user 'other' category too includes petty traders. The cohort interviewed for the Drishtee survey appears to be concentrated around the township of Madhubani, where its main operations are based. The surveyed sample indicates that those with higher levels of literacy and income are accessing Drishtee services more than those with lower literacy and incomes. Occupational distribution trends are far less conclusive. A closer examination of the type of services offered and the usage pattern (discussed in detail in the next section) offers some explanation as to why there may not be any usage trends linked with occupation. Digital photography, an additional service to the ten provided by Drishtee, has a high uptake—sometimes it is the only service being used—thus raising the overall number of users. The digital photography service itself has multiple uses, ranging from the simple desire to have family pictures taken at affordable rates, and photographs for matrimonial purposes, to photographs for official documentation and records. Both the need and desire components of the service are more 'human being' facing than connected with a particular occupation. Similar to the Gyandoot trends, all female adults from user and non-user households were illiterate. Similarly, all children (male and female) were enrolled in primary or secondary school. Of the total sample surveyed, none were female users—even in the widowed households, male children were the users. The outcomes noted are further analysed in the wider discussion on ICTs and rural poverty reduction later in this article.
TABLE 3
Drishtee User and Non-user Population Profile

                                User (%) n = 64    Non-user (%) n = 36
Distribution by poverty line
  BPL                           31                 40
  APL                           69                 60
Distribution by education
  Primary                        2                  0
  Middle                         2                 14
  Secondary                     26                 33
  Pre-university                16                 25
  Graduate                      54                 28
Distribution by occupation
  Widow                          5                  5
  Artisan                        9                  8
  Landless                      15                 14
  Marginal farmer/tenant        11                 24
  Medium farmer                  9                 11
  Large farmer                   2                  0
  Other                         49                 38

Source: Author's fieldwork data, 2007.

Awareness of Services, Usage and Users' Views on Gyandoot and Drishtee

The awareness and usage levels of Gyandoot and Drishtee services in the surveyed population of Dhar and Madhubani are indicated in Figures 1 and 2. These services are based on Table 1 and represented by G1 to G16 for Gyandoot and D1 to D10 for Drishtee. G1 is information on agricultural markets and commodities, G2 caste certificates, G3 income certificates, G4 domicile certificates, G5 landlords' passbook of land rights and loans, G6 land records (Khasra Nakal), G7 Hindi e-mail, G8 e-education, G9 advisory module, G10 rural news, G11 rural market information, G12 matrimonial services, G13 employment news, G14 BPL family list, G15 exam results via the Internet, and G16 information on other government schemes.

FIGURE 1: Awareness Levels of Gyandoot Services. Source: Based on the author's fieldwork data.
FIGURE 2: Awareness Levels of Drishtee Services. Source: Based on the author's fieldwork data.

The overall awareness of Gyandoot services ranges from satisfactory to very low. The usage trends are far grimmer than the awareness levels. Just two or three of the sixteen noted services were being used. The most used service was G6: providing certificates for land records (Khasra Nakal). Users found the maximum benefit in it in terms of savings in time and money. The kiosks are able to provide the documentation for a minimal fixed fee. This not only saves time, effort and costs in commuting to the nearest government office, but also avoids rent seeking and opportunism by officials.
The amount paid to the district officer varied from Rs 100 to Rs 500 per record, as compared with the fixed amount of Rs 15 paid to the Gyandoot kiosk. Singh (2006), as noted earlier in the article, explains this as a static gain accruing through the more efficient use of scarce resources. The usage pattern clearly indicates uptake of entitlement-enabling services, in particular physical entitlement. Land record certificates are used to confirm landownership. The entitlement is then used to avail a range of benefits and subsidised services. These include banking and financial assistance at concessional rates, subsidised agricultural and infrastructure inputs, as well as numerous social welfare measures. The human capital and capability-enhancing services listed in Gyandoot as e-education, health care and advice/information-related modules were found to have negligible uptake. The main reason for this, as emerged from the survey findings, was the difficulty in comprehending the replacement of the human interface—a medically trained person—with a 'machine'. Information services related to market opportunities, after an initial healthy usage level, appeared to be much in decline through a possible crowding-out effect of mobile technology. Information on current market rates is disseminated widely even if just one person from the village, or a relative from another village, visits the market. The information is communicated and shared in real time over mobile phones, which have a high penetration in the region. Market information through kiosks, on the other hand, is not always current, depending on when it is updated. The more efficient technology is therefore satisfying the demand for market information, thus diminishing the use of the kiosks for this service. In addition, all users and non-users reported poor information and publicity of Gyandoot services.
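The savings implied by these figures can be made concrete with a small illustrative calculation; all amounts are from the text, and the variable names are my own:

```python
# Illustrative comparison, using the amounts reported in the text, of what
# a villager pays for a land record: Rs 100-500 informally at the district
# office versus a fixed Rs 15 at a Gyandoot kiosk.
KIOSK_FEE_RS = 15
INFORMAL_PAYMENT_RANGE_RS = (100, 500)

# Saving per record at each end of the reported range.
savings_range = tuple(paid - KIOSK_FEE_RS for paid in INFORMAL_PAYMENT_RANGE_RS)
print(savings_range)  # (85, 485): Rs 85-485 saved per record
```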
On numerous occasions the survey team found themselves acting as ambassadors and propagators of the project. Drishtee services D1 to D11 represent: basic computing (D1), software programming (D2), Web designing (D3), desktop publishing (D4), accountancy (D5), English speaking (D6), eye care and glasses (D7), books/publications (D8), life insurance (D9), motor insurance (D10) and digital photography (D11). The awareness of Drishtee services ranges from very satisfactory—close to 60 per cent for two of its services, basic computer training and digital photography—to very low for the remaining services. The two high-use services appear to be the main income earners for the kiosk operator. The uptake of the other services is low to very poor. Digital photography shows a high uptake of nearly 58 per cent of the users, such that 25 per cent of Drishtee users use the kiosk only for that particular service. It offers photographs that can be viewed before a print is made available at Rs 25 per piece. While there is clearly a demand for this, and it provides a good return to the kiosk operator, it is neither a Drishtee-listed service nor does it fit into the development programme. It does not enhance capabilities such that the users' opportunities in life are improved; nor does it strengthen livelihood outcomes. It does perhaps fulfil components of subjective well-being and happiness through ownership of photographs of loved ones. In addition, it fulfils the necessity of photographs for documentation and records. The healthy uptake of computer training and of learning the English language (spoken and written) at Rs 1,000 to Rs 5,000 indicates a positive trend for the educated in the rural sector. Much of the IT-led progress in southern India is attributed to the availability of skilled labour with linguistic competencies. Thus, skill enhancement with English language proficiency in rural Bihar can act as a stimulant to IT services there as well.
Gyandoot and Drishtee: ICTs in the Rural Sector and Poverty

The mission statements and objectives of both Gyandoot and Drishtee focus on the use of ICTs for the socio-economic development of rural communities. Both projects deploy a similar kiosk-based network run by independent operators. One is a state-led initiative of the government of Madhya Pradesh, while the other is a non-profit organisation. The main differences in the modus operandi of the two initiatives lie in:

1. the selection of the types of services offered at the kiosks; and
2. the revenue-sharing model of Drishtee.

Gyandoot services G1 to G16, as noted earlier, can be categorised into:

1. information and service enhancing: market opportunities, government schemes for development, agriculture-related input and advice, matrimonial, Hindi e-mail and Internet use;
2. entitlement enabling: land record certificates, caste certificates, domicile certificates, BPL list, exam results; and
3. capability enhancing: computer courses.

As noted earlier, the uptake of the entitlement-enabling services is the highest. The capability-enhancing service is used by 14- to 18-year-olds who learn computing skills certified by the government at subsidised prices. The entitlement-enabling services are geared towards the rural population, and also the socially and economically backward classes. Drishtee services too can be categorised into:

1. capability enhancing: computer courses, Web designing, desktop publishing, accountancy, English speaking and eye care;
2. insurance services and books; and
3. services such as digital photography, SIM cards for mobile phones, photocopying and long-distance call facilities (PCO).

The highest uptake was shown to be for computer courses and services, in particular digital photography. Both services are much needed, and respond to an existing demand as indicated by the interviewees.
Drishtee does not offer any entitlement-enabling services, either directly through the issue of various certificates, as in Gyandoot, or indirectly by facilitating communication between the issuing authorities and users. Drishtee is constrained in issuing any type of official document as it has not been able to acquire the necessary authorisation to issue certificates on behalf of the state. However, all other services provided by Drishtee kiosk operators at market rates—there are no subsidies—appear to target the literate population living in the small townships with sufficient disposable incomes to pay for them. The services lack provision to include the socially and economically backward communities in rural Madhubani. The revenue-sharing model of Drishtee partially explains the absence of services for the poor communities. While its kiosk operators are independent local entrepreneurs, unlike in the Gyandoot model they pay 5 per cent of their income to Drishtee. Hence, unless the revenue from a service is sufficient to meet the costs, plus some profit in addition to the 5 per cent commission, the service is not financially sustainable. The type of services and the mode of service delivery of the two projects target different cohorts of user populations. Gyandoot soochaks have the advantage over Drishtee kiosk operators of having authorised access to government departments. Overall, the Gyandoot initiative offers a more socio-economically inclusive format of services focused on the rural population. Drishtee services are aimed more towards users with sufficient literacy skills and incomes. The absence of any development-oriented services was noted by the interviewees. In terms of users' views of the two projects, in the case of Gyandoot, despite a clearly pro-poor agenda, there is poor information about the project and the various schemes offered within it.
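The sustainability condition described above (service revenue must cover the operator's costs plus Drishtee's 5 per cent share) can be sketched as follows; the function and every number in the example are illustrative assumptions, not figures drawn from the fieldwork:

```python
def service_sustainable(price_rs, uses_per_month, monthly_cost_rs,
                        commission_rate=0.05):
    """Illustrative check of the Drishtee revenue-sharing condition:
    the operator keeps 95 per cent of revenue and must still cover costs.
    All inputs are hypothetical figures, not survey data."""
    revenue = price_rs * uses_per_month
    operator_margin = revenue * (1 - commission_rate) - monthly_cost_rs
    return operator_margin > 0

# A high-demand service (e.g. photography at Rs 25 per picture) clears the
# bar; a rarely used one at the same price does not.
print(service_sustainable(25, 120, 1500))  # True
print(service_sustainable(25, 10, 1500))   # False
```

This is why, under a revenue-sharing model, low-demand pro-poor services are the first to disappear from the kiosk's portfolio.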
Where the soochak has exhibited enterprise, there is better information about and dissemination of the services. Overall, those with higher levels of income, literacy and landownership are accessing the Gyandoot services more than those with lower literacy, incomes and landownership. Further, there were no female users of Gyandoot services. In its current format of delivery, it was unable to effectively engage the economically and socially disadvantaged groups. Again, despite a strong supply-side market presence underpinned by a robust rationale for inclusive development, the outcome is far from being achieved. Drishtee users, on the other hand, reported higher levels of awareness of its services, though services other than those under the Drishtee remit, such as digital photography, have skewed the awareness data. When the usage pattern is disaggregated for each service, the uptake too is limited to just two services. Drishtee's service remit in Madhubani remains distant from its vision of 'ICT for Development' and its mission 'to understand, promote and synergise the ICTs for socio-economic development of rural community'. Some noteworthy achievements of both the Gyandoot and Drishtee initiatives deserve mention. First, the model itself is a unique experiment of engagement with the local community through a private, locally selected individual to deliver the services. This not only encourages entrepreneurship in the rural sector, but also provides a much-needed stimulant in the employment market for the rural educated labour, albeit by a small factor. Second, the near elimination of hardships for the rural poor in terms of the costs—monetary and time, as well as bribes to officials for obtaining the land record certificates in Gyandoot—was immensely appreciated by users.
Third, the provision of IT skill enhancement services, as well as services fulfilling subjective well-being and documentation needs through digital photography, in rural townships of Bihar is a progressive step that has the potential to feed into the larger labour market. Possible transmission mechanisms of ICT benefits, both direct and indirect, to the rural sector are discussed in Tiwari (2006). The direct benefits are postulated via the deployment of ICT to enable improvement of rural human capital for poverty reduction. The indirect benefits are conceptualised through increased participation of the rural labour in market opportunities made possible by better literacy and skills through ICT. In addition, by improving the skill base of rural labour through higher literacy and better health care, ICT can stimulate the economy to generate employment opportunities. Also, by helping to increase participation and empowerment of excluded groups, it has the potential to reduce income inequality. The findings of the current study of Gyandoot and Drishtee indicate the presence of mechanisms to bridge the digital divide and improve social poverty via the education and health care portals in the projects. The demand-side factors, however, reveal a different story. Services bridging the digital divide and reducing social poverty were found to have a low to negligible uptake. The main cause for the market failure of these services is attributed to incomplete and absent market information. The finding contests the assumption that ICT can by itself reduce the digital divide and enable market participation of the rural population. While the potential for ICTs to bridge the digital divide and improve rural human capital remains undisputed, it is unlikely that the outcome can be realised without the appropriate stimulation of the demand-side factors. Drishtee's healthy uptake of the computer courses is capability enhancing for the educated cohort.
There are positive spillovers in terms of better employment opportunities and an expanding availability of skilled labour in the rural sector. It can also be argued that by raising the skill level of the already educated, it is widening the digital divide between the educated and the illiterate groups in rural Madhubani.

Conclusion

Gyandoot and Drishtee are two initiatives implemented to enable the use of ICT for rural development. Both projects deploy a similar kiosk-based network run by independent operators. One is a state-led initiative of the government of Madhya Pradesh, while the other is an NGO. In addition, Drishtee kiosks run on a revenue-sharing model. The potential of the Gyandoot model to engage the socially and economically backward communities in an inclusive development format through education and well-being is robustly convincing. The outcomes, though, are far from being realised. Based on the information gathered through the field survey and the users' views, there is evidence of market imperfections through sluggish demand-side factors and information asymmetries. This appears to be the main cause impeding the uptake of the majority of the services offered by Gyandoot. Drishtee services in Madhubani, on the other hand, appear to target a cohort with sufficient educational levels and income. In this respect Drishtee initiatives are not engaging the socially and economically backward communities in rural Madhubani. Just two of the eleven services—computer training and digital photography—are shown to have a high uptake, while the others remain sparsely used. Areas where there has been notable success through the Gyandoot initiative are in providing invaluable physical entitlement-enabling services, and in introducing a unique public–private partnership encouraging entrepreneurship in the local economy. The latter has the potential to act as a stimulant to the rural labour market for skilled labour.
Working on a similar model, Drishtee has also encouraged entrepreneurship in rural Madhubani. The linkages between the other Gyandoot services of education and health, community engagement and the wider debate on entitlements and capabilities remain weak. Information asymmetries, perceptual gaps and the poor receptive capacity of users emerged as the main causes of the poor uptake of services. These could be overcome through a concerted effort to inform and involve users in the delivery mechanisms of these services. Overall, the uptake of Gyandoot and Drishtee services is not at an optimum level. While the potential for ICT to bridge the digital divide and improve rural human capital remains undisputed, it is unlikely that this outcome can be realised with the current formats of delivery and without appropriate stimulation of the demand-side factors in the rural sector.

NOTES

1. Poverty Reduction and Economic Management Network (World Bank, 2002).
2. For a detailed discussion on whether ICT-led growth can be pro-poor, see Tiwari (2006).
3. The following specific IT initiatives are examined in Singh's (2006) study: Drishtee, Aksh, n-Logue, ITC (eChoupals), TARAhaat and Akshaya.
4. Midwives provide most health services in rural India, covering up to 5,000 population per midwife across multiple villages (World Bank 2001).
5. BIMARU is an acronym for Bihar, Madhya Pradesh, Rajasthan and Uttar Pradesh. The word in Hindi means 'ill'.
6. The others were: Gujarat 9.6 per cent, Maharashtra 8 per cent, West Bengal 6.9 per cent, Tamil Nadu 6.2 per cent, Rajasthan 5.9 per cent and Kerala 5.8 per cent (Government of India 2001).

REFERENCES

Adelman, I. (2000), 'Fifty Years of Economic Development: What Have We Learnt?' Paper presented at the Annual Bank Conference on Development Economics—Europe, Paris, 26–28 June.
Baulch, R. and N. McCullock (2000), 'Tracking Pro-poor Growth', in ID21 Insights No. 31. Sussex: Institute of Development Studies.
Bhagawati, J.
(1971), 'The Generalized Theory of Distortions and Welfare', in J. Bhagawati, R.A. Mundell and J. Vanek, eds, Trade, Balance of Payments and Growth: Essays in International Economics in Honour of Charles P. Kindleberger. Amsterdam: North Holland Publishing.
Bourguignon, F. and C. Morrisson (1998), 'Inequality and Development: The Role of Dualism', Journal of Development Economics, 14(2), pp. 233–57.
Bowonder, B. and G. Boddu (2005), 'Internet Kiosks for Rural Communities: Using ICT Platforms for Reducing Digital Divide', International Journal of Services Technology and Management, 6(3–4), pp. 356–78.
Bowonder, B., S. Sadhulla and A. Jain (2005), 'Evolving an ICT Platform for a Traditional Industry: Transforming Artisans into Entrepreneurs', International Journal of Services Technology and Management, 6(3–4), pp. 379–401.
Chapman, R., T. Slaymaker and T. Young (2003), 'Livelihoods Approaches to Information and Communication in Support of Rural Poverty Elimination and Food Security'. ODI-DFID-FAO, http://www.odi.org.uk/rapid/publications/Documents/SPISSL_WP_Complete.pdf.
Collier, P. (2007), The Bottom Billion: Why the Poorest Countries Are Failing and What Can Be Done About It. Oxford: Oxford University Press.
Datt, G. and M. Ravallion (2002), 'Is India's Economic Growth Leaving the Poor Behind?' Mimeograph, World Bank, Washington, DC.
Dhar, B. (2007), 'Agricultural Trade Protection: A Perspective from India'. ARTNet Policy Brief No. 11, January.
Duncombe, R. (2006), 'Using the Livelihoods Framework to Analyze ICT Applications for Poverty Reduction through Microenterprise', Information Technologies and International Development, 3(3), pp. 81–100.
Galloway, L. and R. Mochrie (2005), 'The Use of ICT in Rural Firms: A Policy-oriented Literature Review', Journal of Policy, Regulation and Strategy for Telecommunications, 7(3), pp. 33–46.
Gallup, J., S. Radelet and A. Warner (1999), 'Economic Growth and the Income of the Poor'.
Consulting Assistance on Economic Reform II Discussion Paper 36, Harvard Institute of International Development, Harvard, MA.
Gamage, P. and E.F. Halpin (2007), 'E-Sri Lanka: Bridging the Digital Divide', Electronic Library, 25(6), pp. 693–710.
Government of India (2001), Central Statistical Organisation, New Delhi.
Hanna, N.K. (2008), Transforming Government and Empowering Communities. Washington, DC: World Bank.
Hartviksen, G., S. Akselsen and A.K. Eidsvik (2002), 'MICTS: Municipal ICT Schools—A Means for Bridging the Digital Divide Between Rural and Urban Communities', Education and Information Technologies, 7(2), pp. 93–109.
Heeks, R. (2002), 'i-Development not e-Development: Special Issue on ICTs and Development', Journal of International Development, 14(1), pp. 1–11.
Hongladarom, S. (2004), 'Making Information Transparent as a Means to Close the Global Digital Divide', Minds and Machines, 14(1), pp. 85–99.
Huggins, R. and H. Izushi (2002), 'The Digital Divide and ICT Learning in Rural Communities: Examples of Good Practice Service Delivery', Local Economy, 17(2), pp. 111–22.
Indjikian, R. and D.S. Siegel (2005), 'The Impact of Investment in IT on Economic Performance: Implications for Developing Countries', World Development, 33(5), pp. 681–700.
Kakwani, N. and E. Pernia (2000), 'What is Pro-Poor Growth?' Asian Development Review, 18(1), pp. 1–16.
Krishna, K., A. Mukhopadhya and C. Yavas (2002), 'Trade with Labour Market Distortions and Heterogeneous Labour: Why Trade can Hurt'. NBER Working Paper No. 9086, Pennsylvania State University.
Kumar, R. (2004), 'eChoupals: A Study on the Financial Sustainability of Village Internet Centers in Rural Madhya Pradesh', Information Technologies and International Development, 1(3), pp. 45–73.
Manning, R. (2007), 'Pro-poor Growth: Negotiating Consensus on a Contentious Issue', Development, 50(2), pp. 42–47.
Martin, L.M., A. Halstead and J.
Taylor (2001), 'Learning in Rural Communities: Fear of Information Communications Technology Leading to Lifelong Learning?' Research in Post-compulsory Education, 6(3), pp. 261–76.
McNamara, K.S. (2003), 'Information and Communication Technologies, Poverty and Development: Learning from Experience'. Background paper for the InfoDev Annual Symposium, 9–10 December, Geneva, Switzerland.
Meera Shaik, N., A. Jhamtani and D.U.M. Rao (2004), Network Paper No. 135. London: ODI.
Misra, P.K. (2006), 'E-strategies to Support Rural Education in India', Educational Media International, 43(2), pp. 273–83.
Narayanan, A., A. Jain and B. Bowonder (2005), 'Providing Rural Connectivity Infrastructure: ICT Diffusion through Private Sector Participation', International Journal of Services Technology and Management, 6(3–4), pp. 416–36.
National Council of Applied Economic Research (NCAER) (2004), West and Central India Human Development Report. New Delhi: Oxford University Press.
Nikam, K., A.C. Ganesh and M. Tamizhchelvan (2004), 'The Changing Face of India: Bridging the Digital Divide', Library Review, 53(4), pp. 213–19.
Papaioannou, S. and S. Dimelis (2007), 'Information Technology as a Factor of Economic Development: Evidence from Developed and Developing Countries', Economics of Innovation and New Technology, 16(3), pp. 179–94.
Paternostro, S., A. Rajaram and E.R. Tiongson (2007), 'How Does the Composition of Public Spending Matter?' Oxford Development Studies, 35(1), pp. 47–82.
Ramirez, R. (2007), 'Appreciating the Contribution of Broadband ICT with Rural and Remote Communities: Stepping Stones Toward an Alternative Paradigm', The Information Society, 23(2), pp. 85–94.
Ravallion, M. (2004), 'Pro-Poor Growth: A Primer'. World Bank Policy Research Working Paper Series No. 3242, World Bank, Washington, DC.
Rhodes, J.
(2002), ‘The Development of an Integrated E-commerce Marketing Framework to Enhance Trading Activities for Rural African Communities’, Perspective on Global Development and Technology, 1(3–4), pp. 269–93. Singh, N. (2002), ‘Information Technology as an Engine of Broad-based Growth in India’,, in P. Banerjee and F.J. Ritcher, eds, The Information Economy in India. London: Palgrave Macmillan, pp. 24–57. ——— (2004), ‘Information Technology and Rural Development in India’, in B. Debroy and A.U. Khan, Integrating the Rural Poor into Markets. New Delhi: Academic Foundation, pp. 221–46, ——— (2006), ICTs and Rural Development in India. Santa Cruz: University of California. Thirtle C., I. Irz, L. Lin, V. McKenzie-Hill and S. Wiggins (2001), ‘The Relationship between Changes in Agricultural Productivity and the Incidence of Poverty in Developing Countries’. Department for International Development Report No.7 946. Department for International Development, London. Timmer P. (1997), ‘How Well Did the Poor Connect to the Growth Process?’ Consulting Assistance on Economic Reform II Discussion Paper No. 17, Harvard Institute of International Development, Harvard, MA. Tiwari M. (2001), ‘Rural Poverty and the Role of Nonfarm Sector in Economic Development: The Indian Experience’. Ph.D. thesis, University of Southampton, Southampton. ——— (2006), ‘An Overview of Growth in the ICT Sector in India: Can This Growth be Pro-poor?’ World Review of Science Technology and Sustainable Development, 3(4), pp. 298–315. Tongia, R., E. Subrahmanian and V.S. Arunachalam (2005), Information and Communication Technologies for Sustainable Development: Defi ning a Global Research Agenda. Bangalore: Allied Publishers. UNCTAD (2003), E-Commerce and Development Report. New York and Geneva: United Nations. Virkkala, S. (2007), ‘Innovation and Networking in Peripheral Areas: A Case Study of Emergence and Change in Rural Manufacturing’, European Planning Studies, 15(4), pp. 511–29. Williamson, O.E. 
(1981), ‘The Economics of Organization: The Transaction Cost Approach’, American Journal of Sociology, 87(3), pp. 548–77. World Bank (2001a), Country Assistance Strategy, India. Washington, DC: World Bank. ——— (2001b), World Development Report: Attacking Poverty. Washington, DC: World Bank. ——— (2002), Using Information and Communications Technology to Reduce Poverty in Rural India. Washington, DC: World Bank,. http://www1.worldbank.org/prem, accessed October 2007. ——— (2006), Information and Communications for Development 2006. Washington, DC: World Bank. Yu, S. Q. and M.J. Wang (2006), ‘Modern Distance Education Project for the Rural Schools of China: Recent Development and Problems’, Journal of Computer Assisted Learning, 22(4), pp. 273–83. Sci tech cs stsTiwari edit << /ASCII85EncodePages false /AllowTransparency false /AutoPositionEPSFiles true /AutoRotatePages /None /Binding /Left /CalGrayProfile (Dot Gain 20%) /CalRGBProfile (sRGB IEC61966-2.1) /CalCMYKProfile (U.S. Web Coated \050SWOP\051 v2) /sRGBProfile (sRGB IEC61966-2.1) /CannotEmbedFontPolicy /Error /CompatibilityLevel 1.4 /CompressObjects /Tags /CompressPages true /ConvertImagesToIndexed true /PassThroughJPEGImages true /CreateJDFFile false /CreateJobTicket false /DefaultRenderingIntent /Default /DetectBlends true /ColorConversionStrategy /LeaveColorUnchanged /DoThumbnails false /EmbedAllFonts true /EmbedJobOptions true /DSCReportingLevel 0 /EmitDSCWarnings false /EndPage -1 /ImageMemory 1048576 /LockDistillerParams false /MaxSubsetPct 100 /Optimize true /OPM 1 /ParseDSCComments true /ParseDSCCommentsForDocInfo true /PreserveCopyPage true /PreserveEPSInfo true /PreserveHalftoneInfo false /PreserveOPIComments false /PreserveOverprintSettings true /StartPage 1 /SubsetFonts true /TransferFunctionInfo /Apply /UCRandBGInfo /Preserve /UsePrologue false /ColorSettingsFile () /AlwaysEmbed [ true ] /NeverEmbed [ true ] /AntiAliasColorImages false /DownsampleColorImages true /ColorImageDownsampleType /Bicubic 
/ColorImageResolution 300 /ColorImageDepth -1 /ColorImageDownsampleThreshold 1.50000 /EncodeColorImages true /ColorImageFilter /DCTEncode /AutoFilterColorImages true /ColorImageAutoFilterStrategy /JPEG /ColorACSImageDict << /QFactor 0.15 /HSamples [1 1 1 1] /VSamples [1 1 1 1] >> /ColorImageDict << /QFactor 0.15 /HSamples [1 1 1 1] /VSamples [1 1 1 1] >> /JPEG2000ColorACSImageDict << /TileWidth 256 /TileHeight 256 /Quality 30 >> /JPEG2000ColorImageDict << /TileWidth 256 /TileHeight 256 /Quality 30 >> /AntiAliasGrayImages false /DownsampleGrayImages true /GrayImageDownsampleType /Bicubic /GrayImageResolution 300 /GrayImageDepth -1 /GrayImageDownsampleThreshold 1.50000 /EncodeGrayImages true /GrayImageFilter /DCTEncode /AutoFilterGrayImages true /GrayImageAutoFilterStrategy /JPEG /GrayACSImageDict << /QFactor 0.15 /HSamples [1 1 1 1] /VSamples [1 1 1 1] >> /GrayImageDict << /QFactor 0.15 /HSamples [1 1 1 1] /VSamples [1 1 1 1] >> /JPEG2000GrayACSImageDict << /TileWidth 256 /TileHeight 256 /Quality 30 >> /JPEG2000GrayImageDict << /TileWidth 256 /TileHeight 256 /Quality 30 >> /AntiAliasMonoImages false /DownsampleMonoImages true /MonoImageDownsampleType /Bicubic /MonoImageResolution 1200 /MonoImageDepth -1 /MonoImageDownsampleThreshold 1.50000 /EncodeMonoImages true /MonoImageFilter /CCITTFaxEncode /MonoImageDict << /K -1 >> /AllowPSXObjects false /PDFX1aCheck false /PDFX3Check false /PDFXCompliantPDFOnly false /PDFXNoTrimBoxError true /PDFXTrimBoxToMediaBoxOffset [ 0.00000 0.00000 0.00000 0.00000 ] /PDFXSetBleedBoxToMediaBox true /PDFXBleedBoxToTrimBoxOffset [ 0.00000 0.00000 0.00000 0.00000 ] /PDFXOutputIntentProfile () /PDFXOutputCondition () /PDFXRegistryName (http://www.color.org) /PDFXTrapped /Unknown /Description << /ENU (Use these settings to create PDF documents with higher image resolution for high quality pre-press printing. The PDF documents can be opened with Acrobat and Reader 5.0 and later. These settings require font embedding.) 
Evolution of photography in maxillofacial surgery: from analog to 3D photography – an overview

Heidrun Schaaf, Christoph Yves Malik, Hans-Peter Howaldt, Philipp Streckbein

Department of Maxillo-Facial Surgery, University Hospital Giessen and Marburg GmbH, Giessen, Germany

Correspondence: Heidrun Schaaf, University Hospital Giessen and Marburg GmbH, Department of Maxillo-Facial Surgery, Klinikstrasse 29, 35385 Giessen, Germany; Tel +49 641/99 46271; Fax +49 641/99 46279; email heidrun.schaaf@uniklinikum-giessen.de, heidrun.schaaf@gmx.net

Clinical, Cosmetic and Investigational Dentistry 2009:1 39–45. © 2009 Schaaf et al, publisher and licensee Dove Medical Press Ltd. This is an Open Access article which permits unrestricted noncommercial use, provided the original work is properly cited.

Abstract: In maxillofacial surgery, digital photographic documentation plays a crucial role in clinical routine. This paper gives an overview of the evolution from analog to digital photography and highlights the integration of digital photography into daily medical routine. The digital workflow is described, and we show that image quality is improved by systematic use of photographic equipment and post-processing of digital photographs. One of the advantages of digital photography is the possibility of immediate reappraisal of photographs for alignment, brightness, positioning, and other photographic settings, which helps to avoid errors and allows a photograph to be retaken instantly if necessary.
Options for avoiding common mistakes in clinical photography are also described, and recommendations are made for post-processing of pictures, data storage, and data management systems. The new field of 3D digital photography is described in the context of cranial measurements.

Keywords: digital, photography, documentation, dental, 3D imaging

Introduction

As in most technical and medical fields, impressive developments have occurred in recent years in the technological aspects of digital photography and the possibilities of digital documentation. Digital medical photography allows a professional view of novel clinical cases in cranio-maxillofacial surgery. Visualization can be more effective than a verbal description and can aid in making appropriate decisions for treatment. One of the advantages of digital photography is the possibility of reviewing the picture immediately to judge technical aspects such as sharpness, illumination, color, and patient positioning. The immediate availability of digital images enables the treating physician to monitor a selected aspect in successive or serial shots in the presence of the patient. Fewer appointments with patients may be necessary, as review of the accomplished or planned procedures is possible without waiting for photographs to be processed. Due to the development of powerful data storage tools and software, clinical patient records can be supplemented with informative photographs, and these photographs can be integrated into digital patient files. These improvements, along with technical innovations in photography, have set the stage for high-quality results in maxillofacial surgery.
In the literature, clinical photography is discussed from different viewpoints such as those of plastic and reconstructive surgery, dermatology, dentistry, and orthodontics.1–7 Although human life unfolds in a 3-dimensional (3D) setting, most observations and data are captured only in 2 dimensions, and information about the third dimension is left to our judgment. Especially in the medical field, where surgery can change the appearance of a face, 3D assessment is becoming more and more essential. This new method will prove its value not only for planning of dental or surgical procedures, but also for predicting the outcome. Several approaches have been investigated to open the third dimension to the medical world, starting with computerized tomography (CT),8–10 ultrasonography,11–13 stereolithography,14,15 and laser scanners.16,17 A detailed review of 3D craniofacial reconstruction imaging should describe the modern imaging techniques most commonly used in medicine and dentistry. Analysis of the whole craniofacial complex, virtual simulation, and real simulation of orthognathic surgery, as well as laser scanning with use of stereolithographic biomodeling, have been discussed.18 The aim of this article is to describe step-by-step the recent developments in medical photography, address solutions for data storage, and highlight the benefits as well as some of the technical and human pitfalls of this technology in the medical profession.
History of digital photography

In August 1981, the digital camera revolution began when the Sony Corporation released the first commercial electronic handheld camera without film (the Sony Mavica). This was designed as a point-and-shoot camera, which used a charge-coupled device sensor (CCD sensor) to record still images to Mavipak diskettes with the equivalent of 0.3 megapixel (MP) resolution. Because the pictures were viewed on a TV screen and could not be processed on a computer, the Mavica was not considered a true digital camera. In 1988, Fuji unveiled the DS-1P as the first true digital camera, which recorded images to a removable static random-access memory (SRAM) card in a computerized file.19 The first commercially available digital camera was sold in 1990 as the DYCAM Model 1 or Logitech FotoMan, with a resolution of 376 × 240 pixels at 256 grayscale levels for a manufacturer's suggested retail price (MSRP) of US$995.20 The next rung on the evolutionary ladder of digital photography was the Kodak DCS 100, shown publicly at Photokina in 1990 and marketed in 1991 for an MSRP of US$25,000. It was the first digital single-lens reflex camera (DSLR), consisting of a modified Nikon F3 SLR body and a 1.3 MP digital back.21 Although various companies such as Canon, Nikon, Fujifilm, Sigma, Kodak, Pentax, Olympus, Panasonic, Samsung, and Minolta released DSLR cameras intended for professional photographers and early adopters, DSLR cameras could not compete with film-based SLR cameras due to their lack of speed and image resolution. DSLR cameras began to compete with SLR cameras in 1999, when Nikon introduced the Nikon D1, which employed autofocus lenses such as those in current use. In subsequent years, image resolution increased and prices decreased, until the Canon EOS Digital Rebel made DSLR technology available to amateur photographers with a quality comparable to that of film cameras.
Digital workflow in clinical routine

With further development of CCD resolution, the question was often raised of when or if digital technology would exceed film technology in image quality. This issue has not yet been resolved and depends on numerous parameters. In summary, a resolution of 12 to 16 MP is equivalent to that of ISO 100 color film, but this comparison can only be made when high-quality lenses are used. For image resolution exceeding 10 MP, the quality of the lenses and image compression seem to be the limiting factors for image quality.22–24 For practical and clinical applications, more detailed image resolution does not yield further advantages, and thus the evolution of the DSLR technique in clinical photography has apparently reached its end. Considering digital imaging as a tool for routine work in dentistry and oral and maxillofacial surgery, acquired image data must be linked to patient data, maintained, and stored long term. The amount and quality of image data determine the dimensions of the required image storage system. The best image quality is supplied by unprocessed RAW image data, which is not recommended in clinical photography due to the degree of post-processing needed and the large file sizes generated. The standardized JPG image format with variable compression, used at a resolution of 6 to 8 MP with low compression, fulfills the requirements of clinical photography and is manageable even for large numbers of images. In the digital workflow, the sharpness, white balance, brightness, and orientation of images should be verified before they are stored in the database. Images should not be post-processed for these parameters, but primarily should be exposed correctly, due to the time-consuming nature of post-processing and the possibility of falsifying the document. Thus, the ability to immediately control the quality of the picture is a valuable advantage of the digital era. The requirements for storage of patient images are complex.
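As a rough, self-contained illustration of the resolution guideline above (6 to 8 MP JPG at low compression), a capture check might be sketched as follows. The function names, and the strict in-band test, are our own illustration, not part of any clinical software package.

```python
# Illustrative sanity check for the suggested capture settings.
# The 6-8 MP band comes from the guideline in the text; everything
# else here is an assumption for demonstration purposes.

def megapixels(width_px, height_px):
    """Image resolution in megapixels."""
    return width_px * height_px / 1_000_000

def meets_capture_guideline(width_px, height_px, mp_band=(6.0, 8.0)):
    """True when the resolution falls inside the suggested 6-8 MP band."""
    low, high = mp_band
    return low <= megapixels(width_px, height_px) <= high
```

For example, a 3000 × 2000 pixel photograph (6.0 MP) falls inside the band, while a 12 MP capture exceeds it; as the text notes, resolution beyond this band mainly costs storage rather than improving clinical usefulness.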
A patient image database should have a hierarchical structure for user administration; support key-wording, indexing, and savable queries; have a programmable interface for linking image data to a clinical information system (CIS); and be fast, scalable, and intuitive to use. Some of the CISs that are currently commercially available support structured data systems with the ability to link an image to a patient file. For more advanced storage and administrative functions, professional digital asset management systems (eg, Canto® Cumulus) must be integrated into the CIS via a programmable interface. A good compromise for a low-priced image database is to use software such as Adobe Photoshop® Lightroom or ACDSee Pro, which can be used separately from the CIS with few limitations of convenience and function. As the importance of photography in routine work increases, long-term storage, reliability, and availability become an issue. Although image data can be stored to digital media such as DVDs and Blu-ray® discs, the durability of the image data is threatened by the possibility of hardware failure (due to wear, electrical surge, flood, or fire), accidental deletion, theft, and malicious software.
To guarantee permanent availability and safe long-term storage of image data, a multistage strategy must be followed, including daily automated backup on a physically separate device, firewalls, a virus scanner, an uninterruptible power source (UPS), surge protection, access control, and a documented emergency and disaster recovery plan.

Standardization of facial medical photography

A meaningfully defined standard picture set is necessary and can be adapted to the concerns of the respective users. A full-face front view, oblique, submental oblique, and lateral views have been described as a useful basic picture set. Intraoral documentation includes upper and lower occlusal, buccal left and right, and frontal views.2,25 Additional picture sets can be obtained for orthognathic surgery, skull deformities, synostotic or positional plagiocephaly, facial palsy, aesthetic surgery, and dental implantology. In dental implantology, the frontal region of the upper jaw is particularly and aesthetically important, and additional close-ups showing neighboring structures are essential. The attention of the surgeon should not focus on the tooth or implant alone, since an implant usually also has effects on the lip and cheek contours of the patient at various ages. A preoperative assessment with the aid of photographs should therefore be included in the planning. Standardization is indispensable to produce pre- and post-operative photographs that are comparable. One of the fundamental parameters should be the patient's position, with the head at the same level as the camera. For each picture, the patient's position and distance from the camera should remain the same, and rotation and tilting of the head must be avoided. The image should be aligned horizontally and vertically to the middle axis of the occlusion plane. For facial pictures, the Frankfort Horizontal Plane should be parallel to the floor and aligned vertically to the occlusion plane.
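The head-leveling rule above lends itself to a simple automated check. The sketch below assumes two manually annotated 2D landmarks (porion and orbitale, whose connecting line is the image projection of the Frankfort Horizontal Plane) and an arbitrary 2-degree tolerance; both the landmark workflow and the tolerance are our assumptions, not values from the cited standardization literature.

```python
import math

def frankfort_tilt_deg(porion, orbitale):
    """Angle between the porion-orbitale line and the image horizontal,
    given the two landmarks as (x, y) pixel coordinates."""
    dx = orbitale[0] - porion[0]
    dy = orbitale[1] - porion[1]
    return math.degrees(math.atan2(dy, dx))

def is_level(porion, orbitale, tolerance_deg=2.0):
    """True when the Frankfort line is acceptably parallel to the floor.
    The 2-degree default is an illustrative choice only."""
    return abs(frankfort_tilt_deg(porion, orbitale)) <= tolerance_deg
```

Such a check could run at review time, flagging a photograph for immediate retake while the patient is still present, which is exactly the advantage of the digital workflow described earlier.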
The deformity can be exaggerated or masked if the patient is wrongly positioned, and this is especially likely to happen with orthognathic patients, as shown in Figure 1. The photograph should be adjusted so that the mid-sagittal plane of the patient is orientated perpendicular to the optical axis. Interfering cosmetics and jewelry should be removed, as should blood or saliva in intraoral views.

Figure 1 Lateral view of an orthognathic patient with Angle Class 2. The pictures show markedly different profiles. a) Correct position of the patient; b) tracings of photographs a, c, and d; c) the head is bent backward and the Frankfort Horizontal Plane is not parallel to the ground, and the deformity is therefore underestimated; d) the head is bent forward and the deformity is exaggerated.

3D photography

The brain achieves 3D perception by interpreting the difference in depth between the 2 pictures seen by the right and left eye. Recently, 3D imaging has been adopted as an innovation in digital photography. The establishment of the next dimension in photography lies in the use of more than one camera at a time. The easiest way to achieve a 3D image is to take 2 pictures of the same object by moving the camera to one side without changing the level. These 2 pictures can then be viewed with 2 eyes using the cross-eye method, looking at the left picture with the left eye and at the right picture with the right eye. The photograph appears 3D when the images are fused. This method can be learned with patience.
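The depth cue in such an offset pair is horizontal disparity: the nearer a point, the further apart its two projections. For an idealized, calibrated and rectified pinhole camera pair, depth follows the textbook relation Z = f·B/d. The sketch below illustrates only that simplification; it is not the reconstruction method of any commercial 3D system discussed here.

```python
def depth_from_disparity(focal_px, baseline_mm, disparity_px):
    """Distance (mm) to a point, for an idealized rectified stereo pair:
    Z = focal length (px) * camera baseline (mm) / disparity (px).
    Real photogrammetric systems calibrate and rectify first."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_mm / disparity_px

# Example (made-up numbers): a 1000 px focal length, 60 mm baseline,
# and 20 px disparity place the point 3000 mm from the cameras.
```

The inverse relationship also explains why a short baseline limits depth accuracy, a point the article returns to when discussing the two-camera Genex system.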
More professional ways of producing real 3D pictures require additional camera viewpoints, and several camera systems with this capacity have been introduced. In 2008, a 3D digital imaging system, the Fuji Finepix Real 3D, was announced, with dual lenses that capture images simultaneously.

Application of 3D digital photography in the medical field

For medical concerns, other systems with more than two cameras have been investigated, for example the 3D capture systems by Genex® or 3dMD® (Figure 2). The 3dMD® cranial system, for example, works with five camera viewpoints to obtain a full 360° picture of the head (Figures 3 and 4). These systems have been analyzed with regard to the anthropometric precision and accuracy of digital 3D photogrammetry of the face, and can be combined or compared with direct anthropometry using statistical methods.26 Furthermore, these 3D applications are useful in the description of cranial and facial soft tissues. A meaningful example of their use in medical treatment is the identification of common features in children with craniofacial deformities. The capacity for 3D visualization supports the ability to distinguish synostotic and non-synostotic plagiocephaly. The addition of this feature adds significant information in the diagnosis and treatment of these children. The use of 3D photography is of interest in all fields dealing with the treatment of obvious changes in the appearance of facial morphology, both for evaluating changes and predicting surgical results.
Applications of 3D imaging for assessment of facial changes have been described in orthodontics as well as in the related discipline of orthognathic surgery.27–31 Other authors have described applications in patients with cleft lip and palate32–35 or with craniofacial malformations, to aid in recognizing the key components of particular syndromes.36 New technologies are being implemented in 3D photogrammetry for collecting phenotypic measurements of the face.37 Photogrammetry is more than simply making measurements using stereoscopic photographs; it can capture 3D images with the ability to estimate coordinates of points, linear or surface distances, and volumetric measurements. The more sophisticated computerized stereophotogrammetry, C3D, has been introduced as a useful technique for 3D recording of monochrome and color stereo images32,38–40 in the field of maxillofacial surgical planning. As previously mentioned, standardization is an essential requirement in clinical and scientific photography, and this has been demonstrated in the field of 3D photography as well. More information is gained with the added dimension, but the number of possible mistakes increases accordingly.

Discussion

The changeover from analog to digital photography in medicine has occurred gradually and without major difficulties, and the advantages of digital photography in the dental and maxillofacial field have been clearly outlined; however, the availability of these digital technologies represents both an opportunity and a challenge. The physician is expected to provide sufficient image processing and to ensure the high quality of images. Meaningful archiving and secure storage can be achieved using a professional keyword-indexed asset management system. Such a system provides easy access for presentations and lectures, as well as for forensic purposes. The capability for digital post-processing, however, has the disadvantage of enabling falsification of images.
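Once a photogrammetric system delivers 3D landmark coordinates, the linear and volumetric measurements mentioned earlier reduce to elementary geometry. The sketch below uses made-up coordinates rather than clinical data, and the helper names are ours, not those of any photogrammetry package.

```python
import math

def linear_distance(p, q):
    """Euclidean distance between two 3-D landmarks."""
    return math.dist(p, q)

def tetrahedron_volume(a, b, c, d):
    """Volume spanned by four 3-D landmarks, via the scalar triple
    product divided by 6; a building block for volumetric measures."""
    u = [b[i] - a[i] for i in range(3)]
    v = [c[i] - a[i] for i in range(3)]
    w = [d[i] - a[i] for i in range(3)]
    triple = (u[0] * (v[1] * w[2] - v[2] * w[1])
              - u[1] * (v[0] * w[2] - v[2] * w[0])
              + u[2] * (v[0] * w[1] - v[1] * w[0]))
    return abs(triple) / 6.0
```

In practice, surface meshes are decomposed into many such simplices; the point here is only that the measurement step is straightforward once the coordinates themselves are accurate, which is why the cited validation studies focus on landmark precision.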
Many published papers define a basic picture set in 2 dimensions for different uses, including dentistry, orthodontics, and maxillofacial and plastic surgery.2,3,6,25,41 Furthermore, supplemental picture sets for special circumstances have been described, which are useful in the field of maxillofacial surgery.25 Beyond the function of documentation, attempts have been made to use photography as a means of identifying landmarks and measuring distances on two-dimensional photographs. Measurements of photographs have been carried out by various specialists, for example, for computerized eyelid measurement analysis in ophthalmology.42 Other attempts to characterize facial morphology in orthodontics using standardized photographs have been examined and compared to cephalometric measurements.43,44 Photographic methods have also been used to identify landmarks or digitally optimize appliances such as head bands.45–47 Nevertheless, reducing the picture set to a minimum will increase acceptance and feasibility. Knowledge of common mistakes can prevent pitfalls and help in achieving professional skills in digital photography.48,49 Manipulation of the patient's head position49 or changes in illumination50 can make a difference in the surgical outcome.

Figure 2 The 3dMD® cranial system uses 5 camera viewpoints to generate a 360° image of the head.
The advantages of digital photography, such as saving time, lower costs, speed of storage, and reduced storage space with easier access to the photographs, have been described in the literature.2,51 The use of 3D photography supports clinical diagnosis and treatment in various fields. In medical genetics, it has demonstrated high levels of sensitivity and specificity in discriminating between controls and individuals diagnosed with Noonan syndrome, and has the potential for use in training physicians.36 Precision and error of 3D phenotypic measures from 3dMD photogrammetric images have also been described in the field of clinical dysmorphology in medical genetics. Here the precision is specified as highly repeatable, with an error for placement of landmarks in the sub-millimeter range.37 The development of CT has revolutionized diagnostic and treatment purposes in medicine. Especially the field of orthognathic surgery has major benefits from three-dimensional analysis.52 The combination of CT-based 3D data sets with 3D photographs could add significant information for tissue landmarks requiring information on the hairline or eyelids. It could be shown that the registration of 3D photographs with CT images provides an accurate match between the 2 surfaces.53 Recently this group was able to confirm the accuracy of matching 3D photographs with skin surfaces from cone-beam CTs, with an error within ±1.5 mm.54 Using 3D stereophotogrammetry for soft tissue analysis, 2 observers showed high reliability coefficients of 0.97 for intraobserver and 0.94 for interobserver reliability in 20 patients.55 However, it has been reported that the accuracy of 3D facial imaging in orthodontics using the Genex camera system showed substantial image distortion when images of sharp angles (90°) were captured. This system, the Genex Rainbow 3D Camera Model, is a technology with 2 cameras. The accuracy was greater the less that the z-coordinate was incorporated in the image. This limitation was to be expected, given the camera configuration: because the lenses were located somewhat close to each other, resulting in a limited field of view, it was difficult to get an accurate z-coordinate measurement.31 In the medical literature several 3D imaging systems in photography have been introduced. Besides commercially offered systems like 3dMD and Genex, other custom-made 3D systems and software developments have been presented.38–40 The validation of the systems has been published independently.28,32,37,56 The only comparison of measurement data of different 3D photogrammetric systems was performed by Weinberg et al,26 who showed that both systems are sufficiently concordant (relative to one another), accurate (relative to direct anthropometry), and precise to meet the needs of most clinical and basic research designs.

Figure 3 Five camera viewpoints of the head of a patient with deformational plagiocephaly. Camera views: a) half profile front right, b) half profile front left, c) half profile back left, d) half profile back right, e) from above.

Figure 4 2D illustration of the composed 3D image of the patient's head, which was generated from the 5 views in Figure 3.

Conclusion

The evolution of photography has resulted in easy-to-use and affordable digital photography for the practitioner.
In the specialty of dentistry, medical photography has become a high-quality tool for health care professionals, using a defined standard picture set for documentation in a standard reproducible set-up. The newest innovation in photography, incorporating the third dimension, offers detailed studies of the facial surface and soft tissue morphology. The advantages of digital photography include improved capabilities for diagnostics, planning of surgery and treatment, follow-up, and interdisciplinary communication between physicians and other specialists.

Disclosures

The authors report no conflicts of interest.

References

1. Bengel W. Standardization in dental photography. Int Dent J. 1985;35(3):210–217.
2. Ettorre G, Weber M, Schaaf H, Lowry JC, Mommaerts MY, Howaldt HP. Standards for digital photography in cranio-maxillo-facial surgery – Part I: Basic views and guidelines. J Craniomaxillofac Surg. 2006;34(2):65–73.
3. Galdino GM, DaSilva D, Gunter JP. Digital photography for rhinoplasty. Plast Reconstr Surg. 2002;109(4):1421–1434.
4. Galdino GM, Vogel JE, Vander Kolk CA. Standardizing digital photography: it's not all in the eye of the beholder. Plast Reconstr Surg. 2001;108(5):1334–1344.
5. Jemec BI, Jemec GB. Suggestions for standardized clinical photography in plastic surgery. J Audiov Media Med. 1981;4(3):99–102.
6. Sandler J, Murray A. Digital photography in orthodontics. J Orthod. 2001;28(3):197–201.
7. Sandler J, Murray A. Clinical photographs – the gold standard. J Orthod. 2002;29(2):158–161.
8. Alder ME, Deahl ST, Matteson SR. Clinical usefulness of two-dimensional reformatted and three-dimensionally rendered computerized tomographic images: literature review and a survey of surgeons' opinions. J Oral Maxillofac Surg. 1995;53(4):375–386.
9. Guerrero ME, Jacobs R, Loubele M, Schutyser F, Suetens P, van Steenberghe D. State-of-the-art on cone beam CT imaging for preoperative planning of implant placement. Clin Oral Investig. 2006;10(1):1–7.
10.
Xia J, Samman N, Yeung RW, et al. Computer-assisted three-dimensional surgical planning and simulation: 3D soft tissue planning and prediction. Int J Oral Maxillofac Surg. 2000;29(4):250–258.
11. Hell B. 3D sonography. Int J Oral Maxillofac Surg. 1995;24(1 Pt 2):84–89.
12. Landes CA, Goral WA, Sader R, Mack MG. Three-dimensional versus two-dimensional sonography of the temporomandibular joint in comparison to MRI. Eur J Radiol. 2007;61(2):235–244.
13. Roelfsema NM, Hop WC, Wladimiroff JW. Three-dimensional sonographic determination of normal fetal mandibular and maxillary size during the second half of pregnancy. Ultrasound Obstet Gynecol. 2006;28(7):950–957.
14. Bill JS, Reuther JF, Dittmann W, et al. Stereolithography in oral and maxillofacial operation planning. Int J Oral Maxillofac Surg. 1995;24(1 Pt 2):98–103.
15. Santler G, Karcher H, Ruda C. Indications and limitations of three-dimensional models in cranio-maxillofacial surgery. J Craniomaxillofac Surg. 1998;26(1):11–16.
16. Nakamura N, Suzuki A, Takahashi H, Honda Y, Sasaguri M, Ohishi M. A longitudinal study on influence of primary facial deformities on maxillofacial growth in patients with cleft lip and palate. Cleft Palate Craniofac J. 2005;42(6):633–640.
17. Noguchi N, Tsuji M, Shigematsu M, Goto M. An orthognathic simulation system integrating teeth, jaw and face data using 3D cephalometry. Int J Oral Maxillofac Surg. 2007;36(7):640–645.
18. Papadopoulos MA, Christou PK, Christou PK, et al. Three-dimensional craniofacial reconstruction imaging. Oral Surg Oral Med Oral Pathol Oral Radiol Endod. 2002;93(4):382–393.
19. Larish LL. Understanding Electronic Photography. New York: McGraw-Hill Education; 1990:44.
20. Popular Photography. New York: HFM U.S.; 1991:111.
21. Popular Photography. New York: HFM U.S.; 1991:56.
22. Clark RN. Film versus Digital Summary. www.clarkvision.com/imagedetail/film.vs.digital.summary1.html; 2005. Accessed Nov 23, 2008.
23. Lenhard K.
Optik für die Digitale Fotografie [Optics for digital photography]. Bad Kreuznach; www.schneiderkreuznach.com/knowhow/digfoto.htm. Accessed Nov 23, 2008.
24. Rockwell K. The Megapixel Myth. La Jolla, California. www.kenrockwell.com/tech/mpmyth.htm; 2006. Accessed Nov 23, 2008.
25. Schaaf H, Streckbein P, Ettorre G, Lowry JC, Mommaerts MY, Howaldt HP. Standards for digital photography in cranio-maxillo-facial surgery – Part II: Additional picture sets and avoiding common mistakes. J Craniomaxillofac Surg. 2006;34(7):444–455.
26. Weinberg SM, Naidoo S, Govier DP, Martin RA, Kane AA, Marazita ML. Anthropometric precision and accuracy of digital three-dimensional photogrammetry: comparing the Genex and 3dMD imaging systems with one another and with direct anthropometry. J Craniofac Surg. 2006;17(3):477–483.
27. Hajeer MY, Ayoub AF, Millett DT. Three-dimensional assessment of facial soft-tissue asymmetry before and after orthognathic surgery. Br J Oral Maxillofac Surg. 2004;42(5):396–404.
28. Hajeer MY, Mao Z, Millett DT, Ayoub AF, Siebert JP. A new three-dimensional method of assessing facial volumetric changes after orthognathic treatment. Cleft Palate Craniofac J. 2005;42(2):113–120.
29. Hajeer MY, Millett DT, Ayoub AF, Siebert JP. Applications of 3D imaging in orthodontics: part II. J Orthod. 2004;31(2):154–162.
30. Hajeer MY, Millett DT, Ayoub AF, Siebert JP. Applications of 3D imaging in orthodontics: part I. J Orthod. 2004;31(1):62–70.
31. Lee JY, Han Q, Trotman CA. Three-dimensional facial imaging: accuracy and considerations for clinical applications in orthodontics. Angle Orthod. 2004;74(5):587–593.
32. Ayoub A, Garrahy A, Hood C, et al. Validation of a vision-based, three-dimensional facial imaging system. Cleft Palate Craniofac J. 2003;40(5):523–529.
33. Hood CA, Bock M, Hosey MT, Bowman A, Ayoub AF. Facial asymmetry – 3D assessment of infants with cleft lip and palate. Int J Paediatr Dent. 2003;13(6):404–410.
34. Hood CA, Hosey MT, Bock M, White J, Ray A, Ayoub AF. Facial characterization of infants with cleft lip and palate using a three-dimensional capture technique. Cleft Palate Craniofac J. 2004;41(1):27–35.
35. Schwenzer-Zimmerer K, Chaitidis D, Berg-Boerner I, et al. Quantitative 3D soft tissue analysis of symmetry prior to and after unilateral cleft lip repair compared with non-cleft persons (performed in Cambodia). J Craniomaxillofac Surg. 2008;36(8):431–438.
36. Hammond P, Hutton TJ, Allanson JE, et al. 3D analysis of facial morphology. Am J Med Genet A. 2004;126(4):339–348.
37.
Aldridge K, Boyadjiev SA, Capone GT, DeLeon VB, Richtsmeier JT. Precision and error of three-dimensional phenotypic measures acquired from 3dMD photogrammetric images. Am J Med Genet A. 2005;138(3):247–253.
38. Ayoub AF, Siebert P, Moos KF, Wray D, Urquhart C, Niblett TB. A vision-based three-dimensional capture system for maxillofacial assessment and surgical planning. Br J Oral Maxillofac Surg. 1998;36(5):353–357.
39. Ayoub AF, Wray D, Moos KF, et al. Three-dimensional modeling for modern diagnosis and planning in maxillofacial surgery. Int J Adult Orthodon Orthognath Surg. 1996;11(3):225–233.
40. Bourne CO, Kerr WJ, Ayoub AF. Development of a three-dimensional imaging system for analysis of facial change. Clin Orthod Res. 2001;4(2):105–111.
41. Jones M, Cadier M. Implementation of standardized medical photography for cleft lip and palate audit. J Audiov Media Med. 2004;27(4):154–160.
42. Coombes AG, Sethi CS, Kirkpatrick WN, Waterhouse N, Kelly MH, Joshi N. A standardized digital photography system with computerized eyelid measurement analysis. Plast Reconstr Surg. 2007;120(3):647–656.
43. Ferrario VF, Sforza C, Miani A, Tartaglia G. Craniofacial morphometry by photographic evaluations. Am J Orthod Dentofacial Orthop. 1993;103(4):327–337.
44. Zhang X, Hans MG, Graham G, Kirchner HL, Redline S. Correlations between cephalometric and facial photographic measurements of craniofacial form. Am J Orthod Dentofacial Orthop. 2007;131(1):67–71.
45. Hutchison BL, Hutchison LA, Thompson JM, Mitchell EA. Plagiocephaly and brachycephaly in the first two years of life: a prospective cohort study. Pediatrics. 2004;114(4):970–980.
46. Hutchison BL, Hutchison LA, Thompson JM, Mitchell EA. Quantification of plagiocephaly and brachycephaly in infants using a digital photographic technique. Cleft Palate Craniofac J. 2005;42(5):539–547.
47. Zonenshayn M, Kronberg E, Souweidane MM.
Cranial index of symmetry: an objective semiautomated measure of plagiocephaly. Technical note. J Neurosurg. 2004;100(5 Suppl Pediatrics):537–540.
48. Nayler J, Geddes N, Gomez-Castro C. Managing digital clinical photographs. J Audiov Media Med. 2001;24(4):166–171.
49. Niamtu J. Image is everything: pearls and pitfalls of digital photography and PowerPoint presentations for the cosmetic surgeon. Dermatol Surg. 2004;30(1):81–91.
50. Ikeda I, Urushihara K, Ono T. A pitfall in clinical photography: the appearance of skin lesions depends upon the illumination device. Arch Dermatol Res. 2003;294(10–11):438–443.
51. Trune DR, Berg DM, DeGagne JM. Computerized digital photography in auditory research: a comparison of publication-quality digital printers with traditional darkroom methods. Hear Res. 1995;86(1–2):163–170.
52. Swennen GR, Schutyser F, Hausamen JE. Three-Dimensional Cephalometry: A Color Atlas and Manual. 1st ed. Berlin: Springer; 2005.
53. De Groeve P, Schutyser F, Cleynen-Breugel J, Suetens P. Registration of 3D photographs with spiral CT images for soft tissue simulation in maxillofacial surgery. Med Image Comput Comput Assist Interv. 2001;2208:991–996.
54. Maal TJ, Plooij JM, Rangel FA, Mollemans W, Schutyser FA, Berge SJ. The accuracy of matching three-dimensional photographs with skin surfaces derived from cone-beam computed tomography. Int J Oral Maxillofac Surg. 2008;37(7):641–646.
55. Plooij JM, Swennen GR, Rangel FA, et al. Evaluation of reproducibility and reliability of 3D soft tissue analysis using 3D stereophotogrammetry. Int J Oral Maxillofac Surg. 2009;38(3):267–273.
56. Weinberg SM, Scott NM, Neiswanger K, Brandon CA, Marazita ML. Digital three-dimensional photogrammetry: evaluation of anthropometric precision and accuracy using a Genex 3D camera system. Cleft Palate Craniofac J. 2004;41(5):507–518.
work_e42wkvtkojfbva25ausv4yidmi ---- The Healing Effect of Nettle Extract on Second Degree Burn Wounds

www.wjps.ir /Vol.4/No.1/January 2015

The Healing Effect of Nettle Extract on Second Degree Burn Wounds
Hosein Akbari1, Mohammad Javad Fatemi2, Maryam Iranpour3*, Ali Khodarahmi4, Mehrdad Baghaee5, Mir Sepehr Pedram6, Sahar Saleh1, Shirin Araghi1

ABSTRACT
BACKGROUND: Numerous studies have been carried out to develop more sophisticated dressings to expedite healing and diminish the bacterial burden in burn wounds. This study assessed the healing effect of nettle extract on second-degree burn wounds in rats in comparison with silver sulfadiazine and vaseline.
METHODS: Forty rats were randomly assigned to four equal groups. A deep second-degree burn was created on the back of each rat using a standard burning procedure. The burns were dressed daily with nettle extract in group 1, silver sulfadiazine in group 2, vaseline in group 3, and without any medication in group 4 as the control group. The response to treatment was assessed by digital photography during treatment until day 42. Histological scoring was undertaken for scar tissue samples on days 10 and 42.
RESULTS: A statistically significant difference was observed in group 1 compared with the other groups for 4 scoring parameters after 10 days. A statistically significant difference was seen for the fibrosis parameter after 42 days.
In terms of wound surface area, maximal healing was noticed at the same time points in the nettle group and minimal repair in the control group.
CONCLUSION: Our findings showed the maximal rate of healing in the nettle group, so it may be a suitable substitute for silver sulfadiazine and vaseline when available.
KEYWORDS: Healing; Nettle; Burn; Wound; Silver sulfadiazine; Vaseline; Rat

Please cite this paper as: Akbari H, Fatemi MJ, Iranpour M, Khodarahmi A, Baghaee M, Pedram MS, Saleh S, Araghi S. The Healing Effect of Nettle Extract on Second Degree Burn Wounds. World J Plast Surg 2015;4(1):23-28.

Original Article
1. Department of Plastic Surgery, Iran University of Medical Sciences, Tehran, Iran; 2. Department of Plastic and Reconstructive Surgery, Burn Research Center, Hazrat Fatima Hospital, Iran University of Medical Sciences, Tehran, Iran; 3. Department of Pathology, Kerman University of Medical Sciences, Kerman, Iran; 4. Department of Plastic Surgery, Kerman University of Medical Sciences, Kerman, Iran; 5. Department of Surgery, Iran University of Medical Sciences, Tehran, Iran; 6. Department of Surgery and Radiology, Faculty of Veterinary, Tehran University of Medical Sciences, Tehran, Iran
*Correspondence Author: Ali Khodarahmi, MD; Assistant Professor, Department of Pathology, Kerman University of Medical Sciences, Kerman, Iran. Tel: +98-34-33222212; Fax: +98-34-32449135; Email: dr.iranpour.86@gmail.com
Received: August 9, 2014; Accepted: October 15, 2014

INTRODUCTION
Burn is still regarded as a medical emergency affecting both genders and all age groups in both developed and developing countries, leading to physical and psychological disabilities, with an increasing trend in mortality and morbidity during pregnancy.1,2 Wound healing is widely discussed in the
medical literature. In topical burn therapy, silver sulfadiazine was introduced as the gold standard, having antibacterial properties as well.3 Numerous studies have been carried out to develop more sophisticated dressings that expedite the healing process and diminish the bacterial burden in wounds. Medicinal plants have also been introduced for the healing of burn injuries; traditional forms of medicine, especially herbal products, which have been employed for centuries in Africa and Asia, are under scientific investigation for their roles in wound treatment.4-7 There are many reports confirming the use of medicinal plants for the dressing of wounds, as described by Avicenna, the Persian physician (980-1037 AD), in his famous book, the Canon of Medicine.5,6,8-11 Stinging nettle has been used for hundreds of years to treat painful muscles and joints, eczema, arthritis, gout, anemia and benign prostatic hyperplasia.12-15 Combudoron, composed of extracts from arnica and stinging nettle, has been used for the treatment of partial-thickness burns and insect bites in Europe. Combudoron seems to have positive effects on the healing of grade 2 laser-induced burns, which needs further investigation.16,17 This study assessed the effect of nettle extract on second-degree burn wound healing in rats in comparison with silver sulfadiazine and vaseline.

MATERIALS AND METHODS
In a randomized experimental study, 40 Wistar-albino male rats (average weight: 300-350 g; average age: 3-4 months) were randomly divided into 4 equal groups. Group 1 received topical nettle extract; group 2 was treated with topical silver sulfadiazine; group 3 received topical vaseline; and group 4 was considered the control group, with no medication.
They were all maintained in a sheltered environment (temperature: 20-25 °C; humidity: 65-75%) under the supervision of a veterinarian. During the experiment, the rats were fed standard rat chow and tap water, and each group was kept in a separate cage. All groups were studied at the same time. All rats were handled according to the ethical principles for animal experiments of the International Council for Animal Protection, and all experimental procedures were approved by the research ethics committee of the university. The rats were anesthetized with xylazine (10 mg/kg) and ketamine hydrochloride (50-100 mg/kg). A standard burn wound covering about 20% of the total body surface area was produced using a hot plate at an identical temperature, as described before.18 Briefly, the skin on the dorsum was shaved with an electrical clipper. A deep second-degree burn wound was created with a hot plate (size: 4×2 cm) at an identical temperature (warmed for 5 min and placed for 10 s on the skin with equal pressure). The burn area was treated with nettle extract in group 1, silver sulfadiazine in group 2, vaseline in group 3, and without any intervention in group 4. Response to treatment was assessed by digital photography twice per week under general anesthesia during treatment until day 42. Histologic parameters (PMN infiltration, collagen deposition, fibrosis and angiogenesis) were assessed on biopsy specimens of the wound on days 10 and 42. Each specimen was taken under general anesthesia with a punch device and contained part of the wound and its surrounding skin. The specimens were stained with H&E and Masson's trichrome.
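Ordinal histological read-outs of this kind are typically recorded per specimen and then aggregated. A minimal sketch of such a scoring helper, where the numeric cut-offs follow the histological criteria defined in this study for collagen and PMN counts, while the function and constant names are illustrative assumptions:

```python
# Ordinal histological scoring sketch. Cut-offs follow the study's criteria
# (collagen 0-2; PMN count per x40 field 0-2); names are illustrative.

COLLAGEN_SCORES = {"normal": 2, "disorganized/edematous": 1, "amorphous": 0}

def pmn_score(count_per_x40_field):
    """0-10 cells -> 2 (least inflammation), 11-40 -> 1, >40 -> 0."""
    if count_per_x40_field <= 10:
        return 2
    if count_per_x40_field <= 40:
        return 1
    return 0

def specimen_score(collagen_grade, pmn_count):
    """Sum the two numeric parameters for one biopsy specimen."""
    return COLLAGEN_SCORES[collagen_grade] + pmn_score(pmn_count)
```

For example, a specimen with normal collagen bundles and 5 PMN per x40 field would score 4, the best value on this combined 0-4 sub-scale.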
Histological criteria were defined as follows:
- Collagen: normal bundles: 2; disorganized/edematous: 1; amorphous: 0.
- PMN per x40 field: 0-10: 2; 11-40: 1; >40: 0.
- Angiogenesis: graded as mild, moderate or severe.
- Fibrosis: graded as mild, moderate or severe, according to the thickness of the collagen bundles.19

RESULTS
One animal in the silver sulfadiazine group and one in the nettle group died on day 11, and one animal in the vaseline group died on day 15. In the first biopsy (day 10), a statistically significant healing effect was seen in the nettle group compared with the vaseline and control groups for the 4 scoring parameters (P<0.05). Compared with the silver sulfadiazine group, the nettle group showed a significant difference for 2 parameters: fibrosis and PMN infiltration. In the second biopsy (day 42), significant differences in the fibrosis parameter were found for 3 comparisons, namely nettle/silver, silver/vaseline and silver/control (Tables 1 and 2). For wound surface area at identical time intervals, the maximal healing effect was seen in the nettle group and minimal repair in the control group. Differences in wound surface area over the first 10 days were equal among all groups, and a sharper slope of the chart after day 10 showed more improvement in the nettle group, although this was not statistically significant (Figures 1 and 2). Severe angiogenesis was present in the dermis in the nettle group (Figure 3), while in the control group it was mild (Figure 4). Figure 5 shows the dermis with collagen deposition and dense fibrosis in the nettle group.

DISCUSSION
Burns are among the most common and devastating forms of trauma. They are physical and chemical phenomena and cause considerable morbidity and mortality worldwide.
The final goal of all current burn treatments is to accelerate skin healing and prevent wound infection.8,14,20 Cutaneous wound repair consists of an orderly progression of events that re-establishes the integrity of the damaged tissue. The sequence of events that repairs the damage is categorized into three overlapping phases: inflammation, proliferation, and tissue remodeling. The normal healing process can be impeded at any step along its path by a variety of factors that contribute to impaired healing.20

Fig. 1: The wound surface area in terms of time in different groups.

Table 1: Comparison of the healing effect of burn in all groups (pairwise P values)
Comparison           Fibrosis,  Fibrosis,  Angiogenesis,  PMN infiltration,  Collagen deposition,
                     day 10     day 41     day 10         day 10             day 10
Nettle vs vaseline   NS         0.001      0.001          0.001              0.001
Nettle vs control    NS         0.001      0.009          0.002              0.001
Nettle vs SSD        0.001      0.01       NS             0.005              NS
Vaseline vs SSD      0.009      0.008      0.007          NS                 NS
Control vs SSD       0.009      NS         NS             NS                 NS
Vaseline vs control  NS         NS         NS             NS                 NS
P value<0.05: significant; SSD=silver sulfadiazine; NS=not significant (P value>0.05).

Table 2: Comparison of the healing effect of burn between the two time intervals
Parameter          Time    Value   Df  P value**
Collagen           Day 10  20.906  3   0.001
Collagen           Day 41  4.270   3   0.234
PMN                Day 10  17.811  3   0.001
PMN                Day 41  0.299   3   0.960
Angiogenesis       Day 10  15.605  3   0.001
Angiogenesis       Day 41  4.333   3   0.228
Fibrosis           Day 10  21.113  3   0.001
Fibrosis           Day 41  12.950  3   0.005
Epithelialization  Day 10  4.880   3   0.181
Epithelialization  Day 41  1.012   3   0.798
Df=degree of freedom; **P value<0.05: significant.

The final step of the proliferative phase is epithelialization, which involves migration, proliferation, and differentiation of epithelial cells from the wound edges to resurface the defect.
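Table 2 reports one test statistic ("Value") per parameter and time point with 3 degrees of freedom, which matches a comparison across the 4 treatment groups (df = k - 1). The paper does not name the test; a Kruskal-Wallis test is a common choice for ordinal histological scores and yields exactly this df, so the pure-Python sketch below (with invented data) is an assumption, not the authors' documented method:

```python
# Kruskal-Wallis H statistic (tie-corrected), a plausible source of the
# "Value" column in Table 2. This is an assumption: the paper never names
# the test it used.
from collections import Counter

def rank(values):
    """Average 1-based ranks; tied values share the mean rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # mean rank of the tied block, 1-based
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def kruskal_wallis(groups):
    """Tie-corrected Kruskal-Wallis H for a list of sample groups."""
    data = [x for g in groups for x in g]
    n = len(data)
    r = rank(data)
    h, pos = 0.0, 0
    for g in groups:  # accumulate (rank sum)^2 / group size
        rank_sum = sum(r[pos:pos + len(g)])
        h += rank_sum ** 2 / len(g)
        pos += len(g)
    h = 12.0 / (n * (n + 1)) * h - 3 * (n + 1)
    ties = sum(t ** 3 - t for t in Counter(data).values())
    return h / (1 - ties / (n ** 3 - n)) if ties else h
```

For example, `kruskal_wallis([[1, 2, 3], [4, 5, 6], [7, 8, 9]])` gives H ≈ 7.2; with 4 groups the statistic would be referred to a chi-square distribution with 3 degrees of freedom, as in Table 2.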
In open full-thickness burn wounds, epithelialization is delayed until a bed of granulation tissue is established to allow for the migration of epithelial cells.21 Studies have demonstrated that burn infection is the main cause of mortality in patients with extensive burns. Therefore, many researchers have tried to develop appropriate treatment methods to reduce the risk of wound infection and shorten the period of treatment in patients with burn wounds. Some of these treatments involve topical antimicrobial agents, which effectively reduce the mortality rate of burns.

Fig. 2: The wound surface area in terms of group at different time intervals.
Fig. 3: Severe angiogenesis (arrows) in dermis (H&E stain) in the nettle group.
Fig. 4: Mild angiogenesis (arrows) in dermis (H&E stain) in the control group.
Fig. 5: Dermis showing collagen deposition and dense fibrosis in the nettle group (Masson's trichrome stain).
One of these antimicrobial topical ointments is silver sulfadiazine, with advantages such as easy and convenient use, painless application, low toxicity and sensitivity, and antibacterial effects, which have made it the gold standard among antimicrobial topical drugs for patients with burns and the main drug used in the treatment of burn wounds around the world.8 Use of non-silver treatment led to shorter wound healing time, fewer dressing changes and shorter hospital stays compared with silver sulfadiazine treatment, but no difference in the incidence of wound infection or grafting was found.22 Cubosome gels have been reported to be effective in the treatment of deep second-degree burns, which may result in better patient compliance and excellent healing with the fewest side effects in comparison with commercially available products.23 As thermal injury disrupts the protective barrier function of the skin, a dressing is needed to protect against environmental flora and evaporative heat loss; it can also clean the wound, remove debris from the separated eschar and devitalized tissue, and provide antibacterial activity. A wide variety of substances have been reported to be useful in the treatment of burn wounds.24 Among antimicrobial agents, topical silver sulfadiazine ointment is the most commonly deployed for partial- and full-thickness burns.25 In another study, silver sulfadiazine and aloe vera were compared, and a higher healing speed was reported in the aloe vera group.26 In the present study, nettle extract was compared with vaseline ointment and silver sulfadiazine as the standard treatment for burn wounds in rats. Stinging nettle has been used to treat painful muscles and joints, eczema, arthritis, gout, anemia, benign prostatic hyperplasia, urinary tract infections, hay fever (allergic rhinitis) and thumb pain.
It has also been used for treating joint pain, sprains and strains, tendonitis and insect bites.12-15 IDS 23, a stinging nettle leaf extract, may inhibit the inflammatory cascade in autoimmune diseases.27 Many traditional plants have been used for the repair of burn wounds in animal models; however, no specific studies have examined the effect of nettle extract on the healing of burn wounds. The probable mechanisms include providing materials necessary for healing, increasing blood flow to the burn area, decreasing the inflammatory response and decreasing the rate of infection. In the histological evaluation, the maximal rates of angiogenesis and fibrosis were seen in the nettle group, with better wound healing. Our findings showed the maximal rate of healing in the nettle group, so it may be a suitable substitute for silver sulfadiazine and vaseline when available.

ACKNOWLEDGEMENT
We wish to thank Mottahari Burn Research Center of Iran University of Medical Sciences for financial support.

CONFLICT OF INTEREST
The authors declare no conflict of interest.

REFERENCES
1 Mohammadi AA, Amini M, Mehrabani D, Kiani Z, Seddigh A. A survey on 30 months electrical burns in Shiraz University of Medical Sciences Burn Hospital. Burns 2008;34:111-13.
2 Pasalar M, Mohammadi AA, Rajaeefard AR, Neghab M, Tolidie HR, Mehrabani D. Epidemiology of burns during pregnancy in southern Iran: Effect on maternal and fetal outcomes. World Appl Sci J 2013;28:153-8.
3 Hosseini SV, Tanideh N, Kohanteb J, Ghodrati Z, Mehrabani D, Yarmohammadi H. Comparison between Alpha and silver sulfadiazine ointments in treatment of Pseudomonas infections in 3rd degree burns. Int J Surg 2007;5:23-6.
4 Amini M, Kherad M, Mehrabani D, Azarpira N, Panjehshahin MR, Tanideh N. Effect of Plantago major on burn wound healing in rat. J Appl Anim Res 2010;37:53-6.
5 Hazrati M, Mehrabani D, Japoni A, Montasery H, Azarpira N, Hamidian-Shirazi AR, Tanideh N. Effect of honey on healing of Pseudomonas aeruginosa infected burn wounds in rat.
J Appl Anim Res 2010;37:106-10.
6 Hosseini SV, Niknahad H, Fakhar N, Rezaianzadeh A, Mehrabani D. The healing effect of honey, putty, vitriol and olive oil in Pseudomonas aeruginosa infected burns in experimental rat model. Asian J Anim Vet Adv 2011;6:572-9.
7 Tanideh N, Rokhsari P, Mehrabani D, Mohammadi Samani S, Sabet Sarvestani F, Ashraf MJ, Koohi Hosseinabadi O, Shamsian S, Ahmadi N. The healing effect of licorice on Pseudomonas aeruginosa infected burn wounds in experimental rat model. World J Plast Surg 2014;3:1-8.
8 Daryabeigi R, Heidari M, Hosseini SA, Omranifar M. Comparison of healing time of the 2nd degree burn wounds with two dressing methods of fundermol herbal ointment and 1% silver sulfadiazine cream. Iran J Nurs Midwifery Res 2010;15:97-101.
9 Franz MG, Steed DL, Robson MC. Optimized healing of the acute wound by minimizing complications. Curr Probl Surg 2007;44:691-763.
10 Peacock EE, Cohen IK. Wound healing. In: McCarthy JG, May JW, Littler JW, editors. Plastic Surgery. Saunders, Philadelphia, USA; 2005. pp. 636-9.
11 American Burn Association. Burn incidence and treatment in the US: 2000 fact sheet. Available at: http://www.Ameriburn.Org/pub/publication.htm
12 Koch E. Extracts from fruits of saw palmetto (Sabal serrulata) and roots of stinging nettle (Urtica dioica): viable alternatives in the medical treatment of benign prostatic hyperplasia and associated lower urinary tract symptoms. Planta Med 2001;67:489-500.
13 Schneider T, Rubben H. Stinging nettle root extract (Bazoton-uno) in long term treatment of benign prostatic syndrome (BPS). Results of a randomized, double-blind, placebo controlled multicenter study after 12 months. Urologe A 2004;43:302-6.
14 Chrubasik JE, Roufogalis BD, Wagner H, Chrubasik S.
A comprehensive review on the stinging nettle effect and efficacy profiles. Part II: Urticae radix. Phytomedicine 2007;14:568-79.
15 Lopatkin NA, Sivkov AV, Medvedev AA, Walter K, Schlefke S, Avdeichuk IuI, Golubev GV, Mel'nik KP, Elenberger NA, Engelman U. Combined extract of Sabal palm and nettle in the treatment of patients with lower urinary tract symptoms in double blind, placebo controlled trial. Urologiia 2006;12:9-14.
16 Huber R, Bross F, Schempp C, Gründemann C. Arnica and stinging nettle for treating burns – a self-experiment. Complement Ther Med 2011;19:276-80.
17 Alonso D, Lazarus MC, Baumann L. Effects of topical arnica gel on post-laser treatment bruises. Dermatol Surg 2002;28:686-8.
18 Manafi A, Kohanteb J, Mehrabani D, Japoni A, Amini M, Naghmachi M, Zaghi AH, Khalili N. Active immunization using exotoxin A confers protection against Pseudomonas aeruginosa infection in a burn mouse model. BMC Microbiol 2009;9:19-23.
19 Ang ES, Lee ST, Gan CS, See PG, Chan YH, Ng LH, Machin D. Evaluating the role of alternative therapy in burn wound management: randomized trial comparing moist exposed burn ointment with conventional methods in the management of patients with second degree burns. MedGenMed 2001;3:3.
20 Upadhyay NK, Kumar R, Siddiqui MS, Gupta A. Mechanism of wound-healing activity of Hippophae rhamnoides L. leaf extract in experimental burns. Evidence-Based Complementary and Alternative Medicine. 2011:659705.
21 Stipcevic T. Enhanced healing of full-thickness burn wounds using di-rhamnolipid. Burns 2006;32:24–32.
22 Rashaan ZM, Krijnen P, Klamer RR, Schipper IB, Dekkers OM, Breederveld RS. Non-silver treatment versus silver sulfadiazine in treatment of partial thickness burn wounds in children: A systematic review and meta-analysis. Wound Repair Regen 2014;22:473-82.
23 Morsi NM, Abdelbary GA, Ahmed MA. Silver sulfadiazine based cubosome hydrogels for topical treatment of burns: development and in vitro/in vivo characterization.
Eur J Pharm Biopharm 2014;86:178-89.
24 Inngjerdingen K, Nergard CS, Diallo D, Mounkoro PP, Paulsen BS. An ethnopharmacological survey of plants used for wound healing in Dogonland, Mali, West Africa. J Ethnopharmacol 2004;92:234-44.
25 Heimbach D, Mann R, Engrav L. Evaluation of the burn wound management decision. Total Burn Care. 2nd ed., Saunders, New York, USA; 2002.
26 Khoondinasab MR, Khodarahmi A, Akhoondinasab M, Saberi M, Iranpour M. Assessing effect of three herbal medicines in second and third degree burns in rats and comparison with silver sulfadiazine ointment. Burn 2014;14:24-7.
27 Klingelhoefer S, Obertreis B, Quast S, Behnke B. Antirheumatic effect of IDS 23, a stinging nettle leaf extract, on in vitro expression of T helper cytokines. J Rheumatol 1999;26:2517-22.

work_e6kasrkbczbxtbw7nhurgvpyl4 ---- EmailMe Form - Derm101 was deactivated on December 31, 2019

work_eaqigdhfnzarldmhrpqrumueqq ---- Trunk appearance perception scale for physicians (TAPS-Phy) - a valid and reliable tool to rate trunk deformity in idiopathic scoliosis

RESEARCH Open Access
Trunk appearance perception scale for physicians (TAPS-Phy) - a valid and reliable tool to rate trunk deformity in idiopathic scoliosis
Antonia Matamalas1, Elisabetta D'Agata2*, Judith Sanchez-Raya1 and Juan Bago1

Abstract
Background: Evaluation of trunk deformity by physicians in patients with idiopathic scoliosis (IS) has been considered an important part of clinical practice.
Different methods to quantify the severity of trunk deformity by external observation have been reported. A valid tool to evaluate patients' perception of trunk deformity, the Trunk Appearance Perception Scale (TAPS), is hereby validated for use by physicians (TAPS-Phy).
Methods: Cross-sectional study of patients with non-surgically treated IS. Patients were prospectively recruited. On the day of the visit, a posterior-anterior radiograph in standard position and clinical photographs in three different views (anterior, posterior and forward bending position) were obtained. Patients also completed a TAPS questionnaire (TAPS-Pat). Three different observers scored the TAPS questionnaire (TAPS-Phy), based on the digital photographs previously obtained, twice, one week apart. The angle of trunk inclination (ATRI) was also measured on the digital photographs. Inter- and intra-rater reliability were calculated with the weighted kappa coefficient. External validity was tested with the Spearman correlation coefficient between the TAPS-Phy score and the scoliosis magnitude determined from the magnitude of the largest curve (MLC), ATRI, and TAPS-Pat.
Results: Fifty-two patients (46 women; mean age 16.6 years) were included. The average magnitude of the major curve was 44°. Mean TAPS-Phy scores for the three evaluators ranged from 3.4 to 3.5, with no differences between the three means. TAPS-Phy showed good internal consistency (Cronbach's alpha coefficient 0.84). Inter-observer reliability ranged from slight to substantial (0.14 to 0.63); intra-observer reliability ranged from 0.35 to 0.99. Correlations between TAPS-Phy and ATRI (r = −0.54 to −0.75), MLC (r = −0.47 to −0.6) and TAPS-Pat (r = 0.29 to 0.34) were statistically significant (p < 0.01).
Conclusions: TAPS-Phy is a valid and reliable scale to rate a physician's impression of the severity of the deformity in patients with idiopathic scoliosis and can be useful in routine clinical records.
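The reliability coefficients reported above come from Cohen's weighted kappa, which discounts chance agreement and penalizes disagreements by their ordinal distance. A minimal pure-Python sketch for two raters on an ordinal scale; the linear weighting scheme and the toy rating vectors in the usage example are illustrative assumptions, since the text specifies only "weighted kappa":

```python
# Weighted Cohen's kappa for two raters over ordered categories.
# Weighting ("linear" here by default) is an assumption; the paper does
# not state which weighting scheme was used.

def weighted_kappa(rater1, rater2, categories, weights="linear"):
    """Return weighted kappa; `categories` lists the ordinal levels in order."""
    k = len(categories)
    idx = {c: i for i, c in enumerate(categories)}
    n = len(rater1)
    # observed joint rating proportions
    obs = [[0.0] * k for _ in range(k)]
    for a, b in zip(rater1, rater2):
        obs[idx[a]][idx[b]] += 1.0 / n
    p1 = [sum(row) for row in obs]                             # rater 1 marginals
    p2 = [sum(obs[i][j] for i in range(k)) for j in range(k)]  # rater 2 marginals

    def w(i, j):  # disagreement weight grows with ordinal distance
        d = abs(i - j) / (k - 1)
        return d if weights == "linear" else d * d

    observed = sum(w(i, j) * obs[i][j] for i in range(k) for j in range(k))
    expected = sum(w(i, j) * p1[i] * p2[j] for i in range(k) for j in range(k))
    return 1.0 - observed / expected  # undefined if expected == 0 (constant ratings)
```

Perfect agreement yields kappa = 1, while agreement no better than chance yields 0; values near the 0.14 to 0.63 range reported above would indicate slight to substantial inter-observer reliability.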
Keywords: Idiopathic scoliosis, Trunk deformity, Trunk appearance perception scale (TAPS), Reliability, Validity * Correspondence: dagata.e@gmail.com 2Vall d’Hebron Research Institut, Passeig Vall d’Hebron, 119-129, 08035 Barcelona, Spain Full list of author information is available at the end of the article Matamalas et al. Scoliosis and Spinal Disorders (2016) 11:24 DOI 10.1186/s13013-016-0085-8 Background Trunk deformity is a crucial component of idiopathic scoliosis (IS). Physicians’ impressions of the severity of trunk deformity could be of interest for the clinical record. Different methods have been used to quantify trunk deformity by external evaluators. Theologis et al. [1] assessed the validity of a Cosmetic Spinal Score: an external evaluator quantified the severity of the trunk deformity from 1 to 10. This score showed moderate inter- and intra-observer reliability; the correlation between the score and the Cobb angle was moderate (r = 0.46), whereas the correlation with the rib hump (assessed by ISIS Scan) was r = 0.63. Raso et al. [2] asked several external evaluators to rate 8 components of the trunk deformity on a scale from 0 to 50.
The correlation between an evaluator’s score and Cobb angle was 0.41. Data about inter- and intra-observer reliability were not provided. In addition, the external validity of some of these characteristics of the deformity has not been demonstrated. Finally, Zaina et al. [3] developed the Trunk Aesthetic Clinical Evaluation (TRACE) tool to evaluate four aspects of the severity of the trunk deformity: shoulder, scapula, hemi-thorax, and waist asymmetry. To assist the external evaluator in rating each of these features, the tool provides clinical photographs of patients with progressive degrees of deformity. The inter-observer reliability, assessed by the unweighted kappa coefficient, was poor and highly variable (kappa ranged from 0.09 to 0.14). The Trunk Appearance Perception Scale (TAPS) is a validated instrument to assess the trunk deformity perceived by the patient [4]. The scale includes three sets of drawings that correspond to the three views of the trunk: from the back, from the front and in forward bending position (Adams test). The patient has to choose the picture that seems most appropriate to the perception of his/her own image. A moderate correlation (r = −0.55) between the TAPS score and the radiological magnitude has been reported. We hypothesized that the TAPS scale completed by the physician while performing a physical examination could be a method to quantify and describe the severity of the trunk deformity. This research aims to validate TAPS completed by physicians (TAPS-Phy) and to compare these data with TAPS completed by patients (TAPS-Pat). Methods This is a prospective cross-sectional study evaluated and approved by the Ethics and Clinical Research Committee of Hospital Vall d’Hebron. Patients with idiopathic scoliosis consulting in the out-patient clinic of our institution who met the inclusion criteria were consecutively recruited.
Inclusion criteria for this study were: diagnosis of idiopathic scoliosis, age between 10 and 40 years, receiving non-operative treatment (either brace or observation), and patient consent to participate. The sample was stratified according to the radiological magnitude of the major curve, measured with the Cobb angle, into four groups of 13 patients: <30°, 30° to 45°, 45° to 60°, and >60°. Radiographic measures For each patient, a postero-anterior radiograph of the full trunk in the standing position was taken. An experienced clinician (author AM) took all measurements using digital software (Surgimap Spine Software® Nemaris Inc, New York, United States). The coronal Cobb angles of the proximal thoracic (PT), the main thoracic (MT) and the thoracolumbar/lumbar (TL/L) curves were measured. The magnitude of the largest curve (MLC) was also used for statistical analysis. Photographic measures Clinical photographs were taken of each patient with an F2.8 LUMIX digital camera in a standardized manner on the same day of the visit by a single trained examiner (author ED). For each of the cases, photographs from the back and front in standing position were obtained. To evaluate the transverse plane deformity, a photograph taken from the head of the patient in a forward bending position (Adams test) was also obtained. On the photographs, the angle of trunk inclination (ATRI) was measured using Surgimap software. The photographic ATRI angle was defined as the angle between a line connecting the uppermost points of the left and right posterior rib cage and the horizontal line (Fig. 1). Questionnaire All of the patients completed the TAPS questionnaire (TAPS-Pat). The TAPS scale includes three sets of drawings that correspond to the three views of the trunk: from the front, from the back and in a forward bending position (Additional file 1).
Each drawing is rated from one (worst deformity) to five (no deformity) and an average score (sum of the values of the three drawings divided by three) of between one and five is obtained. The same questionnaire was scored by the observers. Three observers with different degrees of experience in scoliosis (a rehabilitation physician, an orthopedic surgeon and a psychologist) completed the TAPS-Phy tool, scoring the clinical photographs of the patients. This procedure was performed on two occasions, one week apart. Statistical analysis We used descriptive statistics including mean and standard deviation. The non-parametric Kruskal-Wallis test was used to compare the mean TAPS scores of the three observers. Internal consistency of the TAPS-Phy overall score was tested by pooling data of the first measurement from the three observers and calculating Cronbach’s alpha coefficient. To test inter- and intra-rater reliability, the weighted kappa coefficient was calculated. According to Landis and Koch [5], the kappa coefficient agreement was considered as: slight (0.01 to 0.20), fair (0.21–0.40), moderate (0.41–0.60), substantial (0.61–0.80) and almost perfect (0.81–0.99). External validity of TAPS-Phy was tested by the Spearman correlation coefficient between the TAPS-Phy score and scoliosis MLC, ATRI and TAPS-Pat. SPSS 18.0 statistical software was used for data analysis. Statistical significance was set at p = 0.05. Results Descriptive analysis Fifty-two patients with IS were included (six men and 46 women); the mean age was 16.6 years (range ten to 37 years) and the average MLC was 44° (range 20° to 76°). The average PT was 20° ± 15; the average MT was 41° ± 16.8 and the average TL/L was 33° ± 15.5. The scoliosis pattern involved a single curve in 31 cases and a double curve in 21 cases. Mean scores of TAPS-Phy for the three evaluators were 3.4 (±0.7), 3.4 (±0.8) and 3.5 (±0.5).
These differences were not statistically significant (p = 0.6). Average pooled TAPS-Phy scores for the three evaluators were not different between single and double curves (p > 0.1). The average TAPS-Pat score was 3.2 (±0.9). In Table 1, the coefficients of agreement for each pair of observers are specified. The coefficients ranged from 0.14 to 0.63. Cronbach’s alpha coefficient, used to assess TAPS-Phy’s internal consistency, was 0.84. In Table 2, the intra-observer kappa coefficients are specified for each observer. Observer 1 had substantially higher values (kappa coefficient ranging from 0.87 to 0.99) than the other two observers. Construct validity In Table 3, the correlation coefficients of TAPS-Phy and item two of TAPS-Phy (transverse plane view) with ATRI are detailed for each observer. All correlations achieved statistical significance. The correlation coefficients between TAPS-Phy and MLC were statistically significant for all three observers (Observer 1 r = −0.6, Observer 2 r = −0.52, Observer 3 r = −0.47; p-value < 0.001). Finally, the correlations between TAPS-Phy scores and TAPS-Pat were, respectively: Observer 1 = 0.34 (p < 0.01), Observer 2 = 0.33 (p = 0.025), Observer 3 = 0.29 (p < 0.05).
Fig. 1 Angle of Trunk Inclination measured on digital photography in forward bending position. Method used to measure the angle.
Table 1 Inter-observer weighted kappa coefficient for each pair of observers, each TAPS-Phy item, and total score. All values achieved statistical significance (p < 0.05)
                 Obs 1 vs Obs 2   Obs 1 vs Obs 3   Obs 2 vs Obs 3
TAPS-Phy 1            0.42             0.31             0.14
TAPS-Phy 2            0.63             0.30             0.50
TAPS-Phy 3            0.36             0.32             0.36
TAPS-Phy total        0.50             0.40             0.40
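The reliability statistics used above, linear-weighted Cohen's kappa and Cronbach's alpha, together with the Landis and Koch agreement bands, can be sketched in a few lines of plain Python. This is an illustrative sketch, not the study's SPSS analysis, and the observer ratings below are invented for demonstration only.

```python
from collections import Counter

def weighted_kappa(r1, r2, categories=5):
    """Linear-weighted Cohen's kappa for two raters on an ordinal 1..categories scale."""
    n = len(r1)
    w = lambda i, j: abs(i - j) / (categories - 1)  # disagreement weight
    observed = sum(w(a, b) for a, b in zip(r1, r2)) / n
    c1, c2 = Counter(r1), Counter(r2)
    expected = sum(w(i, j) * c1[i] * c2[j] for i in c1 for j in c2) / n ** 2
    return 1.0 - observed / expected

def cronbach_alpha(items):
    """items: one score list per item (e.g. the three TAPS drawings), aligned by subject."""
    k = len(items)
    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    totals = [sum(subject) for subject in zip(*items)]
    return k / (k - 1) * (1.0 - sum(var(it) for it in items) / var(totals))

def landis_koch(kappa):
    """Agreement bands as used in the paper (Landis and Koch, 1977)."""
    for limit, label in [(0.20, "slight"), (0.40, "fair"), (0.60, "moderate"),
                         (0.80, "substantial")]:
        if kappa <= limit:
            return label
    return "almost perfect"

# Invented ratings for two observers over eight patients (1 = worst, 5 = no deformity)
obs1 = [3, 4, 2, 5, 3, 4, 1, 3]
obs2 = [3, 3, 2, 5, 4, 4, 2, 3]
k = weighted_kappa(obs1, obs2)
print(round(k, 3), landis_koch(k))
```

The weighted kappa discounts near-miss disagreements in proportion to how many categories apart the two ratings are, which matches the paper's stated preference for weighting agreement across major categories.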
Discussion When evaluating a patient with IS, the subjective physician’s impression of the patient’s trunk appearance may be interesting data to collect for the clinical record. We felt that this goal might be accomplished if physicians completed the Trunk Appearance Perception Scale (TAPS). TAPS is a validated tool typically used to assess the patient’s perception of trunk deformity. Our objective is to test the validity of TAPS as scored by physicians (TAPS-Phy). TAPS-Phy has good internal consistency (Cronbach’s alpha = 0.84), similar to that published for TAPS as scored by patients (Cronbach’s alpha = 0.89) (5). The inter-observer reliability ranged from 0.16 (Item 1, Obs 2 vs Obs 3) to 0.63 (Item 2, Obs 1 vs Obs 2). For the total score (TAPS-Phy total), reliability ranged from 0.4 to 0.5 between the different pairs of observers, and the degree of agreement can be considered moderate. The intra-rater reliability ranged from 0.37 (Obs 3) to 0.93 (Obs 1). For the total score, weighted kappa coefficients were 0.99, 0.43 and 0.41 respectively. Interestingly, Observer 1 reached an almost perfect agreement. As she is the most expert in the management and physical examination of patients with idiopathic scoliosis, this finding could indicate that the reliability of TAPS-Phy is influenced by the observer’s expertise, which probably warrants investigation. The use of the kappa coefficient as a statistical test to assess reliability deserves some consideration. TAPS should be considered an ordinal scale, so the kappa coefficient test is adequate to evaluate inter- and intra-observer reliability. As we were more interested in the agreement across major categories in which there is a meaningful difference, we determined the weighted kappa, which assigns less weight to agreement as categories are further apart. Moreover, the use of the kappa coefficient as a measure of observer variation in clinical practice has been questioned [6].
We could note the contradictory results for the inter-observer reliability as a sign of the difficulty of interpreting the kappa coefficient. The TAPS-Phy inter-rater kappa coefficient for the overall score ranged between 0.4 and 0.5 (moderate), whereas Cronbach’s alpha coefficient, which coincides with the intraclass correlation coefficient, was 0.84 (substantial). TAPS-Phy has been proved to be a valid test, since the correlation with the Cobb angle varies between −0.47 and −0.6. Interestingly, the observed correlation between TAPS-Pat and MLC was −0.41, slightly lower than the average observed for external evaluators. The relationship between the questionnaire and ATRI measured on clinical photography was also evaluated. The correlation with the TAPS-Phy total score was moderate (rho ranged between −0.50 and −0.58), while the correlation with TAPS-Phy item 2, evaluating the transverse plane deformity, was moderate to substantial (rho ranged between −0.54 and −0.75). Therefore, the external evaluator’s perception of the deformity in the transverse plane has a good correlation with the clinical deformity measured by photography. However, the reliability of ATRI measurement with digital photography has not been previously studied, which represents a limitation of our study. A priori, problems with the standardization of patient and camera positioning can be anticipated, potentially leading to low reliability. On the other hand, we did not have the ATRI measured with an inclinometer, which is the recommended method of measurement [7]. Finally, the correlation between TAPS-Phy and TAPS-Pat ranged from 0.29 to 0.34. Although statistically significant, this correlation was unexpectedly low, and suggests that the perceptions of trunk appearance between patients and physicians may vastly differ. Similarly, Rigo et al. also found a discrepancy between the TAPS completed by patients and that completed by their parents [8].
Also, the correlation between the clinical and radiological deformity and the external evaluator’s perception is higher than that observed for the patients. All these data probably suggest that the subjective perception of patients is influenced by several factors that are not perceived by external evaluators. The idea of an external evaluator using a reference instrument for rating trunk deformity comes from the work of Zaina et al. [3]. They designed the Trunk Aesthetic Clinical Evaluation (TRACE) tool to rate four aspects of the trunk deformity: shoulder, scapula, hemi-thorax, and waist asymmetry. For each of these areas, the tool includes clinical photographs of patients with progressive degrees of deformity. Unfortunately, they report poor and highly variable inter-observer reliability (kappa coefficient from 0.09 to 0.14).
Table 2 Intra-observer weighted kappa coefficients for each observer and each TAPS-Phy item and TAPS-Phy total score. All values achieved statistical significance (p < 0.01)
                 Observer 1   Observer 2   Observer 3
TAPS-Phy 1          0.93         0.48         0.37
TAPS-Phy 2          0.91         0.52         0.63
TAPS-Phy 3          0.87         0.56         0.35
TAPS-Phy total      0.99         0.43         0.41
Table 3 Spearman correlation coefficients between ATRI and the TAPS-Phy total score and TAPS-Phy item 2. All values achieved statistical significance (p < 0.001)
                 Observer 1 ATRI   Observer 2 ATRI   Observer 3 ATRI
TAPS-Phy 2           −0.54             −0.75             −0.68
TAPS-Phy total       −0.50             −0.58             −0.57
However, they used the unweighted kappa coefficient, which evaluates agreement regardless of the order of categories. Moreover, it is unclear whether the photos published in the initial work should be the only ones to be used or whether each separate center could use their own photos. Conversely, TAPS is easily accessible and, according to its metric properties, appears to be a reliable and valid instrument. However, the use of TAPS-Phy presents several limitations. Given the low intra- and inter-observer agreements shown in our study, we reject our hypothesis that it represents a useful system. Furthermore, the correlation between TAPS-Phy and the magnitude of scoliosis was only fair to moderate. These data indicate that the discriminant validity of the instrument is limited and question the capability of TAPS-Phy to discriminate between patients with different curve magnitudes. Conclusions While TAPS-Phy was shown to be a valid and reliable scale with promising clinical utility in rating a physician’s impression of the severity of the deformity in patients with idiopathic scoliosis, the metric properties identified in our study question its wider use as an estimator of the radiological magnitude of scoliosis. Additional file Additional file 1: TAPS scale. Presentation of the scale. (PDF 210 kb) Acknowledgments No other person aside from the authors contributed to the conception, design, acquisition of data, analysis, interpretation of data, drafting of the manuscript or revising it critically. No funding was received. A professional language editor was involved in the preparation of the manuscript. Authors’ contributions AM participated in scoring the TAPS-Phy test and measuring the photos, conceived the analysis, and drafted and critically revised the manuscript. ED acquired data, scored the test, measured the photos, analyzed the data and drafted the manuscript. JSR conceived the idea of the study, acquired the data, scored the test, measured the photos and critically revised the manuscript. JB designed the study, analyzed and interpreted the data, drafted the manuscript and critically revised it. All the authors read and approved the final manuscript. Competing interests The authors declare that they have no competing interests. Author details 1Vall d’Hebron Hospital, Passeig Vall d’Hebron, 119-129, 08035 Barcelona, Spain. 2Vall d’Hebron Research Institut, Passeig Vall d’Hebron, 119-129, 08035 Barcelona, Spain.
Received: 10 February 2016 Accepted: 2 August 2016 References 1. Theologis TN, Jefferson RJ, Simpson AH, Turner-Smith AR, Fairbank JC. Quantifying the cosmetic defect of adolescent idiopathic scoliosis. Spine (Phila Pa 1976). 1993;18:909–12. 2. Raso VJ, Lou E, Hill DL, Mahood JK, Moreau MJ, Durdle NG. Trunk distortion in adolescent idiopathic scoliosis. J Pediatr Orthop. 1998;18:222–6. 3. Zaina F, Negrini S, Atanasio S. TRACE (Trunk Aesthetic Clinical Evaluation), a routine clinical tool to evaluate aesthetics in scoliosis patients: development from the Aesthetic Index (AI) and repeatability. Scoliosis. 2009;4:3. 4. Bago J, Sanchez-Raya J, Perez-Grueso FJ, Climent JM. The Trunk Appearance Perception Scale (TAPS): a new tool to evaluate subjective impression of trunk deformity in patients with idiopathic scoliosis. Scoliosis. 2010;5:6. 5. Landis JR, Koch GG. The measurement of observer agreement for categorical data. Biometrics. 1977;33:159–74. 6. de Vet HC, Mokkink LB, Terwee CB, Hoekstra OS, Knol DL. Clinicians are right not to like Cohen’s k. BMJ. 2013;346:f2125. 7. Kotwicki T, Negrini S, Grivas TB, Rigo M, Maruyama T, Durmala J, et al. Methodology of evaluation of morphology of the spine and the trunk in idiopathic scoliosis and other spinal deformities - 6th SOSORT consensus paper. Scoliosis. 2009;4:26. 8. Rigo M, D’Agata E, Jelacic M. Trunk appearance perception scale (TAPS) discrepancy between adolescents with idiopathic scoliosis and their parents influences HRQL. Scoliosis. 2013;8 Suppl 1:O55.
work_ebozsk3wfrduvfiixxyvmw3qnq ---- Effect of breakfast omission and consumption on energy intake and physical activity in adolescent girls: a randomised controlled trial. J. Zakrzewski-Fruer, T. Plekhanova, Dafni Mandila, Yannis Lekatis and K. Tolfrey. British Journal of Nutrition 2017;118:392–400. DOI: 10.1017/S0007114517002148. Abstract It is not known if breakfast consumption is an effective intervention for altering daily energy balance in adolescents when compared with breakfast omission. This study examined the acute effect of breakfast consumption and omission on free-living energy intake (EI) and physical activity (PA) in adolescent girls.
Using an acute randomised cross-over design, forty girls (age 13·3 (sd 0·8) years, BMI 21·5 (sd 5·0) kg/m2) completed two, 3-d conditions in a randomised, counter-balanced…
work_ecph4qqs6bagta4aon6lhib6my ---- Experiencing Urban through On-Street Activity. Zalina Samadi*, Rodzyah Mohd Yunus, Dasimah Omar, Aida Fadzlin Bakri. Faculty of Architecture, Planning and Surveying, Universiti Teknologi MARA, Shah Alam 40450, Malaysia. Procedia - Social and Behavioral Sciences 170 (2015) 653 – 658. doi: 10.1016/j.sbspro.2015.01.067. AcE-Bs2014Seoul Asian Conference on Environment-Behaviour Studies, Chung-Ang University, Seoul, S. Korea, 25-27 August 2014, "Environmental Settings in the Era of Urban Regeneration". Abstract The urban economy today places a high emphasis on the urban outdoor experience. In line with the demands of the tourism economy, this study focuses on pedestrian activity and intensity. The study analyzes data from outdoor visual observation and digital photography. The objective of this study is to understand the urban context that attracts pedestrian density to the urban outdoors of Lebuh Harmoni in George Town, Penang. Retrospectively, this street reflects the harmonious living of the Malaysian ethnic groups: Malays, Chinese and Indians.
The heritage shop-house façades are a reflection of the urban heritage and the commercial and social character of the port city. © 2014 Published by Elsevier Ltd. Selection and peer-review under responsibility of the Centre for Environment-Behaviour Studies (cE-Bs), Faculty of Architecture, Planning & Surveying, Universiti Teknologi MARA, Malaysia. * Corresponding author. Tel.: 019-2179021. E-mail address: zalina_samadi@yahoo.com. Keywords: Urban heritage shop houses; multi-culturalism; urban outdoor observation; on-street cultural heritage 1. Introduction There are magnetic attractions, in terms of on-street activity and products, that are able to draw pedestrians along Lebuh Harmoni. Local citizens are more familiar with the street under the name "Jalan Masjid Kapitan Keling." The number of pedestrians contributes to the vibrancy and intensity of the study area. This vibrancy further enhances urban heritage cultural tourism despite the street's heritage status. This situation creates a positive aura in the outdoor space of an urban heritage street. The ambience encourages the street to revive commercially and to regenerate instead of deteriorating. Although there is a conflict over heritage commercialization, as highlighted by Samadi, Z., Mohd Yunus, R. & Omar, D. (2012), the pressure has had a positive impact on the street. With regard to urban planning strategy, Jakob, D. (2012)
repeatedly expresses concern about the exclusion of the local community from major events and place-promotion projects. Jakob's (2012) analysis describes festivalization as similar to eventification and arts-led revitalization, all of which promote urban culture through festivals. Unfortunately, such scenarios have tended to end with the exclusion of local people: the target audience is tourists and wealthy locals, and festival organizers create exclusive, elite events, selling expensive products that are deliberately unaffordable to residents. These disadvantages of eventification, especially the exclusion of local people, make it unacceptable. On the other hand, local identity and cultural heritage belong to local people, who therefore deserve the privilege of high-quality public amenities. The Majlis Perbandaran Pulau Pinang (MPPP), George Town World Heritage Incorporated (GTWHI), and Think City Sdn Bhd believe in a "no make-up strategy" of leaving the street "just the way it is," holding the status quo it has had as a UNESCO World Heritage Site since 2008. There has been no major intervention beyond individual grants and guidelines for shop house renovation works, letting urban observers experience the street's on-street cultural heritage activities. Against this background of urban social conflict, this research shows how local cultural heritage, such as public congregation for religious ritual, can generate economic activity and revival. Firstly, it is a local lifestyle, a commodity for outdoor street shopping, and a tourism attraction. Secondly, it is purely an expression of local urban heritage, meaning there are no costs for imported artists.
Thirdly, in cultural heritage practice, it is part of the continuum of the community heritage of Lebuh Harmoni, which means no promotion costs, since it is purely a local cultural heritage celebration. Fourthly, the richness of the street's multi-religious heritage buildings provides three major attractions for pedestrian congregation; the heritage buildings offer variety and a closely linked relationship between pedestrian movement and activity.

Why Digital Photography (DP) and Outdoor Observation (OO)? Combining direct visual observation with digital photography strengthens the research method: the two work as complementary parts of a mixed-methods design for answering the research question about a real-life phenomenon. Digital photography is limited to a particular angle of view; recording the on-street experience through visual observation therefore has high potential to fill the gaps in coverage of the study area.

2. Literature Review

2.1. Urban heritage shop houses

The architectural style demonstrates the richness of the historical shop houses of Lebuh Harmoni. In terms of façade treatment, the street shares almost similar physical qualities with Lebuh Armenia, Lebuh Chulia, and Lebuh Pantai in George Town, Penang. The outstanding tangible heritage value lies in the physical attributes of the surviving early shop houses of George Town; the urban heritage shop houses are living monuments of tangible cultural heritage. The close similarity in image and identity between the shop houses of Pulau Pinang and Melaka is due to shared colonial influences on the historic port cities, with features drawn from British colonial, European, Asian, and multi-ethnic architecture. High local awareness of, and appreciation for, the existing shop houses makes the conservation of the historical shop houses acceptable.
The built heritage is well taken care of and remains in good condition. Sustaining urban livability should be part of the effort to pass the essence of this physical and cultural heritage on to future generations.

2.2. Multiculturalism

Besides the shop house façades, the outstanding universal value of Penang is a quality of multiculturalism rarely found elsewhere in the world. Beyond their tangible heritage, the heritage streets of George Town, Pulau Pinang are truly rich in intangible heritage: the local way of life continues the trading lifestyle of the earlier port and the fortress of Fort Cornwallis, and the port activity inherits Asian and European sub-cultures, Indian, Chinese, Malay, and European. The street is one of the most vibrant streets for cultural heritage, alongside Lebuh Pantai and Lebuh Chulia. The street name Lebuh Harmoni denotes a harmonious balance among religions, and the public has accepted the street as a melting pot of cultural heritage activities: it holds a mosque and Hindu and Buddhist temples, fronted by rows of shop houses with varied offerings. This combination of religion and commerce keeps the heritage buildings functioning well and lively, and the quality of the urban heritage outdoors adds value to the street's unique identity and authenticity. The existing multicultural heritage supports the pursuit of festivalization as a planning tool in city planning (Markusen and Schrock 2006; Markusen and Gadwa 2010); these authors claim that local urban citizens do benefit from festivals promoting cultural heritage within the physical setting of the heritage shop houses.

2.3.
On-street intangible cultural heritage

Intangible cultural heritage is defined by UNESCO as including oral traditions, performing arts, and ritual. The street has a richness of cultural heritage, a lively character, and a high level of natural surveillance; because of these qualities, it enjoys strong revitalization combined with peacefulness and attractiveness. This street is undoubtedly an excellent living testimony: its dual current functions as a religious and shopping attraction validate its Outstanding Universal Value (OUV), which supported the street's inscription by UNESCO in 2008 as part of a World Heritage Site. Daily activity at Masjid Kapitan Keling begins as early as five o'clock in the morning, and the morning call to prayer kick-starts commercial activity along other parts of the street. Both temples begin their morning rituals too. Almost all the religious buildings and shop houses start with morning cleaning before operation; after the cleaning session, a quick breakfast precedes shop opening at 09.00 am. By then the spirit of the urban outdoors and on-street activity has begun for the day.

2.4. Urban outdoor observation

"Outdoor" is defined here as the space outside the shop house interiors, covering the "kaki lima" (five-foot verandah way) and the open space between heritage shop houses; it includes landscaped areas, the street, walkways, street furniture, and the end-users of the street. Observation means descriptive observation of verbal and non-verbal behavior. Conducting an outdoor observation involves not only sight but all five senses: seeing, hearing, smelling, tasting, and touching. In this study, the observation covered pedestrian and traffic behavior, façade character, color, composition, on-street activities, density, and vibrancy within the coverage area of the urban cultural phenomenon.

3.
Methodology

There are three basic research paradigms in conducting research: positivism (the quantitative, scientific approach), interpretive research, and critical science. In this qualitative study, outdoor observation falls into the category of interpretive research. The study employed unobtrusive data collection, using manual on-street counts to record the intensity of pedestrians passing by and using the street; the observation aimed to identify a typology of pedestrian activity. Manual counts taken at selected observation points at the beginning, middle, and end of the street were checked against the digital photography to validate the results. During data collection, the researcher stopped and walked at the beginning, middle, and end of the street. The data collection sequence began at the entry to Lebuh Harmoni, passed the junction with Lebuh Chulia, and proceeded to Lebuh Pantai. A visual radius of twenty feet (six meters) was set as the limit for controlling the quality of the viewing angle, and the researcher recorded on-site intensity and activity on an enlarged-scale map of the street.

4. Findings

4.1. Activity along Lebuh Harmoni

Based on the data collected for this study, there are six main activities in Lebuh Harmoni; the following table presents them. The main activities, which line the heritage shop houses, are indoor staying and ritual, indoor shopping, street shopping, and outdoor entertainment. Indoor activity has a strong influence on outdoor activity; indeed, the overwhelming indoor activities generate the outdoor ones. The weekly Friday prayer at Masjid Kapitan Keling attracts extra crowds to the street, uniting the local community, pedestrians, and visitors from the nearby area.
The indoor activity attracts crowds and hence creates another layer of vibrancy outdoors on the street. Apart from religious activity, street shopping and recreation are the other major activities. Trishaw, motorbike, and bicycle rides are enjoyable activities in the morning and evening. In addition, night entertainment is lively, since Lebuh Harmoni offers motels, hotels, and outdoor dining facilities. The following table presents the timeline and the related activities.

Table 1. The activity pattern and the outdoor space of the Masjid Kapitan Keling

  Early morning   0400-0800   Accommodation and ritual
  Morning         0800-1200   Indoor shopping
  Afternoon       1200-1600   Street shopping
  Evening         1600-2000   Indoor ritual and indoor shopping
  Early night     2000-2400   Street shopping
  Midnight        2400-0400   Accommodation and outdoor entertainment

The interaction between indoor and outdoor activity is proportionate: whenever the indoor space is full, pedestrians occupy the outdoor areas of the shop houses or religious buildings. They move along the street from outside to inside and vice versa, making the interplay between indoor and outdoor active and robust. Even though some indoor products are duplicated by street sales during Friday prayer, this does not diminish the crowd's attention; pedestrians from the neighborhood and nearby offices or shop houses simply grab basic items from the street stalls. The heritage street's shoppers enjoy the crowd and appreciate the positive aura of the on-street market, and pedestrians also accept seeing, and being part of, the crowd as visual entertainment. Besides the Friday crowds, tourists in Penang enjoy on-street dinners and on-street sales. They feel a "sense of unity without worry" as part of the shared urban pleasures.
Their appreciation of outdoor freedom and of the street's views and vistas is observable from their gestures.

4.2. Intensity of pedestrians, including visitors, shop owners, and street vendors

With reference to the tables above, a low-distribution pattern, with an intensity of one to five persons per ten square meters, was detected in the early morning along the walkway of the study street; those passing along the street's sides were workers and tourists. Intensity is lowest in the early morning, possibly because vehicular traffic runs in both directions and limits pedestrian movement. The highest intensity occurs in the afternoon, during lunch time. The following figure illustrates the street activity typology against the timeline.

Fig. 1. Distribution of intensity versus timeline

5. Discussion

The obvious change in pattern is from morning to evening. The change relates to pedestrian movement to and fro between one shopping lot and another, most probably driven by high interest in the displayed products. Movement is more active, with high pedestrian intensity from one shop lot to another, on Fridays and during the lunch-time market period as people continue their shopping activities.

6. Conclusion

The active outdoor activities revive the legendary cultural heritage activity of the historic port city. The heritage street has never lost its original function as a melting pot of cultural heritage events. Lebuh Harmoni remains a "harmonious-balanced street" blending the Muslim, Buddhist, and Hindu communities of Malaysia, and the vibrancy of each religious activity contributes greatly to the revitalization of the other nearby streets. The encouragement from the local authority through physical enhancement of the
heritage shop houses and initiatives for improvement further enhances the street's livability and vibrancy. The current trend toward the eventification of place, as highlighted by Jakob (2012) with respect to promotion, does not apply to this street. Growing interest in urban heritage environments promotes place identity as a new commodity for international tourism, yet here the attraction rests purely on the local identity of the core zone of George Town, Pulau Pinang. In conclusion, Pulau Pinang is truly "the pearl of the orient," with the activity of its outdoor living room relying heavily on the local culture of pedestrian density and creative activities. It is hoped that this study will further improve public amenity, in terms of urban outdoor experience and quality, at Lebuh Harmoni in George Town, Pulau Pinang.

Acknowledgement

The authors dedicate their acknowledgement to the Research Management Institute (RMI) of the Universiti Teknologi MARA for the grant. It was an unforgettable experience to be on the street to experience its events and daily routine, and a great opportunity to be part of the pedestrians and learning tourists in Lebuh Harmoni while conducting this research. Last, but not least, deep appreciation goes to all the research team members for their time and energy in this research and writing; without the full support of the team and RMI, the success of this research would not have been possible.

References

Abeer Elshater. (2013). Towards new concept of new urbanism in Egypt. Asian Journal of Environment-Behaviour Studies, 4(12).
Creswell, J.W. (2009). Research design: Qualitative, quantitative, and mixed methods approaches. Thousand Oaks, CA: Sage Publications.
Creswell, J.W., & Clark, V.L.P. (2007). Designing and conducting mixed methods research. Thousand Oaks, CA: Sage Publications.
Doreen Jakob. (2012).
The eventification of place: Urban development and experience consumption in Berlin and New York City. European Urban and Regional Studies, 20(4). http://eur.sagepub.com/content/20/4/447
Gibson, L., & Stevenson, D. (2004). Urban space and the uses of culture. International Journal of Cultural Policy, 10, 1–4.
Grodach, C., & Loukaitou-Sideris, A. (2007). Cultural development strategies and urban revitalization. International Journal of Cultural Policy, 13, 349–370.
Markusen, A., & Gadwa, A. (2010). Arts and culture in urban regional planning: A review and research agenda. Journal of Planning Education and Research, 29, 379–391.
Markusen, A., & Schrock, G. (2006). The artistic dividend: Urban artistic specialization and economic development implications. Urban Studies, 43, 1661–1686.
Nor Haslina Jaafar, A. Bashri Sulaiman, & Shuhana Shamsuddin. (2013). Landscape features and traditional character in Malaysia. Asian Journal of Environment-Behaviour Studies, 4(13).
R. Siti Rukayah, Bharoto, & Abdul Malik. (2013). Conservation concept on areas with overlapping character. Asian Journal of Environment-Behaviour Studies, 4(13).
Rickinson, M., Dillon, J., Teamey, K., Morris, M., Choi, M. Y., Sanders, D., & Benefield, P. (2004, March). A review of research on outdoor learning. National Foundation for Educational Research and King's College London: UK.
Roslina Shapawi, & Ismail Said. (2013). Application of Rasch model in constructing walkability indices for urban neighborhood area. Asian Journal of Environment-Behaviour Studies, 4(12).
Saelens, B., Sallis, J., & Frank, L. (2003). Environmental correlates of walking and cycling: Findings from the transportation, urban design, and planning literatures. Annals of Behavioral Medicine, 25(2), 80–91.
Samadi, Z., Omar, D., & Mohd Yunus, R. (2012). On-street visual analysis at Jalan Hang Jebat, Melaka. ScienceDirect: Online Procedia of Environmental Behavioural Studies. http://www.sciencedirect.com
Song, Y.
& Knaap, G.-J. (2004). Measuring urban form. Journal of the American Planning Association, 70(2), 210–225.

In-Field Digital Photography and the Curation of Associated Records: Not All Prints Are Created Equal

Michelle K. Knoll and A. Carver-Kubik

ABSTRACT

With the advent of commercially available digital cameras in the late 1990s, resulting in the near-exclusion of analog photographic prints today, most archaeological repositories around the world have a mix of analog and digital photographic prints. That ratio is increasingly moving toward digital print processes, of which there are several types. To minimize the loss of image quality, collection managers must become familiar with the unique curation challenges of photographic prints from digitally created images. Likewise, creators of digital content must be aware that choices made when selecting a print process for reposit will have a direct effect on image and print permanence. Site photographs are critical evidence of archaeological activity, and so the preservation of digital prints is in the interest, and is the responsibility, of collection managers and archaeologists alike.

Keywords: collections management, digital prints, conservation

Resumen (translated from the Spanish): With the arrival of commercial digital cameras at the end of the 1990s, analog or traditional photographic printing was almost entirely eliminated. Archaeological repositories around the world likely already hold a mix of analog and digital photographic prints. Digital print processes are certainly on the rise, so it is important to understand the various processes. Collection managers must learn to confront the unique conservation challenges of photographic prints made from digital images in order to minimize the loss of image quality.
Likewise, creators of digital content must be aware when selecting a print process that their choice will have a direct effect on image and print permanence. Site photographs are critical evidence of archaeological activity; the conservation of digital prints is therefore in the interest, and is the responsibility, of collection managers and archaeologists alike. Palabras clave (keywords): collections management, digital prints, conservation

Up until the mid- to late 1990s, when the first consumer digital cameras were introduced, photographic collections in archaeology repositories mainly consisted of analog photographic prints (photographs made from chemically light-sensitive materials), such as an array of nineteenth-century processes and twentieth-century black/white and chromogenic (color) prints, negatives, and slides. Today, archaeological repositories around the world likely curate hard-copy photographs produced from a born-digital image (i.e., digital prints) in their legacy collections or may even continue to require prints to supplement the digital image file. From the perspective of best practices for preserving visual information and photo documentation, having physical prints may be the optimal strategy for your institution. For archivists, born-digital records preserved in digital form are considered to be the original copy (Millar 2018:167–170). However, a paper copy can be insurance against disaster if staff are not trained in digital curation or do not have the necessary financial resources to curate digital images for the long term. All collections, whether they include objects, born-digital or hard-copy photos, or some other medium, are susceptible to agents of deterioration (see Meister 2019). Born-digital photos, although not the focus of this article, make up an increasingly sizable part of any archaeological project.
Some repositories have in-house trained personnel, the accompanying financial resources, and appropriate facilities, including the necessary hardware and software, to care for digital records and photos; they are able to handle the migration, backup, and safe electronic storage of such records. Most repositories, however, do not have these resources. The Heritage Health Information Survey Report released in February 2019 noted that "nearly two-thirds of archives reported damage or loss to their collections due to obsolete equipment causing loss of access to born digital information" (Institute of Museum and Library Services 2019). Many have rightfully argued that a "digital curation crisis" is slowly brewing and not receiving enough attention from the professional community (e.g., Eiteljorg 2004; Kintigh and Altschul 2010; Sullivan and Childs 2003:37–38).

HOW-TO SERIES. Advances in Archaeological Practice 7(3), 2019, pp. 302–310. Copyright 2019 © Society for American Archaeology. DOI: 10.1017/aap.2019.17
PRINT PROCESSES AND CONSIDERATIONS FOR LONG-TERM CURATION OF DIGITAL PRINTS If you curate digital prints in your repository or if you are an archaeologist who is required to reposit prints as part of the associated records that you generate with an archaeological pro- ject, you should be aware that digital prints can have permanence issues under certain storage conditions that differ from the per- manence issues affecting analog chromogenic prints. Some digital prints will fail under certain conditions, whereas others fare well under most conditions. The most important factors for cre- ating and maintaining long-lasting digital prints are (1) using printing processes and materials required for making a good- quality print, (2) having the appropriate storage enclosures, and (3) ensuring compatibility between the digital print and repository storage conditions. There are four main print processes used in digital photography: digitally exposed chromogenic, inkjet, digital electrophotography, and dye diffusion thermal transfer (D2T2; Table 1). It is likely that some combination of these processes is present in an archae- ology repository that curates associated records. FIELD ARCHAEOLOGISTS AND THE CREATION OF DIGITAL PRINTS The permanence of digital prints depends both on those who cre- ate the prints and those who care for them. If a repository requires print copies as part of the submission guidelines, field archaeolo- gists and collections staff should work together early in the project to discuss the repository’s digital print process, labeling, and stor- age enclosure requirements and to determine if the repository accepts or requires both the digital files and digital prints that the archaeologists create. It is also important for the field archaeologist to consider the curation and other fees that are associated with the long-term care and preservation of both the digital files and digital prints. 
These considerations should be addressed during the prefield planning process, and appropriate funding for curation should be built into the overall project budget (Childs and Benden 2017). If the repository does not require the digital file, then it is the archaeologist's responsibility to ensure curation of the digital version in another digital archive, such as tDAR.

The first consideration in creating digital prints is the resolution of the original image file, which will determine, in part, the quality of the printed image and the size of the print. The general rule of thumb for printing digital images is that the file should be 300 pixels per inch (ppi) and the dimensions should be the same size as the final print (Johnson 2004:42). For example, the image file for a 4 x 6-inch print should be 300 ppi and 4 x 6 inches.

There are a variety of acceptable methods for printing digital photographs, including instant photo kiosks, in-store or online photo-processing companies, and office desktop photo printers. These methods may produce prints using some or all of the current digital printing processes, each of which has different image quality and preservation concerns. Therefore, it is important to inquire about the printing technology before submitting a print order or making a print. You should also refer to the repository's submission guidelines before selecting a processing technology because they may dictate which type you must use. Instant photo kiosks are fast, easy, and economical. They typically produce D2T2 prints, but inkjet printers are also available. In-store or online services are also easy and fairly inexpensive, and they use all the current digital printing processes. Digitally exposed chromogenic and inkjet processes can produce a good image quality but have preservation challenges under certain environmental conditions.
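The 300 ppi rule of thumb reduces to simple arithmetic; a minimal illustrative sketch (the helper name is ours, not from the article) shows how to check whether an image file is large enough for an intended print size:

```python
# Illustrative only: compute the minimum pixel dimensions an image file
# needs to print at a given physical size, using the 300 ppi rule of thumb.

def min_pixels(width_in, height_in, ppi=300):
    """Return (width_px, height_px) needed to print at the given size in inches."""
    return (round(width_in * ppi), round(height_in * ppi))

def file_is_large_enough(file_px, print_in, ppi=300):
    """True if a file (w_px, h_px) meets the ppi rule for a print (w_in, h_in)."""
    need_w, need_h = min_pixels(*print_in, ppi=ppi)
    return file_px[0] >= need_w and file_px[1] >= need_h

# A 4 x 6-inch print at 300 ppi needs a 1200 x 1800 pixel file.
print(min_pixels(4, 6))                          # (1200, 1800)
print(file_is_large_enough((1200, 1800), (4, 6)))  # True
print(file_is_large_enough((800, 1200), (4, 6)))   # False: would print soft
```

A file that falls short of these dimensions can still be printed, but the printer driver will interpolate, which is one source of the image-quality loss the article warns about.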
D2T2 prints have an acceptable image quality, but lower than that produced by chromogenic or inkjet processes, regardless of the size of the image file. Digital electrophotographic prints have the lowest image quality. Most home or office desktop photo printers are inkjet printers, but there are also electrophotographic and D2T2 desktop photo printers on the market. Creating digital prints from desktop inkjet printers seems simple but can be much more challenging than simply pressing "print": there are many considerations and a bit of a learning curve. The printer settings must be perfectly matched to the type of file being printed, and to the size and type of paper being used, so that the printer can apply the correct colors, mixture of colors, and volume of ink to the paper (Image Permanence Institute 2009a:5; Johnson 2004:247). To increase the likelihood of producing a long-lasting print, it is advisable to use original equipment manufacturer (OEM) products (i.e., the ink cartridges that come with the printer and paper made by the same manufacturer for that printer). The importance of compatibility between printer, ink, and paper support cannot be overstated. Wilhelm Imaging Research predicted, using accelerated aging processes, that prints produced by name-brand printers with OEM inks and papers will often last significantly longer than those produced with third-party products under the same conditions (Wilhelm 2007:Table 1).

STORAGE ENCLOSURES FOR DIGITAL PRINTS

Print stability is directly related to the handling of the print and the enclosure type, in addition to environmental conditions. Pigment-based inkjet prints are more prone to abrasions and scratches than other prints because the image is generally closer to the surface (Burge 2014:5; Image Permanence Institute 2004:5; Jürgens 2009:227). For this reason, polyester enclosures are recommended.
For other types of prints, enclosures can be either plastic or paper, as long as they meet the requirements of International Standard ISO 18902, Imaging materials—Processed imaging materials—Albums, framing and storage materials. Under this standard, all materials shall have passed the Photographic Activity Test (PAT). Paper-based materials also need to be acid-free and lignin-free with a Kappa number of 7 or lower. Buffered papers should have an alkali reserve of at least 2% calcium carbonate. Storage materials with a colorant should also have passed the bleed test. Polyvinyl chloride (PVC) should never be used for photographic prints (Holmes 2017); its use is particularly problematic for digital electrophotographic prints because the plasticizers in the sheets can soften the toner and pull it from the surface of the print (Digital Print Preservation Portal 2018a). Polyethylene enclosures, which are effective against moisture absorption, do not prevent damaging pollutants from landing on prints. Polyester and polypropylene enclosures are better choices (Fenn 2003:33). Blocking (prints adhering to one another or to storage enclosures) can result in color transfer, delamination, and other types of surface damage (Rima and Burge 2009). Allow new inkjet prints to fully dry for 24 hours after printing before placing them in enclosures, and ensure image sides do not touch each other to minimize the damage caused by blocking (Image Permanence Institute 2018:20; Jürgens 2009:228).

Recommendations for Field Archaeologists

• Communicate with the repository staff or review the repository's submission guidelines before you create digital prints as part of a new archaeological project. The repository may not accept digital prints or, if it does, may not be able to curate all print processes that are available.
• If the repository does not have the capacity to care for born-digital records and photos, make arrangements with a digital archive for preservation of those records.
• Whether using a photo kiosk or an in-store or online printing service, know what type of print is being made. Report the make and model of the printer used to the repository. This "metadata" will be useful to the repository for collection management and curation purposes.
• If using a desktop printer, educate yourself on how to make good prints (Horenstein 2011; Johnson 2004; Salvaggio et al. 2009) and use OEM materials.
• Follow print enclosure recommendations per the repository guidelines and best practices.

REPOSITORY STAFF AND THE CURATION OF DIGITAL PRINTS

If you have digital prints in your legacy collections, you should be familiar with the various print processes, know how to identify these processes, and understand print storage requirements. If you accept digital prints, your repository guidelines should ensure compatibility between the print process and your storage-room conditions. Fortunately, the availability of multiple processes provides you with an opportunity to exert control over what is accessioned into your collections. Identifying the printing processes used to create the digital prints in your collection will allow you to mitigate their potential deterioration under suboptimal storage conditions. Each process has its own key identifying features that can be seen using different lighting techniques and magnification. To learn a step-by-step approach for print identification, view the process identification videos on Graphics Atlas, www.graphicsatlas.org, or the free webinars at www.imagepermanenceinstitute.org/process-id-webinars. Magnification is necessary to identify digital prints. There are several good-quality and inexpensive loupes and pocket microscopes

TABLE 1. Digital Print Processes Most Commonly Found in Archaeological Repositories.
Processes Description Digitally exposed silver halide The first practical applications of digital printing technology exposed born-digital images to analog color photographic papers. A computer signal activates an exposure unit (laser or light emitting diodes) to expose light-sensitive photographic paper, which is then chemically processed to produce an image composed of dyes. The paper support is resin coated. Digitally exposed chromogenic prints are still commonly made by online and in-store printing services. Inkjet Many photo-quality printers today use inkjet technology. A computer signal tells the printer to eject ink onto a substrate. Inks are typically aqueous (water based) and the colorants can be pigment, dye, or a combination of both. Papers are typically coated with an Image Receiving Layer (IRL) of which there are two types: polymer (swellable) and mineral (porous), with the mineral IRL being more common today. Resin coated (RC) papers are common and resemble analog chromogenic (color) papers. Digital electrophotography In digital electrophotography (i.e., laser prints), the computer signals a light or laser to selectively discharge the electrical charge on an photoconductor drum, charged toner particles (pigmented thermoplastic resin) are dusted onto the drum, then transferred and fused to a substrate. Toner colorants are usually pigments. Papers can be coated or uncoated; the majority of office and desktop printers use uncoated paper, while digital presses used by online services more commonly use coated papers. Dye diffusion thermal transfer With dye diffusion thermal transfer (also known as D2T2 or dye sublimation), dyes are released from a donor ribbon onto a receiving paper when exposed to heat, which is activated by the computer signal. The paper support is a resin coated paper with a plastic (usually polyester) image recieving layer. 
This technology is most often used in store photo kiosks where images can be printed in a matter of minutes, but it is also available with one-hour services and with a desktop printer. HOW-TO SERIES 304 Advances in Archaeological Practice | A Journal of the Society for American Archaeology | August 2019 http://www.graphicsatlas.org http://www.imagepermanenceinstitute.org/process-id-webinars available on the market that will be sufficient for this purpose. Loupes are usually 10x magnification, and pocket microscopes should be 60x–120x magnification. Figures 1, 2, and 3 illustrate each digital print process at increasing levels of magnification. Digitally Exposed Chromogenic Digitally exposed chromogenic prints have a resin coated (RC) paper support that feels slick and plastic-like on the back. Usually, the name of the manufacturer or product is printed on the back; sometimes the image file name is also printed there. The color of a new print should look balanced and correct. Older prints may exhibit a shift in color, appearing to have a magenta or cyan cast and yellow highlights. The surface sheen of most chromogenic prints is glossy with an even gloss all over. Some have a slight applied texture that will alter the gloss. With low magnification the image will appear continuous in tone, meaning there is no pattern to the image, such as dots. With high magnification, small pinpricks of color called dye clouds may be visible. Digitally exposed chromogenic prints may also have a slight pattern to the dye clouds: for example, they may line up in rows or have a slight grid pattern. Inkjet An inkjet print with a good image quality looks and feels like a true analog photograph. Inkjet prints typically have a “photo” support of either an RC or a baryta paper. An RC support is slick and has a plastic feel to the back. Prints may have dye-based or pigment-based inks or a combination of both. 
There are two types of image-receiving layers (IRLs)—polymer and porous—that hold the ink image. The back of RC papers with a polymer IRL usually feels coarse like sandpaper because of the presence of anti-blocking agents that prevent them from sticking to each other. The product or manufacturer name may be printed on the back. Inkjet prints may have several surface characteristics that can be observed at the surface view. Prints on photo papers are usually glossy or semiglossy. Prints made with dye-based inks have an even gloss, whereas prints made with pigment-based inks may exhibit a difference in gloss between the shadows and the highlights, called differential gloss. Pigment-based inkjet on a glossy surface may also exhibit bronzing, in which the gloss in the shadows takes on a bronze color. Finally, at the magnification view, inkjet prints are composed of a random pattern of small dots (called an FM screen). The dots are usually cyan, magenta, yellow, and black, but other secondary colors may be visible if the dots overlap or an expanded ink set was used.
Dye Diffusion Thermal Transfer
These prints may look and feel like analog color photographs but likely have a lower-quality image. They have an RC paper support and may have the product or manufacturer's name printed on the back. They are glossy with an even gloss. With magnification they are extremely diffuse and very difficult, if not impossible, to focus on. A blurry grid pattern may be visible.
Digital Electrophotography
Electrophotographic prints are made up of dots similar in size to those on lithographic prints. Therefore, the image may be relatively low quality. The paper may be uncoated plain office paper or a coated support with a coating on one side or both. Prints often exhibit differential gloss, particularly on glossy papers. With magnification, the image is made up of a regular pattern of dots.
However, the pattern of the dots depends on two variables: (1) printer design, because there are variations within digital electrophotographic technologies, and (2) toner type: dry or liquid. Most desktop printers and some online print-on-demand services use dry toner. Dry toner dots appear rough and dusty, and the dots are in a regular pattern, usually of rosettes (AM screen) or parallel lines. Color prints have cyan, magenta, yellow, and black dots or lines. Some online print-on-demand services use liquid toner. These dots are fairly smooth and likely have a rosette pattern as well.
FIGURE 1. Four different digital print processes of the same image. Notice the variations in image quality. Images courtesy of the Image Permanence Institute.
FIGURE 2. 10x magnification (loupe): chromogenic and D2T2 prints are continuous in tone, inkjet prints are made up of very small dots, and electrophotographic prints are made up of large dots. Images courtesy of the Image Permanence Institute.
FIGURE 3. 50x magnification (pocket microscope): dye clouds are visible in chromogenic prints, D2T2 prints are very diffuse with a slight grid pattern, inkjet dots are clearly visible, and electrophotographic dots are large and dusty, indicating it is a dry toner electrophotographic print. Images courtesy of the Image Permanence Institute.
CARING FOR DIGITAL PRINTS
The primary forces that can cause deterioration of digital prints are heat, high relative humidity (RH), airborne pollutants, and, when on display, ultraviolet (UV) radiation (Burge et al. 2010; Fenn 2003; Image Permanence Institute 2004, 2009b; Jürgens 2009; Rima and Burge 2009; Venosa et al. 2011).
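The magnification cues described in Figures 2 and 3 (continuous tone versus dots, dot size, dustiness, diffuseness) can be condensed into a toy decision helper. This sketch and its feature names are our own simplification, not a published identification key; actual identification should follow the Graphics Atlas workflow cited above.

```python
# Toy decision helper condensing the loupe/microscope cues described above.
# Inputs are yes/no observations; the branching is our simplification.

def identify_print(continuous_tone, dots_large, dots_dusty, very_diffuse):
    """Guess a digital print process from magnification observations."""
    if continuous_tone:
        # Chromogenic and D2T2 both look continuous at 10x; at 50x,
        # D2T2 is extremely diffuse while chromogenic shows dye clouds.
        if very_diffuse:
            return "dye diffusion thermal transfer"
        return "digitally exposed chromogenic"
    if dots_large:
        # Electrophotographic dots are large and patterned (AM screen);
        # dry toner looks rough and dusty, liquid toner fairly smooth.
        if dots_dusty:
            return "electrophotographic (dry toner)"
        return "electrophotographic (liquid toner)"
    # Very small dots in a random pattern (FM screen) indicate inkjet.
    return "inkjet"
```

A print that is continuous in tone at 10x but hopelessly diffuse at 50x, for example, would be flagged as dye diffusion thermal transfer.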
Since 2007, the Image Permanence Institute (IPI) has been testing the permanence of born-digital prints using accelerated aging methods under these conditions. Results of these tests have led researchers at IPI to reach five significant conclusions: (1) digitally printed photographs are highly variable in their sensitivity to deterioration forces, (2) cold storage significantly reduces deterioration rates in inkjet and chromogenic prints, (3) prints made with pigment inkjet can be very sensitive to abrasion, (4) inkjet dyes can bleed when exposed to high humidity, and (5) prolonged exposure to the primary forces that can cause deterioration can cause image fading and support damage in both dye- and pigment-based inkjet photographs with either a polymer or porous IRL (Burge 2014:1). These changes have been observed in chromogenic prints for decades and are now being observed in digital prints as well. However, nonchromogenic digital prints tend to fare better in dark storage than do chromogenic (both analog and digitally exposed) prints. Caring for digital prints can be less costly than caring for chromogenic prints, depending on the print process and your storage environment. Many of the storage requirements for chromogenic prints (i.e., freezing temperatures) will also benefit digital prints but are not required in every instance. Cooler temperatures generally slow down chemical deterioration. Recommended temperature set points (Table 2) should be considered as guides for temperature ranges.
Digitally Exposed Chromogenic Prints
Digitally exposed chromogenic prints have the same deterioration issues as analog chromogenic prints. While typically of excellent image quality, all chromogenic prints will show fading or discoloration after about 50 to 100 years in dark storage at room temperature. Exposure to light and heat will accelerate that degradation (Reilly 1998).
Storage temperatures should be, at a minimum, 4°C to meet minimum recommendations for long-term storage, with an RH of 30%–50% (Burge 2014:Table 3; Digital Print Preservation Portal 2018b:Table 3).
Inkjet Prints
The three main changes observed during accelerated aging tests of inkjet prints are fading of the colorants, migration or bleeding of dye colorants, and yellowing and embrittlement of the paper substrate. Color fading, consisting of overall fading of an image and a hue shift, is caused by high temperature, light, and pollutants (Figure 4). Ozone pollution is especially problematic for inkjet prints, which should not be exposed to the open air for extended periods (Burge et al. 2010:5). Inkjet prints are more susceptible to warm temperatures than other digital processes. Storage temperatures should be, at a minimum, 4°C to meet minimum recommendations for long-term storage (Burge 2014:Table 3; Digital Print Preservation Portal 2018b:Table 3).
TABLE 2. Temperature and Relative Humidity Recommendations for Born-Digital Prints. Note: Adapted from Burge (2014:Table 3).
IPI recommendations for relative humidity range from 30% to 50% for all photographic prints. High humidity can cause mold growth; ferrotyping (softening of the print surface that causes an uneven degradation of the glossy surface); blocking (prints adhering to one another or to storage enclosures); and dye migration. Dye migration of dye-based inkjet under conditions of high RH will result in loss of clarity; color fringing (the bleeding of a single colorant in a multiple-color image, in which the bled color appears strongly at the edges); and migration of dyes across the surface (Figure 5) or onto other prints in a stack (Digital Print Preservation Portal 2018a; Image Permanence Institute 2009b:2). Dye migration is unique to dye-based inkjet prints.
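The storage set points cited from Burge (2014:Table 3) and the Digital Print Preservation Portal (2018b) can be consolidated into a small compliance check. The thresholds below encode those cited recommendations (4°C or colder for chromogenic and inkjet prints, 20°C for electrophotographic and D2T2 prints, 30%–50% RH for all); the function itself is our illustration, not part of either source.

```python
# Maximum recommended storage temperature (°C) per process, as cited
# in the text from Burge (2014:Table 3); RH of 30-50% applies to all.
MAX_TEMP_C = {
    "chromogenic": 4,
    "inkjet": 4,
    "electrophotographic": 20,
    "d2t2": 20,
}

def storage_ok(process, temp_c, rh_percent):
    """Return True if conditions meet the minimum long-term recommendation."""
    return temp_c <= MAX_TEMP_C[process] and 30 <= rh_percent <= 50
```

For example, a 18°C storage room at 40% RH passes for electrophotographic prints but fails for inkjet, which is exactly the trade-off the repository recommendations below weigh against image quality.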
A dye image can be significantly altered in as little as one to seven days at 75%–80% RH and 25°C, depending on the IRL type (Burger and Burge 2018). Conversely, humidity below 30% can cause surface coatings to become more fragile. Yellowing may occur in the support, caused by changes in the optical brightening agents (if present) with exposure to pollutants (Burge 2014:2), heat, and humidity (Nishimura et al. 2013). In general, inkjet prints made with pigment-based inks are more resistant to poor environmental conditions and are more lightfast. Prints made with dye-based inks are more susceptible to image fading and image bleeding. Papers with a polymer IRL usually, but not always, have a dye-based image and are more resistant to abrasions and fading from pollutants but are sensitive to image bleeding when exposed to high humidity (Burge 2014:8). Pressure-sensitive adhesives (PSAs) and poor-quality paper should never be used with inkjet prints because they can cause yellowing in a matter of months (Burge 2014:5). Avoid storing inkjet prints in direct contact with traditional prints because some inkjet prints may induce yellowing of analog photographic prints, and analog prints may abrade the more sensitive born-digital prints (Burge 2014:4).
FIGURE 4. Inkjet prints showing examples of (a) the original print and (b) after fading and hue shift. Images courtesy of the Image Permanence Institute.
FIGURE 5. Inkjet prints showing examples of (a) the original print and (b) after dye migration. Images courtesy of the Image Permanence Institute.
Digital Electrophotographic Prints
Digital electrophotographic prints are generally considered stable—or as stable as the paper on which they are printed. Poor-quality paper may become yellow and brittle with high temperatures and high relative humidity.
In contrast, the colorant in black toner is carbon black and is very stable. Color toners vary between manufacturers and, as such, are less predictable, but overall they tend to be fairly stable. Digital electrophotographic prints are sensitive to heat and pressure, which may result in blocking. Blocking under high RH conditions can result in color transfer, delamination, and other types of surface damage. Digital electrophotographic prints on good quality paper are stable in dark storage at 20°C and an RH of 30%–50% (Burge 2014:Table 3; Digital Print Preservation Portal 2018b:Table 3).
Dye Diffusion Thermal Transfer Prints
Dye diffusion thermal transfer prints are moderately light sensitive and can fade at or near the rate of chromogenic prints if they are not stored in the dark. Light can also cause the polyethylene layer of the RC support to crack. They are stable in dark storage at 20°C and an RH of 30%–50%. However, D2T2 prints made before the introduction of the protective topcoat in 1994 are very sensitive to light, heat, pressure, water, pollution, and fingerprints (Burge 2014:Table 3; Digital Print Preservation Portal 2018b:Table 3).
Recommendations for Repository Staff
• For incoming reposits: Electrophotographic and D2T2 processes are more stable at room temperature; however, they have a lower image quality than prints produced by digitally exposed chromogenic and inkjet processes, and fine image details are lost. Digitally exposed chromogenic and inkjet prints have a higher image quality with more details but require cold storage conditions. Consider these factors when establishing print process requirements in your repository guidelines.
• For legacy collections: If you have digitally exposed chromogenic or inkjet prints in your legacy collections, and your repository does not have cold or frozen storage, consider reprinting the photos from the original file in a more stable process (D2T2 and electrophotographic), keeping in mind the first recommendation.
• Request enclosures that have passed the requirements of ISO 18902 for all prints. Repositories in areas that are susceptible to high humidity and warm temperatures should consider using paper enclosures and avoid overstuffing prints where pressure can cause blocking. If pollution is an issue and inkjet prints are in your collection, then inert plastic enclosures may be more suitable.
CONCLUSION
Digital prints have been present in legacy collections around the world since the mid- to late 1990s and are still being accessioned in new collections along with their digital image file counterparts. Repository staff and field archaeologists should work together early in the process, before new collections are generated. They should communicate about appropriate print and storage methods, as well as the funding resources necessary to care for collections for the long term. Repository staff should take steps to ensure that existing digital print collections are preserved if they intend to retain the prints as part of the accessioned collection. For the long-term permanence of digital images, the print process, storage enclosures, and storage-room conditions must be carefully considered as a package. If digital prints continue to be required with the reposited associated records, collections staff should be realistic about what level of care they can provide and reflect that reality in their repository guidelines. Field archaeologists should also budget appropriately for long-term care and management of both born-digital and hard-copy derivations of photos and other associated records, and they need to communicate with repository personnel about submission requirements. These are important considerations for the prefield planning stages of an archaeological project.
The recommendations in this article should inform the choices that field archaeologists make before they print and store photographic documentation of the archaeological record; these decisions have long-term consequences for the preservation of that record.
Acknowledgments
We wish to thank three anonymous reviewers and Danielle Benden for providing useful feedback on this article. We also thank Fanny Blauer for assistance with Spanish-language translations. We are grateful to colleagues at the Image Permanence Institute, Daniel Burge and Jennifer Burger, for their research and expertise in digital print materials.
Data Availability Statement
No original data were presented in this article.
NOTE
1. The definition of the term repository (or archaeological repository) as it appears in this article is taken from the Archaeological Collections Consortium article (2016) entitled "Building Common Ground on Collections: An Initial Glossary of Collections-Related Terminology," published in The SAA Archaeological Record 16(1):41–43: "a facility or institution that professionally manages collections on a long-term basis."
REFERENCES CITED
Archaeological Collections Consortium 2016 Building Common Ground on Collections: An Initial Glossary of Collections-Related Terminology. SAA Archaeological Record 16(1):41–43.
Burge, Daniel 2014 IPI Guide to Preservation of Digitally-Printed Photographs. Electronic document, http://www.dp3project.org/webfm_send/739, accessed July 23, 2018.
Burge, Daniel, Nino Gordeladze, Jean-Louis Bigourdan, and Douglas Nishimura 2010 Effects of Ozone on the Various Digital Print Technologies: Photographs and Documents. Journal of Physics: Conference Series 231(012001):1–6.
Burger, Jennifer, and Daniel Burge 2018 A Guide for the Assessment and Mitigation of Bleed, Gloss Change, and Mold in Inkjet Prints during High-Humidity Conditions. Paper presented at the International Symposium on Technologies for Digital Photo Fulfillment, Dresden, Germany. Electronic document, http://www.dp3project.org/webfm_send/810, accessed January 17, 2019.
Childs, S. Terry, and Danielle M. Benden 2017 A Checklist for Sustainable Management of Archaeological Collections. Advances in Archaeological Practice 5(1):12–25.
Digital Print Preservation Portal 2018a Selecting Enclosures. Electronic document, http://www.dp3project.org/preservation/storage-enclosures, accessed October 18, 2018.
2018b Storage Recommendations. Electronic document, http://www.dp3project.org/preservation/storage-recommendations, accessed October 18, 2018.
Eiteljorg, Harrison, II 2004 Archiving Digital Archaeological Records. In Our Collective Responsibility: The Ethics and Practice of Archaeological Stewardship, edited by S. Terry Childs, pp. 67–76. Society for American Archaeology, Washington, DC.
Fenn, Julia 2003 Poisons, Plastics, and Paper: Potentially Destructive Combinations in Archival Storage. ESARBICA 22:25–36.
Holmes, Sara 2017 The Problem of Plastic: A Sometimes Sticky Situation. MAC Newsletter 44(4), Article 10:26–27. Electronic document, https://lib.dr.iastate.edu/cgi/viewcontent.cgi?article=1049&context=macnewsletter, accessed October 17, 2018.
Horenstein, Henry 2011 Digital Photography: A Basic Manual. Little, Brown, New York.
Image Permanence Institute 2004 A Consumer Guide to Traditional and Digital Print Stability. Electronic document, https://www.imagepermanenceinstitute.org/webfm_send/313, accessed September 26, 2018.
2009a A Consumer Guide to Modern Photo Papers. Electronic document, https://www.imagepermanenceinstitute.org/webfm_send/310, accessed September 27, 2018.
2009b A Consumer Guide to Understanding Permanence Testing.
Electronic document, https://www.imagepermanenceinstitute.org/webfm_send/311, accessed September 27, 2018.
2018 IPI's Guide to: Preservation of Digitally Printed Images. Electronic document, http://www.dp3project.org/webfm_send/886, accessed February 26, 2019.
Institute of Museum and Library Services 2019 Heritage Health Information Survey (HHIS) Report: A Snapshot of Facts & Figures. Electronic document, https://www.imls.gov/sites/default/files/hhis-snapshot.pdf, accessed March 21, 2019.
Johnson, Harald 2004 Mastering Digital Printing. 2nd ed. Muska & Lipman, Cincinnati, Ohio.
Jürgens, Martin C. 2009 The Digital Print: Identification and Preservation. Getty Conservation Institute, Los Angeles.
Kintigh, Keith W., and Jeffrey H. Altschul 2010 Sustaining the Digital Archaeological Record. Heritage Management 3(2):264–274.
Meister, Nicolette 2019 A Guide to the Preventive Care of Archaeological Collections. Advances in Archaeological Practice 7(3). DOI:10.1017/aap.2019.7.
Millar, Laura A. 2018 Archives: Principles and Practices. 2nd ed. Facet, London.
Nishimura, Douglas, Daniel Burge, Jean-Louis Bigourdan, and Nino Gordeladze 2013 Reanalysis of Yellowing of Digitally Printed Materials in Cultural Heritage Collections. Book and Paper Group Annual 32:123–128.
Reilly, James M. 1998 Storage Guide for Color Photographic Materials. Electronic document, https://www.imagepermanenceinstitute.org/webfm_send/517, accessed January 11, 2019.
Rima, Lindsay, and Daniel Burge 2009 Tendency of Digitally Printed Materials to Ferrotype or Block. Society for Imaging Science and Technology, Technical Program and Proceedings. Electronic document, http://www.dp3project.org/webfm_send/553, accessed October 13, 2018.
Salvaggio, Nanette, Leslie Stroebel, and Richard Zakia 2009 Basic Photographic Materials and Processes. 3rd ed. Focal Press, Burlington, Massachusetts.
Sullivan, Lynne P., and S. Terry Childs 2003 Curating Archaeological Collections: From the Field to the Repository.
AltaMira Press, Lanham, Maryland.
Venosa, Andrea, Daniel Burge, and Douglas Nishimura 2011 Effect of Light on Modern Digital Prints: Photographs and Documents. Studies in Conservation 56(4):267–280.
Wilhelm, Henry 2007 A Survey of Print Permanence in the 4 x 6-Inch Consumer Digital Print Market in 2004–2007. Electronic document, http://wilhelm-research.com/ist/ist_2007_03.html, accessed October 5, 2018.
AUTHOR INFORMATION
Michelle K. Knoll ▪ Natural History Museum of Utah, University of Utah, 301 Wakara Way, Salt Lake City, UT 84124, USA (mknoll@nhmu.utah.edu, corresponding author) https://orcid.org/0000-0002-6518-3616
A. Carver-Kubik ▪ Image Permanence Institute, Rochester Institute of Technology, 70 Lomb Memorial Drive, Rochester, NY 14623, USA
work_eg756v3ucraubfwk6mtivakvqm ----
INFORMATICS/NEW COMPUTER TECHNOLOGY
Image Is Everything: Pearls and Pitfalls of Digital Photography and PowerPoint Presentations for the Cosmetic Surgeon
JOSEPH NIAMTU, III, DDS Oral/Maxillofacial and Cosmetic Surgery, Richmond, Virginia
BACKGROUND. Cosmetic surgery and photography are inseparable. Clinical photographs serve as diagnostic aids, medical records, legal protection, and marketing tools.
In the past, taking high-quality, standardized images and maintaining and using them for presentations were tasks of significant proportion when done correctly. Although the cosmetic literature is replete with articles on standardized photography, it has eluded many practitioners, in part due to the complexity. A paradigm shift has occurred in the past decade, and digital technology has revolutionized clinical photography and presentations. Digital technology has made it easier than ever to take high-quality, standardized images and to use them in a multitude of ways to enhance the practice of cosmetic surgery. PowerPoint presentations have become the standard for academic presentations, but many pitfalls exist, especially when taking a backup disc to play on an alternate computer at a lecture venue.
OBJECTIVE. Embracing digital technology has a mild to moderate learning curve but is complicated by old habits and holdovers from the days of slide photography, macro lenses, and specialized flashes. Discussion is presented to circumvent common problems involving computer glitches with PowerPoint presentations.
CONCLUSION. In the past, high-quality clinical photography was complex and sometimes beyond the confines of a busy clinical practice. The digital revolution of the past decade has removed many of these associated barriers, and it has never been easier or more affordable to take images and use them in a multitude of ways for learning, judging surgical outcomes, teaching and lecturing, and marketing. Even though this technology has existed for years, many practitioners have failed to embrace it for various reasons or fears. By following a few simple techniques, even the most novice practitioner can be on the forefront of digital imaging technology. By observing a number of modified techniques with digital cameras, any practitioner can take high-quality, standardized clinical photographs and can make and use these images to enhance his or her practice.
This article deals with common pitfalls of digital photography and PowerPoint presentations and presents multiple pearls for quickly achieving proficiency with digital photography and imaging, as well as for avoiding malfunction of PowerPoint presentations in an academic lecture venue.
J. NIAMTU, III, DDS HAS INDICATED NO SIGNIFICANT INTEREST WITH COMMERCIAL SUPPORTERS.
© 2004 by the American Society for Dermatologic Surgery, Inc. Published by Blackwell Publishing, Inc. ISSN: 1076-0512/04/$15.00/0. Dermatol Surg 2004;30:81–91. Address correspondence and reprint requests to: Joseph Niamtu, III, DDS, Oral/Maxillofacial and Facial Surgery, 10230 Cherokee Road, Richmond, VA 23235, or e-mail: niamtu@niamtu.com
CLINICAL PHOTOGRAPHY and academic presentations have undergone an exponential paradigm shift over the past 10 years.1 For decades before, clinical slide photography and carrousel slide lecture presentations were the gold standard in cosmetic surgery. The availability of megapixel digital photography, digital imaging systems, and computer-driven digital presentation programs has revolutionized teaching and learning. It has never been easier to take, standardize, and use high-quality controlled clinical images. Despite these changes, many practitioners have not adopted this technology or fail to observe the simple rules that ensure standardized image use. Most unfortunate is the fact that some of the best known and respected cosmetic surgeons propagate their work with substandard photographic quality. In the scope of things, this new technology is not overly expensive and has a mild to moderate learning curve.2 Lecturers continue to use poor-quality images and presentations because of a lack of attention to a small number of variables. The author feels that any organized group, society, or association should immediately mandate its members to use digital presentations and abandon slides. Although this statement may seem radical and irritate those who have not yet embraced this technology, it will truly benefit every profession in numerous ways. There is no doubt that most of us take the path of least resistance, and there is significant initial work involved in converting from slides to digital. Because of this, many practitioners continue to give the same "canned" lectures repeatedly. This shortchanges the audience, as these presentations and slides are oftentimes not contemporary. Unfortunately, it is very difficult to add new controlled images and data to old slide lectures; thus, oftentimes people do not. Digital presentations, on the other hand, can be updated in a matter of seconds, and this author frequently assembles his lectures on the plane trip to the meeting. I have also changed a lecture minutes before going on stage because of additions or deletions by the previous speaker. I truly feel that when doctors convert to updated technology, their teaching and their audience's learning will be more contemporary. In this case, everyone benefits. The good news is that this conversion needs to be done only one time. The most prohibitive factor is the "fear and trepidation" of having to scan the entirety of one's slide collection. As most seasoned lecturers maintain thousands of slides, this would be a task of awesome and fearful proportions.
Fallacy 1: You Need to Convert Your Entire Slide Collection to a Digital Format
This is the most common misconception preventing adoption of digital technology. Scanning thousands of slides is simply impractical and almost always unnecessary. First, doctors with thousands of slides rarely use them all on a regular basis. In reality, they use a small portion of their collection for routine lectures. I personally recommend scanning only the images that one simply cannot recreate. This would include "hall of fame" cases, rare pathology, surgical images, unusual lesions, etc. Most anything else can be recreated.
For instance, I have lectured for years on chin surgery and had multiple canned lectures on the subject. Some of those slides captured once-in-a-lifetime situations; thus, I scanned them. Instead of scanning all of the others, I simply photographed my next chin surgery with a digital camera. Now I not only had more contemporary images, but the quality was superior. I could edit them in many ways, such as improving brightness and contrast, cropping, changing hue and saturation, adding text and symbols, and placing multiple images on a single photo. This did not take long and immediately made my "chin lecture" better. I proceeded with the same strategy for other procedures and soon had abandoned the slides that I previously held dear to my heart. My decision to go "slideless" was about 7 years ago, and I have never looked back.3 I cannot tell you how much of a "purge for freedom" it was to take my previously coveted carousels of slides and dump them in the trash.

Pearl 1: Scan Only the Slides You Need and Then Move on With Digital Technology

Slide scanners vary from several hundred dollars to over a thousand dollars. My advice is to either purchase an inexpensive scanner, if you will not use it often, or pay a professional laboratory to scan the slides that cannot be recreated.

Pearl 2: Tomorrow Is the First Day of the Rest of Your Life: Do Not Procrastinate

Some basic armamentaria are required to make and use digital images.4 Do not try to outrace technology, as you will always be behind. Do not plan on using your current digital equipment for the rest of your life; it will be outdated in a matter of years. If you truly are pursuing excellence in teaching and learning, reinvesting in technology is merely part of the challenge. You have to bite the bullet and purchase a suitable digital camera. Currently, 4 to 6 megapixels is the high end for most affordable cameras.
Nikon, Fuji, Canon, and others make high-end digital cameras designed more for the professional photographer. These have the advantages of interchangeable lenses, including macro and telephoto, metered lenses, expansion ports for accessory flashes, and other options used by professionals. They also have many more manual functions, again aimed at the advanced photographer. These cameras are very expensive and bulky; they are not easily transported or operated by staff and are, in the author's opinion, overkill for average cosmetic clinical photography. A number of "off-the-shelf" digital cameras are available for under $1,000, and these take excellent clinical photographs, including macro. Some basic instructions must be followed to control and enhance clinical photography. I lecture all over the world with these types of images and can attest to their adequacy. I currently use an upper-end digital camera that has been modified by an aftermarket company for clinical and macro digital photography. The camera is compact and lightweight, has no special bulky flashes or lenses, and functions wonderfully for 99.9% of cosmetic surgery photographic needs. A small camera of this nature is easily transported to multiple offices, the operating room, the emergency room, home, and the road.

The next tool that you will need is a laptop computer. I suggest using this laptop for all of your imaging; do not store images on any other computer.5 Transferring images between multiple computers is time consuming and confusing, and images are commonly deleted inadvertently. By having all of your images on a laptop, you will never be without them and will know exactly where they are; you can always take them with you. The true power of digital technology is the ability to carry your entire "slide collection" with you at any time.
Now you can create lectures and publications on the airplane or at the barber shop, at work, or at home. Regardless of what type of computer is used, one must religiously back up one's images on a regular basis. A lost or stolen laptop or a malicious virus can wipe out your entire "slide" collection, which is a sickening thought, let alone event. In addition, your computer will be used for your actual presentations. Although multiple presentation programs exist, Microsoft PowerPoint is currently the most popular and universal presentation program. The Windows XP operating system has made image manipulation and use easier than ever. Buying a laptop is like buying a car or a boat: one should buy the biggest and fastest that one can afford. In today's environment, several specifics are important. Digital images are memory-intensive, and video requires even more power. The author recommends a processor of at least 1 gigahertz, a hard drive of at least 30 gigabytes, a video card with a minimum of 32 megabytes of video RAM, system RAM of at least 256 megabytes (512 to 1,024 megabytes being preferable), a CD writer (a DVD writer is preferable), and video-capture capability. A serious problem has been encountered with even some of the highest-quality laptop computers when playing back digital video during a PowerPoint presentation. The video card setup on some name-brand, high-end computers may be such that when attempting to play a video in your PowerPoint presentation, you can see the video on your computer but not on the main projector screen. The author has experienced and witnessed many presentations go awry because the video that was the crux of the presentation would not play in the lecture hall. The author does not endorse any specific computer but has not had this problem with the Dell 8100 and 8500 series laptops (www.dell.com). This may not apply to other Dell models.
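The regular backup the author insists on can be automated rather than left to memory. A minimal sketch in Python (the folder paths and function name are hypothetical, not taken from the article):

```python
import shutil
from datetime import date
from pathlib import Path

def back_up_images(source: Path, backup_root: Path) -> Path:
    """Copy the entire image collection into a dated backup folder."""
    destination = backup_root / f"images-{date.today():%Y-%m-%d}"
    # copytree replicates the whole folder tree, including subfolders
    shutil.copytree(source, destination, dirs_exist_ok=True)
    return destination

# Example (hypothetical paths):
# back_up_images(Path("C:/Master Images"), Path("E:/Backups"))
```

Pointed at an external drive or burnable media, a script like this makes the "religious" backup a one-command habit, and the dated folder names keep older snapshots intact.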
Make sure this conflict does not exist when considering purchasing a computer for digital imaging and presentations. If your particular computer does not show video on the projector, try toggling your display function key; sometimes the video will project on the projector but not the laptop. If a digital camcorder is to be used, a FireWire (IEEE 1394) port on the computer is a big advantage for downloading video to the laptop. Most home-quality digital camcorders take excellent clinical video. Common camcorders have excellent macro capabilities and are versatile in many lighting conditions. A tripod or monopod is essential for quality, stabilized clinical video. The author has used common small-format camcorders to film full-body procedures, such as abdominal fat harvest, as well as super-macro procedures, such as transconjunctival blepharoplasty. Most upper-end digital camcorders can do both with amazing quality. DVD recorders are in their infancy, and major compatibility issues exist. These discs are desirable because they can hold almost 5 gigabytes of data, which is very handy when dealing with large image or video files. Many images require editing; thus, some type of image-editing software is necessary. Many commercial digital imaging systems are available, but mastering an inexpensive Photoshop-type program can serve the same basic tasks. The author rarely archives a raw image but usually performs some type of editing, such as contrast or color correction, cropping, or resizing. In the "old slide days of the last millennium," making quality, standardized before-and-after slides was a serious task. With image editing, one simply opens both images, makes them the same size, and, with a single mouse click, stitches them together. There is no longer a need to lecture with two slide projectors.
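The before-and-after stitching described above can be reproduced with any scriptable image editor. A sketch using the third-party Pillow library (the article does not name a specific program; the function and file names here are illustrative):

```python
from PIL import Image

def stitch_before_after(before_path, after_path, out_path, height=600):
    """Resize two images to a common height and paste them side by side."""
    panels = []
    for path in (before_path, after_path):
        img = Image.open(path)
        # Scale each panel to the target height, preserving aspect ratio
        width = int(img.width * height / img.height)
        panels.append(img.resize((width, height)))
    combo = Image.new("RGB", (panels[0].width + panels[1].width, height), "white")
    combo.paste(panels[0], (0, 0))
    combo.paste(panels[1], (panels[0].width, 0))
    combo.save(out_path)
```

Equalizing the panel heights before pasting is what makes the pair read as a fair comparison, the same effect the author achieves by making both images "the same size" before stitching.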
Fallacy 2: Presentation Programs Are Complex and Difficult to Learn

The author has heard many practitioners state that they are too old or do not have enough time to learn how to use PowerPoint. This is almost always far from the truth. If a person can get through a residency program and learn cosmetic surgery, he or she is certainly capable of learning PowerPoint. As to the age question, the author has had some very rewarding experiences teaching older practitioners the basics of PowerPoint.1 I have seen them blossom and truly appreciate the freedom and accomplishment of learning something new. Age is only important in wine and cheese! A digital projector is also a handy thing to own if you lecture frequently. It is particularly useful for local presentations, marketing seminars, and business meetings. Most lecture venues have powerful digital projectors, so owning one is not a necessity. The author has found local hospital audiovisual departments willing to loan projectors to doctors on staff.

Pearl 3: The Best Way to Learn PowerPoint Is to Play With It

In this age of computer games, learning can be fun. The author suggests using the included PowerPoint tutorials; beyond that, one should simply play with the program. Make a simple slide, and then browse the various menus to see what effects and changes are possible. You will be amused and enthralled with the possibilities. This author has published several journal articles that can also assist clinicians with the basics of making PowerPoint presentations.1,6

Pearl 4: Keep Your Presentations Simple

Because of the novelty of digital presentations, many presenters have been overly aggressive in using enhancements such as animations, transitions, and sounds. A decade ago, playing the "screeching car tires" sound when your title appeared was chic; now it is sophomoric.
The initial key to any presentation is an acceptable, uncluttered background. Many standard backgrounds are available within PowerPoint. These default backgrounds are usually set up with proper contrast between the font (text) and background colors. In a large lecture hall, this contrast is imperative for viewing detail. A common error is to include too much small text on a PowerPoint slide. Cluttered slides detract from the content and confuse the audience. Because there is no extra cost, there is no reason to use cluttered slides. Animation is the single best and worst thing that has happened to digital clinical presentations. Although creative animation can emphasize a point or guide the audience through a process, its overuse can seriously detract from presentation content. In addition, overuse of animation can complicate the timing and progression of a presentation, as well as make it longer. Inserting or dragging and dropping images into a slide in a PowerPoint presentation takes just a few mouse clicks. The image is scalable, meaning that it can be resized without losing perspective. Multiple images may be inserted into a slide, and the images can be edited directly in PowerPoint. It is easy to enhance the color and contrast of an image as well as to crop its borders. Using the PowerPoint drawing tools, arrows, text, and symbols can be added to images. In the author's opinion, the single most important advancement in digital presentation technology has been the addition of video. The use of movies in a "still" lecture adds to a presentation in much the same way that television is superior to radio. One cannot fully appreciate a cosmetic procedure from still images in the same manner as from real-time video. Making clinical video movies with a home digital camcorder has been discussed. Transferring them to the computer is a separate task and does have a learning curve.
Many entry-level video-editing programs are available for under $100, and most new digital camcorders come with editing software. The user-friendliness of video capture and editing software has increased exponentially over the past few years, and it is actually fun to become an academic producer. One caveat: do not count on using the video clips from your digital still camera for serious presentations. Many digital still cameras are now capable of capturing short video movies as well as images, but these movie clips are very low resolution; they may be suitable for e-mailing personal movies but are below the standard required for a full-screen presentation. Raw, uncompressed video is extremely memory-intensive, and thus compression is usually required for seamless video integration. The MPEG-1 compression scheme is one of the most common types of video compression in use today. This format works well in PowerPoint presentations and is not so memory-intensive as to slow down the presentation. Because it is compressed, the actual video does not usually fill the computer or projector screen, but when projected in a large room, it is greatly magnified and easy to see. Other video formats, such as MPEG-2 or AVI, are very memory-intensive and do not currently integrate seamlessly with PowerPoint. These video clips are full screen but are truly memory-intensive and are not recommended for beginners.

Fallacy 3: If a PowerPoint Presentation Works on Your Computer, It Will Work on Other Computers

Unfortunately, there are many variables and compatibility issues that can turn a well-rehearsed lecture into a presentation nightmare. Most of us who attend meetings have seen more than one presenter fumble around trying to run a presentation from the podium. In the worst case, some are not able to present at all. This is especially true if video is incorporated into a presentation.
Pearl 5: Understanding the Basics of How PowerPoint Handles Images and Video Files Will Ensure a Proper Backup Strategy and Allow Seamless Compatibility With Other Similar Computers

Although most presenters take their laptops to a meeting, it is not strictly necessary if you have your presentation backed up on CD or comparable media. It is a great feeling to walk into an auditorium with a CD in your pocket instead of a bag of rattling carousels. If you are using video in your presentation, several points are very important for successful replay from a CD. When you insert an image into a PowerPoint presentation, the image becomes embedded by default. This means that the digital code of that image becomes part of the presentation, and your pictures will always be there. Sometimes you will see a presenter show a PowerPoint slide on which, instead of an image, a geometric icon appears. This happens because the image was linked instead of embedded. The best way to avoid this is to follow the PowerPoint menu commands INSERT and then PICTURE FROM FILE; this will always embed the image. Video, on the other hand, is linked to the presentation and not actually integrated into it. This means that if you make a PowerPoint presentation on your laptop and insert a video, it will play on your computer without a problem, but if you back up the presentation on a CD without the video, your presentation will play and the video will not. Again, the author has seen presenters unable to give a featured lecture that they spent many hours preparing because of this problem. To circumvent it, the PowerPoint presentation and the video clips must reside in the same folder before backing up to CD.
For instance, if I have a talk on facelifts and I want to show some facelift footage, I first need to create a folder on my hard drive to hold both the facelift PowerPoint presentation and the facelift video clips. I will create a folder anywhere on my C:\ drive and arbitrarily call it "facelift PowerPoint" (the name does not matter, only that you have a dedicated folder). The next step is to place or copy the videos that you wish to use in that presentation into that folder. The last step is to create your PowerPoint presentation and insert the video clips that already reside in that folder. In this way, they are linked to that folder, and PowerPoint will look for that link in that folder whether it is on your hard drive or on a CD. The PowerPoint presentation must also be saved in this dedicated folder. Failure to observe this order can lead to great frustration on the podium. Finally, the best-laid plans can go awry. You may make a flawless digital presentation only to have it fail because of unforeseen problems in the lecture hall, such as computer compatibility issues. To combat compatibility issues, it is a good idea to plan ahead. If you are attending a large meeting, find out who is running the audiovisual setup and call ahead to inquire about any known issues with your specific computer make and model. Also, when checking in to a hotel at a conference, the author always proceeds directly to the lecture hall or speaker-ready room to test the computer, CD, and projector personally. When in doubt, make multiple backup copies and bring your own computer.

Pearl 6: Never Keep Your Laptop and Your Backed-Up Presentations in the Same Bag

This pearl is made under the assumption that anyone wishing to avoid the "academic suicide" of losing all of his or her digital images is smart enough to back up his or her computer frequently.
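The dedicated-folder routine described above is easy to script so that nothing is forgotten before burning the CD. A sketch in Python (the folder and file names are illustrative only, echoing the article's "facelift PowerPoint" example):

```python
import shutil
from pathlib import Path

def bundle_presentation(folder: Path, presentation: Path, videos: list) -> None:
    """Copy a presentation and the video clips it will link to into one
    dedicated folder, so the whole folder can be burned to CD as a unit."""
    folder.mkdir(parents=True, exist_ok=True)
    for item in [presentation, *videos]:
        # copy2 preserves file timestamps along with the contents
        shutil.copy2(item, folder / Path(item).name)

# Example (illustrative names):
# bundle_presentation(Path("C:/facelift PowerPoint"),
#                     Path("facelift.ppt"), [Path("facelift_clip.mpg")])
```

Note that the author's ordering still applies: copy the clips into the folder first, then build and save the presentation there and insert the clips from that location, so the links PowerPoint stores point inside the folder.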
The author has personally seen a keynote speaker at a national meeting lose a carry-on bag containing not only the computer but the backup CDs as well. Do not fall into that trap; pack your backup discs separately.

Pearl 7: The Best Doctors Take Many Pictures

This is a universal observation that I have made during my 20-year career in facial surgery. Surgeons who are passionate about their work document it very well and use those images to better their skills, as well as for academic presentations and marketing.

Pearl 8: Standardized, Quality Images Will Enhance the Credibility of the Presenter

Many articles have been written about photographic standardization for clinical photography.7–15 Although we all want to take the best images, overkill can exist in this area. If one is doing a rigorous scientific study based on clinical photographs, then absolute standardization is a prerequisite. Standardization devices such as head holders, multiple remote flashes, flash reflectors, and umbrellas may be required. One can literally invest thousands of dollars and fill an entire room with complex and sophisticated photographic paraphernalia. For the average cosmetic surgeon in the trenches of private practice, this level of proficiency is not necessary. It is necessary, however, to take relatively standardized, consistent images. These images should be consistent in distance, white balance, background, and lighting. With a little forethought, some simple standardizations, and a good-quality digital camera, this is easily accomplished. In pursuit of "relative standardization," this author has placed a white, nonglossy poster board on the back of the door in each room of the office (Figure 1). White is used because, with a colored background, printing the images can quickly deplete your colored ink. Setting up each room with a standardized background is convenient, as it is difficult to channel all patients to a dedicated photo room in a busy practice.
I promise that you will take more images when you can do it in any room. Nothing looks worse than photos taken with a wooden door as a background or with a patient sitting in an exam chair with counters, instruments, and other distractions in the background. Preferably, the ambient lighting should be consistent in these rooms; if it is not, adjustments to the camera may be necessary. The next key is to standardize your camera and subject in terms of poses, distance, and flash. It is imperative to take all of your images in a standardized manner. Each patient should be positioned in the same way for a given pose. The focal distance can be standardized by securing a piece of dental floss or a chain to the bottom of your camera and holding it to the nose (or other appropriate area) of your subject (Figure 2). This ensures that you will be at the same distance from the patient in all views. An alternative is to make a mark on the floor where the photographer should stand. Regardless of the means, focal-distance standardization is imperative; it will save you hours in postprocessing because your images will be the same size. The pose of the patient relative to the camera and background is the next most important factor. Most practitioners take, at a minimum, frontal, right and left oblique, and right and left lateral views, and frequently posterior views of a patient. Various specialties and specialists require other poses. Most common facial views are taken with the patient's Frankfort horizontal plane (an imaginary line from the external auditory canal to the infraorbital rim) parallel to the ground. Observing this will prevent a "chin-up" or "chin-down" view. Taking a preoperative lateral photo of a facelift patient with the chin down and then showing the same patient postoperatively with the chin elevated can imply artificial results and is a hallmark of incredibility.
Clinical imaging is no place for trick photography. The frontal and posterior views are the most difficult in which to eliminate shadows. If the camera is straight on to the subject, minimal shadow is apparent. Hair, ears, and clothing can cause shadowing. Solutions include taking digital images without the flash or using an accessory slave flash, which can be purchased inexpensively at camera stores.

Figure 1. A white poster board secured to the back of a door in each treatment room provides a convenient photographic background. Figure 2. A chain or string attached to the camera can help standardize the focal distance for each patient. Figure 3. Aligning the soft-tissue nasion with the lacrimal caruncle will standardize the oblique facial views.

The oblique view is perhaps the most important view when evaluating facial structures. It is also the view that is most commonly standardized. An easy means of standardizing the oblique facial view is to line up the soft-tissue nasion with the lacrimal caruncle of the contralateral eye (Figure 3). When taking oblique or lateral photographs, shadow is very problematic but very correctable. If the direction of the flash passes over anatomic projections (nose, chin, breasts, etc.), a shadow is cast (Figure 4). To eliminate shadow in the oblique or lateral view, the photographer merely needs to rotate the camera sideways so that the flash points toward the side being photographed. If one is taking a right lateral facial profile image, the camera should be rotated vertically so that the flash is on the same side as the patient's nose (Figure 5). This prevents the nose, chin, neck, and so on from blocking the flash and casting a shadow. The camera is rotated vertically in the other direction when photographing the left side.
Another alternative for eliminating profile shadow is to turn off the camera flash and rely on ambient lighting (Figure 6). When photographing anatomy such as the roof of the mouth, the nares, or the submental or inframammary area, the camera is held upside down so that the flash comes from below, thus preventing a shadow (Figure 7). Another commonly experienced photographic pitfall that erodes operator credibility is taking the preoperative picture without flash or in poor lighting and the postoperative picture with flash or increased lighting. No flash or poor lighting always enhances wrinkles, acne, orbital fat prolapse, and other flaws. Taking the same image without a flash can be so dramatic that it could be passed off as a surgical result (Figure 8). Several other details should be considered for repeatable images. Pay attention to jewelry, glasses, and makeup. Always take a series of preoperative images with the patient in full makeup and, immediately preoperatively, without makeup. If the patient wore glasses in the preoperative image, they should wear them in the postoperative image. Jewelry that is large, obtrusive, or distracting should be removed for photography. When taking facial views, a collarless shirt or blouse is preferable. A neutral-colored patient gown is preferred, as a red shirt (or other bright color) can reflect color onto the patient's face, skewing the true hue and saturation of the image. Patients should be reminded not to smile and should always look into the camera or, in oblique or lateral views, should focus on a standardized object.

Figure 4. Taking clinical photographs with traditional flash positions can cause unwanted shadowing on the background. Figure 5. Repositioning the camera so that the flash changes direction can eliminate problematic shadows.
The author glues a red dot on the wall and asks the patient to stare at it in the oblique and lateral views. A step stool is also necessary, as the patient may be considerably taller or shorter than the photographer, and pictures taken looking up or down will be distorted. Another problem is the blinking patient. It is very frustrating that some patients simply cannot be photographed with a flash camera without blinking. The author may take 30 images to get one or two acceptable nonblinking images. Using the red-eye-reduction function available on many digital cameras may help; turning off the flash or photographing these patients next to a window or outdoors may be necessary. An additional aid for photographing individuals who blink at the flash is to have them close their eyes and count to three, telling them to open their eyes on "three." By snapping the picture on the count of three, their eyes will be open. In the past, film processing meant that you frequently had "surprises" waiting for you when you got your film back from the laboratory. With digital cameras and preview screens, there is never an excuse for taking a poor image.

Figure 6. Disabling the flash is an effective means of eliminating shadows and works best with adequate ambient room lighting and/or manual camera adjustments. Figure 7. Inverting the camera so that the flash shines upward is a convenient means of illuminating difficult photographic areas such as the submental area, the roof of the mouth, and the inframammary fold. Figure 8. This image shows the same patient photographed with the same camera without a flash on the left and with a flash on the right. Preoperative and postoperative photographs should always be taken with the same lighting; failure to do so can make such a drastic difference that it can appear to be a preoperative and postoperative result.
For macro photography, such as of nevi, eyelids, and lesions, the paradigm has changed. With film cameras equipped with 100-mm macro lenses and ring or point flashes, the photographer could get as close to the subject as possible. With most digital cameras, the opposite is true. These cameras contain software that is metered for automatic conditions, and most cannot compensate for ultramacro distances. If you get too close to the subject, you will overexpose some areas and block the flash in others, causing a shadow. For instance, if you wanted to make a macro photograph of the lateral canthal area and held the camera very close, you would have areas of overexposure and underexposure (Figure 9A). To compensate, the trick is to stay back a foot or so from the subject and use the zoom to get close to the area. By doing this, you are far enough away for the flash to disperse over a larger area (Figure 9B). With digital editing, you can crop any extraneous anatomy. If your picture was taken at a sufficiently high resolution, your image will be "macro" after cropping out the unwanted structures (Figure 9C). Using Figure 9 as an example, if you want a close-up of the lateral canthus, stand back about a foot, zoom the camera in all the way, and take the picture. In your image editor, select the area that you wish to keep and crop the rest. The result is a macro image of only the anatomy that you wish. This was not possible with slide photography; in the past, you had to get close, because what you saw was what you got (unless you sent your slide off to the processing laboratory to be enlarged, cropped, and printed). Most off-the-shelf digital cameras are configured for a broad range of photography, from macro to infinity. Because of this, their lenses are less versatile than traditional 100-mm macro lenses. Because most clinicians previously used slide photography with macro lenses, they still attempt to take images with the lens close to the patient. Doing so can cause distortion of the anatomy closer to the camera (Figure 10A). Figure 10B shows the same subject taken with the same camera, but the photographer has backed several feet away from the subject and used the lens to zoom in, thus maintaining the same anatomic aspect ratio. To maintain proper anatomic ratios, take sample macro pictures of a commonly photographed object, such as the eye, but place a millimeter ruler next to it and find the correct focal distance and zoom that replicate accurate measurement.

Figure 9. (A) Underexposed and overexposed areas when the lens is too close to the subject. (B) A longer distance when taking a macro shot of the lateral canthus, with adequate exposure. (C) The same image, now cropped to show only the desired area. Figure 10. (A) Lens distortion of a typical digital camera when the camera is held too close to the subject. (B) The same picture taken with the same camera at a distance of 3 feet; zooming in on the subject eliminates the distortion.

Pearl 9: Archive Your Images in a Convenient Manner

Just taking digital images is of little use if you cannot find them. Although proprietary software exists exclusively for digital image archiving, it is unnecessary. This author has been using digital photography for clinical patients for a decade and has thousands of images archived. A convenient means of establishing an archiving system is to create a dedicated directory on your hard drive. To create a new directory, proceed to your root directory (usually C:\). In most versions of Microsoft Windows, this can be reached by clicking on "My Computer." The following steps show how to make new folders.

1. While in your C:\ directory, select FILE, then NEW, and then FOLDER (Figure 11).
2. To rename the folder, right-click on it and type a new name.
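The folder-creation steps above, and the procedure/patient hierarchy built on top of them, can also be scripted. A sketch in Python using the article's example names (MASTER IMAGES, LASER, CO2, SMITH, MARY); the function names are illustrative:

```python
from pathlib import Path

def make_patient_folder(root: Path, procedure: str, subtype: str, patient: str) -> Path:
    """Create MASTER IMAGES/<procedure>/<subtype>/<patient> on demand."""
    folder = root / "MASTER IMAGES" / procedure / subtype / patient
    folder.mkdir(parents=True, exist_ok=True)
    return folder

def find_patient_folders(root: Path, patient: str):
    """Locate every folder for a patient, across all procedure categories."""
    return sorted(p for p in root.rglob(patient) if p.is_dir())

# Example:
# make_patient_folder(Path("C:/"), "LASER", "CO2", "SMITH, MARY")
```

The search function plays the role of the Windows search mentioned in the text: because a patient with multiple procedures ends up with folders under several categories, one recursive lookup gathers them all.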
The next step is to make a main folder where all your images will reside. This can be called anything you wish, but for this example we will call it MASTER IMAGES. After you make the MASTER IMAGES folder, create a new subfolder for each main procedure you perform. An example is shown in Figure 12. If you desire, you can make subfolders within each procedure; in other words, you may have a folder for LASER and, under that, subfolders for CO2, ERBIUM, PULSED DYE, and so on. The final step is to make subfolders for your patients and to file them in the appropriate folders. If Mary Smith is a laser patient, you go to the MASTER IMAGES folder, then to the LASER folder, and then to the CO2 folder. In that folder, you make a new folder for SMITH, MARY and place all of Mary's images in it. If you wish, you can continue with subfolders for PREOP, POSTOP, and so on (Figure 13). With this scheme, it is easy to find a given patient by looking in the relevant procedure category folder. Many patients will have multiple procedures, and thus their images will reside in multiple folders. By simply doing a Windows search, you can find all of the images for any patient or procedure in any folder.

Figure 11. Creating a new folder or subfolder is the key to successful archiving. Figure 12. A master folder for pictures, with subfolders for each main procedural category, serves to organize images. Figure 13. The final stage of the directory tree is the patient subfolder.

Conclusion

Clinical photographic standardization is frequently taken for granted. One has only to look at some contemporary clinical journals and academic presentations to realize that many otherwise astute practitioners pay little attention to quality clinical photography. Digital photography has changed the paradigm for taking clinical images.
Several distinct differences exist between digital cameras and previously used film cameras with macro lenses. Failure to recognize these differences can result in poor-quality images. Following several simple rules of photographic standardization can greatly enhance the clinical images of the average practitioner. As digital cameras and video continue to evolve, the standardization of clinical photography will become simpler. No matter how advanced the equipment becomes, the surgeon's attention to the quality and standardization of his or her pictures will always be necessary to evaluate one's work critically. Practitioners who use PowerPoint presentation software should heed several basic tenets, including backup strategies to prevent problematic malfunctions and compatibility issues.

Case Report. J Clin Med Res. 2015;7(2):115-117. Articles © The authors | Journal compilation © J Clin Med Res and Elmer Press Inc | www.jocmr.org. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Preventing "A Bridge Too Far": Promoting Earlier Identification of Dislodged Dental Appliances During the Perioperative Period

John T. Denny, Sloane Yeh, Adil Mohiuddin, Julia E. Denny, Christine H. Fratzola

Abstract

The presence of fixed partial dentures presents a unique threat to the perioperative safety of patients who require orotracheal intubation or placement of instruments into the gastrointestinal (GI) tract. There are many chances for the displacement of a fixed partial denture: instrumentation of the airway for intubation, or introduction of temporary devices such as gastroscopes or transesophageal echo probes. If dislodged, the fixed partial dentures can enter the hypopharynx, esophagus or lungs and cause perforations with their sharp tines. Oral or esophageal perforation can lead to potentially fatal mediastinitis.
We describe a case of a patient with a fixed partial denture who underwent cardiac surgery with intubation and transesophageal echocardiography (TEE). His partial denture was intact after the procedure. After extubation, he reported that his teeth were missing. Multiple procedures were required to remove his dislodged partial dentures. In sign-out reports, verbal descriptions of the patient's partial dentures were not adequate in this case. A picture of the patient's denture and oral pharynx pre-operatively would have provided a more accurate template for the post-operative team to refer to when caring for the patient. This may have avoided the multiple potentially risky procedures the patient had to undergo. We describe a suggested protocol utilizing a pre-operative photo to reduce the incidence of unrecognized partial denture dislodgement in the perioperative period. Because the population is aging, this will become a more frequent issue confronting practitioners. This protocol could mitigate this complication.

Keywords: Partial dentures; Dental injury; Anesthesia complications; Dental complications; Mediastinitis; Cardiac surgery complications; Patient safety; Hand-offs

Manuscript accepted for publication September 22, 2014. Department of Anesthesia, Rutgers/Robert Wood Johnson Medical School, 125 Paterson Street, New Brunswick, NJ 08901, USA; Rutgers Graduate School of Nursing, 65 Bergen Street, Newark, NJ 07107, USA. Corresponding author: John T. Denny, Department of Anesthesia, Rutgers/Robert Wood Johnson Medical School, 125 Paterson Street, New Brunswick, NJ 08901, USA. Email: dennyjt@rwjms.rutgers.edu. doi: http://dx.doi.org/10.14740/jocmr1981w

Introduction

The presence of fixed partial dentures presents a unique problem for the perioperative safety of patients who require orotracheal intubation or placement of instruments into the gastrointestinal (GI) tract. There are many opportunities for the displacement of a fixed partial denture.
During the perioperative period, when patients have manipulation of their oropharynx to accommodate the placement of an endotracheal tube and/or transesophageal echocardiography (TEE) probe, the fixed partial denture can become dislodged. Likewise, in the post-operative period, if the patient bites or chews on the endotracheal tube, the fixed partial denture can be displaced. The fixed partial dentures can enter the hypopharynx, esophagus or lungs and cause perforations with their sharp tines. Esophageal perforations can result in mediastinitis, which carries a mortality of 48% [1]. The retrieval process can be traumatic even with endoscopic retrieval: as the denture is pulled out of the GI tract or airway, the tines can rake the mucosal surface and cause more perforations. In the worst case, the fragile dental appliance can fragment, resulting in multiple smaller, sharp objects which can easily migrate distally. In cases where endoscopy is unsuccessful, thoracotomy is needed to retrieve the fixed partial denture. We introduce and recommend a new safety protocol to reduce the morbidity of a dislodged fixed partial denture.

Case Report

A 70-year-old man underwent urgent cardiac surgery for coronary artery bypass grafting. Pre-operative assessment showed that his teeth were in poor condition, with the presence of gingivitis. Pre-operative examination of his oral cavity showed a stable fixed partial denture. During the surgery, he had easy, atraumatic orotracheal intubation in one attempt. After intubation, a TEE probe was easily placed in his esophagus and used during the entire case. The TEE probe was moved within the esophagus by routine traction forces applied at the mouth to manipulate the scope and slide it in and out. The patient remained intubated and sedated at the end of the case and was transported to the cardiac ICU. At that time, the patient had his fixed partial denture in place.
A chest X-ray was taken at the time of arrival to the ICU, and there was no evidence of a dislodged denture. During the first 12 h in the ICU, the patient had serial chest X-rays to evaluate his lungs and the placement of the endotracheal tube and central lines. He was not unduly agitated. After the patient was extubated, he told the ICU nurses that he was missing his teeth. The patient did not experience dyspnea, coughing or dysphagia. Review of serial chest X-rays confirmed that his fixed partial denture had migrated into his hypopharynx; an unsuccessful attempt was made to retrieve the denture in the hypopharynx, and the denture migrated into the esophagus. The gastroenterology service was consulted to remove the denture, but they were unable to retrieve it with their endoscope. ENT was subsequently consulted, and they were able to retrieve the denture with a rigid esophagoscope under general anesthesia in the operating room.

Discussion

The incidence of ingestion of dental appliances after orotracheal intubation is unknown, but it can be compared to information collected on the ingestion of foreign bodies. Some reports state that 1,500 people die annually from the ingestion of foreign bodies [2]. Complications of foreign body ingestion include gut perforation, sepsis, peritonitis, esophagitis, hemorrhage, and impaction of the GI tract. In an unconscious or sedated patient with an unprotected airway, aspiration into the trachea, and commonly the right bronchus, can occur. If not recognized, the fragment(s) can lead to an abscess and pneumonia. Mediastinitis is already a risk in uncomplicated coronary artery bypass surgery [3, 4]. Perforation in the oral pharynx can lead to cervical necrotizing fasciitis, which can be fatal [5, 6].
As a dislodged device passes distally, esophageal perforation can occur and also lead to mediastinitis [7]. Esophageal perforation is especially problematic, as mediastinitis has been reported even after botulinum toxin esophageal injection for spasm [8]. Although the term "fixed" partial denture is used to describe these dental appliances, they can be dislodged even if there is no manipulation of the oropharynx. Also, if the patient has poor dentition, it is more likely that the fixed partial dentures can be dislodged from their abutments. The fixed partial dentures contain sharp tines that can cause perforations of the GI tract at any point during their migration out of their dental abutments.

Figure 1. Proposed scheme of management.

A safety protocol to document and emphasize the presence of fixed partial dentures may help reduce the incidence of morbidity related to fixed partial denture dislodgement. In the pre-operative holding area, a digital camera with a printer can be made available to document the oral exam of every patient with fixed partial dentures. A picture of the fixed partial dentures in situ can be attached to the chart and given as part of patient information during patient verification and sign-out report. The practice of using digital photography to document oral appliances is well known in the dental field; in fact, many employ consumer-level digital cameras over more specialized intraoral cameras, as the former are now capable of high-resolution capture in macro modes [2]. The old saying "a picture is worth a thousand words" is apropos here: being able to show the ICU team a picture of the patient's dental appearance pre-operatively is worth more than a verbal description.
In our case, despite an adequate description of the patient's denture to the ICU team, when it subsequently became dislodged, this was only realized when the patient complained. Fixed partial dentures are very hard to describe verbally because of their various shapes, so a verbal description is likely to be inadequate.

The type of procedure should lower the threshold for removing unstable fixed partial dentures. When the patient is undergoing procedures that involve passing instruments repeatedly through the mouth, such as the movement of a TEE probe during cardiac anesthesia or the movement of an endoscope for an EGD, the fixed partial denture should be removed if it is even slightly unstable. Even if the fixed partial denture is stable, it should be checked periodically during the case for loosening. Furthermore, in cases where the head is not easily accessible to the anesthesiologist, the patient should not be allowed to have the fixed partial denture in his mouth if it is not completely secure. In contrast, if there are no instruments being dynamically placed in the mouth, the clinician can reduce the number of inspections of the fixed partial denture.

In the post-operative period, a detailed report, with pictures, should be given about the location of fixed partial dentures. Furthermore, serial exams of the oropharynx should be done to confirm the location and status of fixed partial dentures. If there is evidence of loosening or displacement of the fixed partial dentures, the post-operative care team should remove the denture or consult dentistry to remove it. Special attention should be given to patients who remain intubated postoperatively. In the post-operative care unit, the patient must have adequate sedation so they do not grind their denture on the endotracheal tube and displace it. In addition, like the routine monitoring of vital signs, the stability of fixed partial dentures should be verified periodically.
Fixed partial dentures have an increasing presence in our aging population. The fixed partial denture has sharp tines which anchor the denture to surrounding teeth. These "fixed" partial dentures have the potential to come loose during general anesthesia when an endotracheal tube is introduced through the mouth to secure the airway. If the patient has a TEE probe placed in his mouth during a case, it is vital to periodically check the stability of the denture as the probe is moved in and out of the patient's mouth. In addition, when a patient remains intubated at the end of a case, adequate sedation must be provided so that the patient does not bite on his endotracheal tube. With increased surveillance of the fixed partial denture, we can reduce the dislodgement of fixed partial dentures and avoid the morbidity of the denture traumatizing the GI tract (Fig. 1).

References
1. Moghissi K. Esophageal foreign bodies. Eur J Cardiothorac Surg. 1998;13(5):499.
2. Christensen GJ. Intraoral television cameras versus digital cameras, 2007. J Am Dent Assoc. 2007;138(8):1145-1147.
3. Charbonneau H, Maillet JM, Faron M, Mangin O, Puymirat E, Le Besnerais P, Du Puy-Montbrun L, et al. Mediastinitis due to Gram-negative bacteria is associated with increased mortality. Clin Microbiol Infect. 2014;20(3):O197-202.
4. Ennker IC. Treatment of mediastinitis following cardiac surgery - still in discussion. HSR Proc Intensive Care Cardiovasc Anesth. 2013;5(2):120-121.
5. Bucak A, Ulu S, Kokulu S, Oz G, Solak O, Kahveci OK, Aycicek A. Facial paralysis and mediastinitis due to odontogenic infection and poor prognosis. J Craniofac Surg. 2013;24(6):1953-1956.
6. Di Crescenzo V, Laperuta P, Napolitano F, Carlomagno C, Danzi M, Amato B, Garzi A, et al. Unusual case of exacerbation of sub-acute descending necrotizing mediastinitis. BMC Surg. 2013;13(Suppl 2):S31.
7. Safranek J, Geiger J, Klecka J, Skalicky T, Spidlen V, Vesely V, Vodicka J. [Mediastinitis after esophageal perforation]. Rozhl Chir. 2013;92(4):195-200.
8. Marjoux S, Pioche M, Benet T, Lanne JS, Roman S, Ponchon T, Mion F. Fatal mediastinitis following botulinum toxin injection for esophageal spasm. Endoscopy. 2013;45(Suppl 2 UCTN):E405-406.

Photography and Culture, Volume XX, Issue X, March 2017, pp. 1-16. ISSN: 1751-4517 (Print), 1751-4525 (Online). DOI: 10.1080/17514517.2017.1295708. Published online: 10 March 2017. © 2017 Informa UK Limited, trading as Taylor & Francis Group.

Shadowy Presences: Mobility, Labor and Absence in the Work of Dominican Photographer Fausto Ortiz

Carlos Garrido Castellano

Abstract

This article aims to examine the ways in which contemporary art from the Caribbean, and specifically from the Dominican Republic, is analyzing mobility and human trafficking within a transnational context. In this case I will critique the work of the photographer Fausto Ortiz (Santiago de los Caballeros, 1970), who has reflected recently on the consequences of migration and displacement for Dominican cultural politics. Rather than addressing the representation of marginalized sectors and marginal forms of economy in the particular case of the Dominican Republic, I argue that Ortiz's photographic practice deepens and broadens the debates about race, citizenship and social inequality, forcing his audience to consider those issues as a central part of the everyday. While addressing those issues, this article tries to insert Ortiz's photographic practice within international debates on mobility, border practices and displacement.

Keywords: Caribbean, islands, migration, photography, transnationalism

Migration and mobility have been a central concern in recent Caribbean visual practices. The representation of slavery and of the Middle Passage is commonly confronted and linked to contemporary processes of voluntary and forced displacement. This focus has been key in placing Caribbean visuality within a transnational context, transcending the identification between regional imaginaries and tropical, insular landscapes. A big effort has been made to challenge the visual commoditization of Caribbean reality. A similar interest has sought to insert Caribbean visual practices within a global arena (see Wainwright 2011; Kempadoo 2013; Stephens 2013; Mohammed 2011).
In critical theory, authors such as Juan Flores (2010), Jorge Duany (2011), Silvio Torres Saillant (1999) or Yolanda Martínez San Miguel (2003) have attempted to dismantle the nation-diaspora divide, pointing out the centrality of the "cultures of migration" (Martínez San Miguel 2003) for any understanding of the Caribbean cultural imagery. For those authors, mobility appears as a crucial element not only for those individuals or groups that are subjected to displacement, but also for the configuration of social imaginaries across the region. Engaging with those debates, this paper aims to explore how mobility and labor are portrayed in Caribbean visual practices. In order to do that, I examine two series by the Dominican photographer Fausto Ortiz (Santiago de los Caballeros, 1970). Ortiz has developed a consistent artistic career since the 1990s, gaining international recognition at the end of the 2000s, when his work was showcased in major art exhibitions within and outside the region, such as Infinite Island (New York, Brooklyn Museum, 2007). Around that date, when his work became internationally known, Ortiz consolidated a preoccupation with addressing the contradictions of mobility and migration within the context of the Dominican Republic. This concern is best expressed in Ciudad de sombras (City of Shadows) and Remainders, two photographic series produced in the late 2000s. Both are especially bound to marginal and conflictive modes of displacement.
They depict situations of precariousness and instability, forcing the spectator to challenge her expectations towards Caribbean bodies and landscapes, and dismantling the economic and racial discourses behind Dominican nationalism.1 In this article, I attempt to place Ortiz's photographic practice within the context of Caribbean cultures of migration. By looking at Ciudad de sombras and Remainders, I argue that, although dealing with specific local predicaments, the concerns with labor, migration and trafficked bodies present in both series cannot be isolated as a national phenomenon. My interest here is to show that Ortiz's exploration of the Dominican landscape of (im)mobility opens new ways of understanding the predicament of Caribbean displacements as a global phenomenon. Issues of difference and proximity/distancing have in many cases regulated the cultural debates around the links between Haiti and the Dominican Republic. The approaches to that predicament developed by Dominican artists share those issues (see Casa de América 2002; De los Santos 2003; Ginebra 2009; Miller 2012). Without dismissing such approaches, in this case I would like to suggest that a major part of Ortiz's work alludes to a liminal space related to the cultural and political relations in La Española, a space delimiting notions of political economy and habitability in the Caribbean at large. In that context, the question of coexistence is strongly linked to the visibility or invisibility of communities, as well as to governability, forcing the spectator to question human displacement in relation to the spread of Caribbean visual practices within a transnational context. "We remain labelled but nameless images," suggest Christopher Cozier and Juan Flores (Cozier and Flores 2012, 9) in dealing with Caribbean contemporary artistic practices. In this article I attempt to show how this concern is present in Ortiz's photographic imagery.
Herein I examine how Ortiz challenges the image of the Caribbean as a paradisiac, exotic location by depicting bodies and histories that are often excluded from the Dominican national imaginary. The shadows and inanimate bodies at the center of the images composing both series constitute, I will suggest, a troubling presence haunting Caribbean landscapes of migrant labor and mobility. Rather than describing or narrating peripheral presences, Ortiz's photographic practice proposes an alternative view of Dominican citizenship, one implying an active response and not just a documentary gesture. The main argument of this paper, therefore, is that Ortiz's work transcends the direct representation of marginal fluxes, drawing a link between the marginalization of subjects and bodies, the configuration of the Dominican nation, and the impact of neoliberal capitalism across the Caribbean region.

The series Ciudad de sombras (2007) depicts the urban landscape of Santiago de los Caballeros through the lack of visibility of part of its inhabitants.2 A group of shadows in movement are outlined over the walls, indicating a reality that cannot be perceived by the spectator (see Figures 1-5). Becoming shadows, errant figures whose position and identity can only be guessed from their reflection, those figures render a vision of a community marked by its estrangement, by its inexpressiveness. Those silhouettes constitute, however, an inseparable layer within the historical materiality of the walls, one that imprints its presence onto the urban texture and embodies the contradictions of misrepresentation. Ciudad de sombras portrays the imagery of downtown Santiago de los Caballeros, the second major city in the Dominican Republic.
By looking at those images, however, we could not know in what part of the city the pictures have been taken, nor the identity of the photographed persons. The only recognizable elements of the scenes come from the remnants of some messages fixed to the wall addressing political issues and key figures of the Dominican political process, but these also seem to fade. The image titles are equally ambiguous, and are used to suggest particular interpretations - Laberinto (Labyrinth), Sombras Pasajeras (Temporary Shadows), Aproximaciones (Approximations) - or the relation among the characters - Procesión (Procession), Partners. Although they modify the urban space where they are inserted, the shadows rarely occupy the whole wall, being relegated to the inner part of the image. This displacement is reinforced by the subjects' movement, captured while walking. Whereas the human figure holds the prominence of the scene, some photographs also include specific references to street furniture such as lampposts, traffic signs or parking meters. In those cases, the object's shadow appears bigger than the human form, creating a vertical counterpart to the inclining, diagonal shape of the walkers' silhouettes. This does not eliminate the sense of nakedness, of simplicity, of the photographed spaces. In rare cases, moreover, a sense of narration is pursued. Ortiz shows mostly isolated figures. In the group portraits, the subjects are often walking in divergent directions or clearly separated from one another. Some exceptions can be found in photographs like Fragmentos de paz (Peace Fragments), where people are presented in more relaxed, engaging attitudes, such as playing music, or in Laberinto, where we can identify a family walking together, the father guiding the steps of the children.

Figure 1. Fausto Ortiz, "Aproximaciones," Ciudad de Sombras series, 2007. Digital photography. Courtesy of the artist.
Figure 2. Fausto Ortiz, "Ciudad de Sombras," Ciudad de Sombras series, 2007. Digital photography. Courtesy of the artist.
Figure 3. Fausto Ortiz, "Peace Piece," Ciudad de Sombras series, 2007. Digital photography. Courtesy of the artist.
Figure 4. Fausto Ortiz, "Laberinto 2," Ciudad de Sombras series, 2007. Digital photography. Courtesy of the artist.

The other series of Ortiz dealing with precarious processes of mobility took the name of Remainders. Located in a tropical coastal landscape, the images show parts of mannequin bodies deposited by the sea as evidence of a shipwreck. While in some cases we witness the inexpressive faces oscillating with the tide, on other occasions the dispersed members lie under the sand, buried. In all the pictures, however, a feeling of abandon and decay appears, suggesting the idea of dispensable lives. The sense of tragedy is reinforced by the calm nature of the landscape, unaltered by the accident, and also by the perfectness of the bodies and fragments, which appear without a mark. We cannot find a clear connection between the different characters populating the scene: while some are about to disappear, others seem to enjoy the day. Although we are on a shore, water is outlined in several cases, creating a horizon reinforced by the horizontal alignment of the bodies. The use of limbs and torsos complements the idea, underlined by the title, of showing remainders of a wider totality recently disappeared. It also emphasizes the contrast between the perfection of the bodies and their outcast character, presenting the scene as a consequence of traumatic experiences. Mannequins, in that sense, appear as an integral part of the economies of movement and consumerism that lead to, according to Sheller, "forms of symbolic violence and cultural appropriation" (Sheller 2003, 4). Inserted in a tragic insular tropical landscape, they become part of what Ann Laura Stoler (2013) has called "imperial debris," countering both the idyllic image associated with the Caribbean and the role of touristic sheltered isolation attached to tropical insularism. What the shipwreck is telling us, in other words, is that any process of mobility is indissolubly linked to the global circulation of money and products. That circulation, as Sheller points out (2004, 15), not only tolerates but also increases transnational processes of inequality. As in Ciudad de sombras, several interpretations are possible here: a direct reading of the feminine condition of the mannequins could pose issues of sexual labor and forced migration. The surroundings of the shipwreck point to the illegal migration of Dominicans to Puerto Rico in yolas (small handcrafted vessels used for this kind of transit) through the Mona Canal. The open character of the series can be seen, then, as a dispositive used by the artist in order to make Dominican border reality problematic without specifying or privileging any particular interpretation. As none of those visions prevails over the rest, the viewers are forced to guess the possible meaning of the images (see Figure 6).

Figure 5. Fausto Ortiz, "Sombras pasajeras 2," Ciudad de Sombras series, 2007. Digital photography. Courtesy of the artist.

When asked for the meaning of Remainders, Ortiz answers cryptically: "The debris of the sea is what the sea expels; in this case, I wanted to suggest the idea of waste, mutilation, expulsion."3 Similarly, Ciudad de sombras is also located within a broader geography in Ortiz's interpretation:

I believe that the best way of representing the person who enters and leaves briefly the city without leaving any trace is through the shadow. The shadow is projected onto the city, in the walls, and those walls are built by many of those persons, they are not used solely to shelter them … I mean, I am talking in this case also about the Dominicans living in New York, in Spain, to say something. Many people understood my use of shadows as linking the images to skin color, but it does not necessarily have to do with this.4
The independence of the country was linked to the events that followed the Haitian Revolution, when the island became divided in two, one part controlled by the French until 1809 and another by the nation that had just gained independence. Given the passivity of both As we will see, the opacity and ambivalent tone of both series does not imply lack of attention to the context; on the contrary, it defines a way of linking and spacing marginalized and misrepresented histories, pointing out the complexity that characterizes contemporary experiences of mobility. East and West: reframing the Dominican gaze Ortiz’s photographs are concerned with the overlapping of heterogeneous histories and conflicts, which are braided through several ele- ments, among them mobility, (in)visibility or race and racism. In this section I will focus on the last two terms, attempting to connect them to the Dominican Republic geopolitical imagination. Both series, as we will see, portray the central roles that migration and mobility play in the configuration of Dominican nationalism (see, for example, Martínez Figure 6. Fausto Ortiz, “Number Eleven,” Ciudad de Sombras series. 2007. Digital photography. Courtesy of the artist. Carlos Garrido Castellano Shadowy presences: mobility, labor and absence in the work of Dominican photographer Fausto Ortiz 7 Photography & Culture Volume XX Issue X March 2017, pp. 7–16 Santo Domingo. Finally, David Pérez’s “Karmadavis” work for the Latin American Pavilion of the 2013 Venice Biennale took the form of a collaborative performance between a Haitian and a Dominican disabled citizen (see Fumagalli 2013, 2015). Those are just some recent examples of a topic exploited time and time again in Dominican visual practices9. The East also holds an almost mythical appeal in the Dominican geographical imagination. In this case, it is commonly associated with illegal migration to Puerto Rico through the Mona Canal. 
This journey means the entrance in US territory, but also constitutes a dangerous passage that has cost many lives. The drama of the yolas is usually related to questions of privilege, acceptance and success: the migrant who reaches Puerto Rico enters the “promised land,” although migration is, in many cases, also associated with precariousness and unwanted jobs. Furthermore, for many Dominicans the process of dealing with the American racial divide also implies a redefinition Spain and the international powers toward the independence of the Spanish side of the island, the Haitian government of Boyer occupied in 1822 the other part, reintroducing the abolition of slavery and imposing the French civil code. One century after, the dictatorship of Rafael Leónidas Trujillo instituted anti-Haitian policy which lead to the expulsion of many Haitian citizens and to several massacres6. The legacy of this historical conun- drum has remained a central issue in the relations between both countries until the present moment, when the situation is aggravated by the massive deportations of Haitian and Haitian-origin citizens7. Contemporary Dominican art has dealt extensively with Haiti and “the Haitian issue,” both in clever, subtle and more straightforward ways. Performance artist Sayuri Guzmán, for example, has braided the hair of Dominican and Haitian women together, bounding them8. More radically, Caryana Castillo wove a flag mixing both countries’, wearing it while walking around the historic Ciudad Colonial of Figure 7. Fausto Ortiz, “Náufragos,” Remainders Series. Digital photography. Courtesy of the artist. 8 Shadowy presences: mobility, labor and absence in the work of Dominican photographer Fausto Ortiz Carlos Garrido Castellano Photography & Culture Volume XX Issue X March 2017, pp. 8–16 Ortiz’s photographs show that the construction of Dominican identity and its national and racial Other(s) cannot be confronted separately. 
The series force the (Dominican) spectator to rethink her/his position as necessarily touched by heterogeneous and interconnected experiences of marginalization. Precisely this ambivalence troubles the connection between the photographed space, the addressed predicament and the artistic interpretation, for the lack of presence of their protagonists (shadows in the first case, lifeless mannequins in the second) seems to reinforce the unfruitfulness of any identification with “real” referents. Authenticity is excluded in Ortiz’s portraits, and that reinforces the concern for identification that shadows express. In the case of Ciudad de sombras, the first reference that comes to mind is that of the landscape of labor in La Española Island and the increasing processes of miscegenation represented of their bodies and racial identities. As it happens with the West, the East has also been a familiar focus for Dominican artists. For instance, in 2003 the Shampoo Collective created D′La Mona Plaza, a project that shocked artistic audiences in its time and that reached a huge public not familiar with contemporary art. The project advertised in the Dominican media the construction of a big shopping mall in the Mona Canal, which would provide help to shipwreck survivors and migrants in yolas. The most interesting part came when the media did not acknowledge the prank and started discussing the viability of such shopping mall. East and West are behind the gaze constructed in Ciudad de sombras and Remainders. However, I believe that it is difficult to identify each series with a cardinal point. On the contrary, an alternative view of the Dominican imaginary, one in which both cardinal points are confused and confronted together, emerges in both series. Figure 8. Fausto Ortiz, “Horizontes de paz,” Remainders Series. Digital photography. Courtesy of the artist. 
Carlos Garrido Castellano Shadowy presences: mobility, labor and absence in the work of Dominican photographer Fausto Ortiz 9 Photography & Culture Volume XX Issue X March 2017, pp. 9–16 of togetherness and commonality (a sense perhaps more difficult nowadays than in 2007). Produced before the crisis of citizenship that lead to the deportation of Haitian citizens in 2013, the series already announced many of the ingredients that brought La Española to an international focus in the last years. The “Haitian problem” was a silent but persistent one until recently; a situation mixing incomprehension, racism and economic instability in equal parts. That is likely why there is no protagonist in Ciudad de sombras. Ortiz’s photographs are not images of Haitian citizens, nor they depict eye-catching conflict- based situations. Instead, what the artist offers is a negative of a quotidian situation of sharing a common space collectively10. Ortiz has attempted to keep control of the interpretations made of the series, challenging especially those that identify the series in simplistic ways with the subaltern situation of Haitian migrants. The series also already by generations of Haitian-Dominican citizens. The sombras (shadows) are black presences that come and go without imprinting their presence in the city walls. However, they also allude to an impossibility of identification, echoing the mobility across both countries, but also the mixed condition of most Dominican citizens (contrary to the assumed blackness of Haitians, the Dominican population tends to define itself as “mulatto”). Rather than depicting an equally unfruitful pessimistic or optimistic vision of mobility in the specific context of Santiago de Los Caballeros, Ciudad de sombras outlines the centrality of transnational cultural processes, “familiarizing” them (Hannerz 1996). A general concern with exclusion, absence and misrepresentation is also present. 
Apart from the obvious platonic resonance to the Cavern Myth, the black of the shadow recalls the claim of the Haitian Revolution about the blackness of all the citizenry, portraying therefore a utopian message Figure 9. Fausto Ortiz, “La ida,” Remainders Series. Digital photography. Courtesy of the artist. 10 Shadowy presences: mobility, labor and absence in the work of Dominican photographer Fausto Ortiz Carlos Garrido Castellano Photography & Culture Volume XX Issue X March 2017, pp. 10–16 across time. In Laberinto (Labyrinth), the oppressive feeling suggested by the title contrasts with the gestures of the three figures composing the photo. Those two photographs are deprived of any urgency. They could belong to any context, to any chronology. Those two examples also propose a different kind of relation with the urban milieu: the characters of those pictures do not seem to be escaping from anything, and that easiness seems to suggest a deeper and more lasting imprint in the city walls. Remainders could also be suggest a straightforward interpretation: an exploration of the migration of Dominican citizens through the Mona Canal (see figures 6–10). However, a more profound look at the images reveal a similar interest in evading a direct confrontation. Remainders translates the focus to a dystopian notion of Caribbean nature. The only element that transfers some comfort to the spectator is the surreal character of the scene. In that sense, perpetuates this by refusing direct thematization and sensationalism. The shadow also allows Ortiz to pose questions concerning the visual presence and representation of migrating communities at large. Dealing with the ‘unhomely’ (Enwezor 2006), Ciudad de sombras examines the precarious lives of the migrant force in transitional spaces, highlighting the way in which they are socially transformed and culturally reshaped. Many of its protagonists are captured in everyday situations. 
In this sense, it is worth comparing the menacing atmosphere of Aproximaciones or Ciudad de sombras with scenes such as Peace Piece, for example. In this case, the persecutory tone of the two first photographs gives way to a calmed situation: one of the photographs of the diptych portrays a conversation among two subjects, and the other a man playing guitar. Movement and displacement are not present in this case, suggesting the permanence or the repetition of those actions Figure 10. Fausto Ortiz, “Sueños líquidos,” Remainders Series. Digital photography. Courtesy of the artist. Carlos Garrido Castellano Shadowy presences: mobility, labor and absence in the work of Dominican photographer Fausto Ortiz 11 Photography & Culture Volume XX Issue X March 2017, pp. 11–16 of posters and photographs that we find in the wall: abstract in the left photograph, they become clearer at the right, where we find a disappearing portrait of Joaquín Balaguer surrounded by illegible lists and group of portraits. This composition immediately recalls the disappearance of political enemies that took place under the government of Balaguer in the period known as “Los doce años” (The twelve years)11. Between 1966 and 1978, Balaguer attempted to consolidate the transition from dictatorship to democracy by developing a policy of economic development and urban modernization, and also by appealing to the most popular sectors of Dominican society. The disap- pearances and political murders, however, contin- ued after the Trujillo dictatorship, constituting the dark side of the economic growth that defined the official image offered by the government in that period. The juxtaposition of layers and shadowy presences in “Aproximaciones” reveal the conti- nuities between the political violence behind the disappearances and the contemporary exclusion of migrants. The result of both processes is here portrayed as a continuum by the figure at the right carrying disposal. 
This figure condenses the contra- dictions between the country’s official image and the informal mechanisms undertaken to achieve or maintain it both during the government of Balaguer and in the present moment. The bodies personified by the silhouettes hold an ambivalent relation with the “solid” reality of the wall: they are portrayed as menacing presences, but also as carriers of disposal. They appear as accomplices and victims of a vio- lence that is never openly defined, just suggested. In that sense, the presence of the president gives the title of the triptych (“Approximations”) a definite shape, endowing the scenes with a threatening atmosphere. Furthermore, it reveals the difficulties in confronting situations of violence and margin- alization: as the desaparecidos constitute in many cases a non-traceable presence, the position of the migrants within Dominican cities appears subjected to equally oblique and non-evident menaces. Looked at from the perspective offered by “Aproximaciones,” Ciudad de sombras appears Ortiz approaches the problems derived from the complexity of coexistence in the “quiet tragedy” of marginal migrations. A sense of isolation emerges, distancing the viewer from any kind of imperative of the sublime. If we can associate shipwrecks with both the sublime enjoyment of a contemplator, and the forceful energy of the sea (see Guillén 2004, 19–25), we cannot find any of those elements here. The figure of the Romantic contemplator of nature seems to have been exiled, for the apocalyptic tone of the picture excludes any possibility of witnessing it. In front of those shipwrecks we are alone. Whereas in Ciudad de sombras the difficulties in defining an encompassing citizenry were presented through the generic character of shadows, in this case strangeness and temporal and corporal dislocation work in the same way, expelling us from the pathos of tragedy. This sense is reinforced by the relation between the mannequins and the landscape. 
Unlike the characters of Ciudad de sombras, integration does not seem possible in this case: on the contrary, the bodies appear as debris, either consumed or expelled by the sea. Shadowy presences: ambivalence and accumulation An interesting element distancing Ortiz’s photo- graphs from the artistic imaginary dealing with the Dominican geopolitical landscape is precisely driven by the ambivalence of both series. The artist is careful enough to gauge a literal approach to the issues dealt with or an illustration of social or political predicaments. Consider, for example, “Aproximaciones,” one of the most symbolic photographs of Ciudad de sombras. In this trip- tych, both the shadows and the walls are defined by an increasing tension between presence and disappearance. The image at the left suggests an abstract menace, portraying a human silhouette with clenched fists walking towards a close fence. This tension materializes in the photograph at the right, where the rotundity of the figures gives way to an elusive figure carrying what seems to be dis- posal. This interpretation is reinforced by the layers 12 Shadowy presences: mobility, labor and absence in the work of Dominican photographer Fausto Ortiz Carlos Garrido Castellano Photography & Culture Volume XX Issue X March 2017, pp. 12–16 and have light-colored eyes, an element that contradicts the interpretation of the scenes as shipwrecks of Dominican migrants. As in Ciudad de sombras, the three possibilities are connected through the accumulation of referents. In this case, the photographs locate mobility at the core of a political economy of corporal and sexual representation that replicates and challenges the official values concerning gender and race; primarily Western whiteness and decency appear not only as a symbol of social and economic status, but also as an indicator of national belonging. The photographs composing both series are, thus, distanced from a direct thematization of migration. 
They allude to the political economy of the complex network of mobility practices that lie behind the official image of rigid borders in the context of La Española, without forgetting the historical causes and particular conflicts shaping Dominican history. Ortiz escapes from an identification of the migrating subject as other, and questions its visibility within the public space. The human bodies present in both series link several temporalities and situations, pointing out at the pervasiveness of different processes of marginalization and violence. Conclusions In this essay I have attempted to show that trans- national relations are rethought in Ortiz’s work from a complex perspective. Instead of simply asserting a vision of them based on difference and marginalization, Ortiz employs opacity to unearth the contradictions implicit in the Dominican context, transferring the precariousness associated with labor migration and prostitution to the very core of present-day debates on citizenship and national identification. In so doing, he troubles the social, transnational landscape where those exchanges are located. At stake here is how to approach situations of inequality associated with transnational processes of personal and economic movement without falling into a narrative of anec- dotal readings. In this text I have shown how Ortiz does so by juxtaposing historical referents and elusive and ambiguous, but also deeply rooted in the continuity of violence and repression within the Dominican context. A straightforward reading of the series renders it a tribute to the role played by Haitian migrant workers in the construction of Dominican cities landscape. Not by accident, the photographs were taken in Santiago de los Caballeros, the birthplace of Ortiz, and they depict Haitian individuals and families living there. 
When asked who the people are depicted through the shadows, however, Ortiz points to the centrality of anonymity in every city, therefore restricting the identification of the shadows with Haitian bodies: But certainly those silhouettes meant a lot for me in that particular moment, but now they came to represent anonymity. The anonymity of the city. Because of that I mentioned now about the formal and the informal city; in all contemporary cities you have groups that do not count; groups that are there, but that do not appear anywhere. The “absents.” Those who does not have a number, an ID, a code, or simply those who stay in the shadows of the city, being required and used only when labor force is needed. That happens everywhere. Many countries have the same situation.12 As we have seen in “Aproximaciones,” this is only partially true, since the triptych locates the photographs beyond the predicament of migrant labor through the ambiguity and interchangeable character of the shadow. Something similar occurs in Remainders. Three possible readings thus overlap in this series: first, the fact that all the bodies are female recalls sexual economy and displacement13, one of the most traumatic forms of transnationalism (see, for example, Brennan 2004; Fusco 1996; or Kempadoo 2004). At the same time, the composition of some of the images, with the sunken figures in front of a paradisiac landscape, reinforces the tragic tone of the photographs, alluding to the shipwrecks of Dominican illegal migrants to Puerto Rico. Furthermore, all the bodies are white and shaped Carlos Garrido Castellano Shadowy presences: mobility, labor and absence in the work of Dominican photographer Fausto Ortiz 13 Photography & Culture Volume XX Issue X March 2017, pp. 13–16 Furthermore, we have seen how both Ciudad de sombras and Remainders grapple with the political economy of race and gender within the Dominican context (Howard 2001; Sargás 2002). 
By the contrast posed by black silhouettes and white mannequins, Ortiz dismantles the distance between self and others, and the racial implications implicit in the process of definition of Dominican nationalism. As the representation of borders are challenged and finally excluded through the opacity of shadows and the unreal appearance of the sunken bodies, Ortiz challenges straightforward interpretations of racial difference. By portraying voices and processes that lie behind the glance of the modernization of the country, Ortiz rethinks the corporeal logic of representation in the Dominican context, including Dominican subjects not as privileged viewers, but as part of the same predicament. Ortiz’s work conveys a double interest: while addressing particularly sharp issues for the configuration of Caribbean societies, such as accessibility, mobility and nation building, he also demonstrates an interest in generating a terrain where particular issues can be expanded and confronted outside thematization. Concerned with the ways in which Caribbean artistic discourses are consumed, tokenized and commoditized, he often separates the use of artworks from a mere illustration of theory and social concerns, producing photographs that capture the different maneuvers of representation taking place in the public domain. Ortiz’s artistic practice is, in that sense, not addressing phenomena that occur at the margins, nor addressing “problematic” issues. Rather, he is questioning the debates on the economic and political relations that lie at the center of postcolonial Caribbean societies. In Ciudad de sombras as well as in Remainders, we are confronted by decisive questions: Who are the shadows and denied bodies of any given urban reality? What kind of relations do they maintain within the contexts that they inhabit? In what way do they belong to those contexts? 
These questions can then be considered as effective approaches everyday actions or by erasing the identity of the photographed bodies. Through the analysis of some of the photographs composing Ciudad de sombras and Remainders, I have outlined how the bodies appearing in the pictures should be located within a transnational framework, opened to multiple interpretations more than depicting and individuating any community. Ortiz generates a landscape where part of its inhabitants cannot recognize themselves, being consequently excluded from representation. What those bodies suggest is a generalization of precariousness associated with urban experiences of empowerment and marginalization. Their negative presence, however, suggest that any conception of that landscape is unviable without considering its absences. In that sense, Ortiz challenges the human vacuums existing within transnational economic exchanges, exploring the ways in which those exchanges exclude the active voices of many of their participants. As shadows contaminate our view of the inert space of the city, they corroborate their participation in the making of spatial uses of the everyday. Ortiz’s silhouettes express, thus, a minor resistance (De Certeau 1980, 30–31), and a concern on the need to revise the inclusiveness associated with public spaces. In Ortiz’s series, vulnerability is portrayed as constituent of citizenship. Although a first glance would suggest an identification of both series’ figures with marginal subjects, the ambivalence and overlapping of histories the photographs portray points to the centrality of mobility and precariousness in the constitution of Dominican citizenship. In both series, examining human trafficking becomes important from the moment that both transnational sex practices and labor migration are not thematized and isolated, but located as crossing all the spheres of the Dominican society14. 
In that sense, Ortiz propels us to consider that no image of transnational connections in the Dominican case is possible outside of those “marginal movements” (Puri 2004). 14 Shadowy presences: mobility, labor and absence in the work of Dominican photographer Fausto Ortiz Carlos Garrido Castellano Photography & Culture Volume XX Issue X March 2017, pp. 14–16 de despojo, mutilación y expulsión.” My translation. (Garrido Castellano 2011). 4. “Yo creo que la mejor forma de representar a esa persona que entra y sale de manera fugaz a la ciudad y no deja huella es a través de la sombra. La sombra que se proyecta en la ciudad, en las paredes, y esas paredes que muchos de ellos construyen, definitiva- mente no sólo sirven para albergarles...Digo, hablo en este caso también de los dominicanos que viven en Nueva York, o en España, por decir algo. Mucha gente entendía que el hecho de que fueran sombras lo estaba vinculando al color de la piel, pero no nece- sariamente se trata de ello.” My translation. (Garrido Castellano 2011). 5. The “liberation” of the country from the Haitian dominance in 1844 still plays a central role in the popular Dominican imaginary, conditioning both the racial views of self and Haitian others, and the pres- ent predicament between both countries. 6. Trujillo´s anti-Haitian campaign reached its maximum with the Parsley Massacre of October 1937, when the Dominican military killed thousands of Haitians near the border. 7. The crisis affects not only Haitian citizens, but also individuals born in Dominican Republic, since the law regulating the expulsions retroactively denies Domini- can citizenship to any person born on Dominican soil after 1929 of undocumented foreigners. 8. The action, called Trenzados, implied weaving together the hair of eight Haitian women and eight Dominican women. On Guzmán, see http://hemisphericinstitute.org/hemi/es/enc09-perfor- mances/item/56-09-sayuri-guzm%C3%A1n 9. 
The work of other artists, such as Giuseppe Riggio, Jorge Pineda or Patricia Castillo could also be men- tioned here. I have dealt with this topic in (Garrido Castellano 2010; Garrido Castellano 2015). 10. The construction of a collective memory recalls Su- san Meiselas’s interest in approaching the complexity of the societies of El Salvador and Nicaragua under situations of transformation after insurrectional mo- ments. See Breckenridge 2006. 11. Balaguer governed Dominican Republic three times, between 1960 and 1962, 1966 and 1978, and 1986– 96. The doce años period refers to his second period of governance, the time when most of the progress in development was made. Paradoxically enough, to interrogate contemporary (Caribbean) cultural practices. For migrations and displacement imply not only a vision of fluidity, but also one of immobility, precariousness and restriction. My reading of the work of Fausto Ortiz has attempted to show, therefore, that Caribbean imageries are proposing new ways of understanding conflict and change from a wider perspective, to one which escapes restrictive and monolithic visions. By addressing the cultural and economic exchanges along the Dominican borders, Ortiz situates himself and the viewer within, and against, a landscape wherein both can be disoriented. Acknowledgements This work was funded by a post-doctorate fellow- ship granted by the Portuguese Fundação para a Ciência e Tecnologia (FCT). The research devel- oped here was produced over several long-term research stays in the Dominican Republic and the United States. The final version of the article ben- efitted from the comments of Nikizzi Serumaga. Special thanks to her and to Fausto Ortiz. Disclosure statement No potential conflict of interest was reported by the author. Notes 1. The imaginary of Dominican national identity has traditionally been associated with four elements: the national territory, Spanish language, Catholic religion and racial miscegenation. 
All of those elements, as we will see, reveal themselves to be problematic and contradictory. 2. Santiago de los Caballeros is the second biggest city of Dominican Republic after Santo Domingo. Located in the landlocked agricultural and stock region of Cibao, the city is infamous for maintaining a more conservative atmosphere than the capital. However, Santiago is also a vibrant economic center, which causes significant social disparity among its population. 3. “Los desechos del mar es lo que el mar expulsa hacia afuera; en este caso yo pretendía dar esa idea http://hemisphericinstitute.org/hemi/es/enc09-performances/item/56-09-sayuri-guzm%C3%A1n http://hemisphericinstitute.org/hemi/es/enc09-performances/item/56-09-sayuri-guzm%C3%A1n http://hemisphericinstitute.org/hemi/es/enc09-performances/item/56-09-sayuri-guzm%C3%A1n Carlos Garrido Castellano Shadowy presences: mobility, labor and absence in the work of Dominican photographer Fausto Ortiz 15 Photography & Culture Volume XX Issue X March 2017, pp. 15–16 Nicaragua.” Chap. 3. In Photography and Writing in Latin America: Double Exposure, edited by Marcy E. Schwartz and Mary Beth Tierney-Tello, 59-87. Albuquerque, NM: University of New Mexico Press. Brennan, D. 2004. What’s Love Got to Do with It? Durham: Duke University Press. Burman, J. 2002. “Remittance; Or, Diasporic Economies of Yearning.” Small Axe 6 (2): 49–71. de América, Casa, ed. 2002. Arte contemporáneo domini- cano [Contemporary Dominican Art]. Madrid: Casa de América. Cozier, C., and T. Flores. 2012. Wrestling with the Image. Caribbean Interventions. Washington D.C: World Bank. De Certeau, M. 1980. L′invention du quotidien. Arts de faire. Paris: Union Générale d′Editions. De los Santos, D. 2003. Memoria de la pintura domini- cana [Memory of Dominican Painting]. Santiago de los Caballeros: Grupo León Jimenes. Duany, J. 2011. Blurred Borders. Chapel Hill: The University of North Carolina Press. Enwezor, O. 2006. 
The Unhomely: Phantom Scenes in Global Society: 2nd International Biennial of Contemporary Art of Seville. Seville: Fundación Bienal Internacional de Arte Contemporáneo de Sevilla.

many of the most relevant Dominican cultural institutions, among them those gathered in the Plaza de la Cultura Juan Pablo Duarte in Santo Domingo, were also established in the 1970s.

12. "Pero ciertamente esas siluetas para mí significaban mucho en aquel momento, aunque ahora vinieron a representar el anonimato. El anonimato de la ciudad. Por eso te hablaba ahora de la ciudad formal y la ciudad informal; en toda ciudad actual aparecen aquellos grupos que no cuentan; que están, pero que no aparecen en ninguna parte. Los ausentes. Los que no tienen un número, un ID, un código, o simplemente los que permanecen en las sombras de la ciudad, y solamente son requeridos y utilizados cuando se necesita la fuerza laboral; eso pasa en todas partes del mundo. Muchos países tienen la misma situación." My translation. (Garrido Castellano 2011).

13. The remains of the shipwreck can thus be approached as a recognition and critical reading of the contradictions of the "idyllic landscape" of interracial, intercultural love, which frequently conceals sex tourism, a phenomenon that has created a parallel economy in which low-income women can find a quick route to social promotion, though not one exempt from a high risk of marginalization and impoverishment that the weight of remittances cannot attenuate.

14. Prostitution and migration can also be found at the origin of the "remittance settlements" occupied by men whose partners are working in Europe as sex workers. For a review of the relations between remittances and architecture, see Lynn Lopez (2010). Remittances, however, widely exceed this phenomenon and must not be connected solely with prostitution; rather, they imply a broad range of situations, gestures and stories (Burman 2002).

Carlos Garrido Castellano holds an FCT post-doctoral fellowship at the Centro de Estudos Comparatistas and the Instituto de História de Arte at the University of Lisbon. His research interests focus on visual culture, critical theory and collaborative artistic practices. He has authored two books on contemporary Caribbean art. He currently coordinates the research project "Comparing Wes. Collectivism, Emancipation, Postcoloniality".

Shadowy presences: mobility, labor and absence in the work of Dominican photographer Fausto Ortiz. Carlos Garrido Castellano. Photography & Culture, Volume XX, Issue X, March 2017, pp. 16–16.

References

Breckenridge, J. 2006. "Narrative Imag(in)ing: Susan Meiselas Documents the Sandinista Revolution in
Flores, J. 2010. The Diaspora Strikes Back. Caribeño Tales of Learning and Turning. New York: Routledge.
Fumagalli, M. C. 2013. "'Isla Abierta' or 'Isla Cerrada'?: Karmadavis's Pre- and Post-Earthquake Hispaniola." Bulletin of Latin American Research 32 (4): 421–437.
Fumagalli, M. C. 2015. On the Edge: Writing the Border between Haiti and Dominican Republic. Liverpool: Liverpool University Press.
Fusco, C. 1996. "Hustling for Dollars." MS Magazine, September–October: 62–70.
Garrido Castellano, C. 2010. "Rompiendo el mapa. Cuatro visiones sobre la frontera en el arte actual dominicano." Intercambio. Revista sobre Centroamérica y el Caribe 7/8: 37–53.
Garrido Castellano, C. 2011. Arte en diálogo. Conversaciones sobre práctica artística contemporánea, identidad e integración cultural en República Dominicana. Santo Domingo: Centro Cultural de España.
Garrido Castellano, C. 2015. "On Wanting Images and Shared Responsibilities. Belkis Ramírez's 'De la misma madera'." Miradas. Elektronische Zeitschrift für Iberische und Ibero-amerikanische Kunstgeschichte 2: 23–31.
Ginebra, F., ed. 2009. Arte dominicano joven: márgenes, género, interacciones y nuevos territorios [Recent Dominican Art: Margins, Gender, Interactions and New Territories]. Santo Domingo: Casa de Teatro.
Guillén, E. 2004. Naufragios: imágenes románticas de la desesperación [Shipwrecks: Romantic Images of Despair]. Madrid: Siruela.
Hannerz, U. 1996. Transnational Connections: Culture, People, Places. London and New York: Routledge.
Hermann, S. 2012. "Unconscious Curating." Chap. 6 in Curating in the Caribbean, edited by David Bailey, Alissandra Cummins, Axel Lapp and Allison Thompson, 85–97. Berlin: The Green Box.
Howard, D. 2001. Coloring the Nation: Race and Ethnicity in the Dominican Republic. Oxford: Lynne Rienner Publishers.
Itzigsohn, J. 2001. "Living Transnational Lives." Diaspora: A Journal of Transnational Studies 10 (2): 281–296.
Kempadoo, K. 2004. Sexing the Caribbean. London and New York: Routledge.
Kempadoo, R. 2013. "Gazing Outward and Looking Back: Configuring Caribbean Visual Culture." Small Axe 17: 136–153.
Lynn Lopez, S. 2010. "The Remittance House: Architecture of Migration in Rural Mexico." Buildings & Landscapes: Journal of the Vernacular Architecture Forum 17 (2): 33–52.
Martínez-Vergné, T. 2005. Nation & Citizen in the Dominican Republic, 1880–1916. Durham: University of North Carolina Press.
Martínez San Miguel, Y. 1998. "De ilegales e indocumentados: representaciones culturales de la migración dominicana en Puerto Rico" [On Illegals and the Undocumented: Cultural Representations of Dominican Migration in Puerto Rico]. Revista de Ciencias Sociales 4: 147–173.
Martínez San Miguel, Y. 2001. "A Caribbean Confederation? Cultural Representations of Cuban and Dominican Migrations to Puerto Rico." Journal of Caribbean Literatures 3 (1): 93–110.
Martínez San Miguel, Y. 2003. Caribe Two Ways. Cultura de la migración en el Caribe insular hispánico [Caribe Two Ways. Culture of Migration in the Hispanic Insular Caribbean]. San Juan, Puerto Rico: Ediciones Callejón.
Miller, J. 2012. 1844–2000. Arte dominicano. Escultura, instalaciones, medios no tradicionales y arte vitral [1844–2000. Dominican Art. Sculpture, Installations, Non-Conventional Media and Stained Glass Art]. Santo Domingo: Codetel.
Mohammed, P. 2011. Imaging the Caribbean: Culture and Visual Translation. New York: Palgrave Macmillan.
Puri, S. 2004. The Caribbean Postcolonial. New York: Palgrave Macmillan.
Sagás, E. 2002. Race and Politics in the Dominican Republic. Miami, FL: University Press of Florida.
Sheller, M. 2003. Consuming the Caribbean: From Arawaks to Zombies. London and New York: Routledge.
Sheller, M. 2004. "Demobilizing and Remobilizing Caribbean Paradise." In Tourism Mobilities: Places to Play, Places in Play, edited by M. Sheller and J. Urry, 13–22. London and New York: Routledge.
Stephens, M. 2013. "What Is an Island? Caribbean Studies and the Contemporary Visual Artist." Small Axe 17: 8–26.
Stoler, A. L., ed. 2013. Imperial Debris: On Ruins and Ruination. Durham: Duke University Press.
Torres Saillant, S. 1999. El retorno de las yolas: ensayos sobre diáspora, democracia y dominicanidad [The Return of the Yolas: Essays on Diaspora, Democracy and Dominicanness]. Santo Domingo: Manatí.
Wainwright, L. 2011. Timed Out: Art and the Transnational Caribbean. Manchester: Manchester University Press.

work_ejhiqhxfdzfhjj5lve5f4w4pjq ---- CCID_A_225881 77..85

ORIGINAL RESEARCH

Effect of Topical 5-Fluorouracil Alone versus Its Combination with Erbium:YAG (2940 nm) Laser in Treatment of Vitiligo

Mahetab Abdelwahab 1, Manal Salah 2, Nevien Samy 2, Ahmad Rabie 1, Abdelrazik Farrag 3
1 Department of Dermatology and Venereology, Medical Division, National Research Centre, Cairo, Egypt; 2 Department of Medical Applications of Lasers (MAL), National Institute of Laser Enhanced Sciences, Cairo University, Giza, Egypt; 3 Department of Pathology, Medical Division, National Research Centre, Cairo, Egypt

This article was published in the following Dove Press journal: Clinical, Cosmetic and Investigational Dermatology.

Video abstract: https://youtu.be/RNPmA6PkfGo

Purpose: To compare the efficacy of topical 5-FU as monotherapy with combined therapy of topical 5-FU and Er:YAG (2940 nm) laser in the treatment of non-segmental vitiligo (NSV).

Methods: This was a prospective randomized comparative study. Thirty patients diagnosed with NSV were recruited from the dermatology outpatient clinics of the Medical Research Centre of Excellence, the National Research Centre and the National Institute of Laser Enhanced Sciences. The study group was divided into two subgroups: Group 1 was subjected to ablative Er:YAG laser plus 5-FU cream and Group 2 applied topical 5-FU cream alone. Three treatment sessions were repeated every 4 to 6 weeks and patients were followed up to 9 months. Repigmentation was assessed by digital photography and subsequent computer-based image analysis.
Results: Repigmentation in Group 1 patients ranged from 0 to 70% (mean 12±7%), whilst in Group 2 it ranged from 0 to 5% (mean 1.4±0.8%). In Group 1, repigmentation was mild in 22/30 (73.3%) and moderate to severe in 3/30 (10%), starting after 3 months and persisting or increasing during the follow-up period of up to 9 months. Groups 1 and 2 were each subdivided into A and B: vitiligo involving non-resistant and resistant areas, respectively. Group 1A showed more repigmentation (mean 13.8±8.5%) than Group 1B (mean 9.8±4.5%), and Group 2A showed more repigmentation (mean 1.5±1%) than Group 2B (mean 1.3±0.5%).

Conclusion: The combination of Er:YAG laser with 5-FU is safe and effective in treating vitiligo and improving outcome, especially in non-resistant areas. Computer-based image analysis of vitiliginous lesions for assessing post-therapy response is an easy, quick, and reliable method.

Keywords: 5-fluorouracil, ablative, Er:YAG, vitiligo, image analysis, resistant

Introduction

Vitiligo is an acquired disease with a variable course.
It is considered the most common depigmentation disorder, affecting approximately 0.5 to 2% of the population.1 Vitiligo continues to be a major dermatologic challenge in spite of the availability of many therapeutic modalities.2 No single therapy for vitiligo leads to satisfactory results in all patients, but the combination of surgical modalities and medical treatment might lead to faster improvement and better pigmentation.3 Amongst these new promising treatment modalities is 5-fluorouracil (5-FU), which can improve vitiliginous lesions and decrease treatment duration with better patient compliance.4 It was postulated that 5-FU could exert repigmentation in vitiligo by direct stimulation of melanocytes and an increase in the number of melanosomes in the keratinocytes.5 Several studies applied 5-FU after mechanical dermabrasion to vitiliginous lesions with successful results.6,7 Erbium:YAG (Er:YAG) laser is considered superior to ordinary dermabrasion because of its optimum bleeding effect at recipient ablated sites and its precise ablation of the epidermis. Its use as a method of delivery of 5-FU achieved 50 to 75% repigmentation in 33.3% of Egyptian vitiligo patients followed for 15 months.8 The aim of our study was therefore to compare the efficacy of topical 5-FU as monotherapy with combined therapy of topical 5-FU and Er:YAG laser in the treatment of non-segmental vitiligo (NSV) in a cohort of Egyptian patients.

Correspondence: Mahetab Abdelwahab. Tel +20 101 465 7840. Email mahysmr@gmail.com

Clinical, Cosmetic and Investigational Dermatology 2020:13 77–85. http://doi.org/10.2147/CCID.S225881

© 2020 Abdelwahab et al. This work is published and licensed by Dove Medical Press Limited. The full terms of this license are available at https://www.dovepress.com/terms.php and incorporate the Creative Commons Attribution – Non Commercial (unported, v3.0) License (http://creativecommons.org/licenses/by-nc/3.0/). By accessing the work you hereby accept the Terms. Non-commercial uses of the work are permitted without any further permission from Dove Medical Press Limited, provided the work is properly attributed. For permission for commercial use of this work, please see paragraphs 4.2 and 5 of our Terms (https://www.dovepress.com/terms.php).

Clinical, Cosmetic and Investigational Dermatology downloaded from https://www.dovepress.com/ by 128.182.81.34 on 06-Apr-2021. For personal use only.

Patients and Methods

Ethics
The study was approved by the Dermatology Department ethical committee, National Research Centre (NRC). Patients gave their written informed consent to participate in the study and to be photographed. The nature of the treatment, including potential benefits, risks, and side effects, was explained to each patient.

Patients
Thirty patients diagnosed with non-segmental vitiligo were recruited from the dermatology outpatient clinics of the Medical Research Centre of Excellence (MRCE), the National Research Centre (NRC) and the National Institute of Laser Enhanced Sciences (NILES). Patients of both genders were included, aged ≥18 years, with Fitzpatrick skin types II to IV.
Patients were enrolled in the study if they had stable multiple vitiliginous lesions (showing no progression of old lesions and/or development of new lesions, as well as absence of the Koebner phenomenon) for 1 year before enrolment. Inclusion criteria also required that the patient had not received systemic or topical treatment for the vitiliginous lesions for 3 months and 2 weeks before enrolment, respectively. Patients were excluded if they had co-morbidities such as other autoimmune disorders, hepatic or renal dysfunction, or bleeding disorders; were receiving aspirin for a medical cause; had burns, Koebner phenomenon, or keloid at the site of the vitiliginous area; or were pregnant and/or lactating. The vitiliginous site(s) chosen to conduct the study were randomly selected.

Methods
This was a prospective randomized comparative study. All patients were subjected to thorough history taking, including family history, and clinical assessment. Lesions were divided into two groups according to the treatment received. In each patient, one vitiliginous lesion was subjected to ablative Er:YAG laser plus 5-FU cream (Group 1) and topical 5-FU cream was applied to a different vitiliginous lesion (Group 2) as comparative arms of the study.

Group 1 Procedure
The affected skin area was sterilized with Betadine antiseptic solution, and lidocaine hydrochloride (Pridocaine) cream was then applied under occlusion for 30 to 45 mins for local anaesthesia. The laser used was an Er:YAG 2940 nm (Fotona Medical Lasers) with a surgical handpiece of spot size 4 mm and a fluence of 60 J/cm2, applying 2–3 passes with pinpoint bleeding as the endpoint. After each pass, the skin was rehydrated with moist saline-soaked gauzes. Tissue debris was removed by gently rubbing and wiping the treated area with dry gauze. Fluorouracil cream (5-FU) was then applied once daily for 2 weeks under occlusion. Topical antibiotic (fusidic acid 2% cream) covered by gauze was applied for another 2 weeks.
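For orientation, the per-pulse energy implied by the handpiece settings above can be estimated from the definition of fluence (energy per unit area). The sketch below is illustrative only: it assumes a uniform circular 4 mm spot, which the paper does not specify, and the function name is ours.

```python
import math

def pulse_energy_joules(fluence_j_per_cm2: float, spot_diameter_mm: float) -> float:
    """Energy per pulse = fluence x spot area, assuming a uniform circular spot."""
    radius_cm = (spot_diameter_mm / 10) / 2
    area_cm2 = math.pi * radius_cm ** 2
    return fluence_j_per_cm2 * area_cm2

# Settings reported in this study: 60 J/cm^2 fluence, 4 mm spot.
energy = pulse_energy_joules(60, 4)
print(f"{energy:.2f} J per pulse")
```

With these settings the spot area is about 0.126 cm2, so each pulse delivers roughly 7.5 J over the treated spot.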
Patients underwent 3 treatment sessions, one every 4 to 6 weeks, and were followed up to 9 months to assess the response.

Clinical Assessment
Patients were assessed clinically at the first visit before starting therapy, and lesions were documented by photography. Patients were then evaluated at 3 and 6 months after their third session to assess their response to therapy, using digital photography with subsequent computer-based image analysis. Patients underwent liver function tests [serum glutamic-oxaloacetic transaminase (SGOT), serum glutamic pyruvic transaminase (SGPT), bilirubin] and a complete blood picture before enrolment in the study and at the start of each session to detect any laboratory side effects of 5-FU. Patients were assessed for local side effects such as erythema, post-inflammatory hyperpigmentation, pain, and itching during the follow-up visits and at the end of the study.

Group 2
5-FU cream was applied as a monotherapy once daily for 2 weeks to another vitiliginous lesion in the same patient. Both sites in the same patient were compared.

Assessment of Response to Therapy

Digital Photography
This was conducted using a Sony digital camera with standardized distance and lighting conditions (magnification × 1 at a distance of 20 and 30 cm). A set of standardized photographs was taken initially and at 3 and 6 months after the third session of treatment, focusing on the lesion and documenting any changes, whether repigmentation and/or erythema, crust formation, oozing, or post-inflammatory hyperpigmentation. Photographs were obtained using identical camera settings, lighting, background, and patient positioning to ensure the consistency of images. The images were stored in JPG file format.

Image Analysis (Morphometric Measurement)
The photographs were sent for computerized digital image analysis. Morphometric measurements were made using a Leica Quin500 Image Analyzer (LEICA Imaging Systems Ltd, Cambridge, London) at the Pathology Department, National Research Centre, Cairo. The area of each lesion was measured in µm2 at baseline and compared with the area after treatment, and the degree of repigmentation was calculated as a percentage. The data were copied to Excel sheets and statistically analyzed. Repigmentation response was graded according to Njoo et al as mild (<25%), moderate (26–75%), or marked (>75%).9 Patients' response to therapy was divided according to site into response of flat/non-resistant/hairy areas and of non-flat/resistant/non-hairy areas, and the two were compared.

Histopathological Assessment Technique
Punch biopsies 4 mm in size were taken from the lesion sites at baseline and post therapy. Samples were fixed in 10% formol saline for 24 hrs. Afterwards, samples were washed in tap water, dehydrated in ascending grades of ethanol, cleared in xylene, and embedded in paraffin wax (melting point 55–60°C). Sections of 6 µm thickness were stained with haematoxylin and eosin (H & E; Drury and Wallington, 1980), showing the cytoplasm as shades of pink and red with blue nuclei.
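The morphometric step described under Image Analysis above reduces to a simple area comparison, and the Njoo et al grading is a threshold on the resulting percentage. A minimal sketch follows; the function names and the handling of the boundary at exactly 25% are our assumptions, not features of the Leica software:

```python
def repigmentation_percent(baseline_area: float, post_area: float) -> float:
    """Percent of the baseline depigmented area that regained pigment.

    Areas must share a unit (the study measured them in um^2 on
    calibrated photographs); lesions that enlarge are clamped to 0%.
    """
    if baseline_area <= 0:
        raise ValueError("baseline area must be positive")
    return max(0.0, (baseline_area - post_area) / baseline_area * 100)

def njoo_grade(percent: float) -> str:
    """Grade per Njoo et al.: mild <25%, moderate 26-75%, marked >75%."""
    if percent < 25:
        return "mild"
    if percent <= 75:
        return "moderate"
    return "marked"

pct = repigmentation_percent(baseline_area=1_000_000, post_area=300_000)
print(pct, njoo_grade(pct))  # 70.0 moderate
```

For example, a lesion of 1,000,000 um2 at baseline that shrinks to 300,000 um2 after therapy has repigmented by 70%, a moderate response on this scale.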
In both arms of the study, Group 1 (ablative Er:YAG followed by 5-FU cream) and Group 2 (topical 5-FU cream), prominent melanin pigmentation and expression of HMB45 (Human Melanoma Black 45) were assessed. The sections were examined under a light microscope (Leica Q500IW, Leica DMLB) photomicroscope with position captors and a CCD video camera module N50 (JVC TK-C1380). Images were captured at 100× and 400× magnification.

Results
Thirty patients with non-segmental vitiligo (NSV) not responding to treatment with topical and/or oral corticosteroids, tacrolimus, phototherapy, and psoralen plus ultraviolet A light were enrolled in this study. Our cohort included 12 (40%) males and 18 (60%) females. Their age ranged from 15 to 59 years (mean 34.7±12.6 years). The disease duration ranged from 2 to 33 years (mean 12.6±8.2 years). The skin type, family history of autoimmune disease, previously received treatment, and type of NSV of the 30 patients diagnosed with vitiligo are shown in Table 1.

Table 1 Skin Type, Family History of Autoimmune Disease, Previously Received Treatment, and Type of NSV of Our Studied Cohort (N=30)

Skin type: III, 14 (46.7%); IV, 16 (53.3%)
Family history of autoimmune disease: yes, 19 (63.3%); no, 11 (36.7%)
Treatment received – steroid: yes, 23 (76.7%); no, 7 (23.3%)
Tacrolimus: yes, 12 (40.0%); no, 18 (60.0%)
Ezaline: yes, 16 (53.3%); no, 14 (46.7%)
NB-UVB: yes, 17 (56.7%); no, 13 (43.3%)
Type of vitiligo: acrofacial/acral, 2 (6.7%); universalis, 1 (3.3%); vulgaris, 27 (90.0%)

Abbreviation: NB-UVB, narrow band ultraviolet radiation B.

Repigmentation Assessed by Image Analysis
We divided our patient cohort diagnosed with NSV into 2 subgroups according to the site involved with vitiliginous lesions and/or its response to therapy (repigmentation): vitiligo involving flat (F)/non-resistant/hairy areas not involving joints (Group A, n=16), and vitiligo involving non-flat (NF)/resistant/non-hairy areas, those involving and/or surrounding joints (Group B, n=14). Group A included patients with vitiligo involving the nape of the neck (n=2, 12.5%), the hands (n=3, 18.8%), the axilla (n=1, 6.3%), the trunk (n=3, 18.8%), the forearm (n=2, 12.5%), the lower limbs (n=4, 25%), and the back (n=1, 6.3%). Group B included patients with vitiligo involving the wrists (n=3, 21.4%), the elbows (n=7, 50%), the knuckles (n=1, 7.1%), and the knees (n=3, 21.4%). In females, vitiligo was usually vulgaris (94.4%) and involved resistant areas; 10/14 (71.4%) of Group B patients were female.

The repigmentation of Group 1 (Er:YAG followed by 5-FU cream) and Group 2 (5-FU cream alone), as well as of subgroups A (flat) and B (non-flat), is shown in Table 2.

Table 2 Repigmentation of Group 1 (Er:YAG Followed by 5-FU Cream) and Group 2 (5-FU Cream) Patients and Subgroups A (Flat) and B (Non-Flat)

Repigmentation          Group 1      Group 2
Range                   0–70%        0–5%
Mean±SD                 12±7%        1.4±0.8%
A (flat, n=16)          13.8±8.5%    1.5±1%
B (non-flat, n=14)      9.8±4.5%     1.3±0.5%
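The subgroup figures in Table 2 are plain means and standard deviations of per-lesion repigmentation percentages, split by site type. A sketch of that split, using made-up values since the study's per-lesion data are not published:

```python
from statistics import mean, stdev

# Hypothetical per-lesion repigmentation percentages tagged by site type
# (illustrative values only; the study's real data are summarized in Table 2).
lesions = [
    ("flat", 18.0), ("flat", 9.5), ("flat", 21.0), ("flat", 6.0),
    ("non_flat", 11.0), ("non_flat", 7.5), ("non_flat", 13.0),
]

def subgroup_summary(records, site):
    """Mean and sample SD of repigmentation for one site subgroup."""
    values = [v for s, v in records if s == site]
    return mean(values), stdev(values)

for site in ("flat", "non_flat"):
    m, sd = subgroup_summary(lesions, site)
    print(f"{site}: {m:.1f} +/- {sd:.1f}%")
```

Note that `statistics.stdev` computes the sample (n-1) standard deviation; the paper does not state which convention its ±SD values use.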
In Group 1 patients, repigmentation was mild in 22/30 (73.3%) and moderate/severe in 3/30 (10%). The repig- mentation started after 3 months and persisted/increased during the period of follow up of 9 months. It is important to note that amongst flat areas, the nape of the neck and the abdomen showed the most significant improvement. Figures 1 and 2 show repigmentation of the forearm and nape of the neck in a 35 and an 18 year old patient with NSV, respectively. Repigmentation is usually at the periphery and perifol- licular, and then moves to the centre. Histopathological Results Before and After Therapy Histopathological examination of vitiliginous sections stained with H & E before treatment showed a complete absence of mel- anin pigmentation. The dermis revealed a decreased num- ber of adnexal structures and mild predominantly lymphocytic inflammatory infiltrate. Apoptotic melano- cytes and single mononuclear cells in the papillary dermis Figure 1 A 35 year old female with NSV of the left forearm before treatment (A); and after treatment with Er:YAG laser and 5-FU cream (B); and showing repigmentation (C). Right forearm of the same patient before treatment (D); and after treatment with 5-FU cream (E). Figure 2 An 18 year old female with NSV of the nape of the neck before treatment (A). (B) Same patient after treatment, right half with Er:YAG laser and 5-FU cream while left half treated with 5-FU only. Abdelwahab et al Dovepress submit your manuscript | www.dovepress.com DovePress Clinical, Cosmetic and Investigational Dermatology 2020:1380 C lin ic a l, C o sm e tic a n d I n ve st ig a tio n a l D e rm a to lo g y d o w n lo a d e d f ro m h tt p s: // w w w .d o ve p re ss .c o m / b y 1 2 8 .1 8 2 .8 1 .3 4 o n 0 6 -A p r- 2 0 2 1 F o r p e rs o n a l u se o n ly . Powered by TCPDF (www.tcpdf.org) 1 / 1 http://www.dovepress.com http://www.dovepress.com moving towards the epidermis were also found (Figures 3A, 4A). 
After treatment with combined therapy of Er: YAG followed by 5-FU cream (Group 1) or 5-FU mono- therapy (Group 2), examination of histopathological sec- tions showed significant improvement, repigmentation in the form of the presence of melanocytes in the basal layer of epidermis and reduction of mononuclear cells in the papillary dermis. Improvement was more pronounced on using combined therapy (Figures 3C, 4B) as opposed to monotherapy with 5-FU (Figures 3B, 4C). Figure 3 An 18 year old female with NSV of the nape of the neck (A) before treatment showing apoptotic melanocyte, single mononuclear cells in the papillary dermis moving towards the epidermis (H & E stain, scale bar = 5 µm). (B) After treatment with 5-FU cream alone showing marked hyperkeratosis, elastosis, and apoptotic melanocyte in the basal area of epidermis. Mononuclear cells in the papillary dermis are found either in clumps or as single cells moving towards the epidermis (H & E stain, scale bar = 5 µm). (C) After treatment with 5-FU cream and Er:YAG showing a huge number of melanocytes in the basal layer of the epidermis. Mononuclear cells in the dermis are found in clumps (H & E stain, scale bar = 5 µm). (D) Before treatment weak melanin expression of HMB45 (HMB45 stain). (E) After treatment with 5-FU cream alone showing mild expression of HMB45. In some areas no expression was found (HMB45 stain). (F) After treatment with 5-FU and Er:YAG showing marked expression of HMB45. In some areas no expression was found (HMB45 stain). Dovepress Abdelwahab et al Clinical, Cosmetic and Investigational Dermatology 2020:13 submit your manuscript | www.dovepress.com DovePress 81 C lin ic a l, C o sm e tic a n d I n ve st ig a tio n a l D e rm a to lo g y d o w n lo a d e d f ro m h tt p s: // w w w .d o ve p re ss .c o m / b y 1 2 8 .1 8 2 .8 1 .3 4 o n 0 6 -A p r- 2 0 2 1 F o r p e rs o n a l u se o n ly . 
Powered by TCPDF (www.tcpdf.org) 1 / 1 http://www.dovepress.com http://www.dovepress.com Immunohistochemistry (HMB45) Immunohistochemical study of vitiliginous sections before treatment showed negative expression of HMB45 (Figures 3D, 4D). After applying 5-FU monotherapy to the vitiliginous lesion there was mild expression of HMB45 (Figures 3E, 4F) whilst the vitiliginous lesions showed marked expression of HMB45 when using combined therapy of Er: YAG followed by 5-FU cream (Figures 3F, 4E). Side Effects of Treatment Received and Procedure Conducted All patients complained of pain at the sites of application of 5-FU; itching, and burning pain from laser application. The burning pain of laser and 5-FU was significant enough that a few patients wanted to discontinue treatment. Post- inflammatory hyperpigmentation was also noted after the procedure. However, none of our patients experienced any systemic side effects. Their serum SGOT, SGPT, BUN, CBC, biliru- bin, and urine analyses were normal throughout the study. A consort flow chart of the clinical study is shown in Figure 5. Discussion Treatment of vitiligo has undergone an evolutionary change in the past era. However, there is no single effective treatment modality, hence our study applying Er:YAG laser resurfacing followed by 5-FU cream to patients with NSV. This led to repigmentation in up to 70% (mean 12±7%) as compared to repigmentation in up to 5% (mean 1.4±0.8%) in monotherapy with 5-FU cream. So our results confirm the role of Er:YAG as a mode of transepidermal drug delivery. This agrees with other studies conducted on Egyptian patients and achieving 50 to 75% repigmentation when combining CO2 laser and 5-FU or Er:YAG and 5-FU,8,10 respectively. In our studied cohort, Group 1 patients (received Er:YAG followed by 5-FU cream) showed mild (<25%) repigmentation in 73.3% and moderate (50–75%) repigmentation in 10%. 
However, other studies showed 75% repigmentation in 49.8% of vitiliginous lesions treated with CO2 laser followed by 5-FU, and 50 to 75% repigmentation in 33.3% and 66.7% of cohorts that received combined therapy of Er:YAG plus 5-FU, and Er:YAG plus 5-FU followed by short-term narrow band ultraviolet B, respectively.4,8,10 The higher percentage of repigmentation in these studies can be attributed to several factors: the number of laser sessions, the number of pulses, or the use of a third line of therapy such as narrow band ultraviolet B (NB-UVB) in the study by Anbar et al.4

Figure 4 A 35 year old female patient with NSV of the forearm before treatment (A). There is an absence of melanocytes and melanin in the basal cell layer; the dermis reveals a mild, predominantly lymphocytic inflammatory infiltrate (H & E stain, scale bar = 5 µm). (B) After treatment with 5-FU and Er:YAG showing a large number of melanocytes in the basal layer of the epidermis; mononuclear cells in the dermis are found in clumps (H & E stain, scale bar = 5 µm). (C) After treatment with 5-FU cream alone showing a small number of melanocytes in the basal layer of the epidermis; mononuclear cells in the dermis are found as single cells moving towards the epidermis (H & E stain, scale bar = 5 µm). (D) Before treatment showing weak expression of HMB45 (HMB45 stain). (E) After treatment with 5-FU and Er:YAG showing marked expression of HMB45 (HMB45 stain). (F) After treatment with 5-FU cream alone showing mild expression of HMB45 (HMB45 stain).

Figure 5 Consort flow chart of the clinical study, Group 1 and Group 2.
Enrollment: assessed for eligibility (n=37); excluded (n=7): not meeting inclusion criteria (n=5), declined to participate (n=2). Cohort (n=30) divided into Group 1 and Group 2; both received both treatment regimens, but sites were randomized.
Allocation: Group 1 (n=30, all allocated to intervention at one site): Er:YAG once/month followed by 5-FU once daily for 2 weeks, then fucidin ointment once daily for 2 weeks. Group 2 (n=30, all allocated to intervention at a different site): 5-FU cream once daily for 2 weeks.
Follow up: none lost to follow-up or discontinued treatment in either group.
Analysis: all analysed and none excluded; categorization according to response to therapy: Group A (flat or non-resistant, n=16), Group B (non-flat or resistant, n=14).

Repigmentation in our cohort started after 3 months and persisted and/or increased during the follow-up period, which lasted up to 9 months in some patients. The pattern of repigmentation was perifollicular, starting from the margins and spreading centripetally, which agrees with other studies.10,11 After epidermal abrasion, 5-FU is absorbed easily, penetrates deeply, and stimulates the amelanotic (inactive) melanocytes present in the outer root sheath of the lower portion of the hair follicle; these proliferate, migrate upward, and start to actively synthesize melanin at the infundibulum, from where they migrate upward until they reach the surface of the skin. This appears clinically as perifollicular pigmentation which gradually enlarges to cover the whole depigmented area.12

In our study, flat, non-resistant, hairy areas showed more repigmentation (mean 13.8±8.5%) than non-flat, resistant, non-hairy areas (mean 9.8±4.5%). This is contrary to Mina et al,13 who reported that 40% of acral vitiliginous patches showed good to excellent repigmentation, mainly diffuse, although the pattern was perifollicular as shown in our study.

5-FU has antimitotic activity, so it is surprising that it would be implicated in the vitiligo repigmentation process, which obviously needs melanocyte proliferation. It was subsequently postulated that direct overstimulation of melanocyte proliferation increases the number of melanosomes in the keratinocytes. Moreover, inhibition of agents or cells able to destroy pigment cells, and finally immunomodulation stabilizing the vitiliginous lesions, may stimulate the reservoir of the follicular melanocytes or the persistent DOPA-negative melanocytes in the depigmented epidermis.5

All our patients complained of pain at the site of application of 5-FU, but we recorded post-inflammatory hyperpigmentation in only 6.6%, and none reported systemic side effects. In the study by Anbar et al,4 the authors reported pain in all their patients and a higher prevalence of hyperpigmentation, at 30%.
Interestingly, in the study using microneedling with 5-FU,13 although the procedure was reported to be well tolerated, patients developed hyperpigmentation in 16%, inflammation in 12%, and ulceration in 4% of cases, all more severe side effects than those reported in our study.

Our studied cohort was assessed by digital photography at enrolment and at follow up, which is a simple, practical, and accurate way of assessing patients and documenting their response to therapy. Moreover, assessment of vitiliginous lesions by image analysis in our study proved to be a reliable, easy, and quick method of evaluating treatment response in patients with NSV. The image analysis also detected improvement with therapeutic intervention and was able to measure marginal and perifollicular repigmentation, which is in agreement with other studies.14,15

Conclusion
The combination of ablative Er:YAG with 5-fluorouracil is safe and effective in treating and improving outcome in vitiligo, with flat, hairy areas showing more repigmentation than non-flat, non-hairy, resistant areas. Computer-based image analysis of vitiliginous lesions for assessing post-therapy response is an easy, quick, and reliable method. We followed our patients for 9 months, a reasonable period compared with other studies, yet we recommend a longer follow-up period given the nature of the disease, which is affected by many factors. A longer follow-up period, together with a larger sample size, would allow assessment of the long-term efficacy, possible side effects, and underlying mechanism of action of 5-fluorouracil. More sophisticated methods of image analysis with automated scores could be designed for easy, accurate calculation of depigmentation and repigmentation in vitiligo and for comparing different treatment modalities. A pain scoring system could also serve as an objective method to assess the magnitude and tolerability of pain.

Acknowledgment
Special thanks to all the patients who participated in the study.
Disclosure

The authors report no conflicts of interest in this work.

References

1. Ezzedine K, Lim HW, Suzuki T, et al. Revised classification/nomenclature of vitiligo and related issues: the vitiligo global issues consensus conference. Pigment Cell Melanoma Res. 2012;25(3):E1–E13.
2. Bacigalupi RM, Postolova AM, Davis RS. Evidence-based, non-surgical treatments for vitiligo: a review. Am J Clin Dermatol. 2012;13(4):217–237. doi:10.2165/11630540-000000000-00000
3. Garg T, Chander R, Jain A. Combination of microdermabrasion and 5-fluorouracil to induce repigmentation in vitiligo: an observational study. Dermatol Surg. 2011;37(12):1763–1766. doi:10.1111/j.1524-4725.2011.02127.x
4. Anbar T, Westerhof W, Abdel-Rahman A, et al. Effect of one session of Er:YAG laser ablation plus topical 5-fluorouracil on the outcome of short-term NB-UVB phototherapy in the treatment of non-segmental vitiligo: a left-right comparative study. Photodermatol Photoimmunol Photomed. 2008;24(6):322–329. doi:10.1111/j.1600-0781.2008.00385.x
5. Gauthier Y, Anbar T, Lepreux S, et al. Possible mechanisms by which topical 5-fluorouracil and dermabrasion could induce pigment spread in vitiligo skin: an experimental study. ISRN Dermatol. 2013;8:49–52.
6. Tsuji T, Hamada T. Topically administered fluorouracil in vitiligo. Arch Dermatol. 1983;119(9):722–727. doi:10.1001/archderm.1983.01650330014006
7. Sethi S, Mahajan B, Gupta R, et al. Comparative evaluation of the therapeutic efficacy of dermabrasion, dermabrasion combined with topical 5% 5-fluorouracil cream, and dermabrasion combined with topical placentrex gel in localized stable vitiligo. Int J Dermatol. 2007;46(8):875–879. doi:10.1111/j.1365-4632.2007.03226.x
8. Anbar T, Westerhof W, Abdel-Rahman A, et al. Treatment of periungual vitiligo with erbium-YAG laser plus 5-fluorouracil: a left-to-right comparative study. J Cosmet Dermatol. 2006;5(2):135–139. doi:10.1111/j.1473-2165.2006.00240.x
9. Njoo MD, Bos JD, Westerhof W.
Treatment of generalized vitiligo in children with narrow-band (TL-01) UVB radiation therapy. J Am Acad Dermatol. 2000;42(2 Pt 1):245–253. doi:10.1016/S0190-9622(00)90133-6
10. Mohamed H, Mohammed G, Gomaa A, et al. Carbon dioxide laser plus topical 5-fluorouracil: a new combination therapeutic modality for acral vitiligo. J Cosmet Laser Ther. 2015;17(4):216–223. doi:10.3109/14764172.2014.1003241
11. Ortonne JP, MacDonald DM, Micoud A, et al. PUVA-induced repigmentation of vitiligo: a histochemical (split-DOPA) and ultrastructural study. Br J Dermatol. 1979;101(1):1–12. doi:10.1111/bjd.1979.101.issue-1
12. Cui J, Shen L, Wang G. Role of hair follicles in the repigmentation of vitiligo. J Invest Dermatol. 1991;97(3):410–416. doi:10.1111/1523-1747.ep12480997
13. Mina M, Elgarhy L, Al-saeid H, et al. Comparison between the efficacy of microneedling combined with 5-fluorouracil vs microneedling with tacrolimus in the treatment of vitiligo. J Cosmet Dermatol. 2018;17(5):744–751.
doi:10.1111/jocd.12440
14. Alghamdi KM, Kumar A, Taieb A, et al. Assessment methods for the evaluation of vitiligo. J Eur Acad Dermatol Venereol. 2012;26(12):1463–1471. doi:10.1111/j.1468-3083.2012.04505.x
15. Sheth VM, Rithe R, Pandya AG, et al. A pilot study to determine vitiligo target size using a computer-based image analysis program. J Am Acad Dermatol. 2015;73(2):342–345. doi:10.1016/j.jaad.2015.04.035
work_ejumxkb4zvetdd6jkkr57osicm ---- Oral and Dental Spectral Image Database—ODSI-DB

applied sciences, Article

Oral and Dental Spectral Image Database—ODSI-DB

Joni Hyttinen 1,*, Pauli Fält 1, Heli Jäsberg 2, Arja Kullaa 2 and Markku Hauta-Kasari 1

1 School of Computing, University of Eastern Finland, Yliopistokatu 2, P.O. Box 111, 80101 Joensuu, Finland; pauli.falt@uef.fi (P.F.); markku.hauta-kasari@uef.fi (M.H.-K.)
2 Institute of Dentistry, University of Eastern Finland, Yliopistonranta 1 C, P.O. Box 1627, 70211 Kuopio, Finland; heli.jasberg@uef.fi (H.J.); arja.kullaa@uef.fi (A.K.)
* Correspondence: joni.hyttinen@uef.fi

Received: 22 September 2020; Accepted: 12 October 2020; Published: 16 October 2020

Abstract: The most common imaging methods used in dentistry are X-ray imaging and RGB color photography. However, both imaging methods provide only a limited amount of information on the wavelength-dependent optical properties of the hard and soft tissues in the mouth. Spectral imaging, on the other hand, provides significantly more information on medically relevant dental and oral features (e.g., caries, calculus, and gingivitis). Due to this, we constructed a spectral imaging setup and acquired 316 oral and dental reflectance spectral images, 215 of which are annotated by medical experts, of 30 human test subjects. Spectral images of the subjects' faces and other areas of interest were captured, along with other medically relevant information (e.g., pulse and blood pressure).
We collected these oral, dental, and face spectral images, their annotations, and metadata into a publicly available database that we describe in this paper. This oral and dental spectral image database (ODSI-DB) provides a vast amount of data that can be used for developing, e.g., pattern recognition and machine vision applications for dentistry.

Keywords: spectral image; dentistry; oral; dental; dataset; database

1. Introduction

The majority of dental and oral imaging is performed by X-ray-based techniques or by digital photography. While panoramic radiographs and computed tomography scans reveal anatomical and pathological structures of the teeth and alveolar bone, they expose the patient to ionizing radiation and possibly to contrast agents that are not risk-free. These risks raise the barrier to imaging. Additionally, these imaging methods are suitable for hard tissue only; soft-tissue diagnostics requires other methods. Digital photography is more suitable for soft tissue and avoids these risks while still providing useful diagnostic information, even of hard tissue (although limited to surface features). From a data-analysis perspective, these methods offer only a limited amount of information. X-ray-based techniques produce intensity maps of transmitted radiation, i.e., grayscale images, and allow analysis of spatial structures and their brightness only [1–4]. X-ray techniques do not acquire any color information or provide comparable multichannel images. Digital photography has been designed to accommodate human color vision, and therefore acquires only a limited amount of color information. A digital photograph consists of only red, green, and blue color channels, designed to approximately match the short-, middle-, and long-wavelength-sensitive cones of the human eye. The color of an object, however, is a more complex question. An illuminated object reflects different wavelengths at different strengths—it has a wavelength-dependent reflectance.
Accurate evaluation of an object's color depends on measuring this reflectance spectrum. Spectral imaging [5,6] increases the number of channels by imaging narrow contiguous wavelength bands, and can thus record the reflectance spectrum of a sample.

Appl. Sci. 2020, 10, 7246; doi:10.3390/app10207246

Generally, spectral imaging systems may also extend the data acquisition range outside the visible range (380–780 nm) to cover parts of the ultraviolet (10–380 nm) and infrared (780 nm–1 mm) regions. The data files produced by these methods—X-ray imaging, digital photography, and spectral imaging—can therefore be divided into three categories: grayscale, color, and spectral images (Figure 1). The additional information provided by spectral imaging enables new data analysis and visualization methods compared to grayscale or color images.

Figure 1. (a) A single-channel grayscale image, (b) a Red/Green/Blue (RGB) color image and its red, green, and blue channels as grayscale images, (c) a spectral image consisting of several bands as a stack of grayscale images.

In the context of dental and oral imaging, healthy and diseased tissue may exhibit subtle changes in their reflectance spectra. Accurate spectral imaging solutions can capture these spectra with high fidelity, while data-analysis methods may be able to differentiate between them.
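The three image categories in Figure 1 differ only in their channel count. As a quick illustration (a sketch only; the array dimensions follow the Nuance EX specifications quoted later in this paper, and the contents are hypothetical), in NumPy:

```python
import numpy as np

# A grayscale image (e.g. an X-ray) is a single 2-D intensity array.
gray = np.zeros((1040, 1392))

# An RGB photograph adds a three-channel colour axis.
rgb = np.zeros((1040, 1392, 3))

# A spectral image stacks one grayscale band per wavelength, e.g. the
# 51 bands of a 450-950 nm range sampled at 10 nm steps.
wavelengths = np.arange(450, 960, 10)      # band centre wavelengths, nm
cube = np.zeros((1040, 1392, wavelengths.size))

# Each pixel of the cube holds a full sampled reflectance spectrum.
spectrum = cube[520, 696, :]
print(spectrum.shape)                      # prints (51,)
```

The spectral cube is thus simply a stack of grayscale band images sharing the same spatial grid.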
Successful data analysis can enable the design of optimized optical imaging solutions for targeted diagnostic analyses. In the simplest cases, spectral images can be used to design systems that enhance the visibility of a targeted feature, for example, by applying different weights to the bands while computing a color image representation [7], or by optimizing contrast to produce optimal optical filters [8,9]. The selected spectra could also be used in training targeted diagnostic imaging systems, such as spectral filter array cameras [10], or in automatic diagnostic image segmentation based on deep learning [11]. Medical spectral imaging has been gaining attention, and research in, e.g., retinal disease [12] and breast [13] and tongue [14] cancer diagnosis has shown promising results. Despite the amount of research performed and the rise of open-access principles, freely and publicly available medical spectral imaging datasets are still relatively rare. Examples of the few public medical spectral image datasets include images of brain [15,16] and retinal [17] tissue; these image sets require contacting the authors for access. Using modern spectral cameras, we have collected a novel database of oral and dental spectral images as part of our DIGIDENT project, which investigated the suitability of spectral imaging for oral and dental diagnostics. Our dental and oral spectral image database—ODSI-DB—contains 316 spectral images from human test subjects. The front view and the occlusal surfaces of the lower and upper teeth, the oral mucosa, and the face surrounding the mouth were imaged for all test subjects. Other features of interest were imaged on a case-by-case basis. When possible, additional medically relevant information was collected from the test subjects—pulse, blood pressure, and blood oxygen saturation—both before and after the imaging session. The database comes with annotations marking the features of interest.
The annotations were made by dental experts using an annotation tool built for the project. We have successfully used the database to develop a prototype optical imaging system based on partially negative filters derived from principal component analysis of healthy and diseased tissue [9], and in automatic image segmentation and classification based on a convolutional neural network [11]. To the best of our knowledge, our oral and dental spectral image database is the first of its kind to be made publicly available under a permissive license for the research community. The database is available online at the following address: https://sites.uef.fi/spectral/odsi-db/.

2. Materials and Methods

The optical imaging system built for dental and oral imaging consisted of a spectral camera, a ring illuminator (FRI61F50, Thorlabs Inc., Newton, NJ, USA), a halogen light source (Thorlabs OSL2, Thorlabs Inc., Newton, NJ, USA) with an extended-IR-range light bulb (OSL2BIR, Thorlabs Inc., Newton, NJ, USA), and a chin–forehead rest, all mounted on a platform, making the system mobile; see Figure 2.

Figure 2. Imaging system with (a) Nuance EX, and (b) Specim IQ spectral cameras.

Initially, the spectral camera chosen was the Nuance EX (CRI, PerkinElmer, Inc., Waltham, MA, USA). The camera images at 1392 × 1040 pixel spatial resolution and covers the spectral range 450–950 nm with 10 nm bands (51 bands in total). The Nuance EX was subsequently replaced with the Specim IQ (Specim, Spectral Imaging Ltd., Oulu, Finland). The Specim IQ has a lower spatial resolution (512 × 512 pixels), but an extended spectral range of 400–1000 nm with approximately 3 nm steps (204 bands in total). Both spectral cameras have 12-bit sensors.
Switching the camera simplified the imaging significantly: the Nuance EX requires two runs per acquisition, one to determine optimal integration times for the spectral bands and a second to perform the actual image capture, whereas the Specim IQ is a line-scanning camera with a constant integration time over the band images. The chin–forehead rest was used to reduce involuntary head movements during imaging and was connected to the main platform of the optical setup. The ring illuminator and the camera were mounted on another platform that could slide freely over the main platform, allowing an optimal imaging geometry in relation to the test subject. For image normalization purposes (see below), we imaged a reference sample with a known reflectance spectrum during the imaging process. The reference sample used was a matt diffuse gray ceramic sample ("Matt Diff Grey", Ceram Research, Ltd., Lucideon, Ltd., Stoke-on-Trent, UK) instead of a white plate: white references were found to give over-saturated responses while the corresponding sample remained under-exposed. The Institute of Dentistry of the University of Eastern Finland (UEF) recruited test subjects at the UEF Dental School Clinic (Kuopio, Finland), and the UEF School of Computing recruited test subjects in Joensuu, Finland. Ethical permission for the research was granted by the Hospital District of Northern Savo, Kuopio, Finland (413/2016). The volunteering test subjects were fully informed about the research and gave their written consent prior to spectral imaging. A total of 30 test subjects were recruited for the spectral imaging.

The spectral imaging process started with imaging the facial skin around the mouth area (Figure 3a), followed by imaging a gray reference tile positioned at approximately the same location as the face, thus retaining the imaging geometry.
Next, the system was moved closer to the patient and dental imaging was performed: the front teeth were imaged bitten together (Figure 3b), and the occlusal surfaces of the lower and upper teeth (Figure 3c,d) were imaged, most of the time using a dental mirror or lip retractors. The dental mirror posed a compositional challenge: there was little time to pose the scene optimally, as the mirror fogs quickly when the patient needs to breathe. A dental expert handled the patient and the dental mirror and helped to expose the areas of interest for spectral imaging. If the patient had other features of interest (such as infection sites or pigmentation changes), these were imaged as well. The gray reference sample was imaged as needed, any time the imaging geometry changed significantly. When the Nuance EX was used, the reference and dark-current images had to be measured every time the integration times were recalculated; the Specim IQ saved dark-current images automatically during a spectral image capture.

Figure 3. Representational example image set consisting of the base set of spectral images acquired of every test subject: (a) face, (b) front teeth, (c) lower teeth, (d) upper teeth. Images (c,d) demonstrate the use of a dental mirror in imaging.

The reflectance spectral images R(x, y, λ) were normalized with flat-field correction from a sample measurement s_i(x, y, λ) as

    R(x,y,\lambda) = \frac{s_i(x,y,\lambda) - s_d(x,y,\lambda)}{s_r(\lambda) - s_d(x,y,\lambda)} \times \frac{t_r}{t_i} \times R_r(\lambda),    (1)

where s_r(λ) is the measurement from a reference sample, s_d(x, y, λ) is the measured dark current, t_i and t_r are the integration times of the sample and the reference, respectively, and R_r(λ) is the known spectral reflectance of the reference sample. The reference measurement s_r(λ) was averaged from a small, well-lit rectangular area of the full tile image s_r(x, y, λ).
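A minimal NumPy sketch of Equation (1), assuming an (H, W, B) measurement layout and a caller-supplied rectangle for averaging the reference tile (function and argument names are hypothetical):

```python
import numpy as np

def flat_field_correct(s_i, s_d, s_r_img, R_r, t_i, t_r, ref_box):
    """Relative reflectance per Eq. (1); values end up roughly in [0, 1]
    rather than in absolute 0-100 % reflectance.

    s_i     : (H, W, B) raw sample measurement
    s_d     : (H, W, B) dark-current measurement
    s_r_img : (H, W, B) measurement of the flat gray reference tile
    R_r     : (B,)      known reflectance spectrum of the tile
    t_i, t_r: integration times of sample and reference
    ref_box : (y0, y1, x0, x1) well-lit rectangle on the tile image
    """
    y0, y1, x0, x1 = ref_box
    # Average the tile over the rectangle -> one spectrum s_r(lambda).
    s_r = s_r_img[y0:y1, x0:x1].mean(axis=(0, 1))
    return (s_i - s_d) / (s_r - s_d) * (t_r / t_i) * R_r
```

The averaged reference spectrum broadcasts over every pixel of the cube, so the correction is applied band-wise without explicit loops.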
Averaging over a small rectangle was necessary because the relatively small reference tile could not cover the whole imaging area, nor did the physical shapes of the reference (a flat tile) and the sample (oral cavity, teeth) match. Since the physical shapes of the imaged samples and the flat reference tile differ, the values in the spectral reflectance image R(x, y, λ) cannot be expressed in the range 0–100%; instead, the relative values in the spectral image are scaled to the range 0–1. It is important to note that the flat-field correction does not affect the shape of the spectra.

Camera-specific image post-processing was needed. The Nuance EX camera contained some dust particles in its internal optics, producing dark artifacts in the images (Figure 4a). Artifact removal was done by imaging a white reference plate (30 × 30 cm, diffuse reflectance target, Edmund Optics Inc., Barrington, NJ, USA) covering the whole image area and creating an artifact mask (Figure 4b). The mask was used to mark areas for digital inpainting with OpenCV's inpaint function (Navier-Stokes-based inpainting) [18]; see Figure 4c. The Nuance EX operates in the band-sequential mode [6], capturing the spectral image one spectral band at a time. The integration time for an individual band image can be as long as 5 s, so technically the longest possible acquisition time per spectral image is 4 min 15 s; the camera system determines the optimal integration times automatically, and in practice the full acquisition took only 1–2 min. The involuntary movements of the test subject cause the individual spectral band images to be misaligned with respect to each other. We utilized generalized dual-bootstrap iterative closest point (GDB-ICP) image registration [19] to align the band images. The GDB-ICP tool provided by Yang et al. performs image registration on a pair of images only.
To handle the 51 bands in Nuance EX spectral images, we chose the middle band image (700 nm) as the reference and registered the other band images against this reference band. As the algorithm chooses the comparison points automatically, the image registration occasionally did not perform adequately; in these cases, a mask image was used to guide the process to concentrate on aligning the teeth. The need for these post-processing steps was a major motivating factor for switching to the Specim IQ spectral camera.

Figure 4. (a) Artifacted image, (b) artifact mask, and (c) cleaned image: Nuance EX dark-artifact cleaning by digital inpainting. (d) Misaligned stack before image registration and (e) aligned stack after image registration: the effect of involuntary movements.

The Specim IQ is a line-scanning system [6,20]. The integration time is therefore set per line, and it is impossible to adjust per-band integration times. On average, the line integration times were 5.4 ± 3.4 ms, meaning that the spectral image acquisition itself takes only seconds; the camera, however, uses a relatively long time (about 1 min) preparing and finishing the image acquisition. As the whole spectrum is acquired at once, involuntary movements do not affect individual bands of the spectrum. In most cases, the movements were subtle enough not to cause noticeable discontinuities between the captured lines. The tongue was a noticeable exception, but this did not matter in dental images; when the tongue or some part of it was of interest, it was held in place during imaging. Specim IQ spectral images were therefore not processed with image registration (nor with digital inpainting). No further image editing, aside from what has been described above, was performed on the spectral images.
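GDB-ICP estimates full transformations between image pairs. As a much simplified, translation-only stand-in, the idea of aligning every band against a middle reference band can be illustrated with phase correlation in plain NumPy (this is not the method used in the paper, only a sketch of the band-alignment idea):

```python
import numpy as np

def shift_between(ref, band):
    """Estimate the cyclic (dy, dx) roll that re-aligns `band` with
    `ref` via phase correlation (translation only; GDB-ICP, used in
    the paper, additionally handles rotation and scaling)."""
    F = np.fft.fft2(ref) * np.conj(np.fft.fft2(band))
    corr = np.fft.ifft2(F / (np.abs(F) + 1e-12)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Map peaks in the upper half of the range to negative shifts.
    if dy > ref.shape[0] // 2:
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return dy, dx

def align_stack(cube, ref_band):
    """Align every band of an (H, W, B) cube to `cube[..., ref_band]`."""
    ref = cube[..., ref_band]
    out = np.empty_like(cube)
    for b in range(cube.shape[-1]):
        dy, dx = shift_between(ref, cube[..., b])
        out[..., b] = np.roll(cube[..., b], (dy, dx), axis=(0, 1))
    return out
```

Unlike GDB-ICP, this sketch wraps pixels cyclically at the borders and cannot model rotation, so it only conveys why a misaligned band stack produces broken per-pixel spectra.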
Dental experts at the Institute of Dentistry (University of Eastern Finland, Kuopio, Finland) annotated the dental and oral reflectance spectral images with software specifically designed for the annotation of dental and oral images (Figure 5). The tool allows users to view the spectral image one band image at a time or as a false-color image generated from three user-chosen bands, the default choices being 700, 546, and 435 nm for the red, green, and blue channels, respectively. Our publicly available dataset contains 316 reflectance spectral images, of which 215 are annotated.

Figure 5. Screen capture of the annotation tool.

3. Results and Discussion

The results of our work are an oral and dental database of spectral images and their manually segmented annotations. The storage formats used in the database, the annotation process and its caveats, and the resulting collections of labeled spectra all require special attention and are explained below.

3.1. Data Availability

The database—oral and dental reflectance spectral images, their annotations, and metadata—is immediately available at the University of Eastern Finland's Computational Spectral Imaging research group's website, https://sites.uef.fi/spectral/odsi-db/, under the Creative Commons Attribution–NonCommercial–ShareAlike 4.0 International License (CC BY–NC–SA 4.0) [21].

3.2. Data File Formats

The relative reflectance spectral images were saved as Tiff image files. The format supports various image types, multi-page datasets, and custom tags. Our spectral image Tiffs contain an RGB preview image of the spectral image on the first page, followed by a grayscale band image sequence starting from the second page. For the RGB render, a CIE D65 light source and the CIE 1931 2° Standard Observer were used. The image data are accompanied by a list of the center wavelengths of the spectral bands in custom tag 65000, as a list of 32-bit floats. Tag 65111 contains a free-form ASCII string of metadata.
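Both the Tiff preview and the annotation tool build an RGB representation from the spectral bands. A simple false-color variant using the annotation tool's default 700/546/435 nm channel choices can be sketched as follows (the function name is hypothetical, nearest-band selection is an assumption, and the reflectance cube is assumed to be in an (H, W, B) layout):

```python
import numpy as np

def false_color(cube, wavelengths, rgb_nm=(700.0, 546.0, 435.0)):
    """Build a false-colour preview from an (H, W, B) reflectance cube
    by picking, for each output channel, the band whose centre
    wavelength lies nearest the requested one. Defaults follow the
    annotation tool's 700/546/435 nm red/green/blue choices."""
    wavelengths = np.asarray(wavelengths, dtype=float)
    idx = [int(np.argmin(np.abs(wavelengths - nm))) for nm in rgb_nm]
    # Reflectance values are assumed to already lie roughly in [0, 1].
    return np.clip(cube[..., idx], 0.0, 1.0)
```

With the ~3 nm band spacing of the Specim IQ, the nearest band centres land within a couple of nanometres of the requested wavelengths.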
The metadata in our oral and dental spectral image dataset include the camera used, the objective lens used, the light source and its settings, the imaging date, and miscellaneous measurement data, such as the blood pressure and oxygen saturation before and after the imaging. The following pattern is used for all metadata strings:

    Spectral camera: Nuance EX (CRi, Inc., USA).
    Illumination: Thorlabs OSL2 halogen light source (100% power) + Thorlabs Ring Illuminator.
    Objective lens: Samyang 35mm f/1.4.
    Aperture: f/11.
    Reference sample used: Matt diffuse gray sample ("Matt Diff Grey", Ceram Research, Ltd., UK).
    Blood pressure and pulse (before imaging): 165/80, 77
    Blood oxygen saturation (before imaging): 95%
    Blood pressure and pulse (after imaging): 164/80, 72
    Blood oxygen saturation (after imaging): 97%
    Spectral data: spectral reflectance.
    Note: Values have been scaled to range [0,1] due to flat reference sample.
    Date of spectral image capture: 2018/05/28.

The filenames of the spectral images follow the format

    group "_" set ( "_" take | "_inpainted"? "_GDBICP_similarity"? ) ".tif"

where group is an image grouping number; set is one of face, front, bottom, top, or a specific feature of interest; and take is a number used when the focal point is changed between takes. The string inpainted is present in the name when the image has been digitally inpainted, while GDBICP_similarity is present when image registration has been performed. One or both of these two strings are present always, and only, in spectral images acquired with the Nuance EX spectral camera, and never in images acquired with the Specim IQ. The annotation tool (Figure 5) created for the project produces polygonal markings. These polygons are saved in a semicolon-separated text file.
Each line defines an individual marking and follows the format

    Annotation label; R, G, B; X1, Y1; X2, Y2; ...; Xn, Yn;

where R, G, and B are the sRGB color coordinates in the 0–255 range, and the pixel coordinates Xn, Yn describe the polygon vertices. The coordinate system's origin is at the top-left corner; the positive X-axis points toward the top-right corner, and the positive Y-axis toward the bottom-left corner of a spectral image. Since this text-based annotation format is relatively inconvenient for practical use, the oral and dental spectral image dataset also contains the annotations as bi-level mask images saved in multi-page Tiff files. The mask labels are stored in tag 65001 as an ASCII string on each mask image page. The multi-page Tiff solution was chosen because the annotation tool allows overlapping annotation markings: a single-image solution would have necessitated encoding the annotation labels in pixel values, requiring static numeric representations for the labels, whereas using the Tiff tag to store the label allows free-form strings without binding labels to fixed numeric values. The spectral images and the related annotations have been cropped to ensure the anonymity of the test subjects participating in the research.

3.3. Annotations

The annotation labels are categorized as technical issues, hard tissue and augmentations, hard tissue issues, soft tissue, soft tissue issues, or miscellaneous. The labels themselves are listed in Table 1, including the pixel counts (number of spectra) for each annotation label. Since the spatial resolution of the Specim IQ spectral camera is approximately one quarter of that of the Nuance EX, the per-label, per-image pixel counts are lower in spectral images acquired with the Specim IQ.

Table 1. Annotation labels and the pixel counts for each label in spectral data acquired with the Nuance EX and the Specim IQ spectral cameras.
(a) Technical
  Annotation class      Nuance EX    Specim IQ
  Out of focus area       260,579    8,193,912
  Shadow/Noise            306,251      425,958
  Specular reflection   1,486,065      445,780

(b) Miscellaneous
  Annotation class      Nuance EX    Specim IQ
  Hair                  1,009,626      374,344
  Makeup                        0          406
  Mole                      2,288          503
  Pigmentation             43,144            0

(c) Hard tissue and augmentations
  Annotation class      Nuance EX    Specim IQ
  Enamel                3,929,055      984,750
  Metal                   132,780       63,902
  Plastic                 211,588       43,429
  Prosthetics             667,618       87,936
  Root                      9,991        3,971

(d) Soft tissue
  Annotation class      Nuance EX    Specim IQ
  Attached gingiva      1,580,681      341,864
  Blood vessel              1,723        1,944
  Hard palate           1,292,117    1,083,881
  Lip                   2,678,629    1,241,971
  Marginal gingiva        618,379      186,014
  Oral mucosa           6,482,161    1,435,382
  Skin                 14,625,290    4,368,396
  Soft palate           1,013,292      385,302
  Tongue                3,177,170      904,519

(e) Hard tissue issues
  Annotation class      Nuance EX    Specim IQ
  Attrition/Erosion        72,639       28,280
  Calculus                 20,324        8,291
  Dentine caries            6,433          183
  Fluorosis                17,872            0
  Initial caries           20,862        1,146
  Microfracture            12,591        2,168
  Plaque                   10,024            0
  Stain                     8,197       11,231

(f) Soft tissue issues
  Annotation class      Nuance EX    Specim IQ
  Fibroma                       0          593
  Gingivitis              148,516       13,358
  Inflammation             71,864        9,234
  Malignant lesion          1,304            0
  Leukoplakia               2,101        2,522
  Ulcer                     1,030        4,522

The technical annotation labels mark areas that should be excluded from data analysis. For example, specular reflections do not contain any meaningful information; the spectra in these areas may be oversaturated, and these areas should be masked out of any other mask. The shadow/noise label mainly denotes areas that are either poorly lit or fall outside of the sample area (originally, the manual image segmentation was intended to apply a label to every pixel in an image; this was later found to be overly rigorous). The Specim IQ spectral camera has a fairly short depth of field; overly blurry areas are thus labeled as out-of-focus areas (see Figure 6).
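The semicolon-separated polygon format described in Section 3.2 is straightforward to parse. A tolerant sketch in Python (reading the vertex coordinates as floats is an assumption; the format itself does not fix the numeric type):

```python
def parse_annotations(text):
    """Parse the semicolon-separated polygon markings:
    'Annotation label; R, G, B; X1, Y1; ...; Xn, Yn;'
    Returns a list of dicts with label, sRGB colour and vertex list."""
    annotations = []
    for line in text.splitlines():
        fields = [f.strip() for f in line.split(";") if f.strip()]
        if len(fields) < 3:
            continue  # need a label, a colour and at least one vertex
        r, g, b = (int(v) for v in fields[1].split(","))
        vertices = [tuple(float(v) for v in f.split(",")) for f in fields[2:]]
        annotations.append({"label": fields[0],
                            "color": (r, g, b),
                            "vertices": vertices})
    return annotations
```

Because annotations may overlap, downstream code should treat the parsed polygons (or the corresponding mask pages) as independent layers rather than a single segmentation map.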
Depending on the level of blur, a pixel in a blurry out-of-focus area may experience strong spectral mixing with the nearby pixels. The miscellaneous category (Table 1b) contains labels that did not fit the other categories. The hair label is another exclusion label, like the technical annotations: it is used to mark hairy areas, not individual strands of hair, so that shadows within the hair and any visible skin underneath are also labeled as hair. Makeup, when clearly present, is annotated as such. The mole and pigmentation labels accurately mark features of interest and can be used in analysis as is. In the hard tissue and augmentations category (Table 1c), the meanings of the labels enamel and root are self-evident, but the distinction between prosthetics and the materials metal and plastic may be less so (Figure 7): the former marks artificial structures, and the latter two mark repaired parts of a tooth and, in the case of metal, also dental braces. In the hard tissue issues category (Table 1e), the labels attrition/erosion and microfracture mark the site of the find and some surrounding enamel.

Figure 6. (a–c) At close range, the short depth of field of the Specim IQ spectral camera necessitates multiple shots of the same scene at different focal points: focus on the front teeth (a,d), on the back teeth (b,e), and near the uvula (c,f). (d–f) The out-of-focus areas are labeled with the appropriately named gray out-of-focus-area label; the off-white markings label specular reflections.

Figure 7. Hard tissue and augmentations labeling example: enamel (red), plastic (blue), prosthetics (orange), and specular reflections (off-white).

The meanings of the soft-tissue labels (Table 1d) and the labels of the related lesions (Table 1f) should be evident from their names (see the example in Figure 8).
There are, however, some vital details to know before using some of these masks. The number of segmented pixels for the label blood vessel is fairly low. This is caused by two factors, both related to blood vessels being a fine detail in the images: they are very prone to image stack misalignment in cases where the Nuance EX spectral camera was used, possibly leading to broken spectra, and the annotation tool was developed to draw polygons, which suit thin structures poorly. The latter issue could have been addressed by adding a line-drawing tool with an adjustable width to the annotation tool. Consequently, only sufficiently large blood vessels are annotated. Similarly, the tongue label should be used with caution, as the tongue is the single most difficult organ to image due to its involuntary movements. In the Nuance EX images, image registration was often unable to correct these movements because the emphasis was on aligning the image stack by the locations of the teeth, leaving the spectra of tongue pixels compromised. The tongue spectra in the Specim IQ images should remain uncompromised, but the spatial shape of the tongue can be peculiar due to the line-scanning nature of the camera; see Figure 6b, where the right side of the tongue moved during image acquisition.

Figure 8. Soft tissue labeling example: tongue (purple), lips (orange), marginal gingiva (yellow green), attached gingiva (spring green), oral mucosa (red), inflammation (purple), ulcer (pink), and skin (off-white). Specular reflections (off-white) on the teeth and oral mucosa are also shown.

3.4. Reflections on Specular Reflections

As can be seen in Figures 7 and 8, specular reflections may be significant and reduce the analyzable areas of the spectral images. In Figure 7, the lip and the oral mucosa under the tongue carry many specular reflection markings that should be excluded from lip and oral mucosal analyses.
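The database relies on manually annotated specular reflections, but the oversaturation mentioned for the technical labels suggests a simple automated pre-check: flag pixels whose brightest band lies near the sensor's full scale. This is a rough sketch and not part of the paper's pipeline; the full-scale value and the threshold fraction are assumed parameters.

```python
import numpy as np

def specular_candidates(cube, full_scale=1.0, frac=0.98):
    """Flag pixels whose maximum band value is close to sensor saturation.

    cube: (H, W, bands) intensity array.
    full_scale: the sensor's full-scale value (assumed to be known).
    frac: the fraction of full scale treated as clipped.
    Returns a boolean (H, W) mask of candidate specular pixels.
    """
    return cube.max(axis=2) >= frac * full_scale

# Toy example: a dull scene with one fully saturated glare pixel.
cube = np.full((3, 3, 5), 0.4)
cube[1, 1, :] = 1.0
mask = specular_candidates(cube)
print(int(mask.sum()))  # 1
```

A threshold like this only finds clipped highlights; dimmer specular loops, such as those produced by a ring illuminator, would still require manual annotation.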
Glossiness of the dental enamel can also cause notable specular reflections: in Figure 8, a front tooth has the majority of its surface segmented as specularly reflecting. Our imaging setups did not contain technical solutions for controlling specular reflections, apart from using a ring illuminator instead of a spotlight from a liquid light guide. This illumination choice provided more uniform lighting overall, but turned the specular reflections into circular loops. These loops can be seen in Figure 6c on the uvula, in Figure 7 on the lip, and in Figure 8 on a front tooth. It may be possible to reduce specular reflections, insofar as they are polarized, by placing a linear polarizer in front of the ring illuminator and the spectral camera. This, however, would also reduce the intensity of the illumination and of the reflected light reaching the imaging sensor, leading to unwanted, longer image acquisition times. Longer acquisition times, especially in the case of the Nuance EX, would have been unacceptable. It is also important to remember that the spectral cameras themselves may contain polarizers. The Nuance EX spectral camera is based on a liquid crystal tunable filter (LCTF). These filter devices rely on parallel linear polarizers enveloping a liquid crystal stack [22]. A linear polarizer attached to the imaging setup would need to be aligned with these internal linear polarizers, and the illumination would still need to be polarized. This would, in turn, mean that finding an optimal angle for the polarizers to minimize specular reflections would require rotating the illumination polarizer and the spectral camera together. We did not have the tools to implement this, and therefore did not pursue the idea of using polarizers any further.

3.5. Spectral Profiles

Due to the non-flat nature of the imaged samples, the segmented spectra extracted and combined from the spectral images show wide variations.
As a preprocessing step, the spectra were therefore L2-normalized prior to calculating their means and standard deviations for the spectral profile plots shown in Figures 9 and 10. The spectra acquired with the Specim IQ spectral camera share a common feature at the beginning and end of the wavelength range: a slope. The specifications of the light bulb [23] show that the normalized intensity in the 400–430 nm range is approximately 10% of the maximum at 925 nm; this slope is therefore likely caused by poor illumination. Additionally, the specification seems to suggest that the light bulb is unstable in the 950–1000 nm wavelength range, possibly explaining the slope at the end of the spectrum.

Figure 9. Means of L2-normalized spectra acquired with the Specim IQ spectral camera, and their standard deviations, for (a) hard tissue, (b) soft tissue, (c) hard tissue lesions, and (d) soft tissue lesions.

Figure 10. Means of L2-normalized spectra acquired with the Nuance EX spectral camera, and their standard deviations, for (a) hard tissue, (b) soft tissue, (c) hard tissue lesions, and (d) soft tissue lesions.

As might be expected, enamel and plastic share exceedingly similar spectra in Figures 9a and 10a. Interestingly, prosthetics differ over a large part (400–600 nm) of the visible range (400–780 nm), but then follow enamel and plastic closely in the red and infrared regions, the latter being invisible to human eyes. Exposed tooth roots, on the other hand, differ remarkably in the 400–600 nm and 700–800 nm regions. The initial caries spectrum calculated from the Specim IQ-acquired spectra (Figure 9c) stands out and differs significantly from the one calculated from the Nuance EX-acquired spectra (Figure 10c). Its standard deviation is also larger than for the other Specim IQ-based spectra.
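The preprocessing step described above amounts to scaling each spectrum to unit Euclidean (L2) length before averaging per band, which removes overall intensity differences while keeping spectral shape. A minimal sketch, with made-up spectra standing in for the segmented pixels of one annotation class:

```python
import numpy as np

# Rows are individual pixel spectra (toy values): same shape, different scale.
spectra = np.array([[2.0, 4.0, 4.0],
                    [1.0, 2.0, 2.0],
                    [4.0, 8.0, 8.0]])

# L2-normalize each spectrum: divide it by its Euclidean norm.
norms = np.linalg.norm(spectra, axis=1, keepdims=True)
normalized = spectra / norms

# Per-band mean and standard deviation, as in the spectral profile plots.
mean_profile = normalized.mean(axis=0)
std_profile = normalized.std(axis=0)
print(mean_profile)  # [0.333... 0.666... 0.666...]: scale differences are gone
print(std_profile)   # ~0 everywhere: all three spectra share the same shape
```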
A skewed result is expected, as the number of Specim IQ-acquired spectra for initial caries is only 1138 (when the technical masks are applied), with 891 coming from a single test subject. With the Nuance EX, there are 20,862 such spectra in total, with a median of 1225 per test subject (12 test subjects). The soft tissue spectra in Figure 9b all seem to blend in within their error margins, with skin being more reflective in the 500–600 nm region. While it may seem that some skin segmentations in the spectral images acquired with the Nuance EX have an issue, as suggested by Figure 10b, where the skin spectrum has a very large error margin in the 600–800 nm region and a questionable shape, this is likely not the case: our test subject pool included people of different skin colors, but skin color was not taken into account in the segmentation labeling. The 685 nm band, in particular, seems to have anthropological significance in relation to skin color [24]. Another notable feature is that all soft tissue spectra (Figures 9b,d and 10b,d) exhibit a dip in reflectance at approximately the 540 and 575 nm wavelength bands. These dips match the absorption spikes of oxygenated hemoglobin [25]. Like initial caries above, the ulcer spectrum among the Nuance EX-acquired spectra stands apart from the other soft tissue lesion spectra in Figure 10d. In this case, all 1030 spectra forming the normalized mean spectrum for ulcer originate from a single test subject. On the Specim IQ side (Figure 9d), the spectrum originates from three test subjects, each contributing a median of 1656 spectra.

4. Conclusions

We have collected a publicly available oral and dental reflectance spectral image database for the use of the research community. In addition to 316 spectral images, the database contains annotation masks for 215 spectral images, as well as technical and medically relevant metadata.
The spectral images and the masks are stored in multi-page TIFF image files that contain the associated metadata encoded in custom tags. At the time of publication of this article, the database is available at the University of Eastern Finland's Computational Spectral Imaging research group's website, https://sites.uef.fi/spectral/odsi-db/. The database can be used to develop pattern recognition, machine learning, and vision applications, as well as optical filters and imaging systems operating in the visible and near-infrared range.

Author Contributions: Conceptualization, P.F., M.H.-K. and A.K.; methodology, J.H.; software, J.H. and P.F.; validation, J.H.; formal analysis, J.H.; investigation, J.H.; resources, M.H.-K. and A.K.; data curation, J.H., P.F. and H.J.; writing, original draft preparation, J.H.; writing, review and editing, P.F., M.H.-K., H.J. and A.K.; visualization, J.H.; supervision, M.H.-K. and A.K.; project administration, M.H.-K.; funding acquisition, P.F., M.H.-K. and A.K. All authors have read and agreed to the published version of the manuscript.

Funding: This research was funded by Business Finland and the European Regional Development Fund (ERDF) through the "Spectral sensor technology and digital spectral image databases for oral and dental applications (DIGIDENT)" project, funding decision 4465/31/2017.

Acknowledgments: The work is part of the Academy of Finland Flagship Programme, Photonics Research and Innovation (PREIN), decision 320166.

Conflicts of Interest: The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References
1. Rad, A.E.; Rahim, M.S.M.; Rehman, A.; Saba, T. Digital Dental X-ray Database for Caries Screening. 3D Res. 2016, 7, 18. doi:10.1007/s13319-016-0096-5.
2. Silva, G.; Oliveira, L.; Pithon, M. Automatic segmenting teeth in X-ray images: Trends, a novel data set, benchmarking and future perspectives. Expert Syst. Appl. 2018, 107, 15–31. doi:10.1016/j.eswa.2018.04.001.
3. Son, L.H.; Tuan, T.M.; Fujita, H.; Dey, N.; Ashour, A.S.; Ngoc, V.T.N.; Anh, L.Q.; Chu, D.T. Dental diagnosis from X-Ray images: An expert system based on fuzzy computing. Biomed. Signal Process. Control 2018, 39, 64–73. doi:10.1016/j.bspc.2017.07.005.
4. Cui, Z.; Li, C.; Wang, W. ToothNet: Automatic Tooth Instance Segmentation and Identification From Cone Beam CT Images. In Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 16–20 June 2019; pp. 6361–6370. doi:10.1109/CVPR.2019.00653.
5. Garini, Y.; Young, I.T.; McNamara, G. Spectral imaging: Principles and applications. Cytometry A 2006, 69A, 735–747. doi:10.1002/cyto.a.20311.
6. Li, Q.; He, X.; Wang, Y.; Liu, H.; Xu, D.; Guo, F. Review of spectral imaging technology in biomedical engineering: Achievements and challenges. J. Biomed. Opt. 2013, 18, 1–29. doi:10.1117/1.JBO.18.10.100901.
7. Fält, P.; Hyttinen, J.; Fauch, L.; Riepponen, A.; Kullaa, A.; Hauta-Kasari, M. Spectral Image Enhancement for the Visualization of Dental Lesions. In Image and Signal Processing; Mansouri, A., El Moataz, A., Nouboud, F., Mammass, D., Eds.; Lecture Notes in Computer Science; Springer International Publishing: Cham, Switzerland, 2018; pp. 490–498.
8. Hyttinen, J.; Fält, P.; Fauch, L.; Riepponen, A.; Kullaa, A.; Hauta-Kasari, M. Contrast Enhancement of Dental Lesions by Light Source Optimisation. In Image and Signal Processing; Mansouri, A., El Moataz, A., Nouboud, F., Mammass, D., Eds.; Lecture Notes in Computer Science; Springer International Publishing: Cham, Switzerland, 2018; pp. 499–507.
9. Hyttinen, J.; Fält, P.; Jäsberg, H.; Kullaa, A.; Hauta-Kasari, M. Optical implementation of partially negative filters using a spectrally tunable light source, and its application to contrast enhanced oral and dental imaging. Opt. Express 2019, 27, 34022–34037. doi:10.1364/OE.27.034022.
10. Bauer, J.R.; Thomas, J.B.; Hardeberg, J.Y.; Verdaasdonk, R.M. An Evaluation Framework for Spectral Filter Array Cameras to Optimize Skin Diagnosis. Sensors 2019, 19, 4805. doi:10.3390/s19214805.
11. Boiko, O.; Hyttinen, J.; Fält, P.; Jäsberg, H.; Mirhashemi, A.; Kullaa, A.; Hauta-Kasari, M. Deep Learning for Dental Hyperspectral Image Analysis. In Proceedings of the 27th Color and Imaging Conference Final Program and Proceedings, Paris, France, 21–25 October 2019; pp. 295–299. doi:10.2352/issn.2169-2629.2019.27.53.
12. Mordant, D.J.; Al-Abboud, I.; Muyo, G.; Gorman, A.; Sallam, A.; Ritchie, P.; Harvey, A.R.; McNaught, A.I. Spectral imaging of the retina. Eye 2011, 25, 309–320. doi:10.1038/eye.2010.222.
13. Pourreza-Shahri, R.; Saki, F.; Kehtarnavaz, N.; Leboulluec, P.; Liu, H. Classification of ex-vivo breast cancer positive margins measured by hyperspectral imaging. In Proceedings of the 2013 IEEE International Conference on Image Processing, Melbourne, Victoria, Australia, 15–18 September 2013; pp. 1408–1412. doi:10.1109/ICIP.2013.6738289.
14. Liu, Z.; Wang, H.; Li, Q. Tongue Tumor Detection in Medical Hyperspectral Images. Sensors 2012, 12, 162–174. doi:10.3390/s120100162.
15. Fabelo, H.; Ortega, S.; Ravi, D.; Kiran, B.R.; Sosa, C.; Bulters, D.; Callicó, G.M.; Bulstrode, H.; Szolna, A.; Piñeiro, J.F.; et al. Spatio-spectral classification of hyperspectral images for brain cancer detection during surgical operations. PLoS ONE 2018, 13, e0193721. doi:10.1371/journal.pone.0193721.
16. Fabelo, H.; Ortega, S.; Szolna, A.; Bulters, D.; Piñeiro, J.F.; Kabwama, S.; J-O'Shanahan, A.; Bulstrode, H.; Bisshopp, S.; Kiran, B.R.; et al. In-Vivo Hyperspectral Human Brain Image Database for Brain Cancer Detection. IEEE Access 2019, 7, 39098–39116. doi:10.1109/ACCESS.2019.2904788.
17. Styles, I.B.; Calcagni, A.; Claridge, E.; Orihuela-Espina, F.; Gibson, J.M. Quantitative analysis of multi-spectral fundus images. Med. Image Anal. 2006, 10, 578–597. doi:10.1016/j.media.2006.05.007.
18. Bradski, G. The OpenCV Library. Dr. Dobb's J. Softw. Tools 2000, 120, 122–125.
19. Yang, G.; Stewart, C.V.; Sofka, M.; Tsai, C. Registration of Challenging Image Pairs: Initialization, Estimation, and Decision. IEEE Trans. Pattern Anal. Mach. Intell. 2007, 29, 1973–1989. doi:10.1109/TPAMI.2007.1116.
20. Behmann, J.; Acebron, K.; Emin, D.; Bennertz, S.; Matsubara, S.; Thomas, S.; Bohnenkamp, D.; Kuska, M.T.; Jussila, J.; Salo, H.; et al. Specim IQ: Evaluation of a New, Miniaturized Handheld Hyperspectral Camera and Its Application for Plant Phenotyping and Disease Detection. Sensors 2018, 18, 441. doi:10.3390/s18020441.
21. Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0). Available online: https://creativecommons.org/licenses/by-nc-sa/4.0/ (accessed on 24 August 2020).
22. Beeckman, J.; Neyts, K.; Vanbrabant, P.J.M. Liquid-crystal photonic applications. Opt. Eng. 2011, 50, 081202. doi:10.1117/1.3565046.
23. Thorlabs OSL2BIR Product Page with Specifications. Available online: https://www.thorlabs.com/thorproduct.cfm?partnumber=OSL2BIR (accessed on 19 August 2020).
24. Jablonski, N.G.; Chaplin, G. The evolution of human skin coloration. J. Hum. Evol. 2000, 39, 57–106. doi:10.1006/jhev.2000.0403.
25. Horecker, B.L. The Absorption Spectra of Hemoglobin and Its Derivatives in the Visible and Near Infra-Red Regions. J. Biol. Chem. 1943, 148, 173–183.

Sample Availability: The ODSI-DB database is available at https://sites.uef.fi/spectral/odsi-db/.

Publisher's Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
Food Waste in the National School Lunch Program 1978-2015: A Systematic Review
Carmen Byker Shanks, PhD, RDN; Jinan Banna, PhD, RD; Elena L. Serrano, PhD
JOURNAL OF THE ACADEMY OF NUTRITION AND DIETETICS

ARTICLE INFORMATION
Article history: Submitted 19 April 2016. Accepted 6 June 2017. Available online 11 August 2017.
Keywords: Food waste; Plate waste; School lunch; Consumption; Diet
2212-2672/Copyright © 2017 by the Academy of Nutrition and Dietetics. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/). http://dx.doi.org/10.1016/j.jand.2017.06.008

ABSTRACT
Background: Food waste studies have been used for more than 40 years to assess nutrient intake, dietary quality, menu performance, food acceptability, cost, and effectiveness of nutrition education in the National School Lunch Program (NSLP).
Objective: Describe the methods used to measure food waste and the respective results in the NSLP across time.
Methods: A systematic review using PubMed, Science Direct, Informaworld, and Institute of Scientific Information Web of Knowledge was conducted using the following search terms: waste, school lunch, plate waste, food waste, kitchen, half method, quarter method, weight, and photography. Studies published through June 2015 were included.
The systematic review followed preferred reporting items for systematic reviews and meta-analyses (PRISMA) recommendations.
Results: The final review included 53 articles. Food waste methodologies included in-person visual estimation (n=11), digital photography (n=11), direct weighing (n=23), and a combination of in-person visual estimation, digital photography, and/or direct weighing (n=8). A majority of studies used a pre-post intervention or cross-sectional design. Fruits and vegetables were the most researched dietary component on the lunch tray and yielded the greatest amount of waste across studies.
Conclusions: Food waste is commonly assessed in the NSLP, but the methods are diverse and the reporting metrics are variable. Future research should focus on establishing more uniform metrics to measure and report on food waste in the NSLP. Consistent food waste measurement methods will allow for better comparisons between studies. Such measures may facilitate better decision making about NSLP practices, programs, and policies that influence student consumption patterns across settings and interventions.
J Acad Nutr Diet. 2017;117:1792-1807.

The National School Lunch Program (NSLP) serves more than 31 million children in more than 100,000 schools each school day.1,2 The NSLP aims to offer balanced meals to schoolchildren, provided at free or reduced cost for low-income populations and subsidized by the federal government.2 The Healthy Hunger Free Kids Act of 2010 required updated nutrition standards for schools based on the most recent Dietary Guidelines for Americans and Institute of Medicine recommendations.3 The requirements consist of five meal components (fruits, vegetables, whole grains, low-fat dairy, and protein) and sodium content in a specified range. The serving size and caloric limits for each meal for children enrolled in grades kindergarten through 12 are based on age group.
A lunch provided to a student must consist of three of the five components offered to be considered a reimbursable meal, with one of the components being a fruit or vegetable.3

The NSLP setting provides an important opportunity for researchers and practitioners to study how much and what types of nutrients children consume and waste. The lunchroom is experimental in nature because menus are designed (and can be changed) by local school food authorities per national nutrition standards, food portions are standardized, and many students dine in the cafeteria every school day. Study results with high external validity have far-reaching implications for the NSLP nationwide.

Since the 1970s,4 researchers have used plate and food waste studies to observe nutrient intake, dietary quality, menu performance, food acceptability, cost, and effectiveness of nutrition education in the NSLP. Plate waste and food waste are used synonymously throughout most of the school foods research literature and will herein be referred to as food waste. Food waste studies measure the uneaten edible portion of the food served to an individual.5 Food waste methodology can measure several important food and nutrition outcomes,6 including the amount of a specific nutrient available, consumed, and wasted; the types of food groups most likely to be eaten or thrown away; compliance with nutrition practices and policies; the effect of nutrition education on food choice and consumption; the acceptability of menu items; and the influence of waste on an institution's budget and on natural resources. The resulting data can be used to drive important changes in practices, programs, and policies in a school lunch program. In addition, in recent
Figure. Preferred reporting items for systematic reviews and meta-analyses (PRISMA) 2009 flow diagram for selecting studies to include in the systematic review of food waste in the National School Lunch Program across time. Flow of records through identification, screening, eligibility, and inclusion: 10,892 records identified through database search (PubMed, Science Direct, Informaworld, ISI Web of Knowledge); 9,330 records remaining after 1,562 duplicates removed across and within databases; 1,496 remaining after screening by title (7,834 excluded for irrelevant titles); 66 remaining after screening by abstract (1,430 excluded for irrelevant abstracts); 53 remaining after screening of full texts (13 excluded for irrelevant full texts); none excluded based on quality; 53 studies included in the synthesis. Terms used in this search included a combination of the following: waste, school lunch, plate waste, food waste, kitchen waste, half method, quarter method, weight, and photography. (a) Relevance was determined by inclusion and exclusion criteria. Inclusion criteria for articles: peer reviewed, English language, and conducted in the US National School Lunch Program (NSLP). Exclusion criteria for articles: no focus on the US NSLP, food waste not used as a measurement tool, review of literature, or a conference meeting abstract. ISI=Institute for Scientific Information.
years, global and national food waste campaigns have further amplified the importance of reducing food waste.7,8 The purpose of this systematic review was to provide a summary of the literature describing the measurement and results of food waste studies in the NSLP across time.

METHODS

Search Strategy
Articles included in this systematic literature review were extracted from PubMed, Science Direct, Informaworld, and ISI Web of Knowledge following the preferred reporting items for systematic reviews and meta-analyses (PRISMA) format; studies published through June 2015 were considered.9 When testing key words, these databases yielded relevant articles. The authors tested potential key words related to the NSLP and food waste through mock searches to ensure that the final list of terms captured relevant articles that met the inclusion and exclusion criteria. Keywords entered with Boolean operators included waste, school lunch, plate waste, food waste, kitchen, half method, quarter method, weight, and photography. The following are two search strategies used in Science Direct: waste OR "food waste" OR "plate waste" OR "kitchen waste" AND school AND lunch; waste OR "food waste" OR "plate waste" AND school AND lunch AND "quarter method" OR "half method" OR weight OR photography. No limits or filters were used in the search. The search strategy was modified for individual databases.

Study Selection
The main criterion for inclusion was the explicit use and description of a method to measure food waste in the NSLP. Articles included were peer-reviewed, written in the English language, and based on studies conducted in the United States covering the NSLP. Journal articles that collected primary data were considered. Articles were excluded in cases
Table 1. In-person visual estimation through observation for food waste studies conducted in the National School Lunch Program. Row entries are listed in the order of the 11 reference studies; footnote letters from the original table are shown in parentheses.

Reference: Green and colleagues, 1987 [11]; Reger and colleagues, 1996 [12]; Auld and colleagues, 1999 [13]; Blom-Hoffman and colleagues, 2004 [14]; Just and colleagues, 2013 [15]; Wansink and colleagues, 2013 [16]; Just and colleagues, 2014 [17] (a); Cullen and colleagues, 2015 [18]; Cullen and colleagues, 2015 [19] (a); Price and colleagues, 2015 [20]; Wansink and colleagues, 2015 [21]
Study design: I (b,c); CS (d); I (e); RCT (f); I (e); RCT; I (e); RCT; I (e); I (e); I (e)
Specific data collection method: 1/2 (g,h); 6 (i); E (j); 6 (k); 1/2 (l); 1/4 (m,n); 1/4 (n); 1/4 (n); 1/4 (n); 1/2 (l); 1/4 (n)
Type and no. of schools: Elementary 1, 1, 4, 1, 18, 8, 8, 7; Middle 6, 4; High 1, 1
Grade level: 3; 3-6; 2-4; Kindergarten-1; NR (o); NR; NR; Kindergarten-8; NR; 1-6; NR
Average percent wasted for dietary components measured (p): Grains/bread 37, 27, 34; Vegetables 12, 58, > (q), >, 19, 48, 32, >, 19; Fruits/fruit juice 31, 39, >, 41 (r), 15, 27, 23, >; Meat/meat alternate 1, 18; Milk 50, 17, 18, 27; Other 33 (r), 62 (s), 11 (s), 95 (t), 64 (t)
Days of food waste data collection (u): 70; 20; NR; 3; NR; 6; 3; NR; NR; 14; 3
No. of waste observations (v): 123; 240; 502; NR; 47,414; 640; 3,330; 1,576; 1,045; 22,939; 554
Table 1 (continued). In-person visual estimation through observation for food waste studies conducted in the National School Lunch Program.

Reference: Green and colleagues, 1987 [11]; Reger and colleagues, 1996 [12]; Auld and colleagues, 1999 [13]; Blom-Hoffman and colleagues, 2004 [14]; Just and colleagues, 2013 [15]; Wansink and colleagues, 2013 [16]; Just and colleagues, 2014 [17] (a); Cullen and colleagues, 2015 [18]; Cullen and colleagues, 2015 [19] (a); Price and colleagues, 2015 [20]; Wansink and colleagues, 2015 [21]
Effective Public Health Practice Project quality rating [10]: Strong; Weak; Strong; Strong; Moderate; Strong; Moderate; Strong; Strong; Strong; Strong

Footnotes for Table 1:
(a) Data were collected to assess food waste after new school lunch meal patterns were implemented beginning 2012.
(b) I=intervention.
(c) Pre-post-follow-up intervention.
(d) CS=cross-sectional.
(e) Pre-post intervention.
(f) RCT=randomized controlled trial.
(g) 1/2=half waste method.
(h) A plus sign was recorded for more than half of the food wasted and a minus sign for less than half of the food wasted.
(i) 6=six-point scale scored as 1=ate all of food to 6=ate none of food.
(j) E=estimation.
(k) Measured with a 6-point scale: 5=91% to 100%; 4=76% to 90%; 3=51% to 75%; 2=26% to 50%; 1=11% to 25%; 0=0% to 10%.
(l) Measured in increments of 1/2 a serving.
(m) 1/4=quarter waste method.
(n) Measured in increments of none, 1/4, 1/2, 3/4, or all wasted.
(o) NR=not reported with specificity.
(p) In some cases, the average percent waste within a dietary component was reported within the cited article. In other cases, this study's authors calculated the average percent wasted within a dietary component when the research design collected waste across multiple intervention periods. When percent consumed was reported (instead of percent waste), this study's authors calculated average percent waste by subtracting the percent consumed from 100% and, if necessary, averaged across multiple intervention periods or groups.
(q) >=study indicated the dietary component was measured but not the average percent wasted within the dietary component.
(r) Specific macro- and/or micronutrients measured in the whole meal.
(s) Measured waste of a mixed entrée.
(t) Measured waste of legumes.
(u) Data calculated as the number of days reported for the study multiplied by the number of schools involved in food waste collections.
(v) Data reported according to study as individual food items or entire student trays.
rSpecific macro- and/or micronutrients measured in whole meal. sMeasured waste of a mixed entrée. tMeasured waste of legumes. uData calculated as number of days reported for study multiplied by number of schools involved in food waste collections. vData reported according to study as individual food items or entire student tray. R E S E A R C H N o vem b er 2017 Vo lu m e 117 N u m b er 11 JO U R N A L O F T H E A C A D E M Y O F N U T R IT IO N A N D D IE T E T IC S 1 7 9 5 Table 2. Visual estimation through digital photography for food waste studies conducted in the National School Lunch Program Reference Marlette and colleagues, 200522 Martin and colleagues, 200623 Martin and colleagues, 201024 Smith and colleagues, 201325 Williamson and colleagues, 201326 Bontrager and colleagues, 201427 Bontrager and colleagues, 201428 Hubbard and colleagues, 201429a Alaimo and colleagues, 201530 Bontrager and colleagues, 201531a Monlezun and colleagues, 201532a Study design CSb CSc CS CS RCTd Ief CS If If If CS Specific data collection methode RPg RP RP PIh PI PI PI PI PI PI PI Type and no. of schools Elementary 33 3 21 8 9 6 11 1 Middle 3 1 2 1 Other 1 Grade level 6 6 4-6 1-8 4-6 3-5 3-5 NRi 3-5 3-5 Kindergarten-8 Average percent wasted for dietary components measuredj Grains/bread 16 >k 27 32 > > Vegetables 32 > 37l 32 > > > > > > Fruits/fruit juice 38 > 40 > > > > > > Meat/meat alternate 21 > > Milk 15 > 30 27 > > > Other 32m >mn >n 22m >n > >o >m >mn Days of food waste data collectionp 24 5 3 23 3 64 32 10 12 NR 5 No. of waste observationsq 743 215 2,049 899 NRf 4,451 2,292 644 1,192 7,117 1,750 (continued on next page) R E S E A R C H 1 7 9 6 JO U R N A L O F T H E A C A D E M Y O F N U T R IT IO N A N D D IE T E T IC S N o vem b er 2017 Vo lu m e 117 N u m b er 11 Table 2. 
Visual estimation through digital photography for food waste studies conducted in the National School Lunch Program (continued)
Effective Public Health Practice Project quality rating(10): Weak; Weak; Weak; Weak; Strong; Weak; Weak; Moderate; Moderate; Moderate; Weak
a Data were collected to assess food waste after new school lunch meal patterns were implemented, beginning 2012.
b CS = cross-sectional.
c Cross-sectional study used for validation purposes.
d RCT = randomized controlled trial.
e I = intervention.
f Pre-post intervention.
g RP = raw percent, meaning percent of food selection and plate waste in photograph compared with reference photographed and weighed portion.
h PI = percent increments, meaning percent increments (eg, in 10% or 25% increments) of food selection and plate waste in photograph compared with reference photographed and weighed portion.
i NR = not reported with specificity.
j Data calculated as number of days reported for study multiplied by number of schools involved in food waste collections.
k > = study indicated dietary component measured but not average percent wasted within dietary component.
l Fruits and vegetables combined.
m Measured waste of a mixed entrée.
n Specific macro- and/or micronutrients measured in whole meal.
o Measured waste of legumes.
p In some cases, the average percent waste within a dietary component was reported within the cited article. In other cases, this study's authors calculated average percent wasted within a dietary component when the research design collected waste across multiple intervention periods.
When percent consumed was reported (instead of percent waste), this study's authors calculated average percent waste by subtracting the percent consumed from 100% and, if necessary, averaged across multiple intervention periods or groups.
q Data reported according to study as individual food items or entire student tray.
Table 3. Direct weighing for food waste studies in the National School Lunch Program(a)
Reference: Jansen and colleagues, 1978(33); Davidson and colleagues, 1979(34); Comstock and colleagues, 1982(35); Getlinger and colleagues, 1996(36); Whatley and colleagues, 1996(37); Adams and colleagues, 2005(38); Toma and colleagues, 2009(39); Hoffman and colleagues, 2010(40); Lazor and colleagues, 2010(41); Chu and colleagues, 2011(42); Hoffman and colleagues, 2011(43)
Study design: Q(b); CS(c); CS; I(d,e); I(f); CS; I(e); I(f); CS; CS; L(g)
Specific data collection method: DW(j,k); DW(l); DW(k); DW(k); DW(l); DW(m); DW(k); DW(k); DW(k); DW(k); DW(k)
Type and no. of schools: Elementary 29, 23, 11, 1, 2, 4, 1, 4, 12, 4; Middle 5, 3; High 29, 2
Grade level: 5 and 10; 1-3; 1-5 or 6; 1-3; 3-5; 1-5; Kindergarten-6; Kindergarten-1; NR(r); NR; Kindergarten-1
Average percent wasted for dietary components measured(s): Grains/bread 21, >(t), 18, >, 35; Vegetables 51, >, 16, >, >, >; Fruits/fruit juice 30, >, 12, >, >, >; Meat/meat alternate 18, >, 18, >; Milk 9, >, 82; Other 32(u), >(u,v), >, 2, >(v), >(u), >(v)
Days of food waste data collection(w): 10; NR; 33; 8; 76; 4; 7; 36; NR; NR; 60
No. of waste observations(x): 130,000; 230; 13,749; NR; 560; 294; NR; 1,414; 1,933; NR; 1,060
Effective Public Health Practice Project quality rating(10): Weak; Weak; Weak; Moderate; Moderate; Weak; Moderate; Moderate; Weak; Weak; Strong
a Data were collected to assess food waste after new school lunch meal patterns implemented beginning 2012.
b Q = quasiexperimental.
c CS = cross-sectional.
d I = intervention.
e Pre-post intervention.
f Pre-post-follow-up intervention.
g L = longitudinal.
h MM = mixed methods.
i RCT = randomized controlled trial.
j DW = direct weighing.
k Difference weight of plate waste for each food minus weight of average selected serving.
l Percent plate waste calculated by dividing the weight of edible food waste by the mean serving weight.
m Difference weight of plate waste for each food minus preconsumption selections for all students' plates.
n Weight of fluid milk remaining was determined using the full weight and empty container weight of the carton.
o Fruit and vegetable consumption was calculated by weighing all produce prepared and subtracting unserved and waste weights, divided by number of students.
p Waste was sorted by hand and weighed on a digital scale.
q At least one study school was not identified as elementary or middle, but identified kindergarten through eighth grade, or was not identified as middle or high, but identified as grades six through 12.
r NR = not reported with specificity.
s In some cases, the average percent waste within a dietary component was reported within the cited article. In other cases, this study's authors calculated average percent wasted within a dietary component when the research design collected waste across multiple intervention periods. When percent consumed was reported (instead of percent waste), this study's authors calculated average percent waste by subtracting the percent consumed from 100% and, if necessary, averaged across multiple intervention periods or groups.
t > = study indicated dietary component measured but not average percent wasted within dietary component.
u Measured waste of a mixed entrée.
v Specific macro- and/or micronutrients measured in whole meal.
w Data calculated as number of days reported for study multiplied by number of schools involved in food waste collections.
x Data reported according to study as individual food items or entire student tray.
Table 3.
Direct weighing for food waste studies in the National School Lunch Program(a) (continued)
Reference: Cohen and colleagues, 2012(44); Yon and colleagues, 2012(45); Cohen and colleagues, 2013(46); Ramsay and colleagues, 2013(47); Byker and colleagues, 2014(5); Cohen and colleagues, 2014(48); Hunsberger and colleagues, 2014(49); Jones and colleagues, 2014(50); Jones and colleagues, 2014(51); Cohen and colleagues, 2015(52); Miller and colleagues, 2015(53); Wilkie and colleagues, 2015(54)
Study design: CS; MM(h); CS; Q; CS; I(e); MM; I(e); I(e); RCT(i); I(e); CS
Specific data collection method: DW(k); DW(n); DW(k); DW(k); DW(k); DW(k); DW(k); DW(o); DW(o); DW(k); DW(k); DW(p)
Type and no. of schools: 9; 1; 1; 4(q); 1; 1(q); 1; 7; 1; 1; 4; 4; 7; 2(q)
Grade level: NR; 3-5; 6-8; K; Prekindergarten-Kindergarten; 1-8; Kindergarten-2; Kindergarten-8; 1-5; 3-8; Kindergarten-5; Kindergarten-12
Average percent wasted for dietary components measured: >, >, >, 73, >, 51, 67, >, >, >, 73, >, >, 47, >, 33, 43, >, >, >, 36, >, >, 75, >, 25, >, 46, 41, >, 18(u), 19(u), 51(u), 20(u), >(u), 27(u), >
Days of food waste data collection: 8; 9; 8; 4; 5; 16; 5; 23; 64; 84; 3; 20
No. of waste observations: 3,049; 793; 3,049; 473; 304; 1,030; 261; 180; 251; 2,638; 2,027; NR
Effective Public Health Practice Project quality rating(10): Moderate; Weak; Moderate; Moderate; Weak; Strong; Strong; Moderate; Moderate; Strong; Strong; Weak
where they did not focus on the NSLP, were conducted outside of the United States, did not measure food waste, or presented a review of the literature. Meeting abstracts were excluded due to limited information about the methodology conducted. Cross-sectional, intervention, quasiexperimental, randomized controlled trial, and mixed-methods study designs and methods were considered.

Data Extraction
Two reviewers first evaluated articles by titles, abstracts, and key words. In cases where food waste and kindergarten through 12th-grade schools were discussed in the title, abstract, or key words of an article, the full article was reviewed to determine relevance. Titles and abstracts that met the inclusion criteria were recorded for full-text review. The references in each included article were reviewed to determine whether any additional studies were relevant, although no additional articles were found that were not already captured in the search. The authors reviewed each article independently and met to determine inclusion or exclusion; disagreements were resolved via discussion. For each article included in the review, one coder collected and entered data into an extraction template. Information recorded included: first author and year published, purpose, study design and specific data collection method, school type, number of schools involved, location of school, number of students, free and reduced NSLP eligibility, race/ethnicity, grade level or age, dietary component measures, duration and frequency of the data collected, food waste results, other findings relevant to food waste, and whether the study was conducted before or after implementation of the NSLP standards updated by the Healthy, Hunger-Free Kids Act of 2010. The categories for data extraction were determined based on factors that may inform a researcher's decision to select a particular food waste measurement method. For example, it may be useful for researchers to understand the various ways results are reported when using a particular method (ie, waste of nutrients, specific foods, or food groups). The data collected, along with the publication, were reviewed by at least two additional coders to ensure accuracy; all disagreements were resolved by discussing inclusion and exclusion criteria to reach consensus.

Quality Appraisal of Individual Studies
Study quality was assessed using the Effective Public Health Practice Project (EPHPP) Quality Assessment Tool.10 The EPHPP Quality Assessment Tool provides researchers with criteria to evaluate studies on the basis of selection bias, study design, confounders, blinding, data collection methods, withdrawals and dropouts, intervention integrity, and analysis.
Each criterion is scored numerically according to guidelines provided by the EPHPP Quality Assessment Tool as strong (score = 1), moderate (score = 2), or weak (score = 3). Subsequently, the entire article is rated as strong (no weak ratings), moderate (one weak rating), or weak (two or more weak ratings). This study was exempt from institutional review board review because there was no interaction with human subjects.

RESULTS
A total of 10,892 articles were retrieved using the database search. After eliminating duplicates and articles that did not meet inclusion criteria based on title and abstract screening, 66 articles remained for content review. After reviewing the full articles, 13 studies were excluded for the following reasons: four were conducted outside of the United States; four did not involve the NSLP; three were in preschools; and two were conference abstracts, not full articles (see the Figure). The 53 studies included in this review used four major types of food waste measurement methodologies: in-person visual estimation (n = 11) (Table 1), digital photography (n = 11) (Table 2), direct weighing (n = 23) (Table 3), and a combination of in-person visual estimation, digital photography, and/or direct weighing (n = 8) (Table 4). With regard to study design and methods, most studies were interventions with a pre-post or pre-post-follow-up design (n = 20) or cross-sectional (n = 23); two were quasiexperimental, two were mixed methods, one was longitudinal, and five were randomized controlled trials. Fourteen studies were rated as strong, 20 as moderate, and 19 as weak according to the EPHPP Quality Assessment Tool. Studies labeled as moderate were likely to have a weak rating for study design, whereas studies labeled as weak were likely to have weak ratings for selection bias or confounders and study design.
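The roll-up rule described above (strong with no weak component ratings, moderate with exactly one, weak with two or more) can be sketched as a small function. The numeric coding follows the tool's guidelines; the function name itself is illustrative, not part of the EPHPP tool.

```python
def ephpp_overall_rating(component_scores):
    """Roll EPHPP component ratings (1=strong, 2=moderate, 3=weak)
    into a global rating: strong if no weak components, moderate if
    exactly one, weak if two or more."""
    weak_count = sum(1 for score in component_scores if score == 3)
    if weak_count == 0:
        return "strong"
    if weak_count == 1:
        return "moderate"
    return "weak"
```

For example, a study scored strong or moderate on every component but weak on both selection bias and study design would be rated weak overall.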
See Tables 1 through 4 for quality assessment ratings.

In-Person Visual Estimation of Food Waste through Observation
In-person visual estimation through observation of food waste occurred in 11 studies (Table 1).11-21 Researchers conducted in-person visual estimation through observation by first viewing several serving sizes of school lunch foods of interest to understand the appearance of the average plated food component. Researchers then weighed several samples of the plated food item of interest to find the average serving weight in grams or ounces. Finally, student trays were collected and assessed for the amount of food wasted in validated increments. Increments included less or more than half wasted,11,15,20 quarters (eg, none, half, three-quarters, or all),16-19,21 a 6-point scale (eg, 0 = 0% to 10% and 5 = 91% to 100%),12,14 or a percent estimation (eg, on a scale of 0% to 100%).13 In some studies, a computer program was used to estimate the grams or ounces and the energy of food consumed from the in-person visual estimation through observation. One study focused on the total amount of food wasted.12 Other studies used food waste measurement as a proxy for the amount of food students consumed.
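A minimal sketch of how such increment codes translate into an estimated weight of waste, assuming the quarter-waste fractions and a pre-weighed average serving; the mapping keys and function name are illustrative, not drawn from any cited study.

```python
# Quarter-waste method: observers record the fraction of the serving wasted.
QUARTER_WASTE_FRACTIONS = {
    "none": 0.0,
    "quarter": 0.25,
    "half": 0.5,
    "three-quarters": 0.75,
    "all": 1.0,
}

def estimated_waste_grams(increment, avg_serving_g):
    """Convert a visual quarter-waste observation into grams wasted,
    using the average weighed serving size for the food item."""
    return QUARTER_WASTE_FRACTIONS[increment] * avg_serving_g
```

With a 120-g average serving, an observation of "half" would be recorded as roughly 60 g of waste.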
The research had a variety of aims, including understanding the influence of nutrition education11,13,14,21 or changes in nutrition requirements.18,19 In addition, studies examined the effects of lunchtime procedures, the food environment or infrastructure,15,16,20 and food acceptability on consumption levels.17 Studies were concentrated in the West,15,20 Northeast,14,16,17,21 and South,12,18,19 with two studies not reporting geographic location.11,13 Three studies examined schools with free and reduced lunch eligibility rates of more than 80%.12-14 By far, fruits and vegetables were the most frequently studied food groups.12-21 Nutrition education was minimally effective in decreasing the amount of food waste.11,13,14,21 Modifying lunchtime procedures or the food itself increased consumption of foods and decreased waste.15-17,20 New nutrition standards resulted in no significant differences in the percentage of fruits, vegetables, or whole grains consumed or wasted.19 Sex and age significantly influenced waste in Reger's study.12

Visual Estimation of Food Waste through Digital Photography
Visual estimation through digital photography was used in 11 studies (Table 2).22-32 Researchers conducted visual estimation of food waste through digital photography by photographing reference serving sizes of the food component of interest, the student's selected food preconsumption, or both. When taking photographs of the reference serving sizes, researchers generally calculated an average weight for the food component as well. Each student's tray was then photographed at the tray return area (postconsumption). In reviewing the photographs, food consumption was estimated as a percentage of the reference serving size or the student's preconsumption selection.
Food waste estimates were made as a raw percent22-24 or in increments of 10%,25,26,32 25%,27-31 or a coarser 0%, 10%, 25%, 50%, or 100% scale.26,27 Computer applications were used to estimate the weight and energy of food consumed from the visual estimation through digital photography in studies using this method. The purposes of each study varied, with food waste measures aimed primarily at understanding the amount of food waste22,31 and food consumption,25,27,28 modification of the food environment or lunchtime procedures,26,29 instrument validity,23 compliance with nutrition recommendations,24 and nutrition education.30,32 Studies were conducted in the West,25,26 Midwest,27,28,30,31 Northeast,29 and South,22,24,32 although one did not report geographic location.23 Alaimo and colleagues30 and Monlezun and colleagues32 reported free and reduced rates near 100%, whereas several other studies did not report free and reduced rates. As in the studies using visual estimation techniques to measure waste, studies using digital photography also focused predominantly on fruits and vegetables.
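Increment-based photographic estimation amounts to snapping a raw visual estimate into the nearest reporting bin. A minimal sketch under that assumption (the helper name and default bin size are illustrative):

```python
def round_to_increment(raw_percent, increment=25):
    """Snap a raw visual waste estimate (0-100%) to the nearest
    reporting increment, eg, the 25% bins used by several
    photography studies, clamped to the 0-100% range."""
    binned = round(raw_percent / increment) * increment
    return max(0, min(100, binned))
```

A reviewer judging roughly 37% of a serving as wasted would record 25% under a 25% increment scheme, and 40% under a 10% increment scheme.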
Several distinguished between forms of fruits and vegetables, such as cooked, raw, canned, and fresh.25,29,31 Two studies reported that waste of fruits and vegetables was the highest when compared with other dietary components.22,24 Three studies reported a decrease in waste of fruits and vegetables and other dietary components as a result of an intervention.25,27,29 Several studies expressed food waste in terms of calories rather than as a percentage of food wasted.26,28,32

Direct Weighing of Food Waste
Direct weighing of food waste was used as the main research method in 23 studies (Table 3).5,33-54 The process for direct weighing of food waste generally includes five steps: (1) determine what is being served in the cafeteria on the day of the study; (2) determine which food(s) will be included in the study; (3) weigh random samples of the food(s) and calculate an average weight; (4) collect and weigh food waste from student trays; and (5) calculate the percent (or grams or ounces) consumed by subtracting the food waste collected in Step 4 from the average weight determined in Step 3, dividing by the average weight, and multiplying by 100. Some research measured waste for all foods on the tray,5,33-36,44,46-49,53,54 whereas others focused on collecting waste data about specific foods or food components.37,39-43,45,50-52 Three additional studies measured the weight of all food before it was served, collected all food waste from student trays, and subtracted the total amount left over.38,50,51 About three-fourths of studies used food waste as a proxy for understanding the amount of food students consumed.
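The weighing steps above reduce to a simple calculation. This sketch assumes the common per-item variant, in which plate waste for a food is compared against its pre-weighed average serving; the function names are illustrative.

```python
def percent_wasted(avg_serving_g, waste_g):
    """Percent of the average served portion left as plate waste."""
    return waste_g / avg_serving_g * 100

def percent_consumed(avg_serving_g, waste_g):
    """Percent consumed: the serving minus waste, as a share of the
    average serving weight, multiplied by 100."""
    return (avg_serving_g - waste_g) / avg_serving_g * 100
```

For a food with a 100-g average serving and 32 g of collected waste, this yields 32% wasted and 68% consumed.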
Research aimed to understand the impacts of nutrition education,40,43,50,51 changes in nutrition requirements,47 lunchtime procedures or the food environment,36,38,49,52 or food acceptability on consumption levels.37,39,41,42,44,45,52,53 Six studies specifically aimed to directly measure the amount of waste produced in the NSLP.5,33-35,46,48,54 Studies were concentrated in the West,38,39,49-51 Midwest,36,42,53 Northeast,40,43,44,46,48,52 South,5,34,32,54 and mixed locations,33,35,41,45 with two studies not reporting geographic location.37,47 Seven studies reported free and reduced lunch eligibility rates above 80%.42-44,46,48,49,52 The most common food components examined in studies involving direct weighing were fruits and vegetables. Sixteen studies reported the quantity of waste from fruits and vegetables. Other dietary components examined included milk, grains, and high-protein items such as soy-based products. Studies examined acceptance of specific foods in the cafeteria, such as whole grains.38,39,41,42,45 Two studies found a reduction in food waste from moving recess to before lunch.36,49 Many interventions (eg, nutrition education, changes in nutrition requirements, lunchtime procedures or the food environment, or food acceptability on consumption levels) led to a decrease in waste for some foods.
Combination of Methods
Eight studies used a combination of in-person visual estimation through observation, visual estimation through digital photography, and/or direct weighing methods (Table 4).55-62 One study used direct weighing, visual observation, and children's ratings.55 Three studies used direct weighing and visual observation.56,58,59 Three studies used direct weighing and digital photography.57,61,62 One study used direct weighing, two types of visual observation, and digital photography.60 Four studies were designed to validate or compare food waste measures,55,56,60,61 one study validated a questionnaire against a food waste methodology,58 and three used food waste as a proxy for measuring the amount of food students consumed.57,59,62 Research aiming to understand food waste and consumption examined responses to changes in food requirements.57,59,62 Studies were concentrated in the West,58,59 Northeast,62 and South,57 with four studies not reporting geographic location in the United States.55,56,60,62 Rates for free or reduced school lunch eligibility ranged from 35.0%61 to 93.6%58; however, more than half of the studies did not report this information. Fruit and vegetable components were consistently assessed across all studies except one, which focused on competitive (snack) foods.57 Researchers used a combination of measures to validate a food waste measurement tool through comparison with a gold standard of direct weighing of waste. For the validity studies, the digital imaging and observation technique was found to be comparable to weighed plate waste with 96% agreement,61 and the quarter-waste method had a reliability measure of 0.9,60 both showing promise as
Table 4.
Combination of methodologies for food waste studies conducted in the National School Lunch Program (visual estimation, digital photography, direct weighing)(a)
Reference: Comstock and colleagues, 1981(55); Graves and colleagues, 1983(56); Templeton and colleagues, 2005(57); Wallen and colleagues, 2011(58); Gase and colleagues, 2014(59); Hanks and colleagues, 2014(60); Taylor and colleagues, 2014(61); Schwartz and colleagues, 2015(62)
Study design: CS(b,c); CS(c); CS; CS(c); CS; CS(c); CS(c); I(d,e)
Specific data collection method: W(f) VO(g); W(h) VO(i); W(h) DP(j,k); W(h) VO(i); W VO(i); W VO(i) DP(k); W(h) DP(k); W(h) DP
Type and no. of schools: Elementary 5, 1, 2, 1, 2; Middle 3, 4, 12
Grade level: Kindergarten-6; 1-6; 6; 4; NR(l); Kindergarten-5; 3-5; 5-7
Average percent wasted for dietary components measured(m): Grains/bread >(n), >, >, >; Vegetables >, >, >, >, >, >, 51; Fruits/fruit juice >, >, >, >, >, >, 31; Meat/meat alternate >, >; Milk >, >, >, 45; Other >(o), >(o), >(p), >(o), >(o), >(o), 26(o)
Days of food waste data collection(q): 4; 8; 24; 1; 20; 1; 8; 36
No. of waste observations(r): 2,000; 450; 743; 125; 2,228; 197; 276; 1,340
Effective Public Health Practice Project quality rating(10): Weak; Weak; Moderate; Moderate; Moderate; Moderate; Moderate; Moderate
a Data were collected to assess food waste after new school lunch meal patterns were implemented beginning 2012.
b CS = cross-sectional.
c Cross-sectional study used for validation purposes.
d I = intervention.
e Pre-post intervention.
f W = direct weighing.
g VO = visual observation.
h Difference weight of plate waste for each food minus weight of average selected serving.
i Quarter-waste method (none, half, three-quarters, or all).
j DP = digital photography.
k Estimate percent of food selected and plate waste in photograph compared with reference photograph or a sample tray.
l NR = not reported with specificity.
m In some cases, the average percent waste within a dietary component was reported within the cited article. In other cases, this study's authors calculated average percentage wasted within a dietary component when the research design collected waste across multiple intervention periods. When percent consumed was reported (instead of percentage waste), this study's authors calculated average percentage waste by subtracting the percentage consumed from 100% and, when necessary, averaged across multiple intervention periods or groups.
n > = study indicated dietary component measured but not average percentage wasted within dietary component.
o Measured waste of a mixed entrée.
p Specific macro- and/or micronutrients measured in whole meal.
q Data calculated as number of days reported for study multiplied by number of schools involved in food waste collections.
r Data reported according to study as individual food items or entire student tray.
alternatives to direct weighing. One other study found that the Day in the Life Questionnaire-Colorado dietary assessment had a high level of validity compared with plate waste.58

DISCUSSION
This literature review highlights methods and results from four main research methodologies found across 53 food waste studies in the NSLP across time.
Studies using in-person visual estimation, digital photography, direct weighing, and combinations of these methods varied greatly in research goals, protocol, and reporting. The results of this review may be useful for researchers seeking to measure food waste in school meals, influence what is consumed and wasted at schools, implement effective interventions, and develop new methods for measurement of food waste. Study aims ranged from evaluating the effects of programs on food consumption and/or waste to generally assessing food waste. No discernible trends in food consumption or food waste outcomes were observed based on study design (cross-sectional, intervention, quasiexperimental, mixed methods, or randomized controlled trial), the percentage of students eligible for free or reduced school lunch, geographic location of the school, and/or race or ethnicity. Most studies covered elementary schools, followed by middle schools; only five studies were conducted in high schools. Inconsistencies were noted in reporting key study design features (eg, number of schools, location of school, and dietary component measured) and participant characteristics (eg, eligibility for free or reduced school lunch, race/ethnicity, and specific grade of students). There was a large degree of variability in how food waste was characterized in results. For example, units of measurement were reported in grams, ounces, percentages, or kilocalories. More uniform reporting metrics would allow food waste data to be pooled across studies, with the potential to improve understanding of consumption patterns and influence the school lunch field. Across methodologies, most studies reported the percentage of food groups or specific foods wasted.
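The kind of normalization such uniform reporting would require can be sketched as follows; the function, unit labels, and conversion rule are illustrative assumptions, not a metric proposed by any cited study.

```python
def to_percent_wasted(value, unit, serving_size=None):
    """Normalize a reported waste figure to percent of serving.
    'percent' values pass through; 'grams' or 'ounces' require the
    average serving size in the same unit."""
    if unit == "percent":
        return value
    if unit in ("grams", "ounces"):
        if serving_size is None:
            raise ValueError("serving size needed to convert weight to percent")
        return value / serving_size * 100
    raise ValueError(f"unsupported unit: {unit}")
```

Under this rule, a study reporting 30 g of waste against a 120-g serving and a study reporting 25% waste would land on the same normalized value.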
Some studies using in-person visual estimation through observation or digital photography reported food waste in terms of calories or number of servings wasted.15,20,27,29,32 In one study using direct weighing, findings were presented by cost and the percentage of the total food budget wasted.46 Researchers also reported findings in terms of nutrients wasted and weight of food wasted. This variability contributes to the difficulty of understanding changes in food waste over time and differences across settings and populations by methodology. Many studies used observation, photography, and/or weighing of food waste as a proxy for measuring food consumption. Perhaps using "plate consumption" rather than "food waste" or "plate waste," as Alaimo suggests,30 would increase the relevance of the measurement method to a study's purpose. The language around plate waste and food waste should be selected carefully, especially in light of the attention that the NSLP receives from the public, media, and policymakers.63 In addition, plate waste and food waste are used interchangeably in the school lunch literature, and researchers should choose one term to reduce confusion. Several trends were noted across the methodologies. Nearly all studies were cross-sectional or interventions; only two studies were quasiexperimental, two used mixed methods, one was longitudinal, and five were randomized controlled trials. Few longitudinal food waste studies existed; thus, there is no clear understanding of how much food is wasted or not wasted as a result of an intervention in the long term. For example, studying the long-term influences on waste of Smarter Lunchrooms design64 is important for knowing how changing the cafeteria food environment changes student consumption and waste throughout kindergarten through grade 12.
Some studies aimed to validate a method or compare methods for assessing intake or waste. The five studies that validated or compared measures found acceptable correlation values or similar results between measures.55,56,58,60,61 More studies should incorporate qualitative data in a mixed-methods design. Pairing qualitative with quantitative data allows for study designs that address research questions that are complex and multifaceted.65 Food waste researchers could address several qualitative questions along with quantitative food waste research, such as: How does student perception of the quality of a particular school's food influence the amount of waste? And why do students waste food in general, from their own perspective? Overall, researchers using the in-person visual estimation through observation methodology collected food waste data for a greater period of time and at a higher frequency than those who used visual estimation through digital photography or direct weighing, likely given the lower burden on the researchers for data collection and analysis. Direct weighing has been used for a longer period of time than visual estimation through either in-person observation or digital photography. Eighteen articles published before 2014 used weighing, compared with eight in-person observation and four digital photography studies. In 2014-2015, 10 studies used direct weighing, seven used in-person observation, and six used digital photography, evidence of the increasing popularity of visual methodologies. Fruits and vegetables were the most consistently measured dietary components, with only 12 studies not measuring them. Fruits and vegetables were often reported to be the foods wasted in the largest quantities across the methodologies used to assess waste. Adequate and balanced nutrition is of vital importance in assisting children to grow and learn.
It is important to understand fruit and vegetable consumption within the context of the entire tray (meal). Examining only a segment of the diet does not account for the other foods that compete within a student's food consumption patterns. Analyses of food preferences toward studied food components, as well as food exposures, would also provide insight into food waste and consumption, especially when research has demonstrated that several exposures may be needed to influence food acceptance.66,67 In addition, a few studies noted that older students wasted more than younger students and girls wasted more than boys; therefore, when addressing food waste, it may be important to consider consumption differences between boys and girls as well as among different age groups. Of note, no studies reported zero food waste. Since the 1970s, most studies have reported more than 30% food waste, and no study has reported less than 5%. With an increasing focus on supporting self-regulation (eg, internal cues for satiety and hunger) instead of a clean plate or responding to visual cues to consume more,66 some level of waste should be expected. A multitude of other factors also influence food waste, including balancing caloric requirements with energy expenditure, metabolic and physical factors, food preferences, serving sizes, the school environment, and what and how much children eat before the meal and in the home environment. However, how can food waste be minimized? This is a long-standing question and a complex issue that should be addressed strategically by the NSLP and food waste researchers.68,69 Summarizing and aggregating data will become easier when researchers establish standardized food waste data collection measures and reporting techniques. Selection of a uniform metric to report results is an important consideration for researchers because consistent reporting may allow for comparison of findings.
Further, the EPHPP Quality Assessment Tool10 ratings were fairly mixed among strong, moderate, and weak. Weaker ratings raise questions about the validity of the findings, potentially due to bias in the selection of subjects, lack of description in the measurement of outcomes, or bias in methods or reporting. Therefore, a standardized food waste data collection measure and reporting technique has the potential to simultaneously increase quality assessment ratings. Limitations exist in this systematic review. The search terms used may not have retrieved all articles relevant to food waste in the NSLP. Therefore, conclusions made in this research are limited to the publications retrieved during the search process. Excluding non-peer-reviewed research may have overlooked important work addressing food waste in schools. For example, Buzby and colleagues6 published a Report to Congress about plate waste amounts and measures in the NSLP before 2002. In addition, food waste connected to other food programs for children has been studied, including the School Breakfast Program and the Summer Food Service Program.

CONCLUSIONS
Generally, studies of food waste and consumption in the NSLP using in-person visual estimation, digital photography, and/or weighing over the past 40 years have yielded mixed results about the amounts of food wasted within differing dietary components. The NSLP has the important purpose of feeding a large majority of our nation's children balanced and nutritious meals. As such, improving measurement methods to understand the amounts of food consumed and wasted in the lunchroom is an important charge for the public health and dietetics fields. There is a need for development of methods using technology that are low cost, have a low subject burden, and allow for measurement of food waste with limited involvement of researchers.
Researchers need to better understand the causes and consequences of food waste on the school lunch tray by designing studies with consistent research protocols that examine dietary quality and food preferences of students. The ultimate goal should be to produce food waste data and implementable strategies that promote continuous improvement in the cafeteria food environment and healthful eating habits among students, especially since wasted food is wasted nutrients.70
References
1. US Department of Agriculture, Food and Nutrition Service. Child nutrition programs. http://www.fns.usda.gov/school-meals/child-nutrition-programs. Published February 2016. Accessed March 2016.
2. US Department of Agriculture, Food and Nutrition Service. National School Lunch Program fact sheet. http://www.fns.usda.gov/cnd/lunch/AboutLunch/NSLPFactSheet.pdf. Published September 2013. Accessed March 2016.
3. US Government Publishing Office. Nutrition standards in the national school lunch and school breakfast programs: Final rule. http://www.gpo.gov/fdsys/pkg/FR-2012-01-26/html/2012-1010.htm. Published January 26, 2012. Accessed March 2016.
4. US Department of Agriculture, Food and Nutrition Service. General Guidelines for Determining Food Acceptability: Procedures for Plate Waste Studies. Washington, DC: US Department of Agriculture; 1975.
5. Byker CJ, Farris AR, Marcenelle M, Davis GC, Serrano EL. Food waste in a school nutrition program after implementation of new lunch program guidelines. J Nutr Educ Behav. 2014;46(5):406-411.
6. Buzby JC, Guthrie JF. Plate Waste in School Nutrition Programs: Final Report to Congress. March 2002. Publication no. EFAN-02-009. https://www.ers.usda.gov/webdocs/publications/43131/31216_efan02009.pdf?v=41423. Accessed June 2, 2017.
7. Gustavsson J, Cederberg C, Sonesson U, van Otterdijk R, Meybeck A. Global Food Losses and Food Waste—Extent, Causes and Prevention. Rome, Italy: Food and Agriculture Organization of the United Nations; 2011.
8. Buzby JC, Wells HF, Hyman J. The Estimated Amount, Value, and Calories of Postharvest Food Losses at the Retail and Consumer Levels in the United States. Washington, DC: US Department of Agriculture Economic Research Service; 2014.
9. Moher D, Liberati A, Tetzlaff J, Altman DG; The PRISMA Group. Preferred reporting items for systematic reviews and meta-analyses: The PRISMA statement. PLoS Med. 2009;6(7):e1000097.
10. Effective Public Health Practice Project Quality Assessment Tool. http://www.ephpp.ca/tools.html. Accessed May 1, 2017.
11. Green NR, Munroe SG. Evaluating nutrient-based nutrition education by nutrition knowledge and school lunch plate waste. Sch Food Service Res Rev. 1987;11(2):112-115.
12. Reger C, O’Neil CE, Nicklas TA, Myers L, Berenson GS. Plate waste of school lunches served to children in a low socioeconomic elementary school in South Louisiana. Sch Food Service Res Rev. 1996;20(suppl):13-19.
13. Auld GW, Romaniello C, Heimendinger J, Hambidge C, Hambidge M. Outcomes from a school-based nutrition education program alternating special resource teachers and classroom teachers. J Sch Health. 1999;69(10):403-408.
14. Blom-Hoffman J, Kelleher C, Power TJ, Leff SS. Promoting healthy food consumption among young children: Evaluation of a multicomponent nutrition education program. J Sch Psychol. 2004;42(1):45-60.
15. Just D, Price J. Default options, incentives and food choices: Evidence from elementary-school children. Public Health Nutr. 2013;16(12):2281-2288.
16. Wansink B, Just DR, Hanks AS, Smith LE. Pre-sliced fruit in school cafeterias: Children’s selection and intake. Am J Prev Med. 2013;44(5):477-480.
17. Just DR, Wansink B, Hanks AS. Chefs move to schools. A pilot examination of how chef-created dishes can increase school lunch participation and fruit and vegetable intake. Appetite. 2014;83:242-247.
18. Cullen KW, Chen TA, Dave JM, Jensen H.
Differential improvements in student fruit and vegetable consumption in response to the new National School Lunch Program regulations: A pilot study. J Acad Nutr Diet. 2015;115(5):743-750.
19. Cullen KW, Chen TA, Dave JM. Changes in foods selected and consumed after implementation of the new National School Lunch Program meal patterns in southeast Texas. Prev Med Rep. 2015;2:440-443.
JOURNAL OF THE ACADEMY OF NUTRITION AND DIETETICS 1805
20. Price J, Just DR. Lunch, recess and nutrition: Responding to time incentives in the cafeteria. Prev Med. 2015;71:27-30.
21.
Wansink B, Hanks AS, Just DR. A plant to plate pilot: A cold-climate high school garden increased vegetable selection but also waste. Acta Paediatr. 2015;104(8):823-826.
22. Marlette MA, Templeton SB, Panemangalore M. Food type, food preparation, and competitive food purchases impact school lunch plate waste by sixth-grade students. J Am Diet Assoc. 2005;105(11):1779-1782.
23. Martin CK, Newton RL, Anton SD, et al. Measurement of children’s food intake with digital photography and the effects of second servings upon food intake. Eat Behav. 2007;8(2):148-156.
24. Martin CK, Thomson JL, LeBlanc MM, et al. Children in school cafeterias select foods containing more saturated fat and energy than the Institute of Medicine recommendations. J Nutr. 2010;140(9):1653-1660.
25. Smith SL, Cunningham-Sabo L. Food choice, plate waste and nutrient intake of elementary- and middle-school students participating in the US National School Lunch Program. Public Health Nutr. 2013;17(6):1255-1263.
26. Williamson DA, Han H, Johnson WD, Martin CK, Newton RL. Modification of the school cafeteria environment can impact childhood nutrition: Results from the Wise Mind and LA Health studies. Appetite. 2013;61(1):77-84.
27. Bontrager Yoder AB, Liebhart JL, McCarty DJ, et al. Farm to elementary school programming increases access to fruits and vegetables and increases their consumption among those with low intake. J Nutr Educ Behav. 2014;46(5):341-349.
28. Bontrager Yoder AB, Schoeller DA. Fruits and vegetables displace, but do not decrease, total energy in school lunches. Child Obes. 2014;10(4):357-364.
29. Hubbard KL, Bandini LG, Folta C, et al. Impact of a Smarter Lunchroom intervention on food selection and consumption among adolescents and young adults with intellectual and developmental disabilities in a residential school setting. Public Health Nutr. 2015;18(2):361-371.
30. Alaimo K, Carlson JJ, Pfeiffer KA, et al.
Project FIT: A school, community and social marketing intervention improves healthy eating among low-income elementary school children. J Community Health. 2015;40(4):815-826.
31. Bontrager Yoder AB, Foecke LL, Schoeller DA. Factors affecting fruit and vegetable school lunch waste in Wisconsin elementary schools participating in Farm to School programs. Public Health Nutr. 2015;18(15):2855-2863.
32. Monlezun DJ, Ly D, Rolfsen M, et al. Digital photography assessment of 1,750 elementary and middle school student lunch meals demonstrates improved nutrition with increased exposure to hands-on cooking and gardening classes. J Med Pers. 2015;13(2):129-134.
33. Jansen GR, Harper JM. Consumption and plate waste of menu items served in the National School Lunch Program. J Am Diet Assoc. 1978;73(4):395-400.
34. Davidson FRR. Critical factors for school lunch acceptance in Washington, D.C. Ecol Food Nutr. 1979;8(1):3-9.
35. Comstock EM, Symington LE. Distributions of serving sizes and plate waste in school lunches: Implications for measurement. J Am Diet Assoc. 1982;81(4):413-422.
36. Getlinger MJ, Laughlin VT, Bell E, Akre C, Arjmandi BH. Food waste is reduced when elementary-school children have recess before lunch. J Am Diet Assoc. 1996;96(9):906-908.
37. Whatley JE, Donnelly JE, Jacobsen DJ, Hill JO, Carlson MK. Energy and macronutrient consumption of elementary school children served modified lower fat and sodium lunches or standard higher fat and sodium lunches. J Am Coll Nutr. 1996;15(6):602-607.
38. Adams MA, Pelletier RL, Zive MM, Sallis JF. Salad bars and fruit and vegetable consumption in elementary schools: A plate waste study. J Am Diet Assoc. 2005;105(11):1789-1792.
39. Toma A, Omary MB, Marquart LF, et al. Children’s acceptance, nutritional, and instrumental evaluations of whole grain and soluble fiber enriched foods. J Food Sci. 2009;74(5):H139-H146.
40.
Hoffman JA, Franko DL, Thompson DR, Power TJ, Stallings VA. Longitudinal behavioral effects of a school-based fruit and vegetable promotion program. J Pediatr Psychol. 2010;35(1):61-71.
41. Lazor K, Chapman N, Levine E. Soy goes to school: Acceptance of healthful, vegetarian options in Maryland middle school lunches. J Sch Health. 2010;80(4):200-206.
42. Chu L, Warren CA, Sceets CE, et al. Acceptance of two US Department of Agriculture commodity whole-grain products: A school-based study in Texas and Minnesota. J Am Diet Assoc. 2011;111(9):1380-1384.
43. Hoffman JA, Thompson DR, Franko DL, et al. Decaying behavioral effects in a randomized, multi-year fruit and vegetable intake intervention. Prev Med. 2011;52(5):370-375.
44. Cohen JFW, Smit LA, Parker E, et al. Long-term impact of a chef on school lunch consumption: Findings from a 2-year pilot study in Boston middle schools. J Acad Nutr Diet. 2012;112(6):927-933.
45. Yon BA, Johnson RK, Stickle TR. School children’s consumption of lower-calorie flavored milk: A plate waste study. J Acad Nutr Diet. 2012;112(1):132-136.
46. Cohen JFW, Richardson S, Austin SB, Economos CD, Rimm EB. School lunch waste among middle school students: Nutrients consumed and costs. Am J Prev Med. 2013;44(2):114-121.
47. Ramsay S, Safaii S, Croschere T, Branen LJ, Weist M. Kindergarteners’ entrée intake increases when served a larger entrée portion in school lunch: A quasi-experiment. J Sch Health. 2013;83(4):239-242.
48. Cohen JFW, Richardson S, Parker E, Catalano PJ, Rimm EB. Impact of the new US Department of Agriculture school meals standards on food selection, consumption and waste. Am J Prev Med. 2014;46(4):388-394.
49. Hunsberger M, McGinnis P, Smith J, Beamer BA, O’Malley J. Elementary school children’s recess schedule and dietary intake at lunch: A community-based participatory research partnership pilot study. BMC Public Health. 2014;14:156.
50. Jones BA, Madden GJ, Wengreen HJ, Aguilar SS, Desjardins EA.
Gamification of dietary decision-making in an elementary-school cafeteria. PLoS One. 2014;9(4):e93872.
51. Jones BA, Madden GJ, Wengreen HJ. The FIT Game: Preliminary evaluation of a gamification approach to increasing fruit and vegetable consumption in school. Prev Med. 2014;68:76-79.
52. Cohen JFW, Richardson SA, Cluggish SA, et al. Effects of choice architecture and chef-enhanced meals on the selection and consumption of healthier school foods: A randomized clinical trial. JAMA Pediatr. 2015;169(5):431-437.
53. Miller N, Reicks M, Redden JP, et al. Increasing portion sizes of fruits and vegetables in an elementary school lunch program can increase fruit and vegetable consumption. Appetite. 2015;91:426-430.
54. Wilke AC, Graunke RE, Cornejo C. Food waste auditing at three Florida schools. Sustainability. 2015;7(2):1370-1387.
55. Comstock EM, St Pierre RG, Mackiernan YD. Measuring individual plate waste in school lunches: Visual estimation and children’s ratings vs. actual weighing of plate waste. J Am Diet Assoc. 1981;79(3):290-296.
56. Graves K, Shannon B. Using visual plate waste measurement to assess school lunch food behavior. J Am Diet Assoc. 1983;82(2):163-165.
57. Templeton SB, Marlette MA, Panemangalore M. Competitive foods increase the intake of energy and decrease the intake of certain nutrients by adolescents consuming school lunch. J Am Diet Assoc. 2005;105(2):215-220.
58. Wallen V, Cunningham-Sabo L, Auld G, Romaniello C. Validation of a group-administered pictorial dietary recall with 9- to 11-year old children. J Nutr Educ Behav. 2011;43(1):50-54.
59. Gase LN, McCarthy WJ, Robles B, Kuo T. Student receptivity to new school meal offerings: Assessing fruit and vegetable waste among middle school students in the Los Angeles Unified School District. Prev Med. 2014;67(suppl 1):S28-S33.
60. Hanks AS, Wansink B, Just DR.
Reliability and accuracy of real-time visualization techniques for measuring school cafeteria tray waste: Validating the quarter-waste method. J Acad Nutr Diet. 2014;114(3):470-474.
61. Taylor JC, Yon BA, Johnson RK. Reliability and validity of digital imaging as a measure of school children’s fruit and vegetable consumption. J Acad Nutr Diet. 2014;114(9):1359-1366.
62. Schwartz MB, Henderson KE, Read M, Danna N, Ickovics JR. New school meal regulations increase fruit consumption and do not increase total plate waste. Child Obes. 2015;11(3):242-247.
63. Byker C, Pinard C, Yaroch A, Serrano E. New NSLP guidelines: Challenges and opportunities for nutrition education practitioners and researchers. J Nutr Educ Behav. 2013;45(6):683-968.
64. Hanks AS, Just DR, Wansink B. Smarter lunchrooms can address new school lunchroom guidelines and childhood obesity. J Pediatr. 2013;162(4):867-869.
65. Creswell JW, Plano Clark VL. Designing and Conducting Mixed Methods Research. London, UK: Sage Publications; 2007.
66. Birch LL, Fisher JO. Development of eating behaviors among children and adolescents. Pediatrics. 1998;101(2):539-549.
67. Birch L. Development of food preferences. Annu Rev Nutr. 1999;19(1):41-62.
68. Waste from School Lunches. Washington, DC: US General Accounting Office; 1996. Publication no. GAO/RCED-96-128R.
69. US Department of Agriculture. Let’s talk trash. http://www.choosemyplate.gov/lets-talk-trash. Published September 2015. Accessed March 2016.
70. Spiker ML, Hiza HA, Siddiqi SM, Neff RA. Wasted food, wasted nutrients: Nutrient loss from wasted food in the United States and comparison to gaps in dietary intake. J Acad Nutr Diet. 2017;117(7):1031-1040.
AUTHOR INFORMATION
C.
Byker Shanks is an associate professor, Department of Health and Human Development, Montana State University Food and Health Lab, Bozeman. J. Banna is an assistant professor, Department of Human Nutrition, Food, and Animal Sciences, University of Hawaii-Manoa, Manoa. E. L. Serrano is director, Department of Human Nutrition, Foods and Exercise, Virginia Family Nutrition Program, Virginia Polytechnic Institute and State University, Blacksburg.
Address correspondence to: Carmen Byker Shanks, PhD, RDN, Department of Health and Human Development, Montana State University Food and Health Lab, 960 Technology Blvd, Room 215, Bozeman, MT 59718. E-mail: cbykershanks@montana.edu
STATEMENT OF POTENTIAL CONFLICT OF INTEREST
No potential conflict of interest was reported by the authors.
FUNDING/SUPPORT
Research reported in this publication was partially supported by the National Institute of General Medical Sciences of the National Institutes of Health under award nos. P20GM103474 and 5P20GM104417. Research reported in this publication was also partially supported by the Cornell Behavioral Economics in Child Nutrition Programs (BEN) Center Grants Program under award no. 77867-10660. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health or the Cornell BEN Center.
ACKNOWLEDGEMENTS
The authors thank Lindsay Kummer, RDN; Susan Chen; Chloe Panizza, MS; Rise Morisato; Alicia Leitch, MS; Bonnie Billingsley, RDN; Allison Milodragovich, MS; and Erin Smith for their contributions to the article.
Food Waste in the National School Lunch Program 1978-2015: A Systematic Review
Leukemia Detection using Object Oriented Method
Dr. M. Rama Bai, Computer Science Engineering, Mahatma Gandhi Institute of Technology, Hyderabad, India
Sai Sreeja Alapati, Computer Science Engineering, Mahatma Gandhi Institute of Technology, Hyderabad, India
Abstract—Blood cancers affect the production and function of blood cells. In most blood cancers, the normal blood cell development process is interrupted by the uncontrolled growth of an abnormal type of blood cell. This paper presents an approach for the detection and tracking of leukemia by using image segmentation: a microscopic image of a blood sample is taken, and the cell count is extracted module by module using image processing. The purpose is to increase accuracy using three different object-wise approaches.
I. INTRODUCTION
This paper aims at identifying leukemia by using image segmentation. An image is a way of transferring information, and it contains a great deal of useful information.
Understanding an image and extracting information from it to accomplish a task is an important area of application in digital image technology, and the first step in understanding an image is image segmentation. Digital images were acquired using a digital camera connected to a light microscope. In practice, we are often not interested in all parts of the image, but only in certain areas that share the same characteristics. Image segmentation is one of the hot spots in image processing and computer vision, and it is an important basis for image recognition: based on certain criteria, it divides an input image into a number of regions of the same nature in order to extract the areas of interest. It is therefore the foundation for image analysis, feature extraction, and recognition. In this paper we use a region-based segmentation algorithm. Region-based segmentation algorithms operate iteratively by grouping together neighboring pixels that have similar values and splitting groups of pixels that are dissimilar in value. The advantage of region growing is that it usually separates connected regions with the same characteristics and provides good boundary information and segmentation results. The idea of region growing is simple and requires only a few seed points to complete; the growth criteria used during the growing process can be freely specified, and multiple criteria can be applied at the same time. Blood cancer is a disorder that affects around 1.2 million people worldwide every year. Blood cancers, or hematologic cancers, affect the production and function of blood cells. Most of these cancers start in the bone marrow, where blood is produced. Their symptoms usually come on slowly, so you might not even notice them, and some people have no symptoms at all.
Fig. 1. Normal vs.
Affected Cells Referring to the images in Fig. 1, the left image shows the blood-cell composition of a normal human being, with a high number of red blood cells, whereas the right one is of a cancer-affected human, with a high proportion of white blood cells. The types of leukemia considered here are chronic lymphocytic, hairy cell, juvenile myelomonocytic, plasma cell and T-cell prolymphocytic leukemia. Chronic Lymphocytic Leukemia (CLL) is the most common leukemia in adults. It is a type of cancer that starts in cells that become certain white blood cells (called lymphocytes) in the bone marrow. The cancer (leukemia) cells start in the bone marrow but then go into the blood. In CLL, the leukemia cells often build up slowly, and many people do not have any symptoms for at least a few years. But over time, the cells grow and spread to other parts of the body, including the lymph nodes, liver and spleen. Hairy Cell Leukemia is a rare, slow-growing cancer of the blood in which the bone marrow makes too many B cells (lymphocytes), a type of white blood cell that fights infection. These excess B cells are abnormal and look "hairy" under a microscope. As the number of leukemia cells increases, fewer healthy white blood cells, red blood cells and platelets are produced. Hairy cell leukemia affects more men than women, and it occurs most commonly in middle-aged or older adults. Juvenile Myelomonocytic Leukemia (JMML) is a rare childhood cancer that usually happens in children younger than 2 years old. In JMML, too many myelocytes and monocytes (two types of WBCs) are produced from immature blood stem cells called blasts. International Journal of Engineering Research & Technology (IJERT), ISSN: 2278-0181, http://www.ijert.org, IJERTV9IS050374 (This work is licensed under a Creative Commons Attribution 4.0 International License.) Published by www.ijert.org, Vol.
9 Issue 05, May-2020. These myelocytes, monocytes and blasts overwhelm the normal cells in the bone marrow and other organs, causing the symptoms of JMML. Plasma Cell Leukemia (PCL) is an aggressive form of multiple myeloma characterized by high levels of abnormal plasma cells circulating in the peripheral (circulating) blood. Normal plasma cells in the bone marrow produce antibodies that fight infection. T-cell Prolymphocytic Leukemia (T-PLL) is an extremely rare and typically aggressive malignancy (cancer) that is characterized by the out-of-control growth of mature T-cells (T-lymphocytes). T-cells are a type of white blood cell that protects the body from infections. T-PLL affects older adults, with a median age at diagnosis of 61 years, and it is more common in men than in women. II. LITERATURE SURVEY A. AUTOMATED LEUKEMIA DETECTION USING MICROSCOPIC IMAGES: In this paper, the automated leukemia detection system analyses the microscopic image and overcomes the drawbacks of manual methods. It extracts the required parts of the images and applies some filtering techniques. A K-means clustering approach is used for white blood cell detection. Histogram equalization and the Zack algorithm are applied for grouping white blood cells. Features such as mean, standard deviation, color, area and perimeter are calculated for leukemia detection, and an SVM is used for classification. B. IMAGE SEGMENTATION USING LOCAL THRESHOLDING AND YCbCr COLOR SPACE: In this work, the segmentation is performed in the YCbCr color space. The segmented image will have two different colors, black and white, and for this reason the segmentation is done using local threshold values for the Cb component of YCbCr. A mask is used to determine the neighbors of each pixel in the image. The mask also determines an operation to be applied to the neighborhood of every pixel in the image.
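A minimal sketch of this kind of mask-based neighborhood thresholding, assuming a 3×3 mean mask applied to a hypothetical Cb channel (the actual mask and operation used in the cited work are not specified here), with made-up data:

```python
def local_threshold(channel, offset=0):
    """Binarize a 2D channel: pixel -> 1 if above its local 3x3 mean threshold."""
    h, w = len(channel), len(channel[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # Gather the 3x3 neighborhood, clamped at the image borders.
            vals = [channel[j][i]
                    for j in range(max(0, y - 1), min(h, y + 2))
                    for i in range(max(0, x - 1), min(w, x + 2))]
            # The local threshold is different for each pixel location.
            t = sum(vals) / len(vals) + offset
            out[y][x] = 1 if channel[y][x] > t else 0
    return out

# Hypothetical 3x4 "Cb" channel with a bright region in one corner.
cb = [
    [10, 10, 10, 200],
    [10, 10, 200, 200],
    [10, 200, 200, 200],
]
print(local_threshold(cb))
```

Because the threshold is recomputed per pixel from its neighborhood, the same global intensity can be foreground in one region and background in another.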
The mask and the operations are used to determine the local threshold for every pixel in the image, so for each pixel location the threshold will be different. This value is then compared with the color value of the pixel. III. MODULES The paper consists of 5 modules. These 5 modules are: Module A: Image module Module B: RGB color space module Module C: CMYK color space module Module D: YCbCr color space module Module E: Result calculation module Fig. 2 depicts the modules of our paper: a microscopic image is given as input and processed through the RGB module, which reports the red blood cells and white blood cells separately; the result is then calculated from the percentage difference between the RBC and WBC counts, concluding whether leukemia exists or the sample is normal. The picture is next sent to the CMYK module and later to the YCbCr module, where the same process repeats. After the process, the output can be exported and saved if needed. Fig. 2. Architecture diagram A. IMAGE MODULE: A microscopic image of blood smears is taken from the ALL-IDB database and given as input to our system through the user interface; the system reads the image and then forwards it to the next module. B. RGB COLOR SPACE MODULE: An RGB color space is any additive color space based on the RGB color model.
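As a rough illustration, the module flow of Fig. 2 can be sketched as follows. All names and counts here are hypothetical, and the 20-point decision rule is the one given later in the result calculation module; each color-space module is assumed to report an (RBC, WBC) count for the image:

```python
def classify(rbc, wbc):
    """Flag potential leukemia when WBC% exceeds RBC% by more than 20 points."""
    total = rbc + wbc  # assumption: total detected cells = RBC + WBC
    wbc_pct = 100.0 * wbc / total
    rbc_pct = 100.0 * rbc / total
    return "leukemia" if wbc_pct - rbc_pct > 20 else "normal"

def majority_vote(module_counts):
    """Combine the RGB, CMYK and YCbCr module results by majority."""
    votes = [classify(rbc, wbc) for rbc, wbc in module_counts]
    return max(set(votes), key=votes.count)

# Made-up (RBC, WBC) counts from the three color-space modules for one image.
print(majority_vote([(30, 70), (35, 65), (60, 40)]))  # leukemia
```

Running the same decision over three independent color spaces and taking the majority is one way to read the paper's claim that three approaches increase accuracy.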
RGB color space is defined by the three chromaticities of the red, green and blue additive primaries, and can produce any chromaticity within the triangle defined by those primary colors. The complete specification of an RGB color space also requires a white-point chromaticity and a gamma-correction curve. As of 2007, sRGB is by far the most used RGB color space. RGB is an abbreviation for red, green, blue. RGB is a convenient color model for computer graphics because the human visual system works in a way that is similar, though not quite identical, to an RGB color space. The most commonly used RGB color spaces are sRGB and Adobe RGB (which has a significantly larger gamut). Adobe has also developed another color space called Adobe Wide Gamut RGB, which is even larger, at the expense of gamut density. C. CMYK COLOR SPACE MODULE: The CMYK color model (process color, four color) is a subtractive color model, used in color printing, and is also used to describe the printing process itself. CMYK refers to the four inks used in some color printing: cyan, magenta, yellow and key (black). The CMYK model works by partially or entirely masking colors on a lighter, usually white, background. The ink reduces the light that would otherwise be reflected. Such a model is called subtractive because inks "subtract" the colors red, green and blue from white light: white light minus red leaves cyan, white light minus green leaves magenta, and white light minus blue leaves yellow. Fig. 3. Color spaces of RGB and CMYK along with the YCbCr color space.
Figure 3 shows RGB and CMYK in a broad view of the visible spectrum: on the left graph of the color space, the CMYK gamut appears as the smallest region around the base primaries, RGB occupies the second, larger region, and the visible spectrum extends well beyond both, which implies that the color we see reproduced is not the full actual color of the subject. On the right side, YCbCr is the color space of digital photography and videography, which can display almost all colors. D. YCBCR COLOR SPACE MODULE: YCbCr, also written as YCBCR or Y'CBCR, is a family of color spaces used as a part of the color image pipeline in video and digital photography systems. Y' is the luma component and CB and CR are the blue-difference and red-difference chroma components. Y' (with prime) is distinguished from Y, which is luminance: light intensity is nonlinearly encoded based on gamma-corrected RGB primaries. Y'CbCr color spaces are defined by a mathematical coordinate transformation from an associated RGB color space. If the underlying RGB color space is absolute, the Y'CbCr color space is an absolute color space as well; conversely, if the RGB space is ill-defined, so is Y'CbCr. Y' stands for the luma component (the brightness) and Cb and Cr are the chrominance (color) components; luminance is denoted by Y and luma by Y', where the prime symbol (') denotes gamma compression, "luminance" meaning physical linear-space brightness and "luma" meaning (nonlinear) perceptual brightness. Pixel-Based Segmentation: Image segmentation is the division of an image into regions or categories, which correspond to different objects or parts of objects. Every pixel in an image is allocated to one of a number of these categories. A good segmentation is typically one in which: 1. Pixels in the same category have similar greyscale or multivariate values and form a connected region. 2. Neighboring pixels which are in different categories have dissimilar values. E.
RESULT CALCULATION MODULE: From the previous color space modules, we get the counts of red blood cells and white blood cells individually. Then the percentage of each, relative to the total number of blood cells detected in that particular image, is calculated: RBC% = (RBC ÷ total cells) × 100 and WBC% = (WBC ÷ total cells) × 100. After the percentage calculation, a case where WBC% exceeds RBC% by more than 20 points is taken as potential leukemia; otherwise (WBC% − RBC% ≤ 20) the sample is declared normal. In this way the calculation is done to track and conclude leukemia occurrence from a microscopic image of blood cells. Export Data: After the process is successfully done, we get the output on the user interface, and we would lose it if we tried another blood sample; therefore, we have an "Export" button with which we can save the figure together with the data set obtained as the result and continue working with further files, without any need to note down the values or take a snapshot. IV. RESULTS The original picture used for execution is a sample image from the ALL-IDB database showing plasma cell leukemia, and its execution in three modules. Any type of cancer from the aforementioned categories can be tested in the same manner. No preprocessing of the image is required if it is obtained from the ALL-IDB database; otherwise it needs to be adjusted with sufficient exposure and highlights for better results. Fig. 4. Plasma Cell Leukemia Sample For the results to be checked, first the code is executed, where we acquire all the possible options. The procedure follows as shown in the figures. Fig. 5.
Output User Interface Screen Fig. 5 shows the menu, from which we choose the input image from the system and upload it to the system we implemented. Fig. 6. Output obtained after image processing Fig. 6 shows the execution clearly: the RGB, CMYK and YCbCr modules are processed in turn, and the result is concluded by the majority over all three color spaces. V. CONCLUSION The paper has presented an approach for the detection and tracking of leukemia. The proposed system is able to segment the given microscopic picture with three different approaches to increase the accuracy. With the cell counts obtained for both red blood and white blood cells, we calculate the possibility of blood cancer. This method of detection stands strong, as the model is tested for all five commonly occurring cancers and the testing can be digitalized. It is effective, and the cost is far lower than that of traditional blood tests. REFERENCES [1] Nimesh Patel, Ashutosh Mishra. Automated leukemia detection using microscopic images. Sci. 2015 (12-16). [2] Amit Kumar Mandal, Dilip Kumar Baruah. Image segmentation using local thresholding and YCbCr colour space. J Appl Sci 2010. [3] Sinha N, Ramakrishnan AG. Automation of differential blood count. In: Chockalingam A, editor. Proceedings of the conference on convergent technologies for the Asia Pacific region, October 5-17. Taj Residency Bangalore: IEEE Publisher; 2003. p. [4] Kovalev VA, Grigoriev AY, Ahn H. Robust recognition of white blood cell images. In: Kavanaugh ME, Werner B, editors. Proceedings of the 13th international conference on pattern recognition, August 25-29. Vienna, Austria: IEEE Publisher; 1996. p. [5] Madhloom HT, Kareem. An automated white blood cell nucleus localisation and segmentation using image arithmetic and automated threshold. J Appl Sci 2010. [6] Nobuhito Matsushiro. Color image information processing, 14-16 July. Boston, MA, USA: IEEE Publisher; 2004. p. [7] Halim NHA, Mashor MY, Hassan R.
Automatic blasts counting for acute leukemia based on blood samples. Int J Res Rev Comput Sci 2011; 2(August (4)). [8] Mohapatra S, Patra D, Satpathy S. An ensemble classifier system for early diagnosis of acute lymphoblastic leukemia in blood microscopic images. IEEE Publisher; 2008. [9] Donida Labati R, Piuri V, Scotti F. ALL-IDB: the acute lymphoblastic leukemia image database for image processing. In: Macq Benoit, Schelkens Peter, editors. Proceedings of the 18th IEEE ICIP international conference on image processing. work_et7dktxgrze6hhzj5toss2tlyi ---- Hindawi Publishing Corporation EURASIP Journal on Image and Video Processing Volume 2008, Article ID 356781, 3 pages doi:10.1155/2008/356781 Editorial Color in Image and Video Processing Alain Trémeau,1 Shoji Tominaga,2 and Konstantinos N. Plataniotis3 1 Laboratoire LIGIV EA 3070, Université Jean Monnet, 42000 Saint Étienne, France 2 Department of Information and Image Sciences, Chiba University, Chiba 263-8522, Japan 3 Department of Electrical and Computer Engineering (ECE), University of Toronto, Toronto, ON, Canada M5S 3G4 Correspondence should be addressed to Alain Trémeau, alain.tremeau@univ-st-etienne.fr Received 30 April 2008; Accepted 30 April 2008 Copyright © 2008 Alain Trémeau et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. 1.
BACKGROUND AND MOTIVATION Color perception is of paramount importance in applications such as digital imaging, multimedia systems, visual communications, computer vision, entertainment, and consumer electronics. Color is essential in digital cinematography, modern consumer electronics solutions, and digital photography systems such as digital cameras, video displays, video-enabled cellular phones, and printing solutions. In these applications, compression- and transmission-based algorithms as well as color management algorithms provide the foundation for cost-effective, seamless processing of visual information through the processing pipeline. Color plays a significant role in many pattern recognition and multimedia systems, where color-based feature extraction and color segmentation are used to detect and classify objects in application areas such as biomedical image processing and geomatics. During the last fifteen years, important contributions were made in the field of color image processing due to the utilization of color vision, colorimetry, and color appearance. A number of special issues, including survey papers that review the state of the art in the area of color image processing, have been published in the last few years. This special issue on color image and video processing aims to fill the gap in the existing literature and to assist researchers and practitioners who work in the area. Simple inspection of the pertinent scientific literature reveals a significant increase in the number of papers devoted to color image processing in the image processing community. The motivation behind this issue is the desire to provide both a comprehensive overview of the most recent trends and an outline of possible future research directions in color image and video processing. The special issue discusses problems currently under research by the color image processing community and outlines methodologies and approaches for their solution.
It focuses on the most promising research areas in color imaging science and covers research topics which, in the opinion of the editors at least, will be the main focus of research attention in the years to come. This issue is intended for graduate students, researchers, and practitioners who have a good knowledge of color science and digital imaging and who want to know and understand the most recent advances and research developments in digital color imaging. 2. SCANNING THE SPECIAL ISSUE Accepted papers cover both the theoretical and practical aspects of the digital color imaging pipeline, including color image acquisition, representation, description, analysis, and processing. The special issue opens with a survey of the most recent trends and future research directions: "Color in image and video processing (most recent trends and future research directions)" by A. Trémeau et al. It presents the most recent trends as well as the state of the art, with a broad survey of the relevant literature, in the main active research areas in color imaging. This survey focuses on the most promising research areas in color imaging science. Lastly, it addresses the research areas which still need more elaboration and which are the next and future perspectives of color in image and video processing. The special issue continues with three articles on color image analysis and description. In a paper titled "A colour topographic map based on the dichromatic model," M. Gouiffès and B. Zavidovique introduce a novel representation of color topographic maps. The authors propose to design color lines along each dominant color vector, from the body reflection. Topographic maps are an interesting alternative to edge-based techniques commonly used in computer vision applications. Indeed, unlike edges, level lines are closed and less sensitive to external parameters.
Topographic maps provide a compact geometrical representation of images and they are, to some extent, robust to contrast changes. In a paper entitled "Demosaicking based on optimization and projection in different frequency bands," O. Omer and T. Tanaka introduce an iterative demosaicking algorithm capable of reconstructing a full-color image using single-color filter array data. The missing color values are interpolated using optimization and projection in different frequency bands. A filter bank is utilized in order to decompose the initially interpolated image into low-frequency and high-frequency bands. In the low-frequency band, a quadratic cost function is minimized using the fact that the low-frequency chrominance components vary slowly within an object region. Conversely, in the high-frequency bands, each initially interpolated channel is subsampled into subimages, and then the high-frequency components of the unknown values are projected onto the high-frequency components of the corresponding known values. In their work entitled "Color image coding by colorization approach," T. Horiuchi and S. Tominaga propose a new color image coding scheme called "colorization image coding." The scheme can colorize a gray-scale (monochrome) image given only a small number of color pixels. In the proposed solution, the luminance component is firstly separated from the input color image. Then, a small number of color seeds are selected as chrominance information. The luminance image is coded using a lossy coding technique, while the chrominance information is stored as color seeds. Decoding is performed by simply colorizing the reconstructed luminance image using the color seeds. The special issue continues with two articles on color image analysis and description, especially on image quality. In the paper entitled "Improving the quality of colour colonoscopy videos," R. Dahyot et al.
present a method for aligning successively captured red, green, and blue channels of colonoscopy video. The paper proposes a solution to recurrent misalignment of color channels in colonoscopy videos. The color channels are first equalized based on cumulative histograms and processed using robust camera motion estimation and compensation for improving image quality. In another paper entitled "Robust color image super-resolution: an adaptive M-estimation framework," N. A. El-Yamany and P. E. Papamichalis introduce a new color image superresolution algorithm utilizing an adaptive, robust M-estimation framework. Using a robust error norm as the objective function, the estimation process is adapted to each of the low-resolution frames, allowing for measurement outlier suppression. As a result, the method produces super-resolved frames with crisp details and no color artefacts, without regularization. The second part of the special issue is devoted to color image and video segmentation and color image description. This part opens with a sequence of three papers. O. Losson et al. in their paper "Fuzzy mode enhancement and detection for color image segmentation" present a new and flexible color image segmentation method based on pixel classification. Pixel classes are constructed by detecting the modes of a spatial-color compactness function, which takes into account both the distribution of colors in the color space as well as their spatial location in the image plane. Fuzzy morphological operators are utilized for reliable and cost-effective mode detection. In a paper entitled "Colour appearance based approaches for segmentation of traffic signs," X. Gao et al. show that the CIECAMs color appearance model can be efficiently used to segment traffic sign images captured under variable weather conditions, contributing towards a reliable, automated traffic system.
In the last paper of this sequence entitled "A robust approach to segment desired object based on salient colors," J. Rugna and H. Konik combine existing techniques such as salient color extraction, a peak-finding algorithm, and classical K-means clustering in order to devise a cost-effective segmentation solution. The effectiveness of the proposed algorithm is demonstrated by observing its performance in tracking user-selected objects. The special issue continues with three additional articles on color image and video segmentation and image description. K. Lee et al., in their paper entitled "Color-based image retrieval using perceptually modified Hausdorff distance," present a color image dissimilarity measure based on a perceptually modified Hausdorff distance. The new measure allows for the development of cluster-based representation that can deal efficiently with partial matching in image retrieval applications. In a paper entitled "Unsupervised video shot detection using clustering ensemble with a color global scale-invariant feature transform descriptor," Y. Chang et al. propose a color global scale-invariant feature transform. The method quantizes a color image, representing it with a small number of color indices, and it then transforms the image by determining scale, rotation, and view-point invariant local features. Clustering ensembles focusing on knowledge reuse are then applied to obtain better clustering results than using single-clustering methods for video shot description and detection. In the last paper of this segment, B. Ionescu et al. present a two-step method for detecting and analyzing the color content of animated movies in their paper entitled "A fuzzy color-based approach for understanding animated movies content in the indexing task." Several global statistical parameters are computed in order to effectively characterize and represent the color content of a motion picture.
Subsequently, symbolic features are computed through the utilization of a fuzzy logic system. The third part of the special issue is devoted to the important problem of color vision description. In the following two papers, S. Yang et al. present a color compensation scheme which enhances the visual perception of people with color vision deficiency (CVD). Particular emphasis is given to the development of a solution capable of assisting individuals suffering from anomalous trichromacy. Compensated color is realized by determining the spectral cone sensitivities of the human eye and the spectral emission functions of the display device. The fundamentals of the proposed color compensation solution are summarized in the paper entitled "Quantification and standardized description of color vision deficiency caused by anomalous trichromats—part I: simulation and measurement." The spectral sensitivity of anomalous cones is modelled according to the deficiency degree of the standardized CVD description in the paper entitled "Quantification and standardized description of color vision deficiency caused by anomalous trichromats—part II: modeling and color compensation." This is achieved by examining the error score of a computerized hue test (CHT), developed in part I. Shifts at each individual wavelength are modelled using the peak sensitivity and a shape model obtained using the Smith/Pokorny anomalous cone models. ACKNOWLEDGMENTS The guest editors thank all who have helped to make this special issue possible, especially the authors and the reviewers of the articles. They thank the Journal Manager for her help and support in managing the issue, and finally gratefully acknowledge Jean Luc Dugelay, Editor-in-Chief of EURASIP Journal on Image and Video Processing, for giving them the opportunity to edit this special issue. Alain Trémeau Shoji Tominaga Konstantinos N.
Plataniotis work_etfog2unq5fu5n44yfjyo2qxza ---- Serbian Dental Journal, vol. 61, No 3, 2014 INFORMATIVE ARTICLE / INFORMATIVNI RAD UDC: 616.314-07:77 DOI: 10.2298/SGS1403149D Depth of Field in Dental Photography and Methods for Its Control Zlatomir Dukić Faculty of Dental Medicine, University of Belgrade, Belgrade, Serbia SUMMARY The emergence of photography was a milestone in the development of society, making life richer and more comprehensive. Dental photography, as a part of clinical medical photography, serves primarily as a document, but also as a tool for educating students and for the continuing education of dentists. The aim of this paper is to present possible applications of traditional and digital photography in dentistry at the present stage of technological development. Dental photography requires some knowledge and equipment to obtain quality images of intraoral structures. The control of depth of field (DOF) is one of the important factors for successful dental photography. DOF can be controlled by changing the relative aperture, by using special lenses with decentring to achieve the effect of Scheimpflug's principle, and by using specialized software that combines a series of images with selective focus into a composite picture. Of special significance in dental photography are documentation and the ability to record maximum information under conditions that can be repeated. Standardization of conditions during recording of the oral cavity, together with adequate storage and archiving of dental photographs, is also an important prerequisite for quality and useful photography.
Keywords: dental photography; depth of field; relative aperture; hyperfocal distance; Scheimpflug's principle INTRODUCTION It would be hard to imagine any aspect of our existence not related to photography. Its history of almost two centuries represents a milestone in the development of society, making life richer and more comprehensive. Photography has influenced consciousness to the point that the expression "one picture is worth a thousand words" is accepted as an obvious fact. Photography has now penetrated all aspects of life, providing new facts and knowledge in the fields of science, medicine, industry, communications and the arts. Photographers are present everywhere around us, making photos of everything and everyone, and the number of pictures taken per day exceeds the current number of people on Earth. This obsession with photography is based on its magic, attractive to all people: defiance of time and transience. Scenes recorded in photographs remain as lasting proof of existence. The camera has therefore become one of the most widely used devices among a great number of people, for both amateur and professional purposes [1]. Dental photography, as part of clinical medical photography, has a role primarily as a document, but also in the education of students and the continuing education of dentists. Certainly, the role of photography in the marketing of dental equipment and materials should not be forgotten. Dental photography has become a powerful tool in presenting a treatment plan to patients. Good photographic presentation and effective communication make it easier for patients to understand the procedure and the need for it. A rich and well-maintained photo archive allows clinicians to present to patients similar, previously solved cases, but also demonstrates a high level of professional competence [2].
Today's photography offers two alternatives: 1) classical photography, which uses film as the image carrier and requires laboratory processing; and 2) digital photography, where an image is created electronically and is immediately ready for reproduction. Digital photography, at today's stage of technological development, has reached all the characteristics of classical photography (quantity of visual information in a single image, contrast and color reproduction), while in some respects (storage capacity per unit) it has even surpassed it. Unlike the classical image, which becomes real after film processing, the digital image is encoded and exists only in the form of ones and zeros in electronic memory or on magnetic or optical media. To make a digital image visible, additional devices must be used: the camera itself, which has a small LCD screen for viewing already captured images, or a TV, video projector or computer monitor attached to the camera. It should be noted that in all three cases the image is shown in reduced form, because none of these displays can reproduce a full-resolution photo. To be seen in its full resolution, the image must be printed on paper in an adequate format (a 6-Mpx photo should be enlarged to 18×24 cm). ADVANTAGES OF DIGITAL PHOTOGRAPHY (Address for correspondence: Zlatomir DUKIĆ, TV Studio of the Faculty of Dental Medicine, Rankeova 4, 11000 Belgrade, Serbia; zlatko.dukic@stomf.bg.ac.rs) The advantage of digital photography is not in its photographic quality but in its immediate availability to the user,
Any keen photographer, classical or digital, needs in addition to adequate equipment some basic knowledge of imaging technology: the proper use of the equipment, the basics of geometrical optics, illumination, photographic materials and processes, and a basic understanding of contour, shape, color, texture and composition.

The camera is a mechanical-opto-electronic device for recording real static images on photographic film or an electronic sensor. For dental photography the most appropriate camera is a small-format 35 mm camera with a reflex viewfinder and interchangeable lenses (for classical photography), or a digital camera of the same type with a resolution of 6 megapixels or more. These devices are relatively small, easy to use and adaptable to various kinds of recording. The quality of the images allows enlargement up to A4 format as well as use in all types of presentations.

The basic parts of the camera are the housing, viewfinder, shutter and lens. The housing connects all parts into a whole and provides a light-tight space for the film or photosensitive sensor. The viewfinder is used to select the desired framing and to set correct focus. The shutter lets light pass through the lens at the moment of exposure. The lens is an optical-mechanical assembly responsible for forming a real image of the object; its design and characteristics directly influence the appearance and quality of the photograph (Figure 1).

The basic characteristics of a lens are described by two parameters: focal length and luminous intensity (lens speed). Focal length determines the size of the formed image and thus the angle of view: the shorter the focal length, the smaller the image and the wider the angle of view [3]. The luminous intensity of a lens is the quotient of its focal length and the diameter of the largest diaphragm aperture (iris).
f = F / D

D = diameter at maximum aperture; F = focal length; f = luminous intensity (f-number)

As this number becomes smaller, more light can pass through the lens at its maximum diaphragm aperture, and such lenses are considered fast. The diaphragm diameter is variable and regulates the illumination of the surface of the focal plane so that correct exposure values are obtained. If the aperture diameter is reduced by a factor of √2, the aperture area is halved, and the brightness of the focal plane is therefore also cut in half. For that reason, the scale on the diaphragm control ring is a geometrical progression with quotient √2 (1.4, 2, 2.8, 4, 5.6, 8, 11, 16, 22, ...). These values of the diaphragm aperture are called the relative aperture of the lens and represent the same illumination of the focal plane for lenses of different focal lengths (the greater the focal length, the greater the geometric diaphragm aperture).

Besides its role in regulating the illumination of the focal plane, the relative aperture also affects the field of focus along the lens axis. For each value of the relative aperture there is a hyperfocal distance: when the lens is focused at this distance, all objects are reproduced sharply within a field that extends from half the hyperfocal distance to very distant objects, whose distance from the lens is denoted by the sign ∞.

H = F² / (f × Cc)

H = hyperfocal distance; F = focal length; f = relative aperture; Cc = circle of confusion (0.025 mm)

The ratio between the size of the real object and the size of the formed image is called the scale of reproduction. Changing the distance between the lens and the focal plane according to this scale is achieved in practice by moving the lens, because the place where the image is formed cannot change relative to the camera body; it is determined by the position of the film strip or photosensitive chip. This movement is called extension, and with a standard lens it is at most 1/7 of the focal length.
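The two relations above lend themselves to a quick numerical check. The sketch below reproduces the √2 aperture progression and computes a hyperfocal distance; the 100 mm focal length and f/11 are illustrative values, not figures from the text, while the 0.025 mm circle of confusion is the one defined here.

```python
import math

def f_stop_scale(start=1.4, count=9):
    """Aperture scale: a geometric progression with quotient sqrt(2).
    Engraved stops are conventionally rounded (8, 11, 16, 22)."""
    return [round(start * math.sqrt(2) ** i, 1) for i in range(count)]

def hyperfocal_mm(focal_mm, f_number, coc_mm=0.025):
    """Hyperfocal distance H = F^2 / (f * Cc), in millimetres."""
    return focal_mm ** 2 / (f_number * coc_mm)

print(f_stop_scale())          # 1.4, 2.0, 2.8, 4.0, 5.6, ...
print(hyperfocal_mm(100, 11))  # about 36364 mm, i.e. roughly 36 m
```

Focusing the (assumed) 100 mm lens at about 36 m at f/11 would render everything from roughly 18 m to infinity acceptably sharp.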
Lenses that have greater extension, up to a full focal length, are called macro lenses and are used for photographing close objects. Dental photography belongs to this category, with photos typically taken at scales between 1:10 and 1:1 [4].

IMAGE SHARPNESS

One of the main problems in macro photography is the reduced depth of focus, which determines the zone of sharpness along the lens axis. An image is considered sharp if, enlarged to 20×25 cm and observed from a distance of 25 cm, details of 0.2 mm are visible. On the original negative or chip this value must be divided by the enlargement factor, in this case 8 (giving 40 lines per mm, or 0.025 mm). This value is called the circle of confusion. If details of the recorded object are reproduced with a greater dimension, the enlarged image is perceived as blurred.

Three main parameters influence the field of depth (FOD): the scale of reproduction (i.e. the distance to the observed object), the lens focal length and the relative aperture. The focal length is fixed by the construction of the lens, and the distance to the object is dictated by the desired framing, so changing the relative aperture remains the only option for controlling FOD. The smaller the relative aperture, the greater the FOD (Figure 2) [5].

FOD can be calculated by the following formulas:

FOD = DF - DN

DF = (H × S) / (H - (S - F))
DN = (H × S) / (H + (S - F))

DF = distance from the camera to the furthest point of sharpness; DN = distance from the camera to the nearest point of sharpness; H = hyperfocal distance; S = distance from the camera to the observed object; F = focal length of the lens

There are FOD tables for lenses of various focal lengths. When controlling FOD by changing the relative aperture, it is important to understand that reducing the diaphragm aperture affects the exposure.

Figure 1. Cross section of a modern macro lens
Stomatološki glasnik Srbije. 2014;61(3):149-156
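The FOD formulas, together with the exposure trade-off they imply, can be wrapped into a small calculator. The 100 mm lens, f/11 and 0.5 m working distance below are illustrative values for the dental macro range, not figures from the text:

```python
def hyperfocal(F, f_number, Cc=0.025):
    """Hyperfocal distance H = F^2 / (f * Cc); all lengths in mm."""
    return F ** 2 / (f_number * Cc)

def field_of_depth(F, f_number, S, Cc=0.025):
    """Near limit DN, far limit DF and total FOD for subject distance S."""
    H = hyperfocal(F, f_number, Cc)
    DN = H * S / (H + (S - F))
    DF = H * S / (H - (S - F))
    return DN, DF, DF - DN

def light_ratio(n_wide, n_narrow):
    """How many times less light the narrower aperture admits."""
    return (n_narrow / n_wide) ** 2

DN, DF, FOD = field_of_depth(F=100, f_number=11, S=500)
print(f"sharp from {DN:.0f} mm to {DF:.0f} mm, FOD = {FOD:.1f} mm")
print(light_ratio(2, 32), "times less light at f/32 than at f/2")
```

At this working distance even f/11 leaves only about a centimetre of sharpness, which is why macro work forces small apertures and, in turn, strong lighting or long exposures.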
If the relative aperture of a lens with luminous intensity 1:2 is reduced to 32, the lens passes 256 times less light than at its maximum aperture. The exposure time must therefore be extended by the same factor, or 256 times more powerful lighting must be used. In some cases this can be a limitation: for a film or chip sensitivity of ISO 200, an exposure of 1/60 s (the limit for hand-held shooting) and a relative aperture of 32, an illumination of 100,000 lux is required, equal to the illumination of an object in direct sunlight (Figure 3).

In addition to this method of controlling FOD, which can be applied with every camera and every lens, there are others that require the use of special photographic techniques and equipment.

FIELD OF DEPTH CONTROL

This control of FOD is applied only when photographing objects whose surface is not parallel to the focal plane, which means that the two planes intersect somewhere in space. If the lens is rotated around an axis parallel to the recorded plane, so that the rear principal plane of the lens passes through the line where the recorded plane and the focal plane intersect, the entire field will be reproduced sharply in the focal plane regardless of the relative aperture of the lens. This is Scheimpflug’s principle, known for more than a century (Figure 4).

To apply this principle in practice, either large-format cameras that use film as the image carrier, or special lenses for small-format cameras, classical or digital, are needed. Given that large-format cameras are rarely used and are not suitable for close-up work, the use of lenses with decentring capability is much more appropriate. In addition to rotation around an axis lying in the rear principal plane, these lenses allow translational displacement of the optical axis in one direction, in both senses.
This results in correction of the geometrical perspective in the focal plane: parallel lines lying in a plane at an angle to the focal plane remain parallel regardless of their distance. The primary purpose of these lenses is architectural photography, and some have no extension for work in the macro range [5]. If their extension is increased with intermediate rings or bellows, they behave as macro lenses.

Another way to increase the scale of reproduction is to use supplementary close-up lenses. If a converging lens is added in front of the main lens, the overall focal length decreases, bringing the focal plane closer to the lens; this allows close objects to be captured at an increased scale. It also increases the luminous intensity of the lens, so there is no need to correct the exposure [4].

Figure 2. Change of the field of depth depending on the relative aperture
Figure 3. Impact of the relative aperture of the lens on the length of exposure

The third method of controlling FOD is used mainly for photographing very small objects at increased magnification (1:1 or more). In this case the object must be stationary during shooting, and the camera is mounted on a tripod with a small movable sled. After setting the lights and determining the optimum diaphragm aperture, the FOD is estimated. A series of photographs is then taken, sliding the camera successively by the estimated FOD, which yields a set of photos with different parts of the object in focus. These images are processed by specialized software that uses only the focused segment of each photo and creates a composite photograph of the entire object, now reproduced in full focus. These programs are user friendly, and in addition to commercial packages such as Helicon Focus (Helicon Soft Ltd.),
there are free, open-source programs such as CombineZM by Alan Hadley, which can be adapted to the individual needs of authors familiar with the coding process (Figure 5).

The main goal of dental photography is documentation. This means that maximum information should be recorded under conditions that can be repeated; only then can an image be regarded as a proper tool for gathering information. It is therefore necessary to standardize both the choice of equipment and its use: the lighting setup and its position relative to the patient [6].

Particular attention should be paid to FOD control, one of the basic parameters of a successful image, alongside proper exposure and color reproduction. Knowledge of the basics of geometrical optics and of the principles of photographic image formation is necessary, particularly in macro photography.

The most appropriate camera for this use is a single-lens reflex with interchangeable lenses, either with classical 35 mm film or digital with a resolution of at least 8 Mpx. The lens should have a focal length of about 100 mm with a macro range up to a scale of 1:1. The use of lenses with decentring capability is also recommended, for control of geometrical perspective and FOD. Adequate lighting is an electronic flash unit mounted on the front of the lens, consisting of two lamps with reflectors that can rotate about the lens axis. A ring flash or an LED light source is necessary for intraoral images.

There are five basic views that illustrate the intraoral status of the patient: frontal, two lateral and two occlusal, for which the use of retractors and mirrors is required. Additionally, there are six extraoral views: two lateral, frontal, frontal with the head tilted slightly backward, and two fronto-lateral at 45° to the sagittal plane. This standardization of photographic conditions allows photographs to be compared even if taken after long time intervals and by different photographers.
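The stacking procedure described above can be sketched in a few lines: score each pixel's local sharpness and take every pixel from the frame where it scores highest. This is a toy illustration using a simple Laplacian contrast measure, not the algorithm of Helicon Focus or CombineZM, assuming grayscale frames given as nested lists:

```python
def sharpness(img, x, y):
    """Local contrast: absolute Laplacian at (x, y); 0 on the border."""
    h, w = len(img), len(img[0])
    if not (0 < y < h - 1 and 0 < x < w - 1):
        return 0.0
    return abs(4 * img[y][x] - img[y - 1][x] - img[y + 1][x]
               - img[y][x - 1] - img[y][x + 1])

def focus_stack(frames):
    """Compose one image, taking each pixel from its sharpest frame."""
    h, w = len(frames[0]), len(frames[0][0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            best = max(frames, key=lambda f: sharpness(f, x, y))
            out[y][x] = best[y][x]
    return out

a = [[0, 0, 0], [0, 9, 0], [0, 0, 0]]  # one sharp detail in the center
b = [[5, 5, 5], [5, 5, 5], [5, 5, 5]]  # a flat, defocused frame
print(focus_stack([b, a])[1][1])        # -> 9, taken from the sharper frame
```

In practice the frames must first be aligned and the per-pixel sharpness map smoothed to avoid artifacts; dedicated stacking software handles both steps.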
Only with such standardization can dental photography serve as a valuable aid to documentation. To be easily accessible, dental images also require adequate storage and archiving. The best option for developed negatives and slides is to digitize them immediately and store them together with the digital images, making copies that should be kept separately. Photos stored in digital form take up little memory, and organized with appropriate software they become immediately available [7].

CONCLUSION

Digital photography has now caught up with, and in some aspects surpassed, classical photography. Many photographers use digital photography exclusively or in combination with classical, and this trend in favor of digital will probably not change in the future. The use of digital photography in dental practice is simple, fast and extremely helpful in documenting procedures, educating patients and performing clinical research, providing many benefits to dentists and patients. Good and successful dental photography is the reward for investing in equipment and for the effort and time spent acquiring knowledge in this field.

Figure 4. Graphic representation of Scheimpflug’s principle: a) without rotation of the lens plane; b) with rotation of the lens plane
Figure 5. Camera with a macro lens on a carrier for recording with selective focus

REFERENCES
1. Samuels R. Careers in Photography. New York: New York Institute of Photography; 1978.
2. Peres M. Dental Photography for Photographers. Rochester, New York: School of Photographic Arts & Sciences; 2002.
3. Hedgecoe J. Foto-priručnik. Zagreb: Mladost; 1978.
4. Kroj O. Snimanje iz blizine.
Beograd: Tehnička knjiga; 1974.
5. Davies A. Close-Up and Macro Photography. Oxford: Focal Press; 2010.
6. Irfan A. Digital and Conventional Dental Photography: A Practical Clinical Manual. Carol Stream, IL: Quintessence Publishing Co; 2004.
7. Peschke A. Digital Photography of Clinical Cases. Schaan, Liechtenstein: Ivoclar Vivadent AG; 2005.

Received: 17/06/2014 • Accepted: 20/08/2014
Po se ban zna čaj den tal ne fo to gra fi je je do ku men ta ci ja, od no sno mo guć nost da se mak si mum in for ma ci ja mo že za be le ži ti u uslo vi ma ko ji se mo gu po no vi ti. Stan dar di za ci ja fo to graf skih uslo va pri sni ma nju u usnoj du plji i pra vil no ču va nje i ar hi vi ra nje den tal nih fo to gra fi ja ta ko đe su zna ča jan pred u slov za kva li tet nu i upo tre blji vu fo to gra fi ju. Ključ ne re či: den tal na fo to gra fi ja; oštri na u du bi nu; re la tiv ni otvor objek ti va; hi per fo kal na dis tan ca; Šajm pflu gov prin cip UVOD Bi lo bi te ško za mi sli ti bi lo ko ji vid na šeg po sto ja nja ko ji ni je u ve zi s fo to gra fi jom. Nje na sko ro dvo ve kov na isto ri ja pred sta vlja pre kret ni cu u raz vo ju dru štva, či ne ći ži vot lju di bo ga ti jim i sa dr­ žaj ni jim. Fo to gra fi ja je to li ko uti ca la na svest, da je iz re ka da jed­ na sli ka vre di kao hi lja du re či pri hva će na kao no tor na či nje ni ca. Fo to gra fi ja je da nas pro dr la u sve seg men te ži vo ta, obez be đu ju ći no ve či nje ni ce i sa zna nja u obla sti na u ke, me di ci ne, in du stri je, ko mu ni ka ci ja i umet no sti. Fo to graf je pri su tan svu da oko nas, fo to gra fi še sve i sva ko ga, pa broj dnev no sni mlje nih fo to gra fi ja pre va zi la zi tre nut ni broj sta nov ni ka na Ze mlji. Ova op sed nu tost fo to gra fi jom po sle di ca je nje ne su šti ne ko ja se ogle da u ma gi ji pri vlač noj svim lju di ma: pr ko še nju vre me nu i pro la zno sti. Tre­ nut no za be le že ni pri zo ri na fo to gra fi ja ma osta ju tra jan do kaz po sto ja nja. Zbog to ga je fo to graf ski apa rat po stao je dan od naj­ ra spro stra nje ni jih ure đa ja ko ji sko ro sva ko dnev no ko ri sti naj ši ri sloj po pu la ci je, bi lo u ama ter ske ili pro fe si o nal ne svr he [1]. 
Den tal na fo to gra fi ja, kao gra na kli nič ke me di cin ske fo to gra­ fi je, ima pr ven stve no do ku men ta ci o nu ulo gu, ali i edu ka ci o nu, ka ko u obra zo va nju stu de na ta, ta ko i u pro ce su kon ti nu i ra ne edu ka ci je dok to ra sto ma to lo gi je. Sva ka ko ne bi tre ba lo za bo ra­ vi ti ve li ku ulo gu fo to gra fi je u re kla mi ra nju sto ma to lo ške opre­ me i ma te ri ja la. Den tal na fo to gra fi ja je po sta la moć no oru đe u pred sta vlja nju pla na sto ma to lo ških in ter ven ci ja sa mim pa ci jen ti ma. Do bra fo­ to graf ska pre zen ta ci ja i efi ka sna ko mu ni ka ci ja po jed no sta vlju­ ju pa ci jen ti ma te ško ra zu mlji ve sto ma to lo ške ter mi ne de mon­ stri ra ju ći ili po tvr đu ju ći po tre bu za od go va ra ju ćim po stup kom. Bo ga ta i do bro vo đe nja fo to­ar hi va omo gu ća va kli ni ča ri ma da pre zen tu ju pa ci jen ti ma ra ni je re še ne slič ne slu ča je ve, vi zu el no po tvr đu ju ći ti me ni vo pro fe si o nal ne spo sob no sti [2]. Da na šnja fo to gra fi ja nu di dve mo guć no sti: 1) kla sič nu fo to­ gra fi ju, ko ja ko ri sti film ski ma te ri jal kao no sač sli ke i kod ko je je neo p hod na la bo ra to rij ska ob ra da; i 2) di gi tal nu fo to gra fi ju, kod ko je se sli ka stva ra elek tron skim pu tem i isto ga tre nut ka je sprem na za re pro du ko va nje. Ova dru ga je na da na šnjem teh no­ lo škom raz vo ju do sti gla sve od li ke kla sič ne fo to gra fi je (kvan ti­ tet vi zu el nih in for ma ci ja na po je di nač nom snim ku, op seg kon­ tra sta i re pro duk ci ju bo je), a u ne ki ma, kao što je mo guć nost skla di šte nja po je di ni ci za pre mi ne, i pre va zi šla. Za raz li ku od kla sič ne fo to gra fi je, ko ja je po sle ob ra de re al na i po sto ji na fil­ mu, pa sa mim tim je i vi dlji va, di gi tal na fo to gra fi ja je ko di ra na, po sto ji sa mo u ob li ku je di ni ca i nu la u elek tron skoj me mo ri ji ili ne kom od mag net nih ili op tič kih me di ja. 
Da bi bi la vi dlji va, mo ra ju se ko ri sti ti još ne ki ure đa ji: sam fo to­apa rat, ko ji ima ma li LCD ekran za pri ka zi va nje već sni mlje nih fo to gra fi ja, TV apa rat ili vi deo pro jek tor ili ekran mo ni to ra ra ču na ra ko ji se mo že pri klju či ti na fo to­apa rat. Tre ba na po me nu ti da se u sva tri slu ča ja fo to gra fi ja pri ka zu je u svom re du ko va nom ob li ku, jer ni je dan od ovih ekra na ne mo že re pro du ko va ti pu nu re zo lu ci ji fo to gra fi je. Da bi se vi de le u svom pu nom sja ju, neo p hod no ih je od štam pa ti na pa pi ru u od go va ra ju ćem for ma tu (fo to gra fi ju od 6 me ga pik se la tre ba po ve ća ti na for mat 18×24 cm). PREDNOSTI DIGITALNE FOTOGRAFIJE Pred nost di gi tal ne fo to gra fi je ni je u nje nom kva li te tu u fo to­ graf skom smi slu, već u to me što je tre nut no na kon sni ma nja do stup na ko ri sni ku, što ju je mo gu će od štam pa ti, ar hi vi ra ti, ko pi ra ti i elek tron skom po štom po sla ti u bi lo ko ji kraj sve ta. Ta ko đe, nje nom upo tre bom mo gu će je eli mi ni sa ti gre ške ko je su od mah vi dlji ve i ne u spe le fo to gra fi je od ba ci ti, a po stu pak sni ma nja po na vlja ti do uspe šnog re zul ta ta. Za ba vlje nje fo to gra fi jom, bi lo kla sič nom ili di gi tal nom, osim od go va ra ju će opre me po treb no je ima ti i ne ka osnov na zna nja iz obla sti teh no lo gi je fo to gra fi je ko ja pod ra zu me va ju pra vil no ko ri šće nje teh ni ke, po zna va nje osno va ge o me trij ske op ti ke, upo tre bu osve tlje nja, po zna va nje fo to graf skih ma te ri ja­ la i pro ce sa, kao i osno ve li kov ne kul tu re (kon tu ru, ob lik, bo ju, tek stu ru i kom po zi ci ju). Fo to­apa rat je me ha nič ko­op tič ko­elek tron ski ure đaj za be­ le že nje re al nih sta tič nih sli ka na fo to graf skom fil mu ili elek­ tron skom sen zo ru. 
Za den tal nu fo to gra fi ju naj bo lja je pri me na ma lo for mat nog 35­mi li me tar skog apa ra ta s re flek snim tra ži lom 155Stomatološki glasnik Srbije. 2014;61(3):149-156 i mo guć no šću iz me ne objek ti va; što se ti če kla sič ne fo to gra fi je, od no sno di gi tal nog apa ra ta istih ka rak te ri sti ka, s re zo lu ci jom od 6 me ga pik se la i vi še. Ovi apa ra ti su re la tiv no ma lih di men zi­ ja, jed no stav ni za upo tre bu i pri la go dlji vi za raz ne vr ste sni ma­ nja. Kva li tet snim ka je ta kav da omo gu ća va kva li tet no po ve ća nje do for ma ta A4, kao i za sve vr ste pre zen ta ci ja. Osnov ni de lo vi fo to­apa ra ta su: ku ći šte, tra ži lo, za tva rač i objek tiv. Ku ći šte ima ulo gu da po ve že sve de lo ve apa ra ta u jed nu ce li nu, kao i da obez be di za sve tlo ne pro pu stan pro stor u ko jem se na la zi film ski ma te ri jal ili fo to o se tlji vi sen zor. Tra ži lo slu ži za od re đi va nje že lje nog iz re za i za po de ša va nje pra vil nog fo ku sa. Ulo ga za tva ra ča je da pro pu sti sve tlo ko je pro la zi kroz objek tiv u tre nut ku eks po zi ci je. Objek tiv je op tič ko­me ha nič ki sklop za du žen za stva ra nje re al nog li ka objek ta sni ma nja. Od nje go vog di zaj na i ka rak te ri­ sti ka di rekt no za vi se iz gled i kva li tet fo to gra fi je (Sli ka 1). Osnov ne oso bi ne objek ti va se pred sta vlja ju dve ma ve li či­ na ma: ži žnom du ži nom objek ti va i nje go vom sve tlo snom ja­ či nom. Ži žna du ži na uti če na ve li či nu for mi ra nog li ka, a ti me i na vid ni ugao. Što je ona ma nja, ma nji je i lik, či me se vid ni ugao po ve ća va [3]. Sve tlo sna ja či na objek ti va je ko lič nik vred no sti ži žne du ži ne i preč ni ka naj ve ćeg otvo ra di ja frag me (iri sa). 
1 = D f f D = preč nik mak si mal nog otvo ra; F = ži žna du ži na; f = sve­ tlo sna ja či na Što je taj od nos ma nji, vi še sve tlo sti mo že da pro đe kroz objek tiv pri mak si mal nom otvo ru di ja frag me, i za ta kve objek­ ti ve se ka že da su sve tlo sno ja ki. Vred nost preč ni ka di ja frag me je pro men lji va i njom se re gu li še osve tlje nost po vr ši ne fo kal ne rav ni ra di do bi ja nja is prav ne vred no sti eks po zi ci je. Ako se preč­ nik otvo ra sma nji za vred nost 1 = D f f ∖∕∙2 H = F 2 f × Cc ∨2 , po vr ši na otvo ra je upo la ma­ nja, pa sa mim tim je i osve tlje nost fo kal ne rav ni sma nje na na po la. Zbog to ga je ska la na pr ste nu za re gu li sa nje otvo ra di ja­ frag me pred sta vlje na ne i me no va nim bro je vi ma sa ge o me trij­ skom pro gre si jom sa kvo ci jen tom 1 = D f f ∖∕∙2 H = F 2 f × Cc ∨2 (1,4 2 2,8 4 5,6 8 11 16 22...). Ove vred no sti otvo ra di ja frag me se na zi va ju re la tiv nim otvo rom objek ti vom i pred sta vlja ju istu osve tlje nost fo kal ne rav ni za objek ti ve raz li či tih ži žnih du ži na (što je ve ća ži žna du­ ži na, to je ve ći i ge o me trij ski otvor di ja frag me). Po red ulo ge u re gu li sa nju osve tlje no sti fo kal ne rav ni, re la tiv­ ni otvor objek ti va uti če i na po lje fo ku sa duž ose objek ti va. Za sva ku vred nost re la tiv nog otvo ra po sto ji vred nost hi per fo kal ne dis tan ce, ko ja pred sta vlja ra sto ja nje od objek ti va do fo ku si ra ne rav ni, pri če mu se oštro re pro du ku ju svi objek ti ko ji se na la ze u po lju ko je se pro sti re od po lo vi ne vred no sti hi per fo kal ne dis­ tan ce do ve o ma uda lje nih obje ka ta ko ji su u ne do gle du i či je se ra sto ja nje od objek ti va ozna ča va zna kom ∞. 1 = D f f ∖∕∙2 H = F 2 f × Cc H = hi per fo kal na dis tan ca; F = ži žna du ži na objek ti va; Cc = krug ra si pa nja (0,025 mm) Od nos ve li či na objek ta sni ma nja i for mi ra nog li ka na zi va se raz me ra pre sli ka va nja. 
Pro me na ra sto ja nja iz me đu objek ti va i fo ku sne rav ni u za vi sno sti od raz me re vr ši se u prak si po me­ ra njem objek ti va jer je me sto for mi ra nja li ka ne pro men lji vo u od no su na ku ći šte apa ra ta i od re đe no po lo ža jem film ske tra ke, od no sno fo to o se tlji vog či pa. To po me ra nje se na zi va iz vla ka i kod stan dard nih objek ti va iz no si mak si mal no 1/7 ži žne du ži ne. Objek ti vi ko ji ima ju ve ću iz vla ku, čak i do vred no sti jed ne ži žne du ži ne, na zi va ju se ma kro o bjek ti vi i ko ri ste se za sni ma nje iz bli zi ne. Upra vo u tu ka te go ri ju spa da i den tal na fo to gra fi ja, u ko joj se uglav nom vr še sni ma nja u raz me ri od 1:10 do 1:1 [4]. OŠTRINA SLIKE Je dan od osnov nih pro ble ma pri sni ma nju u ma kro pod ruč ju pred sta vlja sma nje na du bi na fo ku sa ko ja od re đu je po lje oštre sli ke po du bi ni, od no sno duž ose objek ti va. Oštrom sli kom se sma tra fo to gra fi ja po ve ća na na for mat 20×25 cm po sma tra na s ra sto ja nja od 25 cm, s vi dlji vim de ta lji ma od 0,2 mm. To zna či da na ori gi nal nom ne ga ti vu ili či pu ta vred nost tre ba da bu de ma nja za fak tor uve ća nja, ko ji u ovom slu ča ju iz no si osam pu ta, što je 40 li ni ja po mi li me tru, od no sno 0,025 mm. Ova vred nost se na zi va kru gom ra si pa nja. Uko li ko se de ta lji sni ma nog objek ta re pro du­ ku ju s ve ćom di men zi jom, uve ća na sli ka se do ži vlja va kao neo štra. Tri su osnov na pa ra me tra ko ja uti ču na oštri nu u du bi nu: raz me ra pre sli ka va nja, od no sno uda lje nost do sni ma nog objek­ ta, ži žna du ži na objek ti va i re la tiv ni otvor. Ži žna du ži na je od re­ đe na kon struk ci jom objek ti va, ra sto ja nje do objek ta sni ma nja da je že lje ni iz rez, pa pro me na re la tiv nog otvo ra osta je je di na op ci ja za re gu li sa nje oštri ne u du bi nu. Što je on ma nji, to je ve će po lje du bin ske oštri ne (Sli ka 2) [5]. 
Po lje oštri ne u du bi nu mo že se iz ra ču na ti na osno vu sle de će for mu le: FOD = DF ­ DN 1 = D f f ∖∕∙2 H = F 2 f × Cc ∨2 df = f 2 h – (s – f) dn = h × s h + (s – f) DF = ra sto ja nje od ka me re do naj da lje tač ke oštri ne; DN = ra sto ja nje od ka me re do naj bli že tač ke oštri ne; H = hi per fo kal na dis tan ca; S = ra sto ja nje od ka me re do sni ma nog objek ta; F = ži žna du ži na objek ti va Po sto je i ta bli ce po lja oštri ne u du bi nu za objek ti ve raz li či tih ži žnih du ži na. Za kon tro li sa nje oštri ne u du bi nu pu tem pro me­ ne re la tiv nog otvo ra objek ti va tre ba ima ti u vi du da sma nje nje otvo ra di ja frag me uti če na eks po zi ci ju. Ako objek ti vu sve tlo sne ja či ne 1:2 sma nji mo re la tiv ni otvor na vred nost 32, on pro pu­ šta 128 pu ta ma nje sve tlo sti u od no su na mak si mal ni otvor. To zna či da je za to li ko pu ta po treb no pro du ži ti vre me eks po zi ci je ili ko ri sti ti 128 pu ta ja če osve tlje nje, što mo že u ne kim slu ča­ je vi ma pred sta vlja ti ogra ni če nje, jer je za ose tlji vost či pa ili fil­ ma vred no sti ISO 200 pri du ži ni tra ja nja eks po zi ci je od 1/60 s (što je gra ni ca za sni ma nje iz ru ke) i sa re la tiv nim otvo rom 32 po treb na osve tlje nost po vr ši ne od 100.000 Lux, što od go va ra osve tlje no sti objek ta sun če vom sve tlo šću (Sli ka 3). Po red ovog na či na kon tro li sa nja oštri ne u du bi nu, ko ji se mo že pri me ni ti na sva kom fo to­apa ra tu i sa sva kim objek ti­ vom, po sto je i dru gi kod ko jih je neo p hod na upo tre ba po seb ne fo to graf ske teh ni ke i opre me. 156 KONTROLA OŠTRINE U DUBINU Kon tro la oštri ne u du bi nu se pri me nju je sa mo kod sni ma nja obje ka ta či ja sni ma na po vr ši na ni je pa ra lel na s fo kal nom rav ni. To zna či da se te dve rav ni ne gde u pro sto ru se ku. 
Ako se u isto vre me objek tiv ro ti ra oko ose ko ja je pa ra lel na sa sni ma nom rav ni ta ko da se zad nja glav na ra van objek ti va se če s pre se kom sni ma ne i fo kal ne rav ni, on da će se ce lo vid no po lje re pro du ko­ va ti oštro u fo kal noj rav ni bez ob zi ra na re la tiv ni otvor objek ti­ va. Ovo je Šajmp flu gov (Sche impflug) prin cip, po znat već vi še od ve ka (Sli ka 4). Da bi mo gao prak tič no da se pri me ni u fo to gra fi ji, neo p hod na je upo tre ba ili ve li ko for mat nih ka me ra ko je ko ri ste film kao no­ sač sli ke, ili spe ci jal nih objek ti va za ma lo for mat ne ka me re, bi lo kla sič ne ili di gi tal ne. S ob zi rom na to da su ve li ko for mat ne ka­ me re vr lo ret ko u upo tre bi, a uz to ni su po god ne za sni ma nje iz bli zi ne, ko ri šće nje objek ti va s mo guć no šću de cen tri ra nja mno go je pri klad ni je. Osim što po se du ju mo guć nost ro ta ci je oko ose ko ja je u zad njoj glav noj rav ni, ovi objek ti vi ima ju mo guć nost tran sla­ tor nog po me ra nja op tič ke ose u jed nom prav cu u oba sme ra. Ovo za po sle di cu ima is pra vlja nje ge o me trij ske per spek ti ve u fo kal noj rav ni (pa ra lel ne li ni je u rav ni ko ja je pod uglom u od no su na fo kal nu ra van osta ju pa ra lel ne bez ob zi ra na uda lje nost). Pr ven­ stve na na me na ovih objek ti va je sni ma nje ar hi tek tu re, pa ne ki ne ma ju iz vla ku za sni ma nje u ma kro pod ruč ju [5]. Ako im se po ve ća iz vla ka po sta vlja njem me đu pr ste no va, od no sno me ha, po na ša ju se kao ma kro o bjek ti vi. Dru gi na čin za po ve ća va nje raz me re sni ma nja je upo tre ba pred le ća. Ako se objek ti vu do da s pred nje stra ne sa bir no so či vo, ukup na ži žna du ži na se sma nju je ta ko što pri bli ža va fo kal nu ra van objek ti vu, što omo gu ća va sni ma nje bli skih pred me ta, od no sno po ve ća nu raz me ru. 
Pri to me se po ve ća va i sve tlo sna ja či na objek ti va, pa ne po sto ji po tre ba za ko rek ci jom eks po zi ci je [4]. Tre ći na čin kon tro li sa nja oštri ne u du bi nu se ko ri sti uglav­ nom kod sni ma nja vr lo ma lih obje ka ta u po ve ća noj raz me ri (1:1 i vi še). U tom slu ča ju obje kat mo ra pri li kom sni ma nja bi ti ne po mi čan, a fo to­ a pa rat se po sta vlja na sta tiv ko ji ima ma le po mič ne san ke. Po sle po stav ke sve tla i utvr đi va nja op ti mal­ nog otvo ra di ja frag me pro ce nju je se ko li ka je oštri na u du bi nu. Za tim se sni ma se ri ja fo to gra fi ja sa suk ce siv nim po me ra njem fo to­apa ra ta po san ka ma za vred nost pro ce nje ne oštri ne u du­ bi nu, či me se do bi ja niz fo to gra fi ja s fo ku si ra nim po je di nač nim de lo vi ma sni ma nog objek ta. Te fo to gra fi je se za tim ob ra đu ju spe ci ja li zo va nim soft ve rom ko ji od sva ke fo to gra fi je ko ri sti sa mo fo ku si ra ni seg ment i pra vi kom po zit nu fo to gra fi ju ce log objek ta ko ji je re pro du ko van u pot pu no sti fo ku si ran. Ovi soft­ ve ri su vr lo jed no stav ni za ko ri šće nje, a po red ko mer ci jal nih, kao što je He li con Fo cus fir me He li con Soft Ltd., po sto je i oni sa bes plat nom li cen com i otvo re nim ko dom, kao što je Com bi­ neZM Alana Hedlija, ko ji mo gu bi ti pri la go đe ni in di vi du al nim po tre ba ma auto ra uz po zna va nje pro ce sa ko di ra nja (Sli ka 5). Glav ni cilj den tal ne fo to gra fi je je do ku men ta ci ja. To zna či da što vi še in for ma ci ja tre ba da se za be le ži u uslo vi ma ko ji se mo gu po no vi ti. Sa mo na taj na čin na fo to gra fi ju se mo že gle da ti kao na od go va ra ju ći alat za pri ku plja nje in for ma ci ja. Zbog to ga je neo p hod na stan dar di za ci ja u ovoj obla sti, ka ko u po gle du oda bi ra fo to­opre me, ta ko i u po gle du nje nog ko ri šće nja, tj. po­ stav ke osve tlje nja i po lo ža ja u od no su na pa ci jen ta [6]. 
Particular attention should be paid to the control of depth of field, which, alongside correct exposure and color reproduction, is one of the basic parameters of a successful photograph. Knowledge of the basics of geometric optics and of the principles of photographic image formation is essential, especially for macro photography. The most suitable camera for this purpose is a single-lens reflex with interchangeable lenses, either a classic 35 mm film model or a digital one with a resolution of at least 8 megapixels. The lens should have a focal length of about 100 mm with a macro range up to a 1:1 reproduction ratio. If resources allow, a tilt-shift lens is also recommended for controlling geometric perspective and the zone of sharpness. The appropriate lighting is an electronic flash mounted on the front of the lens, with two reflector lamps that can rotate around the lens axis. A ring flash or an LED light source is essential for intraoral photographs. There are five basic views that can illustrate a patient's intraoral status: a frontal view, two lateral views, and two occlusal views, for which retractors and mirrors are required. In addition there are six extraoral views: two profile views, a frontal view, a frontal view with the head tilted slightly back, and two fronto-lateral views at 45 degrees to the sagittal plane. Such standardization of photographic conditions makes photographs comparable even when taken at long intervals and by different photographers. Only in this way can dental photography be used as a valuable aid in documentation.
For this form of documentation to be easily and always accessible, dental photographs must be properly stored and archived. Developed negatives and slides on film are best digitized immediately and stored together with the digital photographs, with backup copies made as a rule and, if possible, kept in separate locations. Photographs stored in digital form take up very little space and, with suitable photo-documentation management software, are instantly accessible [7]. CONCLUSION Digital photography has today caught up with, and in some respects surpassed, classic photography. Many photographers use either digital photography exclusively or digital in combination with classic. This trend in favor of digital photography is unlikely to change in the future. The use of digital photography in dental practice is simple, fast, and extremely useful for documenting procedures, educating patients, and conducting clinical research, providing many benefits to dentists and patients alike. Good, successful dental photography is a true reward for the funds invested in equipment and the effort and time spent acquiring knowledge in this field. Dukić Z.
Depth of Field in Dental Photography and Methods for Its Control work_ethaxkoakncpxnd6vt6zkr4alu ---- work_evr2vmtbfnbellvgcn64hwmfbm ---- Response to Comments on Fedorko et al. Hyperbaric Oxygen Therapy Does Not Reduce Indications for Amputation in Patients With Diabetes With Nonhealing Ulcers of the Lower Limb: A Prospective, Double-Blind, Randomized Controlled Clinical Trial. Diabetes Care 2016;39:392–399. Diabetes Care 2016;39:e136–e137 | DOI: 10.2337/dci16-0006 We welcome this opportunity to respond to the comments (1–3) regarding our study (4).
There are a few common themes that we would like to clarify. The first theme is around the criticism of using "meeting criteria for amputation" instead of "amputation event." It is ideal to use more final patient outcomes in all research; however, the sample size and time needed to recruit and follow patients of sufficient duration to observe final events is often prohibitive. This is the reason why intermediate markers and outcomes are used in many disease areas, including diabetes. In addition, final events like amputations may be an inappropriate outcome in small, randomized controlled trials (RCTs) where other factors may confound the true treatment effect. For example, patient cultural preferences, psychological trauma, and procedure-booking logistics (among other factors) frequently override medical advice about whether and when the limb should be amputated. This (extrinsic to disease) variability precludes using actual amputation event as a consistent outcome measure unless a very large sample and long follow-up times are used. This is impractical and prohibitively costly in placebo-controlled hyperbaric treatment trials. However, Margolis et al. (5) have done it elegantly in a study of different design. This multicenter observational cohort with propensity score matching methodology showed no amputation-sparing effect or improved wound healing in over 700 patients treated with hyperbaric oxygen therapy (HBOT) matched with over 5,000 patients receiving standard wound care. By using the strength of a large sample of patients, this study was able to corroborate the findings we observed in our smaller RCT. A second criticism theme was around the fact that the "meeting criteria for amputation" outcome was assessed solely on the basis of the digital photography, a method of validation that is not valid or validated (2,3). This criticism is simply incorrect and misleading.
As described in the RESEARCH DESIGN AND METHODS section of our article (4), at the primary end point of the study the wound care nurse presented both clinical case information and all patient photographs to the same vascular surgeon who evaluated patients prior to enrollment. The surgeon had a choice (at his discretion) to see the patient in person prior to adjudication. This method of adjudication for indications to amputate has been previously formally validated in direct comparison with personal visits (see refs. 19 and 20 in RESEARCH DESIGN AND METHODS) (4). Löndahl et al. (1) also questioned the validity of our results because of small differences in patient demographics and shorter follow-up times between our and their (Swedish) study (6). This criticism is unfounded, as one would expect to see some differences in demographics across countries and between populations due to higher rates of diabetes and obesity in North America. Our follow-up time is consistent with several RCTs of other diabetic foot ulcer (DFU) treatment modalities with positive outcomes within 12 weeks (refs. 34–36) (4). There is also very good evidence that the healing rate during the first 4 weeks of treatment is a strong predictor of wound healing (ref. 23) (4). If anything, our results have more relevance to North American populations. Also, contrary to the assertion by Löndahl et al. (1), transcutaneous oxygen pressure measurement (TcPO2) has very poor negative predictive value (7), which limits its clinical/research usefulness. It is disappointing that Huang (3) asserts "methodological errors" seemingly without carefully reading our article. It is simply unprofessional to criticize an article in a letter to the editor simply because of the preconceived notions of treatment benefit without even reading the methods of an article. First, actual amputations were reported (see RESULTS). Second, off-loading (as required, and as tolerated) was used (see RESEARCH DESIGN AND METHODS). Third, we have clearly stated that Wagner grade 1 wounds "were considered healed" for the purposes of analysis (see RESEARCH DESIGN AND METHODS). Fourth, the primary outcome was assessed at 12 weeks; therefore, the rate of actual amputation after the final adjudication point is irrelevant to the study methodology (see RESEARCH DESIGN AND METHODS). Contrary to the assertion by Murad (2), there are no known contraindications to HBOT specific to Wagner grade 2 ulcers. We are dismayed at his unprecedented statement that our study was "not patient-centered" and "did not add any new information" because we did not exclude patients with Wagner grade 2 wounds. One should not confuse Undersea and Hyperbaric Medical Society (UHMS) guidelines (8) alignment with U.S. Centers for Medicare & Medicaid Services (CMS) reimbursement policies with scientific evidence.
Ludwik Fedorko,1 James M. Bowen,2,3 Wilhelmine Jones,1 George Oreopoulos,4 Ron Goeree,2,3 Robert B. Hopkins,2,3 and Daria J. O'Reilly2,3. 1Department of Anesthesia and Pain Management, Toronto General Hospital, University Health Network, Toronto, ON, Canada; 2Department of Clinical Epidemiology & Biostatistics, McMaster University, Hamilton, ON, Canada; 3Programs for Assessment of Technology in Health Research Institute, St. Joseph's Healthcare Hamilton, Hamilton, ON, Canada; 4Division of Vascular Surgery, Toronto General Hospital, University Health Network, Toronto, ON, Canada. Corresponding author: Ludwik Fedorko, ludwik.fedorko@uhn.ca. © 2016 by the American Diabetes Association. Readers may use this article as long as the work is properly cited, the use is educational and not for profit, and the work is not altered. e136 Diabetes Care Volume 39, August 2016. e-LETTERS – COMMENTS AND RESPONSES
Virtually all RCTs of HBOT to date have included patients with Wagner grade 2 DFU. In this study, we painstakingly assessed a wide range of patient-centered outcomes. We did not see differences in any outcome measures of wound healing rates, quality of life, or independent living between the groups. Löndahl et al. (1) also refer to the critical issue of an inappropriate placebo used in their double-blind RCT (6), which we discussed in our article, as "irrelevant," a "'bubble' theory" and "of no value." Huang (3) dismisses our concerns (while agreeing at the same time) that the placebo used by Löndahl et al. (6) was not inert. This study (6) is widely quoted and is used in multiple meta-analyses and reviews (including UHMS guidelines) as important level 1 positive evidence for efficacy of HBOT for DFU treatment. Hyperbaric chambers were originally invented to treat "bends" caused by nitrogen bubbles, not to cause them. We stand by our opinion that the placebo used in the study by Löndahl et al. (6) was not a placebo but a nonbenign exposure of control group subjects. "Placebo" patients were subjected to 40 daily 90-min air compressions to 2.5 atmosphere absolute. Such compressed air exposure is beyond the time limits of the generally accepted civilian no-decompression tables (9), putting patients at significant risk of evolving intravascular gaseous nitrogen bubbles. These tables were never tested for repetitive exposures in elderly, sick people with poor peripheral circulation. Therefore, this study regimen may have been associated with observed delayed wound healing and with higher 3-year mortality in the placebo group, as was reported by Löndahl et al. in an abstract form (10). It may also conceivably explain why we were not able to reproduce positive results of the study by Löndahl et al. (6) in our placebo-controlled trial. Hyperbaric oxygen is a drug delivered in a large pill called a hyperbaric chamber.
After several decades of use, it has to be held to the same standards as are applied to other expensive drugs. We were more disappointed than anybody else when we analyzed the data and reviewed it critically in the context of literature. However, as of now, the best available evidence does not provide support for use of HBOT in the patients with chronic diabetic wounds to facilitate healing or prevent the need for amputations. Duality of Interest. No potential conflicts of interest relevant to this article were reported. References 1. Löndahl M, Fagher K, Katzman P. Comment on Fedorko et al. Hyperbaric oxygen therapy does not reduce indications for amputation in patients with diabetes with nonhealing ulcers of the lower limb: a prospective, double-blind, randomized controlled clinical trial. Diabetes Care 2016;39:392–399 (Letter). Diabetes Care 2016;39:e131–e132. DOI: 10.2337/dc16-0105 2. Murad MH. Comment on Fedorko et al. Hyperbaric oxygen therapy does not reduce indications for amputation in patients with diabetes with nonhealing ulcers of the lower limb: a prospective, double-blind, randomized controlled clinical trial. Diabetes Care 2016;39:392–399 (Letter). Diabetes Care 2016;39:e135. DOI: 10.2337/dc16-0176 3. Huang ET. Comment on Fedorko et al. Hyperbaric oxygen therapy does not reduce indications for amputation in patients with diabetes with nonhealing ulcers of the lower limb: a prospective, double-blind, randomized controlled clinical trial. Diabetes Care 2016;39:392–399 (Letter). Diabetes Care 2016;39:e133–e134. DOI: 10.2337/dc16-0196 4. Fedorko L, Bowen JM, Jones W, et al. Hyperbaric oxygen therapy does not reduce indications for amputation in patients with diabetes with nonhealing ulcers of the lower limb: a prospective, double-blind, randomized controlled clinical trial. Diabetes Care 2016;39:392–399 5. Margolis DJ, Gupta J, Hoffstad O, et al.
Lack of effectiveness of hyperbaric oxygen therapy for the treatment of diabetic foot ulcer and the prevention of amputation: a cohort study. Diabetes Care 2013;36:1961–1966 6. Löndahl M, Katzman P, Nilsson A, Hammarlund C. Hyperbaric oxygen therapy facilitates healing of chronic foot ulcers in patients with diabetes. Diabetes Care 2010;33:998–1003 7. Fife CE, Buyukcakir C, Otto GH, et al. The predictive value of transcutaneous oxygen tension measurement in diabetic lower extremity ulcers treated with hyperbaric oxygen therapy: a retrospective analysis of 1,144 patients. Wound Repair Regen 2002;10:198–207 8. Huang ET, Mansouri J, Murad MH, et al. A clinical practice guideline for the use of hyperbaric oxygen therapy in the treatment of diabetic foot ulcers. Undersea Hyperb Med 2015;42:205–247 9. Nishi RY. The DCIEM decompression tables and procedures for air diving. In Decompression in Surface-Based Diving: Proceedings of the Thirty-sixth UHMS Workshop. Tokyo, Japan, Undersea and Hyperbaric Medical Society, 1987 10. Löndahl M, Katzman P, Hammarlund C, et al. Improved survival in patients with diabetes and chronic foot ulcers after hyperbaric oxygen therapy. Outcome of a randomized double-blind placebo controlled study. Presented at the 9th Scientific Meeting of the Diabetic Foot Study Group, 17–19 September 2010, in Uppsala, Sweden. care.diabetesjournals.org Fedorko and Associates e137 work_evvhhqjiy5bhnb23sp5t652gbu ---- N Rafiei1, H Tabandeh1 and M Hirschbein2, 1The Wilmer Eye Institute, Baltimore, MD, USA; 2The Krieger Eye Institute, Sinai Hospital, Baltimore, MD, USA. Correspondence: H Tabandeh, Department of Ophthalmology, Wilmer Eye Institute, B-20, Johns Hopkins Hospital, 600 N. Wolfe Street, Baltimore, MD 21287-9248, USA. Tel: +1 410 955 8265; Fax: +1 410 614 8496. E-mail: htaband1@jhmi.edu Eye (2006) 20, 1372–1373. doi:10.1038/sj.eye.6702205; published online 9 December 2005 Sir, The surgical management of chronic hypotony due to uveitis Prolonged hypotony in uveitis patients is often regarded as the end stage of a chronic disease from which recovery is improbable if not impossible. However, not all hypotony cases are alike. Hypotony resulting from active inflammation will respond to adequate immunosuppression, and as indicated in our article an attempt should be made to treat it medically before considering a surgical approach. The question then is how long one should wait to observe a response. As indicated by Dr Liu and co-workers, periocular steroids can have a prolonged effect. In certain forms of uveitis, a single periocular injection can provide a beneficial effect for 8–12 weeks. However, one would expect to see a response to steroids within the first 10–14 days. To take into account a possible delay in this initial response, we followed patients for 2 months prior to surgery. The patients included in this series did not show a pressure rise on intensified immunosuppression. There is no doubt that steroids and other immunosuppressants contribute to the surgical result. However, the effect will not be sustained once a taper is initiated. Management in this series of patients became easier following surgery, with fewer and lower doses of immunosuppressants being required.
Hypotony in chronic uveitis patients is often characterized by a protracted course requiring frequent reinjections, or modifications to the immunosuppressive regimen. For all the reasons mentioned above, and our results, we feel that a surgical approach should be considered in this group of patients. With time, we should be able to determine the place and timing of surgery in the management of this severe complication of uveitis. MD de Smet, F Gunning and R Feenstra, Department of Ophthalmology, University of Amsterdam, Meibergdreef 9, Rm G2-217, Amsterdam 1105 AZ, The Netherlands. Correspondence: MD de Smet, Tel: +31 20 566 3455; Fax: +31 20 566 9053. E-mail: m.d.desmet@amc.uva.nl Eye (2006) 20, 1373. doi:10.1038/sj.eye.6702208; published online 9 December 2005 Sir, The surgical management of chronic hypotony due to uveitis Dr de Smet and associates have conducted an interesting study on surgical interventions for cases of uveitis-induced chronic hypotony. After a joyous reading of the whole article, we think that an important issue should warrant further discussion. Subtenon's capsule triamcinolone acetonide injection was shown to be effective in the management of intraocular inflammation.1,2 It has an overt advantage over systemic steroid, with reduced systemic adverse effects and a slow-releasing depot.1 The biological action of subtenon triamcinolone acetonide is long and can be up to 6 weeks or even longer.1,2 From the methodology, it can be learned that some of the patients with intraocular inflammation were given one to two subtenon's injections prior to the surgical intervention.3 Interestingly, if one inspected Table 1 of the article, it was noted that duration of hypotony in patient numbers 1–4 ranged from 8 to 12 weeks.3 Apparently, there was no washing-out period for the subtenon steroid administered. Correspondence 1373 Eye
Hence, out of the six patients enrolled, four of them (66.7%) might undergo the antihypotony surgeries superimposing with the ongoing anti-inflammatory effect of the subtenon steroid depot. This is a significant confounding factor. These inadvertently overlapped medical and surgical managements may blur the attribution that the observed postoperative improvement was solely due to surgical manipulation. If uncontrolled, it may imperil the reproducibility of the proclaimed intraocular pressure-stabilizing effect of the surgery. We would like to learn more from the authors about their precautions against this important confounding influence. Acknowledgements Financial and proprietary interest: Nil. Financial support: Nil. References 1 Paganelli F, Cardillo JA, Melo Jr LA, Oliveire AG, Skaf M, Costa RA, Brazilian Ocular Pharmacology and Pharmaceutical Technology Research Group. A single intraoperative sub-Tenon's capsule triamcinolone acetonide injection for the treatment of post-cataract surgery inflammation. Ophthalmology 2004; 111: 2102–2108. 2 Lafranco Dafflon M, Tran VT, Guex-Crosier Y, Herbort CP. Posterior sub-Tenon's steroid injections for the treatment of posterior ocular inflammation: indications, efficacy and side effects. Graefes Arch Clin Exp Ophthalmol 1999; 237: 289–295. 3 De Smet MD, Gunning F, Feenstra R. The surgical management of chronic hypotony due to uveitis. Eye 2005; 19: 60–64. DTL Liu, C-L Li and VYW Lee, Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Prince of Wales Hospital, 147 Argyle Street, Hong Kong SAR, China. Correspondence: DTL Liu, Tel: +86 852 2632 2878; Fax: +86 852 2648 2943. E-mail: david_tlliu@yahoo.com Eye (2006) 20, 1373–1374.
doi:10.1038/sj.eye.6702207; published online 9 December 2005 Sir, An evaluation of photographic screening for neovascular age-related macular degeneration We read with great interest the work of DAL Maberley et al1 on the 'Evaluation of photographic screening for neovascular age-related macular degeneration'. The authors were looking at the utility of colour fundus photographs for identifying subjects with potentially treatable neovascular AMD. While the methods, analysis and conclusions of the study seem both convincing and sound, the following is a suggestion which, although meager, we feel could be of value to the authors. DAL Maberley et al used Kodachrome colour slides for both stereoscopic and nonstereo images. Although important in both documentation and diagnosis, the 35 mm colour fundus photos are slowly losing their allure in retinal imaging. Colour slides are being replaced by the technologically more advanced digital fundus photography. This imaging tool used to give a less detailed picture in the past when compared to 35 mm; however, with the recently available 6.0 megapixel cameras, resolution of the photos has been comparable if not superior to traditional cameras. Even reference reading centres, such as the University of Wisconsin Reading Centre, are gradually switching to high-resolution digital photography, replacing the gold standard 35 mm slides. Advantages of digital photography comprise better manipulation of the fundus image, including magnification and colour filtering, and easier electronic storage/e-mailing. Finally, despite an initial higher cost, the digital camera's ongoing financial burden is by far less than film. We suggest that the authors embark on digital photography (stereo and nonstereo) for projects to detect retinal pathology.
This was proven both valuable and effective in ample studies.2–4 Also, by using the different image manipulation tools, the authors then might achieve an even higher sensitivity and specificity than the one reported. References 1 Maberley DAL, Isbister C, Mackenzie P, Aralar A. An evaluation of photographic screening for neovascular age-related macular degeneration. Eye 2005; 19: 611–616. 2 van Leeuwen R, Chakravarthy U, Vingerling JR, Brussee C, Hooghart AJ, Mulder PG et al. Grading of age-related maculopathy for epidemiological studies: is digital imaging as good as 35-mm film? Ophthalmology 2003; 110: 1540–1544. 3 Merrin LM, Guentri K, Recchia CC. Digital detection of diabetic retinopathy. J Ophthalm Photography 2004; 26: 59–66. 4 Bursell SE, Cavallerano JD, Cavallerano AA, Clermont AC, Birkmire-Peters D, Aiello LP, Aiello LM, Joslin Vision Network Research Team. Stereo nonmydriatic digital-video color retinal imaging compared with Early Treatment Diabetic Retinopathy Study seven standard field 35-mm stereo color photos for determining level of diabetic retinopathy. Ophthalmology 2001; 108: 572–585. Correspondence 1374 Eye work_ey6g6n2gnrcvnjpwul6dakrfoy ---- MICROSCOPY 101 We appreciate the response to this publication feature and welcome all contributions. Contributions may be sent to José A. Mascorro, our Technical Editor, at his e-mail address: jmascor@tulane.edu. José may also be reached at the Department of Structural and Cellular Biology, Tulane University Health Sciences Center, 1430 Tulane Ave., New Orleans, LA 70112 and Ph: (504) 584-2747, (504) 584-1687. New Resin for Repair of Bell Jar Chips Owen P. Mills and Matt Huuki*, Michigan Technological University; *Matt's Auto Glass, Houghton, MI; opmills@mtu.edu and mhuuki@global.net As you ease the bell jar back down onto its seat, past the evaporation fixture, it is quite easy to chip a $600 jar.
Chipped bell jars are a perennial problem in EM labs. They degrade vacuum in evaporators, resulting in lower quality films, and ultimately shorten the life of the vacuum system. This note describes a new resin that I have used for repairing bell jar chips. A common chip can be seen in Fig. 1. It was caused by crashing the bell jar into the evaporation fixture. The chip is less than 0.5 mm deep. Beside it you can see another chip that will not be treated as it does not extend to the jar edge. IMPORTANT NOTE TO READERS: It would not be wise to attempt repairs on deep chips of > 0.10 mm or those spreading over 30 mm laterally. Finally, DO NOT attempt repairs on cracked bell jars. Bell jars must resist the stress of vacuum and cracked bell jars must be taken out of service.† I worked with Matt Huuki, owner of Matt's Auto Glass in Houghton, MI, to repair this bell jar chip. The resin Matt uses is Glas-Weld #2020 Clear Resin. This is the same product he uses to repair rock chips in automotive windshields. It is a UV sensitive resin that sets in an anaerobic environment. We made the repair as shown in the following sequence of photos. In Fig. 2 a dam of masking tape is applied to both sides of the jar to contain the thin epoxy. Slightly more resin is applied than needed and a mylar sheet is set over the wet resin in Fig. 3. The UV light is then applied for ~10 minutes (Fig. 4). The UV light is applied for a longer period of time for this bell jar repair since the resin is thicker compared with a windshield chip repair where the resin is normally very thin. After the UV is applied, the resin is dry and the tape can be removed. Next, the resin can be ground down to the same height as the ground edge of the bell jar. To accomplish this I used 240 grit SiC pressure sensitive adhesive (PSA) backed wet-dry sandpaper that we temporarily stuck down to a lab bench top. I wet the paper and slid the bell jar back and forth across the wet sandpaper, taking care to keep the bell jar flat against the bench top surface at all times (Fig. 5). The finished repair is shown in Fig. 6. This is the second jar I have repaired using this technique. Both repaired bell jars hold vacuum and have performed well in all ways. I would recommend this technique to those with shallow chips of small extent. Be aware that your local auto glass repair shop will have no idea what a bell jar is, so be prepared to explain your situation. Better yet, bring this copy of MT with you! †Please read the Microscopy Today Disclaimer on page 6. This procedure is potentially dangerous. DO NOT ATTEMPT this repair on ANY size bell jar defect without an understanding of the risks involved and your commitment to taking ALL appropriate precautions during the repair process and after the repaired bell jar is returned to service. If you are unsure of the consequences of your actions, don't do it! ... Editor Inexpensive Digitization of an SEM Henry C. Aldrich and Donna S. Williams, University of Florida, Gainesville, FL, haldrich2@cox.net Because of the high cost of Polaroid film, many years ago we fitted our Hitachi S-450 scanning electron microscope with a 35 mm camera. At that time, we used a Pentax ME Super, which was totally manual and had to have the film advanced by a hand lever. This was an annoyance, but when we set up the system, Polaroid Type 55 film was about $2.00 per photo, and the cost of 35 mm spooled in our lab ran about $.10 per photo. When we traded the Hitachi S-450 for the later Hitachi S-570, we moved the 35 mm system to this microscope. About 1999, when the Pentax ZX-50 with motorized film advance became available, we adapted it to the S-570, using the Pentax electric shutter release. The lens used with both of these cameras was an elderly 50 mm screw-mount Pentax Macro lens that focused well on the CRT of the SEM.
Then, when Canon introduced the Digital Rebel for under $1000 about a year ago, we convinced a local Canon dealer to lend us one to try on the SEM. It worked quite well, and so we purchased it and converted the Hitachi S-570 to digital photography for about $1000 - the cost of the Digital Rebel with its standard zoom lens plus the electric cable release. We also purchased an AC adapter to replace the AA batteries. The Rebel uses Compact Flash cards, which can then be removed from the camera and taken to any computer with a card reader to download the images. We have not experimented with other lenses. A dedicated macro lens might produce slightly sharper images, but the standard Rebel zoom lens seems quite adequate. The Canon electric cable release has a locking feature, so 46 MICROSCOPY TODAY March 2005
It is necessary to have an AC power supply, because the battery is being drained as long as the shutter is open at the "B" setting. As this is written, the 6.3 megapixel Canon EOS Digital Rebel (apparently also known as the 300D) is available for $854.95 (late 2004) after rebate from New York discount houses, including the 18-55 mm lens and a one gigabyte memory card. The electric release is $25.95 and the AC adapter $64.95. The camera will record both RAW and JPEG files. We have compared the two modes and see little or no advantage to using the RAW file mode for this application, although we find RAW very useful for normal color work. Ordinarily we discard the color information as soon as we download the JPEG file into the computer using Photoshop, and then archive the grayscale images on a hard drive in TIFF format to avoid loss of image quality. Figure A shows the camera on its mount, which was improvised as an extension of the bracket at left that originally held the Polaroid camera. We simply added a U-shaped piece of aluminum with a slot in it for a tripod-sized screw to hold the Canon camera, and fixed this aluminum bracket to the housing with the screw shown at left. The CRT is indicated by the white arrow. Figure B shows the operating arrangement, with a cardboard box covering the camera to exclude room light and dust. The black arrow shows the electric shutter release for the camera. Figure C shows part of a bacterial biofilm photographed with the 35 mm camera. Figure D shows the same specimen photographed with the Canon Digital Rebel as described herein.
work_ez43gjmrozdqhl7y5yex6i3fbe ---- An Approach for Finding Saliency Regions in 3D Images and Videos

Joby Anu Mathew, Student (M.Tech), Caarmel Engineering College, MG University, Pathanamthitta, India
Salitha M K, Asst. Prof., Caarmel Engineering College, MG University, Pathanamthitta, India

Abstract- Multimedia processing applications need newly improved techniques to satisfy the new demands of the modern era. 3D multimedia applications are the new trend in society. The main difference between 3D and 2D images is depth. In addition to color, luminance and texture, depth is the major feature of stereoscopic display, so the saliency detection models for 2D display cannot be used for stereoscopic visuals. In this paper, a simple approach is proposed for both stereoscopic images and videos. Features like color, luminance and texture are extracted from the original image after conversion to the YCbCr color space, and also from the discrete cosine transform coefficients of those features. A gradient filter is used in calculating the depth map, and a Gaussian model of the spatial distance between image patches is used to weight the feature differences. From these, the feature maps are constructed, and all the feature maps are combined to produce the final saliency map for 3D images and videos.

Keywords- Stereoscopic images, Stereoscopic saliency detection, center bias factor, human visual acuity.

I. INTRODUCTION
Saliency regions are the most important, most noticeable regions in an image.
A saliency model tries to mimic how a human eye identifies important objects in a scene and is typically based on a simple fundamental, namely the contrast between an object and its neighborhood. The Human Visual System (HVS) is an important characteristic for visual information processing. Vision can be broadly classified into monocular and binocular vision, and each serves a unique purpose. The difference between the two is the ability to judge distances, that is, to have depth perception. Monocular vision is seeing with only one eye at a time; when both eyes are used, this becomes binocular vision. In binocular vision, two eyes work together to focus on a single point, and the visual system processes that information to determine the depth of, or distance to, that point. Thus, binocular vision is used to determine the depth feature of a stereoscopic image. This is sometimes referred to as binocular disparity. In other words, it refers to the difference in image location of an object seen by the left and right eyes, resulting from the eyes' horizontal separation (parallax). In computer vision, binocular disparity is calculated from stereo images taken from a set of stereo cameras. The variable distance between these cameras, called the baseline, can affect the disparity of a specific point on their respective image planes. In computer vision, however, binocular disparity is defined as the coordinate difference of the point between the left and right images instead of a visual angle, and it is measured in pixels. Thus, binocular disparity helps to recover depth effectively. The other features are obtained from the DCT coefficients of image patches. The Discrete Cosine Transform (DCT) is a powerful transform for feature extraction: features like color, luminance and texture are extracted from the DCT coefficients. The depth saliency is calculated, and a Gaussian model is applied in obtaining the feature map. Fusing all the feature maps then helps in constructing the final saliency map.
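The pixel-unit definition of disparity described above can be illustrated with a simple block-matching sketch. This is not the paper's implementation; the window size, search range, and sum-of-absolute-differences (SAD) matching cost are assumptions chosen for illustration:

```python
import numpy as np

def disparity_map(left, right, max_disp=16, win=4):
    """Block matching: for each pixel of the left (grayscale) image, find the
    horizontal shift d of the best-matching window in the right image.
    The winning shift, measured in pixels, is the binocular disparity."""
    h, w = left.shape
    disp = np.zeros((h, w), dtype=np.float32)
    L = np.pad(left.astype(np.float32), win, mode="edge")
    R = np.pad(right.astype(np.float32), win, mode="edge")
    size = 2 * win + 1
    for y in range(h):
        for x in range(w):
            patch = L[y:y + size, x:x + size]
            best_cost, best_d = np.inf, 0
            for d in range(min(max_disp, x) + 1):
                cand = R[y:y + size, x - d:x - d + size]
                cost = np.abs(patch - cand).sum()  # SAD matching cost
                if cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y, x] = best_d
    return disp
```

A real system would add the noise and boundary checks the paper mentions later (and richer costs such as CSAD/CGRAD); this sketch only shows how disparity in pixels falls out of a window search.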
Saliency detection has numerous applications. One such application is salient object segmentation; content-aware retargeting, visual quality assessment, visual coding, 3D video coding and 3D rendering are some of the others. The automatically detected saliency region serves many different purposes in image processing. For example, saliency detection is used in image compression to encode salient regions at high quality and to increase the compression rate for non-salient regions. As another example, a short summary video can be produced automatically by selecting important shots and scenes from a longer video.

II. RELATED WORK
The visual attention mechanism has two types, bottom-up and top-down. The bottom-up approach is a perception process that selects salient regions in natural scenes automatically. The top-down approach is a cognitive, task-dependent process affected by the task being performed. For 2D multimedia applications, Jonathan Harel proposed Graph-Based Visual Saliency (GBVS) [2]. GBVS consists of two main steps: forming activation maps on certain feature channels, and normalizing them in a way that highlights conspicuity and admits combination with other maps. Another model, by Xiaodi Hou and Liqing Zhang, proposed a simple method for visual saliency detection [4]. This model is independent of features, categories, or other forms of prior information about the objects. It first analyzes the log spectrum of an input image, extracts the spectral residual of the image in the spectral domain, and then uses a fast method to construct the corresponding saliency map in the spatial domain. Based on this model,
International Journal of Engineering Research & Technology (IJERT) ISSN: 2278-0181 www.ijert.org IJERTV4IS080334 (This work is licensed under a Creative Commons Attribution 4.0 International License.) Vol.
4 Issue 08, August-2015 323
Chenlei Guo and Liming Zhang proposed a saliency detection algorithm based on the phase spectrum, in which the saliency map is calculated by the inverse Fourier transform of a constant amplitude spectrum and the original phase spectrum [14]. Christel Chamaret and colleagues studied problems of 3D processing such as disparity management and its impact when viewing a 3D scene on stereoscopic screens; in their work, the 3D experience is improved by applying effects related to regions of interest (ROI) [20]. Potapova introduced a 3D saliency detection model for robotics tasks by incorporating top-down cues into bottom-up saliency detection [12]. Later, Wang proposed a computational model of visual attention for 3D images by extending traditional 2D saliency detection methods; in [13], the authors provide a public database with the ground truth of eye-tracking data. From the above studies, 2D saliency detection makes use of features like color, luminance and texture only, whereas for 3D saliency detection depth is the major feature. Thus a simple approach is proposed for both 3D images and videos that takes the depth feature into account.

III. SYSTEM MODEL
Fig. 1. System model
The system model is depicted in Fig. 1. The image is given as the input, and color, luminance and texture are extracted from the left and right images. The depth map is constructed from the difference of disparities between the left and right images, and the feature map is then constructed from the depth map. Using all the feature maps, the final saliency map is constructed. The system consists of three phases.

A. Feature Extraction
This phase consists of three steps: (1) conversion to the YCbCr color space, (2) DCT calculation, and (3) depth map calculation. YCbCr is a family of color spaces and is used as part of the color pipeline in video and digital photography systems. The first step is performed to extract features like color, texture and luminance.
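The steps just listed can be sketched as follows. This is a minimal illustration, not the paper's code: the full-range BT.601 YCbCr conversion, the use of block means as DC terms (for an orthonormal 8 x 8 DCT the DC coefficient is n times the block mean), and the exponential form of the Gaussian weight are assumptions; g = 20 follows the text, and the depth channel (omitted here, it comes from the disparity map) would be processed the same way and enter the fusion of Eq. (6):

```python
import numpy as np

def rgb_to_ycbcr(img):
    """Full-range ITU-R BT.601 conversion (one common convention)."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    y  = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr

def block_dc(channel, n=8):
    """DC coefficient of each n x n block; for an orthonormal 2-D DCT this
    equals n times the block mean, so block means are sufficient here."""
    h, w = channel.shape
    blocks = channel[:h - h % n, :w - w % n].reshape(h // n, n, w // n, n)
    return n * blocks.mean(axis=(1, 3))

def feature_map(dc, g=20.0):
    """Eqs. (1)-(5): saliency of patch i is the sum over patches j of the
    absolute DC difference weighted by a Gaussian of their distance."""
    H, W = dc.shape
    ys, xs = np.mgrid[0:H, 0:W]
    pos = np.stack([ys.ravel(), xs.ravel()], 1).astype(float)  # patch-grid units
    dist = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=2)
    csf = np.exp(-dist ** 2 / (2 * g ** 2)) / (g * np.sqrt(2 * np.pi))  # Eq. (1)
    diff = np.abs(dc.ravel()[:, None] - dc.ravel()[None, :])
    sal = (diff * csf).sum(axis=1).reshape(H, W)
    return sal / (sal.max() + 1e-12)  # normalized feature map

# Fusion, Eq. (6): Finalsal = (Ysal + Cbsal + Crsal + Dsal) / 4,
# where Dsal would come from the stereo depth map processed identically.
```

For an RGB image `img`, `feature_map(block_dc(rgb_to_ycbcr(img)[0]))` gives the luminance map Ysal; the Cb, Cr and depth channels are handled identically and averaged.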
Y represents the luminance, and Cb and Cr are the two color components. The given RGB image is converted to the YCbCr color space. In the next step, the DCT coefficients of the YCbCr color space are calculated. The DCT coefficients give features like color, texture and luminance: the DCT coefficients of Y give the luminance feature, the DC coefficients of Cb and Cr give the color feature, and the texture feature is obtained from the AC coefficients of the Y component. Finally, the depth feature is calculated in this phase. The left and right images of the given stereo pair are converted to gray scale. The disparity between the left and right image is calculated by sliding the images across each other to get a high-confidence disparity map. Then the CSAD (Cost of Sum of Absolute Differences) and CGRAD (Cost of Gradient of Absolute Differences) are calculated, and a gradient filter is used to extract the feature signatures. The final depth map is calculated by checking the disparities for noise and checking the boundary to ensure that the disparities are correctly lined up.

B. Depth Saliency Calculation
The depth map is taken as the input for feature map calculation. First, the depth map is divided into 8 x 8 blocks and the DC coefficients of the image are obtained. Then the distance between the image patches is calculated and the Gaussian weight is computed as

Csf = (1 / (g * sqrt(2*pi))) * exp(-dist(i,j)^2 / (2*g^2))    (1)

where Csf represents the Gaussian weight, dist(i,j) represents the spatial distance between the image patches i and j, and g is the Gaussian kernel parameter, which is set to 20. The depth saliency Dsal is calculated from rcdiff and Eqn. 1, that is,

Dsal = Σ(Σ rcdiff * Csf)    (2)

where rcdiff is the absolute change in the DC coefficients of the image. Then, after normalizing Eqn. 2, the depth saliency is obtained.

C. Saliency Estimation from Feature Map Fusion
The feature maps are calculated by Eq.
(1). That is, the feature maps of luminance (Ysal) and the two color components are found by

Crsal = Σ(Σ(Crdiff * Csf))    (3)
Cbsal = Σ(Σ(Cbdiff * Csf))    (4)
Ysal = Σ(Σ(ydiff * Csf))    (5)

where Crsal, Cbsal and Ysal are the feature maps of the two color components and the luminance; Crdiff is the absolute change in Cr, Cbdiff is the absolute change in Cb, and ydiff is the absolute change in the Y component. The final saliency is then calculated by fusing Eqn. 3, Eqn. 4 and Eqn. 5:

Finalsal = (Ysal + Cbsal + Crsal + Dsal) / 4    (6)

The final saliency is then enhanced by applying the center bias factor.

IV. CONCLUSION
An approach for finding the saliency regions in 3D images and videos is proposed. Features like color, luminance and texture are extracted by converting RGB to the YCbCr color space, and depth is extracted from the disparities between the left and right images. The depth saliency is estimated based on the energy contrast weighted by a Gaussian model of the spatial distances between image patches. From the depth saliency the feature maps are calculated, and fusing all the feature maps constructs the final saliency map. The proposed saliency detection enhances stereoscopic applications and is quite simple to implement.

REFERENCES
[1] L. Itti, C. Koch, and E. Niebur, “A model of saliency-based visual attention for rapid scene analysis,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 20, no. 11, pp. 1254-1259, Nov. 1998.
[2] J. Harel, C. Koch, and P. Perona, “Graph-based visual saliency,” in Proc. Adv. NIPS, 2006, pp. 545-552.
[3] N. D. Bruce and J. K. Tsotsos, “Saliency based on information maximization,” in Proc. Adv. NIPS, 2006, pp. 155-162.
[4] X. Hou and L.
Zhang, “Saliency detection: A spectral residual approach,” in Proc. IEEE Int. Conf. Comput. Vis. Pattern Recognit., Jun. 2007, pp. 1-8.
[5] Y. Fang, Z. Chen, W. Lin, and C.-W. Lin, “Saliency detection in the compressed domain for adaptive image retargeting,” IEEE Trans. Image Process., vol. 21, no. 9, pp. 3888-3901, Sep. 2012.
[6] V. Gopalakrishnan, Y. Hu, and D. Rajan, “Salient region detection by modeling distributions of color and orientation,” IEEE Trans. Multimedia, vol. 11, no. 5, pp. 892-905, Aug. 2009.
[7] S. Goferman, L. Zelnik-Manor, and A. Tal, “Context-aware saliency detection,” in Proc. IEEE Int. Conf. Comput. Vis. Pattern Recognit., Jun. 2010.
[8] J. Yan, J. Liu, Y. Li, Z. Niu, and Y. Liu, “Visual saliency detection via rank-sparsity decomposition,” in Proc. IEEE 17th ICIP, Sep. 2010, pp. 1089-1092.
[9] Z. Lu, W. Lin, X. Yang, E. Ong, and S. Yao, “Modeling visual attention's modulatory aftereffects on visual sensitivity and quality evaluation,” IEEE Trans. Image Process., vol. 14, no. 11, pp. 1928-1942, Nov. 2005.
[10] A. Torralba, A. Oliva, M. S. Castelhano, and J. M. Henderson, “Contextual guidance of eye movements and attention in real-world scenes: The role of global features in object search,” Psychol. Rev., vol. 113, no. 4, pp. 766-786, 2006.
[11] Y. Fang, W. Lin, C. T. Lau, and B.-S. Lee, “A visual attention model combining top-down and bottom-up mechanisms for salient object detection,” in Proc. IEEE ICASSP, May 2011, pp. 1293-1296.
[12] E. Potapova, M. Zillich, and M. Vincze, “Learning what matters: Combining probabilistic models of 2D and 3D saliency cues,” in Proc. 8th Int. Comput. Vis. Syst., 2011, pp. 132-142.
[13] J. Wang, M. Perreira Da Silva, P. Le Callet, and V. Ricordel, “Computational model of stereoscopic 3D visual saliency,” IEEE Trans. Image Process., vol. 22, no. 6, pp. 2151-2165, Jun. 2013.
[14] C. Guo and L.
Zhang, “A novel multi-resolution spatiotemporal saliency detection model and its applications in image and video compression,” IEEE Trans. Image Process., vol. 19, no. 1, pp. 185-198, Jan. 2010.
[15] A. Treisman and G. Gelade, “A feature-integration theory of attention,” Cognitive Psychol., vol. 12, no. 1, pp. 97-136, 1980.
[16] J. M. Wolfe, “Guided search 2.0: A revised model of visual search,” Psychonomic Bull. Rev., vol. 1, no. 2, pp. 202-238, 1994.
[17] J. M. Wolfe and T. S. Horowitz, “What attributes guide the deployment of visual attention and how do they do it?” Nature Rev., Neurosci., vol. 5, no. 6.
[18] N. Bruce and J. Tsotsos, “An attentional framework for stereo vision,” in Proc. 2nd IEEE Canadian Conf. Comput. Robot Vis., May 2005.
[19] Y. Zhang, G. Jiang, M. Yu, and K. Chen, “Stereoscopic visual attention model for 3D video,” in Proc. 16th Int. Conf. Adv. Multimedia Model., 2010.
[20] C. Chamaret, S. Godeffroy, P. Lopez, and O. Le Meur, “Adaptive 3D rendering based on region-of-interest,” Proc. SPIE, vol. 7524, Feb. 2010.

work_ezdcgxlt7vfgrgzovwdenda3ky ---- Patients' preferences for how pre-operative patient information should be delivered

Abstracts / International Journal of Surgery 10 (2012) S1–S52, S36
ABSTRACTS
Conclusions: In our study, thyroplasty as a method for vocal cord medialisation led to improved voice quality post-operatively and to good patient satisfaction.
0363: INSERTION OF A SECOND NASAL PACK AS A PROGNOSTIC INDICATOR OF EMERGENCY THEATRE REQUIREMENT IN EPISTAXIS PATIENTS
Edward Ridyard 1, Vinay Varadarajan 2, Indu Mitra 3.
1 University of Manchester, Manchester, UK; 2 North West Higher Surgical Training Scheme, North West, UK; 3 Manchester Royal Infirmary, Manchester, UK
Aim: To quantify the significance of second nasal pack insertion in epistaxis patients, as a measure of requirement for theatre.
Method: A one-year retrospective analysis of 100 patient notes was undertaken. After application of exclusion criteria (patients treated as outpatients, inappropriate documentation and patients transferred from peripheral hospitals), a total of n=34 patients were included. Of the many variables measured, specific credence was given to the requirement of second packing and the requirement for definitive management in theatre.
Results: Of all patients, 88.5% required packing. A further 25% (7/28) of this group had a second pack for cessation of recalcitrant haemorrhage. Of the second pack group, 85.7% (6/7) ultimately required definitive management in theatre. A one-sample t-test showed a statistically significant correlation between patients with a second nasal pack and requirement for theatre (p<0.001).
Conclusions: Indications for surgical management for epistaxis vary from hospital to hospital. The results of this study show that insertion of a second pack is a very good indicator of requirement for definitive management in theatre.
0365: MANAGEMENT OF LARYNGEAL CANCERS: GRAMPIAN EXPERIENCE
Therese Karlsson 3, Muhammad Shakeel 1, Peter Steele 1, Kim Wong Ah-See 1, Akhtar Hussain 1, David Hurman 2. 1 Department of Otolaryngology-Head and Neck Surgery, Aberdeen Royal Infirmary, Aberdeen, UK; 2 Department of Oncology, Aberdeen Royal Infirmary, Aberdeen, UK; 3 University of Aberdeen, Aberdeen, UK
Aims: To determine the efficacy of our management protocol for laryngeal cancer and compare it to the published literature.
Method: Retrospective study of a prospectively maintained departmental oncology database over 10 years (1998-2008).
Data collected include demographics, clinical presentation, investigations, management, surveillance, loco-regional control and disease-free survival.
Results: A total of 225 patients were identified; 183 were male (82%) and 42 female (18%). The average age was 67 years. There were 81 (36%) patients with Stage I disease, 54 (24%) with Stage II, 30 (13%) with Stage III and 60 (27%) with Stage IV disease. Out of 225 patients, 130 (96%) of Stage I and II carcinomas were treated with radiotherapy (55Gy in 20 fractions). Patients with stage III and IV carcinomas received combined treatment. Overall three-year survival for Stage I, II, III and IV was 91%, 65%, 63% and 45% respectively. Corresponding recurrence rates were 3%, 17%, 17% and 7%; 13 patients required a salvage total laryngectomy due to recurrent disease.
Conclusion: The vast majority of our laryngeal cancer population is male (82%) and smokers. Primary radiotherapy provides comparable loco-regional control and survival for early stage disease (I & II). Advanced stage disease is also equally well controlled with multimodal treatment.
0366: RATES OF RHINOPLASTY PERFORMED WITHIN THE NHS IN ENGLAND AND WALES: A 10-YEAR RETROSPECTIVE ANALYSIS
Luke Stroman, Robert McLeod, David Owens, Steven Backhouse. University of Cardiff, Wales, UK
Aim: To determine whether financial restraint and national health cutbacks have affected the number of rhinoplasty operations done within the NHS both in England and in Wales, looking at varying demographics.
Method: Retrospective study of the incidence of rhinoplasty in Wales and England from 1999 to 2009 using OPCS4 codes E025 and E026, using the electronic health databases of England (HesOnline) and Wales (PEDW). Extracted data were explored for total numbers, and variation with respect to age and gender for both nations.
Results: 20222 and 1376 rhinoplasties were undertaken over the 10-year study period in England and Wales respectively.
A statistical gender bias was seen in uptake of rhinoplasty, with women more likely to undergo the surgery in both national cohorts (Wales, p<0.001 and England, p<0.001). Linear regression analysis suggests a statistical drop in numbers undergoing rhinoplasty in England (p<0.001) but not in Wales (p>0.05).
Conclusion: Rhinoplasty is a common operation in both England and Wales. The current economic constraint, combined with differences in funding and corporate ethos between the two sister NHS organisations, has led to a statistical reduction in numbers undergoing rhinoplasty in England but not in Wales.
0427: PATIENTS' PREFERENCES FOR HOW PRE-OPERATIVE PATIENT INFORMATION SHOULD BE DELIVERED
Jonathan Bird, Venkat Reddy, Warren Bennett, Stuart Burrows. Royal Devon and Exeter Hospital, Exeter, Devon, UK
Aim: To establish patients' preferences for preoperative patient information and their thoughts on the role of the internet.
Method: Adult patients undergoing elective ENT surgery were invited to take part in this survey on the day of surgery. Participants completed a questionnaire recording patient demographics, operation type, quality of the information leaflet they had received, access to the internet and whether they would be satisfied accessing pre-operative information online.
Results: Respondents consisted of 52 males and 48 females. 16% were satisfied to receive the information online only, 24% wanted a hard copy only and 60% wanted both. Younger patients are more likely to want online information, in stark contrast to elderly patients, who preferred a hard copy. Patients aged 50-80 years would be most satisfied with paper and internet information, as they were able to pass on the web link to friends and family who wanted to know more. 37% of people were using the internet to further research information on their condition/operation. However, these people wanted information on reliable online sources to use.
Conclusions: ENT surgeons should be alert to the appetite for online information and identify links that are reliable to share with patients.
0510: ENHANCING COMMUNICATION BETWEEN DOCTORS USING DIGITAL PHOTOGRAPHY. A PILOT STUDY AND SYSTEMATIC REVIEW
Hemanshoo Thakkar, Vikram Dhar, Tony Jacob. Lewisham Hospital NHS Trust, London, UK
Aim: The European Working Time Directive has resulted in the practice of non-resident on-calls for senior surgeons across most specialties. Consequently, the majority of communication in the out-of-hours setting takes place over the telephone, placing a greater emphasis on verbal communication. We hypothesised this could be improved with the use of digital images.
Method: A pilot study involving a junior doctor and senior ENT surgeons. Several clinical scenarios were discussed over the telephone, complemented by an image; the junior doctor was blinded to this. A questionnaire was completed which assessed the confidence of the surgeon in the diagnosis and management of the patient. A literature search was conducted using PubMED and the Cochrane Library. Keywords used: "mobile phone", "photography", "communication" and "medico-legal".
Results & Conclusions: In all the discussed cases, the use of images either maintained or enhanced the degree of the surgeon's confidence. The use of mobile-phone photography as a means of communication is widespread; however, its medico-legal implications are often not considered. Our pilot study shows that such means of communication can enhance patient care. We feel that a secure means of data transfer safeguarded by law should be explored as a means of implementing this into routine practice.
0533: THE ENT EMERGENCY CLINIC AT THE ROYAL NATIONAL THROAT, NOSE AND EAR HOSPITAL, LONDON: COMPLETED AUDIT CYCLE
Ashwin Algudkar, Gemma Pilgrim.
Royal National Throat, Nose and Ear Hospital, London, UK
Aims: Identify the type and number of patients seen in the ENT emergency clinic at the Royal National Throat, Nose and Ear Hospital, implement changes to improve the appropriateness of consultations and management, and then close the audit. Also set up GP correspondence.
Method: First-cycle data was collected retrospectively over 2 weeks. Information was captured on patient volume, referral source, consultation

work_ezxf2vt2azbddhmjplbxpyupui ---- [PDF] Incorporating color into integrative taxonomy: analysis of the varied tit (Sittiparus varius) complex in East Asia. | Semantic Scholar

DOI:10.1093/sysbio/syu016 Corpus ID: 2721445
Incorporating color into integrative taxonomy: analysis of the varied tit (Sittiparus varius) complex in East Asia.
@article{McKay2014IncorporatingCI, title={Incorporating color into integrative taxonomy: analysis of the varied tit (Sittiparus varius) complex in East Asia.}, author={Bailey D. McKay and H. Mays and Cheng-Te Yao and Dongmei Wan and H. Higuchi and I. Nishiumi}, journal={Systematic biology}, year={2014}, volume={63 4}, pages={ 505-17 } }
Bailey D. McKay, H. Mays, +3 authors I.
Nishiumi. Published 2014, Systematic Biology.
Species designations are critically important scientific hypotheses that serve as the foundational units in a wide range of biological subdisciplines. A growing realization that some classes of data fail to delimit species under certain conditions has led to increasingly more integrative taxonomies, whereby species discovery and hypothesis testing are based on multiple kinds of data (e.g., morphological, molecular, behavioral, ecological, etc.). However, although most taxonomic descriptions…
26 Citations:
- Hidden diversity within the lizard genus Liolaemus: Genetic vs morphological divergence in the L. rothi complex (Squamata: Liolaeminae). M. Olave, L. Avila, J. Sites, M. Morando. Molecular Phylogenetics and Evolution, 2017.
- Taxonomic revision of the Sylvarum group of bumblebees using an integrative approach. Nicolas Brasero, Baptiste Martinet, Denis Michez, T. Lecocq, I. Valterová, P. Rasmont. 2020.
- Integrative taxonomy helps to reveal the mask of the genus Gynandropaa (Amphibia: Anura: Dicroglossidae). Y. Huang, J. Hu, B.
Wang, Zhaobin Song, Caiquan Zhou, Jianping Jiang. Integrative Zoology, 2016.
- Two Colors, One Species: The Case of Melissodes nigroaenea (Apidae: Eucerini), an Important Pollinator of Cotton Fields in Brazil. Carolina Grando, N. Amon, +5 authors M. I. Zucchi. 2018.
- Revising the taxonomy of Proceratophrys Miranda-Ribeiro, 1920 (Anura: Odontophrynidae) from the Brazilian semiarid Caatinga: Morphology, calls and molecules support a single widespread species. S. Mângia, E. Oliveira, D. Santana, R. Koroiva, F. Paiva, A. Garda. 2020.
- Species Delimitation of the White-Tailed Rubythroat Calliope pectoralis Complex (Aves, Muscicapidae) Using an Integrative Taxonomic Approach. Y. Liu, Guoling Chen, +8 authors P. Alström. 2016.
- Morphological Variation, Niche Divergence, and Phylogeography of Lizards of the Liolaemus lineomaculatus Section (Liolaemini) from Southern Patagonia. M. F. Breitman, M. Bonino, J. Sites, L. Avila, M. Morando. 2015.
- Different roads lead to Rome: Integrative taxonomic approaches lead to the discovery of two new lizard lineages in the Liolaemus montanus group (Squamata: Liolaemidae). C. Aguilar, Perry L. Wood, M. Belk, Mike Duff, J. Sites. 2016.
- Issues and Perspectives in Species Delimitation using Phenotypic Data: Atlantean Evolution in Darwin's Finches. C. Cadena, F. Zapata, Iván Jiménez. Systematic Biology, 2018.
- A multilocus molecular phylogeny for the avian genus Liocichla (Passeriformes: Leiothrichidae: Liocichla). H. Mays, Bailey D. McKay, +4 authors F. Lei. Avian Research, 2015.
References (showing 1-10 of 94):
- The integrative future of taxonomy. J. Padial, A. Miralles, I. De la Riva, M. Vences. Frontiers in Zoology, 2009.
- The role of subspecies in obscuring avian biological diversity and misleading conservation policy. R. Zink. Proceedings of the Royal Society of London, Series B: Biological Sciences, 2004.
- Phenotypic Variation is Clinal in the Yellow-Throated Warbler. Bailey D. McKay. 2008.
- Sequence-based species delimitation for the DNA taxonomy of undescribed insects. J. Pons, T. Barraclough, +6 authors A. Vogler. Systematic Biology, 2006.
- A complete multilocus species phylogeny of the tits and chickadees (Aves: Paridae). Ulf S. Johansson, J. Ekman, +4 authors Per G. P. Ericson. Molecular Phylogenetics and Evolution, 2013.
- Species Delimitation: A Decade After the Renaissance. Arley Camargo, Jack Jr. Sites. 2013.
- SpedeSTEM: a rapid and accurate method for species delimitation. Daniel D. Ence, B. Carstens. Molecular Ecology Resources, 2011.
- Species delimitation: new approaches for discovering diversity. J. Wiens. Systematic Biology, 2007.
- Integrative taxonomy: a multisource approach to exploring biodiversity. B. C. Schlick-Steiner, F. M. Steiner, B. Seifert, C. Stauffer, E. Christian, R.
work_f2ooi3argbhcrccetpff7eurhq ---- ARCHAEOLOGICAL DRAWING - volume 10 / n. 19 - December 2017 - ISSN 1828-5961 - DISEGNARECON - http://disegnarecon.univaq.it

Pablo Rodríguez-Navarro is a Technical Architect, Art Historian, and Building Engineer, and Professor of Photography, Photogrammetry, and Architectural Survey at the Polytechnic University of Valencia (Spain). He develops his research at the Institute of Heritage Restoration, where he leads the research group LevARQ.

REFLECTIONS ON CURRENT ARCHAEOLOGICAL SURVEYING

Archaeology is the science of studying the arts, monuments, and ancient objects through their remains. Since its inception, the methodology used to carry out this study has been based on the detailed drawing of these objects, to which other types of physical and chemical analysis were incorporated.
In this way, we can say that all the archaeological objects and sites that we wish to analyze are drawn for their better knowledge, restoration, and dissemination. However, although we can say that modern archaeology has existed as a discipline since the 19th century, it has been during the last decade, with the maturity of digital applications, that archaeological drawing has experienced a real revolution. Already towards the end of the 20th century, with the appearance of computerized drawing and the incipient digital photography, we witnessed some changes, although not substantial ones, with these new technologies coexisting with manual surveying techniques.

Today, archaeological drawing has totally changed, fundamentally due to two factors. On the one hand, the maturity of digital photography, combined with the recovery of quality lenses for photographic cameras and the development of algorithms that allow obtaining high-quality, accurate 3D models from photographic sequences (structure from motion, SfM). On the other hand, we have access to 3D laser scanners that incorporate significantly better internal cameras, increasing their speed and precision, decreasing their size, incorporating automatic registration of point clouds, and lowering their price. Finally, and cutting across both, we could include the emergence of drones, which are able to place cameras and scanners in positions once unimaginable.

However, all changes require a great effort and breadth of vision, combining passions and aversions at the same time.
Advances in digital applications are not free from these mixed feelings. To prove it, we only have to recall the schism that resulted from the advent of computer-aided drawing (CAD), which was dismissed by many, again and again, for years, especially in the academic domain, where its defenders seemed to be part of a revolution that would do away with the graphic quality of our surveys and, even more, with ourselves. This anachronism is still present in current archaeological drawing, although we can say that it is no longer significant and will shortly disappear. In truth, new technologies are not exclusive; they provide new possibilities that sometimes coexist with or rely on traditional methods. We work methodologically to achieve our objectives and to improve our research, making use of all available resources.

A few years ago, I accepted the assignment to edit the 12th issue of DISEGNARECON, whose motto was "Drawing with Digital Photography". When I wrote the editorial, I entitled it Some Reflections on "Drawing with Digital Photography". Today some of those thoughts come to mind, and although only five years have passed, which seems like a short time, the speed at which new technologies advance makes it enough to look back and see what has happened since. In that editorial, it was pointed out that 3D surveying through SfM would undoubtedly give rise to a revolution, provoking a true resurgence of photogrammetry. This new staging of digital photogrammetry favored not only the achievement of hyper-realistic three-dimensional models, but also the popularization of these methods due to their extreme simplicity and economy. However, it should not be concealed that this apparent ease has in some cases invited the intrusion of inexperienced people, who have contributed to its devaluation and have generated a certain chaos in the language used and in the methodological discourse, presenting results to society that fall far short of what we can achieve today.
As far as the active sensors that could be eclipsed by these advances are concerned, I also pointed out in that editorial how they should lighten their equipment, gain speed, lower their cost, and, of course, make their registration software more user-friendly. In spite of these expectations of improvement, I stated without a doubt that "we cannot understand any process of graphic documentation without this important resource", in clear reference, in this case, to 3D digital surveying. Finally, and referring to that same editorial, I pointed out the potential of the incipient use of drones, at a time when their use was not yet legislated.

Currently, the process of archaeological drawing is undoubtedly the result of obtaining a high-quality 3D model, easily managed and with great dimensional precision, which includes a hyper-realistic appearance. Today it is essential to digitize the setting, the building, the object, in order to interrogate it, study it, section it, compare it, analyze it, reproduce it, and give it many other uses; even some uses that will not be given at the time, but that will surely be possible with the data obtained from the digital model.

Fig. 1: 3D model obtained by SfM photogrammetry, from terrestrial and aerial pictures.

Archaeology, and especially archaeological site excavations, has a special characteristic conditioning its graphic surveying: we carry out a survey of a setting that will then disappear with the course of the excavation itself. This fact is very important, since we have only one opportunity to take the data necessary for the survey, these data being the only source for its study.
In this way, keeping it as it is, virtually, is the optimal solution, which also allows us to obtain three-dimensional models of the entire excavation process, being able to go back and forth as needed. Furthermore, we might even musealize not only the findings, but the archaeological site itself, with the entire excavation process.

In this issue of DISEGNARECON, we may appreciate the diversity of methodologies that today bring us closer to archaeological drawing to cover diverse objectives, revealing not only the attainment of the model, but also the importance of the historical graphic documents that are part of that process of the archaeological fact. In addition, the case studies presented, the different methodologies used, the applications of the data obtained, and also the reflections made by many authors in their articles comprise an attractive issue of this journal that, as always, upholds a first-rate scientific level.

As a member of the DISEGNARECON Editorial Committee, I proposed a few years ago the inclusion of an interview at the beginning of each issue, which would gather a more personal, more profound view of people who would surely be of interest to the readers of our disciplinary field; in other words, a novel format incorporating a new dynamic into the magazine in these times of growth, dissemination, and indexing. The proposal to interview Alberto Pratelli perfectly meets the objectives intended to be addressed by this new manner of opening the journal, so I take this opportunity to congratulate both interviewer and interviewee, Roberto Mingucci and Alberto Pratelli, for their excellent contribution.

Lastly, I could not end without thanking Roberto Mingucci and Mario Centofanti, Journal Editors, as well as Stefano Brusaporci, Journal Manager, for the trust placed in me through the invitation to edit this issue.
For me, it has been a truly enriching and pleasant experience to work for a journal with the long and highly esteemed track record of DISEGNARECON, as well as with the outstanding professionals who make it possible.

Fig. 2: Section of the 3D model. Integration of 3D laser scanning in the underground part with SfM photogrammetry in the upper part.
work_f366gf447jagtjou6iht45ybma ---- 30-year International Pediatric Craniofacial Surgery Partnership: Evolution from the “Third World” Forward D ow nloaded from https://journals.lw w .com /prsgo by B hD M f5eP H K av1zE oum 1tQ fN 4a+kJLhE ZgbsIH o4X M i0hC yw C X 1A W nY Q p/IlQ rH D 3ltH 2fbM A pE 2R Q D 4Y 0W cG tLY V yk25p/W E dw ppC 3A W Q ef5N uIw 8oC 0JA == on 05/27/2020 Downloadedfromhttps://journals.lww.com/prsgobyBhDMf5ePHKav1zEoum1tQfN4a+kJLhEZgbsIHo4XMi0hCywCX1AWnYQp/IlQrHD3ltH2fbMApE2RQD4Y0WcGtLYVyk25p/WEdwppC3AWQef5NuIw8oC0JA==on05/27/2020 www.PRSGlobalOpen.com 1 Plastic surgeons have long helped treat the global burden of disease, particularly through cleft, burn, and trauma care.1–4 Care delivery is evolving from medical mission models to also in- clude emphasis on continuity of care,5 outcomes,6 cost-effectiveness,7,8 and integration into broader health services.9–12 Craniofacial surgery poses differ- ent demands, risks, and challenges compared with cleft or other forms of plastic surgery. Requisite ad- junct medical services are more complex—including Received for publication August 5, 2015; accepted January 29, 2016. Copyright © 2016 The Authors. Published by Wolters Kluwer Health, Inc. on behalf of The American Society of Plastic Surgeons. All rights reserved. This is an open-access article distributed under the terms of the Creative Commons Attribution-Non Commercial-No Derivatives License 4.0 (CCBY-NC-ND), where it is permissible to download and share the work provided it is properly cited. The work cannot be changed in any way or used commercially. DOI: 10.1097/GOX.0000000000000650 From the *Division of Plastic Surgery, Children’s Hospital of Philadelphia and the University of Pennsylvania, Philadelphia, Pa.; and †Division of Pediatric Surgery, Uniwersytecki Szpital Dzieciecy (University Children’s Hospital) and Jagellonion University, Krakow, Poland. 
Background: Craniofacial diseases constitute an important component of the surgical disease burden in low- and middle-income countries. The con- sideration to introduce craniofacial surgery into such settings poses differ- ent questions, risks, and challenges compared with cleft or other forms of plastic surgery. We report the evolution, innovations, and challenges of a 30-year international craniofacial surgery partnership. Methods: We retrospectively report a partnership between surgeons at the Uniwersytecki Szpital Dzieciecy in Krakow, Poland, and a North American craniofacial surgeon. We studied patient conditions, treatment patterns, and associated complications, as well as program advancements and limita- tions as perceived by surgeons, patient families, and hospital administrators. Results: Since partnership inception in 1986, the complexity of cases per- formed increased gradually, with the first intracranial case performed in 1995. In the most recent 10-year period (2006–2015), 85 patients have been evaluated, with most common diagnoses of Apert syndrome, Crouzon syndrome, and single-suture craniosynostosis. In the same period, 55 major surgical procedures have been undertaken, with LeFort III midface dis- traction, posterior vault distraction, and frontoorbital advancement per- formed most frequently. Key innovations have been the employment of craniofacial distraction osteogenesis, the use of Internet communication and digital photography, and increased understanding of how craniofacial morphology may improve in the absence of surgical intervention. Ongoing challenges include prohibitive training pathways for pediatric plastic sur- geons, difficulty in coordinating care with surgeons in other institutions, and limited medical and material resources. Conclusion: Safe craniofacial surgery can be introduced and sustained in a resource-limited setting through an international partnership. 
(Plast Reconstr Surg Glob Open 2016;4:e671; doi: 10.1097/GOX.0000000000000650; Published online 6 April 2016.) Jordan W. Swanson, MD, MSc* Jan Skirpan, MD† Beata Stanek, MD† Maciej Kowalczyk, MD† Scott P. Bartlett, MD*† 30-year International Pediatric Craniofacial Surgery Partnership: Evolution from the “Third World” Forward Disclosure: The authors have no financial interest to declare in relation to the content of this article. The Article Processing Charge was paid for by the authors. Craniofacial Surgery Partnership Swanson et al. xxx xxx 4 Manjula Plastic & Reconstructive Surgery-Global Open 2016 4 Special Topic 10.1097/GOX.0000000000000650 29January2016 5August2015 © 2016 The Authors. Published by Wolters Kluwer Health, Inc. on behalf of The American Society of Plastic Surgeons. All rights reserved. Pediatric/Craniofacial SPeCial ToPiC http://creativecommons.org/licenses/by-nc-nd/4.0/ http://creativecommons.org/licenses/by-nc-nd/4.0/ http://creativecommons.org/licenses/by-nc-nd/4.0/ PRS Global Open • 2016 2 intensive care, skilled anesthesia, neurosurgery, and blood transfusion capabilities—and continuity of pa- tient follow-up is critical. Noordhoff13 and Sharma14 have described their experience with and imperatives for establishing high-volume craniofacial centers in highly populous low-income countries. Less clear is how to develop craniofacial surgery through partner- ships or exchange programs that might be more suit- able to smaller populations. We report an international craniofacial surgery partnership based at the Uniwersytecki Szpital Dzieciecy (University Children’s Hospital) in Kra- kow, Poland, which is now in its 30th year. This ar- ticle describes the evolution of the program, patients treated, and innovations both in the surgical model and surgical technology that have enabled it and critically addresses ongoing challenges. 
EVOLUTION OF A PARTNERSHIP

In 1986, a didactic medical conference in Krakow, Poland, convened by Project HOPE brought together American (S.P.B.) and Polish (J.S.) plastic surgeons. In operating together on several sundry plastic surgical cases the following week, disparities in standards of surgical care between their countries of origin were illuminated and discussed, and a mutual interest in future collaboration emerged. Over the following 10 years, the surgeons evaluated patients and operated together in Krakow once or twice annually, gradually increasing the complexity of cases. In 1995, after sufficient team training, familiarity, and equipment had been accumulated, the first intracranial correction of a congenital facial deformity was performed.

The Jagellonian University (established 1364 CE) in Krakow and the University of Pennsylvania in Philadelphia have together provided academic institutional support during the 30-year partnership. The American surgeon (S.P.B.) has made 68 trips to Krakow to conduct clinic or operate in conjunction with local Polish surgeon colleagues (J.S. and junior partner B.S.). His role has been primarily in an advisory and teaching capacity, through a standing appointment as a visiting Jagellonian University professor. The Polish surgeons have visited the University of Pennsylvania as observing surgeons 4 times, and ancillary staff members (nurses, hospital administrators) have made multiple visits as well. More recently, University of Pennsylvania craniofacial surgery fellows have joined in the collaboration, in an operative teaching capacity with Krakow faculty and residents. Ongoing relationships have developed with anesthesia and intensive care physicians in Krakow. From the outset, the exchange has emphasized mutual learning, cultural sensitivity and respect, and tailoring interventions appropriate to the patient needs and hospital context in Krakow.
PATIENTS TREATED

In the most recent 10-year period (2006–2015) for which data are available, 85 patients have been evaluated by the partnership, with the most common diagnoses of Apert syndrome, Crouzon syndrome, or single-suture craniosynostosis (Table 1). In the same period, 55 major surgical procedures have been undertaken in partnership, with LeFort III midface distraction, posterior vault distraction, and frontoorbital advancement performed most frequently (Table 2). A 17-year-old male with Crouzon syndrome presented with forehead as well as mid- and lower-face retrusion, having previously undergone only frontoorbital advancement (Fig. 1). His age of presentation and treatment course are representative of patients treated through the partnership (Figs. 2 and 3).

Table 1. Patient Conditions Evaluated by the Krakow Craniofacial Surgery Partnership in the Last 10 Years (2006–2015)

  Condition                                Patients
  Apert syndrome                           25
  Crouzon syndrome                         11
  Pfeiffer syndrome                        3
  Encephalocele                            2
  Craniofacial clefting                    5
  Craniofacial microsomia                  8
  Single-suture craniosynostosis           10
  Treacher-Collins syndrome                2
  Frontonasal dysplasia/hypertelorism      6
  Facial paralysis*                        4
  Pierre Robin sequence                    3
  Other                                    6
  Total                                    85

  *Includes congenital facial paralysis (without craniofacial microsomia) and patients with prior neoplasm (post-resection).

Table 2. Surgical Procedures Performed by the Krakow Craniofacial Surgery Partnership in the Last 10 Years (2006–2015)

  Procedure                                        Patients
  Frontoorbital advancement                        8
  LeFort III midface distraction advancement       11
  Monobloc distraction advancement                 3
  LeFort I maxillary advancement                   2
  Posterior vault distraction osteogenesis         9
  Four-wall box osteotomy                          1
  Cranial vault reconstruction                     5
  Mandibular distraction osteogenesis              5
  Orbital reconstruction with cranial bone graft   1
  Costochondral mandibular reconstruction          2
  Cranioplasty                                     2
  Other                                            6
  Total                                            55

During the most recent 10-year period (2006–2015), there have been 3 (5.5%) major complications requiring surgical reintervention. Two of these cases involved LeFort III distraction of patients with syndromic craniosynostosis and midface retrusion. One patient developed premature consolidation of the midface approximately 5 days into activation; this necessitated repeat midface advancement, which was performed with a shorter latency phase. The other patient experienced supratherapeutic distraction, with an activation phase lasting approximately 20 days, which arose from miscommunication between the surgical teams in Krakow and North America and the patient's family. This patient will require orthognathic corrective surgery with maxillary setback (and/or mandibular advancement) to attain occlusion. A third patient developed a postoperative abscess, which warranted return to the operating room for incision. There were also 3 (5.5%) minor complications during this period of known superficial surgical site infection/cellulitis treated with antibiotics. There have been no patient deaths during the 30-year partnership.

Fig. 1. Preoperative photographs of a 17-year-old boy with Crouzon syndrome presenting with forehead as well as mid- and lower-face retrusion, having previously undergone only frontoorbital advancement as a child.

Fig. 2. The patient was treated with monobloc fronto-facial osteotomy and distraction osteogenesis with an external halo. Serial radiographs and photographs were taken during activation and posted electronically for all team members to review in real time.

FINANCIAL SUPPORT AND HEALTH SYSTEM INTEGRATION

Financial support for this partnership originated with Project HOPE, which from 1986 to 1991 funded international team travel; travel was subsequently funded by the Center for Human Appearance at the University of Pennsylvania and other agencies.
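The procedure totals and complication rates reported for 2006–2015 are internally consistent; a minimal Python sketch cross-checking them (every figure is copied from the article's text and Table 2; the script is illustrative only):

```python
# Cross-check of the counts and rates reported in Patients Treated.
# Per-procedure counts from Table 2 (2006-2015).
procedures = {
    "Frontoorbital advancement": 8,
    "LeFort III midface distraction advancement": 11,
    "Monobloc distraction advancement": 3,
    "LeFort I maxillary advancement": 2,
    "Posterior vault distraction osteogenesis": 9,
    "Four-wall box osteotomy": 1,
    "Cranial vault reconstruction": 5,
    "Mandibular distraction osteogenesis": 5,
    "Orbital reconstruction with cranial bone graft": 1,
    "Costochondral mandibular reconstruction": 2,
    "Cranioplasty": 2,
    "Other": 6,
}

total = sum(procedures.values())  # matches the stated total of 55

major_complications = 3  # cases requiring surgical reintervention
minor_complications = 3  # superficial infection/cellulitis, antibiotics only

major_rate = round(100 * major_complications / total, 1)  # 5.5 (%)
minor_rate = round(100 * minor_complications / total, 1)  # 5.5 (%)

print(f"{total} procedures; major {major_rate}%, minor {minor_rate}%")
```

Both rates reproduce the 5.5% quoted in the text (3/55 = 5.45%, rounded to one decimal place).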
Metal internal fixation plates were donated periodically by the manufacturers, in particular when one manufacturer transitioned from stainless steel to titanium miniplates in the North American market (Synthes Inc., West Chester, Pa.), and internal and external distractors were reused (after cleaning, checking condition, and sterilizing) following initial use in North America. The University Children's Hospital is now able to procure a limited number of titanium fixation plates for cases; distractors are not yet affordable.

The post-Soviet collapse in the 1990s illuminated the mounting barriers to all forms of medical and surgical care for children in Poland. Although the emphasis of this partnership has been craniofacial surgery, it has also been leveraged as a vehicle to support other forms of surgical care, in what might be considered "horizontal integration" into surgical disease management.8 In 2005, the Children's Medical Foundation of Central and Eastern Europe was founded as a parallel 501(c)(3) organization to provide funding to well-managed yet underfunded hospitals of the region. Since inception, the Children's Medical Foundation of Central and Eastern Europe has spent $600,000 to directly fund equipment ranging from laparoscopic equipment to oxygenators, as well as thousands of hours of professional volunteer time.

Fig. 3. Postoperative photographs 2 years later, after completion of monobloc advancement and bilateral sagittal split osteotomy and mandibular advancement.

INNOVATIONS

Four key innovations either arose to strengthen the partnership or derived from it. First, craniofacial distraction in the early 2000s enabled a paradigm shift in the type and extent of surgeries performed.
Shortly after McCarthy's first description of mandibular distraction osteogenesis15 (and its early adoption in North America), we employed external mandibular distraction for micrognathic and retrognathic patients with craniofacial microsomia and Treacher-Collins syndrome. We subsequently evolved to internal distraction and now also utilize posterior vault, LeFort II/III, and monobloc distraction in Krakow. Foremost, this has enabled larger and, in our view, safer advancements than are possible with conventional fixation. The length of surgery seems to be shorter, concomitant blood loss reduced, infection rates low, and overall morbidity decreased (findings that parallel recent studies of distraction for posterior vault expansion).16–18 Further, by utilizing consolidation rather than internal fixation to achieve osteosynthesis, we have a reduced need for (and cost of) internal fixation plates. Of course, this is offset by the need for distractors. The cost of new distractors (either internal or external halo systems) has not historically been feasible for the publicly funded hospital, but we have not yet encountered any equipment failures of reused devices. We use external halo devices for monobloc or midfacial advancement and find that, compared with internal distractors, it is easier for local surgeon partners to perform hardware removal and easier to cannibalize parts among the different devices. It is our hope that the cost of earlier-generation distraction systems may decrease to an affordable level, allowing the use of new distractors in the future.

The second and third innovations, Internet communication and digital photography, have gone hand in hand. At the outset of the partnership, postoperative patient management was done verbally (and hastily) by overseas telephone call and involved quite subjective descriptions and guidance.
Beginning in about 2000, the availability of digital photography and email enabled photographs and images of radiographs to be shared postoperatively in a serial fashion and the management plan adapted accordingly. Further, learning through evaluation of serial radiographs facilitated teaching among the local team in Krakow. As the relationship has continued to evolve, the majority of postoperative management is now directed by the Krakow team, with guidance from North America only periodically modifying the local decision making. Of course, there are limits to electronic communication and digital photography, and we have found that they do not replace a clear postoperative plan agreed between the patient's family and both surgical teams.

The final innovation derived from the Krakow craniofacial exchange but has influenced surgical practice in North America: the observation that certain craniofacial morphology improves in the absence of surgical intervention. A specific example is that patients with Apert syndrome and associated frontal bossing, when treated initially with posterior vault distraction osteogenesis (PVDO), exhibit improvement in frontal bossing and frontal morphology. This has led us both to defer frontoorbital advancement and to preferentially treat this patient cohort with PVDO even absent considerable posterior pathology, both of which appear to benefit patients in the form of less surgery in our experience to date. This realization would have been less likely to arise in a North American practice, where the combination of a competitive marketplace of other providers, perceived "standards of care," and medicolegal defensiveness each reinforce the status quo. Furthermore, simply evaluating an older patient with less severe frontal morphology would have suggested a less severe syndromic variant.
However, by following patients continually but with less opportunity to intervene surgically, we are thrust into a more actively observing role, and this can improve both our understanding of the disease course and the impact of treatment.

CHALLENGES

There continue to be several challenges to fully realizing a center in Krakow that delivers sustainable, high-quality craniofacial care and can be increasingly independent. First, recruiting and training successive pediatric plastic surgeons, particularly in craniofacial surgery, has proven difficult. The training pathway to pediatric plastic surgery in Poland originates from pediatric surgery, not plastic surgery: trainees complete general surgery and then pediatric surgery residency and fellowship, and their first and only plastic surgical training comes subsequently, as a 2- to 3-year apprenticeship. This regimen limits the depth of plastic surgical exposure, which likely compounds the challenge of complex craniofacial surgery. The length and low salary of this training regimen are discouraging and preclude those trained in plastic surgery from undertaking only a fellowship with a pediatric or craniofacial focus. A plastic surgeon's salary at the hospital, with an expected 38-hour-per-week commitment, is basic, and the lack of formal plastic surgical training limits the opportunity to supplement this work with a private practice.

Second, as several surgeons elsewhere in Poland have expanded their surgical repertoire to incorporate some basic craniofacial procedures, how best to support them has been challenging. As a case in point, a neurosurgeon and an oral surgeon, each very competent and showing interest, have performed frontoorbital advancement and orbital reconstruction with variable results and potentially incomplete corrections. Despite mutual interest, attempts to collaborate on cases have been constrained by restrictive hospital credentialing policies.
We are sensitive to ensuring that our craniofacial outreach partnership fosters rather than undermines the ability of local surgeons to provide craniofacial care. Although helping to train other surgeons is a logical long-term strategy, we currently struggle, on behalf of our patients, to advocate for other local surgeons when we perceive that at present the outcomes of our own partnership may be better defined. However, there are limitations to our partnership, specifically that we operate on large cases together only several times per year, and this limits early intervention. We also note that different subspecialties see deformities and treatment objectives differently. Paul Tessier charged that "craniofacial surgery should be performed only if it is the main interest of that surgeon"19; we struggle when the two options, a craniofacial outreach partnership or established surgeons doing occasional cases, are each imperfect.

Relating to each of these challenges, we must routinely critique the presence of North American surgeons to ensure that it facilitates, and does not impede, the skill development of local surgeons. The Polish surgeon partners routinely perform smaller cases (eg, genioplasty, mandibular distractor removal) independently, and the acuity of these "smaller" cases continues to increase. At present, all team members seem to prefer to do the larger cases together. Also, recognizing the importance of high volume and multidisciplinary team care, it is important that this partnership continue to grow in size and scope and potentially collaborate with any future similar efforts to achieve scale.

Third and finally are limited medical and material resources. As mentioned, neither distractors nor resorbable plates are available new, and the internal fixation sets are very limited. Reusing equipment such as distractors risks malfunction or device failure, inadequate sterility, and supply shortages.
Further, the lack of medical adjuncts such as gelatin hemostatic matrix likely contributes to what we perceive to be higher average blood loss compared with similar cases in North America. Although reports of mortality are rare in the literature, hemorrhage is the most common antecedent cause in intracranial surgery.20 Given that there always exist uncontrollable risks to performing surgery in remote environments, being able to mitigate the risks for which solutions do exist would be optimal. Certain techniques we employ, such as perioperative hemostatic scalp sutures placed on either side of the bicoronal incision, have been adapted to the environment and may represent a form of "reverse innovation" from a lower-resource setting.21 The environment of ingenuity fostered by these limitations is a silver lining, but clearly the availability of the supplies used in wealthy countries is in the interest of superior patient care.

CONCLUSIONS

This partnership has enabled the development of a craniofacial surgery program in Krakow, Poland, over the last 30 years. It has facilitated the cultivation of surgical skills, capacity, and mutual learning in a lower-income setting. Until a future time when adequate experience and resources in Poland enable self-sufficiency, we plan to continue to strengthen craniofacial surgery incrementally through such a partnership, embracing safety, sustainability, and quality of care on behalf of our patients and their families.

Scott P. Bartlett, MD
Division of Plastic Surgery
The Children's Hospital of Philadelphia
3501 Civic Center Blvd., Philadelphia, PA 19104
E-mail: bartletts@email.chop.edu

ACKNOWLEDGMENTS

The authors thank Ms. Joanna Szydlowska at the Uniwersytecki Szpital Dzieciecy in Krakow for her invaluable and continued administrative support of this partnership.

PATIENT CONSENT

The patient provided written consent for the use of his image.

REFERENCES

1. Debas HT, Gosselin R, McCord C, et al.
"Surgery." In: Jamison DT, Breman JG, Measham AR, eds. Disease Control Priorities in Developing Countries. Washington, DC: The International Bank for Reconstruction and Development/The World Bank Group; 2006.
2. Meara JG, Leather AJM, Hagander L. Global surgery 2030: evidence and solutions for achieving health, welfare, and economic development. Lancet. 2015;385:1–56.
3. Mock CN, Donkor P, Gawande A, et al. Essential surgery: key messages from disease control priorities, 3rd edition. Lancet. 2015;385:2209–2219.
4. Semer NB, Sullivan SR, Meara JG. Plastic surgery and global health: how plastic surgery impacts the global burden of surgical disease. J Plast Reconstr Aesthet Surg. 2010;63:1244–1248.
5. Patel PB, Hoyler M, Maine R, et al. An opportunity for diagonal development in global surgery: cleft lip and palate care in resource-limited settings. Plast Surg Int. 2012:1–10.
6. Jansen LA, Carillo L, Wendby L, et al. Improving patient follow-up in developing regions. J Craniofac Surg. 2014;25:1640–1644.
7. Chao TE, Sharma K, Mandigo M, et al. Cost-effectiveness of surgery and its policy implications for global health: a systematic review and analysis. Lancet Glob Health. 2014;2:e334–e345.
8. Corlew SD. Estimation of impact of surgical disease through economic modeling of cleft lip and palate care. World J Surg. 2010;34:391–396.
9. Nagengast ES, Caterson EJ, Magee WP Jr, et al. Providing more than health care: the dynamics of humanitarian surgery efforts on the local microeconomy. J Craniofac Surg. 2014;25:1622–1625.
10. Campbell A, Restrepo C, Mackay D, et al. Scalable, sustainable cost-effective surgical care: a model for safety and quality in the developing world, part II: program development and quality care. J Craniofac Surg. 2014;25:1680–1684.
11. McQueen KA, Ozgediz D, Riviello R, et al. Essential surgery: integral to the right to health. Health Hum Rights.
2010;12:137–152.
12. Aliu O, Corlew SD, Heisler ME, et al. Building surgical capacity in low-resource countries: a qualitative analysis of task shifting from surgeon volunteers' perspectives. Ann Plast Surg. 2014;72:108–112.
13. Noordhoff MS. Establishing a craniofacial center in a developing country. J Craniofac Surg. 2009;20(Suppl 2):1655–1656.
14. Sharma RK. Craniofacial surgery in a tertiary care center in India. J Craniofac Surg. 2014;25:1594–1600.
15. McCarthy JG. The role of distraction osteogenesis in the reconstruction of the mandible in unilateral craniofacial microsomia. Clin Plast Surg. 1994;21:625–631.
16. Taylor JA, Derderian CA, Bartlett SP, et al. Perioperative morbidity in posterior cranial vault expansion: distraction osteogenesis versus conventional osteotomy. Plast Reconstr Surg. 2012;129:674e–680e.
17. Goldstein JA, Paliga JT, Wink JD, et al. A craniometric analysis of posterior cranial vault distraction osteogenesis. Plast Reconstr Surg. 2013;131:1367–1375.
18. Derderian CA, Wink JD, McGrath JL, et al. Volumetric changes in cranial vault expansion: comparison of fronto-orbital advancement and posterior cranial vault distraction osteogenesis. Plast Reconstr Surg. 2015;135:1665–1672.
19. Jackson IT, Munro IR, Salyer KE, et al. Atlas of Craniomaxillofacial Surgery. St. Louis, MO: Mosby; 1982.
20. Czerwinski M, Hopper RA, Gruss J, et al. Major morbidity and mortality rates in craniofacial surgery: an analysis of 8101 major procedures. Plast Reconstr Surg. 2010;126:181–186.
21. Cotton M, Henry JA, Hasek L. Value innovation: an important aspect of global surgical care. Global Health. 2014;10:1.
work_f3x3gn4ncvc6jpiyxus4rg5b3m ----

The between-day and inter-rater reliability of a novel wireless system to analyse lumbar spine posture

Kieran O'Sullivan (a), Luciana Galeotti (a,b), Wim Dankaerts (b,c), Leonard O'Sullivan (a), Peter O'Sullivan (d)

(a) University of Limerick, Limerick, Republic of Ireland; (b) Catholic University, Leuven, Belgium; (c) University College Limburg, Hasselt, Belgium; (d) Curtin University of Technology, Perth, Australia

Ergonomics, 54(1), 82–90. Publisher: Taylor & Francis. Online publication date: 21 December 2010. DOI: 10.1080/00140139.2010.535020
(Received 25 September 2009; final version received 23 September 2010)

Lumbar posture is commonly assessed in non-specific chronic low back pain (NSCLBP), although quantitative measures have mostly been limited to laboratory environments. The BodyGuard™ is a spinal position monitoring device that can monitor posture in real time, both inside and outside the laboratory. The reliability of this wireless device was examined in 18 healthy participants during usual sitting and forward bending, two tasks that are commonly provocative in NSCLBP. Reliability was determined using intraclass correlation coefficients (ICC), the standard error of measurement (SEM), the mean difference and the minimal detectable change (MDC90). Between-day ICC values ranged from 0.84 to 0.87, with small SEM (<6%), mean difference (<9%) and MDC90 (<14%) values. Inter-rater ICC values ranged from 0.91 to 0.94, with small SEM (<5%), mean difference (<7%) and MDC90 (<10%) values. Between-day and inter-rater reliability are essential requirements for clinical utility and were excellent in this study.
Further studies into the validity of this device and its application in clinical trials in occupational settings are required.

Statement of Relevance: A novel device that can analyse spinal posture exposure in occupational settings in a minimally invasive manner has been developed. This study established that the device has excellent between-day and inter-rater reliability in healthy pain-free subjects. Further studies in people with low back pain are planned.

Keywords: back pain; posture; reliability

1. Introduction

Low back pain (LBP) is a very common and costly disorder that should be considered within a biopsychosocial framework (Maniadakis and Gray 2000, O'Sullivan 2005, Hansson et al. 2006, Linton et al. 2007). Most LBP lacks a specific radiological diagnosis and has been termed non-specific chronic low back pain (NSCLBP) (Borkan et al. 2002, Dankaerts et al. 2006a). It is increasingly recognised that within the broad NSCLBP population specific subgroups exist, which require management addressing the specific mechanism underlying their NSCLBP (Boersma and Linton 2002, Borkan et al. 2002, McCarthy et al. 2004, Dunn and Croft 2005, O'Sullivan 2005, Kent et al. 2009). It has been proposed that, in a subgroup of NSCLBP subjects, the adoption of altered patterns of spinal movement and posture represents a primary mechanism for their disorder; these patients present with maladaptive spinal postures and movement patterns that expose their spines to increased loads and strain (O'Sullivan 2005). In line with this, spinal posture is considered by many in both clinical practice and research to be a factor in the development and maintenance of LBP (van Dillen et al. 2003b, Poitras et al. 2005, Dankaerts et al. 2006a, Womersley and May 2006, van Wyk et al. 2009).
*Corresponding author. Email: kieran.osullivan@ul.ie

A number of studies now support the existence of these altered spinal postures in subjects with NSCLBP when examined in a laboratory environment (Burnett et al. 2004, Dankaerts et al. 2006a,b, 2009, Womersley and May 2006, Smith et al. 2008), and modification of posture has been associated with improved clinical outcomes (van Dillen et al. 2003a, Dankaerts et al. 2007). There are numerous methods for analysing lumbar spine posture, from simple visual observation in clinical practice (Poitras et al. 2005) and the use of photographic markers (Perry et al. 2008, Smith et al. 2008) to more complex laboratory-based motion analysis systems used in much LBP research (Pearcy and Hindle 1989, Schuit et al. 1997, Mannion and Troke 1999, Dankaerts et al. 2006a). Many laboratory-based methods of analysing posture have been shown to be both reliable and valid (Pearcy and Hindle 1989, Schuit et al. 1997). Unfortunately, these systems are complex and time-consuming to use and cannot easily be used to analyse posture outside the laboratory setting.

The use of portable, minimally invasive methods of analysing posture in 'real-world' settings has been advocated to provide a quantitative measurement of posture in the workplace (Hermens and Vollenbroek-Hutton 2008). In recent years a number of devices have been developed to analyse spinal posture outside the laboratory (Donatell et al. 2005, Dean and Dean 2006, Horton and Abbott 2008). Spinal posture analysis is now possible using accelerometers (Bazzarelli et al. 2001, Nevins et al. 2002, Wong and Wong 2008), gyroscopes (Lee et al. 2003), strain gauges and/or optical sensors (Donatell et al.
2005, Dean and Dean 2006) and even sensing fabrics (de Rossi et al. 2003, Walsh et al. 2006). Recent reviews have highlighted that, despite the potential of such devices, there is a lack of empirical data supporting their use (Wong et al. 2007, Hermens and Vollenbroek-Hutton 2008). Unfortunately, some of the devices are relatively large and invasive, such that they cannot be concealed easily (Donatell et al. 2005, Magnusson et al. 2008) or can only be used under supervision (Magnusson et al. 2008). While some of these devices have at least some evidence of initial reliability and/or validity studies being completed (Donatell et al. 2005, Intolo et al. 2010), this is not the case with all devices (Dean and Dean 2006, Magnusson et al. 2008, Mork and Westgaard 2009). It is critical that the desire to promptly use these devices in clinical trials is balanced against the requirement to first establish the scientific robustness of the device itself.

Another significant limitation of traditional laboratory-based motion analysis systems is that they cannot provide instantaneous postural feedback while the NSCLBP patient performs daily tasks. This shortcoming is significant considering the role that reduced postural awareness and position sense may play in NSCLBP (O'Sullivan et al. 2003). In fact, recent studies demonstrate that provision of postural awareness training may help improve clinical outcomes in both acute LBP and NSCLBP (Horton and Abbott 2008, Magnusson et al. 2008).

The BodyGuard™ (Sels Instruments, Vorselaar, Belgium), a novel wireless method of measuring spinal sagittal plane posture, has recently been developed. This small device can monitor spinal posture in real time without the need for cumbersome cables, thus facilitating more normal movement and function in a wide variety of tasks compared to some existing options (Donatell et al. 2005, Magnusson et al. 2008). The data are accessible for immediate presentation and analysis.
The BodyGuard™ device can also be used to provide immediate real-time postural biofeedback (audio or vibratory) with a view to modifying posture or movement patterns and even enhancing the exercise performance of those using the device. While the BodyGuard™ device clearly demonstrates potential clinical utility, and could be trialled immediately in clinical trials as an intervention tool, it is believed that there is a clear need for robust scientific validation before progressing to clinical trials. Therefore, a multi-stage investigation into the validity of this device for monitoring posture in NSCLBP patients has been outlined. This first study examines the between-day and inter-rater reliability of the device for analysing sagittal plane spinal posture during functional tasks. In order to minimise postural variation due to pain or external environmental factors, the reliability of the device will first be established among pain-free, healthy control subjects performing closely controlled postures and movements. Since forward bending (FB) and sitting are common aggravating factors in NSCLBP, and are commonly analysed in clinical practice and research (O'Sullivan et al. 2002, O'Sullivan 2005, Dankaerts et al. 2006a, Womersley and May 2006), they were deemed suitable for this reliability study. The aim of the study was to establish the between-day and inter-rater reliability of the BodyGuard™ for monitoring these sagittal plane tasks.

2. Methods

2.1. Participants

A total of 18 participants (four males, 14 females) were recruited from within the university campus. These participants had a mean (±SD) age of 21 (±2) years, height of 169 (±7) cm, mass of 65.4 (±6.9) kg and BMI of 22.8 (±2.1) kg/m². Ethical approval from the local university research ethics committee was obtained prior to the study. All participants provided written informed consent prior to participation.
Participants were excluded if they were pregnant, aged less than 18 years, had current LBP, previous LBP for greater than 3 months, previous back surgery, a history of leg pain over the previous 2 years, previous postural education or a known skin allergic reaction to tape.

2.2. Instrumentation

All posture measurements were performed with the BodyGuard™, which was adhered to the skin using adhesive tape (Figure 1). The BodyGuard™ incorporates a strain gauge that provides information about the relative distance between anatomical landmarks, estimating flexion/extension of the lumbar spine by the degree of strain gauge elongation. Elongation of the strain gauge alters its internal resistance and therefore the voltage of the signal. This alteration in voltage occurs in a linear manner in response to elongation. Therefore, the voltage output is directly related to the length (flexion vs. extension) of the strain gauge. Postural data are recorded in real time at 20 Hz. Based on the elongation of the strain gauge, lower lumbar spine sagittal plane posture is expressed as a percentage of range of motion (ROM). Therefore, the degree of spinal flexion/extension is expressed relative to a referenced ROM, for example, total lumbar flexion ROM, rather than being expressed in degrees (O'Sullivan et al. 2010). This reflects the clinical assessment of patients, where sitting posture is often considered relative to individual ROM. It is also similar to electromyography normalisation of muscle activity relative to maximal or sub-maximal voluntary contraction (Dankaerts et al. 2006b). The linearity of the BodyGuard™ has been established as excellent (correlation with digital callipers >0.99, mean difference <2.5% elongation). In comparison to other devices based on strain gauges (Donatell et al.
2005), the BodyGuard™ is very small, incorporating only a single strain gauge, with no other optical or inertial sensors being required. The device is operationally stable within a temperature range of 10–40°C. The BodyGuard™ has been validated as a measure of lumbo–pelvic posture and movement against a traditional flexible electrogoniometer (Biometrics, Cwmfelinfach, Gwent, UK), in both sitting (r = 0.98) and standing (r = 0.99) (O'Sullivan 2010).

Figure 1. The posture monitoring device.

2.3. Study design

A between-day and inter-rater test–retest design was used. Two raters assessed participants on one day, and one rater repeated the procedure on a second test day. The raters were a musculoskeletal physiotherapist with 10 years of clinical experience and an occupational therapist with 3 years of clinical experience. Both raters agreed and practised a procedure for palpation of the spine prior to testing. The BodyGuard™ was removed by each rater after testing and reapplied by the other rater. Both raters were blind to previous results during testing. The mean (±SD) number of days between testing was 5 (±2) d.

2.4. Experimental protocol

2.4.1. Participant preparation

Participants removed their shoes and wore shorts during testing. The skin was cleaned with alcohol wipes prior to testing. The BodyGuard™ was positioned directly over the spine at the spinal levels of L3 and S1, as determined by palpation. These spinal levels were chosen as the lower lumbar spine is the most common area for subjects to report LBP (Dankaerts et al. 2006a), and recent research suggests that the upper and lower lumbar spine regions demonstrate functional independence (Dankaerts et al. 2006a, Mitchell et al. 2008). The BodyGuard™ was applied with participants sitting in a slouched position. Based on preliminary pilot testing, a 6 cm strain gauge was used for all participants and was secured with tape (Figure 1).
Once the BodyGuard™ was positioned, participants stood up and performed repeated maximal flexion movements in standing to ensure the device was securely attached and that its available length would not be exceeded during testing. Further, the BodyGuard™ was calibrated to full lumbar flexion ROM during standing flexion. To do this, each subject was first asked to maintain a relaxed standing position (Figure 2), which was set as 0% of their lumbar flexion ROM, and then to perform full flexion of the spine ('bend as far as possible towards your toes while keeping your knees straight'). This flexed position was set as 100% of their lumbar flexion ROM (Figure 3). Once this calibration procedure was completed, participants were asked to complete three repetitions of maximum ROM into full lumbar flexion in standing to ensure comfort and consistency of movement was possible while wearing the BodyGuard™. Participants then resumed a seated position and were instructed in the tasks to be performed. All tasks were timed using an electronic clock (www.online-stopwatch.com). The following procedure was repeated in the same manner by both raters, on both test occasions.
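The calibration just described reduces to a two-point linear rescaling: because the device's output is linear in strain-gauge elongation, the relaxed-standing reading maps to 0% and the full-flexion reading to 100% of lumbar flexion ROM. A minimal sketch of this normalisation (the raw counts and function names here are hypothetical illustrations, not the manufacturer's software):

```python
def to_percent_rom(raw, raw_standing, raw_full_flexion):
    """Express a raw sensor reading as a percentage of the calibrated
    lumbar flexion ROM (0% = relaxed standing, 100% = full flexion)."""
    span = raw_full_flexion - raw_standing
    if span == 0:
        raise ValueError("calibration positions must give distinct readings")
    return 100.0 * (raw - raw_standing) / span

def mean_posture(samples, raw_standing, raw_full_flexion):
    """Average posture (%ROM) over a recorded window, e.g. 100 samples
    for a 5 s hold at the 20 Hz sampling rate."""
    values = [to_percent_rom(s, raw_standing, raw_full_flexion) for s in samples]
    return sum(values) / len(values)

# Hypothetical raw counts: 100 at relaxed standing, 300 at full flexion.
print(to_percent_rom(200, 100, 300))            # 50.0 (halfway through flexion ROM)
print(mean_posture([150, 200, 250], 100, 300))  # 50.0
```

Readings below the standing reference simply come out negative (extension beyond relaxed standing), which is consistent with expressing posture relative to a referenced ROM rather than in degrees.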
This was performed three times.

2.4.3. Usual sitting

For the usual sitting (USit) task, participants sat unsupported on a flat wooden 45 cm high stool with their knees and ankles positioned at 90°, both feet flat on the ground, forearms over their thighs, while looking at a convenient fixed point straight ahead (Figure 5). In this position, participants were asked to assume their USit position ('sit as you usually do and look at the mark on the wall'). Similar to the FB task, participants adopted what they perceived as their USit posture a few times for practice, until the tester was satisfied they were performing this consistently, to minimise the degree of natural variation. Participants were asked to remain in this position for 60 s and this USit posture was recorded once.

2.5. Data analysis

Data were automatically uploaded to a Microsoft Excel file via a proprietary wireless signal developed by the manufacturers. For FB, the entire 5 s of each sustained FB movement was analysed and the average posture (%ROM) was used for comparison. For USit, the entire 60 s was analysed and the average USit posture was used for comparison. Data were then analysed using SPSS 15.0 (SPSS Inc., Chicago, IL, USA).

Figure 2. Calibration to 0% flexion in standing.
Figure 3. Calibration to 100% flexion in standing.
Figure 4. Forward bending task.

[Ergonomics, p. 85]

The reliability analysis used in many studies (Troke et al. 1996, Schuit et al. 1997, Mannion and Troke 1999, Ng et al. 2001) has been criticised because only the level of association between the data is assessed and no information on the level of agreement between measures is provided (Bland and Altman 1986, McGinley et al. 2009).
To overcome this, it has been recommended that intraclass correlation coefficient (ICC) values be complemented with data that examine the level of agreement between measurements, for example, using Bland and Altman methods (Bland and Altman 1986, Rankin and Stokes 1998). Therefore, for this study the association between values was analysed using one-way random (ICC2,1) and two-way mixed (ICC3,2) ICCs, for between-day and inter-rater data respectively. In addition, Bland and Altman methods were used to determine the level of agreement between data (Bland and Altman 1986). Finally, the standard error of measurement (SEM) and minimal detectable change at the 90% confidence level (MDC90) were calculated to provide an indication of the dispersion of the measurement error and the difference required between measurements to be considered real change. The mean difference, SEM and MDC90 values were all expressed as a percentage of flexion ROM.

3. Results

The mean (±SD) posture during each task for each rater is displayed in Figure 6.

3.1. Between-day reliability

Between-day values displayed excellent association for both FB (ICC2,1 = 0.87) and USit (ICC2,1 = 0.84) (Landis and Koch 1977). In addition, the between-day level of agreement was very good, with mean difference values of 8.99% and 7.85% for FB and USit respectively. The SEM was relatively small for both FB (5.91%) and USit (4.78%). Finally, the MDC90 was 13.77% for FB and 11.14% for USit (Table 1).

3.2. Inter-rater reliability

Inter-rater values also displayed excellent association for both FB (ICC3,2 = 0.94) and USit (ICC3,2 = 0.91) (Landis and Koch 1977). In addition, the inter-rater level of agreement was very good, with mean difference values of 6.31% and 6.17% for FB and USit respectively. The SEM was again relatively small for both FB (4.16%) and USit (3.87%). Finally, the MDC90 was 9.72% for FB and 9.03% for USit (Table 1).

4.
Discussion

Good between-day and inter-rater reliability is a basic requirement for validity. Between-day reliability is important if the device is to be used as an outcome measure, while inter-rater reliability is important if different raters are to perform consistent measurements. The results indicate that the reliability of the device for analysing lumbar spine posture during sagittal plane tasks is excellent. All ICC values were over 0.81, which has been described as an 'almost perfect' association between measurements (Landis and Koch 1977).

Figure 5. Usual sitting task.
Figure 6. Mean (±SD) lumbar posture during forward bending (FB) and usual sitting (USit) for rater 1 on two occasions (R1-A and R1-B), as well as rater 2 (R2) on one occasion. ROM = range of motion.

Reliability of spinal posture measurement is critical to ensure that the presence or absence of an association between spinal posture and NSCLBP can be accurately estimated (Dartt et al. 2009). A high degree of measurement error could result in subtle alterations in posture going unnoticed or, indeed, non-existent postural differences could be assumed on the basis of poor reliability. The ICC values observed in this study for sagittal plane postural measurement using the BodyGuard™ are similar to those reported for measurement of standing sagittal plane lumbar posture using laboratory motion analysis systems (Levine and Whittle 1996, Norton et al. 2002, Schuit et al. 2004, Troke et al. 2007), as well as simpler devices such as inclinometers (Ng et al. 2001). Furthermore, the ICC values are superior to those reported for digital photography for measurement of either standing or sitting lumbar sagittal plane postures (Dunk et al. 2004, Perry et al. 2008, Pownall et al. 2008).
It is acknowledged that no study has previously examined the repeatability of the FB task performed in this study. However, the reliability of digital photography to measure lumbar flexion using a different standardised FB task was also slightly lower than that reported here (Corben et al. 2008). This, in addition to the fact that the device displays greater reliability during the sitting task compared to digital photography (Pownall et al. 2008), implies that the device itself may be more reliable than digital photography. Overall, the ICC values were slightly better for inter-rater measurements, possibly due to these tests being conducted on the same day. As previously mentioned, many reliability studies simply describe the association between measurements, but not the true level of agreement, which has been criticised (Bland and Altman 1986, McGinley et al. 2009). The mean differences obtained in this study, which varied between 6% and 9% of lumbar flexion ROM, appear to represent a relatively small amount of variation between testers. The SEM similarly varied between 4% and 6% for all measurements and the MDC90 varied from 9% to 14%. Ideally, the mean difference, SEM and MDC90 should be small and close to zero. It is difficult to estimate what is an acceptable level of difference between repeated measurements using the BodyGuard™, as this depends on the extent of the hypothesised postural differences between NSCLBP subjects and matched controls, which is the focus of ongoing investigation. Since the discriminative validity of the device has not yet been fully investigated, this information is unknown. However, based on NSCLBP research using angular measurements during laboratory testing (Dankaerts et al. 2006a), the degree of measurement error reported here for the BodyGuard™ appears to be acceptable.
Therefore, based on previous research it appears that the reliability of this postural monitoring device is as good as, or better than, that of other lumbar posture measurement devices, whether large laboratory-based systems or smaller, portable devices.

4.1. Applications and implications

The BodyGuard™ is a relatively simple, economical, and minimally invasive postural monitoring system. It is considerably smaller than some alternative devices (Donatell et al. 2005), so it should not interfere with normal movements and function. Many other similar-sized devices measure overall trunk flexion rather than local spinal flexion (Intolo et al. 2010), which is a considerable disadvantage when addressing subtle local changes in lumbar spine posture. It is difficult to compare the output regarding spinal posture with the results of other devices, as the output of this device is not expressed in degrees and it currently measures only sagittal plane motion. Despite this, research indicates that expressing lumbar posture relative to ROM (as the BodyGuard™ does) may be very useful, since postural differences between healthy controls and NSCLBP subjects have been observed in

Table 1. Reliability of the spinal position monitoring device.
                ICC    95% CI (ICC)   d      95% CI for d   SDdiff   95% LOA         SEM    MDC90
FB-between      0.87   0.67, 0.95     8.99   5.11, 12.89    7.84     24.35, −6.36    5.91   13.77
FB-inter        0.94   0.84, 0.98     6.32   3.69, 8.95     5.29     16.68, −4.04    4.16   9.72
USit-between    0.84   0.57, 0.94     7.85   5.59, 10.12    4.56     16.78, −1.08    4.78   11.14
USit-inter      0.91   0.77, 0.97     6.17   3.95, 8.40     4.47     14.93, −2.58    3.87   9.03

ICC = intraclass correlation coefficient; d = mean difference between measures, expressed as % range of motion (ROM); 95% CI for d = 95% CI of the mean difference between measures, expressed as %ROM; SDdiff = standard deviation of the mean difference, expressed as %ROM; 95% LOA = 95% limits of agreement (calculated as d ± [SDdiff × 1.96]), expressed as %ROM; SEM = standard error of measurement, calculated as SD × √(1 − ICC); MDC90 = minimal detectable change at the 90% CI; FB = forward bending; USit = usual sitting. Note: ICC2,1 is used for between-day analysis, ICC3,2 is used for inter-rater analysis.

laboratory settings (Dankaerts et al. 2006a). Only moderate human resources are needed for data collection and analysis and very little training is required, reducing potential barriers to its application in practice. This is the first in a series of studies planned to determine the clinical utility of the device for research in NSCLBP populations. While the device has been validated in simple sagittal plane postures and movements against an electrogoniometer, future studies will be performed to compare this device to a standard laboratory-based motion system, as well as digital video fluoroscopy, similar to the approach used in the validation of other motion analysis systems (Schuit et al. 1997, Mannion and Troke 1999, Bull and McGregor 2000, Ripani et al. 2008, Intolo et al. 2010).
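The agreement statistics defined in the Table 1 footnote are simple to apply to paired repeated measurements. The sketch below is illustrative only: the posture values are invented, not the study's raw data, and the ICC is supplied rather than estimated from an ANOVA model.

```python
from math import sqrt
from statistics import mean, stdev

def agreement_stats(test, retest, icc):
    """Agreement statistics per the Table 1 footnote:
    95% LOA = d +/- (SDdiff * 1.96), SEM = SD * sqrt(1 - ICC),
    MDC90 = SEM * 1.645 * sqrt(2)."""
    diffs = [a - b for a, b in zip(test, retest)]
    d = mean(diffs)                                 # mean difference (bias), %ROM
    sd_diff = stdev(diffs)                          # SD of the differences
    loa = (d - 1.96 * sd_diff, d + 1.96 * sd_diff)  # 95% limits of agreement
    sd_all = stdev(list(test) + list(retest))       # pooled SD of all measurements
    sem = sd_all * sqrt(1 - icc)                    # standard error of measurement
    mdc90 = sem * 1.645 * sqrt(2)                   # minimal detectable change, 90% CI
    return d, loa, sem, mdc90

# Invented forward-bending postures (%ROM) for two test days
day1 = [52.0, 61.0, 70.0, 45.0, 58.0]
day2 = [49.0, 58.0, 66.0, 44.0, 55.0]
d, loa, sem, mdc90 = agreement_stats(day1, day2, icc=0.87)
```

In this framing, a between-day change smaller than the resulting MDC90 would be treated as measurement noise rather than real postural change.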
Further studies are also required to determine whether the device can discriminate between subgroups of subjects with NSCLBP and matched controls, similar to existing laboratory-based systems (Dankaerts et al. 2006a). Similarly, whether the provision of postural feedback via the BodyGuard™ improves outcomes in NSCLBP requires investigation. The potential to provide patients with feedback regarding the control of their movement patterns may motivate patients (Horton and Abbott 2008), improve exercise performance (Magnusson et al. 2008), reduce the link between pain and movement (Zusman 2008) and might decrease the enormous rehabilitation costs associated with NSCLBP (Hermens and Vollenbroek-Hutton 2008). As stated earlier, the use of spinal biofeedback to change motor patterns was supported in a recent study (Magnusson et al. 2008) and the BodyGuard™ is more portable and less invasive than many existing devices. The BodyGuard™ may allow investigation of whether factors such as postural variability or prolonged exposure to near end-range postures are predictors of LBP, as detailed examination of these relationships in the past has been constrained by technological limitations. Currently, digital photography and video analysis are commonly used in ergonomic research as they are the least invasive (Spielholz et al. 2001, Dartt et al. 2009, Straker et al. 2009). In future trials, the device may offer more specific assessment of postural exposure while still allowing maximal work productivity. Indeed, it is now possible to remotely monitor several subjects simultaneously and longitudinally using the BodyGuard™, so that the temporal relationship between postural exposure and musculoskeletal pain and discomfort can be studied in greater detail. The device may also help bridge the gap between advice given to NSCLBP patients in the laboratory or clinic and the challenges faced in implementing these postural changes into daily life.

4.2.
Limitations

The sample size, although larger than in some previous studies examining the reliability of lumbar posture and movement measures (Mannion and Troke 1999, Ng et al. 2001, Pownall et al. 2008), is small. Errors of palpation are always possible between raters; however, every effort was made to ensure consistency of palpation technique between raters. The values obtained reflect those of two raters who practised palpation and device application in advance, so it is possible that reliability would be lower for other raters. Inconsistent movement or postures by participants may explain some of the observed variation; however, this was minimised by giving clear instructions, along with time to practise each procedure. This study only evaluated the reliability of the BodyGuard™ for the lower lumbar spine during sagittal plane flexion tasks. The ability of the BodyGuard™ to monitor other spinal regions and planes of motion requires further study. The reliability of the BodyGuard™ in occupational environments, or when self-applied by the NSCLBP patient, has yet to be evaluated. Like all skin-mounted spinal measurement systems, the BodyGuard™ may not reflect actual spinal motion, particularly as its output is related to linear strain gauge elongation and not angular displacement. The risk that motion in planes other than the sagittal plane (e.g. rotation or side-flexion) could compromise or contaminate the output must be examined in future validity studies. The limitations associated with not providing an angular output and analysing only sagittal plane postures have already been highlighted. In future studies the calibration procedure may need to be specific to the task being analysed, e.g. seated ROM if examining the USit task; however, this was not deemed necessary for this initial study. While there is emerging evidence that, for subgroups of NSCLBP, postural factors can be significant (Dankaerts et al.
2006a, 2007), it is acknowledged that there is still little agreement on what constitutes ideal posture (O'Sullivan et al. 2006, Claus et al. 2009, Reeve and Dilley 2009, O'Sullivan et al. 2010). Further, it is acknowledged that NSCLBP is a multifactorial biopsychosocial disorder, in which numerous factors other than posture and movement patterns must be considered (McCarthy et al. 2004, Linton et al. 2007).

5. Conclusion

The results indicate that the BodyGuard™ has excellent reliability for analysis of lower lumbar spine sagittal posture, both between days and between raters. It has several potential advantages over existing methods of analysing spinal posture, although the lack of an angular output is a limitation. Further validation studies using this device are indicated before progressing to clinical trials.

Acknowledgements

The Health Research Board of Ireland, for sponsoring the lead author (K.O'S). The University of Limerick seed funding scheme facilitated the research collaboration.

References

Bazzarelli, M., et al., 2001. A low power hybrid posture monitoring system. In: Proceedings of the IEEE Canadian conference on electrical and computer engineering, 12–15 May, Manitoba, Canada, 3–16. New York: IEEE Press.
Bland, J.M. and Altman, D.G., 1986. Statistical methods for assessing agreement between two methods of clinical measurement. Lancet, 1, 307–310.
Boersma, K. and Linton, S., 2002. Early assessment of psychological factors: the Orebro screening questionnaire for pain. In: S. Linton, ed. New avenues for the prevention of chronic musculoskeletal pain and disability: Pain research and clinical management. Amsterdam: Elsevier, 205–214.
Borkan, J., et al., 2002. Advances in the field of low back pain in primary care: a report from the fourth international forum. Spine, 27, E128–E132.
Bull, A. and McGregor, A., 2000.
Measuring spinal motion in rowers: the use of an electromagnetic device. Clinical Biomechanics, 15, 772–776.
Burnett, A.F., et al., 2004. Spinal kinematics and trunk muscle activity in cyclists: a comparison between healthy controls and non-specific chronic low back pain subjects – a pilot investigation. Manual Therapy, 9, 211–219.
Claus, A., et al., 2009. Is 'ideal' sitting real? Measurement of spinal curves in four sitting postures. Manual Therapy, 14, 404–408.
Corben, T., Lewis, J., and Petty, N., 2008. Contribution of lumbar spine and hip movement during the palms to floor test in individuals with diagnosed hypermobility syndrome. Physiotherapy Theory and Practice, 24, 1–12.
Dankaerts, W., et al., 2006a. Differences in sitting postures are associated with non-specific chronic low back pain disorders when sub-classified. Spine, 31, 698–704.
Dankaerts, W., et al., 2006b. Altered patterns of superficial trunk muscle activation during sitting in nonspecific chronic low back pain patients: importance of subclassification. Spine, 31, 2017–2023.
Dankaerts, W., et al., 2007. The use of a mechanism-based classification system to evaluate and direct management of a patient with non-specific chronic low back pain and motor control impairment – a case report. Manual Therapy, 12, 181–191.
Dankaerts, W., et al., 2009. Discriminating healthy controls and two clinical subgroups of nonspecific chronic low back pain patients using trunk muscle activation and lumbosacral kinematics of postures and movements: a statistical classification model. Spine, 34, 1610–1618.
Dartt, A., et al., 2009. Reliability of assessing upper limb postures among workers performing manufacturing tasks. Applied Ergonomics, 40, 371–378.
Dean, A.Y. and Dean, S.G., 2006. A pilot study investigating the use of the Orthosense Posture Monitor during a real-world moving and handling task. Journal of Bodywork & Movement Therapies, 10, 220–226.
de Rossi, D., et al., 2003.
Electroactive fabrics and wearable biomonitoring devices. AUTEX Research Journal, 3, 180–185.
Donatell, G.J., et al., 2005. A simple device to monitor flexion and lateral bending of the lumbar spine. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 13, 18–23.
Dunk, N., et al., 2004. The reliability of quantifying upright standing postures as a baseline diagnostic clinical tool. Journal of Manipulative and Physiological Therapeutics, 27, 91–96.
Dunn, K. and Croft, P., 2005. Classification of low back pain in primary care: using 'bothersomeness' to identify the most severe cases. Spine, 30, 1887–1892.
Hansson, T., et al., 2006. Prevalence of low back pain and sickness absence: a 'borderline' study in Norway and Sweden. Scandinavian Journal of Public Health, 34, 555–558.
Hermens, H. and Vollenbroek-Hutton, M., 2008. Towards remote monitoring and remotely supervised training. Journal of Electromyography and Kinesiology, 18, 908–919.
Horton, S. and Abbott, J., 2008. A novel approach to managing graduated return to spinal loading in patients with low back pain using the Spineangel device: a case series report. New Zealand Journal of Physiotherapy, 36, 22–28.
Intolo, P., et al., 2010. The Spineangel: examining the validity and reliability of a novel clinical device for monitoring trunk motion. Manual Therapy, 15, 160–166.
Kent, P., Keating, J., and Buchbinder, R., 2009. Searching for a conceptual framework for non-specific low back pain. Manual Therapy, 14, 387–396.
Landis, J. and Koch, G., 1977. The measurement of observer agreement for categorical data. Biometrics, 33, 159–174.
Lee, R., Laprade, J., and Fung, E., 2003. A real-time gyroscopic system for three-dimensional measurement of lumbar spine motion. Medical Engineering and Physics, 25, 817–824.
Levine, D. and Whittle, M., 1996. The effects of pelvic movement on lumbar lordosis in the standing position. Journal of Orthopaedic & Sports Physical Therapy, 24, 130–135.
Linton, S., et al., 2007.
A randomized controlled trial of exposure in vivo for patients with spinal pain reporting fear of work-related activities. European Journal of Pain, 12, 722–730.
McCarthy, C., et al., 2004. The biopsychosocial classification of non-specific low back pain: a systematic review. Physical Therapy Reviews, 9, 17–20.
McGinley, J., et al., 2009. The reliability of three-dimensional kinematic gait analysis measurements: a systematic review. Gait and Posture, 29, 360–369.
Magnusson, M., et al., 2008. Motor control learning in chronic low back pain. Spine, 33, E532–E538.
Maniadakis, N. and Gray, A., 2000. The economic burden of back pain in the UK. Pain, 84, 95–103.
Mannion, A. and Troke, M., 1999. A comparison of two motion analysis devices used in the measurement of lumbar spine mobility. Clinical Biomechanics, 14, 612–619.
Mitchell, T., et al., 2008. Regional differences in lumbar spinal posture and the influence of low back pain. BMC Musculoskeletal Disorders, 9, 152.
Mork, P. and Westgaard, R., 2009. Back posture and low back muscle activity in female computer workers: a field study. Clinical Biomechanics, 24, 169–175.
Nevins, R., Durdle, N., and Raso, V., 2002. A posture monitoring system using accelerometers. In: Proceedings of the IEEE Canadian conference on electrical and computer engineering, 12–15 May, Manitoba, Canada, 1087–1092. New York: IEEE Press.
Ng, J., et al., 2001. Range of motion and lordosis of the lumbar spine: reliability of measurement and normative values. Spine, 26, 53–60.
Norton, B., Hensler, K., and Zou, D., 2002. Comparisons among noninvasive methods for measuring lumbar curvature in standing. Journal of Orthopaedic & Sports Physical Therapy, 32, 405–414.
O'Sullivan, K., 2010. Comparison of BodyGuard and Biometrics electrogoniometer. Unpublished data. University of Limerick.
O'Sullivan, K., et al., 2010.
Neutral lumbar spine sitting posture in pain-free subjects. Manual Therapy, 15 (6), 557–561.
O'Sullivan, P., 2005. Diagnosis and classification of chronic low back pain disorders: maladaptive movement and motor control impairments as underlying mechanism. Manual Therapy, 10, 242–255.
O'Sullivan, P., et al., 2002. The effect of different standing and sitting postures on trunk muscle activity in a pain-free population. Spine, 27, 1238–1244.
O'Sullivan, P., et al., 2003. Lumbar repositioning deficit in a specific low back pain population. Spine, 28, 1074–1079.
O'Sullivan, P., et al., 2006. Effect of different upright sitting postures on spinal-pelvic curvature and trunk muscle activation in a pain-free population. Spine, 31, E707–E712.
Pearcy, M.J. and Hindle, R.J., 1989. New non-invasive three-dimensional measurement of human back movement. Clinical Biomechanics, 4, 73–79.
Perry, M., et al., 2008. Reliability of sagittal photographic spinal posture assessment in adolescents. Advances in Physiotherapy, 10, 66–75.
Poitras, S., et al., 2005. Management of work-related low back pain: a population-based survey of physical therapists. Physical Therapy, 85, 1168–1181.
Pownall, P., Moran, R., and Stewart, A., 2008. Consistency of standing and seated posture of asymptomatic male adults over a one-week interval: a digital camera analysis of multiple landmarks. International Journal of Osteopathic Medicine, 11, 43–51.
Rankin, G. and Stokes, M., 1998. Reliability of assessment tools in rehabilitation: an illustration of appropriate statistical analyses. Clinical Rehabilitation, 12, 187–199.
Reeve, A. and Dilley, A., 2009. Effects of posture on the thickness of transversus abdominis in pain-free subjects. Manual Therapy, 14, 679–684.
Ripani, M., et al., 2008. Spinal curvature: comparison of frontal measurements with the Spinal Mouse and radiographic assessment. The Journal of Sports Medicine and Physical Fitness, 48, 488.
Schuit, D., et al., 1997.
Validity and reliability of measures obtained from the OSI CA-6000 Spine Motion Analyzer for lumbar spinal motion. Manual Therapy, 2, 206–215.
Schuit, D., et al., 2004. Interrater reliability for determination of upright lumbar spinal position. Physical Therapy 2004: The annual conference and exposition of the APTA. Chicago, IL, USA. http://www.apta.org/AM/abstracts/pt2004/Abs04Authindex.ofm [Accessed 22 September 2010].
Smith, A., O'Sullivan, P., and Straker, L., 2008. Classification of sagittal thoraco-lumbo-pelvic alignment of the adolescent spine in standing and its relationship to low back pain. Spine, 33, 2101–2107.
Spielholz, P., et al., 2001. Comparison of self-report, video observation and direct measurement methods for upper extremity musculoskeletal disorder physical risk factors. Ergonomics, 44, 588–613.
Straker, L., et al., 2009. Relationships between prolonged neck/shoulder pain and sitting spinal posture in male and female adolescents. Manual Therapy, 14, 321–329.
Troke, M., Moore, A.P., and Cheek, E., 1996. Intra-operator and inter-operator reliability of the OSI CA6000 Spine Motion Analyser with a new skin fixation system. Manual Therapy, 1, 92–98.
Troke, M., Schuit, D., and Petersen, C., 2007. Reliability of lumbar spinal palpation, range of motion, and determination of position. BMC Musculoskeletal Disorders, 8, 103.
van Dillen, L., et al., 2003a. The effect of modifying patient-preferred spinal movement and alignment during symptom testing in patients with low back pain: a preliminary report. Archives of Physical Medicine & Rehabilitation, 84, 313–322.
van Dillen, L., et al., 2003b. Movement system impairment-based categories for low back pain: stage 1 validation. Journal of Orthopaedic and Sports Physical Therapy, 33, 126–142.
van Wyk, P., et al., 2009. Determining the optimal size for posture categories used in video-based posture assessment methods. Ergonomics, 52, 921–930.
Walsh, P., et al., 2006.
Marker-based monitoring of seated spinal posture using a calibrated single-variable threshold model. In: Conference Proceedings of the IEEE Engineering in Medicine and Biology Society, 1, 5370–5373.
Womersley, L. and May, S., 2006. Sitting posture of subjects with postural backache. Journal of Manipulative and Physiological Therapeutics, 29, 213–218.
Wong, M., Wong, W., and Lo, K., 2007. Clinical applications of sensors for human posture and movement analysis: a review. Prosthetics and Orthotics International, 31, 62–75.
Wong, W. and Wong, M., 2008. Detecting spinal posture change in sitting positions with tri-axial accelerometers. Gait and Posture, 27, 168–171.
Zusman, M., 2008. Associative memory for movement-evoked chronic back pain and its extinction with musculoskeletal physiotherapy. Physical Therapy Reviews, 13, 57–68.

Optical Elastography and Measurement of Tissue Biomechanics

Ruikang K. Wang, David D. Sampson, Stephen A. Boppart, Brendan F. Kennedy

[Downloaded From: https://www.spiedigitallibrary.org/journals/Journal-of-Biomedical-Optics on 05 Apr 2021]
These methods continue to develop, are available commercially, and are finding ever-growing application in the clinical and biological sciences. Although promising, these methods offer limited spatial resolution to detect small structures and lesions. Over the last several years, the exciting prospect of adapting optical imaging methods, such as optical coherence tomography, to elastography has been investigated with increasing vigor. Due to its superior spatial resolution, optical elastography can be used to image the mechanical properties of biological tissues on a scale that cannot be achieved with other competing elastography modalities, e.g., ultrasound and magnetic resonance imaging. In addition, the exquisite sub-nanometer displacement sensitivity afforded by optical methods can enable the detection of more subtle changes in mechanical properties, potentially enabling disease to be imaged at an earlier stage than is currently feasible. Furthermore, realizing such methods using fiber optics allows for the development of compact endoscopic elastography probes for application in areas such as cardiology and pulmonology. In this special section, fourteen papers highlighting the latest technical developments in optical elastography and measurement of tissue biomechanics are presented. Optical elastography methods are presented based on a number of imaging modalities: optical coherence tomography, digital photography, ultrasound-modulated optical tomography, and laser speckle imaging. These methods are being developed for a range of medical applications, including cardiovascular disease, breast cancer, ophthalmology, and skin disease. A key element of elastography is the mechanism used to impart a mechanical load to the tissue being assessed. A number of methods are employed that generate either internal or external loads, and these loading mechanisms are applied either quasi-statically or dynamically.
In this special section, papers are presented utilizing an external actuator [K. Kennedy et al., J. Biomed. Opt. 18(12), 121508; S. Song et al., J. Biomed. Opt., 121509], acoustic radiation force [Y. Cheng et al., J. Biomed. Opt. 18(12), 121511], motion of magnetic nanoparticles [V. Crecea et al., J. Biomed. Opt. 18(12), 121504], needle insertion [K. Kennedy et al., J. Biomed. Opt. 18(12), 121510] and air-pulses [J. Li et al., J. Biomed. Opt. 18(12), 121503]. Novel methods to measure tissue displacement in response to mechanical loading are investigated in several papers. Methods presented include advances in phase-sensitive detection [S. Song et al., J. Biomed. Opt. 18(12), 121505], digital image correlation [C. Sun et al., J. Biomed. Opt. 18(12), 121515; J. Fu et al., J. Biomed. Opt. 18(12), 121512] and surface tracking [L. Coutts et al., J. Biomed. Opt. 18(12), 121513]. The response of tissue to loads imparted by physiological processes is an exciting aspect of tissue biomechanics measurements. In this special section, two optical coherence tomography-based methods are presented to measure natural motion within the human eye [N. Dragostinoff et al., J. Biomed. Opt. 18(12), 121502; K. O'Hara et al., J. Biomed. Opt. 18(12), 121506]. The special section incorporates two invited papers. In the first, A. Nahas et al. [J. Biomed. Opt. 18(12), 121514] present a combination of transient elastography with full-field optical coherence tomography, which provides higher spatial resolution than existing optical coherence elastography methods. In the second, S. Nadkarni [J. Biomed. Opt. 18(12), 121507] presents a review of optical methods for measuring the mechanical properties of arterial tissue.
The advances being made in optical elastography and measurement of tissue biomechanics, as highlighted in this special section, suggest that these methods may well follow the precedent set by ultrasound and magnetic resonance imaging in translation of laboratory techniques to clinical and biological practice. The papers herein represent important steps along this path.

Ruikang K. Wang, University of Washington
David D. Sampson, The University of Western Australia
Stephen A. Boppart, University of Illinois at Urbana-Champaign
Brendan F. Kennedy, The University of Western Australia
Special Section Guest Editors

Journal of Biomedical Optics 121501-1, December 2013, Vol. 18(12), Special Section Guest Editorial

ORIGINAL PAPER

Photographic assessment of temperate forest understory phenology in relation to springtime meteorological drivers

Liang Liang & Mark D. Schwartz & Songlin Fei
Received: 2 April 2010 / Revised: 22 February 2011 / Accepted: 22 February 2011
© ISB 2011

Abstract Phenology shows sensitive responses to seasonal changes in atmospheric conditions. Forest understory phenology, in particular, is a crucial component of the forest ecosystem that interacts with meteorological factors and with ecosystem functions such as carbon exchange and nutrient cycling. Quantifying understory phenology is challenging due to the multiplicity of species and heterogeneous spatial distribution. The use of digital photography for assessing forest understory phenology was systematically tested in this study within a temperate forest during spring 2007. Five phenology metrics (phenometrics) were extracted from digital photos using three band algebra and two greenness percentage (image binarization) methods.
Phenometrics were compared with a comprehensive suite of concurrent meteorological variables. Results show that greenness percentage cover approaches were relatively robust in capturing forest understory green-up. Derived spring phenology of understory plants responded to accumulated air temperature as anticipated, with day-to-day changes strongly affected by estimated moisture availability. This study suggests that visible-light photographic assessment is useful for efficient forest understory phenology monitoring and allows more comprehensive data collection in support of ecosystem/land surface models.

Keywords: Landscape phenology · Forest understory · Temperate forest · Digital photography · EOS Land Validation Core Sites

Introduction

Phenology is the study of periodical plant and animal life cycle events as influenced by environmental factors, especially weather and climate (Schwartz 2003). Weather fluctuations and long-term climate change constantly affect the timing behaviors of biological systems. Assessments of global change impacts have identified phenology as a key indicator of ecosystem alteration (IPCC 2007; Morisette et al. 2009). Temperate forests, in particular, are important for assessing changes of ecosystem properties given the typically strong seasonality that is integral to annual temperature regimes of mid-latitude land regions. Further, understory plants form an important component influencing the general composition, nutrient cycling, and health of forests (Fei and Steiner 2008; Gilliam and Roberts 2003; Kaeser et al. 2008; Yarie 1980) and contribute to ecosystem carbon and water exchange (Drewitt et al. 2002; Pfitsch and Pearcy 1989). Phenology of understory plants is influenced by micro-environmental conditions different from those of tree canopies, and could potentially respond to climate change in unique ways within forests (Beatty 1984; Kudo et al. 2008; Rich et al. 2008).
In addition, high resolution measurement of understory phenological progression allows for a more comprehensive characterization of forest vegetation dynamics, and enhances the accuracy of validation efforts for satellite-based vegetation indices (Liang and Schwartz 2009).

L. Liang (*), Department of Forestry, University of Kentucky, 223 T.P. Cooper Bldg, Lexington, KY 40546-0073, USA; e-mail: liang.liang@uky.edu
M. D. Schwartz, Department of Geography, University of Wisconsin-Milwaukee, P.O. Box 413, Milwaukee, WI 53201-0413, USA; e-mail: mds@uwm.edu
S. Fei, Department of Forestry, University of Kentucky, 204 T.P. Cooper Bldg, Lexington, KY 40546-0073, USA; e-mail: songlin.fei@uky.edu
Int J Biometeorol, DOI 10.1007/s00484-011-0438-1

Given all these important aspects, more in-depth investigations are needed to further understanding of understory phenology in relation to forest ecosystem changes. Also, being able to detect detailed patterns of micrometeorological factors that influence forest understory phenology is crucial for gaining insight concerning how forest ecosystem structure might respond to global climate change.

Phenology of typical understory plants, such as grass and herb species, can be described in detail according to their respective growth stages, given sufficient observation time (Henebry 2003; Meier 1997). However, it is difficult to conduct quick repetitive observations with detailed descriptive protocols under field conditions, due to the multiplicity of species intertwined within a small area, diverse morphologies of plants, distribution heterogeneity over the forest floor, and human resource constraints. Therefore, assessment using digital cameras could provide a convenient approach to quickly capturing the green-up sequence of understory plants. In recent years, the use of networked digital cameras has been employed for regularly recording phenological changes of forest canopies (Ahrends et al. 2009; Richardson et al. 2009).
Automated camera systems have also been used to record phenology of selected understory species of interest in managed environments (Crimmins and Crimmins 2008; Graham et al. 2009). To our knowledge, systematic sampling of natural forest understory phenology has only rarely been conducted, partly due to a lack of efficient approaches that would allow observers to conduct recurrent measurement and analysis for high resolution samples with desired accuracy, given the challenges mentioned above. In this study, we systematically recorded high resolution understory phenology using frequent digital photography, and tested different methods for deriving phenological metrics (phenometrics) from digital photos. Using selected phenometrics, we further examined springtime understory phenology in the context of its meteorological drivers through a focused spatiotemporal analysis.

Materials and methods

Study site description

The field work was conducted in the Chequamegon National Forest, near Park Falls, Wisconsin. The study site is located in the vicinity of a television tower (WLEF, 45.946°N, 90.272°W) which also supports AmeriFlux eddy covariance instruments, and is an Earth Observing System (EOS) Land Validation Core Site. The gently sloping topography underlying the forest is covered with soils developed from parent materials of glacial-fluvial deposits (Martin 1965). The regional climate is humid continental, with an annual average temperature of 4.8°C and annual precipitation around 810 mm (Wisconsin Online: http://www.wisconline.com/counties/price/climate.html). Forests in this limited study area are primarily composed of: (1) aspen/fir forest dominated by quaking aspen (Populus tremuloides) and balsam fir (Abies balsamea), with other species such as red maple (Acer rubrum) and white birch (Betula papyrifera); and (2) forested wetlands occupied mainly by white cedar (Thuja occidentalis), balsam fir, and speckled alder (Alnus rugosa) (Ewers et al. 2002).
More than 100 understory species were identified in the forest under study (Brosofske et al. 2001). The composition of understory cover ranges from grass, sedge, and herbaceous species in drier habitats, to sphagnum moss in moister habitats, often mixed with seedlings and saplings of woody species. Primary understory plants observed include species such as Canada mayflower (Maianthemum canadense), bunchberry dogwood (Cornus canadensis), dewberry (Rubus spp.), starflower (Trientalis borealis), barren strawberry (Waldsteinia spp.), wild strawberry (Fragaria vesca), groundpine (Lycopodium dendroideum), Virginia creeper (Parthenocissus quinquefolia), goldenrod (Solidago spp.), violet (Viola spp.), bedstraw (Galium spp.), peat moss (Sphagnum spp.), blueberry (Vaccinium spp.), woodfern (Dryopteris spp.), ladyfern (Athyrium spp.), panic grass (Panicum spp.), and a variety of sedges (Carex spp.). Seedlings and saplings found to mix with other understory plants were mainly red maple, balsam fir, and quaking aspen.

Phenology data collection

Field observation of understory vegetation phenology within the forest was conducted in spring 2007. Ground sampling was done within an approximately 625×275 m area (Fig. 1). Twenty-one 1×1 m plots were set on selected spots along existing transects outlined according to a cyclic sampling design (Burrows et al. 2002; Liang and Schwartz 2009). Sixteen of these plots followed an equally spaced systematic sampling scheme along four transects with 175-m sampling intervals. The remaining five plots were deployed randomly at additional locations empirically identified as representative ground cover types, to increase the representativeness of landscape heterogeneity. Each understory plot was laid out using a PVC pipe "square" and marked with stakes at four corners. Digital photography was used to obtain phenological signals that represent the aggregated reflectance of multi-species green-up.
Given previously mentioned challenges in regards to collecting phenological information for individual understory species, we treated the fixed-size understory plots as basic observation units. Field work was conducted during the morning or before early afternoon (1300 hours) of every other day from April 29 to May 25, 2007 (14 days in total). A consumer-grade 5 MP Kodak DX4530 visible-light digital camera was used. The camera had firmware version 1.0 installed and supported automatic white balance. The camera mode was set to automatic with auto-exposure enabled. Nadir-pointing images were taken over every plot at 1.5 m above the ground. The observer took care to make the images consistent position-wise under varying field conditions, and to take all images from the same height and at zenith above the ground (a sample image is shown in Fig. 2).

Meteorological measurements

Micrometeorological data were collected with HOBO data loggers (Onset Computer) deployed at 27 locations across the study area (Fig. 1). These data loggers were set to take temperature and humidity measurements of the ambient air (at standard 1.5 m shelter height) at 10-min reading intervals. Recorded data were stored in memory within each logger for later retrieval. HOBO recording was initiated on April 5, 2007, prior to the field campaign, and the loggers were recovered on the last day of observation (May 25, 2007). In addition to the high resolution meteorological measurements, daily precipitation data from the closest official weather station at Park Falls (approximately 8 miles, c. 13 km, west of the study site) were acquired through the National Climatic Data Center (NCDC: http://www.ncdc.noaa.gov/oa/ncdc.html). The time span of these data ran from January 1 to May 31, 2007.
Precipitation data were mainly used to compare with air humidity measurements, which supported high resolution analysis.

Phenometrics derivation

A visible-light digital camera allows assessments of vegetation greening, despite the lack of a near infrared spectral band that is often indispensable for vegetation remote sensing (Jensen 2000; Lukina et al. 1999). Digital images taken with a visible-light camera natively contain separable red-green-blue (RGB) color bands. Band algebras and percentage cover estimates (image binarization) can hence be used to derive signals of plant greenness, and in turn to characterize plant phenology (Ahrends et al. 2008; Graham et al. 2006; Lukina et al. 1999; Richardson et al. 2007). Besides using native RGB bands, a recent study suggested an alternative approach based on hue-saturation-luminance (HSL) color space, which may provide more accurate estimation of plant color change by separating luminance/brightness (potentially affected by illumination conditions) from the hue (Graham et al. 2009).

[Fig. 1 Locations of understory plots and HOBO loggers overlaid on a QuickBird image-derived Normalized Difference Vegetation Index (NDVI) map showing vegetation cover conditions on May 18, 2007]

[Fig. 2 A sample digital photo of an understory plot taken on May 25, 2007; cropped to contain the 1×1 m plot area (original photo is in color, presented here in gray scale)]

The HSL color space is represented with a cylinder model. The angle around the central axis of the cylinder corresponds to hue, which represents the color or wavelength of the pixel; the distance towards the axis corresponds to saturation, which represents the purity of color; and the height of the cylinder corresponds to lightness (also referred to as intensity), which is the overall brightness of the scene as varying from black to white.
The HSL color space representation can be translated from RGB color bands (Conrac Corporation 1980; ERDAS 2008; Pratt 2001) using the following equations:

r = (M − R) / (M − m);  g = (M − G) / (M − m);  b = (M − B) / (M − m)   (1)

where R, G, B are each in the range of 0–1 (converted from digital numbers); r, g, b are each in the range of 0–1 (at least one of the r, g, or b values is 0, and at least one of the r, g, or b values is 1); M is the maximum value of R, G, or B; and m is the minimum value of R, G, or B. Then, the lightness (L) in the range of 0–1 is computed as:

L = (M + m) / 2   (2)

The saturation (S) in the range of 0–1 is calculated as:

S = 0, if M = m   (3)
S = (M − m) / (M + m), if L ≤ 0.5   (4)
S = (M − m) / (2 − M − m), if L > 0.5   (5)

Finally, the hue (H) in the range of 0–360 is calculated as:

H = 0, if M = m   (6)
H = 60 (2 + b − g), if R = M   (7)
H = 60 (4 + r − b), if G = M   (8)
H = 60 (6 + g − r), if B = M   (9)

Preprocessing of digital images was conducted with the Paint.NET program. ERDAS Imagine 9.2 (ERDAS 2008) was used to perform band algebra, color space transformation, and image analysis. Each digital image was cropped to include only the 1 m² plot area marked out by the four corner sticks. Three selected methods of combining RGB bands for phenology signal extraction were tested: (1) G/R (Graham et al. 2006); (2) 2G−R−B (Richardson et al. 2007; Woebbecke et al. 1995); and (3) (G−R)/(G+R) (Brügger et al. 2007). Mean values of the band algebra estimates were calculated for all pixels within each plot area. The percentage of green pixels was estimated according to the concept of image binarization, which has wide applications in the field of pattern recognition and has been introduced into agricultural practices (Tellaeche et al. 2008; Tian and Slaughter 1998). In this study, simple threshold-based approaches were used to separate green cover from the background. For RGB bands, the criterion G > R and G > B was applied to discriminate green pixels from the background.
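Equations (1)–(9) can be collected into a single per-pixel conversion routine. The following is a minimal sketch (the function name and scalar form are ours, not the paper's) that follows the equations literally; note that under this convention a pure green pixel maps to a hue of 240, the center of the green band (192–288) applied later in the study.

```python
def rgb_to_hsl(R, G, B):
    """Translate R, G, B values in [0, 1] to (H, S, L) following
    Eqs. (1)-(9): H in [0, 360), S and L in [0, 1]."""
    M, m = max(R, G, B), min(R, G, B)
    L = (M + m) / 2.0                                   # Eq. (2)
    if M == m:                                          # achromatic pixel
        return 0.0, 0.0, L                              # Eqs. (3) and (6)
    r = (M - R) / (M - m)                               # Eq. (1)
    g = (M - G) / (M - m)
    b = (M - B) / (M - m)
    # Eqs. (4)-(5)
    S = (M - m) / (M + m) if L <= 0.5 else (M - m) / (2.0 - M - m)
    if R == M:
        H = 60.0 * (2.0 + b - g)                        # Eq. (7)
    elif G == M:
        H = 60.0 * (4.0 + r - b)                        # Eq. (8)
    else:
        H = 60.0 * (6.0 + g - r)                        # Eq. (9)
    return H, S, L
```

For example, rgb_to_hsl(0.0, 1.0, 0.0) returns (240.0, 1.0, 0.5).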
The HSL-based approach first converted RGB to HSL color space; then thresholds (above 192 and below 288) for green color within the hue value range (0–360) were applied. The results from both RGB- and HSL-based methods were binary images with green pixels distinguished from the background. Areal percentages of green pixels were then computed for all binary images.

Evaluating phenometrics

Inter-comparison of the resultant time series of the five phenometrics was made in regards to their different temporal variability. Standardization to a 0–1 ratio was applied to averaged time series using the algorithm: (phenometric value − phenometric minimum) / (phenometric maximum − phenometric minimum). In addition to direct comparison, bi-daily increments of phenometric values were computed and compared to one another accordingly. Coefficients of variation (CV) were computed for bi-daily changes of phenometrics. We evaluated stability, which in this context indicates the performance of a phenometric in regards to capturing a realistic phenological trend. In particular, this trend refers to the green-up process, which normally advances without backward development during springtime, unless affected by disturbances such as insect outbursts or grazing, or by excessive deprivation of required resources. Our basic assumption is that a stable phenometric representation should reflect a steady increase in greenness during the early spring under disturbance-free conditions. Throughout the observation time period, no disturbances from wildlife were noticed and all plots remained intact. Given these considerations, we expected the phenometrics derived to match the consistently upturning spring green-up process. Consequently, downward departures from a previous greenness level, as indicated by negative bi-daily changes, would suggest measurement instability.
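The two binarization rules and the 0–1 standardization just described can be sketched in a few lines. This is a hypothetical pure-Python illustration (function names and data layout are ours, standing in for the ERDAS Imagine workflow), operating on flat pixel lists rather than images:

```python
def greenness_percent_rgb(pixels):
    """Percent of (R, G, B) pixels judged green by the RGB criterion
    G > R and G > B; pixels is a flat list of (R, G, B) tuples."""
    green = sum(1 for (R, G, B) in pixels if G > R and G > B)
    return 100.0 * green / len(pixels)

def greenness_percent_hsl(hues):
    """Percent of pixels whose hue (range 0-360) falls inside the
    green band (above 192 and below 288) used in the study."""
    green = sum(1 for h in hues if 192 < h < 288)
    return 100.0 * green / len(hues)

def standardize(series):
    """Rescale a phenometric time series to the 0-1 ratio
    (value - minimum) / (maximum - minimum)."""
    lo, hi = min(series), max(series)
    return [(v - lo) / (hi - lo) for v in series]
```

A time series standardized this way starts at 0 at its seasonal minimum and reaches 1 at its maximum, which is what permits the unified-scale comparison of the five metrics.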
We therefore used this criterion to infer the relative robustness of the five phenometrics against potential errors and measurement uncertainties. More stable phenometrics were selected in support of subsequent analysis regarding meteorological drivers.

Processing meteorological variables

High frequency/intensity temperature and humidity measurements allowed detailed characterization of micrometeorological conditions. We first computed temperature and humidity indices (daily maximum, mean, and minimum) averaged over the entire study area. These indices were compared with precipitation data recorded at the nearby official weather station. Pearson correlation coefficients were calculated for precipitation and air humidity indices, including both absolute humidity and relative humidity. For high resolution analysis, field-measured temperature, absolute humidity, and relative humidity data were interpolated to understory plot locations using ordinary kriging. Daily temperature and humidity indices (maximum, mean, and minimum) for each plot were then computed. Accumulated growing degree hours (AGDH) were calculated from hourly mean temperature based on a −0.6°C threshold (Schwartz 1997), starting from April 5, 2007. It was assumed that the chilling requirement necessary for dormancy release (Schwartz 1997; Simpson 1990) was fulfilled prior to this date. In order to perform a more comprehensive examination of potential meteorological drivers, air water potential and vapor pressure deficit were calculated from temperature and relative humidity (Buck 1981; Kirkham 2004; Lambers et al. 1998), and included in the analysis. Accumulated humidity indices were derived from hourly means of measurements in like manner as AGDH. Given that understory observations were taken bi-daily, "cross-day change" (a prior day's measure minus that of the day before) of all meteorological variables was calculated.
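The paper cites Schwartz (1997) for the degree-hour formulation without restating it; the sketch below assumes the common form in which each hourly mean temperature contributes its excess above the base threshold, and hours at or below the base add nothing. Function names are ours.

```python
def growing_degree_hours(hourly_temps, base=-0.6):
    """Degree-hours for one day: sum of hourly mean temperatures'
    excess above the base threshold (-0.6 deg C in the study)."""
    return sum(t - base for t in hourly_temps if t > base)

def accumulated_gdh(days, base=-0.6):
    """Running AGDH series over a list of days, each a sequence of
    hourly mean temperatures; accumulation starts at the first day
    supplied (April 5, 2007 in the study)."""
    total, series = 0.0, []
    for hours in days:
        total += growing_degree_hours(hours, base)
        series.append(total)
    return series
```

The accumulated humidity indices (AHcum, RHcum, WPcum, VPDcum) would follow the same accumulation pattern over hourly means, per the text.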
Lastly, antecedent 2-day accumulations of temperature and humidity were computed in order to provide a short-term analysis of accumulated meteorological conditions. A list of the meteorological variables used in the study is provided in Table 1. Climate data interpolation, processing, and analysis were performed using R (R Development Core Team 2009).

Examining meteorological drivers

Understory phenology was compared with both long-term and short-term meteorological conditions as described by a comprehensive suite of variables (see Table 1 for details). In addition to cumulative phenology, changes of phenometric values between two consecutive bi-daily observations were calculated for each plot to correspond with day-to-day weather fluctuations. Meteorological drivers of understory phenology were examined using linear multiple regression analysis. At the plot level, our field data represented a panel data structure with multi-date observations from a group of fixed targets, with both temporal and spatial dimensions. In order to accommodate such a data structure, linear multiple regression models were fitted to data using the panel linear model (plm) package in R (Croissant and Millo 2008). Phenometrics at each plot were self-standardized by dividing an original value by the site-specific range (i.e., site maximum minus site minimum) to reduce site variability related to species composition diversity. Both standardized and raw datasets were regressed against meteorological variables. For cumulative phenological time series, accumulated hourly means of temperature, absolute humidity, relative humidity, water potential, and vapor pressure deficit were used as candidate explanatory variables. Phenological changes between two consecutive observations were modeled with additional variables representing daily weather conditions, antecedent weather conditions, and cross-day weather changes.
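The site-specific self-standardization is straightforward to state in code; the panel regression itself was fitted with the plm package in R and is not reproduced here. A minimal sketch, with a hypothetical plot-id-to-series mapping as input:

```python
def self_standardize(panel):
    """Divide each plot's phenometric series by that plot's own range
    (site maximum minus site minimum), reducing between-site
    variability tied to species composition; panel maps a plot id
    to its list of bi-daily phenometric values."""
    out = {}
    for plot, values in panel.items():
        site_range = max(values) - min(values)
        out[plot] = [v / site_range for v in values]
    return out
```

Note that, unlike the 0–1 standardization used for the averaged time series, this division rescales only by the range and does not subtract the site minimum, so values need not start at 0.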
Variable subset selection was first conducted for each variable group (such as temperature-based variables or absolute humidity-based variables) using an exhaustive search approach supported by an R package, leaps (Miller 2002). The leaps package returns variable subsets with data multicollinearity minimized. The Bayesian information criterion (BIC, provided in the leaps package in R) was used to determine the best variable subsets while avoiding overfitting. Variables chosen from different groups were combined for a further-step selection via the same approach as described above. Final subsets of variables were then used to fit linear panel data regression models. A logarithmic transformation was applied to selected variables before model building to correct heteroscedasticity in the data. The Hausman test (provided with the plm package in R) was performed to decide whether a fixed or random effects sub-model was appropriate. Finally, models with all coefficients significant at the 99% confidence level were retained. In addition to plot level analysis, phenometrics averaged across plots over the entire study area were compared with averaged meteorological variables using ordinary least squares linear models. Averaged variables represented overall conditions of the entire landscape; therefore, they are also referred to as landscape-level measures.

Results

Comparison of phenometrics

The five derived phenometrics demonstrated differing degrees of variability over time (Fig. 3; Table 2). Standardization of phenometric time series to a range between 0 and 1 allowed effective visual contrast in a unified scale frame, emphasizing their respective temporal response patterns. The among-metric differences were clearly seen in the variability of their bi-daily changes (Fig. 4).
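As an illustration of the selection step, exhaustive best-subset search under BIC can be sketched as follows. This is not the leaps implementation, just a small pure-Python analogue (helper names ours) that fits OLS with an intercept via the normal equations and scores each candidate subset with BIC = n·log(RSS/n) + k·log(n):

```python
import itertools
import math

def _ols_rss(y, cols):
    """Residual sum of squares of an OLS fit with intercept,
    solving the normal equations by Gaussian elimination."""
    X = [[1.0] + [c[i] for c in cols] for i in range(len(y))]
    k = len(X[0])
    A = [[sum(r[i] * r[j] for r in X) for j in range(k)] for i in range(k)]
    b = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(k)]
    for i in range(k):                       # forward elimination with pivoting
        p = max(range(i, k), key=lambda r: abs(A[r][i]))
        A[i], A[p] = A[p], A[i]
        b[i], b[p] = b[p], b[i]
        for r in range(i + 1, k):
            f = A[r][i] / A[i][i]
            for c in range(i, k):
                A[r][c] -= f * A[i][c]
            b[r] -= f * b[i]
    beta = [0.0] * k
    for i in range(k - 1, -1, -1):           # back substitution
        beta[i] = (b[i] - sum(A[i][j] * beta[j] for j in range(i + 1, k))) / A[i][i]
    return sum((yi - sum(bj * xj for bj, xj in zip(beta, row))) ** 2
               for yi, row in zip(y, X))

def best_subset_bic(y, predictors):
    """Exhaustive search over all predictor subsets (as leaps does in R),
    returning the names of the subset with the lowest BIC."""
    n = len(y)
    best_bic, best_names = float("inf"), ()
    names = sorted(predictors)
    for r in range(1, len(names) + 1):
        for combo in itertools.combinations(names, r):
            # guard against log(0) for numerically perfect fits
            rss = max(_ols_rss(y, [predictors[c] for c in combo]), 1e-12)
            bic = n * math.log(rss / n) + (r + 1) * math.log(n)
            if bic < best_bic:
                best_bic, best_names = bic, combo
    return best_names
```

Because the k·log(n) penalty grows with subset size, a redundant predictor that barely lowers the RSS is rejected, which is the overfitting control the text attributes to BIC.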
Among the band algebra-based phenometrics, the bi-daily advance of band differencing (2G−R−B, excess green) and of the normalized difference ratio ((G−R)/(G+R)) showed frequent negative change values over time. Coefficients of variation (CV) of these two phenometrics were 268.03% and 178.85%, respectively, greater than those of the other three phenometrics. The simple band ratio (G/R) time series demonstrated negative departures as well, yet to a smaller degree compared with the last two metrics, with a CV value of 128.42%. As detailed previously, such phenomena likely indicate instability of these band algebra-based metrics against potential noise and measurement uncertainty, given that observed plant phenology consistently advanced during the study time period. The two greenness percentage measures demonstrated mostly positive increments over time, with smaller CV values (103.30% and 102.58%, respectively) compared to the band algebra-based phenometrics. And, noticeably, the RGB-based and HSL-based greenness percentage metrics resemble each other in regards to the phenological progression and patterns of bi-daily change. Hence, the two greenness percentage based phenometrics appeared to be more robust against potential errors in capturing a realistic green-up process, and were adopted for subsequent evaluation of meteorological drivers.

Meteorological drivers of understory phenology

Humidity variables correlated significantly with weather station precipitation records (Table 3). Correlation coefficients with precipitation for both mean absolute humidity and mean relative humidity (r=0.44 and 0.61, respectively) were significant at the 95% confidence level. In particular, maximum absolute humidity (AHmax) and minimum relative humidity (RHmin) showed the highest correlations with precipitation (r=0.68 and 0.68, respectively; significant at the 99% confidence level).
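The correlation screening above uses the ordinary Pearson coefficient; for completeness, a minimal sketch (ours, not the authors' R code):

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length
    series, e.g. daily AHmax against station precipitation."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)
```

Applied to the daily humidity indices and the Park Falls precipitation record, this is the computation behind entries such as r=0.68 in Table 3.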
Further, humidity fluctuations appeared to correspond with the timing and magnitude of precipitation occurrences on day-of-year (DOY) 128, 134–137, 139, 141, and 143–145 (Fig. 5). Given the significant correlations and temporal correspondence between precipitation and air humidity, field-measured humidity data were used in characterizing the general moisture availability in relation to understory growth in the below-canopy environment for high resolution analysis. Besides, peak temperatures were found to occur often on the first days of rainy periods (such as DOY 134 and 143), followed by immediate temperature declines.

[Fig. 3 Spring time series of understory phenometrics averaged across plots; derived from five different photographic methods: 1) RGB-based greenness percentage; 2) (G−R)/(G+R); 3) 2G−R−B; 4) HSL-based greenness percentage; and 5) G/R ratio (all phenometric values were standardized)]

Table 1 Summary of meteorological variables used in the analysis

                               Temperature (°C)  Absolute humidity (g/m³)  Relative humidity (%)  Air water potential (MPa)  Vapor pressure deficit (kPa)
Daily mean                     Tmean             AHmean                    RHmean                 WPmean                     VPDmean
Daily minimum                  Tmin              AHmin                     RHmin                  WPmin                      VPDmin
Daily maximum                  Tmax              AHmax                     RHmax                  WPmax                      VPDmax
Daily sum of hourly means      GDH               AHsum                     RHsum                  WPsum                      VPDsum
Accumulated hourly means       AGDH              AHcum                     RHcum                  WPcum                      VPDcum
"Cross-day change" (a prior    TmeanCg           AHmeanCg                  RHmeanCg               WPmeanCg                   VPDmeanCg
day's measure minus that of    TminCg            AHminCg                   RHminCg                WPminCg                    VPDminCg
the day before)                TmaxCg            AHmaxCg                   RHmaxCg                WPmaxCg                    VPDmaxCg
                               GDHCg             AHsumCg                   RHsumCg                WPsumCg                    VPDsumCg
Antecedent 2-day accumulation  GDHPre2           AHsumPre2                 RHsumPre2              WPsumPre2                  VPDsumPre2

T temperature, AH absolute humidity, RH relative humidity, WP water potential, VPD vapor pressure deficit; Cg "cross-day change" indicates the meteorological change that occurred from the day before yesterday to yesterday with regard to an observation day
In addition, bi-daily increments of phenometric values demonstrated temporal correspondence with increases of precipitation and humidity (Figs. 4 and 5). Especially for the greenness percentage based phenometrics, accelerated increments were found on DOY 135 and 145, which were preceded by DOY 134 and 144, when considerable amounts of precipitation occurred. An exception was that, during the early part of the spring season, a single precipitation event on DOY 128 did not seem to match up with any observed phenological advance.

For cumulative phenology, fitted panel linear models implied that understory green-up responded strongly to both temperature and humidity. All models were significant at the 99% confidence level (Table 4). Results revealed that accumulated growing degree hours (AGDH) alone could drive a model with a coefficient of determination greater than 0.60. Yet the humidity-based variables, accumulated vapor pressure deficit (VPDcum) or accumulated absolute humidity (AHcum), were also significant explanatory variables. Adding humidity-based variables significantly raised the models' explanatory powers. RGB-based phenometrics were consistently better predicted with meteorological variables than HSL-based phenometrics. Further, standardization appeared to raise the explanatory power of models for RGB-based phenometrics in most cases, but only improved one case among models using HSL-based phenometrics. Plotting the identified meteorological drivers respectively against phenometrics, a linear dependency can be clearly seen in all cases (Fig. 6). Panel linear models fitted for incremental phenological change underperformed those of cumulative phenometrics at the plot level. A few plot-level incremental phenology models were significant at confidence levels up to 99%. However, all model coefficients of determination were below 0.20.
Therefore, analyzing meteorological drivers of phenological change relied on landscape-level models, fitted with phenometrics and meteorological variables averaged across the entire study area. The landscape-level values here simply indicate the averaged understory phenometrics and meteorological conditions, with inter-plot variations smoothed.

[Fig. 4 Bi-daily changes of spring time series of understory phenometrics averaged across plots as derived from the five different photographic methods; CV coefficient of variation]

Table 2 Averaged values of phenometrics across plots with standard deviation (σ)

DOY   RGB (%)        (G−R)/(G+R)    2G−R−B (DN)      HSL (%)        G/R
      Mean    σ      Mean    σ      Mean      σ      Mean    σ      Mean   σ
119   9.98    5.72   −0.09   0.04   −5.31     7.62   6.91    4.17   0.87   0.05
121   9.44    3.73   −0.07   0.03   −8.42     8.50   6.75    2.83   0.89   0.04
123   9.26    4.90   −0.08   0.03   −6.90     9.14   6.22    3.44   0.88   0.05
125   10.20   6.84   −0.06   0.01   −13.62    11.67  7.58    4.15   0.90   0.02
127   12.60   7.53   −0.05   0.01   −11.35    12.36  9.58    4.57   0.91   0.02
129   13.32   6.71   −0.06   0.03   −1.92     13.99  9.50    5.01   0.90   0.04
131   16.61   7.87   −0.06   0.03   4.26      11.63  12.12   6.11   0.91   0.06
133   17.94   9.64   −0.05   0.02   −2.97     12.67  14.55   7.32   0.92   0.04
135   23.02   9.26   −0.06   0.02   −0.19     17.11  18.70   7.20   0.91   0.03
137   24.18   11.51  −0.06   0.04   12.35     15.88  18.38   9.13   0.92   0.07
139   24.02   9.67   −0.05   0.01   2.46      17.01  19.14   8.18   0.93   0.02
141   28.91   13.82  −0.04   0.03   14.10     22.90  23.70   11.88  0.95   0.06
143   33.39   15.48  −0.03   0.03   19.70     28.76  28.28   12.67  0.97   0.06
145   40.57   15.25  −0.02   0.03   32.32     26.39  33.98   12.53  1.00   0.07

DOY day of year, DN digital number

According to landscape-level linear regression models, incremental phenology could be explained by AGDH, by AHcum, or by a combination of daily maximum vapor pressure deficit (VPDmax) and antecedent 2-day absolute humidity accumulation (AHsumPre2), respectively (Table 5). All the models were significant at the 99% confidence level.
The first two models, with either AGDH or AHcum as the predictor variable, were similar in their explanatory powers (R²=0.512–0.518). Models driven by VPDmax and AHsumPre2 were considerably stronger, with coefficients of determination improved by 0.17 to 0.30 in comparison with models using the long-term accumulated measures (AGDH or AHcum) as predictors. Investigated separately in scatter plots with linear fit lines, phenological change was positively dependent on AGDH, AHcum, or AHsumPre2, and negatively dependent on VPDmax (Fig. 7). Humidity-based variables could explain phenological change independently of AGDH. This was unlike the relationship with cumulative phenometrics, which required AGDH in all models. To summarize, cumulative phenology was reasonably well predicted by thermal time (AGDH), but as far as the increment of phenological progression is concerned, humidity-based indices manifested themselves as independent explanatory factors. In addition, we found that standardized phenometrics only out-performed non-standardized metrics for models driven by VPDmax and AHsumPre2, with reduced effects on the models driven by AGDH or AHcum. In the case of landscape-level analysis, RGB- and HSL-based phenometrics did not show marked differences overall as far as the explanatory power of models is concerned, except that HSL-based models performed better than the RGB-based for the standardized phenometrics.

Discussion

The greening of understory plants forms an important aspect of temperate forest seasonal dynamics. As shown in this study, digital photography is useful for more accurate repeated measurement of understory phenology. Among the five phenometrics derived from digital photos, greenness percentage estimates were more robust against potential errors than band algebra-based metrics, although band algebra-based methods may be indispensable for characterizing vegetation change over fully covered plots.
The potential errors that caused the instability of phenometrics may stem mainly from illumination variability across plots and uncertainties of the camera system. Different site canopy characteristics, sun angle/position, and cloud conditions at the time of observation unavoidably affected the reflectance from the understory. The digital camera with auto-exposure enabled also increased the inconsistency of photos taken for different plots at different times, yet it may have balanced out the variability of light conditions to a certain degree. Nevertheless, the greenness percentage approach appeared to be less sensitive to such uncertainties and provided more realistic phenological progression signals. Greenness percentage estimates from the RGB and HSL color spaces responded sensitively to meteorological variables. The HSL-based phenometric was considered more accurate given its ability to reduce errors caused by illumination variability (Graham et al. 2009). In this study, according to regression model performance, the RGB-based phenometric appeared to be more responsive to micrometeorological variations across plots than the HSL-based one. But among the landscape-level models created for averaged variables, the standardized HSL-based phenometric supported better predictions. Given the error sources discussed previously, as well as additional uncertainties from interpolation of climate data, further assessment is still needed to better compare phenometrics estimated using the two color models.

Fig. 5 Spring time series of daily meteorological measurements (PREC precipitation in mm, AHmax maximum absolute humidity in g/m3, RHmin minimum relative humidity in %, Tmean mean temperature in °C)

Table 3 Correlations between precipitation and air humidity variables. PREC precipitation, AH absolute humidity, RH relative humidity

PREC vs        AHmax    AHmin   AHmean   RHmax   RHmin    RHmean
Pearson's r    0.68     0.35    0.44     0.31    0.68     0.61
p value        <0.001   0.066   0.016    0.103   <0.001   <0.001
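To make the HSL-based percent-cover idea concrete, the sketch below converts each pixel to hue/lightness/saturation with Python's standard colorsys module and counts pixels whose hue falls in a green band. The hue and saturation thresholds here are illustrative placeholders, not the values used in the study.

```python
import colorsys
import numpy as np

def hsl_green_cover(rgb, hue_lo=0.2, hue_hi=0.45, min_sat=0.15):
    """Percent of pixels whose hue lies in an assumed green band.
    rgb: (H, W, 3) uint8 image; thresholds are illustrative."""
    flat = rgb.reshape(-1, 3) / 255.0
    green = 0
    for r, g, b in flat:
        h, l, s = colorsys.rgb_to_hls(r, g, b)  # colorsys returns (hue, lightness, saturation)
        if hue_lo <= h <= hue_hi and s >= min_sat:
            green += 1
    return 100.0 * green / len(flat)

# toy 1x2 image: leafy green vs. near-gray litter
img = np.array([[[40, 160, 60], [130, 120, 115]]], dtype=np.uint8)
cover = hsl_green_cover(img)
```

Because hue is largely decoupled from lightness, a hue threshold of this kind is less sensitive to exposure differences than a raw RGB threshold, which is the motivation cited for the HSL-based phenometric.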
The assessment of meteorological variables revealed detailed patterns of temperature and humidity in influencing understory plant phenology. Given the close relationship found between precipitation and air humidity, variables derived from the latter were used to indirectly indicate moisture availability.

Table 4 Panel linear models of plot-level phenometrics

Phenometric   Explanatory variables and coefficients     R2      p value
HSL           AGDH (1.540)                               0.650   <0.001
HSL           AGDH (4.890), VPDcum (−4.444)              0.694   <0.001
HSL           AGDH (−1.276), AHcum (3.018)               0.676   <0.001
HSL_S         AGDH (1.539)                               0.650   <0.001
HSL_S         AGDH (4.890), VPDcum (−4.444)              0.694   <0.001
HSL_S         AGDH (−1.616), AHcum (3.381)               0.695   <0.001
RGB           AGDH (1.345)                               0.715   <0.001
RGB           AGDH (4.417), VPDcum (−4.073)              0.755   <0.001
RGB           AGDH (−1.769), AHcum (3.337)               0.760   <0.001
RGB_S         AGDH (1.344)                               0.714   <0.001
RGB_S         AGDH (4.654), VPDcum (−4.388)              0.770   <0.001
RGB_S         AGDH (−1.887), AHcum (3.463)               0.774   <0.001

HSL-based phenometric (HSL) and RGB-based phenometric (RGB), both represented as greenness percentage cover. The postfix S indicates that the variable was standardized. The explanatory meteorological variables are: accumulated growing degree hours (AGDH), accumulated vapor pressure deficit (VPDcum), and accumulated absolute humidity (AHcum). All coefficients of explanatory variables were significant at the 99% confidence level (p values of respective coefficients are not presented here). A log transformation was applied to all variables to correct heteroscedasticity.

Fig. 6 Meteorological drivers of plot-level understory phenology: HSL- and RGB-based phenometrics in percentage against identified meteorological drivers: accumulated growing degree hours (AGDH), accumulated vapor pressure deficit (VPD, in kPa), and accumulated absolute humidity (AH, in g/m3); regression model functions, coefficients of determination and p values are provided. Heteroscedasticity shown in the data was corrected in the models using a log transformation.

Table 5 Linear models of landscape-level phenometrics averaged over the study area

Model                                                       R2      p value
HSL.inc = 0.0006 × AGDH − 2.963                             0.516   0.006
HSL.inc = 0.001 × AHcum − 2.947                             0.516   0.006
HSL.inc = −1.802 × VPDmax + 0.008 × AHsumPre2 + 1.448       0.690   0.003
HSL_S.inc = 0.00002 × AGDH − 0.107                          0.508   0.006
HSL_S.inc = 0.00004 × AHcum − 0.107                         0.514   0.006
HSL_S.inc = −0.065 × VPDmax + 0.0003 × AHsumPre2 + 0.035    0.794   <0.001
RGB.inc = 0.007 × AGDH − 3.370                              0.512   0.006
RGB.inc = 0.001 × AHcum − 3.382                             0.518   0.006
RGB.inc = −1.699 × VPDmax + 0.012 × AHsumPre2 + 0.110       0.750   <0.001
RGB_S.inc = 0.00002 × AGDH − 0.112                          0.479   0.009
RGB_S.inc = 0.00004 × AHcum − 0.113                         0.486   0.008
RGB_S.inc = 0.057 × VPDmax + 0.0004 × AHsumPre2 − 0.004     0.780   <0.001

HSL.inc the increment of the HSL-based phenometric, RGB.inc the increment of the RGB-based phenometric. The postfix S indicates that the variable was standardized. The explanatory meteorological variables are: accumulated growing degree hours (AGDH), accumulated absolute humidity (AHcum), maximum vapor pressure deficit (VPDmax), and antecedent 2-day absolute humidity accumulation (AHsumPre2).

Fig. 7 Meteorological drivers of landscape-level (averaged across plots) understory phenological change; HSL- and RGB-based phenometric change (between two consecutive bi-daily observations) in percentage plotted against meteorological drivers: accumulated growing degree hours (AGDH), accumulated absolute humidity (AH, in g/m3), daily maximum vapor pressure deficit (VPD, in kPa), and antecedent 2-day absolute humidity (AH, in g/m3) accumulation; regression functions, coefficients of determination and p values are provided.
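The landscape-level models in Table 5 are ordinary least-squares fits of the phenometric increment on one or two meteorological predictors. A minimal sketch of that model form follows, fitted on synthetic data (all coefficients and value ranges below are made up for illustration, not the study's):

```python
import numpy as np

rng = np.random.default_rng(42)

# synthetic landscape-level series: 14 bi-daily observations
vpd_max = rng.uniform(0.2, 1.5, 14)           # daily maximum VPD, kPa
ah_pre2 = rng.uniform(10.0, 60.0, 14)         # antecedent 2-day AH sum, g/m^3
increment = -1.7 * vpd_max + 0.01 * ah_pre2 + 0.1 + rng.normal(0, 0.02, 14)

# OLS fit: increment ~ VPDmax + AHsumPre2 (design matrix with intercept column)
X = np.column_stack([np.ones(14), vpd_max, ah_pre2])
beta, *_ = np.linalg.lstsq(X, increment, rcond=None)

# coefficient of determination for the fitted model
yhat = X @ beta
r2 = 1.0 - np.sum((increment - yhat) ** 2) / np.sum((increment - increment.mean()) ** 2)
```

The sign pattern of the recovered coefficients (negative on VPDmax, positive on AHsumPre2) mirrors the dependencies reported for Fig. 7.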
Variations of cumulative understory phenology at the plot level were affected by accumulated moisture in addition to accumulated heat. Incremental phenology responded to heat and moisture accumulation, yet had a stronger dependency on short-term, day-to-day changes in moisture conditions. Such relationships first indicate a generally accelerated rate of understory flushing with increased spring warmth and water availability. Furthermore, they suggest that the occurrence of understory phenological change was more directly triggered by precipitation events (as reflected by humidity increases at the plot level) than by rising temperatures. Therefore, while accumulated heat and moisture provided a baseline for predicting spring growth of understory plants, day-to-day moisture conditions determined the actual occurrence of phenological advance. This detected phenomenon may be related to a relatively dry spring in the study area during the year of observation. From January to May 2007, total precipitation in the area was 198 mm, which was 38 mm below the historical average (1971-2000) of 236 mm for that 5-month period of the year. The relatively dry weather mainly occurred in May (50 mm of total precipitation), which fell far short of the historical average of 80 mm. Hence, water was a limiting resource that hindered spring green-up of understory plants during the time of observation. In a mesic forest such as that investigated in this project, the shallower root systems of understory species could be a direct cause of a stronger reliance on precipitation, especially during drought spells, in line with typical cases in semi-arid ecosystems (Brown and de Beurs 2008; Henebry 2003) and even tropical moist forests (Wright 1991). This finding further demonstrates the usefulness of photographic assessment for capturing detailed understory phenology information. We identified the following limitations and drawbacks in relation to the meteorological driver assessment.
First, due to a lack of soil moisture data, air humidity was employed as a proxy for moisture availability. Although both absolute humidity and relative humidity were found to be significantly correlated with precipitation, this measurement was indirect and a large uncertainty may exist. It may be provisionally workable under conditions where air humidity change is closely connected to soil moisture variation through precipitation and evapotranspiration within the near-ground environment (Rosenberg 1983). But in environments where soil moisture and air humidity depart from each other significantly, such as in extreme cases of irrigated land in semiarid regions or dry seasons, using air humidity in place of direct measurement of soil water content would be inadequate. The relationship between understory phenology and meteorology could have been predicted more precisely if soil moisture and/or site-specific precipitation data were available. Besides, light condition changes in the below-canopy environment not only impose challenges on field observations but may also influence understory phenology directly (Kato and Komiyama 2002; Koizumi and Oshima 1985). Thus, monitoring additional environmental factors such as soil moisture and canopy shading effects is needed for a better understanding of micrometeorological factors influencing understory phenology. Changes in understory phenology constantly affect forest ecosystems in terms of nutrient/energy cycling, species interactions, and community structures (Gilliam and Roberts 2003; Kudo et al. 2008). Accurate quantification of understory phenology can facilitate investigations into issues such as the response of woody-herbaceous ecosystems to climate change and disturbances (Rich et al. 2008), forest management associated with tree regeneration and forest structure dynamics (Augspurger and Bartlett 2003; Kaeser et al. 2008), and invasive species detection (Wilfong et al. 2009).
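For reference, vapor pressure deficit and absolute humidity can be derived from air temperature and relative humidity. The paper cites Buck (1981) for vapor pressure; the sketch below uses the common Tetens-type coefficients as a stand-in, which give very similar values over spring temperature ranges.

```python
import math

def saturation_vp_kpa(t_c):
    """Saturation vapor pressure (kPa), Tetens-type approximation."""
    return 0.6108 * math.exp(17.27 * t_c / (t_c + 237.3))

def vpd_kpa(t_c, rh_percent):
    """Vapor pressure deficit: saturation minus actual vapor pressure."""
    return saturation_vp_kpa(t_c) * (1.0 - rh_percent / 100.0)

def absolute_humidity_g_m3(t_c, rh_percent):
    """Water vapor density from the ideal gas law (Rv = 461.5 J kg^-1 K^-1)."""
    e_pa = saturation_vp_kpa(t_c) * 1000.0 * rh_percent / 100.0
    return 1000.0 * e_pa / (461.5 * (t_c + 273.15))
```

At 20 °C and 50% relative humidity this yields a VPD of roughly 1.2 kPa; saturation absolute humidity at 20 °C is roughly 17 g/m3, matching the magnitudes plotted in Fig. 5.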
In addition, incorporating high resolution understory phenology measurements helps improve the accuracy of forest landscape phenology models for validating satellite phenology signals (Liang et al. 2011), which in turn may improve land surface parameterizations for climate models (Bonan 2008).

Conclusions

Visible-light digital photography accompanied by greenness extraction methods provides a way to effectively collect phenological information from the forest understory, as well as from similar vegetation types. Given the drawbacks related to labor intensity and systematic inconsistency, observer-based digital photography measurement could be employed as a supplementary tool to existing canopy-observing systems using fixed automatic cameras (Ahrends et al. 2009; Richardson et al. 2009). We identified percentage cover-based phenometrics as better at capturing phenological progression against measurement uncertainties. The sensitivity of these phenometrics to subtle changes in meteorological variables supports applying this technique to monitoring understory phenology shifts caused by climate change. In the context of landscape phenology, high resolution data with spatio-temporal analysis (as demonstrated in this study) are needed for detailed investigation of vegetation-climate interactions. Lastly, this easily implemented photographic approach could potentially encourage researchers, foresters, park rangers, and citizen scientists to readily collect phenology data digitally in support of vegetation change studies and modeling.

Acknowledgements

Eric Graham provided insight regarding digital photo processing. Danlin Yu provided important advice on data analysis. Geoffrey M. Henebry reviewed the manuscript and offered valuable comments. Yanbing Zheng provided valuable support in statistical analysis. Thomas Barnes helped identify understory plant species. We also thank the five anonymous reviewers who provided constructive comments.
The research was partly supported by a National Science Foundation Doctoral Dissertation Research Improvement Grant, BCS-0703360.

References

Ahrends H, Brügger R, Stöckli R, Schenk J, Michna P, Jeanneret F, Wanner H, Eugster W (2008) Quantitative phenological observations of a mixed beech forest in northern Switzerland with digital photography. J Geophys Res 113:G04004. doi:10.1029/2007JG000650
Ahrends H, Etzold S, Kutsch W, Stoeckli R, Bruegger R, Jeanneret F, Wanner H, Buchmann N, Eugster W (2009) Tree phenology and carbon dioxide fluxes: use of digital photography for process-based interpretation at the ecosystem scale. Clim Res 39:261-274
Augspurger C, Bartlett E (2003) Differences in leaf phenology between juvenile and adult trees in a temperate deciduous forest. Tree Physiol 23:517-525
Beatty S (1984) Influence of microtopography and canopy species on spatial patterns of forest understory plants. Ecology 65:1406-1419
Bonan G (2008) Forests and climate change: forcings, feedbacks, and the climate benefits of forests. Science 320:1444-1449
Brosofske K, Chen J, Crow T (2001) Understory vegetation and site factors: implications for a managed Wisconsin landscape. For Ecol Manag 146:75-87
Brown M, de Beurs K (2008) Evaluation of multi-sensor semi-arid crop season parameters based on NDVI and rainfall. Remote Sens Environ 112:2261-2271
Brügger R, Studer S, Stöckli R (2007) Die Vegetationsentwicklung - erfasst am Individuum und über den Raum (Changes in plant development - monitored on the individual plant and over geographical area). Schweiz Z Forstwes 158:221-228
Buck A (1981) New equations for computing vapor pressure and enhancement factor. J Appl Meteorol 20:1527-1532
Burrows S, Gower S, Clayton M, Mackay D, Ahl D, Norman J, Diak G (2002) Application of geostatistics to characterize Leaf Area Index (LAI) from flux tower to landscape scales using a cyclic sampling design. Ecosystems 5:667-679
Conrac Corporation (1980) Raster graphics handbook.
Van Nostrand Reinhold, New York
Crimmins M, Crimmins T (2008) Monitoring plant phenology using digital repeat photography. Environ Manag 41:949-958
Croissant Y, Millo G (2008) Panel data econometrics in R: the plm package. J Stat Softw 27:1-43
Drewitt G, Black T, Nesic Z, Humphreys E, Jork E, Swanson R, Ethier G, Griffis T, Morgenstern K (2002) Measuring forest floor CO2 fluxes in a Douglas-fir forest. Agric For Meteorol 110:299-317
ERDAS (2008) ERDAS Imagine field guide. ERDAS, Atlanta
Ewers B, Mackay D, Gower S, Ahl D, Burrows S, Samanta S (2002) Tree species effects on stand transpiration in northern Wisconsin. Water Resour Res 38(7):1-11
Fei S, Steiner K (2008) Relationships between advance oak regeneration and biotic and abiotic factors. Tree Physiol 28:1111-1119
Gilliam F, Roberts M (2003) The herbaceous layer in forests of eastern North America. Oxford University Press, New York
Graham EA, Hamilton MP, Mishler BD, Rundel PW, Hansen MH (2006) Use of a networked digital camera to estimate net CO2 uptake of a desiccation-tolerant moss. Int J Plant Sci 167:751-758
Graham EA, Yuen EM, Robertson GF, Kaiser WJ, Hamilton MP, Rundel PW (2009) Budburst and leaf area expansion measured with a novel mobile camera system and simple color thresholding. Environ Exp Bot 65:238-244
Henebry GM (2003) Grasslands of the North American great plains. In: Schwartz MD (ed) Phenology: an integrative environmental science. Kluwer, Dordrecht, pp 157-174
IPCC (2007) Climate change 2007: the physical science basis. Contribution of Working Group I to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change. Solomon S, Qin D, Manning M, Chen Z, Marquis M, Averyt KB, Tignor M, Miller HL (eds). Cambridge University Press, Cambridge
Jensen JR (2000) Remote sensing of the environment: an earth resource perspective.
Prentice Hall, Upper Saddle River
Kaeser M, Gould P, McDill M, Steiner K, Finley J (2008) Classifying patterns of understory vegetation in mixed-oak forests in two ecoregions of Pennsylvania. Northern J Appl For 25:38-44
Kato S, Komiyama A (2002) Spatial and seasonal heterogeneity in understory light conditions caused by differential leaf flushing of deciduous overstory trees. Ecol Res 17:687-693
Kirkham M (2004) Principles of soil and plant water relations. Elsevier, Amsterdam
Koizumi H, Oshima Y (1985) Seasonal changes in photosynthesis of four understory herbs in deciduous forests. J Plant Res 98:1-13
Kudo G, Ida T, Tani T (2008) Linkages between phenology, pollination, photosynthesis, and reproduction in deciduous forest understory plants. Ecology 89:321-331
Lambers H, Chapin F, Pons T (1998) Plant physiological ecology. Springer, New York
Liang L, Schwartz MD (2009) Landscape phenology: an integrative approach to seasonal vegetation dynamics. Landsc Ecol 24:465-472
Liang L, Schwartz M, Fei S (2011) Validating satellite phenology through intensive ground observation and landscape scaling in a mixed seasonal forest. Remote Sens Environ 115:143-157
Lukina EV, Stone ML, Raun WR (1999) Estimating vegetation coverage in wheat using digital images. J Plant Nutr 22:341-350
Martin L (1965) Physical geography of Wisconsin. University of Wisconsin Press, Madison
Meier U (1997) Growth stages of mono- and dicotyledonous plants: BBCH-monograph. Federal Biological Research Centre for Agriculture and Forestry, Braunschweig
Miller A (2002) Subset selection in regression. Chapman & Hall, New York
Morisette JT, Richardson AD, Knapp AK, Fisher JI, Graham EA, Abatzoglou J, Wilson BE, Breshears DD, Henebry GM, Hanes JM, Liang L (2009) Tracking the rhythm of the seasons in the face of global change: phenological research in the 21st century.
Front Ecol Environ 7:253-260
Pfitsch W, Pearcy R (1989) Daily carbon gain by Adenocaulon bicolor (Asteraceae), a redwood forest understory herb, in relation to its light environment. Oecologia 80:465-470
Pratt W (2001) Digital image processing. Wiley, New York
R Development Core Team (2009) R: a language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria. http://www.R-project.org
Rich P, Breshears D, White A (2008) Phenology of mixed woody-herbaceous ecosystems following extreme events: net and differential responses. Ecology 89:342-352
Richardson AD, Jenkins JP, Braswell BH, Hollinger DY, Ollinger SV, Smith ML (2007) Use of digital webcam images to track spring green-up in a deciduous broadleaf forest. Oecologia 152:323-334
Richardson AD, Braswell BH, Hollinger DY, Jenkins JP, Ollinger SV (2009) Near-surface remote sensing of spatial and temporal variation in canopy phenology. Ecol Appl 19:1417-1428
Rosenberg NJ (1983) Microclimate: the biological environment, 2nd edn. Wiley, New York
Schwartz MD (1997) Spring index models: an approach to connecting satellite and surface phenology. In: Lieth H, Schwartz MD (eds) Phenology of seasonal climates I. Backhuys, Netherlands, pp 23-38
Schwartz MD (2003) Phenology: an integrative environmental science. Kluwer, Dordrecht
Simpson G (1990) Seed dormancy in grasses. Cambridge University Press, Cambridge
Tellaeche A, BurgosArtizzu X, Pajares G, Ribeiro A, Fernández-Quintanilla C (2008) A new vision-based approach to differential spraying in precision agriculture. Comput Electron Agric 60:144-155
Tian L, Slaughter D (1998) Environmentally adaptive segmentation algorithm for outdoor image segmentation. Comput Electron Agric 21:153-168
Wilfong B, Gorchov D, Henry M (2009) Detecting an invasive shrub in deciduous forest understories using remote sensing.
Weed Sci 57:512-520
Woebbecke D, Meyer G, Von Bargen K, Mortensen D (1995) Color indices for weed identification under various soil, residue, and lighting conditions. Trans ASAE 38:259-269
Wright S (1991) Seasonal drought and the phenology of understory shrubs in a tropical moist forest. Ecology 72:1643-1657
Yarie J (1980) The role of understory vegetation in the nutrient cycle of forested ecosystems in the mountain hemlock biogeoclimatic zone. Ecology 61:1498-1514
Int J Biometeorol

Photographic assessment of temperate forest understory phenology in relation to springtime meteorological drivers
work_fcysn3hcjnfxfegxqlikfemvz4 ----

Middle-East Journal of Scientific Research 12 (12): 1873-1880, 2012. ISSN 1990-9233. © IDOSI Publications, 2012. DOI: 10.5829/idosi.mejsr.2012.12.12.1236

Visual Secret Sharing Scheme for JPEG Compressed Images

T.V.U. Kiran Kumar, B. Karthik and E. Bharath Kumaran
Department of ECE, Bharath University, Chennai-73, India
Corresponding Author: T.V.U. Kiran Kumar, Department of ECE, Bharath University, Chennai-73.

Abstract: Existing schemes modify either the pixel values or change the Color Index Table (CIT) values. Almost all the existing techniques are applicable primarily to non-compressed images in either monochrome or color domains.
However, most imaging applications, including digital photography, archiving and internet communications, nowadays use images in the JPEG compressed format. Application of the existing visual secret sharing schemes to these images requires conversion back into the spatial domain. In this paper we propose a shared key algorithm that works directly in the JPEG domain, thus enabling shared key image encryption for a variety of applications. The scheme works directly on the quantized DCT coefficients and the resulting noise-like shares are also stored in the JPEG format. The decryption process is lossless, preserving the original JPEG data.

Key words: Pixel values, Color Index Table, Compressed format, Shared key, Visual secret sharing, Image encryption

INTRODUCTION

A secret sharing encryption scheme creates several (n) shares of the original information and distributes them to n participants. The decryption is carried out by using a prescribed number (k, k <= n) of subset shares. Fewer than k shares are insufficient to reconstruct the original data. The k keys are generated in such a way that the share images are "random" looking, with no semblance to the original image. It is a highly secure mechanism, since the decryption is performed by the human visual system when both shares are brought together. Here we will first see the 2-out-of-2 or {2,2} secret sharing method. Next we will see an extension to k-out-of-n ({n,k}) secret sharing, where k stacked images are needed to reconstruct the clear image.

In all the existing schemes, the encryption-decryption process is lossy. The original image is transformed into a halftoned image before deriving the share images. Halftoning is a lossy process. Furthermore, in many of the proposed schemes the reconstruction from the k shared images creates a lossy version of the half-toned image itself. This led to several researchers' efforts on improving the contrast quality of the decrypted images. Though they lose the ability of decryption by visual means, such encryption schemes have a large number of potential applications in the Internet world. The computer-based decryption also opens an avenue to a wide variety of shared image encryption systems. This has also led to the development of methods that apply to color images. In addition, the methods described also address computational complexity issues by designing schemes that require simpler bit-level arithmetic operations during decryption. This method is similar in spirit to these schemes but addresses a larger domain of images stored in compressed digital formats. A large number of imaging applications, including digital photography, archiving and internet communications, primarily use images stored in the JPEG format. Most of the digital cameras in the market use JPEG. Application of the existing shared key cryptographic schemes to these images requires conversion back into the spatial domain. Where transmission and storage of the shares are concerned, one may not necessarily be able to apply lossy compression techniques, since the loss may result in an inability to decrypt. Hence, encryption techniques applicable in the compressed domain are needed.

Visual cryptography based on secret sharing has received considerable attention since the publication of [1] by Naor and Shamir. A secret sharing encryption scheme creates several (n) shares of the original information and distributes them to n participants.
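The {2,2} idea described above can be sketched at the bit level: one share is pure random bits, the other is the secret XORed with it, so each share alone is uniformly random while XORing both recovers the secret exactly. (This is the generic one-time-pad-style construction; it illustrates the sharing principle rather than the transparency-stacking encoding of [1].)

```python
import secrets

def make_shares(bits):
    """Split a list of secret bits into two {2,2} shares."""
    share1 = [secrets.randbelow(2) for _ in bits]     # uniformly random share
    share2 = [b ^ s for b, s in zip(bits, share1)]    # secret XOR share1
    return share1, share2

def combine(share1, share2):
    """Recover the secret by bitwise XOR of the two shares."""
    return [a ^ b for a, b in zip(share1, share2)]

secret = [1, 0, 1, 1, 0, 0, 1, 0]
s1, s2 = make_shares(secret)
recovered = combine(s1, s2)
```

Because share2 is the XOR of the secret with uniform random bits, neither share by itself carries any information about the secret.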
The scheme manipulates the quantized DCT transparency of the same size. When the transparency is coefficients and the resulting noise-like shares are also stacked on top of the printed page, the original image is stored in the JPEG format. The decryption process that formed. The two keys are generated in such a way that the combines the shares is lossless and hence the original share images are “random" looking with no semblance to JPEG file’s fidelity is preserved. Our experiments indicate the original image. It is a highly secure mechanism since that each share image is approximately the same size as the decryption is being performed by the human visual the original JPEG retaining the storage advantage system when both shares are brought together. This is a provided by JPEG. 2-out-of-2 or {2,2} secret sharing method. Several Both encryption and decryption require only small publications that followed this development extended the modifications to standard JPEG computational basic visual cryptography using concepts from digital procedures. This in turn implies an easy adaptation for a halftoning [2, 3] to address gray scale images and color hardware implementation which is particularly useful for pictures [4]. In many of these schemes, the encryption- digital camera applications. decryption process is lossy. The original image is transformed into a haltoned image before deriving the Monochrome Images: The lossy version of JPEG image share images. Halftoning is a lossy process. Furthermore, compression uses DCT.A monochrome image is first split in many of the proposed schemes the reconstruction into 8×8 non-overlapping blocks of pixels. An 8×8 DCT is from the k shared images creates a lossy version of the applied to each block and the resulting coefficients are half-toned image itself. This led to several researchers’ scalar quantized using a quantization matrix. 
The efforts on improving contrast quality of the decrypted quantized coefficients are then converted from a two- images. dimensional representation to a one-dimensional vector However, in some of the recent publications [6, 7] by a process known as zig-zag scanning and sent to an the original intent of performing the decryption using the entropy coder that uses either Huffman or arithmetic human visual system with a simple mechanical system of coding. The process is shown in Fig 2. stacking transparencies is lost by requiring computer The zig-zag transformation transforms 8x8 two 2D processing to reconstruct the image. Though they lose values into 64 one dimensional quantized coefficients. For the ability of decryption by visual means, such encryption an 8×8 block Bn with an index X , let the quantized schemes have a large number of potential applications in coefficients be denoted as, where X corresponds to the the Internet world [6]. This has also lead to development DC coefficient and the rest to the AC coefficients. The of methods that apply to color images. In addition, the index n can be obtained as the standard scanning order of methods described in [6] and [7] also address blocks of the image as defined in the JPEG standard. We computational complexity issues by designing schemes will also use the notation of that require simpler bit-level arithmetic operations during decryption. In this paper, we describe a method similar in X =[X , X X ,…, X ] (1) spirit to these schemes but addressing a larger domain of images stored in compressed digital formats. According to baseline JPEG specification, the input In [5] the author proposed a scheme for shred key image components are represented by 8-bit pixel values encryption, which is used in this paper for creating shares and the quantized samples are limited to at most 15 bits of from a secret image. The author a proposed scheme for magnitude and a sign bit. 
For gathering the statistics for {2,2} shared key encryption for JPEG images and entropy coding purposes, XN i values are placed in bins suggested the implementation for {n,k} shared key according to the number of bits needed to represent its encryption. Based on this paper the {n,k} shared key magnitude in binary. This is given by encryption has been developed and shown the results in this paper. N=log X (2) n n n n n n n 1 1, 1 1 2 1 n ( ) ( ) [ ] [ ]{ } ( ) [ ] [ ]{ } ( ) .n1 i.n1 .n2 i i .n1 i 0,1 , 1, 0 ifb L 1 b L b L 0,1 , 1, 0 ifb L 1  = ∈ = ( ) ( ) ( ) N .n1 .n1 N .n1 N-L i i i L=1 X L X L X2 b N-L X2+∑ ( ) ( ) ( ) N .n2 .n2 N .n1 N-L i i i L=1 X L X L X2 b N-L X2+∑ Middle-East J. Sci. Res., 12 (12): 1873-1880, 2012 1875 Fig. 1: {2,2} Shared Key Encryption Mechanism for Fig. 2: Shared Key Decryption Process. JPEG images Now,X can be represented in two’s complement form asi n X b (N) b (N-1)… b (0) (3)i = i i i nb n n n Where each b (l), l=0,1,2,…Ni n N is a binary value, either zero or one. Hence, the value of Using the above procedure, we obtain two X ni can be obtained as N sequences, Xn1 and Xn2, that represent zig-zagged, quantized coefficients for 8×8 blocks for the entire image X -b (N) X 2 + b (N-L) X 2 L=1 (4) based on the original sequence Xn.These two sequencesi = i in n N n N-L Note that N is dependent of both i and n and we arithmetic coding techniques to produce the two share have not explicitly shown the dependency to reduce images in the JPEG format. The sequence Xni and the clutter. Now, let us consider an encryption system where binary representation Xnbi can be reconstructed simply the image to be encrypted is given in the JPEG format. Let by a bit-wise XOR operation (+) of the N+1-bit shares. (8) the image be decoded partially to obtain the zig-zagged, quantized coefficients X =0,1,2,3,…63.corresponding toi n block Bn. 
At this stage, the two’s complement Color Images: Just as with the JPEG compression, the representation X of X as indicated in the above proposed method is color blind. JPEG uses a componenti i nb n equation is available. Now, two versions, X and X , model for applications to enable coding of color images.I In1 n2 are generated using a {2, 2} shared secret key encryption Typically, most applications use three components in the method based on random assignment. We use an YCbCr color space. The chrominance data is sub-sampled assignment scheme based on the one proposed by [7]. for better compression efficiency. The proposed Each bit b n i (l) is split into two shares using the encryption scheme uses the same JPEG approach to following scheme: handle color images. Since the resulting image shares are (5) JPEG is also suitable for our application. JPEG supports The above procedure is applied to all N+1 bits of X encryption technique in the previous subsection relies oninb to obtain X i and X i whose values X and X a standard baseline mode of JPEG. JPEG also has ai i i in1b n2b n1 n2 respectively are calculated as number of other modes such as progressive DCT, (6) and (7) can further be entropy coded using either Huffman or JPEG images, any color space that can be handled by up to 255 components in one image and hence support for a large variety of image formats. The description of the Middle-East J. Sci. Res., 12 (12): 1873-1880, 2012 1876 hierarchical and lossless. The progressive DCT mode by multiple scans of the quantized coefficients allows incremental transmission of the image so that the picture could be reconstructed sequentially with increasing fidelity. There are two modes for performing progressive DCT: spectral selection and successive approximation. In spectral selection, the AC coefficients are sent after sending the DC coefficients. The successive Fig. 
3: Binary tree structure of {n,k} encryption approximation uses a lower precision at the start and improves the precision in the subsequent scans. A mixed mode by combining the two schemes is also possible. For the progressive-DCT JPEG encoding, the proposed encryption scheme applies directly. Once the shares are created, the progressive-DCT based scans could be applied to generate two JPEG shares. During the decode, the shares have to be completely decoded to produce Xn1 and Xn2. XOR operation on these values produce the Xn sequence which can further be processed to obtain the clear picture. JPEG also has a lossless mode. The lossless coding uses differential pulse code modulation (DPCM) and Huffman or arithmetic coding. The proposed encryption method does not directly apply to lossless mode but a scheme described in could be used for this purpose. The hierarchical mode in JPEG uses multi-scale approach to compression. While the proposed scheme can potentially work in the hierarchical mode with DCT- based coding, it will not work in the lossless hierarchical mode. The scheme [5] for share generation is listed below. Read the image in JPEG format and perform variable- length decode to obtain the quantized DCT coefficients, X n, for each block Bn. Using random numbers from a pseudo-random number generator, produce the Sequences Xn1 and Xn2. Perform entropy coding for the two sequences to produce two JPEG share images with the same dimensions and color space. The decryption procedure is simply the reverse of it, where the decoder has also been modified to read two JPEG images and perform entropy decoding to get two share sequences of the quantized DCT coefficients. 
The sequences are XORed to produce the decrypted Table 1: Shared images and their positions in linear array Secret Share Share Share Share Share Share Name Image 1 2 3 4 5 6 ArrayPosition 1 2 3 4 5 6 7 Share Share Share Share Share Share Share Share 7 8 9 10 11 12 13 14 8 9 10 11 12 13 14 15 sequences and the rest of the decode operations, dequantization and inverse transform, are Continued to produce the decrypted image. {N,k} Shared Key Encryption Algorithm: We can extend the {2,2} encryption to {n,k} encryption by extending the scheme for the generated shares. (i.e). Again generating shares for the Shares 1 & 2. From Share 1 we will get Share 3 and Share 4.Similarly from Share 2 we will get Share 5 and Share 6.This scheme looks like a downward growing binary tree. For the better programming of binary tree, we put it into single dimensional array as following. The child node of a parent node ‘p’ (p is position) can expressed as, Left child=2 x p Right child= (2 x p) +1 RESULTS {2, 2} Shared Key Encryption: We performed the experiments on several types of images and compared its performance. The shares were generated using the proposed method. The decrypted image is exactly same as the original, From Figs.; it is clear that the generated shares have no visual resemblance to the originals and are “random" looking. The randomness is slightly modulated by the underlying image shapes. Hence, the storage and transmission gains obtained by JPEG are directly applicable to the proposed method. {N,2} Shared Key Encryption: Here 14 Share images have been generated from the secret image. The decryption can be carried out from the following combination of share images. Middle-East J. Sci. Res., 12 (12): 1873-1880, 2012 1877 Fig. 
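The bit-level splitting of equations (5)-(8) amounts to XOR-masking each quantized coefficient with a random value of the same bit width. The sketch below is our own illustration of that idea, not the authors' implementation: the function names are ours, and a real implementation would operate on the entropy-decoded coefficient stream of an actual JPEG file rather than on bare integers.

```python
import random

def split_coefficient(x: int, rng: random.Random):
    """Split one quantized DCT coefficient into two shares (cf. eqs. 5-7).

    The N+1-bit two's complement representation of x is split bit by bit:
    each bit of share 1 is random, and the matching bit of share 2 is its
    XOR complement, so share1 ^ share2 recovers x bit-for-bit.
    """
    n_bits = max(abs(x).bit_length(), 1) + 1   # N magnitude bits + sign bit
    mask = (1 << n_bits) - 1
    share1 = rng.getrandbits(n_bits)           # uniformly random N+1-bit value
    share2 = (x & mask) ^ share1               # XOR complement of the secret
    return share1, share2, n_bits

def combine(share1: int, share2: int, n_bits: int) -> int:
    """Recover the coefficient by bit-wise XOR (cf. eq. 8), re-sign-extending."""
    v = share1 ^ share2
    if v & (1 << (n_bits - 1)):                # restore the two's complement sign
        v -= 1 << n_bits
    return v

rng = random.Random(42)
for x in (0, 7, -13, 255, -1024):
    s1, s2, n = split_coefficient(x, rng)
    assert combine(s1, s2, n) == x
```

Because share 1 is uniformly random and share 2 is its XOR complement, each share taken alone is statistically independent of the coefficient, which is what makes the generated share images "random" looking.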
Fig. 4: Secret image, Share 1, Share 2, and the image decrypted from Share 1 and Share 2 ({2,2} shared key encryption)

2 shares: {Share 1, Share 2}

3 shares: {Share 2, Share 3, Share 4}, {Share 1, Share 5, Share 6}

4 shares (S = Share): {S3, S4, S5, S6}, {S1, S5, S13, S14}, {S1, S6, S11, S12}, {S2, S3, S9, S10}, {S2, S4, S7, S8}

5 shares: {S4, S5, S6, S7, S8}, {S3, S4, S5, S13, S14}, {S3, S5, S6, S9, S10}, {S3, S4, S6, S11, S12}, {S1, S11, S12, S13, S14}, {S2, S7, S8, S9, S10}, {S1, S2, S3, S7, S8}, {S1, S2, S4, S9, S10}, {S1, S2, S5, S11, S12}, {S1, S2, S6, S13, S14}

6 shares: {S5, S6, S7, S8, S9, S10}, {S3, S4, S11, S12, S13, S14}, {S3, S5, S9, S10, S13, S14}, {S4, S6, S7, S8, S11, S12}, {S3, S6, S9, S10, S11, S12}, {S4, S5, S7, S8, S13, S14}, {S1, S3, S5, S6, S7, S8}, {S1, S4, S5, S6, S9, S10}, {S2, S3, S4, S5, S11, S12}, {S2, S3, S4, S6, S13, S14}

7 shares: {S6, S7, S8, S9, S10, S11, S12}, {S3, S9, S10, S11, S12, S13, S14}, {S4, S7, S8, S11, S12, S13, S14}, {S5, S7, S8, S9, S10, S13, S14}, {S1, S3, S5, S7, S8, S13, S14}, {S1, S3, S6, S7, S8, S11, S12}, {S1, S4, S5, S9, S10, S13, S14}, {S1, S4, S6, S9, S10, S11, S12}, {S2, S3, S5, S9, S10, S11, S12}, {S2, S3, S6, S9, S10, S13, S14}, {S2, S4, S5, S7, S8, S11, S12}, {S2, S4, S6, S7, S8, S13, S14}

8 shares: {S7, S8, S9, S10, S11, S12, S13, S14}, {S1, S2, S3, S4, S7, S8, S9, S10}, {S1, S2, S5, S6, S11, S12, S13, S14}, {S2, S6, S7, S8, S9, S10, S13, S14}, {S2, S5, S7, S8, S9, S10, S11, S12}, {S1, S4, S9, S10, S11, S12, S13, S14}, {S1, S3, S7, S8, S11, S12, S13, S14}, {S1, S2, S4, S6, S9, S10, S13, S14}, {S1, S2, S4, S5, S9, S10, S11, S12}, {S1, S2, S3, S6, S7, S8, S13, S14}, {S1, S2, S3, S5, S7, S8, S11, S12}

9 shares: {S1, S3, S4, S5, S6, S7, S8, S9}, {S2, S3, S4, S5, S6, S11, S12, S13, S14}

10 shares: {S1, S3, S4, S6, S7, S8, S9, S10, S11, S12}, {S1, S3, S4, S5, S7, S8, S9, S10, S13, S14}, {S2, S3, S5, S6, S9, S10, S11, S12, S13, S14}, {S2, S4, S5, S6, S7, S8, S11, S12, S13, S14}

11 shares: {S1, S3, S4, S7, S8, S9, S10, S11, S12, S13, S14}, {S2, S5, S6, S7, S8, S9, S10, S11, S12, S13, S14}, {S1, S2, S4, S5, S6, S9, S10, S11, S12, S13, S14}, {S1, S2, S3, S5, S6, S7, S8, S11, S12, S13, S14}, {S1, S2, S3, S4, S6, S7, S8, S9, S10, S13, S14}, {S1, S2, S3, S4, S5, S7, S8, S9, S10, S11, S12}

14 shares: {S1, S2, S3, S4, S5, S6, S7, S8, S9, S10, S11, S12, S13, S14}

The combinations shown underlined and in italics contain redundant shares.

Fig. 5: Fig. 6:

Table 2: Number of decryption combinations for n shared images and k shares required

Shares required (k) \ Shared images (n):    2     4     6     8     10    12    14
 2                                          1     1     1     1     1     1     1
 3                                          -     1     2     2     2     2     2
 4                                          -     -     1     2     3     4     5
 5                                          -     -     -     1+1   3+2   4+3   6+4
 6                                          -     -     -     1     1+2   3+3   6+4
 7                                          -     -     -     -     -     1+4   4+8
 8                                          -     -     -     -     1     4     1+10
 9                                          -     -     -     -     1     1     2
10                                          -     -     -     -     -     1     4
11                                          -     -     -     -     -     1     6
12                                          -     -     -     -     -     -     -
13                                          -     -     -     -     -     -     -
14                                          -     -     -     -     -     -     1
Irredundant combinations                    1     2     4     6     10    15    25
Total combinations                          1     2     4     8     16    32    64

The numbers underlined and bold in Table 2 are redundant combinations. From the results, and consistent with the "Total combinations" row of Table 2, we can generalize the formula as

Total combinations = 2^{(n/2) − 1}

where n is the number of shared images. From the results the following are identified:

- For any image and any n, the shared images will have almost, but never exactly, equal size.
- For a specified image and a specified n, the shared images created at different times will never be the same; the probability of their being the same is 1/(ImageLength × ImageWidth × 3 × 8).
- The decryption can be carried out with different combinations for different k values.
- Among the decryption combinations, many sets are redundant.
- The graph plotted for different n values vs. the number of combinations shows binary exponential growth.
- For values of k greater than 4, the combinations contain redundant shares.

Extension and Further Experiments

Generalizing the Formula for Decryption: The scheme can be generalized to a formula. Look at the first combination in each set of possible combinations.
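The binary-tree layout behind Table 1 and the combination counts in Table 2 can be cross-checked with a short sketch. This is our own illustration under the stated layout (secret image at array position 1, share s at position s + 1, children of position p at 2p and 2p + 1); `can_decrypt` and `count_minimal` are hypothetical helper names, and the model that a node is reconstructible from both of its children is our reading of the scheme.

```python
LAST = 15  # array positions 1..15: secret at 1, shares 1-14 at positions 2-15

def recoverable(pos, held):
    """A node can be rebuilt if we hold its share, or if both children can be."""
    if pos in held:
        return True
    left, right = 2 * pos, 2 * pos + 1
    if right > LAST:                 # leaf: can only be obtained directly
        return False
    return recoverable(left, held) and recoverable(right, held)

def can_decrypt(shares):
    """Shares are numbered 1..14; share s occupies array position s + 1."""
    held = {s + 1 for s in shares}
    return recoverable(1, held)      # position 1 is the secret image itself

def count_minimal(pos=1):
    """Count irredundant decrypting combinations for the subtree at `pos`."""
    left, right = 2 * pos, 2 * pos + 1
    if right > LAST:
        return 1                     # a leaf covers only itself
    below = count_minimal(left) * count_minimal(right)
    # The secret (pos 1) is never held as a share, so it must come from children.
    return below if pos == 1 else 1 + below

assert can_decrypt([1, 2]) and can_decrypt([2, 3, 4])
assert not can_decrypt([1, 5])       # covering Share 2's subtree still fails
assert count_minimal() == 25         # matches "irredundant combinations", n = 14
```

Under this model, the consecutive runs p, p+1, ..., 2p quoted in the text are one family of minimal covers of the root, which is why the first listed combination of each size starts at successive positions.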
The first combinations are (1,2), (2,3,4), (3,4,5,6), (4,5,6,7,8), (5,6,7,8,9,10), (6,7,8,9,10,11,12) and (7,8,9,10,11,12,13,14). If you start at position p, then the shares up to 2p are needed to decrypt successfully, i.e., p, p+1, p+2, ..., 2p. Similarly, this can be extended to obtain a generalized algorithm for decryption.

Share Creation in JPEG 2000 Images: The proposed scheme can be implemented with other transforms, such as the wavelet transform (JPEG 2000) or any other transform used for compression. JPEG 2000 offers higher compression and less visible artifacts, and so the scheme will be useful for highly compressed images.

Implementing in Reconfigurable Hardware: The System Generator tool [14] would be useful when implementing in FPGAs (Field Programmable Gate Arrays). The tool can be used for converting Matlab functions into VHDL/Verilog code for a specified target such as the Spartan-2E, Virtex-4, etc. Thus the encryption can be tested in hardware and optimization can be achieved. More details about System Generator are available in [14].

Summary and Future Work: The proposed method can take in a JPEG and create n shares in the JPEG format. The shares are generated using the quantized DCT coefficients in the JPEG representation and are used to reconstruct the original quantized DCT coefficients during decryption. The proposed scheme is applicable to color and monochrome images and offers all the compression advantages of JPEG to all the share images; the sizes of the resulting shares are comparable to those of their straight JPEG representations. The proposed method can be extended to any other transform or wavelet domain technique for image coding. The computational complexity of the encryption process is comparable to that of a standard JPEG encoder, plus an additional bit-stream entropy encoder and a random number generator. Similarly, the decryption complexity is augmented only by an additional entropy decoder when compared to JPEG decoding. The hardware architectures of the encryption and decryption processes can easily be obtained by modifying the standard JPEG pipelines. (A software implementation based on the Matlab code is available from the author.) An extension to implement the scheme in FPGA is under development [14]. Currently, we are looking into the application of this scheme for secure smartcards and digital rights management.

REFERENCES

1. Naor, M. and A. Shamir, 1995. Visual Cryptography. In: A. De Santis (Ed.), Advances in Cryptology: Eurocrypt '94, Lecture Notes in Computer Science, 950: 1-12.
2. Ateniese, G., C. Blundo, A. De Santis and D. Stinson, 1996. "Visual cryptography for general access structures," Information and Computation, 129(2): 86-106.
3. Lin, C.C. and W.H. Tsai, 2003. "Visual cryptography for gray-level images by dithering techniques," Pattern Recognition Letters, 24: 349-358.
4. Hou, Y.C., 2003. "Visual cryptography for color images," Pattern Recognition, 36: 1619-1621.
   Mohseni-Astani, R., P. Haghparast and S. Bidgoli-Kashani, 2010. "Assessing and Predicting the Soil Layers Thickness and Type Using Artificial Neural Networks - Case Study in Sari City of Iran," Middle-East Journal of Scientific Research, ISSN: 1990-9233, 6(1): 62-68.
5. Demirel, I.N., 2010. "Conceptual Description, Hastiness in Obtaining Result, Voluntary Participation, Assumption of Inefficiency," Middle-East Journal of Scientific Research, ISSN: 1990-9233, 6(1): 15-21.
6. Sudharshan, S., 2005. "Shared key encryption of JPEG color images," IEEE Transactions on Consumer Electronics, 51(4).
7. Hou, Y.C., C.Y. Chang and F. Lin, 1999. "Visual cryptography for color images based on color decomposition," Proceedings of the Fifth Conference on Information Management, pp: 584-591.
8. Chang, C.C. and T.X. Yu, 2002. "Sharing a secret gray image in multiple images," Proceedings of the First IEEE International Symposium on Cyber Worlds, pp: 230-237.
9. Lukac, R. and K.N. Plataniotis, 2004. "Cost-effective encryption of natural images," Proceedings of the 22nd Queen's Biennial Symposium on Communications, pp: 89-91. (Also available from http://www.ece.queensu.ca/symposium/papers/2C_1.pdf)
10. Independent JPEG Group, http://www.ijg.org
11. Mathematics Department, University of Salzburg, pLab: Theory and Practice of Random Number Generation, http://random.mat.sbg.ac.at/
12. http://www.cryptography.com/resources/whitepapers/VIA_rng.pdf
13. Marcellin, M.W., M.J. Gormish, A. Bilgin and M.P. Boliek, 2000. An Overview of JPEG-2000. Proceedings of the Data Compression Conference, pp: 523-544.
14. http://www.mathworks.com
15. http://www.xilinx.com

The Australian Journal of Indigenous Education, Volume 41, Number 2, pp. 131-138. © The Authors 2013. doi 10.1017/jie.2012.20

Funds of Knowledge of Sorting and Patterning: Networks of Exchange in a Torres Strait Island Community

Bronwyn Ewing
Yumi Deadly Centre, Faculty of Education, Queensland University of Technology, Australia

This article focuses on the funds of knowledge that are mathematical in nature and how they might be used to support parents and children with their learning of the mathematics that is taught and learned in the early years of school. Funds of knowledge are those that have been historically and culturally accumulated into a body of knowledge and skills essential for people's functioning and wellbeing. Drawing on a community research design, where the process of learning is owned and framed by the community and its people, a consultative meeting was held to discuss the pilot project.
Keywords: Torres Strait Islands, Torres Strait Islander parents, Indigenous Knowledge Centre, funds of knowledge, sorting, repeating patterns

Background

In recent years a number of strategies have been implemented to increase Torres Strait Islander and Aboriginal parents' participation in the education of their children, with the view to making a difference and contributing to helping such families to live productive and fulfilling lives (see, e.g., Closing the Gap Strategy, Department of Families, Housing, Community Services and Indigenous Affairs, 2009; The Productivity Commission, 2009). For example, in April of 2007 the Council of Australian Governments (COAG) reaffirmed its 10-year commitment to closing the gap between Torres Strait Islander and Aboriginal people and non-Indigenous Australians, with the initial priority of ensuring that young children get a good start to life (COAG, 2007a). Then, in December of 2007, COAG communicated that all levels of government in Australia were to work together with Torres Strait Islander and Aboriginal people to achieve the target of closing the gap on disadvantage, with a commitment to halving the gap in reading, writing and numeracy within 10 years (COAG, 2007b). In July of 2008, COAG agreed to sustained engagement by all governments over the next 10 years and beyond to achieve the Closing the Gap targets for Indigenous people. It agreed in principle to a national partnership, with joint funding of $547.2 million over 6 years, to address the needs of Indigenous children in their early years (COAG, 2008a). Indigenous early childhood development continued on the COAG agenda, with a communiqué in October 2008 that all governments sustain their commitment, engagement and effort in achieving COAG's Closing the Gap targets for Indigenous people, with leaders signing the first National Partnership covering Indigenous Early Childhood Development (COAG, 2008b).
The agreement detailed that all states and territories work together 'to improve the early childhood outcomes of Indigenous children by addressing the high levels of disadvantage they currently experience to give them the best start in life'. The National Partnership comprises $564 million of joint funding over 6 years to address the needs of Indigenous children in their early years. Also detailed in this agreement was that governments were to ensure that all Indigenous 4-year-old children in remote communities have access to early childhood education within 5 years, and that the gap in reading, writing and numeracy achievement for Indigenous children be halved within 10 years. As a further commitment, 35 Children and Family Centres were to be established across Australia to deliver integrated services that offer early learning, childcare and family support programs. The central tenet in COAG's communiqués and government reports on the Closing the Gap strategy is that access to education for Indigenous people will create substantial social and economic benefits for remote and very remote Indigenous parents and children, thus increasing success in education and life opportunities (FaHCSIA, 2010).

ADDRESS FOR CORRESPONDENCE: Bronwyn Ewing, Yumi Deadly Centre, Faculty of Education, QUT, Victoria Park Road, Kelvin Grove QLD 4059, Australia. Email: bf.ewing@qut.edu.au

Influence of Family and Community

A recurring theme of COAG's tenet is identified in government policy and literature on Torres Strait Islander and Aboriginal parent engagement in education (Department of Education and Training [DET], 2010; Department of Education, Employment and Workplace Relations [DEEWR], 2011; Schaller, Rocha, & Barshinger, 2007).
The National Aboriginal and Torres Strait Islander Education Policy (AEP; DEEWR, 2011) advocates the importance of Aboriginal and Torres Strait Islander communities' involvement in education and its provision to 'ensure equitable and appropriate outcomes' (p. 1). Further, the Torres Strait Islander Regional Education Council (TSIREC; 2011) stresses the importance of improving 'parent and family engagement in their child's early years development' (p. 1). This emphasis is repeated in the green paper, A Flying Start for Queensland Children (DET, 2010). This document states that Torres Strait Islander and Aboriginal families who encourage and instil in their children the importance of learning are providing the foundation for their children's later success when they enter formal schooling. Of importance here is the provision of before-school and school programs that permit and promote parent engagement. Such programs aim to work towards closing the gap between Torres Strait Islander and Aboriginal and non-Indigenous Australians.

Indigenous Knowledge Centres as Places of Agency

The establishment of community network spaces such as Indigenous Knowledge Centres (see, e.g., Taylor, 2004) in Torres Strait Islander and Aboriginal communities has provided places of agency that permit and promote engagement in a range of activities for parents and their children. Indigenous Knowledge Centres are spaces where Torres Strait Islander and Aboriginal culture and knowledge are showcased to the wider community, a repository for community knowledge where such knowledge can grow and where two-way learning can occur (e.g., the State Library of Queensland's Indigenous Knowledge Centres). They can be spaces where Torres Strait Islander and Aboriginal cultural knowledge is kept safe to pass to future generations. Several issues are relevant here:

1. Torres Strait Islander and Aboriginal parents engage in educating their children using a range of strategies that reinforce and nurture their culture; it is their lived experience (Martin, 2007; Mellor & Corrigan, 2004; Priest, King, Nangala, Brown, & Nangala, 2009).
2. Parents' and community members' mathematical and science funds of knowledge become the building blocks for the development of schooled or academic concepts.
3. Through Indigenous Knowledge Centre programs, Torres Strait Islander and Aboriginal parents and their networks can extend this education (see, e.g., Lester-Irabinna, 2011; Lowe, 2011; O'Connor, 2009; Priest et al., 2009).

Networks

Torres Strait Islander and Aboriginal parents and networking are interconnected and rely on networks (Witheridge, 2009). That is, parents belong to community networks and, in turn, community networks are comprised of parents (Ewing, 2009; Janmohamed, 2005; Martin, 2007). Community networks are not static, homogenous entities. They reflect values and beliefs, as well as hopes and accomplishments. They are complex and can be mobilising forces for social justice and the redistribution of power and material advantage (Sefa Dei, 2005). They can exist in a range of combinations, as:

1. spatially localised settings that are defined for the pursuit of socially meaningful interactions;
2. affective and relational communities, where members draw on bonds of affinity and shared experiences of values, attitudes, beliefs, concerns and aspirations; and
3. moral communities, where participation and belonging in a citizenry work to achieve common goals defined as a collective good.

Within these communities, tensions, struggles, ambiguities and contradictions are captured; however, the integrity of a collective membership is maintained. Local knowledges are nurtured and made relevant for daily life (Lahn, 2006; Sefa Dei, 2005).
Networks as Funds of Knowledge of Mathematics

Networks build capacity in Torres Strait Islander and Aboriginal parents (Makuwira, 2007). They validate the parents' own definitions of maths as it exists in their communities, and through exchanges with kin and non-kin alike these bodies of knowledge are activated, grown and transformed (Gonzalez, Moll, & Amanti, 2005). The idea of funds of knowledge holds that people are competent and have knowledge that has been grown and developed through the life experiences that have given them that knowledge (Gonzalez et al., 2005). The assumption here is that a funds of knowledge approach provides a powerful and rich way to learn about communities in terms of their resources, the competence they possess and the way they utilise these resources to support the education of their children.

If funds of knowledge of mathematics are those that reflect the unique histories and culture of Indigenous communities, they provide an effective entry point into engagement in learning, because learning is then connected with and situated in communities and the voices of the people. The process of learning is owned and framed by the community and its people, who then work to build a sense of pride and self-worth in individuals (Khamaganova, 2005; Smith, 1999). Indigenous parents' identities are shaped in their distinct ways in distinct physical spaces such as networks, with their knowledges, stories and relationships an integral part of, and tied to, physical locations (Khamaganova, 2005; Lahn, 2006). This relationship ensures that maths emerges from communities and their networks, because it is taught and learned in such contexts.

FIGURE 1 (Colour online): Torres Strait Regional Authority map.

Methodology

The project adopted a qualitative design approach: community research (Smith, 1999).
Community research is described as an approach that 'conveys a much more intimate, human and self-defined space' (Smith, 1999, p. 127). It relies upon and validates the community's own definitions. As the project is informed by the social at a community level, it is described as 'community action research or emancipatory research'; that is, it seeks to demonstrate benefit to the community, making positive differences in the lives of one Torres Strait Islander community. I had established strong working relationships with the parents and community members over time as a consequence of another project; nevertheless, embarking on this preliminary process in close collaboration with the community was a challenge, linguistically and geographically.

A Geographic Excursion

The Torres Strait Islands consist of 18 islands and 2 Northern Peninsula Area communities (Torres Strait Regional Authority, 2010). They are geographically situated from the tip of Cape York north to the borders of Papua New Guinea and Indonesia, and scattered over an area of 48,000 square kilometres. There are five traditional island clusters in the Torres Strait: top western, western, central, eastern and inner islands (see Figure 1, Torres Strait Regional Authority Map, 2011). The research project was conducted at one site in the eastern cluster.

Language in the Torres Strait Islands

Specific languages are spoken in the Torres Strait Islander communities, including Standard Australian English, Yumplatok (Creole), Kala Lagaw Ya (Mabuyag) and Meriam Mir (Osborne, 2009; Shnukal, 2004). Kala Kawaw Ya (KKY) is understood to be a dialect of Kala Lagaw Ya (Osborne, 2009). The traditional languages of the top western and western islands, Kala Lagaw Ya (KKY and Mabuyag), are understood to come from the mainland of Australia, with the eastern island language, Meriam Mir, emerging from Papua New Guinea.
Yumplatok, identified as a modern language stemming from colonisation, is derived from 'meshing' both traditional languages and English, thus creating a language in its own right (Osborne, 2009; Shnukal, 2004). This language is identified as unifying; that is, it is the one that everyone in the Torres Strait can speak, whereas the western traditional language speakers cannot speak or understand the eastern language (Osborne, 2009; Shnukal, 2004).

Participants

Twenty adults and eight children took part in the community consultation meeting. All reside at the site where the meeting was held. The identities of participants are protected (Queensland University of Technology, 2011), and for ethical reasons pseudonyms have been used. The location is referred to as the Island.

Data Collection Techniques

For the purposes of this article, the data collection techniques included digital photography, field notes and email documents. Digital photography, as a non-written source of data, allowed for the capturing of visual images that were central to the preliminary process and served as a reminder for me (Stringer, 2004). Photographs also assist audiences to more clearly visualise settings and events. Field notes provide descriptions of places and events as they occur; they provide ongoing records of important elements of the setting and assist with reporting and reflecting back over events. Email documents provided an efficient and easy form of communication between participants, who then networked with their community. Each technique afforded valuable insight into the important preliminary planning of the project (Stringer, 2004).

Analysis and Discussion: Engaging with Communities

In recent years, building on what communities bring to particular contexts, and on their strengths, has been shown to be effective in engaging with communities (González & Moll, 2002). How does this occur?
A way to engage a community is to draw its members in with knowledge that is already familiar to them, which then serves as a basis for further discussion and learning (González & Moll, 2002). However, this process posed a challenge and a dilemma. How did I know about the knowledge that they brought to the meeting without falling into stereotyping their cultural practices? How did I address the dynamic process of the lived experiences of the community? Smith (1999) argues that responses to these questions have emerged from community-based research that relies on the community's definitions and discussions.

The Community Meeting

The process of networking within the community to inform them of the meeting developed through several steps: (1) discussion of the project with the school campus leader; (2) discussion with the Island Councillor to seek permission to meet under the 'Omei Tree' (Tree of Wisdom), which was suggested by Denise, a senior community woman; (3) a chance meeting with the announcer for the Island radio, which resulted in a radio interview broadcast to the Island community; (4) with support from Denise and a parent from the community, a paper-based flyer delivered in person to the homes of Island parents to inform them face-to-face about the proposed community meeting; and (5) a community meeting held under the Omei Tree. The content of the flyer was brief and aimed to provide succinct information for ease of reading and clarity.

As per the flyer schedule, the meeting was held for 1 hour under the Omei Tree, with a number of community members in attendance. The fig tree is believed to be over 100 years old and has been a significant meeting place for the Island community. During the meeting I explained the project and how participants might be involved.
Gaining consent was respectful of the community's place and environment; as a visitor, I also needed to be mindful of my actions and presence in the community and to conduct myself in an ethical manner. I asked the community group where they use mathematics in their daily lives and what mathematics they use specifically. The responses included buying food at the supermarket, cooking, and counting fish and shells, indicating that mathematics emerges through daily activities. As the discussion progressed, I explained some of the early number ideas, such as sorting/classification, using shells, sticks, leaves, and Poinciana pods gathered from the community. These items were gathered after seeking permission from Julia, a senior community leader, whose home was near the beachfront. Children learn to sort objects from their environments, such as shells, into groups. They learn to identify sameness, which defines the characteristics of groupings (Sperry Smith, 2009). In the meeting, Denise volunteered to sort shells into groups (see Figure 2). The group and I had to identify what criteria were used for the grouping. The idea of creating and naming sets continues throughout life and is a way of creating and organising information and making connections with peoples' experiences. Before young children can learn to count sets, they begin the process of defining a collection using the objects in their daily lives (Baroody & Benson, 2001). In Figure 2, Denise established the features of each of the sets of shells. If the criteria for membership of a set are vague, it is more challenging to decide whether the shells belong to a particular set.

THE AUSTRALIAN JOURNAL OF INDIGENOUS EDUCATION
Networks of Exchange in a Torres Strait Island Community

FIGURE 2 (Colour online) Sorting shells into sets.

Meeting members talked among themselves, with Denise allowing them time to identify the features of the sets.
From my experience, I could not identify the criteria that defined the sets; however, there was consensus among community members that criteria had been established — edible and non-edible shell creatures. In this example, the community used their daily lives and activities as an opportunity to talk about sorting using their home language — Yumplatok — and English. When I asked when children learn about edible and non-edible shells, there was consensus that this occurs at a very young age — for example, 1 to 2 years — during times when families walk along the shores of the Island, and when fishing or playing in the water. This example reinforces what Nakata (2007) and González and Moll (2002) state: that learning can be rich and purposeful when it is situated within that which already exists — the culture, community and home language of the group. González (2005) explains this further by stating that mathematics is embedded in social knowledge and mediated through language and the activities of the community. It is not learned disembodied from its social meaning and context, as happens within formal schooling, where it becomes a linear process of dialogue. The learning about sorting edible and non-edible shell creatures was distributed among the group; it was a shared, collective construction of mathematical knowledge.

As the meeting progressed, the discussions focused on an introduction to early algebra — patterning. Patterns are a way for people to recognise and organise their lives. In the early years, two particular pattern types are explored: repeating patterns and growing patterns. They are used to find generalisations within the elements themselves (Warren & Cooper, 2006). What comes next? Which part is repeating? Which part is missing? Repeating patterns are patterns where the core elements are repeated as the pattern extends. Young children recognise patterns when singing songs, dancing, learning how to weave and when playing.
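The pattern tasks described above (copy a pattern, continue it, identify the repeating element) have a simple computational analogue. As an illustration only — the function name and approach are mine, not the article's — a short Python sketch that finds the shortest repeating core of a sequence:

```python
def repeating_core(seq):
    """Return the shortest repeating unit of seq, or None if it does not repeat.

    Works on strings or lists, e.g. "ABABAB" -> "AB",
    ["jump", "hop", "jump", "hop"] -> ["jump", "hop"].
    """
    n = len(seq)
    for size in range(1, n // 2 + 1):
        # A candidate core must divide the length evenly and tile the sequence.
        if n % size == 0 and all(seq[i] == seq[i % size] for i in range(n)):
            return seq[:size]
    return None
```

For instance, `repeating_core("ABABAB")` returns `"AB"`, mirroring the Poinciana seed and coconut-weaving patterns the community identified.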
Some examples of repeating patterns are shown in Figure 3. Repeating patterns can also be represented with actions: jump, hop, jump, hop; as sounds: bell ring, clap hands, bell ring, clap hands; as geometric shapes: triangle, square, triangle, square; and as feel: soft, hard, soft, hard. Generally children explore patterns in a sequence: copy a pattern, continue the pattern, identify the elements repeating, complete the pattern, and translate the pattern to a different medium (Warren & Cooper, 2006).

FIGURE 3 (Colour online) Examples of repeating patterns.

Using two Poinciana pods, I tapped them together to create a repeating sound pattern. The community was then asked to continue the pattern and identify the repeating elements, using clapping. They were then asked if they would like to offer a repeating pattern. One community member clapped a pattern, to which the remaining members responded. The community were then asked where they might see or use patterns in their communities. Responses included: the seed pattern inside the Poinciana pod (ABABAB), the weaving pattern used to weave coconut leaves together (ABABAB), when singing songs to the children, the seasons of the year, and how the winds, seas, sea life, plant life, and bird life work in a repeating cycle with many core elements. It was this final response that reminded me of my place within the community: a visitor who had a great deal to learn from the community about patterns and how they exist in their natural environment. It taught me that the community has extensive knowledge of patterns because they exist in their everyday lives in very rich ways — funds of knowledge, knowledges that are historically and culturally accumulated and shared. The community was asked if they would like some early mathematics workshops to be organised for and with the parents and children.
Of importance was that the community needed time to network and discuss whether they wanted me to return and work with parents and children on the Island, and whether there were any benefits for their community.

Conclusion

The process of networks as exchanges of funds of knowledge within the Island community has been significant for engagement with the community and for responding to research as described by Smith (1999). I found this aspect challenging because I had to rely on the community's networks, to which I did not belong. However, what did become evident was that the networks comprise parents, who use such networks for communication. What was also evident was that the networks were used as valuable sources for nurturing local knowledges and practices. At the meeting, the community validated their definitions of knowledge — sorting and patterning. In doing so, this process provided a rich way to represent their knowledge and competence to support their children.

From this initial experience and through the process of theoretical refinement, I argue that going beyond the simple dichotomy between community funds of knowledge (experience, out-of-school, intuitive, tacit) and academic knowledge (in-school, linear, deliberate) is critical. The experience demonstrated that the meaning-making in which the community engaged can be described by the principles that should underpin classroom instruction — authentic engagement in productive activities, drawing on prior knowledge and complexity, and the dialogical emergence of instruction. What this means for educational practice is that by inviting children into a world of motivating activities, where the everyday and spontaneous comes into contact with school, the children's and their parents' engagement with both the activity and the social context are foregrounded (González et al., 2005).
That is, the classroom becomes an information exchange that draws on multiple funds of knowledge that are activated and tied to mathematics curricula. At this stage, I can report that after providing time for the community to network and consult about whether to permit me to conduct the parents' and children's mathematics workshops, the process has allowed me to:

1. have confidence in the way that I had consulted with the community and the community with me;
2. continue to work with the community for as long as they see that there are benefits;
3. continue to work with materials similar to those gathered for the community meeting;
4. frame the project's agenda in a way that situates research as reflexive engagement with the real — the community's funds of knowledge as well as my own; and
5. understand how pluralism is about respect for diversity and a willingness to explore and change in ways that continue to remain diverse for situated learning.

These points relate to the processes of engagement, community consultation and research, which are envisaged as continuing once the project officially commences.

Acknowledgments

I respectfully acknowledge the support, enthusiasm and engagement of the Island community. Without their consent the project would not have commenced. I acknowledge the digital photograph taken by Vicky Sun, who travelled with me for another project at the same site and was requested to take digital documentation of the meeting.

References

Baroody, A., & Benson, A. (2001). Early number instruction. Teaching Children Mathematics, 8, 154–158.
Council of Australian Governments (COAG). (2007a). Indigenous issues. Retrieved from http://www.coag.gov.au/coag_meeting_outcomes/2007-04-13/index.cfm#indigenous
Council of Australian Governments (COAG). (2007b). Indigenous issues. Retrieved from http://www.coag.gov.au/coag_meeting_outcomes/2007-12-20/index.cfm#indigenous
Council of Australian Governments (COAG). (2008a).
Indigenous issues: Closing the gap. Retrieved from http://www.coag.gov.au/coag_meeting_outcomes/2008-07-03/index.cfm#indigenous_reform
Council of Australian Governments (COAG). (2008b). Indigenous early childhood development. Retrieved from http://www.coag.gov.au/coag_meeting_outcomes/2008-10-02/index.cfm#child
Department of Education and Training (DET). (2010). A flying start for Queensland children: Education green paper for public consultation. Retrieved from http://deta.qld.gov.au/aflyingstart/pdfs/greenpaper.pdf
Department of Education, Employment and Workplace Relations (DEEWR). (2011). National Aboriginal and Torres Strait Islander education policy (AEP). Retrieved from http://www.deewr.gov.au/Indigenous/Schooling/PolicyGuidelines/Pages/aep.aspx
Department of Families, Housing, Community Services and Indigenous Affairs (FaHCSIA). (2009). Closing the gap on Indigenous disadvantage: The challenge for Australia. Retrieved from http://www.fahcsia.gov.au/sa/indigenous/pubs/general/documents/closing_the_gap/p2.htm
Department of Families, Housing, Community Services and Indigenous Affairs (FaHCSIA). (2010). Closing the gap: Prime Minister's report 2010. Retrieved from http://www.fahcsia.gov.au/sa/indigenous/pubs/general/Documents/ClosingtheGap2010/sec_1_5.htm
Ewing, B.F. (2009). Torres Strait Island parents' involvement in their children's mathematics learning: A discussion paper. First Peoples Child and Family Review Journal, 4(2), 119–124.
González, N., & Moll, L.C. (2002). Cruzando el puente: Building bridges to funds of knowledge. Educational Policy, 16(4), 623–641.
González, N., Moll, L., & Amanti, C. (2005). Introduction: Theorizing practices. In N. González, L. Moll, & C. Amanti (Eds.), Funds of knowledge: Theorizing practices in households, communities, and classrooms (pp. 1–28). New York: Routledge.
Janmohamed, Z. (2005).
Rethinking anti-bias approaches in early childhood education. In G.J. Sefa Dei & G.S. Johal (Eds.), Anti-racist research methodologies (pp. 163–182). New York: Peter Lang.
Khamaganova, E. (2005, September). Traditional Indigenous knowledge: Local view. Paper presented at the International Workshop on Traditional Knowledge, Panama City.
Lahn, J. (2006). Women's gift-fish and sociality in the Torres Strait, Australia. Oceania, 76(3), 297.
Lester-Irabinna, R. (2011). Indigenous education and tomorrow's classroom: Three questions, three answers. In N. Purdie, G. Milgate, & H.R. Bell (Eds.), Two way teaching and learning (pp. 35–48). Melbourne, Australia: Australian Council for Educational Research.
Lowe, K. (2011). A critique of school and Aboriginal community partnerships. In N. Purdie, G. Milgate, & H.R. Bell (Eds.), Two way teaching and learning: Toward culturally reflective and relevant education (pp. 13–32). Melbourne, Australia: Australian Council for Educational Research.
Makuwira, J. (2007). The politics of community capacity building: Contestations, contradictions, tensions and ambivalences in the discourse in Indigenous communities in Australia. Australian Journal of Indigenous Education, 36S, 129–136.
Martin, K. (2007). How we go round the Broomie tree: Aboriginal early childhood realities and experiences in early childhood services. In J. Ailwood (Ed.), Early childhood in Australia: Historical and comparative contexts (pp. 18–34). Sydney, Australia: Pearson Education Australia.
Mellor, S., & Corrigan, M. (2004). The case for change: A review of contemporary research on Indigenous education outcomes. Retrieved from http://search.informit.com.au/fullText;dn=324175841163329;res=IELHSS
Nakata, M. (2007). Disciplining the savages: Savaging the disciplines. Canberra, Australia: Aboriginal Studies Press.
O'Connor, K.B. (2009). Northern exposures: Models of experiential learning in Indigenous education. Journal of Experiential Education, 31(3), 415–419.
Osborne, E. (2009). Throwing off the cloak. Canberra, Australia: Aboriginal Studies Press.
Priest, K., King, S., Nangala, I., Brown, W.N., & Nangala, M. (2009). Warrki jarrinjaku 'working together everyone and listening': Growing together as leaders for Aboriginal children in remote central Australia. European Early Childhood Education Research Journal, 16(1), 117–130.
Queensland University of Technology. (2011). Research ethics. Retrieved from http://www.mopp.qut.edu.au/D/D_06_05.jsp
Schaller, A., Rocha, L.O., & Barshinger, D. (2007). Maternal attitudes and parent education: How immigrant mothers support their child's education despite their own low levels of education. Early Childhood Education Journal, 34(5), 351–356.
Sefa Dei, G. (2005). Critical issues in anti-racist research methodologies: An introduction. In G.J. Sefa Dei & G.S. Johal (Eds.), Anti-racist research methodologies (pp. 1–28). New York: Peter Lang Publishers.
Shnukal, A. (2004). A dictionary of Torres Strait Creole. Kuranda, Australia: The Rams Skull Press.
Smith, L.T. (1999). Decolonizing methodologies: Research and indigenous peoples. Dunedin, New Zealand: University of Otago Press.
Sperry Smith, S. (2009). Early childhood mathematics (4th ed.). Boston, MA: Pearson.
Stringer, E. (2004). Action research in education. Upper Saddle River, NJ: Pearson.
Taylor, S. (2004, September). Challenging ideas: Indigenous knowledge centre - the Queensland experience. Paper presented at the Australian Library and Information Association conference, Gold Coast, Queensland. Retrieved from http://conferences.alia.org.au/alia2004/pdfs/taylor.s.paper.pdf
The Productivity Commission. (2009). Overcoming Indigenous disadvantage. Retrieved from http://www.pc.gov.au/__data/assets/pdf_file/0003/90129/key-indicators-2009.pdf
Torres Strait Islander Regional Education Council. (2011). Torres Strait Islander Regional Education Council.
Retrieved from http://www.tsirec.com.au/index.php?option=com_content&view=article&id=37&Itemid=72
Torres Strait Regional Authority. (2010). Torres Strait Regional Authority. Retrieved from http://www.tsra.gov.au/the-torres-strait/community-profiles.aspx
Warren, E., & Cooper, T. (2006). Using repeating patterns to explore functional thinking. Australian Primary Mathematics Classroom, Spring, 9–14.
Witheridge, F. (2009). Early intervention through playgroup supported playgroups. Retrieved from http://www.playgroupaustralia.com.au/qld/download.cfm?DownloadFile=F43235AF-E7F2-2F96-

About the Author

Dr Bronwyn Ewing is a numeracy education researcher at Queensland University of Technology (QUT) in the YuMi Deadly Centre, specialising in the pedagogy of numeracy classrooms from the early years to VET contexts. She was previously a lecturer in Early Childhood Numeracy for 8 years in the School of Early Childhood at QUT. She has visited and taught in Indigenous communities across Queensland, particularly in the far North and Torres Strait Islands. She has a special interest in the teaching and learning of numeracy by Torres Strait Islander students, and in the role of Torres Strait Islander women in their children's prior-to-school numeracy education. She oversees the research conducted at the Centre. She is the major writer of materials developed to teach measurement in the Torres Strait Islands and of vocational numeracy material for Indigenous VET students in Years 11 and 12 and in TAFE Institutes.
Meeting Reviews, January 2016

Cross-scale Perspectives: Integrating Long-term and High-frequency Data into Our Understanding of Communities and Ecosystems

An organized oral session at the 100th annual ESA meeting on Friday, 14 August 2015, organized by Cayelan C. Carey and Kathryn L. Cottingham

Cayelan C. Carey1 and Kathryn L. Cottingham2
1 Department of Biological Sciences, Virginia Tech, 1405 Perry Street, Blacksburg, Virginia 24061 USA
2 Department of Biological Sciences, Dartmouth College, Class of 1978 Life Sciences Center, 78 College Street, Hanover, New Hampshire 03755 USA

Ecologists are amassing extensive data sets that include both long-term records documenting trends and variability in natural systems on inter-annual to decadal time scales and sensor-based measurements on minute to subhourly scales for extended periods (Hampton et al. 2013). Together, these long-term and high-frequency data are contributing to our ecological understanding. Although there have been several previous ESA sessions that have explored the insights provided by either long-term data or high-frequency data, to our knowledge this organized oral session provided one of the first opportunities to synthesize the lessons learned from leveraging both long-term data and high-frequency approaches. The session included speakers at a variety of career stages and institutions, chosen specifically for their unique and complementary perspectives on high-frequency and long-term data collection from desert ecosystems, grasslands, forests, streams, lakes, and coral reefs.
While the focus of the session was on the ecological insights gained by using high-frequency and/or long-term data, the speakers also discussed the data approaches used to collect, manage, and analyze time series, as well as the inherent challenges in harnessing "big data" for ecology.

Scott Collins (University of New Mexico) kicked off the session by describing the use of sensor arrays to study responses to precipitation experiments in the United States and China (EDGE, edge.biology.colostate.edu). Collins emphasized the fallibility of sensors during long-term deployments, and how this unreliability resulted in data gaps that necessitated aggregating data to coarser temporal intervals than planned. Collins advised ecologists collecting sensor data to develop plans for dealing with data gaps a priori and to think carefully about what these data are telling us: e.g., soil CO2 sensors give measurements of CO2 concentrations, which then need to be interpreted carefully to get to the variables that we are actually interested in, e.g., soil respiration.

Kim La Pierre (University of California, Berkeley) presented findings from a long-term ecological research (LTER) synthesis of plant community responses to factorial experiments manipulating multiple components of global change (e.g., nutrient concentrations, water availability; corredata.weebly.com). Unlike aggregate ecosystem parameters, whose responses were not affected by the duration or number of global change factors simultaneously manipulated (Leuzinger et al. 2011), community properties were much more sensitive, and effects strengthened with time and the number of global change treatments. Thus, a key take-home message was that long-term experiments are important to understanding the magnitude and interactions of global change effects, especially of ongoing "press" perturbations.
As such, La Pierre recommended that ecologists conduct studies that manipulate multiple drivers simultaneously over periods longer than 3 years.

Bulletin of the Ecological Society of America, 97(1)

Jack Webster (Virginia Tech) used >40 years of stream nitrogen (N) monitoring data from watersheds at the Coweeta Hydrologic Lab and LTER site in North Carolina to demonstrate how responses to deforestation in the southeastern United States are very different from the textbook paradigm derived from studies at the Hubbard Brook Experimental Forest in New Hampshire (Bormann and Likens 1994). At Coweeta, deforestation led to a regime shift in N dynamics, with a shift from biological control of N export to hydrological control due to a change in the dominant tree species during recovery from logging (Webster et al. 2014). Webster predicted that it will take at least two centuries for stream N cycling to return to baseline prelogging levels.

Tanner Williamson (Miami University of Ohio) compared the time scales and relative importance of external vs. internal loading of N and phosphorus (P) to the Acton Lake watershed (Ohio, USA) using both high-frequency and long-term data. External loading from the agricultural watershed exhibited seasonal spikes, with most loading occurring in spring months, whereas internal loading by gizzard shad fish dominated the growing season (April–October) nutrient budget. He predicted that algal biomass during the peak growing season responds more strongly to the internal, fish-derived nutrient subsidy because it is both larger and more predictable than external loads.

Paul Hanson (University of Wisconsin—Madison) presented a model that connected daily lake carbon (C) metabolism and its consequences for long-term organic C cycling.
Hanson identified the temporal scale of data collection and modeling as a major challenge for resolving lake C budgets: daily, seasonal, and decadal C budgets give varying answers as to whether lakes are C sinks or sources in the landscape. Similar to Collins, Hanson pointed out that the oxygen sensors deployed in lakes give us gas flux measurements but not organic C pools—and thus, interpreting sensor data was again noted as a challenge. Hanson also echoed La Pierre's call for long-term experiments, advocating for lake experiments manipulating C loading to better understand the controls of short-term and long-term ecosystem responses.

Steve Carpenter (University of Wisconsin—Madison) reported on a whole-ecosystem experiment to test whether it is possible to detect warnings of an impending regime shift in time to take preventive action, using statistical tests applied to minute-scale sensor data. To the surprise of many in the audience, he and his collaborators successfully interpreted signals from sensors of oxygen, phycocyanin, and chlorophyll, and turned off daily nutrient additions to a lake before a large cyanobacterial bloom developed. His take-home message was that high-frequency data can have a role in ecosystem management, though different ecosystems may exhibit varying responses to the same stressor.

Kathy Cottingham (Dartmouth) described a 10-year study to understand the development of cyanobacterial blooms in a low-nutrient lake in New Hampshire, USA, using a combination of daily samples of cyanobacteria collected by a citizen scientist, weekly samples by researchers, and minute-scale sensor data for temperature and light (Carey et al. 2014). Cottingham challenged the ecological community to develop better ways to integrate high-frequency abiotic data with low-frequency biotic measurements without resorting to summary statistics.
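Carpenter's early-warning approach rests on classic statistical indicators of an approaching regime shift: rising variance and rising lag-1 autocorrelation computed in a sliding window over the high-frequency sensor stream. The sketch below shows those generic indicators in Python; it is an illustration of the method, not the team's actual code, and the function names are my own:

```python
from statistics import mean, pvariance

def lag1_autocorr(xs):
    """Lag-1 autocorrelation of a window of readings."""
    m = mean(xs)
    num = sum((a - m) * (b - m) for a, b in zip(xs, xs[1:]))
    den = sum((a - m) ** 2 for a in xs)
    return num / den if den else 0.0

def rolling_indicators(series, window):
    """Rolling variance and lag-1 autocorrelation over a sliding window.

    A sustained rise in either indicator is the classic warning that the
    system is losing resilience ahead of a regime shift.
    """
    variances, autocorrs = [], []
    for i in range(window, len(series) + 1):
        w = series[i - window:i]
        variances.append(pvariance(w))
        autocorrs.append(lag1_autocorr(w))
    return variances, autocorrs
```

In practice, an alarm rule (e.g., the indicator exceeding its historical baseline for several consecutive windows) would trigger the management action, here the halt of nutrient additions.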
Paralleling Williamson's talk, Alexandra Gerling (Virginia Tech) evaluated the importance of external nutrient loading vs. within-lake nutrient recycling—this time by chemical mechanisms rather than biological—for N and P budgets in Falling Creek Reservoir, Virginia (USA). Taking advantage of minute-resolution oxygen sensors, an experimental hypolimnetic oxygenation system, and the ability to manipulate external loading from the upstream reservoir, Gerling showed that the reservoir was a net sink of N and P and thus provides an important ecosystem service in reducing nutrient loads to downstream ecosystems (Gerling et al. 2016).

Returning to land and moving north to the boreal forest, Jennie McLaren (University of Texas at El Paso) evaluated the long-term (20-year) effects of nutrient fertilization (control vs. NPK) and herbivory (open vs. snowshoe hare exclosures) on plant communities and nutrient cycling, as well as the potential for recovery from these treatments. McLaren joked that her study predated affordable sensors and thus used an army of undergraduates instead. Surprisingly, snowshoe hare exclosures had little effect on the forest understory community or on soils, though fertilization did shift the composition of the plant community. In a parallel experiment examining the effects of long-term fertilization, McLaren found that recovery of the plant and microbial communities from fertilization is still ongoing.

Finally, Stuart Sandin (Scripps Institution of Oceanography, University of California—San Diego) described strategies to track changes in coral reefs across both time and space in the central Pacific, using a combination of large-scale mapping using digital photography and photomosaics, remote-sensed satellite data (e.g., sea surface temperatures [SST], chlorophyll), and in-ocean buoys (e.g., wave energy).
A key focus of his team's work is whether human exploitation drives communities toward earlier successional structures. Sandin found that generalized additive models predict coral community structure from SST, chlorophyll, and waves near uninhabited islands but not inhabited islands, suggesting that humans alter biophysical coupling (Williams et al. 2015).

Collectively, the speakers made it clear that both long-term and high-frequency data streams are important to ecology, but that there are some common challenges that transcend ecosystem type and research question. We identified four integrative themes across the talks. First, long-term observational studies are extremely important for establishing context and baseline environmental conditions. Many of the speakers harnessed data collected as part of an LTER site or a Long Term Research in Environmental Biology (LTREB) study: collectively, the 10 talks reported on more than 150 years of monitoring data (!). Second, experiments provide a powerful approach for testing hypotheses, especially if done at large spatial scales or over many years: short-term responses were not always indicative of long-term dynamics. Third, as ecology becomes increasingly "sensored," we need to be aware of sensor shortcomings: they often fail, collect erroneous data, and may not truly capture the response variables of interest. Ecologists need to be aware of these issues before they deploy expensive sensor arrays and have a plan in place for dealing with sensor gaps, because they will happen. Fourth, downscaling high-frequency data to lower temporal resolution is challenging. Several speakers wrestled with how to decide on the appropriate temporal resolution for data analysis, especially when analyses of the same variable at different temporal scales yield different answers.
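The third and fourth themes, sensor gaps and downscaling to coarser resolution, can be made concrete in one small sketch. The function and the completeness threshold below are illustrative assumptions, not anything presented in the session: a regularly spaced sensor series (with `None` marking failed readings) is aggregated into coarser bins, and bins with too few valid samples are reported as missing rather than as a biased mean:

```python
from statistics import mean

def aggregate(readings, bin_size, min_coverage=0.5):
    """Aggregate a regularly spaced sensor series into bins of bin_size samples.

    None marks a failed reading. Bins whose fraction of valid samples falls
    below min_coverage are reported as None instead of a potentially biased
    mean, making the gap-handling policy explicit rather than implicit.
    """
    out = []
    for start in range(0, len(readings), bin_size):
        chunk = readings[start:start + bin_size]
        valid = [r for r in chunk if r is not None]
        out.append(mean(valid) if len(valid) / len(chunk) >= min_coverage else None)
    return out
```

Deciding `bin_size` and `min_coverage` up front is exactly the kind of a priori gap plan Collins recommended, and rerunning the analysis at several bin sizes exposes how much the answer depends on temporal resolution.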
We organized this session to see how ecologists across subfields wrestle with the challenge of integrating decades of long-term data, usually collected at coarse time scales (e.g., weeks to months to years), with fewer years of high-frequency data collected at second to minute to hour time scales. No clear recommendations or best practices emerged from these talks, but it is clear that cross-ecosystem communication will be essential to developing strategies for effectively working across temporal scales. With new sensor networks coming online every year, our goal is to spur synthesis working groups collaborating across ecosystems to drive this field forward.

Literature cited

Bormann, F.H., and G. Likens. 1994. Pattern and process in a forested ecosystem. Springer-Verlag, New York, New York, USA.
Carey, C.C., K.C. Weathers, H.A. Ewing, M.L. Greer, and K.L. Cottingham. 2014. Spatial and temporal variability in recruitment of the cyanobacterium Gloeotrichia echinulata in an oligotrophic lake. Freshwater Science 33:577–592.
Gerling, A.B., Z.W. Munger, J.P. Doubek, K.D. Hamre, P.A. Gantzer, J.C. Little, and C.C. Carey. 2016. Whole-catchment manipulations of internal and external loading reveal the sensitivity of a century-old reservoir to hypoxia. Ecosystems. DOI: 10.1007/s10021-015-9951-0.
Hampton, S.E., C. Strasser, A. Batcheller, W. Gram, C. Duke, J. Tewksbury, and J. Porter. 2013. Big data and the future of ecology. Frontiers in Ecology and the Environment 11:156–162.
Leuzinger, S., Y. Luo, C. Beier, W. Dieleman, S. Vicca, and C. Korner. 2011. Do global change experiments overestimate impacts on terrestrial ecosystems? Trends in Ecology & Evolution 26:236–241.
Webster, J.R., W.T. Swank, J.M. Vose, J.E. Knoepp, and K.J. Elliott. 2014.
Ecosystem stability and forest watershed management: a synthesis of 30+ years of research on WS 7. Pages 229–247 in W.T. Swank and J.R. Webster, editors. Long-term response of a forest watershed ecosystem: clearcutting in the southern Appalachians. Oxford University Press, New York, New York, USA.
Williams, G.J., J.J.M. Gove, Y. Eynau, B.J. Zgliczynski, and S.A. Sandin. 2015. Local human impacts decouple natural biophysical relationships on Pacific coral reefs. Ecography 38:751–761.

Provided by the author(s) and NUI Galway in accordance with publisher policies. Please cite the published version when available.

Title: Digital camera connectivity solutions using the picture transfer protocol (PTP)
Author(s): Bigioi, Petronel; Corcoran, Peter M.
Publication Date: 2002
Publication Information: P. Bigioi, G. Susanu, P. Corcoran, I. Mocanu (2002) "Digital camera connectivity solutions using the picture transfer protocol (PTP)", IEEE Transactions on Consumer Electronics, Vol. 48, No. 3, pp. 417-427
Publisher: IEEE
Item record: http://hdl.handle.net/10379/277

DIGITAL CAMERA CONNECTIVITY SOLUTIONS USING THE PICTURE TRANSFER PROTOCOL (PTP)

Petronel Bigioi1, George Susanu2, Peter Corcoran3 and Irina Mocanu4
1 Dept. of IT, National University of Ireland, Galway
2 Accapella Ltd., Ireland
3 Dept. of Electronic Engineering, National University of Ireland, Galway, Ireland
4 Dept. of Computer Science and Engineering, University Politehnica of Bucharest, Romania

Abstract

Interconnectivity of digital cameras with other devices is one of the main concerns of consumers and digital camera manufacturers.
Interconnectivity features in digital cameras allow more consumer-friendly usage of digital cameras. Moreover, with suitable application-layer software, digital photographs can be sent directly from the camera to a desired target: disk storage, a printer, a web site, as an e-mail message or web print, using a single, purpose-designed communication protocol: the Picture Transfer Protocol (PTP).

1 Introduction
Digital photography continues to gain market share from conventional photography. This is due to a number of factors: no development costs; no film costs; easy picture preview before printing or saving the image; easy sharing; portability of images; easy presentation in a variety of electronic and print formats; etc. Despite these benefits, conventional photography is still more practical in the eyes of many consumers. Thus, although consumers are converting to digital, the rate of growth has slowed very significantly in the last couple of years. This situation exists because the workflow for digital photography, from the acquisition process through to the final store, print or share process, remains complex and continues to form a usage barrier for the majority of consumers. In brief, digital photography remains targeted towards skilled PC users. A couple of years ago, each digital camera manufacturer had its own specific communication protocol to access and control a digital still camera. This method had a number of disadvantages: the camera manufacturer had to provide device drivers for all the operating systems and hardware platforms that they wanted to support, and this added costs to the digital camera selling price. Further, end-users were expected to have a certain level of technical ability in order to understand the whole process of digital photography. Although this method still exists, nowadays the most popular approach is to make the digital camera look like a storage device whenever it is attached to a PC or an embedded system.
Even though this method has become the present norm and more and more camera manufacturers adopt it, a number of downsides and limitations have already started to emerge. Firstly, the digital camera is only able to deliver pictures when attached to a mass-storage reader device. Secondly, the camera becomes a peripheral, or slave device, to the PC: an upload process from the digital camera to the PC or receiving device cannot be initiated using the mass storage approach, so automation of the digital photography workflow is not practical. Thirdly, there is no way to control the digital camera using the mass storage solution, because it appears to the PC only as a passive storage device.

Fig 1: DSC Communication Protocols - Past (device-manufacturer-specific protocol)

This paper will describe a recent standards effort (PIMA 15740) from the Photographic and Imaging Manufacturers Association for connectivity of digital still photography devices. This standard is known as the Picture Transfer Protocol (PTP) for short. The intention of the PTP standard is to replace and unify the communication protocols between still imaging devices and other receiving devices. Most imaging devices include hardware interfaces that can be used to connect to a host computer or other imaging devices, such as a printer. A number of new, high-speed interface transports have recently been developed, including IrDA, USB, and IEEE 1394. This standard is designed to cover the requirements for communicating with still imaging devices over a variety of transports. This includes communications with any type of device, including host computers, direct printers and other still imaging devices, over suitable transports.

Manuscript received June 24, 2002. 0098-3063/00 $10.00 (c) 2002 IEEE
The requirements include standard image-referencing behavior, operations, responses, events, device properties, datasets, and data formats to ensure interoperability.

Fig 2: DSC Communication Protocols - Present (mass storage device for file transfer, with custom solutions for control functions; host PCs, PDAs and evolved embedded systems able to run a file system need no manufacturer-specific device driver for picture transfer, but get file transfer only)

Standardizing the operations and data requirements for still imaging devices through a standard such as PTP will assist transport implementers, platform aggregators, service providers and device manufacturers by providing a common ground for interface support. It will also assist developers of host software and image-receiving devices by ensuring that their products can interface with many different imaging devices from different manufacturers, and assist users by ensuring that the imaging devices they purchase will inter-operate with those of different manufacturers. One of the most interesting facts about the PTP standard is that it provides optional operations, formats and defined extension mechanisms, which allow digital camera manufacturers to use the communication standard even if they want to implement custom behavior for their imaging devices. This standard has been designed to appropriately support popular image formats used in digital still cameras, including the EXIF and TIFF/EP formats defined in ISO 12234-1 and ISO 12234-2, as well as the Design Rule for Camera File System (DCF) and the Digital Print Order Format (DPOF). This paper will provide a detailed explanation of the following main issues:
- Description of the PTP as a common protocol for any device to exchange images with a still imaging device, either by retrieving images from it or by sending images to it.
- Usage models (push and pull models) and usage scenarios (a number of typical usage scenarios for the two typical usage models).
- PTP transport requirements and practical examples using the USB transport. Operating system support for PTP USB devices.
- Proposal for PTP implementations over wireless transports. Transport-specific issues and solutions.
- Imaging devices as Internet-connected devices using PTP mapped onto the TCP/IP protocol. Firewalls, authentication and security issues and solutions.

Fig 3: DSC Communication Protocols - Future (Picture Transfer Protocol)

2 PTP Description
This section describes the PTP and the main guidelines followed by this protocol.

2.1 PTP Device Roles
Rather than having a host-master-to-slave-device protocol description or a peer-to-peer description, the PTP refers to the components engaged in a picture transfer as Initiator and Responder. The PTP defines the Initiator as the device that initiates the connection (issues the OpenSession PTP command), while the Responder is defined as the device that responds to operation requests such as the OpenSession request. Devices, in the PTP model, can be Initiators, Responders or both. For instance, a PC can be configured only as an Initiator device, while a USB camera can be only a Responder. Similarly, a Bluetooth camera that opens a connection to a Bluetooth PTP printer and pushes pictures for print can be only an Initiator, while the corresponding printer can be only a Responder. However, a digital camera that can connect to other digital cameras and is able to both initiate and receive a PTP session will have to be capable of behaving both as Initiator and Responder. Usually, the Initiator will have a form of graphical user interface, so that the user can see and browse thumbnails, select and choose an appropriate control action, and so on.
Moreover, the Initiator device has to implement device enumeration and transport mapping (in the case that multiple PTP-compliant transports are supported), all in a transport-specific manner. Typically, a Responder will not have a graphical user interface or multiple transport support.

2.2 PTP Sessions
In order for two PTP devices to exchange information about pictures or metadata, a PTP session has to be established. A session is a logical connection between the PTP devices, over which the object identifiers, or ObjectHandles, and storage media identifiers, or StorageIDs, are persistent. A session is considered opened after the Responder returns a valid response to the OpenSession operation requested by the Initiator. A session is closed after the CloseSession operation is completed or the transport closes the communication channel, whichever occurs first. The only operation or data traffic allowed outside a session is the GetDeviceInfo operation and the DeviceInfo dataset: a device can issue/accept a GetDeviceInfo operation outside a session. A session is needed in order to transfer descriptors (StorageInfo, ObjectInfo, etc.), images or any other objects between devices. Any data communicated between devices is considered valid unless a specific event occurs specifying otherwise. Each session is represented by a unique 32-bit identifier (UINT32) that is assigned by the Initiator using the OpenSession operation request and has to be non-zero.

2.3 Protocol Model (Transactions)
The PTP is specified using the transaction model. A transaction is composed of a request operation, followed by an optional data transfer and a response. Each transaction has an identifier (TransactionID) that is session-unique and comprises a 32-bit unsigned number (UINT32). The transaction IDs are a sequence of numbers starting with 0x00000000 (for the OpenSession transaction) and increasing with every following operation.
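The session and transaction rules above (a non-zero UINT32 SessionID assigned by the Initiator, and TransactionIDs counting up from 0x00000000) can be sketched as a small piece of Initiator-side bookkeeping. The class and method names here are ours, purely for illustration; they are not part of the standard.

```python
# Minimal sketch of PTP session/transaction bookkeeping, assuming the rules
# described above: a non-zero 32-bit SessionID chosen by the Initiator, and
# per-session TransactionIDs that start at 0x00000000 and increase by one
# with every operation.

class PtpSession:
    def __init__(self, session_id: int):
        if not (0 < session_id <= 0xFFFFFFFF):
            raise ValueError("SessionID must be a non-zero UINT32")
        self.session_id = session_id
        self._next_tid = 0x00000000  # OpenSession itself uses TransactionID 0

    def next_transaction_id(self) -> int:
        tid = self._next_tid
        self._next_tid = (self._next_tid + 1) & 0xFFFFFFFF  # wrap as UINT32
        return tid

session = PtpSession(0x00000001)
assert session.next_transaction_id() == 0x00000000  # OpenSession
assert session.next_transaction_id() == 0x00000001  # first following operation
```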
With a few exceptions, all the transactions are synchronous and atomic operations, having blocking execution within the session. Devices that support multiple sessions must be able to keep the sessions opaque and asynchronous to each other. The asynchronous transactions (i.e. InitiateCapture) are treated as synchronous in the initial phase (when specifying the operation), when a response indicating whether the operation request has been successful is enough. Asynchronous events are used to handle the communication initiated by such operations (i.e. the availability of new objects becoming available on the device's storage as a result of an InitiateCapture operation). The completion of an asynchronous transaction is signaled by a specific event (i.e. a CaptureComplete event should be issued). If the Initiator issues another asynchronous operation while a previous one is still in progress, the device should issue a Device_Busy response.

Fig 4: PTP Transaction Phases

In the PTP the transactions consist of three phases. The data phase is optional and, depending on the operation, it can be present or not. If the data phase is present, the data can be sent either from the Initiator to the Responder or from the Responder to the Initiator, but it may not consist of data transferred in both directions. Only one transaction at a time can take place within a session.

Table 1: Request and Response Datasets (the operation dataset and the response dataset share the same layout)
  Field          Size (bytes)  Datatype
  SessionID      4             UINT32
  TransactionID  4             UINT32
  Parameter 1    -             ANY
  Parameter 2    -             ANY
  Parameter 3    -             ANY
  Parameter 4    -             ANY
  Parameter 5    -             ANY

2.3.1 Operation Request Phase
In this phase the operation request dataset is transferred from the Initiator to the Responder. This dataset is given in Table 1.
2.3.2 Data Phase
The data phase is optional and is used to transmit a dataset that is larger than may be accommodated by the operation or response phases. This phase is used to transfer information that is not specified by small data types (i.e. typically the transfer of a binary image is achieved during this phase). During the data phase, the information is transferred either from the Initiator to the Responder or from the Responder to the Initiator, but never in both directions.

2.3.3 Response Phase
In this phase the response dataset is transferred from the Responder to the Initiator. The response dataset is very similar to the operation request dataset and is presented in Table 1.

2.4 Vendor Extensions
Imaging device manufacturers can extend the PTP command set by defining their own commands, events and properties. Of course, only vendor-specific software will take advantage of this vendor-specific functionality. The VendorExtensionID and the VendorExtensionVersion are fields in the DeviceInfo structure that uniquely identify the vendor extensions. A device has to check those fields before using a vendor-extended operation, event or property. The VendorExtensionID is assigned by PIMA, while the VendorExtensionVersion is maintained internally by each manufacturer.

3 Usage Models
The purpose of this section is to describe how PTP devices can interact and what exactly the flow of operations is in each usage model.
The PTP can be associated with a dynamic master/slave protocol, where the master role is undertaken by the Initiator device while the slave role is assigned to the Responder. The Initiator determines the flow of operations, while the Responder can issue responses to the operations as well as events. The events can be associated with an operation, but they can appear completely asynchronously as well, so the Initiator should be prepared to deal with events that occur outside the boundaries of a transaction.

3.1 Pull Scenarios
In pull mode, the Initiator retrieves the objects from the Responder; this is usually the way that PTP communication is implemented in a device that has a rich user interface. This usually involves three steps that the user has to go through in order to complete the operation: select a PTP device to communicate with (if there are multiple devices in the proximity), download and select thumbnails (from the remote device), and download the selected pictures. A typical scenario for this usage model would be where the Initiator requests all image objects from the Responder, ignoring other objects (non-images) and associations. This is described in Table 2.

Table 2: Get all Images from a PTP Device
  1. OpenSession (SessionID) - opens a PTP session
  2. GetObjectHandles (0xFFFFFFFF, 0xFFFFFFFF, 0x00000000) - returns the ObjectHandles in an ObjectHandlesArray
  3. GetObjectInfo (ObjectHandle n) - returns the ObjectInfo
  4. GetObject (ObjectHandle n) - returns the object data
  5. Repeat steps 3-4 for each ObjectHandle
  6. CloseSession - closes the session

Another scenario would be where the Initiator requests all thumbnails from the Responder, ignoring associations and non-image objects. This is described in Table 3.
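The Table 2 sequence can be sketched as Initiator-side code. The client object and its method names are hypothetical stand-ins for a PTP client library; only the operation sequence and the 0xFFFFFFFF "all objects" wildcard parameters come from the scenario above.

```python
# Sketch of the "get all images" pull scenario (Table 2), written against a
# hypothetical Initiator-side client; the method names are illustrative.

def pull_all_images(client):
    client.open_session(session_id=0x00000001)
    # 0xFFFFFFFF for StorageID and ObjectFormatCode acts as a wildcard;
    # 0x00000000 leaves the association (parent) parameter unspecified.
    handles = client.get_object_handles(0xFFFFFFFF, 0xFFFFFFFF, 0x00000000)
    images = []
    for handle in handles:
        info = client.get_object_info(handle)   # step 3, per handle
        images.append((info, client.get_object(handle)))  # step 4
    client.close_session()
    return images

class FakeClient:
    """Trivial in-memory stand-in for a PTP client, for demonstration."""
    def __init__(self, objects):
        self._objects = objects
        self.session_open = False
    def open_session(self, session_id):
        self.session_open = True
    def get_object_handles(self, storage_id, fmt, assoc):
        return list(self._objects)
    def get_object_info(self, handle):
        return {"handle": handle}
    def get_object(self, handle):
        return self._objects[handle]
    def close_session(self):
        self.session_open = False

images = pull_all_images(FakeClient({1: b"jpeg-1", 2: b"jpeg-2"}))
assert len(images) == 2
```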
Table 3: Get all Thumbnails from a PTP Device
  1. OpenSession (SessionID) - opens a PTP session
  2. GetObjectHandles (0xFFFFFFFF, 0xFFFFFFFF, 0x00000000) - returns the ObjectHandles in an ObjectHandlesArray
  3. GetThumb (ObjectHandle n) - returns the thumbnail data
  4. Repeat step 3 for each ObjectHandle
  5. CloseSession - closes the session

3.2 Push Scenarios
Push mode consists of an Initiator sending one or more objects to the Responder. Usually, the Initiator has a rich user interface (i.e. a PC or digital camera). The Responder receives the pictures and other data from the Initiator. This mode is appropriate for transfers from a device with a rich UI to devices that don't have image display capabilities. For instance, this is useful for direct print operations (over either USB, 1394 or Bluetooth).

Table 4: Push Scenario
  1. OpenSession (SessionID) - opens a PTP session
  2. Vendor-specific operation (optional) - configures the Responder for specific operations on the incoming pictures
  3. SendObjectInfo (Parent ObjectHandle) - returns the StorageID and ObjectHandle
  4. SendObject - writes the object data to the Store
  5. Repeat steps 3-4 for each object
  6. CloseSession - closes the session

A typical push scenario is the one illustrated in Table 4, where the Initiator transfers a number of images to a default location on the Responder. Prior to the data transfer, the Initiator can execute a vendor-specific command that can configure the Responder for a specific action that has
4.1 Overview of endpoints PTP over USB uses a number of endpoints (source or sink of data on USB devices) to provide the required functionality. 4.1.1 Default endpoint The default endpoint has the address: Ox00 and it is a control (default) type endpoint. The default endpoint is usually used for standard USB requests and for class specific requests. Class specific requests are: OxOC Device Reset Request (out) - used by the host to reset the device. Get Device Status Request (in) - used’by the host to determinate the device status after the host cancels a transaction or an endpoint becomes stalled. Cancel Request (out) - used by the host to cancel the current transaction. Get Extended Event Data (in) - used by host to retrieve the event data from device that is too big for a standard (PTP) event from the interrupt endpoint. All PTP events must be received from the interrupt endpoint. transaction depends on ContainerType field ? Payload The content of the payload 4.1.2 Data-In endpoint The Data-In endpoint can have any address in range 0x81- Ox8F, but Interrupt endpoint address, It is a Bulk In type endpoint. It is used for responses and the data-in phases of the transaction, from the device to the host. 4.1.3 Data-Out endpoint The Data-Out endpoint can have any address within the range Ox01 - OxOF. It is a Bulk Out type endpoint. It is used for data-out and operation phases of the transaction, from the host to the device. 4.1.4 Interrupt endpoint. The Interrupt endpoint can have any address in range Ox8 1 - Ox8F but Data-In endpoint address. It is an Interrupt type endpoint and it is used to transfer for event data from device to host. 4.2 USB transport standard for still digital cameras supports only single session mode. Even if device is capable of supporting multiple sessions the USB transport protocol is not able to handle more than one session (the Initiator can’t open concurrent session to the Responder). 
This does not affect the service quality, because multiple sessions do not provide many benefits to a PC user and, on the other hand, single-session support is much simpler for a device implementation. The PTP communication protocol implies five different kinds of communication "primitives": the Operation Request phase of a transaction, the Data-In phase of a transaction, the Data-Out phase of a transaction, the Response phase of a transaction, and Events. All these primitives are encapsulated and transported through USB in "containers". The containers that are part of a transaction (transaction phases) are transferred synchronously relative to each other through the bulk endpoints. The Event Container is transferred asynchronously relative to transaction containers through the interrupt endpoint.

Table 5: Generic USB Container Structure
  ContainerLength - the whole size of the container from offset 0x00, including the payload
  ContainerType   - one of the following values:
                    0 (undefined) - must not be used;
                    1 (CommandBlock) - maps a PTP Operation Request Dataset;
                    2 (DataBlock) - maps the Data-In and Data-Out phases of a transaction;
                    3 (ResponseBlock) - maps a PTP Response Dataset;
                    4 (EventBlock) - maps a PTP Event Dataset
  Code            - the value of the PTP Operation, Response or Event Code
  TransactionID   - the value of the associated PTP transaction
  Payload         - the content of the payload depends on the ContainerType field

This approach limits the host to acting as the Initiator and the device as the Responder. Also, events can be sent from the device to the host only (this is suitable for most events). There is a special case, the Cancel Transaction Event, which is sent from the host to the device. This event is transferred through the default endpoint as a class-specific request. Note that even though the Cancel Transaction Event is handled separately by the USB transport, the support for these events is required and the corresponding code (0x4001) must be present in the DeviceInfo Dataset. On the other hand, the device may cancel the transaction by stalling the corresponding endpoint rather than sending the Cancel Transaction Event.
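The generic container of Table 5 can be packed in a few lines. The 4-byte length, 2-byte type, 2-byte code, 4-byte TransactionID layout and little-endian byte order follow the USB still-image class convention; the helper name is ours, and the GetDeviceInfo operation code (0x1001) is the standard PTP value.

```python
import struct

# Packing the generic USB container of Table 5: ContainerLength covers the
# whole container from offset 0x00, including the payload.

COMMAND_BLOCK, DATA_BLOCK, RESPONSE_BLOCK, EVENT_BLOCK = 1, 2, 3, 4

def pack_container(ctype: int, code: int, transaction_id: int,
                   payload: bytes = b"") -> bytes:
    length = 12 + len(payload)  # 12-byte header plus the payload
    return struct.pack("<IHHI", length, ctype, code, transaction_id) + payload

# A CommandBlock for GetDeviceInfo (operation code 0x1001), no parameters:
cmd = pack_container(COMMAND_BLOCK, 0x1001, 0x00000000)
assert len(cmd) == 12
```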
This is a very good example of a transport-specific implementation of the cancelling mechanism specified by the PTP layer. The USB transport also defines other class-specific requests issued to the default endpoint.

4.3 Containers
A Container is a USB transport structure used to encapsulate and transport PTP communication primitives. The PTP "primitives" that are carried over using this USB structure are defined in Table 5.

4.4 Common Implementation Mistakes
During investigations of different implementations of the PTP transport for USB, several issues have been found at both the Initiator and Responder sides. Most of these arise because of inconsistencies in the specification of the USB transport. Others are just violations of this specification. Because PTP is the first standard protocol for imaging devices and is not mature enough, we have to deal with the consequences. The cases listed below are not necessarily a sign of a bad design in PTP itself. In some cases the specification needs to be refined to clarify specific cases that are presently implemented differently by different device vendors.

4.4.1 Cancellation problems
Some camera manufacturers do not implement transaction cancellation properly. The guide here is that USB has a transport-specific implementation of cancel by means of the class-specific Cancel Request, which is not taken into consideration by most PTP implementations. As a result, PTP clients, such as the Windows Imaging Architecture (WIA) mini-driver, wait until a transaction is completed and discard the results. This can lead to significant delays after the user cancels the operation. On the other hand, cancellation from the Responder side is achieved by stalling the USB pipe and setting a Transaction_Cancelled response code as the device status. The majority of Initiator implementations treat this as a Responder error.
In general this is the correct approach, because the transaction did not succeed, but in some cases the Initiator may refine the reason for failure as a cancellation, to differentiate it from cases when the transaction failed because of a transport error.

4.4.2 Incorrect error reporting techniques
Many camera implementations have a "habit" of reporting PTP errors by using a USB-level approach. In such cases they usually stall the pipe and set the appropriate response code in the device status; cancel a transaction (also by stalling the pipe and reporting a Transaction_Cancelled code); stall the pipe without identifying a reason (in this case the camera often needs a reset); or even reset the USB connection. The correct approach is to use the Response Phase to report the appropriate PTP error code.

4.4.3 Undefined parameters - transport issues
The USB transport says that undefined parameters of operation, response and event datasets need not be included in a container which incorporates a "number of parameters" field. However, the undefined parameters may also be transferred in containers with zero values, as documented in the PTP specification. The receiver of the container should plan to handle both cases.

4.4.4 USB NULL packet not handled
Even though the USB containers specify their length in the header, the end of the container is still indicated by a short or a NULL packet. When the container is not a multiple of the USB packet length, a short packet is received; in this case there is no need for a NULL packet transmission. The NULL packet must be explicitly issued and handled if the container length is an exact multiple of the USB packet length. Some implementations do not handle the NULL packets properly.

4.4.5 Reset problems
Some camera implementations do not reset the PTP session after a Reset Request, or do not handle this request at all. That usually leads to a Session_Already_Opened error, which is not expected by the Initiator after a reset.
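The NULL-packet rule of 4.4.4 above reduces to a one-line predicate, worth stating explicitly because it is the exact-multiple case that implementations get wrong:

```python
# The short/NULL-packet rule from 4.4.4: a zero-length packet terminates a
# bulk transfer only when the container size is an exact multiple of the
# endpoint's packet size; otherwise the final short packet already marks
# the end of the container.

def needs_null_packet(container_len: int, usb_packet_len: int) -> bool:
    return container_len % usb_packet_len == 0

assert needs_null_packet(512, 64) is True   # exact multiple: send a ZLP
assert needs_null_packet(500, 64) is False  # final short packet suffices
```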
4.4.6 Timeout issues
Neither the PTP nor the USB transport specification provides any timing requirements between transaction phases or inter-phase transfers. It is clear that a camera implementation of PTP must know how to deal with cases when the Initiator is not responding for a long time during a transaction and take appropriate action (e.g. put a message on the OSD and reset the USB connection), but this should be done after a reasonably long timeout (probably not less than a minute). With too short a timeout, the Initiator could be suspended by the operating system for a couple of seconds and the PTP transaction would be broken before the Initiator has a chance to regain control.

5 Bluetooth Transport
This section describes a proposal to map the PTP over a Bluetooth transport.

5.1 BPTP
As mentioned before, in the PTP there are five main primitives that can be identified. These are: operation requests, operation responses, data in, data out and events. While the first four primitives are part of the transaction model described in an earlier section, the fifth one, events, is completely asynchronous during the PTP session. Therefore, the transport should provide a minimum of two logical channels to the PTP, for an easy mapping. The Bluetooth transport has the ability to provide this at the L2CAP (Logical Link Control and Adaptation Protocol) level, so the natural level to interact with the Bluetooth protocol stack is the L2CAP layer. All the PTP communication should go over two L2CAP logical data channels, one dedicated to event communication and the other to command requests, responses and data traffic. The L2CAP layer provides high-level protocol multiplexing and packet segmentation and reassembly (up to a maximum packet size). It will not be enough just to send PTP data over L2CAP, since the size
It will not be enough just to send PTP data over L2CAP since the size Bigioi et al.: Digital Camera Connectivity Solutions Using the Picture Transfer Protocol (PTP) 423 of the data may exceed the maximum L2CAP packet size. Usually, the L2CAP maximum transmission unit (MTU) will be exported using an implementation specific service interface, while the minimum MTU size is 48 bytes and it should be accommodated by any L2CAP implementations. E z 4 c 0 % k 12cap II Fig 5: Bluetooth Stack It is the responsibility of the higher layer protocol to limit the size of the packets sent to the L2CAP layer bellow the MTU limit. Therefore, in order to map the PTP over bluetooth at this level, we need to define a simple protocol specification (BPTP - Bluetooth PTP) that will deal with this kind of issues (see Fig 5 for its place in the bluetooth protocol stacks architecture). BPTP expects from L2CAP a reliable transport layer, error free communication channels. It is a packet based transport protocol. All the communications between the two bluetooth imaging devices will go over two L2CAP logical data channels. The events (PTP event datasets) are transported separately from the Operation requestshesponses and data, because of their asynchronous nature. Using a separate L2CAP logical channel for event mechanism (therefore using an out of band communication channel), the implementation of the protocol will be very much simplified. The BPTP should perform segmentation and reassembly of the application data (PTP data structures) whenever this data exceeds the MTU negotiated by the L2CAP layers. 5.1.1 Command/Data Transport Channel The BPTP Command/Data transport channel is based on a L2CAP logical data channel. This channel is dedicated to the data transfer during a PTP transaction: operation request phase, data phase (either data in or data out phase) and response phase. 
Since the size of the data transferred in the data phase could exceed the minimum MTU size of L2CAP (48 bytes), it is the only type of data that may require segmentation and reassembly. Moreover, in order to ensure error-free operation, this data is subject to a flow control mechanism as well. This channel is opened and configured by the Initiator PTP device and is identified by the local L2CAP logical channel identifier (LCID).

5.1.2 Event Transport Channel
The BPTP Event transport channel is based on an L2CAP logical data channel. This channel is dedicated to PTP Event transport. No segmentation or reassembly is needed for this channel, since all the events have a fixed size not exceeding 22 bytes (less than the minimum L2CAP MTU, which is 48 bytes). No flow control mechanism is required for this channel. This channel is opened and configured by the Initiator PTP device and is identified by the L2CAP local logical channel identifier (LCID).

5.2 Transport Channel Management
The PTP layer (on top of BPTP) should be abstracted from the transport mechanism used (i.e. the existence of the transport-specific Command/Data and Event channels). Therefore, the BPTP layer will rely on the existence of two established L2CAP channels between the devices prior to transporting any PTP structures.

5.2.1 Transport Channel Establishment
In the communication between Initiator and Responder, it is the responsibility of the Initiator to begin the device connection and establish the BPTP transport channels to the Responder.

Fig 6: Suggested Behavior for Bluetooth PTP Initiator

Our approach to this is presented in Fig 6, where we introduce a new software component, the Bluetooth PTP manager, which can be an extension of the standard Bluetooth stack manager. This component performs the following tasks:
1. Device inquiry: a network search for any local Bluetooth devices.
2. Device service discovery on each physical device discovered by the inquiry operation.
3. Once a remote device is identified as a PTP imaging device, the Bluetooth PTP manager tries to create the L2CAP transport channels with the remote device.
4. If the Bluetooth PTP manager successfully creates the Command/Data channel and the Event channel, it passes the associated LCIDs to the BPTP object dealing with the PTP, which creates a new imaging device in the local system.
5. An optional notification to a registered PTP-aware imaging application can also be performed.

Because the LCID numbers are allocated dynamically by the L2CAP layer, the logical channels have to be opened in sequence, to distinguish between the Command/Data and Event transport channels. The Event transport channel shall be established before the Command/Data channel establishment is requested. It is important that the established sequence of channels to be used for the BPTP transport is known by both the Initiator and the Responder. Thus, if an attempt to initiate the Command/Data transport channel fails, the established Event transport channel should be closed. The Initiator device may re-try to establish the BPTP transport channels later (under some time-constraint rules).

5.2.2 Configuration of Transport Channels
The L2CAP logical data channels used by BPTP should have their parameters configured as follows:
- The Flush Timeout parameter should be set to the 0xFFFF (infinite) value, unless the BPTP implementation supports transaction packet re-transmission so that channel reliability can be achieved.
- The MTU parameter is implementation- and device-specific and can be set to different values for each BPTP transport channel. However, the minimum value should be at least 48 bytes.
- The QoS parameter is implementation-specific, and Quality of Service support in BPTP is optional.
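The channel-establishment ordering of 5.2.1 and the configuration defaults of 5.2.2 can be sketched together. The open/close callables stand in for a real L2CAP API; only the ordering rule (open the Event channel first, and close it again if the Command/Data channel cannot be opened) and the parameter values come from the text.

```python
# Sketch of BPTP transport-channel establishment; the callables are
# hypothetical stand-ins for an L2CAP service interface.

DEFAULT_CHANNEL_CONFIG = {
    "flush_timeout": 0xFFFF,  # "infinite", unless retransmission is supported
    "mtu": 48,                # minimum value permitted by L2CAP
    "qos": None,              # QoS support in BPTP is optional
}

def establish_bptp_channels(open_channel, close_channel):
    # The Event channel must be established first, so that both devices
    # agree on which LCID carries events and which carries commands/data.
    event_lcid = open_channel("event", DEFAULT_CHANNEL_CONFIG)
    try:
        data_lcid = open_channel("command_data", DEFAULT_CHANNEL_CONFIG)
    except ConnectionError:
        close_channel(event_lcid)  # per 5.2.1: tear the Event channel down
        raise
    return event_lcid, data_lcid
```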
5.2.3 Shutdown of Transport Channels
It is the Initiator's responsibility (BPTP layer) to close all the allocated communication channels whenever the remote device is not in Bluetooth range. The L2CAP channels coexist for as long as the Responder is present in the Bluetooth neighbourhood. A communication failure with a Responder, resulting in a session communication failure, triggers the shutdown of the communication channels.

5.3 BPTP Packet Format
Besides the typical PTP data structures, a new type of packet is defined: the flow control packet. It is used as a mechanism to synchronise the data transfer between the BPTP layers, in order to avoid blocking the L2CAP layer with BPTP packets when one of the layers is not receiving/transmitting properly.

Table 6: BPTP Packet Format

Field        Datatype  Description
Length       UINT32    The size of the payload contained in the packet
Packet Type  UINT8     0x00 - Invalid Value; 0x01 - Operation Request Packet;
                       0x02 - Operation Response Packet; 0x03 - Data Packet;
                       0x04 - Flow Control Packet; 0x05 to 0xFF - Reserved
Session ID   UINT32    0x00000000 - valid only for OpenSession and GetDeviceInfo
                       transactions; 0x00000001 to 0xFFFFFFFE - valid Session IDs;
                       0xFFFFFFFF - reserved
Payload      variable  Dependent on the Packet Type

6 TCP/IP Transport
This section describes existing issues of implementing PTP over TCP/IP and possible solutions. The ideal layer for mapping PTP over the TCP/IP protocol stack would be TCP, because stream sockets provide a reliable way of communication between two networking devices (i.e. the Initiator and Responder). Moreover, the possibility of opening multiple concurrent sockets to the same device would provide the perfect support for the Command/Data/Response and Event PTP communication primitives.

[Fig 7: PTP over HTTP — operations, data and responses are mapped over HTTP from Initiator to Responder; Events from Responder to Initiator are polled]

However, in a real-world situation, most TCP/IP ports are closed by different forms of security measures, e.g. firewalls, or by constraints caused by the reduced number of routable IPs available (proxies).
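A minimal encoding of the packet layout in Table 6 can be sketched as follows. The concrete field widths (4-byte length, 1-byte packet type, 4-byte session ID) and the little-endian byte order are assumptions for this sketch, since the garbled original does not fix them unambiguously.

```python
import struct

# Illustrative encoding of the BPTP packet header from Table 6.
# Field widths and byte order are assumptions, not values fixed by the paper.

PACKET_TYPES = {
    0x01: "Operation Request",
    0x02: "Operation Response",
    0x03: "Data",
    0x04: "Flow Control",
}

HEADER = "<IBI"  # length (UINT32), packet type (UINT8), session ID (UINT32)
HEADER_SIZE = struct.calcsize(HEADER)  # 9 bytes with no padding

def pack_bptp(packet_type, session_id, payload):
    if packet_type not in PACKET_TYPES:
        raise ValueError("invalid or reserved packet type")
    if not (0x00000000 <= session_id <= 0xFFFFFFFE):
        raise ValueError("0xFFFFFFFF is reserved")
    return struct.pack(HEADER, len(payload), packet_type, session_id) + payload

def unpack_bptp(data):
    length, ptype, session_id = struct.unpack_from(HEADER, data)
    payload = data[HEADER_SIZE:HEADER_SIZE + length]
    return ptype, session_id, payload
```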
Therefore, a pure TCP/IP mapping for PTP will work only in network setups where both the Initiator and Responder have routable IPs and are not separated by any firewall. Usually, a firewall allows only traffic for known communication protocols (i.e. known ports are left open), the most common being FTP, HTTP and SSH. Fig 7 briefly presents the idea of mapping PTP over the HTTP protocol. Using HTTP is very attractive from two points of view: the traffic will go through any firewall, and the Initiator does not have to have a routable IP address. The only requirement is that the Responder should have a routable IP address and implement mini-HTTP server functionality (support for the basic POST and GET methods from the HTTP 1.1 protocol specification).

[Fig 8: PTP Transaction over HTTP — POST (Request), GET (Data-in), GET (Response), each acknowledged with HTTP OK]

The PTP operation request dataset structure is transported from the Initiator to the Responder using a standard HTTP POST method. The HTTP OK response (200) triggers the Initiator to perform the remaining phases to complete the transaction. Fig 8 describes the proposed PTP transaction mapping over HTTP. Events from the Initiator to the Responder are carried by an HTTP POST method, while Events from the Responder to the Initiator are carried by the HTTP GET method, which is called by the Initiator in a polling fashion (i.e. from an Event thread). The Responder needs to export transaction URIs that the Initiator uses to differentiate between synchronous transactions and asynchronous events. A future paper will describe in detail the TCP PTP transport protocol (see Fig 9) and the way this protocol encapsulates the specific PTP structures and data flow.
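The Fig 8 transaction mapping can be sketched as a pure function that lists the HTTP requests the Initiator would issue. The URI paths and the `(method, uri)` representation are illustrative; the paper only fixes which HTTP methods carry which PTP phase.

```python
# A minimal sketch of the PTP-over-HTTP transaction mapping of Fig 8.
# URI paths are hypothetical examples of the "transaction URIs" the
# Responder would export.

def http_calls_for_transaction(has_data_in, base="/ptp/transactions"):
    """Return the ordered HTTP requests the Initiator issues for one
    PTP transaction mapped over HTTP."""
    calls = [("POST", base + "/request")]        # operation request dataset
    if has_data_in:
        calls.append(("GET", base + "/data"))    # optional data-in phase
    calls.append(("GET", base + "/response"))    # operation response dataset
    return calls

def http_call_for_event_poll(base="/ptp/events"):
    # Responder-to-Initiator events are polled with GET from an event thread.
    return ("GET", base)
```

Keeping events on a separate URI is what lets the Initiator distinguish the synchronous transaction phases from asynchronous event polling.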
Despite the fact that the TCP/IP transport for PTP opens a number of attractive possibilities and new usage scenarios for an imaging device, the PnP features of such an approach are lost. The need for a TCP/IP Imaging Manager emerges, a few of its roles being: remote device URI (or IP) specification, device discovery, authentication, creation of the TCP PTP layer, and presentation of the remote device in the local system as an imaging device. Optionally, the TCP/IP Imaging Manager can notify an imaging application about the presence of an imaging device in the local system.

[Fig 9: PTP Layer Protocol in the TCP/IP Protocol Stack]

7 Conclusions
PTP seems set to become the imaging transfer protocol of choice for many digital camera manufacturers. As we have tried to convey in this paper, PTP is an extremely flexible imaging protocol, designed to interconnect imaging devices. Despite being a powerful and complete protocol, it still lacks certain features for new emerging devices (such as digital cameras with video and audio streaming capabilities). This section identifies potential problems of PTP and proposes quick fixes for them.

7.1 High Capacity Storage Problems
Today the storage capacity of digital media is increasing at vertiginous rates. Digital media of 1 GB capacity is a reality; that is storage that can easily accommodate thousands of pictures. One of the few limitations of PTP, as it is specified today, is that it does not allow partial browsing of the object handles from the same category (i.e. object handles for JPG file types). This can result in long initialization times, or even long update times after storage changes, in poorly implemented Initiators. A quick fix to this problem could be window-based browsing of object handles from the same category (i.e. request the first, say, 10 object handles, then the next 10, and so on).
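The window-based browsing quick fix can be sketched as a simple paging loop. The paged request `get_handles(offset, count)` is the hypothetical extension being proposed; the standard PTP GetObjectHandles operation has no such paging parameters.

```python
# Sketch of the proposed window-based object-handle browsing.
# `get_handles(offset, count)` stands in for the hypothetical paged
# operation; it returns at most `count` handles starting at `offset`.

def browse_handles_windowed(get_handles, window=10):
    """Fetch object handles a window at a time until exhausted."""
    handles = []
    offset = 0
    while True:
        page = get_handles(offset, window)
        handles.extend(page)
        if len(page) < window:   # short page -> no more handles
            return handles
        offset += window
```

The Initiator can start rendering the first window immediately instead of waiting for the full handle list, which is the point of the proposed fix.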
7.2 Multiple Session Problems
As it stands, the PTP standard says that it is possible for a Responder to support multiple sessions at the same time, though it is not required to support more than one. On the other hand, one of PTP's goals is to be very simple to implement, even in an embedded environment with limited resources. The multi-session approach can require more processing power and resources, such as a multitasking environment. So the problem is that even if the Responder supports multi-session (and in some cases this can be useful), the generic PTP Initiator cannot assume it when multiple applications access the camera resources at the same time. This problem is solved very well by serializing transactions through a PTP manager at the Initiator. In order for the serialization not to generate long response-time delays, the PTP manager should use the GetPartialObject operation to download pictures from the Responder. As a suggested extension to the standard, we would recommend the implementation of SendPartialObject as a standard operation as well.

As it is now, in a multi-session implementation of PTP, the Initiator tries to open a session with the Responder by specifying the SessionID in the operation request. The Responder answers OK if the specified SessionID is free. Otherwise, the Responder answers Session_Already_Open, or Device_Busy if no session is available. The Initiator has to make a number of OpenSession requests until it finds a free session. This can be time consuming when the remote device is not directly attached to the Initiator and a transport other than USB is used. The alternative would be for the Responder to return in the response, as one of the response's parameters, an alternative SessionID (the first available), or Device_Busy if no session is available.
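The proposed OpenSession alternative can be sketched as follows. Instead of forcing the Initiator to probe SessionIDs one by one, the Responder suggests the first free SessionID in its response. The response-code names follow the discussion above; the class shape and return convention are illustrative.

```python
# Sketch of the proposed OpenSession extension: on a SessionID collision,
# the Responder reports the first available SessionID as a response
# parameter, saving the Initiator repeated probing round trips.

OK = "OK"
SESSION_ALREADY_OPEN = "Session_Already_Open"
DEVICE_BUSY = "Device_Busy"

class Responder:
    def __init__(self, max_sessions):
        self.max_sessions = max_sessions
        self.open_ids = set()

    def open_session(self, session_id):
        """Return (response_code, suggested_session_id)."""
        if session_id not in self.open_ids:
            self.open_ids.add(session_id)
            return OK, session_id
        if len(self.open_ids) >= self.max_sessions:
            return DEVICE_BUSY, None
        # Proposed extension: report the first available SessionID.
        free = next(i for i in range(1, 0xFFFFFFFF) if i not in self.open_ids)
        return SESSION_ALREADY_OPEN, free
```

With this scheme a second Initiator needs at most two round trips over a slow (non-USB) transport instead of an open-ended probe sequence.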
7.3 Incorrect Utilization of Error Codes
This is more of an implementation problem than a problem of the protocol itself. Many implementers of PTP incorrectly interpret the meanings of the different error codes returned by imaging devices. For example, a GetObject operation on an association object returns Invalid_ObjectHandle instead of Invalid_Parameter (since the association object has no associated data other than its ObjectInfo). A solution to these types of problems would be a revision of the protocol specification in which such cases are described in more detail.

7.4 Normal Responder Operation: Invalid States
A good example of this kind of problem is where an Initiator opens a session with a given Responder, does some data exchange, and exits without closing the session. In this case, a CloseSession issued after a Session_Already_Open error will fail, since the Responder expects the TransactionID in sequence, but the new Initiator will not have the expected TransactionID. A quick fix for this situation would be to specify that the Responder will accept CloseSession with a TransactionID of 0x00000000.

7.5 Handling of Special Value Parameters
A very common case in existing implementations is where the Responder reports that it implements a certain parameter, but does not handle the special cases for that parameter properly. A very good example is the GetObjectHandles operation, which accepts the object format code as a second, optional parameter (specifying the type of object for which the operation should return handles). The Responder should either return the Specification_By_Format_Unsupported error code or implement the full functionality. This parameter has a specified special value of 0xFFFFFFFF which, if issued, means the Responder should return object handles for all image objects. In reality, most Responders will return all the object handles, ignoring the parameter value.
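Correct Responder-side handling of the format-code parameter might look like the following sketch: filtering is either implemented fully (including the 0xFFFFFFFF special value, treated here simply as "no filtering") or refused outright. The object representation and the `filtering_supported` flag are illustrative.

```python
# Sketch of honest handling of the optional ObjectFormatCode parameter
# to GetObjectHandles: full support or an explicit refusal, never a
# silently ignored parameter. 0x3801 (EXIF/JPEG) and 0x3001 (Association)
# are standard PTP object format codes used for illustration.

ALL_FORMATS = 0xFFFFFFFF
SPEC_BY_FORMAT_UNSUPPORTED = "Specification_By_Format_Unsupported"

def get_object_handles(objects, format_code=None, filtering_supported=True):
    """objects: mapping of object handle -> object format code."""
    if format_code is None or format_code == ALL_FORMATS:
        return "OK", sorted(objects)
    if not filtering_supported:
        # Honest refusal beats silently ignoring the parameter.
        return SPEC_BY_FORMAT_UNSUPPORTED, []
    return "OK", sorted(h for h, f in objects.items() if f == format_code)
```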
A fix for this problem would be to make partial implementation of a given parameter's specification impossible. In this case, the object format code parameter would be either fully supported or unsupported.

7.6 Future Work
As future work, the authors intend to implement the Picture Transfer Protocol over Bluetooth transport and verify that the protocol is fully suited to wireless types of transport.

8 References
PIMA 15740, Photographic and Imaging Manufacturers Association Inc., 2000.
USB Still Image Capture Device Definition, USB Device Working Group, 2000.
Bluetooth Specification Version 1.1, Bluetooth SIG, February 2001.

9 Author Biographies
Petronel Bigioi received his B.S. degree in Electronic Engineering from "Transilvania" University of Brasov, Romania, in 1997. At the same university he received an M.S. degree in Electronic Design Automation in 1998. He received an M.S. degree in electronic engineering from the National University of Ireland, Galway, in 2000. He currently lectures in embedded systems at the National University of Ireland, Galway. His research interests include VLSI design, communication network protocols and embedded systems.

Peter Corcoran received the BAI (Electronic Engineering) and BA (Mathematics) degrees from Trinity College Dublin in 1984. He continued his studies at TCD and was awarded a Ph.D. in Electronic Engineering for research work in the field of dielectric liquids. In 1986 he was appointed to a lectureship in Electronic Engineering at UCG. His research interests include microprocessor applications and environmental monitoring technologies. He is a member of the IEEE.

George Susanu received his B.S. degree in microelectronics from the Kishinev Polytechnical Institute, Kishinev, Republic of Moldova.
With eight years of experience in RTOS and embedded systems and wide experience in C/C++ programming, he is currently working as a senior R&D engineer with Accapella Ireland Ltd. His research areas include real-time operating systems and device connectivity.

Irina Mocanu received her B.S. degree in Computer Science from University Politehnica of Bucharest, Romania, in 1996. At the same university she received an M.S. degree in Real Time Operating Systems for embedded systems in 1997. She is currently a teaching assistant at University Politehnica of Bucharest, Romania, teaching data structures and algorithms, computer graphics, and formal languages and databases. Her research interests include multimedia and computer graphics.

Diabetic Retinopathy: More Patients, Less Laser
A longitudinal population-based study in Tayside, Scotland

James H. Vallance, MBChB, BSc, MRCOphth1; Peter J. Wilson, MBChB, BSc, MRCSEd1; Graham P. Leese, MD, FRCP2,3; Ritchie McAlpine, BSc4; Caroline J. MacEwen, MD, FRCS, FFSEM, FRCOphth1; John D. Ellis, MPH, PhD, FRCOphth1

OBJECTIVE — We aim to correlate the incidence of diabetic retinopathy and maculopathy requiring laser treatment with the control of risk factors in the diabetic population of Tayside, Scotland, for the years 2001–2006.

RESEARCH DESIGN AND METHODS — Retinal laser treatment, retinal screening, and diabetes care databases were linked for calendar years 2001–2006. Primary end points were the numbers of patients undergoing first or any laser treatment for diabetic retinopathy or maculopathy. Mean A1C and blood pressure and retinal screening rates were followed over the study period.

RESULTS — Over 6 years, the number of patients with diabetes in Tayside increased from 9,694 to 15,207 (57% increase). The number of patients receiving laser treatment decreased from 222 to 138 and first laser treatments decreased from 100 (1.03% of diabetic population) to 56 (0.37%).
The number of patients with type 2 diabetes treated for maculopathy decreased from 180 in 2001 to 103 in 2006 (43% reduction, P = 0.03). Mean A1C decreased for the type 1 and type 2 diabetic populations (P < 0.01) and a reduction in blood pressure was observed in type 2 diabetic patients (P < 0.01). The number of patients attending annual digital photographic retinopathy screening increased from 3,012 to 11,932.

CONCLUSIONS — Laser treatment for diabetic maculopathy in type 2 diabetic patients has decreased in Tayside over a six-year period, despite an increased prevalence of diabetes and increased screening effort. We propose that earlier identification of type 2 diabetes and improved risk factor control has reduced the incidence of maculopathy severe enough to require laser treatment. Diabetes Care 31:1126–1131, 2008

A number of recent studies have reported a lower incidence and prevalence of severe diabetic retinopathy and maculopathy (1–5). Reduction in blindness in patients with diabetes has also been reported, but this observation is not universal (6–8). The use of blindness as an end point for studies of diabetic eye disease is often rendered imprecise by reliance on incomplete blindness registration data and by difficulty in attributing visual loss to diabetic retinal disease (9). The majority of visual impairment in patients with diabetes is not due to diabetic retinopathy (10), and accordingly the incidence of retinopathy requiring therapeutic intervention (laser) is a more accurate reflection of incident diabetic retinal disease, provided population and treatment records are complete.

National Health Service (NHS) Tayside serves a predominantly Caucasian rural and urban population, which increased from 338,750 in 2001 to 391,639 in 2006 (11). A retinal screening program has been in place since 1990, using digital photography since 2000 (12).
In 2003, Scotland introduced a national screening program (13) using annual single-field digital photography with staged mydriasis, a standardized grading system (14), trained screeners, and rigorous quality assurance (15). Tayside also benefits from an established national diabetes database (16–18). Laser treatments take place at a single site within the region and are recorded on a single database using the same unique patient identifier, allowing easy case linkage studies. Using these data sources, we describe trends in laser utilization, retinal screening, and the control of retinopathy risk factors in Tayside for the years 2001–2006.

RESEARCH DESIGN AND METHODS — We performed a historical cohort study of retinal laser in Tayside, Scotland. The data sources used in this study were databases of regional laser treatment, retinal screening ("Eyestore"), and the national diabetes register (Scottish Care Information–Diabetes Collaboration [SCI-DC]) for the complete calendar years 2001–2006.

Retinal laser within Tayside is recorded on a custom-designed database, including treatment given and date. The primary end points for this study obtained from this dataset were first laser treatments for diabetic retinopathy or maculopathy and the number of patients receiving any laser for diabetic retinopathy or maculopathy per annum.

The SCI-DC database uses hierarchical multiple data source capture to create a real-time national diabetes register. Independent data sources (e.g., community prescribing, regional biochemistry database) are integrated using custom-designed software (16). The health board regions are clearly demarcated and can therefore be accurately constrained to the Tayside population (16,18).
Population risk factors for laser extracted from SCI-DC were A1C, duration of diabetes, and blood pressure. BMI, cholesterol, and method of diabetic treatment were also extracted. Eyestore contains all information from digital retinal screening performed in Tayside, including date of screening and grading outcome (12). Data drawn from Eyestore were the total number of screening events and the number of events at which referable retinopathy or maculopathy were identified for each year. Referable retinopathy and maculopathy were as defined in the national screening framework (14).

From the 1Department of Ophthalmology, Ninewells Hospital and Medical School, Dundee, U.K.; the 2University Department of Medicine, Ninewells Hospital and Medical School, Dundee, U.K.; the 3Diabetes Centre, Ninewells Hospital and Medical School, Dundee, U.K.; and the 4Medicines Monitoring Unit, Ninewells Hospital and Medical School, Dundee, U.K. Corresponding author: Dr. John Ellis, MPH, PhD, FRCOphth, Department of Ophthalmology, Ninewells, Dundee, U.K. DD1 9SY. E-mail: john.ellis@nhs.net. Received for publication 1 August 2007 and accepted in revised form 3 March 2008. Published ahead of print at http://care.diabetesjournals.org on 17 March 2008. DOI: 10.2337/dc07-1498. Abbreviations: NHS, National Health Service; SCI-DC, Scottish Care Information–Diabetes Collaboration. © 2008 by the American Diabetes Association. The costs of publication of this article were defrayed in part by the payment of page charges. This article must therefore be hereby marked "advertisement" in accordance with 18 U.S.C. Section 1734 solely to indicate this fact.

Epidemiology/Health Services Research — Original Article
1126 DIABETES CARE, VOLUME 31, NUMBER 6, JUNE 2008
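The linkage of laser and screening records on the shared unique patient identifier, and the study's attribution of laser to screening when it occurs within 6 months of a referable screen, can be sketched as follows. Record shapes and field names are illustrative; the 6-month window is taken here as 183 days.

```python
# Hedged sketch of the record linkage: laser events are attributed to
# screening when they fall no more than ~6 months (183 days assumed)
# after a screen that identified referable disease in the same patient.

from datetime import date, timedelta

WINDOW = timedelta(days=183)  # "no more than 6 months before laser"

def screen_detected_lasers(laser_events, screening_events):
    """Count laser events occurring within 6 months after a referable screen."""
    referable = {}
    for s in screening_events:
        if s["referable"]:
            referable.setdefault(s["patient_id"], []).append(s["date"])
    count = 0
    for ev in laser_events:
        dates = referable.get(ev["patient_id"], [])
        if any(timedelta(0) <= ev["date"] - d <= WINDOW for d in dates):
            count += 1
    return count
```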
For the purposes of this study, a screening event resulting in treatment was defined as one that occurred no more than 6 months before laser. This definition was used to state with reasonable confidence that screening had identified treatable pathology, not merely referable pathology. This assumption could not be made for laser occurring more than 6 months after screening, since this could well encompass new pathology arising during ophthalmic clinic follow-up.

The three databases were checked for internal validity (Modulus 11 algorithm, identification and exclusion of nonincident laser events), and external cross-references between the databases were made. Where discrepancies were identified, arbitration was sought from biochemical and clinic attendance records. In addition to the data described above, unique patient identifiers were obtained and matched between the relevant databases, before anonymization of the data by a third party.

The opinion of the local medical research ethics committee was sought. They indicated that Caldicott Guardian approval alone was required. This was obtained, and the principles of data protection were adhered to throughout this study.

Statistical analysis
The administration of SCI-DC changed after the first 2 years of the study period. As a result, with the exception of disease duration, only means of variables were available for 2001 and 2002. Nevertheless, the large sample sizes meant that this was an acceptable representation of the group. To demonstrate trends in these variables, weighted linear regression was performed, using N/SD² to calculate the weights (SPSS, Chicago, IL). Since accurate measures of N and SD were not available for 2001 and 2002, the weights were estimated allowing for low patient numbers and high SDs. The robustness of this technique was tested and validated through comparison with the full dataset for duration of disease.
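The weighting scheme described above can be illustrated with a minimal weighted least-squares fit, with each year's mean weighted by N/SD². The study used SPSS; this pure-Python version only sketches the arithmetic.

```python
# Minimal sketch of weighted linear regression with weights w = N / SD**2,
# as used to test for trends in yearly population means.

def weighted_linear_fit(x, y, n, sd):
    """Return (slope, intercept) of a weighted least-squares line."""
    w = [ni / si**2 for ni, si in zip(n, sd)]
    sw = sum(w)
    xbar = sum(wi * xi for wi, xi in zip(w, x)) / sw
    ybar = sum(wi * yi for wi, yi in zip(w, y)) / sw
    sxy = sum(wi * (xi - xbar) * (yi - ybar) for wi, xi, yi in zip(w, x, y))
    sxx = sum(wi * (xi - xbar) ** 2 for wi, xi in zip(w, x))
    slope = sxy / sxx
    return slope, ybar - slope * xbar
```

Years with larger samples and smaller spread pull the fitted trend line more strongly, which is why the uncertain 2001–2002 weights had to be estimated conservatively.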
Statistical analysis was performed under the supervision of the statistician for NHS Tayside.

RESULTS — From 2001 to 2006 the number of registered diabetic patients increased from 9,694 to 15,207 (57% increase). The number of first laser treatments per annum fell from 100 to 56 (44% decrease), and the total number of patients receiving laser fell from 222 to 138 (38% decrease). The number of patients undergoing digital retinal photographic screening annually rose from 3,012 in 2001 to 12,035 in 2005. A total of 55,103 retinal screening events were performed (47,864 patients), 1,884 (3.4%) of which identified referable retinopathy. However, of patients referred to ophthalmology, only 184 (9.8%) proceeded to laser intervention within the following 6 months (Fig. 1, Table 1).

Figure 1 — The relationship between the known diabetic population of Tayside, digital retinal photographic retinopathy screening, and progression to laser treatment for the years 2001–2006. On the primary axis: the total number of patients with diabetes, and the number of patients undergoing digital retinal photographic retinopathy screening in that year. On the secondary axis: patients graded as having referable retinopathy at screening as defined by national guidelines, all patients undergoing any form of laser treatment in that year, and the number of patients undergoing laser within 6 months of a screening event detecting referable disease.

Table 1 — Number of patients with diabetes, number of patients undergoing digital retinal photographic screening, and number of patients undergoing first or any laser treatment in the years 2001–2006

                                                          2001    2002    2003    2004    2005    2006
Patients with diabetes†                                  9,694  11,216  11,932  13,582  14,811  15,207
Prevalence of diabetes in Tayside (%)                      2.5     2.9     3.1     3.5     3.8     3.9
Patients undergoing digital retinal photographic
  screening†                                             3,012   3,238   6,216  10,294  12,035  11,932
Patients with referable retinopathy from photography       189     149     262     425     302     343
Patients screened with referable retinopathy (%)†          6.3     4.6     4.2     4.1     2.5     2.9
All patients receiving laser for diabetes                  222     201     202     252     199     138
% of all patients with diabetes receiving laser†           2.3     1.8     1.7     1.9     1.3     0.9
Patients receiving first laser for diabetes                100      73      87     105      82      56
% of all patients with diabetes receiving first laser*     1.0     0.7     0.7     0.8     0.6     0.4
Laser within 6 months after screening                       22       9      11      52      55      35
Laser within 6 months of screening as a percentage
  of patients screened (%)                                 0.7     0.3     0.2     0.5     0.5     0.3
Data are n or %. *P < 0.01, †P < 0.05 over the study period.

Table 2 — Number of patients with type 2 diabetes, number of type 2 diabetic patients undergoing digital retinal photographic screening, and number of patients undergoing first or any laser treatments in the years 2001–2006, correlated with the type 2 diabetic population mean risk factors and hypoglycemic treatment

                                                          2001    2002    2003    2004    2005    2006
Type 2 diabetic patients*                                8,593   9,935  10,594  12,112  13,352  13,660
Patients undergoing digital retinal photographic
  screening*                                             4,979   6,339   6,706   8,933  10,676  10,619
Patients with referable retinopathy from photography†      295     342     409     495     476     441
Referable retinopathy as % of all patients screened†       5.9     5.4     6.1     5.5     4.5     4.2
All laser: macular†                                        180     168     163     174     147     103
All laser: macular as % of all patients*                  2.11    1.69    1.54    1.44    1.1     0.75
All laser: panretinal                                       58      79      95      86      70      51
All laser: panretinal as % of all patients                0.67    0.8     0.9     0.71    0.52    0.37
First laser: macular                                        77      45      49      69      65      38
First laser: macular as % of all patients                 0.9     0.45    0.46    0.57    0.49    0.28
First laser: panretinal                                      6      13      13      16       9      15
First laser: panretinal as % of all patients              0.07    0.13    0.12    0.13    0.07    0.11
Mean A1C (%)*                                             7.9     7.6     7.4     7.4     7.5     7.4
Mean systolic blood pressure (mmHg)*                       142     141     141     141     138     137
Mean diastolic blood pressure (mmHg)*                       79      78      77      76      75      75
Mean age (years)                                          66.5    66.8    66.9    66.3    66.4    66.6
Mean duration of diabetes (years)*                         7.7     7.5     7.5     7.3     7.3     7.4
Mean BMI (kg/m²)*                                         30.0    30.1    30.3    30.5    30.7    30.9
Mean total cholesterol (mmol/l)*                           5.0     4.9     4.8     4.6     4.4     4.3
Treatment: insulin only (%)*                              16.0    15.0    15.0    13.6    12.3    11.8
Treatment: insulin and oral hypoglycemics (%)†             0.7     1.1     0.9     1.5     3.3     4.4
Treatment: oral hypoglycemics (%)                         54      55.8    53.7    53.8    50.6    52.0
Treatment: diet only (%)†                                 26      26.1    27.8    28.3    30.7    29
Treatment: not known (%)                                   3.3     2.0     3.3     2.8     2.9     2.8
Data are n, %, or mean. *P < 0.01, †P < 0.05 over the study period.
Between 2001 and 2006, the number of patients with type 2 diabetes rose from 8,936 to 13,660 (53% increase, Table 2). The most frequently performed treatment was macular laser in type 2 diabetic patients. A total of 180 type 2 diabetic patients (2.1% of the type 2 diabetic population) received macular treatment in 2001 and 103 (0.75% of the type 2 diabetic population) in 2006, a 43% decrease (P = 0.03). Type 2 panretinal treatments peaked in 2004 with 95 patients receiving treatment, falling back to 51 patients in 2006 (Fig. 2A), with no statistically significant trend over the 6-year period as a whole.

The type 1 diabetic population grew from 1,158 to 1,547 (34% increase, Table 3) over the same period. Macular treatments in type 1 diabetic patients similarly peaked in 2004 at 42 patients, falling to 12 in 2006 (Fig. 2B). The number of type 1 diabetic patients undergoing panretinal treatment fell from 44 in 2001 to 29 in 2006. This was a significant reduction when viewed as a percentage of the type 1 diabetic population (P < 0.01, Table 3).

In type 1 diabetic patients, an increase in systolic blood pressure was observed during the study period (P < 0.01), whereas for type 2 diabetic patients, mean systolic blood pressure fell by 5 mmHg (P < 0.01) and diastolic blood pressure fell by 4 mmHg (P < 0.01). Mean A1C fell from 9.1% to 8.8% in type 1 diabetic patients (P < 0.01) and from 7.9% to 7.4% in type 2 diabetic patients (P < 0.01). Mean duration of diabetes decreased from 7.7 to 7.4 years in type 2 diabetic patients (P < 0.01). Mean BMI rose for both the type 1 and type 2 diabetic populations, and mean cholesterol decreased in both groups (Tables 2 and 3).

The percentage of type 2 diabetic patients whose only treatment was dietary advice increased from 26% in 2001 to 29% in 2006 (P < 0.01). There was no significant change in the proportion of type 2 diabetic patients using insulin (16.7% in 2001, 16.2% in 2006, P = 0.45).
CONCLUSIONS — In the Tayside population, the absolute number of patients with type 2 diabetes requiring laser treatment for maculopathy fell by 43% between 2001 and 2006. When taken as a proportion of all patients with type 2 diabetes, this represented a threefold decrease in those requiring treatment. The number of type 2 diabetic patients requiring panretinal photocoagulation and the number of type 1 diabetic patients requiring either macular or panretinal laser decreased, but not enough to achieve statistical significance. Over the same period, the prevalence of diabetes and diabetic retinopathy screening effort have both increased. Why was there no concomitant increase in individuals with retinopathy or maculopathy severe enough to require laser treatment?

Figure 2 — Trends in laser treatment during 2001–2006 for patients with type 2 diabetes (A) and patients with type 1 diabetes (B). Series shown: all patients treated with macular laser; all patients receiving first macular laser treatment; all patients treated with panretinal laser; all patients receiving first panretinal laser.

One potential explanation would be a change in the criteria for laser treatment. During the period of the study, there were no changes in national or local guidelines for the use of laser in diabetic eye disease and no local changes in personnel or practice (19,20). No patients received intravitreal treatment over this period, and indications for surgical practice were unaltered. We failed to identify any patients with disease severity (e.g., persistent vitreous hemorrhage, tractional retinal detachment) sufficient to require immediate surgery without first attempting argon laser treatment. However, it is difficult to exclude an unannounced change in practice in the application of macular photocoagulation in patients with "good" visual acuity. Populations in which screening has been established report a lower incidence and
prevalence of diabetic visual loss (3,21), but it is difficult to separate the beneficial effect of screening from the effect of better general diabetic disease management, since the two factors frequently coexist. After 2003, digital retinal photography became almost the sole means of screening in Tayside for patients with diabetes. There was a small prevalence-screen effect, with laser activity peaking in 2004 before falling over the final 2 years of the study. This effect was particularly marked for type 1 maculopathy, and it is possible that the decreases in 2005 and 2006 could be a result of earlier identification of pathology under the new annual screening system. In contrast, in the type 2 diabetic population, only a small peak in incident maculopathy treatment was seen in 2004, and comprehensive digital retinal photography screening had little impact on the overall trend of decreasing laser treatment. This may be due to relatively adequate screening in Tayside before comprehensive digital photography. In areas where historically there have been fewer resources, the impact of the national screening program has been greater, with a more sizeable initial surge of patients with previously unrecognized sight-threatening retinopathy requiring laser treatment (22,23).

A reduction in the mean disease duration and the proportion of type 2 diabetic patients treated with insulin and/or oral hypoglycemic agents suggests that patients are being diagnosed with diabetes earlier, reducing the period of subclinical dysglycemia. This will increase the prevalence of individuals with clinical type 2 diabetes and might be predicted to translate into a drop in the proportion of patients requiring laser treatment.
However, we observed a reduction in the absolute numbers of type 2 diabetic patients requiring treatment for maculopathy, and not simply a drop in the proportion, indicating this is an inadequate sole explanation for the trends observed.

Another consequence of earlier identification of type 2 diabetic patients is the possibility that patients are receiving treatment early enough in the disease process to avoid the development of sight-threatening maculopathy. Mean diastolic blood pressure in our type 2 diabetic population decreased by 4 mmHg over 6 years to a final mean population blood pressure of 137/75 mmHg. The U.K. Prospective Diabetes Study Group (24) compared tight control of blood pressure (mean 144/82 mmHg) with less tight control (mean 154/87 mmHg) in type 2 diabetic patients and showed a 34% risk reduction for progression of retinopathy by two or more steps over 7.5 years. Furthermore, there was a 47% risk reduction for loss of three or more lines of Early Treatment of Diabetic Retinopathy Study visual acuity and a 35% reduction in individuals undergoing laser treatment over this period. Since diabetic maculopathy is the main cause of visual impairment in type 2 diabetes, this reduction in visual loss suggests tight blood pressure control reduces the risk of maculopathy. In our study, the mean diastolic blood pressure achieved for the entire population is 7 mmHg lower than in the U.K. Prospective Diabetes Study tight control group. A statistically significant fall in A1C was also observed.
The 2006 population mean of 7.4% is comparable with the tight control group of newly diagnosed type 2 diabetic patients reported in the U.K. Prospective Diabetes Study 33 (25). This group had a 29% lower risk of retinal photocoagulation over 10 years from diagnosis when compared with individuals receiving "conventional" treatment (mean A1C 7.9%).

Table 3—Number of patients with type 1 diabetes, number of type 1 diabetic patients undergoing digital retinal photographic screening, and number of patients undergoing first or any laser treatments in the years 2001–2006, correlated with type 1 diabetes population mean risk factors

                                                          2001   2002   2003   2004   2005   2006
Number of patients*                                      1,158  1,281  1,338  1,470  1,548  1,547
Patients undergoing digital retinal
  photographic screening*                                  676    817    847  1,080  1,238  1,128
Patients with referable retinopathy from photography        92     97    154    173    218    145
Patients with referable retinopathy as a
  percentage of all patients screened (%)                 13.6   11.9   18.2   16.0   17.6   12.9
Laser
  Any laser
    Macular                                                 30     22     21     42     17     12
    Macular as % of all patients                          2.59   1.72   1.57   2.86   1.1    0.78
    Panretinal                                              44     38     42     43     37     29
    Panretinal as % of all patients*                      3.80   2.97   3.14   2.93   2.39   1.87
  First laser
    Macular                                                 11      7     10     13      5      4
    Macular as % of all patients                          0.95   0.55   0.75   0.88   0.32   0.26
    Panretinal                                              11      9     15     15      9      8
    Panretinal as % of all patients                       0.95   0.55   0.75   0.88   0.32   0.26
Risk factors
  Mean A1C (%)*                                            9.1    8.9    8.8    8.9    8.9    8.8
  Mean systolic blood pressure (mmHg)*                     129    129    132    132    132    132
  Mean diastolic blood pressure (mmHg)                      76     75     75     75     75     75
  Mean age (years)*                                       35.9   36.4   36.7   36.7   37.9   38.2
  Mean duration of diabetes (years)                       17.5   17.5   17.4   17.1   17.4   17.7
  Mean BMI (kg/m2)*                                       25.4   25.6   25.7   25.0   26.6   26.6
  Mean total cholesterol (mmol/l)*                         5.1    5.0    4.9    4.9    4.6    4.6

Data are n, %, or mean. *P < 0.01 over the study period.
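As an arithmetic aside (not part of the original paper), the percentage rows of Table 3 are simple ratios of laser-treated patients to the type 1 diabetic population for each year. A minimal Python check of the any-laser rows:

```python
# Check of Table 3's "as % of all patients" rows: each entry is
# 100 * (patients receiving that laser treatment) / (type 1 diabetic population),
# rounded to two decimal places as printed in the table.
population = [1158, 1281, 1338, 1470, 1548, 1547]      # 2001-2006
macular_any = [30, 22, 21, 42, 17, 12]                 # any-laser macular counts
panretinal_any = [44, 38, 42, 43, 37, 29]              # any-laser panretinal counts

def as_percent(counts, denominators):
    """Per-year percentages, rounded to 2 d.p."""
    return [round(100 * c / n, 2) for c, n in zip(counts, denominators)]

print(as_percent(macular_any, population))     # [2.59, 1.72, 1.57, 2.86, 1.1, 0.78]
print(as_percent(panretinal_any, population))  # [3.8, 2.97, 3.14, 2.93, 2.39, 1.87]
```

The output matches the published rows up to display rounding (the table prints 3.80 where Python shows 3.8).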
Mean total cholesterol also decreased significantly over the study period, and although plasma lipids have not been conclusively proven to influence the course of diabetic retinopathy or maculopathy, the Fenofibrate Intervention and Event Lowering in Diabetes (FIELD) study demonstrated a reduction in laser treatment in those treated with fenofibrate (26). In conclusion, the incidence of maculopathy requiring laser treatment in type 2 diabetic patients in Tayside has decreased over the last 6 years despite increased prevalence of type 2 diabetes and increased screening effort. The national screening program contributed a greater number of patients receiving first laser, but did not alter the overall trend to less laser treatment for this group. We suggest that earlier diagnosis and improved management of the risk factors for diabetic maculopathy is reducing the incidence of maculopathy severe enough to require laser treatment in type 2 diabetes.

Acknowledgments — This study was funded by the Wellcome Trust, Speed Pollock Memorial Trust.

References
1. Knudsen L, Lervang H-H, Lundbye-Christiansen S, Gorst-Rasmussen A: The North Jutland County Diabetic Retinopathy Study: population and characteristics. Br J Ophthalmol 90:1404–1409, 2006
2. LeClaire T, Palta M, Zhang H, Allen C, Lien R, D'Alessio D: Lower-than-expected prevalence and severity of retinopathy in an incident cohort followed during the first 4–14 years of type 1 diabetes: The Wisconsin Diabetes Registry Study. Am J Epidemiol 164:143–150, 2006
3. Olafsdottir E, Andersson D, Stefansson E: Visual acuity in a population with regular screening for type 2 diabetes and eye disease. Acta Ophthalmol Scand 85:40–45, 2007
4. Younis N, Broadbent DM, Harding SP, Vora JP: Incidence of sight-threatening retinopathy in type 1 diabetes in a systematic screening programme. Diabet Med 20:758–765, 2003
5.
Younis N, Broadbent DM, Harding SP, Vora JP: Incidence of sight-threatening retinopathy in type 2 diabetes in the Liverpool Diabetic Eye Study: a cohort study. Lancet 361:195–200, 2003
6. Trautner C, Haastert B, Giani G, Berger M: Incidence of blindness in southern Germany between 1990 and 1998. Diabetologia 44:147–150, 2001
7. Backlund LB, Algvere PV, Rosenqvist U: New blindness in diabetes reduced by more than one-third in Stockholm County. Diabet Med 14:732–740, 1997
8. Porta M, Tomalino MG, Santoro F, Ghigo LD, Cairo M, Aimone M, Pietragalla GB, Passera P, Montanaro M, Molinatti GM: Diabetic retinopathy as a cause for blindness in the province of Turin, north-west Italy in 1967–1991. Diabet Med 12:355–361, 1995
9. Barry RJ, Murray PI: Unregistered visual impairment: is registration a failing system? Br J Ophthalmol 89:995–998, 2005
10. Rhatigan MC, Leese GP, Ellis J, Ellingford A, Morris AD, Newton RW, Roxburgh ST: Blindness in patients with diabetes who have been screened for eye disease. Eye 13:166–169, 1999
11. General Register Office for Scotland: Mid-2006 Population Estimates Scotland. July 2007. Available from http://www.gro-scotland.gov.uk
12. Leese GP, Morris AD, Swaminathan K, Petrie JR, Sinharay R, Ellingford A, Taylor A, Jung RT, Newton RW, Ellis JD: Implementation of national diabetes retinal screening programme is associated with a lower proportion of patients referred to ophthalmology. Diabet Med 22:1112–1115, 2005
13. Scottish Executive: Diabetic Retinopathy Screening Services in Scotland: Recommendations for Implementation (DRSIG). Edinburgh: Scottish Executive, 2003. Available from http://www.diabetesinscotland.org/diabetes
14. Diabetic retinopathy screening services in Scotland: recommendations for implementation. Report by the diabetic screening implementation group, June 2006 (annexe E). Available from www.scotland.gov.uk/resource/doc/46930/0013818.pdf
15. NHS Quality Improvement Scotland: Clinical Standards in Diabetic Retinopathy Screening.
March 2004. Available at http://www.nhshealthequality.org
16. Morris AD, Boyle DI, MacAlpine R, Emslie-Smith A, Jung RT, Newton RW, MacDonald TM: The Diabetes Research and Audit in Tayside Scotland (DARTS) study: electronic record linkage to create a district diabetes register. BMJ 315:524–528, 1997
17. Scottish Diabetes Core Dataset, NHS Scotland, 2003. Available at www.scotland.gov.uk/Resource/Doc/47021/0013911.pdf
18. Morris AD, Boyle DIR, McMahon AD, Greene SA, MacDonald TM, Newton RW: Adherence to insulin treatment, glycaemic control, and ketoacidosis in insulin-dependent diabetes mellitus. Lancet 350:1505–1510, 1997
19. Guidelines for Diabetic Retinopathy 1997. Royal College of Ophthalmologists, London, 1997
20. Guidelines for Diabetic Retinopathy 2005. Royal College of Ophthalmologists, London, 2005. Available at www.rcophth.ac.uk/docs/scientific/publication
21. Stefansson E, Bek T, Porta M, Larsen N, Kristinsson JK, Agardh E: Screening and prevention of diabetic blindness. Acta Ophthalmol Scand 78:374–385, 2000
22. Scanlon PH, Carter S, Foy C, Ratiram D, Harney B: An evaluation of the change in activity and workload arising from diabetic ophthalmology referrals following the introduction of a community based digital retinal photographic screening programme. Br J Ophthalmol 89:971–975, 2005
23. Leese G, Morris AD, Petrie J, Jung RT, Ellis JD: Diabetic retinopathy screening programmes and reducing ophthalmologists' workload: authors' reply. Diabet Med 23:449–450, 2006
24. Tight blood pressure control and risk of macrovascular and microvascular complications in type 2 diabetes: UKPDS 38. UKPDS Group. BMJ 317:703–713, 1998
25. Intensive blood-glucose control with sulphonylureas or insulin compared with conventional treatment and risk of complications in patients with type 2 diabetes (UKPDS 33). UKPDS Group. Lancet 352:837–853, 1998
26.
Keech AC, Mitchell P, Summanen PA, O'Day J, Davis TM, Moffitt MS, Taskinen MR, Simes RJ, Tse D, Williamson E, Merrifield A, Laatikainen LT, d'Emden MN, Crimet DC, O'Connell RL, Colman PG: Effect of fenofibrate on the need for laser treatment for diabetic retinopathy (FIELD study): a randomised controlled trial. Lancet 370:1687–1697, 2007

Journal of Experimental and Clinical Medicine
http://dergipark.ulakbim.gov.tr/omujecm
Clinical Research
J. Exp. Clin. Med., 2016; 33(4): 205-209. doi: 10.5835/jecm.omu.33.04.005

"Intra Scaphal Opposing Sutures" for Stahl's ear correction

Caglayan Yagmur(a), Ilhami Oguzhan Aydogdu(b), Osman Kelahmetoglu(c)*, Ismail Kucuker(a), Ibrahim Alper Aksakal(b), Ahmet Demir(a)
a Department of Plastic, Reconstructive and Aesthetic Surgery, Medical Faculty, Ondokuz Mayis University, Samsun, Turkey
b Department of Plastic, Reconstructive and Aesthetic Surgery, Samsun Research and Education Hospital, Samsun, Turkey
c Department of Plastic, Reconstructive and Aesthetic Surgery, Medical Faculty, Bezmialem Vakif University, Istanbul, Turkey

ARTICLE INFO: Article History. Received 01/06/2016. Accepted 13/08/2016.

ABSTRACT: Stahl's ear deformity is a rare congenital abnormality characterized by the presence of an abnormal third crus in the upper pole of the auricle. Although various techniques have been advised, there is no standard surgical correction option. In this study, we present a new suture method able to correct Stahl's ear deformity in a more practical and less invasive way. This study includes 4 patients and 4 ears with Stahl's ear deformity corrected by "Intra Scaphal Opposing Sutures". The patients were followed up for 12 months with clinical examination and photography. Patient satisfaction was good, with favorable results. Suturing techniques are a common procedure for Stahl's ear correction.
They can be used alone or in combination with excision, scoring and/or reshaping techniques. The main advantages of our technique are: use of a smaller posterior incision, limited dissection, and less destruction, without performing any excision. Stahl's ear correction with Intra Scaphal Opposing Sutures may offer a practical and stable solution in selected cases, especially those with minor presence of an aberrant third crus. © 2016 OMU

* Correspondence to: Osman Kelahmetoglu, Department of Plastic, Reconstructive and Aesthetic Surgery, Medical Faculty, Bezmialem Vakif University, Istanbul, Turkey. e-mail: osmankelahmetoglu@gmail.com

Keywords: Opposing suture; Stahl's ear deformity; Suture; Technique

1. Introduction
Stahl's ear deformity, defined by Stahl in the 19th century, is a rare congenital abnormality characterized by the presence of an abnormal third crus in the upper pole of the auricle. It is common in Asian populations, being especially even more common among the Japanese (Aki et al., 2000). Although it was suggested to be a genetically inherited deformity, this has not been proven (Binder, 1889). Its presentation is bilateral in approximately 20% of cases (Nakajima et al., 1984; Furnas, 1989; Steven et al., 2001). According to a well-accepted theory, it is caused by a disorder during embryologic development of the helix and scapha during the first three months of development (Skoog, 1974; Fischl, 1976; Yamada and Fukuda, 1980; Yotsuyanagi et al., 1999).

Yamada and Fukuda (1980) have classified Stahl's ear deformity into four types:
Type 1: The third crus has a sharp ridge and extends posterosuperiorly from the antihelix.
Type 2: A rounded third crus extends posterosuperiorly from the antihelix.
Type 3: A broad, two-folded third crus extends posterosuperiorly from the antihelix.
Type 4: The third crus extends posteroinferiorly from the antihelix crura.
Like other deformities consisting of partial malformations of the external ear, Stahl's ear deformity has no standard surgical correction. Described methods include cartilage excision, reshaping of the cartilage, scoring and suture techniques (Furukawa et al., 1985; Sugino et al., 1989; Noguchi et al., 1994). In this article we present a novel suture technique by which we were able to correct Stahl's ear deformity in a more practical and less invasive way.

2. Patients and method
This study included four patients (one male and three females). The patients' ages were between 18 and 56 (mean age 29.5). A total of 4 ears of 4 patients were operated on between 2012 and 2015. The authors were aware of the Code of Ethics of the World Medical Association (Declaration of Helsinki), which was printed in the British Medical Journal (18 July 1964). Informed consent was obtained for each patient before surgery. All patients were followed up for at least 12 months with clinical examination and digital photography. Postoperative satisfaction was evaluated using visual analogue scale (VAS) scores (0-10, 0=worst imaginable surgical outcome, 10=best imaginable surgical outcome).

Statistical analysis
The data were analyzed using Numbers 2011 for Macintosh (Apple Inc., USA).

Surgical technique
The procedure was performed under local anesthesia combined with conscious sedation. The ear was anesthetized with a circumferential block using an equal-parts mixture of 1% lidocaine with 1:100,000 epinephrine and 0.24% bupivacaine with 1:200,000 epinephrine. Supplemental injections were given at the location of the posterior incision. No anterior incision was used. The projected position of the third crus, from its margins, and its junction points with the helix and antihelix were marked with blue dye. A 2 cm post auricular incision to expose the posterior aspect was also marked. The marked junction points (4 in total) were also the points to introduce sutures.
As a convenience, we have numbered the points from 1 to 4 to explain the suture technique easily (Fig. 1). The cartilage of the upper pole was exposed posteriorly through a post auricular sulcus incision. The first suture was introduced medially from "point 1-M". It was delivered laterally from "point 1-L" and reinserted to be delivered from "point 2-L". After it was delivered from "point 2-L" it was reinserted to be delivered from "point 2-M". Then the suture passed from "point 3-M" and was delivered from "point 3-L". It was reinserted from the same point of "point 3-L" and delivered from "point 4-L". Finally the suture was passed from "point 4-L" to "point 4-M" to prepare for a knot. Traversing the sutures between points was made easy by using a 21 gauge needle during the process (Fig. 2). This suture maneuver forms a direct force to bend the anterior convexity to a concave form. Then the same procedure, beginning with point 2, was performed to oppose the first one (Fig. 3a). A posterior horizontal scoring can be made at variable levels to facilitate correction, and then the sutures were stitched.

Fig. 1. Schematic views of locations of points. The view of the first suture (Original artwork by Sefa Ersan Kaya)
Fig. 2. Intraoperative view, showing that traversing the sutures between points was made easy by using a 21 gauge needle during the process

At the end of the procedure, the third cartilage was bent into a normal shape to deepen the scaphoid fossa and antihelical sulcus using two intra scaphal opposing sutures (Fig. 3b).

3. Results
Patients were evaluated 12 months postoperatively. There were no wound healing problems, infection or recurrence.
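As a numerical aside (not part of the original paper), the VAS satisfaction summary reported for the four patients is an ordinary mean ± sample standard deviation. The sketch below uses hypothetical per-patient scores, since the individual scores are not reported in the article:

```python
import statistics

# Hypothetical VAS scores for the four patients (illustrative only;
# the per-patient scores are NOT given in the paper).
vas_scores = [7, 8, 8, 9]

mean_vas = statistics.mean(vas_scores)   # arithmetic mean
sd_vas = statistics.stdev(vas_scores)    # sample standard deviation (n - 1)

print(f"VAS: {mean_vas:.0f}±{sd_vas:.2f}")  # VAS: 8±0.82
```

Whether the published figure uses the sample (n - 1) or population (n) standard deviation is not stated, so the illustrative result agrees with the reported value only up to rounding.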
All patients' satisfaction was good, with favorable results. The average VAS score was 8±0.81.

Case 1: The patient was a 56-year-old otherwise healthy male with unilateral Stahl's ear deformity. Under local anesthesia, the described technique was applied for correction of the deformity. At the control visit after 12 months, the patient reported no problem, with substantial correction achieved (Fig. 4).

Case 2: The patient was an 18-year-old female with unilateral Stahl's ear deformity. The same method was also used for this patient. The patient had no problems at her postoperative 12-month control visit (Fig. 5).

Fig. 3a. Schematic view of the second opposing suture (Original artwork by Sefa Ersan Kaya)
Fig. 3b. Schematic view of posterior scoring made and sutures tied (Original artwork by Sefa Ersan Kaya)
Fig. 4. The male patient had unilateral Stahl's ear deformity. Postoperative view (12 months) of the male patient
Fig. 5. The female patient had unilateral Stahl's ear deformity. Postoperative 12 months view of the female patient.

4. Discussion
Stahl's ear deformity probably occurs as a result of a developmental disorder in the helix and scaphoid fossa of the ear during the third month of embryologic development. Excessive pressure on the developing ear cartilage, or changes in the mother's uterus that cause abnormal growth of the perichondrium, are considered other possible theories (Yotsuyanagi et al., 1999). As there is no standard technique to fix this deformity, various techniques have been described with variable success rates in the literature. Different techniques can be considered for different age groups. The auricular cartilage of a newborn baby is very soft and malleable. This is mainly caused by the increase of hyaluronic acid content of the extracellular matrix from the impact
Therefore pliable metal molds covered with resin can be used to shape the auricular cartilage of a baby during the neonatal period (Matsuo et al., 1984). When molding is performed early after birth, a good correction is possible. However, as estrogen levels decrease in time (in the 6th week, it decreases to similar levels of normal children) the malleability of the auricular cartilage decreases. Therefore external molding approach is not suitable for patients other than infants (Tan et al., 1997). As for the surgical correction, wedge excision of the accessory crus in Stahl’s ear deformity was first defined by Joseph in 1931 (Joseph, 1931). Yamada and Fukuda (1980) performed reshaping by using cartilage excision and suture. A reshaping approach combined with excision was also reported by Ono et al. (1996). The cartilage scoring technique on the other hand, was first defined by Furukawa in 1985. Nakayama and Soeda (1986) obtained successful results by suturing abnormal crus to the temporal bone periosteum. Tsujiguchi and colleagues (1992) reported good results with cuts in different directions for the third crus and excision of the cartilage and reshaping. Excision of the third crus and application of it as a diced cartilage graft to restore shape was performed by Noguchi et al. (1994). In their technique they utilized an external mold to stabilize the grafted region and facilitate healing. Khan and colleagues (2010) described a double layered suturing technique and treated a patient complaining of Stahl’s ear deformity. In their method a post-auricular sulcus incision was used and the cartilage of the upper pole was totally denuded. Then a technique of double row U shaped continuous sutures was used to correct the deformity. Suturing techniques are not new for Stahl’s ear correction. They are used alone or in combination with excision, scoring and/or reshaping techniques. 
The main advantages of our technique are: use of a smaller posterior incision, limited dissection, and less destruction, without performing any excision. As our technique mainly depends on suturing, the procedure is reversible and more practical when compared to excisional techniques, and strong enough to eliminate the third crus. On the other hand, during the application of our suture technique one must be cautious not to harm the cartilage, especially during the stitching maneuver. More caution is required if cartilage scoring is performed. Another point is to select an appropriate suture material, because colored suture materials may reflect color through the skin after application. In conclusion, among other techniques, Intra Scaphal Opposing Sutures may offer a more practical and effective solution for Stahl's ear correction, especially in cases with minor presence of an aberrant third crus.

REFERENCES
Aki, F.E., Kaimoto, C.L., Katayama, M.L., Kamakura, L., Ferreira, M.C., 2000. Correction of Stahl's ear. Aesth. Plast. Surg. 24, 382-385. doi: 10.1007/s002660010062.
Binder, H., 1889. Das Morelsche Ohr: Eine psychiatrisch-anthropologische Studie. Arch. Psychiatr. 20, 514-564. doi: 10.1007/BF02086699.
Fischl, R.A., 1976. The third crus of the antihelix and another minor anomaly of the external ear. Plast. Reconstr. Surg. 58, 192-195. doi: 10.1097/00006534-197608000-00009.
Furnas, D.W., 1989. Plastic and reconstructive surgery of the external ear. Adv. Plast. Reconstr. Surg. 5, 153-159.
Furukawa, M., Mizutani, Z., Hamada, T., 1985. A simple operative procedure for the treatment of Stahl's ear. Br. J. Plast. Surg. 38, 544-545. doi: 10.1016/0007-1226(85)90018-9.
Joseph, J., 1931. Die Ohrenplastik, Nasenplastik und sonstige Gesichtsplastik nebst einem Anhang über Mammaplastik. Kabitzsch, Leipzig. doi: 10.1002/bjs.1800197416.
Kenny, F.M., Angsusingha, K., Stinston, D., Hotchkiss, J., 1973. Unconjugated estrogens in the perinatal period. Pediatr. Res. 7, 826-831.
doi: 10.1203/00006450-197310000-00006.
Khan, M.A., Jose, R.M., Ali, S.N., Yap, L.H., 2010. Correction of Stahl ear deformity using a suture technique. J. Craniofac. Surg. 21, 1619-1621. doi: 10.1097/SCS.0b013e3181ef2f0c.
Matsuo, K., Hirose, T., Tomono, T., 1984. Non-surgical correction of auricular deformities in the early neonate: A preliminary report. Plast. Reconstr. Surg. 73, 38-50. doi: 10.1097/00006534-198401000-00009.
Nakajima, T., Yoshimura, Y., Kami, T., 1984. Surgical and conservative repair of Stahl's ear. Aesthetic Plast. Surg. 8, 101-107. doi: 10.1007/BF01575252.
Nakayama, Y., Soeda, S., 1986. Surgical treatment of Stahl's ear using the periosteal string. Plast. Reconstr. Surg. 77, 222-226. doi: 10.1097/00006534-198602000-00008.
Noguchi, M., Matsuo, K., Imai, Y., Furuta, S., 1994. Simple surgical correction of Stahl's ear. Br. J. Plast. Surg. 47, 570-572. doi: 10.1016/0007-1226(94)90142-2.
Ono, I., Gunji, H., Tateshita, T., 1996. An operation for Stahl's ear. Br. J. Plast. Surg. 49, 564-567. doi: 10.1016/S0007-1226(96)90135-6.
Skoog, T.G., 1974. Perichondral otoplasty in an irregular deformity of the ear. In: Plastic Surgery. Almqvist & Wiksell International, Stockholm. pp. 277.
Steven, G., Wallach, M.D., Ravelo, V., Argamaso, M.D., 2001. The crumpled ear deformity. Plast. Reconstr. Surg. 108, 30-37. doi: 10.1097/00006534-200107000-00006.
Sugino, H., Tsuzuki, K., Bandoh, Y., Tange, I., 1989. Surgical correction of Stahl's ear using the cartilage turnover and rotation method. Plast. Reconstr. Surg. 83, 160-164. doi: 10.1097/00006534-198901000-00031.
Tan, S.T., Abramson, D.L., MacDonald, D.M., Mulliken, J.B., 1997. Molding therapy for infants with deformational auricular anomalies. Ann. Plast. Surg. 38, 263-268. doi: 10.1097/00000637-199703000-00013.
Tsujiguchi, K., Tajima, S., Tanaka, Y., Hira, M., 1992. A new method for correction of Stahl's ear. Ann. Plast. Surg. 28, 373-376.
doi: 10.1097/00000637-199204000-00014.
Yamada, A., Fukuda, O., 1980. Evaluation of Stahl's ear, third crus of antihelix. Ann. Plast. Surg. 4, 511-515. doi: 10.1097/00000637-198006000-00011.
Yotsuyanagi, T., Nihei, Y., Shinmyo, Y., Sawada, Y., 1999. Stahl's ear caused by an abnormal intrinsic auricular muscle. Plast. Reconstr. Surg. 103, 171-174. doi: 10.1097/00006534-199901000-00027.

JOURNAL OF THE LEPIDOPTERISTS' SOCIETY
Journal of the Lepidopterists' Society 66(3), 2012, 168–170

A NEW ANTAEOTRICHA SPECIES FROM UTAH AND NEW MEXICO (GELECHIOIDEA: ELACHISTIDAE: STENOMATINAE)

CLIFFORD D. FERRIS
5405 Bill Nye Avenue, R.R.#3, Laramie, WY 82070. Research Associate: McGuire Center for Lepidoptera and Biodiversity, Florida Museum of Natural History, University of Florida, Gainesville, FL; C. P. Gillette Museum of Arthropod Diversity, Colorado State University, Ft. Collins, CO; Florida State Collection of Arthropods, Gainesville, FL. email: cdferris@uwyo.edu

ABSTRACT. The new species Antaeotricha utahensis is described from San Juan Co., Utah and Catron, Grant and Santa Fe counties, New Mexico. Adults and genitalia are illustrated.
Additional key words: Antaeotricha utahensis, Elachistidae, Gelechioidea, New Mexico, North America, Stenomatinae, Utah.

For the past several seasons in the Southwest, I have collected specimens of a middle-sized unmarked creamy-white Antaeotricha species. I now have a small series of this moth, and my recent examination of the genitalia failed to produce a match with any species in the literature. There are smaller-sized pale species known from the eastern United States, but the genitalia in both sexes are very different from those of the specimens I have collected. In 1964, Duckworth reviewed the North American Stenomatinae and described two new species of Antaeotricha, expanding the North American species total to fifteen.
I have subsequently described the maculated gray Antaeotricha arizonensis (Ferris, 2010). The genus Antaeotricha is most easily recognized by the anatomy of the male genitalia. The lightly sclerotized valves are narrow, tapering to a rounded tip. The prominent harpe has a thumblike costal projection bearing long, bifurcate, recurved setae (Figs. 2–3).

Materials and methods. Nine specimens of the new species were collected in bucket traps of the author's design using 8 watt BL fluorescent tubes operated from electronic power converters connected to 12 volt motorcycle batteries. Two additional specimens found by John W. Brown in the National Museum of Natural History collection were borrowed for examination. Genitalia dissection was carried out after macerating the abdomens in hot 10% KOH for fifteen minutes. Temporary slides were prepared using glycerin as the suspension medium. The genitalia are stored in glycerin in polyethylene genitalia vials attached to the specimen pins.

Antaeotricha utahensis Ferris, new species (Figs. 1–7)

Diagnosis. The essentially unmarked, glossy creamy dorsal surface of the forewings of Antaeotricha utahensis separates it from its congeners. It superficially resembles a very pale form of A. thomasi (Barnes and Busck), but in the male genitalia the uncus is distally broadly spatulate with a pointed tip, whereas in A. thomasi it is distally narrow with a notched apex. In the female genitalia, the signum is different in shape (a modified six-pointed star in A. utahensis versus a cross in A. thomasi). The male genitalia of A. arizonensis Ferris and A. fuscorectangulata Duckworth have an uncus shape similar to A. utahensis, but adults of both species have heavily maculated dorsal forewings.

Description. Imago: Sexes similar except antenna and genitalia. Except as noted for eye, tarsi and wings, remaining body components are glossy creamy white. [The gloss/sheen presented digital photography problems. In Fig.
1 the dark areas of the hindwing are a photographic artifact.] Head: Antenna (shaft and scape) glossy creamy white; ciliated ventrally in male, cilia slightly longer than width of flagellomere with curved tips; simple in female. Eye mottled black. Haustellum present. Labial palpus upcurved, extending well above crown of head. Tarsi: Blend from creamy white to very pale tan toward tips; claws brown. Wings: Forewing length: males (n = 9) 9–11 mm, ave. = 10.2 mm; females (n = 2) both 9.5 mm. Dorsal forewing elongate with rounded distal margin. Ground color glossy silky creamy white, very sparsely overlaid with very small single brown scales (visible only with magnification); a few single brown scales along base of fringe only; fringe scales otherwise glossy creamy white. Ventral forewing covered with many brown scales producing a dark tan color. Dorsal hindwing glossy, with slightly warmer creamy color and without any small brown scales. Ventral hindwing similar to dorsal but less glossy. Male genitalia (Figs. 2–5; 4 dissections): Uncus decurved, spatulate, narrow basally with a sharp apical point. Gnathos upcurved at midpoint with distal portion tapering to a rounded tip. Vinculum complete, arching in front. Anellus without distinct lobes. Valva elongate, expanded in middle, then tapering to broadly rounded tip; harpe thumblike, bearing many long recurved bifurcate setae. Aedeagus short and broad (length about 2.5 times diameter) with multiple cornuti and irregular anterior margin; exposed vesica with 2 to 4 apparently deciduous robust spines and additional broad-based semi-fused robust curved spines and a setose brush (arrows in Figs. 4–5). Female genitalia (Figs. 6–7; 1 dissection): Ovipositor lobe basally broad and straight with rounded apex, sparsely covered with short fine hairs. Posterior apophyses well developed; anterior apophyses vestigial. Sterigma broad and open.
Ductus bursae at top partially heavily sclerotized with lengthwise slender triangular plate, short and broad (only slightly longer than diameter), opening into tear-shaped corpus bursae, the top of which bulges slightly above junction with ductus bursae. Signum a large stylized six-pointed star with central outwardly projecting oblong plate perpendicular to base; plate is nearly rectangular with rounded edges and corners. Ductus seminalis originates from upper quadrant of corpus bursae.

FIGS. 1–7. Antaeotricha utahensis. 1, adult male holotype and female paratype. 2–5, male genitalia. 2, genital capsule, aedeagus removed and flattened. 3, unflattened genital capsule showing spatulate uncus and aedeagus in situ. 4, aedeagus, vesica partially expanded (enlarged; arrow points to setal brush). 5, aedeagus of second specimen (enlarged and vesica slightly everted; arrow points to setal brush). 6–7, female genitalia. 6, full genitalia. 7, signum (enlarged).

Discussion. Based upon the male genitalia, Antaeotricha utahensis is closest to A. fuscorectangulata (adult forewings heavily maculated), but the tegumen is not produced into a dorsally projecting process in front; the aedeagus is shorter and broader with a different complement of cornuti. The female genitalia are closest to A. thomasi, but the signum in A. thomasi is a cross with a projecting element. In A. thomasi, the ductus seminalis originates from the midpoint of the ductus bursae, while it originates from the upper quadrant of the corpus bursae in A. utahensis.

Types. Holotype male (Fig. 1): UTAH, San Juan Co., 37°44.90'N, 109°24.75'W, 7280' (2220 m), 30 July 2011. Deposited in Carnegie Museum, Pittsburgh, PA.
Paratypes: 4♂ (2 dissected), 1♀ (dissected), same data as holotype; NEW MEXICO, Catron Co., 33°39.97'N, 108°52.74'W, 6290' (1918 m), 5.vii.07, 1♂ (dissected); Grant Co., 32°64.86'N, 108°13.44'W, 6820' (2080 m), 19.vii.07, 1♂ (dissected), 33°03.70'N, 108°12.68'W, 6200' (1890 m), 1.viii.11, 1♂. Paratypes in author's collection. Two additional paratypes in National Museum of Natural History, Washington DC: NEW MEXICO, Grant Co., Pinos Altos Mts., 6500' (1980 m), nr. Silver City, 14.viii.87, R. Leuschner, 1♀; Santa Fe Co., Tesuque, 26.vii.89, R. Leuschner, 1♂.

Biology. Unknown; adults from early July to early August. The type locality (Fig. 9) and the three sites in southwestern New Mexico are moderately dry oak–conifer forest.

Distribution. Known from southeastern Utah and New Mexico (Fig. 8).

Etymology. The name utahensis (adjective) denotes the geographic locality where the holotype was collected.

ACKNOWLEDGEMENTS
My thanks to John W. Brown, Smithsonian Institution, National Museum of Natural History, Washington, DC for the loan of specimens for examination. Two anonymous reviewers provided helpful comments that clarified the content of this manuscript.

LITERATURE CITED
DUCKWORTH, W. D. 1964. North American Stenomidae (Lepidoptera: Gelechioidea). Proceedings of the United States National Museum 116:23–72.
FERRIS, C. D. 2010. A new Antaeotricha species from southeastern Arizona (Gelechioidea: Elachistidae: Stenomatinae). ZooKeys 57:59–62.

Received for publication 29 January 2012; revised and accepted 28 February 2012

FIG. 8. Distribution map.
FIG. 9. Type locality habitat, San Juan Co., Utah.

© 2005 Blackwell Publishing • Journal of Cosmetic Dermatology, 4, 167–173
Original Contributions
Clinical efficacy assessment in photodamaged skin of 0.5% and 1.0% idebenone

DH McDaniel, 1 BA Neudecker, 2 JC DiNardo, 3 JA Lewis II, 3 & HI Maibach

1 Institute of Anti-Aging Research, Eastern Virginia Medical School, Norfolk, VA; 2 Department of Dermatology, University of California San Francisco, San Francisco, CA; 3 Pharma Cosmetix Research, LLC, Richmond, VA

Summary

Idebenone is an antioxidant, lower-molecular-weight analogue of coenzyme Q10. Previously, idebenone was shown to be a very effective antioxidant, able to protect against cell damage from oxidative stress in a variety of biochemical, cell-biological, and in vivo methods, including suppression of sunburn cell (SBC) formation in living skin. However, no clinical studies had previously been conducted to establish the efficacy of idebenone in a topical skincare formulation for the treatment of photodamaged skin. In this non-vehicle-controlled study, 0.5% and 1.0% idebenone commercial formulations were evaluated in a clinical trial for topical safety and efficacy in photodamaged skin. Forty-one female subjects, aged 30–65, with moderately photodamaged skin were randomized to use a blind-labelled skincare preparation (either 0.5% or 1.0% idebenone in otherwise identical lotion bases) twice daily for six weeks. Blinded expert-grader assessments for skin roughness/dryness, fine lines/wrinkles, and global improvement in photodamage were performed at baseline, three weeks, and six weeks. Electrical conductance readings for skin surface hydration and 35 mm digital photography were made at baseline and after six weeks. Punch biopsies were taken from randomly selected subjects at baseline and after six weeks and stained for certain markers (interleukin IL-6, interleukin IL-1b, matrix metalloproteinase MMP-1, and collagen I) using immunofluorescence microscopy.
After six weeks’ use of the 1.0% idebenone formula, a 26% reduction in skin roughness/dryness was observed, along with a 37% increase in skin hydration, a 29% reduction in fine lines/wrinkles, and a 33% improvement in overall global assessment of photodamaged skin. For the 0.5% idebenone formulation, a 23% reduction in skin roughness/dryness was observed, along with a 37% increase in skin hydration, a 27% reduction in fine lines/wrinkles, and a 30% improvement in overall global assessment of photodamaged skin. The immunofluorescence staining revealed a decrease in IL-1b, IL-6, and MMP-1 and an increase in collagen I for both concentrations.

Keywords: antioxidant, idebenone, photodamage

Introduction

Antioxidants have become popular anti-aging ingredients in topical skincare products. Antioxidants have been shown to be photoprotective and anti-inflammatory, able to reduce UVR-induced immunosuppression, and protective against free radical–mediated cellular damage. 1 Typically these ingredients have been referred to as agents that are protective in nature (i.e., excellent for treating the cause of aging) but offer little benefit from an efficacy standpoint in their ability to reverse the signs of skin aging (i.e., treating the effects of skin aging). Although there has been significant research relating to antioxidant protective benefits, there has been very little published clinical research that would demonstrate efficacy in the treatment of aging skin or photodamaged skin. Some data have been introduced for vitamin C and coenzyme Q10 in this regard, but generally, clinical data to support antioxidant efficacy in the treatment of photodamaged skin are lacking.

Correspondence: D. H. McDaniel, MD, Institute of Anti-Aging Research, 933 First Colonial Road Suite 113, Virginia Beach, Virginia 23454, E-mail: mjohnson@lsvcv.com. Accepted for publication June 25, 2005.
In this clinical research, we assessed the safety and efficacy of topical skincare preparations containing 1.0% and 0.5% idebenone, a novel antioxidant for skincare, in the treatment of photodamaged skin. Previously, idebenone was compared against commonly known popular antioxidants in skincare products (vitamin C, vitamin E, alpha lipoic acid, kinetin, and coenzyme Q10) and shown to be very effective in its ability to protect against damage resulting from oxidative stress in a variety of biochemical, cell-biological, and in vivo methods. In that research, idebenone was the most effective antioxidant in overall global assessment of protection from oxidative stress and received the highest environmental protection factor (EPF) rating of the antioxidants tested. 6

The chemical structure of idebenone is very similar to that of coenzyme Q10 (see Fig. 1). Idebenone is a lower-molecular-weight (approximately 60% smaller) analogue of coenzyme Q10. In theory this should aid the penetration of the molecule in topical applications, compared to that of coenzyme Q10. Coenzyme Q10 is a very important electron transfer agent of the respiratory chain in the mitochondria of our cells, the source of metabolic energy production. Unlike coenzyme Q10, however, idebenone protects against radical formation and cell damage under hypoxic (low oxygen) cellular stress situations. Under these same conditions, coenzyme Q10 is known to auto-oxidize, becoming a very potent pro-oxidant and leading to the production of hydrogen peroxide, superoxide, and hydroxy radicals in massive numbers. 7 In addition, idebenone has a structure similar to hydroquinone (see Fig. 2), occurring in all the various chemical forms (quinone, semiquinone, and hydroquinone) depending on the cellular oxidation–reduction (redox) situation. Therefore, one might expect it to have a similar inhibitory action on increased melanin production (hyperpigmentation) by the melanocytes.
Thus, idebenone, both a powerful antioxidant lower-molecular-weight analogue of coenzyme Q10 and a mimic of hydroquinone, might be expected to deliver clinically visible results in the treatment of photodamaged skin. This study was designed to establish clinical safety and efficacy for a topical preparation containing idebenone at two different concentrations (0.5% and 1.0% w/w) and to investigate possible mechanisms of action. The concentrations chosen were selected based on results obtained from previous studies conducted via biochemical, cell-biological, and in vivo methods. In particular, a sunburn cell (SBC) dose–response study revealed that idebenone demonstrated significant efficacy at 0.5% (38% reduction in SBCs) and was even more effective at a 1.0% concentration (44% reduction in SBCs).

In general, it is known that there are several major matrix degradation pathways that lead to premature aging of the skin (see Fig. 3). 8 Both the cJun/cFos gene (AP-1 transcription factor) pathways that affect MMP/collagen synthesis and the nuclear factor kappa beta (NF-κB)/interleukin pathways that affect the inflammatory response have as a common denominator their initiation by free radical–mediated oxidative stress. Inhibition of either pathway may improve the overall appearance of photoaged skin. Specifically, in randomly selected subjects participating in the clinical trial, skin biopsies were taken and assessed for the presence of certain biomarkers that are key components of both degradation pathways (IL-6, IL-1b, MMP-1, and collagen I), utilizing immunofluorescence staining techniques. 9 A decrease in MMP activity may result in a net increase in collagen, and a decrease in inflammation, particularly in IL-6, may also reduce MMP, producing changes in the dermal matrix that may provide the mechanism of action for the clinical results seen in the study.

Figure 1 Structure and molecular weight of idebenone vs. coenzyme Q10. Figure 2 Core structure of idebenone demonstrating quinone, semiquinone, and hydroquinone redox forms.

Methods, materials, and study design

Fifty female subjects with moderate photoaging (dyschromic facial skin with fine lines and wrinkles) between the ages of 30 and 65 were enrolled in the study and randomly divided into two equal groups of 25 subjects each. Subjects were given a commercial water-in-oil-in-water (w/o/w) emulsion lotion containing either 1.0% or 0.5% idebenone w/w in blinded containers. Of the 25 subjects receiving the 1.0% formulation, 21 completed the study; of the 25 subjects receiving the 0.5% formulation, 20 completed the study. All study dropouts were for personal reasons, and none were related to the efficacy, safety, or adverse events from the products. A lotion base, as opposed to a gel or cream, was chosen because this is, by overwhelming preference, the most popular base for the general consumer population. The subjects were instructed to apply the product to the entire facial area twice daily (b.i.d.), morning and evening, for a period of six weeks. Twice-daily application was chosen to maximize efficacy benefit and at the same time to evaluate maximum use, to reveal any potential skin sensitivity problems or adverse reactions (in order to better assess the safety of the idebenone formulations at each concentration). A study duration of six weeks was chosen because marketing research has shown that consumers are not likely to continue use of an anti-aging cosmetic product if they cannot see visible results within a relatively short period of time. Six weeks was also chosen to enable comparison to other short-term studies typical for similar cosmetic anti-aging products.
All subjects received active product (either the 0.5% or 1.0% formulation); no placebo was employed in the study. This study design was chosen to maximize dose-comparison safety and efficacy, minimize overall cost, and eliminate the crossover mix-up or contamination that could potentially occur in a split-face active/placebo or vehicle-control design, as well as to provide specific information concerning safety and efficacy for a finished product formulation intended for market release. [Note: These study results highlight the safety and efficacy of a final product formulation rather than the specific ingredient idebenone. It should be noted that no other known anti-aging ingredients were employed in the tested formulation, and therefore any increase in efficacy over the results one would expect from an ordinary water-in-oil-in-water emulsion should reasonably be attributed to the incorporation of idebenone.] Specific subject inclusion and exclusion criteria are given in Table 1. Subjects were allowed to continue their normal facial cleansing routine and makeup products and were instructed to apply the test product to clean, dry facial skin. Subjects were also given an SPF 15 sunscreen to apply after application of the test products. No other products were used in the study. This design best exemplifies the way the product would be used in the marketplace. Subject selection was based on willingness to participate in the study with informed consent, no history of cosmetic ingredient/product sensitivity, not being pregnant or nursing, and freedom from any medication that could potentially impact the study results (e.g., retinoids, anti-inflammatories).

Figure 3 Pathways of aging skin.
Clinical evaluations were performed at baseline, three weeks, and six weeks, and included (1) high-resolution, standardized digital photography (Fuji S1, Japan); (2) electrical conductance measurements (Novameter Model DPM 9003, Nova Technology Corporation, Gloucester, MA, USA) for skin surface hydration; and (3) blinded expert-grader assessments using a 0–4 scale (0 = no change, 1 = 25% improvement, 2 = 50% improvement, 3 = 75% improvement, 4 = 100% improvement), made via analysis of the high-resolution digital photographs in a blinded scenario for skin roughness/dryness, fine lines/wrinkles, and overall global improvement in photodamaged skin. Patients were instructed to abstain from washing the facial area for at least two hours prior to evaluation visits and were acclimatized for 15 minutes in a humidity-controlled environment room prior to the Novameter moisturization measurements. Patients were also given a diary/questionnaire designed to capture their perceived experiences with the product and to report any skin irritation, redness, swelling, or other adverse events. The study was conducted from October 6, 2003 through November 14, 2003 at Anti-Aging Research & Consulting, LLC, Virginia Beach, Virginia, USA; the principal investigator was David H. McDaniel, MD.

Skin biopsies (2 mm) were taken from the sun-exposed periorbital/temple areas, approximately 1 cm apart, in randomly selected subjects from both patient subgroups (three subjects total). After formalin fixation, the 2-mm skin punch biopsies were paraffin embedded and cut into 5-micron sections. The paraffin sections were stained with hematoxylin-eosin (H&E) and Masson's trichrome, as well as immunostained with antibodies for collagen I, MMP-1, IL-1b, and IL-6, with FITC-conjugated anti-goat antibodies used for immunofluorescence, at baseline and after six weeks.
Study results

After six weeks of twice-daily use of the 1.0% idebenone formulation, a 26% reduction in skin roughness/dryness, a 29% reduction in fine lines/wrinkles, a 37% increase in skin hydration, and an overall 33% improvement in global assessment of photodamaged skin were observed. For the 0.5% idebenone formulation, a 23% reduction in skin roughness/dryness, a 27% reduction in fine lines/wrinkles, a 37% increase in skin hydration, and an overall 30% improvement in global assessment of photodamaged skin were observed (see Fig. 4). Before and after photography recorded a visible improvement in periorbital rhytides and pigment dyschromia in both sets of subjects, receiving either the 0.5% or the 1.0% formulation (see Fig. 5a,b,c,d). H&E- and Masson's-stained slides demonstrated general improvement in the epidermis as well as some increase in dermal collagen. Immunofluorescence staining of skin biopsies revealed decreased staining for MMP-1, IL-6, and IL-1b, and an increase in collagen I; quantitative measurements were not determined (see Fig. 6a,b,c,d). There were no recorded adverse events.

Table 1 Inclusion/exclusion criteria for study.

Inclusion Criteria:
1. Subjects with moderate photoaging must be diagnosed by the investigator.
2. Subjects must be female and preferably above 30 years of age, with no known medical conditions that, in the investigator's opinion, may interfere with study participation.
3. Subjects must discontinue all current photoaging products.
4. Subjects must provide written informed consent and photography consent.

Exclusion Criteria:
1. Any dermatological disorder or personal appearance issue which, in the investigator's opinion, may interfere with the accurate evaluation of the subject's face.
2. Subjects who have demonstrated a previous hypersensitivity reaction to any ingredients in the study products.
3. Concurrent therapy with any medication, either topical or oral, that might interfere with the study.
4.
Subjects who have undergone any surgical treatment to the tissues of the face.
5. Subjects who are not willing to discontinue all anti-aging prescription or OTC cosmeceutical preparations to the face.
6. Subjects who have participated in another clinical trial or have taken an experimental drug within the past 30 days.
7. Subjects who are pregnant, breast-feeding, or planning a pregnancy.
8. Subjects who are unwilling or unable to comply with the requirements of the protocol.

Discussion

The purpose of the testing presented was to evaluate the efficacy of two products, one containing 1% idebenone and the other 0.5% idebenone, pre- and post-treatment in photodamaged skin. Although no vehicle control was included in the study, the main scope was to evaluate the before and after effects of the products (i.e., the baseline values obtained for each individual were considered the control for the effects observed) and not just to evaluate the benefits of the ingredient idebenone specifically. Although one cannot definitively assign all the clinical benefits to idebenone, it was felt that the moisturization effects noted (37% for both products) appear to be associated more with the base than with idebenone itself, and that the decrease in skin roughness/dryness may be a combination of the two.

Figure 4 Data of 0.5% idebenone and 1.0% idebenone. Figure 5 Wrinkle and pigment reduction. (a) Wrinkle reduction before and after 6 weeks' use of 1.0% idebenone; (b) pigment reduction before and after 6 weeks' use of 1.0% idebenone; (c) wrinkle reduction before and after 6 weeks' use of 0.5% idebenone; (d) pigment reduction before and after 6 weeks' use of 0.5% idebenone.
Conversely, a general moisturizer similar to that employed in this study (i.e., free of vitamins, antioxidants, AHAs, BHAs, etc.) has not been documented to produce histological changes associated with collagen production and/or modulation of inflammatory markers. The biopsy results suggest that the mechanism of action of idebenone is to inhibit at least two free radical–mediated degradation pathways that lead to premature aging of the skin. Therefore, we believe that the results observed for the reduction in lines/wrinkles and the overall improvement in photodamaged skin are directly related to idebenone and not the base. In summary, both formulations appear to be effective in the treatment of photodamaged skin. The 1.0% idebenone formulation was about 10% more effective overall and also delivered faster results. Additional clinical trials are currently being planned to gain further understanding of the antioxidant benefits and mechanisms of action associated with idebenone.

Conclusion

The overall results of this clinical trial suggest that idebenone is a promising new active ingredient for topical skincare therapy of aging skin.

Figure 6 Immunofluorescence biopsy. (a) Before and after staining for MMP-1; (b) before and after staining for IL-6; (c) before and after staining for IL-1b; (d) before and after staining for collagen I.

References

1 Farris PK. Cosmeceuticals: a review of the science behind the claims. Cosmet Derm 2003; 16: 59–70.
2 Fitzpatrick RE, Rostan EF. Double-blind, half-face study comparing topical vitamin C and vehicle for rejuvenation of photodamage. Dermatol Surg 2002; 28: 231–6.
3 Traikovich SS. Use of topical ascorbic acid and its effects on photodamaged skin topography. Arch Otolaryngol Head Neck Surg 1999; 125: 1091–8.
4 Hoppe U, Bergemann J, Diembeck W, Ennen J, Gohla S, Harris I, Jacob J, Kielholz J, Mei W, Pollet D, Schachtschabel D, Sauermann G, Schreiner V, Stab F, Steckel F. Coenzyme Q10, a cutaneous antioxidant and energizer. Biofactors 1999; 9: 371–8.
5 Ghosk DK, Murthy UV. Anti-aging benefits of topical formulation containing Coenzyme Q10: results of 2 clinical studies. Cosmet Dermatol 2002; 15: 55–60.
6 McDaniel DH, Neudecker BA, DiNardo JC, Lewis II JA, Maibach HI. Idebenone: a new antioxidant. Part I. Relative assessment of oxidative stress protection capacity compared to commonly known antioxidants. J Cosmet Dermatol 2005; 4: 10–7.
7 South J. Idebenone: the ultimate antiaging drug. Offshore Pharmacy, March 1997.
8 McDaniel DH. Genetic research unlocks secrets of effective skincare. Aesthetic Buyers Guide, January/February 2004.
9 http://public.bcm.tmc.edu/rosenlab/protocols/generalIF.pdf

work_frfmxmtkhfg55kwbozbjcsqsfe ---- www.rspsciencehub.com Volume 02 Issue 07 July 2020 International Research Journal on Advanced Science Hub (IRJASH) 100

Machine learning amalgamation of Mathematics, Statistics and Electronics

Trupti S. Gaikwad 1, Snehal A. Jadhav 2, Ruta R. Vaidya 3, Snehal H. Kulkarni 4

1,2,3,4 Dept. of Computer Science, Vishwakarma College of Arts, Commerce and Science, Maharashtra, India
trupti08jadhav@gmail.com

Abstract

Interdisciplinary research is research carried out by an individual or a group of persons in which knowledge, data, techniques, and concepts are incorporated from two or more disciplines. In this paper we try to throw light on this concept. Machine learning is a branch of computer science that uses information, tools for data collection, and methods of analysis from subjects such as Electronics, Mathematics, and Statistics. Why do we use machine learning? Because it plays an influential role in the prediction of data.
Machine learning is used to find hidden patterns and essential ideas in data, and to solve complex problems. In today's world, many applications have large volumes of data: structured, unstructured, and semi-structured. This unexploited resource of knowledge can be used to improve business decisions. As data diversifies, many are adopting machine learning tools for data analysis, so that they can extract the most intelligence and benefit from that data. Machine learning employs different algorithms, and each algorithm performs a different function. In this paper, we try to explain, through an example, how Electronics is used for the collection of data, Mathematics and Statistics are used for analysis, and finally, using machine learning, results can be predicted.

Keywords: Machine learning, Analysis, Statistics, Electronics, Mathematics

1. Introduction

Machine learning lies at the intersection of Mathematics, Statistics, Probability, Electronics, and Computer Science. Machine learning uses algorithms to learn iteratively from data and, using positive feedback, can build an intelligent application. Though machine learning differs from traditional computational approaches, it is part of computer science. In conventional programming, algorithms for problem solving are explicitly written and used by the computer. Machine learning, on the other hand, is the concept that a machine can learn from past experience and previously solved examples without being explicitly programmed; examples include biometric attendance, fraud detection, face recognition, and text recognition.

1.1 What is Machine Learning?

Machine learning is the study of computer algorithms that improve automatically through experience [1-4]. It gives a system the ability to learn and improve from previous experience without being explicitly programmed.
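The idea of learning from stored examples rather than hand-written rules can be sketched in a few lines. The following is a minimal illustration (not from the paper): a 1-nearest-neighbour classifier that memorises labelled examples and labels a new input by its closest stored example. The temperature readings and labels are invented for the sketch.

```python
# "Learning from past experience" instead of explicit rules:
# a 1-nearest-neighbour classifier memorises labelled examples and
# labels a new input by the closest stored example.

def nearest_neighbour(examples, x_new):
    """examples: list of (feature, label) pairs; returns the predicted label."""
    best_feature, best_label = min(examples, key=lambda pair: abs(pair[0] - x_new))
    return best_label

# Past experience: ambient temperatures (degrees C) labelled by a human.
history = [(2.0, "cold"), (5.0, "cold"), (20.0, "warm"), (27.0, "warm")]

print(nearest_neighbour(history, 4.0))   # closest example is 5.0 -> cold
print(nearest_neighbour(history, 23.0))  # closest example is 20.0 -> warm
```

No rule such as "below 10 degrees means cold" is ever written; the decision boundary emerges from the stored examples, which is the essence of the definition above.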
It mainly focuses on the development of programs that can access data and learn from it for themselves.

Fig.1. Interdisciplinary Machine Learning

How does machine learning work? The learning process starts with observations or data, such as direct experience, examples, or instruction, so that the system can find patterns in the data and make better determinations about the future based on the examples provided. The main goal is to allow the system to learn automatically, without human intervention, and to modify its actions accordingly. There are different types of algorithm. Machine learning algorithms are classified as:

Supervised machine learning - In supervised machine learning, an algorithm uses input variables (X) to produce an output variable (Y); the algorithm learns the mapping function from input to output, Y = f(X). The main aim of the algorithm is to approximate the mapping function so that for new input data (X) it can predict the output variable (Y). Because the algorithm learns from a training data set, the process can be thought of as a teacher supervising the learning; hence it is called supervised learning. It can be further grouped into regression and classification problems. Examples of supervised learning algorithms are:
1. For classification problems - Support vector machines
2. For regression problems - Linear regression
3. For classification and regression problems - Random forest

Unsupervised machine learning - Unsupervised machine learning has only input data (X); it does not have an associated output variable (Y). To acquire more knowledge about the data, its underlying structure or distribution is modelled. It is called unsupervised learning because there is no correct answer and no teacher; the algorithm itself discovers and presents the interesting structure in the data. It can be further grouped into clustering and association problems. Unsupervised learning uses the following algorithms:
1.
For clustering problems - k-means
2. For association rule mining - the Apriori algorithm

Semi-supervised machine learning - In semi-supervised machine learning, an output variable (Y) is present for only some of the input data (X). It is a composite of supervised and unsupervised machine learning. Here we can use both: (i) supervised learning techniques, where we want the best predictions for the unlabeled data, so that we can feed that data back to the supervised learning algorithm as training data and use the resulting model to make predictions on new, unseen data; and (ii) unsupervised learning techniques, to find and learn the structure in the input variables. Many of today's real-world problems are of this type, because labeling can be expensive: labeled data requires domain experts, while unlabeled data is easy to collect and store.

Reinforcement learning - The machine is exposed to an environment and learns by itself using positive and negative feedback. The machine learns from past experience and is directed to make specific decisions; it tries to capture the best possible knowledge to make accurate business decisions.

Fig.2. Classification of Machine Learning Algorithms

1.2 Electronics: Elementary component of machine learning

Electronics is a primary element whenever we talk about automated, intelligent, or smart systems. Electronics and different branches of computer science, such as machine learning and artificial intelligence, are blended together to invent new applications. The use of machine learning in the engineering field is vital for signal processing; it brings an increase in accuracy and quality when sound, images, and other inputs are transmitted.
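One common signal-processing step of the kind described here is smoothing raw sensor readings before they are fed to a model. The following is a small sketch (not from the article) of a moving-average filter; the sensor values are invented for illustration.

```python
# Sketch of a simple signal-processing step often applied to raw sensor
# data before it reaches a machine learning model: a moving-average
# filter that damps high-frequency noise. Readings are invented.

def moving_average(signal, window):
    """Return the mean of each consecutive `window`-sized slice."""
    return [sum(signal[i:i + window]) / window
            for i in range(len(signal) - window + 1)]

raw = [20.0, 20.4, 25.1, 20.2, 19.8, 20.5]  # one spike of sensor noise
smooth = moving_average(raw, 3)
print(smooth)  # the spike at 25.1 is damped in the averaged series
```

The filtered series is shorter by window-1 samples and its extreme values are pulled toward the local mean, which is exactly the "increase in quality" that preprocessing aims for before pattern detection.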
Machine learning algorithms are helpful for modelling signals, detecting patterns, drawing meaningful inferences, and making precise adjustments to signal output. [5-8] Signal processing techniques are useful for feeding data into machine learning systems. Digital signal processing and digital image processing are both used in many applications along with machine learning. Smart sensors are also used together with machine learning to develop a number of applications, such as weather forecasting systems, healthcare instruments, and smart home systems.

1.3 Mathematics behind machine learning

Mathematics is a main part of machine learning, as it is used at the back end. Machine learning acquires data through algorithms and then uses this data to make predictions. Machine learning requires mathematical knowledge, including linear algebra, calculus, statistics, discrete mathematics, probability, and optimization, which help in creating algorithms. This knowledge bears on accuracy, training time, model complexity, choice of parameter settings, and validation strategies. The importance of the mathematical topics needed for machine learning is as follows.

Linear Algebra: It is the backbone of machine learning. Matrix operations, which are part of linear algebra, are used to find the values of variables X and Y; this is why linear algebra is necessary in machine learning. Not only are all the operations in linear algebra systematic rules, they are also a structural representation of knowledge that a computer can understand easily.
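The point about using matrix operations to find the values of unknown variables can be made concrete with a tiny worked example. This sketch (not from the article) solves the 2x2 system 2x + y = 5, x - y = 1 with Cramer's rule, i.e., ratios of matrix determinants; the coefficients are invented.

```python
# Finding unknown variables x and y via matrix operations:
# Cramer's rule solves  a*x + b*y = e,  c*x + d*y = f  using
# determinants of 2x2 coefficient matrices.

def solve_2x2(a, b, c, d, e, f):
    """Solve a*x + b*y = e and c*x + d*y = f; returns (x, y)."""
    det = a * d - b * c  # determinant of the coefficient matrix
    if det == 0:
        raise ValueError("system has no unique solution")
    x = (e * d - b * f) / det
    y = (a * f - e * c) / det
    return x, y

print(solve_2x2(2, 1, 1, -1, 5, 1))  # solves 2x + y = 5, x - y = 1 -> (2.0, 1.0)
```

Exactly the same determinant and inverse machinery, scaled up, is what least-squares fitting and many other ML routines run on, which is why linear algebra is called the backbone here.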
Topics needed for understanding the methods of machine learning in linear algebra are LU decomposition, orthogonalization, matrix operations, projections, eigenvalues, eigenvectors, vector spaces, Singular Value Decomposition (SVD), etc. Applications of linear algebra in machine learning include natural language processing on tabular datasets, data files (such as encodings and dimensionality reduction), images, etc.

Multivariate Calculus: Most machine learning algorithms are trained on multiple variables. It helps to better quantify and predict information accurately. Laplacian and Lagrangian distributions, vector-valued functions, directional gradients, differential and integral calculus, partial derivatives, and Jacobians are the methods of multivariate calculus used in machine learning. Examples of multivariate calculus include the calculation of monthly rainfall, temperature, wind speed, and so on.

Graph theory: Graphs represent the flow of computation. Graph learning models can be used to learn machine learning algorithms. Graphs are represented computationally by various matrices, and each matrix provides a different type of information. Machine learning models having a graph-like structure are k-means, k-nearest neighbours, decision trees, random forests, and neural networks.

1.4 Statistics: Prerequisite of machine learning

Statistics plays an important role in machine learning; statistical knowledge is applied to machine learning through predictive analysis. The following are examples of where statistics can be applied in machine learning:
- Problem designing
- Understanding data
- Data cleansing
- Choosing data
- Data organization
- Model assessment
- Model organization
- Model demonstration
- Model anticipation

2. Predictive analysis

As the name suggests, this analysis is used to make predictions about unknown incoming events.
From the given data sets, information is extracted to find patterns and anticipate future trends, outcomes, etc. The data sources are customer service data, survey data, digital marketing data, geographical data, etc. Different techniques from machine learning, artificial intelligence, data mining, and statistics are used. Generally, a predictive model is executed using time series regression, curve and surface fitting, etc. Predictive models are divided into two classes: parametric and non-parametric.

A predictive analytics model has three key components:
- Historical data: predictions about future trends are made using past data.
- Statistical modelling: regression analysis estimates the prediction using past data.
- Assumptions: predictive analytics builds on the simple assumption that future trends in data follow past trends.

Fig.3. Predictive Analysis

2.1 Creation of a predictive model

The following steps are involved:
- The data is cleaned: outliers are removed and missing data is treated.
- The predictive model is distinguished as a parametric or non-parametric model.
- The data is converted into a format suitable for the selected modelling algorithm, and the subset of data to be used is specified.
- Parameters are estimated on the training data sets.
- A goodness-of-fit test checks whether the chosen model is appropriate.
- The modelling accuracy of the data is tested.
- The model is used for prediction.

We will consider some examples of different predictive models.

Classification model: This model is generally used in answering yes-or-no questions which, on further analysis, guide conclusive action, so this model is used in industry.

Clustering model: In this model, the data is sorted into different groups based on similar attributes.
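The clustering model just described, sorting data into groups by similar attributes, can be sketched with the k-means algorithm named earlier in the article. The following is a minimal one-dimensional version (not from the article; the data points and starting centroids are invented): points are assigned to the nearest centroid, centroids are recomputed as cluster means, and the loop repeats until assignments stabilise.

```python
# Minimal k-means on 1-D data: assign each point to its nearest centroid,
# recompute each centroid as the mean of its cluster, repeat to convergence.

def k_means_1d(points, centroids, max_iter=100):
    clusters = [[] for _ in centroids]
    for _ in range(max_iter):
        clusters = [[] for _ in centroids]
        for p in points:
            nearest = min(range(len(centroids)), key=lambda i: abs(p - centroids[i]))
            clusters[nearest].append(p)
        new_centroids = [sum(c) / len(c) if c else centroids[i]
                         for i, c in enumerate(clusters)]
        if new_centroids == centroids:  # assignments stable -> converged
            break
        centroids = new_centroids
    return centroids, clusters

centroids, clusters = k_means_1d([1, 2, 3, 10, 11, 12], [1.0, 12.0])
print(centroids)  # -> [2.0, 11.0]
```

On this toy data the two groups {1, 2, 3} and {10, 11, 12} are recovered, with centroids at their means, which is exactly the "groups based on similar attributes" behaviour described above.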
Forecast model: This is a frequently used model, as a numeric value is estimated from the data.
Outlier model: This model identifies abnormal data points in the data set. It is useful in the retail and finance industries.
Time series model: This model consists of data having time as an input parameter.
Regression model: It finds the relationship between a dependent variable and one or more independent variables. This relationship is used to predict unknown values from known ones.
Decision trees: A decision tree is a tree-like model of decisions and their possible consequences, based on Boolean tests; problems are organized as a tree whose branches end in leaf nodes.
2.2 Probability: We can say that a fundamental principle of machine learning is probability. Probability theory is applied in many domains of machine learning: probability helps to make decisions from collected data, and these inferences are further used to predict future trends. Many techniques, like maximum likelihood estimation (MLE), are based on probability theory. MLE is used in linear regression, logistic regression, artificial neural networks, etc.
2.3 Regression: The technique of prediction on the basis of correlation is called regression, and there are several regression methods. Simple linear regression models the relationship between two variables and can be used, for example, to predict the daily maximum temperature, while multiple regression relates a response variable to several explanatory variables; it has long been used to predict weather conditions at a particular place. There are many more types of regression which can be used in applications of machine learning like weather forecasting, healthcare, etc. [9-14]
3. Applications of machine learning
In machine learning, computers, software and devices learn via cognition. Here are some of the most important day-to-day applications of machine learning.
• Virtual personal assistants – Virtual personal assistants like Siri, Alexa and Google Assistant are popular examples of machine learning.
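As a small illustration of the MLE idea mentioned above (synthetic data, not from the paper): for a Gaussian model, the log-likelihood is maximised in closed form by the sample mean and the (biased) sample variance, and we can check numerically that moving the mean away from the estimate only lowers the likelihood.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic "daily maximum temperature" readings: true mean 25, std 2.
sample = rng.normal(loc=25.0, scale=2.0, size=10_000)

def log_likelihood(mu, sigma2):
    """Gaussian log-likelihood of the sample for parameters (mu, sigma^2)."""
    n = len(sample)
    return (-0.5 * n * np.log(2 * np.pi * sigma2)
            - 0.5 * np.sum((sample - mu) ** 2) / sigma2)

# Closed-form maximum likelihood estimates for a Gaussian.
mu_hat = sample.mean()
sigma2_hat = np.mean((sample - mu_hat) ** 2)

# The MLE really is a maximum: nudging mu off mu_hat lowers the likelihood.
assert log_likelihood(mu_hat, sigma2_hat) > log_likelihood(mu_hat + 0.5, sigma2_hat)
assert log_likelihood(mu_hat, sigma2_hat) > log_likelihood(mu_hat - 0.5, sigma2_hat)
```

The same principle underlies least-squares linear regression: under Gaussian noise, minimising the squared error is exactly maximising the likelihood.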
A question is asked by voice, and the assistant provides information by recalling related queries or sends a command to other electronic devices to collect information. Virtual assistants are deployed in smart speakers, smart phones and mobile apps.
• Traffic prediction – GPS navigation services are used to predict traffic. A map of current traffic is built using the current locations and velocities of vehicles, which are stored at a central server that helps to manage traffic. Using machine learning, congestion can be found on the basis of daily experience.
• Video surveillance – Video surveillance used to be a tedious job for human attendants. Nowadays a system can detect crimes before they happen by finding the unusual behaviour of a person; this can be tracked by the system, which then alerts the human attendants.
• Healthcare – With the help of wearable sensors and devices, real-time patient information such as heart beat, blood pressure and other vital parameters can be collected. This type of information can be used by doctors and medical experts to analyse the health condition of an individual and to predict future ailments.
• Digital photography – In digital photography a matrix is used to represent the image. Operations on the image such as shearing, cropping, scaling, rotation and translation are all described using the notation and operations of Linear Algebra.
These are some of the applications of machine learning. Here we focus on the application of weather forecasting.
4. Weather Forecasting
The huge change in climate due to global warming has made weather forecasting a trending research topic, so monitoring and prediction of the weather is a very important task nowadays. Traditional techniques of weather forecasting use large, complex models of physics, in which different atmospheric conditions are monitored over a long period of time.
These conditions are often unstable because of disturbances of the weather system, which results in inaccurate forecasts. Samples of the present weather are taken, and the future state is predicted using numerical algorithms from fluid dynamics and thermodynamics. These equations are nonlinear, so numerical methods are used to find approximate solutions, and different models use different solution methods. The system of ordinary differential equations used by the physical model is unstable under disturbances and under uncertainties in the initial measurements of the atmospheric conditions. So this whole process is quite complex and can cause faulty weather predictions. Here we introduce machine learning, which is relatively robust to disturbances and doesn't require a complete understanding of the physical processes that govern the atmosphere. Therefore, machine learning is an effective alternative to physical models in weather forecasting. An automatic weather station (AWS) is an automated version of the traditional weather station, in which measurements can be taken in remote areas without human interaction. Due to rapid technological development in the area of sensors and measuring systems, it has become very easy to read natural parameters like temperature, humidity and rainfall by using smart sensors such as temperature and humidity sensors. By utilizing smart sensors to check the weather conditions, parameters can be measured and stored accurately and sent to a base station using a wireless communication facility. After the parameters are received from a particular location, the role of Mathematics and Statistics starts.
The weather forecasting is divided into two steps. The first step is mathematical modelling of the physical processes happening in the atmosphere. This includes Numerical Weather Prediction (NWP), based on integration of the basic numerical equations, which predicts the weather from current conditions using mathematical models of the atmosphere and oceans. Most atmospheric models are numerical, i.e. they discretize the equations of motion, and they can predict microscale phenomena. In the second step, these data are used to train simple machine learning models which can predict correct weather conditions for the next few days. This includes statistical methods like regression and predictive analysis. Here two techniques are used: linear regression and a variation of functional regression. The model is checked for efficiency and can be improved using parameter tuning and cross validation. After that, the algorithm gives the current weather conditions and predicts the next day's weather.
Fig.4. Weather Forecasting
Conclusion: Machine learning is a continuously evolving, revolutionary field of computer science. In today's world, real-time problems are solved by machine learning by storing, manipulating, extracting and retrieving data from large sources. In this paper we have focused on how a machine learning model can be efficiently and effectively designed using Electronics, Mathematics and Statistics. In day-to-day life we use many applications of machine learning, such as Google Translate and Google directions.
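The second (machine learning) step of the two-step scheme described in Section 4, fitting a simple regression on past observations and validating it by cross-validation, can be sketched as follows. The data here are a synthetic stand-in for AWS measurements; the feature choice (the previous two days' temperatures) and the 5-fold split are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic daily-temperature series: 365 days of a seasonal cycle plus noise.
days = np.arange(365)
temp = 15 + 10 * np.sin(2 * np.pi * days / 365) + rng.normal(0, 1, 365)

# Supervised examples: predict tomorrow's temperature from the last two days
# (a constant column provides the intercept term).
X = np.column_stack([np.ones(363), temp[:-2], temp[1:-1]])
y = temp[2:]

# 5-fold cross-validation of an ordinary least-squares regression model.
folds = np.array_split(np.arange(len(y)), 5)
errors = []
for fold in folds:
    train = np.setdiff1d(np.arange(len(y)), fold)
    beta, *_ = np.linalg.lstsq(X[train], y[train], rcond=None)
    errors.append(np.mean(np.abs(X[fold] @ beta - y[fold])))
cv_mae = float(np.mean(errors))  # mean absolute error across held-out folds
```

The cross-validated error estimates how the model would perform on unseen days; parameter tuning (e.g. how many lagged days to use as features) would be chosen to minimise this error.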
Fig. 4 depicts the pipeline: smart sensor network (temperature and humidity sensors) → wireless communication system (transmitter) → numerical weather prediction (mathematical model of the atmosphere and ocean) → machine learning model (linear regression, variation of functional regression) → model evaluation and optimization (cross validation, parameter tuning) → prediction (current weather parameters, tomorrow's weather forecast).
Many of us use these applications without knowing how machine learning is blended with subjects like Mathematics, Statistics and Electronics. We have discussed the example of weather forecasting, a complex task, to illustrate the capabilities of the above-mentioned subjects. It is important to note that the algorithms, methodologies and methods used by machine learning to solve problems will continue to change. In short, machine learning has the ability to learn from data and adapt to changing situations with high accuracy, speed and precision.
References:
[1] https://en.wikipedia.org/wiki/Machine_learning
[2] https://machinelearningmastery.com/supervised-and-unsupervised-machine-learning-algorithms/
[3] https://towardsdatascience.com/the-mathematics-of-machine-learning-894f046c568
[4] https://medium.com/analytics-vidhya/role-of-mathematics-in-machine-learning-afc75101f877
[5] https://www.newgenapps.com/blog/machine-learning-vs-predictive-analytics/
[6] https://machinelearningmastery.com/statistical-methods-in-an-applied-machine-learning-project/
[7] https://www.mathworks.com/discovery/predictive-modeling.html
[8] https://www.logianalytics.com/predictive-analytics/predictive-algorithms-and-models/
[9] https://www.conceptatech.com/blog/best-data-science-methods-for-predictive-analytics
[10] https://www.linkedin.com/pulse/regression-analysis-weather-forecasting-chonghua-yin
[11] https://www.datasciencecentral.com/profiles/blogs/understanding-the-applications-of-probability-in-machine-learning
[12] https://en.wikipedia.org/wiki/Automatic_weather_station
[13] https://en.m.wikipedia.org/wiki/Atmospheric_model
[14] https://www.analyticsvidhya.com/blog/2017/09/common-machine-learning-algorithms/
work_frnu6mq3tjfpvllmo5akfvsotu ---- doi:10.1016/j.ijsu.2005.10.004 International Journal of Surgery (2005) 3, 254-257 www.int-journal-surgery.com Uses and abuses of digital imaging in plastic surgery S.A.
Reza Nouraei a, Jim Frame b, Charles Nduka c,* a Department of Ear Nose and Throat Surgery, Charing Cross Hospital, London W6 8RF, United Kingdom; b Springfield Hospital, Chelmsford CM1 7GU, United Kingdom; c Department of Biosurgery and Surgical Technology, Academic Surgical Unit, Imperial College School of Science, Technology and Medicine of St. Mary's, London W2 1NY, United Kingdom
KEYWORDS: Digital imaging; Image manipulation; Plastic surgery
Abstract. Background: Surgeons rely extensively on photographic communication for documenting surgical results, teaching and research, and obtaining informed consent from patients. With the advent of digital photography and the widespread availability of sophisticated image manipulation software, the potential for committing digital fraud cannot be discounted. Methods: Ten 'before' and 'after' plastic surgical photographs were selected, and a number of them were digitally enhanced using standard desktop software by a non-expert in digital photography. A panel of 10 consultant plastic surgeons was asked to judge which, if any, of the images had been digitally manipulated. Results: Expert assessment had a sensitivity of only 12% in identifying digitally manipulated images. Furthermore, there was poor interobserver agreement, with an Intraclass Correlation Coefficient of 0.39. Conclusion: Digital fraud is easy to commit and difficult to detect. Furthermore, a number of inadvertent and simple image manipulation functions can also amount to misrepresentation. There may be scope for cooperation within editorial circles to set standards for the submission of digital photographs. Surgeons also need to be aware of the potential for misrepresentation of information through digital image manipulation and to exercise caution in the communication of digital photographic information. © 2005 Surgical Associates Ltd. Published by Elsevier Ltd. All rights reserved.
* Corresponding author. Fax: +44 870 4580775.
E-mail address: charles.nduka@qvh.nhs.uk (C. Nduka).
Introduction
Plastic surgery is a visual speciality and relies implicitly on photographs for documentation of surgical results, for teaching and research, and as an aid to obtaining the informed consent of patients. The past decade has seen a revolution in digital technology, with high-resolution digital cameras and sophisticated image manipulation software now readily available at reasonable expense. These advances provide exciting new opportunities for better documentation and communication of surgical results,1 but also raise the unwelcome spectre of fraud. Digital image manipulation is not fraudulent per se, and indeed disciplines like diagnostic radiology are much aided by it,2,3 but in plastic surgery, manipulating an image in such a way that it gives a more favourable impression of scarring or the aesthetic form is at best misleading. These considerations are particularly pertinent today, as many academic journals now accept digital images for publication purposes, and the visual medium, especially the world wide web, is increasingly being used to communicate directly with the public. We investigated the feasibility and consequences of performing 'desktop plastic surgery' on standard plastic surgical photographs using a personal computer and a widely-used image manipulation software package.
Methods
With the informed written consent of the participants, 10 before and after plastic surgical photos, including breast reduction and reconstruction, rhinoplasty and scar revision, were obtained.
The images were digitised using a high-resolution scanner, with the same image acquisition settings used for all photographs. Five of these images were then manipulated using a readily-available image manipulation software package (Paint Shop Pro 6, Jasc Software Inc., USA). Examples of the manipulations performed included reducing the prominence of postoperative scarring or 'performing' complete 'desktop plastic surgery' on preoperative photographs (Fig. 1). All of these digitised images, including the non-manipulated ones, were then reprinted on 5" × 7" photographic paper by the medical photography department and were thus 'converted back' to 'hard copy' photographs. A panel of 10 consultant plastic surgeons independently reviewed these images. The surgeons were told that none, some or all of these images may have been digitally manipulated. To further safeguard surgeon objectivity, they were told before viewing these photographs that this series of 10 was one of a number of series of photographs they would be asked to review, within which manipulated images were randomly inserted, and they should therefore make no assumptions about the presence or absence of manipulated images in the particular series they were reviewing. The surgeons were simply asked to identify the manipulated photographs and state in which way they thought the images had been altered. The sensitivity of surgeon assessment as a method of identifying fraudulent photographs was calculated, and the concordance between the assessments of different surgeons in identifying manipulated photos was obtained using the Intraclass Correlation Coefficient.
Results
Fig. 2 shows an example of a manipulated photograph used in this study. There were five manipulated and five un-manipulated photographs in the series. Consultant plastic surgeons correctly classified all un-manipulated images as such, but were
Figure 1 Breast augmentation surgery.
The postoperative photograph has been manipulated to reduce scar prominence.
Figure 2 An example of Paint Shop Pro being used to alter a plastic surgical image.
unable to correctly identify manipulated photographs, classifying the majority of them as un-manipulated. Expert assessment had an overall sensitivity of 56%, and a sensitivity of 12% in correctly identifying a manipulated photograph. As there were no "false positive" cases (i.e. un-manipulated images being identified as manipulated), we were unable to calculate specificity. An Intraclass Correlation Coefficient of 0.39 for correct identification of manipulated images was obtained, indicating a poor degree of concordance between different surgeons.
Discussion
Digital fraud is easy to commit and difficult to detect. This has important consequences for plastic surgery, which relies more than most other specialities on photography for the documentation of surgical results.4 Photographic misrepresentation is of course as old as photography itself, having been used, inter alia, to prove the existence of the paranormal in the form of psychic photography, and for political propaganda. What has changed in the recent past is that, with the advent of digital technology, the ability to perform quite sophisticated image manipulation has moved from the dark room and the domain of a few highly skilled technicians to any personal computer and any person with a reasonable knowledge of information technology.5,6 The necessary software is readily available, and the knowledge to perform image manipulation can be easily acquired from online tutorials, by purchasing books on digital photography, and from an increasing number of tutorials in the medical literature.3-12 Importantly, photographic misrepresentation is not confined to post-acquisition image manipulation.
For example, changing the camera's flash settings or the ambient light can radically change the characteristics of the images. The resulting effect is that two consecutive photographs, taken seconds apart, could be convincingly presented as the before and after results of laser facial rejuvenation.7 Furthermore, different digital cameras have different characteristics, and changing camera could quite easily skew photographic representation.13 It is important also to note that photographic misrepresentation is not a problem confined to plastic surgery. For example, changing the camera's flash settings or altering the colour characteristics of the image can significantly alter the photographic appearance of a segment of bowel when a photograph is taken to document appearance in the context of a strangulated hernia. However, a reactionary move back to film photography is neither useful, nor would it do much to combat fraud, as it is now quite easy to print 'hard copies' of digital photographs, as was the case in our study. What then can be done to counter digital fraud? The first step is to raise awareness of it within surgical and editorial circles. The best time to recognise a manipulated image is when it is in high-resolution digital format (i.e. when it has been submitted for publication); it is rather more difficult to recognise a manipulated image from the single-column printed version. The British Journal of Plastic Surgery has set out detailed requirements for submission of surgical photographs, requiring them not to have been "altered or retouched in any way". Furthermore, it requires that "before and after photographs of patients should be standardized in terms of size, position and lighting". Plastic and Reconstructive Surgery requires that "no photographs, digital or otherwise, should be substantially modified".
Similar requirements could not, however, be found in the 'Guide to Authors' sections of most otolaryngology or maxillofacial journals (where plastic surgical research is also considered for publication), nor in the instructions to authors submitting research to many of the general surgical journals. There may be scope for editors of journals to define and agree upon acceptable levels of image adjustment, and to require a declaration that submitted images have not been digitally manipulated. It is important to realise also that not all image manipulations are misrepresentations. It may be very legitimately necessary to 'crop' a picture, remove patient labels, cover eyes or add arrows and annotation to an image. It may similarly be considered a legitimate use of digital photography to remove a distracting background from a surgical image to enhance clarity. The difficulties arise with the use of 'airbrush' and 'colouring' tools, and with global changes to image contrast, light settings or colour and saturation. In our opinion, manipulation of localised areas of the image, for instance to reduce scar prominence, is clearly unacceptable, while changes to image size and cropping are acceptable as long as important information is not 'cropped out'. We also consider making global changes to image characteristics, such as changing contrast and colour settings, unacceptable, given that such changes can have differential effects on different areas of the image. For example, in a dark-skinned patient, careful changes to colour saturation channels can lead to changes in the colour contrast between the scar and adjacent areas, with the net effect of reducing scar prominence. This study aims to raise awareness within the surgical research community of the fine line between improving image clarity and inadvertent misrepresentation of surgical results.
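For readers who want to reproduce the headline figures in the Results section, the percentages follow from simple counts. The counts below are inferred from the reported percentages (10 raters, 10 images, 5 per class); they are not taken from the raw study data.

```python
# Counts inferred from the reported results: 10 raters each judged
# 5 un-manipulated and 5 manipulated photographs (50 judgments per class).
judgments_per_class = 10 * 5

correct_unmanipulated = 50  # every un-manipulated image was classified correctly
correct_manipulated = 6     # only 6 of 50 manipulated-image judgments were correct

# Sensitivity for manipulated images: 6/50 = 12%.
manipulated_sensitivity = correct_manipulated / judgments_per_class

# Overall proportion of correct judgments: (50 + 6)/100 = 56%.
overall_sensitivity = (correct_unmanipulated + correct_manipulated) / (2 * judgments_per_class)
```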
Standardization of plastic surgical views,4,7,11,13,14 using the same camera to obtain before and after photographs13 and ensuring that all images are obtained under similar lighting conditions7 and to the same scale1 would go a long way toward obviating the need to perform post-acquisition image manipulation. Such manipulations, if at all necessary, should then be restricted to removing patient labels, adding annotation, making a photograph unrecognizable if necessary, and cropping unnecessary background. In order to maintain the trust of our patients and the public, plastic surgeons, like all doctors, must adhere to high standards of probity. The use of digital images in plastic surgery has considerable advantages, and the ease with which it can be abused must not be allowed to undermine this valuable method of communication.
References
1. Russell P, Nduka C. Digital photography for rhinoplasty. Plast Reconstr Surg 2003;111:1366.
2. Corl FM, Kuszyk BS, Garland MR, Fishman EK. 3D volume rendering as an anatomical reference for medical illustration. J Biocommun 1999;26:2-7.
3. Corl FM, Gerland MR, Lawler LP, Fishman EK. A five-step approach to digital image manipulation for the radiologist. Radiographics 2002;22:981-92.
4. Rhodes ND, Southern SJ. Digital operation notes: a useful addition to the written record. Ann Plast Surg 2002;48:571-3.
5. Camarena LC, Guerrero MT. Digitization of photographic slides: simple, effective, fast, and inexpensive method. Ann Plast Surg 2002;48:323-7.
6. Benz C. Digital photography: exposures, editing images, and presentation. Int J Comput Dent 2003;6:249-81.
7. Niamtu J. Image is everything: pearls and pitfalls of digital photography and PowerPoint presentations for the cosmetic surgeon. Dermatol Surg 2004;30:81-91.
8. Niamtu J. The power of PowerPoint. Plast Reconstr Surg 2001;108:466-84.
9. Niamtu J. Techno pearls for digital image management. Dermatol Surg 2002;28:946-50.
10. Sandler J, Murray A.
Manipulation of digital photographs. J Orthod 2002;29:189-94.
11. Papel ID, Jiannetto DF. Advances in computer imaging/applications in facial plastic surgery. Facial Plast Surg 1999;15:119-25.
12. Furness PN. The use of digital images in pathology. J Pathol 1997;183:253-63.
13. Galdino GM, Vogel JE, Vander Kolk CA. Standardizing digital photography: it's not all in the eye of the beholder. Plast Reconstr Surg 2001;108:1334-44.
14. Becker DG, Tardy ME. Standardized photography in facial plastic surgery: pearls and pitfalls. Facial Plast Surg 1999;15:93-9.
work_fvfsw3slsbcopcobqwtvebo23e ---- Digital dental photography. Part 2: purposes and uses Digital dental photography.
Part 2: purposes and uses
I. Ahmad1
Although the primary purpose of using digital photography in dentistry is for recording various aspects of clinical information in the oral cavity, other benefits also accrue. Detailed here are the uses of digital images for dento-legal documentation, education, communication with patients, dental team members and colleagues, and for portfolios and marketing. These uses enhance the status of a dental practice and improve delivery of care to patients.
The primary purpose of digital dental photography is recording, with fidelity, the clinical manifestations of the oral cavity. As a spin-off, secondary uses include dento-legal documentation, education, communication, portfolios and marketing. Each of these uses enhances and elevates the status of a dental practice as well as improving delivery of care to patients. Whether the use of dental photography is solely for documentation or for other purposes, before taking any pictures it is essential to obtain written consent for permission and retain confidentiality. Unless the patient has unusual or defining features such as diastemae, rotations etc, it is difficult for a layman to identify an individual by most intra-oral images, and hence confidentiality is rarely compromised. However, extra-oral images, especially full facial shots, can and do compromise confidentiality, and unless prior permission is sought these types of images should not be undertaken. This is also applicable for dento-facial images that include the teeth, lips and smiles, which are often unique and reveal the identity of patients. A standard release form stating the intended use of the pictures can readily be drawn up, and when signed by the patient, should be retained in the dental records (Fig. 1).
While most patients will not object to dental documentation for the purpose of recording pathology and treatment progress, they may be more reticent if their images are used for marketing, such as on practice brochures or newsletters for distribution by a mailshot.
1General Dental Practitioner, The Ridgeway Dental Surgery, 173 The Ridgeway, North Harrow, Middlesex, HA2 7DF. Correspondence to: Irfan Ahmad. Email: iahmadbds@aol.com. www.IrfanAhmadTRDS.co.uk
Refereed Paper. Accepted 15 November 2008. DOI: 10.1038/sj.bdj.2009.366. ©British Dental Journal 2009; 206: 459-464
BRITISH DENTAL JOURNAL VOLUME 206 NO. 9 MAY 9 2009 459
PRACTICE
IN BRIEF
• Besides dento-legal documentation, dental photography has a host of applications for all dental disciplines.
• Communication with patients, technicians and specialists is enhanced with dental imagery, and photography is a vital tool for educating patients, staff and colleagues.
• Pictures of treatment carried out at the practice can be used for compiling portfolios, for marketing, and for construction of a practice website.
FUNDAMENTALS OF DIGITAL DENTAL PHOTOGRAPHY
1. Digital dental photography: an overview
2. Purposes and uses
3. Principles of digital photography
4. Choosing a camera and accessories
5. Lighting
6. Camera settings
7. Extra-oral set-ups
8. Intra-oral set-ups
9. Post-image capture processing
10. Printing, publishing and presentations
Fig. 1 It is imperative to ask patients to sign a copyright release form before taking pictures
Fig. 2 Dental images, similarly to radiographs, become part of the patient's dental records
Fig. 3 Numerous media are available to store images, eg CD, DVD or flash drives
© 2009 Macmillan Publishers Limited. All rights reserved.
DENTAL DOCUMENTATION
Dental images, similarly to radiographs or other imaging such as CT scans, become part of the dental records and should be respected accordingly (Fig. 2). Nowadays many media are available for image display and storage, including prints, computer hard drives, discs and memory cards or other back-up devices (Fig. 3). While this plethora of methods allows flexibility and convenience, it also demands added responsibility for ensuring that discs or memory cards do not go astray. Each medium has its advantages and limitations. A printed photograph is ideal for educating patients about a specific treatment modality, or for showing the current state of their dentition and subsequent improvement after therapy. However, prints are not a good method for archiving. On the other hand, electronic storage is preferred for permanent archiving and retrieval as it is environmentally friendlier, but is more cumbersome and not readily available to hand compared to prints. The chosen medium is a personal preference and varies for each practice. Fully computerised surgeries may opt to store patients' images with their dental treatment details, while paper-based surgeries may prefer photographic prints for easier access. Dental documentation can be divided into the following categories:
1. Examination, diagnosis, treatment planning
2. Progress and monitoring
3. Treatment outcomes.
It is worth remembering the proverbial adage, 'a picture really is worth more than a thousand words', especially if one has to type them.
A series of pre-operative images is not only helpful for recording a baseline of oral health, but is invaluable for arriving at a firm diagnosis and offering treatment options to restore health, function and aesthetics (Fig. 4). Recording pathology is also a valuable reason, but a photographic record also serves a constructive purpose for many other disciplines, for example analysing facial profiles and tooth alignment for orthodontics, assessing occlusal disharmonies, deciding methods of prosthetic rehabilitation for restoring tooth wear, and observing gingival health and periodontal pocketing or ridge morphology prior to implant placement, to name a few (Figs 5-8). In the field of forensic dentistry, photographic documentation is an essential piece of evidence. Similarly, taking pictures for suspected cases of child abuse is also indispensable proof.

Progress and monitoring
The second use of documentation is for monitoring the progress of pathological lesions or the stages of prescribed dental treatment. It is obviously essential to monitor progress of soft tissue lesions to ensure that healing is progressing according to plan. If a lesion is not responding to a specific modality, assessment can be useful for early intervention with alternative treatment options rather than waiting for protracted intervals that could exacerbate the condition.

Fig. 4 A set of pre-operative images is ideal for examination, diagnosis and treatment planning
Fig. 5 Facial profile pictures are useful for analysis during a course of orthodontic treatment
Fig. 6 Tooth wear requiring replacement of lost enamel and dentine
Fig. 7 Periodontal pocketing
Fig. 8 Assessing ridge morphology prior to treatment planning for implants
Other uses include tooth movement with orthodontic appliances, gingival health after periodontal or prosthetic treatment, and soft tissue healing and integration following surgery or gingival grafts (Figs 9-10). Visual documentation also emphasises to patients the need for compliance to regain oral health, eg adhering to oral hygiene regimes or dietary recommendations.

Treatment outcomes
Besides achieving health and function, which are relatively objective goals, the outcome of elective treatments such as cosmetic and aesthetic dentistry is highly subjective. Aesthetic dentistry is one of the major branches of dentistry that can produce ambivalent results. In these instances, if dental photography is not routinely used as part of the course of treatment, it is a recipe for disaster and possible future litigation. Accurate and ongoing documentation is a prerequisite for ensuring that the patient, at the outset, understands the limitations of a particular aesthetic procedure. In addition, if the patient chooses an option with dubious prognosis, or against clinical advice, photographic documentation is a convincing defence in court.

COMMUNICATION

Patient
Most patients are not dentally knowledgeable and will benefit from explanations of various dental diseases, their aetiology, prevention and amelioration. A verbal explanation alone may be confusing or even daunting for a non-professional, but when a pictorial representation is included it can be elucidating and has a lasting impact. For example, many individuals suffer from some form of periodontal disease, and showing pictures ranging from mild gingivitis to refractory periodontitis leaves an ever-lasting impression, informing the patient of the potential hazards of this insidious disease (Figs 11-15). In addition, most patients are oblivious to advances in dental care, for example all-ceramic life-like crowns or implants to replace missing teeth. Once again, a visual presentation is invaluable so that the patient can judge the benefits, as well as pitfalls, of these relatively novel treatment options (Figs 16-18). Furthermore, before informed consent can be obtained, the patient needs to be presented with treatment options, together with advantages and disadvantages of each proposed modality.

The presentation of case studies can be as simple as showing pictures, either prints or on a computer monitor, or using advanced methods such as software manipulation and simulation of what is achievable with contemporary dental therapy. If software manipulation is used, showing virtual changes, say to a smile, it is important to emphasise to the patient that the manipulation is only for illustrative purposes and what is seen on a monitor screen may not be possible in the mouth. Also, giving 'before' and 'after' software simulations should be resisted, as these become a legal document that the recipient may refer to if the outcome is not as depicted in the images.

Fig. 9 Inflamed free gingival margins around defective crowns on central incisors
Fig. 10 Healthy free gingival margins after a week of temporisation with acrylic crowns
Fig. 11 The benefits of scaling and polishing for the teeth and gingivae are clearly evident in this image
Fig. 12 Refractory periodontitis in a diabetic patient
Fig. 13 Gross calculus build-up in a patient whose first dental visit was at the age of 40
Fig. 14 The patient in Figure 13 after scaling and polishing teeth
Fig. 15 Hopeless prognosis due to periodontal destruction caused by calculus build-up
Fig. 16 Pre-operative: missing right central incisor
Fig. 17 Zirconia abutment screwed onto implant
Fig. 18 Post-operative: ceramic implant-supported crown to replace missing right central incisor
Staff
In a similar vein to patients, the entire dental team can also benefit from seeing treatment sequences, and be better prepared to answer patient queries. Furthermore, new staff can appreciate the protocols involved in complex restorative procedures, while existing members can learn about new techniques based on the latest scientific breakthroughs before they are incorporated into daily practice. Dental education is invaluable for staff members to play their roles within a team and stresses their responsibilities for effective communication, cross-infection control and keeping abreast of changing ideas and paradigm shifts.

Academic
Beyond patient and staff education, photography is an integral part of lecturing for those wishing to pursue the path of academia (Fig. 19). In addition, if a clinician desires to publish postgraduate books or articles, either now or in the future, meticulous photographic documentation is a must. There are innumerable publications, ranging from high-level academia to anecdotal dental journals. Whichever appeals to an individual is a personal choice, but having a practice or dentist profiled or published in the dental literature adds kudos to a practice (Fig. 20). Also, local newspaper features are reassuring for existing patients and promote the surgery to potential new clients.

Specialists
If referral to a specialist is necessary, either for further treatment or a second opinion, attaching a picture of the lesion or pre-operative status is extremely helpful. This saves time trying to articulate findings of a visual examination and also allows the specialist to prioritise appointments, particularly in cases of suspected pre-cancerous or malignant lesions. Alternatively, the images can be relayed via email attachments, a CD or DVD.

Dental technician
Communication is also vital between clinician, patient and dental technician.
This is particularly relevant to aesthetic dentistry, which can be trying for all concerned. As previously mentioned, aesthetics is not a clear-cut concept. Therefore, if patients' wishes are not effectively conveyed to the ceramist, who after all is making the prostheses, disappointment is inevitable. The best way to mitigate this eventuality is by forwarding images of all stages of treatment to the ceramist, together with the patient's expectations and wishes. Photographs can be traced, or marked with indelible pens, to communicate salient features such as shape, alignment, characterisations, regions of translucency or defining features such as mamelons, banding, calcification, etc. Also, taking pictures at the try-in stage allows the ceramist to visualise the prosthesis in situ in relation to soft tissues and neighbouring teeth, as well as to the lips and face. At this stage, alterations can change the shape, colour, alignment, etc, before fitting the restoration (Fig. 21), which obviously avoids the post-operative dissatisfaction that can be embarrassing, frustrating and costly if a remake is the only reparative option.

Fig. 19 Dental photography is an integral part of academic teaching
Fig. 20 Articles in a dental journal add kudos to a practice

PORTFOLIOS
Building a practice portfolio of clinical case studies is time consuming but well worth the effort. Some uses have already been mentioned, such as education, and others, eg marketing, are discussed below. The purpose of showing clinical photographs to patients is twofold: firstly, education about a particular dental treatment option and secondly, convincing sceptics about dental care, or ambivalent patients, regarding choice of practices that can deliver a proposed treatment plan.
While explanations accompanied by pictures and illustrations from dental journals and books are satisfactory for educating patients, they are not convincing evidence as to whether or not a clinician can deliver what is shown in the textbooks. However, pictures taken of patients at the practice who have been successfully treated carry credence and support claims for performing a specific procedure.

A useful starting point is collating sequences of different dental restorations, eg crowns or implants. Over a period of time, examples of every treatment carried out at a practice can be documented and subsequently used for educating patients, informing them of the benefits and pitfalls of a given therapy. A verbal explanation, of say implants, may be inadequate for patients to fully appreciate the time and effort necessary for achieving successful results. But a visual clinical sequence explains the complexities of advanced treatments, and also helps to justify the expenses involved. After suitable training, educating patients can be delegated to another member of the dental team, eg a nurse, hygienist or therapist, who can use the clinical case studies to accompany verbal explanations.

The method of presenting photographs is varied, including using prints or a computer monitor. If prints are chosen, they should be printed on high quality photographic paper, either by a photographic laboratory or an inkjet colour printer. An album or folder with separators, similar to a family album, is ideal for displaying different treatment sequences. An album is also an excellent coffee table book, which can be placed in the waiting room for patients to browse through. Using the digital option for presentation is more elaborate and stylish. The simplest is an electronic or digital picture frame (Fig. 22) loaded with a series of repeating pictures, which can be manually advanced while talking through a modality, or set to automatic transitions if placed in a waiting room or reception area. The most sophisticated option for creating a digital portfolio is using presentation software, eg Microsoft® PowerPoint™. This software allows greater flexibility compared to advancing from one image to the next. As well as adding text, visual effects and animations, sound or music can be included to enhance the presentation, making the whole educational experience memorable and exciting. Once prepared, the presentation can either be manually advanced for one-to-one sessions, or set to automatic display and placed in a communal area of the practice (Fig. 23).

Fig. 21 The crown on the right central incisor has a lower value compared to the natural left central incisor
Fig. 22 Digital picture frames
Fig. 23 An automated computer presentation is an excellent internal marketing tool

MARKETING
The last, and an important, use of dental photography is for marketing purposes. Before embarking on any form of advertising it is advisable to consult the GDC guidelines, and preferably have items checked by an indemnity organisation to ensure adherence to ethical and professional standards. Many stock images of teeth and dental practice can be obtained from a dental library or as Internet downloads. But as previously mentioned, using clinical pictures of practice patients enhances confidence for those who are ambivalent about which practice to attend. It also elevates the practice reputation by picturing a welcoming dental team, or showing treatment carried out at the practice.

Marketing can be divided into internal and external categories.
The former includes all forms of stationery, practice brochures and newsletters, while the latter includes newspapers, journals, books or web pages.

Internal marketing
A variety of stationery can benefit from depicting beautiful smiles of bright, clean and healthy teeth. Many dental practices incorporate pictures of teeth or smiles in their logos, and with artistic creativity these can be unique and defining trademarks. Examples of stationery include letterheads, appointment cards, estimate forms, post-operative instructions and business cards (Figs 24-25). In addition, practice merchandising such as customised toothbrushes, ball point pens, pads, bags or other gift items is another form of marketing that can incorporate practice logos.

A major piece of practice literature which lends itself to imagery is the practice brochure, leaflet or newsletter (Fig. 26). The choice of images is a matter of personal taste and can include pictures of the outdoor view of the premises, reception area, treatment and sterility rooms, gardens or even a patio waiting area for the summer months. It is always more welcoming if each of the practice views includes a smiling staff member, rather than an empty room, which is perceived as isolated, cold or an advertisement for dental surgery equipment or furniture. Other ideas are showing the entire practice team or faces of individual dental personnel. Clinical images of 'before' and 'after' pretty smiling faces are also always useful inclusions, or sequences showing stages of particular treatments such as crowns, fillings and implants. If clinical images are included, it is important to avoid imagery that is gruesome or off-putting to a layman. Images of surgical procedures, inflammation or haemorrhage are a few examples that obviously warrant exclusion.

Designing a practice brochure can either be assigned to a graphics company, or done in-house using numerous drawing software packages.
The market is awash with drawing and photo-editing software of varying complexity that can be utilised to create a bespoke brochure or newsletter. Many software packages have standard templates for a variety of stationery, which are relatively easily tailored by adding text and images. Some popular designing and graphics software packages are Adobe® Creative Suite, CorelDRAW®, QuarkXPress®, Pages and many word-processing packages, eg Microsoft® Word. All these applications have ready-made templates and, once the designing is finished, the files can be transferred to a printing house via email, CD or memory stick for proofing and a subsequent print run. Chapter 10 in the series details the stages involved in designing a practice brochure.

External marketing
Before the advent of the Internet, advertising in telephone directories, local newspapers, or even radio and television were the ideal channels. While these media are not obsolete, probably the most effective method today of getting a message across to a large audience is using the Internet. More and more households and businesses have access to the Internet, and using search engines such as Yahoo® or Google® is quicker than wading through heavy telephone directories.

If one is computer literate, it is relatively easy to design an in-house web page, using images similar to those on the practice brochure or newsletter. However, to construct a web page with an impressive design layout, with slick transitions and music, requires employing a professional web designer. In addition, the web page designer can advise on the best methods for obtaining hits for the site, plus a host of additional features (eg links) that ensure the investment is productive.
Although the initial cost may seem excessive, it is well worth investing in this form of advertising as it is without doubt the future, and the capital outlay can be readily recouped within a short period of time by referrals and/or new patients.

Fig. 24 A selection of practice stationery that can benefit by incorporating dental imagery
Fig. 25 Business cards can incorporate dental images for marketing the practice
Fig. 26 Practice brochures and newsletters with clinical images

Digital dental photography. Part 2: purposes and uses

----

Topsy-turvy: turning the counter-current heat exchange of leatherback turtles upside down

rsbl.royalsocietypublishing.org | Research

Cite this article: Davenport J, Jones TT, Work TM, Balazs GH. 2015 Topsy-turvy: turning the counter-current heat exchange of leatherback turtles upside down. Biol. Lett. 11: 20150592. http://dx.doi.org/10.1098/rsbl.2015.0592

Received: 8 July 2015. Accepted: 9 September 2015
Subject Areas: evolution
Keywords: heat exchange, counter-current, leatherback turtle, locomotory muscles
Author for correspondence: John Davenport, e-mail: j.davenport@ucc.ie
© 2015 The Author(s) Published by the Royal Society. All rights reserved.

Physiology

John Davenport1, T. Todd Jones2, Thierry M. Work3 and George H. Balazs2
1School of Biological, Earth and Environmental Sciences, University College Cork, North Mall Campus, Distillery Fields, Cork, Ireland
2NOAA Fisheries, Pacific Islands Fisheries Science Center, 1845 Wasp Boulevard, Building 176, Honolulu, HI 96818, USA
3US Geological Survey, National Wildlife Health Center, Honolulu Field Station, Honolulu, HI 96850, USA

Counter-current heat exchangers associated with appendages of endotherms feature bundles of closely applied arteriovenous vessels. The accepted paradigm is that heat from warm arterial blood travelling into the appendage crosses into cool venous blood returning to the body. High core temperature is maintained, but the appendage functions at low temperature. Leatherback turtles have elevated core temperatures in cold seawater and arteriovenous plexuses at the roots of all four limbs. We demonstrate that plexuses of the hindlimbs are situated wholly within the hip musculature and that, at the distal ends of the plexuses, most blood vessels supply or drain the hip muscles, with little distal vascular supply to, or drainage from, the limb blades. Venous blood entering a plexus will therefore be drained from active locomotory muscles that are overlaid by thick blubber when the adults are foraging in cold temperate waters. Plexuses maintain high limb muscle temperature and avoid excessive loss of heat to the core, the reverse of the accepted paradigm. Plexuses protect the core from overheating generated by muscular thermogenesis during nesting.

1. Introduction
Limb counter-current heat exchange arrangements have been identified in birds and mammals living under cold terrestrial or aquatic conditions [1-4]. Heat exchangers feature bundles of closely applied arterial and venous vessels. The accepted paradigm is that, under cold conditions, heat from warm arterial blood travelling into the appendage crosses into the cool venous blood returning towards the body, facilitating maintenance of core body temperature.
However, the corresponding appendage functions at low temperature [5,6].

Dermochelys coriacea is the sole living species of the chelonian family Dermochelyidae, which has a long history (ca 50 Myr) of foraging in cool water [7]. The largest of extant sea turtles, leatherbacks also have the longest fore- and hindlimbs [8]. Limb blades are essentially composed of manus and pes. Muscles that drive them are associated with the pectoral and pelvic girdles, humerus plus radius and ulna, and femur plus tibia and fibula [8]. Propulsion in water is produced by synchronous action of the foreflippers, with the hindlimbs acting as rudders [9] and perhaps elevators. On land, all limbs are involved in propulsion [10], and the hindlimbs are used to excavate nests. Adult leatherbacks have elevated core temperatures (25-27°C) in cold (10.9-16.7°C) surface seawater [11-13] and regularly dive into near-freezing water [14]. Whether the leatherback is endothermic [15] or gigantothermic [16] has been controversial because of low metabolic rate in adults. Current consensus is that they derive heat from exercise [13,17,18]. Leatherbacks have arteriovenous counter-current plexuses at the roots of all four limbs, which have been assumed to avoid heat loss from the body core [19].

Figure 1. (a) View of hindlimb from laterodorsal aspect. Skin and superficial connective and adipose tissue have been removed from thigh and tibial areas. Two dorsal hip muscles are identified. (b) Laterodorsal view of hindquarter vascular plexus. Iliotibialis and flexor tibialis muscles have been parted. Needle indicates artery curving from distal end of plexus towards hip along posterior surface of iliotibialis. Note numerous vessels leaving/entering plexus that supply/drain flexor tibialis and ventral hip muscles, plus muscles of digits. Note also that some vessels that supply/drain the flexor tibialis loop hipwards from their connection with the plexus. (c) Dorsolateral view of distal origin (indicated by needle) of plexus. Cut pelvis and vertebral column (with necropsy cut marks) provide orientation.

Leatherback turtles swim continuously when in water [20], even when foraging [13,21]. In cool temperate waters (14-15°C), Atlantic leatherbacks swim at 2.7 km h⁻¹ [13], similar to modal speeds of 2.0-3.0 km h⁻¹ recorded off the US Virgin Islands [22], where sea surface temperatures are about 28°C. Locomotory muscle performance is therefore little affected by the ranges of latitude and temperature routinely encountered, implying similar muscle temperatures in cold and warm environments. Muscular thermogenesis has not been studied directly in Dermochelys, but has been investigated in the pectoral muscles of smaller green turtles (Chelonia mydas [21]). In active adult green turtles, pectoral muscles were 8°C warmer than the sea; other tissues were not. There is indirect evidence that leatherback forelimb blades function at lower temperatures than more central tissues; cooled lipid samples taken from blade adipose tissue initiate crystallization at 11°C, whereas samples from carapace fat start to crystallize at 17-18°C [23]. Our study demonstrates that the vascular arrangements of the hindquarter plexus of Dermochelys result in a counter-current function opposite to that described for birds and mammals exposed to cold conditions.

2. Methods
Six juvenile leatherbacks (59.5-84.1 cm straight carapace length, 26.0-70.9 kg body mass) were collected as bycatch by observers (NOAA Fisheries, Pacific Islands Regional Office, Observer Programme) on longline fishing vessels operating in the equatorial Pacific. Frozen on death, they were in good post-mortem condition. Turtles were thawed 24 h before routine necropsy and histopathology to confirm cause of death [24] (drowning). The hindquarter plexus and its relationships with hindlimb musculature and hindlimb anatomy were investigated by gross dissection, tissue manipulation, histology and digital photography.

3. Results
Dissection of the hindquarter plexus of all six turtles revealed that its proximal end was at the pelvic girdle and that it ran deep within the hip muscles, alongside and posterior to the femur (figure 1a,b). It consisted of numerous bundles of closely applied veins and arteries interspersed with nerve bundles (figure 2a,b), as is normal in vertebrate limbs. No venous valves were seen. The two types of closely packed vessels are known to be arranged randomly, but with more veins than arteries [19]. The plexus was situated wholly within the core of the thigh musculature; it did not project beyond the knee joint into the lower limb and foot. Distally, most of the plexus broke up into less-packed vessels that supplied the hip muscles before the knee was reached. It was evident that some vessels that supply/drain the hip muscles loop proximally from the distal part of the plexus. In consequence, in the live animal, where the hip muscles are close to one another, the looped vessels will also be close to the plexus. This means that, upon circulating to the limb, some distally directed arterial blood in the plexus will flow proximally, whereas some venous blood will flow distally before entering the plexus and flowing proximally.

Figure 2. (a) Macrophotograph of cut end of plexus. Individual bundles (about 1 mm diameter) have tubular structures within them. (b) Cross section of part of plexus (scale bar 100 µm). Artery (a), vein (v), nerve bundle (n). Circles indicate areas of mixed arteries and veins. Veins are collapsed; arteries with thick smooth muscle walls have diameter about 25 µm. (c) Schematic diagram of function of hindlimb vascular plexus in Dermochelys coriacea. Black arrows indicate blood flow; white arrows indicate heat flow. Red indicates arterial supply; blue indicates venous drainage. Limb root blubber is not present in hatchlings, juveniles or nesting adult female turtles, but is characteristic of adult turtles on high-latitude feeding grounds [25].

4. Discussion
The anatomical arrangements of leatherback hindlimb plexuses appear incompatible with the accepted function [5,6,19] of counter-current heat exchangers. Instead of the limbs having lower temperatures than the body core, we believe that the exchangers function primarily to retain thermogenic heat within the locomotory muscles themselves (see figure 2c for schematic diagram), thus allowing the muscles to be kept warm enough to work effectively in cold water, even though hindlimb function is primarily for steering [9] and may be mostly isometric. This implies that the muscle temperatures will usually be above those of the core, and also that the muscles generate enough heat for some to be transferred to the core, where large body size and very effective insulation will combine to retain it and thus maintain a steady 25-27°C core temperature [11-13].
Bird tibiotarsal counter-current vascular arrangements are phylogenetically/structurally varied [2], some species having complex intermingled networks of arteries and veins (rete), others having arrangements in which a single artery is surrounded by counter-current veins (venae comitantes). The lower bird leg is largely composed of bones (tarsometatarsi), tendons and skin, with little muscle. In all cases, the exchanger is in the distal tibiotarsal region, so that returning cold blood is already warmed before it passes through the muscular femoral region. The intramuscular placement of the leatherback plexuses is the reverse of this arrangement.

Overheating is a risk for Dermochelys in the tropics. Adults shuttle between warm surface waters and cool deeper water, thereby moderating their body temperatures [26,27]. Nesting female leatherback turtles (which use their hindlimbs for locomotion and nest digging) have core body temperatures of about 32°C (around 8-10°C above air/sand temperatures and 2.5-5°C above surface seawater temperatures) [28]. We suggest that our model of counter-current heat exchange (figure 2c) will protect the core against hyperthermia by retaining heat generated by muscular thermogenesis within the limb musculature; enhanced blood perfusion of the turtle's skin (nesting turtles show flushed forearms/wrists, throats and undersides) will aid heat dispersion, working synergistically with plexus function.

In summary, the adult leatherback turtle exhibits a different form of endothermy in cold water from that exhibited by birds and mammals.
Instead of depending upon heat generated by central nutrient-derived thermogenesis in the liver, Dermochelys relies upon continuous exercise of the peripheral locomotory muscles to generate heat that keeps them warm and is mostly retained in the musculature by counter-current heat exchange. Endogenous heat trans- ferred from muscles to core is held there by thermal inertia and effective insulation, rather than by counter-current heat exchange. Ethics. All investigations were carried out upon dead leatherback turtles that had been accidentally caught (and drowned) by commer- cial fishers unconnected with any of our institutions. Their necropsies therefore posed no ethical problems whatsoever. No live animals were studied, and none were collected deliberately for our research. Data accessibility. No data were collected beyond the photographic material used in the paper. No supplementary material is referred to in the paper. Authors’ contributions. All four authors conducted the necropsies, pho- tography and histology in collaborative fashion following joint discussions of hypotheses to be addressed. J.D. led the analysis of the material and the writing of the various drafts of the manuscript, but all authors contributed to that analysis and writing ( plus responding to reviewers’ criticisms). Competing interests. We have no competing interests. Funding. J.D. received partial funding from the JIMAR Visiting Scientist Programme. Acknowledgement. We thank Sarah Alessi, Giulia Anderson, Shandell Brunson, Julia Davenport, Devon Francke, Shawn Murakawa and Marc Rice for technical help. J.D. acknowledges the good offices of T.T.J. and G.H.B. who invited him to participate in necropsies. We thank Dr Jeanette Wyneken and an anonymous reviewer for constructive criticism of an earlier version of the paper. Mention of products or trade names does not imply endorsement by the US Government. References 1. Tarasoff FJ, Fisher HD. 
1970 Anatomy of the hind flippers of two species of seals with reference to thermoregulation. Can. J. Zool. 48, 821–829. (doi:10.1139/z70-144)
2. Midtgard U. 1981 The rete tibiotarsale and arteriovenous association in the hind limb of birds: a comparative morphological study on counter-current heat exchange systems. Acta Zool. 62, 67–87. (doi:10.1111/j.1463-6395.1981.tb00617.x)
3. Heyning JE. 2001 Thermoregulation in feeding baleen whales: morphological and physiological evidence. Aquat. Mamm. 27, 284–288.
4. Blix AS, Walløe L, Messelt EB, Folkow LP. 2010 Selective brain cooling and its vascular basis in diving seals. J. Exp. Biol. 213, 2610–2616. (doi:10.1242/jeb.040345)
5. Scholander PF. 1955 Evolution of climatic adaptation in homeotherms. Evolution 9, 15–26. (doi:10.2307/2405354)
6. Scholander PF, Schevill WE. 1955 Counter-current vascular heat exchange in the fins of whales. J. Appl. Physiol. 8, 279–282.
7. Albright LB 3rd, Woodburne MO, Case JA, Chaney DS. 2003 A leatherback sea turtle from the Eocene of Antarctica: implications for antiquity of gigantothermy in Dermochelyidae. J. Vertebr. Paleontol. 23, 945–949. (doi:10.1671/1886-19)
8. Wyneken J. 2001 The anatomy of sea turtles. NOAA Technical Memorandum NMFS-SEFSC-470. See http://courses.science.fau.edu/~jwyneken/sta/.
9. Davenport J, Pearson GA. 1994 Observations on the swimming of the Pacific ridley, Lepidochelys olivacea (Eschscholtz, 1829): comparisons with other sea turtles. Herpetol. J. 4, 60–63.
10. Renous S, Bels V. 1993 Comparison between aquatic and terrestrial locomotions of the leatherback sea turtle (Dermochelys coriacea). J. Zool. 230, 357–378. (doi:10.1111/j.1469-7998.1993.tb02689.x)
11. Frair W, Ackman RG, Mrosovsky N. 1972 Body temperature of Dermochelys coriacea: warm turtle from cold water. Science 177, 791–793. (doi:10.1126/science.177.4051.791)
12. James MC, Mrosovsky N.
2004 Body temperatures of leatherback turtles (Dermochelys coriacea) in temperate waters off Nova Scotia, Canada. Can. J. Zool. 82, 1302–1306. (doi:10.1139/z04-110)
13. Casey JP, James MC, Williard AS. 2014 Behavioral and metabolic contributions to thermoregulation in freely swimming leatherback turtles at high latitude. J. Exp. Biol. 217, 2331–2337. (doi:10.1242/jeb.100347)
14. James MC, Davenport J, Hays GC. 2006 Expanded thermal niche for a diving vertebrate: a leatherback turtle diving into near-freezing water. J. Exp. Mar. Biol. Ecol. 335, 221–226. (doi:10.1016/j.jembe.2006.03.013)
15. Standora EA, Spotila JR, Keinath JA, Shoop CR. 1984 Body temperature, diving cycles, and movement of a subadult leatherback turtle, Dermochelys coriacea. Herpetologica 40, 169–176.
16. Paladino FV, O'Connor MP, Spotila JR. 1990 Metabolism of leatherback turtles, gigantothermy, and thermoregulation of dinosaurs. Nature 344, 858–860. (doi:10.1038/344858a0)
17. Bostrom BL, Jones TT, Hastings M, Jones DR. 2010 Behavior and physiology: the thermal strategy of leatherback turtles. PLoS ONE 5, e13925. (doi:10.1371/journal.pone.0013925)
18. Bostrom BL, Jones DR. 2007 Exercise warms adult leatherback turtles. Comp. Biochem. Physiol. A 147, 323–331. (doi:10.1016/j.cbpa.2006.10.032)
19. Greer AE, Lazell JD, Wright RM. 1973 Anatomical evidence for a counter-current heat exchanger in the leatherback turtle (Dermochelys coriacea). Nature 244, 181. (doi:10.1038/244181a0)
20. Eckert SA. 2002 Swim speed and movement patterns of gravid leatherback sea turtles (Dermochelys coriacea) at St Croix, US Virgin Islands. J. Exp. Biol. 205, 3689–3697.
21. Standora EA, Spotila JR, Foley RE. 1982 Regional endothermy in the sea turtle Chelonia mydas. J. Therm. Biol. 7, 159–165. (doi:10.1016/0306-4565(82)90006-7)
22. Heaslip SG, Iverson SJ, Bowen WD, James MC.
2012 Jellyfish support high energy intake of leatherback sea turtles (Dermochelys coriacea): video evidence from animal-borne cameras. PLoS ONE 7, e33259. (doi:10.1371/journal.pone.0033259)
23. Davenport J, Holland DL, East J. 1990 Thermal and biochemical characteristics of the fat of the leatherback turtle Dermochelys coriacea (L.): evidence of endothermy. J. Mar. Biol. Assoc. UK 70, 33–41. (doi:10.1017/S0025315400034172)
24. Work TM, Balazs GH. 2010 Pathology and distribution of sea turtles landed as bycatch in the Hawaii-based North Pacific pelagic longline fishery. J. Wildl. Dis. 46, 422–432. (doi:10.7589/0090-3558-46.2.422)
25.
Davenport J, Plot V, Georges JV, Doyle TK, James MC. 2011 Pleated turtle escapes the box – shape changes in Dermochelys coriacea. J. Exp. Biol. 214, 3474–3479. (doi:10.1242/jeb.057182)
26. Southwood AL, Andrews RD, Paladino FV, Jones DR. 2005 Effects of swimming and diving behavior on body temperatures of Pacific leatherbacks in tropical seas. Physiol. Biochem. Zool. 78, 285–297. (doi:10.1086/427048)
27. Wallace BP, Cassondra LW, Paladino FV, Morreale SJ, Lindstrom RT, Spotila JR. 2005 Bioenergetics and diving activity of internesting leatherback turtles Dermochelys coriacea at Parque Nacional Marino Las Baulas, Costa Rica. J. Exp. Biol. 208, 3873–3884. (doi:10.1242/jeb.01860)
28. Paladino FV, Spotila JR, O'Connor MP, Gatten RE Jr. 1996 Respiratory physiology of adult leatherback turtles (Dermochelys coriacea) while nesting on land. Chelonian Conserv. Biol. 2, 223–229.

Topsy-turvy: turning the counter-current heat exchange of leatherback turtles upside down

work_fyj7u2k3zbe37osnu6p7k2sv3a ----

This is a peer-reviewed, final published version of the following document: Scanlon, Peter H ORCID: 0000-0001-8513-710X, Malhotra, R., Greenwood, R H, Aldington, S J, Foy, C, Flatman, M and Downes, S (2003) Comparison of two reference standards in validating two field mydriatic digital photography as a method of screening for diabetic retinopathy. British Journal of Ophthalmology, 87 (10). pp. 1258-1263.
EPrint URI: http://eprints.glos.ac.uk/id/eprint/3309

EXTENDED REPORT

Comparison of two reference standards in validating two field mydriatic digital photography as a method of screening for diabetic retinopathy

P H Scanlon, R Malhotra, R H Greenwood, S J Aldington, C Foy, M Flatman, S Downes

See end of article for authors' affiliations. Correspondence to: Dr P H Scanlon, Gloucestershire Eye Unit, Cheltenham General Hospital, Sandford Road, Cheltenham GL53 7AN, UK; peter.scanlon@egnhst.org.uk. Accepted for publication 3 February 2003.

Br J Ophthalmol 2003;87:1258–1263

Aim: To compare two reference standards when evaluating a method of screening for referable diabetic retinopathy.
Method: Clinics at Oxford and Norwich Hospitals were used in a two centre prospective study of 239 people with diabetes receiving an ophthalmologist's examination using slit lamp biomicroscopy, seven field 35 mm stereophotography and two field mydriatic digital photography. Patients were selected from those attending clinics when the ophthalmologist and ophthalmic photographer were able to attend. The main outcome measures were the detection of referable diabetic retinopathy as defined by the Gloucestershire adaptation of the European Working Party guidelines.

Results: In comparison with seven field stereophotography, the ophthalmologist's examination gave a sensitivity of 87.4% (confidence interval 83.5 to 91.5), a specificity of 94.9% (91.5 to 98.3), and a kappa statistic of 0.80. Two field mydriatic digital photography gave a sensitivity of 80.2% (75.2 to 85.2), specificity of 96.2% (93.2 to 99.2), and a kappa statistic of 0.73. In comparison with the ophthalmologist's examination, two field mydriatic digital photography gave a sensitivity of 82.8% (78.0 to 87.6), specificity of 92.9% (89.6 to 96.2), and a kappa statistic of 0.76. Seven field stereo gave a sensitivity of 96.4% (94.0 to 98.8), a specificity of 82.9% (77.4 to 88.4), and a kappa statistic of 0.80. 15.3% of seven field sets, 1.5% of the two field digital photographs, and none of the ophthalmologist's examinations were ungradeable.

Conclusion: An ophthalmologist's examination compares favourably with seven field stereophotography, and two field digital photography performs well against both reference standards.

A mobile digital photographic screening programme was introduced in Gloucestershire in October 1998 covering all 85 general practices. A study was designed to formally evaluate the introduction of the screening programme, which used an ophthalmologist's examination as a reference standard.
There have been very few studies that have compared an ophthalmologist's examination using slit lamp biomicroscopy with seven field stereophotography. To the authors' knowledge, no published data exist to show that slit lamp biomicroscopy by an ophthalmologist experienced in retinal examination can compare favourably with seven field stereophotography as a reference standard, when assessing different methods of screening for diabetic retinopathy. The current study, on a preselected group of patients with a higher prevalence of referable retinopathy, was designed to validate the ophthalmologist's reference standard and two field digital photography against seven field stereophotography. The results are reported in this paper.

SUBJECTS AND METHODS

Study design

A two centre prospective evaluation study of 239 people with diabetes was carried out between December 2000 and July 2001. Subjects were recruited from the Oxford Eye Hospital diabetic retinopathy clinic, or the Bertram Diabetes Centre diabetic and eye clinic in Norwich. A patient information sheet was posted to all patients 1 week before, and informed consent was obtained at the time of their booked outpatient appointment. Ethics committee approval was obtained from the Oxford and Norwich ethics committees.

Screening process

All individuals with diabetes attending the above clinics were considered eligible for inclusion except if they were pregnant, under 18 years of age, known to have learning or significant physical disabilities, or unwell. On arrival, visual acuity was tested using a Snellen chart at 6 metres before dilating both pupils using one drop of tropicamide 1% and phenylephrine 2.5%, repeated not more than three times. In the Norwich clinic phenylephrine 2.5% was restricted to patients with blood pressure less than 180/90 and only given once. Patients were then reviewed and examined in the clinic by their ophthalmic or diabetological team as part of their booked outpatient visit.
This was followed by an ophthalmic examination by indirect slit lamp biomicroscopy using a 78D lens and direct ophthalmoscopy, performed by an experienced ophthalmologist (PS). Patients then underwent two field mydriatic digital photography using a 45 degree Canon CR5 retinal camera in Oxford and a Canon CR6 camera in Norwich. Both cameras had a Sony three chip video camera capturing an image of 768×568 pixel resolution at 24 bits colour depth. Patients then underwent seven standard field stereoscopic 35 mm slide photography using a Zeiss 30 degree retinal camera in Oxford and a Topcon 50X 35 degree retinal camera in Norwich.

Grading

All three methods of examination were graded independently. The ophthalmoscopic assessments were completed at the time of examination by PHS, the two field digital images by RM, and the seven field stereoscopic 35 mm slide photographs by the Retinopathy Grading Centre, London.

1258 www.bjophthalmol.com http://bjo.bmj.com

The examiner (PHS) was masked regarding each subject's history of diabetes mellitus, and the findings of current or previous ophthalmoscopic examinations, although he did, on occasion, have access to the patient's current visual acuity measurement. The digital and film graders were masked regarding each subject's history of diabetes mellitus, and the findings of current or previous ophthalmoscopic examinations. The first two methods were graded using the Gloucestershire grading form to define referable diabetic retinopathy (Table 1), and the seven field photographs were graded according to the Modified Airlie House final classification1 with a comparison table developed for analysis (Table 2). Referable DR was defined as maculopathy, moderate to severe non-proliferative, proliferative and advanced retinopathy as defined by categories 3–6 on the Gloucestershire grading form and/or categories D–G in the comparison table.
Image quality for the two field digital images was determined using the following criteria:

- Fully assessable: possible to see the small vessels of the temporal arcades with reasonable clarity
- Partially assessable: possible to see the large vessels of the temporal arcades with reasonable clarity
- Not assessable: the large vessels of the temporal arcades are blurred, or more than one third of the picture is blurred, unless sight threatening retinopathy is detected in the remainder.

The evaluation of the quality of the seven field images was performed to determine gradeability based on strict definitions of field definition, focus/clarity and stereoscopic effect as outlined in the Early Treatment Diabetic Retinopathy Study2 (ETDRS) manual of procedures. Seven field sets, including those defined as "ungradeable", were regraded to include the presence of haemorrhage or exudates less than 1 disc diameter from the foveola and any proliferative or advanced retinopathy, identified in the available images. This was carried out to provide comparative data with the Gloucestershire grading procedures for these features.

Statistical methods

The study was designed to include 250 patients, 100 of whom had referable retinopathy, in order to achieve a standard error in the estimate of sensitivity no wider than 4%; 150 patient controls were chosen in order to prevent bias in the ophthalmologist's examination result and in the photographic grading of the seven field photographs. The ophthalmologist and the Retinopathy Grading Centre did not know the percentage of referable retinopathy in the study population, but RM, who graded the two field digital images, was aware of this percentage, though not the identity of individual patients. Data were entered onto an Excel spreadsheet and downloaded into SPSS version 10 for data analysis.
Sensitivity and specificity, together with their 95% confidence intervals, and kappa values were calculated for the following comparisons:

- The ophthalmologist's examination compared with seven field stereophotography
- Two field digital photography compared with seven field stereophotography
- Two field digital photography compared with the ophthalmologist's examination.

Calculations were based on assessable images from the appropriate reference standard method. Unassessable images were then included in the sensitivity calculations when comparing other methods with the reference standard used.

RESULTS

Technical failure rate and image quality

Ophthalmoscopic examination was technically possible in all patients. Determination of retinal status was not possible in six eyes of three patients (1.3%) from the two field mydriatic digital images. A total of 151 eyes (31.6%) of seven field stereo photosets were technically unassessable using the strict quality criteria, and hence were not suitable for assignment of ETDRS retinopathy level. However, when the criteria were supplemented by the additional assessment of lesions lying within one disc diameter of the foveola or the presence of proliferative retinopathy, the technical failure rate reduced to 73 eyes (15.3%). All six eyes that were unassessable on two field digital photography had assessable seven field stereophotography using the above criteria. Five of the six eyes showed referable features both on seven field stereophotography and the ophthalmologist's examination.
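The sensitivity, specificity, normal-approximation 95% confidence interval, and kappa calculations described under Statistical methods can be sketched as follows. This is a minimal illustration using the cell counts later reported in Table 3 (seven field stereophotography as reference versus the ophthalmologist's examination); the function name and layout are illustrative, not code from the study, and the confidence intervals use a simple normal approximation, which may differ slightly from the method SPSS applied.

```python
import math

def screening_stats(tn, fp, fn, tp, z=1.96):
    """Sensitivity, specificity (with normal-approximation 95% CIs)
    and Cohen's kappa for a 2x2 screening table.
    Rows: reference standard (negative, positive);
    columns: test under evaluation (negative, positive)."""
    n = tn + fp + fn + tp
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)

    def ci(p, m):
        # Normal-approximation confidence interval for a proportion.
        half = z * math.sqrt(p * (1 - p) / m)
        return (p - half, p + half)

    # Cohen's kappa: observed agreement corrected for chance agreement.
    po = (tp + tn) / n
    pe = ((tn + fp) * (tn + fn) + (fn + tp) * (fp + tp)) / n ** 2
    kappa = (po - pe) / (1 - pe)
    return sens, ci(sens, tp + fn), spec, ci(spec, tn + fp), kappa

# Table 3 counts: 150 agreed negatives, 8 false positives,
# 31 false negatives, 216 agreed positives (405 assessable eyes).
sens, sens_ci, spec, spec_ci, kappa = screening_stats(tn=150, fp=8, fn=31, tp=216)
print(f"sensitivity {sens:.1%}, specificity {spec:.1%}, kappa {kappa:.2f}")
# prints: sensitivity 87.4%, specificity 94.9%, kappa 0.80
```

These counts reproduce the paper's headline figures for the ophthalmologist's examination against seven field stereophotography.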
Table 1. The Gloucestershire grading form (graded for each eye; outcomes as shown)

Grade 0: No diabetic retinopathy. Outcome: 12/12.
Grade 1: Minimal non-proliferative DR. Outcome: 12/12.
Grade 2: Mild non-proliferative DR.
Grade 3: Maculopathy. Referral: 3a "routine"; 3b, c, d "soon".
  3a: Haemorrhage <1 DD from foveal centre
  3b: Exudates <1 DD from foveal centre
  3c: Groups of exudates (including circinate and plaques) within the temporal arcades >1 DD from foveal centre
  3d: Reduced VA not corrected by a pinhole, likely to be caused by a diabetic macular problem, and/or suspected clinically significant macular oedema
Grade 4: Moderate to severe non-proliferative DR. Refer "soon".
  4a: Multiple cotton wool spots (>5) and/or multiple haemorrhages
  4b: And/or intraretinal microvascular abnormalities (IRMA) and/or venous irregularities (beading, reduplication, loops)
Grade 5: Proliferative DR. Refer "urgent". New vessels on the disc (NVD) or new vessels elsewhere (NVE); preretinal haemorrhage and/or fibrous tissue.
Grade 6: Advanced DR. Refer "immediate". Vitreous haemorrhage, and/or traction/traction detachment, and/or rubeosis iridis.
Grade PC: Treated diabetic retinopathy. Photocoagulation scars anywhere (focal, sectoral, panretinal, grid).

The reasons that 73 eyes were unassessable when grading the seven field stereophotographs were:

- 13 eyes: fields 1 and/or 2 were of insufficient quality for assessment
- 50 eyes: fields 3–7 were of insufficient quality for assessment
- 5 eyes: fields 1 or 2 and fields 3–7 were of insufficient quality for assessment
- 4 eyes: images were too dark for assessment
- 1 eye: images were absent

Detection of referable diabetic retinopathy

Comparison of the two reference standard methods, an ophthalmologist's examination against seven field stereophotography, in assessable eyes (Table 3) gave a sensitivity of 87.4% (confidence interval 83.5 to 91.5) and a specificity of 94.9% (CI 91.5 to 98.3).
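The dichotomisation implied by Table 1, where grades 0–2 are non-referable and grades 3–6 carry a referral outcome of increasing urgency, can be sketched as a small lookup. This is a hypothetical helper for illustration only, not code from the screening programme; the grade labels and urgency terms are taken from Table 1.

```python
# Referral urgency by Gloucestershire grade (Table 1).
# Grades 0-2 are non-referable (None); grades 3-6 are referable.
URGENCY = {
    "0": None, "1": None, "2": None,
    "3a": "routine", "3b": "soon", "3c": "soon", "3d": "soon",
    "4a": "soon", "4b": "soon",
    "5": "urgent",
    "6": "immediate",
}

def is_referable(grade: str) -> bool:
    """True for maculopathy (3), moderate to severe NPDR (4),
    proliferative (5) and advanced (6) retinopathy."""
    return URGENCY[grade] is not None

print(is_referable("2"), is_referable("3a"), URGENCY["5"])
# prints: False True urgent
```

The same non-referable/referable split (categories A–C versus D–G) underlies the comparison table used for the seven field grades.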
The measure of agreement was a kappa value of 0.80. These calculations were based on 405 eyes that had assessable seven field stereophotographs. A more detailed analysis of this result, which includes the 73 eyes that were not assessable using seven field stereophotography, is shown in Table 4.

Comparison of two field digital photography against seven field stereophotography, in assessable eyes (Table 5) gave a sensitivity of 80.2% (CI 75.2 to 85.2) and a specificity of 96.2% (CI 93.2 to 99.2). The measure of agreement was a kappa value of 0.73. These calculations were based on 399 eyes that were gradable by both methods of examination.

Comparison of two field digital photography against the ophthalmologist's examination findings, in assessable eyes (Table 6) gave a sensitivity of 82.8% (CI 78.0 to 87.6) and a specificity of 92.9% (CI 89.6 to 96.2). The measure of agreement was a kappa value of 0.76. These calculations were based on 472 eyes that were gradable by both methods of examination.
Comparison of the two reference standard methods with the ophthalmologist's examination as the main reference standard, in assessable eyes (Table 3) gave a sensitivity of 96.4% (CI 94.0 to 98.8) and a specificity of 82.9% (CI 77.4 to 88.4).

Table 2. Comparison table in validation study

Seven field grading (ETDRS level, description, category): 10 No retinopathy, A; 20 Minimal, B; 35a Mild with loops, F; 35b Mild with questionable CWS/VB/IRMA, C; 35c Mild with haem, C; 35d Mild with HE = 2, C; 35e Mild with HE >3 in 1+, C; 35f Mild with CWS, C; 43a Moderate with HMA >3 in 4/5, E; 43b Moderate with IRMA = 2 in 1–3, E; 47a Mod/severe with both 43s, E; 47b Mod/severe with IRMA = 2 in 4–5, F; 47c Mod/severe with HMA = 4 in 2–3, E; 47d Mod/severe with VB = 2 in 1, F; 53a Severe with 2 or more 47s, F; 53b Severe with HMA >4 in 4–5, E; 53c Severe with IRMA >3 in 1+, F; 53d Severe with VB = 2 in 2–3, F; 61a Mild proliferative with FPE or FPD, G; 61b Mild proliferative with NVE = 2 in 1+, G; 65a Moderate proliferative with NVE >3 in 1 or NVD = 2, G; 65b Moderate proliferative with VH or PRH = 2 in 1, G; 71a HRC with VH or PRH >3 in 1+, G; 71b HRC with NVE >3 in 1+ and VH/PRH, G; 71c HRC with NVD = 2 and VH/PRH, G; 71d HRC with NVD >3, G; 75 HRC with NVD >3 and VH/PRH, G; 81 Advanced DR, G; 88 Unassessable, U; 99 Unassessable, U.

Two field/examination grading (grade, description, category): 0 No retinopathy, A; 1a Minimal, B; 1b Minimal with haem, B; 2a Mild with haem/HE/CWS, C; 2b Mild with Ma/HE, C; 2c Treated DR, C; 3a Maculopathy with haem <1 DD, D; 3b Maculopathy with HE <1 DD, D; 3c Maculopathy with HE groups, D; 3d Maculopathy with reduced VA/CSME, D; 4a Mod/severe with CWS/HMA, E; 4b Mod/severe with IRMA/VB, F; 5 Proliferative, G; 6 Advanced, G; U Unassessable, U.

Table 3. Comparison of examination versus seven field. Rows: reference standard result (seven field stereophotography); columns: ophthalmologist's examination findings.

                               No DR/non-referable   Referable DR    Total
No DR or non-referable DR      150 (94.9%)           8 (5.1%)        158
Referable DR                   31 (12.6%)            216 (87.4%)     247
Total                          181 (44.7%)           224 (55.3%)     405
The measure of agreement was a kappa value of 0.80. These calculations were based on 405 eyes that had assessable seven field stereophotographs.

Retrospective examination of the seven field stereophotographs in which there was a difference in grading/classification between the two reference standards

There were eight eyes which the ophthalmologist's examination had classified as referable retinopathy that had been graded as non-referable by seven field stereophotography. On looking retrospectively at the photographs of these eight eyes, five eyes were classified by the ophthalmologist as referable in whom the appropriate abnormalities were detected retrospectively on the photographs: four with maculopathy and one with referable non-proliferative retinopathy. There were three patients who had abnormalities noted by the ophthalmologist in the superior retina, two with new vessels and one with IRMA, that were not detected retrospectively on the photographs, as these features lay outside the standard fields.

There were 31 eyes that had been graded as referable retinopathy by seven field stereophotography and that the ophthalmologist's examination had classified as non-referable. Of these, 12 eyes had been graded as having IRMA present on seven field stereophotography. Of these 12 eyes, six had received panretinal photocoagulation and the IRMA had therefore been graded as present between panretinal photocoagulation scars.
Of these six eyes, IRMA were retrospectively confirmed on the photographs in five eyes. Of the six eyes that had not received panretinal photocoagulation, IRMA were retrospectively confirmed in all six on one of the peripheral fields (two eyes field 6, two field 7, one field 5, and one field 3).

Table 4. Detailed comparison of examination versus seven field. Rows: reference standard result (seven field stereophotography); columns: ophthalmologist's examination findings (not gradable, no DR, non-referable DR, maculopathy, moderate NPDR, severe NPDR, proliferative and advanced). Cell counts (with row percentages) are given in the original column order.

Not gradable: 29 (39.7%), 30 (41.1%), 6 (8.2%), 3 (4.1%), 5 (6.8%); total 73
No DR: 68 (97.1%), 2 (2.9%); total 70
Non-referable DR: 35 (39.8%), 45 (51.1%), 4 (4.5%), 2 (2.3%), 2 (2.3%); total 88
Maculopathy: 13 (15.9%), 53 (64.6%), 3 (3.7%), 7 (8.5%), 6 (7.3%); total 82
Moderate NPDR: 13 (27.7%), 17 (36.2%), 7 (14.9%), 9 (19.1%), 1 (2.1%); total 47
Severe NPDR: 1 (3.3%), 7 (23.3%), 4 (13.3%), 15 (50.0%), 3 (10.0%); total 30
Proliferative and advanced: 4 (4.5%), 14 (15.9%), 8 (9.1%), 11 (12.5%), 51 (58.0%); total 88
Column totals: 0, 132 (27.6%), 108 (22.6%), 101 (21.1%), 22 (4.6%), 47 (9.8%), 68 (14.2%); total 478

Table 5. Comparison of two field versus seven field. Rows: reference standard result (seven field stereophotography); columns: two field digital photography.

                               No DR/non-referable   Referable DR    Subtotal   Unassessable
No DR or non-referable DR      151 (96.2%)           6 (3.8%)        157        1
Referable DR                   48 (19.8%)            194 (80.2%)     242        5
Total                          199 (49.9%)           200 (50.1%)     399        6

Table 6. Comparison of two field versus examination. Rows: reference standard result (ophthalmologist's examination findings); columns: two field digital photography.

                               No DR/non-referable   Referable DR    Subtotal   Unassessable
No DR or non-referable DR      222 (92.9%)           17 (7.1%)       239        1
Referable DR                   40 (17.2%)            193 (82.8%)     233        5
Total                          262 (55.5%)           210 (44.5%)     472        6
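As a cross-check, the headline percentages reported for the two field digital method follow directly from the cell counts in Tables 5 and 6. This is a minimal sketch; the helper name is illustrative, not from the paper.

```python
# Recompute the reported sensitivity/specificity from table cell counts.
def pct(numerator: int, denominator: int) -> float:
    """Percentage to one decimal place, matching the paper's rounding."""
    return round(100 * numerator / denominator, 1)

# Table 5: two field digital v seven field stereophotography (reference).
# Sensitivity 194/242, specificity 151/157.
print(pct(194, 242), pct(151, 157))  # prints: 80.2 96.2

# Table 6: two field digital v ophthalmologist's examination (reference).
# Sensitivity 193/233, specificity 222/239.
print(pct(193, 233), pct(222, 239))  # prints: 82.8 92.9
```

The same arithmetic applied to Table 3 gives the seven field figures (216/224 = 96.4% sensitivity, 150/181 = 82.9% specificity) when the ophthalmologist's examination is taken as the reference.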
There were a further three eyes in whom the seven field stereophotographs had noted small fibrotic NVE following extensive panretinal photocoagulation (two in field 6 and one in field 3) that the ophthalmologist had classified as stable treated retinopathy (that is, non-referable). These were retrospectively confirmed from the photographs.

There were eight eyes that had been graded as having a haemorrhage <1 DD from the foveal centre on seven field stereophotography. In all eight of these eyes the ophthalmologist had classified these as more than two microaneurysms <1 DD from the foveal centre. Looking at the photographs retrospectively, we encountered difficulties in interpretation of the ETDRS definition of haemorrhage/microaneurysm, which will be discussed below.

There were three eyes that had been graded as having a hard exudate <1 DD from the foveal centre on seven field stereophotography. Retrospectively, examination of the photographs confirmed that hard exudate was present in one of these eyes, a single probable drusen was present in one, and no hard exudate was seen in the third, although the image quality of this third image was only just within ETDRS standard 14.

There were two eyes that had been graded as having a group of exudates >1 DD from the foveal centre on seven field stereophotography. In both of these patients the ophthalmologist had graded non-grouped exudate >1 DD from the foveal centre. Looking at the photographs retrospectively, we encountered difficulties with the definition "group of exudates", as discussed below.

There was one eye that had been graded as having a multiple haemorrhage on seven field stereophotography. The ophthalmologist had classified this eye as more than two haemorrhages >1 DD from the foveal centre. Retrospectively it was agreed that multiple haemorrhages were present.
There was one eye that had been graded as having NVD present (ETDRS level 65a) on seven field stereophotography and classified as non-referable on the ophthalmologist's examination. In retrospect no NVD were visible on the photographs and this patient should have been included within the non-referable group.

DISCUSSION

No previous studies have included a complete set of patients in whom a comparison has been made between seven field stereophotography and an ophthalmologist's examination using slit lamp biomicroscopy, as illustrated in the examples below.

Comparison with other studies in the literature

Moss et al3 compared ophthalmoscopy with fundus photography in determining the severity of diabetic retinopathy. The ophthalmoscopy method was direct ophthalmoscopy supplemented by indirect ophthalmoscopy if the examiner felt this was necessary. Slit lamp biomicroscopy was not used. Kinyoun et al4 compared indirect ophthalmoscopy by a retina specialist with seven standard field fundus photographs, but slit lamp biomicroscopy was not used.

Schachat et al5 compared the use of clinical examination and fundus photographs in detecting diabetic retinopathy in a population that included people with and without diabetes. Slit lamp biomicroscopy with a 78 dioptre lens, three mirror lens, or both was performed on 1168 individuals, of whom 9.5% had definite diabetes. The photographic method was two 30 degree fields using 35 mm film (Diabetic Retinopathy Study standard fields 1 and 2, centred on the disc and macula). They concluded that the clinical examination detected most of the cases of diabetic retinopathy identified by disc and macular photographs read by skilled graders, although there would be an underestimate of prevalence using the clinical examination.

Pugh et al6 compared four screening methods in 352 subjects, which included an ophthalmologist's examination through dilated pupils using direct and indirect ophthalmoscopy and seven field stereophotography.
Two centres were used, 10 ophthalmologists performed the examinations, and one of the two centres used slit lamp biomicroscopy with a 90 dioptre lens. The number of subjects who had an examination that included slit lamp biomicroscopy was not stated, nor whether this affected the examination results. Sensitivity, specificity, and positive and negative likelihood ratios were calculated after dichotomising the retinopathy levels into none and mild non-proliferative versus moderate to severe non-proliferative and proliferative. Overall the examination results were poor, with a sensitivity of 0.33, specificity of 0.99, and positive and negative likelihood ratios of 72 and 0.67. Of a total of seven cases of proliferative retinopathy, the ophthalmologist's examination detected only three.

Reasons for differences between the ophthalmologist and seven field stereophotography in this study

A small number of differences were explained by errors made by both reference methods. Definitions of referable retinopathy accounted for a significant number of differences. Particular sources of difficulty were:

- Haemorrhage <1 DD from the central fovea was a common source of confusion, with microaneurysms <1 DD often being graded instead of haemorrhage and vice versa
- The definition of a group of exudates >1 DD from the central foveola needs to be clearly defined.

The ophthalmologist in this study differed from seven field stereophotography much more commonly in patients who had received extensive laser treatment. Although the grading form did not differentiate between IRMA in patients who had received panretinal photocoagulation and those who had not, he had not looked for IRMA in the former group. This was because he had considered the lack of new vessels to indicate a stable treated retina and not a referable eye. The grading form used did not allow for this difference. It perhaps illustrates the difference between performing studies and one's routine clinical practice.
Performance of two field digital photography
Two field digital photography performed well, with sensitivities of >80% and specificities of >92% against both reference standards. The technical failure rate was low at 1.5%.

Reasons for unassessable images in this study
During the clinical examination by PS, no patient was recorded as being unassessable. However, patients were excluded from the study by clinic doctors if they had media opacities. These might otherwise have been technical failures for the ophthalmologist. A strict evaluation of the quality of the seven field images was performed to determine assessability based on field definition, focus/clarity, and stereoscopic effect, as outlined in the Early Treatment Diabetic Retinopathy Study (ETDRS) manual of procedures (ETDRS Chapter 18). However, seven field retinal photography in the ETDRS and similar major research trials was only carried out by extensively trained, certified, and constantly monitored photographers. Even within the context of such tightly controlled retinal photography protocols it is not unusual for seven field imaging to fail to meet the required quality levels, with 10% technical failure rates being reported within the WESDR.7 Indeed, the authors suggested in the same article that use of such a relatively complicated and difficult protocol may not be entirely necessary and that fewer retinal fields may be appropriate in the context of diabetic retinopathy imaging. Technical failure rates for studies that have used a seven field stereo protocol are not routinely reported in the literature, nor do studies report how many attempts were made to achieve an assessable seven field set of photographs.
Within the context of the current study the patients had a much higher prevalence of retinopathy than is general among people with diabetes, and consequently were more likely to have media opacities or to dilate poorly. In addition, the photographers in this study perform this technique relatively infrequently compared with the photographers in the Wisconsin studies,7 and only one chance was given to obtain a seven field set (no photographs were repeated). Hence, it proved difficult to obtain consistently high quality results in seven field stereo imaging, with approximately 15% being technically unassessable, even when applying less strict definitions of assessability. A review of these technically unassessable images indicated that in the majority of cases there were sufficient images present, and of sufficient photographic quality, for the presence of sight threatening retinopathy features to have been detected had they been present. Seven field sets, including those defined as "ungradeable," were therefore regraded to include the presence of haemorrhage or exudates less than one disc diameter from the foveola and any proliferative or advanced retinopathy identified in the available images. This was carried out to provide comparative data with the Gloucestershire grading procedures for these features.

CONCLUSION
The current study has shown that slit lamp biomicroscopy by an ophthalmologist experienced in retinal examination can compare favourably with seven field stereophotography as a reference standard when assessing different methods of screening for diabetic retinopathy. There are advantages and disadvantages with both reference methods. There is no hard copy for the ophthalmologist's slit lamp biomicroscopy, and one cannot necessarily conclude that this examination, with different ophthalmologists, will produce consistently high quality results.
However, this paper has highlighted the high technical failure rate of seven field stereophotography, which even with the most experienced photographers7 can be 10%, and the technical failure rate for this procedure has not often been reported in the previous literature.

ACKNOWLEDGEMENTS
We thank Lynda Lindsell, research coordinator, and Paul Parker, ophthalmic photographer, Visual Science Unit, Radcliffe Infirmary, Oxford, and Helen Lipinski, deputy manager, Retinopathy Grading Centre, London.

Authors' affiliations
P H Scanlon, Gloucestershire Eye Unit, Cheltenham General Hospital, Sandford Road, Cheltenham GL53 7AN, UK
R Malhotra, Oxford Eye Hospital, Oxford, UK
R H Greenwood, Norfolk and Norwich University Hospital, Norwich, UK
S J Aldington, Retinopathy Grading Centre, Imperial College, London, UK
C Foy, Gloucestershire R&D Support Unit, UK
M Flatman, Norwich Diabetes Eye Screening Service, UK
S Downes, Oxford Eye Hospital, Oxford, UK

Competing interests: None declared.
Funding: R&D Project Grant R/21/01.98/Scanlon/R from the South West R&D Directorate. MD thesis: PHS is submitting this work for an MD thesis to UCL.

REFERENCES
1 Diabetic Retinopathy Study. Report Number 6: Design, methods, and baseline results. Report Number 7: A modification of the Airlie House classification of diabetic retinopathy. Prepared by the Diabetic Retinopathy Study. Invest Ophthalmol Vis Sci 1981;21(1 Pt 2):1–226.
2 Early Treatment Diabetic Retinopathy Study. Design and baseline patient characteristics. ETDRS report number 7. Ophthalmology 1991;98(5 Suppl):741–56.
3 Moss SE, Klein R, Kessler SD, et al. Comparison between ophthalmoscopy and fundus photography in determining severity of diabetic retinopathy. Ophthalmology 1985;92:62–7.
4 Kinyoun JL, Martin DC, Fujimoto WY, et al. Ophthalmoscopy versus fundus photographs for detecting and grading diabetic retinopathy. Invest Ophthalmol Vis Sci 1992;33:1888–93.
5 Schachat AP, Hyman L, Leske MC, et al.
Comparison of diabetic retinopathy detection by clinical examinations and photograph gradings. Barbados (West Indies) Eye Study Group. Arch Ophthalmol 1993;111:1064–70.
6 Pugh JA, Jacobson JM, Van Heuven WA, et al. Screening for diabetic retinopathy. The wide-angle retinal camera. Diabetes Care 1993;16:889–95.
7 Moss SE, Meuer SM, Klein R, et al. Are seven standard photographic fields necessary for classification of diabetic retinopathy? Invest Ophthalmol Vis Sci 1989;30:823–8.

Method of screening for diabetic retinopathy — www.bjophthalmol.com

ASSESSING AESTHETIC COMPOSITE VENEER PLACEMENT VIA DIGITAL PHOTOGRAPHY
Jack D. Griffin, Jr, DMD*
Pract Proced Aesthet Dent 2007;19(5):A-F

Restoring teeth using direct composite veneers can be quite challenging for a clinician. Achieving natural color blending, masking dark teeth, removing decay, and providing a natural finish require meticulous placement with various composite opacities and shades. Critical self-evaluation using digital photography, documentation, and follow-up visits to perform veneer enhancements are essential to ensure an aesthetic outcome. This article demonstrates how digital photography is used to achieve an aesthetic result in the placement of eight direct resin veneers.

Learning Objectives: This article discusses the procedures required to place, evaluate, and finalize composite veneers in an efficient manner. Upon reading this article, the reader should:
• Become familiar with the use of digital photography to evaluate, plan, and prepare the dentition.
• Understand preparation guidelines to correct missing and misshapen teeth for conservative resin veneers using images as a guideline.

Key Words: composite, veneer, preparation, digital, camera

* Private practice, Eureka, MO. Jack D. Griffin, Jr, DMD, 18 Hilltop Village Center Drive, Eureka, MO 63025. Tel: 636-938-4141. E-mail: Esmilecenter@aol.com

Table. Clinical Uses for Photography in a Composite Veneer Case
• Pre-treatment: Planning the case.
• Mid-treatment: Scrutinizing placement, anatomy, and finish; using images as a framework for making adjustments.
• Post-treatment: Evaluating material and technique performance.

Porcelain and composite restorations have been widely used over the past 20 years to create highly aesthetic smiles. Replacing missing tooth structure can be successfully achieved with minimal tooth reduction as long as concepts in smile design, tooth preparation, and material limitations are understood.1-3 While composite veneers may be the greatest expression of creativity and artistic ability among clinicians, direct resin techniques can be difficult to perfect.4 Contemporary composite systems provide clinicians with materials of varying shades and opacities that can mimic natural tooth structure when used in smile enhancement.5 As demonstrated in this presentation, the key to a successful outcome is knowing when to use the different composites in each tooth, objectively evaluating placement, and then making necessary corrections. Photography and self-evaluation are critical adjuncts in direct veneer placement (Table).

Simplifying Composite Selection
Many of the resin systems (eg, Renamel, Cosmedent, Chicago, IL; Filtek Supreme Plus, 3M Espe, St. Paul, MN; Esthet-X, Dentsply Caulk, Milford, DE) available today have at least three opacities of composite, an important requisite when life-like veneers are desired.6 The key is knowing where to place each layer and in what shade to obtain consistent and natural results.
Dentin composites have more opacity than other composites and are used for three basic reasons: to replace missing dentin, to add length or width to the tooth, and to conceal darker tooth colors. As a result, they efficiently block unwanted light transmission when the clinician is performing length or width additions or covering transition areas within the restoration (eg, when hiding the edge of a fracture). Since these dentin materials are typically microhybrids, they possess the strength needed to withstand occlusal forces in the intraoral environment.7,8 Incisal shades are strategically used in small amounts to mimic translucency and incisal characterization or to enhance the surface of the entire restoration. Since these composites have little opacity, they cannot be used to conceal a color or transition area. If used properly, however, they will give the restoration a natural appearance. Enamel composites basically fall between dentin and incisal shades in terms of opacity and their ability to mask color. They form the majority of the facial surface of the tooth and are used to form the basis of the final desired tooth shade. Their color intensity is influenced by the thickness of the resin layer and by any color that might be underneath it. The thicker it is (this also applies to dentin shades), the more intense and consistent its final shade will be. These are the basic components necessary for the clinician to create highly aesthetic composites using a direct layering technique. While varying tints and customization materials are often needed when matching a single tooth, they are seldom required for a complete veneer case.

Figure 1. The lip line covered all of the gingiva except for the points of the papilla. Both porcelain and composite veneers were offered to the patient.
Figure 2. Decay and composite material must be completely removed prior to composite placement.
Patient Examination and Treatment Planning
A 24-year-old female presented with an unaesthetic smile. Comprehensive clinical examination revealed failing restorations, recurrent decay, marginal leakage, and staining. Her basic tooth color was A2 to A3, with a moderate amount of incisal translucency (Figure 1). A full series of intra- and extraoral images was taken for treatment planning, marketing, and case documentation. These images were studied, along with clinical examination notes, prior to treatment so that a basic plan could be formulated. After viewing an office-generated PowerPoint slide show displaying different treatment options, the patient chose a final tooth shade that was the darkest bleach shade (ie, 0M3). While treatment plans were made for porcelain and composite veneers, the patient selected composite restorations for financial reasons. Lateral smile views demonstrated that the anterior 10 teeth, from second premolar to second premolar, would require veneers for a smooth, natural-appearing, enhanced smile, but only eight were requested by the patient. Treatment would involve veneering the teeth to lighten their color as well as repairing the decayed areas (Figure 2). The patient was scheduled for a 90-minute appointment to place the eight composite veneers, with a 50-minute follow-up three days later.

Composite Veneer Technique
Following anesthetization and tooth isolation, various shades of composite were tried on the teeth to evaluate color and masking ability. It was important to begin this assessment on a central incisor to establish proper midline, cant, color, and proportion. When done correctly, it would be much easier to establish the proper size, shape, and alignment of the surrounding teeth. The existing composite resin was removed from #9(21) with a coarse diamond bur, and all interproximal filling material and decay were removed with carbide burs.
Caries indicator (ie, Sable Seek, Ultradent Products, South Jordan, UT) was used to verify complete caries elimination. The face of the tooth was roughened with a diamond bur, and a slight finish line was created just below the gingival margin. A contoured anatomical matrix was placed and wedged loosely (Figure 3). The matrix extended slightly into the sulcus and provided the smoothest possible surface on which to finish the composite.9 After the tooth was etched with 38% phosphoric acid, it was rinsed thoroughly, and a dentin bonding agent was applied and air thinned. The first composite resin layer (ie, Renamel Universal Hybrid, Cosmedent, Chicago, IL) formed the foundation of color within the restoration (Figure 4). Shade A3 was placed in the formerly decayed interproximal areas and on the incisal edges where length was to be added. It was undercontoured by 1 mm in all areas, so that the layer would not extend through the final enamel layer after finishing, and then cured for 20 seconds. The enamel layer was initiated with the placement of a flowable composite along the gingival aspect of the matrix, followed by a bleach-shade (SB3) composite. The uncured flowable composite reduced voids and was mostly forced out as the more viscous enamel material was placed. The incisal edge was made irregular with the edge of a plastic instrument or explorer, left 1 mm to 2 mm short of the desired final edge length, and then cured for 20 seconds (Figure 5). The key was that the indentations were varied in width and length, so that the incisal shade blended naturally into the enamel without abrupt transition areas. A transparent incisal shade of flowable composite was followed by incisal-shaded composite that was shaped and cured (Figure 6). When the matrix was removed, a bulk of material at the gingival aspect extended just beneath the gingiva, where the smooth surface produced by the matrix would remain largely untouched. The excess material was eliminated and basic shaping performed with a finishing diamond bur, and the midline, cant, and tooth proportion were adjusted (Figure 7). Working distally from the midline, the other teeth were built around the colors and contour of this tooth (Figure 8). Basic contouring was performed with a finishing diamond bur. A #12 blade composite knife, a #7901 finishing bur, and sandpaper disks (ie, Sof-Lex Brush, 3M Espe, St. Paul, MN) were used for basic contouring. Curing was performed for an additional 20 seconds from the facial and lingual surfaces. A full series of intra- and extraoral images was captured before patient dismissal so that a critique could take place. Additional polishing and shaping were completed three days later at the enhancement appointment. The digital images were loaded onto the computer, scrutinized, and marked with needed changes (Figure 9). The resulting composite veneers were objectively evaluated by the clinician, who viewed them critically without the distractions of the operatory. These images were then placed on the operatory monitor for chairside reference. Other images were printed and placed behind the patient to form an outline, so that needed enhancements could be completed in an orderly manner.

Figure 3. Decay, composite, and surface stains are removed, and the entire surface is roughened with a diamond bur. A contour matrix is placed and very lightly wedged.
Figure 4. The areas of missing tooth structure interproximally and incisally are replaced with a microhybrid matching the existing dentin shade.
Figure 5. The enamel microfill composite is placed with a plastic instrument and shaped with a composite roller after the flowable composite.
Figure 6. Low opacity incisal shade is added and shaped about 1 mm longer than the final desired length to allow for anatomy development and polishing, and is then cured.
Figure 7. A basic shape is formed with a finishing diamond bur. The critical part is achieving a proper position and cant to the midline and a suitable tooth width.
Figure 8. The neighboring teeth can then be completed using the same techniques as before.
Figure 9. Marks can be made in a variety of ways to point out places that need correcting, so that areas requiring improvement are not overlooked.

Enhancement Appointment
At the three-day follow-up appointment, the veneers were evaluated for areas that were rough, sharp, or unable to be flossed. Phonetics were also used to determine if the incisal edge position required adjustment. The patient was shown the images taken at the placement appointment and decided to have the second premolars veneered as well, since they were clearly visible in the photographs. Areas that required additional composite were modified first via diamond roughening and sandblasting with 27 µm silica, followed by etching and the application of a bonding agent. Composite was placed to correct voids, visible dark color, and/or insufficient contours. A plastic instrument was used to place the material, which was smoothed and formed with a composite rolling instrument (ie, CompoRoller, Kerr Corporation, Orange, CA) (Figures 10 and 11). Embrasures were shaped and refined with three levels of finishing disks, and interproximal areas were shaped with a composite knife and abrasive strips (Figures 12 and 13). Care was taken to enhance facial anatomy by developing subtle developmental indentations and by beveling the incisal third back toward the lingual, which was accomplished with rubber polishing cups and disks (Figure 14).
The last step included a felt wheel with polishing paste in order to achieve a high luster (Figure 15).10 As the patient desired that her mandibular teeth match the light color of the maxillary teeth, take-home bleaching (ie, Nite White ACP, Discus Dental, Culver City, CA) was recommended.

Postoperative Images
An additional follow-up appointment was made three weeks later, and a full series of digital images was captured to be used for case evaluation, legal documentation, and practice promotion. A portrait image was critical not only for marketing, but also for final case evaluation of technique and materials. Final midline position and cant were verified and archived in the office portfolio. A full smile revealed consistent color and life-like incisal characteristics (Figure 16). The patient continued to bleach her mandibular teeth, and the shade lightened enough to closely match the shade used on the maxillary composite veneers.

Figure 10. The composite roller makes smoothing and contouring final layers very efficient.
Figure 11. Care is taken to keep anatomy on the facial surface during contouring and polishing.
Figure 12. Flexible sandpaper disks are used to shape embrasures between teeth and for basic tooth contouring.
Figure 13. Interproximal surfaces are then contoured and smoothed with a #12 blade and finishing strips.

Discussion
Office commotion, lighting, patient positioning, and even doctor fatigue can make the evaluation of a procedure much more difficult to perform chairside than on a large monitor in a dark room separate from the operatory. An image of the full frontal view of the smile is used to check midline, cant, buccal corridor development, and other important factors.
Full intraoral images from the front and sides can be used to check shade and character consistency, embrasures, surface texture, and basic shape to locate any deficiencies that might appear in the veneers. These deficiencies include areas of bulkiness, improper embrasures, poor contours, unnatural surface texture, and inconsistent colors. Photography makes the defects and needed corrections apparent.

Conclusion
A realistic characterization of the teeth was achieved by efficiently using different opacities of composite and placement in layers. Although composite veneers require more chairside effort than porcelain veneers and may require more maintenance, they can be rewarding for the dental staff and a cost-effective confidence builder for the patient. In addition, composite veneers allow the clinician to express his or her creativity and artistic ability. Direct resin techniques, however, can be difficult to master; therefore, by using photography as a form of evaluation throughout treatment, the patient will be ensured a superior aesthetic smile.

Acknowledgment
The author declares no financial interest in any of the products, materials, or suppliers referenced herein.

References
1. Milnar F. A minimal intervention approach to the treatment of a Class IV fracture. J Cosmet Dent 2006;21(4):106-112.
2. Christensen GJ. Bonding to dentin and enamel: Where does it stand in 2005? J Am Dent Assoc 2005;136(9):1299-1302.
3. Terry DA. Direct composite resin restoration of adolescent Class IV tooth fracture: A case report. Pract Periodont Aesthet Dent 2000;12(1):23-29.
4. Fahl N. The direct/indirect composite resin veneers: A case report. Pract Periodont Aesthet Dent 1996;8(7):627-638.
5. Vargas M. Conservative aesthetic enhancement of the anterior dentition using a predictable direct resin protocol. Pract Proced Aesthet Dent 2006;18(8):501-507.
6. Miller M. Reality 2006. Vol 20. Houston, TX: Reality Publishing; 2006.
7. Chyz G. Postorthodontic restoration of worn incisal edges. Contemp Esthet 2006;10(4):36-39.
8. Fahl N. Achieving ultimate anterior esthetics with a new microhybrid composite. Compend Contin Educ Dent 2000;26(suppl):4-13.
9. Belvedere PC. Direct bulk placement for posterior composites using an anatomically shaped clear matrix creating true anatomic interproximal surfaces. J Indiana Dent Assoc 2006;85(1):14-18.
10. Peyton JH. Finishing and polishing techniques: Direct composite resin restoration. Pract Proced Aesthet Dent 2004;16(4):293-298.

Figure 14. Rubber polishing disks are then used to finish the polish and to enhance developmental grooves on the facial and incisal characteristics.
Figure 15. After three weeks, final images are taken that are consistent with the preoperative views.
Figure 16. By using various opacities and colors, the veneers cover all of the decayed and broken down areas and the new surface color is consistent and natural.

1. Clinical dental photography can be used BEFORE a direct composite veneer case to do which of the following?
a. Help plan the case with regard to tooth preparation and material choice.
b. Have images to help create an office portfolio of "before" and "after" images.
c. Document the case for liability reasons.
d. All of the above.
2. Which of the following is important in choosing a composite for direct veneers?
a. Packaging in unidose compules.
b. Using a system with varying composite opacities for natural variation of translucency.
c. Having a single universal composite that can be used on the tooth to replace both dentin and enamel for efficiency.
d. Employing incisal shades to mask underlying darker colors in the tooth.
3. Which of the following composite components is used for masking color or for matching more opaque parts of the tooth?
a. Dentin shades.
b. Enamel shades.
c. Incisal shades.
d. Stumpf shades.
4. Which of the following is an instrument that can aid in final composite placement and contouring?
a. A composite roller.
b. A composite knife.
c. Composite photo.
d. A microbrush.
5. Why is photography valuable after composite placement?
a. To honestly evaluate veneer placement.
b. To form a plan for needed refinement.
c. To be printed or placed on an operatory monitor for reference during veneer enhancements.
d. All of the above.
6. Which of the following represents a type of matrix that is precontoured, helps retract tissue, and forms the basic composite shape interproximally and gingivally?
a. Straight celluloid strip.
b. Automatrix.
c. Gingival retraction paste.
d. Contour matrix.
7. When there is interproximal decay that must be restored, what is the first material chosen and placed in the interproximal area?
a. A dentin shade that matches the existing dentin color.
b. An enamel shade that is the desired final tooth shade.
c. An incisal shade that mimics incisal character.
d. Custom staining to create internal character.
8. Flowable composite is placed in the matrix...
a. And is cured before adding any other material to provide a strong composite base.
b. And is left uncured so that it is forced out by the more viscous composite, which may reduce voids.
c. To seal the matrix from salivary contamination.
d. To increase bond strength of the composite.
9. Which tooth should be treated first in a direct composite veneer case to establish cant, midline, and tooth proportion?
a. Central incisor.
b. Lateral incisor.
c. Canine.
d. Milk tooth.
10. Which of the following criteria must be followed when doing final facial polishing?
a. Achieve a natural polished surface that will resist staining and plaque accumulation.
b. Preserve facial anatomy and avoid creating an unnaturally flat surface.
c. Create embrasures that are well polished and proportionally placed.
d. All of the above.
To submit your CE Exercise answers, please use the answer sheet found within the CE Editorial Section of this issue and complete as follows: 1) Identify the article; 2) Place an X in the appropriate box for each question of each exercise; 3) Clip the answer sheet from the page and mail it to the CE Department at Montage Media Corporation. For further instructions, please refer to the CE Editorial Section. The 10 multiple-choice questions for this Continuing Education (CE) exercise are based on the article "Assessing aesthetic composite veneer placement via digital photography," by Jack D. Griffin, Jr, DMD.

A major mistake that managers make
Russell L. Ackoff

All through school we are taught that making a mistake is a bad thing. We are downgraded for them. When we graduate and enter the real world and the organizations that occupy it, the aversion to mistakes continues. As a result, one tries either to avoid mistakes or, if one is made, to conceal it or transfer blame to another. We pay a high price for this, because one can only learn from mistakes, by identifying and correcting them.

... in business, if mistakes are made and laws are not broken, you rarely see any formal investigation. Even when the companies themselves look into what happened, they don't do it in a structured and rigorous way. They don't learn anything from the process. (Mittelstaedt, Jr., 2005)

One does not learn from doing something right; one already knows how to do it. By doing something right one gets confirmation of what one already knows, but no new knowledge. The fact that schools are more interested in teaching than in learning is apparent from their failure to determine whether students learn from their mistakes.
Once students are graded based on the number of mistakes they make, the teacher presses on and does not check to determine whether the student has learned from the mistakes made. Schools, including business schools, do not even reveal the fact that there are two kinds of mistakes. Errors of commission: doing something that should not have been done. Errors of omission: not doing something that should have been done. Errors of omission, lost opportunities, are generally more critical than errors of commission. Organizations fail or decline more frequently because of what they did not do than because of what they did. For example, IBM ran into serious trouble several decades ago because it did not pursue development of the small computer. Eventually it woke up but never really caught up. Kodak was slow to get into digital photography and is paying a very high price for it today. Additional examples are cited by Mittelstaedt (2005): Coca-Cola's "New Coke," Firestone's tire debacle, Intel's mishandling of its Pentium chip recall, American Express's failed blue card Optima launch, and Webvan's ill-fated online grocery shopping experiment. The type of accounting that all organizations use accounts for only the less important errors, those of commission. If something wrong is done, it eventually shows up in the books. For example, when Kodak bought Sterling Drugs and eventually had to sell it, the loss involved was conspicuous in its books. But errors of omission never appear "in the books." The fact that Kodak failed to acquire Xerox when it could have never appears in its books. The combined effect of these predispositions of schools and the organizations of which we are part is very costly. Most people are part of an organization that
This is accomplished when managers to do as little as possible. This is seldom a decision made consciously; rather it is a culturally imposed disposition of which most are unconscious. It is the disposition to avoid making a mistake or to accepting responsibility for one that is made that makes organizations averse to change to The cost of not changing is seldom taken into account. Many years ago, when I was working on a project in one of the ten largest corporations in the United States, a senior vice president asked if I would be willing to give a course to 200 of the company's top executives on the frontiers of management thinking. I was more than willing to do so. He then explained that he wanted ten two-day courses of approximately twenty each. The first four would cover lower level vice presidents. The next three would cover intermediate level vice presidents. The next two would cover senior vice presidents and the final one would cover the executive office. I used the last half of the second day of the course for questions and discussion of the material I had presented. At the end of the first course, one member said, "This stuff is great. I would love to use it but I can't without the approval of my boss. Are you going to get a chance to present the same material to him?" explained that I would somewhere in the three second-level courses yet to come. He said he would approach his boss for approval as soon as he came out of the class. 4 Essentially the same issue was raised in each of the four lower level classes. It was also raised in each of the three second level courses. These vice presidents also believed they needed the approval of their bosses, senior vice presidents, before initiating significant changes. As a result, I was not surprised when the senior vice presidents expressed their dependency on approval from the executive office before they could initiate the kind of changes I advocated. 
I could hardly wait to see what kind of reaction I would get in the final course, given to members of the executive office. The CEO opened the discussion. He said, "This stuff is great. I want very much for it to be used in this company but I can't impose it on my subordinates. They must accept it voluntarily. Are you going to get a chance to present it to them so I can get their approval and support?"

This was a paralyzed company. No one in senior management was willing to do something that might turn out to be wrong. Every one of them wanted someone else to assume responsibility for whatever they tried. As a result, significant changes were seldom made. Such risk aversion clearly limits the learning and development of organizations, especially in a changing environment. Such aversion is a cultural rather than an individual characteristic. It is not the result of individually made conscious decisions; it is a consequence of the way the culture works. Most of the managers who are affected by it are unaware of such a cultural influence.

Organizations must become tolerant of errors provided that learning is derived from them. This was once very effectively articulated by August Busch III, CEO of Anheuser-Busch, when he told his reports, "If you didn't make a serious mistake last year you probably didn't do your job and try something new. There is nothing wrong in trying and failing, but you had better learn from it. If you make the same mistake twice, you may not be here the next year."

How can an organization's culture be changed so that it is as concerned about errors of omission as errors of commission? How can it assure learning from both types of error? This requires systematic and conscious support of an organization's learning. Such support involves the following steps.

1. Preparing a record of every decision of any significance, ones that involve doing something or (of particular importance) ones that involve not doing something.
This record should include the following information:

• The justification for the decision, including its expected effects and the time by which they are expected. Decisions are made for only two possible reasons: to make something happen that would not otherwise happen, or to prevent something from happening that would otherwise happen. For each decision the type of expected effect should be made explicit, and it should be formulated in a form that is subject to verification. For example, if a decision is made to increase advertising expenditures, what increase in sales is expected and by when? Or if a decision is made not to pursue a possible acquisition because of the poor performance expected of it in the future, how and when can this expectation be verified?

• The assumptions on which the expectations are based. For example, that competitors will not increase their advertising expenditures. Or, in the case of an acquisition not made, that its acquirer will not be able to increase its returns so as to make the acquisition worthwhile.

• The information, knowledge, and understanding that went into the decision.

• Who made the decision, how it was made, and when.

The decision record should be signed off by the senior manager involved in making it. However, it can be prepared by anyone designated by that manager.

2. The decision should be monitored to determine whether the expectations are being met and the assumptions on which they are based remain valid.

3. When a deviation is found in either the assumptions or expectations, it should be diagnosed, the cause determined, and corrective action prescribed and taken.

4. The corrective action is itself the result of a decision. A record of this decision should be made and treated in the same way as the original decision. In this way the process can yield not only learning but also learning how to learn.

5.
A record of the entire process (all four steps) should be made and stored for easy access by those who may later be confronted by the need to make a similar type of decision.

The number of people required to carry out these steps obviously varies as a function of the size of the organization involved and the number of decisions made. This process can and should be conducted at every level of an organization at which critical decisions are made. The process can, of course, be facilitated by the use of computers. These steps, with appropriate modifications, have been taken at General Motors' Corporate Strategy and Knowledge Development Department and Du Pont's Research and Development Department, among others. When applied, something additional has been learned: mistakes are avoided by formulating a decision record. When the expectations and assumptions on which a decision is to be based are made explicit, it often becomes apparent that the decision that might otherwise be made should be avoided.

Consider a case of learning from a decision not to do something. When a possible acquisition was revealed to a large corporation, it conducted a considerable amount of research to determine the current value of the candidate. It then determined that this value was not high enough to justify the asking price for the acquisition. Someone else subsequently acquired the company for considerably more than the first company was willing to pay for it. The acquiring party made changes in the acquired company that increased its value significantly. After these changes were made, the acquired company yielded a return greater than was being earned by the company which had turned down the acquisition. The lesson it learned from all this was: in determining the value of a potential acquisition, a plan is required for what would be done after the acquisition to increase the value of the acquired company.
This estimated enhanced value should be the basis for what the acquirer is willing to pay, not its current value.

The reluctance of an organization to make changes that involve a risk results in a future that happens to that organization, over which it has little control. The willingness to make changes that involve a risk enables an organization to have a major role in creating its future.

REFERENCE
Mittelstaedt, Robert E., Jr., "Will Your Next Mistake Be Fatal?", Wharton Alumni Magazine, Winter 2005, pp. 30-33.

work_gaj6wzmavbcefblnehjigbptt4 ----

Guest Editorial: Evidence-based practice in wound care: Toward addressing our knowledge gaps
Kath M. Bogie, DPhil

BACKGROUND
Chronic nonhealing wounds are a major complication for many veterans. Particularly at risk are veterans with reduced mobility, such as those who have suffered a spinal cord injury (SCI). Chronic wounds cause significant suffering, including profound negative effects on general physical health, socialization, financial status, body image, level of independence, and control. For individuals with SCI, the development of a pressure ulcer is one of the leading causes of readmission to the hospital after acute rehabilitation and a major source of morbidity and even mortality. The management of chronic wounds is also extremely costly for clinical facilities, from the acute care setting to long-term care. Many clinical practice guidelines (CPGs) for wound care are currently being issued with the overall goal of reducing the incidence and prevalence of this significant healthcare complication. CPGs have the potential to improve the standard of care for chronic wounds and decrease associated costs. A wealth of basic science and early clinical trials are being carried out in the field of chronic wound care research.
However, there remains a disconnect between early stage research efforts and implementation as routine clinical practice in the care of veterans with chronic wounds. The 2nd International Conference on Evidence Based Practice in Wound Care: The Effective Implementation of Pressure Ulcer Clinical Practice Guidelines was held in Cleveland, Ohio, in June 2009 [1]. This program was designed for the many specialties involved in the interdisciplinary field of wound care research. The focus of the conference was on topics related to the effective selection and implementation of evidence-based CPGs. Over 150 attendees were provided with educational tools to enable them to effectively implement evidence-based practice approaches to pressure ulcer care. While many felt that they were familiar with CPGs for their specialty, there were concerns that implementation could be hindered by lack of support and continuing education in best evidence-based practices for wound care. In conjunction with the 2nd International Conference on Evidence Based Practice in Wound Care, an expert panel was convened in June 2009 to develop a research agenda based on critical knowledge gaps regarding pressure ulcers in individuals with SCI and on the implementation of advanced clinical practices. A literature-based discussion of the consensus panel conclusions is presented as a second Guest Editorial in this issue of JRRD [2]. The research articles presented illustrate both preclinical and clinical research that will lead to improved rehabilitative and lifetime outcomes for at-risk veterans, particularly those with SCI.

JRRD, Volume 48, Number 3, 2011

ARTICLES IN THIS ISSUE

"Effect of sensory and motor electrical stimulation in vascular endothelial growth factor expression of muscle and skin in full thickness wound." Asadi et al.

Electrotherapy for the treatment of chronic wounds has long been known, and there have been many clinical reports on the technique.
However, widespread implementation has been limited by lack of definitive proof demonstrating the positive effects of electrical stimulation (ES) on wound healing. The current limitations imposed on electrotherapy usage by the Centers for Medicare and Medicaid Services and the lack of Food and Drug Administration approval reflect the underlying deficit in understanding of the physiological mechanism that is an essential precursor to optimization of clinical therapy. In their article, "Effect of sensory and motor electrical stimulation in vascular endothelial growth factor expression of muscle and skin in full thickness wound," Asadi et al. report on a preclinical study employing an acute animal wound model [3]. Varying ES paradigms were employed to study the effect on angiogenesis, as indicated by expression of vascular endothelial growth factor (VEGF). The authors found significant differences between the effects of sensory and motor ES. In this acute full-thickness wound model, sensory ES produced significantly higher levels of VEGF in skin wound tissue at 1 week postinjury, compared with both control and motor ES. No differences were found in muscle tissue underlying the wound. The authors propose that sensory ES may facilitate cell migration, as indicated by sustained up-regulation of VEGF. As noted by the authors, the mechanisms by which sensory ES up-regulates VEGF in skin wound tissue remain to be elucidated. The current article is a valuable contribution toward understanding the physiological mechanisms underlying electrotherapy, a necessary foundation for the implementation of a reliable and effective clinical treatment for chronic wounds.
"Assessing evidence supporting redistribution of pressure for pressure ulcer prevention." Sprigle and Sonenblum

Sprigle and Sonenblum's article, "Assessing evidence supporting redistribution of pressure for pressure ulcer prevention," presents a comprehensive review of preclinical and clinical studies [4]. It is well recognized that pressure ulcer development involves multiple risk factors. A pressure ulcer will occur as a consequence of load applied to the tissue above a specific threshold for a specific length of time. The authors review evidence to support addressing both magnitude and duration of loading in order to provide effective pressure ulcer prevention. Preclinical and clinical studies are considered. Research has shown that tissue tolerance does not have a universal safe loading threshold. The authors address the rationale behind several common clinical interventions and show that the evidence bases for some standard clinical practices are limited. For example, there is limited evidence that a uniform 2-hour frequency is optimal for all at-risk individuals. There is, however, evidence that turning schedules should depend on both the patient's condition and the support surface. Similarly, current guidelines suggest that frequent weight-shifting maneuvers are essential for all wheelchair users. Recommendations are based primarily on the SCI population, but still vary greatly. It has recently been found that no direct correlation between frequency of weight shifting and pressure ulcer incidence is evident. Further research is needed to determine optimal turning intervals for patients in bed and weight-shifting recommendations for wheelchair users. This comprehensive review article reinforces the need for evidence-based individualized risk management incorporating personal risk factors.

"Comparison of in-person and digital photograph assessment of stage III and IV pressure ulcers among veterans with spinal cord injuries." Terris et al.
The use of digital images to monitor wound healing can be a valuable component of a wound care telemedicine clinic and has the potential to improve the standard of care in the community. Terris et al. report on a clinical study to evaluate assessment agreement when severe pressure ulcers are evaluated in person and using digital photography. A standardized wound assessment and photography protocol was employed to evaluate pressure ulcers in a group of inpatients with SCI. In their article, "Comparison of in-person and digital photograph assessment of stage III and IV pressure ulcers among veterans with spinal cord injuries," the authors found moderate interrater agreement for both methods [5]. It would appear that, even with a structured evaluation protocol, in-person assessment of severe pressure ulcers is limited, with only 25 percent of wound description categories demonstrating moderate or better agreement. Undermining of the wound margin was most reliably identified by all raters. The authors consider that subjective perceptions of qualitative wound characteristics may be the primary source of interrater variation. At first consideration, this article may appear to provide counterintuitive findings, in that neither assessment method appears to be highly reliable. This may seem to have minimal relevance to evidence-based care. However, it is critically important to determine what does not work well so that improvements can be developed. Subjective perception is always involved in qualitative evaluation of a wound regardless of the method of viewing. The evidence indicates that a standardized wound assessment methodology based on quantitative measures is required.

"Interface shear and pressure characteristics of wheelchair seat cushions." Akins et al.

Shear stress is widely recognized as a risk factor in pressure ulcer development.
Several recent wheelchair cushion designs have incorporated technologies to reduce interface shear stresses. However, reliable and effective techniques to evaluate shear stresses have not been implemented. In the article "Interface shear and pressure characteristics of wheelchair seat cushions," Akins et al. report on a preclinical study of 21 commercially available wheelchair cushions [6]. The authors have developed a new methodology to quantify interface shear stress and pressure together with horizontal stiffness. Their findings suggest that the International Organization for Standardization (ISO) 16840-2 horizontal stiffness measure does not provide a complete picture of clinically relevant shear characteristics. Concurrent use of a combined interface pressure and shear force sensor in addition to ISO standard testing is recommended for comprehensive assessment of wheelchair cushion shear characteristics. This article represents an important contribution toward standardization of wheelchair cushion characterization, an essential precursor to the development of evidence-based guidelines for seating prescriptions.

CONCLUSIONS

Pressure ulcers remain a significant complication for many veterans with limited or restricted mobility, particularly those with SCI and the elderly. The overall population at risk for pressure ulcer development is increasing due to changing demographics, and is even more prevalent in the veteran population, with many aging veterans from previous conflicts and increasing numbers of severely injured individuals from Operation Iraqi Freedom/Operation Enduring Freedom. The Veterans Health Research and Development Assessment Program has recognized the need to develop and implement interventions that will lessen the incidence of pressure ulcer development in the veteran population.
The articles presented in this single-topic section address various aspects of the current gaps in evidence-based knowledge of wound management, particularly pressure ulcer care.

Kath M. Bogie, DPhil
Senior Research Scientist, Cleveland Advanced Platform Technology Center and Functional Electrical Stimulation Center, Louis Stokes Cleveland Department of Veterans Affairs Medical Center, Cleveland, OH; Assistant Professor, Departments of Orthopaedics and Biomedical Engineering, Case Western Reserve University, Cleveland, OH
Email: kmb3@case.edu

REFERENCES
1. 2nd International Conference on Evidence Based Practice in Wound Care: The Effective Implementation of Pressure Ulcer Clinical Practice Guidelines. Cleveland, OH: Case Western Reserve University; 2009 Jun.
2. Henzel MK, Bogie KM, Guihan M, Ho CH. Pressure ulcer management and research priorities for patients with spinal cord injury: Consensus opinion from SCI QUERI Expert Panel on Pressure Ulcer Research Implementation. J Rehabil Res Dev. 2011;48(3):xi–xxxii. DOI:10.1682/JRRD.2011.01.0011
3. Asadi MR, Torkaman G, Hedayati M. Effect of sensory and motor electrical stimulation in vascular endothelial growth factor expression of muscle and skin in full-thickness wound. J Rehabil Res Dev. 2011;48(3):195–202. DOI:10.1682/JRRD.2009.11.0182
4. Sprigle S, Sonenblum S. Assessing evidence supporting redistribution of pressure for pressure ulcer prevention: A review. J Rehabil Res Dev. 2011;48(3):213–24. DOI:10.1682/JRRD.2010.05.0102
5. Terris DD, Woo C, Jarczok MN, Ho CH. Comparison of in-person and digital photograph assessment of stage III and IV pressure ulcers among veterans with spinal cord injuries. J Rehabil Res Dev. 2011;48(3):225–34. DOI:10.1682/JRRD.2010.03.0036
6. Akins JS, Karg PE, Brienza DM. Interface shear and pressure characteristics of wheelchair seat cushions. J Rehabil Res Dev.
2011;48(3):203–12. DOI:10.1682/JRRD.2009.09.0145

This article and any supplementary material should be cited as follows: Bogie KM. Evidence-based practice in wound care: Toward addressing our knowledge gaps. J Rehabil Res Dev. 2011;48(3):vii–x. DOI:10.1682/JRRD.2011.01.0010

work_gcaqr2gsdjfxppzizzwfvqyzcu ----

CORRESPONDENCE
RESEARCH LETTERS

The Utility of Clinical Photographs in Dermatopathologic Diagnosis: A Survey Study

Dermatopathologists receive skin biopsy specimens accompanied by requisition slips that help direct their diagnoses. Owing to busy clinicians' time constraints and the increasing frequency with which the paperwork is completed by physician extenders rather than physicians, the requisition slips often do not contain adequate information. In some cases, this makes diagnosis more challenging. Prior reports have emphasized the importance of clinicopathologic correlation and the usefulness of clinical photography as an aid in diagnosis.1,2 The present study examines (1) how often photographs are currently being used as an aid in dermatopathologic diagnosis, (2) in which situations they are most likely to be helpful, and (3) whether dermatopathologists want to receive photographs more frequently.

Methods. After approval from the institutional review board at Eastern Virginia Medical School, Norfolk, and the American Society of Dermatopathology (ASDP) board of directors, an anonymous, voluntary, Web-based survey was e-mailed to all board-certified dermatopathologist members of the ASDP. In November 2009, the e-mail invitation was sent to all 816 members who provided their e-mail addresses to the ASDP; 34 of the e-mail addresses were considered invalid because an error message was received in response.
The survey remained available for responses for 30 days, and a reminder was sent during the final week. In addition to multiple choice answers, respondents were given the opportunity to write additional comments and feedback on the topic. Statistical analysis was performed using SAS computer software, version 9.2 (SAS Institute Inc, Cary, NC).

Results. There were 135 complete responses and 13 partial responses from all regions of the United States from both dermatology- and pathology-trained individuals. At least 2 respondents had completed both dermatology and pathology residencies. There were 4 international responses. Current frequency of photography usage is summarized in the Table. Analysis with the Cochran-Mantel-Haenszel statistic revealed no significant difference in frequency of use among geographic regions. Ninety-four percent of dermatopathologists stated that they would like to receive photographs more frequently. More respondents stated that clinical photography is beneficial in the evaluation of inflammatory skin diseases (92%) than in pigmented lesions (73%) or nonmelanocytic tumors and growths (56%). By Fisher exact test, we found no significant difference between pathology- and dermatology-trained dermatopathologists with regard to finding clinical photography useful in any category. Ninety-one percent of respondents stated that they were able to provide a more specific diagnosis with the aid of clinical photographs. For pigmented lesions, respondents stated that photography and history with measurements were helpful when only a portion of the lesion was biopsied. Otherwise, sampling error might lead to a misdiagnosis of melanoma. One respondent found photographs useful in reporting margins in complex surgical cases. The most preferred methods of photograph delivery included printed-out photographs (54%) and encrypted e-mail (50%), followed by posting on a secure Web site (21%) and images on compact disc (10%).
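The group comparison described above rests on the two-sided Fisher exact test for a 2x2 contingency table. As a sketch of what that computation involves, the function below derives the exact p-value directly from the hypergeometric distribution using only the Python standard library; the counts in the example are purely illustrative and are not the study's data.

```python
from math import comb

def fisher_exact_2x2(a, b, c, d):
    """Two-sided Fisher exact p-value for the 2x2 table [[a, b], [c, d]].

    With the margins fixed, each possible table has a hypergeometric
    probability; the two-sided p-value sums the probabilities of every
    table no more likely than the observed one. Integer arithmetic keeps
    the "no more likely" comparison exact.
    """
    r1, r2, c1, n = a + b, c + d, a + c, a + b + c + d
    w_obs = comb(r1, a) * comb(r2, c1 - a)   # weight of the observed table
    k_min, k_max = max(0, c1 - r2), min(r1, c1)
    tail = 0
    for k in range(k_min, k_max + 1):
        w = comb(r1, k) * comb(r2, c1 - k)
        if w <= w_obs:
            tail += w
    return tail / comb(n, c1)

# Hypothetical counts (not the survey's data): 8 of 10 in one training
# group vs 1 of 6 in the other answering "photographs are useful".
p = fisher_exact_2x2(8, 2, 1, 5)
print(round(p, 4))   # 0.035
```

For this illustrative table the result agrees with `scipy.stats.fisher_exact`; a p-value below 0.05 would indicate a difference between the two training backgrounds, whereas the letter reports no significant difference in any lesion category.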
In addition, several respondents suggested integration into electronic medical records when available. Respondents were also given the opportunity to write open-ended comments. Several dermatopathologists stated that clinical photographs are particularly useful when the specimen is submitted by a nondermatologist who provides a limited history and differential diagnosis. A potential drawback to photography mentioned is the time and cost that it takes for clinicians to take the photograph and send it to the dermatopathologist. In addition, photographs provided in a cumbersome format (compact disc or flash drive) may slow down sign-out for the dermatopathologist; a printed-out photograph attached to the requisition slip was the most preferred method of delivery for this reason. Furthermore, several people emphasized that clinical photographs should not replace a good history.

See Practice Gaps at the end of this letter

Table. Frequency of Receiving Clinical Photographs
Frequency, Times/y    Dermatopathologists, No. (%) (n = 135)
0                     18 (13)
1-5                   42 (31)
6-12                  26 (19)
>12                   49 (36)

Comment. Limitations of this study include an inability for respondents to state that they were trained in both pathology and dermatology residencies prior to obtaining a dermatopathology fellowship. Two respondents included this information in the comment section. In addition, physicians who felt strongly about the beneficial use of clinical photography might have been more likely to respond to the survey, thus creating response bias and overestimating the beneficial effect of photography.

(REPRINTED) ARCH DERMATOL/VOL 146 (NO. 11), NOV 2010 WWW.ARCHDERMATOL.COM ©2010 American Medical Association. All rights reserved.
Overall, this survey study revealed that dermatopathologists find clinical photography most beneficial in the diagnosis of inflammatory skin diseases, and they would like to receive photographs more frequently. They prefer a convenient method of delivery, most commonly a printed-out photograph attached to the requisition slip.

Accepted for Publication: June 4, 2010.
Author Affiliations: Department of Dermatology (Drs Mohr and Hood) and Epidemiology and Biostatistics Core (Mr Indika), Eastern Virginia Medical School, Norfolk.
Correspondence: Dr Mohr, Department of Dermatology, Eastern Virginia Medical School, 721 Fairfax Ave, Ste 200, Norfolk, VA 23507 (melinda.mohr@gmail.com).
Author Contributions: All authors had full access to all of the data in the report and take responsibility for the integrity of the data and the accuracy of the data analysis. Study concept and design: Mohr and Hood. Acquisition of data: Mohr. Analysis and interpretation of data: Mohr, Indika, and Hood. Drafting of the manuscript: Mohr and Indika. Critical revision of the manuscript for important intellectual content: Hood. Statistical analysis: Indika. Study supervision: Hood.
Financial Disclosure: None reported.
Additional Contributions: We thank the members of the ASDP who participated in this study and the Epidemiology and Biostatistics Core at Eastern Virginia Medical School, Norfolk, and Old Dominion University, Norfolk, for assisting with statistical analysis.

1. Fogelberg A, Ioffreda M, Helm KF. The utility of digital clinical photographs in dermatopathology. J Cutan Med Surg. 2004;8(2):116-121.
2. Kutzner H, Kempf W, Schärer L, Requena L. Optimizing dermatopathologic diagnosis with digital photography and internet: the significance of clinicopathologic correlation [in German]. Hautarzt. 2007;58(9):760-768.
PRACTICE GAPS

Submitting Clinical Photographs to Dermatopathologists to Facilitate Interpretations

Advances in immunohistochemical stains, molecular analysis, and laboratory technology have facilitated dermatopathologic diagnostic accuracy. Nevertheless, patients and clinicians are often frustrated when dermatopathologists render nonspecific diagnoses, which may lead to diagnostic and/or therapeutic uncertainty. Given the importance of clinicopathologic correlation (CPC), a practice gap exists between what dermatopathologists desire and what the clinicians provide.1,2

Mohr et al point out that one of the most important tools used to assist accurate dermatopathologic diagnosis is the information supplied on the dermatopathology form accompanying tissue specimens. They also report that dermatopathologists find the addition of a clinical photograph useful in rendering a microscopic diagnosis, especially when dealing with inflammatory skin diseases. The use of clinical photographs may be particularly helpful when dermatopathologists receive specimens with an inadequate clinical description on the dermatopathology form, which may be more of a concern with specimens submitted by nondermatologists who have less CPC experience.

Although clinical photographs are desired, it is extremely infrequent for a dermatopathologist to be provided with one. Barriers to sending clinical photographs with biopsy specimens include the time it takes to create and implement standard operating procedures (SOP), which include identifying the body region to be photographed, obtaining consent from the patient, taking the digital photograph, downloading the photographic file, labeling the photograph, and either printing or electronically sending the picture to the pathologist.
Other barriers are limited computer file storage space; the costs of obtaining 1 or more digital cameras for the physician office; and compliance with the secure data transfer standards of the Health Insurance Portability and Accountability Act and the Health Information Technology for Economic and Clinical Health Act. It is also possible that some patients may object to photography, particularly of specific body parts.

This gap between what dermatopathologists desire and what the clinicians provide can be narrowed by improving the quality of information supplied by the clinician to the dermatopathologist. Education directed at office efficiency should include instruction on efficient processes to incorporate patient photography. Mohr et al underscore that patient care will benefit when clinicians improve the quality and quantity of the information provided, and they encourage incorporation of photography as part of routine biopsy procedures. Development of a more comprehensive way of communicating information to dermatopathologists is needed. Clinician-friendly pathology forms and reminder systems to include clinical photographs may help.

Considering patient volume and the increasing time limitations of office visits, it would be optimal for clinicians to train an assistant to take and process the photographs for relevant patients. The SOP should be defined for this process to assist personnel in implementation without loss of efficiency. Creating an SOP for a proper and complete provision of information, including completion of requisition forms and taking clinical photographs, will help establish uniformity of photographic information to the dermatopathologist. Hard copies of photographs are not always necessary.
Digital technology provides a variety of media to safely transmit images, including secure Internet connections and storage on compact discs and flash drives, to protect the confidentiality of patient photographic information, usually considered personal health information. Data transfer between dermatologists and dermatopathologists can be optimized to maximize the quality of dermatopathology diagnosis.

Melinda R. Mohr, MD
S. H. Sathish Indika, MS
Antoinette F. Hood, MD
Alejandra Vivas, MD
Robert S. Kirsner, MD, PhD
work_gdkefu4hrnb4fbmmprqrj4svpy ---- Dataset Paper

Spatial and Temporal Abundance of Interacting Populations of White Clover and Grass Species as Assessed by Image Analyses

Anne Kjersti Bakken,1 Helge Bonesmo,2 and Bård Pedersen3

1 Norwegian Institute for Agricultural and Environmental Research (Bioforsk), 7512 Stjørdal, Norway
2 Norwegian Agricultural Economic Research Institute, Statens Hus, P.O. Box 4718, Sluppen, 7468 Trondheim, Norway
3 Norwegian Institute for Nature Research, P.O. Box 5685, Sluppen, 7485 Trondheim, Norway

Correspondence should be addressed to Anne Kjersti Bakken; anne.kjersti.bakken@bioforsk.no

Received 2 April 2014; Accepted 26 September 2014

Academic Editor: Nicola Pizzolato

Copyright © 2015 Anne Kjersti Bakken et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

The dataset comprises detailed mappings of two communities of interacting populations of white clover (Trifolium repens L.) and grass species under differing experimental treatments over 4-5 years. Information from digital photographs acquired two times per season has been processed into gridded data and documents the temporal and spatial dynamics of the species that followed from a wide range of spatial configurations that arose during the study period. The data contribute a unique basis for validation and further development of previously published models for the dynamics and population oscillations in grass-white clover swards. They will be well suited for estimating parameters in spatially explicit versions of these models, like neighborhood based models that incorporate both the dispersal and the local nature of plant-plant interactions.

1. Introduction

White clover (Trifolium repens L.)
is a clonal legume that is commonly grown in grasslands managed for grazing and forage production. The facilitation of long term, stable coexistence of clover and grass in such a system is a challenge [1–4] and the community dynamics are still poorly understood. The community has therefore turned out to be an important model system for studying and understanding ecological interactions between plants and the consequences of these interactions for coexistence of populations of different plant species.

Various mechanisms have been proposed to cause population oscillations and formation of clover patches in pastures and swards. Amongst them are facilitation and exploitation [5], balancing local extinction and invasion of clover [6], and synchronous fragmentation of ramets into small and vulnerable individuals [7]. Grass and clover populations are suggested to remain stable at a nitrogen level that balances their competitive advantages; otherwise oscillations are expected [5, 8]. An increase in nitrogen availability in the sward increases the abundance of grass, and clover is excluded by grass. Time delayed responses to changes in the availability of nitrogen may increase the probability of oscillations [5]. In order to stabilize field scale oscillations of white clover population size, it is suggested that it must have a patchy distribution, be able to invade space where it can achieve high growth rate, and avoid space where it will be outcompeted [9]. In addition, environmental and demographic stochasticity [10] affect dynamics of real ecological systems of interacting species. Such stochastic noise may also induce dynamic phenomena like oscillations, local extinctions, and spatial pattern formation, features that characterize dynamics of grass-clover systems [11, 12]. Thus, both the temporal and spatial variation in clover abundance may be relevant to deduce mechanisms that cause the dynamics.
The above theories suggest it is especially relevant to study the spatiotemporal dynamics of clover-grass systems under contrasting environmental conditions that differ with respect to nitrogen availability and disturbance regimes. Simulation models have been developed to explore the spatiotemporal dynamics of white clover populations [5, 6, 8, 9, 13]. There are, however, few reports from field studies contributing data on spatial dynamics [6, 14–17] and, to the best of our knowledge, none that cover temporal and spatial abundance of both grass and clover in contrasting environments.

Hindawi Publishing Corporation, Dataset Papers in Science, Volume 2015, Article ID 620164, 9 pages, http://dx.doi.org/10.1155/2015/620164

We have constructed an experiment with swards dominated by white clover and smooth meadow-grass (Poa pratensis ssp. pratensis L.), with different management regimes with respect to cutting frequency and nitrogen supply. The management and treatments were chosen to develop environments that gave differing competitive advantages of grass and clover. Both the spatial distribution and abundance of white clover and grass were recorded during four or five years by digital photography and image analyses. The results were further processed to gridded data appropriate as inputs to spatial statistical analyses as suggested by Pedersen et al. [18] or simulation models for population dynamics in space and time.

2. Methodology

Site description and experimental layout are as follows. The experiment was initiated in 2001 and undertaken at two sites in pastures/leys dominated by white clover and smooth meadow-grass. The first one was located at Mære Agricultural School (63°56′N, 11°25′E, 40 m a.s.l.) in central Norway in a pasture previously grazed by dairy cattle. The species sown in 1993 besides smooth meadow-grass and white clover were timothy (Phleum pratense L.) and meadow fescue (Festuca pratensis Hudson).
In 2001 the timothy and meadow fescue had nearly disappeared and there was a low appearance of weeds. The soil at the site is a sandy loam, and the content of organic matter in the topsoil is 5%.

The second experimental site was located at Kvithamar Research Centre (63°29′N, 10°52′E, 40 m a.s.l.) in a ley sown in spring 2000 with white clover (variety Norstar, Graminor AS, Stange, Norway) and smooth meadow-grass (variety Entopper, Innoseeds b.b., Vlijmen, The Netherlands). Altogether 18 main plots (4 m × 8 m each) were split into small plots (0.5 m × 0.5 m) and sown either with white clover at a rate of 0.8 g m−2 or smooth meadow-grass at a rate of 1.7 g m−2 or a mixture of both species (0.5 g m−2 + 1.5 g m−2). The three types of small plots were randomly distributed within main plots and the respective numbers of them in each main plot were 20 with clover, 20 with grass, and 88 with a mixture of the two species. The leys were cut once in late summer 2000 and were not fertilized until spring 2001. The soil at the site is a silt loam (20–25% clay and 55–65% silt) and the content of organic matter in the topsoil is 8%.

From spring 2001 until autumn 2004 (Mære)/spring 2005 (Kvithamar), a factorial experiment with two harvesting regimes combined with three nitrogen-fertilizer levels was conducted at both sites. At Mære the treatments were replicated four times on 4 m × 4 m plots within a total area of 480 m2 and at Kvithamar there were three replicates laid out on 4 m × 8 m plots within an area of 684 m2. Plots were bordered with 20 cm wide strips kept free from vegetation by regular spraying with glyphosate. The plots were cut at a stubble height of 5 cm either two (late June and late August) or four times (late May, late June, late July, and late August) each growing season. The first/second and second/fourth cuts were always synchronised. Nitrogen was supplied from a compound mineral fertiliser (N-P-K, 18:3:15) at rates of 0, 10, or 30 g m−2 yr−1.
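The randomized sowing layout described above (20 clover, 20 grass, and 88 mixture subplots per Kvithamar main plot) can be sketched as a simple shuffle. This is an illustrative reimplementation, not the authors' original randomization procedure; the function name and treatment labels are invented for the example.

```python
import random

def allocate_subplots(n_clover=20, n_grass=20, n_mixture=88, seed=None):
    """Randomly distribute the three sowing treatments over the 128
    small plots (0.5 m x 0.5 m) of one 4 m x 8 m main plot, with the
    per-treatment counts given in the text (illustrative sketch)."""
    subplots = (["clover"] * n_clover
                + ["grass"] * n_grass
                + ["mixture"] * n_mixture)
    rng = random.Random(seed)   # seed only to make the example repeatable
    rng.shuffle(subplots)
    return subplots

layout = allocate_subplots(seed=1)
```

Reading the returned list row by row (16 subplots per row, 8 rows) gives one possible spatial arrangement of the sowing treatments within a main plot.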
In the first treatment, 1.5 g P and 5.1 g K m−2 were supplied from a P-K fertilizer. In the two-cut regime, 60% of the fertiliser was distributed in early spring and 40% after the first cut, whereas in the four-cut regime 40% was supplied in spring and 20% after each of the cuts in late May, late June, and late July.

Dry matter yields were recorded at each of the cuts at Kvithamar. For the years 2002–2005, the content of clover in plotwise subsamples of the yield was determined by Near Infrared Reflectance Spectroscopy [19]. At Mære, there were no yield recordings. On both sites, just before the cuts in late August in 2003 and in late June and August in 2004, the height of the plant canopy was determined by a rising-plate meter [20]. There were 128 and 64 recording points within each plot at Kvithamar (Figure 1) and Mære (Figure 2), respectively. Precipitation, temperature, and global radiation for the experimental period are available at Agrometeorology Norway, lmt.bioforsk.no, stations Kvithamar and Mære.

Plots harvested two times per season and supplied with high levels of N were, over time, invaded by tussock-forming and more erect and high-yielding grass species than smooth meadow-grass. They were Elytrigia repens L., P. pratense L., Agrostis capillaris L., and Lolium perenne L. After harvest, the initial regrowth rate, expressed as plant coverage of the soil surface, was slower in such plots. In line with this, bare soil at the time of image acquisition (cf. next section) does not indicate low grass abundance or production.

Determination of plant coverage, image acquisition, and processing were as follows. To determine the coverage of clover, grass, and dicotyledonous weeds in the experimental fields, they were mapped by means of digital photography about ten days after the cuts in late June and August every year from 2001 and onwards (Figure 3).
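The fertiliser splits described above can be expressed as a small helper. This is a minimal sketch assuming only the percentages and annual rates stated in the text; the function name and return format are my own.

```python
def fertilizer_schedule(annual_n, cuts):
    """Split an annual N rate (g m-2 yr-1) into per-application doses,
    following the splits given in the text: two-cut regime 60% in early
    spring and 40% after the first cut; four-cut regime 40% in spring
    and 20% after each of the late May, late June, and late July cuts."""
    if cuts == 2:
        fractions = [0.60, 0.40]
    elif cuts == 4:
        fractions = [0.40, 0.20, 0.20, 0.20]
    else:
        raise ValueError("regimes in the experiment were 2 or 4 cuts")
    return [round(annual_n * f, 2) for f in fractions]

two_cut = fertilizer_schedule(30, 2)   # highest N level, two-cut regime
four_cut = fertilizer_schedule(10, 4)  # intermediate N level, four-cut regime
```

For the 30 g m−2 yr−1 treatment under two cuts this gives 18 g in spring and 12 g after the first cut; the doses always sum back to the annual rate.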
To ensure that the images were taken at the same positions in the fields at each acquisition, 1.5 m steel tubes with a diameter of 1.9 cm were inserted into the ground at the corners of the 4.0 m × 4.0 m and 4.0 m × 8.0 m plots. Portable rails were anchored on these tubes on each side of the plot; on the rails, across the plot, a moveable bar was placed, and a sliding 0.5 m × 0.5 m metal frame that delimited the squares to be analysed sequentially was placed upon the bar (Figure 3). The area to be photographed was shielded from sunlight by an opaque parasol, and the internal flash of the camera was used to eliminate shadows and ensure even illumination. The camera was mounted to a rack on the metal frame such that the distance from the camera to the ground was kept constant at 80 cm. A digital Olympus Camedia C-3040Zoom camera was used, and the images were stored in a compressed format that gave a spatial resolution of approximately 1600 × 1600 pixels m−2.

The information from the digital colour photographs was processed by software ("Trifolium.exe") specially designed for the purpose. The software classifies each pixel in the image as clover, grass, weeds, and bare ground. A thorough description of the image analysis techniques is given by Bonesmo et al. [21].

Figure 1: Orientation of the maps at Kvithamar: (a) 18 plots (4 m × 8 m each) and (b) 128 subplots (small plots: 0.5 m × 0.5 m each).
The 0.5 m × 0.5 m single images were, after processing by the software (Figure 4), combined into one picture (TIF format) (Figure 5) or data file for each plot after identifying overlap zones and removing them from one of the overlapping images. The procedure for removing overlap zones as well as the software Trifolium.exe is available from the authors. The copyright for the source code of Trifolium.exe is not owned by the authors and cannot be published here.

Figure 2: Orientation of the maps at Mære: (a) 24 plots (4 m × 4 m each) and (b) 64 subplots (small plots: 0.5 m × 0.5 m each).

Figure 3: Positioned image acquisition to map the grass-clover sward.

3. Dataset Description

The dataset associated with this Dataset Paper consists of 21 items which are described as follows.

Dataset Item 1 (Images). Photos for each plot and image acquisition at Kvithamar. All photos together covered the whole plot. The photos are named according to the number of image acquisition (9 image acquisitions), plot number (18 main plots), and subplot number (128 subplots). These photos were the basis for the image analyses conducted by the software Trifolium.exe. There are altogether 20736 photos (9 image acquisitions × 18 plots/acquisition × 128 photos/plot). A few photographs are missing.

Dataset Item 2 (Images). Photos for each plot and image acquisition at Mære. All photos together covered the whole plot. The photos are named according to the number of image acquisition (8 image acquisitions), plot number (24 main plots), and subplot number (64 subplots). These photos were the basis for the image analyses conducted by the software Trifolium.exe.
There are altogether 12288 photos (8 image acquisitions × 24 plots/acquisition × 64 photos/plot). A few photographs are missing.

Figure 4: Outputs at sequential steps in the analysis of (a) a digital colour image by the software Trifolium.exe, (b) grey level image, (c) edge image, and (d) classified image, where red indicates clover, blue indicates grass, and black indicates soil (published with permission from Taylor & Francis Group, and previously printed in [21]).

Figure 5: An example of a 4 m × 4 m plot with patterns occurring from identification of pixels either as clover (red), grass (blue), bare ground (black), or dicotyledonous weeds (white).

Dataset Item 3 (Table). Treatments allocated to the different plots are listed together with harvest dates, data for harvested yield, and clover proportion in dried yield samples, as determined by NIRS. Yields were recorded at the site Kvithamar only. The column Site identifies whether the site is Kvithamar or Mære. The column Plot Number presents the plot number within site (1–18 at Kvithamar and 1–24 at Mære). In the column Harvesting Regime, 1 indicates two cuts per season and 2 indicates four cuts per season. In the column Nitrogen Supply Rate, 1 indicates 0 g, 2 indicates 10 g, and 3 indicates 30 g. The column Yield One presents the dry matter yield (g m−2) at harvest in spring (May/June); Yield Two, the dry matter yield (g m−2) at harvest in early summer (June/July); Yield Three, the dry matter yield (g m−2) at harvest in late summer (July/August); Yield Four, the dry matter yield (g m−2) at harvest in autumn (August/September).
The column Clover One presents the proportion (%, weight/weight) of clover in yields in spring; Clover Two, the proportion (%, weight/weight) of clover in yields in early summer; Clover Three, the proportion (%, weight/weight) of clover in yields in late summer; Clover Four, the proportion (%, weight/weight) of clover in yields in autumn. The column Date One presents the date of harvest in spring; Date Two, the date of harvest in early summer; Date Three, the date of harvest in late summer; Date Four, the date of harvest in autumn. The harvest dates at Mære were 2-3 days later than respective dates at Kvithamar. A dot (⋅) in a cell indicates that no value was recorded for this combination of harvesting regime, N-supply rate, year, and time of season.

Column 1: Site
Column 2: Plot Number
Column 3: Harvesting Regime
Column 4: Nitrogen Supply Rate (g m−2 y−1)
Column 5: Year
Column 6: Yield One (g m−2)
Column 7: Yield Two (g m−2)
Column 8: Yield Three (g m−2)
Column 9: Yield Four (g m−2)
Column 10: Clover One (%)
Column 11: Clover Two (%)
Column 12: Clover Three (%)
Column 13: Clover Four (%)
Column 14: Date One
Column 15: Date Two
Column 16: Date Three
Column 17: Date Four

Dataset Item 4 (Matrices). Registration of the first image acquisition at Kvithamar containing 18 matrices that contain 2400 × 1200 cells describing the 8 m × 4 m plots. Each cell in the matrices contains one digit (1, 2, 3, or 4), which identifies the character of single cells as being covered by clover (1), grass (2), bare ground (3), or dicotyledonous weeds (4). For a few plots, there are smaller areas that are not identified. Such cells are annotated with the digit 0.

Dataset Item 5 (Matrices). Registration of the second image acquisition at Kvithamar containing 18 matrices that contain 2400 × 1200 cells describing the 8 m × 4 m plots.
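The Dataset Item 3 layout described above, with a dot marking values that were not recorded, can be read with a small parser. The row shown is hypothetical, invented purely for illustration, and the variable names are my own; the field order follows the column list in the text.

```python
def parse_value(token):
    """One Dataset Item 3 cell: a dot means 'not recorded' (-> None);
    anything else is read as a number."""
    return None if token == "." else float(token)

# Hypothetical row: Site, Plot Number, Harvesting Regime, Nitrogen Supply
# Rate, Year, then Yields One-Four and Clover One-Four (numbers invented
# for illustration; the real rows follow the column list above).
row = "Kvithamar 3 1 2 2002 341.5 . 402.0 . 12.3 . 18.8 .".split()

site = row[0]
plot, regime, n_level, year = (int(v) for v in row[1:5])
yields = [parse_value(t) for t in row[5:9]]    # g m-2, None where no cut
clover = [parse_value(t) for t in row[9:13]]   # % of dry matter, w/w
```

In this two-cut example only the spring and late-summer columns carry values; the early-summer and autumn columns stay `None`.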
Each cell in the matrices contains one digit (1, 2, 3, or 4), which identifies the character of single cells as being covered by clover (1), grass (2), bare ground (3), or dicotyledonous weeds (4). For a few plots, there are smaller areas that are not identified. Such cells are annotated with the digit 0.

Dataset Item 6 (Matrices). Registration of the third image acquisition at Kvithamar containing 18 matrices that contain 2400 × 1200 cells describing the 8 m × 4 m plots. Each cell in the matrices contains one digit (1, 2, 3, or 4), which identifies the character of single cells as being covered by clover (1), grass (2), bare ground (3), or dicotyledonous weeds (4). For a few plots, there are smaller areas that are not identified. Such cells are annotated with the digit 0.

Dataset Item 7 (Matrices). Registration of the fourth image acquisition at Kvithamar containing 18 matrices that contain 2400 × 1200 cells describing the 8 m × 4 m plots. Each cell in the matrices contains one digit (1, 2, 3, or 4), which identifies the character of single cells as being covered by clover (1), grass (2), bare ground (3), or dicotyledonous weeds (4). For a few plots, there are smaller areas that are not identified. Such cells are annotated with the digit 0.

Dataset Item 8 (Matrices). Registration of the fifth image acquisition at Kvithamar containing 18 matrices that contain 2400 × 1200 cells describing the 8 m × 4 m plots. Each cell in the matrices contains one digit (1, 2, 3, or 4), which identifies the character of single cells as being covered by clover (1), grass (2), bare ground (3), or dicotyledonous weeds (4). For a few plots, there are smaller areas that are not identified. Such cells are annotated with the digit 0.

Dataset Item 9 (Matrices). Registration of the sixth image acquisition at Kvithamar containing 18 matrices that contain 2400 × 1200 cells describing the 8 m × 4 m plots.
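Given one of these matrices, per-plot coverage fractions follow directly from the digit codes (1 clover, 2 grass, 3 bare ground, 4 weeds, 0 unidentified). A minimal sketch, with a tiny toy matrix standing in for a real 2400 × 1200 one; the function and label names are my own.

```python
def coverage_fractions(matrix):
    """Fraction of a plot covered by each class, computed from one
    Dataset Item matrix of digit codes (rows of ints)."""
    labels = {1: "clover", 2: "grass", 3: "bare_ground",
              4: "weeds", 0: "unidentified"}
    counts = {name: 0 for name in labels.values()}
    total = 0
    for row in matrix:
        for code in row:
            counts[labels[code]] += 1
            total += 1
    return {name: n / total for name, n in counts.items()}

# Toy 3 x 4 matrix for illustration only (real Kvithamar matrices
# are 2400 x 1200 cells, Maere matrices 1200 x 1200 cells).
toy = [[1, 1, 2, 2],
       [2, 2, 3, 1],
       [4, 2, 2, 0]]
cover = coverage_fractions(toy)
```

Summed over all classes the fractions add to 1, so the same routine also reports how much of a plot the classifier left unidentified.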
Each cell in the matrices contains one digit (1, 2, 3, or 4), which identifies the character of single cells as being covered by clover (1), grass (2), bare ground (3), or dicotyledonous weeds (4). For a few plots, there are smaller areas that are not identified. Such cells are annotated with the digit 0.

Dataset Item 10 (Matrices). Registration of the seventh image acquisition at Kvithamar containing 18 matrices that contain 2400 × 1200 cells describing the 8 m × 4 m plots (plot 9 is missing). Each cell in the matrices contains one digit (1, 2, 3, or 4), which identifies the character of single cells as being covered by clover (1), grass (2), bare ground (3), or dicotyledonous weeds (4). For a few plots, there are smaller areas that are not identified. Such cells are annotated with the digit 0.

Dataset Item 11 (Matrices). Registration of the eighth image acquisition at Kvithamar containing 18 matrices that contain 2400 × 1200 cells describing the 8 m × 4 m plots (plot 8 is missing). Each cell in the matrices contains one digit (1, 2, 3, or 4), which identifies the character of single cells as being covered by clover (1), grass (2), bare ground (3), or dicotyledonous weeds (4). For a few plots, there are smaller areas that are not identified. Such cells are annotated with the digit 0.

Dataset Item 12 (Matrices). Registration of the ninth image acquisition at Kvithamar containing 18 matrices that contain 2400 × 1200 cells describing the 8 m × 4 m plots. Each cell in the matrices contains one digit (1, 2, 3, or 4), which identifies the character of single cells as being covered by clover (1), grass (2), bare ground (3), or dicotyledonous weeds (4). For a few plots, there are smaller areas that are not identified. Such cells are annotated with the digit 0.

Dataset Item 13 (Matrices). Registration of the first image acquisition at Mære containing 24 matrices that contain 1200 × 1200 cells describing the 4 m × 4 m plots.
Each cell in the matrices contains one digit (1, 2, 3, or 4), which identifies the character of single cells as being covered by clover (1), grass (2), bare ground (3), or dicotyledonous weeds (4). For a few plots, there are smaller areas that are not identified. Such cells are annotated with the digit 0.

Dataset Item 14 (Matrices). Registration of the second image acquisition at Mære containing 24 matrices that contain 1200 × 1200 cells describing the 4 m × 4 m plots (plots 11, 19, and 20 are missing). Each cell in the matrices contains one digit (1, 2, 3, or 4), which identifies the character of single cells as being covered by clover (1), grass (2), bare ground (3), or dicotyledonous weeds (4). For a few plots, there are smaller areas that are not identified. Such cells are annotated with the digit 0.

Dataset Item 15 (Matrices). Registration of the third image acquisition at Mære containing 24 matrices that contain 1200 × 1200 cells describing the 4 m × 4 m plots. Each cell in the matrices contains one digit (1, 2, 3, or 4), which identifies the character of single cells as being covered by clover (1), grass (2), bare ground (3), or dicotyledonous weeds (4). For a few plots, there are smaller areas that are not identified. Such cells are annotated with the digit 0.

Dataset Item 16 (Matrices). Registration of the fourth image acquisition at Mære containing 24 matrices that contain 1200 × 1200 cells describing the 4 m × 4 m plots. Each cell in the matrices contains one digit (1, 2, 3, or 4), which identifies the character of single cells as being covered by clover (1), grass (2), bare ground (3), or dicotyledonous weeds (4). For a few plots, there are smaller areas that are not identified. Such cells are annotated with the digit 0.

Dataset Item 17 (Matrices). Registration of the fifth image acquisition at Mære containing 24 matrices that contain 1200 × 1200 cells describing the 4 m × 4 m plots.
Each cell in the matrices contains one digit (1, 2, 3, or 4), which identifies the character of single cells as being covered by clover (1), grass (2), bare ground (3), or dicotyledonous weeds (4). For a few plots, there are smaller areas that are not identified. Such cells are annotated with the digit 0.

Dataset Item 18 (Matrices). Registration of the sixth image acquisition at Mære containing 24 matrices that contain 1200 × 1200 cells describing the 4 m × 4 m plots. Each cell in the matrices contains one digit (1, 2, 3, or 4), which identifies the character of single cells as being covered by clover (1), grass (2), bare ground (3), or dicotyledonous weeds (4). For a few plots, there are smaller areas that are not identified. Such cells are annotated with the digit 0.

Dataset Item 19 (Matrices). Registration of the seventh image acquisition at Mære containing 24 matrices that contain 1200 × 1200 cells describing the 4 m × 4 m plots. Each cell in the matrices contains one digit (1, 2, 3, or 4), which identifies the character of single cells as being covered by clover (1), grass (2), bare ground (3), or dicotyledonous weeds (4). For a few plots, there are smaller areas that are not identified. Such cells are annotated with the digit 0.

Dataset Item 20 (Matrices). Registration of the eighth image acquisition at Mære containing 24 matrices that contain 1200 × 1200 cells describing the 4 m × 4 m plots. Each cell in the matrices contains one digit (1, 2, 3, or 4), which identifies the character of single cells as being covered by clover (1), grass (2), bare ground (3), or dicotyledonous weeds (4). For a few plots, there are smaller areas that are not identified. Such cells are annotated with the digit 0.

Dataset Item 21 (Table). Recordings of plant canopy height at Kvithamar and Mære in late August in 2003, late June in 2004, and late August in 2004. There were 128 and 64 recording points within each plot at Kvithamar and Mære, respectively.
Column 1: Cut Date
Column 2: Site
Column 3: Plot Number
Column 4: Subplot Number
Column 5: Height (cm)

4. Concluding Remarks

The dataset presented here will contribute a unique basis for validation and further development of previously published models for the dynamics and population oscillations in grass-white clover swards. The set documents the spatial dynamics of the two species that followed from a wide range of spatial configurations that arose during the experiment. It should therefore be well suited for estimating parameters in spatially explicit versions of these models, like neighborhood based models that incorporate both the dispersal and the local nature of plant-plant interactions (e.g., [22–25]). For example, the hypothesis that a patchy distribution of clover and its ability to invade space where it is not outcompeted will damp oscillations may be tested.

Dataset Availability

The dataset associated with this Dataset Paper is dedicated to the public domain using the CC0 waiver and is available at http://dx.doi.org/10.1155/2015/620164/dataset.

Conflict of Interests

There is no conflict of interests in the access or publication of this dataset.

Acknowledgments

Anne Stine Ekker and Anne Langerud have contributed substantially to this work. The Norwegian Research Council and Bioforsk are acknowledged for financial support.

References

[1] K. W. Steele and P. Shannon, “Concepts relating to the nitrogen economy of a Northland intensive beef farm,” in Nitrogen Balances in New Zealand Ecosystems, P. Gandar, Ed., pp. 85–89, Department of Scientific and Industrial Research, Wellington, New Zealand, 1982.
[2] M. G. Lambert, D. A. Clark, D. A. Grant, and D. A. Costall, “Influence of fertiliser and grazing management on North Island moist hill country. 2. Pasture botanical composition,” New Zealand Journal of Agricultural Research, vol. 29, no. 1, pp. 1–10, 1986.
[3] D. S. Rickard and S. D.
McBride, “Irrigated and nonirrigated pasture production at Winchmore,” Tech. Rep. 21, Winchmore Irrigation Research Station, 1986.
[4] D. A. Davies, M. Fothergill, and C. T. Morgan, “Assessment of contrasting perennial ryegrasses, with and without white clover, under continuous sheep stocking in the uplands. 5. Herbage production, quality and intake in years 4–6,” Grass and Forage Science, vol. 48, no. 3, pp. 213–222, 1993.
[5] S. Schwinning and A. J. Parsons, “Analysis of the coexistence mechanisms for grasses and legumes in grazing systems,” Journal of Ecology, vol. 84, no. 6, pp. 799–813, 1996.
[6] M. L. Cain, S. W. Pacala, J. A. Silander Jr., and M.-J. Fortin, “Neighborhood models of clonal growth in the white clover Trifolium repens,” The American Naturalist, vol. 145, no. 6, pp. 888–917, 1995.
[7] M. Fothergill, D. A. Davies, and G. J. Daniel, “Morphological dynamics and seedling recruitment in young swards of three contrasting cultivars of white clover (Trifolium repens) under continuous stocking with sheep,” Journal of Agricultural Science, vol. 128, no. 2, pp. 163–172, 1997.
[8] J. H. M. Thornley, J. Bergelson, and A. J. Parsons, “Complex dynamics in a carbon-nitrogen model of a grass-legume pasture,” Annals of Botany, vol. 75, no. 1, pp. 79–94, 1995.
[9] S. Schwinning and A. J. Parsons, “A spatially explicit population model of stoloniferous N-fixing legumes in mixed pasture with grass,” Journal of Ecology, vol. 84, no. 6, pp. 815–826, 1996.
[10] R. Lande, S. Engen, and B.-E. Sæther, Stochastic Population Dynamics in Ecology and Conservation, Oxford University Press, Oxford, UK, 2003.
[11] B. Spagnolo, A. Fiasconaro, and D. Valenti, “Noise induced phenomena in Lotka-Volterra systems,” Fluctuation and Noise Letters, vol. 3, no. 2, pp. L177–L185, 2003.
[12] B. Spagnolo, D. Valenti, and A. Fiasconaro, “Noise in ecosystems: a short review,” Mathematical Biosciences and Engineering, vol. 1, no. 1, pp. 185–211, 2004.
[13] J. M. Sharp, G. R. Edwards, and M. J.
Jeger, “A spatially explicit population model of the effect of spatial scale of heterogeneity in grass-clover grazing systems,” The Journal of Agricultural Science, vol. 152, no. 3, pp. 394–407, 2014.
[14] C. A. Marriott, M. A. Smith, and M. A. Baird, “The effect of urine on clover performance in a grazed upland sward,” The Journal of Agricultural Science, vol. 109, no. 1, pp. 177–185, 1987.
[15] T. E. Thorhallsdottir, “The dynamics of a grassland community: a simultaneous investigation of spatial and temporal heterogeneity at various scales,” Journal of Ecology, vol. 78, no. 4, pp. 884–908, 1990.
[16] G. R. Edwards, A. J. Parsons, J. A. Newman, and I. A. Wright, “The spatial pattern of vegetation in cut and grazed grass/white clover pastures,” Grass and Forage Science, vol. 51, no. 3, pp. 219–231, 1996.
[17] R. Thanopoulos, C. A. Marriott, and N. Sidiras, “Dynamics of perennial ryegrass and white clover in sown swards in NW Greece,” Grass and Forage Science, vol. 55, no. 4, pp. 361–366, 2000.
[18] B. Pedersen, H. Bonesmo, and A. K. Bakken, “Analysing white clover patchiness by wavelets and quadrat variance methods,” in Adaption and Management of Forage Legumes—Strategies for Improved Reliability in Mixed Swards, B. E. Frankow-Lindberg, R. P. Collins, A. Lüscher, M. T. Sébastia, and Á. Helgadóttir, Eds., Proceedings of the 1st COST 852 workshop held in Ystad, Sweden, 20–22 September, pp. 286–289, Department of Ecology and Crop Production Science, Swedish University of Agricultural Sciences, 2004.
[19] G. Fystro and T. Lunnan, “Analysar av grovfôrkvalitet på NIRS,” Bioforsk Fokus, vol. 1, no. 3, pp. 180–181, 2006.
[20] F. L. Mould, “Use of a modified rising-plate meter to assess changes in sward height and structure,” Norwegian Journal of Agricultural Sciences, vol. 6, no. 4, pp. 375–382, 1992.
[21] H. Bonesmo, K. Kaspersen, and A. K.
Bakken, “Evaluating an image analysis system for mapping white clover pastures,” Acta Agriculturae Scandinavica B: Soil and Plant Science, vol. 54, no. 2, pp. 76–82, 2004.
[22] B. Bolker and S. W. Pacala, “Using moment equations to understand stochastically driven spatial pattern formation in ecological systems,” Theoretical Population Biology, vol. 52, no. 3, pp. 179–197, 1997.
[23] B. M. Bolker, S. W. Pacala, and S. A. Levin, “Moment methods for ecological processes in continuous space,” in The Geometry of Ecological Interactions: Simplifying Spatial Complexity, U. Dieckmann, R. Law, and J. A. J. Metz, Eds., pp. 388–411, Cambridge University Press, Cambridge, UK, 2000.
[24] Y. Iwasa, “Lattice models and pair approximation in ecology,” in The Geometry of Ecological Interactions: Simplifying Spatial Complexity, U. Dieckmann, R. Law, and J. A. J. Metz, Eds., pp. 227–251, Cambridge University Press, Cambridge, UK, 2000.
[25] R. Law and U. Dieckmann, “Moment approximations of individual-based models,” in The Geometry of Ecological Interactions: Simplifying Spatial Complexity, U. Dieckmann, R. Law, and J. A. J. Metz, Eds., pp. 252–270, Cambridge University Press, Cambridge, UK, 2000.
work_gdzivpwkgnbaxn7bjuhhtwoldy ---- Rev Cubana Estomatol.
2015;52(4). Órgano Oficial de la Sociedad Cubana de Estomatología. ISSN 1561-297X. http://www.revestomatologia.sld.cu/index.php/est/article/view/745/229

BRIEF COMMUNICATION

Clinical dental photography: tips for daily practice (Fotografía clínica estomatológica: consejos para la práctica diaria)

Alain Manuel Chaple Gil

Clínica Estomatológica “Ana Betancourt”, Playa. Facultad de Ciencias Médicas “Victoria de Girón”. Universidad de Ciencias Médicas de La Habana, Cuba.

ABSTRACT

A short time after photography was invented, Eastman started using it as a tool for scientific work (1879). Ever since then, efforts have been made to turn photography into an indispensable means to project and illustrate science, so much so that scientific photography has paved the way for the development of digital photography.
Writing a scientific paper requires profound knowledge about the topic in question. Careful observation of and research into daily dental practice would make that purpose possible. On the other hand, description is the most effective resource to convincingly present the start, evolution and final result of our work. Clinical dental photography would contribute to the accuracy and brevity required to better understand the topic dealt with. A presentation is made of the characteristics required from compact digital cameras to obtain optimal dental photographs, as well as techniques and tips to take intraoral and extraoral photographs.

Key words: photography, clinical photography, clinical dental photography.

Correspondence: Alain Manuel Chaple Gil. Clínica Estomatológica “Ana Betancourt”, Playa. Universidad de Ciencias Médicas de La Habana. Facultad de Ciencias Médicas “Victoria de Girón”, Cuba. Email: chaple@infomed.sld.cu

INTRODUCTION

A short time after photography was invented, Eastman began using it as a tool for scientific work (1879). Ever since, efforts have been made to make photography an indispensable means for projecting and illustrating science, so much so that scientific photography gave rise to the development of digital photography.1,2 Writing a scientific paper demands solidly established knowledge. Careful observation of and research into daily dental practice would make that purpose possible. Yet only through description can the start, course and final result of our work be shown convincingly. Clinical dental photography would contribute the accuracy and brevity required for a better understanding of the topic at hand.
The fundamental purpose of clinical dental photography is to obtain a record of the clinical manifestations of the oral cavity, which supports medico-legal documentation, scientific and teaching communications, and marketing.3-6 The trend when introducing photography into dental practices has been to make costly investments in equipment for these purposes. Computerized intraoral cameras and reflex (SLR) cameras have been recommended by several experts on the subject. However, many authors have shown that most compact cameras, simpler and cheaper, can be used in current clinical dental photography.2,5,7 The requirements are simple, and many compact cameras meet them:
- At least 5 megapixels (MP) of resolution.
- A Program mode (Fig. A). If the camera does not allow these parameters to be adjusted, the automatic mode should be used.
- A built-in flash located as close as possible to the lens.
- Adjustable ISO sensitivity, so that it can be set to 100 or 200, the values proposed for photographs of the oral complex (Fig. B).
- Automatic central focus.
- Easy handling.
- A programmable spot exposure mode (Fig. C).
- A large LCD screen, so that picture quality can be judged and unusable photographs discarded.
- A Macro mode (Fig. D).
- A low price.2,7-10
Dental photographs may be extraoral, intraoral or complementary.
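The camera requirements above amount to a simple checklist. As an illustrative sketch only (not part of the original article), they could be encoded as follows; every field name and the dictionary representation of a camera are assumptions introduced here:

```python
# Illustrative checklist encoding the compact-camera requirements listed
# above; the dict keys ("megapixels", "program_mode", etc.) are assumed names.
RECOMMENDED_ISO = {100, 200}  # ISO values proposed for oral photography
MIN_MEGAPIXELS = 5

def unmet_requirements(camera):
    """Return a list of requirements a camera (given as a dict) fails to meet."""
    problems = []
    if camera.get("megapixels", 0) < MIN_MEGAPIXELS:
        problems.append("resolution below 5 MP")
    if not (camera.get("program_mode") or camera.get("auto_mode")):
        problems.append("no Program mode (and no automatic fallback)")
    if camera.get("iso") not in RECOMMENDED_ISO:
        problems.append("ISO not set to 100 or 200")
    if not camera.get("macro_mode"):
        problems.append("no Macro mode")
    return problems

compact = {"megapixels": 8, "program_mode": True, "iso": 200, "macro_mode": True}
print(unmet_requirements(compact))  # → []
```

A camera missing any of the listed features would come back with the corresponding items in the returned list.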
Extraoral photographs include: frontal with lips at rest, frontal with forced smile, right and left profile, 45-degree view, frontal with Fox plane plate, profile with Fox plane plate, lip seal, frontal forced smile, right and left profile smile, 45-degree smile view, and overjet. Among intraoral photographs, some techniques require retractors: frontal with teeth in occlusion, 45-degree view with teeth in occlusion, frontal with teeth in functional positions (protrusive, right laterality and left laterality), upper frontal with black background, and lower frontal with black background. With retractors and mirrors: right and left lateral in occlusion, upper occlusal, lower occlusal, and close-ups of specific areas.2,10-14

SOME TECHNIQUES FOR TAKING INTRAORAL PHOTOGRAPHS

Fig. 1. Features that compact cameras should have for optimal dental photographs. A. Program mode. B. Adjustable ISO. C. Programmable spot exposure mode. D. Main controls a compact camera should have for oral photographs.

FRONTAL PHOTOGRAPH

Retractors are placed at the labial commissures and inserted into the cheeks to separate the oral tissue from the teeth. Center on the midline and frame the photograph to include all teeth and soft tissues. For maximum image sharpness, focus the camera on the canines, not on the central incisors.

OCCLUSAL PHOTOGRAPH OF THE MAXILLA

Taking these requires the help of one or two people.
After the patient is seated in a semi-upright position, one assistant places the retractors, pulling them slightly upward and outward; the other assistant rests the full-arch mirror on the maxillary tuberosity, not on the teeth. The mirror should be as parallel to the camera as possible. Align the midline of the palate with the center of the frame, adjust the magnification (focusing on the premolar area), and take the photograph.

OCCLUSAL PHOTOGRAPH OF THE MANDIBLE

Place the patient in a supine position, parallel to the floor; tilt the head slightly backward and turn it toward the photographer so that the occlusal plane is parallel to the floor. An assistant rotates the retractors slightly outward and toward the mandible while resting the full-arch mirror on the retromolar pad. The mirror should be angled away from the occlusal plane as much as possible so that the camera lens is parallel to the plane of the mirror. Finally, align the lingual midline with the center of the frame, adjust the magnification (focusing on the premolar area), and take the photograph.

LATERAL PHOTOGRAPH

Seat the patient in a semi-upright position, with the head facing forward for left lateral photographs and toward the photographer for right lateral photographs. Then place the mirror distal to the last tooth in the arch and move it laterally as far as possible while retracting the lip at the same time.
Choose the image magnification, frame the photograph from the distal area of the canine to the most posterior tooth with the occlusal plane parallel to the film and in the center of the frame, focus the camera on the premolar area, and take the photograph.7,12,14-16
As general advice, always maintain the hygiene and sterilization of all attachments and tools used for oral photography, clean the camera daily with a clean cloth, and hold your breath during each shot to avoid movement and blurring.
Applying clinical dental photography in daily practice is simple; it has been shown that no large financial investment is required. Capturing and documenting the dental science we practice will give future research impact, rigor and an ethical-legal character consistent with current demands and trends.

REFERENCES

1. Ahmad I. Digital dental photography. Part 1: an overview. Br Dent J. 2009;206(8):403-7.
2. Moreno MV, Chidiak R, Roa RM, Miranda SA, Rodríguez-Malaver AJ. Importancia y requisitos de la fotografía clínica en odontología. Revista Odontológica de los Andes. 2006;002(2):51-61.
3. Ahmad I. Digital dental photography. Part 2: principles of purposes and uses. Br Dent J. 2009;206(9):459-64.
4. Ahmad I. Digital dental photography. Part 3: principles of digital photography. Br Dent J. 2009;206(10):517-23.
5. Goldstein MB. Dental digital photography update. A report from the 2014 Chicago midwinter meeting. Dent Today. 2014 May;33(5):152,154-5.
6. Wander P, Ireland RS. Dental photography in record keeping and litigation. Br Dent J. 2014 Aug;217(3):133-7.
7. Shagam J, Kleiman A. Technological updates in dental photography. Dent Clin North Am. 2011 Jul;55(3):627-33, x-xi.
8. Ahmad I. Digital dental photography. Part 6: camera settings. Br Dent J. 2009;207(2):63-9.
9.
Chossegros C, Guyot L, Mantout B, Cheynet F, Olivi P, Blanc JL. Medical and dental digital photography. Choosing a cheap and user-friendly camera. Rev Stomatol Chir Maxillofac. 2010 Apr;111(2):79-83.
10. Novak B. Dental photography. Dent Today. 2013 Apr;32(4):14.
11. Ahmad I. Digital dental photography. Part 5: lighting. Br Dent J. 2009;207(1):13-8.
12. Ahmad I. Digital dental photography. Part 9: post-image capture processing. Br Dent J. 2009;207(5):203-9.
13. Desai V, Bumb D. Digital dental photography: a contemporary revolution. Int J Clin Pediatr Dent. 2013 Sep;6(3):193-6.
14. Liu F. Investigation about particularity of dental clinical digital photography. Shanghai Kou Qiang Yi Xue. 2012 Apr;21(2):211-4.
15. McLaren EA, Garber DA, Figueira J. The Photoshop Smile Design technique (part 1): digital dental photography. Compend Contin Educ Dent. 2013 Nov-Dec;34(10):772,774,776.
16. Morse GA, Haque MS, Sharland MR, Burke FJ. The use of clinical photography by UK general dental practitioners. Br Dent J. 2010 Jan 9;208(1):E1; discussion 14-5.
Received: 2014-12-27. Approved: 2015-01-28.

Narrative Review

Bio-Engineering in Microtia Reconstruction: A Narrative Review

Ann-Sophie Lafrenière, MDCM(c)
Published online: 08 May 2019
Faculty of Medicine, McGill University, Montréal, Canada. Corresponding Author: Ann-Sophie Lafrenière, email ann-sophie.lafreniere@mail.mcgill.ca.

Abstract
Objective: To provide an overview of the role of 3D modeling and printing in the reconstruction of congenital malformations of the ear by plastic surgeons.
Background: Three-dimensional (3D) modeling and printing have become widely adopted in surgical fields, whether for pre-operative planning, production of prostheses, or outcomes monitoring. Plastic and reconstructive surgeons have shown interest in using 3D technology in craniofacial reconstruction, in particular for microtia.
Methods: A literature search was performed and identified articles using 3D modeling, printing and/or bioprinting for microtia reconstruction.
Discussion: In patients with unilateral microtia, surgeons preferred 3D modeling and printing of the normal contralateral ear, used as an intra-operative reference during costochondral or MedPor carving, over traditional 2D drawings. Three-dimensional models captured complex ear contour features, such as depth, and saved time logistically. In producing ear prostheses, 3D technology improved surface texture and color match. Combining tissue engineering with 3D modeling and seeding chondrocytes onto a customized biodegradable ear framework is a promising avenue to restore aesthetics and to obviate certain challenges of the autologous costochondral graft technique.
Conclusions and relevance: Microtia is a common congenital malformation, and its current gold-standard treatment presents technical challenges. As medicine becomes personalized, 3D modeling and printing will play a larger role in various surgical fields, including microtia reconstruction. Future studies will likely focus on refining the acquisition of images used to produce 3D models for intra-operative carving references, standardizing tissue engineering techniques, and using bioprinting to produce external ears once the technology is clinically applicable.
Tags: 3D printing, Microtia, Bioprinting, Tissue engineering, Plastic Surgery

Background

Applications of three-dimensional (3D) modeling and printing have expanded in recent years, now spanning tissue and organ fabrication, implant and prosthesis production, and various drug delivery systems, amongst others(1, 2). Developed in the 1980s, 3D printing consists of manufacturing objects from digital reconstructions or models(3, 4). With its increasing availability, the cost efficiency of 3D printing has improved for individualized small-scale production(3, 5, 6). Indeed, objects can be printed in only a few hours with high accuracy, reliability and repeatability(5, 6). In surgery, 3D printing allows production of patient-tailored medical products and equipment, resulting in decreased operative time, shortened recovery, and improved overall surgical success(5, 6).
3D modeling and printing have proved to be a promising tool for patients with microtia, a congenital malformation of the external ear arising within the first 6-8 weeks of gestation, secondary to teratogens, ischemia or genetic defects(7). It develops in 1 out of 5000 births(7). Defects of the auricle range from a small concha with otherwise normal anatomy (grade 1), to a severely deficient upper half with the external auditory canal (EAC) possibly absent (grade 2), to a small piece of remnant cartilage displaced upward and forward without an EAC (grade 3), to anotia, the complete absence of auricular tissue (grade 4)(7). In addition to the psychosocial issues resulting from having an abnormal ear, patients with microtia can develop impaired acoustic function, resulting in suboptimal sound localization, speech perception and speech development(7).
Current treatment options for microtia include the wear of an auricular prosthesis, the implantation of a synthetic ear-shaped framework, or the grafting of an autologous rib cartilage framework(8). The current gold-standard treatment for microtia reconstruction is the latter. Following the Nagata method, in a two-stage surgery, the surgeon carves the harvested rib cartilage into a shape similar to that of the normal contralateral ear and places it subcutaneously in a skin pocket where the ear should be(9, 10).
The MedPor technique is an alternative using a pre-fabricated porous polyethylene implant(11). The implant is molded based on a template tracing of the contralateral ear for size, shape and projection. These reconstructive techniques entail reproducing the complex 3D auricular contour and features with cartilage of limited flexibility, which is technically challenging and requires significant training. The complexity of microtia reconstruction can result in suboptimal esthetic results, justifying the need for a more accurate ear reconstruction technique. Surgeons and scientists have integrated 3D modeling and printing into their management of microtia. This narrative review discusses the most recent technological advances in synthetic and biologic microtia reconstruction techniques using 3D modeling and printing, as published in clinical-scale studies.

Methods

A search of the current literature was conducted on PubMed from inception to January 2019 using the key terms “3D printing”, “microtia” and “3D modeling”. Studies were selected based on the relevance of the title and/or abstract of retrieved records. The initial screen excluded studies with irrelevant titles or abstracts. If content was unclear from the abstract review in the initial screen, a formal article review was undertaken. Additional studies were identified from an extensive manual Internet search and from the reference lists of relevant articles. Included studies were restricted to English and French. Articles assessed included reviews, case reports, prospective studies and experimental studies. Animal studies were excluded.

Discussion

3D printed model as intra-operative reference

In reconstructions where cartilage is needed, rib grafts are commonly harvested because of their abundant availability, evidence of durability, low infection rates and good cosmetic outcomes(12).
Historically, multi-stage microtia reconstruction using autologous costochondral grafts was developed in the 1950s by Tanzer(13) and later modified and popularized by Brent(14), decreasing complication rates and improving esthetic results. Nagata later reduced the number of surgeries to two and set the basis of a four-level three-dimensional cartilage construct giving a more natural ear contour(15). This technique remains the gold standard for microtia reconstruction today with regards to long-term safety and durability(7). The major difficulty is to sculpt the costal cartilage into a 3D cartilage framework, using a two-dimensional (2D) tracing of the normal contralateral ear as an intra-operative reference, to create an ear that is as symmetric as possible to the other in terms of size, projection and position. In this regard, 2D tracings lack details of the complex ear contour features, but most importantly, they lack the depth and thickness components of the ear(16). Moreover, they entail constant back-and-forth movement between the carving set-up and the patient to further study the contralateral ear, which increases operative time and carries risks of infection(16, 17). To overcome the shortcomings of 2D tracings, surgeons can use 3D printed digital models of the normal ear as an intra-operative reference tool when carving. By stacking a series of 2D X-ray images, computed tomography (CT) can create a 3D geometric model of the patient's ear, but carries radiation exposure(18). In contrast, 3D surface imaging(16), 3D laser scanning(17) and 3D digital photography(19) are radiation-free methods that have been reported to construct digital models. They will be discussed here. A group of plastic surgeons from South Korea used 3D laser scanning to create a mirror image of the patient's normal ear after it was cast in alginate(17).
Once the 3D digital model was completed using a computer-aided design (CAD) system, it was printed and used as a reference tool when carving the costochondral cartilage as per the Nagata method. Assessment of the accuracy of the shape, size and dimensions of the printed 3D ear model against the cast ear model showed a 2.31% difference. Of note, an accuracy assessment was not performed between the cast ear model and the normal contralateral ear, as the children were too young to sit still during the 3D scanning process. Comparison of the lateral view of the 3D ear model with the 2D template drawing of the normal ear produced by surgeons intra-operatively revealed a 16% difference. Although the time required to manufacture the 3D ear model was longer than that needed to draw the 2D template, the former was done pre-operatively. This reduced intra-operative time by removing both the time required to draw the 2D template and the time allotted to back-and-forth movements between the carving set-up and the patient. The authors did not specify how much time was saved. This study demonstrated the usefulness of 3D scan-to-print technology to create operative reference tools providing 3D anatomical details of the normal ear, simplifying microtia reconstruction and enhancing the surgeon's intuition. Similarly, a group of plastic surgeons from Taiwan and Singapore used 3D surface imaging to produce and print a 3D model of the normal ear used as a 3D intra-operative sculpting guide(16). Three-dimensional surface imaging generates a point cloud representation of the ear contour, with x, y and z coordinates, via stereophotogrammetry. The software used was previously validated with regards to the precision and geographic reliability of its anthropometric measurements of the auricle(20), as well as its safety and speed for quantification of craniofacial features(21).
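The digital step these scan-to-print workflows share is mirroring the scanned point cloud of the normal ear to obtain a template for the opposite side. A minimal sketch of that operation follows; it is illustrative only, not code from the cited studies, and the function name, millimeter units and choice of the sagittal plane x = 0 are all assumptions:

```python
# Illustrative sketch: reflect a scanned surface point cloud across the
# sagittal plane x = plane_x to turn a model of the normal ear into a
# carving template for the contralateral side. Coordinates assumed in mm.

def mirror_point_cloud(points, plane_x=0.0):
    """Reflect (x, y, z) surface points across the plane x = plane_x."""
    return [(2.0 * plane_x - x, y, z) for (x, y, z) in points]

# Toy example: three surface points of a scanned right ear (mm)
right_ear = [(12.0, 4.0, 7.0), (15.5, 6.2, 9.1), (11.0, 8.4, 6.3)]
left_template = mirror_point_cloud(right_ear)
# x coordinates flip sign; y and z are unchanged
```

Applying the reflection twice recovers the original cloud, which makes the operation easy to sanity-check before printing.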
Using the MedPor technique, the authors found that obtaining 3D perceptions was critical in the framework fabrication. However, digital noise was found to obstruct part of the external ear contour. In the United States, plastic surgeons have used 3D digital photography to generate a 3D geometric model of the unaffected ear(19). Once exported into a software package, further sculpting and defining can add ultra-fine details and accentuate the essential landmarks of the ear contour. The digital model was 3D printed and brought into the operating theater as a reference when sculpting. In conjunction with the Nagata technique, using 3D digital photography allowed surgeons and engineers to digitally edit and print several models if refinement was necessary. The cost of production was estimated at $1 for direct materials and $500 for personnel, which was cost-effective. This technique was found to decrease the degree of estimation required to shape the ear. Challenges remain in finding the ideal software, digital preparation protocol, printer ink and device.

Auricle prostheses from 3D modeling

The complexity of microtia reconstruction is such that it can result in suboptimal esthetic results in patients subjected to multiple surgical procedures. Indeed, challenges of autologous reconstruction that are inherent to the patient's anatomy include an insufficient soft tissue envelope, sub-optimal positioning of microtic remnants and the lack of a contralateral normal ear for guidance during reconstruction(22). Furthermore, complications of these procedures range from cartilage architectural collapse or implant extrusion to donor site morbidity and asymmetric projection of the reconstructed ear. These are reasons why patients may turn to ear prostheses as an alternative.
In recent years, three-dimensional technology has been incorporated into anaplastology (the science of customizing facial, ocular or somatic prostheses) to recreate realistic human anatomy in microtia reconstruction with ear prostheses(23, 24). Using digital 3D modeling was found to ease the matching of the form, surface texture and color of the prosthetic ear with the normal contralateral ear, elements which are typically difficult to produce by maxillofacial prosthetists(23). In addition, using 3D technology reduces sculpting and clinic time(23). Weissler et al.(22) reported the case of a 13-year-old female with Treacher Collins syndrome and bilateral microtia who had failed multiple autologous reconstructions. A 3D surface laser scanner was used to generate a digital model of her father's ears (used to obtain a structural ear form given her complete absence of both ears), which was subsequently scaled, rotated and adjusted for esthetic proportions against the patient's reconstructed 3D craniofacial model. The 3D model was printed and modified by an anaplastologist to match the skin tone of the patient's auricular region. Intraoperative surgical navigation based on a 3D CT reconstruction of the ear region was found to facilitate precise and anatomically favorable alignment of the bone-anchored hearing aid (BAHA) implant drilling sites with the auricular prostheses(22). This improved ear position, projection and symmetry, key factors in achieving favorable esthetic outcomes(22). Limitations of using 3D technology to create prosthetic reconstructions included the costs of running and maintaining the hardware and software, as well as of training technicians to use them(23).

Engineering patient-specific ear-shaped chondrocytes

Many steps in auricle cartilage bioengineering have consistently proven difficult.
Finding an adequate cell source, both in terms of quality and quantity of cells, creating a 3-dimensional scaffold structure that yields minimal host immune reaction, seeding the latter with chondrocytes, and maintaining perfusion of the seeded scaffold are all challenging(10). Many animal studies have been carried out to explore chondrogenesis capabilities and vascularization techniques(25-28). Recent developments in 3D printing and tissue engineering have made it possible for a group of Chinese scientists and plastic surgeons to successfully bioengineer patient-specific ear-shaped cartilage in patients with unilateral microtia of grade II or III(18). Pre-operatively, using CT, a 3D reconstruction of the normal contralateral ear was generated in a computer-aided design (CAD) system. Its digital mirror image was subsequently printed in resin with a computer-aided manufacturing (CAM) system. Molds of the 3D printed resin ear were then created to help produce a biodegradable scaffold made of polycaprolactone (PCL), polyglycolic acid (PGA) and polylactic acid (PLA). Through a biopsy, the cartilage remaining in the microtic ear was retrieved and further grown in vitro on the above-described scaffold. Remnants of cartilage found in microtic ears have been deemed an adequate cell source for tissue engineering because of their morphological, architectural and biochemical properties(10). A 3D laser scanning system was employed to obtain a 3D model of the cartilaginous graft after 12 weeks of in vitro growth, to compare its shape to that of the initial ear-shaped scaffold (before cell implantation). The cartilaginous graft was later implanted into a tissue-expanded skin envelope overlying the normal auricular region in the recruited patients.
By combining CT scan reconstructions, CAD-CAM systems and 3D printing, these researchers have successfully engineered patient-specific ear constructs to a degree of similarity that cannot always be achieved with autologous reconstruction. Furthermore, using the above-mentioned 3D technologies made it possible to give the ear scaffold the mechanical support and strength necessary to maintain its 3D structure after implantation under skin tension. Limitations included the radiation exposure from using CT to acquire images for 3D modeling of the contralateral ear. This biological technique appears promising for microtia reconstruction, but longer follow-up is needed for this clinical trial to assess graft extrusion, shape and mechanical properties, as patients involved in the study were only followed for 2.5 years. Furthermore, compared with synthetic implants, biological constructs of the ear cartilage framework require significantly more time (12 weeks) as well as additional surgeries, since the procedure requires a minimum of two stages. The costs of production were not disclosed by the authors.

Conclusion

Plastic surgeons are showing tremendous interest in using new technologies to achieve ideal reconstructions. Microtia is a common anomaly that results in significant functional and aesthetic impairments. As medicine moves towards personalization, three-dimensional modeling and printing will continue to expand the frontiers of pre-surgical design. Adopting 3D technology to create individualized auricular constructs can obviate the intrinsic differences in auricular shape, implant properties, recipient tissue condition and surgical technique, elements which all contribute to the unpredictability of the final aesthetic and functional outcomes of microtia reconstruction. In addition to the techniques described in this article, further applications of 3D modeling and printing could include customizing MedPor implants.
Indeed, using 3D scanning and printing techniques could increase the reconstructed ear's similarity to the native ear in unilateral microtia. Bioprinting tissues and organs directly from living cells has the potential to minimize the host immune response and to allow more precise control of the speed, resolution, volume and diameter of the final product. However, as bioprinting is still in its infancy and far from clinical applicability, 3D modeling and printing for microtia reconstruction will likely focus on producing intra-operative references for costochondral carving.

References

1. Ventola CL. Medical Applications for 3D Printing: Current and Projected Uses. Pharmacy and Therapeutics. 2014;39(10):704-11.
2. Ursan ID, Chiu L, Pierce A. Three-dimensional drug printing: a structured review. Journal of the American Pharmacists Association: JAPhA. 2013;53(2):136-44.
3. Schubert C, van Langeveld MC, Donoso LA. Innovations in 3D printing: a 3D overview from optics to organs. The British Journal of Ophthalmology. 2014;98(2):159-61.
4. Lipson H. New world of 3-D printing offers "completely new ways of thinking": Q&A with author, engineer, and 3-D printing expert Hod Lipson. IEEE Pulse. 2013;4(6):12-4.
5. Banks J. Adding value in additive manufacturing: researchers in the United Kingdom and Europe look to 3D printing for customization. IEEE Pulse. 2013;4(6):22-6.
6. Mertz L. Dream it, design it, print it in 3-D: what can 3-D printing do for you? IEEE Pulse. 2013;4(6):15-21.
7. Janis JE. Essentials of Plastic Surgery. 2nd ed. St Louis, United States: Thieme Medical Publishers Inc; 2014. 1367 p.
8. Bly RA, Bhrany AD, Murakami CS, Sie KC. Microtia Reconstruction. Facial Plastic Surgery Clinics of North America. 2016;24(4):577-91.
9. Baluch N, Nagata S, Park C, Wilkes GH, Reinisch J, Kasrai L, et al. Auricular reconstruction for microtia: A review of available methods. Plastic Surgery (Oakville, Ont). 2014;22(1):39-43.
10. Schroeder MJ, Lloyd MS.
Tissue Engineering Strategies for Auricular Reconstruction. The Journal of Craniofacial Surgery. 2017;28(8):2007-11.
11. Reinisch JF, Lewin S. Ear reconstruction using a porous polyethylene framework and temporoparietal fascia flap. Facial Plastic Surgery: FPS. 2009;25(3):181-9.
12. Constantine KK, Gilmore J, Lee K, Leach J Jr. Comparison of microtia reconstruction outcomes using rib cartilage vs porous polyethylene implant. JAMA Facial Plastic Surgery. 2014;16(4):240-4.
13. Tanzer RC. Microtia--a long-term follow-up of 44 reconstructed auricles. Plastic and Reconstructive Surgery. 1978;61(2):161-6.
14. Brent B. Technical advances in ear reconstruction with autogenous rib cartilage grafts: personal experience with 1200 cases. Plastic and Reconstructive Surgery. 1999;104(2):319-34; discussion 35-8.
15. Nagata S. A new method of total reconstruction of the auricle for microtia. Plastic and Reconstructive Surgery. 1993;92(2):187-201.
16. Chen HY, Ng LS, Chang CS, Lu TC, Chen NH, Chen ZC. Pursuing Mirror Image Reconstruction in Unilateral Microtia: Customizing Auricular Framework by Application of Three-Dimensional Imaging and Three-Dimensional Printing. Plastic and Reconstructive Surgery. 2017;139(6):1433-43.
17. Jeon B, Lee C, Kim M, Choi TH, Kim S, Kim S. Fabrication of three-dimensional scan-to-print ear model for microtia reconstruction. The Journal of Surgical Research. 2016;206(2):490-7.
18. Zhou G, Jiang H, Yin Z, Liu Y, Zhang Q, Zhang C, et al. In Vitro Regeneration of Patient-specific Ear-shaped Cartilage and Its First Clinical Application for Auricular Reconstruction. EBioMedicine. 2018;28:287-302.
19. Flores RL, Liss H, Raffaelli S, Humayun A, Khouri KS, Coelho PG, et al. The technique for 3D printing patient-specific models for auricular reconstruction. Journal of Cranio-Maxillo-Facial Surgery: official publication of the European Association for Cranio-Maxillo-Facial Surgery. 2017;45(6):937-43.
20. Chen ZC, Albdour MN, Lizardo JA, Chen YA, Chen PK.
Precision of three- dimensional stereo-photogrammetry (3dMD) in anthropometry of the auricle and its application in microtia reconstruction. Journal of plastic, reconstructive & aesthetic surgery : JPRAS. 2015;68(5):622-31. nk. Heike CL, Upson K, Stuhaug E, Weinberg SM. 3D digital stereophotogrammetry: a practical guide to facial image acquisition. Head & face medicine. 2010;6}18. nn. Weissler JM, Sosin M, Dorafshar AH, Garcia JR. Combining Virtual Surgical Planning, Intraoperative Navigation, and 3-Dimensional Printing in Prosthetic- Based Bilateral Microtia Reconstruction. Journal of oral and maxillofacial surgery : official journal of the American Association of Oral and Maxillofacial Surgeons. 2017;75(7):1491-7. no. Watson J, Hatamleh MM. Complete integration of technology for improved reproduction of auricular prostheses. The Journal of prosthetic dentistry. 2014;111(5):430-6. np. Hatamleh MM, Watson J. Construction of an implant-retained auricular prosthesis with the aid of contemporary digital technologies: a clinical report. Journal of prosthodontics : official journal of the American College of Prosthodontists. 2013;22(2):132-6. nt. Iyer K, Dearman BL, Wagstaff MJ, Greenwood JE. A Novel Biodegradable Polyurethane Matrix for Auricular Cartilage Repair: An In Vitro and In Vivo Study. Journal of burn care & research : official publication of the American Burn Association. 2016;37(4):e353-64. nu. Cheng Y, Cheng P, Xue F, Wu KM, Jiang MJ, Ji JF, et al. Repair of ear cartilage defects with allogenic bone marrow mesenchymal stem cells in rabbits. Cell biochemistry and biophysics. 2014;70(2):1137-43. nw. von Bomhard A, Veit J, Bermueller C, Rotter N, Staudenmaier R, Storck K, et al. Prefabrication of 3D cartilage contructs: towards a tissue engineered auricle--a model tested in rabbits. PloS one. 2013;8(8):e71667. nx. Hohman MH, Lindsay RW, Pomerantseva I, Bichara DA, Zhao X, Johnson M, et al. Ovine model for auricular reconstruction: porous polyethylene implants. 
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License. © McGill Journal of Medicine 2020. Electronic ISSN 1715-8125

work_ggfg3y4de5dwde24gyuygxtoei ----

Improved Documentation of Retinal Hemorrhages Using a Wide-Field Digital Ophthalmic Camera in Patients Who Experienced Abusive Head Trauma

Thomas A. Nakagawa, MD; Ruta Skrinska, MD

Objective: To describe the clinical use of a wide-field digital ophthalmic camera (RetCam 120; Massie Research Laboratories, Inc, Dublin, Calif) for the documentation of retinal hemorrhages in patients who experienced abusive head trauma.

Design: Case series.

Setting: Pediatric intensive care unit at a tertiary care center.

Participants: Children with suspected abusive head trauma.

Results: Eight children were studied during a 9-month period. The median age of the children was 2.25 months (range, 0.8-18.0 months). There were 4 male and 4 female patients. All patients had intracranial bleeding, documented by computed axial tomographic scans of the head. Of the 8 patients, 6 had bilateral retinal hemorrhages. All patients underwent a formal examination by a pediatric ophthalmologist (R.S. and others) using a wide-field digital ophthalmic camera. Three children died.

Conclusions: The wide-field digital ophthalmic camera allowed good visualization and produced high-quality photographic images, resulting in instant bedside documentation of retinal pathological features.
The wide-field digital ophthalmic camera provides a new tool for the evaluation and precise documentation of retinal hemorrhages in suspected and confirmed cases of abusive head trauma.

Arch Pediatr Adolesc Med. 2001;155:1149-1152

Retinal hemorrhages are a common finding in patients who experience abusive head trauma, occurring in 50% to 90% of infants who were violently shaken.1-6 Although some authorities2-4 believe that retinal hemorrhages alone may not be diagnostic of shaken baby syndrome, their presence clearly reinforces the diagnosis when accompanied by intracranial injuries. Therefore, documentation of retinal hemorrhages is imperative to support the diagnosis of shaken baby syndrome. Traditionally, retinal hemorrhages are documented by freehand drawings, which can be time-consuming and may not accurately reflect retinal pathological features. While these drawings may give investigators and medical personnel an idea of the severity and number of the hemorrhages, they do not compare to actual retinal photographs.

Retinal photography using specialized handheld cameras improves bedside documentation of retinal hemorrhages, but requires special training and can be limited by the camera's field of view. Slit-lamp retinal cameras provide high-quality wide-field images, but require considerable patient cooperation and technical expertise and lack portability. Digital photography provides another alternative for documenting retinal pathological features. This technology has been incorporated into a wide-field digital ophthalmic camera (RetCam 120; Massie Research Laboratories, Inc, Dublin, Calif) capable of producing high-quality real-time images of the retina.

RESULTS

During a 9-month period, we examined 8 children (median age, 2.25 months; age range, 0.8-18.0 months) admitted to the pediatric intensive care unit. There were 4 male and 4 female patients.
The primary admitting diagnosis, retinal findings, and computed tomographic scan results of the head are shown in the Table. A history of trauma was found in 4 patients: fall from a couch (n = 2), dropping the child 105 cm to the floor (n = 1), and a child's head hitting the sink (n = 1). Of the 8 patients, 6 had bilateral preretinal and intraretinal hemorrhages by direct ophthalmic examination and 2 had no retinal hemorrhages. Subdural and/or subarachnoid bleeding was noted on computed tomographic images of the head in all patients. Four patients had skeletal injuries consistent with nonaccidental trauma. Two patients required cardiopulmonary resuscitation before admission to the pediatric intensive care unit. Seven patients underwent mechanical ventilation, and 6 had generalized seizures. The median length of stay in the pediatric intensive care unit was 4½ days (range, 2-9 days). The median total hospital stay was 8 days (range, 2-17 days).

All children underwent a formal ophthalmologic examination using a wide-field digital ophthalmic camera to confirm and document retinal hemorrhages. Examinations using the wide-field digital ophthalmic camera were performed by a pediatric ophthalmology attending physician (R.S. and others).

From the Division of Pediatric Critical Care Medicine, Children's Hospital of The King's Daughters (Dr Nakagawa), and the Department of Pediatrics, Eastern Virginia Medical School (Dr Nakagawa), Norfolk, Va. Dr Skrinska is in private practice in Norfolk.

(REPRINTED) ARCH PEDIATR ADOLESC MED/ VOL 155, OCT 2001 WWW.ARCHPEDIATRICS.COM ©2001 American Medical Association. All rights reserved.
The pupils were dilated using 0.2% cyclopentolate hydrochloride and 1% phenylephrine hydrochloride (Cyclomydril; Alcon Laboratories, Inc, Ft Worth, Tex) or 1% cyclopentolate hydrochloride (Cyclogyl; Alcon Laboratories, Inc) and anesthetized with 0.5% proparacaine hydrochloride (Alcon Laboratories, Inc) in 5 of the 8 patients studied. Three patients had fixed and dilated pupils. Hydroxypropyl methylcellulose (Goniosol; Ciba Vision Ophthalmics, Atlanta, Ga) was used in all patients to provide an interface between the image capture unit and the cornea. In 7 of the 8 patients, the retina was easily visualized. In 1 child, images appeared cloudy as a result of blood in the vitreous humor; however, the images obtained were of acceptable quality. Multiple photographs were obtained, allowing for selection of the best and elimination of inferior-quality and out-of-focus images.

At hospital discharge, 2 patients were released to foster care, 1 was institutionalized, and 2 were released to their mother. Three patients died. In 4 cases, the perpetrator confessed. Perpetrators were the father or male caretaker in 3 cases and the mother in 1.

COMMENT

Retinal hemorrhages are a common finding in patients who experience abusive head trauma and support the diagnosis of shaken baby syndrome.1-4,7 Retinal hemorrhages caused by abuse can be unilateral or bilateral,1,6,8,9 and result from rapid acceleration and deceleration and rotational forces as the child's head moves unsupported during the shaking event.2-4,7 Retinal hemorrhages associated with abusive head trauma are different than those associated with increased intracranial pressure, cardiopulmonary resuscitation, or childbirth.
With inflicted head injury, retinal hemorrhages tend to be multiple, tend to involve multiple retinal layers, and are distributed throughout the retina to the ora serrata.10 With cardiopulmonary resuscitation, retinal hemorrhages tend to be small punctate hemorrhages, tend to be confined to the posterior pole of the retina, and tend to occur infrequently.10,11 Retinal hemorrhages are a common finding in childbirth, occurring more frequently during vacuum-assisted deliveries, followed by spontaneous vaginal deliveries; they are infrequent with cesarean deliveries.12,13 Direct compression to the globe and hemodynamic and rheologic changes during labor and delivery contribute to retinal hemorrhages during childbirth.13 Most retinal hemorrhages associated with childbirth are intraretinal and typically resorb by the time the newborn is aged 7 to 10 days,12,14 although they may persist up to 30 days.13 Emerson and colleagues13 found no preretinal hemorrhages or vitreous blood and only rare isolated subretinal hemorrhages in newborns with retinal hemorrhages, resulting in their conclusion that intraretinal hemorrhages in infants older than 1 month are unlikely to be related to birth trauma. Increased intracranial pressure can produce retinal hemorrhages, but these hemorrhages tend to be confined to the posterior pole and there are relatively few.10 Last, traumatic retinoschisis in children has never been described in any entity other than shaken baby syndrome.10 Clearly, accurate documentation of retinal hemorrhages is important for diagnosing shaken baby syndrome.

Traditionally, retinal hemorrhages were observed using a direct ophthalmoscope or a binocular indirect ophthalmoscope and documented by freehand drawings.
Although these pictures provide a visual image used by investigators and medical personnel to document the number and severity of the retinal hemorrhages, photographs more accurately depict the type and extent of the hemorrhage and are not dependent on an artistic drawing.

PARTICIPANTS AND METHODS

All children admitted to the pediatric intensive care unit at our institution with suspected abusive head trauma, including intracranial hemorrhages and/or retinal hemorrhages, were included in this study. Age, race, sex, presenting complaint, and survival data were recorded. Intracranial hemorrhages were documented by computed axial tomographic scans of the head. Retinal hemorrhages were documented by the attending intensivist at the time of admission. All children with suspected abusive head trauma underwent a formal ophthalmologic examination using a wide-field digital ophthalmic camera. This study was conducted with approval for human investigations by the institutional review board at Eastern Virginia Medical School, Norfolk.

A wide-field digital ophthalmic camera uses fiberoptic illumination to provide clear, high-resolution, real-time images. It provides a 120° field of view, producing images of the retina that can be stored and recalled in a portable and easy-to-use unit. The image capture unit is placed on the cornea over the dilated pupil, providing real-time images of the retina. These images are viewed on an external monitor, and the retina is photographed, providing instant documentation of retinal injuries (Figure 1 and Figure 2). Digital images are stored, and medical record–ready photographs can be printed at the bedside with the patient's information imprinted on the photograph, including the time and date of the study. In addition, software allows for the electronic transfer of digital images to other physicians.

Photographs of the retina are obtained at the ophthalmic examination in some centers. This documentation typically depends on an ophthalmologist with special training in and equipment for photographing the retina. A wide-field digital ophthalmic camera requires minimal training and provides a much wider field of view compared with other more elaborate systems used to photograph the retina. This allows photographic documentation to occur at any time by physicians other than ophthalmologists and improves visualization of hemorrhages that are more peripheral. In addition, digital photographic images provide immediate and precise documentation of retinal hemorrhages, eliminating time-consuming freehand illustrations or photographic processing. The visual impact of photographic images allows multiple reviewers to independently review photographic documentation of retinal hemorrhages and may play a crucial role in the medicolegal aspects of abusive head trauma as well. Last, a wide-field digital ophthalmic camera is portable and easily transported to the bedside, allowing examination of the retina in even the most critically ill child.

A wide-field digital ophthalmic camera may prove to play an important role in the early diagnosis and intervention of abusive head trauma. Jenny and colleagues15 noted that an incorrect diagnosis was made in one third of patients who experienced abusive head trauma; the delay resulted in further injury and death to some children.
In addition, retinal hemorrhages were missed in almost 30% of abusive head trauma cases when examination of the retina was performed by a nonophthalmologist.16 Use of a wide-field digital ophthalmic camera by nonophthalmologists is relatively easy and allows the fundus of most children to be viewed. Compared with a direct ophthalmic examination using an ophthalmoscope, the wide field of view allows visualization of the retina to the ora serrata. This technology may prove useful by allowing rapid identification of retinal hemorrhages in suspected cases of abusive head trauma, allowing for earlier intervention.

Figure 1. Image of the retina produced with a wide-field digital ophthalmic camera (RetCam 120; Massie Research Laboratories, Inc, Dublin, Calif), showing extensive intraretinal and preretinal hemorrhages throughout the periphery of the retina, with 1 large hemorrhage lateral to the optic nerve.

Figure 2. Image of the retina produced with a wide-field digital ophthalmic camera (RetCam 120; Massie Research Laboratories, Inc, Dublin, Calif), showing extensive retinal and preretinal hemorrhages throughout the periphery.

Table. Clinical Characteristics of the 8 Patients

Patient No./Sex | Admitting Diagnosis | Retinal Findings | Head CAT Scan Findings* | Bony Trauma | Outcome
1/F | Cardiac arrest | Bilateral preretinal, intraretinal, and subretinal hemorrhages | Frontal SD hematoma, SA and IP hemorrhages, and cerebral edema | Skull, humerus, tibia, and rib fractures | Died
2/F | Respiratory arrest | None | Bilateral SD hematoma and cerebral edema | Skull and rib fractures | Died
3/M | Vomiting | Bilateral preretinal, intraretinal, and subretinal hemorrhages | SA hemorrhage and cerebral edema | None | Died
4/M | Seizures | Bilateral preretinal, intraretinal, and subretinal hemorrhages | SA and SD hemorrhage | None | Survived
5/F | Respiratory arrest | Bilateral preretinal and intraretinal hemorrhages | SD hemorrhage | Scapula, humerus, femur, ilium, and rib fractures | Survived
6/M | Vomiting | Bilateral preretinal, intraretinal, and subretinal hemorrhages | Bilateral frontal SD hematoma and cerebral edema | None | Survived
7/M | Seizures | Bilateral preretinal and intraretinal hemorrhages | Bilateral SD hematoma | None | Survived
8/F | Seizures | None | SD hematoma | Skull, clavicle, and rib fractures | Survived

*CAT indicates computed axial tomographic; SD, subdural; SA, subarachnoid; and IP, intraparenchymal.

A wide-field digital ophthalmic camera is also an ideal teaching tool, allowing students, residents, other allied health personnel, and investigators to instantly visualize retinal pathological features on a 43.2-cm monitor. Digital photographic images can be stored, permitting the creation of teaching files, and images can be reviewed and compared with previous examination results.

There are limitations to a wide-field digital ophthalmic camera. It is not a substitute for a formal ophthalmic examination. This diagnostic imaging tool should be used in collaboration with an ophthalmologist, ensuring that proper diagnosis and follow-up are obtained for children who have retinal pathological features. Image quality may be affected by blood in the vitreous humor and is dependent on patient cooperation. In our limited experience, image quality was somewhat affected by blood in the vitreous humor, but acceptable images were obtained. An examination using a wide-field digital ophthalmic camera may not be well tolerated by the awake or combative child; however, this examination would be no different than attempting to examine the eyes using an ophthalmoscope. In addition, it was not difficult to obtain images of children with an altered mental status even when they were not mechanically ventilated and heavily sedated.
Imaging of a nondilated pupil is possible, but shadowing of the retina can limit the field of view and may result in image degradation. Printed images using the color printer have some image deterioration, and although the image quality is acceptable, there is no comparison with the resolution provided by the external monitor. Last, although expensive (approximately $64 000 with the color printer), a wide-field digital ophthalmic camera is versatile and can be used to image other retinal lesions besides those associated with abusive head trauma.

In summary, a wide-field digital ophthalmic camera is a unique camera that provides a new level of sophistication for the immediate documentation and evaluation of retinal pathological features in suspected cases of abusive head trauma.

Accepted for publication April 16, 2001.

Corresponding author and reprints: Thomas A. Nakagawa, MD, Division of Pediatric Critical Care Medicine, Children's Hospital of The King's Daughters, 601 Children's Ln, Norfolk, VA 23507 (e-mail: NakagaTA@CHKD.com).

REFERENCES

1. American Academy of Pediatrics' Committee on Child Abuse and Neglect. Shaken baby syndrome: inflicted cerebral trauma. Pediatrics. 1993;92:872-875.
2. Conway EE Jr. Nonaccidental head injury in infants: "the shaken baby syndrome revisited." Pediatr Ann. 1998;27:677-690.
3. Duhaime AC, Christian CW, Rorke LB, Zimmerman RA. Nonaccidental head injury in infants: the "shaken-baby syndrome." N Engl J Med. 1998;338:1822-1829.
4. Green MA, Lieberman G, Milroy CM, Parsons MA. Ocular and cerebral trauma in non-accidental injury in infancy: underlying mechanisms and implications for paediatric practice. Br J Ophthalmol. 1996;80:282-287.
5. Munger CE, Peiffer RL, Bouldin TW, Kylastra JA, Thompson RL. Ocular and associated neuropathic observations in suspected whiplash shaken infant syndrome: a retrospective study of 12 cases. Am J Forensic Med Pathol. 1993;14:193-200.
6. Ludwig S, Warman M.
Shaken baby syndrome: a review of 20 cases. Ann Emerg Med. 1984;13:104-107.
7. Elner SG, Elner VM, Arnall M, Albert DM. Ocular and associated systemic findings in suspected child abuse: a necropsy study. Arch Ophthalmol. 1990;108:1094-1101.
8. Lancon JA, Haines DE, Parent AD. Anatomy of the shaken baby syndrome. Anat Rec. 1998;253:13-18.
9. Budenz D, Farber M, Mirchandani H, Park H, Rorke LB. Ocular and optic nerve hemorrhages in abused infants with intracranial injuries. Ophthalmology. 1995;101:559-565.
10. Levin A. Retinal hemorrhages in child abuse. In: David TJ, ed. Recent Advances in Paediatrics. Edinburgh, Scotland: Churchill Livingstone Inc; 2000:151-219.
11. Odom A, Christ E, Kerr N, et al. Prevalence of retinal hemorrhages in pediatric patients after in-hospital cardiopulmonary resuscitation: a prospective study. Pediatrics. 1997;99:e3. Available at: http://www.pediatrics.org/cgi/content/full/99/6/e3. Accessed June 1997.
12. Giles CL. Retinal hemorrhage in the newborn. Am J Ophthalmol. 1960;49:1005-1011.
13. Emerson MV, Pieramici DJ, Stoessel KM, Berreen JP, Gariano RF. Incidence and rate of disappearance of retinal hemorrhages in newborns. Ophthalmology. 2001;108:36-39.
14. Jain I, Singh Y, Grupta S, Gupta A. Ocular hazards during birth. J Pediatr Ophthalmol Strabismus. 1980;17:14-16.
15. Jenny C, Hymel KP, Ritzen A, Reinert SE, Hay TC. Analysis of missed cases of abusive head trauma. JAMA. 1999;281:621-626.
16. Kivlin JD, Simons KB, Lazoritz ST, Ruttum MS. Shaken baby syndrome. Ophthalmology. 2000;107:1246-1254.

What This Study Adds

Freehand drawings may not always reflect the extent of retinal hemorrhages in patients who have experienced abusive head trauma. Retinal photography using specialized handheld cameras improves bedside documentation of retinal hemorrhages, but requires special training and can be limited by the camera's field of view.
Wide-field digital photography using a wide-field digital ophthalmic camera can improve bedside documentation of retinal pathological features in this select group of patients.

To our knowledge, this study is the first to describe the use of wide-field digital photography for documenting retinal hemorrhages in patients who have experienced abusive head trauma. The wide-field digital ophthalmic camera allowed good visualization and produced high-quality photographic images, resulting in instant bedside documentation of retinal pathological features. This technology improves efficiency and provides a new tool for the evaluation and precise documentation of retinal hemorrhages in suspected and confirmed cases of abusive head trauma.

work_gh7zpthrofc3bdflukdupe6cvy ----

Storytelling and Trauma: Reflections on "Now I See It," a digital storytelling project and exhibition in collaboration with the Native Women's Shelter of Montreal

All Rights Reserved © Faculty of Education, McGill University, 2015
Récits et traumatismes : réflections sur « Now I See It », un projet de récits numériques et une exposition en collaboration avec le foyer pour femmes autochtones de Montréal

Rachel Deutsch, Leah Woolner, and Carole-Lynn Byington. McGill Journal of Education / Revue des sciences de l'éducation de McGill, Volume 49, Number 3, Fall 2014. Publisher: Faculty of Education, McGill University. ISSN 1916-0666 (digital). URI: https://id.erudit.org/iderudit/1033555ar DOI: https://doi.org/10.7202/1033555ar

Citation: Deutsch, R., Woolner, L. & Byington, C.-L. (2014). Storytelling and Trauma: Reflections on "Now I See It," a digital storytelling project and exhibition in collaboration with the Native Women's Shelter of Montreal. McGill Journal of Education / Revue des sciences de l'éducation de McGill, 49(3), 707–716. https://doi.org/10.7202/1033555ar
McGILL JOURNAL OF EDUCATION • VOL. 49 NO 3 FALL 2014

NOTES FROM THE FIELD / NOTES DU TERRAIN

STORYTELLING AND TRAUMA: REFLECTIONS ON "NOW I SEE IT," A DIGITAL STORYTELLING PROJECT AND EXHIBITION IN COLLABORATION WITH THE NATIVE WOMEN'S SHELTER OF MONTREAL

RACHEL DEUTSCH, Montreal Urban Aboriginal Community Strategy Network
LEAH WOOLNER, McGill University
CAROLE-LYNN BYINGTON

ABSTRACT. Storytelling is a way of dealing with trauma. For many of those who have experienced trauma, sharing one's own experiences, in the form of a personal narrative, can help to develop new meaning on past events. Now I See It was a storytelling project that resulted in a collection of photographs taken by members of the urban Aboriginal community of Montreal. The project was run through the Native Women's Shelter of Montreal in 2014 and exhibited in the educational department of the Montreal Museum of Fine Arts. Now I See It was a way of creating an "internal map" because trauma is so painfully hard to see and the experience is so different for each individual.

RÉSUMÉ. Le récit numérique est un moyen de surmonter les traumatismes. Partager ses expériences traumatisantes en en faisant le récit constitue pour plusieurs personnes ayant vécu un traumatisme une façon de donner un nouveau sens aux événements passés.
Now I See It est un projet de récits numériques avec comme résultante une collection de photographies prises par les membres de la communauté urbaine et autochtone montréalaise. Ce projet a été piloté par le Foyer pour femmes autochtones de Montréal en 2014 et présenté au département de l'éducation du Musée des beaux-arts de Montréal. Now I See It est une manière d'élaborer la « carte interne » de traumatismes très douloureux à comprendre et dont l'expérience est différente pour chaque individu.

STORYTELLING AND TRAUMA

When I was numb, I had so much fear. I felt a great loss of all the ways that I did not participate in life. But, then I learned that fears are meaningful.

Storytelling is a way of dealing with trauma. For many of those who have experienced trauma, sharing one's own experiences in the form of a personal narrative can help to develop new meaning on past events.

In the dominant historical narrative, Aboriginal peoples were removed from the Canadian landscape, and Canada is portrayed as an empty land with a "disappearing Indian" population (Smith, 2005). For many Aboriginal people in Canada, trauma is often transmitted intergenerationally and rooted in the residential school experience, dispossession of land and way of life, as well as in decades of abuse in the youth protection and prison systems, among many other forms of persistent colonialism (Wesley-Esquimaux & Smolewski, 2004). These diverse and multiple forms of persistent colonialism have been, and continue to be, present in the lives and stories of First Nations women.
Native author Cynthia Wesley-Esquimaux (2009) wrote that

for generations, First Nations women's voices were silenced in historical narratives that sidestepped their influence and power […] First Nations women are beginning to understand that many of the social problems they deal with everyday have roots in the extensive historical trauma that was experienced, but never properly voiced out and represented. (p. 20)

By re-telling one's own stories — that is by using different kinds of imagery and exploring alternative ways of interpreting one's reality — storytelling can provide a sense of hope, belonging, and meaning for people in light of traumatic experiences (White & Epston, 1990). In Rita Joe's (1989) poem about surviving residential school and her loss of native language as a child, she wrote, "I lost my talk, the talk you took away," and later, "let me find my talk so I can teach you about me." The Now I See It project was one such effort to explore the relationship between personal narratives and trauma using storytelling and digital photography to re-tell and re-imagine individual experiences. Quotes from one of the project participants and co-authors, Carole-Lynn Byington, are woven throughout, appearing in italics.

THE "NOW I SEE IT" PROJECT

Now I See It was a storytelling project that resulted in a collection of photographs taken by members of the urban Aboriginal community of Montreal. The project, run through the Addictions Program of the Native Women's Shelter of Montreal (NWSM), lasted from January 2014 to October 2014, and consisted of weekly photography and writing workshops. The end result was a series of photographs taken by the participants that were exhibited at the Montreal Museum of Fine Arts through their Sharing the Museum Program. There were eight participants who all self-identified as Aboriginal women; however, one participant discontinued with the project, as she lost contact with facilitators.
The first step in the project was distributing donated digital cameras to the participants, which the participants kept throughout the entire process, though some chose to use their cellphones or tablets. The participants were encouraged to photograph the people, places, and things that were special to them in Montreal and share them during the workshops. Several initial workshops were led by Odile Boucher, a local photographer and volunteer at the NWSM, in order to introduce the medium of digital photography, including composition, framing, lighting, and portraiture. Other workshops focused on writing, photographic critique, and different aspects of digital storytelling.

These weekly meetings also provided a space for critical discussions surrounding the photographs of the participants, as they would often take pictures on their own throughout the week and then share their photos at the group meetings or one-on-one with facilitators. Often, the photos taken would be of a particular place or person in the city that was extremely meaningful to the participants. By sharing the pictures with the group, and by telling the stories related to a series of images, participants began to develop personal narratives on their lives in the city.

As participants became more confident in their work, they took turns leading the group in walking tours of the city. Having spent time living on the street and in shelters, many of the participants had an intimate sense of the geography of the city, and were excited to share meaningful places with other participants and facilitators. Many participants were very interested in construction sites as signs of changing landscapes and gentrification throughout the city.
When showing a particular place or building, some participants connected this change to a longer history of colonization and their personal relationship to land ownership and development. At the beginning, when I began writing about other peoples' photographs, I couldn't feel what they were feeling. I thought I had to put myself in their spot to see what they saw. But then I saw everything. I saw beams, a crane, and I watched the buildings grow in the photographs. In the final stages of the project, the photographs showed a variety of different thematic similarities, most notably urban development and the intersections of internal and emotional, with external and physical, geographies. At the end, participants collectively decided to name the photographic series Now I See It as a reflection of the small moments and hard-to-notice changes they captured in the surrounding urban landscape.

THE PROJECT PHOTOGRAPHS

One participant, who came from a family of Mohawk iron-workers, took photographs of construction sites, documenting industrial machinery and the beginnings of concrete and steel buildings. She explained: I relate to construction and things getting built because my father was part of that and had his own construction company. When we were young we used to sit and watch them work on the family business. I know what they're doing and I can describe it in detail. She then recounted how she learned the names of building parts, machinery, and the workings of structures growing up: "It's watching something get put together. If you have a part missing it's not necessarily secure. It's a really strong way to communicate with people — learning how things are made."

Rachel Deutsch, Leah Woolner, Carole-Lynn Byington 710 REVUE DES SCIENCES DE L'ÉDUCATION DE McGILL • VOL. 49 NO 3 AUTOMNE 2014

FIGURE 1.
Construction site

For this participant, her fascination and intimate relationship with construction work was beautifully communicated through the photographs and the stories she shared. In showing the photographs to the group, she spoke at length about her own personal connection to construction as well as the Mohawk community. In taking the photos, this participant described her desire to challenge the viewer to be able to see the construction sites and buildings in the same way that she understood them, and in doing so, she often positioned herself in specific ways to take the photographs. At first glance, many of her pictures appear disorienting to the viewer because of the unorthodox placement of the structures within the photographic frame. It is only with closer study that one can begin to distinguish specific elements so as to see the whole picture. More specifically, for one of her photographs, the participant described lying down on the ground to be able to capture a lamppost jutting across a space to touch another building. The overlapping of visual characters against the sky creates an unsettling dynamic within the photograph: one generating feelings of both tension and delicacy, since looking from another perspective would show how physically apart the building and the lamppost actually are.

FIGURE 2. Lamppost and sky

This same participant often spoke of having many loud and overlapping thoughts and of sometimes becoming lost in the city, even in familiar places. At times, her photographs also show this disjointed and chaotic mood that she described, but also a majestic order through the towering buildings, cranes, and organization of the large structures. She would walk for hours all over Montreal, looking for things and places to photograph. She stated: The noise and the movement is what drew me to take pictures of construction.
The noise level in the construction sites were the loudest in the city. It's the feeling of just beginning, the overwhelming feeling when they are just starting, closing the area, putting up signs, putting barriers so pedestrians can walk next to the site. When you go there just as they are starting, you feel overwhelming energy. There is excitement and happiness, as well as focus. It is a humongous thing they are going to do. One afternoon, this participant's affinity for loudness brought her to walk to Montreal's Trudeau Airport. One photograph from this series shows the tail of a plane against a grey sky. Unsure if the plane is landing or taking off, the photograph captures both intense power and movement, but also stillness. Perhaps this duality is something the participant understands as someone struggling to find a permanent home and internal peace in a busy city. When sharing these photographs with the other members of the project, she often spoke of temporality with regards to specific places: the small changes over time and the stillness that can accompany slow evolution. She said, "when you look at something, it's not one space. There are many spaces in space. Even in a little picture the space is so big. For each person, space is different." Another participant took hundreds of photos of her family and friends, with a special focus on her baby, whom she adores. Having moved from Nunavut to Montreal, her photos show a tightknit and growing urban Inuit community.

FIGURE 3. Baby

When the photos were developed and given to her, she in turn gave them to her family members. She also proudly displayed some in the Native Friendship Center of Montreal, along with postcards advertising the project.
While museum spaces have tended to impose anthropological views on Aboriginal peoples and cultures, this participant curated her own exhibition in a space that was familiar and used by the Aboriginal community. By transforming an informal community drop-in space into an ad-hoc gallery, the participant described how she had a growing sense of belonging to the space and to the city. Claiming this space as her own was a strong statement, demonstrating an assertion of her presence as a member of the urban Inuit community and also the strength of strong family bonds. In my tradition, things are just lent to us. This means that we have to share everything. By taking pictures, you are sharing. Through each eye that sees, they see and feel different things. Another participant, who was approaching the end of her pregnancy at the time of the project, took many close-up photographs of insects and flowers, noticing the tiniest details in her surroundings. This participant described how taking snap-shots of everyday things in close detail, such as a grasshopper or the inside of an orange flower, helped her to focus on "new life" and the "beauty of small things."

CREATING, DISCOVERING AND CONNECTING

In the project, I wrote poems about other people's photography. When I started writing, I saw beauty — so much beauty — sometimes it could be haunting, because you are going into a space of imagination. Gabor Maté (2008), in his book In the Realm of Hungry Ghosts: Close Encounters with Addiction, argued that the effects of early trauma on the brain often influence an individual's capacity to feel certain emotions, reducing some while heightening others, and that this can lead to substance addiction.
Maté (2008) also argued that most people who have addictions and who are homeless have also experienced trauma or "great pain" in early life. While not all participants of the project had struggled with substance abuse, many were formerly or currently homeless at the time of the project. Also, many of the participants self-identified as having experienced trauma in both childhood and adulthood. While this project was not clinical in nature, understanding how the participants made sense of their own experiences of trauma was important. Through discussion between the project facilitators and participants, there was a general understanding of their traumatic experiences as engendering fear, emotional disassociation, and feelings of being unsafe in certain spaces. Some women described the physical sensations they felt as a result of their trauma, such as a "disconnection" from their bodies, a physical and mental distancing from their surroundings, and even panic when moving through the urban spaces around them. For those who have experienced trauma, being mindful of one's surroundings can be very important (Maté, 2008). The hope was for this project to help cultivate the participants' sense of belonging, connection, and identification with urban surroundings through using photography and re-telling stories. Creating a voice through digital storytelling can help to re-envision experiences and relationships, which can be very important for survivors of trauma. Furthermore, it was important for the project participants to address the relationship between their Aboriginal identity and the source of their trauma. As Wesley-Esquimaux (2009) wrote, "the metanarrative of the Western world simply did not include the indigenous story of loss, impermanence, and socially debilitating marginalization." For Aboriginal women, cultural, geographical, and personal loss were widespread, as was the silencing of their narratives and voices. Within the current Canadian context of the tragically rampant murder and erasure of Native women (Harper, 2009), the women participants assert their physical and cultural presence in Montreal through their photographs. Despite their lived traumatic experiences and the many hardships they face, the photographs show happy family moments, the strengths of structures, the beauty of small things, and a knowledge and sense of belonging to the city.

FIGURE 4. Cranes

CONCLUSION: NOW I SEE IT

Traumatized people want security, a safe space. Every pain is a brick in a wall. We end up by closing ourselves in. We don't want people to come in and know our stories or spaces. When I was traumatized I made that wall so thick. Later, I started taking down those bricks to see some light. I had to make peace with each brick. Cameras and pictures can be a freedom for traumatized people. The camera is the eye; no one can take that away from them. It's their eyes and their views; that part belongs to them. When I was alone and lost I made myself my own little prison, an obscure space. But now my space is so big. Lee Maracle (2010) wrote about stories and geographic knowledge creation in her work, Stories are Internal Maps. In this work, Maracle (2010) theorized that stories are maps of people's intimate experiences, just as land maps illustrate the physical world. The title of Now I See It, chosen by the artists, fits with the theme of living with trauma and creating an "internal map" because trauma is so painfully hard to see and the experience is so different for each individual.
Through digital storytelling the participants externalized their hidden internal emotional landscape, similarly to the scaffolding and beams in the construction site photographs — while scaffolding and wires will one day be covered with plaster and concrete, the same can be true with emotional pain. And the deconstruction and rebuilding of this internalized world can be powerful, beautiful, and freeing.

REFERENCES

Harper, A. O. (2009). Is Canada peaceful and safe for Aboriginal women? In P. Monture & P. McGuire (Eds.), First voices: An Aboriginal women's reader (pp. 333-342). Toronto, ON: Inanna.

Joe, R. (1989). I lost my talk. In P. Monture & P. McGuire (Eds.), First voices: An Aboriginal women's reader (p. 129). Toronto, ON: Inanna Publications and Education.

Maracle, L. (1999). Stories are internal maps. Unpublished manuscript.

Maté, G. (2008). In the realm of hungry ghosts: Close encounters with addiction. Toronto, ON: Knopf Canada.

Smith, A. (2005). Conquest: Sexual violence and American Indian genocide. Cambridge, MA: South End Press.

Wesley-Esquimaux, C. C. (2009). Trauma to resilience: Notes on decolonization. In G. Valaskakis & M. D. Stout (Eds.), Restoring the balance: First Nations women, community, and culture (pp. 13-34). Winnipeg, MB: University of Manitoba Press.

Wesley-Esquimaux, C. C., & Smolewski, M. (2004). Historic trauma and Aboriginal healing (Research report). Retrieved from the Aboriginal Healing Foundation: http://www.ahf.ca/downloads/historic-trauma.pdf

White, M., & Epston, D. (1990). Narrative means to therapeutic ends. New York, NY: W. W. Norton.
RACHEL DEUTSCH, MSW, MA, is a former Addictions Program Coordinator at the Native Women's Shelter of Montreal and facilitator of the project. She now works as the Cabot Square Project Manager for the Montreal Urban Aboriginal Community Strategy Network on a strategy on Aboriginal homelessness in Montreal. She also does filmmaking on the side.

LEAH WOOLNER is an MSW student and former Residential Support worker at the Native Women's Shelter of Montreal and facilitator of the project. She is currently Vice-Chair of PINAY Quebec and an independent artist.

CAROLE-LYNN BYINGTON is a project participant and spiritual guidance Elder. She spent a lot of her adult life in the Laurentians in Quebec. When she was younger, she lived in Winnipeg and Vancouver. She has provided counseling her whole adult life and has been an advisor to many.

----

Automatic evaluation of pressure sore status by combining information obtained from high-frequency ultrasound and digital photography

Sahar Moghimi (Department of Electrical Engineering, Ferdowsi University of Mashhad, Mashhad, Iran; s.moghimi@ferdowsi.u.ac.ir), Mohammad Hossein Miran Baygi (corresponding author; Department of Electrical Computer Engineering, Tarbiat Modares University, Tehran, Iran; miranbmh@modares.ac.ir), and Giti Torkaman (Department of Physical Therapy, Tarbiat Modares University, Tehran, Iran)

Computers in Biology and Medicine 41 (2011) 427–434. doi:10.1016/j.compbiomed.2011.03.020. Received 27 February 2010; accepted 8 March 2011.

Keywords: Digital color images; Sonographic assessment; Color histogram; Feature extraction; Image processing; Fuzzy integral; Neural networks; Pressure sore; Guinea pigs

ABSTRACT

In this study, the different phases of pressure sore generation and healing are investigated through a combined analysis of high-frequency ultrasound (20 MHz) images and digital color photographs.
Pressure sores were artificially induced in guinea pigs, and the injured regions were monitored for 21 days (data were obtained on days 3, 7, 14, and 21). Several statistical features of the images were extracted, relating to both the altering pattern of tissue and its superficial appearance. The features were grouped into five independent categories, and each category was used to train a neural network whose outputs were the four days. The outputs of the five classifiers were then fused using a fuzzy integral to provide the final decision. We demonstrate that the suggested method provides a better decision regarding tissue status than using either imaging technique separately. This new approach may be a viable tool for detecting the phases of pressure sore generation and healing in clinical settings. Crown Copyright © 2011 Published by Elsevier Ltd. All rights reserved.

1. Introduction

Pressure sores are a major health care issue, specifically in patients with impaired mobility or a reduced ability to sense injury. Unrelieved pressure on tissue causes ischemia, which if prolonged may result in necrotic tissue and pressure sore formation [1]. Accurate and reliable techniques for assessing the extent and severity of pressure sores can be very helpful in evaluating treatment strategies, and even permit early diagnosis of pressure sore generation. The most popular assessment tools among clinicians consist of an acetate sheet and a non-allergic liquid for measuring wound perimeter and volume [2]. Computed tomography (CT), magnetic resonance imaging (MRI), digital photography, and high-frequency ultrasound (HFU) have all been studied for the assessment of pressure sores. CT and MRI are not economical, so they cannot be employed in small offices and clinics, and the final test results are not immediately available. In addition, they expose the patient to X-rays, injected dyes, and/or magnetic fields [3].
Digital photography is a simple and practical way to record a wound's appearance. A sequence of digital images can reveal valuable information such as changes in the wound's dimensions and colors [4–9], but using this technique one cannot detect the full extent of tissue damage in cases where the tissue is undermined [4]. Therefore, by utilizing this technique alone we may sometimes fail to differentiate phases of pressure sore generation and healing. HFU provides a means of studying the structure of undermined tissue [10]. However, interpreting the data obtained from these systems requires intensive training. HFU has been employed for evaluating wounds, including burn scars, surgery wounds, and pressure ulcers, and has been shown to be comparable or better at wound assessment than the aforementioned methods [11–17]. Rippon et al. [12] used artificially induced, acute wounds in pigs to demonstrate that HFU and histology are comparable in their ability to reveal the dominant wound healing parameters (e.g., wound depth, eschar/blood clot depth, collagen accumulation, and granulation tissue depth). Dyson et al. [14] argued that high-frequency ultrasound may measure the wound region more effectively than photography. Wendelken et al. [3] monitored wounds by filling the cavity with a sterile mapping gel and dressing the area with film. The dimensions of the wound were then measured using a software package. A study of the early stages of pressure ulcer generation revealed that HFU can detect subdermal tissue and skin edema before any clinical or visual signs of skin breakdown appear [16]. Changes in tissue regularity and homogeneity during the healing process have also been observed using HFU [12–14]. However, quantitative measurements for monitoring the above healing parameters with HFU are limited. These measurements include the co-occurrence matrix, explained in detail by Theodoridis et al. [18] and used to analyze the echographic structures of skin and liver tissues [19–21]; and randomly weighted frequency components of the intensity values, used to calculate the frequency band energy in the region of interest (ROI) as a measure of echogenicity [17]. In a recent work, the generation and healing of pressure sores were assessed quantitatively by extracting relevant parameters from the HFU images [22]. However, it was demonstrated that this technique may fail to discriminate between some phases of pressure sore generation and healing. Analysis of the color content of digital photographs has several applications, including the assessment of skin tumors, erythema, and wounds [8,9,23–33]. Several successful attempts have been made to classify different types of wounds or tissues present in the wound bed (i.e., granulation, necrotic, or slough) by clustering or segmenting various sets of features (textures and/or statistics extracted from RGB, HSV, HSI, LAB, and LUV histograms) [26–34]. Many researchers have also studied algorithms for determining the wound area and volume automatically [5,35–39]. Some researchers have recently tried to evaluate the 3D geometry of wounds using color images [6,7,39]. The color content of digital images has also been employed to assess healing in acute wounds and pressure sores [8,9,40]. Since photography is only useful when visual signs of the wounds are apparent, these studies have naturally focused on the appearance of the tissue. The literature survey demonstrated that HFU and digital photography have both been repeatedly used for assessment of pressure sores.
Digital photography is a practical and low-cost imaging technique, which provides valuable information about the appearance of the tissue under study. However, changes in the deeper layers of tissue, including those in the early stages of pressure sore generation, cannot be investigated using this technique. HFU, on the other hand, has been utilized for studying deeper tissue damage that is not visually recognizable. This technique seems to be a good candidate for investigating the undermined tissue, considering its lower cost in comparison with the other imaging techniques (e.g., CT and MRI) and its potential to be used in small offices or clinics. The objective of this study is to combine the information obtained by HFU and digital photography to assess the process of pressure sore generation and healing automatically. Each of these imaging techniques provides valuable information about certain aspects of pressure sore generation and healing [4–17]. Therefore, by combining the decisions of experts trained separately on the information obtained from each of the two imaging techniques, the goal of this study is to provide more accurate decisions about the phases of pressure sore generation and healing than could have been achieved by employing either imaging technique separately.

Fig. 1. Region of interest (ROI) selection. EE, entry echo; D, dermis; FT, fatty tissue.

2. Materials and methods

2.1. Animal model for pressure sore generation

A guinea pig model was developed for inducing and monitoring pressure sores. The system used for this research and our process for generating pressure sores are fully described in a previous publication [22]. Briefly, pressure was uniformly applied to a 0.75-cm diameter disk over the trochanter region of 28 animals' hind limbs, using a computer-controlled surface pressure delivery system. The load was kept constant at 400±5 g for 5 h.
The same region was then monitored over 21 days (measurements taken on days 3, 7, 14, and 21 after pressure sore generation). The reason for choosing the above days was that pressure sores induced in this manner have been reported to reach their maximum severity after seven days [41,42]. It has also been stated that before this time the actual extent of necrosis is difficult to define, and that after seven days healing reverses some of the more obvious signs of tissue damage [8]. The 21-day period therefore monitors both sore generation and healing phases. The precise intervals were selected during the experiment to allow changes in tissue structure and skin appearance to manifest. Later, we will consider the day of measurement an unknown class to be determined from the HFU and imaging data. The Ethical Commission of Tarbiat Modares University approved this study.

2.2. High-frequency ultrasound imaging

The sore region was monitored using a 20 MHz B-mode ultrasound scanner (DUB_USB taberna pro medicum, Luneburg, Germany) in a controlled environment. The 20 MHz scanner could monitor up to an approximately 8 mm depth of tissue, but we deliberately reduced the size of the imaging window as no significant echographic structure was observed below the muscle fascia [22]. The scan region was marked on the skin of each animal to avoid confusion concerning probe head positioning. This enabled us to preserve the angle between the probe head and body line. Animals were kept in a restrainer during image acquisition to avoid motion artifacts. To reduce processing time and avoid noise at the probe edges, a 1.5×2 mm² window (225×300 pixels) under the superficial layers was selected as the region of interest (ROI) for image analysis (Fig. 1).

2.3. Digital color imaging

A 10-megapixel Canon digital camera was used to obtain color photographs.
In order to control lighting conditions and maintain a constant distance between the camera and tissue, we built a cylindrical Teflon box to support the camera. A 1.5×1.5 cm² square hole was cut in the bottom of the box, taking into account the dimensions of the induced pressure sores. Sixteen LEDs were set in the box, as illustrated in Fig. 2. A diffusing surface was placed in front of the LEDs in order to provide a uniform light source. Since the color images were obtained on different days, factors other than physiological changes may have affected the tissue appearance (e.g., color changes due to camera noise or oscillations in the power supply). To account for this possibility, five elements of the Macbeth chart (white, blue, light skin, red, and green) were mounted around the box hole. The color values of these elements in the photographs were used to calibrate the images as follows. First, an arithmetic mean filter (Eq. (1)) was applied to reduce the Gaussian noise observed in each channel (red, green, blue) of the Macbeth elements.

Fig. 2. Schematic of the photograph acquisition box.

The filter mask has the following formulation:

\hat{f}(x, y) = \frac{1}{mn} \sum_{(s,t) \in S_{xy}} g(s, t)    (1)

where g and \hat{f} are the original and restored images. m and n represent the dimensions of the neighborhood window S_{xy} centered at (x, y). Next, we attempt to bring the RGB values of the images closer to their true values, using the white Macbeth element as a reference. The correction is expressed by the following equation:

(R', G', B') = (R \cdot R_{ref}/R_w, \; G \cdot G_{ref}/G_w, \; B \cdot B_{ref}/B_w)    (2)

where R', G', and B' are corrected values and R, G, and B are the original values. R_{ref}, G_{ref}, and B_{ref} represent the red, green, and blue values of the white Macbeth chart. R_w, G_w, and B_w are the mean color values of the white reference in the captured image. One last calibration step was performed to restore the color balance.
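The calibration chain, the mean filter of Eq. (1), the white-point scaling of Eq. (2), and the balance step the text formalizes next as Eq. (3), can be sketched in NumPy as follows. This is a minimal sketch: the function and variable names are ours, and the 3×3 neighborhood size is an assumption, since the paper does not state the window dimensions.

```python
import numpy as np

def mean_filter(channel, m=3, n=3):
    """Arithmetic mean filter, Eq. (1): average g over an m x n window S_xy.
    The 3 x 3 default is an assumption; the paper does not give m and n."""
    py, px = m // 2, n // 2
    padded = np.pad(channel.astype(float), ((py, py), (px, px)), mode="edge")
    out = np.zeros(channel.shape, dtype=float)
    for dy in range(m):
        for dx in range(n):
            out += padded[dy:dy + channel.shape[0], dx:dx + channel.shape[1]]
    return out / (m * n)

def calibrate(image, white_mean, white_ref=(243.0, 242.0, 243.0)):
    """White-point scaling (Eq. (2)) followed by the balance step (Eq. (3)).

    image      -- H x W x 3 array of raw RGB values
    white_mean -- mean (R_w, G_w, B_w) of the white Macbeth patch in this capture
    white_ref  -- true RGB of the white patch (R_ref, G_ref, B_ref)
    """
    white_mean = np.asarray(white_mean, dtype=float)
    white_ref = np.asarray(white_ref, dtype=float)
    corrected = image * (white_ref / white_mean)   # Eq. (2): per-channel scaling
    balance = white_mean.max() / white_mean        # Max(R_w, G_w, B_w) / channel
    return corrected * balance                     # Eq. (3): restore color balance
```

With the paper's example white mean of (212, 216, 221), the Eq. (2) step maps the white patch back to (243, 242, 243) before the balance coefficients are applied.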
This step is inspired by the calibration algorithm developed by Herbin et al. [43]. The white reference is achromatic, but in the captured images its red, green, and blue values differ due to the spectral distribution of the light source and the properties of the camera sensor [43]. Denoting by Max(R_w, G_w, B_w) the maximum of the mean red, green, and blue values for the white reference image, we define another set of multiplicative coefficients: Max(R_w, G_w, B_w)/R_w, Max(R_w, G_w, B_w)/G_w, and Max(R_w, G_w, B_w)/B_w. The final red, green, and blue values of the pixels are then

(R' \cdot Max(R_w, G_w, B_w)/R_w, \; G' \cdot Max(R_w, G_w, B_w)/G_w, \; B' \cdot Max(R_w, G_w, B_w)/B_w)    (3)

For example, the true RGB values of the white Macbeth chart are (243,242,243), whereas the mean RGB values of the white reference in a captured image were (212,216,221). In the calibrated image (after applying Eq. (3)), the mean values changed to (242,244,243). Before calibration, the standard deviations of the RGB channels within the blue, light skin, red, and green Macbeth charts were (8.7,8.1,8.8), (5.3,4.8,5.5), (7.7,7.1,7.9), and (8.7,8.0,8.9), respectively. After calibration, the standard deviations changed to (3.1,2.0,3.6), (2.0,1.7,2.9), (2.1,2.1,3.4), and (3.1,1.9,3.5). After calibration, a 1×1 cm² region (1000×1000 pixels) from the center of the box window was selected for further processing.

2.4. Extracting features

2.4.1. High-frequency ultrasound

The echogenicity of regions in a B-scan indicates the amount of acoustic energy reflected by the corresponding tissue regions. The degree of echogenicity of the corium depends on the amount of collagen fiber material per unit volume; any change in this amount results in acoustic property changes. Collagenous fiber bundles appear as band-like, moderately or highly reflective structures [10].
Since the different structures present in the generation and healing of pressure sores exhibit different attenuation properties, changes in the echogenicity may be assumed to follow meaningful patterns. Reflections from the reticular dermis include hyperechoic lines oriented parallel to the skin surface (Fig. 1). These echoes are relevant to the presence of collagen fibers. In contrast, because of the low echogenicity of the cellular infiltrate, the granulation tissue appears as a hypoechoic region. As the granulation tissue matures, fibroblasts synthesize fibrous extracellular matrix proteins (including hyperechoic collagen) [12]. These phenomena result in changes in the intensity and textural content of the ROI. Five relevant features are extracted from the ultrasound image to evaluate echogenicity and tissue structure within the ROI. Before calculating these features, the gray levels of the HFU image were reduced from 256 to 64.

2.4.1.1. Statistical parameters. Three statistical parameters, namely the angular second moment (ASM), contrast (CON), and correlation (COR) [18], were extracted from the co-occurrence matrix of the ROI. These parameters were selected for their ability to model texture.

2.4.1.2. Fractal signature. Changes in the properties of a picture related to changes in scale are a sign of fractal properties [44–46]. The fractal surface area may be calculated from the gray levels of an image [45]. A technique introduced by Mandelbrot [41] can be used to compute the area of the fractal surface at any given scale. The slope of the line resulting from plotting fractal surface area versus scale is known as the fractal signature [46]. The fractal signature contains important information about the fineness of variations in the gray level surface, with no need to decompose the image into harmonics (i.e., a Fourier transform) or wavelets [45].
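As an illustration of the co-occurrence statistics of Section 2.4.1.1, a minimal sketch is given below. The (1, 0) pixel offset, the normalization, and the specific ASM/contrast/correlation formulas follow the standard Haralick definitions, which the paper defers to [18]; the exact choices used by the authors are not stated, so treat these as assumptions.

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=64):
    """Normalized gray-level co-occurrence matrix for a (dx, dy) offset.
    img must be an integer array already quantized to `levels` gray levels;
    the paper's 256-to-64 reduction corresponds to img64 = img256 // 4."""
    P = np.zeros((levels, levels))
    h, w = img.shape
    a = img[0:h - dy, 0:w - dx].ravel()   # reference pixels
    b = img[dy:h, dx:w].ravel()           # neighbors at the chosen offset
    np.add.at(P, (a, b), 1)               # count co-occurring gray-level pairs
    return P / P.sum()

def cooccurrence_features(P):
    """Angular second moment (ASM), contrast (CON), and correlation (COR)
    of a normalized co-occurrence matrix P."""
    i, j = np.indices(P.shape)
    asm = (P ** 2).sum()
    con = (((i - j) ** 2) * P).sum()
    mu_i, mu_j = (i * P).sum(), (j * P).sum()
    sd_i = np.sqrt((((i - mu_i) ** 2) * P).sum())
    sd_j = np.sqrt((((j - mu_j) ** 2) * P).sum())
    cor = (((i - mu_i) * (j - mu_j)) * P).sum() / (sd_i * sd_j)
    return asm, con, cor
```

On a perfectly alternating (checkerboard) texture, for example, this yields maximal contrast and a correlation of −1, while a smoother texture drives ASM up and CON down, which is the kind of shift the altering tissue structure is expected to produce across measurement days.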
In previous works we developed a modified fractal signature (MFS), defined as the slope of the line resulting from plotting the upper blanket surface against scale on a log–log basis [22,47]. In HFU images, the MFS measures the echo distribution in the ROI and therefore serves as a texture descriptor. The MFS is more suitable than the original fractal signature for extracting features from HFU scans [47].

2.4.1.3. Echogenicity. All four of the features discussed above are texture descriptors, and therefore good candidates for monitoring changes in the echographic structure of the ROI. The following feature is included to measure changes in ROI echogenicity as a result of pressure sore generation and healing. Since in the ROI collagen fibers are visualized as hyperechoic bands parallel to the skin surface, a mask with the following structure was applied to the ROI after reducing the gray levels from 256 to 64:

F = [ −1  0  1
      −2  0  2
      −1  0  1 ]   (4)

This anisotropic mask F may be used to implement the gradient operator along the horizontal direction, thereby highlighting the presence of vertical lines in the ROI. A weight value of 2 was used to add a smoothing effect [18]. After filtering the ROI by F, the root mean square (RMS) of the pixel values was computed as a measure of the ROI echogenicity (referred to as RE) (Fig. 3).

Fig. 3. High-frequency ultrasound scans of the ROI before (day 7, a; day 21, d) and after (day 7, b; day 21, e) the gray levels were reduced from 256 to 64. Also shown are the filtered images (day 7, c; day 21, f). The vertical lines in images c and f are highlighted.

S. Moghimi et al. / Computers in Biology and Medicine 41 (2011) 427–434

The features extracted from the HFU images were normalized to the interval [0,1].

Fig. 4. The 2D hue-saturation histogram of a digital photograph obtained from a pressure sore.

2.4.2.
Color features

In this research, color features were also used to differentiate the stages of pressure sore generation and healing. The RGB system is not adequate for this purpose, since its components are strongly dependent on the ambient light intensity. Instead, we chose the HSI (hue-saturation-intensity) color model for modeling the ROI. Since our main objective is to investigate color changes in the altering tissue, the intensity component is neglected. Rather than segmenting the sore region or comparing the superimposed images on different days, we extracted time-variant color features over the whole ROI. First we computed a 50 × 50 bin (2D) histogram of the ROI, its axes representing the hue and saturation components. Each image presents a number of peaks in the 2D histogram with different positions, areas, and intensities. In our experiments, we observed that these features evolve according to a meaningful pattern during the generation and healing of pressure sores.

First, the connected regions in each histogram were extracted. Second, we recorded the centroid of each region and its corresponding image area (in pixels). The sum of the pixel intensities in each region was also computed. The first and second elements of the centroid are the horizontal and vertical coordinates. In order to reduce the computational burden, only the two largest connected regions in each histogram were considered for feature extraction. Thus, we have six color features for each image histogram: the centroids (CEN), areas (AR), and intensity sums (INT) of two connected regions. Fig. 4 shows an example 2D histogram, its two major regions, and the corresponding feature values. All color features except for those representing the location of a centroid were normalized to the interval [0,1].

2.5. Classification

The extracted features were grouped into five categories. The first consists of ASM, CON, and COR. The second includes MFS and RE.
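The color-feature extraction of Section 2.4.2 can be sketched as follows. This is our reconstruction, not the authors' code: hue and saturation are assumed normalized to [0, 1], and 4-connectivity is assumed when grouping histogram bins (the paper does not specify the connectivity).

```python
import numpy as np
from collections import deque

def hs_features(hue, sat, bins=50):
    """Centroid (CEN), area (AR) and intensity sum (INT) of the two largest
    connected regions of the 2D hue-saturation histogram (4-connectivity)."""
    H, _, _ = np.histogram2d(hue.ravel(), sat.ravel(), bins=bins,
                             range=[[0, 1], [0, 1]])
    seen = np.zeros_like(H, dtype=bool)
    regions = []
    for i, j in zip(*np.nonzero(H)):
        if seen[i, j]:
            continue
        cells, q = [], deque([(i, j)])      # BFS over nonzero neighbouring bins
        seen[i, j] = True
        while q:
            a, b = q.popleft()
            cells.append((a, b))
            for da, db in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                x, y = a + da, b + db
                if 0 <= x < bins and 0 <= y < bins and H[x, y] and not seen[x, y]:
                    seen[x, y] = True
                    q.append((x, y))
        cells = np.array(cells)
        regions.append((cells.mean(axis=0),        # CEN: centroid (hue, sat bins)
                        len(cells),                # AR : area in bins
                        H[tuple(cells.T)].sum()))  # INT: summed bin counts
    regions.sort(key=lambda r: r[1], reverse=True)
    return regions[:2]
```

With real images the returned areas and intensity sums would then be normalized to [0, 1], as stated above; the centroids are kept unnormalized.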
The features CEN, AR, and INT were placed into three individual categories. Five Multi-Layer Perceptron Neural Networks (MLP NNs) were trained, one on each category of features. Each network had the same structure, and used a tan-sigmoid transfer function. The weights and biases of the networks were initialized with random values taken from the interval [−1, 1]. The weights between neurons were updated only after all training examples had been exposed to the network (batch training). Levenberg–Marquardt back-propagation was found to produce the best results. The network structure has two hidden layers, with eight and three neurons, respectively. The number of hidden layers and neurons was set based on tests and trials carried out prior to the main training process. The output layer has four neurons, representing soft support for the four different image classes (day 3, day 7, day 14, and day 21). The outputs were then fused using a fuzzy integral in order to determine the final decision of the ensemble. Each network is referred to as a 'classifier' henceforth.

2.6. Combining the decision of neural networks with fuzzy integral

The problem in simultaneous application of the HFU and digital photography is that the credibility of their votes in determining the phase of pressure sore generation and healing varies with time. Among the decision fusion techniques used for continuous value outputs, class-indifferent combiners, e.g. decision templates and Dempster–Shafer, are not appropriate candidates for combining the decisions of the classifiers in the present experiment, since the combination strategy for determining the final support of each class must be different. Class-conscious combiners are classified into two categories: trainable and non-trainable combiners [52]. Here we seek a trainable combiner whose parameters, and therefore combining paradigm, are set based on the data presented to the system.
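Returning to the classifiers of Section 2.5: the sketch below illustrates only the architecture (two hidden layers of 8 and 3 tan-sigmoid neurons, 4 outputs) and the random [−1, 1] initialization. It is a hypothetical forward pass with names of our choosing; Levenberg–Marquardt training is not shown.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_mlp(n_in, hidden=(8, 3), n_out=4):
    """Weights and biases drawn uniformly from [-1, 1], as in the paper."""
    sizes = (n_in, *hidden, n_out)
    return [(rng.uniform(-1, 1, (a, b)), rng.uniform(-1, 1, b))
            for a, b in zip(sizes, sizes[1:])]

def forward(layers, x):
    """tan-sigmoid activations; output row = soft support for the 4 classes."""
    for W, b in layers:
        x = np.tanh(x @ W + b)
    return x

# One classifier per feature category, e.g. classifier 1 on (ASM, CON, COR):
net = init_mlp(n_in=3)
support = forward(net, np.array([[0.2, 0.5, 0.7]]))
print(support.shape)  # (1, 4)
```

The four-element output row is the "soft support" for days 3, 7, 14, and 21 that the fuzzy integral of Section 2.6 later fuses across the five classifiers.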
Our choice of combination technique is the fuzzy integral (FI), which has been successfully employed in several different contexts [48–50]. Here, we review only the basic concepts. A more detailed description of the FI algorithm can be found in Refs. [50,51].

The fuzzy measure g is a set function for a finite set of elements X, which satisfies the following conditions:

(1) g(∅) = 0
(2) g(X) = 1
(3) g(A ∪ B) = g(A) + g(B) + λ g(A) g(B), for all A, B ⊆ X with A ∩ B = ∅, for some λ > −1.

The fuzzy measure g can be evaluated from a set of L values g^i known as fuzzy densities. The latter may be interpreted as the importance of individual classifiers to the final evaluation. The parameter λ is the only root of the following polynomial that is greater than −1:

λ + 1 = ∏_{i=1}^{L} (1 + λ g^i),   λ ≠ 0   (5)

The FI is a nonlinear function and can be regarded as an aggregation operator. This operator is defined over X with respect to g:

∫_X h(x) ∘ g(·) = sup_α [ min[ α, g({x | h(x) ≥ α}) ] ]   (6)

where h : X → [0, 1] denotes the confidence value delivered by elements of X (e.g. class membership of data determined by a specific classifier). By sorting the values of h(·) in descending order, h(x1) ≥ h(x2) ≥ … ≥ h(xn), the Sugeno FI can be evaluated as

∫_X h(x) ∘ g(·) = max_{i=1}^{n} [ min[ h(xi), g(Ai) ] ]   (7)

where Ai, i = 1, 2, …, n, is a partition of the set X.

Fig. 5. A schematic of the ensemble developed for providing the final decision on the phase of pressure sores, using five neural networks and fuzzy integral. Classifiers 1–5 are trained respectively on {ASM, CON, COR}, {MFS, RE}, and the CEN, AR, and INT features of regions 1 and 2; their outputs and the fuzzy measures feed the FI, which emits the output class.

Fig. 6. Digital photographs obtained from an induced pressure sore on days 3 (a), 7 (b), 14 (c), and 21 (d), and ultrasound scans obtained on days 3 (e), 7 (f), 14 (g), and 21 (h). The distances from A to A* and from A to A** are approximately 1.5 and 2.5 mm, respectively.
g(Ai) can be calculated recursively by assuming g(A1) = g^1 and

g(Ai) = g^i + g(Ai−1) + λ g^i g(Ai−1),   1 < i ≤ n   (8)

Here, the g^i were taken to be the accuracy of each classifier evaluated during the training process. Fig. 5 illustrates the ensemble provided for the objective of this research.

2.7. Performance evaluation

The Leave One Out technique, which is mainly used to evaluate the performance of classifiers [18], was adopted to validate the performance of the designed ensemble. At each step, one of the 28 samples was left out for testing purposes and the five neural networks were trained on the features extracted from the remaining 27 samples. The performance of the ensemble was evaluated as the mean accuracy over the 28 test iterations.

3. Results

The digital photographs and HFU images obtained from one animal on different days are presented in Fig. 6. Images 6(a)–(d) were captured using the apparatus illustrated in Fig. 2. As is apparent in Fig. 6(b), the pressure sore reaches its maximum severity at the surface on day 7. The hypoechoic regions visible in Fig. 6(e) may be related to blood pools or edema. Note that the hyperechoic band at the boundary of the dermis and fatty tissue disappears on day 7, but reappears on day 21.

Table 1 displays the significance of the features as determined by analysis of variance (ANOVA) and multiple comparison tests. The confidence intervals in brackets indicate whether the means of two days are statistically different. Note that day 3 and day 21 cannot be distinguished on the basis of these features, as all five intervals include zero. Features extracted from the two largest connected regions of the 2D histogram were evaluated by the same tests (Tables 2 and 3). The confidence intervals suggest that day 3 could not be distinguished from day 7, and that day 14 could not be distinguished from day 21.
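For reference, the fusion rule of Section 2.6 can be sketched numerically: λ from Eq. (5) by bisection, the measure recursion of Eq. (8), and the Sugeno integral of Eq. (7). This is our reconstruction (function names are ours), and the fuzzy densities are assumed to lie strictly between 0 and 1.

```python
import numpy as np

def lambda_measure(densities, tol=1e-10):
    """Root > -1 (and != 0) of  prod(1 + lam*g_i) = lam + 1  (Eq. (5))."""
    g = np.asarray(densities, dtype=float)
    f = lambda lam: np.prod(1 + lam * g) - lam - 1
    if abs(g.sum() - 1) < 1e-12:          # additive measure: lam = 0
        return 0.0
    if g.sum() > 1:                       # root lies in (-1, 0)
        lo, hi = -1 + 1e-12, -1e-12
    else:                                 # root lies in (0, inf): expand bracket
        lo, hi = 1e-12, 1.0
        while f(hi) < 0:
            hi *= 2
    while hi - lo > tol:                  # plain bisection on the bracket
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

def sugeno_integral(h, densities):
    """Sugeno FI of classifier confidences h w.r.t. fuzzy densities (Eq. (7))."""
    lam = lambda_measure(densities)
    order = np.argsort(h)[::-1]           # h(x1) >= ... >= h(xn)
    h_s, g_s = np.asarray(h)[order], np.asarray(densities)[order]
    gA = g_s[0]                           # g(A1) = g^1
    best = min(h_s[0], gA)
    for i in range(1, len(h_s)):
        gA = g_s[i] + gA + lam * g_s[i] * gA   # recursion of Eq. (8)
        best = max(best, min(h_s[i], gA))
    return best
```

As a sanity check, for two classifiers with densities g¹ = g² = 0.3, Eq. (5) gives λ = 0.4/0.09 ≈ 4.44, so the recursion correctly yields g(X) = 0.3 + 0.3 + λ·0.09 = 1.
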
Tables 1–3 demonstrate that while the features extracted from different imaging techniques (digital photography and HFU) fail to discriminate some classes, the classes presenting difficulties are not the same for the two techniques. Therefore, it may be assumed that by combining the decisions made by each of the classifiers, trained on the extracted features, a better conclusion can be obtained about the phase of pressure sore generation and healing.

(Fig. 6 caption, continued) CE, calibration element. White arrows on day 21 indicate the muscle fascia (MF). FT, fatty tissue; D, dermis; EE, entry echo.

Table 1
The degree of class separation in HFU features studied through one-way ANOVA and multiple comparison tests. MD is the mean difference, and CI is the 95% confidence interval. All features: P < 0.05 (CON P = 1.3 × 10^−21; COR P = 2.79 × 10^−11; ASM P = 2.8 × 10^−24; RE P = 1.4 × 10^−29; MFS P = 1.02 × 10^−29).

Days          CON, MD [95% CI]       COR, MD [95% CI]       ASM, MD [95% CI]       RE, MD [95% CI]        MFS, MD [95% CI]
Day3/Day7     0.53 [0.41, 0.64]      −0.29 [−0.41, −0.17]   −0.36 [−0.46, −0.27]   0.35 [0.26, 0.45]      −0.46 [−0.57, −0.35]
Day3/Day14    0.19 [0.08, 0.31]      −0.20 [−0.32, −0.08]   −0.11 [−0.20, −0.01]   0.10 [0.01, 0.20]      −0.12 [−0.23, −0.00]
Day3/Day21    −0.01 [−0.12, 0.10]    0.01 [−0.10, 0.13]     0.03 [−0.07, 0.12]     −0.01 [−0.10, 0.09]    0.05 [−0.07, 0.16]
Day7/Day14    −0.33 [−0.44, −0.22]   0.09 [−0.03, 0.21]     0.26 [0.16, 0.35]      −0.25 [−0.35, −0.16]   0.35 [0.23, 0.46]
Day7/Day21    −0.54 [−0.65, −0.43]   0.30 [0.19, 0.42]      0.39 [0.29, 0.49]      −0.36 [−0.45, −0.26]   0.51 [0.39, 0.62]
Day14/Day21   −0.20 [−0.32, −0.09]   0.21 [0.09, 0.33]      0.13 [0.04, 0.23]      −0.11 [−0.21, −0.01]   0.16 [0.05, 0.27]

Table 2
The degree of class separation in features of the hue-saturation color histogram studied through one-way ANOVA and multiple comparison tests. These features belong to the largest connected region in the 2D histogram. The first and second columns of CEN are the horizontal and vertical coordinates of the region's centroid, respectively.
MD is the mean difference, and CI is the 95% confidence interval. All features: P < 0.05 (AR P = 2.23 × 10^−14; INT P = 2.64 × 10^−24; CEN horizontal P = 4 × 10^−7; CEN vertical P = 4.32 × 10^−6).

Days          AR, MD [95% CI]        INT, MD [95% CI]       CEN (horiz.), MD [95% CI]   CEN (vert.), MD [95% CI]
Day3/Day7     0.06 [−0.02, 0.14]     0.07 [−0.02, 0.16]     −0.44 [−2.22, 1.33]         −0.59 [−1.47, 0.29]
Day3/Day14    0.21 [0.13, 0.29]      −0.23 [−0.32, −0.13]   2.41 [0.63, 4.18]           0.73 [−0.15, 1.61]
Day3/Day21    0.25 [0.17, 0.33]      −0.28 [−0.37, −0.19]   2.98 [1.20, 4.75]           1.15 [0.27, 2.03]
Day7/Day14    0.15 [0.07, 0.23]      −0.29 [−0.39, −0.20]   2.85 [1.07, 4.62]           1.32 [0.44, 2.20]
Day7/Day21    0.19 [0.11, 0.27]      −0.35 [−0.44, −0.26]   3.42 [1.64, 5.19]           1.74 [0.86, 2.62]
Day14/Day21   0.04 [−0.04, 0.12]     −0.05 [−0.15, 0.04]    0.57 [−1.20, 2.34]          0.42 [−0.46, 1.30]

Table 3
The degree of class separation in features of the hue-saturation color histogram, studied through one-way ANOVA and multiple comparison tests. These features belong to the second largest connected region in the 2D histogram. The first and second columns of CEN are the horizontal and vertical coordinates of the region's centroid, respectively. MD is the difference between the two means, and CI is the 95% confidence interval. All features: P < 0.05 (AR P = 1.01 × 10^−26; INT P = 1.51 × 10^−5; CEN horizontal P = 1.62 × 10^−27; CEN vertical P = 3.11 × 10^−10).

Days          AR, MD [95% CI]        INT, MD [95% CI]       CEN (horiz.), MD [95% CI]   CEN (vert.), MD [95% CI]
Day3/Day7     −0.14 [−0.19, −0.08]   −0.21 [−0.32, −0.10]   −3.75 [−5.64, −1.85]        −0.35 [−0.66, −0.03]
Day3/Day14    0.12 [0.07, 0.17]      −0.06 [−0.18, 0.05]    4.21 [2.31, 6.10]           0.34 [0.02, 0.65]
Day3/Day21    0.14 [0.08, 0.19]      −0.02 [−0.14, 0.09]    4.37 [2.47, 6.26]           0.50 [0.19, 0.81]
Day7/Day14    0.25 [0.20, 0.31]      0.15 [0.03, 0.26]      7.95 [6.06, 9.85]           0.68 [0.37, 0.10]
Day7/Day21    0.27 [0.22, 0.32]      0.19 [0.07, 0.30]      8.12 [6.22, 10.01]          0.84 [0.53, 1.16]
Day14/Day21   0.01 [−0.04, 0.07]     0.04 [−0.07, 0.15]     0.16 [−1.73, 2.06]          0.16 [−0.15, 0.47]

Table 4
Confusion matrix for the test results of classifier 1 in Fig. 5.
             Classified as Day 3   Day 7   Day 14   Day 21
Day 3                11              0        4       13
Day 7                 1             21        2        0
Day 14                5              7       19        5
Day 21               11              0        3       10
Total (%)           39.3           75.0     67.9     35.7    (overall: 54.5)

Table 5
Confusion matrix for the test results of classifier 2 in Fig. 5.

             Classified as Day 3   Day 7   Day 14   Day 21
Day 3                10              4        4       11
Day 7                 1             21        2        0
Day 14                5              3       17        5
Day 21               12              0        5       12
Total (%)           35.7           75.0     60.7     42.9    (overall: 53.6)

Tables 4–8 present the confusion matrices generated by classifiers 1 through 5, respectively. For classifiers trained on features extracted from the HFU images, the highest percentages of misclassified samples occur when comparing days 3 and 21. For classifiers trained on digital photography features, the highest percentages of misclassified samples occur when comparing day 14 and day 21. Table 9 presents the confusion matrix obtained after combining the results with the FI method. The improvements in classification accuracy are noticeable.

Table 6
Confusion matrix for the test results of classifier 3 in Fig. 5.

             Classified as Day 3   Day 7   Day 14   Day 21
Day 3                25              2        0        0
Day 7                 3             26        1        0
Day 14                0              0       10       12
Day 21                0              0       17       16
Total (%)           89.3           92.9     35.7     57.1    (overall: 68.8)

Table 7
Confusion matrix for the test results of classifier 4 in Fig. 5.

             Classified as Day 3   Day 7   Day 14   Day 21
Day 3                25              1        0        0
Day 7                 1             25        5        2
Day 14                1              1       11        8
Day 21                1              1       12       18
Total (%)           89.3           89.3     39.3     64.3    (overall: 70.5)

Table 8
Confusion matrix for the test results of classifier 5 in Fig. 5.

             Classified as Day 3   Day 7   Day 14   Day 21
Day 3                20              1        3        4
Day 7                 4             27        1        0
Day 14                3              0       17        7
Day 21                1              0        7       17
Total (%)           71.4           96.4     60.7     60.7    (overall: 72.3)

Table 9
Confusion matrix for the test results after combining all five classifiers by the FI method.
             Classified as Day 3   Day 7   Day 14   Day 21
Day 3                23              0        1        2
Day 7                 3             28        1        1
Day 14                2              0       23        7
Day 21                0              0        3       18
Total (%)           82.1          100       82.1     64.3    (overall: 82.1)

4. Discussion

Both HFU and digital photography have been previously used for the assessment of pressure sores. Our method is specifically designed to improve the recognition of pressure sore generation and healing phases by using the two imaging techniques simultaneously. In this way, both tissue structure and skin appearance are taken into account.

The generation and healing of pressure sores involves the presence of cells, matrices, and complex structures. The hypoechoic regions visible in the HFU scans on day 3 (Fig. 6(e)) were assumed to be caused by edema and blood pools. In the corresponding digital photograph, only a red area was observed. Redness is considered an early sign of pressure sore generation [4]. On day 7 hypoechoic regions dominate the HFU scan, signaling the appearance of granulation tissue (cellular infiltrate has low echogenicity) [12,13]. As Fig. 6(f) shows, the superficial layers also attain maximum severity on day 7. As healing proceeds, the granulation tissue matures and fibroblasts synthesize fibrous extracellular matrix proteins (including hyperechoic collagen) [12]. These events result in the appearance of hyperechoic regions in the HFU scans from day 14 onward. Meanwhile, the digital photographs illustrate superficial signs of tissue healing.

The features extracted from the HFU images were selected based on their ability to monitor the echogenicity and texture of the ROI. Table 1 shows that these features are generally good at discriminating the four days, with the exception of comparing day 3 to day 21. This is due to the resemblance of the scans obtained on these two days.
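The "Total (%)" rows of Tables 4–9 can be recomputed directly from the matrices. A minimal sketch of that bookkeeping (the function name is ours; note that in the printed tables each column, not each row, sums to the 28 samples per day, so the true classes are taken along the columns):

```python
import numpy as np

def confusion_accuracy(M):
    """Per-class and overall accuracy from a confusion matrix.

    Columns are assumed to hold the true classes (each column sums to the
    number of samples per class), rows the assigned classes.
    """
    M = np.asarray(M, dtype=float)
    per_class = np.diag(M) / M.sum(axis=0)   # correct / samples per true class
    overall = np.trace(M) / M.sum()
    return per_class, overall

# Table 4 (classifier 1):
M = [[11, 0, 4, 13], [1, 21, 2, 0], [5, 7, 19, 5], [11, 0, 3, 10]]
per_class, overall = confusion_accuracy(M)
# per_class*100 rounds to (39.3, 75.0, 67.9, 35.7) and overall*100 to 54.5,
# matching the Total (%) row reported for classifier 1.
```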
Since the objective of this research is to create an automatic system capable of recognizing phases of pressure sore generation and healing, this drawback could cause problems. However, Tables 2 and 3 illustrate that these days are separable based on features extracted from a digital photograph color histogram. Since the pairs that the two techniques fail to discriminate do not overlap, simultaneously applying both techniques achieves a better recognition rate. These statements are supported statistically by the confusion rates presented in Tables 4–9. Fusing the five outputs improved both the total accuracy and the misclassification rate.

In this study the four classes represent different phases of pressure sore generation and healing. Therefore, it can be suggested that the developed method may be capable of detecting the phases of pressure sore generation and healing, provided that the HFU scan and digital photograph of the suspicious region are available.

Our study has several limitations. First, given the ethical considerations of small animal use, our hypothesis could not be tested on more severe or non-healing sores. Second, pressure sores in humans do not necessarily follow the same generation and healing patterns. Therefore, a large human database is required to define the recognizable phases of pressure sore generation and healing. Researchers have considered clinical parameters (e.g. pain, swelling, and itching) in the assessment of wounds [53]. Considering these symptoms (by means of rating tools) alongside the extracted parameters in future human studies could result in a better assessment of pressure sore status. In the present study, the monitored region was marked on the body of the animal and precautions were taken when positioning the probe head and imaging box. Our database therefore consists of images obtained from the same region over time, and of corresponding regions in different animals.
Drawing conclusions about sore generation or healing from scans and images obtained from various subjects and areas of the body is a much more difficult task. Future clinical research on applying the proposed method to human skin will have to deal with the intrinsic variance of images obtained from unknown parts of the body.

5. Conclusion

In this study, HFU scans and digital photographs were obtained from pressure sores induced in guinea pigs. Several relevant statistical features were extracted from the obtained images. A set of five neural networks was trained on five separate categories of features. The results demonstrate that a more accurate determination of the phase of pressure sore generation and healing can be achieved by utilizing both imaging techniques simultaneously. Therefore, we conclude that the individual drawbacks of HFU and digital photography can be overcome by combining information from both techniques.

Conflict of interest statement

None declared.

References

[1] R. Salcido, S.B. Fisher, J.C. Donofrio, M. Bieschke, C. Knapp, R. Liang, E.K. LeGrand, J.M. Carney, An animal model and computer controlled surface pressure delivery system for the production of pressure ulcers, J. Rehabil. Res. Dev. 32 (2) (1995) 149–161.
[2] D.T. Rovee, H.I. Maibach, The epidermis in wound healing, CRC Press Inc, 2003.
[3] M.E. Wendelken, L. Markowitz, M. Patel, O.M. Alvarez, Objective, non-invasive wound assessment using B-mode ultrasonography, Wounds 15 (11) (2003) 351–360.
[4] D. Bader, C. Bouten, D. Colin, C. Oomens, Pressure ulcer research: current and future perspectives, Springer Verlag, 2005.
[5] P. Plassman, T.D. Jones, MAVIS: a non-invasive instrument to measure area and volume of wounds, Med. Eng. Phys. 20 (1998) 332–338.
[6] S. Treuillet, B. Albouy, Y. Lucas, Three-dimensional assessment of skin wounds using a standard digital camera, IEEE Trans. Med. Imag. 28 (5) (2009) 752–762.
[7] A. Malian, A. Azizi, F.H.V. Den, M. Zolfaghari, Development of a robust photogrammetric metrology system for monitoring the healing bedsores, Photogrammetric Rec. 20 (111) (2005) 241–273.
[8] G.L. Hansen, E.M. Sparrow, J.Y. Kokate, K.J. Leland, P.A. Iaizzo, Wound status evaluation using color image processing, IEEE Trans. Med. Imag. 16 (1) (1997) 78–86.
[9] M. Herbin, F.X. Bon, A. Venot, F. Jeanlouis, M.L. Dubertret, G. Strauch, Assessment of healing kinetics through true color image processing, IEEE Trans. Med. Imag. 12 (1) (1993) 39–43.
[10] P. Altmeyer, S. El-Gammal, K. Hoffmann, Ultrasound in Dermatology, Springer Verlag, Berlin Heidelberg, 1992.
[11] F.K. Forster, J.E. Oledrud, M.A. Riederer-Henderson, A.W. Holmes, Ultrasonic assessment of skin and surgical wounds utilizing backscatter acoustic techniques to estimate attenuation, Ultrasound Med. Biol. 16 (1) (1990) 43–53.
[12] M.G. Rippon, K. Springett, R. Walmsley, K. Patrick, S. Millson, Ultrasound assessment of skin and wound tissue: comparison with histology, Skin Res. Technol. 4 (1998) 147–154.
[13] M.G. Rippon, K. Springett, R. Walmsley, Ultrasound evaluation of acute experimental and chronic clinical wounds, Skin Res. Technol. 5 (1999) 228–236.
[14] M. Dyson, S. Moodley, L. Verjee, W. Verling, J. Weinman, P. Wilson, Wound healing assessment using 20 MHz ultrasound and photography, Skin Res. Technol. 9 (2003) 116–121.
[15] Y.C. Du, C.M. Lin, Y.F. Chen, C.L. Chen, T. Chen, Implementation of a burn scar assessment system by ultrasound techniques, in: Proceedings of the 28th IEEE EMBS Annual International Conference, 2006, pp. 2328–2331.
[16] P.R. Quintavalle, C.H. Lyder, P.J. Mertz, C. Phillips-Jones, M. Dyson, Use of high-resolution, high-frequency diagnostic ultrasound to investigate the pathogenesis of pressure ulcer development, Adv. Skin Wound Care 19 (9) (2006) 498–505.
[17] S. Prabhakara, Acoustic Imaging of Bruises, Master Thesis, Georgia Institute of Technology, 2004.
[18] S. Theodoridis, K.
Koutroumbas, Pattern recognition, 2nd ed., Academic Press, San Diego, 2003.
[19] F.M.J. Valckx, J.M. Thijssen, Characterization of echographic image texture by cooccurrence matrix parameters, Ultrasound Med. Biol. 23 (4) (1997) 559–571.
[20] M. Vogt, H. Ermert, S.E. Gammal, K. Kasper, K. Hoffmann, P. Altmeyer, Structural analysis of the skin using high frequency, broadband ultrasound in the range from 30 to 140 MHz, in: Proceedings of the IEEE International Ultrasonics Symposium, 1998, pp. 1685–1688.
[21] W.C. Yeh, Y.M. Jeng, C.H. Li, P.H. Lee, P.C. Li, Liver fatty change classification using 25 MHz high frequency ultrasound, in: Proceedings of the IEEE International Ultrasonics, Ferroelectrics, and Frequency Control Joint 50th Anniversary Conference, 2004, pp. 2169–2172.
[22] S. Moghimi, M.H. Miran Baygi, G. Torkaman, A. Mahloojifar, Quantitative assessment of pressure sore generation and healing through the numerical analysis of high frequency ultrasound images, J. Rehabil. Res. Dev. 74 (2) (2010) 99–108.
[23] Y.V. Haeghen, J. Naeyaert, I. Lemahieu, W. Philips, An imaging system with calibrated color image acquisition for use in dermatology, IEEE Trans. Med. Imag. 19 (7) (2000) 722–730.
[24] G. Hance, S. Umbaugh, R. Moss, V. Stoecker, Unsupervised color image segmentation with application to skin tumor borders, IEEE Eng. Med. Biol. Mag. 5 (1) (1996) 104–111.
[25] M. Nischik, C. Forster, Analysis of skin erythema using true-color images, IEEE Trans. Med. Imag. 16 (6) (1997) 711–716.
[26] B.A. Pinero, C. Serrano, J.I. Acha, Segmentation of burn images using the L*u*v* space and classification of their depth by color and texture information, SPIE 4684 (2002) 1508–1515.
[27] B. Belem, Non-invasive Wound Assessment by Image Analysis, Ph.D. Thesis, University of Glamorgan, UK, 2004.
[28] M. Kolesnik, A. Fexa, Multi-dimensional color histograms for segmentation of wounds in images, Lecture Notes Comput. Sci. 3656 (2005) 1014–1022.
[29] H. Oduncu, A. Hoppe, M. Clark, R.J.
Williams, K.J. Harding, Analysis of skin wound images using digital color image processing: a preliminary communication, Int. J. Lower Extremity Wounds 3 (3) (2004) 151–156.
[30] M. Galushka, H. Zheng, D. Patterson, L. Bradley, Case-based tissue classification for monitoring leg ulcer healing, in: Proceedings of the 18th IEEE Symposium on Computer-Based Medical Systems, 2005, pp. 353–358.
[31] H. Zheng, L. Bradley, D. Patterson, M. Galushka, J. Winder, New protocol for leg ulcer classification from colour images, in: Proceedings of the 26th Annual International Conference of the IEEE EMBS, 2004, pp. 1389–1392.
[32] H. Wannous, S. Treuillet, Y. Lucas, Supervised tissue classification from color images for a complete wound assessment tool, in: Proceedings of the 29th Annual International Conference of the IEEE EMBS, 2007, pp. 6031–6034.
[33] A. Hoppe, D. Wertheim, J. Melhuish, H. Morris, K.G. Harding, R.J. Williams, Computer assisted assessment of wound appearance using digital imaging, in: Proceedings of the 23rd Annual EMBS International Conference, 2001, pp. 2595–2597.
[34] C. Serrano Acha, L. Roa, Segmentation and classification of burn colour images, in: Proceedings of the 23rd Annual EMBS International Conference, 2001, pp. 2692–2695.
[35] T.D. Jones, P. Plassmann, An instrument to measure the dimensions of skin wounds, IEEE Trans. Biomed. Eng. 42 (5) (1995) 464–470.
[36] T.D. Jones, P. Plassman, An active contour model for measuring the area of leg ulcers, IEEE Trans. Med. Imag. 19 (12) (2000) 1202–1210.
[37] T. Krouskop, R. Baker, M. Wilson, A noncontact wound measurement system, J. Rehabil. Res. Dev. 39 (3) (2002) 337–346.
[38] X. Liu, W. Kim, R. Scmidt, B. Drerup, J. Song, Wound measurement by curvature maps: a feasibility study, Physiol. Meas. 27 (2006) 1107–1123.
[39] MAVIS II: 3-D wound instrument measurement, Univ. Glamorgan, 2006 [online]. Available: http://www.imaging.research.glam.ac.uk/projects/wm/mavis/.
[40] F.X. Bon, E. Briand, S.
Guichard, B. Couturaud, M. Revol, J.M. Servant, L. Dubertret, Quantitative and kinetic evolution of wound healing through image analysis, IEEE Trans. Med. Imag. 19 (7) (2000) 767–772.
[41] R.K. Daniel, D.L. Priest, D.C. Wheatley, Etiologic factors in pressure sores: an experimental model, Arch. Phys. Med. Rehabil. 62 (1981) 492–498.
[42] G. Torkaman, A.A. Sharafi, A. Fallah, H.R. Katoozian, Biomechanical and histological studies of experimental pressure sores in guinea pigs, in: Proceedings of the 10th ICBME, 2000, pp. 463–469.
[43] M. Herbin, A. Venot, Y. Devaux, C. Piette, Color quantitation through image processing in dermatology, IEEE Trans. Med. Imag. 9 (3) (1990) 262–269.
[44] B.B. Mandelbrot, The fractal geometry of nature, W.H. Freeman and Co., 1982.
[45] S. Peleg, J. Naor, R. Hartley, D. Avnir, Multiple resolution texture analysis and classification, IEEE TPAMI 6 (4) (1984) 518–522.
[46] S. Lekshmi, K. Revathy, S.R. Prabhakaran Nayar, Galaxy classification using fractal signature, Astron. Astrophys. 405 (2003) 1163–1167.
[47] S. Moghimi, G. Torkaman, M.H. Miran Baygi, A. Mahloojifar, Assessment of artificially induced pressure sores using a modified fractal analysis, J. Appl. Sci. 9 (8) (2009) 1544–1549.
[48] A.S. Kumar, S.K. Basu, K.L. Majumdar, Robust classification of multispectral data using multiple neural networks and fuzzy integral, IEEE Trans. Geosci. Remote Sens. 35 (3) (1997) 787–790.
[49] P.D. Gader, J.M. Keller, B.N. Nelson, Recognition technology for the detection of buried land mines, IEEE Trans. Fuzzy Syst. 9 (1) (2001) 31–43.
[50] Y. Lee, D. Marshall, Curvature based normalized 3D component facial image recognition using fuzzy integral, Appl. Math. Comput. 205 (2008) 815–823.
[51] L.I. Kuncheva, "Fuzzy" versus "nonfuzzy" in combining classifiers designed by boosting, IEEE Trans. Fuzzy Syst. 11 (6) (2003) 729–741.
[52] L.I. Kuncheva, Combining pattern classifiers, Wiley-Interscience, 2004.
[53] V. Maida, M. Ennis, C.
Kuziemsky, The Toronto symptom assessment system for wounds: a new clinical and research tool, Adv. Skin Wound Care 22 (10) (2009) 468–474.

work_gif2dzwncngpxg7mtmfakaub7e ----

ORAL PRESENTATION Open Access

Correlation analysis between digital photography measurement of trunk deformity and self-image perception in patients with idiopathic scoliosis

Antonia Matamalas*, Joan Bago, Elisabetta D'Agata, Ferran Pellise

From 11th International Conference on Conservative Management of Spinal Deformities - SOSORT 2014 Annual Meeting, Wiesbaden, Germany, 8-10 May 2014

Introduction
Trunk deformity in idiopathic scoliosis has been fully analyzed using different surface metrics, but all of them are expensive and cannot be widely used. Recently it has been suggested that some measures of trunk deformity obtained in digital photography can be useful in the assessment of trunk deformity. Some asymmetry measures have been proposed, but the relationship between these measures and patients' self-image perception has not been established.

Aim
To assess the validity of a clinical assessment tool of trunk deformity based on photographs, as compared to self-assessed appearance questionnaires.

Study design
Cross-sectional study.
Concurrent validity between postural indexes obtained from digital photographs and self-assessed appearance questionnaires.

Methods
Front and back digital photographs of patients with idiopathic scoliosis (Cobb angle > 25°) were obtained. Shoulder, armpit, and waist angles, in addition to trunk asymmetry indices, were calculated on front and back photographs with Surgimap software. All patients completed the SRS-22, SAQ, and TAPS questionnaires. Spearman's rank correlation coefficient (r) was used to estimate concurrent validity between both methods.

Results
80 consecutive patients (68 females), mean age 20.3 years (range 12 to 40 years), were included. Mean Cobb angle was 45.9° (range 25.1° to 77.2°). A moderate but significant correlation was found between waist height angle and TAPS (r = −0.34) and the SAQ appearance subscale (r = 0.35). The SRS-22 image subscale did not correlate with any photographic measure. Shoulder height angle and trapezium angle ratio correlated significantly with SRS-22 pain (r = −0.34) and SRS-22 subtotal (r = −0.23). No other correlation between the body image perception instruments and the photographic measurements was found.

Conclusion
Waist height angle measured with digital photography is moderately correlated with perceived trunk appearance. Trunk asymmetry is poorly correlated with self-assessed appearance, whereas shoulder asymmetry is correlated with pain and quality of life.

Published: 4 December 2014

doi:10.1186/1748-7161-9-S1-O8
Cite this article as: Matamalas et al.: Correlation analysis between digital photography measurement of trunk deformity and self-image perception in patients with idiopathic scoliosis. Scoliosis 2014 9(Suppl 1):O8.

Hospital Vall Hebron, Barcelona, Spain

Matamalas et al. Scoliosis 2014, 9(Suppl 1):O8
http://www.scoliosisjournal.com/supplements/9/S1/O8

© 2014 Matamalas et al; licensee BioMed Central Ltd.
work_gkjq2xqkyrglnjpjctkm7jgzjy ---- Original Paper

Ophthalmologica 2010;224:251–257 DOI: 10.1159/000284351

Screening for Diabetic Retinopathy: A Comparative Trial of Photography and Scanning Laser Ophthalmoscopy

P.J. Wilson a, J.D. Ellis a, C.J. MacEwen a, A. Ellingford b, J. Talbot a, G.P. Leese b
Departments of a Ophthalmology and b Diabetes, Ninewells Hospital and Medical School, Dundee, UK

Introduction
Various national committees recommend that patients with diabetes should undergo screening for retinopathy on an annual basis [1–3]. The method used should have a minimum sensitivity of 80% and a specificity of at least 95% for referable retinopathy [4, 5]. The modality of choice in the UK is digital retinal photography using 45-degree fields. In Scotland, a single macula-centred field is taken of each eye, whereas in England and Wales this is supplemented with a disc-centred field [2, 6]. With either approach, the sensitivity for referable diabetic retinopathy has been demonstrated to lie within the range of 78–100%, with specificity of 86–100% [7–12].
This variation is due to differential use of mydriasis, definition of referable disease, and fields of photography, but overall retinal photography usually achieves the minimum standard for sensitivity, although it rarely achieves the required standard for specificity. In addition to screening precision, retinal photography must also contend with the possibility of technical failure, i.e. instances in which the image quality is not sufficient to permit grading. In routine practical eye screening programmes, the failure rate has been between 4.6 and 11.9% [13, 14], but in research projects it has varied from 1.3% with mydriasis to 36% without mydriasis [7–11, 15, 16]. This wide range is partly due to different use of mydriasis

Key Words
Scanning laser ophthalmoscopy · Retinal photography · Diabetic retinopathy screening

Abstract
Aims: To evaluate the sensitivity and specificity of wide-field scanning laser ophthalmoscopy (WSLO) in the detection of referable diabetic eye disease, and to compare its performance with digital retinal photography. Methods: Patients enrolled into the study underwent non-mydriatic WSLO imaging, then single- and dual-field mydriatic digital retinal photography, and examination with slit lamp biomicroscopy, the reference standard. Grading of retinopathy was performed in a masked fashion. Results: A total of 380 patients (759 eyes) were recruited to the study. Technical failure rates for dilated single-field retinal photography, dual-field retinal photography and undilated WSLO were 6.3, 5.8 and 10.8%, respectively (0.005 < p < 0.02 for photography vs. WSLO). The respective indices for screening sensitivity were 82.9, 82.9 and 83.6% (p > 0.2). Specificity was 92.1, 91.1 and 89.5%, respectively (p > 0.2). Conclusions: Sensitivity and specificity for WSLO were similar to retinal photography. The technical failure rate was greater for the WSLO used in this study. Copyright © 2010 S.
Karger AG, Basel

Received: November 11, 2009; Accepted: November 29, 2009; Published online: February 10, 2010

and partly to variations in the definition of 'ungradable'. Such cases require review by other means, most commonly slit lamp examination by an ophthalmologist or optometrist [1, 3]. This 2-step screening process is costly, both to providers and to patients, in terms of time, personnel and inconvenience. For these reasons, alternative and improved means of retinal screening should constantly be explored [17]. Wide-field scanning laser ophthalmoscopy (WSLO) utilises a fine laser beam which scans the retinal surface in a grid pattern in order to build up an image of the retina. The potential advantages of such a system include: (a) the possibility of an entirely non-mydriatic system; (b) the ability to resolve detail through some cataracts, leading to a lower rate of technical failure; (c) a wider angle of view, equivalent to an external angle of 136°. A number of studies of WSLO in the clinical context have determined the ability to identify diabetic retinal changes [18, 19]. However, to date no substantial trial of the utility of WSLO as a screening tool in clinical practice has been undertaken. In this study, we evaluated the sensitivity and specificity of WSLO for referable diabetic retinopathy, and sought to determine the role of WSLO in screening for diabetic retinopathy.

Methods
Subjects
Ethical approval for the study was obtained from the Tayside Committee for Research Ethics.
The subjects were recruited from a cohort of patients with diabetes within the Tayside area over a 7-month period and were informed of the study by written invitation. Subjects were recruited from the general population in retinopathy screening and from the diabetic retinopathy clinic. Patients were excluded if they were unable to provide consent or undergo imaging due to learning difficulties or physical illness.

Protocol
All participants were recruited in Dundee, Scotland, and gave signed consent to study participation. All underwent monocular testing of visual acuity using a Snellen chart. Each eye was then imaged using a wide-field scanning laser ophthalmoscope (Optomap P200, Optos plc, Dunfermline, UK), an example of which is given in figure 1. Participants then underwent pupillary dilation with 1% tropicamide and were imaged using retinal photography (Canon CR-DGi with a Canon 20D SLR set at 2.0 megapixels, Canon Inc., Tokyo, Japan) after a 20-min delay to allow mydriasis to occur. Two 45-degree fields were taken with photography (one fovea-centred and one disc-centred field), as recommended by the English National Screening Programme for Diabetic Retinopathy [6]. The fovea-centred image was utilised for single-field scoring, whilst both were analysed in dual-field scoring. Finally, subjects were assessed with slit lamp biomicroscopy using a 78- or 90-dpt lens. Findings from the slit lamp were recorded on a standardised sheet. The WSLO was performed by a trained nurse, the photography by an ophthalmic photographer and the slit lamp assessment by one consultant or one trainee ophthalmologist.

All retinas/images were graded in a masked fashion, using a common scoring system. All imaging was graded by an ophthalmology research fellow (P.J.W.), the photography at magnification levels of 100%, and the WSLO images at 125%. The grading system utilised the Scottish diabetic retinopathy grading scheme [20], which is based on the Early Treatment Diabetic Retinopathy Study, including the 4:2:1 rule for severe non-proliferative disease. The grader determined the presence or absence of key pathologies and, where appropriate, the number of lesions or the distance from the fovea. The pathology checklist was used to calculate retinopathy and maculopathy scores for each eye. Referable retinopathy was defined as retinopathy of grade 3 or 4 (R3/R4) or maculopathy of grade 2 (M2) (table 1). To obtain a unified score 'per patient', the eye with the more severe pathology was recorded.

Table 1. Scottish diabetic retinopathy grading scheme

Pathology detected                                              Retinopathy grade
No retinopathy                                                  R0
Any of: microaneurysms; dot haemorrhages; flame haemorrhages    R1
≥4 blot haemorrhages in 1 hemifield or quadrant                 R2
Any of: ≥4 blot haemorrhages per hemifield or quadrant;
  abnormalities of venous calibre; intraretinal
  microvascular abnormalities                                   R3
Any of: new active vessels at disc; new active vessels
  elsewhere; vitreous haemorrhage                               R4
Enucleated                                                      R5
Not adequately visualised                                       R6

Pathology detected                                              Maculopathy grade
Exudate >1 but ≤2 DD from fovea                                 M1
Exudate or blot haemorrhage ≤1 DD from fovea                    M2

DD = disc diameter.

Fig. 1. WSLO image of mild non-proliferative diabetic retinopathy.
If images for one eye were deemed ungradable (R6), then the patient was scored as R6, unless the contralateral eye was scored as referable retinopathy or referable maculopathy. The definition used for ungradable images was that agreed by the Scottish Diabetic Retinopathy Screening Programme: any image with visible referable disease is deemed gradable; an adequate image has a sufficient field of view (the fovea is at least 2 disc diameters from the edge of the image, and the optic disc is fully seen) and sufficient clarity (third-generation vessels around the fovea are visible) [20].

Table 2. Distribution of retinopathy/maculopathy per patient

Retinopathy grade                                  Slit lamp    1-field photo  2-field photo  WSLO
No retinopathy                                     219 (57.6)   185 (48.7)     180 (47.4)     176 (46.3)
Background, no maculopathy                         75 (19.7)    78 (20.5)      82 (21.6)      65 (17.1)
No referable retinopathy, observable maculopathy   8 (2.1)      8 (2.1)        8 (2.1)        9 (2.4)
Referable disease                                  78 (20.5)    85 (22.4)      88 (23.2)      89 (23.4)
Technical failure (R6)                             –            24 (6.3)       22 (5.8)       41 (10.8)
Total                                              380 (100.0)  380 (100.0)    380 (100.0)    380 (100.0)

Results are numbers of patients, with percentages in parentheses.

Table 3. Comparison between grading of patients by slit lamp and imaging. Rows are imaging grades, columns slit lamp grades; each cell gives single-field photography (P1) / dual-field photography (P2) / WSLO.

a) Retinopathy
         R0            R1          R2       R3         R4
R0       177/171/163   7/8/14      0/0/0    2/2/2      0/0/0
R1       28/34/26      72/70/53    1/1/2    8/8/6      3/2/2
R2       0/0/2         3/4/7       1/1/0    4/5/5      1/0/1
R3       2/2/2         7/8/12      2/2/2    6/14/17    4/6/5
R4       0/0/2         1/2/0       0/0/0    4/5/4      10/10/9
R6       15/15/27      11/9/15     0/0/0    1/1/1      0/0/1

b) Maculopathy
         M0            M1       M2
M0       272/274/250   5/4/6    4/4/7
M1       9/9/13        4/4/4    0/0/0
M2       25/25/27      3/4/3    34/34/29
R6       22/20/38      1/1/0    1/1/3

Results are numbers of patients. In the original table, italics marked false-negative and bold marked false-positive imaging.
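The per-patient scoring rule described above (referable disease is R3/R4 retinopathy or M2 maculopathy; the worse eye determines the patient score; an ungradable eye makes the patient R6 unless the other eye is referable) can be expressed as a small decision function. This is an illustrative sketch of the rule as stated in the text, not the screening programme's actual software; the function names are ours.

```python
def eye_referable(retinopathy, maculopathy):
    # Referable disease as defined in the paper: R3/R4 retinopathy or M2 maculopathy.
    return retinopathy in {"R3", "R4"} or maculopathy == "M2"

def patient_score(eyes):
    """eyes: list of (retinopathy_grade, maculopathy_grade) tuples, one per eye."""
    # Any referable eye makes the patient referable, even if the other eye is R6.
    if any(eye_referable(r, m) for r, m in eyes):
        return "referable"
    # Otherwise an ungradable eye makes the whole patient ungradable.
    if any(r == "R6" for r, m in eyes):
        return "R6"
    return "non-referable"

print(patient_score([("R6", "M0"), ("R3", "M0")]))  # referable
print(patient_score([("R6", "M0"), ("R1", "M0")]))  # R6
```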
Slit lamp examination was used as the 'reference standard' examination and has previously been shown to be as effective as 7-field stereophotography, whilst also being much less prone to media-opacity-related failure [15]. Sensitivity and specificity were calculated for digital retinal photography and WSLO when compared to slit lamp examination. Calculations were made before and after excluding ungradable images from the analysis. Comparisons between the two methods were assessed statistically using χ2 calculations on SPSS (Chicago, Ill., USA).

Results
Four hundred and five patients were invited to participate, of whom 22 declined, and 3 were too infirm to participate. Therefore, 380 patients (93.8% of those invited) were recruited to the study, with a median age of 67.4 years (range 21–94), of whom 230 (60.5%) were male. Of the 380 patients, 205 were from the diabetic retinopathy clinic, and 175 were from the screening clinic.

Of all patients screened, 219 (57.6%) had no retinopathy, 75 (19.7%) had background retinopathy without maculopathy (R1/R2), 8 (2.1%) had observable maculopathy (M1), and 78 (20.5%) had referable retinopathy or maculopathy (R3/R4 or M2; table 2). For the purposes of context and comparison, during the same time period in the screening programme across Tayside, 59.2% of patients had no retinopathy, 30.0% had background retinopathy without maculopathy, 2.8% had observable maculopathy, and 6.9% had referable retinopathy or maculopathy. The technical failure rate for the region was 3.3%.

The grading assigned to each patient is shown in table 3, with differences in referability highlighted to demonstrate false-positive and false-negative referrals. The majority of false-negative retinopathy referrals were due to venous beading (1-field/2-field/WSLO = 61.1/58.8/62.5%, n.s.) and/or intraretinal microvascular abnormalities (38.9/41.2/25.0%, n.s.).

Table 4. Sensitivity and specificity for retinopathy and maculopathy, proportion and 95% CI (in parentheses). For each modality, counts are given as slit lamp referable / slit lamp non-referable.

Scores per patient                      1-field photo        2-field photo          WSLO
Referable disease (R3/R4/M2)
  Image referable, n                    63 / 22              63 / 25                61 / 28
  Image non-referable, n                13 / 258             13 / 257               12 / 238
  Technical failure (R6), n             2 / 22               2 / 20                 5 / 36
  Technical failure (R6), %             6.3 (5.1–7.6)*       5.8 (4.6–7.0)***       10.8 (9.2–12.4)
  Sensitivity, %                        82.9 (74.4–91.4)     82.9 (74.4–91.4)       83.6 (75.1–92.1)
  Specificity, %                        92.1 (89.0–95.3)     91.1 (87.8–94.5)       89.5 (85.8–93.2)
  Kappa (κ)                             0.72 (0.63–0.80)     0.70 (0.61–0.79)       0.67 (0.58–0.76)
All referable cases (R3/R4/R6/M2)
  Image referable, n                    65 / 44              65 / 45                66 / 64
  Image non-referable, n                13 / 258             13 / 257               12 / 238
  Sensitivity, %                        83.3 (75.1–91.6)     83.3 (75.1–91.6)       84.6 (76.6–92.6)
  Specificity, %                        85.4 (81.5–89.4)**   85.1 (81.1–89.1) n.s.  78.8 (74.2–83.4)
  Kappa (κ)                             0.60 (0.51–0.69)     0.59 (0.50–0.69)       0.51 (0.42–0.60)
Referable retinopathy (R3/R4 only)
  Image referable, n                    34 / 12              35 / 14                35 / 18
  Image non-referable, n                18 / 289             17 / 289               16 / 267
  Technical failure (R6), n             1 / 26               1 / 24                 2 / 42
  Technical failure (R6), %             7.1 (5.8–8.4)*       6.6 (5.3–7.9)***       11.6 (9.9–13.2)
  Sensitivity, %                        65.4 (52.5–78.3)     67.3 (54.6–80.1)       68.6 (55.9–81.4)
  Specificity, %                        96.0 (93.8–98.2)     95.4 (93.0–97.7)       93.7 (90.9–96.5)
  Kappa (κ)                             0.64 (0.52–0.75)     0.63 (0.52–0.75)       0.60 (0.48–0.72)
All diabetic eye diseases
  Image referable, n                    142 / 29             143 / 35               130 / 33
  Image non-referable, n                10 / 175             11 / 169               17 / 159
  Technical failure (R6), n             9 / 15               7 / 15                 14 / 27
  Technical failure (R6), %             6.3 (5.1–7.6)*       5.8 (4.6–7.0)***       10.8 (9.2–12.4)
  Sensitivity, %                        93.4 (89.5–97.4)     92.9 (88.8–96.9)       88.4 (83.3–93.6)
  Specificity, %                        85.8 (81.0–90.6)     82.8 (77.7–88.0)       82.8 (77.5–88.1)
  Kappa (κ)                             0.78 (0.71–0.84)     0.74 (0.67–0.81)       0.70 (0.63–0.78)

Significant difference between photography and WSLO: *p = 0.02, **p = 0.04, ***p = 0.005; n.s. = p > 0.05.
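The sensitivity and specificity figures follow directly from the counts against the slit lamp reference: sensitivity = TP/(TP+FN) and specificity = TN/(TN+FP). A minimal arithmetic check using the 'referable disease' counts for single-field photography from table 4 (the helper function is ours, for illustration only):

```python
def sens_spec(tp, fn, fp, tn):
    """Return (sensitivity, specificity) as percentages."""
    return 100 * tp / (tp + fn), 100 * tn / (tn + fp)

# Single-field photography vs. slit lamp, referable disease (table 4):
# 63 referable on both, 13 missed by imaging, 22 false referrals, 258 agreed non-referable.
sens, spec = sens_spec(tp=63, fn=13, fp=22, tn=258)
print(round(sens, 1), round(spec, 1))  # 82.9 92.1
```

The same function reproduces the WSLO row: 61/12/28/238 gives 83.6% sensitivity and 89.5% specificity, matching the paper.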
False-positive retinopathy referrals were also associated with intraretinal microvascular abnormalities (66.7/71.4/33.3%, n.s.) and/or venous beading (41.7/42.9/72.2%, n.s.). The number of ungradable images with undilated WSLO was greater than that obtained with dilated 2-field retinal photography (10.8 vs. 5.8%, p = 0.005) and with dilated 1-field retinal photography (10.8 vs. 6.3%, p = 0.02). WSLO technical failure was documented in 54 eyes (47 patients, 6 of whom were scored as referable, due to disease in the contralateral eye). For the 175 patients from the screening clinic, the technical failure rate was much lower: 2.9% (95% confidence interval, CI = 1.6–4.1%) for single- and dual-field photography, and 4.0% (95% CI = 2.5–5.5%) for WSLO (n.s.).

The sensitivity for detection of referable disease with WSLO was 83.6% (95% CI = 75.1–92.1%). Specificity was 89.5% (95% CI = 85.8–93.2%). This was not significantly different to either 1- or 2-field retinal photography (table 4). An alternative method of calculation is to include the ungradable images and count as referable all patients either with referable disease or ungradable images (table 4, 'all referable cases'). Utilising this method, sensitivity remained similar between the groups, but specificity fell more in the WSLO group, due to the inclusion of a greater number of technical failures (p = 0.04 compared to single-field photography, n.s. compared to dual-field photography). There were no significant differences between all 3 modalities in the detection of the presence of 'any retinopathy'.

The type of lesion identified by each imaging modality was also assessed. Single-field photography was able to identify microaneurysms in 95.9% of cases that slit lamp had done so, whereas WSLO achieved this in only 79.2% (p < 0.001), a difference that appeared to be due to the lower resolution.
On the other hand, WSLO was able to detect blot haemorrhages in 94.0% of cases, compared to 79.7% in single-field photography and 81.2% in dual-field photography (p < 0.03). With regard to referable lesions, photography had a tendency to miss new vessels elsewhere more often than WSLO (30.8% sensitivity compared to 38.5%, n.s.), but outperformed WSLO in the identification of macular blot haemorrhages (83.3 vs. 36.3%, p < 0.05). Single-field photography was the quickest to grade (average 64 s/eye), followed by 2-field photography (average 94 s) and WSLO (average 106 s), with p < 0.001 between all 3 groups.

Discussion
This study has demonstrated that WSLO achieved a sensitivity of 83.6% in screening for diabetic retinopathy, compared to 82.9% for digital photography, although there was no significant difference between these values. WSLO imaging without mydriasis proved to have a higher rate of ungradable images than routine photography with mydriasis. Therefore, in screening practice, WSLO may require selected mydriasis, but in fewer patients than for photography. WSLO without mydriasis obviates the associated disadvantages of mydriasis, which include drug administration and cost, delay in capturing images, in addition to multiple patient-related issues such as driving difficulties and blurred vision [2, 21]. WSLO was used without mydriasis and demonstrated a technical failure rate of 10.8%. Some of these failures may have been due to a procedural failure to review the images at the time of capture; adjustment of capture settings can overcome some of the issues associated with poor image quality. It should be noted that when retinal cameras are used without mydriasis, the technical failure rates may be as high as 20–36% [8, 16], although in routine practice, with selective use of mydriasis, the failure rate can be around 5–12% [13, 14]. The use of mydriasis with WSLO may reduce the number of technical failures to a similar degree.
We did not test this, however. Furthermore, a relatively low technical failure rate for digital photography in this study was only achieved on the software platform used for grading WSLO images. This software facilitated the adjustment and enhancement of images in order to maximise the level of detail visible. Such software is not usually available, and photographic grading on the platform used in routine clinical service led to a technical failure rate of 10.7%, which was comparable to the WSLO at 10.8%. The high rate of technical failures both for digital photography and for WSLO may largely be explained by the fact that many patients were recruited from the diabetic retinopathy clinic, many of whom had been referred to the clinic solely due to difficulties in photographic screening. This recruitment process follows the methodology of previous studies [15, 16] and also explains the high prevalence of retinopathy in the current study.

A wide angle of view resulted in detection of lesions with WSLO that were outwith the field of routine photography. The WSLO covers an area greater than 7-field photography, which is generally accepted as the de facto reference standard of retinal examination. However, in the screening context, the relevance of detecting peripheral lesions beyond the 45-degree angle of the fovea is currently uncertain and controversial.

There are, however, several practical issues and considerations which must be addressed before WSLO may be widely accepted and adopted as a screening modality. Firstly, screening is often conducted in peripheral geographical locations with mobile equipment [4], and the role of WSLO may be limited in these circumstances. Certainly, WSLO has been incorporated into a mobile unit, but this has not been formally trialled in the context of diabetic screening.
For this reason WSLO would probably need to be confined to static sites initially. Secondly, the wide angle of view means that the available pixels are spread over a greater area, resulting in lower pixel density. The English National Screening Committee previously recommended that any camera should have at least 20 pixels/degree in both axes [22]. Although this has since been recognised as a flawed surrogate for smallest resolvable lesion, no suitable alternative specification has been identified. The Canon camera operated at a resolution of 2.0 megapixels, equating to approximately 28.1 pixels/degree. The WSLO used in this study has 2 sensors of 4 megapixels each, covering a horizontal external angle of 136° and a vertical external angle of 96°. This achieves a resolution of 14.6 pixels/degree in the horizontal plane and 20.7 in the vertical. However, it must be recognised that describing resolution in 'pixels per degree' takes no account of the distribution of pixel density across the image, which on the WSLO is greatest centrally. Furthermore, it is a measure developed for digital photography, which uses charge-coupled devices or a complementary metal oxide semiconductor, in contrast to WSLO, which uses an avalanche photodiode system with a different technical operation and performance.

Adequate resolution is essential in order to detect small lesions. Poor resolution results in either diagnostic uncertainty (loss of precision) or failure to detect the lesion at all (loss of sensitivity). Diagnostic uncertainty can be improved by training, experience and familiarity of the grader with the imaging modality. These practical difficulties of WSLO, including lower resolution, appear to have been compensated for by other benefits, given the comparable sensitivity and specificity in this study.
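The quoted pixels-per-degree figures can be reproduced approximately from the sensor geometry, since pixels/degree is simply pixels along an axis divided by the field angle on that axis. Assuming each 4-megapixel sensor is read out as roughly 2000 × 2000 pixels (our assumption; only the 14.6 and 20.7 pixels/degree figures come from the paper):

```python
# Hypothetical read-out: ~2000 x 2000 pixels per 4-megapixel sensor (our assumption).
h_pixels, v_pixels = 2000, 2000
h_angle, v_angle = 136, 96   # external angles in degrees, as stated in the paper

h_ppd = h_pixels / h_angle   # horizontal pixels per degree
v_ppd = v_pixels / v_angle   # vertical pixels per degree
print(round(h_ppd, 1), round(v_ppd, 1))  # 14.7 20.8
```

These come out close to the quoted 14.6 and 20.7, suggesting the active image area is slightly smaller than the full sensor; either way, the horizontal density falls below the 20 pixels/degree guideline while the vertical roughly meets it.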
These benefits may include improved detection of peripheral lesions, and resistance to chromatic aberration and scatter, resulting in improved lesion contrast. It is noteworthy that the WSLO was less effective at detecting microaneurysms but was better at detecting peripheral blot haemorrhages, which are clinically more important. However, significantly more time is spent on analysis of the WSLO image (106 s/eye) than with either dual-field digital photography (94 s/eye) or single-field photography (64 s/eye). This is due to a combination of diagnostic uncertainty, a larger area of the retina to grade, separate analysis of retinal and choroidal layers, and the requirement for adjustment and enhancement of the image [23], a process not required to the same degree for retinal photography. The differences may become less important if automated grading becomes adopted for future practice [24, 25].

Future generations of WSLO may address some of these issues, including higher resolution of images, greater image enhancement and intensity associated with lower rates of technical failure, better ergonomics for patient and photographer, and more efficient review software. Whether the newer model WSLO will deliver clinical improvements in line with the technological improvements should be the subject of future study. Currently the purchase or lease cost of WSLO is greater than for retinal cameras, but like retinal cameras, the costs are likely to decrease if more are produced and used. Regardless of the resolving ability of any imaging system, an investment in improved infrastructure and programme administration is essential to optimise the contribution of these technologies to a rapidly growing field.

In summary, the sensitivities of retinal photography and WSLO for referable diabetic retinopathy were 82.9% and 83.6%, respectively.
If this level of sensitivity is agreed to be acceptable for screening, then both modalities could be appropriately used for screening purposes.

Acknowledgements
We would like to thank Prof. Peter Donnan for his support in the statistical analysis of the data. We acknowledge that this work was supported by an unattributed research grant from Optos plc.

References
1 Garvican L, Clowes J, Gillow T: Preservation of sight in diabetes: developing a national risk reduction programme. Diabet Med 2000;17:627–634.
2 Facey K, Cummins E, Macpherson K, Morris A, Reay L, Slattery J: Organisation of services for diabetic retinopathy screening. Health Technology Assessment Report 1. Glasgow, Health Technology Board for Scotland, 2002.
3 Fong DS, Aiello L, Gardner TW, King GL, Blankenship G, Cavallerano JD, Ferris FL 3rd, Klein R: Retinopathy in diabetes. Diabetes Care 2004;27(suppl 1):S84–S87.
4 Taylor R: Practical community screening for diabetic retinopathy using the mobile retinal camera: report of a 12 centre study. British Diabetic Association Mobile Retinal Screening Group. Diabet Med 1996;13:946–952.
5 British Diabetic Association: Retinal Photography Screening for Diabetic Eye Disease. London, British Diabetic Association, 1997.
6 Workbook 4 of the English National Screening Programme for Diabetic Retinopathy. http://www.retinalscreening.nhs.uk/pages (accessed August 12, 2009).
7 Olson JA, Strachan FM, Hipwell JH, Goatman KA, McHardy KC, Forrester JV, Sharp PF: A comparative evaluation of digital imaging, retinal photography and optometrist examination in screening for diabetic retinopathy. Diabet Med 2003;20:528–534.
8 Scanlon PH, Malhotra R, Thomas G, Foy C, Kirkpatrick JN, Lewis-Barned N, Harney B, Aldington SJ: The effectiveness of screening for diabetic retinopathy by digital imaging photography and technician ophthalmoscopy.
Diabet Med 2003;20:467–474.
9 Harding SP, Broadbent DM, Neoh C, White MC, Vora J: Sensitivity and specificity of photography and direct ophthalmoscopy in screening for sight threatening eye disease: the Liverpool Diabetic Eye Study. Br Med J 1995;311:1131–1135.
10 Lin DY, Blumenkranz MS, Brothers RJ, Grosvenor DM: The sensitivity and specificity of single-field nonmydriatic monochromatic digital fundus photography with remote image interpretation for diabetic retinopathy screening: a comparison with ophthalmoscopy and standardized mydriatic color photography. Am J Ophthalmol 2002;134:204–213.
11 Lopez-Bastida J, Cabrera-Lopez F, Serrano-Aguilar P: Sensitivity and specificity of digital retinal imaging for screening diabetic retinopathy. Diabet Med 2007;24:403–407.
12 Perrier M, Boucher MC, Angioi K, Gresset JA, Olivier S: Comparison of two, three and four 45 degrees image fields obtained with the Topcon CRW6 non-mydriatic camera for screening for diabetic retinopathy. Can J Ophthalmol 2003;38:569–574.
13 Leese GP, Morris AD, Swaminathan K, Petrie JR, Sinharay R, Ellingford A, Taylor A, Jung RT, Newton RW, Ellis JD: Implementation of national diabetes retinal screening programme is associated with a reduced referral rate to ophthalmology. Diabet Med 2005;22:1112–1115.
14 Philip S, Cowie LM, Olson JA: The impact of the Health Technology Board for Scotland's grading model on referrals to ophthalmology services. Br J Ophthalmol 2005;89:891–896.
15 Scanlon PH, Malhotra R, Greenwood RH, Aldington SJ, Foy C, Flatman M, Downes S: Comparison of two reference standards in validating two field mydriatic digital photography as a method of screening for diabetic retinopathy. Br J Ophthalmol 2003;87:1258–1263.
16 Murgatroyd H, Ellingford A, Cox A, Binnie M, Ellis JD, MacEwen CJ, Leese GP: Effect of mydriasis and different field strategies on digital image screening of diabetic eye disease. Br J Ophthalmol 2004;88:920–924.
17 Leese GP, Ellis JD: Quality assurance for diabetic retinal screening. Diabet Med 2007;24:579–581.
18 Mayer H: The Optos Panoramic 200 scanning laser ophthalmoscope: advancing optometric practice. Clin Refract Optom 2004;15:156–162.
19 Friberg TR, Pandya AN, Eller AW: Non-mydriatic panoramic fundus imaging using a non-contact scanning laser-based system. Ophthalmic Surg Lasers Imag 2003;34:488–497.
20 Scottish diabetic retinopathy grading scheme version 1.1. 2007. http://www.ndrs.scot.nhs.uk/ClinGrp/ (accessed August 12, 2009).
21 Jude E, Ryan B, O'Leary J, Gibson JM, Dodson PM: Pupillary dilatation and driving in diabetic patients. Diabet Med 1998;15:143–147.
22 The Liverpool Declaration 2005. Report of conference, version 2, Liverpool, 2006. www.drscreening2005.org.uk (accessed August 12, 2009).
23 Khandhadia S, Madhusudhana KC, Kostakou A, Forrester JV, Newsom RS: Use of Optomap for retinal screening within an eye casualty setting. Br J Ophthalmol 2009;93:52–55.
24 Larsen N, Godt J, Grunkin M, Lund-Andersen H, Larsen M: Automated detection of diabetic retinopathy in a fundus photographic screening population. Invest Ophthalmol Vis Sci 2003;44:767–771.
25 Philip S, Fleming AD, Goatman KA, Fonseca S, McNamee P, Scotland GS, Prescott GJ, Sharp PF, Olson JA: The efficacy of automated 'disease/no disease' grading for diabetic retinopathy in a systematic screening programme. Br J Ophthalmol 2007;91:1512–1517.

work_gkqcpmhocjb37en4lwjvsvalgm ---- Applications in Plant Sciences 2015 3(5): 1400116; http://www.bioone.org/loi/apps
© 2015 LaFrankie and Chua. Published by the Botanical Society of America. This work is licensed under a Creative Commons Attribution License (CC-BY-NC-SA).
For more than 300 years, the pressed and dried plant specimen has been the fundamental artifact in the global survey of plant diversity. Most historians of botany credit Luca Ghini with the formal development of techniques to dry plant specimens (Egerton, 2003; Frank and Perkins, 2004). His methods worked so well that the 400-year-old herbarium of Ghini's most renowned student, Andrea Cesalpino, remains intact at the Museo di Storia Naturale di Firenze at Florence. The pressed plant specimen soon became the standard for botanical preservation, storage, and comparative study (DeWolf, 1968). Today, approximately 3400 herbaria around the world house an estimated 350,000,000 specimens (Thiers, 1998; Frank and Perkins, 2004).

Photographs have been little more than an ancillary part of traditional herbarium collections. When photographs of plants first became common in the late 19th century, botanists saw them as a form of botanical illustration rather than as a sort of herbarium specimen, and so stored them in botanical libraries rather than as part of the formal herbaria (Simpson and Barnes, 2008). Such a divergent use of photographs and specimens is surprising in that there is a curious and unappreciated connection between botany and the advent of photography. One of the early inventors of photographic methods, William Henry Fox Talbot, used botanical specimens in some of his earliest plates (Gernshiem, 1986). The British botanist Anna Atkins published the first book of photographs in 1843 and titled it Photographs of British Algae: Cyanotype Impressions (Parr and Badger, 2004). The role of photographs as illustrations rather than scientific artifacts continued into the 20th century. For example, between 1907 and 1922, Ernest Henry Wilson carried to China a large-format Sanderson camera and a set of glass plates with which he composed more than 2400 images.
These images remain at the library of the Arnold Arboretum and some are available online (Wilson, 2011). Most large herbarium libraries have similar holdings of historic photographic prints. The advent of digital photography in the 1990s did not alter the principal role of photographs as a form of illustration rather than of documentation. Examples of floras that include online digital images include Wisflora: Wisconsin Vascular Plant Species (http://www.botany.wisc.edu/wisflora/), Michigan Flora Online (http://michiganflora.net/), and e-Flora Florida: Field Guide to Florida Plants (http://www.floridaplants.com/Eflora/cover.htm). These photographs, which number in the thousands, are not usually linked to a specimen or to a specific record of time and place and so are not treated in the same fashion as herbarium specimens. Indeed, many of the largest plant photographic collections on the Internet are not associated with herbaria at all. For example, the Gymnosperm Database (http://www.conifers.org), the International Aroid Society (www.aroid.org), and PhytoImages (http://www.phytoimages.siu.edu/) present thousands of photographs arranged by individual species. And of course, private individuals have recently filled photographic websites such as Flickr or Facebook with millions of digital photographs of plants, often rare, sometimes from isolated locations. In the mentioned examples, the photographs are

1 Manuscript received 11 December 2014; revision accepted 3 March 2015. The authors thank the Energy Development Corporation Inc. (Pasig City, Philippines) for access to the Kanlaon Geothermal Site and the Plant and Wildlife Bureau of the Department of Environment and Natural Resources (Quezon City, Philippines) for the collecting permits. 4 Author for correspondence: jlafrankie@yahoo.com doi:10.3732/apps.1400116

APPLICATION ARTICLE

APPLICATION OF DIGITAL FIELD PHOTOGRAPHS AS DOCUMENTS FOR TROPICAL PLANT INVENTORY 1

JAMES V.
LAFRANKIE JR. 2,3,4 AND ANNA I. CHUA 2

2 Institute of Biology, University of the Philippines, Diliman, Quezon City 1100, Philippines; and 3 College of Forestry, Guangxi University, Nanning, Guangxi, People's Republic of China

• Premise of the study: We tested the credibility and significance of digital field photographs as supplements or substitutes for conventional herbarium specimens with particular relevance to exploration of the tropics.
• Methods: We made 113 collections in triplicate at a species-rich mountain in the Philippines while we took 1238 digital photographs of the same plants. We then identified the plants from the photographs alone, categorized the confidence of the identification and the reason for failure to identify, and compared the results to identifications based on the dried specimens.
• Results: We identified 72.6% of the photographic sets with high confidence and 27.4% with low confidence or only to genus. In no case was a confident identification altered by subsequent examination of the dried specimen. The failure to identify photographic sets to species was due to the lack of a key feature in 67.8% of the cases and due to a poorly understood taxonomy in 32.2%.
• Discussion: We conclude that digital photographs cannot replace traditional herbarium specimens as the primary elements that document tropical plant diversity. However, photographs represent a new and important artifact that aids an expedient survey of tropical plant diversity while encouraging broad public participation.

Key words: digital photographs; flora; herbarium specimens; inventory; tropics.
encourage their procurement and that herbaria should conserve the images in a manner similar to ordinary specimens?"

MATERIALS AND METHODS

The study site was an accessible but poorly collected mountain in the Philippines, Mt. Kanlaon, Negros Occidental, 1200 m a.s.l. elevation, 10.477°N, 123.149°E. Between October 20 and 23, 2012, we made 15 transects separated by roughly 50 m and each roughly 100 m long and perpendicular to the main trail. We collected all fertile angiosperms that we could locate and labeled the resulting 113 specimens as Chua 001–113. We attempted to take only a single gathering of each species that was evidently different except where the specimens represented different floral or fruit stages. We saved material for specimens in triplicate while simultaneously taking a large number of digital photographs. The plants were photographed in the field and on a table prior to preparation as specimens. Two cameras were used: Nikon D40 SLR (Nikon Corporation, Tokyo, Japan) and Canon PowerShot G12 (Canon USA, Melville, New York, USA) with built-in macro function. Most of the photographs were taken by the junior author, a skilled photographer with limited botanical experience. To make the test a fair reflection of what an amateur botanist might do on their own, the senior author limited advice on the content of the photos to the recommendation that all plant parts be photographed with maximum magnification, that parts be dissected wherever possible, and that a scale be included. Approximately 1238 photographs were taken at a size of 2500 by 3500 pixels. They were immediately sorted, matched to collection numbers, and then copied to external storage.
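The sorting step above, matching each photograph to its collection number before copying to storage, can be sketched as a small script. This is an illustrative sketch only, not the authors' actual procedure; the filenames, the mapping, and the one-folder-per-gathering layout are assumptions.

```python
import shutil
from pathlib import Path

def organize_photos(mapping, photo_dir, out_dir):
    """Copy each raw photo into a folder named for its collection number,
    so every photographic set stays matched to its specimen.

    mapping: dict of photo filename -> collection number, e.g.
    {"DSC_0412.JPG": "Chua_016"}; the filenames and numbering scheme
    here are hypothetical.
    """
    photo_dir, out_dir = Path(photo_dir), Path(out_dir)
    for fname, collection in mapping.items():
        dest = out_dir / collection
        dest.mkdir(parents=True, exist_ok=True)  # one folder per gathering
        shutil.copy2(photo_dir / fname, dest / fname)  # copy2 keeps timestamps
```

Copying rather than moving preserves the raw card contents as a backup, which matters in the field where the external drive is the only duplicate.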
The specimens themselves were pressed lightly in newspaper and temporarily preserved with denatured alcohol. The specimens were sent back to the University of the Philippines, Diliman, Quezon City, where they were pressed, dried, labeled, and mounted in a conventional fashion. Specimens are stored at the Jose Vera Santos Memorial Herbarium, University of the Philippines (PUH), with one duplicate at the Philippine National Herbarium (PNH) and a third for distribution. The senior author examined the 113 sets of photographs with the aim of matching the images to a known species and type specimen in consultation with a currently available taxonomic reference. Specialists were consulted in three cases: orchids were reviewed by W. Suarez of the Philippines; figs by L. Rodriguez, at the University of the Philippines, Diliman; and Cyrtandra J. R. Forst. & G. Forst. (Gesneriaceae) by G. Bradley, Royal Botanical Gardens, Kew. The level of confidence of the identification was recorded as follows: (1) confident identification to species; (2) identification to species but with low confidence, further study of the specimens is needed; (3) identification to genus only. The next category described the reasons for a failure to identify the photographs to species, that is, the reason they were placed in categories 2 and 3. These were either (1) details needed for identification not evident in the photographs, or (2) taxonomy of the genus not sufficiently known for identification. The specimens were then examined and identified in consultation with the collections of PUH and PNH.

RESULTS

We found that 72.6% of the photographic sets could be identified to species with high confidence, 8% to a species with low confidence, and 19.4% could be identified only to genus (Appendix 1). Of the species identifications made with high confidence, none were altered by subsequent examination of the specimens themselves.
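The reported proportions follow directly from tallies of the three confidence categories defined in the Methods. As a check on the arithmetic, here is a short sketch; the per-category counts (82 + 9 + 22 = 113 collections) are back-calculated assumptions consistent with the reported percentages, not figures taken from the paper. Note that 22/113 rounds to 19.5%, so the published 19.4% evidently comes from taking the remainder 100 - 72.6 - 8.0.

```python
from collections import Counter

def confidence_summary(categories):
    """Percentage of photographic sets per identification-confidence
    category: 1 = confident to species, 2 = species but low confidence,
    3 = genus only (the scale defined in the Methods)."""
    counts = Counter(categories)
    total = len(categories)
    return {cat: round(100 * n / total, 1) for cat, n in sorted(counts.items())}

# Back-calculated counts consistent with the reported 72.6% / 8% / ~19.4%;
# the exact per-category counts are assumptions.
print(confidence_summary([1] * 82 + [2] * 9 + [3] * 22))
# → {1: 72.6, 2: 8.0, 3: 19.5}
```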
That may surprise some botanists, especially in so far as many of the plants might be considered exceedingly rare or poorly known from a global perspective. However, these plants were locally well known and readily identified by any botanist familiar with the Philippine flora. The genus Saurauia Willd. (Actinidiaceae) is species-rich and is sometimes difficult to identify; however, Chua 002 and Chua 104 were readily identified to the locally abundant S. negrosensis Elmer by comparison with the type specimen Elmer 10139 from southern Negros (Fig. 1). Three specimens (Chua 016, Chua 027, and Chua 041) were identified as Mackinlaya celebica (Harms) Philipson (Apiaceae). This small tree, enigmatic in phylogenetic position and all but unknown in ecology, proved to be one of the most abundant small trees at Mt. Kanlaon (Fig. 2). It was confidently identified

not treated as herbarium specimens; that is, they are not stored and managed by a trusted institution and linked with full documentary information comparable to label data. There is no standard method for citing such digital images and so they are rarely mentioned in taxonomic revisions even though they often document details of morphology and habit not evident in the dried specimens. Although photographs were seen primarily as illustrations, they also served the critical scientific role in diversity research as documentation of type specimens. Even before the promulgation of rules for nomenclatural types at the end of the 19th century (Hitchcock, 1905; Swingle, 1913; Daston, 2004), type specimens had become the essential element to resolve the ever-expanding problems of synonymy. However, the worldwide distribution of types limited access among scientists; transport was dangerous and hand tracing was inefficient. Photographs offered a solution. The early efforts of Swingle and Swingle (1916) and others were soon expanded by botanists such as J.
Francis Macbride, who traveled to Europe to photograph nomenclatural types. Macbride collected more than 40,000 photographic negatives, which are currently maintained at the Field Museum of Natural History in Chicago (Grimé and Plowman, 1986). That same era saw the use of microfiche photography to record many classic herbaria. These worthy efforts all pale in comparison to the events that followed the twin birth of digital photography and the Internet in the 1990s, whereby type specimens were quickly recorded and immediately and globally viewed (Ariño and Galicia, 2005). This worldwide effort is currently led by the JSTOR Global Plants project (https://plants.jstor.org/), which partners more than 200 herbaria and aims to index the location of more than 1.3 million type specimens. Photographic standards for type images have now been published (Häuser et al., 2005), although Vollmar et al. (2010) describe the diverse impediments to further advance. A few herbaria use these digital tools to go beyond the limits of types and have made digital images of their general collections. The New York Botanical Garden currently provides a digital image for more than 1.5 million specimens and scans an additional 100 specimens per hour (New York Botanical Garden, 2013). In addition to images of dried specimens, some herbaria store digital images that were taken of living plants before the photographed plant was pressed and mounted. The images are then coded and stored as a linked component to the specimen. Two examples are the Missouri Botanical Garden collection of photographs of Madagascar plants available through the web portal Tropicos (http://www.tropicos.org/) and Robin Foster's extensive collection of field photographs available online through the Field Museum's Tropical Plant Guides (http://fm2.fieldmuseum.org/plantguides/color_images.asp). These exceptional examples could be more widely imitated.
Baskauf and Kirchoff (2008) recommended that sets of photographs that illustrate a single plant could be treated in a way identical to a conventional herbarium specimen if the images were of high quality and with detail adequate for accurate identification. They pointed out that such digital collections could fulfill many of the roles played by traditional specimens such as to document the distribution and morphological variation of known species. However, to date, few herbaria have aggressively pursued this opportunity. Every experienced botanist knows that digital photographs suffer limitations compared with conventional scientific specimens. The question we pose here with particular relevance to the unexplored tropics is, "Are high-quality digital photographs of living plants of sufficient scientific value that herbaria should

photograph did not show a key part, for example, the floral details of orchids. Species of the genus Ardisia Sw. (Primulaceae) were more easily identified by the dry rather than fresh leaves, while the genus Lasianthus Jack (Rubiaceae), despite its recent revision for the Philippines (Zhu et al., 2012), required details of the flower not seen in some of our photographs. We might also emphasize our finding that single photographs were rarely adequate for a sound determination. A combination of photographs taken of the different plant parts at different scales was required. The current poverty of the relevant taxonomy was important in 11 out of 31 cases (35.5%). The photographs appeared to be of sufficient detail to allow a sound identification if more was known about the genus.
An example was the genus Cyrtandra, which is characterized by a large number of species that are very narrowly distributed; many species remain undescribed. A recent review of Cyrtandra in Palawan Island, Philippines (Atkins and Cronk, 2001), found 12 species present of which 10 were island endemics, five species were already described, a further three species and one variety were described as new, and the remaining four taxa were likely new but required better collections. In Mt. Kanlaon, we found five species of Cyrtandra in flower or fruit, of which only one could be assigned to a species. For another three species, we were reasonably confident that they are new species and have sent the duplicate specimens to Kew for incorporation in the ongoing regional revision.

DISCUSSION

Had the senior author not gone to Mt. Kanlaon and had received nothing but the photographic sets, our herbarium would

according to the study by Philipson (1979). Specimen Chua 024 was readily identified to the variable and widespread montane species Arisaema polyphyllum (Blanco) Merr. (Araceae) by comparison with the well-distributed specimen Merrill: Species Blancoanae No. 460. Specimen Chua 028 was easily identified to Aquilaria cumingiana (Decne.) Hallier f. (Thymelaeaceae), a singular relative of the agarwood or gharu trees that is widespread and abundant in the Philippines. A confident identification did not imply the absence of taxonomic controversy. For example, collection Chua 057 was readily identified within the species-rich genus Medinilla Gaudich. (Melastomataceae) to the species M. monantha Merr. It is characterized by a single flower per inflorescence and matches Merrill's (1908) description and the type collection of Clemens 1136. However, contrary to Merrill's (1908) segregation, Regalado (1995) combined this with the regionally widespread and morphologically variable species Medinilla myrtiformis (Naudin) Triana. The situation in Piper L.
(Piperaceae) offered a contrary example. Quisumbing (1930) recognized 92 species of Piper in the Philippines; almost all were national endemics and many from single locations. Photographs were inadequate to identify species according to his treatment because it required careful microscopic study of the flowers. However, Gardner (2006) reduced these 92 species to 20, of which only one is endemic. Gardner's treatment allowed most of our photographic sets to be identified within his broad regional species. Failure to identify specimens from photographs was due to inadequate photographs in 20 cases, or 64.5% of the 31 collections not confidently identified to species. We should emphasize that "inadequate photographs" did not imply that they were technically poor with regard to focus or magnification, rather the

Fig. 1. Photographic set of Saurauia negrosensis Elmer (Actinidiaceae), specimen Chua 002. (A) Habit. (B) Leaves and flower position, scale. (C) Twig apex and glands. (D) Dissected flower. (E) Flower arrangement at old leaf scars.

overexploited by ornamental plant enthusiasts and for which new collections are unwarranted. Photographs of this completely unmistakable lily would add to the extensive geographic survey of the remaining populations by Balangcod et al. (2011). A related case would be the ecologically critical task of documenting the distribution of noxious weeds. It is financially impractical for a poorly funded tropical herbarium to fill its shelves with weeds such as Eichhornia crassipes (Mart.) Solms (Pontederiaceae). A single photograph with date, location, and observer would be adequate to build a national record of distribution.
In the United States, such a digital program is already underway in the Early Detection & Distribution Mapping System at the Center for Invasive Species and Ecosystem Health at the University of Georgia (Rawlins et al., 2011). A comparison of the merits and deficiencies of specimens vs. photographic sets is most easily compiled as a simple table (Box 1). A few points bear further comment. The first and perhaps most obvious question lies in the relative cost efficiency of photographs vs. collections. A formal comparison is not easily made. Modern cameras make good photography astonishingly easy, and yet the time needed to take an entire set of high-quality photographs will depend on the experience of the photographer, the number of macro photographs required, and also conditions such as rain or darkness. In some circumstances, to simply collect a specimen is faster. On the other hand, collections in duplicates of 10 or more also require drying, printing labels, sorting, and distributing, which can be time consuming. Despite our inability to quantify costs and benefits, one aspect of relative efficiency merits a note. Almost all of the present specimens in the Philippine herbaria were made by people employed for that task, either as professional collectors or as scientists

still have reliable documentation for the presence of more than 70 species at a previously unexplored site. We would also have a wealth of new morphological data on phenology, ecology, and floral and fruit color. Even photographs that cannot be identified to species proved valuable. For example, the three species of Cyrtandra that were not identified were nonetheless of sufficient quality to allow a confident assertion that they are likely new species. This occurrence is not unusual. During a recent national review of the genus Dillenia L.
(Dilleniaceae), we noted on the Internet a set of high-quality photographs of a species with yellow flowers from a poorly collected part of Luzon and readily determined this to be a species that could not be accommodated in the last regional revision of the genus (Hoogland, 1952). The exact location is known and efforts are now underway to make a formal collection and so provide the species with a name. The obvious question many might ask is "Why take a set of photographs and not make a voucher collection?" There are several instances where conventional specimens cannot or should not be made. First of all, under Philippine law, specifically Republic Act 9147, the collection of plant specimens for any reason by any person requires acquisition of a set of permits and letters of prior informed consent before permission is granted. A second set of permits is required to transport the specimens within the country. This process, which applies to Philippine citizens as well as international visitors, is compulsory and is always followed in the case of well-planned expeditions. Such demands, however, preclude collections by individuals who travel and explore on the spur of the moment; in such circumstances, it is easier to simply take photographs. Secondly, we must note the case of rare and endangered species such as Lilium philippinense Baker (Liliaceae), which is

Fig. 2. Photographic set of Mackinlaya celebica (Harms) Philipson (Apiaceae), specimens Chua 016 and Chua 041. (A) Fruit. (B) Dissected fruit. (C) Inflorescence. (D) Flower. (E) Upper side of leaf. (F) Lower surface of leaf.

is distributed and studied by an expert.
Of the four classic 19th-century collections from the Philippines, only those of Hugh Cumings have had a significant impact on our knowledge of local plants. The collections of the Malaspina Expedition include the specimens of Thaddäus Haenke, which are chiefly in Prague and have had a mostly European distribution and only a modest study by Presl (1830), while the estimated 10,000 collections of Luis Née are presumably still in Madrid and have had almost no distribution or study. Sebastian Vidal's 14,000 collections from the 1880s are in Madrid (Calabrese and Velayos, 2009), and many are still being studied with important effect. A recent review of Vidal's collections of Fabaceae found that these century-old specimens included five species not previously documented for the country (de la Estrella et al., 2007). Finally, we should emphasize that the inclusion of a broad base of plant photographers has a social consequence far beyond the scientific value of their documentation. The Philippine herbaria were built by professional botanists, with little or no role played by the general public. If photographs were treated as specimens then a larger sector of the population could contribute to the national program of inventory and enumeration and thereby promote a greater appreciation of plant diversity. A recent example of this social movement is found in Co's Digital Flora of the Philippines (http://www.philippineplants.org/), where more than 5000 members share photographs on a daily basis (Barcelona et al., 2013). The best of these photographs are stored and displayed at the website PhytoImages hosted at Southern Illinois University (http://www.phytoimages.siu.edu/).

who collected as a part of their work. Photographs could be contributed by hundreds of volunteers at their own expense.
A second point of comparison of photographs and specimens is the sometimes-unappreciated fact that much of the taxonomic literature of the past two centuries is largely based on the herbarium study of dried specimens rather than upon the living plants. Even such a renowned field botanist as E. D. Merrill would sometimes compose a monograph, such as his study of Microtropis Wall. ex Meisn. (Celastraceae), and confess that he had never encountered a living plant of that genus (Merrill and Freeman, 1940). Consequently, many of the characters employed in plant recognition and identification are restricted to dried material. Leaf color and texture are especially notable in this regard. In a contrasting way, Basset et al. (2000) and Thomas et al. (2007) found that in working with field informants in ethnobotany, color photographs were much more likely to be identified than were dried specimens. A third point of comparison is the wealth of morphological detail that is evident in photographs and lost in dried specimens. Photographs can record three-dimensional branching patterns of an inflorescence, the shape and color of fragile floral parts, and the presence and color of exudate. This is perhaps generally true for plants but is an especially common problem in monocotyledons of the tropics, most notably the Zingiberales, Araceae, Orchidaceae, and Arecaceae. A fourth important point of contrast between photographs and specimens lies in the rapidity with which new findings can be distributed to the scientific community. It is not uncommon for years or even decades to pass before a herbarium specimen

Box 1. Comparison of the merits and deficiencies of photographs and traditional plant specimens.

Photographs (disadvantages) vs. herbarium specimens (advantages):
• Not subject to novel or more detailed scrutiny. / Subject to reinvestigation with novel microscopic and chemical methods, even molecular-based identification.
• Scale must be included; even with a scale, distortion of size and shape is possible. / Scale is always clear.
• Cannot be a type of a new species. / The required basis for describing new species.
• Living plants are often not amenable to existing keys and descriptions; linking fresh and dried characters requires vouchering specimens. / Most descriptions and keys in the tropics are based on dried specimens; many characters critical for initial identification are evident only on drying (e.g., dry leaf color).

Photographs (advantages) vs. herbarium specimens (disadvantages):
• All parts of the plant can be recorded: habit, bark, wood, twigs, nodes, reproductive parts. / Typical specimens include only a fragment of the living plant.
• Long-lasting, can be duplicated without limit. / Unique, subject to decay or destruction.
• Preserves color and complex shape. / Shape and color are greatly modified or lost upon drying.
• Storage and curation are of modest cost. / Storage and curation are costly in space and time.
• Immediately available. / Months, sometimes years or decades before the international community can evaluate the specimen.
• Available to everyone with Internet access; even without Internet access, CD-ROMs or flash drives make collections accessible. / Restricted to individuals with access to herbaria.
• In general, permits are not required for photographs. / In many countries, permits are required to make specimens.
• A photographer living nearby has repeated opportunities to make a photographic record. / The episodic and infrequent flowering of tropical plants means that conventional expeditions can only gather a small portion of the local flora.
• More than ever before, good photographs can be taken by anyone with a camera and minimal training. / Good-quality specimens are usually prepared only by a professional botanist or plant collector.
This study does not suggest that professional botanists no longer make specimens. To make specimens and to study them in a herbarium is still the only road to a deep appreciation of plant diversity. Furthermore, new species require a specimen as the type, and new observations often require microscopic study of specimens or careful quantitative measurements. We might emphasize again that individual snapshots taken in tropical forests are almost always inadequate for sound documentation. What can be taken away from this study is that sets of good-quality photographs by amateur botanists yield significant social and scientific value and that they merit conservation by tropical herbaria.

LITERATURE CITED

APG III. 2009. An update of the Angiosperm Phylogeny Group classification for the orders and families of flowering plants: APG III. Botanical Journal of the Linnean Society 161: 105–121.
Ariño, A. H., and D. Galicia. 2005. Taxonomic-grade images. In C. L. Häuser, A. Steiner, J. Holstein, and M. J. Scoble [eds.], Digital imaging of biological type specimens: A manual of best practice, 41–55. European Network for Biodiversity Information, Stuttgart, Germany.
Atkins, H., and Q. C. B. Cronk. 2001. The genus Cyrtandra (Gesneriaceae) in Palawan, Philippines. Edinburgh Journal of Botany 58: 443–458.
Balangcod, T. D., V. C. Cuevas, I. E. Buot, and A. K. Balangcod. 2011. Geographic distribution of Lilium philippinense Baker (Liliaceae) in the Cordillera Central Range, Luzon Island, Philippines. Taiwania 56: 186–194.
Barcelona, J. F., D. L. Nickrent, J. V. LaFrankie, J. C. Callado, and P. B. Pelser. 2013. Co's Digital Flora of the Philippines: Plant identification and conservation through cybertaxonomy.
Philippine Journal of Science 142: 57–67.
Baskauf, S. J., and B. K. Kirchoff. 2008. Digital plant images as specimens: Toward standards for photographing living plants. Vulpia 7: 16–30.
Basset, Y., V. Novotny, S. E. Miller, and R. Pyle. 2000. Quantifying biodiversity: Experience with parataxonomists and digital photography in Papua New Guinea and Guyana. BioScience 50: 899–908.
Calabrese, G. M., and M. Velayos. 2009. Type specimens in the Vidal Herbarium at the Real Jardín Botánico, Madrid. Botanical Journal of the Linnean Society 159: 292–299.
Daston, L. 2004. Type specimens and scientific memory. Critical Inquiry 31: 153–182.
de la Estrella, M., G. M. Calabrese, B. Camara, and M. Velayos. 2007. Leguminosae of Philippines in the Vidal herbarium at Real Jardín Botánico, Madrid. Nordic Journal of Botany 25: 41–52.
DeWolf, G. P. 1968. Notes on making a herbarium. Arnoldia 28: 8–9.
Egerton, F. N. 2003. A history of the ecological sciences, part 10: Botany during the Italian Renaissance and beginnings of the scientific revolution. Bulletin of the Ecological Society of America 84: 130–137.
Frank, M. S., and K. D. Perkins. 2004. Herbaria and herbarium specimens. University of Florida Herbarium. Website http://www.flmnh.ufl.edu/herbarium/herbariaandspecimens.htm [accessed 19 February 2013].
Gardner, R. O. 2006. Piper (Piperaceae) in the Philippine Islands: The climbing species. Blumea 51: 569–586.
Gernshiem, H. 1986. A concise history of photography, 3rd ed. Dover Publications, Mineola, New York, USA.
Grimé, W. E., and T. Plowman. 1986. Type photographs at Field Museum of Natural History. Taxon 35: 932–934.
Häuser, C. L., A. Steiner, J. Holstein, and M. J. Scoble. 2005. Digital imaging of biological type specimens: A manual of best practice. European Network for Biodiversity Information, Stuttgart, Germany.
Hitchcock, A. S. 1905.
Nomenclatorial type specimens of plant species. Science 21: 828–832.
Hoogland, R. D. 1952. A revision of the genus Dillenia. Blumea 7: 1–145.
Merrill, E. D. 1908. New Philippine plants from the collections of Mary Strong Clemens, I. Philippine Journal of Science 3: 129–165.
Merrill, E. D., and F. L. Freeman. 1940. The old world species of the Celastraceous genus Microtropis Wallich. Proceedings of the American Academy of Arts and Sciences 73: 271–307, 309–310.
New York Botanical Garden. 2013. The C.V. Starr virtual herbarium picks up momentum: Digitization at The New York Botanical Garden. Website http://www.nybg.org/science/new_20120926.php [accessed 27 February 2015].
Parr, M., and G. Badger. 2004. The photobook: A history, Vol. 1. Phaidon, London, United Kingdom.
Philipson, W. R. 1979. Flora Malesiana: series 1, Spermatophyta. Flowering plants: vol. 9, part 1. Revisions. Araliaceae 1: 1–105. Foundation Flora Malesiana, Leiden, The Netherlands.
Presl, C. B. 1830. Reliquiae Haenkeanae. J. G. Calve, Prague.
Quisumbing, E. 1930. Philippine Piperaceae. Philippine Journal of Science 43: 1–246.
Rawlins, K. A., J. E. Griffin, D. J. Moorhead, C. T. Bargeron, and C. W. Evans. 2011. EDDMapS: Invasive plant mapping handbook. Center for Invasive Species and Ecosystem Health, University of Georgia, Tifton, Georgia, USA.
Regalado, J. C. 1995. Revision of Philippine Medinilla (Melastomataceae). Blumea 40: 113–193.
Simpson, N., and P. G. Barnes. 2008. Photography and contemporary botanical illustration. Curtis's Botanical Magazine 25: 258–280.
Swingle, W. T. 1913. Types of species in botanical taxonomy. Science 37: 864–865.
Swingle, W. T., and M. K. Swingle. 1916. The utilization of photographic methods in library research work, with special reference to the natural sciences. Bulletin of the American Library Association 10: 194–199.
Thiers, B. M. 1998 onward (continuously updated).
Applications in Plant Sciences 2015 3(5): 1400116. LaFrankie and Chua—Digital field photographs for tropical plant inventory. doi:10.3732/apps.1400116

APPENDIX 1. List of collections from Mt. Kanlaon; the identifications were from examination of the photographic sets and were only confirmed and not altered with subsequent study of the dry specimens.

Collection | Identification (a) | Family (b) | ID quality (c) | Reason for failure (d)
Chua 001 | Ophiorrhiza oblongifolia DC. | Rubiaceae | 1
Chua 002 | Saurauia negrosensis Elmer | Actinidiaceae | 1
Chua 003 | Saurauia trichophora Quisumb. | Actinidiaceae | 1
Chua 004 | Curculigo capitulata (Lour.) Kuntze | Hypoxidaceae | 1
Chua 005 | Spathoglottis plicata Blume | Orchidaceae | 1
Chua 007 | Calliandra calothyrsus Meisn. | Fabaceae | 1
Chua 008 | Litsea quercoides Elmer | Lauraceae | 1
Chua 009 | Sambucus javanica Reinw. ex Blume | Adoxaceae | 1
Chua 010 | Medinilla involucrata Merr. | Melastomataceae | 1
Chua 011 | Elatostema whitfordii Merr. | Urticaceae | 1
Chua 012 | Elatostema spinulosum Elmer | Urticaceae | 1
Chua 015 | Syzygium panayense (Merr.) Merr. | Myrtaceae | 1
Chua 016 | Mackinlaya celebica (Harms) Philipson | Apiaceae | 1
Chua 017 | Aglaia elliptica Blume | Meliaceae | 1
Chua 021 | Piper decumanum L. | Piperaceae | 1
Chua 022 | Piper abbreviatum Opiz | Piperaceae | 1
Chua 023 | Glochidion merrillii C. B. Rob. | Phyllanthaceae | 1
Chua 024 | Arisaema polyphyllum (Blanco) Merr. | Araceae | 1
Chua 026 | Mycetia javanica (Blume) Korth. | Rubiaceae | 1
Chua 027 | Mackinlaya celebica (Harms) Philipson | Apiaceae | 1
Chua 028 | Aquilaria cumingiana (Decne.) Ridl. | Thymelaeceae | 1
Chua 029 | Clerodendrum minahassae Teijsm. & Binn. | Lamiaceae | 1
Chua 030 | Alpinia haenkei C. Presl | Zingiberaceae | 1
Chua 034 | Solanum lasiocarpum Dunal | Solanaceae | 1
Chua 037 | Codiaeum luzonicum Merr. | Euphorbiaceae | 1
Chua 038 | Alyxia sibuyanensis Elmer | Apocynaceae | 1
Chua 040 | Goodyera rubicunda (Blume) Lindl. | Orchidaceae | 1
Chua 041 | Mackinlaya celebica (Harms) Philipson | Apiaceae | 1
Chua 043 | Ophiorrhiza oblongifolia DC. | Rubiaceae | 1
Chua 044 | Lycianthes banahaensis (Elmer) Bitter | Solanaceae | 1
Chua 047 | Elatostema whitfordii Merr. | Urticaceae | 1
Chua 050 | Pipturus arborescens (Link) C. B. Rob. | Urticaceae | 1
Chua 051 | Pipturus arborescens (Link) C. B. Rob. | Urticaceae | 1
Chua 052 | Desmodium gangeticum (L.) DC. | Fabaceae | 1
Chua 053 | Dichroa philippinensis Schltr. | Hydrangeaceae | 1
Chua 056 | Villebrunea trinervis Wedd. | Urticaceae | 1
Chua 057 | Medinilla monantha Merr. | Melastomataceae | 1
Chua 058 | Crassocephalum crepidioides (Benth.) S. Moore | Asteraceae | 1
Chua 059 | Ficus cuneiformis C. C. Berg | Moraceae | 1
Chua 060 | Macaranga tanarius (L.) Müll. Arg. | Euphorbiaceae | 1
Chua 062 | Pollia thyrsiflora (Blume) Steud. | Commelinaceae | 1
Chua 063 | Garcinia venulosa (Blanco) Choisy | Clusiaceae | 1
Chua 064 | Garcinia venulosa (Blanco) Choisy | Clusiaceae | 1
Chua 065 | Piper abbreviatum Opiz | Piperaceae | 1
Chua 066 | Piper abbreviatum Opiz | Piperaceae | 1
Chua 067 | Schefflera insularum (Seem.) Harms | Araliaceae | 1
Chua 068 | Piper caninum Blume | Piperaceae | 1
Chua 069 | Clethra canescens Reinw. ex Blume | Clethraceae | 1
Chua 071 | Costus speciosus (J. Koenig) Sm. | Costaceae | 1
Chua 074 | Mycetia javanica (Blume) Korth. | Rubiaceae | 1
Chua 075 | Sarcandra glabra (Thunb.) Nakai | Chloranthaceae | 1
Chua 076 | Magnolia liliifera (L.) Baill. | Magnoliaceae | 1
Chua 077 | Tetracera fagifolia Blume | Dilleniaceae | 1
Chua 078 | Omalanthus populneus (Geisel.) Pax | Euphorbiaceae | 1
Chua 080 | Chloranthus elatior Link | Chloranthaceae | 1
Chua 081 | Tabernaemontana pandacaqui Poir. | Apocynaceae | 1
Chua 082 | Lasianthus attenuatus Jack | Rubiaceae | 1
Chua 083 | Aglaia luzoniensis (S. Vidal) Merr. & Rolfe | Meliaceae | 1
Chua 084 | Alocasia heterophylla (C. Presl) Merr. | Araceae | 1
Chua 085 | Aglaonema densinervium Engl. | Araceae | 1
Chua 086 | Schismatoglottis plurivenia Alderw. | Araceae | 1
Chua 087 | Aglaonema densinervium Engl. | Araceae | 1
Chua 089 | Magnolia liliifera (L.) Baill. | Magnoliaceae | 1
Chua 090 | Dillenia reifferscheidia Fern.-Vill. | Dilleniaceae | 1
Chua 091 | Schefflera insularum (Seem.) Harms | Araliaceae | 1
Chua 092 | Elatostema spinulosum Elmer | Urticaceae | 1
Chua 093 | Gomphostemma javanicum (Blume) Benth. | Lamiaceae | 1
Chua 094 | Cinnamomum mercadoi S. Vidal | Lauraceae | 1
Chua 096 | Coffea arabica L. | Rubiaceae | 1
Chua 097 | Magnolia philippinensis P. Parm. | Magnoliaceae | 1
Chua 098 | Medinilla involucrata Merr. | Melastomataceae | 1
Chua 100 | Leucosyke capitellata (Poir.) Wedd. | Urticaceae | 1
Chua 102 | Calanthe mcgregorii Ames | Orchidaceae | 1
Chua 104 | Saurauia negrosensis Elmer | Actinidiaceae | 1
Chua 105 | Ficus bataanensis Merr. | Moraceae | 1
Chua 106 | Ficus cuneiformis C. C. Berg | Moraceae | 1
Chua 107 | Acer laurinum Hassk. | Sapindaceae | 1
Chua 108 | Acer laurinum Hassk. | Sapindaceae | 1
Chua 110 | Ficus ruficaulis Merr. | Moraceae | 1
Chua 111 | Litsea luzonica (Blanco) Fern.-Vill. | Lauraceae | 1
Chua 112 | Wikstroemia ovata C. A. Mey. | Thymelaeceae | 1
Chua 113 | Costus speciosus (J. Koenig) Sm. | Costaceae | 1
Chua 014 | Ficus scaberrima Blume | Moraceae | 2 | 1
Chua 035 | Euphlebium bicolense (Lubag-Arquiza) M. A. Clem. & Cootes | Orchidaceae | 2 | 1
Chua 036 | Calanthe sp. nov. | Orchidaceae | 2 | 1
Chua 046 | Maesa denticulata Mez | Primulaceae | 2 | 1
Chua 101 | Ficus carpenteriana Elmer | Moraceae | 2 | 1
Chua 032 | Cyrtandra pallida Elmer | Gesneriaceae | 2 | 1
Chua 033 | Medinilla cf. merrittii Merr. | Melastomataceae | 2 | 1
Chua 048 | Rhaphidophora aff. philippinensis Engl. & K. Krause | Araceae | 2 | 1
Chua 073 | Pavetta indica L. | Rubiaceae | 2 | 1
Chua 099 | Medinilla sp. nov. aff. amplifolia Merr. | Melastomataceae | 2 | 2
Chua 020 | Ardisia sp. | Primulaceae | 3 | 1
Chua 049 | Dendrochilum sp. | Orchidaceae | 3 | 1
Chua 055 | Fabaceae | Fabaceae | 3 | 1
Chua 088 | Acanthaceae | Acanthaceae | 3 | 1
Chua 095 | Ficus sp. | Moraceae | 3 | 1
Chua 018 | Lasianthus sp. | Rubiaceae | 3 | 1
Chua 019 | Ixora sp. | Rubiaceae | 3 | 1
Chua 039 | Lasianthus sp. | Rubiaceae | 3 | 1
Chua 042 | Piper sp. | Piperaceae | 3 | 1
Chua 045 | Lasianthes sp. | Rubiaceae | 3 | 1
Chua 070 | Piper sp. | Piperaceae | 3 | 1
Chua 103 | Calanthe sp. | Orchidaceae | 3 | 1
Chua 006 | Cyrtandra sp. 1 | Gesneriaceae | 3 | 2
Chua 013 | Pandanus sp. | Pandanaceae | 3 | 2
Chua 025 | Pandanus sp. | Pandanaceae | 3 | 2
Chua 031 | Zingiber sp. | Zingiberaceae | 3 | 2
Chua 054 | Cyrtandra sp. 2 | Gesneriaceae | 3 | 2
Chua 061 | Alpinia sp. | Zingiberaceae | 3 | 2
Chua 072 | Alpinia sp. | Zingiberaceae | 3 | 2
Chua 079 | Cyrtandra sp. 3 | Gesneriaceae | 3 | 2
Chua 109 | Cyrtandra sp. 4 | Gesneriaceae | 3 | 2

(a) Species name and authorship follows IPNI. (b) Family follows APG III (2009). (c) 1 = identified to species with high confidence; 2 = identified to species with low confidence; 3 = identified only to genus or family. (d) 1 = photographic details inadequate for identification; 2 = taxonomy of the genus too poorly understood.

work_gkvj3rbfsvaj5pse4snmfstaie ----

Children in school cafeterias select foods containing more saturated fat and energy than the Institute of Medicine recommendations. | Semantic Scholar
DOI:10.3945/jn.109.119131 Corpus ID: 8674718

@article{Martin2010ChildrenIS,
  title={Children in school cafeterias select foods containing more saturated fat and energy than the Institute of Medicine recommendations.},
  author={C. Martin and J. Thomson and M. LeBlanc and T. Stewart and R. Newton and H. Han and Alicia D Sample and C. Champagne and D. Williamson},
  journal={The Journal of nutrition},
  year={2010},
  volume={140},
  number={9},
  pages={1653-60}
}

C. Martin, J. Thomson, +6 authors D. Williamson. Published 2010, The Journal of Nutrition.

In this study, we examined if children's food selection met the School Meals Initiative (SMI) standards and the recently released Institute of Medicine (IOM) recommendations. Mean food selection, plate waste, and food intake were also examined. Food intake of 2049 4th-6th grade students was measured objectively at lunch over 3 d with digital photography in 33 schools.
The percent of children whose food selection met the SMI standards and IOM recommendations for energy (kJ), fat and saturated…

Supplemental interventional clinical trial: The Louisiana (LA) Health Project. There is a worldwide pandemic of obesity with far-reaching consequences for the health of our nation. Obesity is the second leading cause of preventable death in the United States… Conditions: Childhood Obesity. Intervention: Behavioral. Pennington Biomedical Research Center, August 2005 - May 2009.
work_glovyl6kp5hjxcirnv54flraji ----

Notulae odonatologicae 9(4) 2019: 125-172

The biting midge Forcipomyia paludis as a parasite of Odonata in North Africa (Diptera: Ceratopogonidae)

Jean-Pierre Boudot 1, Peter Havelka 2 & Andreas Martens 3

1 Immeuble Orphée, Apt 703, Cidex 62, 78 rue de la Justice, Ludres, France; jean.pierre.boudot@numericable.fr
2 Staatliches Museum für Naturkunde Karlsruhe, Erbprinzenstr.
13, 76133 Karlsruhe, Germany; peter.havelka@smnk.de
3 Institut für Biologie und Schulgartenentwicklung, PH Karlsruhe, Bismarckstr. 10, 76133 Karlsruhe, Germany; martens@ph-karlsruhe.de

Abstract. In June and July 2013, at two streams in the Middle Atlas Mountains, Morocco, ceratopogonid midges were photographed on and taken from the wings of six species of odonates. The specimens were identified as Forcipomyia paludis, a widespread European ceratopogonid midge new to Africa. The data increase the range of known hosts with the addition of Cordulegaster princeps, Gomphus simillimus maroccanus and Onychogomphus boudoti.

Further key words. Dragonfly, Anisoptera, Maghreb, Morocco, first record

Introduction

Females of Forcipomyia (Pterobosca) paludis (Macfie, 1936) suck haemolymph from the veins of odonate wings (Wildermuth & Martens 2007). In the western Palaearctic, it is the only ceratopogonid species known to parasitize adult dragonflies. In Europe, 79 species and seven subspecies of odonates have been recorded as hosts (Martens et al. 2008, 2012; Wildermuth 2012; Manger & van der Heijden 2015; Vinko et al. 2017; Cordero-Rivera et al. 2019; Wildermuth et al. 2019).

So far, in the western Mediterranean it is recorded from continental Spain, Mallorca, Sardinia and France (Dell'Anna et al. 1995; Martens et al. 2008; Martens 2012; Nielsen et al. 2014; Cordero-Rivera et al. 2019). Here, we present the first African as well as the southernmost record.

Study sites and methods

On 18/19-vi-2013 and 03/04-vii-2013, JPB photographed and collected odonates with midges on their wings at two streams in the Middle Atlas, Khenifra region, Morocco; namely oued Merera (32°53'50"N, 5°29'51"W, 1 170 m a.s.l.), visited on 18-vi-2013 and 04-vii-2013, and oued Arougou (32°55'33"N, 5°33'34"W, 1 150 m a.s.l.), visited on 19-vi-2013 and 03-vii-2013. Each odonate specimen was preserved in ethanol in individual vials, with as many midges as possible, and sent to PH for identification.
As midges detached easily upon capture, the number collected was often less than the number actually infesting the host dragonfly, particularly in the case of the larger host species. In the lab, PH carefully dissected the midges from the odonate's wings under a dissecting microscope, and transferred them into liquefied phenol. Later the specimens were put on a plate with a drop of Canada balsam (according to Wirth & Marston 1968) and dissected using two watchmakers tweezers.

Notulae odonatologicae 9(4) 2019: 164-168 – DOI:10.5281/zenodo.3539758

Results

Infestations were recorded on six anisopteran species: namely Boyeria irene, Cordulegaster princeps, Gomphus simillimus maroccanus, Onychogomphus boudoti, Onychogomphus forcipatus unguiculatus and Onychogomphus uncatus (Table 1).

Oued Merera is a middle-sized permanent stream of ca 3 m width, continuously bordered by trees in a landscape of forest, open agricultural fields and fallow lands. The water is swift, of neutral pH, and highly turbid due to soil erosion in its Triassic red shale basin. There is almost no human settlement in the upper watershed and no pollution was perceptible at the time of the visit.

Fig. 1. Biting midges Forcipomyia paludis on the wings of a female Cordulegaster princeps. Oued Merera, Khenifra province, Morocco (18-vi-2013). Photo: JPB

Oued Arougou is a smaller stream originating from the limestone and dolomite rock layers of the lower Lias. Its clear calcareous water flows slowly to swiftly in an open landscape of fields, scattered human settlements, isolated marshes and seepage water. The site was described by Ferreira et al. (2014). The specimens from Morocco did not differ in any diagnostic characters from European material in the collection of PH.

Discussion

The present records are the first ones demonstrating the presence of F. paludis in Africa.
Until now, the southernmost records were from Sardinia (Dell'Anna et al. 1995), Mallorca (Nielsen et al. 2014) and continental Spain (Cordero-Rivera et al. 2019). Many records are available from southern France (Martens et al. 2008). We expect that either the Atlas ranges or the Sahara form the southern limit of F. paludis, and that the species is limited to the western Palaearctic. The African records are supported by collected specimens. Additionally, digital photography is a very useful tool for recording the presence of the species in its known range and for collecting data on odonate hosts (Martens et al. 2008). In several cases these photos result in first national records (cf. Claerebout 2013; Manger & Martens 2013; Černý 2014; Leuthold & Wildermuth 2014). Cordulegaster princeps, Gomphus simillimus maroccanus and Onychogomphus boudoti were not previously recorded as hosts for F. paludis.

Table 1. Odonate adults with Forcipomyia paludis on their wings at the two streams, oued Merera (loc. 1) and oued Arougou (loc. 2), in the Middle Atlas, Morocco.

Species | Sex | No. of midges | Type of record | Locality | Date
Boyeria irene | male | 1 | coll. | 1 | 04-vii-2013
Boyeria irene | male | 1 | coll. | 1 | 04-vii-2013
Gomphus simillimus maroccanus | female | 1 | photo | 1 | 18-vi-2013
Gomphus simillimus maroccanus | female | 1 | photo | 1 | 18-vi-2013
Gomphus simillimus maroccanus | female | 8 | coll. | 1 | 04-vii-2013
Onychogomphus f. unguiculatus | male | 1 | coll. | 1 | 04-vii-2013
Onychogomphus f. unguiculatus | male | 6 | photo | 1 | 18-vi-2013
Onychogomphus boudoti | male | 1 | photo | 2 | 03-vii-2013
Onychogomphus boudoti | male | 1 | photo | 2 | 03-vii-2013
Onychogomphus uncatus | male | 2 | coll. | 1 | 04-vii-2013
Onychogomphus uncatus | male | 3 | photo | 2 | 03-vii-2013
Cordulegaster princeps | male | 7 | coll. | 1 | 04-vii-2013
Cordulegaster princeps | female | 44 | photo | 1 | 18-vi-2013
Cordulegaster princeps | female | 1 | coll. | 1 | 18-vi-2013

There are occasions, especially at the limits of its known range, when it is desirable to collect voucher specimens. To obtain these specimens AM developed a simple method: (1) Catch the odonate with an insect net. (2) As quickly as possible transfer the odonate together with its midges from the insect net into a glassine envelope. At this stage the midge generally detaches. (3) Transfer the midge(s) carefully from the envelope into a small tube of ethanol (70–96%).

In sub-Saharan Africa, another ceratopogonid occurs, which is known from odonate adults, Forcipomyia mollipes (Macfie, 1932) (Macfie 1932; Clastrier & Legrand 1991). It is easily separated from F. paludis by having no claws.

References

Černý M. 2014. First records of Forcipomyia paludis (Diptera: Ceratopogonidae) for Czechia, a midge parasitizing dragonfly imagines (Odonata: Coenagrionidae, Aeshnidae). Libellula 33: 157-162
Claerebout S. 2013. Première mention en Belgique de Forcipomyia (Pterobosca) paludis (Macfie, 1936), ectoparasite des odonates adultes (Diptera: Ceratopogonidae). Bulletin de la Société royale belge d'Entomologie 149: 201-204
Clastrier J. & Legrand J. 1991. Nouvelles captures de Forcipomyia (Pterobosca) mollipes, parasite des ailes de Libellules en Côte d'Ivoire (Diptera, Ceratopogonidae; Odonata). Revue francaise d'Entomologie (NS) 13: 48
Cordero-Rivera A., Barreiro A.R. & Otero M.C. 2019. Forcipomyia paludis (Diptera: Ceratopogonidae) in the Iberian Peninsula, with notes on its behaviour parasitizing odonates. Boletín de la Sociedad entomológica aragonesa 64: 243-250
Dell'Anna L., Utzeri C., Sabatini A. & Coluzzi M. 1995. Forcipomyia (Pterobosca) paludis (Macfie, 1936) (Diptera, Ceratopogonidae) on adult dragonflies (Odonata) in Sardinia, Italy. Parassitologia 37: 79-82
Ferreira S., Velo-Antón G., Brochard C., Vieira C., Célio A.P., Thompson D.J., Watts P.C. & Brito J.C. 2014.
A Critically Endangered new dragonfly species from Morocco: Onychogomphus boudoti sp. nov. (Odonata: Gomphidae). Zootaxa 3856: 349-365
Leuthold W. & Wildermuth H. 2014. Erstnachweis der an Libellen parasitierenden Gnitze Forcipomyia paludis in Litauen (Diptera: Ceratopogonidae; Odonata: Coenagrionidae). Libellula 33: 153-155
Macfie J.W.S. 1932. Ceratopogonidae from wings of dragonflies. Tijdschrift voor Entomologie 75: 265-283
Manger R. & Martens A. 2013. First record of Forcipomyia paludis (Diptera: Ceratopogonidae), a parasite of Odonata imagines, in The Netherlands. Entomologische Berichten, Amsterdam 73: 182-184
Manger R. & van der Heijden A. 2016. Forcipomyia paludis (Diptera:
The feeding action of Forcipomyia paludis (Diptera: Ceratopogonidae), a parasite of Odonata imagines. Interna­ tional Journal of Odonatology 10: 249- 255, pl. IV Wildermuth H., Schröter A. & Kohl S. 2019. The West Palearctic bit- ing midge Forcipomyia paludis (Di- ptera: Ceratopogonidae): first evidence as a parasite on Odonata wings from the Caucasus ecoregion. Notulae odo­ natologicae 9: 158-163 Wirth W.W. & Marston N. 1968. A method for mounting small insects on microscope slides in Canada balsam. Annals of the Entomological Society of America 61: 783-784 Received 17th August 2019 work_glsehaqhtbbwznwthwnsx5sffi ---- Graves' orbitopathy: frequency of ocular hypertension and glaucoma Graves’ orbitopathy: frequency of ocular hypertension and glaucoma FLM da Silva, M de Lourdes Veronese Rodrigues, PMS Akaishi and AAV Cruz Abstract Purpose To determine the relationship between ocular hypertension and glaucoma in patients with Graves’ orbitopathy. Methods A total of 107 patients with a diagnosis of Graves’ orbitopathy, followed at the Oculoplasty sector of the University Hospital, Medical School of Ribeirão Preto, were evaluated by applanation tonometry, computed visual campimetry (Humphrey 30-2, Full Threshold) and analysis and photographic documentation of the optic nerve. The patients considered to have the suspicion of glaucoma were re-evaluated 1 year later for diagnostic confirmation or exclusion. Results A 3.74% prevalence of ocular hypertension (four patients) and a 2.8% prevalence of glaucoma (three patients) was observed. When considering only patients older than 40 years, the prevalence of ocular hypertension was 5.4% (four patients) and the prevalence of glaucoma was 4.76% (three patients). Conclusion The present study did not reveal a statistically significant difference in the prevalence of ocular hypertension or glaucoma between patients with Graves’ orbitopathy and the general population. 
Eye (2009) 23, 957–959; doi:10.1038/eye.2008.155; published online 6 June 2008

Keywords: frequency; glaucoma; Graves' orbitopathy; intraocular hypertension

Received: 24 October 2007. Accepted in revised form: 15 April 2008. Published online: 6 June 2008.
Department of Ophthalmology, Otorhinolaryngology and Head and Neck Surgery, Medical School of Ribeirão Preto, University of São Paulo, São Paulo, Brazil
Correspondence: M de Lourdes Veronese Rodrigues, Department of Ophthalmology, Otorhinolaryngology and Head and Neck Surgery, Medical School of Ribeirao Preto, University of Sao Paulo, Hospital das Clínicas, Oftalmologia, Campus USP, Av. Bandeirantes 3900, Ribeirao Preto, Sao Paulo 14048-900, Brazil. Tel: 55 16 3602 2523; Fax: 55 16 3602 2860. E-mail: mdlvrodr@fmrp.usp.br
© 2009 Macmillan Publishers Limited. All rights reserved 0950-222X/09 $32.00 www.nature.com/eye
Clinical Study

Introduction

Graves' orbitopathy (GO) is an autoimmune disease characterized by the presence of eyelid retraction, proptosis and restrictive strabismus.1,2 Most patients with GO have autoimmune thyroid dysfunctions, especially hyperthyroidism or Graves' disease (GD), which occurs in 90% of the cases.3 Although there is a strong temporal relationship between the onset of hyperthyroidism and orbitopathy,4 a large number of patients with GD show only minimal signs of orbitopathy. Depending on the sensitivity of the method of orbitopathy detection, the prevalence of eye disease in unselected patients with GD ranges from 25 to 70%.1,5

Intraocular hypertension is a common finding in some patients with GO. Several mechanisms have been implicated in the rise of intraocular pressure. Hypertrophy of the extraocular muscles may induce marked orbital congestion and increase the episcleral venous pressure.6,7 Restrictive fibrosis of the inferior rectus muscle causes mechanical compression in the primary position of gaze, which leads to abnormal pressure readings in the slit lamp. Another mechanism that may also contribute to the increase in intraocular pressure is the accumulation of mucopolysaccharides in the network of aqueous drainage, which would reduce drainage of the aqueous humor.7

The prevalence of ocular hypertension and glaucoma in patients with GO has been studied, with controversial results. The objective of the present study was to determine the prevalence of glaucoma and ocular hypertension in patients with a diagnosis of GO.

Materials and methods

We evaluated 107 consecutive patients with a diagnosis of Graves' orbitopathy seen at the Oculoplasty sector of the University Hospital, Medical School of Ribeirão Preto. The diagnosis of GO was based on the criteria of Bartley and Gorman,2 which establish the clinical diagnosis of GO in patients who present eyelid retraction associated with at least one of the following signs: thyroid dysfunction, proptosis,
The diagnosis of GO was based on the criteria of Bartley and Gorman,2 which establish the clinical diagnosis of GO in patients who present eyelid retraction associated with at least one of the following signs: thyroid dysfunction, proptosis, Received: 24 October 2007 Accepted in revised form: 15 April 2008 Published online: 6 June 2008 Department of Ophthalmology, Othorhinolaringology and Head and Neck SurgeryFMedical School of Ribeirão Preto, University of São Paulo, São Paulo, Brazil Correspondence: M de Lourdes Veronese Rodrigues, Department of Ophthalmology, Otolaringology and Head and Neck Surgery, Medical School of Ribeirao Preto, University of Sao Paulo, Hospital das Clı́nicasFOftalmologia, Campus USP, Av. Bandeirantes 3900, Ribeirao Preto, Sao Paulo 14048-900, Brazil Tel: 55 16 3602 2523; Fax: 55 16 3602 2860. E-mail: mdlvrodr@ fmrp.usp.br Eye (2009) 23, 957–959 & 2009 Macmillan Publishers Limited All rights reserved 0950-222X/09 $32.00 www.nature.com/eye C L IN IC A L S T U D Y http://dx.doi.org/10.1038/eye.2008.155 mailto:mdlvrodr@fmrp.usp.br mailto:mdlvrodr@fmrp.usp.br http://www.nature.com/eye strabismus or optic neuropathy, after the exclusion of other possible causes When there is no eyelid retraction, the diagnosis of GO can be considered if some of the signs mentioned above are associated with thyroid dysfunction in the absence of other causal factors. A comprehensive ophthalmic evaluation was performed in all patients including aplanation tonometry, biomicroscopy of the optic nerve head with digital photography and visual campimetry (Humphrey campimeter, 30-2 full threshold strategy). All examinations were performed by one examiner and the findings were re-evaluated by three glaucoma specialists who divided the sample in three categories: normal, glaucoma and glaucoma suspect. 
The diagnosis of glaucoma was based on the presence of two of the following parameters: signs indicative of glaucoma in the optic nerve head, ocular hypertension, and/or visual field losses. The criterion for inclusion in the ocular hypertension group was intraocular pressure above 21 mmHg without other parameters indicative of glaucoma. The patients who received a diagnosis of glaucoma suspect (borderline) continued to be followed at the responsible outpatient sectors of the University Hospital and were re-evaluated about 1 year later by evaluation of GD status, refraction examination, determination of risk factors, repeated tonometry and campimetry, and new photographic documentation of the optic nerve. These new tests were again evaluated by the examiners together with the previous tests for the observation of possible progression of the optic nerve alterations and/or visual field losses characteristic of glaucoma. Statistical analysis was performed using the Fisher exact test and, when one of the numbers was less than five, the χ2 test. We certify that all applicable institutional and governmental regulations concerning the ethical use of human volunteers were followed during this research.

Results
Of the 107 patients with GO who underwent examinations for the diagnosis of glaucoma, 20 (18.7%) were males and 87 (81.3%) were females. Mean age was 41.57 years and median age 42 years (range: 10–74 years). Ocular hypertension in at least one eye was detected in four patients (3.74%; 95% confidence interval: 1.03–9.30%), all of them females. The physical comorbidities present were anaemia (2) and hepatitis (1). The tests and retinographies of these patients were evaluated by three examiners (specialists in glaucoma) and, on the basis of the diagnosis of at least two examiners, 84 were considered to be normal, 22 suspected and one received a diagnosis of glaucoma.
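Confidence intervals of the form reported here (e.g. 1.03–9.30% for 4 cases out of 107) are consistent with the exact binomial (Clopper-Pearson) method; this is an assumption, since the paper does not name its interval method. A minimal stdlib-only sketch:

```python
from math import comb

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def clopper_pearson(k, n, alpha=0.05):
    """Exact (Clopper-Pearson) two-sided CI for a binomial proportion k/n."""
    def invert(f):
        # bisection: f(p) is True near p=0 and False near p=1
        lo, hi = 0.0, 1.0
        for _ in range(60):
            mid = (lo + hi) / 2
            if f(mid):
                lo = mid
            else:
                hi = mid
        return (lo + hi) / 2
    # lower limit solves P(X >= k | p) = alpha/2; upper solves P(X <= k | p) = alpha/2
    lower = 0.0 if k == 0 else invert(lambda p: binom_cdf(k - 1, n, p) > 1 - alpha / 2)
    upper = 1.0 if k == n else invert(lambda p: binom_cdf(k, n, p) > alpha / 2)
    return lower, upper

lo, hi = clopper_pearson(4, 107)   # ocular hypertension: 4 of 107
print(f"{lo:.4f} {hi:.4f}")        # compare with the reported 1.03-9.30%
lo, hi = clopper_pearson(3, 107)   # glaucoma: 3 of 107
print(f"{lo:.4f} {hi:.4f}")        # compare with the reported 0.58-7.98%
```

Both intervals come out very close to the values quoted in the Results, which supports the exact-method assumption.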
The 22 patients considered to be suspected were then submitted to a new evaluation 1 year later to verify the evolution of GD and to determine or exclude the diagnosis of glaucoma. Eighteen of these 22 patients did not present progression of the GD; GD became more severe in one patient; and three patients were submitted to orbital decompression surgery. The examiners again evaluated the glaucoma tests together with the previous ones and concluded that two of the 22 patients re-evaluated had a diagnosis of glaucoma. Thus, of the total of 107 patients with GO evaluated, three had a diagnosis of glaucoma, corresponding to 2.80% (95% confidence interval: 0.58–7.98%) of the total population. When only patients older than 40 years were considered, the prevalence of intraocular hypertension was 5.4% and the frequency of glaucoma was 4.76%.

Discussion
Glaucoma is the second most frequent cause of blindness in the world. The estimate is that by 2010 there will be 60.5 million people with glaucoma, with the number increasing to 79.6 million in 2020, and that bilateral blindness due to glaucoma will affect 8.4 million people by 2010 and 11.2 million by 2020.8 Several studies on the prevalence of glaucoma have been conducted in different countries, but the frequency of glaucoma in the general population is estimated at 1.5–2%.9 In the US, the prevalence of open angle glaucoma in the population older than 40 years is estimated at 1.86%.10 In a study conducted in Spain on persons aged 40–79 years, the prevalence of primary open angle glaucoma was found to be 2.1%.11 In Brazil there are no multicenter studies for the determination of the prevalence of glaucoma. Two studies were conducted in the Southeast and South regions of the country.
In the first, a prevalence of 1.48% was observed among 8061 patients starting at 40 years of age, increasing to 2.07% above the age of 50 years.12 In the second, conducted on 1953 patients older than 40 years, 79 cases of glaucoma were detected (4.05%), with 73 being patients with primary open angle glaucoma and six (0.31%) having a diagnosis of secondary glaucoma or narrow angle glaucoma.13 In the present study, there was a 3.74% prevalence of ocular hypertension and a 2.80% prevalence of glaucoma in patients with GO, values similar to those observed in the general population. Considering only the patients older than 40 years, the prevalence was found to be higher than in the other studies, except the one carried out by Sakata et al.13 In a retrospective study of 482 patients with GO, Kalmann and Mourits14 detected a 3.9% prevalence of intraocular hypertension in these patients, compared to a prevalence of 1.6% in the general population. A study15 carried out in Pittsburgh on 500 patients with GO revealed a 24% prevalence of ocular hypertension, compared to 5% in the general population. In that study15 only seven patients were classified as glaucomatous and two patients showed progressive abnormalities of the visual field and optic disc. Ohtsuka and Nakamura observed a 22% prevalence of ocular hypertension in patients with GO16 and a 13% prevalence of glaucoma in patients with GD (Table 1).17 Despite the controversial results obtained thus far, intraocular pressure should be carefully monitored in patients with GO. It is necessary to point out that the number of participants in the present study is small, which is a limitation. However, the results indicate that the progressive glaucomatous changes of the optic nerve and visual fields are probably not more common among these patients than in the general population.
Treatment of GO, particularly orbital decompression, may have a beneficial effect on intraocular pressure. Patients with ocular hypertension should be monitored for changes in the optic disc and visual field. Antiglaucoma medical therapy may be necessary, but surgical treatment of glaucoma is not necessary for most patients with GO and ocular hypertension.18–20

Acknowledgements
This study was financially supported by CAPES, Brazil. We acknowledge our collaboration with Professor Dr Jayter Silva de Paula, Professor Dr Eduardo Melani Rocha and Dr Daniela Felipe Crosta.

References
1 Burch HB, Wartofsky L. Graves' ophthalmopathy: current concepts regarding pathogenesis and management. Endocr Rev 1993; 14: 747–793.
2 Bartley GB, Gorman CA. Diagnostic criteria for Graves' ophthalmopathy. Am J Ophthalmol 1995; 119: 792–795.
3 Bartley GB, Fatourechi V, Kadrmas EF, Jacobsen SJ, Ilstrup DM, Garrity JA et al. Chronology of Graves' ophthalmopathy in an incidence cohort. Am J Ophthalmol 1996; 121: 426–434.
4 Gorman CA. Temporal relationship between onset of Graves' ophthalmopathy and diagnosis of thyrotoxicosis. Mayo Clin Proc 1983; 58: 515–519.
5 Feldon SE. Graves' ophthalmopathy. Is it really thyroid disease? Arch Intern Med 1990; 150: 948–950.
6 Dallow RL, Netland PA. Management of thyroid ophthalmopathy (Graves' disease). In: Albert DM, Jakobiec FA (eds). Principles and Practice of Ophthalmology. WB Saunders: Philadelphia, 1994, pp 1905–1922.
7 Higginbotham EJ. Glaucoma associated with increased episcleral venous pressure. In: Albert DM, Jakobiec FA, Azar DT, Gragoudas E, Power SM, Robinson NL (eds). Principles and Practice of Ophthalmology. WB Saunders: Philadelphia, 2000, pp 2781–2792.
8 Quigley HA, Broman AT. The number of people with glaucoma worldwide in 2010 and 2020. Br J Ophthalmol 2006; 90: 262–267.
9 Giampani J. Hereditariedade e Genética. In: Susanna Junior R (ed). Glaucoma (Manual CBO). Cultura Médica: Rio de Janeiro, 1999, pp 9–12.
10 Friedman DS, Wolfs RC, O'Colmain BJ, Klein BE, Taylor HR, West S et al. Prevalence of open-angle glaucoma among adults in the United States. Arch Ophthalmol 2004; 122: 532–538.
11 Anton A, Andrada MT, Mujica V, Calle MA, Portela J, Mayo A. Prevalence of primary open-angle glaucoma in a Spanish population: the Segovia study. J Glaucoma 2004; 13: 371–376.
12 Ghanem CC. Levantamento de casos de Glaucoma em Joinville – Santa Catarina. Arq Bras Oftalmol 1989; 52: 40–43.
13 Sakata K, Scapucin L, Sakata LM, Carvalho ACA, Selonke I, Sakata VM et al. Projeto glaucoma – resultados parciais 2000 na região de Piraquara – PR. Arq Bras Oftalmol 2002; 65: 333–337.
14 Kalmann R, Mourits MP. Prevalence and management of elevated intraocular pressure in patients with Graves' orbitopathy. Br J Ophthalmol 1998; 82: 754–757.
15 Cockerham KP, Pal C, Jani B, Wolter A, Kennerdell JS. The prevalence and implications of ocular hypertension and glaucoma in thyroid-associated orbitopathy. Ophthalmology 1997; 104: 914–917.
16 Ohtsuka K. Intraocular pressure and proptosis in 95 patients with Graves ophthalmopathy. Am J Ophthalmol 1997; 124: 570–572.
17 Ohtsuka K, Nakamura Y. Open-angle glaucoma associated with Graves disease. Am J Ophthalmol 2000; 129: 613–617.
18 King JS, Netland PA. Glaucoma in thyroid eye disease. In: Dutton JJ, Haik BG (eds). Thyroid Eye Disease, Diagnosis and Treatment. Marcel Dekker Inc.: New York, 2002, pp 319–324.
19 Danesh-Meyer HV, Savino PJ, Deramo V, Sergott RC, Smith AF. Intraocular pressure changes after treatment for Graves' orbitopathy. Ophthalmology 2001; 108: 145–150.
20 Crespi J, Rodriguez F, Bull JA. Presión intraocular post-tratamiento de la oftalmopatía tiroidea. Arch Soc Esp Oftalmol 2007; 82: 691–696.

Table 1 Prevalence (%) of intraocular hypertension and glaucoma in patients with Graves' orbitopathy
(columns: study; intraocular hypertension, %; glaucoma, %)
Silva et al. (present study)  3.7  2.8
Kalmann & Mourits (1998)14 (n = 482)  3.9  0.8
Cockerham et al.
(1997)15 (n = 500)  24  1.4
Ohtsuka & Nakamura (1997)16 and (2000)17 a  22  13
a Patients with Graves' disease (irrespective of orbitopathy).

Graves' orbitopathy: frequency of ocular hypertension and glaucoma

----

Perceptions and Attitudes of Canadian Dentists toward Digital and Electronic Technologies

JCDA • www.cda-adc.ca/jcda • April 2006, Vol. 72, No. 3 • 243. Professional Issues.

Carlos Flores-Mir, DDS, Cert Ortho, MSc, DSc; Neal G. Palmer, DDS, MSc, FRCD(C); Herbert C. Northcott, PhD; Fareeza Khurshed, BSc; Paul W. Major, DDS, MSc, FRCD(C)
Contact author: Dr. Flores-Mir. Email: carlosflores@ualberta.ca

Dentists have been using computers for various applications in the dental office for many years. Computerized appointment systems,1 dental practice management software2 and programs for recording patient data and for financial management3 have been presented as tools to increase productivity in dental practice. There is also increasing interest in teledentistry4,5 and videoconferencing.6,7 The availability of digital and electronic technologies is a key factor in these changes.

ABSTRACT
Objectives: To determine dentists' perceptions of the usefulness of digital technologies in improving dental practice and resolving practice issues; to determine dentists' willingness to use digital and electronic technologies; to determine perceived obstacles to the use of digital and electronic technologies in dental offices; and to determine dentists' attitudes toward Internet privacy issues.
Methods: An anonymous, self-administered survey of Canadian dentists was conducted by mail. A potential mailing list of 14,052 active Canadian dentists was compiled from the 2003 records of provincial regulatory bodies.
For each province, 7.8% of the dentists were randomly selected with the help of computer software. The surveys were mailed to this stratified random sample of 1,096 dentists. Results: The response rate was 28% (312/1,096). Of the 312 respondents, 4 (1%) were in full-time academic positions, 16 (5%) were not practising, and 9 (3%) provided incomplete data. Therefore, 283 survey responses were available for analysis. More than 60% of the dentists indicated that computer technology was quite capable or very capable of improving their current practice by increasing patient satisfaction, decreasing office expenses, increasing practice efficiency, increasing practice produc- tion, improving record quality and improving case diagnosis and treatment planning. More than 50% of respondents reported that digital photography and digital radiography were quite useful or very useful. About 70% of the dentists agreed or strongly agreed with using digital and electronic technologies to consult with dental specialists. Cost of equipment and lack of comfort with technology were regarded as significant or insurmountable obstacles by substantial proportions of respondents. Conclusions: Respondents generally viewed digital and electronic technologies as useful to the profession. Increased office efficiency and production were perceived as positive effects of digital and electronic technologies. These technologies are more often used for consulting with colleagues rather than for consulting with patients. The major obstacles to the general use of these technologies were related to cost, lack of comfort with technology and differences in legislation between provinces and countries. Privacy issues were not perceived as a significant barrier. MeSH Key Words: attitude of health personnel; computer systems/utilization; dentists; practice management, dental/organization & administration © J Can Dent Assoc 2006; 72(3):243 This article has been peer reviewed. 
Many obstacles need to be overcome if digital and electronic technologies are to be fully integrated in the operation of dental offices. These obstacles may be physical, technical or psychosocial barriers in the form of perceptions and attitudes related to software incompatibilities, patient privacy, unclear association and government regulations, interference with the patient–practitioner relationship, unclear fee-for-service or remuneration guidelines, and rejection of claim reimbursements by insurance providers.8,9 If dentists perceive that digital and electronic technologies are valuable for practice management or practice efficiency, there will be a greater chance of their more general acceptance. However, if the new technologies are perceived as cumbersome, if learning the technologies is perceived to take time away from the practice and if practice systems are already running adequately, dentists may be unconvinced of the need for any change. In a recent survey, Canadian dentists did not perceive digital and electronic technologies as a high research priority relative to treatment techniques and dental materials.10 Nevertheless, recent research among Canadian orthodontists has shown that the areas of practice causing the most stress for orthodontists are related to time management and patient cooperation.11 Similar research has not been conducted for Canadian dentists. By establishing the current perceptions and attitudes of dentists toward electronic and digital technologies, the dental profession and the industry more generally will be able to plan for future acceptance and implementation of the technologies. No previous reports have been found regarding the perceptions and attitudes of Canadian dentists toward digital and electronic technologies.
The objectives of this study were to determine dentists' perceptions of the usefulness of digital technologies in improving dental practice and resolving practice issues; to determine dentists' willingness to use digital and electronic technologies; to determine the perceived obstacles to the use of digital and electronic technologies in dental offices; and to determine dentists' attitudes toward Internet privacy issues.

Methods
The study was approved by the Health Research Ethics Board at the University of Alberta.

Survey Instrument
A mail survey was developed to obtain information about use of computers and the Internet by Canadian dentists. The survey was adapted from a questionnaire originally developed to evaluate computer and Internet use among Canadian orthodontists.12 The survey used in the present study collected demographic data and information about actual computer and Internet usage, which were reported previously,13 and information about perceptions and attitudes toward computer and Internet use in dental practices, which are reported here. The survey also included questions about the ability of technology to resolve practice-related issues, dentists' motivation for using the technologies, their willingness to use the technologies and potential obstacles to doing so.

Survey Distribution
A mailing list of Canadian dentists was compiled from the 2003 records of provincial regulatory bodies. A total of 14,052 dentists were registered as active in that year. For each province, 7.8% of the dentists were randomly selected with the help of computer software. If the selected dentist was also registered as a specialist, he or she was eliminated, and the next dentist was chosen, until the total number of dentists required had been attained. The surveys were mailed to this stratified random sample of 1,096 dentists. A response rate between 20% and 30% was expected to produce a final sample size of 250 to 300.
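The per-province draw described above can be sketched as follows. The registry contents and helper names are hypothetical; only the 7.8% fraction and the specialist-replacement rule come from the text (note that 7.8% of 14,052 is approximately 1,096, matching the reported sample size):

```python
import random

def stratified_sample(registries, fraction=0.078, seed=42):
    """Draw a fixed fraction from each provincial register.

    registries: {province: [{"name": ..., "specialist": bool}, ...]}
    A drawn specialist is skipped and replaced by the next draw,
    mirroring the replacement rule described in the Methods.
    """
    rng = random.Random(seed)
    sample = []
    for province, dentists in registries.items():
        quota = round(len(dentists) * fraction)  # 7.8% per stratum
        pool = dentists[:]
        rng.shuffle(pool)  # random order within the province
        chosen = []
        for d in pool:
            if len(chosen) == quota:
                break
            if not d["specialist"]:  # specialists eliminated, next one chosen
                chosen.append(d)
        sample.extend(chosen)
    return sample

# toy registries (hypothetical sizes, not the real 2003 figures)
registries = {
    "ON": [{"name": f"ON{i}", "specialist": i % 10 == 0} for i in range(1000)],
    "QC": [{"name": f"QC{i}", "specialist": i % 10 == 0} for i in range(500)],
}
s = stratified_sample(registries)
print(len(s))  # 78 + 39 = 117
```

Because each stratum contributes the same fraction, the sample is proportionally representative by province, which is why comparisons across provinces remain meaningful.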
This sample would allow for comparisons of attitudes toward computer and Internet usage between Canadian dentists and a similar sample of Canadian orthodontists.12 The survey questionnaire was distributed in a packet that included a self-addressed, stamped return envelope and an introduction letter explaining the research and seeking informed consent from participants. At 1 and 2 weeks after the initial mailing, reminder cards were mailed thanking respondents who had returned their surveys or reminding those who had not responded to complete and return the questionnaire.

Data Analysis
Completed surveys were coded, and spreadsheets were created for data entry. The survey results were manually entered into a personal computer by a research assistant who was not aware of the study objectives. The data were "cleaned" by checking for entries outside of legitimate ranges and for inconsistent codes; the necessary corrections were made by manually rechecking the surveys. A random number generator was used to select 20% of the surveys for hand-checking by a third party to determine the rate of data entry errors. The error rate was 0.2% (19 of 9,918 points), which was deemed low enough to consider the remaining data accurate and to forgo further manual confirmation. The data were analyzed by descriptive statistical methods using Excel (Microsoft Corporation, Seattle, Wash.) and SPSS for Windows (SPSS Inc., Chicago, Ill.). Descriptive statistics such as the frequency and range were reported.

Results
Of the 1,096 surveys mailed, 312 were returned (response rate of 28%). Another 31 packets (3%) were returned because of incorrect addresses. Of the 312 respondents, 4 (1%) were in full-time academic positions, 16 (5%)
As expected, a large percentage of respondents (42%) were from Ontario. Quebec, British Columbia and Alberta accounted for 44% of the replies, and 10% of the responses came from the remaining provinces, which had the smallest number of practising dentists. The remaining 4% of respondents did not identify their province of practice. Capabilities of Technology More than 60% of the dentists indi- cated that computer technology was quite capable or very capable of improving their current practice by increasing patient satisfaction, decreasing office expenses, increasing practice efficiency, increasing practice production, improving the quality of office records, or improving case diag- nosis and treatment planning (Fig. 1). Between 40% and 60% felt that computer technology was quite capable or very capable of increasing the number of case starts, improving doctor-to-patient communication, reducing the requirements for record storage, reducing radiation exposure or improving doctor-to-doctor com- munication. Less than 40% thought that computer technology was quite capable or very capable of increasing access to shared patient information or decreasing appointment times. Usefulness of Technology More than 50% of respondents reported that digital photography and digital radiography were quite useful or very useful (Fig. 2). More than 30% thought that electronic or virtual models were quite useful or very useful. About one-quarter of the respondents suggested that electronic referral forms and paperless charting were quite useful or very useful. Willingness to Use Technology Respondents were asked to report their willingness to use digital and electronic technologies in various commu- nication settings. About 70% of the dentists agreed or strongly agreed with using digital and electronic tech- nologies to consult with dental specialists (Fig. 3). 
Just over 50% agreed or strongly agreed with using these 0 20 40 60 80 100 In cr ea se p at ie nt sa tis fa ct io n De cr ea se o ffi ce e xp en se s In cr ea se p ra ct ice e ffi cie nc y In cr ea se p ra ct ice p ro du ct io n Im pr ov e re co rd q ua lit y Im pr ov e ca se d ia gn os is an d tre at m en t p la nn in g In cr ea se n um be r o f c as e st ar ts Im pr ov e do ct or to p at ie nt co m m un ica tio n Re du ce re co rd st or ag e re qu ire m en ts Re du ce ra di at io n ex po su re Im pr ov e do ct or to d oc to r c om m un ica tio n In cr ea se a cc es s t o sh ar ed p at ie nt in fo rm at io n De cr ea se a pp oi nt m en t t im es % o f d e n ti st s Not capable at all Somewhat capable or capable Quite or very capable 0 10 20 30 40 50 60 70 80 90 100 Digital photography Digital radiography Electronic or virtual models Electronic referral forms Paperless charting 0 10 20 30 40 50 60 70 80 90 100 Digital photography Digital radiography Electronic or virtual models Electronic referral forms Paperless charting % o f d e n ti st s % o f d e n ti st s Somewhat useful Quite or very useful Not at all useful or useful Figure 2: Perception of usefulness of various information technologies. Figure 1: Perception of the capability of various technologies to improve aspects of current practice. reported not currently practising, and 9 (3%) provided incomplete data, which left 283 surveys (91%) for analysis. Of these, 21 (7%) had incomplete responses in the section on perceptions and attitudes. Therefore, for the data analysis reported here, only 262 surveys were fully available. Demographic Characteristics of the Responders As reported previously,13 more than half of the 283 respondents had an individual practice. Their mean age technologies to consult with other general practitioners. Less than 40% agreed or strongly agreed with using digital and electronic technologies for consultations with patients. 
Obstacles Figure 4 shows how dentists rated various obstacles to the general use of electronic and digital technologies in their own practices. Cost of equipment (63%) and lack of comfort with technology (47%) were the most important obstacles regarded as significant or insurmountable. About 40% of the respondents indi- cated that differences in legislation between provinces and countries, lack of cooperation among dentists, need for technical training and unclear remuneration guidelines for consulta- tions were significant or insurmount- able obstacles. Finally, lack of face- to-face communication, incompatible software or hardware, problems with scheduling for videoconferencing, and security or privacy issues were signifi- cant or insurmountable obstacles for less than 40% of the dentists. Discussion It seems that if new technologies are to gain general acceptance into a professional community, there must be a strong perception that they will offer improvements over current prac- tices. The Internet, the World Wide Web and other technological develop- ments have and will continue to redefine patient care, referral relation- ships, practice management, service quality, professional organizations and competition.14–16 In the survey reported here, the respondents felt that digital and elec- tronic technologies were useful for most aspects of dental practice. For cer- tain aspects (increased sharing of patient information and reduction in duration of appointments) the percep- tion of usefulness was lower, but about 40% of respondents still perceived the technologies as quite capable or very capable of improving practice. 
These results are similar to perceptions of the capabilities of technology among Canadian orthodontists.12 Most of the aspects evaluated in the survey of orthodontists pertained to the greatest sources of stress and concern among practitioners in that field,11 including office expenses, appointment times, case starts, diagnosis and treatment planning, and patient satisfaction. These are related to broader issues of time management (and lack of personal time), patient cooper- ation and practice management, all of which relate to orthodontists’ stress and personal satisfaction. It has been assumed that determinants of stress and personal 243c JCDA • www.cda-adc.ca/jcda • April 2006, Vol. 72, No. 3 • ––– Flores-Mir ––– Co st o f e qu ip m en t La ck o f c om fo rt w ith te ch no lo gy In te r-p ro vi nc ia l/i nt er na tio na l le gi sla tio n La ck o f i nt er -d oc to r c oo pe ra tio n Ne ed fo r t ec hn ica l t ra in in g Un cle ar re m un er at io n gu id el in es La ck o f f ac e- to -fa ce co m m un ica tio n In co m pa tib le so ftw ar e or h ar dw ar e Sc he du lin g fo r v id eo co nf er en cin g Se cu rit y o r p riv ac y i ss ue s % o f d e n ti st s 0% 20% 40% 60% 80% 100% Not an obstacle at all Small or moderate obstacle Significant or insurmountable obstacle Figure 3: Willingness to use information technologies for consultations with patients and colleagues. Figure 4: Obstacles to the use of information technologies. % o f d e n ti st s % o f d e n ti st s 0 10 20 30 40 50 60 70 80 90 100 Patients GP dentists Other dental specialists Disagree or strongly disagree Neither agree nor disagree Agree or strongly agree JCDA • www.cda-adc.ca/jcda • April 2006, Vol. 72, No. 3 • 243d ––– Digital and Electronic Technologies ––– satisfaction are similar among Canadian orthodontists and dentists, but no previous evaluation for dentists could be found. 
In the current study of Canadian dentists, digital photography and digital radiology were considered more useful than electronic models, electronic referral forms and paperless charting. Greater proportions of Canadian dentists than Canadian orthodontists12 perceived various technologies as quite or very useful: 52% and 42%, respectively, for digital radiology; 25% and 18% for electronic referral forms; and 32% and 15% for electronic models. In contrast, smaller proportions of Canadian dentists perceived digital photography (56% and 77%) and paperless charting (25% and 34%) as quite or very useful. These differences could be explained by the more diverse uses that dentists have for each of these information technologies. Dentists appear more open to communicating elec- tronically with specialists than with patients; results were similar for Canadian orthodontists.12 These findings are consistent with other research showing that practitioners are leery of using digital and electronic technologies to communicate with the public, but are willing to use such technologies to communicate with colleagues.17,18 Obstacles that impede the acceptance of digital and electronic technologies in the international dental com- munity include cost, lack of comfort with technology, privacy, time, software incompatibility, unclear guidelines, interference with patient–practitioner relationships and lack of remuneration guidelines.8,9,19 Similarly, cost of equipment, lack of comfort with technology, differences in legislation between provinces and countries and lack of communication among practitioners were the major obstacles to the general acceptance of digital and elec- tronic technologies among Canadian dentists. Finally, Canadian dentists did not consider security or privacy issues as a significant obstacle. This finding contrasts with those of previous studies,8,9,12 which have considered security and privacy issues as important obstacles to the use of information technologies. 
Apparently Canadian dentists feel that information transmitted over the Internet is secure. Electronic transmission of claims is one compelling reason for dental practices to have at least one computer with Internet access.20 The current drive among health care professionals to practise evidence-based health care, which is especially strong in the United Kingdom, is an extra incentive to computerize dental practices, since this allows access to electronic databases and accredited online continuing education (CE).19 In Australia,21 about 65% of surveyed dentists considered using computers to obtain CE credits. Although evidence-based practice concepts are not as strongly established in Canadian dental practices as in those in the United Kingdom, it is likely that this situation will soon change. Accordingly, this could be an additional driving force to computerize dental practices. Recent surveys revealed that more than 64% of the households in the United Kingdom22 and about 40% of those in the United States23 and Canada24 had Internet access. This means that there is tremendous potential for promoting dental practices to patients. Many adult patients find information about goods and services through the Internet. No other medium offers the same degree of flexibility and the potential to reach as many people. Once dentists in general practice realize this potential, they will probably increase their marketing efforts through the Internet, which will increase the use of digital and electronic technologies among Canadian practices. Dentists are sensitive to some of the potential obstacles associated with using electronic technologies, but they are willing to meet these challenges and incorporate these new approaches into their dental practices.
Specific research regarding how information technology might benefit specific aspects of practice such as increasing case starts, reducing appointment times and reducing office expenses would likely motivate dentists to increase their adoption of new technologies.

Conclusions
The dentists who responded to this survey generally viewed digital and electronic technologies as useful to the profession. Increased office efficiency and production were perceived as positive effects of digital and electronic technologies. There seemed to be a greater trend toward consulting electronically with colleagues than with patients. The major obstacles to the general use of these technologies were related to cost, lack of comfort with technology and differences in legislation between provinces and countries. Privacy issues were not perceived to represent a significant barrier.

Acknowledgement: Financial support for this research was provided by the McIntyre Memorial Research Fund.

THE AUTHORS
Dr. Flores-Mir is a clinical associate professor in the orthodontic graduate program at the University of Alberta, Edmonton, Alberta.
Dr. Palmer is a clinical assistant professor and director of undergraduate orthodontics in the department of dentistry, University of Alberta, Edmonton, Alberta.
Dr. Northcott is a professor in the department of sociology, University of Alberta, Edmonton, Alberta.

References
1. Goldstein B. Appointment scheduling system: a vehicle for increased productivity. J Calif Dent Assoc 2001; 29(3):231–3.
2. Gilboe DB, Scott DA. Computer systems for dental practice management: a new generation of independent dental software. Tex Dent J 1994; 111(4):9–14.
3. Snyder TL. Integrating technology into dental practices. J Am Dent Assoc 1995; 126(2):171–8.
4. Folke LE. Teledentistry. An overview. Tex Dent J 2001; 118(1):10–8.
5. Chang SW, Plotkin DR, Mulligan R, Polido JC, Mah JK, Meara JG.
Teledentistry in rural California: a USC initiative. J Calif Dent Assoc 2003; 31(8):601–8. 6. Cook J, Austen G, Stephens C. Videoconferencing: what are the benefits for dental practice? Br Dent J 2000; 188(2):67–70. 7. Pick RM. Videoconferencing—the ultimate consult: the future is now. Dent Today 2000; 19(11):88–93. 8. Mitchell E, Sullivan F. A descriptive feast but an evaluative famine: systematic review of published articles on primary care computing during 1980-97. BMJ 2001; 322(7281):279–82. 9. Bodenheimer T, Grumbach K. Electronic technology: a spark to revitalize primary care? JAMA 2003; 290(2):259–64. 10. Allison P, Bedos C. What are the research priorities of Canadian dentists? J Can Dent Assoc 2002; 68(11):662. 11. Roth SF, Heo G, Varnhagen C, Glover KE, Major PW. Occupational stress among Canadian orthodontists. Angle Orthod 2003; 73(11):43–50. 12. Palmer N. Orthodontists’ computer and Internet use [dissertation]. Edmonton: University of Alberta; 2004: p. 121. 13. Flores-Mir C, Palmer NG, Northcott HC, Huston C, Major PW. Computer and Internet usage among Canadian dentists. J Can Dent Assoc 2006; 72(2):145. Available from: URL: http://www.cda-adc.ca/jcda/vol-72/issue-2/145.html. 14. Bauer JC, Brown WT. The digital transformation of oral health care. Teledentistry and electronic commerce. J Am Dent Assoc 2001; 132(2):204–9. 15. Schleyer TK. Digital dentistry in the computer age. J Am Dent Assoc 1999; 130(12):1713–20. 16. Hirschinger R. Digital dentistry: information technology for today’s (and tomorrow’s) dental practice. J Calif Dent Assoc 2001; 29(3):215–21, 223–5. 17. Stephens CD, Cook J. Attitudes of UK consultants to teledentistry as a means of providing orthodontic advice to dental practitioners and their patients. J Orthod 2002; 29(2):137–42. 18. Muhumuza R, Moles DR, Bedi R. A survey of dental practitioners on their use of electronic mail. Br Dent J 1999; 186(3):131–4. 19. John JH, Thomas D, Richards D. 
Questionnaire survey on the use of computerisation in dental practices across the Thames Valley Region. Br Dent J 2003; 195(10):585–90. 20. Wallin LA. Electronic claims transmission: the “fourth party” factor in dentistry. N Y State Dent J 1992; 58(8):14–5. 21. Hou AY, Barnard PD. Computer use in private dental practice in Australia, 1991. Aust Dent J 1993; 38(3):191–5. 22. Office for National Statistics. Individuals accessing the Internet — National Statistics Omnibus Survey. United Kingdom; 2002. Available from: URL: http://www.statistics.gov.uk/CCI/nugget.asp?ID=8&Pos=&ColRank=1&Rank=374 (accessed February 2006). 23. Newburger EC. U.S. Census Bureau. Home computers and Internet use in the United States: September 2001. Available from: URL: http://www.census.gov/prod/2001pubs/p23-207.pdf (accessed February 2006). 24. Sciadas G. The digital divide in Canada. Statistics Canada. Science, Innovation and Electronic Information Division. October 2002. Available from: URL: http://www.statcan.ca/english/research/56F0009XIE/56F0009XIE.pdf (accessed February 2006). Ms. Khurshed is an MSc student in the department of statistics, University of Alberta, Edmonton, Alberta. Dr. Major is a professor and director of the orthodontic graduate program, department of dentistry, University of Alberta, Edmonton, Alberta. Correspondence to: Dr. Carlos Flores-Mir, Faculty of Medicine and Dentistry, Room 4051A Dentistry/Pharmacy Centre, University of Alberta, Edmonton, AB T6G 2N8. The authors have no declared financial interests.

Study of Ceramic Tube Fragmentation under Shock Wave Loading. Procedia Materials Science 3 (2014) 592–597. Available online at www.sciencedirect.com. © 2014 Published by Elsevier Ltd. Open access under CC BY-NC-ND license.
Selection and peer-review under responsibility of the Norwegian University of Science and Technology (NTNU), Department of Structural Engineering. doi: 10.1016/j.mspro.2014.06.098. 20th European Conference on Fracture (ECF20). Study of ceramic tube fragmentation under shock wave loading. Irina Bannikova, Sergey Uvarov, Marina Davydova, Oleg Naimark*. Institute of Continuous Media Mechanics UB RAS, 1, Ac. Koroleva st., Perm, Russia, 614013. Abstract: In this paper we investigate the fragmentation of tubular alumina samples under shock-wave loading. The tubular samples were tested on a universal electric-explosion-of-conductor experimental setup (Bannikova et al., 2013a). Under the blast, the tubes broke into fragments of rectangular or irregular geometric shape. The mass distributions obtained by the “method of photography” and the “method of weighing” are in good agreement (Bannikova et al., 2013b). For tubes with wall thickness d = 2.05 mm, the resulting fragments follow a double distribution: small fragments, whose size is much smaller than the tube wall thickness, follow a power law, while large fragments are described equally well by exponential and logarithmic distributions. The inflection point of the distribution curves moves toward smaller scales with increasing energy density. In addition, a Weibull function gives a good description of the mass distribution of all the fragments. © 2014 The Authors. Published by Elsevier Ltd. Keywords: Fragmentation distribution; electroexplosion; shock wave; alumina ceramic; form factor. 1. Introduction. The first attempts to describe the statistics of fragmentation of objects (cylindrical projectiles) subjected to intensive pulsed loads were undertaken during the Second World War by the physicist N. F. Mott (Grady, 2006).
Since then, the study of fragmentation problems has continued to the present day, and significant experimental and theoretical data have accumulated. The objects investigated worldwide range in size from laboratory specimens crumbling into fragments weighing less than a gram, to arctic ice fields photographed by satellite, and to fragmenting meteorites, asteroids and galaxies. The studied objects have different shapes (spheres, cubes, plates, rings, rods) and are made of the most diverse materials: copper, lead, glass, concrete, ceramics, rocks, ice, clay, gypsum, soap, wax, potatoes. The objects are investigated under different loading conditions (impact, creep, bending and compression). The main tool for studying the statistical regularities of fragmentation is the determination of the fragment distribution by size or mass; in other words, the determination of the number of fragments N(r, m) with size r or mass m greater than some predetermined value. The form of the distribution depends on many factors: the magnitude of the energy expended on the destruction of the sample; the material properties (brittle or ductile, elastic or plastic); and the dimensionality: 2D (for example, a plate or rod) or 3D (an ellipsoid). Depending on the conditions, distributions of the following types are observed: log-normal; power law (D. L. Turcotte); the Mott distribution; exponential (D. E. Grady); the Weibull distribution; and a combination of exponential and power distributions (J. J. Gilvarry).
In this paper we investigate the fragmentation of 2D objects, namely tubular ceramic (alumina) samples, under shock-wave loading.

Nomenclature
ms: mass of sample
d: wall thickness of tube
h: height of tube
r1: outer radius of tube
r2: inner radius of tube
ρo: density of the ceramic
WC: energy stored in the capacitor battery
QW: energy expended on evaporating the conductor (at room temperature)
w: energy density of the explosion
N: quantity of fragments of a given mass
mf, m: mass of a fragment
d*: characteristic size of a fragment
S: area of the fragment's image in the photograph
Sin: image area of the fragment on the inner surface of the tube
Sout: image area of the fragment on the outer surface of the tube
m*: mass of a fragment determined by the photography method
α: form factor

2. Experiment and materials. Loading of the tubular specimens was carried out on a universal electric-explosion-of-conductor experimental setup (Bannikova et al., 2013a). A schematic diagram and the appearance of the experimental setup are presented in Bannikova et al. (2013a, 2013b). It is a complex that includes a cylindrical explosion chamber 0.24 m in diameter and 0.085 m in height, a high-voltage source (HVS, Umax = 5–15 kV), a storage capacitor system (Co = 0.022–0.44 mF), a discharger, and a (manual) discharge ignition system acting on the conductor (a copper wire of diameter dw = 0.1 mm). Damping material is set on the inner side surface and on the bottom of the cuvette so that the shock wave does not destroy the chamber wall. The tubular sample, with an axial conductor, was placed in the explosion chamber filled with distilled water. The shock wave was initiated in the liquid by the electric explosion of the conductor upon discharge of the capacitor bank charged from the HVS. The water reduced the “energy” of the flying fragments, which prevented their further fragmentation through collisions with each other and with the walls of the chamber. During the tests we performed a series of experiments at different energies of the capacitor bank.
The current through the conductor was measured with a Rogowski coil. The current density was about 10¹¹–10¹² A m⁻². The alumina tubes had outer and inner radii r1 = 5.9 mm and r2 = 3.9 mm (tube wall thickness d = 2.05 mm), and the height h was in the range 12.7–16.7 mm (Fig. 1 (a, b)). The density of the tube material was ρo = 2.6 × 10³ kg m⁻³. As a result of the conductor explosion the tube fragmented, as shown in Fig. 1 (c); the fragments had rectangular and irregular geometric forms (Bannikova et al., 2013b). Optical microscopy showed that the fragments have a developed crack structure (Fig. 2). The main cracks are vertical rupture cracks formed as a result of the radial loading by the explosive wave; the horizontal cracks in the middle of the sample are release cracks. The appearance of the latter cracks is possibly due to the velocity profile of the explosive wave reaching its maximum at the middle of the tube. A polyethylene film was used to collect the fragments from the liquid; it was placed on the bottom of the chamber, under the tube. After drying at room temperature (20 °C), the fragments were weighed on an HR-202i electronic balance. The collected fragment mass averaged ~98% of the sample mass for all test samples. Two methods for determining the mass of the fragments were considered, because the mass of some fragments does not exceed the resolution of the electronic balance, 0.0001 g. Fig. 1. (a) external view of the tube; (b) scheme of the sample; (c) distribution of fragments in the water after the explosion of the tube. Fig. 2. Cracks on the surface of a fragment at an explosion energy density w = 5.9 J g⁻¹: (a) appearance of the fragment; (b) outer side of the fragment; (c) inner surface of the fragment. 2.1. The method of weighing. The fragments were sieved through a system of laboratory sieves and weighed on the HR-202i electronic balance.
The distribution of the number of fragments N whose weight exceeds a given value mf, in logarithmic coordinates, is shown in Fig. 3 for all tested samples. Part of the energy stored in the capacitor battery was spent on processes occurring in the conductor (heating to the melting temperature, melting, heating to the evaporation temperature, and evaporation of the conductor), that is, some energy QW; and since the mass ms of the samples varied, it was decided to use the “energy density” w. The energy density was estimated by Eq. (1):

w = (WC − QW) / ms (1)

To the right of each sample number (Fig. 3), the corresponding value of the specific energy w is shown. In Fig. 3 we can distinguish two regions corresponding to different distribution laws. Since the loaded ceramic tubes in our work are two-dimensional objects, their fragments follow two distributions: one distribution law for large fragments, whose thickness is commensurate with the thickness of the sample (the tube wall thickness), and another for small (volume) fragments. These results agree well with Katsuragi et al. (2005). It has been established that the inflection point of the distribution N(mf) shifts toward smaller scales with increasing energy density. Fig. 3. The distribution of fragments by weight, logarithmic coordinates. Processing of the experimental data showed that the distribution of fragments with weight less than the average (mf ≤ mmean) and with fragment size much smaller than the characteristic size of the tube (d* << d) is well described by a power law, while the distribution of the large fragments (d* = d) is described equally well by an exponential law and by a logarithmic law (squared deviation R² = 0.96–0.98), Eq. (3).
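The bookkeeping behind Eq. (1) and the cumulative count N(m) plotted in Fig. 3 can be sketched in a few lines of code (a hypothetical illustration: the function names and all numerical values are invented for the example, not taken from the paper's data):

```python
# Energy density of Eq. (1): capacitor energy WC, minus the energy QW
# spent on heating, melting and evaporating the conductor, divided by
# the sample mass ms.
def energy_density(W_C, Q_W, m_s):
    return (W_C - Q_W) / m_s

# Cumulative distribution of Fig. 3: for each fragment mass m, the
# number N of fragments with mass greater than m (count of fragments
# after position i in the sorted list).
def cumulative_count(masses):
    ordered = sorted(masses)
    n = len(ordered)
    return [(m, n - i - 1) for i, m in enumerate(ordered)]

# Invented numbers, for illustration only:
w = energy_density(W_C=200.0, Q_W=50.0, m_s=2.5)      # -> 60.0 J/g
fragments = [0.001, 0.003, 0.003, 0.02, 0.05, 0.11]   # fragment masses, g
counts = cumulative_count(fragments)
print(w)
print(counts)
```

Plotting ln(N) against ln(m), as in Fig. 3, then makes the change of slope between the small-fragment and large-fragment regimes visible.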
The change of the coefficients in these distributions with the specific energy is shown in Fig. 4 (b, c). The higher the explosion energy, the fewer fragments of large mass there are, so the inflection point of the distributions moves toward smaller scales (Fig. 3, black arrow):

N = B·exp(A·m), N = A·ln(m) + B (3)

Fig. 4. (a) Change of the coefficients B and A in the power-law distribution for small fragments as a function of the specific energy; change of the coefficients A and B (b) in the exponential distribution and (c) in the logarithmic distribution, Eq. (3). 2.2. The method of photography. In addition to weighing, the distribution of fragments by size was estimated by digital photography, since the balance accuracy of 0.0001 g made it impossible to measure the mass of the majority of the small fragments. A Canon 7D 16.1-Mpix camera was used to photograph the fragments (Fig. 5 (a)). The fragments were placed by hand on a black backing. On the right side of the image are located those fragments which were individually weighed on the electronic balance, and whose height corresponded to the tube wall thickness, d* = d (Fig. 5). The image area of each fragment was calculated using a software package for processing digital photographs (Fig. 5 (b)). The mass of the large fragments (d* = d) was calculated by Eq. (4), where S is the average image area of the fragment obtained by processing two photographs, S = (Sout + Sin)/2. The mass of the small fragments (3D, d* < d) could not be calculated by the method described in Brodskii et al. (2011), since the “form factor” coefficient α, which characterizes the geometry of the fragment, was different for tube fragments. The form factor for the smaller fragments was calculated by the method described below.
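Both large-fragment laws in Eq. (3) are linear after a change of variables (ln N versus m for the exponential law, N versus ln m for the logarithmic law), so the coefficients A and B can be estimated by ordinary least squares. A minimal sketch, with invented data rather than the paper's measurements:

```python
import math

def linear_fit(xs, ys):
    """Ordinary least-squares slope and intercept for y = slope*x + intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return slope, my - slope * mx

m = [0.02, 0.04, 0.06, 0.08, 0.10]   # fragment mass, g (invented)
N = [120, 60, 30, 15, 8]             # cumulative counts N(m) (invented)

# Exponential law N = B*exp(A*m):  ln N = A*m + ln B
A_exp, lnB = linear_fit(m, [math.log(v) for v in N])
B_exp = math.exp(lnB)

# Logarithmic law N = A*ln(m) + B
A_log, B_log = linear_fit([math.log(x) for x in m], N)

print(A_exp, B_exp)   # A_exp is negative: counts fall with mass
print(A_log, B_log)
```

Comparing the residuals (or R²) of the two linearized fits reproduces the kind of comparison the paper reports (R² = 0.96–0.98 for both laws).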
m* = ρ·d·S (4)

The mass of the large fragments was subtracted from the mass of all fragments of the sample, and the mass of all the small fragments, md*<d, was found. … gives one exponential distribution, as in the case of shells from walled cylinders in the works of Grady (2006). It was found that the distributions obtained by the “method of weighing” and the “method of photography” are in good agreement, and the latter was subsequently used to speed up data processing. Acknowledgements. This work was supported by projects of RFBR: No. 14-01-00842, No. 14-01-00370, No. 14-01-96012 and No. 13-08-96025. References. Bannikova, I.A., Naimark, O.B., Uvarov, S.V., 2013a. Development of methodology for research the relaxation properties in the fluid using the setup of electrical explosion. Extreme states of matter, detonation, shock waves. Proceedings of the International Conference XX Kharitonov Topical Scientific Readings. Sarov, Russia, p. 745–754. Bannikova, I.A., Naimark, O.B., Uvarov, S.V., 2013b. Research of the fragmentation of tubular ceramic samples using electrical explosive setup. International Conference “Hierarchically organized systems of animate and inanimate nature”, Proceedings of the International Conference. Tomsk, Russia, p. 206–209 (in Russian, CD disk). Brodskii, R.Ye., Konevskiy, P.V., Safronov, R.I., 2011. Size distribution of sapphire fragments in shock fragmentation. Functional Materials 18, No. 2. Grady, D., 2006. Fragmentation of rings and shells. Springer-Verlag Berlin Heidelberg, 374 p. Katsuragi, H., Ihara, S., Honjo, H., 2005. Explosive fragmentation of a thin ceramic tube using pulsed power. Physical Review Letters 95, 095503, p. 1–4.

Analogue: On Zoe Leonard and Tacita Dean. Author(s): Margaret Iversen. Source: Critical Inquiry, Vol. 38, No. 4, Agency and Automatism: Photography as Art Since the Sixties, edited by Diarmud Costello, Margaret Iversen, and Joel Snyder (Summer 2012), pp.
796-818. Published by: The University of Chicago Press. Stable URL: http://www.jstor.org/stable/10.1086/667425

Analogue: On Zoe Leonard and Tacita Dean. Margaret Iversen. It is only now, with the rise of digitalization and the near-obsolescence of traditional technology, that we are becoming fully aware of the distinctive character of analogue photography. This owl-of-Minerva-like appreciation of the analogue has prompted photographic art practices that mine the medium for its specificity. Indeed, one could argue that analogue photography has only recently become a medium in the fullest sense of the term, for it is only when artists refuse to switch over to digital photographic technologies that the question of what constitutes analogue photography as a medium is self-consciously posed. While the benefits of digitalization—in terms of accessibility, dissemination, speed, and efficiency—are universally acknowledged, some people are also beginning to reflect on what is being lost in this great technological revolution.
In this context, artists’ use of analogue film and the revival of early photographic techniques should be regarded as timely interventions, although these may strike some as anachronistic. This essay does not attempt an ontological inquiry into the essential nature of the analogue; rather, it is an effort to articulate something about the meaning of analogue photography as an artistic medium for contemporary artists by paying close attention to its meaning and stakes for particular artists. Instead of presenting a general survey, I want to consider the work of just two artists, Zoe Leonard and Tacita Dean, both of whose work is concerned with what is being lost. As Leonard put it: “New technology is usually pitched to us as an improvement. . . . But progress is always an exchange. We gain something, we give something else up. I’m interested in looking at some of what we are losing.”1 Tellingly, both artists have produced exhibitions simply called Analogue. Leonard gave the title to a large project she did between 1998 and 2009 consisting of 412 silver gelatin and c-prints of local shop fronts in lower Manhattan and poor market stalls around the world.2 Dean used it for a 2006 retrospective exhibition of her films, photographs, and drawings. Debates about the difference between digital and analogue photographic art practices often turn on the issue of agency and automatism.3 This issue has become prominent because, for the past few decades, several celebrated artists have been producing large-scale photographic images in which artistic intention through digital manipulation is foregrounded. 1. Zoe Leonard, “Out of Time,” October, no. 100 (Spring 2002): 89. Critical Inquiry 38 (Summer 2012) © 2012 by The University of Chicago. 0093-1896/12/3804-0006$10.00. All rights reserved.
In the work of Jeff Wall or Andreas Gursky, for example, the process involves a kind of painting or collage with pixels where emphasis is placed on the carefully controlled synthesis and composition of multiple images to form the final picture. Both Leonard and Dean, however, are resistant to manipulation. Instead, their work values the analogue’s openness to chance and the medium’s indexicality. Of course, artists using analogue film exercise considerable agency selecting camera and film, in framing, focusing, and setting aperture size, time of exposure, and so on, as well as similar choices throughout the printing process. Yet, for the artists I consider, all these forms of intervention do not compromise the analogue’s photochemical continuity with the world. The analogue is defined as a relatively continuous form of inscription involving physical contact. From this point of view, the photogram, produced by contact between an object and light sensitive paper, only makes explicit what is implicit in all analogue photography. Conversely, digital photography’s translation of light into an arbitrary electronic code arguably interrupts that continuity. This discontinuity precedes the effects of digital editing or computerized image synthesis.4 These reflections on the distinction between analogue and digital inevitably raise the thorny question of whether digitalization has compromised the authority of the photographic document. Those who argue the case are likely to underestimate the extent to which the analogue document is naturally distorted and intentionally manipulated.5 They also tend to neglect the fact that digital photography provides journalists, astronomers, and doctors, among others, with accurate information about the objects or states of affairs that were the image’s origin. Both technologies are causally bound up with their objects and susceptible to manipulation. From this practical point of view, there is no substantive difference between the two technologies.6 From an artistic point of view, however, I argue that there is an important difference. While the truth-value of photography is a much-debated and intriguing topic, it is not the focus of my interest in the analogue; my concerns are aesthetic rather than ontological or epistemological. My theme is the impact of the new technology on artistic practice. Digital photography has had inescapable consequences, not only for those artists who have adopted it, but also for those who have not. It is too early to say whether digital photography constitutes a new medium or if, like the introduction of color film, it is a modification of an old one. In any case, it is possible to point to important shifts in practice that have in fact occurred. 2. There are actually three versions of Analogue: 412 photographs displayed as an installation, a book of 90 photographs, and a series of dye transfer prints that can be displayed in series or individually. 3. See, for example, William J. Mitchell, “Digital Images and the Postmodern Era,” The Reconfigured Eye: Visual Truth in the Post-Photographic Era (Cambridge, Mass., 1992), pp. 8–10. MARGARET IVERSEN is professor in the School of Philosophy and Art History, University of Essex, England. Her most recent books are Beyond Pleasure: Freud, Lacan, Barthes (2007), Writing Art History (coauthored with Stephen Melville, 2010), and Chance (2010). Her other published books include Alois Riegl: Art History and Theory (1993) and Mary Kelly (1997). She coedited “Photography after Conceptual Art” for Art History (2009) and was director (with Diarmuid Costello) of the Arts and Humanities Research Council research project, “Aesthetics after Photography.”
The interface of photographic technology with the computer and the availability of large-scale digital printing have revolutionized photographic art in the last thirty years. In response, artists working with the analogue have tended to emphasize the virtues or specific character of predigital technologies. Since digital cameras are designed to mimic the functions of analogue ones, amateur photographers are probably unaware of much difference in the resulting image. Artists, however, are interested in investigating their materials and so are likely to seize on a technical difference and amplify it. An example of this trend can be seen in the work of artists using digital photography who enlarge low-resolution pictures in order to make the pixel grid visible. Meanwhile, certain contemporary analogue photographers, such as Hiroshi Sugimoto, are reviving earlier printing techniques to achieve effects like the incomparable velvety blacks and luminous whites of silver gelatin prints. As we shall see, in response to digitalization, both Leonard and Dean have found ways of making the character of their medium salient. They adopt a receptive attitude and welcome chance effects. They try to make the material, tactile quality of the medium palpable. 4. For a balanced discussion of these issues, see Phillip Rosen, Change Mummified: Cinema, Historicity, Theory (Minneapolis, 2001), esp. chap. 8, “Old and New.” 5. On the disparity between prephotographic reality and the image, see, for instance, Joel Snyder, “Picturing Vision,” Critical Inquiry 6 (Spring 1980): 499–526, and John Tagg, The Burden of Representation: Essays on Photographic Histories (Amherst, Mass., 1988), pp. 1–3. 6. See Snyder, “Photography, Chemical and Numerical,” lecture, American Society of Aesthetics, 6 Nov. 2008.
Both artists are drawn to objects that bear the indexical marks of weather, age, and use—discovering an elective affinity between these things and the way they imagine their medium. An analogue record of those traces doubles the indexicality of the image, making the image a trace of a trace and thereby drawing attention to an aspect of the medium within the image. For example, both Leonard and Dean have made series of photographs of misshapen trees. As in these series, their work often focuses on the damaged texture of the world, for it is precisely this texture that is compromised by the digitally “enhanced” environment characteristic of commercial digital photography. In short, they associate analogue photography with a kind of attentive exposure to things in the world marked by chance, age, and accident. This idea of exposure, in both its photographic and ethical senses, informs my sense of the work of Leonard and Dean. A brilliant exposition of the poetics of exposure can be found in Eric Santner’s On Creaturely Life: Rilke, Benjamin, Sebald.7 The book’s point of departure is Rainer Maria Rilke’s poetic evocation of the creaturely gaze in his eighth Duino Elegy.8 According to Rilke, this gaze is quite different from our ordinary sort of perception, which is reflective, conceptually mediated, articulated, and crossed by various purposes that tend to position the subject over against an object. Our consciousness is closed in on itself, reflecting ready-made representations. The human gaze is normally twisted by the knowledge of death and clouded by memory. For Rilke, animals, children, those in love, and those so near death that they can see beyond it are best placed to look, not at the fully constituted objects of habitual experience, but into the Open.9 This romantic conception of the creaturely was subsequently taken up and critiqued by Martin Heidegger and then modified by a German-Jewish tradition including Franz Kafka, Walter Benjamin, and W. G. Sebald. For these writers, creaturely life designates the condition of humans in modernity, that is, the condition of thrownness into an enigmatic Open where one is exposed to political power and surrounded by the cryptic ruins of defunct forms of life. Fundamental moods of boredom and, for Benjamin, melancholy, are attuned to this sort of traumatized creaturely exposure to the Open.10 To this list should be added the surrealist’s favored attitude of disponibilité, which involves openness to whatever befalls one, like the disconcerting chance encounter with the found object. Santner’s chapter on Sebald bears most closely on our topic. The degree to which photography, chance, and coincidence are important factors in Sebald’s narratives is well known. For example, the key event in Austerlitz’s life—his entering the ladies’ waiting room in Liverpool Street Station a few weeks before it was demolished—is presented as a chance occurrence. That event precipitated the recovered memory of his childhood transportation from Nazi-occupied Czechoslovakia and his prior life in Germany and so unlocked what had long been constraining his subsequent life. In this case, it was a lucky break, a happy chance, that opened up for him the possibility of change. History and memory were, so to speak, condensed in the architecture. 7. See Eric L. Santner, On Creaturely Life: Rilke, Benjamin, Sebald (Chicago, 2006). 8. See Rainer Maria Rilke, Duino Elegies, trans. C. F. MacIntyre (New York, 2007). The Elegies were written 1912–1922. 9. Reading to the end of Rilke’s poem, one discovers that this paradise is far from perfect: And how perturbed is anything come from a womb when it has to fly! As if afraid of itself, it jerks through the air, as a crack goes through a cup. As the track of a bat tears through the porcelain of evening. [Ibid., p. 65]
Such objects, writes Sebald, “carry the experiences they have had with us inside them and are—in fact—the book of our history opened before us,” if we are lucky enough or open enough to encounter them.11 Sebald’s interest in this sort of contingency explains the inclusion of photographs in his books. To describe them as illustrations of the text would be to diminish their importance, for opaque, old, found photographs or clippings from newspapers were often the starting point for his literary investigations. He closely associated photographs with loss and chance recovery. In an interview, he remarked that old photographs are almost destined to be lost, vanishing in the attic or a box, “and if they do come to light they do so accidentally, you stumble upon them. The way in which these stray pictures cross your path, it has something at once totally coincidental and fateful about it.”12 In addition, photography is allied with the circumstantial, the detail, the purely contingent, and it is this feature that most associates photography with the creaturely gaze. Benjamin’s traumatic theory of photography and Roland Barthes’s mad realism both involve a sort of radical exposure to the creaturely condition of the other.13 This condition is figured in the opening pages of Sebald’s Austerlitz. The narrator visits the Nocturama at the Antwerp Zoo, where he watches a raccoon with a serious expression on its face washing a piece of apple over and over again. 10. Giorgio Agamben has also taken an interest in the concept of the Open. See Giorgio Agamben, The Open: Man and Animal, trans. Kevin Attell (Stanford, Calif., 2004). 11. W. G. Sebald and Jan Peter Tripp, Unrecounted, trans. Michael Hamburger (London, 2004), pp. 78–79. 12. Quoted in Santner, On Creaturely Life, 152.
The animal's large eyes remind the narrator of the gaze of certain painters and philosophers who seek to penetrate the darkness.14 In my account of the work of Leonard and Dean, I further develop the idea of exposure in its ethical and photographic senses. Their attitude to the world and to the medium is, I argue, best summed up by the term exposure. But, first, a brief detour is necessary through Thierry de Duve's phenomenological description of two different sorts of photographic practice.

In "Time Exposure and Snapshot: The Photograph as Paradox," de Duve describes two sorts of photography. Couched in terms of a technical difference, his typology offers a way of rethinking photography as a medium with two faces. For de Duve, time exposure photography emphasizes the light sensitivity and indexicality of a medium that is attuned to objects. The instantaneous snapshot, by contrast, tries to capture events. De Duve aims to expand his analysis of photography beyond a purely semiotic reading to include "the affective and phenomenological involvement of the unconscious with the external world, rather than its linguistic structure." Accordingly, he aligns the snapshot with trauma and the time exposure with mourning. He reasons that the snapshot isolates a single point in time and space, and this prevents description or narration; one is rendered "momentarily aphasic" in a way that is analogous to the breakdown of symbolization characteristic of trauma.15 In addition, the temporality of the snapshot is always one of a missed encounter—too late to change what is about to happen but too early to see what transpires. The stillness and chiaroscuro of a time-exposure photograph, on the contrary, allow for an extended duration of viewing and reverie. As a substitutive object, moreover, it facilitates the work of mourning. One can, of course, cite counterexamples to de Duve's typology (the blurred long-exposure of a football player in action; the portrait with hair and clothes dishevelled by a gust of wind), but the effect of these images lies precisely in their going against the grain of the type normally associated with the genre.

De Duve cites Barthes's early formulation in "Rhetoric of the Image" (1964) of the photographic image as both "here-now" and "there-then."16 Yet, when he uses Barthes to describe the snapshot's "here and the formerly"17 as traumatic, de Duve adds a new dimension that Barthes then seems to have adapted two years later in Camera Lucida. However, he did so in such a way that it came to characterize photography in general, and more particularly time-exposure portrait photography, examples of which form the bulk of Barthes's illustrations. Barthes's revision seems to me entirely justified, for an athlete caught in mid-jump, de Duve's prime example of the snapshot, has none of the pathos of an old portrait photograph.

13. Benjamin describes early photographs as marked by the spark of contingency. See Walter Benjamin, "Little History of Photography," Walter Benjamin: Selected Writings, trans. Edmund Jephcott et al., ed. Michael W. Jennings, Howard Eiland, and Gary Smith, 4 vols. (1931; Cambridge, 2005), 2:507–30. Roland Barthes proposes the emotion of pity to characterize his response to the photographs that touch him; see Roland Barthes, Camera Lucida: Reflections on Photography, trans. Richard Howard (New York, 1981), pp. 116–17. For Santner, exposure involves pity for the "creatureliness" of the other.

14. See Tacita Dean, "W. G. Sebald," October, no. 106 (Fall 2003): 122–36.

15. Thierry de Duve, "Time Exposure and Snapshot: The Photograph as Paradox," October, no. 5 (Summer 1978): 118; rpt. de Duve, "Time Exposure and Snapshot: The Photograph as Paradox," in Photography Theory, ed. James Elkins (New York, 2007), pp. 109–24.
In any case, the isolation of the experience of an event, characteristic of the snapshot for de Duve, is not a feature of trauma. On the contrary, for Sigmund Freud and for Benjamin after him, it is rather the way that consciousness defends against trauma.18 The ballistic art of film with its jump cuts and montage effects is conceived by Benjamin as a means of adapting to modern life, of learning to screen potentially harmful impressions and so preventing them from entering experience.19 Charles Baudelaire, Benjamin's exemplary poet of the shock-experiences of crowds, technology, and gambling, compared the work of the poet or artist to a fencer parrying blows. Baudelaire's poetry, wrote Benjamin, "exposes the isolated experience in all its nakedness."20 In short, what de Duve refers to as trauma is rather consciousness as it screens and parries the shocks of contemporary life.21 Traumatic experience, conversely, is defined by Freud as having such an overwhelming or ungraspable character that it slips past those defenses to form a reserve of unconscious memory traces, psychical scars that can only be retrieved retrospectively and involuntarily.22 It is important for my argument to maintain this distinction between the shock effect of the snapshot and the traumatic effect of time-exposure photography. Time exposure is presented here as an alternative model of experience to the defensive, snapshot, parrying of the blows; it implies a receptivity or vulnerability or exposure to whatever is encountered.

What is fundamentally at issue here is an analogy between the subject of trauma who is marked by the sight of something that leaves an indelible trace on the psyche and the wide open camera lens and light sensitive medium that records on film a trace of whatever happens. André Breton conceived of the chance encounter in just these terms. The idea of objective chance governing the encounter with the found object is indebted to his reading of Freud's Beyond the Pleasure Principle.23 This is evident in one of Breton's key formulations: "Chance would be the form taken by external reality as it traces a path [se fraie un chemin] in the human unconscious."24 Bypassing the "protective shield against stimuli," traumatic events leave behind an indelible trace.25 The work of art as the paradigm case of a mind-formulated artifact wholly porous to the intentions of its maker is here challenged by an alternative practice that contrives ways to capture the unpredictability of our encounter with the world. Agency is involved in setting up the apparatus and in judging the outcome; between these moments, chance is allowed to intervene. While this bracketing of intentionality is a choice, the material that emerges is outside the artist's control. To put the case another way, the fact that an artist intentionally courts chance does not make everything that emerges from that process intentional, unless you claim that retrospective acceptance of a chance occurrence confers intentionality—but that does seem to me to stretch the concept to the breaking point.

16. Barthes, "Rhetoric of the Image," trans. Stephen Heath, Image-Music-Text (New York, 1977), p. 44.

17. Misquoted in de Duve, "Time Exposure and Snapshot," p. 117.

18. Benjamin uses two German words where we have only the one word, experience. Erlebnis and Erfahrung are usually translated as "isolated or immediate experience" and "long experience," respectively.

19. See Benjamin, "Work of Art in the Age of Reproducibility (Third Version)," Walter Benjamin, 4:251–83, esp. p. 267.

20. Benjamin, "On Some Motifs in Baudelaire," Walter Benjamin, 4:336.

21. See ibid., p. 318, for Benjamin's discussion of Chockerlebnis.

22. According to Freud, perception-consciousness and memory are two distinct systems, for, as he observed, "becoming conscious and leaving behind a memory-trace are processes incompatible with each other within one and the same system." He adds, in italics, "consciousness arises instead of a memory trace." While conscious experience quickly expires, memory traces are "often most powerful and most enduring when the incident that left them behind was one that never entered consciousness" (Sigmund Freud, Beyond the Pleasure Principle, in The Standard Edition of the Complete Psychological Works of Sigmund Freud, trans. James Strachey, 24 vols. [London, 1953–1971], 8:25).

23. See Margaret Iversen, Beyond Pleasure: Freud, Lacan, Barthes (University Park, Penn., 2007), on the artistic and theoretical legacies of Freud's Beyond the Pleasure Principle.

24. André Breton, Mad Love, trans. Mary Ann Caws (London, 1987), p. 25.

25. Freud, Beyond the Pleasure Principle, 8:298.

Zoe Leonard

In her major series of photographs of shop fronts, Analogue, Leonard makes apparent the analogue character of her medium (fig. 1). For example, she leaves the black surround of the film with the brand names Kodak and Fuji clearly visible. She usually hangs them under glass with no frame or mat. The familiar square format of the prints, 11 in. x 11 in., recalls pictures taken by classic analogue film cameras, like the old Rolleiflex she in fact used. They are also of modest size, although the full installation of all 412 prints, as was seen at Documenta XII in 2007, is monumental in scale. The impetus for the project, which involved a decade of work and thousands of pictures, was the gentrification of her neighborhoods in the Lower East Side of New York and in Brooklyn. The old linoleum store and local butcher were making way for new clothing boutiques and bars.
Yet it was only at the point when "the layered, frayed and quirky beauty" of her neighborhood was on the point of disappearing that she realized how much she loved and depended on it.26 Although Analogue moves out from Leonard's neighborhood, following the movement of unwanted clothes and multinational brand logos from New York to market stalls around the world, including Mexico City, Cuba, Kampala, Ramallah, and Eastern Europe, the photographer's relation to the things she documents always remains close-up, personal, and small scale.

Leonard's project, then, in some way resembles Eugène Atget's documentation of old Paris around 1900, which was also prompted by its ongoing demolition. There are clear allusions to Atget in her work: for example, in the fascination with shop windows, the attention paid to lowly and overlooked quarters, the avoidance of people, and in the organization of the photographs into thematic chapters. The Analogue book contains an essay by Leonard called "A Continuous Signal" (which is one definition of the word analogue) that is made up entirely of quotations from other writers and has a section devoted to Atget. There are also clear allusions to Walker Evans, who published a set of color photos of shop fronts in Fortune together with a statement about the wonders of shop front displays in New York: "What is as dependably entertaining as a really enthusiastic arrangement of plumbers' tools?"27 The examples of Atget and Evans seem to have offered Leonard a way of reconciling the document and art by using the photograph to frame the strange beauty of the ordinary and overlooked.

Shop fronts are Leonard's version of the surrealists' flea market where, in a receptive frame of mind, one might chance upon something personally revelatory.

26. Leonard, "Out of Time," p. 89.

FIGURE 1. Zoe Leonard, Analogue, 1998 and 2007 (detail). © the artist, courtesy of Galerie Gisela Capitain, Cologne.
Leonard's photographs of found arrangements of objects and the story they tell are both personal and political. Her photographs are like formally uniform boxes that store the large found objects she encounters in her walks through the streets. The camera becomes a receptacle and the author a receiver. Kaja Silverman's "The Author as Receiver" discusses the fundamentally receptive character of photography and film, as well as the ethics of this position.28 Other contemporary artists have also commented on the significance of vessels and containers in their work as signaling a receptive attitude. For example, commenting on a readymade piece, Gabriel Orozco remarked, "the shoe box is an empty space that holds things. I am interested in the idea of making myself—as an artist and an individual—above all a receptacle." He sums up by saying that "the ideas of the void, of the container and of vulnerability have been important in all my work. I also work a lot with the accident." Orozco makes the link, crucial for my argument, between receptivity and chance.29

Leonard's sensibility is both informed by and in tension with the formal rigor of minimalism. This ambivalent relation is made very apparent in an installation from 2003. The work, 1961, consists of forty-one different secondhand suitcases in subtle gradations of blue, arranged in a row spanning the length of the room (fig. 2). Leonard has referred to the work as autobiographical; the title is her birth year and the number of suitcases her age at the time of making.

27. Walker Evans, "The Pitch Direct," Fortune 58 (Oct. 1958): 139.

28. See Kaja Silverman, "The Author as Receiver," October, no. 96 (Spring 2001): 17–34. George Baker pursues a similar line of argument in a piece about Leonard, Dean, and Sharon Lockhart; see George Baker, "Loss and Longing," in 50 Moons of Saturn: T2 Torino Trienniale, ed. Daniel Birnbaum (Milan, 2008), p. 64.
Personal identity is here turned into a series of spatial compartments, repositories of emotions, thoughts, and memories. There is an analogy at work, then, between the suitcases and her photography as both relate to what she has called "the impossible task of remembering."30 The repeated modules recall Donald Judd's installations, but the status of the suitcases as found objects attracts a host of associations—people in transit, migration, leaving home, anxiety, and so on. As Leonard has observed: "We use things to communicate complex ideas, feelings; it is a dense, compact, potent language, the language of the found object."31 And, of course, they are receptacles—closed and private. For Leonard, they evoke the idea of life as a journey: "A trip feels like a metaphor for life. It has a beginning, a middle and an end; it is a combination of choice and chance, of intention and surprise."32 For Leonard and, as we shall see, for Dean, the journey represents a paradoxical intention to abandon oneself to chance, to launch oneself into the unknown.

Although Leonard takes great care with the choice of materials and the framing and printing process, I think it is fair to say that, for her, photography is mainly an art of noticing, recording, and editing. In an interview, she once remarked, "I think my work is less about creating and more about observing."33 This restriction of authorial agency allows her to be open to the element of chance.

29. Gabriel Orozco, "Gabriel Orozco in Conversation with Guillermo Santamarina, Mexico City, August 2004," Gabriel Orozco (Madrid, 2005), p. 143. For more on the topic of chance and contemporary art see Chance, ed. Iversen (Cambridge, Mass., 2010).

30. Leonard, email to author, 26 May 2010.

FIGURE 2. Zoe Leonard, 1961, 2003. © the artist, courtesy of Galerie Gisela Capitain, Cologne.
One particularly good example of this strategy can be seen in the series tree + bag (2000), which simply records the random arrangements of plastic bags caught up in the branches of bare trees (fig. 3). Leonard made another series of trees she came across in her neighborhood that were struggling in the urban environment and surviving, albeit in a misshapen form, Tree and Fence (1998–99) (fig. 4). She commented: "I was amazed by the way these trees grew in spite of their enclosures—bursting out of them or absorbing them. The pictures in the tree series synthesize my thoughts about struggle. People can't help but anthropomorphize. I immediately identify with the tree."34 I find them to be quite painful images that speak of the deforming effects of power, confinement, and discrimination. Certainly a sense of vulnerability is powerfully conveyed by the way the bark of the trees is impressed by the rigid form of iron bars or a chain link fence. Yet Leonard is insistent that the living flesh of the trees should also be seen as resisting and overcoming those effects. The trees, she says, are "growing through and around these fences, so there is evidence of them as living, growing, adapting organisms."35 For her they are found, everyday emblems of the struggle for survival in inhospitable conditions.

Strange Fruit (for David) (1993–98) is a remarkable installation piece that consists of around three hundred skins of various fruits, each one emptied and carefully stitched together. They are scattered on the floor of the gallery, as if fallen from trees (fig. 5). The piece is a work of mourning for all of Leonard's friends who had died from AIDS at a time before drugs were available. The act of stitching together the empty skins suggests that this vain effort might be reparation or restoration. It shows "a desire to make whole, to hold onto the form of something or someone" even after they are gone ("T," p. 101). And, for her, this desire to preserve is bound up with art making, with photography. While this desire is acknowledged, so also is the impossibility of keeping anything in perpetuity. "This work takes the material of the still life and reworks it. It borrows from the language of vanitas pictures and suggests that the artwork cannot preserve the person or the memory any more than the artwork can be preserved" ("T," p. 101). The skins, although owned by a museum, are slowly turning to dust.

This piece, more than any other, brings to the fore the theme of the pain of separation. But I think it can act as a lens through which to view her work as driven not by nostalgia but by separation, loss, and desire. What becomes very clear with this work is how deeply the AIDS epidemic affected Leonard. It decimated the community around her, and the lack of public mourning for its victims made it harder to bear. The deserted streets of New York in Analogue refer to this traumatic emptying out, as well as to the closure of familiar neighborhood shops.36

I have discussed two sculptural installations in my account of Leonard because I think that her understanding of photography, with its emphasis on the object represented, is close to sculpture.37 We recall that de Duve associated slow optics with the picture, which frames a lost or absent object, and contrasted it with the snapshot, which tries to capture an event. The paradigm of the first sort is the funerary portrait, that of the second, the abrupt and aggressive press photo.

31. Leonard, "Salvage," unpublished manuscript.

32. Leonard, "Recollection," unpublished manuscript.

33. Leonard, "An Interview with Zoe Leonard," interview by Beth Dungan, Discourse 24 (Spring 2002): 80.

34. Leonard, "A Thousand Words: Zoe Leonard Talks about Her Recent Work," Artforum 37 (Jan. 1999): 101; hereafter abbreviated "T."

35. Leonard, email to author, 26 May 2010.
What this implies is that the modality of slow optics is spatial rather than temporal and tied to the object rather than the event. In other words, time exposure is more attuned to sculpture than the prefilmic snapshot. This was also Benjamin's view; describing a photographic portrait of Friedrich Schelling, he ascribed to the lengthy exposure time the emergence of the very tactile creases in the philosopher's face and in the folds of his coat. Schelling has grown into his coat and his face in the same way that his image has slowly grown into the light sensitive plate. Equally, "during the considerable period of the exposure, the subject (as it were) grew into the picture, in the strongest contrast with the appearances in a snapshot." This allowed him "to focus his life in the moment rather than hurrying on past it."38 Yet if this sort of photography is allied to sculpture, it is the sort of sculpture that is lined with absence, like Leonard's hollowed-out fruit skins or suitcases.

Tacita Dean

I hope that the constellation of linked concepts—including loss, trauma, the chance encounter with the found object, and what I am calling the time exposure style of analogue photography—is beginning to emerge. Without wishing to diminish the distinctive achievements of the two artists under consideration here, I now want to turn to the work of Dean and see if this same constellation might prove helpful in thinking about her work.

36. See Leonard, email to author, 10 Mar. 2011.

37. Leonard comments on why she works in both media: "In both cases, I work with found objects and found images, things I notice" (Leonard, "An Interview with Zoe Leonard," p. 79).

FIGURE 3. Zoe Leonard, Untitled (tree + bags), 2000 (detail). © the artist, courtesy of Galerie Gisela Capitain, Cologne.

FIGURE 4. Zoe Leonard, (tree + fence), 1998–99 (detail). © the artist, courtesy of Galerie Gisela Capitain, Cologne.
I detect a similar sensibility at work; for example, her film Pie (2003) is a pastoral version of Leonard's tree + bag series. Shot out of the back window of her house in Berlin, the film shows a tree and the random comings and goings of magpies (fig. 6).

Dean is even more outspoken than Leonard in her stand against the digitalization of everything. In the introduction to the catalogue for the exhibition Analogue, she declares that analogue is a description "of everything I hold dear." She points out that analogue refers to a vast range of things, from the movement of hands on a watch to writing and drawing. She continues, "analogue implies a continuous signal—a continuum and a line, whereas digital constitutes what is broken up, or rather, broken down, into millions of numbers." While the convenience of digital media is wonderful, she confesses: "for me, it just does not have the means to create poetry; it neither breathes nor wobbles, but tidies up our society, correcting it and then leaves no trace." It is not "born of the physical world." We are being "frogmarched," she declares, into a digital future "without a backward turn, without a sigh or a nod to what we are losing."39 I can't help hearing in this last phrase an echo of the tragic myth of Orpheus, who descends into the underworld to rescue his dead wife, but, leading her back to safety, he anxiously turns around and in so doing loses her again. Dean's posture as an artist is just this turning around out of fear and love.

As a sort of elegy for analogue film, Dean made a film of a French Kodak factory in operation shortly before it was to cease producing celluloid film stock (Kodak, 2006). I will confine my attention to those moments in her work when her conjoined interest in chance and analogue film is most apparent. In an email correspondence with me she confirmed her sense of the link between chance and the analogue: "a decline in one will invariably mean a decline in the other and our lives would be greatly impoverished for it."40 For all the excellent secondary literature on her work, this connection has not been adequately addressed.

Although she is mainly a filmmaker, Dean does use still photography—often in the form of found photographs. She is an habitué of the flea market, a collector of the discarded with an eye for old postcards and family snaps—"a junk junkie," she jokes. This sort of collecting is a matter of chance and luck—and Dean loves to court chance. She explicitly connects her collecting activity with the surrealist chance encounter: "I concur with André Breton when he spoke of the objective chance process being about external circumstances acting in response to unspoken desires and demands of the human psyche. I want to be in a position to allow the unforeseeable to happen."41 She started collecting old photographs in the mid-1990s, but this activity was given a focus only when she was commissioned to make a book for the art press Steidl. The result, Floh (2001), has been commented on at length by Mark Godfrey in "Photography Found and Lost: On Tacita Dean's Floh."42 All 163 amateur photographs included in the book were found in various flea markets around the world and reproduced without comment (fig. 7).

38. Benjamin, "Little History of Photography," 2:514.

39. Tacita Dean, "Analogue," Analogue: Drawings 1991–2006, ed. Theodora Vischer and Isabel Friedli (Göttingen, 2006), p. 8.

FIGURE 5. Zoe Leonard, Strange Fruit (for David), 1992–97 (installation view, Philadelphia Museum of Art). © the artist, courtesy of Galerie Gisela Capitain, Cologne.

FIGURE 6. Tacita Dean, Pie, 2003, 16mm color film with optical sound, 7 mins. © courtesy of the artist, Frith Street Gallery, London and Marion Goodman Gallery, New York and Paris.
She has stated that this wordlessness is intentional: "I want them to keep the silence of the flea market; the silence they had when I found them; the silence of the lost object."43 Another prominent feature of the photographs in Floh is the plethora of mistakes. The quasi-accidental nature of photography can be seen, perhaps especially, in the hands of amateurs and even more so in those prints that end up in flea market bins. The photos are a regular inventory of technical errors: odd framing, poor focus, over- and underexposure, camera shake, and blurred subjects in motion, to name but a few. In some cases, one is inclined to surmise that the shutter must have been released accidentally. The photographs themselves have also been subject to accidents and wear and tear such as fingerprints, scratches, and other marks. It is as though the condition of the medium were being explored by illustrating everything that can go wrong. The realization gradually dawns on one that these sorts of accidents don't occur anymore because modern digital cameras take the guesswork out of taking pictures, and bad ones are instantly deleted. Furthermore, many people view their photographs on computers, so they never have a physical, paper form that could eventually wind up in a flea market. This change reflects the tidying-up of our world that Dean finds so impoverishing.

40. Dean, email to author, 22 Sept. 2010.

41. Dean, An Aside: Selected by Tacita Dean (London, 2005), p. 4.

42. See Mark Godfrey, "Photography Found and Lost: On Tacita Dean's Floh," October, no. 114 (Autumn 2005): 90–119.

43. Dean, "Floh," Tacita Dean: Selected Writings 1992–2011, in Seven Books, 7 vols. (Göttingen, 2011), 6:50.

FIGURE 7. Tacita Dean, image from Floh, Steidl Books, 2001. © courtesy of the artist, Frith Street Gallery, London and Marion Goodman Gallery, New York and Paris.
The book thus preserves a hundred-year history of a certain kind of photography that was wide open to chance. Some of the results are enigmatic, others hilariously funny, and some incomparably beautiful. The title Floh refers to the German for "flea," as in flea market, but it also refers to the flow of photographs detached from their moorings, cast adrift on a sea of discarded things. This sort of flow connects with the recurrent theme of the sea in Dean's work—a theme that has long been linked with Fortuna, goddess of luck or chance, who is often represented holding a billowing sail. Dean's large chalk drawings on blackboards of storm-tossed ships are a good example of how she uses her medium analogically in order to evoke the subject, for the chalk makes possible a process of erasure and redrawing that, like the sea, cannot be fixed. As she remarked: "Because of the flux, the drawing and the erasure, the whole process is so like the nature and the movement of the sea."44 In a brief prose piece, "And He Fell into the Sea," Dean paid homage to a work by another artist in this tradition—Bas Jan Ader's final and unfinished performance, In Search of the Miraculous II (1975).45 Dean has written about Ader's and Donald Crowhurst's unsuccessful one-man sea voyages, both of which ended in death, which is, sadly, one possible consequence of giving oneself up to contingency, but not the only one.

Michael Newman's contribution to Dean's collection of writings, Seven Books, bears closely on this issue. In his essay, "Salvage," he discusses Dean's use of the journey as a topos traditionally understood as a metaphor for human life.
He notes that Dean added a subtitle to her film Disappearance at Sea II (1997)—Voyage de guérison (voyage of healing).46 In a short prose piece accompanying the film, Dean wrote of the myth of Tristan who, poisoned and beyond help, "surrendered himself up to the forces of the sea" and departed on a "journey of healing—where he floated alone on a small boat with no oars nor sail nor rudder," hoping to drift to some magical island where he would be cured. Floating alone in the boat for seven days and seven nights, wounded and weary, he finally finds the healing of Isolde.47 Dean adumbrates here the connections to be found amongst the sea, the journey of life, and the idea of confiding oneself to chance or giving oneself over to contingency as a way of opening up new possibilities in both life and art.

The strong thematic link in Dean's work between the sea voyage and chance is clear, but the question that concerns me is how this bears on her use of photography as a medium. Part of Disappearance at Sea II shows the beam of light from a lighthouse panning across the dark sea (fig. 8). This shot creates an effect similar to a film technique called the blind pan, a term that refers to a sweep across the field of vision without focal point or object. Although it is not about the sea, her film Fernsehturm (2001) is similar in this respect (fig. 9). The film is a forty-four minute view of the interior of the revolving restaurant at the top of the television tower in former East Berlin. The movement is similar to a very slow 360-degree pan with a film camera, except in this case the camera is fixed while the restaurant slowly turns, bringing diners into view, during which time the earth itself turns and night falls. Dean was surprised to find how much it looked like the prow of a ship moving through the sea.48

Interestingly, Dean made a series of photogravures called Blind Pan (2004) that show what appears to be high, blasted moorland spread sequentially over five large frames across which are inscribed stage directions for the self-blinded and lame Oedipus and his daughter Antigone making their way through the wilderness into exile.49 Haltingly moving forward, blind, with no definite aim, open to what happens, they are at sea on dry land. Dean attaches an ethical value to the blind pan, which, along with the long take with a fixed camera, is the cinematic version of what I've been referring to as time-exposure photography. Commenting on her free-associative curatorial project for Camden Arts Centre, An Aside, Dean made a pertinent comment that clearly has wider implications: "Nothing can be more frightening than not knowing where you are going, but nothing can be more satisfying than finding you've arrived somewhere without any clear idea of the route. . . . I have at least been faithful to the blindness with which I set out."50

44. Dean, Tacita Dean (Barcelona, 2001), p. 97.

45. See Dean, "Bas Jan Ader: And He Fell into the Sea," Tacita Dean: Selected Writings, pp. 91–92.

46. See Michael Newman, "Salvage," in Tacita Dean, 7 vols. (Paris, 2003), 7:[21]–[31].

47. Dean, "Disappearance at Sea II," Tacita Dean: Selected Writings, p. 21.

FIGURE 8. Tacita Dean, Disappearance at Sea II, 1996, 16mm color anamorphic film with optical sound, 14 mins. © courtesy of the artist, Frith Street Gallery, London and Marion Goodman Gallery, New York and Paris.

FIGURE 9. Tacita Dean, Fernsehturm, 2001, 16mm color anamorphic film with optical sound, 44 mins. © courtesy of the artist, Frith Street Gallery, London and Marion Goodman Gallery, New York and Paris.

The blind pan in this context is a metaphor; Dean actually does not generally use pans or zooms. As she remarked in an interview: "I like the static shot that allows for things to happen in the frame. . . .
It is just allowing the space and time for whatever to happen, and that comes very much from the nature of film. . . . I tend to hold the frame until my film runs out.”51 The facts that Dean’s films border on still photography and that her favored format is widescreen are also relevant. The viewer, close to the wide screen in a gallery space, is free to pan across the image. In this context she remarks, “I have always thought that art works best when it is open to this subjectivity, when it is not bound by too much direction and intent.”52 The work itself must be consigned to the contingencies of its reception. Analogue photography and film as media of contemporary art after digitalization have become associated with a longer tradition of photography and writing on photography in which the camera eye is imagined as staring unguarded into an enigmatic Open. Within this tradition, openness, together with the automatism of the camera and the indexicality of analogue film, result in a kind of photography that is marked by contingency and seared by reality. As Dean so succinctly put it, analogue photography is “the imprint of light on emulsion, the alchemy of circumstance and chemistry”—a conception of the medium that I have elaborated in terms of the idea of exposure.53

48. See Dean, “A Conversation with Tacita Dean,” interview by Roland Groenenboom, in Tacita Dean (Barcelona, 2001), p. 104. I have not mentioned the sound tracks of Dean’s films. They are analogue, optical sound recordings of ambient, found sound edited in the studio. “Digital silence,” she complains, “has a deadness,” unlike “the prickled sound of mute magnetic tape” (Dean, “Artist Questionnaire: 21 Responses,” October, no. 100 [Spring 2002]: 26). 49. For an interesting discussion of this piece, see Marina Warner, “Tacita Dean: Light Drawing In,” in Gehen (Basel, 2006), p. 17. 50. Dean, An Aside, p. 4. 51. Dean, “A Conversation with Tacita Dean,” p. 91. 52.
Dean, “In Conversation with Tacita Dean,” interview by Marina Warner, in Jean-Christophe Royoux, Warner, and Germaine Greer, Tacita Dean (London, 2006), p. 44. 53. Dean, “Analogue,” p. 8. The reflections on analogue film contained in the catalog for Dean’s Tate Modern Turbine Hall Unilever Commission Film are pertinent but arrived too late for consideration here; see Dean, Tacita Dean: Film, ed. Nicholas Cullinan (exhibition catalog, Tate Modern, 2011).

work_gqliwxjxendrll6yp3a6n4omme ----

GUEST EDITORIAL

Special Section on Color Imaging: Device-Independent Color, Color Hardcopy, and Graphic Arts

Giordano Beretta, Hewlett-Packard Company, 1501 Page Mill Road, 3U-3, Palo Alto, California 94304. E-mail: beretta@hpl.hp.com

Reiner Eschbach, Xerox Corporation, Digital Imaging Technology Center, 800 Phillips Road, 128-27E, Webster, New York 14580. E-mail: eschbach@wrc.xerox.com

It is time to reconsider our research priorities. The veterans in our community started out in an age when the image processing tools in the publishing industry were graphic arts cameras and screens, pin registers, knives, and goldenrod paper. Their contribution was to invent the technologies that brought us from mechanical processes to electronic publishing: scanners, image enhancement, colorimetric color control, color management, perceptually lossless image compression, and digital halftoning among others. It is a sign of the exponential progress in science and technology that the same generation of scientists and engineers is able to contribute to a new paradigm shift, namely from electronic publishing to digital publishing. Digital publishing is a fully digital process, from end to end: cameras, scanners, processing, storage, retrieval, syndication, distribution, and rendering. Many underlying electronic imaging technologies remain the same.
However, the new fully digital process changes the emphasis of which goals are more important, i.e., it requires us to reconsider the priorities in our research programs and objectives. We briefly review the impact on some application areas.

Cameras

As it becomes digital, the technology for amateur cameras—USB peripherals—diverges quite substantially from the technology for professional cameras—workhorses. Professional digital cameras are now generally accepted as the most appropriate tool both in the studio—e.g., for catalog work—and in the field—e.g., sports and general reportage. Today’s professional digital cameras fulfill the needs of their users and the main challenge for electronic imaging is to reduce the device cost by an order of magnitude to bring it more in line with the cost of the old AgX-based technology. For the amateur market, we must revise our way of thinking. With the AgX technology, an amateur application is a scaled-down and simplified version of the professional application that can achieve more or less the same image quality. Over the last decade the photo amateur has been replaced by the photo consumer, and any digital photography application for the consumer must compete with disposable cameras. In the last couple of years we have seen the emergence of a new paradigm for consumer photography, namely the instantaneous and casual communication of candid images. In this new paradigm for consumer photography, digital cameras are peripherals for hand-held computers and have to obey the rules for such peripherals, namely be very small and cost as much as a mouse. Regardless of these two constraints, the digital consumer camera has to work anywhere—like a disposable camera—but without a flash, because batteries are too bulky. The requirements from the different application areas place emphasis on several electronic imaging research topics, among them dynamic range and color constancy.
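Color constancy, one of the research topics just mentioned, is often introduced through the classical gray-world heuristic: assume the average reflectance of a scene is achromatic, and rescale each channel so its mean matches the overall mean. The following minimal sketch is illustrative only (plain Python, names of my own choosing), not an algorithm described in this editorial:

```python
def gray_world(pixels):
    """Gray-world color constancy: scale each RGB channel so that its
    mean equals the overall mean intensity of the image."""
    n = len(pixels)
    # Per-channel means of the input image.
    means = [sum(p[c] for p in pixels) / n for c in range(3)]
    gray = sum(means) / 3.0
    # Gains that pull every channel mean onto the common gray level.
    gains = [gray / m if m else 1.0 for m in means]
    return [tuple(min(255.0, p[c] * gains[c]) for c in range(3)) for p in pixels]
```

Gray-world is known to fail on scenes dominated by a single color, which is one reason color constancy remained an open research topic for consumer cameras.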
Processing

Contrary to the spartan requirements for digital cameras, image processing benefits from a bonanza in desktop computer power. With 500 MHz processors on 100 MHz system busses and 128M bytes of RAM, today’s personal computers have bandwidth to spare after the user’s primary requirements have been fulfilled. This leaves considerable performance available for improved color imaging. New algorithms, more heavily based on non-linear methods and operating in more suitable representational spaces, can now be deployed even in such performance-critical system components as operating systems and device drivers. For example, if the optical properties for an inexpensive simple input device are well characterized, sophisticated algorithms can be used to restore images to unprecedented quality. One of the main tools for research in color imaging science is colorimetry.

Journal of Electronic Imaging / October 1999 / Vol. 8(4) / 329

Balancing the imaging pipeline becomes easier because the color information is colorimetric instead of consisting of device counts. The increased attention to colorimetry brings new challenges to instrument accuracy—better than just noticeable differences is necessary to balance a pipeline. Furthermore, imaging scientists need a better understanding of color science so they can correctly interpret the colorimetric data. This improved understanding of color will also allow imaging scientists to invent novel automatic image enhancement algorithms based on rendering intent instead of attempting exact color reproduction. This understanding will allow superior trade-off decisions that will make the systems more robust, and therefore simpler and easier to use.
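The step from device counts to colorimetric values can be illustrated with the standard sRGB-to-XYZ conversion (IEC 61966-2-1): undo the nonlinear transfer function, then apply the RGB-to-XYZ matrix for the D65 white point. A minimal sketch (the editorial does not prescribe this; function names are my own):

```python
def srgb_to_xyz(rgb):
    """Convert an 8-bit sRGB device triple to CIE XYZ (D65 white),
    i.e. from device counts to colorimetric values."""
    def linearize(c):
        # Undo the sRGB transfer function (IEC 61966-2-1).
        c /= 255.0
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (linearize(c) for c in rgb)
    # sRGB linear-RGB-to-XYZ matrix, D65 reference white.
    x = 0.4124 * r + 0.3576 * g + 0.1805 * b
    y = 0.2126 * r + 0.7152 * g + 0.0722 * b
    z = 0.0193 * r + 0.1192 * g + 0.9505 * b
    return x, y, z
```

With colorimetric values in hand, differences between pipeline stages can be measured against just-noticeable-difference thresholds rather than raw device counts.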
Copying a color image will no longer require a trained specialist; everybody will be able to reproduce their own images.

Intellectual property

On August 13, 1997, U.S. District Judge Sonia Sotomayor rendered a very important decision in Tasini et al. vs. New York Times et al. regarding copyrights (see http://www.nwu.org/nwu/tvt/tvtrule.htm). Her decision, based on section 201 (c) of the Copyright Act of 1976, which deals with the copyrights in collective works, reads that while the publisher of a collective work retains the copyrights for further publication in databases and CD-ROMs—she declared them revisions of the original work—authors retain the rights to individual contributions and may license them to World Wide Web publishers, without permission from or payment to the publisher. This is an opportunity for image syndication and will spur the need for electronic imaging technology. With perfect timing, electronic imaging researchers are delivering the appropriate technology—watermarking. From the perspective of digital publishing, watermarking is a hot problem, as is evident from the intensive publication activity. Stock agencies are early adopters of electronic imaging, so the protection of the author’s rights is a major priority. Watermarking facilitates the clearance of copyright royalties and allows the prosecution of thieves in case of unauthorized use of an image.

Scanning

A pressing problem that is attracting the attention of research in electronic imaging is the digitalization of the human cultural heritage. The documents created since the inception of digital publishing are only a minuscule fraction of the patrimony stored in archives. Originals are subject to various forms of decay and there is a certain urgency to digitize a huge number of documents. As much of the original information has to be preserved as possible, including pentimenti and documents under recoated media.
Some of these documents are in a bad state of disrepair and can be restored only in a digital form, because the originals are too frail.

Storage and retrieval

Capturing an image is only part of the problem. It is further necessary to catalog the image, so it can be later retrieved by searching for criteria or by navigating the space of all images. Required algorithms encompass the fields of pattern matching, object recognition, character recognition—including old typefaces and calligraphy—and reverse graphic editors to convert bitmap representations into vector representation, which is essential for the large number of geographic maps stored in archives. Specifying the iconography, i.e., the evolving semantic context of an image, is a very difficult task that requires extensive knowledge. To be useful for retrieving images from the World Wide Web, an iconography cannot be based on haphazardly assigned index words. Catalogers must be assisted by thesauri, taxonomies, and ontologies, which are essentially all the same artifact. These structures, also known as external intelligence, allow catalogers and image retrievers to navigate the set of images instead of performing flat keyword searches. Catalogers are aided by image classifiers, which in turn can use the iconographic structures to improve their accuracy. Image classifiers are also becoming increasingly important for color gamut mapping. Depending on the output device, the rendering algorithm has to map the colors in the image to a gamut that is considerably different from the range of colors in the image. It is necessary to distort the color in the image, but some fundamental memory colors such as complexion cannot be changed too much.

Output peripherals

Only a few years ago, printer MTF was so poor that images could be aggressively compressed using the example quantization tables from the JPEG standard specification. Today the quantization tables in lossy data compression algorithms must be designed very carefully to be perceptually lossless for the intended rendering devices and viewing conditions. This entails that the human visual system’s MTF can no longer be used as a weighting function for quantization tables. Instead, the MTF of each stage in the imaging pipeline must be taken as a visibility threshold that must be matched to the MTFs of all other stages. The increased use of images in documents poses new problems also for the choice of data compression algorithm classes for images. While JPEG and wavelets work well on full color images communicated over the Internet, these algorithms fail on halftoned images. The last cable to the printer has become a severe performance bottleneck and we need lossless compression methods like JBIG, which do not destroy the structure of halftones. When we design electronic imaging algorithms for digital publishing systems, we shall not forget that some of our technologies are taking a large human toll. As we noted earlier, our new technologies are the basis for tools allowing everyone to do their digital publications by themselves. The pre-press industry, which worldwide employs an amazing number of imaging experts working in small businesses, is on the verge of disappearance. While mechanical paste-up assembly and stripping have gone the way of the buggy whip trade, we should not waste the specific knowledge in the pre-press industry. These talents are very much needed in the new world of digital publishing, especially now that we have to reconsider our research priorities.
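For concreteness: in practice, the example quantization tables from the JPEG specification are usually rescaled by a "quality" factor. The scaling rule sketched below is the common libjpeg (IJG) convention — an assumption on my part, since the editorial does not name a particular scheme:

```python
def scale_quant(base, quality):
    """Scale one base quantization-table entry by a JPEG 'quality'
    factor, following the widely used IJG (libjpeg) convention."""
    quality = max(1, min(100, quality))
    # Quality 50 leaves the base table unchanged; lower is coarser.
    s = 5000 // quality if quality < 50 else 200 - 2 * quality
    # Round, then clamp to the legal 8-bit entry range [1, 255].
    return max(1, min(255, (base * s + 50) // 100))
```

At quality 50 a base entry passes through unchanged; at quality 100 every entry collapses to 1, i.e., essentially no quantization — which is why "perceptually lossless" table design has to consider the rendering device rather than a single quality knob.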
For example, a trained eye can take us a long way in end-to-end MTF optimization and choosing the most appropriate compression algorithms. The papers in this special section cover many of the topics mentioned in this introduction. A journal’s purpose is to facilitate and stimulate contacts and exchanges between established researchers and those new to the field, either because they are at the dawn of their career, they are broadening their expertise, or they are shifting their focus as job descriptions evolve. Sharing their latest results and ideas are the experts in color imaging, the people that are working on the disruptive technologies enabling the paradigm shift from electronic to digital publishing.

Giordano Beretta received his doctorate in computer science from the Swiss Federal Institute of Technology, Zurich, in 1984, and joined Xerox PARC that year. For his pioneering work in color imaging he received the 1989 Xerox Corporate Research Group Achievement Award. In 1990 he moved on to Canon, where as a senior scientist he was involved mainly in strategic planning and intellectual property management, while exercising his technical skills as the Technical Advisor for Color. Since 1994 he has been with the Computer Peripherals Laboratory at Hewlett-Packard. A strong believer in the social role of synergies and emergent properties, he is a tireless promoter of young scientists and engineers, helping them in their first professional steps; he also teaches short courses and organizes conferences. His skills as a speculative designer have translated into a number of patents and articles. He is a member of ISCC, IS&T, SMS, and SPIE.

Reiner Eschbach received his MS and PhD in physics from the University of Essen, Germany, in 1983 and 1986, respectively. From 1986 to 1988 he was a visiting scholar at the University of California, San Diego.
He joined Xerox in 1988 where he became a principal scientist at the Xerox Digital Imaging Technology Center in 1994. Dr. Eschbach holds more than 30 patents in the areas of image enhancement, halftoning, compression and color imaging. In 1994 he received the Xerox Eagle Award given to employees with the most patents issued for a given calendar year. Currently he is the project leader for the Image Science Group in the Color and Digital Imaging Systems Lab. His research interests include color image processing, digital halftoning and compression. He has published more than 20 papers in various peer reviewed journals and has given presentations at numerous conferences. He has also been actively involved in the planning and organization of several conferences in the US and abroad. He is the editor of the Recent Progress Series and he is serving on the Board of Directors of IS&T as Publications Vice President.

work_gstkhqsdvre5vmt7ndl3xq6wfq ----

Double linear regressions for single labeled image per person face recognition

Fei Yin a,*, L.C. Jiao a, Fanhua Shang b, Lin Xiong a, Shasha Mao a

a Key Laboratory of Intelligent Perception and Image Understanding of Ministry of Education of China, Xidian University, Mailbox 224, No.
2 South TaiBai Road, Xi’an 710071, PR China
b Department of Electrical and Computer Engineering, Duke University, Durham, NC 27708, USA

ARTICLE INFO

Article history: Received 12 March 2013; Received in revised form 10 September 2013; Accepted 19 September 2013; Available online 9 October 2013

Keywords: Semi-supervised dimensionality reduction; Label propagation; Sparse representation; Linear regressions; Linear discriminant analysis; Face recognition

ABSTRACT

Recently the underlying sparse representation structure in high dimensional data has received considerable attention in pattern recognition and computer vision. In this paper, we propose a novel semi-supervised dimensionality reduction (SDR) method, named Double Linear Regressions (DLR), to tackle the Single Labeled Image per Person (SLIP) face recognition problem. DLR simultaneously seeks the best discriminating subspace and preserves the sparse representation structure. Specifically, a Subspace Assumption based Label Propagation (SALP) method, which is accomplished using Linear Regressions (LR), is first presented to propagate the label information to the unlabeled data. Then, based on the propagated labeled dataset, a sparse representation regularization term is constructed via Linear Regressions (LR). Finally, DLR takes into account both the discriminating efficiency and the sparse representation structure by using the learned sparse representation regularization term as a regularization term of Linear Discriminant Analysis (LDA). The extensive and encouraging experimental results on three publicly available face databases (CMU PIE, Extended Yale B and AR) demonstrate the effectiveness of the proposed method. © 2013 Elsevier Ltd. All rights reserved.

1. Introduction

In many fields of scientific research such as face recognition [1], bioinformatics [2], and information retrieval [3], the data are usually presented in a very high dimensional form.
This makes researchers confront the problem of “the curse of dimensionality” [4], which limits the application of many practical technologies due to the heavy computational cost in high dimensional space, and deteriorates the performance of model estimation when the number of training samples is small compared to the number of features. In practice, dimensionality reduction has been employed as an effective way to deal with “the curse of dimensionality”. In the past years, a variety of dimensionality reduction methods have been proposed [5–10]. According to the geometric structure considered, the existing dimensionality reduction methods can be categorized into three types: global structure based methods, local neighborhood structure based methods, and the recently proposed sparse representation structure [11,12] based methods. Two classical dimensionality reduction methods, Principal Component Analysis (PCA) [13] and Linear Discriminant Analysis (LDA) [14], belong to global structure based methods. In the field of face recognition, they are known as “Eigenfaces” [15] and “Fisherfaces” [16]. Two popular local neighborhood structure based methods are Locality Preserving Projections (LPP) [17] and Neighborhood Preserving Embedding (NPE) [18]. LPP and NPE are named “Laplacianfaces” [19] and “NPEfaces” [18] in face recognition. The representative sparse representation structure based methods include Sparsity Preserving Projections (SPP) [20], Sparsity Preserving Discriminant Analysis (SPDA) [21] and Fast Fisher Sparsity Preserving Projections (FFSPP) [22]. They have also been successfully applied to face recognition. In order to deal with the nonlinear structure in data, most of the above linear dimensionality reduction methods have been extended to their kernelized versions which perform in Reproducing Kernel Hilbert Space (RKHS) [23].
Kernel PCA (KPCA) [24] and Kernel LDA (KLDA) [25] are the nonlinear dimensionality reduction methods corresponding to PCA and LDA. Kernel LPP (KLPP) [17,26] and Kernel NPE (KNPE) [27] are the kernelized versions of LPP and NPE. The nonlinear version of SPDA is Kernel SPDA [21].

Contents lists available at ScienceDirect. Journal homepage: www.elsevier.com/locate/pr. Pattern Recognition. 0031-3203/$ - see front matter © 2013 Elsevier Ltd. All rights reserved. http://dx.doi.org/10.1016/j.patcog.2013.09.013. * Corresponding author. Tel.: +86 29 88202661; fax: +86 29 88201023. E-mail address: yinfei701@163.com (F. Yin).

One of the major challenges to appearance-based face recognition is recognition from a single training image [28,29]. This problem is called the “one sample per person” problem: given a stored database of faces, the goal is to identify a person from the database later in time in any different and unpredictable poses, lighting, etc. from just one image per person [28]. Under many practical scenarios, such as law enforcement, driver license and passport card identification, in which there is usually only one labeled sample per person available, the classical appearance-based methods including Eigenfaces and Fisherfaces suffer a big performance drop or tend to fail to work. LDA fails to work since the within-class scatter matrix degenerates to a zero matrix when only one
sample per person is available. Zhao et al. [30] suggested replacing the within-class scatter matrix with an identity matrix to make LDA work in this setting, although the performance of this Remedied LDA (ReLDA) is still not satisfying. Due to its importance and difficulty, the one sample per person problem has aroused much interest in the face recognition community. To attack this problem, many ad hoc techniques have been developed, including synthesizing virtual samples [31,32], localizing the single training image [33], probabilistic matching [34] and neural network methods [35]. More details on the single training image problem can be found in a recent survey [28]. With the fast development of the digital photography industry, it is possible to have a large set of unlabeled images. In this background, a more natural and promising way to attack the one labeled sample per person problem is semi-supervised dimensionality reduction (SDR). Semi-supervised Discriminant Analysis (SDA) [29] is one SDR method which has been successfully applied to single labeled image per person face recognition. SDA first learns the local neighborhood structure using the unlabeled data and then uses the learned local neighborhood structure to regularize LDA to obtain a discriminant function which is as smooth as possible on the data manifold.
Laplacian LDA (LapLDA) [36], Semi-supervised LDA (SSLDA) [37], and Semi-supervised Maximum Margin Criterion (SSMMC) [37] are all reported semi-supervised dimensionality reduction methods which can improve the performance of their supervised counterparts like LDA and Maximum Margin Criterion (MMC) [38]. These methods consider the local neighborhood structure and can be unified under the graph embedding framework [37,39]. Despite the success of these SDR methods, there are still some disadvantages: (1) these SDR methods are based on the manifold assumption which requires sufficiently many samples to characterize the data manifold [40]; (2) the adjacency graphs constructed in these methods are artificially defined, which brings the difficulty of parameter selection of neighborhood size and edge weights. To resolve these issues, Sparsity Preserving Discriminant Analysis (SPDA) [21] was presented. SPDA first learns the sparse representation structure through solving n (number of training samples) ℓ1 norm optimization problems, and then uses the learned sparse representation structure to regularize LDA. SPDA has achieved a good performance on single labeled image per person face recognition, but it still has some shortages: (1) it is computationally expensive since n ℓ1 norm optimization problems need to be solved in learning the sparse representation structure and (2) the label information is not taken advantage of in learning the sparse representation structure.
Then, based on the propagated labeled dataset, a sparse representation regularization term is constructed via Linear Regressions (LR). Finally, DLR takes into account both the discriminat- ing efficiency and the sparse representation structure by using the learned sparse representation regularization term as a regularization term of linear discriminant analysis. It is worthwhile to highlight some aspects of DLR as follows: (1) DLR is a novel semi-supervised dimensionality reduction method aiming at simultaneously seeking the best discriminating sub- space and preserving the sparse representation structure. (2) DLR can obtain the sparse representation structure via n small class specific linear regressions. Thus, it is more time efficient than SPDA. (3) In DLR, label information is first propagated to all the training set. Then it is used in learning a more discriminative sparse representation structure. (4) Unlike SDA, there are no graph construction parameters in DLR. The difficulty of selecting these parameters is avoided. (5) Our proposed label propagation method SALP is quite general. It can be combined with other graph-based SDR methods to construct a more discriminative graph. The rest of the paper is organized as follows. Section 2 gives a brief review of LDA and RDA. DLR is proposed in Section 3. DLR is compared with some related works in Section 4. The experimental results and discussions are presented in Section 5. Finally, Section 6 gives some concluding remarks and future work. 2. A brief review of LDA and RDA Before we go into the details of our proposed DLR algorithm, a brief review of LDA and RDA is given in the following. 2.1. LDA Given a set of samples fxigni ¼ 1, where xi Aℝm, let X ¼ ½x1; x2; …; xn�Aℝm�n be the data matrix consisting of all samples. Suppose samples are from K classes. LDA aims at simultaneously maximizing the between-class scatter and minimizing the within- class scatter. 
The objective function of LDA is defined as follows:

$$\max_{w} \frac{w^{T} S_{B} w}{w^{T} S_{W} w}, \qquad (1)$$

$$S_{B} = \sum_{k=1}^{K} N_{k}\,(m - m_{k})(m - m_{k})^{T}, \qquad (2)$$

$$S_{W} = \sum_{k=1}^{K} \sum_{i \in C_{k}} (x_{i} - m_{k})(x_{i} - m_{k})^{T}, \qquad (3)$$

where $m_k = \frac{1}{N_k}\sum_{i \in C_k} x_i$, $m = \frac{1}{n}\sum_{i=1}^{n} x_i$, $C_k$ is the index set of samples from class $k$, and $N_k$ is the number of samples in class $k$. $S_B$ is called the between-class scatter matrix and $S_W$ is called the within-class scatter matrix. The optimal $w$ can be computed as the eigenvector of $S_W^{-1} S_B$ that corresponds to the largest eigenvalue [14]. When there is only one labeled sample per class, LDA fails to work because $S_W$ is a zero matrix. The Remedied LDA (ReLDA) for the one labeled sample per class scenario was proposed by Zhao et al. [30], in which $S_W$ is replaced by an identity matrix.

2.2. RDA

Despite its simplicity and effectiveness for classification, LDA suffers from the Small Sample Size (SSS) problem [41]. Among the methods designed to attack this problem, Regularized Discriminant Analysis (RDA) [42,43] is a simple and effective one, whose objective function is defined as follows:

$$\max_{w} \frac{w^{T} S_{B} w}{w^{T} S_{W} w + \lambda_{1} w^{T} w} \qquad (4)$$

where $\lambda_1$ is the tradeoff parameter. The optimal $w$ can be computed as the eigenvector of $(S_W + \lambda_1 I)^{-1} S_B$ that corresponds to the largest eigenvalue.

work_gt747crvevavrdhriscf2vrv4m ----

A review of the monotypic genus Chilelimnophila Alexander (Diptera: Tipulomorpha: Limoniidae)

Guilherme Cunha Ribeiro1

ABSTRACT

A review of the monotypic genus Chilelimnophila Alexander (Diptera: Tipulomorpha: Limoniidae). The Chilean species Chilelimnophila lyra (Alexander, 1952), the only included species in the genus Chilelimnophila Alexander, 1968, is redescribed on the basis of new available material from adjacencies of the type locality and other areas. Misinterpretations from the original description are corrected.
Adult anatomical structures are described and illustrated in detail, including information on the previously unknown female. Better grounds for the recognition of the taxon are provided.

Keywords: Chilelimnophila lyra, crane flies, taxonomy, Chile, Neotropical Region.

Volume 47(18):203-211, 2007

1. Departamento de Biologia (FFCLRP), Universidade de São Paulo. Avenida Bandeirantes, 3900, 14040-901, Ribeirão Preto, SP, Brasil. E-mail: ribeirogc@hotmail.com

INTRODUCTION

Alexander (1952) described Limnophila lyra from Chile, referring to a single specimen from Curacautín (Malleco Province) designated as the holotype. Alexander (1952) considered this fly as a very distinctive species. Its original placement in Limnophila was admittedly provisional as the distinctive morphology of the antenna and male terminalia would require the erection of a new higher group for it. Alexander (1968) then erected for the inclusion of Limnophila lyra the genus Chilelimnophila. The heterogeneity and artificiality of Limnophila has been previously recognized by Alexander (e.g., Alexander, 1924, 1929), and the proposal of a monotypic genus for this distinctive species seemed fully justified. A more rigorous demonstration of Limnophila paraphyly (Ribeiro, 2006a; Ribeiro, 2007) corroborates this decision. Alexander (1968) mentioned the availability of more specimens from Chiloé Island without indicating how many and did not add substantive information on the species morphology. During a research visit by the author to the National Museum of Natural History of the Smithsonian Institution, Washington DC, USA (USNM) in 2004, an effort of examining samples of undetermined Chilean crane flies in the Alexander Collection resulted in the finding of additional material of Chilelimnophila lyra. These specimens, coming from two different localities (including a locality adjacent to Curacautín, the type locality), add considerably to the number of known specimens of this interesting and poorly known species. These specimens allow for a redescription of the taxon, with a study on its morphology which is more detailed than those by Alexander (1952, 1968). This redescription is aimed to provide better grounds for the recognition of Chilelimnophila (and its single included species).
A more rigorous demonstration of Limnophila para‑ phyly (Ribeiro, 2006a; Ribeiro, 2007) corroborates this decision. Alexander (1968) mentioned the avail‑ Volume 47(18):203-211, 2007 Departamento de Biologia (FFCLRP), Universidade de São Paulo. Avenida Bandeirantes, 3900, 14040‑901, Ribeirão Preto, SP, Brasil. E‑mail: ribeirogc@hotmail.com 1. It may also be useful in the eventual discovery of new taxa related to C. lyra, and for future comparative studies. MATeRIAL AnD MeThoDs The studied specimens belong to the Alexan‑ der Collection of Crane Flies housing at USNM. Descriptive terminology follows McAlpine (1981) for most characters. The adopted terminology for the wing veins is shown in Figure 5. The terminology for the structures of the male gonostylus is in accordance with Ribeiro (2006b). Dissections of the head, thorax, male and female terminalia were cleared in warmed KOH and mounted for study in non‑permanent slides with glycerol. After study and illustration the dissected structures were transferred to microvials with glycerol and pinned with their corresponding specimens. Illustrations were made with a drawing tube attached to a com‑ pound microscope. Measurements were taken with an ocular reticule. Photographs were taken with a digi‑ tal photography system attached to both stereoscopic and compound microscopes. Details on the examined specimens are as follows (label information in italics; information of different labels separated by a verti‑ cal line; precise information on locality within square brackets): Holotype: Male. Chile, Curacautín, Dec. 14. 50, Peña. (pin label); Chile-Malleco, Curacautín, 400 m, XII-14, 50 (Luis E. Peña) (slide label) [Chile, Malleco, Cura‑ cautín, ca. 38°27’S 71°53’W]. Preservation. Pinned parts: head with left antenna; thorax with both midlegs and right hindleg. Slide mounted parts: right antenna; wings; 2 legs; abdo‑ men; terminalia. 
Non-type specimens identified by Charles Paul Alexander: 1 male (pinned, with terminalia mounted in slide) and 1 female (pinned). Chile, Arauco, Nahuelbuta, Butamalal, 1100-1400 m, I-23-31 '54, Peña. | Chilelimnophila lyra Al. Det. C.P. Alexander [Chile, Arauco, Cordillera de Nahuelbuta, Butamalal, ca. 37°54'S 73°12'W]; 2 males (pinned, with terminalia mounted in slide). Chile, Arauco, Nahuelbuta, Pichinahuel, 1100-1400 m, I-23/31 '54, Peña. | Chilelimnophila lyra Al. Det. C.P. Alexander [Chile, Arauco, Cordillera de Nahuelbuta, Pichinahuel, ca. 37°47'S 73°02'W]; 3 males (mounted in slide). Chilelimnophila lyra (Alex), Chile, Chiloe I. Dalcahue, Feb. 1954, (L.E. Peña) [Chile, Chiloé Island, Dalcahue, ca. 42°23'S 73°40'W]. Note: Only the specimens from Chiloé Island were referred to by Alexander (1968), who erroneously indicated them as having been collected in 1945.

Specimens found in the USNM collection: 12 males and 2 females. CHILE: Curacautín, Rio Blanco, 12-20 January, 1959 | Chilelimnophila lyra Alex., G C Ribeiro det 2007 [Chile, Malleco, Curacautín, Rio Blanco, ca. 38°27'S 71°53'W]; 9 males. CHILE: Cord. Nahuelbuta, Pichinahuel, 1-10 Jan. 1959 | Chilelimnophila lyra Alex., G C Ribeiro det 2007 [Chile, Arauco, Cordillera de Nahuelbuta, Pichinahuel, ca. 37°47'S 73°02'W]. Note: The labels of these specimens do not include information on the collector. Based on the collecting dates and localities, it can be assumed that the collector was L.E. Peña (Oliver Flint, personal communication).

Genus Chilelimnophila Alexander, 1968

Chilelimnophila Alexander, 1968: 23. Type species: Limnophila lyra Alexander, 1952.
Diagnosis: Chilelimnophila can be recognized by the following characters combined: first flagellomere ovoid, constricted at base; first three to four flagellomeres partially fused in the male, ovoid and not fused in the female; tibial spurs covered with tiny hairs; clasper of gonostylus glabrous, bifid and with its distalmost branch serrated; lateral process of aedeagal sheath long, narrow and acute, strongly sclerotized, twisted at apex.

Chilelimnophila lyra (Alexander, 1952)
(Figs. 1-13)

Chilelimnophila lyra (Alexander 1952): 108-109 (original description); 118, figure 7 (male terminalia). Alexander (1968): 24, figures 2 (wing) and 7 (male terminalia). Alexander & Alexander (1970): 4.93 (catalogue citation). Oosterbroek (2007) (catalogue citation).

Redescription

Coloration (male and female): Head brownish yellow, generally darker than thorax; thorax and legs light brown-yellow; abdomen brownish, darker than thorax, with tergum darker than sternum; last abdominal segment darker than previous segments.

Dimensions (male; maximum lengths and widths in millimeters): Head length, 0.40-0.60; head width, 0.35; wing length, 5.87-6.25 (5.87 in holotype); wing width, 1.62-1.70 (1.70 in holotype); gonocoxite length, 0.28-0.31 (0.28 in holotype); clasper of gonostylus length, 0.21-0.24 (0.24 in holotype); lobe of gonostylus length, 0.14-0.17 (0.14 in holotype).

FIGURES 1-2: Chilelimnophila lyra, male head. 1, lateral view. 2, ventral view. Abbreviations: comp eye, compound eye; lbl, labella; plp, maxillary palpus.

Papéis Avulsos de Zoologia, 47(18), 2007

Morphology (male and female): Head and appendages (Figures 1-2, 4-5): Flagellum 14-segmented; scape cylindrical, ca. 1.75 X longer than wide; pedicel ovoid, ca.
1.2 X longer than wide; scape and pedicel similar in length; first flagellomere ovoid, constricted at base; first three to four flagellomeres as long as wide and partially fused in the male, ovoid and not fused in the female; flagellomere length/width ratio gradually increasing toward tip of antenna, with last flagellomere longer than preceding; maxillary palpus 4-segmented; first palpomere more or less cylindrical, ca. 2.2 X longer than wide; other palpomeres ovoid, shorter than the first; compound eyes widely separated dorsally, meeting at median line ventrally; rostrum (including labella) ca. 0.4 X the length of head capsule. Thorax (Figure 3): almost as long as high; pleural sclerites as figured. Leg (Figure 8): tibial spurs (1:2:2) ca. 0.66 X the width of distal part of tibia, covered with tiny hairs; tarsal claws simple, smooth. Wing (Figure 6): h vein situated between the origin and the fork of M+Cu; Sc ending on C at the level of bifurcation of Rs and A1; position of sc-r variable, more or less near the tip of Sc; r-r linking R1 to R2 faint or lacking; Rs almost straight, originating well proximally to the level of the tip of A2; Rs three branched; R2+3 ca. 0.46 X the length of R3; R2 running more or less parallel to R3 in most of its length, turning upwards abruptly at tip; section of R4+5 between its origin and point of contact with r-m curved; section of R4+5 distal to point of contact with r-m almost straight; r-m straight, similar to m-cu in length; M four branched; M1+2 long, ca. 3 X the length of M2; M1 ca. 1.37 X the length of M2; M3 sinuous; M4 almost straight; m-m ca. 0.20 to 0.40 X the length of r-m; m-cu attached to M3+4 near mid-length of discal cell; A1 almost straight; A2 slightly curved at tip. Male terminalia (Figures 7, 9, 10-12): posterior margin of tergite 9 produced into two small lobes; gonocoxite conical, gradually narrowed toward tip, bearing a ventromedial extension; gonostylus terminal; lobe of gonostylus ca. 3.25 X longer than wide, gradually narrowed toward tip; clasper of gonostylus ca. 5.6 X longer than wide, glabrous, bifid (largest and distalmost bifurcation serrated); aedeagus relatively long, reaching the level of the gonostylus insertion; lateral process of aedeagal sheath long, narrow and acute, strongly sclerotized, twisted at apex; interbase blade-like, rounded at apex, bearing a stout lateral extension articulating with paramere, and a longer, more slender medial extension. Female terminalia (Figure 13): tenth tergite ovoid, ca. 2 X longer than ninth tergite; hypogynial valve with a more or less ovoid and less sclerotized internal area ranging from its midlength to its apex; apex of hypogynial valve reaching midlength of cercus, bearing slender bristle-like filaments.

FIGURE 3: Chilelimnophila lyra, male thorax, lateral view. Abbreviations: anepm, anepimeron; anepst, anepisternum; anepst cleft, anepisternal cleft; aprn, antepronotum; cerv scl, cervical sclerite; cx, coxa; kepm, katepimeron; kepst, katepisternum; ltg, laterotergite; mtepm, metaepimeron; mtkepst, metakatepisternum; mtanepst, metanepisternum; mr, meron; mtg, mediotergite; pltr2, mesothoracic pleurotrochantin; pltr3, metathoracic pleurotrochantin; pprn, postpronotum; sct, scutum; sctl, scutellum.

FIGURES 4-5: Chilelimnophila lyra, antenna. 4, proximal part of male antenna, showing partial fusion of first four flagellomeres (I-IV). 5, proximal part of female antenna. Abbreviations: ped, pedicel.

FIGURE 6: Chilelimnophila lyra, wing venation.
Pilosity: antenna with verticils longer than individual flagellomeres; wing with macrotrichia all along longitudinal veins.

Remarks: The rounded structure indicated by Alexander (1952: 118, figure 7; 1968: 24, figure 7) as the ninth tergite is actually the ninth sternite: the posterior margin of the ninth tergite is not rounded, but produced into two small lobes. The three-dimensional structure of the clasper of gonostylus is relatively complex. Although some variation may occur in the relative lengths of its two apical extensions, in flattened slide-mounted specimens (Figure 9) the apex of the clasper may be distorted in different ways, giving a false impression of variation. Such a distortion has probably driven Alexander's (1968: 23) description of the apical part of this structure as "expanded into a triangular blade". Sexual dimorphism is noticeable in the structure of the antenna. In the male, the first three to four flagellomeres are partially fused (Figure 4), while in the female, they are more ovoid than in the male and not fused (Figure 5).

Distribution: As far as known, Chilelimnophila lyra has a restricted geographical distribution in Chile, ranging latitudinally from ca. 37°S (northernmost limit at the Cordillera de Nahuelbuta) to 42°S (southernmost limit at Chiloé Island) within the Subantarctic Biogeographical Province in South America.

RESUMO

Revisão do gênero monotípico Chilelimnophila Alexander (Diptera: Tipulomorpha: Limoniidae). Chilelimnophila lyra (Alexander, 1952), a única espécie do gênero Chilelimnophila Alexander, 1968, do Chile, é redescrita com base em novos espécimes de áreas próximas à localidade tipo e outras áreas. Interpretações incorretas da descrição original são corrigidas. As estruturas anatômicas do imago são descritas e ilustradas em detalhe, incluindo informações sobre as fêmeas, até então desconhecidas. Melhores subsídios para o reconhecimento do táxon são fornecidos.

Palavras-chave: Chilelimnophila lyra, taxonomia, Chile, Região Neotropical.

FIGURE 7: Chilelimnophila lyra, photograph of male terminalia, dorsal view. Abbreviations: aed, aedeagus; cgonst, clasper of gonostylus; goncx, gonocoxite; interb, interbase; lgonst, lobe of gonostylus; lp, lateral process of aedeagal sheath; pm, paramere; t9, ninth tergite.

FIGURES 8-9: Chilelimnophila lyra, male. 8, articulation of tibia with tarsus (foreleg) showing tibial spur (arrow). 9, dorsal aspect of gonostylus in flattened, permanent slide-mounted specimen. Abbreviations: cgonst, clasper of gonostylus; lgonst, lobe of gonostylus.

FIGURES 10-12: Chilelimnophila lyra, male terminalia. 10, general aspect of terminalia, dorsal view. 11, aedeagus and associated structures, dorsal view. 12, general aspect of terminalia, lateral view. Abbreviations: aed, aedeagus; aed apod, aedeagus apodeme; cgonst, clasper of gonostylus; goncx, gonocoxite; interb, interbase; lgonst, lobe of gonostylus; lp, lateral process of aedeagal sheath; pm, paramere; s9, ninth sternite; t9, ninth tergite.

ACKNOWLEDGEMENTS

I thank Dr. Wayne Mathis for supporting my research at the National Museum of Natural History, Smithsonian Institution, and for the loan of several specimens used in this study. During the early phases of this study, I was financially supported by a PhD fellowship from FAPESP (grant # 02/13613-6). My research visit to the National Museum of Natural History, Washington DC was supported by a fellowship from CAPES. By the time this paper was sent for publication, I was financially supported by a Post-Doc Fellowship from FAPESP (grant # 07/50696-0).

REFERENCES

Alexander, C.P. 1924. New or little-known Tipulidae (Diptera). XXI. Australasian species. Annals and Magazine of Natural History, 9(13):359-380.

Alexander, C.P. 1929. Diptera of Patagonia and South Chile. Part I.
Crane-flies (Tipulidae, Trichoceridae, Tanyderidae). British Museum (Natural History), London, p. 1-240.

Alexander, C.P. 1952. New or insufficiently-known crane-flies from Chile (Family Tipulidae, Order Diptera). Part IV. Agricultura Técnica, 11:99-118.

Alexander, C.P. 1968. New or little-known Tipulidae from Chile and Peru (Diptera: Tipulidae). Revista Chilena de Entomología, 6:21-36.

Alexander, C.P. & Alexander, M.M. 1970. 4. Family Tipulidae. In: Departamento de Zoologia, Secretaria da Agricultura do Estado de São Paulo, A Catalogue of the Diptera of the Americas South of the United States, Museu de Zoologia, Universidade de São Paulo, São Paulo, p. 4.1-4.259.

McAlpine, J.F. 1981. Morphology and terminology – adults. In: McAlpine, J.F.; Peterson, B.V.; Shewell, G.E.; Teskey, H.J.; Vockeroth, J.R. & Wood, D.M. (Coordinators), Manual of Nearctic Diptera. Vol. 1. Research Branch, Agriculture Canada, Monograph 27, p. 9-63.

Oosterbroek, P. 2007. Catalogue of the Craneflies of the World (Insecta, Diptera, Nematocera, Tipuloidea). Available at: <http://ip30.eti.uva.nl/ccw/>. Access in: 2/May/2007.

Ribeiro, G.C. 2006a. A cladistic study of the Limnophilinae (Limoniidae) using adult male characters: some light shed on the evolution of Tipulomorpha. In: 6th International Congress of Dipterology. Abstracts, p. 205-206.

Ribeiro, G.C. 2006b. Homology of the gonostylus parts in crane flies, with emphasis on the families Tipulidae and Limoniidae (Diptera, Tipulomorpha). Zootaxa, 1110:47-57.

Ribeiro, G.C. 2007. Filogenia dos Limnophilinae (Limoniidae) e evolução basal dos Tipulomorpha (Diptera). (Dissertação de Mestrado), Universidade de São Paulo, Ribeirão Preto.

Recebido em: 04.05.2007
Aceito em: 03.08.2007
Impresso em: 21.12.2007

FIGURE 13: Chilelimnophila lyra, female ovipositor. Abbreviations: cerc, cercus; hyp vlv, hypogynial valve; s8, eighth sternite; t8-10, eighth to tenth tergites.
work_gtc24tjtzzhdff532c6vxv3yzi ----

Sci. Dril., 19, 13–16, 2015
www.sci-dril.net/19/13/2015/
doi:10.5194/sd-19-13-2015
© Author(s) 2015. CC Attribution 3.0 License.

Technical Developments

An innovative optical and chemical drill core scanner

A. S. L. Sjöqvist1, M. Arthursson1, A. Lundström1, E. Calderón Estrada1, A. Inerfeldt1, and H. Lorenz2

1Minalyze AB, Industrivägen 4, 433 61 Sävedalen, Sweden
2Department of Earth Sciences, Uppsala University, Villavägen 16, 752 36 Uppsala, Sweden

Correspondence to: A. S. L. Sjöqvist (axel.sjoqvist@minalyze.com)

Received: 16 February 2015 – Revised: 9 April 2015 – Accepted: 13 April 2015 – Published: 29 May 2015

Abstract. We describe a new innovative drill core scanner that semi-automatedly analyses drill cores directly in drill core trays with X-ray fluorescence spectrometry, without the need for much sample preparation or operator intervention. The instrument is fed with entire core trays, which are photographed at high resolution and scanned by a 3-D profiling laser. Algorithms recognise the geometry of the core tray, number of slots, and location of the drill cores, calculate the optimal scanning path, and execute a continuous XRF analysis of 2 cm width along the core. The instrument is equipped with critical analytical components that allow an effective QA/QC routine to be implemented. It is a mobile instrument that can be manoeuvred by a single person with a manual pallet jack.

1 Introduction

Rapid decision-making and a continuous re-evaluation are the key to successful commercial and scientific drilling projects. Geochemical information on drill cores is important base information for many projects and crucial for certain applications, like exploration and mining drilling. A major drawback is that performing geochemical analyses takes time.
Thus, expensive rig time is often spent while waiting for results, and decisions might be made with only incomplete background information at hand. An in-depth analysis of the drilling industry has revealed that access to even rudimentary geochemical information at an early stage after drilling has substantial advantages for the planning and implementation of the subsequent scientific and analytical work.

A concept instrument was envisioned, which would analyse drill cores with a non-destructive methodology and minimal interference with other on-site work in terms of time and labour. Since drill cores are handled in core trays with multiple slots, the processing of whole core trays (compared to individual sections) is an efficient way to analyse great lengths of drill core. The first step was to prove the reliability of using an in situ non-destructive analytical technology on drill core surfaces. The prototype instrument (for 0.5 m long sections) demonstrated, with the successful analysis of more than 22 km of drill core, that energy-dispersive X-ray spectrometry on the drill core surface is a viable approach. After this proof of concept, the construction of the first complete system commenced in October 2013.

2 Instrument specifications and capabilities

The semi-automated new drill core scanner is built with flexibility in mind (Fig. 1). More than anything else, the instrument is a platform that handles drill cores in core trays. The methodology of processing entire drill core trays is patented and thus unique. The composition of the drill core tray does not matter. Exactly which sensors and attachments are coupled to that platform depends on specific project needs and advances in analytical technology. The current set-up is described below and summarised in Table 1.

Figure 1. The new instrument described is a drill core scanner that intelligently handles entire drill core trays to produce chemical analyses of drill cores.

Table 1. Technical specifications of the new instrument.

Dimensions: 1.8 m × 1.3 m × 1.2 m (L × W × H)
Mass: ca. 1 t
Electrical supply: three-phase 400 V, 16 A
Power consumption: ca. 3 kW
Cooling: external, quick-connect fittings
Photography, line: up to 10 px mm−1, 8 bit lossless TIFF
3-D profiling, line: 1 mm × 0.34 mm × 17 µm (L × W × H)
Chemical analysis method: ED-XRF
Detection range, in air: (Mg), Al–U
Spatial resolution: down to 1 mm
Normal throughput: 15–20 m h−1

Published by Copernicus Publications on behalf of the IODP and the ICDP.

2.1 Digital photography and 3-D scanning

The high-resolution RGB line scan camera produces digital photo documentation of the drill cores and trays. Digital images are stored in 8 bits per channel lossless TIFF and can have a pixel resolution of up to 10 px mm−1.

For reliable analytical results it is of the highest importance to have accurate information about the location and geometry of the samples. For this purpose a 3-D model of the core tray with its contents is created from a laser scan, performed at the same time as RGB imaging. Algorithms developed by us calculate the geometry of the core tray, the number of slots, the location and geometry of drill core pieces, and the optimal path to scan without colliding the detector, which should be kept at a constant distance to the drill core.

Secondary benefits of having detailed information about the drill core geometry are that the drill core length is measured and cracks are semi-automatically identified, which is useful for geotechnical purposes, e.g. RQD (rock quality designation) and fracture frequency.

2.2 XRF analysis

In situ non-destructive chemical analyses of the drill cores are acquired through X-ray fluorescence (XRF) analysis by energy-dispersive spectrometry (EDS), using a high-quality silicon drift detector (SDD).
A partial vacuum between the irradiated sample surface and the detector window protects the sensitive Be detector window from dust contamination and also reduces attenuation by the air in the lower energy range of the X-ray spectrum, thus enabling the detection of elements down to Al or Mg.

X-ray tubes with different anode target materials are available, e.g. Cr, Mo, and Ag. The selection of anode material depends on the project-specific analytical preferences. The X-ray beam is collimated to a linear beam that is 2 cm wide and 1 mm thick perpendicular to the drill core axis.

Table 2. Specifications of the analytical parameters used for scanning the COSC-1 drill core with the new instrument.

X-ray tube anode: Cr
Voltage: 40 kV
Current: 20 mA
Elemental suite: Al, Si, P, S, Cl, K, Ca, Ti, Fe, Cu, Zn, Ga, Rb, Sr, Y, Zr, Nb, Pb
Scanning speed: 10 mm s−1
Analysis resolution: 0.1 m

Scanning is performed in a continuous motion, and the data over a certain length are integrated. The scanning speed and integration length depend on the user's requirements for analysis precision and resolution, and the drill core's chemical composition. Confident detection of the typical major and minor elements of interest (cf. Table 2) is usually achieved with a real-time analysis of 10 s. With a typical scanning speed of 1 cm s−1 this corresponds to a distance of 10 cm. The analytical parameters, the scanning speed, and the integration length for the analysis need to be evaluated and adjusted for each project. Typical throughput is of the order of two to four drill core trays per hour, depending on the number of slots and the complexity of the scanning path, which for a six-slot drill core tray means an effective scanning throughput of approximately 15–20 m h−1.

The instrument is a stand-alone system. No additional external processing or storage capabilities are required. Chemical analyses are processed in real time and can be displayed on the screen while scanning.
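The quoted integration distance and throughput follow directly from the stated scanning parameters. The sketch below recomputes them as a cross-check; the assumption of roughly 1 m of core per tray slot is hypothetical and not stated in the article.

```python
# Cross-check of the scanning arithmetic quoted above.
scan_speed_mm_s = 10.0   # Table 2: scanning speed, 10 mm/s (= 1 cm/s)
integration_s = 10.0     # stated real-time analysis of 10 s per reading

# Distance integrated per analysis window: 10 mm/s * 10 s = 100 mm = 10 cm.
integration_len_cm = scan_speed_mm_s * integration_s / 10.0

# Upper bound if the detector head scanned continuously with no overhead:
raw_m_per_h = scan_speed_mm_s / 1000.0 * 3600.0  # metres per hour

# Effective throughput from the quoted 2-4 six-slot trays per hour,
# assuming (hypothetically) about 1 m of core per slot:
metres_per_slot = 1.0
eff_low = 2 * 6 * metres_per_slot
eff_high = 4 * 6 * metres_per_slot

print(integration_len_cm, raw_m_per_h, eff_low, eff_high)
```

The resulting 12–24 m h−1 bracket is consistent with the quoted effective throughput of 15–20 m h−1, the gap to the 36 m h−1 raw rate being tray handling and path overhead.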
The X-ray beam location and intensity are monitored throughout the scanning process and logged to monitor instrument drift. Two sample holders for pressed pellets or glass pucks allow easy analytical calibration and detector drift measurements with matrix-matched certified reference materials, or analyses of blank samples. This provides easy access to critical components for performing effective QA/QC (quality assurance/quality control) routines during an analytical campaign.

2.3 Operation, connectivity, mobility, and safety

The entire instrument runs off a single three-phase plug (400 V, 16 A) and consumes approximately 3 kW. In remote areas, a diesel generator can deliver enough power. Use of a back-up uninterruptible power supply (UPS) is recommended to ensure operational stability. The instrument is conveniently operated by the resistive touch screen, which can be used with any type of safety gloves.

Front-end connectivity includes a 230 V socket, USB ports, and an Ethernet port. The user can connect the devices that are most convenient at the moment, whether a USB hard disk, mouse and keyboard, Wi-Fi antenna, or 4G mobile Internet dongle.

The bottom plate hosts furrows that allow the instrument to be lifted and moved around by a forklift truck or manual pallet jack by one person. The total mass of the system is approximately 1 t.

For the sake of flexibility, the X-ray tube water-cooling system is external and connected to the instrument by quick-connect fittings. One could choose to locate the cooling system in another room or, in hot climates, to upgrade to a larger system.

The radiation-blocking protective shell consists of two layers of 3 mm thick steel, and the window is made of lead glass. The instrument meets all necessary radiation safety requirements and is completely safe to work with.
3 Geoscientific applications Rock drill cores are the most tangible representations of un- exposed subsurface geology. However, drilling is expensive and the amount of sample very limited. Multiple investiga- tions on the drill core require that the number of destructive analyses is limited to the absolutely necessary. A quick and non-destructive method for obtaining geochemical informa- tion, like XRF scanning, is therefore an asset. The instru- ment, with its combination of digital photography, 3-D pro- filing, and chemical analyses, effectively creates a digital rep- resentation of the drill core, which then can be evaluated in its undisturbed original form in a virtual drill core archive. Availability of chemical information early in the process of drill core processing greatly facilitates the geological doc- umentation, making drill core logging more objective and less dependent on the individual geologist’s experience and best judgement. This is of particular importance for subse- quent and advanced studies that utilise the base scientific documentation of a project. While researchers have performed tests to analyse unpre- pared rock drill cores by XRF for a long time (Carlsson and Akselsson, 1981), in recent years unprepared rock drill cores have almost exclusively been analysed by portable/handheld XRF instruments. Portable/handheld XRF instruments are gaining popularity and have been applied widely and suc- cessfully to analyse drill cores in a non-destructive way (e.g. Gazley et al., 2011, 2012; Fisher et al., 2014; Le Vaillant et al., 2014; Ross et al., 2014). Advantages of the new instru- ment over handheld XRF instruments are a better and more representative coverage (continuous scanning vs. point anal- yses), reduced labour, standardised analytical conditions, in- tegrated routines, and advanced data handling. 
4 Preliminary high-resolution chemical data from COSC-1 During October–November of 2014, the entire COSC-1 drill core (Collisional Orogeny in the Scandinavian Caledonides ICDP; cf. Lorenz et al., 2015) was scanned with the new D ril le r’s de pt h [m ] SiO2 0 100% P2 5O 0 0.1% S 0 3% Cl 0 0.1% K2O 0 6% CaO 0 40% Al2O3 0 20% TiO2 0 3% Fe2O3 0 15% 500 1500 1000 2000 Figure 2. Preliminary chemical data of major elements produced by the new instrument of the full length of the COSC-1 drill core, scanned with 0.1 m resolution. www.sci-dril.net/19/13/2015/ Sci. Dril., 19, 13–16, 2015 16 A. S. L. Sjöqvist et al.: An innovative optical and chemical drill core scanner instrument described here. The ca. 2400 m of drill core, whereof ca. 1500 m in H size (76 mm diameter) and 900 m in N size (47.8 and 45 mm), are boxed in 719 core trays with four slots for HQ and five compartments for NQ. The analyt- ical parameters are described in Table 2. A preliminary as- sessment of the analytical precision yields an estimated pre- cision for major elements better than 5–10 % and better than 20 % for most trace elements. The COSC-1 drill core consists of mainly high-grade metamorphosed siliceous sedimentary rocks. Felsic, calc- silicate, and amphibole gneisses are typical representatives, with marbles, amphibolites, and subordinate porphyries. The lower part of the core is dominated by the mylonites of a ma- jor thrust zone. A first assessment of the preliminary XRF data (Fig. 2) shows an increase in SiO2 with depth and that elevated levels of Cl and P2O5 occur in the thrust zone, possi- bly introduced by fluids. High CaO content and associated el- evated Sr levels in the upper and middle part of the drill core can be linked to marbles and possibly calc-silicate gneisses, which become less frequent in the lower part. Peaks in the density, P wave velocity, and rock resistivity downhole logs seem to correlate with low SiO2 values of amphibolites. 
A more detailed assessment of the data produced by the new instrument started with the utilisation XRF data during the COSC-1 sampling party in Berlin, 2–6 February 2015, where they helped the scientists to select their sampling spots. 5 Summary A new instrument provides fast, non-destructive chemical analyses of drill cores in drill core trays by automated scan- ning XRF. The mobile and autonomous system can be moved and operated anywhere in the world. Drill cores are docu- mented by high-resolution digital photography and 3-D laser profiling. 3-D topographic information is used to calculate the optimal scanning path automatically, and for length and structural measurements of the drill core. The instrument allows scientists to obtain basic chemical data early in a project, e.g. on the drill site where the analyses immediately become available to geologists. Subsequently, core logging is less subjective and less dependent on the individual’s expe- rience. Non-destructive XRF analyses leave more of the drill core to be used for other studies. In addition, a digital copy of the drill core can be stored in a virtual drill core archive in which drill cores can be (re)evaluated in their original state. Acknowledgements. Minalyze wishes to acknowledge B. Arthursson, N. Bragsjö, M. Halonen, E. Hegardt, L. Hellberg, C. Johansson, V. Krpo, V. Kunavuti, A. Nordlund, J. Nordstrand, M. Rostedt, C. Sernevi, A. Smajic, and I. Zagerholm, who have played a role in developing the drill core scanner called Minalyzer CS from concept to functional instrument. Edited by: U. Harms Reviewed by: U. Harms and A. Schleicher References Carlsson, L.-E. and Akselsson, R.: Applicability of PIXE and XRF to fast drill core analysis in air, Adv. X Ray Anal., 24, 313–321, http://lup.lub.lu.se/record/2026610, 1981. Fisher, L., Gazley, M. F., Baensch, A., Barnes, S. 
J., Cleverley, J., and Duclauz, G.: Resolution of geochemical and lithostrati- graphic complexity: a workflow for application of portable X-ray fluorescence to mineral exploration, Geochem.-Explor. Env. A., 14, 149–159, doi:10.1144/geochem2012-158, 2014. Gazley, M. F., Vry, J. K., du Plessis, E., and Handler, M. R.: Application of portable X-ray fluorescence anal- yses to metabasalt stratigraphy, Plutonic Gold Mine, Western Australia, J. Geochem. Explor., 110, 74–80, doi:10.1016/j.gexplo.2011.03.002, 2011. Gazley, M. F., Duclaux, G., Fisher, L. A., Beer, S. de, Smith, P., Taylor, M., Swanson, R., Hough, R. M., and Cleverley, J. S.: 3D visualisation of portable X-ray fluorescence data to improve ge- ological understanding and predict metallurgical performance at Plutonic Gold Mine, Western Australia, T. I. Min. Metall. B, 120, 88–96, doi:10.1179/1743275812Y.0000000002, 2012. Le Vaillant, M., Barnes, S. J., Fisher, L., Fiorentini, M. L., and Caruso, S.: Use and calibration of portable X-ray fluo- rescence analysers: application to lithogeochemical exploration for komatiite-hosted nickel sulphide deposits, Geochem.-Explor. Env. A., 14, 199–209, doi:10.1144/geochem2012-166, 2014. Lorenz, H., Rosberg, J.-E., Juhlin, C., Bjelm, L., Almqvist, B. S. G., Berthet, T., Conze, R., Gee, D. G., Klonowska, I., Pascal, C., Pedersen, K., Roberts, N., and Tsang, C.: Operational Report about Phase 1 of the Collisional Orogeny in the Scandinavian Caledonides scientific drilling project (COSC-1), Sci. Dril., in review, 2015. Ross, P.-S., Bourke, A., and Fresia, B.: Improving lithological dis- crimination in exploration drill-cores using portable X-ray fluo- rescence measurements: (1) testing three Olympus Innov-X anal- ysers on unprepared cores, Geochem.-Explor. Env. A., 14, 171– 185, doi:10.1144/geochem2012-163, 2014. Sci. 
Sci. Dril., 19, 13–16, 2015

ENDANGERED SPECIES RESEARCH
Endang Species Res Vol. 23: 125–132, 2014. doi: 10.3354/esr00571. Published online February 28

Fin whale survival and abundance in the Gulf of St. Lawrence, Canada

Christian Ramp1,2,*, Julien Delarue1, Martine Bérubé3, Philip S. Hammond2, Richard Sears1

1Mingan Island Cetacean Study, 285 Green, St. Lambert, Quebec J4P 1T3, Canada
2Sea Mammal Research Unit, Scottish Oceans Institute, University of St. Andrews, East Sands, St. Andrews, Fife KY16 8LB, UK
3Marine Evolution and Conservation, Centre for Ecological and Evolutionary Studies, University of Groningen, PO Box 11103, 9700 CC Groningen, The Netherlands

*Corresponding author: car@rorqual.com

ABSTRACT: The fin whale Balaenoptera physalus, the second largest animal ever to have lived on Earth, was heavily targeted during the industrial whaling era. North Atlantic whaling for this species ended in 1987 and it is unclear if the populations are recovering. The stock structure in the North Atlantic is still under debate, but several lines of evidence suggest that fin whales in the Gulf of St. Lawrence may form a discrete stock with limited exchange with the rest of the North Atlantic. We applied mark-recapture models to 21 yr of photo-identification data from the Jacques-Cartier Passage to estimate the abundance and, for the first time, a survival rate based on live re-sightings for this stock of fin whales. Using the Cormack-Jolly-Seber model, we estimated a unisex non-calf apparent survival rate of 0.955 (95% CI: 0.936 to 0.969) for the period 1990 to 2010, declining in the last 4 yr of the study. The reduced survivorship was likely caused by a lower site fidelity combined with a higher mortality. The POPAN model yielded a super-population estimate of 328 individuals (95% CI: 306 to 350) for the period 2004 to 2010, and confirmed the negative trend in apparent survival and annual abundance, indicating that the population has not increased since the last large-scale surveys from 1974 and 1997.

KEY WORDS: Fin whale · Mark-recapture · Survival · Abundance · Modeling

Resale or republication not permitted without written consent of the publisher
© Inter-Research 2014 · www.int-res.com

INTRODUCTION

The fin whale Balaenoptera physalus is the second largest animal ever to have lived on Earth. The commercial hunt in the North Atlantic ended in 1987 and the species globally is regarded as Endangered by the International Union for Conservation of Nature (IUCN) (Reilly et al. 2008). Fin whales are mostly found in temperate and high-latitude feeding grounds during summer, and some populations undergo a migration to warmer waters in winter (Rice 1998). However, there is no clear migration between feeding and breeding grounds as in humpback whales Megaptera novaeangliae or right whales Eubalaena spp., and fin whale songs — a male breeding display — have been recorded in high latitudes year round (Clark 1995). Despite decades of exploitation and research, our knowledge of fin whales’ life history and population structure is limited. The International Whaling Commission (IWC) has defined a number of stock structure hypotheses for fin whales in the North Atlantic with 2, 3 and 4 stocks/populations. Seven feeding grounds have been identified, but it remains unknown where any of the populations breed (IWC 2009). Genetic analysis showed significant differences between the Mediterranean Sea, the Northeast Atlantic, and the Northwest Atlantic, with some mixing of the latter 2 occurring around Iceland and Greenland (Bérubé et al. 1998, Palsbøll et al. 2004). Mitchell (1974) suggested at least 2 distinct stocks in the western North Atlantic, one in the waters centered around Nova Scotia and one off Newfoundland and Labrador, which was supported by the re-analysis of catch data from Canadian shore whaling stations (Breiwick 1993). Sergeant (1977) hypothesized that the animals in the Gulf of St. Lawrence (GSL) could form a separate stock, presumably wintering in the Laurentian Channel near the entrance of the Gulf, their movements dictated by expanding and retreating sea ice. Analyses of contaminant levels (Hobbs et al. 2001) and of song structure (Delarue et al. 2009) give support to this hypothesis. Photo-identification studies showed that limited exchange exists with the Nova Scotia stock, which includes Gulf of Maine individuals (Coakes et al. 2005, Robbins et al. 2007). We adopt the definition of a stock from the US Marine Mammal Protection Act (Wade & Angliss 1997), in which a stock is defined as a demographically isolated biological population where internal demographics (births and deaths) are far more important for the population than external dynamics (immigration and emigration). For the present study, we regard the GSL as a separate stock with occasional exchange with stocks in adjacent waters.

Mitchell (1974) analyzed data from ship-based surveys and estimated that there were 340 fin whales in the GSL and 2800 in Nova Scotian waters.
Summer aerial surveys in 1995 and 1996 yielded an estimate of 340 to 380 animals in the GSL (Kingsley & Reeves 1998). The last aerial surveys in 2007 resulted in a combined estimate of 462 (95% CI: 270 to 791) animals for the GSL and the Scotian Shelf (Lawson & Gosselin 2009), although most sightings occurred on the Scotian Shelf. Both aerial surveys are likely underestimates, especially the recent one, because it was not corrected for animals missed on the track line. For the Northwest Atlantic, the most recent abundance estimate is 3522 (coefficient of variation: 0.27) (Waring et al. 2013). Fin whale natural adult annual mortality rate has been estimated to range between 0.04 and 0.06 (Clark 1982, de la Mare 1985). Some parts of the GSL see a lot of marine traffic, and the fin whale is the species most frequently involved in collisions with large vessels (Laist et al. 2001), although it is unclear why. The Quebec stranding network recorded 20 dead fin whales from 2004 to 2010 (Quebec Marine Mammal Emergency Network Call Center Reports 2004−2010), raising questions as to whether this level of mortality is sustainable. In the present paper, we analyze 21 years of fin whale photo-identification data to estimate for the first time a survival rate based on live sighting−re-sighting data and not on whaling data. In addition, we produce an updated abundance estimate for the GSL stock.

MATERIALS AND METHODS

Data collection

We conducted multiple annual surveys in the Jacques-Cartier Passage (JCP) and adjacent waters in the GSL, Canada (Fig. 1) from 1982. Weather permitting, one or several inflatable boats surveyed the region for baleen whales. The average season lasted from the end of May/beginning of June to mid/end of October, with an average annual effort of 60 survey days and ~500 h of observation. Surveys were designed to maximize the photo-identification effort.
We used standard single lens reflex (SLR) 35 mm cameras with black and white film until 2003 and switched to digital SLR cameras from 2004 on to identify individual fin whales using their unique pigmentation pattern on their right side and the shape of their dorsal fin (Agler et al. 1990).

Fig. 1. Eastern Canadian waters, with the research area Jacques-Cartier Passage (JCP) marked dark gray

The sex of an individual was determined by molecular analysis (Bérubé & Palsbøll 1996) of genomic DNA extracted from remotely collected skin biopsy samples (Palsbøll et al. 1992) since 1990. We used the capture history of the identified fin whales from 1990 to 2010 to model the survival probability of fin whales, using years as sampling occasions. For this, we regarded an animal as captured for a given year when high-quality photos were taken, regardless of how often an animal was sighted in that year. Animals sighted more regularly have a greater chance of being biopsied and hence sexed. This results in an overestimation of survival rates for sexed and an underestimation for unsexed animals (Nichols et al. 2004). The same authors presented several solutions to overcome this bias, and we applied their ad hoc approach, which results in unbiased survival estimates (Nichols et al. 2004). We used only sexed individuals, conditioned on when the animal was biopsied, and did not apply the information on sex in retrospect. Thus, an animal entered the population only in the year it was sexed, and previous sightings were omitted. For abundance estimates, we used all animals from the years 2004 to 2010 due to enhanced identification possibilities using digital photography and image processing software, resulting in a higher recapture rate.
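The pooling rule described above, in which an animal counts as captured in a given year if at least one high-quality photo was taken regardless of the number of sightings, can be sketched as follows (animal IDs and sighting records are invented for illustration):

```python
# Build annual binary capture histories from individual sighting records.
# Each record is (animal_id, year); multiple sightings within a year
# collapse to a single "1" for that sampling occasion, as in the text.
from collections import defaultdict

def capture_histories(sightings, years):
    """Return {animal_id: "0/1 string over sampling occasions"}."""
    seen = defaultdict(set)
    for animal, year in sightings:
        seen[animal].add(year)
    return {a: "".join("1" if y in yrs else "0" for y in years)
            for a, yrs in seen.items()}

years = range(2004, 2011)  # the 2004-2010 sampling occasions
records = [("F001", 2004), ("F001", 2004), ("F001", 2006),
           ("F002", 2005), ("F002", 2010)]
hist = capture_histories(records, years)
# hist["F001"] -> "1010000"; hist["F002"] -> "0100001"
```

Repeat sightings of F001 in 2004 collapse to a single capture, which is exactly the pooling the text describes.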
We omitted calves from the analysis because their sighting is dependent on that of their mother, and their survival is assumed to be lower than that of adults, as shown for gray whales Eschrichtius robustus and humpback whales Megaptera novaeangliae (Reilly 1984, Barlow & Clapham 1997, Gabriele et al. 2001). We therefore estimate non-calf survival and a population estimate for animals older than 1 yr.

Data analysis

Modeling survival

The Cormack-Jolly-Seber model (CJS) (Cormack 1964, Jolly 1965, Seber 1965) estimates the survival probability (φ) in the population at risk of capture in the interval between 2 successive sampling occasions for individuals caught and alive in the first sampling event, and the probability of recapture (p) of those individuals on each sampling occasion (Burnham & Anderson 1992). The estimated apparent survival probability is the product of the true survival probability and the probability of return of animals to the study area (site fidelity). Henceforth, for simplicity, we refer to this as survival rate. We allowed the probability of survival and recapture to vary due to a number of effects, including time (sampling occasion, t), linear temporal trend (T), sex (s), and trap dependency (m), following the notation of Burnham et al. (1987), Lebreton et al. (1992), and Sandland & Kirkwood (1981); a parameter constant over time was noted as (.). Additive (+) and interaction (*) effects, e.g. s + t, s * t (representing s + t + s × t), were also explored. Sex (s) referred to 2 groups, females and males. We did not regard trap dependency as genuinely representing dependence on capture, but rather as accounting for structural effects mimicking trap dependency, following Pradel (1993). We added this effect as an individual covariate, taking into account whether or not the animal was sighted on the previous sampling occasion.
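The CJS structure just described can be made concrete with a toy numerical sketch (invented capture histories and a simple grid search; the actual analysis was run in program MARK): between an animal's first and last capture, each interval contributes φ and each occasion contributes p or 1 − p, and a recursive term gives the probability of never being seen again after the last capture.

```python
# Toy maximum-likelihood fit of a constant-phi, constant-p CJS model by
# grid search; illustrative only, not the authors' MARK analysis.
import math

def cjs_log_lik(histories, phi, p):
    """Log-likelihood of capture histories, conditional on first capture."""
    K = len(histories[0])
    # chi[t] = Pr(never seen after occasion t | alive at t)
    chi = [0.0] * (K + 1)
    chi[K] = 1.0
    for t in range(K - 1, 0, -1):
        chi[t] = (1 - phi) + phi * (1 - p) * chi[t + 1]
    ll = 0.0
    for h in histories:
        f = h.index("1") + 1           # first capture occasion
        l = K - h[::-1].index("1")     # last capture occasion
        for t in range(f + 1, l + 1):  # occasions between first and last
            ll += math.log(phi)
            ll += math.log(p if h[t - 1] == "1" else 1 - p)
        ll += math.log(chi[l])         # never resighted after last capture
    return ll

histories = ["11010", "10110", "11100", "10001", "11111", "10100"]
best = max(((phi / 100, p / 100) for phi in range(1, 100)
            for p in range(1, 100)),
           key=lambda ab: cjs_log_lik(histories, *ab))
```

The grid search stands in for the numerical optimization MARK performs; `best` holds the (φ, p) pair maximizing the conditional likelihood of these invented histories.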
We applied the CJS model to the dataset 1990 to 2010, the period during which information on sex of the animals was available.

Modeling abundance

The POPAN model (Schwarz & Arnason 1996) is a parameterization of the Jolly-Seber (JS) model (Jolly 1965, Seber 1965) that estimates, in addition to apparent survival (φ) and probability of (re)capture (p), the probability of entry into the population (b) and the abundance of the super-population (N). Under the JS model, the probability of survival and the probability of (re)capture include both marked and unmarked animals, in contrast to the CJS model, which takes only marked animals into account. The JS model relies on the assumption that each animal, marked and unmarked, has the same probability of (re)capture, because the model uses the ratio of marked to unmarked animals to estimate abundance. We used the dataset from 2004 to 2010 to estimate abundance, because of the larger sample size in these years resulting from the advances of digital photography and image processing software. The estimate of the abundance (N) of the super-population encompasses the entire study period, thus all animals alive between 2004 and 2010.

Goodness-of-fit testing and model selection

We applied goodness-of-fit (GOF) tests in an open population model framework using the program U-CARE (Choquet et al. 2005) to test if a general model fitted the data adequately well with respect to the assumptions. U-CARE includes 4 tests of different aspects of the model fit (for details, see Burnham et al. 1987, Choquet et al. 2005), and provides an estimation of the extra-binomial variation, the so-called over-dispersion (variance inflation) factor ĉ. Additionally, U-CARE provides 2 direct tests, one for trap-dependence and one for transience.
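As a small arithmetic companion to the GOF machinery: ĉ is estimated as the pooled GOF chi-square divided by its degrees of freedom, and values above 1 trigger the quasi-likelihood adjustment. The numbers below are the ones reported in the Results for the 2004−2010 dataset.

```python
# Over-dispersion (variance inflation) factor from the pooled GOF test:
# c_hat = chi-square / degrees of freedom. chi2 and df are the values
# reported in the Results for the POPAN (2004-2010) dataset.
chi2, df = 32.3, 18
c_hat = chi2 / df          # ~1.79, the value applied in the paper
use_qaicc = c_hat > 1.0    # model selection then switches to QAICc
```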
Model selection was based on the Akaike information criterion (Akaike 1985, Burnham & Anderson 2002), corrected for small sample size (AICc), using the program MARK (White & Burnham 1999). The model with the lowest AICc value has the best fit using the fewest parameters. When the difference in AICc (ΔAICc) between 2 models is < 2, both models are inferred to have similar support from the data. If ΔAICc > 2 but < 7, low support is inferred for the least likely model, and models with a ΔAICc > 10 are regarded as having no support. When several models showed some support, we applied a model-averaging procedure in which the parameters were estimated from the models in question proportional to their AICc weights. When we applied the variance inflation factor ĉ, the model selection was based on the quasi-AICc (QAICc) (Burnham & Anderson 2002).

RESULTS

Modeling survival

We identified 422 fin whales from 1982 to 2010, including 96 males, 59 females, and 267 unsexed animals. We restricted the analysis to only sexed animals, conditioned on the time when they were first sexed. This resulted in 155 animals from 1990 onwards. The time-dependent CJS model [φ (s * t) p (s * t)] was accepted by the GOF test (χ2 = 104.33, df = 113, p = 0.707). The test details showed that 1 test component (TEST2.CT; see Burnham et al. 1987, Choquet et al. 2005 for details) was marginal for both males and females, and the direct test for trap-dependence was significant for both sexes. The ĉ was <1 and we based model selection on AICc. The model selection process did not support differentiation between males and females for either φ or p, so the sexes were pooled (Table 1). We added immediate trap response in the model [φ (t) p (t * m)] following Pradel (1993). The fully time-dependent model had many inestimable parameters. Time dependency was not supported for the estimation of φ, but the results showed lower survival estimates for the last 4 yr.
Inclusion of a trend (T) in the probability of survival did not improve the model fit. We therefore built a model with constant but different survival rates for the periods 1990−2006 and 2007−2010. This model did not improve the model with constant survival, but had similar support (ΔAICc of 2). The best-supported model (AICc weight: 0.70) resulted in a survival rate of 0.955 (95% CI: 0.936 to 0.969). The second best model estimated an identical survival rate for the period 1990 to 2006 (0.956) and a reduced survival of 0.940 (95% CI: 0.792 to 0.985) for the period 2007 to 2010. Due to the model uncertainty, we model-averaged the results over all models with support (Table 1), and the results for φ were 0.955 (95% CI: 0.936 to 0.969) for 1990−2006 and 0.951 (95% CI: 0.883 to 0.981) for 2007−2010. The estimates for recapture probability are given in Fig. 2.

Table 1. Cormack-Jolly-Seber (CJS) models selected, ordered by their value for the Akaike information criterion corrected for small sample size (AICc). The notation φ (./.) denotes the 2 constant survival estimates for the periods 1990−2007 and 2008−2010. ΔAICc: difference in AICc compared to best supported model; wt: weight; param.: parameters; deviance: deviance explained by the model; φ: survival probability; p: probability of recapture; t: time (sampling occasion); T: linear temporal trend; s: sex; m: trap dependency

Model                  AICc     ΔAICc    AICc wt  No. param.  Deviance
φ (.) p (t + m)        1339.47    0.00   0.70     22          1293.34
φ (./.) p (t + m)      1341.52    2.06   0.25     23          1293.20
φ (.) p (t * m)        1344.82    5.35   0.05     41          1255.28
φ (./.) p (t)          1359.09   19.63   0        22          1312.97
φ (.) p (t)            1360.97   21.51   0        21          1317.04
φ (T) p (t)            1362.28   22.82   0        22          1316.16
φ (s) p (t)            1363.16   23.69   0        22          1317.03
φ (t) p (t + m)        1367.37   27.90   0        40          1280.21
φ (t) p (t * m)        1371.70   32.22   0        58          1240.14
φ (.) p (s * t)        1384.35   44.88   0        41          1294.81
φ (t) p (t)            1384.89   45.43   0        39          1300.10
φ (s) p (s * t)        1386.45   46.98   0        42          1294.52
φ (t) p (s * t)        1412.22   72.75   0        59          1278.09
φ (s * t) p (t)        1426.17   86.70   0        59          1292.04
φ (s * t) p (s * t)    1458.00  118.53   0        79          1269.83

Modeling abundance

POPAN model

In the period 2004 to 2010, we identified 290 fin whales. We pooled all animals in one single group. We applied the GOF test for the CJS model [φ (t) p (t)] as an approximation and it was rejected (χ2 = 32.3, df = 18, p = 0.02). Only one test component (TEST3.SR; see Burnham et al. 1987, Choquet et al. 2005 for details) was significant (χ2 = 13.6, df = 5, p = 0.018), as well as the direct test for transience. POPAN does not allow adjusting for transience, and we applied the estimated ĉ of 1.79. Model selection was therefore based on QAICc.
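The AICc-based selection and averaging can be reproduced directly: ΔAICc values convert to Akaike weights via w_i ∝ exp(−Δ_i/2). Feeding in the AICc values of the three supported CJS models from Table 1 recovers the reported weights (0.70, 0.25, 0.05), and averaging the 2007−2010 survival estimates by these weights reproduces the model-averaged φ of 0.951, under the assumption (ours, since only the top two models' estimates are reported) that the third model's φ equals the best model's 0.955.

```python
# Akaike weights from AICc values, and weight-based model averaging.
import math

def akaike_weights(aicc):
    """w_i proportional to exp(-delta_i / 2), normalized to sum to 1."""
    best = min(aicc)
    raw = [math.exp(-(a - best) / 2.0) for a in aicc]
    total = sum(raw)
    return [r / total for r in raw]

aicc = [1339.47, 1341.52, 1344.82]   # the three supported models (Table 1)
w = akaike_weights(aicc)             # approx. [0.70, 0.25, 0.05]

# phi (2007-2010) under each model; the third value is our assumption,
# since only the top two models' estimates are reported in the text.
phi = [0.955, 0.940, 0.955]
phi_avg = sum(wi * pi for wi, pi in zip(w, phi))   # approx. 0.951
```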
In gen- eral, the decrease in the last years shown by the POPAN model (Fig. 3) seems too steep to be explained by an increase in mortality alone. In 2010, only 72 animals were identi- fied, compared to 169 in 2006. In more recent years, fewer animals frequented the JCP, which is not the only area where fin whales are sighted in the GSL (Kingsley & Reeves 1998), but is home to the largest 129 Model QAICc ΔQAICc QAICc No. wt param φ (T ) p (.) b (t) N (.) 915.14 0.00 0.73 10 φ (T ) p (t) b (t) N (.) 919.15 4.01 0.10 15 φ (t) p (.) b (t) N (.) 919.49 4.35 0.08 14 φ (.) p (t) b (t) N (.) 919.65 4.52 0.08 14 φ (t) p (t) b (t) N (.) 925.12 9.98 0.01 20 φ (.) p (.) b (t) N (.) 928.00 12.86 0.00 10 φ (.) p (.) b (.) N (.) 958.39 43.25 0.00 4 Table 2. POPAN models selected, ordered by their value for the quasi-Akaike information criterion corrected for small sample size (QAICc). b: probability of entry into the popula- tion; N: abundance of the super-population. See Table 1 for other abbreviations and description of models Fig. 2. Model-averaged estimates of the probability of recapture ( p) for fin whales Balaenoptera physalus over the study period 1990 to 2010 from the best-fitting Cormack-Jolly-Seber (CJS) models Year Population estimate 95% CI 2005 166 134−197 2006 264 237−291 2007 232 203−261 2008 200 170−231 2009 163 127−199 2010 133 76−191 Table 3. Model-averaged annual derived population esti- mates for fin whales Balaenoptera physalus for 2005 to 2010 from the best-fitting POPAN parameterization Fig. 3. Model-averaged estimates for the probability of sur- vival (φ) and capture ( p) from 2004 to 2010 for fin whales Balaenoptera physalus from the best-fitting POPAN models Endang Species Res 23: 125–132, 2014 known aggregation in the Gulf. How ever, the annual number of animals frequenting these waters varies greatly, likely due to prey distribution in the Gulf. 
Therefore, it seems that a decrease in site fidelity was at least partly responsible for the decline in apparent survival. The POPAN model results confirm the decline in survival. Over the 2004−2010 period, 20 dead fin whales were recorded within the GSL (Quebec Marine Mammal Emergency Network Call Center Reports 2004−2010). Given the low human population density along the shores of the GSL and the fact that carcasses of rorqual whales are known to sink rapidly (Michael Moore pers. comm.), the number of unreported cases might be substantially higher. Three animals were found entangled in fishing gear, while 4 animals showed signs of vessel collisions. Due to the lack of funding for necropsies, the cause of death remains unknown, but an annual average of almost 3 recorded dead whales (~1% of the estimated population size) seems high. We do not know if these numbers have increased, because the stranding network records only started in 2004. Marine traffic in the study area has been increasing over the study period (C. Ramp pers. obs.), and fin whales are frequently involved in vessel collisions (Laist et al. 2001). Furthermore, fin whales have shifted their temporal occurrence in the Gulf and arrived about a month earlier in 2010 than at the beginning of the study (Mingan Island Cetacean Study, MICS unpubl. data). This shift increased their overlap with the snow crab fisheries along the Quebec North Shore, which end around mid-July, and raised the risk of entanglements. Therefore, we suggest that the high number of reported deaths in addition to an unknown number of mortalities was partly responsible for the pronounced decrease in survival for sexed individuals. We consider the unisex survival rate of 0.955 (95% CI: 0.936 to 0.969) to be the best available estimate for GSL fin whales for 2 reasons. First, the survival rate remained constant for most of the study period.
Second, the estimate is also in the range of survival rates of other long-living baleen whales such as humpbacks (0.96 for non-calves; Barlow & Clapham 1997, Larsen & Hammond 2004) and blue whales Balaenoptera musculus (0.975; Ramp et al. 2006). However, the survival rate of the fin whale population analyzed here was lower in recent years. The super-population estimate of 328 (95% CI: 306 to 350) animals from the POPAN model is very close to the estimate from Mitchell (1974) of 340 for the late 1960s and from Kingsley & Reeves (1998) of 380 animals in the early 1990s, although the latter authors offer the advice to regard their estimate cautiously. Taken at face value, these numbers indicate that the population has not increased over the last 50 yr. The 2 earlier studies used line transect methods for their estimates and covered large areas of the GSL, while our study was based on mark-recapture data collected in only a small portion of the Gulf. In addition, our estimate of 328 represents the number of animals alive at any time during the study period (2004 to 2010), while the former estimates (Mitchell 1974, Kingsley & Reeves 1998) are for one specific point in time. Our annual estimates (Table 3) are much lower. Our estimate was likely biased towards the high side due to the occurrence of transience in the data. By definition, transient animals are captured once and leave the study area permanently, and hence are not available for recapture (Pradel et al. 1997). This biases the survival rate towards the low side, and biases the population estimate towards the high side. Despite these biases, we think that the POPAN parameterization is an adequate tool for the biological situation in which the study species is characterized by high survival and low reproduction and when it is only applied to a relatively short study period. The results indicate that some fin whales are not returning to the JCP every year but are part of the overall population.
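Why the derived annual estimates (Table 3) sit well below the super-population estimate can be illustrated with the standard POPAN bookkeeping, in which new entrants B_t = N_super × b_t are added to survivors from the previous year. All parameter values below are invented for illustration; they are not the paper's estimates.

```python
# Illustrative POPAN bookkeeping (invented phi and b, not the paper's
# estimates): the super-population N_super is spread over the study via
# entry probabilities b (summing to 1), so each annual abundance
# N_t = N_{t-1} * phi + N_super * b_t stays below N_super.
N_super = 328                  # super-population estimate from the paper
phi = 0.94                     # assumed constant annual apparent survival
b = [0.45, 0.10, 0.12, 0.08, 0.09, 0.07, 0.09]  # assumed entry probabilities

entries = [N_super * bi for bi in b]  # new entrants per year
N = [entries[0]]                      # abundance in the first year
for t in range(1, len(b)):
    N.append(N[-1] * phi + entries[t])
```

Because survival is below 1 and the entrants sum to the super-population, every annual abundance in `N` falls short of `N_super`, mirroring the gap between Table 3 and the estimate of 328.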
Although the significance of the JCP to the overall GSL population is unclear, the large number of whales encountered in some years (just around half of the estimated population in 2006) suggests that we encountered a large proportion of the population over the 2004 to 2010 study period. Indeed, the total number of identified animals (n = 290) was close to the estimate of the super-population (N = 328). This, combined with sighting data showing movements between Gaspé, the Estuary (2 other feeding areas; Fig. 1), and the JCP (MICS unpubl. data), suggests that the JCP is an important area for GSL fin whales and that our estimate is a realistic estimate of the whole population. Although the reported decrease in apparent survival is likely partly caused by a reduction in site fidelity, true mortality may have increased recently. We do not know the number of dead animals, but an average of 3 dead animals (~1% of the estimated population) reported per year is noteworthy, and the real number of deaths must certainly be higher. Such mortality might not be sustainable, especially if the demographic isolation of the presumed GSL stock is confirmed. Furthermore, maritime traffic will increase in the future due to the expanding mining activities in northern Quebec and the potential opening of the Northwest Passage (Barnes 2008). This will increase the risk of ship strikes for this relatively small and possibly isolated stock. More work is needed to evaluate the significance of the JCP to the whole GSL fin whale stock, the demographic isolation of that stock, and the reasons behind the decrease in survival.

Acknowledgements. Mingan Island Cetacean Study (MICS) thanks the numerous volunteers and team members, especially Alain Carpentier, for data collection and handling over all these years.
We also thank Thomas Doniol-Valcroze for feedback and advice on all parts of this work, and Jim Nichols and Carl Schwarz for statistical advice on the various models. All fieldwork was conducted under permits from the Department of Fisheries and Oceans Canada.

LITERATURE CITED

Agler BA, Beard JA, Bowman RS, Corbett HD and others (1990) Fin whale (Balaenoptera physalus) photographic identification: methodology and preliminary results from the western North Atlantic. In: Hammond PS, Mizroch SA, Donovan GP (eds) Rep Int Whal Comm Spec Issue 12: 349−356
Akaike H (1985) Prediction and entropy. In: Atkinson AC, Fienberg SE (eds) A celebration of statistics: the ISI centenary volume. Springer Verlag, New York, NY, p 1−24
Barlow J, Clapham PJ (1997) A new birth-interval approach to estimating demographic parameters of humpback whales. Ecology 78: 535−546
Barnes A (2008) Cost minimizing route choice for marine transportation: expected vessel traffic through the Northwest Passage 2050–2100. Sample paper for Kendrick DA, Mercado PR, Amman HM: Computational economics. Available at www.laits.utexas.edu/compeco/Sample_Papers/Barnes/Cost-Minimizing_Route_Choice_for_Marine_Transportation_BARNES.pdf
Bérubé M, Palsbøll PJ (1996) Identification of sex in cetaceans by multiplexing with three ZFX and ZFY specific primers. Mol Ecol 5: 283−287
Bérubé M, Aguilar A, Dendanto D, Larsen F and others (1998) Population genetic structure of North Atlantic, Mediterranean Sea and Sea of Cortez fin whales, Balaenoptera physalus (Linnaeus 1758): analysis of mitochondrial and nuclear loci. Mol Ecol 7: 585−599
Breiwick JM (1993) Population dynamics and analyses of the fisheries for fin whales (Balaenoptera physalus) in the Northwest Atlantic Ocean. PhD thesis, University of Washington, Seattle
Burnham KP, Anderson DR (1992) Data based selection of an appropriate biological model: the key to modern data analysis.
In: McCullough DR, Barrett RH (eds) Wildlife 2001: populations. Springer Verlag, New York, NY, p 16−30
Burnham KP, Anderson DR (2002) Model selection and multimodel inference: a practical information-theoretic approach, 2nd edn. Springer, Berlin
Burnham KP, Anderson DR, White GC, Brownie C, Pollock KH (1987) Design and analysis methods for fish survival experiments based on release recapture. Am Fish Soc Monogr 5
Choquet R, Reboulet AM, Lebreton JD, Gimenez O, Pradel R (2005) U-CARE 2.2 user’s manual. CEFE, UMR 5175, CNRS, Montpellier
Clark WG (1982) Historical rates of recruitment to Southern Hemisphere fin whale stocks. Rep Int Whal Comm 32: 305−324
Clark CW (1995) Application of US Navy underwater hydrophone arrays for scientific research on whales. Rep Int Whal Comm 45: 210−222
Coakes A, Gowans S, Simard P, Giard J, Vashro C, Sears R (2005) Photographic identification of fin whales (Balaenoptera physalus) off the Atlantic coast of Nova Scotia, Canada. Mar Mamm Sci 21: 323−326
Cormack R (1964) Estimates of survival from the sighting of marked animals. Biometrika 51: 429−438
de la Mare WK (1985) On the estimation of mortality rates from whale age data, with particular reference to minke whales (Balaenoptera acutorostrata) in the Southern Hemisphere. Rep Int Whal Comm 35: 239−250
Delarue J, Todd SK, Van Parijs SM, Di Iorio L (2009) Geographic variation in Northwest Atlantic fin whale (Balaenoptera physalus) song: implications for stock structure assessment. J Acoust Soc Am 125: 1774−1782
Gabriele CM, Straley JM, Mizroch SA, Baker CS and others (2001) Estimating the mortality rate of humpback whale calves in the central North Pacific Ocean. Can J Zool 79: 589−600
Hobbs KE, Muir DC, Mitchell E (2001) Temporal and biogeographic comparisons of PCBs and persistent organochlorine pollutants in the blubber of fin whales from eastern Canada in 1971-1991.
Environ Pollut 114: 243−254
IWC (International Whaling Commission) (2009) Report of the Scientific Committee, Report of the Sub-committee on the Revised Management Procedure, Annex D, Appendix 6. J Cetacean Res Manag 11: 114−131
Jolly GM (1965) Explicit estimates from capture-recapture data with both death and immigration-stochastic model. Biometrika 52: 225−247
Kingsley MCS, Reeves RR (1998) Aerial surveys of cetaceans in the Gulf of St. Lawrence in 1995 and 1996. Can J Zool 76: 1529−1550
Laist DW, Knowlton AR, Mead JG, Collet AS, Podesta M (2001) Collisions between ships and whales. Mar Mamm Sci 17: 35−75
Larsen F, Hammond PS (2004) Distribution and abundance of West Greenland humpback whales (Megaptera novaeangliae). J Zool (Lond) 263: 343−358
Lawson JW, Gosselin JF (2009) Distribution and preliminary abundance estimates for cetaceans seen during Canada’s marine megafauna survey — a component of the 2007 TNASS. DFO Can Sci Advis Sec Res Doc 2009/031
Lebreton JD, Burnham KP, Clobert J, Anderson DR (1992) Modeling survival and testing biological hypotheses using marked animals — a unified approach with case studies. Ecol Monogr 62: 67−118
Mitchell ED (1974) Present status of northwest Atlantic fin and other whale stocks. In: The whale problem: a status report.
Harvard University Press, Cambridge, MA, p 108−169
Nichols JD, Kendall WL, Hines JE, Spendelow JA (2004) Estimation of sex-specific survival from capture−recapture data when sex is not always known. Ecology 85: 3192−3201
Palsbøll PJ, Vader A, Bakke I, Elgewely MR (1992) Determination of gender in cetaceans by the polymerase chain reaction. Can J Zool 70: 2166−2170
Palsbøll PJ, Bérubé M, Aguilar A, Notarbartolo-Di-Sciara G, Nielsen R (2004) Discerning between recurrent gene flow and recent divergence under a finite-site mutation model applied to North Atlantic and Mediterranean Sea fin whale (Balaenoptera physalus) populations. Evolution 58: 670−675
Pradel R (1993) Flexibility in survival analysis from recapture data: handling trap dependence. In: Lebreton JD, North PM (eds) Marked individuals in the study of bird populations. Birkhäuser Verlag, Basel, p 29−37
Pradel R, Hines JE, Lebreton JD, Nichols JD (1997) Capture-recapture survival models taking account of transients. Biometrics 53: 60−72
Quebec Marine Mammal Emergency Network/Réseau québécois d’urgences pour les mammifères marins (2004−2010) Annual reports 2004–2010.
Editorial responsibility: Nils Bunnefeld, Stirling, UK
Submitted: June 21, 2013; Accepted: November 18, 2013; Proofs received from author(s): February 5, 2014
work_gv2yxlgymrf6vjyvjepjwjt22m ---- Anaesthetic eye drops for children in casualty departments across south east England

It is a common practice to use topical anaesthetic drops to provide temporary relief and aid in the examination of the eyes when strong blepharospasm precludes thorough examination. Ophthalmology departments usually have several types of these—for example, amethocaine, oxybuprocaine (benoxinate), and proxymetacaine.
The duration and degree of discomfort caused by amethocaine is significantly higher than with proxymetacaine,1 2 whilst the difference in discomfort between amethocaine and oxybuprocaine is minimal.2 When dealing with children, therefore, it is recommended to use proxymetacaine drops.1 It was my experience that Accident & Emergency (A&E) departments tend to have less choice of these drops. This survey was done to find out the availability of different anaesthetic drops, and the preference for paediatric use given a choice of the above three. Questionnaires were sent to 40 A&E departments across south east England. Thirty nine replied, of which one department did not see any eye casualties. None of the 38 remaining departments had proxymetacaine. Twenty units had amethocaine alone and 10 units had oxybuprocaine alone. For paediatric use, these units were happy to use whatever drops were available within the unit. Eight units stocked both amethocaine and oxybuprocaine; four of these were happy to use either of them on children and four used only oxybuprocaine. One of the latter preferred proxymetacaine but had to make do with oxybuprocaine because of cost. Children are apprehensive about the instillation of any eye drops. Hence, it is desirable to use the least uncomfortable drops, such as proxymetacaine. For eye casualties, in the majority of District General Hospitals, A&E departments are the first port of call. Hence, A&E units need to be aware of the benefit of proxymetacaine and stock it for paediatric use.

M R Vishwanath
Department of Ophthalmology, Queen Elizabeth Queen Mother Hospital, Margate, Kent; m.vishwanath@virgin.net
doi: 10.1136/emj.2003.010645

References
1 Shafi T, Koay P. Randomised prospective masked study comparing patient comfort following the instillation of topical proxymetacaine and amethocaine. Br J Ophthalmol 1998;82(11):1285–7.
2 Lawrenson JG, Edgar DF, Tanna GK, et al.
Comparison of the tolerability and efficacy of unit-dose, preservative-free topical ocular anaesthetics. Ophthalmic Physiol Opt 1998;18(5):393–400.

Training in anaesthesia is also an issue for nurses

We read with interest the excellent review by Graham.1 An important related issue is the training of the assistant to the emergency physician. We wished to ascertain whether use of an emergency nurse as an anaesthetic assistant is common practice. We conducted a short telephone survey of the 12 Scottish emergency departments with attendances of more than 50 000 patients per year. We interviewed the duty middle grade doctor about usual practice in that department. In three departments, emergency physicians will routinely perform rapid sequence intubation (RSI), the assistant being an emergency nurse in each case. In nine departments an anaesthetist will usually be involved or emergency physicians will only occasionally perform RSI. An emergency nurse will assist in seven of these departments. The Royal College of Anaesthetists2 have stated that anaesthesia should not proceed without a skilled, dedicated assistant. This also applies in the emergency department, where standards should be comparable to those in theatre.3 The training of nurses as anaesthetic assistants is variable and is the subject of a Scottish Executive report.4 This consists of at least a supernumerary in-house programme of 1 to 4 months, followed by continued professional development and at least 50% of working time devoted to anaesthetic assistance.4 The Faculty of Emergency Nursing has recognised that anaesthetic assistance is a specific competency. We think that this represents an important progression. The curriculum is, however, still in its infancy and is not currently a requirement for emergency nurses (personal communication with L McBride, Royal College of Nursing).
Their assessment of competence in anaesthetic assistance is portfolio based and not set against specified national standards (as has been suggested4). We are aware of one-day courses to familiarise nurses with anaesthesia (personal communication with J McGowan, Southern General Hospital). These are an important introduction, but are clearly not comparable to formal training schemes. While Graham has previously demonstrated the safety of emergency physician anaesthesia,5 we suggest that when anaesthesia does prove difficult, a skilled assistant is of paramount importance. Our small survey suggests that the use of emergency nurses as anaesthetic assistants is common practice. If, perhaps appropriately, RSI is to be increasingly performed by emergency physicians,5 then the training of the assistant must be concomitant with that of the doctor. Continued care of the anaesthetised patient is also a training issue1 and applies to nurses as well. Standards of anaesthetic care need to be independent of location and provider.

R Price
Department of Anaesthesia, Western Infirmary, Glasgow: Gartnavel Hospital, Glasgow, UK
A Inglis
Department of Anaesthesia, Southern General Hospital, Glasgow, UK
Correspondence to: R Price, Department of Anaesthesia, 30 Shelley Court, Gartnavel Hospital, Glasgow, G12 0YN; rjp@doctors.org.uk
doi: 10.1136/emj.2004.016154

References
1 Graham CA. Advanced airway management in the emergency department: what are the training and skills maintenance needs for UK emergency physicians? Emerg Med J 2004;21:14–19.
2 Guidelines for the provision of anaesthetic services. Royal College of Anaesthetists, London, 1999. http://www.rcoa.ac.uk/docs/glines.pdf.
3 The Role of the Anaesthetist in the Emergency Service. Association of Anaesthetists of Great Britain and Ireland, London, 1991. http://www.aagbi.org/pdf/emergenc.pdf.
4 Anaesthetic Assistance. A strategy for training, recruitment and retention and the promulgation of safe practice.
Scottish Medical and Scientific Advisory Committee. http://www.scotland.gov.uk/library5/health/aast.pdf.
5 Graham CA, Beard D, Oglesby AJ, et al. Rapid sequence intubation in Scottish urban emergency departments. Emerg Med J 2003;20:3–5.

Ultrasound Guidance for Central Venous Catheter Insertion

We read Dunning's BET report with great interest.1 As Dunning himself acknowledges, most of the available literature concerns the insertion of central venous catheters (CVCs) by anaesthetists (and also electively). However, we have found that these data do not necessarily apply to the critically ill emergency setting, as illustrated by the study of emergency medicine physicians2 in which ultrasound did not reduce the complication rate. The literature does not distinguish between potentially life-threatening complications and those with merely unwanted side effects. An extra attempt or prolonged time for insertion, whilst unpleasant, has a minimal impact on the patient's eventual outcome. However, a pneumothorax could prove fatal to a patient with impending cardiorespiratory failure. Some techniques, for example the high internal jugular vein approach, have much lower rates of pneumothorax. Furthermore, some techniques use an arterial pulsation as a landmark. Such techniques can minimise the adverse effect of an arterial puncture, as pressure can be applied directly to the artery. We also share Dunning's doubt in the National Institute for Clinical Excellence (NICE) guidance's claim that the cost per use of an ultrasound could be as low as £10.3 NICE's economic analysis model assumed that the device is used 15 times a week. This would mean sharing the device with another department, clearly unsatisfactory for most emergency situations. The cost per complication prevented would be even greater (£500 in Miller's study, assuming 2 fewer complications per 100 insertions). Finally, the NICE guidance is that "appropriate training to achieve competence" is
undertaken. We are sure that the authors would concur that the clinical scenario given would not be the appropriate occasion to "have a go" with a new device for the first time. In conclusion, we believe that far more important than ultrasound-guided CVC insertion is the correct choice of insertion site, to avoid those significant risks which the critically ill patient would not tolerate.

M Chikungwa, M Lim
Correspondence to: M Chikungwa; mchikungwa@msn.com
doi: 10.1136/emj.2004.015156

References
1 Dunning J, Williamson J. Ultrasonic guidance and the complications of central line placement in the emergency department. Emerg Med J 2003;20:551–552.
2 Miller AH, Roth BA, Mills TJ, et al. Ultrasound guidance versus the landmark technique for the placement of central venous catheters in the emergency department. Acad Emerg Med 2002;9:800–5.
3 National Institute for Clinical Excellence. Guidance on the use of ultrasound locating devices for placing central venous catheters. Technology appraisal guidance no 49, 2002. http://www.org.uk/cat.asp?c=36752 (accessed 24 Dec 2003).

Accepted for publication 27 May 2004
Emerg Med J 2005;22:608–610. www.emjonline.com
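The cost figures quoted in the letter fit together arithmetically; a few lines (illustrative only, using the letter's own numbers) make the relationship between NICE's per-use cost and the cost per complication prevented explicit:

```python
# Figures quoted in the letter (NICE model and Miller's study).
cost_per_use = 10                     # pounds: NICE's claimed cost per ultrasound use
complications_prevented_per_100 = 2   # fewer complications per 100 insertions (Miller)

# 100 ultrasound-guided insertions at £10 each, divided by complications avoided.
cost_per_100_insertions = 100 * cost_per_use
cost_per_complication_prevented = cost_per_100_insertions / complications_prevented_per_100
print(f"£{cost_per_complication_prevented:.0f} per complication prevented")  # £500, as quoted
```

This reproduces the £500 figure attributed to Miller's study, and shows how sensitive it is to the assumed complication reduction.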
Patients' attitudes toward medical photography in the emergency department

Advances in digital technology have made the use of digital images increasingly common for the purposes of medical education.1 The high turnover of patients in the emergency department, many of whom have striking visual signs, makes this an ideal location for digital photography. These images may eventually be used for the purposes of medical education in presentations, and in book or journal format.2 3 As a consequence, patients' images may be seen by the general public on the internet, as many journals now have open access internet sites. From an ethical and legal standpoint it is vital that patients give informed consent for the use of images in medical photography, and are aware that such images may be published on the world wide web.4 The aim of this pilot study was to investigate patients' attitudes toward medical photography as a guide to consent and usage of digital photography within the emergency department. A patient survey questionnaire was designed to answer whether patients would consent to their image being taken, which part(s) of their body they would consent to being photographed, and whether they would allow these images to be published in a medical book, journal, and/or on the internet. All patients attending the minors section of an inner city emergency department between 1 January 2004 and 30 April 2004 were eligible for the study. Patients were included if aged over 18 and having a Glasgow coma score of 15. Patients were excluded if in moderate or untreated pain, needing urgent treatment, or unable to read or understand the questionnaire. All patients were informed that the questionnaire was anonymous and would not affect their treatment. Data were collected by emergency department Senior House Officers and Emergency Nurse Practitioners. 100 patients completed the questionnaire.
The results are summarised below:

Q1. Would you consent to a photograph being taken in the Emergency Department of you/part of your body for the purposes of medical education? Yes 84%, No 16%. 21% replied Yes to all forms of consent, 16% replied No to all forms of consent, while 63% replied Yes with reservations for particular forms of consent.

Q2. Would you consent to the following body part(s) being photographed (head, chest, abdomen, limbs and/or genitalia)? The majority of patients consented to all body areas being photographed except for genitalia (41% Yes, 59% No), citing invasion of privacy and embarrassment.

Q3. Would you consent to your photo being published in a medical journal, book or internet site? The majority of patients gave consent for publication of images in a medical journal (71%) or book (70%), but were more likely to refuse consent for use of images on internet medical sites (47% Yes, 53% No or unsure).

In determining the attitudes of patients presenting to an inner city London emergency department regarding the usage of photography, we found that the majority of patients were amenable to having their images used for the purposes of medical education. The exceptions to this were the picturing of genitalia and the usage of any images on internet medical sites/journals. The findings of this pilot study are limited to data collection in a single emergency department in central London. A particular flaw of this survey is the lack of correlation between age, sex, ethnicity, and consent for photography. Further study is ongoing to investigate this. There have been no studies published about patients' opinions regarding medical photography to date. The importance of obtaining consent for publication of patient images and concealment of identifying features has been stressed previously.5 This questionnaire study emphasises the need to investigate patients' beliefs and concerns prior to consent.
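Survey proportions such as these are more informative with an interval than as a bare percentage. As an illustrative sketch (not part of the original study; the Wilson score method is our own choice), the 95% confidence interval for the headline 84/100 consent rate can be computed as follows:

```python
import math

def wilson_ci(successes, n, z=1.96):
    """Wilson score interval for a binomial proportion (z=1.96 gives ~95%)."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - half, centre + half

lo, hi = wilson_ci(84, 100)  # 84 of 100 patients consented
print(f"84/100 consent: 95% CI {lo:.1%} to {hi:.1%}")
```

For 84/100 this gives roughly 76% to 90%, which indicates the precision attainable with a sample of 100 respondents.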
A Cheung, M Al-Ausi, I Hathorn, J Hyam, P Jaye
Emergency Department, St Thomas' Hospital, UK
Correspondence to: Peter Jaye, Consultant in Emergency Medicine, St Thomas' Hospital, Lambeth Palace Road, London SE1 7RH, UK; peter.jaye@gstt.nhs.uk
doi: 10.1136/emj.2004.019893

References
1 Mah ET, Thomsen NO. Digital photography and computerisation in orthopaedics. J Bone Joint Surg Br 2004;86(1):1–4.
2 Clegg GR, Roebuck S, Steedman DJ. A new system for digital image acquisition, storage and presentation in an accident and emergency department. Emerg Med J 2001;18(4):255–8.
3 Chan L, Reilly KM. Integration of digital imaging into emergency medicine education. Acad Emerg Med 2002;9(1):93–5.
4 Hood CA, Hope T, Dove P. Videos, photographs, and patient consent. BMJ 1998;316:1009–11.
5 Smith R. Publishing information about patients. BMJ 1995;311:1240–1.

Unnecessary tetanus boosters in the ED

It is recommended that five doses of tetanus toxoid provide lifelong immunity and that further 10-yearly doses are not required.1 National immunisation against tetanus began in 1961, providing five doses (three in infancy, one preschool and one on leaving school).2 Coverage is high, with uptake over 90% since 1990.2 Therefore, the majority of the population under the age of 40 are fully immunised against tetanus. Td (tetanus toxoid/low dose diphtheria) vaccine is often administered in the Emergency Department (ED) following a wound or burn, based upon the patient's recollection of their immunisation history. Many patients and staff may believe that doses should still be given every 10 years. During summer 2004, an audit of tetanus immunisation was carried out at our department. The records of 103 patients who had received Td in the ED were scrutinised, and a questionnaire was sent to each patient's GP requesting information about the patient's tetanus immunisation history before the dose given in the ED. Information was received for 99 patients (96% response).
In 34/99 cases, primary care records showed the patient was fully immunised before the dose given in the ED. One patient had received eight doses before the ED dose, and two patients had been immunised less than 1 year before the ED dose. In 35/99 cases, records suggested that the patient was not fully immunised; however, in this group few records were held from before the early 1990s, and it is possible some may have had five previous doses. In 30/99 cases there were no tetanus immunisation records. In 80/99 cases no features suggesting the wound was tetanus prone were recorded. These findings suggest that some doses of Td are unnecessary. Patients' recollections of their immunisation history may be unreliable. We have recommended that during working hours, the patient's general practice should be contacted to check immunisation records. Out of hours, if the patient is under the age of 40 and the wound is not tetanus prone (as defined in DoH Guidance1), the general practice should be contacted as soon as possible and the immunisation history checked before administering Td. However, we would like to emphasise that wound management is paramount, and that where tetanus is a risk in a patient who is not fully immunised, a tetanus booster will not provide effective protection against tetanus. In these instances, tetanus immunoglobulin (TIG) also needs to be considered (and is essential for tetanus prone wounds). In the elderly and other high-risk groups—for example, intravenous drug abusers—the
Accepted for publication 25 February 2004
Accepted for publication 12 October 2004
need for a primary course of immunisation against tetanus should be considered, not just a single dose, and follow-up with the general practice is therefore needed. The poor state of many primary care immunisation records is a concern, and this may argue in favour of centralised immunisation records or a patient electronic record, to protect patients against unnecessary immunisations as well as tetanus.

T Burton, S Crane
Accident and Emergency Department, York Hospital, Wigginton Road, York, UK
Correspondence to: Dr T Burton, 16 Tom Lane, Fulwood, Sheffield, S10 3PB; tomandjackie@doctors.org.uk
doi: 10.1136/emj.2004.021121

References
1 CMO. Replacement of single antigen tetanus vaccine (T) by combined tetanus/low dose diphtheria vaccine for adults and adolescents (Td) and advice for tetanus immunisation following injuries. In: Update on immunisation issues PL/CMO/2002/4. London: Department of Health, August 2002.
2 http://www.hpa.org.uk/infections/topics_az/tetanus.

Accepted for publication 20 October 2004

BOOK REVIEWS

and Technical Aspects be comprehensive with regards to disaster planning? Can it provide me with what I need to know? I was confused by the end of my involvement with this book, or perhaps overwhelmed by the enormity of the importance of non-medical requirements such as engineering and technical expertise in planning for and managing environmental catastrophes. Who is this book for? I am still not sure. The everyday emergency physician? I think not. It serves a purpose in being educational about what is required, in a generic sort of way, when planning for disasters. Would I have turned to it last year during the SARS outbreak? No. When I feared a bio-terrorism threat? No. When I watched helplessly the victims of the latest Iranian earthquake? No. To have done so would have been to participate in some form of voyeurism on other people's misery.
Better to embrace the needs of those victims of environmental disasters in some tangible way than rush to the bookshelf to brush up on some aspect of care which is so remote for the majority of us in emergency medicine.

J Ryan
St Vincent's Hospital, Ireland; j.ryan@st-vincents.ie

Neurological Emergencies: A Symptom Orientated Approach
Edited by G L Henry, A Jagoda, N E Little, et al. McGraw-Hill Education, 2003, £43.99, pp 346. ISBN 0071402926

The authors set out with very laudable intentions. They wanted to get the "maximum value out of both professional time and expensive testing modalities". I therefore picked up this book with great expectations—the prospect of learning a better and more memorable way of dealing with neurological cases in the emergency department. The chapter headings (14 in number) seemed to identify the key points I needed to know, and the size of the book (346 pages) indicated that it was possible to read. Unfortunately things did not start well. The initial chapter on basic neuroanatomy mainly used diagrams from other books. The end result was areas of confusion where the text did not entirely marry up with the diagrams. The second chapter, dealing with evaluating the neurological complaint, was better and had some useful tips. However, the format provided a clue as to how the rest of the book was to take shape—mainly text and lists. The content of this book was reasonably up to date, and if you like learning neurology by reading text and memorising lists then this is the book for you. Personally I would not buy it. I felt it was a rehash of a standard neurology textbook and missed a golden opportunity to be a comprehensible text on emergency neurology, written by emergency practitioners for emergency practitioners.

P Driscoll
Hope Hospital, Salford, UK; peter.driscoll@srht.nhs.uk

Emergency medicine procedures
E Reichmann, R Simon. New York: McGraw-Hill, 2003, £120, pp 1563. ISBN 0-07-136032-8.
This book has 173 chapters, allowing each chapter to be devoted to a single procedure, which, coupled with a clear table of contents, makes finding a particular procedure easy. This will be appreciated most by the emergency doctor on duty needing a rapid "refresher" for infrequently performed skills. "A picture is worth a thousand words" was never so true as when attempting to describe invasive procedures. The strength of this book lies in the clarity of its illustrations, which number over 1700 in total. The text is concise but comprehensive. Anatomy, pathophysiology, indications and contraindications, equipment needed, technicalities, and complications are discussed in a standardised fashion in each chapter. The authors, predominantly US emergency physicians, mostly succeed in refraining from quoting the "best method" and provide balanced views of alternative techniques. This is well illustrated by the shoulder reduction chapter, which pictorially demonstrates 12 different ways of reducing an anterior dislocation. In fact, the only notable absentee is the locally preferred Spaso technique. The book covers every procedure that one would consider in the emergency department, and many that one would not. Fish hook removal, zipper injury, contact lens removal, and emergency thoracotomy are all explained with equal clarity. The sections on soft tissue procedures, arthrocentesis, and regional analgesia are superb. In fact, by the end of the book, I was confident that I could reduce any paraphimosis, deliver a baby, and repair a cardiac wound. However, I still had nagging doubts about my ability to aspirate a subdural haematoma in an infant, repair the great vessels, or remove a rectal foreign body. Reading the preface again, I was relieved. The main authors acknowledge that some procedures are for "surgeons" only and are included solely to improve the understanding by "emergentologists" of procedures that may present with late complications.
These chapters are unnecessary, while others would be better placed in a pre-hospital text. Thankfully, they are relatively few in number, with the vast majority of the book being directly relevant to emergency practice in the UK. Weighing approximately 4 kg, this is undoubtedly a reference text. The price (£120) will deter some individuals, but it should be considered an essential reference book for SHOs, middle grades, and consultants alike. Any emergency department would benefit from owning a copy.

J Lee

Environmental Health in Emergencies and Disasters: A Practical Guide
Edited by B Wisner, J Adams. World Health Organization, 2003, £40.00, pp 252. ISBN 9241545410

I have the greatest admiration for doctors who dedicate themselves to disaster preparedness and intervention. For most doctors there will, thank God, rarely be any personal involvement in environmental emergencies and disasters. For the others who are involved, the application of this branch of medicine must be some form of "virtual" game of medicine, lacking in visible, tangible gains for the majority of their efforts. Reading this World Health Organization publication, however, has changed my perception of the importance of emergency planners, administrators, and environmental technical staff. I am an emergency physician, blinkered by measuring the response to interventions in real time; is the peak flow better after the nebuliser? Is the pain less with intravenous morphine? But if truth be known, it is the involvement of public health doctors and emergency planners that makes the biggest impact in saving lives worldwide. This book served to demonstrate to me my ignorance on matters of disaster responsiveness. But can 252 pages of General Aspects
work_gvse347e7jdulef62denwcpp3m ---- J Postgrad Med March 2004 Vol 50 Issue 1

Digital photography in anatomical pathology
Leong FJW-M, Leong ASY*

Executive Medical Director, Mirada Solutions, Oxford, United Kingdom; *Medical Director, Hunter Area Pathology Service, Newcastle, and Professor and Head, Discipline of Anatomical Pathology, University of Newcastle, Australia.
Correspondence: Prof Anthony S-Y Leong, MD. E-mail: aleong@mail.newcastle.edu.au
www.jpgmonline.com
Received: 19-09-03; Review completed: 20-10-03; Accepted: 22-10-03. PubMed ID: 15048004. J Postgrad Med 2004;50:62-69

ABSTRACT
Digital imaging has made major inroads into the routine practice of anatomical pathology and replaces photographic prints and Kodachromes for reporting and conference purposes. More advanced systems coupled to computers allow greater versatility and speed of turnaround as well as lower costs of incorporating macroscopic and microscopic pictures into pathology reports and publications. Digital images allow transmission to remote sites via the Internet for consultation, quality assurance and educational purposes, and can be stored on and disseminated by CD-ROM. Total slide digitisation is now a reality and will replace glass slides to a large extent. Three-dimensional images of gross specimens can be assembled and posted on websites for interactive educational programmes. There are also applications in research, allowing more objective and automated quantitation of a variety of morphological and immunohistological parameters.
Early reports indicate that medical vision systems are a reality and can provide for automated computer-generated histopathological diagnosis and quality assurance.

KEY WORDS: Digital photography, anatomical pathology, Internet, medical vision systems, education, computer-generated diagnosis, pathology reporting, quality assurance, remote consultation

Among the many functions of an anatomical pathologist are diagnosis, consultation, documentation, and education. Implicit in these activities is the need to record morphological findings both at the macroscopic and microscopic level.1 Conventionally, this has been performed through descriptive prose, with inherent variations in the style, vocabulary and abilities of individual pathologists, which are often considerable and beset by idiosyncrasies. Comparisons and descriptions associated with food are commonplace, with such confusing classics as "caseous necrosis", which describes a resemblance to cottage cheese and not cheddar cheese, "nutmeg liver", resembling the cut surface of a nutmeg seed and not the fruit, and "anchovy sauce", which may not be found on the dining tables in many countries. Common colours become irreproducible when terms like "off white", "light grey", "creamy-white", "pale white", "grey-tan", "egg white" and "ice-white" may be used for the same colour.

Photographs documented the true appearances of the pathological changes and eliminated much of the inaccuracy resulting from variation in descriptive prowess, but there remained the inevitable delays of print production, and cost restricted the number of photographs. Mostly black-and-white prints were employed in reports. Some institutions maintained elaborate photography set-ups that depended on trained staff to process and print these photos, and to develop coloured projection slides.
The advent of the Polaroid film allowed instant prints and also immediate projection slides, averting the problems of delays, but again costs were significant and quality was not optimal.

The development of digital photography and the rapidly falling costs of good-quality digital cameras have made major inroads into our traditional way of documenting pathological findings at both the gross and microscopic level. Together with the expanding use of the Internet, digital photography paves the way for many new applications of image documentation in anatomical pathology. This review will be confined to the capturing of digital images and their current and potential applications in anatomical pathology.

Advantages of Digital Photography

There are several compelling reasons to convert to digital technology, but the most persuasive is lower running costs and shorter image production turnaround times. Following the initial capital investment, additional costs are minimal. Memory cards for temporary storage of images are reusable, and storage media such as CD-ROMs are inexpensive. Results are almost instantaneous and images can be edited, and the quality and composition improved. It is easier to catalogue, archive and disseminate/transmit a digital image than a print or 35-mm slide. Multiple digital copies in different formats are possible. Photographic laboratories can create glossy prints that are virtually indistinguishable from those made from a chemical film.

In addition to improving current practice, the change to the digital medium opens up a new range of applications such as telemicroscopy/telepathology, image processing and analysis, digital slide archiving and, eventually, automated machine vision systems. Another reason is necessity.
As other disciplines of medicine and non-medical fields convert to the digital medium, lack of familiarity with digital imaging may eventually become a handicap and be viewed as a deficiency in practice.

The Digital Camera

Many of the factors important in chemical photography are not applicable to digital photography. Considerations of film speed and type are not relevant, and factors such as colour temperature and white balancing are considerably easier to compensate for with digital imaging. Focussing is much easier, as many cameras have automatic fine focussing.

The three primary functions of the microscope are to resolve detail, increase contrast and magnify. Digital imaging can enhance the performance of the microscope through its ability to improve upon the latter two functions, the first being still dependent on optics. The variables influencing image quality are too numerous to discuss, but good optics remains the single most important factor in the acquisition of high-quality digital images. Optical deficiencies revealed by chromatic variance and poor image clarity are far more noticeable when using a digital image sensor compared with 35-mm film.

There has been a proliferation of digital microscopy cameras in recent years, and a short list of the cameras commonly advertised by the larger microscopy companies is provided in Table 1. Although such devices are generally maintenance-free, it is preferable to select a camera from a manufacturer which reviews and updates its software drivers with the same degree of efficiency and frequency as a major computer peripheral manufacturer. This is particularly necessary given the rapid rate of evolution of computer operating systems.

Cameras fall into three main groups ranging from budget to expensive, plus those for specific requirements, e.g. low-light imaging and multi-spectral imaging.2 The cameras listed here are generally capable of reasonable imaging for transmission, presentation and print.
The more expensive models mostly produce larger images of higher quality. Currently, acquired image size is equated with quality but, in the future, the crucial determinant will be the method by which the image is acquired at the sensor level, as this has a greater influence. Not listed are cameras optimised for low-light photography and cameras capable of high-resolution, high frame-rate imaging. The latter have the advantage of using a technique of multiple-frame averaging to improve image quality. As such cameras have specific applications, additional expense can be avoided by ensuring that specialised features are not purchased if not required. High-end general microscopy cameras such as the ProgRes C14 and Nikon DXM1200 are capable of respectable low-light imaging.

There are essentially two groups of digital microscopy cameras: those designed for still image capture and those for 'live' image display. The latter gained popularity for clinical meetings where images are shown on a television. Recent developments have blurred the boundaries between these two types. Originally intended for use as a presentation tool, lower-priced analogue cameras combine all the visual information into a single video signal (composite video). Compared to a digital camera the quality is inferior; however, coupled with a video-capture unit ('image grabber'), such a camera is a low-cost alternative. Video capture units accept analogue signals and 'capture' images at resolutions of up to 1500 x 1125 pixels through a technique based on oversampling the analogue signal. The resolutions claimed are spurious, and those above the base camera resolution result in the magnification of electronic artefacts.

There are several cameras that are able to provide both functions.
A high-end example is the Kontron/Jenoptik ProgRes 3012, a dedicated microscopy camera capable of capturing still images up to 20 megapixels in size (5000 x 4000 pixels) and transmitting them via a digital connection to a computer. However, it is also capable of composite analogue video output and can be used for live image display. Similarly, the Nikon Coolpix cameras (950/990/995/5000/5700) are general consumer, still-image cameras that store images up to 5 megapixels in size onto removable compact flash media. They also have composite video output that provides a live image view updated at 15 frames per second.

Dual still-image and live video output is slowly becoming the norm because, even when camera and eyepieces are parfocal, it is still preferable for fine focusing to be performed automatically by the camera rather than through the microscope eyepieces. The only way to provide this function is through some form of 'live' or rapidly updated camera view, as found in the Nikon DXM1200 and the Kontron/Jenoptik ProgRes 3012 and C14, which provide this within their image-grabbing software.
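The multiple-frame averaging mentioned above is simple to illustrate: averaging N exposures of the same field reduces random sensor noise by roughly the square root of N. A minimal sketch using simulated noise rather than real camera data:

```python
import random
import statistics

def average_frames(frames):
    """Pixel-wise mean of several exposures of the same field."""
    n = len(frames)
    return [sum(pixels) / n for pixels in zip(*frames)]

# Simulate 16 noisy exposures of a flat grey field (true value 128).
random.seed(42)
true_value = 128.0
frames = [
    [true_value + random.gauss(0, 10) for _ in range(1000)]
    for _ in range(16)
]

single_frame_noise = statistics.stdev(frames[0])
averaged = average_frames(frames)
averaged_noise = statistics.stdev(averaged)

# Averaging 16 frames should cut random noise roughly 4-fold (sqrt(16)).
print(round(single_frame_noise, 1), round(averaged_noise, 1))
```

The trade-off is acquisition time: sixteen exposures take sixteen times as long, which is why the technique suits static specimens rather than live viewing.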
Table 1: Some typical digital microscopy cameras

High-end (>US$5000): Nikon DXM1200; ProgRes C14; Leica DC500; Zeiss Axiocam HR; Zeiss Axiocam MR
Mid-range (US$1000-5000): Nikon DN100; Leica DC150, DC300; Olympus DP12, DP50
Cameras which may optionally be microscope-mounted (US$500-5000): Olympus Camedia E-10, C-4040; Nikon Coolpix 950, 990, 4500, 5000; Nikon D100, D1, D1H, D1X; other digital SLRs

Note: This list is by no means comprehensive and represents the more readily available models through microscope vendors.

'Live video' digital cameras remain in demand for applications where real-time image processing is required, such as in robotic telepathology, where the live camera view is used for consultation or as the means by which the software-based autofocus is calculated. This area continues to develop, with manufacturers using multiple image sensors to provide better image quality (so-called 'triple-chip CCDs') and/or avoiding analogue artefacts through use of the relatively new IEEE1394 ('Firewire') digital interface. The Sony DFW series is an example of a range of microscope cameras that provide a pure digital live image of up to 1280 x 1024 pixels at 15 frames/second through an IEEE1394 interface.

Macroscopic Photography

In the past, the reliance has been on chemical film photography to produce black-and-white or more expensive colour prints to supplement the descriptive prose of the pathology report. The photocopier has also been employed for some specimens, particularly those with a flat cut surface, to produce instant, inexpensive prints that could be used to mark sites of block sampling and as a replacement for line drawings. Photomicrographs did not accompany such reports and were reserved for clinico-pathological conferences and published text.
Digital photography has changed the face of the pathology report significantly, allowing the incorporation of coloured prints of the gross specimen as well as of relevant microscopic features, greatly enhancing the accuracy and reproducibility of the findings. As there are essentially no running costs involved, there is no limit to the number of images used, so it is possible to replace or supplement traditional, often lengthy, descriptive prose with high-quality images that greatly improve the quality of the report. Coupled with synoptic texts, reports are more accurate and succinct,3 and lengthy block keys and hand-drawn diagrams can be replaced with digital photographs.

Several reports describe home-assembled digital photographic set-ups for routine use in surgical and autopsy pathology.4-7 For an optimal gross photography set-up, the camera must be controlled remotely to avoid contamination, and downloading of images and specimen/accession number recognition should be automatic. While it is possible to obtain good digital images with many of the lower-end cameras that produce 3-5 megapixel images, automatic downloading and remote control are much more difficult to achieve. One group has successfully developed software for these two functions employing the Coolpix 995 camera, spending about US$26,500 on the system and achieving an annual saving of US$7,500 in labour and film costs.8

We have previously described a prototype of an automated system,9 which has now been replaced by the more sophisticated MacroPath system (Milestone, Bergamo, Italy; retails at about US$18,000). This system allows remote control of a video camera (with image grabber) through a foot pedal for zoom, capture and save features (Figure 1), autofocus, automatic downloading, and the use and automatic recognition of specific specimen accession numbers.
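As an illustration of the automatic downloading and accession-number filing described above, a helper might move each captured image into a per-case folder and number repeat captures. This is a hypothetical sketch, not the MacroPath or Coolpix software; the folder layout, naming scheme and `file_capture` helper are all assumptions:

```python
import shutil
import tempfile
from pathlib import Path

def file_capture(image_path, accession, archive_root):
    """Move a newly captured image into <archive>/<accession>/, numbering
    the copies so repeat captures of one specimen never collide."""
    case_dir = Path(archive_root) / accession
    case_dir.mkdir(parents=True, exist_ok=True)
    seq = len(list(case_dir.glob("*.jpg"))) + 1
    dest = case_dir / f"{accession}_{seq:03d}.jpg"
    shutil.move(str(image_path), dest)
    return dest

# Demo with a throwaway directory and an empty stand-in image file.
root = Path(tempfile.mkdtemp())
(root / "incoming").mkdir()
capture = root / "incoming" / "img0001.jpg"
capture.write_bytes(b"")
dest = file_capture(capture, "S04-12345", root / "archive")
print(dest.name)  # S04-12345_001.jpg
```

A production system would of course take the accession number from the barcode scanner rather than a function argument.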
In addition, the requisition form may be photographed and stored, and the images can be stored to the laboratory information system, shared on the Intranet among pathologists, or distributed via floppy disk or CD-ROM. A useful feature of the system is the provision for measurements to be made on the formalin-resistant touch screen. Prior calibration allows measurements to be made by drawing a line between points of interest on the image with a stylus or even a finger. In a similar manner, sites of sampling can be indicated with automatic labelling, so that lengthy conventional block keys, an important aspect of report documentation, can be dispensed with (Figure 2). Selected images can be incorporated into the report, exported through the Internet, or appended as colour prints.

Figure 1: The Milestone MacroPath system comprising a camera (black arrow), foot pedal control with zoom in, zoom out and freeze pedals (white arrow), formalin-resistant touch screen monitor with attached splash-proof computer (yellow arrow), infra-red bar code scanner (red arrow) and keyboard (broad arrow)

Figure 2: Digital image of uterus, bilateral tubes and ovaries with leiomyomas, including a large subserosal leiomyoma. Measurements of the specimen made on the image and sites of sampling (block key) are shown. When supplemented by images of various cut surfaces, descriptive prose and block keys can be dispensed with.

Microscopic Photography (Photomicrography)

A common mistake when making the transition to digital imaging is to be fixated upon the camera. Although it is the camera that converts the analogue image into a digital form, it is the interaction of all the components that determines the final outcome. Therefore, any weak component in the imaging chain can be detrimental to the final product.
The minimum hardware components required for digital photomicrography are a digital camera, a camera-to-microscope adaptor, a microscope, and a computer with an adequate graphics card and monitor. Factors such as system Central Processing Unit speed, hard drive speed and capacity, and the nature of the removable storage medium are secondary concerns.

Dedicated digital microscopy cameras generally do not have a lens and attach to the microscope via a common connection, the 'C-mount', a 1-inch diameter x 32 threads/inch screw originally designed for 16-mm motion video cameras but now also adopted for lens-less video cameras. When a digital camera has a fixed lens, a feature of general consumer cameras such as the Nikon Coolpix series and Olympus D-3030 series, a 'C-mount adaptor' provides another layer of optics to compensate for the camera lens.8

The importance of good optics cannot be overemphasised, irrespective of the image sensor used. Noticeable improvement in digital image quality is achieved by upgrading objective lenses from fluorochromatic ('plan fluor') to apochromatic ('plan apo'). It is beyond the scope of this paper to discuss the relationship between pixel size and optical resolution. Suffice to say, each component in the imaging pathway should be calibrated to its optimum state. This includes Köhler illumination for the microscope, white balancing for the camera, and monitor calibration (contrast, brightness and gamma settings).

Digitisation of Previous Collections of Projection Slides (Kodachromes)

The conversion to digital images makes previous collections of macroscopic and microscopic images on film obsolete.
These can be scanned with a film scanner (if the chemical images are still in strips they can be scanned more expediently) and stored as digital images, but a more convenient and expedient method is to photograph the projection slide over an X-ray viewing box, a procedure that takes no more than a few seconds compared to the more laborious and lengthy scanning process. Some editing of the acquired image may be necessary. Similarly, the difficulty of acquiring low-power scanning images of an entire tissue section can be circumvented by photographing the tissue section over an X-ray viewing box.

Practical Aspects

Instead of mounting the camera via the C-mount, it may be placed against one eyepiece of the microscope and images captured through this lens. This method is cumbersome and requires a steady hand. As previously discussed, it is preferable to allow the camera to perform automatic fine focussing rather than to focus through the microscope eyepiece, even if the system is parfocal. Although one axiom with digital cameras is "what you see is the image you capture", uneven field illumination or vignetting may commonly be present in low-power images. While the view through the eyepieces may not reveal unevenness in the peripheral field, as the human eye may not detect it, the problem is clearly displayed in the digital image. Vignetting is the result of the coning of the light path between the microscope objective and the camera, or may be due to an uneven light source. The pre-digital approach was simply to crop the image, decreasing the effective field. With digital imaging, retrospective correction of uneven illumination is possible through mathematical modelling of the illumination pattern.10 An even simpler method is to take a blank reference image and subtract this from subsequent images.
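The blank-reference correction just described can be sketched in a few lines. The passage describes subtracting the reference; the division (flat-field) variant shown here is the usual choice for multiplicative fall-off such as vignetting, and the tiny 2 x 3 "images" are invented purely for illustration:

```python
def flat_field_correct(image, blank, target=None):
    """Correct uneven illumination by dividing each pixel by the
    corresponding pixel of a blank (specimen-free) reference frame,
    then rescaling to the reference frame's mean brightness."""
    if target is None:
        target = sum(map(sum, blank)) / (len(blank) * len(blank[0]))
    return [
        [img_px / blank_px * target
         for img_px, blank_px in zip(img_row, blank_row)]
        for img_row, blank_row in zip(image, blank)
    ]

# A field that falls off towards the right edge (vignetting): the blank
# frame and the image of a uniform specimen show the same fall-off.
blank = [[200.0, 180.0, 160.0]] * 2
image = [[100.0, 90.0, 80.0]] * 2
corrected = flat_field_correct(image, blank)
print(corrected[0])  # [90.0, 90.0, 90.0] -- the shading is gone
```

Because the same reference frame serves every subsequent capture at that magnification, it only needs to be acquired once per session.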
This reference frame corrects any uneven illumination present.11 Vignetting can also be avoided to a large extent by using higher-magnification objectives.

Most cameras provide both optical and digital zoom facilities. While optical zoom is useful, digital zoom should be avoided, as the image tends to blur and the magnification is not true. Simple digital magnification, or increasing the number of pixels in an image, will not provide increased resolvable detail beyond the limits of the optics.

Three main variables influence the quality of a digital microscopy image. A combination of the objective lens, the image sensor size, and the light path between the objective and the sensor determines the physical area imaged in a single field. As with the optical microscope, higher numerical apertures are associated with higher-grade objective lenses and therefore better-quality digital images.1 The numerical aperture, quality, and type of the microscope objective lens determine the accuracy in focusing light on the imaging sensor and therefore image quality and resolution. The image sensor in the digital camera determines how this light is converted into a digital signal and the number of pixels created.

Digital microscopy images are acquired for various purposes including teaching and clinical meetings, conference presentations, diagnosis, image analysis and archiving, each with slightly different requirements. Images for a clinical meeting are best projected live, in real time, with no delays, and the image size requirements would be modest, as the commonest mode of display is an analogue television set that has, at best, 550-600 horizontal lines. Thus, a 640 x 480 pixel image would suffice. On the other hand, an image for publication would require sufficient pixels to satisfy the publisher. A 4 x 3 inch image on a 300 dpi home printer would ideally require at least a 1200 x 900 pixel image. In PowerPoint presentations, only 72 pixels per inch are required.
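The pixel requirements above follow from simple arithmetic (pixels = inches x dpi); a throwaway helper makes the quoted 4 x 3 inch, 300 dpi figure explicit. The 10 x 7.5 inch slide canvas in the second call is an assumption for illustration:

```python
def pixels_for_print(width_in, height_in, dpi):
    """Minimum pixel dimensions needed to print at a given physical
    size and resolution: one image pixel per printed dot."""
    return round(width_in * dpi), round(height_in * dpi)

# The 4 x 3 inch print at 300 dpi quoted in the text:
print(pixels_for_print(4, 3, 300))    # (1200, 900)
# A full presentation slide at 72 pixels per inch:
print(pixels_for_print(10, 7.5, 72))  # (720, 540)
```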
For telepathology purposes, it seems that the price and availability of digital imaging equipment, rather than scientific considerations of adequate sampling, have guided pathologists. Many studies have employed digital cameras with analogue video output. Such cameras typically have an image dimension in the vicinity of 768 x 576 pixels. Thus, some pathologists have based their clinical practice not upon a scientific measure but on an arbitrary figure set by camera manufacturers. Telepathology systems mostly employ images that are either in the region of 352 x 255 pixels (video-conferencing size) or 768 x 576 pixels (typical framegrabber image size).

Image Storage

There are two aspects for consideration: the physical storage and the method of storage. Physical storage for daily work is commonly on fixed hard drives, which currently vary between 20 and 100 gigabytes in size. Portable hard drives offering similar performance through a Firewire (IEEE1394) or USB2 connection are increasingly popular; however, avoid the comparatively slower USB1 connections. Removable media such as Zip disks and optical disks are relatively expensive, and USB media allow a cheap and convenient method of transferring data between computers. CD-R (650-700 MB) is the most economical form of near-permanent storage, although DVD-R (4.6 GB) should become cost-effective in the near future.

Large image datasets are the norm when dealing with digitised slides, and some form of data compression is often necessary. It is also necessary to compress images when the transmission bandwidth is narrow. The compromise for a smaller file is a decrease in quality. To the human eye this may not be noticeable, but lossy compression such as JPEG or JPEG2000 inevitably introduces digital artefacts and alters the numerical values of the image.
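That alteration of numerical values is easy to demonstrate with a toy quantisation step, the basic mechanism by which lossy codecs discard precision. This is a deliberate simplification: real JPEG quantises frequency-domain coefficients, not raw pixels.

```python
def quantise(values, step):
    """Coarsen values to multiples of `step`, as lossy codecs do to
    transform coefficients; the discarded precision is unrecoverable."""
    return [round(v / step) * step for v in values]

row = [101, 103, 118, 119, 140, 142]
coarse = quantise(row, 8)
print(coarse)         # the stored values have shifted from the originals
print(coarse == row)  # False: the change is numerically measurable
```

A densitometric measurement made on `coarse` would therefore differ from one made on `row`, even though the two look identical on screen.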
This may be significant if the image is to be used for image analysis. For printing, viewing and even interpretation, compressed images of lower quality are acceptable provided the compression ratio is controlled.

Image Printing

Sales of general consumer digital cameras now exceed those of film cameras, so many film-processing laboratories will now print digital images on glossy photo paper that are indistinguishable from those derived from 35-mm film. Submission of digital images for printing via the Internet is an acceptable and convenient route provided there is access to a good Internet connection; often, because of the size of the data, a broadband connection may be required.

Inkjet printing, when coupled with the appropriate paper, can produce publication-quality results, and it is not uncommon for professional photographers to exhibit their work in this manner. Colour laser printers are better suited for high-volume output but are generally not appropriate for photographic-quality prints. The relationship between ink, paper and environment determines the quality and longevity of the print. Inkjet prints fade or show colour alteration, a phenomenon that usually takes weeks to months and is less likely to occur if the print is kept away from direct sunlight and behind glass, protected from environmental elements. 'Curing' the print immediately after production by placing it away from air and light, e.g. in an envelope, for a period of time has been a suggested solution, but there is no evidence to show that this improves subsequent colour stability. Matt photographic rather than glossy paper improves longevity, but the more familiar glossy finish may be preferred.

Production of monochromatic images is often necessary, as colour plate publication is expensive. Until recently, many photo printers were unable to print black-and-white images without introducing some unsightly colour tint.
A simple solution was to switch off the colour cartridge and use only black ink; however, the results were often slightly coarse, particularly across smooth monochromatic tonal gradients. Recent-generation photo printers such as the Epson 1270 are able to produce a neutral grey using a balanced mixture of black and colour inks. Even so, many prefer the greater control and tonal range offered by dedicated monochrome ink cartridges.

If the publisher accepts digital files, sending an uncompressed file (e.g., TIFF format, sRGB colour space) on CD-ROM is always preferable to their attempting to scan the printed image. This circumvents the need to print. (Although printed illustrations are still required in the initial review process, they need not be of the best quality.)

Total Slide Digitisation and Archiving

The traditional glass slide collection of interesting cases remains an excellent teaching resource provided the collection is maintained. Unfortunately, such collections are fragile, will fade, cannot be replaced if damaged, are difficult to transport, and cannot be readily shared among many and never simultaneously. Additionally, they are not in a form suitable for immediate presentation or publication. Digital image archiving provides the solution to many of these drawbacks. At the very least, multiple representative images may be stored and, provided they are properly catalogued, can be retrieved and disseminated far more easily and widely, and without fear of damage. Sampling error and inadequacy of representative images are overcome by the development of total slide digitisation.12 Systems from companies like Zem/Nikon,13 BacusLabs,14 Aperio,15 and Interscope16 can scan the entire tissue area on a slide at high power (20x or 40x objective) and present the data in a manner that replicates the actual slide without loss of resolution. Such systems may form the basis of the anatomical pathology laboratories of the future.
When coupled with rapid tissue processing systems such as the microwave-stimulated system from Milestone,17 which is capable of ultra-rapid tissue processing from fresh tissue to paraffin-embedded block within 30-90 minutes, it will allow large processing "factories" to service pathologists located at distant sites. Specimens can be transported to the centralised "factory" where examination, photography of macroscopic specimens, ultra-rapid tissue processing and whole slide digitisation take place. The images of gross specimens and total digitised scans of microscopic sections can be transmitted for reporting by pathologists located at distant laboratories, eliminating the need to incur the expense of maintaining such laboratories.18

The American Board of Pathology conducts its examination with virtual microscopic slides acquired through whole slide digitisation employing the BacusLabs system.14

Telepathology

Telepathology is practised to varying degrees worldwide and is primarily used for consultation.19 This may be for primary opinion, frozen section consultation being an important application in Europe,20-22 or for second opinion.23,24 There is some confusion in the terminology of the current methodologies. "Simple store-and-forward" telepathology is the act of capturing several representative images beforehand and forwarding them to a remote site for review.19,25 "Dynamic" telepathology has been used in the United States to describe real-time consultation where the pathologist views the live microscope image updated at least 15 frames per second.26 This is commonly done through a video-conferencing codec.27 However, elsewhere, users have adopted the term "dynamic" to describe any consultation occurring in real time, irrespective of the mode of image transfer.28

In between these two modalities the terminology overlaps, and other terms such as 'active' and 'passive'21,29 have been introduced in an attempt to provide more clarity. The introduction of digital slide scanning systems has further complicated terminology.29,30 They can be regarded as extremely sophisticated store-and-forward systems, but the difference between whole slide digitisation and a dozen representative pictures transmitted by email is considerable. We have suggested a classification for such telepathology systems.2

In recent times, reference immunohistology laboratories in the United States31 and Australia32 have employed the Internet to post images of stains performed on referred cases in order to circumvent the delays in courier/postal delivery of the stained glass slides. While these can currently only be of selected representative fields, the advent of total slide digitisation will open new avenues of rapid slide transfer and examination.

Quality Assurance and Educational Programmes

The use of representative digital images or entire digitised slides will eliminate the need to section and disseminate multiple glass slides, besides ensuring that all recipients see exactly the same section.33,34 This circumvents the time and costs involved in slide distribution and is particularly important in cytology quality assurance programmes, where only limited smears are available.25 Digital images will also eliminate the need for cumbersome storage of educational and quality assurance glass slides, which fade with time and take up space.
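At its simplest, the "store-and-forward" exchange described under Telepathology amounts to bundling a handful of representative images with case metadata and sending the bundle on. A minimal sketch using only the Python standard library; the archive layout, the `package_case` helper and the metadata field names are invented for illustration:

```python
import io
import json
import zipfile

def package_case(case_id, images, metadata):
    """Bundle representative images (name -> bytes) with case metadata
    into one archive suitable for store-and-forward transfer."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
        zf.writestr(f"{case_id}/metadata.json", json.dumps(metadata))
        for name, data in images.items():
            zf.writestr(f"{case_id}/{name}", data)
    return buf.getvalue()

# Stand-in image bytes; a real system would read the captured JPEGs.
payload = package_case(
    "C2004-0042",
    {"field_01.jpg": b"...", "field_02.jpg": b"..."},
    {"stain": "H&E", "magnification": "40x"},
)
with zipfile.ZipFile(io.BytesIO(payload)) as zf:
    print(sorted(zf.namelist()))
```

The single self-describing payload can then travel by email, FTP or a web upload, which is precisely what makes the store-and-forward model so undemanding of bandwidth compared with live video.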
Furthermore, digital images can be edited and improved in quality and can be made available on websites for educational purposes after completion of the quality assurance exercise.25,34 The same advantages apply both to digital microscopic images and to images of gross specimens from autopsy or surgical biopsy examination.9

Digital images circulated on CD-ROM will also gradually replace the traditional glass slides that are used for the teaching slide seminar, and colour images can also be made available as a component of the handout for such seminars and lectures.

Digital Pathology Images in Education

The idea of using image databases as educational resources is not new. There are several websites that provide pathology-related images, the best known being Webpath.35 However, the quality of the images is often poor, both in diagnostic sophistication and quality, and in the number of macroscopic and histological images. A major problem has been the lack of a standard image size. Many older databases began collating images before high-quality digital cameras were readily available. Total slide digitisation now replaces the need to store representative images of each teaching case, as digitised slides virtually contain all the data in the tissue section.

At a macroscopic level, three-dimensional digital simulations of organs provide an alternative to plastination or formalin-filled pathology pots. Digital representations are not subject to the legalities of organ and tissue retention; neither do they require maintenance or physical space for storage. There are several methods of creating a three-dimensional specimen. Contemporary rendering software is capable of reconstructing faces using only two different views. Rendering diseased human organs is a more difficult task due to their more complex shapes and textures.
At Oxford University, we used a lower cost but more labour-intensive method of photograph- ing the organ from at least 12 angles and reconstructing it into an interactive rotatable object, deliverable through a web in- terface as a Quicktime VR object.36 Importantly, both gross and microscopic digital images allow a greater scope for inter- active teaching programmes that can be posted on websites or distributed on CD-ROM. At a microscopic level, visualisation of histological image data in three dimensions does not significantly improve diagnostic accuracy but can be a useful exercise in understanding the micro-anatomical growth and arrangement of structures. For example, the impact and influence a tumour has on its sur- rounding microvasculature may provide clues to the diagnosis and understanding of its growth pattern. Image data derived from confocal microscopy can be rendered into a three-dimen- sional image by computer software. It is also possible to do this at a lower cost using serial sections and normal microscopy images, as we have done with tubular carcinoma of the breast.37 Medical Vision Systems Automated cytology screening machines have made major in- roads into pathology practice where they address the shortage of cytology screeners and reduce the false-negative detection rate of the human screener. 38,39 It is unlikely that computer di- agnosis will completely replace human interpretation. How- ever, computer-assisted systems can enhance human perform- ance by identifying highly suspicious areas or cells in cytologi- cal preparations, thus acting as a safety net or quality assur- ance for the pathologist. We are engaged in the preliminary Leong et al: Digital photography in anatomical pathology 68 J Postgrad Med March 2004 Vol 50 Issue 1� � 68 CMYK development of such systems.37 The day may come when it will be regarded as negligent in a court of law if the reporting pathologist does not employ such a system for back-up or qual- ity assurance. 
Fortunately, as yet, there are no such systems available. The attempt to teach machines to recognise histological conditions is a worthy pursuit, with the intermediate benefit of making histological interpretation more objective. Given an appropriate training set, it is possible to derive objective criteria to describe a histological condition, and it is also possible to statistically rank the features in terms of diagnostic strength.37 Work in several areas, including colonic dysplasia,40 grading of cervical intraepithelial neoplasia,41 endometrial hyperplasia,42 and prostatic biopsy diagnosis,43,44 has made this a reality.

Other Applications of Digital Imaging in Pathology

Quantitative pathology may be viewed as an instrument of research. Its penetration into daily clinical practice is low, despite the existence of many aspects of histopathology that would benefit from some form of quantification: mitotic counting, assessment of tumour volume and extent, morphometry and immunohistological densitometric quantitation being some examples. Quantitation of digital images will be much more reliable than assessment by eyeballing using poorly defined criteria that cannot be reliably reproduced. Image analysis may render tumour grading more accurate and reproducible, and automation will make quantitation more acceptable in routine practice.

Some critics of digital imaging cite the technique as an invitation to scientific fraud.45 In fact, the opportunity to be fraudulent existed long before the advent of digital imaging, and there are certainly easier ways of falsifying scientific data. It is impossible to have an image that is completely free of some form of processing. For single-chip CCD cameras, the presence of a colour filter array means that complex data interpolation is employed to create the final image. Additionally, there is further processing of the image data to improve the signal-to-noise ratio.
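As a concrete illustration of this kind of routine image processing, the intensity distribution of an image can be linearly rescaled over the full display range (the same operation later suggested for restoring faded sections). The sketch below is ours, not from the paper; the NumPy implementation and the percentile cut-offs are assumptions:

```python
import numpy as np

def rescale_intensity(image, low_pct=1.0, high_pct=99.0):
    """Linearly stretch an image's intensity distribution over the
    full 8-bit range, using percentile cut-offs so that a few outlier
    pixels do not dominate the scaling (cut-offs are arbitrary)."""
    img = image.astype(np.float64)
    lo, hi = np.percentile(img, [low_pct, high_pct])
    if hi <= lo:                       # flat image: nothing to stretch
        return image.astype(np.uint8)
    stretched = (img - lo) / (hi - lo)
    return (np.clip(stretched, 0.0, 1.0) * 255).astype(np.uint8)

# A "faded" image: values compressed into a narrow, bright band.
faded = np.random.default_rng(0).uniform(180, 220, size=(64, 64))
restored = rescale_intensity(faded)
print(restored.min(), restored.max())   # now spans the full 0..255 range
```

The same linear remapping, applied per channel, is what makes a faded section readable again without physically re-staining it.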
Many techniques, such as spectral imaging and confocal microscopy, are dependent upon processing of the image data to produce a viewable image. Photography of immunofluorescence is not optimal without some degree of contrast and brightness adjustment afterwards, in an attempt to have the digital image match what was viewed through the microscope eyepieces. The closest equivalent to a raw negative is the unprocessed data files that some cameras, such as the Nikon D1 series and Coolpix 5700, can produce. This is image data in a proprietary format that requires software to assemble it into a digital image. In some situations, image processing can assist interpretation. We are all aware of inter- and intra-laboratory variations in histochemical and immunohistological staining. The human eye adapts and compensates for these variations. However, when sections fade with time or sun-exposure, re-staining is often necessary. With digital image processing it is possible to rescale the distribution of values in a faded image to provide clarity equivalent to re-staining the section.

Conclusion

While digital imaging is relatively new in pathology, it is rapidly being adopted in many areas because of cost savings, rapidity and convenience of usage, and the many new applications it offers. The use of digital images enhances the quality and accuracy of pathology reports, and whole slide digitisation has the potential to completely change the way pathologists examine and store tissue sections. In addition, the applications in education are numerous and include conference presentations, publication, undergraduate and postgraduate teaching, as well as remote consultation through telepathology and quality assurance programmes.
Digital images also have many other applications that are yet to be exploited, including morphometric and densitometric quantitation, and use in medical vision systems with the ultimate aim of computer-generated histological diagnosis, either as a primary diagnostic tool, as a back-up system to aid the pathologist in diagnosis, or in quality control. Rapidly falling costs and extensive applications make the adoption of digital imaging in anatomical pathology laboratories an essential consideration.

References

1. Leong AS, James C, Thomas A. Handbook of surgical pathology. London: Churchill Livingstone; 1996. p. 43-100, 177-92.
2. Leong FJ, Leong AS. Digital imaging applications in anatomical pathology. Adv Anat Path 2003;10:88-95.
3. Leong AS. Synoptic/checklist reporting of breast biopsies: Has the time come? Breast J 2001;7:271-4.
4. Schubert E, Gross W, Sideritis RH, Deckenbaugh L, He F, Becich MJ. A pathologist-designed imaging system for anatomic pathology signout, teaching and research. Sem Diagn Pathol 1994;11:263-73.
5. Cruz D, Seixas MA. A surgical pathology system for gross specimen examination. Proc AMIA Symp 1999;236-40.
6. Belanger AJ, Lopes AE, Sinard JH. Implementation of a practical digital imaging system for routine gross photography in an autopsy environment. Arch Pathol Lab Med 2000;124:160-5.
7. Marchevsky AM, Dulbandzhyan R, Seely K, Carey S, Duncan RG. Storage and distribution of pathology digital images using integrated Web-based viewing systems. Arch Pathol Lab Med 2002;126:533-9.
8. Park R, Eom J, Park P, Lee K, Joo H. Automation of gross photography using a remote-controlled digital camera system. Arch Pathol Lab Med 2003;127:726-31.
9. Leong AS, Visinoni F, Visinoni C, Milios J. An advanced digital image-capture computer system for gross specimens: A substitute for gross description. Pathology 2000;32:131-5.
10. Likar B, Maintz JBA, Viergever MA, Pernus F. Retrospective shading correction based on entropy minimization.
J Microsc 2000;197:285-95.
11. Leong FJ, Brady M, McGee JO. Correction of uneven illumination (vignetting) in digital microscopy images. J Clin Pathol 2003;56:619-21.
12. Leong FJ, McGee JO. Automated complete slide digitization: a medium for simultaneous viewing by multiple pathologists. J Pathol 2001;195:508-14.
13. Nikon UK. http://www.nikon.co.uk and http://www.nikon.co.uk/inst/flashframeset.htm. Accessed September 2003.
14. BacusLabs. http://www.bacuslabs.com/. Accessed September 2003.
15. Aperio Technologies. http://www.aperio.com/home.asp. Accessed September 2003.
16. Interscope. http://www.interscopetech.com/index-products.htm. Accessed September 2003.
17. Visinoni F, Milios J, Leong AS-Y, Boon ME, Kok LP. Ultra rapid microwave/variable pressure-induced tissue processing: description of a new tissue processor. J Histotechnol 1998;21:219-24.
18. Leong AS-Y, Leong FJW-M. Telepathology: the system for Australia. National Telepathology Conference, Sydney, October 5-6, 2001.
19. Leong FJW-M, Graham AK, Gahm T, McGee JOD. Telepathology: Clinical utility and methodology. In: Underwood J, ed. Recent Advances in Histopathology. London: Churchill Livingstone; 1999. p. 217-40.
20. Eide TJ, Nordrum I. Frozen section service via the telenetwork in Northern Norway. Zentralbl Pathol 1992;138:409-12.
21. Kayser K. Telepathology in Europe. Its practical use. Arch Anat Cytol Pathol 1995;43:196-9.
22. Wellnitz U, Fritz P, Voudouri V, Linder A, Toomes H, Schmid J, et al. The validity of telepathological frozen section diagnosis with ISDN-mediated remote microscopy. Virchows Arch 2000;437:52-7.
23. Milosavljevic II, Spasic P, Mihailovic D, Kostov M, Mijuskovic S, Mijatovic D, et al. Telepathology: second opinion network in Yugoslavia. Adv Clin Path 1998;2:156.
24. Della Mea V, Beltrami CA.
Telepathology applications of the Internet multimedia electronic mail. Med Inform (Lond) 1998;23:237-44.
25. Lee ES, Kim IS, Choi JS, Yeom B, Kim HK, Ahn GH, et al. Practical telepathology using a digital camera and the Internet. Telemed J E Health 2002;8:159-65.
26. Dunn BE, Choi H, Almagro UA, Recla DL. Combined robotic and nonrobotic telepathology as an integral service component of a geographically dispersed laboratory network. Hum Pathol 2001;32:1300-3.
27. Vazir MH, Loane MA, Wootton R. A pilot study of low-cost dynamic telepathology using the public telephone network. J Telemed Telecare 1998;4:168-71.
28. Afework A, Beynon MD, Bustamante F, Cho S, Demarzo A, Ferreira R, et al. Digital dynamic telepathology: the Virtual Microscope. Proc AMIA Symp 1998:912-6.
29. Kayser K, Syzmas J, Weinstein R. Telepathology: Telecommunication, electronic education and publication in pathology. Berlin: Springer-Verlag; 1999.
30. Demichelis F, Barbareschi M, Dalla Palma P, Forti S. The virtual case: a new method to completely digitize cytological and histological slides. Virchows Arch 2002;441:159-64.
31. PhenoPath Laboratories. http://www.phenopath.com/. Accessed September 2003.
32. AIP Laboratory Zoom-in system. http://www.aiplab.com/online/login.mv. Accessed February 2003.
33. Leong FJ, Graham AK, Schwarzmann P, McGee JO. Clinical trial of telepathology as an alternative modality in breast histopathology quality assurance. Telemed J E Health 2000;6:373-7.
34. Leong FJ, McGee JO. Robotic interactive telepathology in proficiency testing/quality assurance schemes. Electr J Pathol Histol 2001;012-01.
35. Webpath. http://www-medlib.med.utah.edu/WebPath/webpath.html. Accessed October 2003.
36. Oxford University Laboratory Medicine Course. http://www.ndcls.ox.ac.uk/LMC. Accessed September 2003.
37. Leong FJ. An automated diagnostic system for tubular carcinoma of the breast: a model for computer-based cancer diagnosis. DPhil Thesis: Oxford University; 2002.
38.
Cenci M, Nagar C, Giovagnoli MR, Vecchione A. The PAPNET system for quality control of cervical smears: validation and limits. Anticancer Res 1997;17:4731-4.
39. Patten SF Jr, Lee JS, Wilbur DC, Bonfiglio TA, Colgan TJ, Richart RM, et al. The AutoPap 300 QC System multi-center clinical trials for use in quality control rescreening of cervical smears: II. A prospective intended use study. Cancer 1997;81:337-42.
40. Hamilton PW, Bartels PH, Thompson D, Anderson NH, Montironi R, Sloan JM. Automated location of dysplastic fields in colorectal histology using image texture analysis. J Pathol 1997;182:68-75.
41. Keenan SJ, Diamond J, Glenn McCluggage W, Bharucha H, Thompson D, Bartels PH, et al. An automated machine vision system for the histological grading of cervical intraepithelial neoplasia (CIN). J Pathol 2000;192:351-62.
42. Morrison ML, McCluggage WG, Price GJ, Diamond J, Sheeran MR, Mulholland KM, et al. Expert system support using a Bayesian belief network for the classification of endometrial hyperplasia. J Pathol 2002;197:403-14.
43. Bartels PH, Thompson D, Bartels HG, Montironi R, Scarpelli M, Hamilton PW. Machine vision-based histometry of premalignant and malignant prostatic lesions. Pathol Res Pract 1995;191:935-44.
44. Montironi R, Bartels PH, Hamilton PW, Thompson D. Atypical adenomatous hyperplasia (adenosis) of the prostate: development of a Bayesian belief network for its distinction from well-differentiated adenocarcinoma. Hum Pathol 1996;27:396-407.
45. Suvarna SK, Ansary MA. Histopathology and the 'third great lie'. When is an image not a scientifically authentic image? Histopathology 2001;39:441-6.

CMOS photodetector systems for low-level light applications

Naser Faramarzpour, Munir M. El-Desouki, M.
Jamal Deen, Shahram Shirani, Qiyin Fang

N. Faramarzpour, M. M. El-Desouki, M. J. Deen (corresponding author; e-mail: jamal@mcmaster.ca), S. Shirani: Department of Electrical and Computer Engineering (CRL 226), McMaster University, Hamilton, ON, Canada L8S 4K1
Q. Fang: Department of Engineering Physics, McMaster University, Hamilton, ON, Canada L8S 4K1

Received: 19 August 2007 / Accepted: 17 October 2007 / Published online: 8 November 2007
© Springer Science+Business Media, LLC 2007
J Mater Sci: Mater Electron (2009) 20:S87–S93. DOI 10.1007/s10854-007-9455-6

Abstract In this work, we have designed, fabricated and measured the performance of three different active pixel sensor (APS) structures. These APS structures are studied in the context of applications that require low-level light detection systems. The three APS structures studied were a conventional APS, an APS with a comparator, and an APS with an integrator. A special focus of our study was on both the signal and noise characteristics of each APS structure, so that the key performance metric of signal-to-noise ratio could be computed and compared. The pixel structures introduced in this work can cover a wide range of applications, from high resolution digital photography using the APS with a comparator to ultra-sensitive biomedical measurements using the APS with an integrator.

1 Introduction

The advances in deep submicron CMOS technologies and integrated microlenses have made CMOS image sensors (especially the active pixel sensor, APS) a practical alternative to charge-coupled device (CCD) imaging technology. A key advantage of CMOS image sensors is that they are fabricated in standard CMOS technologies, which allows for full integration of the image sensor with the analog and digital processing and control circuits on the same chip. This "camera-on-chip" system leads to reduction in power consumption, cost and sensor size, and it allows for integration of new sensor functionalities [1].

The advantages of CMOS image sensors over CCDs [1–3] include lower power consumption, lower system cost, on-chip functionality leading to camera-on-chip solutions, smaller overall system size, random access of image data, selective readout, higher speed imaging, and finally the capability to avoid blooming and smearing. Some of the disadvantages of CMOS image sensors compared to CCDs are lower sensitivity, lower fill-factor, lower quantum efficiency and lower dynamic range (DR), all of which translate into the CMOS imager's lower overall image quality [1–3]. Typical APS sensors have a fill-factor (FF) of around 30%; the FF is typically limited by the interconnection metals and silicides that shadow the photosensitive area and by recombination of the photo-generated carriers with majority carriers [2].

Among the many emerging CMOS imaging applications, biomedical applications that require very low-level light imaging systems are considered a major design challenge. Low-level light bioluminescence applications [4] require novel techniques to reduce the sensor noise and dark current and to increase the gain and sensitivity to low-light levels. Such techniques may involve designs such as the one described in [5], where sensitivity to low-light levels was increased by biasing the photodiode of each pixel to near zero volts and by separating the photodiode from the integration node. Most successful designs that offer higher dynamic range and lower noise comprise a smart APS, where data processing is done at the pixel level [1]. Pixel level processing, which can be referred to as interpixel analog processing [6], can provide high SNR, low-power consumption, increased DR through adaptive
Some of the disadvantages of CMOS image sensors com- pared to CCDs are lower sensitivity, lower fill-factor, lower quantum efficiency, lower dynamic range (DR), all of which translate into the CMOS imager’s lower overall image quality [1–3]. Typical APS sensors have a fill-factor (FF) of around 30% and the FF is typically limited by the interconnection metals and silicides that shadow the pho- tosensitive area and recombination of the photo-generated carriers with majority carriers [2]. Among the many emerging CMOS imaging applica- tions, biomedical applications that require very low-level light imaging systems are considered a major design challenge. Low-level light bioluminescence applications [4] require novel techniques to reduce the sensor noise and dark current and to increase the gain and sensitivity to low- light levels. Such techniques may involve designs such as the one described in [5], where sensitivity to low-light levels was increased by biasing the photodiode of each pixel to near zero volts and by separating the photodiode from the integration node. Most successful designs that offer higher dynamic range and lower noise comprise of a smart APS, where data processing is done on the pixel level [1]. Pixel level processing, which can be referred to as interpixel analog processing [6], can provide high SNR, low-power consumption, increased DR through adaptive N. Faramarzpour � M. M. El-Desouki � M. J. Deen (&) � S. Shirani Department of Electrical and Computer Engineering (CRL 226), McMaster University, Hamilton, ON, Canada L8S 4K1 e-mail: jamal@mcmaster.ca Q. Fang Department of Engineering Physics, McMaster University, Hamilton, ON, Canada L8S 4K1 123 J Mater Sci: Mater Electron (2009) 20:S87–S93 DOI 10.1007/s10854-007-9455-6 image capturing and processing, and high speed due to parallelism and processing during the integration time [6]. 
Since there is a practical limit on the minimum pixel size (4–5 lm), then CMOS technology scaling can be used to increase the number of transistors to be integrated into each pixel. For example, when using a CMOS 0.18 lm tech- nology with a 5 9 5 lm2 pixel and a 30% FF, eight analog transistors or 32 digital transistors can be integrated within the pixel [6]. Since digital transistors take more advantage of CMOS scaling properties, digital pixel sensors (DPS) have become very attractive [1]. A digital pixel sensor integrates an analog-to-digital converter (ADC) in each pixel and the digital data is read out from each pixel, thus resulting in a massively parallel readout and conversion that can allow for very high speed operation [7–10]. The low FF of DPS sensors is no longer an issue for CMOS technologies of 0.18 lm and below [1, 6]. The high speed readout makes CMOS image sensors suitable for very high-resolution imagers (multi-megapixels), especially for video applica- tions. For example, in [8], a 352 9 288 pixel CMOS image sensor was presented that is capable of operating at 10,000 frames/s (1 Gpixel/s) with a power consumption of 50 mW. Smart pixels have been reported to reduce the fixed- pattern noise (FPN) by more than 10 times and to increase the dynamic range [11–13]. For example, in [14], the high DR of 132 dB was achieved in a CMOS APS structure. In another example [15], the low-power design (40 nW per pixel from a 3.3 V supply) with an in-pixel ADC and a free running continuous oscillator achieved a DR of 104 dB. Block-of-pixel readout was achieved in [16] using a DPS design that allowed for seamlessly scanning a 5 9 5 pixel kernel filter across a pixel array of 64 9 32 rather than a line-readout that would require reading five lines to obtain the 5 9 5 block. In [17], a 1-bit first order Sigma-Delta (RD) modulator used 17 transistors for each 2 9 2 block of pixels, 4.25 transistors per pixel, to directly convert photocharge to bits. 
The design is suitable for infrared (IR) applications, which require large charge handling capabilities and fine quantization levels. A Nyquist-rate multi-channel-bit-serial (MCBS) ADC using successive comparisons, at 4.5 transistors per pixel, to convert the pixel voltage to bits was presented in [18]. This design [18] is suitable for visible applications where low fixed-pattern noise and low data rates are required. A pulse-frequency modulation (PFM) scheme was used in [19] to achieve pixel-level ADC with a 23% FF, allowing for low-light adaptation by adjusting the saturation level. The average power consumption per pixel is 85 µW in a 0.25 µm CMOS technology. Using two integration times, a linear APS sensor achieved a DR of 92 dB, as compared to a DR of 61 dB with only one integration time [3].

Despite the many improvements in CMOS sensors, some of which were highlighted above, there are currently no CMOS image sensors that can provide the image quality of CCDs in terms of noise, sensitivity and dynamic range [3]. This makes CMOS image sensors application specific, since it is possible to improve some of the characteristics of the sensor, but not all of them. Different kinds of image sensors satisfy different performance requirements, such as those for digital photography, industrial vision, or medical or space applications.

In this article, three APS structures (the conventional APS sensor, the APS with an integrator, and the APS with a comparator) are discussed and compared to show their applicability to different applications. The article is organized as follows. Section 2 introduces the three different APS structures and the measurements performed: the conventional APS (Sect. 2.1), the APS with comparator (Sect. 2.2), and the APS with integrator (Sect. 2.3). In Sect. 3, the performance of the three different APS structures is compared and their suitability for specific applications discussed. Finally, in Sect. 4, the conclusions are presented.
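The dual-integration-time result cited above [3] can be made concrete with a small calculation: in a linear pixel, adding a second, shorter exposure extends the bright end of the measurable range by the ratio of the two integration times. The sketch below is our own back-of-the-envelope model, not taken from [3]; the 20·log10 voltage-ratio convention and the exposure ratio are illustrative assumptions:

```python
import math

def dr_single_db(v_swing, v_noise_floor):
    """Dynamic range of a linear pixel: ratio of the full output
    swing to the noise floor, expressed in dB (voltage convention)."""
    return 20 * math.log10(v_swing / v_noise_floor)

def dr_dual_db(v_swing, v_noise_floor, t_long, t_short):
    """With a long and a short exposure, the brightest measurable
    signal grows by t_long/t_short, extending the dynamic range by
    20*log10(t_long/t_short) dB in this simplified model."""
    return dr_single_db(v_swing, v_noise_floor) + 20 * math.log10(t_long / t_short)

# Illustrative numbers only: starting from a 61 dB single-exposure
# pixel, a ~35x exposure ratio lands near the 92 dB reported in [3].
noise_floor = 1.0 / 10 ** (61 / 20)            # implies 61 dB for a 1 V swing
base = dr_single_db(1.0, noise_floor)
extended = dr_dual_db(1.0, noise_floor, 35.5, 1.0)
print(round(base), round(extended))
```

Under these assumed numbers the model reproduces the 61 dB to 92 dB improvement, which is the point of the technique: the extra dB come from the exposure ratio, not from a quieter pixel.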
2 Pixel structures studied

In this work, we have designed and fabricated (using a foundry process) three different APS structures. The pixels are fabricated in a 0.18 µm, single-poly, six-metal-layer, salicide commercial CMOS technology. The different pixel structures are fabricated in the same technology and on the same die. In this way, a fair comparison of their performance can be made. The performance characteristics of the pixels are compared to verify their suitability for low-level light and other applications.

2.1 Three transistor APS

The three transistor APS is the simplest and most commonly used APS structure. Each pixel in this structure consists of a photodiode and three transistors. Figure 1a shows the APS circuit, with its designed layout shown later in Fig. 7a. The pixel operates in repeating integration and reset periods. During the integration time, transistor M1 is off and the photodiode junction capacitance is discharged by the internally generated photocurrent and dark current. The voltage drop during the integration period is proportional to the light intensity. At the end of integration, this voltage drop is read through transistor M2, which acts as a buffer. Transistor M3 connects the pixel to the readout bus. At the beginning of the reset period, transistor M1 is turned on for a few microseconds to charge the photodiode junction
The noise power in the sense node voltage generated during reset is given by: V 2 n ¼ kT 2C ; ð1Þ where k is the Boltzmann’s constant, T is the temperature in Kelvin, and C is the sense node capacitance. The effect of 1/f noise during reset has been analyzed in [20], and it has been shown that the reset noise is dominated by thermal noise. During integration, the shot noise is the dominating source of the noise. The noise power in the sense node voltage at the end of integration is approximately given by: V 2 n ’ qðiPH þ iDKÞ C2 tint; ð2Þ where q is the electronic charge, tint is the integration time, iPH is the photocurrent, and iDK is the dark current. The signal at the end of integration can be approximated with (iPH/C)tint. Assuming that all other sources of noise are small compared to the shot noise and that the dark current is negligible, then the SNR of the output (in dB), before saturation, can be approximated by: SNR ’ 10 log iPH tint q ð3Þ Figure 2 shows the measured signal-to-noise ratio (SNR) of our APS, at different light levels for different durations of integration time. Equation 3 indicates that SNR improves by increasing the integration time [21], given that the pixel capacity is not saturated at the end of integration. Figure 2 implies that the SNR curve will cross zero at lower levels of light for a longer integration time. However, the integration time is limited in length by the rate of temporal variation of the signal to be measured. Also, the dark current of the pixel may saturate the pixel capacity before the long integration time ends. The number of transistors that could fit into a pixel was limited in past. This was due to the large size of transistors compared to the desired pixel pitch for medium- to high- resolution imagers. Deep submicron technologies have made it possible to put more transistors into the same die area. This has made the transition from passive pixel Fig. 1 (a) Structure of a three transistor active pixel sensor. 
(b) Output of the APS on channel 1 and reset signal on channel 2, captured on the oscilloscope screen Fig. 2 Signal-to-noise ratio (SNR) of the APS measured at different light powers and for different integration times. Equation 3 suggests, and our measurements indicate, that in the region of operation of APS where the shot noise during integration is the dominant noise source, that the output SNR varies linearly with the logarithm of light power J Mater Sci: Mater Electron (2009) 20:S87–S93 S89 123 sensors (one transistor per pixel) to active pixel sensors (three transistors) and beyond, now possible. It is now feasible to do parts of the data processing within the pixel and develop smart pixels. Smart pixel systems are inte- grated and can perform sophisticated tasks faster than conventional imaging systems. In the following subsec- tions we analyze two APS pixels with in-pixel circuitry that are the core of many smart pixels. 2.2 APS with comparator The general structure of our APS pixel with an internal comparator is shown in Fig. 3a. The pixel consists of eight transistors including the reset transistor, with the layout shown later in Fig. 7b. The pixel has a reference level input and its output has a digital ‘‘High’’ or ‘‘Low’’ value, depending on the value of the sense node voltage across the photodiode relative to the value of the refer- ence voltage of the comparator vref. The photodiode and reset transistor combination of the pixel works in the same manner as the conventional APS. Figure 3b shows different signals from the pixel and how they correspond to each other. Waveforms 1 and 2 in Fig. 3b are the measured reset signal and output of the pixel. Wave- forms 3 is an illustration of the internal sense node voltage, and waveform 4 is the reference level. After reset, the sense node voltage is above the reference level and the output is low. 
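The reference-crossing behaviour of this pixel amounts to a light-to-time conversion, which can be sketched with a first-order model: a roughly constant photocurrent discharges the sense node from the reset level, and the output returns high once the node crosses vref, so the time spent low scales inversely with light level. All component values below are our illustrative assumptions, not the fabricated pixel's parameters:

```python
def pulse_width(i_ph, c_sense=10e-15, v_reset=1.8, v_ref=1.0):
    """Time for a constant photocurrent i_ph (A) to discharge an
    assumed 10 fF sense node from an assumed 1.8 V reset level down
    to the comparator reference; first-order model, ignoring noise."""
    return c_sense * (v_reset - v_ref) / i_ph

# Doubling the photocurrent halves the time the output stays low.
bright = pulse_width(2e-12)   # 2 pA photocurrent
dim = pulse_width(1e-12)      # 1 pA photocurrent
print(dim / bright)           # ratio of pulse widths, ~2.0
```

This inverse relationship between pulse width and photocurrent is what the following paragraph describes from the measured waveforms.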
The sense node voltage will decrease during integration and, if the light level is high enough, it will cross the reference level. Therefore, the duration for which the output stays low is inversely proportional to the light level, and this is the output signal of the pixel. The time at which the output of the pixel goes from "High" to "Low" is fixed by the externally applied reset. The time at which the pulse comes back to "High", however, is affected by the noise that is present in the sense node voltage. The noise sources that contribute to the total noise of the sense node voltage are the same as for the three-transistor APS described above in Sect. 2.1. In this APS with a comparator, an easy way to quantify the noise is from the jitter in the rising edge of the output pulse of the comparator. Figure 4a shows the jitter of the output, captured on the oscilloscope screen. Figure 4b shows the root-mean-square (RMS) value of the jitter of the output, compared to the output pulse width. Figure 4b shows that the SNR of the output is not the limiting factor in the detection of low light levels using this structure. However, sensing lower light levels requires longer integration times to let the sense node voltage drop enough to cross the reference level. This is similar to the three transistor APS, with the difference that the reference level can now also be adjusted to optimize the detection of the light intensity range of interest.

The main advantage of this structure is the immediate analog-to-digital conversion of the signal inside the pixel, which eliminates the readout noise of the subsequent stages of the imager. It also provides parallel and fast A/D conversion of the signal, making it possible to achieve faster scanning times.

2.3 APS with integrator

In most APS structures, including the two described above, the photocurrent is integrated by the junction capacitance of the photodiode.
A diode, however, is not a perfect capacitor, as the junction capacitance changes with the applied bias [21]. As a result, the output of the APS becomes nonlinear [21, 22], and this has an impact on both the signal and SNR characteristics [23]. It should be mentioned that the sense node capacitance of an APS has a parallel component equal to the gate-source capacitance of the buffer transistor (M2 in Fig. 1a). One can reduce the effect of the nonlinear capacitance of the photodiode by making the gate-source capacitance of M2 high, such that it dominates the sense node capacitance. This solution will keep the capacitance at the sense node relatively constant. However, it will result in an increase in the size of the buffer transistor, thus reducing the fill-factor of the pixel. It will also reduce the charge-to-voltage conversion gain of the pixel, thus degrading its sensitivity. An integrator, built around an operational amplifier, can solve this problem by keeping the sense node voltage constant and integrating the photocurrent in its fixed capacitor. We have designed a pixel with a current integrator that integrates the photocurrent into an on-chip metal-oxide-metal capacitor. The schematic of the APS with integrator is shown in Fig. 5a, with its layout shown in Fig. 7c. The measured output of the APS with integrator is shown in Fig. 5b. After the reset period, the capacitance of the integrator discharges. During integration, the operational amplifier of the integrator keeps the bias over the photodiode fixed.
This causes the photocurrent generated in the photodiode to be integrated in the capacitor rather than in the photodiode. The output of the integrator will then increase in proportion to the generated photocurrent during the integration time. Figure 6 shows how the output of the APS with integrator varies with the incident light power. The measurements are done for light at different wavelengths, and they show good linearity of the output with respect to the power of incident light, unless the pixel is saturated.

Analysis of the shot noise during integration, for the APS with integrator, is similar to the three transistor APS, with the only difference being in the value of the capacitance. However, the effect of 1/f noise will be more important now, as different elements of the integrator also contribute to the 1/f noise. It is important to remember that frequency domain analysis is not applicable for the analysis of 1/f noise in this circuit: the APS is a switched circuit, and the 1/f noise will appear as a cyclo-stationary process in its output [20]. A time domain analysis of the noise, using the auto-covariance function d of the equivalent total trap number in the trap model of the 1/f noise, can be performed to get the noise power [21], and the result is

    $V_n^2 = \left(\frac{q}{C A t_r}\right) \int_0^{t_r}\!\!\int_0^{t_r} d\big(s_1, \lvert s_2 - s_1 \rvert\big)\, ds_2\, ds_1$,    (4)

where A is the channel area of the reset transistor and t_r is the reset time. The output noise level of the APS with integrator is, in general, higher than that of the equivalent three transistor APS pixel.

One advantage of the proposed APS with integrator design is that the size of the photodiode and the capacitance of the integrator can be chosen independently. Thus the capacity of the pixel can be adjusted while keeping the photosensitive area of the pixel fixed. The main advantage of this structure, however, is its performance in the dark. The amplifier of the structure keeps the bias applied to the photodiode fixed. The bias level is controlled by the input vb, which is very close to zero. At these small bias voltages, the dark current generated in the photodiode is small compared to the dark current of the conventional APS, which is generated at a bias close to VDD [24]. As a result, the output voltage read from the pixel in the dark will be small compared to the three transistor APS.

Fig. 4 (a) Measured output of the APS with comparator, zoomed in to show the jitter in its output. This jitter is the noise of the output, as the pulse width is the output of the pixel. (b) Measured signal (pulse width) and noise (jitter) of the output of the pixel, as a function of light power

Fig. 5 APS with integrator. (a) General schematic of the pixel and (b) its measured waveforms. Channel 1 shows the measured output of the pixel, while channel 2 shows the reset signal applied to the pixel, for a 2.5 ms readout time

3 Comparison and discussion

Three different APS structures are introduced in this work. Each of the structures has characteristics that make it suitable for certain types of applications. Table 1 compares these structures and some of their key performance measures. The three transistor APS has the simplest structure and the highest fill-factor. It is suitable for applications that require ultra-high resolution imaging. It also has the least noisy output, because it has the fewest transistors in its data path to the output.

The APS with comparator structure has an acceptable fill-factor of 36% due to our compact design, and this is shown in Fig. 7b.
It has a digital output, which makes it applicable to ultra-fast digital imagers. It is possible to adjust the reference level and integration time, and thus to achieve good sensitivity at the desired light levels. It also has the widest dynamic range, as the sense-node voltage is read and converted to a digital level immediately, and the overhead voltage drops of the amplifier and buffer stages do not affect the output. The APS with integrator structure has a low fill-factor. It has a low dynamic range and is slow, so it is not suitable for applications that require high scanning rates. However, its output is the most linear with respect to incident light power, and it has an internal dark-current cancellation mechanism. These two features make this APS structure a good candidate for low-level light imaging using longer integration times.

Fig. 6 Output of the APS with integrator, sampled at different levels of incident light power. Measurements were taken at different wavelengths. It can be observed that good linearity exists in the output unless the output saturates, as shown in the curve for the 700 nm light.

Fig. 7 Layouts of the different APS structures. (a) The three-transistor APS has the simplest layout and the highest fill-factor. (b) The APS with comparator has eight transistors; however, our compact design has kept the fill-factor at a high level. (c) The APS with integrator. This design includes a capacitor in each pixel, which reduces the fill-factor of the pixel; however, the fill-factor is still at a reasonable level.

Table 1 Comparison of the APS structures studied

                         APS           APS with comparator   APS with integrator
Photodiode size          20 × 20 μm²   10 × 10 μm²           10 × 10 μm²
Number of transistors    3             8                     6
Fill-factor              63%           36%                   15%
Output swing             1 V           2.2 V                 0.8 V
Dark output rate         50 mV/s       210 mV/s              16 mV/s
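The dark output rates and output swings reported in Table 1 imply an upper bound on useful integration time: once the dark signal alone has ramped through the whole swing, no room is left for the photo signal. A short sketch of that back-of-the-envelope comparison, using only the table's values:

```python
# Using the dark output rates and output swings from Table 1, estimate how
# long each structure can integrate before dark signal alone consumes the
# entire output swing. All numbers are taken directly from the table.

structures = {
    "three-transistor APS": {"dark_rate_v_per_s": 0.050, "swing_v": 1.0},
    "APS with comparator":  {"dark_rate_v_per_s": 0.210, "swing_v": 2.2},
    "APS with integrator":  {"dark_rate_v_per_s": 0.016, "swing_v": 0.8},
}

def max_dark_limited_integration(dark_rate_v_per_s, swing_v):
    """Time (s) for the dark-signal ramp to fill the whole output swing."""
    return swing_v / dark_rate_v_per_s

for name, params in structures.items():
    print(f"{name}: {max_dark_limited_integration(**params):.1f} s")
```

The integrator structure comes out ahead (roughly 50 s versus about 20 s and 10 s for the other two), which is consistent with the text's conclusion that it suits low-level light imaging with long integration times.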
4 Conclusions

In this research, we have carefully compared the key performance characteristics of three different active pixel sensor structures: size of photodiode, number of transistors in the pixel, fill-factor, output swing, and dark output rate. The pixel structure with control transistors inside the imager pixel provides advantages, such as easy integration in a two-dimensional array with readout capabilities, compared to using only a photodiode. The simple three-transistor APS is effective for high-resolution and low-noise applications. The APS with a comparator pixel is good for fast digital imaging and provides a high dynamic range. Finally, the APS with an integrator has a linear response and the lowest dark output rate.

Acknowledgments The authors are grateful to the Natural Sciences and Engineering Research Council (NSERC) of Canada, the Canada Research Chair program and KACST of Saudi Arabia for partially funding this research work.

References

1. A. El Gamal, H. Eltoukhy, IEEE Circuit. Devic. 21(3), 6–20 (2005)
2. E.R. Fossum, IEEE T. Electron. Dev. 44(10), 1689–1698 (1997)
3. M. Bigas, E. Cabruja, J. Forest, J. Salvi, Microelectr. J. 37, 433–451 (2006)
4. H. Eltoukhy, K. Salama, A. El Gamal, M. Ronaghi, R. Davis, Technical Digest of the IEEE International Solid-State Circuits Conference (ISSCC), San Francisco, CA, 2004, pp. 222–223
5. J. Honghao, P.A. Abshire, IEEE International Symposium on Circuits and Systems (ISCAS 2006), pp. 1651–1654
6. A. El Gamal, D. Yang, B. Fowler, Proc. SPIE 3650, 2–13 (1999)
7. B. Fowler, A. El Gamal, D.X.D. Yang, Technical Digest of the IEEE International Solid-State Circuits Conference (ISSCC), San Francisco, CA, 1994, pp. 226–227
8. S. Kleinfelder, S.H. Lim, X.Q. Liu, A. El Gamal, Technical Digest of the IEEE International Solid-State Circuits Conference (ISSCC), San Francisco, CA, 2001, pp. 88–89
9. D. Yang, A. El Gamal, B. Fowler, H. Tian, IEEE J. Solid-St. Circ. 34, 1821–1834 (1999)
10. W. Bidermann, A. El Gamal, S. Ewedemi, J. Reyneri, H. Tian, D. Wile, D. Yang, Technical Digest of the IEEE International Solid-State Circuits Conference (ISSCC), San Francisco, CA, 2003, pp. 212–213
11. N. Faramarzpour, M.J. Deen, S. Shirani, Q. Fang, L.W.C. Liu, F. Campos, J.W. Swart, IEEE T. Electron. Dev. 54(9) (2007)
12. L. Liang-Wei, C.-H. Lai, Y.-C. King, IEEE Sensors J. 4(1), 122–126 (2004)
13. J.L. Trepanier, M. Sawan, Y. Audet, J. Coulombe, IEEE International Midwest Symposium on Circuits and Systems, Vol. 2, 437–440 (2002)
14. D. Stoppa, A. Simoni, L. Gonzo, M. Gottardi, G.-F. Dalla Betta, IEEE J. Solid-St. Circ. 37(12), 1846–1852 (2002)
15. L.G. McIlrath, IEEE J. Solid-St. Circ. 36, 846–853 (2001)
16. B. Tongprasit, K. Ito, T. Shibata, IEEE International Symposium on Circuits and Systems (ISCAS 2005), Vol. 3, pp. 2389–2392
17. B. Fowler, A. El Gamal, Proc. Infrared Readout Elect. IV, SPIE 3360, 2–12 (1998)
18. D. Yang, B. Fowler, A. El Gamal, IEEE J. Solid-St. Circ. 34, 348–356 (1999)
19. A. Bermak, A. Bouzerdoum, K. Eshraghian, Microelectr. J. 33(12), 1091–1096 (2002)
20. H. Tian, A. El Gamal, IEEE T. Circuits Syst. 48(1), 151–157 (2001)
21. N. Faramarzpour, M.J. Deen, S. Shirani, J. Vac. Sci. Technol. A (Special Issue Canadian Semiconductor Technology Conference) 24(3), 879–882 (2006)
22. N. Faramarzpour, M.J. Deen, S. Shirani, IEEE T. Electron. Dev. 53(9), 2384–2391 (2006)
23. Y. Ardeshirpour, M.J. Deen, S. Shirani, J. Vac. Sci. Technol. A (Special Issue Canadian Semiconductor Technology Conference) 24(3), 860–865 (2006)
24. Y.C. Shih, C.Y. Wu, IEEE International Symposium on Circuits and Systems (ISCAS 2003), Vol. 1, pp.
809–812

CMOS photodetector systems for low-level light applications. J Mater Sci: Mater Electron (2009) 20:S87–S93

work_gyr2dhdqybbf7dzh6ufonvx5yu ---- Targeting of sebaceous glands to treat acne by micro-insulated needles with radio frequency in a rabbit ear model

Lasers in Surgery and Medicine

Targeting of Sebaceous Glands to Treat Acne by Micro-Insulated Needles With Radio Frequency in a Rabbit Ear Model

Tae-Rin Kwon,1 Eun Ja Choi,1 Chang Taek Oh,1 Dong-Ho Bak,2 Song-I Im,2 Eun Jung Ko,1 Hyuck Ki Hong,3 Yeon Shik Choi,3 Joon Seok,1,2 Sun Young Choi,1,4 Gun Young Ahn,5 and Beom Joon Kim1,2*

1 Department of Dermatology, Chung-Ang University College of Medicine, Seoul, Korea
2 Department of Medicine, Graduate School, Chung-Ang University, Seoul, Korea
3 Medical IT Convergence Research Center, Korea Electronics Technology Institute, Gyeonggi-do, Korea
4 Department of Dermatology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Korea
5 Gowoonsesang Dermatology Clinic, Seoul, Korea

Background and Objectives: Many studies have investigated the application of micro-insulated needles with radio frequency (RF) to treat acne in humans; however, the use of a micro-insulated needle RF applicator has not yet been studied in an animal model. The purpose of this study was to evaluate the effectiveness of a micro-insulated needle RF applicator in a rabbit ear acne (REA) model.

Study Design/Materials and Methods: In this study, we investigated the effect of selectively destroying the sebaceous glands, using a micro-insulated needle RF applicator, on the formation of comedones induced by application of 50% oleic acid and intradermal injection of P. acnes in the orifices of the external auditory canals of rabbits.
The effects of the micro-insulated needle RF applicator treatment were evaluated using regular digital photography in addition to 3D PRIMOS imaging evaluation, Skin Visiometer microscopic photography, and histologic analyses.

Results: Use of the micro-insulated needle RF applicator resulted in successful selective destruction of the sebaceous glands and attenuated TNF-α release in an REA model. The mechanisms by which micro-insulated needles with RF at 1 MHz exert their effects may involve inhibition of comedone formation, triggering of the wound-healing process, and destruction of the sebaceous glands and papules.

Conclusion: The use of micro-insulated needle RF applicators provides a safe and effective method for improving the appearance of symptoms in an REA model. The current in vivo study confirms that the micro-insulated needle RF applicator selectively destroys the sebaceous glands. Lasers Surg. Med. © 2016 Wiley Periodicals, Inc.

Key words: acne; radiofrequency; REA model

INTRODUCTION

Radiofrequency (RF) energy is chromophore independent and depends on the electrical properties of the target tissue, and thus is expected to have a good safety profile for all skin types [1]. Over the last decade, clinical treatment systems using RF have been proven to be safe and effective both for non-ablative skin tightening of the face and body and for fractional RF skin resurfacing [2]. Recently, RF has revolutionized the fields of skin rejuvenation, acne scarring, and acne vulgaris [3–5]. Although conventional fractional treatment has the disadvantage of possible indirect damage to the epidermis [6], a recently introduced minimally invasive fractional radiofrequency microneedle device has been used to overcome such problems by creating radiofrequency thermal zones with minimal epidermal injury [5,7]. Acne is a disease of the pilosebaceous units of the skin due to an inflammatory reaction in the follicle [8].
The basic lesion of acne is a comedo, which is an enlargement of the sebaceous follicle. Acne is graded according to the type of lesions present, which can include inflammatory papules, pustules, open and closed comedones, cysts, and nodules [9]. Proliferation of the bacterium Propionibacterium acnes (P. acnes) leads to the production of inflammatory compounds, resulting in neutrophil chemotaxis. P. acnes also secretes chemotactic factors for leukocytes, which subsequently infiltrate the hair follicle, resulting in the destruction of the hair follicle wall. This bacterium is thought to contribute to inflammation through activation of the toll-like receptors TLR2 and TLR4 expressed on keratinocytes, sebocytes, and monocytes, leading to the release of proinflammatory cytokines/chemokines such as interleukin (IL)-1α, IL-6, and tumor necrosis factor (TNF)-α [10]. TNF-α is an important pathophysiologic factor involved in the development of acne [11,12]. Currently, the therapeutic agents available for the treatment of acne target and kill bacteria, thereby preventing inflammation or comedo production [13]. Therefore, the development of a novel device that can selectively destroy sebaceous glands without side effects would be an important advancement in the treatment of acne.

Conflict of Interest Disclosures: All authors have completed and submitted the ICMJE Form for Disclosure of Potential Conflicts of Interest and none were reported. Contract grant sponsor: Infrastructure Program for New-Growth Industries; Contract grant number: 10044186; Contract grant sponsor: Ministry of Trade, Industry & Energy (MI, Korea). *Correspondence to: Beom Joon Kim, MD, Department of Dermatology, Chung-Ang University Hospital, 224–1 Heukseok-dong, Dongjak-ku, Seoul 156–755, South Korea. E-mail: beomjoon@unitel.co.kr Accepted 11 September 2016. Published online in Wiley Online Library (wileyonlinelibrary.com). DOI 10.1002/lsm.22599 © 2016 Wiley Periodicals, Inc.
The rabbit ear assay (REA) is the most common animal model utilized to determine compound comedogenicity [14]. Agents that demonstrate marked comedogenicity in the REA also tend to produce comedones in humans. Rabbit inner-ear follicles share similarities with human follicles in that they have small pili and large adjacent sebaceous glands [15]. Therefore, we evaluated the utility of micro-insulated needles with RF applicators in the REA model. Through this method, we were able to selectively destroy the sebaceous glands and attenuate TNF-α release in the REA model.

MATERIALS AND METHODS

Animals and Experimental Design

Female New Zealand white rabbits (3 to 3.5 kg; Yonam Laboratory Animals, Cheonan, Korea) were used in the animal experiments. Rabbit auricles were injected with 0.02 ml of P. acnes (10^7 CFU/ml) or PBS at six points that avoided the blood vessels. Seven days after the injection, topical treatment with 50% oleic acid (Sigma, St Louis, MO) in 70% propylene glycol/30% ethanol was carried out once a day for 28 days. The animals were then divided into three experimental groups of two animals each (the center of the pinna in the concave area just external to the ear canal of rabbits, n = 4). AGNES™ micro-insulated needles with RF applicators (Gowoonsesang Dermatology Clinic, Seoul, Korea) were used for treatment on day 29. This device uses a monopolar handpiece and is based on the principle of facilitating discharge of pustules after treatment of acne by transferring the heat generated by the load or the contact resistance when 1 MHz RF energy is discharged to the surface and the deep part of the skin (Fig. 1A). A 1.2 mm needle was inserted into papules and pustules of the upper layer of the rabbit ear, and a high-frequency electrical current was applied with micro-insulated needles at a penetration depth of 1.2 mm and a power of 2, 3, 5, 46 W, respectively. Rabbit ears were examined for clinical appearance before and after RF applicator treatment.
As per the manufacturer’s recommendation, treatment parameters were determined based on the anatomical location and the proximity of underlying bones. Animals were sacrificed and analyzed 7 days after treatment. The treatment regimens varied extensively depending on the clinic involved. Needle depth was 1.2 mm (C-type needle), exposure time was 100 ms, and power was 5 W. All procedures involving animals were conducted in accordance with the guidelines of the Institutional Animal Care and Use Committee of Chung-Ang University in Korea.

Fig. 1. Non-insulated microneedle radiofrequency applicator and technique for selectively destroying sebaceous glands in a rabbit ear acne model. (a) This device is a medical appliance designed to combine coagulation and pain-relief functions using a high-frequency current. (b) H&E stains of in vivo pig skin immediately after treatment with micro-insulated needles at a penetration depth of 1.2 mm and a power of 2, 3, 5, 46 W, respectively. The micro-ablation effect is visible into the upper dermal layer. Original magnification, 100×.

2 KWON ET AL.

P. acnes

P. acnes (ATCC 6919) were grown on Brucella broth agar (BD, Sparks, MD) containing 5% v/v defibrinated sheep blood (Lampire Biological Laboratories, Pipersville, PA), vitamin K (5 mg/ml, Remel, Lenexa, KS), and hemin (50 mg/ml, Remel) under anaerobic conditions using GasPak (BD, Sparks, MD) at 37°C. The concentration of bacteria was adjusted to 10^7 CFU/ml prior to inactivation at 95°C for 5 minutes. Bacteria were suspended in the appropriate amount of PBS for use in experiments.

Clinical Evaluation

Clinical symptoms were measured using images captured daily with a digital camera (Canon 3000D, Canon Inc., Tokyo, Japan). After treatment, the overall area covered with comedones relative to the overall area covered by follicular orifices was calculated by digital image analysis, performed using a phototrichogram system (Folliscope, Lead-M, Seoul, Korea).
The acquisition and evaluation of the skin surface to evaluate effects on skin roughness were performed using optical in vivo measurements with a three-dimensional (3D) micro-topography imaging system (PRIMOS, GFM, Teltow, Germany) [16].

Biopsy Specimens and Histologic Measurements

Paired 8 mm punch biopsy specimens were obtained from each side of the facial skin at baseline and at the end of treatment. Post-treatment biopsy specimens were taken near the previous biopsy site. Tissue samples were fixed in 10% buffered formalin and embedded in paraffin. Tissues were fixed with 4% PFA, embedded in paraffin, and sliced into 5-μm-thick sections. Sections were then transferred to Probe-On-Plus slides (Fisher Scientific, Pittsburgh, PA). Standard hematoxylin-eosin (H&E) and immunofluorescence staining, including TNF-α staining, were performed. For immunofluorescence, sections were blocked at room temperature for 2 hours in PBS containing 0.2% Triton X-100 and normal horse serum. Sections were stained using mouse monoclonal antibodies against TNF-α (1:100, sc-12744, Santa Cruz Biotechnology, Santa Cruz, CA) and incubated at 4°C overnight. Following incubation, sections were washed three times for 5 minutes with 0.2% Triton X-100 in PBS. Sections were then incubated at room temperature with FITC-conjugated anti-Armenian hamster secondary antibodies (1:200, sc-2446, Santa Cruz Biotechnology) for 30 minutes. Sections were then counterstained with DAPI for 5 minutes.

Statistical Analyses

Statistical comparisons between the treated and untreated groups were performed using a two-tailed Student's t-test for paired samples. The results are expressed as mean ± standard deviation from at least three independent experiments, and P-values of *P < 0.05, **P < 0.01, and ***P < 0.001 were considered statistically significant.
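The paired, two-tailed Student's t-test described above reduces to a simple statistic on the per-animal differences. The sketch below computes that statistic in plain Python; the before/after measurement values are made up for illustration (in practice one would use scipy.stats.ttest_rel, which also returns the two-tailed P-value).

```python
import math

# Minimal paired Student's t statistic for treated-vs-untreated comparisons.
# The measurement values below are hypothetical, for illustration only.

def paired_t_statistic(before, after):
    """t = mean(d) / (sd(d) / sqrt(n)) for paired differences d = before - after."""
    assert len(before) == len(after) and len(before) > 1
    d = [b - a for b, a in zip(before, after)]
    n = len(d)
    mean_d = sum(d) / n
    var_d = sum((x - mean_d) ** 2 for x in d) / (n - 1)  # sample variance
    return mean_d / math.sqrt(var_d / n)

# Hypothetical paired measurements before and after treatment
before = [10.2, 9.8, 11.1, 10.5]
after = [3.4, 3.1, 3.9, 3.6]
t = paired_t_statistic(before, after)
```

The statistic is then compared against the t distribution with n − 1 degrees of freedom to obtain the two-tailed P-value reported in the Results.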
RESULTS

Determination of Penetration Depth and Morphologic Features of the Injection Site Using Micro-Insulated Needles With RF Applicators

The micro-insulated needle with RF method (MRF) includes an electrode array configured to ablate tissue during insertion of the needle and to form a coagulation zone at the edges of the needle (1.2 mm). In our evaluation of the depth and thermal lesion changes on pig back skin, H&E images taken using a microscope showed regions of coagulative thermal lesions surrounded by undamaged epidermal tissue (Fig. 1B). To induce comedone formation, each rabbit ear received 50 ml of P. acnes followed by a subcutaneous quad injection in the dermis. After the injections, 50% oleic acid was applied to the orifices of the external auditory canals of the rabbit ears daily for 4 weeks. The animals were divided into three experimental arms with eight animals each. The normal group served as the untreated controls. The control group received a single intradermal injection of a 50 ml P. acnes suspension in the ear. The MRF group began treatment with micro-insulated needles with RF applicators after 4 weeks. All groups were sacrificed and analyzed 7 days after treatment.

Histologic Analysis and Non-Invasive Objective Skin Measurement Using 3D Skin Imaging

The external auditory canals of the rabbit ears were examined for changes in skin morphology 4 weeks post-treatment. The reduction in papules and pustules of the upper layer of the ear was confirmed by examination of photographs using the phototrichogram system (Fig. 2A). Rabbit ears in the normal group showed no physical signs of acne, whereas rabbit ears in the group treated with P.
acnes and 50% oleic acid (control group) showed the four typical acne symptoms: (i) hyperkeratinization of the follicle, leading to a microcomedo that eventually enlarges into a comedo; (ii) increased sebum production by sebaceous glands, in which androgens have an important role; (iii) colonization of the follicle by the anaerobe P. acnes; and (iv) an inflammatory reaction in the ears. As shown in Figure 2B, H&E-stained sections were used to evaluate the presence of sebaceous glands in the dermis. Examination of the MRF-treated group showed destruction of the sebaceous glands, as evidenced by the magnified view of a damaged sebaceous gland in Figure 2B. Moreover, the epidermis was mostly preserved, whereas the dermis was damaged and displayed a decrease in inflammation status. The PRIMOS imaging system, used to generate 3D virtual models of the skin surface at 0, 3, and 7 days, allowed us to compare the skin-surface micro-topography in an objective and quantifiable manner. Images using this system to evaluate the rabbit ear acne model treated with micro-insulated needles with RF are shown in Figure 3A. These images were used to calculate the roughness parameters of arithmetic average height and volume. Both parameters increased during the procedure and decreased at the completion of the procedure, resulting in significant differences (P < 0.01) (Fig. 3B). Image analysis revealed that the volume of papules decreased 67.1% and the height of papules decreased 64.7% in the MRF group compared to the control group. Overall, the morphological and histologic findings showed that the use of micro-insulated needles with RF applicators resulted in a decrease in the physical signs of acne and effective destruction of the sebaceous glands.

TARGETING OF SEBACEOUS GLANDS TO TREAT ACNE 3
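The reported papule reductions (67.1% in volume, 64.7% in height) follow from a straightforward percent-change calculation relative to the control group. A tiny sketch, with hypothetical control/treated values chosen only to show the formula:

```python
# Percent reduction relative to control, as used for the papule-volume
# and papule-height figures. The control/treated values below are
# hypothetical; only the formula reflects the analysis.

def percent_reduction(control, treated):
    """100 * (control - treated) / control."""
    return 100.0 * (control - treated) / control

vol_reduction = percent_reduction(control=10.0, treated=3.29)
```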
Destruction of Rabbit Ear Skin Sebaceous Glands and Decreased TNF-α Synthesis Caused by Micro-Insulated Needles With Radio Frequency

To investigate inflammatory signaling and evaluate the mechanism underlying the release of TNF-α in the control and MRF groups, we performed immunofluorescence microscopy on ear skin specimens using antibodies specific to TNF-α. Immunofluorescence revealed strong depositions of TNF-α around the sebaceous gland areas. Immunostaining studies showed that the MRF group had markedly decreased TNF-α expression levels compared with the control group (Fig. 4). Furthermore, the dermis and the associated sebaceous glands were completely destroyed. These results suggest that MRF alters certain functions that block an increase in lipid synthesis. Additionally, the results confirm that our electrothermolysis method, using sufficient electrical heat, resulted in a permanent reduction of sebum excretion through the precise destruction of hyperactive sebaceous glands without causing skin-surface damage.

Fig. 2. Histologic analysis showing in vivo changes in the rabbit ear skin surface. (a) Treatment with micro-insulated needles at a penetration depth of 1.2 mm and a power of 5 W. Photographs taken with a folliscope (30×) of the treated areas 7 days after cooling-device application showed clear surface changes. (b) H&E staining shows the effect of non-insulated microneedle radiofrequency on sebaceous ducts in the rabbit ear acne model. Original magnification, 100×. MRF, micro-insulated needle RF.

DISCUSSION

Acne vulgaris is a chronic dermatologic condition that can affect patients both physically and psychologically due to the presence of lesions on the face [17]. This has a negative effect on quality of life and can cause psychological difficulty such as social avoidance [18].
In the search for potential alternative candidate modalities for the treatment of acne, a micro-needle radiofrequency device that allows effective destruction of sebaceous glands was investigated as a promising candidate in in vivo animal studies. There are many types of acne treatment. Acne patients routinely receive years of topical or systemic therapies. Common treatment options include topical anti-inflammatories, topical and oral antibiotics, hormonal agonists and antagonists, and topical and oral retinoids [19]. Although minimally invasive, the treatment of acne with fractional lasers is accompanied by significant downtime and pain. Recently, various light-based therapies have been under development for the treatment of acne. A reduction in acne lesion count upon exposure to blue [20], red [21], violet [22], and ultraviolet light [23] has been reported. Blue light is thought to reduce acne symptoms due to the absorption of endogenous porphyrins produced by P. acnes, which results in a phototoxic effect on P. acnes [24]. However, these treatment options must be used over long periods of time and are associated with multiple side effects. Additionally, because these therapies do not target sebaceous glands, long-term remission has not yet been demonstrated and may not be possible due to repopulation by P. acnes. Our data show comparable or better clinical results, with high efficiency and less downtime, compared to other treatment methods. In particular, the morphological and histologic findings demonstrated that treatment with micro-insulated needles with RF applicators resulted in a decrease in the physical signs of acne and effective destruction of sebaceous glands. Sebaceous glands are involved in the genesis of acne and serve as the platform for important immunological phenomena, including innate immunity and the synthesis of antimicrobial peptides [25]. Our study revealed that micro-insulated needles with RF applicators can specifically target pilosebaceous glands, leading to an improvement in acne. In addition, we confirmed a reduction in the levels of inflammatory mediators, such as TNF-α, and presented clinical histological evidence of a reduction in inflammation after treatment with micro-insulated needles with RF. The presented results evaluated the efficacy and safety of micro-needle radiofrequency technology in the treatment of depressed acne in an REA model. The treatment was minimally invasive and caused minimal discomfort and downtime. This study also demonstrated the effective destruction of sebaceous glands with minimal downtime or adverse effects. However, further human clinical trials are needed to clarify the optimal regimen for acne treatment.

Fig. 3. 3D in vivo optical skin imaging for topographical quantitative assessment of non-insulated microneedle radiofrequency. (a) Using the optical 3D skin-measurement system PRIMOS premium, roughness measurements can be determined on a precise area prior to and at different time points after treatment with non-insulated microneedle radiofrequency. (b) and (c) Color-coded surface-topography images of an animal with acne scarring before treatment on day 0 to 7 days after the end of treatment. Treatment with micro-insulated needles at a penetration depth of 1.2 mm and a power of 5 W. A reduction in the measured volume and height from a given area is shown. This can also be visualized as a reduction in green color in this area, translating to a reduction in papules. Values shown are mean ± SD (n = 4/ear canal of rabbit; **P < 0.01 vs. control group). MRF, micro-insulated needle RF.

ACKNOWLEDGMENTS

This work was supported by the Infrastructure Program for New-growth Industries (10044186, Development of Smart Beauty Devices Technology and Establishment of Commercialization Support Center), and funded by the Ministry of Trade, Industry & Energy (MI, Korea).

REFERENCES

1. Narurkar VA.
Lasers, light sources, and radiofrequency devices for skin rejuvenation. Semin Cutan Med Surg 2006;25(3):145–150.
2. Lolis MS, Goldberg DJ. Radiofrequency in cosmetic dermatology: A review. Dermatol Surg 2012;38(11):1765–1776.
3. Dahan S, Rousseaux I, Cartier H. Multisource radiofrequency for fractional skin resurfacing-significant reduction of wrinkles. J Cosmet Laser Ther 2013;15(2):91–97.
4. Ramesh M, Gopal M, Kumar S, Talwar A. Novel technology in the treatment of acne scars: The matrix-tunable radiofrequency technology. J Cutan Aesthet Surg 2010;3(2):97–101.
5. Lee KR, Lee EG, Lee HJ, Yoon MS. Assessment of treatment efficacy and sebosuppressive effect of fractional radiofrequency microneedle on acne vulgaris. Lasers Surg Med 2013;45(10):639–647.
6. Fisher GH, Geronemus RG. Short-term side effects of fractional photothermolysis. Dermatol Surg 2005;31(9 Pt 2):1245–1249.
7. Harth Y, Frank I. In vivo histological evaluation of non-insulated microneedle radiofrequency applicator with novel fractionated pulse mode. J Drugs Dermatol 2013;12(12):1430–1433.
8. Sykes NL, Webster GF. Acne: A review of optimum treatment. Drugs 1994;48(1):59–70.
9. Tanghetti EA. The role of inflammation in the pathology of acne. J Clin Aesthet Dermatol 2013;6:27–35.

Fig. 4. Representative sections of rabbit ear skin tissue stained for TNF-α expression. Dorsal skin biopsies were taken after 7 days and immunofluorescence staining for TNF-α (green) was performed. Nuclei were counterstained with DAPI (blue); n = 3 ears per group. Original magnification, 200×. MRF, micro-insulated needle RF.

10. Webster GF. Inflammation in acne vulgaris. J Am Acad Dermatol 1995;33(9):247–253.
11. Graham GM, Farrar MD, Cruse-Sawyer JE, Holland KT, Ingham E. Proinflammatory cytokine production by human keratinocytes stimulated with Propionibacterium acnes and P. acnes GroEL. Br J Dermatol 2004;150(3):421–428.
12. Taylor M, Gonzalez M, Porter R.
Pathways to inflammation: Acne pathophysiology. Eur J Dermatol 2011;21(3):323–333.
13. Bowe W, Kober M. Therapeutic update: Acne. J Drugs Dermatol 2014;13(3):235–238.
14. Kurokawa I, Danby FW, Ju Q, Wang X, Xiang LF, Xia L, Chen W, Nagy I, Picardo M, Suh DH, Ganceviciene R, Schagen S, Tsatsou F, Zouboulis CC. New developments in our understanding of acne pathogenesis and treatment. Exp Dermatol 2009;18(10):821–832.
15. Mirshahpanah P, Maibach HI. Models in acnegenesis. Cutan Ocul Toxicol 2007;26(3):195–202.
16. Friedman PM, Skover GR, Payonk G, Kauvar AN, Geronemus RG. 3D in-vivo optical skin imaging for topographical quantitative assessment of non-ablative laser technology. Dermatol Surg 2002;28(3):199–204.
17. Tan JK. Psychosocial impact of acne vulgaris: Evaluating the evidence. Skin Therapy Lett 2004;9(7):1–3, 9.
18. Berson DS, Chalker DK, Harper JC, Leyden JJ, Shalita AR, Webster GF. Current concepts in the treatment of acne: Report from a clinical roundtable. Cutis 2003;72(1 Suppl):5–13.
19. Oprica C, Emtestam L, Nord CE. Overview of treatments for acne. Dermatol Nurs 2002;14(4):242–246.
20. Fan X, Xing YZ, Liu LH, Liu C, Wang DD, Yang RY, Lapidoth M. Effects of 420-nm intense pulsed light in an acne animal model. J Eur Acad Dermatol Venereol 2013;27(9):1168–1171.
21. Na JI, Suh DH. Red light phototherapy alone is effective for acne vulgaris: Randomized, single-blinded clinical trial. Dermatol Surg 2007;33(10):1228–1233.
22. Sigurdsson V, Knulst AC, van Weelden H. Phototherapy of acne vulgaris with visible light. Dermatology 1997;194(3):256–260.
23. Nouri K, Villafradez-Diaz LM. Light/laser therapy in the treatment of acne vulgaris. J Cosmet Dermatol 2005;4(4):318–320.
24. Dai T, Gupta A, Murray CK, Vrahas MS, Tegos GP, Hamblin MR. Blue light for infectious diseases: Propionibacterium acnes, Helicobacter pylori, and beyond? Drug Resist Updat 2012;15(4):223–236.
25. Zouboulis CC. Acne and sebaceous gland function. Clin Dermatol 2004;22(5):360–366.
TARGETING OF SEBACEOUS GLANDS TO TREAT ACNE

work_h5zl264xtnfk3oer2n6fzpg2be ----

Making Images/Making Bodies: Visibilizing and Disciplining through Magnetic Resonance Imaging (MRI)
A. Prasad. Science, Technology, & Human Values 2005;30:291–316. doi:10.1177/0162243904271758. Corpus ID: 18095400.

Abstract: This article analyzes how the medical gaze made possible by MRI operates in radiological laboratories. It argues that although computer-assisted medical imaging technologies such as MRI shift radiological analysis to the realm of cyborg visuality, radiological analysis continues to depend on visualization produced by other technologies and diagnostic inputs. In the radiological laboratory, MRI is used to produce diverse sets of images of the internal parts of the body to zero in and visually…

82 citations; 55 references.
work_h7isxetmqnhatiozqj3bhsyary ----

Anaesthetic eye drops for children in casualty departments across south east England

It is a common practice to use topical anaesthetic drops to provide temporary relief and aid in the examination of the eyes when strong blepharospasm precludes thorough examination. Ophthalmology departments usually have several types of these—for example, amethocaine, oxybuprocaine (benoxinate), and proxymetacaine. The duration and degree of discomfort caused by amethocaine is significantly higher than proxymetacaine,1 2 whilst the difference in the discomfort between amethocaine and oxybuprocaine is minimal.2 When dealing with children, therefore, it is recommended to use proxymetacaine drops.1 It was my experience that Accident & Emergency (A&E) departments tend to have less choice of these drops.
This survey was done to find out the availability of different anaesthetic drops, and the preference for paediatric use given a choice of the above three. Questionnaires were sent to 40 A&E departments across south east England. Thirty-nine replied, of which one department did not see any eye casualties. None of the 38 departments had proxymetacaine. Twenty units had amethocaine alone and 10 units had oxybuprocaine alone. For paediatric use, these units were happy to use whatever drops were available within the unit. Eight units stocked amethocaine and oxybuprocaine; four of these were happy to use either of them on children and four used only oxybuprocaine. One of the latter preferred proxymetacaine but had to contend with oxybuprocaine due to cost issues.

Children are apprehensive about the instillation of any eye drops. Hence, it is desirable to use the least discomforting drops, like proxymetacaine. For eye casualties, in the majority of District General Hospitals, A&E departments are the first port of call. Hence, A&E units need to be aware of the benefit of proxymetacaine and stock it for paediatric use.

M R Vishwanath
Department of Ophthalmology, Queen Elizabeth Queen Mother Hospital, Margate, Kent; m.vishwanath@virgin.net
doi: 10.1136/emj.2003.010645

References
1 Shafi T, Koay P. Randomised prospective masked study comparing patient comfort following the instillation of topical proxymetacaine and amethocaine. Br J Ophthalmol 1998;82(11):1285–7.
2 Lawrenson JG, Edgar DF, Tanna GK, et al. Comparison of the tolerability and efficacy of unit-dose, preservative-free topical ocular anaesthetics. Ophthalmic Physiol Opt 1998;18(5):393–400.

Training in anaesthesia is also an issue for nurses

We read with interest the excellent review by Graham.1 An important related issue is the training of the assistant to the emergency physician. We wished to ascertain if use of an emergency nurse as an anaesthetic assistant is common practice.
We conducted a short telephone survey of the 12 Scottish emergency departments with attendances of more than 50 000 patients per year. We interviewed the duty middle grade doctor about usual practice in that department. In three departments, emergency physicians will routinely perform rapid sequence intubation (RSI), the assistant being an emergency nurse in each case. In nine departments an anaesthetist will usually be involved or emergency physicians will only occasionally perform RSI. An emergency nurse will assist in seven of these departments.

The Royal College of Anaesthetists2 have stated that anaesthesia should not proceed without a skilled, dedicated assistant. This also applies in the emergency department, where standards should be comparable to those in theatre.3 The training of nurses as anaesthetic assistants is variable and is the subject of a Scottish Executive report.4 This consists of at least a supernumerary in-house program of 1 to 4 months. Continued professional development and at least 50% of working time devoted to anaesthetic assistance follow this.4

The Faculty of Emergency Nursing has recognised that anaesthetic assistance is a specific competency. We think that this represents an important progression. The curriculum is, however, still in its infancy and is not currently a requirement for emergency nurses (personal communication with L McBride, Royal College of Nursing). Their assessment of competence in anaesthetic assistance is portfolio based and not set against specified national standards (as has been suggested4). We are aware of one-day courses to familiarise nurses with anaesthesia (personal communication with J McGowan, Southern General Hospital). These are an important introduction, but are clearly incomparable to formal training schemes.
While Graham has previously demonstrated the safety of emergency physician anaesthesia,5 we suggest that when anaesthesia does prove difficult, a skilled assistant is of paramount importance. Our small survey suggests that the use of emergency nurses as anaesthetic assistants is common practice. If, perhaps appropriately, RSI is to be increasingly performed by emergency physicians,5 then the training of the assistant must be concomitant with that of the doctor. Continued care of the anaesthetised patient is also a training issue1 and applies to nurses as well. Standards of anaesthetic care need to be independent of its location and provider.

R Price
Department of Anaesthesia, Western Infirmary, Glasgow: Gartnavel Hospital, Glasgow, UK

A Inglis
Department of Anaesthesia, Southern General Hospital, Glasgow, UK

Correspondence to: R Price, Department of Anaesthesia, 30 Shelley Court, Gartnavel Hospital, Glasgow, G12 0YN; rjp@doctors.org.uk
doi: 10.1136/emj.2004.016154

References
1 Graham CA. Advanced airway management in the emergency department: what are the training and skills maintenance needs for UK emergency physicians? Emerg Med J 2004;21:14–19.
2 Guidelines for the provision of anaesthetic services. Royal College of Anaesthetists, London, 1999. http://www.rcoa.ac.uk/docs/glines.pdf.
3 The Role of the Anaesthetist in the Emergency Service. Association of Anaesthetists of Great Britain and Ireland, London, 1991. http://www.aagbi.org/pdf/emergenc.pdf.
4 Anaesthetic Assistance. A strategy for training, recruitment and retention and the promulgation of safe practice. Scottish Medical and Scientific Advisory Committee. http://www.scotland.gov.uk/library5/health/aast.pdf.
5 Graham CA, Beard D, Oglesby AJ, et al. Rapid sequence intubation in Scottish urban emergency departments. Emerg Med J 2003;20:3–5.
Ultrasound guidance for central venous catheter insertion

We read Dunning's BET report with great interest.1 As Dunning himself acknowledges, most of the available literature concerns the insertion of central venous catheters (CVCs) by anaesthetists (and also electively). However, we have found that this data does not necessarily apply to the critically ill emergency setting, as illustrated by the study looking at emergency medicine physicians,2 in which ultrasound did not reduce the complication rate.

The literature does not distinguish between potentially life-threatening complications and those with unwanted side effects. An extra attempt or prolonged time for insertion, whilst unpleasant, has a minimal impact on the patient's eventual outcome. However, a pneumothorax could prove fatal to a patient with impending cardio-respiratory failure. Some techniques – for example, high internal jugular vein – have much lower rates of pneumothorax. Furthermore, some techniques use an arterial pulsation as a landmark. Such techniques can minimise the adverse effect of an arterial puncture, as pressure can be applied directly to the artery.

We also share Dunning's doubt in the National Institute for Clinical Excellence (NICE) guidance's claim that the cost per use of an ultrasound could be as low as £10.3 NICE's economic analysis model assumed that the device is used 15 times a week. This would mean sharing the device with another department, clearly unsatisfactory for most emergency situations. The cost per complication prevented would be even greater (£500 in Miller's study, assuming 2 fewer complications per 100 insertions).

Finally, the NICE guidance is that "appropriate training to achieve competence" is
undertaken. We are sure that the authors would concur that the clinical scenario given would not be the appropriate occasion to "have a go" with a new device for the first time.

In conclusion, we believe that far more important than ultrasound-guided CVC insertion is the correct choice of insertion site to avoid those significant risks which the critically ill patient would not tolerate.

M Chikungwa, M Lim
Correspondence to: M Chikungwa; mchikungwa@msn.com
doi: 10.1136/emj.2004.015156

References
1 Dunning J, Williamson J. Ultrasonic guidance and the complications of central line placement in the emergency department. Emerg Med J 2003;20:551–552.
2 Miller AH, Roth BA, Mills TJ, et al. Ultrasound guidance versus the landmark technique for the placement of central venous catheters in the emergency department. Acad Emerg Med 2002;9:800–5.
3 National Institute for Clinical Excellence. Guidance on the use of ultrasound locating devices for placing central venous catheters. Technology appraisal guidance no 49, 2002 http://www.org.uk/cat.asp?c = 36752 (accessed 24 Dec 2003).

Accepted for publication 27 May 2004

Emerg Med J 2005;22:608–610. www.emjonline.com. First published as 10.1136/emj.2003.010645 on 26 July 2005. Downloaded from http://emj.bmj.com/ by guest on April 5, 2021. Protected by copyright.

Patients' attitudes toward medical photography in the emergency department

Advances in digital technology have made use of digital images increasingly common for the purposes of medical education.1 The high turnover of patients in the emergency department, many of whom have striking visual signs, makes this an ideal location for digital photography.
These images may eventually be used for the purposes of medical education in presentations, and in book or journal format.2 3 As a consequence patients' images may be seen by the general public on the internet, as many journals now have open access internet sites. From an ethical and legal standpoint it is vital that patients give informed consent for use of images in medical photography, and are aware that such images may be published on the world wide web.4 The aim of this pilot study was to investigate patients' attitudes toward medical photography as a guide to consent and usage of digital photography within the emergency department.

A patient survey questionnaire was designed to answer whether patients would consent to their image being taken, which part(s) of their body they would consent to being photographed, and whether they would allow these images to be published in a medical book, journal, and/or on the internet. All patients attending the minors section of an inner city emergency department between 1 January 2004 and 30 April 2004 were eligible for the study. Patients were included if aged over 18 and having a Glasgow coma score of 15. Patients were excluded if in moderate or untreated pain, needed urgent treatment, or were unable to read or understand the questionnaire. All patients were informed that the questionnaire was anonymous and would not affect their treatment. Data was collected by emergency department Senior House Officers and Emergency Nurse Practitioners.

100 patients completed the questionnaire. The results are summarised below:

Q1. Would you consent to a photograph being taken in the Emergency Department of you/part of your body for the purposes of medical education?
Yes 84%, No 16%. 21% replied Yes to all forms of consent, 16% replied No to all forms of consent, while 63% replied Yes with reservations for particular forms of consent.
Q2. Would you consent the following body part(s) to be photographed (head, chest, abdomen, limbs and/or genitalia)?
The majority of patients consented for all body areas to be photographed except for genitalia (41% Yes, 59% No), citing invasion of privacy and embarrassment.

Q3. Would you consent to your photo being published in a medical journal, book or internet site?
The majority of patients gave consent for publication of images in a medical journal (71%) or book (70%), but were more likely to refuse consent for use of images on internet medical sites (47% Yes, 53% No or unsure).

In determining the attitudes of patients presenting in an inner city London emergency department regarding the usage of photography, we found that the majority of patients were amenable to having their images used for the purposes of medical education. The exceptions to this were the picturing of genitalia and the usage of any images on internet medical sites/journals. The findings of this pilot study are limited to data collection in a single emergency department in central London. A particular flaw of this survey is the lack of correlation between age, sex, ethnicity, and consent for photography. Further study is ongoing to investigate this.

There have been no studies published about patients' opinions regarding medical photography to date. The importance of obtaining consent for publication of patient images and concealment of identifying features has been stressed previously.5 This questionnaire study emphasises the need to investigate patients' beliefs and concerns prior to consent.

A Cheung, M Al-Ausi, I Hathorn, J Hyam, P Jaye
Emergency Department, St Thomas' Hospital, UK
Correspondence to: Peter Jaye, Consultant in Emergency Medicine, St Thomas' Hospital, Lambeth Palace Road, London SE1 7RH, UK; peter.jaye@gstt.nhs.uk
doi: 10.1136/emj.2004.019893

References
1 Mah ET, Thomsen NO. Digital photography and computerisation in orthopaedics. J Bone Joint Surg Br 2004;86(1):1–4.
2 Clegg GR, Roebuck S, Steedman DJ. A new system for digital image acquisition, storage and presentation in an accident and emergency department. Emerg Med J 2001;18(4):255–8.
3 Chan L, Reilly KM. Integration of digital imaging into emergency medicine education. Acad Emerg Med 2002;9(1):93–5.
4 Hood CA, Hope T, Dove P. Videos, photographs, and patient consent. BMJ 1998;316:1009–11.
5 Smith R. Publishing information about patients. BMJ 1995;311:1240–1.

Unnecessary tetanus boosters in the ED

It is recommended that five doses of tetanus toxoid provide lifelong immunity and that 10-yearly doses are not required beyond this.1 National immunisation against tetanus began in 1961, providing five doses (three in infancy, one preschool and one on leaving school).2 Coverage is high, with uptake over 90% since 1990.2 Therefore, the majority of the population under the age of 40 are fully immunised against tetanus. Td (tetanus toxoid/low dose diphtheria) vaccine is often administered in the Emergency Department (ED) following a wound or burn, based upon the patient's recollection of their immunisation history. Many patients and staff may believe that doses should still be given every 10 years.

During summer 2004, an audit of tetanus immunisation was carried out at our department. The records of 103 patients who had received Td in the ED were scrutinised and a questionnaire was sent to each patient's GP requesting information about the patient's tetanus immunisation history before the dose given in the ED. Information was received for 99 patients (96% response). In 34/99, primary care records showed the patient was fully immunised before the dose given in the ED. One patient had received eight doses before the ED dose and two patients had been immunised less than 1 year before the ED dose. In 35/99, records suggested that the patient was not fully immunised. However, in this group few records were held before the early 1990s and it is possible some may have had five previous doses.
In 30/99 there were no tetanus immunisation records. In 80/99 no features suggesting the wound was tetanus prone were recorded.

These findings have caused us to feel that some doses of Td are unnecessary. Patients' recollections of their immunisation history may be unreliable. We have recommended that during working hours, the patient's general practice should be contacted to check immunisation records. Out of hours, if the patient is under the age of 40 and the wound is not tetanus prone (as defined in DoH Guidance1), the general practice should be contacted as soon as possible and the immunisation history checked before administering Td. However, we would like to emphasize that wound management is paramount, and that where tetanus is a risk in a patient who is not fully immunised, a tetanus booster will not provide effective protection against tetanus. In these instances, tetanus immunoglobulin (TIG) also needs to be considered (and is essential for tetanus prone wounds). In the elderly and other high-risk groups—for example, intravenous drug abusers—the need for a primary course of immunisation against tetanus should be considered, not just a single dose, and follow up with the general practice is therefore needed.

The poor state of many primary care immunisation records is a concern and this may argue in favour of centralised immunisation records or a patient electronic record to protect patients against unnecessary immunisations as well as tetanus.

Accepted for publication 25 February 2004
Accepted for publication 12 October 2004
T Burton, S Crane
Accident and Emergency Department, York Hospital, Wigginton Road, York, UK
Correspondence to: Dr T Burton, 16 Tom Lane, Fulwood, Sheffield, S10 3PB; tomandjackie@doctors.org.uk
doi: 10.1136/emj.2004.021121

References
1 CMO. Replacement of single antigen tetanus vaccine (T) by combined tetanus/low dose diphtheria vaccine for adults and adolescents (Td) and advice for tetanus immunisation following injuries. In: Update on immunisation issues PL/CMO/2002/4. London: Department of Health, August 2002.
2 http://www.hpa.org.uk/infections/topics_az/tetanus.

Accepted for publication 20 October 2004

BOOK REVIEWS

Environmental Health in Emergencies and Disasters: A Practical Guide
Edited by B Wisner, J Adams. World Health Organization, 2003, £40.00, pp 252. ISBN 9241545410

I have the greatest admiration for doctors who dedicate themselves to disaster preparedness and intervention. For most doctors there will, thank god, rarely be any personal involvement in environmental emergencies and disasters. For the others who are involved, the application of this branch of medicine must be some form of "virtual" game of medicine, lacking in visible, tangible gains for the majority of their efforts.

Reading this World Health Organization publication however has changed my perception of the importance of emergency planners, administrators, and environmental technical staff. I am an emergency physician, blinkered by measuring the response of interventions in real time; is the peak flow better after the nebuliser? Is the pain less with intravenous morphine? But if truth be known it is the involvement of public health doctors and emergency planners that makes the biggest impact in saving lives worldwide, as with doctors involved in public health medicine.

This book served to demonstrate to me my ignorance on matters of disaster responsiveness. But can 252 pages of General Aspects and Technical Aspects be comprehensive with regards to disaster planning? Can it provide me with what I need to know? I was confused by the end of my involvement with this book, or perhaps overwhelmed by the enormity of the importance of non-medical requirements such as engineering and technical expertise in planning for and managing environmental catastrophes.

Who is this book for? I am still not sure. The everyday emergency physician? I think not. It serves a purpose to be educational about what is required, in a generic sort of way, when planning disasters. Would I have turned to it last year during the SARS outbreak? No. When I feared a bio-terrorism threat? No. When I watched helplessly the victims of the latest Iranian earthquake? No. To have done so would have been to participate in some form of voyeurism on other people's misery. Better to embrace the needs of those victims of environmental disasters in some tangible way than rush to the book shelf to brush up on some aspect of care which is so remote for the majority of us in emergency medicine.

J Ryan
St Vincent's Hospital, Ireland; j.ryan@st-vincents.ie

Neurological Emergencies: A Symptom Orientated Approach
Edited by G L Henry, A Jagoda, N E Little, et al. McGraw-Hill Education, 2003, £43.99, pp 346. ISBN 0071402926

The authors set out with very laudable intentions. They wanted to get the "maximum value out of both professional time and expensive testing modalities". I therefore picked up this book with great expectations—the prospect of learning a better and more memorable way of dealing with neurological cases in the emergency department. The chapter headings (14 in number) seemed to identify the key points I needed to know and the size of the book (346 pages) indicated that it was possible to read.

Unfortunately things did not start well. The initial chapter on basic neuroanatomy mainly used diagrams from other books. The end result was areas of confusion where the text did not entirely marry up with the diagrams. The second chapter, dealing with evaluating the neurological complaint, was better and had some useful tips. However the format provided a clue as to how the rest of the book was to take shape—mainly text and lists.

The content of this book was reasonably up to date and if you like learning neurology by reading text and memorising lists then this is the book for you. Personally I would not buy it. I felt it was a rehash of a standard neurology textbook and missed a golden opportunity of being a comprehensible text on emergency neurology, written by emergency practitioners for emergency practitioners.

P Driscoll
Hope Hospital, Salford, UK; peter.driscoll@srht.nhs.uk

Emergency medicine procedures
E Reichmann, R Simon. New York: McGraw-Hill, 2003, £120, pp 1563. ISBN 0-07-136032-8

This book has 173 chapters, allowing each chapter to be devoted to a single procedure, which, coupled with a clear table of contents, makes finding a particular procedure easy. This will be appreciated mostly by the emergency doctor on duty needing a rapid "refresher" for infrequently performed skills.

"A picture is worth a thousand words" was never so true as when attempting to describe invasive procedures. The strength of this book lies in the clarity of its illustrations, which number over 1700 in total. The text is concise but comprehensive. Anatomy, pathophysiology, indications and contraindications, equipment needed, technicalities, and complications are discussed in a standardised fashion for each chapter. The authors, predominantly US emergency physicians, mostly succeed in refraining from quoting the "best method" and provide balanced views of alternative techniques. This is well illustrated by the shoulder reduction chapter, which pictorially demonstrates 12 different ways of reducing an anterior dislocation. In fact, the only notable absentee is the locally preferred Spaso technique.

The book covers every procedure that one would consider in the emergency department and many that one would not. Fish hook removal, zipper injury, contact lens removal, and emergency thoracotomy are all explained with equal clarity. The sections on soft tissue procedures, arthrocentesis, and regional analgesia are superb. In fact, by the end of the book, I was confident that I could reduce any paraphimosis, deliver a baby, and repair a cardiac wound. However, I still had nagging doubts about my ability to aspirate a subdural haematoma in an infant, repair the great vessels, or remove a rectal foreign body. Reading the preface again, I was relieved. The main authors acknowledge that some procedures are for "surgeons" only and are included solely to improve the understanding by "emergentologists" of procedures that may present with late complications. These chapters are unnecessary, while others would be better placed in a pre-hospital text. Thankfully, they are relatively few in number, with the vast majority of the book being directly relevant to emergency practice in the UK.

Weighing approximately 4 kg, this is undoubtedly a reference text. The price (£120) will deter some individuals but it should be considered an essential reference book for SHOs, middle grades, and consultants alike. Any emergency department would benefit from owning a copy.

J Lee

work_h7kyvzzrhvbybbyeobdiwv5pgu ----

Ward S, et al. BMJ Open 2017;7:e013657.
doi:10.1136/bmjopen-2016-013657

Open Access

Association between childcare educators' practices and preschoolers' physical activity and dietary intake: a cross-sectional analysis

Stéphanie Ward,1 Mathieu Bélanger,2 Denise Donovan,3 Hassan Vatanparast,4 Nazeem Muhajarine,5 Rachel Engler-Stringer,5 Anne Leis,5 M Louise Humbert,6 Natalie Carrier7

To cite: Ward S, Bélanger M, Donovan D, et al. Association between childcare educators' practices and preschoolers' physical activity and dietary intake: a cross-sectional analysis. BMJ Open 2017;7:e013657. doi:10.1136/bmjopen-2016-013657

► Prepublication history and additional material is available. To view please visit the journal (http://dx.doi.org/10.1136/bmjopen-2016-013657)

Received 28 July 2016; Revised 15 March 2017; Accepted 17 March 2017

For numbered affiliations see end of article.

Correspondence to Dr Stéphanie Ward; stephanie.ann.ward@usherbrooke.ca

Research

Abstract

Introduction: Childcare educators may be role models for healthy eating and physical activity (PA) behaviours among young children. This study aimed to identify which childcare educators' practices are associated with preschoolers' dietary intake and PA levels.

Methods: This cross-sectional analysis included 723 preschoolers from 50 randomly selected childcare centres in two Canadian provinces. All data were collected in the fall of 2013 and 2014 and analysed in the fall of 2015. PA was assessed using Actical accelerometers during childcare hours for 5 consecutive days. Children's dietary intake was measured at lunch on 2 consecutive days using weighed plate waste and digital photography. Childcare educators' nutrition practices (modelling, nutrition education, satiety recognition, verbal encouragement and not using food as rewards) and PA practices (informal and formal PA promotion) were assessed by direct observation over the course of 2 days, using the Nutrition and Physical Activity Self-Assessment for Child Care tool.
Associations between educators' practices and preschoolers' PA and dietary intake were examined using multilevel linear regressions.
Results Overall, modelling of healthy eating was positively associated with children's intake of sugar (β=0.141, 95% CI 0.03 to 0.27), while calorie (β=−0.456, 95% CI −1.46 to −0.02) and fibre intake (β=−0.066, 95% CI −0.12 to −0.01) were negatively associated with providing nutrition education. Not using food as rewards was also negatively associated with fat intake (β=−0.144, 95% CI −0.52 to −0.002). None of the educators' PA practices were associated with children's participation in PA.
Conclusions Modelling healthy eating, providing nutrition education and not using food as rewards are associated with children's dietary intake at lunch in childcare centres, highlighting the role that educators play in shaping preschoolers' eating behaviours. Although PA practices were not associated with children's PA levels, there is a need to reduce sedentary time in childcare centres.
Introduction
Childhood obesity is currently a great public health challenge.1 Primary prevention and treatment strategies for obesity in children include reducing energy intake and increasing physical activity (PA) levels.2 The theory of observational learning3 suggests that children's behaviours can be influenced by individuals who are part of their social environment.
Specifically, the theory proposes that individuals' eating behaviours and PA can be shaped by observing and imitating others.4 Over 80% of preschoolers (aged 2–5) living in developed countries receive formal childcare outside their home.5 Preschoolers spend an average of approximately 30 hours a week in childcare centres.6 7 Therefore, childcare educators are potentially key actors for promoting healthy eating and PA behaviours in young children.8 Childcare centres may help shape children's eating behaviours and PA.9 10 One systematic review reported that healthy eating interventions in childcare centres seem to have a positive influence on children's consumption of vegetables and fruit and to improve their nutrition-related knowledge.9
Strengths and limitations of this study
► This study included a diversity of childcare centres in terms of geographical location, language spoken and socioeconomic status, which were randomly selected across two Canadian provinces.
► Objective methods were used for assessing dietary intake and physical activity of preschoolers in childcare centres, and direct observation was used to measure childcare educators' practices.
► Dietary intake was assessed at lunch on 2 consecutive days, which may not have been enough to represent preschoolers' usual intake.
► The presence of research assistants may have influenced childcare educators' practices and children's behaviours.
Another reported that limiting the number
of children playing at one time, using ground markings and equipment and focusing on goal setting or reinforcement were effective PA interventions.10 A recent systematic review suggested that childcare educators may be positive role models for healthy eating behaviours and PA in preschoolers, but which childcare educators' practices influence children's eating behaviours and PA is still unclear.11 For example, while studies have found that some educator practices and behaviours promote or are positively associated with PA (eg, leading PA activities, participating in children's PA)12–16 and healthy eating (eg, eating with the children, talking about healthy foods),17–19 the same practices, in addition to others, were also found to be non-significant in other studies.14 17 18 20 Therefore, to train childcare educators as effective role models, the evidence base must be improved. In light of the existing literature and theory, we hypothesise that specific practices of childcare educators can positively influence healthy behaviours for preschoolers. This cross-sectional study aimed to identify practices that are associated with preschoolers' dietary intake and PA levels.
Methods
Study sample. Baseline data from the first and second year (2013–2014 and 2014–2015) of the Healthy Start–Départ Santé (HSDS) study were used for this cross-sectional secondary analysis. HSDS is a cluster-randomised controlled trial (#NCT02375490) conducted in the provinces of Saskatchewan and New Brunswick, Canada. It was designed to assess the effectiveness of an intervention promoting healthy eating and PA in childcare centres.21 Details of the HSDS study have been published elsewhere21 and are presented briefly herein. Childcare centres were selected from governmental registries of all licensed childcare centres in both provinces. Inclusion criteria for the HSDS study included not having received a nutrition or PA intervention in the past, offering a preschool programme, offering lunch and, for practical purposes, having a minimum of 20 full-time preschoolers. Childcare centres that met eligibility criteria were stratified by geographical location (rural or urban) and by the language of their school district (Anglophone or Francophone) and were then randomly selected. In the first 2 years of the HSDS study, a total of 84 childcare centres were contacted by telephone, provided with information and invited to participate. Consent was obtained from 51 of those centres (61%). All 1208 preschoolers attending these childcare centres on a full-time basis were eligible to participate, and parents or guardians of 730 children (60.4%) provided signed, informed consent.
PA and sedentary behaviour. PA was assessed using Actical accelerometers (B and Z-series, Mini Mitter/Respironics, Oregon, USA).22 Compared with other accelerometers, the Actical has higher intra-instrument and inter-instrument reliability23 and correlates at r=0.89 with directly measured oxygen consumption in preschoolers.24 Accelerometers were programmed by research assistants the night before they were provided to the children. Monitoring start date and time were entered as midnight of the following day. Children wore the accelerometer on their hip during childcare hours for 5 consecutive weekdays in the fall of 2013 and 2014. Childcare educators were instructed in the use of the accelerometers and were asked to put them on the children on arrival at the childcare centre and remove them before leaving. Since the accelerometers are digitally time stamped, educators were not required to log when accelerometers were put on or taken off.
Accelerometer data were recorded in 15 s epochs to measure time spent in PA and sedentary behaviour according to predetermined thresholds validated in preschoolers.24 Specifically, accelerometer counts of less than 25 per epoch indicate sedentary behaviour (which includes nap time),25 counts between 25 and 714 per epoch indicate light intensity PA (LPA) time,24 25 while counts of 715 or more per epoch indicate moderate to vigorous intensity PA (MVPA).24 Data obtained in the first year of the study were used to determine the minimum number of valid days and hours to retain, using a statistical method described by Rich et al.26 Specifically, the Spearman-Brown formula and the intraclass correlation coefficient were used to calculate the reliability coefficients (r) of the mean daily counts/minute26 and compare results among children who met wear times between 1 and 10 hours (based on typical childcare hours of 7:30–17:30) and wear days between 1 and 5 (Monday–Friday).26 The results demonstrated that using a minimum of 2 hours of wear time per day on 4 consecutive days provided acceptable reliability coefficients (r=0.79) while maximising sample size (n=360), and this was therefore set as the minimal wear time criterion for inclusion in the analyses.27 This is similar to previous studies in childcare centres, which have used a minimum of 1 hour of wear time per day on at least 3 days.28–30 All children's PA data were then standardised to an 8-hour period to control for within-participant and between-participant wear-time variations.31 Raw accelerometer data were cleaned and managed using SAS codes adapted for this study.32
Dietary intake. Children's intake of vegetables and fruit, fibre, sugar, fat and sodium was measured at lunch on 2 consecutive days with weighed plate waste and digital photography.
The weighed plate waste method has been extensively used in studies conducted in school-aged children33–35 and has been shown to be a precise measurement of dietary intake.36 37 Foods were weighed and a picture taken before and after each serving. The difference in weight between the initial serving and the leftovers was used to calculate each child's food intake.36 37 If food was spilled or dropped around the child's plate or chair, it was gathered, weighed and added as leftovers. As for spilled beverages, research assistants visually assessed the amount spilled compared with the amount served to estimate the amount consumed by the child. No trades were observed as all children were served the same foods. Pictures were used to validate the data collected from weighing, identify the types of foods served and estimate the quantity of each food item left on the plate. Recipes were obtained and used to assess the nutritional content of the foods served by using nutritional analysis software (Food Processor, V.10.10.00), from which estimated intakes of vegetables and fruit, fibre, sugar, fat and sodium were derived. Children's average dietary intake over the 2 days of data collection was then calculated.
Childcare educators' practices. Two trained research assistants observed educators' practices over the course of the two data collection days using 19 of the items of the Nutrition and Physical Activity Self-Assessment for Child Care (NAP SACC).38 39 These items were selected as they specifically assessed educators' practices. Each research assistant recorded their general observations independently and compared their observations at the end of the second day. Research assistants showed excellent inter-rater reliability (Cohen's kappa=0.942, p<0.001).
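The plate-waste arithmetic described above is simple: intake is the served weight minus recovered leftovers, and nutrient intake follows from the recipe's nutrient density. A minimal sketch with invented values (the helper names are hypothetical, and the study derived nutrient profiles with the Food Processor software rather than a hand-coded table):

```python
# Sketch of the plate-waste arithmetic described above: intake is the
# served weight minus recovered leftovers, and nutrient intake is scaled
# from the recipe's per-100 g profile. All names and values are invented
# for illustration.

def consumed_grams(served_g: float, leftover_g: float, spilled_g: float = 0.0) -> float:
    """Weighed plate waste: intake = served - (leftovers + recovered spills)."""
    return served_g - (leftover_g + spilled_g)

def nutrient_intake(consumed_g: float, per_100g: dict[str, float]) -> dict[str, float]:
    """Scale a recipe's per-100 g nutrient profile to the grams consumed."""
    return {k: v * consumed_g / 100 for k, v in per_100g.items()}

# Illustrative lunch: 250 g served, 70 g left on the plate, 5 g dropped.
eaten = consumed_grams(250, 70, 5)            # 175.0 g
lunch = nutrient_intake(eaten, {"fibre_g": 1.5, "sugar_g": 6.0, "sodium_mg": 180})
# Average over the study's 2 days of collection (second day invented):
average_intake = (eaten + 145.0) / 2          # 160.0 g
```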
Three nutrition experts categorised the nutrition practices items (13 items) into five types of practices: modelling (three items, ie, 'When in classrooms during meal or snack times, teachers and staff eat and drink the same foods and beverages as children'; 'Teachers enthusiastically role model eating healthy foods served at meal and snack times'; 'Teachers and staff eat or drink unhealthy foods or beverages in front of children'), nutrition education (two items, ie, 'Teachers talk with children informally about healthy eating'; 'Teachers incorporate planned nutrition education into their classroom routines'), satiety recognition (four items, ie, 'Meals and snacks are served to preschool children by…'; 'When children eat less than half of a meal or snack, teachers ask them if they are full before removing their plates'; 'When children request seconds, teachers ask them if they are still hungry before serving more food'; 'Teachers require that children sit at the table until they clean their plates'), verbal encouragement (three items, ie, 'During indoor and outdoor physically active playtime, teachers remind children to drink water…'; 'Teachers praise children for trying new or less preferred foods'; 'Teachers use an authoritative feeding style') and the use of food as rewards (one item, ie, 'Teachers use food to calm upset children or encourage appropriate behaviour').
Three experts in PA categorised the PA practices items (six items) into two types of practices: informal promotion of PA (three items, ie, 'Teachers take the following role during preschool children's physically active playtime…'; 'Teachers incorporate PA into classroom routines and transitions'; 'Teachers talk with children informally about the importance of PA'), which was defined as practices that stemmed from educators' own values or beliefs regarding PA, and formal promotion of PA (three items, ie, 'Teachers offer portable play equipment to preschool children and toddlers during indoor free play time'; 'As punishment for misbehaviour, preschool children or toddlers are removed from physically active playtime for longer than 5 min'; 'Teachers lead planned lessons to build preschool children's and toddlers' motor skills'), which are practices that are embedded in the childcare centres' daily routine or policies. Each item was scored on a scale ranging from 0 to 3, where 0 represented the practice least conducive to healthy behaviours and 3 represented the most favourable practice. The sum of the items in each of the seven categories provided a score for that practice at the childcare centre level, and an overall nutrition and PA practices score was calculated.
Statistical analyses. Statistical analyses were conducted in the fall of 2015 using R, V.3.1.1. Normality tests were used to determine the distribution of each outcome variable. To transform the outcomes into approximately normal distributions, logarithmic transformations for fibre, sugar, MVPA and sedentary time were undertaken, and square root transformations were used for calories, fat, sodium, as well as fruit and vegetables (with and without potatoes).
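The NAP SACC-based scoring described above (each item rated 0–3, summed within its category to a centre-level score) can be sketched as follows. Category names and item counts follow the text; the ratings themselves are invented for illustration:

```python
# Sketch of the NAP SACC-based scoring described in this section: each
# observed item is rated 0-3 (3 = most favourable practice) and items are
# summed within a category to give a centre-level score. Ratings invented.

ITEMS_PER_CATEGORY = {
    "modelling": 3,
    "nutrition_education": 2,
    "satiety_recognition": 4,
    "verbal_encouragement": 3,
    "no_food_rewards": 1,
}
MAX_SCORE = {cat: n * 3 for cat, n in ITEMS_PER_CATEGORY.items()}  # eg modelling 0-9

def category_score(ratings: list[int]) -> int:
    """Sum 0-3 item ratings for one category."""
    if any(not 0 <= r <= 3 for r in ratings):
        raise ValueError("item ratings must be between 0 and 3")
    return sum(ratings)

# One centre's invented item ratings:
ratings = {
    "modelling": [2, 1, 2],
    "nutrition_education": [1, 1],
    "satiety_recognition": [2, 1, 1, 1],
    "verbal_encouragement": [1, 1, 1],
    "no_food_rewards": [3],
}
scores = {cat: category_score(r) for cat, r in ratings.items()}
overall_nutrition = sum(scores.values())  # on the 0-39 scale reported in Table 1
```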
Multilevel linear regressions were used to evaluate the association between nutrition practices of educators and dietary intake of children, and the association between PA practices of educators and children's time spent in total PA, MVPA, LPA and sedentary activity. Models were computed in three steps. First, univariate models were generated (Step 1), followed by models which included all covariates, such as province (New Brunswick or Saskatchewan), rurality, number of children in the childcare centre and socioeconomic status of the region (Step 2). Models were then fully adjusted by including childcare centres as an additional level to account for potential clustering (Step 3). Socioeconomic status of the region was based on total income of persons aged 15 years and older living in private households, which was obtained from data from the 2011 National Household Survey.40 According to publicly available geospatial information from the Community Information Database, 2006,41 childcare centres were defined as urban if they were in census metropolitan areas, census agglomerations or strong metropolitan influenced zones (MIZ). They were defined as rural if they had moderate, weak or no MIZ. Although body mass index (BMI) was not entered in the models as a confounding variable, the age-adjusted BMIs of children, based on the International Obesity Task Force criteria,42 are presented to give demographic context to this study's sample. Age-adjusted BMI was obtained by calculating the ratio between their weight in kilograms (measured using the Conair scale, CN2010CX model) and the square of their height in metres (measured using the SECA 213 portable stadiometer).
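The three-step modelling strategy can be illustrated in miniature. The study fitted multilevel models in R; the sketch below shows only a Step 1-style unadjusted regression of a child-level outcome on a centre-level practice score, on synthetic data with NumPy least squares. Steps 2 and 3 would add covariates and a per-centre random intercept via a mixed-model library:

```python
import numpy as np

# Miniature of the modelling strategy on synthetic data (not study data).
# Step 1 only: an unadjusted regression of a child-level outcome on the
# centre-level practice score. Steps 2-3 would add covariates and a
# per-centre random intercept using a mixed-model library.
rng = np.random.default_rng(0)

n_centres, per_centre = 50, 10
practice = rng.uniform(0, 9, n_centres)          # centre-level practice score
centre_effect = rng.normal(0, 0.2, n_centres)    # clustering that Step 3 models
x = np.repeat(practice, per_centre)              # each child gets its centre's score
y = (3.0 + 0.14 * x                              # simulated true slope of 0.14
     + np.repeat(centre_effect, per_centre)
     + rng.normal(0, 0.3, n_centres * per_centre))

# Step 1: univariate association via ordinary least squares.
X = np.column_stack([np.ones_like(x), x])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
intercept, slope = coef                          # slope should sit near 0.14
```

Ignoring the clustering, as Step 1 does, leaves the slope roughly unbiased here but understates its uncertainty, which is why the fully adjusted Step 3 models centres as an additional level.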
Results
Since data collection took longer than expected in one of the 51 centres recruited, research assistants were not able to provide an accurate assessment of educators' practices in that centre. Therefore, 50 centres were retained for these analyses and a total of 723 children provided data and were included. The average age (SD) of the 723 children was 4.0 (0.7) years and 52% were boys (table 1). On average, the 436 children who were present at lunch on at least one of the 2 days and for whom dietary data were collected and available at the time of these analyses had low fruit and vegetables (64.1 g/day) and fibre (2.7 g/day), and high sugar (13.7 g/day) and sodium (487.4 mg/day) intakes. For the total of 624 children providing valid accelerometer data, 64% of their time in childcare centres was spent in sedentary activities (306.7 min/day). On average, childcare centres were awarded approximately half of the possible points for each of the nutrition and PA practices, although food rewards were used in only two of the 50 centres. The variance in scores was slightly greater for the PA practices than for the nutrition practices. Modelling, nutrition education and not using food rewards were associated with the children's intake of one or more nutrients (table 2). Modelling was positively associated with the intake of sugar, while nutrition education was negatively associated with the intake of calories and fibre. To put this in context, children under the supervision of educators who obtained 5 points for modelling consumed an average of 28 g of sugar, versus an average of 48 g among children supervised by educators who obtained 9 points. In addition, children would consume an average of 223 kcal when educators obtained 3 points for nutrition education, versus 167 kcal when educators obtained 6 points.
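Because sugar intake was log-transformed before modelling, the β for modelling (0.141 per point) acts multiplicatively on the original scale, which is roughly how the 28 g versus 48 g contrast above can be recovered. A sketch, assuming a natural-log transform (an assumption; small rounding differences are expected):

```python
from math import exp

# Back-transforming the coefficient for sugar, whose outcome was
# log-transformed before modelling: beta = 0.141 per modelling point, so a
# 4-point score difference multiplies expected intake by exp(0.141 * 4).
# Assumes a natural-log transform; rounding differences are expected.
beta = 0.141
intake_at_5_points = 28.0                         # g of sugar, as reported
ratio = exp(beta * (9 - 5))                       # about 1.76
intake_at_9_points = intake_at_5_points * ratio   # about 49 g vs the ~48 g reported
```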
Not using food rewards was negatively associated with intake of fat; however, satiety recognition and verbal encouragement were not associated with children's intake of nutrients or vegetables and fruit. None of the PA practices were associated with total time spent in PA, MVPA, LPA or sedentary activity (table 3).
Table 1 Characteristics of study participants [n (%) or Mean±SD; 95% CI]
Child-level characteristics (n=723):
Sex: Boys 378 (52.3); Girls 345 (47.7)
Age-adjusted body mass index (BMI): Underweight (BMI <18) 79 (12.2); Healthy weight (BMI 18–24.9) 474 (73.0); Overweight (BMI 25–29.9) 73 (11.3); Obese (BMI ≥30) 23 (3.5)
Age (years): 4.0±0.7; 4.0 to 4.1
BMI (kg/m2): 20.2±3.7; 20.0 to 20.5
Dietary intake per lunch (n=436):
Vegetables/fruit (g): 64.1±48.5; 59.6 to 68.7
Vegetables/fruit excluding potatoes (g): 42.9±38.3; 39.3 to 46.5
Calories (kcal): 288.2±125.7; 276.4 to 300.0
Fibre (g): 2.7±1.4; 2.5 to 2.8
Sugar (g): 13.7±12.0; 12.6 to 14.8
Fat (g): 8.8±4.4; 8.4 to 9.2
Sodium (mg): 487.4±292.2; 459.8 to 514.9
Physical activity (PA) per day (n=624):
Total PA (min/day): 171.9±55.6; 167.5 to 176.2
Moderate to vigorous intensity PA (min/day): 9.7±9.3; 9.0 to 10.5
Light intensity PA (min/day): 162.2±53.6; 158.1 to 166.4
Sedentary time (min/day): 306.7±59.4; 302.0 to 311.3
Centre-level characteristics (n=50):
Socioeconomic status of the region: $30 473±$6805; $28 587 to $32 359
School district: Anglophone 32 (64); Francophone 18 (36)
Rurality: Rural 19 (38); Urban 31 (62)
Number of children in the childcare centre: 23±11; 20 to 26
Childcare educators' practices (range)*: Modelling (0–9 points): 4.9±1.4; 4.7 to 5.0 (Continued)
Discussion
Our results demonstrate that educators' modelling, nutrition education and not using food as rewards are associated with children's dietary intake at lunch in childcare centres. However, the benefits of these practices may largely depend on what the childcare centre offers. This study highlights the importance of educators, but also of childcare centres as a whole, in promoting healthy eating among preschoolers. However, our results did not suggest that educators influence PA-related behaviours of children under their care.
Educators' nutrition practices and children's dietary intake. When educators enthusiastically ate or drank the same foods and beverages as the children and did not consume unhealthy foods or beverages in front of the children, preschoolers ate greater amounts of sugar. This is in line with a study that found that children's intake and acceptance of food increased when educators enthusiastically modelled healthy eating.43 Our study findings probably reflect the nutritional composition of the foods served in the childcare centres. For example, we observed that high-sugar-containing foods, such as cookies, pastries and fruit juices, were commonly served, which is similar to previous studies that have reported that children attending childcare centres consume excess amounts of added sugars.44 45 Thus, for modelling to be effective at promoting healthy eating, it is essential for childcare centres to offer nutritious foods. The more nutrition education practices were demonstrated, such as planning nutrition-related activities and talking informally to children about food and healthy eating, the fewer calories and the less fibre children consumed. The type of nutritional information shared and the sources of this information are likely to be magazines, books and the internet, as Canadians use these most frequently for nutrition information.46 These sources often present erroneous, misleading and conflicting nutrition information.
Table 1 (Continued) [Mean±SD; 95% CI]
Nutrition education (0–6 points): 1.9±1.5; 1.7 to 2.0
Satiety recognition (0–12 points): 5.1±1.8; 4.9 to 5.2
Verbal encouragement (0–9 points): 3.2±1.8; 3.0 to 3.3
No use of food as rewards (0–3 points): 2.8±0.5; 2.8 to 2.9
Overall nutrition practices (0–39 points): 17.8±4.0; 17.5 to 18.2
Informal PA promotion (0–9 points): 4.6±2.6; 4.4 to 4.8
Formal PA promotion (0–9 points): 6.2±2.1; 6.0 to 6.4
Overall PA practices (0–18 points): 10.8±4.1; 10.5 to 11.1
*High scores indicate healthier practices.
Furthermore, it has been reported that childcare educators believe they have to control what and how much children should eat in order to prevent childhood obesity.47 Providing evidence-based nutrition education to educators could represent a promising avenue for healthy eating promotion among preschoolers. In our study, not using food as rewards was negatively associated with fat intake. Previous studies have found that using a special dessert as a reward48 or combining positive reinforcement and a tangible reward (ie, sticker)49 was an effective way of increasing children's intake of fruit or vegetables. It is possible that food or non-food rewards act as extrinsic motivation for children to eat. If this extrinsic motivation is absent, children may be less inclined to eat, thus explaining our findings. However, studies have shown that offering a desirable food as a reward for eating
another has been linked to an enhanced preference for the food used as a reward, while the preference for the distasteful food decreases.50 51 Therefore, it is suggested that verbal rewards be used rather than tangible rewards.52 Although previous studies have found that verbal encouragement48 49 and encouraging preschoolers to eat healthy foods while allowing them to make their own food choices53 increased their consumption of fruit and vegetables, verbal encouragement was not associated with children's dietary intake in our study. This could be due to the children's overall appreciation of the foods served at lunch, as it has been suggested that verbal encouragement is more effective in promoting the consumption of disliked foods than that of foods already enjoyed by children. Similarly, while satiety recognition practices were not associated with children's dietary intake in our study, they may help promote positive feeding behaviours and a pleasant emotional climate at mealtimes.53
Table 2 Multilevel linear regression-derived estimates of the association between educators' practices and children's dietary intake [β (95% CI)]
Modelling: vegetables and fruit 0.042 (−0.07 to 0.48); vegetables and fruit without potatoes 0.001 (−0.18 to 0.25); calories 0.366 (0.00 to 1.42); fibre 0.065 (−0.001 to 0.14); sugar 0.141 (0.03 to 0.27); fat 0.004 (−0.27 to 0.04); sodium 0.180 (−0.762 to 2.69)
Nutrition education: vegetables and fruit −0.038 (−0.42 to 0.07); without potatoes −0.01 (−0.31 to 0.10); calories −0.456 (−1.46 to −0.02); fibre −0.066 (−0.12 to −0.01); sugar −0.081 (−0.17 to 0.019); fat −0.009 (−0.04 to 0.00); sodium −0.951 (−4.33 to 0.02)
Satiety recognition: vegetables and fruit 0.001 (−0.13 to 0.17); without potatoes 0.000 (−0.14 to 0.14); calories −0.001 (−0.28 to 0.20); fibre 0.011 (−0.04 to 0.06); sugar 0.013 (−0.08 to 0.11); fat −0.000 (−0.01 to 0.01); sodium 0.008 (−0.79 to 1.12)
Verbal encouragement: vegetables and fruit 0.060 (−0.01 to 0.37); without potatoes 0.000 (−0.12 to 0.14); calories −0.021 (−0.37 to 0.11); fibre 0.015 (−0.04 to 0.07); sugar 0.027 (−0.06 to 0.12); fat −0.001 (−0.01 to 0.00); sodium −0.591 (−2.79 to 0.02)
Not using food rewards: vegetables and fruit −0.001 (−2.16 to 2.02); without potatoes 0.355 (−0.62 to 3.920); calories −1.248 (−8.53 to 0.46); fibre −0.023 (−0.21 to 0.20); sugar −0.075 (−0.34 to 0.30); fat −0.144 (−0.52 to −0.002); sodium −0.042 (−15.21 to 12.25)
Overall nutrition practices: vegetables and fruit 0.002 (−0.01 to 0.04); without potatoes 0.000 (−0.02 to 0.02); calories −0.004 (−0.07 to 0.02); fibre 0.003 (−0.02 to 0.03); sugar 0.011 (−0.03 to 0.05); fat −0.000 (−0.003 to 0.000); sodium −0.04 (−0.36 to 0.04)
Estimates are adjusted for province, rurality, socioeconomic status of the region and number of children in the childcare centre. Boldface indicates statistical significance (p<0.05). Fixed effect variance ranged from 1.64% to 14.8%. Random effect variance ranged from 24.3% to 47.9%.
Educators' PA promoting practices and children's PA levels. Our study found no association between educators' PA practices and children's PA levels.
The results from previous studies are inconsistent.11 While some studies found that offering portable play equipment can increase PA,15 16 54–56 one found that not withholding PA as a means of punishment was not associated with preschoolers' PA.15 Another reported a decrease in children's PA when childcare educators were present.57 Other variables may have a larger influence on children's choice to be physically active, such as the PA levels of their peers,58 or if they feel like being active or not on a particular day. It is also possible that our findings are a consequence of how the PA items of the NAP SACC were grouped. For example, children's PA may be associated with some informal or formal practices but not others. Although our results showed no statistically significant effect, it may be important for educators to create opportunities for children to be active, to encourage and model a physically active lifestyle and to establish an environment that supports PA. A recent study found that PA opportunities accounted for only 48 min or 12% of the total childcare day.20 The same study also found that while outdoor child-initiated free play was most common, outdoor teacher-led physical activities were the least frequently observed PA opportunity.20 In line with findings of other studies, our results showed room for improvement as children spent a large amount of time in sedentary activities.20 59 60 Our finding that educators' practices were associated with children's dietary intake but not with PA could be explained by differences in the times at which those two behaviours were assessed. Nutrition practices were primarily observed during well-defined lunch periods, at which point children's dietary intake was also assessed.
While the connection observed between educators' practices and children's eating was direct and immediate, PA practices were observed at various times during the 2 days of data collection and children's PA was assessed through the entire day. This disconnect is likely to have obscured any punctual association between educators' practices and children's PA. This and the educators' infrequent use of PA practices could explain why no statistically significant relationship was found. Therefore, it may be important to educate childcare educators on how they can play a role in helping children become more physically active, by providing them with training in PA.61–63 Future research should investigate if increasing childcare educators' ability to facilitate, encourage and model more PA results in preschoolers becoming more physically active.
Strengths and limitations. This study had several strengths, including the use of objective methods for assessing dietary intake and PA, the direct observation of childcare educators' practices by trained research assistants and the diversity of childcare centres in terms of geographical location, language spoken and socioeconomic status. However, its limitations must be acknowledged.
Multiple testing, as was done in this study, may have incorrectly rejected the null hypothesis, thus yielding false statistically significant results.64 Therefore, confirmatory studies which aim to provide definite proof and guide decision making should use appropriate procedures for multiple test adjustments.64 Children's dietary data were collected on only 2 days, which may not be enough to represent preschoolers' usual intake since it can fluctuate from day to day.65 Also, only two childcare centres were observed to use food as rewards. Therefore, our finding may not be generalisable to centres which commonly use this practice. Since PA and dietary intake were only assessed during childcare hours, it is not possible to know if childcare educators' practices can impact children's activity and eating patterns outside the childcare centre.
Table 3 Multilevel linear regression-derived estimates of the association between educators' practices and children's physical activity (PA) [β (95% CI), min/day]
Formal PA promotion: total PA −0.382 (−4.05 to 3.30); moderate to vigorous PA −0.024 (−0.07 to 0.02); light intensity PA 0.280 (−3.14 to 3.71); sedentary activity 0.002 (−0.01 to 0.01)
Informal PA promotion: total PA −0.748 (−4.43 to 2.96); moderate to vigorous PA 0.004 (−0.98 to 0.05); light intensity PA −0.524 (−3.97 to 2.94); sedentary activity 0.003 (−0.01 to 0.01)
Overall PA practices: total PA −0.388 (−2.54 to 1.78); moderate to vigorous PA −0.007 (−0.03 to 0.02); light intensity PA −0.082 (−2.10 to 1.94); sedentary activity 0.001 (−0.01 to 0.01)
Estimates are adjusted for province, rurality, socioeconomic status of the region and number of children in the childcare centre. Fixed effect variance ranged from 1.7% to 4.6%. Random effect variance ranged from 11.6% to 19.7%.
Parental eating and PA behaviours should also be assessed in future studies, as aligning parents' and educators' practices may reinforce positive eating and PA behaviours of children throughout the entire day. It is also possible that the presence of the research assistants influenced the childcare educators' practices and children's behaviours. Furthermore, we used census data from the region in which childcare centres were located as a measure of socioeconomic status, which may be different from the actual socioeconomic status of the children attending the centres. Finally, the cross-sectional nature of the analyses limits the assessment of causal relationships.
Conclusion
In conclusion, our results provide insight into how childcare educators' practices may be associated with preschoolers' healthy behaviours, particularly those relating to dietary intake. We have shown that childcare educators who model healthy eating, provide nutrition education and avoid using food as rewards could potentially help children eat healthier, provided that the foods served are also of high nutritional value. Our results suggest that interventions should include childcare educators as agents for the promotion of healthy eating among preschoolers. Although none of the PA practices were associated with the preschoolers' PA levels in our study, the results demonstrate that children spend a large amount of time being sedentary. This supports the need for the development of effective interventions that aim to increase PA and decrease sedentary time in childcare centres.
Author affiliations
1 Faculty of Medicine and Health Sciences, Université de Sherbrooke, Moncton, New Brunswick, Canada
2 Department of Family Medicine, Université de Sherbrooke, Moncton, New Brunswick, Canada
3 Department of Community Health Sciences, Université de Sherbrooke, Moncton, New Brunswick, Canada
4 School of Public Health, University of Saskatchewan, Saskatoon, Saskatchewan, Canada
5 Department of Community Health and Epidemiology, University of Saskatchewan, Saskatoon, Saskatchewan, Canada
6 College of Kinesiology, University of Saskatchewan, Saskatoon, Saskatchewan, Canada
7 École des sciences des aliments, de nutrition et d'études familiales, Faculté des sciences de la santé et des services communautaires, Université de Moncton, Moncton, New Brunswick, Canada

Contributors SW conceived the study and collected, analysed and interpreted the data. MB conceived the study and interpreted the data. NC and DD interpreted the data. HV, NM, RE-S, AL and MLH conceived the study. All authors were involved in writing the manuscript and had final approval of the submitted and published versions.

Funding The Healthy Start study is financially supported by a grant from the Public Health Agency of Canada (#6282-15-2010/3381056-RSFS), a research grant from the Consortium national de formation en santé (#2014-CFMF-01), and a grant from the Heart and Stroke Foundation of Canada (#2015-PLNI). SW was supported by a Canadian Institutes of Health Research Charles Best Canada Graduate Scholarships Doctoral Award and by the Gérard-Eugène-Plante Doctoral Scholarship. The funders did not play a role in the design of the study, the writing of the manuscript or the decision to submit it for publication. No financial disclosures were reported by the authors of this paper.

Competing interests None declared.

Ethics approval The HSDS study received approval from the Centre Hospitalier de l'Université de Sherbrooke, the University of Saskatchewan and Health Canada ethics review boards.
Provenance and peer review Not commissioned; externally peer reviewed.

Data sharing statement Data from the Healthy Start study can be requested by emailing Professor Anne Leis; anne.leis@usask.ca.

Open Access This is an Open Access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/

© Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

References
1. World Health Organization. Population-Based Approaches to Childhood Obesity Prevention. 2012. www.who.int/dietphysicalactivity/childhood/WHO_new_childhoodobesity_PREVENTION_27nov_HR_PRINT_OK.pdf
2. Ebbeling CB, Pawlak DB, Ludwig DS. Childhood obesity: public-health crisis, common sense cure. Lancet 2002;360:473–82.
3. Bandura A. Social Learning Theory. Englewood Cliffs, NJ: Prentice Hall, 1977.
4. Schunk D. Learning theories: an educational perspective. 6th ed. Boston, MA: Pearson Education, 2012.
5. Organisation for Economic Co-operation and Development. PF3.2 Enrolment in Childcare and Pre-Schools. www.oecd.org/els/family/database.htm (accessed 1 May 2014).
6. United States Census Bureau. Child Care: an important part of American life. 2013. www.census.gov/how/pdf/child_care.pdf (accessed 17 Oct 2014).
7. Bushnik T. Child Care in Canada. Ottawa, ON: Statistics Canada, 2006.
8. Dooris M, Poland B, Kolbe L. Healthy settings: building evidence for the effectiveness of whole system health promotion. Challenges and future directions. In: McQueen D, Jones C, eds. Global perspectives on health promotion effectiveness.
New York: Springer, 2007:327–52.
9. Mikkelsen MV, Husby S, Skov LR, et al. A systematic review of types of healthy eating interventions in preschools. Nutr J 2014;13:56–64.
10. Temple M, Robinson JC. A systematic review of interventions to promote physical activity in the preschool setting. J Spec Pediatr Nurs 2014;19:274–84.
11. Ward S, Bélanger M, Donovan D, et al. Systematic review of the relationship between childcare educators' practices and preschoolers' physical activity and eating behaviours. Obes Rev 2015;16:1055–70.
12. Brown WH, Googe HS, McIver KL, et al. Effects of teacher-encouraged physical activity on preschool playgrounds. J Early Interv 2009;31:126–45.
13. Bower JK, Hales DP, Tate DF, et al. The childcare environment and children's physical activity. Am J Prev Med 2008;34:23–9.
14. Henderson KE, Grode GM, O'Connell ML, et al. Environmental factors associated with physical activity in childcare centers. Int J Behav Nutr Phys Act 2015;12:43.
15. Gunter KB, Rice KR, Ward DS, et al. Factors associated with physical activity in children attending family child care homes. Prev Med 2012;54:131–3.
16. Vanderloo LM, Tucker P, Johnson AM, et al. The influence of centre-based childcare on preschoolers' physical activity levels: a cross-sectional study. Int J Environ Res Public Health 2014;11:1794–802.
17. Gubbels JS, Gerards SM, Kremers SP. Use of food practices by childcare staff and the association with dietary intake of children at childcare. Nutrients 2015;7:2161–75.
18. Gubbels JS, Kremers SP, Stafleu A, et al. Child-care environment and dietary intake of 2- and 3-year-old children. J Hum Nutr Diet 2010;23:97–101.
19. Kharofa RY, Kalkwarf HJ, Khoury JC, et al. Are mealtime best practice guidelines for child care centers associated with energy, vegetable, and fruit intake? Child Obes 2016;12:52–8.
20. Tandon PS, Saelens BE, Christakis DA. Active play opportunities at child care. Pediatrics 2015;135:e1425–31.
21. Bélanger M, Humbert L, Vatanparast H, et al.
A multilevel intervention to increase physical activity and improve healthy eating and physical literacy among young children (ages 3–5) attending early childcare centres: the healthy Start-Départ Santé cluster randomised controlled trial study protocol. BMC Public Health 2016;16:313–322. 22. Pate RR, O'Neill JR, Mitchell J. Measurement of physical activity in preschool children. Med Sci Sports Exerc 2010;42:508–12. 23. Esliger DW, Tremblay MS. Technical reliability assessment of three accelerometer models in a mechanical setup. Med Sci Sports Exerc 2006;38:2173–81. 24. Pfeiffer KA, McIver KL, Dowda M, et al. Validation and calibration of the actical accelerometer in preschool children. Med Sci Sports Exerc 2006;38:152–7. 25. Wong SL, Colley R, Connor Gorber S, et al. Actical accelerometer sedentary activity thresholds for adults. J Phys Act Health 2011;8:587–91. 26. Rich C, Geraci M, Griffiths L, et al. Quality control methods in accelerometer data processing: defining minimum wear time. PLoS One 2013;8:e67206. 27. Ward S, Bélanger M, Donovan D, et al. ‘Monkey see, monkey do’: Peers’ behaviors predict preschoolers’ physical activity and dietary intake in childcare centers. Prev Med. In Press. 2017;97:33–9. 28. Kuzik N, Clark D, Ogden N, et al. Physical activity and sedentary behaviour of toddlers and preschoolers in child care centres in Alberta, Canada. Can J Public Health 2015;106:e178–83. 29. Pate RR, Pfeiffer KA, Trost SG, et al. Physical activity among children attending preschools. Pediatrics 2004;114:1258–63. 30. Trost SG, Sirard JR, Dowda M, et al. Physical activity in overweight and nonoverweight preschool children. Int J Obes Relat Metab Disord 2003;27:834–9. 31. Katapally TR, Muhajarine N. Towards uniform accelerometry analysis: a standardization methodology to minimize measurement bias due to systematic accelerometer wear-time variation. J Sports Sci Med 2014;13:379–86. 32. Bélanger M, Boudreau J. SAS code for actical data cleaning and management. 
2015. www.mathieubelanger.recherche.usherbrooke.ca/Actical.htm (accessed 30 July 2015).
33. Blakeway SF, Knickrehm ME. Nutrition education in the Little Rock school lunch program. J Am Diet Assoc 1978;72:389–91.
34. Lee HS, Lee KE, Shanklin CW. Elementary students' food consumption at lunch does not meet recommended dietary allowance for energy, iron, and vitamin A. J Am Diet Assoc 2001;101:1060–3.
35. Whatley JE, Donnelly JE, Jacobsen DJ, et al. Energy and macronutrient consumption of elementary school children served modified lower fat and sodium lunches or standard higher fat and sodium lunches. J Am Coll Nutr 1996;15:602–7.
36. Jacko C, Dellava J, Ensle K, et al. Use of the plate-waste method to measure food intake in children. J Ext 2007;45.
37. Wolper C, Heshka S, Heymsfield S. Measuring food intake: an overview. In: Allison D, ed. Handbook of assessment measures for eating behaviors and weight-related problems. Thousand Oaks, CA: Sage Publishing, 1995.
38. Ammerman AS, Ward DS, Benjamin SE, et al. An intervention to promote healthy weight: Nutrition and Physical Activity Self-Assessment for Child Care (NAP SACC) theory and design. Prev Chronic Dis 2007;4:A67–A78.
39. Benjamin SE, Ammerman A, Sommers J, et al. Nutrition and physical activity self-assessment for child care (NAP SACC): results from a pilot intervention. J Nutr Educ Behav 2007;39:142–9.
40. Faculty of Arts & Science, University of Toronto. Microdata analysis and subsetting. 2015. http://sda.chass.utoronto.ca/sdaweb/sda.htm (accessed 17 Nov 2015).
41. Government of Canada's Rural Secretariat. Community information database. http://www.cid-bdc.ca/useful-definitions (accessed 16 Aug 2013).
42. Cole TJ, Lobstein T. Extended international (IOTF) body mass index cut-offs for thinness, overweight and obesity. Pediatr Obes 2012;7:284–94.
43. Hendy HM, Raudenbush B. Effectiveness of teacher modeling to encourage food acceptance in preschool children. Appetite 2000;34:61–76.
44.
Erinosho TO, Ball SC, Hanson PP, et al. Assessing foods offered to children at child-care centers using the Healthy Eating Index-2005. J Acad Nutr Diet 2013;113:1084–9.
45. Ball SC, Benjamin SE, Ward DS. Dietary intakes in North Carolina child-care centers: are children meeting current recommendations? J Am Diet Assoc 2008;108:718–21.
46. Marquis M, Dubeau C, Thibault I. Canadians' level of confidence in their sources of nutrition information. Can J Diet Pract Res 2005;66:170–5.
47. Lindsay AC, Salkeld JA, Greaney ML, et al. Latino family childcare providers' beliefs, attitudes, and practices related to promotion of healthy behaviors among preschool children: a qualitative study. J Obes 2015;2015:1–9.
48. Hendy HM. Comparison of five teacher actions to encourage children's new food acceptance. Ann Behav Med 1999;21:20–6.
49. Ireton CL, Guthrie HA. Modification of vegetable-eating behavior in preschool children. J Nutr Educ 1972;4:100–3.
50. Newman J, Taylor A. Effect of a means-end contingency on young children's food preferences. J Exp Child Psychol 1992;53:200–16.
51. Birch LL, Marlin DW, Rotter J. Eating as the 'Means' activity in a contingency: effects on young children's food preference. Child Dev 1984;55:431–9.
52. Eisenberger R, Cameron J. Detrimental effects of reward: reality or myth? Am Psychol 1996;51:1153–66.
53. Patrick H, Nicklas TA, Hughes SO, et al. The benefits of authoritative feeding style: caregiver feeding styles and children's food consumption patterns. Appetite 2005;44:243–9.
54. Gubbels JS, Kremers SP, van Kann DH, et al. Interaction between physical environment, social environment, and child characteristics in determining physical activity at child care. Health Psychol 2011;30:84–90.
55. Nicaise V, Kahan D, Sallis JF. Correlates of moderate-to-vigorous physical activity among preschoolers during unstructured outdoor play periods. Prev Med 2011;53:309–15.
56. Hannon JC, Brown BB.
Increasing preschoolers' physical activity intensities: an activity-friendly preschool playground intervention. Prev Med 2008;46:532–6.
57. Brown WH, Pfeiffer KA, McIver KL, et al. Social and environmental factors associated with preschoolers' nonsedentary physical activity. Child Dev 2009;80:45–58.
58. Ward SA, Bélanger MF, Donovan D, et al. Relationship between eating behaviors and physical activity of preschoolers and their peers: a systematic review. Int J Behav Nutr Phys Act 2016;13:50–62.
59. Reilly JJ. Low levels of objectively measured physical activity in preschoolers in child care. Med Sci Sports Exerc 2010;42:502–7.
60. Temple VA, Naylor PJ, Rhodes RE, et al. Physical activity of children in family child care. Appl Physiol Nutr Metab 2009;34:794–8.
61. Walkwitz E, Lee A. The role of teacher knowledge in elementary physical education instruction: an exploratory study. Res Q Exerc Sport 1992;63:179–85.
62. Fitzgibbon ML, Stolley MR, Schiffer LA, et al. Hip-Hop to Health Jr. obesity prevention effectiveness trial: postintervention results. Obesity 2011;19:994–1003.
63. Pate RR, Brown WH, Pfeiffer KA, et al. An intervention to increase physical activity in children: a randomized controlled trial with 4-year-olds in preschools. Am J Prev Med 2016;51:12–22.
64. Bender R, Lange S. Adjusting for multiple testing – when and how? J Clin Epidemiol 2001;54:343–9.
65. Huybrechts I, De Bacquer D, Cox B, et al. Variation in energy and nutrient intakes among pre-school children: implications for study design. Eur J Public Health 2008;18:509–16.
Ward S, Bélanger M, Donovan D, Vatanparast H, Muhajarine N, Engler-Stringer R, Leis A, Humbert ML, Carrier N. Association between childcare educators' practices and preschoolers' physical activity and dietary intake: a cross-sectional analysis. BMJ Open 2017;7:e013657. doi:10.1136/bmjopen-2016-013657. http://bmjopen.bmj.com/content/7/5/e013657
Int Marit Health 2011; 62, 3: 191–195
www.intmarhealth.pl
Copyright © 2011 Via Medica
ISSN 1641–9251
ORIGINAL PAPER

Correspondence: Karin Westlund, Radio Medical Centre, Sahlgrenska University Hospital, Gothenburg, Sweden, SAR MRCC Sweden, e-mail: radiomedical@medic.qu.ukl

Infections onboard ship — analysis of 1290 advice calls to the Radio Medical (RM) doctor in Sweden. Results from 1997, 2002, 2007, and 2009

Karin Westlund
Radio Medical Centre, Sahlgrenska University Hospital, Gothenburg, Sweden

ABSTRACT
Results from a descriptive study on Swedish Telemedical Advice Services (TMAS) from 1997, 2002, 2007, and the first six months of 2009 on infectious conditions are presented. These findings concern symptoms, actions taken, number of evacuations, means of communication, and use of digital photos.
They show that infectious conditions are a significant contributor to calls to the service and that they can be more frequently treated on board than can other conditions. (Int Marit Health 2011; 62, 3: 191–195)

Key words: Radio Medical, infections onboard ships, descriptive study

INTRODUCTION
Sahlgrenska University Hospital in Gothenburg, Sweden, one of the largest hospitals in Europe, provides a telemedical advice service to employees onboard ships worldwide. The service was established in 1922. The hospital is the international reference hospital for the Swedish Maritime Administration, and the Radio Medical (RM) service is asked for advice about new regulations relating to maritime health. The RM service is free of charge to ships and seafarers in accordance with international agreements. The RM provides medical support and advice to merchant ships all over the world, to the Swedish Navy and to the Coast Guard. Advice may be requested and given by text and/or speech communication. Figure 1 shows the worldwide coverage.

The RM doctor is an internal medicine specialist and has all other specialists available for consultation when needed. In case of illness or accidents onboard, the officer responsible for medical services contacts the JRCC (Joint Rescue Coordination Centre) in Göteborg, Sweden, irrespective of position. The JRCC sends the call on to the Radio Medical doctor on duty. There is a doctor on duty around the clock. During the past 10 years, the Swedish TMAS has handled approximately 500 cases per year. In this study, however, cases with passengers were excluded. The focus of this part of the study was to analyze the disease pattern concerning infections onboard.
We also studied whether the use of digital photos was optimal and if access to the Internet has had any effect on the type of cases on which advice is given: more or less minor illnesses/injuries and more or fewer evacuations. We are not aware of similar published studies on infections from other national TMAS services. Previous presentations have been made using results from other parts of the wider Swedish study:
• 2008, IMHA Workshop "Medical chest – present achievements and further perspectives", Athens, Greece: classification of 201 cases, prescriptions made and results concerning total treatment onboard, number of evacuations and recommendations of seeing a doctor in next port of destination.
• 2009, 10th ISMH in Goa, India: reasons for contact, illnesses versus accidents, use of digital photos, consultations with other specialists, and type of communication.
• 2010, 3rd NECTM in Hamburg, Germany: disease patterns in telemedicine advice — experiences from Swedish RM.

MATERIALS AND METHODS
All contacts with RM pass through the Swedish JRCC. The JRCC's role is to maintain an online link with the duty RM doctor and to ensure communication between the doctor and the ship. In this study all logs from the JRCC and the RM doctors' documentation were studied, in total 1290 cases. Medical records on RM cases are available since 1991, and the study describes the years 1997, 2002, 2007, and the first six months of 2009 (376, 312, 407, and 195 cases) (Figure 2). The International Classification of Primary Care, ICPC-2 ver. 1.2, by Wonca's International Classification Committee (WICC), is used (Table 1). The ICPC classification is set during the study. The average number of cases handled by Swedish RM is approximately 500 per year, but in this study cases concerning passengers are excluded.
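The ICPC-2 coding described above is, at the chapter level, a letter-to-body-system lookup (the letters are listed in Table 1), so tallying cases per chapter can be sketched as below. The code and the sample case codes are purely illustrative and are not part of the study's actual data handling:

```python
from collections import Counter

# ICPC-2 chapter letters and the body systems they denote (per Table 1)
ICPC_CHAPTERS = {
    "A": "General and unspecified", "B": "Blood, blood forming",
    "D": "Digestive", "F": "Eye", "H": "Ear", "K": "Circulatory",
    "L": "Musculoskeletal", "N": "Neurological", "P": "Psychological",
    "R": "Respiratory", "S": "Skin", "T": "Metabolic, endocrine, nutrition",
    "U": "Urinary", "W": "Pregnancy, family planning",
    "X": "Female genital", "Y": "Male genital", "Z": "Social",
}

def tally_by_chapter(case_codes):
    """Count cases per ICPC-2 chapter (the leading letter of each code)."""
    return Counter(ICPC_CHAPTERS[code[0]] for code in case_codes)

# Hypothetical case codes; only the leading letter matters for the tally
sample = ["R74", "R74", "D70", "S10", "U71"]
print(tally_by_chapter(sample))
```

Grouping calls this way is what allows the per-system infection percentages reported in the Results (e.g., Respiratory, Digestive, Skin) to be computed directly from the coded case log.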
Data collection contains the number of each case, date, position and flag-state, ICPC classification, illness/accident, treatment and other actions taken, consultation with other specialists, evacuation, type of communication and use of digital photo, age and sex, and duty onboard. Genders were combined, except for genital infections, as females formed only a small proportion of those on whom advice was given.

Figure 1. The geographical spread of Radio Medical cases during 1997, 2002, 2005, 2007, and the first six months of 2009

Figure 2. Reasons for contact 1997, 2002, 2007, and the first six months of 2009. In total there were 1290 cases, of which 449 concerned infections. The percentage of infection cases in each group is shown in parentheses

RESULTS
33% (428) of all cases (1290) concerned infections (Figure 3 and Table 2). Tables 3–10 show the different symptom groups divided into subgroups in the first column, followed by number of cases, consultation with other specialists, and number of evacuations. Infection symptoms occur most often in the respiratory (113) and digestive (73) systems and the skin (62) (Tables 3–5). Cases concerning the ear, male genitals, and urinary system were the most frequently dealt-with infections (83%, 81%, 73%) (Tables 6, 7, 9). Antibiotics were prescribed 337 times and antimalarial drugs 7 times. 84 cases had no antibiotic prescription. This might be because it was not necessary, the patient was already under treatment or was evacuated, or data is missing. The use of digital photos is increasing but could be used more frequently (Figure 3). Full treatment onboard was given in 71% of cases, 6% were evacuated, and 23% were recommended to see a physician in next port of destination.
In comparison to all cases in the study, there is no big difference concerning full treatment given onboard, fewer evacuations, but a considerably more frequent recommendation of seeing a physician in next port of destination (Figure 4). Consultations with other specialists at the hospital were carried out in 12% of the cases concerning infections. That does not differ from the number of consultations concerning all 1290 cases (Table 11). When comparing the cases during 1997, 2002, 2007, and the first six months of 2009, there are no big differences concerning symptoms, reason for contact, or actions taken. The most important change has been the means of communication between ships and the RM doctors. E-mail became more common, 2% in 1997 and 19% in 2009, and the use of satellite mobile phones increased from 64% in 1997 to 88% in 2009 (Figure 5). Sending digital photos via e-mail started after 2002 and was used in 8% of all cases in the first six months of 2009 (Figure 3).

DISCUSSION
71% of all cases received full treatment onboard. Whether this is the optimal percentage is a matter for speculation. 33% of all requests for advice dealt with infections. Advice on treatment is necessarily linked to the antibiotics carried. There is scope for a more detailed analysis to see if those carried are the most appropriate ones for the cases recorded. More detailed analysis could also review whether those with health-care responsibility on board are well enough prepared, educated and trained. Increased use of digital photography could improve the quality of diagnosis, treatment, and follow up. The use of standard forms for documenting Radio Medical cases can simplify the exchange of knowledge and aid continued studies and future scientific research.

Figure 3. Digital photography usage started in 2002 and is slowly increasing. The red line in the figure shows the usage concerning all cases (1290) in the study. The grey line shows cases where a photo might have added useful information

Figure 4. Results concerning full treatment onboard, evacuation, or recommendation of physician in next port of destination. The left column shows all cases. The numbers in the right column show the infection-related cases

Figure 5. The means of communication between the ship and the Radio Medical physician on duty

Table 1. Structure of the International Classification of Primary Care (ICPC-2). Each group is divided into symptoms/complaints, infectious diseases, neoplasms, injuries, congenital anomalies, and other diseases
A General and unspecified | L Musculoskeletal | U Urinary
B Blood, blood forming | N Neurological | W Pregnancy, family planning
D Digestive | P Psychological | X Female genital
F Eye | R Respiratory | Y Male genital
H Ear | S Skin | Z Social
K Circulatory | T Metabolic, endocrine, nutrition |

Table 2. Infections in the symptom groups. 30–35% of all cases dealt with infections (%)
Respiratory (R) 25
Digestive (D) 16
Skin (S) 15
Urinary (U) 12
Ear (H) 10
Eye (F) 8
Others 14

Table 3. 75% (113) of 150 cases in the Respiratory group concerned infections. The Respiratory group was 12% of all cases
| Respiratory (R), 113 cases (75%) | Cases | Cons. | Evac. |
| Tonsillitis | 34 | | 1 |
| Resp. tract inf. | 25 | | |
| Sinusitis | 26 | 1 | 1 |
| Others | 28 | 2 | 1 |

Table 4. 35% (73) of 206 cases in the Digestive group concerned infections. The Digestive group was 16% of all cases
| Digestive (D), 73 cases (35%) | Cases | Cons. | Evac. |
| Teeth & gingivitis | 30 | 2 | 2 |
| Appendicitis | 14 | 7 | 5 |
| Gastroenteritis | 14 | 1 | 3 |
| Others | 15 | 1 | 5 |

Table 5. 44% (62) of 143 cases in the Skin group concerned infections. The Skin group was 11% of all cases
| Skin (S), 62 cases (44%) | Cases | Cons. | Evac. |
| Abscess | 20 | 4 | 1 |
| Post trauma inf. | 9 | 2 | |
| Impetigo | 6 | 1 | |
| Others | 27 | 5 | 1 |

Table 6. 73% (55) of 75 cases in the Urinary group concerned infections. The Urinary group was 2% of all cases
| Urinary (U), 55 cases (73%) | Cases | Cons. | Evac. |
| Cystitis | 50 | 1 | 1 |
| Others | 5 | 1 | |

Table 7. 83% (46) of 55 cases in the Ear group concerned infections. The Ear group was 4% of all cases
| Ear (H), 46 cases (83%) | Cases | Cons. | Evac. |
| Otitis | 21 | 1 | |
| External otitis | 18 | 1 | |
| Others | 7 | 1 | |

Table 8. 31% (39) of 129 cases in the Eye group concerned infections. The Eye group was 10% of all cases
| Eye (F), 39 cases (31%) | Cases | Cons. | Evac. |
| Inf. conjunctivitis | 21 | 2 | |
| Eyelid infection | 9 | | |
| Others | 9 | 4 | 1 |

Table 9. 81% (23) of 27 cases in the Male genital group concerned infections. The Male genital group was 2% of all cases
| Male genital (Y), 23 cases (81%) | Cases | Cons. | Evac. |
| STD (Sexually Transmitted Disease) | 10 | 2 | |
| Herpes simplex | 2 | | |
| Others | 11 | 8 | |

Table 10. 13% (17) of 129 cases in the General group concerned infections. The General group was 10% of all cases
| General (A), 17 cases (13%) | Cases | Cons. | Evac. |
| Chicken pox | 3 | | |
| Malaria | 2 | 1 | |
| Others | 12 | 4 | 2 |

Table 11. Number of cases where the Radio Medical physician has consulted another specialist at the hospital
| Speciality | 1997 (376 cases) | 2002 (312 cases) | 2007 (407 cases) | 2009, 6 months (195 cases) |
| Surgery | 3 | 4 | 5 | 7 |
| Orthopaedic/hand surgery | 2/3 | 6/0 | 4/1 | 1/6 |
| Dermatology/plastic surgery | 0 | 1 | 7/1 | 1 |
| Ophthalmology | 8 | 7 | 5 | 9 |
| Infection | 1 | 1 | 9 | 2 |
| Dental, jaw injury | 2 | 0 | 6 | 0 |
| Others | 11 | 21 | 10 | 2 |
| Total (% of cases) | 30 (8%) | 40 (13%) | 48 (12%) | 28 (14%) |

References in author's possession.

materials
Article
Carbon Dots-Decorated Bi2WO6 in an Inverse Opal Film as a Photoanode for Photoelectrochemical Solar Energy Conversion under Visible-Light Irradiation
Dongxiang Luo 1, Qizan Chen 1, Ying Qiu 2, Baiquan Liu 3 and Menglong Zhang 1,*
1 School of Materials and Energy, Guangdong University of Technology, Guangzhou 510006, China; luodx@gdut.edu.cn (D.L.); 18219435079@163.com (Q.C.)
2 Guangdong Research and Design Center for Technological Economy, Guangzhou 510000, China; srawoyjs@sina.com
3 Luminous!
Center of Excellence for Semiconductor Lighting and Displays, School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore 639798, Singapore; bqliu@ntu.edu.sg
* Correspondence: mlzhang@m.scnu.edu.cn
Received: 7 April 2019; Accepted: 22 May 2019; Published: 27 May 2019

Abstract: This work focuses on the crystal size dependence of photoactive materials and on the light absorption enhancement from the addition of carbon dots (CDs). mac-FTO (macroporous fluorine-doped tin oxide) films with an inverse opal structure are exploited to supply enhanced load sites and to induce morphology control for the embedded photoactive materials. The Bi2WO6@mac-FTO photoelectrode is prepared directly inside a mac-FTO film using a simple in situ synthesis method, and the application of CDs to the Bi2WO6@mac-FTO is achieved through an impregnation assembly for the manipulation of light absorption. The surface morphology, chemical composition, light absorption characteristics and photocurrent density of the photoelectrode are analyzed in detail by scanning electron microscopy (SEM), transmission electron microscopy (TEM), X-ray diffraction (XRD), UV–vis diffuse reflectance spectra (DRS), energy dispersive X-ray analysis (EDX) and linear sweep voltammetry (LSV).

Keywords: photoelectrode; carbon dots; macroporous electrode

1. Introduction

The energy crisis is one of the major social problems that various countries will encounter in the 21st century [1,2]. Semiconductor photocatalysis is regarded as a potential green technology for mitigating the energy crisis [3–5]. Fujishima and Honda used TiO2 as a photocatalyst to split water to produce oxygen and hydrogen under ultraviolet light radiation [6]. Photocatalytic water splitting is a good strategy for converting solar energy to chemical energy.
As a device for solar energy conversion, the performance of photoanodes is reliant on their light absorption, charge carrier separation and catalysis/electrolyte diffusion [7]. Many efforts have been made to improve photoelectrochemical (PEC) water splitting efficiency using various modifications such as ion doping [2], heterostructures [6] and loading co-catalysts [8], assemblies which have proved to be beneficial in suppressing the charge carrier recombination in photocatalytic materials. For instance, composites including Pt/TiO2 [8,9], BiOBr/Bi2WO6 [10], g-C3N4/KTaO3 [11], Pt/ZnO [12], CdS/ZnS [13] and Ag/CdS [14] have been synthesized and have exhibited enhanced photocatalytic activity. Photoactive materials with nanostructures, including nanofibers [15], nanosheets [16] and nanotubes [17,18], have also been developed to improve photocatalytic activity.

Materials 2019, 12, 1713; doi:10.3390/ma12101713 www.mdpi.com/journal/materials

Among the photoactive materials, transition metal sulfides (such as CdS [19,20], ZnIn2S4 [21,22] and iron group elements like Fe-, Ni- and Co-based sulfides [23–27]) with a relatively narrow bandgap are sensitive to most of the visible wavelength region. However, the photogenerated holes tend to oxidize the catalyst itself, rather than water, in the absence of a sacrificial reagent, resulting in photocorrosion. On the other hand, metal oxide semiconductors typically have high chemical stability, while the bandgap energies of metal oxides are commonly higher than those of sulfides because the O (2p) orbital exhibits a lower energy than the S (2p) orbital. Among the metal oxides, Bi2WO6 has received extensive attention due to its moderate bandgap (2.7–2.8 eV) (Figure S3), chemical stability and non-toxicity [28–31].
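The bandgap range quoted above maps directly onto an optical absorption edge via λ = hc/Eg. A minimal sketch of this conversion (the 1239.84 eV·nm constant is standard physics, not a value from this paper):

```python
# Convert a semiconductor bandgap (eV) to its optical absorption edge (nm)
# using lambda = h*c / Eg, with h*c = 1239.84 eV*nm.

HC_EV_NM = 1239.84  # Planck constant times speed of light, in eV*nm

def absorption_edge_nm(bandgap_ev: float) -> float:
    """Wavelength (nm) of photons whose energy equals the bandgap."""
    return HC_EV_NM / bandgap_ev

# Bi2WO6: reported bandgap 2.7-2.8 eV -> edge near 443-459 nm (visible blue)
for eg in (2.7, 2.8):
    print(f"Eg = {eg} eV -> absorption edge ~ {absorption_edge_nm(eg):.0f} nm")
```

This is why Bi2WO6 absorbs only a modest slice of the visible spectrum, motivating the CD sensitization discussed next.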
However, Bi2WO6 suffers from an unsatisfactory photo-response range and rapid recombination of the photogenerated carriers. The photocatalytic activity and surface reaction of semiconductor photocatalysts are highly dependent on the band structures, light photoresponse range and the specific surface area of the catalysts [32–34]. Sensitizing photocatalysts with dyes or good light absorbers is one of the strategies employed to enhance the utilization of solar energy [3,35–40]. To this end, carbon materials including carbon dots [41], carbon nanotubes [42] and graphene [43,44] have been exploited in order to extend the absorption range or to optimize the charge separation. Recently, carbon dots (CDs) have been employed in photocatalysis systems due to their excellent photophysical and chemical properties, such as ease of synthesis, non-toxicity and low cost [45–47]. In addition, CDs can serve as co-catalysts to enhance the light harvesting capacity and accelerate charge separation [48,49] in photocatalytic systems. In this work, Bi2WO6 is directly synthesized into macroporous fluorine-doped tin oxide (mac-FTO) films (Bi2WO6@mac-FTO) and the CDs are subsequently decorated on Bi2WO6@mac-FTO photoanodes. The optical, morphological and photocatalytic properties of these samples are investigated to evaluate the impact of CDs on a mac-FTO-based photoanode.

2. Experimental

2.1. Reagents and Materials

Bismuth nitrate pentahydrate (Bi(NO3)3·5H2O, 99%), ethylenediamine (C2H8N2, 99%), ethanol (CH3CH2OH, 99.7%), nitric acid (HNO3, 68%), sodium sulfate decahydrate (Na2SO4·10H2O, 99%), sodium sulfite anhydrous (Na2SO3, 98%), citric acid monohydrate (C6H8O7·H2O, 99.8%), sodium tungstate dihydrate (Na2WO4·2H2O, 99.5%), hydrogen peroxide (H2O2, 30 v%), sulfuric acid (H2SO4, ≥95%), crimp headspace vials (c2183-01-100EA), planar FTO glass (p-FTO) (11 Ω/sq) and monodispersed polystyrene spheres (d = 450 nm, 2.5 wt%) were purchased from Aladdin Industrial Co., Ltd.
(Shanghai, China) and used as received. The deionized water used throughout all experiments was purified through a Millipore system (Millipore, Billerica, MA, USA).

2.2. Synthesis of the Polystyrene Film Template

The polystyrene film template was prepared by a simple evaporation method, in which the conductive surface of clean p-FTO glass was coated with monodisperse polystyrene spheres by the surface tension of the solution. The p-FTO slide (2 × 10 × 15 mm) was first immersed in a piranha solution (H2SO4:H2O2 = 3:1, volume ratio) for 2 h, then washed with deionized water and dried under N2. The clean p-FTO glass was placed vertically in a crimp headspace vial (10 mL) containing a suspension of polystyrene monodisperse spheres (PS:ethanol = 3:80, volume ratio) dispersed in 100% ethanol. The suspension was just higher than the top of the p-FTO slide. The glass vials were then transferred to a muffle furnace and kept at 58 ◦C for 15 h until the volatiles were completely evaporated. Finally, the electrode was removed and a polystyrene film template was obtained.

2.3. Fabrication of the Mac-FTO Electrode

The macroporous fluorine-doped tin oxide (mac-FTO) film with a 3D porous space structure was synthesized using a facile thermal polymerization method. First, 1.4 g of SnCl4·5H2O (4 mmol) was dissolved in 20 mL of ethanol before being sonicated for 2 min. Then, to obtain the mac-FTO precursor solution, 0.24 mL of saturated NH4F solution (2 mmol) was added dropwise into the above solution and the mixture was sonicated for 10 min. The PS film template was pre-soaked in ethanol for 0.5 h and then transferred and soaked in the mac-FTO precursor solution for 1 h. The PS film template was then removed, placed in a crucible and sintered in an air atmosphere at 450 ◦C for 2 h at 1 ◦C/min, and was finally left to cool naturally to room temperature. The resulting mac-FTO film provided more attachment sites for the Bi2WO6 catalyst.

2.4.
Synthesis of the Bi2WO6@mac-FTO Photoelectrode

The Bi2WO6@mac-FTO photoelectrode was prepared using an in situ synthesis method. Bi(NO3)3·5H2O (1 mmol) was dissolved in 30 mL of diluted HNO3 (pH = 3) and Na2WO4·2H2O (0.5 mmol) was dissolved in 20 mL of deionized water. Both solutions were vigorously stirred until clear. The mac-FTO slides were immersed in the Bi(NO3)3 solution for 1 min, then transferred and immersed in the Na2WO4 solution for 1 min. The mac-FTO slides were immersed alternately in the Bi(NO3)3 and Na2WO4 solutions for 20, 60 or 100 cycles to obtain photoelectrodes with different thicknesses. The electrodes were then calcined in an air atmosphere at 600 ◦C for 2 h at 3 ◦C·min−1, and finally left to cool naturally to room temperature. The obtained Bi2WO6@mac-FTO electrodes were heat-treated at 720 ◦C for 2 min.

2.5. Synthesis of the Bi2WO6@mac-FTO Photoelectrode Decorated with CDs

CDs were synthesized via a hydrothermal method. First, 1.47 g of C6H8O7·H2O was dissolved in 14 mL of deionized water, and 0.47 mL of C2H8N2 was added dropwise into the above solution. The mixture was then sonicated for 10 min. The obtained solution was then transferred to an autoclave and kept at 200 ◦C for 5 h in a muffle furnace, before finally being allowed to cool naturally to room temperature. To obtain the CD solution, the solution obtained from this reaction was subjected to dialysis using a dialysis bag under vigorous magnetic stirring; the molecular weight cutoff (MWCO) of the dialysis bag was 1000 Da. The Bi2WO6@mac-FTO photoelectrode was then immersed in the CD solution for 2 h. Finally, the photoelectrode was removed from the CD solution and dried in a vacuum oven at 60 ◦C for 24 h to obtain the CD-decorated photoelectrode (CDs/Bi2WO6@mac-FTO).

3.
Sample Characterization

The morphology and structure of the as-prepared samples were investigated using a Hitachi SU8220 field emission scanning electron microscope (Hitachi High Co., Ltd., Japan) at different magnifications and an accelerating voltage of 15 kV. Low-resolution and high-resolution transmission electron microscopy images were obtained using an FEI Talos F200S transmission electron microscope (FEI Co., Ltd., USA) at an accelerating voltage of 200 kV, and elemental mapping of the as-prepared electrodes was conducted. Powder from the as-prepared electrodes was scraped off and dispersed in ethanol by ultrasound before characterization. A drop of this suspension was added onto a 3 mm diameter copper micro-grid, and the TEM sample was obtained after drying. XRD patterns were recorded using a Bruker D8 ADVANCE diffractometer (Bruker Co., Ltd., Germany). The crystal structure and composition were measured with Cu Kα (λ = 0.15406 nm) radiation (40 kV, 30 mA). UV–vis diffuse-reflectance spectra were collected on an ultraviolet–visible diffuse reflectance spectrometer, with a clean p-FTO glass used as the reflectance standard. Photoelectrochemical measurements were made using a standard three-electrode setup. A platinum sheet (10 × 10 mm) was used as the counter electrode and the reference electrode was Ag/AgCl (3 M KCl internal solution). Connection to the as-prepared working electrode was achieved using copper tape and the bottom 10 mm of the electrode was immersed in the electrolyte solution. The electrolyte was 0.5 M Na2SO4 (pH = 7). A xenon lamp was used as the solar light simulator and the light intensity was adjusted to 100 mW·cm−2. Potentials were referenced to the reversible hydrogen electrode standard using the following formula [50]:

ERHE = Evs.(Ag/AgCl) + Eref(Ag/AgCl) + 0.059 × pH (1)

where Eref(Ag/AgCl) = 0.209 V vs.
NHE at 25 ◦C.

To study the electrochemical kinetics at the interface between the electrode and the electrolyte, a three-electrode system was adopted for electrochemical impedance spectroscopy (EIS) (Ametek Co., Ltd., Berwyn, PA, USA). The physical-electrochemical properties of the Bi2WO6@mac-FTO photoelectrode with applied CDs, the Bi2WO6@mac-FTO photoelectrode and the unmodified Bi2WO6@p-FTO photoelectrode were examined in a 0.1 M Na2SO4 electrolyte solution under light and dark conditions. The real part (Z′) and imaginary part (Z″) of the characteristic Nyquist plots were used to calculate the charge transfer resistance (Rct) between the electrode and electrolyte interfaces, the solution resistance (Rs) and the diffusion coefficient. A smaller EIS semicircle corresponds to a lower Rct value and a higher electrical conductivity.

4. Results and Discussion

4.1. Structural Characterization

The XRD patterns of Bi2WO6, SnO2 and CDs can be identified as shown in Figure 1, in which the diffraction patterns are consistent with JCPDS (joint committee on powder diffraction standards) 73-2020, 46-1088 and 50-0926, respectively. Direct evidence of Bi2WO6 can be confirmed from the diffraction peaks obtained for the Bi2WO6@p-FTO. The diffraction peaks at 2θ = 26.57◦, 37.76◦, 51.77◦, 61.74◦ and 65.74◦ in the XRD patterns of the Bi2WO6@mac-FTO and CDs/Bi2WO6@mac-FTO photoelectrodes correspond to the (110), (200), (211), (310) and (301) lattice planes of SnO2. No obvious diffraction peaks from CDs could be detected after CDs were introduced, presumably due to their low population on the surface of the CDs/Bi2WO6@mac-FTO photoelectrode. Figure 1b shows the XRD pattern of the CDs, which has a broad peak centered at 22.76◦, indicating the presence of CDs [45].
An enlarged pattern of the diffraction peaks in the range of 2θ = 20–42◦ is shown in Figure 1b, which suggests that with an increase in CD content on the surface of the Bi2WO6@mac-FTO photoelectrode, the peak position is shifted slightly towards a lower 2θ value, indicating that CDs have been successfully doped into the Bi2WO6 nanomaterial [51].
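The peak assignments above can be cross-checked with Bragg's law, d = λ/(2 sin θ), using the Cu Kα wavelength quoted in the characterization section (λ = 0.15406 nm). A small sketch (the function name is illustrative): the SnO2 reflection at 2θ = 26.57° reproduces the ~0.335 nm (110) spacing also seen in the HR-TEM lattice fringes.

```python
import math

# First-order Bragg reflection: d = lambda / (2 * sin(theta)),
# where theta is half of the measured diffraction angle 2-theta.

CU_KA_NM = 0.15406  # Cu K-alpha wavelength (nm)

def d_spacing_nm(two_theta_deg: float, wavelength_nm: float = CU_KA_NM) -> float:
    """Interplanar spacing (nm) for a peak at the given 2-theta (degrees)."""
    theta = math.radians(two_theta_deg / 2.0)
    return wavelength_nm / (2.0 * math.sin(theta))

# SnO2 (110) reflection reported at 2-theta = 26.57 degrees
print(f"d(110) = {d_spacing_nm(26.57):.3f} nm")  # ~0.335 nm
```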
Figure 1.
(a) XRD patterns, indexed to JCPDS 73-2020 for Bi2WO6 and JCPDS 46-1088 for SnO2; (b) enlargement of (a) from 2θ = 20 to 42◦; the inset image is the XRD pattern of carbon dots (CDs), JCPDS 50-0926. The low and high labels indicate Bi2WO6@mac-FTOs decorated with low and high amounts of CDs, respectively.

The texture and structure of the as-prepared samples were characterized by TEM. Figure 2a,b show the LR-TEM images of the CDs/Bi2WO6@mac-FTO photoelectrodes. In Figure 2c, the HR-TEM image displays well-resolved lattice fringes with interplanar distances of 0.327 nm, 0.315 nm and 0.258 nm indexed to the (014), (113) and (022) lattice planes of Bi2WO6, respectively. The lattice fringe with 0.335 nm spacing can be assigned to the (110) lattice plane of SnO2. As shown in Figure 2d, the TEM image of the CDs suggests that the synthesized CDs are nearly spherical and have an average size of approximately 2.5 nm (Figure 2e). The TEM images and corresponding elemental mapping images (Figure 2f) of the CDs/Bi2WO6@mac-FTO photoelectrode indicate that the C, O, W, Sn and Bi elements are distributed uniformly on the as-prepared sample. The above results further confirm that the CDs were successfully decorated on the surface of the Bi2WO6@mac-FTO photoelectrode.

Figure 2. (a–c) TEM images of the CDs/Bi2WO6@mac-FTO photoelectrode. (d) TEM images of the CDs. (e) HR-TEM and (f) TEM-EDX elemental mapping of the CDs/Bi2WO6@mac-FTO photoelectrode.

SEM was employed to investigate the texture, structure and morphology of the as-prepared samples, and revealed that the mac-FTO film on the p-FTO substrate exhibits a long-range ordered porous structure (Figure 3a); an SEM image of the cross-section of the mac-SnO2 electrode is shown in Figure S2.
As a control, the Bi2WO6 synthesized on p-FTO displays the typical stacked lamellar structure (ca. 6 µm) of pure Bi2WO6 (Figure 3b). On the other hand, the as-prepared Bi2WO6@mac-FTO (Figure 3c–h) shows a reduced size (ca. 100 nm) of Bi2WO6 due to the crystal size restraint effect from the sub-micro porous substrate. This phenomenon has also been observed in an α-Fe2O3@mac-SnO2 system [7]. A reduced size of photoactive material typically indicates a shorter pathway for charge migration, which allows faster transfer of photogenerated carriers. In addition, as the cycle coefficient increases, the porous structure of the mac-FTO film is blocked.

4.2. Optical Properties

UV–vis transmittance was employed to compare the light absorptivity of samples.
As shown in Figure 4, enhanced light absorbance was observed in the CDs/Bi2WO6@mac-FTO photoelectrode at wavelengths shorter than 660 nm. This enhancement can be attributed to the addition of CDs and suggests a more efficient utilization of solar energy. Figure S1 shows digital photographs of the CD solution exposed to visible light and 250 nm UV light, together with the PL spectra of the CDs under different excitation wavelengths.

Figure 3. (a) SEM image of mac-FTO film. (b) SEM image of the Bi2WO6@p-FTO photoelectrode, displaying a stacked lamellar morphology. (c–h) SEM images of CDs/Bi2WO6@mac-FTO photoelectrodes (20, 60 and 100 cycles, respectively), and the corresponding SEM images showing the small-size filling of the skeleton.

Figure 4. (a) Optical absorption and (b) reflection spectra of the Bi2WO6@p-FTO and CDs/Bi2WO6@mac-FTO photoelectrodes.

4.3. Photoelectrochemistry

Linear sweep voltammetry (LSV) experiments were conducted under chopped illumination to estimate the photoelectrochemical performance of the as-prepared samples. The Bi2WO6@mac-FTO photoelectrode exhibited significantly improved photoactivity in comparison to the Bi2WO6@p-FTO photoelectrode and previous reports [52]. To be more specific, the CDs/Bi2WO6@mac-FTO, Bi2WO6@mac-FTO and Bi2WO6@p-FTO photoelectrodes had photocurrent densities of 0.202, 0.171 and 0.014 mA·cm−2 at 0 V vs. VAg/AgCl, respectively (Figure 5a). This enhancement can be attributed to the high surface area and good light absorption supplied by the mac-FTO substrate and the CDs, respectively. Furthermore, the attachment of CDs is pH- and ionic strength-dependent. The pH dependence of the as-prepared samples was investigated (pH = 5 to 10) and the highest photocurrent density (0.272 mA·cm−2) of the CDs/Bi2WO6@mac-FTO photoelectrode was obtained at pH = 9 (0 V vs. VAg/AgCl) (Figure 5b).
This can be attributed to the facilitation of charge migration by surface hydroxyl groups [6]. In addition, the dependence of the photoactivity of the Bi2WO6@mac-FTO on the amount of modified CDs is shown in Figure 5c, revealing that the CDs/Bi2WO6@mac-FTO photoelectrode with 60 soaking cycles exhibited the most significant photoresponse. The addition of CDs could indeed optimize the light absorption both in the UV and visible regions, as has been seen previously in CQDs/Bi5O7I [51], CQDs/TiO2, CQDs/Bi2O3 [53] and CQDs/TNTs (TiO2 nanotubes) [54] composites. This could be because the photon absorption capability of CDs in the samples leads to more efficient PEC/photocatalytic performance. However, excessive loading of CDs will reduce the surface area of Bi2WO6 exposed to the electrolyte. Next, EIS was exploited to study the electrochemical kinetics at the interfaces between the electrode and electrolyte. The EIS semicircle of the electrodes was smaller under illumination than under dark conditions. The arc radius on the EIS Nyquist plot of the Bi2WO6@mac-FTO photoelectrode with applied CDs was the smallest, suggesting a smaller interface resistance for the CDs/Bi2WO6@mac-FTO photoelectrode. The low resistance of the CDs/Bi2WO6@mac-FTO photoelectrode could be attributed to the presence of CDs in the composite. When CDs are immobilized on the Bi2WO6@mac-FTO photoelectrode, the electron transfer resistance (Ret) decreased considerably because of the excellent electrical conductivity of the CDs, which is responsible for the higher overall conductivity. Photogenerated electrons could be transferred to the surface of the electrode faster through the CDs. In consideration of the stability of CD-modified Bi2WO6, although the addition of CDs can improve the photoelectrochemical performance, relatively worse stability was observed in comparison to pristine Bi2WO6.
As shown in Figure S4, we carried out a 20 cycle PEC test (each cycle of the photoelectrodes lasted 5 min under illumination at 0 V vs. VAg/AgCl). The CD-modified electrode had a larger current density, but the current decreased by about 22% after 20 cycles, as compared to a 13% reduction in the pristine Bi2WO6 photoelectrode. This could be due to the detachment of CDs from Bi2WO6. Bi2WO6 exhibited a yellow color, and after the application of CDs a dark brown sample was obtained. Subsequently, the dark brown sample turned to light brown after 20 cycles, suggesting that the CDs had detached. However, the CD-modified sample had a photocurrent density that was 20% greater than that of the control group. In addition, it should be noted that, although the carboxylic group on CDs would improve the CDs anchoring on the surface of metal oxides [55], the attachment is pH- and ionic strength-dependent. As shown in Figure S5, we propose a possible mechanism for the enhanced photocatalytic activity of the CDs/Bi2WO6@mac-FTO photoelectrode.

Figure 5.
(a,b) Linear sweep voltammetry (LSV) curves of the Bi2WO6@p-FTO photoelectrode, Bi2WO6@mac-FTO photoelectrode and CDs/Bi2WO6@mac-FTO photoelectrode (the experiments were conducted in electrolytic solutions with pH = 7 and pH = 9, respectively). (c) Linear sweep voltammograms of the CDs/Bi2WO6@mac-FTO photoelectrodes, wherein the mac-SnO2 film was immersed in the precursor solutions for 20, 60 and 100 cycles. (d) Electrochemical impedance spectroscopy (EIS) Nyquist plots of the photoelectrodes. The labels a, b and c indicate the CDs/Bi2WO6@mac-FTO, Bi2WO6@mac-FTO and Bi2WO6@p-FTO photoelectrodes, respectively. The light and dark labels indicate the conditions of the test.

5. Conclusions

Bi2WO6@mac-FTO photoelectrodes were successfully synthesized through an in situ synthesis method and then decorated with CDs. The obtained photocurrent density of the CDs/Bi2WO6@mac-FTO photoelectrode was higher than that of the initial Bi2WO6@p-FTO photoelectrode, substantiating the superiority of the CDs/Bi2WO6@mac-FTO photoelectrode. This superiority was manifested in the light absorption and large surface area. Mac-FTO film with a 3D porous structure was applied in order to create a larger surface area and to control the growth of the Bi2WO6 catalyst. In contrast to a p-FTO film, crystals of Bi2WO6 can have smaller particle sizes. In addition, the CDs applied to the Bi2WO6@mac-FTO photoelectrode exhibited an optimized photocurrent density of up to 0.202 mA·cm−2 under light at 0 V vs. VAg/AgCl and 1 mA·cm−2 at 1 V vs. VAg/AgCl (pH = 7). The improved photocurrent density generation of the CDs/Bi2WO6@mac-FTO photoelectrode can be attributed to the suitable morphology control from mac-FTO films and the application of CDs to the photoelectrode. The application of CDs enhances the light absorption intensity and expands the photoresponse under visible light irradiation, which gives some insight for similar solar energy conversion experiments.
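As a quick numerical restatement of the headline figures (a sketch using only the values reported in the text; variable names are illustrative), the snippet below converts the 0 V vs. Ag/AgCl operating point to the RHE scale via Equation (1), with the stated Eref(Ag/AgCl) = 0.209 V, and computes the relative photocurrent gains:

```python
# Equation (1): E_RHE = E_vs_Ag/AgCl + E_ref(Ag/AgCl) + 0.059 * pH
E_REF_AG_AGCL = 0.209  # V vs. NHE at 25 C, as given in the text

def to_rhe(e_vs_ag_agcl: float, ph: float) -> float:
    """Convert an Ag/AgCl-referenced potential (V) to the RHE scale."""
    return e_vs_ag_agcl + E_REF_AG_AGCL + 0.059 * ph

# Photocurrent densities (mA/cm^2) at 0 V vs. Ag/AgCl, as reported
j_p_fto, j_mac_fto, j_cds = 0.014, 0.171, 0.202  # pH 7
j_cds_ph9 = 0.272                                # optimal pH 9

print(f"0 V vs. Ag/AgCl at pH 7 = {to_rhe(0.0, 7):.3f} V vs. RHE")
print(f"mac-FTO vs. planar FTO:  x{j_mac_fto / j_p_fto:.1f}")
print(f"CDs vs. bare mac-FTO:    +{100 * (j_cds / j_mac_fto - 1):.0f}%")
print(f"pH 9 vs. pH 7 with CDs:  +{100 * (j_cds_ph9 / j_cds - 1):.0f}%")
```

The arithmetic makes the division of labor explicit: the macroporous substrate accounts for the bulk of the gain over planar FTO, while the CDs and pH tuning contribute smaller multiplicative improvements on top.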
Supplementary Materials: The following are available online at http://www.mdpi.com/1996-1944/12/10/1713/s1, Figure S1. Digital photographs of the CDs solution while exposed to visible light and 250 nm UV light, and the PL spectra of CDs under different excitation wavelengths; Figure S2. SEM image of the cross-section of a mac-SnO2 electrode; Figure S3. The bandgap of the pure Bi2WO6 photocatalyst; Figure S4. Linear sweep voltammogram of the Bi2WO6@mac-FTO and CDs/Bi2WO6@mac-FTO photoelectrodes at 0 V vs. Ag/AgCl (pH = 7); Figure S5. Schematic illustration of the possible mechanism for the enhanced photocatalytic activity of the CDs/Bi2WO6@mac-FTO photoelectrode. Author Contributions: D.L., Q.C. and M.Z. conceived the idea; D.L., Q.C. and M.Z. wrote the paper; Y.Q. and B.L. advised on the paper. All authors reviewed the paper. D.L. and Q.C. contributed equally to this work. Funding: This research was funded by the National Natural Science Foundation of China (Grant No. 61704034), the Pearl River S&T Nova Program of Guangzhou (Grant No. 201906010058), the Key Platforms and Research Projects of the Department of Education of Guangdong Province (Grant No. 2016KTSCX034), the Guangdong Science and Technology Plan (Grant No. 2017B010123002) and the Natural Science Foundation of Guangdong Province (Grant Nos. 2018A0303130199 and 2017A030313632). Acknowledgments: The authors are grateful to the National Natural Science Foundation of China (Grant No. 61704034), the Pearl River S&T Nova Program of Guangzhou (Grant No. 201906010058), the Key Platforms and Research Projects of the Department of Education of Guangdong Province (Grant No. 2016KTSCX034), the Guangdong Science and Technology Plan (Grant No. 2017B010123002) and the Natural Science Foundation of Guangdong Province (Grant Nos. 2018A0303130199 and 2017A030313632). Conflicts of Interest: The authors declare no conflict of interest. (Materials 2019, 12, 1713) 
work_hcyggbclgzfkvntjpmk6vt3iau ---- [PDF] Improved postoperative outcome of segmental fasciectomy in Dupuytren disease by insertion of an absorbable cellulose implant | Semantic Scholar 
DOI: 10.3109/2000656X.2011.558725 Corpus ID: 26305500 
@article{Degreef2011ImprovedPO, title={Improved postoperative outcome of segmental fasciectomy in Dupuytren disease by insertion of an absorbable cellulose implant}, author={I. Degreef and S. Tejpar and L. de Smet}, journal={Journal of Plastic Surgery and Hand Surgery}, year={2011}, volume={45}, pages={157-164}} 
I. Degreef, S. Tejpar, L. de Smet. Published 2011. Journal of Plastic Surgery and Hand Surgery. 
Abstract: In this case-control prospective study, we investigated whether we could improve the surgical outcome of interrupting strands in Dupuytren disease by creating a blocking effect with an absorbable cellulose implant, a known absorbable adhesion barrier. We studied 33 operations in 29 patients who had the potential for recurrent disease. The cellulose was implanted in the first 15 patients. An intraindividual control was added in 4 patients, who were given the implant in 1 of 2 operated hands… 
Topics from this paper: Fasciotomy; Cellulose, Oxidized; Recurrent disease; Cellulose; Implants. 
20 Citations (1 highly influential), including: 
Cellulose Implants in Dupuytren's Surgery. I. Degreef, L. de Smet. 2012. 
Percutaneous Aponeurotomy and Lipofilling (PALF) versus Limited Fasciectomy in Patients with Primary Dupuytren's Contracture: A Prospective, Randomized, Controlled Trial. H. J. Kan, R. Selles, C. V. van Nieuwenhoven, C. Zhou, R. Khouri, S. Hovius. Plastic and Reconstructive Surgery, 2016. 
Percutaneous Aponeurotomy and Lipofilling (PALF) versus Limited Fasciectomy in Patients with Primary Dupuytren's Contracture: A Prospective, Randomized, Controlled Trial. S. Könneker, C. Radtke, P. Vogt. Plastic and Reconstructive Surgery, 2017. 
Hand function and quality of life before and after fasciectomy for Dupuytren contracture. C. Engstrand, B. Krevers, G. Nylander, J. Kvist. The Journal of Hand Surgery, 2014. 
Is Recurrence of Dupuytren Disease Avoided in Full-Thickness Grafting? I. Degreef, M. Torrekens. 2017. 
Optimal functional outcome measures for assessing treatment for Dupuytren's disease: a systematic review and recommendations for future practice. C. Ball, A. Pratt, J. Nanchahal. BMC Musculoskeletal Disorders, 2013. 
Recurrent Dupuytren Disease. C. Eaton. 2015. 
Safety of Antiadhesion Barriers in Hand Surgery. S. Kohanzadeh, L. Lugo, J. Long. Annals of Plastic Surgery, 2013. 
Current concepts in Dupuytren's disease. S. Lo, M. Pickford. Current Reviews in Musculoskeletal Medicine, 2012. 
Evidence-based medicine: Dupuytren contracture. C. Eaton. Plastic and Reconstructive Surgery, 2014. 
References (showing 1-10 of 21): 
Results following surgery for recurrent Dupuytren's disease. T. F. Roush, P. Stern. The Journal of Hand Surgery, 2000. 
Dupuytren's Contracture: An Audit of the Outcomes of Surgery. J. Dias, J. Braybrooke. Journal of Hand Surgery, 2006. 
Dupuytren's disease: outcome of the proximal interphalangeal joint in isolated fifth ray involvement. N. Van Giffen, I. Degreef, L. de Smet. Acta Orthopaedica Belgica, 2006. 
Surgical outcome of Dupuytren's disease—no higher self-reported recurrence after segmental fasciectomy. I. Degreef, T. Boogmans, P. Steeno, L. de Smet. European Journal of Plastic Surgery, 2009. 
The complications of Dupuytren's contracture surgery. N. Bulstrode, B. Jemec, P. Smith. The Journal of Hand Surgery, 2005. 
Does a 'firebreak' full-thickness skin graft prevent recurrence after surgery for Dupuytren's contracture?: a prospective, randomised trial. A. Ullah, J. Dias, B. Bhowal. The Journal of Bone and Joint Surgery, British Volume, 2009. 
Results of Partial Fasciectomy for Dupuytren Disease in 261 Consecutive Patients. J. Coert, J. Nerín, M. Meek. Annals of Plastic Surgery, 2006. 
Long-Term Results after Segmental Aponeurectomy for Dupuytren's Disease. J. Moermans. Journal of Hand Surgery, 1996. 
A new material for prevention of peritendinous fibrotic adhesions after tendon repair: oxidised regenerated cellulose (Interceed), an absorbable adhesion barrier. A. Temiz, C. Ozturk, A. Bakunov, K. Kara, T. Kaleli. International Orthopaedics, 2007. 
Dupuytren's Disease: a Predominant Reason for Elective Finger Amputation in Adults. I. Degreef, L. de Smet. Acta Chirurgica Belgica, 2009. 
work_hdvko4sf3bgkfpgptygupxavkm ---- BJPsych Bulletin | Cambridge Core 
Published on behalf of the Royal College of Psychiatrists. Journal home: Accepted manuscripts; FirstView articles; Latest issue; All issues; Night-time confinement article collection; Against the stream. There is currently a delay in the posting of accepted eLetters to Cambridge Core. 
We apologise for the inconvenience. Access: Subscribed / Open access. Continues: Bulletin of the Royal College of Psychiatrists (1977-1988), Psychiatric Bulletin (1988-2009), The Psychiatrist (2010-2013), The Psychiatric Bulletin (2014). ISSN: 2056-4694 (Print), 2056-4708 (Online). Editors: Norman Poole, South West London and St George's Mental Health NHS Trust, UK, and Dr Richard Latham, East London NHS Foundation Trust, UK. BJPsych Bulletin prioritises research, opinion and informed reflection on the state of psychiatry, management of psychiatric services, and education and training in psychiatry. It provides essential reading and practical value to psychiatrists and anyone involved in the management and provision of mental healthcare. BJPsych Bulletin is an open access, peer-reviewed journal owned by the Royal College of Psychiatrists. The journal is published bimonthly by Cambridge University Press on behalf of the College. There are no submission or publication charges to authors. BJPsych Bulletin is not responsible for statements made by contributors, and material in BJPsych Bulletin does not necessarily reflect the views of the Editor-in-Chief or the College. Latest articles: Malik Hussain Mubbashar, MB BS, FRCP (London), FRCP (Edinburgh), MRCP (Glasgow), FCPS (Pakistan), FRCPsych. David P. Goldberg, Fareed A. Minhas. BJPsych Bulletin, FirstView. BJB volume 45 issue 2 Cover and Front matter. BJPsych Bulletin, Volume 45, Issue 2. What have we learnt from Covid? 
Mark Steven Salter. BJPsych Bulletin, Volume 45, Issue 2. Invisible youth during times of Covid. Melissa Beaumont, Kate Chalker, Layla Clayton, Emily Curtis, Heidi Hales, Duncan Harding, Rhiannon Lewis, Gabrielle Pendlebury. BJPsych Bulletin, Volume 45, Issue 2. BJB volume 45 issue 2 Cover and Back matter. BJPsych Bulletin, Volume 45, Issue 2. Unequal effects of climate change and pre-existing inequalities on the mental health of global populations. Shuo Zhang, Isobel Braithwaite, Vishal Bhavsar, Jayati Das-Munshi. BJPsych Bulletin, FirstView. Transforming MRCPsych theory examinations: digitisation and very short answer questions (VSAQs). Karl Scheeres, Niruj Agrawal, Stephanie Ewen, Ian Hall. BJPsych Bulletin, FirstView. Psychiatry P.R.N.: Principles, Reality, Next Steps. Edited by Sarah Stringer, Laurence Church, Roxanne Keynejad and Juliet Hurn. 2nd edn. Oxford University Press. 2020. £30.99 (pb). 304 pp. ISBN 9780198806080. Reviewed by Nidhi Gupta. BJPsych Bulletin, FirstView. RCPsych Article of the Month blog: Are there ethno-cultural disparities in mental health during the COVID-19 pandemic? 08 March 2021, Diana Miconi, PhD. The RCPsych Article of the Month for February is 'Ethno-cultural disparities in mental health during the COVID-19 pandemic: a cross-sectional study on the impact...' 
Most read: Sex, gender and gender identity: a re-evaluation of the evidence. Lucy Griffin, Katie Clyde, Richard Byng, Susan Bewley. BJPsych Bulletin, FirstView. Most cited: Losing participants before the trial ends erodes credibility of findings. Jun Xia, Clive Adams, Nishant Bhagat, Vinaya Bhagat, Paranthaman Bhoopathi, Hany El-Sayeh, Vanessa Pinfold, Yahya Takriti. Psychiatric Bulletin, Volume 33, Issue 7. © Cambridge University Press 2021. 
work_hfoeh44a2vdspa6csyjbj5hwai ---- wp-p1m-38.ebi.ac.uk: Page not available (404). Reason: The web page address (URL) that you used may be incorrect. Message ID: 218542173 (wp-p1m-38.ebi.ac.uk). Time: 2021/04/06 02:16:19. If you need further help, please send an email to PMC and include the information above. 
work_hfzgnjxbynh5hib4ce5ztzuwqi ---- FEATURES BOOKS 162 MRS BULLETIN • VOLUME 42 • FEBRUARY 2017 • www.mrs.org/bulletin This book gives an overview of silver (Ag) and silver halide (AgX, X = Br, I) nanoparticles used in the field of photography and other applications. Topics include structure, synthesis, photophysics, catalysis, photovoltaics, and stability. Chapter 1 introduces metal nanoparticles, plasmonics, and AgX photography. Chapter 2 reviews the shape and structure of metal, Ag, and AgX nanoparticles. The structures of nuclei and seeds, single-crystalline nanoparticles, nanoparticles modified by crystal defects, and composite structures are described. The chapter then gives methods for characterizing the crystal structures. Chapter 3 reviews the preparation of Ag nanoparticles and related materials for plasmonics and AgX photography. The chemistry of nanoparticle synthesis is given, including nuclei and seeds, preparation of single-crystalline nanoparticles, and growth of asymmetric nanoparticles through the introduction of defects and surfactants. The preparation of AgX nanoparticles for photography focuses on AgX-gelatin interactions and a discussion of various methods for preparation of single-crystalline and tabular AgX nanoparticles. There is a description of industrial-scale AgX nanoparticle synthesis. Methods for the arrangement of AgX and Ag nanoparticles needed for fine imaging and fabrication of photographic film are described. Chapter 4 covers light absorption and scattering of Ag, including molecule-scale Ag nanoparticles as well as larger isotropic and anisotropic Au nanoparticles, nanorods, and nanoplates. There is a discussion of light absorption of J- and H-aggregated chromophores, Ag nanoparticles, and related materials in AgX photography. Chapter 5 discusses catalysis by Ag and other metal nanoparticles in plasmonics and photography. 
Topics discussed include photocatalytic water splitting and hydrogen production. There follows a discussion of the role of Ag catalysts in the mechanism of photographic development. Chapter 6 focuses on the photovoltaic effect in Ag and other metal nanoparticles, covering light-induced charge separation in inorganic and organic semiconductors. The chapter also covers light-induced charge separation in Ag/AgX nanoparticle systems in relation to photography. Chapter 7 covers stability of Ag and other metal nanoparticles in AgX photography. There is an extensive discussion of the effect of gelatin on the electrochemical properties of Ag nanoparticles and the reasons why it performs better than other polymers. The chapter also discusses the electronic structure of Ag nanoparticles in gelatin layers in ambient atmosphere and stabilization of Ag and other metal nanoparticles in photographic materials and plasmonic devices. Historically, Ag and AgX nanoparticles have played central roles in photography. Due to the rise of digital photography, knowledge of AgX photography risks being lost. This book is important because it gives an overview of this field drawn from the history of photography and how it can be applied to emerging technologies such as catalysis, photovoltaics, and plasmonics. The author has worked in the photography industry for nearly 50 years and has a deep knowledge that is reflected in this book. There are 566 references, including critical articles from the early history of AgX photography, and 168 figures. This book is a useful reference for researchers and graduate students interested in all aspects of plasmonics and metal nanoparticles. Reviewer: Thomas M. Cooper of the Air Force Research Laboratory, USA. Silver Nanoparticles: From Silver Halide Photography to Plasmonics Tadaaki Tani Oxford University Press, 2015 240 pages, $110.00 ISBN 978-0-19-871460-6 In the interest of transparency, MRS is a co-publisher of this title.
However, this review was requested and reviewed by an independent Book Review Board. This authoritative volume introduces the reader to computational thermodynamics and the use of this approach to the design of material properties by tailoring the chemical composition. The text covers applications of this approach, introduces the relevant computational codes, and offers exercises at the end of each chapter. The book has nine chapters and two appendices that provide background material on computer codes. Chapter 1 covers the first and second laws of thermodynamics, introduces the spinodal limit of stability, and presents the Gibbs–Duhem equation. Chapter 2 focuses on the Gibbs energy function. Starting with a homogeneous system with a single phase, the authors proceed to phases with variable compositions and polymer blends. Computational Thermodynamics of Materials Zi-Kui Liu and Yi Wang Materials Research Society and Cambridge University Press, 2016 260 pages, $89.99 (e-book $72.00) ISBN 9780521198967 X-ray diffraction (XRD) is a powerful nondestructive characterization technique for determining the structure, phase, composition, and strain in materials. It is one of the most frequently employed methods for characterizing materials. This book distinguishes itself from other books on this topic by its simplified treatment and its coverage of thin-film analysis.
It largely minimizes the mathematics and is profusely illustrated, making it a good entry point for learning the basic principles of XRD. The common thin-film structures (random polycrystalline, textured) and their relationships with the substrate (strain, in-plane rotation) are defined and explained. This makes it valuable to researchers who study thin-film deposition. The book includes example problems to reinforce the concepts covered, plus problems that can be assigned as homework. The background physics is presented first. Chapter 1 covers the properties of electromagnetic radiation, including wave-particle duality and the generation of x-rays. Chapter 2 describes crystal geometry, explaining the concept of a lattice and how Miller indices are assigned to planes and directions, reciprocal lattices, and crystal structures. The scope of this treatment is above that found in introductory materials science and engineering textbooks. The interaction of electromagnetic radiation with materials is discussed in chapter 3, including interference and diffraction. Many of these topics will be familiar to those who have taken college physics, but here they are described with an emphasis on their importance to XRD. After establishing the basic physics, the book describes the conditions required for XRD to occur in chapter 4. Bragg's Law and the Laue equations are presented and explained. Electron diffraction and the Scherrer equation for estimating nanoparticle size are discussed. In chapter 5, the main factors controlling the intensity of diffracted x-rays are delineated. These include scattering by electrons and atoms and the specific arrangement of atoms (the material's unit cell). Specific applications of XRD are covered in chapter 6 (thin films), chapter 7 (single crystals), and chapter 8 (powder diffraction).
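The Bragg and Scherrer relations named in this chapter summary are compact enough to illustrate numerically. The sketch below is not from the book: the Cu K-alpha wavelength and the position of the Ag(111) reflection are standard values, the 0.8° peak width is a hypothetical measurement, and K = 0.9 is the usual spherical-crystallite assumption.

```python
import math

def bragg_d_spacing(wavelength_nm, two_theta_deg, order=1):
    """Interplanar spacing d from Bragg's Law: n*lambda = 2*d*sin(theta)."""
    theta = math.radians(two_theta_deg / 2.0)
    return order * wavelength_nm / (2.0 * math.sin(theta))

def scherrer_size(wavelength_nm, fwhm_deg, two_theta_deg, k=0.9):
    """Crystallite size from the Scherrer equation: D = K*lambda / (beta*cos(theta)),
    where beta is the peak full width at half maximum in radians."""
    beta = math.radians(fwhm_deg)
    theta = math.radians(two_theta_deg / 2.0)
    return k * wavelength_nm / (beta * math.cos(theta))

# Hypothetical powder measurement with Cu K-alpha radiation
wl = 0.15406                          # Cu K-alpha wavelength, nm
d = bragg_d_spacing(wl, 38.1)         # Ag (111) reflection lies near 2-theta = 38.1 deg
size = scherrer_size(wl, 0.8, 38.1)   # 0.8 deg FWHM gives a size on the order of 10 nm
print(f"d = {d:.4f} nm, crystallite size = {size:.1f} nm")
```

Peak broadening from strain and instrument response is ignored here, which is why Scherrer estimates are usually quoted as lower bounds on crystallite size.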
Rocking curves for assessing thin-film quality as well as grazing incidence XRD for enhancing the signal from the surface and diminishing signal from the substrate are introduced. The Laue method for determining the orientation of single crystals is described in detail. The procedure for identifying phases present and lattice constant values is recounted. This book is a highly accessible introduction to XRD for materials research. It is written in concise and clear prose. The text creates a cohesive picture of XRD. After finishing this book, researchers will be able to understand the basics of many materials science and engineering research papers. Reviewer: J.H. Edgar of the Department of Chemical Engineering, Kansas State University, USA. The discussion includes the contributions of external electric and magnetic fields to the Gibbs energy. Chapter 3 deals with phase equilibria in heterogeneous systems, the Gibbs phase rule, and phase diagrams. Chapter 4 briefly covers experimental measurements of thermodynamic properties used as input for thermodynamic modeling by calculation of phase diagrams (CALPHAD). Chapter 5 discusses the use of density functional theory to obtain thermochemical data and fill gaps where experimental data are missing. The chapter introduces the Vienna ab initio simulation package (VASP) for density functional theory and the YPHON code for phonon calculations. Chapter 6 introduces the modeling of Gibbs energy of phases using the CALPHAD method. Chapter 7 deals with chemical reactions and the Ellingham diagram for metal oxide systems, and presents the calculation of the maximum reaction rate from equilibrium thermodynamics. Chapter 8 is devoted to electrochemical reactions and Pourbaix diagrams with application examples. Chapter 9 concludes this volume with the application of a model of multiple microstates to Ce and Fe3Pt. CALPHAD modeling is briefly discussed in the context of genomics of materials.
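The Gibbs energy modeling that runs through the book's chapters can be illustrated with the simplest case such treatments build on: the molar Gibbs energy of mixing of a binary solution. This sketch is not taken from the book; the regular-solution form and the hypothetical 20 kJ/mol interaction parameter are standard textbook choices.

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def gibbs_mixing(x, T, omega=0.0):
    """Molar Gibbs energy of mixing for a binary regular solution:
    G_mix = omega*x*(1-x) + R*T*(x*ln(x) + (1-x)*ln(1-x)).
    omega = 0 reduces to the ideal solution."""
    if x <= 0.0 or x >= 1.0:
        return 0.0  # pure components: no mixing contribution
    return omega * x * (1 - x) + R * T * (x * math.log(x) + (1 - x) * math.log(1 - x))

# Ideal mixing is most negative at the equimolar composition
g_ideal = gibbs_mixing(0.5, 1000.0)            # R*1000*ln(0.5), about -5763 J/mol
# A positive interaction parameter (hypothetical 20 kJ/mol) raises G_mix
g_reg = gibbs_mixing(0.5, 1000.0, omega=20e3)  # -5763 + 5000, about -763 J/mol
print(f"ideal: {g_ideal:.0f} J/mol, regular: {g_reg:.0f} J/mol")
```

A large enough positive `omega` makes the curve locally concave at intermediate compositions, which is where the spinodal limit of stability discussed in chapter 1 comes from.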
The book introduces basic thermodynamic concepts clearly and directs readers to appropriate references for advanced concepts and details of software implementation. The list of references is quite comprehensive. The authors make liberal use of diagrams to illustrate key concepts. The two appendices discuss software requirements and the file structure, and present templates for special quasi-random structures. There is also a link to download pre-compiled binary files of the YPHON code for Linux or Microsoft Windows systems. The exercises at the end of the chapters assume that the reader has access to VASP, which is not freeware. Readers without access to this code can work on a limited number of exercises. However, results from other first-principles codes can be organized in the YPHON format, as explained in the appendix. This book will serve as an excellent reference on computational thermodynamics, and the exercises provided at the end of each chapter make it valuable as a graduate-level textbook. Reviewer: Ram Devanathan is Acting Director of the Earth Systems Science Division, Pacific Northwest National Laboratory, USA. X-Ray Diffraction for Materials Research: From Fundamentals to Applications Myeongkyu Lee Apple Academic Press and CRC Press, 2016 302 pages, $159.95 (e-book $111.97) ISBN 9781771882989 work_hgsz375stzf3dd53bq2zanoin4 ---- EDITORIAL The eye and the kidney: twin targets in diabetes T. Ravi Raju1,2 & N. V. Madhavi3 & G. R.
Sridhar4 Published online: 19 December 2015 © Research Society for Study of Diabetes in India 2015 Diabetes mellitus is both defined by, and dreaded because of, microvascular involvement of the eye and the kidneys. The level of glucose at which retinopathy occurs forms the basis for diagnosing diabetes. Despite non-invasive methods being available for its diagnosis, retinopathy is often underdiagnosed and untreated. Eventually it leads to visual loss, often in working-age populations. There appears to be ethnic variation in susceptibility to diabetic retinopathy (DR), with prevalence of DR, sight-threatening DR and macular edema being higher in people from South Asia, Africa, Latin America and indigenous tribal populations [1]. The variations are ascribed to differential susceptibility to known risk factors, as well as differences in lifestyle, access to health care and genetic and epigenetic phenomena. A number of prevalence studies have been reported from different parts of India. In a hospital-based report from Kashmir (n, 500 with diabetes), DR was identified in 27 % (n, 135) [2]. Increasing age was a risk factor, and milder forms accounted for most cases, suggesting that early screening and treatment is necessary [3]. Prevalence studies from rural India are fewer [4]. In the population-based Central India Eye and Medical Study involving more than 4500 subjects, DR was diagnosed by fundus photography utilizing criteria of the Early Treatment of Diabetic Retinopathy Study. DR was identified in 15 subjects (0.33 %) of the entire cohort. In subjects with diabetes, DR was present in 9.6 % [5]. Risk factors were increasing age and higher glucose concentrations. All eyes showed only non-severe forms of DR. There was no significant association with other conventional risk factors or with ocular parameters.
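As a worked illustration of how prevalence figures such as those above are derived, the sketch below computes a point prevalence with a 95% Wilson score interval from raw counts. The counts reuse the Kashmir series cited above (135 of 500); the Wilson interval is a common choice for proportions and is not necessarily the method used in the original studies.

```python
import math

def prevalence_with_ci(cases, n, z=1.96):
    """Point prevalence and 95% Wilson score interval for a proportion."""
    p = cases / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return p, center - half, center + half

# The Kashmir hospital series cited above: 135 of 500 patients with DR
p, lo, hi = prevalence_with_ci(135, 500)
print(f"prevalence = {100 * p:.1f}% (95% CI {100 * lo:.1f}-{100 * hi:.1f}%)")
```

The Wilson interval behaves better than the naive normal approximation for small samples or proportions near 0 or 1, which is why it is often preferred in epidemiological reporting.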
In the SN-DREAMS III report no 2 (Sankara Nethralaya Diabetic Retinopathy Epidemiology and Molecular Genetic Study III), population-based cross-sectional screening was carried out in a rural population of south India (n, 13,079) [6]. DR was identified using 45° four-field stereoscopic digital photography, along with 30° seven-field stereo digital pairs in those with DR. Among those with diabetes, DR was identified in 10.3 %, with greater risk in men, insulin users, those with longer duration of diabetes, systolic hypertension and poor glycemic control [6]. In urban Chennai, the CURES Eye Study I, a population-based screening for diabetes, was carried out in adults aged 20 years and above (n, 26,001). Among 1382 known subjects with diabetes who agreed to undergo four-field stereo colour photography, and 354 newly diagnosed subjects with diabetes, DR was present in 17.6 % [7]. The prevalence of DR was higher in men and those with proteinuria. Another population-based study in South Kerala assessed the prevalence of diabetes and DR in a community-based screening programme. One hundred and sixty camps were held in five southern districts of Kerala (Project Trinetra) [8]. In a pilot hospital-based screening for DR from western India (n, 168) using ETDRS classification, prevalence of DR was 33.9 %; non-proliferative DR was 25.5 % and proliferative DR 8.33 % [9]. The current issue of the Journal reports on the risk factors for DR in sub-Saharan Africa, prevalence in a rural south Indian population, and on biomarkers for DR and screening methods. * G. R.
Sridhar sridharvizag@gmail.com 1 NTR University of Health Sciences, Vijayawada, India 2 Department of Nephrology, Andhra Medical College, Visakhapatnam, India 3 Dr GM Narayanaswamy Medical Centre, Visakhapatnam, India 4 Endocrine and Diabetes Centre, 15-12-15 Krishnanagar, Visakhapatnam 530002, India Int J Diabetes Dev Ctries (December 2015) 35 (Suppl 3):S299–S302 DOI 10.1007/s13410-015-0461-6 Photography of the retinal vessels has progressed from analog [10] to digital. Despite the convenience of the non-mydriatic camera for DR screening, multiple-field stereo photography is necessary for precise classification. Genetic and epigenetic factors have been linked to the development of DR. Microvascular dysfunction results from breakdown of the blood-retinal barrier and capillary basement membrane thickening involving different cellular components of the retina, including retinal neuronal and glial dysfunction. Pathogenic pathways include altered retinal blood flow, inflammatory pathways and growth factors such as vascular endothelial growth factor (VEGF) [11]; the latter forms the target for treatment of DR by the use of drug modulators. Diabetic nephropathy (DN) is the other major diabetic microvascular complication, accounting for significant morbidity and mortality. Metabolic and genetic factors contribute to its pathogenesis. This issue carries reports on DN relating to risk factors, early recognition, biomarkers and intervention. At the outset, one should bear in mind that non-diabetic renal disease can contribute to renal failure in diabetes mellitus. Clinical situations implying non-diabetic renal disease include shorter duration of diabetes, microscopic hematuria and active urinary sediment in the absence of retinopathy [12].
An Indian study showed a poor correlation between DN and DR in type 2 diabetes mellitus and suggested that renal biopsy is useful in staging the lesions in subjects with diabetes having renal dysfunction [13]. A similar discordance between DR and DN in type 2 diabetes was reported, advocating the requirement of renal biopsy for a precise renal pathological diagnosis in these situations [14]. A number of biomarkers have been studied to identify early stages of renal involvement in diabetes, which may pre-date the clinical diagnosis of diabetes. Markers of tubular damage such as urine neutrophil gelatinase-associated lipocalin and cystatin C are potential biomarkers [15]. Other putative biomarkers include serum prolidase activity, which is associated with oxidative stress, also contributing to diabetic nephropathy; oxidant stress is reported as total oxidant status, total antioxidative status and oxidative stress index [16]. Newer biomarkers employing proteomics and emerging omics technologies are likely to improve the capability to identify nephropathy at an early stage and allow aggressive interventions to prevent further progress [17]. Whereas urinary miRNAs could serve as biomarkers in diabetic complications, a recent study reported that among five detectable TGF-β1-regulated miRNAs, only circulating levels of let-7b-5p and miR-21-5p were associated with risk of end-stage renal disease [18]. However, one must bear in mind that circulating miRNA levels may not correlate with tissue levels [19]. Much focus has been directed towards the role of genetic factors contributing to DR, in view of the availability of sequencing techniques. Such variants can serve as biomarkers, provide information on pathogenesis, and open up avenues for new drug targets.
Studies have targeted gene variants of inflammatory cytokine genes (interleukins, tumour necrosis factor), extracellular matrix components (collagen, laminins, matrix metalloproteinase 9), renal function (angiotensinogen and angiotensin II receptors), endothelial function and oxidative stress (nitric oxide synthase 3, catalase, superoxide dismutase 2), and glucose and lipid metabolism (adiponectin, apolipoprotein E, aldose reductase, glucose transporter 1, peroxisome proliferator-activated receptor gamma 2) [20]. There have been a number of Indian studies on the candidate-gene approach to DR. Vascular endothelial growth factor (VEGF), a potent cytokine, has a role in the pathogenesis of diabetic microvascular disease, including DN. A pilot cross-sectional study (n, 40) from a North Indian population showed that the DD genotype and D allele of the I/D polymorphism at the -2549 position of the VEGF gene increased the susceptibility to DN [21]. Polymorphisms of the osteopontin promoter were reported to be a genetic risk factor for DN. In a case-control study, three functional promoter gene polymorphisms and their haplotypes were studied in relation to risk of DN (C-443T, delG-156G, G-66T) [22]. The results suggested an association between the gene promoter polymorphisms and their haplotypes with risk of DN in subjects with T2DM. The transcription factor 7-like 2 (TCF7L2) gene is associated with DN. In a south Indian population, polymorphisms of the gene were studied in relation to DN [23]. The rs12255372 polymorphism in the TCF7L2 gene is associated with DN but operates through diabetes mellitus. A larger sample from Punjab and from Jammu and Kashmir (n, 1313) was genotyped for TGF-β1 (rs1800470 and rs1800469). The CC genotype of rs1800470 increased the risk of end-stage renal disease by 3.1–4.5-fold [24]. TNF promoter polymorphism, associated with chronic low-grade inflammation, was studied in relation to susceptibility to DN.
It was shown that the 863C/A polymorphism might be protective against DN whereas -1031T/C may increase the risk of DN [25]. Data on ACE2 are available from different areas of India. Angiotensin 1 and angiotensin 2 receptors regulate NF-kB and ACE2, and provide a potential for identifying drug targets [26]. ACE (I/D) gene polymorphisms were reported to predispose to DN in the Kutch region of western India. About 150 each of the Ahir and Rabri ethnic groups with diabetes of more than 10 years' duration were studied. The D/D variant in intron 16 at the ACE locus contributed to increased risk, but not severity, of DN [27]. Genes involved in the oxidative stress pathway appear to predispose to DN [28, 29]. Replication studies for the TGF-β1 gene and for SNPs in RAGE and GFPT2 in India were obtained [30, 31]. Eventually, most people with end-stage diabetic renal disease require long-term dialysis or renal transplant. While the technology for the latter is available, lack of adequate donors is a bottleneck. Monitoring the renal status for early signs of rejection is an important aspect of the post-transplant stage. Methods for non-invasive detection of renal allograft inflammation by measuring mRNA for IP-10 and CXCR3 in urine were shown to be a potential biomarker of acute rejection [32]. The current approach to managing DR is limited to the later stages of the condition and does not address the early stages, where reversing the changes can be considered. Anti-VEGF treatment is an advance, but other mechanisms can be targeted involving the retinal involvement of glia, neurons and vessels [11]. Emerging treatments in the pipeline include modifying expression of antioxidant genes, kallikrein inhibitors and NOX-2 inhibitors. Stem cell reparative processes are also being studied in the treatment of DR [11].
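The fold-risks reported in the candidate-gene studies above are typically derived from 2×2 genotype tables. The sketch below shows the standard odds-ratio calculation with a log-based (Woolf) confidence interval; all counts are hypothetical and do not come from the cited studies.

```python
import math

def odds_ratio(a, b, c, d, z=1.96):
    """Odds ratio and 95% CI from a 2x2 table:
    a = cases with risk genotype, b = cases without,
    c = controls with risk genotype, d = controls without."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # standard error of ln(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts: risk-genotype carriers among cases vs. controls
or_, lo, hi = odds_ratio(60, 40, 30, 70)
print(f"OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

An interval that excludes 1 corresponds to a nominally significant association; real studies would additionally adjust for covariates such as diabetes duration and glycemic control.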
The approach to DN is also multifactorial: controlling glucose levels, blood pressure, lipids and dietary protein intake. Further understanding of the pathogenic mechanisms shows promise of newer drug targets against pathways upregulated by elevated glucose levels, as well as others leading to progression of DN [33]. Recent GWAS investigations show promise of identifying novel loci influencing albuminuria in diabetes [34]. The mix of articles in this issue reflects the trend for greater focus of genetic studies in DN compared to DR. References 1. Sivaprasad S, Gupta B, Nwaobi RC, Evans J. Prevalence of diabetic retinopathy in various ethnic groups: a worldwide perspective. Surv Ophthalmol. 2012;57:347–70. 2. Qureshi T, Abdullah N, Shagufta. Prevalence of diabetic retinopathy in Kashmir, India—a hospital based study. Global J Med Public Health. 2013;2:1. www.gjmedph.com. 3. Rema M, Pradeepa R. Diabetic retinopathy: an Indian perspective. Indian J Med Res. 2007;125:297–310. 4. Mohan V, Pradeepa R. Risk factors for diabetic retinopathy in rural India. J Postgrad Med. 2009;55:89–90. 5. Jonas JB, Nangia V, Khare A, et al. Prevalence and associated factors of diabetic retinopathy in rural Central India. Diabetes Care. 2013;36:e69. 6. Raman R, Ganesan S, Pal SS, Kulothungan V, Sharma T. Prevalence and risk factors for diabetic retinopathy in rural India. Sankara Nethralaya Diabetic Retinopathy Epidemiology and Molecular Genetic Study III (SN-DREAMS III), report no 2. BMJ Open Diab Res Care. 2014;2:000005. doi:10.1136/bmjdrc-2013-000005. 7. Rema M, Premkumar S, Anitha B, Deepa R, Pradeepa R, Mohan V. Prevalence of diabetic retinopathy in urban India: the Chennai Urban Rural Epidemiology Study (CURES) Eye Study, I. Invest Ophthalmol Vis Sci. 2005;46:2328–33. 8. Soman M, Nair U, Bhilal S, Mathew R, Gafoor F, Nair KGR. Population based assessment of diabetes and diabetic retinopathy in South Kerala-Project Trinetra: an interim report. Kerala J Ophthalmol. 2009;1:36–41. 9.
Ramavat PR, Ramavat MR, Ghugare BW, Vaishnav RG, Joshi MU. Prevalence of diabetic retinopathy in western Indian type 2 diabetic population: a hospital-based cross-sectional study. J Clin Diagn Res [serial online]. 2013;7:1387–90. 10. Sridhar GR, Satish K, Ahuja MMS. Non mydriatic retinal color photography in young Indian diabetics. Ann Ophthalmol. 1993;25:187–90. 11. Stitt AW, Lois N, Medina RJ, Adamson P, Curtis TM. Advances in our understanding of diabetic retinopathy. Clin Sci. 2013;125:1–17. 12. Wilfred DC, Mysorekar VV, Venkataramana RS, Eshwarappa M, Subramanyan R. Non-diabetic renal disease in type 2 diabetes mellitus patients: a clinicopathological study. J Lab Physicians. 2013;5:94–9. 13. Sahay M, Mahankali RK, Ismail K, Vali PS, Sahay RK, Swarnalata. Renal histology in diabetic nephropathy: a novel perspective. Indian J Nephrol. 2014;24:226–31. 14. Prakash J, Gupta T, Prakash S, Bhushan P, Usha SM, Singh SP. Non-diabetic renal disease in type 2 diabetes mellitus: study of renal-retinal relationship. Indian J Nephrol. 2015;25:222–8. 15. Garg V, Kumar M, Mahapatra HS, Chitkara A, Gadpayle AK, Sekhar V. Novel urinary biomarkers in pre-diabetic nephropathy. Clin Exp Nephrol. 2015. 16. Verma AK, Subhash Chandra, Singh RG, Singh TB, Srivastava S, Srivastava R. Serum prolidase activity and oxidative stress in diabetic nephropathy and end stage renal disease: a correlative study with glucose and creatinine. Biochem Res Int. 2014. doi:10.1155/2014/291458. 17. Rao PV, Lu X, Standle M, et al. Proteomic identification of urinary biomarkers of diabetic nephropathy. Diabetes Care. 2007;30(3):629–37. 18. Pezzolesi MG, Satake E, McDonald KP, Major M, Smiles AM, Krolewski AS. Circulating TGF-β1-regulated miRNAs and the risk of rapid progression to ESRD in type 1 diabetes. Diabetes. 2015;64:3285–93. 19. Badal SS, Danesh FR. Diabetic nephropathy: emerging biomarkers for risk assessment. Diabetes. 2015;64:3063–5. 20. Rizvi S, Raza ST, Mahdi F.
Association of genetic variants with diabetic nephropathy. World J Diab. 2014;5:809–16. 21. Amle D, Mir R, Khaneja A, Agarwal S, Ahlawat R, Ray PC, et al. Association of 18bp insertion/deletion polymorphism, at -2549 position of VEGF gene, with diabetic nephropathy in type 2 diabetes mellitus patients of North Indian population. J Diab Metab Dis. 2015;14:19. doi:10.1186/s40200-0144-3. 22. Cheema BS, Iyengar S, Sharma R, Kohli HS, Bhansali A, Khullar M. Association between osteopontin promoter gene polymorphisms and haplotypes with risk of diabetic nephropathy. J Clin Med. 2015;4:1281–92. 23. Bodhini D, Chidambaram M, Liju S, et al. Association of TCF7L2 polymorphism with diabetic nephropathy in the south Indian population. Ann Human Genet. 2015. doi:10.1111/ahg.12122. 24. Raina P, Sikka R, Kaur R, et al. Association of transforming growth factor beta-1 (TGF-β1) genetic variation with type 2 diabetes and end stage renal disease in two large population samples from north India. OMICS. 2015;19:306–17. 25. Gupta S, Mehndiratta M, Kalra S, Kalra OP, Shukla R, Gambhir JK. Association of tumour necrosis factor (TNF) promoter polymorphisms with plasma TNF-a levels and susceptibility to diabetic nephropathy in North Indian population. J Diab Complications. 2015;29:338–42. 26. Pandey A, Goru SK, Kadakol A, Malek V, Gaikwad AB. Differential regulation of angiotensin converting enzyme 2 and nuclear factor-kB by angiotensin II receptor subtypes in type 2 diabetic kidney. Biochimie. 2015. doi:10.1016/j.biochi.2015.08.005. 27. Parchwani DN, Palandurkar KM, Hemachandan Kumar D, Patel DJ. Genetic predisposition to diabetic nephropathy: evidence for a role of ACE (I/D) gene polymorphism in type 2 diabetic population from Kutch region. Indian J Clin Biochem. 2015;30:43–54.
28. Dabhi B, Mistry KN. Oxidative stress and its association with TNF-a-308G/C and IL-1a-889C/T gene polymorphisms in patients with diabetes and diabetic nephropathy. Gene. 2015;562:197–202. 29. Narne P, Ponnaluri KC, Siraj M, Ishaq M. Polymorphisms in oxidative stress pathway genes and risk of diabetic nephropathy in south Indian type 2 diabetic patients. Nephrol (Carlton). 2014;19:623–9. 30. Prasad P, Tiwari AK, Prasanna Kumar KM, Ammini AC, Arvind G, Rajeev G, et al. Association of TGFβ1, TNFα, CCR2 and CCR5 gene polymorphisms in type-2 diabetes and renal insufficiency among Asian Indians. BMC Med Genet. 2007;8:20. doi:10.1186/1471-2350-8-20. 31. Prasad P, Tiwari AK, Prasanna Kumar KM, Ammini AC, Arvind G, Rajeev G, et al. Association analysis of ADPRT1, AKR1B1, RAGE, GFPT2 and PAI-1 gene polymorphisms with chronic renal insufficiency among Asian Indians with type-2 diabetes. BMC Med Genet. 2010;11:52. doi:10.1186/1471-2350-11-52. 32. Ravi Raju T, Muthukumar T, Dadhania D, et al. Noninvasive detection of renal allograft inflammation by measurements of mRNA for IP-10 and CXCR3 in urine. Kidney Int. 2004;65:2390–7. 33. Ahmad J. Management of diabetic nephropathy: recent progress and future perspective. Diabetes Metab Syndr. 2015. doi:10.1016/j.dsx.2015.02.008. 34. Teumer A, Tin A, Sorice R, et al. Genome-wide association studies identify genetic loci associated with albuminuria in diabetes.
Diabetes. 2015. work_hnyap7iyrfhqxa75e7pqcvi3dy ---- Veterinary Record - Wiley Online Library
Veterinary Record. Impact factor: 2.442 (2019 Journal Citation Reports, Clarivate Analytics): 10/141 (Veterinary Sciences). Online ISSN: 2042-7670. © British Veterinary Association. Latest issue: Volume 188, Issue 6, 20 March–3 April 2021. Latest News: Battle of the bulge: RVC study reveals extent of UK dog obesity crisis. One in 14 dogs in the UK is overweight, with certain breeds, such as pugs and beagles, more prone to excessive weight gain than others. 30 March, Arabella Gray. Government urged to 'pick up the pace' with animal sentience. A group of veterinary professionals from all areas of the professions have written an open letter to the government calling on it to take quick action to enshrine animal sentience in law. 30 March, Georgina Mills. Northern Ireland vet school on the horizon. Northern Ireland could be set to have its very own vet school in the not too distant future.
23 March, Georgina Mills. Articles (most recent, Early View): Prevalence and distribution of Anaplasma phagocytophilum in ticks collected from dogs in the United Kingdom. Sophie Keyte, Swaid Abdullah, Kate James, Hannah Newbury, Chris Helps, Séverine Tasker, Richard Wall. First published: 5 April 2021. Comparison of food-animal veterinarians' and producers' perceptions of producer-centered communication following on-farm interactions. Antonia DeGroot, Jason B. Coe, David Kelton, Cynthia Miltenburg, Jeffrey Wichtel, Todd Duffield. First published: 5 April 2021. Establishing levels of retained radioactivity in cats receiving radioactive iodine treatment. Consuelo Alonzi, Kerry Peak, Lou Gower, David John Walker, Ben Johnson. First published: 5 April 2021.
Coe David Kelton Cynthia Miltenburg Jeffrey Wichtel Todd Duffield First Published:  5 April 2021 Abstract Full text PDF References Request permissions full access Establishing levels of retained radioactivity in cats receiving radioactive iodine treatment Consuelo Alonzi Kerry Peak Lou Gower David John Walker Ben Johnson First Published:  5 April 2021 Abstract Full text PDF References Request permissions more > Recent issues IssueVolume 188, Issue 6Pages: 201-236, i-iv 20 March–3 April 2021 IssueVolume 188, Issue 5Pages: 161-200, i-iv March 2021 IssueVolume 188, Issue 4Pages: 237-295, i-ii February 2021 IssueVolume 188, Issue 3Pages: 145-215 February 2021 Tools Submit an article Get Content alerts Subscribe to this journal Tweets by BVA Stay Connected Facebook Twitter Instagram YouTube Related Titles LATEST ISSUE  >Volume 43, Issue 2 March 2021 LATEST ISSUE  >Volume 8, Issue 4 October 2020 LATEST ISSUE  >Volume 8, Issue 1 December 2021 Copyright © 2021 British Veterinary Association British Veterinary Association is registered in England No 206456 at 7 Mansfield Street, London, W1G 9NQ. VAT registration number GB 232 7441 80 Copyright © 2021 British Veterinary Association Additional links About Wiley Online Library Privacy Policy Terms of Use Cookies Accessibility Help & Support Contact Us DMCA & Reporting Piracy Opportunities Subscription Agents Advertisers & Corporate Partners Connect with Wiley The Wiley Network Wiley Press Room Copyright © 1999-2021 John Wiley & Sons, Inc. All rights reserved Log in to Wiley Online Library Email or Customer ID Password Forgot password? BVA members log in here Log in with BVA     NEW USER > INSTITUTIONAL LOGIN > Change Password Old Password New Password Too Short Weak Medium Strong Very Strong Too Long Password Changed Successfully Your password has been changed Create a new account Email or Customer ID Returning user Forgot your password? Enter your email address below. 
Int. J. Morphol., 29(3):1028-1032, 2011.

Asymmetry of Human Skull Base During Growth
Asimetría de la Base de Cráneo Durante el Crecimiento

Priscilla Perez Russo & Ricardo Luiz Smith

RUSSO, P. P. & SMITH, R. L. Asymmetry of human skull base during growth. Int. J. Morphol., 29(3):1028-1032, 2011.

SUMMARY: Knowledge about human skull asymmetry in normal dry specimens is useful as a parameter for medical and dentistry practice. The skull base was investigated with the objective of validating a method of indirect measurement with digital pictures and evaluating the degree of asymmetry of the human skull base at different ages. We analyzed 176 normal, identified human skulls, divided by age into the following groups: fetuses, newborns, children and adults. Measures were taken from a central point, the pharyngeal tubercle, to four lateral points: foramen ovale, foramen spinosum, carotid canal and stylomastoid foramen, using digital biometry after a comparative validation against the direct method performed with a caliper. Results are presented as asymmetry indexes and data are expressed as percentages. The digital method proved valid in relation to the direct method with a caliper. The skulls in all age groups presented asymmetry. The smallest asymmetry index was 2.6% and the largest 6.6%. In the literature, there are no patterns for defined values of asymmetry in normal skulls.
The asymmetry of the foramina relative to the midline was verified in the whole sample and was considered normal, corresponding to an average asymmetry index of 4%. In this study we also observed that in most of the measures the right side prevailed over the left side.

KEY WORDS: Anatomy; Asymmetry; Growth; Skull.

INTRODUCTION

The degree of asymmetry in anatomy may indicate a genetic, congenital or acquired pathological condition, but there is no absolute standard to establish the boundary between "normal asymmetry" and "pathological asymmetry". Knowledge of this difference is important for diagnosis and procedures. Some small asymmetries are not detected in clinical practice, and it is necessary to establish a range of "normal asymmetry": the greater the asymmetry, the more likely a pathological condition is (Shah & Joshi, 1978; Rossi et al., 2003; Moreira et al., 2006). Although the human skull is apparently symmetrical, there are differences when we consider the right and left sides. This asymmetry is a common finding, especially in the skull base (Shah & Joshi), where development is associated with neural structures such as the brain and the sense organs (Brodie, 1955). The skull presents significant changes during growth, mainly related to size, until the end of adult life. Skull growth occurs through bone deposition and resorption. The remodeling promotes the movement of the bone as a whole while keeping the same shape; in fact, some areas may grow faster than others (Enlow & Hans, 1996). Sejrsen et al. (1997) used cranial foramina to assess skull growth and found more intense growth in the base in children aged 4-5 years, with growth then decreasing until it stops. They also noted an area in the skull base bounded by the foramen spinosum, stylomastoid foramen and foramen magnum which did not present anatomical variation.
This stability in the central region of the skull base had already been reported in a study of skull base postnatal growth (Moss & Greenberg, 1967). The human skull is definitely asymmetrical; this is not a matter of individual skull bones differing from a symmetrical model, but of the skull being asymmetric as a whole (Woo, 1931). Some dimensions of the skull bones are more prominent on the right side and some on the left, with a predominance in volume of the right frontal and parietal bones over the left (Woo; Chebib & Chamma, 1981). A study with cephalometric radiographs also highlights further development of the right side over the left, suggesting that this difference is related to brain development, since the right side is larger than the left (Shah & Joshi).

Department of Morphology and Genetics, Universidade Federal de São Paulo, São Paulo, SP, Brazil.

Several hypotheses have been advanced about factors that promote asymmetry during craniofacial growth: one possible factor is chewing (Shah & Joshi) and another is an imbalance of muscle activity, especially in the stomatognathic system (Chebib & Chamma). These hypotheses were not supported by a study of skulls ranging from fetal to adult age (Rossi et al.). The authors compared asymmetry indexes obtained from various measures and concluded that craniofacial asymmetry was found in the whole sample and in all age groups, even before birth (Rossi et al.). Studies of asymmetry and growth have been conducted with several methods, among them direct measurements on the bone structure, cephalometric radiographs, digital photography and analysis with computer programs. The traditional method, with a caliper applied directly to the anatomical structure, is simple, objective and well defined. However, the use of technology to capture digital images opens new horizons, such as high-resolution images, easily archived in digital media, and new possibilities of rapid data transmission (Moreira et al.).
The objective of this study is to evaluate the degree of skull base asymmetry during human growth and to analyze the measurement methods by comparing direct measurement (digital caliper) with indirect measurement (digital photography and the software ImageJ).

MATERIAL AND METHOD

The study was carried out with 176 identified human dry skulls from the Museum of Anatomy Collection of the Universidade Federal de São Paulo. The sample was divided into five age groups: A - fetuses (7 months up to term; n=15, 10 male, 5 female); B - newborn up to one year old (n=32, 22 male, 10 female); C - 13 months up to 9 years old (n=19, 12 male, 7 female); D - 18 up to 44 years old (D1 male, n=30; D2 female, n=30); and E - 45 up to 100 years old (E1 male, n=25; E2 female, n=25). The investigation protocol was approved by the Bioethics Committee for Scientific Research of the Universidade Federal de São Paulo, Brazil. The inclusion criterion was the existence of a complete identification record (age, sex, cause of death) and the good condition of the skull; the exclusion criteria were an inability to determine the precise points of reference or any condition affecting growth and development identifiable in the records or specimens. For this study, we used four bilateral reference points in relation to a central point of the skull base to determine the measurements. The central point was the pharyngeal tubercle (PT) and the 4 lateral points on each side were: foramen ovale, most anterior outline (FO); foramen spinosum, medial outline (FS); carotid canal, most medial outline (CC); and stylomastoid foramen, medial outline (FSM) (Fig. 1). With the objective of validating an indirect method of digital biometry against a direct method of measurement with a caliper, a pilot study was carried out on 10 skulls randomly selected among the groups. Measures were obtained by the two methods, in a blind study, repeating the measurements 3 times on different days. Fig. 1.
Skull base photograph with the points of reference: pharyngeal tubercle (PT), foramen ovale (FO), foramen spinosum (FS), carotid canal (CC) and stylomastoid foramen (FSM). The lines indicate the distances measured.

Initially the observer underwent training and calibration for better effectiveness and homogeneity of the data obtained. The distances between the selected points and the pharyngeal tubercle were measured with a digital caliper (Mitutoyo, Japan) with a precision of 0.01 mm. The methodology of digital biometry was used, based on obtaining digital pictures and processing them in software (Moreira et al.). Each skull was placed in a support, always in the same position, leveled to a horizontal plane. A scale of 10 mm was put over the specimen, allowing the software to be calibrated. The digital pictures were taken with a Casio Exilim EX-Z60 camera (6.0 megapixels). The digital images were analyzed with the software ImageJ, using the described points of reference. As the measurements were made on skulls of different sizes, from fetuses to adults, it was necessary to transform the differences into percentages, adapting the evaluation to the proportionality of the different sizes found among the groups and skulls analyzed. The percentage was obtained as an asymmetry index according to the following formula (Rossi et al.):

    Asymmetry index = ((Right side - Left side) / Right side) x 100

The comparison between the direct and indirect methods was analyzed by the General Linear Model test with repeated measures, with values of p<0.05 considered significant. For the analysis of asymmetry, the Wilcoxon test was used for paired measures, the Mann-Whitney test for comparison between sexes, and analysis of variance (ANOVA) with Tukey correction for comparisons among age groups.
The tests were performed with the SPSS 15 program.

RESULTS

No statistically significant differences were found between the two methods (at the p<0.05 level). The standard deviation of the indirect digital biometry method was low, indicating that it can be applied to obtain symmetry data. The results are presented in summary form in Table I; the percentages represent the asymmetry index. All skulls expressed asymmetry. The smallest index (2.6%) and the largest index (6.6%) were both found in the fetus group (stylomastoid foramen and carotid canal, respectively). The graphic representation is shown in Fig. 2.

Table I. Mean of the asymmetry indexes of the groups at each point of reference.

Groups                              n    FO    FS    CC    FSM   Mean/group
A - Fetuses                         15   5.5%  4.8%  6.6%  2.6%  4.9%
B - Newborn to one year old         32   5.2%  3.8%  5.7%  2.8%  4.4%
C - 13 months to 9 years old        19   3.0%  4.6%  4.2%  3.8%  3.9%
D1 - 18 to 44 years old, female     30   3.5%  3.6%  4.4%  2.9%  3.6%
D2 - 18 to 44 years old, male       30   4.1%  4.6%  5.8%  3.0%  4.4%
E1 - 45 to 100 years old, female    25   3.1%  2.9%  3.9%  2.9%  3.2%
E2 - 45 to 100 years old, male      25   4.6%  5.2%  5.6%  4.8%  5.1%
Mean/reference point                176  4.2%  4.2%  5.2%  3.3%

Fig. 2. Box plot of the asymmetry indexes. The measurements CC, FSM, FS and FO for all groups are indicated on the horizontal axis; asymmetry indexes, in percentages, are indicated on the vertical axis at left. The circles represent outliers. The medians are represented by the white lines inside the boxes, and minimum and maximum values by the ends of the vertical lines.

The comparison of the asymmetry index among the age groups revealed that the difference is not significant for any of the points (ANOVA with Tukey correction, p<0.05).
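As an illustration of the workflow described in Material and Method, the sketch below converts pixel distances to millimetres via the 10 mm reference scale and then applies the asymmetry index formula. This is a minimal sketch, not the study's actual pipeline (ImageJ was used there), and the point coordinates and scale length in pixels are hypothetical illustration values.

```python
import math

# Minimal sketch of the digital biometry workflow; the study itself used
# ImageJ. All coordinates below are hypothetical illustration values.

def mm_per_pixel(scale_px: float, scale_mm: float = 10.0) -> float:
    """Calibration factor from the 10 mm scale photographed with the skull."""
    return scale_mm / scale_px

def distance_mm(p1, p2, factor: float) -> float:
    """Euclidean distance between two image points, converted to mm."""
    return math.dist(p1, p2) * factor

def asymmetry_index(right: float, left: float) -> float:
    """Asymmetry index (%) = (right - left) / right x 100 (Rossi et al.)."""
    return (right - left) / right * 100.0

factor = mm_per_pixel(scale_px=200.0)   # the 10 mm scale spans 200 px (assumed)
pt = (0.0, 0.0)                         # pharyngeal tubercle (central point)
fo_r = (510.0, 300.0)                   # hypothetical right foramen ovale outline
fo_l = (-492.0, 300.0)                  # hypothetical left foramen ovale outline

right = distance_mm(pt, fo_r, factor)
left = distance_mm(pt, fo_l, factor)
print(f"PT-FO right {right:.2f} mm, left {left:.2f} mm, "
      f"index {asymmetry_index(right, left):.1f}%")
```

With these sample coordinates the index comes out near the lower end of the 2.6-6.6% range reported in Table I.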
DISCUSSION

Perfect bilateral symmetry in the body is basically a theoretical concept that rarely exists in living organisms (Bishara et al., 1994). Knowledge of the quantitative normal cranial asymmetry in a population without pathology or functional disturbance is necessary to avoid malpractice. Professionals who perform craniofacial surgery, orthodontic treatment and clinical diagnosis of deformities must consider and interpret cranial asymmetries. The use of bilateral points in relation to a central anatomical point is a method for evaluating asymmetry, and the use of the channels where nerves and blood vessels emerge made an evaluation possible in skulls of several ages. In this study, asymmetry of the skull base was evaluated by digital biometry because of its advantages over other methods, such as the direct caliper method and costly imaging methods (Moreira et al.). The direct method is a reliable technique used in anatomical and anthropological studies, but it has some practical disadvantages: some reference points are reached with difficulty by the caliper stems, the fragile specimens must be handled, and obtaining the data is very time-consuming. The advantages of the digital method have been pointed out elsewhere (Moreira et al.), among them: precision and reproducibility in assessing the points to be measured, the possibility of making measurements with good accuracy, less manipulation of the specimens, reducing the risk of damage, practical storage of the pictures and the protocol of each sample in digital media, and the possibility of measuring surfaces with very irregular perimeters. The validation was confirmed by statistical analysis, since there were no significant differences between the direct method, considered the gold standard, and the indirect method. The results of this study showed that all the skulls of the sample present some degree of asymmetry.
In agreement with the inclusion criteria, this sample was considered as belonging to a normal population, so it could be taken as a normal standard for asymmetry. The use of an asymmetry index was necessary because the skulls were of several ages and different dimensions. The asymmetry indexes found in the skulls of all age groups showed no significant differences. When the measures in millimeters are compared, no significant differences were detected between fetuses (A) and newborns (B), nor between the adult groups (D and E), meaning that close age groups have similar metric measures. In contrast, the measures in millimeters were significantly different when fetuses were compared with children and adult skulls. Thus the use of an index that expresses asymmetry as a percentage, a quantitative difference between the sides, is necessary. The asymmetry of position of the vessel and nerve foramina (carotid canal, foramen spinosum, foramen ovale and stylomastoid foramen), verified in the whole sample and considered normal, corresponds to a mean index of 4%, the smallest being 2.6% and the largest 6.6%. The point where "normal" asymmetry becomes "abnormal" cannot be easily defined. A number of explanations have been given for the causes of asymmetry, including genetic problems and environmental factors producing differences between the right and left sides (Bishara et al.). According to some authors, asymmetry would be considered present when the averages of the differences between the sides are statistically different from zero (Shah & Joshi; Lundström, 1991), or when statistical analysis by Student's t test for paired samples reveals differences between the right and left sides (Letzer & Kronman, 1967; Vig & Hewitt, 1975; Melnik, 1992). The position and shape of 27 paired foramina in the skull were considered asymmetric when the difference was larger than 0.5 mm (Berge & Bergman, 2001).
In the literature there are neither patterns nor defined values of asymmetry for the normal skull. Rossi et al. considered asymmetry to be any difference between the sides and highlighted that the greater the asymmetry, the greater the likelihood of a pathological condition. It was shown that asymmetry of the facial bones exists in fetuses and newborns with the same index as in adults (Rossi et al.). This fact demonstrates that the asymmetry considered normal is not linked to the action of the chewing muscles. However, the central area of the skull base is subject to mechanical conditions related to head control, such as the vertebral column and cervical muscles. The central area of the skull base is essentially stable, with less variability, while the lateral areas undergo larger changes (Moss & Greenberg; Vinkka & Koski, 1975). This assumption was confirmed in a study that pointed out an invariable area in the skull base limited by the foramen spinosum, stylomastoid foramen and foramen magnum (Sejrsen et al.).

In conclusion, the degree of skull base asymmetry considered normal in the studied sample was around 4%. This information may be used by professionals to guide diagnosis and avoid unnecessary procedures.

RUSSO, P. P. & SMITH, R. L. Asimetría de la base de cráneo durante el crecimiento. Int. J. Morphol., 29(3):1028-1032, 2011.

RESUMEN: Knowledge of cranial asymmetries obtained from dry specimens is very useful for determining parameters relevant to medical and dental practice. The skull base was studied with the objective of validating the method of indirect measurement with digital photographs and evaluating the degree of asymmetry at different ages. A total of 176 identified human skulls were analyzed, grouped by age into fetuses, newborns, children and adults.
Measurements were taken from a central point, the pharyngeal tubercle, and four lateral points: foramen ovale, foramen spinosum, carotid canal and stylomastoid foramen, using digital biometry, after a concurrent-criterion validation against direct caliper measurements. The results are presented as asymmetry indexes and the data are expressed as percentages. The digital method shows concurrent validity in relation to the direct caliper method. The skulls in all age groups presented asymmetries. The smallest asymmetry index was 2.6% and the largest 6.6%. The literature reports no parameters defining asymmetry values in normal skulls. The asymmetry of the foramina relative to the midline was verified in the whole sample and was considered normal, corresponding to an average asymmetry index of 4%. This study also observed that in most of the measures the right side predominated over the left side. KEY WORDS: Anatomy; Asymmetry; Growth; Skull.

REFERENCES

Berge, J. K. & Bergman, R. A. Variations in size and in symmetry of foramina of the human skull. Clin. Anat., 14(6):406-13, 2001.
Bishara, S. E.; Burkey, P. S. & Kharouf, J. G. Dental and facial asymmetries: a review. Angle Orthod., 64(2):89-98, 1994.
Brodie, A. G. The behavior of the cranial base and its components as revealed by serial cephalometric roentgenograms. Angle Orthod., 25:148-60, 1955.
Chebib, F. S. & Chamma, A. M. Indices of craniofacial asymmetry. Angle Orthod., 51(3):214-26, 1981.
Enlow, D. H. & Hans, M. G. Essentials of Facial Growth. Philadelphia, W. B. Saunders, 1996.
Letzer, G. M. & Kronman, J. H. A posteroanterior cephalometric evaluation of craniofacial asymmetry. Angle Orthod., 37(3):205-11, 1967.
Lundström, A. Some asymmetries of the dental arches, jaws, and skull and their etiological significance. Am. J. Orthod., 47:81-106, 1991.
Melnik, A.
K. A cephalometric study of mandibular asymmetry in a longitudinally followed sample of growing children. Am. J. Orthod. Dentofacial Orthop., 101(4):355-66, 1992.
Moreira, R. S.; Sgrott, E. A.; Seiji, F. & Smith, R. L. Biometry of hard palate on digital photographs: a methodology for quantitative measurements. Int. J. Morphol., 24(1):19-23, 2006.
Moss, M. L. & Greenberg, S. N. Functional cranial analysis of the human maxillary bone: I, basal bone. Angle Orthod., 37(3):151-64, 1967.
Rossi, M.; Ribeiro, E. & Smith, R. Craniofacial asymmetry in development: an anatomical study. Angle Orthod., 73(4):381-5, 2003.
Sejrsen, B.; Jakobsen, J.; Skovgaard, L. T. & Kjaer, I. Growth in the external cranial base evaluated on human dry skulls, using nerve canal openings as references. Acta Odontol. Scand., 55(6):356-64, 1997.
Shah, S. M. & Joshi, M. R. An assessment of asymmetry in the normal craniofacial complex. Angle Orthod., 48(2):141-8, 1978.
Vig, P. S. & Hewitt, A. B. Asymmetry of the human facial skeleton. Angle Orthod., 45(2):125-9, 1975.
Vinkka, H. & Koski, K. Variability of the craniofacial skeleton. II. Comparison between two age groups. Am. J. Orthod., 67(1):34-43, 1975.
Woo, T. L. On the asymmetry of the human skull. Biometrika, 22:324-52, 1931.

Correspondence to: Ricardo Luiz Smith, Ph.D., M.D., Department of Morphology and Genetics, Universidade Federal de São Paulo, Rua Botucatu 740, São Paulo, CEP 04023-900, SP, Brazil. Email: rlsmith@unifesp.br

Received: 10-02-2011. Accepted: 26-06-2011.
RECORDS: REACHING RECORDING DATA TECHNOLOGIES

G. W. L. Gresik *, S. Siebe, R. Drewello
University of Bamberg, Institute of Archaeology, Heritage Science and Art History
(gerhard.gresik, soeren.siebe, rainer.drewello)@uni-bamberg.de

CIPA2013-234: Other appropriate recording application, SR7

KEY WORDS: digital photography, UV-fluorescence photography, IR-reflectography, IR-thermography, 3D scanning, light stripe topography scanner, shearography, crane

ABSTRACT: The goal of RECORDS (Reaching Recording Data Technologies) is the digital capturing of buildings and cultural heritage objects in hard-to-reach areas and the combination of the resulting data. This is achieved by using a modified crane from the film industry, which is able to carry different measuring systems.
Low-vibration measurement is to be guaranteed by a gyroscopically controlled device that has been developed for the project. The data were acquired using digital photography, UV-fluorescence photography, infrared reflectography, infrared thermography and shearography. A terrestrial 3D laser scanner and a light stripe topography scanner have also been used. The combination of the recorded data should ensure a complementary analysis of monuments and buildings.

* Corresponding author

1. INTENTION

1.1 Fundamental ideas

RECORDS (Reaching Recording Data Technologies) was conceived in the context of applied scientific research at the University of Bamberg. Documentation and monitoring are standard requirements for reliable work in the field of protection and renovation, as well as in cultural heritage research. Exact measurement and imaging of building components of every kind is a central topic here. In practice, however, problems often arise: the measurement itself is not the primary problem; external factors complicate the work far more. Great heights and poor accessibility therefore hamper the use of these techniques. The working heights most frequently requested in practice are between 15 and 20 meters. Furthermore, there are demands that the work should not compromise the use of the building and should be quick, reliable, affordable, non-destructive and contactless.

1.2 Basics and project development

The experience gained in earlier research projects, whose goals were primarily the photographic documentation of mural painting and stained glass (V1TRA) and the usage of a terrestrial laser scanner (RRS), led to the research project RECORDS, which was started in 2012. The predecessor projects prompted the further development and optimization of a modular crane system with a vibration-stabilizing platform, which could carry different valuable measuring instruments in addition to the photography equipment.
In addition to a conventional remotely controlled camera for high-resolution image capture, an ultraviolet camera and infrared cameras for multispectral imaging will be used with this crane. For 3D scanning, a terrestrial laser scanner and a light stripe topography scanner are used. Within this project it will also be tested to what extent shearography (a laser speckle technique) can be used sensibly.

1.3 Realisation

RECORDS is funded by the Bayerische Forschungsstiftung (BFS). Four further partners are involved under the direction of the University of Bamberg. These project participants are small and medium-sized businesses, which carry out the developments and optimizations in their respective fields.

2. SYSTEM TECHNOLOGY

2.1 Crane

The linchpin is the further development of a film crane (GF16) made by the Munich company Grip Factory Munich GmbH. The crane is made of modular parts and can carry the devices to a height of up to 16 meters (pic. 1). Crucial for the use of this crane are its high stability despite high flexibility, as well as its ruggedness, good transportability and easy, quick assembly. It works entirely mechanically and requires only a team of two or three people on site. An additional highlight is the optional use as a passenger crane up to a working height of 10 meters, which constitutes a further enrichment within the scope of documentation and study of cultural heritage. To use the highly vibration-sensitive devices on the crane, it was necessary to develop a head stabilized by a gyroscope (pic. 3), equipped with an appropriate platform so that the different devices can be mounted and steered from the ground.

International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Volume XL-5/W2, 2013. XXIV International CIPA Symposium, 2-6 September 2013, Strasbourg, France. This contribution has been peer-reviewed; the peer review was conducted on the basis of the abstract.

Picture 1. Working use of the crane. © beckett&beckett

2.2 Visual research methods

An essential part of the documentation of cultural heritage is accomplished by visual research methods; without these imaging processes, effective and profound analysis is no longer viable. Different technologies are applied. The basis within the RECORDS project is high-resolution digital photography. Besides the visible range of the electromagnetic spectrum (400-700 nm), investigations in the infrared (780-1100 nm) and ultraviolet (UV-A: 320-400 nm, UV-B: 280-320 nm) ranges are conducted here too.

2.2.1 Photography: In the course of the research project, the company beckett&beckett GbR is testing current digital chip technology. Here a single-shot 60.1-megapixel sensor and a multi-shot 50-megapixel sensor are tested. By combining shots of the same image detail through subsequent pixel alignment, the finest nuances are revealed. Standardizing the shots produced repeatable and therefore comparable conditions, which are indispensable for monitoring.

2.2.2 UV-fluorescence: Using UV-fluorescence photography on the crane makes it possible to bring the UV light closer to hard-to-reach objects, enabling either a more intense fluorescence or a shorter exposure time for light-sensitive objects. In the course of the project, a newly developed UV-A filter on a Broncolor flash unit is examined by beckett&beckett GbR and compared with the "UV Flash it!" system of the company Kohlrusch.

2.2.3 IR-reflectography: IR-reflectography, which uses the diffuse reflection of layers in the wavelength range from 780-2000 nm, is divided into several sections. beckett&beckett GbR also tests the high-resolution digital camera Canon 5D MKII without its integrated IR-blocking filter, whereby IR radiation with a wavelength of up to 1100 nm can be measured.
To get an immediate comparison of this simple and affordable IR-system to a high-end spectral camera, a device for mobile usage is developed by the company IRCAM GmbH (Erlangen), which covers the spectral range from 900 nm to 1700 nm. In addition to it the long-wave spectral range beyond 1700 nm is measured by using another camera by the company IRCAM GmbH. Thereby it is examined up to which spectral range the IR-reflectography yields suitable results. The adjoining spectral range with a wavelength from 3000 to 5000 nm and from 8000 to 14000 nm is used by the IR- thermography. This procedure allows an abundance of different methods of measurement, which are considerable for clarification of questions within architecture and cultural heritage conservation. Therefore the company IRCAM GmbH develops a system to apply the procedure of passive thermography (detecting thermal bridging, visualizing the structure of buildings, measuring temperature diversification) together with pulse-thermography and dew point determination. 2.3 3D-Scanning 2.3.1 Terrestrial laser scanner: To the super ordinated metrological orientation and spatially geometric registration of the respective objects a terrestrial 3D laser scanner (FOCUS 3D, from FARO) is used in this crane system(pic.2). This is a phase comparison scanner. It is lightweight, quick and even overhead deployable. The device has a battery runtime of 6 hours, is equipped with an SD card slot and Wi-Fi. Therefore no cabling to the crane top is necessary. Due to its high resolution the scanner can be used also in qualitative close range shots. The results of the scans allow accurate spatial placement of all the results of the visual imaging and is therefore an essential factor for merging and blending all the data. Picture 2. 
Faro on the Crane ©by D.Lang

2.3.2 Light fringe projection: A light fringe projection method by the company Steinbichler Optotechnik (Neubeuern), adapted to the crane, completes the geometric part of the inventory. Because of its compact construction, the instrument COMET L3D (pic. 3) is very suitable on this crane system for long-term monitoring and for recording details in the micrometer range. Unlike previous versions of the COMET series, the L3D system uses a blue LED, so an external light source is no longer necessary. With its LED pulse mode delivering a high light output, the system guarantees excellent measurement results even in difficult ambient conditions; this finally makes it usable on a crane system. The system has a camera resolution of 2448 x 2050 pixels and is equipped with the measurement fields MV 800 (750 mm x 630 mm x 500 mm), MV 250 (260 mm x 215 mm x 140 mm) and MV 75 (74 mm x 62 mm x 40 mm). This achieves a 3D point distance of 300 microns (MV 800), 100 microns (MV 250) and 30 microns (MV 75). The relatively large measurement field MV 800 requires an average wall distance of about 1.20 m, which allows very fast and convenient recording of objects by crane.

International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Volume XL-5/W2, 2013. XXIV International CIPA Symposium, 2 - 6 September 2013, Strasbourg, France. This contribution has been peer-reviewed. The peer-review was conducted on the basis of the abstract.

Picture 3. L3D on the gyroscope ©by G.Gresik
Picture 4. ISIS on the gyroscope ©by G.Gresik

2.3.3 Shearography: Besides the systems described above for monitoring the surface, ISIS shearography by the company Steinbichler Optotechnik (pic. 4) is used optionally within the research project RECORDS, to examine to what extent the system is effective in practice for diagnosing cavities and hidden cracks in historical walls. Deformations of only a few micrometers can be recognized.
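The 3D point distances quoted for the COMET L3D in section 2.3.2 are consistent with a simple estimate: the width of the measurement field divided by the horizontal pixel count of the 2448 x 2050 camera. The following quick plausibility check is a sketch under that assumption; the helper function is hypothetical, not part of the system.

```python
# Horizontal resolution of the COMET L3D camera (section 2.3.2)
PIXELS_X = 2448

def point_spacing_um(field_width_mm: float) -> float:
    """Approximate lateral 3D point spacing in microns: field width / pixel count."""
    return field_width_mm / PIXELS_X * 1000.0

# Field widths from the text: MV 800 -> 750 mm, MV 250 -> 260 mm, MV 75 -> 74 mm.
# The estimates land near the quoted 300, 100 and 30 microns respectively.
```

point_spacing_um(750) is about 306 microns, point_spacing_um(260) about 106 microns, and point_spacing_um(74) about 30 microns — close to the specifications quoted above.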
The technique used here is laser speckle shearing interferometry. The object is illuminated by laser light and viewed by a CCD camera with a shearing optic; this special optic projects the image twice onto the camera chip. If the object deforms under load, the reflected laser light changes. By comparing two pictures, one of the unstressed object and one of the stressed object, the modification can be observed. If its practicality can be demonstrated, shearography will be integrated into the crane system.

2.4 Research and first results

The project was designed to run for two years. During the first project period the prototypes were largely developed and initial tests performed; work is now commencing on selected objects. The objective of this work is to rigorously test the practical interaction of the individual components in everyday routine. The individual data obtained are combined in such a way that subsequent operations can take place within a single system, meaning that the resulting images are correctly positioned relative to the measurement images.

2.4.1 Christopher depictions, Augsburg: In the cathedral of Augsburg, the entire system is tested in interaction. Several Christopher depictions are located one above the other on the west wall of the south transept. This object is particularly suited to using the full range of the research project RECORDS, as it calls for a multifunctional analysis: a detailed record of the individual findings over the entire ceiling height. With this object the aims of the project can be tested and evaluated in their full scope. All subsequent results are oriented to a 3D point cloud that covered the entire west transept of the building. Thanks to the dense point sampling of the scans, all photographs could be integrated into the point cloud using natural control points.
Thus, all data are combinable and may, for example, be overlaid. The following illustrations clearly show to what extent the spectrum of pictorial detection is expanded by the different imaging techniques (pic. 5-10). This is an area of the west wall of the transept above the gothic arch, which had already been investigated by restorers between 2003 and 2009*. In the course of the current works, however, it is to be shown uniformly combined with the church interior, so that the entire wall surface can be further processed within one system.

Picture 5. Christopher depiction - Orthoimage from FARO Focus 3D ©by G.Gresik
Picture 6. Christopher depiction - high-resolution digital photograph ©by beckett&beckett, rectified ©by G.Gresik
Picture 7. Christopher depiction - grazing light digital photograph ©by beckett&beckett, rectified ©by G.Gresik
Picture 8. Christopher depiction - UV-fluorescence photograph ©by beckett&beckett, rectified ©by G.Gresik
Picture 9. Christopher depiction - IR-reflectography digital photograph ©by beckett&beckett, rectified ©by G.Gresik
Picture 10. Christopher depiction - Image of L3D-scan ©by G.Gresik

* cf. A.Porst, R.Winkler, in RESTAURO 1/2012, S.16-24

2.4.2 Cathedral of Bamberg: One of the test objects is the cathedral of Bamberg. Here the terrestrial FARO 3D scanner was used for recording the 3D data. In the context of the modular crane system, various image recordings by different systems were also tested. The use of the flexible crane system proved very beneficial. On the one hand, it was possible to take orthogonal images of the wall in the interior of the cathedral at different heights up to the vaults.
(pic. 11) On the other hand, the crane proved so flexible that, despite its considerable size, it was easy to maneuver. No elaborate preparatory work was necessary: a simple underlayment of planks is quite sufficient to protect the sensitive floor, and using the appropriate rail system allows this preparatory work to be waived altogether. It should be added that the smooth-running rail system further eases the handling of the crane. In the case of 3D laser scanning at the exterior of the cathedral, areas could be scanned that could never be reached from the ground. It was easily possible to pivot the crane above the roof of a chapel and to scan areas behind the roof, so that these parts of the cathedral could be documented for the first time at all.

Picture 11. GF16 in the cathedral of Bamberg ©by S.Siebe

2.4.3 Angelic Salutation, Nuremberg: At the church of St. Lorenz in Nuremberg, the so-called Angelic Salutation by the famous sculptor Veit Stoss was examined in the context of the project RECORDS. Selected parts of the masterpiece, which had been let down for cleaning work, were scanned with the light stripe topography scanner COMET L3D mounted on the crane GF 16. In this way, reference surfaces for future monitoring were created. Using the crane system, this monitoring should be possible in the future even at great heights, because the object normally hangs up to 6 meters above the floor. In addition, digital photographs and UV-fluorescence photographs were taken, which document past restorations.
2.5 Conclusion

The aim is ultimately to develop a multifunctional modular documentation system with a working height of up to 16 meters, meeting specific current and future needs in the field of cultural heritage protection and construction. The first applications showed encouraging results. To begin with, the crane is very simple to handle: it can be assembled and disassembled in 1 to 2 hours each way by two people. The crane system is proving very stable, so that in interiors, with a few restrictions, surprisingly strong results have already been achieved. For outdoor use, wind forces still represent a problem to be solved. Another terrestrial scanner, a Leica HDS ScanStation, was also tested in the course of the work in Augsburg. Despite its relatively high weight of 18 kg, more than satisfactory results were achieved with this device as well, since it interrupts its measurement process on shocks, and thus on vibrations of the crane arm, and continues only when the unit is again within the tolerance range (pic. 12). One advantage of adapting commercially available appliances is the ability to keep the RECORDS system up to date with the rapidly advancing state of the art.

Picture 12. LEICA ScanStation on GF16 ©by G.Gresik

3. ACKNOWLEDGEMENTS

We thank the Bayerische Forschungsstiftung, which enabled this project through its support, and we thank the project partners, Steinbichler Optotechnik GmbH, Grip Factory Munich GmbH, Beckett&Beckett Photography and IRCAM GmbH, for the good collaboration. We also thank the dioceses of Augsburg and Bamberg and the parish of St. Lawrence for their willing support of the work and for making the respective objects available.
We also thank the Bamberg cathedral maintenance department, the office Conn and Giersch, Nuremberg, Dipl.-Ing. MA Reinhold Winkler, Munich, and the Munich Diplom-Restorer Angelika Porst for the good cooperation.

Two theses on Polaroid culture

The polaroid image as photo-object
Buse, P
http://dx.doi.org/10.1177/1470412910372754

Title: The polaroid image as photo-object
Authors: Buse, P
Type: Article
URL: This version is available at: http://usir.salford.ac.uk/id/eprint/18772/
Published Date: 2010
AUTHOR BIOGRAPHY

Peter Buse is Senior Lecturer in English and a member of the Centre for Cultural, Communication and Media Studies at the University of Salford, UK. He has published articles on photography in History of Photography, Photographies, new formations, Textual Practice, and Journal of Dramatic Theory and Criticism. His most recent books, both co-authored, are Benjamin's Arcades: An unguided tour; and The Cinema of Alex de la Iglesia.

CONTACT

School of English, Sociology, Politics and Contemporary History
University of Salford
Salford, M5 4WT
p.buse1@salford.ac.uk
tel: 0161 295 4764
fax: 0161 295 5077

TITLE

'The Polaroid image as photo-object'

ABSTRACT

This article is part of a larger project on the cultural history of Polaroid photography and draws on research done at the Polaroid Corporate archive at Harvard and at the Polaroid company itself. It identifies two cultural practices engendered by Polaroid photography, which, at the point of its extinction, has briefly flared into visibility again. It argues that these practices are mistaken as novel but are in fact rediscoveries of practices that stretch back as many as five decades. The first section identifies Polaroid image-making as a photographic equivalent of what Tom Gunning calls the 'cinema of attractions'. That is, the emphasis in its use is on the display of photographic technologies rather than the resultant image. Equally, the common practice, in both fine art and vernacular circles, of making composite pictures with Polaroid prints, draws attention from image content and redirects it to the photo as object.

Keywords: Polaroid; Photo-objects; Cinema of attractions; Technological unconscious; Photo-collage

Introduction: photograph > image

A photograph is not exhausted by its image-content, but is also something akin to a body. This is the case that is being made in a growing branch of photography studies that takes as its subject what it calls 'photo-objects'.
For instance, near the end of his extensive survey of photographic memorial objects, Geoffrey Batchen writes of the 'need to develop a way of talking about the photograph that can attend to its various physical attributes, to its materiality as a medium of representation' (Batchen, 2004: 94). By this he means taking into account the way a photograph has been worked upon, with paint or writing; the modes of organisation it undergoes alongside other photographs, in albums or collages; its juxtaposition with other materials, such as human hair; and the heterogeneous forms of framing it submits to. A photograph is an image, so goes this school of thought, but it is also an object, it has a physical being in space and time. As Elizabeth Edwards puts it, 'the photograph has always existed, not merely as an image but in relation to the human body, tactile in experienced time, [an] object functioning within everyday practice' (Edwards, 1999: 228). 'Not merely… an image', Edwards writes, and it is a phrase that appears again in her introduction, with Janice Hart, to Photographs Objects Histories: 'it is not merely the image qua image that is the site of meaning, but that its material and presentational forms and the uses to which they are put are central to the function of a photograph as a socially salient object' (Edwards and Hart, 2004: 2). They might have written more neutrally that a photograph is 'more than an image', but that 'not merely' signals a polemical intent: a call to arms to take note of that which in the photograph exceeds the photographic image. They call this the 'materiality' of the photograph and they identify three key forms that it takes: 'the plasticity of the image itself, its chemistry, the paper it is printed on'; its 'presentational forms' (such as albums, mounts, and frames); and 'the physical traces of usage and time' (3).
These new photo-materialists prefer to examine photos that have been worked upon in one way or other after they have been made, but there is also a type of photograph which is, already at the point of taking, a photo-object of the sort that interests them – the Polaroid or 'instant' print. In Polaroid photography there is no gap between the exposure to light that produces an image and the process of making that results in the photographic object. What is more, the photographic image inside the Polaroid print's familiar white frame is surely no longer a Polaroid if separated from that frame, which itself is supplementary to the image. And if it is scanned to reduce its three-dimensionality to electronic code, invariably the frame is scanned as well, in a tacit acknowledgment that the instant photo is irreducible to its image alone (see Figure 1). For these reasons, Nat Trotman can make a strong claim for the unique materiality of the Polaroid image:

The images contain a density unlike any other snapshot medium. They have…truly a physical depth and presence….The pictures have interiors, viscous insides of caustic gels that make up the image itself. Users are warned not to cut into the objects without protective gloves – these photographs can be wounded, violated. Their frame protects and preserves them like clothing around a vulnerable body (Trotman, 2002: 10).

It is not by chance that the Polaroid print captures critical attention at the very point of its imminent obsolescence, nor that Trotman finds in it the sort of substantiality lacking in the digital photography that is displacing it. Indeed, his anthropomorphizing of the photo-object is quite typical of this general movement in photography studies, which seeks to render corporeal and singular the photographic in an epoch when its material supports are increasingly screens rather than photographic paper and its singularity doubtful when it can be transmitted as code to any computer or network.
This anthropomorphism is most evident when Edwards and Hart call for an 'ongoing investigation into the lives that photographs lead after their initial point of inception', dubbing this activity a 'social biography' of objects (Edwards and Hart, 2004: 9-10). They are adapting here a phrase of Igor Kopytoff's, who, in 'The Cultural Biography of Things', demonstrated the tension existing between commoditization and singularization (or sacralization) in all systems of exchange. While they adopt the phrase, Edwards and Hart do not take on Kopytoff's analytic vocabulary. Indeed, it could be argued that the 'photo-materialist' project leans heavily towards 'singularization' in its efforts to rescue photographs from their commoditization as images, and is therefore part of what Kopytoff calls the 'yearning for singularization in complex societies' (Kopytoff, 1986: 80). The photo-materialists' preference for older, usually 19th century photographs is a typical strategy of singularization, which is often achieved 'by reference to the passage of time' (Kopytoff, 1986: 80). Edwards and Hart admit as much when they state that 'the premise of [their] volume precisely reinvests photographs of all sorts with their own "aura" of thingness' (Edwards and Hart, 2004: 9). Their project crucially reminds us to consider photographs as objects as well as images, but it is difficult to follow Edwards and Hart down this particular avenue, which can only be read as a way of warding off the fact that with digitalization so-called photographic 'lives' have become ultimately untraceable. Nevertheless, the horizon opened up by Trotman is worth exploring in greater depth, for the Polaroid image is a photo-object of considerable complexity and interest. What sort of photo-object is a Polaroid print? Or, more importantly, what material social practices does it give rise to, what desiring networks do they participate in, and what unconscious investments animate them?
This article examines two such practices. The first, Trotman has already begun to analyse when he writes of the machine as a party camera: 'Taking a Polaroid is an event unto itself, contained within the party atmosphere….the picture does not commemorate the past party, but participates in the party as it occurs' (Trotman, 2002: 10). In Polaroid photography the material activity of making the image, the fact that it develops on the spot rather than later in a darkroom, is, as Trotman says, an event in itself. So important has the 'event' of instant photography been in its history that we can speak of it as a 'photography of attractions', to borrow and modify a term of Tom Gunning's. Gunning and others have argued that in early cinema 'attractions' took priority over narrative in seducing the spectator, with the filmic apparatus itself one of the main attractions, and it will be argued here that a similar principle applies for the user of Polaroid photography, for whom the spectacle of the technology is just as important as any image which results from it. The second practice is what Edwards and Hart call a 'presentational form' – the tendency, found in both fine art and vernacular uses of Polaroid photography, to group large numbers of instant prints together in composite figures, or what will be called here 'Polaroid mosaics', to take into account the tile-like properties of the prints. Just as in the first practice the spectacle of producing the image equals or eclipses in importance the resultant image, so in the Polaroid mosaic, the print as combinatory object threatens to displace the print as individual image. How to explain this insistent surplus of object over image in instant photography? If the photo as object is, as Batchen, Edwards and Hart argue, always supplementing the image, perhaps it is because there is something missing in the image. As W.J.T.
Mitchell has put it, images are often striking for 'their silence, their reticence, their wildness and nonsensical obduracy' and are therefore 'wanting', in both senses of the word (Mitchell, 2005: 10). Polaroid as photo-object then, but also as photo objet a.

Photography of attractions

The Polaroid Corporation was formed in 1937 to manufacture and sell polarizing filters; its founder Edwin Land invented instant, or 'one-step' photography in the period after World War II and the first Polaroid Land camera for the consumer market was sold in 1948. The company filed for bankruptcy protection in 2001, as sales of instant film came under increasing pressure from growth in digital photography. In 2005, after changing hands twice, Polaroid was acquired by the conglomerate Petters Group Worldwide, which primarily used the name to sell LCD TVs and DVD players. Petters announced on Feb 8, 2008, that it was permanently discontinuing production of Polaroid film, with the supplies to last until the end of 2009. In late 2008 Polaroid filed for bankruptcy protection for a second time as Tom Petters, Chairman of the Petters Group, was arrested for financial fraud. In April 2009 the company was purchased by a joint US-Canadian venture specializing in intellectual property rights. There are no plans to relaunch production of instant film. In this Polaroid twilight visibility is so poor that familiar figures are mistaken for the strange and new. Thus Jeremy Kost: joint precipitate of the New York celebrity party circuit and a dying technology. Kost, self-dubbed 'anti-paparazzo', deals in a product – celebrity photographs – for which the demand is high, but the supply is too. As an amateur with no formal photographic training who uses a Polaroid camera to take pictures of the stars, Kost is sufficiently distinctive to have established a profitable niche in the market.
Comparisons that have been made with Warhol (by curator Eric Shiner) are rather hopeful, but Kost's 15 minutes have so far extended to a couple of solo shows, short features in fashion magazines, and a regular column in Elle Accessories. His mission statement is available on his MySpace page as well as his commercial site, Roidrage.com:

Jeremy Kost has developed a unique approach to celebrity portrait photography….He doesn't hound them on the red carpet, nor does he sneak around outside their hotel rooms. He captures these stars in their own relaxed environment, being who they are naturally. He finds beauty in their reality. He looks for truth in natural light even when it is exposing….He does not rely on lighting, make-up or styling, but rather plays with the moment to create magnificence in a hedonistic smile or true exhaustion. (Kost, 2008a)

The promise is an old one, and is of course integral to the fabrication of the star-image, to the process of mystification: the unguarded moments, layers of obstruction peeled away, the stars down to earth, and so on and so forth. Kost has in turn become an astral by-product, a third or fourth order celebrity in his own right, 'known on the New York circuit as "the Polaroid artist"' (Kost, 2008b). This parasitism would be impossible without the Polaroid camera, which has an apparently alchemical function in relation to the stars. The key word here is 'access': 'the un-intimidating camera has earned him access to some of New York's most exclusive gatherings' (Lyon, 2006); it gives him 'the kind of access most photographers can only dream of' (Anon., 2008e: 94). The camera acts, then, as guarantee of safe passage to the inner sanctum of the skittish and suspicious star, protective of her image, fearful of those who would take it unasked.
But here is Kost, bartering successfully with her, not keeping the image inside his camera and to himself (or worse, transmitting it electronically to a scandal sheet), but handing it over instantly, or at least in exchange for a few more that go into his pocket. The narrative edifice supporting this photographic practice is precarious though, because the illusion of privileged 'access' to the mysteries is combined with the incompatible notion of star-friendship. Kost explains that he first used the camera to meet people in bars in an unfamiliar city: '"Since I didn't really know anyone in New York at the time, the camera served as a sort of social catalyst,"…. He found that he made friends wherever he snapped photos' (Lyon 2006). His branding requires therefore that he be on intimate terms with the inaccessible and distant star, with dissonant and contradictory anecdotes the necessary result: 'Mena [Suvari] has become a dear friend of mine over the years. From time to time, she shoots her own Polaroids for my website, which I think is interesting; seeing celebrities doing things on their own terms' (Anon., 2008e: 95). 'Access' means proximity, but stardom is defined by distance and separation, so Kost is in the awkward position – 'seeing celebrities doing things' – of supposedly being close-up but observing as if from afar. Irrespective of such contradictions, Kost has belatedly stumbled upon some basic insights about the operations of Polaroid photography. In a sort of unwitting funeral oration he revisits as if for the first time all those attributes of the camera discovered long ago by its users and by the Polaroid advertising department. On the camera as 'social catalyst', for instance, we have this testimony in 1957 from Robert Doty, who was testing film for Polaroid: 'Another delightful aspect of the camera is its function as an instrument of goodwill [at a vintage car rally].
I soon found that I was giving away more prints than I kept.' (Doty, 1957: 8) John Wolbarst, author of Pictures in a Minute, the first handbook of Polaroid photography, extols the camera's 'ice-breaking' virtues, claiming 'There is no faster, surer way of meeting people than to unlimber a Polaroid Land camera and start shooting….Start flashing away at a party or dance and you'll be overwhelmed by people who were strangers just a few moments ago.' (Wolbarst, 1956: 121) Just as the Polaroid camera's status as 'the ultimate party camera' became enshrined very early in the company's ad copy and in the uses of the camera, so this 'ice-breaking' capacity became standard in the lexicon. A Polaroid brochure from 1971 contains the following encomium from a user:

I'm not kidding. All I do is show up with my Polaroid Land camera and I'm the center of attention. Even strangers pose for me. They watch when I shoot. They hold their breath when the picture's developing. And when I peel it off – wow! It's a great ice-breaker, that camera. Whenever children are around, I feel like the Pied Piper. (Anon., 1971)

The same qualities are attributed to the camera in Peggy Sealfon's The Magic of Instant Photography, a handbook from the 1980s:

Parties are also marvelous times for candid photography, and an instant camera often becomes a helpful ice-breaker. In fact, an instant camera often will motivate people to do unexpected things, just to see the immediate record of their behaviour. (Sealfon, 1983: 100)

In the discourse on the Polaroid as party camera, then, it is not so much that the Polaroid records the party, but that it is the party, the main attraction that gets things going. This logic is extended with a slightly different purpose in mind in a section of Instant Film Photography (1985) entitled 'Breaking the Ice', where Michael Freeman praises the virtues of the camera in travel and street photography.
Freeman claims that 'instant film can actually help to change the situation in which pictures are being taken: few things give such immediate pleasure as the gift of a photograph….It is the gentlest bribe you can offer someone whose cooperation you want' (Freeman, 1985: 118). He is especially keen to emphasize the value of instant photography in the face of cultural barriers. In a section titled – without irony – 'Overcoming Resistance', he warns that 'Photography without permission is, after all, a form of invasion of privacy, and whether this is offensive or not depends on the mood of the people, and on any cultural or religious objections they may have….If you sense wariness or hostility, however, the instant film gambit may save the day' (118). With the same sort of situation in mind, Sealfon notes that the camera can be 'useful in foreign lands' (Sealfon, 1983: 70). It is advice that Jeremy Kost takes instinctively when he 'discovers' Polaroid photography in his dealings with celebrities, who are also the cultural other, separated from us by their own impenetrable observances and rites. The pages on which Freeman's comments are found are illustrated by photographs of 'indigenous' peoples absorbed in a Polaroid image of themselves that the photographer has presumably just handed over. In the iconography of Polaroid photography this scene of narcissistic absorption is the ur-image, absolutely central to Polaroid publicity and advertising from its outset in 1948; and it appears frequently in Kost's photos as well, with the pictured star cupping in her hand another just-taken image of herself (see Figure 2). Kost's activities are misrecognized as a novelty because the outmoded Polaroid technology has been forgotten, but then at the very point of its extinction, briefly flares up again into visibility, starkly different from the instant digital imaging that has displaced it.
That it produces an image immediately and also a hard copy makes it fleetingly seem an innovation that comes after digital photography. In this context it is also worth remembering some of the negative press that accompanied the arrival of Polaroid photography and continued to dog it for years, much to the dismay of such high profile promoters (and Polaroid employees) as Ansel Adams, who regretted in his autobiography that most 'professional and creative photographers dismissed the process as a gimmick' (Adams, 1985: 254). Percy Harris, reviewing the new camera in The Photographic Journal in 1949 gave a typically damning assessment, concluding that 'the whole business seems nothing but a de luxe model of the old seaside "while-you-wait" snapshot camera' (Harris, 1949: 62). But why should the seaside camera be an occasion for scorn, and why need 'gimmick' be a term of dismissal? The OED tells us that a 'gimmick' is 'an article used in a conjuring trick; now usu. a tricky or ingenious device, gadget, idea, etc., esp. one adopted for the purpose of attracting attention or publicity' and also notes that one of its early users claimed the word was formed as an anagram of magic. With this definition in mind, and taking into account the Polaroid's status as party camera and multi-purpose attention-grabber, instant photography can be seen as part of what Tom Gunning and other historians of early cinema have described as a system of 'attractions'. The term 'cinema of attractions' was introduced in the mid-1980s by Gunning and André Gaudreault as a way of distinguishing early cinema (1896-1906) from the classical narrative cinema that later became the dominant mode. Rather than taking storytelling as its main organizing principle, the 'cinema of attractions' emphasized sensation and shocks, with 'display', or what Gaudreault called 'monstration' as its defining characteristic (Gaudreault, 1990; Gunning, 1993).
This cinema therefore holds close affinities with the fairground in the way that it 'directly solicits spectator attention, inciting visual curiosity, and supplying pleasure through an exciting spectacle' (Gunning, 1990a: 58). One of its other main antecedents is the 'magic theatre' of the nineteenth century, in which charismatic magicians often relied on new technologies to generate their spectacular effects (Gunning, 1990b: 96). If we accept Dulac and Gaudreault's hypothesis that 'attraction' can be applied to a wider range of 'cultural series' than film alone (Dulac and Gaudreault, 2007: 228), we might then speak of a 'photography of attractions', with a line of continuity running between the seaside while-you-wait camera, the automated amusement park photo booth (complete with theatrical curtain), the Polaroid camera as 'ice-breaker' and Jeremy Kost's infiltration into the New York celebrity party scene. In her handbook, Peggy Sealfon actually calls the camera an 'attraction' and notes that

Another special advantage of traveling with an instant camera is the way it provokes people's interest. People often gravitate to watch the 'magical' photo machine at work. (Sealfon, 1983: 70)

Again there are striking parallels with early cinema, where 'one of the attractions…was the cinematic apparatus, quite apart from what it showed' (Elsaesser, 1990: 13). In Kost's hands, then, the Polaroid camera has recovered its importance and value precisely as a gimmick, as a mode of attracting attention and publicity. Unlike the illicit snappers who seek out sunbathing stars, it is essential that Kost not be surreptitious, and the explosive whir and click of the Polaroid camera, its Polaroid noise, is a boon in the din in which he seeks to be noticed. Edwin Land himself recognized this gimmick potential in the cameras he invented, cannily demonstrating them live to gathered members of the press from 1947 onwards (see Figure 3).
This practice reached its apotheosis in the early 1970s with the introduction of SX-70 technology, when Land became famous for his unorthodox entertainments at Polaroid Annual Meetings. In purpose-built auditoria, sometimes 'in-the-round' (Anon., 1972a: 8), Land would conduct 'a modern magic-lantern show' (Anon., 1972b: 83) in which 'the otherwise retiring scientist becomes a dashing imperial wizard, unveiling… products with theatrical flourishes' (Kostelanetz, 1974: 54). For the 1972 Annual Meeting, where the SX-70 was launched, Polaroid converted 32,000 square feet of warehouse into a complete theatre space. Foreman Bob Chapman is cited in the Polaroid Newsletter explaining that his team 'had to actually construct a completely new facility, installing such things as additional fans and theater lighting equipment, as well as making stages, demonstration platforms' (Anon., 1972a: 8). That year Land ended up on the covers of both Time and Life demonstrating his new gimmick. Gunning has emphasized the importance of the early filmmaker as a 'showman' in the traditions of magic theatre, one who usually lectured charismatically over the film, and it is to this tradition that Land clearly belongs (Gunning, 1990b: 99). Given the cameras' potential for 'showmanship', it was almost inevitable that when Polaroid built an over-size camera to produce large 20 x 24 inch prints, self-publicist Andy Warhol posed on the stage of the Waldorf Astoria's grand ballroom in February 1978 for a series of instant portraits (MacGill, 1980: 2). In the photography of attractions, the representational value of the image is not entirely negligible, but it has receded in importance, giving way to what might be called its 'demonstration-value', where it is the process and not the product that takes precedence.
It is appropriate that Polaroid image-making should have a theatrical setting in Land's and Warhol's use of it, because in their hands photography functions above all as magic show: it is the spectacular display of the technology's workings which is most important; and attracting attention is the main aim of the operator.

Instantaneity, ice, and the technological unconscious

What is the cultural significance of this 'photography of attractions'? As Raymond Williams warned in 1974, contra McLuhan, and many others have been warning since, a new technology cannot be considered as autonomously generating its own effects, but should rather be considered dialectically in relation to its cultural and social determinants. The meaning of a technology does not arise spontaneously, then, with its date of invention or dissemination (1947-8 with Polaroid), but rather only when the uses to which it is put become apparent. As Williams puts it, 'all technologies have been developed and improved to help with known human practices or with foreseen and desired practices' (Williams, 2003: 132). Indeed, to date a technology by its first appearance on the consumer shelves is simply to accede to the logic of the market in our analysis of technological change. However, what Williams leaves out here is the possibility of a technological unconscious, whereby a new technology links into cultural determinants that are neither consciously 'known' nor 'foreseen'. It is not just that a new technology may lead to unanticipated practices, but that the manifest form of such practices hardly begins to account for what is at stake in them. From this point of view it is tempting to read Edwin Land's invention as participating in a generalized culture of acceleration, where the imperatives of a rapidly expanding consumer society post-WWII dictated 'instant gratification' as the order of the day.
Might not the instantaneousness of Polaroid photography be seen as a leisure-world complement to the 'contraction of time, the disappearance of…territorial space' (Virilio, 1986: 140-1) brought about since the 1950s by a range of mainly military technologies of acceleration? Certainly, Polaroid image-making may contribute to, but certainly cannot be credited with producing, this state of affairs, which is hardly 'foreseen and desired' in the sense that Williams means. It is therefore worth remarking that the ingenious gimmick camera that ensured for fifty years the fortunes of the Polaroid Corporation was preceded by six years of wartime work by the company on various military visual technologies, work carried out for the United States government and armed forces.1

1 During World War II Polaroid produced combat goggles, sighting devices, artificial quinine, and early heat-seeking missile technology. Victor K. McElheny credits Land with this sentence in Vannevar Bush's collectively-written post-war strategy document Science: The Endless Frontier (1945): 'What is required is the rapid invention and evolution of the peacetime analogues of jet-propelled vehicles, bazookas, and the multiplicity of secret, bold developments of the war' (cited in McElheny, 1998: 159).

But what of the more specific practice of Polaroid photography carried out by Kost and many others before him – the point at which the axis of acceleration meets the axis of gregariousness? What underlying cultural field can be deduced as the support of Polaroid's function in 'ice-breaking',2 or what Kost identifies as its 'social catalytic' benefits? It is of course risky to extrapolate in this way, but we could draw three provisional conclusions about the cultural parameters in which it is assumed normal that an offer of an instantly-produced image should prove a path to intimacy: 1.
the assumption of an atomized and disaggregated social world where communal bonds are weak, and certainly not the primary mediator of social relations, i.e., a world of strangers, but where, perhaps even as a correlative of this ubiquitous alterity, 2. from an optimistic or perhaps optimising perspective, every stranger is potentially a friend, whose otherness is far from absolute, and can in fact be quickly overcome through the appropriate technological support, that is to say, 3. the mechanisation of inter-subjectivity is taken for granted, it is self-evident. Alienation and separation – 'ice' – may be taken as absolutely constitutive of the relation between self and other, then, but at the same time that ice is not considered a serious obstacle. Instead, it is an invitation to an entrepreneurial approach to intimacy which accepts as given the need for a technological prosthetic, of which the photography of attractions is one instance.3

2 This idiomatic phrase comes from American English. The first use recorded by the OED is in 1883 by Mark Twain, who employs it ironically in Life on the Mississippi.

Polaroid mosaic

In the photography of attractions, display of the technology's workings is central, and in the second practice to be considered, the Polaroid mosaic, it is again a question of display, this time of the completed photo-object. The Polaroid mosaic is a form of photo-collage and as such is far from unique to instant photography.4 The practice of combining photos with each other and with other materials to form composite images is not only long-established in fine art, but has a strong tradition in vernacular photography as well. In his discussion of late nineteenth- and early twentieth-century photo-objects, Geoffrey Batchen shows how common was 'the impulse to group different photographs together in one object', which sometimes took the orderly structure of a 'grid' or 'portrait assemblage', but also less coherent patterns (Batchen, 2004: 26-8).
Whilst participating in this tradition, Polaroid users have developed some highly distinctive variants on it. One such practice is the covering or coating of the body in prints, which is perhaps best described as a bath of Polaroids. Some examples: an online interview with Jeremy Kost is illustrated with a photo of Kost in a shallow bathtub (clothed) with numerous Polaroid prints spread out on the edge of the tub and on the bathroom floor outside the tub (Belonsky, 2007) (see Figure 4); on the weheartpolaroid.com web-site photographer Dash Snow is shown photographed from above in a bathtub, submerged in Polaroid prints so that only his head, arms and feet are showing (see Figure 5); on the Flickr photo-sharing site a series of images, one of them entitled 'Requiem for a Polaroid', show a naked human body laid out flat, covered in Polaroid prints of what appear to be close shots of that same body. The practice is startling, even strange, but its repetition implies that it somehow goes without saying that Polaroid images should be poured onto a subject.

3 It could be argued that in their 'social catalytic' function instant cameras anticipate camera phones. As Kato, Okabe, Ito and Uemoto note, 'camera users often described the act of taking and viewing photos when gathered with friends as itself a focus of social activity' (Kato, Okabe, Ito, Uemoto, 2005: 305). However, camera phones do not appear to have the same 'ice-breaking' potential as Polaroid cameras: the same study observes that the mobile phone 'reinforces ties between close friends and families rather than communal or weaker and more dispersed social ties' (Ito, 2005: 9).

4 The term is not new. Works by Ray K. Metzker and Robert Heinecken have been described as 'photomosaics' (Warner Marien, 2006: 379) and Sealfon uses the term 'instant mosaic' for composite images by David Joyce (Sealfon, 1983: 159).
Nor is this a new activity: as Wim Wenders' The American Friend (1977) testifies, it was conceived very soon after the invention of the SX-70 Polaroid technology that ejected the image automatically from the camera immediately after exposure (prior to 1972 and the SX-70, Polaroid film worked on a peel-apart basis). In this film, a listless Tom Ripley (Dennis Hopper) spreads himself out on his back on a pool table, SX-70 in hand, and proceeds to douse himself in Polaroid prints so that he too is eventually submerged in images in different stages of development. Just as the image itself has a special mylar coating that allows it to develop in the light, so its producer-consumer (with the Polaroid image the two acts virtually coincide) treats it as a sort of supplementary skin in the case of the Polaroid bath. In popular practice, of course, as Trotman has observed, the tactility of the Polaroid print is absolutely central: its users shake the print, scratch it, write on it, or bend its flexible surface between thumb and forefinger, as Dash Snow does in the photo by Dave Schubert. The theme of immersion in a bath of Polaroid prints simply amplifies this logic of the Polaroid image as a skin-like interface between the body and the world. And on the weheartpolaroid web-site this logic reaches its natural conclusion with the merging of skin and film in an image of a woman's forearm tattooed with the distinctive frame (empty) of the SX-70 print (Anon., 2008b). The fundamental feature of the Polaroid bath is the multiplication of prints, their massing together and overlapping with each other. This same principle is put to work in more orderly fashion in the Polaroid mosaic, in which individual prints become like tiles (Van Lier, 1983: xi) or scales. This type of collage gained prominence in the art-world in the early 1980s through the Polaroid 'joiners' of British painter David Hockney.
Hockney would photograph a scene or subject with an SX-70 camera from a series of close-up positions and then recompose the segmented field by 'joining' the images together: 'When assembled in a grid to form the composite image, everything in the picture – whether foreground, middle ground, or background – is seen close-up on a shallow plane defined by the camera's focal length' (Knight, 1988: 34). The effect has been compared to cubism (Webb, 1988: 204), and Hockney himself argues that it is as close as photography can come to capturing binocular vision (Hoy, 1988: 56). Most of Hockney's 140-plus Polaroid joiners were completed in the first half of 1982, before he switched to a Pentax 110 and conventional film, arguing that the broad white border of the SX-70 image placed too many restrictions on the process (Webb, 1988: 207). However, it could equally be argued that what distinguishes the Polaroid SX-70 mosaic from other forms of photocollage is precisely this prevalence of borders within borders, which calls attention starkly to the process of segmentation (see Figure 6). The British painter was preceded in this technique of Polaroid mosaics by Joyce Neimanas and Stefan de Jaeger, Neimanas experimenting with Polaroid collages from about 1980 (Hoy, 1988: 56) and de Jaeger probably from 1979 (Webb, 1988: 206). The Belgian photographer de Jaeger, who generally concentrated on a single human subject to suggest motion and the passage of time, in fact accused Hockney of stealing the idea from him. It is probably more accurate to say that the idea was in the air, with Wim Wenders' protagonist in Alice in the Cities (1974), Philip Winter (Rüdiger Vogler), laying out his SX-70 prints on the beach like so many Tarot cards to be read collectively rather than separately, and Québecois film-maker Michael Snow experimenting in 1969 with the temporality of successively taken black-and-white Polaroid prints in Authorization.
Whatever the source of the practice, if the images collected in The Polaroid Book are representative of uses made of Polaroid film by artist-photographers, the mosaic has become very common in Polaroid photography, with numerous instances anthologized there. What is more, the mosaic is in no way restricted to art-based photography, but is also very widespread as a popular practice. Web-sites such as weheartpolaroid.com, polanoid.net and savethepolaroid.com all feature many variants of the Polaroid mosaic by amateur photographers who may or may not be aware of the art-world parallels to their work (see Figure 7). Irrespective of whether it is gallery-based or a popular practice, then, something about the Polaroid image, especially in its white-bordered SX-70 manifestations, invites or even demands that the images be grouped together in mosaic style, or more haphazardly in the form of a bath of prints. It is true that the SX-70 print has very specific features as an image. As Hockney notes, he stopped making joiners with Polaroid prints because he 'realized that the Polaroid camera is essentially a close-up camera, because of the scale of the print' (Hockney, 1986: 36). The image itself is almost square, while the print is only 3½ x 4½ inches when the white border is included.5 Combined with the shallow depth of field of many of the cheaper versions of SX-70 technology, this means that the most satisfactory images tend to be those taken from a distance of 3-5 feet. But while these limitations are what brought Hockney to abandon the SX-70, they may precisely be what encourage the practice of combining a number of Polaroid images in a larger composite. Put simply, it is difficult to get much into a Polaroid print; for it to take on meaning, it requires support from other images.
As Geoffrey Batchen has remarked, the placing of photos in grids, or the allied practice of arranging them in albums, is often a way of getting the pictures to tell a story that they cannot tell on their own, since narrative is 'always a weakness of individual photographs' (Batchen, 2001: 66).

5 The border is often used for writing captions, but it was not put there in the first place for that purpose: it is a 'pod' which bursts to release a chemical reagent necessary for the 'instantaneous' development of the image.

This 'weakness' of photos has been famously developed from a psychoanalytical perspective by Christian Metz in 'Photography and Fetish'. In that essay, Metz notes that every photograph is marked by the cropping of space, by what is excluded from the frame: this 'off-frame effect…marks the place of an irreversible absence, a place from which the look has been averted forever' (Metz, 1985: 87). In other words, there is always something unavoidably missing from the photographic image, which is 'a cut inside the referent' (84). 'The spectator has no empirical knowledge of the contents of the off-frame', writes Metz, 'but at the same time cannot help imagining some off-frame, hallucinating it, dreaming the shape of this emptiness' (87). From this perspective, the narratives provided by photo albums and grids are like hallucinations or dreams to counteract the off-frame emptiness generated by the photographic image. In the case of Polaroid photography this 'off-frame effect' is intensified by the small size of the image and the shallow focus which make close-ups optimal for image quality. What is more, the very materiality of the frame, stubbornly emphasizing the photo as object not just image, amplifies the crisis of off-frame absence as defined by Metz. The Polaroid image, that is to say, calls attention even more than usual to the partiality of a photograph. The Polaroid mosaic can therefore be read as a compensatory act to restore lost space.
What the single Polaroid image leaves out, the composite seeks to make good.

Conclusion

I began this essay by noting how 'photo-materialists' such as Elizabeth Edwards and Janice Hart have expressed a certain antipathy for the photographic image 'qua image' in their attempt to rescue photographs as what they call 'auratic' objects. There are two drawbacks to this approach. Firstly, their 'materialism' is strictly limited: by concentrating exclusively on individual photo-objects (usually privileging older ones, where value has accrued with age), they do not take account of equally 'material' photographic practices where the photo-object itself may not be what is most important – as in the case of the process of Polaroid image-making, which is, as I have argued, a sort of photography of 'attractions'.6 Secondly, the problem of the photo-image, naggingly, does not just go away if we turn our attention to the analysis of photos as objects. What Mitchell calls the image's 'silence…reticence…obduracy' persists, as does what Metz has identified as its troubling 'off-frame effect'. Both the Polaroid mosaic and the photography of attractions bring the object-ness of the image to the fore, but what they perhaps also share in common is that they are ways of avoiding what is lacking in the image. Polaroid SX-70 photography promises an immediate, unmediated and singular image, but its material practices – the Polaroid mosaic and the photography of attractions – are less solutions to the photographic image's partiality than further symptoms of that failure.7 If it is the inevitable fate of image and object to always be severed from each other, the instantaneous Polaroid might appear to be an opportunity to reunify them, an opportunity taken up by those who would immerse themselves, bath-like, in the image as object. But as desires go this is even more chimerical than Borges' map that covers the entire territory.
If the single Polaroid image tells us too little, is far too small, too partial to signify, then the multiplication of images and borders merely splinters and fractures vision further. Without the mosaic, a windowed shard; with it, asyntax.

6 The major exception to this object-centric restricted materialism in Edwards and Hart's collection is Chalfen and Murui's essay on Japanese Print Club photography.

7 On the three key properties of Polaroid photography – speed, elimination of the darkroom, and lack of a negative – see Buse (2007: 37-9).

The research for this article has been generously supported by grants from the Arts and Humanities Research Council UK and the British Academy. Thanks also to Barbara Hitchcock and Jennifer Uhrhane of the Polaroid Collections, Concord, MA, and Tim Mahoney at the Baker Library, Harvard.

Works Cited

Adams, Ansel. (1985) An Autobiography. Boston: Little, Brown.
Anon. (1971) '60-second-excitement' publicity brochure, Box 15-4-1, folder 20, Polaroid Corporation Collection. Baker Library Historical Collections, Harvard Business School.
Anon. (1972a) 'Annual Meeting, 1972', Polaroid Newsletter 17:6 (April 26): 8. Box 18-1-2, Polaroid Corporation Collection. Baker Library Historical Collections, Harvard Business School.
Anon. (1972b) 'Polaroid's Big Gamble on Small Cameras', Time (June 26): 80-5.
Anon. (2007a) 'Requiem for a Polaroid', Flickr.com (December 27), URL (consulted April 14, 2009): http://www.flickr.com/photos/foamygreen/2141364398/
Anon. (2008a) 'Dash Snow submerged', Weheartpolaroid.com (April 26), URL (consulted July 31, 2008): http://weheartpolaroid.blogspot.com/2008/04/dash-snow-submerged.html
Anon. (2008b) 'Kissing booth designer', Weheartpolaroid.com (July 21), URL (consulted August 1, 2008): http://weheartpolaroid.blogspot.com/2008/07/kissing-booth-designer.html
Anon.
(2008c) 'Polanoid', URL (consulted August 3, 2008): http://polanoid.net
Anon. (2008d) 'Save the Polaroid', URL (consulted August 3, 2008): http://savethepolaroid.com
Anon. (2008e) 'Snapping the stars: Shooting from the Hip', Marie Claire (August): 93-9.
Batchen, Geoffrey. (2001) Each Wild Idea: Writing Photography History. Cambridge, Mass.: MIT Press.
------. (2004) Forget Me Not: Photography and Remembrance. New York: Princeton Architectural Press.
Belonsky, Andrew. (2007) 'Pretty Things: Jeremy Kost', Queerty (March 21), URL (consulted July 31, 2008): http://www.queerty.com/queer/pretty-things/pretty-things-jeremy-kost-20070321.php
Buse, Peter. (2007) 'Photography Degree Zero: Cultural History of the Polaroid Image', new formations 62 (Autumn): 29-44.
Chalfen, Richard, and Mai Murui. (2004) 'Print Club Photography in Japan: framing social relationships', in Elizabeth Edwards and Janice Hart (eds) Photographs Objects Histories: On the materiality of images, pp. 166-85. London: Routledge.
Crist, Steve, ed. (2005) The Polaroid Book: Selections from the Polaroid Collections of Photography. Cologne: Taschen.
Doty, Robert. (1957) Letter to Beaumont Newhall. n.d. (Summer 1957). Polaroid Collections, Concord, MA.
Dulac, Nicolas, and André Gaudreault. (2007) 'Circularity and Repetition at the Heart of the Attraction: Optical Toys and the Emergence of a New Cultural Series', in Wanda Strauven (ed.) The Cinema of Attractions Reloaded, pp. 227-44. Amsterdam: Amsterdam UP.
Edwards, Elizabeth.
(1999) 'Photographs as Objects of Memory', in Marius Kwint, Christopher Breward, and Jeremy Aynsley (eds) Material Memories: Design and Evocation. Oxford: Berg.
Edwards, Elizabeth and Janice Hart. (2004) 'Introduction: photographs as objects', in Edwards and Hart (eds) Photographs Objects Histories, pp. 1-15.
Elsaesser, Thomas. (1990) 'General Introduction: Early Cinema: From Linear History to Mass Media Archaeology', in Thomas Elsaesser (ed.) Early Cinema: Space Frame Narrative, pp. 1-8. London: British Film Institute.
Freeman, Michael. (1985) Instant Film Photography: A Creative Handbook. London: MacDonald.
Gaudreault, André. (1990) 'Showing and Telling: Image and Word in Early Cinema', in Elsaesser, Early Cinema, pp. 274-81.
Gunning, Tom. (1990a) 'The Cinema of Attractions: Early Film, its Spectator and the Avant-Garde', in Elsaesser, Early Cinema, pp. 56-63.
Gunning, Tom. (1990b) '"Primitive Cinema": A Frame-Up? Or, The Trick's on Us', in Elsaesser, Early Cinema, pp. 95-103.
Gunning, Tom. (1993) '"Now You See it, Now You Don't": The Temporality of the Cinema of Attractions', The Velvet Light Trap 32 (Fall): 3-12.
Harris, Percy W. (1949) 'The Year's Progress', Photographic Journal (March): 59-63.
Hockney, David. (1986) 'On Photography', in Photographs by David Hockney, pp. 29-39. Washington: International Exhibitions Foundation.
Hoy, Anne. (1988) 'Hockney's Photocollages', in Los Angeles County Museum of Art, David Hockney: A Retrospective, pp. 55-65. New York: Harry N. Abrams.
Ito, Mizuko. (2005) 'Introduction: Personal, Portable, Pedestrian', in Mizuko Ito, Daisuke Okabe, and Misa Matsuda (eds) Personal, Portable, Pedestrian: Mobile Phones in Japanese Life, pp. 1-16. Cambridge, Mass.: MIT Press.
Kato, Fumitoshi, Daisuke Okabe, Mizuko Ito, and Ryuhei Uemoto. (2005) 'Uses and Possibilities of the Keitai Camera', in Personal, Portable, Pedestrian: Mobile Phones in Japanese Life, pp. 301-10.
Knight, Christopher.
(1988) 'Composite Views: Themes and Motifs in Hockney's Art', in Los Angeles County Museum of Art, David Hockney: A Retrospective, pp. 23-38. New York: Harry N. Abrams.
Kopytoff, Igor. (1986) 'The Cultural Biography of Things: Commoditization as Process', in Arjun Appadurai (ed.) The Social Life of Things: Commodities in Cultural Perspective, pp. 64-91. Cambridge: Cambridge UP.
Kost, Jeremy. (2006) 'Bioroid', Roidrage.com, URL (consulted July 31, 2008): http://www.roidrage.com/about.php
------. (2008a) 'Jeremy Kost', Jeremykost.com, URL (consulted July 31, 2008): www.jeremykost.com/bio/bio.html
------. (2008b) 'Profile', MySpace, URL (consulted July 31, 2008): http://profile.myspace.com/index.cfm?fuseaction=user.viewprofile&friendid=21512814
Kostelanetz, Richard. (1974) 'A Wide-Angle View and Close-Up Portrait of Edwin Land and his Polaroid Cameras', Lithopinion 33 (Spring): 48-57.
Lyon, Cody. (2006) 'Shooting Glitz with a Polaroid, Jeremy Kost is gracious in a galling trade', Columbia News Service (February 14), URL (consulted July 31, 2008): http://jscms.jrn.columbia.edu/cns/2006-02-14/lyon-antipaparazzo
McElheny, Victor K. (1998) Insisting on the Impossible: The Life of Edwin Land. Cambridge, MA: Perseus Books.
MacGill, Peter. (1980) 'Introduction', in Peter MacGill (ed.) 20 x 24 Light, p. 2. Philadelphia: Pennsylvania Council of the Arts.
Metz, Christian. (1985) 'Photography and Fetish', October 34 (Autumn): 81-90.
Mitchell, W.J.T. (2005) What Do Pictures Want? The Lives and Loves of Images. Chicago: Chicago UP.
Sealfon, Peggy. (1983) The Magic of Instant Photography. Boston: CBI Publishing.
Shiner, Eric C.
(2007) 'Mirror, Mirror Off the Wall', Jeremykost.com, URL (consulted July 31, 2008): www.jeremykost.com/bio/bio.html
Trotman, Nat. (2002) 'The Life of the Party: the Polaroid SX-70 Land Camera and instant film photography', Afterimage 29:6 (May/June): 10.
Van Lier, Henri. (1983) 'The Polaroid Photograph and the Body', in Stefan de Jaeger, Stefan de Jaeger, pp. xi-xix. Brussels: Pout.
Virilio, Paul. (1986) Speed and Politics: An Essay on Dromology. Trans. Mark Polizzotti. New York: Semiotext(e).
Warner Marien, Mary. (2006) Photography: A Cultural History (2nd ed.). London: Laurence King.
Webb, Peter. (1988) Portrait of David Hockney. London: Chatto and Windus.
Williams, Raymond. (2003) Television: Technology and Cultural Form. London and New York: Routledge.
Wolbarst, John. (1956) Pictures in a Minute. New York: American Photographic Book Publishing.

Digital photographic evidence and the adjudication of domestic violence cases

Crystal A. Garcia*
School of Public and Environmental Affairs, Indiana University Purdue University Indianapolis, 801 West Michigan Street, Business/SPEA Building 4063, Indianapolis, IN 46202, USA

Abstract

This study reported the impact of digital photographic evidence on domestic violence case outcomes in two Indiana counties. It analyzed whether outcomes differed between cases with digital photographic evidence (treatment group) and cases with no photographic evidence (comparison group). Examined impacts included guilty pleas, convictions, and sentence severity. Data included in the analysis came from case files and police and prosecutor interviews.
Findings suggest that digital photographic evidence can be a useful prosecutorial tool – treatment group members were more likely to plead guilty, be convicted, and receive more severe sentences. © 2003 Elsevier Ltd. All rights reserved.

Introduction

In a 1982 report, the United States Commission on Civil Rights (1982) found that there was a literal "lack of prosecution" in domestic violence cases. Since then, numerous constituents called for increased enforcement of domestic violence laws and an improvement in the methods used to investigate and prosecute these cases. As a result, major changes have taken place. Across the country, mandatory arrest policies, civil protection orders, and no-drop prosecution policies have been instituted with the intention of protecting victims from future abuse and halting the cycle of violence; however, evaluations of these programs and policies offered mixed results (Cahn, 1992; Ford & Regoli, 1993; Maxwell, Garner, & Fagan, 2001; Mills, 1998; Sherman & Berk, 1984; Sherman, Schmidt, & Rogan, 1992). Some practitioners claim that one way to curb the rate and lethality of domestic violence is to halt the pattern of escalation by intervening early in the battering cycle.1 If this supposition is correct, then jurisdictions that are able to successfully prosecute lesser batteries and assaults may be able to reduce the number of future violent episodes – saving justice system resources and quite possibly, human lives.
Unfortunately, attempting to prosecute misdemeanor domestic batteries can be difficult for three major reasons: (1) injuries can be portrayed as minimal (Blitzer, Garcia, & Leitch, 2001); (2) minor altercations (where little evidence is available) can disintegrate into "he said/she said" battles; and (3) victims may either refuse to assist the prosecutor and/or reconciliation between batterer and victim can occur before a case reaches the adjudication phase – forcing prosecutors to mount victimless prosecutions (Schmidt & Steury, 1989). One way to improve the likelihood of conviction in domestic violence cases is to improve the quality of evidence submitted to the court. Digital photography is one way to do that. Where injury is not easily

0047-2352/$ – see front matter © 2003 Elsevier Ltd. All rights reserved. doi:10.1016/j.jcrimjus.2003.08.001
* Tel.: +1-317-274-7006 (office); fax: +1-317-274-7860. E-mail address: crgarcia@iupui.edu (C.A. Garcia).
Journal of Criminal Justice 31 (2003) 579–587
Since this practice can be time consuming and expensive (and most of the cases in question are misdemeanors), few officers go to such lengths in their initial investigations. If crime scenes are not well documented or if 35 mm film cannot be processed, major evidence can be lost and a case doomed. One way to ensure that evidence of a minor assault has been preserved is to equip patrol officers with digital cameras. Although new technologies have been developed to assist in the protection of victims of domestic violence (e.g., personal duress alarms and electronic monitoring of offenders) and the investigation and prosecution of domestic violence cases (e.g., digital photography), there has been little to no systematic analysis of the usefulness of these technologies during adjudication. Therefore, the study described in this article was heuristic in nature – examining the impact of a specific type of evidence (i.e., digital photographs) on the adjudication of misdemeanor domestic battery cases in two Indiana counties.

Review of justice responses to domestic violence

Prior to the last two decades, domestic violence (e.g., battery and assault incidents among intimates) was believed to be a private matter (Ellis, 1994) and best dealt with outside the auspice of the formal legal system (Binder & Meeker, 1992). In spite of this, many individuals studying spousal assault in the late 1970s and early 1980s lobbied for changes in legislation and law enforcement policies regarding domestic violence (Mills, 1998). For example, they proposed mandatory arrest policies, believing that the arrest experience would deter future violence (Dobash & Dobash, 1979; Klinger, 1995). Early research by Sherman and Berk (1984) supported the claims of pro-arrest advocates that arrest was the most successful intervention for reducing future violent episodes. Findings from five replication studies, however, offered conflicting outcomes.
At three of the sites, arrest was found to increase the likelihood of future violence (Pate & Hamilton, 1992; Sherman et al., 1992). Arrest, however, was shown to have the most favorable impacts (in terms of recidivism) on defendants who were married and gainfully employed (Sherman et al., 1992). While the true efficacy of mandatory arrest policies remained in question, law enforcement agencies across the country continued practicing these policies. In an attempt to settle the debate, Maxwell et al. (2001) performed a meta-analysis on the raw data from the replication studies and concluded that arresting batterers was, in fact, consistently related to reduced subsequent aggression. Mandatory arrest policies are probably the most thoroughly researched of the policy-driven responses to domestic violence. Empirical studies focusing on other policy responses are much less common, but include examinations of civil protection orders, court-ordered treatment, and mandatory prosecution policies. Assessments of civil protection orders (CPOs) have shown they are not particularly effective at increasing the safety of victims (Harrell, Smith, & Newmark, 1993). Specifically, women who received CPOs were just as likely to report new violence as those that had no protection orders (Grau, Fagan, & Wexler, 1984). Court-ordered treatment for batterers typically consists of educational and group counseling components that can be as brief in duration as a few weeks or as long as several months. Unfortunately, few of the individuals who most need treatment (i.e., those with the most lengthy abuse histories) are ordered to receive it (Davis & Smith, 1995). Moreover, there is little discussion in the literature identifying which treatment modalities work best for different types of batterers. What little is known suggests that findings are mixed regarding the relationship between court-ordered treatment and the cessation of violent episodes (Davis & Smith, 1995; Dutton, 1986).
Another major policy-driven response to domestic violence established in recent years is the "no-drop" or mandatory prosecution policy. These internal policies, instituted by chief prosecutors, usually mandate that charges be filed in all domestic violence battery/assault cases with a certain level of severity, regardless of the victim's wishes (Cahn, 1992). Although little empirical research has been done to assess the

NEW OPPORTUNITIES OF LOW-COST PHOTOGRAMMETRY FOR CULTURE HERITAGE PRESERVATION

Roman Shults, Kyiv National University of Construction and Architecture, Faculty for Geoinformation Systems and Territory Management, Povitroflotskyi Avenue, 31 Kyiv, 03037, Ukraine, r-schultz@mail.ru

Commission V WG V/7

KEY WORDS: Low-cost photogrammetry, Camera calibration, Smartphone, Fortification objects, Smartphone measuring tools, Pillbox.

ABSTRACT: The paper considers the use of low-cost photogrammetry in combination with the additional capabilities of modern smartphones. The research was carried out on the example of documenting a historical construction of World War II, the Kiev Fortified Region. Brief historical information about the object of research is given. The possibilities of using modern smartphones as measuring instruments are considered. To obtain high-quality results, the smartphone camera was calibrated. The calibration results were then used for 3D modelling of the defence facilities. Three defence structures in different states (destroyed, partially destroyed and operational) were photographed. Based on the results of photography using coded targets, 3D object models were constructed. To verify the accuracy of the 3D modelling, control measurements of the lines between the coded targets on the objects were performed.
The obtained results are satisfactory, and the technology considered in the paper can be recommended for use in archaeological and historical studies.

The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Volume XLII-5/W1, 2017. GEOMATICS & RESTORATION – Conservation of Cultural Heritage in the Digital Era, 22–24 May 2017, Florence, Italy. This contribution has been peer-reviewed. doi:10.5194/isprs-archives-XLII-5-W1-481-2017

1. INTRODUCTION

Preserving historical and cultural heritage is one of the main tasks of any state. Such objects include various types of structures: some widely known to the public, others less known and sometimes abandoned. For the latter, the primary task today is restoration. Fortification constructions form a separate category of cultural heritage objects. In today's world, where historic sites are exposed to various anthropogenic and natural factors, documentation of cultural heritage can help in monitoring, restoration and recovery of such structures (Salvador et al., 2011; Kapica et al., 2013; Kersten et al., 2015; Rodríguez-Gonzálvez et al., 2015). Conventionally, the most popular and well-studied technology for such a task is close-range, or terrestrial, photogrammetry. With the advent of digital photography and multi-function software, almost every user can apply terrestrial photogrammetry to solve various practical problems. The most popular devices among non-professional users of photogrammetric technologies are the inexpensive digital cameras built into mobile phones, smartphones, tablets, etc.
Today, almost anyone can take pictures of an object in a few minutes and, without knowing the details of photogrammetric image processing, construct high-quality three-dimensional models in automatic or semi-automatic mode. This technology is called low-cost photogrammetry (Gruen et al., 2008). For documenting cultural heritage objects, this technology is effective in terms of cost and speed of work. However, the qualitative and quantitative assessment of the results still remains one of the most important tasks of professional photogrammetry. The present work studies the theory and practice of low-cost photogrammetry on the example of documenting World War II fortifications near the city of Kiev. One of the most dramatic stages of World War II was the defence of Kiev, known as the First Battle of Kiev. The defence of Kiev lasted 72 days, from July 7th to September 26th, 1941. The most important role in this operation was played by the Kiev Fortified Region (Russian abbreviation KiUR), a unique complex of defensive structures consisting of permanent and field fortifications, engineering obstacles and long anti-tank ditches. The main parts of the Kiev Fortified Region were the battalion defence areas with pillboxes. Construction of these objects started in 1929 and was not finished by 1941. The total length of the fortified region is about 85 km, from the Dnieper coast north of Kiev to the Dnieper coast south of Kiev. In general, the Kiev Fortified Region had three fortification lines with a depth of 1 to 6 km. The total number of fortifications is up to 600, not counting the additional entrenchments and trenches around each object. By a decision of the Cabinet of Ministers of Ukraine in 1993, the Kiev Fortified Region was granted the status of a museum.
For more than 20 years, unfortunately, no inventory of the state of the Kiev Fortified Region was carried out. Today, thanks to low-cost photogrammetry technology, the opportunity has appeared to explore this unique object and create a three-dimensional model of the defence area (Shults et al., 2017). The key point of this research was the combined use of smartphone camera images and a smartphone measuring application (Smart Tools). The basic equipment used was: a Leica Disto laser distance meter (to perform control measurements); a laptop (for calibrating the camera and three-dimensional modelling); a MEIZU M3 Max smartphone (for picture capturing); and AgiSoft PhotoScan (for three-dimensional modelling).

2. SMARTPHONE CAMERA CALIBRATION

Since the main source of data is digital photos, the quality of the obtained 3D model depends on the quality of the original photographs. One way to increase the geometric accuracy of photo images is calibration. For calibration, we used flat test objects displayed on a laptop screen. The following parameters were determined during calibration: focal distance, principal point coordinates, coefficients of radial and tangential distortion, and the affinity coefficient. Non-linear distortions were modelled using the well-known Brown distortion model. The curves of radial and tangential distortion are presented below.

Figure 1. Radial distortion curve
Figure 2. Tangential distortion curve

The parameters of the smartphone camera calibration and their errors are presented in Figure 3.

Figure 3.
Calibration results

These results show large camera distortion, but at the same time the quality of the calibration is good and all parameters were determined with sufficient accuracy. Therefore, we can try to use this camera for image capturing.

3. OBJECTS OF RESEARCH

After the smartphone camera calibration, photographing was carried out. As test objects, we chose two pillboxes, № 451 and № 428, and artillery observation post № 453. On the example of these test objects, we will try to evaluate the effectiveness of low-cost photogrammetry technology. The position of the objects on a fragment of a topographic map is presented in Figure 4.

Figure 4. The position of objects on a topographical map, scale 1:100 000

The first fortified object is pillbox № 428 (Figures 5 and 6). It is a two-storey machine-gun pillbox with four loopholes.

Figure 5. Machine-gun pillbox № 428
Figure 6. Horizontal cross-section of machine-gun pillbox № 428 on the upper floor (Kainaran et al., 2011)

The photographing of this object was made without coded targets, as the object has good texture and the photographing conditions allowed capturing images from all sides. In total, 62 images were obtained. The second fortified object is pillbox № 451 (Figures 7 and 8). It is a one-storey machine-gun pillbox with four loopholes.

Figure 7. Scheme of pillbox № 451 (Kainaran et al., 2011)

During the liberation of Kiev in 1943 the object was partially destroyed. The photographing of this object was made with 16-bit coded targets, as the object's texture is not as good as in the first case and photographing was possible only from particular sides due to trenches and trees. In total, 45 images were obtained.

Figure 8.
Image of pillbox № 451 with coded targets

The third object is artillery observation post № 453 with an armoured cover. It is a one-storey object.

Figure 9. Image of artillery observation post № 453 with coded targets

During the liberation of Kiev in 1943 the object was totally destroyed. The photographing of this object was also made with 16-bit coded targets. In total, 94 images were obtained. In all cases, the MEIZU M3 Max camera was used. To obtain a metric model, information from the Android application Smart Tools was used. This application is very useful and allows measuring distances, heights, inclinations and magnetic azimuths using just a smartphone, without any additional geodetic equipment.

Figure 10. Example of inclination measurement

In our case, we measured only distances and heights in order to scale the object. In addition, we used GNSS coordinates for final 3D model referencing.

4. 3D MODELLING

The obtained photos and calibration results allow performing 3D modelling from the photographic images. For the 3D modelling, we used the software AgiSoft PhotoScan, which creates 3D models with a point cloud in automatic mode (Jiroušek et al., 2014). Below we present the results of 3D modelling for pillbox № 428.

Figure 11. Camera geometry for pillbox № 428
Figure 12. Point cloud of pillbox № 428 with trench
Figure 13. TIN model of pillbox № 428
Figure 14. 3D model of pillbox № 428 with trench

In order to check the accuracy of the final model, control measurements of distances between the coded targets on artillery observation post № 453 were made. These distances were measured with Android Smart Tools and the Leica Disto. The results are given in Table 1.
The results are given in Table 1 Line Distances from Android Distances from The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Volume XLII-5/W1, 2017 GEOMATICS & RESTORATION – Conservation of Cultural Heritage in the Digital Era, 22–24 May 2017, Florence, Italy This contribution has been peer-reviewed. doi:10.5194/isprs-archives-XLII-5-W1-481-2017 485 Smart Tools, m Leica Disto, m 38-39 1.4 1.135 1-2 1.5 1.318 3-1 1.0 1.150 5-7 1.4 1.573 6-5 1.2 1.035 9-10 0.8 1.021 9-12 1.0 1.072 9-11 1.0 1.150 6-12 2.1 2.223 15-13 1.4 1.186 18-15 1.5 1.305 21-19 1.6 1.786 32-33 1.0 0.804 31-32 1.0 0.892 35-35 1.5 1.706 Table 1. Results of control distances measurements If we accept the Leica Dicto measurement errorless, than we will get root mean square error of Android Smart Tools measurements, which is 0.18 m. This means that resulting model meets the requirements of archaeological and historical measurements. 5. CONCLUSIONS Summarizing the work carried out would like to make the following conclusions. The considered technology of low-cost photogrammetry proved effective enough to meet the challenges for documenting cultural heritage. The resulting accuracy and detail of the models correspond to the requirements for the inventory of the historic structures. During work with inexpensive equipment necessary to provide the mandatory implementation of the photographic equipment calibration twice: directly ahead image capturing and after image capturing. An important element of the technology is to implement control measurements. Because the technology does not provide for the use of surveying equipment (total stations, GNSS precision equipment etc.), the only one option is to carry out control measurements of distances. Such measurements should be at least 5-6 and as much as possible on all sides of object. Modern smartphones allow to measure distances, heights, inclinations of edges and facets, magnetic azimuths. 
In future work, we plan to use these measurements as additional constraints in the bundle adjustment and to research the accuracy of these measurements and their influence on the quality of 3D modelling.

REFERENCES

Gruen, A., Akca, D., 2008. Metric accuracy testing with mobile phone cameras. ISPRS Archives, XXIst ISPRS Congress, Technical Commission V, Volume XXXVII, Part B5, China, pp. 729-736.

Jiroušek, T., Kapica, R., Vrublová, D., 2014. The testing of Photoscan 3D object modelling software. Geodesy and Cartography, Volume 40, Issue 2, pp. 68-74. http://dx.doi.org/10.3846/20296991.2014.930251

Kainaran, A.V., Kreshchanov, A.L., Kuzyak, A.G., Yushchenko, M.V., 2011. Kiev fortified region: 1928-1941 (History, Pre-war service, Today's day). Volyn, 356 p.

Kapica, R., Vrublová, D., Michalusová, M., 2013. Photogrammetric documentation of Czechoslovak border fortifications at Hlučín-Darkovičky. Geodesy and Cartography, Volume 39, Issue 2, pp. 72-79. http://dx.doi.org/10.3846/20296991.2013.806243

Kersten, T., Lindstaedt, M., Maziull, L., Schreyer, K., Tschirschwitz, F., Holm, K., 2015. 3D recording, modelling and visualisation of the fortification Kristiansten in Trondheim (Norway) by photogrammetric methods and terrestrial laser scanning in the framework of Erasmus programmes. In: The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, XL-5/W4, pp. 255-262. doi:10.5194/isprsarchives-XL-5-W4-255-2015

Rodríguez-Gonzálvez, P., Nocerino, E., Menna, F., Minto, S., Remondino, F., 2015. 3D surveying & modeling of underground passages in WWI fortifications. In: The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, XL-5/W4, pp. 17-24. doi:10.5194/isprsarchives-XL-5-W4-17-2015

Salvador, I., Vitti, A., 2011.
Survey, representation and analysis of a World War I complex system of surface and underground fortifications in the Gresta Valley, Italy. In: The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, XXXVIII-5/W16, pp. 319-325. doi:10.5194/isprsarchives-XXXVIII-5-W16-319-2011

Shults, R., Krelshtein, P., Kravchenko, I., Rogoza, O., Kyselov, O., 2017. Low-cost photogrammetry for culture heritage. Environmental Engineering 10th International Conference, Vilnius, Lithuania. http://doi.org/10.3846/enviro.2017.XXX

VOL 49: SEPTEMBER • SEPTEMBRE 2003  Canadian Family Physician • Le Médecin de famille canadien
clinical challenge  défi clinique

Practice Tips

Real-time digital photography
Adjunct to medical consultation

Sody Naimer, MD

Dr Naimer is a Lecturer in the Department of Family Medicine, Division of Community Health, at Ben-Gurion University of the Negev in Beer-Sheva, Israel.

In general, photography is useful for documentation, follow up, defensive medicine, peer and specialist consultation, and patient education. Although it fulfils many of these needs, standard photography has several drawbacks.1-5 Mounting costs of good-quality film and of developing colour prints are a serious consideration.
Delays in receiving images, inability to alter resulting pictures, and restricted storage space are a few limitations of standard photography. I used standard photography in the past mainly to document rare findings, and I shared my prints with colleagues for teaching or consultation purposes. The expense of each picture led me to think twice about whether obtaining a photograph was justified in each case. Digital photography has recently evolved into a very useful tool that greatly assists physicians in several ways. Instant production of virtual images and our ability to take a snapshot whenever desired has broadened its use. The cost is virtually negligible, and producing a photo takes almost no time. Verbally describing many clinical conditions (eg, gynecologic diseases; anal findings; lesions on any part of the dorsal surface of the body, such as the back, glutei, upper thighs, or ears; and even deep oral lesions) to patients will not always suffice. No other technique seems to demonstrate to patients the disease or ongoing processes in these areas as well (Figure 1). As time allows, you can display images by downloading them onto your computer screen or view them directly from the back of the camera. Other benefits of digital photography are the ability to share images via electronic mail, to duplicate images as many times as you like, to crop and adjust images received, to alter contrast in reproduction, to project images for lectures or staff meetings, to enlarge to accentuate details, to select only the best of a series of photographs of moving subjects or subjects photographed when conditions were less than ideal, to have simple and rapid access to a large library of images without need for storage space, and obviously to use as a means of patient education in the office. I do not have to think twice about shooting a picture with an uncooperative child or worry about poor illumination because, at worst, low-quality images can be deleted by pressing a button. Changing medical problems can be documented during progression until complete evolution. Objective comparisons of images can help you judge the outcome of interventions, for instance, for acne, pigmented lesions, or scars. I am personally fully satisfied with the digital photography that we have been using in our practice for the past 2 years. I strongly recommend incorporating real-time digital photography into clinical practice.

Figure 1. Interdigital tinea pedis revealed using real-time digital photography

We encourage readers to share some of their practice experience: the neat little tricks that solve difficult clinical situations. Canadian Family Physician pays $50 to authors upon publication of their Practice Tips. Tips can be sent by mail to Dr Tony Reid, Scientific Editor, Canadian Family Physician, 2630 Skymark Ave, Mississauga, ON L4W 5A4; by fax (905) 629-0893; or by e-mail tony@cfpc.ca.

References
1. Sasson M, Schiff T, Stiller MJ. Photography without film: low-cost digital cameras come of age in dermatology. Int J Dermatol 1994;33(2):113-5.
2. Kokoska MS, Currens JW, Hollenbeak CS, Thomas JR, Stack BC Jr. Digital vs 35-mm photography. To convert or not to convert? Arch Facial Plast Surg 1999;1(4):276-81.
3. Spring KR. Scientific imaging with digital cameras. Biotechniques 2000;29(1):70-2,74,76.
4. Hollenbeak CS, Kokoska M, Stack BC Jr. Cost considerations of converting to digital photography. Arch Facial Plast Surg 2000;2(2):122-3.
5. Wall S, Kazahaya K, Becker SS, Becker DG. Thirty-five millimeter versus digital photography: comparison of photographic quality and clinical evaluation. Facial Plast Surg 1999;15(2):101-9.
International Journal of Scientific Research in Computer Science, Engineering and Information Technology
CSEIT195121 | Received : 10 Jan 2019 | Accepted : 23 Jan 2019 | January-February 2019 [ 5 (1) : 116-122 ]
© 2019 IJSRCSEIT | Volume 5 | Issue 1 | ISSN : 2456-3307 | DOI : https://doi.org/10.32628/CSEIT195121

Image Dehazing Technique Based On DWT Decomposition and Intensity Retinex Algorithm

1Sunita Shukla, 2Prof. Silky Pareyani
1M. Tech Scholar, Gyan Ganga College of Technology, Jabalpur, Madhya Pradesh, India
2Assistant Professor, Gyan Ganga College of Technology, Jabalpur, Madhya Pradesh, India

ABSTRACT
Conventional designs use multiple images or a single image to deal with haze removal. The presented paper uses a median filter with modified coefficients (16 adjacent-pixel median) to estimate the transmission map and remove haze from a single input image. The median filter prior (coefficient) is developed based on the idea that the outdoor visibility of images taken under hazy weather conditions is seriously reduced as distance increases. The thickness of the haze can be estimated effectively, and a haze-free image can be recovered by adopting the median filter prior and the new haze imaging model. Our method is stable for image local regions containing objects at different depths. Our experiments showed that the proposed method achieved better results than several state-of-the-art methods, and it can be implemented very quickly. Due to its fast speed and good visual effect, our method is suitable for real-time applications. This work confirms that estimating the transmission map using distance information instead of colour information is a crucial point in image enhancement, and especially in single-image haze removal.
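As a rough illustration of the recovery step the abstract describes, the haze imaging model I(x) = J(x)t(x) + A(1 - t(x)) can be inverted once a transmission map t(x) is estimated from a local median prior. The sketch below is a simplified grayscale interpretation, not the paper's full DWT/Retinex pipeline; the window size and the omega and t0 values are illustrative assumptions:

```python
from statistics import median

def dehaze_gray(img, window=3, omega=0.95, t0=0.1):
    """Single-image dehazing sketch based on I(x) = J(x)*t(x) + A*(1 - t(x)).

    A coarse transmission map t is estimated from a local median prior,
    the airlight A is taken as the brightest median-filtered intensity,
    and the model is inverted to recover the scene radiance J.
    window, omega and t0 are illustrative assumptions, not tuned values.
    """
    h, w = len(img), len(img[0])
    r = window // 2

    def local_median(i, j):
        # median over the window, with edge pixels replicated at the border
        vals = [img[min(max(i + di, 0), h - 1)][min(max(j + dj, 0), w - 1)]
                for di in range(-r, r + 1) for dj in range(-r, r + 1)]
        return median(vals)

    med = [[local_median(i, j) for j in range(w)] for i in range(h)]
    A = max(max(row) for row in med)  # airlight estimate
    # transmission map from the median prior, kept in [t0, 1]
    t = [[min(1.0, max(t0, 1.0 - omega * m / A)) for m in row] for row in med]
    # invert the haze model: J = (I - A)/t + A, clipped to [0, 1]
    return [[min(1.0, max(0.0, (img[i][j] - A) / t[i][j] + A))
             for j in range(w)] for i in range(h)]

hazy = [[0.80, 0.82, 0.81],
        [0.79, 0.90, 0.83],
        [0.80, 0.81, 0.80]]
dehazed = dehaze_gray(hazy)
print(len(dehazed), len(dehazed[0]))  # 3 3
```

Clamping t(x) at t0 mirrors the standard trick of keeping a little haze in very dense regions so the inversion does not amplify noise.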
Keywords : AF: Adaptive Filter, AHE: Adaptive Histogram Equalization, LOE: Lightness Order Error

I. INTRODUCTION

Restoration of hazy images is an important issue in outdoor vision systems. Image enhancement and dehazing remain a challenging problem as well as an important task in image processing. Image enhancement is an important issue in image processing applications such as digital photography, medical image analysis, remote sensing and scientific visualization [1]. Image dehazing and enhancement is the process by which the appearance and visibility of an image are improved such that the obtained image is suitable for visual perception by human beings or for machine analysis. It is useful not only from an aesthetic point of view but also helps in image analysis, object recognition, etc. Images captured under bad visibility often have low contrast, and many of their features are difficult to see.

II. METHODOLOGY

In this paper, we proposed a new method for single-image dehazing using NAM (Non-symmetry and Anti-packing Model)-based decomposition and contextual regularization. We estimated the airlight by decomposing the image using the non-symmetry and anti-packing model [11] to eliminate false estimation at boundaries or over-bright objects. Then, the scene transmission was calculated using a combination of the boundary constraints, the contextual regularization and the optimization proposed by Meng et al. [12]. The proposed method gave better colour visuals and haze-free images when it
A is the airlight which is the global light in the atmosphere while t(x) denotes scene transmission function (0 Physics + Math,” G. Sharma, in Computational Color Imaging, Raimondo Schettini, Shoji Tominaga, and Alain Tremeau, Eds., Berlin, Germany, 2011, vol. 6626 of Lecture Notes in Computer Science, pp. 31–46, Springer-Verlag, Invited Paper. 52. “High capacity image barcodes using color separability,” O. Bulan, G. Sharma, and B. Oztan, Proc. SPIE: Color Imaging XVI: Displaying, Processing, Hardcopy, and Applications, 24 - 27 January 2011, San Francisco, CA, vol. 7866, pp. 7866-N, 1–9. 53. “Adaptive color visualization for dichromats using a customized hierarchical palette,” C. E. Rodŕıguez-Pardo and G. Sharma, Proc. SPIE: Color Imaging XVI: Displaying, Processing, Hardcopy, and Applications, 24 - 27 January 2011, San Francisco, CA, vol. 7866, pp. 7866-03, 1–9. 54. “Multiplexed clustered-dot halftone watermarks using bi-directional phase modulation and detection,” B. Oztan and G. Sharma, in Proc. IEEE Intl. Conf. Image Proc., 26-29 September 2010, Hong Kong, pp. 981–984. 55. “SWIFT: Scalable weighted iterative sampling for flow cytometry clustering,” I. Naim, S. Datta, G. Sharma, J. Cavenaugh, T. Mosmann, in Proc. IEEE Intl. Conf. Acoustics Speech and Sig. Proc., Dallas, Texas, USA, Mar. 19, 2010, pp. 509–512. G. Sharma 13 56. “Clustered-dot color halftone watermarks using spatial frequency and color separability,” B. Oztan and G. Sharma, in Proc. SPIE: Color Imaging XV: Processing, Hardcopy, and Applications, vol. 7528, Jan. 2010, San Jose, CA, pp. 7528–33. 57. “Detecting content adaptive scaling of images for forensic applications,” C. Fillion and G. Sharma, in Proc. SPIE: Media Forensics and Security XII, Jan. 2010, vol. 7541, Jan. 2010, San Jose, CA, pp. 7541Z1–12. 58. “Device temporal forensics: An information theoretic approach,” J. Mao, O. Bulan, G. Sharma, and S. Datta, in Proc. IEEE Intl. Conf. Image Proc., 7-11 November 2009, Cairo, Egypt, pp. 
1501–1504.
59. "Optimized energy allocation in battery powered image sensor networks," C. Yu and G. Sharma, in Proc. IEEE Intl. Conf. Image Proc., 7-11 November 2009, Cairo, Egypt, pp. 3461–3464.
60. "Optimal resource allocation for wireless video sensors with power-rate-distortion model of imager," M. Marijan, W. Heinzelman, G. Sharma, Z. Ignjatovic, in Proc. Midwest Symposium on Circuits and Systems (MWSCAS), August 2009.
61. "Processing of degraded documents for long-term archival using Waferfiche™ technology," B. Oztan, G. Sharma, A. Pasupuleti, and P. R. Mukund, in Final Program and Proceedings: Archiving 2009, 4-7 May 2009, Crystal City Hilton, Arlington, VA, pp. 197–202.
62. "Joint stochastic sampling for RNA secondary structure prediction," A. O. Harmanci, G. Sharma, and D. H. Mathews, in Proc. Seventh IEEE Intl. Wksp. on Genomic Signal Proc. and Stats. (GENSIPS), May 17-29, 2009, Minneapolis, MN, (CDROM).
63. "Q-SIFT: Efficient feature descriptors for distributed camera calibration," C. Yu and G. Sharma, in Proc. IEEE Intl. Conf. Acoustics Speech and Sig. Proc., April 19-24, 2009, Taipei, Taiwan, pp. 1849–1852.
64. "Geometric distortion signatures for printer identification," O. Bulan, J. Mao, and G. Sharma, in Proc. IEEE Intl. Conf. Acoustics Speech and Sig. Proc., April 19-24, 2009, Taipei, Taiwan, pp. 1401–1404.
65. "Sensor scheduling for lifetime maximization in user-centric image sensor networks," C. Yu and G. Sharma, in Proc. SPIE: Visual Communications and Image Processing (VCIP), vol. 7257, 18-22 January 2009, San Jose, CA, pp. 7257-OH-1–12. Best Paper Award.
66. "High capacity color barcodes using dot orientation and color separability," O. Bulan, V. Monga, G. Sharma, in Proc. SPIE: Media Forensics and Security XI, vol. 7254, 19-21 January 2009, San Jose, CA, pp. 725417-1–7.
67. "On the security and robustness of encryption via compressed sensing," Adem Orsdemir, H. Oktay Altun, Gaurav Sharma, and Mark F.
Bocko, in Proceedings Military Communications Conference (MILCOM), Nov. 17-19, 2008, San Diego, CA, [CDROM]. 68. “Clustered-dot color halftone watermarks,” B. Oztan and G. Sharma, in Proc. IS&T/SID Sixteenth Color Imaging Conference: Color Science and Engineering: Systems, Technologies, Applications, Portland, OR, 10-15 Nov. 2008, pp. 99–104. 69. “Adaptive decoding for halftone orientation-based data hiding,” O. Bulan, G. Sharma, and V. Monga, In Proc. IEEE Intl. Conf. Image Proc., 12-15 October 2008, San Diego, CA, pp. 1280–1283. 70. “Application of high capacity data hiding in halftone images,” O. Bulan, G. Sharma, V. Monga, Proc. IS&T NIP24: Intl. Conf. Dig. Printing Technologies, Pittsburgh, PA, 6-11 Sep. 2008, pp. 787–791. 71. “Improving computational efficiency for RNA secondary structure prediction via data-adaptive alignment constraints,” A. D’Orazio and G. Sharma, in Proc. IEEE Intl. Wksp. on Genomic Signal Proc. and Stats. (GENSIPS), 8-10 June 2008, Phoenix, AZ, USA [CDROM]. 72. “On the capacity of orientation modulation halftone channels,” O. Bulan, G. Sharma, and V. Monga, in Proc. IEEE Intl. Conf. Acoustics Speech and Sig. Proc., March 30-April 4, 2008, Las Vegas, NV, pp. 1685–1688. 73. “Probabilistic structural alignment of RNA sequences,” A. O. Harmanci, Gaurav Sharma, and D. H. Mathews, in Proc. IEEE Intl. Conf. Acoustics Speech and Sig. Proc., March 30-April 4, 2008, Las Vegas, NV, pp. 645–648. 74. “Distributed estimation using reduced dimensionality sensor observations: A separation perspective,” C. Yu and G. Sharma, in Proc. 40th Annual Conference on Information Sciences and Systems (CISS), Princeton, NJ, 19 – 21 Mar. 2008, pp. 150–154. 75. “Data embedding in hardcopy images via halftone-dot orientation modulation,” O. Bulan, V. Monga, G. Sharma, and B. Oztan, In Delp et al. Proc. SPIE: Security, Forensics, Steganography, and Watermarking of Multimedia Contents X, vol. 6819, Jan. 2008, San Jose, CA, pp. 68190C-1–12. 76. 
“Improved embedding efficiency and AWGN robustness for SS watermarks via pre-coding,” O. Altun, O. Bulan, G. Sharma, and M. Bocko, In Delp et al. Proc. SPIE: Security, Forensics, Steganography, and Watermarking of Multimedia Contents X, vol. 6819, Jan. 2008, San Jose, CA, pp. 68191F-1–12. 77. “Steganalysis-aware Steganography: Statistical indistinguishability despite high distortion,” A. Orsdemir, O. Altun, G. Sharma, and M. Bocko, In Delp et al. Proc. SPIE: Security, Forensics, Steganography, and Watermarking of Multimedia Contents X, vol. 6819, Jan. 2008, San Jose, CA, pp. 681915–1–9. 78. “On side-informed coding of noisy sensor observations,” C. Yu and G. Sharma, In Proc. 41st Asilomar Conf. on Signals, Systems & Computers, Nov. 4-7, 2007, Pacific Grove, CA, pp. 681–685. 79. “Probabilistic methods for improving efficiency of RNA secondary structure prediction across multiple sequences,” G. Sharma, A. O. Harmanci, and D. H. Mathews, In Proc. 41st Asilomar Conf. on Signals, Systems & Computers, Nov. 4-7, 2007, Pacific Grove, CA, pp. 34–38. 80. “Hierarchical compression of color look up tables,” S. R. A. Balaji, G. Sharma, M. Q. Shaw, and R. Guay, in Proc. IS&T/SID Fifteenth Color Imaging Conference: Color Science and Engineering: Systems, Technologies, Applications, Albuquerque, NM, 5-9 Nov. 2007, pp. 261–266. 81. “Lifetime-distortion trade-off in image sensor networks,” C. Yu, S. Soro, G. Sharma, and W. Heinzelman, In Proc. IEEE Intl. Conf. Image Proc., 16-19 September 2007, San Antonio, Texas, pp. 129–132. 82. “Collusion resilient fingerprint design by alternating projections,” O. Altun, G. Sharma, A. Orsdemir, and M. Bocko, In Proc. IEEE Intl. Conf. Image Proc., 16-19 September 2007, San Antonio, Texas, pp. 437–440. 83. “Conditions for color misregistration sensitivity in clustered-dot halftones,” B. Oztan, G. Sharma, and R. P. Loce, In Proc. IEEE Intl. Conf. Image Proc., 16-19 September 2007, San Antonio, Texas, pp. 221–224. 84. 
“End-to-end channel assurance for communication over open voice channels,” D. J. Coumou and G. Sharma, in Military Communications Conference (MILCOM), Oct. 29-31, 2007, Orlando, Florida, pp. 1–7 (invited paper). 85. “Toward turbo decoding of RNA secondary structure,” A. Harmanci, G. Sharma, and D. H. Mathews, in Proc. IEEE Intl. Conf. Acoustics Speech and Sig. Proc., 15-20 April, 2007, Honolulu, HI, vol. I, pp. 365–368. 86. “Multi-view image registration for wide-baseline visual sensor networks,” Gulcin Caner, A. Murat Tekalp, Gaurav Sharma, and Wendi Heinzelman, in Proc. IEEE Intl. Conf. Image Proc., 8-11 October 2006, Atlanta, pp. 369–372. 87. “Self modulated halftones,” B. Oztan and G. Sharma, in Proc. IEEE Intl. Conf. Image Proc., 8-11 October 2006, Atlanta, pp. 1533–1536. 88. “Optimum watermark design by vector space projections,” Oktay Altun, Gaurav Sharma, and Mark Bocko, in Proc. IEEE Intl. Conf. Image Proc., 8-11 October 2006, Atlanta, pp. 1413–1416. 89. “Thin-plate splines for printer data interpolation,” Gaurav Sharma and Mark Q. Shaw, in Proc. EUSIPCO, Sept. 2006 [CDROM]. 90. “Watermark synchronization for feature-based embedding: Application to speech,” David J. Coumou and Gaurav Sharma, in Proc. IEEE Intl. Conf. on Multimedia and Expo., 09–13 Jul. 2006, pp. 849-852. 91. “Watermark synchronization: Perspectives and a new paradigm,” Gaurav Sharma and David J. Coumou, in Proc. 40th Annual Conference on Information Sciences and Systems (CISS), Princeton, NJ, 22 – 24 Mar. 2006, pp. 1182-1187. Invited paper. 92. “Continuous phase modulated halftones and their application to halftone data embedding,” B. Oztan and G. Sharma, in Proc. IEEE Intl. Conf. Acoustics Speech and Sig. Proc., May 15-17, 2006, Toulouse, France, vol. II, pp. 333–336. Student Paper award winner. 93. “Set theoretic quantization index modulation watermarking,” Oktay Altun, Gaurav Sharma, and Mark Bocko, in Proc. IEEE Intl. Conf. Acoustics Speech and Sig. 
Proc., May 15-17, 2006, Toulouse, France, vol. II, pp. 229–232. 94. “Multiple watermarking: A vector space projections approach,” Oktay Altun, Gaurav Sharma, and Mark Bocko, in Proc. SPIE: Computational Imaging IV, vol. 6065, C. A. Bouman and E. L. Miller, Eds., Jan. 2006, San Jose, CA, pp. 60650O1–60650O12. 95. “Plane-based calibration of cameras with zoom variation,” Chao Yu and Gaurav Sharma, in Proc. SPIE: Visual Communications and Image Processing, vol. 6077, J. G. Apostolopoulos and A. Said, Eds., 15 - 19 Jan. 2006, San Jose, CA. 96. “Analysis of misregistration induced color shifts in the superposition of periodic screens,” B. Oztan, G. Sharma, and R. P. Loce, in Proc. SPIE: Color Imaging XI: Processing, Hardcopy, and Applications, vol. 6058, R. Eschbach and G. G. Marcu, Eds., Jan 17-19, 2006, San Jose, CA. 97. “Novel scanner characterization method for color measurement and diagnostics applications,” B. S. Lee, R. Bala, G. Sharma, in Proc. SPIE: Computational Imaging IV, vol. 6065, C. A. Bouman and E. L. Miller, Eds., Jan. 2006, San Jose, CA, pp. 606512–1–606512–11. 98. “Semifragile hierarchical watermarking in a set theoretic framework,” O. Altun, G. Sharma, M. Celik, and M. Bocko, in Proc. IEEE Intl. Conf. Image Proc., Sept. 11-14, 2005, Genova, Italy, vol. I, pp. 1001 - 1004. 99. “Informed watermarking in the fractional Fourier Domain,” O. Altun, G. Sharma, and M. Bocko, in Proc. EUSIPCO 2005, Sept. 4-8, 2005, Antalya, Turkey. 100. “Pitch and duration modification for speech watermarking,” M. Celik, G. Sharma, and A. M. Tekalp, in Proc. IEEE Intl. Conf. Acoustics Speech and Sig. Proc., 2005, vol. II, pp. 17-20. 101. “Morphological steganalysis of audio signals and the principle of diminishing marginal distortions,” O. Altun, G. Sharma, M. Celik, M. Sterling, E. Titlebaum, and M. Bocko, in Proc. IEEE Intl. Conf. Acoustics Speech and Sig. Proc., 2005, vol. II, pp. 21-24. 102. “An adaptive filtering framework for image registration,” G. Caner, A. M. 
Tekalp, G. Sharma, and W. Heinzelman, in Proc. IEEE Intl. Conf. Acoustics Speech and Sig. Proc., vol. II, pp. 885-888. 103. “Imaging arithmetic: Physics U Math > Physics + Math,” G. Sharma, in Proc. SPIE: Color Imaging X: Processing, Hardcopy, and Applications, vol. 5667, Jan. 2005, Invited Paper, pp. 95-106. 104. “What color is it?,” R. Eschbach, G. Sharma, and G. B. Unal, in Proc. SPIE: Color Imaging X: Processing, Hardcopy, and Applications, vol. 5667, 2005, pp. 186-192. 105. “A multiresolution halftoning algorithm for progressive display,” M. Mukherjee and G. Sharma, in Proc. SPIE: Color Imaging X: Processing, Hardcopy, and Applications, vol. 5667, Jan. 2005, pp. 525-533. 106. “Quantitative evaluation of misregistration induced color shifts in color halftones,” G. Sharma, B. Oztan, and R. P. Loce, in Proc. SPIE: Color Imaging X: Processing, Hardcopy, and Applications, vol. 5667, Jan. 2005, pp. 501-512. 107. “Smooth blending of two inks of similar hue to simulate one ink,” M. Q. Shaw, R. Bala, and G. Sharma, in Proc. SPIE: Color Imaging X: Processing, Hardcopy, and Applications, vol. 5667, Jan. 2005, pp. 409-416. 108. “Local image registration: an adaptive filtering framework,” G. Caner, A. M. Tekalp, G. Sharma, and W. Heinzelman, in Proc. SPIE: Computational Imaging III, C. A. Bouman and E. L. Miller, Eds., Jan. 2005, vol. 5674, pp. 159-168. 109. “Mathematical Discontinuities in CIEDE2000 Color Difference Computations,” G. Sharma, W. Wu, E. N. Dalal, M. Celik, in Proc. IS&T/SID Twelfth Color Imaging Conference: Color Science and Engineering: Systems, Technologies, Applications, Scottsdale, AZ, 9-12 Nov. 2004, pp. 334–339. 110. “Efficient classification of scanned media using spatial statistics,” G. B. Unal, G. Sharma, R. Eschbach, in Proc. IEEE Intl. Conf. Image Proc., 24-27 October 2004, Singapore, pp. 2395–2398. 111. “Show-through watermarking of duplex printed documents,” G. Sharma and S. Wang, Proc. 
SPIE: Security, Steganography, and Watermarking of Multimedia Contents VI, vol. 5306, San Jose, 19-22 Jan. 2004. 112. “Universal image steganalysis using rate-distortion curves,” M. U. Celik, G. Sharma, and A. M. Tekalp, Proc. SPIE: Security, Steganography, and Watermarking of Multimedia Contents VI, vol. 5306, San Jose, 19-22 Jan. 2004. 113. “Stochastic Screens robust to mis-registration in multi-pass printing,” G. Sharma, S. Wang, and Z. Fan, Proc. SPIE: Color Imaging: Processing, Hard Copy, and Applications IX, vol. 5293, San Jose, 19-22 Jan. 2004, pp. 460-468. 114. “Two-dimensional transforms for device color calibration,” R. Bala, V. Monga, G. Sharma, J. P. VandeCapelle, Proc. SPIE: Color Imaging: Processing, Hard Copy, and Applications IX, vol. 5293, San Jose, 19-22 Jan. 2004, pp. 250-261. 115. “Illuminant multiplexed imaging: GCR and special effects,” G. Sharma, R. P. Loce, S. J. Harrington, and Y. Zhang, Proc. IS&T/SID Eleventh Color Imaging Conference: Color Science, Systems and Applications, 04-07 Nov. 2003, Scottsdale, AZ, pp. 266-271. Winner of the 2003 Best Poster “Cactus” Award. 116. “Collusion-resilient fingerprinting using random pre-warping,” M. U. Celik, G. Sharma, A. M. Tekalp, Proc. IEEE Intl. Conf. on Image Processing, 14-17 September 2003, Barcelona, Spain. 117. “Illuminant multiplexed imaging: basics and demonstration,” G. Sharma, R. P. Loce, S. J. Harrington, and Y. Zhang, in Final Prog. and Proc. of The PICS Conference: The Digital Photography Conference, 13-16 May 2003, Rochester, NY, pp. 542-547. 118. “Level-successive encoding for digital photography,” M. U. Celik, A. M. Tekalp, and G. Sharma, in Final Prog. and Proc. of The PICS Conference: The Digital Photography Conference, 13-16 May 2003, Rochester, NY, pp. 330-334. 119. “Error-diffusion robust to mis-registration in multi-pass printing,” Z. Fan, G. Sharma, and S. Wang, in Final Prog. and Proc. 
of The PICS Conference: The Digital Photography Conference, 13-16 May 2003, Rochester, NY, pp. 413-420. 120. “Robust processing of color target measurements for device characterization,” R. Bala, G. Sharma, D. Venable, in Final Prog. and Proc. of The PICS Conference: The Digital Photography Conference, 13-16 May 2003, Rochester, NY, pp. 413-420. 121. “Level-embedded lossless image compression,” M. U. Celik, A. M. Tekalp, G. Sharma, in Proc. IEEE ICASSP 2003, 6-10 April 2003, Hong Kong, pp. III-245-248. 122. “Localized lossless authentication watermark (LAW),” M. U. Celik, G. Sharma, A. M. Tekalp, and E. Saber, Proc. SPIE: Security and Watermarking of Multimedia Contents V, vol. 5020, 20-24 January 2003, San Jose, CA, pp. 689-698. 123. “Minimal-effort characterization of color printers for additional substrates,” M. Shaw, G. Sharma, R. Bala, and E. N. Dalal, Proc. IS&T/SID Tenth Color Imaging Conference: Color Science, Systems and Applications, 12-15 Nov. 2002, Scottsdale, AZ, pp. 202–207. Winner of the 2002 Best Poster “Cactus” Award. 124. “Reversible data hiding,” M. U. Celik, G. Sharma, E. Saber, and A. M. Tekalp, Proc. IEEE Intl. Conf. on Image Processing, 22-25 Sept 2002, Rochester, NY, pp. II-157–160. 125. “Comparative evaluation of color characterization and gamut of LCDs versus CRTs,” G. Sharma, Proc. SPIE: Color Imaging: Device Independent Color, Color Hard Copy, and Applications VII, 20-25 January 2002, San Jose, CA, vol. 4663, pp. 177–186. 126. “Spectrum recovery from colorimetric data for color reproductions,” G. Sharma and S. Wang, Proc. SPIE: Color Imaging: Device Independent Color, Color Hard Copy, and Applications VII, 20-25 January 2002, San Jose, CA, vol. 4663, pp. 8–14. 127. “Digital video authentication with self recovery,” M. U. Celik, G. Sharma, E. Saber, and A. M. Tekalp, Proc. SPIE: Security and Watermarking of Multimedia Contents IV, 20-25 January 2002, San Jose, CA, vol. 4675, pp. 531-541. 128. 
“A hierarchical image authentication watermark with improved localization and security,” M. U. Celik, G. Sharma, E. Saber, and A. M. Tekalp, Proc. IEEE Intl. Conf. on Image Processing, 7-10 Oct. 2001, Thessaloniki, Greece, pp. 502–505. 129. “Influence of resolution on scanner noise perceptibility,” G. Sharma and K. T. Knox, Final Program and Proc. IS&T’s PICS Conference, 22-25 April 2001, Montréal, Canada, pp. 137–141. 130. “Analysis of feature-based geometry invariant watermarking,” M.U. Celik, E. Saber, G. Sharma, and A. M. Tekalp, Proc. SPIE: Security and Watermarking of Multimedia Contents III, vol. 4314, pp. 261–268. 131. “Cancellation of show-through in duplex scanning,” G. Sharma, Proc. IEEE Intl. Conf. on Image Processing, 10-13 Sept. 2000, Vancouver, BC, Canada, pp. II-609-612. 132. “Target-less scanner color calibration,” G. Sharma, Proc. IS&T/SID Seventh Color Imaging Conference: Color Science, Systems and Applications, Scottsdale, AZ, 16-19 November 1999, pp. 69-74. 133. “Total least squares regression in Neugebauer model parameter estimation for dot-on-dot halftone screens,” M. Xia, E. Saber, G. Sharma, and A. M. Tekalp, Proc. IS&T NIP14: Intl. Conf. Dig. Printing Technologies, Toronto, Canada, 18-23 Oct. 1998, pp. 281-284. 134. “Total least squares techniques in color printer characterization,” M. Xia, E. Saber, G. Sharma, and A. M. Tekalp, Proc. IEEE Intl. Conf. Image Proc., Chicago, IL, 4-7 Oct. 1998, pp. 69-73. 135. “The impact of UCR on scanner calibration,” G. Sharma, S. Wang, D. Sidavanahalli, and K. T. Knox, Proc. IS&T PICS Conference, Portland, OR, 17-20 May 1998, pp. 121-124. 136. “Adaptive color rendering for images with flesh tone content,” M. Xia, E. Saber, A. M. Tekalp, and G. Sharma, Proc. SPIE: Color imaging : device-independent color, color hard copy, and graphic arts, 28-30 Jan. 1998, San Jose, CA, vol. 3300, pp. 173-181. 137. “Measures of goodness for color scanners,” G. Sharma and H. J. Trussell, in Proc. 
IS&T/SID Fourth Color Imaging Conference: Color Science, Systems, and Applications, Scottsdale, AZ, 19-22 Nov. 1996, pp. 28-32. 138. “Optimal filter design for multi-illuminant color correction,” G. Sharma and H. J. Trussell, Proc. IS&T/OSA Optics and Imaging in the Information Age, 20-24 Oct. 1996, Rochester, NY, pp. 83-86. 139. “Restoration of uncertain blurs using an error in variables criterion,” G. Sharma and H. J. Trussell, Proceedings IEEE International Conference on Image Processing, 16-19 Sept. 1996, Lausanne, Switzerland, pp. III-81-84. 140. “Simulation of error trapping decoders on a fading channel,” G. Sharma, A. Dholakia and A. A. Hassan, in Proc. IEEE Vehicular Technology Conference, 28 Apr.–1 May 1996, Atlanta, GA, vol. 2, pp. 1361–1365. 141. “Comparison of measures of goodness of color scanning filters,” H. J. Trussell, G. Sharma, P. Chen and S. A. Rajala, Proc. Ninth IEEE Workshop on Image and Multi-dimensional Signal Processing, March 3-6, 1996, Belize, pp. 98-99. 142. “Color scanner performance trade-offs,” G. Sharma and H. J. Trussell, in Proc. SPIE: Color imaging: device-independent color, color hard copy, and graphic arts, 29 Jan-1 Feb. 1996, San Jose, CA, vol. 2658, pp. 270–278. 143. “Automatic calibration of halftones,” K. Knox, C. Hains and G. Sharma, Proc. SPIE: Human Vision and Electronic Imaging, 29 Jan-1 Feb. 1996, San Jose, CA, vol. 2657, pp. 432-436. 144. “Decomposition of fluorescent illuminant spectra for accurate colorimetry,” G. Sharma and H. J. Trussell, Proceedings IEEE International Conference on Image Processing, 13-16 Nov. 1994, Austin, TX, vol. II, pp. 1002-1006. 145. “Signal processing methods in color calibration,” H. J. Trussell and G. Sharma, Proc. SPIE: Device Independent Color Imaging, 6-10 Feb. 1994, San Jose, CA, vol. 2170, pp. 18-23. 146. “Application of set theoretic methods to the calibration of colorimetry instrumentation,” G. Sharma and H. J. 
Trussell, Contributed Lecture, Cornelius Lanczos International Centenary Conference, 12-17 Dec. 1993, Raleigh, NC. 147. “Characterization of scanner sensitivity,” G. Sharma and H. J. Trussell, Proc. of the IS&T/SID Color Imaging Conference: Transforms and Portability of Color, 7-11 Nov. 1993, Scottsdale, AZ, pp. 103-107. EDITED BOOKS • G. Sharma, Ed., “Digital Color Imaging Handbook,” CRC Press, 2003, ISBN: 084930900X. • G. Sharma, F. Zhou, J. Liu, Eds., “International Symposium on Optoelectronic Technology and Application 2014: Image Processing and Pattern Recognition,” Proc. SPIE, Vol. 9301, May 2014. ISBN: 9781628413878 • A. Alattar, N. Memon, G. Sharma, Eds., “Proc. IS&T Electronic Imaging: Media Watermarking, Security, and Forensics 2018,” San Francisco, California, 28 January - 1 February 2018. DOI: 10.2352/ISSN.2470-1173.2018.07.MWSF-557 CONTRIBUTED BOOK CHAPTERS • “Color fundamentals for digital imaging,” G. Sharma, in Digital Color Imaging Handbook, G. Sharma, Ed., CRC Press, 2003. • G. Honan, N. Gekakis, M. Hassanalieragh, A. Nadeau, G. Sharma, and T. Soyata, “Energy harvesting and buffering for cyber physical systems: A review,” in Cyber-Physical Systems: A Computational Perspective, G. M. Siddesh, G. C. Deka, K. G. Srinivasa, and L. M. Patnaik, Eds. Boca Raton, FL: CRC Press, Taylor & Francis Group, 2016, ch. 7, pp. 191–218. • N. Gekakis, A. Nadeau, M. Hassanalieragh, Y. Chen, Z. Liu, G. Honan, F. Erdem, G. Sharma, and T. Soyata, “Modeling of supercapacitors as an energy buffer for cyber-physical systems,” in Cyber-Physical Systems: A Computational Perspective, G. M. Siddesh, G. C. Deka, K. G. Srinivasa, and L. M. Patnaik, Eds. Boca Raton, FL: CRC Press, Taylor & Francis Group, 2016, ch. 6, pp. 171–190. MAGAZINE COLUMNS AND NEWSLETTER ARTICLES 1. “Select trends in image, video, and multidimensional signal processing,” G. Sharma, L. Karam, and P. Wolfe, IEEE Sig. Proc. Mag., vol. 29, no. 1, pp. 174–176, Jan. 2012. 2. 
“Imaging Arithmetic: Physics U Math > Physics + Math,” G. Sharma, Electronic Imaging, Newsletter of the SPIE Electronic Imaging Technical Group, vol. 15, no. 2, pp. 1, June 2005. BOOK REVIEWS • “Hidden Markov Processes: Theory and Applications to Biology by M. Vidyasagar,” reviewed by G. Sharma, SIAM Review, vol. 15, no. 1, 2017, pp. 9–12. • “Introduction to Digital Color Imaging by Hsien-Che Lee,” reviewed by G. Sharma, IEEE Sig. Proc. Mag., vol. 23, no. 5, Sept. 2006, pp. 119–120. • “Introduction to Digital Color Imaging by Hsien-Che Lee,” reviewed by G. Sharma, J. Electronic Imaging, vol. 15, no. 2, August 2006, pp. 029901,1–2. • “Visual Color and Color Mixture by Jozef B. Cohen,” reviewed by G. Sharma, Color Research and Application, vol. 28, no. 1, Feb. 2003, pp. 76. OTHER RESEARCH PRODUCTS 1. L. Ding, A. E. Kuriyan, R. S. Ramchandran, C. C. Wykoff, and G. Sharma, “PRIME-FP20: Ultra-widefield fundus photography vessel segmentation dataset,” IEEE Dataport, 2020, https://doi.org/10.21227/ctgj-1367. 2. L. Ding, A. E. Kuriyan, R. S. Ramchandran, C. C. Wykoff, and G. Sharma, “(Code Ocean capsule): Deep vessel segmentation for ultra-widefield fundus photography and evaluation,” https://doi.org/10.24433/CO.5712234.v1, Sept. 2020. 3. L. Ding, M. H. Bawany, A. E. Kuriyan, R. S. Ramchandran, C. C. Wykoff, and G. Sharma, “RECOVERY-FA19: Ultra-widefield fluorescein angiography vessel detection dataset,” IEEE Dataport, 2019, http://dx.doi.org/10.21227/m9yw-xs04. 4. L. Ding, M. H. Bawany, A. E. Kuriyan, R. S. Ramchandran, C. C. Wykoff, and G. Sharma, “(Code Ocean capsule): Deep vessel segmentation for fluorescein angiography and evaluation,” https://doi.org/10.24433/CO.1133548.v1, Apr. 2020. 5. Y. Zhang, L. Ding, and G. Sharma, “(Code Ocean capsule): Local-linear-fitting-based matting for joint hole filling and depth upsampling of RGB-D images,” https://doi.org/10.24433/CO.5593522.v1, May 2019. 6. B. Li, X. Liu, K. Dinesh, Z. Duan, and G. 
Sharma, “Data from: Creating a multi-track classical music performance dataset for multi-modal music analysis: challenges, insights, and applications,” 2019, https://doi.org/10.5061/dryad.ng3r749.2. 7. Y. Zhang, L. Ding, and G. Sharma, “(Data) HazeRD: An outdoor scene dataset and benchmark for single image dehazing,” 2019, http://dx.doi.org/10.21227/h6q8-y165. 8. A. O. Harmanci, G. Sharma, and D. H. Mathews, “TurboFold Software for iterative probabilistic estimation of secondary structures for multiple RNA sequences,” available as part of RNAStructure at http://rna.urmc.rochester.edu/RNAstructure.html. 9. H. Xie, C. E. Rodríguez-Pardo, and G. Sharma, “Code for Pareto Optimal Color Display Primary Design,” available at https://labsites.rochester.edu/gsharma/openware/. 10. M. Habibzadeh, M. Hassanalieragh, A. Ishikawa, T. Soyata, A. Nadeau, G. Sharma, “Designs for UR-SolarCap: An Open Source Intelligent Auto-Wakeup Solar/Wind Energy Harvesting System for Supercapacitor based Energy Buffering”, available at http://www2.ece.rochester.edu/projects/siplab/OpenWare/UR-SolarCap.html. 11. T. R. Mosmann, I. Naim, J. Rebhahn, S. Datta, J. S. Cavenaugh, J. M. Weaver, and G. Sharma, “SWIFT FlowCytometry Analysis Suite,” available at http://www2.ece.rochester.edu/projects/siplab/Software/SWIFT.html. 12. J. A. Rebhahn, N. Deng, G. Sharma, A. M. Livingstone, S. Huang and T. R. Mosmann, “Software for LAVA: Landscape Animation for Visualizing Attractors,” available at http://www2.ece.rochester.edu/projects/siplab/Software/LAVA.html. 13. H. Aly and G. Sharma, “Software (MATLAB) for A regularized model-based optimization framework for pan-sharpening,” available at https://labsites.rochester.edu/gsharma/openware/. 14. C. Yu and G. Sharma, “Software for Improved low-density parity check accumulate (LDPCA) codes,” available at https://labsites.rochester.edu/gsharma/openware/. 15. G. Sharma, W. Wu, E. N. Dalal, “CIEDE2000 Code and Information,” available at http://www2.ece.rochester.edu/~gsharma/ciede2000/. KEYNOTE/HIGHLIGHT TALKS 1. “Leveraging Old Tricks in a New World: Efficient Generation of Labeled Data for Deep Learning,” G. Sharma, Keynote talk at 5th IAPR International Conference on Computer Vision & Image Processing (CVIP), December 5, 2020, Indian Institute of Information Technology (IIIT), Allahabad, India [Online]. 2. “Large Scale Visual Data Analytics for Geospatial Applications,” G. Sharma, Keynote talk at SIU 2020: The 28th IEEE Conference on Signal Processing and Communications Applications, 5 October 2020, Gaziantep, Turkey [Online]. 3. “Leveraging Old Tricks in A New World: Efficient Generation of Labeled Data for Deep Learning,” G. Sharma, Keynote talk at 11th Symposium on Image Processing, Image Analysis and Real Time Imaging (IPIARTI), 22 September 2020, Universiti Teknologi Malaysia, Jalan Semarak, Kuala Lumpur, Malaysia [Online]. 4. “Leveraging Old Tricks in A New World: Efficient Generation of Labeled Data for Deep Learning,” G. Sharma, Keynote talk at IEEE Region 10 Symposium (TENSYMP), 5 June 2020, Dhaka, Bangladesh [Online]. 5. “Wearable Sensor Signal Processing and Data Analytics for Health Applications,” G. Sharma, Keynote talk at International Conference on Recent Advances in Computer Science and Technology (ICRACST-2020), 21 February 2020, G H Patel College of Engineering & Technology (GCET), Vidya Vallabh Nagar, Gujarat, India. 6. “Smart Light-Weight Body Worn Sensors for Health Analytics,” G. Sharma, Plenary talk at International Conference on Advances in VLSI and Embedded Systems (AVES-2019), 20 December 2019, Sardar Vallabhbhai National Institute of Technology, Surat, Gujarat, India. 7. “Smart Light-Weight Body Worn Sensors for Health Analytics,” G. Sharma, Keynote talk at Fifth IEEE International Symposium on Smart Electronic Systems (iSES), 17 December 2019, National Institute of Technology, Rourkela, Odisha, India. 8. 
“Leveraging Old Tricks in A New World: Efficient Generation of Labeled Data for Deep Learning,” G. Sharma, Keynote and IEEE Signal Processing Society Distinguished Lecture talk at Third IEEE Conference on Information and Communication Technology, 7 December 2019, Indian Institute of Information Technology (IIIT), Allahabad, India. 9. “Large Scale Visual Data Analytics for Geospatial Applications,” G. Sharma, IEEE India Council International Conference (INDICON), 16 December 2017, Indian Institute of Technology, Roorkee, India. 10. “Probabilistic Decoding in Communications and Bioinformatics: A Turbo Approach,” G. Sharma, Keynote at Global Initiative of Academic Networks (GIAN) Workshop and IEEE Signal Processing Society Distinguished Lecturer Talk, 11 December 2016, Indian Institute of Information Technology (IIIT), Allahabad, India. 11. “Large Scale Visual Analytics for Wide Area Motion Imagery,” G. Sharma, IEEE UP Section Conference on Electrical, Computer and Electronics (UPCON), 10 December 2016, Indian Institute of Technology (IIT), (Banaras Hindu University) Varanasi, India. 12. “Translating Signal Processing Theory Into Applications,” G. Sharma, IEEE UP Section Conference on Electrical, Computer and Electronics (UPCON), 5 December 2015, Indian Institute of Information Technology (IIIT), Allahabad, India. 13. “Translating Signal Processing Theory Into Applications,” G. Sharma, National Seminar on New Trends in Signal Processing (NeTSiP-2015), 3—4 October 2015, Dhirubhai Ambani Institute of Information and Communication Technology (DA-IICT), Gandhinagar, India. 14. “How to Write a Quality Technical Paper and Where to Publish Within IEEE”, IEEE Authorship Workshop, 20 August 2015, Indian Institute of Technology, Madras, Chennai, India. 15. “How to Write a Quality Technical Paper and Where to Publish Within IEEE”, IEEE Authorship Workshop, 18 August 2015, Don Bosco Institute of Technology, Mumbai, India. 16. 
“Probabilistic Decoding in Communications and Bioinformatics: A Turbo Approach,” G. Sharma, IEEE SPS-APSIPA Winter School on Machine Intelligence and Signal Processing (MISP), Dec 20-23, 2014, Indraprastha Institute of Information Technology (IIITD), Delhi, India. 17. “Imaging Arithmetic: Physics ⋃ Math > Physics + Math,” G. Sharma, CP7.0 Workshop: Colour Imaging and Printing - State of the Art and Trends for a New Generation, Oct. 24, 2012. 18. “Turbo-decoding of RNA Secondary Structure,” G. Sharma, 30th Brazilian Telecommunications Symposium (SBrT’12), 14 September 2012. 19. “Digital Watermarking: A Feasible Signal Design Perspective,” G. Sharma, International Conference on Image Information Processing (ICIIP), Waknaghat, Shimla, Himachal Pradesh, India, Nov. 4, 2011. 20. “Color Imaging Arithmetic: Physics ⋃ Math > Physics + Math,” G. Sharma, Computational Color Imaging Workshop, Milan, Italy, Apr. 20, 2011. INVITED SEMINARS AND TALKS 1. “AI in Health Care: Emerging Directions and Sample Case Studies,” G. Sharma, invited talk at Faculty Development Program on AI for Healthcare, Sarvajanik College of Engineering and Technology, November 23, 2020, Surat, Gujarat, India [Online]. 2. “Leveraging Old Tricks in a New World: Efficient Generation of Labeled Data for Deep Learning,” G. Sharma, IEEE Signal Processing Society Distinguished Lecture, November 2, 2020, Singapore [Online]. 3. “Leveraging Old Tricks in a New World: Efficient Generation of Labeled Data for Deep Learning,” G. Sharma, IEEE Signal Processing Society Distinguished Lecture, October 20, 2020, Indian Institute of Technology, Kharagpur, West Bengal, India [Online]. 4. “Probabilistic Decoding in Communications and Bioinformatics: A Turbo Approach,” G. Sharma, IEEE Signal Processing Society Distinguished Lecture, April 21, 2020, IEEE Spanish Signal Processing and Communications Joint Chapter, Carlos III University of Madrid (UC3M), Leganés, Spain [Online]. 5. 
“Imaging Arithmetic: Physics U Math > Physics + Math,” G. Sharma, IEEE Signal Processing Society Distinguished Lecture at University of Puerto Rico Mayagüez (UPRM), March 3, 2020, Mayagüez, Puerto Rico. 6. “Leveraging Old Tricks in a New World: Efficient Generation of Labeled Data for Deep Learning,” G. Sharma, IEEE Signal Processing Society Distinguished Lecture, Rochester Chapter, February 4, 2020, Rochester Institute of Technology, Rochester, NY, USA. 7. “Large Scale Data Analytics for Airborne Imagery,” G. Sharma, IEEE Signal Processing Society Distinguished Lecture, Chicago Chapter, January 22, 2020, University of Illinois, Chicago, IL, USA. 8. “Imaging Arithmetic: Physics U Math > Physics + Math,” G. Sharma, IEEE Signal Processing Society Distinguished Lecture at Sarvajanik College of Engineering and Technology (SCET), 20 December 2019, Surat, Gujarat, India. 9. “Imaging Arithmetic: Physics U Math > Physics + Math,” G. Sharma, invited seminar at Indian Institute of Technology, Ropar, December 13, 2019, Punjab, India. 10. “Turbo-decoding and Belief Propagation in Bioinformatics,” G. Sharma, invited seminar at Jaypee Institute of Information Technology (JIIT), December 11, 2019, Noida, Uttar Pradesh, India. 11. “Set Theoretic Feasibility and Optimality Frameworks for Data Hiding and Privacy,” G. Sharma, IEEE Signal Processing Society Distinguished Lecture at Indraprastha Institute of Information Technology (IIIT), 10 December 2019, Delhi, India. 12. “Imaging Arithmetic: Physics U Math > Physics + Math,” G. Sharma, invited seminar at Dept. of Electronics and Communication Engineering, Madan Mohan Malviya University of Technology, November 29, 2019, Gorakhpur, U.P., India. 13. “Imaging Arithmetic: Physics U Math > Physics + Math,” G. Sharma, invited seminar at Indian Institute of Information Technology Allahabad, November 28, 2019, Prayagraj, U.P., India. 14. “Large Scale Data Analytics for Airborne Imagery,” G. Sharma, Eshbach Lecture and ECE Dept. 
Distinguished Lecture, Northwestern University, October 30, 2019, Evanston, IL, USA. 15. “Large Scale Data Analytics for Airborne Imagery,” G. Sharma, invited seminar at Indian Institute of Science, September 10, 2019, Bangalore, India. 16. “Large Scale Data Analytics for Airborne Imagery,” G. Sharma, invited seminar at Centre for Artificial Intelligence, University of Technology Sydney, August 5, 2019, Sydney, Australia. 17. “Large Scale Visual Data Analytics for Geospatial Applications,” G. Sharma, invited seminar at Annual ShanghaiTech Symposium on Information Science and Technology (ASSIST) 2019, July 1, 2019, Shanghai, China. 18. “Large Scale Data Analytics for Airborne Imagery,” G. Sharma, invited seminar at College of Computer Science, Nankai University, April 09, 2019, Tianjin, China. 19. “Large Scale Visual Data Analytics for Geospatial Applications,” G. Sharma, invited seminar at School of Computer Science and Technology, Harbin Institute of Technology, March 22, 2019, Harbin, China. 20. “Imaging Arithmetic: Physics U Math > Physics + Math,” G. Sharma, invited seminar at School of Computer Science and Technology, Harbin Institute of Technology, March 15, 2019, Harbin, China. 21. “Data Analytics for Wearable Sensor-based Health Monitoring,” G. Sharma, invited talk at Federal Institute of Science and Technology (FISAT), February 27, 2019, Angamaly (Ernakulam Distt.), Kerala, India. 22. “Large Scale Visual Data Analytics for Geospatial Applications,” G. Sharma, invited talk at Dept. of Computer Science, University of Kerala, February 22, 2019, Kariavattom, Thiruvananthapuram, Kerala, India. 23. “Imaging Arithmetic: Physics U Math > Physics + Math,” G. Sharma, invited talk at International School of Photonics, Cochin University of Science and Technology (CUSAT), February 21, 2019, Kochi, Kerala, India. 24. “Imaging Arithmetic: Physics U Math > Physics + Math,” G. 
Sharma, invited Public Lecture at Cochin University of Science and Technology (CUSAT), February 20, 2019, Kochi, Kerala, India. 25. “Data Analytics for Wearable Sensor-based Health Monitoring,” G. Sharma, invited talk at Division of Information Technology, School of Engineering, Cochin University of Science and Technology (CUSAT), February 4, 2019, Kochi, Kerala, India. 26. “Large Scale Visual Data Analytics for Geospatial Applications,” G. Sharma, invited Public Lecture at Cochin University of Science and Technology (CUSAT), January 29, 2019, Kochi, Kerala, India. 27. “Large Scale Visual Data Analytics for Geospatial Applications,” G. Sharma, invited seminar at Norwegian University of Science and Technology (NTNU), January 7, 2019, Gjøvik, Norway. 28. “Large Scale Visual Data Analytics for Geospatial Applications,” G. Sharma, invited seminar at the Dept. of Computer Science and Engineering, University of Ioannina, June 12, 2018, Ioannina, Greece. 29. “Large Scale Visual Data Analytics for Geospatial Applications,” G. Sharma, invited seminar at the Rochester Institute of Technology, Center for Imaging Science (CIS) Seminar Series, Apr 25, 2018, Rochester, NY. 30. “Imaging Arithmetic: Physics U Math > Physics + Math,” G. Sharma, invited Visiting Lecturer talk, IIT Roorkee SPIE Student Branch, December 14, 2017, Roorkee, Uttarakhand, India. 31. “Wearable Sensor Analytics for Medicine,” G. Sharma, invited presentation at Gujarat Technological University, March 21, 2017, Ahmedabad, India. 32. “Digital Biomarkers for Huntington’s Disease using Multiple Body-affixed, Lightweight Sensors,” G. Sharma, invited presentation at Huntington Study Group (HSG) Annual Meeting, November 04, 2016, Nashville, TN. 33. “Image-based Data Interfaces: Revisiting Barcodes and Watermarks for Mobile Applications,” G. Sharma, invited presentation at Gujarat Technological University, March 15, 2016, Ahmedabad, India. 34. “Large Scale Visual Analytics for Wide Area Motion Imagery,” G.
Sharma, invited presentation given at IEEE Gujarat Section and IEEE Student Branch Ahmedabad University, March 14, 2016, Ahmedabad, India. 35. “Color Barcodes for Mobile and Other Applications,” G. Sharma, invited presentation given to Honeywell Corporation, March 03, 2016. 36. “Color Science and Imaging: A Brief Introduction and Overview,” G. Sharma, invited presentation at Samsung Research India, Jan 08, 2016, Bangalore, India. 37. “Color Barcodes and Health Sensing on Smart Phones,” G. Sharma, invited presentation at Flipkart India, Jan 07, 2016, Bangalore, India. 38. “Image-based data interfaces revisited: Barcodes and watermarks for the mobile and digital worlds,” G. Sharma, invited speaker at 8th Intl. Conf. on Comm. Sys. and Networks (COMSNETS), Bangalore, India, Jan. 6, 2016. 39. “Imaging Arithmetic: Physics U Math > Physics + Math,” G. Sharma, invited presentation at Center of Excellence in Signal and Image Processing, College of Engineering, Pune (COEP) and the Institution of Engineering and Technology (IET), Pune Local Chapter, August 21, 2015, Pune, India. 40. “Imaging Arithmetic: Physics U Math > Physics + Math,” G. Sharma, invited presentation at IEEE Madras Section, August 19, 2015, Chennai, India. 41. “Set Theoretic Watermarking: An Optimality and Feasibility Framework for Data Hiding,” G. Sharma, invited presentation at Bhaskaracharya Institute For Space Applications and Geo-Informatics (BISAG), 19 March 2015. 42. “Set Theoretic Watermarking: An Optimality and Feasibility Framework for Data Hiding,” G. Sharma, invited presentation at Gujarat Technological University, March 18, 2015, Ahmedabad, India. 43. “Imaging Arithmetic: Physics U Math > Physics + Math,” invited presentation at Dhirubhai Ambani Institute of Information and Communication Technology (DA-IICT) and Gujarat Chapter of IEEE, Jan. 8, 2015, Gandhinagar, India. 44.
“Imaging Arithmetic: Physics U Math > Physics + Math,” invited presentation at Indraprastha Institute of Information Technology (IIITD), Aug. 7, 2014, New Delhi, India. 45. “Imaging Arithmetic: Physics U Math > Physics + Math,” invited presentation at Chung Ang University, May 21, 2014, Seoul, South Korea. 46. “Mathematical Modeling, Design, and Optimization of Multiprimary Color Displays,” G. Sharma, invited presentation at Inha University, Nam-gu, Incheon, South Korea, 20 May 2014. 47. “Set Theoretic Watermarking: An Optimality and Feasibility Framework for Data Hiding,” G. Sharma, invited presentation at Dept. of Electrical and Electronic Engineering, Yonsei University, Seoul, South Korea, 20 May 2014. 48. “Mathematical Modeling, Design, and Optimization of Multiprimary Color Displays,” G. Sharma, invited presentation at Samsung Advanced Technology Training Institute (SATTI), Suwon, South Korea, 19 May 2014. 49. “Imaging Arithmetic: Physics U Math > Physics + Math,” invited presentation at Department of Automation, Tsinghua University, May 16, 2014, Beijing, China. 50. “Decoding RNA Secondary Structure from Multiple Homologs: A Turbo-Decoding Approach,” invited presentation at School of Life Sciences, Tsinghua University, May 16, 2014, Beijing, China. 51. “Color Barcodes for Mobile and Other Applications,” G. Sharma, invited presentation at Conference 9: Image Processing & Pattern Recognition, part of the International Symposium on Optoelectronic Technology and Application, China National Convention Center, Beijing, China, 14 May 2014. 52. “Color Barcodes for Mobile and Other Applications,” G. Sharma, invited presentation at Qualcomm, August 28, 2013, San Diego, CA. 53. “Turbo-Decoding of RNA Secondary Structure,” G. Sharma, invited talk at the Stochastic Information Processing (SIP) group, June 19, 2013, Computer Science Department, University of Geneva, Geneva, Switzerland. 54.
“Set Theoretic Watermarking: An Optimality and Feasibility Framework for Data Hiding,” G. Sharma, invited talk at the Stochastic Information Processing (SIP) group, June 19, 2013, Computer Science Department, University of Geneva, Geneva, Switzerland. 55. “Turbo-Decoding of RNA Secondary Structure,” G. Sharma, invited talk at École Polytechnique fédérale de Lausanne (EPFL), June 18, 2013, Lausanne, Switzerland. 56. “Imaging Arithmetic: Physics U Math > Physics + Math,” G. Sharma, invited talk at École Polytechnique fédérale de Lausanne (EPFL), June 12, 2013, Lausanne, Switzerland. 57. “Color Barcodes for Mobile and Other Applications,” G. Sharma, invited seminar at the Sharp Labs America, June 03, 2013, Camas, WA. 58. “Color Barcodes for Mobile and Other Applications,” G. Sharma, invited seminar at the Rochester Institute of Technology, Center for Imaging Science (CIS) Seminar Series, May 8, 2013, Rochester, NY. 59. “Technology Transfer and Entrepreneurship: A Sampling from the University of Rochester,” G. Sharma, invited talk at EuTec Seminar Autumn, 2012, Gjøvik University College, October 15, 2012. 60. “System Optimization in Digital Color Imaging,” G. Sharma, invited guest lectures in course IMT5261: Special Topics in Color Imaging, Gjøvik University College, October 10 & 17, 2012. 61. “Imaging Arithmetic: Physics U Math > Physics + Math,” G. Sharma, invited talk at National Engineering College, August 29, 2012, Kovilpatti, Tamilnadu, India. 62. “A Set-theoretic framework for Data Hiding,” G. Sharma, invited talk at National Engineering College, August 29, 2012, Kovilpatti, Tamilnadu, India. 63. “Imaging Arithmetic: Physics U Math > Physics + Math,” G. Sharma, invited talk at Hewlett-Packard Laboratories, June 12, 2012, Palo Alto, CA. 64. “Making Perceptual Constraints First Class Citizens in the Watermarking World: A Set Theoretic Framework,” invited talk at Dept. of Electrical Engineering and Computer Science, June 6, 2012, Evanston, IL. 65. “Imaging Arithmetic: Physics U Math > Physics + Math,” G. Sharma, invited talk at the Stanford Center for Image Systems Engineering (SCIEN), May 15, 2012, Palo Alto, CA. 66. “Perception based Color Visualization and Distributed Media Processing,” invited presentation at Army Research Laboratory, Adelphi, MD, 07 May 2012. 67. “Decoding RNA Secondary Structure from Multiple Homologs: A Turbo-Decoding Approach,” invited presentation at University of California, Santa Cruz, Dept. of Biomolecular Engineering, May 03, 2012, Santa Cruz, CA. 68. “Turbo-Decoding of RNA Secondary Structure,” invited presentation at University of California, Berkeley EECS Dept. Networking, Communications, and DSP Seminar, April 23, 2012, Berkeley, CA. 69. “Multi-primary Displays: Modeling and Color Characterization,” G. Sharma, invited presentation at Qualcomm, April 20, 2012, San Jose, CA. 70. “Distributed Communications and Mobile Color Barcodes,” G. Sharma, invited presentation at Huawei Research Laboratory, April 04, 2012, Santa Clara, CA. 71. “Systems Approaches in Imaging Systems,” G. Sharma, invited presentation at Sharp Central Research Laboratory, March 23, 2012, Tenri, Japan. 72. “Systems Approaches in Color and Imaging,” G. Sharma, invited presentation at Sony Shinagawa Technology Center, March 22, 2012, Shinagawa, Japan. 73. “Systems Approaches in Color and Imaging,” G. Sharma, invited presentation at Chiba University, March 22, 2012, Chiba, Japan. 74. “Decoding RNA Secondary Structure: An Iterative Belief Propagation Turbo-Decoding Approach,” G. Sharma, invited presentation at Dept. of Computer Science, University of Memphis, February 17, 2012, Memphis, TN. 75. “Turbo-Decoding of RNA Secondary Structure,” G. Sharma, invited presentation at Hewlett Packard Labs, February 01, 2012, Palo Alto, CA. 76. “Imaging Arithmetic: Physics U Math > Physics + Math,” G. Sharma, NIIT University Distinguished Visitor Lecture Seminar, November 07, 2011, Neemrana, Rajasthan, India.
77. “Set Theoretic Watermarking: An Optimality and Feasibility Framework for Data Hiding,” G. Sharma, invited seminar at Digimarc Corporation, August 02, 2011, Beaverton, OR. 78. “TurboFold: Iterative Probabilistic Secondary Structure Prediction for Multiple RNA Sequences,” G. Sharma, invited seminar at the Sharp Labs America, July 22, 2011, Camas, WA. 79. “Mathematical Metrics for Evaluating Color Recording Devices,” G. Sharma, Invited Seminar at The Norwegian Color Research Laboratory, Gjøvik University College, April 27, 2011, Gjøvik, Norway. 80. “Imaging Arithmetic: Physics U Math > Physics + Math,” G. Sharma, Invited Seminar at The Norwegian Color Research Laboratory, Faculty of Computer Science and Media Technology at Gjøvik University College, April 26, 2011, Gjøvik, Norway. 81. “Feasibility and Optimality Watermarking using Convex Projections,” G. Sharma, Invited Seminar at The Norwegian Color Research Laboratory, Gjøvik University College, April 26, 2011, Gjøvik, Norway. 82. “Mathematical Metrics for Evaluating Color Recording Devices,” Dipartimento di Elettronica e Informazione, Politecnico di Milano, April 19, 2011. 83. “Optimality and Feasibility Frameworks for Data Embedding: A Set Theoretic Approach,” G. Sharma, invited seminar at the Sharp Labs America, January 31, 2011, Camas, WA. 84. “Imaging Arithmetic: Physics U Math > Physics + Math,” G. Sharma, Invited Seminar, Qualcomm, December 2, 2010, San Diego, CA. 85. “Optimality and Feasibility Frameworks for Data Embedding: A Set Theoretic Approach,” G. Sharma, Invited Seminar, Qualcomm, December 2, 2010, San Diego, CA. 86. “Optimality and Feasibility Frameworks for Data Embedding: A Set Theoretic Approach,” G. Sharma, Department of Computer Engineering Seminar, Rochester Institute of Technology, October 28, 2010, Rochester, NY. 87. “Optimality and Feasibility Frameworks for Data Embedding: A Set Theoretic Approach,” G.
Sharma, Invited Seminar, Hong Kong University of Science and Technology, September 30, 2010, Hong Kong. 88. “Optimality and Feasibility Frameworks for Data Embedding: A Set Theoretic Approach,” G. Sharma, Invited Seminar, Xerox Corporation, August 12, 2010, Webster, NY. 89. “Optimality and Feasibility Frameworks for Data Embedding: A Set Theoretic Approach,” G. Sharma, Invited Seminar, Indian Institute of Technology, July 12, 2010, New Delhi, India. 90. “Decoding RNA Structure,” G. Sharma, invited presentation at Sri Venkateswara Institute of Medical Sciences, July 10, 2010, Tirupati, India. 91. “Optimality and Feasibility Frameworks for Data Embedding: A Set Theoretic Approach,” G. Sharma, Invited Seminar, Indian Institute of Science, July 03, 2010, Bangalore, India. 92. “TurboFold: Iterative Probabilistic Secondary Structure Prediction for Multiple RNA Sequences,” G. Sharma, invited presentation at Samsung India, July 02, 2010, Bangalore, India. 93. “Imaging Arithmetic: Physics U Math > Physics + Math,” G. Sharma, invited presentation at Samsung India, July 02, 2010, Bangalore, India. 94. “Imaging Arithmetic: Physics U Math > Physics + Math,” G. Sharma, invited presentation at Hewlett Packard Labs India, July 02, 2010, Bangalore, India. 95. “High Capacity Data Hiding and Encoding Approaches for Printed Documents,” O. Bulan and G. Sharma, invited seminar at the Rochester Chapter of the IEEE Signal Processing Society, April 22, 2010, Rochester, NY. 96. “Systems Approaches in Imaging Systems,” G. Sharma, invited seminar at the Sharp Labs America, April 8, 2010, Camas, WA. 97. “Imaging Arithmetic: Physics U Math > Physics + Math,” G. Sharma, invited seminar at the Rochester Institute of Technology, Center for Imaging Science (CIS) Seminar Series, March 24, 2010, Rochester, NY. 98. “Optimality and Feasibility Frameworks for Data Embedding: A Set Theoretic Approach,” G.
Sharma, Invited Seminar, Arab Academy for Science, Technology, and Maritime Transport, November 11, 2009, Cairo, Egypt. 99. “Grand Challenge Problems in Media Security,” G. Sharma, Regional Symposium on Graduate Education and Research in Information Security (GERIS), 27 October 2009, Binghamton, NY. 100. “Set-theoretic Watermarking: A Feasibility Framework for Data Embedding,” G. Sharma, Research Seminar, Graduate Institute of Electronics Engineering, National Taiwan University, April 27, 2009, Taipei, Taiwan. 101. “Imaging Arithmetic: Physics U Math > Physics + Math,” G. Sharma, invited presentation at National Taiwan University of Science and Technology, April 24, 2009, Taipei, Taiwan. 102. “Set-theoretic Watermarking: A Feasibility Framework for Data Embedding,” G. Sharma, Research Seminar, ECE Dept., State University of New York (SUNY) at Binghamton, February 25, 2009, Binghamton, NY. 103. “High Capacity Data Embedding in Halftone Images via Dot Orientation Modulation,” O. Bulan, V. Monga, G. Sharma, invited Technical Presentation at the Rochester Chapter of Society for Imaging Science and Technology, Rochester, NY, 11 Feb. 2009. 104. “Joint Prediction of RNA Secondary Structure,” G. Sharma, invited seminar at the Electrical and Systems Engineering Department, Washington University, Saint Louis, February 04, 2009, Saint Louis, MO. 105. “Set-theoretic Watermarking: A Feasibility Framework for Data Embedding,” G. Sharma, invited seminar at the Chicago Chapter of the IEEE Signal Processing Society, October 31, 2008, Chicago, IL. 106. “High Capacity Data Hiding in Halftone Images,” G. Sharma, invited presentation at Xerox Research Center, February 21, 2008, Webster, NY. 107. “Set theoretic Watermarking: A Feasibility Framework for Data Hiding,” G. Sharma, invited presentation at Hewlett Packard Laboratories, January 31, 2008, Palo Alto, CA. 108. “Efficient Joint Prediction of RNA Secondary Structure,” G. 
Sharma, invited presentation at Computer Science Research Institute, Sandia National Laboratories, November 9, 2007, Albuquerque, NM. 109. “Imaging Arithmetic: Physics U Math > Physics + Math,” G. Sharma, invited presentation at Ricoh Innovations Inc., November 2, 2007, San Jose, CA. 110. “Set-theoretic Watermarking,” G. Sharma, invited presentation as part of Engineering Week at Harris Corporation, February 20, 2007, Rochester, NY. 111. “Efficient Joint Prediction of RNA Secondary Structure,” G. Sharma, invited presentation at Microsoft Research, January 26, 2007, Redmond, WA. 112. “Imaging Arithmetic: Physics U Math > Physics + Math,” G. Sharma, invited presentation at North Carolina State University, Electrical and Computer Engineering Dept., October 6, 2006, Raleigh, NC. 113. “Set-theoretic Watermarking,” G. Sharma, invited presentation at Syracuse University, Dept. of Electrical Engineering and Computer Science, August 17, 2006, Syracuse, NY. 114. “Multimedia Authentication Watermarks: Integrating Cryptography with Signal Processing,” G. Sharma, invited presentation at General Motors, India Science Laboratory, July 06, 2006, Bangalore, India. 115. “Multimedia Authentication Watermarks: Integrating Cryptography with Signal Processing,” G. Sharma, invited presentation at Dept. of Mathematics and Statistics, May 10, 2006, Rochester Institute of Technology, Rochester, NY. 116. “Authentication Watermarking: Security, Localization, and a new Lossless Framework,” G. Sharma, invited presentation for Xerox Innovation Group, March 16, 2006, Webster, NY. 117. “Mis-registration color models for periodic halftone screens,” G. Sharma, invited presentation at Xerox Labs, El Segundo, CA, Jan. 20, 2006. 118. “Authentication Watermarking: Security, Localization, and a new Lossless Framework,” G. Sharma, invited presentation for IEEE Rochester Signal Processing Society Chapter, Nov. 30, 2005. 119.
“Physically Motivated Signal Processing: Applications in Digital Imaging,” G. Sharma, National University of Singapore, ECE Dept., Oct 28, 2004. 120. “Authentication Watermarking: Security, Localization, and a new Lossless Framework,” G. Sharma, invited presentation at National University of Singapore, CS Dept., Oct 28, 2004. 121. “Authentication Watermarking: Security, Localization, and a new Lossless Framework,” G. Sharma, invited seminar at Polytechnic University, Brooklyn, CS Dept., Oct 01, 2004. 122. “Exploiting Physics in Digital Imaging,” G. Sharma, University of Montreal, Computer Science Colloquium, 20 May 2004, Montreal, Canada. 123. “Signal Processing for Physical Imaging,” G. Sharma, Cornell University Electrical and Computer Engineering Colloquium, 20 April 2004, Ithaca, NY. 124. “Color Image Capture: Challenges and Opportunities,” G. Sharma, invited presentation before the Rochester Institute of Technology, IEEE Student Chapter, 06 Feb 2004, Rochester, NY. 125. “Show-through in Duplex Documents: Correction and Exploits,” G. Sharma, invited presentation at Sony America, 23 Jan 2004, San Jose, CA. 126. “Perspectives on R&D Careers in a Large Company,” G. Sharma, invited guest lecture as part of a course on preparing for careers in academia, 17 November 2003, University of Rochester, Rochester, NY. 127. “Two-dimensional transforms for device color calibration,” R. Bala, G. Sharma, V. Monga, invited presentation at the late breaking news session IS&T/SID Eleventh Color Imaging Conference: Color Science, Systems and Applications, 07 Nov. 2003, Scottsdale, AZ. 128. “Digital Color Scanning: Challenges and Opportunities,” G. Sharma, invited presentation at Hewlett-Packard, Barcelona, Spain, 18 Oct. 2003. 129. “Show-through Cancellation for Duplex Scanning,” G. Sharma, invited Technical Presentation before Rochester Chapter of Society for Imaging Science and Technology, Rochester, NY, 21 Feb. 2001. 130.
“Document Echo Cancellation: Electronic Show-through Correction for Scans of Duplex Printed Pages,” G. Sharma, invited seminar at Department of Electrical and Computer Engineering, Univ. of Rochester, Rochester, NY, 03 Oct. 2001. 131. “Image Processing for Digital Imaging,” G. Sharma, invited seminar at Silicon Automation Systems (now Sasken), Bangalore, India, October 17, 2000. 132. “Image Processing for Digital Imaging,” G. Sharma, invited seminar at the Department of Electrical Computer Engineering, Indian Institute of Science, Bangalore, India, October 16, 2000. 133. “Total Least Squares Regression in Color Printer Calibration,” M. Xia, E. Saber, G. Sharma, and A. M. Tekalp, IEEE Western NY Image Proc. Workshop, Rochester, NY, 19 Sept. 1997. MENTORING/TRAINING SESSIONS AND TALKS 1. “How to Publish a Quality Technical Paper with IEEE,” G. Sharma, keynote talk at IEEE Authorship Workshop for Researchers in India, Part II, 15 October 2020, India [Online]. 2. “Getting started with IEEE Publishing,” G. Sharma, keynote talk at IEEE Authorship Workshop for Researchers in India, Part I, 1 October 2020, India [Online]. 3. “How to Write a Good Technical Paper,” G. Sharma, invited talk at WiSe (Women in Sensors) Week 2020, Gujarat Chapter of Sensors Council, September 28, 2020, Gujarat, India [Online]. 4. “Publication Etiquette and Ethics: Things You Should Know Before Submitting Your Next Paper,” G. Sharma, invited presentation at Indian Institute of Information Technology Allahabad, November 28, 2019, Prayagraj, U.P., India. 5. “Tips and Tricks for Writing and Publishing Technical Papers,” G. Sharma, invited presentation at Indian Institute of Information Technology Allahabad, November 27, 2019, Prayagraj, U.P., India. 6. “Perspectives and Advice on Effectively Initiating Research,” G. Sharma, invited talk at Indian Institute of Information Technology Allahabad, November 27, 2019, Prayagraj, U.P., India. 7.
“How to write a quality technical paper and where to publish within IEEE,” G. Sharma, keynote talk at IEEE Authorship Workshop, B. R. Ambedkar Central Library, Jawaharlal Nehru University, 13 September 2019, New Delhi, India. 8. “How to write a quality technical paper and where to publish within IEEE,” G. Sharma, keynote talk at IEEE Authorship Workshop, National Physical Laboratory, 13 September 2019, New Delhi, India. 9. “How to write a quality technical paper and where to publish within IEEE,” G. Sharma, keynote talk at IEEE Authorship Workshop, Amity University, 12 September 2019, Noida, India. 10. “How to write a quality technical paper and where to publish within IEEE,” G. Sharma, keynote talk at IEEE Authorship Workshop, National Informatics Centre, 11 September 2019, New Delhi, India. 11. “Walk and Breakfast with an IEEE Fellow: Prof. Gaurav Sharma,” Networking event organized by IEEE Bangalore Section, Lalbagh Garden, Bangalore, India, 8 September 2019. 12. “How to write a quality technical paper and where to publish within IEEE,” G. Sharma, keynote talk at IEEE Authorship Workshop, Indian Institute of Science, 9 September 2019, Bangalore, India. 13. “How to write a quality technical paper and where to publish within IEEE,” G. Sharma, keynote talk at IEEE Authorship Workshop, National Aeronautics Laboratory, 9 September 2019, Bangalore, India. 14. “Initiating Research Effectively: A Guide and Personal Perspectives,” G. Sharma, presentation at School of Computer Science and Technology, Harbin Institute of Technology, 20 March 2019, Harbin, China. 15. “Publication Etiquette and Ethics: Things You Should Know Before Submitting your Next paper,” G. Sharma, presentation at School of Computer Science and Technology, Harbin Institute of Technology, 15 March 2019, Harbin, China. 16. “Publication Etiquette and Ethics: Things You Should Know Before Submitting your Next paper,” J. Fowler and G. Sharma, information session at the IEEE Intl. Conf.
Image Proc., Divani Caravel Hotel, Athens, Greece, 9 October 2018. 17. “Publication Etiquette and Ethics: Things You Should Know Before Submitting your Next paper,” J. Fowler and G. Sharma, information session at the IEEE Intl. Conf. Acoustics Speech and Sig. Proc., Calgary Telus Convention Center, Calgary, Alberta, Canada, 20 April 2018. 18. “Initiating Research Effectively: A Guide and Personal Perspectives,” G. Sharma, presentation at Indian Institute of Technology, Indore, April 6, 2018, Indore, India. 19. “Publication Etiquette and Ethics: Things You Should Know Before Submitting your Next paper,” G. Sharma, presentation at Indian Institute of Information Technology, Allahabad, January 4, 2018, Allahabad, India. 20. “Publication Etiquette and Ethics: Things You Should Know Before Submitting your Next paper,” G. Sharma, presentation at Dept. of Electronics and Communications, Indian Institute of Technology, Roorkee, December 17, 2017, Roorkee, India. 21. “Publication Etiquette and Ethics: Things You Should Know Before Submitting your Next paper,” G. Sharma, presentation at Gujarat Technological University, March 21, 2017, Ahmedabad, India. 22. “Publication Etiquette and Ethics: Things You Should Know Before Submitting your Next paper,” S. Hemami and G. Sharma, information session at the IEEE Intl. Conf. Acoustics Speech and Sig. Proc., Shanghai International Convention Center/Oriental Riverside Hotel, Shanghai, China, 24 March 2016. 23. “Publication Etiquette and Ethics: Things You Should Know Before Submitting your Next paper,” G. Sharma, presentation at Gujarat Technological University, March 16, 2016, Ahmedabad, India. 24. “Publication Etiquette and Ethics: Things You Should Know Before Submitting your Next paper,” G. Sharma, invited presentation at Seth Jai Parkash Mukand Lal Institute of Engineering & Technology (JMIT), Aug. 24, 2015, Radaur, Yamunanagar, Haryana, India. 25.
“Publication Etiquette and Ethics: Things You Should Know Before Submitting your Next paper,” S. Hemami and G. Sharma, information session at the IEEE Intl. Conf. Acoustics Speech and Sig. Proc., Brisbane Convention & Exhibition Centre, Brisbane, Australia, 21 April 2015. 26. “Publication Etiquette and Ethics: Things You Should Know Before Submitting your First paper,” G. Sharma, invited presentation at Gujarat Technological University, March 28, 2014, Ahmedabad, India. 27. “Publication Etiquette and Ethics: Things You Should Know Before Submitting your First paper,” G. Sharma, invited presentation at Bhaskaracharya Institute For Space Applications and Geo-Informatics (BISAG), 19 March 2015. 28. “Publication Etiquette and Ethics: Things You Should Know Before Submitting your First paper,” G. Sharma, Indo-US Collaboration on Engineering Education (IUCEE) Webinar, 17 June 2014. 29. “Publication Etiquette and Ethics: Things You Should Know Before Submitting your First paper,” G. Sharma, information session at International Symposium on Optoelectronic Technology and Application, China National Convention Center, Beijing, China, 13 May 2014. PANELS 1. Invited panelist for “Challenges and Opportunities for Handheld Device Displays,” panel at 27th annual Symposium on Electronic Imaging, San Francisco, CA, 10 Feb. 2015. 2. Panel organizer, “Online Learning - Will Technology Transform Higher Education?”, 2013 Electronic Imaging Symposium, Tuesday, Feb 5, 2013, 6-7:30pm, Hyatt Regency Hotel, San Francisco Airport. 3. Invited panelist for “The Future of Watermarking,” panel at Media Watermarking, Security, and Forensics XIV, IS&T/SPIE Electronic Imaging Symposium, 05 Feb. 2013, San Francisco, CA. 4.
Invited panelist for “How can a PhD add value to a career,” panel at 2011 Western NY Image Processing Workshop, CIMS Building, Rochester Institute of Technology, Rochester, NY, November 14, 2011. 5. Invited panelist for “Research in Color Science: Next Challenges,” panel at HP Labs, Jan 23, 2009. 6. Invited panelist on Rochester Asians’ Network panel “Transitions from the Corporate World - Strategies for Success,” Penfield Country Club, Penfield, NY, Oct. 3, 2008. COURSES DEVELOPED AND CONDUCTED • Probabilistic Models for Inference and Estimation, ECE#443, Fall 2020, 2016, Univ. of Rochester, graduate. • Signals, ECE#241, Fall 2018, 2017, 2016, 2015, 2014, 2013, Univ. of Rochester, undergraduate. • Information Theory, ECE#450, Fall 2018, 2014, 2010, 2008, Spring 2006, 2004, Univ. of Rochester, graduate. • Digital Communications, ECE#444/244, Fall 2017, 2015, Spring 2013, Fall 2011, 2009, 2007, 2006, 2004, 2003, Univ. of Rochester, graduate/senior. • Computational Bioinformatics in the Big Data Era, ECE#492, Spring 2014, Univ. of Rochester, graduate. • Real-time Signal Processing Portfolio, ECE#294, Fall 2014, 2013, Univ. of Rochester, undergraduate. • Communication Systems, ECE #242, Spring 2008, 2007, 2006, 2005, Univ. of Rochester, undergraduate. • Special Topics in Image Processing, ECE #492N, Spring 2005, Univ. of Rochester, graduate. • Digital Image Processing, EE#301-779, Spring 2003, Rochester Institute of Technology, graduate. • Data & Computer Commun., CE#306-694, Spring 2001, Rochester Institute of Technology, undergraduate. • Speech and Image Compression, EE#301-749, Winter 2001, Rochester Institute of Technology, graduate. • Communication Networks, EE#301-692, Spring 2000, Rochester Institute of Technology, graduate/senior.
SHORT COURSES/TEACHING WORKSHOPS CONDUCTED • Probabilistic Models and Belief Propagation, Indian Institute of Technology, Indore, India, 01–10 August 2016, conducted as part of the Global Initiative of Academic Networks (GIAN), Ministry of Human Resource Development (MHRD), India. • Graphical Models for Machine Learning, Indian Institute of Information Technology, Allahabad, India, 24 December 2017 – 05 January 2018 as part of GIAN. • Color Image Processing, College of Engineering Pune (COEP), India, 15 – 19 January 2018 as part of GIAN. • Media Security and Forensics, Indian Institute of Technology, Indore, India, conducted 26 March – 6 April 2018 as part of GIAN. • Probabilistic Models for Inference and Estimation, Harbin Institute of Technology, Harbin, China, 11 – 21 March 2019. GRADUATE THESIS DIRECTION • Doctoral Students – “Digital watermarking methods for authentication, tamper localization and lossless recovery,” Mehmet U. Celik, ECE Dept., Univ. of Rochester, thesis defended December 2004 (co-advised with A. M. Tekalp). – “Image registration for multi-view image processing,” Gülçin Caner, ECE Dept., Univ. of Rochester, thesis defended July 2006 (co-advised with A. M. Tekalp and W. Heinzelman). – “Clustered-dot periodic halftones: modeling, modulation, and applications,” Basak Oztan, ECE Dept., Univ. of Rochester, thesis completed December 2009. – “Probabilistic computational methods for structural alignment of RNA sequences,” Arif Özgün Harmanci, thesis defended April 2010 (co-advised with D. H. Mathews). – “Set-theoretic watermarking,” Oktay Altun, ECE Dept., Univ. of Rochester, thesis defended August 2010 (co-advised with M. Bocko). – “High Capacity Data Embedding for Printed Documents,” Orhan Bulan, ECE Dept., Univ. of Rochester, thesis defended October 2011. – “Distributed Estimation, Coding, and Scheduling in Wireless Visual Sensor Networks,” Chao Yu, thesis defended Feb 13, 2013.
– “Vehicle Detection and Tracking in Wide Area Motion Imagery by Co-Registering and Exploiting Vector Roadmaps”, Ahmed Elliethy, thesis defended Jan 25, 2017. – “Hidden Markov Models for Supercapacitor State-of-Charge Tracking and Audio Watermarking,” Andrew Nadeau, thesis defended Aug 20, 2017. – Advising and supervising graduate research of Carlos Eduardo Rodriguez Pardo, Research area: “Color Displays and Visualization.” – Advising and supervising graduate research of Irving Barron, Research area: “Media Security and Privacy.” – Advising and supervising graduate research of Li Ding, Research area: “Comparative Analytics for Large Spatial Datasets.” – Advising and supervising graduate research of Karthik Dinesh, Research area: “Motion Analysis for Medical Applications.” • Masters Thesis Students – Hao Xie, Thesis: “Pareto Optimal Primary Designs for Color Displays”, April 18, 2017. – Yanfu Zhang, Thesis: “Upsampling of Color-Depth Images and a Dehazing Benchmarking Dataset”, April 18, 2017. – Iftekhar Naim, Thesis: “Scalable Model-based Clustering for Flow Cytometry,” April 13, 2011. – Claude Fillion, Thesis: “Detection of content adaptive scaling of images for forensic applications,” December 16, 2010. – Darius Fennell, Thesis: “Wavelet-based moving object detection in video with camera motion,” April 14, 2009. – Yang Yu, Thesis: “Estimating motion in IVUS images using pyramidal Lucas Kanade method,” April 14, 2009 (joint with Marvin Doyley). – S. R. Aravindh Balaji, Thesis: “Pre-processing methods for lossless compression of color look-up tables,” December 06, 2007. – Angela D’Orazio, Thesis: “Adaptive determination of alignment constraints for RNA secondary structure prediction,” December 12, 2007. – Vishnu Prasan, Thesis: “Image broadcast in VANETs: A feasibility study using JPEG 2000 with unequal error protection,” December 12, 2007.
  – Mithun Mukherjee, Thesis: “P-HIP: A multiresolution halftoning algorithm for progressive display,” EE Dept., Rochester Institute of Technology, thesis completed Dec. 2004.
• Non-thesis Masters Students
  – Yuxuan He, Research area: “Signal Analysis for Stroke Rehabilitation Exercise Assessment,” Fall 2020 – Spring 2021.
  – Rui Cheng, Research area: “Multimodal Signal Analysis for Medical Applications,” Fall 2019 – Spring 2020.
  – Ismail Sadiq, Topic: “Biological Sequence Alignment,” Fall 2013 – Spring 2014.
  – Weijun Li, Sarang Lele, Juncheng Feng, Topic: “3-D Modeling from Imagery,” Spring 2015.
  – Jake Arkin, Topic: “Computer Vision on Embedded Systems,” Summer 2014.
  – Yuchuan Zhuang, Topic: “Image based analysis of photolytic degradation of daguerreotypes,” 2013.
  – Shuo Chen, Topic: “Image based analysis of photolytic degradation of daguerreotypes,” 2013.
  – Henryk Błasiński, Topic: “Evaluating Color Barcodes for Mobile Applications.”
  – Adem Orsdemir, Topic: “Multi-media security,” December 2008.
  – Junwen Mao, Topic: “Image forensics,” Jan 2009.
  – Chaoyi Chen and Kunyu Xiong, Topic: “Vehicular ad hoc networks,” December 2007.
  – Adil Bilici, Topic: “Multi-dimensional data interpolation techniques for color device characterization,” ECE Dept., University of Rochester, August 2005.
  – Ed Bremer, Topic: “Correspondence estimation in multi-view imagery,” ECE Dept., University of Rochester, December 2005.
  – Matjaz Kranjc, Topic: “Power-aware routing for wireless sensor motes,” May 2006.
  – MS students supervised for projects: Andrew Law, Dan Lewis, Ryan Aures.
• Undergraduate Students
  – Hsin Jui Yeh, Summer 2019 – Fall 2019.
  – Peter Mansour, Summer 2019.
  – Xiang Li (Data Science), Fall 2018.
  – Sixu Meng (Data Science), Fall 2018.
  – Shingirai Dhoro (ECE), Summer, Fall 2018.
  – Colleen Skeete (MechE), Summer 2018.
  – Jiangfeng Lu (ECE), Summer 2018.
  – Ricky Su (Data Science), Spring 2018.
  – Tyler Schmidt (Data Science), Spring 2018.
  – Ariana Cervantes (MechE), Summer 2016.
  – Matthew Dombroski (ECE), Summer 2016.
  – Akihiro Ishikawa (ECE), Summer 2015.
  – Grayson Honan, Fall 2012, Spring 2013.
  – Nicholas Gekakis, Fall 2012, Spring 2013.
  – Seth Schober (ECE), Fall 2013, Spring 2014.
  – Darcey Riley (CS), Spring 2011.
  – Jinnan Hussain (ECE), Summer 2011.
  – Colin Funai (ECE), Fall 2010 – Spring 2011.
  – Iain Marcuson, Fall 2006 – Spring 2006.
• Undergraduate Senior Design Projects
  – “Key-less home entry system utilizing RFID,” Kyle Aures, Scott Warren, Aaron Wescott, and Adam R. Williamson, May 2008.
  – “Speech communication over wireless sensor motes,” Matjaz Kranjc, Osonde Osoba, May 2005.
  – “Wireless microphone network using the Mote IV platform,” Ryan Aures, Matthew Holland, Yang Zhang, Korey Witt, May 2006.
  – “Super-resolution Reconstruction for MRI Images,” Saeed Shaikh, May 2007.

CONSULTING

• Epson, Feb. 2017 – Jun. 2018.
• AIG Inc., Sept. 2014 – Apr. 2017.
• MEI Inc., West Chester, PA, Oct. 2009 – Jan. 2010. Training and consulting in color and security imaging.
• Graphic Security Systems, FL: Subject matter expert in intellectual property litigation, May 2009 – Aug. 2011.
• Texas Instruments, Dallas, TX: Intellectual property assessment in the area of color display device and calibration technology, March 2009.
• Allied Security Trust, Poughkeepsie, NY: Patent evaluation, April – June 2008.
• NanoArk Corporation, Henrietta, NY: Image processing for archival applications, July 2008 – July 2010.
• Consultant on R&D strategy and evaluation for Steve Hoover, Vice-President and Center Manager, Xerox Research Center Webster, Xerox Innovation Group, Xerox Corporation, January 2007 – December 2007.
• Consultant on R&D strategy and evaluation for Sid Dalal, Vice-President and Center Manager, Imaging and Services Technology Center, Xerox Innovation Group, Xerox Corporation, June 2005 – January 2006.
• Eastman Kodak Company, Rochester, NY: Intellectual property assessment in the area of imaging technology, March 2006 – April 2007.
PROFESSIONAL SERVICE/ACTIVITIES

Steering/Advisory Committees
• Member, IEEE TechRxiv Advisory Board, 2019–.
• Member, JEI Editor Search Committee, 2020.
• Steering Committee Chair, IS&T Journal of Perceptual Imaging, 2017.
• Member, Symposium Task Force, IS&T/SPIE Electronic Imaging Symposium, San Francisco, CA, 2016.
• Member, Symposium Steering Committee, IS&T/SPIE Electronic Imaging Symposium, San Francisco, CA, 2012, 2013, 2014.

Editorial Board Membership
• Editor-in-Chief, IEEE Transactions on Image Processing, 2018-2020.
• Editor-in-Chief, SPIE/IS&T Journal of Electronic Imaging, 2011-2015.
• Editorial Board Member for IS&T and Wiley text book series on Imaging Science and Technology, 2010-.
• Associate Editor, SPIE/IS&T Journal of Electronic Imaging, 2003-2010.
• Area Editor, IEEE Transactions on Image Processing, 2004-2007.
• Associate Editor, IEEE Transactions on Image Processing, 2003-2008.
• Associate Editor, IEEE Transactions on Information Forensics and Security, 2004-2009.
• Editorial Board Member, Signal Processing Area, Journal of Electrical and Computer Engineering, Hindawi Publishing Corporation (Open Access), 2009-2010.
• Editorial Board Member, Research Letters in Signal Processing, Hindawi Publishing Corporation (Open Access), 2008-2009.

Board/Committee Membership
• Chair, Strategic Planning Committee (SPC) of the IEEE Publication Services and Products Board (PSPB), 2021-.
• Chair, Future of Conference IP (FuCIP) Sub-committee of the IEEE Conferences Committee (ICC), 2020-.
• Chair, IEEE Ad Hoc Committee on Valid Scientific Content (VSC) on Xplore, 2018-2020.
• Treasurer, IEEE Publication Services and Products Board, 2015-2017.
• Member-at-large, IEEE Publication Services and Products Board (PSPB), 2015-2020, appointed by IEEE Board of Directors.
• Member, IEEE TAB/PSPB Ad Hoc Committee on Joint Publications Strategy, 2019.
• Member, IEEE Publication Services and Products (PSPB) Strategic Planning Committee, 2013-2018.
• Member, IEEE Publication Services and Products (PSPB) Publishing Conduct Committee, 2019–.
• Member, IEEE Spectrum, Editorial Advisory Board, 2014.
• Member, IEEE Finance Committee (IEEE FinComm), 2015-2017.
• Member, IEEE Technical Activities Board (TAB) Finance Committee, 2015-2017.
• Member, IEEE Future Conference IP Ad Hoc, 2016–.
• Member, IEEE Xplore Platform Guidance Group, 2015-.
• Member of the Nominations and Awards Committee, IEEE Publication Services and Products Board, 2016.
• Member, IEEE Signal Processing Society Conference Board Executive Subcommittee (CBES), 2015-2016.
• Member, IEEE Signal Processing Society, Conference Board, 2014-2016.
• Academic and Industry Contributor, IEEE Signal Processing Society Public Visibility Initiative, 2015.
• Member, Computer Science Industry Advisory Board (CS IAB), New York Institute of Technology, 2015.
• Member, Ethics Subcommittee of the SPIE Board of Editors, 2014-2015.

Technical Committees
• Chair, Image, Video, and Multiple Dimensional Signal Processing Technical Committee (IVMSP-TC) (of IEEE Signal Processing Society), 2010-2011.
• Vice-chair, IVMSP-TC, 2008-2009.
• Past Chair, IVMSP-TC, 2012-2013.
• Associate Member, IVMSP-TC, 2017–.
• Member, Multimedia Signal Processing Technical Committee (MMSP-TC) of the IEEE Signal Processing Society, 2015-2017.
• Associate Member, IEEE Signal Processing Society, Computational Imaging (CI) Technical Committee, 2019.
• Associate Member, IEEE Signal Processing Society, Computational Imaging (CI) Special Interest Group (SIG), 2018.
• Member, IEEE Signal Processing Society, SPS Data Science Initiative (DSI), 2018–.
• Elected to the Image, Video, and Multiple Dimensional Signal Processing (IVMSP) Technical Committee (of IEEE Signal Processing Society), term 2006–2012.
• Elected to the Information Forensics and Security (IFS) Technical Committee (of IEEE Signal Processing Society), term 2010–2012.
• Member of the Award Nominations Subcommittee for the Image and Multiple Dimensional Signal Processing (IMDSP) Technical Committee, 2006, 2007.
• Member, IEEE Industry DSP Technology Standing Committee, 2005-2008.
• Advisory Member, IEEE Industry DSP Technology Standing Committee, 2009-.

Symposium Chair/Co-Chair
• Symposium Chair, IS&T/SPIE Electronic Imaging Symposium, 3-7 February 2013, San Francisco, CA.
• Symposium Co-Chair, IS&T/SPIE Electronic Imaging Symposium, 22-26 January 2012, San Francisco, CA.

Conference/Workshop Co-Chair/Technical Co-Chair
• Conference Co-Chair, Media Watermarking, Security, and Forensics (MWSF), part of the IS&T Electronic Imaging Symposium, 27-28 January 2021, San Francisco, California.
• Conference Co-Chair, Media Watermarking, Security, and Forensics (MWSF), part of the IS&T Electronic Imaging Symposium, 26-30 January 2020, San Francisco, California.
• Conference Co-Chair, Media Watermarking, Security, and Forensics (MWSF), part of the IS&T Electronic Imaging Symposium, 13-17 January 2019, San Francisco, California.
• Conference Co-Chair, Media Watermarking, Security, and Forensics (MWSF), part of the IS&T Electronic Imaging Symposium, 28 January – 1 February 2018, San Francisco, California.
• Technical Program Co-Chair, IEEE International Conference on Image Processing (ICIP), 25-28 September 2016, Phoenix, AZ.
• Technical Program Co-Chair, IEEE International Conference on Image Processing (ICIP), 30 September – 3 October 2012, Orlando, FL.
• Technical Program Co-Chair, IEEE Signal Processing Society 10th IVMSP Workshop: Perception and Visual Signal Analysis, Ithaca, New York, USA, June 15-17, 2011.
• Conference Co-Chair, Conference 9: Image Processing & Pattern Recognition, part of the International Symposium on Optoelectronic Technology and Application 2014 (IPTA 2014), China National Convention Center, Beijing, China, 13-15 May 2014.
• Organizer and Chair, 2003 IEEE Western NY Image Processing Workshop, Rochester, NY, Oct 17, 2003.

Local Technical Community Leadership/Participation
• Member, Computer Science Industry/Academic Advisory Board (CS IAB), School of Engineering and Computing Sciences, New York Institute of Technology (NYIT), Nov 2015 –.
• Chair, IEEE Rochester Section, 2007.
• Treasurer, IEEE Rochester Section, 2005, 2006.
• Chair, Rochester Chapter, IEEE Signal Processing Society, 2003.
• Vice-Chair, Rochester Chapter, IEEE Signal Processing Society, 2002.
• Treasurer, Rochester Chapter, IEEE Signal Processing Society, 2001.
• Secretary, Rochester Chapter, IEEE Signal Processing Society, 2000.
• Co-chair, IEEE Rochester Section Nominating Committee, 2009.
• Chair, IEEE Rochester Section Nominating Committee, 2008.
• Member of IEEE Rochester Section Audit Committee, 2007.
• Chair of IEEE Rochester Section Audit Committee, 2008.

Note: prior to 2009, the IVMSP Technical Committee was called the Image and Multiple Dimensional Signal Processing (IMDSP) Technical Committee.

Conference/Corporate Short Courses/Tutorials Taught
• Short Course Instructor, “SC19: Introduction to Probabilistic Models for Machine Learning,” 33rd annual International Symposium on Electronic Imaging, San Francisco, CA [online], 11-12 Jan. 2021.
• Short Course Instructor, “SC07: An Introduction to Blockchain,” 33rd annual International Symposium on Electronic Imaging, San Francisco, CA [online], 14 Jan. 2021.
• Short Course Instructor, “SC01: Color and Imaging,” 28th Color and Imaging Conference, Chiba, Japan [Online], 4-5 Nov. 2020.
• Short Course Instructor, “SC02: Advanced Colorimetry and Color Appearance,” 28th Color and Imaging Conference, Chiba, Japan [Online], 9 Nov. 2020.
• Short Course Instructor, “SC26: Introduction to Probabilistic Models for Inference and Estimation,” 32nd annual International Symposium on Electronic Imaging, San Francisco, CA, 30 Jan. 2020.
• Short Course Instructor, “SC22: An Introduction to Blockchain,” 32nd annual International Symposium on Electronic Imaging, San Francisco, CA, 27 Jan. 2020.
• Short Course Instructor, “SC01: Color and Imaging,” 27th Color and Imaging Conference, Paris, France, 21 Oct. 2019.
• Short Course Instructor, “SC06: Advanced Colorimetry and Color Appearance,” 27th Color and Imaging Conference, Paris, France, 22 Oct. 2019.
• Short Course Instructor, “SC24: Introduction to Probabilistic Models for Inference and Estimation,” 31st annual International Symposium on Electronic Imaging, San Francisco, CA, 17 Jan. 2019.
• Short Course Instructor, “SC05: An Introduction to Blockchain,” 31st annual International Symposium on Electronic Imaging, San Francisco, CA, 13 Jan. 2019.
• Short Course Instructor, “SC01: Color and Imaging,” 26th Color and Imaging Conference, Vancouver, BC, Canada, 11 Nov. 2018.
• Short Course Instructor, “SC06: Advanced Colorimetry and Color Appearance,” 26th Color and Imaging Conference, Vancouver, BC, Canada, 12 Nov. 2018.
• Short Course Instructor, “EI18: Introduction to Probabilistic Models for Inference and Estimation,” 30th annual International Symposium on Electronic Imaging, San Francisco, CA, 1 Feb. 2018.
• Short Course Instructor, “M1: Color, Vision, and Basic Colorimetry,” 25th Color and Imaging Conference, Lillehammer, Norway, 11 Sept. 2017.
• Short Course Instructor, “T1A: Advanced Colorimetry and Color Appearance,” 25th Color and Imaging Conference, Lillehammer, Norway, 12 Sept. 2017.
• Short Course Instructor, “EI10: Introduction to Digital Color Imaging,” 29th annual International Symposium on Electronic Imaging, San Francisco, CA, 30 Jan. 2017.
• Short Course Instructor, “EI10: Introduction to Digital Color Imaging,” 28th annual International Symposium on Electronic Imaging, San Francisco, CA, 14 Feb. 2016.
• Short Course Instructor, “Introduction to Digital Color Imaging,” Sharp Laboratories of America, 13 April 2015.
• Short Course Instructor, “SC1154: Introduction to Digital Color Imaging,” 27th annual Symposium on Electronic Imaging, San Francisco, CA, 8 Feb. 2015.
• Short Course Instructor, “Introduction to Color Imaging,” Samsung Advanced Technology Training Institute (SATTI), Suwon, South Korea, 19 May 2014.
• Short Course Instructor, “System Optimization in Color Imaging Systems,” Samsung Advanced Technology Training Institute (SATTI), Suwon, South Korea, 19 May 2014.
• Short Course Instructor, “Introduction to Digital Color Imaging,” International Symposium on Optoelectronic Technology and Application, China National Convention Center, Beijing, China, 13 May 2014.
• Tutorial Instructor, “System Optimization in Digital Color Imaging,” at IS&T’s Eighteenth Color Imaging Conference, November 8, 2010, San Antonio, TX (with R. Bala).
• Tutorial Instructor, “System Interactions in Digital Color Imaging,” at IS&T’s Sixteenth Color Imaging Conference, November 11, 2008, Portland, OR (with R. Bala).
• Tutorial Instructor, “Color Imaging System Optimization,” at IS&T’s NIP24: 24th International Congress on Digital Printing Technologies, September 08, 2008, Pittsburgh, PA (with R. Bala).
• Tutorial Instructor, “System Interactions in Color Imaging,” at IS&T’s International Congress on Imaging Science (ICIS), May 07-11, 2006, Rochester, NY (with R. Bala).
• Tutorial Instructor, “System Interactions in Color Imaging,” at IS&T’s Thirteenth Color Imaging Conference, Nov 07-11, 2005, Scottsdale, AZ (with R. Bala).
• Tutorial Instructor, “System Interactions in Color Imaging,” at IS&T’s Twelfth Color Imaging Conference, Nov 09-12, 2004, Scottsdale, AZ (with R. Bala).
• Tutorial Instructor, “System Interactions in Color Imaging,” at IS&T’s Eleventh Color Imaging Conference, Nov 04-07, 2003, Scottsdale, AZ (with R. Bala).
• Tutorial Instructor, “Color image scanners,” at IS&T’s 11th Color Imaging Conference, Nov 12-15, 2002, Scottsdale, AZ.
Panels (Organization/Moderation/Participation)
• Panel Moderator, “Taking Blockchain Beyond Crypto-currency,” Media Watermarking, Security, and Forensics (MWSF), IS&T Electronic Imaging Symposium, 15 January 2019, San Francisco, California.
• Panel Moderator, “Worlds Collide: Should Government or Private Industry Take the Lead on National Development?” 8th International Conference on Communication Systems and Networks (COMSNETS), January 8, 2016, Bangalore, India.
• Panel Discussion Moderator, IEEE Signal Processing Society 10th IVMSP Workshop: Perception and Visual Signal Analysis, Ithaca, New York, USA, June 17, 2011.

Conference Committee Roles
• Awards Co-Chair, IEEE International Conference on Image Processing (ICIP) 2020, 25-28 October 2020, Abu Dhabi, United Arab Emirates [Virtual].
• Plenary Speaker Chair, GENSIPS 2009: IEEE International Workshop on Genomic Signal Processing and Statistics, May 17-21, 2009, Minneapolis, Minnesota.
• Short Course Program Chair, IS&T/SPIE’s Electronic Imaging Symposium, 23-27 January 2011, San Francisco, CA.
• Short Course Program Co-Chair, IS&T/SPIE’s Electronic Imaging Symposium, 17-21 January 2010, San Jose, CA.
• Tutorial Chair, IS&T’s NIP24: 24th International Congress on Digital Printing Technologies, September 06-11, 2008, Pittsburgh, PA.
• Program Chair: Invited Papers, NIP 23: 23rd International Conference on Digital Printing Technologies, Sept. 16-21, 2007, Anchorage, Alaska.
• Panels Co-Chair, 8th International Conference on Communication Systems and Networks (COMSNETS), January 5-9, 2016, Bangalore, India.
• Workshops Co-chair, COMSWARE 2006: First International Conference on Communication Systems Software and Middleware (IEEE/ACM Sigmobil co-sponsored), Bangalore, India.
• Publications Co-chair, IEEE International Conference on Image Processing (ICIP) 2002.
• Tutorial Co-chair, IS&T/SID 13th Color Imaging Conference, 07-11 Nov. 2005, Scottsdale, AZ.
• Tutorial Co-chair, IS&T/SID 12th Color Imaging Conference, 09-12 Nov. 2004, Scottsdale, AZ.

Other
• Mentor for the Norwegian University of Science and Technology (NTNU) Outstanding Academic Fellows programme.
• Member, CIE Technical Committee TC1-89, Enhancement of Images for Colour Defective Observers.
• Member, CIE Technical Committee TC8-07, Multi-spectral Imaging.
• Leader for Indo-US Collaboration on Engineering Education (IUCEE) workshop on “Digital Color Imaging,” Aug. 24, 2015, Seth Jai Parkash Mukand Lal Institute of Engineering & Technology (JMIT), Radaur, Yamunanagar, Haryana, India.
• Leader for Indo-US Collaboration on Engineering Education (IUCEE) workshop on “Effective Research Methodologies,” Mar. 11, 2015, Madanapalle Institute of Technology & Science, Madanapalle, Andhra Pradesh, India.
• Leader for Indo-US Collaboration on Engineering Education (IUCEE) workshop “Advanced Image Processing,” Jan. 10, 2015, Chitkara University, Chandigarh, India.
• Leader for Indo-US Collaboration on Engineering Education (IUCEE) research workshop “Research Avenues in Image Processing,” Aug. 13, 2014, PES University, Bangalore, India.
• Mentor for graduate students at Mentoring Sessions, 8th International Conference on Communication Systems and Networks (COMSNETS), January 5-9, 2016, Bangalore, India.
• Mentor, Joint Dept. of Science and Technology (DST)-Indo-US Collaboration on Engineering Education (IUCEE) Research Academy for women pursuing PhD in India, March 18-20, C. R. Rao Advanced Institute of Mathematics, Statistics and Computer Science (AIMSCS), Hyderabad, India.
• Leader, Indo-US Collaboration on Engineering Education (IUCEE) Faculty Leadership Workshop on Digital Communications and Image Processing, June 05-09, 2011, Jaypee University of Engineering and Technology (JUET), A-B Road, Raghogarh, Madhya Pradesh, India.
• Leader, Indo-US Collaboration on Engineering Education (IUCEE) Faculty Leadership Workshop on Image Processing, July 05-09, 2010, Madanapalle Institute of Technology and Science (MITS), Madanapalle, Andhra Pradesh, India.
• Member, IEEE Signal Processing Society and IEEE Communications Society.
• Served on the Society for Imaging Science and Technology’s committee for selection of the Editor-in-Chief of the Journal of Imaging Science and Technology (JIST), 2011.
• Member of Jury for Student Paper Award Selections, International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 20 April 2009, Taipei, Taiwan.
• Moderator, “Trends in Image, Video, and Multidimensional Signal Processing,” International Conference on Acoustics, Speech, and Signal Processing (ICASSP), May 25, 2011, Prague, Czech Republic.
• IEEE Student Branch Counselor, Univ. of Rochester Branch, Sept. 2003 – Sept. 2008.
• Co-chair, University of Rochester Bioinformatics cluster, Sept. 2006 – (with David H. Mathews).
• ICASSP 2011, Technical Program Committee, representative for Image, Video and Multidimensional Signal Processing areas.
• ICASSP 2010, Technical Program Committee, representative for Image, Video and Multidimensional Signal Processing areas.
• ICASSP 2009, Technical Program Committee, representative for Image, Video and Multidimensional Signal Processing areas.
• Technical Program Committee Member, Digital Image Processing and Analysis (DIPA), June 7-10, 2010, The Westin La Paloma, Tucson, AZ, USA.
• Program Committee Member, International Conference on Content Protection and Forensics (CPAF), July 11-15, 2011, Barcelona, Spain.
• Area Chair, ICIP 2016 Technical Program Committee.
• Area Chair, ICIP 2015 Technical Program Committee.
• Area Chair, ICIP 2014 Technical Program Committee.
• Area Chair, ICIP 2012 Technical Program Committee.
• Area Chair, ICIP 2011 Technical Program Committee.
• Area Chair, ICIP 2010 Technical Program Committee.
• Area Chair, ICIP 2009 Technical Program Committee.
• Area Chair, ICIP 2008 Technical Program Committee.
• Area Chair, ICIP 2007 Technical Program Committee.
• Area Chair, ICASSP 2015 Technical Program Committee.
• Area Chair, ICASSP 2014 Technical Program Committee.
• Area Chair, ICASSP 2013 Technical Program Committee.
• Area Chair, ICASSP 2012 Technical Program Committee.
• Area Chair, ICASSP 2011 Technical Program Committee.
• Area Chair, ICASSP 2010 Technical Program Committee.
• Area Chair, ICASSP 2009 Technical Program Committee.
• Area Chair, ICASSP 2008 Technical Program Committee.
• Area Chair, ICASSP 2007 Technical Program Committee.
• Jury Member, DoCoMo USA Labs Innovative Paper Awards, for IEEE International Conference on Image Processing (ICIP), 2008.
• External Examiner and Advisor, Research Week for faculty pursuing PhD in India, March 27-29, 2014, Gujarat Technical University, Ahmedabad, India.
• External Examiner and Advisor, Research Week for faculty pursuing PhD in India, March 21-22, 2013, Gujarat Technical University, Ahmedabad, India.
• Judge, Seventh IEEE Student Design Contest, 05 May 2007, Rochester Institute of Technology, Rochester, NY.
• Judge, FIRST LEGO League Qualifying Tournament, 17 November 2012, Webster Spry Middle School, Webster, NY.
• Judge, FIRST LEGO League Qualifying Tournament, 20 November 2010, Webster Spry Middle School, Webster, NY.
• Mentor, University of Rochester (UR) Data Dive, community outreach event, October 25, 2015.
• Judge, FIRST LEGO League NanoQuest Competition, 03 December 2006, Rochester, NY.
• Judge, Student Poster Contest, Thirty-Eighth Asilomar Conf. on Signals, Systems & Computers, Nov. 2004.

Conference Session Chair
• Session Chair, “DeepFake Detection,” Media Watermarking, Security, and Forensics 2021, IS&T Electronic Imaging Symposium, 27 Jan. 2021, San Francisco, CA [Online].
• Session Chair, “Machine Learning For Image/Video Processing II (IVMSP-P6),” International Conference on Acoustics, Speech, and Signal Processing (ICASSP), May 6, 2020, Barcelona, Spain.
• Special Session Organizer and Chair, “Physical Object Security,” Media Watermarking, Security, and Forensics 2020, IS&T Electronic Imaging Symposium, 29 Jan. 2020, San Francisco, CA.
• Session Chair, “Keynote Session: Digital vs Physical Document Security,” Media Watermarking, Security, and Forensics 2020, IS&T Electronic Imaging Symposium, 29 Jan. 2020, San Francisco, CA.
• Session Chair, “DeepFakes,” Media Watermarking, Security, and Forensics 2020, IS&T Electronic Imaging Symposium, 28 Jan. 2020, San Francisco, CA.
• Session Chair, “WA.L1 – Classification III,” International Conference on Image Processing (ICIP), Oct. 10, 2018, Athens, Greece.
• Session Chair, “Keynote Session: Digital Watermarking from Inflated Expectation to Mainstream Adoption,” Media Watermarking, Security, and Forensics 2018, IS&T Electronic Imaging Symposium, 29 Jan. – 01 Feb. 2018, San Francisco, CA.
• Session Chair, “Audiovisual and Cross-media Processing (MMSP-L2),” International Conference on Acoustics, Speech, and Signal Processing (ICASSP), March 9, 2017, New Orleans, LA.
• Session Co-Chair, “Video Segmentation and Tracking (IVMSP-P12),” International Conference on Acoustics, Speech, and Signal Processing (ICASSP), March 9, 2017, New Orleans, LA.
• Session Chair, “Encryption,” Media Watermarking, Security, and Forensics 2017, IS&T Electronic Imaging Symposium, 30 Jan. – 01 Feb. 2017, San Francisco, CA.
• Session Chair, “MA-L6 – Visual Forensics,” International Conference on Image Processing (ICIP), Sept. 26, 2016, Phoenix, AZ.
• Session Chair, “Multimedia Forensics (IFS-P1),” International Conference on Acoustics, Speech, and Signal Processing (ICASSP), March 22, 2016, Shanghai, China.
• Session Co-Chair, “Video Tracking (IVMSP-L2),” International Conference on Acoustics, Speech, and Signal Processing (ICASSP), March 22, 2016, Shanghai, China.
• Session Chair, “Emotion and Action Recognition (MMSP-P2),” International Conference on Acoustics, Speech, and Signal Processing (ICASSP), March 24, 2016, Shanghai, China.
• Session Co-Chair, “Image Analysis and Applications (IVMSP-P13),” International Conference on Acoustics, Speech, and Signal Processing (ICASSP), March 25, 2016, Shanghai, China.
• Session Chair, “Watermarking,” Media Watermarking, Security, and Forensics 2016, IS&T Electronic Imaging Symposium, 15-17 Feb. 2016, San Francisco, CA.
• Session Chair, “Optical Flow and Motion Estimation (ARS-P31),” International Conference on Image Processing (ICIP), Sept. 28, 2015, Québec City, Canada.
• Session Chair, “Infrared, Multispectral and Hyperspectral imaging (ELI-P7),” International Conference on Image Processing (ICIP), Sept. 30, 2015, Québec City, Canada.
• Session Co-Chair, “Color Deficiency,” Color Imaging XX: Displaying, Processing, Hardcopy, and Applications, IS&T/SPIE Electronic Imaging Symposium, 9-12 Feb. 2015, San Francisco, CA.
• Session Chair, “Video/Demo and Keynote Session III,” Media Watermarking, Security, and Forensics 2015, IS&T/SPIE Electronic Imaging Symposium, 9-11 Feb. 2015, San Francisco, CA.
• Session Chair, “Image Enhancement (IVMSP-L3),” International Conference on Acoustics, Speech, and Signal Processing (ICASSP), May 7, 2014, Florence, Italy.
• Session Chair, “Image Processing and Analysis,” Image Processing & Pattern Recognition, part of the International Symposium on Optoelectronic Technology and Application, China National Convention Center, Beijing, China, 14 May 2014.
• Session Chair, “Pattern Recognition and Computer Vision,” Image Processing & Pattern Recognition, part of the International Symposium on Optoelectronic Technology and Application, China National Convention Center, Beijing, China, 15 May 2014.
• Session Co-Chair, “Emerging Industry Signal Processing II (ITT-P1),” International Conference on Acoustics, Speech, and Signal Processing (ICASSP), May 31, 2013, Vancouver, Canada.
• Session Chair, “Interpolation and Super-resolution (IVMSP-L4),” International Conference on Acoustics, Speech, and Signal Processing (ICASSP), May 29, 2013, Vancouver, Canada.
• Session Chair, “View Synthesis (WP.L4),” IEEE International Conference on Image Processing (ICIP), October 3, 2012, Orlando, FL, USA.
• Session Chair, “Image and Video Modeling, Biometrics, and Applications (IVMSP-P14),” International Conference on Acoustics, Speech, and Signal Processing (ICASSP), March 30, 2012, Kyoto, Japan.
• Session Co-Chair, “Technology to Practice for Signal Processing (ITT-P1),” International Conference on Acoustics, Speech, and Signal Processing (ICASSP), March 28, 2012, Kyoto, Japan.
• Session Chair, “Saliency and Visual Perception,” IEEE Signal Processing Society 10th IVMSP Workshop: Perception and Visual Signal Analysis, Ithaca, New York, USA, June 17, 2011.
• Session Chair, “Stereoscopic and 3-D Coding (IVMSP-L3),” International Conference on Acoustics, Speech, and Signal Processing (ICASSP), May 25, 2011, Prague, Czech Republic.
• Session Chair, “Stereoscopic and 3-D Processing (IVMSP-L4),” International Conference on Acoustics, Speech, and Signal Processing (ICASSP), March 17, 2010, Dallas, TX.
• Session Co-Chair, “Forensic Imaging (TA.L6),” IEEE International Conference on Image Processing (ICIP), 28 Sept. 2010, Hong Kong.
• Session Co-Chair, “Identification,” Media Watermarking, Security, and Forensics XVI, IS&T/SPIE Electronic Imaging Symposium, 3-6 Feb. 2014, San Francisco, CA.
• Session Chair, “Watermarking,” Media Watermarking, Security, and Forensics XIV, IS&T/SPIE Electronic Imaging Symposium, 23-25 Jan. 2012, San Francisco, CA.
• Session Co-organizer and Co-chair, “Transportation Imaging,” Visual Information Processing and Communication, 24-26 Jan. 2012, San Francisco, CA.
• Session Chair, “Forensics I,” Media Watermarking, Security, and Forensics XIII, IS&T/SPIE Electronic Imaging Symposium, 24-26 Jan. 2011, San Francisco, CA.
• Session Chair, “Forensics I,” Media Forensics and Security XII, IS&T/SPIE Electronic Imaging Symposium, 18-20 Jan. 2010, San Jose, CA.
• Session Chair, “Camera Calibration, Modeling, and Conditioning (TA.PG),” IEEE International Conference on Image Processing (ICIP), 10 Nov. 2009, Cairo, Egypt.
• Session Chair, “Gene Regulatory Networks,” IEEE International Workshop on Genomic Signal Processing and Statistics (GENSIPS), 18 May 2009, Minneapolis, Minnesota.
• Session Chair, “Image Filtering (IVMSP-L2),” International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 21 April 2009, Taipei, Taiwan.
• Session Chair, “Object Detection and Recognition,” Visual Communications and Image Processing (VCIP), 20 Jan. 2009, San Jose, CA.
• Session Chair, “Authentication,” Media Forensics and Security XI, IS&T/SPIE Electronic Imaging Symposium, 19-21 Jan. 2009, San Jose, CA.
• Session Chair, “Steganography and Steganalysis I (TA-L5),” IEEE International Conference on Image Processing (ICIP), 14 Oct. 2008, San Diego, CA.
• Session Chair, “Feature Extraction and Analysis (IMDSP-L4),” International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 03 April 2008, Las Vegas, NV.
• Session Chair, “Processing of Physiological Signals (BISP-P1),” International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 02 April 2008, Las Vegas, NV.
• Session Chair, “Indexing and Retrieval,” Visual Communications and Image Processing (VCIP), 29-31 Jan. 2008, San Jose, CA.
• Session Chair, “Physical Media,” Security, Forensics, Steganography, and Watermarking of Multimedia Contents X, Electronic Imaging Symposium, 28-30 Jan. 2008, San Jose, CA.
• Session Chair, “Gamuts Galore,” IS&T/SID Fifteenth Color Imaging Conference, 9 Nov. 2007, Albuquerque, NM.
• Session Chair, “TA-P2: Security III: Watermarking,” IEEE International Conference on Image Processing (ICIP), 18 September 2007, San Antonio, TX.
• Session Chair, “Image Formation, Sampling, Display and Quality (IMDSP-P5),” International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 18 April 2007, Honolulu, HI.
• Session Chair, “Security and Forensic Printing,” NIP 22: IS&T Digital Printing Conference, 20 Sept. 2006, Denver, CO.
• Session Chair, “Digital Watermarking, Data Hiding and Steganography III (TP1-L1),” International Conference on Multimedia and Expo (ICME), 11 July 2006, Toronto, Canada.
• Session Chair, “Watermarking (IMDSP-L10),” International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 19 May 2006, Toulouse, France.
• Session Chair, “Image Enhancement, Display & Rendering,” International Congress of Imaging Science (ICIS), 09 May 2006, Rochester, NY.
• Session Chair, “Printing,” IS&T/SID Thirteenth Color Imaging Conference, 10 Nov. 2005, Scottsdale, AZ.
• Organizer and Session Chair, “Bioinformatics/Genomic Signal Processing,” Thirty-Eighth Asilomar Conf. on Signals, Systems & Computers, Nov. 2004.
• Session Chair, “Image Scanning, Display, and Printing I,” IEEE International Conference on Image Processing (ICIP), October 27, 2004, Singapore.
• Session Chair, “Color and lighting in computer images,” IS&T/SID Eleventh Color Imaging Conference, 04 Nov. 2003, Scottsdale, AZ.
• Session Chair, “Digital image capture,” IS&T’s PICS Conference, 22-25 April 2001, Montréal, Canada.
• Session Chair, “Device technology,” CGIV 2002: First European Conference on Colour in Graphics, Imaging, and Vision, 3-5 April 2002, Poitiers, France.
• Session Chair, “Image Processing Oral Session 5,” International Conference on Image Information Processing (ICIIP), Waknaghat, Shimla, Himachal Pradesh, India, Nov. 4, 2011.
• Session Chair, “Security and Forensic Imaging,” 2007 Western NY Image Processing Workshop, organized by IEEE Signal Processing Society, Rochester Section, Sept 28, 2007.
• Session Chair, “Color, Document, and Medical Imaging,” 2005 Western NY Image Processing Workshop, organized by IEEE Signal Processing Society, Rochester Section, Sept 30, 2005.
• Session Chair, “Document Image Processing,” 2004 Western NY Image Processing Workshop, organized by IEEE Signal Processing Society, Rochester Section, Sept 24, 2004.
Reviewer
• Reviewer, NSF Panel, March 2020.
• Reviewer of Progress Reports for the American Association for the Advancement of Science (AAAS) Research Competitiveness Program for the King Abdulaziz City for Science and Technology (KACST), July 2017.
• Reviewer for Nebraska’s Experimental Program to Stimulate Competitive Research (EPSCoR) program; reviews coordinated by the American Association for the Advancement of Science (AAAS), Research Competitiveness Program, January 2017.
• Panelist for review panel conducted by the American Association for the Advancement of Science (AAAS), Research Competitiveness Program, for the King Abdulaziz City for Science and Technology (KACST), November 2014.
• Reviewer, Samsung Research Funding Center for Future Technology (SRFC), March 2016, August 2016.
• Reviewer, NSERC (Natural Sciences and Engineering Research Council of Canada), March 2014.
• Ad-hoc Reviewer for NSF Proposals, February 2014.
• Reviewer, NSERC (Natural Sciences and Engineering Research Council of Canada), Jan 2014.
• Reviewer, NSF Panel, March 2018.
• Reviewer, NSF Panel, May 2012.
• Reviewer, Research Grants Council (RGC) of Hong Kong, April 2012.
• Reviewer, NSERC (Natural Sciences and Engineering Research Council of Canada), Jan 2010.
• Reviewer, Louisiana Board of Regents, Pilot Funding for New Research (PFund) program, Oct. 2009.
• Reviewer, NSF Panel, June 2007.
• Reviewer, NSF Panel, May 2006.
• Technical program/Paper Review committee member for
– IEEE International Conference on Image Processing (ICIP) 2002, 2004, 2005, 2006, 2007, 2008, 2009, 2010, 2011, 2012, 2013, 2014, 2015, 2016, 2017, 2018, 2019, 2020.
– IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP) 2006, 2007, 2008, 2009, 2010, 2011, 2012, 2013, 2014, 2015, 2016, 2017, 2018, 2019, 2020.
– CVPR Workshop on Blockchain Meets Computer Vision & AI (BMCVAI), 2019.
– IEEE International Symposium on Circuits & Systems (ISCAS), 2016, 2017.
– IEEE Emerging Signal Processing Applications (ESPA) Conference 2012.
– International Conference on Computer Vision Theory and Applications (VISAPP) 2012, 2013, 2015, 2018.
– 8th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISAPP), Feb. 21-24, 2013, Barcelona, Spain.
– International Conference on Photonics, Optics and Laser Technology (PHOTOPTICS), 2017, 2018.
– IEEE International Conference on Communications (ICC) 2010.
– Media Watermarking, Security, and Forensics 2018, San Francisco, CA, 28 Jan-1 Feb. 2018 (part of IS&T Electronic Imaging Symposium).
– Media Watermarking, Security, and Forensics 2017, San Francisco, CA, 29 Jan-2 Feb. 2017 (part of IS&T Electronic Imaging Symposium).
– Media Watermarking, Security, and Forensics 2016, San Francisco, CA, 14-18 Feb. 2016 (part of IS&T Electronic Imaging Symposium).
– Media Watermarking, Security, and Forensics 2015, San Francisco, CA, 9-11 Feb. 2015 (part of IS&T/SPIE Electronic Imaging Symposium).
– Media Watermarking, Security, and Forensics XVI, San Francisco, CA, February 2014 (part of IS&T/SPIE Electronic Imaging Symposium).
– Media Watermarking, Security, and Forensics XV, San Francisco, CA, 3-7 February 2013 (part of IS&T/SPIE Electronic Imaging Symposium).
– Media Watermarking, Security, and Forensics XIV, San Francisco, CA, 22-26 January 2012 (part of IS&T/SPIE Electronic Imaging Symposium).
– Media Watermarking, Security, and Forensics XIII, San Francisco, CA, 24-26 January 2011 (part of IS&T/SPIE Electronic Imaging Symposium).
– Visual Information Processing and Communication (VIPC) (part of the IS&T Electronic Imaging Symposium), 2017.
– Visual Information Processing and Communication (VIPC) (part of the IS&T Electronic Imaging Symposium), 2016.
– Visual Information Processing and Communication (VIPC) (part of the IS&T/SPIE Electronic Imaging Symposium), 2010, 2011, 2012, 2013, 2014, 2015.
– IEEE International Workshop on Information Forensics and Security (WIFS), Guangzhou, China, 2013.
– IEEE International Workshop on Information Forensics and Security (WIFS), Tenerife, Spain, 2012.
– IEEE International Workshop on Information Forensics and Security (WIFS), Seattle, WA, 2010.
– Computational Color Imaging Workshop (CCIW) (sponsored by the International Association for Pattern Recognition (IAPR)), 2013.
– 2013 IEEE International Conference on Electronics, Computing, Communication Technologies (CONECCT2013), Bangalore, India.
– International Workshop on Activity Monitoring by Multiple Distributed Sensing (AMMDS 2013), part of the Tenth IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS) 2013, Krakow, Poland, Aug. 27, 2013.
– Eighth IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS) 2011, Klagenfurt University, Aug. 30-Sept. 02, 2011.
– International ACM Workshop on Multimedia in Forensics and Intelligence (MiFor 2011).
– Media Forensics and Security XII, 2010 (part of IS&T/SPIE Electronic Imaging Symposium).
– Media Forensics and Security XI, 2009 (part of IS&T/SPIE Electronic Imaging Symposium).
– Security, Steganography and Watermarking of Multimedia Contents VIII (2006), IX (2007), X (2008) (part of the IS&T/SPIE Electronic Imaging Symposium).
– Visual Communications and Image Processing (VCIP), 2007, 2008, 2009, 2010 (part of IS&T/SPIE Electronic Imaging Symposium).
– IEEE International Conference on Multimedia and Expo (ICME) 2007, 2008, 2009, 2011.
– IS&T/SID Color Imaging Conference 2004, 2005, 2006, 2008, 2010.
– IEEE Signal Processing Society Rochester Chapter, Western New York Image Processing Workshop, 2011.
– International Conference on Signal Processing and Communications (SPCOM) 2010.
– Seventh International Symposium on Image and Signal Processing and Analysis (ISPA), September 4-6, 2011, Dubrovnik, Croatia.
– 2011 International Conference on Image Information Processing (ICIIP 2011), Shimla, Himachal Pradesh, India.
– IEEE Signal Processing Society 2017 International Workshop on Multimedia Signal Processing (MMSP 2017), London-Luton, UK.
– IEEE Signal Processing Society 2015 International Workshop on Multimedia Signal Processing (MMSP 2015), Xiamen, China.
– IEEE Signal Processing Society 2012 International Workshop on Multimedia Signal Processing (MMSP 2012), Banff, Canada.
– IEEE Signal Processing Society 2011 International Workshop on Multimedia Signal Processing (MMSP 2011), Hangzhou, China.
– IEEE Signal Processing Society 2007 International Workshop on Multimedia Signal Processing (MMSP 2007), Chania, Crete, Greece.
– IEEE Signal Processing Society 2006 International Workshop on Multimedia Signal Processing (MMSP 2006), Victoria, BC, Canada.
– 2015 Military Communications Conference (MILCOM), October 26-28, 2015, Tampa, FL.
– 2012 Military Communications Conference (MILCOM), October 29-November 1, 2012, Orlando, FL.
– International Conference on Signal Processing, Computing and Control (ISPCC) 2012, Waknaghat, Himachal Pradesh, India.
– International Workshop on Multimedia Content Representation, Classification, and Security (MRCS), Sept 11-13, 2006, Istanbul, Turkey.
– ACM Multimedia Security Workshop, 2005.
– European Signal Processing Conference (EUSIPCO) 2005.
– Colour in Graphics, Imaging, and Vision (CGIV) 2004, 2006.
– IEEE International Symposium on Circuits and Systems (ISCAS) 2004.
– SAFE 2007: Workshop on Signal Processing Applications for Public Security and Forensics.
– COMNETS 2005, 2nd IEEE/Create-Net International Workshop on Deployment Models and First/Last Mile Networking Technologies for Broadband Community Networks.
– Colour in Graphics and Image Processing 2000.
– Systems Cybernetics and Informatics (SCI) 2000.
– International Conference on Computer, Communication and Control Technologies: CCCT’03.
– IPSI-2005.
– Western NY Image Processing Workshop, Rochester, NY, 2005, 2007.
– IEEE Upstate New York Workshop on Communications and Networks, 2007.
• Reviewer
– IEEE Transactions on Image Processing
– IEEE Transactions on Information Forensics and Security
– Nucleic Acids Research (Oxford Journals)
– IEEE Transactions on Signal Processing
– IEEE Signal Processing Letters
– IEEE Signal Processing Magazine
– IEEE Transactions on Information Theory
– Signal Processing (EURASIP Journal)
– Signal, Image and Video Processing (Springer Journals)
– IEEE Transactions on Knowledge and Data Engineering
– IEEE Transactions on Communications
– IEEE Journal on Selected Areas in Communications
– IEEE Transactions on Circuits & Systems for Video Technology (CSVT)
– IEEE Transactions on Circuits & Systems II
– IEE Proceedings on Information Security
– IET Circuits, Devices & Systems
– IET Computer Vision
– Signal Processing: Image Communication (EURASIP)
– Journal of Electronic Imaging
– Color Research and Application
– Journal of the Optical Society of America A
– Computer Vision and Image Understanding
– Journal of Imaging Science and Technology
– Journal of the Society for Information Display
– Image Communication Journal
– Optical Engineering
– ETRI Journal (Korea)
– Energies
– Security and Communication Networks (Wiley)
– Journal of Systems and Software
– Pattern Recognition Letters
– Real-Time Imaging
– Journal of Digital Libraries
– Springer-Verlag
– Cambridge University Press
– Oxford University Press
– John Wiley & Sons
– CRC Press
– SPIE Press
SELECTED ARTICLES FEATURING RESEARCH IN POPULAR PRESS/WEB
• “Colored Lights Reveal Hidden Images,” by Paula M. Powell, Photonics Spectra, September 2003.
• “Eyecatcher from Xerox,” article by Ben Rand, Rochester Democrat and Chronicle, July 18, 2003, pp. 12D, 8D.
• “Disappearing Act: Xerox Researchers Demonstrate Color Prints with Images that Switch Under Different Colored Lights,” featured article on www.xeroxtechnology.com, May 15, 2003.
• “Unretouched by human hand,” article by Michael Behar, The Economist, December 14-20, 2002, pp. 8, 10.
• “Reversible data hiding embeds data in pictures,” E4Engineering.com, Nov. 29, 2002.
• “Digital data hiding pulls a reverse,” report by Daniel C. McCarthy on reversible data hiding work, Photonics Spectra, November 2002, p. 23.
• “Encryption method getting the picture,” by Sandeep Junnarkar, CNet News.com, October 23, 2002.
• “Researchers discover imaging technique,” by Smriti Jacob, Rochester Business Journal, Daily (online) edition, September 26, 2002.
• “Researchers discover better way to embed, remove hidden data in digital images,” Vision Systems Design (online), September 24, 2002.
• “New technique promises better digital watermarks,” by Dan Orzech, CIO Information Network, published (online) September 24, 2002.
• “Xerox, University of Rochester Researchers Discover Better Way to Embed, Remove Hidden Data in Digital Images,” featured article on www.xeroxtechnology.com, September 23, 2002.

work_i3vijnjgdjbx5c7kw5fhztivdq ----

Ethology Ecology & Evolution 18: 247-256, 2006

Dietary carotenoids mediate a trade-off between egg quantity and quality in Japanese quail

Kevin J. McGraw 1
Department of Animal Science, University of California-Davis, One Shields Avenue, Davis, California 95616, USA

Carotenoids offer animals many nutritional, health, and reproductive benefits. When in high supply, carotenoids can boost antioxidant protection and immune strength as well as stimulate egg production and enrich the color of sexual ornaments like feathers. Certain reproductive investments, however, often come at the cost of others; for example, the production of many offspring may compromise the quality of those offspring. Under such a scenario, we rarely know the precise intrinsic or extrinsic mechanism that generates such a reproductive trade-off. Here I show that variation in dietary carotenoid intake mediates a trade-off between egg quantity and quality in female Japanese quail (Coturnix japonica). Females fed high doses of two common plant carotenoids, lutein and zeaxanthin, during a 1-month diet experiment were more likely to lay eggs, but produced eggs with significantly smaller yolks. Yolk serves as the critical nutritional supply for developing embryos, and several studies show dramatic negative developmental consequences for offspring that are allocated scant yolk reserves. These results demonstrate nutritional control of yolk size and highlight a potential reproductive cost of high carotenoid accumulation in multiparous birds. In future studies, we should consider total yolk-carotenoid reserves rather than simply carotenoid concentration to better understand the cost-benefit balance of these nutrients.
Key words: Coturnix japonica, egg-yolk carotenoids, lutein, maternal effects, maternal investment, reproductive trade-offs, yolk size, zeaxanthin.

INTRODUCTION

Mothers from egg-laying animals shunt nutrients to their offspring, in the form of yolk, prior to hatching. Yolk components nourish embryos during development and facilitate proper growth and survival (Williams 1994). During these breeding times, however, mothers are faced with the challenge of using nutrients to drive their own energy-demanding reproductive efforts, or to allocate them to offspring. In fact, an inherent trade-off between offspring quality and quantity often emerges (Smith & Fretwell 1974, Sinervo & Svensson 1998); mothers may invest nutrients in producing large numbers of eggs, but this comes at the expense of reduced egg quality (e.g. size, nutrient load; Nager et al. 2000). One such class of nutrients that plays a valuable role in both maternal reproduction and yolk-mediated embryonic development are the carotenoids (Blount et al. 2002, Blount 2004). Carotenoids are fat-soluble molecules that animals acquire from the diet and use for a variety of physiological and morphological purposes. At the somatic level, carotenoids provide antioxidant defenses to the body’s cells and tissues (Møller et al. 2000, Krinsky 2001), modulate responsiveness of the immune system (McGraw & Ardia 2003, Chew & Park 2004), and offer photoprotection in sensitive tissues like the eye (Thomson et al. 2002). Regarding gametic and reproductive investment, carotenoids are incorporated into external tissues like feathers, scales, and skin to become colorful and advertise sexual attractiveness and parental quality (Hill 2002).

1 Current address: School of Life Sciences, Arizona State University, Tempe, AZ 85287-4501, USA (Phone: (480) 965-5518; Fax: (480) 965-6899; E-mail address: kevin.mcgraw@asu.edu).
They have also been shown recently, when in high dietary supply, to stimulate egg-laying in gulls (Blount et al. 2004) and to increase survival of nestling finches (McGraw et al. 2005). Interestingly, despite the expectations of the offspring quantity/quality trade-off, few studies have demonstrated negative somatic or reproductive consequences experienced by breeding mothers given high carotenoid supplies. Based on this hypothesis, one would predict that the reproductive benefits of carotenoid accumulation and use are not necessarily universal and that there may be subtle, compromising modifications of egg or offspring quality, especially in species that produce large numbers of clutches or eggs per reproductive attempt or season.

Here, I experimentally investigated the effect of carotenoid intake on egg production and characteristics in Japanese quail (Coturnix japonica). I manipulated two naturally occurring dietary carotenoids (the xanthophylls lutein and zeaxanthin) across a broad range of concentrations in captive quail for one month and considered both the egg-laying capacity of females as well as an important measure of embryonic nutrition in eggs: yolk size. Yolk size is an important determinant of hatchling growth and size in animals from lizards (Radder et al. 2002, Warner & Andrews 2002) to birds (Hill 1993, Dzialowski & Sotherland 2004).

METHODS

Forty female quail, all of which were 23 weeks of age on the last day of the experiment, were housed in individual pens in the Meyer indoor animal facility on the campus of the University of California-Davis. During the study, ambient temperature was held constant at 22 °C, relative humidity at 55%, and light intensity at 100 lumens. Birds were kept on a constant day:night cycle of 12:12 hr. I established eight treatment groups of five quail each, varying only in dietary carotenoid concentration (0, 15, 30, 45, 60, 75, 90, and 105 mg carotenoid/kg diet).
Though nothing is yet known of natural variation in carotenoid levels in wild quail, I chose this range of doses because it matches that found in the only quantitative study of dietary carotenoids in wild granivorous birds (house finches [Carpodacus mexicanus]; Hill et al. 2002). All quail were fed a rice- and soy-based diet that was formulated to meet or exceed all NRC requirements for chickens (Table 1; sensu Koutsos et al. 2003). To this diet, I added the appropriate amount of xanthophyll carotenoids, in the form of 10% Oro-Glo (Kemin Industries, Inc., Des Moines, IA), for each treatment group during the one-month feeding experiment. Oro-Glo contains two naturally occurring xanthophylls, lutein and zeaxanthin, at a ratio of 93:7%.

For the 2 months before the experiment began, birds were fed the base diet, free of carotenoids, in order to remove any carotenoids that birds had in circulation or tissue storage (e.g. liver). This was done because, prior to the study, birds were fed a commercial diet (Layena 6501, Purina Mills, St Louis, MO) that contained a small amount of carotenoids (ca 10 mg/kg; unpubl. data). After consuming the carotenoid-free diet for 2 months, I visually inspected a subset of yolks to evaluate the success of the ‘bleaching’ process and all (n = 6) were pale. To examine the effects of carotenoid feeding on egg production and yolk characteristics, I collected eggs at two points during the study: on the 24th and 31st day after carotenoid supplementation was initiated. I evaluated yolk characteristics for the 2nd (larger) set of eggs only; at this time point, 21 eggs were collected (n = 2, 2, 2, 2, 4, 2, 3, and 4 for the respective treatment groups, from 0 up to 105 mg/kg). My measure of egg-laying capacity was determined by averaging the % of females in each treatment group that produced eggs on each collection day.
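The laying-capacity measure described above reduces to simple arithmetic: for each treatment group, compute the percentage of females that laid on each collection day, then average across days. A minimal sketch follows; the function name and the example counts are invented for illustration only (the study's actual per-day counts are not reported here).

```python
# Hypothetical sketch of the egg-laying capacity measure: average, across
# collection days, of the percent of females in a group that laid an egg.

def laying_capacity(laid_by_day, n_females):
    """laid_by_day: number of females (out of n_females) that laid an egg
    on each collection day. Returns the mean percentage across days."""
    pct_per_day = [100.0 * laid / n_females for laid in laid_by_day]
    return sum(pct_per_day) / len(pct_per_day)

# Example: a group of 5 hens in which 3 laid on day 24 and 4 on day 31.
print(laying_capacity([3, 4], n_females=5))  # 70.0
```

With two collection days, each group mean is simply the midpoint of the two daily percentages, which is why the group values plotted against treatment fall on a coarse grid.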
To be sure that the dietary carotenoid treatments significantly impacted yolk-carotenoid contents, I scored the color of yolks using digital photography and computerized image analysis. Fresh eggs were cracked open and the contents gently emptied into a Petri dish. Two photographs were taken of each egg against a neutral-gray photograph board and under standardized indoor lighting conditions. A yellow color chip was included in each image as a reference, to account for any interphotograph differences in color capture and image size. I imported images into Adobe Photoshop (at a resolution of 1760 × 1168 pixels), highlighted the full area of the yolks with the Lasso marquee, and scored yolk coloration along three traditional axes of color — hue, saturation, and brightness — using the HSB scale under the Color Picker function (sensu McGraw et al. 2002). I then measured the area of yolk by determining the number of pixels occupied using the Histogram function (sensu McGraw et al. 2002) and comparing it to the yellow color standard of known area.

RESULTS

Treatment effects on yolk coloration, yolk size, and egg-laying capacity

Dietary carotenoid treatment had a significant effect on both yolk hue (ANOVA, r2 = 0.77, F7,20 = 6.12, P = 0.003) and saturation (r2 = 0.80, F7,20 = 7.49, P = 0.001), but not brightness (r2 = 0.31, F7,20 = 0.83, P = 0.58; Fig. 1). Post-hoc pairwise comparisons revealed that yolks from the 0 mg/kg group had significantly higher (less yellow) hue scores than those from the 30, 45, 60, 90, and 105 mg/kg carotenoid group and exhibited significantly less saturated colors than yolks from the 45, 60, 90, and 105 mg/kg carotenoid groups (Tukey-Kramer HSD tests, all P < 0.05; Fig. 1). Experimental variation in dietary carotenoid access also had significant effects on other egg characteristics.
Carotenoid dose affected both female egg-laying propensity (r2 = 0.84, F7,15 = 6.1, P = 0.01) and yolk size (r2 = 0.73, F7,20 = 4.96, P = 0.006), but in opposite directions. Females fed high amounts of carotenoids (60 and 105 mg/kg) were more likely to lay eggs on the randomly selected collection days than females fed lower amounts of carotenoids (0 and 75 mg/kg) (Fig. 2a). However, females supplemented with high amounts of carotenoids laid eggs with yolks that were significantly smaller in area; in particular, females fed the two highest carotenoid doses (90 and 105 mg/kg) laid eggs with significantly smaller yolks than females fed 45 and 75 mg/kg (Tukey-Kramer HSD tests, all P < 0.05; Fig. 2b).

[Fig. 1: diamond plots of yolk hue (degrees), saturation (%), and brightness (%) against carotenoid treatment C0-C7.]
Fig. 1. — Effect of dietary carotenoid supplementation on egg-yolk color in female Japanese quail. C0 through C7 treatments correspond to 0, 15, 30, 45, 60, 75, 90, and 105 mg/kg of carotenoid added to the base diet. Horizontal lines (from top to bottom) in the diamond plot signify the 25th, 50th, and 75th percentiles; outer points (from top to bottom) indicate the 10th and 90th percentiles. Yolk colors were scored in traditional tristimulus color space (hue, saturation, and brightness) using digital photography under standard lighting conditions. Yolks became oranger in hue and more saturated with increasing dietary carotenoid content (see text for statistics).

Correlations between yolk color, yolk size, and egg-laying capacity

Because females were randomly assigned to dietary treatments, I expected a notable amount of unaccounted for and inherent (genetically/physiologically based) variation in carotenoid use among groups.
Thus, in addition to blocking by treatment group in my statistical analyses, I also took the approach of correlating individual measures of yolk coloration (my mark of their response to the treatment) with both yolk area and egg-laying capacity. I suspected that this might clarify some of the relationships between diet and yolk color/size that were not perfectly dose-dependent (e.g. the high hue and low saturation values, but large area of the yolks from the 75 mg/kg treatment group). Among all eggs studied, both yolk size (Fig. 3) and the percentage of females laying eggs per treatment group (Fig. 4) were significantly negatively correlated with yolk hue and significantly positively correlated with yolk saturation. Moreover, though not statistically significant, group means of yolk size tended to be negatively correlated with group egg-laying propensity (Fig. 4).

[Fig. 2: plots of the percentage of females laying eggs and of yolk area (cm2) against carotenoid treatment C0-C7, with Tukey groupings.]
Fig. 2. — Effect of dietary carotenoid supplementation on egg-laying capacity and yolk size in female Japanese quail. See legend of Fig. 1 for more information on carotenoid treatments. Eggs were collected on 2 randomly selected days post-supplementation, and egg-laying capacity represents the average percent of females that produced an egg on each collection day. Yolk area was measured from eggs collected using digital photography, with a size standard included in each photograph to control for (slight) interphoto variation in the distance between camera and subject.

DISCUSSION

I experimentally demonstrate a carotenoid-driven trade-off between offspring quantity and quality in quail.
Quail hens fed large supplies of dietary carotenoids were more likely to lay eggs during the study, but their eggs contained a smaller amount of total yolk reserve. The nutritional challenges of producing many, high-quality offspring have been previously examined and confirmed in birds (e.g. lesser black-backed gulls, Larus fuscus; Nager et al. 2000), but to date no individual nutrients have been targeted as potential mediators of this trade-off. Due to their varied physiological effects, carotenoids may be important molecules for limiting and enhancing aspects of reproductive performance in birds and other animals (Blount 2004, Blount et al. 2004).

[Fig. 3: scatterplots of yolk area (cm2) against yolk hue (r = 0.42, P = 0.05) and yolk saturation (r = -0.48, P = 0.03).]
Fig. 3. — Correlations between the yolk-color parameters that significantly varied with diet (hue and saturation) and yolk area for Japanese quail eggs. Points represent individual birds from all treatment groups. Pearson’s correlational tests showed that oranger and more saturated yolks were smaller in area.

[Fig. 4: scatterplots of yolk hue (rs = -0.69, P = 0.05), yolk saturation (rs = 0.69, P = 0.05), and yolk area (rs = -0.54, P = 0.15) against the % of birds laying per treatment group.]
Fig. 4. — Correlations between egg-laying capacity and both yolk color (hue and saturation) and area in Japanese quail. Points represent treatment means. Spearman-rank correlational tests were used, due to small sample sizes, and show that birds who produced more eggs tended to lay eggs with more orange, more saturated, and smaller yolks.

There are several possible molecular mechanisms by which carotenoids dually control these breeding efforts. Carotenoids may directly stimulate egg-production through the upregulation of estrogenic enzymes (e.g. Ng et al. 2000) or by offering antioxidant protection to egg precursors and components (e.g. vitellogenin, VLDL; Blount 2004, Blount et al. 2004). Carotenoids could also modify yolk size directly, by accelerating follicular growth rates (Christians & Williams 2001) due to their gene-regulatory or antioxidant capacity. However, this could also occur indirectly, as the act of egg production itself depletes internal yolk supplies (Nager et al. 2000). Because of different energetic demands, trade-offs that are documented experimentally in the lab are not always borne out in free-living animals (e.g. for testosterone and immunity in birds; Peters 2000), so it will be important now to examine egg investment in several species of wild birds to determine the prevalence of this nutrient-specific trade-off. For the aforementioned ‘indirect’ mechanism, one would predict that a quantity/quality trade-off would be common in multiparous species (either with large clutches or many clutches per year). In support of this, gulls lay few (ca 3) eggs per year, and no carotenoid-mediated trade-off in egg production and yolk size has been reported (Blount et al. 2004).

This study provides the second documented negative effect of carotenoid supplementation (at physiological levels) on breeding investment in birds. Blount et al. (2002) showed that high carotenoid levels led to decreased concentrations of maternal antibodies in the yolks of free-living gulls. Taken together, these results suggest that there may be costs to allocating carotenoids to certain reproductive efforts over others, much like Lozano (1994) proposed for the allocation of carotenoids to health versus sexual coloration.
Studies that follow the fitness consequences of these investments will help clarify the relative importance of different means of carotenoid allocation. In my study, for example, it is unclear whether eggs containing carotenoid-deficient but larger yolks would have been more likely to hatch or produce young with different growth rates or survival probabilities than eggs containing carotenoid-rich but small yolks.

The reported findings also support the carotenoid basis of yolk-color variation in birds. Carotenoids and riboflavin (Squires & Naber 1993) give yellow color to yolks, and in animals other than chickens no studies have experimentally manipulated dietary carotenoid content and confirmed that the intensity of yolk color is due to increased carotenoid levels. In fact, yolk coloration is not the lone aspect of carotenoid investment that researchers should be measuring. Total carotenoid investment, in milligrams, is the important biological parameter of interest to breeding mothers and their developing embryos, and to measure this one must quantify both total carotenoid concentration and yolk mass. Future studies should adopt this more comprehensive approach so that we can continue to improve our understanding of the costs and benefits of carotenoids in animal systems.

ACKNOWLEDGMENTS

This research was approved by the Institutional Animal Care and Use Committee at the University of California-Davis. I thank K. Klasing, P. Nolan, and two anonymous referees for constructive comments on the manuscript. Funding was provided by the United States Department of Agriculture (grant to K. Klasing) during data collection and by the School of Life Sciences and College of Liberal Arts and Sciences at Arizona State University during manuscript preparation.

REFERENCES

Blount J.D. 2004. Carotenoids and life-history evolution in animals. Archives of Biochemistry and Biophysics 430: 10-15.
Blount J.D., Houston D.C., Surai P.F.
& Møller A.P. 2004. Egg-laying capacity is limited by carotenoid pigment availability in wild gulls Larus fuscus. Proceedings of the Royal Society of London (B) 271: S79-S81.
Blount J.D., Surai P.F., Nager R.G., Houston D.C., Møller A.P., Trewby M.L. & Kennedy M.W. 2002. Carotenoids and egg quality in the lesser black-backed gull Larus fuscus: a supplemental feeding study of maternal effects. Proceedings of the Royal Society of London (B) 269: 29-36.
Chew B.P. & Park J.S. 2004. Carotenoid action on the immune response. Journal of Nutrition 134: 257-261.
Christians J.K. & Williams T.D. 2001. Interindividual variation in yolk mass and the rate of growth of ovarian follicles in the zebra finch (Taeniopygia guttata). Journal of Comparative Physiology (B) 171: 255-261.
Dzialowski E.M. & Sotherland P.R. 2004. Maternal effects of egg size on emu Dromaius novaehollandiae egg composition and hatchling phenotype. Journal of Experimental Biology 207: 597-606.
Hill G.E. 2002. A red bird in a brown bag: the function and evolution of colorful plumage in the house finch. New York: Oxford University Press.
Hill G.E., Inouye C.Y. & Montgomerie R. 2002. Dietary carotenoids predict plumage coloration in wild house finches. Proceedings of the Royal Society of London (B) 269: 1119-1124.
Hill W.L. 1993. Importance of prenatal nutrition to the development of a precocial chick. Developmental Psychobiology 26: 237-249.
Koutsos E.A., Clifford A.J., Calvert C.C. & Klasing K.C. 2003. Maternal carotenoid status modified the incorporation of dietary carotenoids into immune tissues of growing chickens (Gallus gallus domesticus). Journal of Nutrition 133: 1132-1138.
Krinsky N.I. 2001. Carotenoids as antioxidants. Nutrition 17: 815-817.
Lozano G.A. 1994. Carotenoids, parasites, and sexual selection. Oikos 70: 309-311.
McGraw K.J. & Ardia D.R. 2003. Carotenoids, immunocompetence, and the information content of sexual colors: an experimental test. The American Naturalist 162: 704-712.
McGraw K.J., Mackillop E.A., Dale J. & Hauber M.E. 2002. Different plumage colors reveal different information: how nutritional stress affects the expression of melanin- and structurally based ornamental coloration. Journal of Experimental Biology 205: 3747-3755.
McGraw K.J., Parker R.S. & Adkins-Regan E. 2005. Maternally derived carotenoid pigments affect offspring survival, sex ratio, and sexual attractiveness in a colorful songbird. Naturwissenschaften 92: 375-380.
Møller A.P., Biard C., Blount J.D., Houston D.C., Ninni P., Saino N. & Surai P.F. 2000. Carotenoid-dependent signals: indicators of foraging efficiency, immunocompetence or detoxification ability? Avian and Poultry Biology Reviews 11: 137-159.
Nager R.G., Monaghan P. & Houston D.C. 2000. Within-clutch trade-offs between the number and quality of eggs: experimental manipulations in gulls. Ecology 81: 1339-1350.
Ng J.H., Nesaretnam K., Reimann K. & Lai L.C. 2000. Effect of retinoic acid and palm oil carotenoids on oestrone sulphatase and oestradiol-17β hydroxysteroid dehydrogenase activities in MCF-7 and MDA-MB-231 breast cancer cell lines. International Journal of Cancer 88: 135-138.
Peters A. 2000. Testosterone treatment is immunosuppressive in superb fairy-wrens, yet free-living males with high testosterone are more immunocompetent. Proceedings of the Royal Society of London (B) 267: 883-889.
Radder R.S., Shanbhag B.A. & Saidapur S.K. 2002. Pattern of yolk internalization by hatchlings is related to breeding timing in the garden lizard, Calotes versicolor. Current Science 82: 1484-1486.
Sinervo B. & Svensson E. 1998. Mechanistic and selective causes of life history trade-offs and plasticity. Oikos 83: 432-442.
Smith C.C. & Fretwell S.D. 1974. The optimal balance between size and number of offspring. The American Naturalist 108: 499-506.
Squires M.W. & Naber E.C. 1993. Vitamin profiles of eggs as indicators of nutritional status in the laying hen: riboflavin study.
Poultry Science 72: 483-494.
Thomson L.R., Toyoda Y., Langner A., Delori F.C., Garnett K.M., Craft N., Nichols C.R., Cheng K.M. & Dorey C.K. 2002. Elevated retinal zeaxanthin and prevention of light-induced photoreceptor cell death in quail. Investigative Ophthalmology and Visual Science 43: 3538-3549.
Warner D.A. & Andrews R.M. 2002. Laboratory and field experiments identify sources of variation in phenotypes and survival of hatchling lizards. Biological Journal of the Linnean Society 76: 105-124.
Williams T.D. 1994. Intraspecific variation in egg size and egg composition in birds: effects on offspring fitness. Biological Reviews 68: 35-59.

Julie Botticello, Tom Fisher & Sophie Woodward (2016) "Relational resolutions: digital encounters in ethnographic fieldwork", Visual Studies, 31:4, 289-294. http://dx.doi.org/10.1080/1472586X.2016.1246350

Abstract
The articles in this Special Issue highlight the relationality existing between researchers, participants, cameras, and images, with each article bringing complementary perspectives on the use of digital images in ethnographic fieldwork. These include reactivating archives through their digitization for visual repatriation, facilitating dialogue and understanding between participant and researcher, analyzing the relation between participants and the virtual spaces of their self-representations, and exploring the range of capacities for new research methodologies afforded by digital technologies. Individually and through their juxtaposition, the articles highlight the complexity of the interactions between researchers and participants in their digital encounters, and open dialogical spaces, in ethnographic fieldwork and in visual anthropology, about access, participation and transparency in representational practices.
Relational resolutions: Digital encounters in ethnographic fieldwork

Photographs and film have long been used in anthropology as a form of supplementary documentation: objects to enhance other data, or tools for reconnection to pasts or recovery of lost processes and practices. The focal point here, however, is not, or not only, on the image created as object or tool, but on the processes involved in generating images in contemporary ethnographic fieldwork contexts and on the continued mediational salience of images once they are created. The four papers in this special issue were first presented together in a panel at the RAI's Anthropology and Photography conference, held at the British Museum in May 2014. As organisers of the panel, we wished to explore the different engagements digital photography (as still images) enables, in making as well as viewing images, and what these encounters suggest for thinking further about the visual in anthropology. In particular, has the advent of the digital image in the fieldwork context significantly changed the parameters of the relationships between researchers, participants and images? If so, what impact might this have for visual anthropology more generally? The anthropological tenets of participation and observation can seem at odds with photographic and filmic methods, as cameras and recording equipment can create detachment and distance between an observer and those observed. Yet, since the middle of the twentieth century, there has been a growing emphasis on the relationality evident in these encounters (Banks 2001; Edwards 2003; MacDougall 1991; Morley 2006; Morton and Edwards 2009; Peers and Brown 2003; Pinney 2016; Pink 2003).
Since the 1960s, as Woodward (2008: 863-4) recounts, anthropologist and filmmaker Jean Rouch 'aimed to change the research relationship from people who had power "interrogating people without it", to a "shared … dialogue between people belonging to different 'cultures'"' (citing Morley, 2006: 117), with the camera encouraging people to 'reveal themselves' (Morley, 2006: 119) as part of this engagement process. MacDougall highlights that from the 1970s onward, this 'tendency towards dialogic and polyphonic construction in ethnography' (1991: 2) has been growing, particularly in the handover of power and equipment to indigenous peoples to represent themselves (Ginsberg 1991; Turner 1992; Worth and Adair 1972[1997]). However, Turner (1991: 7) reminds us that communities are not homogeneous, inclusive groups and that, whether an outsider or an insider, whoever holds the camera still does so from a particular perspective, situation, and power base, and thus does not represent the whole, but remains a partial perspective (Haraway 1998; Clifford and Marcus 1986). Nevertheless, Turner argues there is much to learn from the camera techniques and social dynamics in the 'production of indigenous visual media [… which] provides an opportunity to study the social production of representations rarely approached in non-visual ethnography' (Turner 1991: 16). The encounters between those in front, those behind the camera, and those spectating make 'anthropology more sensitive to the politics and possibilities of visual representation' (MacDougall, 2005: 219), regardless of where the researcher is standing in the exchange. While not always overt, this mutual engagement in the creation of visual representation has always been present.
Whereas the photographic image is evidence of 'that has been' (Barthes 2000), highlighting that an event occurred, 'the "contractual" elements of photographic events' (Pinney 2016: 75) are also evident, as the image reveals, not just that an event took place, but also the 'social relations which made [the encounter] possible' (Azoulay 2008: 127). This extends Banks' pragmatic acknowledgement that 'all image production [… is] the result of a series of social negotiations, some formal […], most informal' (2001: 119) to the 'relationality that flow[s] from the contingency of the photographic event' (Pinney 2016: 76). Collaboration resides at the basis of representation through image creation, in that the photographer, those photographed and the viewers (Mustafa 2002: 188) mutually construct the event. In short, photography is a social encounter. In spite of this, the place of the visual in anthropology remains ambivalent, with textual analysis taking precedence, the visual often being used as another method for documentation or education (Ruby 2005), and the images created slipping from context to content, losing the discursive and 'messy' (Jungnickel and Hjorth, 2014: 137) interactions of the encounters enabling their creation. This is not to say that evidence of these encounters cannot be seen in closer readings of the images (Azoulay 2008; Favero 2014; Herle 2009), which visual anthropologists, as seen above, have been highlighting for some time. The possibilities for revealing the 'messiness' of social encounters around the camera and the images produced have become more accentuated with the affordances provided by digital photographic technology. While in many ways digital imaging has not 'revolutionised photography' (Murray, 2008: 161), there are significant differences between digital and analogue photography.
These are the near simultaneity of image creation and image viewing; the capacity to store larger numbers of images at a time, facilitating chance capture and content build-up; the ease of dissemination to known and new audiences, democratizing authorship and spectatorship; creating access to archives, concretizing connections to ancestors and evoking memories linking past with present (Bell 2003; Edwards 2005; Peers and Brown 2003, 2009; Herle 2009); as well as the familiarity and ease of capturing and representing oneself directly, through the increasingly global reach of digital imaging (Ruby, 2005: 166). Further, digital images extend the information images contain through the metadata encoding embedded in the technology, facilitating virtual emplacement of the images back into the landscapes of their creation. As the papers in this SI reveal, in ethnographic fieldwork, digital images both follow and lead the trend toward greater equitability and transparency of the photographic ethnographic encounter. Following trends towards greater transparency and empowerment already evident within visual anthropology, the articles here acknowledge the particular qualities that photographs, whether digital or physical, have for creating conversations between participants and researchers, across time, in particular spaces, with specialists and those less informed (including the researchers). The visual is both a method and a mode of analysis that provokes its own interpretations and responses. Images in this SI are combined with text, not because texts provide the meaning for images, 'control[ling] their polysemy' (Barbosa 2010: 300), but because each is a different mode of access, understanding and transmission. The visual enables 'shifting perspective[s]', 'identification[s]' and 'implication[s]' (MacDougall, 2005: 220) within and beyond the intention of the researcher and those researched.
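The 'virtual emplacement' that metadata affords can be made concrete: EXIF geotags record latitude and longitude as degree/minute/second values plus a hemisphere reference, and converting these to signed decimal degrees is the step that lets an image be placed back into the landscape of its creation. A minimal sketch, with a simplified stand-in for the real EXIF tag layout and hypothetical coordinates (not drawn from the studies discussed):

```python
# Illustrative sketch only: convert an EXIF-style GPS reading
# (degrees, minutes, seconds plus hemisphere reference) into the
# signed decimal degrees that mapping tools use to place an image.

def dms_to_decimal(degrees, minutes, seconds, ref):
    """Convert degree/minute/second values to signed decimal degrees.

    `ref` is the hemisphere reference: 'N'/'E' yield positive values,
    'S'/'W' negative ones, following the EXIF GPSLatitudeRef convention.
    """
    value = degrees + minutes / 60.0 + seconds / 3600.0
    return -value if ref in ("S", "W") else value

# Hypothetical geotag for a fieldwork photograph.
lat = dms_to_decimal(51, 30, 36.0, "N")
lon = dms_to_decimal(0, 7, 48.0, "W")
print(round(lat, 4), round(lon, 4))  # prints: 51.51 -0.13
```

In practice the raw tags (GPSLatitude, GPSLatitudeRef and so on) would be read with an image library; the conversion step itself, however, is exactly this.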
Helping to drive changes in visual anthropology, the articles here aim to contribute to the calls to communicate research visually (Ruby 2005: 163) and to empower, 'in terms of access, participation and communication' (Cohen and Salazar 2005: 7, cited in Pink 2011: 228). This has a two-fold outcome: revealing the complexity of the interactions researchers and participants have with one another in their digital encounters, as well as bringing the 'different things to understand' (MacDougall 2005: 220) and 'new routes to knowledge and its representation' (Pink 2012: 12) the visual offers, in a fuller approach to anthropological knowledge. Situated in India, Korea, Spain and the UK, and across a variety of disciplines, including anthropology, art and design, cultural sciences, digital communication, and sociology, the articles in this Special Issue each grapple with challenges arising through the photographic social encounter mediated by digital images in ethnographic research. Sbriccoli's article, "Between the archive and the village, the lives of photographs in time and space", explicitly addresses the relationality and power structures between researcher and researched, between the eras in which these have occurred, between different participants in relation to one another and in relation to images, from past to present, and in the narrative (re)constructions these continually undergo in making and maintaining historicity and contemporary meaning. His work focuses on bridging the fieldwork and photographic archive made in the 1950s by A C Mayer with contemporary fieldwork and a photographic revisitation by Sbriccoli, in the same location of Jamgod, in the central Indian state of Madhya Pradesh. The temporalities, spaces and narratives in the research are multiple.
These range from the 1950s Jamgod in Mayer's photographs and fieldwork; to Mayer's recent discussions with Sbriccoli about that work and further digitization of extant photographs from the period; to Sbriccoli's relocation to Jamgod, and the undertaking of merging new and old, through image making and narrative generation; and finally, among the Jamgod villagers themselves, to their appropriation of Mayer's images from his fieldwork, their revisitation of these in mobile digital formats, and their individual and collective reconstruction of pasts bridging the 1950s and 2012-14. In the creation of his talking archive, Sbriccoli aims to make all these different perspectives, meanings, intentions, narrations and representations explicit, to be transparent about process and product to all those involved. This methodology aligns with the perspectivism proposed by Deleuze (2006), in which there are not variations of a single truth, but rather the truth of individual variations held by any subject – researched or researcher alike – disturbing the dichotomy between knowing subject and studied subject. His work aligns with all the papers in this volume around the fluidity, accessibility, and variability of meaning, where the meaning created is dependent on who is looking, how much they know, and their ability to reveal or conceal, which again suggests that a methodology incorporating image generation/(re)interpretation remains an emergent process. Significantly, Sbriccoli highlights the use of narrative as an essential element in gaining an understanding of any given meaning of an image. Following Sontag (1977), he argues that without narration, the images and archives remain mute, unable to articulate in visuals alone the links between content and referent.
While many, including the authors in this volume, go into the field with a hope of capturing content through the objectivity of images, as we argue here, knowledge in images lies dormant, or at least partially inaccessible, in the absence of interpretation. Here, Sbriccoli praises the versatility of the digital image to mediate in ways impossible 60 years ago; a point further exemplified by Dugnoille's research in Korea on the potency of the digital and virtual to attest to and progress social change. In this regard, it may be the case that although anthropology awakened to its own politics and powers decades ago (e.g. Turner 1991; MacDougall 1991), the technological advancement of the digital has further assisted in the shift away from knowledge produced by (academically) knowing subjects, to a more conscious and transparent knowledge, co-produced amongst all subjects involved in the research. Sbriccoli's project aims to elicit visually this multiplicity of knowing subjects and talking archives, and the instrumentality of images for 'evok[ing]' (Edwards 2005: 29), creating further spaces for transparency over process, product, perspective and potential. Continuing on from Sbriccoli's relationship between the researched and the researcher, with the photographic representation remaining in the hands of the researcher but narrated and discussed through the knowing subject of the researched, Botticello's paper, "From documentation to dialogue, interrogating routes to knowledge through digital image making", also focuses on the collaborative aspect of image making. Like Sbriccoli's, Botticello's data collection is not just a collaboration of different visual perspectives, but also a collaboration of different methodologies, of talking, doing and capturing, wherein digital imaging is but one part of the wider process.
While there are many tangible aspects to the production of Leavers lace at the last factory in England (machinery, materials, documents), understanding the processes which go into making lace remains an intangible aspect which this research was attempting to capture, preserve and appreciate. Her initial enquiry into capturing and communicating lace workers' knowledge through images quickly shifted toward the researcher being in need of the researched to guide and direct her toward some level of understanding, in order not only to capture process, but also to understand what was in the frame once it was captured. A complete object or entity produced, such as a web of lace, while evident in plain sight, can conceal the multiple inputs that create it. In her paper, Botticello aims to show the processes she undertook to gain some mastery over these in order to be able to explain how a finished piece of lace comes into existence. What she terms a multi-faceted methodology is a fluid and emergent collaboration between researcher and researched in the shift between intangible knowledge and the tangible creation. Visual highlights of this emergent tension between intangible and tangible lie in understanding the relationship between man, materials and machine in the lace making. Her use of Sennett's (2008) hand-eye-mind complex, together with the chaîne opératoire approach (Schlanger 2005), brings home the fluidity of tacit knowing and doing amid the tangible manifestation of making processes. While maintaining that her learning remained partial, and was thus predicated on moving out of ignorance toward an infinite destination of lace-making knowledge, she found on revisiting earlier images from the fieldwork that what were once obscurities hidden in plain sight now revealed more content as her own insight increased.
This circles back to the partiality and polysemic nature of images, in that they can always reveal more or less to those with greater or lesser knowledge, and they continue to retain potential for reinterpretation, by the same or new people, at other times, and in other contexts. Dugnoille's article extends these interactions by exploring how informants interact with digital technologies to represent themselves and to find new ways of understanding their own society. His work, "Digitalizing the Korean cosmos: Representing human-nonhuman continuity and filiality through digital photography in contemporary South Korea", focuses on animal activism and the platform digital images offer activists to re-represent human and non-human animal relations in contemporary Korea. In this, his research aligns with Sbriccoli's project on the democratization of knowledge and power through digital, and more significantly, online media. Dugnoille's concern is with contemporary social change in Korea, as it attempts to move away from 'traditional values' around dogs and cats, in which dogs and cats are brutally prepared for slaughter and then eaten (as the animal's increased adrenaline at death is understood to increase the sexual stamina of the person consuming the meat), toward seeing the same dogs and cats as pets. Digital and online media have played a significant role in re-orientating human and non-human animal relationships in this regard. Dugnoille worked with animal activists in Korea in the early 2010s who have taken ownership of the online representation, and the changing form, of human and non-human animal relationships.
In expressing this shift from food to pet, Dugnoille details how the animals depicted online are visually represented as singular entities, with particular personalities, qualities and characteristics, some standing in for their human guardians, others actually taking their own digital images through pet-friendly software, which their guardians later upload. Here Dugnoille argues that digital animal photography is used as a 'vector to demonstrate non-human animals' affiliation to […] the human domestic sphere' (Dugnoille, this volume). Further, some of the animals encountered in his research were named according to protocols used in Korean kinship, thus acknowledging that non-human animal relationships relate to cross- and intergenerational kin relationships among human animals. Here Dugnoille's visual research maps onto that of the other authors in this volume regarding the interrelationship between visual and discursive practices in representation and the re-imagination of histories and connections. The shift from food to pet, Dugnoille argues, follows Kopytoff's (1986) notion that commodity status is not a fixed state, but one that things can move in or out of. For the animal activists, the aim is to shift these animals from common commodities toward singular entities falling outside of the market system, where they are not sold as pets, but adopted without charge (as in the 2012 campaign "Don't buy, adopt"). The great success among activists in promoting animal adoption runs in parallel with an increase in dog and cat meat consumption in Korean society. Whether eaten as food or kept as pets, the sense of non-human animals within Korean society remains the same. Transposed into images, however, and without this insider understanding, this notion of an exclusive and interrelated community is difficult for an outsider to apply when animals are being prepared for consumption and not just adoption.
The meanings of the images are not easily transposed beyond Korean eyes, as knowledge in/of images remains decidedly specific to local communities of practice. It remains to be seen whether the use of online digital images may be more successful in shifting toward the singularized expression of non-human animals as pets within the cosmology of Korean society membership. Rounding off this collection of articles analyzing the digital in ethnographic research, Gomez Cruz takes us forward to consider mobility itself in his article, "Trajectories: Digital/visual data on the move", and his marking of the pathways between points of capture, made possible through digital technologies. This, he argues, is the next stage in theorizing digital ethnographic research. For Gomez Cruz, the virtual and the digital merge through the concept of a trajectory, which is not just a trace of having passed through, but a 'mobile sited ethnography' (Gomez Cruz, this volume), in which a researcher's own mobility emerges as a key component in the ethnographic process. Mobility becomes a further element that combines methods articulated already by the other authors in this volume, around visual data, digital methods and reflexivity about practice. Gomez Cruz takes us to Spain, England and beyond, where he analyses gaze and emplacement. While in Barcelona, Gomez Cruz recounts his daily bicycle rides and the encounters he makes with certain others on his route. Noteworthy are disempowered 'trolley-men' (Gomez Cruz, this volume) who migrate daily in an opposite trajectory to Gomez Cruz's own, using shopping trolleys to collect leftovers from consumer society.
Through digitally tracking and mapping their movements against his own, Gomez Cruz constructs a fieldsite of/in movement, through which juxtapositions of class, wealth, space, place, time, and materialities of mobility (bikes versus shopping trolleys) also become evident, and provide the starting points for further ethnographic research by changing trajectories to join others in their movements. In England and beyond, Gomez Cruz considers other happenstance interactions with his mobility, this time not with other mobile persons, but with digital screens. As with the trolley-men, he marks what forms the screens take, what information/content they hold and where they are located – some are fixed and others, like phones and tablets, are also highly mobile – and makes his own digital databases of them. In both instances, Gomez Cruz recognizes that his movements and intersections with other people or screens are not random, but are embedded in situations that call into question the notions of agency and structure governing his own movements as much as those he encounters. As other papers in this volume articulate, Gomez Cruz's concept of trajectories as a research methodology foregrounds the reinsertion of the researcher into relations between self/other, gazing/knowing, technology/change, and mobility/stasis in contemporary ethnographic research. Furthermore, reflexivity and awareness also impact the archive created, in that the serialization of images creates meaning and understanding which single images, or images without further contextual content (be this informant narratives or metadata), cannot. In addressing the processes of generating digital images as well as the interpretive, analytical potential in the images collected, the articles in this Special Issue highlight the relationality existing between researchers, participants, cameras, and images.
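The tracking and mapping described above can be sketched computationally: timestamped, geotagged capture points are ordered into a path, and the ground distance between successive points is measured, turning a set of scattered images into a record of movement. A minimal illustration with hypothetical coordinates, using the standard haversine great-circle formula (a generic sketch, not code from Gomez Cruz's own study):

```python
import math

# Illustrative sketch only: order timestamped, geotagged capture points
# into a trajectory and measure the distance of each leg with the
# haversine formula. Coordinates and timestamps are hypothetical.

EARTH_RADIUS_KM = 6371.0

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two lat/lon points."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * EARTH_RADIUS_KM * math.asin(math.sqrt(a))

def trajectory_legs(points):
    """Sort (timestamp, lat, lon) capture points and return per-leg distances."""
    ordered = sorted(points)  # tuples sort by timestamp first
    return [
        haversine_km(a[1], a[2], b[1], b[2])
        for a, b in zip(ordered, ordered[1:])
    ]

# Hypothetical capture points from a morning ride across Barcelona.
points = [
    ("09:40", 41.3920, 2.1530),
    ("09:05", 41.3851, 2.1734),  # out of order: sorting restores the route
    ("09:20", 41.3880, 2.1630),
]
legs = trajectory_legs(points)
print([round(d, 2) for d in legs])
```

Serialized this way, the images carry exactly the contextual content (order, place, distance) that the passage argues single images cannot.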
The papers show how digital images in ethnographic fieldwork continue existing trends in visual anthropology around empowerment and equitability. The papers also show how digital imaging can extend the discipline by exploiting the capacities the technology offers for access, participation, transparency and transmission, and the impacts these have on how researchers, participants and spectators relate to images and one another.

Works cited/references:
Azoulay, A (2008) The Civil Contract of Photography, New York: Zone Books
Banks, M (2001) Visual methods in social research, London: Sage
Barbosa, A (2010) 'Meaning and Sense in Images and Texts', Visual Anthropology, Vol. 23, pp. 299-310
Barthes, R (2000) Camera Lucida, London: Vintage
Bell, J (2003) 'Looking to see: Reflections on visual repatriation in the Purari Delta, Gulf province, Papua New Guinea', in L. Peers and A. Brown (eds.) Museums and Source Communities, London: Routledge, pp. 111-122
Clifford, J and Marcus, G (1986) Writing Culture: The Poetics and Politics of Ethnography, Berkeley: University of California Press
Cohen, H and Salazar, J (2005) 'Introduction: Prospects for a Digital Anthropology', Media International Australia, (116): 5-9
Deleuze, G (2006 [1988]) The Fold, London: A&C Black
Edwards, E (2003) 'Introduction: Talking Visual Histories', in L. Peers and A. Brown (eds.) Museums and Source Communities, London: Routledge, pp. 83-99
Edwards, E (2005) 'Photographs and the Sound of History', Visual Anthropology Review, Vol. 21 (1 and 2): 27-46
Favero, P (2014) 'Learning to look beyond the frame: Reflections on the changing meaning of images in the age of digital media practices', Visual Studies, Vol. 29 (2): 166-179
Ginsberg, F (1991) 'Indigenous Media: Faustian Contract or Global Village?', Cultural Anthropology, Vol. 6 (1): 92-112
Haraway, D (1998) 'The persistence of vision', in N. Mirzoeff (ed.) Visual Culture Reader, Routledge: London and New York, pp.
191-198
Herle, A (2009) 'John Layard long Malakula 1914-1915: The Potency of Field Photography', in C. Morton and E. Edwards (eds.) Photography, Anthropology and History: Expanding the Frame, Farnham: Ashgate Publishing Ltd., pp. 241-263
Jungnickel, K and Hjorth, L (2014) 'Methodological entanglements in the field: methods, transitions and transmissions', Visual Studies, Vol. 29 (2): 136-145
Kopytoff, I (1986) 'The Cultural Biography of Things: Commoditization as Process', in A. Appadurai (ed.) The Social Life of Things: Commodities in Cultural Perspective, Cambridge: Cambridge University Press, pp. 64-91
MacDougall, D (2005) The Corporeal Image, Princeton: Princeton University Press
MacDougall, D (1997) 'The visual in Anthropology', in M. Banks and H. Morphy (eds.) Rethinking Visual Anthropology, New Haven and London: Yale University Press, pp. 276-295
MacDougall, D (1991) 'Whose story is it?', Visual Anthropology Review, Vol. 7 (2): 2-10
Morley, D (2006) Media, Modernity and Technology, London: Routledge
Morton, C and Edwards, E (2009) Photography, Anthropology, History: Expanding the Frame, Farnham: Ashgate Publishing Ltd.
Murray, S (2008) 'Digital Images, Photo-Sharing, and Our Shifting Notions of Everyday Aesthetics', Journal of Visual Culture, Vol. 7 (2): 147-163
Mustafa, HN (2002) 'Portraits of Modernity: Fashioning Selves in Dakarois Popular Photography', in P. Landau and D. Kaspin (eds.) Images and Empires: Visuality in Colonial and Post Colonial Africa, Berkeley: University of California Press, pp. 172-192
Peers, L and Brown, A (2009) 'Just by bringing these photographs…: on the other meanings of anthropological images', in C. Morton and E. Edwards (eds.) Photography, Anthropology and History: Expanding the Frame, Farnham: Ashgate Publishing Ltd., pp. 265-280
Pink, S (2012) Advances in Visual Methodology, London: Sage Publications Ltd
Pink, S (2011) 'Digital Visual Anthropology: Potentials and Challenges', in M. Banks and J. Ruby (eds.)
Made to Be Seen: Perspectives on the History of Visual Anthropology, Chicago: University of Chicago Press, pp. 209-233
Pink, S (2003) 'Interdisciplinary agendas in visual research: Resituating visual anthropology', Visual Studies, Vol. 18 (2): 179-192
Pink, S (2001) Doing Visual Ethnography: Images, Media and Representation in Research, London: Sage
Pinney, C (2016) 'Crisis and Visual Critique', Visual Anthropology Review, Vol. 32 (1): 73-78
Ruby, J (2005) 'The last 20 years of visual anthropology - a critical review', Visual Studies, Vol. 20 (2): 159-170
Schlanger, N (2005) 'The Chaîne Opératoire', in C. Renfrew and P. Bahn (eds.) Archaeology: The Key Concepts, London: Routledge, pp. 25-31
Sennett, R (2008) The Craftsman, London: Allen Lane
Sontag, S (1977) On Photography, New York: Picador
Turner, T (1992) 'Defiant Images: The Kayapo appropriation of video', Anthropology Today, Vol. 8 (6): 5-16
Woodward, S (2008) 'Digital Photography and Research Relationships: Capturing the Fashion Moment', Sociology, Vol. 42 (5): 857-872
Worth, S and Adair, J (1972[1997]) Through Navajo Eyes: An Exploration in Film Communication and Anthropology, Albuquerque: University of New Mexico Press

The multicultural evolution of beauty in facial surgery
Braz J Otorhinolaryngol. 2017;83(4):373-374. www.bjorl.org
Brazilian Journal of OTORHINOLARYNGOLOGY - EDITORIAL
A evolução multicultural da beleza na cirurgia facial

The concept of facial beauty has been defined in a variety of ways dating back to ancient times, and while the definition continues to develop, it has become clear that beauty crosses ethnic boundaries and has a significant cultural and economic impact.
Please cite this article as: Cerrati EW, Thomas JR. The multicultural evolution of beauty in facial surgery. Braz J Otorhinolaryngol. 2017;83:373-4.

Subconsciously, beauty is perceived by humans as a sign of favorable genes and increased fertility, both of which play a role in mate selection. As a result, perceived attractive features that are subconsciously selected evolve much more quickly than other naturally selected characteristics. Additionally, the beautiful are more likely to get better grades in school, to be hired for a job, to receive higher salaries, and to be viewed as nicer, smarter and healthier.1
While beauty was once stated to be "in the eye of the beholder", more recent studies have suggested that beauty is an objective, quantifiable quality. The ancient Greeks began the quest for a universal standard of beauty and believed it was represented by the "golden ratio", also known as "phi", which was thought to represent perfect harmony.1-3 In nature, the ratio appears in the spiral of seashells, in the growth rate of the human mandible, and in the DNA antihelix. Examples of its application include Egyptian art and architecture, the Fibonacci sequence, and geometric shapes such as the pentagon and decagon. Many still believe that phi corresponds to facial beauty as well.3 However, others have found it to be inexact. For example, Marquardt created an "ideal" facial standard based off of phi, and not only did it apply poorly to people of non-European/Caucasian descent but it also masculinized Caucasian women.4
The concept of beauty as a formula continued to evolve with the artists of the Renaissance period. Through Da Vinci and his contemporaries, the neoclassical ideals were largely based on phi. The art anatomists of the 17th and 19th
centuries propagated these new standards into the medical field, which created a "universal" definition of beauty for the period.2 While these ideals continue today to have a strong influence on facial analysis and serve as a guideline for surgical planning, research has shown that these ideals still do not apply cross-culturally. Despite the inability to universally quantify beauty, researchers have found that there is a consensus on rating attractiveness across sexual orientations, ethnic groups, and ages. Studies have shown that diverse populations agree on who is and is not attractive. Additionally, even infants have an innate preference toward attractive faces.1 Certain conceptions of facial beauty or attractiveness may be everlasting. In 2006, Bashour researched and challenged each of the four concepts. He concluded that subjective attractiveness comprises only a small percentage of personal preference over a much larger biological objective assessment of attractiveness.5 The four concepts of facial beauty include symmetry, averageness, youthfulness, and sexual dimorphism. The first concept of symmetry is believed to represent a high quality of development. A symmetric face reflects a person's phenotypic and genetic condition, giving him or her an advantage in sexual competition. Averageness, the second concept, is informed by the Darwinian theory that evolutionary pressures function against the extremes of the population. As a result, humans innately appreciate that averageness represents genetic heterozygosity and a greater resistance to disease. The third concept is youthfulness. Neonatal features, such as large eyes and a small nose, are believed to suggest desirable qualities of youthful liveliness, open-mindedness and affability. (http://dx.doi.org/10.1016/j.bjorl.2017.04.005. 1808-8694/© 2017 Associação Brasileira de Otorrinolaringologia e Cirurgia Cérvico-Facial. Published by Elsevier Editora Ltda. Open access article under the CC BY license, http://creativecommons.org/licenses/by/4.0/.)
As a person ages and demonstrates soft tissue descent, the face deviates from the phi standard, resulting in a decrease in attractiveness. In addition, the human brain interprets the physical changes of aging as a decrease in fertility. The fourth and last concept of beauty is sexual dimorphism, which is defined as a phenotypic difference between males and females. For females, increased estrogen leads to the development of secondary characteristics that suggest a fertile host and a reproductive advantage. These include a thin jaw, small chin, large widely spaced eyes, small nose, high cheekbones, and plump lips. On the contrary, desirable physical features in men are those that signify high testosterone levels, such as prominent chins, square jaws, deep-set eyes, thin lips, heavy brows and abundant hair.1,2 Although attractiveness can be agreed upon cross-culturally, each ethnicity has unique features that are factored into its definition of "averageness." For the facial plastic surgeon, these unique features must be respected and embraced in order to create a harmonious and elegant result that meets the criteria of beauty and attractiveness. As a result, the neoclassical ideals may not serve as accurate guidelines in non-Caucasian patients.
Specifically in rhinoplasty, distinct anatomic differences exist between the leptorrhine nose seen in Caucasians, the platyrrhine nose seen in African and Asian populations, and the mesorrhine nose seen in Latin American populations.1 Patients frequently want to preserve their cultural identity, so it is paramount that the surgeon clearly distinguishes these goals preoperatively. Today's typical facial plastic surgery practice is becoming increasingly multicultural. The globalized modern society has played a significant role in the perception of beauty. Economic mobility coupled with an increase in interracial couples has blurred the lines of ethnic identity, and the resulting esthetically unique and beautiful outcomes do not allow patients to be characterized as fitting a narrow mold with predictable desires.1 The classic principles of beauty, including phi, symmetry, averageness, youthfulness, and sexual dimorphism, can still be applied as guidelines, but the surgeon must incorporate a broader outlook on facial analysis and surgical techniques. The importance of identifying patients' ethnic identities cannot be overemphasized, as patients may want to erase, preserve, modify, or even enhance those specific inherent traits. Furthermore, cosmetic surgery continues to become increasingly desirable and socially acceptable. The amplified attention and interest can be credited to its exposure in reality television, social media, and surgical documentaries. The increased demand and the rising population diversity ensure that each patient will present with a unique background and cosmetic objective. The surgeon should assist patients to arrive at a goal that is harmonious with their face, giving a timeless, attractive result, rather than be swayed by the development of a fashion trend. The proper guidance, insight, and ethical control distinguish the surgeon from a technician. More importantly, these qualities preserve the integrity of the field of facial plastic surgery.
While facial modifications can have a tremendous impact on patients' lives, the planned result should not venture too far from the concepts of facial beauty that have defined the field since its creation. Digital photography along with computer imaging has aided with preoperative assessments in an effort to confirm that the surgeon and the patient have the same esthetic goals. Technology will continue to improve to facilitate this initial conversation. As society evolves, so must our understanding of beauty, along with our attempt to surgically define it.

Conflicts of interest

The authors declare no conflicts of interest.

References

1. Weeks DM, Thomas JR. Beauty in a multicultural world. Facial Plast Surg Clin N Am. 2014;22:337-41.
2. Thomas JR, Dixon TK. A global perspective of beauty in a multicultural world. JAMA Facial Plast Surg. 2016;18:7-8.
3. Prokopakis EP, Vlastos IM, Picavet VA, Nolst Trenite G, Thomas JR, Cingi C, et al. The golden ratio in facial symmetry. Rhinology. 2013;51:18-21.
4. Holland E. Marquardt's phi mask: pitfalls of relying on fashion models and the golden ratio to describe a beautiful face. Aesthetic Plast Surg. 2008;32:200-8.
5. Bashour M. History and current concepts in the analysis of facial attractiveness. Plast Reconstr Surg. 2006;118:741-56.

Eric W. Cerrati a,∗, J. Regan Thomas b
a University of Illinois at Chicago, Department of Otolaryngology-Head and Neck Surgery, Division of Facial Plastic & Reconstructive Surgery, Chicago, USA
b University of Illinois at Chicago, Department of Otolaryngology-Head and Neck Surgery, Chicago, USA
∗ Corresponding author. E-mail: ecerrati@gmail.com (E.W. Cerrati).
Cachaça Classification Using Chemical Features and Computer Vision

Bruno Urbano Rodrigues1, Ronaldo Martins da Costa1∗, Rogério Lopes Salvini1, Anderson da Silva Soares1, Flávio Alves da Silva1, Márcio Caliari1, Karla Cristina Rodrigues Cardoso1, and Tânia Isabel Monteiro Ribeiro2†

1 Universidade Federal de Goiás, Goiânia, Goiás, Brasil (brunourb@gmail.com, ronaldocosta@inf.ufg.br, rogeriosalvini@inf.ufg.br, anderson@inf.ufg.br, flaviocamp@ufg.br, macaliari@ig.com.br, karlagropan@hotmail.com)
2 Instituto Politécnico de Bragança, Bragança, Portugal (tania.im.ribeiro@gmail.com)

Abstract. Cachaça is a type of distilled drink made from sugarcane, with great economic importance. Its classification includes three types: aged, premium and extra premium, all related to the aging time of the drink in wooden casks. Besides the aging time, it is important to know which wood was used in the storage barrel, so that the properties of each drink are properly reported to the consumer. This paper presents a method for the automatic recognition of the wood type and the aging time using information from a computer vision system together with chemical information. Two pattern recognition algorithms are used: artificial neural networks and k-NN (k-Nearest Neighbor). In the case study, 144 cachaça samples were used. The results showed 97% accuracy for the aging-time classification problem and 100% for the wood classification problem.

Keywords: pattern recognition, drink analysis, computer vision

1 Introduction

Cachaça is the most widely consumed distilled alcoholic beverage in Brazil. It is a special type of beverage produced from sugarcane (Saccharum sp), similar to rum.
∗ The authors thank the research agencies CAPES and FAPEG for the support provided to this research.
† The authors thank the School of Agriculture, Polytechnic Institute of Bragança, for the support provided to this research.

Procedia Computer Science, Volume 29, 2014, Pages 2024-2033. ICCS 2014, 14th International Conference on Computational Science. Selection and peer-review under responsibility of the Scientific Programme Committee of ICCS 2014. © The Authors. Published by Elsevier B.V. doi: 10.1016/j.procs.2014.05.186

Its differential is the use of different types of wood in the aging process. Aging consists of storing the cachaça in barrels or wooden casks for a certain time. This process produces changes in the chemical composition, aroma, flavor and color of the drink [2]. The legislation classifies cachaça into three types: aged cachaça, premium cachaça and
The drink remains white and with its distinctive flavor even after properly forti- fied in contact with the wood[2]. Detailed knowledge of the chemical and sensory composition of cachaça, as well as the maturation time, constitute important factors in controlling beverage quality and evaluation of changes that may contribute to the improvement of production pro- cesses. This knowledge can contribute to the production process especially for small producers and artisan industries. De Souza[6] uses gas chromatography - olfactometry - to separate and characterize the odors present in cachaça and rum, these two products of sugarcane were compared and the patterns identified from a descriptive sensory analysis. The disadvantage of this method is maintainabil- ity because it has high cost. [7] demonstrates the differentiation between cachaça and rum by using ionization mass spectrometry. The author used the principal component analysis (PCA), statistical approach in which data are represented by a subset of its eigenvectors, noting the type of wood (amburana -Amburana cearensis e jequitibá - Cariniana legalis). His work con- tributes to further studies can use this technique for the identification of artisanal and industrial cachaça as well as detection of adulteration by adding caramel and other substances such as dyes. Recent works use techniques of computer vision, neural networks, genetic algorithms and statistical methods for food classification. Wan[14] used the computer vision combined with artificial neural networks. A structural and microscopic approach of wines to be classified was used by analyzing the microstructure and texture, factor that influences the assignment of color to the sample. Starting from the idea that different wines have microstructural (microscopy) and micrograph (particles) changes, the study aimed at extracting common features to define a pattern. For such, neural networks were used for classifying samples. 
The presented results confirm that it is possible to classify a wine through its micrograph, allowing the use of these features in other contexts. Boisier [3] applies ΔE, based on the CIELab color space, to the samples and demonstrates their grouping according to the classified tones. The proposed goal was to represent the wines' colors with a limited number of colors, called nuances. The application of ΔE aimed at a comparison with the HVS model, observing brightness, chromaticity and saturation, thus analyzing the color spectrum and sorting and grouping it according to tone. The results are encouraging since they permit a precise characterization and reproduction of wine color. The RGB color model is an additive color system consisting of the Red, Green and Blue primaries. Additive colors are emitted or projected colors: a color is generated by mixing light of various wavelengths, causing a color sensation when it reaches the eye. RGB formats, also known as true-color, use 8 bits per channel. The CIELab model describes colors in three components: L* is lightness, while a* and b* contain the chroma information. L* is a luminance measure, the density of the intensity of light reflected in a given direction, while the a* and b* values refer to the amount of color [9]. Qiongshuai [12] shows the gain from using genetic algorithms, combined with computer vision, in the reading and classification of wines. Kruzlicova [10] evaluates the data through an artificial neural network and, as a comparative method, uses analysis of variance (ANOVA). Cozzolino [5] investigated the relationship between sensory analysis and visible (VIS) and near-infrared (NIR) spectroscopy to evaluate the sensory properties of commercial varieties of Australian wines using PCA (Principal Component Analysis).
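In its classic CIE76 form, the ΔE measure used by Boisier [3] is simply the Euclidean distance between two points in CIELab space. A minimal sketch (the function name and the sample L*, a*, b* values are ours, invented for illustration):

```python
import math

def delta_e_cie76(lab1, lab2):
    """CIE76 color difference: Euclidean distance between two
    (L*, a*, b*) points in CIELab space."""
    return math.sqrt(sum((c1 - c2) ** 2 for c1, c2 in zip(lab1, lab2)))

# Two hypothetical cachaça samples described by (L*, a*, b*) readings
sample_a = (78.0, 2.5, 18.0)   # lightly colored sample
sample_b = (74.0, 5.5, 22.0)   # darker, more amber sample
print(round(delta_e_cie76(sample_a, sample_b), 2))  # -> 6.4
```

Because CIELab was designed to be approximately perceptually uniform, samples with a small ΔE between them can be grouped into the same "nuance," which is the grouping idea described above.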
The methods described in the related articles do not address the variety of woods that can be used, nor do they combine chemical data with the values obtained by the colorimeter, and no work so far has related digital photographs of the samples using the RGB color model. Some works use only chemical data, others use only colorimeter data (CIELab color model), and those that use both chemical and colorimeter data did not consider the RGB color model. The colorimeter is generally described as any instrument that characterizes color samples in order to obtain an objective measure of color characteristics. Such equipment, however, is typically found only in research laboratories and industries; the relevant point is to make this technology accessible regardless of the producer. Instrumental methods imply cost, maintenance and handling by a specialist, whereas computational methods can achieve results while optimizing time and resources. Therefore, this paper proposes a method for classifying the aging process of cachaça in order to identify the wood and the aging time of a sample. The information obtained in the chemical analysis is combined with that extracted from colorimeters, as well as with data obtained by applying digital image processing algorithms to photographs of the cachaça samples. Artificial neural networks are used to assess the influence of the wood type and the aging time on the color models obtained from the digital photographs (RGB) and from the colorimeter (CIELab). Two pattern recognition techniques are used: neural networks and k-NN (k-Nearest Neighbor).

2 Materials and Methods

2.1 Samples

Cachaça samples aged for up to 36 months in casks of amburana (Amburana cearensis), oak (Quercus spp) and nut (Bertholletia excelsa H.B.K) were evaluated. The aging time was sampled every 4 months (4, 8, 12, 16, 20, 24, 28, 32 and 36 months). All evaluated samples come from 4 barrels of each timber.
Thus, there are 36 samples per timber, giving a total of 144 samples (Table 1). Normative Instruction No. 13 of MAPA (Ministry of Agriculture, Livestock and Supply) [11] defines the classification of Brazilian cachaça into three types: aged cachaça, premium cachaça and extra-premium cachaça. All types have an alcohol content between 38% and 48% by volume at 20◦C; what differs is the storage time in wooden casks. Aged cachaça has at least 50% of the blend stored in a wooden cask for at least one year. Premium cachaça is aged in its entirety in a wooden cask for at least one year, and extra-premium cachaça is aged in its entirety for a period of not less than three years. The physical and chemical analyses were performed in the laboratories for beverage technology and physicochemical analysis of the School of Agronomy, Federal University of Goiás. The determination of pH, density, real alcohol content at 20◦C, volatile, fixed and total acidity, dry extract, phenolic compounds, color and antioxidant activity was performed at times 0, 2, 4, 6, 8, 10 and 12, i.e., every 2 months of storage, to observe changes during the aging period.

The analyses followed these methodologies:

• pH (Features 4 and 5): measured with a digital potentiometer calibrated at 20◦C;

Table 1: Samples of cachaça analyzed for up to 36 months of aging time
- amburana (Amburana cearensis): 4, 8, 12, 16, 20, 24, 28, 32, 36 months; 36 samples
- oak (Quercus spp): 4, 8, 12, 16, 20, 24, 28, 32, 36 months; 36 samples
- nut (Bertholletia excelsa H.B.K): 4, 8, 12, 16, 20, 24, 28, 32, 36 months; 36 samples
- Total: 144 samples

• density (Features 1 and 2): based on the relationship with the specific weight of water at 20◦C, using a pycnometer or a hydrostatic device based on Archimedes' principle (in which a body immersed in a liquid is subjected to a vertical upward thrust equal to the weight of the displaced fluid);

• real alcohol content at 20◦C (Features 7 to 13), volatile, fixed and total acidity, and dry extract: performed according to the Brazilian official methods of analysis for distilled drinks;

• total phenolic compounds (Features 3 to 14): determined according to the official method of analysis AOAC 952.03 (AOAC, 1997), using a standard calibration curve with tannic acid and absorbance readings at 760 nm;

• color: determined in a ColorQuest II / Hunter Lab color spectrophotometer adjusted for reflectance with specular included, using blank No. C6299 of 03/96 and the sample in a clean glass cuvette with a 10 mm optical path and a 1-inch analysis field. The configuration included illuminant D65 and an angle of incidence of 10◦. The readings were performed in the universal CIELab color system with turbidity (homogeneous dispersion of solids in solution) and without turbidity (clear sample), determining the color-luminosity coordinates L*, a* and b*.
The color will also be assessed based on information from the digital photographs taken of all 144 cachaça samples. The features that influence the color are Features 1, 2, 7, 8, 9, 10, 11, 12 and 13;

• total aldehydes (Features 6 to 14), esters (Feature 3) and the higher alcohols isoamyl (Feature 10), isobutyl (Feature 9) and n-propyl (Feature 8): determined in a Shimadzu GC-17A gas chromatograph equipped with automatic injection, flame ionization detector and a DB-WAX capillary column (30 m x 0.25 mm x 0.25 mm). The concentrations of the compounds were determined by the area method with external-standard calibration;

• in vitro antioxidant activity (Features 3 to 14): determined by the method described by Brand-Williams, Cuvelier, and Berset [4], based on the capture of the stable DPPH radical from the reaction medium by the action of the antioxidants in the sample.

The attributes used in the model are described in Table 2.

Table 2: Chemical features with their respective numbers and descriptions
(a) Feature 1: Apparent Alcohol; Feature 2: Real Alcohol; Feature 3: Total Esters; Feature 4: Ethyl Acetate; Feature 5: Ethyl Lactate; Feature 6: Aldehydes; Feature 7: Total Alcohols
(b) Feature 8: n-propyl; Feature 9: Isobutanol; Feature 10: Isoamyl; Feature 11: 1-Butanol; Feature 12: 2-Butanol; Feature 13: Methyl Alcohol; Feature 14: Furfural

2.2 Computer vision system

Subsequently, the samples were photographed with a Canon EOS REBEL XS digital camera set to ISO 100 and an aperture of 4.0, configured for the RAW format, which contains all of the image data as captured by the camera sensor.
The ambient light for photographing the samples was controlled by a device that allows light to enter from the position opposite the camera lens. A special filter prevents reflections in the liquid and allows the capture of a digital image suitable for processing.

Figure 1: Computer vision system

Figure 1 shows the design of the device built for this work, a technique inspired by Sun[13] in his work on bovine meat classification. The device measured 50 cm², with a translucent filter of 30 cm² and an opening for the digital camera of 10 cm². The purpose of the device is to control the environment of the digital photography for better capture of the colors of the target object, in this context the cachaça, in order to observe a correlation between the color characteristics obtained by the colorimeter (CIELab model: L*, a*, b* - lightness, redness and yellowness) and the RGB model (red, green and blue). Afterwards, white balance was applied: a process for the removal of unrealistic color casts, so that objects which appear white to our eyes are rendered white. Color balance predates digital photography, having long been performed in both still photography and film. It is related to neutrality and should not be confused with the color balancing that painters and designers often apply for matching colors. The representation of the properties used in the RGB and CIELab models is shown in Table 3. As in the chemical analysis, the properties of the CIELab and RGB color models were named and separated for use in the classifier.
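To relate the two color spaces, the device's RGB readings can be converted into CIELab coordinates. The paper does not specify its exact conversion, so the following is an illustrative pure-Python sketch of the standard sRGB-to-CIELab pipeline, assuming 8-bit sRGB input and the D65 illuminant:

```python
# Convert an 8-bit sRGB triple to CIELab (L*, a*, b*) under illuminant D65.
# Standard pipeline: gamma-expand sRGB -> linear RGB -> XYZ -> Lab.

def srgb_to_lab(r, g, b):
    def linearize(c):
        # Undo the sRGB gamma encoding
        c /= 255.0
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

    rl, gl, bl = linearize(r), linearize(g), linearize(b)

    # sRGB -> XYZ matrix (D65 reference white)
    x = 0.4124 * rl + 0.3576 * gl + 0.1805 * bl
    y = 0.2126 * rl + 0.7152 * gl + 0.0722 * bl
    z = 0.0193 * rl + 0.1192 * gl + 0.9505 * bl

    # Normalize by the D65 white point, then apply the Lab companding function
    xn, yn, zn = 0.95047, 1.00000, 1.08883

    def f(t):
        return t ** (1.0 / 3.0) if t > 0.008856 else 7.787 * t + 16.0 / 116.0

    fx, fy, fz = f(x / xn), f(y / yn), f(z / zn)
    L = 116.0 * fy - 16.0        # lightness (Feature 15)
    a = 500.0 * (fx - fy)        # redness/greenness (Feature 16)
    b_star = 200.0 * (fy - fz)   # yellowness/blueness (Feature 17)
    return L, a, b_star
```

For example, pure sRGB red maps to approximately L* = 53, a* = 80, b* = 67, i.e. high redness and high yellowness.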
Table 3: CIELab features (a) and RGB features (b) with their respective numbers and descriptions

(a) CIELab features
Feature 15  Lightness
Feature 16  Redness
Feature 17  Yellowness

(b) RGB features
Feature 18  Color Red
Feature 19  Color Green
Feature 20  Color Blue

2.3 Pattern recognition algorithms

This work proposes the use of two pattern recognition algorithms: an artificial neural network and k-NN. Both are supervised learning techniques. Artificial neural networks (ANNs) are mathematical models for data analysis inspired by the neuronal structures of the brain. They are connectionist models with great power to solve complex, non-linear problems, with applications in several areas. A multilayer perceptron (MLP) neural network with 11 neurons in the hidden layer was used, trained with the backpropagation algorithm. The other method used is k-NN (k-Nearest Neighbor), a lazy supervised learning technique introduced by Aha[1]. The general idea of this technique is to find the k labeled examples closest to the unlabeled one; the class of the unlabeled example is then decided based on the labels of those closest examples. In this work, k = 1 and the Euclidean distance is used. Due to the limited number of samples, the cross-validation technique was used to measure the accuracy of the classifiers. In this technique, the samples are divided into n mutually exclusive partitions. At each iteration, a different partition is used to test the classifier and the other n-1 partitions are used to train it. The hit and error rates are the averages of the rates calculated over the n iterations. In this work, n = 10.

3 Results and discussion

The colorimeter information, chemical analyses and digital photographs were pooled as input to the classifiers.
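The evaluation protocol described in Section 2.3 (1-NN with Euclidean distance, scored by n-fold cross-validation with n = 10) can be sketched in pure Python. The MLP and the actual cachaça dataset are omitted for brevity; samples are assumed to be (feature-vector, label) pairs, so any data used with it below is illustrative only:

```python
import math
import random

def knn_predict(train, x):
    """1-NN: return the label of the training sample closest to x (Euclidean)."""
    return min(train, key=lambda sample: math.dist(sample[0], x))[1]

def cross_validate(samples, n_folds=10, seed=0):
    """n-fold cross-validation: average hit rate of the 1-NN classifier."""
    rng = random.Random(seed)
    indices = list(range(len(samples)))
    rng.shuffle(indices)
    # n mutually exclusive partitions
    folds = [indices[i::n_folds] for i in range(n_folds)]
    accuracies = []
    for fold in folds:
        test = [samples[i] for i in fold]
        train = [samples[i] for i in indices if i not in fold]
        hits = sum(knn_predict(train, x) == label for x, label in test)
        accuracies.append(hits / len(test))
    return sum(accuracies) / len(accuracies)
```

For two well-separated synthetic classes this returns a hit rate of 1.0, analogous to the hit/error percentages reported in Tables 4-11.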
Two pattern recognition algorithms were used: a neural network and k-NN (k-Nearest Neighbor). In the first experiment, only the chemical information was used as attributes, i.e. without the information from the colorimeter and the RGB model, for identification of the aging time and the wood. The results are shown in Tables 4 and 5.

Table 4: Recognition results of chemical features in the neural network.
Problem     Aging time   Wood type
hits (%)    94.44        96.26
errors (%)  5.56         3.74

Table 5: Recognition results of chemical features in k-Nearest Neighbor.
Problem     Aging time   Wood type
hits (%)    83.33        95.33
errors (%)  16.67        4.67

According to the results, both classifiers achieved high success rates using the chemical analysis data. The only caveat is that the classification of aging time using k-NN obtained a relatively lower accuracy rate (83.33%). Besides the chemical attributes, variables that make use of color information were measured using the CIELab and RGB color models. Figure 2 shows the Fisher discriminative capacity of the chemical, RGB and CIELab attributes for the problems of classifying the type of wood and the aging time. As can be seen, the attributes related to color information have more discriminability for the wood classification problem; attributes 16 and 20 have the highest discriminability. For the aging time classification problem, Figure 2(b) shows that the most discriminative attributes are those related to the chemical data, while the information from the computer vision system has low discriminability. It is noteworthy that Fisher's discriminability considers each attribute in a univariate analysis; thus, using the most discriminative attributes does not necessarily yield a good classification model.
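The per-feature Fisher discriminability can be computed as the ratio of between-class to within-class variance. The paper does not give its exact formula, so the sketch below uses one common multiclass univariate formulation as an assumption:

```python
def fisher_score(values, labels):
    """Univariate Fisher discriminability of one feature:
    between-class variance divided by within-class variance."""
    overall_mean = sum(values) / len(values)
    between, within = 0.0, 0.0
    for c in set(labels):
        v = [x for x, y in zip(values, labels) if y == c]
        mu = sum(v) / len(v)
        var = sum((x - mu) ** 2 for x in v) / len(v)
        between += len(v) * (mu - overall_mean) ** 2  # class mean vs global mean
        within += len(v) * var                        # spread inside the class
    return between / within if within else float("inf")
```

A feature whose class means are far apart relative to the spread within each class (as Figure 2(a) suggests for Features 16 and 20 in the wood problem) receives a high score; a feature whose class distributions coincide scores 0.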
Figure 2: Fisher discriminability in the wood type (a) and aging time (b) problems, using the chemical features and the CIELab and RGB color models. In the wood type problem the CIELab and RGB features have the highest discriminability, as shown in (a); in the aging time problem the chemical features have the highest discriminability, as shown in (b).

From the calculated discriminability, the two most discriminative variables for each problem were used to view a scatterplot of the objects for the wood and aging time classification problems. Figure 3(a) shows that the classification of the wood type is a simpler problem than the classification of aging time, shown in Figure 3(b).

Figure 3: Object dispersion in the wood type (a) and aging time (b) pattern recognition problems. In (a) the axes are B (Feature 20) and A (Feature 16), with classes Amburana cearensis, Quercus spp. and Bertholletia excelsa; in (b) the axes are ethyl lactate (Feature 5) and 2-butanol (Feature 12), with classes Aged, Premium and Extra Premium cachaça.

Having verified that this information may contain details relevant to the problems considered, an experiment was performed using only the RGB and CIELab attributes in the classifiers, without the chemical attributes. The results are shown in Tables 6 and 7.

Table 6: Recognition results of the CIELab and RGB color models in the neural network, without chemical features.
Problem     Aging time   Wood type
hits (%)    52.78        97.20
errors (%)  47.22        2.80
Table 7: Recognition results of the CIELab and RGB color models in k-NN, without chemical features.
Problem     Aging time   Wood type
hits (%)    44.44        98.13
errors (%)  55.56        1.87

Satisfactory results for the wood classification problem can be obtained using only the attributes of the computer vision system (the RGB and CIELab features). However, for the aging time classification problem, both classifiers have poor hit rates. From the observation that the color-related information may contain useful information for the classification problems considered in this paper, a new experiment was performed using the chemical attributes together with those of the CIELab color model.

Table 8: Recognition results of the chemical features with the CIELab color model in the neural network.
Problem     Aging time   Wood type
hits (%)    91.67        100.00
errors (%)  8.33         0.00

Table 9: Recognition results of the chemical features with the CIELab color model in k-NN.
Problem     Aging time   Wood type
hits (%)    86.11        97.19
errors (%)  13.89        2.81

From the results in Tables 8 and 9, one can see that the classifiers achieved high success rates using the chemical information associated with the CIELab attributes, compared to the classification results of Tables 4 and 5. There was an improvement in the success rate for the wood classification problem for both classifiers. For the aging time classification problem, there was an improvement only for the k-NN classifier. In the third experiment, the chemical attributes and the attributes of both the RGB and CIELab models were used.

Table 10: Recognition results of the chemical features with the CIELab and RGB color models in the neural network.
Problem     Aging time   Wood type
hits (%)    97.22        100.00
errors (%)  2.78         0.00

Table 11: Recognition results of the chemical features with the CIELab and RGB color models in k-NN.
Problem     Aging time   Wood type
hits (%)    88.89        100.00
errors (%)  11.11        0.00
According to the results shown in Tables 10 and 11, the wood type classification problem reached 100% accuracy for both classifiers. The result for aging time classification showed an improvement in the accuracy rate for the k-NN classifier (88.89%) and also for the neural network (97.22%). Table 12 shows the neural network's confusion matrix for the aging time classification problem. The only classifier error was labelling one Premium cachaça sample as Extra Premium cachaça.

Table 12: Confusion matrix of aging time generated by the neural network.
Aging time              Aged Cachaça   Premium Cachaça   Extra Premium Cachaça
Aged Cachaça            12             0                 0
Premium Cachaça         0              11                1
Extra Premium Cachaça   0              0                 12

4 Conclusion

This paper proposed the use of pattern recognition algorithms to identify the type of wood and the aging time of cachaça samples. From the results, it was observed that for the wood classification problem it was possible to obtain classifiers with 100% accuracy. For this problem, it was also found that the computer vision system alone, without the chemical information, is sufficient to identify the wood type with a high accuracy rate. For the aging time classification problem, the best result (97%) was obtained by a neural network using the chemical information together with the information from the computer vision system.

References

[1] D. Aha and D. Kibler. Instance-based learning algorithms. Machine Learning, 6:37–66, 1991.
[2] Francisco W. B. Aquino, Ronaldo F. Nascimento, Sueli Rodrigues, and Antônio Renato S. Casemiro. Determinação de marcadores de envelhecimento em cachaças. Food Science and Technology (Campinas), 26:145–149, 03 2006.
[3] B. Boisier, A. Mansouri, P. Gouton, and P. Trollat. Wine color characterization and classification for nuances reproduction.
In Signal-Image Technology & Internet-Based Systems (SITIS), 2009 Fifth International Conference on, pages 93–98, 2009.
[4] W. Brand-Williams, M.E. Cuvelier, and C. Berset. Use of a free radical method to evaluate antioxidant activity. LWT - Food Science and Technology, 28(1):25–30, 1995.
[5] D. Cozzolino, G. Cowey, K.A. Lattey, P. Godden, W.U. Cynkar, R.G. Dambergs, L. Janik, and M. Gishen. Relationship between wine scores and visible-near-infrared spectra of Australian red wines. Analytical and Bioanalytical Chemistry, 391(3):975–981, 2008.
[6] Maria D. C. A. de Souza, Pablo Vásquez, Nélida L. del Mastro, Terry E. Acree, and Edward H. Lavin. Characterization of cachaça and rum aroma. Journal of Agricultural and Food Chemistry, 54(2):485–488, 2006. PMID: 16417309.
[7] Patterson P. de Souza, Daniella V. Augusti, Rodrigo R. Catharino, Helmuth G. L. Siebald, Marcos N. Eberlin, and Rodinei Augusti. Differentiation of rum and Brazilian artisan cachaça via electrospray ionization mass spectrometry fingerprinting. Journal of Mass Spectrometry, 42(10):1294–1299, 2007.
[8] J. B. Faria, D. W. Franco, and J. R. Piggott. The quality challenge: cachaça for export in the 21st century. In Distilled Spirits: Tradition and Innovation 2004, pages 215–221. Nottingham University Press, Nottingham, UK, 2004.
[9] Rafael C. Gonzalez and Richard E. Woods. Digital Image Processing (3rd Edition). Prentice-Hall, Inc., Upper Saddle River, NJ, USA, 2006.
[10] Dasa Kruzlicova, Jan Mocak, Branko Balla, Jan Petka, Marta Farkova, and Josef Havel. Classification of Slovak white wines using artificial neural networks and discriminant techniques. Food Chemistry, 112(4):1046–1052, 2009.
[11] MAPA Ministério da Agricultura, Pecuária e Abastecimento. Instrução normativa no 13, de 29 de junho de 2005. [online], January 2014. http://extranet.agricultura.gov.br/sislegis-consulta/servlet/VisualizarAnexo?id=14175.
[12] Lv Qiongshuai and Wang Shiqing. A hybrid model of neural network and classification in wine. In Computer Research and Development (ICCRD), 2011 3rd International Conference on, volume 3, pages 58–61, 2011.
[13] X. Sun, H. J. Gong, F. Zhang, and K. J. Chen. A digital image method for measuring and analyzing color characteristics of various color scores of beef. In Image and Signal Processing, 2009. CISP '09. 2nd International Congress on, pages 1–6, 2009.
[14] Yi Wan, Xingbo Sun, and Rong Guo. Shape and structure features based Chinese wine classification. In Computational Intelligence and Natural Computing, 2009. CINC '09. International Conference on, volume 2, pages 39–43, 2009.

Mini review. doi: 10.1016/S2222-1808(15)61006-4. ©2016 by the Asian Pacific Journal of Tropical Disease.

Picture for tropical medicine article: How to provide the qualified one?

Somsri Wiwanitkit 1,*, Viroj Wiwanitkit 2
1 Wiwanitkit House, Bangkhae, Bangkok, Thailand
2 Hainan Medical University, Haikou, China
Asian Pac J Trop Dis 2016; 6(2): 167-168.
* Corresponding author: Somsri Wiwanitkit, Wiwanitkit House, Bangkhae, Bangkok, Thailand. Tel: +66870970933. E-mail: somsriwiwan@hotmail.com

1. Introduction

A picture is an important medium for communication: a picture can represent hundreds of words. In tropical medicine, pictures are important for description in many situations. Preparing and providing a good picture is thus an important step in publishing a tropical medicine article.
In the past, a picture had to be produced by photography with a classical camera, and the photographic print was the standard form of picture for preparing a journal article. However, with today's advanced computational technology, a picture can be taken with many digital tools and saved as a computer file. In medicine, the use of digital images has been seen for around two decades[1]. Images help article readers visualize the actual scenario, shape, size and color of the presented objects. At present, there are several tools that help manipulate a picture and prepare it for presentation in a scientific article[2]. It is usually suggested that pictures be provided in any article where applicable, and to get a good article, a good picture should be selected. How to provide a qualified picture is hereby discussed. The authors also discuss an important publication ethics problem, picture plagiarism.

2. What is a good picture for a tropical medicine article?

Obtaining a picture has become easy at present; as noted, many tools are available for generating one. Taking the picture from the real object is the first principle for getting a good picture. This means that the picture should be taken from the real specimen, real setting, real patient and real situation. In case the problem is serial, such as the progression of a tropical disease, corresponding serial pictures should be prepared. A real, unaltered picture is preferable: although there are some new tools (such as FigureJ[3]) for adjusting pictures, adjustment is not recommended. The details of a picture (what, where and when) must be provided, and the name of the photographer should also be presented. If the picture is obtained from a microscope, the type of microscope and the magnification should be indicated.
Also, since pictures in medicine usually relate to the patient, the privacy of the patient must be a key concern. The eyes of the patient must be masked, and informed consent and permission are needed before taking any photograph. For the resulting picture, security is needed; the security of digital photography is a big legal issue at present[4].

(Article history: Received 11 Nov 2015; received in revised form 20 Nov 2015; accepted 30 Nov 2015; available online 20 Jan 2016. Keywords: picture; tropical medicine; communication; article.)

In case the picture cannot be taken from the real thing, it is suggested not to use a picture from another source even if one is available; it is advised to generate a picture by drawing instead. It should also be noted that the picture will be processed by the journal, and modification of its size can be expected. In general, the picture will be resized to fit a single journal column; this implies that a 1-cm-wide figure will be reduced to about 1-point size, since for a general journal a single column is approximately 8 cm, or 8-point size.

3. Picture plagiarism: an important thing to avoid

A picture published in a journal is usually the property of the journal, since copyright is usually transferred from the author to the journal before publication. However, the big concern is publication ethics.
Sometimes authors present pictures that belong to others, and this can result in a problematic situation. First, it constitutes copyright violation against the original publisher. In addition, it is academic dishonesty, called picture or figure plagiarism[5]. This kind of plagiarism is difficult to detect by simple computational plagiarism-screening tools[6,7] (examples of figure plagiarism can be seen in "J Singapore Ped Soc 1978; 20: 122-41" and "Chula Med J 1980; 24: 597-604", and in "J Med Assoc Thai 2015 May; 98 Suppl 4: S22-6" and "Asian Pac J Cancer Prev 2015; 16: 2323-6"). It is the author's role to provide original pictures, and the journal editor and reviewers take the role of screening for unethical plagiarism[8]. Finally, one might ask how picture plagiarism can be detected. This can be done simply using an online search engine such as images.google.com with the reverse image search technique. In case picture plagiarism is detected, proper management must be carried out: withdrawal (for articles in press or published online ahead of print) or retraction should be performed, the plagiarist's institute should be notified, and sanctions, banning or punishment should be imposed on the plagiarist. It is noted that although some plagiarists try to ask the original publisher for permission to use the copyrighted picture after the problem is detected, this only solves the copyright problem, not the unethical plagiarism (see examples in "Nat Rev Genet 2007; 8:480-5" and "J Med Assoc Thai 2008; 91(2): 268-71").

4. Conclusion

Having a good picture is important for generating a good article in tropical medicine. A picture of real things, with details and without modification, is preferable, and it is advised to avoid using anyone else's picture. The problem of picture plagiarism is serious and common at present; it is unacceptable and should be sanctioned by the scientific community. Preventing the problem is therefore of great interest. The first important thing for any author is an ethical mind.
Not copying or stealing others' property is the important basic principle. Truthfulness is also needed: a falsified or fake picture is likewise a kind of scientific misconduct. To avoid the problem, these suggestions should be kept in mind. The author has to strictly follow the journal's instructions to authors and adhere to the standards of publication ethics. The author should take the photograph or draw the picture by himself or herself. It is suggested to avoid copying or using others' figures for modification. If manipulation of an existing picture, whether hand-drawn, painted, printed or in digital format, is required, permission must be asked of the primary source and proper, substantial modification is needed. When a modified picture is used, clarification and notification that it is a modification of the original source are required[9-11].

Conflict of interest statement

We declare that we have no conflict of interest.

References

[1] Furness PN. The use of digital images in pathology. J Pathol 1997; 183(3): 253-63.
[2] Stuurman N, Swedlow JR. Software tools, data structures, and interfaces for microscope imaging. Cold Spring Harb Protoc 2012; 2012(1): 50-61.
[3] Mutterer J, Zinck E. Quick-and-clean article figures with FigureJ. J Microsc 2013; 252(1): 89-91.
[4] Thomas VA, Rugeley PB, Lau FH. Digital photo security: what plastic surgeons need to know. Plast Reconstr Surg 2015; 136(5): 1120-6.
[5] Wiwanitkit V. Plagiarism: ethical problem for medical writing. J Med Assoc Thai 2008; 91: 955-6.
[6] Wiwanitkit V. Plagiarism: word, idea, figure, etc. Croat Med J 2011; 52(5): 657.
[7] Wiwanitkit V. Plagiarism, beyond crosscheck, figure and conceptual theft. Sci Eng Ethics 2014; 20(2): 613-4.
[8] Wiwanitkit S. Plagiarism and journal editing. Acta Inform Med 2013; 21(1): 71.
[9] Cromey DW. Digital images are data: and should be treated as such. Methods Mol Biol 2013; 931: 1-27.
[10] Mraz P.
Against abuse of digital photography techniques in morphology--Ethical Code of Slovak Anatomical Society. Bratisl Lek Listy 2007; 108(12): 533-5.
[11] Hayden JE. Digital manipulation in scientific images: some ethical considerations. J Biocommun 2000; 27(1): 11-9.

Photographies in Africa in the digital age

Richard Vokes

In recent years, all kinds of African photographies, both on the continent and in the diaspora, have undergone a digital 'revolution' (Ekine and Manji 2012). Without doubt, the key driver of this trend has been the rapid, and massive, influx to practically all African countries of ever more affordable third generation/advanced mobile phone handsets, or smartphones.
Even compared with earlier technological revolutions on the continent, both the speed and the scale of this spread have been simply breathtaking. For example, the establishment of transistor radio sets – the process through which personal radios became ubiquitous across Africa – occurred over the period of at least a decade and a half (between roughly the late 1950s and the early 1970s).1 Similarly, the original mobile phone revolution – i.e. the process through which early generation/low functionality mobile phones went from being a source of novelty, to a domain for experimentation, to technologies that were routinized in everyday life, throughout the continent – took more than a decade to unfold (the crucial years being those between roughly 1999 and 2010) (Vokes 2018b). Against both of those earlier processes, the emergence of smartphones occurred over a much shorter timeframe. Between 2011 and mid-2015 alone, the number of smartphones being imported into Africa – the majority of them sub-US$100 Android-based systems – jumped from around 10 million units per annum to almost 100 million (Tshabalala 2015), as a result of which these devices became quickly established as a common feature of everyday life. Today, more than one-third of all mobile phones in Africa are smartphones, and this percentage is set to rise to two-thirds by 2020 (GSMA Intelligence 2017). Among their many effects, smartphones and their associated infrastructures for communication – from high-speed internet connections to social media platforms such as Facebook, Friendster, Instagram, Snapchat, Twitter and WhatsApp – have vastly expanded the possibilities for taking, storing, manipulating, circulating and displaying photographic images.

Richard Vokes is Associate Professor in Anthropology and Development at the University of Western Australia. He has long-standing research interests in Uganda, especially in the areas of visual and media anthropology. His latest book is Media and Development (Routledge, 2018). Email: richard.vokes@uwa.edu.au

1See Fardon and Furniss (2000), Mytton (2000) and Vokes (2007).

Africa 89 (2) 2019: 207–24 doi:10.1017/S0001972019000019 © International African Institute 2019

Downloaded from https://www.cambridge.org/core, Carnegie Mellon University, on 06 Apr 2021 at 01:16:21, subject to the Cambridge Core terms of use, available at https://www.cambridge.org/core/terms.
Most obviously, these devices have allowed their users to produce far greater volumes of photographic images than ever before, compared with earlier cameras such as 35 millimetre cameras and even standalone digital camera units (which rely on expensive Secure Digital, or SD, memory cards to function). So smartphones have also enabled photographers to store their pictures in new ways, given these devices' capabilities for an immediate, and straightforward, placement of digital image files into virtual folders. This contrasts with previous processes, in which storage necessarily required photographers to first develop their images into physical photographic prints, and then to sort them
Again, manipulations of this sort would have previously required lengthy interventions by a studio at the very least. Finally, smartphones have allowed their users to share photographic images in new ways, especially through online ‘social media’ platforms such as Facebook pages, Twitter feeds and WhatsApp groups. These platforms have allowed photo- graphers to share their images with much wider, and more geographically dis- persed, audiences than ever before, and to combine their own photos with those taken by other photographers, and/or with other kinds of images – for example, with all manner of pictures downloaded from the internet – increasingly easily. They have also enabled photographers to engage with new online formats for display, including virtual ‘albums’, audiovisual/multimedia formats, and visual essays in blogs and online magazines.2 Unsurprisingly, these new possibilities for the production, storage, alteration, circulation and exhibition of photographic images have in turn brought about broader shifts in photographic practice across Africa. The most obvious of these has been a general expansion in the range of places and social contexts in which photography takes place, so that it is no longer an activity particularly asso- ciated with special events and occasions (from state and church functions to life- cycle events such as births, weddings and funerals), as it was from early colonial times onwards. Following the advent of smartphones, photographic practices have become a more or less ubiquitous part of everyday life, especially among young people. An example of just how normalized they have become in quotidian life is provided by Juliet Gilbert, who describes how, for young women in Calabar, Nigeria, photography has even become a primary means of simply filling day-to- day periods of boredom and inactivity, albeit in ways that become socially mean- ingful over time (2018: 247–8). 
Another effect has been a general proliferation of, and deepening engagement with, what might be termed the ‘techniques of archiv- ing’. In other words, the new possibilities for storing digital photographic images in multiple folders and albums appear to have also generated a growing interest in the possibilities for creating an ever wider range of different kinds of photographic collection. The days when many people simply added their physical photographs to a single, undifferentiated household collection may be coming to an end. Instead, people are now much more likely to arrange their digital shots in various folders and albums, which may typically be categorized in terms of differ- ent aspects of their own past, groups to which they have belonged, or social milieus in which they have engaged, among many other criteria. A third general effect has been a growing experimentation with and innovation in practices of display. In particular, the increased possibilities for manipulating digital images, for 2See, for example, the photo essays from Africa that are published by the blog Africa is a Country (available online at ) and online photo magazines such as Dodho (), LensCulture (), OkayAfrica () and Zam (). 208 Richard Vokes of use, available at https://www.cambridge.org/core/terms. https://doi.org/10.1017/S0001972019000019 Downloaded from https://www.cambridge.org/core. 
circulating them through Facebook, Twitter and WhatsApp, and for combining one's own photos with other images have resulted in users creating and disseminating increasingly complicated and imaginative types of photographic display. To cite just one example, as I write these lines I am looking at the Facebook pages of a friend in Zimbabwe who is a university student in Harare. One of the albums included displays images of the young man taken over the past twelve months; in addition to more formal studio portraits, there is a large number of images that comprise complicated photo-collages and photomontages, or that have been heavily photoshopped in other ways, to represent him in the style of an American rapper. This album also includes photos taken from other people's Facebook pages (including from those of his relatives living in Europe and the US) or downloaded from elsewhere on the internet – as is the case, for example, with several pictures of well-known American celebrity musicians. Similarly, Gilbert's article in this issue explores how young people in Calabar, in south-eastern Nigeria, regularly produce online albums that include portraits of themselves placed alongside photographs of global celebrities such as Beyoncé, Rihanna and Cristiano Ronaldo.
Yet if the emergence of smartphones has altered photographic practice in Africa, in both a narrow and a broad sense, then a key question remains as to what effect this has had on wider photographic cultures – and visual cultures in general – across the continent. In other words, how have the new possibilities of digital photography altered the broader practices, relationships and social milieus within which photographic images are embedded? The central aim of this introduction and the following articles is to explore this question in detail, based on ethnographic evidence drawn from across the continent. One of the challenges in trying to address this question relates to the issue of how, precisely, we define ‘African photographic cultures’. The first difficulty here relates to the epithet ‘African’, which has long been recognized as a historically, politically and morally loaded category – emerging as it did within the intercultural encounter of European colonial expansion (Mudimbe 1988; Comaroff and Comaroff 1997). However, even without rehearsing wider debates on that much larger subject, the difficulties of specifying precisely which element, or elements, should be considered within any regional photographic culture remain significant. For the authors of one of the key early collections on African photographies, In/Sight: African photographers, 1940 to the present (Bell et al. 1996),3 the ‘Africanness’ of given traditions stemmed from the identities of the photographers who contributed to them. In other words, ‘African photography’ referred simply to those pictures that had been taken by people who resided on the continent, or who had once lived there, or who – in relation to the diaspora – identified themselves as African. By extension, cultures of ‘East African photography’, ‘West African photography’ and so on could be defined in similar terms.
However, for the editors of another seminal collection in the field, Anthology of African and Indian Ocean Photography (Pivin and Saint Leon 1999), such a definition ran the risk of geocentrism in that it underplayed the ways in which even the most famous of African photographers and studios had invariably been connected with, and influenced by, photographers working in other parts of the world – and especially with photographers from ‘Western countries’ (ibid.: 7). In addition, a number of the contributors to that collection, as well as various other theorists writing at around the same time, also sought to define regional photographic cultures in relation to aesthetics. This was especially marked in the scholarship and other commentary that followed the discovery of West African studio photographers such as Seydou Keïta, whose work appeared to represent a distinctively African ‘aesthetics of modernity’ (Bigham 1999; Jedlowski 2008). Yet there were other important contributions to this line of thinking as well, including, for example, the work of C. Angelo Micheli, which, in its wide-ranging survey of West African photography, showed how double portrait photographs – in which subjects are pictured in pairs, while wearing similar clothes and striking the same poses – were especially popular in cultural contexts in which there was a marked ‘collective imagery inspired by myths and practices related to twinness’ (2008: 66, emphasis added). Finally, a growing body of more recent work, inspired by the wider ‘material turn’ within visual cultural studies, has sought instead to define ‘Africanness’ in relation to the physical photographs themselves, and to the way in which these may operate as culturally meaningful artefacts (Vokes 2008; 2012b). The argument here – which is made more or less explicitly by different theorists – is that ‘African photographies’ refer to only certain kinds of photographic image-objects: those that have been produced in identifiably African social and material contexts (for example, Bajorek 2010; Bleiker and Kay 2007; Haney 2004; 2010); are circulated through and are engaged with as part of the kinds of domestic activities, ritual practices and exchange networks that exist on the continent (see, for example, Behrend 2003; 2012); and invoke culturally distinctive kinds of embodied sensory responses from their viewers (Kratz 2012; Pype 2012; Vokes 2015). In other words, the ‘African’ qualities of particular photographs refer less to the identity of their producers or to anything that they represent, and more to the work that they do as part of their being incorporated into particular kinds of life worlds. So how, then, have the new possibilities for producing, storing, altering, circulating and exhibiting photographic images that have emerged since the advent of digital photography, and its associated infrastructures for communication, changed the work that photographic images do and the effects that they produce within African life worlds?
3 This was the catalogue for the New York Guggenheim Museum’s major exhibition on African photography of the same name.
209 Photographies in Africa in the digital age: Introduction
In attempting to address this question, a second major challenge is to avoid making a simple assumption that, because digital image practices are new, they must have had a transformative effect on these wider social realms. In this regard, the continual repetition, both by scholars and particularly by media commentators, of the concept of an African ‘digital revolution’ may in fact be singularly unhelpful, precisely because this phrase implies rupture, rearrangement and radical change.4 Yet, as Gershon and Bell remind us, ‘when scholars of media attend to the material and historical particularities of [any] media, many recognize that “newness” is not a self-evident social category’. Instead, these scholars often end up drawing attention to ‘how people on the ground interpret and make use of the newness of their media’ in relation to ‘previously established possibilities for storage, and for communicating across time, distance and with different numbers of people’ (Gershon and Bell 2013: 259–60). To put it another way, just because a medium is new does not mean that it is necessarily disruptive. Instead, attention to users’ ability to act as ‘social analysts in their own right’ (ibid.) may highlight the ways in which that medium has been incorporated into pre-existing processes of identity formation, regimes of aesthetics, patterns of relationality, and so on – albeit in ways that may greatly increase (or reduce) the spatial and temporal scales on which those phenomena operate. The dangers of assuming that a new medium must necessarily be transformative are highlighted by a brief review of the trajectory of the scholarly literature on the emergence of early generation mobile phones. Much of the early academic writing on the subject, to say nothing of the media commentary, was similarly framed in terms of ‘revolution’. This reflected the fact that much of the earliest research on the subject was conducted by those working within the paradigm of ‘information and communications technology for development’ (ICT4D).5 These researchers, and their associates in policymaking and development practitioner circles, had an obvious stake in emphasizing how an expansion of early generation mobile telephony might help African societies to overcome the ‘global digital divide’ in ways that would deliver overwhelmingly positive social changes. As a result, the early writings presented at times an almost utopian vision of the potential for mobile phone communications to generate increased political participation by facilitating new forms of ‘civic engagement’ – for example in public spheres such as radio phone-in shows and online forums (Brisset-Foucault 2018). It also outlined ways in which digital communications might engender new kinds of economic activities, additional educational opportunities, and improved health outcomes – through their ability to deliver all sorts of ‘useful information’. It further emphasized ways in which these communications might advance the empowerment of marginalized groups, and especially women, by providing them with a new way of expanding their support networks.
4 A recent report on Africa’s emerging digital economy by PricewaterhouseCoopers was even entitled Disrupting Africa: riding the wave of the digital revolution (2016).
The problem, though, was that from around the mid-2000s onwards an increasing number of empirical studies from across Africa, many of them intentionally written ‘against’ the ICT4D paradigm, began to demonstrate that mobile telephony was just as likely to reinforce existing social relations and structures as it was to produce radical change. In other words, more and more studies by anthropologists, geographers and media scholars began to show how, far from transforming existing political, economic and social configurations, the emergence of early generation mobile telephony was just as likely to have entrenched them. For example, Smith’s work in Nigeria showed how mobile phones, by creating a new domain for communication among political and business elites, further alienated ordinary citizens from what were perceived to be closed and corrupt state and corporate networks. In so doing, they even exacerbated pre-existing political and economic divisions or tensions (Smith 2006). Slater and Kwami’s study of mobile phones in Accra (2005) found that users spent much more time using these devices for engaging in the kind of ‘mundane communication’ they had always engaged in, rather than for accessing ‘useful information’ (cited in Archambault 2011: 446).6 Meanwhile, Archambault’s study of mobile telephony in southern Mozambique found that mobile phone communications are just as likely to become a tool for extending control over women as for empowering them.
5 For a good introduction to the history of mobile phones and ICT4D in Africa, see Donner (2008).
For example, she cites one instance of a husband who, while working away from home, uses his mobile phone to ‘check up’ on his wife and to make sure that she is at home, which he does by making her wake up their son to put him on the line (2011: 450).7 Moreover, the new practices of digital photography, as I have described them above, certainly could be read in a similar way: that is, less as revolutionary, and more as a means through which users have simply extended the kinds of photographic performances, aesthetic elements and image relations that, in a sense, have always characterized African photographic engagements. However, there may nevertheless be some inherent dangers in developing such a ‘counterintuitive’ reading, to borrow Archambault’s phrase (2011: 453) – i.e. a reading that stresses not transformation but continuity. (This is a criticism that has also been levelled against the scholarship on early generation mobile phones.) For one thing, such an interpretation might underplay the way in which the new possibilities for photography afforded by smartphones may also become ‘a source of dynamism in [their] own right’, as Tenhunen puts it (2008: 531). In other words, it may understate the ways in which these new ways of producing, storing, altering, circulating and exhibiting photographic images may also produce all kinds of unforeseen or unintended consequences (cf. Postill n.d.). More importantly, a counterintuitive reading may also fail to capture actors’ own perspectives on the subject; for them, the new photographic practices associated with smartphones certainly are perceived, in most instances, to be highly significant agents of change – which is precisely why they invest these new practices with such a wide range of desires, aspirations and fears for the future (which are documented at length in all of the articles collected in this part issue).
The goal, then, is to try to specify how photography’s position and effects within African life worlds have been altered by the emergence of the digital age, but without reproducing a naïve view of the latter’s transformative potential, and without simply repeating the kind of ‘continuity thinking’ that characterized some of the literature on early generation mobile telephony and that could be read into this subject as well. In the remainder of this introduction, I attempt to do this through the study of three alternative yet overlapping areas of inquiry: (1) the nexus between photographic technologies, practices and images and political power and authority in Africa; (2) the relationship between photography and situated (or vernacular) cosmopolitanisms; and (3) the practices and politics for archiving African photographies, and for conducting other research on and through them. In recent years, each of these areas has provided a rich discursive frame through which scholars have developed new histories of analogue photographies in Africa (see Morton and Newbury 2015; Newbury and Vokes 2018; Thomas and Green 2016; Vokes 2012a). The three areas have also provided the contours for nascent discussions of digital photography (see, for example, Siziba and Ncube 2015; Buggenhagen 2014; Nimis 2014 respectively). Most importantly for my purposes here, they are all areas to which the articles collected in this volume speak – in one way or another.
6 Molony has made a similar point, based on his work in Tanzania (2008: 339–40).
7 See also Burrell’s work in central Uganda (2010: 238).
As such, they are all fertile areas through which to examine the intersections between photography and African life worlds, and to explore how these have been reconfigured by the coming of the digital age. The nexus between photography and political power and authority in Africa can be traced back to the emergence of photographic technologies and practices on the continent. Not only was photography initially spread by European explorers, missionaries and colonial administrators, among others, but it quickly became a key tool for colonial governance, one that was used ‘both to symbolize the power differential in the colonies, and to bring it into visible order’ (Edwards cited in Peffer 2009: 242; cf. Sharkey 2001: 180; see also the seminal Hartmann et al. 1999). This stemmed from the way in which, in colonies and protectorates throughout the continent, administrations frequently employed or commissioned photographers to depict ‘official state events, civic life, examples of “progress” and portraits for helping categorize individuals’ (Buckley 2010: 147). Colonial authorities also patronized photographic studios extensively to achieve the same ends (Behrend and Wendl 1998). 
For example, in Côte d’Ivoire during the early decades of the twentieth century, the administration’s requirement that all citizens needed to have an individual ID or ‘passport’ photograph incorporated within their official documentation in order to participate in any of the ‘modern’ institutions of the colonial state – from schools and courts to hospitals and prisons – eventually resulted in studios primarily focusing on the production of these kinds of photographs (Werner 2001).8 Later, a number of British colonial administrations in particular set up their own dedicated government photographic services or sections.9 These services were invariably housed within the Ministry of Information, and they photographed everything from state ceremonies and government development projects to the territory’s subject populations – circulating the resulting photographs widely, both within the colony itself and beyond. Moreover, in Uganda’s case at least, this nexus between state power and photography further deepened in the postcolonial era, reaching its zenith during Idi Amin’s rule (1971–79). Reflecting Amin’s fascination both with his own image and with all things media (Peterson and Taylor 2013), following his accession to power he expanded the state’s official photographic section in order to document his every move, and he also encouraged a cadre of international photojournalists to document the state’s activities as widely as possible. Following the expulsion of the Ugandan Asians in 1972, many previously Asian-owned photographic studios were handed over to military men, while the central Kodak agency – which, in theory at least, had a monopoly over the processing of all colour film – was placed in the hands of a senior army officer (Vokes 2012b; this issue).
Indeed, so marked did the state’s control over photographic production become that I have even documented a number of cases in which the authorities arrested people simply for being in possession of photographs that were not officially sanctioned.
8 See also my discussion of ID photographs in colonial Uganda (Vokes 2012b: 210–12).
9 On examples from the Southern Cameroons and Uganda, see Schneider (2018) and Vokes (2018a) respectively.
However, if colonial – and, later, postcolonial – administrations tried to control photography from the outset as a means of extending their authority, then so too were their attempts invariably incomplete from the very beginning. In other words – and as a growing body of recent scholarship has highlighted – their efforts to dominate the photographic domain were always and everywhere undermined by other kinds of photographic practices that were going on at the same time, and that in many cases were antithetical to their state-building projects. For example, Liam Buckley’s work in the Gambia has highlighted how, from the start of colonial rule, government-employed photographers would frequently also conduct their own private or commercial photographic work ‘on the side’ – the products of which often presented a quite different view from the official line on the nature of the colony’s ‘progress’ (2010: 147–8; see also Haney 2010).
Similarly, Jennifer Bajorek’s work in Senegal has shown how, throughout the late-colonial period in particular, state-sponsored studios also produced their own images, which later became central to a nascent iconography of African nationalism (2010). Meanwhile, in Uganda, and into the postcolonial period, the state’s attempts to forge a dominant genre of official photography were increasingly undermined by the growing availability of privately owned cameras, which, combined with informal (at times clandestine) networks for processing film, eventually became the dominant means of photographic production in the country. Moreover, throughout Africa, all of these trends only deepened over time, as government-sponsored photographers and officially sanctioned photographic operations and agencies had to compete with ever growing numbers and types of professional photographers, commercial studios, and privately owned imaging devices. In this context, then, the emergence of digital photography could again be cast as simply an extension, or acceleration, of trends that have been present ever since photography first arrived in Africa. Indeed, following the arrival of smartphones in the late 2000s, one might even have predicted that so widespread would these trends soon become that the very concepts of ‘official photography’ and state control of the photographic domain would soon be things of the past. After all, even at that stage, it was already obvious that smartphones would effectively turn anyone with access to a handset into a photographer, would render many of the previous practices and functions of studios obsolete, and, by making cameras and photography a ubiquitous part of everyday life, would vastly expand Africa’s ‘total photographic archive’ – to such a degree, in fact, that no one genre, or set of genres, ‘official’ or otherwise, could possibly dominate it.
The fact that these technologies also provided access to social media and the internet, and the whole world of images that these entail, further underlined this point. Also, by the late 2000s, it was becoming clear that smartphones, given their new possibilities for image editing and for accessing social media and the internet, would further destabilize a genre of official photography in other ways as well: in particular, by allowing state-sponsored images to be reworked and circulated more quickly and easily, in potentially subversive ways. In one particularly poignant example, Siziba and Ncube (2015) describe how, in 2015 in Zimbabwe, attempts by the ruling party to represent Robert Mugabe in terms of a ‘heroic’ iconography were undermined when an image of the president falling over became rapidly reworked into a series of satirical memes, which were then widely disseminated over social media. In a slightly different example, Agbo (this issue) traces how, in Nigeria, both federal and state governments’ efforts to represent their spending programmes as successful may be frequently subverted by even the most visually mundane of images on social media. For example, he records how the official claims made for several quite disparate initiatives were undermined by the multiple re-emergences on Facebook of one image of a group of Nigerian schoolchildren using breeze blocks as desks (an image that was widely interpreted as representing government wastage in the education sector).
However, to stop there would be to miss a number of key ways in which the advent of smartphones, and digital photography in general, has also provided African governments and their agents with entirely new possibilities for depicting their state-building projects (and for circulating the resulting images); for visually classifying their populations; and for exerting more general control over the photographic domain. In particular, and as my own article in this issue explores at length, the advent of digital photography, while eroding governments’ control of photography in general, has also provided states with highly privileged access to international advertising agencies, which are capable of producing visual projections of states’ future developmental and political goals, using the most sophisticated of post-photographic techniques and computer-generated imagery (CGI). In addition, governments may also have privileged access to the media through which these projections are then disseminated, including public billboards, newspaper inserts, and other forms of political advertising. Indeed, since the genesis of digital photography, states’ uses of these techniques and media have become so widespread that images of planned infrastructure projects, housing developments and cityscapes are now ubiquitous throughout Africa, especially in urban spaces. Such futuristic imagery may even be said to constitute a new public visual culture today, one that is clearly recognizable across the continent (see Vokes, this issue). A nexus between photography and cosmopolitanism in Africa can also be traced back to the emergence of photographic technologies on the continent.
In particular, following the arrival of the first photographic studios – which were established in ever greater numbers in urban settlements all around the African coast from around the 1860s onwards, and later in inland areas as well (Behrend 1999: 162; Hayes 2007; Vokes 2008) – these establishments quickly became key sites in which cosmopolitan ‘theatricalities’ could be indulged. By this, I refer to the ways in which, from the very beginning, both studio photographers and their subjects commonly made use of various poses, props, clothing, make-up and – most importantly of all – backdrops, to portray sitters in ways that referenced imaginaries of other peoples, social milieus and places elsewhere in the world (i.e. forms that were removed from the realities of their own lives). As many scholars of early African studio photography have long documented, it soon became typical for these sitters to be pictured in ways that referenced their European ‘others’. As a result, much early African studio portraiture became saturated with the visual symbolism of Victorian-era European (upper-class) domesticity, with subjects dressed in lounge suits and crinoline dresses, posed on sofas and chaises longues alongside vases of flowers, draped curtains and hat stands, and in front of backdrops depicting European stately homes, manicured gardens and country parks. Equally importantly, these studio images often also conveyed an image of European familial relations, in which an (imagined) nuclear family was central, and in which the generations were both distinct and hierarchically organized. The former effect was achieved through the selection of people to be included in the portrait, the latter through the symmetrical placement of adjacent generations within it. However, it was during what is sometimes referred to as Africa’s golden age of studio photography, from roughly the early 1950s to the 1980s (Wendl 1999: 147–8; Peffer and Cameron 2013), that this relationship between photography and cosmopolitan imaginaries became most fully established. During this period, which coincided with the classic era of ‘modernization’ – which itself brought a rapid expansion of international air travel, as well as an influx of ‘Western’ goods and global brands – African studios became increasingly playful in their incorporation of international cosmopolitan styles and motifs in their mises en scène. In particular, as Tobias Wendl has documented in his work on the studio traditions of the period in West Africa, it became increasingly common for studios’ costumes to reference international fashions, and for their painted backdrops to depict ‘an idealized society of mass consumption’ (Wendl 1999). Photographers frequently captured their subjects in front of artistic scenes of international airports, fantastic cityscapes and ‘bourgeois domestic interiors’, to borrow Pinney’s phrase (2011: 122), complete with the latest television and hi-fi equipment and other domestic appliances, and with global drink brands (such as Fanta Orange, Guinness and Bell’s whisky; see Wendl 1999: 155). It was not only in West Africa that these practices became commonplace.
For example, Heike Behrend’s work on photo studios in Likoni, Mombasa has shown that in this East African setting the use of backdrops to depict scenes and symbols of imaginary international travel (aeroplanes, cruise ships and faraway places) and social mobility (the accoutrements of ‘bourgeois’ domestic life) was also typical (Behrend 2000). Yet the nexus between photography and cosmopolitanism in Africa emerged not only from the semiotics of photographs that were produced in the continent’s studios. From around the late 1970s, it became increasingly shaped by emergent practices of album-making on the continent as well. For example, in Uganda, from around this time, it became more and more common for wealthier people in particular not only to create ‘personal’ photograph albums as a means of constructing narratives about ‘the modern self’, but also to insert within these albums photographs of faraway places, of international political leaders and other celebrities, and of global news media events (Vokes 2008). In so doing, album-makers frequently generated new kinds of cosmopolitan imaginaries, given that the pictures they inserted – at least in all the examples I have ever seen – not only invariably showed places that the album-maker had never been to, people they had never met and events in which they had not been directly involved, but also combined these in creative and unusual ways. For instance, I worked on one personal photograph album from the early 1980s in which the album-maker placed, alongside images of himself and his family, photographic images of European capital cities, American fashion models, and an international meeting of African leaders. Initially, most of these inserted pictures tended to be made up of images the album-makers had cut out of newspapers, magazines or other publications (this was the case, for example, with all the pictures that were inserted in the album just described).
However, these practices became so popular over time that a sizeable nationwide market grew up in photographic prints of exotic locations, global celebrities and major events. For instance, I keenly recall that, following the attacks on the World Trade Center in New York on 11 September 2001, practically every market in Uganda had at least one stall – and often many more – selling photographic prints of the burning buildings and of people connected to the events, including Osama bin Laden. So large did the national market in photographs of bin Laden become around this time that it eventually became something of a national embarrassment, following which the authorities moved to close it down. Once again, then, in this context the emergence of digital photography could be seen as simply extending a trend that goes back at least to the establishment of the first studios on the continent. In a sense, the young people who now use their smartphones to depict themselves as global pop stars or sports icons, including by striking the relevant poses and wearing the relevant garb, by photoshopping their portraits against backgrounds of the Los Angeles skyline or Arsenal’s Emirates Stadium, and/or by then placing these images in virtual albums that also include pictures of Beyoncé, Rihanna or Cristiano Ronaldo, are thus engaging in the same kinds of photographic theatricalities that people in Africa have always adopted. However, there are also key ways in which the advent of the smartphone has created entirely new possibilities for the exploration of such cosmopolitan identities.
Specifically, the fact that the new digital photographic universe is primarily internet-based has afforded new opportunities not only for constructing such cosmopolitan imaginaries but also for incorporating them into new, ‘real-time’ modes of communication with people living elsewhere in the world (see Carrier, this issue; cf. Miller 2015). Neil Carrier’s article (this issue) on the photographic practices of Somalis living in Nairobi’s Eastleigh Estate explores the way in which the real significance of images, for the actors involved, lies less in what they show than in how they may be circulated via WhatsApp, in ways that enable people to participate in global conversations – and through these to create and maintain global social networks, especially with members of the Somali diaspora living elsewhere. For instance, in one of Carrier’s most touching examples, he documents how it is primarily through the sharing of images over the messaging app that one of his respondents is able to maintain a sense of connection with his wife and child (who are now living in Minneapolis) and, through this, to relate to the reality of his family’s now transnational existence. Finally, the past twenty-five years or so have also seen the burgeoning of scholarly interest in the practices and politics of archiving African photographies, and in conducting other kinds of research on and through them (Morton and Newbury 2015). The origins of this growing interest may be traced to a series of parallel developments in history, art theory and anthropology, which began to converge from the early 1990s. In history, this new interest stemmed from a symposium on ‘Photographs as Sources for African History’ that was organized by David Killingray and Andrew Roberts at SOAS University of London in 1988 – the collected papers of which were published shortly afterwards (1989) – and that defined a new programme for visual historical research in Africa.
It achieved this by providing one of the first comprehensive overviews of the history of photography on the continent, by offering the first systematic appraisal of surviving colonial, missionary and museum photographic archives, and by introducing a series of debates about the potential of photographs as a source of evidence for African history (debates that have become increasingly focused in the years since). In art theory, the emerging interest followed the establishment of new publications such as the French journal Revue Noire and the American African Arts, which carried many articles on this subject from early on.

217 Photographies in Africa in the digital age: Introduction

Finally, in anthropology, the renewed interest in photography initially stemmed from a general re-evaluation of the discipline’s own photographic archives, including from Africa, especially following the publication of Elizabeth Edwards’ Anthropology and Photography, 1860–1920 (1992). That collection looked at a range of historical ethnographic photography – including nineteenth-century anthropometric portraiture, which had been produced as part of a Victorian ‘evolutionary’ anthropology – that had long been rejected by the modern discipline of social anthropology as being overtly racist.
However, what the contributors to Edwards’ collection were able to show is that by developing ‘counter-readings’ of these photographs – ones that move beyond their producers’ (presumed) intentions – it was possible to recover some historical traces of the people depicted, and to understand how their lives had become sutured into the anthropological project (see Banks and Vokes 2010). However, over time, as scholars in all of these areas wrote more expansively, the boundaries between these various disciplines became increasingly blurred, and the wider field of scholarship on African photographic archives became more interdisciplinary in character. Yet, for all scholars working in this field, a core set of methodological concerns was held in common: namely, (1) a need to identify additional surviving archives of African photographies (both in public and private holdings, and both on the continent itself and in institutional repositories elsewhere in the world); (2) a desire to preserve those archives once they had been discovered (this desire was given greater urgency by the often fragile state in which many archives in Africa were found, and in many instances resulted in photographic material being removed from the continent to museums and other institutions in Europe and North America); and (3) an inclination to try to make these archives accessible to as wide an audience as possible, especially among publics in Africa itself (i.e. among the very ‘source communities’ from which the pictures had come, as a means both for encouraging people to engage with their own visual heritage and for involving those communities in scholarly research projects).
As Haney and Schneider have documented (2014), from roughly the late 1990s and throughout the early years of the 2000s, these combined imperatives generated new, and often highly transnational, networks of multidisciplinary scholars, museum curators, archivists, art photographers and commercial dealers and publishers, and led to a veritable explosion in the number of projects and initiatives aimed at identifying, saving, making accessible and in some cases bringing to market Africa’s extraordinary photographic heritage. Again, the coming of digital photography, and its associated infrastructures for communication, has accelerated and deepened all of these processes. On the one hand, the emergence of websites, web-based publications and Facebook groups dedicated to historical (and other) photographies from Africa has made it easier for scholars to identify surviving archives (especially private archives); in many African countries, these have turned out to be far more extensive than any of us working in the field would have imagined. On the other, advances in cameras and their associated technologies – especially advances in the capacity of memory cards and hard drives, and in internet-linked database capabilities – have made it increasingly easy not only to digitize vast numbers of photographic artefacts (including negatives, slides and prints) but then to make these available over the web. And although the latter development has generated a host of new complexities over issues of photographic copyright specifically and image
‘ownership’ more broadly, it has undoubtedly also had an extraordinarily wide-ranging effect in re-connecting publics across Africa to elements of their own visual heritage (Nimis 2014). This, in turn, has produced a spectrum of affective responses from those publics, including everything from anger at aspects of the colonial photographic record to fascination with some of the technological advances of the early postcolonial period, and to excitement at reconnecting with events and processes in which people themselves took part – at least some of which have been captured within academic research projects. For example, my own respondents in south-western Uganda had exactly these responses – and others besides – to a collection of some 650 historical images that I recently took to the field on my iPad and used in a photo-elicitation exercise.10 Finally, if the period from the late 1990s onwards witnessed the emergence of new transnational and interdisciplinary networks of scholars, museum curators, archivists, art photographers and commercial dealers, then the digital revolution has greatly extended these and has allowed for far more extensive, and technically sophisticated, projects and initiatives to be conducted on an international scale and in real time (Haney and Schneider 2014). However, once again, if we were to stop at these observations, we would also miss a number of ways in which the advent of smartphones, social media and the internet have radically altered the distribution of power within – or rather the political dynamics of – this wider ‘research ecology’.
Specifically, and as a number of the articles in this issue document (especially those by Carrier and Graham), whereas once the outputs of the transnational research networks described above would have been significantly influenced by, and in some cases entirely controlled by, ‘Western’ scholars, museum curators and archivists, among others – and as other kinds of relationships assembled around researching photography in Africa would also have been – this is simply no longer the case. Of course, in many instances, Western scholars remain in a privileged position – defined in terms of their mobility, funding and institutional associations. Nevertheless, it is also the case that the advent of smartphones and the internet has significantly tempered the pre-existing power dynamics between researcher and researched that have been established ‘over the last two centuries in the social sciences’, as Jess Auerbach has pointed out (forthcoming 2019). Instead, as Auerbach’s own work has shown, in its examination of her experiences of being blogged about by her respondents while she was in the middle of fieldwork in Angola and among the Angolan diaspora in Brazil, the new digital ecologies have created a situation in which a researcher may effectively have no control over how they are represented online, which in turn shapes how they are perceived as a researcher in the field (ibid.). In short, in a context in which the digital image world is increasingly becoming the site where ‘truths are manufactured, contested and communicated’, a reworking of the ways in which researchers are pictured on social media and in other internet forums significantly alters how they are positioned in the ‘real’ (i.e. offline) world as well. Meanwhile, Graham recounts an even more dramatic example, where a German photojournalist’s naïve use of Twitter resulted in her losing all control over the meanings that attached to a photograph she had taken of some government soldiers in the eastern Democratic Republic of Congo (DRC), and where disastrous consequences ensued.

10 Cf. Buckley’s discussion of the responses of interlocutors in the Gambia to an archive of colonial photographs that he employed within a similar set of research exercises (2014).

In conclusion, if the past decade has witnessed a huge expansion of, and growing complexity within, global communications environments, then this has created a range of new possibilities for the production, storage, alteration, circulation and exhibition of photographic images. Yet how have these, in turn, altered the expectations that people have of photographs, and the effects that photographs actually have, within African life worlds? This is the focus of this introduction and of the articles in this part issue: to document ethnographically the work that digital photographic images do on, and through, the continent. In this regard, we must be careful not to assume that emergent intersections between photography and digitalization are necessarily revolutionary. On the contrary, as we develop our empirical studies, we realize that there is much in contemporary engagements with digital imagery that continues trends that have been present to some extent since photography first emerged in Africa. Nevertheless, as the articles gathered in this issue attest, there is still much that is ‘dynamic’ in the current contexts.
On the one hand, the advent of the ‘ubiquitous camera’ (Agbo, this issue) has significantly altered the politics of photography in Africa, reworking visual economies that in some cases could be traced back to the earliest days of colonial rule. In some places, this has led to a general democratization of photography; in others, to the rise of new forms of state-controlled iconography. On the other hand, the technological developments of the past ten years have also significantly expanded the possibilities of photography as a medium for forging, and projecting, various ‘projects of the self’ (Vannini and Williams 2009). In particular – and as explored by a number of the articles collected here – the advent of the digital age has enabled these imaginaries to be communicated at speeds and on geographical and social scales that would have been simply unimaginable before, with effects that are both desirable (such as the internet ‘fame’ described by Auerbach (forthcoming 2019)) and hazardous (the controversial photojournalist described by Graham in this issue). In addition, the advent of the digital age has also produced new kinds of institutional photographic archives that in turn have generated new kinds of archival engagements, responses and contestations – in which are implicated not only academic researchers but all viewers of photographic images.

Acknowledgements

I would like to thank Corinne Kratz, Jess Auerbach, Juliet Gilbert, Aubrey Graham, my wife Zheela Vokes and two anonymous reviewers for their very helpful comments and suggestions on earlier drafts of this article. I would also like to thank Zheela for copyediting this article, and all of my work. Of course, any mistakes or omissions remain mine alone. This part special issue is dedicated to the memory of my former colleague and mentor Terry Austrin, who died during the final stages of its preparation.

References

Angelo Micheli, C.
(2008) ‘Doubles and twins: a new approach to contemporary studio photography in West Africa’, African Arts 41 (1): 66–85.
Archambault, J. (2011) ‘Breaking up “because of the phone” and the transformative potential of information in southern Mozambique’, New Media and Society 13 (3): 444–56.
Auerbach, J. (forthcoming 2019) From Water to Wine: becoming middle class in Angola. Toronto: University of Toronto Press.
Bajorek, J. (2010) ‘Of jumbled valises and civil society: photography and political imagination in Senegal’, History and Anthropology 21 (4): 431–52.
Banks, M. and R. Vokes (2010) ‘Introduction: anthropology, photography and the archive’, History and Anthropology 21 (4): 337–49.
Behrend, H. (1999) ‘A short history of photography in Kenya’ in J.-L. Pivin and P. M. Saint Leon (eds), Anthology of African and Indian Ocean Photography. Paris: Revue Noire.
Behrend, H. (2000) ‘“Feeling global”: the Likoni ferry photographers of Mombasa, Kenya’, African Arts 33 (3): 70–6, 96.
Behrend, H. (2003) ‘Photo magic: photographs in practices of healing and harming in East Africa’, Journal of Religion in Africa 33 (2): 129–45.
Behrend, H. (2012) ‘“The terror of the feast”: photography, textiles and memory in weddings along the East African coast’ in R. Vokes (ed.), Photography in Africa: ethnographic perspectives. Woodbridge: James Currey.
Behrend, H. and T. Wendl (eds) (1998) Snap Me One!: Studiofotografen in Afrika. Munich: Prestel.
Bell, C., O. Enwezor, O. Oguibe and O. Zay (eds) (1996) In/Sight: African photographers, 1940 to the present. New York NY: Guggenheim Museum Publications.
Bigham, E.
(1999) ‘Issues of authorship in the portrait photography of Seydou Keïta’, African Arts 32 (1): 56–67, 94–6.
Bleiker, R. and A. Kay (2007) ‘Representing HIV/AIDS in Africa: pluralist photography and local empowerment’, International Studies Quarterly 51 (1): 139–63.
Brisset-Foucault, F. (2018) ‘Serial callers: communication technologies and political personhood in contemporary Uganda’, Ethnos 83 (2): 255–73.
Buckley, L. (2010) ‘Cine-film, film-strips and the devolution of colonial photography in the Gambia’, History of Photography 34 (2): 147–57.
Buckley, L. (2014) ‘Photography and photo-elicitation after colonialism’, Cultural Anthropology 29 (4): 720–43.
Buggenhagen, B. (2014) ‘A snapshot of happiness: photo albums, respectability and economic uncertainty in Dakar’, Africa 84 (1): 78–100.
Burrell, J. (2010) ‘Evaluating shared access: social equality and the circulation of mobile phones in Uganda’, Journal of Computer-Mediated Communication 15: 230–50.
Comaroff, J. and J. L. Comaroff (1997) ‘Africa observed: discourses of the imperial imagination’ in R. Grinker and C. Steiner (eds), Perspectives on Africa: a reader in culture, history and representation. Oxford: Blackwell.
Donner, J. (2008) ‘Research approaches to mobile use in the developing world: a review of the literature’, Information Society 24: 140–59.
Edwards, E. (ed.) (1992) Anthropology and Photography, 1860–1920. New Haven CT and London: Yale University Press in association with the Royal Anthropological Institute.
Ekine, S. and F. Manji (eds) (2012) African Awakening: the emerging revolutions. Cape Town: Pambazuka Press.
Fardon, R. and G. Furniss (eds) (2000) African Broadcast Cultures: radio in transition. Oxford: James Currey.
Gershon, I. and J. Bell (2013) ‘Introduction: the newness of new media’, Culture, Theory and Critique 54 (3): 259–64.
Gilbert, J. (2018) ‘“They’re my contacts, not my friends”: reconfiguring affect and aspirations through mobile communication in Nigeria’, Ethnos 83 (2): 237–54.
GSMA Intelligence (2017) Sub-Saharan Africa Mobile Economy 2017. Boston MA: GSMA Intelligence.
Haney, E. (2004) ‘If these walls could talk! Photographs, photographers and their patrons in Accra and Cape Coast, Ghana, 1840–1940’. PhD thesis, University of London.
Haney, E. (2010) Photography and Africa. London: Reaktion Books.
Haney, E. and J. Schneider (2014) ‘Beyond the “African” archive paradigm’, Visual Anthropology 27: 307–15.
Hartmann, W., P. Hayes and J. Silvester (1999) The Colonizing Camera: photographs in the making of Namibian history. Athens OH: Ohio University Press.
Hayes, P. (2007) ‘Power, secrecy, proximity: a short history of South African photography’, Kronos 33: 139–62.
Jedlowski, A. (2008) ‘Constructing artworks: issues of authorship and articulation around Seydou Keïta’s photographs’, Nordic Journal of African Studies 17 (1): 34–46.
Killingray, D. and A. Roberts (1989) ‘An outline history of photography in Africa to c.1940’, History in Africa 16: 197–208.
Kratz, C. (2012) ‘Ceremonies, sitting rooms and albums: how Okiek displayed photographs in the 1990s’ in R. Vokes (ed.), Photography in Africa: ethnographic perspectives. Woodbridge: James Currey.
Miller, D. (2015) ‘Photography in the age of Snapchat’ in Anthropology and Photography. Volume 1. London: Royal Anthropological Institute.
Molony, T.
(2008) ‘Nondevelopmental uses of mobile communication in Tanzania’ in J. E. Katz (ed.), Handbook of Mobile Communication Studies. Cambridge MA: MIT Press.
Morton, C. and D. Newbury (eds) (2015) The African Photographic Archive: research and curatorial strategies. London: Bloomsbury.
Mudimbe, V. (1988) The Invention of Africa: gnosis, philosophy, and the order of knowledge. Oxford: James Currey.
Mytton, G. (2000) ‘From saucepan to dish: radio and TV in Africa’ in R. Fardon and G. Furniss (eds), African Broadcast Cultures: radio in transition. Oxford: James Currey.
Newbury, D. and R. Vokes (eds) (2018) ‘Photography and African futures’, Visual Studies 33 (1): 1–10.
Nimis, E. (2014) ‘In search of African history: the re-appropriation of photographic archives by contemporary visual artists’, Social Dynamics 40 (3): 556–66.
Peffer, J. (2009) Art and the End of Apartheid. Minneapolis MN: University of Minnesota Press.
Peffer, J. and E. Cameron (eds) (2013) Portraiture and Photography in Africa. Bloomington IN: Indiana University Press.
Peterson, D. and E. Taylor (2013) ‘Rethinking the state in Idi Amin’s Uganda: the politics of exhortation’, Journal of Eastern African Studies 7 (1): 58–82.
Pinney, C. (2011) Photography and Anthropology. London: Reaktion Books.
Pivin, J.-L. and P. M. Saint Leon (eds) (1999) Anthology of African and Indian Ocean Photography. Paris: Revue Noire.
Postill, J. (n.d.) ‘Towards the study of actual (not potential) mobile-related changes in the developing world’. Unpublished article.
PricewaterhouseCoopers (2016) Disrupting Africa: riding the wave of the digital revolution.
London: PricewaterhouseCoopers.
Pype, K. (2012) ‘Political billboards as contact zones: reflections on urban space, the visual and political affect in Kabila’s Kinshasa’ in R. Vokes (ed.), Photography in Africa: ethnographic perspectives. Woodbridge: James Currey.
Schneider, J. (2018) ‘Views of continuity and change: the press photo archives in Buea, Cameroon’, Visual Studies 33 (1): 28–40.
Sharkey, H. J. (2001) ‘Photography and African studies’, Sudanic Africa 12: 179–81.
Siziba, G. and G. Ncube (2015) ‘Mugabe’s fall from grace: satire and fictional narratives as silent forms of resistance in/on Zimbabwe’, Social Dynamics 41 (3): 516–39.
Slater, D. and J. Kwami (2005) ‘Embeddedness and escape: internet and mobile use as poverty reduction strategies in Ghana’. ISRG working paper 4. London, Brisbane and Adelaide: Information Society Research Group.
Smith, D. (2006) ‘Cell phones, social inequality, and contemporary culture in Nigeria’, Canadian Journal of African Studies 40 (3): 496–523.
Tenhunen, S. (2008) ‘Mobile technology in the village: ICTs, culture and social logistics in India’, Journal of the Royal Anthropological Institute 14 (3): 515–34.
Thomas, K. and L. Green (eds) (2016) Photography In and Out of Africa: iterations with a difference. New York NY: Routledge.
Tshabalala, S. (2015) ‘Africa’s smartphone market is on the rise as affordable handsets spur growth’, Quartz Africa, 13 July, https://qz.com/451844/africas-smartphone-market-is-on-the-rise-as-affordable-handsets-spur-growth/, accessed 29 August 2017.
Vannini, P. and J. P. Williams (eds) (2009) Authenticity in Culture, Self and Society. Farnham: Ashgate.
Vokes, R. (2007) ‘Charisma, creativity, and cosmopolitanism: a perspective on the power of the new radio broadcasting in Uganda and Rwanda’, Journal of the Royal Anthropological Institute (NS) 13: 805–24.
Vokes, R. (2008) ‘On ancestral self-fashioning: photography in the time of AIDS’, Visual Anthropology 21 (4): 345–63.
Vokes, R. (ed.) (2012a) Photography in Africa: ethnographic perspectives. Woodbridge: James Currey.
Vokes, R. (2012b) ‘On “the ultimate patronage machine”: photography and substantial relations in rural south-western Uganda’ in R. Vokes (ed.), Photography in Africa: ethnographic perspectives. Woodbridge: James Currey.
Vokes, R. (2015) ‘The chairman’s photographs: the politics of an archive in south-western Uganda’ in C. Morton and D. Newbury (eds), The African Photographic Archive: research and curatorial strategies. London: Bloomsbury.
Vokes, R. (2018a) ‘Photography, exhibitions and embodied futures in colonial Uganda, 1908–1960’, Visual Studies 33 (1): 11–27.
Vokes, R. (2018b) ‘Before the call: mobile phones, exchange relations and social change in south-western Uganda’, Ethnos 83 (2): 274–90.
Wendl, T. (1999) ‘Portraits and scenery’ in J.-L. Pivin and P. M. Saint Leon (eds), Anthology of African and Indian Ocean Photography. Paris: Revue Noire.
Werner, J.-F. (2001) ‘Photography and individualization in contemporary Africa: an Ivoirian case-study’, Visual Anthropology 14 (3): 251–68.
work_i7v37kiw5zhydgyzuvstl2b3ea ---- Deterministic Systems Analysis

INTERACTIVE DIGITAL PHOTOGRAPHY AT SCALE

by

Brian Mark Summa

A dissertation submitted to the faculty of The University of Utah in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Computer Science

School of Computing
The University of Utah
May 2013

Copyright © Brian Mark Summa 2013
All Rights Reserved

THE UNIVERSITY OF UTAH GRADUATE SCHOOL

STATEMENT OF DISSERTATION APPROVAL

The dissertation of Brian Mark Summa has been approved by the following supervisory committee members:

Valerio Pascucci, Chair (Date Approved: 2/25/2013)
Paolo Cignoni, Member (Date Approved: 12/13/2012)
Charles Hansen, Member (Date Approved: 12/14/2012)
Christopher Johnson, Member (Date Approved: 12/14/2012)
Paul Rosen, Member (Date Approved: 12/14/2012)

and by Alan Davis, Chair of the School of Computing, and by Donna M. White, Interim Dean of The Graduate School.

ABSTRACT

Interactive editing and manipulation of digital media is a fundamental component in digital content creation. One medium in particular, digital imagery, has seen a recent increase in popularity of its large or even massive image formats. Unfortunately, current systems and techniques are rarely concerned with scalability or usability with these large images. Moreover, processing massive (or even large) imagery is assumed to be an off-line, automatic process, although many problems associated with these datasets require human intervention for high quality results. This dissertation details how to design interactive image techniques that scale.
In particular, massive imagery is typically constructed as a seamless mosaic of many smaller images. The focus of this work is the creation of new technologies to enable user interaction in the formation of these large mosaics. While an interactive system for all stages of the mosaic creation pipeline is a long-term research goal, this dissertation concentrates on the last phase of the mosaic creation pipeline – the composition of registered images into a seamless composite. The work detailed in this dissertation provides the technologies to fully realize interactive editing in mosaic composition on image collections ranging from the very small to massive in scale.

To Delila

CONTENTS

ABSTRACT
LIST OF FIGURES
LIST OF TABLES
ACKNOWLEDGEMENTS

CHAPTERS

1. MOTIVATION AND CONTRIBUTIONS
1.1 The Panorama Creation Pipeline
1.2 Boundaries
1.3 Color Correction
2. RELATED WORK
2.1 Image Boundaries: Seams
2.1.1 Pairwise Boundaries
2.1.2 Graph Cuts
2.1.3 Alternative Boundary Techniques
2.1.4 Out-of-Core and Distributed Computation
2.2 Color Correction: Gradient Domain Editing
2.2.1 Poisson Image Processing
2.2.2 Poisson Solvers
2.2.3 Out-of-Core Computation
2.2.4 Distributed Computation
2.2.5 Cloud Computing - MapReduce and Hadoop
2.2.6 Out-of-Core Data Access
3. SCALABLE AND EFFICIENT DATA ACCESS
3.1 Z- and HZ-Order Background
3.2 Efficient Multiresolution Range Queries
3.3 Parallel Write
   3.4 ViSUS Software Framework
       3.4.1 LightStream Dataflow and Scene Graph
       3.4.2 Portable Visualization Layer - ViSUS AppKit
       3.4.3 Web-Server and Plug-In
       3.4.4 Additional Application: Real-Time Monitoring

4. INTERACTIVE SEAM EDITING AT SCALE
   4.1 Optimal Image Boundaries
       4.1.1 Optimal Boundaries
       4.1.2 Min-Cut and Min-Path
       4.1.3 Graph Cuts
   4.2 Pairwise Seams and Seam Trees
   4.3 From Pairwise to Global Seams
       4.3.1 The Dual Adjacency Mesh
       4.3.2 Branching Points and Intersection Resolution
           4.3.2.1 Branching points
           4.3.2.2 Removing invalid intersections
   4.4 Out-of-Core Seam Processing
       4.4.1 Branching Point and Shared Edge Phase
       4.4.2 Intersection Resolution Phase
   4.5 Weaving Interactive System
       4.5.1 System Specifics
           4.5.1.1 Input
           4.5.1.2 Initial parallel computation
           4.5.1.3 Seam network import
       4.5.2 Interactions
           4.5.2.1 Seam bending
           4.5.2.2 Seam splitting
           4.5.2.3 Branching point movement
           4.5.2.4 Branching point splitting and merging
           4.5.2.5 Improper user interaction
   4.6 Scalable Seam Interactions
   4.7 Results
       4.7.1 Panorama Creation
           4.7.1.1 In-core results
           4.7.1.2 Out-of-core results
       4.7.2 Panorama Editing
           4.7.2.1 Editing bad seams
           4.7.2.2 Multiple valid seams
   4.8 Limitations and Future Work

5. INTERACTIVE GRADIENT DOMAIN EDITING AT SCALE
   5.1 Gradient Domain Image Processing
   5.2 Progressive Poisson Solver
       5.2.1 Progressive Framework
           5.2.1.1 Initial solution
           5.2.1.2 Progressive refinement
           5.2.1.3 Local preview
           5.2.1.4 Progressive full solution
           5.2.1.5 Out-of-core solver
       5.2.2 Data Access
       5.2.3 Interactive Preview and Out-of-Core Solver Results
   5.3 Parallel Distributed Gradient Domain Editing
       5.3.1 Parallel Solver
           5.3.1.1 Data distribution as tiles
           5.3.1.2 Coarse solution
           5.3.1.3 First phase: progressive solution
           5.3.1.4 Second phase: overlap solution
           5.3.1.5 Parallel implementation details
       5.3.2 Results
           5.3.2.1 NVIDIA cluster
           5.3.2.2 Longhorn cluster
           5.3.2.3 Heterogeneous cluster
   5.4 Gradient Domain Editing on the Cloud
       5.4.1 MapReduce and Hadoop
           5.4.1.1 Input
           5.4.1.2 MapReduce transfer
       5.4.2 MapReduce for Gradient Domain
           5.4.2.1 Tiles
           5.4.2.2 Coarse solution
           5.4.2.3 First (map) phase
           5.4.2.4 Second (reduce) phase
           5.4.2.5 Storage in the HDFS
       5.4.3 Results
           5.4.3.1 Scalability
           5.4.3.2 Fault tolerance

6. FUTURE WORK

APPENDIX: MASSIVE DATASETS

REFERENCES

LIST OF FIGURES

1.1 A 360 degree panorama taken of the city of Toronto, Ontario, Canada, credited to Armstrong, Beere and Hime in 1856.

1.2 Massive imagery is typically constructed as a mosaic of many smaller images. (a) A panorama of Salt Lake City comprised of 624 individual images. The combined image is over 3.2 gigapixels in size. (b) The panorama after being composited into a single seamless image.

1.3 The three stages of panorama (mosaic) creation. First, the individual images must be acquired. Second, they are registered into a common coordinate system.
Third, they are composited (blended) into a single seamless image. My dissertation research has been to provide technologies to enable interactivity for the final composition stage while a high-performance back-end provides a final image.

1.4 An example of the three stages of a panorama's creation. First, the individual images are acquired, typically from a handheld camera. Second, for registration, the common coordinate system for the images is computed. Third, they are composited (blended) into a single seamless image.

1.5 A diagram to illustrate the main options available during the composition stage of panorama creation. The simplest process is to merge the images directly from the registration, using an alpha-blend of the overlap areas to achieve a smooth transition between images.

1.6 A simple blending approach is usually not sufficient in mosaics with moving elements. In these cases, the elements produce "ghosts" (circled here in red) in the final blend.

1.7 (a, e) Two examples (Canoe: 6842 x 2853, 2 images and Lake Path: 4459 x 4816, 2 images) of undesirable, yet exactly optimal seams (unique pairwise overlaps) for the pixel difference energy. (b, f) A zoom of visual artifacts caused by this optimal seam. (c, g) The pixel labeling. (d, h) The result produced by Adobe Photoshop™. Images courtesy of City Escapes Nature Photography.

1.8 Hierarchical Graph Cuts has only been shown to work well on hierarchies of two to three levels.
For this four picture panorama example, we can see that Hierarchical Graph Cuts produces a solution that passes through a dynamic scene element when using four levels of the hierarchy. A typical input value of a ten pixel dilation was used for this example. While a larger dilation parameter could be used, this would require a larger memory and computational cost which negates the benefits of the technique.

1.9 Even when seams are visually acceptable, moving elements in the scene may cause multiple visually valid seam configurations. On the top, this figure shows a four image panorama (Crosswalk: 4705 x 3543, four images) with three valid configurations. On the bottom, this figure shows a two image panorama (Apollo-Aldrin: 3432 x 2297, two images) with two valid configurations. Images courtesy of NASA.

2.1 Although the result is a seamless, smooth image, without coarse upsampling the final image will fail to account for large trends that span beyond a single overlap and can lead to unwanted, unappealing shifts in color.

3.1 (a) The first four levels of the Z-order space filling curve; (b) 4x4 array indexed using standard Z-order.

3.2 (a) Address transformation from row-major index (i, j) to Z-order index I (Step 1) and then to hierarchical Z-order index (Step 2); (b) Levels of the hierarchical Z-order for a 4x4 array. The samples on each level remain ordered by the standard Z-order.
3.3 Our fast-stack Z-order traversal of a 4x4 array with concurrent index computation.

3.4 (a) Naive parallel strategy where each process writes its piece of the overall dataset into the underlying file, (b) each process transmits each contiguous data segment to an intermediate aggregator. Once the aggregator's buffer is complete, the data are written to disk, (c) several noncontiguous memory accesses are bundled into a single message to decrease communication overhead.

3.5 The ViSUS software framework. Arrows denote external and internal dependencies of the main software components. Additionally, this figure illustrates the relationship with several example applications that have been successfully developed with this framework.

3.6 The LightStream Dataflow used for analysis and visualization of a three-dimensional combustion simulation (Uintah code). (a) Several dataflow modules chained together to provide a light and flexible stream processing capability. (b) One visualization that is the result from this dataflow.

3.7 The same application and visualization of a Mars panorama running on an iPhone 3G mobile device (a) and a powerwall display (b). Data courtesy of NASA.

3.8 Remote visualization and monitoring of simulations. (a) An S3D combustion simulation visualized from a desktop in the Scientific Computing and Imaging (SCI) Institute (Salt Lake City, Utah) during its execution on the HOPPER2 high performance computing platform in Lawrence Berkeley National Laboratory (Berkeley, California).
(b) Two ViSUS demonstrations of LLNL simulation codes (Miranda and Raptor) visualized in real-time while executed on the BlueGene/L prototype installed at the IBM booth of the Supercomputing exhibit.

4.1 The four-neighborhood min-cut solution (a) with its dual min-path solution (b). The min-cut labeling is colored in red/blue and the min-path solution is highlighted in red.

4.2 (a) Given a simple overlap configuration, a seam can be thought of as a path that connects pairs of boundary intersections u and v. (b) Even in a more complicated case, a valid seam configuration is still computable by taking pairs of intersections with a consistent winding about an image boundary. Note that there is an alternate configuration denoted in gray.

4.3 Given two min-path trees associated with a seam's endpoints (u, v), a new seam that passes through any point in the overlap (yellow) is a simple linear walk up each tree.

4.4 (a) A solution to the panorama boundary problem can be considered as a network of pairwise boundaries between images. (b) Our adjacency mesh representation is designed with this property in mind. Nodes correspond to panorama images, edges correspond to boundaries and branching points (intersections in red) correspond to faces of the mesh. (c) Graph Cuts optimization can provide more complex pixel assignments where "islands" of pixels assigned to one image can be completely bounded by another image.
Our approach simplifies the solution by removing such islands.

4.5 (a) A three overlap adjacency mesh representation. (b) A four overlap initial quadrilateral adjacency mesh with its two valid mesh subdivisions. (c) A five overlap pentagon adjacency mesh with an example subdivision.

4.6 Considering the full neighborhood graph of a panorama (a), where an edge exists if an overlap exists between a pair of images, an initial valid adjacency mesh (b) can be computed by finding all nonoverlapping, maximal cliques in the full graph, then activating and deactivating edges based on the boundary of each clique.

4.7 (a) Pairwise seam endpoints closest to a multioverlap (red) are considered a branching point. (b) This can be determined by finding a minimum point in the multioverlap with respect to min-path distance from the partner endpoints. (c) After the branching point is found, the new seams are computed by a linear lookup up the partner endpoint's seam tree. (d) To enable parallel computation, each branching point is computed using the initial endpoint location (green) even if it was moved via another branching point calculation (red).

4.8 (a) Pairwise seams may produce invalid intersections or crossings in a multioverlap, which leads to an inconsistent labeling of the domain. The gray area on the top can be given the labels A or B and on the bottom either C or D. (b) Choosing a label is akin to collapsing one seam onto the other. This leads to new image boundaries, which were based on energy functions that do not correlate to this new boundary.
The top collapse results in a B-C boundary using an A-B seam (C-D seam for the bottom). (c and d) Our technique performs a better collapse where each intersection point is connected to the branching point via a minimal path that corresponds to the proper boundary (B-C). One can think of this as a virtual addition of a new adjacency mesh edge (B-C) at the time of resolution to account for the new boundary.

4.9 The phases of out-of-core seam computation. (a) First, branching points are computed. The seams for all unshared edges can also be computed during this pass. (b) Second, once the corresponding branching points are computed, all shared edges can be computed with a single min-path calculation. (c) Third, once all the seams for the edges for a given face have been computed, the intersections can be resolved. Note, the three passes do not necessarily need to be three separate phases since they can be interleaved when the proper input data are ready.

4.10 The low memory branching point calculation for our out-of-core seam creation technique. (a) Given a face for which a branching point needs to be computed, (b) the computation proceeds "round-robin" on the edges of the face to compute the needed seam trees. The images that correspond to the edge endpoints and overlap energy are only needed during the seam tree calculation for a given edge on the face. Therefore, by loading and unloading these data during the "round-robin" computation, the memory overhead for the branching point computation is the cost of storing two images, one energy overlap buffer, and one for the seam trees for the given face.

4.11 For intersections that require a resolution seam, the two images which correspond to the overlap needed for the seam must be loaded.
In the figure above, these images are the ones that correspond to the endpoint of the diagonal, resolution adjacency mesh edge.

4.12 Overview of Panorama Weaving. The initial computation is given by steps one through four, after which the solution is ready and presented to the user. Interactions, steps five and six, use the tree update in step four as a background process. Additionally, step six updates the dual adjacency mesh.

4.13 Importing a seam network from another algorithm. The user is allowed to import the result generated by Graph Cuts (a) and adjust the seam between the green and purple regions to unmask a moving person (b). Note that this edit has only a local effect, and that the rest of the imported network is unaltered.

4.14 Improper user constraints are resolved or, if resolution is not possible, given visual feedback. (a) Resolution of an intersection caused by a user moving a constraint. (b) Resolution of an intersection caused by a user moving a branching point. (c) A non-resolvable case where a user is just provided a visual cue of a problem.

4.15 Given the inherent locality of the seam editing interactions, only a very small subset of the adjacency mesh needs to be considered. (a) For operations on an adjacency mesh face (i.e., branching point operations), only the images and overlaps of the corresponding face and its one face neighborhood need to be loaded and computed. (b) For edge operations (i.e., bending), we need consider only the faces that share the edge.

4.16 When a user selects an area of a panorama to edit, the system must determine which overlaps intersect with the selected area.
This can be accomplished with a (a) bounding hierarchy of the overlaps. During selection this hierarchy is traversed to isolate the proper overlaps for the selection. This gives a logarithmic lookup with respect to the number of adjacency mesh faces in the panorama. Alternatively, (b) if a pixel-to-image labeling is provided, this can be used to isolate a fixed neighborhood that needs to be tested for overlap intersection. This labeling is commonly computed if the panorama is to be fed into a color correction routine after seam computation.

4.17 Fall Salt Lake City, 126,826 x 29,633, 3.27 gigapixel, 611 images. (a) An example window computed with the out-of-core Graph Cut technique introduced in Kopf et al. [94]. This single window took 50 minutes for Graph Cuts to converge, with the initial iteration requiring 10.2 minutes. Since the full dataset contains 495 similar windows, using the windowed technique would take days (85.15 hours) at best, and weeks (17.2 days) in the worst case. (b) The full resolution Panorama Weaving solution was computed in 68.4 minutes on a single core and 9.5 minutes on eight cores. Our single core implementation required a peak memory footprint of only 290 megabytes while using eight cores had a peak memory of only 1.4 gigabytes.

4.18 Lake Louise, 187,069 x 40,202, 7.52 gigapixel, 1512 images. The Panorama Weaving results for the Lake Louise panorama. Our out-of-core seam computation produces this full resolution solution in as little as 37.7 minutes while requiring at most only 2.0 gigabytes of memory. Panorama courtesy of City Escapes Nature Photography.

4.19 Panorama Weaving on a challenging dataset (Nation, 12848 x 3821, nine images) with moving objects during acquisition, registration issues and varying exposure.
Our initial automatic solution (b) was computed in 4.6 seconds at full resolution for a result with lower seam energy than Graph Cuts. Additionally, we present a system for the interactive user exploration of the seam solution space (c), easily enabling: (d) the resolution of moving objects, (e) the hiding of registration artifacts (split pole) in low contrast areas (scooter) or (f) the fix of semantic notions for which automatic decisions can be unsatisfactory (stoplight colors are inconsistent after the automatic solve). The user editing session took only a few minutes. (a) The final, color-corrected panorama.

4.20 Repairing non-ideal seams may give multiple valid seam configurations. (a) The initial seam configuration for the Skating dataset (9400 x 4752, six images) based on gradient energy. (b and c) Its two major problem areas. (d and e) Using our technique a user can repair the panorama, but also has the choice of two valid seam configurations. Panorama courtesy of City Escapes Nature Photography.

4.21 A panorama taken by Neil Armstrong during the Apollo 11 moon landing (Apollo-Armstrong: 6,913 x 1,014, eleven images). (a) Registration artifacts exist on the horizon. (b) Our system can be used to hide these artifacts. (c) The final color-corrected image. Panorama courtesy of NASA.

4.22 In this example (Graffiti: 10,899 x 3,355, ten images), (a) the user fixed a few recoverable registration artifacts and tuned the seam location for improved gradient-domain processing, yielding a colorful color-corrected graffiti. (b) Our initial automatic solution (energy function based on pixel gradients). (c) The user edited panorama. The editing session took 2 minutes.

4.23 The color-corrected, user edited examples from Figure 1.7.
The artifacts caused by the optimal seams can be repaired by a user. Images courtesy of City Escapes Nature Photography.

4.24 A lake vista panorama (Lake: 7,626 x 1,231, 22 images) with canoes which move during acquisition. In all there are six independent areas of movement, therefore there are 64 possible seam configurations of different canoe positions. Here we illustrate two of these configurations with color-corrected versions of the full panorama (a and c) and a zoomed in portion of each panorama (b and d) showing the differing canoe positions. Panorama courtesy of City Escapes Nature Photography.

4.25 Splitting a five valence branching point based on gradient energy of the Fall-5way dataset (5211 x 5177, 5 images): as the user splits the pentagon, the resulting seams mask/unmask the dynamic elements. Note that each branching point that has a valence higher than 3 can be further subdivided.

5.1 Our adaptive refinement scheme using simple difference averaging. (a) Global progressive up-sampling of the edited image computed by a background process. (b) View-dependent local refinement based on a 2k x 2k window. In both cases we speed up the SOR solver with an initial solution obtained by smooth refinement of the solution.

5.2 Subsampled and tiled hierarchies. (a) A subsampled hierarchy. As expected, subsampling has the tendency to produce high-frequency aliasing, though details such as the cars on the highway and in the parking lots are preserved. (b) A tiled hierarchy. This produces a more visually pleasing image at all resolutions but at the cost of potentially losing information. The cars are now completely smoothed away.
Data courtesy of the U.S. Geological Survey.

5.3 Our progressive framework using subsampled and tiled hierarchies. (a) A composite satellite image of Atlanta, over 100 gigapixels at full resolution, overlaid on a subsampled Blue Marble background; (b) a tiled version of the same satellite image; (c) the seamless cloning solution using subsampling; (d) the same solution computed using a tiled hierarchy; (e) the solution offset computed using subsampling; (f) the solution computed using tiles; (g) a full resolution portion computed using subsampling; (h) the same portion using tiling. Note that even though there is a slight difference in the computed solution, both the tiled and the subsampled hierarchies produce a seamless stitch with our framework. Data courtesy of the U.S. Geological Survey and NASA's Earth Observatory.

5.4 The Edinburgh Panorama, 16,950 x 2,956 pixels. (a) Our coarse solution computed at a resolution of 0.7 megapixels; (b) the same panorama solved at full resolution with our progressive global solver, scaled to approximately 12 megapixels for publication; (c) a detail view of a particularly bad seam from the original panorama; (d) the problem area previewed using our adaptive local refinement; (e) the problem area solved at full resolution using our global solver in 3.48 minutes.

5.5 The RMS error when compared to the ideal analytical solution as we increase iterations for both methods. Streaming multigrid has better convergence and less error for the Edinburgh example (a), though our method remains stable for the larger Salt Lake City panorama (b). Notice that every plot has been scaled independently to best illustrate the convergence trends of each method.
5.6 Panorama of Salt Lake City of 3.27 gigapixel, obtained by stitching 611 images. (a) Mosaic of the original images. (b) Our solution computed at 0.9 megapixel resolution. (c) The full solution provided by our global solver. (d) The difference image between our preview and the full solution at the preview resolution. Both (a) and (c) have been scaled for publication to approximately 12.9 megapixels.

5.7 A comparison of our adaptive local preview on a portion of the Salt Lake City panorama at one half of the full resolution; (a) the original mosaic, (b) our adaptive preview, (c) the full solution from our global solver, and (d) the difference image between the adaptive preview and the full solution.

5.8 A comparison of our system with the best known out-of-core method [Kazhdan and Hoppe 2008] and a full analytical solution on a portion of the Salt Lake City panorama, 21201 x 24001 pixels, 485 megapixel. (a) The full analytical solution; (b) our solution computed in 28.1 minutes; (c) solution from [Kazhdan and Hoppe 2008] computed in 24.9 minutes; (d) the analytical solution where the solver is allowed to harmonically fill the boundary; (e) our solution with harmonic fill; (f) solution from [Kazhdan and Hoppe 2008] with harmonic fill; (g) the map image used by all solvers to construct the panorama, where the red color indicates the image that provides the pixel color and white denotes the panorama boundary.

5.9 Application of our method to HDR image compression: (a) Original synthetic HDR image of an adaptively refined Sierpinski sponge generated with Povray. (b) Tone mapped image with recovery of detailed information previously hidden in the shadows.
(c) Belgium House image solved using our coarse-to-fine method with an initial 16 x 12 coarse solution (α = 0.01, β = 0.7, compression coefficient = 0.5). (d) The direct analytical solution. Image courtesy of Raanan Fattal ..... 78
5.10 Satellite imagery collection with a background given by a 3.7 gigapixel image from NASA's Blue Marble Collection. The Progressive Poisson solver allows the application of the seamless cloning method to two copies of the city of Atlanta, each of 116 gigapixels. An artist can interactively place a copy of Atlanta under shallow water and recreate the lost city of Atlantis. Data courtesy of the U.S. Geological Survey and NASA's Earth Observatory ..... 79
5.11 Our tile-based approach: (a) An input image is divided into equally spaced tiles. In the first phase, after a symbolic padding by a column and row in all dimensions, a solver is run on a window denoted by a collection of four labeled tiles. Data are sent and collected for the next phase to create new data windows with a 50% overlap. (b) An example tile layout for the Fall Panorama example ..... 81
5.12 Windows are distributed as evenly as possible across all nodes in the distributed system. Windows assigned to a specific node are denoted by color above. Given the overlap scheme, data transfer only needs to occur one-way, denoted by the red arrows and boundary above. To avoid starvation between phases and to hide as much data transfer as possible, windows are processed in inverse order (white arrows) and the tiles needed by other nodes are transferred immediately
..... 83
5.13 Fall Panorama, 126,826 x 29,633, 3.27 gigapixels. (a) The panorama before seamless blending and (b) the result of the parallel Poisson solver run on 480 cores with 124 x 29 windows and computed in 5.88 minutes ..... 85
5.14 Winter Panorama, 92,570 x 28,600, 2.65 gigapixels. (a) The result of the parallel Poisson solver run on 480 cores with 91 x 28 windows and computed in 6.02 minutes, (b) the panorama before seamless blending, and (c) the coarse panorama solution ..... 85
5.15 The two phases of a MapReduce job. In the figure, three map tasks produce key/value pairs that are hashed into two bins corresponding to the two reduce tasks in the job. When the data are ready, the reducers grab their needed data from the mappers' local disks ..... 90
5.16 A diagram of the job control and data flow for one TaskTracker in a Hadoop job. The dotted red arrows indicate data flow over the network; dashed arrows represent communication; the blue arrow indicates a local data write; and the black arrows indicate an action taken by the node ..... 91
5.17 Although the result is a smooth image, without coarse upsampling the final image will fail to account for large trends that span beyond a single overlap, which can lead to unwanted shifts in color. Notice the vertical banding denoted by the red arrows ..... 92
5.18 The 512 x 512 tiles used in our Edinburgh (a), Redrock (b), and Salt Lake City (c) examples
..... 93
5.19 Our tile-based approach: An input image is divided into equally spaced tiles. In the map phase, after a symbolic padding by a column and row in all dimensions, a solver is run on a collection of four tiles labeled by numbers above. After the mapper finishes, it assigns a key such that each reducer runs its solver on a collection of four tiles that have a 50% overlap with the previous solutions ..... 93
5.20 The results of our cloud implementation, from top to bottom: Edinburgh, 25 images, 16,950 x 2,956, 50 megapixels, and the solution to Edinburgh from our cloud implementation; Redrock, nine images, 19,588 x 4,457, 87 megapixels, and the solution to Redrock from our cloud implementation; Salt Lake City, 611 images, 126,826 x 29,633, 3.27 gigapixels, and the solution to Salt Lake City from our cloud implementation ..... 97
5.21 (a) The scalability plot for the Edinburgh (50 megapixel) panorama on our one-node, 8-core test desktop; (b) the scalability plot for the Redrock (87 megapixel) panorama on the same machine ..... 98
6.1 A typical example of interaction during panorama registration from the open-source Hugin [77] software tool. Current interaction is limited to the manual selection and deletion of feature points used during registration ..... 100

LIST OF TABLES
4.1 Performance results comparing Panorama Weaving to Graph Cuts for our test datasets that contain more than simple pairwise overlaps. Panorama Weaving run serially (PW-S) computes solutions quickly. When run in parallel, runtimes are reduced to just a few seconds. The energy ratio (E.
ratio) between the final seam energy produced by Panorama Weaving and Graph Cuts (PW Energy / GC Energy) is shown. For all but one dataset (Fall-5way), Panorama Weaving produces a lower energy result; it is comparable otherwise. Panorama image sizes are reported in megapixels (MP) ..... 56
4.2 Strong scaling results for the Fall Salt Lake City panorama, 126,826 x 29,633, 3.27 gigapixels, 611 images. Our out-of-core Panorama Weaving technique scales very well in terms of efficiency compared to ideal scaling, up to the physical cores of our test system (eight cores). At eight cores our technique loses a slight amount of efficiency because our implementation dedicates a thread to handling the seam scheduling. Using the full eight cores to process this panorama provides a full resolution seam solution in just 9.5 minutes. The system is extremely light on memory and uses at most 1.4 gigabytes ..... 57
4.3 Strong scaling results for the Lake Louise panorama, 187,069 x 40,202, 7.52 gigapixels, 1512 images. Like the smaller Fall Salt Lake City panorama, our implementation shows very good efficiency up to the physical number of cores on our test system. Using the full eight cores for the full resolution seam solution for this panorama requires 37.7 minutes of compute time and at most 2.0 gigabytes of memory ..... 58
5.1 The strong scaling results for the Fall Panorama run on the NVIDIA cluster from 2 to 60 nodes, up to a total of 480 cores. Overhead (O/H) due to MPI communication and I/O is also provided, along with its percentage of actual running time. The Fall Panorama, due to its larger size, begins to lose efficiency at around 32 nodes, when I/O overhead begins to dominate.
Even with this overhead, the efficiency (Eff.) remains acceptable ..... 86
5.2 The strong scaling results for the Winter Panorama run on the NVIDIA cluster from 2 to 60 nodes, up to a total of 480 cores. Overhead (O/H) due to MPI communication and I/O is also provided, along with its percentage of actual running time. For the Winter Panorama, the I/O overhead does not affect performance up to 60 nodes and the implementation maintains efficiency (Eff.) throughout all of our runs ..... 86
5.3 Weak scaling tests run on the NVIDIA cluster for the Fall Panorama dataset. As the number of cores increases, so does the image resolution to be solved. The image was expanded from the center of the full image. Iterations of the solver for all windows were locked at 1000 for testing to ensure no variation is due to slower converging image areas. As is shown, our implementation shows good efficiency even when running on the maximum number of cores ..... 87
5.4 To demonstrate the portability of our implementation, we have run strong scalability testing for the Fall Panorama on the Longhorn cluster from 2 to 128 nodes, up to a total of 1024 cores. As the numbers show, we maintain good scalability and efficiency even when running on all available nodes and cores ..... 87
5.5 Weak scaling tests run on the Longhorn cluster for the Fall Panorama dataset ..... 88
5.6 Our simulated heterogeneous system. This test example is a simulated mixed system of two 8-core nodes, four 4-core nodes, and eight 2-core nodes. The weights for our framework are the number of cores available in each node. The timings and window distributions are for the Fall Panorama dataset. As shown, with the proper weightings our framework can distribute windows proportionally based on the performance of the system.
The max runtime of 32.70 minutes for this 48-core system is on par with timings for the 32-core (40.08 minutes) and 64-core (20.83 minutes) runs from the strong scaling test ..... 89
A.1 Massive panorama data acquired and used in this dissertation work ..... 101
A.2 Massive satellite data acquired and used in this dissertation work ..... 101

ACKNOWLEDGEMENTS

This dissertation was made possible through the support of others whom I'd like to thank. First, I would like to thank my family, whose endless support made this work possible. Thank you Mom, Dad, Chris, Jason and Amira for your wholehearted support of my decision to quit my job and move across the country to go back to school (in Utah of all places). Most importantly, I would like to thank Delila, my partner in all of this, for taking this adventure with me. You are the source of my inspiration in all things.

I would like to thank my advisor and mentor, Valerio Pascucci, for his continued guidance and encouragement. We both took a very big chance in our first semester as student and professor at Utah, which, I believe, reaped a huge reward. Since the start of my work, he has given me the support and confidence I needed to succeed, and for this I am grateful. I hope this dissertation is not the end, but the beginning of a long collaboration.

I would also like to thank the other members of my committee, Chris, Chuck, Paul and Paolo, for their feedback on this work. I would like to especially thank Krzysztof Sikorski, who was a member of my committee for most of my time at Utah. Despite his illness, he was a constant source of guidance and encouragement, for which I am grateful.

Finally, I would like to thank the many collaborators I have had while here at Utah, and I hope you forgive me for not writing out the full list. This work was only possible through these collaborations.
I cannot express the perpetual astonishment at what we were, and are, able to accomplish together.

CHAPTER 1
MOTIVATION AND CONTRIBUTIONS

Interactive editing and manipulation of digital media is a fundamental component of digital content creation. One medium in particular, digital imagery, has seen a recent surge in the popularity of its large or even massive image formats. Unfortunately, current systems and techniques are rarely concerned with scalability or usability for these large images. For example, the support for large imagery in the most prevalent interactive image editing application, Adobe Photoshop™, lacks true viability for today's massive images. The application's large image format has a 90 gigapixel maximum image size, limited editing functionality beyond 900 megapixels, and tedious processing times during an interactive session. Moreover, the creation and processing of large imagery is assumed to be an offline, automatic process, even though many of the problems associated with these datasets require human intervention to repair. The work outlined in this dissertation will show that this expensive, offline assumption need not be true and that real-time interaction provides new and powerful environments for the creation and editing of massive images. Specifically, this work will detail how to design interactive image processing algorithms that scale.

There has always been an inherent human desire to document or replicate large vistas of our natural world or to document historical events in detail. Panoramic paintings reached the height of their popularity in the early 19th century due to improvements in perspective drawing techniques. A few decades later, the advent of modern photography was closely followed by the earliest work in the creation of panoramic images; see Figure 1.1. In the years since, the popularity of panoramas has not waned; see Figure 1.2.
These large, sweeping images capture the feeling of being an observer, whether of a beautiful natural view, a historic event such as a Presidential inauguration,[1] or a reminder of the destruction of war.[2] Consequently, there exists a significant interest in creating and using large mosaics for personal, scientific, and/or commercial applications. Examples include medical imaging, where electron microscopy data are composited into ultra-high resolution images [159], or the study of phenology and genomics.[3] Massive imagery is also common in geographic information systems (GIS) in the form of aerial or satellite data, used for anything from urban planning to global climate research. While many-megapixel cameras do exist,[4] they are overly expensive and unwieldy to use. Therefore, massive imagery is typically constructed as a mosaic of many smaller images.

[1] Barack Obama Presidential Inauguration: http://gigapan.org/gigapans/15374/
[2] Hiroshima Panorama Project: http://www.iwu.edu/rwilson/hiroshima/
[3] GigaVision Project: http://www.gigavision.org/
[4] Seitz 6x17 Digital: http://www.roundshot.ch/xmL1/internet/de/application/d438/d925/f934.cfm

Figure 1.1: A 360 degree panorama of the city of Toronto, Ontario, Canada, credited to Armstrong, Beere and Hime in 1856.

Figure 1.2: Massive imagery is typically constructed as a mosaic of many smaller images. (a) A panorama of Salt Lake City comprising 624 individual images. The combined image is over 3.2 gigapixels in size. (b) The panorama after being composited into a single seamless image.

At one time, images such as the one in Figure 1.1 were painstakingly constructed by hand.
Recent innovations in algorithms and available hardware have drastically simplified the creation of small-scale panoramas. This process can be computed simply offline and can now be embedded in commodity cameras (e.g., the Sony Cyber-shot 3D Sweep Panorama) or mobile devices such as Apple's iPhone. The panoramas for these algorithms are assumed to be small and, therefore, the algorithms are not designed to scale. For example, the iPhone's panorama feature, released in September 2012, has a strict 28 megapixel upper limit on the panorama size. This is small by today's standards. An online search for the word "gigapixel" powerfully demonstrates the increasing desire to create ever larger panoramas. To date, the largest panorama contains roughly 272 gigapixels, yet if the current trend continues, this record is bound to be broken within a few months. This trend is aided by the introduction of new, high-resolution image sensors. For example, with current state-of-the-art 36.3 active megapixel CMOS sensors, it would take as few as 70 images to produce a gigapixel panorama.

Creating panoramas at large scales has become exponentially more difficult than the simple, small cases for which panorama techniques were originally designed. For example, the 3.2 gigapixel panorama shown in Figure 1.2 took several hours to capture and an order of magnitude more time to process on conventional hardware. Furthermore, this timeline assumes a perfect capture and one-time processing. In practice, the process of setting up an automated camera is complicated and error prone, and problems such as unanticipated occlusion or global misalignment often occur. Unfortunately, many of these issues are only subtly expressed in the individual images and become apparent only after the creation of the final panorama. Additionally, today's processing pipelines are less than ideal and typically involve a large number of interdependent and unintuitive parameter choices.
A mistake or unlucky choice in the setup can easily cause unacceptable artifacts in the image, requiring a repeat of the process. Consequently, it may take several weeks and significant computational resources to produce one large-scale panorama. This makes it difficult for all but a select few to create such images and makes this imagery impractical for many interesting applications. For example, acquiring imagery from unusual locations such as national parks, or covering transient events like an aurora, becomes a significant logistical and monetary challenge. Furthermore, in security applications, waiting hours or days for viable results defeats the primary purpose of acquiring the images. Finally, scientific applications, while typically less time constrained, require personnel and computational resources to create a large-scale image of the night sky, for example, that are beyond the reach of all but the largest projects. Therefore, despite significant interest, creating these massive images remains an esoteric hobby or a closely guarded research project. Work must be done to close the gap between the desire to create large-scale panoramas (and their potential applications) and the ability to capture, process, and utilize such imagery.

An ideal panorama system should allow a user to browse individual images as they are acquired, to set up and preview the processing pipeline and results through accurate, real-time approximations, and to include a flexible and scalable offline component to produce a final image. The system should be divided into two components: first, a real-time framework to supervise and steer the acquisition process and to guide the postprocessing; and second, a computational back-end based on a distributed or cloud computing framework. The real-time system should be able to run on devices as small as an iPad or netbook computer and be designed to be used in the field to detect any problems as early as possible.
The back-end fills the gap between commonly available but slow commodity hardware and specialized distributed computing resources. By implementing a flexible framework able to run on a wide variety of heterogeneous systems, the back-end scales gracefully from a single multicore machine, or a small cluster, to more powerful hardware. My dissertation research has been the creation of technologies to aid in the creation of such a system.

1.1 The Panorama Creation Pipeline

Creating large-scale panoramas can be divided into three stages: acquisition, image registration, and composition; see Figures 1.3 and 1.4. Each stage individually has been the focus of a large amount of research, but little effort has been spent on real-time performance or on the stages' interdependence. Traditionally, all three stages are treated as separate postprocesses, making performance or pipelining a secondary consideration. However, as discussed above, this approach is rapidly becoming unsustainable as long acquisition and processing times lead to errors as well as an increased number of hardware and software failures. To this end, my research goals are and have been to develop algorithms for each stage that produce high quality approximations in real time and to provide a scalable infrastructure to create full solutions exploiting all available hardware. Such new technologies would enable a wide variety of applications currently infeasible. For example, the ability to quickly and cheaply produce high resolution images of art galleries, historical events, or national parks would greatly benefit schools, universities, and the public in general. Enabling the military to combine footage from multiple security cameras, satellites, or flying drones into a seamless overview would allow operators to more accurately spot changes in a secure area or direct ground operations.

Figure 1.3: The three stages of panorama (mosaic) creation. First, the individual images must be acquired. Second, they are registered into a common coordinate system. Third, they are composited (blended) into a single seamless image. My dissertation research has been to provide technologies to enable interactivity for the final composition stage while a high-performance back-end produces the final image.

Figure 1.4: An example of the three stages of a panorama's creation. First, the individual images are acquired, typically from a handheld camera. Second, for registration, the common coordinate system for the images is computed. Third, they are composited (blended) into a single seamless image.

While a full system is a long-term research goal, my dissertation work has focused on the last phase of the mosaic creation pipeline, specifically the composition stage. After registration, image mosaics are combined in order to give the illusion of a seamless, massive image. Images acquired with inexpensive robots and consumer cameras pose an interesting challenge for image processing techniques. Often, panorama robots can take seconds between each photograph, causing gigapixel-sized images to be captured over the course of hours. Due to this delay, images can vary significantly in lighting conditions and/or exposure, and when registered can form an unappealing patchwork. Dynamic objects may also move between captures, ruining the illusion of a single, seamless image. Images acquired by air or satellite suffer from an extreme version of this problem, where the time of acquisition can vary from hours to days for a single composite. Therefore, minimizing the transition between images is the fundamental step in the composition stage; see Figure 1.5. The simplest transition approach is an alpha-blend of the overlap areas.

Szeliski [155] provides an excellent introduction to this and other blending techniques. Such an approach does not work well in the presence of dynamic elements which move between captures, artifacts from poor registration, or varying exposures across images; see Figure 1.6. Often, it is preferable to compute a "hard" boundary, or seam, between the images as a final step, or as a preprocess for a technique such as gradient domain blending [133, 103]. Techniques exist to compute these seams based purely on distance [174, 132], but like blending, these perform poorly when the scene contains moving elements. A more sophisticated approach is to compute the boundaries between images through an energy function minimization, producing a pleasing transition between the mosaic images. These boundaries often provide the illusion of a seamlessly composited image. If exposure or lighting conditions vary among the images, a final color correction is necessary to produce a smooth image. Techniques such as gradient domain blending [133, 103], mean value coordinates [54], or bilateral upsampling [93] have been shown to provide adequately smooth images. Gradient domain blending remains the most popular, but also the most computationally expensive, technique for color correction due to the quality of its final results.

The techniques associated with panorama boundaries and blending are typically computationally expensive and are considered an offline postprocess for large (and even small) panoramas. As the focus of my dissertation work, I provide novel algorithms and techniques to bring these operations into an interactive setting for massive imagery.

Figure 1.5: A diagram to illustrate the main options available during the composition stage of panorama creation. The most basic option is to merge the images directly from the registration; the simplest blending approach is an alpha-blend of the overlap areas to achieve a smooth transition between images.
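As a concrete illustration of the alpha-blend just described, the sketch below cross-fades two images across their overlap with a linear ramp. This is a minimal NumPy sketch for exposition, not code from this dissertation; the single-channel inputs, the horizontal overlap strip, and the linear ramp are all simplifying assumptions.

```python
import numpy as np

def alpha_blend_overlap(left, right, overlap):
    """Blend two grayscale images that overlap by `overlap` columns.

    `left` and `right` are 2-D float arrays with the same height; the last
    `overlap` columns of `left` cover the same scene as the first `overlap`
    columns of `right`.  A linear alpha ramp cross-fades between them.
    """
    h = left.shape[0]
    assert right.shape[0] == h and 0 < overlap <= min(left.shape[1], right.shape[1])
    # Alpha falls linearly from 1 (pure left) to 0 (pure right) across the strip.
    alpha = np.linspace(1.0, 0.0, overlap)[None, :]            # shape (1, overlap)
    blended = alpha * left[:, -overlap:] + (1 - alpha) * right[:, :overlap]
    width = left.shape[1] + right.shape[1] - overlap
    out = np.empty((h, width), dtype=np.float64)
    out[:, :left.shape[1] - overlap] = left[:, :-overlap]      # left-only region
    out[:, left.shape[1] - overlap:left.shape[1]] = blended    # cross-faded strip
    out[:, left.shape[1]:] = right[:, overlap:]                # right-only region
    return out
```

Exactly as the text warns, a moving object inside the strip would be averaged into a semi-transparent "ghost" by this ramp, which is what motivates hard seams.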
Figure 1.6: A simple blending approach is usually not sufficient in mosaics with moving elements. In these cases, the elements produce "ghosts" (circled here in red) in the final blend.

1.2 Boundaries

In the past, panorama image collections were captured in one sweeping motion (i.e., with image overlaps in only one dimension, as in Figure 1.7). Today's images are often collections of multiple rows and columns, or in even more unstructured configurations. Consequently, more sophisticated panorama processing techniques continue to be developed to account for these more complex configurations. After the initial registration, the panorama's individual images are blended to give the illusion of a single seamless image. As a usual first step, a boundary between images must be computed as input for a color correction technique such as gradient domain blending [133, 103].

Figure 1.7: (a, e) Two examples (Canoe: 6842 x 2853, 2 images; Lake Path: 4459 x 4816, 2 images) of undesirable, yet exactly optimal seams (unique pairwise overlaps) for the pixel difference energy. (b, f) A zoom of visual artifacts caused by this optimal seam. (c, g) The pixel labeling. (d, h) The result produced by Adobe Photoshop™. Images courtesy of City Escapes Nature Photography.

These boundaries are often called seams. Using a global optimization technique, these seams can be optimized to minimize visual artifacts due to the transition between images. The energy is typically a pixel-based function, such as color or color-gradient variation across the boundary. Currently, the most used technique for global seam computation in a panorama is the Graph Cuts algorithm [26, 24, 92]. This is a popular and robust computer vision technique and has been adapted [101, 4] to compute the boundary between a collection of images.
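The energy-minimizing boundary idea can be made concrete in the simplest setting: two aligned images sharing a rectangular overlap, with a squared pixel-difference energy. In that pairwise case, the optimal top-to-bottom seam is a minimum-cost path and can be found exactly by dynamic programming. The sketch below is purely illustrative; it is neither the Graph Cuts algorithm nor this dissertation's Panorama Weaving technique, and the specific energy function is an assumption.

```python
import numpy as np

def pairwise_seam(a, b):
    """Minimum-cost vertical seam through the overlap of two images.

    `a` and `b` are aligned 2-D float arrays covering the same overlap
    region.  The per-pixel energy is the squared difference between the
    images; dynamic programming finds the top-to-bottom path (moving at
    most one column per row) with minimum total energy.
    """
    energy = (a - b) ** 2
    h, w = energy.shape
    cost = energy.copy()
    # Forward pass: cost[y, x] = energy + cheapest reachable predecessor above.
    for y in range(1, h):
        for x in range(w):
            lo, hi = max(0, x - 1), min(w, x + 2)
            cost[y, x] += cost[y - 1, lo:hi].min()
    # Backtrack from the cheapest bottom-row pixel.
    seam = [int(np.argmin(cost[-1]))]
    for y in range(h - 2, -1, -1):
        x = seam[-1]
        lo, hi = max(0, x - 1), min(w, x + 2)
        seam.append(lo + int(np.argmin(cost[y, lo:hi])))
    seam.reverse()
    return seam  # seam[y] = column where the boundary crosses row y
```

For mosaics whose images overlap in more than unique pairs, no such exact polynomial-time construction is known, which is precisely why approximate global methods such as Graph Cuts are used instead.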
While Graph Cuts has been used with good success for a variety of panoramic or similar graphics applications [101, 4, 5, 3, 2, 94, 89, 44, 90], it can be problematic due to its high computational cost and memory requirements. Moreover, Graph Cuts applied to digital panoramas is typically a serial operation. Since computing the globally optimal boundaries between images is known to be NP-hard when the panorama is composed of more than a collection of unique pairwise overlaps [26], Graph Cuts aims to efficiently approximate the optimal solution and can therefore fall into local minima of the solution space. There has been a large body of work to reduce some of the costs associated with Graph Cuts [25, 24, 142, 5, 107, 61, 124, 167, 138, 69, 106], but each of these works primarily focuses on Graph Cuts' typical image segmentation or denoising applications. The success of many of these algorithms has yet to be demonstrated for digital panoramas. Those which have been used in a panorama context can suffer from limitations. For example, the popular Hierarchical Graph Cuts technique [107, 5] has been shown to operate well on hierarchies of only two to three levels in digital panoramas [5], which can be observed in practice in panoramas such as the one shown in Figure 1.8. Given the recent trend of increasing panorama resolution (many megapixels to gigapixels), one can see that this limited hierarchy would not be sufficient to compute the seams of images of these sizes. As a second example, Graph Cuts often needs an integer-based energy function to guarantee convergence. This can prove problematic for high dynamic range (HDR) panoramas. To overcome these types of limitations, a new approach was designed based on the following observations:

• A minimal energy seam does not necessarily give visually pleasing results.
Figure 1.7 provides two examples of panoramas with an exact pairwise optimal energy boundary based on pixel difference across the seam. This energy should be sensitive to dynamic, moving objects which appear in the overlap. As you can see, neither seam would be considered ideal by a user, since they cut through moving objects in the scene. Additionally, to further argue the importance of this observation, the figure also shows very similar seams computed by Adobe Photoshop™, a widely used image editing application.

Figure 1.8: Hierarchical Graph Cuts has only been shown to work well on hierarchies of two to three levels. For this four picture panorama example, we can see that Hierarchical Graph Cuts produces a solution that passes through a dynamic scene element when using four levels of the hierarchy. A typical input value of a ten pixel dilation was used for this example. While a larger dilation parameter could be used, this would incur a larger memory and computational cost, which negates the benefits of the technique.

• There can be more than one valid seam solution. Even if the initial seam solution is visually acceptable to the user, there may be a large number of additional, valid solutions. Some of these alternative seams may be preferable, and this determination is completely subjective. For example, a user may have wished that the high energy in a seam occurred in an area where it is less likely to be noticed, such as the grassy area or the water in the images in Figure 1.7. Given moving elements in a scene, such elements may occur entirely within the area of an overlap. Therefore, there can be acceptable seams where the element is included and ones where it is not. Figure 1.9 provides examples.

• An interactive technique is necessary and attainable.
Given the subjective nature of the image boundaries and the possibility of techniques falling into bad local minima, the user must be brought into the seam boundary problem. Currently, finding panorama boundaries with Graph Cuts is an offline process with only one solution presented to the user. The only existing alternative is the manual editing, pixel by pixel, of the individual image boundaries. This is a time-consuming and tedious process in which the user relies on perception alone to determine whether the manual seam is acceptable. Therefore, a guided interactive technique for image boundaries is necessary for panorama processing. This technique should allow users to include or remove dynamic elements, move an image seam out of a possible local minimum into a lower error state, move the seam into a higher error state (but one with more acceptable visual coherency), or hide errors in locations where they feel they are less noticeable. During these edits, the user should be provided the optimal seams given the new constraints.

Figure 1.9: Even when seams are visually acceptable, moving elements in the scene may cause multiple visually valid seam configurations. On the top, this figure shows a four image panorama (Crosswalk: 4705 x 3543, four images) with three valid configurations. On the bottom, this figure shows a two image panorama (Apollo-Aldrin: 3432 x 2297, two images) with two valid configurations. Images courtesy of NASA.

• A solution based on pairwise boundaries can achieve good results for panoramas, giving a fast, highly parallel, and light system. Computing pairwise-only optimal boundaries is both fast and exact (i.e., guaranteed to find the global minimum). It is then no surprise that these boundaries have often been used in past work for pan- or tilt-only panoramas [145, 46, 157, 165].
Pairwise boundaries have been thought not to generalize beyond this case, and there has been no technique to use them in panoramas with more complex structure, save for efforts to combine them via a distance metric [69] or sequentially [53]. This dissertation work not only provides a global solution based on pairwise boundaries, but also shows that this solution often produces lower energy seams than Graph Cuts for panoramas. Given a technique to combine pairwise boundaries into a coherent seam network, each disjoint seam can be computed separately and trivially in parallel. Moreover, the solution produced for each is typically independent and therefore memory and resources for each can be allocated and released as needed. In addition, the solution domain is only the overlap between pairs of images, in contrast to some previous applications of Graph Cuts for panoramas [101, 4], which often consider the entire composite image as the solution domain. All of these properties give the potential for a very fast and light system even when operating on the full resolution imagery. Moreover, such a system should have the ability to be extended to an out-of-core or distributed setting.

This dissertation describes a new image boundary technique called Panorama Weaving. First, Panorama Weaving provides an automatic technique to create approximate optimal boundaries that is fast, has low memory requirements, and is easy to parallelize. Second, it provides the first interactive technique to enable the exploration of the seam solution space. This gives the end-user a powerful editing system for panorama seams. In particular, the contributions of this work on a technical level are:

• A novel technique to merge independently computed pairwise boundaries into a global, consistent seam network that does not cascade to a global calculation.

• A panorama seam creation technique based purely on pairwise boundary solutions.
This technique is fast and highly parallel and shows significant speed-ups compared to previous work, even when run sequentially. More importantly, it achieves all of this even with full resolution imagery.

• Out-of-core and distributed seam creation algorithms which extend the creation technique to mosaics massive in size. These algorithms provide speed-ups compared to the state-of-the-art.

• The first system that allows interactive editing of seams in panoramas. This system guarantees minimal user input thanks to an efficient exploration of the solution space.

• An intuitive mesh specialization of a region adjacency graph that encodes seam and image relations. This adjacency mesh provides a way to guarantee the global consistency of the seam network under user interactions and also enables robust editing of the network's topology.

1.3 Color Correction

Creating a single seamless image from a mosaic has been the subject of a large body of work for which gradient-domain (Poisson) techniques currently provide the best solution. Only one method exists to operate on the gradient domain of massive images: the streaming multigrid [89] technique. However, processing the three gigapixel image of Figure 1.2 using this technique still takes well over an hour, which does not support an interactive trial-and-error artistic process. An additional disadvantage of traditional out-of-core methods is their tendency to achieve a low memory footprint at the cost of significantly proliferating the disk storage requirements. For example, the multigrid method [89] requires auxiliary storage an order of magnitude greater than the input size, almost half of which is due to gradient computation. In contrast, our approach completely avoids such data proliferation, thereby allowing the processing of data which already pushes the boundaries of available storage.
The multigrid method [89] is also limited by main memory usage, since it is proportional to the number of iterations of the solver. This can cause the method to not achieve acceptable results for images that may require a large number of iterations, as shown in Section 4. This work introduces a new method with memory usage independent of the number of iterations of the Poisson solver which, therefore, scales gracefully in these cases.

One option to reduce run times is to design a similar scheme to run in a distributed environment. Consequently, there has been recent work to extend the multigrid solver [90] to a parallel implementation, reducing the time to compute a gigapixel solution to mere minutes. However, this approach is primarily a proof-of-concept since it does not supply the classic tests of scalability (weak or strong), nor is it tested significantly. Like many out-of-core methods, proliferation of disk storage requirements is a major drawback. For example, testing was only possible with a full 16-node cluster for some of the streaming multigrid test data due to excessive storage demands. Finally, the technique assumes that a small number of predetermined iterations is sufficient to achieve a solution, which may not always be the case. This implementation was optimized for a single distributed system and therefore is unlikely to port well to other environments.

History has shown that levels of abstraction that remove complexity from a code base can be instrumental in the advancement of technologies. Abstraction that allows simple and portable code accelerates innovation and reduces the time to develop new ideas. The cloud should be explored as such an abstraction, allowing a developer to ignore much of the more tedious and complex elements of implementing a distributed graphics algorithm. A general scheme cannot beat the performance of highly specialized and optimized code.
Often, for organizations with resources, there may be cases where speed and efficiency are more important than the cost to create and maintain such an implementation. However, with the increased availability of cloud commodities, there is now the opportunity to offer more members of our community the ability to develop new algorithms for a distributed environment.

In particular, in the area of color correction this dissertation work introduces a simple and light-weight framework that provides the user with the illusion of a full Poisson system solve at interactive frame rates for image editing. This framework also allows for the computation of a full solution on a single machine with a simple approach, rivaling the run time of the current best out-of-core technique [89], while producing equal or higher quality results on images that require a large number of iterations. The system is flexible enough to handle different hierarchical image formats, such as tiling for higher quality images or HZ-order for greater input/output (I/O) speed. In particular, by exploiting a new implicit kd-tree hierarchy for HZ-order, the framework needs only to access and solve visible pixels. This allows an artist to interactively apply gradient-based techniques to images gigapixels in size. This new framework is straightforward and requires neither complicated spatial indexing nor advanced caching schemes. Additionally, this work introduces a framework for parallel gradient-domain processing inspired by the out-of-core technique, with a novel reformulation that provides an efficient parallel distributed algorithm. This new framework has a straightforward implementation and shows both strong and weak parallel scalability. When implemented in standard MPI (Message Passing Interface), the same code base ports well to multiple distributed systems.
Furthermore, this distributed algorithm can be wrapped in a level of abstraction to be run on the cloud, allowing for a simple implementation, as well as allowing it to be distributed to the community at large. Specifically, the contributions of this work are:

• A coarse-to-fine progressive Poisson solver running at interactive frame rates, extended to a wide variety of gradient domain tasks, with the ability to scale to gigapixel images. This cascadic solver entirely avoids the coarsening stage of the V-cycle yet produces high quality results.

• A method to locally refine solutions with time and space requirements that are linearly dependent on the screen resolution rather than the resolution of the input image.

• A full out-of-core solver that maintains strict control over system resources, rivals the run times of the best known method, and consistently achieves quality results where previous methods may not converge well in practice.

• A light-weight streaming framework that provides adaptive multiresolution access to out-of-core images in a cache coherent manner, without using intricate indexing data structures or precaching schemes.

• A distributed algorithm based on the out-of-core scheme, which has a straightforward implementation and shows both strong and weak parallel scalability.

• The first distributed Poisson solver for imaging implemented in the cloud.

CHAPTER 2

RELATED WORK

This chapter will outline the previous work for the two major portions of the composition step of the panorama creation pipeline: image boundaries (Section 2.1) and color correction (Section 2.2). In particular, these sections will address the state-of-the-art for image boundaries, namely minimal image seams, and for color correction, namely gradient domain editing.

2.1 Image Boundaries: Seams

This section details the related previous work for computing image boundaries for an image mosaic.
In particular, this section focuses on the current state-of-the-art: the computation of minimal mosaic seams.

2.1.1 Pairwise Boundaries

Some of the seminal works in digital panoramas assume that an image collection is acquired in a single sweep (either pan, tilt, or a combination of the two) of the scene. In such panoramas, only pairwise overlaps of images need be considered [119, 120, 157, 145, 46, 165]. The pairwise boundaries which have globally minimal energy can be computed quickly and exactly using a min-cut or min-path algorithm. There is an intuitive and proven duality between min-cut and single-source/single-destination min-path [72]. These pairwise techniques were thought not to be general enough to handle the many configurations possible in modern panoramas. Recent work [69] has dealt with the combination of these seams for more complex panoramas, although the seam combinations are still based on an image distance metric. Other recent work [53] combined these separate seams sequentially for the purposes of texture synthesis. For that work, this was sufficient to provide good results for textures. The combination and intersection of these seams in a digital panorama can be more complex and therefore a more expressive combination is necessary. In addition, interaction was not considered as a necessary functionality in these works. This dissertation presents a novel technique to combine these disjoint seams into a global panorama seam network and allow for manual user interaction.

2.1.2 Graph Cuts

The Graph Cuts technique [26, 24] computes a k-labeling of a graph, typically an image, to minimize an energy function on the domain. An algorithm that guarantees to find the global minimum is considered to be NP-hard [26] and therefore Graph Cuts was designed to efficiently compute a good approximation.
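For contrast with this approximate k-labeling, the exact pairwise case of Section 2.1.1 reduces to a single min-cost path through the overlap region. The following is a minimal sketch, assuming a precomputed per-pixel difference energy; the function name and the restriction to the three upward neighbors are my own illustrative choices, not the dissertation's implementation:

```cpp
#include <vector>
#include <cstddef>
#include <algorithm>
#include <limits>

// Compute a vertical seam of minimal accumulated energy through an overlap
// region via dynamic programming (a min-path, dual to a min-cut here).
// 'energy[r][c]' would typically hold the difference between the two
// overlapping images at that pixel.
std::vector<std::size_t> minCostSeam(const std::vector<std::vector<double>>& energy) {
    const std::size_t rows = energy.size(), cols = energy[0].size();
    std::vector<std::vector<double>> cost(energy);  // accumulated path cost
    std::vector<std::vector<std::size_t>> parent(rows, std::vector<std::size_t>(cols, 0));

    for (std::size_t r = 1; r < rows; ++r)
        for (std::size_t c = 0; c < cols; ++c) {
            double best = std::numeric_limits<double>::infinity();
            std::size_t from = c;
            // Consider the up-left, up, and up-right predecessors.
            for (std::size_t p = (c ? c - 1 : 0); p <= std::min(c + 1, cols - 1); ++p)
                if (cost[r - 1][p] < best) { best = cost[r - 1][p]; from = p; }
            cost[r][c] += best;
            parent[r][c] = from;
        }

    // Backtrack from the cheapest bottom-row pixel to recover the seam.
    std::size_t c = static_cast<std::size_t>(
        std::min_element(cost[rows - 1].begin(), cost[rows - 1].end())
        - cost[rows - 1].begin());
    std::vector<std::size_t> seam(rows);
    for (std::size_t r = rows; r-- > 0; ) { seam[r] = c; c = parent[r][c]; }
    return seam;
}
```

Because the overlap is swept once and the path backtracked once, the cost is linear in the overlap area, which is why the pairwise case is both fast and exact.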
Graph Cuts has been shown to give good results for a variety of energy functions [92]. Thus, given this versatility, it is no surprise that it has been adapted to the image mosaic and panorama boundary problem [101, 4]. However, Graph Cuts is both a computationally expensive and memory intensive technique. Given these requirements, there has been work on accelerating the Graph Cuts process by, for instance, adapting the technique to run on the GPU [167], in parallel [106], or in parallel-distributed [48] environments. Building a hierarchy for the Graph Cuts computation [107, 5] has proven popular due to its reduction of memory and computation costs. For panoramas, this strategy has only been shown to provide good results for a hierarchy of two to three levels [5]. There has also been work on bringing Graph Cuts into an interactive setting [25, 142, 104, 61, 124], although these works have focused only on user-guided image segmentation. This dissertation provides the first technique to allow interactive editing of panorama boundaries.

2.1.3 Alternative Boundary Techniques

While Graph Cuts still maintains its popularity as a solution to the minimal boundary problem, there has been other ongoing work on alternative techniques. For example, there has been work on techniques based on luminance voting [82]. There has also been recent work using geodesics to interactively compute pairwise minimal boundaries [45].

2.1.4 Out-of-Core and Distributed Computation

While there has been previous work to bring the Graph Cuts technique to massive grids, that work has only dealt with extending the algorithm to a distributed and out-of-core environment [48]. No current technique decouples the two. The work of this dissertation has the flexibility to operate in-core, out-of-core, or distributed depending on the application or available resources.
Moreover, the inherent parallelism of the new technique is likely to outperform the previous work. Finally, there has been no work to allow interaction with these seams at massive scales. Hierarchical Graph Cuts has been used on large images [2], although given the documented limitation on the viable number of levels in the hierarchy [5], this will not scale massively. Applying standard Graph Cuts as a sweeping window over an image neighborhood [94] has been used to produce boundaries for gigapixel imagery. Such a process has yet to be formulated in parallel and is potentially very computationally expensive.

2.2 Color Correction: Gradient Domain Editing

This section details the related previous work on the most popular and sophisticated color correction procedure for image mosaics, called gradient domain image editing or, by its alternative name, Poisson image editing.

2.2.1 Poisson Image Processing

A variety of gradient-based methods provide a popular, but computationally expensive, set of techniques for advanced image manipulation. Given a guiding gradient field constructed from one or multiple source images, these techniques attempt to find a closest-fit image using some predetermined distance metric. This basic concept has been adapted for standard image editing [133], as well as more advanced matting operations [152], and high level drag-and-drop functionality [79]. Furthermore, gradient-based techniques can tone map high dynamic range images to display favorably on standard monitors [55] or hide the seams in panoramas [133, 103, 4, 89]. Other applications include detecting lighting [76] or shapes from images [173], removing shadows [57] or reflections [6], and gradient domain painting [114]. Recently, an alternative approach using mean value coordinates has smoothly interpolated the boundary offset between source images, thereby mimicking Dirichlet boundary conditions [54].
This promising new line of research has yet to show support for Poisson techniques such as tone mapping, the ability to work well out-of-core, or consistently acceptable results for methods that typically require Neumann boundary conditions.

2.2.2 Poisson Solvers

The solution to a two-dimensional (2D) Poisson problem lies at the core of gradient-based image processing. Poisson equations have wide utility in many engineering and science applications. Computing their solution efficiently has been the focus of a large body of work, and even a cursory review is beyond the scope of this dissertation. For small images, methods exist to find the direct analytical solution using Fast Fourier transforms [75, 7, 8, 113]. Simchony [146] provides a survey of these methods for computer vision applications. Often the problem is simplified by discretization into a large linear system whose dimension is typically the number of pixels in an image. If this system is small enough to fit into memory, methods exist to find the direct solution, and we refer the reader to Dorr [51], who provides an extensive review of direct methods. Typically, iterative Krylov subspace methods, such as conjugate gradient, are used due to their fast convergence. For much larger systems, memory consumption is the limiting factor and iterative solvers, such as Successive Over-Relaxation (SOR) [13], become more attractive. Depending on the application, different levels of accuracy may be required. Sometimes, a coarse approximation is sufficient to achieve the desired result. Bilateral upsampling methods [93] operating on a coarse solution have produced good results for applications such as tone mapping. Such methods have not yet been shown to handle applications such as image stitching, where the interpolated values are typically not smooth at the seams between images. When pure upsampling is insufficient, the system must be solved fully.
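The iterative relaxation at the heart of these solvers can be sketched as follows. This is a minimal, illustrative Gauss-Seidel update for the standard 5-point discretization of lap(u) = b with fixed (Dirichlet) boundary values, not the dissertation's solver; SOR and conjugate gradient replace or accelerate exactly this inner step:

```cpp
#include <vector>
#include <cstddef>

typedef std::vector<std::vector<double>> Grid;

// One Gauss-Seidel sweep: solve the 5-point stencil for each interior
// pixel in place, so later updates in the same sweep already see the
// freshest neighbor values (the property that distinguishes Gauss-Seidel
// from Jacobi iteration).
void gaussSeidelSweep(Grid& u, const Grid& b) {
    for (std::size_t i = 1; i + 1 < u.size(); ++i)
        for (std::size_t j = 1; j + 1 < u[i].size(); ++j)
            u[i][j] = 0.25 * (u[i-1][j] + u[i+1][j] + u[i][j-1] + u[i][j+1] - b[i][j]);
}

// Repeat sweeps; a real solver would instead monitor the residual norm
// and stop on convergence.
void relax(Grid& u, const Grid& b, int sweeps) {
    for (int s = 0; s < sweeps; ++s) gaussSeidelSweep(u, b);
}
```

Plain relaxation like this damps high-frequency error quickly but low-frequency error very slowly, which is precisely the weakness the multigrid methods discussed next address by handling large scale trends at coarse resolutions.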
Multigrid methods are often employed to aid the convergence of an iterative solver. Such methods have proven particularly effective by dealing with the large scale trends at coarse resolutions. These techniques include preconditioners [66, 155] and multigrid solvers [27, 28]. There exist different variants of multigrid algorithms using either adaptive [20, 88, 21, 2, 139] or nonadaptive meshes [87, 89]. As a first step in a complete multigrid system, the mesh is coarsened. The Poisson equation can then be solved in a coarse-to-fine manner. One full iteration, from fine to coarse and back, is typically called a V-cycle. Most recently, a V-cycle was implemented in a streaming fashion for large panoramas [89]. However, other systems only implement parts of the V-cycle. Kopf et al. [94] implement only the second half in a pure upsampling procedure, while Bolitho et al. [21] implement a purely coarse-to-fine solver, also called cascadic [22]. Lischinski et al. [105] applied this pure coarse-to-fine approach to interactive tonal adjustment. The technique outlined in this dissertation shows, for the first time, that a cascadic approach has applications well beyond the adjustment of tonal values and can be used for a wide variety of gradient-based image processing techniques. This work also extends such techniques to allow the interactive editing and processing of gigapixel images. The solver propagates sufficient information from coarse to fine, allowing us to achieve local solutions at interactive rates that are virtually indistinguishable from the full-resolution solution.

2.2.3 Out-of-Core Computation

Toledo [160] presents a survey of general out-of-core algorithms for linear systems. The majority of algorithms surveyed assume that at least the solution vectors can be kept in main memory, which is not the case for large images.
For out-of-core processing of large images, the streaming multigrid method of Kazhdan and Hoppe [89] has so far provided the only solution. However, processing a three gigapixel image using this technique still takes well over an hour, which does not support an interactive trial-and-error artistic process. Many algorithms, such as tone mapping, require careful parameter tuning to achieve good results. Thus, waiting several hours to examine the effects of a single parameter change is not feasible in this context. An additional disadvantage of traditional out-of-core methods is their tendency to achieve a low memory footprint at the cost of significantly proliferating the disk storage requirements. For example, the multigrid method [89] requires auxiliary storage an order of magnitude greater than the input size, almost half of which is due to gradient computation. In contrast, in this dissertation I, with my collaborators, introduce an approach that completely avoids such data proliferation, thereby allowing the processing of data which already pushes the boundaries of available storage. The multigrid method [89] is also limited by main memory usage, since it is proportional to the number of iterations of the solver. This can cause the method to not achieve acceptable results for images that may require a large number of iterations, as shown in Section 4. This work provides a new method with memory usage independent of the number of iterations which, therefore, scales gracefully in these cases.

2.2.4 Distributed Computation

Recently, the streaming multigrid method has been extended to a distributed environment [90], reducing the time to process gigapixel images from hours to minutes. The distributed multigrid requires 16 bytes/pixel of disk space in temporary storage for the solver as well as 24 bytes/pixel to store the solution and gradient constraints. For the terapixel example of Kazhdan et al.
[90], the method had a minimum requirement of 16 nodes in order to accommodate the needed disk space for fast local caching. In contrast, the approach outlined in this work needs no temporary storage and is implemented in standard MPI. Streaming multigrid also assumes a precomputed image gradient, which would add substantial overhead in initialization to transfer the color float or double data. Our new approach is initialized using the original image data plus an extra byte for image boundary information, which equates to 1/3 less data transfer in initialization than the previous method. Data transfers between this solver's phases, while floating point, only deal with the boundaries between compute nodes, which are substantially smaller than the full image and therefore are rarely required to be cached to disk. The multigrid method [89, 90] may also be limited by main memory, since the number of iterations of the solver is directly proportional to the memory footprint. For large images, this limits the solver to only a few Gauss-Seidel iterations and therefore it may not necessarily converge for challenging cases. Our method's memory usage is independent of the number of iterations and can therefore solve images that have slow convergence.

Often large images are stored as tiles at the highest resolution; therefore, methods that exploit this structure would be advantageous. Stookey et al. [148] use a tile-based approach to compute an over-determined Laplacian PDE (partial differential equation). By using tiles that overlap in all dimensions, the method solves the PDE on each tile separately and then blends the solution via a weighted average. Unfortunately, this method cannot account for large scale trends beyond a single overlap and therefore can only be used on problems which have no large (global) trends. Figure 2.1 illustrates why this would be a problem for panorama processing.
The coarse upsampling of our approach fixes this issue.

2.2.5 Cloud Computing - MapReduce and Hadoop

MapReduce [47] was developed by Google as a simple framework to process massive data on large distributed systems. It is an abstraction that owes its inspiration to functional programming languages such as Lisp. At its core, the framework relies on two simple operations:

• Map: Given input, create a key/value pair.

• Reduce: Process all values of a given key.

All the complexity of a typical distributed implementation due to data distribution, load balancing, fault-recovery and communication is under this abstraction layer and therefore can be ignored by a developer. This framework, when combined with a distributed file system, can be a simple yet powerful tool for data processing. Hadoop is an open source implementation of MapReduce maintained by the Apache Software Foundation and can be optionally coupled with its own distributed file system (HDFS). Pavlo et al. [131] found that Hadoop was easy to deploy and use, offered adequate scalability, had very effective fault-tolerance, and, most importantly, was easy to adapt for complex analytic tasks.

Figure 2.1: Although the result is a seamless, smooth image, without coarse upsampling the final image will fail to account for large trends that span beyond a single overlap, which can lead to unwanted, unappealing shifts in color.

Hadoop is also widely available as a commodity resource. For example, Amazon Web Services, a service suite that has become nearly synonymous with cloud computing in the media, provides Hadoop as a native capability [10]. Companies have begun to use Hadoop as a simple alternative for data processing on large clusters [71]. For instance, The New York Times has used Hadoop for large scale image to PDF conversion [68]. Google, IBM, and NSF have also partnered to provide a Hadoop cluster for research [41].
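As a sketch of the contract those two operations define, here is a sequential stand-in for the classic word-count example. This is illustrative only; Hadoop executes the same phases distributed across a cluster, with the grouping (shuffle) performed by the framework:

```cpp
#include <map>
#include <string>
#include <utility>
#include <vector>

// Sequential stand-in for the MapReduce pattern: 'map' emits key/value
// pairs, the framework groups them by key, and 'reduce' folds each key's
// values. Here the map emits (word, 1) and the reduce sums the ones.
std::map<std::string, int> mapReduceCount(const std::vector<std::string>& words) {
    // Map phase: each input word emits the pair (word, 1).
    std::vector<std::pair<std::string, int>> emitted;
    for (const std::string& w : words) emitted.push_back(std::make_pair(w, 1));

    // Shuffle + reduce phase: group the pairs by key and sum the values.
    std::map<std::string, int> counts;
    for (const std::pair<std::string, int>& kv : emitted) counts[kv.first] += kv.second;
    return counts;
}
```

The point of the abstraction is that only these two small functions are application-specific; everything a distributed run adds (partitioning, load balancing, fault recovery) stays below the abstraction layer.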
2.2.6 Out-of-Core Data Access

Given an image, it is well known that the standard row-major order exhibits good locality in only one dimension and is therefore ill-suited for an unconstrained out-of-core storage scheme [168]. Previous out-of-core Poisson methods [89] have been noted to be severely limited by this constraint. Instead, indexing based on various space-filling curves [143] has been proposed in different applications [126, 70, 17, 102] to exploit their inherent geometric locality. Of particular interest is the Z-order (also called Lebesgue-order) [17, 128], since it allows an especially simple conversion to and from standard row-major indices. While Z-order exhibits good locality in all dimensions, it does so only at a fixed resolution and does not support hierarchical access. Instead, this work will utilize the hierarchical variant, called HZ-order, proposed by Pascucci and Frank [128].

CHAPTER 3

SCALABLE AND EFFICIENT DATA ACCESS

At the core of any large data processing system is an efficient scheme for data access. In this chapter, I will detail the technology used in the systems outlined in this work. In Section 3.1, I will review the fundamentals of the hierarchical Z-order (HZ-order) for two-dimensional arrays, our chosen format for large image processing. I will also provide a new, simple algorithm in Section 3.2 for accessing data organized in HZ-order, while avoiding the repeated index conversions used in [128]. Section 3.3 will provide a new parallel write scheme for HZ-order data. Finally, in Section 3.4 I will give an outline of the ViSUS software infrastructure, the core system behind much of the massive image processing outlined in this dissertation. Conversion of large images into the ViSUS format requires no additional storage, compared to the typical 1/3 data increase common for typical tiled image hierarchies.
From our test data, we have found that there is only a 27% overhead for the conversion compared to just copying the raw data, which makes this conversion very light. The conversion requires no operations on the pixel data and will outperform even the simplest tiled hierarchies, which require some manipulation of the pixel data. Section 5.2.2 will show that this new I/O infrastructure reduces the overhead by 28%-40% when compared to a standard tiled image hierarchy. These numbers reflect the theoretical bound of 1/3 overhead, made worse by the inability to constrain real queries to align perfectly with the boundaries of a quadtree.

3.1 Z- and HZ-Order Background

The data access routine of our system achieves high performance on our image data by utilizing a hierarchical variant of a standard Z-order (Lebesgue) space filling curve to lay out our two-dimensional data in one-dimensional memory. In the two-dimensional case, the Z-order curve can be defined recursively by a Z shape whose vertices are replaced by Z shapes half its size (see Figure 3.1 (a)). Given the binary row-major index of a pixel (i_n ... i_1 i_0, j_n ... j_1 j_0), the corresponding Z-order index I is computed by interleaving the indices, I = j_n i_n ... j_1 i_1 j_0 i_0 (see Figure 3.2 (a), step 1). While Z-order exhibits good locality in all dimensions, it does so only at full resolution and does not support hierarchical access. Instead, our system uses the hierarchical variant, called HZ-order, proposed by Pascucci and Frank [128].

Figure 3.1: (a) The first four levels of the Z-order space filling curve; (b) a 4x4 array indexed using standard Z-order. [The table in (b) lists, for each sample index 0-15, its binary row-major representation and its interleaved Z-order bits, e.g. sample 4 = 0100 corresponds to Z-order bits 10 00.]
This new index changes the standard Z-order of Figure 3.1 (b) to be organized by levels corresponding to a subsampling binary tree, in which each level doubles the number of points in one dimension (see Figure 3.2 (b)). This pixel order is computed by adding a second step to the index conversion. To compute the HZ-order index, the binary representation of a given Z-order index I is shifted to the right until the first 1-bit exits. During the first shift, a 1-bit is added to the left, and 0-bits are added in all following shifts (see Figure 3.2 (a)). This conversion could have a very simple and efficient hardware implementation. The software C++ version can be implemented as follows:

Figure 3.2: (a) Address transformation from row-major index (i, j) to Z-order index I (Step 1) and then to hierarchical Z-order index (Step 2); (b) levels of the hierarchical Z-order for a 4x4 array. The samples on each level remain ordered by the standard Z-order.

    inline adhocindex remap(register adhocindex i)
    {
        i |= last_bit_mask; // set leftmost one
        i /= i & -i;        // remove trailing zeros
        return (i >> 1);    // remove rightmost one
    }

We store the data in a way guaranteeing efficient access to any subregion without internal caching and without opening a data block more than once. Furthermore, we allow for storage of incomplete arrays. In our storage format, we first sort the data in HZ-order and group consecutive samples in blocks of constant size. A sequence of consecutive blocks is grouped into a record, and records are clustered in groups, which are organized hierarchically. Each record has a header specifying which of its blocks are actually present and whether the data are stored raw or compressed. Groups can miss entire records or subgroups, implying that all their respective blocks and records are missing.
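To make the two conversion steps concrete, the following sketch (my own illustrative version, assuming a square array with up to 16 bits per dimension) performs the bit interleaving of step 1 and the shift procedure of step 2 that remap() implements above:

```cpp
#include <cstdint>

// Step 1: interleave a row-major pair (i, j) into a Z-order index
// I = ... j1 i1 j0 i0, with i in the even bit positions.
uint32_t zOrder(uint16_t i, uint16_t j) {
    uint32_t z = 0;
    for (int b = 0; b < 16; ++b) {
        z |= (uint32_t)((i >> b) & 1u) << (2 * b);
        z |= (uint32_t)((j >> b) & 1u) << (2 * b + 1);
    }
    return z;
}

// Step 2: Z-order to HZ-order for an array whose largest index has 'bits'
// bits: set a guard bit on the left, strip trailing zeros, then drop the
// rightmost one -- the same three operations as the remap() routine.
uint32_t hzOrder(uint32_t z, int bits) {
    z |= 1u << bits;     // set leftmost (guard) one
    z /= z & (~z + 1u);  // remove trailing zeros
    return z >> 1;       // remove rightmost one
}
```

For the 4x4 example, hzOrder places the single level-0 sample (Z-index 0) at HZ-index 0, the level-1 sample (Z-index 8) at HZ-index 1, and the finest, odd Z-indices at HZ-indices 8 through 15, matching the level structure of Figure 3.2 (b).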
The file format is implemented via a header file describing the various parameters (dimension, block size, record size, etc.) and one file per record. The hierarchy of groups is implemented as a hierarchy of directories, each containing a predetermined maximum number of subdirectories. The leaves of each directory contain only records. To open a file, one needs only to reconstruct the path of a record and defer its search to the file system. In particular, the path of a record is constructed as follows: we take the HZ-address of the first sample in the record, represent it as a string, and partition it into chunks of characters naming directories, subdirectories, and the record file. Note that, since blocks, records and groups can be missing, one is not restricted to arrays of data that cover the entire index space. In fact, we can easily store even images with different regions sampled at different resolutions.

3.2 Efficient Multiresolution Range Queries

One of the key components of our framework is the ability to quickly extract rectangular subsets of the input image in a progressive manner. Computing the row-major indices of all samples residing within a given query box is straightforward. However, efficiently calculating their corresponding HZ-indices is not. Transforming each address individually results in a large number of redundant computations by repeatedly converting similar indices. To avoid this overhead, we introduce a recursive access scheme that traverses an image in HZ-order while concurrently computing the corresponding row-major indices. This traversal implicitly follows a kd-tree style subdivision, allowing us to quickly skip large portions of the image. To better illustrate the algorithm, I will first describe how to recursively traverse an array in plain Z-order, using the 4x4 array of Figure 3.1 (b) as an example.
Subsequently, I will discuss how to restrict the traversal to a given query rectangle and, finally, how the scheme is adapted to HZ-order. We use a stack containing tuples of type (split-dimension, I_start, min_i, max_i, min_j, max_j, num_elements). To start the process we push the tuple t0 = (1, 0, 0, 3, 0, 3, 16) onto the stack. At each iteration we pop the top-most element t from the stack. If t contains only a single element, we output the current I_start as HZ-index and fetch the corresponding sample. Otherwise, we split the region represented by t into two pieces along the axis given by split-dimension and create the corresponding tuples t1 = (0, 0, 0, 3, 0, 1, 8) and t2 = (0, 8, 0, 3, 2, 3, 8). Note that all elements of t1 and t2 can be computed from t by simple bit manipulation. In the case of a square array, we simply flip the split dimension each time a tuple is split. However, one can also store a specific split order to accommodate rectangular arrays. Figure 3.3 shows the first eight iterations of the algorithm, outputting the first four elements in the array of Figure 3.1 (b). To use this algorithm for fast range queries, each tuple is tested against the query box as it comes off the stack and discarded if no overlap exists. Since the row-major indices describing the bounding box of each tuple are computed concurrently, the intersection test is straightforward. Furthermore, the scheme applies, virtually unchanged, to traverse samples in Z-order that subsample an array uniformly along each axis, where the subsampling rate along each axis could be different. Finally, to adapt the algorithm to HZ-order (see Figure 3.2 (b)), one exploits the following two important facts:

• One can directly compute the starting HZ-index for each level. For example, in a square array, level 0 contains one sample and all other levels h contain 2^(h-1) samples. Therefore the starting HZ-index of level h, I_start^h, is 2^(m-h), where m is the number of bits of the largest HZ-index.
• Within each level, samples are ordered according to plain Z-order and can be traversed with the stack algorithm described above, using the appropriate subsampling rate.

Using these two facts, one can iterate through an array in HZ-order by processing one level at a time, adding I_start^h to the I_start index of each tuple. In practice, we avoid subdividing the stack tuples to the level of a single sample. Instead, depending on the platform, we choose a parameter n and build a table with the sequence of Z-order indices for an array with 2^n elements. When running the stack algorithm, each time a tuple t with 2^n elements appears, we loop through the table instead of splitting t. By accessing only the necessary samples in strict HZ-order, the stack-based algorithm guarantees that only the minimal number of disk blocks are touched and each block is loaded exactly once. For progressively refined zooms in a given area, we can apply this algorithm with a minor variation. In particular, one would need to reduce the size of the bounding box represented in a tuple each time it is pushed back onto the stack. In this way, even for a progressively refined zoom, one would access only the needed data blocks, each being accessed only once.

Figure 3.3: Our fast-stack Z-order traversal of a 4x4 array with concurrent index computation.

3.3 Parallel Write

The multiresolution data layout outlined above is a progressive, linear format and therefore has a write routine that is inherently serial. When processing a large image on a distributed system, or even on a single multicore system, it would be ideal for each node, or process, to be able to write out its piece of the data directly in this layout.
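Returning briefly to the range queries of Section 3.2, the stack-based traversal can be made concrete. The sketch below is a simplified illustration rather than the actual ViSUS code: it traverses a square array in plain Z-order with concurrent row-major index computation, discarding tuples that do not overlap a query box as they are popped; the tuple layout follows the text, and the first split (t0 with split-dimension 1) reproduces the t1/t2 example above.

```cpp
#include <stack>
#include <vector>

// Sketch of the stack-based Z-order traversal for a square 2^k x 2^k
// array, producing Z-order indices together with their row-major (i,j)
// locations; a query box prunes whole tuples before they are split.
struct Tuple {
    int split_dim;                   // 0: split the i range, 1: split the j range
    int i_start;                     // Z-order index of the tuple's first sample
    int min_i, max_i, min_j, max_j;  // row-major bounding box (inclusive)
    int n;                           // number of samples in the tuple
};

struct Sample { int z, i, j; };

std::vector<Sample> z_traverse(int size, int qmin_i, int qmax_i,
                               int qmin_j, int qmax_j) {
    std::vector<Sample> out;
    std::stack<Tuple> st;
    st.push({1, 0, 0, size - 1, 0, size - 1, size * size});
    while (!st.empty()) {
        Tuple t = st.top(); st.pop();
        // Discard tuples that do not overlap the query box.
        if (t.max_i < qmin_i || t.min_i > qmax_i ||
            t.max_j < qmin_j || t.min_j > qmax_j) continue;
        if (t.n == 1) { out.push_back({t.i_start, t.min_i, t.min_j}); continue; }
        Tuple a = t, b = t;
        a.split_dim = b.split_dim = 1 - t.split_dim;  // flip for square arrays
        a.n = b.n = t.n / 2;
        b.i_start = t.i_start + t.n / 2;
        if (t.split_dim == 1) { int mid = (t.min_j + t.max_j) / 2;
            a.max_j = mid; b.min_j = mid + 1; }
        else                  { int mid = (t.min_i + t.max_i) / 2;
            a.max_i = mid; b.min_i = mid + 1; }
        st.push(b); st.push(a);  // process the low half first
    }
    return out;
}
```

For the full 4x4 box, the traversal emits all 16 samples in exactly Z-order; for a one-pixel query, only the tuples on the path to that sample are ever split, which is the pruning that lets the scheme skip large portions of the image.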
Therefore, a parallel write strategy must be employed. Figure 3.4 illustrates different possible parallel strategies. As shown in Figure 3.4 (a), each process can naively write its own data directly to the underlying binary file. This is inefficient due to the large number of small file accesses. As data get large, it also becomes disadvantageous to store the entire dataset as a single large file, and typically the dataset is partitioned into a series of smaller, more manageable files. This disjointness can be used by a parallel write routine. As each simulation process produces simulation data, it can store its piece of the overall dataset locally and pass the data on to an aggregator process. These aggregator processes gather the individual pieces and composite the entire dataset. In Figure 3.4 (b), each process transmits each contiguous data segment to an intermediate aggregator. Once the aggregator's buffer is complete, the data are written to disk using a single large I/O operation. Figure 3.4 (c) illustrates a strategy where several noncontiguous memory accesses from each process are bundled into a single message. This approach reduces the number of small network messages needed to transfer data to aggregators. This last strategy has been shown to exhibit good throughput performance and weak scaling for S3D combustion simulation applications when compared to standard Fortran I/O benchmarks [98, 99].

3.4 ViSUS Software Framework

The ViSUS (Visual Streams for Ultimate Scalability) software framework6 has been designed as an environment to allow the interactive exploration of massive datasets on a variety of hardware, possibly over platforms distributed geographically. The system and I/O infrastructure is designed to handle n-dimensional datasets but is typically used on two-dimensional and three-dimensional image data. The two-dimensional portion of this

6 http://visus.co or http://visus.us
system is the core application on which the massive applications outlined in this dissertation are built.

Figure 3.4: (a) Naive parallel strategy where each process writes its piece of the overall dataset into the underlying file via MPI file writes; (b) each process transmits each contiguous data segment to an intermediate aggregator, and once the aggregator's buffer is complete, the data are written to disk; (c) several noncontiguous memory accesses are bundled into a single MPI indexed-datatype message to decrease communication overhead.

The ViSUS software framework was designed with the primary philosophy that the visualization and/or processing of massive data need not be tied to specialized hardware or infrastructure. In other words, a visualization environment for large data can be designed to be lightweight, highly scalable, and run on a variety of platforms or hardware. Moreover, if designed generally, such an infrastructure can have a wide variety of applications, all from the same code base. Figure 3.5 details example applications and the major components of the ViSUS infrastructure. The components can be grouped into three major categories. First, a lightweight and fast out-of-core data management framework using multiresolution space-filling curves, which I have outlined in Sections 3.1, 3.2, and 3.3. This allows the organization of information in an order that exploits the cache hierarchies of any modern data storage architecture. Second, ViSUS contains a dataflow framework to allow data to be processed during movement.
Processing massive datasets in their entirety would be a long and expensive operation which hinders interactive exploration. By designing new algorithms to fit within this framework, data can be processed as they move. The Progressive Poisson technique outlined in Section 5.2 is one such new algorithm. Third, ViSUS provides a portable visualization layer that was designed to scale from mobile devices to powerwall displays with the same code base.

Figure 3.5: The ViSUS software framework. Arrows denote external and internal dependencies of the main software components (image conversion via FreeImage, compression via zlib, networking via curl, threading via pthreads or native Windows threads, rendering via OpenGL with the GLEW extensions, and the Juce GUI library). Additionally, this figure illustrates the relationship with several example applications that have been successfully developed with this framework.

Figure 3.5 provides a diagram of the ViSUS software architecture. In this section I will detail three of ViSUS's major components and how they couple with the efficient data access detailed in the previous sections to achieve a fast, scalable, and highly portable data processing and visualization environment. Finally, I will illustrate an important additional use of this infrastructure: real-time monitoring of scientific simulations.

3.4.1 LightStream Dataflow and Scene Graph

Even simple manipulations can be overly expensive when applied to each variable in a large-scale dataset. Instead, it would be ideal to process the data based on need by pushing them through a processing pipeline as the user interacts with different portions of the data. The ViSUS multiresolution data layout enables efficient access to different regions of the data at varying resolutions. Therefore, different compute modules can be implemented using progressive algorithms to operate on these data streams.
Operations such as binning, clustering, or rescaling are trivial to implement on this hierarchy given some known statistics on the data, such as the function value range. These operators can be applied to the data stream as-is, while the data are moving to the user, progressively refining the operation as more data arrive. More complex operations can also be reformulated to work well using the hierarchy. For instance, using the layout for image data produces a hierarchy which is identical to a subsampled image pyramid on the data. Moreover, as data are requested progressively, the transfer will traverse this pyramid in a coarse-to-fine manner. Techniques such as gradient-domain image editing can be reformulated to use this progressive stream and produce visually acceptable solutions, as will be detailed in Section 5.2. These adaptive, progressive solutions allow the user to explore a full-resolution solution as if it were fully available, without the expensive, full computation. ViSUS LightStream facilitates this stream processing model by providing definable modules within a dataflow framework with a well-understood API. Figure 3.6 gives an example of a dataflow for the analysis and visualization of a scientific simulation. This particular example is the dataflow for a Uintah combustion simulation used by the C-SAFE Center for the Simulation of Accidental Fires and Explosions at the University of Utah. Each LightStream module provides streaming capability through input and output data ports that can be used in a variety of data transfer/sharing modes. In this way, groups of modules can be chained to provide complex processing operations as the data are transferred from the initial source to the final data analysis and visualization stages. This data flow is typically driven by user demands and interactions.
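As a toy illustration of this chaining (the Module type and function-based ports below are hypothetical simplifications, not the LightStream API), progressive operators can be composed so that each chunk of an arriving stream flows through every module in turn:

```cpp
#include <functional>
#include <vector>

// Hypothetical sketch of chaining streaming modules: each module maps an
// incoming chunk of samples to an outgoing chunk, and chunks flow through
// the chain as they arrive, so coarse data can be displayed and refined
// before the full-resolution data exist.
using Chunk  = std::vector<double>;
using Module = std::function<Chunk(const Chunk&)>;

Chunk run_chain(const std::vector<Module>& chain, const Chunk& in) {
    Chunk data = in;
    for (const auto& m : chain) data = m(data);  // push the chunk downstream
    return data;
}
```

A real module would also carry port metadata and the transfer/sharing modes mentioned above; the point of the sketch is only that per-chunk processing lets results refine progressively as more data arrive.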
Figure 3.6: The LightStream dataflow used for analysis and visualization of a three-dimensional combustion simulation (Uintah code). (a) Several dataflow modules chained together to provide a light and flexible stream processing capability. (b) One visualization that is the result of this dataflow.

A variety of "standard" modules, such as data differencing (for change detection), content-based image clustering (for feature detection), or volume rendering with multiple, science-centric transfer functions, are part of the base system. These can be used by new developers as templates for their own progressive streaming data processing modules. ViSUS also provides a scene graph hierarchy both for organizing objects in a particular environment and for the sharing and inheriting of parameters. Each component in a model is represented by a node in this scene graph and inherits the transformations and environment parameters of its parents. Three-dimensional volume or two-dimensional slice extractors are children of a dataset node. As an example of inheritance, a scene graph parameter for a transfer function can be applied to the scene graph node of a dataset. If the extractor on this dataset does not provide its own transfer function, it will be inherited.

3.4.2 Portable Visualization Layer - ViSUS AppKit

The visualization component of ViSUS was built with the philosophy that a single code base can be designed to run on a variety of platforms and hardware, ranging from mobile devices to powerwall displays. To enable this portability, the basic draw routines were designed to be OpenGL ES compatible. This is a limited subset of OpenGL used primarily for mobile devices. More advanced draw routines can be enabled if a system's hardware can support them. In this way, the data visualization can scale in quality depending on the available hardware.
Beyond the display of the data, the underlying graphical user interface (GUI) library can hinder portability to multiple devices. At this time, ViSUS makes use of the Juce7 library, which is lightweight and supports mobile platforms such as iOS and Android in addition to the major operating systems. ViSUS provides a demo viewer that contains standard visualizations such as slicing, volume rendering, and isosurfacing. Similarly to the example LightStream modules, these routines can be expanded through a well-defined application programming interface (API). Additionally, the base system can display two-dimensional and three-dimensional time-varying data. As mentioned above, each of these visualizations can operate on the end result of a LightStream dataflow. The system considers a two-dimensional dataset as a special case of a slice renderer, and therefore the same code base is used for two-dimensional and three-dimensional datasets. Combining all of the above design decisions allows the same code base to be used on multiple platforms seamlessly for data of arbitrary dimensions. Figure 3.7 shows the same application and visualization running on an iPhone 3G mobile device and a powerwall display.

3.4.3 Web-Server and Plug-In

ViSUS has been extended to support a client-server model in addition to the traditional viewer. The ViSUS server can be used as a standalone application or a web server plugin module. The ViSUS server uses HTTP (a stateless protocol) in order to support many clients. A traditional client/server infrastructure, where the client establishes and maintains a stable connection to the server, can only handle a limited number of clients robustly. Using HTTP, the ViSUS server can scale to thousands of connections. The ViSUS client keeps a number (normally 48) of connections alive in a pool using the "keep-alive" option of HTTP. The use of lossy or lossless compression is configurable by the user.
For example, ViSUS supports JPEG and EXR for lossy compression of byte and floating point data, respectively. The ViSUS server is an open client/server architecture; it is therefore possible to port the plugin to any web server which supports a C++ module (e.g., Apache, IIS). The ViSUS client can be enabled to cache data to local memory or to disk. In this way, a client can minimize transfer time by referencing data already sent, as well as having the ability to work offline if the server becomes unreachable. The ViSUS portable visualization framework (AppKit) can also be compiled as a Google Chrome, Microsoft Internet Explorer, or Mozilla Firefox web browser plugin. This allows a ViSUS framework-based viewer to be easily integrated into web visualization portals.

7 http://www.rawmaterialsoftware.com

Figure 3.7: The same application and visualization of a Mars panorama running on an iPhone 3G mobile device (a) and a powerwall display (b). Data courtesy of NASA.

3.4.4 Additional Application: Real-Time Monitoring

In addition to the applications presented in this dissertation, the ViSUS framework is also ideally suited to the real-time monitoring of large scientific simulations. Ideally, for these simulations a user-scientist would like to view a simulation as it is computed, in order to steer or correct the simulation as unforeseen events arise. Simulation data are often very large. For instance, a single field of a time-step from the S3D combustion simulation in Figure 3.8 (a) is approximately 128 gigabytes in size. In the time needed to transfer this single time-step, the user-scientist would have lost any chance of significant steering/correction of an ongoing simulation, or at least the ability to avoid wasting further resources by terminating a failed simulation early.
By using the parallel ViSUS data format in simulation checkpointing [98, 99], we can link these data directly with an Apache server using a ViSUS plug-in running on a node of the cluster system. By doing this, user-scientists can visualize simulation data as checkpoints are reached. ViSUS can handle missing or partial data; therefore, the data can be visualized even as they are being written to disk by the system. ViSUS's support for a wide variety of clients (a stand-alone application, a web-browser plug-in, or an iOS application for the iPad or iPhone) allows the application scientist to monitor a simulation as it is produced, on practically any system that is available, without any need to transfer the data off the computing cluster. As mentioned above, Figure 3.8 (a) shows an S3D large-scale combustion simulation visualized remotely from a high performance computing platform8.

Figure 3.8: Remote visualization and monitoring of simulations. (a) An S3D combustion simulation visualized from a desktop in the Scientific Computing and Imaging (SCI) Institute (Salt Lake City, Utah) during its execution on the HOPPER 2 high performance computing platform at Lawrence Berkeley National Laboratory (Berkeley, California). (b) Two ViSUS demonstrations of LLNL simulation codes (Miranda and Raptor) visualized in real time while executed on the BlueGene/L prototype installed at the IBM booth of the Supercomputing exhibit.

This work is the natural evolution of the ViSUS approach of targeting practical applications for out-of-core data analysis and visualization. This approach has been used for direct streaming and real-time remote monitoring of early large-scale simulations, such as those executed on the IBM BG/L supercomputers at Lawrence Livermore National Laboratory (LLNL) [130] shown in Figure 3.8 (b).
This work continues its evolution towards the deployment of high performance tools for in situ and postprocessing data management and analysis for the software and hardware resources of the future, including the exascale DOE platforms of the next decade9.

8 Data courtesy of Jackie Chen at Sandia National Laboratories, Combustion Research Facility, http://ascr.sandia.gov/people/Chen.htm
9 Center for Exascale Simulation of Combustion in Turbulence (ExaCT), http://science.energy.gov/ascr/research/scidac/co-design/

CHAPTER 4

INTERACTIVE SEAM EDITING AT SCALE

This chapter outlines the Panorama Weaving technique, which brings the boundary computation phase of the panorama creation pipeline into an interactive environment. Section 4.1 gives the relevant background and formulation for the computation of optimal image boundaries. Section 4.2 discusses how to achieve interaction with pairwise boundaries. Section 4.3 introduces the adjacency mesh data structure and how it can be used to bring pairwise seams to a global seam solution. Section 4.4 details how to extend the Panorama Weaving technique to an out-of-core environment, thereby scaling the technique to gigapixel images. In Section 4.5, I will detail how to design an interactive system using this technique, and in Section 4.6 how to scale it to large images. Finally, Section 4.7 provides results for the technique, and in Section 4.8 I discuss its limitations.

4.1 Optimal Image Boundaries

In this section, we discuss the technical background for boundary calculations of both pairwise and many-image panoramas.
4.1.1 Optimal Boundaries

Given a collection of n panorama images I_1, I_2, ..., I_n and the panorama P, the image boundary problem can be thought of as finding a discrete labeling L(p) ∈ {1, ..., n} for all panorama pixels p ∈ P which minimizes the transition between each image. If L(p) = k, this indicates that the pixel value for location p in the panorama comes from image I_k. This transition can be defined by an energy on the piecewise smoothness E_s(p,q) of the labeling of neighboring elements p, q ∈ N, where N is the set of all neighboring pixels. We would like to minimize the sum of this energy over all neighbors. For the panorama boundary problem, this energy is typically [4] defined as:

E(L) = Σ_{(p,q)∈N} E_s(p,q)

If minimizing the transition in pixel values:

E_s(p,q) = ||I_{L(p)}(p) − I_{L(q)}(p)|| + ||I_{L(p)}(q) − I_{L(q)}(q)||

or, if minimizing the transition in the gradient:

E_s(p,q) = ||∇I_{L(p)}(p) − ∇I_{L(q)}(p)|| + ||∇I_{L(p)}(q) − ∇I_{L(q)}(q)||

where L(p) and L(q) are the labelings of the two pixels. Notice that L(p) = L(q) implies E_s(p,q) = 0. Minimizing the change in pixel value works well in the context of poor registration or moving objects in the scene, while minimizing the gradient produces a nice input for techniques such as gradient-domain blending. In addition, techniques can use a linear combination of the two energies.

4.1.2 Min-Cut and Min-Path

When computing the optimal boundary between two images, the binary labeling is equivalent to computing a min-cut of a graph whose nodes are the pixels and whose arcs connect a pixel to its neighbors. The arc weights are then given by the energy function being minimized; see Figure 4.1 (a). If we consider a four-neighborhood and the dual graph of the planar min-cut graph, as we show in Figure 4.1 (b), we can see that there is a min-path on the dual graph equivalent to the min-cut solution. This has been shown to be true for all single source, single destination paths on planar graphs [72].
The approaches are equivalent in the sense that the final solution of a min-cut calculation defines the pixel labeling L(p), while the min-path solution defines the path that separates pixels of different labeling.

Figure 4.1: The four-neighborhood min-cut solution (a) with its dual min-path solution (b). The min-cut labeling is colored in red/blue and the min-path solution is highlighted in red.

4.1.3 Graph Cuts

This technique provides good solutions to pixel labeling problems for more than two images. The intricacies of the algorithm [26, 24, 92] are beyond the scope of this dissertation, but at a high level, Graph Cuts finds a labeling L which minimizes an energy function E'(L). This function consists of the term E_s(p,q) augmented with an energy E_d(p) associated with individual pixel locations:

E'(L) = Σ_p E_d(p) + Σ_{(p,q)∈N} E_s(p,q)

For the panorama boundary problem, this data energy E_d is typically [4] defined as being 0 if location p is a valid pixel of I_{L(p)}; otherwise, it has infinite energy.

4.2 Pairwise Seams and Seam Trees

Figure 4.2 illustrates two example pairwise image seams. In the simplest and most common case, Figure 4.2 (a), the boundary lines of the two images intersect at two points u and v connected by the seam s. The other simple, but more general, case in Figure 4.2 (b) shows two overlapping images where the intersection of their boundary lines results in an even number of intersection points. A set of seams can be built by connecting pairs of points with a consistent winding. The seams computed in this way define a complete partition of the space between the two images. In nonsimple cases, i.e., with co-linear boundary intervals, we can achieve the same result by choosing one representative point (possibly optimized to minimize an energy).
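The pixel-value smoothness term of Section 4.1.1 is inexpensive to evaluate per neighbor pair. A minimal grayscale sketch (the flat-array Image type is illustrative, and the text's norms over color values reduce to absolute differences here):

```cpp
#include <cmath>
#include <vector>

// Sketch of the pixel-value smoothness term for grayscale images: the cost
// of a label transition between neighbors p and q is how much the two
// source images disagree at both locations. Image is a row-major array;
// this is a minimal illustration, not the full color/norm machinery.
struct Image {
    int w;
    std::vector<double> v;
    double at(int x, int y) const { return v[y * w + x]; }
};

// E_s for a neighbor pair where p takes its value from image A and q from
// image B (i.e., L(p) != L(q)).
double smoothness(const Image& A, const Image& B,
                  int px, int py, int qx, int qy) {
    return std::abs(A.at(px, py) - B.at(px, py)) +
           std::abs(A.at(qx, qy) - B.at(qx, qy));
}
```

As the text notes, when both pixels take the same label the two lookups coincide and the energy is zero, so only label transitions along the seam contribute to E(L).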
Notice that the case in Figure 4.2 (b) produces more than a single set of valid seams, denoted by the purple and grey dashed lines. For clarity in the discussion, we will focus on the case in Figure 4.2 (a), since we can treat each seam of the case in Figure 4.2 (b) as independent. Assuming the dual-path energy representation in Figure 4.1 (b), a seam is a path that connects the intersection points (u,v). Computing the minimal path of a given energy function will give an optimal seam s, which can be computed efficiently with Dijkstra's algorithm [50]. With minimal additional overhead, we can compute both min-path trees T_u and T_v from u and v (single source, all paths). These trees provide all minimal seams which originate from either endpoint and define the dual seam tree of our technique. Given a point in the image overlap, we can find its minimal paths to u and v with a linear walk up the trees T_u and T_v, as shown in Figure 4.3. If this point is a user constraint, the union of the two minimal paths forms a new constrained optimal seam. Due to the simplicity of the lookup, this path computation is fast enough to achieve interactive rates even for large image overlaps. Note that two min-paths on the same energy function are guaranteed not to cross. However, since each dual seam tree is computed independently, the minimal paths from a constraint (to u and v) can cross.

Figure 4.2: (a) Given a simple overlap configuration, a seam can be thought of as a path s that connects pairs of boundary intersections u and v. (b) Even in a more complicated case, a valid seam configuration is still computable by taking pairs of intersections with a consistent winding about an image boundary. Note that there is an alternate configuration denoted in gray.
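The dual seam trees and the constrained-seam lookup can be sketched as follows. This is an illustrative reduction, not the Panorama Weaving implementation: the overlap's dual graph is modeled as a node-weighted four-neighborhood grid, Dijkstra's algorithm produces a distance and parent array (the seam tree) per endpoint, and the seam through a user constraint c is the union of the two linear walks up the trees rooted at u and v.

```cpp
#include <functional>
#include <limits>
#include <queue>
#include <utility>
#include <vector>

// A "seam tree": per-node min-path distance and parent, rooted at one
// endpoint. Path cost here is the sum of node costs, standing in for the
// dual graph's seam energies.
struct Tree { std::vector<double> dist; std::vector<int> parent; };

Tree dijkstra(int w, int h, const std::vector<double>& cost, int src) {
    const double INF = std::numeric_limits<double>::infinity();
    Tree t{std::vector<double>(w * h, INF), std::vector<int>(w * h, -1)};
    using QE = std::pair<double, int>;
    std::priority_queue<QE, std::vector<QE>, std::greater<QE>> pq;
    t.dist[src] = cost[src];
    pq.push({t.dist[src], src});
    while (!pq.empty()) {
        auto [d, n] = pq.top(); pq.pop();
        if (d > t.dist[n]) continue;            // stale queue entry
        int x = n % w, y = n / w;
        const int dx[4] = {1, -1, 0, 0}, dy[4] = {0, 0, 1, -1};
        for (int k = 0; k < 4; ++k) {
            int nx = x + dx[k], ny = y + dy[k];
            if (nx < 0 || ny < 0 || nx >= w || ny >= h) continue;
            int m = ny * w + nx;
            if (d + cost[m] < t.dist[m]) {
                t.dist[m] = d + cost[m];
                t.parent[m] = n;                // record the seam-tree edge
                pq.push({t.dist[m], m});
            }
        }
    }
    return t;
}

// Constrained seam through point c: walk up each tree and join the paths.
std::vector<int> seam_through(const Tree& tu, const Tree& tv, int c) {
    std::vector<int> s;
    for (int n = c; n != -1; n = tu.parent[n]) s.insert(s.begin(), n);  // u..c
    for (int n = tv.parent[c]; n != -1; n = tv.parent[n]) s.push_back(n);  // ..v
    return s;
}
```

Moving an endpoint corresponds to the same kind of walk up the partner endpoint's tree; recomputing the moved endpoint's own tree can then happen in the background, as the text describes.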
In particular, if the trees computed by Dijkstra's algorithm are dependent on the order in which the edges are calculated and there are multiple paths in an overlap that share the same energy, the paths on each tree to a user constraint can cross. To avoid this problem, we enforce an ordering based on the edge index and are guaranteed to achieve noncrossing solutions. Moving an endpoint is also a simple walk up its partner endpoint's seam tree. Therefore, a user can change an endpoint location at will, interactively. However, after the movement, the shifted endpoint's seam tree is no longer valid since it was based on a previous location. If future interactions are desired, the tree must be recomputed. This can be done as a background process after the users finish their initial interaction, without any loss of responsiveness in the system.

Figure 4.3: Given two min-path trees associated with a seam's endpoints (u,v), a new seam that passes through any point in the overlap (yellow) is a simple linear walk up each tree.

4.3 From Pairwise to Global Seams

To avoid incurring the cost associated with the solution of a global optimization, we build the panorama as a proper collection of pairwise seams. This is based on the observation, illustrated in Figure 4.4 (a), that the label assignment in a Graph Cut optimization mostly forms a simple collection of regions partitioned by pairwise image seams (denoted in the picture by the double-arrows). Our technique is designed with this property in mind and independently computes each seam constrained by the pairwise intersections called branching points. These are colored in red in Figure 4.4 (b).

Figure 4.4: (a) A solution to the panorama boundary problem can be considered as a network of pairwise boundaries between images. (b) Our adjacency mesh representation is designed with this property in mind.
Nodes correspond to panorama images, edges correspond to boundaries, and branching points (intersections in red) correspond to faces of the mesh. (c) Graph Cuts optimization can provide more complex pixel assignments where "islands" of pixels assigned to one image can be completely bounded by another image. Our approach simplifies the solution by removing such islands.

Note that the solution of a Graph Cuts optimization can provide more complex pixel assignments, where "islands" of pixels assigned to one image can be completely bounded by another image, as shown in Figure 4.4 (c). Obviously, our approach simplifies the solution by removing such islands and makes each region simply connected. We have checked how the energy optimized by our technique would change with this assumption (see Section 4.7). In all cases we have noticed that the energy of the seams produced by our system remains in the same order of magnitude as Graph Cuts, actually being reduced in all cases but one. Limitations of this assumption are detailed in Section 4.8.

4.3.1 The Dual Adjacency Mesh

To construct a seam network, our computations are driven by an abstract structure that we call the dual adjacency mesh. We draw the inspiration for our adjacency mesh representation from the traditional region adjacency graph used in computer vision, as well as the regions of difference (ROD) graphs of Uyttendaele et al. [165]. In Figure 4.4 (b and c), we have the adjacency graph for a global Graph Cuts computation. This graph can be considered the dual of the seam network: each node corresponds to an image in the panorama, whereas each edge describes an overlap relation between images. Edges are then orthogonal to the seam they represent. If we consider this graph as having the structure of a mesh, the duals of the panorama branching points are the faces of this mesh representation. In Figure 4.4 (b), the branching points are highlighted in red.
Seams which exit this mesh representation correspond to pairwise overlaps on the panorama boundary. These are illustrated in Figure 4.4 (b) with a single yellow endpoint. Connecting the branching points on adjacent faces in the mesh and/or the external endpoints gives a global seam network of pairwise image boundaries. In addition to the branching points in the seam network, the faces of the adjacency mesh are also an intuitive representation for overlap clusters. Specifically, clusters are groups of overlaps that share a common area that we call a multioverlap. These multioverlaps are areas where branching points must occur. The simplest multioverlap beyond the pairwise case consists of three overlaps and is represented by a triangle; see Figure 4.5 (a). A multioverlap with four pairwise overlaps can be represented by a quadrilateral, indicating that all four pairwise seams branch at a mutual point. An important property of this representation is that this quadrilateral can be split into two triangles, a classic subdivision; see Figure 4.5 (b).

Figure 4.5: (a) A three-overlap adjacency mesh representation. (b) A four-overlap initial quadrilateral adjacency mesh with its two valid mesh subdivisions. (c) A five-overlap pentagon adjacency mesh with an example subdivision.

Any valid (no edge crossing) subdivision of a polygon in this mesh will result in
Clusters (and their corresponding multioverlaps) are by definition nonoverlapping, maximal cliques of the full neighborhood graph. This computation is a classic clique problem and is known to be NP-complete [43]. For most panoramas, we have found the neighborhood graph is small enough that a brute-force search can be computed quickly. Moreover, previous work has shown that, given a graph with a polynomial bound on the number of maximal cliques, they can be found in polynomial time [141]. This is indeed the case for the neighborhood graph, which has maximal boxicity [140] dimension of two [36]. After the maximal cliques have been found, each n-gon face is extracted by finding the fully spanning cycle of clique vertices on the boundary in relation to the centroids of the images. The boundary edges of the n-gon face are marked as active, while the interior (intersecting) edges are marked as inactive, as shown in Figure 4.6. This adjacency mesh is used to drive the computation and combination of the pairwise boundaries as well as user manipulation. As we will illustrate in Section 4.5, it can be completely hidden from a user of the interactive system with intuitive editing concepts.

4.3.2 Branching Points and Intersection Resolution

Given a collection of seam trees that correspond to active edges in the adjacency mesh, we can now combine the seams into a global seam network. To do this, we need to compute the branching points which correspond to each adjacency mesh face, adjust the seam given a possible new endpoint, and resolve any invalid intersections that may arise (in order to maintain consistency).

Figure 4.6: Considering the full neighborhood graph of a panorama (a), where an edge exists if an overlap exists between a pair of images, an initial valid adjacency mesh (b) can be computed by finding all nonoverlapping, maximal cliques in the full graph, then activating and deactivating edges based on the boundary of each clique.

4.3.2.1 Branching points.
Assuming that for each pairwise seam there exist only two endpoints, for each multioverlap one endpoint must be adapted into a branching point. We refer to this endpoint as being inside in relation to the adjacency mesh face. The other seam endpoint is considered to be outside in relation to the multioverlap. These can be computed by finding the endpoints which are closest (in Euclidean distance) to the multioverlap associated with the face. Figure 4.7 (a) displays these endpoints in red and the multioverlap area with blue shading. Although it is possible to create a pathological overlap configuration where this distance metric fails, we have found that this strategy works well in practice. If we use the dual seam tree distances, i.e., the path distance values associated with the outside endpoints, we can compute a branching point which is optimal with respect to these paths, as illustrated in Figure 4.7 (b). This can be accomplished with a simple lookup of the distance values in the trees. We have found that minimizing the sum of the least squared error provides a nice low-energy solution. The new path associated with a moved endpoint is determined by a simple walk up the dual seam tree, see Figure 4.7 (c). Additionally, each seam tree associated with the branching point is recalculated given its location. As Figure 4.7 (d) illustrates, the branching point is always computed using the distance field of the initial endpoint location, even if this point had been previously adjusted by an adjacent face. In practice, we have found that the contribution of the root location to the overall structure of the seam tree is minimal towards the leaves of the tree. Since using the initial starting endpoints allows the branching points to be computed independently and in a single parallel pass, we have adopted this into our technique. The seams produced by this initial process in the four-overlap case are similar to the
sequential techniques introduced by Efros and Freeman [53] and Cohen et al. [42]. With the additional adjacency mesh, our technique is much more expressive in the possible seam configurations (especially allowing arbitrary-valence branching points). In addition, as we will illustrate next, for panoramas, and especially in an interactive setting, one cannot assume that a seam's path to a branching point respects the paths of other seams.

Figure 4.7: (a) Pairwise seam endpoints closest to a multioverlap (red) are considered a branching point. (b) This can be determined by finding a minimum point in the multioverlap with respect to min-path distance from the partner endpoints. (c) After the branching point is found, the new seams are computed by a linear walk up the partner endpoint's seam tree. (d) To enable parallel computation, each branching point is computed using the initial endpoint location (green) even if it was moved via another branching point calculation (red).

4.3.2.2 Removing invalid intersections.

Since each seam is computed using a separate energy function, seam-to-seam intersections beyond the branching points are possible. Small intersections of this type must be allowed to ensure solutions are computable in a four-neighborhood configuration. For instance, there would be no nonintersecting way to combine five seams into a single branching point. This allowance is defined by an ε-neighborhood around the branching point which can be set by the user. We have found that allowing an intersection neighborhood of one or two pixels gives good results with no visible artifacts from the intersection. The intersections in this neighborhood are collapsed to be collinear with the shortest of the intersecting paths. Intersections that occur outside of this ε-neighborhood must be resolved due to the inconsistent pixel labeling that they imply.
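The seam trees underlying these lookups are min-path (Dijkstra) trees over a 2D overlap energy. A minimal sketch of the two buffers involved, a distance field and a parent (tree) field, together with the tree walk used to recover a seam from any endpoint; names and the plain-list grid are illustrative, not the system's implementation:

```python
import heapq

def seam_tree(energy, root):
    """Compute a dual seam tree from a root endpoint over a 2D
    overlap-energy grid, using a 4-neighborhood. Returns the min-path
    distance field and the parent field encoding the tree."""
    h, w = len(energy), len(energy[0])
    dist = [[float("inf")] * w for _ in range(h)]
    parent = [[None] * w for _ in range(h)]
    r0, c0 = root
    dist[r0][c0] = 0.0
    pq = [(0.0, root)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if d > dist[r][c]:          # stale queue entry
            continue
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < h and 0 <= nc < w and d + energy[nr][nc] < dist[nr][nc]:
                dist[nr][nc] = d + energy[nr][nc]
                parent[nr][nc] = (r, c)
                heapq.heappush(pq, (dist[nr][nc], (nr, nc)))
    return dist, parent

def walk_up(parent, node):
    """Recover the min-path from any pixel back to the root: the 'simple
    walk up the dual seam tree' used when an endpoint moves."""
    path = [node]
    while parent[node[0]][node[1]] is not None:
        node = parent[node[0]][node[1]]
        path.append(node)
    return path
```

Given two such trees rooted at the partner endpoints of a face, the branching point can then be chosen as the multioverlap pixel minimizing the combined tree distances, with only lookups into `dist`.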
Figure 4.8 (a, c) shows an example of intersections in a four-way image overlap. The areas highlighted in gray have conflicting image assignments. Enforcing no intersections at the time of the seam computation would complicate parallelism and be overly expensive. This corresponds to a k-way planar escape problem with multiple energies (where k is the number of seams incoming to the branching point), for which variants have been shown to be NP-complete [179]. It could also lead to possibly unstable interactions, since small movements may lead to extremely large changes in the overall seam paths. The simplest solution is to choose one assignment per conflict area. This is equivalent to collapsing the area and making the two seams collinear at points where they "cross." Each collapse introduces a new image boundary for which the wrong energy function has been minimized, Figure 4.8 (a, b). In our technique, we perform a more sophisticated collapse.

Figure 4.8: (a) Pairwise seams may produce invalid intersections or crossings in a multioverlap, which leads to an inconsistent labeling of the domain. The gray area on the top can be given the labels A or B, and on the bottom either C or D. (b) Choosing a label is akin to collapsing one seam onto the other. This leads to new image boundaries which were based on energy functions that do not correlate to the new boundary. The top collapse results in a B-C boundary using an A-B seam (C-D seam for the bottom). (c and d) Our technique performs a better collapse where each intersection point is connected to the branching point via a minimal path that corresponds to the proper boundary (B-C). One can think of this as a virtual addition of a new adjacency mesh edge (B-C) at the time of resolution to account for the new boundary.
For a given pair of intersecting seams, multiple intersections can be resolved by taking into account only the furthest intersection from the branching point in terms of distance along the seam. Given that each seam divides the domain, this intersection can only occur between seams that divide a common image. If presented with a seam-to-seam intersection, we can easily compute the new boundary that is introduced during the collapse. This is simply a resolution seam on an overlap between the images which is not common to the intersecting seams. The resolution seams connect the intersection points with the branching points. Often, if multiple resolution seams share the same overlap, as in Figure 4.8, only one min-path calculation from the branching point is needed to fill in all min-paths. The new resolution seams are constrained by the other seams in the cluster in order to not introduce new intersections with the new paths. The constraints are also given the ability to gradually increase the allowed intersection neighborhood beyond the user-defined ε-neighborhood in the event that no solution path exists. The crossings and intersections are collapsed in this neighborhood. Due to the rarity of this occurrence, the routine adds minimal overhead to the overall technique in practice. Order matters in both finding the intersections and computing the resolution seams and therefore must be consistent. We have found that ordering based on the overlap size works well. Resolution seams and expanded ε-neighborhoods are considered to be temporary states. Figure 4.8 (c, d) shows an example of an intersection resolution. This technique robustly handles possible seam intersections at the branching points. Most importantly, since we are only adjusting the seam from the intersection point on, we can resolve each adjacency mesh face in parallel.
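Locating the single intersection that triggers a resolution can be sketched as follows. Seams are taken as pixel lists with the shared branching point at index 0, the index along the seam serves as the distance measure, and the function and its parameters are illustrative rather than the system's code:

```python
def furthest_intersection(seam_a, seam_b, eps=2):
    """Among pixels shared by two seams, return the one furthest from the
    branching point along seam_a, ignoring crossings inside the small
    eps-neighborhood of the branching point. Returns None when the two
    seams are already consistent."""
    index_b = {p: j for j, p in enumerate(seam_b)}
    hits = [(i, p) for i, p in enumerate(seam_a)
            if p in index_b and min(i, index_b[p]) > eps]
    return max(hits)[1] if hits else None
```

A resolution seam would then be computed (via a min-path calculation on the appropriate overlap) from this pixel back to the branching point; any intersections closer along the seam are subsumed by the same collapse.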
In addition, since the seam is not changed outside of the multioverlap within a cluster, the resolution is local and will not cascade to a global resolution. However, it is possible for a user to introduce unresolvable intersections through added constraints, as we will discuss in Section 4.5.

4.4 Out-of-Core Seam Processing

While designed to be light on resources, the technique outlined in the previous sections assumes all images can be stored in-core. This assumption holds true for many panoramas, but not for images gigapixels in size. In this section, I will detail how the original technique can be modified to handle large panoramas. As illustrated in Figure 4.9, the initial seam solution for the Panorama Weaving technique can be thought of as occurring in three phases. First, for each adjacency mesh face, the corresponding branching point must be computed. After the branching point is found, any seams which occur on the panorama boundary can be computed. These correspond to edges in our mesh that belong to only one face. See Figure 4.9 (a). As mentioned previously, the branching point computations for each face, given our simple pairwise seam assumption, are completely independent and can be computed in parallel. As a second phase, the seams for the shared edges can be found by connecting the newly computed branching points. Each shared edge can be processed independently in parallel. See Figure 4.9 (b). Finally, once all edges for a face are computed, the seam intersections for the given face can be resolved. This resolution is also independent and parallel for each adjacency mesh face. See Figure 4.9 (c). Note that the phases need not be entirely distinct since they can be interleaved. If we bundle the branching point and shared seam calculations into a single operation, the logic of our out-of-core computation need only deal with the adjacency mesh faces. This can drastically reduce the system complexity even when working in a multicore environment.
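The interleaving of the three phases can be sketched as a small serial scheduler over the adjacency mesh faces. The data layout and names here are hypothetical; the real system bundles these operations per face and runs them in parallel:

```python
def schedule(faces, face_edges, shared_edges):
    """Interleaved per-face pipeline: branching point -> seams -> resolution.
    faces: face ids in traversal order (e.g., row-major).
    face_edges: face id -> set of its edge ids.
    shared_edges: edge id -> (face_a, face_b) for edges on two faces."""
    bp_done, seam_done, log = set(), set(), []

    def try_resolve(f):
        # a face can resolve its intersections once all its edges have seams
        if face_edges[f] <= seam_done and ("resolve", f) not in log:
            log.append(("resolve", f))

    for f in faces:
        log.append(("branching_point", f))
        bp_done.add(f)
        # unshared (boundary) edges need only this face's branching point
        for e in face_edges[f]:
            if e not in shared_edges and e not in seam_done:
                seam_done.add(e)
                log.append(("seam", e))
        # a shared edge becomes computable once both branching points exist
        for e, (a, b) in shared_edges.items():
            if f in (a, b) and {a, b} <= bp_done and e not in seam_done:
                seam_done.add(e)
                log.append(("seam", e))
                try_resolve(a)
                try_resolve(b)
        try_resolve(f)
    return log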
Figure 4.9: The phases of out-of-core seam computation. (a) First, branching points are computed. The seams for all unshared edges can also be computed during this pass. (b) Second, once the corresponding branching points are computed, all shared edges can be computed with a single min-path calculation. (c) Third, once all the seams for the edges of a given face have been computed, the intersections can be resolved. Note, the three passes do not necessarily need to be three separate phases since they can be interleaved when the proper input data are ready.

By doing this, we also cast our problem into the simple problem of graph traversal for the two remaining phases of our seam processing system. For our traversal strategy, we chose a simple row-major traversal of the mesh faces.

4.4.1 Branching Point and Shared Edge Phase

The design goal for this phase is to keep the memory requirement low and predictable. The reasoning for this approach is twofold. First, low memory requirements enable the portability of our out-of-core system to a wide variety of systems. Such a technique has the ability to run on systems from laptops to HPC computers. Second, when moving to a multicore implementation, a low, predictable memory requirement per face makes the logic and scheduling of the many threads operating on the adjacency mesh faces very simple. Given the resources of the system, we can predict how many faces the system can compute in parallel. We achieve our low memory footprint by computing the branching point for a given face in a "round-robin" fashion, as shown in Figure 4.10. For each edge, the images that correspond to its endpoints are loaded and the overlap energy is calculated. Next, the seam tree needed for the branching point is calculated. After this calculation, the overlap energy is no longer needed and is therefore ejected from memory.
The calculation then moves on to the next edge that shares an image with the previous computation. The rest of the branching point calculation proceeds in the same way until all seam trees needed to compute the location of the new branching point have been computed. As mentioned above, we couple the shared edge phase into this phase of calculation. This is done with a flag per face to indicate whether the branching point has been computed. After the branching point has been computed for a face, the flag is set and the process checks to see if the flag is also set for the other faces on the face's shared edges. If the face calculation is the second to set this flag, it knows it can fill in the new seam for the shared edge. For the multicore implementation, this check is made atomic with locks to ensure there is always a clear first- and second-face calculation when setting flags for the endpoints of a shared seam. Note that the memory overhead for this computation is equal to the space required to store two images, one buffer for the overlap energy, and one for the seam trees for the edges of the mesh face. This overhead is quite low and, given an average image size and overlap percentage, very predictable. The geometry of the seams is stored in-core for the final phase of the calculation.

Figure 4.10: The low-memory branching point calculation for our out-of-core seam creation technique. (a) Given a face for which a branching point needs to be computed, (b) the computation proceeds "round-robin" on the edges of the face to compute the needed seam trees. The images that correspond to the edge endpoints and the overlap energy are only needed during the seam tree calculation for a given edge on the face. Therefore, by loading and unloading these data during the "round-robin" computation, the memory overhead for the branching point computation is the cost of storing two images, one energy overlap buffer, and the seam trees for the given face.
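The atomic second-to-finish check on a shared edge can be sketched with a lock-guarded counter (an illustrative stand-in for the per-face flag protocol described above):

```python
import threading

class SharedEdge:
    """Tracks the two incident faces of a shared adjacency mesh edge.
    Exactly one caller, the second face to finish its branching point,
    receives True and therefore fills in the shared seam."""

    def __init__(self):
        self._lock = threading.Lock()
        self._ready = 0

    def face_done(self):
        with self._lock:
            self._ready += 1
            return self._ready == 2  # True for exactly one of the two faces
```

Because the check and increment happen under one lock, there is always an unambiguous first and second face even when both finish nearly simultaneously.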
4.4.2 Intersection Resolution Phase

When all seams for a face have been computed, the seam intersections that correspond to the face can be resolved. The intersection test is computed on the seam geometry, which is already in memory. If an intersection occurs within a threshold distance along the seam from the branching point, it is considered small and collapsed. This operation is equivalent to the intersection collapse from the in-core Panorama Weaving technique. For larger intersections, a resolution seam must be computed and the two images that correspond to the overlap of the resolution seam need to be loaded before computation. See Figure 4.11 for an example.

4.5 Weaving Interactive System

In this section, we outline how to create a light and fast interactive system using the Panorama Weaving technique. A simplified diagram of the operation of the system is given in Figure 4.12. In Section 4.7, we provide examples of this application editing a variety of panoramas.

4.5.1 System Specifics

In this subsection, I will detail specifics for the prototype system.

4.5.1.1 Input.

The system inputs for our prototype are flat, registered raster images with no geometry save the image offset. Any input can be converted into this format and therefore it is the most general. The initial image intersection computation is computed using the rasterized boundaries. Due to aliasing, there may be many intersections found. If the intersections are contiguous, they are treated as the same intersection and a representative point is chosen. In practice, we have found this choice has little effect on the seam outside a small neighborhood (less than 10 pixels from the intersection). Therefore, the system picks the minimal point in this group in terms of the energy. Pairs of intersections that are very close in terms of Euclidean distance (less than 20 pixels) are considered to be infinitesimally small seams and are ignored.
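The boundary-intersection preprocessing above can be sketched as follows. Intersections are pixels, the energy is a dict, and the grouping assumes that sorting the points recovers boundary order, which is a simplification of the rasterized-boundary traversal:

```python
import math

def representative_intersections(points, energy, min_sep=20.0):
    """Merge contiguous intersection pixels into runs, keep one
    minimum-energy representative per run, then drop pairs of
    representatives closer than min_sep pixels (they would bound an
    infinitesimally small seam)."""
    groups, current = [], []
    for p in sorted(points):
        # 8-connected continuation of the current run
        if current and max(abs(p[0] - current[-1][0]),
                           abs(p[1] - current[-1][1])) <= 1:
            current.append(p)
        else:
            if current:
                groups.append(current)
            current = [p]
    if current:
        groups.append(current)
    # one minimum-energy representative per contiguous run
    reps = [min(g, key=lambda q: energy[q]) for g in groups]
    # keep only representatives with no close partner
    return [p for i, p in enumerate(reps)
            if all(math.dist(p, q) >= min_sep
                   for j, q in enumerate(reps) if i != j)]
```

The surviving representatives become the pairwise seam endpoints fed into the rest of the pipeline.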
The user is also allowed to dictate the energy function for the entire panorama, per image, or per overlap. This can be done as an initial input parameter or within the interactive program itself. Specifically, our prototype allows the user to switch between pixel-difference and gradient-difference energies.

Figure 4.11: For intersections that require a resolution seam, the two images which correspond to the overlap needed for the seam must be loaded. In the figure above, these images are the ones that correspond to the endpoints of the diagonal, resolution adjacency mesh edge.

Figure 4.12: Overview of Panorama Weaving. The initial computation is given by steps one through four, after which the solution is ready and presented to the user. Interactions, steps five and six, use the tree update in step four as a background process. Additionally, step six updates the dual adjacency mesh.

4.5.1.2 Initial parallel computation.

Parallel computation is accomplished using a thread pool equal to the number of available cores. The initial dual seam tree and branching point computations can be run trivially in parallel. In the presence of two adjacent faces in the adjacency mesh, a mutex flag must be used on their shared seam since both faces may attempt to write these data simultaneously. As a final phase, each adjacency mesh face resolves intersections in parallel. In order to compute these resolutions in parallel, we split a seam's data into three separate structures for the start, middle, and end of the seam. The middle structure contains the original seam before intersection resolution and its extent is maintained by pointers. The structure's start and end are updated with the intersection resolution seams by the faces associated with their respective branching points. Either vector can be associated with only one face; therefore, we run no risk of multiple threads writing to the same location.
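The three-part seam structure can be sketched as a small class (names hypothetical); each end is owned by exactly one face, so two faces can write their resolution seams concurrently without touching shared memory:

```python
class EditableSeam:
    """Seam split into start / middle / end for parallel resolution.
    The middle holds the original seam; lo/hi index its live extent."""

    def __init__(self, pixels):
        self.start, self.middle, self.end = [], list(pixels), []
        self.lo, self.hi = 0, len(pixels)  # pointers into middle

    def resolve_front(self, new_prefix, cut):
        # the face owning the front branching point replaces middle[:cut]
        self.start, self.lo = list(new_prefix), cut

    def resolve_back(self, new_suffix, cut):
        # the face owning the back branching point replaces middle[cut:]
        self.end, self.hi = list(new_suffix), cut

    def path(self):
        return self.start + self.middle[self.lo:self.hi] + self.end
```

Since `resolve_front` and `resolve_back` write disjoint fields, the two faces' resolutions never race, matching the start/middle/end split described above.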
Each seam tree is stored as two buffers: one for the node distances and one which encodes the tree itself. The tree buffer encodes the tree with each pixel referencing its parent. This can be done in 2 bits (rounded to 1 byte for speed) for a four-pixel neighborhood. Therefore, for float distances we need only 5 bytes per pixel to store a seam tree.

4.5.1.3 Seam network import.

It is possible to import a seam network computed with an alternative technique (such as Graph Cuts, see Figure 4.13), and edit it with our system. Our import procedure works as follows. Given a labeling of the pixels of the panorama, the algorithm first extracts the boundaries of the regions. Then, branching points (boundary intersections) are extracted. Next, each boundary segment (bounded by two branching points) is identified as a seam and the connected components of the resulting seam network are identified. To be compatible with our framework, only seam networks made of a single connected component can be imported. Thus, we only consider the biggest connected component of the network and small islands are discarded. Finally, our seam data structures are fed with the seam network and the adjacency mesh is updated if necessary. Since the editing operations do not cascade globally, a user can edit a problem area locally and maintain much of the original solution if desired.

Figure 4.13: Importing a seam network from another algorithm. The user is allowed to import the result generated by Graph Cuts (a) and adjust the seam between the green and purple regions to unmask a moving person (b). Note that this edit has only a local effect, and that the rest of the imported network is unaltered.

4.5.2 Interactions

In this subsection, I will detail some possible interactions that can be accomplished with our system.

4.5.2.1 Seam bending.

The addition of a constraint and its movement is called a bending interaction in our system and operates as outlined in Section 4.2.
A user is allowed to add a constraint to a seam and is instantly provided the optimal seam which must pass through it. The constraint can be moved interactively to explore the seam solution space. Intersections in any adjacency mesh face containing the corresponding edge are resolved, which can be done in parallel. Most importantly, given how the technique resolves intersections, seams cannot change beyond the multioverlap area in these faces. Therefore, the seam resolution does not cascade globally.

4.5.2.2 Seam splitting.

Adding more than one constraint is akin to splitting the seam into segments. After a bending interaction, the seam trees are split into four, where there were previously two. Two of the trees (corresponding to the endpoints) are inherited by the new seams. The two trees associated with the new constraint are identical, therefore only one tree computation is necessary. Splitting occurs in our prototype when a user releases the mouse after a bending interaction. Editing is locked for this seam until the corresponding trees are resolved. This is a quick process and it is very rare for a user to be fast enough to beat the computation.

4.5.2.3 Branching point movement.

The user is given the ability to grab and move the branching point associated with a selected face of the adjacency graph. As I have detailed in Section 4.2, a movement of an endpoint is a simple lookup on its partner's dual seam tree. As the user moves a branching point, intersections for both the selected face and all adjacent faces are resolved. Given that the intersection resolution does not adjust seam geometry beyond the multioverlap, we need only to look at this one-face neighborhood and not globally. To enable further interaction, the seam trees associated with this endpoint need to be recalculated after movement.
When the user releases the mouse, the seam tree data for all the endpoints associated with the active seams of the face are recomputed as a background process in parallel. As with splitting, editing is locked for each seam until its seam tree update completes.

4.5.2.4 Branching point splitting and merging.

The user can add and remove additional panorama seams by splitting and merging branching points. Addition and removal of seams is equivalent to subdividing and merging faces of the adjacency mesh. Improper requests for a subdivision or merge correspond to a nonvalid seam network and are therefore restricted. If splitting is possible for a selected branching point, the user can iterate over and choose from all possible subdivisions of the corresponding face. To maintain consistent seams, merging is only possible between branching points resulting from a previous split. In other words, merging faces associated with different initial adjacency mesh faces would lead to an invalid seam configuration since the corresponding images do not overlap. If a seam is added, its dual seam tree is computed. In addition, the other active seams associated with this face will need to be updated, much like a branching point movement.

4.5.2.5 Improper user interaction.

Given the editing freedom allowed to users, they may move a seam into an inconsistent configuration. Figure 4.14 illustrates some examples. Rather than constrain the user, the prototype system either tries to resolve the improper seams or, if that is not possible, gives the user visual feedback indicating a problem configuration. For example, if the user introduces a seam intersection, our intersection routine is launched to resolve it, see Figure 4.14 (a). Crossing branching points, Figure 4.14 (b), can be resolved similarly. Figure 4.14 (c) illustrates a configuration with no resolution. In this instance, the crossing edges are collapsed and the user is given a visual hint that there is a problem.
4.6 Scalable Seam Interactions

Given the locality of the interactions in the Panorama Weaving technique, extending the interactions to gigapixel-sized images does not require a large change to the base interactions. Our interaction scheme is based on a standard large-image viewer and on-the-fly loading and computation of the data needed for seam editing. As Figure 4.15 shows, due to the technique's simple pairwise seam assumption, the data which need to be loaded and computed are local given an interaction. Our system works as follows: we leverage the large-image, out-of-core ViSUS system outlined in Chapter 3 to provide exploration of a flattened gigapixel image created from the seams of our initial out-of-core seam computation. If a user wishes to have a seam or branching point interaction, she/he can initiate this edit by selecting the seam area she/he wishes to edit. The action is determined similarly to the in-core seam system in that the overlap bounding boxes are tested against the user-selected area. If all the overlap bounding boxes for a given face are selected, then it is assumed the user wishes a branching point manipulation. Otherwise, it is assumed the user wishes a seam bending interaction and a single overlap from the selection is chosen. A user can cycle through the single-selection options if more than one is present for a given input. A brute-force bounding box intersection test has the possibility of being overly expensive for panoramas which contain thousands of images and overlaps; therefore, we have designed two more scalable options for the selection.

Figure 4.14: Improper user constraints are resolved or, if resolution is not possible, given visual feedback. (a) Resolution of an intersection caused by a user moving a constraint. (b) Resolution of an intersection caused by a user moving a branching point. (c) A nonresolvable case where a user is just provided a visual cue of a problem.

Figure 4.15: Given the inherent locality of the seam editing interactions, only a very small subset of the adjacency mesh needs to be considered. (a) For operations on an adjacency mesh face (i.e., branching point operations), only the images and overlaps of the corresponding face and its one-face neighborhood need to be loaded and computed. (b) For edge operations (i.e., bending), we need consider only the faces that share the edge.

As Figure 4.16 (a) illustrates, a bounding hierarchy of overlaps can be built by merging the bounding boxes of pairs of neighboring adjacency mesh faces. During a user selection, the hierarchy is traversed to determine which faces need to be considered for selection. Once this is determined, the selected face's overlaps are tested for user selection. This provides a selection runtime that is logarithmic in the number of faces in the panorama. Alternatively, as Figure 4.16 (b) shows, if there is a pixel-to-image map, this can be leveraged to determine the neighborhood of overlaps that need to be tested for selection. This neighborhood consists of the edges of faces that share the node that corresponds to the pixel-map image.

Figure 4.16: When a user selects an area of a panorama to edit, the system must determine which overlaps intersect with the selected area. This can be accomplished with a (a) bounding hierarchy of the overlaps. During selection this hierarchy is traversed to isolate the proper overlaps for the selection. This gives a logarithmic lookup with respect to the number of adjacency mesh faces in the panorama. Alternatively, (b) if a pixel-to-image labeling is provided, this can be used to isolate a fixed neighborhood that needs to be tested for overlap intersection. This labeling is commonly computed if the panorama is to be fed into a color correction routine after seam computation.

After the user finishes their interaction, the
seams are saved and the loaded images are masked and saved to the flattened image.

4.7 Results

In this section, we detail the results for both the creation and editing phases of our system. In-core results were performed on a 3.07 GHz Intel i7 four-core processor (with Hyperthreading) with 24 gigabytes of memory. The large system memory was required in order to run the Graph Cuts implementation, as is, on all datasets. Panorama Weaving performed well for all datasets on test systems including laptops with only 4 gigabytes of memory. Out-of-core results were run on a 2.67 GHz Xeon X5550 (eight-core) system with 24 gigabytes of memory.

4.7.1 Panorama Creation

This subsection details the results for in-core and out-of-core initial seam creation.

4.7.1.1 In-core results.

We compare the panorama creation phase of our system to the implementation provided by the authors of the Graph Cuts technique [26, 24, 92], which many consider the exemplary implementation. Both the α-expansion and α-β swap algorithms were run until convergence to guarantee minimal errors, and the best time is reported. Since this implementation has various ways of passing data and smoothness terms, we tested all and report the fastest, which is precomputed arrays for the costs with a function pointer acting as a lookup. Not having an equally well-established in-core parallel implementation of Graph Cuts, we use a serial version of our algorithm for comparison. Timings for Graph Cuts are based on the implementation's reported runtime. Due to the parallel option of Panorama Weaving, its timings are based on wall-clock time. Datasets which contain more than simple pairwise overlaps were run at full resolution, and the running times and energy comparisons are provided in Table 4.1. Our technique produces lower energy seams for all but one example, Fall-5way, and even in this case the techniques have comparable energy.
In terms of performance, serial Panorama Weaving computes its solution faster than Graph Cuts for all datasets (at the same resolution). As the Graph Cuts results show, a hierarchical approach would be necessary to achieve similar performance by trading quality for speed. Parallel Panorama Weaving further reduces the runtime down to mere seconds for all datasets at full resolution. On average, we see that the scaling between Panorama Weaving's serial and parallel implementations is about a five-times speedup. This is in sync with the number of physical cores in the test system. Hyperthreading is effective when data access is a main bottleneck. A speedup corresponding to the number of physical cores should be expected when an algorithm is compute-bound, which is true for Panorama Weaving. Therefore our implementation is scaling quite well on our test system.

Table 4.1: Performance results comparing Panorama Weaving to Graph Cuts for our test datasets that contain more than simple pairwise overlaps. Panorama Weaving run serially (PW Serial) computes solutions quickly. When run in parallel, runtimes are reduced to just a few seconds. The energy ratio (E. Ratio) between the final seam energy produced by Panorama Weaving and Graph Cuts (PW Energy / GC Energy) is shown. For all but one dataset (Fall-5way), Panorama Weaving produces a lower energy result; it is comparable otherwise. Panorama image sizes are reported in megapixels (MP); times are in seconds.

Dataset     Size (MP)  Images  PW Parallel (s)  PW Serial (s)  GC Serial (s)  E. Ratio
Crosswalk   16.7       4       1.3              7.2            369.6          0.995
Fall-5way   30.0       5       2.4              12.1           735.4          1.220
Skating     44.7       6       3.2              16.8           734.0          0.851
Lake        9.4        22      0.5              2.9            337.2          0.503
Graffiti    36.6       10      4.3              19.6           983.7          0.707
Nation      49.1       9       4.6              23.2           1168.7         0.800

4.7.1.2 Out-of-core results.

To test the performance of our out-of-core implementation, we computed the seams for two large panoramas on our eight-core test system.
The Fall Salt Lake City panorama consists of 611 overlapping images and is 126,826 x 29,633 (3.27 gigapixel) when combined. The final image that results from our seam computation is provided in Figure 4.17 (b). Additionally, in Table 4.2 we provide a strong scaling test of our implementation for this panorama as we vary the core count from one to eight. As this table illustrates, our implementation shows very good efficiency up to the number of cores of our test system. At eight cores, our efficiency takes a slight dip due to our implementation using a dedicated thread to schedule the face computation for each phase of the out-of-core Panorama Weaving technique. On a single core, our system can produce a seam solution in only 68.5 minutes. As I have discussed in Section 2.1, Hierarchical Graph Cuts [2] does not scale beyond two to three levels of the hierarchy. At three levels, the coarse version of this panorama is still approximately 100 megapixels in size with 611 labels and could not be run on our test system. Therefore, to provide context for our running times, we compare our technique to the predicted runtime of a similar technique [94] which relies on a moving window of a Graph Cuts solution. Figure 4.17 (a) provides an example of one of these windows. In our tests, a Graph Cuts solution for this window took 3003.86 seconds to converge. Even more problematic is that the first iteration for this window alone took a very long time to compute: 612.99 seconds. Therefore, if this window is a good representation for this panorama dataset (which our testing indicates that it is), computing all 495 windows in this dataset would take days (85.15 hours) at best, and weeks (17.2 days) in the worst case. Our implementation can compute a full resolution solution in a little over an hour on a single core and in only a few minutes when run on eight cores. Also of note is the small memory use inherent in our scheme. At any given time, we need only hold the cost of computing the seams for one face per core. Therefore the per-core memory footprint is only the cost of holding two images, an energy buffer, and the seam trees for a given face in memory. Even with floating point precision and eight cores, for this dataset our technique uses at most 1.4 gigabytes of memory.

Figure 4.17: Fall Salt Lake City, 126,826 x 29,633, 3.27 gigapixel, 611 images. (a) An example window computed with the out-of-core Graph Cuts technique introduced in Kopf et al. [94]. This single window took 50 minutes for Graph Cuts to converge, with the initial iteration requiring 10.2 minutes. Since the full dataset contains 495 similar windows, using the windowed technique would take days (85.15 hours) at best, and weeks (17.2 days) in the worst case. (b) The full resolution Panorama Weaving solution was computed in 68.4 minutes on a single core and 9.5 minutes on eight cores. Our single core implementation required a peak memory footprint of only 290 megabytes, while using eight cores had a peak memory footprint of only 1.4 gigabytes.

Table 4.2: Strong scaling results for the Fall Salt Lake City panorama, 126,826 x 29,633, 3.27 gigapixel, 611 images. Our out-of-core Panorama Weaving technique scales very well in terms of efficiency compared to ideal scaling up to the physical cores of our test system (eight cores). At eight cores our technique loses a slight amount of efficiency due to our implementation having a dedicated thread to handle the seam scheduling. Using the full eight cores to process this panorama provides a full resolution seam solution in just 9.5 minutes. The system is extremely light on memory and uses at most 1.4 gigabytes.

Cores  Time(s)  Ideal(s)  Efficiency  Max Mem.
1      4109     NA        NA          290 MB
2      2079     2054.5    98.8%       443 MB
3      1403     1369.7    97.6%       599 MB
4      1049     1027.3    97.9%       791 MB
5      840      821.8     97.8%       881 MB
6      706      684.8     97.0%       1.1 GB
7      601      587.0     97.7%       1.2 GB
8      573      513.6     89.6%       1.4 GB
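The ideal times and efficiency percentages in Table 4.2 derive directly from the single-core time: ideal time at p cores is T1/p, and efficiency is ideal over measured. A quick check (a sketch for verification, not part of the thesis code):

```python
# Strong-scaling check for Table 4.2 (Fall Salt Lake City):
# ideal time at p cores is T1 / p; efficiency is ideal / measured.
t1 = 4109.0  # measured single-core time, seconds

def efficiency(cores, measured):
    return (t1 / cores) / measured

for cores, measured in [(2, 2079), (4, 1049), (8, 573)]:
    print(f"{cores} cores: ideal {t1 / cores:.1f}s, "
          f"efficiency {efficiency(cores, measured):.1%}")
```

Running this reproduces the table's ideal-time and efficiency columns, including the drop to roughly 90% at eight cores caused by the dedicated scheduling thread.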
To further test the scalability of our system, our implementation was run on an even larger image. The Lake Louise panorama consists of 1512 images and is 187,069 x 40,202 (7.52 gigapixel) when combined. In Table 4.3 we provide a strong scaling test of our implementation for this dataset. Like the previous example, our implementation shows very good efficiency for one to eight cores on our test system. The system is very light on memory resources and needs only 2.0 gigabytes of memory to operate on all eight cores. When using all cores, our implementation can provide a seam solution in only 37.7 minutes. The final image resulting from our seam calculation is provided in Figure 4.18.

Table 4.3: Strong scaling results for the Lake Louise panorama, 187,069 x 40,202, 7.52 gigapixel, 1512 images. Like the smaller Fall Salt Lake City panorama, our implementation shows very good efficiency up to the physical number of cores on our test system. Using the full eight cores for the full resolution seam solution for this panorama requires 37.7 minutes of compute time and at most 2.0 gigabytes of memory.

Cores  Time(s)  Ideal(s)  Efficiency  Max Mem.
1      16279    NA        NA          382 MB
2      8263     8139.5    98.51%      627 MB
3      5516     5426.3    98.37%      877 MB
4      4132     4069.8    98.49%      1.1 GB
5      3306     3255.8    98.48%      1.4 GB
6      2778     2713.2    97.67%      1.6 GB
7      2383     2325.6    97.59%      1.8 GB
8      2259     2034.9    90.08%      2.0 GB

Figure 4.18: Lake Louise, 187,069 x 40,202, 7.52 gigapixel, 1512 images. The Panorama Weaving results for the Lake Louise panorama. Our out-of-core seam computation produces this full resolution solution in as little as 37.7 minutes while requiring at most only 2.0 gigabytes of memory. Panorama courtesy of City Escapes Nature Photography.

4.7.2 Panorama Editing

We provide additional results for the interactive portion of our technique by editing a variety of panoramas. Images which are color-corrected were processed using gradient domain blending [133, 103].

4.7.2.1 Editing bad seams. In Figure 4.19, the Nation dataset is a highly dynamic scene of a busy intersection with initial seams that pass through moving cars and people, see Figure 4.19 (d). In addition, there are various registration artifacts, see Figure 4.19 (e). Before our technique, a user would consider this panorama unsalvageable or be required to manually edit the boundary masks pixel-by-pixel. In just a few minutes using our system, a user can produce an appealing panorama by adjusting seams to account for the moving objects and by pulling registration artifacts into areas where they are less noticeable. Figure 4.20 (a, b, and c) shows the initial seam configuration for the Skating dataset with two problem areas. The initial seams pass through people who change position on the ice and produce either an amalgamation of two positions of a single person or a partial person. As shown in the companion video, repairing these seams takes only a few seconds of interaction; see Figure 4.20 (d and e) for the edited results. Figure 4.21 illustrates how a user can correct registration artifacts that appear on the moon's horizon in the Apollo-Armstrong dataset.

Figure 4.19: Panorama Weaving on a challenging dataset (Nation, 12848 x 3821, nine images) with objects moving during acquisition, registration issues, and varying exposure. Our initial automatic solution (b) was computed in 4.6 seconds at full resolution for a result with lower seam energy than Graph Cuts. Additionally, we present a system for the interactive user exploration of the seam solution space (c), easily enabling: (d) the resolution of moving objects; (e) the hiding of registration artifacts (split pole) in low contrast areas (scooter); or (f) the fixing of semantic notions for which automatic decisions can be unsatisfactory (stoplight colors are inconsistent after the automatic solve). The user editing session took only a few minutes. (a) The final, color-corrected panorama.

Figure 4.20: Repairing non-ideal seams may give multiple valid seam configurations. (a) The initial seam configuration for the Skating dataset (9400 x 4752, six images) based on gradient energy. (b and c) Its two major problem areas. (d and e) Using our technique a user can repair the panorama, and also has the choice of two valid seam configurations. Panorama courtesy of City Escapes Nature Photography.

4.7.2.2 Multiple valid seams. Along with repairing non-ideal seams, Figures 4.19 and 4.20 (Nation and Skating) are also examples of a user choosing between multiple valid seam configurations. In Figure 4.19 (f), the initial seam calculation for the Nation dataset produces an intersection with four red stoplights, an inconsistent configuration. With our system, a user can turn two stoplights green, creating a more realistic setting. Figure 4.20 (bottom) shows two valid seam configurations that the user can choose from while fixing the Skating dataset. Each was repaired with a simple bend of the panorama seam. In Figure 4.22, we provide an example of how a user can fix registration artifacts of the Graffiti dataset while tuning the seam location for improved results in the final color-correction. For gradient-domain blending, smooth, low-gradient areas provide the best results; therefore the user placed the seams in the smooth wall locations, Figure 4.22 (c). This editing session required just 2 minutes of interaction. Finally, in Figure 4.23 we show the color-corrected edits of the originally optimal, but non-visually-pleasing, seams of Figure 1.7 for the two datasets Canoe and Lake Path. Both interactions required only a few seconds of user input. Figure 4.24 is a lake vista with multiple dynamic objects moving in the scene during acquisition. In all, there are six independent areas in the panorama where a canoe, or groups of canoes, change positions in overlap areas. Figure 4.24 shows two examples of alternative edits.
A user editing with our technique has the choice of 64 valid seam combinations of canoes. In Figure 4.25, we show a user iterating through valid splitting options of a valence-five branching point of the Fall-5way dataset. In this way, we allow users the freedom to add and remove seams as they see fit. Finally, the images Crosswalk and Apollo-Aldrin in Figure 1.9 were created and edited in our system to show how panoramas can have multiple valid seam configurations.

Figure 4.21: A panorama taken by Neil Armstrong during the Apollo 11 moon landing (Apollo-Armstrong: 6,913 x 1,014, eleven images). (a) Registration artifacts exist on the horizon. (b) Our system can be used to hide these artifacts. (c) The final color-corrected image. Panorama courtesy of NASA.

Figure 4.22: In this example (Graffiti: 10,899 x 3,355, ten images), (a) the user fixed a few recoverable registration artifacts and tuned the seam location for improved gradient-domain processing, yielding a colorful color-corrected graffiti. (b) Our initial automatic solution (energy function based on pixel gradients). (c) The user-edited panorama. The editing session took 2 minutes.

Figure 4.23: The color-corrected, user-edited examples from Figure 1.7. The artifacts caused by the optimal seams can be repaired by a user. Images courtesy of City Escapes Nature Photography.

Figure 4.24: A lake vista panorama (Lake: 7,626 x 1,231, 22 images) with canoes which move during acquisition. In all there are six independent areas of movement; therefore there are 64 possible seam configurations of different canoe positions. Here we illustrate two of these configurations with color-corrected versions of the full panorama (a and c) and a zoomed-in portion of each panorama (b and d) showing the differing canoe positions. Panorama courtesy of City Escapes Nature Photography.

Figure 4.25: Splitting a valence-five branching point based on gradient energy of the Fall-5way dataset (5211 x 5177, 5 images): as the user splits the pentagon, the resulting seams mask or unmask the dynamic elements. Note that each branching point that has a valence higher than three can be further subdivided.

4.8 Limitations and Future Work

Our technique is versatile and can robustly handle a multitude of panorama configurations. However, there is currently a limitation on the configurations which we can handle. The adjacency mesh data structure in its current form relies on the fact that the intersection of pairwise overlaps yields an area of exactly one connected component (which is needed to guarantee the manifold structure of the mesh). For example, fewer than one connected component would arise in a situation where one overlap is completely encased inside another, and more than one can be caused by an overlap's area passing through the middle of another overlap. Both of these cases break the pairwise seam network assumption. In addition, an image whose boundary is completely enclosed by another image's boundary (100% overlap) is currently considered invalid. These are pathological cases that we have yet to encounter in practice. Overall, the authors feel that these limitations are only temporary and that the data structures and methods outlined in this work are general enough to support these cases as a future extension.

The serial and parallel out-of-core implementations discussed above scale seam processing well to images gigapixels in size. The schemes show good scaling for our test datasets even though there is some redundancy in the file I/O: each image is loaded once for every multioverlap of which it is a member. In other words, an image is (re)loaded a number of times equal to the valency of its adjacency mesh node.
As future work, my collaborators and I wish to explore caching strategies for images and overlaps, as well as how these strategies interplay with different face traversal orders.

CHAPTER 5

INTERACTIVE GRADIENT DOMAIN EDITING AT SCALE

This chapter introduces the Progressive Poisson technique, which brings the color-correction phase of the panorama creation pipeline into an interactive environment. This technique provides interactive gradient domain processing, the most sophisticated and computationally expensive correction method. The operation is inherently global and therefore, before this work, there were no known techniques for applying it to massive images interactively. Section 5.2 outlines the interactive technique and a full resolution out-of-core Progressive Poisson solver. Section 5.3 extends the full solver to a parallel distributed environment and Section 5.4 shows how it can be redesigned as a cloud-based resource.

5.1 Gradient Domain Image Processing

Gradient domain image processing encompasses a family of techniques that manipulate an image based on the value of a gradient field rather than operating directly on the pixel values. Seamless cloning, panorama stitching, and high dynamic range tone mapping are all techniques that belong to this class. Given a gradient field G(x,y), defined over a domain Ω ⊂ ℝ², we seek to find an image P(x,y) such that its gradient ∇P fits G(x,y). In order to minimize ||∇P − G|| in a least squares sense, one has to solve the following optimization problem:

min_P ∫_Ω ||∇P − G||²   (5.1)

It is well known that minimizing equation (5.1) is equivalent to solving the Poisson equation ΔP = div G(x,y), where Δ denotes the Laplace operator ΔP = ∂²P/∂x² + ∂²P/∂y² and div G(x,y) denotes the divergence of G.
To adapt the equations shown above to discrete images, we apply a standard finite difference approach which approximates the Laplacian as:

ΔP(x,y) = P(x+1,y) + P(x−1,y) + P(x,y+1) + P(x,y−1) − 4P(x,y)   (5.2)

and the divergence of G(x,y) = (Gx(x,y), Gy(x,y)) as:

div G(x,y) = Gx(x,y) − Gx(x−1,y) + Gy(x,y) − Gy(x,y−1).

The differential form ΔP = div G(x,y) can therefore be discretized into the following sparse linear system:

Lp = b.   (5.3)

Each row of the matrix L stores the weights of the standard five-point Laplacian stencil given by (5.2), p is the vector of pixel colors, and b encodes the guiding gradient field as well as the boundary constraints. The choice of guiding gradient field G(x,y) and boundary conditions for the system determines which image processing technique is applied. In the case of seamless cloning, it is necessary to use Dirichlet boundary conditions, set to the color values of the background image at the boundaries, with the guiding gradient being the gradient of the source image (see [133] for a detailed description). For tone mapping and image stitching, Neumann boundary conditions are used. The guiding gradient field for image stitching is the composited gradient field of the original images; the unwanted gradient between images is commonly set to zero or averaged across the stitch. The guiding gradient for tone mapping is adjusted from the original pixel values to compress the high dynamic range (HDR) (see [55] for more detail). Methods such as gradient domain painting [114] allow the guiding gradient to be user defined.

5.2 Progressive Poisson Solver

This section discusses a progressive framework for solving very large Poisson systems in massive image editing. This technique allows for a simple implementation, yet is highly scalable and performs well even with limited storage and processing resources.
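As a concrete toy illustration of the system in equations (5.2)-(5.3), the sketch below assembles the five-point Laplacian as a dense matrix with implicit zero-Dirichlet boundaries and recovers a small "image" from its divergence. This is illustrative only (a 4 x 4 grid, dense solve) and is not the solver described in this chapter:

```python
import numpy as np

# Five-point Laplacian stencil from equation (5.2) on an n x n grid,
# with implicit zero-Dirichlet boundary (toy setting).
def laplacian_matrix(n):
    N = n * n
    L = np.zeros((N, N))
    for y in range(n):
        for x in range(n):
            i = y * n + x
            L[i, i] = -4.0
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nx, ny = x + dx, y + dy
                if 0 <= nx < n and 0 <= ny < n:
                    L[i, ny * n + nx] = 1.0
    return L

n = 4
L = laplacian_matrix(n)
p_true = np.arange(n * n, dtype=float)  # an arbitrary "image"
b = L @ p_true                          # its discrete divergence field
p = np.linalg.solve(L, b)               # recover the image from Lp = b
print(np.allclose(p, p_true))           # True
```

In the actual solver the system is of course never formed densely; the stencil is applied implicitly by the iterative methods discussed below.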
5.2.1 Progressive Framework

For an image P of n x n pixels, the Laplace system (5.3) has n² independent variables, one per pixel. Computing the entire solution is therefore expensive both in terms of space and time. For large images, the space requirements easily exceed the main memory available on most computers. Moreover, the long computation times make any interactive application infeasible. Acceleration methods try to address either or both of these issues. The recent adaptive formulation by Agarwala [2] has been particularly insightful: by exploiting the smoothness of the solution, this method was the first to reduce both the cost of the computation and its memory requirements. The approach by Kazhdan and Hoppe [89] demonstrates how a streaming approach can achieve high performance by optimizing the memory access patterns. We extend these acceleration techniques and show how to achieve high quality local solutions without the need to solve the entire system. Moreover, we show that coarse approximations are of acceptable visual quality without the cost of a typical coarsening stage used in the V-cycle. These new features, coupled with a simple multiresolution framework, enable a data-driven interactive environment that exploits the fact that interactive editing sessions are always limited by screen resolution. At any given time, a user only sees either a low resolution view of the entire image or a high resolution view of a small area. We take advantage of this practical restriction and solve the Poisson equation only for the visible pixels. This provides performance advantages for interactive sessions, as well as tight control over memory usage. For example, even the simple step of computing the gradient of the full resolution image can be problematic due to its significant processing time and storage requirement. In our approach, we avoid this problem by estimating gradients on-the-fly using only the available pixels.
Overall, our interactive system is based on a simple two-tier approach:

• A global progressive solver provides a near-instant coarse approximation of the full solution. This approximation can be refined up to a desired solution by a lightweight process, often running in the background and possibly out-of-core. Any time the user changes input parameters, this process is restarted.

• A local progressive solver provides a quick solution for the visible pixels. This process is driven by the interactive viewer and uses as a coarse approximation the best solution available from the global solver.

These components can be coupled with different multiresolution hierarchies as discussed in the next section.

5.2.1.1 Initial solution. At launch, the system computes a coarse image for the initial view. A fast two-dimensional direct method using cosine and Fast Fourier transforms by Agrawal [7, 8] is used for this initial solve for techniques that require Neumann boundaries (stitching, HDR compression). For methods that require Dirichlet boundaries (seamless cloning) we use an iterative method such as SOR. To provide the user with a meaningful preview, we use an initial coarse resolution of one to two megapixels depending on the physical display. We have found, in practice, that the Fast Fourier solver usually gives us this approximation in under 2 seconds. This initial solution is at the core of the progressive refinement defined in the next paragraph.

5.2.1.2 Progressive refinement. The goal of progressive refinement is to increase the resolution of our solution either locally or globally. This requires injecting color transport information from coarser to finer resolutions. In doing so, we exploit the fact that the solution, away from the seams, tends to be smooth [2], and up-sampling the coarse solution gives high quality results in large areas of the image.
To improve the solution and resolve the problems at the seams, we run an iterative method, estimating new gradients from the original pixel data of the finer resolution and using the up-sampled values as the initial solution estimate. The finer resolution gradient field allows the iterative solver to reconstruct the detailed features of the original image. For the iterative method we allow the use of either conjugate gradient (for faster convergence) or SOR (for minimal memory overhead). The iterative solver is assumed to have converged when the L2 norm of the residual between iterations is less than 1.0 x 10⁻³; in practice there is no perceptible difference between iterations after this condition is met. Figure 5.1 shows the refinement process, where we assume for simplicity that each resolution doubles each dimension separately and our data is a subsampled hierarchy. In this case, computing each finer resolution is equivalent to adding new rows (or columns) to the coarse resolution. Therefore, we know that each new pixel added has two neighbors from the coarse solution. We can take the average difference from these two neighbors and apply it to the original RGB value of the pixel from the new resolution (see Figure 5.1 (a)). Since the image is subsampled, computing the average difference and applying it to the new pixel is trivial. In a tiled hierarchy one would need to double both dimensions at the same time, requiring a simple adjustment to the interpolation. Each new resolution is treated as new data, and the offset is based on the solution from the previous resolution and the transform between levels.

5.2.1.3 Local preview. Combining the coarse, global solve with a progressively refined local preview is all that is necessary for our interactive system.
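Before moving on, the iterative refinement of Section 5.2.1.2 can be sketched with a toy SOR sweep that starts from an initial guess (standing in for the up-sampled coarse solution) and iterates until the inter-iteration residual drops below the 1.0 x 10⁻³ threshold mentioned above. Grid size, relaxation factor, and boundary values here are made up for illustration; this is not the thesis implementation:

```python
import numpy as np

# Toy SOR refinement: sweep the interior with the five-point stencil,
# stopping when the L2 norm of the change between sweeps is below tol.
def sor_refine(p, b, omega=1.8, tol=1e-3, max_iters=10_000):
    n = p.shape[0]
    for _ in range(max_iters):
        prev = p.copy()
        for y in range(1, n - 1):
            for x in range(1, n - 1):
                gs = (p[y, x + 1] + p[y, x - 1] +
                      p[y + 1, x] + p[y - 1, x] - b[y, x]) / 4.0
                p[y, x] = (1.0 - omega) * p[y, x] + omega * gs
        if np.linalg.norm(p - prev) < tol:
            break
    return p

n = 17
b = np.zeros((n, n))                           # zero divergence: Laplace
p = np.random.default_rng(0).random((n, n))    # arbitrary initial guess
p[0, :] = p[-1, :] = p[:, 0] = p[:, -1] = 1.0  # fixed (Dirichlet) boundary
p = sor_refine(p, b)
print(float(abs(p - 1.0).max()))               # interior relaxes toward 1
```

A good initial guess (the up-sampled coarse solution in our system) means far fewer sweeps are needed before the residual condition is met.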
For data requests at resolutions equal to or less than our coarse solution, we simply display the available data. As the user zooms into an area, the image is progressively refined in a local region. Since the resolution increase is directly coupled with the decrease in the extent of the local view, the number of pixels that must be processed remains constant (see Figure 5.1 (b)). This results in a logarithmic run-time complexity and a constant storage requirement, which allows our system to gracefully scale to images orders of magnitude larger than previously possible.

Figure 5.1: Our adaptive refinement scheme using simple difference averaging. (a) Global progressive up-sampling of the edited image computed by a background process. (b) View-dependent local refinement based on a 2k x 2k window. In both cases we speed up the SOR solver with an initial solution obtained by smooth refinement of the coarse solution.

5.2.1.4 Progressive full solution. The progressive refinement can be applied globally to compute a full solution. Since the method requires very small overhead, it can easily be run as a background process during the interactive preview. When a new resolution has been solved, the interactive preview uses the solution as a new coarse approximation, thereby saving computation during the local adaptive phase. Like other in-core methods, this progressive global solver is limited by available system memory. To address this issue, the global solver has the ability to switch modes to a moving-window, out-of-core progressive solver.

5.2.1.5 Out-of-core solver. The out-of-core solver maintains strict control over memory usage by sweeping the data with a sliding window.
The window traverses the finest desired resolution, which can be several levels in the hierarchy beyond the currently available solution. If the jump in resolution is too high, the process can be repeated several times. Within each window, the coarse solution is up-sampled and the new resolution image is solved using the gradients from the desired resolution. Since the window lives at the desired resolution, we never need to expand memory beyond the size of the window. Furthermore, windows are arranged such that they overlap with the previously solved data in both dimensions to produce a smooth solution. The overlap causes the solver to load and compute some of the data multiple times, an inherent overhead when compared to an idealized in-core solver. For instance, given a 1/x overlap, the four corners, each 1/x x 1/x in size, are executed four times. The four strips on the edge of the window, not including the corners, each 1/x x (1 − 2/x) in size, are executed two times. All other pixels, covering an area of (1 − 2/x) x (1 − 2/x), are executed once. Therefore, the compute overhead for a 1/x overlap is given by: 4/x (1 + 1/x). Moreover, the I/O overhead can be reduced to 1/x, since we can retain pixel data from the previous window in the primary traversal direction. In principle, a larger overlap between windows results in higher quality solutions, though in practice we have found that for a 1024 x 1024 window a 1/32 overlap is sufficient for good results. This overlap requires only a 12.8% compute overhead and a 3.1% I/O overhead. A larger window can be used to reduce the percentage overlap while achieving the same quality results. For instance, by doubling the window size in both dimensions, a 2048 x 2048 window can be computed with a 1/64 overlap, incurring only a 6.3% compute overhead and a 1.5% I/O overhead.
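Evaluating the overhead formulas above for the two window configurations (a quick sketch; the printed values match the quoted figures up to rounding):

```python
# Sliding-window overheads: with a 1/x overlap between windows,
# compute overhead is 4/x * (1 + 1/x), and I/O overhead reduces to 1/x
# because pixel data is retained along the primary traversal direction.
def compute_overhead(x):
    return 4.0 / x * (1.0 + 1.0 / x)

def io_overhead(x):
    return 1.0 / x

for x in (32, 64):
    print(f"1/{x} overlap: compute {compute_overhead(x):.1%}, "
          f"I/O {io_overhead(x):.1%}")
```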
Compared to the exact analytical solution, our method produces even higher quality results than the best known method [89] for equivalent run times.

5.2.2 Data Access

Our progressive solver can operate well on multiple hierarchical schemes. Tiled hierarchies are often used to produce smoother, antialiased images, though high contrast areas in the original image may be lost in the smoothing. As Figure 5.2 (b) shows, the tiled image is visually pleasing, but details such as the cars on the highway are lost. This visual smoothness can also come at the cost of significant preprocessing, reduced flexibility when dealing with missing data, and increased I/O when traversing the data. These costs can be especially significant for massive data if one has to process it with very limited resources. The least costly image hierarchy can be computed by subsampling. Subsampling is simple and lightweight, but is prone to high frequency aliasing; it does, though, retain higher contrast at the coarse resolutions. Figure 5.2 (a) shows how the subsampled hierarchy has aliasing artifacts, but also retains enough contrast to see the cars on the highway. This contrast may be beneficial for some applications, such as an analyst studying satellite imagery. To show the flexibility of our interactive system, we support both a filtered tiled hierarchy and a subsampled hierarchy (see Figure 5.3). For a tiled scheme, we compute the image hierarchy using a Gaussian kernel to produce a smooth, antialiased image (Figure 5.3, right column). With a minor variation to the underlying I/O layer, our system also supports a faster, subsampled Hierarchical Z-order as proposed by Pascucci and Frank [2002] (Figure 5.3, left column). For an overview of the HZ data format, see Section 3.1. To achieve the level of scalability necessary in the current system, we further simplify the HZ data access scheme. We use a lightweight recursive algorithm that avoids repeated index computations, provides progressive and adaptive access, guarantees cache coherency, and minimizes the number of I/O operations without using any explicit caching mechanisms. In particular, computing the HZ index with this new algorithm attains a thirty-times speedup compared to the previous work. For example, to compute the indices for a 0.8 gigapixel image the new algorithm requires 4.7 seconds where the previous method would take 144.1 seconds.

Figure 5.2: Subsampled and tiled hierarchies. (a) A subsampled hierarchy. As expected, subsampling has the tendency to produce high-frequency aliasing, though details such as the cars on the highway and in the parking lots are preserved. (b) A tiled hierarchy. This produces a more visually pleasing image at all resolutions, but at the cost of potentially losing information; the cars are now completely smoothed away. Data courtesy of the U.S. Geological Survey.

Figure 5.3: Our progressive framework using subsampled and tiled hierarchies. (a) A composite satellite image of Atlanta, over 100 gigapixels at full resolution, overlaid on a subsampled Blue Marble background; (b) a tiled version of the same satellite image; (c) the seamless cloning solution using subsampling; (d) the same solution computed using a tiled hierarchy; (e) the solution offset computed using subsampling; (f) the solution offset computed using tiles; (g) a full resolution portion computed using subsampling; (h) the same portion using tiling. Note that even though there is a slight difference in the computed solution, both the tiled and the subsampled hierarchies produce a seamless stitch with our framework. Data courtesy of the U.S. Geological Survey and NASA's Earth Observatory.
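The plain Z-order underlying the HZ format interleaves the bits of the pixel coordinates, so spatially nearby pixels map to nearby positions in the file. A minimal sketch of the interleave (illustrative only; the recursive HZ indexing algorithm of Section 3.2 is more involved):

```python
# Plain Z-order (Morton) index: interleave the bits of (x, y).
def morton_index(x, y, bits=16):
    idx = 0
    for i in range(bits):
        idx |= ((x >> i) & 1) << (2 * i)      # x bits at even positions
        idx |= ((y >> i) & 1) << (2 * i + 1)  # y bits at odd positions
    return idx

# The first 2x2 block maps to consecutive indices 0..3.
print([morton_index(x, y) for y in (0, 1) for x in (0, 1)])  # [0, 1, 2, 3]
```

Because each 2x2 (and, recursively, each 2^k x 2^k) block occupies a contiguous index range, rectangular queries touch long contiguous runs of the file, which is the source of the cache coherency discussed above.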
Moreover, since the traversal follows the HZ storage order exactly for any query window, we guarantee that each file is accessed only once without the need to hold any block of data in cache. For details on our new recursive algorithm see Section 3.2. This approach makes the system intrinsically cache friendly for any realistic caching architecture and, therefore, very flexible in exploiting modern hardware. Conversion into HZ-order requires no additional storage; for tiled hierarchies, on the other hand, a 1/3 data increase is common. Due to our new data access scheme, conversion to HZ-order is straightforward and inexpensive. For our test data, we have found that there is only a 27% overhead for the conversion compared to just copying the raw data. In essence, the conversion is strictly a reordering of the data and requires no operations on the pixel data. This conversion will outperform even the simplest tiled hierarchies, which require some manipulation of the pixel data. Each resolution in the HZ hierarchy is in plain Z-order, which allows for fast, cache coherent access of subregions of the image. HZ is not tied to a specific data traversal order, such as the row-major order imposed by traditional file formats, as previously observed in [89]. In fact, HZ maintains a high degree of cache coherency even during adaptive local traversals. The locality of our data access provides graceful performance degradation even in extreme conditions. In particular, we demonstrate accessing a dataset, roughly a terabyte in size, by simply mounting a remote file system over an encrypted VPN channel via a wireless connection. Even in normal running conditions, we have found that the I/O overhead caused by using a tiled hierarchy increased the running time by 39%-67%.
These numbers reflect the theoretical bound of 1/3 overhead, made worse by the inability to constrain real queries to perfect alignment with the boundaries of a quadtree. The effect of this overhead is detrimental to the scalability of the system under more difficult running conditions such as the ones mentioned above. Moreover, HZ easily handles partially converted data, as we show in one portion of the accompanying video for the editing of the Salt Lake City panorama. In a tiled scheme, the entire hierarchy may need to be recomputed as new data is added.

5.2.3 Interactive Preview and Out-of-Core Solver Results

We demonstrate the scalability and interactivity of our approach on several applications, using a number of images ranging from megapixels to hundreds of gigapixels in size. To further illustrate the responsiveness of our system, the accompanying video shows screen captures of live demonstrations. To highlight particular details and validate the approach, the figures in this section show previews and close-ups of our interactive system, alongside the results of our full out-of-core progressive solver. We also provide running times of our full out-of-core solver compared with the best current method, streaming multigrid [89], which we have verified to use the same gradient information. All timings and demos were performed on a 64-bit 2.67 GHz Intel Quad Core desktop with 8 gigabytes of memory. All streaming multigrid timings were computed from code provided by the authors and include the timing for the gradient preprocess along with the timing to produce a solution. Our simple framework provides the illusion of a fully solved Poisson system at interactive frame rates and under continuous parameter changes with only a simple GL texture for display and no special hardware acceleration. Therefore, our code is platform independent.
Our simple progressive out-of-core solver produces robust solutions with run times that rival [89]. Unlike the previous method, our out-of-core solver does not use hardware acceleration and did not undergo heavy code optimization to achieve the following runtimes. The solver is also sequential and uses no threading to accelerate the computation. If further optimization of the run times is desired, there is nothing in our system to prevent the addition of these acceleration techniques. Unlike other out-of-core methods, we do not rely on large external memory data structures and we do not need to precompute gradients for the entire image. For the Salt Lake City panorama, for example, the streaming multigrid method [89] creates 75.2 gigabytes of auxiliary information for a 7.9 gigabyte input image. While disk space is generally assumed to be plentiful, such an explosion in disk space is unsustainable for images hundreds of gigapixels in size. The collection of satellite imagery we use in our video is more than one terabyte in size and would, therefore, require more than 9.5 terabytes of temporary storage.

The Edinburgh example consists of 25 images with a resolution of 16,950 x 2,956 (50 megapixels). At launch, our system performs a seamless-stitch Poisson solve of a global 0.7 megapixel image in 1.26 seconds using our direct analytical solve; see Figure 5.4 (a). From this point on, the system can pan and zoom interactively as if the full solution were already available. Our local adaptive refinement gives a solution that is visually equivalent to a solution to the entire system; see Figure 5.4 (c, d, and e). In the accompanying video, we demonstrate interactive editing and solving of the Poisson system after the repeated user-selected replacement of pixels of a particular color. We also perform a seamless clone of a 2000 x 1600 airplane on Edinburgh's cloudy sky. The plane is animated along a linear path across the panorama.
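The direct analytical solve used for the coarse preview can be illustrated with a standard spectral Poisson solver: with zero (Dirichlet) boundaries, the discrete 5-point Laplacian is diagonalized by a sine basis, so the whole system is solved with two transforms and a pointwise division. This is a hedged sketch of the general technique only, not the dissertation's implementation (which operates on image gradients at a coarsely sampled resolution); the dense sine matrix below stands in for a fast DST:

```python
import numpy as np

def poisson_dirichlet_direct(f):
    """Solve the discrete Poisson equation L u = f on an n x n grid with
    zero Dirichlet boundary, where L is the 5-point Laplacian
    (stencil [1, -2, 1] along each axis).

    The 1D Laplacian is diagonalized by the sine basis, so we transform
    f into that basis, divide by the summed eigenvalues, and transform
    back -- a direct, non-iterative solve.
    """
    n = f.shape[0]
    k = np.arange(1, n + 1)
    # Eigenvalues of the 1D Dirichlet Laplacian tridiag(1, -2, 1).
    lam = 2.0 * np.cos(np.pi * k / (n + 1)) - 2.0
    # Orthonormal sine eigenbasis (rows are eigenvectors).
    i = np.arange(1, n + 1)
    S = np.sqrt(2.0 / (n + 1)) * np.sin(np.pi * np.outer(k, i) / (n + 1))
    fhat = S @ f @ S.T
    uhat = fhat / (lam[:, None] + lam[None, :])
    return S.T @ uhat @ S
```

Replacing the dense matrix multiplies with an FFT-based discrete sine transform gives the O(n^2 log n) cost that makes a megapixel-scale coarse solve take only seconds.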
As evident in the video, our framework shows the entire sequence in real time. We also demonstrate similar interactive editing with the Redrock panorama (data courtesy of Aseem Agarwala): nine images, 19,588 x 4,457; 87 megapixels.

Figure 5.4: The Edinburgh Panorama, 16,950 x 2,956 pixels. (a) Our coarse solution computed at a resolution of 0.7 megapixels; (b) the same panorama solved at full resolution with our progressive global solver, scaled to approximately 12 megapixels for publication; (c) a detail view of a particularly bad seam from the original panorama; (d) the problem area previewed using our adaptive local refinement; (e) the problem area solved at full resolution using our global solver in 3.48 minutes.

Given this initial coarse solution, our method can produce a full solution of Edinburgh, see Figure 5.4 (b), in 3.48 minutes. The streaming multigrid method requires 3.52 minutes. Figure 5.5 (a) shows the convergence and error for our method and streaming multigrid when compared to the ideal direct solution.

The Salt Lake City example is 611 images with a resolution of 126,826 x 29,633 (3.27 gigapixels). A significantly larger example is provided by a panorama captured with a simple camera mounted on a GigaPan robot [63]. To maximize individual image quality, the pictures were taken with automatic exposure times, which inherently increases the color differences between images that need to be corrected by the Poisson solver. An initial coarse preview of 0.87 megapixels is computed by our direct analytical solver in 2.07 seconds. Figure 5.6 shows the original set of images (a), the panorama that our system stitches in real time (b), the global solution provided by our out-of-core solver (c), and the difference image between the interactive preview and the final solution at the coarse resolution (d).
There are slight deviations at some of the more challenging seams, but overall there is negligible visible difference. Our local adaptive preview mimics the global solution well, as shown in Figure 5.7. To test the accuracy of the methods, we have run a full analytical Poisson solver on a 485 megapixel subset of the panorama on an HPC computer. Figures 5.8 (a) and (b) show how close our out-of-core solution comes to the exact analytical solution. Figure 5.8 (c) shows that the multigrid method has yet to converge to an acceptable solution given an equivalent amount of running time. All solutions were computed using the map shown in Figure 5.8 (g).

Figure 5.5: The RMS error when compared to the ideal analytical solution as we increase iterations for both methods. Streaming multigrid has better convergence and less error for the Edinburgh example (a), though our method remains stable for the larger Salt Lake City panorama (b). Notice that every plot has been scaled independently to best illustrate the convergence trends of each method.

Figure 5.6: Panorama of Salt Lake City of 3.27 gigapixels, obtained by stitching 611 images. (a) Mosaic of the original images. (b) Our solution computed at 0.9 megapixel resolution. (c) The full solution provided by our global solver. (d) The difference image between our preview and the full solution at the preview resolution. Both (a) and (c) have been scaled for publication to approximately 12.9 megapixels.
Figure 5.7: A comparison of our adaptive local preview on a portion of the Salt Lake City panorama at one half of the full resolution; (a) the original mosaic, (b) our adaptive preview, (c) the full solution from our global solver, and (d) the difference image between the adaptive preview and the full solution.

Figure 5.8: A comparison of our system with the best known out-of-core method [Kazhdan and Hoppe 2008] and a full analytical solution on a portion of the Salt Lake City panorama, 21201 x 24001 pixels, 485 megapixels: (a) the full analytical solution; (b) our solution computed in 28.1 minutes; (c) the solution from [Kazhdan and Hoppe 2008] computed in 24.9 minutes; (d) the analytical solution where the solver is allowed to harmonically fill the boundary; (e) our solution with harmonic fill; (f) the solution from [Kazhdan and Hoppe 2008] with harmonic fill; (g) the map image used by all solvers to construct the panorama, where the red color indicates the image that provides the pixel color and white denotes the panorama boundary. Allowing the solver to harmonically fill the boundary increases the memory usage of the method.

The Sierpinski Sponge example has a resolution of 128k x 128k (16 gigapixels). We have tested the tone mapping application on a synthetic high dynamic range image generated with MegaPOV [118]. In this image we use a partially refined model of a Sierpinski Sponge to create high variations in level-of-detail. Such details can be completely hidden in the dark areas under projected shadows. We follow the approach introduced by Fattal [55] to reconstruct the information hidden in the dark regions. To validate the approach, we ran a typical HDR test image, the Belgium House, progressively refined from a 16 x 12 coarse solution. Even with such a coarse initial solution, we achieve results very close to the exact solution (see Figure 5.9 (c) and (d)). Figure 5.9 shows the original sponge model (a) and
Figure 5.9: Application of our method to HDR image compression: (a) original synthetic HDR image of an adaptively refined Sierpinski sponge generated with Povray. (b) Tone-mapped image with recovery of detailed information previously hidden in the shadows. (c) The Belgium House image solved using our coarse-to-fine method with an initial 16 x 12 coarse solution (α = 0.01, β = 0.7, compression coefficient = 0.5). (d) The direct analytical solution. Image courtesy of Raanan Fattal.

the processed version (b), where all the details under the shadows have been recovered.

The satellite example contains a Blue Marble background image that is 3.7 gigapixels and imagery of Atlanta and other cities which is well over 100 gigapixels. To demonstrate the scalability of our system, we have run the seamless cloning algorithm for entire cities over a variety of realistic backgrounds from NASA's Blue Marble Collection [125] (see Figure 5.10). We show how a user can take advantage of these capabilities to achieve artistic effects and create virtual worlds from real data. We also create a dynamic environment by animating the background world map over 12 months and concurrently use the Poisson solver to show how the appearance of a city would change across the seasons.

Figure 5.10: Satellite imagery collection with a background given by a 3.7 gigapixel image from NASA's Blue Marble Collection. The Progressive Poisson solver allows the application of the seamless cloning method to two copies of the city of Atlanta, each of 116 gigapixels. An artist can interactively place a copy of Atlanta under shallow water and recreate the lost city of Atlantis. Data courtesy of the U.S. Geological Survey and NASA's Earth Observatory.

5.3 Parallel Distributed Gradient Domain Editing

In the following, we provide details of our parallel Progressive Poisson algorithm and MPI implementation.
This new algorithm reduces the time to compute a full resolution gradient domain solution from hours, in the case of a single out-of-core solution, to minutes when run on a distributed cluster.

5.3.1 Parallel Solver

Commonly, large images are stored as tiles, which gives one an underlying structure to divide an image amongst the nodes/processors for a distributed solver. Tile-based distributed solvers have been shown to work well when only local trends are present. Seamless stitching commonly contains large scale trends where a naive tile-based approach will provide poor results. The addition of the Progressive Poisson method's coarse upsampling allows for a simple, tile-based parallel solver that can account for large trends. Our algorithm works in two phases: the first phase performs the progressive upsample of a precomputed coarse solution for each tile. The second phase solves for a smooth image on tiles that significantly overlap the solution tiles from the first phase. In this way, the second phase smooths any seams not captured or even introduced by the first phase, producing a complete, seamless image.

5.3.1.1 Data distribution as tiles.

Although a tile-based approach leverages a common image storage format, it is not typically how methods are designed to handle seamless stitching of large panoramas. For instance, methods like streaming multigrid [89, 90] often assume precomputed gradients for the whole image. Our system is designed to take tiles directly as input and therefore must be able to handle the gradient computation on-the-fly. An important and often undocumented component of panorama stitching is the map or label image. Given an ordered set of images which compose the panorama, the map image gives the correspondence of a pixel location in the overall panorama to the smaller image that supplies the color.
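The per-tile label encoding described below (a 0-255 id stored in the alpha channel) can be sketched as a simple pack/unpack pair. This is a hedged illustration with our own helper names, not the system's API; it only shows how a composited RGB tile and its label image can travel together as one RGBA buffer:

```python
import numpy as np

def pack_labels(rgb, labels):
    """Store a per-pixel source-image label (a 0-255, per-tile id) in the
    alpha channel of an RGBA tile, alongside the composited color."""
    h, w, _ = rgb.shape
    rgba = np.empty((h, w, 4), dtype=np.uint8)
    rgba[..., :3] = rgb
    rgba[..., 3] = labels
    return rgba

def unpack_labels(rgba):
    """Recover the (rgb, labels) pair from an RGBA tile, e.g. to decide
    whether a gradient crosses a seam between two source images."""
    return rgba[..., :3], rgba[..., 3]
```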
This map file is necessary to distinguish actual gradients from those due to seams. This map also defines the boundaries of the panorama, which are commonly irregular. This file, along with each individual image that composes the mosaic, is needed for a traditional out-of-core scheme [89, 149] for gradient computation. If the gradient across the seams is assumed to be zero, which is a common technique we adopt for this solver, each tile can be composited in advance and the map file is only needed to denote image seams or boundary. As noted above, this composited tile is often already provided if used in a traditional large image system. The map file can then be encoded as an extra channel of color information, typically the alpha channel. For mosaics of many hundreds of images, such as the examples provided in this dissertation, we cannot encode an index for each image in a byte of data. Though in practice each tile has very little probability of containing more than 256 individual images, each image is given a unique 0-255 number on a per-tile basis.

We have chosen an overlap of 50% in both dimensions for the second phase windowing scheme of the parallel solver, for simplicity of implementation. Each window is composed of a 2 x 2 collection of tiles. To avoid undefined windows in the second phase, we add a symbolic padding of one row/column of tiles to all sides of the image, which the solver regards as pure boundary. Figure 5.11 gives an example of a tile layout. The overlapping window size used for our testing was 1024 x 1024 pixels (assuming 512 x 512 tiles), which we found to be a good compromise between a low memory footprint and image coverage. Each node receives a partition of windows equivalent to a contiguous subimage, with no overlap necessary between nodes during the same phase.
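The two window layouts just described can be enumerated directly: phase-1 windows tile the symbolically padded image in non-overlapping 2 x 2 tile blocks, and phase-2 windows are the same size but shifted by one tile in each dimension, which yields the 50% overlap. A hedged sketch in tile coordinates (helper names are ours):

```python
def phase1_windows(tiles_x, tiles_y):
    """Origins (in tile coordinates) of the non-overlapping 2x2-tile
    first-phase windows. The grid is padded by one symbolic boundary
    tile on all sides, so origins start at -1 (the padding column/row)."""
    return [(tx, ty)
            for ty in range(-1, tiles_y, 2)
            for tx in range(-1, tiles_x, 2)]

def phase2_windows(tiles_x, tiles_y):
    """Origins of the second-phase 2x2-tile windows, shifted by one tile
    in each dimension: each window overlaps its neighboring phase-1
    windows by one tile (50%) per axis."""
    return [(tx, ty)
            for ty in range(0, tiles_y - 1, 2)
            for tx in range(0, tiles_x - 1, 2)]
```

With 512 x 512 tiles, every origin (tx, ty) above corresponds to a 1024 x 1024 pixel window starting at (512 * tx, 512 * ty).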
Data can be distributed evenly across all nodes in the case of a homogeneous distributed system, or dependent on weights due to available resources in the case of heterogeneous hardware. We provide a test case for a heterogeneous system in Section 5.3.2.3.

5.3.1.2 Coarse solution.

As a first step, the first phase of our solver upsamples, via bilinear interpolation, a 1-2 megapixel coarse solution. Much like the Progressive Poisson method [149], each node computes this solution in just a few seconds using a direct FFT solver on a coarsely sampled version of our large image. In tiled hierarchies, this coarse image is typically already present and can be encoded with the map information in much the same way as the tiles.

5.3.1.3 First phase: progressive solution.

This phase computes a Progressive Poisson solution for each window, which is composed of tiles read off of a distributed file system. To progressively solve a window, an image hierarchy is necessary. For our implementation a standard power-of-two image pyramid was used. As a first step, the solver upsamples the solution to a finer resolution in the image pyramid using a coarse solution image and the original pixel values.

Figure 5.11: Our tile-based approach: (a) An input image is divided into equally spaced tiles. In the first phase, after a symbolic padding by a column and row in all dimensions, a solver is run on a window denoted by a collection of four labeled tiles. Data are sent and collected for the next phase to create new data windows with a 50% overlap. (b) An example tile layout for the Fall Panorama example.
An iterative solver is then run for several iterations to smooth this upsample, using the original pixel gradients as the guiding field. This process is repeated down the image hierarchy until the full resolution is reached. The solver is considered to have converged at this resolution when the L2 norm falls below 10^-3, which is based on the range of byte color data. From our testing, we have found that SOR gives both good running times and low memory consumption and therefore is our default solver. As noted above, this window is logically composed of four tiles, which are computed and saved in memory for the next phase as floating point color data. This leads to 12 bytes/pixel (three floating point color channels) to transfer between phases. Given the data distribution, one node may process many windows. If this is the case, only the tiles which border a node's domain are prepared to be transferred to another node, thereby keeping data communication between phases to a relatively small zone.

5.3.1.4 Second phase: overlap solution.

The second phase gathers the four tiles (both solution and original) that make up the overlapping window. After the data are gathered, the gradients are computed from the original pixel values and an iterative solver (SOR) is run after being initialized with the solutions from the first phase. The iterative solver is constrained to only work on interior pixels to prevent this phase from introducing new seams at the window boundary. Technically, there may be errors at the pixels around the midpoints of the boundary edges of these windows, though we have not encountered this in practice. Again, this solver is run until convergence given by the L2 norm. Note that even though the tile gradients are computed in the first phase, we have chosen to recompute them on the fly in the second phase.
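The SOR relaxation used in both phases can be sketched generically: a successive over-relaxation sweep of the 5-point Poisson stencil over interior pixels, with the boundary held fixed and convergence declared when the L2 norm of the update falls below a threshold. This is a hedged illustration of the technique, not the dissertation's code; in the solver the right-hand side is derived from the guiding gradient field:

```python
import numpy as np

def sor_poisson(u, rhs, omega=1.8, tol=1e-3, max_iters=10000):
    """In-place SOR sweeps for the 5-point Laplacian, solving
    u[i-1,j] + u[i+1,j] + u[i,j-1] + u[i,j+1] - 4*u[i,j] = rhs[i,j]
    on interior pixels while leaving the boundary fixed. Iteration stops
    when the L2 norm of the per-sweep update falls below tol (the text
    uses 10^-3, based on the range of byte color data)."""
    for _ in range(max_iters):
        delta_sq = 0.0
        for i in range(1, u.shape[0] - 1):
            for j in range(1, u.shape[1] - 1):
                new = (1.0 - omega) * u[i, j] + omega * 0.25 * (
                    u[i - 1, j] + u[i + 1, j] + u[i, j - 1] + u[i, j + 1]
                    - rhs[i, j])
                delta_sq += (new - u[i, j]) ** 2
                u[i, j] = new
        if delta_sq ** 0.5 < tol:
            break
    return u
```

Because updated values are used as soon as they are written, each sweep needs only the solution buffer itself, which matches the low memory consumption noted above.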
Passing the gradients would cost at least an additional 12 bytes/pixel of overhead. As nodes increase, data transfer and communication become a significant bottleneck in most distributed schemes; therefore, we chose to pay the cost of increased computation and of reading the less expensive byte image data from the distributed file system instead of the costly transfer.

5.3.1.5 Parallel implementation details.

Each node has one master thread which coordinates all processing and communication. The core component of this thread is a priority queue of windows and tiles to be processed. At launch, this queue is initialized by a separate seeding thread with the initial domain of windows to be solved in the first phase. Because of the separation of the main thread from the seeding of the queue, the main thread can begin processing windows immediately. Each window is given a first phase id, which is the window's row and column location in the subimage to be processed by a node. Communication between nodes need only be one-way in our system; therefore we have chosen for communication to be "upstream" between nodes, i.e., to the nodes operating on a subimage with horizontal or vertical location greater than the current node's. In order to avoid starvation in the second phase, the queue is loaded with windows in reverse order in terms of the tile id. Figure 5.12 gives an example of the traversal and communication. All initially seeded windows are given equal low priority in the queue. In essence the initial queue operates much like a first-in-first-out (FIFO) queue. As windows are removed from the queue, the main thread launches a progressive solver thread which is handed off to an intra-node dynamic scheduler. Our implementation uses a HyperFlow [170] scheduler to execute the solver on all available cores.
HyperFlow has been shown to efficiently schedule execution of workflows on multicore systems and therefore is the perfect solution for our intra-node scheduling. In all, there are two distinct sequential stages in each phase: (1) loading of the tile data and the computation of the image gradient, and (2) the progressive solution. This flow information allows HyperFlow to exploit data, task, and pipeline parallelism to maximize throughput. After a solution is computed, the progressive solver thread partitions the window into the tiles that comprise it. This allows the second phase to recombine the tiles needed for the 50% overlap window. All four tiles are loaded into the queue with high priority. If the main thread removes a tile (as opposed to a window) from the queue and the tile is needed by another node, the main thread immediately sends the data asynchronously to the proper node. Otherwise, if the node needs this tile for phase two, the second phase id of the window which needs the tile is computed and hashed with a two-dimensional hash function the same size as the window domain for the second phase. If all four tiles for a given second phase window have been hashed, the main thread knows the second phase window is ready and immediately passes the window to a solver thread for processing. If the main thread receives a solved tile from another node, this is also immediately hashed.

Figure 5.12: Windows are distributed as evenly as possible across all nodes in the distributed system. Windows assigned to a specific node are denoted by color above. Given the overlap scheme, data transfer only needs to occur one-way, denoted by the red arrows and boundary above. To avoid starvation between phases and to hide as much data transfer as possible, windows are processed in inverse order (white arrows) and the tiles needed by other nodes are transferred immediately.
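The tile bookkeeping just described amounts to a small counting hash: each arriving first-phase tile is mapped to the second-phase windows that need it, and a window is dispatched once all four of its tiles have arrived. A hedged sketch (a dictionary stands in for the fixed-size two-dimensional hash, and the helper names are our own):

```python
from collections import defaultdict

class WindowAssembler:
    """Collect solved phase-1 tiles; report a phase-2 window as soon as
    its 2x2 block of tiles is complete."""

    def __init__(self):
        self.pending = defaultdict(dict)  # window origin -> {tile id: data}

    @staticmethod
    def windows_needing(tx, ty):
        # With the 50% overlap, an interior tile belongs to up to four
        # phase-2 windows: the one with the tile at its origin and the
        # ones shifted one tile left and/or up.
        return [(tx - dx, ty - dy) for dy in (0, 1) for dx in (0, 1)]

    def add_tile(self, tx, ty, data, valid_windows):
        """Hash a tile into every valid phase-2 window that needs it;
        return the windows that just became complete (ready to solve)."""
        ready = []
        for w in self.windows_needing(tx, ty):
            if w not in valid_windows:
                continue
            self.pending[w][(tx, ty)] = data
            if len(self.pending[w]) == 4:
                ready.append(w)
        return ready
```

Tiles received from other nodes are fed through the same `add_tile` path, which mirrors how the main thread hashes both local and remote tiles.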
5.3.2 Results

To demonstrate the scalability and adaptability of the approach, we have tested our implementation using two panorama datasets, gigapixels in size. To illustrate the portability of the system, we have also measured its running times and scalability on two distributed systems. Our main system, the NVIDIA Center of Excellence cluster in the Scientific Computing and Imaging Institute at the University of Utah, consists of 60 active nodes with 2.67GHz Xeon X5550 processors (8 cores), 24GB of RAM per node, and 750GB of local scratch disk space. The second system, the Longhorn visualization cluster in the Texas Advanced Computing Center at the University of Texas at Austin, consists of 256 nodes (of which 128 were available for our tests) with 2.5GHz Nehalem processors (8 cores), 48GB of RAM per node, and 73GB of local scratch disk space. Weak and strong scalability tests were performed on both systems. Given the proven scalability of HyperFlow on one node, we have tested the scalability of the MPI implementation from 2-60 and 2-128 nodes for the NVIDIA cluster and Longhorn cluster, respectively. Timings are taken as the best over several runs to discount external effects on the cluster from shared resources such as the distributed file system. The datasets used for testing were:

• Fall Panorama: 126,826 x 29,633, 3.27 gigapixels. When tiled, this dataset is composed of 124 x 29 windows of 1024 x 1024 pixels. See Figure 5.13 for image results from an NVIDIA cluster 480 core test run.

• Winter Panorama: 92,570 x 28,600, 2.65 gigapixels. When tiled, this dataset is composed of 91 x 28 windows of 1024 x 1024 pixels. See Figure 5.14 for image results from an NVIDIA cluster 480 core test run.

5.3.2.1 NVIDIA cluster.

To show the MPI scalability of our framework and implementation, strong and weak scaling tests were performed for 2-60 nodes.
As shown in Tables 5.1 and 5.2, both datasets scale close to ideal and with high efficiency for strong scaling. The Fall Panorama, due to its larger size, begins to lose efficiency at around 32 nodes when I/O overhead begins to dominate. Even with this overhead, the efficiency remains acceptable. For the Winter Panorama, the I/O overhead does not affect performance up to 60 nodes and the implementation maintains efficiency throughout the test.

Figure 5.13: Fall Panorama - 126,826 x 29,633, 3.27 gigapixels. (a) The panorama before seamless blending and (b) the result of the parallel Poisson solver run on 480 cores with 124 x 29 windows and computed in 5.88 minutes.

Figure 5.14: Winter Panorama - 92,570 x 28,600, 2.65 gigapixels. (a) The result of the parallel Poisson solver run on 480 cores with 91 x 28 windows and computed in 6.02 minutes, (b) the panorama before seamless blending, and (c) the coarse panorama solution.

Table 5.1: The strong scaling results for the Fall Panorama run on the NVIDIA cluster from 2-60 nodes, up to a total of 480 cores. Overhead (O/H) due to MPI communication and I/O is also provided, along with its percentage of actual running time. The Fall Panorama, due to its larger size, begins to lose efficiency at around 32 nodes when I/O overhead begins to dominate. Even with this overhead, the efficiency (Eff.) remains acceptable.

Strong Scaling - Fall Panorama - NVIDIA cluster
Nodes  Cores  Ideal (m)  Actual (m)  Eff. %  O/H (m)  % O/H
2      16     79.35      79.35       100.0   18.80    23.7
4      32     39.68      40.08       97.1    9.05     22.2
8      64     19.84      20.83       95.2    7.28     35.0
16     128    9.92       11.43       78.9    6.50     51.7
32     256    4.96       6.20        53.8    6.20     67.3
48     384    3.31       6.40        51.7    6.40     100.0
60     480    2.65       5.88        45.0    5.88     100.0

Table 5.2: The strong scaling results for the Winter Panorama run on the NVIDIA cluster from 2-60 nodes, up to a total of 480 cores. Overhead (O/H) due to MPI communication and I/O is also provided, along with its percentage of actual running time. For the Winter Panorama, the I/O overhead does not affect performance up to 60 nodes and the implementation maintains efficiency (Eff.) throughout all of our runs.

Strong Scaling - Winter Panorama - NVIDIA cluster
Nodes  Cores  Ideal (m)  Actual (m)  Eff. %  O/H (m)  % O/H
2      16     128.87     128.87      100.0   8.63     6.7
4      32     64.43      77.68       82.9    4.70     6.1
8      64     32.22      40.63       79.3    4.28     10.5
16     128    16.11      21.17       76.1    4.17     19.7
32     256    8.05       10.88       74.0    4.08     37.5
48     384    5.37       6.98        76.9    4.10     58.7
60     480    4.30       6.02        71.4    4.00     66.5

Weak scaling tests were performed using a subimage of the Fall Panorama dataset; see Table 5.3 for the weak scaling results. As the number of cores increases, so does the image resolution to be solved. The subimage was expanded from the center of the full image, and iterations of the solver for all windows were locked at 1000 for testing to ensure no variation is due to slower converging image areas. As the table shows, our implementation shows good weak scaling efficiency even for 60 nodes with 480 cores. In all, we have produced a gradient domain solution to a dataset which in previous work the best known methods [89, 149] took hours to compute.

Table 5.3: Weak scaling tests run on the NVIDIA cluster for the Fall Panorama dataset. As the number of cores increases, so does the image resolution to be solved. The image was expanded from the center of the full image. Iterations of the solver for all windows were locked at 1000 for testing to ensure no variation is due to slower converging image areas. As is shown, our implementation shows good efficiency even when running on the maximum number of cores.

Weak Scaling - NVIDIA cluster
Nodes  Cores  Size (MP)  Time (min.)  Efficiency
2      16     100.66     5.55         100.00%
4      32     201.33     5.55         100.00%
8      64     402.65     5.53         100.30%
16     128    805.31     5.68         97.65%
32     256    1610.61    5.77         96.24%
60     480    3019.90    6.57         84.52%

5.3.2.2 Longhorn cluster.

To show the portability and MPI scalability of our framework and implementation, strong and weak scaling tests were performed on the largest dataset (Fall Panorama) on a second cluster. The strong scaling tests were performed from 2-128 nodes and the weak scaling tests, limited by the size of the image, were performed from 2-64 nodes. As shown in Table 5.4, our implementation maintains very good efficiency and timings for our strong scaling test up to the full 1024 cores available on the system. Much like the NVIDIA cluster, weak scaling tests were performed on a portion of the Fall Panorama and iterations of the solver were locked at 1000. To ensure that each node got a reasonably sized subimage to solve, the tests were limited to 64 nodes.

Table 5.4: To demonstrate the portability of our implementation, we have run strong scalability testing for the Fall Panorama on the Longhorn cluster from 2-128 nodes, up to a total of 1024 cores. As the numbers show, we maintain good scalability and efficiency even when running on all available nodes and cores.

Strong Scaling - Fall Panorama - Longhorn
Nodes  Cores  Ideal (m)  Actual (m)  Efficiency
2      16     84.07      84.07       100%
4      32     42.03      43.18       97%
8      64     21.02      21.85       96%
16     128    10.51      12.08       87%
32     256    5.25       6.93        76%
64     512    2.63       3.89        68%
128    1024   1.31       2.73        48%
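The "Ideal" and efficiency columns in the strong-scaling tables follow the usual convention: the ideal time at n nodes is the baseline time scaled by baseline_nodes/n, and efficiency is ideal over actual. A sketch of the arithmetic, checked against one row of Table 5.1:

```python
def strong_scaling(base_nodes, base_time, nodes, actual_time):
    """Ideal time and parallel efficiency at `nodes`, relative to a
    baseline run of `base_time` minutes on `base_nodes` nodes."""
    ideal = base_time * base_nodes / nodes
    return ideal, ideal / actual_time

# e.g. Fall Panorama on 8 nodes, relative to the 2-node baseline:
# ideal = 79.35 * 2 / 8 = 19.84 min, efficiency ~ 95.2%
```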
Table 5.5 demonstrates our implementation's ability to weak scale on this cluster, maintaining good efficiency for up to 512 cores.

5.3.2.3 Heterogeneous cluster. As a final test of portability and adaptability, we presented our implementation with a simulated heterogeneous distributed system. Our parallel framework provides the ability to assign weights to the nodes; the weighting is typically uniform and therefore results in an even distribution of windows across all nodes. For this example, a simple weighting scheme can easily load-balance a mixed network, giving the nodes with more resources more windows to compute. Table 5.6 gives an example mixed system of two 8-core nodes, four 4-core nodes, and eight 2-core nodes; in all, this system has 48 available cores. The weights for our framework are simply the number of cores available on each node. This network was simulated using the NVIDIA cluster by overriding Hyperflow's knowledge of available resources with our desired properties. While this is not a perfect simulation, since the main thread handling MPI communication would not be limited to reside on the desired cores, the strong scaling tests showed that even with evenly distributed data on 8-16 nodes the implementation is not yet I/O bound; therefore, we should still have a good approximation to a real, limited system. Table 5.6 details the window distribution and timings for the Fall Panorama for all nodes in this test. As shown, we maintain good load balancing given proper node weighting when dealing with heterogeneous systems. The max runtime of 32.70 minutes for this 48-core system is on par with the runtimes for the 32-core (40.08 minutes) and 64-core (20.83 minutes) strong scaling results.

Table 5.5: Weak scaling tests run on the Longhorn cluster for the Fall Panorama dataset.

Weak Scaling - Longhorn cluster
Nodes   Cores   Size (MP)   Time (min.)   Efficiency
  2       16       75.5        5.50        100.00%
  4       32      151          6.13         89.67%
  8       64      302          6.15         89.43%
 16      128      604          6.15         89.43%
 32      256     1208          6.13         89.67%
 64      512     2416          6.15         89.43%

Table 5.6: Our simulated heterogeneous system: a mixed network of two 8-core nodes, four 4-core nodes, and eight 2-core nodes, with the weights for our framework set to the number of cores available on each node. The timings and window distributions are for the Fall Panorama dataset. With proper weightings, our framework distributes windows proportionally to the performance of each node. The max runtime of 32.70 minutes for this 48-core system is on par with the timings for the 32-core (40.08 minutes) and 64-core (20.83 minutes) runs from the strong scaling test.

Heterogeneous System - Fall Panorama
Cores:      8     8     4     4     4     4     2     2     2     2     2     2     2     2
Time (m):  27.9  28.9  32.1  32.7  32.0  32.5  16.6  23.1  28.7  32.2  20.0  23.6  24.6  28.4
Windows:   1239  1239   640   640   580   600   276   285   300   330   304   304   290   319

5.4 Gradient Domain Editing on the Cloud

The parallel algorithm outlined in the previous section provides a full resolution gradient domain solution for massive images in only a few minutes of processing time. In this section, we explore redesigning this technique as a cloud-based application. For this work, we chose to target the MapReduce framework and its open source implementation, Hadoop. MapReduce and Hadoop have emerged in recent years as popular and widely supported cloud technologies, and they are therefore the logical targets for this work.

5.4.1 MapReduce and Hadoop

This subsection briefly reviews some of the fundamentals of the MapReduce framework and how to design graphics algorithms to work well with Hadoop and the Hadoop Distributed File System (HDFS). We provide a high-level view to justify the design decisions outlined in the next section.
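As a minimal mental model of this framework, the map, shuffle, and reduce steps can be collapsed into a few lines of Python; the word-count job here is the standard toy example, not part of our panorama pipeline:

```python
from collections import defaultdict

# Minimal in-memory model of the map -> shuffle -> reduce data flow
# (illustrative only: real Hadoop sorts, spills, and moves this data on
# local disks and over the network).
def map_reduce(records, mapper, reducer):
    shuffle = defaultdict(list)
    for record in records:                    # map phase
        for key, value in mapper(record):
            shuffle[key].append(value)        # group by key, as the shuffle does
    return {k: reducer(k, vs) for k, vs in sorted(shuffle.items())}  # reduce phase

# Classic word count as a sanity check of the flow.
counts = map_reduce(["a b a", "b c"],
                    mapper=lambda line: [(w, 1) for w in line.split()],
                    reducer=lambda key, values: sum(values))
```

Here `counts` comes out as `{'a': 2, 'b': 2, 'c': 1}`; everything that follows in this section is about making a gradient domain solver fit this same two-phase shape efficiently.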
The map function operates on key/value pairs, producing one or more key/value pairs for the reduce phase. The reduce function is a per-key operation that works on the output of the mapper (see Figure 5.15). Hadoop's scheduler will interleave their execution as data become available. Currently, Hadoop does not support job chaining; therefore, any algorithm that requires two passes will likely require two separate MapReduce jobs. While this will likely change in the future, at this time minimizing the number of passes is an important consideration, since the overhead incurred by launching new jobs in Hadoop is significant.

Figure 5.15: The two phases of a MapReduce job. In the figure, three map tasks produce key/value pairs that are hashed into two bins corresponding to the two reduce tasks in the job. When the data are ready, the reducers grab their needed data from the mappers' local disks.

In Section 5.4.2 we detail our algorithm, which requires only one pass. Hadoop has been optimized to handle large files and to process and transfer small chunks of data. For many applications, including the one outlined in the next section, understanding Hadoop's data flow is vital for an efficient implementation, much as random memory access must be considered on a GPU.

5.4.1.1 Input. The Hadoop Distributed File System stripes data across all available nodes on a per-block basis, with replication, to guarantee a certain level of locality for the map phase and to be able to handle system faults. When a job is launched, Hadoop splits the input data evenly across all map instances. For our example, allowing Hadoop to split the input data arbitrarily could result in fragmented images. Therefore, the system allows the developer to specialize the function that reads the input, which we use to constrain the splits to occur only at image boundaries.

5.4.1.2 MapReduce transfer.
During execution, each mapper hashes the key of each key/value pair into bins; the number of bins equals the number of reducers (see Figure 5.15), and each bin is also sorted by key. The mapper first stores and sorts the data in an in-memory buffer, but it will spill to disk if this buffer is exceeded (the default buffer size is 512 MB). This spill can lead to poor mapper performance and should be avoided if possible. After a mapper completes execution, the intermediate data are stored on the node's local disk. Each mapper informs the control node that its data are finished and ready for the reducers. Since Hadoop assumes that any mapper is equally likely to produce any key, there is no assumed locality for the reducers; each reducer must pull its data from multiple mappers in the cluster (see Figures 5.15 and 5.16). If a reducer must grab key/value pairs from many local disks on the cluster (possibly an N-to-N mapping), this phase can have a drastic effect on performance.

Job coordination is handled with a master/slave model in which the control node, called the Job Tracker, distributes and manages the map and reduce tasks. When a program is launched, the Job Tracker initiates Task Trackers on nodes in the cluster. The Job Tracker then schedules tasks on the Task Trackers, maintaining a communication link to handle system faults (see Figure 5.16).

Figure 5.16: A diagram of the job control and data flow for one Task Tracker in a Hadoop job. The dotted red arrows indicate data flow over the network; dashed arrows represent communication; the blue arrow indicates a local data write; and the black arrows indicate an action taken by the node.
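The mapper-side binning described above (one bin per reduce task, selected by hashing the key, with each bin kept sorted) can be sketched in a few lines; this in-memory version is illustrative only, since Hadoop buffers and spills these bins to local disk:

```python
# Illustrative sketch of mapper-side partitioning: each emitted key/value
# pair is hashed into one of R bins, one bin per reducer, and each bin is
# kept sorted by key before being served to its reducer.
def partition(pairs, num_reducers):
    bins = [[] for _ in range(num_reducers)]
    for key, value in pairs:
        bins[hash(key) % num_reducers].append((key, value))
    return [sorted(b) for b in bins]

bins = partition([(3, 'a'), (1, 'b'), (2, 'c'), (1, 'd')], 2)
```

With two reducers, the four pairs land in two bins (here `[(2, 'c')]` and `[(1, 'b'), (1, 'd'), (3, 'a')]`), and each reducer later pulls exactly one bin from every mapper.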
5.4.2 MapReduce for Gradient Domain

Commonly, large images are stored as tiles, which gives us the underlying structure for our scheme. However, a tile-based approach by itself would not account for the large scale trends common in panoramas (see Figure 5.17). Therefore, we add upsampling of a coarse solution, similar to the approach used in Summa et al. [149], to capture these trends. Our algorithm works in two phases: the first phase performs the upsampling of a precomputed coarse solution and solves each tile to produce a smooth solution over the extent of the tile; the second phase solves for a smooth image on tiles that significantly overlap the smoothed tiles from the first phase. In this way, the second phase smooths any seams not captured, or even introduced, by the first phase solvers. This algorithm can be implemented simply as one MapReduce job in Hadoop.

5.4.2.1 Tiles. We have chosen an overlap of 50% in both dimensions for the second phase due to the simplicity of implementation, although Summa et al. [149] have shown that a good solution can be found with much less. To easily accomplish this overlap, we divide the data into tiles 1/4 of the proper size. Figure 5.18 shows the tile layout for our test images. Each phase operates on four of these smaller tiles, which are combined to construct the larger tiles. To avoid undefined tiles in the second phase, we add a symbolic padding of one row/column to all sides of the image. Figure 5.19 gives an example of a tile layout. An important component of panorama stitching is a map file, which gives the correspondence from a pixel location in the overall panorama to the smaller image that supplies its color. This map file is necessary to determine the difference between actual gradients and those due to seams.
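The role of the map file can be pictured with a toy per-pixel labeling: a gradient between two neighboring pixels is either a real image gradient (same source label), a seam (different labels), or part of the panorama boundary. The byte-label encoding below is a hypothetical sketch for illustration, not the actual on-disk format used in this work:

```python
# Hypothetical sketch: one byte of "map" data per pixel, with 0 reserved for
# panorama boundary/padding; other values need only distinguish "same source
# image" from "different source image" between neighboring pixels.
BOUNDARY = 0

def classify_gradient(labels, x0, y0, x1, y1):
    a, b = labels[y0][x0], labels[y1][x1]
    if BOUNDARY in (a, b):
        return "boundary"                    # a gradient we want to preserve
    return "interior" if a == b else "seam"  # seams are what the solver removes

labels = [[0, 1, 1],
          [0, 1, 2]]
```

With this labeling, the gradient between (1, 0) and (1, 1) is "interior", between (1, 1) and (2, 1) it is a "seam", and anything touching a 0-labeled pixel is "boundary".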
This map also defines the boundaries of the panorama, which are commonly irregular and do not usually follow the actual image boundary. The panorama boundary is a seam we would like to preserve. We encode the map file into each individual tile as an alpha channel. For images such as the Salt Lake City example, we cannot encode an index for each image in a byte of data. However, the map is only used to denote whether two pixels are from the same source image or whether a pixel is on the boundary; therefore a byte is more than enough to encode this correspondence. The symbolic padding is encoded as boundary, and images that are not evenly divisible by our tile size are also padded with boundary. The overlapping window size used for our tests was 1024 x 1024 pixels, which we found to be a good compromise between a low memory footprint and image coverage.

Figure 5.17: Although the result is a smooth image, without coarse upsampling the final image will fail to account for large trends that span beyond a single overlap, which can lead to unwanted shifts in color. Notice the vertical banding denoted by the red arrows.

Figure 5.18: The 512 x 512 tiles used in our Edinburgh (a), Redrock (b), and Salt Lake City (c) examples.

Figure 5.19: Our tile-based approach: an input image is divided into equally spaced tiles. In the map phase, after a symbolic padding by a column and row in all dimensions, a solver is run on a collection of four tiles, labeled by the numbers above. After the mapper finishes, it assigns keys such that each reducer runs its solver on a collection of four tiles that have a 50% overlap with the previous solutions.

5.4.2.2 Coarse solution.
As a first step, the first phase of our solver upsamples, via bilinear interpolation, a 1-2 megapixel coarse solution. Much like the method of Summa et al. [149], we precompute the coarse solution in just a few seconds using a direct FFT solver on a coarsely sampled version of the large image. In tiled hierarchies, this coarse image is typically already present. In Hadoop, the coarse solution is sent along with the MapReduce job when it is launched; the Job Tracker stores this image on the distributed file system for Task Trackers to pull and store locally.

5.4.2.3 First (map) phase. After loading and combining the smaller tiles and performing the upsample, the first phase runs an iterative solver initialized with the upsampled pixel colors. From our testing, we have found that SOR gives good running times and low memory consumption; it is therefore our default solver. The solver is considered to have converged when the L2 norm falls below 10^-3, a threshold based on the range of byte data. After a smooth image is computed, the solution is split back into its four smaller tiles and sent to the next phase as byte data. Some precision is lost in the solution data by this truncation of bits, which can cause slower convergence in the next phase. However, in many distributed systems the bottleneck is data transfer; it is therefore preferable to use smaller data at the cost of increased computation. This first phase of our algorithm fits well with Hadoop's map phase. Each mapper emits a key/value pair, where the value is the data from a small tile and the key is computed in such a way that we achieve the desired 50% overlap in the next phase. The key is computed as a row/column pair in the space of the larger tiles and is stored in 4 bytes before being emitted: the high word contains the row and the low word contains the column.
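The 4-byte key layout just described, with the row in the high 16-bit word and the column in the low word, can be written directly as:

```python
# Pack a tile (row, col) pair into one 32-bit key: high word = row,
# low word = col, matching the key layout described in the text.
def pack_key(row, col):
    return (row << 16) | col

def unpack_key(key):
    return key >> 16, key & 0xFFFF
```

For example, `unpack_key(pack_key(7, 10))` returns `(7, 10)`; the pseudocode below recovers the pair the same way (`blockId >> 16` and `blockId & 0xFFFF`).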
For a tile at location (x, y), the key for sub-tile (i, j) is computed as:

    key_row = x * 2 + i;    (5.4)
    key_col = y * 2 + j;    (5.5)

Below we provide pseudocode for the map phase; Figure 5.19 provides an example.

proc Map(blockId, image) =
    row := blockId >> 16;
    col := blockId & 0xFFFF;
    solver.compute_gradient(image);
    solver.upsample_coarse(image, row, col);
    solver.SOR(image);
    for i := 0 to 1 do
        for j := 0 to 1 do
            keyRow := row * 2 + i;
            keyCol := col * 2 + j;
            key := (keyRow << 16) + keyCol;
            emit(key, solver.tiles[i][j]);

5.4.2.4 Second (reduce) phase. The second phase now gathers the four smaller tiles that make up the overlapping window. These tiles sit as intermediate data on the local disks of the cluster. If the system accounted for locality, each instance would only need to gather three tiles, since the nodes could be placed such that one tile is always stored locally. After the data are gathered, the gradients are computed from the original pixel values, and an iterative solver (SOR) is run after being initialized with the solutions from the first phase. The iterative solver is constrained to work only on interior pixels to prevent this phase from introducing new seams. Technically, there may be errors at the pixels around the midpoints of the boundary edges of these tiles, though in practice we have not seen this affect the solution. This second phase fits well with Hadoop's reduce phase, with some considerations. Hadoop does not account for data locality for the reducers; therefore, we must assume the worst case gather of four tiles. Also, the reducers do not have access to the HDFS, nor can any task request specific data. The mappers in the first phase modify the pixel values, but the reducer needs the original values to compute the gradient vector for the iterative solver.
Therefore, the mapper must also concatenate the original pixel values to the solved data before it emits the key/value pair. This leads to a 6 bytes/pixel transfer between phases. Below we provide pseudocode for the reduce phase.

proc Reduce(blockId, [(map1, org1), ..., (map4, org4)]) =
    mapper_output := merge(map1, map2, map3, map4);
    original_tile := merge(org1, org2, org3, org4);
    solver.compute_gradient(original_tile);
    solver.SOR(mapper_output);
    emit(blockId, solver.tiles);

5.4.2.5 Storage in the HDFS. In Hadoop, saving the image in standard row-major order would lead to poor performance in the mappers, since there is good locality in only one dimension. Saving individual tiles would also not be efficient, since Hadoop's HDFS is optimized for large files. Therefore, we save the data as the large tiles, comprised of the four smaller tiles, which the mapper needs in the first phase. We concatenate the tiles together, row by row, into a single large file.

5.4.3 Results

We demonstrate the quality of our approach on three test panoramas which range from megapixels to gigapixels in size. We also demonstrate the generality of the abstraction by running our code, without modification, on a single desktop and on a large cluster. Finally, we test Hadoop's scalability with two of our test panoramas. The single node tests were performed on a dual quad-core Intel Xeon W5580 3.2 GHz desktop with 24 GB of memory. For our large distributed tests, we ran our method on the NSF CLuE [41] cluster, which consists of 275 nodes, each with dual Intel Xeon 2.8 GHz processors with HyperThreading and 8 GB of memory. While still a valuable resource for research, as far as modern clusters are concerned CLuE's hardware is outdated: it is a retired system based on 6-year-old technology originally produced in 2004.
Moreover, CLuE is a shared resource, and all timings were certainly affected by other researchers using the machines. The Edinburgh panorama consists of 25 images with a full resolution of 16,950 x 2,956 pixels (50 megapixel) and was broken into 48 tiles. For our single node test, our method produced a solution in 81 seconds with eight mappers and four reducers. The Redrock panorama consists of nine images with a full resolution of 19,588 x 4,457 pixels (87 megapixel) and was partitioned into 96 tiles. Our method running on a single node solved the panorama in 156 seconds with nine mappers and nine reducers; the solver running on the cluster took 199 seconds with 96 mappers and 96 reducers. Due to the small size of these panoramas, the extra parallelization afforded by the distributed system did not increase performance. Quite the opposite was true: the runtimes were worse due to the overhead of Hadoop launching and coordinating many tasks. Also, because the cluster was a shared resource, this increase in compute time could easily have come from external influences. See Figure 5.20 for the original and solved panoramas. The Salt Lake City panorama consists of 611 images with a full resolution of 126,826 x 29,633 pixels (3.27 gigapixel) and was split into 3,444 tiles. Our method took 3 hours and 5 minutes to compute a solution on our one node test desktop. On the distributed cluster with 492 mappers and 492 reducers, the time to compute a solution dropped to 28 minutes and 44 seconds, of which 3 minutes and 24 seconds was due to Hadoop overhead and 15 minutes was due to I/O and data transfer between the map and reduce phases.
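Plain arithmetic on the reported figures (our own breakdown, not from the measurements themselves) shows how much of that wall clock time is data movement rather than solving:

```python
# Breakdown of the 492 mapper/reducer Salt Lake City run reported above:
# 28:44 total wall clock, 3:24 of Hadoop overhead, ~15:00 of I/O and
# map-to-reduce data transfer.
def mmss(minutes, seconds):
    return minutes * 60 + seconds

total = mmss(28, 44)
overhead = mmss(3, 24)
transfer = mmss(15, 0)
compute = total - overhead - transfer                 # seconds left for solving
share = round(100 * (overhead + transfer) / total)    # non-compute percentage
```

By this accounting roughly 64% of the run is overhead and data movement, leaving about 10 minutes 20 seconds of solve time, which underlines the earlier observation that data transfer, not computation, is the dominant cost in such systems.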
Figure 5.20: The results of our cloud implementation, from top to bottom: Edinburgh, 25 images, 16,950 x 2,956, 50 megapixel, and the solution to Edinburgh from our cloud implementation; Redrock, nine images, 19,588 x 4,457, 87 megapixel, and the solution to Redrock from our cloud implementation; Salt Lake City, 611 images, 126,826 x 29,633, 3.27 gigapixel, and the solution to Salt Lake City from our cloud implementation.

Running Salt Lake City with 246 mappers and 246 reducers produced a solution in 39 minutes and 49 seconds, of which 2 minutes and 7 seconds was due to Hadoop overhead and 30 minutes was due to I/O and data transfer. Note that these are all wall clock times and include the activity of other people on a shared system. Moreover, the configuration, which we could not change, required running at least three processes on every node, each of which has only two cores; therefore, we can only hope to achieve 2/3 compute efficiency out of this cluster. See Figure 5.20 for the original and solved panoramas. Based on our timings and the pricing available online, running the 492 mapper/reducer job would have cost approximately $50 on Amazon's Elastic MapReduce [10]. This is orders of magnitude less expensive and less time consuming than operating and maintaining a proprietary cluster, and it would allow any researcher in the field to experiment with new ideas.

5.4.3.1 Scalability. Due to the shared nature of the CLuE cluster, we restricted our scalability tests to the single node test desktop. Figure 5.21 plots the runtime to solve both the Edinburgh and Redrock panoramas as a function of the number of mappers and reducers. We varied the number of mappers and reducers from one to the number of cores.
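The flattening discussed next can be captured by a toy model (our own illustration, not a measurement): parallel speedup is bounded by min(tasks, cores), so once the combined mapper/reducer count passes the physical core count, additional tasks contribute only overhead:

```python
# Toy model of runtime vs. task count on a fixed machine: speedup saturates
# at the core count, while per-task launch overhead keeps growing.
def model_runtime(total_work_s, tasks, cores, per_task_overhead_s=1.0):
    speedup = min(tasks, cores)
    return total_work_s / speedup + per_task_overhead_s * tasks / cores

times = [model_runtime(800.0, t, 8) for t in (1, 2, 4, 8, 16)]
```

In this model the runtime improves steadily up to 8 tasks on 8 cores and then stops improving, mirroring the plateau seen in Figure 5.21.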
The plots show that as both the mappers and reducers increase, so does our performance; but as the total number of mappers and reducers meets or exceeds the available cores of our system, the performance gain flattens. This is an important observation and must be remembered when choosing an optimal number of mappers and reducers, especially when purchasing time and cores as a commodity.

Figure 5.21: (a) The scalability plot for the Edinburgh (50 megapixel) panorama on our one node, 8-core test desktop; (b) the scalability plot for the Redrock (87 megapixel) panorama on the same machine.

5.4.3.2 Fault tolerance. Hadoop has been developed to robustly handle failures in the cluster. Achieving a fault tolerant implementation is a major challenge on its own and is not easily available in other distributed frameworks such as MPI. The tremendous advantage of fault tolerance comes at the cost of high variability in running times, though jobs are guaranteed to finish. In fact, all runs on the distributed cluster had some kind of failure in the system at some time during the execution, and still we were able to obtain results, which would not have been possible with a traditional distributed implementation. In particular, the running time stated above for the Salt Lake City example with 492 mappers/reducers was based on the job with the minimum number of failures (95 failed tasks). In practice, we have seen this example run as long as 49 minutes to account for the 133 failed tasks that occurred during the job.

CHAPTER 6

FUTURE WORK

The work outlined in this dissertation provides both the justification for and the solutions to bringing the composition stage of the panorama processing pipeline into an interactive setting. The future of this work is to bring the entire pipeline into an interactive environment.
A system built with this guiding principle would allow the user to add and remove images, fix registration problems, or adjust image boundaries, all while viewing a preview of the final color-corrected, composited panorama. This would give users an unprecedented amount of control over the creation of new panoramas, increasing both the accessibility of panorama creation and the quality of the final results. Given the work completed for this dissertation, the logical next step toward achieving my ultimate goal is to provide new and interactive solutions for the registration phase of the panorama pipeline. Currently, interaction for registration is typically nonexistent. Even when it is provided by a system, the interaction is rudimentary at best. For instance, the only interaction possible in a panorama processing system such as Hugin [77] is the manual selection and deletion of image feature points between pairs of images; see Figure 6.1. This is a tedious process for small images, and completely unwieldy for larger image collections. The scaling of registration algorithms will also need further study. Despite significant previous work, many current methods have been shown to work only with relatively small collections of images. Although assumed to scale well, only recently has work shown an extension of current techniques to extremely large collections of photographs [1]. Often such assumptions of scaling prove false; for instance, some commercial and open-source products, although advertised to handle an arbitrary number of images, have failed when presented with panoramas of hundreds of images. General purpose algorithms for automatic registration of extremely large image collections remain an open avenue of investigation.
In addition, state-of-the-art stitching software often needs to reduce complexity by strictly enforcing that the images are acquired in a regular pattern (columns and rows) in order to shrink the search space of possible registrations. My collaborators and I have found that these programs will often fail when presented with large image collections with no assumed structure.

Figure 6.1: A typical example of interaction during panorama registration from the open-source Hugin [77] software tool. Current interaction is limited to the manual selection and deletion of feature points used during registration.

The focus of my dissertation work was to bring the composition stage of the panorama creation pipeline into an interactive setting, not only for small images but for images massive in size. The Progressive Poisson and Panorama Weaving algorithms elegantly achieve this goal. Although panoramas were the primary focus of the work, the methods and frameworks developed throughout my dissertation provide new paradigms for interacting with high resolution imagery. For instance, the Progressive Poisson method provides a working proof-of-concept of how to reformulate global algorithms for an interactive setting on large data by computing screen resolution previews in real time and using out-of-core computation for full resolution solutions. One can envision expanding the frameworks and techniques outlined in this work with other data processing tools to allow comprehensive editing of massive datasets on regular desktop computers.

APPENDIX

MASSIVE DATASETS

Table A.1: Massive panorama data acquired and used in this dissertation work.
Dataset                 Images   Format            Individual Image Size
Lake Louise Large       5794     RAW NEF 16-bit    4288 x 2848 (12 megapixel)
Lake Louise Small 1     2805     RAW NEF 16-bit    4288 x 2848 (12 megapixel)
Lake Louise Winter 1    1983     JPEG Fine         4288 x 2848 (12 megapixel)
Lake Louise Winter 2    1876     JPEG Fine         4288 x 2848 (12 megapixel)
Lake Louise Morning     1656     JPEG Fine         4288 x 2848 (12 megapixel)
Lake Louise Small 2     1440     RAW NEF 16-bit    4288 x 2848 (12 megapixel)
Salt Lake City Large    1311     JPEG Fine         3456 x 2592 (9 megapixel)
Lake Louise Evening     1220     JPEG Fine         4288 x 2848 (12 megapixel)
Salt Lake City Winter   1219     JPEG Fine         3456 x 2592 (9 megapixel)
Salt Lake City Fall     624      JPEG Fine         3456 x 2592 (9 megapixel)
Mount Rushmore          300      JPEG Fine         4288 x 2848 (12 megapixel)
Salt Lake City Small    132      JPEG Fine         3264 x 2448 (8 megapixel)

Table A.2: Massive satellite data acquired and used in this dissertation work.

Dataset               Resolution         Gigapixels
New York, NY           80000 x  80000        6.40
Chattanooga, TN       120000 x 100000       12.00
Washington, DC        131350 x 159375       20.93
Hamilton County, SC   240000 x 232000       55.68
Philadelphia, PA      250000 x 230000       57.50
Indianapolis, IN      260000 x 260000       67.60
San Diego, CA         200000 x 365000       73.00
San Francisco, CA     225000 x 330000       74.25
New Orleans, LA       330000 x 290000       95.70
Olympia, WA           501059 x 329220      164.96
San Antonio, TX       521640 x 492480      256.90
Atlanta, GA           524288 x 524288      274.88
Seattle, WA           411280 x 693528      285.23
Phoenix, AZ           720000 x 540000      388.80

REFERENCES

[1] Agarwal, S., Snavely, N., Seitz, S., and Szeliski, R. Bundle adjustment in the large. ECCV '10: Proceedings of the 11th European Conference on Computer Vision: Part II (2010), 29-42.

[2] Agarwala, A. Efficient gradient-domain compositing using quadtrees. ACM Trans. Graph. 26, 3 (2007), 94.

[3] Agarwala, A., Agrawala, M., Cohen, M. F., Salesin, D., and Szeliski, R. Photographing long scenes with multi-viewpoint panoramas. ACM Trans. Graph. 25, 3 (2006), 853-861.

[4] Agarwala, A., Dontcheva, M., Agrawala, M., Drucker, S. M., Colburn, A., Curless, B., Salesin, D., and Cohen, M. F. Interactive digital photomontage. ACM Trans. Graph. 23, 3 (2004), 294-302.

[5] Agarwala, A., Zheng, K. C., Pal, C., Agrawala, M., Cohen, M. F., Curless, B., Salesin, D., and Szeliski, R. Panoramic video textures. ACM Trans. Graph. 24, 3 (2005), 821-827.

[6] Agrawal, A., Raskar, R., Nayar, S. K., and Li, Y. Removing photography artifacts using gradient projection and flash-exposure sampling. In SIGGRAPH '05: ACM SIGGRAPH 2005 Papers (New York, NY, USA, 2005), ACM, pp. 828-835.

[7] Agrawal, A. K., Chellappa, R., and Raskar, R. An algebraic approach to surface reconstruction from gradient fields. In ICCV (2005), pp. I: 174-181.

[8] Agrawal, A. K., Raskar, R., and Chellappa, R. What is the range of surface reconstructions from a gradient field? In ECCV (2006), pp. I: 578-591.

[9] Agrawal, A. K., Xu, Y., and Raskar, R. Invertible motion blur in video. ACM Trans. Graph. 28, 3 (2009).

[10] Amazon. Elastic MapReduce, 2012. http://aws.amazon.com/elasticmapreduce.

[11] Anandan, P., Burt, P. J., Dana, K., Hansen, M. W., and van der Wal, G. S. Real-time scene stabilization and mosaic construction. In Image Understanding Workshop (1994), pp. I: 457-465.

[12] AutoPano. http://www.autopano.net.

[13] Axelsson, O. Iterative Solution Methods. Cambridge University Press, New York, NY, 1994.

[14] Badra, F., Qumsieh, A., and Dudek, G. Rotation and zooming in image mosaicing. In WACV (1998), pp. 50-55.
[15] Bae, S., Paris, S., and Durand, F. Two-scale tone management for photographic look. In SIGGRAPH '06: ACM SIGGRAPH 2006 Papers (New York, NY, USA, 2006), ACM, pp. 637-645.

[16] Bai, X., Wang, J., Simons, D., and Sapiro, G. Video SnapCut: Robust video object cutout using localized classifiers. ACM Trans. Graph. 28, 3 (2009).

[17] Balmelli, L., Kovacevic, J., and Vetterli, M. Quadtrees for embedded surface visualization: Constraints and efficient data structures. In Proc. of IEEE International Conference on Image Processing (1999), pp. 487-491.

[18] Bay, H., Tuytelaars, T., and Gool, L. J. V. SURF: Speeded up robust features. In ECCV (2006), pp. I: 404-417.

[19] Ben-Ezra, M., and Nayar, S. K. Motion-based motion deblurring. IEEE Trans. Pattern Anal. Mach. Intell. 26, 6 (2004), 689-698.

[20] Berger, M. J., and Colella, P. Local adaptive mesh refinement for shock hydrodynamics. Journal of Computational Physics 82 (1989), 64-84.

[21] Bolitho, M., Kazhdan, M., Burns, R., and Hoppe, H. Multilevel streaming for out-of-core surface reconstruction. In SGP '07: Proceedings of the Fifth Eurographics Symposium on Geometry Processing (Aire-la-Ville, Switzerland, 2007), Eurographics Association, pp. 69-78.

[22] Bornemann, F. A., and Krause, R. Classical and cascadic multigrid - a methodological comparison. In Proceedings of the 9th International Conference on Domain Decomposition Methods (1996), Domain Decomposition Press, pp. 64-71.

[23] Boukerche, A., and Pazzi, R. W. N. Remote rendering and streaming of progressive panoramas for mobile devices. In ACM Multimedia (2006), K. Nahrstedt, M. Turk, Y. Rui, W. Klas, and K. Mayer-Patel, Eds., ACM, pp. 691-694.

[24] Boykov, Y., and Kolmogorov, V. An experimental comparison of min-cut/max-flow algorithms for energy minimization in vision. IEEE Trans. Pattern Anal. Mach. Intell. 26, 9 (2004), 1124-1137.

[25] Boykov, Y. Y., and Jolly, M. P. Interactive graph cuts for optimal boundary and region segmentation of objects in N-D images. In ICCV (2001), pp. I: 105-112.

[26] Boykov, Y. Y., Veksler, O., and Zabih, R. Fast approximate energy minimization via graph cuts. IEEE Trans. Pattern Analysis and Machine Intelligence 23, 11 (Nov. 2001), 1222-1239.

[27] Brandt, A. Multi-level adaptive solutions to boundary-value problems. Mathematics of Computation 31, 138 (1977), 333-390.

[28] Briggs, W. L., Henson, V. E., and McCormick, S. F. A Multigrid Tutorial, 2nd ed. Society for Industrial and Applied Mathematics, Philadelphia, PA, USA, 2000.

[29] Brown, D. C. Close-range camera calibration. Photogrammetric Engineering 37, 8 (Aug. 1971), 855-866.

[30] Brown, M., and Lowe, D. Automatic panoramic image stitching using invariant features. International Journal of Computer Vision (Jan. 2007).

[31] Brown, M., and Lowe, D. G. Recognising panoramas. In ICCV (2003), IEEE Computer Society, pp. 1218-1227.

[32] Brown, M., Szeliski, R. S., and Winder, S. A. J. Multi-image matching using multi-scale oriented patches. In CVPR (2005), pp. I: 510-517.

[33] Buchanan, A., and Fitzgibbon, A. W. Combining local and global motion models for feature point tracking. In CVPR (2007), pp. 1-8.

[34] Capel, D., and Zisserman, A. Automated mosaicing with super-resolution zoom. In Proc. CVPR (Jan. 1998).

[35] Cham, T. J., and Cipolla, R. A statistical framework for long-range feature matching in uncalibrated image mosaicing. In CVPR (1998), pp. 442-447.
[24] B o y k o v , Y ., a n d K o lm o g o r o v , V. An experim ental com parison of m in-cut/m ax- flow algorithm s for energy minim ization in vision. IE E E Trans. P attern Anal. Mach. Intell 26, 9 (2004), 1124-1137. [25] B o y k o v , Y. Y ., a n d J o l l y , M. P . Interactive graph cuts for optim al boundary and region segm entation of objects in N-D images. In IC C V (2001), pp. I: 105-112. [26] B o y k o v , Y. Y ., V e k s l e r , O ., a n d Z a b ih , R . Fast approxim ate energy minimiza­ tion via graph cuts. IE E E Trans. P attern Analysis and M achine Intelligence 23, 11 (Nov. 2001), 1222-1239. [27] B r a n d t , A. Multi-level adaptive solutions to boundary-value problems. M athem at­ ics o f Com putation 31, 138 (1977), 333-390. [28] B r ig g s , W . L., H e n s o n , V. E ., a n d M c C o r m ic k , S. F . A M ultigrid Tutorial (2nd Ed.). Society for Industrial and Applied M athem atics, Philadelphia, PA, USA, 2000. [29] B r o w n , D. C. Close-range cam era calibration. Photogrammetric Engineering 37, 8 (Aug. 1971), 855-866. 104 [30] B r o w n , M ., a n d L o w e , D . A utom atic panoram ic image stitching using invariant features. International Journal o f Com puter Vision (Jan. 2007). [31] B r o w n , M ., a n d L o w e , D. G. Recognising panoram as. In IC C V (2003), IE E E C om puter Society, pp. 1218-1227. [32] B r o w n , M ., S z e lis k i, R . S., a n d W i n d e r , S. A. J . M ulti-image m atching using multi-scale oriented patches. In C V P R (2005), pp. I: 510-517. [33] B u c h a n a n , A ., a n d F i t z g ib b o n , A. W . Combining local and global motion models for feature point tracking. In C V P R (2007), pp. 1-8. [34] C a p e l , D ., a n d Z is s e rm a n , A. A utom ated mosaicing w ith super-resolution zoom. Proc. C V P R (Jan. 1998). [35] C h am , T . J ., a n d C i p o l l a , R . A statistical framework for long-range feature m atching in uncalibrated image mosaicing. In C V P R (1998), pp. 442-447. 
[36] C h a n d r a n , L. S., a n d S iv a d a s a n , N. Geometric representation of graphs in low dimension. In Proceedings o f the 12th A nnual International Conference on Com­ puting and Combinatorics (Berlin, Heidelberg, 2006), C O C O O N ’06, Springer-Verlag, pp. 398-407. [37] C h a r t r a n d , G ., a n d H a r a r y , F . P lan a r perm u tatio n graphs. Ann. Inst. Henri Poincare 3, 4 (1967), 433-438. [38] C h e n , S. E . Quicktime VR: An image-based approach to virtu al environm ent navigation. In S IG G R A P H (1995), pp. 29-38. [39] C h o , S., a n d L ee , S. Fast m otion deblurring. A C M Trans. Graph 28, 5 (2009). [40] C h o w , E ., F a l g o u t , R . D ., H u , J . J ., T u m in a r o , R . S., a n d Y a n g , U. M. A survey of parallelization techniques for m ultigrid solvers. In Parallel Processing fo r Scientific Computing, M. A. Heroux, P. Raghavan, and H. D. Simon, Eds., vol. 20 of Software, Environm ents, and Tools. SIAM, Philadelphia, PA, Nov. 2006, pp. 179-201. ch. 10,. [41] C luE. Clue program, 2008. h ttp://w w w .nsf.gov/pubs/2008/nsf08560 /nsf08560.htm . [42] C o h e n , M. F ., S h a d e , J ., H i l l e r , S., a n d D e u s s e n , O. Wang tiles for image and tex tu re generation. A C M Trans. Graph 22, 3 (2003), 287-294. [43] C o r m e n , T . H ., L e i s e r s o n , C. E ., R i v e s t , R . L., a n d S t e i n , C. Introduction to Algorithms, Third E dition, 3rd ed. The M IT Press, Cambridge, MA, 2009. [44] C o r r e a , C. D ., a n d M a, K .-L . Dynamic video narratives. A C M Trans. Graph 29, 4 (2010). [45] C rim in isi, A ., S h a r p , T ., R o t h e r , C., a n d P E r e z , P . Geodesic image and video editing. A C M Trans. Graph 29, 5 (2010), 134. [46] D av is, J . E . Mosaics of scenes w ith moving objects. In C V P R (1998), pp. 354-360. http://www.nsf.gov/pubs/2008/nsf08560 105 [47] D e a n , J ., a n d G h e m a w a t, S. MapReduce: Simplified d a ta processing on large clusters. C AC M 51, 1 (2008), 107-113. 
[48] D e l o n g , A ., a n d B o y k o v , Y . A scalable graph-cut algorithm for N-D grids. In C V P R (2008), IE E E C om puter Society. [49] D em m el, J . W . Applied Numerical Linear Algebra. SIAM, 1997. [50] D i j k s t r a , E . W . A note on two problems in connexion w ith graphs. Numerische M athem atik 1 (1959), 269-271. [51] D o r r , F . W . The direct solution of th e discrete poisson equation on a rectangle. S IA M Review 12, 2 (April 1970), 248-263. [52] E d e l s b r u n n e r , H ., a n d M u c k e , E . P . Simulation of simplicity: A technique to cope w ith degenerate cases in geometric algorithm s. A C M Trans. Graphics 9, 1 (Jan. 1990), 67-104. [53] E f r o s , A. A ., a n d F r e e m a n , W . T . Image quilting for tex tu re synthesis and transfer. In S IG G R A P H (2001), pp. 341-346. [54] F a r b m a n , Z., H o f f e r , G ., L ip m an , Y ., C o h e n - O r , D ., a n d L is c h in s k i, D. Coordinates for instan t image cloning. In S IG G R A P H ’09: Proceedings o f the 36th A nnual Conference on Computer Graphics and Interactive Techniques (New York, NY, USA, 2009), ACM. [55] F a t t a l , R ., L is c h in s k i, D ., a n d W e r m a n , M. G radient dom ain high dynam ic range compression. In S IG G R A P H ’02: Proceedings o f the 29th A nnual Conference on Com puter Graphics and Interactive Techniques (New York, NY, USA, 2002), ACM, pp. 249-256. [56] F e r g u s , R ., S in g h , B ., H e r tz m a n n , A ., R o w e is, S. T ., a n d F r e e m a n , W . T. Removing cam era shake from a single photograph. A C M Transactions on Graphics 25, 3 (July 2006), 787-794. [57] F i n l a y s o n , G. D ., H o r d l e y , S. D ., a n d D r e w , M. S. Removing shadows from images. In E C C V ’02: Proceedings o f the 7th European Conference on Computer Vision-Part I V (London, UK, 2002), Springer-Verlag, pp. 823-836. [58] F l u s s e r , J ., B o ld y s , J ., a n d Z ito v A , B. 
M oment forms invariant to ro tatio n and blur in arb itrary num ber of dimensions. IE E E Trans. P attern Anal. Mach. Intell 25, 2 (2003), 234-246. [59] F l u s s e r , J ., a n d S u k , T . D egraded image analysis: An invariant approach. IE E E Transactions on P attern Analysis and M achine Intelligence 20, 6 (1998), 590-603. [60] F o r d , L. R ., a n d F u l k e r s o n , D. R . M aximal flow th rough a network. Canadian Journal o f M athem atics 8 (1956), 399-404. [61] F r e e d m a n , D ., a n d Z h a n g , T . Interactive graph cut based segm entation w ith shape priors. In C V P R (2005), pp. I: 755-762. [62] G a l l , D. L. Mpeg: A video compression stan d ard for m ultim edia applications. Com munications o f the A C M (Jan 1991). 106 [63] G ig a P a n . h ttp ://w w w .g ig a p a n .o rg /a b o u t.p h p . [64] G o ld m a n , D. B. V ignette and exposure calibration and com pensation. IE E E Trans. P attern Anal. Mach. Intell 32, 12 (2010), 2276-2288. [65] G o o g l e . Google E a rth h ttp ://e a rth .g o o g le .c o m /. [66] G o r t l e r , S., a n d C o h e n , M. V ariational modeling w ith wavelets. In Sym posium on Interactive 3D Graphics (1995), pp. 35-42. [67] G o s h t a s b y , A. A. 2-D and 3-D Image Registration: For Medical, Rem ote Sensing, and Industrial Applications. Wiley-Interscience, New York, NY, M ar. 2005. [68] G o t t f r id , D . Self-service, prorated super com puting fun!, 2007. h ttp ://o p e n .b lo g s . nytim es.com /2007/11/01/self-service-prorated-super-com puting-fun. [69] G r a c i a s , N. R . E ., M a h o o r , M. H ., N e g a h d a r i p o u r , S., a n d G l e a s o n , A. C. R . Fast image blending using w atersheds and graph cuts. Image and Vision Computing 27, 5 (Apr. 2009), 597-607. [70] G r i e b e l , M ., a n d Z u m b u sc h , G. Parallel m ultigrid in an adaptive pde solver based on hashing and space-filling curves. Parallel Comput. 25, 7 (1999), 827-843. [71] H a d o o p . 
A pplications and organizations using hadoop, 2012. h ttp ://w ik i.a p a c h e . org/hadoop/Pow eredB y. [72] H a s s in , R . M aximum flow in (s, t)-p lan ar networks. Inform . Proc. Lett. 13 (1981), 107. [73] H e a t h , N g , a n d P e y t o n . Parallel algorithm s for sparse linear systems. S IR E V : S IA M Review 33 (1991). [74] H iR IS E . High Resolution Imaging Science E xperim ent h ttp ://h iris e .lp l.a riz o n a .e d u /. [75] H o c k n e y , R . W . A fast direct solution of Poisson’s equation using Fourier analysis. Journal o f the A C M 12, 1 (Jan. 1965), 95-113. [76] H o r n , B. K . P . D eterm ining lightness from an image. Comput. Graphics Image Processing 3, 1 (Dec. 1974), 277-299. [77] H u g in . h ttp ://h u g in .so u rcefo rg e.n et. [78] I k e d a , S., S a t o , T ., a n d Y o k o y a , N. High-resolution panoram ic movie gener­ ation from video stream s acquired by an om nidirectional m ulti-cam era system. In Proceedings o f IE E E International Conference on M ultisensor Fusion and Integration fo r Intelligent System s (Aug. 2003), p. 155. [79] J ia , J ., S un, J ., T a n g , C .-K ., a n d Shum , H .-Y . D rag-and-drop pasting. In S IG G R A P H ’06: A C M S IG G R A P H 2006 Papers (New York, NY, 2006), ACM, pp. 631-637. [80] J ia , J ., a n d T a n g , C .-K . Tensor voting for image correction by global and local intensity alignment. IE E E Trans. P attern Anal. Mach. Intell 27, 1 (2005), 36-50. http://www.gigapan.org/about.php http://earth.google.com/ http://open.blogs http://wiki.apache http://hirise.lpl.arizona.edu/ http://hugin.sourceforge.net 107 [81] J ia , J ., a n d T a n g , C .-K . Image stitching using stru ctu re deform ation. IE E E Trans. P attern Anal. Mach. Intell 30, 4 (2008), 617-631. [82] J ia , J. Y ., a n d T a n g , C. K . Image registration w ith global and local luminance alignment. In IC C V (2003), pp. 156-163. [83] J ia , J . Y ., a n d T a n g , C. K . 
E lim inating stru ctu re and intensity misalignment in image stitching. In IC C V (2005), pp. II: 1651-1658. [84] J o h n s o n , D . B. Efficient algorithm s for shortest paths in sparse networks. Journal o f the A C M 24, 1 (Jan. 1977), 1-13. [85] J o s h i, N ., K a n g , S. B., Z i tn ic k , C. L., a n d S z e lis k i, R . Image deblurring using inertial m easurem ent sensors. A C M Trans. Graph 29, 4 (2010). [86] J o s h i, N ., S z e lis k i, R ., a n d K r ie g m a n , D. J . P S F estim ation using sharp edge prediction. In C V P R (2008), pp. 1-8. [87] K a z h d a n , M. R econstruction of solid models from oriented point sets. In Euro­ graphics Sym posium on Geometry Processing (2005), pp. 73-82. [88] K a z h d a n , M ., B o l i t h o , M ., a n d H o p p e , H. Poisson surface reconstruction. In Eurographics Sym posium on Geometry Processing (2006), pp. 61-70. [89] K a z h d a n , M ., a n d H o p p e , H. Stream ing m ultigrid for gradient-dom ain operations on large images. A C M ToG. 27, 3 (2008). [90] K a z h d a n , M ., S u r e n d r a n , D ., a n d H o p p e , H. D istributed gradient-dom ain processing of p lanar and spherical images. Transactions on Graphics (T O G 29, 2 (M ar 2010). [91] K a z h d a n , M. M ., a n d H o p p e , H. Stream ing m ultigrid for gradient-dom ain operations on large images. A C M Trans. Graph 27, 3 (2008). [92] K o lm o g o r o v , V ., a n d Z a b ih , R . W h at energy functions can be minimized via graph cuts? IE E E Trans. P attern Anal. Mach. Intell 26, 2 (2004), 147-159. [93] K o p f , J ., C o h e n , M. F ., L is c h in s k i, D ., a n d U y t t e n d a e l e , M. Joint bilateral upsampling. A C M Trans. Graph 26, 3 (2007), 96. [94] K o p f , J ., U y t t e n d a e l e , M ., D e u s s e n , O ., a n d C o h e n , M. F . C apturing and viewing gigapixel images. A C M Trans. Graph 26, 3 (2007), 93. [95] K o u r o g i, M ., K u r a t a , T ., H o s h in o , J ., a n d M u r a o k a , Y . 
Real-tim e image mosaicing from a video sequence. In IC IP (4) (1999), pp. 133-137. [96] K r u g e r , S., a n d C a lw a y , A. Image registration using m ultiresolution frequency dom ain correlation. Proc. B ritish Machine Vision C onf (Jan 1998). [97] K u g lin , C. D ., a n d H in es, D . C. The phase correlation image alignment method. Assorted Conferences and Workshops (Sept. 1975), 163-165. 108 [98] K u m a r , S., P a s c u c c i , V ., V is h w a n a th , V ., C a r n s , P ., H e r e l d , M ., L a th a m , R ., P e t e r k a , T ., P a p k a , M ., a n d R o ss, R . Towards parallel access of m ulti­ dimensional, m ulti-resolution scientific d ata. In Petascale Data Storage Workshop (PD SW ), 2010 5th (Nov. 2010), pp. 1 -5. [99] K u m a r , S., V is h w a n a th , V ., C a r n s , p . , Sum m a, B., S c o r z e l l i , G ., P a s c u c c i , V ., R o s s , R ., C h e n , J ., K o l l a , H ., a n d G r o u t , R . Pidx: Efficient parallel i/o for m ulti-resolution multi-dimensional scientific d atasets. In Proceedings o f IE E E Cluster 2011 (Sep. 2011). [100] K u n d u r , D ., a n d H a t z i n a k o s , D. Blind image deconvolution. IE E E Signal Processing Magazine 13, 3 (May 1996), 43-64. [101] K w a t r a , V ., S c h o d l , A ., E s s a , I., T u r k , G ., a n d B o b ic k , A. G raphcut textures: Image and video synthesis using graph cuts. A C M Transactions on Graphics 22, 3 (July 2003), 277-286. [102] L a w d e r , J . K ., a n d K in g , P . J . H. Using space-filling curves for multi-dimensional indexing. In L N C S (2000), Springer Verlag, pp. 20-35. [103] L e v in , A ., Z o m e t, A ., P e l e g , S., a n d W e iss, Y . Seamless image stitching in the gradient domain. In E C C V (2004), pp. Vol IV: 377-389. [104] Li, Y ., S u n , J ., T a n g , C .-K ., a n d Shum , H .-Y . Lazy snapping. A C M Trans. Graph 23, 3 (2004), 303-308. [105] L is c h in s k i, D ., F a r b m a n , Z., U y t t e n d a e l e , M ., a n d S z e lis k i, R . 
Interactive local adjustm ent of tonal values. A C M ToG 25, 3 (2006), 646-653. [106] L iu, J ., a n d S u n , J . Parallel graph-cuts by adaptive b o ttom -up merging. In C VP R (2010), IEEE, pp. 2181-2188. [107] L o m b a e r t , H ., S u n , Y . Y ., G r a d y , L., a n d X u, C. Y . A multilevel banded graph cuts m ethod for fast image segm entation. In IC C V (2005), pp. I: 259-265. [108] L o w e , D . G. O bject recognition from local scale-invariant features. In IC C V (1999), p p . 1150-1157. [109] L u c a s , B ., a n d K a n a d e , T . An iterative image registration technique w ith an application to stereo vision. International Jo in t Conference on Artificial Intelligence 3 (1981), 674-679. [110] M a li n g , D. H. Coordinate System s and Map Projections. B utterw orth-H einem ann, W oburn, MA, 1993. [111] M a n n , S., a n d P i c a r d , R . W . V irtual bellows: C onstructing high quality stills from video. In IC IP (1) (1994), pp. 363-367. [112] M a t u n g k a , R ., Z h e n g , Y ., a n d E w in g , R . Image registration using adaptive polar transform . Image Processing, IE E E Transactions on 18, 10 (2009), 2340 - 2354. [113] M c C a n n , J. Recalling th e single-FFT direct poisson solve. In S IG G R A P H Posters (2008), ACM, p. 71. 109 [114] M c C a n n , J ., a n d P o l l a r d , N. S. Real-tim e gradient-dom ain painting. In S IG G R A P H ’08: A C M S IG G R A P H 2008 papers (New York, NY, USA, 2008), ACM, pp. 1-7. [115] M c G u ir e , M. An image registration technique for recovering rotation, scale and tran slatio n param eters. N E C Res. Inst. Tech. Rep., T R (1998), 98-018. [116] M c L a u c h l a n , P ., a n d J a e n i c k e , A. Image mosaicing using sequential bundle adjustm ent. Image and Vision Computing (Jan. 2002). [117] M e e h a n , J . Panoramic Photography. Amphoto, Oct. 1990. [118] M e g a P O V . h ttp ://m e g a p o v .in e ta rt.n e t. [119] M ilg r a m , D. L. C om puter m ethods for creating photomosaics. 
IE E E Trans. Com puter 23 (1975), 1113-1119. [120] M ilg r a m , D . L. A daptive techniques for photomosaicking. IE E E Trans. Computer 26 (1977), 1175-1180. [121] M i l l s , A ., a n d D u d e k , G. Image stitching w ith dynam ic elements. Image and Vision Computing 27, 10 (Sept. 2009), 1593-1602. [122] M o r t e n s e n , E. N ., a n d B a r r e t t , W . A. Intelligent scissors for image composi­ tion. In S IG G R A P H (1995), pp. 191-198. [123] M o r t e n s e n , E. N ., a n d B a r r e t t , W . A. Interactive segm entation w ith intelligent scissors. Graphical models and image processing: GM IP 60, 5 (Sept. 1998), 349-384. [124] N a g a h a s h i, T ., F u jiy o s h i, H ., a n d K a n a d e , T . Image segm entation using iterated graph cuts based on multi-scale smoothing. In A C C V (2007), pp. II: 806-816. [125] N A SA . NASA Blue M arble h ttp ://e arth o b serv ato ry .n asa.g o v / F eatu res/B lu eM arb le/. [126] N ie d e r m e ie r , R ., R e i n h a r d t , K ., a n d S a n d e r s , P . Towards optim al locality in meshindexings. In Proc. Fundamentals o f Com putation Theory (1997), vol. 1279 of LNCS, Spinger, pp. 364-375. [127] O ja n s iv u , V ., a n d H e i k k i l a , J . Image registration using blur-invariant phase correlation. IE E E Signal Processing Letters 14, 7 (July 2007), 449-452. [128] P a s c u c c i , V ., a n d F r a n k , R . J . Hierarchical indexing for out-of-core access to m ulti-resolution d ata. In Hierarchical and Geometrical Methods in Scientific Visualization, M athem atics and Visualization. Springer, New York, NY. [129] P a s c u c c i , V ., a n d F r a n k , R . J . Global static indexing for real-tim e exploration of very large regular grids. In Supercomputing (S C ’01) (2001), p. 2. [130] P a s c u c c i , V ., L a n e y , D . E ., F r a n k , R . J ., G y g i, F ., S c o r z e l l i , G ., L in se n , L., a n d H a m a n n , B. R eal-tim e m onitoring of large scientific simulations. 
In S A C (2003), ACM, pp. 194-198. http://megapov.inetart.net http://earthobservatory.nasa.gov/ 110 [131] P a v l o , A ., P a u l s o n , E ., R a s in , A ., A b a d i, D . J ., D e W i t t , D. J ., M a d d e n , S. R ., a n d S t o n e b r a k e r , M. A com parison of approaches to large scale d a ta analysis. In SIG M O D (Providence, RI, USA, 2009). [132] P e l e g , S., R o u s s o , B ., A c h a , A. R ., a n d Z o m e t, A. Mosaicing on adaptive manifolds. IE E E Trans. P attern Analysis and Machine Intelligence 22, 10 (Oct. 2000), 1144-1154. [133] P E r e z , P ., G a n g n e t , M ., a n d B l a k e , A. Poisson image editing. A C M Trans. Graph. 22, 3 (2003), 313-318. [134] P h ili p , S., Summa, B ., B r e m e r , P .- T ., a n d P a s c u c c i , V. Parallel gradient dom ain processing of massive images. In Eurographics Sym posium on Parallel Graph­ ics and Visualization (Llandudno, Wales, UK, 2011), T. Kuhlen, R. P ajarola, and K. Zhou, Eds., Eurographics Association, pp. 11-19. [135] P h ili p , S., Summa, B ., P a s c u c c i , V ., a n d B r e m e r , P .- T . H ybrid cpu-gpu solver for gradient dom ain processing of massive images. In IC P A D S ’11: Proceedings o f the 2011 IE E E 17th International Conference on Parallel and Distributed System s (W ashington, DC, USA, 2011), IE E E C om puter Society, pp. 244-251. [136] P r e t t o , A ., M e n e g a t t i , E ., B e n n e w it z , M ., B u r g a r d , W ., a n d P a g e l l o , E . A visual odom etry framework robust to motion blur. In IC R A (2009), IEEE, pp. 2250-2257. [137] P T g u i, 2012. h ttp ://w w w .p tg u i.c o m . [138] R a s t o g i , A ., a n d K r i s h n a m u r t h y , B. Localized hierarchical graph cuts. In IC V G IP (2008), IEEE, pp. 163-170. [139] R i c k e r , P . M . A direct m ultigrid poisson solver for oct-tree adaptive meshes. The Astrophysical Journal Supplem ent Series 176 (2008), 293-300. [140] R o b e r t s , F . On the boxicity and cubicity o f a graph. 
Recent Progress in Combina­ torics, 1969. [141] R o s g e n , B., a n d S t e w a r t , L. Complexity results on graphs w ith few cliques. Discrete M athem atics & Theoretical Computer Science 9, 1 (2007). [142] R o t h e r , C ., K o lm o g o r o v , V ., a n d B l a k e , A. G rabcut: Interactive foreground ex traction using iterated graph cuts. A C M Trans. Graph 23, 3 (2004), 309-314. [143] S a g a n , H. Space-Filling Curves. Spinger-Verlag, New York, NY, 1994. [144] S a n d , P ., a n d T e l l e r , S. Video m atching. ToG 23, 3 (2004), 592-599. [145] Shum , H. Y ., a n d S z e lis k i, R . S. C onstruction and refinement of panoram ic mosaics w ith global and local alignment. In IC C V (1998), pp. 953-956. [146] S im c h o n y , T ., a n d C h e l l a p p a , R . Direct analytical m ethods for solving Poisson equations in com puter vision problems. IE E E Trans. P attern Anal. Mach. Intell. 12 (1990), 435-446. http://www.ptgui.com 111 [147] S n a v e ly , N ., G a r g , R ., S e itz , S. M ., a n d S z e lis k i, R . F inding paths through th e w orld’s photos. In SIGGraph-08 (2008), pp. xx-yy. [148] S t o o k e y , j . , X ie, Z., C u t l e r , B ., F r a n k l i n , W . R ., T r a c y , D. M ., a n d A n d r a d e , M. V. A. Parallel O D ETLA P for te rra in compression and reconstruction. In GIS (2008), W. G. Aref, M. F. Mokbel, and M. Schneider, Eds., ACM, p. 17. [149] Sum m a, B ., S c o r z e l l i , G ., J i a n g , M ., B r e m e r , P .- T ., a n d P a s c u c c i , V. Interactive editing of massive imagery made simple: Turning A tla n ta into A tlantis. A C M Trans. Graph. 30, 2 (Apr. 2011), 7:1-7:13. [150] Sum m a, B ., T i e r n y , J ., a n d P a s c u c c i , V. P anoram a weaving: F ast and flexible seam processing. A C M Trans. Graph. 31, 4 (July 2012), 83:1-83:11. [151] Sum m a, B ., V o, H. T ., S ilv a , C ., a n d P a s c u c c i , V. Massive image editing on the cloud. 
In IA S T E D International Conference on Computational Photography (CPhoto 2011) (2011). [152] S u n , J ., J ia , J ., T a n g , C .-K ., a n d Shum , H .-Y . Poisson m atting. A C M Trans. Graph. 23, 3 (2004), 315-321. [153] S z e lis k i, R . Image mosaicing for tele-reality applications. In Proceedings o f the Second IE E E Workshop on Applications o f Com puter Vision (1994), pp. 44-53. [154] S z e lis k i, R . Video mosaics for virtu al environm ents. Com puter Graphics and Applications, IE E E 16, 2 (1996), 22 - 30. [155] S z e lis k i, R . Image alignment and stitching: A tuto rial. Foundations and Trends in Com puter Graphics and Vision 2, 1 (2006). [156] S z e lis k i, R ., a n d Shum , H .-Y . C reating full view panoram ic image mosaics and environm ent maps. S IG G R A P H ’97: Proceedings o f the 24th annual conference on Com puter graphics and interactive techniques (Aug 1997). [157] S z e lis k i, R . S. Video mosaics for virtu al environm ents. IE E E Computer Graphics and Applications 16, 2 (Mar. 1996), 22-30. [158] T a i, Y. W ., D u , H ., B r o w n , M. S., a n d Lin, S. Im age/video deblurring using a hybrid cam era. In C V P R (2008), pp. 1-8. [159] T a s d iz e n , T ., K o s h e v o y , p . , G rim m , B. C., A n d e r s o n , J . R ., J o n e s , B. W ., W a t t , C. B ., W h i t a k e r , R . T ., a n d M a r c , R . E . A utom atic mosaicking and volume assembly for high-throughput serial-section transm ission electron microscopy. Journal o f Neuroscience Methods 193, 1 (2010), 132 - 144. [160] T o l e d o , S. A survey of out-of-core algorithm s in numerical linear algebra. In E xternal m em ory algorithms, Dimacs Series In Discrete M athem atics And Theoretical C om puter Science. American M athem atical Society, Boston, MA, 1999, pp. 161-179. [161] T r i g g s , B ., M c L a u c h l a n , P ., H a r t l e y , R . I., a n d F itz g ib b o n , A. W . Bundle adjustm ent: A m odern synthesis. 
In Vision Algorithms Workshop: Theory and Practice (1999), pp. 298-372. 112 [162] T u y t e l a a r s , T ., a n d M ik o l a j c z y k , K. Local invariant feature detectors: A survey. Foundations and Trends in Computer Graphics and Vision 3, 3 (2007), 177­ 280. [163] T z i m r o p o u lo s , G ., A r g y r i o u , V ., Z a f e i r i o u , S., a n d S t a t h a k i , T . R obust fft-based scale-invariant image registration w ith image gradients. P attern Analysis and Machine Intelligence, IE E E Transactions on D O I - 10.1109/34.55103 PP, 99 (2010), 1 - 1. [164] U SG S,. U nited States Geological Survey h ttp ://w w w .u sg s.g o v /. [165] U y t t e n d a e l e , M. T ., E d e n , A ., a n d S z e lis k i, R . S. E lim inating ghosting and exposure artifacts in image mosaics. In C V P R (2001), pp. II:509-516. [166] V a l g r e n , C., a n d L i l i e n t h a l , A. J . SIFT, SURF & seasons: A ppearance-based long-term localization in outdoor environm ents. Robotics and A utonom ous System s 58, 2 (2010), 149-156. [167] V i n e e t , V ., a n d N a r a y a n a n , P . J. CUDA cuts: Fast graph cuts on th e GPU. In Com puter Vision on GPU (2008), pp. 1-8. [168] V i t t e r , J . S. E xtern al memory algorithm s and d a ta structures: Dealing w ith massive d ata. A C M Comput. Surv. 33, 2 (2001), 209-271. [169] V o , H ., B r o n s o n , J ., Summa, B ., C o m b a, J ., F r e i r e , J ., H o w e , B ., P a s c u c c i , V ., a n d S ilv a , C. Parallel visualization on large clusters using m apreduce. In Pro­ ceedings o f the 2011 IE E E Sym posium on Large-Scale Data Analysis and Visualization (LD AV) (2011), p. (to appear). [170] V o , H. T ., O s m a ri, D . K ., Sum m a, B ., C o m b a, J . L. D ., P a s c u c c i , V ., a n d S ilv a , C. T . Stream ing-enabled parallel dataflow architecture for multicore systems. Comput. Graph. Forum 29, 3 (2010), 1073-1082. [171] W a n g , B ., Sum m a, B ., P a s c u c c i , V ., a n d V e jd e m o - J o h a n s s o n , M. 
Branching and circular features in high dim ensional d ata. Visualization and Computer Graphics, IE E E Transactions on 17, 12 (dec. 2011), 1902-1911. [172] W a r d , G. Hiding seams in high dynam ic range panoram as. In A P G V (2006), R. W. Flem ing and S. Kim, Eds., vol. 153 of A C M International Conference Proceeding Series, ACM, p. 150. [173] W e iss, Y . Deriving intrinsic images from image sequences. In International Confer­ ence on Com puter Vision (2001), pp. 68-75. [174] W o o d , D. N ., F i n k e l s t e i n , A ., H u g h e s , J . F ., T h a y e r , C. E ., a n d S a le s i n , D. M ultiperspective panoram as for cel anim ation. In S IG G R A P H (1997), pp. 243-250. [175] W u , C. SiftGPU: A G PU im plem entation of scale invariant feature transform (SIFT), 2007. h ttp ://c s .u n c .e d u / ccw u/siftgpu. [176] X io n g , Y ., a n d P u l l i , K . Fast image labeling for creating high-resolution panoram ic images on mobile devices. In IS M (2009), IE E E C om puter Society, pp. 369-376. http://www.usgs.gov/ http://cs.unc.edu/ 113 [177] X io n g , Y ., a n d P u l l i , K . Fast panoram a stitching for high-quality panoram ic images on mobile phones. IE E E Transactions on Consumer Electronics (Jan 2010). [178] X io n g , Y ., W a n g , X ., T ic o , M ., a n d L ia n g , C. Panoram ic imaging system for mobile devices. S IG G R A P H ’09: Posters (Jan 2009). [179] X u , D ., C h e n , Y ., X io n g , Y ., Q ia o , C ., a n d H e, X. On th e complexity o f/an d algorithm s for finding shortest p ath w ith a disjoint counterpart. IE E E /A C M Trans. on Networking 14, 1 (2006), 147-158. [180] Ya h o o ! Yahoo! expands its m45 cloud com puting initiative, adding to p universities to supercom puting research cluster. h t t p : // r e s e a r c h .y a h o o .c o m /n e w s /3 3 7 4 . [181] Y o o n , M .-S .-E ., a n d L i n d s t r o m , M .-P . Mesh layouts for block-based caches. 
work_ikqipowzqfecdpdss6txvj6igm ---- ORAL PRESENTATION Open Access

Clinical measurement of sagittal trunk curvatures: photographic angles versus Rippstein plurimeter angles in healthy school children

Łukasz Stoliński1*, Dariusz Czaprowski2, Mateusz Kozinoga1, Tomasz Kotwicki3

From 11th International Conference on Conservative Management of Spinal Deformities - SOSORT 2014 Annual Meeting, Wiesbaden, Germany, 8-10 May 2014

Background
Digital photography is a simple method for calculating quantitative photographic parameters of body posture in the frontal and sagittal planes.

Aim
The aim of the study was to determine the correlation between measurements of the sagittal trunk curvatures carried out with two diagnostic tools: photography and the Rippstein plurimeter.

Design
This is a reliability study.

Methods
Sixty-one asymptomatic children (31 girls, 30 boys) aged 7-9 years (mean 7.9 ±0.8) were assessed once by one observer for the sagittal curvatures of the trunk: thoracic kyphosis (TK), lumbar lordosis (LL) and sacral slope (SS), first with digital photography and then with the Rippstein plurimeter.
Statistical analysis was performed using the paired Student t-test, the Wilcoxon matched-pairs test and the Pearson correlation coefficient.

Results
There was no significant difference for the measurement of TK performed with photography versus the plurimeter (43.3° ±8.8 vs. 43.0° ±8.4, p=0.47). Differences were found for LL (39.8° ±8.2 vs. 38.3° ±8.5, p<0.0001) and SS (23.3° ±6.0 vs. 22.7° ±6.4, p=0.024). Significant correlations between measurements performed with photography and with the Rippstein plurimeter were observed: TK (r=0.949, p<0.0001), LL (r=0.951, p<0.0001) and SS (r=0.944, p<0.0001).

Conclusions
Although significant differences were found for LL and SS, the differences between measurements are small, so it seems that both photography and the Rippstein plurimeter can be used for the assessment of sagittal trunk curvatures in clinical practice.

Competing interests
There was no conflict of interest in relation to this study.

Authors' details
1Rehasport Clinic, Spine Disorders Unit, Department of Pediatric Orthopedics and Traumatology, University of Medical Sciences, Poznań, Poland. 2Department of Physiotherapy, Józef Rusiecki University College, Olsztyn, Poland. 3Spine Disorders Unit, Department of Pediatric Orthopedics and Traumatology, University of Medical Sciences, Poznań, Poland.

Published: 4 December 2014

doi:10.1186/1748-7161-9-S1-O15
Cite this article as: Stoliński et al.: Clinical measurement of sagittal trunk curvatures: photographic angles versus Rippstein plurimeter angles in healthy school children. Scoliosis 2014 9(Suppl 1):O15.
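The agreement analysis described above — a paired comparison of the two tools plus the Pearson correlation coefficient — can be sketched in Python with NumPy. The angle values below are invented for illustration only (they are not the study data), and the p-values reported in the abstract require the full sample of 61 children.

```python
import numpy as np

# Illustrative thoracic-kyphosis angle pairs in degrees -- invented values,
# NOT the study data, chosen only to show the analysis steps.
photo = np.array([41.0, 47.5, 38.2, 52.1, 44.8, 36.9, 49.3, 42.6])
pluri = np.array([40.4, 46.9, 38.0, 51.2, 44.1, 36.2, 48.8, 42.0])

# Paired comparison: Student t statistic of the per-child differences.
diff = photo - pluri
t_stat = diff.mean() / (diff.std(ddof=1) / np.sqrt(len(diff)))

# Agreement between the two tools: Pearson correlation coefficient.
def pearson_r(x, y):
    xc, yc = x - x.mean(), y - y.mean()
    return float((xc * yc).sum() / np.sqrt((xc * xc).sum() * (yc * yc).sum()))

r = pearson_r(photo, pluri)
```

With the real paired measurements, `scipy.stats.ttest_rel`, `scipy.stats.wilcoxon` and `scipy.stats.pearsonr` would additionally return the significance levels quoted in the abstract.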
Stoliński et al. Scoliosis 2014, 9(Suppl 1):O15
http://www.scoliosisjournal.com/supplements/9/S1/O15

© 2014 Stoliński et al; licensee BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.

work_ikzxuus45jgaxcgjgt4jorskey ---- ORAL PRESENTATION Open Access

Analysis of anterior trunk symmetry index (ATSI). Preliminary report

L Stolinski1*, T Kotwicki2, D Czaprowski1,2,3, J Chowanska1,2

From 9th International Conference on Conservative Management of Spinal Deformities - SOSORT 2012 Annual Meeting, Milan, Italy, 10-12 May 2012

Background
Spinal deformities and postural disorders can be assessed by evaluating trunk surface deformity. Usually the back shape is assessed; however, the anterior trunk can also develop deformity, which is observed by the patient more easily than the back surface.

Aim of the study
To introduce a new parameter, the Anterior Trunk Symmetry Index (ATSI), for anterior trunk deformity assessment.
Methods Seventy primary school children, free of idiopathic scolio- sis, both sexes, aged 6-7 years, mean 6.9 ± 0.3 years were examined with digital photography of their trunk, taken from the front in standardized conditions. The anatomi- cal landmarks were: sternal notch, acromions, axilla folds, waist lines, and umbilicus. ATSI was defined as the sum of six indices: three frontal plane asymmetry indices (one for sternal notch, axilla folds and waist lines, respec- tively) and three height difference indices, (one for acro- mions, axilla folds, and waist lines, respectively). The software was developed for semi-automatic calculation of ATSI, after the anatomical points are indicated on a digi- tal photo by the observer. The intra-observer error was calculated by the first author, by measuring four times the pictures of 20 children, in the interval of at least one day. The inter-observer error was calculated by one sur- geon, and three experienced physiotherapists, by measur- ing the pictures of 20 children. The normal value limit was calculated as mean ± 3SD. Results The assessment of the ATSI on digital photography took around 1 minute. The mean ATSI value for 70 children was 22.6 ± 10.8. The intra-observer error was 1.23. The inter-observer error for the four observers was 3.08. The normal value limit was 32.3. Conclusion This new surface parameter can be easily calculated on regular digital photographs of the anterior trunk. Both intra- and inter-observer errors are small, indicating possible reliability of ATSI for the assessment of ante- rior trunk asymmetry in children. Further studies are needed to assess the clinical usefulness of the ATSI parameter. Author details 1Rehasport Clinic, Poznan, Poland. 2Spine Disorders Unit, Department of Pediatric Orthopedics and Traumatology, University of Medical Sciences, Poznan, Poland. 3The Faculty of Physiotherapy, Jozef Rusiecki University, Olsztyn, Poland. Published: 3 June 2013 References 1. 
Suzuki N, Inami K, Ono T, Kohno K, Asher MA: Analysis of posterior trunk symmetry index (POTSI) in scoliosis, part 1. Stud Health Technol Inform 1999, 59:81-84.
2. Inami K, Suzuki N, Ono T, Yamashita Y, Kohno K, Morisue H: Analysis of posterior trunk symmetry index (POTSI) in scoliosis, part 2. Stud Health Technol Inform 1999, 59:85-88.
3. Minguez MF, Buendia M, Cibrian RM, Salvador R, Laguia M, Martin A, Gomar F: Quantifier variables of the back surface deformity obtained with a noninvasive structured light method: evaluation of their usefulness in idiopathic scoliosis diagnosis. Eur Spine J 2007, 16(1):73-82.

doi:10.1186/1748-7161-8-S1-O25
Cite this article as: Stolinski et al.: Analysis of anterior trunk symmetry index (ATSI). Preliminary report. Scoliosis 2013, 8(Suppl 1):O25.

1Rehasport Clinic, Poznan, Poland
Full list of author information is available at the end of the article
Stolinski et al. Scoliosis 2013, 8(Suppl 1):O25
http://www.scoliosisjournal.com/content/8/S1/O25
© 2013 Stolinski et al; licensee BioMed Central Ltd.
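An ATSI-style score as described in the Methods can be sketched in a few lines. The abstract does not give the exact index definitions, so the formulas below are assumptions by analogy with POTSI: each index is a deviation expressed as a percentage of trunk height, the vertical reference axis passes through the umbilicus, and trunk height is taken from sternal notch to umbilicus.

```python
def atsi(sternal_notch, acromions, axillae, waists, umbilicus):
    """ATSI-style score (assumed formulas, by analogy with POTSI).

    Landmarks are (x, y) pixel coordinates; paired landmarks are
    ((x_left, y_left), (x_right, y_right)). Returns the sum of three
    frontal-plane asymmetry indices and three height-difference indices,
    each expressed as a percentage of trunk height."""
    trunk_h = abs(sternal_notch[1] - umbilicus[1])   # normalising distance (assumed)
    ref_x = umbilicus[0]                             # vertical reference axis (assumed)
    mid_x = lambda p: (p[0][0] + p[1][0]) / 2.0
    # Frontal-plane asymmetry: horizontal deviation from the reference axis.
    frontal = [abs(sternal_notch[0] - ref_x),        # sternal notch
               abs(mid_x(axillae) - ref_x),          # axilla folds
               abs(mid_x(waists) - ref_x)]           # waist lines
    # Height-difference indices: vertical mismatch within each landmark pair.
    height = [abs(p[0][1] - p[1][1]) for p in (acromions, axillae, waists)]
    return sum(100.0 * d / trunk_h for d in frontal + height)
```

A perfectly symmetric trunk scores 0; a 2-pixel waist-line height difference over a 100-pixel trunk contributes 2 to the score.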
work_ilik5dxg4vgi7nsrsmwxfgtqf4 ----

A NOVEL SUB-PIXEL MATCHING ALGORITHM BASED ON PHASE CORRELATION USING PEAK CALCULATION

Junfeng Xie a, Fan Mo a b, Chao Yang a, Pin Li a d, Shiqiang Tian a c
a Satellite Surveying and Mapping Application Center of China, NO.1 Baisheng Village, Beijing - xiejf@sasmac.cn, yangc@sasmac.cn
b Information Engineering University, No.62 Kexue Road, Zhengzhou - surveymofan@163.com
c Chang'an University, Middle-section of Nan'er Huan Road, Xi'an - 835301221@qq.com
d Liaoning Project Technology University, People Street, Fuxin - 1076760488@qq.com

Commission I, WG I/3

KEY WORDS: Image Matching, Phase Correlation, Peak Calculation, Window Constraint, Correlation Coefficient

ABSTRACT:
The matching accuracy of homonymy (conjugate) points in stereo images is a key issue in photogrammetry, since it influences the geometric accuracy of the image products. This paper presents a novel sub-pixel matching method, phase correlation using peak calculation, to improve matching accuracy. The theoretical peak centre, which corresponds to the sub-pixel deviation, is acquired by Peak Calculation (PC) from the inherent geometric relationship of the impulse surface generated by the inverse normalized cross-power spectrum. Mismatched points are rejected by two strategies: a window constraint, designed from the matching window and geometric constraints, and a correlation-coefficient test, which is effective for removing mismatched points in satellite images. This leaves a large set of high-precision homonymy points. Finally, three experiments are carried out to verify the accuracy and efficiency of the presented method.
Excellent results show that the presented method outperforms traditional phase correlation matching based on surface fitting in both accuracy and efficiency, and that the accuracy of the proposed phase correlation matching algorithm can reach 0.1 pixel with higher computational efficiency.

1. INTRODUCTION

Image matching is one of the important research topics in digital photogrammetry (ARMIN GRUEN, 2012); its accuracy directly restricts the development of fully digital photogrammetry and influences the geometric accuracy of subsequent processing. Phase correlation matching converts stereo images to the frequency domain through the Fourier transform and acquires tie points by processing the frequency-domain information (Kuglin, C.D, 1975). Compared with traditional cross-correlation and other high-precision image matching methods, phase correlation matching has better accuracy and reliability (T. Heid, 2012). Owing to these advantages, it has also been applied beyond photogrammetry, in areas such as medical imaging (W. S. Hoge), computer vision (K. Ito, 2004) and environmental change monitoring (S. Leprince, 2007). Classic phase correlation matching attains pixel-level precision; at present it can be improved to sub-pixel precision through three kinds of optimization strategy: fitting-interpolation methods (Kenji TAKITA, 2003), singular value decomposition (Xiaohua Tong, 2015) and local up-sampling. However, a fitting function estimated by least squares is easily affected by side-lobe energy, which brings a large amount of calculation; singular value decomposition of the cross-power spectrum suffers from phase-unwrapping ambiguity, so cumulative systematic error prevents it from obtaining exact offsets; and local up-sampling is limited by the sampling ratio.
This paper therefore presents a high-precision sub-pixel matching method, based on the symmetrical distribution of energy around the peak, which enhances matching accuracy through peak calculation. The method builds on traditional phase correlation matching: the peak location is acquired by calculation from the inherent geometry, and mismatched points are then rejected by a window constraint. The underlying theory is simple, but experiments on simulated data confirm that the algorithm achieves high matching precision with little computation.

2. METHOD

2.1 Classic Phase Correlation Matching

Image matching based on phase correlation employs the Fourier transform to move the images to be matched into the frequency domain for cross-correlation. It uses only the phase of the power spectrum of the image blocks, which reduces the effect of image content such as raw pixel values, so it has good reliability. The principle follows from the shift property of the Fourier transform: two image blocks that differ only by a translation exhibit a linear phase difference in the frequency domain. When offsets \Delta x and \Delta y exist between image block g and image block f:

    g(x, y) = f(x - \Delta x, y - \Delta y)    (1)

Applying the Fourier transform to both sides of formula (1) and using the shift property:

    G(u, v) = F(u, v) e^{-i(u \Delta x + v \Delta y)}    (2)

where G and F are the Fourier transform matrices of images g and f respectively. The normalized cross-power spectrum follows from formula (2):

    Q(u, v) = G(u, v) F*(u, v) / |G(u, v) F*(u, v)| = e^{-i(u \Delta x + v \Delta y)}    (3)

The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Volume XLI-B1, 2016. XXIII ISPRS Congress, 12–19 July 2016, Prague, Czech Republic. This contribution has been peer-reviewed.
doi:10.5194/isprsarchives-XLI-B1-253-2016

where the products are taken element-wise and F* is the complex conjugate of F. Applying the inverse Fourier transform to the cross-power spectrum above yields a Dirac function peaked at (\Delta x, \Delta y):

    \delta(x - \Delta x, y - \Delta y) = IFT[Q(u, v)] = IFT[e^{-i(u \Delta x + v \Delta y)}]    (4)

where IFT denotes the inverse fast Fourier transform. If the two image blocks show the same area, the peak of the pulse function appears at (\Delta x, \Delta y), and the values around the peak are far smaller than the peak value and close to zero.

2.2 Principle of Peak Calculation

Classical phase correlation matching attains only pixel precision, by taking the row and column of the maximum value of the impulse-function matrix. This paper presents a sub-pixel phase correlation matching method that calculates the peak position from the symmetrical distribution of energy around the peak. The ordering of the values around the peak determines which formula applies, so there are two cases for calculating the peak point. When the neighbour before the peak is higher than the one after it, the peak-calculation construction is shown in Figure 1:

    [Figure 1. Peak calculation sketch: peak point B(x2, y2) of the impulse matrix with neighbouring points A(x1, y1) and C(x3, y3); line l1 through C and B, line l2 through A at the mirrored angle, meeting at the theoretical apex P.]

Here B(x2, y2) is the peak point of the impulse-function matrix, lying between its surrounding points A(x1, y1) and C(x3, y3) with y1 > y3. Draw a straight line l1 from C through B, and a vertical line through B; let θ be the angle between them. Because the energy in the impulse matrix is symmetrically distributed, there must be a line l2 symmetrical to l1 about the vertical line through the true peak P. Therefore, draw a straight line l2 from A at the angle 90° − θ, meeting l1 at point P. The difference x − x2 between the abscissa x of P and the abscissa x2 of B is the sub-pixel offset.
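The classic pipeline of formulas (1)-(4) can be sketched with NumPy. This is an illustrative sketch, not the authors' code; it recovers the integer-pixel shift as the location of the impulse peak.

```python
import numpy as np

def phase_correlation(f, g):
    """Integer-pixel shift estimate between two same-size blocks, formulas (2)-(4)."""
    F = np.fft.fft2(f)
    G = np.fft.fft2(g)
    cross = G * np.conj(F)                        # numerator of the cross-power spectrum
    Q = cross / np.maximum(np.abs(cross), 1e-12)  # keep phase only, formula (3)
    delta = np.real(np.fft.ifft2(Q))              # impulse surface, formula (4)
    peak = np.unravel_index(np.argmax(delta), delta.shape)
    # Peaks past the midpoint correspond to negative shifts (FFT wrap-around).
    shifts = [p - s if p > s // 2 else p for p, s in zip(peak, delta.shape)]
    return tuple(shifts), delta

# A block circularly shifted by (3, 5) produces an impulse peak at (3, 5):
rng = np.random.default_rng(0)
f = rng.random((64, 64))
g = np.roll(f, (3, 5), axis=(0, 1))
shift, _ = phase_correlation(f, g)   # shift == (3, 5)
```

For a circular shift the cross-power spectrum has exactly unit modulus, so the impulse is a single sharp spike; for real image crops the spike broadens, which is what the peak calculation of Section 2.2 exploits.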
Two geometric relationships follow from the figure above (formula (5), two proportionality relations read from the similar triangles formed by A, B, C and P). Taking the three samples at unit spacing (x1, x2, x3 = 1, 2, 3 in local coordinates), eliminating the ordinate of P and simplifying yields a quadratic in the apex abscissa x:

    2(y3 - y2)x^2 + (10y2 - 9y3 - y1)x + (3y1 - 12y2 + 9y3) = 0    (6)

which has the standard form

    ax^2 + bx + c = 0    (7)

with a = 2(y3 - y2), b = 10y2 - 9y3 - y1 and c = 3y1 - 12y2 + 9y3. Its solution is

    x = (-b ± sqrt(b^2 - 4ac)) / (2a)    (8)

and the root lying in the interval x1 ≤ x ≤ x2 is the one required. Similarly, when the points A(x1, y1) and C(x3, y3) satisfy y1 < y3, the quadratic becomes

    2(y1 - y2)x^2 + (6y2 - 7y1 + y3)x + (5y1 - 4y2 - y3) = 0    (9)

whose solution x, obtained as in formulas (7) and (8), is the root lying in the interval x2 ≤ x ≤ x3. Finally, when y1 = y3, the images to be matched are regarded as having no sub-pixel offset, and x - x2 = 0.

2.3 Matching Window Constraint

The constraint window is designed as 3 × 1 or 1 × 3 (set according to row or column). After the sub-pixel matching result is obtained, the window constraint is used to validate the matching value. The window constraint is sketched in Figure 2:

    [Figure 2. Window constraint sketch: the three samples x1, x2, x3 around the peak.]

When the peak value at x3 is far smaller than that at x1, and x1 is close to the limiting peak x2, then by the geometry above the distance by which x deviates from x2 must be less than 0.5 pixel. This constraint therefore bounds the fractional part of the sub-pixel estimate and serves as a rejection strategy for mismatched points.
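The two-case peak calculation above can be sketched as follows. The quadratic coefficients are transcribed from formulas (6) and (9) as reconstructed here (the two cases are exact mirror images of each other about x2), so treat them as illustrative rather than authoritative.

```python
import math

def peak_offset(y1, y2, y3):
    """Sub-pixel offset of the true apex from the integer peak, given the impulse
    values y1, y2, y3 at three consecutive samples (local coordinates 1, 2, 3)."""
    if y1 == y3:                      # symmetric neighbours: no sub-pixel offset
        return 0.0
    if y1 > y3:                       # apex between x1 and x2, formula (6)
        a, b, c = 2*(y3 - y2), 10*y2 - 9*y3 - y1, 3*y1 - 12*y2 + 9*y3
        lo, hi = 1.0, 2.0
    else:                             # apex between x2 and x3, formula (9)
        a, b, c = 2*(y1 - y2), 6*y2 - 7*y1 + y3, 5*y1 - 4*y2 - y3
        lo, hi = 2.0, 3.0
    d = math.sqrt(b*b - 4*a*c)
    roots = ((-b + d) / (2*a), (-b - d) / (2*a))
    x = next(r for r in roots if lo <= r <= hi)   # formula (8), root in the valid interval
    return x - 2.0                    # offset relative to the integer peak at x2 = 2

# Mirrored inputs give mirrored offsets:
print(peak_offset(0.0, 1.0, 0.5))   # prints 0.25
print(peak_offset(0.5, 1.0, 0.0))   # prints -0.25
```

The window constraint of Section 2.3 then amounts to rejecting any result whose |offset| reaches 0.5, since the apex must lie between the peak and its higher neighbour.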
The sub-pixel matching algorithm based on phase correlation is implemented by peak calculation; its processing chain is as follows (Figure 3, the process of phase correlation matching using peak calculation). Stereo images L and R supply the image data blocks f and g, and F and G are obtained by the two-dimensional fast Fourier transform (with a Hamming window applied). The cross-power spectrum Q is obtained through formula (3), and δ is obtained by the inverse fast Fourier transform (FFT) of Q. The integer part of the match is then obtained as the position of the maximum of the impulse-function matrix, the sub-pixel part is refined by peak calculation, and erroneous values are removed with the matching window constraint, giving the final sub-pixel matching result.

3. THE EXPERIMENT AND ANALYSIS

Three experiments are designed to verify the effectiveness of the presented algorithm. It is compared with the curved-surface-fitting phase correlation matching algorithm on simulated data in terms of accuracy and speed. The simulated data are obtained by down-sampling, so the absolute matching precision of the presented algorithm can be measured.
Finally, the multi-spectral imagery of the ZY-3 satellite, the first civilian stereo mapping satellite of China, is tested to verify effectiveness through detection of satellite attitude jitter.

3.1 Experiment I

High-resolution remote sensing images from ZY-3 are selected as raw images; the spatial resolution of the panchromatic image is 2.1 m. The test images are simulated by down-sampling the raw images at several rates. A point (X, Y) in the raw image is taken as the starting point of the simulated image, and an image "A" of 1000m × 1000m pixels is cropped. Similarly, an image "B" of 1000m × 1000m pixels is cropped from the point (X + 1, Y + 1) of the same raw image. A and B are then separately down-sampled to 1000 × 1000 pixels, giving images a and b. Theoretically, the shift between image a and image b is (1/m, 1/m), where m is the sampling rate. Four frames of simulated images with different ground features are chosen, as shown in Figure 3.

    [Figure 3. Simulated image pairs a and b for sampling rates m = 3, 5, 10, 20.]

These images are matched both with sub-pixel phase correlation based on curved-surface fitting and with sub-pixel phase correlation based on peak calculation; the matching results are shown in Tables 1 and 2. According to these results, the stability and accuracy of the curved-surface-fitting phase correlation matching algorithm are worse than those of the phase correlation matching algorithm using peak calculation: the accuracy of the presented algorithm reaches 0.1 pixel, and is mostly better than 0.05 pixel.
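The simulation scheme of Experiment I can be sketched as follows. Block-average down-sampling is an assumption here, since the paper does not state the resampling kernel; the point is that two crops offset by one raw pixel, each down-sampled by m, differ by a known sub-pixel shift of 1/m.

```python
import numpy as np

def downsample(img, m):
    """m-fold block-average down-sampling (assumed kernel)."""
    h, w = img.shape
    return img[:h - h % m, :w - w % m].reshape(h // m, m, w // m, m).mean(axis=(1, 3))

def simulate_pair(raw, x, y, n, m):
    """Crop two (n*m)-pixel blocks offset by (1, 1) raw pixel and down-sample both
    by m; the true residual shift between the outputs is (1/m, 1/m)."""
    A = raw[x:x + n * m, y:y + n * m]
    B = raw[x + 1:x + 1 + n * m, y + 1:y + 1 + n * m]
    return downsample(A, m), downsample(B, m)

rng = np.random.default_rng(1)
raw = rng.random((220, 220))
a, b = simulate_pair(raw, 10, 10, 20, 5)   # 20x20-pixel pair, true shift (0.2, 0.2)
```

Because the ground-truth shift is known exactly, the absolute error of any sub-pixel estimator can be tabulated directly, as in Tables 1 and 2.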
Table 1. Matching error of the curved-surface-fitting phase correlation matching algorithm (%)

m     (0, 0.05) pixel   (0.05, 0.1) pixel   (0.1, 1) pixel
3     43.2              43.5                13.3
5     20.7              56.6                22.7
10    59.2              36.4                4.4
20    79.7              19.6                0.7

Table 2. Matching error of the phase correlation matching algorithm adapted by peak calculation (%)

m     (0, 0.05) pixel   (0.05, 0.1) pixel   (0.1, 1) pixel
3     79.2              16.3                4.5
5     66.9              23.6                9.5
10    72.8              21.8                5.4
20    75.6              20.9                3.5

3.2 Experiment II

In order to verify the speed advantage of the presented algorithm, the same matching window (32 × 32) and calculation window (3 × 3) are adopted. The speed difference between the two algorithms lies mainly in the calculation of the peak value: curved-surface fitting requires a least-squares fit, so its computation is complicated. The least-squares matrix computation entails bulk single-precision floating-point arithmetic, with more than six hundred multiplications and more than five hundred additions; the algorithm presented in this paper needs only simple calculations, with fewer than thirty multiplications and fewer than twenty-five additions. In theory, the presented algorithm therefore needs far less computation than the curved-surface-fitting phase correlation matching algorithm.
The experiment applies image data blocks from 100 × 100 pixels up to 900 × 900 pixels, nine blocks in total (increasing by 100 pixels each time), contrasting the speed of the presented algorithm with curved-surface-fitting phase correlation. The computing environment is an Intel second-generation Core i7-2820QM @ 2.3 GHz processor, single-threaded. The algorithms are written on the Visual Studio 2010 platform using MFC. Because the Fourier transform of phase correlation matching dominates the computation, the computation time of the presented algorithm is subtracted from that of the other, to reflect the relative calculation speed. The time difference is shown in Figure 4.

    [Figure 4. Time difference; horizontal axis: image size in pixels; vertical axis: consumed time in milliseconds.]

For the proposed algorithm, as the image block size increases, the advantage in processing speed becomes more obvious. From the above two experiments, the algorithm presented in this paper achieves better matching accuracy, higher stability and a smaller amount of calculation time.

3.3 Experiment III

The multi-spectral sensor of ZY-3 consists of four parallel sensors for four bands (blue, green, red and near-infrared), and each sensor has three staggered CCD arrays, as shown in Figure 4.

    [Figure 4. Instalment relationship of bands B1-B4, with staggers of 152 and 128 pixels.]

Because of the tiny physical distance between the parallel CCD arrays, which image the same scan column in the flight direction at slightly different times, any attitude jitter of the satellite platform introduces parallax between the matched bands (Tong X, 2014). We apply raw multi-spectral images without any geometric rectification to detect the attitude jitter based on this physical arrangement.
In Experiment III, the practical feasibility of the presented algorithm is tested. The test images, shown in Figure 5, are the blue-band and green-band images respectively.

    [Figure 5. Experimental images: a) blue band; b) green band.]

We apply the presented algorithm to the stereo images, and a large number of tie points are obtained pixel by pixel. The dense pixel offsets between image a and image b in line and column form parallax maps in the cross-track and along-track directions, shown in Figure 6.

    [Figure 6. Parallax images: a) cross-track; b) along-track.]

The periodic variation caused by satellite platform jitter is obvious in these parallax images. The regularity reflected in the cross-track parallax is better than in the along-track one; the main reason may be that the along-track direction is affected by more factors. Averaging the parallax of each image line gives a two-dimensional curve for each direction, shown in Figure 7.

    [Figure 7. Parallax curves: a) cross-track; b) along-track.]

Attitude jitter of approximately 0.67 Hz (considering the imaging line time of 0.8 ms) is detected in the ZY-3 multi-spectral stereo images. Detecting the platform jitter requires matching accuracy of 0.1 pixel, so Experiment III verifies that the presented matching method reaches that accuracy on actual image products.
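The jitter-frequency estimate of Experiment III can be sketched as follows, on a synthetic parallax map. The 0.8 ms line time is from the text; the jitter amplitude, noise level and map size are illustrative assumptions.

```python
import numpy as np

line_time = 0.8e-3                      # imaging line time, seconds (from the text)
n_lines = 8192
t = np.arange(n_lines) * line_time

# Synthetic cross-track parallax map: 0.67 Hz jitter of 0.1-pixel amplitude plus noise.
rng = np.random.default_rng(2)
parallax = (0.1 * np.sin(2 * np.pi * 0.67 * t)[:, None]
            + 0.02 * rng.standard_normal((n_lines, 256)))

row_mean = parallax.mean(axis=1)        # average each line's parallax -> 1-D curve
spectrum = np.abs(np.fft.rfft(row_mean - row_mean.mean()))
freqs = np.fft.rfftfreq(n_lines, d=line_time)
f_jitter = freqs[np.argmax(spectrum)]   # dominant jitter frequency, Hz
```

Averaging across the 256 columns suppresses the matching noise by a factor of 16, which is why a 0.1-pixel jitter signal stands out clearly even though individual matches are only accurate to about 0.1 pixel.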
CONCLUSION

This paper presents a novel phase correlation sub-pixel algorithm based on peak calculation over the pulse-function matrix, taking the geometric relationships of the peak into account to derive the corresponding mathematical formulas. The amount of calculation is smaller than for other methods, as no least-squares adjustment is required, and the theory is simple but complete. According to the experimental results, the accuracy of the algorithm presented in this paper can reach 0.1 pixel, meeting the needs of high-precision matching applications.

ACKNOWLEDGEMENTS

This work was supported by the public welfare special fund of the surveying and mapping industry (NO.201412001, NO.201512012), the natural science research fund (NO.41301525, 41571440), the scientific research plan for academic and technical youth leaders of the State Bureau of Surveying and Mapping (NO.201607), the youth science and technology project of surveying and mapping (NO.1461501900202), and the Major Projects of the High Resolution Earth Observation System.

REFERENCES

ARMIN GRUEN. Development and Status of Image Matching in Photogrammetry[J]. The Photogrammetric Record, 2012, 27(137):36-57.

Dazhao Fan, Erhua Shen, Lu Li, et al. Small Baseline Stereo Matching Method Based on Phase Correlation[J]. Journal of Geomatics Science and Technology, 2013, 30(2):154-157.

Harold S. Stone, Michael T. Orchard, Ee-Chien Chang, et al. A Fast Direct Fourier-Based Algorithm for Subpixel Registration of Images[J]. IEEE Trans. Geoscience and Remote Sensing, 2001, 39(10):2235-2243.

K. Ito, H. Nakajima, K. Kobayashi, et al. A Fingerprint Matching Algorithm Using Phase-Only Correlation[J]. IEICE Trans. Fundam. Electron. Commun. Comput. Sci., 2004, 87(3):682-691.

Kenji TAKITA, Tatsuo HIGUCHI. High-Accuracy Subpixel Image Registration Based on Phase-Only Correlation[J]. IEICE Trans. Fundamentals, 2003, E86-A(8):1925-1934.

Kuglin, C.D., Hines, D.C. The Phase Correlation Image Alignment Method[J]. Proc. IEEE Int'l Conf.
on Cybernetics and Society, 1975, 163-165.

S. Leprince, S. Barbot, F. Ayoub, et al. Automatic and Precise Orthorectification, Coregistration, and Subpixel Correlation of Satellite Images, Application to Ground Deformation Measurements[J]. IEEE Trans. Geosci. Remote Sens., 2007, 45(6):1529-1558.

T. Heid, A. Kääb. Evaluation of Existing Image Matching Methods for Deriving Glacier Surface Displacements Globally from Optical Satellite Imagery[J]. Remote Sens. Environ., 2012, 118:339-355.

W. S. Hoge. A Subpixel Identification Extension to the Phase Correlation Method [MRI Application][J]. IEEE Trans. Med. Imag., 2003, 22(2):277-280.

Xiaohua Tong, Zhen Ye, Yusheng Xu, et al. A Novel Subpixel Phase Correlation Method Using Singular Value Decomposition and Unified Random Sample Consensus[J]. IEEE Transactions on Geoscience and Remote Sensing, 2015, 53(8):4143-4156.

Tong X, Ye Z, Xu Y, et al. Framework of Jitter Detection and Compensation for High Resolution Satellites[J]. Remote Sensing, 2014, 6(5):3944-3964.

work_imsd5pao45cpbpmqyxbouzrvxq ----

Management of laryngeal cancers: Grampian experience

Abstracts / International Journal of Surgery 10 (2012) S1–S52, S36

Conclusions: In our study, thyroplasty as a method for vocal cord medialisation led to improved voice quality post-operatively and to good patient satisfaction.

0363: INSERTION OF A SECOND NASAL PACK AS A PROGNOSTIC INDICATOR OF EMERGENCY THEATRE REQUIREMENT IN EPISTAXIS PATIENTS
Edward Ridyard 1, Vinay Varadarajan 2, Indu Mitra 3.
1 University of Manchester, Manchester, UK; 2 North West Higher Surgical Training Scheme, North West, UK; 3 Manchester Royal Infirmary, Manchester, UK
Aim: To quantify the significance of second nasal pack insertion in epistaxis patients as a measure of requirement for theatre.
Method: A one-year retrospective analysis of 100 patient notes was undertaken. After application of exclusion criteria (patients treated as outpatients, inappropriate documentation, and patients transferred from peripheral hospitals), a total of n = 34 patients were included. Of the many variables measured, specific credence was given to requirement for second packing and requirement for definitive management in theatre.
Results: Of all patients, 88.5% required packing. A further 25% (7/28) of this group had a second pack for cessation of recalcitrant haemorrhage. Of the second-pack group, 85.7% (6/7) ultimately required definitive management in theatre. A one-sample t-test showed a statistically significant association between insertion of a second nasal pack and requirement for theatre (p<0.001).
Conclusions: Indications for surgical management of epistaxis vary from hospital to hospital. The results of this study show that insertion of a second pack is a very good indicator of requirement for definitive management in theatre.

0365: MANAGEMENT OF LARYNGEAL CANCERS: GRAMPIAN EXPERIENCE
Therese Karlsson 3, Muhammad Shakeel 1, Peter Steele 1, Kim Wong Ah-See 1, Akhtar Hussain 1, David Hurman 2. 1 Department of Otolaryngology-Head and Neck Surgery, Aberdeen Royal Infirmary, Aberdeen, UK; 2 Department of Oncology, Aberdeen Royal Infirmary, Aberdeen, UK; 3 University of Aberdeen, Aberdeen, UK
Aims: To determine the efficacy of our management protocol for laryngeal cancer and compare it to the published literature.
Method: Retrospective study of a prospectively maintained departmental oncology database over 10 years (1998-2008).
Data collected included demographics, clinical presentation, investigations, management, surveillance, loco-regional control and disease-free survival.
Results: A total of 225 patients were identified; 183 were male (82%) and 42 female (18%). The average age was 67 years. There were 81 (36%) patients with Stage I disease, 54 (24%) with Stage II, 30 (13%) with Stage III and 60 (27%) with Stage IV disease. Of the 135 Stage I and II carcinomas, 130 (96%) were treated with radiotherapy (55 Gy in 20 fractions). Patients with Stage III and IV carcinomas received combined treatment. Overall three-year survival for Stage I, II, III and IV was 91%, 65%, 63% and 45% respectively. Corresponding recurrence rates were 3%, 17%, 17% and 7%; 13 patients required a salvage total laryngectomy for recurrent disease.
Conclusion: The vast majority of our laryngeal cancer population is male (82%) and smokers. Primary radiotherapy provides comparable loco-regional control and survival for early-stage disease (I & II). Advanced-stage disease is also equally well controlled with multimodal treatment.

0366: RATES OF RHINOPLASTY PERFORMED WITHIN THE NHS IN ENGLAND AND WALES: A 10-YEAR RETROSPECTIVE ANALYSIS
Luke Stroman, Robert McLeod, David Owens, Steven Backhouse. University of Cardiff, Wales, UK
Aim: To determine whether financial restraint and national health cutbacks have affected the number of rhinoplasty operations performed within the NHS in both England and Wales, looking at varying demographics.
Method: Retrospective study of the incidence of rhinoplasty in Wales and England from 1999 to 2009 using OPCS4 codes E025 and E026, via the electronic health databases of England (HesOnline) and Wales (PEDW). Extracted data were explored for total numbers and for variation with respect to age and gender in both nations.
Results: 20,222 and 1,376 rhinoplasties were undertaken over the 10-year study period in England and Wales respectively.
A statistically significant gender bias was seen in uptake of rhinoplasty, with women more likely to undergo the surgery in both national cohorts (Wales, p<0.001 and England, p<0.001). Linear regression analysis suggests a statistically significant drop in numbers undergoing rhinoplasty in England (p<0.001) but not in Wales (p>0.05).
Conclusion: Rhinoplasty is a common operation in both England and Wales. The current economic constraint, combined with differences in funding and corporate ethos between the two sister NHS organisations, has led to a statistically significant reduction in numbers undergoing rhinoplasty in England but not in Wales.

0427: PATIENTS' PREFERENCES FOR HOW PRE-OPERATIVE PATIENT INFORMATION SHOULD BE DELIVERED
Jonathan Bird, Venkat Reddy, Warren Bennett, Stuart Burrows. Royal Devon and Exeter Hospital, Exeter, Devon, UK
Aim: To establish patients' preferences for preoperative patient information and their thoughts on the role of the internet.
Method: Adult patients undergoing elective ENT surgery were invited to take part in this survey on the day of surgery. Participants completed a questionnaire recording patient demographics, operation type, quality of the information leaflet they had received, access to the internet, and whether they would be satisfied accessing pre-operative information online.
Results: Respondents consisted of 52 males and 48 females. 16% were satisfied to receive the information online only, 24% wanted a hard copy only and 60% wanted both. Younger patients were more likely to want online information, in stark contrast to elderly patients, who preferred a hard copy. Patients aged 50-80 years would be most satisfied with paper and internet information, as they were able to pass the web link on to friends and family who wanted to know more. 37% of people were using the internet to further research their condition/operation. However, these people wanted guidance on reliable online sources to use.
Conclusions: ENT surgeons should be alert to the appetite for online information and identify reliable links to share with patients.

0510: ENHANCING COMMUNICATION BETWEEN DOCTORS USING DIGITAL PHOTOGRAPHY. A PILOT STUDY AND SYSTEMATIC REVIEW
Hemanshoo Thakkar, Vikram Dhar, Tony Jacob. Lewisham Hospital NHS Trust, London, UK
Aim: The European Working Time Directive has resulted in the practice of non-resident on-calls for senior surgeons across most specialties. Consequently, the majority of communication in the out-of-hours setting takes place over the telephone, placing a greater emphasis on verbal communication. We hypothesised this could be improved with the use of digital images.
Method: A pilot study involving a junior doctor and senior ENT surgeons. Several clinical scenarios were discussed over the telephone, complemented by an image; the junior doctor was blinded to this. A questionnaire was completed which assessed the confidence of the surgeon in the diagnosis and management of the patient. A literature search was conducted using PubMed and the Cochrane Library with the keywords "mobile phone", "photography", "communication" and "medico-legal".
Results & Conclusions: In all the discussed cases, the use of images either maintained or enhanced the degree of the surgeon's confidence. The use of mobile-phone photography as a means of communication is widespread; however, its medico-legal implications are often not considered. Our pilot study shows that such means of communication can enhance patient care. We feel that a secure means of data transfer, safeguarded by law, should be explored as a means of implementing this into routine practice.

0533: THE ENT EMERGENCY CLINIC AT THE ROYAL NATIONAL THROAT, NOSE AND EAR HOSPITAL, LONDON: COMPLETED AUDIT CYCLE
Ashwin Algudkar, Gemma Pilgrim.
Royal National Throat, Nose and Ear Hospital, London, UK
Aims: To identify the type and number of patients seen in the ENT emergency clinic at the Royal National Throat, Nose and Ear Hospital, implement changes to improve the appropriateness of consultations and management, and then close the audit. Also to set up GP correspondence.
Method: First-cycle data were collected retrospectively over 2 weeks. Information was captured on patient volume, referral source, consultation

work_in2lrr3ul5ey7kan7batvapgie ----

Crowdsourcing as a Screening Tool to Detect Clinical Features of Glaucomatous Optic Neuropathy from Digital Photography

Mitry, D., Peto, T., Hayat, S., Blows, P., Morgan, J., Khaw, K-T., & Foster, P. J. (2015). Crowdsourcing as a screening tool to detect clinical features of glaucomatous optic neuropathy from digital photography. PLoS ONE, 10(2), 1-8. [e0117401]. https://doi.org/10.1371/journal.pone.0117401

Published in: PLoS ONE
Document Version: Publisher's PDF, also known as Version of record
Queen's University Belfast - Research Portal: Link to publication record in Queen's University Belfast Research Portal
Publisher rights: Copyright 2017 the authors.
This is an open access article published under a Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution and reproduction in any medium, provided the author and source are cited.

RESEARCH ARTICLE Crowdsourcing as a Screening Tool to Detect Clinical Features of Glaucomatous Optic Neuropathy from Digital Photography Danny Mitry1, Tunde Peto1, Shabina Hayat2, Peter Blows1, James Morgan3, Kay-Tee Khaw4, Paul J.
Foster1* 1 NIHR Biomedical Research Centre, Moorfields Eye Hospital and UCL Institute of Ophthalmology, London, United Kingdom, 2 Department of Public Health and Primary Care, University of Cambridge, Strangeways Research Laboratory, Cambridge, United Kingdom, 3 School of Optometry and Vision Sciences, Cardiff University, Cardiff, United Kingdom, 4 Department of Clinical Gerontology, Addenbrookes Hospital, University of Cambridge, Cambridge, United Kingdom * p.foster@ucl.ac.uk Abstract Aim Crowdsourcing is the process of simplifying and outsourcing numerous tasks to many untrained individuals. Our aim was to assess the performance and repeatability of crowdsourcing in the classification of normal and glaucomatous discs from optic disc images. Methods Optic disc images (N = 127) with pre-determined disease status were selected by consensus agreement from grading experts from a large cohort study. After reading brief illustrative instructions, knowledge workers (KWs) from a crowdsourcing platform (Amazon MTurk) were asked to classify each image as normal or abnormal. Each image was classified 20 times by different KWs. Two study designs were examined to assess the effect of varying KW experience, and both study designs were conducted twice for consistency. Performance was assessed by comparing the sensitivity, specificity and area under the receiver operating characteristic curve (AUC). Results Overall, 2,540 classifications were received in under 24 hours at minimal cost. The sensitivity ranged between 83% and 88% across both trials and study designs; however, the specificity was poor, ranging between 35% and 43%. In trial 1, the highest AUC (95% CI) was 0.64 (0.62–0.66) and in trial 2 it was 0.63 (0.61–0.65). There were no significant differences between study designs or trials conducted. Conclusions Crowdsourcing represents a cost-effective method of image analysis which demonstrates good repeatability and a high sensitivity.
Optimisation of variables such as reward schemes, mode of image presentation, expanded response options and incorporation of training modules should be examined to determine their effect on the accuracy and reliability of this technique in retinal image analysis.

PLOS ONE | DOI:10.1371/journal.pone.0117401 February 18, 2015 1 / 8

OPEN ACCESS Citation: Mitry D, Peto T, Hayat S, Blows P, Morgan J, Khaw K-T, et al. (2015) Crowdsourcing as a Screening Tool to Detect Clinical Features of Glaucomatous Optic Neuropathy from Digital Photography. PLoS ONE 10(2): e0117401. doi:10.1371/journal.pone.0117401 Academic Editor: William H. Merigan Jr., Univ Rochester Medical Ctr, UNITED STATES Received: April 24, 2014 Accepted: December 20, 2014 Published: February 18, 2015 Copyright: © 2015 Mitry et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. Data Availability Statement: All relevant data are within the paper and its Supporting Information files. Funding: The authors would like to acknowledge the Special Trustees of Moorfields Eye Hospital and NIHR Biomedical Research Centre at Moorfields Eye Hospital and UCL Institute of Ophthalmology. This study was funded by a Fight For Sight grant award (Ref: 1471/2). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

Introduction Glaucoma is a neurodegenerative disease of the optic nerve, characterized by morphologic changes in the optic disc and the retinal nerve fiber layer with corresponding loss in visual field.
Signs associated with glaucomatous optic nerve damage include progressive enlargement of the optic cup, focal notches in the neuroretinal rim, optic disc hemorrhages, nerve fiber layer defects, and parapapillary atrophy.[1] In the last decade, there has been considerable interest in developing a screening tool for glaucomatous optic neuropathy using either expert graded imaging or automated detection[2–4]; however, to date, no individual method can be recommended.[5] Crowdsourcing, the process of outsourcing small simplified tasks to a large number of individuals, is a novel and cost-effective way of classifying medical images.[6] The largest commercial crowdsourcing provider is Amazon's Mechanical Turk (https://www.mturk.com/mturk/welcome). MTurk is an Internet-based platform that allows requesters to distribute small computer-based tasks to a large number of untrained workers. Using the MTurk platform, our aim was to assess the sensitivity and specificity of crowdsourcing as a technique to detect typical signs of glaucomatous optic neuropathy from colour fundus photographs. Methods Images were extracted and anonymised, with permission, from studies undertaken at the Moorfields Eye Hospital Reading Centre (MEHRC). The images had been graded normal/abnormal by fully trained graders at MEHRC. These were then adjudicated by the clinical lead of the Reading Centre. Those taken from diabetic retinopathy screening and deemed to have glaucomatous discs were all verified in a clinical setting by a glaucoma consultant (PJF) at Moorfields Eye Hospital. Those with normal discs were graded by at least two senior graders, and only those images with 100% agreement between the graders and adjudicated normal by the clinical lead were included in this current set. In total, 127 disc images were used. Abnormal images were designated as those with thinning or notching of the neuro-retinal rim or the presence of peri-papillary hemorrhages.
Normal images were designated as those with an absence of any of these features. All images were anonymised and uploaded onto an ftp site for the study duration, to allow remote access. We used the MTurk Web platform for anonymous workers to perform a classification task of the optic nerve images in our dataset. MTurk employs knowledge workers (KWs), untrained individuals who carry out simple tasks. KWs are registered Amazon users who have a record of completing these types of tasks. Each KW receives a small monetary reward from the requester for each task that they complete that is of a suitable standard to the requester. Amazon keeps a record of the performance of each KW and, if desired, filters can be set by the requester, for example, permitting only KWs with a high success rate to perform the task. Each image classification task was published as one human intelligence task (HIT). For each HIT, KWs were given some background information and a written description of abnormal features of interest (S1 Fig. is an example of the online questionnaire for each HIT). After reading through a descriptive illustration, KWs were asked if the test image had any suspicious features (thinning/notching of the neuroretinal rim or peri-papillary hemorrhage) which would warrant referral to an eye specialist. If none of the features were present, they were asked to designate the image as normal. There were no restrictions placed on the country of origin of workers. Any eligible worker could perform the task. Each image could be classified only once by each worker and there was no limit to how many images each worker could classify. Competing Interests: The authors have declared that no competing interests exist.
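The requester-side filtering described above (for example, admitting only workers with a strong approval record) maps onto MTurk's qualification requirements. The authors used the MTurk web interface; the sketch below is purely a hypothetical illustration of such a configuration using boto3-style request parameters. The qualification type IDs are MTurk system values assumed here for illustration, and no network call is made.

```python
# Hypothetical sketch (not the authors' code): building the request
# parameters for one HIT of the kind described (20 classifications
# per image, a small reward, and an optional worker-quality filter).
# The qualification type IDs are assumed MTurk system values.

def build_hit_request(image_url, qualified_only=False):
    request = {
        "Title": "Classify an optic disc image as normal or abnormal",
        "Description": ("Flag thinning/notching of the neuroretinal rim "
                        "or peri-papillary hemorrhage."),
        "Reward": "0.05",                  # USD per classification
        "MaxAssignments": 20,              # 20 KW classifications per image
        "AssignmentDurationInSeconds": 300,
        "LifetimeInSeconds": 86400,        # the study completed in <24 hours
        "Question": "<ExternalQuestion>...%s...</ExternalQuestion>" % image_url,
    }
    if qualified_only:
        # Filter corresponding to the study's "moderate experience" arm:
        request["QualificationRequirements"] = [
            {   # NumberHITsApproved >= 500
                "QualificationTypeId": "00000000000000000040",
                "Comparator": "GreaterThanOrEqualTo",
                "IntegerValues": [500],
            },
            {   # PercentAssignmentsApproved >= 90
                "QualificationTypeId": "000000000000000000L0",
                "Comparator": "GreaterThanOrEqualTo",
                "IntegerValues": [90],
            },
        ]
    return request

# e.g. boto3.client("mturk").create_hit(**build_hit_request(url, True))
```

Keeping the request construction separate from the API call makes the filter configurable per study arm without touching the task definition.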
Based on previous estimations of repeated task accuracy in distributed human intelligence tasks, we requested 20 KW classifications per image.[6,7] Analysis In order to assess the effect of categorization skill on classification accuracy, we conducted two separate study designs: 1. No previous experience required; compensation 0.05 cents (USD) per HIT. 2. Previously completed ≥500 HITs with ≥90% approval; compensation 0.05 cents per HIT. Both study designs were repeated to determine if the findings from trial 1 were reproducible. Using the selection of images as a pre-defined reference standard, we calculated the sensitivity and specificity for each of the study designs. This was calculated based upon the pooled responses of all image classifications (N = 2,540). In addition, we used a majority judgement method to identify the percentage of images correctly classified by the majority of KWs. We calculated a KW score determined by the ratio of votes for a normal or abnormal classification to the total number of votes for each classification. Receiver operating characteristic (ROC) curves were analysed for each study design and trial. The areas under the ROC curves (AUC) were calculated as non-parametric Mann-Whitney estimates and comparison between curves was performed using the z statistic for correlation. All analyses were performed using STATA v12. Results All 2,540 classifications were obtained for 127 colour disc images (20 classifications per image) in under 24 hours. 54 images were designated as abnormal by pre-determined consensus, and 73 were designated normal. Table 1 highlights the baseline characteristics of the KWs for each trial. The mean time spent on each classification was under 1 minute. The time spent on each HIT did not differ significantly between correct and incorrect classifications. Table 2 shows the sensitivity and specificity of trials one and two. Fig. 1 illustrates the area under the ROC curve (AUC) for both study designs and trials.
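The analysis just described (pooled sensitivity/specificity over all classifications, the majority-judgement tally, and the AUC as a non-parametric Mann-Whitney estimate on KW scores) can be sketched as follows. This is an illustrative reconstruction in Python, not the authors' code (they used STATA v12), and the per-image vote counts are invented for demonstration.

```python
# Illustrative reconstruction of the three metrics used in the study.
# images: list of (true_label, abnormal_votes, total_votes) per image.

def pooled_sens_spec(images):
    # Pool every individual KW vote into one confusion matrix.
    tp = fn = tn = fp = 0
    for truth, abnormal, total in images:
        if truth == "abnormal":
            tp += abnormal
            fn += total - abnormal
        else:
            fp += abnormal
            tn += total - abnormal
    return tp / (tp + fn), tn / (tn + fp)

def majority_correct(images, label):
    # Fraction of `label` images whose majority (>50%) vote matches
    # the pre-determined reference standard (Table 3's method).
    subset = [(t, v, n) for t, v, n in images if t == label]
    hits = sum(1 for t, v, n in subset
               if ("abnormal" if v / n > 0.5 else "normal") == t)
    return hits / len(subset)

def auc_mann_whitney(images):
    # KW score = fraction of "abnormal" votes per image. The AUC is
    # the probability that a randomly chosen abnormal image outscores
    # a randomly chosen normal one (ties count one half).
    ab = [v / n for t, v, n in images if t == "abnormal"]
    nm = [v / n for t, v, n in images if t == "normal"]
    wins = sum(1.0 if a > b else 0.5 if a == b else 0.0
               for a in ab for b in nm)
    return wins / (len(ab) * len(nm))

# Hypothetical data: (true status, abnormal votes, total votes of 20)
data = [("abnormal", 18, 20), ("abnormal", 15, 20),
        ("normal", 12, 20), ("normal", 6, 20)]
```

Note how pooling votes and taking a per-image majority can disagree: a normal image with 12 of 20 abnormal votes contributes mostly false positives to the pooled specificity and is also misclassified by majority, mirroring the low specificity the study reports.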
The sensitivity was between 83% and 88% across both trials; however, the specificity was poor, ranging between 35% and 43%. There were no pairwise differences in the AUC between either trial or study design. Examining the percentage correctly classified (Table 3) shows that across both trials only between 8% and 36% of normal images were correctly assigned by the majority of KWs, whereas all abnormal images were correctly assigned by the majority of KWs. Figs. 2 and 3 show the classifications stratified by KW score for normal and abnormal images, demonstrating a much higher level of confidence in the true classification of abnormal images. Discussion Crowdsourcing represents a compelling technique with potential for efficient analysis of medical images. Overall, we received 2,540 unique classifications of 127 images in several hours at minimal cost. In this study, we compared the accuracy of crowdsourcing in detecting disc abnormalities suggestive of glaucomatous optic neuropathy with the gold standard of senior image graders. Overall, the area under the ROC curve (AUC) ranged between 0.62 and 0.64 for all study designs and trials conducted. This is lower than estimates of automated glaucoma detection from fundus images (0.88)[8] and from expert graders (0.86; 0.89–0.97).[4,9] Sensitivity/specificity estimates for expert binary grading of optic disc images have been reported to vary between 76–78%/91–92%,[10] with other reports suggesting an AUC of 0.80 for binary classification of optic disc images by general ophthalmologists.[11] However, it is recognized that subjective evaluation of the optic disc is a challenging task, often with poor agreement between graders.[12,13] Using a simple online questionnaire, KWs were shown only 4 images for training; however, a repeatable sensitivity of 83–88% was achieved.
The principal limitation of the crowdsource in this task was the high rate of false positives due to the incorrect classification of normal images as abnormal, resulting in a low specificity. Table 3 and Fig. 2 highlight that correct classification of abnormal images is performed with a much greater level of confidence by the crowdsource, compared to correct classification of normal images. Other variables involved in crowdsourcing, such as incentive, motivation and previous experience, may also play a role in task accuracy; however, based on our study designs we could not demonstrate a difference between moderately experienced and inexperienced MTurk users. In addition, as has been

Table 2. The sensitivity, specificity and area under the ROC curve (AUC) for each study design in trials 1 and 2.
Trial 1: 0.05c: sensitivity 88.80%, specificity 35.50%, AUC 0.62 (0.61–0.64); 0.05c_500_90: sensitivity 83.98%, specificity 43.97%, AUC 0.64 (0.62–0.66).
Trial 2: 0.05c: sensitivity 86.20%, specificity 39.79%, AUC 0.63 (0.61–0.65); 0.05c_500_90: sensitivity 86.94%, specificity 36.10%, AUC 0.62 (0.60–0.63).
(0.05c = study design 1, no previous experience; 0.05c_500_90 = study design 2, moderate experience) doi:10.1371/journal.pone.0117401.t002

Table 1. Baseline characteristics of knowledge worker (KW) participation by study design for trials 1 and 2.
Trial 1: 0.05c: 78 different KWs; mean (SD) 44 (23) HITs per KW; mean (SD) time on each HIT 31 (43) secs; time to overall completion <24 hrs. 0.05c_500_90: 63 different KWs; 34 (19) HITs per KW; 40 (50) secs per HIT; <24 hrs.
Trial 2: 0.05c: 65 different KWs; 32 (20) HITs per KW; 25 (32) secs per HIT; <24 hrs. 0.05c_500_90: 54 different KWs; 28 (14) HITs per KW; 32 (44) secs per HIT; <24 hrs.
(0.05c = study design 1, no previous experience; 0.05c_500_90 = study design 2, moderate experience) doi:10.1371/journal.pone.0117401.t001

demonstrated previously,[6,7] we also found that crowdsourcing is reliable and consistent, with minimal variation found between trials. Future studies of this technique should aim to more clearly define the range of acceptable normal features rather than focusing primarily on the detection of abnormal features, and should aim to incorporate a structured training module. This technique may find its primary utility in screening large Biobank datasets for more severe abnormalities, where grading time and physical infrastructure pose considerable limitations. Furthermore, a unique advantage of this technique may be to combine different imaging modalities to form part of a single classification; for example, the crowdsource could be asked to classify a colour photograph and an OCT image of the same individual, which may improve diagnostic precision. In summary, crowdsourcing is a novel tool in ophthalmic image analysis that should be developed so that its full potential may be realised. Optimal crowdsourcing parameters such as incentivized reward systems, better visualization methods, image presentation and expanded non-binary response options should be further explored so that their utility in improving the accuracy and reliability of this technique can be established. Fig 1.
ROC curves for each study design in trials 1 and 2. doi:10.1371/journal.pone.0117401.g001

Table 3. The percentage of Human Intelligence Tasks (HITs) correctly classified by the majority (>50%) of knowledge workers (KWs), with the range of the percentage of correct "votes" for each image category in brackets.
Trial 1: Normal (N = 73): 11% (0–70) for 0.05c, 36% (0–90) for 0.05c_500_90. Abnormal (N = 54): 100% (70–100) for 0.05c, 100% (65–100) for 0.05c_500_90.
Trial 2: Normal (N = 73): 23% (0–70) for 0.05c, 8% (5–60) for 0.05c_500_90. Abnormal (N = 54): 100% (60–100) for 0.05c, 100% (65–100) for 0.05c_500_90.
(0.05c = study design 1, no previous experience; 0.05c_500_90 = study design 2, moderate experience) doi:10.1371/journal.pone.0117401.t003

Fig 2. Histogram of classifications by KW score (calculated as the ratio of votes for Normal to the total number of votes for each classification) (N = 73) (0.05c, trial 1). doi:10.1371/journal.pone.0117401.g002

Supporting Information S1 Data. Raw data for analysis derived from Amazon MTurk. (ZIP) S1 Fig. An example of the online human intelligence task questionnaire presented to all knowledge workers. (PDF)

Author Contributions Analyzed the data: DM. Wrote the paper: DM. Provided senior supervision: TP PB SH JM KTK PJF. Provided access to data and expertise in image grading: TP PB SH JM. Involved in synthesis and design of the study: JM. Assisted in analysis and designed the study protocol: KTK PJF. Reviewed the final manuscript: DM TP SH PB JM KTK PJF.

Fig 3. Histogram of classifications by KW score (calculated as the ratio of votes for Abnormal to the total number of votes for each classification) (N = 54) (0.05c, trial 1).
doi:10.1371/journal.pone.0117401.g003

References 1. Jonas JB, Budde WM (2000) Diagnosis and pathogenesis of glaucomatous optic neuropathy: morphological aspects. Prog Retin Eye Res 19: 1–40. PMID: 10614679 2. Bussel II, Wollstein G, Schuman JS (2013) OCT for glaucoma diagnosis, screening and detection of glaucoma progression. Br J Ophthalmol. 3. Wollstein G, Garway-Heath DF, Fontana L, Hitchings RA (2000) Identifying early glaucomatous changes. Comparison between expert clinical assessment of optic disc photographs and confocal scanning ophthalmoscopy. Ophthalmology 107: 2272–2277. PMID: 11097609 4. Badala F, Nouri-Mahdavi K, Raoof DA, Leeprechanon N, Law SK, et al. (2007) Optic disk and nerve fiber layer imaging to detect glaucoma. Am J Ophthalmol 144: 724–732. PMID: 17868631 5. Mowatt G, Burr JM, Cook JA, Siddiqui MA, Ramsay C, et al. (2008) Screening tests for detecting open-angle glaucoma: systematic review and meta-analysis. Invest Ophthalmol Vis Sci 49: 5373–5385. doi: 10.1167/iovs.07-1501 PMID: 18614810 6. Nguyen TB, Wang S, Anugu V, Rose N, McKenna M, et al. (2012) Distributed human intelligence for colonic polyp classification in computer-aided detection for CT colonography. Radiology 262: 824–833. doi: 10.1148/radiol.11110938 PMID: 22274839 7. Mitry D, Peto T, Hayat S, Morgan JE, Khaw KT, et al. (2013) Crowdsourcing as a novel technique for retinal fundus photography classification: analysis of images in the EPIC Norfolk cohort on behalf of the UK Biobank Eye and Vision Consortium. PLoS One 8: e71154. doi: 10.1371/journal.pone.0071154 PMID: 23990935 8.
Bock R, Meier J, Nyul LG, Hornegger J, Michelson G (2010) Glaucoma risk index: automated glaucoma detection from color fundus images. Med Image Anal 14: 471–481. doi: 10.1016/j.media.2009.12.006 PMID: 20117959 9. Girkin CA, McGwin G Jr, Long C, Leon-Ortega J, Graf CM, et al. (2004) Subjective and objective optic nerve assessment in African Americans and whites. Invest Ophthalmol Vis Sci 45: 2272–2278. PMID: 15223805 10. Girkin CA, Leon-Ortega JE, Xie A, McGwin G, Arthur SN, et al. (2006) Comparison of the Moorfields classification using confocal scanning laser ophthalmoscopy and subjective optic disc classification in detecting glaucoma in blacks and whites. Ophthalmology 113: 2144–2149. PMID: 16996609 11. Vessani RM, Moritz R, Batis L, Zagui RB, Bernardoni S, et al. (2009) Comparison of quantitative imaging devices and subjective optic nerve head assessment by general ophthalmologists to differentiate normal from glaucomatous eyes. J Glaucoma 18: 253–261. doi: 10.1097/IJG.0b013e31818153da PMID: 19295383 12. Azuara-Blanco A, Katz LJ, Spaeth GL, Vernon SA, Spencer F, et al. (2003) Clinical agreement among glaucoma experts in the detection of glaucomatous changes of the optic disk using simultaneous stereoscopic photographs. Am J Ophthalmol 136: 949–950. PMID: 14597063 13. Breusegem C, Fieuws S, Stalmans I, Zeyen T (2011) Agreement and accuracy of non-expert ophthalmologists in assessing glaucomatous changes in serial stereo optic disc photographs. Ophthalmology 118: 742–746.
doi: 10.1016/j.ophtha.2010.08.019 PMID: 21055815
Telecardiology for chest pain management in Rural Bengal – A private venture model: A pilot project Tapan Sinha *, Mrinal Kanti Das Introduction: The rural countryside does not get the benefit of proper assessment in ACS because of: 1. Absence of qualified and trained medical practitioners in remote places. 2. Suboptimal assessment by inadequately trained rural practitioners (RPs). 3. Non-availability of a 24-hour ECG facility. Telecardiology (TC), where available, has low penetration and marginal impact. Much discussion has been devoted to fast-tracking the transfer of modern therapy at the CCU, including interventions and CABG, but these plans have remained utopian even in urban localities, not to speak of inaccessible, poverty-stricken rural areas.
Objectives: To ascertain the feasibility of low-cost TC, run on individual initiative and supported by NGOs working in remote areas, for extending consultation and treatment to those who can at least afford/approach only very basic medical supportive care. Methods: 1. Site selection: A remote island (267.5 sq km) in South Bengal close to the Sundarban Tiger Reserve, inhabited by about 480,000 people, with about 100,000 aged above 60 years. The nearest road to Kolkata, 75 km away, is accessible only by five ferry services available from 5 am to 4 pm, across rivers more than two km wide at places. 2. RP training: Classroom trainings organized and leaflets in the vernacular prepared and distributed in accordance with AHA patient information leaflets. 3. One nodal centre made operational from March 2015 after training three local volunteers (not involved in RP) for 2 weeks on: a. Single-channel digital recording by BPL 1608T, blood pressure measurement by Diamond 02277, pulse oxygen estimation by Ishnee IN111A and CBS estimation by Easy Touch ET301. b. Digital photography, transmission by email over a 2G network to either of us in Kolkata by Micromax Canvas P666 Android Tab. 4. Patient selection by filling a form: chest pain at rest and exercise, chest discomfort, radiation of pain, shortness of breath, palpitation, sudden sweating, loss of consciousness, hypertension, diabetes, age > 60 yrs. c. Mobile phone alert call to either of us after successful transmission. d. Review of ECG, BP, R-CBS and oxygen saturation information on email as a single attachment as early as possible on Apple Air Tab on 3G network. e. Call back for further history/information and advice. Results: 22 patients assessed from March to end May 2015, all referred by RPs who had undergone training. Two (2) cases of IHD by ECG noted. Both were known cases of inadequately controlled hypertension and diabetes. No case of ACS received.
Two (2) cases of known COAD were received, who were also hypertensive and had infection. Three (3) new cases of hypertension detected. All other cases were normal for the parameters examined. Medical advice was given, further investigations suggested, and follow-up maintained by RPs. Advice for COAD was given after consultation with colleagues. Average time taken to call back after receiving the alert call: 25 ± 5 min. All calls were received between 11 am and 7 pm. Two cases needed to be shuffled between us for convenience. Statistical analysis was avoided as the number of cases was too small. Observations: 1. Public awareness needs to be increased. 2. Resistance from RPs for pecuniary reasons may be overcome by involving them. 3. It is feasible to successfully operate the system at a nominal cost per case of only about Rs. 70/-, excluding the primary investment in training, instrumentation and service connection. 4. Medical and social impacts remain to be assessed. 5. Multiple nodal centres and involvement of all health care providers should usher in the long-cherished 'health for all'. A study of etiology, clinical features, ECG and echocardiographic findings in patients with cardiac tamponade Vijayalakshmi Nuthakki *, O.A. Naidu 9-227-85 Laxminagar Colony, Kothapet, Hyderabad, India Cardiac tamponade is defined as hemodynamically significant cardiac compression caused by pericardial fluid. 40 cases of cardiac tamponade were evaluated for clinical features of Beck's triad, ECG evidence of electrical alternans, etiology, and biochemical analysis of pericardial fluid. Cardiac tamponade among these patients was confirmed by echocardiographic evidence apart from clinical and ECG evidence. We had 6 cases of hypothyroidism causing tamponade. Malignancy was the most common etiology, followed by tuberculosis. Only 14 patients had hypotension, which points to the fact that echo showed signs of cardiac tamponade prior to clinical evidence of hypotension and aids in better management of patients.
Electrical alternans was present in 39 patients. We found that cases with subacute tamponade did not have hypotension but had echo evidence of tamponade. 37 patients had exudative and 3 patients had transudative effusions. We present this case series to highlight the fact that electrical alternans has high sensitivity in diagnosing tamponade, that hypothyroidism as an etiology of tamponade is not that rare, and that patients with echocardiographic evidence of tamponade may not always be in hypotension.

Role of trace elements in health & disease
G. Subrahmanyam 1,*, Ramalingam 2, Rammohan 3, Kantha 4, Indira 4
1 Department of Cardiology, Narayana Medical Institutions, Chinta Reddy Palem, Nellore 524 002, India
2 Department of Biochemistry, Narayana Medical Institutions, Chinta Reddy Palem, Nellore 524 002, India
3 Department of Pharmacology, Narayana Medical Institutions, Chinta Reddy Palem, Nellore 524 002, India
4 College of Nursing, Narayana Medical Institutions, Chinta Reddy Palem, Nellore 524 002, India

A cross-sectional, one-time, community-based survey was conducted in an urban area of Nellore. 68 subjects between 20 and 60 years of age were selected using convenience sampling techniques. Detailed demographic data were collected, and investigations such as lipid profile, blood sugar, ECG, Periscope study (to assess vascular stiffness) and trace element analysis of copper and zinc were done, in addition to detailed clinical examination. ECG as per Minnesota code criteria was carried out for the diagnosis of coronary artery disease. The diagnostic criteria for hypertension, BMI and hyperlipidemia as per the Asian Indian hypertension guidelines 2012, and the ICMR guidelines for diabetes mellitus, were followed. Hypertension was present in 39.7%, overweight in 14.7%, obesity in 27.9%, and ECG abnormality as evidence of ischemic heart disease in 8.38%, RBB in 3.38% and sinus tachycardia in 3.38%. 24% had raised blood sugar, and cholesterol was raised in 25% of subjects. The Periscope study was done in 48 of the samples.
Vascular stiffness indices such as pulse wave velocity and augmentation index were increased in 75% of subjects. Trace element analysis for zinc and copper was done in 49 samples. An imbalance of trace elements, either increased or decreased levels, was present in 37 subjects. Vascular stiffness and alteration of trace elements correlated with the risk factors and ischemic heart disease. Based on the above study, we suggest that further detailed studies be done to consider fortification of food with trace elements to reduce the risk of ischemic heart disease.

Indian Heart Journal 67 (2015) S121–S133

Study to assess improvement in therapeutic outcomes by platelet inhibition with prasugrel
Amit Madaan
Flat No. 402 Anand Towers, Sarvodya Nagar 117/k/13, Kanpur 208005, Uttar Pradesh, India

Background: Despite the established short-term and long-term benefits of DAPT with aspirin and clopidogrel, many patients continue to have recurrent atherothrombotic events. Prasugrel is thought to inhibit ADP-induced platelet aggregation more rapidly, more consistently and to a greater extent than standard and higher doses of clopidogrel. We aimed to test whether prasugrel prevents clinical ischemic events better than clopidogrel, and also compared safety (bleeding events) between the two.
Methods: A total of 136 patients were studied with a primary diagnosis of ACS (NSTEMI or STEMI) who underwent PCI.
Patients were assigned to a clopidogrel group (600 mg LD/150 mg MD for 1 month, then 75 mg for 8 months) or a prasugrel group (60 mg LD/10 mg MD). The primary end points studied were CV death and ACS, stent thrombosis, and bleeding events (GUSTO bleeding criteria).
Results: The incidence of CV death and ACS was 11.7% and 8.8% in the clopidogrel and prasugrel groups respectively (p < 0.05), while that of stent thrombosis was 4.4% and 2.9% (p < 0.05). In the diabetic subgroup, the incidence of CV death and ACS was 16.7% and 8.3% (p < 0.05). Bleeding events in the two groups were similar: severe or life-threatening (0%), moderate (2.9%) and mild (5.8%) (p > 0.05).
Conclusions: Our results support the hypothesis that the greater inhibition of ADP-induced platelet aggregation by prasugrel (a potent P2Y12 inhibitor) is more effective at preventing ischemic events (including stent thrombosis) than the inhibition conferred by a high-dose clopidogrel regimen, with similar risks of bleeding. The results were also in favor of prasugrel in the diabetic subgroup.

Trace element levels in coronary artery disease
G. Subrahmanyam 1,*, E. Ravi Kumar 2, Mahaboob V. Shaik 1
1 Advanced Research Center, Narayana Medical College & Hospitals, Nellore 524003, Andhra Pradesh, India
2 Department of Cardio Vascular Surgery, Narayana Medical College & Hospitals, Nellore 524003, Andhra Pradesh, India

Background: Cardiovascular disease/coronary artery disease is a leading cause of global morbidity and mortality. Trace elements such as Se, Zn and Cu play a crucial role in defending against oxidant damage. No studies have been carried out to analyse trace element levels in coronary artery tissue and correlate them with the disease condition.
Objective: Hence, the aim of the present study was to investigate the changes occurring in the levels of zinc (Zn), copper (Cu) and selenium (Se) in coronary artery tissues of patients with CAD.
Method: Coronary artery samples were collected from known CAD patients during bypass surgery. These samples were analyzed for Se, Zn and Cu using atomic absorption spectrophotometry, and results are expressed in terms of wet weight of coronary artery tissue.
Results: The study included a total of 20 samples. In the present study, copper levels in the coronary arteries of patients with CAD ranged from 0.5 to 1.5 mg/g weight of coronary artery, zinc levels from 1.5 to 8.5 mg/g, and selenium levels from 0.15 to 0.45 mg/g.
Conclusion: The observations of the present study showed the levels of Zn, Cu and Se in coronary artery samples of CAD patients.

Effect of vanadium supplementation on high fat diet induced hyperlipidemia
G. Subramanyam 1,*, Ramalingam 2, Veeranjaneyulu 3
1 Department of Cardiology, Narayana Medical Institutions, Chinta Reddy Palem, Nellore, India
2 Department of Biochemistry, Narayana Medical Institutions, Chinta Reddy Palem, Nellore, India
3 Animal Lab, Narayana Medical Institutions, Chinta Reddy Palem, Nellore, India

In our earlier studies, the prevalence of ischemic heart disease and serum cholesterol levels in mica mine workers in Nellore district were lower when compared with the other rural population. The vanadium content of mica, and blood levels of vanadium in mica workers, are high compared with the other rural population. Hence a study in experimental animals was undertaken in our laboratory to determine the effect of vanadium. Vanadium is an essential trace element in certain animals and its role in humans is debated. Under physiological conditions vanadium predominantly exists in either an anionic form (vanadate) or a cationic form (vanadyl). Vanadate is mainly bound to transferrin and to a lesser extent to albumin.
The present study aimed to determine the effect of vanadium supplementation on high fat diet induced hyperlipidemia in experimental animals. In this study, New Zealand white breed male rabbits were divided into three groups of 6 each. Group I: rabbits fed a standard diet. Group II: rabbits fed a 2% cholesterol diet. Group III: rabbits fed the Group II diet and supplemented with 0.75 mg/kg of elemental vanadium as sodium metavanadate. Total cholesterol, LDL cholesterol and triglycerides were significantly decreased in Group III when compared with Group II after the experiment; HDL was similar in both groups. The present study shows an antilipidemic effect of vanadium in experimental rabbits. Supplementation of vanadium may prevent a cardiovascular risk factor such as hyperlipidemia.

A study of cardiovascular involvement in cases of rheumatoid arthritis with high disease activity
R.K. Kotokey *, M.S. Chaliha, Luhamdao Bathari
Old D C Bungalow, East Chowkidingee 786001, India

Introduction: Rheumatoid arthritis (RA) is a chronic inflammatory disease of unknown etiology marked by symmetric, peripheral

An Evaluation of Three Wound Measurement Techniques in Diabetic Foot Wounds

Julia Shaw, BSc 1; Ciara M Hughes, PhD 2; Katie M Lagan, DPhil 2; Patrick M Bell, MD FRCP 3; Michael R Stevenson, BSc FSS 4
1 Regional Centre for Endocrinology and Diabetes, Royal Hospitals, Belfast, Northern Ireland
2 University of Ulster, Newtownabbey, Belfast, Northern Ireland
3 Regional Centre for Endocrinology and Diabetes, Royal Hospitals, Belfast, Northern Ireland
4 Medical Statistics, Epidemiology and Public Health, Queen's University, Belfast, Northern Ireland
Corresponding Author: Professor Patrick M Bell, East Wing Office, Royal Hospitals, Grosvenor Rd, Belfast BT12 6BA, Northern Ireland. Email: Patrick.bell@royalhospitals.n-i.nhs.uk
Received for publication 19 January 2007 and accepted in revised form 21 June 2007. Additional information for this article can be found in an online appendix at http://care.diabetesjournals.org.
Diabetes Care Publish Ahead of Print, published online June 26, 2007. Copyright American Diabetes Association, Inc., 2007.

Approximately 80% of diabetes-related amputations are preceded by a diabetic foot ulcer (1-2). Wound measurement is an important component of successful wound management (3-6). Accurate identification of the wound margin and the calculation of wound area are crucial (7-9). Although more complex methods of wound measurement exist (planimetry, digitising techniques and stereophotogrammetry) (4,10-14), current practice focuses on wound measurement using simple ruler-based methods or on wound tracing. Ruler-based schemes tended to be less reliable in wounds >5cm2 (11). Various mathematical formulae (including the calculation of area based on the formula for an ellipse) have been proposed to improve accuracy in wound surface area calculation in wounds <40cm2 in size (10,11,15-17). The aim of this study was to evaluate and compare three wound measurement techniques: the Visitrak system (Smith and Nephew Healthcare Ltd., Hull), a digital photography and image processing system (IP) (Analyze Version 6.0.
Lenexa, Kansas, US) and an elliptical measurement method using the standard formula (πab) for the calculation of the area of an ellipse.

Research Design and Methods
Patients (n=16) with neuropathic and neuroischaemic diabetic foot wounds were recruited from the Diabetic Foot Clinic in the Royal Hospitals Trust, Belfast. Ethical obligations were fulfilled and patients received standard multi-disciplinary care. Validity and repeatability within each method were investigated and determined by measuring images of a known size 20 times each. Repeatability and comparability were considered between each method of measurement on the wounds. Each wound was traced and measured a total of 9 times; wound surface area was calculated in mm2, and means and standard deviations were calculated.

Statistical Analysis
Validity was analysed using a 1-sample t-test. Repeatability within each wound measurement method was investigated by calculating a coefficient of variation (CV) for each wound measurement. Using SPSS (Version 11.0 for Windows), Friedman's test was used to determine if any one method was consistently more repeatable than another. In order to compare wound measurement between the methods, a mean wound size was calculated for each wound using each measurement method, a logarithmic conversion of the data was performed, and an Analysis of Variance (ANOVA) was used to complete a calculation of comparability. A Bland and Altman plot, supported by a paired t-test, was used to examine differences between the elliptical and Visitrak methods.

Results
Validity varied across the three methods but was deemed to be acceptable overall (Table 1). The Visitrak method measured images <25mm2 inaccurately (p<0.001), and the elliptical method tended to under-estimate size in small wounds (p<0.001). The IP method was advantageous in allowing unique calibration of each image, and so eliminated subjective wound tracing. The method was repeatable.
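The elliptical area formula (πab, where a and b are the semi-axes) and the coefficient of variation used to express repeatability can be sketched in a few lines of Python; the measurement values below are hypothetical and not taken from the study:

```python
import math
import statistics

def ellipse_area(length_mm: float, width_mm: float) -> float:
    """Wound area from the standard ellipse formula pi*a*b,
    where a and b are the semi-axes (half the maximum length
    and half the perpendicular width)."""
    return math.pi * (length_mm / 2) * (width_mm / 2)

def coefficient_of_variation(measurements: list[float]) -> float:
    """CV (%) = sample standard deviation / mean * 100, used here
    to express the repeatability of repeated tracings."""
    return statistics.stdev(measurements) / statistics.mean(measurements) * 100

# Hypothetical wound traced 3 times (length x width in mm):
areas = [ellipse_area(l, w) for l, w in [(30, 20), (31, 19), (30, 21)]]
cv = coefficient_of_variation(areas)
```

A low CV across repeated tracings of the same wound indicates good repeatability, which is how the mean CVs quoted below for the three methods should be read.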
The main disadvantage was that the validity of this method was questionable. The mean CV (n=46) for all wounds was calculated as 7.0 (Visitrak), 4.7 (IP) and 8.5 (Ellipse), indicating that repeatability was acceptable overall. Friedman's test indicated that no one measurement method was consistently more repeatable than another (p=0.15). Elliptical wound measurement had some of the advantages of the Visitrak method (tracings were quick, easy, inexpensive and non-invasive to perform). The main disadvantage described in using ruler-based mathematical methods is that they have been shown to over-estimate wound area by 10-25% (16,18) in wounds >5cm2. By contrast, in this study the elliptical method of measurement was shown to under-estimate wound size in smaller wounds (p<0.001) compared with the other two methods. Analysis of comparability indicated that there were some differences between the three methods. Graphical analysis reported 3 outlying values (both high and low) using the IP method, and so wound measurement could be inaccurate either way compared with the other two methods. Differences were shown between the Visitrak and elliptical methods when analysed alone (t-test = -2.72, p=0.017). This study does have limitations: the sample size was small, conclusions can only be drawn for a specific type of wound, and there is no gold standard method of wound measurement. The authors conclude that the elliptical method is a suitable measurement tool for use in studies investigating diabetic foot wounds, as it is simple, inexpensive, valid, repeatable and easy to use.

Discussion
The main advantages of the Visitrak method were that the tracings were quick, easy, inexpensive to perform and non-invasive for the patient. Foot curvature was considered and the subjectivity associated with manual square counting was removed. The method was both valid and repeatable in the measurement of wounds >25mm2 in size.
The main disadvantage was the inability to measure small wounds of <25mm2 accurately (p<0.001). When compared with the other methods, the Visitrak method tended to underestimate wound size, and statistically significant differences were found (p=0.017) when compared with the elliptical method alone.

Acknowledgements
We wish to thank Dr RJ Winder, Director of the Health and Rehabilitation Sciences Research Institute, University of Ulster, Newtownabbey, Belfast, for his expertise and assistance with the IP System.

References
1. Pecoraro RE, Reiber GE, Burgess EM: Pathways to diabetic limb amputation: basis for prevention. Diabetes Care 13:513-521, 1990.
2. McNeely MJ, Boyko EJ, Ahroni JH, Stensel VL, Reiber GE, Smith DG, Pecoraro RF: The independent contributions of diabetic neuropathy and vasculopathy in foot ulceration: how great are the risks? Diabetes Care 18:216-219, 1995.
3. Oyibo SO, Jude EB, Tarawneh I, Nguyen HC, Armstrong DG, Harkless LB, Boulton AJM: The effects of ulcer size and site, patient's age, sex and type and duration of diabetes on the outcome of diabetic foot ulcers. Diabetic Medicine 18, 2:133-138, 2001.
4. Flanagan M: Wound measurement: can it help us to monitor progression to healing? Journal of Wound Care 12, 5:189-194, 2003.
5. Margolis DJ, Hoffstad O, Gelfand JM, Berlin JA: Surrogate end points for the treatment of diabetic neuropathic foot ulcers. Diabetes Care 26, 6:1696-1700, 2003.
6. McArdle J, Smith M, Brewin E, Young M: Visitrak: wound measurement as an aid to making treatment decisions. The Diabetic Foot 8, 4:207-211, 2005.
7. Plassmann P, Jones BF: Measuring leg ulcers by colour-coded structured light. J Wound Care 1, 3:35-38, 1992.
8. Plassmann P, Melhuish JM, Harding KG: Methods of measuring wound size: a comparative study. Ostomy Wound Management 40, 7:50-52, 1994.
9. Plassmann P: Measuring wounds. J Wound Care 4, 6:269-272, 1995.
10. Kantor J, Margolis DJ: Efficacy and prognostic value of simple wound measurements.
Arch Dermatol 134:1571-1574, 1998.
11. Öien RF, Håkansson A, Hansen BU, Bjellrup M: Measuring the size of ulcers by planimetry: a useful method in the clinical setting. J Wound Care 11, 5:165-168, 2002.
12. Lagan KM, Dusoir AE, McDonagh SM, Baxter D: Wound measurement: the comparative reliability of direct versus photographic tracings analyzed by planimetry versus digitising techniques. Arch Phys Med Rehabil 81:1110-1116, 2000.
13. Langemo DK, Melland H, Hanson D, Olson B, Hunter S, Henly SJ: Two-dimensional wound measurement: comparison of 4 techniques. Adv Wound Care 11, 7:337-343, 1998.
14. Melhuish JM, Plassmann P, Harding KG: Circumference, area and volume of the healing wound. Journal of Wound Care 3, 8:380-384, 1994.
15. Johnson JD: Using ulcer surface area and volume to document wound size. J Am Podiatr Med Assoc 85, 2:91-95, 1995.
16. Goldman RJ, Salcido R: More than one way to measure a wound: an overview of tools and techniques. Advances in Skin and Wound Care, Sept/Oct:236-242, 2002.
17. Mayrovitz HN: Shape and area measurement considerations in the assessment of diabetic plantar ulcers. Wounds 9, 1:21-28, 1997.
18. Majeske C: Reliability of wound surface area measurements. Physical Therapy 72, 2:138-141, 1992.

Table 1. Summary of results reported on the validity and repeatability of 3 wound measurement methods in diabetic foot wounds.

Definitions (in relation to wound measurement):
- Validity/reliability: the ability of an instrument to measure what it is supposed to measure (wound area) in a precise way over a short period of time.
- Repeatability: the ability of the same operator using the same instrument to measure the same wound repeatedly over a short period of time.

Statistical analysis:
- Validity: 1-sample t-test on images of a known size.
- Repeatability: coefficients of variation (CVs) calculated for each wound measurement method; Friedman's test used to determine if one method was consistently more repeatable than another.

Validity results:
Method | Image of a known size (mm2) | Mean area measured (mm2) | % difference | p-value
Visitrak | 25 | 19.5 | -22.0 | <0.001
Visitrak | 100 | 98.5 | -1.5 | 0.27
Visitrak | 1600 | 1580.5 | -1.2 | 0.06
IP | 20 | 20.02 | +0.1 | 0.64
IP | 20 | 20.01 | 0.0 | 0.73
Elliptical | 37 | 34.3 | -7.3 | <0.001
Elliptical | 883 | 883.0 | 0.0 | 1.0
Elliptical | 5361 | 5338.2 | -0.4 | 0.26

Repeatability results: mean CV 7.0% (Visitrak), 4.7% (IP), 8.5% (Elliptical); Friedman's test p=0.15.

REVIEW ARTICLE

The English National Screening Programme for diabetic retinopathy 2003–2016

Peter H. Scanlon 1,2
Received: 5 January 2017 / Accepted: 8 February 2017 / Published online: 22 February 2017
© The Author(s) 2017. This article is published with open access at Springerlink.com

Abstract: The aim of the English NHS Diabetic Eye Screening Programme is to reduce the risk of sight loss amongst people with diabetes by the prompt identification and effective treatment, if necessary, of sight-threatening diabetic retinopathy at the appropriate stage during the disease process. In order to achieve the delivery of evidence-based, population-based screening programmes, it was recognised that certain key components were required. It is necessary to identify the eligible population in order to deliver the programme to the maximum number of people with diabetes. The programme is delivered and supported by suitably trained, competent and qualified clinical and non-clinical staff who participate in recognised ongoing Continuous Professional Development and Quality Assurance schemes.
There is an appropriate referral route for those with screen-positive disease for ophthalmology treatment, and for assessment of the retinal status in those with poor-quality images. Appropriate assessment of control of their diabetes is also important in those who are screen positive. Audit and internal and external quality assurance schemes are embedded in the service. In England, two-field mydriatic digital photographic screening is offered annually to all people with diabetes aged 12 years and over. The programme commenced in 2003 and reached population coverage across the whole of England by 2008. Increasing uptake has been achieved, and the current annual uptake of the programme in 2015–16 is 82.8%: 2.59 million people with diabetes were offered screening and 2.14 million were screened. The benefit of the programme is that, in England, diabetic retinopathy/maculopathy is no longer the leading cause of certifiable blindness in the working age group.

Keywords: Screening · Diabetic retinopathy · Blindness

Background
A reduction in diabetes-related blindness by at least one-third was declared a primary objective for Europe in 1989 in the St. Vincent Declaration [1]. Countrywide population-based diabetic retinopathy screening programmes have developed in Iceland (17,200 with diabetes [2] in 2015), Scotland (271,300 people [3] with diabetes), Wales (183,300 people [3] with diabetes), Northern Ireland (84,800 [3] people with diabetes) and England (2.91 million people [3] with diabetes). Regional and local screening programmes have developed in other parts of Europe [4] and around the world. The cost of the English Screening Programme is believed to be approximately 85.6 million US dollars, or 40 US dollars per person screened.
The Wilson and Junger criteria for a screening programme, which are the 1968 principles [5] applied by the World Health Organisation, formed the basis of the UK National Screening Committee criteria for appraising the viability, effectiveness and appropriateness of a screening programme when the English NHS Diabetic Eye Screening Programme commenced in 2003. I previously described how we applied these principles to sight-threatening diabetic retinopathy to provide an evidence base [6, 7] for the development of the programme.

Managed by Massimo Porta.
Correspondence: Peter H. Scanlon, peter.scanlon@glos.nhs.uk
1 The English NHS Diabetic Eye Screening Programme, Gloucestershire Diabetic Retinopathy Research Group, Office above Oakley Ward, Cheltenham General Hospital, Sandford Road, Cheltenham GL53 7AN, UK
2 Gloucestershire Hospitals NHS Foundation Trust, Cheltenham, UK
Acta Diabetol (2017) 54:515–525. DOI 10.1007/s00592-017-0974-1

It is important to realise the following principles of screening:
1. Screening is a public health programme, not a diagnostic test.
2. Large numbers of apparently healthy individuals are invited for screening and, if their screening test is positive, offered further diagnostic investigation.
3. Some people may be harmed by the process, or falsely reassured.
4. There is an ethical and moral responsibility to ensure that the programmes are of high quality.
5. Quality assurance of screening programmes is therefore essential to ensure that the programme achieves the highest possible standards and minimises harm.
These principles are fundamentally different to most branches of medicine where tests are considered to be diagnostic, although, even in the circumstances of diagnostic tests, there will be some false positives and some false negatives.
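The arithmetic that these false positives and false negatives imply can be made concrete; a minimal sketch, with hypothetical counts chosen to illustrate a test that is 90% sensitive and 95% specific:

```python
def sensitivity(true_pos: int, false_neg: int) -> float:
    """Proportion of people with the condition that the screening test detects."""
    return true_pos / (true_pos + false_neg)

def specificity(true_neg: int, false_pos: int) -> float:
    """Proportion of people without the condition who are correctly not referred."""
    return true_neg / (true_neg + false_pos)

# Hypothetical screen of 1000 people, 100 of whom have the condition:
sens = sensitivity(true_pos=90, false_neg=10)   # 0.90 -> 1 in 10 cases missed
spec = specificity(true_neg=855, false_pos=45)  # 0.95 -> 1 in 20 of the healthy referred unnecessarily
```

Note that the two measures are computed over different denominators (those with, and those without, the condition), which is why both must be reported for a screening test.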
The sensitivity of a screening test is the percentage of the condition that is correctly detected. If a screening test has a sensitivity of 90%, this means that 1 in 10 cases is missed. The specificity of a screening test reflects the percentage of people who are referred unnecessarily: if a screening test is 90% specific, 1 in 10 of those without the condition is referred unnecessarily. In 1995, a consensus view was put forward by clinicians at a meeting of the British Diabetic Association in Exeter that a screening test for diabetic retinopathy should have a minimum sensitivity of 80% and a specificity of 95%. Most studies on screening tests for diabetic retinopathy have achieved a sensitivity of over 85% against a recognised reference standard, and the specificity target of 95% has been achieved when the numbers with ungradable images are not counted as test positive [8], but has proved more challenging to achieve when they have been counted as test positive [9, 10]. It is important that any information sent to people who are offered screening tests explains that the test will not detect all people with the disease and that a small number of people will be referred unnecessarily. It is also important to explain in the literature that a screening test for sight-threatening diabetic retinopathy will not pick up all other eye conditions.

Stages in the development of the English NHS diabetic eye screening programme
When developing the NHS Diabetic Eye Screening Programme in England, we needed to consider 11 different stages, which are listed in Table 1. It was critical for diabetologists, ophthalmologists, public health doctors and optometrists to speak with one voice; otherwise, we would never have established the programme. Assessment and treatment facilities are available in England as part of our National Health Service, but this question becomes much more relevant in developing countries where treatment facilities may not be so readily available.
There is no point in screening for sight-threatening diabetic retinopathy if treatment facilities are either not available or inadequate. In England, everyone has a Primary Care Physician (GP), so we are able to obtain details from Primary Care on those diagnosed with diabetes, and everyone has an NHS identifier number. A letter is sent out to everyone with diabetes aged 12 years and over to invite them for a diabetic eye screening appointment once a year. National leaflets have been produced to explain diabetic retinopathy, the screening test, and what happens if screen-positive diabetic retinopathy is found. We have included information in the leaflet that the screening is not a diagnostic test and hence will detect at best 90% of sight-threatening diabetic retinopathy, and will not detect other eye conditions. There has also been active engagement with patient organisations. There are appropriate exclusion criteria for those who do not need to be invited for screening, e.g. those already under ophthalmology care and the terminally ill. Software has been developed to provide a single collated list of people with diabetes and to support call-recall, screening, grading and audit. In the database, the images are attached to patient details, and confidentiality of patient data is a priority. The database is regularly backed up, and an IT infrastructure has been established for capture and transmission of images to and from the cameras. In England, we use non-mydriatic cameras and undertake mydriatic photography on all people with diabetes aged 12 years and older. The two 45° fields captured by the English Screening Programme are shown in Fig. 1, together with the one 45° field used by Scotland and the seven 30° stereo fields that are used as a reference standard against which screening tests are judged.
The English Screening Programme sets a minimum camera specification and tests all prospective cameras that meet this minimum specification on patients who are known to have specific features of diabetic retinopathy. The specification document is fairly lengthy and includes the following statements: The unit must be capable of providing a minimum field of view of 45° horizontally and 40° vertically at the specified resolution (at least 30 pixels/degree). The unit must be capable of accommodating refractive errors of ±15 D as detailed in EN ISO 10940. The internal fixation aid should be capable of positioning the eye to capture the fields of regard specified below. The 'field of regard' of the fundus camera must make it relatively straightforward for an appropriately trained and competent retinal screener to capture images centred on (1) the foveal area and (2) the optic disc. In addition, the 'field of regard' of the fundus camera must be able to capture images as defined by the area covered by fields 3–7 of the seven-field protocol used in the Early Treatment Diabetic Retinopathy Study [11]. The list of cameras that are currently approved for use in the English Screening Programme is published by Public Health England on their webpage [12]. There has been a progressive increase in the size of uncompressed images from modern camera backs, which are now over 20 MB; the English NHS Diabetic Eye Screening Programme recommends capture of images that are compressed to a size of 1–2 MB. This level of compression has not been shown to lose any clinically significant information [13–15].
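The resolution arithmetic in the specification (pixels per degree = pixels across the field divided by the field angle) can be illustrated with a minimal check; the function and the sensor dimensions below are hypothetical illustrations, not part of the published specification:

```python
def meets_min_spec(width_px: int, height_px: int,
                   h_field_deg: float, v_field_deg: float) -> bool:
    """Check an image against the stated minimums: at least a
    45 x 40 degree field, imaged at >= 30 pixels/degree in both axes."""
    if h_field_deg < 45 or v_field_deg < 40:
        return False
    return (width_px / h_field_deg >= 30) and (height_px / v_field_deg >= 30)

# Hypothetical 3000 x 2000 pixel sensor covering a 45 x 40 degree field
# gives about 67 and 50 pixels/degree, comfortably above the minimum:
ok = meets_min_spec(3000, 2000, 45, 40)
```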
Table 1. Stages and considerations required in the development of the English Screening Programme:
1. Manoeuvring around the politics of funding
2. Are assessment and treatment facilities available?
3. Identify cohort for invitation and call-recall
4. How to invite them?
5. Informing the patients and maximising uptake
6. Establish an IT infrastructure
7. Choose a camera and decide on compression levels for photographs
8. The test
9. The grading referral criteria and viewing of the images
10. Employ and train a competent workforce
11. Introduce Quality Assurance

Fig. 1 Photographic fields

When considering whether to routinely dilate the pupil of people with diabetes attending for screening, the study that was influential in the decision-making process was a population-based screening study [16] of 1549 people with diabetes who had received non-mydriatic one-field digital photography followed by mydriatic two-field digital photography and a reference standard examination by an experienced ophthalmologist, whose examination was tested separately against seven-field stereo-photography [17]. The sensitivity for one-field non-mydriatic photography was 86.0% (95% CI, 80.9–91.1%), the specificity was 76.7% (95% CI, 74.5–78.9%) and the poor-quality image rate was 19.7% (95% CI, 18.4–21.0%). The sensitivity for two-field mydriatic photography was 87.8% (95% CI, 83.0–92.6%), the specificity was 86.1% (95% CI, 84.2–87.8%) and the poor-quality image rate was 3.7% (95% CI, 3.1–4.3%). This study led to the approach used in England of two-field mydriatic photography, and the approach in Scotland of staged mydriasis with one-field non-mydriatic photography and dilation only if poor-quality images were obtained. The correlation [18] with age led Northern Ireland to routinely dilate only those aged 50 years and over.
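The 95% confidence intervals quoted for these sensitivities and specificities are standard proportion intervals. A sketch of the usual normal-approximation calculation, using hypothetical counts rather than the study's data:

```python
import math

def proportion_ci(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """Normal-approximation 95% CI for a proportion, such as a
    sensitivity or specificity estimated from n reference-standard cases."""
    p = successes / n
    half_width = z * math.sqrt(p * (1 - p) / n)
    return (p - half_width, p + half_width)

# Illustrative: 88 of 100 screen-positive cases detected -> point estimate 0.88
lo, hi = proportion_ci(88, 100)
```

The interval narrows as n grows, which is why population-based studies with large samples, such as the one above, yield the tight intervals quoted.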
It has been demonstrated that there is a strong correlation [18] between age and poor-quality image rates in diabetic retinopathy screening, for both non-mydriatic and mydriatic photography. Hence, publications [19–22] with small numbers in a young age range are not relevant to population-based screening programmes, where many of the individuals to be screened are over 60 years. In any population-based screening programme, it is necessary to balance acceptability to the individuals being screened and cost-effectiveness of the screening method with detection rates of sight-threatening diabetic retinopathy. Population-based screening programmes that use non-mydriatic photography, like the Scottish Screening Programme [23], usually capture one field centred on the fovea, and those that use mydriatic photography, like the English Screening Programme, usually capture a second field centred on the disc, which also gives a second view of the macular area. In 1989, Moss [11] demonstrated that for eight retinopathy levels, the rate of agreement with seven stereoscopic fields ranges from 80% for two 30° stereo fields to 91% for four 30° stereo fields. In 2003, Scanlon [17] reported that two-field mydriatic digital photography gave a sensitivity of 80.2% (75.2–85.2) and specificity of 96.2% (93.2–99.2) in comparison with seven-field stereo-photography. In the latter study, 15.3% of seven-field sets were ungradable compared with 1.5% of the two-field digital photographs. Clear protocols need to be in place for management of people with poor-quality images. In the English Screening Programme, all people with poor-quality images are referred for examination by slit lamp biomicroscopy. The English NHS Diabetic Eye Screening Programme routinely measures Visual Acuity at screening, but it is recognised that Visual Acuity is not sufficiently sensitive on its own to be a screening tool [24, 25]. Hence, it needs to be used in conjunction with other features that are detected at grading.
The diabetic retinopathy grading classification that has the best evidence base is the Early Treatment Diabetic Retinopathy Study (ETDRS) final Retinopathy Severity Scale [26], because it provided the first detailed classification system for retinopathy severity based on a natural history study of untreated eyes. However, this relies on detailed grading of stereo-photographs of seven fields of each eye. This scale did not grade lesions in the macular area. The ETDRS study did define 'clinically significant macular oedema', which was a level at which laser treatment was advised, but this was based on stereo-photography, and the study did not recommend a referral level for closer observation before laser treatment was recommended. Table 2 shows the International Classification [27], which was developed by the American Academy of Ophthalmology in 2002 and recommends that any level of retinopathy more severe than mild retinopathy (defined as the presence of microaneurysms only) warrants examination by an ophthalmologist. However, this is too early a referral level for use in the English Screening Programme; the referral level for the English Screening Programme is also listed in Table 2. Table 3 shows the risks of progression to proliferative diabetic retinopathy as recorded in the Early Treatment Diabetic Retinopathy Study [26]. Screening programmes need to accept a certain level of risk. In the English programme, we needed to decide whether we were prepared to accept a 6.2% risk or an 11.3% risk that a patient who has been screened and given a 1-year appointment develops proliferative DR before their next screen. We opted for the 11.3% risk, which is the equivalent of moderate non-proliferative diabetic retinopathy on the ETDRS final Retinopathy Severity Scale [26]. We also had to develop a definition for maculopathy referral based on two-dimensional markers.
The ETDRS study did not classify maculopathy, but it did make recommendations on what constituted clinically significant macular oedema requiring laser treatment, as shown in Table 4. We opted for three referral criteria, based on two-dimensional photographic markers and measurement of Visual Acuity:

1. Exudate within 1 disc diameter (DD) of the centre of the fovea (Fig. 2).
2. Circinate or group of exudates within the macula (Fig. 3).
3. Any microaneurysm or haemorrhage within 1 DD of the centre of the fovea, only if associated with a best VA of ≤ 6/12 (if no stereo) (Fig. 4).

A minimum screen resolution is recommended [12] when viewing the images for grading, and this recommendation has progressed as screen technology has advanced. The current minimum acceptable standard for screen resolution is a vertical resolution of 1080 (1920 × 1080), with an achievable and recommended standard of a minimum of 1200 (1920 × 1200 or higher). It is recommended that a minimum of 60% of the image should be viewable on the grading screen, to avoid too much scrolling to see the full image. To ensure that the whole screening programme is provided by a trained and competent workforce, a minimum qualification [28] is required for screeners and graders in the English programme. Evidence of ongoing continuous professional development and taking the monthly External Quality Assurance Test sets [29, 30] is also required; all 1500 graders in the English Screening Programme are required to take a monthly test set of 20 image sets, and their grading of these images is compared against a guide grade. An international version of the qualification [31] and of the monthly test and training set [32] is available for screeners working outside the UK. An important part of any screening programme is the introduction of Quality Assurance.
The purpose of introducing Quality Assurance is to reduce the probability of error and risk, ensure that errors are dealt with competently and sensitively, help professionals and organisations improve year on year, and set and keep under review national standards.

Table 2 International and English screening retinopathy classifications

'International' clinical classification [27]:
- No apparent retinopathy, or mild NPDR (microaneurysms only): optimise medical therapy, screen at least annually
- More than just microaneurysms but less severe than severe NPDR: refer to ophthalmologist
- Severe NPDR, any of the following with no signs of PDR: (a) extensive intraretinal haemorrhage (>20) in 4 quadrants; (b) definite venous beading in 2+ quadrants; (c) prominent IRMA in 1+ quadrant: consider scatter photocoagulation for type 2 diabetes
- Neovascularisation or vitreous/pre-retinal haemorrhage: scatter photocoagulation without delay for patients with vitreous haemorrhage or neovascularisation within 1 disc diameter of the optic nerve head

English Screening Programme [48]:
- R0 (no retinopathy): currently screen annually
- R1 Background (microaneurysm(s) or HMa, retinal haemorrhage(s), venous loop, any exudate or cotton wool spots (CWS) in the presence of other non-referable features of DR): screen annually
- R2 Pre-proliferative (venous beading, venous reduplication, intraretinal microvascular abnormality (IRMA), multiple deep, round or blot haemorrhages): refer to ophthalmologist
- R3A Proliferative (new vessels on disc (NVD), new vessels elsewhere (NVE), pre-retinal or vitreous haemorrhage, pre-retinal fibrosis ± tractional retinal detachment): urgent referral to ophthalmologist
- R3S Stable treated proliferative (evidence of peripheral retinal laser treatment AND stable retina from photograph taken at or shortly after discharge from the hospital eye service (HES)): follow-up annually within screening or at appropriate interval in surveillance
The NHS Diabetic Eye Screening Programme has developed three Key Performance Indicators and nine other Quality Standards [33]. These are given in Table 5, with the three Key Performance Indicators shown in the right-hand column. A programme board, which includes local health service representatives and national Quality Assurance team representatives, oversees the results of a programme's performance against the standards four times a year; if a programme is performing poorly, it is expected to improve or the service may be recommissioned to a different provider. Graders who perform poorly on test sets undergo extra training and have all of their work second graded until their performance improves. In addition, an External Quality Assurance visit to all regional programmes that undertake Diabetic Eye Screening as part of the NHS Diabetic Eye Screening Programme is undertaken every 3 years. EQA visits are an integral part of Diabetic Eye Screening Quality Assurance. Formal EQA visits to a screening programme provide the forum for a review of the whole multidisciplinary screening pathway and an assessment of the effectiveness of team working within the screening centre and associated referral sites.

Table 3 ETDRS classification of progression to proliferative DR (ETDRS final retinopathy severity scale [26]), with the risk of progression to PDR in 1 year from the ETDRS interim levels:
- No apparent retinopathy (grade 10): DR absent; (grades 14, 15): DR questionable
- Mild non-proliferative diabetic retinopathy (NPDR) (grade 20): microaneurysms only
- Grade 35 (a–e), one or more of the following: venous loops > definite in 1 field; SE, IRMA or VB questionable; retinal haemorrhages present; HE > definite in 1 field; SE > definite in 1 field. Level 30 = 6.2%
- Moderate NPDR (grade 43 a, b): H/Ma moderate in 4–5 fields or severe in 1 field, or IRMA definite in 1–3 fields. Level 41 = 11.3%
- Moderately severe NPDR (grade 47 a–d): both level 43 characteristics (H/Ma moderate in 4–5 fields or severe in 1 field, and IRMA definite in 1–3 fields), or any one of the following: IRMA in 4–5 fields; H/Ma severe in 2–3 fields; VB definite in 1 field. Level 45 = 20.7%
- Severe NPDR (grade 53 a–d), one or more of the following: >2 of the 3 level 47 characteristics; H/Ma severe in 4–5 fields; IRMA > moderate in 1 field; VB > definite in 2–3 fields. Level 51 = 44.2%, Level 55 = 54.8%
- Mild PDR (grade 61 a, b): FPD or FPE present with NVD absent, or NVE = definite
- Moderate PDR (grade 65 a, b): (1) NVE > moderate in 1 field or definite NVD, with VH and PRH absent or questionable; or (2) VH or PRH definite and NVE < moderate in 1 field and NVD absent
- High-risk PDR (grade 71 a–d), any of the following: (1) VH or PRH > moderate in 1 field; (2) NVE > moderate in 1 field and VH or PRH definite in 1 field; (3) NVD = 2 and VH or PRH definite in 1 field; (4) NVD > moderate
- High-risk PDR (grade 75): NVD > moderate and definite VH or PRH
- Advanced PDR (grade 81): retina obscured due to VH or PRH

Programme results

In the development of the programme, I calculated [7] that the NHS Diabetic Eye Screening Programme had the potential to reduce the prevalence of blindness in England from 4200 people to under 1000 people, based on UK certification of blindness. If WHO definitions were used, the prevalence, incidence and potential reductions in blindness are much greater. In 2014, Liew [34] reported on the causes of blindness certifications in England and Wales in working age adults (16–64 years) in 2009–2010 and compared these with figures from 1999 to 2000. For the first time in at least five decades, diabetic retinopathy/maculopathy was no longer the leading cause of certifiable blindness amongst working age adults in England and Wales, having been overtaken by inherited retinal disorders. This change was considered to be due to the introduction of nationwide diabetic retinopathy screening programmes in England and Wales and improved glycaemic control.
The era in which this reduction in blindness occurred was during the period when laser treatment was being used for maculopathy, before the use of VEGF inhibitors for diabetic macular oedema.

Table 4 ETDRS maculopathy classification: clinically significant macular oedema [49], as defined by any of the following (outcome in each case: consider laser)
- A zone or zones of retinal thickening one disc area or larger, any part of which is within one disc diameter of the centre of the macula
- Retinal thickening at or within 500 microns of the centre of the macula
- Hard exudates at or within 500 microns of the centre of the macula, if associated with thickening of the adjacent retina (not residual hard exudates remaining after disappearance of retinal thickening)

[Fig. 2 Exudate within 1 disc diameter (DD) of the centre of the fovea. Fig. 3 Circinate or group of exudates within the macula. Fig. 4 A microaneurysm within 1 DD of the centre of the fovea associated with a best VA of ≤ 6/12]

In 2015–2016, the NHS Diabetic Eye Screening Programme in England [35] offered screening to 2,590,082 people with diabetes using two-field mydriatic digital photography. There were 3,083,401 known people with diabetes in England, but people who are under an ophthalmologist for diabetic eye disease and certain other categories of people (e.g. the terminally ill) are not invited. A total of 2,144,007 people with diabetes were screened (uptake 82.8%). New registrations to programmes in 2015–2016 were 326,587. There were 7593 urgent referrals with proliferative retinopathy and 52,597 referrals with screen-positive maculopathy or pre-proliferative diabetic retinopathy. The rate of retinopathy per 100,000 screened was 2807.

Future developments for the programme

Changes in technology have introduced three-dimensional imaging in the form of Optical Coherence Tomography.
Table 5 Standards and key performance indicators in the English NHS Diabetic Eye Screening Programme:
1. Proportion of the known eligible people with diabetes offered an appointment for routine digital screening. Acceptable: ≥95%; achievable: ≥98%
2. Proportion of people newly diagnosed with diabetes offered a first routine digital screening appointment that is due to occur within 89 calendar days of the programme being notified of their diagnosis. Acceptable: ≥90%; achievable: ≥95%
3. Proportion of eligible people with diabetes offered an appointment for routine digital screening occurring 6 weeks before or after their due date. Acceptable: ≥95%; achievable: ≥98%
4. Proportion of people with diabetes offered an appointment for slit lamp biomicroscopy 6 weeks before or after their due date. Thresholds to be set
5. Proportion of people with diabetes on digital surveillance who have been offered an appointment that occurs within a reasonable time of their follow-up period. Thresholds to be set
6. Proportion of pregnant women with diabetes seen within 6 weeks of notification of their pregnancy to the screening programme. Thresholds to be set
7. Proportion of those offered routine digital screening who attend a digital screening event where images are captured. Acceptable: ≥75%; achievable: ≥85% (KPI 1)
8. Proportion of eligible people with diabetes who have not attended for screening in the previous 3 years. Thresholds to be set
9. Proportion of eligible people with diabetes where a digital image has been obtained but the final grading outcome is ungradable. Acceptable: 2–4%
10. Time between routine digital screening event, digital surveillance event or slit lamp biomicroscopy event and printing of results letters to the person with diabetes, GP and relevant health professionals. Acceptable: 85% <3 weeks and 99% <6 weeks (KPI 2)
11. Time between routine digital screening event, digital surveillance event or slit lamp biomicroscopy event and issuing the referral to the hospital eye service. Urgent: acceptable ≥95% within 2 weeks, achievable ≥98% within 2 weeks. Routine: acceptable ≥90% within 3 weeks, achievable ≥95% within 3 weeks
12. Time between screening event and first attended consultation at hospital eye services or digital surveillance. Urgent: acceptable ≥80% within 6 weeks, achievable ≥95% within 6 weeks. Routine: acceptable ≥70% within 13 weeks, achievable ≥95% within 13 weeks (KPI 3)
13. Time between digital screening event and first attended consultation in slit lamp biomicroscopy surveillance. Acceptable: ≥70% within 13 weeks; achievable: ≥95% within 13 weeks

These machines are more costly than digital cameras and are not felt to be cost-effective as a first-line screening tool when 65% of the population of people with diabetes have no retinopathy. However, there is a high possibility that they will be introduced as a second-line screening tool for screen-positive maculopathy using two-dimensional markers. It is believed that, of the 52,597 referrals with screen-positive maculopathy, only 20% actually require treatment, and a significant proportion of the remaining 80% could be followed up in a technician-led clinic [36] that includes OCT images to exclude any significant diabetic macular oedema. Cost-effectiveness data are needed before this can be introduced. Extensive work has been done in the area [37–40] of extended screening intervals for those at low risk. The UK National Screening Committee agreed at their committee meeting on 19 November 2015, and published their recommendation in January 2016, that: (a) for people with diabetes at low risk of sight loss, the interval between screening tests should change from 1 to 2 years; (b) the current 1-year interval should remain unchanged for the remaining people at high risk of sight loss.
The introduction of this extension of the screening interval for those with no retinopathy on two consecutive screens, which is the current recommendation in England, is dependent on software development for the programme. The use of automated analysis is currently being evaluated for use in the English Screening Programme, and a recent HTA report [41] has been published on this topic. There are different ways in which automated analysis could be used:

(a) To classify images as no diabetic retinopathy or diabetic retinopathy, so that a human grader would only need to look at those with diabetic retinopathy.
(b) To detect referral levels of retinopathy.
(c) To act as a quality assurance tool for retinopathy that is missed.
(d) To determine which images are gradable and which are ungradable.

Scanning laser ophthalmoscopes and wide-field imaging have been widely studied [42–44], but this method has not yet been shown to be cost-effective. The earlier devices that provided wide-field imaging compromised [45] on the detection of microaneurysms in the central field. No hand-held device has ever been shown [46] to have sensitivities and specificities for the detection of sight-threatening diabetic retinopathy comparable to devices where the camera is fixed and the patient's head is placed on a chin rest with the forehead against a fixed band; hand-held devices cannot, therefore, be recommended for population-based screening at the present time. OCT angiography is a new technology [47] that is not currently suitable for population-based screening.

Conclusions

Screening for sight-threatening diabetic retinopathy has been shown to be very effective in England in reducing blindness due to diabetic retinopathy and reducing the number of vitrectomies being performed for advanced disease.

Compliance with ethical standards

Conflict of interest: None.

Ethical standard: The author has complied with the journal's ethical standards.
Statement of human and animal rights: This article does not contain any studies with human or animal subjects performed by any of the authors.

Informed consent: All patients screened in the English Diabetic Eye Screening Programme provide informed consent to the procedure.

Open Access: This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

References

1. Diabetes care and research in Europe: the Saint Vincent declaration (1990) Diabet Med 7:360
2. Iceland (2015) http://www.idf.org/membership/eur/iceland. Accessed 03 Feb 2017
3. Facts and Stats (2016) https://www.diabetes.org.uk/Documents/Position%20statements/DiabetesUK_Facts_Stats_Oct16.pdf. Accessed 03 Feb 2017
4. Screening for diabetic retinopathy in Europe—strategies for overcoming hurdles to progress (2011) http://www.drscreening2005.org.uk/gdansk_2011.html. Accessed 03 Feb 2017
5. Wilson J, Jungner G (1968) The principles and practice of screening for disease. Public Health Papers 34. WHO, Geneva
6. Scanlon P (2005) An evaluation of the effectiveness and cost-effectiveness of screening for diabetic retinopathy by digital imaging photography & technician ophthalmoscopy & the subsequent change in activity, workload and costs of new diabetic ophthalmology referrals. [M.D.]: London
7. Scanlon PH (2008) The English national screening programme for sight-threatening diabetic retinopathy. J Med Screen 15:1–4
8. Pandit RJ, Taylor R (2002) Quality assurance in screening for sight-threatening diabetic retinopathy. Diabet Med 19:285–291
9. Scanlon P (2005) An evaluation of the effectiveness and cost-effectiveness of screening for diabetic retinopathy by digital imaging photography and technician ophthalmoscopy and the subsequent change in activity, workload and costs of new diabetic ophthalmology referrals. UCL, London
10. Harding SP, Broadbent DM, Neoh C, Vora J, Williams EMI (1994) The Liverpool diabetic eye study—sensitivity and specificity of photography and direct ophthalmoscopy in the detection of sight threatening eye disease. Diabetic Med 79:S45
11. Moss SE, Meuer SM, Klein R, Hubbard LD, Brothers RJ, Klein BE (1989) Are seven standard photographic fields necessary for classification of diabetic retinopathy? Invest Ophthalmol Vis Sci 30:823–828
12. Diabetic eye screening: guidance on camera approval (2016) https://www.gov.uk/government/publications/diabetic-eye-screening-approved-cameras-and-settings/diabetic-eye-screening-guidance-on-camera-approval. Accessed 03 Feb 2017
13. Basu A (2006) Digital image compression should be limited in diabetic retinopathy screening. J Telemed Telecare 12:163–165
14. Conrath J, Erginay A, Giorgi R et al (2007) Evaluation of the effect of JPEG and JPEG2000 image compression on the detection of diabetic retinopathy. Eye 21:487–493
15. Li HK, Florez-Arango JF, Hubbard LD, Esquivel A, Danis RP, Krupinski EA (2010) Grading diabetic retinopathy severity from compressed digital retinal images compared with uncompressed images and film. Retina 30:1651–1661
16. Scanlon PH, Malhotra R, Thomas G et al (2003) The effectiveness of screening for diabetic retinopathy by digital imaging photography and technician ophthalmoscopy. Diabet Med 20:467–474
17. Scanlon PH, Malhotra R, Greenwood RH et al (2003) Comparison of two reference standards in validating two field mydriatic digital photography as a method of screening for diabetic retinopathy. Br J Ophthalmol 87:1258–1263
18. Scanlon PH, Foy C, Malhotra R, Aldington SJ (2005) The influence of age, duration of diabetes, cataract, and pupil size on image quality in digital photographic retinal screening. Diabetes Care 28:2448–2453
19. Massin P, Erginay A, Ben Mehidi A et al (2003) Evaluation of a new non-mydriatic digital camera for detection of diabetic retinopathy. Diabet Med 20:635–641
20. Cavallerano JD, Aiello LP, Cavallerano AA et al (2005) Nonmydriatic digital imaging alternative for annual retinal examination in persons with previously documented no or mild diabetic retinopathy. Am J Ophthalmol 140:667–673
21. Aptel F, Denis P, Rouberol F, Thivolet C (2008) Screening of diabetic retinopathy: effect of field number and mydriasis on sensitivity and specificity of digital fundus photography. Diabetes Metab 34(3):290–293
22. Vujosevic S, Benetti E, Massignan F et al (2009) Screening for diabetic retinopathy: 1 and 3 nonmydriatic 45° digital fundus photographs vs 7 standard early treatment diabetic retinopathy study fields. Am J Ophthalmol 148:111–118
23. Scottish Diabetic Retinopathy Screening (DRS) Collaborative (2017) http://www.ndrs.scot.nhs.uk/Links/index.htm/. Accessed 04 Jan 2017
24. Scanlon PH, Foy C, Chen FK (2008) Visual acuity measurement and ocular co-morbidity in diabetic retinopathy screening. Br J Ophthalmol 92:775–778
25. Corcoran JS, Moore K, Agarawal OP, Edgar DF, Yudkin J (1985) Visual acuity screening for diabetic maculopathy. Practical Diabetes 2:230–232
26. Early Treatment Diabetic Retinopathy Study Research Group (1991) Fundus photographic risk factors for progression of diabetic retinopathy. ETDRS report number 12. Ophthalmology 98:823–833
27. Wilkinson CP, Ferris FL 3rd, Klein RE et al (2003) Proposed international clinical diabetic retinopathy and diabetic macular edema disease severity scales. Ophthalmology 110:1677–1682
28. Continuous professional development for screening—the new qualification (2017) https://cpdscreening.phe.org.uk/healthscreenerqualification. Accessed 04 Jan 2017
29. Updates to test and training system benefit diabetic eye screening providers (2017) https://phescreening.blog.gov.uk/2016/08/23/updates-to-test-and-training-system-benefit-diabetic-eye-screening-providers/. Accessed 04 Jan 2017
30. NHS public health functions agreement 2016–17. Service specification no. 22, NHS Diabetic Eye Screening Programme. https://www.england.nhs.uk/commissioning/wp-content/uploads/sites/12/2016/02/serv-spec-22.pdf. Accessed 04 Jan 2017
31. Certificate of higher education in diabetic retinopathy screening (2017) http://drscreening.org/pages/default.asp?id=2&sID=3. Accessed 04 Jan 2017
32. International test and training (2017) http://drscreening.org/pages/default.asp?id=27&sID=40. Accessed 04 Jan 2017
33. Pathway standards for NHS diabetic eye screening programme (2016) https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/543686/Diabetic_eye_screening_pathway_standards.pdf. Accessed 04 Jan 2017
34. Liew G, Michaelides M, Bunce C (2014) A comparison of the causes of blindness certifications in England and Wales in working age adults (16–64 years), 1999–2000 with 2009–2010. BMJ Open 4:e004015
35. NHS screening programmes in England, 1 April 2015 to 31 March 2016 (2016) https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/574713/Screening_in_England_2015_to_2016.pdf. Accessed 04 Jan 2017
36. Mackenzie S, Schmermer C, Charnley A et al (2011) SDOCT imaging to identify macular pathology in patients diagnosed with diabetic maculopathy by a digital photographic retinal screening programme. PLoS ONE 6:e14811
37. Stratton IM, Aldington SJ, Taylor DJ, Adler AI, Scanlon PH (2013) A simple risk stratification for time to development of sight-threatening diabetic retinopathy. Diabetes Care 36:580–585
38. Scanlon PH, Aldington SJ, Leal J et al (2015) Development of a cost-effectiveness model for optimisation of the screening interval in diabetic retinopathy screening. Health Technol Assess 19:1–116
39. Aspelund T, Thornorisdottir O, Olafsdottir E et al (2011) Individual risk assessment and information technology to optimise screening frequency for diabetic retinopathy. Diabetologia 54:2525–2532
40. Lund SH, Aspelund T, Kirby P et al (2016) Individualised risk assessment for diabetic retinopathy and optimisation of screening intervals: a scientific approach to reducing healthcare costs. Br J Ophthalmol 100:683–687
41. (2016) An observational study to assess if automated diabetic retinopathy image assessment software can replace one or more steps of manual imaging grading and to determine their cost-effectiveness. https://www.journalslibrary.nihr.ac.uk/hta/hta20920/#/full-report
42. Silva PS, Cavallerano JD, Sun JK, Soliman AZ, Aiello LM, Aiello LP (2013) Peripheral lesions identified by mydriatic ultrawide field imaging: distribution and potential impact on diabetic retinopathy severity. Am Acad Ophthalmol 120:2587–2595
43. Silva PS, Cavallerano JD, Tolls D et al (2014) Potential efficiency benefits of nonmydriatic ultrawide field retinal imaging in an ocular telehealth diabetic retinopathy program. Diabetes Care 37:50–55
44. Liegl R, Liegl K, Ceklic L et al (2014) Nonmydriatic ultra-wide-field scanning laser ophthalmoscopy (Optomap) versus two-field fundus photography in diabetic retinopathy. Ophthalmologica 231:31–36
45. Wilson PJ, Ellis JD, MacEwen CJ, Ellingford A, Talbot J, Leese GP (2010) Screening for diabetic retinopathy: a comparative trial of photography and scanning laser ophthalmoscopy. Ophthalmologica 224:251–257
46. Yogesan K, Constable IJ, Barry CJ, Eikelboom RH, McAllister IL, Tay-Kearney ML (2000) Telemedicine screening of diabetic retinopathy using a hand-held fundus camera. Telemed J 6:219–223
47. Al-Sheikh M, Akil H, Pfau M, Sadda SR (2016) Swept-source OCT angiography imaging of the foveal avascular zone and macular capillary network density in diabetic retinopathy. Invest Ophthalmol Vis Sci 57:3907–3913
48. Harding S, Greenwood R, Aldington S et al (2003) Grading and disease management in national screening for diabetic retinopathy in England and Wales. Diabet Med 20:965–971
49. Early Treatment Diabetic Retinopathy Study Research Group (1987) Treatment techniques and clinical guidelines for photocoagulation of diabetic macular edema. ETDRS report number 2. Ophthalmology 94:761–774

The English National Screening Programme for diabetic retinopathy 2003–2016

work_isojvd42wzdk7csuphm4iwrf64 ----
Cornelius, MD; Susan J. Bayliss, MD

Objective: To characterize the clinical changes in clinically distinctive scalp nevi over time in children to help guide management and avoid misdiagnosis as melanoma.

Design: Cohort study.

Setting: Washington University School of Medicine pediatric dermatology clinics.

Patients: Of 93 patients younger than 18 years with photographically documented, clinically distinctive scalp nevi, 28 (30%) consented to participate. Minimum follow-up from the initial visit was 1 year. Collectively, these patients had 44 scalp nevi at the initial visit. No patient had a personal diagnosis of melanoma or dysplastic nevus syndrome.

Main Outcome Measures: Clinical changes in scalp nevi as determined using the ABCDE scoring system (ie, asymmetry, border irregularity, color variegation, diameter ≥6 mm, and evolution/elevation from initial to follow-up images) on initial and follow-up photographs of scalp nevi.

Results: Overall, 77% of the clinically distinctive scalp nevi (34 of 44) showed clinical signs of change during a mean follow-up of 2.8 years. Of those with changes, 18 (53%) became more atypical and 16 (47%) became less atypical since the initial examination. None of the changes were concerning for melanoma. The mean total scalp nevus count was 2.6. Scalp nevi represented approximately 6% of total-body nevi. The number of scalp nevi increased with age. Boys had 1.5 times the number of scalp nevi as girls (P=.03).

Conclusions: Scalp nevi are clinically dynamic in childhood. These changes include an increase or a decrease in atypical features and occur in all age groups. This preliminary study does not support excisional biopsies but does support physician evaluation of scalp nevi evolution and serial photography of clinically distinctive lesions.

Arch Dermatol.
2010;146(5):506-511

During recent decades, the incidences of adult and pediatric melanoma have markedly increased.1,2 Scalp melanoma may have a poorer prognosis than melanoma at other sites.3-6 Better understanding of the precursors of scalp melanoma and the natural history of scalp nevi in children may lead to more informed management of these lesions.

Atypical melanocytic nevi may be precursors of melanoma and are important risk factors for melanoma at all ages.7-11 Clinically, atypical melanocytic nevi share some clinical features with melanoma (eg, asymmetry, border irregularity, color variability, and diameter ≥6 mm), but usually to a lesser degree.12,13 In children, the scalp has been found to have a high incidence of either clinically or histopathologically dysplastic nevi and is often the first site involved in dysplastic nevus syndrome.14-17 The scalp has recently been added to the list of anatomical locations for nevi with site-related atypia, a subset of melanocytic nevi that share histologic features with melanoma but that are benign.18,19 However, unlike nevi with site-related atypia at acral, genital, mammary, ear, and conjunctival locations, scalp nevi also demonstrate clinically distinctive features, not just pathologic atypia. When evaluating these nevi, if clinical features are suggestive of melanoma, prompt excision is warranted.
Differing opinions exist on how to manage clinically distinctive scalp nevi in children.20 Because scalp nevi are difficult for patients, families, and physicians to observe over time, some physicians advocate excising all clinically distinctive scalp nevi in children, especially if there is a family history of melanoma.15 Other physicians do not routinely excise clinically distinctive scalp nevi; instead, they follow these nevi with serial examinations and photography.

CME available online at www.jamaarchivescme.com

Author Affiliations: Division of Dermatology, Departments of Internal Medicine and Pediatrics, Washington University School of Medicine and St Louis Children’s Hospital, St Louis, Missouri.

(REPRINTED) ARCH DERMATOL/ VOL 146 (NO. 5), MAY 2010 WWW.ARCHDERMATOL.COM 506 ©2010 American Medical Association. All rights reserved. Downloaded From: https://jamanetwork.com/ by a Carnegie Mellon University User on 04/05/2021

It is unclear whether clinically distinctive nevi on the scalp of children follow the same natural history as common melanocytic nevi because their clinical progression has rarely been documented.21,22 A case study23 examining the progression of an eclipse-type scalp nevus in a child showed fading of the defining peripheral brown rim and elevation of the tan center across 7 years. However, to our knowledge, no study has systematically evaluated the natural history of scalp nevi in children. We performed a descriptive study of the morphologic features and natural history of a subset of pediatric scalp nevi, defined as those with sufficiently unusual or distinctive clinical features that they prompted photography and a recommendation for clinical observation but not excision to rule out melanoma.
The objective was to describe the morphologic features of these scalp nevi using the ABCDE system (ie, asymmetry, border irregularity, color variegation, diameter ≥6 mm, and evolution/elevation from initial to follow-up images) and to catalog their evolution using digital photography and, in some follow-up cases, dermoscopy. We hope that this study will help physicians recognize scalp nevi in children that may be clinically distinctive or changing but benign.

METHODS

PATIENT RECRUITMENT

After receiving institutional review board approval from Washington University School of Medicine, a medical record review was conducted covering January 18, 1993, through March 27, 2008, at the Division of Pediatric Dermatology, Washington University School of Medicine, to identify children with high-resolution photographs of melanocytic scalp nevi. It has been our general practice to photograph scalp nevi with unusual clinical features. For this study, clinically distinctive scalp nevi included nevi that were larger than expected (>5 mm) or had color variations. Although these 2 features are also features of atypical nevi according to the 1992 World Health Organization consensus agreement, in the nevi included in this study, these 2 features were sufficiently notable to prompt digital photography and to recommend clinical observation only.

Ninety-three children (<18 years old) met the following inclusion criteria: clinical diagnosis of acquired scalp nevi, minimum of 1 year of follow-up, and availability of high-resolution photographs taken at initial evaluation. Scalp nevi were determined to be acquired based on history. No children were excluded because of a previous diagnosis of dysplastic nevus syndrome, melanoma, or subsequent biopsy findings. A letter was mailed to their parents or guardians outlining the study’s purpose and design and requesting permission for the child’s participation (Figure 1).
Follow-up telephone calls were made to answer questions about the study and to schedule appointments. Written informed consent was obtained from the parents or guardians of all the participants. The study was conducted between June 1 and August 31, 2008.

FOLLOW-UP SKIN EXAMINATION

Before the follow-up examination, scalp nevus photographs from the initial examination were printed to determine site. For children with more than 1 qualifying nevus, scalp nevi were numbered randomly. During the follow-up examination, new photographs were taken of the identified scalp nevi. Any previously undocumented scalp nevi were also counted and photographed. Study participants (n=15) who had appointments in August 2008 also had dermoscopic photographs taken (a total of 26 lesions) using the DermLite II multispectral attachment to the Nikon CoolPix 4500 (Nikon Inc, Melville, New York). Because baseline documentation of the scalp nevi did not include dermoscopy, we did not use dermoscopy in this study for comparison.

All the participants also had a total-body survey. Similar to previous studies,24 all pigmented macules or papules 2 mm or larger considered to be melanocytic nevi were counted on the body. The scalp was defined as the hair-bearing region on top of the head with 1-cm margins and corresponded to 6.5% of the body surface area. Freckles, defined as lightly pigmented, irregular macules appearing in clusters in sun-exposed sites, were distinguished from melanocytic nevi.
Figure 1. Flowchart of the participant selection process: 93 children were identified with documented clinically distinctive scalp nevi; 91 were sent mailings outlining the study and were called (addresses and telephone numbers were not found for 2 children); 39 replied yes to participating, 16 had telephone numbers that were disconnected or busy, 22 had messages left on answering machines, and 14 replied that they were not interested; 29 scheduled appointments and 10 never scheduled appointments; 1 child canceled an appointment; 28 children were seen and filled out questionnaires.

PHOTOGRAPH ANALYSIS

Images from the initial and follow-up examinations were reviewed by two of us (M.G. and S.J.B.). Each scalp nevus photographed was assessed using the ABCDE criteria.25-27 A lesion was classified as asymmetrical if the pigment was not equally distributed throughout the entire lesion or if there were discrepancies in the lesion’s border or shape along its vertical or horizontal axis. Borders were classified as irregular if the border was indistinct and faded into the surrounding skin (smudged) or if the border was jagged or undulated. A lesion was classified as having color variegation if multiple shades existed within 1 lesion or if there was a patterned distribution of pigment (see the next paragraph). Nevi diameters were measured from the computer monitor calibrated to a scale included in the image. Any change in these ABCD characteristics or elevation between initial and follow-up images was classified as evolution. Scalp nevi that had characteristic patterns of color were categorized as eclipse, reverse eclipse, or cockade.
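The scale-calibrated diameter measurement described above reduces to a pixel-to-millimeter conversion against a scale of known physical length photographed in the same image. A minimal sketch (the function name and example numbers are hypothetical, not taken from the study):

```python
def diameter_mm(lesion_px: float, scale_px: float, scale_mm: float) -> float:
    """Convert a lesion diameter measured on screen in pixels to millimeters,
    using a scale of known length included in the same image."""
    return lesion_px * (scale_mm / scale_px)

# Hypothetical example: a 10 mm scale bar spans 250 px; the nevus spans 140 px.
d = diameter_mm(lesion_px=140, scale_px=250, scale_mm=10)  # 5.6 mm
meets_d_criterion = d >= 6  # the ABCDE "D" cutoff (diameter >= 6 mm)
```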
Eclipse nevi have a tan center and an irregular brown peripheral rim.23 Reverse eclipse nevi have a brown center and a tan peripheral rim. Cockade nevi have targetlike morphologic features, typically with a centrally pigmented portion, an intervening nonpigmented area, and a peripheral pigmented portion.28

Nevi that demonstrated objective evidence of change were categorized as either more or less atypical than in the initial photograph. To be categorized as more atypical, nevi had to demonstrate more asymmetry, more irregular borders, more color variegation, or increased diameter. In contrast, a nevus was categorized as less atypical if the shift in clinical morphologic features was toward the appearance of a banal or disappearing nevus.

FOLLOW-UP QUESTIONNAIRE

All the participants completed a questionnaire at their follow-up appointment with the help of their guardians or parents. They were asked to answer the following questions regarding their scalp nevi (moles): (1) Have you noticed a change in the symmetry of your mole since your last visit? (If you drew a line down the middle of the mole, has one side changed more than the other?) (2) Have you noticed a change in the borders of your mole since your last visit? (3) Have you noticed a change in the color of your mole since your last visit? (4) Have you noticed a change in the size of your mole since your last visit? If yes, has it gotten bigger or smaller? (5) Does your mole ever itch? (6) Does your mole ever bleed? (7) Does your mole ever hurt? (8) Does the appearance of the mole bother you?29 The questionnaire also included questions about demographics and personal or family history of melanoma.

STATISTICAL METHODS

Descriptive statistics were calculated to characterize the study cohort, to describe the percentage of scalp nevi that experienced change, and to compare questionnaire responses with investigator findings.
All the demographic data are reported at the time of each patient’s follow-up examination. The prevalence of scalp nevi was determined in relation to sex, age group, and total-body nevus count.

Univariate analysis using clustered logistic regression models was conducted to evaluate factors that affect nevus change, including sex, age group (<8, 8 to 12, and >12 years old, similar to previous studies18), patterned distribution of color (eclipse, reverse eclipse, and cockade), and family history of melanoma. Follow-up time was included as an adjustment variable in all the models because patients with longer follow-up are more likely to have changes in their nevi than are patients with shorter follow-up. A clustered model was used to account for the correlation between nevi on the same patient. The Fisher exact test was used to identify differences in the number of scalp nevi by sex. P < .05 was considered significant. Statistical analyses were performed using SAS statistical software (version 9.1.3; SAS Institute Inc, Cary, North Carolina).

RESULTS

RESPONSE AND DEMOGRAPHIC CHARACTERISTICS

Of the 93 invited children, 28 (30%) participated, including 13 boys and 15 girls. Participants ranged from 5 to 17 years old (mean age, 11 years). Eleven participants were older than 12 years but no older than 18 years, 12 were aged 8 to 12 years, and 5 were younger than 8 years. The 28 participants had a total of 44 clinically distinctive scalp nevi documented at initial examination. Follow-up ranged from 1 to 12 years (mean, 2.8 years). All the participants were white. No participant developed melanoma during follow-up. A family history of melanoma was present in 36% of the patients (n = 10).

On follow-up examination, 69 scalp nevi (clinically distinctive or otherwise) were observed on the 28 participants, of which 44 (64%) had been documented on the initial examination as clinically distinctive and were, therefore, compared.
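Of the analyses named in the Statistical Methods subsection, the Fisher exact test is simple enough to compute from scratch. A self-contained sketch of the two-sided test on a 2x2 table (the table values below are hypothetical, not the study data; the study itself used SAS):

```python
from math import comb

def fisher_exact_2x2(a: int, b: int, c: int, d: int) -> float:
    """Two-sided Fisher exact test for the 2x2 table [[a, b], [c, d]].

    Sums the hypergeometric probabilities of every table with the same
    margins whose probability does not exceed that of the observed table.
    """
    row1, row2, col1 = a + b, c + d, a + c
    denom = comb(row1 + row2, col1)

    def p_table(x: int) -> float:
        # Probability that the top-left cell equals x, with margins fixed.
        return comb(row1, x) * comb(row2, col1 - x) / denom

    p_obs = p_table(a)
    lo, hi = max(0, col1 - row2), min(col1, row1)
    return sum(p_table(x) for x in range(lo, hi + 1)
               if p_table(x) <= p_obs * (1 + 1e-9))

# Hypothetical table: boys vs girls, with vs without more than 2 scalp nevi.
p = fisher_exact_2x2(8, 5, 4, 11)
significant = p < .05
```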
The remaining 25 scalp nevi were first photographed at the follow-up examination only and were, therefore, not compared. These newly documented nevi do not necessarily represent new nevi and may reflect more thorough counting of the overall number of scalp nevi (whether clinically distinctive or not) at follow-up. The 25 newly documented scalp nevi were much smaller than the 44 clinically distinctive scalp nevi originally documented (mean diameter, 3.5 vs 6.1 mm). None of these newly documented scalp nevi (36% of total scalp nevi) demonstrated clinically unusual or distinctive features.

Of the 44 scalp nevi originally documented and followed up, 28 (64%) had characteristic color distributions, with 8 being classified as eclipse, 16 as reverse eclipse, and 4 as cockade (Figure 2). None of the 25 newly documented scalp nevi had a characteristic color distribution. Twenty-three of the 28 participants had at least 1 scalp nevus with one of these patterns of color distribution. Three participants had more than 1 of the 3 varieties of nevi on their scalp, supporting claims that eclipse nevi and cockade nevi may be on a continuum.30 Participants with any of these 3 scalp nevi with characteristic color distributions had a mean scalp nevus count of 2.7 and a mean total-body nevus count of 47. All other participants had a mean scalp nevus count of 2.6 and a mean total-body nevus count of 19.4. None of the nevi resembled typical Spitz nevi.31

Figure 2. Examples of pattern morphologic features of scalp nevi. Shown are overview pictures (A, C, and E) from a digital camera and close-up photographs (B, D, and F) from a dermoscopic camera (original magnification ×10). The images demonstrate eclipse (A and B), reverse eclipse (C and D), and cockade (E and F) morphologic features.

CHANGES IN SCALP NEVI AND QUESTIONNAIRE RESULTS

Of the 44 clinically distinctive scalp nevi originally documented, 34 (77%) demonstrated observable clinical changes on follow-up (symmetry, 24% [n = 8]; border, 15% [n = 5]; color, 44% [n = 15]; diameter, 26% [n = 9]; papular center, 32% [n = 11]; and multiple factors, 32% [n = 11]). Of those with noticeable changes, 18 (53%) became more atypical and 16 (47%) became less atypical. Most scalp nevi that became more atypical did so because of increased diameter (7 of 18 [39%]) and more color variegation (6 of 18 [33%]). Six percent of the scalp nevi (1 of 18) had more than 1 feature that became more atypical. Although clinical features in 53% of the nevi became more atypical, none of the changes were considered to be concerning for melanoma or to require excision. The most common features to change in those nevi that became less atypical were color variegation, asymmetry, and elevation. The effects of sex (P = .56), age (P = .16), and patterned color distribution (P = .51) on the probability of scalp nevus change were not significant. Nevi on participants with a family history of melanoma were not more likely to change compared with those on participants without such a history (P = .02).

When participants and their parents or guardians were asked to assess scalp nevi, 43% (12 of 28) noted a change in the ABCD criteria (symmetry, 0%; border, 0%; color, 14% [4 of 28]; and diameter, 39% [11 of 28]). Fifteen of the 28 respondents (54%) agreed with the investigator regarding overall change in ABCD criteria. Of those, only
6 (40%) of the respondents agreed with investigators regarding the specific criterion that had changed. No participant reported that their scalp nevus itched or bled, although 1 participant described mild pain. The appearance of the scalp nevus bothered 5 of the 28 participants (18%).

TOTAL-BODY NEVUS COUNTS

All nonscalp nevi on the 28 patients were clinically banal in appearance. The mean total-body nevus count was 42, and the mean total scalp nevus count was 2.6. Anatomic site breakdown for total-body nevi on the 28 participants was as follows: scalp, 6% (2.6 of 42); upper extremities, 29% (12 of 42); back, 22% (9.4 of 42); lower extremities, chest, and face, 11% each (4.5 of 42); neck, 8% (3.2 of 42); and ears and buttocks, 1% each (0.5 of 42). All calculations were based on means for site. Mean total-body nevus count and mean scalp nevus count increased with age (Figure 3A). Boys had higher total-body nevus counts (2.1 times higher; P = .04) and scalp nevus counts (1.5 times higher; P = .03) than did girls (Figure 3B).

COMMENT

Scalp nevi in children may be distinctive in appearance, change over time, and represent a common reason for referral to dermatologists.32 Because of their relationship with melanoma, it is important to understand the evolution of nevi. Scalp nevi are particularly poorly understood and challenging to manage. Many scalp nevi often have a unique pattern of pigmentation, for example, eclipse. These benign morphologic features have appeared infrequently in the literature and may be worrisome to parents and inexperienced physicians.23,28,30,33

In this study, we observed several trends in nevus counts regarding age, sex, and the presence of scalp nevi with characteristic color distributions. Mean scalp and total-body nevus counts increased with age.
Participants with scalp nevi with characteristic color distributions had a trend toward higher mean total-body nevus counts, supporting claims that these lesions are markers for children who are destined to become “moley.”34 We also found that boys had higher mean scalp and total-body nevus counts, agreeing with previous studies.35-38 The development of scalp nevi in childhood may be a marker for higher-than-average total-body nevus counts.32,35 In a study35 looking at the frequency and distribution pattern of melanocytic nevi in 524 Swedish children, those with 1 or more nevi on the scalp had twice as many nevi compared with those without scalp nevi. A recent study32 of 180 children in Spain confirmed that scalp nevi are a marker for higher total-body nevus counts. The present study concurs.

We also found that a high percentage of children evaluated for scalp nevi had a family history of melanoma. Scalp nevi may be a genetic marker for those with increased risk of melanoma, on the scalp or elsewhere, or it may be that these patients were more concerned about nevi than were others without a family history of melanoma.

Based on responses to the questionnaire, patients and their families observed their own scalp nevi, and more than 50% had noted change. However, their observations agreed with the physician’s assessment only 40% of the time. These discrepancies could be attributable to parents not having clinical photographs to accurately compare changes over time. These data reinforce the importance of serial physician evaluations and photography for scalp nevi.

This study has limitations. First, the sample size was small. Also, selection bias existed in terms of the original referral and families’ subsequent decisions to participate. Children may have been brought in because of a family history of melanoma.
Because of these biases, this population may have more scalp nevi and a greater rate of clinical change in nevi compared with the general population. In addition, we limited this study to acquired rather than congenital melanocytic nevi. This determination was based on parental recall. It is possible that some small congenital nevi were mistakenly included because of errors in parental recall.

The ABCDE evaluation system is a widely used, well-validated scale for clinical appraisal of pigmented lesions. However, application of the ABCDE criteria can be physician dependent, and, as shown in other studies,23,28,39 many clinically distinctive yet benign nevi can share several of the ABCDE properties of melanoma. We used the ABCDE system as a descriptive method to serve as a starting point to decide whether a nevus may need further evaluation.

This preliminary study does not support excisional biopsies but does support physician evaluation of scalp nevi evolution and serial photography of clinically distinctive lesions. We plan to prospectively observe these patients via clinical images and dermoscopy images. These results confirm and begin to characterize the evolution of scalp nevi. This study demonstrated that clinically distinctive scalp nevi in children frequently undergo benign changes, with 77% of all evaluated nevi changing during the observation period.

Figure 3. Trends in mean nevus counts by age group (A) and sex (B). Scalp and total-body mean nevus counts increased by age group, and boys had more nevi than did girls on the scalp and the total body.
These clinical changes included either an increase or a decrease in atypical features; however, none of the changes were worrisome for melanoma. Long-term follow-up is needed to further delineate the significance of the changes.

Accepted for Publication: November 2, 2009.

Correspondence: Susan J. Bayliss, MD, Division of Dermatology, Departments of Internal Medicine and Pediatrics, Washington University School of Medicine, 660 S Euclid Ave, Campus Box 8123, St Louis, MO 63110 (SBAYLISS@dom.wustl.edu).

Author Contributions: Drs Gupta and Bayliss had full access to all the data in this study and take responsibility for the integrity of the data and the accuracy of the data analysis. Study concept and design: Gupta, Gray, and Bayliss. Acquisition of data: Gupta and Bayliss. Analysis and interpretation of data: Gupta, Berk, and Cornelius. Drafting of the manuscript: Gupta. Critical revision of the manuscript for important intellectual content: Gupta, Berk, Gray, Cornelius, and Bayliss. Statistical analysis: Gupta. Obtained funding: Bayliss. Administrative, technical, and material support: Cornelius and Bayliss. Study supervision: Bayliss.

Financial Disclosure: None reported.

Funding/Support: This study was supported in part by the Division of Dermatology, Departments of Internal Medicine and Pediatrics, Washington University School of Medicine; and by support grant P30 CA091842 from the National Cancer Institute (NCI) Cancer Center Program.

Role of the Sponsors: The sponsors had no role in the design and conduct of the study; in the collection, analysis, and interpretation of the data; or in the preparation, review, or approval of the manuscript.

Additional Contributions: The authors wish to acknowledge the support of Kim Trinkaus of the Biostatistics Core, Siteman Comprehensive Cancer Center, and NCI Cancer Center Support grant P30 CA091842. Patty Crader, RN, and Clodean Crowell assisted in this project.

REFERENCES

1.
Jemal A, Siegel R, Ward E, et al. Cancer statistics, 2008. CA Cancer J Clin. 2008;58(2):71-96.
2. Strouse JJ, Fears TR, Tucker MA, Wayne AS. Pediatric melanoma: risk factor and survival analysis of the Surveillance, Epidemiology and End Results database. J Clin Oncol. 2005;23(21):4735-4741.
3. Shumate CR, Carlson GW, Giacco GG, Guinee VF, Byers RM. The prognostic implications of location for scalp melanoma. Am J Surg. 1991;162(4):315-319.
4. Garbe C, Buttner P, Bertz J, et al. Primary cutaneous melanoma: prognostic classification of anatomic location. Cancer. 1995;75(10):2492-2498.
5. Lachiewicz AM, Berwick M, Wiggins CL, Thomas NE. Survival differences between patients with scalp or neck melanoma and those with melanoma of other sites in the Surveillance, Epidemiology, and End Results (SEER) program. Arch Dermatol. 2008;144(4):515-521.
6. McCarthy B. Melanoma of the scalp and neck had greater risk of melanoma-specific mortality than melanoma of the extremities [commentary]. Evid Based Med. 2008;13(5):155.
7. Greene MH, Clark WH Jr, Tucker MA, et al. Precursor naevi in cutaneous malignant melanoma: a proposed nomenclature [letter]. Lancet. 1980;2(8202):1024.
8. Holly EA, Kelly JW, Shpall SN, Chiu SH. Number of melanocytic nevi as a major risk factor for malignant melanoma. J Am Acad Dermatol. 1987;17(3):459-468.
9. Swerdlow AJ, English J, MacKie RM, et al. Benign melanocytic naevi as a risk factor for malignant melanoma. Br Med J (Clin Res Ed). 1986;292(6535):1555-1559.
10. Grob JJ, Gouvernet J, Aymar D, et al. Count of benign melanocytic nevi as a major indicator of risk for nonfamilial nodular and superficial spreading melanoma. Cancer. 1990;66(2):387-395.
11. Youl P, Aitken J, Hayward N, et al. Melanoma in adolescents: a case-control study of risk factors in Queensland, Australia. Int J Cancer. 2002;98(1):92-98.
12. Tucker MA, Halpern A, Holly EA, et al. Clinically recognized dysplastic nevi: a central risk factor for cutaneous melanoma. JAMA.
1997;277(18):1439-1444.
13. Elder DE, Clark WH Jr, Elenitsas R, Guerry D IV, Halpern AC. The early and intermediate precursor lesions of tumor progression in the melanocytic system: common acquired nevi and atypical (dysplastic) nevi. Semin Diagn Pathol. 1993;10(1):18-35.
14. Fernandez M, Raimer SS, Sanchez RL. Dysplastic nevi of the scalp and forehead in children. Pediatr Dermatol. 2001;18(1):5-8.
15. Tucker MA, Greene MH, Clark WH Jr, Kraemer KH, Fraser MC, Elder DE. Dysplastic nevi on the scalp of prepubertal children from melanoma-prone families. J Pediatr. 1983;103(1):65-69.
16. Haley JC, Hood AF, Chuang TY, Rasmussen J. The frequency of histologically dysplastic nevi in 199 pediatric patients. Pediatr Dermatol. 2000;17(4):266-269.
17. Perry BN, Ruben B. Nevi on the scalp: “special” not only in children but in young adults as well. Abstract presented at: 45th American Society of Dermatopathology Annual Meeting; October 17, 2008; San Francisco, CA.
18. Fabrizi G, Pagliarello C, Parente P, Massi G. Atypical nevi of the scalp in adolescents. J Cutan Pathol. 2007;34(5):365-369.
19. Hosler GA, Moresi JM, Barrett TL. Nevi with site-related atypia: a review of melanocytic nevi with atypical histologic features based on anatomic site. J Cutan Pathol. 2008;35(10):889-898.
20. Rothman KF, Esterly NB, Williams ML, et al. Dysplastic nevi in children. Pediatr Dermatol. 1990;7(3):218-234.
21. Stegmaier O. Life cycle of the nevus. Mod Med. 1963;31:79-91.
22. Brodell R, Sims DM, Zaim MT. Natural history of melanocytic nevi. Am Fam Physician. 1988;38(5):93-101.
23. Schaffer JV, Glusac EJ, Bolognia JL. The eclipse naevus: tan centre with stellate brown rim. Br J Dermatol. 2001;145(6):1023-1026.
24. Halpern AC, Guerry D IV, Elder DE, Trock B, Synnestvedt M, Humphreys T. Natural history of dysplastic nevi. J Am Acad Dermatol. 1993;29(1):51-57.
25. Friedman RJ, Rigel DS, Kopf AW.
Early detection of malignant melanoma: the role of physician examination and self-examination of the skin. CA Cancer J Clin. 1985;35(3):130-151.
26. Abbasi NR, Shaw HM, Rigel DS, et al. Early diagnosis of cutaneous melanoma: revisiting the ABCD criteria. JAMA. 2004;292(22):2771-2776.
27. Rigel DS, Friedman RJ, Kopf AW, Polsky D. ABCDE: an evolving concept in the early detection of melanoma. Arch Dermatol. 2005;141(8):1032-1034.
28. Happle R. Cockade nevus: unusual variation of nevus cell nevus [in German]. Hautarzt. 1974;25(12):594-596.
29. Cassileth BR, Lusk EJ, Guerry D IV, Clark WH Jr, Matozzo I, Frederick BE. “Catalyst” symptoms in malignant melanoma. J Gen Intern Med. 1987;2(1):1-4.
30. Yazici AC, Ikizoglu G, Apa DD, Kaya TI, Tataroglu C, Kokturk A. The eclipse naevus and cockade naevus: are they two of a kind? Clin Exp Dermatol. 2006;31(4):596-597.
31. Sulit DJ, Guardiano RA, Krivda S. Classic and atypical Spitz nevi: review of the literature. Cutis. 2007;79(2):141-146.
32. Aguilera P, Puig S, Guilabert A, et al. Prevalence study of nevi in children from Barcelona: dermoscopy, constitutional and environmental factors. Dermatology. 2009;218(3):203-214.
33. Guzzo C, Johnson B, Honig P. Cockarde nevus: a case report and review of the literature. Pediatr Dermatol. 1988;5(4):250-253.
34. Bolognia JL. Too many moles [editorial]. Arch Dermatol. 2006;142(4):508.
35. Synnerstad I, Nilsson L, Fredrikson M, Rosdahl I. Frequency and distribution pattern of melanocytic naevi in Swedish 8-9-year-old children. Acta Derm Venereol. 2004;84(4):271-276.
36. Gallagher RP, McLean DI, Yang CP, et al. Anatomic distribution of acquired melanocytic nevi in white children: a comparison with melanoma: the Vancouver Mole Study. Arch Dermatol. 1990;126(4):466-471.
37. English DR, Armstrong BK. Melanocytic nevi in children, I: anatomic sites and demographic and host factors. Am J Epidemiol. 1994;139(4):390-401.
38. Crane LA, Mokrohisky ST, Dellavalle RP, et al.
Melanocytic nevus development in Colorado children born in 1998: a longitudinal study. Arch Dermatol. 2009;145(2):148-156.
39. Kessides MC, Puttgen KB, Cohen BA. No biopsy needed for eclipse and cockade nevi found on the scalps of children. Arch Dermatol. 2009;145(11):1334-1336.

work_iu2hxi6i25bixdgr3rqnqs4lmy ---- Estimation of Vegetation Cover Using Digital Photography in a Regional Survey of Central Mexico

Article

Víctor Salas-Aguilar 1,*, Cristóbal Sánchez-Sánchez 2, Fabiola Rojas-García 2, Fernando Paz-Pellat 3, J. René Valdez-Lazalde 2 and Carmelo Pinedo-Alvarez 4

1 Programa Mexicano del Carbono, Texcoco 56230, Mexico
2 Postgrado en Ciencias Forestales, Colegio de Postgraduados, Texcoco 56230, Mexico; crisdansanchez@gmail.com (C.S.-S.); fabiosxto1981@gmail.com (F.R.-G.); jrene.valdez@gmail.com (J.R.V.-L.)
3 Postgrado en Hidrociencias, Colegio de Postgraduados, Texcoco 56230, Mexico; ferpazpel@gmail.com
4 Facultad de Zootecnia y Ecología, Universidad Autónoma de Chihuahua, Chihuahua 33820, Mexico; cpinedo@uach.mx
* Correspondence: vsalasaguilarl@gmail.com; Tel.: +52-595-120-2056

Received: 9 September 2017; Accepted: 9 October 2017; Published: 15 October 2017

Abstract: The methods for measuring vegetation cover in Mexican forest surveys are subjective and imprecise. The objectives of this research were to compare the sampling designs used to measure the vegetation cover and estimate the over and understory cover in different land uses, using digital photography. The study was carried out in 754 circular sampling sites in central Mexico. Four spatial sampling designs were evaluated in three spatial distribution patterns of the trees.
The sampling designs with photographic captures in diagonal form had lower values of mean absolute error (MAE < 0.12) and less variation in random and grouped patterns. The Carbon and Biomass Sampling Plot (CBSP) design was chosen due to its smaller error in the different spatial tree patterns. The image processing was performed using threshold segmentation techniques and was automated through an application developed in the Python language. The two proposed methods to estimate vegetation cover through digital photographs were robust and replicable in all sampling plots with different land uses and different illumination conditions. The automation of the process avoided human estimation errors and ensured the reproducibility of the results. This method works for regional surveys and could be used in national surveys.

Keywords: automated classification; sampling design; adaptive threshold; over and understory cover

1. Introduction

Vegetation cover is used in studies of the aerosphere, pedosphere, hydrosphere, and biosphere, as well as their interactions [1]. Remote sensing (RS) technology, particularly the development of vegetation indices, has allowed researchers to estimate vegetation cover at a regional and global scale [2]. Validation of high and medium resolution satellite products is a critical aspect of their usefulness in operational approaches [3]. The feasibility and precision of RS must be verified before data can be applied [4]. One way of validating and re-scaling RS products is the use of field measurements, especially the application of digital photography [5,6]. The use of digital photography to estimate the understory cover (shrub and herbaceous—nadir angle) and overstory cover (arboreal—zenith angle) has been advocated in recent years as one of the most accurate methods to estimate these variables [7,8]. According to Liang et al. [1] and White et al.
[9], this technique is the most reliable and can be easily employed to extract vegetation cover information in different physiographic conditions.

Forests 2017, 8, 392; doi:10.3390/f8100392 www.mdpi.com/journal/forests

In relation to shrub and herbaceous cover (understory), robust segmentation algorithms have been developed to discriminate bare soil and vegetation regardless of the type of vegetation present [10,11] and type of luminosity (presence of shadows) [12]. On the other hand, vegetation cover estimations have also progressed in terms of the classification methods [13,14], automation [15], and classification when there is a mixture of vegetation-sky pixels [16]. The advantage of using a non-destructive method such as digital photography to estimate the vegetation cover is that it can be related to the biophysical variables of an ecosystem at a lower cost and time [17,18]. However, few operational studies (local, regional, and national surveys) contemplate measuring this variable for lack of knowledge on how to pursue it efficiently [19] or for the inconsistency of associating forest attributes to a sampling site with an estimated vegetation cover that may exceed the site surface [20]. A disadvantage of the mentioned methods is that sampling sites in which the experiments are made are generally small and homogeneous (low slope and similar land use conditions and plant architecture) [21]. Therefore, the application of these techniques in operational inventories must be planned to efficiently provide the information for which they are developed, and this condition includes considerations of the accuracy of estimates, costs of data collection and processing, and the speed of the process from the planning stage to the presentation of results [22].
At present, there is no agreement among researchers on how to define the optimum sampling design to derive the leaf area index and vegetation cover in field measurements [23]. In a sampling plot, the vegetation cover and its spatial distribution may vary when considering the effects of management [24]. A rapid, reliable, and economical way to compare vegetation cover sampling designs is by predicting the crown diameter through allometric ratios [25] and by estimating the spatial patterns of trees in the sample site. The advantage is that the crown diameter and spatial clustering of trees can be projected into a geographic information system [26], avoiding the intensive work of conducting and comparing them directly in a survey campaign [27]. Due to the lack of an accurate vegetation cover estimation method for forest surveys in México, the objectives of the present study were: (1) to compare the sampling patterns used to measure the vegetation cover; and (2) to estimate the overstory (trees) and understory (shrub and herbaceous) cover in sampling sites of the State of Mexico, Mexico, using a practical and easily reproducible procedure.

2. Materials and Methods

The State of Mexico is located in the southern part of the southern plateau of Mexico, between parallels 18°22′ and 20°17′ North and meridians 98°36′ and 100°37′ West, in an area of 22,333 km2. In this region and particularly around the Valley of Mexico, there are specific environmental and historical conditions that have resulted in great biological and cultural diversification along mountain ranges, basins, rivers, and forests. Ceballos et al. [28] consider that the vegetation of the State of Mexico is represented by three main ecosystems with variations: temperate-cold (temperate forests), semi-warm and sub-humid warm (low deciduous forests), and arid zones (arid and semi-arid vegetation).
The study was carried out from January to September 2015; 754 circular sampling sites of 1000 m2 were established and distributed in eight forest regions of the State of Mexico (Figure 1) [29]. In each region, we collected information on the type of vegetation cover and land use. The classification of vegetation was established according to the land use and vegetation chart, Series IV, scale 1:250,000 [30], and was verified in the field.

Figure 1. Spatial location of the sampling sites, within the forest regions of the State of Mexico.

2.1.
Spatial Projection to Evaluate Sampling Designs

The regional survey was planned as a complement to the National Forest and Soil Inventory (NFSI) in a simplified way, where the height and crown diameters were not measured as done in the NFSI. These variables were planned to be estimated from the state and national surveys.

Before the survey phase, a pre-survey of 30 sites was carried out in the Texcoco forest region to evaluate the spatial pattern of trees in four sampling designs of vegetation cover. Comparisons were made between VALERI [31] and SLAT [32] designs, along with two alternative samples: CBSP (carbon and biomass sampling plots) and RM (regular mesh). The VALERI design is composed of 13 samples, SLAT of 15 samples, CBSP of 21 samples, and RM of 37 samples (Figure 2a).
Figure 2. (a) Photographic captures by sampling design: (1) RM, (2) CBSP, (3) SLAT, (4) VALERI; (b) Projected photo capture areas (pixels) in a CBSP sampling design.

Due to the difficulties of knowing the real vegetation cover within a sampling site, the comparison of sampling designs was performed within a geographic information system. Initially, we recorded the central coordinates of each site (as planned in the regional survey), the distance of the trees to the central point, and their azimuth.
Then we calculated the location of trees using the central location of the plot and the azimuth and distance to the central point of the respective tree. Given that no sampling of the tree crown diameter was recorded in the survey, an allometric relationship was established between the crown diameter and diameter at breast height [25]. The data of this function were obtained from the National Forest and Soil Inventory [33]. The function is the following: DC = 0.1553 + 0.1859 (Dn) (R2 = 0.79, p < 0.001), where DC is the tree crown diameter and Dn is the diameter at breast height. The linear model is generalized because it comprised all of the timber species found in the survey. The estimated DC allowed us to construct the crown influence area projection of the trees, assuming a circular shape.

The spatial patterns of the trees were evaluated using the Average Nearest Neighbor (ANN) equation [34]. If the pattern of the tree distribution is completely random ANN = 1, if ANN < 1 the trees are grouped, and if ANN > 1 the tree mass is regular (dispersed). The ANN analysis was performed within ArcGIS (10.3, Esri, Redlands, CA, United States).

2.2. Projected Photographic Captures within the GIS

A single photographic capture area of 16.38 m2 was established to determine the projected cover per site and type of sampling (the procedure for estimating the area is described below). In each area we built a grid (10 × 10) with the purpose of simulating the pixels of a photographic camera (Figure 2b).

The total observed cover was calculated by dividing the area of overlapping crowns by the size of the plot (1000 m2). On the other hand, the estimated cover resulted from the following equation:

$\sum_{i=0}^{n} \frac{NPS_i}{NTP}$ (1)

where NPSi is the number of projected pixels (grid) intersecting with the tree crown area and NTP is the total number of pixels per sampling design.
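The projected-cover computation of Equation (1) can be sketched in a few lines of Python. The crown-diameter coefficients are the fitted values quoted in the text; the tree, the capture position, and the footprint side length are hypothetical illustration values, not data from the study:

```python
def crown_diameter(dn):
    """Crown diameter (m) from diameter at breast height (cm), using the
    allometric fit reported in the text: DC = 0.1553 + 0.1859 * Dn."""
    return 0.1553 + 0.1859 * dn

def estimated_cover(crowns, center, side, grid=10):
    """Fraction of grid-cell centers (simulated camera pixels) that fall
    inside any projected crown circle (Equation (1): NPS / NTP).

    crowns: list of (x, y, crown_radius) tuples in metres;
    center: (x, y) of the capture footprint; side: footprint side length (m).
    """
    hits = 0
    for i in range(grid):
        for j in range(grid):
            # coordinates of the cell center within the square capture area
            px = center[0] - side / 2 + (i + 0.5) * side / grid
            py = center[1] - side / 2 + (j + 0.5) * side / grid
            if any((px - x) ** 2 + (py - y) ** 2 <= r ** 2 for x, y, r in crowns):
                hits += 1
    return hits / grid ** 2

# Hypothetical example: one 30 cm DBH tree centred on the capture point;
# side = 4.05 m approximates the 16.38 m2 capture area described later.
r = crown_diameter(30.0) / 2
cover = estimated_cover([(0.0, 0.0, r)], (0.0, 0.0), side=4.05)
```

Here a single crown of radius ~2.87 m covers every cell center of the 4.05 m footprint, so the estimated cover is 1.0; an empty site returns 0.0.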
As a quantitative measure of the error, the mean absolute error (MAE) was estimated:

$MAE = N^{-1} \sum_{i=1}^{N} |O_i - E_i|$ (2)

where O is the observed value of the total projected cover, E is the estimated value (Equation (1)), and N is the number of captures per sampling design.

2.3. Field Sampling

Sampling sites were targeted to include vegetation succession and degradation among land uses in Central Mexico [29]. Information was collected on sites with and without anthropogenic intervention.

2.4. Photo Features

The photographic images were taken at a resolution of 5184 × 3456 pixels in JPG format. We used a Canon EOS Rebel T5 camera. The camera lens was adjusted to a range of 18 to 55 mm focal length and an ISO 200 with aperture and exposure in automatic mode.

2.5. Taking Photos at the Sampling Sites

We applied the CBSP sampling design with 21 captures to nadir and zenith, according to Figure 3a. The lines represent the transects within the sampling site (L1–L4) and each letter represents a photographic capture. Figure 3b shows the photograph taken at zenith, where the distance between the camera and the ground is 1.5 m. Figure 3c shows the process of shooting understory (shrub and herbaceous strata), where the interference of the personnel in the photograph was avoided using a stick of five meters long; in this case, the distance between the camera and the ground was 3 m. Two bubble levels were used to control the angle at which the photographs were taken, one near the operator and the other one stuck to the side of the camera.

Figure 3. Photographic capture process at sampling sites.
(a) CBSP design, the letters correspond to the distance of capture from the center of the sampling site; (b) photographic capture to zenith; (c) photographic capture to nadir.

The purpose of the CBSP sampling was to capture the largest possible physical area with the fewest samples. To do so, the visual field angle of the lens (θ) was adjusted depending on the size of the sensor (n) and its focal length (f):

$\theta = 2 \tan^{-1}\left(\frac{n}{2f}\right)$ (3)

The real area covered by a photograph depends on three variables: sensor size (nij), focal length of the lens (f), and distance of the lens to the object (h). In the case of nadir, h is the distance between the camera lens and the ground. For zenith, h corresponds to the distance between the lens and the tree crowns. Equation (4) defines the calculation of the real area of the photograph:

$G_{ij} = \frac{n_{ij} \times h}{f}$ (4)

where G is the actual length of the object in the horizontal (i) and vertical (j), where the horizontal distance ni of the sensor of the camera used was 22.3 mm and the vertical distance nj was 14.9 mm. The value of f for the nadir photographs was set at 18 mm because h was established at 3 m. By solving Equation (4), we estimated a real area to nadir of 9.2 m2. In the case of zenith photographs, a larger real area was required to be representative at the sampling site. The minimum average height of the tree crowns in the forested areas of the region was 4 m. At this point, the value of θ must be adjusted to reach the largest surface, so the value of f was set at 18 mm. Then, by solving Equation (4), the real area at zenith is 16.38 m2. In heterogeneous forests, such as the study area, the height of the trees can vary in short distances, so in order to maintain the area captured independently of the height of the tree, the value of f can be adjusted by multiplying the average height of the trees at the point of capture by the constant 4.5 (f = 4.5 × h).
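Equations (3) and (4) can be checked numerically. A minimal sketch using the sensor dimensions (22.3 mm × 14.9 mm) and 18 mm focal length reported in the text (the function names are ours, not the authors'):

```python
import math

def field_of_view(n_mm, f_mm):
    """Lens field-of-view angle, theta = 2 * atan(n / (2f)) (Equation (3)), in degrees."""
    return math.degrees(2 * math.atan(n_mm / (2 * f_mm)))

def footprint_area(ni_mm, nj_mm, f_mm, h_m):
    """Ground (or crown-level) footprint area in m2 via G = n * h / f (Equation (4))."""
    gi = ni_mm * h_m / f_mm   # horizontal extent of the footprint, m
    gj = nj_mm * h_m / f_mm   # vertical extent of the footprint, m
    return gi * gj

# Nadir: h = 3 m above the ground -> ~9.2 m2, as stated in the text.
nadir = footprint_area(22.3, 14.9, 18.0, 3.0)
# Zenith: h = 4 m to the lowest crowns -> ~16.4 m2, close to the 16.38 m2 in the text.
zenith = footprint_area(22.3, 14.9, 18.0, 4.0)
```

The small discrepancy at zenith (16.41 m2 vs. the reported 16.38 m2) is consistent with rounding in the published values.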
If the height value was four meters, the camera was placed as close as possible to the ground; in the case of exceeding six meters in height, the camera was placed at a fixed distance of 1.5 m above the ground.

2.6. Estimation of Over and Understory Cover

The processing of images to estimate the vegetation cover is different in nadir and zenith projections; in the first case a robust classifier is needed to distinguish the shade of the vegetation, whereas the second one requires a methodology that distinguishes the cover of the canopy in contrast to the sky (atmosphere). Due to the large number of photographs that needed to be processed (24,182 photographs), a code was written in the Python 2.7 language (Python Software Foundation (PSF): Wilmington, DE, USA) to optimize the process (the software can be requested from the authors). The following sections describe the methodology used.

2.6.1. Estimation of Overstory Cover

The photographs were taken in the morning and in the afternoon, before the sun surpassed 130° of azimuth or after 230°, to avoid confusion due to the brightness of the leaves in association with the sky. We used the SunEarthTools site (http://www.sunearthtools.com/dp/tools/pos_sun.php) to identify the appropriate times to take the photographs.

The methodology of Fuentes et al. [15] was adjusted within the Python language for image processing. The images were converted to vector format in order to separate the three color channels (R, G, B). The blue channel (B) was used to filter the clouds from the image because it gives the best contrast between the cover of the foliage and the sky with the presence of clouds. The adaptive threshold method was used to classify the image [35]. The method consists of dividing the image into sub-images. The threshold (M) of the sub-image is calculated using the mean or median Gaussian methods. In this case, the median was used as a threshold to perform the separation.
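A minimal sketch of the block-wise median (adaptive) thresholding just described, using NumPy on a synthetic single-channel array standing in for the blue channel. The 200 × 200 block size follows the text; the synthetic image and function are illustrative assumptions, not the authors' code:

```python
import numpy as np

def adaptive_median_cover(channel, block=200):
    """Classify pixels below their block's median as canopy and return the
    canopy fraction for the whole image.

    channel: 2-D array (e.g., the blue channel of a zenith photo); dark
    foliage pixels fall below the local median M, bright sky pixels above it.
    """
    h, w = channel.shape
    canopy = 0
    for r in range(0, h, block):
        for c in range(0, w, block):
            sub = channel[r:r + block, c:c + block]
            m = np.median(sub)                 # per-block threshold M
            canopy += int(np.sum(sub < m))     # foliage = below-threshold pixels
    return canopy / channel.size

# Synthetic 400x400 "blue channel": a dark canopy band (40) on bright sky (220).
img = np.full((400, 400), 220, dtype=np.uint8)
img[:, :100] = 40
frac = adaptive_median_cover(img, block=200)
```

On this synthetic image a quarter of the pixels are dark, and the per-block medians recover exactly that fraction.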
The size of the blocks used to divide the image was 200 × 200 pixels. The sum of the proportions of the number of pixels with vegetation in each block to the total number of pixels of the photograph was the cover of the canopy per photograph. Figure 4 shows an outline of the threshold calculation using this method. The overstory cover includes branches and the upper stems of trees.

Figure 4. Outline of thresholds in blocks by the adaptive threshold method; the left part of the histogram corresponds to pixels with vegetation, the right side of the histogram corresponds to pixels without vegetation. The M value is the threshold for each block.

2.6.2. Estimation of Understory Cover

The classification of green vegetation and soil was achieved by calculating a threshold within a two-dimensional space. The images were transformed into the color space L*a*b* [10]. The green-red component a* was used to distinguish the vegetation from the bare soil, where values skewed to the left of the histogram indicated green pixels (vegetation) and those skewed to the right showed pixels in red (bare soil). The assumption of the methodology is that the distribution of this component tends to be a bimodal Gaussian distribution:

$F(x) = \frac{W_1}{\sqrt{2\pi}\sigma_1} e^{-\frac{(x-\mu_1)^2}{2\sigma_1^2}} + \frac{W_2}{\sqrt{2\pi}\sigma_2} e^{-\frac{(x-\mu_2)^2}{2\sigma_2^2}}$ (5)

where µ1 and µ2 are the green vegetation and soil averages, respectively; and σ1 and σ2 are the standard deviations of green vegetation and soil, respectively. The value W1 is a weighting of the pixels in green and W2 is the respective weighting for soil. The image is scaled to values of 0–255.

Threshold Adjustment

Regardless of the land use, the value of the pixels is between 75 and 150 in all photographs; to make an optimal adjustment, an initial threshold was set at the middle of the range 75–150 (T0 = 112). The optimal value of the threshold (T1) occurred where the two component functions of Equation (5) were equal (Figure 5). In this case, the error of omission of vegetation and soil classification, represented by areas S1 and S2, is minimal.
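Equating the two weighted Gaussian components of Equation (5) reduces to a quadratic in T1, whose closed form the text gives next. A numeric sketch with illustrative parameters (assuming σ1 ≠ σ2; these mode values are made up for the example, not taken from the study):

```python
import math

def optimal_threshold(mu1, s1, w1, mu2, s2, w2):
    """Intersection of two weighted Gaussian pdfs (vegetation vs. soil modes).

    Solves A*T^2 + B*T + C = 0, obtained by equating the two components,
    and returns the root lying between the two means. Assumes s1 != s2.
    """
    A = s1**2 - s2**2
    B = 2 * (mu1 * s2**2 - mu2 * s1**2)
    C = (s1**2 * mu2**2 - s2**2 * mu1**2
         + 2 * s1**2 * s2**2 * math.log(s2 * w1 / (s1 * w2)))
    disc = math.sqrt(B**2 - 4 * A * C)
    roots = [(-B + disc) / (2 * A), (-B - disc) / (2 * A)]
    return next(t for t in roots if min(mu1, mu2) < t < max(mu1, mu2))

def weighted_pdf(x, mu, s, w):
    """One weighted Gaussian component of Equation (5)."""
    return w / (math.sqrt(2 * math.pi) * s) * math.exp(-(x - mu)**2 / (2 * s**2))

# Illustrative a* modes: vegetation around 95, soil around 135, equal weights.
t1 = optimal_threshold(95.0, 12.0, 0.5, 135.0, 8.0, 0.5)
```

At the returned T1 the two weighted densities are equal, which is exactly the condition that makes the omission areas S1 and S2 minimal.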
The following equation was used to solve T1:

$A T_1^2 + B T_1 + C = 0$ (6)

where:

$A = \sigma_1^2 - \sigma_2^2$; $B = 2(\mu_1 \sigma_2^2 - \mu_2 \sigma_1^2)$; $C = \sigma_1^2 \mu_2^2 - \sigma_2^2 \mu_1^2 + 2 \sigma_1^2 \sigma_2^2 \ln(\sigma_2 W_1 / \sigma_1 W_2)$ (7)

Figure 5. Identification of the optimal T1 threshold in a bimodal distribution.

In extreme situations where bimodality is not evident, that is, photographs where there is only vegetation or bare soil, we applied the algorithm proposed by Liu et al. [10].

2.6.3. Calculation of Total Vegetation Cover

Figure 6. Flowchart for estimating the cover of vegetation vertical strata at sampling sites in the State of Mexico.

Total vegetation cover (TVC) was calculated as follows (Figure 6):
$TVC = \frac{\sum_{i=0}^{n} FCC}{NPT} + \left(1 - \frac{\sum_{i=0}^{n} FCC}{NPT}\right) \times \left(\frac{\sum_{i=0}^{n} FCV}{NPT}\right)$ (8)

where FCC is the proportion of the number of pixels classified as aerial (overstory) cover, FCV is the fraction of the vegetative cover of the understory (lower stratum), and NPT is the total number of pixels contained in the image. The sum of TVC in all images of the CBSP design was considered the total cover per plot. Figure 6 presents the flowchart of the process of classification.

2.6.4. Accuracy in Cover Classification

The accuracy of the cover estimates obtained using the proposed methodology was calculated through a comparison of these values with a visual classification of the images within the ENVI 5.0 program. We considered two classes to distinguish the colors in the photographs. In understory images, all pixels in green were considered as leaves, and the rest were classified as bare soil. In overstory images, all pixels corresponding to leaves, stems, and branches were classified as cover, and the rest of the pixels were classified as sky. As mentioned in [11], the visual classification is considered as the real values of cover in the image, and those are compared with the automated threshold proposed in this research. Images of 12 zenith plots (252 images) and 11 plots to nadir (231 images) were used. The plots represented different land uses. The accuracy of the implemented classifier (AC) was evaluated using the following formula [11]:

$AC = 100 \times \left(1 - \frac{\sum_{i=1}^{n} \left|\frac{A-B}{A}\right|}{N}\right)$ (9)

where A represents the number of pixels with a real presence of vegetation in the reference image (visual classification) and B represents the number of pixels classified as having vegetation in the applied methods. An average accuracy was obtained in each plot evaluated, where 100% corresponds to a classification without errors.

3. Results

3.1.
Sampling Design

The observed (total area of projected crowns) and estimated (projected photographic captured area) values of the projected cover (%) per type of sampling design and spatial pattern of the trees are shown in Table 1. In the pattern analysis of the trees, 10 plots corresponded to a grouped (clustered) pattern, 15 to a random pattern, and five plots belonged to a dispersed pattern. The calculation of the estimated area is explained in Equation (1).

Table 1. Estimated and observed cover values (%) per type of sampling and spatial pattern of trees.

Design | Spatial Pattern | Observed (%) | Estimated (%) | MAE | Coefficient of Variation
RM | Grouped | 87.1 | 80.9 | 0.175 | 0.24
RM | Random | 64.0 | 58.1 | 0.173 | 0.34
RM | Dispersed | 34.9 | 29.0 | 0.171 | 0.06
CBSP | Grouped | 87.1 | 85.3 | 0.091 | 0.09
CBSP | Random | 64.0 | 62.5 | 0.097 | 0.33
CBSP | Dispersed | 34.9 | 32.9 | 0.089 | 0.16
SLAT | Grouped | 87.1 | 86.9 | 0.079 | 0.22
SLAT | Random | 64.0 | 62.2 | 0.119 | 0.38
SLAT | Dispersed | 34.9 | 33.7 | 0.117 | 0.20
VALERI | Grouped | 87.1 | 34.2 | 0.153 | 0.23
VALERI | Random | 64.0 | 31.7 | 0.179 | 0.41
VALERI | Dispersed | 34.9 | 29.2 | 0.155 | 0.22

RM: regular mesh; CBSP: carbon and biomass sampling plots; SLAT: tree and land use sampling; VALERI: remote sensing ground validation instrument.

The CBSP design showed the least error in two of the three types of spatial patterns. The second design that showed minor error was SLAT, which indicates that the sampling designs with photographic captures in diagonal form exhibit better results. The RM and VALERI designs had the highest and lowest number of samples, respectively. However, their errors were the highest (MAE > 0.15). These results practically discard them from being considered in operational sampling.

The random spatial pattern showed a higher coefficient of variation (CV) in the four sampling designs due to the design geometry. The dispersed pattern was the second one with the highest CV in the CBSP, SLAT, and VALERI designs. Within the grouped pattern, the CBSP sample recorded the lowest variation.

3.2.
Segmentation of Images

The use of the Python program allowed us to make the segmentation threshold selection consistent. The number of captured photos in the sampling makes it impractical to analyze the photographs in a supervised way or in semi-automated processes (photo by photo). The developed program classifies a sampling plot of 42 photographs at zenith and nadir in 30 s. The processor used has 2.6 GHz and 8 GB of memory.

3.3. Classification of Overstory Images

Figure 7 shows the classification of zenith images in four cover conditions. Figure 7a and its classification 7b represent the photograph with the highest cover in the whole sampling, 97% (Oyamel fir forest). Figure 7c,d represent 50% cover (Oyamel fir forest secondary vegetation). Figure 7g shows how the classifier correctly discriminates foliage from clouds (secondary vegetation of Pine Forest). Finally, Figure 7i presents a correct classification with minimum cover (secondary vegetation of Pine Forest).
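The plot-level batch workflow of Section 3.2 (classify every capture of a plot, then aggregate one cover value) can be sketched as follows. This is an illustrative reconstruction rather than the published program: images are tiny in-memory 2D intensity lists, and a simple global threshold stands in for the actual classifier.

```python
# Illustrative reconstruction (not the published program) of the batch
# workflow: classify each photograph of a plot, then aggregate one cover
# value. A simple global threshold stands in for the actual classifier.

def cover_fraction(image, threshold):
    """Fraction of pixels classified as vegetation (intensity below threshold).
    `image` is a 2D list of 8-bit intensities."""
    veg = total = 0
    for row in image:
        for px in row:
            total += 1
            if px < threshold:
                veg += 1
    return veg / total

def plot_cover(photos, threshold=128):
    """Average cover fraction over all photographs of one sampling plot
    (42 captures at zenith and nadir in the survey described above)."""
    return sum(cover_fraction(img, threshold) for img in photos) / len(photos)

# Two tiny synthetic 'photographs': dark pixels play the role of foliage.
photo_a = [[10, 200], [30, 220]]  # 2 of 4 pixels dark -> 50% cover
photo_b = [[10, 20], [30, 220]]   # 3 of 4 pixels dark -> 75% cover
print(plot_cover([photo_a, photo_b]))  # → 0.625
```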
Figure 7. Classification of zenith images, with different cover conditions. Images (a,c,f,h) correspond to the original photograph. Images (b,d,g,i) correspond to the adaptive threshold classifier.
3.4. Classification of Understory Images
Figure 8 presents a sample of four photographs in different land uses and different illumination and vegetation cover conditions, as well as their respective classification and histogram. Figure 8a,b are presented within a plot of land with secondary vegetation of Oyamel fir forest; the threshold in this capture was a* = 119 and the area under the curve with vegetation (V) was 43%. Figure 8c,d correspond to herbaceous vegetation within an Oyamel fir forest; in this particular vegetation the illumination was reduced because of the high overstory cover, with an understory cover of 62%. Figure 8f,g show the image within a rain-fed agriculture area, and we observed that the classifier adequately discriminated between bare soil, vegetation, and shadows; the threshold in this image was a* = 122 and the vegetation cover was 35%. Figure 8h,i are presented within a plot without vegetation; the classifier was able to detect the minimal green cover found in the photograph. In this case, as the distribution of the histogram was unimodal, the threshold was set at a* < 105 and the vegetation cover was 0.8%.
Figure 8. Classification of images in different land uses to nadir. Images (a,c,f,h) correspond to the original photograph. Images (b,d,g,i) correspond to the classified images. The third column shows the distribution of the histogram according to the cover.
3.5.
Accuracy of the Classification
Table 2 presents the comparison of the cover (%) classified by the supervised classification method (visual classification) and by the zenith method (estimated). The accuracy was high in all land uses, with an average of 93%. In relation to the coefficient of variation (CV, representing the variability of the sampling design), we observed that this variability increased as the estimated cover of the different land uses declined. In primary forests (BQP, BQ, BA, BPQ), the variation of the sampling design was low; as disturbance appeared in the land use (VSa, VSA, VSh), the variation increased because the static designs were sensitive to the opening of the canopy.

Table 2. Accuracy evaluation of the visual interpretation of the images in 12 zenith sampling plots (overstory cover).

Land Use | Description | Classified (%) | Estimated (%) | AC (%) | CV
BQP | Oak-pine forest | 84.0 | 83.0 | 99.0 | 0.06
BQ | Oak forest | 86.0 | 84.0 | 96.0 | 0.09
BA | Oyamel fir forest | 82.0 | 78.0 | 94.0 | 0.10
BPQ | Pine-oak forest | 74.0 | 75.0 | 96.0 | 0.12
VSa/BQ | Secondary shrub vegetation of oak forest | 52.0 | 48.0 | 90.0 | 0.13
VSA/BPQ | Secondary arboreal vegetation of pine-oak forest | 69.0 | 67.0 | 94.0 | 0.17
VSa/BQP | Secondary shrub vegetation of oak-pine forest | 71.0 | 69.0 | 96.0 | 0.21
VSa/BQP | Secondary shrub vegetation of oak-pine forest | 50.0 | 49.0 | 96.0 | 0.22
VSA/BA | Secondary arboreal vegetation of oyamel fir forest | 68.0 | 62.0 | 91.0 | 0.24
VSA/BP | Secondary arboreal vegetation of pine forest | 22.0 | 22.0 | 91.0 | 0.28
VSh/BP | Secondary herbaceous vegetation of pine forest | 16.0 | 15.0 | 92.0 | 0.66
VSh/BPQ | Secondary herbaceous vegetation of pine-oak forest | 31.0 | 35.0 | 87.0 | 0.68

AC: accuracy; CV: coefficient of variation.

Table 3 presents the evaluation of the vegetation cover to nadir. The average accuracy was 94%. As in zenith, the estimated cover maintains a negative correlation with the CV.
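The sign of this cover–CV relation can be checked directly from the two columns of Table 3. A minimal sketch, with the values transcribed from the table; Pearson's r is an assumption, since the text states only that the correlation is negative without naming a statistic:

```python
# Sketch: sign of the relation between estimated understory cover and the
# coefficient of variation, using the two columns transcribed from Table 3.
# Pearson's r is assumed as the measure; the text only states the sign.

est = [85.00, 79.00, 28.00, 83.50, 32.00, 49.33, 27.67, 34.42, 29.87, 6.00, 6.95]
cv = [0.06, 0.10, 0.16, 0.19, 0.51, 0.84, 0.86, 0.87, 0.88, 0.94, 0.98]

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

print(pearson(est, cv))  # negative, consistent with the text
```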
For this classification, the greatest cover is found in secondary herbaceous vegetation (VSh), where the variability of the sampling is lower; as the vegetation transitions to mature forest, the CV is high because the understory cover is randomly distributed. In sites with non-vascular vegetation (bryophytes), the classifier overestimates the percentage of vascular vegetation because it associates the green color with this type of cover.

Table 3. Accuracy evaluation of the visual interpretation of the images in 12 sampling plots at nadir (understory cover).

Land Use | Description | Classified (%) | Estimated (%) | AC (%) | CV
VSh/BPQ | Secondary herbaceous vegetation of pine-oak forest | 82.10 | 85.00 | 89.0 | 0.06
VSh/BP | Secondary herbaceous vegetation of pine forest | 78.43 | 79.00 | 96.0 | 0.10
TA | Rain-fed agriculture | 27.37 | 28.00 | 90.0 | 0.16
VSh/BA | Secondary herbaceous vegetation of oyamel fir forest | 84.00 | 83.50 | 98.0 | 0.19
VSa/BQ | Secondary shrub vegetation of oak forest | 32.70 | 32.00 | 94.0 | 0.51
PC | Cultivated grassland | 47.23 | 49.33 | 97.0 | 0.84
BP | Pine forest | 29.30 | 27.67 | 95.0 | 0.86
VSA/BPQ | Secondary arboreal vegetation of pine-oak forest | 37.65 | 34.42 | 92.0 | 0.87
BQP | Pine-oak forest | 30.69 | 29.87 | 96.0 | 0.88
BQ | Oak forest | 6.58 | 6.00 | 94.0 | 0.94
VSa/BQP | Secondary shrub vegetation of oak-pine forest | 5.63 | 6.95 | 87.0 | 0.98

AC: accuracy; CV: coefficient of variation.

4. Application of CBSP Design to the Regional Survey
After validating that the CBSP design was robust and precise enough to be used in a regional survey, it was implemented in the campaign by replicating the procedure used in the pre-survey evaluation in the 754 sampling plots. The application of the sampling design allowed us to capture photographs easily and quickly in the majority of land uses. Figure 9 shows the average total vegetation cover of the main land uses in the regions of the State of Mexico. The cover data are presented from highest to lowest and from mature forest to agriculture.
Mature forests show 50–90% cover across the eight regions, with the highest average coverage in the Toluca region (70–90%) (Figure 9a). In wooded areas with secondary tree vegetation (VSA) and secondary shrub and herbaceous vegetation (VSa and VSh), the cover ranges from 20 to 90%. This is because the limit of tree vegetation in these plots is a mixture of perennial and deciduous cover, therefore presenting a seasonal change of vegetation cover driven by the weather pattern. In this case, the region with the highest coverage was Zumpango (Figure 9b) and that with the lowest coverage was Coatepec (Figure 9f). With regard to cover in agricultural areas (RA, TA), the development of cover over time follows a spatial pattern associated with the time of planting and growth. Cover starts at <10% in all regions and increases up to 100%, as in the Toluca and Texcoco regions (Figure 9a,c).
Figure 9.
Average total vegetation cover by land use in the eight regions of the State of Mexico: (a) Toluca Region; (b) Zumpango Region; (c) Texcoco Region; (d) Tejupilco Region; (e) Atlacomulco Region; (f) Coatepec Region; (g) Valle de Bravo Region; (i) Jilotepec Region. (1) Mature forest; (2) disturbed forest; (3) grassland; (4) agricultural area.
5. Discussion
The use of digital photography for vegetation cover estimation is an easy, low-cost, and potentially suitable approach for hard-to-reach places. These properties give this method an advantage over direct (destructive) and indirect (fisheye camera) methods [13]. In Mexico, operational surveys such as the National Forest and Soils Survey [33] generate vegetation cover estimations that rely on the technician's criteria in the field. This research proposes an accurate survey design method which is potentially suitable for the forest sector in the country.
The advantage of using field data is its strength for the validation of satellite information, as a way to use the cover values at greater scales [6]. On the other hand, a disadvantage of field photography data is that several photographs are required to produce a reliable estimate. However, there are software methods for automating these processes, such as the one proposed in this research.
5.1. Sampling Designs Comparison
One important consideration in field vegetation cover measurements is the need to determine an appropriate sampling design [31]. The methodologies for measuring vegetation cover are ambiguous about which method should be used to ensure a correct estimation within a sampling plot. The projection made in the GIS allowed us to observe differences in vegetation cover estimations with a simple scheme.
The results provide important information for choosing a sampling design and reducing the costs of collecting and processing field data [22], considering the large number of samples in this survey. Martens et al. [36] show that spatial patterns influence the height, cover, and distribution of vegetation in its different strata. Our results showed that, independently of the spatial pattern of the survey sites, the sampling designs that captured diagonal photographs (CBSP and SLAT) exhibited the least estimation error. The advantages of these designs are the low number of photographs needed (42 photographs) and their easy field implementation. This confirms the operability of vegetation cover estimation using this method and shows how it can be related to grouping indices to evaluate the effect of the forest management of undisturbed zones on disturbed zones [24].
5.2. Sampling Survey
Forest surveys in several parts of the world, including Mexico, are carried out in circular plots of 1000 m² (17.85 m radius) [22]. In this research, we adopted this design to evaluate the biomass and carbon storage in different land uses within the State of Mexico. The CBSP design proved feasible for implementation in different land uses and spatial patterns of the trees; this simple design allowed its application at distant sampling points and in rugged terrain, which makes it an operative method for capturing vegetation cover with digital photographs. An example of the sampling operability is that bubble levels, rather than tripods, were used to stabilize the camera at each sampling point. This technique helped to reduce the time needed to take the photographs and introduced no considerable error when compared to tripod shots [37].
The number of samples and their arrangement were further variables to consider for the sampling to be operative and relatable to other variables measured in the plot (i.e., biomass, carbon), so captures were fixed at the ends of the sites [20]. The CBSP design obtained the best results when estimating the vegetation cover. The efficiency of its application and its smaller cover estimation errors make it a practical design for this type of application, besides saving storage and time when processing the images.
The spatial distribution of trees within the site is an important element of forest ecosystem dynamics [26]. In the case of overstory cover, we observed that the applied sampling design showed a negative trend between the estimated cover and its coefficient of variation (CV). The primary forests presented high and compact canopy covers (low CV); as cover is reduced, the canopy tends to be dispersed (high CV). In the case of understory cover, the opposite occurs: in disturbed areas (secondary vegetation), the cover is larger and compact; when this cover is reduced, the pattern is dispersed.
The real area covered by a photograph is another important aspect that has been little explored in vegetation cover sampling design. Researchers generally use a fixed lens viewing angle (35–40°) to estimate the canopy cover, so a greater opening angle would measure canopy closure instead. Jennings et al. [38] describe in detail the difference between these two concepts. In this study, we observed that in real situations the height of the trees in a sampling site is heterogeneous. For this reason, we proposed adjusting the focal length of the camera at the point of capture by a constant, as this ensures that the capture area is approximately constant and repeatable independently of the architecture of the trees.
5.3.
Automated Classification of Images
The automation of vegetation cover classification avoids the error of the human component and ensures that results can be reproduced [11]. Efforts to automate image classification have focused on programs such as MATLAB [12,15,39], WinScanopy [40], or Photoshop [41], among others. Although the former meets the requirement of batch processing the images, it has the disadvantage of an additional cost. The other programs have the disadvantage of processing the images photo by photo, which ruled them out for the cover analysis in this study. The Python program was chosen for the versatility of the specialized libraries available and for being free-access software. The written code enabled us to batch-process the sampling images in a time similar to that reported for programs like MATLAB.
5.3.1. Overstory Cover
In the classification of digital photographs, binary methods (global thresholds) are generally used to estimate canopy cover [42]. The classified images show gray tones for vegetation and white for the sky. According to Chityala and Pudipeddi [35], the accuracy of classification with global threshold methods is low. The problem is trying to find the maximum variance between two logical groups of segments within the whole image, which can cause misclassification due to camera overexposure or image capture at inappropriate times. In this research, we propose the use of an adaptive threshold in the blue channel of the image [15]. The method is based on the same principles as the global threshold, but the segmentation statistics are computed at the block level within the image, allowing greater accuracy in the overall classification. The accuracy of the classification is high and consistent with the comparison of binarization algorithms performed by Glatthorn and Beckschäfer [42].
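The block-level idea behind the adaptive threshold can be shown in a few lines. This is a simplified, hypothetical illustration rather than the published implementation: each fixed-size block is binarized against its own mean intensity (vegetation = below the local mean), which is the sense in which segmentation statistics are computed at the block level; the paper applies this to the blue channel of real photographs.

```python
# Sketch of adaptive (block-level) thresholding on a single channel.
# Simplified illustration: each block is binarized against its own mean,
# so segmentation statistics are local rather than image-wide.

def adaptive_threshold(channel, block=2):
    """Return a binary map (1 = vegetation, i.e., below the local block mean)."""
    h, w = len(channel), len(channel[0])
    out = [[0] * w for _ in range(h)]
    for by in range(0, h, block):
        for bx in range(0, w, block):
            pixels = [channel[y][x]
                      for y in range(by, min(by + block, h))
                      for x in range(bx, min(bx + block, w))]
            mean = sum(pixels) / len(pixels)
            for y in range(by, min(by + block, h)):
                for x in range(bx, min(bx + block, w)):
                    out[y][x] = 1 if channel[y][x] < mean else 0
    return out

# Left block: dark canopy against bright sky; right block: dimmer sky, so a
# global threshold tuned to the left block would misclassify it.
blue = [[40, 250, 90, 160],
        [45, 245, 95, 165]]
print(adaptive_threshold(blue, block=2))
```

Because each block carries its own threshold, uneven sky brightness (e.g., from overexposure) shifts the local statistics rather than corrupting the whole image, which is the motivation given above.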
The sampling was planned to avoid the effect of the sun (captures were made before noon and near sunset); however, there were circumstances in the sampling that caused photographs to exceed the proposed range, such as the VSh/BPQ land use plot (Table 2), which showed the lowest precision (87%) [16]. Nonetheless, the development of methodologies for this problem is beyond the scope of this research.
5.3.2. Understory Cover
There are many methods to extract the vegetation cover fraction [11,12,43], so the degree of accuracy is an important factor in the efficiency of field measurements. For example, supervised classification has high accuracy but low efficiency, while unsupervised classification has high efficiency but low precision due to commission and omission errors [1]. The algorithm proposed by Liu et al. [10] has the property of being simple, easy to automate, and highly precise. When comparing it with supervised classification in different sampling plots, the results in terms of precision were high (>87%); the main problem of misclassification was the confusion of vascular vegetation with bryophytes, which in the a* color space show a green color that is difficult to discern. In conditions of low illumination due to the effect of the canopy, the algorithm had no problems correctly classifying vegetation and shade [12], and with a single predominant component (bare soil) in the photograph, the classifier generated good results (Figure 8i). The two methods proposed to evaluate the over- and understory vegetation cover were robust and replicable in all sampling plots.
The reason for estimating the foliar projective cover rather than the leaf area index is that the former is a more adequate variable for characterizing vegetation [44], since the projective foliar cover captured with digital photographs contains information on individual plants and their spatial distribution.
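The a* rule used for the understory can be illustrated with a self-contained sRGB-to-CIELAB conversion. A hedged sketch: the conversion constants are the standard D65 ones, the +128 offset is an assumption that maps a* into the 0–255 range in which the text quotes thresholds such as a* = 119, and `is_vegetation` with its default threshold is hypothetical.

```python
# Sketch: green-vegetation test via the CIELAB a* channel (negative a* =
# green). Standard sRGB (D65) constants; the +128 offset (an assumption)
# maps a* into the 0-255 range used by the thresholds quoted in the text.

def srgb_to_linear(c):
    """Undo the sRGB gamma for one 8-bit channel value."""
    c /= 255.0
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def a_star(r, g, b):
    """CIELAB a* of an sRGB pixel (D65 white point)."""
    rl, gl, bl = (srgb_to_linear(v) for v in (r, g, b))
    # sRGB -> XYZ (D65), then the Lab f() transform on X and Y only.
    x = (0.4124 * rl + 0.3576 * gl + 0.1805 * bl) / 0.95047
    y = 0.2126 * rl + 0.7152 * gl + 0.0722 * bl
    def f(t):
        return t ** (1 / 3) if t > (6 / 29) ** 3 else t / (3 * (6 / 29) ** 2) + 4 / 29
    return 500 * (f(x) - f(y))

def is_vegetation(r, g, b, threshold=119):
    """Classify a pixel as vegetation if offset a* falls below a hypothetical
    threshold, mirroring the a* < t rule described above."""
    return a_star(r, g, b) + 128 < threshold

print(is_vegetation(60, 140, 50))    # leaf-green pixel → True
print(is_vegetation(150, 120, 100))  # bare-soil-like pixel → False
```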
With this perspective, and considering the validation of the proposed sampling and plot area, we will contemplate the validation of biophysical variables calculated with remote sensors in future work of the research group.
6. Conclusions
The over- and understory vegetation cover was estimated with digital photographs in sampling plots of the State of Mexico. The high efficiency and precision of the classification methods indicate that they are robust for discerning vegetation in different land uses and illumination conditions. The use of digital photography reduces the ambiguity of vegetation cover estimations in regional and national surveys. The proposed method is easily reproducible in heterogeneous land and vegetation conditions. The automation of the process using a free programming language avoided human errors and ensured the reproducibility of the results at a low cost.
The sampling methods using diagonally angled photographic captures were the best way to obtain less biased information when taking digital photographs at a circular sampling site. The simulation showed that the CBSP design has a smaller error when the spatial distribution of trees within the sampling site is considered. Its easy field management, the number of photographs per site, and its precision make it an operative design. One additional advantage of the proposed field survey is that the real area covered by the photograph is independent of the height of the trees. This guarantees representativeness and avoids image superposition in the sampling site.
Mature forests have a high and compact overstory vegetation cover, which tends to be reduced in secondary forests. The greatest understory cover is found in secondary forests, where it is denser; undergrowth cover declines in mature forests. The application of this method for regional and national surveys is recommended.
Acknowledgments: We appreciate the financial support given by the Programa Mexicano del Carbono and Protectora de Bosques del Estado de México (PROBOSQUE), which allowed us to conduct this study.
Author Contributions: Víctor Salas-Aguilar and Fernando Paz-Pellat contributed to the initial proposal of the methodology. Víctor Salas-Aguilar designed and developed the software to process the data in Python for implementation on a larger scale. Fabiola Rojas-García, Cristóbal Sánchez-Sánchez, J. René Valdez-Lazalde, and Carmelo Pinedo-Alvarez conducted the research methods. All authors discussed the structure and commented on the manuscript at all stages.
Conflicts of Interest: The authors declare no conflict of interest.
References
1. Liang, S.; Li, X.; Wang, J. Advanced Remote Sensing: Terrestrial Information Extraction and Applications; Academic Press: Cambridge, MA, USA, 2012.
2. Gutman, G.; Ignatov, A. The derivation of the green vegetation fraction from NOAA/AVHRR data for use in numerical weather prediction models. Int. J. Remote Sens. 1998, 19, 1533–1543. [CrossRef]
3. Li, Y.; Wang, H.; Li, X.B. Fractional vegetation cover estimation based on an improved selective endmember spectral mixture model. PLoS ONE 2015, 10, e0124608. [CrossRef] [PubMed]
4. Mu, X.; Hu, M.; Song, W.; Ruan, G.; Ge, Y.; Wang, J.; Huang, S.; Yan, G. Evaluation of sampling methods for validation of remotely sensed fractional vegetation cover. Remote Sens. 2015, 7, 16164–16182. [CrossRef]
5. Li, F.; Chen, W.; Zeng, Y.; Zhao, Q.; Wu, B. Improving estimates of grassland fractional vegetation cover based on a pixel dichotomy model: A case study in Inner Mongolia, China. Remote Sens. 2014, 6, 4705–4722. [CrossRef]
6. Mu, X.; Huang, S.; Ren, H.; Yan, G.; Song, W.; Ruan, G. Validating GEOV1 fractional vegetation cover derived from coarse-resolution remote sensing images over croplands. IEEE J. Sel. Top. Appl. Earth Observ. Remote Sens. 2015, 8, 439–446. [CrossRef]
7. Zhou, Q.; Robson, M.; Pilesjo, P.
On the ground estimation of vegetation cover in Australian rangelands. Int. J. Remote Sens. 1998, 19, 1815–1820. [CrossRef]
8. Chianucci, F.; Cutini, A. Estimation of canopy properties in deciduous forests with digital hemispherical and cover photography. Agric. For. Meteorol. 2013, 168, 130–139. [CrossRef]
9. White, M.A.; Asner, G.P.; Nemani, R.R.; Privette, J.L.; Running, S.W. Measuring fractional cover and leaf area index in arid ecosystems: Digital camera, radiation transmittance, and laser altimetry methods. Remote Sens. Environ. 2000, 74, 45–57. [CrossRef]
10. Liu, Y.; Mu, X.; Wang, H.; Yan, G. A novel method for extracting green fractional vegetation cover from digital images. J. Veg. Sci. 2012, 23, 406–418. [CrossRef]
11. Coy, A.; Rankine, D.; Taylor, M.; Nielsen, D.C.; Cohen, J. Increasing the accuracy and automation of fractional vegetation cover estimation from digital photographs. Remote Sens. 2016, 8, 474. [CrossRef]
12. Song, W.; Mu, X.; Yan, G.; Huang, S. Extracting the green fractional vegetation cover from digital images using a shadow-resistant algorithm (SHAR-LABFVC). Remote Sens. 2015, 7, 10425–10443. [CrossRef]
13. Macfarlane, C.; Hoffman, M.; Eamus, D.; Kerp, N.; Higginson, S.; McMurtrie, R.; Adams, M. Estimation of leaf area index in eucalypt forest using digital photography. Agric. For. Meteorol. 2007, 143, 176–188. [CrossRef]
14. Chianucci, F.; Chiavetta, U.; Cutini, A. The estimation of canopy attributes from digital cover photography by two different image analysis methods. iFor. Biogeosci. For. 2014, 7, 255–259. [CrossRef]
15. Fuentes, S.; Palmer, A.R.; Taylor, D.; Zeppel, M.; Whitley, R.; Eamus, D.
An automated procedure for estimating the leaf area index (LAI) of woodland ecosystems using digital imagery, MATLAB programming and its application to an examination of the relationship between remotely sensed and field measurements of LAI. Funct. Plant Biol. 2008, 35, 1070–1079. [CrossRef]
16. Macfarlane, C. Classification method of mixed pixels does not affect canopy metrics from digital images of forest overstorey. Agric. For. Meteorol. 2011, 151, 833–840. [CrossRef]
17. Tausch, R.; Tueller, P. Foliage biomass and cover relationships between tree- and shrub-dominated communities in pinyon-juniper woodlands. Great Basin Nat. 1990, 50, 121–134.
18. Muukkonen, P.; Mäkipää, R. Empirical biomass models of understorey vegetation in boreal forests according to stand and site attributes. Boreal Environ. Res. 2006, 11, 355–369.
19. Luna, J.A.N.; Hernández, E.H. Relaciones morfométricas de un bosque coetáneo de la región de El Salto, Durango. Ra Ximhai 2008, 4, 69–82.
20. Williams, M.S.; Patterson, P.L.; Mowrer, H.T. Comparison of ground sampling methods for estimating canopy cover. For. Sci. 2003, 49, 235–246.
21. Muir, J.; Schmidt, M.; Tindall, D.; Trevithick, R.; Scarth, P.; Stewart, J. Field Measurement of Fractional Ground Cover: A Technical Handbook Supporting Ground Cover Monitoring for Australia; Australian Bureau of Agricultural and Resource Economics and Sciences (ABARES): Canberra, Australia, 2011.
22. Matern, B. Recopilación de Notas Sobre Técnicas de Muestreo Usadas en Inventarios Forestales; SARH-INIFAP Pub. Especial: Distrito Federal, Mexico, 1993.
23. Gobron, N.; Verstraete, M. Remote sensing and geoinformation processing in the assessment and monitoring land degradation and desertification state of art and operational perspectives.
In Assessment of the Status of the Development of the Standards for the Terrestrial Essential Climate Variables: Fraction of Absorbed Photosynthetically Active Radiation (FAPAR); GTOS Secretariat, Food and Agriculture Organization of the United Nations (FAO): Rome, Italy, 23 April 2009.
24. Corral-Rivas, J.J.; Wehenkel, C.; Castellanos-Bocaz, H.A.; Vargas-Larreta, B.; Diéguez-Aranda, U. A permutation test of spatial randomness: Application to nearest neighbour indices in forest stands. J. For. Res. 2010, 15, 218–225. [CrossRef]
25. Hemery, G.; Savill, P.; Pryor, S. Applications of the crown diameter–stem diameter relationship for different species of broadleaved trees. For. Ecol. Manag. 2005, 215, 285–294. [CrossRef]
26. LeMay, V.; Maedel, J.; Coops, N.C. Estimating stand structural details using nearest neighbor analyses to link ground data, forest cover maps, and Landsat imagery. Remote Sens. Environ. 2008, 112, 2578–2591. [CrossRef]
27. Shaw, J.D. Models for Estimation and Simulation of Crown and Canopy Cover; General Technical Report (GTR); US Forest Service: Washington, DC, USA, 2005.
28. Ceballos, G.; List, R.; Garduño, G.; López, R.; Muñozcano, M.; Collado, E.; San Román, J. La Diversidad Biológica del Estado de México; Estudio de Estado; Biblioteca Mexiquense del Bicentenario: Ventura, CA, USA, 2009.
29. Programa Mexicano del Carbono (PMC). Manual de Procedimientos Inventario de Carbono+.
In Estudio de Factibilidad Técnica Para el Pago de Bonos de Carbono en el Estado de México; Programa Mexicano del Carbono: Texcoco, Mexico, 2015; p. 69.
30. INEGI. Datos Vectoriales Escala 1:250,000 de Uso de Suelo y Vegetación. Available online: http://www.inegi.org.mx/go/contends/recant/mussel/ (accessed on 24 May 2017).
31. Baret, F.; Weiss, M.; Allard, D.; Garrigues, S.; Leroy, M.; Jeanjean, H.; Fernandes, R.; Myneni, R.; Privette, J.; Morisette, J. VALERI: A network of sites and a methodology for the validation of medium spatial resolution land satellite products. Remote Sens. Environ. 2005, 76, 36–39.
32. Kuhnell, C.A.; Goulevitch, B.M.; Danaher, T.J.; Harris, D.P. Mapping woody vegetation cover over the state of Queensland using Landsat TM imagery. In Proceedings of the 9th Australasian Remote Sensing and Photogrammetry Conference, Sydney, Australia, 24 July 1998; pp. 20–24.
33. CONAFOR (Comisión Nacional Forestal). Inventario Nacional Forestal y de Suelos: Informe de Resultados 2004–2009 (National Forest and Soils Survey: Results Report 2004–2009); CONAFOR: Zapopan, Mexico, 2012; p. 212.
34. Clark, P.J.; Evans, F.C. Distance to nearest neighbor as a measure of spatial relationships in populations. Ecology 1954, 35, 445–453. [CrossRef]
35. Chityala, R.; Pudipeddi, S. Image Processing and Acquisition Using Python; CRC Press: Boca Raton, FL, USA, 2014.
36. Martens, S.N.; Breshears, D.D.; Meyer, C.W. Spatial distributions of understory light along the grassland/forest continuum: Effects of cover, height, and spatial pattern of tree canopies. Ecol. Model. 2000, 126, 79–93. [CrossRef]
37. Origo, N.; Calders, K.; Nightingale, J.; Disney, M. Influence of levelling technique on the retrieval of canopy structural parameters from digital hemispherical photography. Agric. For. Meteorol. 2017, 237, 143–149. [CrossRef]
38. Jennings, S.; Brown, N.; Sheil, D. Assessing forest canopies and understorey illumination: Canopy closure, canopy cover and other measures.
Forestry 1999, 72, 59–74. [CrossRef] 39. Korhonen, L.; Heikkinen, J. Automated analysis of in situ canopy images for the estimation of forest canopy cover. For. Sci. 2009, 55, 323–334. 40. Pekin, B.; Macfarlane, C. Measurement of crown cover and leaf area index using digital cover photography and its application to remote sensing. Remote Sens. 2009, 1, 1298–1320. [CrossRef] 41. Lee, K.-J.; Lee, B.-W. Estimating canopy cover from color digital camera image of rice field. J. Crop Sci. Biotechnol. 2011, 14, 151–155. [CrossRef] 42. Glatthorn, J.; Beckschäfer, P. Standardizing the protocol for hemispherical photographs: Accuracy assessment of binarization algorithms. PLoS ONE 2014, 9, e111924. [CrossRef] [PubMed] 43. Liu, J.; Pattey, E. Retrieval of leaf area index from top-of-canopy digital photography over agricultural crops. Agric. For. Meteorol. 2010, 150, 1485–1490. [CrossRef] 44. Poblete-Echeverría, C.; Fuentes, S.; Ortega-Farias, S.; Gonzalez-Talice, J.; Yuri, J.A. Digital cover photography for estimating leaf area index (LAI) in apple trees using a variable light extinction coefficient. Sensors 2015, 15, 2860–2872. [CrossRef] [PubMed] © 2017 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/). 
EDITORIAL

Taking up the watch

Peter J. Strouse

Received: 12 November 2012 / Accepted: 12 November 2012 / Published online: 30 November 2012
© Springer-Verlag Berlin Heidelberg 2012

From the journal's inception in 1973, Walter Berdon served as one of six founding editorial secretaries of Pediatric Radiology. He was a managing editor from 1985 to 2002. The journal served as the official organ of the Society for Pediatric Radiology. The society and the journal were both small. The journal filled the need for a publication focused on pediatric radiology.
Early publications focused on radiography, fluoroscopy and beginning sonography. CT and MRI were years away. Those of us blessed with modern technology can only imagine the challenges of producing the journal in an era before computers, digital photography, e-mail, faxes and numerous other technologies. Nonetheless, Dr. Berdon's office churned out a new issue every other month.

For the last 18 years, Dr. Tom Slovis has served as an editor of Pediatric Radiology. For 8 years, Tom served as an assistant editor under Dr. Berdon. For the last 10 years, Tom has served as the Editor for the Americas, working hand in hand with his European co-editors, first Steve Chapman and more recently Guy Sebag. Early in Tom's tenure, the manuscript submission and review process for the journal was migrated online, but not without trepidation. Advances in technology have accelerated the submission and review process. E-mail and fax service have rendered communication very quick. The world is smaller than it used to be. The journal is global.

In the last 10 years, under Tom's stewardship, the content and quality of the journal have continually improved. Review articles and mini-symposia have been introduced, providing immense value to our readership. Case reports, long a staple of the journal, remain but only those that truly teach us something new ("are prescient"). Original articles must achieve a high standard to warrant publication. Tom Slovis demands high standards and, under his watch, the journal has achieved this. Manuscripts must be well written. Content must benefit the readers. Science must be good. Illustrations must tell the story and must "jump out at you." During his tenure Tom has reviewed nearly every image that has appeared in the journal and, with his photography assistant, he has cropped, adjusted and relabeled countless images for optimal clarity.
Tom has consistently insisted that Springer, our publishing partner, maintain similar high standards regarding layouts, copy editing and picture quality in print. The result is a high-quality journal.

Tom's leadership style is simple. Work hard. Do what is right. Do it well. Encourage others to follow. Recognize achievement and excellence. Value leadership. Value visionary thought. Be honest. Be reliable. Be efficient. Be passionate.

Tom Slovis is passionate about the care of children. Everything that he has done as a physician, researcher, author and editor has that as its focus. In editorials, Tom has preached the need to provide value-added service and to be the child's doctor, always acting in the best interest of the child. As Chair of Radiology at Children's Hospital of Michigan he has practiced what he preaches. Tom is passionate about minimizing radiation dose in children. He has been at the forefront of our campaign to manage radiation dose in pediatric radiology. Tom was a key player in planning the first ALARA meeting on CT. Tom is passionate about properly diagnosing child abuse, differentiating it from other processes, and making the correct diagnosis for the good of the child. Like all of us, he is saddened by the child abuse contrarians who needlessly and recklessly challenge our ability to diagnose and protect our world's greatest resource.

Tom will not retire, nor will he willfully fade into the sunset. He will continue to contribute actively to the journal as an editor emeritus, functioning as an adviser to the editorial team, hopefully contributing an occasional commentary, and serving as a reviewer from time to time.

P. J. Strouse (*), Department of Radiology, Section of Pediatric Radiology, University of Michigan Health System, C.S. Mott Children's Hospital, 3-231, 1540 E. Hospital Drive, Ann Arbor, MI 48109-4252, USA. e-mail: pstrouse@umich.edu
Pediatr Radiol (2013) 43:1–2. DOI 10.1007/s00247-012-2574-0
Tom has many other passions to keep him busy—photography, travel, Telluride, New York Giants football, Detroit Tigers baseball, University of Michigan football and basketball (Go Blue!), watching his grandsons play hockey, innumerable friends, and, most of all, his family.

Tom leaves very big shoes to fill. To some extent, however, Tom has made it easy. He has assembled a great team of contributors and built an infrastructure for sustained excellence. The journal is a point of pride for our specialty. Tom has walked the walk and talked the talk. He has been a great mentor and showed us the proper way to serve as editor. It is with great humility that I become editor of this great journal. It is truly an honor and privilege to follow my good friend Tom Slovis.
Cytomegalovirus Retinitis in infancy

SME Wren1, AR Fielder2, D Bethell3, EGH Lyall3, G Tudor-Williams3, KD Cocker2 and SM Mitchell1

Abstract

Purpose: To describe the presentation of cytomegalovirus retinitis (CMVR) in a series of infants.

Methods: Immunocompromised infants with either HIV or systemic cytomegalovirus (CMV) were examined for CMVR. Ocular involvement was recorded and monitored by digital imaging.

Results: Five infants were detected to have CMVR. All the infants demonstrated changes within the macula. One infant progressed from a fine granular pattern to fulminant CMVR.

Conclusion: Infants under a year with CMVR have a predilection for the disease to present at the macula, in contrast to the presentation in adults, which tends to involve more peripheral parts of the retina.

Eye (2004) 18, 389–392. doi:10.1038/sj.eye.6700696

Keywords: cytomegalovirus; retinitis; infants

Introduction

Cytomegalovirus (CMV) is the most frequent intrauterine viral infection, affecting between 0.5% and 2.0% of all live births,1 and evidence of ocular involvement is present in 5–30%2 of infants exhibiting general symptoms and signs of the infection. CMV retinitis (CMVR) is the most common opportunistic infection affecting the eyes in adults with HIV; before combination therapy, studies showed that between 20% and 40%2,3 were affected. In contrast, CMVR has been reported in only 5% of children with acquired immunodeficiency syndrome (AIDS).2,4 In adults, CMVR presents in the peripheral retina in 85% of instances.2 It may affect one or both eyes and is frequently multifocal. It is typically recognised by a 'fulminant' picture of retinal vasculitis and vascular sheathing with areas of yellow-white, full-thickness retinal necrosis producing retinal oedema associated with haemorrhage and hard exudates.
The 'indolent or granular' variant describes a pattern in which there is less oedema and no haemorrhage or vascular sheathing, and this variant is seen more often in the periphery of the retina. These fulminant and indolent forms represent two ends of the clinical spectrum, and an individual may exhibit elements of both. Isolated macular involvement occurs in less than 5% of eyes in adults.3 On the clinical suspicion that CMVR might present differently in infants and adults, we analysed the ocular findings of a series of immunocompromised infants.

Methods

Immunocompromised infants presenting to St Mary's Hospital Paediatric Infectious Diseases Unit, London, between September 1999 and October 2001, with either HIV or under investigation for systemic CMV disease, were screened for CMVR by retinal examination. Contact digital photography was performed in infants with retinal changes using the RetCam 120 wide-field digital fundus camera (Massie Labs, Dublin, CA, USA). CMV infection was confirmed through virus shedding in the urine and the presence of CMV viraemia. CMV viral load was measured using fully quantitative assays on either whole blood or plasma, based on nested polymerase chain reaction (PCR) with second-round real-time detection on the LightCycler using SYBR Green dye (Micropathology Ltd, Coventry, UK). Where intravitreal ganciclovir was administered, a vitreous biopsy was taken and qualitative PCR-based assays confirmed the presence of CMV DNA. In the HIV-affected infants, the viral load was measured using the Chiron bDNA assay.

Results

Retinal abnormalities were detected in five of the infants screened (age 0–10 months): four had HIV infection and one had congenital CMV infection (Table 1).
Received: 6 March 2003; accepted in revised form: 25 July 2003
1 The Western Eye Hospital, London, UK; 2 Department of Ophthalmology, Imperial College, London, UK; 3 Department of Paediatrics, St Mary's Hospital, London, UK
Correspondence: AR Fielder, Department of Ophthalmology, Imperial College London, Room 9L02, Charing Cross Campus, St Dunstan's Road, London W6 8RP, UK. Tel: +44 (0)20 8383 3693; Fax: +44 (0)20 8383 3651; E-mail: a.fielder@imperial.ac.uk
Eye (2004) 18, 389–392. © 2004 Nature Publishing Group. All rights reserved 0950-222X/04 $25.00. www.nature.com/eye

There were two male and three female infants. All of them had bilateral disease. The clinical presentation varied from subtle white flecks at the macula, giving a granular appearance with central pigment epithelial disturbance (Figures 1a and b). The clinical findings did not progress over 3 months in these two infants. One infant displayed a similar subtle granular appearance at presentation, but this progressed to florid CMVR over 10 days (Figure 1c). The remaining two infants demonstrated more florid disease at presentation, more characteristic of typical CMVR (Figures 1d and e). The presentation was located at the macula in all the infants in at least one eye. Three infants with active retinitis had vitreous samples taken, which confirmed the presence of CMV DNA. They were treated with intravenous ganciclovir with or without foscarnet and subsequently with serial intraocular injections of ganciclovir.

Discussion

CMVR is a significant cause of ocular morbidity in immunocompromised patients and here we show that the pattern of presentation, both with respect to its location within the retina and its progression, may be different in infants compared to adults. Several reasons for differences in the presentation of CMVR have been postulated.
For example, reactivation of latent infections is less likely to be present in children, which makes the infection more likely to be a primary infection.2 The immaturity of the immune system in infants is likely to make the response to infection more severe.5 Finally, different exposure to the virus through breast milk in the perinatal period as opposed to sexual or percutaneous transmission may affect the presentation.6

Figure 1 Colour photographs of infants with CMV retinitis: (a) Infant 1: Right and left fundal photographs. Shows subtle white flecks at the macula, giving a granular appearance with central pigment epithelial disturbance. (b) Infant 2: Right and left fundal photographs. Shows subtle flecks at the macula in the left eye and a larger noncentral lesion in the right fundus. Neither changed in clinical appearance. (c) Infant 3: Right and left fundal photographs. Shows a subtle granular appearance at presentation, which progressed to florid CMVR over 10 days. (d) Infant 4: Right and left fundal photographs. As presented with a florid picture of CMVR. (e) Infant 5: Right and left fundal photographs. Presented with florid CMVR.

The HIV-infected infants in our series were all immunosuppressed and were in their primary HIV viraemic phase, in contrast to adults with CMVR, who present later in the course of their illness (when the HIV disease progresses to severe immunosuppression). Moreover, as CMV and HIV infections occur during an important immunological developmental period, there may be interactions between the infections.7 The above factors may contribute to the differences in disease expression, for example, in a series of infants described by Baumal et al4 89% of paediatric cases
presented bilaterally (in contrast to approximately 33% of adults) and the retinitis was noted to have a predisposition for the posterior pole of the eyes in infants as compared to adults.4 Similar features of CMVR having a predisposition for the posterior pole and presenting bilaterally have been recorded in other series of infants with HIV or with congenital CMV. The distribution of active retinitis in infants in other series is shown in Table 2. Congenital CMV infections do not present in the typical haemorrhagic manner, as seen in immunocompromised children,1 which we noted in the infant with congenitally acquired CMV. No identifiable factors were associated with the lack of progression in the infants with stable retinitis over the study period.

Our findings are interesting, but have limitations. Although the number of infants in this series is small, they were systematically examined. It was not a prospective study, rather the response to a clinical suspicion, and represents in effect a pilot study that may stimulate further investigation.

References
1 Coats DK, Demmler GJ, Paysee EA, Du LT, Libby C. Ophthalmologic findings in children with congenital cytomegalovirus infection. J AAPOS 2000; 4(2): 110–116.
2 Cunningham ET, Pepose JS, Holland GN. Cytomegalovirus infections of the retina. In: Ryan SJ (ed). Retina, 3rd ed. Mosby: London, 2001, pp 1558–1575.
3 Studies of Ocular Complications of AIDS (SOCA) Research Group in Collaboration with the AIDS Clinical Trials Group (ACTG). Foscarnet–ganciclovir cytomegalovirus trial. 5. Clinical features of cytomegalovirus retinitis at diagnosis. Am J Ophthalmol 1997; 124: 141–157.
4 Baumal CR, Levin AV, Read SE. Cytomegalovirus retinitis in immunosuppressed children. Am J Ophthalmol 1999; 127(5): 550–558.
Table 1 Summary of the characteristics of infants with CMV retinitis

Infant 1 (Figure 1a): age at presentation 2 weeks; HIV viral load N/A; CD4+ count at CMVR diagnosis N/A; CD4+ % N/A; CMV viral load 62 000 CMV DNA/ml. Retinal findings at presentation: subtle, flecked lesions at macula; large peripheral focus; all non-progressive. Treatment: nil.
Infant 2 (Figure 1b): age at presentation 12 weeks; HIV viral load 750 000 RNA copies/ml; CD4+ count 90 cells/mm3 (5%); CMV viral load 2 million CMV DNA/ml. Retinal findings: 'mottled' maculae with white flecks and pigment epithelial disturbance; nonprogressive. Treatment: i.v. ganciclovir.
Infant 3 (Figure 1c): age at presentation 10 weeks; HIV viral load 750 000 RNA copies/ml; CD4+ count 90 cells/mm3 (9%); CMV viral load 800 million CMV DNA/ml. Retinal findings: small cotton wool spots medial to the macula in both eyes; progressed to central florid retinitis. Treatment: i.v. ganciclovir and foscarnet; i.o. ganciclovir.
Infant 4 (Figure 1d): age at presentation 40 weeks; HIV viral load 750 000 RNA copies/ml; CD4+ count 112 cells/mm3 (24%); CMV viral load 1.6 million CMV DNA/ml. Retinal findings: active central retinitis in both eyes at presentation. Treatment: i.v. and i.o. ganciclovir.
Infant 5 (Figure 1e): age at presentation 18 weeks; HIV viral load 75 000 RNA copies/ml; CD4+ count 14 cells/mm3 (6%); CMV viral load 530 000 CMV DNA/ml. Retinal findings: active retinitis at the macula in both eyes. Treatment: i.v. and i.o. ganciclovir.

i.o. = intraocular; i.v. = intravenous.

Table 2 Distribution of active retinitis within the retina in children with HIV or congenital CMV

Study             No. of infants with retinitis <12 months   Central            Peripheral   Unspecified   Bilateral
Coats et al1      1/125                                      a                  1            a             a
Baumal et al4     2/9                                        'Posterior pole'   a            a             2
Pass et al8       4/34                                       a                  a            4             a
Yamanaka et al9   2/2                                        a                  a            2             1
Vadala et al10    2/2                                        a                  a            2             2
Dennehy et al11   1/14                                       a                  a            1             1
Salvador et al12  1                                          1                  a            a             a
Hammond et al13   2/12                                       1                  1            N/A           1
Levin et al7      1                                          1                  a            a             1

Information taken from published literature available. a Information not specified.

5 Williams AJ, Duong T, McNally LM, Tookey PA, Masters J, Miller R et al. Pneumocystis carinii pneumonia and cytomegalovirus infection in children with vertically acquired HIV infection. AIDS 2001; 15(3): 335–339.
6 Du LT, Coats DK, Kline MW, Rosenblatt HM, Bohannon B, Contant CF et al. Incidence of presumed cytomegalovirus retinitis in HIV-infected paediatric patients.
J AAPOS 1999; 3(4): 245–249.
7 Levin AV, Zeichner S, Duker JS, Starr SE, Augsburger JJ, Kronwith S. Cytomegalovirus retinitis in an infant with acquired immunodeficiency syndrome. Pediatrics 1989; 84: 683–687.
8 Pass RF, Stagno S, Myers GJ, Alford CA. Outcome of symptomatic congenital cytomegalovirus infection: results of long-term longitudinal follow up. Paediatrics 1980; 66: 758–762.
9 Yamanaka H, Yamanaka J, Okazaki KI, Miyazawa H, Kuratsuji T, Genka I et al. Cytomegalovirus infection of newborns infected with HIV-1 from mother: case report. Jpn J Infect Dis 2000; 53: 215–216.
10 Vadala P, Fortunato M, Capozzi P, Vadala F. Case report: CMV retinitis in two 10-month-old children with AIDS. J Paediatr Ophthalmol Strabismus 1998; 35: 334–335.
11 Dennehy PJ, Warman R, Flynn JT, Scott GB, Mastrucci MT. Ocular manifestations in paediatric patients with acquired immunodeficiency syndrome. Arch Ophthalmol 1989; 107: 978–982.
12 Salvador F, Blanco R, Colin A, Galan A, Gil-Gibernau JJ. Cytomegalovirus retinitis in paediatric acquired immunodeficiency syndrome: report of two cases. J Paediatr Ophthalmol Strabismus 1993; 30: 159–162.
13 Hammond CJ, Evans JA, Shah SM, Acheson JF, Walters MDS. The spectrum of eye diseases in children with AIDS due to vertically transmitted HIV disease: clinical findings, virology and recommendations for surveillance. Graefe's Arch Clin Exp Ophthalmol 1997; 235: 125–129.

UC Irvine Previously Published Works

Title: Combined benzoporphyrin derivative monoacid ring photodynamic therapy and pulsed dye laser for port wine stain birthmarks.
Permalink: https://escholarship.org/uc/item/1p49t236
Journal: Photodiagnosis and photodynamic therapy, 6(3-4)
ISSN: 1572-1000
Authors: Tournas, Joshua A; Lai, Jennifer; Truitt, Anne; et al.
Publication Date: 2009-09-01
DOI: 10.1016/j.pdpdt.2009.10.002
License: https://creativecommons.org/licenses/by/4.0/
Peer reviewed. eScholarship.org, powered by the California Digital Library, University of California.

Combined Benzoporphyrin Derivative Monoacid Ring A Photodynamic Therapy and Pulsed Dye Laser for Port Wine Stain Birthmarks

Joshua A. Tournas, M.D.1,2, Jennifer Lai, M.D.1,3, Anne Truitt, M.D.1,2, Y.C. Huang, Ph.D.1, Kathryn E. Osann, Ph.D.6, Bernard Choi, Ph.D.1,4,5, and Kristen M. Kelly, M.D.1,2,4

1 Beckman Laser Institute, University of California, Irvine, CA
2 Department of Dermatology, University of California, Irvine, CA
3 Keck School of Medicine, University of Southern California, Los Angeles, CA
4 Department of Surgery, University of California, Irvine, CA
5 Department of Biomedical Engineering, University of California, Irvine, CA
6 Department of Medicine, University of California, Irvine, CA

Abstract

Background—Pulsed dye laser (PDL) is a commonly utilized treatment for port wine stain birthmarks (PWS) in the United States; however, results are variable and few patients achieve complete removal. Photodynamic therapy (PDT) is commonly used in China, but treatment-associated photosensitivity lasts several weeks and scarring may occur. We propose an alternative treatment option, combined PDT+PDL, and performed a proof-of-concept preliminary clinical trial.

Methods—Subjects with non-facial PWS were studied. Each subject had four test sites: control, PDL alone, PDT alone (benzoporphyrin derivative monoacid ring A photosensitizer with 576 nm light), and PDT+PDL. Radiant exposure for PDT was increased in increments of 15 J/cm2. Authors evaluated photographs and chromametric measurements before and 12 weeks post-treatment.
Results—No serious adverse events were reported; epidermal changes were mild and self-limited. No clinical blanching was noted in control or PDT-alone sites. At PDT radiant exposures of 15 and 30 J/cm2, equivalent purpura and blanching were observed at PDL and PDT+PDL sites. At PDT radiant exposures over 30 J/cm2, greater purpura was noted at PDT+PDL sites as compared to PDL alone. Starting at 75 J/cm2, improved blanching was noted at PDT+PDL sites.

Conclusions—Preliminary results indicate that PDT+PDL is safe and may offer improved PWS treatment efficacy. Additional studies are warranted.

© 2009 Elsevier B.V. All rights reserved.

Corresponding Author: Kristen M. Kelly, M.D., 1002 Health Sciences Road East, Irvine, CA 92612. Telephone: (949) 824-7997; Fax: (949) 824-2726; KMKelly@uci.edu

Published in final edited form as: Photodiagnosis Photodyn Ther. 2009; 6(3-4): 195–199. doi:10.1016/j.pdpdt.2009.10.002

Keywords: port wine stain birthmarks; pulsed dye laser; photodynamic therapy; vascular birthmarks

INTRODUCTION

Port wine stain birthmarks (PWS) are congenital, progressive vascular malformations of the skin which are present at birth in 0.3% of infants, are commonly found on the face and in many cases disfigure the bearer.
The pulsed dye laser (PDL) is the current standard of care for treatment of such cutaneous vascular lesions. The PDL enables lesion lightening by causing selective photothermal injury to tissue vasculature. Yellow light (λ = 577–600 nm wavelength) emitted by the pulsed dye laser is preferentially absorbed by hemoglobin (the major blood chromophore) in the ectatic capillaries of the upper dermis. Radiant energy is converted to heat, causing thermal damage and thrombosis in targeted vessels(1). However, with PDL alone, few patients (< 10%) achieve complete blanching of their lesion, and multiple treatments (5–30 or more) are generally required. In a five-year study of 640 PWS patients at Bridgend General Hospital, Lanigan et al. concluded that the degree of fading achieved following PDL therapy is “variable and often unpredictable”(2). Huikeshoven et al. (3) published a 10-year follow-up study of 51 patients who had undergone PDL treatment for PWS birthmarks. They reported significant re-darkening of PWS since an initial course of PDL therapy (although PWS remained significantly lighter as compared to pre-treatment). Because of the limitations of PDL PWS therapy, the search continues for other safe and more effective treatment modalities. Photodynamic therapy (PDT) may offer such an alternative. PDT utilizes a photosensitizer and light to generate reactive oxygen species and creates an opportunity for targeted lesion destruction. PDT has been used to treat a wide range of benign, pre-malignant and malignant conditions, including age-related macular degeneration(4), actinic keratoses(5) and cancers of the skin, lung and gastrointestinal tract(6). PDT may offer several advantages for treatment of cutaneous vascular lesions. However, careful protocol design is required because the potential exists for complete vascular network destruction, which can result in necrosis, ulceration, and subsequent scarring. 
A small number of studies have been conducted using PDT as a stand-alone treatment of PWS, with varied success and safety outcomes. Collectively, the use of PDT to treat PWS has been associated with promising blanching, but also with prolonged photosensitivity and a significant risk of complications(7),(8). Careful photosensitizer and light source selection is required to achieve desired efficacy during PDT treatment of PWS, while limiting injury to vessels at the desired depth. In this study, we propose the use of benzoporphyrin derivative monoacid ring A (BPD) and yellow light for PDT. BPD is an excellent photosensitizer for PWS treatment based on the following characteristics: 1) vascular predominance(9),(10),(11); 2) proven safety and efficacy in humans(4),(12); and 3) photosensitivity of relatively short duration (1–5 days depending on dose administered)(12). We have proposed combining the photochemical and photothermal aspects of PDT and PDL therapy(9),(13). Combining these approaches allows the use of lower radiant exposures for each portion of the procedure, avoiding adverse effects such as scarring, while providing enhanced vascular shutdown. Our preliminary animal experiments(9),(13) demonstrated that PDT+PDL combination therapy can achieve enhanced vascular effect without epidermal injury and may address limitations associated with the current standard of care treatment, PDL alone.

We report observations from a proof-of-concept, tolerance and safety dose-ranging study designed to evaluate and compare PDL alone, PDT alone and PDT+PDL for PWS treatment. This is the first published report evaluating clinical use of the PDT+PDL approach.

MATERIALS AND METHODS

The Institutional Review Board at the University of California, Irvine approved the research protocol.
Patients 18 or older with non-facial PWS were recruited. Exclusion criteria included history of allergy to study medication, history of photosensitizing conditions such as the porphyrias, use of photosensitizing or blood-thinning medications, active infections, and recent laser treatment of PWS. Eight patients were enrolled. Three patients underwent two separate sets of treatment, for a total of 11 sets of treatment sites. Subjects' ages ranged from 19 to 53 and 7 of 8 patients were female. Fitzpatrick skin types were II or III. Lesions were located on the trunk and extremities.

For each subject, four circular test sites with a 2-cm diameter were delineated on the day of treatment. Sites were documented with digital photography and chromameter (Minolta Inc.) measurements were taken. Site #1 served as a control and received no treatment. Site #2 was treated with PDL alone (585 nm, 7 mm spot, 8 J/cm2 radiant exposure, 30 ms cryogen spurt with a 20 ms delay). Sites #3 and #4 were treated with PDT alone and PDT+PDL, respectively.

After use of the PDL on site #2, patients received liposomal BPD-MA (verteporfin; Visudyne; QLT, Inc., Vancouver, Canada) by intravenous infusion. Drug administration procedures in the verteporfin package insert were followed. Each 15 mg vial of medication was reconstituted with 7 ml sterile water, providing 7.5 ml of a 2 mg/ml solution of verteporfin. The reconstituted verteporfin was protected from light and used within four hours of preparation. The volume of reconstituted verteporfin required to achieve a dose of 6 mg/m2 body surface area (calculation based on weight and height as provided in drug package information) was withdrawn and diluted with 5% dextrose for a total infusion volume of 30 ml. The full infusion volume was administered mechanically over a 10-min period at a rate of 3 ml/min using a syringe pump and in-line filter.
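The dosing arithmetic above (15 mg vial reconstituted to 2 mg/ml, 6 mg/m2 dose, 30 ml total volume at 3 ml/min) can be sketched as follows. This is an illustrative sketch only: the Mosteller body-surface-area formula and the function names are assumptions introduced here, since the study deferred to the calculation given in the verteporfin package insert.

```python
import math

def bsa_m2(height_cm, weight_kg):
    """Body surface area in m^2 via the Mosteller formula (an assumption;
    the study used the calculation from the verteporfin package insert)."""
    return math.sqrt(height_cm * weight_kg / 3600.0)

def verteporfin_infusion_plan(height_cm, weight_kg,
                              dose_mg_per_m2=6.0,
                              stock_mg_per_ml=2.0,    # 15 mg vial + 7 ml water -> 7.5 ml at 2 mg/ml
                              total_volume_ml=30.0,   # diluted with 5% dextrose
                              rate_ml_per_min=3.0):
    """Volumes for the 6 mg/m^2 dose described in the methods."""
    dose_mg = dose_mg_per_m2 * bsa_m2(height_cm, weight_kg)
    drug_ml = dose_mg / stock_mg_per_ml            # reconstituted drug to withdraw
    dextrose_ml = total_volume_ml - drug_ml        # 5% dextrose diluent
    infusion_min = total_volume_ml / rate_ml_per_min  # 30 ml at 3 ml/min = 10 min
    return {"dose_mg": dose_mg, "drug_ml": drug_ml,
            "dextrose_ml": dextrose_ml, "infusion_min": infusion_min}
```

For a hypothetical 170 cm, 65 kg subject (BSA of roughly 1.75 m2) this gives a dose of about 10.5 mg, i.e. about 5.3 ml of reconstituted drug diluted with about 24.7 ml of 5% dextrose, infused over 10 min.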
Irradiation of the PDT test site was initiated 15 min after the start of the 10-min BPD infusion with an argon-pumped dye laser (Lumenis Inc., Santa Clara, CA) tuned to 576 nm with an output irradiance of 100 mW/cm2. To study effectiveness and safety of PDT at different light doses, the total CW radiant exposure administered was steadily increased over the course of the study by increasing the total time of laser exposure. Two subjects each received a CW radiant exposure of 15, 30, 45, 60, and 75 J/cm2, and one subject received a radiant exposure of 90 J/cm2. For the combined PDT+PDL test site, PDT was performed as described in the preceding paragraph. This was followed immediately by PDL irradiation at the same parameters as the PDL alone test site.

Subjects returned for follow-up visits at 1 and 3 days, and 1, 2, 4, 8 and 12 weeks post intervention. At each visit, documentation of PWS appearance was obtained by visual inspection, digital photography and chromameter measurements. Patients were questioned regarding adverse effects. Pre- and 12-week post-treatment measurements of erythema (a*) were compared for the various treatment groups using repeated measures analysis of variance.

RESULTS

Safety and Tolerability

Treatments were well tolerated with most subjects reporting no discomfort during PDT (only subject 1 reported mild temporary discomfort) and mild discomfort during PDL therapy. No subjects reported increased discomfort during PDT+PDL as compared to PDL alone. No serious adverse events were reported. One patient experienced extravasation at the i.v. site (antecubital fossa of arm) and required photoprotection of this area for several weeks. This patient received a repeat treatment without incident, six weeks later.
Epidermal changes were limited to fine scabbing and temporary mild hyperpigmentation at PDL-treated sites, which resolved without treatment. Other adverse effects were reported but were not thought to be related to treatment. One subject experienced viral symptoms three days post-treatment, including fever, chills and nasal congestion. Other family members were similarly affected. A second subject developed a crust at the PDT+PDL site four weeks post-intervention. The subject reported that similar crusting had happened before, sometimes related to trauma. This complication resolved without treatment or sequelae. A third subject reported an asthma flare two weeks post-treatment. This subject has a long history of significant asthma.

Visual Assessment of Blanching Response

No changes were observed at control sites. We also did not observe any PWS blanching at PDT alone sites. Clinical assessments of PDL alone versus PDT+PDL test sites are summarized in Table 1. At PDT radiant exposures of 15 and 30 J/cm2, blanching was similar at the PDL and PDT+PDL sites. Starting at a PDT radiant exposure of 30 J/cm2, a greater amount of purpura was noted at the PDT+PDL site as compared to the PDL alone site (Figure 1). Starting at a PDT radiant exposure of 75 J/cm2, improved efficacy was noted at the PDT+PDL site (Figure 2). Near complete blanching was achieved in one of the subjects who received a light dose of 75 J/cm2.

Chromametric Measurements

Chromameter measurements of a* values pre-treatment were compared to those measured at 12 weeks post-treatment. The a* value is a measurement of color on the red-green scale, with higher a* values indicating increasing redness. The average change in a* over the 12-week study period for each of the test sites was as follows: control 0.48; PDT alone −0.14; PDL alone −2.67; PDT+PDL −3.22.
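The per-site averages above are simple means of the 12-week change in a* across treatment sets. A minimal sketch of that calculation is shown below; the readings are hypothetical, since individual subject values were not reported.

```python
def mean_delta_a(pre_values, post_values):
    """Average change in chromameter a* (redness) from pre-treatment to
    12 weeks post-treatment; negative values indicate lightening."""
    deltas = [post - pre for pre, post in zip(pre_values, post_values)]
    return sum(deltas) / len(deltas)

# Hypothetical pre/post a* readings for one treatment site in three subjects.
pre = [18.0, 21.5, 19.0]
post = [15.5, 18.0, 17.0]
print(round(mean_delta_a(pre, post), 2))  # -2.67
```

A negative mean, as in the PDL alone and PDT+PDL columns above, corresponds to an average reduction in redness at that site.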
Repeated measures analysis of variance of a* values for control as compared to other test sites revealed statistical significance for the PDL alone group (p = 0.03) and the PDT+PDL group (p = 0.05). Comparisons between the different intervention sites were not statistically significant.

DISCUSSION

Several important observations were made during this dose-ranging study. First, PDT alone and PDT+PDL therapy, using BPD and the selected light parameters, did not demonstrate any major adverse effects. Epidermal changes were limited to fine scabbing in areas treated with the PDL and temporary mild hyperpigmentation in the same areas, all of which resolved without treatment. Similar hyperpigmentation is frequently seen with PDL alone treatment and is generally well tolerated by patients. Prior studies utilizing blue light and red light PDT for PWS treatment have resulted in skin necrosis and ulceration, possibly due to utilization of high light intensities or selection of longer wavelengths. PWS lightening was achieved with PDT+PDL without adverse effects using a vascular-specific photosensitizer (BPD), lower irradiance (100 mW/cm2) and PDT light doses up to 90 J/cm2. Photosensitizers previously utilized for PDT vascular lesion treatment have resulted in severe photosensitivity of two weeks or more. BPD administration results in a photosensitivity period in humans of 5 days or less. We did carefully counsel subjects regarding photosensitivity precautions, and subjects left our center wearing protective clothing. However, several subjects drove after treatment from our center in Southern California to their homes (an hour or more away) without sequelae. Further, no photosensitive reactions were reported, except for the one subject in whom extravasation occurred.
As noted above, she reported mild stinging in the area of extravasation, but covering of the area prevented any further cutaneous effects. Second, improved efficacy of PDT+PDL over PDL alone was observed and was dose-dependent. Increased purpura was noted by the investigators in the PDT+PDL sites of subjects who received PDT radiant exposures equal to or greater than 30 J/cm2. We also noted increased blanching of the PDT+PDL sites over PDL sites in subjects who received PDT light doses of 75 or 90 J/cm2. Interestingly, our studies did not find significant improvement of PDT alone as compared to control sites. We expect that higher light or drug doses will be required in order to achieve desired effects with PDT alone. This is undesirable because of time considerations (with the current protocol, a dose of 90 J/cm2 requires 15 min of irradiation) and financial considerations (cost of photosensitizer). Further studies will be required to better elucidate dosing requirements for PDT alone (using BPD and yellow light) for treatment of PWS. This study builds on findings from our earlier in vivo animal studies demonstrating the potential of the PDT+PDL combination intervention to achieve significant vascular effects. Our current pilot human study provides evidence that PDT+PDL may offer an enhanced blanching effect for removal of PWS. We did note that dosing effects noted in animal models were not directly transferable to clinical studies. As such, dose-ranging studies such as the current one will be required as protocol adjustments are made. With additional study of the PDT+PDL treatment protocol, we envision that parameters could ultimately be adjusted based on individual lesion characteristics. Longer wavelengths may be useful for treatment of thicker lesions such as blebbed port wine stains, hemangiomas and other vascular malformations.
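The time consideration cited above is simple arithmetic: at a fixed irradiance, the required exposure time is radiant exposure divided by irradiance (t = H / E). A short sketch using the study's 100 mW/cm2 output at 576 nm:

```python
IRRADIANCE_W_PER_CM2 = 0.100  # 100 mW/cm2, the fixed irradiance used in this study

def irradiation_seconds(radiant_exposure_j_per_cm2):
    """Exposure time (s) needed to deliver a given CW radiant exposure
    (J/cm2) at the fixed study irradiance."""
    return radiant_exposure_j_per_cm2 / IRRADIANCE_W_PER_CM2

# The highest dose used, 90 J/cm2, takes 900 s, i.e. the 15 min noted above.
print(irradiation_seconds(90) / 60)  # 15.0
```

By the same arithmetic, the lowest dose of 15 J/cm2 took 2.5 min, which is why dose escalation was achieved simply by lengthening the exposure.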
In conclusion, in this preliminary report, PDT+PDL was shown to be a potentially efficacious option for treatment of PWS, and no long-term adverse effects were noted. PDT alone did not result in PWS blanching. Further prospective, comparative and controlled multi-center clinical studies are required to further develop the PDT+PDL protocol, which may offer enhanced efficacy as compared to the current standard of care treatment with PDL alone. We are continuing evaluations with the rodent dorsal skin fold window chamber to evaluate the use of alternative light sources and determine methods to safely and effectively irradiate larger treatment areas in a single session. Once our protocol is optimized further, we intend to expand our pilot study into a phase II randomized clinical trial testing the effectiveness of PDT+PDL. It is our belief that this protocol may offer an alternative treatment option for PWS birthmarks and may offer a safe and more consistent blanching effect. This protocol may also be adapted for treatment of other cutaneous vascular lesions including malignancies and hemangiomas.

Acknowledgments

This work was supported in part by grants obtained from the National Institutes of Health (AR51443 to KMK), the Sturge Weber Foundation (KMK), the Arnold and Mabel Beckman Foundation and the A. Ward Ford Foundation (BC). Institutional support was provided by the National Institutes of Health Laser Microbeam and Medical Program (LAMMP). This investigation was also supported by the US Public Health Service research grant M01 RR00827 from the National Center for Research Resources, University of California, Irvine. BPD and administration kits were generously provided by QLT (Vancouver, BC, Canada). We thank Drs.
Wim Verkruysse, Sol Kimel and Lars Svaasand for their advice and assistance.

Figure 1. Increased post-treatment purpura noted one week post PDT+PDL treatment as compared to PDL alone therapy (PDT radiant exposure = 45 J/cm2).

Figure 2. PDL site (test site 2) (top left) prior to and (top right) 12 weeks post-intervention. PDT+PDL (test site 4) (bottom left) prior to and (bottom right) 12 weeks post-intervention. The PDL therapy site remained unchanged in color and size. In contrast, the PDT+PDL treatment site decreased in area by ~50% and some areas of the PWS resolved completely. No changes were noted in control and PDT alone sites.
Table 1. Summary of Treatment Observations by Investigator (PDL alone versus PDT+PDL)

PDT radiant exposure (J/cm2)   Subject #   Investigator observations
15                             1           0/2: difference in purpura
15                             2           0/2: difference in blanching
30                             3           2/2: increased purpura PDT+PDL
30                             4           0/2: difference in blanching
45                             5           2/2: increased purpura PDT+PDL
45                             6           1/2: equal blanching; 1/2: improved blanching PDT+PDL
60                             7           2/2: increased purpura PDT+PDL
60                             8           0/2: difference in blanching
75                             9           2/2: increased purpura PDT+PDL
75                             10          2/2: improved blanching PDT+PDL
90                             11          1/1: increased purpura PDT+PDL; 1/1: improved blanching PDT+PDL

work_iy7d2qlnqnhflfasazqvcaia2m ---- Microsoft Word - SoundPaperSouvenirs.doc

Sound, paper and memorabilia: Resources for a simpler digital photography

David Frohlich, Digital World Research Centre, School of Human Sciences, University of Surrey, Guildford GU2 7XH, d.frohlich@surrey.ac.uk
Jacqueline Fennell, Department of Interaction Design, Royal College of Art, Kensington Gore, London SW7 2EU, jac@hijacdesign.com

1. INTRODUCTION

Domestic photography, in the west, is undergoing its most radical change since the introduction of the Kodak box brownie camera in 1900. The process of private analogue film exposure and professional film development ('you press the button, we do the rest') is giving way to a range of alternatives made possible by digital imaging technology. These include private home printing, self-service or web-service printing, development onto CD-ROM, electronic archiving, transmission and publishing of photographs, photo manipulation, and photo viewing on every possible kind of screen. Furthermore, the incorporation of camera features into other devices such as mobile phones, PDAs, camcorders and MP3 music players is expanding the context and volume in which photographs are taken and leading to their combination with other media and activities.
As an example of an area in which consumers are led to think that 'More is More', digital photography is just about perfect. In this paper we present a body of work to develop a simpler and more reflective form of digital photography. We give three pairs of examples of 'Less is More' thinking and design which are made possible by new technology, but directed and inspired by user behaviours and reactions. Each pair of examples happens to review the place of an old technology in the new scheme of things, and challenges a technological trend in the industry. Hence, we consider the role of sound in photography to recommend audiophotographs rather than short video clips as a new media form. We look again at the role of paper in photo sharing and recommend its support and augmentation against the trend towards screen-based viewing. Finally, we consider the role of physical memorabilia alongside photographs, to recommend their use as story triggers and containers, in contrast to extensive photo-video narratives. All these ideas were generated through close attention to current-day practices and needs in domestic photography, and yet have been productive in developing novel trajectories for simple designs. We take up this theme at the end of the paper to draw out some design lessons for simple computing. The inspiration and structure for this paper come from a similar article published 9 years ago as a two-page note in a conference proceedings. Strong and Gaver (1996) outlined three novel concepts for supporting simple intimacy in mediated communications. Each involved pairs of devices used by remote partners to let one partner know the other was thinking of them. Hence, 'Feather' comprised a picture frame which, when picked up, caused a fan to blow a feather into the air inside a glass container. 'Scent' used the same picture frame to trigger a burner to come on beneath a bowl of essential oil, releasing a lingering aroma.
'Shaker' comprised a pair of handheld devices which transmitted a shaking motion in one device to the other. What was unique about these concepts at the time was their recognition of a core truth about human communication that was overlooked by the myriad voice-and-data facilities of other CSCW systems: that it involved expressions of emotional intimacy conveyed through the most minimal of non-verbal cues. In fact, this insight was a reminder that CSCW systems were about supporting human and humane communication, rather than all the complex activities that had come to characterise use of the technology. In the same way, we would like to call attention again to what digital photography systems are about, and to use this insight to populate a new design space by example.

2. DOMESTIC ICONOGRAPHY AND DIGITAL PHOTOGRAPHY

Digital photography systems are about the support of domestic photographic behaviours, which themselves relate to fundamental processes of memory, narrative and identity. Photographs happen to be one particular record or token of experience, which people seem to find useful for remembering that experience, reflecting on it, displaying it, and talking about it. In the absence of other theories of domestic photography, Frohlich (2004) has proposed a framework for thinking about the activities in photography as an interplay between the photograph, the photographer, the subject and the audience. This framework is reproduced below in Figure 1. It shows that half the activities, shown as dotted lines, involve different kinds of solitary reflections on the photograph by the various human participants. The other half, shown as solid lines, involve different kinds of three-way interactions between participants around the photograph. In this view, memory is something which happens individually on review of a photograph, but also socially and collaboratively in interaction.
Narrative is a form of interpretation an audience can read into a photograph, or a story that can be told from a photograph by a photographer or subject. And identity is both something that is seen in a photograph of oneself as subject, and something deliberately presented to others for interpretation and response.

Figure 1. The diamond framework for domestic photography (Figure 3.5 in Frohlich 2004, p44. Reprinted with permission).

Going beyond this conception of photography, we note that other records of experience can substitute for photographs in the framework. Sounds, textures and scents can trigger memories or interpretations in individual participants, and even serve as talking points for reminiscing or storytelling conversations. Composite records such as video or 'audiophotos' serve a similar function, albeit with different properties and affordances for behaviour (see again Frohlich 2004). Finally, objects can substitute for photographs in the framework, whenever they come to be associated with an experience or identity. We define such objects as memorabilia, since they stand as a token of experience without being literal records of that experience. In this broader view, it becomes harder to refer to the spectrum of activities involved as 'just photography', since other media have expanded the forms of behaviour within the framework. However, we argue that the dynamics of the interactions within the diamond remain the same, and continue to refer to something of what people do when they use tokens to manage memory, narrative and identity. For want of a suitable label, let us call this activity domestic iconography. This refers to the use of visual and non-visual icons in mediating personal narratives. Given this reminder of what digital photography should be about, how does it actually measure up?
More specifically for the theme of this book, what are the assumptions underlying the kind of more-is-more philosophy in the industry, and are they justified in terms of supporting what we have called domestic iconography? Taking the first question first, we can say that the new digital photography products being used today make it easier than ever to capture photographs in volume, to review the results more quickly and to share the images more effectively than before. In fact, there seems to have been a shift in focus from the support of personal capture and consumption of photographs printed out at home, to the capture and sharing of screen-displayed photographs through photoware (Frohlich, Kuchinsky, Pering, Don & Ariss 2002). This has happened within the photographic paradigm with digital cameras and the internet, but also within the telephone paradigm with the advent of cameraphones and multimedia messaging (Koskinen, Kurvinen & Lehtonen 2002). There has also been a convergence of camera and camcorder functionality, such that most capture devices now support photographs and video, with audio thrown in as a by-product of video. Since digital cameras are still cheaper and more popular than digital camcorders, the emphasis remains on still images as the primary recording form, with short video clips emerging as a secondary form alongside them. This is reinforcing the screen-based orientation of 'playback' and leading to a reduction in the proportion of photos that are printed. Interestingly, the same number of photographs are still being printed worldwide, because the overall number of photographs captured has gone up (Worthington 2004).
However, little has been done to encourage more general iconographic activities with other media, apart from the provision of short video capture on cameras. Implicit in these trends are a number of assumptions about what is good for the consumer. These are variations on a general more-is-more philosophy to provide the largest number of features at the lowest possible price. One assumption seems to be that the additional realism of video is preferable to photographs as a record of 4 experience, all other things being equal. This can be seen in the attempts to increase the quality of video on still cameras at fixed cost, and decrease the cost and size of camcorders themselves. Much effort is also being put into the provision of new digital video editing software and storage solutions, to further encourage an upsurge in video capture. In the cameraphone arena, digital video is a major driver for 3G applications. Another assumption is that screen-based display and playback of media is preferable to printing photographs because it increases the convenience and impact of images at lower cost. This can be seen in the provision of photo and video browsers on all kinds of platforms, from the LCD on the camera or phone, to software for the PC, Mac, TV, projector, media player and gamestation. On certain platforms like mobile phones, image capture and display was made available prior to any printing solution – expecting the user to figure out how to move the images to another platform from which to print. Finally, there is an assumption that increasing the coverage of photo and video is a good thing in itself, so that consumers are able to capture and look back on more candid and complete records of their lives. This can be seen partially in the promotion of video over photo records in which the frame rate is much higher, but also in the movements to develop ubiquitous and wearable cameras. 
Situated cameras at theme parks currently offer photographs the subjects could not have taken themselves, while wearable camera prototypes promise always-on video, slow-burst stills or photographs triggered by contextual events such as laughter (e.g. Gemmell, Williams, Wood, Lueder & Bell 2004). Photography in this view would become more about consumption than capture, and the navigation of large media repositories representing entire life histories. In the rest of this paper, we critique each assumption in turn as a way of introducing a less-is-more alternative. The critiques are based on a series of user studies addressing these issues from a consumer point of view. In each case we present two design concepts embodying the alternative view and suggesting new approaches to design which might be developed by others.

3. VIDEO REALISM AND THE ROLE OF SOUND

Although video undoubtedly captures more of the experience of an event than a photograph, it does not follow that a video clip serves as a better trigger for memory, narrative or the presentation of identity. We can see this quickly from a number of technical contrasts between the media. Symbolically, the selection of a key moment in an unfolding experience can represent that experience in a single frame, as in the blowing out of candles on a birthday cake. Aesthetically, a still image points back in time and causes the viewer to search for an explanation, whereas a moving image points forwards to a different narrative question of what will happen next (Berger 1982). Interactively, the watching of a video sequence is passive and paced by the real-time development of the action, or the director's cut. In contrast, photographs are usually consumed actively at a pace controlled by the viewer. Psychologically, video is a hotter medium than photographs because it contains a higher density of information and requires the viewer to do less interpretive work to understand the content (cf. McLuhan 1964).
All these differences are potentially important when considering which medium is best for what activity in the framework of Figure 1. More clues about whether video is preferable overall come from empirical studies of video use and direct preference contrasts. Somewhat surprisingly, very little has been published about domestic video use and how it is changing with digital technology. In an unpublished study carried out in HP Labs, Budd (1996) found a great disenchantment with analogue video among camcorder owners, and a lively market in second-hand devices. The owners complained of recording too much material and not having time to edit or index it. They also felt more distanced from the original event by filming it rather than participating in it themselves. This had led many to stop using their camcorders, and to neglect watching the video archive they had collected. In contrast, a parallel study of analogue camera owners found that they were largely satisfied with their cameras and the quality of photographs they produced (Geelhoed 1996). Problems of organising and archiving photographs were mentioned, but these did not prevent home photographers from using or enjoying them. A different set of trade-offs was reported in a study by Chalfen (1998). He interviewed 30 teenagers about their attitudes to home video versus photographs. The teenagers reported that video was better than photos for re-living the event, since the records were more detailed and realistic. However, they also felt that video was too real to allow room for thinking and talking about the past with others. In other words, the hotter 'temperature' of the video medium referred to above was seen as a disadvantage in terms of reflecting on a memory or telling a story about it. This led Chalfen to conclude that 'less is better' with respect to this contrast (p174).
In a follow-up study to the camcorder and camera surveys cited above, Budd and Frohlich (1996) uncovered latent interest in a middle ground between photos and video. Four consumer focus groups, clustered by age and gender, responded consistently well to the idea of attaching sounds to photographs. In fact, sound was perceived to be the most attractive media type to combine with photos, compared to handwriting, text or even short video clips. This is because it was seen to have a variety of uses, including adding atmosphere with ambient sound, increasing nostalgia with music, and attaching a voice message with narration. This made us wonder how much of the realism of video was actually provided in the audio stream, and whether a new audiophoto combination might be better than video for memory and storytelling. This possibility was tested in a subsequent audiophoto trial (Frohlich & Tallyn 1999, Frohlich 2004 Chapter 4). Four families were given audiocamera units on their summer holidays, on which to record arbitrary combinations of still images and sound clips. The resulting audiophoto albums were shown back to the families for comment and discussion. The main finding was that ambient sound, rather than voice commentary, was the most attractive form in the corpus, and that this enhanced the memory of the event compared to a photograph alone. It also added atmosphere and interest for audiences of the photograph, and enlivened reminiscing conversations about the event. When asked about the contrast between audiophotos and video, 11 out of 14 participants chose an audiocamera in preference to a camcorder. This was because they used the trial unit more like a point-and-shoot camera than like a conventional camcorder, paying careful attention to the framing of sound clips with images and keeping the duration of clips down to about 24 seconds.
This allowed users to participate in the event to a greater extent than they might have done with video, and resulted in more professional effects that were not seen to need editing. In short, combining sound with photographs appeared to carry many of the benefits of video without the usual costs. These insights eventually led to a number of novel design proposals and prototypes for possible audiophoto products. We present two examples here, representing an audiocamera and a storage format for audiophotographs. The audiocamera is shown first in Figure 2. It was an HP Labs prototype built in 2000, designed primarily for testing CMOS sensors and never intended for commercialisation. However, it was also used to explore a new model for audiophoto capture, using a forward-facing microphone and a dedicated button for sound capture. The idea of using two buttons for concurrent sound and image capture led to the camera's code name, Blink. The sound capture button was positioned on the top edge of the unit opposite the image 'shutter' button, to fall under the user's left index finger. It could be operated like a dictaphone to capture sentimental ambient sounds alone, or held down across one or more shutter button depressions to generate simple audiophotos or audiophoto slide shows. As the camera itself had no LCD screen or speaker to review the results, it was designed to dock to a PC and automatically upload a mixed photo, audio and audiophoto album to a website. Although this digital audiocamera unit was never developed into an HP product, several of its interface design features were patented and/or later found their way into camera products. For example, all HP cameras currently support ambient audiophoto capture through a combined sound-and-shutter button. This can be held down after the photo has been taken to append sound to the jpeg file, until released.

Figure 2.
The Blink digital audiocamera

While the initial audiophoto trial suggested that ambient sounds were the most attractive kinds to combine with photographs on a camera, further studies suggested that voiceover, music and recorded conversation might be added to good effect later. Hence, voiceover messages were found to contextualise photographs for remote recipients, music was found to add emotion to images, and conversation about the photograph was found to contain useful personal reminders of the event (Frohlich 2004). This eventually led to the idea of a multilayered audiophoto containing four different types of sound that could be muted or merged at playback time, depending on the requirements of the playback context. This concept is illustrated in Figure 3, which shows a complex audiophoto displayed in a frame. On a screen-based device, the user would be able to select an edge to toggle on or off each of the four different sound types. Ironically, this goes far beyond what might be recorded or represented in a simple video clip, and requires a new kind of complex data format for storage. Patent applications and specifications for this format have been fed into a new DVD standard called multiphoto video, or 'mpv' for short (http://www.osta.org/mpv/public/index.htm).

Figure 3. The multilayered audiophoto, containing ambient sound, voiceover, music and conversation (adapted from Figure 9.4 in Frohlich 2004)

4. SCREEN-BASED PRESENTATIONS AND THE ROLE OF PAPER

Screen-based display and playback of photographs and video is indeed popular today, not least on the back of digital cameras and cameraphones. Indeed, it could be argued that immediate review and sharing in the moment are two of the biggest advantages of digital photography over analogue photography, together with remote transmission of image-based material.
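Returning briefly to the multilayered audiophoto above: the idea of four sound layers, each independently mutable at playback, can be sketched as a simple data structure. This is an illustrative sketch only, not the mpv format itself, and all names in it are invented.

```python
from dataclasses import dataclass, field

# The four sound types of the multilayered audiophoto, in playback order.
LAYERS = ("ambient", "voiceover", "music", "conversation")

@dataclass
class AudioPhoto:
    """A still image with up to four optional sound layers, any of which
    a viewer can toggle on or off at playback time."""
    image_file: str
    sounds: dict = field(default_factory=dict)   # layer name -> audio file
    muted: set = field(default_factory=set)

    def attach(self, layer, audio_file):
        if layer not in LAYERS:
            raise ValueError(f"unknown layer: {layer}")
        self.sounds[layer] = audio_file

    def toggle(self, layer):
        # Mimics selecting one edge of the on-screen frame.
        self.muted.symmetric_difference_update({layer})

    def playback_layers(self):
        """Layers that would be mixed together on playback."""
        return [l for l in LAYERS if l in self.sounds and l not in self.muted]

photo = AudioPhoto("beach.jpg")
photo.attach("ambient", "waves.wav")
photo.attach("voiceover", "greeting.wav")
photo.toggle("voiceover")
print(photo.playback_layers())  # ['ambient']
```

The design point is that muting is a playback-time decision held outside the recorded layers, so the same stored audiophoto can be replayed differently in different sharing contexts.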
Such transmission itself results in further screen-based review, either on the back of a receiving cameraphone, or on a computer screen with email or web access. However, these behaviours do not lead to the conclusion that printed photographs are dead, or that further screen-based viewers will automatically be preferred to paper. In fact consumers appear to be printing the same absolute number of photographs as they always did. It is just that they are taking and reviewing on-screen a greater volume of images overall (see again Worthington 2004). So printing is getting more selective. Furthermore, consumers are choosing to ignore a number of facilities and products for further screen-based review. These include electronic photo frames, electronic books and photo-based media viewers. These products have yet to be adopted by the mass market. Even the playback of photographs on TV has been slow to take off, given the possibility of direct connection from most digital cameras, and the trend towards DVD storage of photographs. One reason for the persistence of prints is the inaccessibility and vulnerability of digital photos across a variety of devices and storage media. This is a real problem long term, since those devices and formats may become obsolete over time. Even in the medium term, photographs can be difficult to locate and manipulate within an electronic repository (e.g. Kuchinsky, Pering, Creech, Freeze, Serra & Gwizdka 1999, Rodden & Wood 2003). Physical photographs in packs or albums have the advantage of being associated with particular locations in the home. They are also bounded and indexed by their packaging and design, and can be browsed quickly by direct manipulation. Electronic photographs can be given some of these properties, but only by clever software and interface design (e.g. Bederson 2001, Vroegindeweij 2002). Another reason for the persistence of prints is the way in which they can be shared in conversation.
As with printed work documents, photographs can be spread out and compared, annotated, shuffled and handed around (O’Hara & Sellen 1997). These properties are likely to be important for conventional photo sharing conversation, which is highly interactive and responsive to audience participation. Hence, in a study of printed photo-talk, 11 families self-recorded 81 naturally-occurring photo sharing conversations on audiotape, and filled in photo diary entries for each session (Frohlich et al 2002, Frohlich 2004 Chapter 7). The majority of these conversations were between partners who shared the memory of the event depicted in the photographs, and were so interactive that it was not appropriate to describe one or other partner as leading the conversation. Such reminiscing conversations involved a kind of shared participation in which each partner chipped in comments in an ongoing discussion of the material, often in overlap with each other. Other conversations involved storytelling to people who did not share the memory depicted in the images. These were more directed by photograph owners, but still involved interruptions, questions and contributions from the audience, who appeared to direct the conversation to photographs of mutual interest. Both kinds of conversation were quite unlike the linear slide-show model of photo-talk encouraged by digital photo-album software. This was because the tangibility of the photographs allowed the image presentation and conversation to get out of synch with one another. This created opportunities for the participants to look back or ahead of the narrative, and to seize control of the images or conversation. Thus, despite the advantages offered by screen-based photographs and multimedia variants, consumers are reluctant to give up the beauty and simplicity of tangible prints. These provide a reliable method of accessing and managing images, and lead to highly interactive modes of sharing them.
Building on these insights, and those already mentioned on the value of sound with photographs, we have begun to explore ways of augmenting printed photographs with sound that can be played back from paper audioprints. Figures 4 and 5 show two alternative approaches that are offered here as further examples of less-is-more thinking in this domain. Figure 4 shows a printed photograph with an embedded chip in the paper, capable of encoding up to 30 seconds of high quality sound. A handheld audioprint player is then used to contact the chip and play back the sound. The sound is recorded into the chip either when the photograph is printed, or when it is inserted into the player. In the first case, an existing audiophotograph (recorded on an audiocamera or PC) can simply be ‘printed’ on an audio-enabled printer loaded with special paper. In the second case, a silent photograph can be printed in the same way, and later annotated with sound using the handheld player in record mode. A working version of this prototype was built and tested in HP Labs Bristol in 1999, and has now been successfully patented (e.g. Frohlich, Adams & Tallyn 2000). Although this concept involves a dedicated handheld photograph player, the same functionality could technically be built into a range of existing devices such as cameraphones, MP3 players and PDAs. Alternatively a whole family of specialised audiophoto players could be created to include audio-enabled albums, frames and cards. When some of these options were demonstrated to families in subsequent studies, alongside a variety of screen-based viewers, families rated all options positively (Frohlich 2004 Chapter 8). This confirmed the value of paper-based playback as expected, but also showed that consumers wanted to move back and forth between different forms of paper and screen displays for particular purposes and contexts.
Figure 4. The audioprint player (Figure 3 in Frohlich, Adams & Tallyn 2000 reprinted with permission).
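For the technically minded, the two recording routes just described (sound embedded at print time, or added later with the player in record mode) might be sketched as follows. This is a minimal illustration only: the class and method names are our own inventions, and the real chip stored audio samples rather than the labelled clips used here; only the 30-second capacity comes from the description above.

```python
# Hypothetical sketch of the audioprint chip and player workflow.
# Names are our own; the 30-second limit comes from the text.

MAX_SECONDS = 30  # the embedded chip encodes up to 30 seconds of sound


class AudioPrint:
    """A printed photograph with an embedded sound chip."""

    def __init__(self, image_file, sound_clip=None):
        self.image_file = image_file
        self.sound_clip = None
        if sound_clip is not None:   # route 1: sound embedded when printed
            self.record(sound_clip)

    def record(self, clip):
        """Store a (name, seconds) clip on the chip, e.g. via the
        handheld player in record mode (route 2: annotate later)."""
        name, seconds = clip
        if seconds > MAX_SECONDS:
            raise ValueError("clip exceeds the chip's 30-second capacity")
        self.sound_clip = clip

    def play(self):
        """What the handheld player would play on contacting the chip."""
        return self.sound_clip
```

Printing an existing audiophotograph embeds its sound immediately; a silent print starts with an empty chip and can be annotated later in record mode.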
Figure 5 shows a digital desk with an overhead camera and speakers instead of a projector (after Wellner 1993). The camera is used to recognise printed photographs placed on the desk surface and play their associated sounds from a PC under the desk (Frohlich, Clancy, Robinson & Costanza 2003). Audiophotos are loaded into the PC on a CD-ROM containing jpeg and wav files with the same filename, and the system works from nothing more than regular jpeg prints displayed within a white border. Multiple prints will play at the same time and the system keeps track of the position of prints on the desk. When any print is pushed away from the user towards the top of the desk it plays more quietly, while moving it left or right across the desk shifts the balance of playback from the left to the right speaker. Users can therefore play back an audioprint automatically with no other interface than the print itself, and physically mix a complex soundscape from a pack of photographs simply by changing the print arrangement on the desk. In contrast to the audioprint player, the desk allowed users to assemble a combination of sound and image clips which could be mixed together at playback time. This gave the output some of the qualities of the multilayered audiophoto in Figure 3, especially when different types of sounds were associated with photographs from the same event. These qualities were exploited in a subsequent trial of the desk, in which users tried to design audiophoto collage material for interactive performance (e.g. Lindley & Monk 2005). Such effects would be difficult or impossible to achieve on screen. Both approaches seek to preserve the highly interactive nature of photo-talk.
Figure 5. The audiophoto desk (Figure 1 in Frohlich, Clancy, Robinson & Costanza 2003 reprinted with permission).
5. PHOTO COVERAGE AND THE ROLE OF MEMORABILIA
One of the things that is changing with digital photography is a shift in the coverage of photographs over a lifetime.
Whereas analogue photography used to be reserved for special occasions such as weddings, parties and holidays (e.g. Chalfen 1987), digital photography is beginning to catalogue other areas of life such as working events, shopping trips and domestic routines. This trend is exaggerated by the growth of photo sharing behaviours, which means that individuals are now receiving many more photographs of themselves or their friends that they never took. The industry continues to encourage more casual and automatic forms of image capture through situated and wearable cameras, without really addressing a couple of key questions. Do people want all these images, and what will they do with them when they have got them? It is possible to be sceptical about an answer to the first question by considering the second. We know something of what consumers do with their photographs now, both privately and in communication with others. Families currently struggle to organise their analogue and digital photographs and tend to archive the majority with very little intervention. A basic sort appears to be done on new images to distinguish good from bad, or best from the rest, so that further organisation and sharing activities are carried out from the good set (Rodden & Wood 2003, Frohlich et al 2002). Good photographs may then be assembled into temporary or thematic albums, usually in chronological order, but families quickly fall behind with this activity. Beyond that, individual photographs or small sets of images tend to be singled out for special attention. Favourite photographs will be framed for display on their own or in a collage. Images will be emailed to others, alone or in small coherent groups. Even when whole sets of photographs or albums are shared in conversation, storytelling tends to spring off individual images rather than developing over a narrative sequence of images.
This can often be about things that are ‘off-frame’ and not depicted in the images themselves - as in the story of a racoon-infested camp site which was triggered by a picture of another camp (Figure 7.12 in Frohlich 2004). In the cameraphone context more unusual images are taken, but often with the intention of showing or sending them once to particular people (Kindberg, Spasojevic, Fleck & Sellen 2004). So the use of photographs today involves a series of reductions on the total collection, often down to individual images which epitomise an entire trip, a relationship or a story worth telling. As an aid to memory, narrative and identity management, the camera might be likened to a notepad and pen which are used to take notes useful for each activity. The deliberate and selective taking of pictures as notes is likely to be important for subsequent recall of the depicted events, and for their use in storytelling and display. And the review of captured pictures for these purposes will be easier if the picture-notes are sparse but succinct. This analysis leads to the view that consumers do not need more images which are captured and organised for them, because this would be like being handed reams of unprocessed notes which somebody else had written. Instead they need almost the opposite: fewer images of more personal significance that can be accessed easily in a range of situations. To explore this notion further we began to examine the role of framed photographs and other memorabilia to be found on display or stored away in the home. Working initially with the blind, we tried to understand the role of objects in remembering and sharing the past (Fennell & Frohlich 2004). Surprisingly, we found that framed photographs were still important to people who had lost their sight gradually with age, trauma or disease.
This was illustrated most dramatically by one participant who told us she would sometimes touch a framed photograph on her wall, in order to remember what it looked like. This shows that the frame itself had become a reminder of a visual memory of the image it contained. In the same way, a great variety of other objects and things had come to stand for images, scenes, people and events in the ‘mind’s eye’. These included ornaments, jewellery, clothes, furniture and framed photographs, but also music, food, perfume and other people. In some cases, these items had developed strong associations with the past ‘accidentally’, through extended use. In other cases, memorabilia items were purchased or received with a specific association. This was the case with gifts and inherited objects which were linked to their original owners, and souvenirs bought on holiday which were linked to their place of purchase. The placement, storage and display of memorabilia turned out to be an important factor in how they were used. Displayed items were selected and placed for easy personal access and with visitors in mind. Stored items were more private, and seemed to build up without explicit selection in some of the most inaccessible parts of the house such as the attic. Ironically, we found that people habituated to the nostalgic value of displayed items so that they seldom thought about their associations until prompted to consider them again by a visitor or some re-organisation of the house. In contrast, the surprise discovery of a forgotten item in storage resulted in a powerful form of reminiscing. Sometimes a single item would result in a whole chain of memories flooding back, with an accompanying burst of explanations and stories to the interviewer. Storytelling itself appeared to be integral to the remembering process. Memorabilia only appeared to come to life in the minds of the participants when they began to tell us about them in the interviews.
They appeared to relive the memories through this telling. Conversely the memories and stories lay hidden in the object when it was not discussed. In subsequent work, sighted participants were asked to write down the story of sentimental objects on a postcard (Fennell 2005). Although brief, these were often deeper and better formed than stories told about photographs. For example, they contained a small number of recurring themes including rescues, achievements, first occasions and gifts, and sometimes finished with a moral lesson. This concurs with recent attempts by BBC Wales to improve digital storytelling by asking participants to bring in memorabilia items rather than old photographs and video clips (Morlais 2005, personal communication). Figures 6 and 7 show two novel ways of supporting this kind of storytelling with objects. These serve as examples in a new design space of augmented memorabilia, which has been almost completely overlooked by the digital photography industry. Further examples are to be found in Fennell & Frohlich (2004), Fennell (2005) and related publications (e.g. Aarts & Marzano 2003, Frohlich & Murphy 2000, Gaver & Martin 2000, Hoven & Eggen 2003, Stevens, Abowd, Truong & Vollmer 2003). Figure 6 shows a memory shelf based on the idea of an augmented mantelpiece. Sentimental items on the mantelpiece can be moved to a weighing platform at one end to have their associated stories recorded or played back. Stories are recorded verbally at the shelf by pressing the record button below the platform, and pressing again to stop the recording. The recording is stored digitally in the shelf itself and indexed with the exact weight of the object. Existing stories are played back by replacing the object on the weighing platform, which returns the weight index. The system relies on all objects having a unique weight when measured accurately, but can be fooled by reproducing weights with a continuous pressure.
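The shelf's weight-indexed store might be sketched in the same spirit. Again this is an illustrative sketch only: the class name, tolerance value and text ‘stories’ are our own stand-ins (the real shelf stored audio recordings); only the weight-as-index idea and its ‘fooling’ weakness come from the description above.

```python
# Hypothetical sketch of the memory shelf's weight-indexed story store.


class MemoryShelf:
    def __init__(self, tolerance_g=0.5):
        self.tolerance_g = tolerance_g   # assumed calibration margin, in grams
        self.stories = []                # list of (weight_g, story) pairs

    def record(self, weight_g, story):
        """Index a recording by the exact weight of the object on the platform."""
        self.stories.append((weight_g, story))

    def play(self, weight_g):
        """Return the story whose indexed weight is nearest, within tolerance.

        Because lookup is by weight alone, any object (or steady hand
        pressure) reproducing a stored weight will trigger its story --
        the 'fooling' noted in the text.
        """
        candidates = [(abs(w - weight_g), s) for w, s in self.stories
                      if abs(w - weight_g) <= self.tolerance_g]
        return min(candidates)[1] if candidates else None
```

Widening the tolerance makes the shelf more forgiving of scale noise but more prone to confusing objects of similar weight, which is exactly the calibration trade-off the text goes on to discuss.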
Depending on how the shelf is calibrated, a story can be triggered from an object weight within some margin of error. This could become a playful feature of the system to allow users to scroll through a series of stories by applying increasing pressure to the platform. Alternatively, users could consider the relationship between stories for objects of a similar weight, or pile up multiple objects to elicit new stories, or record stories for objects which change weight. For example, flatmates could leave messages and replies for each other on a bar of chocolate, biting it after listening so that its changed weight indexes the reply.
Figure 6. The memory shelf (pp. 13 and 14 in Fennell & Frohlich 2004 reprinted with permission)
Figure 7 shows an anniversary plinth which ‘curates’ any object placed upon it. Objects are identified by unique RFID tags, and come with a factual history that is written on the tag. This can include the date and place of manufacture and sale, service history, and owner information. In this way, the object can have its own memory of significant events over its lifetime. Unlike the memory shelf, the information associated with an object is not dispensed immediately, but is automatically generated on dates memorable to the object. For example the date the object was purchased or inherited could be printed on tickertape on its own anniversary. This would have the effect of advertising the memory of this event to its owner, and so overcoming the memory habituation which sets in with displayed objects. Not knowing when the plinth is going to spring into action might simulate the kind of serendipitous remembering that comes with a surprise rediscovery of a forgotten object.
Figure 7. The anniversary plinth (Figure 18 in Fennell 2005 reprinted with permission)
6. LESSONS FOR SIMPLE COMPUTING
The general purpose nature of computing technology means that it is always possible to add new functionality at minimal cost.
In a competitive marketplace, this leads to featurism within digital products that try to out-do each other in technical rather than user value. The lesson of this chapter has been that user value is a different thing altogether, which relates to the activity being supported rather than the technology supporting it. By looking carefully at this activity and the value that could be added to it by technology, we can oppose featurism and complexity by prioritising and developing only functionality that really matters to people. In the digital photography domain we have suggested that the core activity involves a kind of domestic iconography involving the use of tokens to manage memory, narrative and identity. This immediately highlighted the importance of tangible prints in photography and connected it with the use of memorabilia objects in the home. It also called into question a number of more-is-more assumptions underlying current trends in the industry. These included the promotion of more realism in the photographic record, more ephemeral sharing of images on screens, and more coverage of life events through images. In each case, data from user studies helped us to test these assumptions and develop a more informed view of what users require. This usually led to an alternative approach which extended current practice in some new direction rather than replacing it with a new one. This was the case for instance in augmenting printed photographs with sounds (Figure 4) or capturing stories on displayed objects (Figure 6). These interventions allow consumers to continue using these familiar tokens as reminders, props and statements, but in new and more interesting ways. We see this approach as improving the ‘fit’ between technology, people and the cultural context in which they live, rather than offering up a new technology extension and hoping for the best. This property is the basis of what Landay, Bell & Saponas (2005, this volume) call digital simplicity.
In the end, the technology used to accomplish digital simplicity might well be more complex than that which leads to digital complexity for the user. We saw this in the discussion of multilayered audiophotos (Figure 3) which quickly become more technically complicated to represent than a linear video clip. This complexity might even extend to the user interface and user interactions with the product, as in the use of a second button for sound on an audiocamera (Figure 2). In some respects, this complicates the use of a traditional camera, but in a way which fits with and extends the practice of traditional photography. One could argue that it does this more simply than a camcorder, which although easier to operate, forces users to change practice and move to a secondary (screen-based) technology to play back the results. All this suggests that ‘less’ and ‘more’ are relative terms which can be viewed from two perspectives. From the users’ point of view, they can often get more out of less technology and less out of more technology, but sometimes it is possible to get more out of more technology. From the technology point of view, less and more are meaningless terms and better replaced with a notion of simplicity of use; meaning fitness for purpose and context.
REFERENCES
Aarts E. & Marzano S. (Eds.) (2003) The new everyday: Views on ambient intelligence. Rotterdam: 010 Publishers.
Bederson B.B. (2001) PhotoMesa: A zoomable image browser using quantum treemaps and bubblemaps. UIST 2001, ACM Symposium on User Interface Software and Technology, CHI Letters, 3(2), pp. 71-80.
Berger J. (1982) Stories. In J. Berger and J. Mohr, Another way of telling. London: Butler & Tanner.
Budd J. (1996) A survey of camcorder owners. HP Labs Bristol Working Paper.
Budd J. & Frohlich D.M. (1996) New use of images focus groups. HP Labs Technical Report No.
Chalfen R. (1987) Snapshot versions of life. Bowling Green, Ohio: Bowling Green State University.
Fennell J. (2005) Biographical objects: The role of objects in reminiscing. Interaction Design Studio Working Paper, Royal College of Art.
Fennell J. & Frohlich D.M. (2005) Beyond photographs: A design exploration of multisensory memorabilia for the visually impaired. Hewlett Packard Laboratories Technical Report.
Frohlich D.M. (2004) Audiophotography: Bringing photos to life with sounds. Kluwer Academic Publishers.
Frohlich D.M. & Murphy R. (2000) The memory box. Personal Technologies 4: 238-240.
Frohlich D.M. & Tallyn E. (1999) Audiophotography: Practice and prospects. Proceedings of CHI 99: 296-297. New York: ACM Press.
Frohlich D.M., Adams G. & Tallyn E. (2000) Augmenting photographs with audio. Personal Technologies 4: 205-208.
Frohlich D.M., Clancy T., Robinson J. & Costanza E. (2003) The audiophoto desk. Proceedings of 2AD, Second international conference on Appliance Design, 11-13th May 2004, Bristol.
Frohlich D.M., Kuchinsky A., Pering C., Don A. & Ariss S. (2002) Requirements for photoware. Proceedings of CSCW '02. New York: ACM Press.
Gaver B. & Martin H. (2000) Alternatives: Exploring information appliances through conceptual design proposals. Proceedings of CHI ’00: 209-216. New York: ACM Press.
Geelhoed E. (1996) A survey of camera owners. HP Labs Bristol Working Paper.
Gemmell J., Williams L., Wood K., Lueder R. & Bell G. (2004) Passive capture and ensuing issues for a personal lifetime store. Proceedings of CARPE 2004.
Hoven E. van den & Eggen B. (2003) Digital photo browsing with souvenirs. Proceedings of Interact 2003 (videopaper): 1000-1004.
Kindberg T., Spasojevic M., Fleck R. & Sellen A. (2004) The ubiquitous camera: An in-depth study of cameraphone use. Proceedings of IEEEPC ’04.
Koskinen I., Kurvinen E. & Lehtonen T. (2002) Mobile image. Finland: Edita Publishing Inc.
Kuchinsky A., Pering C., Creech M.L., Freeze D., Serra B. & Gwizdka J. (1999) FotoFile: A consumer multimedia organization and retrieval system. Proceedings of CHI 99: 496-503. New York: ACM Press.
Landay J.A., Bell G. & Saponas T.S. (2005) Digital simplicity: Usable personal ubicomp. Proceedings of the ‘Less is More’ conference, Microsoft Research, 27-28th April 2005, Cambridge UK.
Lindley S.E. & Monk A.F. (2005) Augmenting photographs with sound for collocated sharing: An exploratory study of the Audiophoto Desk. In Sloane A. (Ed.) Home Oriented Informatics and Telematics. Proceedings of the IFIP WG 9.3 HOIT 2005 Conference. Springer.
McLuhan M. (1964) Understanding media: The extensions of man. Cambridge, Massachusetts: MIT Press.
O’Hara K. & Sellen A. (1997) A comparison of reading paper and on-line documents. Proceedings of CHI ’97: 335-342. New York: ACM Press.
Rodden K. & Wood K.R. (2003) How do people manage their digital photographs? Proceedings of CHI 2003: 409-416. New York: ACM Press.
Stevens M.M., Abowd G.D., Truong K.N. & Vollmer (2003) Getting into the living memory box: Family archives & holistic design.
Strong R. & Gaver B. (1996) Feather, scent & shaker. Proceedings of CSCW ’96. New York: ACM Press.
Vroegindeweij S. (2002) My pictures: Informal image collections. HP Labs Technical Report No. HPL-2002-72R1.
Wellner P. (1993) Interacting with paper on the digital desk. Communications of the ACM 36(7), 86-96.
Worthington P. (2004) Kiosks and print services for consumer digital photography. Future Image Market Analysis.
work_izdgxwun2va7lpu7cg3dqgmj3y ----
AIS (Androgen Insensitivity Syndrome) Support Group
Fertility Issues
June 23, 2019 / March 30, 2021 By admin
Introduction
This page carries excerpts from our journal/newsletter (ALIAS) on advances in reproductive technology that might someday provide an AIS woman with some reproductive options.
Please note that although some of us have a scientific/medical background, and we have tried to give informed predictions/comments, we are not experts in this area and many of our comments are speculative.
Brave New World (from ALIAS No. 3, Winter 1995)
At the end of a TV discussion by an expert panel on the issues of surrogacy, IVF, abortion, use of foetal eggs, genetic engineering etc., etc. [Ref. 1] the presenter expressed surprise that no-one had brought up the possibility of men becoming pregnant. Prof. Robert Winston of Hammersmith Hospital, a well-known worker in the area of assisted conception, answered: This is relevant in two main cases. Firstly there are some transsexual men who might want to bear children. Amazingly, the human embryo has a propensity to implant in almost any tissue so there is no real reason why men should not carry babies, albeit with trans-abdominal rather than vaginal delivery. Secondly, there are some women who happen to have a Y chromosome and who have no ovaries or uterus and are therefore infertile. Again, there is no reason why we couldn’t make these women pregnant [Ref. 2]. The major problem lies in implanting the embryo in tissue that wouldn’t cause a massive haemorrhage. A recent TV programme examined the emerging science of tissue engineering [Ref. 3]. It showed living human cartilage cells growing, in a laboratory dish, into the form of a human ear, using the physical support of an ear-shaped template made of a fine filament matrix and which later dissolves. The technique is being pioneered by clinicians and engineers at a Boston hospital and MIT, and implantation of the engineered ear is currently being tested in animals. There was talk of eventually extending the technique to internal body parts. Perhaps this holds out some distant hope for an improvement in plastic surgery in AIS?
Ref. 1: ‘Brave New World?’ (After Dark Special) 30 May 1994, Channel 4.
Ref. 2: Presumably with donor egg fertilised by the partner’s sperm.
Ref. 3: Test Tube Bodies, BBC1, 24 October 1995.
A Fertile Future? (from ALIAS No. 4, Spring 1996)
A recent TV/press report (Jan ‘96) described the successful in vitro fertilization of a woman’s egg by sperm material extracted directly from her partner’s testes via a biopsy. [Ref. 1] We understand this to be the technique of ICSI (Intra-Cytoplasmic Sperm Injection). We are not sure whether the testes in AIS contain any viable sperm cells [Ref. 2] but if so it would seem possible, in theory at least, that these might be extracted and used to fertilize an egg donated by, say, the normal XX sister of an AIS woman’s male partner? We are awaiting further clinical input on this possibility so don’t get too excited. If it is feasible then presumably only Y-bearing sperm cells would be used, to avoid passing on the faulty X gene in AIS, so only normal male babies would result (no females). Paradoxically this would mean the ‘mother’ providing the sperm and the ‘father’, as it were, providing the egg (via his close female relative). Dr Richard Stanhope [Ref. 3] suggests that, in view of this, the main problem would be related to ethical matters. Gonadectomy is usually recommended for AIS patients at some point after puberty in order to avoid a risk of malignant changes. In many cases this is done in infancy or childhood (without informed consent of the patient) but the risk of malignant changes before puberty is extremely low and the risk as a whole seems to be discounted in some quarters (See “Gonadectomy” in ALIAS No. 2, Summer 1995.). If there was even a remote chance that at some time in the next 20-30 years it might be possible for an AIS woman to become the natural parent of a child, then it might be wise to delay gonadectomy in today’s AIS infants? See also “Early Gonadectomy” and “Brave New World” in ALIAS No. 3, Winter 1995.
Ref. 1: Prof. Gedis Grudzinskas, Prof. of Obstetrics & Gynaecology, Royal London Hospital Medical College, London E1 2ED.
Reported in Sunday Times, 28 January ‘96. Also a clinician in Nottingham doing same/similar work featured in TV news.
Ref. 2: It has been reported that 28% of a group of 43 AIS cases had rare spermatogonia in their testes. Rutgers J.L. and Scully R.E.: The androgen insensitivity syndrome (testicular feminization): A clinicopathologic study of 43 cases. Int. J. Gynecol. Pathol. 10: 126-144, 1991.
Ref. 3: Consultant Paediatric Endocrinologist and Senior Lecturer, Institute of Child Health/Gt. Ormond St. Children’s Hospital, London WC1N 3JH.
Fertility Advances (from ALIAS No. 7, Spring 1997)
Following on from our coverage in an earlier issue [Ref. 1] of Intra-Cytoplasmic Sperm Injection (ICSI) and its possible relevance to AIS, we report that Dr. Simon Fishel [Ref. 2] has made a further advance in the treatment of male infertility. This involves extracting spermatid – a form of pre-sperm – from the testes and injecting it into a human egg in a laboratory dish before re-implantation in the woman’s uterus [Ref. 3]. Dr. Fishel is awaiting approval from the Human Fertilisation and Embryology Authority to start trials involving more than 100 couples. Scientists at Nurture hope that it will prove as successful as ICSI which in their hands has a 27% success rate. It was reported that within the ensuing 18 months Dr. Fishel and his team hoped to achieve human pregnancies using an even earlier form of pre-sperm, called the spermatocyte. We asked Dr. Fishel whether an even earlier form of sperm cell than this, the spermatogonium, (which has been reported to be occasionally present in the testes of some AIS individuals [Ref. 4]) might be used, thus holding out some slight future possibility of someone with AIS ‘fathering’ a child via surgical extraction of material from the primitive spermatic cells in their testes? He responded (March 1996) as follows: We are unsure how early we can go with regard to sperm precursors.
There is some work in animals which indicates that we can go back to the spermatocyte. However, to go right back to the original precursor, the spermatogonium, would not be possible at this stage. Currently my team is working on spermatogenesis in vitro, but I can see no immediate breakthroughs in future for the stage as early as spermatogonia. It is something we will be working very hard to attain in the coming decade and, considering the pace of current development, I’d have to say anything is possible! Certainly the next 20-30 years [Ref. 5] that you suggest in your letter is an awful long time in scientific development [we believe he means ‘the pace is fast enough to permit many advances in that time’]. Another area we are working on is to offer cryopreservation (freezing) [of testicular tissue/cells] for future use. We are trying to raise research funding for this project so that we could compare the effects of freezing pre-pubertal tissue compared to [that of] adults. Should we obtain funding, however, I believe we could make great strides in this area. Later in the year, a newspaper [Ref. 6] reported that Dr. Fishel had in fact engineered the birth of a baby following the freezing of testicular tissue from a man whose sperm did not mature. Long-term freezing has potential for use in boys with cancer in whom chemotherapy might cause infertility. Parents of AIS infants/children may want to ask about this possibility when discussing early gonadectomy with their specialist? Dr. Fishel is apparently willing to answer questions via a European Infertility Network (EIN) web site (see Links to Other Sites). Our US representative told us last year of a news report on the possibility of men [Ref. 7] becoming ‘pregnant’ via an embryo implanted in the abdomen. An intra-abdominal pregnancy (i.e. outside the uterus) in a Canadian woman was successful, leading doctors to conclude that they could do this with men. Dr.
Edmond Confino in Chicago maintains that the only thing stopping them from doing this with men is ‘societal reaction.’ Ref. 1: See “A Fertile Future” in ALIAS No. 4, Spring 1996. Ref. 2: Scientific director of Nurture (Tel: 01159 709490), Nottingham University’s non profit-making research and treatment unit in reproduction. Ref. 3: Report in The Times 12 (or 13?) February 1996. Ref. 4: Dr. Joanne Rutgers (Dept. of Pathology, Harbor-UCLA Medical Center, University of California, Los Angeles), co-author of the study of the 43 cases of AIS which reported this, told us recently that they “had no case in which there was complete spermatogenesis” and that “only a minority of cases had any germ cells whatsoever, and these few completely lacked development.” They concluded that “unfortunately the method you describe [ICSI] would not be applicable to patients with AIS.” Ref. 5: i.e. the point at which today’s AIS infants might want a family of their own and wish their testes had not been removed, without their consent, in childhood. Ref. 6: The Sunday Times, 10 Nov 1996. Ref. 7: And presumably women with AIS?

Fertility Update (from ALIAS No. 8, Summer 1997)

In previous issues [Ref. 1] we speculated on the possibility of AIS individuals being able to ‘father’ children via the extraction of immature spermatic cell material from their intact (or frozen) testicular tissue. An article [Ref. 2] described the freezing of ovarian tissue, in an attempt to secure future fertility, by Paul Serhal, a gynaecologist who heads the assisted conception unit at University College Hospital, London. Another consultant, Mr. Lower at St. Bartholomew’s Hospital, also featured. Peter Brinsden, medical director at Bourn Hall, Cambridge, a private fertility clinic [Ref. 3], said that they had stored tissue from patients with lymphatic cancers as young as six years old. He said “We hope that by the time they want children we will have the technology to help them”. Dr.
Charmian Quigley, in her review of AIS [Ref. 4], makes the following points: Testicular development occurs normally in the AIS fetus, and immature spermatogonia (germ cells) are present in the testes at birth and during childhood. However, histological studies [Ref. 5] of testes of older AIS individuals reveal the presence of only occasional spermatogonia in testes removed during the peri-pubertal years, and no germ cells are present in the testes of affected adults, suggesting a progressive decline of germ cell numbers with increasing age. Spermatocytes and more mature germ cells are absent at all ages. Although we understand that the use of cells as immature as spermatogonia for in vitro fertilization is not on the immediate horizon, early gonadectomy in AIS with cryopreservation (freezing) of testicular tissue might increase the chances of success, should this become possible in say 15-20 years time? Ref. 1: See “A Fertile Future?” in ALIAS No. 4, Spring 1996 and “Fertility Advances” in ALIAS No. 7, Spring 1997. Ref. 2: ‘Putting a Future on Ice’ by Lulu Appleton, Daily Telegraph, 18 February 1997. Ref. 3: http://www.bourn-hall-clinic.co.uk/ Ref. 4: Quigley et al: Androgen Receptor Defects: Historical, Clinical and Molecular Perspectives. Endocrine Reviews, Vol. 16, No. 3, pp. 271-321 (1995). Ref. 5: Microscopic examination of tissue samples.

Men Becoming Pregnant (from ALIAS No. 14, Spring 1999)

The UK Sunday Times (14 March ‘99) published another of those articles speculating about the possibility of men carrying a foetus. The technique would involve attaching the foetus to the muscles inside the abdomen or even fashioning an artificial womb from abdominal tissue. Female hormone treatment would be vital for encouraging the placenta to attach. The child would be born by caesarean section. The medical experts quoted – Lord Robert Winston [Ref. 1], UK IVF expert, and Dr. Simon Fishel [Ref.
2] (who has worked on the ICSI method) – were sceptical of its likely success (because of the possibility of massive internal bleeding and abnormal foetal development), although both have been approached by heterosexual couples seeking a male pregnancy. The article says that the problem of the female hormones causing dramatic changes in physique in men would mean that transsexuals would probably be among the first to undergo the procedure, because such changes would have already been induced by the drugs used to help them change sex. A male-to-female transsexual is quoted as saying she’d be willing to try it. Why is it that transsexuals, and others with a normal reproductive system for their genetic sex (in this case, men), are always the first to be considered and given press coverage regarding ‘advances’ such as these? Why not increase awareness of the need for some basic reproductive choices in population groups like XY women with AIS, who were born without either ovaries or uterus? Please write to the authors of such articles, and to the people quoted therein, to ask that our case be put forward before that of people who already have reproductive options. Ref. 1: Hammersmith Hospital, London. Ref. 2: Centre for Assisted Reproduction, Nottingham. See “Fertility Advances” in ALIAS No. 7, Spring 1997.

One Woman’s Meat…. (from ALIAS No. 15, Summer 1999)

A new parent subscriber (Mum of CAIS 12 year-old) wrote: Thank you for sending the documentation on your group. Enclosed you will find our [subscription] forms and cheque. We read with interest your Internet information. I passed along the website no. [address] to our pediatric endocrinologist and he informed me he has since visited it. We were all pleased with the open, lucid presentation. One small suggestion. Both my daughter and I found rather freakish the factsheet’s suggestion of primitive sperm extraction from AIS testes to fertilize a donor egg.
It was the only questionable notion in an otherwise serious, professional document. We look forward to receiving the AISSG information, and wish you very continued success. Ed’s Note: Our speculation on this possible future fertility option for AIS women (see “Fertility Update” in ALIAS No. 8, Summer 1997) was a serious one. Since they have no ovaries, the only germ cells that could produce a child that is genetically related to an AIS woman would have to come from her testes. The recently-developed methodology (ICSI), for helping men with immature sperm cells, cannot as yet cope with the even more primitive cells in AIS testes, but with the current pace of genetic advances it might well be available within the next 20 years. At that time, many of today’s AIS infants might jump at the chance (especially if they had a male partner whose sister, say, might donate an egg; and perhaps carry the child). They might well be thankful that their parents had resisted pressure to have their child’s testes removed before an age when they could give informed consent.

Fertility Possibility? (from ALIAS No. 15, Summer 1999)

We asked an andrological surgeon, Mr. Anthony Hirsch [Ref. 1] (who featured in a recent UK TV documentary on advances in male infertility treatments), about the possibility of these techniques being applied to the testicular material of AIS women. He replied (July 1999): Many thanks for your note of 25 May. I apologize for the delay in replying to you, partly due to pressure of work, but also because an answer demanded some thought and consideration. It is certainly true that immature sperm material has been found in the testes of patients with complete androgen insensitivity. If the diagnosis is made before they are operated on, the gonads are probably better not removed immediately so that the patient may mature in response to her own internal natural hormones rather than prescribed synthetic hormone.
Since CAIS patients are usually well shaped “females” [Ref. 2], who would usually attract a male partner, it is difficult to envisage why a patient would consider having their own immature sperm cells frozen and stored for the future. The obvious exception would be those complete AIS patients who have female sexual partners, who might one day wish to bear children created by assisted conception from their own joint genetic material [Ref. 3]. I have no information on how many CAIS patients have lesbian partners, but you may be aware of this. Within the testicular tissue in CAIS, the sperm material is usually immature, with no formed spermatozoa. Within the next 5 years it is probable that immature spermatogenic material could be successfully cultured in the laboratory with the creation of spermatozoa that could be used for intracytoplasmic sperm injection (ICSI). ICSI has a success rate of 22% in terms of a baby per treatment cycle commenced, provided the female partner is under 35 years of age. The AIS Support Group might therefore consider whether it should advise the parents of CAIS girls about freezing testicular tissue. I think you should also ‘sound out’ the views of 1 or 2 IVF gynaecologists or centres (e.g. Bourn Hall) about the acceptability of freezing gonadal tissue in this way. Assisted conception units would probably need to seek the advice of their Ethical Committees. I am not sure how the present law stands, but presume the HFEA [Ref. 4] would have no objections to IVF or ICSI in this situation, as there appears to be no problem concerning donor sperm for lesbian couples. It would be very difficult for parents to say “no” to something that may be feasible in the near future, provided it is not illegal or socially completely unacceptable. Therefore, on balance, your proposal is probably something you could advise parents of CAIS girls to consider rather than making a formal recommendation. I hope this letter has contributed something and will help you. 
With kind regards and best wishes. Ref. 1: 113 Harley Street, London W1N 1DG. Tel: 0171 935 6588. Also at Bourn Hall Clinic, Cambridgeshire. http://www.bourn-hall-clinic.co.uk/ Ref. 2: His quotation marks. Ref. 3: Our suggestion was that a male partner’s sister, for example, might help. See previous article. Ref. 4: Human Fertilisation and Embryology Authority, Paxton House, 30 Artillery Lane, London E1 7LS. Tel: 0171 377 5077. Website: http://www.hfea.org.uk

Fertility Advance (from ALIAS No. 16, Spring 2000)

Tammy, a 27 year-old woman with 5 alpha reductase deficiency, emailed to a group of women with AIS and related conditions (Oct. 1999): I was reading the Johns Hopkins [Hospital] site on 5-AR and noticed it said that in 5-AR the testes will contain sperm. I had them removed at 13 but was wondering…. might I have been able to ‘have’ my own children if they had taken the sperm out [and preserved them]? Or would the sperm be dead because of ‘cooking’ in my abdomen? I know for normal men sperm will die if it is too hot, and part of infertility treatments include wearing boxers to provide adequate air circulation. I know it doesn’t matter at this point for me personally, because my testes and any sperm I may have had are probably either sitting in a jar of formaldehyde or have long ago been thrown out with the rest of the medical waste. BUT if there is a possibility that our little orchid sisters may have reproductive options, I’d like to know and consider the possibilities… I wish I had the option… and think I would do it (talk about crossing gender barriers!)… mommy No. 1 donates the sperm so mommy No. 2 could carry the baby… I think it sounds wonderful. Another barrier… if two women are able to procreate a child together how can they be denied marital rights? And would society have to accept an ‘I’ for those of us who want it? [Ref. 1] OK, maybe the kid would have issues about it but we prove here that anything can be overcome.
Besides, lesbians merge families all the time and raise healthy happy children…. it’s all about love in my opinion. If any of you know if the sperm would be viable, please respond. We directed her to “ICSI” in ALIAS No. 15, Autumn 1999, about which she commented: Dr. Hirsch wrote (in letter in No. 15): “…Since CAIS patients are usually well shaped females, who would usually attract a male partner, it is difficult to envisage why a patient would consider having their own immature sperm cells frozen and stored for the future….” Why indeed would a well shaped CAIS woman, capable of attracting a male partner, want to have reproductive choices? I wonder sometimes if people who make statements like this realize they are talking about real people. Yeah…. it’s kind of a weird concept to, in essence, ‘father’ your child but it’s the only option we have, and it shouldn’t be so easily dismissed… especially by a doc. It’s a personal choice for the affected XY woman to make. How much trouble would it be to freeze the tissue and let her decide later what to do with it? I think I’ll write my old doc just to be sure they threw mine out 🙂 Ref. 1: Some group members (CAIS as well as PAIS) have said they would like the option to enter ‘I’ (intersexed) instead of ‘M’ or ‘F’ on official forms.

Half-Cloning (from ALIAS No. 16, Spring 2000)

On 5th Sept 1999 the UK’s Sunday Times [Ref. 1] reported on a medical advance pioneered by a team headed by Zev Rosenwaks at the Cornell Medical Center, New York. They had been able to take immature egg cells from the ovaries of a donor, remove the nucleus (containing the donor’s genetic material) and replace it with genetic material taken from an ordinary body cell of another animal. The researchers have found they can reprogramme the DNA genetic blueprint from any living cell to make it behave like an unfertilized egg. Once the reconstituted egg cell is mature, it is fertilized in the laboratory and then incubated in the womb of a surrogate mother.
The donor egg cell thus acts as an ‘envelope’ for the prospective genetic mother’s genetic material. They are primarily working on animals (of 35 mice eggs, almost half matured) but the work is being pursued in humans (in Rosenwaks’ first batch of 10 reconstituted human eggs, six were capable of maturing). They have no human pregnancies yet. The UK’s Human Fertilisation and Embryology Authority [Ref. 2], which regulates infertility treatment in Britain, said the research would be unlikely to receive approval. Other specialists believe that pressure from childless women will lead to its acceptance. The technology, which would sweep away the queue of more than 1,000 childless women waiting for donor eggs, has been welcomed by British infertility experts. Peter Brinsden, director of Bourn Hall, Cambridgeshire [Ref. 3], one of Britain’s largest infertility clinics, said: “Egg donation does not give women their own genetic child; this technology does. I would have no problem using it once it is established.” Although babies born from the technique would be only ‘half-cloned’ (the maternal cloned genetic material is fertilized by normal sperm as in regular IVF treatments), there is concern that using ‘old’ DNA from cells in the mother’s body could mean that a newborn baby was the genetic age of the mother. Early studies of Dolly the sheep, the first animal to be wholly cloned, suggest that her cells are much older than her chronological age. The announcement, at a conference in France, has also raised anxiety that cloning technology is gathering pace faster than regulatory frameworks can keep up. Said Philip Hammond, the [UK] Conservative [Party] health spokesman, “We need legislation. If nature intended post-menopausal women to have babies, it would not have created the menopause.” Ed’s Note: As usual, a male political spokesman pontificates about women’s reproductive choices, and needless to say, women who’ve never had any reproductive options, at any time of their lives (e.g.
those with AIS) get missed out of the discussion altogether. Ref. 1: “Egg Cloning May Let 70-year-olds Become Mothers” by Lois Rogers. Ref. 2: http://www.hfea.org.uk Ref. 3: http://www.bourn-hall-clinic.co.uk/

Ectopic Pregnancy (from ALIAS No. 16, Spring 2000)

A CAIS woman emailed to a group of AIS women: Anyone else feel that great things might be possible for the next generation of ‘orchids’? There was headline news today [10th Sept 1999] of the successful delivery of a baby (by Davor Jurkovic, Obstetrician at King’s College Hosp., London) that developed outside his mother’s womb. He’s a triplet. The other 2 eggs made it to the uterus; the third lodged in the Fallopian tube, which then ruptured, and the embryo escaped and grew in the abdominal cavity between the mother’s bladder and uterus. All 3 babies are doing fine. Jurkovic said it was a 1 in 60-100 million chance of successful outcome in these circumstances. He’d seen one other case earlier in his career but the mother died during delivery. It has been known for some time that it is possible for an embryo to implant and grow completely outside of the reproductive tract, i.e. anywhere in the abdominal cavity where it can tap into a rich blood supply (that’s my understanding from what I’ve read). The main problem is that a massive and life-threatening haemorrhage is highly likely at some stage in the maturation or the delivery process. The apparent insurmountability of this problem is the main reason why medics have said that in practice it is not something that can be contemplated as an elective procedure for people who do not have a uterus, e.g. normal men, transsexuals (it’s always these groups that are mentioned in the press when speculating on this; never AIS women, needless to say). See “Men Becoming Pregnant” in ALIAS No. 14. In the current case, a team of 26 medical staff enabled safe delivery, and this birth at King’s proves that it is possible, although Dr.
Jurkovic said that “I don’t think a surgical attempt to replicate it would be successful because the surgery involved would have to be so delicate it would not be possible.” I’m excited about the possible options opening up for future ‘orchids’, but at the same time sad that it’s too late for most of us; but that’s always the way with medical advances. A CAIS 40 yr-old commented: This is interesting. I think I may have shown you the article theorizing male pregnancies by Edmund Confino, M.D. (as I recall) of Northwestern University [Ref. 1]. I contacted him with my comment that while it was nice to conjecture about men (read “XY with functioning androgen receptors”) becoming pregnant, it would certainly behove medical science to attempt such procedures with AIS women. Never got a response. While perhaps I might have considered having children if such techniques had been available ten or twenty years ago, the truth is that at this point I’d damn well settle for ‘normal’ vaginal length. And it bugs me that teams of doctors could be assembled at a cost of thousands of pounds to treat this woman and zero dollars seems to be spent perfecting vaginoplasty options… It is so @#$%^&* frustrating to still not have this resolved!! OK, I’m off my soap box. I guess it’s just that advances in fertility treatments kinda frost me when we didn’t get so much as the truth from our doctors, never mind an ounce of psychological or emotional support. Ref. 1: See “Fertility Advances” in ALIAS No. 7, Spring 1997 where he was said to be in Chicago.

Fertility Breakthrough (from ALIAS No. 17)

A team of French, Spanish and Italian fertility experts reported [Ref. 1] that an infertile woman’s genetic make-up can be introduced into a donated egg so that the resulting child carries her genes rather than the donor’s. [Ref. 2] A woman who is unable to make eggs can thus reprogram donor eggs with the genetic material from any of her own body cells to effectively ‘create her own eggs’.
The technique is known as ‘membrane fusion’ and parts of it resemble the methodology used to clone Dolly the sheep. [Ref. 3] The team’s leader, Dr. Jan Tesarik from Paris, said that no attempt had been made to fertilize the experimentally reconstituted eggs because the creation of human embryos (fertilized eggs) for research is banned in France and Spain, and strictly regulated in Italy. One of the team, Dr. Peter Nagy, is already working in Brazil where ethical guidelines allow such studies and says it would be possible to use the technique on patients there by the end of the year. Use of the method in Europe would probably take longer because of the need to get ethical agreement. Ref. 1: Tesarik J., Nagy Z.P., Mendoza C., and Greco E: Chemically and mechanically induced membrane fusion: non-activating methods for nuclear transfer in mature human oocytes. Human Reproduction 2000 May; 15(5):1149-54. (Laboratoire d’Eylau, 55 rue Saint-Didier, 75116 Paris, France, MAR & Molecular Assisted Reproduction and Genetics, Granada, Spain, CIVTE, Centre of Insemination In Vitro and Embryo Transfer, Sevilla, Spain, Centre for Reproductive Medicine, European Hos.) Ref. 2: Reported (with diagrams of the technique) in The Daily Telegraph, Thursday April 27th. Go to http://www.telegraph.co.uk and do a search (at foot of page) on ‘Tesarik’. Ref. 3: See also “Half-cloning” in ALIAS No. 16, Spring 2000. [Further coverage of fertility issues can be found in ALIAS Nos. 18, 19, 20, 21 and 22 (not yet summarized here).]

Update (Oct 2010)

Time and resources have not allowed us to keep this page updated as much as we’d like. To quickly summarise the current situation, we reproduce here a section from the 2010 paper by Berra et al: Coordination of fertility options for women with DSD requires both knowledge of the potential for each individual but also of the provision of unusual fertility services. In the UK the three main choices for starting a family come under different agencies.
Adoption comes under the auspices of social services, ovum donation is often provided by private fertility clinics and surrogacy is supported by voluntary organizations (see Internet Resources [below]). Preparation for fertility may start with a clinical psychology assessment working through the plans and wishes of each individual. A fertility specialist is required to describe the practicalities of each option. Support groups are [a] very helpful source of user information with forums passing on up to date experiences. Women with no uterus will usually choose between surrogacy and adoption. Women with gonadal dysgenesis and a normal uterus may consider ovum donation. While ovum donation is available in the UK sometimes as part of the NHS fertility services, the rate limiting step is the availability of donate[d] oocytes. In order to circumvent any delay, many couples choose to enroll with clinics in Europe, India or North America w[h]ere supplies of oocytes are less restricted. Successful pregnancies after egg donation in women with 46XY gonadal dysgenesis are still to[o] few to be certain of success rates (Siddique et al, Plante et al, Kan et al, Selvaraj et al). Although [the] rate of Caesarean Section may be increased (Cornet et al), normal term vaginal delivery has been reported (Siddique et al).

Internet Resources:
– AISSG Androgen Insensitivity Syndrome Support Group
– COTS Childlessness Overcome Through Surrogacy, Surrogacy UK and HFEA Human Fertilisation Embryology Authority (see Links to Other Sites)
– Daisy Network Premature Menopause Support Group (see Links to Other Sites)

AISSG group meetings continue to discuss issues such as adoption and it is planned to have expert speakers on the topic at future meetings.

Testimonials

Personal stories from people with AIS or a related condition
November 22, 2018 / March 30, 2021, By admin

We use this part of the web site to display people’s stories.
If you have AIS or a related condition, or you are the parents of an affected child, we would like to hear from you. The UK group looks after the web site, so please send your story to the UK group (see How to Contact Us). Many people with AIS and related conditions have found it very helpful to tell what happened to them. For many of us, it is very therapeutic to “say” those things that we have never told anyone, and quite cathartic to “get it off your chest” by tossing it out into the wider world, but in a safe way. If you wish, you can supply a pseudonym, although most people opt for us to use their real first name on the site. We understand the importance of safeguarding people’s identity/privacy, so your full name, email address etc. will be completely confidential. Please remember not to mention other people’s names without first obtaining their permission. It is also a great help to other affected people/families to discover, by reading people’s stories, that in fact a community of XY women exists, when they always thought they were “the only one”. Tell us how you learned about your, or your child’s, condition, what you felt and thought, and where you are with it now. What aspects do you think need further attention? Please give your email a title/subject line that gives us some hint that it’s a genuine message, because we sometimes don’t open emails that arrive with a blank subject line or ones that look like spam. A CAIS woman wrote in 1997: I remember repeatedly thinking as a teenager “How am I going to get myself out of this?” What I meant was not how would I make it all go away (I knew that was an impossibility).
Rather I thought “How am I going to summon up the courage to get the help I need, how am I going to find the strength to talk about it?” (even though I didn’t yet know the truth and know what ‘it’ was), “and how the hell am I ever going to build a normal relationship?” I couldn’t even imagine telling anyone I was unable to have children and didn’t have pubic hair – the two things I actually knew and understood at that age. The image of being painted into a corner was vivid in my mind since the age of 12 or 13. It haunted me until age 36. I never saw any way out of the corner except by taking my life, until I came across the letter [from another AIS woman, in a medical journal, giving the support group contact details]. The letter wasn’t just a release from a corner, it was a release from the prison that was my mind, a place where everything was locked shut inside and could find no freedom of expression. And when I read the description, in ALIAS No. 1, of “….the process of hearing oneself actually saying out loud those words that you thought would forever remain as circling thoughts in your head”, I convulsed with sobbing (the word convulsing is not an exaggeration; I had never cried from so deep a place, or as intensely as when I read that quote). Nothing I ever read so brilliantly depicted how I felt about my life experience and the ordeal of keeping it all locked inside my mind. We received the following email in March 2004: A kind hello from Giorgia, I mail you from Belgium. I don’t have an AIS diagnosis nor dealt with the medical issues mentioned on your site. I just want to mail to express how much impressed I am from reading the personal stories. I’m also very shocked at the medical discourses, terms and attitudes towards women with AIS. 
Having an educational background and work experience in both anthropology and psychotherapy, I feel that the degree of primitiveness expressed by our culture to accommodate “variations” is very high: we live in a sex, gender and sexual primitive society. On the psychological side, I feel hurt to read the effects of the knowledge women have on themselves. The way that this knowledge has been transmitted and expressed strikes me as very painful. Since I too don’t bleed and don’t have a womb but appear female, I can sympathise very strongly with the stories of these women. Warm regards, Giorgia.

News

What is AIS?
October 30, 2018 By admin

Androgen Insensitivity Syndrome (AIS) is one of a number of biological intersex conditions. Intersex results from a variation in the embryological development of the reproductive tract, often determined by a known genetic mutation.

Index to this Page
What is Intersex?
Terminology (and Media Confusion)
Introduction to AIS
How AIS Occurs
Forms of AIS (Complete and Partial)
Synonyms
Genetics – Usual (non-AIS) Situation
Foetal Development – Usual (non-AIS) Situation
Genetics – AIS
Foetal Development – AIS
Incidence
Early Knowledge of AIS
General Refs

What is Intersex?

The usual pattern of human foetal development results in a 3-part alignment, as follows.

Either:
1) sex chromosomes = XY, leading to
2) gonads = testes, leading to
3) external genitalia = male

or:
1) sex chromosomes = XX, leading to
2) gonads = ovaries, leading to
3) external genitalia = female

So what happens in intersex?

Very rare… … is a type of intersex condition in which the person actually has a male/female mix at the genetic (chromosome) level and at the gonadal level (the ‘1’ and ‘2’ above). This is extremely rare and only one or two members of our group are in this situation. The old term used in medicine for this situation is a hermaphrodite.
Note, however, that a hermaphrodite, in the sense understood by most of society, is a purely mythical creature from ancient literature, one that supposedly has a complete working set of both male and female internal and external organs (such that the individual can, in theory, impregnate itself). This is not humanly possible. Unfortunately medicine took over this literary term in the days before genetics was understood and used it as a medical term, to refer to these individuals who have both ovarian and testicular tissue internally (an ovo-testis) and who, as a result, can have ambiguous external genitalia.

Not quite so rare… … is the type of intersex condition in which the sex chromosomes are either XY or XX (i.e. not a mixture) and the gonads are either testes or ovaries (i.e. not a mixture) – as in the majority of the population – but there is a mismatch or distortion in the usual alignment of these two elements (the ‘1’ and ‘2’ above) with the external genitalia (the ‘3’ above). This means that you can, for example, have an XY individual with testes but with an external appearance that is essentially female (i.e. an XY female: either completely female in appearance as in Complete AIS, or partly female in appearance as in Partial AIS) or an XX individual with ovaries but with some degree of male genital appearance (e.g. a woman with congenital adrenal hyperplasia or CAH). Note that the term ‘intersex’ relates to the elements of this entire axis or alignment (the sex chromosomes, the gonads and the genitalia), and not just to the appearance of the external genitalia. A patient with the complete form of AIS (CAIS), or with Swyers Syndrome (XY gonadal dysgenesis), will always appear female externally (no ambiguity) but she is still intersexed, because she has XY chromosomes and internal testes (testicular streak gonads in the case of Swyers) that are considered at odds with her external femaleness.
Terminology (and Media Confusion)

Before we get into a basic introduction to what AIS is, in medical terms, a brief but important diversion to the subject of bad/confusing terminology and how the media can make things worse. First off, please note that it is nonsense to talk, in relation to sport for example, of ‘gender testing’, because gender is to a large extent a social construct, describing the way people present themselves to the world, and so cannot usefully be subjected to ‘verification’. Rather, it is the notion of sex that Olympic committees and the like are seeking to police.

‘Trans’ Terms

Intersexuality is not the same as transsexuality (gender dysphoria) and is not a transgender state. Neither of the latter terms is one that we recognise as belonging in any general discussion of intersex. We are not happy with the recent tendency of some trans groups/people to promote transgender as an umbrella term to encompass, for example, transsexuality, transvestitism and intersex. We object to other organisations/individuals putting us in categories without consulting us, especially categories that imply that intersexed people, of necessity, have gender identity issues. See the paper by Mazur et al cited at foot of this page.

The problems this causes…

We are constantly trying to get away from the idea that intersex is necessarily to do with gender identity, a notion that others (including the press/media) like to impose on us. Moreover, the prefix trans- implies a “moving across”, and although a few people with intersex conditions may choose to change their gender role, the vast majority never “go” anywhere in terms of their sex or their gender, but are happy to stay in the status in which they grew up. XY females may suffer various problems on finding out their diagnosis.
Problems such as:

- confusion
- anger at secrecy and paternalism (withholding of diagnostic information)
- shame and stigma
- an existential type of identity crisis
- low self-esteem
- difficulty grasping how this biological phenomenon can come about
- grief at being denied fertility and rites of passage (e.g. lack of menstruation)
- a feeling of freakishness and isolation compared to their peers
- a fear that others might see them as ‘male’
- a concern regarding their ability to function in a relationship (e.g. vaginal hypoplasia)
- the burden of keeping a secret, or uncertainty over who to tell and how
- a retreat from medical care, leading to failure to take HRT with a risk of osteoporosis
- etc., etc.

These are the issues that are of major concern to most of our members; and none of these necessarily means that their inner sense of gender identity is compromised. This trend towards ‘muscling in’ on intersex issues seems to be an initiative on the part of certain politically-minded people in the ‘trans’ community, to bring intersex under their banner (for whatever reason – it lends more credibility to their cause?) or even to actively interfere in clinical issues relating to intersex. See Announcements for an account of the problems we had in 2000 with a gender dysphoria/transsexual organisation trying to interfere in protocols for ‘gender reinforcement’ surgery in intersexed infants with so-called genital ambiguity.

‘Intersex’ vs ‘Ambiguous Genitalia’

Note that the term ‘ambiguous genitalia’ refers to one specific component (the form taken by the external genitalia) in some intersex states. Yet even specialist clinicians will sometimes use the more general term ‘intersex’ when they are actually referring specifically to patients with outwardly observable ‘ambiguous genitalia’. Many intersexed patients have a totally female phenotype (body form), and usually no gender ID conflicts. Yet they are still intersexed because their internal features (e.g.
chromosomes, gonads) are seen to be incongruent with their external appearance. Women with the complete form of AIS or with Swyer Syndrome, for example, come into this category.

The problems this causes…

This sloppy use of terminology causes us a lot of problems, because the media (magazines, newspapers, TV) pick up on this and assume that ‘intersex = ambiguous genitalia = gender identity problem’, then print/broadcast material that conveys to the general public the idea that gender identity is, of necessity, an issue in intersex… which it absolutely is not. The vast majority of our members have a secure female gender identity. Yet it has been known for a CAIS group member (with, by definition, no external genital ambiguity) to agree the text of an article based on her story (one that concentrates on secrecy, confusion about her diagnosis, isolation etc.) only to find at publication that the editor has, say, slipped in a picture (in one case, the left half of a man’s photo joined to the right half of a woman’s photo) with a caption that says “[Name] is confused about being a man or a woman”… just because they have this fixed idea that if you’re intersexed then you must have a gender identity problem (and no doubt because this notion sells more magazines/newspapers). See AIS in Articles/Books.
Or else an article will discuss the issues quite sensibly, using the non-emotive term ‘androgen insensitivity’, but when the featured group member finally sees it on the newsstand she finds that a huge headline has been splashed across the cover page (again, without consulting her… or us, if we have been involved…) saying “[Name] discovers she is a hermaphrodite.”

‘Hermaphrodite’ Terms

When medicine eventually realised that the mis-alignment type of condition described earlier was a somewhat different situation from the very rare one in which the actual chromosomes and gonadal tissue are a male/female mix, the term for the latter situation was adjusted, from ‘hermaphrodite’ to ‘true hermaphrodite’ (see Related Conditions for more information). This meant that a new variant of the term, i.e. ‘pseudo-hermaphrodite’, could be introduced to describe the mis-alignment situation. The particular mis-alignment where you have XY –> testes –> female appearance (sometimes referred to nowadays as an ‘XY female’ condition) was charmingly termed ‘male pseudo-hermaphrodite’ (and there is also a type of condition that comes under the umbrella term ‘female pseudo-hermaphrodite’; Congenital Adrenal Hyperplasia, for example).

The problems this causes…

Most of our members detest these hermaphrodite terms, just as those with AIS find the old name (testicular feminisation syndrome) for their specific condition deeply offensive. For many of our members who have not been told the truth by doctors, it is these terms that they come across in medical libraries/bookshops when searching for information that will allow them to make sense of their situation. This is deeply traumatising for a teenager who in all respects except for her internal organs appears to be female (and who often has come to medical attention only through a failure to menstruate), and we feel these archaic terms should be banned from the medical literature.
Jeffrey Eugenides, in his novel Middlesex, was wrong in referring to his main character Callie as a hermaphrodite – something that the press, needless to say, picked up and propagated in their reviews of the book (see AIS in Books/Articles). If he was going to use these out-of-date and confusing terms then Callie (like the vast majority of our members with AIS, 5-alpha-reductase deficiency, Swyer Syndrome etc.) was not a hermaphrodite (i.e. a true hermaphrodite) but a ‘male pseudo-hermaphrodite’. As mentioned above, this is an umbrella term for someone with certain male characteristics (XY sex chromosomes, internal testes) yet certain female characteristics (female external genitalia and general body form, breasts etc.). But this is splitting hairs for most people, and rather than use either archaic term, it would have been much better if Eugenides had just used the more up-to-date term, ‘intersex’.

DSD Terminology

There was a proposal, arising out of a 2005 conference of (mainly paediatric) clinicians, to adopt the term Disorders of Sex Development (DSD) as a new term to replace ‘intersex’, with the various ‘hermaphrodite’ terms being replaced by Sex Chromosome DSD (in place of true hermaphrodite), 46,XY DSD (in place of male pseudo-hermaphrodite) and 46,XX DSD (in place of female pseudo-hermaphrodite). The DSD terminology, and the way it was introduced, has been controversial in some circles. See the Debates/Discussions page.

Introduction to AIS

So AIS is a condition that affects the development of the reproductive and genital organs. Male foetuses usually have a Y sex chromosome which initiates the formation of testes (and the suppression of female internal organ development) during gestation. Testes are the site of production of masculinizing hormones (androgens) in large quantities. Both male and female foetuses usually have at least one X sex chromosome, which contains a gene that gives their body tissues the capacity to recognise and react to androgens.
At puberty girls react to the relatively small quantity of androgens (that come mainly from their adrenal glands) by developing pubic/underarm hair and darkish pigmentation around the nipples. People with AIS have a functioning Y sex chromosome (and therefore no female internal organs), but an abnormality on the X sex chromosome that renders the body completely or partially incapable of recognising the androgens produced. In the case of complete androgen insensitivity (CAIS), the external genital development takes a female form. In the case of partial insensitivity (PAIS), the external genital appearance may lie anywhere along the spectrum from male to female. Other related conditions, resulting from changes on different chromosomes, also disrupt the normal pathway of androgen action, resulting again in a feminized phenotype (body form). See Related Conditions. People with these ‘XY conditions’ may identify as female, intergendered, or male.

How AIS Occurs

Every foetus, whether genetically male (XY) or female (XX), starts life with the capacity to develop either a male or female reproductive system. All foetuses have non-specific genitals for the first 8 weeks or so after conception. After a few weeks, in an XY foetus (without AIS), the non-specific genitals develop into male genitals under the influence of male hormones (androgens). In AIS, the child is conceived with male (XY) sex chromosomes. Embryonic testes develop inside the body and start to produce androgens. In AIS these androgens cannot complete the male genital development, due to a rare inability of the tissues to use the androgens that the testes produce, so the development of the external genitals continues along female lines. However, another hormone produced by the foetal testes suppresses the development of female internal organs. Thus a person with AIS has external genitals that in Complete AIS (CAIS) are completely female or in Partial AIS (PAIS) are partially female.
Internally, however, there are testes instead of a uterus and ovaries. So in a genetically male (XY) foetus the active intervention of male hormones (androgens) is needed to produce a fully male system; a female body type with female external genitalia is the basic underlying human form. In about two thirds of all cases, AIS is inherited from the mother. In the other third there is a spontaneous mutation in the egg. The mother of the foetus, who does not have AIS but has the genetic error for AIS on one of her X chromosomes, is called a carrier.

Forms of AIS (Complete and Partial)

There are two forms of the condition: Complete AIS (CAIS), where the tissues are completely insensitive to androgens, and Partial AIS (PAIS), where the tissues are partially sensitive to varying degrees. The condition is actually represented by a spectrum, with CAIS being a single entity at one end of a range of various PAIS manifestations. Drs. Charmian Quigley and Frank French (The Laboratories for Reproductive Biology, The University of North Carolina at Chapel Hill, Chapel Hill, North Carolina 27599-7500, USA) proposed a grading system for the phenotypic features (external appearance) in AIS, modelled on the Prader classification for Congenital Adrenal Hyperplasia (CAH). The scale runs from AIS Grade 1 to Grade 7 with increasing severity of androgen resistance – and hence decreasing masculinization and increasing feminization. At the CAIS end of the spectrum the outward appearance is completely female (AIS Grades 6/7) and the sex of rearing is invariably female. In PAIS the outward genital appearance can lie anywhere from almost completely female (Grade 5), through mixed male/female, to completely male (Grade 1); it has been suggested that slight androgen insensitivity might contribute to infertility in some otherwise normal men. Some babies with PAIS may be raised as males but many are re-assigned as female.
Grade 1 (PAIS): male genitals; infertility.
Grade 2 (PAIS): male genitals but mildly ‘under-masculinized’; isolated hypospadias.
Grade 3 (PAIS): predominantly male genitals but more severely ‘under-masculinized’ (perineal hypospadias, small penis, cryptorchidism i.e. undescended testes, and/or bifid scrotum).
Grade 4 (PAIS): ambiguous genitals, severely ‘under-masculinized’ (phallic structure that is indeterminate between a penis and a clitoris).
Grade 5 (PAIS): essentially female genitals (including separate urethral and vaginal orifices; mild clitoromegaly, i.e. enlarged clitoris).
Grade 6 (PAIS): female genitals with pubic/underarm hair.
Grade 7 (CAIS): female genitals with little or no pubic/underarm hair.

Before puberty, individuals with Grade 6 and Grade 7 are indistinguishable.

Note, however, that in the study of Hannema et al (2004), 70% of ‘CAIS’ patients with substitution mutations in the androgen receptor ligand-binding domain had epididymides and vasa deferentia present. These structures develop from the primitive Wolffian ducts under the influence of androgens, once the testes have formed and started to make testosterone, and were in fact more developed than the epididymides and vasa deferentia in ‘normal’ 16 to 20 week-old male fetuses. The researchers suggest that the combination of some slight tissue sensitivity to androgens together with the particularly high levels of testosterone seen in CAIS can stimulate Wolffian duct development/differentiation. They suggest that the classification of androgen insensitivity in such patients should be considered severe [PAIS] rather than complete [CAIS]. The grading scale is described in more detail, with line drawings of the genital appearance, in issue No. 6 of our newsletter, which is available as a free sample download file (see Literature). It is thought that it is not possible to have the complete and partial forms of AIS in the same family unit. In cases where this appears to happen it is probably a case of one sibling having a low grade of partial AIS (e.g.
grade 3, in which there will be some masculinization of the genitalia) and the other sibling having a high grade of PAIS (e.g. grade 6, in which the genital appearance will appear female as in the complete form, which is grade 7, but with pubic/underarm hair). The two forms are considered in more detail in Complete AIS and Partial AIS.

Synonyms

Androgen Insensitivity Syndrome, Androgen Resistance Syndrome, Testicular Feminization Syndrome (Testicular Feminisation Syndrome), Feminizing Testes Syndrome (Feminising Testes Syndrome), Male Pseudo-hermaphroditism, Morris’s Syndrome (CAIS), Goldberg-Maxwell Syndrome, Reifenstein Syndrome (PAIS), Gilbert-Dreyfus Syndrome (PAIS), Rosewater Syndrome (PAIS), Lubs Syndrome (PAIS).

Other XY conditions with some similarities to AIS: 5-alpha-reductase deficiency, 17-keto-steroid reductase deficiency, XY gonadal dysgenesis (Swyer Syndrome), Leydig cell hypoplasia, Denys-Drash Syndrome, Smith-Lemli-Opitz Syndrome. See Related Conditions. XX conditions with some similarities to AIS: Mayer-Rokitansky-Kuster-Hauser (MRKH) Syndrome, Mullerian dysgenesis, vaginal atresia.

Genetics – Usual (non-AIS) Situation

In human somatic (body) cells there are normally 46 chromosomes, made up of 23 pairs. 44 of the 46 are called autosomes because they are not thought to determine sex. The other two are called sex chromosomes. Non-AIS males have a relatively large X and a small Y sex chromosome, and normal females have two X sex chromosomes. When the germ, or generative, cells are formed in the body of the adult, these sex chromosomes become separated, so that a sperm carries either a single X or a single Y chromosome, whilst every egg carries a single X chromosome. At conception the new embryo will be XX or XY, according to whether the egg, which is always X, was fertilized by an X-bearing sperm or by a Y-bearing sperm. Thus the sperm determines the genetic sex of the child.
Foetal Development – Usual (non-AIS) Situation

Although the sex of the embryo is determined at the time of conception, anatomical differences don’t show until approximately two months later. In this ‘indifferent stage’ every foetus has the primitive structures necessary for either a male or a female system: there are both Wolffian ducts and Mullerian ducts. Gonad is the term given to the undifferentiated organ that will later become either a testis or an ovary. Testes develop earlier than ovaries. In an XY foetus, the gonads develop into testes. The testes then cause the Wolffian ducts to develop into the rest of the internal male system, and the Mullerian ducts to be suppressed. In an XX foetus, the gonads develop into ovaries; the Mullerian ducts then form the rest of the female internal system and the Wolffian ducts are suppressed. It is important to understand not only that there is a single primitive structure in the indifferent stage from which the male or the female organs develop, but also that each reproductive organ in either sex has a counterpart in the opposite sex. For example, the penis of the male and the much smaller clitoris of the female both come from the embryonic genital tubercle or phallus. Men have a vestigial uterus, the utriculus masculinus, in the prostate, and women have a homologue of the prostate in the glands at the lower end of the urethra.

Genetics – AIS

There are a number of abnormalities of the sex chromosomes that can occur. One example is Klinefelter’s Syndrome, in which a man carries an extra X chromosome. Another is Turner Syndrome, in which a woman is missing an X chromosome. AIS is not a disorder of the sex chromosomes, however: the sex chromosomes in an AIS baby are those of a normal male, XY. The genetic fault lies in the Androgen Receptor (AR) gene on the X chromosome received from the mother. This affects the responsiveness, or sensitivity, of the foetus’s body tissues to androgens.
AIS is inherited via what is known as an X-linked recessive inheritance pattern (also described as a partly recessive gene, or as male-limited autosomal dominant inheritance). This means that it is passed on via the female line, so it can affect some or all of a mother’s XY children. For a carrier woman there is a 1 in 4 risk in each pregnancy of that child having AIS (or a 1 in 2 risk if the foetus of such a pregnancy is known to be genetically male, e.g. as a result of an amniocentesis sample taken during pregnancy). The AIS condition does not manifest in embryos of a carrier mother that are genetically female (i.e. XX), but they have a 1 in 2 chance of being carriers themselves and could therefore pass AIS on to their children. So the possibilities in a given pregnancy, where the mother is a carrier, are:

- ‘Normal’ XY boy, or
- AIS XY baby, or
- ‘Normal’ XX girl, or
- Carrier XX girl

… with a 1 in 4 chance of each situation resulting.

[Diagram: X-linked recessive inheritance in AIS]

The term ‘affected son’ is used in the above diagram because the X-linked recessive inheritance pattern is common to a number of genetic conditions which manifest in sons and not daughters. If AIS is present in a family, there are tests available to see if an XX woman is a carrier and thus capable of passing the defective gene on to her children (see Obtaining/Facing Diagnosis).

Foetal Development – AIS

In the case of an AIS foetus, a Y-bearing sperm fertilizes the egg (which is always X) and produces an XY embryo. In the early stage of foetal life differentiation is as a male, with testes forming and the Mullerian ducts regressing. The Mullerian ducts would have formed the internal female organs in an XX girl. Once the testes are formed, they start to produce testosterone, which would normally cause the masculinization of the body. Masculinization is an active process; it needs the positive or active intervention of the male hormones in order to take place.
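The 1-in-4 and 1-in-2 odds quoted in the Genetics section above can be checked by enumerating the four equally likely egg/sperm combinations for a carrier mother and an unaffected father (a simple Punnett square). The sketch below is illustrative only; the genotype labels (e.g. "Xa" for the X carrying the altered AR gene) are informal placeholders, not medical notation.

```python
from fractions import Fraction
from collections import Counter

# Carrier mother contributes either her affected X ("Xa") or her
# unaffected X; the father contributes either an X or a Y.
mother = ["Xa", "X"]
father = ["X", "Y"]

outcomes = Counter()
for m in mother:
    for f in father:
        if f == "Y":
            child = "AIS XY baby" if m == "Xa" else "unaffected XY boy"
        else:
            child = "carrier XX girl" if m == "Xa" else "non-carrier XX girl"
        outcomes[child] += 1

total = sum(outcomes.values())
probs = {k: Fraction(v, total) for k, v in outcomes.items()}
print(probs)  # each of the four outcomes has probability 1/4

# Conditional risk: among XY pregnancies only, the chance of AIS is 1 in 2.
xy_total = outcomes["AIS XY baby"] + outcomes["unaffected XY boy"]
print(Fraction(outcomes["AIS XY baby"], xy_total))  # 1/2
```

Running the enumeration reproduces the figures given above: each of the four outcomes occurs with probability 1/4, and restricting to genetically male (XY) pregnancies raises the AIS risk to 1/2.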
If these male hormones are either absent, or the tissues do not respond to them (as happens to differing degrees in the various forms of AIS), then the passive tendency is for the external genitals to differentiate into female external organs which, in the complete form of AIS, are indistinguishable from those of normal girls. This female physical development is not due to the presence and influence of oestrogens but to the ineffectiveness of androgens. In other words, the inherent trend is for any foetus to develop female external genitals and general body form, in the absence of the masculinizing effects of male hormones. Or as Money et al express it: “Cellular insensitivity to androgen (in AIS) permits the 46,XY foetus not to masculinize.” Shearman is somewhat more dramatic in his declaration that “If the target cells lack this (androgen) receptor, testosterone passes like a stranger in the night and neutral female absolutism reigns supreme.” Unfortunately, however, by the time the androgen insensitivity in AIS becomes evident, the internal reproductive organs have already progressed partially down the male route, and the Mullerian Inhibitory Factor (MIF) from the testes has already begun its work of destroying the primitive female internal organs. The testes remain in a ‘frozen’, partially-developed male state, and the development of the internal female organs cannot be reactivated.

Incidence

Mainly using data on the frequency of inguinal (groin) hernia in presumed females, Jagiello and Atwell estimated the frequency of AIS to be about 1 in 65,000 genetic males. This presumably refers only to the complete form (CAIS), since the infants were assumed to be female until the occurrence of the hernia. DeGroot quotes an incidence of about 1 in 60,000. Hauser gives an incidence of 1 in 2,000. Adams-Smith et al give a figure of 1 in 20,000. The most accurate figure currently available is probably that from an analysis (Bangsboll et al.)
of a nationwide Danish patient register, suggesting an incidence of 1 in 20,400 male births (hospitalized cases only, so the true incidence is probably higher). CAIS has been said to rank third as a cause of primary amenorrhoea (lack of menstruation), after gonadal dysgenesis (Turner Syndrome) and congenital absence of the vagina (Mayer-Rokitansky-Kuster-Hauser Syndrome). Complete AIS (CAIS) is sometimes referred to as ‘classical testicular feminization’ (‘classical testicular feminisation’). CAIS may be more common than PAIS (the ‘partial’ form of the condition), but we don’t have incidence figures for PAIS.

Early Knowledge of AIS

A condition that could have been AIS was mentioned in the Talmud (400 BC). It has been speculated that Joan of Arc (1412) might have had AIS (Wooster 1992?). The same suggestion has been made with regard to Queen Elizabeth I, the ‘Virgin Queen’ (1533–1603) (Bakan 1985). AIS may have been reported as early as 1817, when Steglehner described the case of an apparently normal woman who had undescended testes. The condition has been of relatively long interest to geneticists. Dieffenbach, an American geneticist, first pointed out in 1906 that there is a hereditary pattern to AIS. Petterson and Bonnier (1937) concluded that the affected persons are genetically male. Some early textbooks refer to the syndrome as the Goldberg-Maxwell Syndrome. Morris (1953) first used the term “testicular feminization”. Morris and Mahesh (1963) subsequently described an incomplete form of the condition. Wilkins (1957) first demonstrated that the basic defect is tissue unresponsiveness to androgens – hence the newer and, as far as most clinicians are concerned, more correct name of Androgen Insensitivity Syndrome. Netter and colleagues (1958) reported this disorder in a famous photographic model, and Marshall and Harder (1958) reported affected twins who worked as airline stewardesses.
In 1974 Migeon showed that the condition results from androgen receptor resistance. The androgen receptor gene was cloned and sequenced in 1988.

Note: The list below contains references to medical journal articles/papers relevant to the subject matter of this web page. We don’t expend as much effort keeping this list updated as we do for those on other pages (covering topics like facing the diagnosis, vaginal hypoplasia, and the pros and cons of gonadectomy and genital surgery) because medical articles covering clinical features, genetics etc. are not as useful to patients/parents as those covering practical issues and dilemmas.

General Refs: See Medical Literature Sites on our ‘Links to Other Sites’ page for ways of accessing journal articles.

Money J., Schwartz M. and Lewis V.G: Adult erotosexual status and fetal hormonal masculinization and demasculinization: 46,XX congenital virilizing adrenal hyperplasia and 46,XY androgen-insensitivity syndrome compared. Psychoneuroendocrinology, Vol 9, No 4, pp 405-414, 1984.
Shearman R.P: Intersexuality. In Clinical Reproductive Endocrinology, Churchill Livingstone, pp 346-361, 1985.
Jagiello F. and Atwell J.D: Prevalence of testicular feminization. Lancet I: 329 only, 1962.
DeGroot L.J. (ed): Endocrinology, Vol 2 (2nd ed.), Philadelphia PA, Saunders, 1989.
Hauser G.A: Testicular feminization. In Intersexuality, edited by C. Overzier. Academic Press, New York, 1963.
Adams-Smith W.N. and Peng M.T: Inductive influence of testosterone upon central sexual maturation in the rat. J. Embryol. Morph., 17: 171, 1967.
Bangsboll S. et al: Testicular feminization syndrome and associated tumours in Denmark. Acta Obstet Gynecol Scand 71: 63-66, 1992.
Griffin J.E. and Wilson J.D: The syndromes of androgen resistance. N. Engl. J. Med. 1980; 302: 198-209.
Wooster N: The Real Joan of Arc?, published by The Book Guild, Lewes, Sussex (1992?).
The statement in this book, that Bernard Shaw’s heroine may have had testicular feminization, is discussed in a letter from G. Richeux, Doncaster, to The Guardian on 4 September 1992, entitled “Oh why can’t a woman be more like an admirable, literary man?”.
Bakan R: Queen Elizabeth I: a case of testicular feminization syndrome? Med. Hypotheses, 1985, 17(3), 277-84.
Steglehner G: De Hermaphroditism Nature. Kunz Bambergae et Lipsias, 1817.
Dieffenbach H: Familiaerer Hermaphroditismus [Familial hermaphroditism]. Inaugural Dissertation, Stuttgart, 1912.
Petterson G. and Bonnier G: Inherited sex-mosaic in man. Hereditas 23: 49-69, 1937.
Goldberg M.B. and Maxwell A.F: Male pseudohermaphroditism proved by surgical exploration and microscopic examination. A case report with speculations regarding pathogenesis. J. Clin. Endocrinol. 8: 367-379, 1948.
Morris J.M: The syndrome of testicular feminization in male pseudohermaphrodites. Am. J. Obstet. Gynecol. 65: 1192-1211, 1953.
Morris J.M. and Mahesh V.B: Further observations on the syndrome, ‘testicular feminization’. Am. J. Obstet. Gynec. 87: 731-748, 1963.
Wilkins L: The Diagnosis and Treatment of Endocrine Disorders in Childhood and Adolescence. Springfield, Ill.: Charles C. Thomas, 1957 (2nd ed.).
Netter A., Lumbrosa P., Yaneva H. and Liddle J: Le testicule féminisant [The feminizing testis]. Ann. Endocr. 19: 994-1014, 1958.
Marshall H.K. and Harder H.I: Testicular feminizing syndrome in male pseudohermaphrodite: report of two cases in identical twins. Obstet. Gynec. 12: 284-293, 1958.
Migeon et al: Studies of the locus for androgen receptor: localization on the human X and evidence for homology with the Tfm locus in the mouse. Proc. Nat. Acad. Sci. 78: 6339-6343, 1981.
Williams J: Androgen insensitivity syndrome: a survey of the terminology, language and information found in medical textbooks, scientific papers and the media. Thesis (M.Sc.) Science Communication, Department of Humanities, Imperial College, London (1996).
Blackless M. et al: How Sexually Dimorphic Are We?
Review and Synthesis. American Journal of Human Biology 12: 151-166 (2000).
Boehmer A.L. et al: Genotype versus phenotype in families with androgen insensitivity syndrome. J. Clin. Endoc. and Metab., 86: 4151-60 (2001).
Wilson Bruce E: Androgen Insensitivity Syndrome. A useful online monograph covering many aspects of AIS. Dr. Wilson has been mentioned a number of times in our newsletter, ALIAS, and has spoken at AISSG US group meetings.
Hannema S.E., Scott I.S., Hodapp J., Martin H., Coleman N., Schwabe J.W. and Hughes I.A: Residual Activity of Mutant Androgen Receptors Explains Wolffian Duct Development in the Complete Androgen Insensitivity Syndrome. J. Clin. Endoc. and Metab., Vol. 89, No. 11, 5815-5822 (2004).
Minto C.L. et al: XY Females: Revisiting the Diagnosis. BJOG (An International Journal of Obstetrics and Gynaecology), Vol. 112, pp. 1407-1410, October 2005.
Mazur T., Colsman M. and Sandberg D.E: Intersex: Definition, Example, Gender Stability, and the Case Against Merging with Transsexualism. In Ettner, R., Monstrey, S., and Eyler, A. E. (eds), Principles of Transgender Medicine and Surgery. Binghamton, New York: Haworth Press, Inc., 235-259 (2007).
Parker P.M: Androgen Insensitivity Syndrome – A Bibliography and Dictionary for Physicians, Patients, and Genome Researchers (paperback pub. 19 Jul 2007, Icon Group International Inc.). This is one of a number of similarly titled books covering various medical conditions from the same editors and publisher. The publisher’s online catalogue lists it as Testicular Feminization – A Medical Dictionary, Bibliography, and Annotated Research Guide to Internet References. In spite of the inclusion of ‘patients’ in the title, it is virtually useless to that readership. The descriptive text portions are heavily oriented towards molecular genetics and the bibliography sections seem based entirely on the highly medical literature databases at the US National Institutes of Health.
The book fails to mention the support group for AIS that has been in existence since 1988. The book’s recommended standard search on AIS produces pages and pages of article titles, only one of which relates to psychosocial aspects (the Natarajan paper sanctioning non-disclosure of diagnostic information!). A section describing how to search a complementary medicine database at NIH containing, one is led to believe, material of more interest to patients, gives 13 titles, only two of which are of any relevance. Wikipedia tells us that Philip M. Parker “has patented a method to automatically produce a set of similar books from a template which is filled with data from database and internet searches, and is currently listed as the author of 85,000 books at Amazon.com.” See http://www.nytimes.com/2008/04/14/business/media/14link.html. As Cheryl Chase put it (personal communication, 31 March 2008): “Human hands (and judgment) never touched the book.”
Mendonca B.B., Domenice S., Arnhold I.J. and Costa E.M: 46,XY disorders of sex development. Clinical Endocrinology, [e-publication ahead of print] 2008.
Warne G.L. and Raza J: Disorders of sex development (DSDs), their presentation and management in different cultures. Reviews in Endocrine and Metabolic Disorders, 9: 227-36, 2008.
Berra M., Liao L-M., Creighton S. and Conway G.S: Long-term health issues of women with XY karyotype. Maturitas 65(2), 172-178, 2010.
Karkazis K., Jordan-Young R., Davis G. and Camporesi S: Out of Bounds? A Critique of the New Policies on Hyperandrogenism in Elite Female Athletes. The American Journal of Bioethics, 12(7): 3-16, 2012.
Related Conditions
October 21, 2018, by admin

Table of Contents: Introduction · ‘XX Female’ Conditions · Mayer Rokitansky Kuster Hauser (MRKH) Syndrome · Mullerian (Duct) Aplasia or Mullerian (Duct) Failure · Vaginal Atresia · MURCS Association · True Hermaphroditism · ‘XY Female’ Conditions · ‘Umbrella’ Term (MPH) · Overlap with AIS · Mis-Diagnosis · Importance of Correct Diagnosis · 1. Failure of Testes Formation · Swyer Syndrome · XO/XY Mosaicism · Testicular Regression Syndrome · 2. Under-development of Leydig Cells · Leydig Cell Hypoplasia · 3. Enzymatic Defects in Testosterone Biosynthesis · 5-alpha-reductase Deficiency

Introduction

Most of the page is devoted to various XY female intersex conditions other than AIS (AIS having been described in detail elsewhere on the site) and includes a brief mention of the dreaded ‘male pseudo-hermaphrodite’ umbrella term! But firstly we consider some XX female non-intersex conditions which share common features with some XY female intersex conditions, such as lack of vaginal development. It then looks briefly at true hermaphroditism. The page does not cover XX female intersex conditions such as Congenital Adrenal Hyperplasia (CAH). For information on this type of condition please refer to the web sites of the CAH support groups.

‘XX Female’ Conditions

These will only be considered briefly, since the main focus of this site is on ‘XY female’ conditions. However, they do have some features, such as vaginal hypoplasia (underdevelopment), in common with some XY female conditions, and we have members of our group who have MRKH. See Overview for a summary of normal female development. For resources related to XX conditions such as MRKH please refer to our (PDF format) Vaginal Hypoplasia Information Sheet. See also Patricia DeFrain’s Index to Articles, Books etc. on MRKH Syndrome (and Related Issues), a list of publications from Patricia’s (now defunct) website. See also Links to Other Sites.
Note: You need to have a PDF Reader on your PC in order to access PDF files. See the About this Site page for more info.

Mayer Rokitansky Kuster Hauser (MRKH) Syndrome: Absent or incomplete vagina, incompletely developed uterus with normal Fallopian tubes and ovaries. Normal breast development and pubic hair. Failure to menstruate.

News: The Androgen Insensitivity Syndrome Support Group (AISSG) is a UK-based group which started in 1988 (formalised in 1993). We provide information and support to young people, adults and families affected by XY-female conditions such as complete and partial Androgen Insensitivity Syndrome or AIS (old name Testicular Feminization Syndrome or Testicular Feminisation Syndrome). We also support those affected by Swyer's Syndrome (XY Gonadal Dysgenesis), 5-alpha Reductase Deficiency, Leydig Cell Hypoplasia, Mayer-Rokitansky-Kuster-Hauser (MRKH) Syndrome, Mullerian Dysgenesis, Mullerian Duct Aplasia, Vaginal Atresia, and other related conditions. The group has played a dual role in providing support and comfort to affected adults/families all over the world, as well as fighting for and contributing to a better understanding of the various conditions, and of how they should be 'treated' by the medical community. This site provides access not just to the UK group but to a consortium of worldwide support groups that have grown from the UK group (see How to Contact Us for contact details of the UK and other national groups).

Image restoration in digital photography - IEEE Transactions on Consumer Electronics
Title: Image restoration in digital photography
Author(s): Lam, EY
Citation: IEEE Transactions on Consumer Electronics, 2003, v. 49 n.
2, p. 269-274. Issued Date: 2003. URL: http://hdl.handle.net/10722/42927. Rights: Creative Commons Attribution 3.0 Hong Kong License.

E. Y. Lam: Image Restoration in Digital Photography, p. 269

Image Restoration in Digital Photography
Edmund Y. Lam, Member, IEEE

Abstract - This paper introduces some novel image restoration algorithms for digital photography, which has one of the fastest growing consumer electronics markets in recent years. Many attempts have been made to improve the quality of digital pictures in comparison with photographs taken on film. A lot of these methods have their roots in discrete signal and image processing developed over the last two decades, but the ever-increasing computational power of personal computers has made possible new designs and advanced techniques. The algorithms we present here take advantage of the programmability of the pixels and the availability of a compression codec commonly found inside digital cameras, and work in compliance with either the JPEG or the JPEG 2000 image compression standard.

Index Terms - Image Restoration, Resolution Enhancement, Digital Photography, JPEG, JPEG-2000.

I. INTRODUCTION

In traditional film-based photography, a picture is recorded when incident photons hit the silver halide film to create specks of silver that form a latent image of the scene. This is an irreversible process. Various chemical processes are involved, and it is not easy to modify the picture recorded on the film. Digital photography, on the other hand, is vastly different. Images are recorded on electronic sensors, either in the form of charge coupled devices (CCD) or complementary metal-oxide semiconductor (CMOS). The latter has shown a lot of promise as a viable, low cost solution because it enables the integration of sensing, processing, and memory. In both cases, the images are recorded directly in a digital medium, without the need for further chemical processes.
They can be stored permanently without degradation, while photographic film and prints will inevitably fade over time. The electronic sensors can also be reused once the previous image has been saved in memory, and therefore over time this will be more economical than constantly buying new film. The most important feature of digital photography, however, is the programmability of the pixels. It has already been demonstrated that pixel-level analog-to-digital conversion (ADC) is possible [1]. With further transistor size shrinkage in the future, more logic can be put on the pixel level to enhance the function of the cameras. Furthermore, we already have application-specific integrated circuits (ASIC) on digital cameras to perform operations such as white balancing, color processing, compression, storage, and transmission. These provide the bases with which we can manipulate digital images much more easily than with traditional photography.

(Edmund Y. Lam is with the Department of Electrical and Electronic Engineering, University of Hong Kong, Hong Kong; e-mail: elam@eee.hku.hk.)

In this paper, our goal is to explore methods by which we can restore degraded images by taking advantage of the programmability of digital cameras. We devise novel image restoration algorithms to be implemented in the camera alongside the compression of the images. We aim at creating a more versatile digital camera that would make use of the power of its programmability.

II. IMAGE PROCESSING PIPELINE

A prerequisite to designing effective image restoration algorithms for digital photography is a good understanding of the image acquisition process, especially under incoherent illumination, which is the typical case for consumer use of digital cameras. Linear system theory provides us a very powerful tool for the analysis of the optical system inside the cameras.
In particular, the frequency domain offers very compact relationships between the object and the image, establishing the preeminent role of Fourier analysis in the theory of incoherent imaging. In this section, we explore a simple model of the imaging system, and describe how restoration can be performed as suggested by linear system theory and Fourier analysis.

A. The Imaging System

Many steps are involved in the imaging process for digital cameras, including white balancing, demosaicing, color matrix correction, nonlinear conversion, compression, and storage. A block diagram of the different steps in the image processing pipeline inside a digital camera is shown in Fig. 1.

[Fig. 1. Simplified block diagram of the image processing pipeline.]

Here we focus on the image acquisition, which describes how images are formed. In its simplest form, the image acquisition system can be represented as in Fig. 2. The lens in the diagram represents a system of lenses. The object plane is located at a distance z_o from the entrance pupil, while the image plane is located at a distance z_i from the exit pupil. Our goal is to find the image intensity distribution given a certain object at the object plane. We assume the imaging system has the property that a diverging spherical wave, emanating from a point-source object, is converted by the system into a new spherical wave converging towards a point in the image plane. The locations of the ideal image point and the original object are related by a simple scaling factor, which is constant across all points in the image field of interest. Such systems are called diffraction-limited when the object of interest is confined to the region where the above property holds [2]. We further assume that the camera is used under incoherent light.

(Contributed Paper. Manuscript received April 7, 2003. 0098-3063/00 $10.00 © 2003 IEEE.)

One important property of an incoherent imaging system is that it is linear in intensity. Defining G_g(f_x, f_y) as the normalized frequency spectrum of the object and G_i(f_x, f_y) as the normalized frequency spectrum of the image, we have the important relationship [2]

    G_i(f_x, f_y) = H(f_x, f_y) G_g(f_x, f_y),    (1)

where H(f_x, f_y) is called the optical transfer function (OTF) of the system. For a diffraction-limited system with incoherent imaging, we have an analytical formula for the OTF in terms of the pupil function, wavelength, and z_i [2]. In all practical imaging systems, noise is inevitably present. It is common to model it as additive white Gaussian noise at the output, when the dominant source is the random thermal motion of electrons [3]. A uniformly distributed noise assumption is also common for the quantization noise distribution [4], while Poisson noise is typically used for astronomical images, which are taken at low light levels. Now, let us take equation (1) back to the space domain by inverse Fourier transforms, and sample the quantities at regular intervals in both the horizontal and vertical directions. This is useful because computers can only handle discrete images of finite size. We also include the contribution of additive noise to arrive at the following equation:

    i(x, y) = h(x, y) * g(x, y) + n(x, y),    (2)

where g(x, y) is the object, i(x, y) is the image, n(x, y) is the noise, and h(x, y) is the point spread function (PSF).

B. Aberrations

In the presence of a point-source object, if the wavefront leaving the exit pupil departs significantly from the ideal spherical shape, the imaging system is said to have aberrations [2]. By nature all aberrations are linear.
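As a concrete illustration of the discrete blur model in equation (2), the convolution can be carried out as a product with the OTF in the frequency domain, per equation (1). The sketch below uses NumPy with a hypothetical Gaussian PSF and noise level (both are assumptions made purely for illustration; the paper derives the real OTF from the pupil function):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy object g(x, y): a bright square on a dark background.
g = np.zeros((64, 64))
g[24:40, 24:40] = 1.0

# Hypothetical Gaussian point spread function h(x, y), normalized to
# unit sum; a stand-in for a real defocus PSF.
x = np.arange(64) - 32
X, Y = np.meshgrid(x, x)
h = np.exp(-(X**2 + Y**2) / (2 * 2.0**2))
h /= h.sum()

# Equation (1): multiply by the OTF H in the frequency domain, then
# return to the space domain; equation (2): add the noise n(x, y).
H = np.fft.fft2(np.fft.ifftshift(h))         # OTF of the PSF
i_clean = np.real(np.fft.ifft2(H * np.fft.fft2(g)))
n = 0.01 * rng.standard_normal(g.shape)      # additive white Gaussian noise
i_obs = i_clean + n                          # observed image i(x, y)
```

Because the kernel sums to one, the blur preserves the total intensity while smearing edges, which is exactly the loss of high-frequency detail that the restoration methods below try to recover.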
When a particular aberration is space-invariant, its effects can be incorporated by changing the OTF, so that the general imaging equations (1) and (2) are still applicable but with new H(f_x, f_y) and h(x, y). A simple focusing error is one of the most common space-invariant aberrations. In the literature, the inverse Fourier transform of the OTF is usually modeled as a circular disc with unit magnitude, the size of the disc being indicative of the amount of focusing error. This is a valid model from a geometric (ray) optics perspective. Its usefulness, however, is limited in cases when diffraction effects govern, for instance, in a lens with small relative aperture. For those situations, Fourier (wave) optics allows us to derive more accurate expressions. Only in the limiting case where the focusing error is severe does the geometric optics expression become a good approximation, because the effects of diffraction become negligible [2]. Readers are referred to [5] for discussions on the analytical expressions of the OTF for various amounts of defocus. It suffices to note that as the amount of defocus increases, the magnitude of H(f_x, f_y) for middle to high frequencies is smaller, sometimes even becoming negative. This corresponds to a loss of image details, and visually the effect is that the image is blurred. The OTFs of many other aberrations can also be derived or measured, such as spherical aberration, coma, astigmatism, curvature of field, and distortion, which are collectively called Seidel aberrations [6]. However, one should be aware that most of these aberrations are space-variant. Therefore, only "local" OTFs can be used. In some cases, if we know the type and amount of the aberration, we can perform a geometric coordinate transformation to turn the space-variant imaging equation into a space-invariant imaging equation similar to equations (1) and (2).

C.
Image Restoration

From the perspective of equation (2), the goal of image restoration is to recover the object g(x, y) given the observed image i(x, y). Since the imaging equation represents a convolutional relationship, the restoration problem is often called deconvolution. Generally for image restoration, the PSF h(x, y) is assumed to be known. The restoration problem is termed blind image deconvolution when h(x, y) is not known precisely, and is a subject of active research. See [7] for some recent developments in this area. When the imaging system has a negligible amount of noise, equation (1) leads to a very direct method for restoration.

[Fig. 2. Generalized model of an imaging system: object plane, lens, image plane.]

At each frequency, we define the spectrum of the restored object Ĝ_g(f_x, f_y) as

    Ĝ_g(f_x, f_y) = G_i(f_x, f_y) / H(f_x, f_y)   if H(f_x, f_y) ≠ 0,
                  = 0                             otherwise.    (3)

This is called pseudoinverse filtering. Although simple, this method has limited use because of the presence of noise. A better solution is found by using Wiener filtering. This is computed with

    Ĝ_g(f_x, f_y) = [ H*(f_x, f_y) / ( |H(f_x, f_y)|² + Φ_n(f_x, f_y)/Φ_g(f_x, f_y) ) ] G_i(f_x, f_y),    (4)

where Φ_g(f_x, f_y) and Φ_n(f_x, f_y) are the power spectra of the object and noise respectively.

III. GENERAL SETTING FOR RESTORATION IN CAMERA

Because of the large quantity of data captured in the camera image, compression is necessary. Each camera is therefore equipped with a compression engine, while the decompression is done after the images are downloaded to a computer host. It is advantageous to see whether it is possible to achieve some restoration akin to equation (4) as part of the compression and decompression process. In this section we will attempt a general formulation, while in sections IV and V we will deal more specifically with JPEG [8] and JPEG 2000 [9] respectively, which are the two most common compression standards. The method comes in using different encoding and decoding parameters. Consider Fig.
3, which represents the process of blurring, compression, quantization, decompression, and deblurring. Quantization is represented as the addition of quantization noise [4]. Typically, we use a symmetric pair of encoder and decoder, i.e., if i(x, y) is the image before the encoder, then

    D[E[i(x, y)]] = i(x, y),    (5)

or, to simplify notation,

    D = E^{-1}.    (6)

Alternatively, we should rather use another decompressor D', which incorporates the effect of deblurring as well. This principle has been applied for vector quantization design in the past [10]. Its feasibility can be argued from the following thought experiment: suppose that the quantization effect is small. The blurring causes some loss in high frequency components. If the compression is based on the frequency components, we should design a decompressor that has a boosting effect in the high frequency components with respect to the compressor. The deblurring operation is then "absorbed" into the decompression. In practice, the quantization noise may not be negligible, especially for low bit-rate transmission. Furthermore, the compression is often not in Fourier frequency components; it is not for JPEG and JPEG 2000. The two sections below deal with the specific algorithms possible with the two compression standards. Related works that deal specifically with out-of-focus images have been reported in [5, 11].

IV. IMAGE RESTORATION WITH THE JPEG ALGORITHM

A. Motivation

In JPEG, the first step is to divide an image into 8x8 pixel non-overlapping blocks. Each block is then subjected to a discrete cosine transform (DCT), as follows: let i(p, q) denote a pixel value, p = 0,...,7, q = 0,...,7, within an 8x8 block. The DCT coefficients are then

    I(m,n) = (c(m) c(n) / 4) Σ_{p=0}^{7} Σ_{q=0}^{7} i(p,q) cos[(2p+1)mπ/16] cos[(2q+1)nπ/16],    (7)

with m = 0,...,7, n = 0,...,7,
and

    c(m), c(n) = 1/√2  for m, n = 0;  1  for m, n > 0.    (8)

The coefficients are then quantized according to the quantization matrix Q_e, by rounding off the quotients when the DCT coefficients are divided entrywise by the corresponding element in the matrix. They are then entropy-coded, with either modified Huffman coding or arithmetic coding, before transmission. Upon receiving the compressed data, the decoder reverses the process for the entropy coding, dequantizes the coefficients by multiplying entrywise with the matrix Q_d, and performs the inverse DCT [8]. The compression is considered lossy because of the quantization process. Usually we use the same quantization matrix for both encoding and decoding, i.e., Q_e = Q_d. As an example, the JPEG committee uses a matrix that attempts to take into account some human visual system properties. One then needs to transmit the matrix as part of the compressed image bitstream to the decoder.

[Fig. 3. Image blurring and compression model; quantization is shown as additive quantization noise.]

To adjust the quality of the compression, one usually scales the whole quantization matrix by a constant factor. This is called the quality factor. However, as argued in the previous section, in the presence of blur or quantization noise, using a decompressor that is the inverse of the compressor may not be optimal. In the case of JPEG, that means we should not always set Q_e = Q_d. This is explained in [5] and summarized below. Let I_e(m,n) denote the quantized coefficients and I_n(m,n) the quantization noise. To compare the original and the decompressed image, we need to calculate the mean-square error (MSE) in the space domain, which can be done in the DCT domain because of its unitary nature and Parseval's theorem [12]. Therefore,

    MSE = Σ_{m,n} [ (Q_e − Q_d) I_e + I_n ]²,    (9)

where it is understood that the above quantities all have arguments (m,n).
We can see that when I_n(m,n) is small and I_e(m,n) is large, it is reasonable to set Q_e = Q_d to generate a small MSE.

B. Algorithm

We seek to adjust the dequantization matrix by letting it be

    Q_d(m,n) = a(m,n) Q_e(m,n).    (10)

In order to calculate a(m,n), we assume that we have at our disposal a collection of images that have both the blurred and restored versions available. Let I_b(m,n) be the vector containing the DCT coefficients of the blurred images, and I_r(m,n) be the corresponding vector for the restored images. We seek to find a(m,n) where I_b(m,n) a(m,n) ≈ I_r(m,n). The best value of a(m,n), in the mean-square sense, can be found by [5]

    a(m,n) = ⟨I_b(m,n), I_r(m,n)⟩ / ⟨I_b(m,n), I_b(m,n)⟩,    (11)

where ⟨x, y⟩ denotes the inner product of x and y. We can further improve this algorithm if we have some prior estimate of the blur. In this case, we can first process I_b(m,n) with a restoration filter, such as a Wiener filter with the estimated h(x, y), to produce Î_b(m,n). Equation (11) is then computed with Î_b(m,n) replacing I_b(m,n). Because Î_b(m,n) is now closer to I_r(m,n), but usually not identical due to limitations in the restoration filter and blur estimate, a(m,n) would be more stable and the approximation I_b(m,n) a(m,n) ≈ I_r(m,n) would be closer to equality. Note however that this would add extra computational burden to the digital camera and may not be feasible for those cameras that need to conserve power as much as possible. In both cases, since image restoration is an ill-posed deconvolution problem, a minimum MSE solution is known to be highly sensitive to noise, especially at high frequencies. It is very important to incorporate some regularization constraint. A technique to this effect is proposed in [13, 14]. Let L(m,n) be a highpass filter in the DCT domain. We measure the amount of desired high frequencies in all the blocks by (12), where the subscript n again is used to denote the quantization noise.
Using a Lagrange multiplier formulation that attempts to balance the MSE while staying close to the amount of desired high frequency components, and simplifying, we have Combining this with equation (12), therefore, 273 . .. = k,(m,ni ,. t . a s g n , ( l , ( m l n ) ) l e d ( m , n ) , . . . (16) where sgn(x) denotes'the SIgnbf x.:. It'is zero if x = 0 . a is a design parameter that we 'can make 'use of here 'to further improve the restored image quality. Ordinarily, we can pick a=0.5 in the absence of knowledge about the probability density function of q . However, we know t h a t . I , the unquantired DWT coefficient, is usually modeled as Laplacian [15], i . e . , p ( 1 ) = - e p 4 4 . (17) 2 (Note that this does not mean the quantized coefficient I , is also Laplacian.) As such, a smaller value of a can reduce the overall distortion. The question arises as to how to estimate p . If we have the values of I ( m , n ) available, we can compute the maximum likelihood (ML) estimation. Let l ( p ) denote the log likelihood. With the probability density function in equation (1 7 ) , we have M N U ) = C C O o g p - w - P I I ( ~ ~ ~ ) I I (18) m 4 n=1 Setting l ' ( p ) = 0 , we have E. Y . Lam: Image Restoration i? Digital Photopaphy V. IMAGE RESTORATION WITH THE JPEG 2000 ALGORITHM A . Motivation The P E G image compression is a very successful standard, as evident from its proliferation in many areas o f . image representation, storage, and transmission. Yet technology is constantly evolving, and new applications are being devised. The emergence and wide adoption of the Internet prompts new developments in image compression, with its demand for progressive transmission in quality and resolution. Electronic commerce also pushes for better image security, giving rise to many schemes for digital watermarking and steganography. 
Digital photography is also a new application of the JPEG compression standard, and is heavily demanding of a more efficient compression scheme with a relatively low complexity. Progressive coding of images is also a desirable quality because of the limited memory available in the camera, which calls for a need to discard the less significant information of the existing pictures to make room for a new photograph, if necessary. To address these needs, much research has shown that wavelet-based compression schemes can produce better image quality than DCT-based compression at the same bitrate, while providing many enhancements to the features desired. That is primarily because wavelet compression is ideally suitable for multiresolution transmission due to its subband decomposition nature. At the same time, a breakthrough in using bit-plane coding allows for progressive decoding, and when combined with an integer wavelet basis, can provide a range of lossy to lossless transmission of an image. A more elaborate bitstream syntax could also allow for better region-of-interest coding. As a response, the JPEG 2000 committee was formed to develop a new, advanced standardized image coding system to serve new applications. The JPEG 2000 standard is divided into two parts: part I defines the minimum set of functionality and features, while part II includes advanced techniques and algorithms. Details of the standard can be found in [9]. For our purpose here, we note that although the wavelet transform is much different from the DCT, they both fit the blur model depicted in Fig. 3. In the case of JPEG 2000, we try to alter the wavelet transform coefficients given a certain wavelet basis.

B. Algorithm

Similar to the DCT domain image restoration algorithm, our wavelet domain restoration also takes advantage of the flexibility of using different quantization for compression and decompression. We begin with a reference quantization step size for the discrete wavelet transform (DWT) coefficients.
This step size can be different for different subbands. Using I_b(m,n) and I_r(m,n) from the training data set, we can use equation (11) to find the best value of a(m,n) in the mean-square sense. Again, if we have some estimate of the blur, we can improve the algorithm with the use of Î_b(m,n), similar to the previous section. The dequantization is somewhat different. In the standard, the process is governed by an equation of the form of (16), with the subband quantization step size in place of Q_d. When we only have I_e(m,n) available instead of I(m,n), we can still compute the ML estimation with a probability mass function. The ML estimation still has an analytical expression, although the expression is somewhat cumbersome. A similar derivation can be found in [16]. Once we have an estimate of μ, we can use it to find the centroid of each partition, which is the optimal decoding according to the Lloyd-Max condition [4]. After some arithmetic manipulation, we obtain the offset a; note that 0 < a < 0.5 [11].

VI. CONSIDERATIONS FOR IMPLEMENTATION AND EXTENSIONS

In implementing the above algorithms, we need to have training images to assist us in adjusting the quantization levels. Depending on the application, this may or may not be totally feasible. One alternative would be to have models of the blur and simulate the adjustments in quantization levels necessary. The advantage that these algorithms bring to us is then an efficient way of performing the restoration without actually computing a filtering operation, such as that in equation (4). We can pre-compute some common blurs, such as defocusing, and store the quantization changes in the camera. Another point to note about these algorithms is that they provide a basic setting for restoration implemented together with the compression. There are at least two areas where this work can be extended:

1. Restoration of images with linear space-variant aberrations.
As explained earlier, a number of optical aberrations are space-variant, and some can be converted to space-invariant degradations by suitable geometric coordinate transformations. To incorporate these in our restoration scheme, we need to include both the forward and inverse coordinate transformations before and after the linear space-invariant equivalence of the blur. The challenge is then to devise algorithms that can embed these additional operations in the transform domain used for compression.

2. Restoration of color images. As it stands, our algorithm deals with monochrome images, and therefore should only be applied on the luminance channel of a color image. Since almost all consumer-level digital cameras produce color pictures, it is important to extend our algorithms to color images. Color image processing requires much more sophistication than processing the three color planes separately. First of all, one needs to be concerned with how the color image is acquired, for instance, the particular demosaicing algorithm associated with a certain color filter array. Second, one should have a prudent choice of the color space in which to carry out the restoration. Traditionally, a luminance-chrominance color space such as YUV or YCbCr is preferable to RGB or CMY color spaces. Finally, whatever the color space, the aberrations on the three color planes are related, and this provides extra information for image restoration. How that information can be put to good use is itself a large area for further studies.

VII. CONCLUSION

The digital photography industry is experiencing exponential growth as digital cameras become more viable competitors with film-based cameras. The push for advancement in technology has been extensive and diverse,
with a lot of research focusing on increasing the resolution of the sensor, designing new shapes of the pixels, or improving the readout of the image for faster frame rates. In this paper, we concentrated our efforts on showing that digital photography allows complex calculations using the pixel intensities, which enables us to implement image processing algorithms inside a digital camera to restore the quality of degraded images. This would be a definitive advantage over film-based photography.

REFERENCES
[1] D. Yang, B. Fowler, and A. El Gamal, "A Nyquist rate pixel level ADC for CMOS image sensors," IEEE J. Solid-State Circuits, vol. 34, no. 7, pp. 348-356, Mar. 1999.
[2] J. W. Goodman, Introduction to Fourier Optics, 2nd ed., McGraw-Hill, New York, 1996.
[3] K. R. Castleman, Digital Image Processing. Prentice Hall, Englewood Cliffs, New Jersey, 1996.
[4] A. Gersho and R. Gray, Vector Quantization and Signal Compression. Kluwer Academic Publishers, Boston, 1992.
[5] E. Y. Lam and J. W. Goodman, "Discrete cosine transform domain restoration of defocused images," Appl. Optics, vol. 37, no. 26, pp. 6213-6218, Sep. 1998.
[6] H. H. Hopkins, Wave Theory of Aberrations. Oxford University Press, Oxford, 1950.
[7] E. Y. Lam and J. W. Goodman, "An iterative statistical approach to blind image deconvolution," J. Opt. Soc. of Amer. A, vol. 17, no. 7, pp. 1177-1184, Jul. 2000.
[8] W. Pennebaker and J. Mitchell, JPEG Still Image Data Compression Standard. Van Nostrand Reinhold, New York, 1992.
[9] D. Taubman and M. Marcellin, JPEG 2000: Image Compression Fundamentals, Standards and Practice. Kluwer Academic Publishers, Boston, 2001.
[10] D. Sheppard, A. Bilgin, M. Nadar, B. Hunt, and M. Marcellin, "A vector quantizer for image restoration," IEEE Trans. on Image Proc., vol. 7, no. 1, pp. 119-124, Jan. 1998.
[11] E. Y. Lam, "Digital restoration of defocused images in the wavelet domain," Appl. Optics, vol. 41, no. 23, pp. 4806-4811, Aug. 2002.
[12] K. Rao and P.
Yip, Discrete Cosine Transform: Algorithms, Advantages and Applications. Academic Press, New York, 1990.
[13] R. Pmf, Y. Dine, and A. Baskun, "JPEG dequantization array for regularized decompression," IEEE Trans. on Image Proc., vol. 6, no. 6, pp. 883-888, Jun. 1997.
[14] W. Philips, "Corrections to 'JPEG dequantization array for regularized decompression'," IEEE Trans. on Image Proc., vol. 7, no. 12, p. 1725, Dec. 1998.
[15] S. Mallat, "A theory for multiresolution signal decomposition: the wavelet representation," IEEE Trans. on Patt. Anal. and Mach. Intel., vol. 11, no. 7, pp. 674-693, Jul. 1989.
[16] J. R. Price and M. Rabbani, "Biased reconstruction for JPEG decoding," IEEE Sig. Proc. Lett., vol. 6, no. 12, pp. 297-299, Dec. 1999.

Edmund Y. Lam (S'97-M'00) received the B.S. degree in 1995, the M.S. degree in 1996, and the Ph.D. degree in 2000, all in electrical engineering from Stanford University, U.S.A. At Stanford, he was a member of the Information Systems Laboratory, conducting research for the Stanford Programmable Digital Camera project. His focus was on developing image restoration algorithms for digital photography. Outside Stanford, he also consulted for industry in the areas of digital camera systems design and algorithms development. Before returning to academia, he worked in the Reticle and Photomask Inspection Division (RAPID) of KLA-Tencor Corporation in San Jose as a senior engineer. His responsibility was to improve the core die-to-die and die-to-database inspection algorithms, especially for phase shift masks. He is now an Assistant Professor of Electrical and Electronic Engineering at the University of Hong Kong. His research interests include optics, defect detection, image restoration, and superresolution.
This Month in Archives of Dermatology

Diagnostic Accuracy of Patients in Performing Skin Self-examination and the Impact of Photography

Early identification and excision of malignant melanomas while they are relatively thin may be important in reducing mortality, and skin self-examination (SSE) is routinely recommended to patients as a means to detect new or changing nevi. In this study, Oliveria et al determined the sensitivity and specificity of SSE to detect new and changing moles in highly motivated patients with 5 or more atypical nevi. The sensitivity of SSE to detect new and cosmetically altered moles was 62%, which increased substantially to 72% with the use of digital photography. The specificity was 96% without photographs and 98% with photographs, suggesting that patients had few false positives both with and without access to baseline digital photographs. See page 57.

Sentinel Node Biopsy for High-Risk Nonmelanoma Cutaneous Malignancy

The presence of regional lymph node metastasis is the most powerful prognostic factor for predicting recurrence and survival in patients with most solid tumors whose primary spread is lymphatic. Limited, selected, less invasive surgical methods for detection of occult metastases have gradually replaced radical lymph node dissection. Indeed, lymphatic mapping and sentinel lymph node biopsy (SLNB) have become the standard of care for melanoma at virtually all major melanoma centers. In this consecutive clinical case series, Wagner et al demonstrate the feasibility of SLNB in 24 patients with selected high-risk skin cancers, including squamous cell carcinoma, Merkel cell carcinoma, and adenocarcinoma.
See page 75

Early Detection of Asymptomatic Pulmonary Melanoma Metastases by Routine Chest Radiographs Is Not Associated With Improved Survival. The lungs represent a common site of cutaneous melanoma metastasis. Chest radiography (CR) is routinely used in postmelanoma surveillance, despite the fact that its utility and impact on survival are still uncertain. In this retrospective analysis, Tsao et al use a historical cohort of patients with melanoma to determine if there is an apparent survival prolongation associated with earlier detection of asymptomatic pulmonary metastasis by routine CR. The high false-positive rate and lack of a survival advantage in patients in whom asymptomatic pulmonary metastases were discovered further challenge the use of CR in postmelanoma surveillance. See page 67

Photodynamic Therapy Using Topical Methyl Aminolevulinate vs Surgery for Nodular Basal Cell Carcinoma: Results of a Multicenter Randomized Prospective Trial. Photodynamic therapy (PDT) is increasingly used as a noninvasive treatment for nodular basal cell carcinoma (nBCC) despite the lack of sound evidence for this therapy. Topical PDT has been most commonly practiced with 5-aminolevulinic acid (ALA). Methyl aminolevulinate (MAL) offers several advantages over ALA, including enhanced lipophilicity, improved skin penetration, and specificity for neoplastic cells. In this prospective, randomized multicenter trial, Rhodes et al compare topical MAL-PDT with simple surgical excision for primary nBCC, demonstrating the efficacy of this treatment as well as the cosmetic advantages over surgery. See page 17

Treatment of Refractory Pemphigus Vulgaris With Rituximab (Anti-CD20 Monoclonal Antibody). Pemphigus vulgaris (PV) is a severe antibody-mediated autoimmune bullous disease that involves skin and mucous membranes. The mainstays of therapy remain systemic corticosteroid treatment and various other adjuvant immunosuppressive drugs.
Rituximab is a genetically engineered chimeric murine/human anti-CD20 monoclonal antibody that targets B cells. In this case series, Dupuy et al review the responses in 3 patients with severe, refractory PV to weekly intravenous rituximab infusions. Each patient had a clinical response following the first series of 4 infusions, which allowed for tapering of corticosteroids and other immunosuppressant therapies. See page 91

SECTION EDITOR: ROBIN L. TRAVERS, MD. Figure: A, Erosive lesions involving the leg. B, Noticeable clinical remission at week 10 after treatment with rituximab.

THIS MONTH IN ARCHIVES OF DERMATOLOGY (REPRINTED) ARCH DERMATOL / VOL 140, JAN 2004 WWW.ARCHDERMATOL.COM ©2004 American Medical Association. All rights reserved.
work_j5yf63e7ofamdk7joa2lu64qya ---- REMOVABLE PROSTHODONTICS AT A GLANCE

LIBRARY CORNER: An electrifying selection of virtual reading. Compiled by Helen Nield. This issue's selection is made up entirely of ebooks. If you have not tried an ebook please email library@bda.org or call 020 7563 4545 for access details. Members can read these books online at any time or download most titles for up to a week before having to download them again. Visit bda.org/ebooks to find all of these books and more.

REMOVABLE PROSTHODONTICS AT A GLANCE. James Field and Claire Storey; 2020; Wiley Blackwell; 120 pp; ebook ISBN: 9781119510741

'Providing prostheses that are satisfactory to the patient is a challenge – and there are many reasons why patients can be dissatisfied with the finished result. Many relate to social aspects of patients' lives … there is often a common thread running through [the diversity of complaints] – lack of information exchange and an inappropriate level of patient expectation. We would therefore argue that the most important skill when making satisfactory removable prostheses is that of communication.'

This new addition to Wiley's 'At a Glance' series covers the difficult topic of removable dentures, both complete and partial. It covers the basics in short, easy-to-read chapters full of simple diagrams and easily digestible text, along with an accompanying website with quiz questions and downloadable assessment forms. The book consists of 47 short chapters looking at everything from the function of removable prostheses to the positioning of the dentist at different stages of prosthetic treatment, and all aspects of production, techniques and problems. The appendices consist of proformas for complete denture assessment and restorative assessment, referral letters, a partial denture design sheet and a reading list. An excellent book that can be used both as an introduction to the subject and as a revision guide. https://tinyurl.com/ebooksglance

IMPLANT RESTORATIONS: A STEP-BY-STEP GUIDE (4TH EDITION). Carl Drago; 2019; Wiley-Blackwell; 536 pp; ebook ISBN: 9781119538110

'The purpose of this textbook is to provide clinicians and dental laboratory technicians with a step-by-step approach to the treatment of certain types of edentulous and partially edentulous patients with dental implants. Six types of patient treatments with multiple implant loading protocols, have been featured.'

This new edition of a well-known textbook has been fully updated and expanded since it was last published in 2014. New to this edition are digital dentistry techniques that have become available in the last few years. The text is full of images including CT scans and pictures from software programs as well as photographs and diagrams. Each chapter begins with a literature review and ends with a bibliography and a fully comprehensive clinical case presentation. Treatments that are covered are: 'Treatment of edentulous mandibular patients', 'Replacement of single teeth with CAD/CAM implant restorations', 'Fixed dental prostheses: retreatment of a patient with a fractured implant-retained fixed dental prosthesis', 'Accelerated treatment protocol of a patient with edentulous jaws and CAD/CAM titanium framework/fixed hybrid prostheses', 'Treatment of edentulous patients with immediate occlusal loading: conventional surgical and computer-generated surgical guides', 'Treatment of edentulous patients with immediate occlusal loading: conventional surgical/prosthetic protocols', and 'Treatment of partially edentulous patients with immediate non-occlusal loading protocols'. The book finishes with a chapter covering guidelines and maintenance procedures for fixed, full-arch, implant-retained prostheses. https://tinyurl.com/ebookimplants

ESSENTIALS OF DENTAL PHOTOGRAPHY. Irfan Ahmad; 2019; Wiley-Blackwell; 360 pp; ebook ISBN: 9781119312147

'[M]any clinicians are reticent about incorporating dental photography into their daily practice due to uncertainty about the choice of equipment, a steep learning curve and initial capital expenditure. These fallacious notions are fuelled by the plethora of dental literature on the subject, some scientifically based, some anecdotal, while others perversely complicate what is basically a simple procedure. It is the endeavour of this book to demystify many of these erroneous beliefs by proposing protocols for standardising photographs that are invaluable for intra- and inter-patient comparison.'

This title is a very practical text written by the author of previous books and articles on the subject, including a digital photography series of articles that appeared in the BDJ in 2009. This new book covers the basics of photography that a dental practitioner would need to start taking their own images. It is split into three sections: Equipment and concepts; Photographic set-ups; and Processing images. The book looks not only at intra- and extra-oral imaging but also covers the taking of full head portraits, 'bench images' which incorporate macrophotography and imaginative shots for promotional and teaching purposes, and some special advanced applications such as 'taking additional images for elucidating certain features, or analysing images for effective communicating with patients and fellow colleagues.' The book finishes with a look at how images can be used both in terms of clinical practice documentation and for marketing purposes. https://tinyurl.com/ebookphotography

QUINTESSENCE OF DENTAL TECHNOLOGY (QDT) VOLUME 43, 2020. Silas Duarte, Jr (Editor); 2020; Quintessence Publishing Co; 257 pp; ebook ISBN: 9781647240141

'Digital dentistry and facially generated treatment planning (smile design) are closely related. Natural reconstructions with application of digital facial scanning and analysis as well as other digital technologies vital to all phases of clinical dentistry are required in current esthetic restorative treatment.' – from the article Digital minimally invasive esthetic treatment.

This annual publication comprises new articles on various aspects of dental technology by a variety of different authors. This latest offering contains articles by, amongst others, Mario Alessio Allegri, James Choi, Douglas Terry, Cristiano Soares and Naoki Hayashi. A bumper edition, it includes a short editorial on computational photography and 17 different articles. Minimally invasive dentistry is represented by articles on 'The pillars of full-mouth rehabilitation: a minimally-invasive, low-cost approach to prosthetic treatment' and 'Digital minimally invasive esthetic treatment'. Amongst other topics there are pieces on subjects as diverse as digital workflow, self-glazing liquid ceramics and photopolymerisation. If you can ignore the many pages of adverts between the articles then this is a great book to browse through to pick up ideas on current technology and marvel at the art of the possible. https://tinyurl.com/ebooktechnology

914 BRITISH DENTAL JOURNAL | VOLUME 228 NO. 12 | JUNE 26 2020 | UPFRONT © 2020 British Dental Association. All rights reserved.

Public back calls to extend sugar tax

Data collected by the Oral Health Foundation as part of National Smile Month shows that 61% of the United Kingdom support an expansion of the current Soft Drinks Industry Levy – also known as the sugar tax. Milkshakes, fruit juices, smoothies and alcoholic mixers, which are exempt under the current sugar tax, all received equal backing as possible routes for an extension. A previous report looking into some of the drinks exempt from the sugar tax found that half contain a child's entire recommended daily sugar intake, which is almost 19 g or nearly five teaspoons.

Dr Nigel Carter OBE, Chief Executive of the Oral Health Foundation, believes the Soft Drinks Industry Levy has had a positive impact on the nation's health and supports calls to extend the sugar tax further. Dr Carter said: 'The sugar tax has been a significant success, not only for oral health, but for general health and wellbeing too. The more sugar we can continue to cut from drinks, the healthier our population will be. It will allow more of us to be free of the diseases and conditions linked to sugar, and it will also save the NHS millions every year.

'The lack of progress by government to build on the current sugar tax proposals has been extremely disappointing. Expanding the sugar tax to include milkshakes, smoothies and fruit juices is a relatively small step but the impact it could have would be enormous.'

The sugar tax was introduced two years ago and applies to drinks with more than 8 g of added sugar per 100 ml. The tax forced manufacturers to lower their sugar content or face a tax rate equivalent to 24p per litre. As a result, many of them did. So much so that the new levy brought £800 million less than it was forecast to. Since then, the sugar content of drinks sold has fallen by 21.6% – equating to more than 30,000 tonnes of sugar a year.

'The impact that sugar has on teeth is horrific,' said Dr Carter. 'It is why one-in-three adults in the UK have tooth decay and it is the reason why around 35,000 children are admitted to hospital each year.' During National Smile Month, the Oral Health Foundation is challenging the public to cut its added sugar intake. Advice on sugar swaps is available at www.smilemonth.org.

© Helen Davies/iStock/Getty Images Plus
work_jcbqhrzfhfbcbfwr23vuxdvq2a ---- Digital transformations and the viability of forensic science laboratories: crisis-opportunity through decentralisation

Forensic Science International. Accepted manuscript. Eoghan Casey, Olivier Ribaux, School of Criminal Justice, Faculty of Law, Criminal Justice, and Public Administration, University of Lausanne, Switzerland; Claude Roux, Centre for Forensic Science, School of Mathematical and Physical Sciences, University of Technology Sydney, Sydney, Australia.

Rapid proliferation of technology in modern society is changing the way people behave and is dramatically increasing the traceability of individuals. In particular, new portable and miniaturized technologies put powerful capabilities in everyone's pocket and capture large quantities of detailed information in a digital format. These technological advances are undermining the traditional business model of forensic science laboratories by placing forensic capabilities directly in the hands of police. Simultaneously, these advances create new opportunities for laboratories to harness the power of big data and play a more central role in solving problems collaboratively with the police and other stakeholders.

Resistance is futile – embrace the change

The forces driving decentralisation of forensic capabilities are irresistible, and will continue to grow in the future, particularly for digital traces. This is also true for many traditional forensic activities. Inevitably, problems are arising when non-scientists apply forensic capabilities that they do not fully understand, resulting in mistakes and missed opportunities.
A subtler problem is that diffusion of forensic expertise makes it more difficult for individuals to learn about existing solutions to problems they encounter. Ultimately, individual investigators cannot cope with the growing quantities of data, rapid advances in technology, and varied contexts and criminal behaviours. Forensic science laboratories have the opportunity (and dare we say duty) to mitigate these problems by embracing the decentralisation movement. Laboratories can accomplish this by forming a forensic ecosystem to amplify decentralised forensic capabilities, stream data from distributed sources into data lakes for big data analysis, and systematically distil and circulate knowledge throughout the decentralised forensic ecosystem.

Avoiding Kodak syndrome – digital transformation

The Eastman Kodak Company faced similar challenges, struggling to adapt to the decentralisation of multimedia-making equipment and self-produced digital photographs and videos. Kodak was the historical global market leader but, in the transition to digital photography, failed to transform its processes, which were based on the centralization of its laboratory. The company ultimately filed for bankruptcy in 2012, and has continued to decline despite various efforts to restructure its business. This Kodak syndrome illustrates why what are called "digital transformations" are now essential to the viability of an organization. To avoid the Kodak syndrome, a forensic science laboratory must undergo a digital transformation, changing its processes and culture to reinforce decentralised forensic capabilities and cultivate big data analysis as a core function, expanding its view of forensic science beyond the courtroom to advance problem-oriented and intelligence-led strategies. Forensic science laboratories, which have focused on technologies centrally and routinely operated by technicians, are in great trouble.
Many routine activities are being relatively easily transferred to local personnel while remaining compliant with quality assurance frameworks, effectively eliminating laboratories as the middle man. It is argued that the laboratories which are better able to adapt and survive are those with a stronger focus on their forensic role based on the knowledge gained through research and collaborative work with the police and the justice system. Laboratories must adapt to this decentralisation movement, integrating in their model the loss of their monopoly over the use of certain technologies, and providing knowledge on how to apply them adequately in a decentralised way. Furthermore, laboratories should strengthen their role in the forensic ecosystem by keeping a global vision of the growing amount of data produced by decentralised devices in their daily use. The implementation of a two-way transfer of information and knowledge (from the field to the laboratory and from the laboratory to the field), as well as the development of big data processing and interpretation capacity, are key aspects of the needed digital transformation.

Forward leaning laboratories

Beyond the development of knowledge to help solve complex cases, laboratories have the opportunity (and dare we say duty) to play a central role in moving towards a more proactive and strategic approach to dealing with crime in modern society. By keeping a global view and extracting solid knowledge from the particular situations investigated, laboratories can be at the forefront of discovering crime patterns and reducing linkage blindness. This forward leaning posture requires laboratories to support much more centrally a wide variety of decision-making processes contributing directly to abating crime, strengthening security, and reinforcing the criminal justice system. This strategy is already relevant for many traditional forms of crime.
This becomes much more important because of the amplification of the strong serial nature of crimes perpetrated in digital environments. Many research and operational projects already illustrate the concept. They are linked to areas as diverse as high volume crimes (Rossy et al. 2013), illicit markets (illicit drugs, counterfeit materials) and false ID documents (Baechler et al. 2015), as well as internet and computer-related crimes (Pineau et al. 2016). However, there is still a long way to go until this extraordinary potential can be more generally exploited. Laboratory managers and policymakers are still mainly focused on quality management of existing processes. The debate around digital transformations in forensic science remains weak. We encourage relevant communities to look beyond the court process and adopt a proactive attitude in the face of the rapidly coming changes. By doing so, they will quickly realise that forensic scientists need to be reintroduced, from the crime scene to the laboratory, to value forensic knowledge and intelligence, and reinforce the use of decentralised technology and big data analysis. In doing so, they will support the development of broader strategies to deal with crime and security in a globalized, digitized and rapidly changing society. These broader strategies will further the objectives of governments that have recognized the need for systematic solutions to modern societal problems of crime and security.

Baechler, S., M. Morelato, O. Ribaux, A. Beavis, M. Tahtouh, P. Kirkbride, P. Esseiva, P. Margot and C. Roux (2015) 'Forensic intelligence framework. Part II: Study of the main generic building blocks and challenges through the examples of illicit drugs and false identity documents monitoring', Forensic Science International Vol 250, 44-52, 10.1016/j.forsciint.2015.02.021
Pineau, T., A. Schopfer, L. Grossrieder, J. Broséus, P. Esseiva and Q.
Rossy (2016) 'The study of doping market: How to produce intelligence from Internet forums', Forensic Science International Vol 268, 103-115, 10.1016/j.forsciint.2016.09.017
Rossy, Q., S. Ioset, D. Dessimoz and O. Ribaux (2013) 'Integrating forensic information in a crime intelligence database', Forensic Science International Vol 230, 137-146, 10.1016/j.forsciint.2012.10.010

work_j7tatjfc3fcmbbmquusohi4f5y ---- First record of the family Prodoxidae (Lepidoptera: Adeloidea), Lampronia flavimitrella (Hübner), reported from Korea

Contents lists available at ScienceDirect. Journal of Asia-Pacific Biodiversity 9 (2016) 212–214. Journal homepage: http://www.elsevier.com/locate/japb

Original article: First record of the family Prodoxidae (Lepidoptera: Adeloidea), Lampronia flavimitrella (Hübner), reported from Korea. Minyoung Kim (a), Young-Mi Park (b,*). (a) Incheon International Airport Regional Office, Animal and Plant Quarantine Agency, Incheon, Republic of Korea; (b) Department of Plant Quarantine, Animal and Plant Quarantine Agency, Gimcheon-si, Gyeongsangbuk-do, Republic of Korea.

Article history: Received 16 February 2016; Received in revised form 25 February 2016; Accepted 28 February 2016; Available online 3 March 2016. Keywords: Lampronia; Lepidoptera; new record; Prodoxidae; Taxonomy. * Corresponding author. Tel.: +82 54 912 0639; fax
Copyright © 2016, National Science Museum of Korea (NSMK) and Korea National Arboretum (KNA). Production and hosting by Elsevier. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/). E-mail address: insectcola@korea.kr (Y.-M. Park). Peer review under responsibility of National Science Museum of Korea (NSMK) and Korea National Arboretum (KNA). doi: 10.1016/j.japb.2016.02.008. pISSN 2287-884X; eISSN 2287-9544.

Abstract: The family Prodoxidae is recorded for the first time from Korea, reporting Lampronia flavimitrella (Hübner), which was collected on Jeju-do Island. A redescription of the adult is given, with images of the adult and male genitalia.

Introduction

The family Prodoxidae (Lepidoptera) is commonly known as the "yucca moth", because the moth is well known as an obligate pollinator as well as an herbivore (Pellmyr et al 1996). It is one of the primitive monotrysian Lepidoptera, comprising 98 species belonging to nine genera worldwide. The family is divided into two subfamilies: Prodoxinae and Lamproniinae (van Nieukerken et al 2011). Prodoxidae is characterized by a pair of stellate signa in the corpus bursae, a rounded sternum posteriorly, and a triangular tergum VII in the female genitalia. The family is almost entirely distributed in the Holarctic Region, and is mostly diversified in the Nearctic Region (Davis 1999a,b). Its members are known to be endophagous in herbs and shrubs, boring inside plant shoots, leaves, buds, flower receptacles, fruits, or seeds (Nielsen and Davis 1985; Davis 1999a,b). Lampronia capitella (Clerck) is especially well known to European gardeners as the "currant shoot borer".

The genus Lampronia Stephens comprises 28 described species in the Holarctic region, belonging to the subfamily Lamproniinae (Davis 1999a,b). The type species is Incurvaria aenescens Walshingham, which was described from Rogue River (OR, North America) and is mainly distributed in North America. The genus is distinguished from other groups by having small compound eyes and a proboscis shorter than the labial palpi. Most of the small-to-medium sized members have a golden or dark ground color, with some species being unicolored and many having forewings with a few-to-numerous whitish or yellowish spots across the forewing.
This species was reported in a faunal survey of subtropical moths from the Jeju-do Islands by Kim et al (2014). In this study, a first record of the family Prodoxidae is described from Korea. Its morphology is described with illustrations, and a description based on the male genitalia is provided to aid in identification.

Materials and methods

Materials examined in this study were collected on the Jeju-do Islands, located at the southern part of the Korean Peninsula, in 2015, using bucket traps with ultraviolet lamps (12 V/8 W). Morphological structures and genital characteristics were examined under a stereo microscope (Leica S8APO; Leica, Wetzlar, Germany), and a Nikon D90 (Nikon, Tokyo, Japan) and a Carl Zeiss Axio Imager A1 (Carl Zeiss, Oberkochen, Germany) were used for the digital photography. Color standards for the description of adults follow Kornerup and Wanscher (1978). The specimens are deposited in the Jeju Regional Office (Jeju-si), Animal and Plant Quarantine Agency, Korea.

Figure 1. Lampronia flavimitrella (Hübner). A, adult; B, dorsal view of the wing pattern. Figure 2. Lampronia flavimitrella (Hübner).
A, diagram of male genitalia; B, aedeagus.

Systematic accounts

Family Prodoxidae (Riley, 1881). Genus Lampronia Stephens (1829). Type species: Incurvaria aenescens Walshingham (1888). Lampronia flavimitrella (Hübner, 1817). Tinea flavimitrella Hübner (1817); Lampronia flavimitrella Hübner (1817); Lempke (1976); Kuchlein and Donner (1993); Kuchlein and De Vos (1999); Kuchlein and Bot (2010).

Diagnosis. This species is distinguished from its congeners by the two elongated yellowish-white antemedial and postmedial bands on the forewing.

Adult (Figures 1A and 1B). The wingspan is 13.0–15.0 mm. The head is tufted with yellowish-white setae. The ocelli and chaetosemata are absent. The antennae are brownish yellow, filiform, and less than half as long as the forewing. The second segment of the labial palpus is whitish yellow, and the third segment is shorter than the second segment. The forewing ground color is brownish gray, with well-developed yellowish-white antemedial and postmedial bands, suffused with dark-brown scales along the costal margin. The fringe is tipped with yellowish brown. The hindwing is grayish brown, and is paler at the base. The Sc at the base is not connected with the trunk of R. In the forewing, the common trunk of A2+3 is >2.5 times longer than the basal fork. The legs are gray, and the hind tibia is grayish white with a pair of spurs. The tarsus is gray tinged with white and is longer than the tibia. The female is unknown.

Male genitalia (Figures 2A and 2B). The uncus is reduced, and the valva is short and rounded. The apex of the cucullus has lateral falcate processes, and the costal margin has a curved process near the base. The saccus is extremely long, slender, and at least half as long as the genitalia. The juxta is well defined, with a horizontal sub-semicircular ridge on the posterior half.
The anterior margin is convex medially, and the caudal lobe is short and falcate. The vinculum is sclerotized, well developed, and Y-shaped. The anellus is membranous, and the aedeagus is extremely slender, as long as the valva, straight distally, and without cornuti.

Material examined. 2♂, U-do Island, Jeju-do, 14 iv 2015 (SM Oh & RN Sohn), genitalia slide nos. QIA-131, 136; same locality, 1♂, 22 iv 2015 (SM Oh & RN Sohn).

Host plant. Rubus idaeus L. and R. caesius L. (Razowski 1978).

Biology. Moths appear from June to July in Japan (Jinbo 2004–2008). Most species fly during the day, with some being active into dusk (Razowski 1978).
A newly known genus Charitoprepes Warren (Lepidoptera: Pyraloidea: Crambidae) in Korea, with report of C. lubricosa Warren. Korean Journal of Applied Entomology 53:301e303. Kornerup A, Wanscher JH. 1978. Methuen handbook of colour. 3rd ed. London: Methuen. Kuchlein JH, Donner JH. 1993. De kleine vlinders: Handboek voor de faunistiek van de Nederlandse Microlepidoptera. Wageningen: Pudoc. p. 715 [Text Dutch with English summary]. Kuchlein JH, De Vos R. 1999. Geannoteerde naamlijst van de Nederlandse vlinders. Annotated Checklist of the Dutch Lepidoptera. Leiden: Backhuys Publishers. p. 302 [in Dutch and English]. Kuchlein JH, Bot LEJ. 2010. Identification Keys to the Microlepidoptera of the Netherlands. Zeist: TINEA Foundation & KNNV Publishing. p. 414 [in Dutch]. Lempke BJ. 1976. Naamlijst van de Nederlandse Lepidoptera. Koninklijke Neder- landse Natuurhistorische Vereneniging, Hoogwoud. Bibliotheek 21:1e100 [in Dutch]. Nielsen ES, Davis DR. 1985. The first southern hemisphere prodoxid and the phy- logeny of the Incurvarioidea (Lepidoptera). Systematic Entomology 10:307e322. Pellmyr O, Thompson JN, Brown J, et al. 1996. Evolution of pollination and mutu- alism in the yucca moth lineage. American Naturalist 148:827e847. Razowski J. 1978. Motyle (Lepidoptera) Polski III: Adeloidea. Panstwowe wydaw- nictwo naukowe, Warsaw Svensson, I. 1993. Fjärilskalender. Self-published: Kristianstad, Sweden. Van Nieukerken EJ, Kaila L, Kitching IJ, et al. 2011. Order Lepidoptera Linnaeus, 1758. In: Zhang ZQ, editor. Animal biodiversity: an outline of higher-level classification and survey of taxonomic richness. Zootaxa 3148:212e221. 
First record of the family Prodoxidae (Lepidoptera: Adeloidea), Lampronia flavimitrella (Hübner), reported from Korea

work_jcbqhrzfhfbcbfwr23vuxdvq2a ---- Quantitative digital imaging of banana growth suppression by plant parasitic nematodes

Quantitative Digital Imaging of Banana Growth Suppression by Plant Parasitic Nematodes Hugh Roderick 1, Elvis Mbiru 2, Danny Coyne 2, Leena Tripathi 2, Howard J.
Atkinson 1* 1 Centre for Plant Sciences, University of Leeds, Leeds, United Kingdom, 2 International Institute of Tropical Agriculture, Kampala, Uganda Abstract A digital camera fitted with a hemispherical lens was used to generate canopy leaf area index (LAI) values for a banana (Musa spp.) field trial with the aim of establishing a method for monitoring stresses on tall crop plants. The trial in Uganda consisted of two nematode-susceptible cultivars, a plantain, Gonja manjaya, and an East African Highland banana, Mbwazirume, plus a nematode-resistant dessert banana, Yangambi km5. A comparative approach included adding a mixed population of Radopholus similis, Helicotylenchus multicinctus and Meloidogyne spp. to the soil around half the plants of each cultivar prior to field planting. Measurements of LAI were made fortnightly from 106 days post-planting over two successive cropping cycles. The highest mean LAI during the first cycle for Gonja manjaya was suppressed to 74.8±3.5% of the control value by the addition of nematodes, while for Mbwazirume the values were reduced to 71.1±1.9%. During the second cycle these values were 69.2±2.2% and 72.2±2.7%, respectively. Reductions in LAI values were validated as due to the biotic stress by assessing nematode numbers in roots and the necrosis they caused at each of two harvests, and the relationship is described. Yield losses, including a component due to toppled plants, were 35.3% and 55.3% for Gonja manjaya and 31.4% and 55.8% for Mbwazirume, at first and second harvests respectively. Yangambi km5 showed no decrease in LAI and yield in the presence of nematodes at both harvests. LAI estimated by hemispherical photography provided a rapid basis for detecting biotic growth checks by nematodes on bananas, and demonstrated the potential of the approach for studies of growth checks to other tall crop plants caused by biotic or abiotic stresses.
Citation: Roderick H, Mbiru E, Coyne D, Tripathi L, Atkinson HJ (2012) Quantitative Digital Imaging of Banana Growth Suppression by Plant Parasitic Nematodes. PLoS ONE 7(12): e53355. doi:10.1371/journal.pone.0053355 Editor: Sunghun Park, Kansas State University, United States of America Received June 15, 2012; Accepted November 30, 2012; Published December 28, 2012 Copyright: © 2012 Roderick et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. Funding: This work was supported by funding from the BBSRC (grant number BB/F004001/1; http://www.bbsrc.ac.uk/home/home.aspx) and IITA (http://www.iita.org/). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. Competing Interests: The authors have declared that no competing interests exist. * E-mail: h.j.atkinson@leeds.ac.uk Introduction Establishing damage levels from biotic stresses and the resultant economic loss to crops requires optimisation between assuring the reliability of the information gained and the effort and resources invested in its acquisition. The plant parasitic nematodes that cause severe losses to global banana and plantain (Musa spp.) production provide a specific example of the difficulties of damage assessments. The yield losses they cause are a combination of reduced plant growth that extends the production cycle, reduced harvested banana bunch weight and toppling of plants with damaged root systems in storms [1,2]. They are often controlled in commercial plantations by periodic application of pesticides [3] but chemical control is rarely affordable or available to small producers in African countries, where losses may be up to 71±16% [4].
Damage and economic thresholds based on nematode densities recovered from roots have been determined for Radopholus similis, which is widely damaging in commercial plantations, but the values vary for different geographical regions [1,5]. A second approach measures the extent of root necrosis caused by this nematode. The resultant disease index is readily obtained and can be related to economic thresholds [5–7]. However, this approach has limitations when additional economically damaging nematode species are present, as frequently occurs in Uganda [8]. In that country Helicotylenchus multicinctus and Meloidogyne spp. are also present in tropical lowland ecologies, with Pratylenchus goodeyi present at cooler, higher altitudes [8]. Work in Costa Rica on the dessert banana cultivar Grand Naine ranked R. similis, P. coffeae, M. incognita and H. multicinctus in descending order of severity of root damage by equal densities of each nematode species but recorded M. incognita as causing the largest reduction in bunch weight [9]. Similar work with the plantain cultivar Apantu-pa in Ghana identified that P. coffeae caused the most severe necrosis and yield reductions, but both M. javanica and H. multicinctus caused losses of 30% and 26% respectively with relatively less necrosis [10]. Consequently root necrosis may not be a reliable basis for assessing nematode losses to yield when Meloidogyne spp. are one of the economic species present [11], or generally when R. similis is not the only damaging species present. This work uses leaf area index (LAI) to measure banana plant growth checks by nematodes more directly than inferring them from nematode root density or damage. LAI is widely used to measure the growth of crop plants and can be defined as the amount of leaf surface area (single side) per unit area of ground.
PLOS ONE | www.plosone.org | December 2012 | Volume 7 | Issue 12 | e53355 It relates to the capacity of the banana canopy to intercept solar radiation and fix carbon and varies with location, planting density and additional factors such as season [12]. LAI is important because leaves normally far outweigh the contributions from other plant parts to the yield of most crops [13]. A range of methods have been used to measure LAI. Those applied to crops include direct measurement of the lengths and widths of leaves [14] and approaches that use ground-based instruments that measure the fraction of transmitted radiation that passes through a plant canopy [15,16]. The approach taken in this work is based on hemispherical photography using a camera with a 180° fisheye lens directed from the ground to the sky zenith to capture an image. This is then analysed to determine the LAI for those plants in the field of view. Hemispherical photography has been shown to be an effective tool for estimating LAI in various settings, including for coniferous forests [17] and single urban trees [18]. The method has also been validated against other indirect methods for monitoring LAI [18–20]. The approach has benefited from the development of digital photography, avoiding the need to develop film camera images. Proprietary software for a personal computer rapidly processes digital images to LAI estimates. The current study explores the use of digital photography and LAI to detect growth checks to banana plants using a replicated field trial design in Uganda imposed by challenge with a combination of R. similis, H. multicinctus and Meloidogyne spp. Two nematode-susceptible banana cultivars were used together with Yangambi km5, which has known resistance against R. similis [21]. The differences detected that are attributable to nematode challenge were related to bunch weights and root necrosis over two successive cropping cycles, to provide a comprehensive assessment of the technique.
The results establish the potential of the approach for studies of growth checks to banana and other tall crop plants including those caused by biotic or abiotic stresses. Methods Study Site The trial was conducted at Sendusu (32°34′E, 0°32′N), 1150 m above sea level, 28 km north of Kampala, Uganda. The site receives mean annual rainfall of 1300 mm in a bimodal pattern and annual minimum and maximum mean temperatures of 16°C and 28°C. The trial site land had been fallow for one year following cultivation of yam and cassava. No endangered or protected species were involved in the described field study and no specific permits were required for the studies, which were carried out on land designated for field trials at the research station of the International Institute of Tropical Agriculture (IITA). Experimental Design The trial included two factors: banana cultivar and nematode treatment. Three cultivars were used, the East African highland banana (EAHB) Mbwazirume (AAA-EA genotype), the plantain Gonja manjaya (AAB genotype) and the dessert banana Yangambi km5 (AAA genotype), which were selected as representatives of different banana genotypes and nematode host status. All planting material was sourced from farms in close proximity to the Sendusu site. Suckers were pared and hot water treated for 30 sec at 98°C for disinfection of plant parasitic nematodes and weevil larvae [22], then potted into steam-sterilised sandy loam soil in 10 litre polythene bags. Nematode treatment comprised either no supplementation of the natural population or addition of approximately 5,500 nematodes to soil around plants prior to planting. Nematodes were added using 20 g of fresh banana root segments containing 77% R. similis, 17% H. multicinctus and 6% Meloidogyne spp. The inoculum roots were collected from an infested banana site at Sendusu and placed into a shallow excavation around each plant and mixed into the soil.
Estimated nematode densities were calculated from five 5 g root segment sub-samples immediately prior to inoculation. Root pieces were bulked, chopped (<0.5 cm) and sub-samples removed, macerated in a blender for 10 s and the nematodes extracted overnight using a modified Baermann technique [23]. Each plant not given added nematodes received 20 g of autoclaved root segments. All plants were planted into the field twenty days later on 18th May 2009. A standard split-plot design [24] was used with plot replicates of 36 banana plants spaced 3 m apart in a 6 by 6 grid. Each cultivar was replicated in eight plots, four of which were infected with nematodes, giving a total of 144 plants for each cultivar/treatment combination. The cultivation followed local practices, with 10 kg of dry cow manure added per planting hole and fresh elephant grass (Pennisetum purpureum) mulched four times annually around each plant. Water was applied daily for 1 week to ensure transplanted plants established. The trial was surrounded by a row of guard plants transplanted with the trialled plants. Crop Data Collection Plant growth was assessed from 106 days post-planting (dpp) every 14 days between 10 am and noon by hemispherical photography. Images were captured with an 8 mm full-circle fisheye lens (EX-DG, Sigma, Kanagawa, Japan) on a full-frame digital camera (EOS 5D, Canon, Tokyo, Japan) attached to a self-levelling mount (SLM7, Delta-T, Cambridge, UK) and a monopod. The camera lens front was placed 95 cm above the ground in the centre of four plants and directed at the sky zenith. A single exposure was taken while the observer knelt below the field of view. The majority of the canopy recorded in each image was dominated by the four plants surrounding the camera. Nine images per plot were taken, so that each plant was dominant in only one image.
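Hemispherical images like these are typically reduced to LAI by classifying sky versus canopy pixels and inverting Beer's law for the gap fraction. A minimal single-direction sketch follows; the spherical leaf-angle distribution (projection coefficient G = 0.5) and the gap fraction are illustrative assumptions, not values from this trial, and canopy-analysis software such as HemiView performs the inversion iteratively over many sky sectors rather than for one direction:

```python
import math

def lai_from_gap_fraction(gap_fraction, zenith_deg=0.0, g=0.5):
    """Invert Beer's law, P(theta) = exp(-G * LAI / cos(theta)), for LAI.

    gap_fraction: proportion of visible sky in the viewing direction.
    g: projection coefficient (0.5 assumes a spherical leaf-angle
    distribution). Real canopy software repeats this over many sky
    sectors; this sketch uses a single direction only.
    """
    if not 0.0 < gap_fraction <= 1.0:
        raise ValueError("gap fraction must lie in (0, 1]")
    cos_t = math.cos(math.radians(zenith_deg))
    return -math.log(gap_fraction) * cos_t / g

# Illustrative value: 72% of sky visible at the zenith
print(round(lai_from_gap_fraction(0.72), 3))  # 0.657
```

A denser canopy yields a smaller gap fraction and hence a larger LAI, which is why nematode-checked plants register lower values in images dominated by their own foliage.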
Mature bunch weight and date of harvest were recorded for each plant; harvests spanned 350–520 dpp and 530–900 dpp for the first and second cropping cycles respectively. The crop was continually monitored by field observation and during analysis of hemispherical images for the foliar diseases Black Sigatoka, Fusarium wilt and bacterial Xanthomonas wilt and for the presence of the banana weevil Cosmopolites sordidus. Analysis of Hemispherical Photographic Images The images collected by digital camera were imported into proprietary software (HemiView Version 2, Delta-T, Cambridge, UK) calibrated to the lens used from pre-loaded settings. The software classifies images to distinguish visible and obscured sky directions and so define canopy openings relative to banana foliage. The sky visibility and obstruction is then defined as a function of sky direction and canopy indices are calculated from this information to provide results including LAI [25]. The algorithms estimate LAI as half of the total leaf area per unit ground area by means of an iterative inversion of Beer's Law [12]. LAI generated for each image was exported into Excel (Microsoft, Redmond, WA) before further statistical analysis. The mean of the measurements across four plots, 36 images in total, was calculated for each cultivar/treatment combination on each day of measurement. Nematode Damage Assessment At harvest, the percentage root necrosis for each plant was derived from five randomly selected functional primary roots. They were cut into 10 cm lengths, sliced lengthwise and the proportion of cortex necrosis estimated for each root piece [11]. The overall percentage root necrosis comprised the average of the five pieces. The five functional root segments used to score for root necrosis were cut into 0.5 cm sections, thoroughly mixed, and a 5 g sub-sample used for nematode extraction. Each sub-sample was macerated in a blender for 10 s and the nematodes extracted overnight as above.
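Counts from such 5 g sub-samples are scaled up to population densities per 100 g of fresh root via small aliquots of the extraction suspension. A sketch of that scaling arithmetic follows; the counts are hypothetical illustrations, and the aliquot and suspension volumes are treated as parameters:

```python
def density_per_100g(aliquot_counts, aliquot_ml=2.0, suspension_ml=25.0,
                     subsample_g=5.0):
    """Scale nematode counts from replicate aliquots of an extraction
    suspension to a population density per 100 g fresh root weight."""
    mean_count = sum(aliquot_counts) / len(aliquot_counts)
    # Scale the mean aliquot count up to the whole suspension (one sub-sample)
    per_subsample = mean_count * (suspension_ml / aliquot_ml)
    # Express the sub-sample total per 100 g of root
    return per_subsample * (100.0 / subsample_g)

# Hypothetical counts from three 2 ml aliquots of a 25 ml suspension
print(density_per_100g([40, 44, 36]))  # 10000.0 nematodes/100 g root
```

Averaging over replicate aliquots before scaling dampens the counting error that a single aliquot would propagate by the combined factor of 250.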
Nematode population densities were estimated from three 2 ml aliquots from a 25 ml suspension. Nematodes were identified to species level and population densities estimated per 100 g root fresh weight. At the first harvest nematode counts were made for all plants; at the second harvest nematode counts were made on three samples per plot. Nematode densities were log transformed before analysis. Statistical Analysis All data were analysed using a standard statistical package (SPSS v18; IBM Corporation, Armonk, New York, USA; http://www-01.ibm.com/software/analytics/spss/) installed on a personal computer with choice of method informed by a standard text [26]. Results The Leaf Area Index of Growing Plants Figure 1 provides mean canopy LAI values based on 36 images for each data point. Images were collected over two growth cycles for plants with or without nematode addition before their transfer to the field. The images in the figure indicate the range of LAI values that were obtained. In the first cropping cycle, the highest grand mean value for LAI occurred at 340 dpp for Gonja manjaya and Mbwazirume. The mean (± SEM) for the images on this day for plants receiving nematodes before transplanting was 74.8±3.52% for Gonja manjaya and 71.1±1.88% for Mbwazirume of LAI values of 0.652±0.0154 and 0.587±0.0148, respectively, of plants not challenged by added nematodes. The highest grand mean LAI for the same two cultivars before the second harvest occurred at 540 dpp. When nematodes had been added to plants before transplanting, the mean values were 69.2±2.15% for Gonja manjaya and 72.2±2.74% for Mbwazirume of mean LAI values of 0.791±0.0406 and 0.743±0.0240, respectively, when no nematodes were added. A small opposite effect of adding nematodes was observed for the nematode-resistant Yangambi km5 plants.
Mean values at the same dpp as used for the other two cultivars were 118.3±5.52% and 113.4±4.16% for the first and second crop cycles relative to LAI values when nematodes were not added of 0.787±0.0307 and 1.048±0.0471 respectively. The difference in LAI between adding and not adding nematodes was statistically significant for all three cultivars (P<0.001 for both occasions for Gonja and Mbwazirume and P<0.05 for both Yangambi km5 comparisons; n = 36 for each mean, one-way ANOVA with a priori contrasts). Nematode Densities For the two susceptible cultivars, the addition of nematodes increased the number of R. similis and H. multicinctus and decreased the number of Meloidogyne spp. recovered at both harvests relative to those plants not receiving that treatment. A low level of invasion from the nematode population in the field soil at the site occurred in plants not receiving a nematode treatment prior to transfer to the field (Figure 2). The density of R. similis relative to H. multicinctus increased from the first to second harvests on both susceptible cultivars. Nematode addition also led to a higher density of nematodes recovered from the resistant cultivar, Yangambi km5, at both harvests compared to the plants not receiving additional nematodes, but less than was seen on the susceptible cultivars. The values are statistically significant (P<0.001) with the exception of the non-abundant H. multicinctus on Yangambi km5 at the first harvest and Meloidogyne spp. on Mbwazirume and R. similis on Yangambi km5 at the second harvest. Leaf Area Index and Nematode Densities The relationship between LAI and nematode densities was described by a cubic regression curve fitted to LAI values and nematode densities at harvest (Figure 3). There were similar decline curves for LAI values with increasing nematode density at both harvests for cultivars Gonja and Mbwazirume (P<0.001 in both cases; not shown).
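One way to summarise such a relationship is to pool plants into logarithmic density classes and average LAI within each class before fitting a curve, as done for Figure 3. A stdlib-only sketch of the binning step follows; the class boundaries come from the Figure 3 caption, while the per-plant values are invented purely for illustration:

```python
from statistics import mean

# Class boundaries: 0; 1-<1,000; 1,000-<10,000; 10,000-<100,000;
# 100,000 or more nematodes/100 g roots.
BOUNDS = (1, 1_000, 10_000, 100_000)

def density_class(density):
    """Return the index (0-4) of the density class for one plant."""
    for i, upper in enumerate(BOUNDS):
        if density < upper:
            return i
    return len(BOUNDS)

def mean_lai_by_class(records):
    """records: (nematodes per 100 g root, LAI) pairs for single plants.
    Returns {class index: mean LAI} for the classes present."""
    groups = {}
    for density, lai in records:
        groups.setdefault(density_class(density), []).append(lai)
    return {c: mean(v) for c, v in sorted(groups.items())}

# Synthetic plants: (density per 100 g root, LAI)
data = [(0, 0.66), (0, 0.64), (500, 0.62), (4_000, 0.58),
        (30_000, 0.50), (150_000, 0.42)]
by_class = mean_lai_by_class(data)
# LAI of each class expressed as a percentage of the zero-density class
percent_of_control = {c: 100 * v / by_class[0] for c, v in by_class.items()}
```

Expressing each class mean as a percentage of the zero-density class mirrors how the significant falls in LAI are reported in the text.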
The data were pooled for the two cultivars and summarised with a cubic regression using mean LAI values at logarithmic mean nematode densities for a range of density classes. The first significant fall in LAI, to 92.7±2.89% of the 0.565±0.0111 recorded in the absence of recovered nematodes, occurred at a mean density of 4,166±348 nematodes/100 g roots for Mbwazirume (P<0.05; Figure 3). A significant decline in LAI for Gonja was established at the next higher density class, with a mean LAI of 80.0±1.58% of 0.504±0.0172 at a mean density of 28,248±970 nematodes/100 g roots (P<0.001; Figure 3). The resistant cultivar Yangambi km5 showed no significant decrease from a LAI of 0.840±0.0307 as the nematode density increased and so this cultivar is not included in Figure 3. The Influence of Nematodes on Banana Bunch Weight Bunch weights were lower for Gonja manjaya and Mbwazirume for both harvests (univariate ANOVA) on plants receiving added nematodes compared to plants that did not (Figure 4). Bunch weights were lower when nematodes had been added by 23.5±1.82% and 24.0±2.34% for Gonja manjaya and 14.8±2.13% and 38.0±1.42% for Mbwazirume, for the first and second harvests respectively. Yangambi km5 showed a slight increase in yield when nematodes had been added of 7.0±2.45% and 1.3±2.08% at the first and second harvests respectively. Necrosis of Roots There was only limited necrosis observed on roots of plants not receiving nematodes before field planting and for Yangambi km5 roots with or without nematodes added (Figure 5). In contrast, considerable necrosis was observed on both Gonja manjaya and Mbwazirume plants that had received nematodes. It was more severe on Gonja manjaya than Mbwazirume over both harvests (P<0.001; univariate analysis). Coefficients of the linear equation between necrosis and the nematode density were obtained by a stepwise entry of eligible variables. R. similis contributed most to the correlation coefficient (0.596), which increased to 0.628 when H. multicinctus was also entered, with both additions being statistically significant (P<0.001). The partial correlation coefficient for each nematode for its linear effect on necrosis when the contribution of the other nematodes was removed was also statistically significant, at 0.512 and 0.246 respectively (P<0.001 in both cases). The small contribution made by Meloidogyne spp. was not significant and so was excluded from the final model. The Incidence of Toppling During the first cropping cycle 41 plants of Mbwazirume and Gonja manjaya plots to which nematodes had been added toppled. No plants toppled in plots without added nematodes for these two cultivars, and no Yangambi km5 plants toppled irrespective of treatment (Table 1). In the second cropping cycle plants from all three cultivars toppled, mostly those to which nematodes had been added and less so for Yangambi km5, which had the same level for both treatments. Toppling incidence was similar when nematodes had been added for the two cropping cycles for Mbwazirume but increased significantly (P<0.01) for Gonja manjaya in the second cycle. Figure 1. Leaf area index (LAI) measured by hemispherical digital photography. Mean canopy LAI measurements of four replicate plots of 36 banana plants per treatment of the susceptible cultivars Gonja manjaya and Mbwazirume and the resistant cultivar Yangambi km5 across two growth cycles with or without nematode addition of a mixed population of 5,500 nematodes per plant before transfer to the field. Error bars indicate ± SEM (n = 36) and the first and second harvest periods are indicated by open and closed bars, respectively. Adjacent images indicate example LAI values at the minimum, median and maximum values for each banana cultivar.
doi:10.1371/journal.pone.0053355.g001 Plant Height, Girth and Time to Harvest A significant reduction in plant height at harvest was caused by adding nematodes before planting for both harvests with Gonja manjaya and Mbwazirume, but the effect was only 5% and 12–15% for the first and second harvests respectively. The additional nematode presence also resulted in a 10–16% reduction in girth of Gonja manjaya at both harvests and Mbwazirume for the second harvest only. Similarly, there was a small, significant reduction in days to the first harvest for Mbwazirume and an increase for Yangambi km5, but no significant difference between treatments for Gonja manjaya (Table 2). The Incidence of Other Biotic Stresses Plants were examined in the field for foliar diseases on each day that hemispherical images were captured for LAI analysis and again during the analysis of the images. Initial symptoms of bacterial wilt were noticed for only 15 of the 766 (2%) standing plants during the second crop cycle. These plants were immediately destroyed on detection of the disease. Black Sigatoka and Fusarium wilt were absent or present at such a low level that damage to leaves was not observed. Weevil damage was detected for just 5 plants, but not until close to the second harvest. The low incidence of these biotic stresses is likely due to the deliberate siting of the trial away from established banana plantations and the use of land that had been fallow for a year following cultivation of cassava and yam, to ensure LAI differences detected could be attributed to nematodes. Discussion Measurement of LAI by hemispherical digital photography provided a rapid method of assessing banana growth. The approach provides a basis for the continual measurement of a wide range of stresses on the growth of tall crops such as banana under field conditions.
Its potential was established in this work using the example of nematode challenge, which is a known major biotic stress for banana crops. The blocked and replicated field trial design ensured variation in growth between plants in the same treatment did not prevent the effects under study reaching statistical significance. The design enabled the decrease in LAI values recorded with high nematode density to be related to the biotic stress the animals imposed. It incorporated a comparative approach which revealed a lower LAI for those plants with a substantially higher nematode density at harvest after their addition prior to planting, relative to those plants challenged only by the lower natural population in the soil. In addition, the nematode-resistant cultivar Yangambi km5 was included and it showed no decrease in LAI with high nematode density, in contrast to the two susceptible cultivars Mbwazirume and Gonja manjaya. There was also only a low incidence of the other major biotic stresses in Uganda, bacterial Xanthomonas wilt, Fusarium wilt, Black Sigatoka and weevils, such that their contribution to mean and variability of the LAI values was limited. The LAI values increased as the plants grew in an asymptotic manner to a plateau phase over the first cropping cycle, with the increasing leaf area being completed by 340 dpp for both Mbwazirume and Gonja manjaya. Yangambi km5 grew vigorously and its LAI values reached a higher value than the other two cultivars. Figure 2. Nematodes recovered at harvest. Mean numbers per 100 g root of each of the three added nematode species recovered at first and second harvest on each of the three banana cultivars. Error bars indicate ± SEM (Harvest 1 n = 144; Harvest 2 n = 12). Univariate analysis was used to detect differences between nematodes added (+) and not added (–) treatments (***P<0.001). doi:10.1371/journal.pone.0053355.g002 The plant growth pattern measured by LAI was less evident for the second crop of the cultivars as second generation plants had grown considerably before the first harvest removed the contribution of mother plants to the values. The LAI also fluctuated with the impact of climatic factors such as storms. The addition of nematodes to soil around roots before planting out to the field reduced the subsequent LAI for both Mbwazirume and Gonja manjaya from about 300 dpp relative to those not receiving this treatment. The difference in the LAI for these cultivars between the two treatments increased across the two cropping cycles, indicating that a progressive suppression of growth was occurring. Yangambi km5 showed a slightly increased LAI in the presence of added nematodes over both cropping cycles, indicating that in this study this cultivar responds to nematode infection with stimulated growth. Such an effect has previously been reported under certain circumstances such as low nematode densities [27]. The suppression of peak LAI by 25–31% for Mbwazirume and Gonja manjaya, when nematodes were added before planting, was greater than the change of 5–11% found in plant height at harvest. Pseudostem girth has been used to predict yields [14] but no significant reduction was recorded at first harvest of Mbwazirume in this work, although its yield was reduced significantly by the added nematode challenge. The results suggest LAI is superior to both these other measures of plant size for detecting the impact of nematodes on banana plant growth. The LAI values are lower than recorded previously in other parts of the world but similar to previous values for EAHB obtained by measuring individual leaf dimensions at the same locality as this trial [14].
This relatively low value is partly because the trial planting density was typical of plantations in Uganda, which are less dense than many commercial plantations elsewhere. Presumably they are optimal for EAHB based on grower experience, given a cultivation history in East Africa of at least 1,000 years [14]. Figure 3. Relationship between LAI and nematode density. A cubic regression curve is fitted for the relationship for the cultivars Gonja and Mbwazirume between LAI and mean nematode density for values within one of up to five nematode density classes (0; 1–<1,000; 1,000–<10,000; 10,000–<100,000 and 100,000 or more/100 g roots) for two harvests for both plants to which nematodes were and were not added prior to planting. All nematode density classes for which there is a significant percentage reduction in LAI relative to when no nematodes were recovered from roots are shown for both cultivars (*P<0.05; ***P<0.001, one-way ANOVA with a priori contrasts). The values in parentheses are the number of values associated with each mean; (n), Gonja and [n], Mbwazirume. doi:10.1371/journal.pone.0053355.g003 The relationship between growth suppression, as measured by LAI, and nematode numbers fitted a polynomial curve (Figure 3), as used previously to relate nematode density to root necrosis of banana [7,9], and conformed in shape to that derivable on a theoretical basis [28]. It was validated by comparing LAI values at different levels of nematode challenge and by measurement of root necrosis at harvest. The addition of nematodes resulted in considerable necrosis of the root systems of Gonja manjaya and Mbwazirume but not Yangambi km5 at both harvests. The nematodes were responsible for initiating much of the necrosis as it was much less evident when they were not added to the soil around plants. Correlations from linear regression indicate R.
similis contributes more to necrosis than H. multicinctus, with Meloidogyne not contributing significantly to this effect. The necrosis associated with R. similis causes structural damage to roots, leading to an incidence of toppling [29]. In the current study, necrosis of Mbwazirume roots increased proportionally with R. similis densities over the two cropping cycles. However, considerable necrosis was associated with more H. multicinctus than R. similis in roots of Gonja manjaya by first harvest. This damage did not become more severe by the second harvest, although an increase in the density of R. similis was recorded. This may indicate that the H. multicinctus population caused much of the necrosis of Gonja manjaya roots, and indicates variability in the responses of different cultivars to complex populations of parasitic nematodes. The nematode density recovered from Gonja manjaya at harvest of the second crop cycle was also less than the first. This effect may relate to the influence on nematode densities of the root-carrying capacity of the damaged root system as a crop grows, as studied for Pratylenchus zeae on rice [30].

Figure 4. Harvested banana bunch weights. Mean values for each treatment for the three banana cultivars at first and second harvest. Error bars indicate ± SEM (Harvest 1 n≥120; Harvest 2 n≥95). Univariate analysis was used to detect significant differences (***: P<0.001) between bunch weights of nematodes-added and not-added treatments at first harvest (FH) and second harvest (SH). doi:10.1371/journal.pone.0053355.g004

Figure 5. Percentage root necrosis at harvest. Mean values for each treatment for the three banana cultivars at first and second harvest. Values presented are back-transformed from log necrosis values. Error bars indicate ± SEM (Harvest 1 n≥120; Harvest 2 n≥95). doi:10.1371/journal.pone.0053355.g005
Possibly the high density of nematodes present towards the end of the first cropping cycle of this plantain suppressed early root growth in the second cycle, so reducing the subsequent root-carrying capacity for nematodes. In contrast, Mbwazirume had greater nematode densities associated with its roots at harvest of the second cycle than the first. It may have offered a larger root system than Gonja manjaya, which consequently reduced toppling incidence and allowed a subsequent build-up of the nematode population to levels that resulted in a higher loss of yield by the second harvest.

It has been suggested that in Uganda a disease index in excess of 10, the equivalent of 273 R. similis/100 g root, can result in economic damage at an altitude of 1200 m above sea level [5]. This is a considerably lower nematode density than obtained in the current work at harvest, even when no nematodes were added and limited necrosis occurred. Disease indices based on root necrosis alone may be an unreliable predictor of economic damage, particularly in complex situations such as occur in Uganda. A combination of percentage dead roots, number of large lesions and the nematode population density was used to measure Musa genotype field response to R. similis and H. multicinctus [31]. That approach is labour intensive and requires more specialist expertise than is required to measure LAI as in the current study.

In our work we selected three distinct banana genotypes to optimise the outcome, but the comparison of planting types needs to be expanded. For instance, there is a considerable range of EAHB cultivars with five distinct genotypes [14] that may have differing responses to nematode presence in roots. We have shown that LAI provides a continual measure of the impact of a plant stress on banana growth at a locality for particular genotypes, but like other measurements of crop growth and yield in the field it is an integrative measure.
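The cubic relationship between LAI and nematode density fitted in this study (Figure 3) can be sketched as a polynomial regression over density-class values. This is an illustrative reconstruction only: the density classes follow the figure, but the class midpoints and mean LAI values below are hypothetical, not the trial data.

```python
import numpy as np

# Nematode density classes (per 100 g roots) represented by hypothetical
# midpoints, with illustrative mean LAI values that decline as density rises.
density = np.array([0.0, 500.0, 5_000.0, 50_000.0, 200_000.0])
lai = np.array([2.9, 2.7, 2.3, 1.8, 1.6])

# Fit on log10(density + 1) so the classes are roughly evenly spaced.
x = np.log10(density + 1.0)
coeffs = np.polyfit(x, lai, deg=3)  # cubic regression coefficients
fitted = np.polyval(coeffs, x)

print(np.round(fitted, 2))
```

With only five class means and four coefficients the fit is nearly exact; with the real per-plot values one would also inspect residuals before trusting the curve.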
As such, studies need to be designed so that it can be demonstrated that the stress considered is the major factor in the outcome recorded. Therefore the cause of the effect on growth, such as the density of nematodes, should be confirmed to define the one or more plant stresses principally involved. This is, however, a requirement of all methods of assessing nematode-induced losses to banana in field conditions, as none provide measures that are independent of the consequences of other factors and stresses. For instance, the level of necrosis is sometimes not reflected in yield loss in the absence of banana toppling, for both R. similis in Cameroon [32] and P. goodeyi in Rwanda [33]. In otherwise favourable conditions, the sum of all stresses on these plants may not check growth. The proportion of root mass undamaged by nematodes may be sufficient for normal physiological functions but inadequate to prevent the plant from toppling during storms and high winds. More work may establish that LAI at a specific crop growth stage offers a reliable approach for defining damage thresholds for banana crops that can be related to even complex stresses, such as a combination of nematode species.

Although the yield loss contributions due to toppling and bunch weight loss of standing plants differed for Gonja manjaya and Mbwazirume, the overall values were similar. The combined yield loss, including toppled plants, on plots receiving nematodes over the second cropping cycle was significantly larger for Gonja manjaya and Mbwazirume than the losses over the first cycle. This demonstrates the progressive and severe nematode-induced losses that can be inflicted in perennial banana plantations of susceptible cultivars within the first few crop cycles. Loss estimates in excess of 50% have been reported from several studies [4], emphasising the need for improved nematode control of this important food security crop in sub-Saharan Africa.

Table 1. The impact of the addition of nematodes prior to transplantation into the field on the number of standing (S) and toppled (T) banana plants during two crop cycles.

First crop cycle (S/T with nematodes added; S/T with nematodes not added; χ² P value, added vs not; χ² P value, cf. Mbw + Gonja):
Mbwazirume: 120/24; 144/0; <0.001; NS
Gonja manjaya: 127/17; 144/0; <0.001
Yangambi km5: 144/0; 144/0; NS
Total: 391/41; 432/0

Second crop cycle (same columns):
Mbwazirume: 115/29; 140/4; <0.001; <0.01
Gonja manjaya: 95/49; 136/8; <0.001
Yangambi km5: 140/4; 140/4; NS
Total: 350/82; 416/16

doi:10.1371/journal.pone.0053355.t001

The results establish that LAI has potential for continual measurement of stress on the growth of tall crops, such as banana, under field conditions. We will next apply this approach in a contained field trial to assess yield responses to nematode challenge of recently developed transgenic plantains with resistance to R. similis and H. multicinctus [34]. This will establish whether LAI is a reliable diagnostic tool for assessing nematode resistance in such plants.

Acknowledgments
The authors thank the International Institute for Tropical Agriculture, Uganda for providing the land for the trial and laboratory facilities.

Author Contributions
Conceived and designed the experiments: HR DC HA. Performed the experiments: EM. Analyzed the data: HR EM HA. Contributed reagents/materials/analysis tools: HR DC LT HA. Wrote the paper: HR DC HA.

References
1. Bridge J, Price NS, Kofi P (1995) Plant parasitic nematodes of plantain and other crops in Cameroon, West Africa.
Fundam Appl Nematol 18: 251–260.
2. Sarah JL, Pinochet J, Stanton J (1996) The burrowing nematode of bananas, Radopholus similis Cobb 1913. Musa Pest Fact Sheet No. 1. Montpellier, France: INIBAP.
3. Gowen SC, Quénéhervé P, Fogain R (2005) Nematode parasites of bananas and plantains. In: Luc M, Sikora RA, Bridge J, editors. Plant parasitic nematodes in subtropical and tropical agriculture. Second edition. Wallingford, UK: CAB International. 611–643.
4. Atkinson HJ (2003) Strategies for resistance to nematodes in Musa spp. In: Atkinson HJ, Dale J, Harding R, Kiggundu A, Kunert K, Muchwezi JM, Sagi L, Viljoen A, editors. Genetic transformation strategies to address the major constraints to banana and plantain production in Africa. Montpellier, France: INIBAP. 74–107.
5. Pattison AB, Stanton JM, Cobon JA, Doogan VJ (2002) Population dynamics and economic threshold of the nematodes Radopholus similis and Pratylenchus goodeyi on banana in Australia. Int J Pest Manage 48: 107–111.
6. Bridge J, Gowen SC (1993) Visual assessment of plant parasitic nematodes and weevil damage on banana and plantain. In: Gold CS, Gemmill B, editors. Biological and Integrated Control of Highland Banana and Plantain Pests and Diseases: Proceedings of a Research Coordination Meeting, 12–14 November 1991, Cotonou, Benin. Ibadan, Nigeria: IITA. 147–154.
7. Moens T, Araya M, De Waele D (2001) Correlation between nematode numbers and damage to banana (Musa AAA) roots under commercial conditions. Nematropica 31: 55–65.
8. Kashaija IN, Speijer PR, Gold CS, Gowen SC (1994) Occurrence, distribution and abundance of plant parasitic nematodes of bananas in Uganda. Afr Crop Sci J 2: 99–104.
9. Moens T, Araya M, Swennen R, De Waele D (2006) Reproduction and pathogenicity of Helicotylenchus multicinctus, Meloidogyne incognita and Pratylenchus coffeae, and their interaction with Radopholus similis on Musa. Nematology 8: 45–58.
10. Brentu CF, Speijer PR, Green K, Hemeng B, De Waele D, et al.
(2004) Micro-plot evaluation of the yield reduction potential of Pratylenchus coffeae, Helicotylenchus multicinctus and Meloidogyne javanica on plantain cv. Apantu-pa (Musa spp., AAB-group) in Ghana. Nematology 6: 455–462.
11. Speijer PR, De Waele D (1997) Screening of Musa germplasm for resistance and tolerance to nematodes. Montpellier, France: INIBAP. 47 p.
12. Turner DW (1997) Ecophysiology of bananas: the generation and functioning of the leaf canopy. Acta Hortic 490: 211–222.
13. Guan J, Nutter FW (2002) Relationships between defoliation, leaf area index, canopy reflectance, and forage yield in the alfalfa-leaf spot pathosystem. Comput Electron Agr 37: 97–112.
14. Nyombi K, Van Asten PJA, Leffelaar PA, Corbeels M, Kaizzi CK, et al. (2009) Allometric growth relationships of East Africa highland bananas (Musa AAA-EAHB) cv. Kisansa and Mbwazirume. Ann Appl Biol 155: 403–418.
15. Bréda NJJ (2003) Ground-based measurements of leaf area index: a review of methods, instruments and current controversies. J Exp Bot 54: 2403–2417.
16. Kreye C, Bouman BAM, Castañeda AR, Lampayan RM, Faronilo JE, et al. (2009) Possible causes of yield failure in tropical aerobic rice. Field Crops Res 111: 197–206.
17. Chen JM, Rich PM, Gower ST, Norman JM, Plummer S (1997) Leaf area index of boreal forests: Theory, techniques, and measurements. J Geophys Res 102: 29429–29443.
18. Peper PJ, McPherson EG (1998) Comparison of five methods for estimating leaf area index of open-grown deciduous trees. J Arboriculture 24: 98–111.
19. White MA, Asner GP, Nemani RR, Privette JL, Running SW (2000) Measuring fractional cover and leaf area index in arid ecosystems: digital camera, radiation transmittance, and laser altimetry methods. Remote Sens Environ 74: 45–57.
20. Leblanc SG, Fernandes R, Chen JM (2002) Recent advancements in optical field leaf area index, foliage heterogeneity, and foliar angular distribution measurements. In: Proceedings of IGARSS 2002, Toronto, Canada, 24–28 June.
21.
Fogain R, Gowen SC (1998) Yangambi km5 (Musa AAA, Ibota subgroup): a possible source of resistance to Radopholus similis and Pratylenchus goodeyi. Fundam Appl Nematol 21: 75–80.
22. Coyne DL, Wasukira A, Dusabe J, Rotifa I, Dubois T (2010) Boiling water treatment: a simple, rapid and effective technique for nematode and banana weevil management in banana and plantain (Musa spp.) planting material. Crop Prot 29: 1478–1482.

Table 2. The impact of the addition of nematodes prior to transplantation into the field on days to harvest, plant height at harvest and pseudostem girth at harvest for two crop cycles.

Gonja manjaya, nematodes added: days to harvest 442.3 ±3.34 (first) and 714.9 ±8.55 (second); plant height (cm) 255.4 ±3.72** and 218 ±4.69***; pseudostem girth (cm) 43.8 ±0.542*** and 37.2 ±0.850***.
Gonja manjaya, nematodes not added: days to harvest 440.3 ±3.05 and 712.8 ±6.72; plant height 269.4 ±2.28 and 257.6 ±4.77; girth 48.4 ±0.389 and 44.5 ±0.824.
Mbwazirume, nematodes added: days to harvest 363.6 ±2.48** and 630.7 ±7.38; plant height 231.3 ±1.54*** and 231.2 ±2.93***; girth 47.9 ±0.450 and 46.8 ±0.718***.
Mbwazirume, nematodes not added: days to harvest 376.4 ±3.03 and 617.1 ±6.46; plant height 243.7 ±1.44 and 264.1 ±2.92; girth 53.9 ±3.63 and 52.5 ±0.750.
Yangambi km5, nematodes added: days to harvest 411.3 ±3.12*** and 613.1 ±5.62*; plant height 198 ±1.81 and 250.7 ±3.38; girth 38.1 ±0.453 and 43.5 ±0.801.
Yangambi km5, nematodes not added: days to harvest 431.9 ±4.01 and 636 ±7.18; plant height 197.7 ±1.90 and 245.1 ±3.98; girth 38.7 ±0.401 and 42.3 ±0.753.

***P<0.001; **P<0.01; *P<0.05; t-test comparison between the two treatments for each cultivar at harvest. Each mean (± SEM) is based on 83–138 and 27–137 plants for the first and second harvests, respectively. doi:10.1371/journal.pone.0053355.t002

23. Coyne DL, Nicol J, Claudius-Cole A (2007) Practical plant nematology: a field and laboratory guide. Ibadan, Nigeria: IITA. 82 p.
24. Little TM, Hills FJ (1978) Agricultural experimentation: Design and analysis. New York, USA: Wiley. 350 p.
25.
Rich PM, Wood J, Vieglais DA, Burek K, Webb N (1999) Guide to HemiView: software for analysis of hemispherical photography. Cambridge, England: Delta-T Devices. 79 p.
26. Snedecor GW, Cochran WG (1989) Statistical methods, eighth edition. Ames, USA: Iowa State University Press. 503 p.
27. Barker KR, Olthof THA (1976) Relationships between nematode population densities and crop responses. Annu Rev Phytopathol 14: 327–353.
28. Seinhorst JW (1965) The relationship between nematode density and damage to plants. Nematologica 11: 137–154.
29. Tixier P, Salmon F, Chabrier C, Quénéhervé P (2008) Modelling pest dynamics of new crop cultivars: the FB920 banana with the Helicotylenchus multicinctus-Radopholus similis nematode complex in Martinique. Crop Prot 27: 1427–1431.
30. Prot JC, Savary S (1993) Interpreting upland rice yield and Pratylenchus zeae relationships: correspondence analyses. J Nematol 25: 277–285.
31. Hartman JB, Vuylsteke D, Speijer PR, Ssango F, Coyne DL, et al. (2010) Measurement of the field response of Musa genotypes to Radopholus similis and Helicotylenchus multicinctus and the implications for nematode resistance breeding. Euphytica 172: 139–148.
32. Fogain R (2000) Effect of Radopholus similis on plant growth and yield of plantains (Musa, AAB). Nematology 2: 129–133.
33. Gaidashova SV, van Asten PJA, Delvaux B, De Waele D (2010) The influence of the topographic position within highlands of Western Rwanda on the interactions between banana (Musa spp. AAA-EA), parasitic nematodes and soil factors. Sci Hortic-Amsterdam 125: 316–322.
34. Roderick H, Tripathi L, Babirye A, Wang D, Tripathi J, et al. (2012) Generation of transgenic plantain (Musa spp.) with resistance to plant pathogenic nematodes. Mol Plant Pathol 13: 842–851.
work_jdy3e74jsbbctncr7zd7fsww6y ---- Manual Engagement and Automation in Amateur Photography

Bernd Ploderer, Tuck Wah Leong

Abstract

Automation has been central to the development of modern photography and, in the age of digital and smartphone photography, now largely defines the everyday experience of the photographic process. In this paper, we question the acceptance of automation as the default position for photography, arguing that discussions of automation need to move beyond binary concerns of whether to automate or not and, instead, to consider what is being automated and the degree of automation, couched within the particularities of people's practices. We base this upon findings from ethnographic fieldwork with people engaging manually with film-based photography. While automation liberates people from having to interact with various processes of photography, participants in our study reported a greater sense of control, richer experiences, and opportunities for experimentation when they were able to engage manually with photographic processes.

Keywords: photography, film development, automation, social media, Flickr

Automating Photography

The progressive introduction of automation is a key aspect of the development of modern photography. More than 100 years ago, Kodak began paving the way by gradually automating different aspects of photography (Marien, 2011). The Kodak Brownie camera, designed with a fixed focus and shutter time, meant that photographers only had to frame their subject and press a button to take a photo. Furthermore, cameras could now be sent to Kodak factories, where film was taken out of the camera, developed and printed by machines. The prints, negatives, and the camera with a new roll of film in it were sent back to the customer.
[Note: This is the accepted author manuscript. The published version is available at https://doi.org/10.1177/1329878X17738829]

Kodak not only made photography more convenient to the masses; Kodak's automation made photography accessible to anyone wishing to take pictures without their having to know or learn how to process and print film. What is being automated and the extent of automation continue to advance in contemporary photography. Various aspects of phototaking have been automated, even prior to the advent of digital photography, such as automatic focussing and adjustment of exposure time and aperture. Image processing has also been largely automated, with algorithms that can apply image corrections and filters through a click on an app. Wearable cameras take automation a step further. Once clipped on, these cameras automatically and continuously take photos without the wearer having to do anything (Ljungblad, 2007). Camera traps used by wildlife photographers automatically take photos without the photographer being present. Computer vision algorithms have automated sight itself (Manovich, 1996), with cameras recognising faces and objects in the image and, thus, panning and zooming accordingly.

This paper contributes to discussions surrounding automation by drawing upon findings from an ethnographic work conducted by the lead author with a photography club whose amateur photographer members chose to manually engage with the photographic processes. The findings reflect how these amateur photographers experienced the manual aspects of photography, as well as their views about automation in photography.
Their experiences highlight a range of perceived benefits when people take the time to manually engage with photography, such as a sense of control over equipment and oneself, the opportunity to fully involve one's body and senses when developing and printing photos in the darkroom, and opportunities for experimentation and creative accidents with old and imperfect equipment. It is not surprising that a study of amateur photographers would extol the benefits of manual photographic processes. However, their experiences highlight particular affective relationships to technology and automation that are of interest at a time when we are seeing increased analogue nostalgia within both manual and digital photography practices (Caoduro, 2014; Pickering and Keightley, 2006). Furthermore, a study of photographers who actively chose to engage with manual photography practices gives us an opportunity to observe and uncover how manual aspects of the technology and manual processes could potentially afford the reported perceived benefits. We are not, in this paper, demonizing or decrying the increasing automation of photography. The amateur photographers we spoke to also acknowledged that automation has simplified various aspects of photography. At the same time, we urge the need to be more nuanced when discussing automation – not seeing automation as an either-or proposition but rather considering how automation could best allow people to meet their needs, goals and experiences in their particular activities and practices, at particular moments in time.

Manual Work and Amateur Aspirations in Photography

Even with highly automated technology, photography still requires work and involves a variety of manual processes. Chalfen (1987), in an often-cited study of 'Kodak culture' consumer photography, highlighted the work involved in curating images from birthdays and family holidays and in sharing these images and associated stories with family members and friends.
Sontag (1990) commented that photography appeals to the work ethic of many American, German and Japanese citizens because, by meticulously capturing people, objects and events, these citizens can approach their spare time in a work-like manner. Digital photography has introduced additional forms of 'photowork' (Kirk et al., 2006). 'Photowork' is a term coined to refer to the activities that people engage in after capturing photos. Photowork is particular to digital photography, as there is now a need to manage our burgeoning collections of digital photos: downloading images from the camera; browsing, searching, selecting, and editing images; creating collections; deleting images; creating back-ups; printing; and sharing pictures (Kirk et al., 2006). Smartphones have made it possible to conduct all photowork from a single device (Gómez Cruz and Meyer, 2012). However, labour is still required to share images and perform identity work (Vivienne and Burgess, 2013), for example, through printed photobooks and text messages, and via a range of social apps and social media platforms. Following this, we are seeing different efforts to automate photo sharing, allowing photographers to capture and share photos with family, friends and online networks at the click of a button.

Before we present the fieldwork, we highlight related work that suggests links between manual work and the aspirations of the 'amateur' in photography. We use the term 'amateur' here to refer to people who treat photography as a serious leisure activity (Stebbins, 1992) – that is, people who aspire to produce outputs that bear the qualities that professional photographers and artists create, but without financial ambitions.
Compared to other art forms like painting or playing a musical instrument, photography is relatively easy to learn, and thus the results produced by casual (vernacular) photographers, amateurs, or professionals can be difficult to distinguish from one another (Sontag, 1990; Burgess, 2009; Bourdieu, 2010). One of the ways amateurs (and professionals) distinguish their work from that of casual photographers is through their devotion to particular manual labour, such as experimenting with new techniques to achieve unique results. Furthermore, amateurs seek to distinguish themselves through a vocabulary and standards for judging the artistic quality of images that are inspired by professional and fine arts photography (Grinter, 2005; Ploderer et al., 2012; Schwartz, 1986). Similarly, with digital photography, Manovich (2016) notes that aspiring amateur photographers and professionals devote significant effort to creating a certain aesthetic for social media platforms, such as producing collections of images for Instagram. To distinguish their work from that of casual photographers, amateurs work towards aesthetics that follow either traditional professional standards of composition (e.g., rules of thirds) or the aesthetics of graphic design (e.g., reduced detail, a clear hierarchy of information). The case of 'influencer selfies' (Abidin, 2016) illustrates that some photographers spend hours constructing the 'perfect' image to stand out from other Instagram members and to generate an identity with high commercial value. In other words, being able to engage manually with various aspects of photography provides ways for amateurs to produce outputs that aspire towards standards of professional photography and to forge an identity as a credible photographer.

Ethnographic Fieldwork with a Film-Based Photography Club

In this study, fieldwork was conducted with an amateur photography club in Melbourne, Australia.
The aim was to examine the experiences of people who choose to engage more manually with the processes of photography. This club promotes traditional forms of film-based photography, and club members share a passion for vintage cameras, shooting on film, and developing and printing film in the darkroom. They prefer to have a high degree of control over the process of photography instead of using digitally automated technologies. These amateurs are not Luddites who reject digital technologies. On the contrary, all of them use social media to share their photos (digitally scanned from film). In fact, the club emerged out of a Flickr group of like-minded photographers and now constitutes a registered photography club that organizes exhibitions, competitions, and regular informal outings ('photowalks') to take photos and to socialize.

The fieldwork took place over a period of seven months and involved participant observations and interviews. The study began with observations of how club members interacted on the club's Flickr group. Data gathered from these observations included the topics of interest in online discussions, the types of photos that members took, and the identities of the most active club members. After a month of observation, the lead author joined the group officially and began to contribute actively to the group's online activities as well as participate in their offline gatherings. Reflective notes about how the lead author felt, his evolving perception of photography, and how his interactions with the group might have coloured his observations were kept throughout the fieldwork (Birks et al., 2008).

A series of semi-structured interviews was conducted with eight club members. The interviewees' ages ranged from 18 to 40 years, and their experience with film-based photography ranged from 3 to 16 years. All interviewees regarded film-based photography as their hobby, although some of them also did commercial work.
(Table 1 contains the relevant information of the interviewees. All names have been anonymized.) These interviews sought to elicit discussion of the photographers' experiences with film-based photography. For the first interview, participants were asked to bring one of their cameras to talk about their approach to taking photos, their engagement in the technical aspects of film-based photography, and their choice of equipment. A second interview with the same participants was carried out in the fourth month of the study. For this interview, participants were asked to bring photos that reflected their interest in film-based photography. These photos were used as triggers to discuss what participants value in a photo, including how it is produced and their personal aspirations regarding photography.

Name (anonymized) | Age | Occupation | # Years into film-based photography
Diane | 40 | Chef | 4
Gary | 18 | Student | 3
Henry | 21 | Student | 3
Ken | 25 | Professional photographer | 5
Martin | 34 | Technician, part-time photographer | 6
Robert | 37 | Film reviewer, part-time photographer | 4
Sebastian | 37 | Restaurant owner | 16
Steve | 39 | Graphic designer | 10

Table 1. Demographics of the participants.

While the lead author was responsible for all aspects of the data collection, all authors were involved in the data analysis. Iterative coding of the field notes and interview transcripts was carried out in NVivo, with a focus on people's experiences of the process of photography (Miles and Huberman, 1994).

Findings

A number of themes emerged when analysing participants' experiences. Although participants' personal anecdotes and comments highlight challenges and failures in engaging with manual processes of photography, they also reveal the pleasure and pride these participants take in this engagement. Quotes from interviewees are labelled to show both the person and the interview from which the quote is taken.
Control over Equipment and Oneself in Taking Photos

Being able to manually engage in the entire process of photography was important for the club members interviewed. This is because club members wanted control over all aspects of the process and aspired towards mastery of at least some part of the process. In photo taking, the club members emphasised a desire to have control over subjects, location, available light, and equipment. And with equipment such as a film-based camera, one has to know how the camera works because there is no automation provided. In fact, we found that club members typically preferred older cameras, which do not even have an in-built light meter. This meant that they had to rely on a separate light meter to obtain light readings to guide adjustments of the aperture and shutter speed on their cameras. Furthermore, using such cameras effectively requires careful framing and typically manual focusing. For example, Sebastian's preferred camera was a Leica rangefinder (see Figure 1), which, he said, delivered excellent results, yet was also, in his own words, 'painful to use'. 'It's like a wild horse, and you need to tame it; it requires mastery, whereas a lot of Japanese cameras just do it automatically'. Similarly, Ken argued passionately that 'you don't actually need a computer to get brilliant results'. He was referring to the work of Ansel Adams (a famous American landscape photographer known for his mastery of the camera and the darkroom) to argue that all you need is 'to know what you are doing'.

Figure 1: A Leica M3 rangefinder camera, which requires mastery to set exposure time, frame and focus.
Image reproduced with permission from the photographer, http://flickr.com/photos/pgoyette/236884240/

Indeed, club members saw the challenges that come with manual control as adding to the interest and enjoyment of photography:

It [manually engaging with photography] is challenging, which is part of the fun, because you don't want it to be too easy. You don't just want to turn up and do 'click' and that's done – that's easy. It would be great to just go click and to have an outstanding photo, but it would be a bit boring, wouldn't it? (Henry, I2)

Besides interest and enjoyment, photographers persevered in part because they saw this 'struggle' as part of the journey that the great photographers they admire had also undergone: overcoming challenges and persevering with mastering the processes in order to produce great photographs. Also, some mentioned that developing the ability to control a camera that appears simple yet is cumbersome to use gave them a sense of pride in their personal agency and level of skill in photography. In other words, a good photo is not simply the result of a great camera but also, according to participants like Ken, requires a skilful photographer:

I think anyone who has a professional looking camera and can take reasonably good photos has probably had comments like 'your camera really takes good photos!' It happens all the time, and that's really annoying, because if I had given you the camera, you wouldn't have taken the good photo. And I'm not getting any respect for the photo – I'm the one who took it! (Ken, I1)

Alongside mastering the camera and its processes, controlling oneself to wait for photographic opportunities is one of the key tenets of producing a 'good' photo. Contemporary wearable cameras and camera traps can automate this process and alleviate the need to wait for a photo opportunity.
However, the photographers in our study argued that the person needs to be attuned to, aware of, and engaged with the environment prior to taking the photo. This entails sharpening and heightening their senses in relation to their surroundings. That is why many photographers stress the importance of taking time to prepare the shot, to wait for a 'decisive moment' when something significant happens and all the elements of the picture come together as a near-perfect composition (Cartier-Bresson, 2014). The ability to wait for a decisive moment is a common skill among most photographers, not one limited to film-based photographers.
Involvement of Senses and Time to Control Results
A second key theme was how engaging with the manual processes of photography allowed people to use their bodies and senses to have some control over the aesthetics of the end product. This was often accompanied by having to slow down and attune oneself to the processes. For those working with film-based photography, spending time in the darkroom to develop films is a very visceral experience that results in a heightened awareness of their senses and of time. Some of the chemicals used to develop and print film are toxic, smell sharp, and need to be handled with care to avoid harm. The pictures on the exposed film are also fragile, requiring careful handling. To expose the image onto photographic paper, photographers adjust the image by using their hands to selectively add or block light to certain areas ('burning' and 'dodging'). This manual process, which results in an image gradually emerging onto a white sheet of photographic paper, is described as magical and memorable: 'Nothing beats processing the film, placing it on the enlarger and watching the print develop from thin air. I still remember my first roll of Ilford Delta 100 – it was like magic!' (Steve, I1).
This manual process allows people to experiment with the chemicals and to try different treatments, different papers, different durations, and colour filters. Through tinkering with these variables, people learn to refine their prints step by step. Over time they also learn ways to further manipulate the image, whether through selectively adding light to enhance contrast, or by treating the photographic paper with additional chemicals to add a slightly different tone and feel to the image. Being able to produce different prints from one negative, making incremental improvements, and holding the printed image in one's hands all add to the thrill and magic of learning the art of photography (see Figure 2). Figure 2: Different prints produced from the same image, by changing exposure time as well as by adding light to certain areas (e.g., 'burning' the sky in the second image to the left). Image produced by the first author. Printing images takes more time and is a more unpredictable process compared with digital photography, where the process can be supported by image-editing software such as Adobe Photoshop. For club member Gary, Photoshop means you can say 'it's too dark, let's make it brighter'. Whereas for us, we have to make the print, develop it, and fix it, which takes 4 to 5 minutes before we can evaluate the impact, the effect of the settings we chose. So every time we do something, if we want to make it lighter or darker or change the contrast, we have to guess at the result and then test it, which can spin out the time required to do the work tremendously. (Gary, I1) However, for Gary, this process offers him an ability to express his sense of aesthetics and artistic expression, and who he is as a photographer: 'I like the process because it has a particular aesthetic that matches what I want my art to say. It allows me to present my work the way I want to.
I can present it as big prints on a specific paper, which I could do with digital, but the process is different. I don't feel as involved with digital' (Gary, I1). While developing and printing photos can be engaging, it is important to acknowledge that this desire to work in the darkroom is far from universal among club members. For Gary, Henry and Ken, the initial motivation to develop their own film was to save money. It was only through spending time working in the darkroom that they gradually developed their passion for developing and printing their own film. Diane, Sebastian and Steve, on the other hand, only occasionally develop and print their photos in the darkroom. Most of the time they seek the convenience of a photo lab to get their films developed and printed. For Sebastian, for instance, 'the thrill is gone once I've shot. … No, it's a different art. If I take good pictures, it doesn't mean I'm a good printer. If someone is a good printer, it doesn't mean he is a good photographer. Photography is very sort of fast moment, not organised. It can't be organised. It's very free. It just takes the person to press the button. Whereas chemical printing requires discipline, timing, patience, exactness.' (Sebastian, I1) Beyond that, some participants reported finding manual processes of photography, such as scanning and post-processing, boring and even tedious. Steve scans all his photos at the highest possible resolution so that he can potentially produce large prints from the digital copy. It takes him a long time to scan and to use Photoshop to remove the dust particles that appear on the digital scans. Although this dust-removal process can be automated, none of the people interviewed use this feature because it blurs the image slightly and thus diminishes its quality. Steve described the rather tedious nature of scanning: 'One photo takes about 45 minutes. It takes a long time.
… Every year the pile [of photos] just gets bigger and bigger. When I get a free moment, I scan one photo. There are lots of little spots here and there and I try to remove them. It's a lot of pain.' (Steve, I1)
Idiosyncrasies and Experimentation
Manual photography not only allows the club members to better control the process, but also enables them to exploit the idiosyncrasies found in imperfect equipment and to experiment with different parts of the process. Using older cameras means dealing with imperfections. Old cameras are often not lightproof, which creates unpredictable effects upon the final image. While photographers who strive for perfection would try to avoid light leaks at all costs, the club members embraced these flaws because they can add a unique element to a photo. One such effect is lens flare, which has become so popular that many digital photographers add it during post-production using Photoshop. However, from the perspective of those we interviewed, digitally added lens flare cannot compensate for the unpredictable and unique lens flare created by using old lenses: What I like about this lens is the way that it handles light. Sometimes light seems to creep around, like flares. And it looks quite beautiful with this lens. It's quite beautiful to look at. (Robert, I1) Adopting manual processes of photography means that, over time, a photographer becomes more familiar with their camera and the film they use and gains a better understanding of how light behaves and how to control the camera. Participants in this study felt that this knowledge afforded them opportunities to experiment. Experimenting goes hand-in-hand with learning. One of the participants, Martin, experimented with how novel images can be created through the playful use of ice and light because he wanted to take the idea of the pinhole camera further. Martin tried to freeze film in a block of ice before exposing it to light.
He persevered and tinkered with the process, refining it and trialling it until he was satisfied with the results: And then the next day I took that block of ice, which was by now a cylinder of film inside a block of ice, and I put that into a cardboard box. And the cardboard box I sealed up, and then I put pinholes on all the sides. And I put it outside for about 15 minutes. It turned out really good, to everybody's surprise. (Martin, I1) Wanting to experiment with pattern effects on the image (see Figure 3), Diane went as far as destroying her negative by putting it in a salt solution before scanning: 'the negative was destroyed, it has salt crystals on the surface; it was just an experiment' (Diane, I1). Figure 3: Experimentation with salt on a negative, shared on Flickr, which in turn creates opportunities for cooperation and learning.
Discussion
The findings show a range of benefits as well as rich and personally meaningful experiences that photographers gain when engaging with photography manually. The fact that amateurs highlight experiences they find meaningful and beneficial is unsurprising. What is important, however, is that they would have missed out on these experiences had their interactions with photography been highly automated. We saw how manual engagement with photography could provide the person with opportunities to manipulate and improvise with the equipment and to exert control over various aspects of the process, the equipment, and one's senses and body. Manual engagement also offers opportunities to experiment with processes to produce potentially innovative outcomes, as one pursues outcomes that meet one's aesthetics and judgements of quality while at the same time learning and developing one's skills. Feeling able to fully control and modulate the process, and having the opportunity to affect the desired outcomes, adds to one's sense of satisfaction, pride and agency.
At the same time, manual engagement means slowing down and taking more time, which in photography supports opportunities for reflection and learning. Taking time and slowing down in photography resonates with broader trends of the 'slow movement' through 'slow food', 'slow travel', 'slow technology', and so forth (Hallnäs and Redström, 2001). 'Slow food', for example, has been a response to quick and cheap food products, in order to connect people more closely with their food and to preserve culture and traditions (Andrews, 2008). Similarly, for the photographers in this study, manual engagement is a way for them to connect more intimately with photography: to preserve traditional forms of photography, and to walk in the footsteps of film photographers like Ansel Adams and Henri Cartier-Bresson. In fact, Fred Conrad from the New York Times Lens blog (Conrad, 2009) succinctly summarises the ideal of 'slow photography' that the photographers in this study strove for: 'One advantage of using larger formats is that the process is slower. It takes time to set up the camera. It takes time to visualize what you want. When doing portraits, it enables the photographer to talk and listen to subjects, to observe their behavior. When I use an 8-by-10 camera for portraits, I will compose the picture and step back. Using a long cable release, I will look at the subject and wait for the moment'. While manual engagement may also increase opportunities for mistakes, people get a chance to learn from their mistakes and, in certain situations, even turn mistakes into interesting and desirable outcomes. It is important to note that opportunities for manual engagement are not limited to traditional forms of photography. In fact, modern digital SLRs provide perhaps an even greater set of controls for photographers to master than the cameras used by the photographers in our study.
Similarly, one can argue that photo-editing software offers an even wider range of options for image manipulation than the darkroom. Conversely, not all film-based photographers engage manually with all processes. Even among the club members who are passionate about photography, many use photo labs to get their film developed and printed in order to save time and effort. What is different with digital devices and software, however, is that tinkering is more difficult. Steve's experimentation in the darkroom and Martin's example of the pinhole camera highlight the opportunities for tinkering provided by traditional forms of photography. By contrast, manufacturers or vendors of digital devices and software often limit or discourage people's ability to tinker with these technologies. As discussed by Gillespie (2006) more broadly in the context of digital media, contemporary digital devices and software are often designed as black boxes, hiding their inner workings behind a user-friendly interface. Additionally, warranty settings, end-user agreements, and encrypted code prevent people from opening up a camera or software, which may constrain users' agency and creative exploration.
Consistency and Experimentation in the Craft of Photography
Our observations of, and findings from, the experiences of amateur photographers engaging manually with processes of photography highlight many parallels with discussions of craftsmanship. The similarities include the desire for control and agency over one's processes, the preference for handwork and having physical evidence of one's labour (Treadaway, 2009), as well as the aspiration and drive to create outcomes of high technical and artistic quality (Sennett, 2008). In fact, Sennett's (2008) study of the craftsman argues that it is the aspiration for quality that drives the craftsman to improve, so as to get better rather than to get by.
To be a good craftsman is to be engaged with the process, developing skills to a high degree, and in general being dedicated to good work for its own sake. For the amateur photographers in this study, adherence to this 'craft' of photography offers a way to distinguish their work from that of casual photographers, such as those who share their photographs on Flickr. While casual photographers on Flickr can produce images of high quality, having control over the processes allows club members and other amateur photographers to produce high-quality images with a higher degree of consistency. This consistency in quality is reminiscent of Pye's (1968) notion of 'workmanship of certainty'. It was particularly noticeable in film processing and printing, where the aim is to reduce any risk of destroying the original image. Experimentation, on the other hand, is characteristic of a 'workmanship of risk' (Pye, 1968). With manual processes, the quality of results is not pre-determined. Furthermore, the quality of the result is continually at risk during the process; in the examples provided by Diane and Martin, the film and camera were even destroyed in the process. While automation in photography leads to consistent results (e.g., automated film processing in a photo laboratory will reliably produce images of high quality), it is exactly the risk in the process (e.g., putting salt on the negative, cross-processing, pushing and pulling film) that leads to unique results and diversity. Seeing how photography can bear elements of both workmanship of certainty and workmanship of risk reveals opportunities as well as tensions in combining automation and manual engagement in new ways. Put simply, automation promotes workmanship of certainty, where results are predictable and reliable (Pye, 1968). In other words, there are benefits to both processes.
However, we argue that automation limits people's opportunities to experiment, particularly experimentation that relies on risk and accidents. There are a number of new technologies that afford experimentation with photography. For example, unmanned aerial vehicles (UAVs or 'drones') allow photographers to experiment with perspectives, whereas light-field cameras allow photographers to play with the focus after taking the picture. Both technologies provide room for experimentation and both require a high degree of mastery to get refined results. What is missing from such technologies, however, is the 'workmanship of risk' that we observed in our work with film-based photographers, where the quality of the outcome might be diminished in the process or the image destroyed altogether. Many drones stream live video to mobile phones, and light-field cameras similarly have image displays that offer immediate feedback to the photographer. Hence, a photographer can assess the impact of her actions with the drone or light-field camera on the quality of the image in close to real time. Such tight feedback loops between action and image may foster experimentation and reduce the likelihood of having no image at all, but they also reduce opportunities for surprises and creative accidents to occur.
Conclusion
Through presenting the findings from our ethnographic fieldwork with amateur photographers who engaged with film-based photography, we have seen the benefits that these photographers perceived through having manual control and engaging in certain manual processes of photography. However, we also saw how automation benefitted some of them. In fact, if designers were to favour one over the other, or, worse, to preclude one or the other, they would be depriving these photographers of valuable choices.
The choice to turn to manual processes or automation, at least in what we observed with the photographers interviewed for this study, is determined by the situation, by people's skills and comfort level, and by people's goals and aspirations. We argue that we should avoid simplistic binaries, such as whether to include or to exclude automation, when designing technologies that intervene in people's activities and practices. Instead, we need to be much more nuanced when discussing automation. For example, in photography, we need to discuss the extent of the automation. Are we talking about fully automating the entire process, as with point-and-shoot cameras, or about automating only some of the processes, such as developing the images? This means considering what is being automated and why. In technology design, we argue that much more thoughtful research needs to take place, whereby automation is not simply the default stance. As the experiential data from our ethnographic fieldwork reveal, it is important to consider how automation can affect the people involved in particular activities, their practices, their goals and so on. Through this, we hope that future technologies will not only be easy to use but will also provide opportunities for people to engage actively with all their senses, to develop a greater appreciation of their activities, and to experience technology as empowering, supportive of their goals and what they value.
Acknowledgements
We gratefully acknowledge the contributions of the photography club. Furthermore, we would like to thank the anonymous reviewers for their suggestions to strengthen this article.
References
Abidin C (2016) 'Aren't these just young, rich women doing vain things online?': Influencer selfies as subversive frivolity. Social Media + Society 2(2). DOI: 10.1177/2056305116641342.
Andrews G (2008) The Slow Food Story: Politics and Pleasure. London: Pluto Press.
Birks M, Chapman Y and Francis K (2008) Memoing in qualitative research: Probing data and processes. Journal of Research in Nursing 13(1): 68–75.
Bourdieu P (2010) Distinction: A Social Critique of the Judgement of Taste. London: Taylor & Francis.
Burgess JE (2009) Remediating vernacular creativity: Photography and cultural citizenship in the Flickr photosharing network. In: Edensor T, Leslie D, Millington S, et al. (eds) Spaces of Vernacular Creativity: Rethinking the Cultural Economy. London: Routledge, pp. 116–126.
Caoduro E (2014) Photo filter apps: Understanding analogue nostalgia in the new media ecology. Networking Knowledge: Journal of the MeCCSA Postgraduate Network 7(2).
Cartier-Bresson H (2014) The Decisive Moment. Reprint. Göttingen: Steidl Verlag.
Chalfen R (1987) Snapshot Versions of Life. Bowling Green, OH: Bowling Green State University Popular Press.
Conrad FR (2009) Slow photography in an instantaneous age. Available at: http://lens.blogs.nytimes.com/2009/05/17/essay-slow-photography-in-an-instantaneous-age/
Gillespie T (2006) Designed to 'effectively frustrate': Copyright, technology and the agency of users. New Media & Society 8(4): 651–669.
Gómez Cruz E and Meyer ET (2012) Creation and control in the photographic process: iPhones and the emerging fifth moment of photography. Photographies 5(2): 203–221.
Grinter RE (2005) Words about images: Coordinating community in amateur photography. Computer Supported Cooperative Work (CSCW) 14(2): 161–188.
Hallnäs L and Redström J (2001) Slow technology: Designing for reflection. Personal and Ubiquitous Computing 5(3): 201–212.
Kirk D, Sellen A, Rother C, et al. (2006) Understanding photowork. In: Grinter R, Rodden T, Aoki P, et al. (eds) Proceedings of the Conference on Human Factors in Computing (CHI 2006). New York: ACM, pp. 761–770.
Ljungblad S (2007) Designing for new photographic experiences: How the lomographic practice informed context photography.
In: Koskinen I and Keinonen T (eds) Proceedings of the Conference on Designing Pleasurable Products and Interfaces (DPPI 2007). New York: ACM, pp. 357–374.
Manovich L (1996) The automation of sight: From photography to computer vision. In: Druckrey T (ed) Electronic Culture: Technology and Visual Representation. New York: Aperture, pp. 229–239.
Manovich L (2016) Instagram and Contemporary Image. Available at: http://manovich.net/index.php/projects/instagram-and-contemporary-image.
Marien MW (2011) Photography: A Cultural History. Upper Saddle River, NJ: Pearson.
Miles MB and Huberman AM (1994) Qualitative Data Analysis: An Expanded Sourcebook. Thousand Oaks, CA: Sage.
Pickering M and Keightley E (2006) The modalities of nostalgia. Current Sociology 54(6): 919–941.
Ploderer B, Leong T, Ashkanasy S, et al. (2012) A process of engagement: Engaging with the process. In: Olivier P, Wright P, Blevis B, et al. (eds) Proceedings of the Designing Interactive Systems Conference. New York: ACM, pp. 224–233.
Pye D (1968) The Nature and Art of Workmanship. London: Cambridge University Press.
Schwartz D (1986) Camera clubs and fine art photography: The social construction of an elite code. Journal of Contemporary Ethnography 15(2): 165–195.
Sennett R (2008) The Craftsman. New Haven: Yale University Press.
Sontag S (1990) On Photography. New York: Anchor Books.
Stebbins RA (1992) Amateurs, Professionals, and Serious Leisure. Montreal: McGill-Queen's University Press.
Treadaway CP (2009) Hand e-craft: An investigation into hand use in digital creative practice. In: Bryan-Kinns N, Gross M, Johnson H, et al. (eds) Proceedings of the Conference on Creativity and Cognition (C&C 2009). New York: ACM, pp. 185–194.
Vivienne S and Burgess J (2013) The remediation of the personal photograph and the politics of self-representation in digital storytelling. Journal of Material Culture 18(3): 279–298.
Patient-posture and Ileal-intubation during colonoscopy (PIC): a randomized controlled open-label trial
Authors: Sk Mahiuddin Ahammed, Kshaunish Das, R. Sarkar, J. Dasgupta, S. Bandopadhyay, G. K. Dhali
Institution: Division of Gastroenterology, School of Digestive and Liver Diseases, IPGME and R, Kolkata, West Bengal, India
Submitted: 16 June 2013; accepted after revision: 31
January 2014
Bibliography: DOI http://dx.doi.org/10.1055/s-0034-1365541. Published online: 7.5.2014. Endoscopy International Open 2014; 02: E105–E110. © Georg Thieme Verlag KG Stuttgart · New York. E-ISSN 2196-9736.
Corresponding author: Kshaunish Das, MD, DM, Division of Gastroenterology, SDLD, IPGME and R, 244 AJC Bose Road, Kolkata-700020, India. dockdas@gmail.com
Introduction
Since its introduction into clinical practice in the late 1960s, colonoscopy has now become a standard procedure [1,2]. After the initial report of successful ileal intubation during colonoscopy [3], ileoscopy is often recommended by some in unselected patients undergoing colonoscopy [4], or at least in those patients presenting with diarrhea or suspected Inflammatory Bowel Disease (IBD) or intestinal tuberculosis (ITB) [5–10]. In addition, identification of the ileocecal (IC) valve and ileal mucosa documents completeness of full-length colonoscopy [11,12]. However, various difficulties are encountered during ileoscopy. The IC valve may be difficult to locate when it is hidden behind large semilunar folds, if excess air is insufflated with stretching of the lumen, or if the valve is papillary in form and flush with the wall [1,13]. In various studies, the success of ileoscopy, without any special maneuvers, varies from 77.8% to 81.3%, with a time required to insert the scope into the ileum of, on average, 3.4 min (range: 30 s to 10 min) [14–16]. Despite this, with various endoscopic maneuvers, it is possible to successfully intubate the terminal ileum in a significant number of patients [16]. During colonoscopy, the endoscopist employs various maneuvers, including changing the patient's posture, to achieve complete colonoscopic examination [1,13,16]. If one position fails, another is attempted.
Posture change has also been reported to increase the success rate of ileal intubation, with the supine posture often bringing the IC valve to a 6–7 o'clock position (Fig. 1) [16,17]. However, there has been no randomized trial showing that a particular patient posture increases the success rate of ileoscopy. The present study was performed to determine the impact of the patient's posture (left lateral vs supine position) on the success rate of ileal intubation. In addition, we also assessed the yield (clinically relevant positive findings) of ileal intubation and the predictors of successful terminal ileal intubation.
Abstract
Background and aims: Patient's posture change is commonly employed by a colonoscopist to achieve complete examination. We studied whether the patient's posture (left-lateral decubitus vs supine) influenced the success rate of ileal intubation.
Patients and methods: In this prospective open-label randomized study performed in the Endoscopy Suite of a tertiary-care center, all adult outpatients referred for colonoscopy, in whom cecal intubation was achieved and who satisfied predefined inclusion criteria, were randomized to undergo ileal intubation in either of the above two postures. Colonoscopy (EC-201 WL, Fujinon) was performed after overnight poly-ethylene-glycol preparation, under conscious sedation and continuous pulse-oximetry monitoring. After confirming cecal intubation, patients were randomized for ileal intubation. Success was defined by visualization of ileal mucosa or villi (confirmed by digital photography) and was attempted until limited by pain and/or a time of ≥6 min.
Results: Of 320 eligible patients, 217 patients (150 males) were randomized, 106 to the left-lateral decubitus and 111 to the supine posture. At baseline, the two groups were evenly matched. Successful ileal intubation was achieved in 145 (66.8%) patients overall, and significantly more often in the supine posture (74.8% versus 58.5%; P=0.014). On multivariate analysis, supine posture (P=0.02), average/good right-colon preparation (P<0.01), non-thin-lipped ileocecal (IC) valve (P<0.001) and younger age (P=0.02) were independent predictors of success. Positive ileal findings were recorded in 13 (9%) patients.
Conclusion: Ileoscopy is more successful in the supine than in the left-lateral decubitus posture. Age, bowel preparation and type of IC valve also determine success.
Patients and Methods
Study Design
All outpatients (>12 years old) referred for colonoscopy between June and December 2010 were eligible for entry into the study provided they did not have any of the following conditions: 1) acute fulminant colitis; 2) acute intestinal obstruction; 3) suspected intestinal perforation; 4) peritonitis; 5) pregnancy; 6) severe cardiorespiratory disease (ASA grade >II); 7) decompensated liver disease; 8) recent pelvic or colonic surgery (in the last 6 months); 9) large aortic or iliac artery aneurysm; 10) Human Immunodeficiency Virus (HIV) infection; or 11) non-consent. Colonoscopy was performed in the Endoscopy Suite of the School of Digestive and Liver Diseases (SDLD), IPGME & R, Kolkata, a tertiary referral-cum-teaching hospital, with a videocolonoscope (EC-201 WL, Fujinon) by endoscopists with different skill levels, including third-year gastroenterology trainees and departmental consultants, under conscious sedation with titrated intravenous midazolam (dose: 0.05–0.1 mg/kg), while the patients were monitored by continuous pulse oximetry (Draco-oxy Biptronics Plethysmogram).
Randomization
In this randomized, open-label study, patients were randomized to undergo ileal intubation in either the left-lateral decubitus (LL) or supine (S) posture, after the cecum had been successfully intubated, by 1:1 randomization (equal proportions) using the sealed envelope method. An independent statistician generated random numbers on a computer using SPSS 13.0 for Windows (SPSS Inc, Chicago, Ill). He prepared hundreds of sealed envelopes containing random numbers for allocation, with equal odd–even proportions. An odd number allocated the patient to the LL posture group and an even number to the S posture group. An assistant nurse opened the envelopes just after completion of cecal intubation at colonoscopy.
Sample-size calculation
This was done with the free software G*Power 3.0.10 (Universität Kiel, Germany). Based on the literature, the reported success of ileoscopy without any maneuver was taken to be ~80% [14–16]. Expecting at least a 15% increase in success at ileal intubation after changing the patient's posture to supine, the sample size required using Fisher's exact 2-sided test to detect a statistically significant difference (α-error probability=0.05) was 216 (108 in each group) with 90% power (β-error=0.1).
Procedure
Colonoscopy and preparation
Patients were advised to take clear liquids on the night prior to colonoscopy. Bowel preparation was done by taking two tablets of bisacodyl (Tablet Dalculax 10 mg) at bedtime on the night before colonoscopy and a standard poly-ethylene glycol (PEG) preparation (containing 118.0 g PEG, 2.93 g sodium chloride, 1.48 g potassium chloride, 3.37 g sodium bicarbonate, 11.3 g anhydrous bicarbonate; Colopeg™, J.B. Chemicals & Pharmaceuticals Ltd, India) in the early morning on the day of colonoscopy. Adequacy of bowel preparation was assessed by the validated Boston Bowel Preparation Scale (BBPS or "bee-bops") [18]. Colonoscopy was started in the LL posture.
The endoscopist was allowed to apply all maneuvers to successfully intubate the cecum, which was confirmed by standard landmarks, i.e., the appendicular orifice, IC valve and transillumination [11]. The shape (Fig. 1) and position of the IC valve were recorded.

Ileoscopy and randomization
After confirmation of cecal intubation, and with the scope shortened and de-looped, patients were randomized to either the LL or S position if they satisfied the predefined inclusion criteria. Terminal ileal intubation was then attempted. Partial suction and scope maneuvers for ileal intubation were allowed [13]. Use of anti-peristalsis agents, biopsy forceps (as an "anchor" to facilitate IC valve intubation), and intubation in the retroflexed position were not allowed. If the ileum was not intubated within 6 min [19], the attempt was considered a failure and the study end point was reached. Further attempts at ileoscopy were at the discretion of the endoscopist.

Fig. 1 Upper panel shows the "down to the left" position taken by the IC valve when shifting the patient's posture from left-lateral decubitus to supine. Lower panel shows the four types of IC valve: thin-lip, single-bulge, double-bulge and volcanic.

Ahammed Sk Mahiuddin et al. Patient posture and ileal intubation during colonoscopy (PIC)… Endoscopy International Open 2014; 02: E105–E110

Outcome measures
The primary outcome was whether ileal intubation was achieved or not within the stipulated 6 min. The other outcomes recorded were: 1) time taken to intubate the ileum; 2) depth of ileal intubation; and 3) presence or absence of ileal pathology. Successful ileal intubation was defined as visualization of the ileal mucosa or villi (confirmed by digital photography).
[20] Time to ileal intubation was measured from the first attempt after randomization until visualization of the ileal mucosa, and was limited by pain or a time of ≤6 min [19]. After ileal intubation, ileal exploration was performed with minimal air insufflation and was limited by pain and/or the length of the colonoscope. Depth of ileoscopy was measured at the time of withdrawal of the scope, from the furthest point reached in the ileum to the IC valve. Any abnormal finding was recorded and biopsies were taken, if deemed necessary, by the performing endoscopist. Patient tolerability of ileoscopy was rated on a three-point score: 0, no pain; 1, pain not limiting the procedure; 2, pain limiting the procedure [19]. All adverse events were recorded, including hypoxia, tachy- and/or bradycardia.

Ethical clearance and statistical analysis
Each patient's data were meticulously recorded by one of the investigators (MA) in a structured proforma. Written informed consent about the study protocol was taken from each patient before colonoscopy, and the study was approved by the Institutional Ethics Committee (IEC).
All statistical analyses were performed on an intention-to-treat basis. Mean, median, standard deviation (SD), standard error of the mean (SEM), range and proportions were calculated, as appropriate. Categorical variables were analyzed using the χ2 test with the Yates correction or Fisher's exact test, as appropriate. Continuous variables were compared using Student's t-test. Multivariate analysis, to identify the predictors of successful ileal intubation, was done by binary logistic regression using variables found to be significant on univariate analysis (P<0.05). Statistical analysis was done using SPSS™ (Version 13 for Windows) software. A P-value of <0.05 was considered statistically significant.

Results
Between June 1 and December 31, 2010, of the 653 patients referred for colonoscopy, 623 (95.4%) had successful cecal intubation. Of the 653 patients, 320 (49%) were eligible for inclusion (Fig. 2). Of these, 217 (67.8%; 150 males) were randomized, with 106 and 111 undergoing ileoscopy in the left-lateral decubitus and supine posture, respectively (Fig. 2: CONSORT diagram).

Fig. 2 Flowchart of the study population: 653 patients referred for colonoscopy; cecal intubation achieved in 623 (95.4%); 320 eligible; 217 (67.8%) randomized (106 to ileoscopy in the left-lateral decubitus posture, 111 to ileoscopy in the supine posture; all analyzed). Not eligible (n=303): inpatients 223, past abdominal surgery 52, age <12 or >80 years 25, HBV/HCV positive 3. Excluded (n=103): refused 53, other endoscopist 50.

Table 1 Baseline characteristics of the two groups.
                                   LL decubitus (n=106)   Supine (n=111)   P-value
Age (mean ± SEM)                   40.7 ± 1.5             39.8 ± 1.4       0.7
Male (%)                           69                     69               1.0
R-colon preparation (BBPS) [%]                                             0.5
  Sub-optimal (score = 1)          23                     22
  Average (score = 2)              50                     44
  Good (score = 3)                 27                     34
IC valve type (%)                                                          0.3
  Thin-lipped (n = 70)             39                     30
  Single-bulged (n = 42)           19                     21
  Double-bulged (n = 66)           32                     31
  Volcanic (n = 31)                10                     18
Indication (%)                                                             0.3
  Bleeding P/R                     37                     26
  Diarrhea                         10                     14
  Anemia                           6                      14
  SAIO                             12                     10
  FBD                              12                     13
Endoscopist (%)                                                            0.7
  No. 1                            13                     9
  No. 2                            27                     24
  No. 3                            14                     15
  No. 4                            46                     52
P/R, per rectal; SAIO, subacute intestinal obstruction; FBD, functional bowel disorder.

At baseline (Table 1), the two groups were evenly matched with respect to demography, degree of colonic preparation, type of IC valve, indication for attempted colonoscopy, and the endoscopist performing the procedure.
Endoscopist 1 (JD) was senior, with >15 years' experience in colonoscopy and performing >70 ileoscopies per year. Endoscopists 2 (SB) and 3 (RS) had >3 years' experience in colonoscopy, while endoscopist 4 (MA) had >2 years' experience. Endoscopists 2, 3 and 4 had already done >100 ileoscopies, more than twice the number regarded as necessary to achieve competence [19].
Overall, the primary end point, successful ileal intubation, was achieved in 145 of 217 (66.8%) patients, significantly more often in the supine (74.8%) than in the left-lateral decubitus (58.5%) posture (P=0.014). The predominant reason for failure in 69 patients was inability to intubate the ileum within the stipulated predetermined time of ≤6 min, either alone (n=55, 80%) or combined with poor right-colon preparation (n=9, 13%).
Among the secondary outcomes (Table 2), the median time taken for ileal intubation was 74.0 s (range: 2 to 349 s) overall; the longer time required in the LL versus the S posture did not reach statistical significance. The mean (±SEM) depth of ileal intubation was 8.4±0.3 cm overall, with no difference between the two groups. No major adverse event developed in any of the patients in the trial. Among the minor adverse events, there was no difference between the two groups with respect to pain, hypoxia and bradycardia; only asymptomatic tachycardia was significantly more common in the supine posture (Table 2).
Clinically relevant findings were recorded in 13 (9%) of the patients in whom successful ileoscopy was performed: a small-bowel source of bleeding was confirmed in 2; ulcers ± nodularity ± strictures were seen in 8 (tuberculosis 5; Crohn's disease 1; idiopathic 2); terminal ileal nodularity in 1; and a polyp in 1 patient.
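The headline comparison can be checked directly from the 2×2 counts (62/106 successes in LL vs. 83/111 supine). A minimal stdlib-only sketch using the Yates-corrected χ² test named in the methods; for one degree of freedom the P-value reduces to erfc(√(χ²/2)):

```python
import math

def chi2_yates(a, b, c, d):
    """Yates-corrected chi-square test for a 2x2 table [[a, b], [c, d]].

    Returns (chi2, p); with df = 1, P(X > x) = erfc(sqrt(x / 2)).
    """
    n = a + b + c + d
    row1, row2, col1, col2 = a + b, c + d, a + c, b + d
    chi2 = 0.0
    for obs, row, col in ((a, row1, col1), (b, row1, col2),
                          (c, row2, col1), (d, row2, col2)):
        expected = row * col / n
        chi2 += (abs(obs - expected) - 0.5) ** 2 / expected
    p = math.erfc(math.sqrt(chi2 / 2))
    return chi2, p

# Success/failure by posture: left-lateral 62/44, supine 83/28
chi2, p = chi2_yates(62, 44, 83, 28)
print(round(chi2, 2), round(p, 3))
```

This gives P ≈ 0.016, in line with the reported P = 0.014; the small difference reflects the choice of continuity correction versus Fisher's exact test.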
On univariate analysis, in addition to supine posture, successful ileal intubation was more frequent in younger patients (mean±SEM: 38.0±1.2 vs 44.7±1.9 years; P<0.01), in those with better right-colon preparation (BBPS average and/or good vs suboptimal: 72% vs 48%; P<0.01), and in those who did not have a thin-lipped IC valve (other types vs thin-lipped: 79% vs 43%; P<0.0001). Sex, endoscopist and other factors were not significant on univariate analysis.
On multivariate analysis, age (OR 0.97; 95% CI: 0.95–0.99), supine posture (OR 2.2; 95% CI: 1.1–4.3), average and/or good BBPS of the right colon (OR 4.1; 95% CI: 1.9–8.9) and an IC valve type that was not thin-lipped (OR 5.1; 95% CI: 2.6–9.8) were independent predictors of successful ileal intubation (Table 3).
When the independent predictors of successful ileal intubation were analyzed separately in the LL and S postures by multivariate analysis, age, degree of right-colon preparation and type of IC valve were significant independent predictors of ileal intubation in the LL posture, whereas only the latter two were significant independent predictors in the S posture (data not shown).

Discussion

This prospective randomized trial demonstrated that, within a predetermined stipulated time frame, ileal intubation is significantly more successful in the supine than in the left-lateral decubitus posture. Moreover, in addition to supine posture, younger age, better right-colon preparation and type of IC valve (non-thin-lipped) are independent predictors of successful ileal intubation. The major strengths of our study are its prospective randomized nature, sample-size generation, blinded allocation and analysis, the use of predefined stringent criteria for success and failure, and the use of a well-validated bowel preparation score.
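For intuition, the posture odds ratio can be reproduced, unadjusted, from the 2×2 counts (supine 83 successes/28 failures vs. left-lateral 62/44); the multivariate OR of 2.2 above is the regression-adjusted analogue, which a simple 2×2 calculation cannot reproduce exactly. A hedged stdlib-only sketch using the Woolf (log-OR) confidence interval:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio for a 2x2 table [[a, b], [c, d]] with a Woolf 95% CI."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # SE of log(OR)
    lo, hi = (math.exp(math.log(or_) + s * z * se) for s in (-1, 1))
    return or_, lo, hi

# Supine success/failure 83/28 vs. left-lateral success/failure 62/44
or_, lo, hi = odds_ratio_ci(83, 28, 62, 44)
print(f"OR {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

The unadjusted estimate (≈2.1, with a CI excluding 1) is consistent with the adjusted OR of 2.2 (1.1–4.3) reported above.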
There have been very few studies of the technical feasibility of terminal ileoscopy during colonoscopy [14,16,17], with most studies focusing predominantly on the yield of ileoscopy and/or ileal biopsies [5–10,21–24]. Most endoscopists intubate the terminal ileum in the left-lateral decubitus posture with the ileocecal valve in the 6–7 o'clock position, using downward deflection and anticlockwise torque while withdrawing from the cecal cone [16]. Prospective studies have shown that the ileum is successfully intubated in this position in 65–78% of patients, depending on the training level of the endoscopists [16,19,25]. Shifting the posture to supine, however, achieves an ileoscopy completion rate of >95% [16,19]. It is against this background that our trial demonstrates that the supine posture is technically the easier posture in which to intubate the terminal ileum, with a higher success rate.
While our analysis was ongoing, another randomized trial was published showing that the prone posture is equivalent to the left-lateral decubitus posture with regard to the frequency of ileal intubation achieved, albeit within a shorter time [26]. However, that trial did not stratify or analyze confounding factors such as endoscopist, bowel preparation and IC valve morphology [26].

Table 2 Secondary outcome measures in the two groups.
                                              Total          LL decubitus (n=106)   Supine (n=111)   P-value
Successful ileal intubation, n (%)            145 (66.8)     62 (58.5)              83 (74.8)        0.014
Depth of ileal intubation (cm; mean ± SEM)    8.4 ± 0.3      8.6 ± 0.4              8.2 ± 0.3        0.5
Time to ileal intubation (s; median, range)   74.0 (2–349)   84.5 (2–310)           71.0 (4–349)     0.3
Patient-perceived pain (%)                                                                           0.3
  Nil                                         93             90                     95
  Present, not limiting                       7              10                     5
Adverse events (%)
  Hypoxia                                     2              2                      2                1.0
  Tachycardia                                 16             9                      22               0.015
  Bradycardia                                 0.5            1.0                    0                0.5
In addition to posture, we also found that age, degree of right-colon preparation and morphology of the IC valve are independent predictors of ileal intubation success. Younger age was associated with increased success at ileal intubation overall, and especially in the left-lateral decubitus posture; age was not an independent predictor of success in the supine posture. The quality of right-colon preparation was an independent predictor of success overall, as well as in both postures independently. This is to be expected, as good bowel preparation is an important determinant of technical success in colonoscopy and in any colonoscopic procedure. Consistent with a previous study, the morphology of the IC valve was an independent predictor of ileal intubation success [19]. The thin-lipped type of valve is difficult to intubate as it is less distensible and often flush with the semicircular fold of the cecum.
One drawback of our study was the lower overall rate of successful ileal intubation (~67%). Although this was well within the range reported in the literature, the major reason (~80%) for failure was the inability to intubate the ileum within the stipulated time frame of ≤6 min. We had stipulated a cut-off time of 6 min in compliance with a previous study; in addition, this is double the average time required to intubate the ileum according to the literature [19]. Thus, in a prospective controlled trial setting with time pressure, the rate of success might be lower for complex visuo-spatio-psychomotor procedures such as colonoscopy and ileoscopy [27]. Moreover, our population was predominantly male and had a higher prevalence of the difficult thin-lipped type of IC valve vis-à-vis the Italian population reported in the literature [20].
This could also have contributed to the lower rate of success.
Reviewing the literature, we found two studies that prospectively assessed the technical success of ileoscopy during colonoscopy. In the first study, overall success was achieved in 79%, with a longer time (mean: 3.4 min; range: 30 s to 10 min) and no difference between endoscopists with different levels of training [14]. In the second study, ileoscopy performed by a single colonoscopist was achieved without any intervention in 77.8%, which increased to ~97% when the posture was changed to supine and/or an anti-peristalsis agent (hyoscine-N-butyl bromide) was used in one-fifth of patients [16]. The median time taken for intubation was 55 s (range: 2–1140 s) [16], not much different from ours. Moreover, we did not use any anti-peristalsis agent, which has been shown to facilitate ileal intubation [28].
One of the biggest impediments to the technical success of colonoscopy is "looping" in the tortuous and often redundant areas of the colon, especially the sigmoid and transverse colon [29–31]. We speculate that the same holds for ileoscopy, as it is difficult to intubate the terminal ileum if the colonoscope is not de-looped. Older age is one of the risk factors for increased looping, which is reflected in our study by the decreased success at ileoscopy at older age [31]. Posture change is one of the maneuvers that successfully de-loops the colon, especially in the sigmoid and transverse colon [30]. We speculate that this might be the reason for the increased success of ileoscopy in the supine vis-à-vis the left-lateral decubitus posture.
In conclusion, this randomized controlled trial shows that ileal intubation is more successful in the supine than in the left-lateral decubitus posture. Age, bowel preparation and type of IC valve also determine successful ileoscopy.
Competing interests: None

Table 3 Univariate and multivariate analysis of the factors associated with successful ileal intubation.
                                 Univariate                                    Multivariate
                                 Success (n=145)   Failure (n=72)   P-value    OR (95% CI)        P-value
Age (mean ± SEM)                 38.0 ± 1.2        44.7 ± 1.9       0.002      0.97 (0.95–0.99)   0.02
Male (%)                         71                65               0.4
R-colon preparation (BBPS) [%]                                      0.007
  Sub-optimal (score = 1)        16                35                          1
  Average/Good (score = 2/3)     84                65                          4.1 (1.9–8.9)      <0.0001
Patient posture (%)                                                 0.014                         0.02
  Left lateral                   43                61                          1
  Supine                         57                39                          2.2 (1.1–4.3)
IC valve type (%)                                                   <0.0001                       <0.0001
  Thin-lipped                    22                58                          1
  Others                         78                42                          5.0 (2.6–9.8)
Indication (%)                                                      0.6
  Bleeding P/R                   28                37
  Diarrhea                       13                10
  Anemia                         10                10
  SAIO                           12                8
  FBD                            12                12
Endoscopist (%)                                                     0.2
  No. 1                          9                 15
  No. 2                          26                24
  No. 3                          17                8
  No. 4                          48                53

References
1 Williams C, Teague R. Colonoscopy. Gut 1973; 14: 990–1003
2 Cotton PB, Williams CB (eds.) Colonoscopy and Flexible Sigmoidoscopy. In: Practical Gastrointestinal Endoscopy. Fifth edition. Oxford, UK: Blackwell Publishing Ltd; 2003: 83–171
3 Nagasako K, Yazawa C, Takemoto T. Biopsy of the terminal ileum. Gastrointest Endosc 1972; 19: 7–10
4 Baillie J. Gastrointestinal Endoscopy, Basic Principles and Practice. Oxford: Butterworth-Heinemann Ltd; 1992: 79–80
5 Bhasin DK, Goenka MK, Dhavan S et al. Diagnostic value of ileoscopy: a report from India. J Clin Gastroenterol 2000; 31: 144–146
6 Geboes K, Ectors N, D'Haens G et al. Is ileoscopy with biopsy worthwhile in patients presenting with symptoms of inflammatory bowel disease? Am J Gastroenterol 1998; 93: 201–206
7 Morini S, Lorenzetti R, Stella F et al. Retrograde ileoscopy in chronic nonbloody diarrhea: a prospective, case-control study. Am J Gastroenterol 2003; 98: 1512–1515
8 Jeong SH, Lee KJ, Kim YB et al. Diagnostic value of terminal ileum intubation during colonoscopy.
J Gastroenterol Hepatol 2008; 23: 51–55
9 Yusoff IF, Ormonde DG, Hoffman NE. Routine colonic mucosal biopsy and ileoscopy increases diagnostic yield in patients undergoing colonoscopy for diarrhea. J Gastroenterol Hepatol 2002; 17: 276–280
10 Misra SP, Dwivedi M, Misra V et al. Endoscopic biopsies from normal-appearing terminal ileum and cecum in patients with suspected colonic tuberculosis. Endoscopy 2004; 36: 612–616
11 Cirocco WC, Rusin LC. The reliability of cecal landmarks during colonoscopy. Surg Endosc 1993; 7: 33–36
12 Powell N, Knight H, Dunn J et al. Images of the terminal ileum are more convincing than cecal images for verifying the extent of colonoscopy. Endoscopy 2011; 43: 196–201
13 Sakai Y. Technique of Colonoscopy. Section 6, Chapter 81. In: Sivak MV Jr (ed.) Gastroenterologic Endoscopy. Second edition. Philadelphia, PA: WB Saunders; 1999
14 Kundrotas LW, Clement DJ, Kubik CM et al. A prospective evaluation of successful terminal ileum intubation during routine colonoscopy. Gastrointest Endosc 1994; 40: 544–546
15 Börsch G, Schmidt G. Endoscopy of the terminal ileum. Diagnostic yield in 400 consecutive examinations. Dis Colon Rectum 1985; 28: 499–501
16 Ansari A, Soon SY, Saunders BP et al. A prospective study of the technical feasibility of ileoscopy at colonoscopy. Scand J Gastroenterol 2003; 38: 1184–1186
17 Chen M, Khanduja KS. Intubation of the ileocecal valve made easy. Dis Colon Rectum 1997; 40: 494–496
18 Lai EJ, Calderwood AH, Doros G et al. The Boston Bowel Preparation Scale: a valid and reliable instrument for colonoscopy-oriented research. Gastrointest Endosc 2009; 69: 620–625
19 Iacopini G, Frontespezi S, Vitale MA et al. Routine ileoscopy at colonoscopy: a prospective evaluation of learning curve and skill-keeping line. Gastrointest Endosc 2006; 63: 250–256
20 Powell N, Hayee BH, Yeoh DPK et al. Terminal ileal photography or biopsy to verify total colonoscopy: does the endoscope agree with the microscope?
Gastrointest Endosc 2007; 66: 320–325
21 Kennedy G, Larson D, Wolff B et al. Routine ileal intubation during screening colonoscopy: a useful maneuver? Surg Endosc 2008; 22: 2606–2608
22 Yoong KKY, Heymann T. It is not worthwhile to perform ileoscopy on all patients. Surg Endosc 2006; 20: 809–811
23 McHugh JB, Appelman HD, McKenna BJ. The diagnostic value of endoscopic terminal ileum biopsies. Am J Gastroenterol 2007; 102: 1084–1089
24 Melton SD, Feagins LA, Saboorian MH et al. Ileal biopsy: clinical indications, endoscopic and histopathologic findings in 10,000 patients. Dig Liver Dis 2011; 43: 199–203
25 Cherian S, Singh P. Is routine ileoscopy useful? An observational study of procedure times, diagnostic yield, and learning curve. Am J Gastroenterol 2004; 99: 2324–2329
26 De Silva AP, Kumarasena RS, Perera Keragala SD et al. The prone 12 o'clock position reduces ileal intubation time during colonoscopy compared to the left lateral 6 o'clock (standard) position. BMC Gastroenterol 2011; 11: 89
27 Poolton JM, Wilson MR, Malhotra N et al. A comparison of evaluation, time pressure, and multitasking as stressors of psychomotor operative performance. Surgery 2011; 149: 776–782
28 Misra SP, Dwivedi M. Role of intravenously administered hyoscine butyl bromide in retrograde terminal ileoscopy: a randomized, double-blinded, placebo-controlled trial. World J Gastroenterol 2007; 13: 1820–1823
29 Rex DK. Achieving cecal intubation in the very difficult colon. Gastrointest Endosc 2008; 67: 938–944
30 Shah SG, Saunders BP, Brooker JC et al. Magnetic imaging of colonoscopy: an audit of looping, accuracy and ancillary maneuvers. Gastrointest Endosc 2000; 52: 1–8
31 Shah HA, Paszat LF, Saskin R et al. Factors associated with incomplete colonoscopy: a population-based study. Gastroenterology 2007; 132: 2297–2303
Technical note: A simple approach for efficient collection of field reference data for calibrating remote sensing mapping of northern wetlands

Magnus Gålfalk (1), Martin Karlson (1), Patrick Crill (2), Philippe Bousquet (3), David Bastviken (1)
(1) Department of Thematic Studies – Environmental Change, Linköping University, 581 83 Linköping, Sweden.
(2) Department of Geological Sciences, Stockholm University, 106 91 Stockholm, Sweden.
(3) Laboratoire des Sciences du Climat et de l'Environnement (LSCE), Gif sur Yvette, France.
Correspondence to: Magnus Gålfalk (magnus.galfalk@liu.se)

Abstract. The calibration and validation of remote sensing land cover products are highly dependent on accurate field reference data, which are costly and practically challenging to collect. We describe an optical method for collecting field reference data that is a fast, cost-efficient, and robust alternative to field surveys and UAV imaging. A lightweight, waterproof, remote-controlled RGB camera (GoPro) was used to take wide-angle images from 3.1–4.5 m altitude using an extendable monopod, as well as representative near-ground (<1 m) images to identify spectral and structural features that correspond to various land covers under the prevailing lighting conditions. A semi-automatic classification was made based on six surface types (graminoids, water, shrubs, dry moss, wet moss, and rock). The method enables collection of detailed field reference data, which is critical in many remote sensing applications, such as satellite-based wetland mapping. The method uses common, inexpensive equipment, does not require special skills or training, and is facilitated by a step-by-step manual that is included in the supplementary information.
Over time, a global ground cover database can be built that is relevant for ground truthing of wetland studies from satellites such as Sentinel 1 and 2 (10 m pixel size).

Biogeosciences Discuss., https://doi.org/10.5194/bg-2017-445. Manuscript under review for journal Biogeosciences. Discussion started: 26 October 2017. © Author(s) 2017. CC BY 4.0 License.

1 Introduction
Accurate and timely land cover data are important for e.g. economic, political, and environmental assessments, and for societal and landscape planning and management. The capacity for generating land cover data products from remote sensing is developing rapidly. There has been an exponential increase in launches of new satellites with improved sensor capabilities, including shorter revisit times, larger area coverage, and increased spatial resolution (Belward & Skøien 2015). Similarly, the development of land cover products is increasingly supported by progress in computing capacity and machine learning approaches.
However, at the same time it is clear that knowledge of the Earth's land cover is still poorly constrained. For example, a comparison between multiple state-of-the-art land cover products for West Siberia revealed disturbing uncertainties (Frey and Smith 2007). Estimated wetland areas ranged from 2–26% of the total area, and the correspondence with in situ observations for wetlands was only 2–56%. For lakes, all products revealed similar area cover (2–3%), but the agreement with field observations was as low as 0–5%. Hence, in spite of the progress in technical capabilities and data analysis, there are apparently fundamental factors that still need consideration to obtain accurate land cover information.
The West Siberia example is not unique. Current estimates of the global wetland area range from 8.6 to 26.9 × 10^6 km^2, with great inconsistencies between different data products (Melton et al. 2013).
The uncertainty in wetland distribution has multiple consequences, including being a major bottleneck for constraining assessments of global methane (CH4) emissions, which was the motivation for this area comparison. Wetlands and lakes are the largest natural CH4 sources (Saunois et al. 2016) and available evidence suggests that these emissions can be highly climate sensitive, particularly at northern latitudes, which are predicted to experience the highest temperature increases and melting permafrost – both contributing to higher CH4 fluxes (Yvon-Durocher et al. 2014; Schuur et al. 2009).
CH4 fluxes from plant functional groups in northern wetlands can differ by orders of magnitude. Small wet areas dominated by emergent graminoid plants account for by far the highest fluxes per m2, while the more widespread areas covered by e.g. Sphagnum mosses have much lower CH4 emissions per m2 (e.g. Bäckstrand et al. 2010). The fluxes associated with the heterogeneous and patchy (i.e. mixed) land cover in northern wetlands are well understood at the local plot scale, whereas the large-scale extrapolations are very uncertain. The two main reasons for this uncertainty are that the total wetland extent is unknown and that present map products do not distinguish between the different wetland habitats that control fluxes and flux regulation. As a consequence, the whole source attribution in the global CH4 budget remains highly uncertain (Kirschke et al. 2013; Saunois et al. 2016).
To resolve this, improved land cover products relevant to CH4 fluxes and their regulation are needed. The detailed characterization of wetland features or habitats requires the use of high resolution satellite data and sub-pixel classification that quantifies percent, or fractional, land cover. A fundamental bottleneck for the development of fractional land cover products is the quantity and quality of the ground truth, or reference, data used for calibration and validation (Foody 2013; Foody et al.
2016). While the concept "ground truth" suggests a perfectly represented reality, 100% accurate reference data do not exist. In fact, reference data can be any data available at higher resolution than the data product, including other satellite imagery and airborne surveys, in addition to field observations. In turn, field observations can range from rapid landscape assessments to detailed vegetation mapping in inventory plots, where the latter yields high-resolution, high-quality data but is very expensive to generate in terms of time and manpower (Olofsson et al. 2014; Frey & Smith 2007).
Ground-based reference data for fractional land cover mapping can be acquired using traditional methods, such as visual estimation, point frame assessment or digital photography (Chen et al. 2010). These methods can be applied using a transect approach to increase the area coverage in order to match the spatial resolutions of different satellite sensors (Mougin et al. 2014). The application of digital photography and image analysis software has shown promise for enabling rapid and objective measurements of fractional land cover that can be repeated over time for comparative analysis (Booth et al. 2006a). While several geometrical corrections and photometric setups are used, nadir (downward-facing) and hemispherical-view photography are most common, and the selected setup depends on the height structure of the vegetation (Chen et al. 2010). However, most previous research has focused on distinguishing between major general categories, such as vegetation and non-vegetation (Laliberte et al. 2007; Zhou & Liu 2015), and such methods are typically not used to characterize more subtle patterns within major land cover classes.
Many applications in the literature have been in rangeland, while wetland classification is lacking. Furthermore, images have mainly been close-up images taken from a nadir-view perspective (Booth et al. 2006a; Chen et al. 2010; Zhou & Liu 2015), thereby limiting the spatial extent to well below the pixel size of satellite systems suitable for regional-scale mapping.
From a methano-centric viewpoint, accurate reference data at high enough resolution, able to separate wetland (and upland) habitats with differing flux levels and regulation, are needed to facilitate progress with available satellite sensors. The resolution should preferably be better than 1 m2, given how the high-emitting graminoid areas are scattered on the wettest spots where emergent plants can grow. Given this need, we propose a quick and simple type of field assessment adapted to the 10 x 10 m pixels of the Sentinel 1 and 2 satellites.
Our method uses true color images of the ground, followed by image analysis to distinguish the fractional cover of key land cover types relevant for CH4 fluxes from northern wetlands; we focus on a few classes that differ in their CH4 emissions. We provide a simple manual allowing anyone to take the photos needed in a few minutes per field plot. Land cover classification can then be made from the Red-Green-Blue (RGB) field images (sometimes also converting them to the Intensity-Hue-Saturation (IHS) color space) using software such as CAN-EYE (Weiss & Baret 2010), VegMeasure (Johnson et al. 2003), SamplePoint (Booth et al. 2006b), or eCognition (Trimble commercial software). With this simple approach it would be quick and easy for the community to share such images online and to generate a global reference database that can be used for land cover classification relevant to wetland CH4 fluxes, or other purposes depending on the land cover classes used.
We use our own routines written in Matlab due to the large field of view used in the method, in order to correct for the geometrical perspective when calculating areas (to speed up the development of a global land cover reference database, we can do the classification on request if all necessary parameters and images are available, as given in our manual).

2 Field work
The camera setup is illustrated in Fig. 1, with lines showing the spatial extent of a field plot. Our equipment included a lightweight RGB camera (GoPro 4 Hero Silver; other types of cameras with remote control and a suitably wide field of view would also work) mounted on an extendable monopod that allows imaging from a height of 3.1–4.5 meters. The camera had a resolution of 4000 x 3000 pixels with a wide field of view (FOV) of 122.6 x 94.4 deg. and was remotely controlled over Bluetooth using a mobile phone application that provides a live preview, making it possible to always include the horizon close to the upper edge in each image (needed for image processing later – see below). The camera had a waterproof casing and could therefore be used in rainy conditions, making the method robust to variable weather. Measurements were made for about 200 field plots in northern Sweden on 6-8 September 2016.

Figure 1: A remotely controlled wide-field camera mounted on a long monopod captures the scene in one shot, from above the horizon down to nadir. After using the horizon image position to correct for the camera angle, a 10 x 10 m area close to the camera is used for classification.

For each field plot, the following was recorded:
• One image taken at > 3.1 m height (see illustration in Fig. 1) which includes the horizon coordinate close to the top of the image.
• 3-4 close-up images of common surface cover in the plot (e.g. typical vegetation).
• GPS position of the camera location (reference point).
• Notes on the image direction relative to the reference point.

The long monopod was made from two ordinary extendable monopods taped together, with a GoPro camera mount at the end. The geographic coordinate of the camera position was registered using a handheld Garmin Oregon 550 GPS with a horizontal accuracy of approximately 3 m. The positional accuracy of the images can be improved by using a differential GPS and by registering the cardinal direction of the FOV. The camera battery typically lasts for a few hours after a full charge, but was charged at intervals when not in use, e.g. when moving between field sites.

3 Image processing and models

As the camera had a very wide FOV, the raw images have strong lens distortion (Fig. 2). This can be corrected for most camera models (e.g., the GoPro series) using either commercial software, such as Adobe Lightroom or Photoshop (which we used), or distortion models in programming languages (e.g. Matlab).

Figure 2: Correction of lens distortion. (A) Raw wide-field camera image. (B) After correction.

Using a distortion-corrected calibration image, we developed a model of the ground geometry by projecting and fitting a 10 x 10 m grid on a parking lot with measured distances marked using chalk (Fig. 3). The geometric model uses the camera FOV, the camera height, and the vertical coordinate of the horizon (to obtain the camera angle). We find excellent agreement between the modeled and measured grids (fits are within a few centimeters) for both camera heights of 3.1 and 4.5 m.
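The distortion correction itself was done here in Lightroom/Photoshop, but purely as an illustration of the kind of radial model a programming language can apply, the sketch below shows a minimal one-coefficient (Brown-Conrady style) undistortion in Python. The coefficient k1 and the single-term model are illustrative assumptions, not the GoPro's calibrated profile.

```python
import math

def undistort_point(xd, yd, cx, cy, k1):
    """Map a distorted pixel (xd, yd) toward its corrected position using a
    one-coefficient radial model, r_u = r_d * (1 + k1 * r_d**2), with radii
    measured from the image center (cx, cy) and normalized by cx."""
    dx, dy = (xd - cx) / cx, (yd - cy) / cx   # normalized offsets from center
    r_d = math.hypot(dx, dy)                  # distorted (input) radius
    scale = 1.0 + k1 * r_d ** 2               # radial correction factor
    return cx + dx * scale * cx, cy + dy * scale * cx

# Barrel distortion (k1 > 0): off-center points move outward on correction,
# while the image center stays fixed.
x_u, y_u = undistort_point(3500.0, 2800.0, cx=2000.0, cy=1500.0, k1=0.12)
```

In practice one would fit k1 (and usually higher-order terms) against a calibration target, which is essentially what dedicated lens-profile software does internally.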
The vertical angle α from nadir to a certain point in the grid with ground distance Y along the center line is given by α = arctan(Y/h), where h is the camera height. For distance points in our calibration image (Fig. 3), using 0.2 m steps in the range 0-1 m and 1 m steps from 1 to 10 m, we calculate the nadir angles α(Y) and measure the corresponding vertical image coordinates y_calib(Y).

Figure 3: Calibration of projected geometry using an image corrected for lens distortion. The model geometry is shown as white numbers and a white grid, while green and red numbers are written on the ground using chalk (red lines at 2 and 4 m left of the center line were strengthened for clarity). The camera height in this calibration measurement is 3.1 m.

In principle, for any distortion-corrected image there is a simple relationship y_img(α) = (α(Y) − α_0)/PFOV, where y_img is the image vertical pixel coordinate for a certain distance Y, PFOV the pixel field of view (deg pixel^-1), and α_0 the nadir angle of the bottom image edge. In practice, however, the correction for lens distortion is not perfect, so we have fitted a polynomial in the calibration image to obtain y_calib(α) from the known α and the measured y_calib. Using this function we can then obtain the y_img coordinate in any subsequent field image using

y_img = y_calib(α + PFOV_hor · (y_img^hor − y_calib^hor))    (1)

where y_img^hor and y_calib^hor are the vertical image coordinates of the horizon in the field and calibration image, respectively. As the PFOV varies by a small amount across the image due to small deviations in the lens distortion correction, we have used PFOV_hor, which is the pixel field of view at the horizon coordinate.
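The calibration sampling just described is easy to reproduce; a minimal Python sketch (ours — the paper's routines are written in Matlab) tabulates the nadir angles α(Y) = arctan(Y/h) at the stated calibration distances:

```python
import math

def nadir_angle_deg(Y, h):
    """Angle from nadir (degrees) to a ground point at distance Y (m)
    along the center line, for a camera at height h (m): alpha = arctan(Y/h)."""
    return math.degrees(math.atan2(Y, h))

# Calibration distances described in the text: 0.2 m steps over 0-1 m,
# then 1 m steps from 1 to 10 m, for the 3.1 m camera height.
distances = [0.2 * i for i in range(6)] + [float(d) for d in range(2, 11)]
angles = [nadir_angle_deg(Y, h=3.1) for Y in distances]
```

Each (α(Y), y_calib(Y)) pair then feeds the polynomial fit of y_calib(α) used in Eq. (1).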
In short, the shift in horizon position between the field and calibration images is used to compensate for the camera having different tilts in different images. In order to obtain correct ground geometry it is therefore important to always include the horizon in all images.

The horizontal ground scale dx (pixels m^-1) varies linearly with y_img, making it possible to calculate the horizontal image coordinate x_img using

x_img = x_c + X · dx = x_c + X · (y_img^hor − y_img) · (dx_0/y_calib^hor) · (h_calib/h_img)    (2)

where dx_0 is the horizontal ground scale at the bottom edge of the calibration image, x_c the center line coordinate (half the horizontal image size), X the horizontal ground distance, and h_calib and h_img the camera heights in the calibration and field image, respectively.

Thus, using Eqs. (1) and (2) we can calculate the image coordinates (x_img, y_img) in a field image from any ground coordinates (X, Y). A model grid is shown in Fig. 3 together with the calibration image, illustrating their agreement.

Figure 4: One of our field plots. (A) Image corrected for lens distortion, with a projected 10 x 10 m grid overlaid. (B) Image after recalculation to overhead projection (10 x 10 m).

For each field image, after correction for image distortion, our Matlab script asks for the y-coordinate of the horizon (which is selected using a mouse). This is used to calculate the camera tilt and to over-plot a distance grid projected on the ground (Fig. 4A). Using Eqs. (1) and (2) we then recalculate the image to an overhead projection of the nearest 10 x 10 m area (Fig. 4B). This is done using interpolation, where an (x_img, y_img) coordinate is obtained for each (X, Y) coordinate, and the brightness in each color channel (R, G, B) is calculated using sub-pixel interpolation.
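One plausible reading of Eq. (2) — the ground scale falling off linearly toward the horizon, rescaled by the camera-height ratio — can be sketched in a few lines. This is our interpretation of the equation's grouping, with illustrative numbers rather than the paper's calibration values, and y coordinates assumed to increase upward from the bottom image edge:

```python
def ground_to_image_x(X, y_img, x_c, y_hor_img, y_hor_calib, dx0, h_calib, h_img):
    """Horizontal image coordinate for a ground offset X (m): the ground
    scale dx (pixels per meter) shrinks linearly toward the horizon and is
    rescaled by the ratio of calibration to field camera heights."""
    dx = (y_hor_img - y_img) * dx0 / y_hor_calib * (h_calib / h_img)
    return x_c + X * dx

# Illustrative values (not the paper's calibration): center line at
# x_c = 2000 px, horizon 2600 px above the bottom edge in both images,
# dx0 = 300 px/m at the bottom edge, equal camera heights.
x = ground_to_image_x(X=2.0, y_img=600.0, x_c=2000.0, y_hor_img=2600.0,
                      y_hor_calib=2600.0, dx0=300.0, h_calib=3.1, h_img=3.1)
```

Two sanity checks follow directly from the formula: at the bottom edge (with equal camera heights) the scale equals dx_0, and on the horizon itself any ground offset X collapses onto the center line.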
The resulting image is reminiscent of an overhead image, with equal scales on both axes. There is, however, a small difference, as the geometry (due to line of sight) does not provide information about the ground behind high vegetation (such as high grass) in the same way as an image taken from overhead.

4 Image classification

After a field plot has been geometrically rectified, so that the spatial resolution is the same over the surface area used for classification, the script distinguishes land cover types by color, brightness and spatial variability. Aided by the close-up images of typical surface types also taken at each field plot (Fig. 5), which provide further verification, a script is applied to each overhead-projected field (Fig. 4B) that classifies the field plot into land cover types. This is a semi-automatic method that can account for illumination differences between images. In addition, it facilitates identification, as there can for instance be different vegetation with similar color, and rock surfaces with a similar appearance to water or vegetation. After an initial automatic classification, the script has an interface that allows manual movement of areas between classes. For calculations of surface color we filter the overhead-projected field images using a running 3 x 3 pixel mean filter, providing more reliable statistics. Spatial variation in brightness, used as a measure of surface roughness, is calculated using a running 3 x 3 pixel standard deviation filter. Denoting the brightness in each (red, green, and blue) color channel R, G and B, respectively, we can for instance find areas with green grass using the green filter index 2G/(R + B), where a value above 1 indicates green vegetation.
In the same way, areas with water (if the close-up images show blue water due to clear sky) can be found using a blue filter index 2B/(R + G). If the close-up images show dark or gray water (cloudy weather), it can be distinguished from rock and white vegetation using either a total brightness index (R + G + B)/3 or an index that is sensitive to surface roughness, involving σ(R), σ(G), or σ(B), where σ denotes the 3 x 3 pixel standard deviation centered on each pixel, for a certain color channel.

Figure 5: Close-up images in one of our 10 x 10 m field plots (Fig. 4).

In this study we used six different land cover types of relevance for CH4 regulation: graminoids, water, shrubs, dry moss, wet moss, and rock. Examples of classified images are shown in Fig. 6. In a test study, we were able to classify about 200 field plots in northern Sweden in a three-day campaign despite rainy and windy conditions. For each field plot, the surface area (m2) and coverage (%) were calculated for each class. An additional field plot and classification example can be found in supplementary information S2.

Figure 6: Classification of a field plot image (Fig. 4B) into the six main surface components. All panels have an area of 10 x 10 m. (A) Graminoids. (B) Water. (C) Shrubs. (D) Dry moss. (E) Wet moss. (F) Rock.

5 Conclusions

This study describes a quick method to document ground surface cover and process the data to make it suitable as ground truth for remote sensing.
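The per-pixel indices above are one-liners, and the roughness measure σ is a simple windowed statistic. A minimal pure-Python sketch (the paper's implementation is in Matlab, and the example pixel values below are made up for illustration):

```python
def green_index(r, g, b, eps=1e-9):
    """Green filter index 2G/(R+B); values above 1 suggest green vegetation."""
    return 2.0 * g / (r + b + eps)

def blue_index(r, g, b, eps=1e-9):
    """Blue filter index 2B/(R+G); values above 1 suggest clear-sky water."""
    return 2.0 * b / (r + g + eps)

def brightness_index(r, g, b):
    """Total brightness (R+G+B)/3, separating dark water from brighter cover."""
    return (r + g + b) / 3.0

def std3(channel, x, y):
    """Running 3 x 3 standard deviation of one color channel at (x, y),
    with borders clamped: the roughness measure sigma used above."""
    h, w = len(channel), len(channel[0])
    win = [channel[min(max(y + j, 0), h - 1)][min(max(x + i, 0), w - 1)]
           for j in (-1, 0, 1) for i in (-1, 0, 1)]
    m = sum(win) / 9.0
    return (sum((v - m) ** 2 for v in win) / 9.0) ** 0.5

# Illustrative pixels: grass is green-dominant, clear-sky water blue-dominant,
# gray rock sits near index 1 on both filters; a flat water surface has
# near-zero sigma while textured vegetation is "rough".
grass, water, rock = (60, 140, 50), (70, 90, 180), (100, 100, 100)
flat_channel = [[100] * 3 for _ in range(3)]
```

In the actual workflow these indices are computed on the 3 x 3 mean-filtered overhead projection rather than on raw pixels, which suppresses single-pixel noise before thresholding.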
The method requires a minimum of equipment of a kind frequently used by researchers and people with a general interest in outdoor activities, and image recording can be done easily, in a few minutes per plot, without specific skills or education. Hence, if the method becomes widespread and a fraction of those who visit northern wetlands (or other environments without dense tall vegetation where the method is suitable) contribute images and related information, there is a potential for rapid development of a global database of images and processed results with detailed land cover for individual satellite pixels. In turn, this could become a valuable resource for remote sensing ground truthing. To facilitate this development, supplementary information S1 includes a complete manual, and the authors will assist with early stage image processing and initiate database development.

Acknowledgements

This study was funded by a grant from the Swedish Research Council VR to David Bastviken (ref. no. VR 2012-48). We would also like to acknowledge the collaboration with the IZOMET project (ref. no. VR 2014-6584) and IZOMET partner Marielle Saunois (Laboratoire des Sciences du Climat et de l'Environnement (LSCE), Gif sur Yvette, France).

References

Belward, A. S. and Skøien, J. O.: Who launched what, when and why; trends in global land-cover observation capacity from civilian earth observation satellites, Isprs Journal of Photogrammetry and Remote Sensing, 103, 115-128, 2015.
Booth, D. T., Cox, S. E., Meikle, T. W., and Fitzgerald, C.: The accuracy of ground cover measurements, Rangeland Ecology and Management, 59 (2), 179-188, 2006a.
Booth, D. T., Cox, S. E., and Berryman, R. D.: Point sampling digital imagery with "SamplePoint", Environ Monit Assess, 123, 97-108, 2006b.
Bäckstrand, K., Crill, P. M., Jackowicz-Korczynski, M., Mastepanov, M., Christensen, T. R., and Bastviken, D.: Annual carbon gas budget for a subarctic peatland, Northern Sweden, Biogeosciences, 7, 95-108, 2010.
Chen, Z., Chen, W., Leblanc, S. G., and Henry, G. H. R.: Digital Photograph Analysis for Measuring Percent Plant Cover in the Arctic, Arctic, 63 (3), 315-326, 2010.
Frey, K. E. and Smith, L. C.: How well do we know northern land cover? Comparison of four global vegetation and wetland products with a new ground-truth database for West Siberia, Global Biogeochem. Cycles, 21, n/a-n/a, 2007.
Foody, G. M.: Ground reference data error and the mis-estimation of the area of land cover change as a function of its abundance, Remote Sensing Letters, 4, 783-792, 2013.
Foody, G. M., Pal, M., Rocchini, D., Garzon-Lopez, C. X., and Bastin, L.: The Sensitivity of Mapping Methods to Reference Data Quality: Training Supervised Image Classifications with Imperfect Reference Data, Isprs International Journal of Geo-Information, 5, 2016.
Johnson, D. E., Vulfson, M., Louhaichi, M., and Harris, N. R.: VegMeasure version 1.6 user's manual, Department of Rangeland Resources, Oregon State University, Corvallis, OR, 2003.
Kirschke, S., Bousquet, P., Ciais, P., Saunois, M., Canadell, J. G., Dlugokencky, E. J., et al.: Three decades of global methane sources and sinks, Nature Geoscience, 6, 813-823, 2013.
Laliberte, A. S., Rango, A., Herrick, J. E., Fredrickson, E. L., and Burkett, L.: An object-based image analysis approach for determining fractional cover of senescent and green vegetation with digital plot photography, Journal of Arid Environments, 69, 1-14, 2007.
Melton, J. R., Wania, R., Hodson, E. L., Poulter, B., Ringeval, B., Spahni, R., et al.: Present state of global wetland extent and wetland methane modelling: conclusions from a model inter-comparison project (WETCHIMP), Biogeosciences, 10, 753-788, 2013.
Mougin, E., Demarez, V., Diawara, M., Hiernaux, P., Soumaguel, N., and Berg, A.: Estimation of LAI, fAPAR and fCover of Sahel rangelands, Agricultural and Forest Meteorology, 198-199, 155-167, 2014.
Olofsson, P., Foody, G. M., Herold, M., Stehman, S. V., Woodcock, C. E., and Wulder, M. A.: Good practices for estimating area and assessing accuracy of land change, Remote Sensing of Environment, 148, 42-57, 2014.
Saunois, M., Bousquet, P., Poulter, B., Peregon, A., Ciais, P., Canadell, J. G., et al.: The global methane budget 2000–2012, Earth Syst. Sci. Data, 8, 697-751, 2016.
Schuur, E. A. G., Vogel, J. G., Crummer, K. G., et al.: The effect of permafrost thaw on old carbon release and net carbon exchange from tundra, Nature, 459, 556-559, 2009.
Weiss, M. and Baret, F.: CAN-EYE V6.4.6 User Manual, EMMAH laboratory (Mediterranean environment and agro-hydro system modelisation), National Institute of Agricultural Research (INRA), France, 2010.
Yvon-Durocher, G., Allen, A. P., Bastviken, D., Conrad, R., Gudasz, C., St-Pierre, A., Thanh-Duc, N., and del Giorgio, P. A.: Methane fluxes show consistent temperature dependence across microbial to ecosystem scales, Nature, 507, 488-491, 2014.
Zhou, G. and Liu, S.: Estimating ground fractional vegetation cover using the double-exposure method, International Journal of Remote Sensing, 36 (24), 6085-6100, 2015.
work_jgywt4yttrg77mwm3tdcsxhsia ---- Mnemonics for gillies principles of plastic surgery and it importance in residency training programme

DOI: 10.4103/ijps.IJPS_93_16. Corpus ID: 25454543.
Pandey, S., Chittoria, R., Mohapatra, D., Friji, M., and Sivakumar, D. K.: Mnemonics for gillies principles of plastic surgery and it importance in residency training programme, Indian Journal of Plastic Surgery: Official Publication of the Association of Plastic Surgeons of India, 50, 114-115, 2017.

How to cite this article: Gupta A, Gupta S, Kumar A, Jha MK, Bhattacharaya S, Tiwari VK. Novel use of preputial flap. Indian J Plast Surg 2017;50:112-4. This is an open access article distributed under the terms of the Creative Commons Attribution-NonCommercial-ShareAlike 3.0 License, which allows others to remix, tweak, and build upon the work non-commercially, as long as the author is credited and the new creations are licensed under the identical terms.
work_jhnti3dtbzbfxnthnpremuaegm ---- 07_Skrzat_Enhancement.p65

Folia Morphol. Vol. 70, No. 4, pp. 260–262. Copyright © 2011 Via Medica. ISSN 0015–5659. www.fm.viamedica.pl. ORIGINAL ARTICLE.
Address for correspondence: Dr hab. n. biol. J. Skrzat, Department of Anatomy, Collegium Medicum, Jagiellonian University, ul. Kopernika 12, 31–034 Kraków, Poland, tel: +48 12 422 95 11, e-mail: jskrzat@poczta.onet.pl

Enhancement of the focal depth in anatomical photography

J.
Skrzat, Department of Anatomy, Collegium Medicum, Jagiellonian University, Krakow, Poland

[Received 12 October 2011; Accepted 14 October 2011]

Limited depth of field is one of the crucial disadvantages of macro photography, because some details of the imaged object are blurred. This paper presents the benefits of using an algorithm which enhances focal depth in close-up views of anatomical structures. The applied technique was based on combining a set of images of the same object (temporal bone) taken at different focal planes. In effect, a single image was generated which presented all details sharply across the photographed object. The extended depth of field of the composite image was reconstructed by CombineZP Image Stacking Software. (Folia Morphol 2011; 70, 4: 260–262)

Key words: depth of field, depth of focus, image fusion, digital photos

INTRODUCTION

Depth of field is defined as the distance between the nearest and farthest objects in a scene that appear sharp within the image. The following factors have a direct relationship with depth of field: the diaphragm opening of the lens, the focal length of the lens, and the distance between the lens and the subject. Recent advances in digital photography have enabled close-up images that have both high resolution and large depth of field. Nevertheless, depth of field still remains a big problem when photographing small objects. In macro photography the depth of field is very shallow. In particular, microscopy imaging suffers from limited depth of focus. An increase in magnification significantly decreases the depth of field [4]. Therefore, close-up views of anatomical structures become problematic because not all details can be seen sharply. Fortunately, specific algorithms have been developed to fuse images having various planes of focus, thus obtaining a completely focused image with virtually extended depth of field.
The algorithm of extended depth of field automatically determines which of the overlapping images is best focused. It then selects only the best-focused image areas, which are combined into a single picture in which all details are visibly sharp. Traditional extended depth of field algorithms rely on a high-pass criterion that is applied to each image in the stack. However, there are a variety of methods dealing with the process of image fusion to obtain images with a greater depth of field [2, 3, 9].

The aim of this study was to evaluate the usefulness of an algorithm which enhances focal depth to improve the image quality of photographed anatomical objects. This technique was tested on digital images of the temporal bone because this bone shows multidirectional organisation, and macro photography of its anatomical details usually produces blurred areas in the picture.

MATERIAL AND METHODS

Application of the extended depth of field was tested on images of the temporal bone isolated from a dry human skull. The intracranial aspect of the temporal bone was subjected to digital photography to capture most of the anatomical details. The camera (Canon EOS 5D) was subsequently focused on four areas of the temporal bone, and the following were photographed: the petrous apex, the internal acoustic meatus, the arcuate eminence, and the inner surface of the squama. This procedure is called "focus stacking" because a set of images is shot by gradually incrementing the focusing distance across the object. As a result, singular details are seen sharply, but in different images. The obtained photos are then aligned (their content overlaid pixel by pixel) by the computer program. A composite image is generated from the sharpest regions of each of the previously obtained separate images. The process of digital composition of the selected in-focus parts of each image was performed by CombineZP Image Stacking Software by Alan Hadley (GNU Public Licence). This software was downloaded from the web page: www.hadleyweb.pwp.blueyonder.co.uk/CZP/News.htm.

RESULTS

In traditional digital photography each acquisition of the temporal bone shows certain regions of the specimen in and out of focus (Fig. 1). A set of 4 images was obtained by sweeping focus from the petrous apex to the squama of the temporal bone (Fig. 1A–D). In the first picture (Fig. 1A) the foreground (anterior part of the petrous bone) is sharp while the background (temporal squama) is blurred. Conversely, the fourth picture (Fig. 1D) presents a sharp background and a blurred foreground. The intermediate images (Fig. 1B, C) can be regarded as average focus. In these two cases, the quality of the image can be acceptable. However, there are parts of the object in which anatomical details are not sharply visible. These are: the petrous apex, the internal acoustic meatus, the posterior part of the pyramid, and partially the squama with the squamous suture.

Figure 1A–D. Serial images of the temporal bone with different focal planes; asterisks mark the area in focus. Note that parts of the images are blurred (out of focus).

The optical disadvantage was corrected by the image fusion algorithm, which allowed us to obtain a single picture of the temporal bone without blurred areas. The final image of the temporal bone presented in Figure 2 is the most satisfactory. All details of the temporal bone are easily distinguishable. Both the foreground (the petrous part) and the background (temporal squama) are sharp, and the individual anatomical details are sharply visible. This is the effect of enhancement of the focal range, which was achieved by the digital assembly of images that contained in-focus parts covering the whole depth range of the object (Figs. 1, 2).

Figure 2. Composite deep focus image of the temporal bone, assembled from the 4 separate images presented in Figure 1. Note that all areas of the image are sharp.

DISCUSSION

In anatomical photography it is desirable to obtain the entire image in sharp focus. Thus, a large depth of field is appropriate to capture and visualise in one image the spatial organisation of the observed anatomical structures. Unfortunately, one of the main problems in optical imaging is the limited depth of focus. This is an essential obstacle to acquiring, in focus, in a single image plane, objects that are located at different distances [7]. The solution to this problem seems to be focus stacking, which virtually enhances the depth of field. This technique has been used in microscope systems, and has become particularly beneficial in imaging thick objects. Using extended depth of focus technology may increase the depth of focus of standard systems by six to eight times [1].

Focus stacking is regarded as a powerful technique, but it also has some limitations. These are: the photographed object must be motionless (a steady tripod is necessary to keep the camera still), a high-precision focusing device is necessary, and specialised software is required to combine the stack of photos into a composite image. Nevertheless, it is worth applying this procedure in anatomical photography to obtain better pictures. The argument for adopting this technique in anatomical photography is obvious because image quality is significantly enhanced. Thus, interpretation of the image becomes easier and is not biased [8]. In turn, high quality images can be subjected to automated quantification or become a reliable source of information in pathological analysis [5, 6].

Anatomical photography requires more depth of field than can be obtained in typical photography. Therefore, image stacking seems to be a helpful procedure because a composite digital image reveals all details in sharp focus across the object. A perspective view of many details and the precise capture of their spatial relationships is a clue to understanding the topography of anatomical structures. Hence, good quality photos are helpful in diagnostics and anatomical education. Application of the focus stacking algorithm opens a wide range of possibilities in the careful imaging of the morphological features of biological objects.

REFERENCES
1. Bradburn S, Cathey WT, Dowski ER (1998) Applications of extended depth of focus technology to light microscope systems. www.colorado.edu/isl/papers.
2. Demkowski M, Mikołajczak P (2008) Photography image enhancement by image fusion. Ann UMCS Informatica AI, 8: 43–53.
3. Dowski ER, Cathey WT (1995) Extended depth of field through wave-front coding. Appl Opt, 34: 1859–1866.
4. Goldsmith NT (2000) Deep focus; a digital image processing technique to produce improved focal depth in light microscopy. Image Anal Stereol, 19: 163–167.
5. Leong FJ, Leong AS (2003) Digital imaging applications in anatomic pathology. Adv Anat Pathol, 10: 88–95.
6. O'Brien MJ, Sotnikov AV (1996) Digital imaging in anatomic pathology. Am J Clin Pathol, 106 (4 suppl. 1): 25–32.
7. Paturzo M, Ferraro P (2009) Creating an extended focus image of a tilted object in Fourier digital holography. Opt Express, 17: 20546–20552.
8. Wang Z, Bovik AC, Sheikh HR, Simoncelli EP (2004) Image quality assessment: from error visibility to structural similarity. IEEE Trans Image Processing, 13: 600–612.
9. Wang Z, Ziou D, Armenakis C, Li D, Li Q (2005) A comparative analysis of image fusion methods. IEEE Trans Geosci Remote Sens, 43: 1391–1402.
work_jijwbll4ejhvhkwkszzcfc5xru ---- Outcomes of Polydioxanone Knotless Thread Lifting for Facial Rejuvenation

DOI: 10.1097/DSS.0000000000000368. Corpus ID: 33718585.
Suh, D., Jang, H. W., Lee, S., Lee, W., and Ryu, H.: Outcomes of Polydioxanone Knotless Thread Lifting for Facial Rejuvenation, Dermatologic Surgery, 41, 720–725, 2015.

BACKGROUND: Thread lifting is a minimally invasive technique for facial rejuvenation. METHODS: A retrospective chart review was conducted over a 24-month period. A total of 31 thread lifting procedures were performed. On each side, 5 bidirectional cog threads were used in the procedure for the flabby skin of the nasolabial folds. And, the procedure was performed on the marionette line using 2 twin threads. RESULTS: In most patients (87%), the results obtained were considered satisfactory…

Paper mention: news article "The 'Human Ken Doll' Has Undergone Six More Procedures in the Past Three Weeks", Yahoo!
22 March 2017.
The ENT emergency clinic at the Royal National Throat, Nose and Ear Hospital, London: Completed audit cycle
Data collected included demographics, clinical presentation, investigations, management, surveillance, loco-regional control and disease-free survival.
Results: A total of 225 patients were identified; 183 were male (82%) and 42 female (18%). The average age was 67 years. There were 81 (36%) patients with Stage I disease, 54 (24%) with Stage II, 30 (13%) with Stage III and 60 (27%) with Stage IV disease. Of the Stage I and II carcinomas, 130 (96%) were treated with radiotherapy (55Gy in 20 fractions). Patients with Stage III and IV carcinomas received combined treatment. Overall three-year survival for Stages I, II, III and IV was 91%, 65%, 63% and 45% respectively. Corresponding recurrence rates were 3%, 17%, 17% and 7%; 13 patients required a salvage total laryngectomy due to recurrent disease.
Conclusion: The vast majority of our laryngeal cancer population is male (82%) and smokers. Primary radiotherapy provides comparable loco-regional control and survival for early stage disease (I & II). Advanced stage disease is also equally well controlled with multimodal treatment.

0366: RATES OF RHINOPLASTY PERFORMED WITHIN THE NHS IN ENGLAND AND WALES: A 10-YEAR RETROSPECTIVE ANALYSIS
Luke Stroman, Robert McLeod, David Owens, Steven Backhouse. University of Cardiff, Wales, UK
Aim: To determine whether financial restraint and national health cutbacks have affected the number of rhinoplasty operations done within the NHS in both England and Wales, looking at varying demographics.
Method: Retrospective study of the incidence of rhinoplasty in Wales and England from 1999 to 2009 using OPCS4 codes E025 and E026, using the electronic health databases of England (HesOnline) and Wales (PEDW). Extracted data were explored for total numbers, and for variation with respect to age and gender in both nations.
Results: 20,222 and 1,376 rhinoplasties were undertaken over the 10-year study period in England and Wales respectively.
A statistically significant gender bias was seen in uptake of rhinoplasty, with women more likely to undergo the surgery in both national cohorts (Wales, p<0.001 and England, p<0.001). Linear regression analysis suggests a statistically significant drop in numbers undergoing rhinoplasty in England (p<0.001) but not in Wales (p>0.05).
Conclusion: Rhinoplasty is a common operation in both England and Wales. The current economic constraint, combined with differences in funding and corporate ethos between the two sister NHS organisations, has led to a statistical reduction in numbers undergoing rhinoplasty in England but not in Wales.

0427: PATIENTS' PREFERENCES FOR HOW PRE-OPERATIVE PATIENT INFORMATION SHOULD BE DELIVERED
Jonathan Bird, Venkat Reddy, Warren Bennett, Stuart Burrows. Royal Devon and Exeter Hospital, Exeter, Devon, UK
Aim: To establish patients' preferences for preoperative patient information and their thoughts on the role of the internet.
Method: Adult patients undergoing elective ENT surgery were invited to take part in this survey on the day of surgery. Participants completed a questionnaire recording patient demographics, operation type, the quality of the information leaflet they had received, access to the internet, and whether they would be satisfied accessing pre-operative information online.
Results: Respondents consisted of 52 males and 48 females. 16% were satisfied to receive the information online only, 24% wanted a hard copy only and 60% wanted both. Younger patients were more likely to want online information, in stark contrast to elderly patients, who preferred a hard copy. Patients aged 50-80 years would be most satisfied with paper and internet information, as they were able to pass on the web link to friends and family who wanted to know more. 37% of people were using the internet to further research information on their condition/operation. However, these people wanted information on reliable online sources to use.
Conclusions: ENT surgeons should be alert to the appetite for online information and identify reliable links to share with patients.

0510: ENHANCING COMMUNICATION BETWEEN DOCTORS USING DIGITAL PHOTOGRAPHY. A PILOT STUDY AND SYSTEMATIC REVIEW
Hemanshoo Thakkar, Vikram Dhar, Tony Jacob. Lewisham Hospital NHS Trust, London, UK
Aim: The European Working Time Directive has resulted in the practice of non-resident on-calls for senior surgeons across most specialties. Consequently, the majority of communication in the out-of-hours setting takes place over the telephone, placing a greater emphasis on verbal communication. We hypothesised this could be improved with the use of digital images.
Method: A pilot study involving a junior doctor and senior ENT surgeons. Several clinical scenarios were discussed over the telephone, complemented by an image; the junior doctor was blinded to this. A questionnaire was completed which assessed the confidence of the surgeon in the diagnosis and management of the patient. A literature search was conducted using PubMed and the Cochrane Library. Keywords used: "mobile phone", "photography", "communication" and "medico-legal".
Results & Conclusions: In all the discussed cases, the use of images either maintained or enhanced the degree of the surgeon's confidence. The use of mobile-phone photography as a means of communication is widespread; however, its medico-legal implications are often not considered. Our pilot study shows that such means of communication can enhance patient care. We feel that a secure means of data transfer safeguarded by law should be explored as a means of implementing this into routine practice.

0533: THE ENT EMERGENCY CLINIC AT THE ROYAL NATIONAL THROAT, NOSE AND EAR HOSPITAL, LONDON: COMPLETED AUDIT CYCLE
Ashwin Algudkar, Gemma Pilgrim.
Royal National Throat, Nose and Ear Hospital, London, UK
Aims: To identify the type and number of patients seen in the ENT emergency clinic at the Royal National Throat, Nose and Ear Hospital, implement changes to improve the appropriateness of consultations and management, and then close the audit. Also, to set up GP correspondence.
Method: First-cycle data were collected retrospectively over 2 weeks. Information was captured on patient volume, referral source, consultation nature and patient destination. Changes implemented included ensuring that the management and follow-up of otitis externa patients met the American Academy of Otolaryngology-Head and Neck Surgery Foundation guidelines. Data for the second cycle were then collected retrospectively over 2 weeks after staff education. A GP letter was issued for every patient seen.
Results: First cycle: 261 patients. Follow-ups: 28%. Reviewed patients: 23% booked for emergency clinic follow-up, 17% booked for main clinic follow-up. Discharge rate: 43%. Second cycle: 158 patients. Follow-ups: 9%. Reviewed patients: 9% booked for emergency clinic follow-up, 3% booked for main clinic follow-up. Discharge rate: 72%.
Conclusions: Managing the common condition otitis externa according to international guidelines has improved the workload and follow-up rate in the RNTNE emergency clinic. Improving staff numbers has also helped. By setting up correspondence we have also improved communication with GPs.

0563: AN AUDIT OF THE PUNCTUALITY OF THEATRE LISTS WITHIN AN ENT DEPARTMENT
Shyamica Thennakon, Christopher Webb. Royal Liverpool and Broadgreen University Hospital, Liverpool, UK
Aim: Operating theatres utilise between £1-16 million per annum in each trust, with our department's patient waiting lists for elective ENT operations averaging two months.
Our audit aims to assess whether our department is maximising its allocated theatre time with punctual starts (standard = 95%).
Methods: A retrospective audit of 35 consecutive, elective theatre lists in a two-month period (01.11.2011 – 31.12.2011). We compared the start and end times of theatre lists, as recorded by the ORMIS theatre system, with the scheduled theatre time.
Results: 97% of our theatre lists started late (range 10-58 minutes). Of the theatre lists which started late, 74% finished late (range 19-126 minutes) and 26% finished early (range 19-126 minutes). 3% of theatre lists started early (9 minutes) and finished late (101 minutes).
Conclusion: We have highlighted an inefficient use of allocated theatre time and propose a supplementary documenting system of theatre timings. This aims to document and raise awareness of which arm(s) of the surgical process (the anaesthetist, theatre staff, surgeon, ward staff or patient) is accountable for the delays. Information from this new system aims to facilitate awareness and further changes.

0579: CONSENT FOR ENT SURGERY - ARE WE THE ONES AT RISK?
Matthew Smith, Raj Lakhani. Peterborough City Hospital, Peterborough, UK
Aim: To audit the consent process for common ENT operations against DoH, GMC, RCS and BMA guidance.
Method: Consecutive patients undergoing common ENT procedures were identified. 120 consent forms and all clinic letters relating to tonsillectomy, grommet insertion, septoplasty and hemithyroidectomy were analyzed.
Results: All patients had consent forms. Only the 'procedure', 'intended benefit' and 'anaesthetic' sections received 100% completion. Consent was taken by SHOs (4%), staff grades (14%), SpRs (44%) and consultants (38%). Day-of-surgery consent occurred in 7.5% of cases. The average period between consent and surgery was two months, though consent confirmation occurred in only 40%, with no correlation to the period elapsed. The number of risks listed for each procedure decreased with staff seniority.
Despite 100% of forms for tonsillectomy listing bleeding as a risk, possible transfusion was indicated on only 20%. Clinic letters rarely featured consent details.
Conclusions: Completion of consent forms is variable. There is poor compliance with guidance from professional bodies. The medico-legal implications are potentially significant, and key areas require attention if patient safety and autonomy are to be maintained. Particular focus must be placed on consent confirmation, on consent for blood transfusion in procedures with a significant transfusion rate, and on the listing of operative risks.

0582: CAN WE SLEEP EASY? - AN ASSESSMENT OF OUT-OF-HOURS ENT COVER
Matthew Smith, Raj Lakhani. Peterborough City Hospital, Peterborough, UK
Aims: To assess the management of ENT emergencies by 'cross-specialty' SHOs covering ENT at night, and to evaluate the confidence and experience of 'cross-specialty' SHOs.
Method: An online questionnaire (33 written and photographic true-false questions) was designed to test the management of ENT emergencies. Questions were graded as 'essential' or 'desirable' knowledge. A cohort of 'non-ENT' SHOs covering multiple specialties, including ENT, at night (February-November 2011) completed the survey. Additional questions surveyed training, experience and confidence.
Results: 15/18 completed questionnaires were received. The median score was 19/33 (range 15-28/33). Questions testing 'essential' knowledge were answered correctly more often (median score 13/18). Two thirds of SHOs managed 'time-critical' presentations incorrectly, delaying essential treatment. Up to 9/15 mis-managed certain life-threatening conditions. Awareness of postoperative complications was poor. Only 2/15 SHOs had prior ENT experience, 9/15 had no formal training in ENT emergencies, and only 7/15 were confident performing an ENT examination. 10/15 self-rated their ENT knowledge as average and 5/15 as poor.
Conclusions: SHOs who cross-cover ENT at night frequently lack the relevant training, experience and essential knowledge required to provide emergency cover for this surgical specialty. In the setting of limited undergraduate education, additional specialist training is required to ensure patient safety.

0594: CLINICAL APPLICABILITY OF THE THY3 SUB-CLASSIFICATION SYSTEM
Gentle Wong 1, Zaid Awad 1, Roy Farrell 1, Stephen Wood 2, Tanya Levine 1. 1 Northwick Park Hospital, London, UK; 2 Wexham Park Hospital, London, UK
Aim: To determine the malignancy rates of Thy3a and Thy3f, and to assess the clinical applicability of the Thy3 sub-classification system.
Method: A multi-institutional prospective audit of clinical practice, spanning 3 cancer networks in North West London. One hundred and fifteen consecutive patients with Thy3 cytology, discussed at the weekly multi-disciplinary team (MDT) meetings between 2010 and 2011, were included. Our main outcome measures were the Thy3f and Thy3a malignancy rates and the clinical applicability of the Thy3 sub-classification system.
Results: In the present series, 115 Thy3 lesions were identified, comprising 83 Thy3f and 32 Thy3a. 65 Thy3f and 11 Thy3a had corresponding histology. 45% of the Thy3f and 64% of the Thy3a lesions were found to be malignant on histopathological examination.
Conclusions: The sub-classification has not demonstrated a convincing enough difference in malignancy rates to make a translational difference in how we manage these subgroup patients clinically. We have identified that Thy3a may have a higher malignancy potential than Thy3f; this may impact on how we evaluate future management of Thy3a patients.

0597: 'ONE ON, ONE OFF'. A MODEL FOR SAFE AND EFFICIENT PAEDIATRIC ENT SURGERY
David Walker, Samuel Cartwright, Jonathon Blanshard, Paul Spraggs.
Hampshire Hospitals NHS Foundation Trust, Basingstoke, Hampshire, UK
Aims: To demonstrate a system for efficient theatre session management using a 'One on, One off' approach to achieve up to 10 cases per session, and to outline the business case to support it.
Methods: Routine paediatric otolaryngology procedures are allocated for surgery on a dedicated paediatric list. The day surgery ward is transformed to 'Paediatrics Only' and staffed by paediatric nurses. Two paediatric-trained anaesthetists and two Operating Department Assistants (ODAs) are assigned to the list to allow a 'One on, One off' system, i.e. the next patient is anaesthetised by the time the previous case leaves theatre.
Results: Over a two-year period, the average number of cases for a single theatre session was 7.9 (range 3-10), compared to 4 on an equivalent session at a neighbouring hospital. The cost of the extra anaesthetist and ODA was £300; however, this additional activity generates extra revenue of £2000-4000, depending on case mix. There were no adverse outcomes during this time period.
Conclusions: This model, easily applied to other surgical specialities, can drive down waiting lists, increase efficiency and improve revenue. The business case for supplying extra anaesthetic staff is clear and provides fast turnaround whilst maintaining patient safety and training.
Indian J Dermatol Venereol Leprol | September-October 2008 | Vol 74 | Issue 5 | 532

Basic digital photography in dermatology
Feroze Kaliyadan, Jayasree Manoj, S. Venkitakrishnan, A. D. Dharmaratnam
Amrita Institute of Medical Sciences and Research Centre, Kochi, Kerala - 682 026, India
Address for correspondence: Dr. Feroze Kaliyadan, Department of Dermatology, Amrita Institute of Medical Sciences and Research Centre, Kochi, Kerala - 682 026, India. E-mail: ferozkal@hotmail.com

ABSTRACT
Digital photography has virtually replaced conventional film photography as far as clinical imaging is concerned. Though most dermatologists are familiar with digital cameras, there is room for improvement in the quality of clinical images. We aim to give an overview of the basics of digital photography in relation to dermatology, which would be useful to a dermatologist in his or her future clinical practice.
Key Words: Dermatology, Digital photography, Clinical photography

How to cite this article: Kaliyadan F, Manoj J, Venkitakrishnan S, Dharmaratnam AD. Basic digital photography in dermatology. Indian J Dermatol Venereol Leprol 2008;74:532-6.
Received: January, 2008. Accepted: July, 2008. Source of Support: Nil. Conflict of Interest: None Declared.

INTRODUCTION
With the advent of new generation digital cameras, more and more dermatologists are using digital images in their regular practice. The dramatic reduction in the cost of digital photography compared to conventional film photography is one of the reasons for the rapid, large-scale acceptance of digital photography as a part of medical imaging. Digital photography improves the physician's ability to communicate with peers, patients, and the public. This article aims to make dermatologists familiar with the basics of digital imaging in the context of dermatology.

Basic science of a digital photograph
Basically, a digital camera, just like a conventional camera, has a series of lenses that focus light to create an image of an object. But instead of focusing this light onto a piece of film, it focuses it onto a semiconductor device that records light electronically. A computer then breaks this electronic information down into digital data. In other words, instead of film, a digital camera has a sensor that converts light into electrical charges. The image sensor employed by most digital cameras is a charge-coupled device (CCD); some cameras use complementary metal oxide semiconductor (CMOS) technology instead. Both CCD and CMOS image sensors convert light into electrons, and each has its own relative advantages and disadvantages.[1]

How many pixels?
The amount of detail that the camera can capture is called the resolution, and it is measured in pixels. Basically, the more pixels a camera has, the more detail it can capture. Also, as the pixels increase, larger prints can be made without losing detail. For example:
• 640x480 - Relatively low resolution. This resolution is ideal for e-mailing pictures or posting pictures on a web site.
• 1216x912 - This is a "megapixel" image size -- 1,109,000 total pixels -- good for printing pictures.
• 1600x1200 - This is "high resolution". Good quality prints of 4 x 5 inches can be obtained.
• 2240x1680 - Found on 4-megapixel cameras, this allows even larger printed photos, with good quality for prints up to 16x20 inches.[1]
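The arithmetic behind these resolution and print-size figures is simple enough to script. The following sketch is our own illustration (not from the article); the 300-dpi print figure is a common rule of thumb for good-quality prints:

```python
# Illustrative arithmetic: relate pixel dimensions to megapixels and to the
# largest print size achievable at a chosen print resolution (dots per inch).

def megapixels(width_px: int, height_px: int) -> float:
    """Total pixel count, in millions of pixels."""
    return width_px * height_px / 1_000_000

def max_print_size(width_px: int, height_px: int, dpi: int = 300) -> tuple:
    """Largest print, in inches, at `dpi` dots per inch without upscaling."""
    return (width_px / dpi, height_px / dpi)

if __name__ == "__main__":
    for w, h in [(640, 480), (1216, 912), (1600, 1200), (2240, 1680)]:
        mp = megapixels(w, h)
        pw, ph = max_print_size(w, h, dpi=300)
        print(f"{w}x{h}: {mp:.2f} MP, ~{pw:.1f} x {ph:.1f} inches at 300 dpi")
```

Note that at 300 dpi a 1600x1200 image yields roughly a 5.3 x 4 inch print, consistent with the 4 x 5 inch figure quoted above; larger prints from the same file simply lower the effective dpi.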
The question of what is the ideal resolution for dermatological photography has been an often-discussed topic. Some authors, like Siegel, point out that for all practical purposes in clinical dermatology, the current technology with regard to resolution has already gone beyond the needs of the clinician. Siegel's article, using freeware and commercially available software, offers proof that a single megapixel (MP) image is adequate for on-screen evaluation and publication purposes.[2] Bittorf et al. have suggested that a resolution of 768 x 512 (with 24-bit colour), i.e. around 0.4 megapixel, may be sufficient for routine dermatological purposes.[3] Miot et al. suggest that a camera resolution of 1.3 MP (1280 x 960) is adequate for dermatological photography.[4] However, in the same article, Miot et al. point out that whenever we know, prior to taking the photograph, that the image will need editing to remove an undesirable aspect or to concentrate attention solely on one element of the photo, a greater-than-standard resolution should be used, considering that a significant quantity of pixels will be discarded.[4] Virtually all of the new cameras available in the market at present have resolutions starting from at least 3 megapixels, so resolution is unlikely to be a major issue in the future as far as digital photography in dermatology is concerned.

Compact (point-and-shoot) cameras or digital SLR (single lens reflex)?
As far as the quality of photos is concerned, digital SLRs undoubtedly outscore the common point-and-shoot varieties. However, for all practical purposes, including publication and PowerPoint presentations, the image quality of compact cameras is more than sufficient. Besides, the exorbitant cost of digital SLRs is another prohibitive factor.
The one disadvantage of most compact cameras is a limitation in manual adjustments, specifically in relation to controlling factors like aperture size, shutter speed and flash intensity. However, there are intermediate, 'prosumer' or 'bridging' cameras available these days which have functional capabilities between a simple compact camera and a digital SLR. Most of these are quite affordable compared to a digital SLR.[1]

BACKGROUND AND MATERIALS
One of the basics of clinical photography is to stress the lesion/area of interest. There should be an emphasis on removing any kind of clutter or other distracting elements from the background. Most experts recommend a plain light blue or green non-reflective surface, like a linen cloth. Make it a point to remove items like ornaments, which unnecessarily divert focus from the lesion of interest. Other accessories which would come in handy are measurement tapes and skin markers.[5]

Lighting and flash
Lighting is often a tricky issue in dermatological photography. Ideally, broad daylight or a naturally lit room, if available, would be best; however, many times we have to take our photographs indoors with the use of flashes or other accessory light sources. All compact digital cameras have inbuilt flash units. Unfortunately, most of the compact units do not have options for controlling the intensity of the flash. The most important thing while using an inbuilt flash is to avoid getting too close, as the distinctive features of the lesion may get washed out. It would also be advisable to vary the 'white balance' on the camera depending on the primary lighting (e.g. adjust the white balance to fluorescent if shooting under predominantly fluorescent light). External light sources can be useful in taking very close shots with the 'macro' feature turned on (using the flash while taking extreme close-ups tends to cast a shadow of the camera head over the image).
Another enhancement which might help for very close shots is a 'ring' flash. However, this too may reflect from the surface of the skin lesion being photographed, particularly if the distance between the camera and the lesion is small, and wash away all the surface details.

Macro photography
One of the very evident advantages of digital cameras compared to film cameras is the ability to produce extremely good close-up shots. Though routine dermatological imaging for publishing or presentation does not really require extreme close-ups, macro photography can give stunning detail to close-up images of skin lesions. Most digital compacts offer macro shots from distances of 2 cm to 5 cm without any lens attachment. To put it simply, these cameras can shoot images at distances as close as 2 to 5 cm, and the image projected on the digital sensor is close to the same size as the subject itself. The universal symbol for the macro mode is a flower. The quality of macro shots can be enhanced with macro lenses, which can be attached to digital SLRs and some bridging cameras. However, for all practical purposes, dermatological photography does not require the use of specific macro lenses.

General recommendations and tips
1) Always take the patient's consent before photographing, especially if the shots are taken during a surgical procedure, when the patient may not be aware of the same. A written informed consent would be best and is a must if one is planning to use the photograph for publications.
2) Include the patient's hospital card, tag or number in all, or at least one, of the images of a series so as to enable easy identification later.
3) Always try to take before and after photographs in the same settings with respect to patient positioning, background, lighting and camera settings.
4) Use auto-focus as often as possible; use manual controls only if you are well versed with them.
5) Select the 'macro' mode for close-up shots.
6) Use flash as often as possible when the available lighting is poor, but avoid getting too close to the lesion, as the overexposure may wipe out the details.
7) For very close shots, oblique views may be preferred.
8) Try to add some shots of areas you expect to be involved in some of your differential diagnoses but which are apparently free of involvement in the particular case (e.g. nails in psoriasis).
9) Eliminate distractions from the background. Try taking all photographs with a plain non-reflective blue or green background.

Framing tips
For different body areas, certain standard framing patterns are followed. Detailed instructions on these can be found in an article by Pak.[5] For all lesions, make it a point to take at least 2 shots from each point of focus. Minimal blurring may not be obvious on the LCD screen and may be noticeable only after the images are viewed on the monitor. It is always better to have an extra copy from every focus point so that the best image can be selected. Always try to capture distinctive elements like typical representative lesions and particular configuration or distribution patterns.
For generalised lesions, take shots from at least three ranges:
a) A complete vertical view of the patient showing the extent and distribution of the rash
b) A medium distance shot showing the arrangement and configuration of the rash
c) A close-up view highlighting a representative lesion
For localised lesions, take shots from at least two points:
a) A medium view showing the rash/lesion with respect to location and configuration (always include a recognizable body landmark so that the location is obvious, e.g.
for lesions on the abdomen, include the umbilicus in the medium distance shot)
b) A close-up view of the representative lesion
For isolated lesions, it is also advisable to include a discernible landmark in one of the shots. For the close-up shots, use a measuring tape in the frame to demonstrate the size of the lesion. It would be advisable to take the close-up shots from more than one angle and include oblique shots. Shots with and without flash may be taken and the best shot selected for storage.

Basic microscopic photography
Another interesting adaptation of compact digital cameras is in recording basic light microscopy images, e.g. hair microscopy, scabies or pediculosis [Figures 1,2]. The front of the camera lens can be placed onto the eyepiece and the image taken either in the auto mode or with a fast shutter speed. Actual skin histopathology images ideally require the use of dedicated camera units integrated with the microscope.

Children and infants
While photographing very young children and infants, make sure that the subject is comfortable and not anxious. Children tend to be fidgety, and getting blurred images because of movement is a common problem. A small toy or a pen would come in handy, though this should not be included in the frame. If your camera has an option for adjusting shutter speeds, a fast shutter speed would be useful in shooting photographs of children. Also use the flash as often as possible, but avoid pointing the flash directly into the eyes, especially in the case of infants.

Oral/Dental images
Proper imaging of the oral cavity requires the use of good quality dental mirrors. However, satisfactory images can be obtained by using a very good point light source. The 'auto-illuminator' feature available in most modern cameras also helps in obtaining a good shot of oral mucosal lesions, without costly mirrors.

Videos
Most present-day compact digital cameras have video recording capabilities with sound.
Feroze: Digital photography in dermatology | Indian J Dermatol Venereol Leprol | September-October | Vol 74 | Issue 5 | 535
Most allow 640×480 video, which gives sufficiently good clarity for PowerPoint presentations and similar uses. This can be put to various uses effectively, e.g. for demonstrating basic signs in dermatology such as the Auspitz sign or the Nikolsky sign.

Storage
Most electronic submissions accept the JPEG (Joint Photographic Experts Group) format as the standard. The major advantage of the JPEG format is that the image size can be compressed considerably without significant visible loss of resolution. This ensures ease of online submission as well as use in PowerPoint presentations. Some journals insist on TIFF (Tagged Image File Format) images being sent on CDs as a follow-up to online submission (where digital images are the only source). TIFF is considered the default industry standard for a cross-platform image format that can be opened by virtually all graphics applications. The disadvantage of TIFF files vis-à-vis JPEG is a bigger file size. Other standard formats used for storage include PSD (Photoshop Document), PNG (Portable Network Graphics), BMP (Windows bitmap) and GIF (Graphics Interchange Format).[6] Most digital cameras save images by default at a resolution of 72 dpi. This can be converted to 300 dpi with basic photo-editing software; 300 dpi is the general standard for most journal submissions.[6] Another file format, used especially in the context of SLRs, is the RAW file. This refers to the minimally processed data file from the image sensor of a digital camera. Raw image files are sometimes called digital negatives, as they fulfil the same role as film negatives in traditional chemical photography: the negative is not directly usable as an image, but holds all of the information needed to create one.
The obvious advantage is the markedly higher image quality, as virtually no pixels are lost; the corollary disadvantage is that the files are two to six times larger than JPEG files. Another problem is that there is currently no standardised RAW format, with different camera manufacturers using different versions, e.g. .crw (Canon), .ptx (Pentax) and .nef (Nikon). With the cost of hard disks dropping dramatically over the last few years, it has become very easy to store an entire image inventory in one place. Other than the primary hard disk, it is always advisable to keep a backup copy of your images at an alternate site such as a portable hard disk (a range of which are available at very reasonable prices). The backup copies can also be saved in the compressed JPEG format to minimize the space taken up. It always makes sense to delete images that are blurred, as they are unlikely to be used and will unnecessarily clutter the hard disk. Make it a point to catalog all saved images (or their containing folders), tagging them with the patient's name, hospital number, date and, if possible, the provisional diagnosis. Meticulous cataloging may seem cumbersome at the beginning but makes future retrieval of images very convenient.

Figure 1: Hair microscopy using a compact digital camera (x4)
Figure 2: A pubic louse photographed using a compact digital camera (x4)

Imaging software and tampering issues
A variety of software packages are available on a proprietary, shareware or freeware basis. The most commonly used include Adobe Photoshop, Paint Shop Pro, CorelDRAW, GIMP and IrfanView. The software can be used for optimal cropping, adjusting the resolution and, to a certain extent, adjusting variables such as brightness, contrast and saturation.
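The cataloging advice above can be made systematic with a small helper that builds a consistent folder or file stem from the patient details. The following is a minimal Python sketch; the exact naming pattern (date_hospitalNumber_patient_diagnosis_sequence) is an illustrative assumption, not a convention from the article:

```python
from datetime import date

def catalog_name(patient, hospital_no, diagnosis, when=None, seq=1):
    """Build a consistent stem for an image file or folder:
    YYYY-MM-DD_HospitalNo_PatientName_Diagnosis_NN (pattern is illustrative)."""
    when = when or date.today()
    parts = [
        when.isoformat(),              # ISO date sorts chronologically
        hospital_no,
        patient.replace(" ", ""),      # drop spaces for a filesystem-safe name
        diagnosis,
        f"{seq:02d}",                  # zero-padded shot number
    ]
    return "_".join(parts)

catalog_name("John Doe", "H12345", "psoriasis", date(2008, 9, 1), 3)
# -> '2008-09-01_H12345_JohnDoe_psoriasis_03'
```

Names built this way sort by date in any file browser, which makes the "meticulous cataloging" the author recommends largely automatic.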
The question of how much image adjustment falls within the purview of ethical image editing is still unanswered. There have already been numerous instances of editing software being used unethically to completely alter medical images, both clinical and histopathological.[7,8]

Teledermatology
The digital image forms the basis of 'store-and-forward' teledermatology. A proper digital photograph highlighting the representative lesions, together with a proper history, is often sufficient for a dermatologist to make a reliable diagnosis. Many studies have demonstrated that good-quality images can actually substitute for a dermatological physical examination in a good percentage of cases.[9]

Photography resources and help sites on the net
http://en.wikipedia.org/wiki/Portal:Photography
www.steves-digicams.com/hardware_reviews.html
www.dpreview.com/reviews/
http://www.shortcourses.com/
The above resources are all regularly updated and give a fair idea of which camera to go for, depending on whether you are a beginner or an advanced user. In fact, http://www.dpreview.com/reviews/compare.asp lets you select the exact attributes of the camera you want and get a list of the available cameras in that range. For beginners we would suggest an entry-level camera with at least 6 MP resolution and 3× optical zoom as a minimum, e.g. Canon PowerShot A590 IS, Fujifilm FinePix J10, Nikon Coolpix P60 or Kodak EasyShare M1033 (other relevant features, such as macro mode and video, are available in virtually all present-day entry-level cameras). For users looking for more advanced options, 'prosumer' cameras such as the Sony H series or Canon SD 950 IS, or entry-level digital SLRs such as the Canon 350D/400D, would be a good option. For professionals, a good SLR such as the Canon EOS 40D, Nikon D300 or Sony Alpha A700 would be ideal, provided cost is not a consideration.
CONCLUSION
Digital photography has revolutionised the way images can be taken and stored in the context of clinical dermatology practice, research and teaching. However, as the options in available equipment increase day by day, we should be aware of what optimal equipment we need, and understand the potential possibilities and limitations of the equipment in our hands. For routine practice, entry-level digital cameras not only suffice but are also handy. Moreover, even with these cameras, extras such as movie mode and basic microphotography are possible. With a few basic points regarding framing, lighting, exposure and resolution, virtually anyone can produce good-quality clinical photographs meeting the specifications for publication.

REFERENCES
1. Wilson TV, Nice K, Gurevich G. How digital cameras work. Available from: http://electronics.howstuffworks.com/digital-camera.htm. [Last accessed 20 Aug 2007].
2. Siegel DM. Resolution in digital imaging: enough already? Semin Cutan Med Surg 2002;21:209-15.
3. Bittorf A, Fartasch M, Schuler G, Diepgen TL. Resolution requirements for digital images in dermatology. J Am Acad Dermatol 1997;37:195-8.
4. Miot HA, Paixão MP, Paschoal FM. Basics of digital photography in Dermatology. An Bras Dermatol 2006;81:174-80.
5. Pak HS. Dermatologic Photography [homepage]. American Telemedicine Association; 1999. Available from: http://www.atmeda.org/ICOT/telederm%20Forms/GuidetoDermatologicPhotography.pdf. [Last accessed 20 Aug 2007].
6. Roach D. Imaging. In: Nouri K, Leal-Khouri S, editors. Techniques in dermatological surgery. 1st ed. Mosby; 2003. p. 33-42.
7. Cutrone M, Grimalt R. The true and the false: Pixel-byte syndrome. Pediatr Dermatol 2001;18:523-6.
8. Suvarna SK, Ansary MA. Histopathology and the 'third great lie': When is an image not a scientifically authentic image? Histopathology 2001;39:441-6.
9. Kvedar JC, Edwards RA, Menn ER, Mofid M, Gonzalez E, Dover J, et al.
The substitution of digital images for dermatologic physical examination. Arch Dermatol 1997;133:161-7.

Vol. 61 No. 9 • JOM
www.tms.org/jom.html
TMS Member Profiles
Meet a Member: Diana Lados: Casting Creations in Aluminum & Photography
By Francine Garrone
Each month, JOM features a TMS member and his or her activities outside of the realm of materials science and engineering. To suggest a candidate for this feature, contact Francine Garrone, JOM news editor, at fgarrone@tms.org.
When Diana Lados began playing tennis at the young age of 4, she was learning to perfect her game with a wooden racket. By the time she was a championship player at 14, Lados had moved on to an aluminum racket. Already, the science and aesthetics of aluminum had made their mark on her game and, ultimately, on her life. Looking back on her championship year, Lados, an assistant professor in the Mechanical Engineering Department and director of the Integrated Materials Design Center at Worcester Polytechnic Institute (WPI) in Massachusetts, associates her understanding of materials in tennis, and their evolution, with materials used in other applications, particularly aerospace materials. Shortly after starting her doctoral research at WPI and many years after winning her tennis trophy, Lados developed a passion for casting the same material her racket was made from, aluminum, and creating beautiful artwork. Her hobby soon became an artistic inspiration that benefited from her studying materials science and having an understanding of the metal. "While preparing my research samples, I would see interesting shapes that I thought I could bring to life in a different way," Lados said.
"I never made parts from scratch but rather re-shaped or accentuated the existing pieces of cast aluminum alloys from our foundry." By "post-processing" pieces of cast aluminum that she finds in WPI's foundry, Lados creates such artwork as "The Egg" (Figure 1). Post-processing includes bending, hammering, coarse or mirror polishing, and selective-area etching. At times, Lados paints the pieces with vibrant colors or glues several pieces together to create the final shape and enhance specific features of the piece. "Most pieces represent objects or beings, often with stylized features, to create different effects and emphasize certain characteristics," she said. "There are also a few abstract pieces that challenge the viewer's imagination." To date, Lados has created more than two dozen pieces of cast aluminum artwork. Each of her works ranges in size from a few inches to several feet tall. She said she gives her artwork to friends to be placed in their homes and gardens. On occasion, Lados has used her creativity to shed light on certain features of her cast aluminum artwork using another one of her hobbies: photography. "Both cast aluminum creations and photography require imagination and a good sense of proportion and three-dimensional space visualization," she said. "They are great ways to use technical knowledge and tools for artistic manifestations." Lados' photography captures not only her cast aluminum artwork but also landmarks, nature and churches, as well as many objects and "catch the moment" shots. "I started with an old 35 millimeter Leica camera over 20 years ago, and in recent years, I began to explore more with digital photography," Lados said. "I was always inclined to observe things, place them in a context, and pay attention to details." Currently, Lados is preparing an exhibit with 72 of her photographs to be displayed in the Gordon Library at WPI.
Work from her "Nature, Color, and Life in South America" and "Architecture Around the World" collections will be featured. "In a way, both of these interests have enabled me to look at the world around us with a different perspective, add new dimensions to it, and appreciate the beauty, form and context," Lados said.
Figure 1: Al-7%Si-Mg (A356) alloy egg. The question is: Which came first? The cast Al egg or the cast Al hen?
Figure 2: Machu Picchu, Urubamba Valley, Peru, "The Lost City of the Incas."
Scientific Sessions for Medical Students (H)
Insights Imaging DOI 10.1007/s13244-01 -00 -1 81 8

Friday, March 4, 12:30 - 13:30, Room L/M
MS SS 1: Session 1
Moderator: H. Ringl; Vienna/AT

H-1 Fascinating imaging and promising researches of the brain
A. Rácz; Budapest/HU
Purpose: Magnetic resonance imaging (MR) has undergone rapid development over the last two decades and has become the leading tool for imaging brain structures and functions. Of the non-conventional MR techniques, diffusion tensor imaging (DTI) and MR spectroscopy (MRS) are providing new perspectives on the underlying mechanisms of neurological diseases, opening wide prospects in research and therapy, especially in neuro-oncology.
Methods and Materials: DTI is a diffusion-weighted imaging technique that shows the directionality of the diffusion of water molecules voxel by voxel, thereby creating the fractional anisotropy (FA) map, with which diffusion properties can be observed. Tractography is based on the FA map and is used to image intact or injured white matter connections. MR spectroscopy enables us to show certain metabolites in the brain tissue and to measure their quantity in normal tissue and in lesions. The most widely used MRS technique is proton spectroscopy.
Results: DTI and tractography allow us to show not only structural but also functional changes in the brain. Therefore, with the help of FA maps, we are able to describe the properties of different pathological processes. This possibility has its most important role in the field of neuro-oncology. Besides this, tractography has strong potential in surgical planning, with its ability to show the intact white matter connections. By showing metabolic changes, MRS also has an implication in oncology, but it can be very useful in many other processes too, such as inherited or acquired metabolic diseases.
Conclusion: With the use of DTI, tractography and MRS we can widen our differential diagnostic tools, and in the future not only might our diagnostic capabilities be stronger, but with the help of these techniques we may also reveal the exact etiology of many still undiscovered diseases.

H-2 Intuitive updating of post-test probabilities in the light of new evidence - application and pitfall
M. Benndorf; Jena/DE
Purpose: To demonstrate that naïve updating of post-test probabilities according to new evidence may be flawed.
Methods and Materials: For the interpretation of a diagnostic test, the positive and negative predictive values (PPV, NPV) must be known. These values can be obtained by using Bayes' theorem [PPV=SE×π/(SE×π+(1-SP)×(1-π))], which employs the sensitivity (SE), specificity (SP) and estimated prior probability (π) of the target disease. Most radiologic examinations do not convey only a single piece of information; rather, a huge number of image features (X1…Xn) is observable. The sensitivity of the combination of X1…Xn is P(X1…Xn|D), where "D" denotes "diseased". If X1…Xn are conditionally independent, P(X1…Xn|D)=P(X1|D)×P(X2|D)×… . It is possible to calculate sensitivity and specificity for each of X1…Xn separately. However, in clinical image interpretation the independence assumption seldom holds.
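The updating arithmetic can be reproduced in a few lines of Python (an illustrative sketch added here, not part of the abstract; the sensitivities, specificities and prevalence are the values this abstract reports for the two mammographic features):

```python
def bayes_ppv(se, sp, prior):
    """Positive predictive value via Bayes' theorem:
    PPV = SE*pi / (SE*pi + (1 - SP)*(1 - pi))."""
    return se * prior / (se * prior + (1 - sp) * (1 - prior))

prior = 424 / 892                           # prevalence of malignancy in the dataset

# Feature X1 ("irregular shape") observed first:
ppv_x1 = bayes_ppv(0.733, 0.818, prior)     # ~0.785

# Naive update for X2 ("ill-defined/spiculated margin"):
# reuses the *marginal* SE/SP of X2 with PPV(X1) as the new prior.
ppv_naive = bayes_ppv(0.698, 0.778, ppv_x1)     # ~0.92, overstated

# Valid update: SE/SP of X2 *conditional on X1 being positive*.
ppv_correct = bayes_ppv(0.778, 0.259, ppv_x1)   # ~0.79
```

Because margin and shape are correlated, the conditional specificity of X2 collapses once X1 is known to be positive, and the naïvely updated 92% overstates the true post-test probability of roughly 79%.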
The "mammographic mass" dataset from the UCI machine-learning repository is used to illustrate the flaw introduced into the derivation of post-test probabilities by erroneously assuming independence of successively observed signs. The dataset comprises classifications of the margin and shape of 892 mammographic lesions (424 malignant, 468 benign).
Results: Lesion feature X1, "irregular shape", had sensitivity = 73.3%, specificity = 81.8% and PPV = 78.5% (π ≈ 0.48). Lesion feature X2, "ill-defined/spiculated margin", had sensitivity = 69.8%, specificity = 77.8% and PPV = 74.0% (π ≈ 0.48). If the PPV for X2 was calculated with π = PPV(X1) = 78.5%, sensitivity = 69.8% and specificity = 77.8%, a value of 92.0% was obtained. This procedure is only valid if X1 and X2 are conditionally independent. However, if X1 was positive in the dataset (n = 396), the sensitivity and specificity of X2 changed to 77.8% and 25.9%, respectively. PPV(X2) was now 79.3%, significantly lower than the PPV under assumed conditional independence (P < 0.05).
Conclusion: Thoughtful image interpretation requires dedicated knowledge of observable signs and their diagnostic performance. The naïve updating of post-test probabilities with assumed fixed sensitivity and specificity may therefore be flawed.

H-3 Scintigraphy of neuroendocrine tumours using 99mTc-Tektrotyd
M. Stojkovic; Belgrade/RS
Purpose: 99mTc-Tektrotyd is a radiopharmaceutical for the diagnosis of tumours with overexpression of somatostatin receptors. The aim of the study is the detection of primary and metastatic neuroendocrine tumours (NET).
Methods and Materials: In 33 patients with different NETs, whole-body scintigraphy, SPECT and particular views were performed 2 h and 24 h after i.v. administration of 740 MBq 99mTc-Tektrotyd.
Results: In the group of 9 patients with NET of unknown origin, there were 7 true positive (TP) and 2 false negative (FN) findings. Diagnosis was made according to SPECT findings in 6 patients of this group.
In the group of 8 patients with gut carcinoids, there were 4 TP, 2 true negative (TN), 1 FN and 1 false positive (FP) finding. Diagnosis was made according to SPECT findings in 2 patients of this group. In the group of 7 patients with neuroendocrine pancreatic carcinomas, there were 4 TP and 3 TN findings. Diagnosis was made according to SPECT findings in 2 patients of this group. In the group of 6 patients with lung carcinoids, there were 4 TP, 1 TN and 1 FN. Diagnosis was made according to SPECT findings in 2 patients of the group. In the group of 3 patients with gastrinomas, there were 2 TP findings and 1 TN. Diagnosis was made according to SPECT findings in 2 patients of the group. According to our results, the overall sensitivity of the method is 84%, specificity 88%, positive predictive value 95%, negative predictive value 64% and accuracy 85%.
Conclusion: Our preliminary results show that scintigraphy with 99mTc-Tektrotyd is useful for the diagnosis, staging and follow-up of patients suspected of having NETs. SPECT had an important role in diagnosis.

H-4 Paracingulate sulcus morphology and neuropsychological characteristics in people with a genetic susceptibility to bipolar disorder: a neuroradiological study emphasising the potential for prediction of development of psychosis by radiological methods
C. Carstairs; Edinburgh/UK
Purpose: This study explores the fascinating potential of using neuroradiological methods to predict the onset of bipolar disorder (BD). Neuroradiological studies show that bipolar subjects display morphological differences in the paracingulate sulcus (PCS), a brain fold within the anterior cingulate cortex. It is unknown whether these differences are genetically determined. Neuropsychological studies indicate that some individuals at high genetic risk of BD display subclinical features of the illness, but it is unknown what determines which high-risk individuals are affected. This study aimed to determine whether: 1.
individuals at high genetic risk of BD exhibit abnormalities in PCS morphology; 2. there is an association between any differing aspects of PCS morphology and the presence of psychopathological symptoms.
Methods and Materials: In this unique study, PCS morphology was rated on MRI scans, using previously established methods, in 117 individuals at high familial risk of BD and 75 controls aged between 16 and 25. Results from neuropsychological assessment by the TEMPS-A, RISC and Ekman 60 Faces tests were also analysed in relation to PCS morphology.
Results: Morphological differences between high-risk and healthy control subjects included: a statistically significant greater incidence of a continuous left PCS in high-risk subjects; a trend approaching significance for a greater incidence of PCS in high-risk subjects across hemispheres; and qualitative differences between groups in patterns of PCS asymmetry. A continuous left PCS correlated significantly with neuropsychological test scores.
Conclusion: These novel findings provide an excellent basis for further work in determining neuroradiological predictive factors for BD onset. The study adds to our knowledge of anatomical differences in individuals genetically susceptible to BD, as well as morphological associations between the PCS and cognitive and emotional function. This highlights the fascinating challenge of eventually using radiological methods to help predict the onset of BD.

H-5 Coronary CTA: input of prospective cardiac gating to reduction of radiation dose
M.A. Glatzkova; Moscow/RU
Purpose: New methods for CTA radiation exposure reduction remain one of the fascinating challenges for the future. The purpose of the study was to compare the patient radiation dose and image quality of coronary CTA examinations performed with retrospective and prospective ECG-gating.
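For orientation, the effective dose in such comparisons is estimated as the dose-length product multiplied by an anatomic conversion coefficient, E = DLP × k. A minimal Python sketch (an illustrative addition, not part of the abstract; the DLP values and the coefficient k = 0.014 mSv/(mGy·cm) are those reported in this abstract):

```python
def effective_dose_msv(dlp_mgy_cm, k=0.014):
    """Effective dose E = DLP * k, with k in mSv/(mGy*cm)."""
    return dlp_mgy_cm * k

e_prospective = effective_dose_msv(273.9)    # ~3.8 mSv
e_retrospective = effective_dose_msv(665.7)  # ~9.3 mSv
saving = (1 - e_prospective / e_retrospective) * 100   # ~59% dose reduction
```

The arithmetic reproduces the abstract's figures: about 3.8 mSv versus 9.3 mSv, i.e. a roughly 59% lower effective dose with prospective gating.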
Methods and Materials: 59 CTA studies of patients with coronary artery disease were selected retrospectively from the database. 29 patients were examined with prospective and 30 patients with retrospective ECG-gated protocols, respectively. All examinations were performed with 64-row MDCT. Image quality of the coronary arteries was evaluated using a four-point grading scale (4 - nondiagnostic images; 1 - excellent quality). Effective radiation doses of prospective and retrospective CT angiography were calculated using the volume CT dose index (CTDIvol) and dose-length product (DLP) with a conversion coefficient of 0.014 mSv/(mGy*cm). Data regarding image quality and radiation exposure for prospective and retrospective CTA were compared.
Results: Age, heart rate and scan parameters (tube voltage, tube current and scan range) were not statistically different between the two groups. Image quality in the coronary artery branches was similar between the retrospective and prospective gating protocols (1.5±0.7 vs 1.45±0.6, respectively; n.s., P = 0.47). CTDIvol and DLP in the prospective ECG-gating group were 19.7±4.2 mGy and 273.9±64.9 mGy*cm, which were significantly lower (P < 0.05) than the values in the retrospective ECG-gating group (38.3±7.7 mGy and 665.7±180 mGy*cm). The calculated effective dose for prospective CT angiography was 59% lower than that for retrospective CT angiography (3.8±0.9 mSv vs 9.3±2.5 mSv; P < 0.001).
Conclusion: Prospective ECG-gating can substantially reduce radiation dose during CT angiography without a decrease in image quality.

Friday, March 4, 12:30 - 13:30, Room N/O
MS SS 2: Session 2
Moderators: A.K. Dixon; Cambridge/UK; B.J. Hillmann; Charlottesville, VA/US

H-6 To be or not to be a radiologist - radiology through the eyes of medical students
G. Ungureanu, V. Barbus; Cluj Napoca/RO
Purpose: Choosing a medical specialty has always been a challenge for any student.
Our purpose was to determine the factors which influence students in making this choice and the degree to which they consider radiology as a possible career.
Methods and Materials: Cross-sectional study of 849 students attending the Faculty of Medicine in Cluj, conducted by applying a questionnaire containing 27 closed-ended questions. The questionnaire was tailored for this study and addressed the following issues: aspects considered when choosing a medical specialty; perspective on the ideal means of studying radiology; and radiology as a career. There were three groups: G1 - students who had already studied radiology (525); G2 - students questioned both before and after taking the radiology classes (120); G3 - students who had not studied radiology (204).
Results: In G1, 16% of the respondents considered radiology one of their top three career options. In G2, 6% added radiology among their top three choices (6% vs. 12%). 84% of the respondents considered passion the main criterion, while 43% referred to the financial aspect as being very important in choosing a career. 72% of respondents considered the teacher a great influence on their specialty choice. Speech skills (66%) and fairness in grading (50%) were the most important traits of a professor. 67% of students in G1 and G3 considered clinical work more instructive, while 33% considered theoretical activity more important for learning radiology. Regarding reasons for choosing radiology as a career, 40% of G1 and G3 referred to the workload/wage ratio, and 37% considered the importance of the specialty. Insufficient interaction with patients deters them from choosing radiology (66%).
Conclusion: Adequate teaching skills and work in a clinic could increase the attractiveness of radiology as a medical specialty.

H-7 Improving the clinical supervision of undergraduate students in CT
E. Zaloni, V. Diakatou; Athens/GR
Purpose: The purpose of this study was to measure student perceptions of the clinical learning environment in CT.
Methods and Materials: A Likert-scale questionnaire was developed, including 33 forced-choice items subdivided into five attitude scales: communication and feedback, learning opportunities, learning support and assistance, department atmosphere, and supervisory relationship. The questionnaire was administered to 45 radiography students who attended their 11-week clinical education in CT during the winter semester of 2010 in 12 hospitals.
Results: Forty-three of the 45 students (95.5%) filled out the questionnaire. The mean age of the group was 22 years. The students generally rated the learning environment as "positive", with the means of individual questions ranging between 3.23 and 4.67, and the means of the scales ranging between 3.60 and 4.07 (on a 1-to-5 scale, with higher values corresponding to more positive learning situations). The between-hospital differences in the students' responses were significant for all five scales (p < .01), while the differences in students' responses in relation to the percentage of supervision were statistically significant only for the supervisory relationship scale (p = .000).
Conclusion: The differences between the hospitals were important, and the questionnaire, once refined, can be used to audit the clinical supervision of students in CT. In half of the hospitals, only 20% of students were assigned a clinical instructor, whilst 80% of students were assigned a staff resource. This study indicates the need to develop a new model for clinical practice based on the introduction of mentoring.

H-8 Parents' views of paediatric care concerning their child's conventional x-ray examination
C.G. Stanica; Arlöv/SE
Purpose: Studies indicate that conventional x-ray examinations can be stressful to children and have a great impact upon them. One coping strategy that children use is to have their parents nearby.
Parents can also experience stress during their child's hospital visit, and it can be difficult for them to watch their child undergo treatments or examinations. Worried parents lead to worried children, and it is therefore important to support parents in their child's health care. The aim of this survey is to measure the degree of parent satisfaction/dissatisfaction with their child's care in connection with a conventional x-ray examination.
Methods and Materials: A questionnaire survey was used. The participants were consecutively selected from patients referred to the paediatric x-ray department at Lund and Malmö University Hospital, Sweden. An adjusted version of the questionnaire "Healthcare Satisfaction Module specific for Hematology/Oncology" (Varni, Quiggins, Ayala 2000) was used. The modified version of the questionnaire contains 20 questions divided into 6 domains (information, communication, emotional needs, technical skills, inclusion of family, general satisfaction).
Results and Conclusion: We are in the process of writing this report and have so far collected half of our data. The report will be completed and examined in January 2011 at Lund University, Sweden.

H-9 Radiology as a career: what do students and interns think
N.M. Hughes; Dublin/IE
Purpose: The purpose of this study is to examine the factors that influence students and interns when considering the possibility of radiology as a career, and how clinical rotations through the department impact their perception of the specialty.
Methods and Materials: A cross-sectional, attitudinal survey was distributed to 4th-year medical students, final-year medical students and interns working in 2 teaching hospitals affiliated with the university. 99 of 417 surveys (23.7%) were returned: 24 4th-year students, 70 final-year students and 5 interns completed the survey.
The survey asked students/interns if they would consider a career in radiology and to rate the most attractive and least appealing areas of the specialty. Respondents were also asked about their clinical rotation through the department and whether it influenced their perception of the specialty. Results: 14% of respondents said they would consider a career in radiology, rating lifestyle, working hours and a more flexible working schedule highly. Forty-three of the 78 students who had completed a rotation in the department listed interventional radiology as the most appealing subspecialty because of its greater patient interaction and therapeutic element. 53% of respondents would not consider radiology as a career, largely because of a perceived lack of patient contact and concern over their status amongst colleagues; the remaining 33% were undecided. 78% reported an improved perception of radiology after completing a week-long rotation and indicated that they had a greater understanding of the specialty as a result. Conclusion: This study highlights the importance of a clinical rotation through the radiology department for medical students, as it improved the perception of the specialty in the majority of respondents. Scientific Sessions for Medical Students H-12 Sono4You: ultrasound tutorials for students by students A. Sachs; Vienna/AT Purpose: Medical students at the Medical University of Vienna lack education in basic ultrasound skills such as standard techniques and patterns. Furthermore, there are few opportunities to practice these skills during their studies. Therefore, we established the project "Sono4You", which provides students with the possibility to practice sonographic skills on a regular basis. Methods and Materials: Ultrasound courses, divided between "Ultrasound Basics and Standard Patterns" and "Practicing Tutorials", are held by experienced students (tutors) who are in advanced stages of their studies (4th to 6th year).
They have all been educated by radiologists and cardiologists. Students in their 3rd and 4th years are the target audience. Tutorials are provided for groups of six participants, the duration ranging between two and four hours depending on the experience of the group as well as the availability and dedication of the tutor. It has never been the intention to pay these tutors: they all provide the tutorials on a voluntary basis, solely to educate themselves and other colleagues. Participants do not receive any confirmation of participation. Results: Beginning in 2007, the project has averaged 30 to 40 courses per semester, thereby giving a large number of students an additional opportunity to practice ultrasound techniques regardless of their year of studies. At present, additional courses (head and neck region, basic echocardiography) are provided by these tutors. Conclusion: Since 2007 the range of courses has grown. At the moment, courses are available on a regular basis in upper abdominal sonography, echocardiography and sonography of the head and neck region. Participants gain the ability to understand anatomical structures in three dimensions, which improves the educational effect of dissection courses, medical clerkships, and practical work in the 3rd part of medical studies. H-13 A medical student's journey in research and radiology S. Oberoi; Charleston, SC/US Purpose: Even before I began a career in medicine, my first encounter with the foreign milieu of research had implanted the notion of scientific inquiry in my mind. My commitment to the study of medicine has only reinforced my conviction to become a lifelong scientist. Research is the driving force of technological progress in radiology. It demands persistence, dedication and fortitude, the same defining characteristics of a physician.
I believe that we would not be effective physicians without considering the important medical breakthroughs that research offers, and I am resolved to a lifetime of contribution to this growing body of knowledge. With a field as dynamic as radiology, it is challenging to predict what the future of diagnostic radiology will entail. By being at the forefront of scientific discovery now, I hope to be equipped with the skills necessary to tackle the problems radiology will face. Results: I acknowledge that such research is valid only when carried out in the interest of the patient. Medical knowledge is best used to provide a connection between clinical research and treatment, while always keeping the needs of the patient in mind. Conclusion: The process of scientific inquiry, the active search for knowledge, is a humbling one. The journey may sometimes manifest in the form of an epiphany but, more often than not, will send me back to the white board in search of answers. Despite these frustrations, I am driven by my thirst for the unknown. H-14 A day in the life of a radiology resident M. Maqbool; Dublin/IE Purpose: Radiology is an exciting and evolving specialty that presents unique challenges to the radiologist-in-training in terms of gaining knowledge, acquiring skills, and communicating effectively within the department, with referring physicians, and with staff and patients. As with all training posts in medicine and surgery, personal education must coincide with maintaining workflow, which for the radiologist necessitates effective multitasking in the form of manipulating multiple computer workstations, using dictation software, responding to queries both by phone and in person, and looking up information relevant to the case at hand, all while serving two potentially contradictory interests: those of the patient and those of the referring physician.
The incredible diversity of, and rapid advances in, radiology mean that there is always a study, technique, presentation, or diagnosis that one doesn't know, and dedicated study must be balanced with practical hands-on experience in the reading room and in the procedure room. The radiology resident needs both confidence and humility when approaching any given case – the confidence to make the call or to capably perform the procedure, and the humility to know that something may be missed or that the procedure may not succeed. The key is to recognise one's limitations and to ask for help when appropriate, without compromising patient safety and with minimal interruption to others in the department. Conclusion: I hope to illustrate the challenges and opportunities involved in radiology training by outlining a "typical" day in the life of a radiology resident, describing the caseload, the conferences, the procedures and the interruptions that are part and parcel of learning radiology on the job while providing a vital service to patients and physicians as part of modern medical practice.

H-10 How radiology is changing: three unavoidable challenges for the future C. Messina; Milan/IT Purpose: My purpose is to discuss three relevant challenges for radiology at the present time and in the near future. Methods and Materials: Radiology is considered the most rapidly evolving specialty in medicine. However, many challenges have still to be faced. Results: First, "personalised medicine" promises to revolutionise the approach to disease. The link between genetic and clinical profiles will allow the concept of "patient" to be abandoned for the newer one of "individual": the right diagnosis and treatment at the right time could be offered to each person. In this scenario, radiology should play an important role by introducing personalised screening programmes that take into account the individual risk of disease. A first example is the use of MRI for screening women with a high genetic-familial predisposition to breast cancer. Second, radiology was traditionally based on image acquisition and visual interpretation. Although qualitative analysis remains essential, we increasingly need numerical data to work on. In other words, radiologists should be able to measure, and give interpretations of, parameters based on numeric quantification. This would allow objective thresholds to be established for deciding whether a parameter is normal, producing results as little affected as possible by human and instrumental variability. The third challenge is intimately connected to the previous ones: evidence-based radiology is a clinical practice based on the critical evaluation of the results obtained from scientific research. Radiologists should demonstrate that new imaging discoveries provide a significant gain in patient outcome, rather than merely an increased ability to see more and better. Conclusion: By winning these challenges, radiology will produce enormous benefits in terms of patient care and cost reduction.

Saturday, March 5, 12:30 - 13:30 Room L/M MS SS 3 Session 3 Moderator: A.P. Toms; Norwich/UK

H-11 Changing radiology teaching within the undergraduate curriculum in a single centre within the United Kingdom J.-R. Angus; Dundee/UK Purpose: Radiology can help to develop students' knowledge of human morphology, physiology and disease processes during early undergraduate (UG) education, and further enhances understanding of patient management during clinical placements. The advent of digital imaging using picture archive and communication systems (PACS) and the adoption of problem-based learning (PBL) allow radiology teaching to be conducted outwith the radiology department within an integrated teaching model. However, radiology remains under-represented within the UG curriculum, despite its integral role in modern patient management. The Royal College of Radiologists (RCR), United Kingdom, has highlighted the need for change in radiology teaching within medical school curricula. The RCR has proposed a core syllabus, which comprises basic radiograph interpretation, an understanding of the role of imaging within clinical investigation, knowledge of the Ionizing Radiation (Medical Exposure) Regulations (2000) and the development of communication skills to better prepare patients for investigation. This would enable students to understand the strengths and limitations of key investigations and to adopt appropriate attitudes towards justifiable referrals. Investment in teaching radiology at an early stage may benefit both the patient and the hospital, owing to a reduction in inappropriate referrals that are costly and expose the patient to unnecessary ionizing radiation. Conclusion: The relative lack of formal radiology teaching content in the University of Dundee UG medical curriculum has prompted the setting up of a curriculum development group within the National Health Service (NHS) Tayside Radiology Department, the purpose being to review the curriculum framework proposed by the RCR with a view to incorporating it into the University of Dundee medical UG curriculum. This paper describes the extent of, and the challenges posed in, developing such a curriculum and integrating radiology teaching in Dundee.

H-15 Problems in undergraduate education in Brazil D.B.D.Z. Dalke; Curitiba/BR Purpose: Higher education in radiology in Brazil is not so recent; however, its structure still presents serious problems. The profession of radiology technologist has not yet been regulated: graduates at this level must compete for posts paid at technician-level wages, and the council representing the profession is also at the technical level. Methods and Materials: Some universities offer a recognised technologist course in radiology that is highly regarded by the MEC (the Brazilian Ministry of Education); however, many universities offer low-quality courses without even MEC recognition, which worsens the situation in the labour market: with so many such professionals, higher education has come to appear worth less than technical training, justifying the salaries graduates end up receiving. Results: This situation urgently needs to change; it is extremely important that professionals working with ionizing radiation have a top-quality course covering both the theory and practice of physics and medicine. It is unacceptable for a professional in this area not to know how x-rays are produced, and unfortunately this is the reality in Brazil. Conclusion: A degree course is the best solution to these problems; however, the MEC must do its job and close the universities that are unprepared to teach this profession, and the government must regulate the profession of radiology technologist with a minimum wage appropriate for professionals to play their role properly, so that they do not end up abandoning the profession and leaving behind this very important tool. But it must be used cautiously in the hands of professionals with less knowledge.

Saturday, March 5, 12:30 - 13:30 Room N/O MS SS 4 Session 4 Moderators: A.K. Dixon; Cambridge/UK B.J. Hillmann; Charlottesville, VA/US

H-16 Radiology: prejudices and hidden fascination A. Jost; Sulzbach/DE Purpose: A day in the life of a radiologist is presented. The aim is to evaluate the prejudices against radiology and to illuminate its fascination. Methods and Materials: In the morning, the radiologist steps in front of the surgeons, assessing about 100 pictures, each in a few seconds. Afterwards, he presents images to the internists. While he mentions differential diagnoses, someone provides clinical information that dispels two thirds of them. The radiologist's workplace can be at CT, MRI, angiography or x-ray; there he interprets pictures from all parts of the body. When there is an emergency, the radiologist quickly assesses the trauma scan while the surgeons wait impatiently. In case of ongoing bleeding, the radiologist coils the vessel to save the patient's life. Because of the emergency, one patient is waiting at CT with a lumbar bone fracture for his radiofrequency kyphoplasty, and another at angiography for chemoembolisation. Later, the radiologist teaches young doctors new radiologic methods such as PET-MRI or molecular imaging. Results: The prejudice that radiologists have no contact with patients is refuted. They are no longer only service providers: though their diagnoses are of high importance, radiologists also treat patients themselves. Radiology is very interdisciplinary; every part of the human body is assessed. Additionally, radiology is highly innovative, and new developments are implemented quickly compared with other departments. Research is multifaceted, covering all the different methods and all possible diseases in all parts of the body. Conclusion: Radiology has the reputation of being a department where images are looked at instead of patients, and radiologists are seen only as service providers for the other departments; but in fact, radiologists treat patients themselves and develop new technologies to improve insight into the human body.

H-17 The x-choice E. Zagvozdkin; Moscow/RU Purpose: Why radiology? For me, it's no question, and I'll explain why. The best radiologists are knowledgeable in a vast swath of the medical terrain, with some degree of expertise in practically all fields of medicine. It's remarkable to be trained in such a diversity of pathology. At the same time, a good radiologist is an indispensable assistant to the treating physician. It's easy to lose sight of the technical achievements embodied by an ultrasound or MR scanner, yet almost everything that radiologists do depends on some of the most sophisticated scientific developments on the planet. Radiologic technology is amazing, approaching the realm of science fiction, and there is tremendous potential for growth in the field. Various visualisation techniques are used for many medical procedures, and this increases the efficiency and safety of treatment. Molecular imaging - the ability to produce maps of physiologic activity at the protein and biochemical level - is fast becoming a reality. Combined with advances in biotechnology and nanotechnology, radiology will remain at the forefront of medical innovation. There is a great variety of opportunities available in radiology. For example, you can do teleradiology from halfway around the world or get a consultation from your colleagues at practically any time. Fellowship opportunities in radiology are plentiful, and everyone can find ones of interest. Conclusion: Radiology is a profession that demands commitment and a high level of skill and accuracy. I think that is a fair price for the opportunity to examine the human organism. It is difficult to imagine modern medicine without radiology. This science is a very interesting and truly absorbing part of medicine. This is my area of interest, my choice.

H-18 Magnetic resonance imaging education "down-to-earth" L.I. Lanczi; Debrecen/HU Purpose: Students of medicine and radiography often find it strikingly hard to interpret MR images, not merely because of their limited experience in radiology but also because understanding the images calls for comprehension of the basics of the image acquisition technique. The necessity of profound imaging physics knowledge becomes evident when students face state-of-the-art MR scans such as diffusion-weighted imaging or fMRI. To address this problem, more interactive tools and practice-oriented teaching sessions are required. We believe that the optimal education guides students through the entire acquisition procedure and also allows trial-and-error interaction with the MRI device. Teachers confront major problems during practical MRI education: the equipment is expensive and stationary, and interaction is limited. Recent developments allow manufacturers to produce MRI devices that lack large superconductors or permanent magnets; the Earth's weak but homogeneous magnetic field can also be exploited to perform basic MR experiments. A portable, Earth-field MRI (efMRI) device is available in our institution; it is envisaged that hands-on MRI practicals will be implemented into education, regardless of the training programme: clinical radiology or radiography. In our lecture we would like to demonstrate the feasibility of our efMRI device in undergraduate education. Conclusion: Magnetic resonance imaging (MRI) – a flagship among state-of-the-art medical imaging techniques – has found widespread use in several clinical subspecialties during the last decades. There is compelling evidence that integrating hands-on MRI experience into undergraduate education is beneficial. The inexpensive and ingenious efMRI is optimal for studying the MR phenomenon: from the setup and optimisation tasks to the first FID signal or 3D images, students can gain insight into imaging physics and acquisition techniques.

H-19 Foetal ultrasound imaging: form of art B. Rancane; Riga/LV Purpose: The purpose of this study is to demonstrate the duality of the work of a radiologist: a good specialist is not only a medical doctor but also possesses the qualities of an artist. The research highlights the similarities between fetal ultrasound imaging and digital photography.
Methods and Materials: The work of an obstetric sonographer and the work of a children's photographer were chosen for the research; accordingly, fetal ultrasonography was compared with digital photography. The comparison was based on the following parameters: a) reason for taking images, b) way of creating images, c) equipment used, d) image editing, e) image analysis and f) criteria for a successful image. The information was collected through the researcher's personal experience and empirical knowledge. Results: The results showed that 5 out of 6 parameters matched, demonstrating the similarity between fetal sonography and digital photography. Conclusion: Both the obstetric sonographer and the children's photographer aim to create good-quality real-time images of their models, who are actively moving in their own environment. Both specialists have to find the best angle and wait for the perfect moment to take a successful image. Both use computer software for editing the images, adjusting the colours, contrast and brightness. In both cases, evaluation and analysis of the image are present, looking for any disharmony. Even though the reasons these two professionals take pictures differ, they certainly share the same artistic way of thinking: both are looking for the unusual, and both need talent and certain characteristics to succeed in their profession. S-20 Radiology: a holistic medicine beyond medicine M. Petrini; San Donato Milanese/IT Purpose: My purpose is to show how radiology can be a holistic approach to medicine, going beyond medicine. Methods and Materials: A patient requiring medical care may find today's medicine fragmented: specialties and subspecialties may seem, and sometimes are, unrelated. Highly specialised doctors may get bored with their job: they have to do the same procedures day after day. I think it is intrinsically unavoidable that modern medicine goes along this way in order to reach the best clinical performance.
However, medical doctors should have a holistic view of medicine, and radiology is a way to achieve that: every image shows a patient, not an organ. Over the years, radiology has changed and has been divided into subspecialties. Nevertheless, radiologists must keep a general view: differential diagnosis and incidental findings are day-by-day challenges. Conversely, radiological scientific research allows an ever deeper insight into specific preclinical or clinical fields. Results: Many people wrongly think that radiologists spend their time looking at x-ray films to detect bone fractures or pulmonary infections. This may have been true many years ago, but today the large spectrum of imaging modalities has dramatically changed the scenario. Modern radiology has won the battle of seeing inside the body, using as little ionizing radiation as possible to investigate both morphology and function. Moreover, interventional radiology permits unique ways of treating diseases using minimally invasive procedures. But radiology can go beyond medicine: fMRI is capable of monitoring where our behaviours and intellectual capacities are located within the brain, a kind of "soul imaging". Conclusion: In conclusion, for these reasons, I really want to be a radiologist.

work_jlrfqsor3vc5jniyvhweq34wo4 ---- EXTENDED REPORT The impact of the Health Technology Board for Scotland's grading model on referrals to ophthalmology services S Philip, L M Cowie, J A Olson. See end of article for authors' affiliations.
Correspondence to: Dr John Olson, Grampian Diabetes Retinal Screening Programme, Woolmanhill, Aberdeen AB25 1LD, UK. Accepted for publication 11 November 2004. Br J Ophthalmol 2005;89:891–896. doi: 10.1136/bjo.2004.051334. Aim: To ascertain the impact of the Health Technology Board for Scotland's grading model on referrals to ophthalmology services. Methods: An analysis was performed of the screening outcomes of 5575 consecutive patients, who were screened by the Grampian Diabetic Retinopathy Screening Programme between March and September 2003 according to the recommendations of the Health Technology Board and the Scottish Diabetic Retinopathy Grading Scheme 2003. Results: 3066 (55%) were male. The median age was 65 years. 5.4% of patients were passed on to the level 3 grader and 3.4% were finally referred to ophthalmology services; 2.3% required re-screening in 6 months. 85% were screened without mydriasis; 11.9% had ungradeable images despite a staged mydriasis protocol. Time to complete grading was 32 days (22–45). Conclusion: The impact of the Health Technology Board for Scotland's recommendations on referrals to ophthalmology services is modest and should be containable within existing resources.
Diabetic retinopathy is still the commonest cause of blindness in people of working age in the United Kingdom, but there is a long latent period between the onset of diabetic retinopathy and its progression to sight threatening eye disease and blindness.1 2 The Diabetic Retinopathy Study and the Early Treatment Diabetic Retinopathy Study showed that photocoagulation can reduce the risk of severe visual loss by 50% or more.3 4 Systematic screening for retinopathy among patients with diabetes has been shown to be cost effective.5–7 All nations within the United Kingdom are embarking on national screening programmes for diabetic retinopathy.8 9 The most effective and pragmatic model for diabetic retinopathy screening has yet to be ascertained.9–11 The Scottish Executive Health Department has decided that in Scotland the national diabetic retinopathy screening programme should be implemented according to the recommendations of the Health Technology Board for Scotland.12 The Health Technology Board for Scotland's model attempted to optimise "clinical effectiveness and cost effectiveness while respecting patient preference." After applying a rigorous health technology assessment process, it recommended a digital photography based screening programme based on a single 45 degree disc/macula field with a staged mydriasis protocol.12 The Health Technology Board for Scotland recommended a three level grading process (fig 1) to enable screening programmes to become quickly established while still protecting patients. The Scottish Executive Health Department's Diabetic Retinopathy Screening Implementation Group predicted that the impact of this model on referrals to ophthalmology services would be modest, based on early experience from Grampian, Lanarkshire and Tayside.8 The aim of this study was to assess whether or not this prediction was accurate. METHODS Study population The Grampian Diabetes Retinal Screening Programme commenced in April 2002.
Grampian's population of 525 859 is served by 89 general practices. All patients over the age of 10 years with diabetes mellitus were referred to the retinal screening programme by their general practitioner. Patients could opt out of the screening programme only if they were attending ophthalmology services on a regular basis and the consultant ophthalmologist was willing to continue screening for diabetic retinopathy. In this scenario, the hospital patient administration system was used to confirm the presence of an ophthalmology appointment and whether or not the patient attended a specialist medical retina clinic. The retinal screening programme had one fixed non-mydriatic 45 degree retinal Canon CR5–45NM camera (Canon Inc, Medical Equipment Business Group, Kanagawa, Japan) based at the Diabetes Centre, Woolmanhill Hospital, Aberdeen, and two mobile Canon CR6–45NM cameras (Canon Inc, Medical Equipment Business Group, Kanagawa, Japan) housed in mobile vans. All cameras were attached to high resolution (2160 × 1440 pixels) Canon D30 digital cameras. At the time of the study there were 15 700 patients on the register. Grading and outcome data on all 5575 patients who underwent screening between March 2003 and September 2003 were collated and analysed. Staffing The screening programme employed three level 1 graders/retinal photographers, two part-time level 2 graders, and one part-time level 3 grader during the period of the study. Whole time equivalents were 1.0 for level 1 grading, 0.1 for level 2 grading, 0.1 for level 3 grading, and 3.0 for photography. The level 1 graders/retinal photographers had no previous experience in retinal screening before recruitment in January 2003 and underwent a 5 day intensive practical training programme run over 2 weeks. The level 2 graders, all of whom had previous experience in retinal screening, undertook the grading component of the same training programme. The level 3 grader was a consultant in medical ophthalmology.
Screening examination Patients had their best visual acuity, with pinhole correction if required, measured by Bailey-Lovie logMAR charts.13 Then a single 45˚ disc/macular field digital image was taken for each eye using a high resolution non-mydriatic digital fundus camera in a darkened room. If the pupils failed physiologically to dilate sufficiently to enable an acceptable image to be obtained, the photograph was repeated at the same visit following mydriasis with tropicamide 1%. After the grading process, those patients identified as having ungradeable images were invited to attend an additional appointment at the Diabetes Centre, Woolmanhill Hospital, Aberdeen. There, slit lamp examination, following mydriasis with tropicamide 1%, was undertaken by a level 2 grader.

Figure 1 Outline of the Health Technology Board for Scotland grading model. Level 1 grader: all pictures graded for all features of retinopathy; software calculates the grade of retinopathy. R0: recall in 12 months; R1, R2, R3, R4, M1, M2: pass to level 2 grader. Level 2 grader: pictures regraded for all features of retinopathy; software calculates the grade. R0, R1: recall in 12 months; R2, M1: recall in 6 months; R6: refer for slit lamp examination; M2, R3, R4: pass to level 3 grader. Level 3 grader: pictures regraded for all features of retinopathy by an ophthalmologist; software calculates the grade. R0, R1: recall in 12 months; R2, M1: recall in 6 months; R6: refer for slit lamp examination; M2, R3, R4: refer to ophthalmology.

Br J Ophthalmol: first published as 10.1136/bjo.2004.051334 on 17 June 2005. Downloaded from http://bjo.bmj.com/ on April 5, 2021 by guest. Protected by copyright.
Table 1 Levels of retinopathy according to the Scottish Diabetic Retinopathy Grading Scheme 2003. R0: no diabetic retinopathy anywhere; outcome: re-screen in 12 months. R1 (mild): mild background diabetic retinopathy (BDR), ie, at least one dot haemorrhage or microaneurysm with or without hard exudates; outcome: re-screen in 12 months. R2 (moderate): moderate BDR, ie, four or more blot haemorrhages (ie, ≥AH* standard photograph 2a) in one hemifield only (inferior and superior hemifields are delineated by a line passing through the centre of the fovea and the optic disc); outcome: re-screen in 6 months (or refer to ophthalmology if this is not feasible). R3 (severe): severe BDR, ie, any of the following features: four or more blot haemorrhages (ie, ≥AH* standard photograph 2a) in both inferior and superior hemifields; venous beading (ie, ≥AH* standard photograph 6a); IRMA (ie, ≥AH* standard photograph 8a); outcome: refer to ophthalmology. R4 (proliferative): proliferative diabetic retinopathy (PDR), ie, any of the following features: new vessels; vitreous haemorrhage; outcome: refer to ophthalmology. R5 (enucleated): enucleated eye; outcome: re-screen the other eye in 12 months. R6: not adequately visualised; outcome: technical failure. AH*, Airlie House standard photographs.14

Grading Commercial software (Digital Healthcare, UK) was used for image grading and report generation. All graders identified individual features of retinopathy on the images. The retinopathy grade was automatically calculated according to the Scottish Diabetic Retinopathy Grading System 2003.8 This was a revised version of the grading scheme recommended by the Health Technology Board for Scotland (tables 1–3).
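Taken together, the grade-to-outcome rules in tables 1 and 2 and the three-level cascade of figure 1 amount to a small decision procedure. The Python sketch below is purely illustrative: it is not the commercial Digital Healthcare software the programme actually used, and the function names and string encoding of the grades are assumptions; only the routing and outcome rules are taken from the text.

```python
# Illustrative sketch of the Scottish Diabetic Retinopathy Grading Scheme 2003
# outcomes and the three-level grading cascade described in the text. This is
# NOT the commercial Digital Healthcare software used by the programme; the
# names and grade encoding ("R0".."R6", "M1", "M2") are assumptions.

RESCREEN_12 = "re-screen in 12 months"
RESCREEN_6 = "re-screen in 6 months"
REFER = "refer to ophthalmology"
SLIT_LAMP = "refer for slit lamp examination"

# Outcome a grader may finalise for a given grade (tables 1 and 2, figure 1).
OUTCOME = {
    "R0": RESCREEN_12, "R1": RESCREEN_12, "R5": RESCREEN_12,
    "R2": RESCREEN_6, "M1": RESCREEN_6,
    "R3": REFER, "R4": REFER, "M2": REFER,
    "R6": SLIT_LAMP,  # technical failure: image not adequately visualised
}

def triage(grade_l1, grade_l2=None, grade_l3=None):
    """Route one patient through the level 1 -> level 2 -> level 3 cascade.

    Each argument is the grade assigned at that level (None if that level
    was never reached); the grade from the highest level reached gives the
    final result, mirroring the analysis rule stated in the paper.
    """
    # Level 1 finalises only R0; any retinopathy, maculopathy or ungradeable
    # image is passed on to a level 2 grader.
    if grade_l1 == "R0":
        return OUTCOME["R0"]
    # Level 2 finalises everything except referable disease (M2, R3, R4),
    # which goes to the level 3 grader (a consultant ophthalmologist).
    if grade_l2 not in ("M2", "R3", "R4"):
        return OUTCOME[grade_l2]
    # Level 3 confirms or refutes the need for referral to ophthalmology.
    return OUTCOME[grade_l3]
```

For example, a patient graded R2 at levels 1 and 2 is recalled in 6 months, an M2 grade confirmed at level 3 leads to referral, and a level 3 down-grade to R1 keeps the patient in the screening programme, matching the "confirm or refute" role of the level 3 grader described in the text.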
Level 1 graders graded all images first for image quality and then for retinopathy according to the Scottish Diabetic Retinopathy Grading System 2003 (tables 1–3). Images with any retinopathy, or those of insufficient quality for grading (technical failure), were passed on to a level 2 grader. These images were then re-graded, and those deemed to have features of referable retinopathy were passed on to a level 3 grader, a consultant in medical ophthalmology. Both level 2 and level 3 graders could assign a final grade to ungradeable images (technical failures). Level 1 and level 2 graders also passed on patients with other significant retinal findings, such as retinal vein occlusions and suspicious optic discs. The level 3 grader made the final assessment, confirming or refuting the need for referral to ophthalmology. The level 3 grader also reviewed any images with abnormal features of concern not related to diabetic retinopathy. Patients with gradeable images stayed within the screening programme unless referred by the level 3 grader to ophthalmology. The final arbiter of referral for patients with ungradeable images was the slit lamp examiner. Patients with coincidental findings of concern were referred to the primary care clinic at the ophthalmology department.

Statistical analysis
Data were analysed using the Statistical Package for the Social Sciences (version 11.5.0, SPSS, Chicago, IL, USA). Descriptive data are expressed as mean (SD) or median with 25th and 75th quartiles. The proportions of patients with different outcomes and different grades of retinopathy were calculated with 95% confidence intervals. If a patient was assessed by more than one level of grader, the grade assigned by the highest level of grader was taken as the final result.

RESULTS
Patient characteristics
Of the 5575 patients, 3066 (55%) were male. The median age of patients screened was 65 (54–73) years. Eighty-five per cent of the patients were over the age of 45 and 48.2% were over 65.
These demographics were almost identical to those of the Scottish Diabetes Survey 2002.15 In all, 4742 (85%) patients had screening without mydriasis. Patients whose images were classed as technical failures, and who were then invited for slit lamp examination, were older (mean 74.3 (SD 11.2) years) than those with gradeable images (mean 60.5 (15.5) years, p<0.01). Thirteen patients (0.2%) had only one functioning eye.

Grading process and screening outcomes
All 5575 patients had their images reviewed by the level 1 grader. Figure 2 shows the number and proportion of images passed on to the level 2 and level 3 graders. Patients assigned their final screening outcome at each grading level are outlined in tables 4 and 5. Fifty-seven per cent of patients were assigned their final grade by the level 1 grader.

Table 2 Levels of maculopathy according to the Scottish Diabetic Retinopathy Grading Scheme 2003

M1 (observable): Lesions within a radius of >1 but ≤2 disc diameters of the centre of the fovea: any hard exudates. Outcome: re-screen in 6 months (or refer to ophthalmology if this is not feasible).
M2 (referable): Lesions within a radius of ≤1 disc diameter of the centre of the fovea: any blot haemorrhages; any hard exudates. Outcome: refer to ophthalmology.

Table 3 Coincidental findings according to the Scottish Diabetic Retinopathy Grading Scheme 2003

Photocoagulation: laser photocoagulation scars present. Outcome: not applicable.
Other: other non-diabetic lesion present (pigmented lesion (naevus), age related macular degeneration, drusen maculopathy, myelinated nerve fibres, asteroid hyalosis, retinal vein thrombosis). Outcome: refer according to local guidelines.

Figure 2 The grading level at which images were assigned the final grade: level 1, 3149 (57%); level 2, 2124 (38%); level 3, 302 (5%).
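The maculopathy criteria in table 2 reduce to a distance rule around the fovea, which can be sketched as below. The lesion encoding and the "M0" label for no maculopathy are illustrative assumptions, not part of the published scheme.

```python
# Sketch of the table 2 maculopathy rules; distances are in disc diameters
# from the centre of the fovea. The tuple encoding and the "M0" placeholder
# grade are assumptions made for this illustration.

def maculopathy_grade(lesions):
    """lesions: iterable of (kind, distance_dd) tuples, where kind is
    'hard_exudate' or 'blot_haemorrhage'."""
    grade = "M0"  # placeholder: no maculopathy
    for kind, distance_dd in lesions:
        # M2 (referable): any blot haemorrhage or hard exudate within 1 DD
        if distance_dd <= 1 and kind in ("hard_exudate", "blot_haemorrhage"):
            return "M2"  # refer to ophthalmology (most severe, return at once)
        # M1 (observable): hard exudates between 1 and 2 DD
        if kind == "hard_exudate" and 1 < distance_dd <= 2:
            grade = "M1"  # re-screen in 6 months
    return grade
```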
The median time for complete grading was 32 days (22–45). The main delay in grading appeared to occur at the level 1 stage (median 28 days (21–40)). The level 1 grader passed on images of 2426 (43.5%) patients to the level 2 grader. Only 302 (5.4%) were subsequently passed on to the level 3 grader, a consultant in medical ophthalmology, for final assessment. At final assessment, the level 3 grader agreed that 190 patients should be referred to ophthalmology services, including five patients who had significant non-diabetes pathology (retinal vein occlusions, two; glaucoma, two; large naevus, one). However, for 112 patients the level 3 grader disagreed, recommending that 21 should be re-screened in 6 months, 33 should be re-screened in 12 months, 46 did not have significant non-diabetes pathology, and 12 were ungradeable, requiring slit lamp examination. Only 331 (5.9%) patients passed on by the level 1 grader as having any retinopathy were finally downgraded by a higher level grader as having no retinopathy. After final grading, 3485 patients (62.5%) had no retinopathy, while 18 patients (0.3%) had proliferative retinopathy (table 5). The most frequent cause for referral to ophthalmology services was referable maculopathy (131 patients, 2.3%).

DISCUSSION
This study provides outcome data on the impact of the Health Technology Board for Scotland's grading model on referrals to ophthalmology from a primary care based diabetic retinopathy screening programme using digital photography that meets the recommended resolution agreed by both the Health Technology Board for Scotland and the National Screening Committee for England.9 12 The Health Technology Board for Scotland's grading model appears to be effective in triaging patients needing review by the third level grader.
Only 302 (5.4%) patients' images needed review by the third level grader and, of those, 190 (3.4%) were finally referred to ophthalmology services. Only 10% of the images passed to the level 3 grader were downgraded for review in 12 months. Most errors were caused by the misclassification of referable maculopathy, such as misjudging the position of blot haemorrhages or exudates, incorrectly classifying drusen as exudates, or mistaking dot haemorrhages for blot haemorrhages. A major difference between the recommendations for Scotland and those for England is that in Scotland, patients with ungradeable images requiring slit lamp examination are to be contained within the screening programme, whereas in England they are to be referred to ophthalmology services. In our study 661 patients (11.9%) required re-invitation for examination by slit lamp biomicroscopy. The reported rates for technical failures in other studies vary from 3.7% to 19.7%.16–21 In the study by Scanlon et al, the technical failure rate for mydriatic photography was reported as 3.7%; however, a further 15.5% of patients had only "partially assessable images."18 Comparisons of technical failure figures are difficult because of a lack of standard image quality assessment protocols; however, our high technical failure rate might be explained by the presence of two new photographers during the study period who were still in their learning phase. Patients were also in their first systematic screening cycle, when those with undiagnosed cataract or other permanent media opacities might first be encountered. In the second screening cycle these patients would be offered slit lamp examination, if cataract surgery was inappropriate, rather than digital photography, which should lead to a lower photographic technical failure rate.
Furthermore, although high resolution cameras, as recommended by the Health Technology Board for Scotland, have diagnostic advantages, their use of a higher intensity flash compared with that used by lower resolution cameras may result in increased reflections from media opacities, increasing the technical failure rate. In addition, the relative quality of the good photographs obtained by high resolution cameras is also better, thus the threshold for rejecting photographs may be higher. Though our technical failure rates are higher than others have documented, we think they are realistic for newly established screening programmes. Regardless, provision must be made for those patients with ungradeable images. In Scotland it has been recommended that this resource be provided within the screening programme; an alternative would be to set up a dedicated clinic within ophthalmology. Which is more effective has yet to be determined. Another difference between the Health Technology Board for Scotland and the National Screening Committee recommendations for England is the presence of a 6 monthly re-screening interval for borderline patients (table 4).9

Table 4 Final screening outcomes at each grading level

Re-screening in 12 months: level 1, 3149 (56.5%); level 2, 1368 (24.5%); level 3, 79 (1.4%); total 4596 (82.4%; 95% CI 81.5 to 83.4).
Re-screening in 6 months: level 1, NA; level 2, 107 (1.9%); level 3, 21 (0.4%); total 128 (2.3%; 95% CI 1.9 to 2.7).
Referral for slit lamp biomicroscopy: level 1, NA; level 2, 649 (11.6%); level 3, 12 (0.2%); total 661 (11.9%; 95% CI 11.0 to 12.7).
Referral to ophthalmology: level 1, NA; level 2, NA; level 3, 190 (3.4%); total 190 (3.4%; 95% CI 3.0 to 3.9).
NA, not applicable as these outcomes could not be assigned at this level.
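The confidence intervals in table 4 are close to what a standard normal-approximation interval for a proportion gives; the sketch below reproduces the referral-to-ophthalmology row to within rounding. The paper does not state which interval method was used, so the normal approximation here is an assumption.

```python
import math

def proportion_ci(successes, n, z=1.96):
    """95% normal-approximation confidence interval for a proportion.
    The interval method is assumed; the paper does not state which was used."""
    p = successes / n
    se = math.sqrt(p * (1 - p) / n)
    return p - z * se, p + z * se

# Referral to ophthalmology: 190 of 5575 patients (table 4 reports 3.0 to 3.9)
lo, hi = proportion_ci(190, 5575)
```

With these inputs the interval lands at roughly 2.9% to 3.9%, consistent with the published 3.0 to 3.9 allowing for rounding or a slightly different interval method.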
Table 5 Overall grades of retinopathy assigned for patients screened

R0, no diabetic retinopathy: 3485 (62.5%; 95% CI 61.2 to 63.7)
R1, mild diabetic retinopathy: 1116 (20.0%; 95% CI 18.9 to 21.0)
R2, moderate diabetic retinopathy: 17 (0.3%; 95% CI 0.2 to 0.5)
M1, observable maculopathy: 111 (2.0%; 95% CI 1.7 to 2.4)
M2, referable maculopathy: 131 (2.3%; 95% CI 2.0 to 2.8)
R3, severe diabetic retinopathy: 36 (0.6%; 95% CI 0.5 to 0.9)
R4, proliferative diabetic retinopathy: 18 (0.3%; 95% CI 0.2 to 0.5)
R6, image of a quality inadequate for grading: 661 (11.9%; 95% CI 11.0 to 12.7)

The Health Technology Board for Scotland recommended that patients with "moderate background retinopathy" or "observable maculopathy" could be re-screened at 6 months, or referred to ophthalmology services if the screening programme lacked the capacity to contain them within the programme.12 In our study 128 patients (2.3%; 95% CI 1.9 to 2.7) fell into these borderline categories and were re-screened at 6 months, thus reducing the impact of the screening programme on ophthalmology services. There are also differences with respect to classification of diabetic maculopathy. The National Screening Committee for England recommends that patients with a dot or blot haemorrhage at the macula are referred if they have a visual acuity worse than or equal to 6/12. Using the National Screening Committee for England's criteria, this would have resulted in an additional 159 (2.8%) referrals. The number of patients in our study who needed referral to ophthalmology services was lower than others have suggested and has not yet led to the requirement for any additional resources within ophthalmology services.
Using a low resolution digital mydriatic photographic screening method, Scanlon et al reported a 12.2% referral rate for patients not already under the care of an ophthalmologist.18 However, this figure included patients with technical failures in either eye, as these patients were not contained within the screening programme. In Liverpool between 1991 and 1999, using colour slide mydriatic photography, a "high resolution" technique, a referral rate of 7.1% was obtained.22 In Newcastle, using combined mydriatic Polaroid photography and mydriatic direct ophthalmoscopy, a referral rate of 4.5% was documented.23 Our lower referral rate of 3.4% may reflect the effectiveness of previous retinal screening performed at hospital diabetes clinics in Grampian or, alternatively, may reflect different population demographics compared with other studied populations. The population of Grampian is relatively affluent and predominantly white.24 25 Other ethnic groups are known to be at a higher risk of developing diabetic retinopathy and to have a higher rate of cataract formation.26–28 It is possible that higher rates of untreated cataract formation will be found in a less affluent population.29 If all patients with ungradeable pictures are to be referred to ophthalmology, and in addition there is no 6 month review group, then the National Screening Committee for England's prediction of an 8% initial referral rate to ophthalmology services seems low. Furthermore, as the National Screening Committee for England recommends a two field photographic schedule, it is likely that the technical failure rate will be higher if both photographs need to meet image quality standards. Ophthalmology time will be required to train retinal screeners to grade before any screening programme can commence. The exact amount and level of training needed is still being debated throughout the four nations.
Our experience, based on internal quality assurance by the level 3 grader, suggests that newly appointed level 1 graders with no previous experience of retinal screening can be quickly trained to accurately identify images with no features of retinopathy. Finally, grading for quality assurance may place a significant burden on ophthalmology services. National Health Service Quality Improvement Scotland has published diabetic retinopathy screening standards for Scotland that state that "the images from a minimum of 500 randomly selected patients (or all images graded if less than 500 patients) per grader per annum not otherwise referred to a third level grader are reviewed by a third level grader."30 As the Grampian Diabetes Retinal Screening Programme has five graders, this equates to 2500 patients (1250 over 6 months) per annum. The National Screening Committee for England recommends that 10% of the normal patients (348 patients in our study) should be regraded by a second level grader rather than an ophthalmologist.31 In addition, it recommends that, ideally, each grader should read a minimum of 1500 and a maximum of 4000 patients per annum. National Health Service Quality Improvement Scotland took the view that, until diabetic retinopathy screening programmes had become established and proved training programmes for retinal screeners existed, this quality assurance would have to be performed by medical retina specialists. In time, however, it is hoped that this burden can be shared by other healthcare professionals.

In conclusion, the Health Technology Board for Scotland's three level grading model appears to be effective both in triaging patients needing review by the level 3 grader and in minimising the number of unnecessary referrals to ophthalmology services. Thus, the impact of the Health Technology Board for Scotland's recommendations should be modest, enabling referrals to be contained within existing ophthalmology resources in Scotland.
ACKNOWLEDGEMENTS
The authors gratefully acknowledge the invaluable assistance of the retinal screening nurses and the administrative staff of the Grampian Diabetes Retinal Screening Programme.

Authors' affiliations
S Philip, L M Cowie, J A Olson, Grampian Diabetes Retinal Screening Programme, Woolmanhill, Aberdeen AB25 1LD, UK

REFERENCES
1 Bamashmus MA, Matlhaga B, Dutton GN. Causes of blindness and visual impairment in the west of Scotland. Eye 2004;18:257–61.
2 Evans J, Rooney C, Ashwood F, et al. Blindness and partial sight in England and Wales: April 1990-March 1991. Health Trends 1996;28:5–12.
3 Early Treatment Diabetic Retinopathy Study Research Group. Photocoagulation for diabetic macular edema. Early Treatment Diabetic Retinopathy Study report number 1. Arch Ophthalmol 1985;103:1796–806.
4 The Diabetic Retinopathy Study Research Group. Photocoagulation treatment of proliferative diabetic retinopathy. Clinical application of Diabetic Retinopathy Study (DRS) findings, DRS Report Number 8. Ophthalmology 1981;88:583–600.
5 Foulds WS, McCuish A, Barrie T, et al. Diabetic retinopathy in the west of Scotland: its detection and prevalence, and the cost-effectiveness of a proposed screening programme. Health Bull (Edinb) 1983;41:318–26.
6 Sculpher MJ, Buxton MJ, Ferguson BA, et al. Screening for diabetic retinopathy: a relative cost-effectiveness analysis of alternative modalities and strategies. Health Econ 1992;1:39–51.
7 James M, Turner DA, Broadbent DM, et al. Cost effectiveness analysis of screening for sight threatening diabetic eye disease. BMJ 2000;320:1627–31.
8 Diabetic Retinopathy Screening Implementation Group. Diabetic retinopathy screening services in Scotland: recommendations for implementation. Edinburgh: Scottish Executive, 2003:69–70.
9 Harding S, Greenwood R, Aldington S, et al. Grading and disease management in national screening for diabetic retinopathy in England and Wales. Diabet Med 2003;20:965–71.
10 Lau HC, Voo YO, Yeo KT, et al. Mass screening for diabetic retinopathy—a report on diabetic retinal screening in primary care clinics in Singapore. Singapore Med J 1995;36:510–13.
11 Peters AL, Davidson MB, Ziel FH. Cost-effective screening for diabetic retinopathy using a nonmydriatic retinal camera in a prepaid health-care setting. Diabetes Care 1993;16:1193–5.
12 Facey K, Cummins E, Macpherson K, et al. Organisation of services for diabetic retinopathy screening. Health Technology Assessment Report 1. Glasgow: Health Technology Board for Scotland, 2002.
13 Lovie-Kitchin JE. Validity and reliability of visual acuity measurements. Ophthalmic Physiol Opt 1988;8:363–70.
14 Davis MD, Norton EWD, Myers FL. The Airlie classification of diabetic retinopathy. In: Goldberg MF, Fine SL, eds. Symposium on the treatment of diabetic retinopathy. US Dept of Health, Education and Welfare, 1968:7–37.
15 Scottish Diabetes Survey Monitoring Group. Scottish Diabetes Survey 2002. Edinburgh: Scottish Executive Health Department, 2002:18–19.
16 Olson JA, Strachan FM, Hipwell JH, et al. A comparative evaluation of digital imaging, retinal photography and optometrist examination in screening for diabetic retinopathy. Diabet Med 2003;20:528–34.
17 Massin P, Erginay A, Ben Mehidi A, et al. Evaluation of a new non-mydriatic digital camera for detection of diabetic retinopathy. Diabet Med 2003;20:635–41.
18 Scanlon PH, Malhotra R, Thomas G, et al. The effectiveness of screening for diabetic retinopathy by digital imaging photography and technician ophthalmoscopy. Diabet Med 2003;20:467–74.
19 Herbert HM, Jordan K, Flanagan DW.
Is screening with digital imaging using one retinal view adequate? Eye 2003;17:497–500.
20 Boucher MC, Gresset JA, Angioi K, et al. Effectiveness and safety of screening for diabetic retinopathy with two nonmydriatic digital images compared with the seven standard stereoscopic photographic fields. Can J Ophthalmol 2003;38:557–68.
21 Agrawal A, McKibbin MA. Technical failure in photographic screening for diabetic retinopathy. Diabet Med 2003;20:777.
22 Younis N, Broadbent DM, Vora JR, et al. Prevalence of diabetic eye disease in patients entering a systematic primary care-based eye screening programme. Diabet Med 2002;19:1014–21.
23 Pandit RJ, Taylor R. Quality assurance in screening for sight-threatening diabetic retinopathy. Diabet Med 2002;19:285–91.
24 Social Disadvantage Research Centre. Scottish indices of deprivation index 2003. Edinburgh: Scottish Executive, 2003:21–28.
25 Office of Chief Statistician. Analysis of ethnicity in the 2001 census. Edinburgh: Scottish Executive, 2004:26.
26 Pardhan S, Gilchrist J, Mahomed I. Impact of age and duration on sight-threatening retinopathy in South Asians and Caucasians attending a diabetic clinic. Eye 2004;18:233–40.
27 Mather HM, Chaturvedi N, Kehely AM. Comparison of prevalence and risk factors for microalbuminuria in South Asians and Europeans with type 2 diabetes mellitus. Diabet Med 1998;15:672–7.
28 Das BN, Thompson JR, Patel R, et al. The prevalence of eye disease in Leicester: a comparison of adults of Asian and European descent. J R Soc Med 1994;87:219–22.
29 Klein BE, Klein R, Lee KE, et al. Socioeconomic and lifestyle factors and the 10-year incidence of age-related cataracts. Am J Ophthalmol 2003;136:506–12.
30 NHS Quality Improvement Scotland. Diabetic retinopathy screening clinical standards—March 2004. Edinburgh: Scottish Executive Health Department, 2004:31.
31 UK National Screening Committee. Essential elements in developing a diabetic retinopathy screening programme.
2004:43 (www.nscretinopathy.org.uk/resources/FinalWorkbook2doc, accessed 27 May 2004).

work_jp3sq72e3jevbiai4h4677ygem ----

Digital Photographic Procedure for Comprehensive Two-Dimensional Tooth Shade Analysis
Brokos Y, Stavridakis M, Krejci I. Compendium of Continuing Education in Dentistry, 2017, vol. 38, no. 8, p.
e1-e4. PMID: 28862463.

Yiannis Brokos, DDS, MSc, Dr. med. dent.; Minos Stavridakis, DDS, MSc, Dr. med. dent.; and Ivo Krejci, Prof. Dr. med. dent.

Abstract
Current commercially available restorative materials vary in their esthetic properties, depending on brand and shade. Variations are related not only to basic color parameters such as hue, chroma, and value, but also to other important properties that affect the overall esthetic restorative outcome, such as opalescence, fluorescence, translucency, and metamerism. Fluorescence and bluish opalescence, though associated with the ingredients and chemical composition of the material, may be controlled and refined by a proper layering technique, provided that pretreatment analysis has been performed with the aid of appropriate photographic techniques. Digital cameras and dental photography have long been imperative tools for clinicians in their daily practice. Traditionally, digital photography has been used for recordkeeping, documentation, presentation, and informing patients of their oral status before and after treatment. Today, evolved techniques facilitate clinicians' ability to compare the esthetic properties of restorative materials with those of natural teeth for delivery of natural-looking restorations. Moreover, documentation obtained before and after restoration may be used to inform the patient more comprehensively.

AEGIS Communications
dentalaegis.com, September 2017

Shade matching between natural teeth and composite materials is critical when creating esthetic restorations.1-6 Hue, chroma, and value are basic color parameters that influence the esthetic outcome of a tooth-colored restoration.7-13 The shade of contemporary enamel and dentin composite materials should, and most likely does, accurately mimic the shade of enamel14-17 and dentin.18,19 Unfortunately, commercially available composite materials, even of the same shade, vary in fundamental properties, such as opalescence,20-24 fluorescence,25-33 translucency,34-48 and metamerism,49,50 between different brands and shades. Clinicians should be able to determine the variations of these properties and either correct any mismatch in the course of the layering technique51-58 or at least keep a record of non-correctable differences. The application of appropriate digital photography techniques is the procedure of choice to control and document the esthetic properties of current materials. Digital cameras have undergone impressive evolution since their development in the early 2000s,59,60 incorporating emerging technologies to make photography highly accurate, simple, and economical. Digital photography61-65 has long been a necessary tool in dentistry for recording, diagnosing, communicating, presenting, informing, and documenting66,67 the oral status of the patient before and after treatment. A digital single lens reflex (DSLR) system should be considered standard equipment for dental photography. A DSLR camera body with an 80-mm to 105-mm macro lens and a ring flash or a twin flash system is the basic setup for capturing detailed extraoral and intraoral images. Most current digital camera bodies are sufficiently capable in terms of resolution, aperture, and exposure settings. ISO configuration is an important and sensitive parameter to investigate, especially for advanced fluorescence, red-orange opalescence, and translucency recording.
Macro lenses of 80 mm to 105 mm provide an ideal combination of proper magnification within a convenient working distance for intraoral close-up photography. A life-size reproduction of the photographed object is referred to as 1:1 magnification and, in frontal dental photography, frames the four maxillary incisors. Electronic flashlight is necessary for proper illumination of the dark areas within the intraoral environment. The two main types of flash systems for dental purposes are the ring flash and the twin-light flash. The light of a ring flash eliminates shadows in the oral cavity but may produce undesirable specular reflections. The illumination from a twin flash produces a soft, three-dimensional lighting effect with prominent and detailed surface morphology. Evolved capturing techniques68-70 enable clinicians to observe the physical characteristics and properties of the natural dentition. Surface details, such as macro- and micromorphology, texture, luster, gloss, enamel cracks and striations, chromatic mapping, and dentinal architecture, can be revealed and recorded with direct reflective lighting techniques using different illumination angles. In addition, photographic applications for daily clinical practice have recently been developed to address the refinement or diagnosis of material properties such as opalescence, fluorescence, translucency, and metamerism. The purpose of this article is to explore these optical properties through the eye of the lens, while demystifying and describing the respective photographic applications from a clinical and technical point of view.

Photographic Equipment
DSLR Camera, Lens, and Flash
A DSLR Canon 550D camera (Canon U.S.A., Inc., usa.canon.com) with a Canon 100-mm macro lens and a Canon MT-24EX twin-light flash was used in the photographic protocol presented.
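The relationship between reproduction ratio and framed area noted above follows directly from the sensor size: field width = sensor width / magnification. A minimal sketch, assuming the nominal 22.3 x 14.9 mm APS-C sensor dimensions typical of this class of Canon body (an assumption, not a measured value):

```python
# Object-plane field of view at a given reproduction ratio.
# Sensor dimensions are assumed (typical Canon APS-C), not measured.

SENSOR_W_MM, SENSOR_H_MM = 22.3, 14.9

def field_of_view(magnification):
    """Return (width_mm, height_mm) of the object area framed at the
    given reproduction ratio, e.g. 1.0 for 1:1, 0.5 for 1:2."""
    return SENSOR_W_MM / magnification, SENSOR_H_MM / magnification

w, h = field_of_view(1.0)   # about 22 x 15 mm: roughly the four maxillary incisors
w2, h2 = field_of_view(0.5) # 1:2 frames twice the width, for a wider smile view
```

This is why 1:1 on an APS-C sensor frames approximately the four maxillary incisors, whereas the same ratio on a full-frame sensor would frame a somewhat wider area.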
A custom-fabricated plastic o-ring, with four metallic screws at 0°, 90°, 180°, and 270°, was fixed with silicone to the original flash framework, ready to receive the interchangeable add-on filters.

Flash Plastic Diffusers
The full excitation wavelength range of the xenon flash lamps is between 300 nm and 800 nm. When the original protective plastic diffusers cover the flash lamps, the range is limited to the visible light of 400 nm to 700 nm. In the case of fluorescence, the appropriate excitation wavelength is in the ultraviolet (UV) range, more precisely at 365 nm. Both plastic diffusers were removed from the flash and attached to the interchangeable plastic framework, both to mechanically protect the flash lamps and to protect the polarizing membrane (described later) from the increased output of the lamps. For the cross-polarizing add-on filter, the framework with the diffusers and the framework with the polarizing membranes were combined. Additional caution was taken by placing clear plastics in place of the plastic diffusers. Three interchangeable plastic frameworks with different filters were fabricated and connected to the underlying flash by four magnets, either separately or combined, in front of the lens and the flash lamps. The first contained the diffusers, the second the polarizing membranes, and the third the 365-nm UV glass filters. Removing the plastic diffusers from the flash may void the flash warranty. Because the purpose of this article is to present the principal technical aspects of the applications, the warranty was not taken into consideration.

Cross-Polarized Filters
Two pieces of a polarized plastic membrane were placed in parallel on both sides of the plastic framework. Another piece was placed in the center, perpendicular to the lateral pieces. The cross-polarized filter may not be used directly on the flash lamps because the membrane might be burned by the energy of the flashes.
For this reason, as mentioned previously, the cross-polarized filters were placed on top of the add-on frame with the plastic diffusers.

365-nm UV Filters
The fluorescence filters were composed of two 365-nm UV glass filters placed on both sides of the plastic framework to cover the flash lamps. No additional filter was required in front of the lens.

Setup for Metamerism
For metamerism diagnosis, a device with continuous illumination from two different light-emitting diodes (LEDs) (Rite Lite 2™, AdDent, Inc., addent.com), generating three different illumination qualities (5500 K simulating daylight, 3200 K simulating incandescent light, and 3900 K simulating fluorescent tube light), was luted to a plastic o-ring with four magnets. In this way, it was possible to attach it in front of the lens for static images without flash. The color rendering index (CRI) describes the relative ability of a light source to render colors faithfully and is reported as a number between 0 and 100. A light source with a CRI of 100 would reproduce colors as accurately as daylight on a sunny day at noon. This lighting condition is considered the ideal illumination environment for shade matching in restorative dentistry, even though it is rarely present during shade matching. Rite Lite 2 has LEDs with high CRI values, varying from 87 to 92, thus approaching the ideal illumination conditions (CRI 100, 5000 K to 6000 K).

Fluorescence
Fluorescence71-76 is a variation of luminescence. The more fluorescent a material is, the brighter and more 'vital' it appears. Fluorescence is defined as the ability of a natural or artificial substance to spontaneously emit visible light when irradiated by UVA illumination. The excitation spectrum of dentin has a center wavelength of 365 nm, and the fluorescence emission peak is observed at 440 nm with a full width at half maximum of 20 nm. Enamel presents a much less intense fluorescence peak at 450 nm, which slowly decreases up to 680 nm.
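As a back-of-envelope illustration, the dentin emission band just described (peak 440 nm, full width at half maximum 20 nm) can be approximated by a Gaussian. This is a didactic sketch under the stated figures, not a measured spectrum; real emission bands are only roughly Gaussian.

```python
import math

# Didactic Gaussian approximation of the dentin fluorescence emission band,
# built only from the figures quoted in the text (peak 440 nm, FWHM 20 nm).
# Real emission spectra are not exactly Gaussian.

PEAK_NM = 440.0
FWHM_NM = 20.0
# For a Gaussian, FWHM = 2 * sqrt(2 * ln 2) * sigma, so sigma ~ 8.49 nm here
SIGMA_NM = FWHM_NM / (2 * math.sqrt(2 * math.log(2)))

def relative_intensity(wavelength_nm):
    """Relative emission intensity, normalised to 1.0 at the 440 nm peak."""
    return math.exp(-((wavelength_nm - PEAK_NM) ** 2) / (2 * SIGMA_NM ** 2))

# By construction the band falls to half intensity at 440 +/- 10 nm,
# matching the quoted full width at half maximum of 20 nm.
```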
Thus a wide, but not intense, band of fluorescence is present in enamel. Studies show that dentin is three times more fluorescent than enamel, and that dentin fluorescence intensity increases over time because dentin has higher quantities of minerals, pyrimidine, pyridinoline, tryptophan, and hydroxypyridium.77-82 Photographic Application Fluorescence can be captured with two photographic applications, using continuous or flash lighting. The traditional method of continuous lighting is quite complicated. It requires a continuous UV light source and a dark room, free of any other artificial light in the operative field, because the intensity of the fluorescence is very low. In addition, the photographic recording necessitates long exposure times of several seconds and increased ISO sensitivity of the DSLR camera sensor, with the disadvantages of increased picture noise, the need to stabilize the camera on a tripod, and significant irradiation of the patient by UV light (). Recently, the authors proposed a novel setup that avoids all these practical disadvantages and minimizes the exposure of the patient to UVA light. This setup consists of an interchangeable UVA 365-nm excitation filter placed in front of the commercial macro flash lamps after removal of their plastic diffusers, together with a DSLR camera with a macro lens. This allows for fluorescence documentation under normal dental office conditions in the same way as standard clinical photography with a macro lens and flash, without the need for a dark room or extended exposure times (). Clinical Significance Fluorescence is considered a clinically significant optical property in esthetic restorations because, under fluorescence, teeth appear more vital, whiter, and brighter.
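The dentin emission band quoted above (peak at 440 nm, full width at half maximum of 20 nm) can be treated as an approximately Gaussian curve; the following is a minimal illustrative sketch (a modeling assumption, not part of the authors' setup) converting the reported FWHM into a Gaussian standard deviation:

```python
import math

def fwhm_to_sigma(fwhm_nm: float) -> float:
    # For a Gaussian profile, FWHM = 2 * sqrt(2 * ln 2) * sigma ≈ 2.3548 * sigma.
    return fwhm_nm / (2.0 * math.sqrt(2.0 * math.log(2.0)))

def gaussian_emission(wavelength_nm: float, peak_nm: float = 440.0,
                      fwhm_nm: float = 20.0) -> float:
    # Peak-normalized emission of the dentin band reported in the text:
    # center 440 nm, FWHM 20 nm (Gaussian shape is an assumption here).
    sigma = fwhm_to_sigma(fwhm_nm)
    return math.exp(-((wavelength_nm - peak_nm) ** 2) / (2.0 * sigma ** 2))

sigma = fwhm_to_sigma(20.0)      # ≈ 8.49 nm
half = gaussian_emission(450.0)  # 450 nm lies one half-width from the peak
```

At 450 nm, one half-width from the 440-nm peak, such a model returns exactly half the peak intensity, which is simply the definition of FWHM; the broad, low enamel band described in the text would need a wider, weaker curve.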
Under black lighting, such as in nightclubs, where UV-coated lamps emit the appropriate excitation light to induce fluorescence, restorations should not be distinguishable from the natural dentition. A perfectly integrated esthetic restoration should exhibit fluorescence similar to that of the natural dentition, and the practitioner must be able to verify this property in his or her routine clinical setting. Enamel and dentin composite materials should closely mimic the fluorescence emission levels of enamel and dentin. Unfortunately, currently available commercial composite materials vary in their fluorescence, depending on brand and shade.83-85 Opalescence Opalescence86,87 is defined as the optical property of natural tissues or artificial substances that appear bluish-grey under reflected illumination (opalescent halo) and yellowish-orange-brown under transmitted illumination (counter-opalescence). The enamel of natural teeth is opalescent. This optical phenomenon is based on the difference in refractive indexes of the enamel components, which are hydroxyapatite crystals and water. Moreover, the specific dimensions and diverse orientations of the hydroxyapatite crystals scatter light within the visible spectrum, more at short wavelengths, producing the bluish effect, and less at long wavelengths, producing the yellowish-red effect. These effects are clearly visible, especially at the incisal enamel edge and on the border between enamel and dentinal lobes. Photographic Application for Yellowish-red Opalescence To document and photograph yellowish-red opalescence, transmitted continuous light must be used for illumination, without flash. An appropriate illumination source for this application is a white LED in the oral cavity directed toward the palatal surface of the tooth to be examined.
The exposure time is increased to 1/60 second (but not longer, to avoid a shaking effect), the aperture is decreased to around f = 10, and the sensitivity of the sensor is increased to a high value (~3200 ISO). With these settings, the DSLR camera is sufficiently sensitive to capture enough light without any further increase in the exposure time, which would necessitate a tripod to avoid shaking (). The LED Microlux™ Transilluminator (AdDent) with a 3-mm microtip glass light guide was used for this photographic application. Photographic Application for Bluish Opalescence To capture bluish opalescence, a reflective technique is required, using normal macro photography and flash lighting. The use of a polarizing filter is helpful, eliminating whitish areas of specular reflections and revealing the exact anatomy of the opalescence zone (). Clinical Significance Ideal restorative materials should exhibit properties comparable to those of the natural dentition. Unfortunately, commercially available composite materials, as with other optical properties, vary in their opalescence. Bluish opalescence, which is more evident than yellowish-red opalescence in social settings, may be determined with the appropriate photographic technique and controlled at the restorative phase. Translucency Translucency is the property of a material that allows light transmission but also dissipates the light within the material.88-90 As such, the material is not completely transparent but has the appearance of milky glass. In other words, translucency is the relative amount of light transmittance or diffuse reflectance from a material.91 Factors that influence the transmission of light within composite resins are related to structural components, such as the resin matrix and filler contents, the difference in refractive indexes between them, the size and shape of the inorganic fillers, and, finally, pigments and other additives.
The aging of the material and the polymerization procedure used may also influence the degree of translucency that current composite materials exhibit. Photographic Application To document and photograph translucency, a method similar to the capture of yellowish-red opalescence is necessary: transmitted continuous light for illumination, without flash, in contact with the palatal surface of the tooth to be examined (). Clinical Significance Dental enamel is translucent. Consequently, contemporary composite systems have highlighted the importance of this property. The translucency of enamel materials provides a depth of color for the underlying dentin and contributes to shade matching by enhancing the chameleon effect. Enamel becomes more translucent with age. Metamerism The metameric effect leads to different color appearances under different lighting conditions.92-97 Two substances that have the same color appearance under certain lighting conditions may have different color appearances when the lighting conditions change. This fundamental effect in the field of restorative dentistry may be influenced by three parameters: the material, the observer, and the lighting conditions. For example, if the spectrophotometric light emission curve of the material in the visible light range differs from that of the surrounding tissue, then metamerism is observed. This parameter can only be controlled if the curve of the material is known and compared before its use. Regarding the observer parameter, the individual subjective color perception of the observer may be biased, for example by deuteranopia. Color vision deficiencies of practitioners, especially men, who are much more likely than women to have deuteranopia, should therefore be considered. Finally, daylight in outdoor environments and room and mixed lighting conditions in various indoor environments may influence the appearance of restorations because of metamerism.
If composite restorations exhibit a coherent appearance and shade matching under these lighting conditions, then they are considered to have successfully overcome the effect of metamerism. Photographic Application Commercially available devices using LED technology can simulate multiple lighting conditions, which may be used for the disclosure of metamerism. One of these devices was adapted in front of the lens of the digital camera to capture images at three different color temperatures ( through ), disclosing shade differences if a metameric effect was present. depicts how the restorations appeared under daylight, under incandescent light, and under ambient light. Copyright © AEGIS Communications, All Rights Reserved *Corresponding author: Vazirian_m@tums.ac.ir © 2020. Open access. This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by-nc/4.0/) Research Journal of Pharmacognosy (RJP) 7(2), 2020: 61-69 Received: 29 July 2019 Accepted: 4 Mar 2020 Published online: 10 Mar 2020 DOI: 10.22127/RJP.2020.104701 Original article Potential of Trachyspermum ammi (ajwain) Gel for Treatment of Facial Acne vulgaris: a Pilot Study with Skin Biophysical Profile Assessment and Red Fluorescence Photography Ziba Talebi1, Gholamreza Kord Afshari2, Saman Ahmad Nasrollahi3, Alireza Firooz3, Maedeh Ghovvati1, Aniseh Samadi3, Mehrdad Karimi2, Sima Kolahdooz2, Mahdi Vazirian4,5* 1International Campus, School of Pharmacy, Tehran University of Medical Sciences, Tehran, Iran. 2Department of Iranian Traditional Medicine, School of Iranian Traditional Medicine, Tehran University of Medical Sciences, Tehran, Iran. 3Center for Research & Training in Skin Diseases & Leprosy, Tehran University of Medical Sciences, Tehran, Iran. 4Department of Pharmacognosy, School of Pharmacy, Tehran University of Medical Sciences, Tehran, Iran.
5Medicinal Plants Research Center, School of Pharmacy, Tehran University of Medical Sciences, Tehran, Iran. Abstract Background and objectives: Acne vulgaris is one of the most common dermatologic conditions. The available anti-acne treatments are not entirely satisfactory or safe. In this regard, the search for new treatments, especially natural materials with reasonable side effects and satisfactory effectiveness, could be promising. The aim of the present study was to explore the safety and efficacy of a topical formulation containing Trachyspermum ammi (ajwain) fruit essential oil in patients with facial acne. Methods: The essential oil of the fruits was extracted by the hydrodistillation method and formulated as a 1% gel. In this open-label, uncontrolled clinical trial, 20 patients with mild to moderate acne received topical ajwain gel twice daily for 8 weeks. The outcomes of acne lesion count, red fluorescence parameters and biophysical skin profiles were evaluated at baseline and at the 4th and 8th weeks. Any adverse reaction was recorded during the study. Results: All patients completed the study. Two months after treatment, the mean reduction in the total (8.2±3.36; p<0.001) and non-inflammatory (7.3±4.53; p<0.001) lesions was statistically significant. Furthermore, a significant reduction in the size and quantity of red fluorescence spots was also observed. Biophysical skin profile measurements revealed a significant reduction in erythema (p=0.033) and sebum (p=0.026) and a significant increase in pH (p=0.005). No serious adverse events were reported. Conclusion: The results of this pilot study provide a basis for the effectiveness of topical ajwain gel for the treatment of mild to moderate facial acne. Further double-blind clinical trials are necessary to confirm the efficacy and safety of the product.
Keywords: acne vulgaris; ajwain; clinical trial; essential oil; Trachyspermum ammi Citation: Talebi Z, Kord Afshari Gh, Ahmad Nasrollahi S, Firooz A, Ghovvati M, Samadi A, Karimi M, Kolahdooz S, Vazirian M. Potential of Trachyspermum ammi (ajwain) gel for treatment of facial acne vulgaris: a pilot study with skin biophysical profile assessment and red fluorescence photography. Res J Pharmacogn. 2020; 7(2): 61-69. Introduction Acne vulgaris is one of the most common causes of referral to dermatologists [1]. This disease is characterized by non-inflammatory black and white comedones and inflammatory papules, pustules, nodules and cysts on the face, neck, and trunk [2]. The chronic nature of the disease and its appearance on the face and body may lead to anxiety, depression, decreased self-esteem and, in severe cases, suicidal thoughts in patients [3]. Acne is considered a pilosebaceous unit disorder in which the following factors are involved in its pathogenesis: pilosebaceous canal obstruction due to hyperkeratinization, increased sebum production, Propionibacterium acnes, inflammation and oxidative stress [4]. Topical therapy is the mainstay of mild to moderate acne treatment. Retinoids and antimicrobial drugs such as benzoyl peroxide and antibiotics are the most common topical medications administered for this disease [2]. These medications target one or several pathogenetic factors via reduction of follicular hyperkeratosis and anti-inflammatory, antioxidant, antimicrobial, keratolytic and sebostatic activities [5]. Increased bacterial resistance to antibiotics and skin irritation are the most common side effects of the available anti-acne treatments [6]; therefore, there is a need to develop new, safer and more effective therapies [7].
Natural medicines, by their multi-component nature and long-term use by humans, may be good candidates for this purpose [8]; however, there is not enough evidence for this claim and further studies are needed [9]. Persian medicine (PM) is one of the oldest traditional medicine systems in the world, with a vast body of experience in the treatment of diseases with medicinal herbs [10]. PM scholars believed that the skin has an important role in the secretion of waste products and that any disturbance in this process may cause skin rashes and facial spots similar to acne. In the traditional literature, many kinds of topical and oral preparations are mentioned for skin cleansing in different dermatologic conditions [11]. Trachyspermum ammi L. (synonym: Carum copticum), commonly named ajwain and known as "Zenyan" or "Nankhah" in PM, belongs to the Apiaceae family [12]. In PM resources, topical use of ajwain fruit has been recommended for the treatment of a variety of skin conditions including acne [13]. Furthermore, the anti-inflammatory [14,15], antioxidant [16-21] and antibacterial [22,23] properties of these fruits have been confirmed in pharmacologic studies. Despite the traditional use and related evidence for the probable anti-acne properties of ajwain fruits, no study has evaluated their safety and efficacy for the treatment of acne. This pilot study aimed to evaluate the safety and efficacy of a topical formulation of ajwain essential oil for the treatment of facial acne in a phase 2A clinical trial. To provide more objective results, we used fluorescence digital photography and skin biophysical parameter assessment. Material and Methods Ethical considerations The study protocol was approved by the Research Ethics Committee of Tehran University of Medical Sciences (approval number IR.TUMS.REC.1394.2063 on 26/02/2016) and the trial was registered in the Iranian Registry of Clinical Trials (www.irct.ir, registration number: IRCT2016031126938N3).
All patients provided written informed consent before entering the study. Plant material Ajwain fruits were purchased from a medicinal plant market (Tehran, Iran). They were ground in a mechanical grinder for 60 s, just before extraction of the essential oil. Essential oil extraction The essential oil was obtained from 500 g of ground powder of ajwain fruits by the hydrodistillation method using a Clevenger apparatus for 2.5 h. The obtained essential oil was yellow in color, had a pungent odor (because of thymol) and a density of 0.89 g/mL. It was collected in a glass container and kept in a refrigerator (4±1 °C), protected from light, until used. Gas chromatography analysis The essential oil was diluted 1:100 in n-hexane (HPLC grade, Merck, Germany) before injection into the gas chromatography column. The identification of essential oil components was carried out by a GC/MS system (Agilent 6890 gas chromatograph and 5973 mass spectrometer, USA) equipped with a BPX5 column (30 m × 0.25 mm internal diameter, 0.25 µm film thickness). Electron ionization (70 eV) was used for detection. The carrier gas was helium with a flow rate of 1 mL/min. The injector temperature was set at 250 °C. The initial column temperature was set at 50 °C for 1 min and was then raised to a final temperature of 280 °C at a rate of 5 °C/min. The operation was completed in 56 min. The identification of components was made by calculation of Kovats indices, obtained by injection of a mixture of homologous n-alkanes (C8-C25) on the GC column, and further confirmation of the proposed structures (by software library, Wiley nl7) against standard spectra [24]. Quantitative analysis of each compound was based on the relative area under the curve of each peak in the spectrum.
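Most of the GC run described above is spent on the linear oven ramp; a quick sketch of the timing arithmetic (the final hold is an inference from the reported 56-min run time, not stated in the text):

```python
def ramp_minutes(t_start_c: float, t_end_c: float, rate_c_per_min: float) -> float:
    # Time needed for a linear GC oven ramp between two temperatures.
    return (t_end_c - t_start_c) / rate_c_per_min

initial_hold = 1.0                     # min at 50 °C, as stated
ramp = ramp_minutes(50.0, 280.0, 5.0)  # (280 - 50) / 5 = 46 min
programmed = initial_hold + ramp       # 47 min of hold + ramp
# The reported total run time is 56 min, which would correspond to a
# final hold of roughly 56 - 47 = 9 min at 280 °C (an inference).
```

This kind of check is useful when reproducing a published oven program, since hold times at the final temperature are often omitted from methods sections.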
Preparation of topical gel Based on a maximum dermal use of 1.4% (w/w) for ajwain essential oil [25], a 1% (w/w) gel was prepared by dissolving 5 g of ajwain oil in a hydro-alcoholic solution (containing 50 g propylene glycol, 320.8 g of 96% ethanol and 119 g distilled water); the pre-hydrated gelling agent (5 g carbomer 941) was then added to this mixture under stirring (400 rpm). Finally, triethanolamine was added slowly until the final gel was formed. Physicochemical evaluation of formulation Physical features of the ajwain gel were assessed based on the ICH stability guideline. The prepared gel was stored at 25±2 °C and 60±5% relative humidity for 6 months. During this period, the color, odor, consistency, and homogeneity of the ajwain gel were analyzed visually. The viscosity of the gel was examined with a DV1™ digital viscometer (Brookfield, Spain) using spindle 5 at 100 rpm with a runtime of 15 seconds. About 2 g of the gel was dissolved in 20 mL distilled water and the pH was determined with a calibrated pH meter (Metrohm 827, Switzerland). Study design This open-label, uncontrolled clinical trial was conducted between December 2017 and April 2018 in the Center for Research and Training in Skin Diseases and Leprosy (CRTSDL) of Tehran University of Medical Sciences, Tehran, Iran. Inclusion and exclusion criteria The inclusion criteria were: age of 20-58 years; mild to moderate acne without cystic lesions; and at least 15 inflamed and 15 non-inflamed acne lesions on the face. Acne severity was determined according to the number of papules and pustules per half face: 0-5 = mild, 6-20 = moderate, 21-50 = severe and more than 50 = very severe [26].
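The half-face papule/pustule grading cited above amounts to a simple threshold lookup; a minimal sketch (the function name is illustrative, not from the cited criteria):

```python
def acne_severity(papule_pustule_count: int) -> str:
    # Grading per half face, per the severity criteria cited in the text:
    # 0-5 mild, 6-20 moderate, 21-50 severe, more than 50 very severe.
    if papule_pustule_count <= 5:
        return "mild"
    if papule_pustule_count <= 20:
        return "moderate"
    if papule_pustule_count <= 50:
        return "severe"
    return "very severe"
```

Under this scheme, only patients grading "mild" or "moderate" (and without nodulocystic lesions) were eligible for inclusion.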
The following conditions were considered exclusion criteria: (1) presence of nodulocystic acne lesions, (2) pregnancy or lactation, (3) excessive sun or UV light exposure, (4) allergy or sensitivity to product ingredients, (5) use of topical anti-acne treatments or procedures within four weeks prior to the study, and (6) use of systemic anti-acne medications within six months prior to the study. The dermatologist referred eligible patients to the researcher. Finally, twenty volunteers of both sexes, aged 20-58 years, with mild to moderate facial acne were chosen through convenience sampling by a dermatologist. Intervention After obtaining informed consent from the patients, they were instructed to use the topical gel on their face twice daily (morning and night) for 8 weeks. Outcome measurement To determine therapeutic efficacy, clinical assessment, digital photography, fluorescence photography, and biophysical skin profile measurements were performed at baseline and at the week-4 and week-8 follow-up visits. For clinical evaluations, the count of acne lesions (inflammatory, non-inflammatory, and total) was determined by a dermatologist. Additionally, patient satisfaction with the treatment efficacy was assessed using a Visual Analog Scale (VAS) from 0 (totally unsatisfied) to 10 (totally satisfied). Local adverse effects such as erythema, itching, edema, scaling, dryness and burning/stinging were recorded during the study. Digital photographs of the face were captured using an identical camera and photographic conditions at baseline and each follow-up session. A consent form was obtained from each individual to use their photograph for research purposes. Fluorescence digital photography was performed using the Visiopor® PP 34 camera (Courage & Khazaka, Germany), with a narrow-band UVA light of 375 nm. Photographs were taken of the right cheek. The quantity and size (percentage of the area covered) of orange-red spots in a measured area of 6×8 mm were analyzed.
Six skin biophysical parameters (stratum corneum hydration, sebum content, trans-epidermal water loss (TEWL), erythema index, melanin index and pH value) were measured using the Cutometer® dual MPA 580 (Courage-Khazaka, Germany) with the following probes: Corneometer CM 825, Sebumeter SM 815, Tewameter TM 300, Mexameter MX 18 and pH-Meter pH 908. These parameters were assessed by the same investigator on the right cheek in a laboratory at 20-25 °C (room temperature) and a relative humidity of 35-50%. Statistical analysis Statistical analysis was performed using SPSS 16 software (SPSS Inc., USA). The descriptive data were expressed as mean ± SD. The paired t-test with a significance level of 0.05 was used for comparison between baseline and post-treatment values. Results and Discussion The main components of the oil were identified as thymol (50.17%), p-cymene (27.11%) and γ-terpinene (16.95%). The results of this section are presented in table 1. The dermatologist conducted the primary screening on thirty patients with facial acne vulgaris and referred a group of twenty eligible patients (2 males and 18 females) with mild to moderate disease to the researcher. Ten patients were excluded due to acne severity and concurrent use of anti-acne treatments. All recruited patients completed the study. All patients had moderate acne. The average age of the patients was 21.4 ± 3.4 years (range, 20-40 years). Nine patients declared a family history of acne. Twelve patients had persistent acne (beginning in adolescence) and the rest had late-onset acne (first appearing after 25 years of age). The baseline demographic and clinical characteristics of the patients are presented in table 2. Eight weeks after treatment, a mean reduction of 8.2 ± 3.36 (p < 0.001) in total lesion count, 7.3 ± 4.53 (p < 0.001) in non-inflammatory lesion count and 0.9 ± 2.77 (p = 0.16) in inflammatory lesion count was observed.
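These mean reductions line up with the group means reported in table 3; a quick arithmetic check (group-mean differences only; the per-patient percentage reductions in the table may round differently):

```python
# Group means from table 3 (baseline vs. week 8), lesion counts per patient.
baseline = {"total": 15.45, "non_inflammatory": 12.35, "inflammatory": 3.10}
week8 = {"total": 7.25, "non_inflammatory": 5.05, "inflammatory": 2.20}

# Difference of group means, rounded to two decimals.
reduction = {k: round(baseline[k] - week8[k], 2) for k in baseline}
# total: 8.2, non-inflammatory: 7.3, inflammatory: 0.9, matching the
# mean reductions quoted in the text.
```

Note that a paired t-test, as used in the study, operates on per-patient differences, so the quoted SDs cannot be recovered from group means alone.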
The reduction in the total and non-inflammatory lesions was statistically significant (p ≤ 0.05). The mean (SD) counts of the different lesion types are summarized in table 3. In fluorescence digital photography, the mean decreases in the size (0.46 ± 0.7; p = 0.009) and quantity (8.1 ± 6.6; p < 0.001) of red spots versus baseline were significant. The effect of ajwain gel on red fluorescence parameters is depicted in figure 2. Two months after using ajwain gel, the mean decrease in sebum (23.05 ± 42.6; p = 0.026) and increase in pH (0.48 ± 0.68; p = 0.005) were significant. Additionally, the results showed a decrease of 6.56 ± 18.99 in skin hydration and an increase of 2.66 ± 11.43 in trans-epidermal water loss (TEWL). Finally, the melanin index showed an average increase of 2.17 ± 27.25 and the erythema index an average decrease of 37.71 ± 73.46 versus baseline. The values of the skin biophysical parameters at baseline and follow-up visits are shown in table 4. At the final visit, the mean VAS score of patient satisfaction was 5.98 ± 1.14 (range 4-7). Ajwain gel was well tolerated and no adverse event was reported. The clinical effectiveness of the ajwain gel is shown in figure 1. Acne vulgaris is a very common and chronic pilosebaceous unit disorder with a complex pathogenesis [27]. None of the available treatments for this disease is completely satisfactory and safe. Studies on natural substances with anti-acne properties could potentially help in developing efficient alternative treatments with fewer side effects [28].
Table 1. Chemical composition of Trachyspermum ammi essential oil, obtained by hydrodistillation method
No.  Compound        RI1   KI2   RT3     Percentage
1    β-Pinene        981   980   10.17   1.4
2    β-Myrcene       992   991   10.76   0.27
3    p-Cymene        1031  1026  12.00   27.11
5    γ-Terpinene     1063  1062  13.17   16.95
6    Terpinen-4-ol   1190  1177  24.71   0.2
9    Cinnamaldehyde  1275  1270  19.78   0.31
10   Thymol          1306  1290  20.457  50.17
11   Carvacrol       1312  1298  20.52   0.86
Monoterpene hydrocarbons: 7.9; Oxygenated monoterpenes: 0.5; Aromatic monoterpenes: 90.7; Total identified: 99.1
1Relative index; 2Kovats index (based on literature); 3Retention time
Table 2. Baseline demographic and clinical characteristics of patients involved in the trial
Age (years): mean 21.4, min/max 20/40
Gender: 2 male, 18 female
Severity of acne: 0 mild, 20 moderate, 0 severe
Table 3. Mean counts of total, inflammatory and non-inflammatory lesions at baseline, 4th and 8th week after treatment, and percentage reduction versus baseline
Type of lesions    Baseline (n=20)  Week 4 (n=20)              Week 8 (n=20)
                   Mean (SD)        Mean (SD)    % reduction   Mean (SD)    % reduction
Non-inflammatory   12.35 (6.4)      8.10 (5)     34            5.05 (4.09)  61
Inflammatory       3.10 (2.12)      3.00 (1.91)  3             2.20 (2.04)  29
Total              15.45 (5.62)     11.10 (5.24) 28            7.25 (4.62)  55
Figure 1. Digital photographs at baseline (a & c) and 2 months (b & d) after application of 1% (w/w) ajwain gel, depicting a decrease in the facial acne lesions
Figure 2. Mean value of red fluorescence parameters at baseline, 4th week and 8th week after treatment with Trachyspermum ammi gel
Table 4.
Mean value of skin biophysical parameters at baseline, 4th and 8th week after treatment
Parameters       Baseline (n=20)  Week 4 (n=20)   Week 8 (n=20)    p value*
                 Mean (SD)        Mean (SD)       Mean (SD)
Hydration        59.44 (13.56)    58.95 (11.33)   52.87 (16.86)    0.138
TEWL             20.02 (5.52)     19.41 (5.48)    22.68 (10.41)    0.311
Sebum content    74.3 (31.5)      52.15 (52.23)   51.25 (26.51)    0.026
Skin pH          6.44 (0.56)      6.56 (0.53)     6.93 (0.36)      0.005
Erythema index   411.33 (95.44)   378.75 (90.74)  373.61 (101.11)  0.033
Melanin index    179.45 (26.03)   178.84 (30.43)  181.61 (34.84)   0.726
*p value of paired t-test between baseline and 8th week
In this study we investigated the clinical efficacy and safety of a topical gel made from Trachyspermum ammi fruits in patients with acne vulgaris. Our pilot study showed that ajwain gel was clinically efficient for the treatment of mild to moderate facial acne. After two months, a statistically significant decrease was observed in the mean number of total and non-inflammatory acne lesions. Furthermore, the size and quantity of red fluorescence spots, skin sebum and the erythema index declined significantly after treatment. Porphyrins are metabolic products of P. acnes and are strongly fluorescent [29]. The significant decrease in fluorescence parameters in our study may be explained by the antibacterial effect of ajwain essential oil. Propionibacterium acnes is an anaerobic, gram-positive commensal bacterium that normally lives on the skin. Excessive sebum and keratinocyte debris can plug hair follicles and create a favorable anaerobic environment for the growth of this bacterium in the pilosebaceous unit [30]. The release of porphyrins, chemotactic factors and enzymes by P. acnes potentiates the oxidation of sebum lipids and neutrophil accumulation at acne-prone sites.
All of these events result in cutaneous oxidative stress, inflammatory cascades, tissue injury and acne lesion formation [31]. The significant therapeutic response after using ajwain gel in our study may be explained by the antimicrobial, anti-inflammatory and antioxidant properties of T. ammi fruits. Among the normal skin flora related to acne vulgaris, the antimicrobial activity of T. ammi against Staphylococcus aureus [32-36] and Staphylococcus epidermidis [35] has been documented in several studies. Alpha-pinene, a minor ajwain essential oil constituent, has been found to have antimicrobial activity against S. aureus, S. epidermidis and P. acnes [37]. Known mechanisms for the anti-inflammatory effects of T. ammi fruits are free radical scavenging activity [38] and suppression of the production of nitric oxide and pro-inflammatory cytokines, i.e. IL-18 and TNF-α [39,40]. These therapeutic effects are mainly attributed to the major phenolic compounds of T. ammi, thymol and carvacrol [41]. The topical preparation of our study was well tolerated and no adverse events were reported. An acidic skin pH is important for keratinization, skin barrier regeneration and cutaneous antimicrobial defense [42]. Although at the end of the study skin pH was still in the acidic range, a significant rise in pH (6.44 ± 0.56 to 6.93 ± 0.36) compared with baseline was observed. Other unwanted effects of ajwain gel on skin barrier properties were a mild, non-significant increase in TEWL and melanin index and a decrease in hydration two months after treatment. To our knowledge, this is the first clinical trial evaluating the effect of ajwain fruit in acne vulgaris. Tea tree oil is one of the proposed herbal treatments for acne vulgaris. In a study by Malhi et al.,
the mean percent decreases in total lesion count after applying tea tree oil gel and face wash at weeks 4 and 8 were 25% and 37%, respectively, compared with 28% and 55% in our study [43]. Compared with the results of a similar study on the use of cinnamon gel, ajwain gel was more efficient in reducing the total (55% vs. 47%) and non-inflammatory (61% vs. 48%) lesion counts after 8 weeks. Applying cinnamon gel caused a significant decrease in fluorescence spot size, whereas ajwain gel significantly decreased both the size and quantity of the spots. The changes in the skin biophysical profile after applying ajwain and cinnamon gels were similar: both preparations increased TEWL and the melanin index and decreased hydration, erythema and sebum [44]. In conclusion, our results suggest that twice-daily use of topical ajwain gel for two months could be safe and effective in the treatment of mild to moderate facial acne vulgaris. This is the first study of ajwain application for the treatment of acne vulgaris, and double-blind randomized clinical trials with larger sample sizes are needed to confirm these findings. In addition, further studies are warranted to elucidate the action mechanism of T. ammi fruits on acne pathogenic factors and its long-term effects on skin barrier properties. Acknowledgments This research project was supported by Tehran University of Medical Sciences (TUMS) (grant no. 31055). Author contributions Mahdi Vazirian designed the study and revised the manuscript; Sima Kolahdooz prepared the manuscript and was involved in data analysis; Ziba Talebi and Maedeh Ghovvati conducted the experiments related to the pharmacognosy, pharmaceutical and clinical parts; Gholamreza Kord Afshari and Mehrdad Karimi were involved in patient recruitment; Saman Ahmad Nasrollahi formulated the gel; Alireza Firooz supervised the clinical part; Aniseh Samadi was involved in data analysis.
Declaration of interest The authors declare that there is no conflict of interest. The authors alone are responsible for the accuracy and integrity of the paper content. References [1] Chaudhary SS, Tariq M, Zaman R, Imtiyaz S. The in vitro anti-acne activity of two unani drugs. Anc Sci Life. 2013; 33(1): 35-38. [2] Kraft J, Freiman A. Management of acne. Can Med Assoc J. 2011; 183(7): 430-435. [3] Barnes LE, Levender MM, Fleischer AB, Feldman SR. Quality of life measures for acne patients. Dermatol Clin. 2012; 30(2): 293-300. [4] Bowe WP, Patel N, Logan AC. Acne vulgaris: the role of oxidative stress and the potential therapeutic value of local and systemic antioxidants. J Drugs Dermatol. 2012; 11(6): 742-746. [5] Kosmadaki M, Katsambas A. Topical treatments for acne. Clin Dermatol. 2017; 35(2): 173-178. [6] Goh CL, Noppakun N, Micali G, Azizan NZ, Boonchai W, Chan Y, Etnawati K, Gulmatico- Flores Z, Foong H, Kubba R, Paz-Lao P, Lee YY, Loo S, Modi F, Nguyen TH, Pham TL, Shih YH, Sitohang IB, Wong SN. Meeting the challenges of acne treatment in Asian patients: a review of the role of dermocosmetics as adjunctive therapy. J Cutan Aesthet Surg. 2016; 9(2): 85-92. [7] Yang JH, Yoon JY, Kwon HH, Min S, Moon J, Suh DH. Seeking new acne treatment from natural products, devices and synthetic drug discovery. Dermatoendocrinol. 2017; Article ID: 1356520. [8] Solórzano-Santos F, Miranda-Novales MG. Essential oils from aromatic herbs as antimicrobial agents. Curr Opin Biotech. 2012; 23(2): 136-141. [9] Yarnell E, Abascal K. Herbal medicine for acne vulgaris. Altern Complement Ther. 2006; 12(6): 303-309. [10] Rezaeizadeh H, Alizadeh M, Naseri M, Ardakani M. The traditional Iranian medicine point of view on health and disease. Iran J Pub Health. 2009; 38(1): 169-172. [11] Shirbeigi L, Oveidzadeh L, Jafari Z, Fard MS. Acne etiology and treatments in traditional Persian medicine. Int J Mol Sci. 2016; 41(S3): 19. [12] Zarshenas MM, Moein M, Mohammadi Samani S, Petramfar P. 
An overview on ajwain (Trachyspermum ammi) pharmacological effects; modern and traditional. J Neurosci R. 2013; 14(1): 98-105. [13] Avicenna. Canon of medicine. New Delhi: S. Waris Nawab, Senior Press Superintendent, Jamia Hamdard Printing Press, 1998. [14] Thangam C, Dhananjayan R. Antiinflammatory potential of the seeds of Carum copticum Linn. Indian J Pharmacol. 2003; 35(6): 388-391. [15] Umar S, Asif M, Sajad M, Ansari MM, Hussain U, Ahmad W, Siddiqui SA, Ahmad S, Khan HA. Anti-inflammatory and antioxidant activity of Trachyspermum ammi seeds in collagen induced arthritis in rats. Int J Drug Dev Res. 2012; 4(1): 210-219. [16] Saxena S, Agarwal D, Saxena R, Rathore S. Analysis of anti-oxidant properties of ajwain (Trachyspermum ammi L) seed extract. Int J Seed Spices. 2012; 2(1): 50-55. [17] Singh A, Ahmad A. Antioxidant activity of essential oil extracted by SC-CO2 from seeds of Trachyspermum ammi. Medicines. 2017; Article ID: 28930268. [18] Prashanth M, Revanasiddappa H, Rai K, Raveesha K, Jayalakshmi B. Antioxidant and antibacterial activity of ajwain seed extract against antibiotic resistant bacteria and activity enhancement by the addition of metal salts. J Pharm Res. 2012; 5(4): 1952-1956. [19] Chatterjee S, Goswami N, Bhatnagar P. Estimation of phenolic components and in vitro antioxidant activity of fennel (Foeniculum vulgare) and ajwain (Trachyspermum ammi) seeds. Adv Biores. 2012; 3(2): 109-118. [20] Chatterjee S, Goswami N, Kothari N. Evaluation of antioxidant activity of essential oil from ajwain (Trachyspermum ammi) seeds. Int J Green Pharm. 2013; 7(2): 140-144. [21] Bajpai VK, Agrawal P. Studies on phytochemicals, antioxidant, free radical scavenging and lipid peroxidation inhibitory effects of Trachyspermum ammi seeds. Indian J Pharm Educ. 2015; 49(1): 58-65. [22] Mood BS, Shafeghat M, Metanat M, Saeidi S, Sepehri N. The inhibitory effect of ajowan essential oil on bacterial growth. Int J Infect.
2014; Article ID: 9394. [23] Manayi A, Vazirian M, Omidpanah S, Hosseinkhani F, Hasseli A. Chemical composition and antibacterial activity of essential oil of Trachyspermum ammi. Planta Med. 2014; 80(16): P2B60. [24] Adams RP, Sparkman OD. Review of identification of essential oil components by gas chromatography/mass spectrometry. J Am Soc Mass Spectrom. 2007; 18(4): 803-806. [25] Tisserand R, Young R. Essential oil safety: a guide for health care professionals. 2nd ed. London: Elsevier Health Sciences, 2013. [26] Hayashi N, Akamatsu H, Kawashima M. Establishment of grading criteria for acne severity. J Dermatol. 2008; 35(5): 255-260. [27] Tahir CM. Pathogenesis of acne vulgaris: simplified. J Pak Assoc Dermatol. 2016; 20(2): 93-97. [28] Kapoor S, Saraf S. Topical herbal therapies an alternative and complementary choice to combat acne. Res J Med Plant. 2011; 5(6): 650-659. [29] Youn SW, Kim JH, Lee JE, Kim SO, Park KC. The facial red fluorescence of ultraviolet photography: is this color due to Propionibacterium acnes or the unknown content of secreted sebum? Skin Res Technol. 2009; 15(2): 230-236. [30] Bojar RA, Holland KT. Acne and Propionibacterium acnes. Clin Dermatol. 2004; 22(5): 375-379. [31] Sarici G, Cinar S, Armutcu F, Altınyazar C, Koca R, Tekin N. Oxidative stress in acne vulgaris. J Eur Acad Dermatol Venereol. 2010; 24(7): 763-777. [32] Shrivastava V, Shrivastava G, Sharma R, Mahajan N, Sharma V, Bhardwaj U. Antimicrobial potential of ajwain (Trachyspermum copticum): an immense medical spice. J Pharm Res. 2012; 5(7): 3837-3840. [33] Hassanshahian M, Bayat Z, Saeidi S, Shiri Y. Antimicrobial activity of Trachyspermum ammi essential oil against human bacterial. Int J Adv Biol Biomed Res. 2014; 2(1): 18-24. [34] Hassan W, Noreen H, Rehman S, Gul S. Antimicrobial efficacies, antioxidant activity and nutritional potentials of Trachyspermum ammi. Vitam Miner. 2016; 5(3): 1-5.
[35] Awan UA, Andleeb S, Kiyani A, Zafar A, Shafique I, Riaz N, Azahr MT, Uddin H. Antibacterial screening of traditional herbal plants and standard antibiotics against some human bacterial pathogens. Pak J Pharm Sci. 2013; 26(6): 1109-1116. [36] Vitali LA, Beghelli D, Biapa Nya PC, Bistoni O, Cappellacci L, Damiano S, Lupidi G, Maggi F, Orsomando G, Papa F, Petrelli D, Petrelli R, Quassinti L, Sorci L, Majd Zadeh M, Bramucci M. Diverse biological effects of the essential oil from Iranian Trachyspermum ammi. Arab J Chem. 2016; 9(6): 775-786. [37] Raman A, Weir U, Bloomfield S. Antimicrobial effects of tea-tree oil and its major components on Staphylococcus aureus, Staph. epidermidis and Propionibacterium acnes. Lett Appl Microbiol. 1995; 21(4): 242-245. [38] Kavoosi G, Tafsiry A, Ebdam AA, Rowshan V. Evaluation of antioxidant and antimicrobial activities of essential oils from Carum copticum seed and Ferula assafoetida latex. J Food Sci. 2013; 78(2): 356-361. [39] Abtahi M, Mohammadi Z, Hatef B, Amini M, Khorrami M, Aghanoori MR. Antifungal effect of flavonoid extract of Trachyspermum ammi plant on the gene expression of pro-inflammatory cytokines such as IL-18 and TNF-α in articular chondrocyte cells. Biosci Biotech Res Asia. 2015; 12(3): 2081-2087. [40] Abtahi MS, Mohammadi Z, Hatef B. Antifungal effect of flavonoid extract of Trachyspermum ammi plant on the gene expression of pro-inflammatory cytokines such as IL-18 and TNF-α in articular THP-1 monocyte/macrophages cells. Biosci Biotech Res Asia. 2015; 12(2): 1341-1342. [41] Alavinezhad A, Boskabady MH. Antiinflammatory, antioxidant, and immunological effects of Carum copticum L. and some of its constituents. Phytother Res. 2014; 28(12): 1739-1748. [42] Schmid-Wendtner MH, Korting HC. The pH of the skin surface and its impact on the barrier function. Skin Pharmacol Physiol.
2006; 19(6): 296-302. [43] Malhi HK, Tu J, Riley TV, Kumarasinghe SP, Hammer KA. Tea tree oil gel for mild to moderate acne; a 12 week uncontrolled, open-label phase II pilot study. Australas J Dermatol. 2017; 58(3): 205-210. [44] Ghovvati M, Afshari GK, Nasrollahi SA, Firooz A, Samadi A, Karimi M, Talebi Z, Kolahdooz S, Vazirian M. Efficacy of topical cinnamon gel for the treatment of facial acne vulgaris: a preliminary study. Biomed Res Ther. 2019; 6(1): 2958-2965.

Abbreviations
VAS: Visual Analog Scale; TEWL: Transepidermal water loss

Using a Sony Cyber-Shot Digital Camera for Photomicrography
Gregor Overney, Agilent Technologies Inc.
gregor_overney@agilent.com

Introduction
Photomicrography is the combination of photography and compound microscopy. Photographers working with compound microscopes face many challenges (for an introduction see [1] and [2]). Digital photography offers great advantages, but also adds additional difficulties. Digital cameras have been used in photomicrography for over a decade now. Today, we have access to many excellent consumer-grade digital cameras that are most suitable for low-cost imaging systems for light microscopy. In this short paper, I summarize my experience with the Sony DSC-S70 digital camera, which comes with a nice, large Zeiss lens. (Most of the ideas presented in this paper are also valid for the DSC-S75 and DSC-S85.) Many of the published reports I could find about this topic suggest using a Nikon Coolpix 990 (or 995) camera. This Coolpix model has a smaller front lens than the DSC-S70, which is often used as an argument for choosing this Nikon camera. However, I have a Coolpix 995 with a Nikon MDC relay lens, and it does not outperform my Sony DSC-S70 when the DSC-S70 is connected with a "homemade" adapter to the same Nikon MDC lens.
In 2000, with the introduction of the DSC-S70 [3], Sony surprised the competition by using a lens system designed by Carl Zeiss labs. This lens system, called a Vario Sonnar, produces very reasonable images. It is a bright, F2.0 lens system. The DSC-S70 has a maximum resolution of 2048x1536 pixels with an image ratio of 4:3.

A Closer Look
There are essentially three different ways to operate this Sony camera to take pictures through a microscope. The first one, a very primitive and unsophisticated way to take pictures, is to use an eyepiece with a rubber ring (called an ocular guard), which is intended to avoid scratches on eyeglasses. Using such a protected eyepiece, the lens of the DSC-S70 can be placed directly on the ocular. It is recommended to hold the camera firmly with one hand and use manual focusing. With some practice, it is possible to take quite reasonable images with this simple setup (see picture 1 for an image through an eyepiece with 20 mm field of view (FOV)). The second possibility is using an adapter that directly mounts the camera to a regular eyepiece. This is particularly recommended if the optical setup requires a compensating eyepiece to fully correct for lateral chromatic aberration. Nikon's CF and CFI60 objectives are corrected for lateral chromatic aberration without assistance from the eyepiece. I have not tried this setup with my DSC-S70, but it is very similar to the third setup, where the camera is mounted on a phototube. The use of a phototube offers the most stable configuration and helps greatly to avoid vibration. I am using manual focus and aperture priority mode with the lowest possible aperture setting (which depends on the zoom setting). I adjust the brightness of the illumination to ensure that the camera sets the exposure time to 1/100 second or faster. This avoids "dangerous" exposure times, usually between 1/50 second and 1 second [5]. I am using a trinocular phototube that has a 38 mm ISO port.
The choice of camera fell on the DSC-S70 since I already owned one. To connect this camera to the microscope, I use a photo-relay lens from Nikon, called an MDC lens. I got this lens with the Coolpix camera and had only to craft a simple connector to fit it to the DSC-S70. I will explain all parts in the next section, entitled "The Individual Parts".

Picture 1: "Freehand" image of human kidney, t.s. of cortical zone, through standard eyepiece (20 mm FOV) using 40x objective.

Let's look at some more details of the DSC-S70. The CCD of the DSC-S70 is 1/1.8" in size (diagonal 8.933 mm). It is a frame readout CCD image sensor with square pixels for color cameras. Its designation is ICX252AQ. This silicon, solid-state image sensor has RGB primary color mosaic filters with a unit cell size of 3.45 µm (H) by 3.45 µm (V). This sensor also supports Sony's Super HAD CCD technology (HAD = Hole-Accumulation Diode). The A/D conversion is done with a resolution of 12 bits. The Carl Zeiss lens system goes from 7-21 mm (34 to 102 mm equivalent) with an aperture of F2.0 to F2.5.

The Individual Parts
In this chapter, I introduce all necessary parts to connect a Sony digital camera via a Nikon MDC relay lens to a C-mount adapter mounted on a phototube. One of the disadvantages of this particular consumer electronics camera is its non-removable lens. Because of this, the photographer cannot mount this camera directly on a C-mount but will require a converter lens that is very similar to an eyepiece. I chose Nikon's MDC relay lens designed for the Coolpix 995 camera. It has a C-mount thread (see picture 2). The MDC lens is a very expensive lens (over $600). Most certainly, there are less expensive ways to mount a low-cost digital camera to a microscope, but the principles are very similar. A commercially available relay lens for the DSC-S70 with a C-mount thread costs around $400 [6]. But remember, such a relay lens is a special part.
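As a quick sanity check on the sensor figures quoted above (this calculation is not part of the original article), the pixel count and unit cell size can be combined to estimate the active imaging area:

```python
import math

# Figures quoted in the text for the Sony DSC-S70 (ICX252AQ sensor)
pixels_h, pixels_v = 2048, 1536      # maximum image resolution
pitch_mm = 3.45e-3                   # unit cell size: 3.45 um square

# Active-area dimensions implied by pixel count x pixel pitch
width_mm = pixels_h * pitch_mm       # ~7.07 mm
height_mm = pixels_v * pitch_mm      # ~5.30 mm
diag_mm = math.hypot(width_mm, height_mm)

print(f"active area: {width_mm:.2f} x {height_mm:.2f} mm, diagonal {diag_mm:.2f} mm")
```

The computed ~8.83 mm active-area diagonal sits just below the quoted 8.933 mm for the 1/1.8" format, which is plausible since sensors carry extra non-imaging pixels outside the active array.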
It is not sold in high volume. Therefore, the price for a reasonable converter or relay lens is pretty high. Furthermore, it is a lens that might not be frequently used by professional photographers working in the field of photomicrography. When professional photographers use digital cameras on a microscope, they mostly prefer an image sensor with a CCD that has a great quantum efficiency and low noise but comes without a built-in lens system. There are many companies that produce great CCDs (such as Sony and Kodak). A good specialized CCD camera for photomicrography can easily cost more than $5000 (without a lens!). But the prices are really coming down. There is a cheaper alternative to a CCD camera, sold by Vitana Corp (see http://www.pixelink.com/), called the PixeLink PL-A642 FireWire camera. This camera has a CMOS sensor, which offers a decent resolution. However, currently available CMOS sensors are much noisier than CCDs. So far, I do not know of any high-end digital camera that uses CMOS image sensors [7].

Picture 2: (Left) Nikon MDC relay lens (part number 82014). The small ring located on the left side of this picture had to be removed. The field of view (or field number) of this lens is 18 mm.
Picture 3: (Center) Parts for this setup: picture on left side shows Y-T TV tube with 1.0x C-mount, VAD-S70 adapter and MDC lens with "homemade" adapter ring. Picture on right side shows all parts connected.
Picture 4: (Right) Dimensions of "homemade" adapter. The image below shows the small ring that must be removed from the MDC lens.

Downloaded from https://www.cambridge.org/core. Carnegie Mellon University, on 06 Apr 2021 at 01:16:40, subject to the Cambridge Core terms of use, available at https://www.cambridge.org/core/terms. https://doi.org/10.1017/S1551929500058442
I use a trinocular tube E2-TF for a Nikon Eclipse microscope. It has a 38 mm ISO port. Connected to this port is a Nikon ISO 38 mm Y-T TV tube with a D10NLC 1.0x C-mount adapter from Diagnostic Instruments. It is important to mention that the 1.0x C-mount has no built-in lens system. It is just a simple connector, which costs around eighty dollars. The Nikon MDC lens connects to this 1.0x C-mount adapter. The whole setup is parfocal with the rest of the microscope, which means that once the image is in focus, I can look through the MDC lens and no refocusing is required.

Picture 5: 100 diatoms forming a beautiful rosette (10x objective). Klaus Kemp made this exhibition slide in 2001.

A "homemade" connector is mounted onto the Nikon MDC lens. (In the next paragraph, I will describe this connector in more detail.) Finally, the other side of this "homemade" connector is connected to a Sony VAD-S70 adapter, which can be purchased from Sony for $35. This adapter allows using 52 mm filters with the DSC-S70, DSC-S75, and DSC-S85. After the camera is connected to the VAD-S70 adapter, the setup is complete. To summarize, I required the following seven parts: Nikon trinocular viewing body, Nikon TV tube, 1.0x C-mount adapter, Nikon MDC lens, "homemade" connector, Sony VAD-S70 52 mm filter adapter, and Sony DSC-S70 digital camera (see picture 3).

What is this "homemade" connector? When looking at the bottom part of picture 4 (right side), the reader can see a thread of 6 mm height. A 3 mm ring has already been removed to increase the threaded border to 6 mm. This 3 mm ring is replaced by a larger disk of aluminum with a diameter of 52 mm and a thickness of 3 mm. The aluminum disk has a threaded aperture in the center that just fits the thread of the MDC lens. This disk can easily be mounted inside the frame of a 52 mm filter-holder. I selected a filter-holder for a polarizing filter since such a filter-holder allows me to conveniently adjust the orientation of the camera.
I had to make sure that the front lens of the MDC lens is close to the front lens of the DSC-S70 but does not touch it for all zoom and focus settings. For my camera, a 3 mm thick disk lets the Nikon MDC lens come sufficiently close to the front lens of the DSC-S70 when using a standard filter-holder. If you are using a DSC-S75, or even a DSC-S85, please ensure that this works for you! To find out how far the MDC lens can stick out, one could mount this "homemade" disk onto the VAD-S70 adapter, without a Nikon MDC lens, and measure the distance carefully. To measure this distance, I put the camera into the zoom/focus position where the lens sticks out the most. Then I used a Q-tip, slightly wetted with Windex, and stuck it into the aperture of the aluminum disk until it touched the lens. Pressing the Q-tip towards the thread of the aluminum disk marked the side of the Q-tip, which allowed me later to measure this distance with a ruler. See picture 4 for an illustration of the geometry of this setup.

This Setup in Action
Photomicrography with a digital camera is a complex topic. It requires great care. Many tricks and recommendations, which
I encountered a couple of limitations with the Sony DSC-S70. First, I was not able to control the camera directly from a computer. There is no software interface available that will allow the user to program and control this camera directly from his application. Picture 6; Diatoms from "diatom test plate 8 forms", see [4], Nitzschia Sigma, Stauroneis Phoenocenteron, Navicula Lyra with 10x objective (small image), and Navicula Lyra with 40x objective (large image). Second, it is very difficult to focus the image with this camera, as the reader might be able to see from some of the pictures. Focusing is very cumbersome with this camera. Especially when using lenses with high numerical aperture, the job of controlling image focus is very difficult [9]. This situation improves when us- ing a digital camera that can be controlled from a high-resolution computer display. In this case, one can focus the image on the computer display. With the DSC-S70 camera, one must control the focus using the small display of the camera. The auto focus did not work for me and I had to switch to manual focus. An example is illustrated in picture 6. (It was quite difficult to focus exactly on the center of Navicula Lyra.) - Third, the quality of these pictures does not even come close to what I am used to when looking directly through the microscope. But this "subjective impression" is mainly based on the fact that any camera is very different from our eyes. For all performance studies of optical systems, the observer's characteristics {such as his eyes and his physiology) should be included [12]. In spite of all these disadvantages, I am pleased by this camera's resolution and ease of use. Having access to images shortly after taking them is very useful. It is a great tool that helps to communicate information between microscopists. The reader can find many reports about using a Nikon Coolpix for photomi- crography [13]. 
Using a Sony DSC-S70 makes a lot of sense, but it is much more difficult to find information about mounting such a Sony camera to a light microscope.

Acknowledgements
I want to thank Dr. Jean-Luc Truche for many stimulating discussions. My thanks also go to Dr. Jerry Dowell, who provided some valuable suggestions.

References
[1] Fred Rost and Ron Oldfield, Photography with a Microscope, Cambridge University Press (2000). See chapter 17 about digital, confocal and video techniques. [2] Werner Nachtigall, Exploring with the Microscope, Sterling Publishing Co., New York (1996). [3] One year after the introduction of the DSC-S70, Sony introduced the DSC-S75. At the time of writing, the S75 is still sold by Sony. It has the identical CCD and lens system and can therefore be used in place of the DSC-S70. [4] This diatom test plate with 8 different forms is purchased from Carolina Biological Supply Company, 2700 York Rd., Burlington, NC 27215, US (see http://www.carolina.com/). [5] See page 62 in [2]. [6] Adapter sold by Micro-Tech Lab (http://www.micro-tech-lab.de/). The adapter is called LM-Scope digital adapter #DASC70 (made in Austria). I have not used this adapter and do not know how well it performs. [7] But things might change soon (see http://www.foveon.com/X3_tech.html to find out more about the "first" full-color image sensor). [8] PaintShop Pro is currently sold as version 7.04 by JASC (see http://www.jasc.com/products/psp/). It costs slightly over 100 dollars. It requires Microsoft Windows operating systems. [9] Depth of field (DOF) is the area in front of and behind the object that is in acceptable focus. The depth of field is inversely proportional to the square of the numerical aperture; DOF depends on the wavelength. [10] ImageJ is provided by the National Institutes of Health, US, and requires a JAVA virtual machine. It is available at http://rsb.info.nih.gov/ij/ free of charge. [11] Klaus D. Kemp has a web page at http://www.diatoms.co.uk/.
[12] Vasco Ronchi, Optics - The Science of Vision, Dover Publications, Inc., New York (1991). [13] Theodore M. Clarke, Fitting a Student Microscope with a Consumer Digital Camera, Microscopy Today, 02-3 (2002), and references therein.
TITLE
The evidence for automated grading in diabetic retinopathy screening

AUTHORS
Alan D Fleming 1, Sam Philip 2, Keith A Goatman 1, Gordon J Prescott 2, Peter F Sharp 1, John A Olson 2

AFFILIATIONS
1. College of Life Science and Medicine, University of Aberdeen, Foresterhill, AB25 2ZD
2. Diabetes Retinal Screening Service, David Anderson Building, Foresterhill Road, Aberdeen AB25 2ZP

CORRESPONDING AUTHOR
Alan Fleming, Medical Physics Building, School of Medicine and Dentistry, Aberdeen University and Grampian University Hospitals, Foresterhill, Aberdeen AB25 2ZD, Scotland
T: (+44) (0)1224 553195
F: (+44) (0)1224 552514
E: a.fleming@abdn.ac.uk
It also considers economic model analyses and papers describing the effectiveness of manual grading in order that the effect of replacing stages of manual grading by automated grading can be judged. In conclusion, the review shows that there is sufficient evidence to suggest that automated grading, operating as a disease / no disease grader, is safe and could reduce the workload of manual grading in diabetic retinopathy screening. Key words diabetic retinopathy, screening, computer-assisted image analysis, imaging, telemedicine 3 INTRODUCTION Early detection of diabetic retinopathy allows this disease to be treated before it becomes symptomatic so that sight loss can be limited. In the UK, diabetic retinopathy screening is run by the NHS aiming to reduce the incidence of blindness in people with diabetes and is based on digital photography and slit-lamp examination for technical failures [1]. Typically around 70% of screened patients are normal [2], and hence the initial grading task is to perform disease / no disease grading, identifying and removing all normal images and retaining those with some disease or other abnormality for further scrutiny. This is followed by one or more stages of full-disease grading and possibly arbitration grading. The scale of this task is huge; in the UK screening programmes there are approximately two million retinal image sets that require grading each year. Therefore, computer automation has been considered to assist the grading process [3,4]. The intention of this review is to guide potential users of automated retinopathy grading about whether current systems are suitable for introduction into screening. Therefore it only includes papers on automated disease detection that cover studies on large datasets similar to what would be found in a screened population. 
A section on manual grading is also included since a decision on which of the alternative systems will provide the greatest benefit requires knowledge of the performance of each option. Indications for the feasibility of automated grading Most regional screening programmes generate tens to hundreds of thousands of images to be graded each year. Computers excel at repetitive tasks and, if the necessary infrastructure is in place, it is likely that computer grading will be cheaper than manual grading. There are several indications that computers would be capable of this task;  Even if a computer is only capable of grading at a disease / no disease level, this would still provide a large reduction in the manual grading workload. This is because a large proportion of the patients attending diabetic retinopathy screening are normal.  The images are relatively constrained in that normal images show a very standard structure. They are also uncluttered in that lesions are usually separate from normal features.  The digital network infrastructure is already in place that could allow centralised computer processing of screening programme images. In addition, over the last two decades, many research groups have shown that it is possible to develop software that, in small tests, can identify retinal lesions to a high accuracy and details may be found in previous reviews that have focussed mainly on the algorithms involved [5-8]. However only large studies can inform decisions on the introduction of automated grading into screening since there is no certainty that the performance found using a small image set will be maintained in practice. The role that automated grading may play in screening is open to debate. Since most of the available evidence relates to automated “disease / no disease” grading, this will be the main emphasis of this article. However, a section is devoted to other roles. 
Terminology

The term "referable retinopathy" is used here to mean retinopathy, including maculopathy, more serious than mild non-proliferative retinopathy as defined by the Early Treatment Diabetic Retinopathy Study scheme [9], or background retinopathy in the English or Scottish schemes [10,11]. The term "maculopathy" is used to mean the presence of photographic surrogate markers indicative of macular oedema requiring more frequent observation than the standard screening interval, or referral to ophthalmology [10,11]. The grading roles of "disease / no disease" grading and "full disease" grading are explained in the UK National Screening Committee Workbook [10]. Disease / no disease grading separates episodes showing any disease or abnormality from those which are normal. Full disease grading takes place after disease / no disease grading and identifies episodes requiring a clinical outcome other than recall at the default screening interval; it assigns a grade according to disease severity.

METHODS - SELECTION OF STUDIES

A search was made for peer-reviewed journal studies using ISI Web of Knowledge with the specification for paper topic: (automat* OR comput*) AND (detection OR diagnosis OR grading) AND diabet* AND retinopathy. The search found 747 papers, which were manually examined by title and abstract to find those whose main topic was automated grading or computer detection of diabetic retinopathy and its associated signs in retinal photographs. This resulted in 82 papers. The following criteria were then used to select those which may be suitable to draw conclusions on the use of automated retinopathy grading within healthcare:

1. The study should be an assessment of an automated image analysis system applied to detection of retinopathy in retinal photographs.
2. The results should be reported for referable retinopathy on a per-patient or per-episode basis, or be based on such data.
3.
The study should be based on tests with at least 200 subjects with, and 200 without, referable retinopathy.

Criterion 3 means that sensitivity and specificity for detection of referable retinopathy can be estimated with a confidence interval of less than ±5% at a sensitivity or specificity of 90%. Seven papers satisfied criteria 1 and 2 but were rejected because of study size [12-18], the largest having 95 patients with referable retinopathy [12]. The remaining papers were divided into efficacy studies, reporting original results for the sensitivity or specificity of detection of referable retinopathy, and economic studies, describing economic analyses based on efficacy results presented elsewhere.

THE EVIDENCE

Six efficacy studies and two economic studies were identified that satisfy the above criteria; they are based on three systems (Utrecht / Iowa, Aberdeen and Brest), as shown in table 1. Only the Utrecht / Iowa and Aberdeen systems can be considered complete automated grading systems, since only these have an image quality assessment module. Table 2 shows the performance demonstrated in each paper and includes further study details.

Table 1: Details of the automated systems described in the studies listed in table 2. Two of the systems contain multiple modules; however, not all studies used all modules.

Utrecht / Iowa system (Abramoff 2008 [19]; Niemeijer 2009 [20]; Abramoff 2010 [21]). Developed at the Image Sciences Institute, Utrecht, and the University of Iowa. Photographic protocol: two fields per eye, mydriasis as required (99.5%). Modules: red lesion detection; exudate detection; assessment of quality.

Aberdeen system (Philip 2007 [22]; Scotland 2007 [23]; Fleming 2010 [24]; Scotland 2010 [25]; Fleming 2010 [26]). Developed at the University of Aberdeen. Photographic protocol: one field per eye, mydriasis as required (80%). Modules: microaneurysm detection; haemorrhage detection; exudate detection; assessment of quality.

Brest system (Abramoff 2010 [21]). Developed at Brest University Hospital, France. Photographic protocol: two fields per eye, mydriasis as required. Module:
Microaneurysm detection.

Table 2: Studies on automated image analysis in diabetic retinopathy that satisfy the selection criteria. Study size is given along with the number of cases having referable retinopathy.

Philip et al 2007 [22]. Efficacy study. Aberdeen system: microaneurysm detection and quality assessment. Study size 6672 (330 referable). Case selection: screening programme cohort, all cases included. Detection rates: referable retinopathy / ungradable cases 98.9%*; referable retinopathy 97.9%; mild retinopathy 80.9%; ungradable cases 99.5%. Specificity 52.4%*.

Scotland et al 2007 [23]. Economic modelling, as for Philip 2007 [22].

Abramoff et al 2008 [19]. Efficacy study. Utrecht / Iowa system: haemorrhage/microaneurysm and exudate detection and quality assessment. Study size 7689 (378 referable). Case selection: screening programme cohort, ungradable cases removed. Detection rates: referable retinopathy 84.4%*; ungradable cases 80%. Specificity 64.3%*.

Niemeijer et al 2009 [20]. Efficacy study. Utrecht / Iowa system: haemorrhage/microaneurysm and exudate detection and quality assessment. Study size 15000 (394 referable). Case selection: screening programme cohort, all cases included. Detection rates: referable retinopathy / ungradable cases 93.0%. Specificity 60.0%.

Fleming et al 2010 [24]. Efficacy study (automated system comparison). Aberdeen system: (1) microaneurysm detection and (2) microaneurysm, haemorrhage and exudate detection, with quality assessment. Study size 7568 (1253 referable). Case selection: screening programme, stratified selection. Detection rates: referable retinopathy / ungradable cases 97.3%*; referable retinopathy 96.9%; mild retinopathy 79.0%; maculopathy 95.4%*; proliferative referable retinopathy 97.4%; non-proliferative referable retinopathy 98.9%; ungradable cases 98.8%. Specificity 49.0%.

Scotland et al 2010 [25]. Economic modelling, as for Fleming 2010 [24].

Abramoff et al 2010 [21]. Efficacy study (automated system comparison). (1) Utrecht / Iowa system: haemorrhage/microaneurysm detection (possibly combined with quality assessment*); (2) Brest system: microaneurysm detection.
Study size 15980 (793 referable). Case selection: screening programme cohort, ungradable cases removed. Detection rates: Utrecht / Iowa system, referable retinopathy 90.0% (specificity 47.7%); Brest system, referable retinopathy 90.0% (specificity 43.6%).

Fleming et al 2010 [26]. Efficacy study. Aberdeen system: microaneurysm detection and quality assessment. Study size 33535 (2214 referable). Case selection: screening programme cohort, all cases included. Detection rates: referable retinopathy / ungradable cases 98.8%; referable retinopathy 97.8%*; mild retinopathy 83.9%; maculopathy 97.3%; proliferative referable retinopathy 100%; non-proliferative referable retinopathy 100%; ungradable cases 99.8%. Specificity 41.1%*.

*The paper does not make it clear whether quality assessment was included or not. *Calculated from figures in the paper but not reported directly therein.

The efficacy of automated grading

Table 2 illustrates that the Aberdeen system achieved higher sensitivities than the Utrecht / Iowa system for referable retinopathy (including ungradable cases), though the average specificity over the papers was lower for the Aberdeen system. When comparing sensitivities and specificities, it should be borne in mind that there is a trade-off between the two: during development, the sensitivity of a system can be increased at the expense of specificity and vice versa. A comparison between two systems is difficult, except where they are tested in the same study, because the study populations differ and because different people created the reference grading. Only one direct comparison has been made between two systems, the Utrecht / Iowa and Brest systems [21]; this considered disease in gradable images. The Utrecht / Iowa and Aberdeen systems have been designed in the context of different photographic protocols: the Utrecht / Iowa system is based on a protocol used in the Netherlands with two photographs per eye, while the Aberdeen system is based on the Scottish protocol with a single field per eye.
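The sensitivity-specificity trade-off described above can be illustrated with a toy score-thresholding example: an automated grader typically outputs a continuous score, and the operating point is set by a tunable threshold. All scores and thresholds below are synthetic and purely illustrative.

```python
# Toy illustration of the sensitivity/specificity trade-off. Raising the
# referral threshold raises specificity and lowers sensitivity, and vice
# versa. All numbers here are synthetic.

def sens_spec(scores_diseased, scores_normal, threshold):
    """Sensitivity and specificity when cases scoring >= threshold are referred."""
    tp = sum(s >= threshold for s in scores_diseased)
    tn = sum(s < threshold for s in scores_normal)
    return tp / len(scores_diseased), tn / len(scores_normal)

diseased = [0.9, 0.8, 0.7, 0.6, 0.4]   # synthetic scores for diseased cases
normal = [0.5, 0.4, 0.3, 0.2, 0.1]     # synthetic scores for normal cases

for t in (0.3, 0.5, 0.7):
    sens, spec = sens_spec(diseased, normal, t)
    print(f"threshold {t}: sensitivity {sens:.0%}, specificity {spec:.0%}")
```

This is why single sensitivity/specificity pairs from different studies are hard to compare: each published pair is just one operating point on a curve, chosen during development.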
Detection rates for proliferative retinopathy have been reported only for the Aberdeen system and were found to be 100% [22,26] and 97.4% [24]. These studies reported detection rates for maculopathy of 97.3%, 97.8% and 95.4% (calculated figures) respectively. Three of the studies avoided selection criteria that required examination of the images, one using the Utrecht / Iowa system [20] and two using the Aberdeen system [22,26]. Unfortunately, one of these has limited data on clinical performance because it is published in a technical journal [20]. Two studies with the Utrecht / Iowa system involved removal of ungradable cases in a major part of the analysis, and one study with the Aberdeen system used data stratified according to whether retinopathy was referable or not [24]. One study with the Aberdeen system used 7 graders to arbitrate the false negatives of automated grading relative to the screening programme's manual grading, thereby providing a consensus grade for these images [26]. It showed that no cases of non-maculopathy referable retinopathy were missed by the automated system and that the detection rate for maculopathy was 97.8%. An analysis was also made of false negatives with the Utrecht / Iowa system [19]: of 87 cases of referable retinopathy missed according to the manual grading, 24 had large haemorrhages or neovascularisation, 23 had small haemorrhages, 18 had exudates or cottonwool spots, and 22 were not diabetic retinopathy according to a second expert. In Philip et al 2007, two false negative eyes, graded as having proliferative retinopathy but missed by the Aberdeen system, were found to have only mild diabetic retinopathy at eye clinic examination [22]. Two studies using different systems have looked at the relative benefits of using detectors for the various dot-lesion types.
For the Utrecht / Iowa system, it was found that detection of referable retinopathy (including maculopathy) is improved by using a combination of bright and red lesion detectors rather than either detector alone [20]. These results are compatible with those reported for the Aberdeen system, which had significantly better detection of referable retinopathy when using microaneurysm, haemorrhage and exudate detection in combination compared to microaneurysm detection alone [24]; the improvement was specifically for the maculopathy cases. For proliferative retinopathy, the addition of haemorrhage and exudate detection had no effect on performance. All of the studies listed in table 1 used dot-lesion detection; vascular abnormalities were not explicitly detected. This suggests that detection of dot lesions, without detection of vascular abnormalities, is sufficient for referable retinopathy detection.

The efficacy of manual grading

The impact of replacing part of manual grading with automated grading can only be known if the efficacy of both manual and automated grading is known. The evidence concerning the "reliability", meaning the reproducibility, of manual grading has been reviewed in Benbassat and Polak 2009 [27]. The results of the reviewed studies show that reproducibility of grading is very rarely 100%, implying also that sensitivity is below 100%.

Table 3: Results of studies assessing sensitivity and specificity of manual grading of referable retinopathy. Papers using the term "sight threatening retinopathy" were included.
Liesenfeld 2000 [28]: study size 115 (13 with referable retinopathy); sensitivity 85%, specificity 90% (median of 6 results).
Olson 2003 [29]: 545 (55); sensitivity 93%, specificity 87%.
Stellingwerf 2004 [30]: 197 (70); sensitivities 86% and 92%, specificities 93% and 93% (results for TIFF images).
Arun 2006 [31]: 498 (62); sensitivity 93.5%, specificity 97.8%.
Ruamviboonsuk 2006 [32]: 400 (44); sensitivities 93% and 100%, specificities 97% and 73% (best sensitivity and best specificity results are given).
Philip 2007 [22]: 6722 (330); sensitivity 99.1%, specificity 73.9%*.
Abramoff 2008 [19]: 500 (number with referable retinopathy unknown); sensitivities 62% and 85%, specificities 84% and 89% (best sensitivity and best specificity results are given).
* Calculated from figures in the paper but not reported directly therein.

Some example results from studies that reported the sensitivity and specificity of manual grading for referable retinopathy are given in table 3. Sensitivity and specificity are easier to interpret than the kappa agreement coefficient which, though used by many authors, has the disadvantage that it is affected by prevalence. It should be borne in mind that the reference for these assessments is itself a judgement, though hopefully produced in a more robust manner than the grading being assessed. It is likely that there are great differences in performance between professional groups and between individuals, as suggested by Ruamviboonsuk et al 2006 [32]; this and two other studies compared multiple graders on the same image set [19,30]. Philip et al 2007 is the only study to satisfy the size criterion applied in this review for the selection of studies on automated grading. Cases of referable retinopathy are not only missed during grading; they are also missed because of the nature of digital photography. Scanlon et al 2003 measured the sensitivity and specificity of digital photography against slit-lamp biomicroscopy [33]; these were 87.8% and 86.1%, respectively, for mydriatic 2-field photography.

Comparing automated and manual grading

A direct comparison of automated and manual grading, using the same dataset, has been made in only one study [22].
Both automated and manual graders were operating as disease / no disease graders. For the manual system, a sensitivity of 99.1% was reported for detection of referable retinopathy at a specificity of 73.9% (calculated figure). For automated grading, the sensitivity was 97.9% at a specificity of 52.4% (calculated figure). The difference between automated and manual grading sensitivities for referable retinopathy was not statistically significant at the 5% level. For mild retinopathy and for ungradable images, the automated system had higher sensitivity than the manual system. The sensitivities and specificities of detection of referable retinopathy by manual graders and by automated grading, as listed in tables 2 and 3, are displayed in figure 1. Specificity is consistently higher for manual than for automated grading. This suggests that automated grading cannot completely replace manual grading but, because automated grading has a high sensitivity, it could safely reduce workload. The following considerations should be taken into account when comparing the results shown in figure 1. Firstly, some of the manual grading results displayed in table 3 and figure 1 may have been from graders operating at a level of specificity which is suitable as the final specificity of screening, that is, a level at which ophthalmology services are not swamped by unnecessary referrals. A trade-off exists between sensitivity and specificity, so this specificity may only be achievable by operating at lower levels of sensitivity. Secondly, the lower specificity of the automated grading systems compared to manual grading means that patients in whom automated grading detected any abnormality would have to be referred to manual grading. A combined automated and manual grading system would therefore be necessary so that the specificity of the screening programme is acceptable.
However, the sensitivity of multiple sequential grading stages is inevitably lower than the sensitivity of any individual stage. A comparison between the combined performances of complete grading systems would be more useful than the comparison of individual grader performances shown in figure 1.

Figure 1. Specificity and sensitivity, presented in receiver operating characteristic format, for detection of referable retinopathy as reported in the studies listed in tables 2 and 3. The point labelled by an asterisk is for a study which reported results only for referable retinopathy and ungradable cases grouped together [20].

The economics of automated grading

Modelling of complete grading systems was described in two economic analyses, linked to two of the efficacy studies for the Aberdeen system. These modelled a system consisting of a combination of automated and manual grading, and a fully manual system. Scotland et al 2007 [23] used the efficacy figures reported in Philip et al 2007 [22] and predicted that replacement of manual disease / no disease grading by an automated system would result in a cost saving of around £200,000 when scaled to the population of Scotland. The lower specificity of automated compared to manual grading would mean that a greater number of full disease gradings must be performed; nonetheless, the overall workload would be reduced. The efficacy results presented in Fleming et al 2010 [24] were also subjected to economic analysis in Scotland et al 2010 [25]. This concluded that inclusion of exudate and haemorrhage detection produces a more cost-effective algorithm than microaneurysm detection alone (£68 additional cost per additional case of referable retinopathy detected). It also found that automated grading would significantly reduce grading costs, by £212,695 in Scotland.
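The point above about sequential grading stages can be sketched numerically. In a serial pipeline where a case is referred onward only if every stage flags it, stage sensitivities multiply, while stage false-positive rates also multiply (so combined specificity rises). The stage figures below are assumed for illustration and are not values taken from the reviewed studies.

```python
def combined_serial(stages):
    """Combined sensitivity/specificity of sequential refer-on-positive stages.

    A diseased case reaches the final referral only if every stage flags it,
    so sensitivities multiply. A normal case is wrongly referred only if every
    stage flags it, so (1 - specificity) terms multiply.
    """
    sens = 1.0
    fp_rate = 1.0
    for stage_sens, stage_spec in stages:
        sens *= stage_sens
        fp_rate *= (1.0 - stage_spec)
    return sens, 1.0 - fp_rate

# Assumed illustrative figures: automated triage followed by a manual grader.
sens, spec = combined_serial([(0.98, 0.50), (0.99, 0.90)])
print(f"combined sensitivity {sens:.1%}, combined specificity {spec:.1%}")
```

With these assumed figures, the combined specificity rises well above that of the automated stage alone, while the combined sensitivity falls slightly below the weakest stage, which is exactly the behaviour the review describes.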
Other roles for automated grading

The efficacy results for automated grading are most easily interpreted in terms of a disease / no disease grading role. This section looks briefly at other possible roles.

Full disease grading: A full disease grader must operate with a specificity above what can be achieved by automated grading, so this role does not seem to be achievable by current systems.

Verification grading: An automated system working in parallel with, and as a verification of, manual grading has been suggested as the role of automated image analysis in screening for breast cancer [34]. A similar role may be considered useful for automated retinopathy grading. Given that full disease grading is not practical with existing automated grading systems, this would have to be at a disease / no disease level. There would be more discrepancies than there would be between manual graders verifying each other, since automated grading produces more false positive results than manual grading.

Quality assurance: Another possible role for automated grading is quality assurance, since this is a significant component of the work involved in maintaining a screening programme [35]. Internal quality assurance requires checking samples of grading results that are large enough to detect suboptimal performance by graders. Automated grading in this role might reduce, but probably not eliminate, the burden of manual quality assurance grading. Before automated grading is used in this role, it would be essential to know its ability to detect images which were manual grading failures; on average, these are probably more difficult than routine cases. Such a test would be easy to perform but does not seem to have been reported.

Manual grading assistant: To assess the performance of automated grading as an assistant to manual grading, it would be necessary to test the performance of manual graders with and without this assistance.
Does automated lesion detection assist manual graders, or does attending to the additional information provided by automated grading increase their work? Studies on this issue do not seem to have been reported.

CONCLUSIONS

This review has evaluated the evidence on whether automated grading systems could be used effectively in diabetic retinopathy screening. Automated grading using detectors of multiple types of dot lesion shows increased performance compared to automated grading using a detector of a single lesion type. However, this improvement may be restricted to the detection of maculopathy; microaneurysm detection alone is capable of detecting 100% of cases of proliferative retinopathy. The most likely role for automated grading appears to be disease / no disease grading. Automated grading has been shown to have very high detection rates for referable retinopathy and for ungradable cases. For the Aberdeen system, the detection rate for proliferative retinopathy approaches 100%. There is no evidence that the replacement of manual grading by automated grading would reduce the detection rates for disease; studies on manual grading show that its detection rates are well below 100%. The specificity of automated grading is lower than that of manual grading. Despite this, the specificity of automated grading is sufficiently high to provide a substantial reduction in manual grading workload, from which cost savings would be expected. From economic modelling based on large effectiveness studies, it appears that automated disease / no disease grading would result in savings both to running costs and to workload.

CONFLICT OF INTEREST

Commercial implementation associated with some of the referenced work may provide some remuneration for the University of Aberdeen, NHS Grampian, Alan Fleming, John Olson and Peter Sharp. Funding for Alan Fleming was provided by Medalytix Ltd under an agreement with Scottish Health Innovations Limited.
The funding sources for all authors had no input into the study design; the collection, analysis and interpretation of data; the writing of the report; or the decision to submit the paper for publication.

REFERENCES

[1] Scanlon PH. The English national screening programme for sight-threatening diabetic retinopathy. J Med Screen 2008;15:1-4
[2] Sharp PF, Olson J, Strachan F, Hipwell J, Ludbrook A, O'Donnell M, Wallace S, Goatman KA, Grant A, Waugh N, McHardy K, Forrester JV. The value of digital imaging in diabetic retinopathy. Health Technology Assessment 2003;7(30)
[3] Leese GP, Morris AD, Swaminathan K, Petrie JR, Sinharay R, Ellingford A, Taylor A, Jung RT, Newton RW, Ellis JD. Implementation of national diabetes retinal screening programme is associated with a lower proportion of patients referred to ophthalmology. Diabetic Med 2005;22(8):1112-5
[4] European conference on screening for diabetic retinopathy in Europe, May 2006. http://www.drscreening2005.org.uk/conference_report.doc (accessed 19 April 2011)
[5] Teng T, Lefley M, Claremont D. Progress towards automated diabetic ocular screening: a review of image analysis and intelligent systems for diabetic retinopathy. Med Biol Eng Comput 2002;40:2-13
[6] Patton N, Aslam TM, MacGillivray T, Deary IJ, Dhillon B, Eikelboom RH, Yogesan K, Constable IJ. Retinal image analysis: concepts, applications and potential. Prog Retin Eye Res 2006;25:99-127
[7] Winder RJ, Morrow PJ, McRitchie IN, Bailie JR, Hart PM. Algorithms for digital image processing in diabetic retinopathy. Comput Med Imag Grap 2009;33:608-22
[8] Abràmoff MD, Garvin M, Sonka M. Retinal imaging and image analysis. IEEE Reviews in Biomedical Engineering 2010;3:169-208
[9] Wilkinson CP, Ferris FL III, Klein RE, Lee PP, Agardh CD, Davis M, Dills D, Kampik A, Pararajasegaram R, Verdaguer JT. Proposed international clinical diabetic retinopathy and diabetic macular edema disease severity scales. Ophthalmology 2003;110:1677-82
[10] UK National Screening Committee. Essential elements in developing a diabetic retinopathy screening programme. Workbook 4.3, June 2009. www.retinalscreening.nhs.uk (accessed 3rd May 2011)
[11] Scottish Diabetic Retinopathy Screening Collaborative. Scottish diabetic retinopathy grading scheme 2007 v1.1. http://www.ndrs.scot.nhs.uk/ClinGrp/Docs/Grading%20Scheme%202007%20v1 (accessed 31st May 2011)
[12] Hipwell JH, Strachan F, Olson JA, McHardy KC, Sharp PF, Forrester JV. Automated detection of microaneurysms in digital red-free photographs: a diabetic retinopathy screening tool. Diabetic Med 2000;17(8):588-94
[13] Usher D, Dumskyj M, Himaga M, Williamson TH, Nussey S, Boyce J. Automated detection of diabetic retinopathy in digital retinal images: a tool for diabetic retinopathy screening. Diabetic Med 2004;21(1):84-90
[14] Hansen AB, Hartvig NV, Jensen MS, Borch-Johnsen K, Lund-Andersen H, Larsen M. Diabetic retinopathy screening using digital non-mydriatic fundus photography and automated image analysis. Acta Ophthalmol Scand 2004;82(6):666-72
[15] Larsen M, Gondolf T, Godt J, Jensen MS, Hartvig NV, Lund-Andersen H, Larsen N. Assessment of automated screening for treatment-requiring diabetic retinopathy. Curr Eye Res 2007;32(4):331-6
[16] Acharya UR, Chua CK, Ng EYK, Yu W, Chee C. Application of higher order spectra for the identification of diabetes retinopathy stages. J Med Syst 2008;32(6):481-8
[17] Acharya UR, Lim CM, Ng EYK, Chee C, Tamura T. Computer-based detection of diabetes retinopathy stages using digital fundus images. P I Mech Eng H 2009;223(H5):545-53
[18] Quellec G, Russell SR, Abramoff MD. Optimal filter framework for automated instantaneous detection of lesions in retinal images. IEEE T Med Imag 2011;30(2):523-33
[19] Abràmoff MD, Niemeijer M, Suttorp-Schulten MSA, Viergever MA, Russell SR, van Ginneken B.
Evaluation of a system for automatic detection of diabetic retinopathy from color fundus photographs in a large population of patients with diabetes. Diabetes Care 2008;31(2):193-8
[20] Niemeijer M, Abramoff MD, van Ginneken B et al. Information fusion for diabetic retinopathy CAD in digital color fundus photographs. IEEE T Med Imag 2009;28:775-85
[21] Abràmoff MD, Reinhardt JM, Russell SR, Folk JC, Mahajan VB, Niemeijer M, Quellec G. Automated early detection of diabetic retinopathy. Ophthalmology 2010;117:1147-54
[22] Philip S, Fleming AD, Goatman KA, Fonseca S, McNamee P, Scotland GS, Prescott GJ, Sharp PF, Olson JA. The efficacy of automated "disease/no disease" grading for diabetic retinopathy in a systematic screening programme. Br J Ophthalmol 2007;91:1512-7
[23] Scotland GS, McNamee P, Philip S, Fleming AD, Goatman KA, Prescott GJ, Fonseca S, Sharp PF, Olson JA. Cost-effectiveness of implementing automated grading within the national screening programme for diabetic retinopathy in Scotland. Br J Ophthalmol 2007;91:1518-23
[24] Fleming AD, Goatman KA, Philip S, Williams GJ, Prescott GJ, Scotland GS, McNamee P, Leese GP, Wykes WN, Sharp PF, Olson JA. The role of haemorrhage and exudate detection in automated grading of diabetic retinopathy. Br J Ophthalmol 2010;94:706-11
[25] Scotland GS, McNamee P, Fleming AD, Goatman KA, Philip S, Prescott GJ, Sharp PF, Williams GJ, Wykes W, Leese GP, Olson JA. Costs and consequences of automated algorithms versus manual grading for the detection of referable diabetic retinopathy. Br J Ophthalmol 2010;94:712-9
[26] Fleming AD, Goatman KA, Philip S, Prescott GJ, Sharp PF, Olson JA. Automated grading for diabetic retinopathy: a large-scale audit using arbitration by clinical experts. Br J Ophthalmol 2010;94:1606-10
[27] Benbassat J, Polak BCP. Reliability of screening methods for diabetic retinopathy. Diabet Med 2009;26:783-90
[28] Liesenfeld B, Kohner E, Piehlmeier W, Kluthe S, Aldington S, Porta M et al.
A telemedical approach to the screening of diabetic retinopathy: digital fundus photography. Diabetes Care 2000;23:345-8
[29] Olson JA, Strachan FM, Hipwell JH, Goatman KA, McHardy KC, Forrester JV, Sharp PF. A comparative evaluation of digital imaging, retinal photography and optometrist examination in screening for diabetic retinopathy. Diabetic Med 2003;20(7):528-34
[30] Stellingwerf C, Hardus PL, Hooymans JM. Assessing diabetic retinopathy using two-field digital photography and the influence of JPEG compression. Doc Ophthalmol 2004;108:203-9
[31] Arun CS, Young D, Batey D, Shotton M, Mitchie D, Stannard KP et al. Establishing ongoing quality assurance in a retinal screening programme. Diabetic Med 2006;23:629-34
[32] Ruamviboonsuk P, Teerasuwanajak K, Tiensuwan M, Yuttitham K, Thai Screening for Diabetic Retinopathy Study Group. Interobserver agreement in the interpretation of single-field digital fundus images for diabetic retinopathy screening. Ophthalmology 2006;113:826-32
[33] Scanlon PH, Malhotra R, Thomas G, Foy C, Kirkpatrick JN, Lewis-Barned N, Harney B, Aldington SJ. The effectiveness of screening for diabetic retinopathy by digital imaging photography and technician ophthalmoscopy. Diabetic Med 2003;20:467-74
[34] Gilbert FJ, Astley SM, Gillan MGC, Agbaje OF, Wallis MG, James J, Boggis CRM, Duffy SW. Single reading with computer-aided detection for screening mammography. N Engl J Med 2008;359:1675-84
[35] Nagi DK, Gosden C, Walton C, Winocour PH, Turner B, Williams R, James J, Holt RIG. A national survey of the current state of screening services for diabetic retinopathy: ABCD-Diabetes UK survey of specialist diabetes services 2006. Diabetic Med 2009;26:1301-5

Treatment of Acne Scars Using Fractional Erbium:YAG Laser

American Journal of Dermatology and Venereology 2014, 3(2): 43-49. DOI: 10.5923/j.ajdv.20140302.04

Shakir J.
Al-Saedy1,*, Maytham M. Al-Hilo1, Salah H. Al-Shami2

1 MBChB, FICMS, DV, Consultant Dermatologist and Venereologist, Al-Kindy Teaching Hospital, Baghdad, Iraq
2 MBChB, Al-Kindy Teaching Hospital, Baghdad, Iraq

Abstract

Background: Acne is a common disorder experienced by people between 11 and 30 years of age and, to a lesser extent, by older adults. Fractional resurfacing employs a unique mechanism of action that repairs a fraction of the skin at a time. The untreated healthy skin remains intact and actually aids the repair process, promoting rapid healing with only a day or two of downtime. Objective: This study was designed to evaluate the safety and effectiveness of fractional photothermolysis (fractionated Erbium:YAG laser) in treating moderate to severe atrophic acne scars. Methods: Thirty-one females and 9 males with moderate to severe atrophic acne scarring were enrolled in this study, conducted at the Beirut Private Center for Laser Treatments in Baghdad, Iraq, during the period from March 1st, 2011 to September 1st, 2011. Fractional Er:YAG laser at a wavelength of 2940 nm was delivered to the whole face in a single pass, with two passes over the acne scar areas. Therapeutic outcomes were assessed by standardized digital photography. Results: Ten patients (25%) reported excellent improvement, twenty-one patients (50%) significant improvement, six patients (15%) moderate improvement, and four patients (10%) mild improvement in the appearance of their acne scars. Conclusion: The Er:YAG laser is an effective device for skin resurfacing, with a faster recovery time and fewer side effects than other treatment modalities.

Keywords: Acne, Atrophic acne scar, Fractional Er:YAG laser

1. Introduction

Acne is a common disorder experienced by up to 80% of people between 11 and 30 years of age and by up to 5% of older adults [1].
Several factors are incriminated in the pathogenesis of acne, including increased sebum production, abnormal follicular keratinization, colonization with Propionibacterium acnes, and a lymphocytic and neutrophilic inflammatory response [2]. The severe inflammatory response to P acnes may result in permanent disfiguring scars. Stigmata of severe acne scarring can lead to social ostracism, withdrawal from society, and severe psychological depression [3]. Patients dislike the appearance of acne, and prevention of acne scarring is often a key motivation behind treatment. Once scarring has occurred, patients and physicians are left to struggle with the options available for improving the appearance of the skin [4]. Acne scarring can be divided into 3 basic types: icepick scars, rolling scars, and boxcar scars [5]. Boxcar scars can be further subdivided into shallow or deep [6]. Other less common scars, such as sinus tracts, hypertrophic scars, and keloidal scars, may occur after acne treatment [7]. Their treatment options include excision, cryosurgery, pulsed dye laser treatment, compression with silicone sheeting, and various other modalities [8]. Goodman and Baron proposed a qualitative global acne scarring grading system (table 1) [9].

(* Corresponding author: firas_rashad@yahoo.com (Shakir J. Al-Saedy). Published online at http://journal.sapub.org/ajdv. Copyright © 2014 Scientific & Academic Publishing. All Rights Reserved.)

The concept of fractional photothermolysis revolutionized cutaneous laser resurfacing when introduced by Manstein et al in 2004. Using a nonablative, 1550-nm Er-doped fiber laser, full-thickness columns of thermal injury (termed microthermal treatment zones, or MTZs) are created in a pixelated pattern just below the level of the stratum corneum, with the surrounding skin left intact. Fractional resurfacing employs a unique mechanism of action that repairs a fraction of the skin at a time [10].
The laser is used to resurface the epidermis and, at the same time, to heat the dermis to safely promote the formation of new collagen. The untreated healthy skin remains intact and actually aids the repair process, promoting rapid healing with only a day or two of downtime [10]. The primary target is both the epidermis and the dermis, with the aim of creating small zones of micro-damage separated by zones of non-irradiated tissue that assist with the rapid healing process. The aim of the fractional approach is to obtain the best possible results with the least possible damage; the degree of thermal damage delivered to the target skin depends on the dosage, the pulse width of the beam, and the number of passes over the same target area [11].

44 Shakir J. Al-Saedy et al.: Treatment of Acne Scars Using Fractional Erbium:YAG Laser

The Er:YAG laser is a flashlamp-excited system that emits light at an invisible infrared wavelength of 2940 nm. Its light is about 16 times better absorbed by tissue water than the 10,600 nm wavelength emitted by the CO2 laser. The Er:YAG laser produces a pulse of 250-350 microseconds, which is less than the thermal relaxation time of the skin (1 ms). Also, the Er:YAG laser causes tissue ablation with very little tissue vaporization and desiccation. The ablation threshold of the Er:YAG laser for human skin has been calculated at 1.6 J/cm2, as compared with 5 J/cm2 for high-energy, short-pulse CO2 laser systems. Because the Er:YAG laser is so exquisitely absorbed by water, it causes 10-40 μm of tissue ablation and as little as 5 μm of thermal damage. In contrast, the high-energy, short-pulse CO2 lasers cause 100-120 μm of tissue damage, composed of 50-60 μm of apparent tissue desiccation (ablation or coagulation) and an additional 50-75 μm of thermal damage. The precise tissue ablation and small zone of residual thermal damage result in faster reepithelialization and an improved side effect profile.
Apart from water being the major chromophore for skin-ablative lasers, the Er:YAG laser wavelength is also absorbed by collagen, further supporting the ablation process within the deeper dermal layers [12]. In 1997 the FDA approved the Er:YAG laser for resurfacing [13], and since then it has gained more and more interest for resurfacing procedures, such as in acne scars or in the rejuvenation of photoaged skin. In addition, many other skin disorders formerly treated by dermabrasion, or that were indications for thermal laser coagulation or vaporization, can be removed by Er:YAG skin ablation (Table 2). They comprise many superficial lesions derived from epidermal or adnexal structures, but also various circumscribed malformations and benign tumors located deeper within the dermis. In addition, certain pigmented and melanocytic lesions, as well as a variety of miscellaneous pathological conditions, can be removed [14-18]. This study was designed to evaluate the safety and effectiveness of fractional photothermolysis (fractionated Erbium:YAG laser) in treating moderate to severe atrophic acne scars.

2. Patients & Methods
This is an open therapeutic trial performed at the Beirut Private Center for Laser Treatments in Baghdad, Iraq during the period from March 1st, 2011 to September 1st, 2011. Forty patients (31 females and 9 males) with moderate to severe atrophic acne scarring according to Goodman's qualitative global scarring grading system were included in this study. Their ages ranged from 17 to 48 years, with a mean ± SD of 28.075 ± 6.87 years. Their skin types were III-IV (Fitzpatrick skin types). The study was approved by the ethical committee of Al-Kindy Teaching Hospital, and written informed consent was obtained from each patient. Exclusion criteria included known photosensitivity, pregnancy or lactation, inflammatory skin disorders, and active herpes infection.
Patients with hypertrophic or keloidal scarring, or a history of hypertrophic or keloid scars, were excluded from the study. Patients who had used anticoagulants, isotretinoin, or other physical acne treatments over the past 6 months, and patients with any medical illness (e.g. diabetes, chronic infections, blood dyscrasias) that could influence the wound healing process, were also excluded. Patients were allowed to continue previous acne medications during the study, except isotretinoin. The whole procedure was fully explained and thoroughly discussed with the patients, including the mechanism of laser treatment, the time required for the treatment, the post-treatment regimen, and the prospects of successful treatment; any unrealistic expectations of the end results were strongly discouraged. The patients were informed about all risks that may be caused by the laser treatment and about the pre- and post-operative care. Prior to each treatment, the face was cleansed with a mild non-abrasive detergent and gauzes soaked in 70% isopropyl alcohol. A topical anesthetic cream (EMLA, a eutectic mixture of 2.5% lidocaine and 2.5% prilocaine, AstraZeneca LP, Wilmington, DE) was applied under an occlusive dressing for 1 hour and subsequently washed off to obtain a completely dry skin surface. The eyes were protected with opaque goggles. Systemic antiviral therapy (acyclovir 400 mg twice daily) was prescribed for each patient as prophylaxis, starting the night before the operation and continuing for five days post-operatively, along with topical antibiotics and a moisturizing cream; the patients were instructed to apply a sunscreen for six weeks. Three photos were taken before treatment for each patient, of both sides and the front of the face, with a digital camera (Sony DSC-T99 Cyber-shot, 14.1 megapixel HD), and another set of photos was taken at each post-treatment visit using identical camera settings, lighting, and patient positioning.
A fractional Erbium:YAG laser (MCL30 Dermablate, Asclepion Laser Technologies, Germany) at a wavelength of 2940 nm was delivered to the whole face in a single pass, with two passes over the acne scar areas, at a total fluence of 108 J/cm2 and an interval of 0.5 second; the window of the laser handpiece was 9 x 9 mm, supporting 169 microbeams with a pulse energy at the treated site of 1.5 J. The same parameters were applied for all patients. A smoke evacuator and a forced-air cooling system (Zimmer MedizinSysteme, Cryo version 6) accompanied the procedure to improve patient comfort and compliance. Patients were asked to return for medical assessment 1 week after the operation and were then followed up monthly for 3 months. Therapeutic outcomes were assessed on standardized digital photographs by the patients themselves and by two blinded dermatologists. The dermatologists' evaluation and the patients' self-assessment of the level of improvement used the following five-point scale: 0 = no change; 1 = slight improvement (0-25%); 2 = moderate improvement (26-50%); 3 = significant improvement (51-75%); 4 = excellent improvement (>75%). The two assessors were blinded to the order of the photographs. The evaluators were asked to perform two actions: first, to identify the photograph that showed the better scar appearance; second, to rate the difference in the severity of the acne scars using the above-mentioned scale. In addition, the participants were asked to report any cutaneous or systemic side effects associated with the laser treatment. In particular, a pain scale of 0-3 was used to determine the level of discomfort during the procedures, as follows: 0 = no pain; 1 = mild pain; 2 = moderate pain; 3 = severe pain. Statistical data were analyzed with the chi-square test using Minitab v.16 software, with P < 0.05 considered statistically significant; descriptive data are presented as frequencies and percentages in figures and tables.

American Journal of Dermatology and Venereology 2014, 3(2): 43-49

Figure 1.
Patient no. 1 pre- and 3 months post-operatively. Figure 2. Patient no. 2 pre- and 3 months post-operatively. Figure 3. Patient no. 3 pre- and 3 months post-operatively.

Table 1. Goodman's qualitative global scarring grading system [9]
Grade 1 (Macular disease): Erythematous, hyper- or hypo-pigmented flat marks visible to patient or observer irrespective of distance. Examples: erythematous, hyper- or hypo-pigmented flat marks.
Grade 2 (Mild disease): Mild atrophy or hypertrophy that may not be obvious at social distances of 50 cm or greater and may be covered adequately by makeup or the normal shadow of shaved beard hair in males or normal body hair if extrafacial. Examples: mild rolling, small soft papular scars.
Grade 3 (Moderate disease): Moderate atrophic or hypertrophic scarring that is obvious at social distances of 50 cm or greater and is not covered easily by makeup or the normal shadow of shaved beard hair in males or body hair if extrafacial, but is still able to be flattened by manual stretching of the skin. Examples: more significant rolling, shallow "boxcar", mild to moderate hypertrophic or papular scars.
Grade 4 (Severe disease): Severe atrophic or hypertrophic scarring that is obvious at social distances of 50 cm or greater, is not covered easily by makeup or the normal shadow of shaved beard hair in males or body hair (if extrafacial), and is not able to be flattened by manual stretching of the skin. Examples: punched-out atrophic (deep "boxcar"), "ice pick", bridges and tunnels, gross atrophy, dystrophic scars, significant hypertrophy or keloid.

Table 2. Patients' predominant scar types
Number of patients | Type of scar
10 (25%) | significant rolling
16 (40%) | shallow boxcar
8 (20%) | deep boxcar
6 (15%) | icepick scars

Table 3.
Response to treatment by Er:YAG laser: level of improvement assessed by the dermatologists
Week | Score 1 | Score 2 | Score 3 | Score 4 | P value
1 wk | 12 (30%) | 18 (45%) | 8 (20%) | 2 (5%) | 0.002
4 wk | 10 (25%) | 15 (37.5%) | 10 (25%) | 5 (12.5%) |
8 wk | 6 (15%) | 8 (20%) | 16 (40%) | 8 (20%) |
12 wk | 4 (10%) | 6 (15%) | 20 (50%) | 10 (25%) |

Table 4. Response to treatment by Er:YAG laser: level of improvement assessed by the patients
Week | Score 1 | Score 2 | Score 3 | Score 4 | P value
1 wk | 15 (37.5%) | 20 (50%) | 4 (10%) | 1 (2.5%) | 0.001
4 wk | 13 (32.5%) | 17 (42.5%) | 7 (17.5%) | 3 (7.5%) |
8 wk | 9 (22.5%) | 11 (27.5%) | 13 (32.5%) | 7 (17.5%) |
12 wk | 8 (20%) | 7 (17.5%) | 17 (42.5%) | 8 (20%) |

3. Results
Forty patients (31 females and 9 males) were included in the study. All patients completed the study, including the 3-month follow-up period. All patients had mixed types of atrophic acne scars, including ice pick, boxcar, and rolling scars, although a particular type predominated in each patient and was used to classify the patients accordingly (Table 2). According to the dermatologists' assessment (Table 3), the proportion of patients with excellent improvement rose dramatically, from 5% in the 1st week to 25% after 3 months, although this was not the largest group showing improvement. The significant improvement group increased from 20% after 1 week to 50% after 3 months, which gives a strong indication of the overall results. The improvement was evident from the first week (30% mild, 5% excellent) through the 4th and 8th weeks, becoming more satisfactory (10% mild, 25% excellent) three months after the operation. The final results after 3 months were as follows: ten patients (25%) showed excellent improvement, twenty patients (50%) significant improvement, six patients (15%) moderate improvement, and four patients (10%) mild improvement in the appearance of the acne scars. The results were significant, as indicated by the P value of 0.002. The patients' self-assessment of improvement was also remarkable and showed a great amount of satisfaction (Table 4).
The results were almost comparable to the dermatologists' assessment and were considered significant, as indicated by the P value of 0.001. Table 3 shows that the two factors (scores and weeks) are not independent; in other words, there is a relationship between the two factors (chi-square value 25.591, df = 9, P = 0.002). A similar conclusion can be reported for Table 4 (chi-square value 27.30, df = 9, P = 0.001). The laser treatment was generally well tolerated. All participants experienced treatment-related pain, but there was no need for extra anesthesia (Table 5). All participants reported mild erythema for approximately 2-3 days, and 80% of patients experienced edema for <24 hours following laser treatment. Peeling began on the second day and was complete by the fifth day in 90% of the patients; in 10% it lasted for 7 days. Social activity could resume as early as 3 days after the laser treatment. Other possible adverse events related to laser treatment in general, such as pigmentary alterations (hyperpigmentation), bleeding, vesiculation, crusting, scarring, and infection, were not observed (Table 6).

Table 5. Patients' pain scale during the operation
Degree of pain | Number of patients
0 | 0
1 | 11 (27.5%)
2 | 25 (62.5%)
3 | 4 (10%)

Table 6. Adverse effects related to treatment by fractional Er:YAG laser
Adverse effect | No. & % of patients
treatment-related pain | 40 (100%)
mild erythema | 40 (100%)
edema | 32 (80%)
severe peeling | 4 (10%)
bleeding | -
hyperpigmentation | -
vesiculation | -
crusting | -
scarring | -
infection | -

4. Discussion
In this open therapeutic trial, patients received fractional Er:YAG laser treatment in a single session. The final outcome was evaluated after a three-month period by two blinded dermatologists and by the patients' self-assessment using standardized digital photography. The results were very satisfactory, as more than 60% of patients showed moderate to significant improvement.
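The chi-square statistics cited here (25.591 and 27.30, both with df = 9) can be recomputed directly from the counts in Tables 3 and 4. A minimal, stdlib-only Python sketch of the Pearson chi-square test of independence (the authors used Minitab; the counts below are the published table values):

```python
# Pearson chi-square test of independence (improvement scores x weeks),
# recomputed from the published counts in Tables 3 and 4.

def chi_square(table):
    """Return (chi2 statistic, degrees of freedom) for a contingency table."""
    rows, cols = len(table), len(table[0])
    row_tot = [sum(r) for r in table]
    col_tot = [sum(r[j] for r in table) for j in range(cols)]
    n = sum(row_tot)
    chi2 = 0.0
    for i in range(rows):
        for j in range(cols):
            expected = row_tot[i] * col_tot[j] / n
            chi2 += (table[i][j] - expected) ** 2 / expected
    return chi2, (rows - 1) * (cols - 1)

# Table 3 (dermatologists): rows = 1, 4, 8, 12 weeks; columns = scores 1-4
derm = [[12, 18, 8, 2], [10, 15, 10, 5], [6, 8, 16, 8], [4, 6, 20, 10]]
# Table 4 (patients' self-assessment)
pats = [[15, 20, 4, 1], [13, 17, 7, 3], [9, 11, 13, 7], [8, 7, 17, 8]]

chi2_d, df_d = chi_square(derm)  # ≈ 25.59, df = 9 (paper: 25.591, P = 0.002)
chi2_p, df_p = chi_square(pats)  # ≈ 27.30, df = 9 (paper: 27.30, P = 0.001)
print(f"dermatologists: chi2 = {chi2_d:.3f}, df = {df_d}")
print(f"patients:       chi2 = {chi2_p:.3f}, df = {df_p}")
```

Both recomputed statistics agree with the published values, which confirms that the tests were run on the table counts as printed.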
The patients' self-assessment was slightly lower than that of the dermatologists; this might be attributed to the fact that patients usually apply more subjective than objective scales and have higher expectations of the end results than the actual outcome. The results of both assessment groups (the patients and the dermatologists) were significant, as indicated by the P values of 0.001 and 0.002, respectively. To the best of our knowledge, studies of fractional photothermolysis that investigate the role of the Erbium:YAG laser as a sole option in the treatment of atrophic acne scars are lacking or very limited. There were no controlled trials, only a few case series reporting the effects of either the carbon dioxide or the Erbium:YAG laser, and all of these studies were of poor quality: the types and severity of scarring were poorly described, and no standard scale was used to measure scar improvement. There was no reliable or validated measure of patient satisfaction; most improvement was based on visual clinical judgment, in many cases without blinded assessment [25]. This might be partially attributed to the fact that the Er:YAG laser is considered a superficial laser and is usually regarded as insufficient for the treatment of the relatively deep lesions of acne scars. In a series of 78 patients, Weinstein [26] reported 70-90% improvement of acne scarring in the majority of patients treated with a modulated Er:YAG laser. He proposed that pitted acne scars may require ancillary procedures, such as subcision or punch excision, for optimal results; these procedures can be performed either prior to or concomitantly with Er:YAG laser resurfacing. The effect of the fractional CO2 laser on skin resurfacing is well documented worldwide. However, there have been limited studies comparing the clinical outcomes and adverse effects of these two lasers (CO2 vs. Er:YAG).
In two studies investigating the efficacy of the fractional CO2 laser for the treatment of acne scars, one conducted in Iraq [29] and the other in Thailand [30], 75% of the CO2 laser sites were graded as having moderate to significant improvement of scars. Their end results were not significantly different from our results after 3 months of follow-up (65% showed moderate to significant improvement), but the duration of the operation was much shorter for the Er:YAG laser (less than 10 minutes, compared to 45 minutes for the CO2 laser). The post-operative pain, edema, erythema, and duration of peeling were milder and more tolerable than those associated with CO2 laser treatment. In contrast to CO2 laser resurfacing, the narrower zone of necrosis produced by the Er:YAG laser allows the skin to recover faster [31]. Pigment alteration, which is a common side effect of the CO2 laser, was not reported with the Er:YAG laser in any of our patients, even those who did not adhere to strict sunscreen use. In procedures aiming at aesthetic improvement, patient perception of the treatment outcome appears to be most important, because it has a direct impact on patients' body image and self-esteem. Such outcomes can be obtained superbly with the CO2 laser, but when the Er:YAG laser is used for resurfacing in the fractional mode, the results are noteworthy, recovery time is considerably shortened, and the traditional post-resurfacing sequelae are absent; consequently, patients can return rapidly to their social or work environment [27]. Using the parameters mentioned earlier in this study, all patients showed some level of improvement, ranging from mild to excellent, after only one session of laser treatment, even patients with icepick or deep boxcar scars, which are usually resistant to other conventional laser treatments.
Furthermore, many patients mentioned that they experienced a remarkable improvement in 'skin quality' and could subsequently wear more natural makeup. The final outcome of our treatment is best read 3-6 months post-operatively; this time is usually needed for new collagen remodeling [28], and it was the follow-up interval we used to read the end results, which differed significantly from the results one week post-operatively. Most types of acne scars will benefit to some degree from laser resurfacing techniques. In acne scars, the precision of sculpting with excellent visual control and minimal heat damage can make Er:YAG laser ablation superior to the CO2 laser. Moreover, thermal damage to follicles and sebaceous glands can be avoided, so that the acne flare-ups reported after CO2 laser treatment do not occur [32]. In general, the Erbium:YAG laser is an effective device for skin resurfacing, with a faster recovery time and fewer side effects when compared to CO2 laser resurfacing [33].

5. Conclusions
1. Fractional Erbium:YAG photothermolysis can be a safe and effective option for the treatment of acne scars in Iraqi patients, offering a faster recovery time and fewer side effects.
2. Fractional Erbium:YAG photothermolysis was associated with substantial improvement in the appearance of all types of acne scars, including the softening of scar contours as well as the reduction of scar depth.
3. Most patients began to show visible improvement after only one session. According to the visual assessments of patients and dermatologists, improvement continues to occur even 3 months after the operation.

6. Recommendations
We need further studies with:
1. higher fluence and more passes;
2. more treatment sessions;
3. further follow-up for 6-12 months.

REFERENCES
[1] Kraning KK, Odland GF. Prevalence, morbidity, and cost of dermatological diseases.
J Invest Dermatol 1979;73(Suppl):395-401.
[2] Goulden V, Stables GI, Cunliffe WJ. Prevalence of facial acne in adults. J Am Acad Dermatol 1999;41:577-80.
[3] Leeming JP, Holland KT, Cunliffe WJ. The pathological and ecological significance of micro-organisms colonizing acne vulgaris comedones. J Med Microbiol 1985;20:11-6.
[4] Norris JFB, Cunliffe WJ. A histological and immunocytochemical study of early acne lesions. Br J Dermatol 1988;118:651-9.
[5] Knaggs HE, Holland DB, Morris C, Wood EJ, Cunliffe WJ. Quantification of cellular proliferation in acne using the monoclonal antibody Ki-67. J Invest Dermatol 1994;102:89-92.
[6] Koo JY, Smith LL. Psychologic aspects of acne. Pediatr Dermatol 1991;8:185-8.
[7] Alster TS, West TB. Treatment of scars: a review. Ann Plast Surg 1997;39:418-32.
[8] Tsau SS, Dover JS, Arndt KA, et al. Scar management: keloid, hypertrophic, atrophic, and acne scars. Semin Cutan Med Surg 2002;21:46-75.
[9] Goodman GJ, Baron JA. Postacne scarring: a quantitative global scarring grading system. J Cosmet Dermatol 2006;5:48-52.
[10] Tierney EP, et al. Review of fractional photothermolysis: treatment indications and efficacy. Dermatol Surg 2009 Oct;35(10):1445-1461.
[11] Brightman LA, et al. Ablative and fractional ablative lasers. Dermatol Clin 2009 Oct;27(4):479-489.
[12] Kaufmann R, Beier C. Erbium:YAG laser therapy of the skin lesions. Med Laser Appl 2001;16:252-263.
[13] Khatri KA. Ablation of cutaneous lesions using an erbium:YAG laser. J Cosmetic and Laser Ther 2003;5:1-4.
[14] Dmovesk-Olup B, Vedlin B. Use of Er:YAG laser for benign skin disorders. Laser Surg Med 1997;21(1):13-19.
[15] Kaufman R, Hibst R. Pulsed erbium:YAG laser ablation in cutaneous surgery. Laser Surg Med 1996;19:324-30.
[16] Beier C, Kaufman R. Efficacy of erbium:YAG laser ablation in Darier disease and Hailey-Hailey disease. Arch Dermatol 1999;135:423-7.
[17] Alora MB, Arndt KA. Treatment of a café-au-lait macule with the erbium:YAG laser.
Arch Dermatol 2001;45:566-8.
[18] Ammirati CT, Giancola JM, Hruza GJ. Adult-onset facial colloid milium successfully treated with the long-pulsed Er:YAG laser. Dermatol Surg 2002;28:215-19.
[19] Manstein D, Herron GS, Sink RK, Tanner H, Anderson RR. Fractional photothermolysis: a new concept for cutaneous remodeling using microscopic patterns of thermal injury. Lasers Surg Med 2004;34:426-38.
[20] Glaich AS, Rahman Z, Goldberg LH, Friedman PM. Fractional resurfacing for the treatment of hypopigmented scars: a pilot study. Dermatol Surg 2007;33:289-94.
[21] Behroozan DS, Goldberg LH, Dai T, Geronemus RG, Friedman PM. Fractional photothermolysis for the treatment of surgical scars: a case report. J Cosmet Laser Ther 2006;8:35-8.
[22] Alster TS, Tanzi EL, Lazarus M. The use of fractional laser photothermolysis for the treatment of atrophic scars. Dermatol Surg 2007;33(3):295-299.
[23] Holland DB, Jeremy AH, Roberts SG, Seukeran DC, Layton AM, Cunliffe WJ. Inflammation in acne scarring: a comparison of the responses in lesions from patients prone and not prone to scar. Br J Dermatol 2004;150:72-81.
[24] Jacob CI, Dover JS, Kaminer MS. Acne scarring: a classification system and review of treatment options. J Am Acad Dermatol 2001;45:109-17.
[25] Jordan R, Cummins C, Burls A. Laser resurfacing of the skin for the improvement of facial acne scarring: a systematic review of the evidence. Br J Dermatol 2000;142:413-23.
[26] Weinstein C, Scheflan M. Simultaneously combined Er:YAG and carbon dioxide laser (Derma K) for skin resurfacing. Clin Plast Surg 2000 Apr;27(2):273-85.
[27] Mario AT, Serge M, Mariano V. Results of fractional ablative facial skin resurfacing with the Er:YAG laser 1 week and 2 months after one single treatment in 30 patients. Laser Med Sci 2009;24(2):186-94.
[28] Sukal SA, Geronemus RG. Fractional photothermolysis. J Drugs Dermatol 2008;7:118-122.
[29] Mohammad Y.
The efficacy and safety of ablative fractional CO2 laser for treatment of acne scar. JSMC 2012;2(1):30-5.
[30] Manuskiatti W. Comparison of fractional Er:YAG and carbon dioxide lasers in resurfacing of atrophic scars in Asians. Dermatol Surg 2013;39(1 Pt 1):111-20.
[31] Adrian RM. Pulsed carbon dioxide and erbium-YAG laser resurfacing: a comparative clinical study. J Cutan Laser Ther 1999;1:29-35.
[32] Kaufmann R, Beier C. Erbium:YAG laser therapy of skin lesions. Med Laser Appl 2001;16:252-253.
[33] Khatri KA. Ablation of cutaneous lesions using an erbium:YAG laser. J Cosmetic and Laser Ther 2003;5:1-4.

RESEARCH ARTICLES VOL. 2 № 4 (7) 2010 | Acta Naturae | 95

"Prostate Cancer Proteomics" Database

S. S. Shishkin1*, L. I. Kovalyov1, M. A. Kovalyova1, K. V. Lisitskaya1, L. S. Eremina1, A. V. Ivanov1, E. V. Gerasimov1, E. G. Sadykhov1, N. Y. Ulasova2, O. S. Sokolova2, I. Y. Toropygin3, V. O. Popov1
1 Bach Institute of Biochemistry, Russian Academy of Sciences
2 Lomonosov Moscow State University
3 Orekhovich Institute of Biomedical Chemistry, Russian Academy of Medical Sciences
* E-mail: shishkin@inbi.ras.ru
Received 28.09.2010

ABSTRACT A database of Prostate Cancer Proteomics has been created by using the results of a proteomic study of human prostate carcinoma and benign hyperplasia tissues, and of some human cultured cell lines (PCP, http://ef.inbi.ras.ru). PCP consists of 7 interrelated modules, each containing four levels of proteomic and biomedical data on the proteins in corresponding tissues or cells. The first data level, on which each module is based, is a 2DE proteomic reference map where proteins separated by 2D electrophoresis, and subsequently identified by mass spectrometry, are marked. The results of proteomic experiments form the second data level. The third level contains protein data from published articles and existing databases.
The fourth level is formed by direct Internet links to the information on corresponding proteins in the NCBI and UniProt databases. PCP contains data on 359 proteins in total, including 17 potential biomarkers of prostate cancer, particularly AGR2, annexins, S100 proteins, PRO2675, and PRO2044. The database will be useful in a wide range of applications, including studies of molecular mechanisms of the aetiology and pathogenesis of prostate diseases, finding new diagnostic markers, etc.
KEYWORDS: proteomics, prostate cancer, digital database
ABBREVIATIONS: 2DE, two-dimensional gel electrophoresis; BPH, benign prostate hyperplasia; PCa, prostate cancer; PCP, Prostate Cancer Proteomics

INTRODUCTION
The first decade of the post-genome era was marked by a rapid development in the field of bioinformatics, the extension of major databases (such as NCBI and UniProt), and the creation of specialised information resources for biomedical research in many countries [1-4]. The impressive resources created in Ireland (UCD-2DPAGE, http://proteomics-portal.ucd.ie:8082/cgi-bin/2d/2d.cgi) [2] and India (Human Proteinpedia, www.humanproteinpedia.org) [3] make the state of things in Russia pale in comparison. Currently, one of the most important tasks for biomedical research is to find efficient prostate cancer (PCa) biomarkers which would enable new diagnostic methods [5-8]. The fact that in recent years the PCa incidence rate has dramatically increased worldwide [9, 10], and particularly in Russia, making PCa the most frequent male oncological disease in some countries [11, 12], is reason enough to pay close attention to this disease. In the early diagnostics of PCa, at the moment it is important to establish the presence of one of the most studied biomarkers, the so-called Prostate-Specific Antigen (PSA), in the blood. The test, however, is known to produce a significant number of false-positive and false-negative results, leading to wrong clinical and financial outcomes [5, 7].
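The module/level organisation described in the abstract maps naturally onto a small relational schema. The sketch below is hypothetical (the actual PCP backend is MySQL and its schema is not published in this paper); all table names, column names, and the sample values are illustrative only:

```python
# Hypothetical, simplified relational sketch of a PCP-like module/level
# structure (illustration only -- NOT the actual PCP schema).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE module (          -- one of the 7 tissue/cell-line modules
    id INTEGER PRIMARY KEY,
    name TEXT                  -- e.g. 'PCa tissue', 'LNCaP cells'
);
CREATE TABLE protein (         -- level 1: a spot on the 2DE reference map
    id INTEGER PRIMARY KEY,
    module_id INTEGER REFERENCES module(id),
    name TEXT,
    pi REAL,                   -- isoelectric point (map coordinate)
    mw_kda REAL                -- molecular weight (map coordinate)
);
CREATE TABLE annotation (      -- levels 2-4: experiments, literature, links
    protein_id INTEGER REFERENCES protein(id),
    level INTEGER CHECK (level IN (2, 3, 4)),
    content TEXT               -- experiment result, citation, or NCBI/UniProt link
);
""")
conn.execute("INSERT INTO module VALUES (1, 'PCa tissue')")
conn.execute("INSERT INTO protein VALUES (1, 1, 'AGR2', 9.0, 20.0)")
conn.execute("INSERT INTO annotation VALUES (1, 4, 'link to UniProt entry')")

row = conn.execute("""
    SELECT p.name, a.level, a.content
    FROM protein p JOIN annotation a ON a.protein_id = p.id
""").fetchone()
print(row)  # ('AGR2', 4, 'link to UniProt entry')
```

The point of the sketch is the separation of concerns: the map coordinates (level 1) live with the protein record, while levels 2-4 hang off it as typed annotations, which is what lets one protein carry experimental data, literature data, and external links at the same time.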
Therefore, in the U.S. and in other Western countries, new PCa biomarkers are being actively sought, an initiative recently stimulated by the development of proteomic and other post-genome technologies [6, 8, 13]. Since 2005, the Bach Institute of Biochemistry, in collaboration with other research and medical institutions, has been researching new PCa biomarkers by utilising various proteomic technologies [14, 15]. In 2009, the "Prostate Cancer Proteomics" (PCP, http://ef.inbi.ras.ru/) national database was created in order to facilitate this research, summarising experimental and referenced published data and providing links to several other biomedical Internet databases. This paper describes the structure and capabilities of the new, extended PCP version.

MATERIALS AND METHODS
Biomaterials. Biopsy and surgical samples of prostate tissues from patients with PCa (n = 72) and benign prostatic hyperplasia (BPH, n = 69) were provided by staff members of the Urology Department of the Botkin Clinical Hospital (Moscow). Diagnosis was performed using clinical, histological, and immunochemical (PSA level) tests. Histological verification was performed via ultrasound-controlled transrectal multifocal needle biopsy; up to 18 tissue samples from various prostate zones per patient were taken [16, 17]. All PCa cases were found to be adenocarcinoma. The Gleason score was determined by following the standard procedure [16, 17]. In parallel tests, we analysed the proteins of the PC-3 (ACC 465), DU-145 (ACC 261), and BPH-1 (ACC 143) cell cultures purchased from the German Collection of Microorganisms and Cell Cultures, as well as the proteins of cultured cells of the LNCaP line provided by Dr. I. G. Shemyakin (Obolensk National Science Centre for Applied Microbiology and Biotechnology).
The cells were cultured in RPMI-1640 medium with HEPES, sodium pyruvate, gentamicin, and 20% fetal bovine serum (FBS) [18], using cell culture plastic (Costar, USA, and Nunc, Denmark) in a CO2 incubator (Sanyo, Japan). In addition, we studied proteins from the cultured cells of two lines of human rhabdomyosarcoma (A-204 and RD) purchased from the Ivanovsky Virology Institute, RAMS, and proteins from cultured normal human myoblasts kindly provided by Dr. T. B. Krohina [19]. The preparation of protein extracts, their O'Farrell 2DE fractioning, Coomassie Blue R-250 and silver nitrate staining, and 2DE analysis were performed following the techniques described in [20, 21]. In addition, we used a 2DE procedure with isoelectric focusing using IPG-PAGE and an Ettan IPGphor 3 kit (GE Healthcare), according to the manufacturer's protocol.

Fig. 1. Main steps in the proteomic study of prostate tissue samples from patients with malignant and benign tumors, as well as from several cultured human cell lines. (Flowchart: Step 1: collect biopsy and prostatectomy sample series from patients with PCa and BPH; select several cultured human cell lines (LNCaP, PC-3, DU-145, BPH-1, etc.) for a comparative study. Step 2: extract and fraction proteins by O'Farrell 2DE; collect two-dimensional electrophoregram series; scan and analyze images; create synthetic images. Step 3 (for each series): identify proteins and create a protein map, i.e. separate protein fractions, perform their trypsinolysis, perform MS analysis of the tryptic peptides, and identify proteins using Mascot (Matrix Science). Step 4: create the four-level PCP database, http://ef.inbi.ras.ru, using the protein maps of human prostate tissue samples and cultured cell lines.)

Proteins were identified with MALDI-TOF MS and MS/MS using an Ultraflex instrument (Bruker) with a 336-nm UV laser beam in a 500-8000 Da cation mode, calibrated using reference trypsin autolysis peaks and processed with Mascot software, Peptide Fingerprint option (Matrix Science, USA) [21, 22]. The proteins were identified by matching experimental masses with the masses of proteins listed in the NCBI Protein and SwissProt/TrEMBL databases. The accuracy of monoisotopic masses measured in the reflection mode calibrated with autolytic trypsin peaks was 0.005%, and the accuracy of the fragment masses was ±1 Da. Hypothetical proteins identified with MALDI-TOF MS as corresponding to fragments of the full-size proteins, which are products of corresponding genes, were verified with MS/MS. The molecular masses of protein fractions were determined using the ultrapure recombinant protein sets SM0661 (10-200 kDa) and SM0671 (10-170 kDa) (Fermentas). The measurement of the optical density of 2DE images and/or their fragments was performed following scanning (Epson Expression 1680) or digital photography (Nikon 2500 or Canon PowerShot A1000 IS). Digital image processing with densitometry of the protein fractions was performed with Melanie ImageMaster, versions 6 and 7 (GeneBio). Data logging and processing for the Prostate Cancer Proteomics multilevel database were done with various software packages, including Mapthis!, Molly Penguin Software, Mozilla Firefox, and some Microsoft Office applications. A MySQL-based interactive database was used, which could be updated and modified online using any computer with an Internet connection. The BIOSTAT and Microsoft Office Excel 2003 software packages were used for statistical analysis.
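At its core, peptide mass fingerprinting of this kind is a tolerance-matching step: each measured monoisotopic peptide mass is compared against theoretical tryptic peptide masses within a relative accuracy (0.005% here, i.e. 50 ppm). A toy illustration of that step (the masses below are invented; real identification is performed by Mascot against the NCBI and SwissProt/TrEMBL databases):

```python
# Toy illustration of the tolerance matching behind peptide mass
# fingerprinting: an experimental mass "hits" a theoretical tryptic
# mass if it falls within 0.005% (50 ppm). All masses are invented.

TOLERANCE = 0.005 / 100  # 0.005% relative mass accuracy, as in the paper

def matches(experimental, theoretical, rel_tol=TOLERANCE):
    """True if the experimental mass is within rel_tol of the theoretical one."""
    return abs(experimental - theoretical) <= theoretical * rel_tol

def count_matches(experimental_masses, theoretical_masses):
    """Count experimental peaks that hit at least one theoretical peptide."""
    return sum(
        any(matches(e, t) for t in theoretical_masses)
        for e in experimental_masses
    )

theoretical = [842.509, 1045.564, 2211.104]  # hypothetical tryptic peptides
measured = [842.51, 1045.62, 2211.09]        # hypothetical MALDI-TOF peaks

print(count_matches(measured, theoretical))  # 2 (the middle peak misses 50 ppm)
```

Scoring engines such as Mascot build on exactly this kind of hit count, weighting it by how probable each match is by chance.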
RESULTS AND DISCUSSION

According to the conventional proteomics strategy developed in the late 20th century, the national PCP database was created in several consecutive steps which involved the systematic characterisation of proteins in prostate tissue samples obtained from benign and malignant tumors (Fig. 1) [23, 24].

Fig. 2. Results of the proteomic study of malignant prostate tissue samples. A. Typical 2D electrophoregram of proteins from prostate tissue biopsy samples with PCa. B. Synthetic 2DE image of proteins from prostate tissue biopsy samples with PCa; reference fractions are marked with red ovals. Vertical axis: molecular weights, Mm (kDa), 10–200. Horizontal axis: isoelectric points, pI 4.8–10.0. C. 2D protein map constructed from the synthetic 2DE image of proteins from prostate tissue biopsy samples with PCa; fractions characterized with 2DE are marked with blue arrows.

Proteins from several cultured human cell lines were studied in parallel experiments (Fig. 1). The first step was to make series of 2DE protein samples (50 or more) by fractioning dozens of bioptates or prostate tissue samples (from 30 or more patients). Figure 2A illustrates a typical 2DE of the PCa prostate tissue proteins. The 2DE series for cell line proteins were created with 20 2DE each, taking into account the homogeneity of the analyte. The distribution of the 2DE protein fractions was registered and stored as graphic *.tif files. The images of the entire 2DE and (in some cases) their segments were produced by scanning and/or digital photography. The relevance of the selected 2DE was assessed by comparing protein fractioning results using digital image matching [23, 24]. The second step was to construct synthetic 2D maps of the proteins.
2DE images from each series were standardised with Melanie ImageMaster software using 15 selected reference points corresponding to easily identifiable major protein fractions. Figure 2B shows the reference points on the 2DE images of proteins from prostate tissue samples. Each image was analysed using the Comings technique [25] with some modifications [20, 24]. The analysis was based on dividing the images into 49 rectangular fragments, the sides of which were formed by six horizontal and six vertical standard lines and the sides of the 2DE image itself. The points for plotting the horizontal lines were determined by special molecular-weight-marker proteins, which were placed on each gel plate before fractioning in the second direction (SDS-PAGE). Thus, the protein fractions located on the corresponding horizontal lines have identical molecular weights. For plotting the vertical lines, protein markers with previously measured pI values were used [20, 24]. As a result, each image analyzed consisted of 49 rectangular fragments, each usually containing no more than 10 protein fractions (only 4 fragments contained more than 20 fractions). Image fragmenting significantly simplified image comparison and the construction of synthetic 2D maps. The described procedure was performed with the 60 best 2DE images of proteins from BPH samples and with 70 2DE images of proteins from PCa samples. The comparison of the standardised BPH and PCa 2DE images showed that the coordinates of more than 95% of the protein fractions were constant. Quantitative or qualitative variations in the coordinates were observed for less than 5% of the fractions.
The variations could be caused by genetic factors (e.g., single nucleotide polymorphism) or a different expression of the corresponding genes, as well as by differences in the tissue composition of the samples and the intensity of the pathology.

Fig. 3. General organisation of the "Prostate Cancer Proteomics" (PCP) database (exemplified with three cross-referenced modules). Level 1: module selection panel and protein maps of objects, with other control panels. Level 2: electrophoretic properties of individual proteins and other experimental data, with cross-references between identical proteins in different modules. Level 3: literature data on individual proteins and the genes coding for those proteins. Level 4: hyperlinks between Level 3 fields of the PCP database and the NCBI (Protein, OMIM, Gene, PubMed) and Swiss-Prot databases.

Having made sure that the positions of the majority of the protein fractions were constant, we were able to construct the 2D maps of prostate tissue proteins from patients with BPH and PCa. After that, we performed a fragment-by-fragment comparison of the constructed 2D maps. The fraction patterns in the BPH and PCa maps were found to be quite similar, the difference being that about 20 fractions present in the PCa map were either present in a much smaller amount in the BPH map or absent altogether. We paid particular attention to those fractions in further study, as described below. In general, as a result of our analysis, an integrated synthetic 2D map of human prostate proteins was constructed that contains more than 200 protein fractions in the ranges of Mw 8.5–450 kDa and pI 4.5–11.5 (Fig. 2C). Each fraction was assigned a unique seven-digit number, the first four digits representing the logarithm of the fraction's molecular weight, and the next three digits representing the value of the isoelectric point expressed in units described in [20, 24].
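Both the 49-fragment bookkeeping and the seven-digit numbering described above are easy to reproduce. The sketch below is illustrative only: the marker-line positions are invented pixel values, and the digit encoding (first four digits = 1000 · log10(Mw in Da), last three = 100 · pI) is our reading of the identifiers in Table 2 (e.g., Dj-1 at ~22.5 kDa and pI 6.30 gives 4352630); the exact pI units follow [20, 24].

```python
import bisect
import math

def fragment_index(x, y, v_lines, h_lines):
    """Grid cell (column, row) of a spot at pixel (x, y): six interior
    vertical and six interior horizontal marker lines split the image
    into 7 x 7 = 49 rectangular fragments."""
    return (bisect.bisect_right(sorted(v_lines), x),
            bisect.bisect_right(sorted(h_lines), y))

def fraction_id(mw_da, pi):
    """Seven-digit fraction number: the first four digits encode
    1000 * log10(Mw in Da), the last three encode 100 * pI."""
    return f"{int(1000 * math.log10(mw_da)):04d}{int(round(100 * pi)):03d}"

# Hypothetical marker-line positions (pixels), six per axis:
v = [100, 220, 340, 460, 580, 700]
h = [90, 180, 270, 360, 450, 540]
print(fragment_index(400, 300, v, h))  # (3, 3) - a central fragment
print(fraction_id(22500, 6.30))        # '4352630' - cf. the Dj-1 entry
```

The encoding makes each identifier double as a coordinate: two fractions with the same first four digits lie on the same molecular-weight line.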
The same procedure was applied to construct the other synthetic maps of proteins from cultured human cell lines, although far fewer 2DE images were used for those maps.

Table 1. Description of the PCP modules and the number of identified proteins by module. Each module is a synthetic map of proteins in a specific object (fractioning technique): proteins identified.
- Prostate biopsy samples, PCa and BPH (IEF-PAGE*): 165
- LNCaP (IEF-PAGE*): 60
- LNCaP (IPG-PAGE): 18
- PC-3 (IEF-PAGE*): 25
- BPH-1 (IEF-PAGE*): 24
- Rhabdomyosarcoma (IEF-PAGE*): 29
- Normal human fibroblasts (IEF-PAGE*): 38
* modification [20]

Thus, each of the constructed maps contained data on the electrophoretic properties of the protein fractions (represented by their coordinates) in the corresponding object. These maps were in *.jpg format (with a resolution of at least 300 dpi), constituting Level 1 of the database. Further studies and analysis of data on the proteins were based on the information contained in the Level 1 maps. Therefore, the synthetic maps represented original modules enabling the characterizing and formalizing of the biochemical properties of the proteins studied. There are currently seven modules in PCP (Table 1). A special panel enables navigation among the modules. The 2D maps are scaleable, and the user can mark certain proteins on the maps and create links (buttons) for accessing the other levels – second, third, and fourth – that contain data on the proteins. The database also automatically displays the 2D coordinates (along the two fractioning axes) as the cursor moves around the map. The general organisation of the PCP database is presented in Fig. 3. The third step in the proteomic study was to identify the individual protein fractions. The proteins were mainly identified by mass spectrometry; the results are presented in Table 1. As Table 1 shows, there is a total of 359 identified proteins in PCP.
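The module-and-fraction organisation just described maps naturally onto a small relational schema. The original resource is MySQL-based; the sketch below uses SQLite only to keep the example self-contained, and the table and column names are our own illustration, not the actual PCP schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE module (                -- one row per synthetic 2D map (Level 1)
    id   INTEGER PRIMARY KEY,
    name TEXT NOT NULL
);
CREATE TABLE fraction (              -- one row per protein fraction on a map
    module_id INTEGER REFERENCES module(id),
    number    TEXT NOT NULL,         -- coordinate-based fraction identifier
    protein   TEXT                   -- identified protein, if any
);
""")
conn.execute("INSERT INTO module VALUES (1, 'Prostate biopsy samples, PCa and BPH')")
conn.execute("INSERT INTO fraction VALUES (1, '4352630', 'Dj-1')")
hit = conn.execute("SELECT protein FROM fraction WHERE number = '4352630'").fetchone()
print(hit[0])  # Dj-1
```

In the actual PCP resource such fraction rows number 359 identified proteins across the seven modules listed in Table 1.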
Among them there are many well-known proteins, such as the enzymes responsible for glycolysis (glyceraldehyde-3-phosphate dehydrogenase, triose-phosphate isomerase, etc.) and other metabolic processes, as well as cytoskeletal (actin, transgelins, etc.) and mitochondrial (porins, superoxide dismutase, etc.) proteins. Some of the identified proteins, for instance the transgelins [21, 22], were represented by several isoforms. We paid particular attention to the identification of the protein fractions which differed qualitatively or quantitatively between the prostate tissue samples from patients with BPH and PCa. We previously reported the results of the identification of two potential PCa biomarkers, the proteins AGR2 [14] and Dj-1 [26]. In total, we succeeded in identifying 17 potential PCa biomarkers, some of which are new. Table 2 provides a short description of the potential PCa biomarkers. For example, Fig. 4 shows the results of the MS identification of one of the new biomarkers, protein PRO2675, which contains an albumin domain in its primary structure. For each protein identified (and marked with a "button" on the 2D map), the second information Level was formed, comprising a standardized system of 15 fields for the entry of text and graphical data obtained during characterization of the corresponding protein fraction. In four fields, general information about the protein is entered; in the next six fields, the identification results are entered; and in the other five fields, additional information is entered. The filled Level 2 fields for one of the potential PCa biomarkers, the protein NANS (N-acetylneuraminic acid phosphate synthase), are shown in Fig. 5. The same protein could be present in more than one object. Therefore, within Level 2, one can use the control panel to create cross-links between identical proteins in different modules.
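The 15-field Level 2 record (four general, six identification, five additional fields) can be sketched as a simple data container. The field names below are paraphrased from Fig. 5 and are illustrative, not the database's actual column names:

```python
from dataclasses import dataclass, fields

@dataclass
class Level2Record:
    # General properties (4 fields)
    fraction_protein: str
    localization_number: str
    mw_kda: float
    pi_experimental: float
    # Identification results (6 fields)
    method: str
    sequence_coverage_pct: float
    mascot_results: str
    figure: str
    amino_acid_sequence: str
    sequence_comments: str
    # Additional information (5 fields)
    supplementary_info: str
    tandem_ms: str
    cross_references: str
    comments: str
    extra_data: str

# The NANS entry from Fig. 5 (free-text fields elided here):
nans = Level2Record("N-acetylneuraminic acid phosphate synthase, NANS",
                    "D4, 4531685", 38.0, 6.85, "MALDI TOF MS", 58.0,
                    "", "", "", "", "", "", "", "", "")
print(len(fields(Level2Record)))  # 15
```

The cross-reference field is what ties the same protein together across modules.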
An example of such cross-referencing for the protein Dj-1 is presented in Fig. 6.

Table 2. Potential PCa biomarkers in the "Prostate biopsy samples, PCa and BPH" module and other PCP modules. Columns: unique identifier* | protein (synonyms and symbol in PCP) | numbers in NCBI** and Swiss-Prot | additional information in PCP and references***.
- 5653580 | Ferritin light chain complex (K-(L)F) | 182516, P02792 | [15]
- 4785508 (4799550) | Chaperonin (HSPD1) | 31542947, NP_002147, 118190, P10809 | found in rhabdomyosarcoma cells; {Bindukumar B. et al. 2008, 18646040}
- 4716560 (4756612) | Protein-disulphide isomerase (ER60) | 7437388, P30101 | [15]
- 4531685 | N-acetylneuraminate phosphate synthase (NANS) | 12652539, AAH00008, NP_061819, 605202, Q9NR45 | found in rhabdomyosarcoma cells; [15]
- 4502675 | Annexin 2, isoform 2 (ANXA2-i2) | 4757756, NP_004030, 151740, P07355 | {Shiozawa Y. et al. 2008, 18636554; Hastie C. et al. 2008, 18211896}
- 4454692 | Unknown protein PRO2675 containing the albumin domain (PRO2675) [new] | 7770217 | [15]
- 4447605 | Protein 29 of the endoplasmic reticulum, isoform 1 (ERp29) | 5803013, NP_006808, 602287, P30040 | {Myung J.K. et al. 2004, 15598346}
- 4352630 (4342620) | Dj-1 protein (Dj-1) | 50513593, 1SOA_A, 606324, Q99497 | {Bindukumar B. et al. 2008, 18646040}
- 4356607 (4344615) | Dj-1 protein, electrophoretic isoform (Dj-1-ei) | 31543380 |
- 4336712 (4301795) | Prostatic binding protein (neuropolypeptide h3, PEBP1) | 21410340, AAH31102, 604591, P30086 | [15]; {Li et al. 2008, 18161940; Woods Ignatoski K.M. et al. 2008, 18722266}
- 4286750 (4290620) | NM23B protein, nucleoside diphosphate kinase B | 4505409, NP_002503, 156491, P22392 | {Johansson B. et al. 2006, 16705742}
- 4255880 | Unnamed protein (NEDO human cDNA sequencing project, tissue type = "testis") (NEDO) [new] | 21758704, BAC05360 |
- 4204630 | Fatty acid binding protein, isoform 5 (E-FABP) | 30583737, AAP36117, 605168, Q01469 | [15]; {Morgan E.A. et al. 2008, 18360704}
- 4279900 | AGR2 (AGR2) | 37183136, AAQ89368, 606358, Q4JM47 | [14, 15]; {Zhang J.S. et al. 2005, 15834940; Weitzig D.R. et al. 2007, 17694278}
- 41811130 | Histone H3, 3A family (H3F3A) | 55665435 | [15]
- 4161675 | Unknown protein PRO2044 containing the albumin domain (PRO2044) [new] | 6650826 | [15]
- 4021610 | S100 calcium binding protein A11 (S100A11) | 12655117, AAH01410, 603114, P31949 | {Rehman I. et al. 2004, 15668896; Schaefer K.L. et al. 2004, 15150091}

* Numbers in the "LNCaP (IEF-2DE modification)" module are given in parentheses.
** Numbers from the NCBI databases are given in the following order: Protein, GenBank and/or Nucleotide, OMIM.
*** Publications listed in the References section of this article are given in square brackets; publications from the PubMed database are given in curly brackets (authors, year, PMID).
Note: new potential biomarkers (shown in bold in the original table) are marked [new].

Fig. 4. Results of mass-spectrometric identification of the PRO2675 protein. A. Mass spectrum of the tryptic peptides: (1) MALDI-TOF MS data; (2) peptide identification by Mascot. B. Mass spectrum of one of the tryptic peptides: (1) MALDI-TOF MS data; (2) peptide identification by Mascot. C. Amino acid sequence of the PRO2675 protein (record AAF69644.1, GI:7770217, in the NCBI Protein database); amino acid residues of the tryptic peptides are printed in red; the peptide whose sequence was identified by MALDI-TOF MS/MS is highlighted in grey; the sites corresponding to the albumin domain are underlined. Identified tryptic peptides (start-end | experimental Mw | calculated Mw | peptide sequence):
- 1-16 | 1706.77 | 1705.82 | MPADLPSLAADFVESK + Oxidation (M)
- 27-39 | 1639.82 | 1638.79 | DVFLGMFLYEYAR + Oxidation (M)
- 76-92 | 2045.05 | 2044.09 | VFDEFKPLVEEPQNLIK
- 106-113 | 960.52 | 959.56 | FQNALLVR
- 118-131 | 1511.74 | 1510.84 | VPQVSTPTLVEVSR
- 249-263 | 1763.73 | 1762.77 | AVMDDFAAFVEKCCK + Oxidation (M)

The majority of the 359 identified fractions were well-known proteins (and/or their electrophoretic isoforms) described in the literature and in various databases. Some information on those proteins, relating to the PCP scope, constituted the third information Level of the database. Level 3 is a standardized system of 23 fields for the entry of text and graphical data. In twelve fields, information about the protein is entered; in the next six fields, information about the gene coding for the protein is entered; and in the next three fields, information about the protein's polymorphism is entered. In the other fields, selected references to publications about the general properties of the protein, as well as its oncological properties, are entered (Fig. 7). The Level 3 text fields can contain hyperlinks to various Internet databases, such as NCBI's Protein, OMIM and PubMed, and SwissProt. This feature allowed the creation of the fourth information Level, providing the user with prompt access to international databases containing, in particular, the results of human genome sequencing.

Fig. 5. Level 2 fields for the NANS protein. (The fields shown include: fraction (protein) – N-acetylneuraminic acid phosphate synthase, NANS; localization, number – D4, 4531685; Mw – 38.0 kDa; experimental pI – 6.85; method – MALDI-TOF MS; sequence coverage – 58%; Mascot (Matrix Science, USA) search results; the amino acid sequence with matched peptides shown in red; and the MS/MS fragmentation of VGSGDTNNFPYLEK, found in gi|8453156, N-acetylneuraminic acid phosphate synthase [Homo sapiens].)

Fig. 6. Control panel with cross-references for the Dj-1 protein from the "Prostate biopsy samples, PCa and BPH" module. (Identical proteins in other modules: LNCaP (IPG-2DE) – 4301630.Dj-1; LNCaP (IEF-2DE modification) – 4342635.Dj-1; Rhabdomyosarcoma – 4342690.DJ-1; PC-3 – 4342630.DJ-1.)

Fig. 7. Level 3 fields for specially selected references to publications on the AGR2 protein. (The "General" reference fields list publications with PMIDs 9790916, 12975309, and 15340161; the "Oncological" reference fields list publications with PMIDs 12592373, 15532095, 15867376, 15958538, 15834940, 17022460, and 18829536.)

The PCP database is an interactive MySQL-based web resource located at http://ef.inbi.ras.ru and can be accessed from any computer connected to the Internet using the Mozilla Firefox and Microsoft Internet Explorer browsers. There are three access permission categories: "Guest," "Manager," and "Administrator," each giving certain rights for working with PCP. In particular, users with "Manager" access permission can make entries into and correct the Level 2 and 3 fields, while users with "Administrator" access permission have the ability to expand the database by creating new modules and new functional elements. Users with "Guest" access can browse all fields but cannot edit them.

In conclusion, our work resulted in the creation of an original multi-module national database entitled "Prostate Cancer Proteomics," which summarizes data on the proteins in prostate tissue collected from patients with BPH and PCa, as well as on proteins in several human cell lines. This is very promising for the further use of proteomic and other biochemical data. We are hopeful that the PCP database will be useful to biochemists and other biomedical scientists, making their research on PCa more efficient.
This work was supported by the Moscow Department of Science and Industrial Politics (Government contracts № 8/3-373n-08 and 8/3-375n-08).

REFERENCES
1. Gottlieb B., Beitel L.K., Wu J.H., Trifiro M. // Hum. Mutat. 2004. V. 23. P. 527–533.
2. Westbrook J.A., Wheeler J.X., Wait R., Welson S.Y., Dunn M.J. // Electrophoresis. 2006. V. 27. P. 1547–1555.
3. Kandasamy K., Keerthikumar S., Goel R., Mathivanan S., Patankar N., Shafreen B., Renuse S., Pawar H., Ramachandra Y.L., Acharya P.K., Ranganathan P., Chaerkady R., Keshava Prasad T.S., Pandey A. // Nucleic Acids Res. 2009. V. 37 (Database issue). P. D773–D781.
4. Vizcaino J.A., Cote R., Reisinger F., Barsnes H., Foster J.M., Rameseder J., Hermjakob H., Martens L. // Nucleic Acids Res. 2010. V. 38 (Database issue). P. D736–742.
5. Stamey T.A., Caldwell M., McNeal J.E., Nolley R., Hemenez M., Downs J. // J. Urol. 2004. V. 172. P. 1297–1301.
6. Zhang J.S., Gong A., Cheville J.C., Smith D.I., Young C.Y. // Genes Chromosomes Cancer. 2005. V. 43. № 3. P. 249–259.
7. Lim L.S., Sherin K. // Am. J. Prev. Med. 2008. V. 34. № 2. P. 164–170.
8. Leman E.S., Getzenberg R.H. // J. Cell Biochem. 2009. V. 108. № 1. P. 3–9.
9. Zlokachestvennie novoobrazovania v Rossii v 2008. Zabolevaemost i smertnost (Malignant Tumors in Russia in 2008: Morbidity and Mortality) / Ed. Chissov V.I., Starinsky V.V., Petrova G.V. M.: MEDpress-inform, 2008. P. 18.
10. Zlokachestvennie novoobrazovania v Rossii i SNG v 2002 (Malignant Tumors in Russia and the CIS in 2002). N.N. Blokhin Russian Cancer Research Center of the Russian Academy of Medical Sciences / Ed. Davidov M.I., Aksel E.M. M.: MIA, 2004. 256 p.
11. Jemal A., Siegel R., Ward E., Murray T., Xu J., Smigal C., Thun M.J. // CA Cancer J. Clin. 2007. V. 57. № 1. P. 43–66.
12. Maddams J., Brewster D., Gavin A., Steward J., Elliott J., Utley M., Muller H. // Br. J. Cancer. 2009. V. 101. № 3. P. 541–547.
13. Primrose S.B., Twyman R.M. Genomika. Rol v medicine (Genomics. Applications in Human Biology). M.: BINOM. Laboratoria znaniy, 2008. 277 p.
14. Kovalyov L.I., Shishkin S.S., Hasigov P.Z., Dzeranov N.K., Kazachenko A.V., Kovalyova M.A., Toropygin I.Y., Mamikina S.V. // Prikladnaya biokhimia i mikrobiologia (Applied Biochemistry and Microbiology). 2006. V. 42. № 4. P. 480–484.
15. Shishkin S.S., Dzeranov N.K., Totrov K.I., Kazachenko A.V., Kovalyov L.I., Eremina L.S., Kovalyova M.A., Toropygin I.Y. // Urologiia (Urology). 2009. V. 1. P. 56–58.
16. Kogan M.I., Loran O.B., Petrov S.B. Radikalnaya khirurgia raka predstatelnoy zhelezi (Radical Surgery of Prostate Cancer). M.: GEOTAR-Media, 2006. 392 p.
17. Shishkin S.S., Kovalyov L.I., Kovalyova M.A., Krakhmaleva I.N., Eremina L.S., Makarov A.A., Lisitskaya K.V., Loran O.B., Veliev E.I., Okhrits V.E. Problemi ranney diagnostiki raka prostati i vosmozhnosti primenenia novih potencialnih biomarkerov (The Problems of Early Diagnostics of Prostate Cancer and Possibilities of Using New Potential Biomarkers. (Informational-Methodical Letter)). M.: OOO "Originalnaya compania", 2009. 45 p.
18. Chernikov V.G., Terehov S.M., Krohina T.B., Shishkin S.S., Smirnova T.D., Lunga I.N., Adnoral N.V., Rebrov L.B., Denisov-Nikolsky Y.I., Bikov V.A. // Bull. Eksp. Biol. Med. (Bulletin of Experimental Biology and Medicine). 2001. V. 131. № 6. P. 680–682.
19. Krohina T.B., Shishkin S.S., Raevskaya G.B., Kovalyov L.I., Ershova E.S., Chernikov V.G., Mirochnik V.V., Bubnova E.N., Kucharenko V.I. // Bull. Eksp. Biol. Med. (Bulletin of Experimental Biology and Medicine). 1996. V. 122. № 9. P. 314–317.
20. Kovalyov L.I., Shishkin S.S., Efimochkin A.S., Kovalyova M.A., Ershova E.S., Egorov T.A., Musalyamov A.K. // Electrophoresis. 1995. V. 16. P. 1160–1169.
21. Eremina L.S., Kovalyov L.I., Shishkin S.S., Toropygin I.Y., Burakova M.I., Kovalyova M.A., Makarov A.A., Dzeranov N.K., Kazachenko A.V., Totrov K.I., Kononkov I.V., Loran O.B. // Vopr. Biol. Med. Farm. Khimii (Problems of Biological, Medical, and Pharmaceutical Chemistry). 2007. V. 3. P. 49–52.
22. Kovalyova M.A., Kovalyov L.I., Eremina L.S., Makarov A.A., Burakova M.I., Toropygin I.Y., Serebryakova M.V., Shishkin S.S., Archakov A.I. // Biomedicinskaya khimia (Biomedical Chemistry). 2008. V. 54. № 4. P. 420–434.
23. Anderson N.G., Anderson L. // Electrophoresis. 1996. V. 17. P. 443–453.
24. Shishkin S.S., Kovalyov L.I., Gromov P.S. / Mnogolikost sovremennoy genetiki cheloveka (The Variety of Contemporary Human Genetics). M.–Ufa: Gilem, 2000. P. 17–50.
25. Comings D. // Clin. Chem. 1982. V. 28. P. 782–789.
26. Loran O.B., Veliev E.I., Okhrizts V.E., Lisitskaya K.V., Eremina L.S., Kovalyov L.I., Kovalyova M.A., Shishkin S.S. // Eur. Urol. Suppl. 2010. V. 9. № 2. P. 309.

work_jvcjy75b3bgp5ojknetpdozone ----

SUELOS / LANDS. Architectural Lands. Natural Lands. Philippe Blanc. Portafolio fotográfico / Photographic Portfolio. ARQ 93, UC Chile.

Photography is a disciplined way of observation. Aligned with the eye, the camera becomes an instrument of analysis that allows us to represent the world around us; that is, to reinterpret and to revisit the usual, but with different eyes. Thus, photography is not a mere recording but the construction of a viewpoint, an interpretation of the world. To this end, analog cameras and procedures are adopted, not because of a nostalgic view but as a pursuit of higher quality. Unfortunately, a technical improvement – digital photography – has brought a decline in the aesthetics field. Analog photography
The intent of these images is to translate this ambiguity into a visual language, built out of tones, brightness and darkness, lights and shadows. ARQ S U E L O S 11 L A N D S p h i L i p p E b L a n c Architect, Pontificia Universidad Católica de Chile, 1999. Doctor in Architecture and Urban Studies, Pontificia Universidad Católica de Chile, 2010. Since 2010 he works as an architect and as a photographer. Author of the photographs in the book Patrimonio arquitectónico u c . Fragmentos de una obra (Santiago, 2016) and the exhibition of photographs about it. He has exhibited his photographs at the Sala Vitra (2013), Corporación Cultural de Las Condes (2015), Centro de Extensión u c (2016) and the Centro Cultural Gabriela Mistral g A M (2016). Blanc is Associate Professor at the Pontificia Universidad Católica de Chile. work_jvivein77vg6pahm2nu5l7ck6a ---- POSTER PRESENTATION Open Access Analysis of Anterior Trunk Symmetry Index (ATSI) in healthy school children based on 2D digital photography: normal limits for age 7-10 years Lukasz Stolinski*, Dariusz Czaprowski, Mateusz Kozinoga, Krzysztof Korbel, Piotr Janusz, Marcin Tyrakowski, Katsuki Kono, Nobumasa Suzuki, Tomasz Kotwicki From 10th International Conference on Conservative Management of Spinal Deformities - SOSORT 2013 Annual Meeting Chicago, IL, USA. 8-11 May 2013 Background Digital photography for a 2-dimensional assessment of the body shape is a valuable method to both document the human posture and calculate the main quantitative parameters of it. Purpose The goal of this study was to assess the frontal plane symmetry of the anterior trunk in healthy school chil- dren based on the digital photography by measurement of the Anterior Trunk Symmetry Index (ATSI). . Methods The study comprised 421 school children, both sexes, aged 7-10 years, with no clinical evidence of scoliosis (Angle of Trunk Rotation <5º). 
One frontal photograph of the anterior trunk in a spontaneous standing position was taken with a digital camera in a standardized manner. The semi-automatic software for the calculation of photogrammetric parameters was developed in collaboration with an IT specialist. The photographs were analyzed to obtain a quantitative assessment of the ATSI parameter. The intra-observer error was calculated by the first author by measuring the pictures of 14 randomly selected children three times, at an interval of at least two days. The inter-observer error was calculated by one surgeon and two experienced physiotherapists by measuring the pictures of 60 randomly selected children. The normal upper value limit was calculated as mean + 2SD.

Results: The mean ATSI value for the 421 children was 24.3 ±12.7 (girls 24.4 ±12.1, boys 24.0 ±13.5). The ATSI values for each age group were: (1) 7-year-old children (N=117): 26.0 ±12.9 (girls 25.9 ±12.5, boys 26.2 ±13.4); (2) 8-year-old children (N=85): 23.3 ±13.7 (girls 23.0 ±12.2, boys 23.7 ±16.1); (3) 9-year-old children (N=109): 24.5 ±12.7 (girls 23.9 ±11.9, boys 26.1 ±14.1); and (4) 10-year-old children (N=110): 22.9 ±11.7 (girls 24.5 ±11.6, boys 21.8 ±11.5). For all age groups, the mean ATSI for boys did not differ significantly from the mean ATSI for girls (P>0.05). For both boys and girls, the mean ATSI did not differ among the four age groups (P>0.05). The intra-observer error was 1.07. The inter-observer error for the three observers was 4.06. The upper value limits were: (1) for 7-year-old children: girls=50.9 and boys=53.0; (2) for 8-year-old children: girls=47.4 and boys=55.9; (3) for 9-year-old children: girls=49.9 and boys=47.7; and (4) for 10-year-old children: girls=47.7 and boys=44.8.

Conclusions and discussion: Using the semi-automatic software, the ATSI parameter could easily be calculated on regular digital photographs. The mean value of ATSI did not differ between boys and girls in the age range of 7-10 years.
Clinical usefulness of the ATSI parameter is yet to be determined by undertaking studies on larger groups of healthy and scoliotic children at different ages.

Published: 18 September 2013

* Correspondence: stolinskilukasz@op.pl. Rehasport Clinic, Poznań, Poland.

Stolinski et al. Scoliosis 2013, 8(Suppl 2):P10. http://www.scoliosisjournal.com/content/8/S2/P10

© 2013 Stolinski et al; licensee BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

doi:10.1186/1748-7161-8-S2-P10. Cite this article as: Stolinski et al.: Analysis of Anterior Trunk Symmetry Index (ATSI) in healthy school children based on 2D digital photography: normal limits for age 7-10 years. Scoliosis 2013, 8(Suppl 2):P10.
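The upper normal limits reported in the abstract above follow directly from the stated rule (mean + 2 SD); for example, for 7-year-old girls, 25.9 + 2 × 12.5 = 50.9. A quick check, using the means and SDs from the Results:

```python
def upper_limit(mean, sd):
    """Normal upper value limit as defined in the abstract: mean + 2*SD."""
    return round(mean + 2 * sd, 1)

print(upper_limit(25.9, 12.5))  # 50.9 (7-year-old girls)
print(upper_limit(26.2, 13.4))  # 53.0 (7-year-old boys)
print(upper_limit(23.0, 12.2))  # 47.4 (8-year-old girls)
```

Each value reproduces the corresponding limit quoted in the Results section.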
work_jy3iit3v3bhu7mztr7rowuf3em ---- NOVEL RETINOID ESTER IN COMBINATION WITH SALICYLIC ACID FOR THE TREATMENT OF ACNE PRIMARY AUTHOR: Zoe Draelos MD CO-AUTHORS: Joseph Lewis BS; Laura McHugh MS; Arthur Pellegrino BS; Lavinia Popescu MS, MBA Retinoids (RC), alpha hydroxy acids (AHA), and salicylic acid (SA) have therapeutic benefit in acne treatment through differing mechanisms of action. It is theorized that optimal acne improvement could be achieved by combining the RC-induced normalization of cellular differentiation, AHA-induced exfoliation in hydrophilic areas, and SA-induced exfoliation in lipophilic areas.
The AHA and RC compounds have been combined in a biologically designed molecule, known as an AHA Retinoid Conjugate (AHA-RC; chemically known as ethyl lactyl retinoate), delivering both lactic acid (AHA) and RC in a time-released hydrolytic manner designed to reduce retinoid-associated irritation. A 27-subject, 8-week clinical trial in subjects with mild to moderate acne was conducted using a 3-product regimen consisting of a twice-daily cleanser (7.8% l-lactic acid, 2% SA), a twice-daily acne serum (0.1% AHA-RC, 2% salicylic acid, and 10.4% l-lactic acid), and a broad-spectrum SPF 50+ sunscreen as needed. Investigator counts of total inflammatory (papules, pustules) and non-inflammatory (open comedones, closed comedones) lesions revealed a statistically significant reduction in inflammatory lesion counts (p=0.006) and non-inflammatory lesion counts (p=0.015) after 4 weeks of use. Improvement continued into week 8, with highly statistically significant (p<0.001) reductions in inflammatory and non-inflammatory lesions. Thus, the combination of lactic acid, SA and the novel AHA-RC produced acne improvement after 4 weeks, with continuing cumulative improvement at 8 weeks. AHA-RC represents a new molecule combining several mechanisms of action to achieve acne improvement. INTRODUCTION Topical therapies serve as the frontline treatment in all but the most severe cases of acne vulgaris due to their relatively low cost and ease of use. Compliance is often an issue with topical acne therapies,1 especially those with associated unpleasant side effects such as stinging and burning, redness, or drying and flaking of skin.
Retinoid compounds (RC; vitamin A and its derivatives) are heavily studied and used but still not well understood; they are commonly used to treat acne, photodamage, and other skin conditions due to their range of biological effects (normalization of melanocyte function, immunomodulation, regulation of skin cell metabolism and cellular turnover, thickening of the epidermis, increases in dermal fibroblast production and activity, stimulation of neocollagenesis, and an increase in the height of rete ridges and the number of dermal papillae2–5). When treating acne, they serve to normalize cell differentiation and inhibit key immunity factors.6 Associated irritation makes topical use somewhat problematic, as it may affect compliance.7 Alpha hydroxy acids (AHA) are nontoxic, organic acids (e.g. glycolic acid, lactic acid, malic acid) consisting of a carboxylic acid functional group with a hydroxyl group (alcohol) on the adjacent (alpha) carbon atom; some, such as lactic acid, are present within the body.8 Harnessed effects include moisturization, exfoliation, and treatment of dermatologic indications involving abnormal keratinization. Their safety, moisturization, and exfoliant properties make them ideal for topical use. Salicylic acid (SA), a beta hydroxy acid (BHA), is one of five FDA-cleared OTC therapies for acne and is present in numerous topical formulations. Bacteriostatic and keratolytic properties, as well as correction of abnormal shedding of cells, have been noted9 but require continuous use; SA does not affect sebum production or kill bacteria. It is theorized that optimal acne improvement could be achieved by combining the RC-induced normalization of cellular differentiation, AHA-induced exfoliation in hydrophilic areas, and SA-induced exfoliation in lipophilic areas. Unfortunately, therapeutic doses of topically applied retinoids frequently cause skin irritations that interfere with treatment. Figure 1 (Vitamin A derivatives, method of conversion and irritation potential) shows the relationship between commonly used retinoids and general irritation level, and the process by which they are converted from one form to another: the figure depicts retinyl esters (e.g. retinyl palmitate, retinyl linoleate, retinyl acetate) and retinoate esters (e.g. ethyl lactyl retinoate, the AHA retinoid double conjugate) undergoing hydrolysis, and retinol undergoing oxidation to retinal and then to retinoic acid (vitamin A acid), the most irritating form. Esters are molecules made by reacting an organic carboxylic acid and an alcohol through a condensation reaction, and typically provide increased stability and reduced irritation over the parent compounds. Attempts to reduce retinoid irritation by esterifying vitamin A with fatty acids or other common organic acids such as palmitic acid or acetic acid to produce 'retinyl' esters (e.g. retinyl palmitate and retinyl acetate) also result in reduced efficacy. A NEW AHA-RETINOID DOUBLE CONJUGATE MOLECULE Although AHAs are technically both carboxylic acids and alcohols, reactions in which AHAs react as "alcohols" are not common. Retinoate esters can be engineered by combining an AHA (as the alcohol) with vitamin A acid. Beneficial results include increased stability and reduced irritation compared to the parent compound, but esterification often detrimentally affects efficacy. A bioengineered retinoid ester, ethyl lactyl retinoate (AHA retinoid conjugate, or AHA-RC), is the first double conjugate retinoid to deliver both AHA and RC to skin via a hydrolysis-based time-release mechanism biologically designed to be efficient and minimally irritating to patients.
A molecular model of this novel ester is presented in Figure 2. STUDY PURPOSE The purpose of this 8-week, prospective pilot study was to evaluate the efficacy and tolerability of a twice-daily, three-product skincare regimen using the bioengineered AHA retinoid conjugate ester plus SA in patients with acne. STUDY METHODS Women (n=27) aged 40–65 years (mean 52 ± 6.20) presenting with a minimum of 15 inflammatory acne lesions and a minimum of 15 non-inflammatory lesions were enrolled. The study was conducted under current Good Clinical Practices (cGCP) guidelines using an independent investigational review board (IIRB)-approved protocol. After initial screening (Days -10 to -7), during which informed consent, medical history, inclusion/exclusion criteria review, initial endpoint assessment, and other pre-screening activities were performed, enrolled subjects were instructed to discontinue use of facial products except dry mineral foundation and eye makeup for 7 to 10 days prior to beginning the study ("washout" period). Pre-screening was reviewed at baseline (day 0) with re-assessment of initial endpoint evaluations, including a comprehensive lesion count (inflammatory lesions, non-inflammatory lesions, papules, pustules, open comedones and closed comedones). Baseline digital photographs were obtained. Product was dispensed with clear instructions for proper use. The three-product regimen included a cleanser (7.8% l-lactic acid, 2% BHA), an active topical (0.1% AHA-RC, 10.4% l-lactic acid, 2% BHA), and a broad-spectrum sunscreen (SPF 50+); subjects were to apply the topical after cleansing in the morning and evening. Sunscreen was to be applied after the morning application of product and as needed throughout the day. Subject diaries were included to promote compliance and obtain subject commentary or observations.
Digital photography and visual expert grading of endpoints were performed at the week 4 and week 8 follow-up visits; acne was evaluated via comprehensive lesion count (inflammatory lesions, non-inflammatory lesions, papules, pustules, open comedones and closed comedones). Secondary endpoints of Dryness/Flaking, Fine Lines/Wrinkles, Dyschromia, Stinging/Burning, and Erythema/Redness were evaluated by the investigator on a 0-5 scale (0=none, 1=minimal, 2=mild, 3=moderate, 4=moderately severe, 5=severe). Global improvement was measured on a 0-4 scale (0=no improvement, 1=minimal improvement, 2=mild improvement, 3=moderate improvement, 4=marked improvement). Adverse events were recorded along with any concomitant medication information. Subject diaries were collected (and new ones dispensed) at week 4; at week 8 (Day 56), subject diaries and any remaining product were collected. The primary clinical endpoint was the comprehensive lesion count, with comparisons made at the week 4 and week 8 visits against baseline evaluations. A similar comparison was made for secondary endpoints. Data were analyzed via Mann-Whitney test using a cutoff of p<0.05, with p<0.001 denoting high significance. RESULTS Of the enrolled subjects (n=27), 24 completed the study. Two subjects discontinued the study due to irritation adverse events; there were no serious adverse events, and one subject was lost to follow-up. There was a statistically significant reduction in inflammatory and non-inflammatory lesion counts at week 4, with highly statistically significant reductions seen by week 8. Specifically, there was a statistically significant reduction in papules (p<0.001) and closed comedones (p<0.001) at week 8. Table 1 shows the percentage of improvement with corresponding p values; Figure 3 graphically demonstrates the study results. The data indicate strong, significant improvement in acne with proper use of the AHA-RC plus SA topical.
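The "percent improvement" figures reported for each lesion type are relative reductions in lesion count from baseline to week 8; a minimal sketch with hypothetical counts (the counts below are illustrative only, not study data):

```python
# Percent improvement = relative reduction in lesion count from baseline.
# The example counts are hypothetical, for illustration only.
def percent_improvement(baseline_count, week8_count):
    return round(100 * (baseline_count - week8_count) / baseline_count)

print(percent_improvement(20, 7))   # 65 -> a 65% reduction
print(percent_improvement(15, 15))  # 0  -> no change
```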
Reductions in lesion counts across the board were notable and, for the most part, highly significant. For secondary endpoints, a statistically significant increase in facial stinging seen at week 4 subsided to statistical insignificance by week 8; statistically significant improvement (29% on average) was noted in fine lines, and improvement in global appearance (average 57%) was more profound. This is particularly important in treating adult female acne, one of the most prevalent forms of acne. Statistically significant increases were not observed in dryness or erythema, further attesting to the tolerability of the acne treatment. Figures 4 and 5 (Subject 18 and Subject 21, day 0 vs. 8 weeks) demonstrate typical visible results from the study.

Table 1: Acne Lesion Percent Improvement, Baseline to Week 8

Lesion type       | % improvement | p value*
Inflammatory      | 64            | <0.001
Non-inflammatory  | 55            | <0.001
Papules           | 67            | <0.001
Pustules          | 37            | 0.475
Open comedones    | 87            | 0.187†
Closed comedones  | 51            | <0.001

*p<0.05 denoted statistical significance, p<0.001 denoted high statistical significance. †Open comedones were only present in two patients, which may explain the lack of statistical significance despite an obviously large percentage of improvement.

Figure 2. Molecular diagram of the AHA retinoid conjugate (AHA-RC), formed from retinoic acid, lactic acid and ethanol. Figure 3. Comparison of percent improvement in acne lesion count, baseline to week 8. CONCLUSION The three-product regimen (AHA-RC plus SA) was shown to be safe and effective for the treatment of acne; statistically significant reductions in both inflammatory and non-inflammatory acne lesion counts were noted. Significant improvement in overall skin quality was also observed. REFERENCES 1. Bartlett KB, Davis SA, Feldman SR. Topical antimicrobial acne treatment tolerability: a meaningful factor in treatment adherence? J Am Acad Dermatol. 2014 Sep;71(3):581-582.e2. 2. Kligman AM, Grove GL, Hirose R, Leyden JJ. Topical tretinoin for photoaged skin. J Am Acad Dermatol. 1986 Oct;15(4 Pt 2):836-59. 3. Kang S. Photoaging and tretinoin. Clin Dermatol. 1998;16(2):357-364. 4. Kligman LH. Topical retinoic acid enhances repair of ultraviolet damaged dermal connective tissue. Connect Tissue Res. 1984;12(2):139-50. 5. Zelickson AS. J Cut Aging Cosmet Dermatol. 1988;1:41-47. 6. Wolf JE Jr. Potential anti-inflammatory effects of topical retinoids and retinoid analogues. Adv Ther. 2002 May-Jun;19(3):109-18. 7. Rolewski SL. Clinical review: topical retinoids. Dermatol Nurs. 2003 Oct;15(5):447-50, 459-65. 8. Yu RJ, Van Scott E. Alpha-hydroxy acids: science and therapeutic use. Cosmet Dermatol. 1994;7(10S):12-20. 9. Madan RK, Levitt J. A review of toxicity from topical salicylic acid preparations. J Am Acad Dermatol. 2014 Apr;70(4):788-92. This study was sponsored by US CosmeceuTechs, LLC and conducted on their behalf by Dr. Draelos. Lewis and McHugh are employees of US CosmeceuTechs, LLC. Pellegrino and Popescu are employees of Elizabeth Arden; Elizabeth Arden has an equity investment in US CosmeceuTechs, LLC.

work_jynyz772w5hzbcwl64faoiw3iy ---- Assessment of digital clubbing in medical inpatients by digital photography and computerised analysis Daniela Husarik, Stephan R. Vavricka, Michael Mark, Andreas Schaffner, Roland B. Walter Department of Internal Medicine, Medical Clinic B, University Hospital, Zürich, Switzerland Summary Background: Digital clubbing has been associated with a large number of disorders.
To overcome the limitation of subjective clinical assessment, several objective measurements have been developed, among which the hyponychial angle was considered most accurate for quantification of finger clubbing. Methods and results: Here we investigated hyponychial angles in 123 healthy subjects and 515 medical inpatients from a tertiary hospital. Healthy subjects had a mean angle of 178.87 ± 4.70° (range: 164.78–192.10°), a finding that is well in accordance with previous results obtained using other techniques, underlining the accuracy of the chosen method of assessment. The mean angle of patients was 181.65 ± 7.18° (range: 162.22–209.19°; p <0.0001 compared to healthy controls). When the upper limit of normality, i.e. 192.10°, was used to define digital clubbing, the prevalence of digital clubbing in our patients was 8.9%; the percentage of clubbed fingers varied substantially among the various disease states (up to 80% in patients with cystic fibrosis). Conclusion: The use of digital photography with computerised analysis was found to be an easy, fast and inexpensive method for the quantification of hyponychial angles, with excellent intra- and inter-observer reliability, whilst causing no discomfort to patients. This tool may therefore be useful in further longitudinal and cross-sectional studies of finger morphology and may become an accepted standard in the diagnosis of digital clubbing. Key words: digital clubbing; digital photography; hyponychial angle; prevalence; quantification Introduction Since the first description of finger clubbing in a patient with empyema by Hippocrates in the fifth century BC, this deformity has been associated with a large number of disorders [1, 2].
Features of clubbed fingers on physical examination are a shiny and smooth appearance of the cuticle with increased sponginess, a flattening of the normal obtuse angle on the dorsal surface of the finger at the base of the nail, an increase in volume of the distal segment of the finger, and an increase in the curvature of the nail in one or both planes [2, 3]. However, clinical assessment is subjective, often difficult in mild cases and therefore unreliable [4–8]. To overcome this limitation, several methods have been developed over the past century for quantification of clubbing. Attempted approaches include matching brass templates with arcs of various sizes to measure longitudinal curvature [9], plethysmography [10], casts with planimetry [11] or measurement of finger depth ratios [12, 13], and a shadowgraph technique [14–16]. None of these, however, was accepted as a standard of diagnosis, and all proved impracticable as a method for verifying the clinical impression of clubbing. Figure 1. Construction of the hyponychial angle: the angle was constructed by a line AB drawn from the distal digital skin crease to the cuticle, and a line BC drawn from the cuticle to the hyponychium (the thickened stratum corneum of the epidermis lying under the free edge of the nail) [11]. No financial support declared. Original article, Swiss Med Wkly 2002;132:132–138, www.smw.ch. Recently, digital cameras and computerised analysis have accurately been used to assess fingernail morphometry and offer obvious advantages over previous techniques [17]. Among several objective measurements, the hyponychial angle (figure 1) correlated strikingly with the physician's subjective score, and was considered the "best discriminator" by Regan et al.
[11], since it distinguished normal and clubbed fingers without overlap. The accuracy of the hyponychial angle as an indicator of finger clubbing was later confirmed by other investigators [15, 16, 18]. The original findings of these early studies are summarised in table 1. In a recent review of the literature, a pooled weighted mean value for the hyponychial angle of 179.0 ± 4.5° was calculated. None of the 171 healthy subjects who have been assessed to date had angles that exceeded 192°, and an angle less than 190° was therefore assumed to describe normality [2]. Although several studies reported mean values of hyponychial angles in a number of disease states [2], only very limited data on finger morphology and the prevalence of digital clubbing in unselected hospitalised medical patients is available [19]. Thus the present study was designed to investigate the distribution of hyponychial angles in a large group of both healthy volunteers and medical inpatients from a tertiary hospital. For quantification, the hyponychial angle was calculated by means of digital photography and computerised analysis. Materials and methods Subjects Healthy controls: after verbal consent was obtained, fingers of subjects who reported no acute or chronic illness were photographed as healthy controls. Medical inpatients: during the three-month enrolment period from September 25, 2000 to December 22, 2000, patients admitted as inpatients to the medical clinic of the Department of Internal Medicine (University Hospital, Zürich, Switzerland) were eligible when hospitalised for more than 2 days. This minimal length of hospitalisation was arbitrarily chosen to select patients more characteristic of medical inpatients in a tertiary hospital and as a means of excluding patients hospitalised simply for short interventions. There were no other predefined exclusion criteria, and only the patient's verbal consent was required.
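The three-landmark construction of the hyponychial angle (points A, B, C in figure 1) reduces to an angle-at-vertex computation once pixel coordinates are known. A minimal sketch follows; this is not the authors' software, and the example coordinates and orientation convention are hypothetical:

```python
import math

# Angle ABC in degrees, measured from ray BA around to ray BC, so that a
# profile bulging past the straight A-C line yields values above 180.
# A = distal digital skin crease, B = cuticle, C = hyponychium (figure 1).
def hyponychial_angle(a, b, c):
    ang = math.degrees(
        math.atan2(c[1] - b[1], c[0] - b[0]) - math.atan2(a[1] - b[1], a[0] - b[0])
    )
    return ang + 360.0 if ang < 0 else ang

# Hypothetical coordinates (y increasing upward): a perfectly straight profile
# gives 180 degrees; moving the cuticle point below the A-C line pushes the
# angle past 180, toward the clubbed range.
print(hyponychial_angle((0.0, 0.0), (10.0, 0.0), (20.0, 0.0)))             # 180.0
print(round(hyponychial_angle((0.0, 0.0), (10.0, -1.0), (20.0, 0.0)), 1))  # 191.4
```

The sign convention (whether clubbing raises or lowers point B) depends on image orientation; a real implementation would fix one convention for all photographs.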
Patients who were admitted repeatedly during the period of enrolment were examined only during their first admission. Acquisition of digital images A simple and inexpensive system was developed using a digital camera (Coolpix 990, Nikon, Küsnacht, Switzerland) fixed 12 cm in front of a bar to obtain standardised and reproducible images of the radial view of the right index finger. The bar (on which the extended finger was placed for assessment) and the wooden support (on which the remaining flexed fingers were placed) were secured in a controlled position in front of the camera in order to avoid rotation. The left instead of the right index finger was used in cases where traumatic or surgical deformities prevented correct measurement of the hyponychial angle on the right side. Predefined settings of the camera were used for acquisition of images in normal resolution with an automatic flash. For further analysis, digital images were stored on CD-ROM. Measurements Using Quartz PCI Scientific Image Management System software (Version 4.20, Quartz Imaging Corporation, Vancouver, Canada), the images were displayed on a computer screen. The angle measurement tool was used to calculate the hyponychial angle according to Regan et al. [11] as outlined (fig. 1). Each picture was analysed by three investigators (DH, SRV, MM), repeating the angle calculation in triplicate and at random.

Table 1. Summary of historical studies investigating hyponychial angles for assessment of digital clubbing.
Regan et al. [11] — healthy controls: 187.0° (range 176.5–192; n=18); patients: 209.4° (n=7; asbestos workers).
Bentley et al. [15] — healthy controls: 180.1 ± 4.2° (age 11.25 years; n=25); patients: 194.8 ± 8.3° in cystic fibrosis (n=50; age 16.2 years), asthma (n=25; age 12.3 years), cyanotic congenital heart disease (n=5; age 17 years) and acyanotic congenital heart disease (n=20; age 11 years).
Sinniah et al. [16] — healthy controls: 180.7 ± 5.2° (range 165–189; age 6.7 ± 3.4 years; n=20); patients: 194.5 ± 7.5° (range 178–205; age 6.9 ± 2.7 years; n=19; thalassemia major, cyanotic heart disease, malabsorption syndrome, bronchiectasis).
Kitis et al. [18] — healthy controls: 177.9 ± 4.6° (age 29 years; n=116); patients: inflammatory bowel disease (n=90) and proctitis (n=2). Note: the study by Kitis et al. [18] found digital clubbing present in 75 out of 200 patients with Crohn's disease and 15 out of 103 with ulcerative colitis; however, no precise data on mean hyponychial angles in these groups were provided. (n.d. = no data available.)

Statistical analysis Results are presented as means ± SD. Nonparametric statistical tests were used throughout. Continuous variables between groups were compared using a two-tailed Mann-Whitney U test or a Kruskal-Wallis one-way analysis of variance test as appropriate, whereas discontinuous variables were compared using the two-tailed Fisher's exact test. Inter-rater and intra-rater reliability were assessed using intraclass correlation. In addition, Spearman rank correlations were performed. Statistical calculations were done using InStat version 3.05 (GraphPad, San Diego, CA, USA); p <0.05 was considered significant. Results 123 healthy subjects participated in the study; their characteristics and the results of the measurement of the hyponychial angles of their right index fingers are shown in table 2. Women were significantly older than men (p = 0.0197). They also had a slightly larger mean hyponychial angle, but this difference did not reach statistical significance (p = 0.0719). The distribution of the hyponychial angles is shown in figure 2. Table 3 lists mean hyponychial angles of different age groups.
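The Mann-Whitney U test named in the statistical analysis above is rank-based; a minimal illustration of the U statistic with midranks for ties follows (this is not the InStat implementation, and no p-value or normal approximation is shown):

```python
# Mann-Whitney U statistic for two independent samples, as used for the
# group comparisons of hyponychial angles; tied values receive midranks.
def mann_whitney_u(xs, ys):
    pooled = list(xs) + list(ys)
    order = sorted(range(len(pooled)), key=lambda i: pooled[i])
    ranks = [0.0] * len(pooled)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and pooled[order[j + 1]] == pooled[order[i]]:
            j += 1  # extend over a run of tied values
        midrank = (i + j) / 2 + 1  # average rank of positions i..j (1-based)
        for k in range(i, j + 1):
            ranks[order[k]] = midrank
        i = j + 1
    r1 = sum(ranks[: len(xs)])  # rank sum of the first sample
    u1 = r1 - len(xs) * (len(xs) + 1) / 2
    return min(u1, len(xs) * len(ys) - u1)

print(mann_whitney_u([1, 2, 3], [4, 5, 6]))  # 0.0: complete separation of groups
print(mann_whitney_u([1, 3, 5], [2, 4, 6]))  # 3.0: interleaved samples
```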
Mean hyponychial angles among the different age groups were similar for subjects of both sexes together (p = 0.9733) as well as for women (p = 0.9585) and men (p = 0.8815) separately. There was no correlation between the age of the subjects and the hyponychial angle (for both sexes: r = 0.0070, p = 0.9387; for women: r = –0.0158, p = 0.9082; for men: r = –0.0317, p = 0.7988). In a subgroup of 26 healthy volunteers, digital images of both index fingers as well as the right middle finger were taken. Measured hyponychial angles were 176.15 ± 4.34° (right middle finger), 178.60 ± 4.38° (right index finger), and 180.40 ± 5.46° (left index finger) respectively. Compared to the right middle finger, the index fingers of both sides had significantly larger hyponychial angles (right side: p = 0.0002; left side: p = 0.0043), whereas the difference between the hyponychial angle of the right compared to the left index finger did not reach statistical significance (p = 0.1427; all data not shown). During the three-month enrolment period, a total of 576 patients were eligible for study participation. Of these, 53 patients were not included for the following reasons: personal (12), impossibility of communication due to language problems (9), severe or terminal illness or death (13), patient not accessible (19). Of the remaining 523 patients, the digital images of 8 subjects were excluded for technical reasons. Therefore, a total of 515 patients were included in the final analysis; their characteristics are outlined in table 2. Although women were slightly older than men and had larger hyponychial angles, these differences did not reach statistical significance (p = 0.0802 for age and p = 0.0710 for hyponychial angles, respectively). However, men reported a history of regular alcohol consumption (p <0.0001) and a positive cigarette smoking status (p <0.0001) significantly more frequently than women.
The angle distribution of the patients is shown in figure 3, and mean hyponychial angles for the different age groups are listed in table 3. Hyponychial angles of patients aged 30–44 years were significantly larger than those of patients between 60–98 years of age (p <0.05); otherwise, no differences between hyponychial angles of different age groups were found. Furthermore, there were no statistical differences between hyponychial angles of women and men within the same age groups.

Table 2. Characteristics of control subjects and patients (women | men | both).
Control subjects (n): 67 | 56 | 123
  Mean age (years) ± SD: 39.5 ± 12.5 | 34.9 ± 12.4 | 37.4 ± 12.6 (ranges 22–77 | 19–72 | 19–77)
  Mean hyponychial angle (°) ± SD: 179.60 ± 4.76 | 178.01 ± 4.52 | 178.87 ± 4.70 (ranges 164.78–191.98 | 168.01–192.10 | 164.78–192.10)
Patients (n): 216 | 299 | 515
  Mean age (years) ± SD: 62.2 ± 19.5# | 59.7 ± 17.3# | 60.7 ± 18.3# (ranges 15–98 | 16–91 | 15–98)
  Mean hyponychial angle (°) ± SD: 182.17 ± 7.00* | 181.27 ± 7.29** | 181.65 ± 7.18*** (ranges 166.28–201.82 | 162.22–209.19 | 162.22–209.19)
  Regular alcohol consumption (y/n): 11/205 | 51/248 | 62/453
  Cigarette smoking (y/n): 187/66 | 70/95 | 257/161
Note: regular alcohol consumption was assumed when patients reported drinking >1 glass of wine daily; smoking status was only assessed in 416 patients. # p <0.0001 compared to the corresponding healthy subjects; * p = 0.0054 compared to healthy women; ** p = 0.0011 compared to healthy men; *** p <0.0001 compared to healthy subjects of both sexes.

Angles of patients with regular alcohol consumption did not differ from those of patients who reported no regular alcohol consumption (180.9 ± 7.7° vs. 181.8 ± 7.1°; p = 0.3321). This was also true when women and men were analysed separately (for women: 182.1 ± 7.5° vs. 182.2 ± 7.0°, p = 0.8760; for men: 180.7 ± 7.7° vs.
181.4 ± 7.2°, p = 0.5052). Where the smoking history was known, smokers had slightly larger hyponychial angles than non-smokers (181.9 ± 7.4° vs. 180.7 ± 6.9°), but this difference did not reach statistical significance (p = 0.1329). Smoking women, however, had larger angles than non-smoking women (183.5 ± 7.5° vs. 181.0 ± 6.8°; p = 0.0310), whereas the angles of smoking men and non-smoking men were equal (181.4 ± 7.3° vs. 180.5 ± 7.4°; p = 0.3771).

Figure 2. Distribution of hyponychial angles in healthy subjects: the distribution of hyponychial angles assessed by digital photography and computerised analysis is shown for healthy women (upper panel), healthy men (middle panel) and both sexes (lower panel). Figure 3. Distribution of hyponychial angles in medical inpatients: the corresponding distributions are shown for female patients (upper panel), male patients (middle panel) and patients of both sexes (lower panel).

Table 3. Age distribution of hyponychial angles among control subjects and patients; results are shown as mean ± SD (number of subjects in parentheses).

Age (years) | Controls, women + men | Controls, women | Controls, men | Patients, women + men | Patients, women | Patients, men
15–29 | 179.0 ± 4.2 (45)* | 180.3 ± 4.4 (19) | 178.0 ± 3.9 (26) | 183.6 ± 7.4 (40) | 183.6 ± 5.6 (17) | 184.0 ± 8.7 (23)
30–44 | 178.5 ± 4.9 (46)** | 179.0 ± 4.9 (25) | 177.9 ± 4.9 (21) | 183.4 ± 7.5 (75)# | 184.2 ± 7.8 (35) | 182.7 ± 7.2 (40)
45–59 | 179.6 ± 5.4 (26) | 179.9 ± 5.0 (20) | 178.6 ± 7.1 (6) | 181.6 ± 7.3 (121) | 182.7 ± 6.9 (40) | 181.1 ± 7.5 (81)
60–98 | 177.8 ± 4.1 (6) | 178.4 ± 6.0 (3) | 177.2 ± 2.4 (3) | 180.9 ± 6.9 (279) | 181.2 ± 6.9 (124) | 180.6 ± 6.9 (155)

* p = 0.0027, ** p = 0.0004 compared to patients of the same age group; # p <0.05 compared to patients aged 60–98.
Patients were significantly older than healthy controls (p <0.0001 for both sexes together as well as for women and men separately; see table 2), and mean hyponychial angles of patients were significantly larger than those of healthy subjects (for both sexes, p <0.0001; for women, p = 0.0054; for men, p = 0.0011). When mean hyponychial angles of healthy subjects and patients of different age groups were compared, patients aged 15–29 and 30–44 years had significantly larger hyponychial angles (p = 0.0027 and p = 0.0004, respectively). 46 patients (18 female/28 male; mean age: 54.3 ± 21.0 years) had hyponychial angles that did not overlap with the range of hyponychial angles of healthy subjects (164.78–192.10°). When the upper limit of normality is used to define the "beginning" of digital clubbing, a prevalence of 8.9% could be calculated; the mean hyponychial angle of these 46 patients was 195.76 ± 4.29°. The percentage of female patients (8.3%) featuring digital clubbing was not statistically different from the percentage of male patients (9.4%; p = 0.7554). Table 4 lists mean hyponychial angles of different patient groups and the percentage of patients with digital clubbing among patients suffering from the same specific disease. The percentage of clubbed fingers varied substantially among the various disease states and was as high as 80% in patients with cystic fibrosis. As shown, patients suffering from acquired valvular heart disease, bronchiectasis, chronic hepatitis, chronic obstructive pulmonary disease, cystic fibrosis, pulmonary emphysema, endocarditis, heart failure, HIV infection, ischaemic heart disease, leukaemia, liver cirrhosis, pneumonia, pulmonary hypertension and certain solid malignant tumours, as well as subjects who had undergone lung transplantation in the past, were identified as having clubbed fingers.
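The 8.9% prevalence figure follows directly from the 46 non-overlapping patients out of 515; a minimal sketch, with the cut-off taken from the healthy cohort's maximum angle as described above:

```python
# Prevalence of digital clubbing when the largest hyponychial angle observed
# in the 123 healthy subjects (192.10 degrees) is taken as the cut-off.
CUTOFF_DEGREES = 192.10

def prevalence_percent(n_above_cutoff, n_total):
    return round(100 * n_above_cutoff / n_total, 1)

print(prevalence_percent(46, 515))  # 8.9 (%), as reported for the 515 inpatients
```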
In addition, digital clubbing was found in patients with atrial myxoma (1), complex cyanotic congenital heart anomaly (consisting of a single ventricle, ventricular septum defect and pulmonary hypertension: 1 patient), cystic renal disease (1), pulmonary fibrosis (1) and lymphoma (1).

As stated in the methods, the hyponychial angles of both healthy subjects and patients were determined by three investigators who repeated each measurement three times in a randomised fashion. Intra-rater reliability was excellent, with coefficients of 0.969, 0.994, and 0.958 for assessment of hyponychial angles in healthy volunteers by rater 1, rater 2, and rater 3, and 0.981, 0.997, and 0.981 for measurement of hyponychial angles in patients by rater 1, rater 2, and rater 3 respectively. Likewise, inter-rater reliability was high; corresponding coefficients were 0.839, 0.810, and 0.825 for rating 1, 2, and 3 in healthy volunteers, and 0.915, 0.906, and 0.910 for rating 1, 2, and 3 in patients respectively.

Table 4 Hyponychial angles of various patient groups.

Disease                         | n¹  | Hyponychial angle | p-value² | p-value³ | x⁴
Heart diseases                  |     |                   |          |          |
Acquired valvular heart disease | 81  | 180.57 ± 7.39     | 0.1342   | 0.1493   | 7 (8.6%)
Endocarditis                    | 7   | 186.34 ± 9.94     | 0.0466   | 0.2229   | 2 (28.6%)
Heart failure                   | 95  | 182.02 ± 6.72     | 0.0003   | 0.4834   | 9 (9.5%)
Ischaemic heart disease         | 170 | 181.95 ± 7.32     | 0.0002   | 0.6096   | 16 (9.4%)
Lung diseases                   |     |                   |          |          |
Bronchiectasis                  | 5   | 186.42 ± 7.66     | 0.0453   | 0.1653   | 2 (40%)
COPD                            | 62  | 184.83 ± 7.70     | <0.0001  | 0.0003   | 9 (14.5%)
Cystic fibrosis                 | 5   | 193.17 ± 8.44     | <0.0001  | 0.0050   | 4 (80%)
Emphysema                       | 9   | 181.72 ± 9.04     | 0.1618   | 0.8247   | 2 (22.2%)
Hypertension, pulmonary         | 31  | 182.36 ± 8.62     | 0.0079   | 0.4585   | 3 (9.7%)
Lung transplantation            | 12  | 184.99 ± 8.55     | 0.0219   | 0.2102   | 3 (25%)
Pneumonia                       | 47  | 184.32 ± 9.24     | 0.0002   | 0.0504   | 11 (23.4%)
Infectious diseases             |     |                   |          |          |
HIV infection                   | 19  | 186.55 ± 7.84     | <0.0001  | 0.0018   | 3 (15.8%)
Liver diseases                  |     |                   |          |          |
Hepatitis (chronic)             | 21  | 185.20 ± 8.44     | <0.0001  | 0.0115   | 3 (14.3%)
Liver cirrhosis                 | 19  | 184.08 ± 7.75     | 0.0049   | 0.1686   | 3 (15.8%)
Malignant diseases              |     |                   |          |          |
Leukaemia*                      | 13  | 183.34 ± 6.70     | 0.0011   | 0.2774   | 2 (15.4%)
Solid tumors, malignant (all)   | 84  | 182.02 ± 6.16     | <0.0001  | 0.3007   | 5 (6.0%)
Anal                            | 6   | 185.45 ± 6.41     | 0.0106   | 0.1463   | 2 (33.3%)
Lung                            | 17  | 182.51 ± 7.29     | 0.0141   | 0.4220   | 2 (11.8%)

COPD = chronic obstructive pulmonary disease; ¹ total number of patients suffering from the specified disease; ² compared to healthy controls; ³ compared to remaining patients; ⁴ number of patients with hyponychial angles that exceed the range of normal subjects (i.e. >192.10°); * included 12 patients with acute and one patient with chronic leukaemia. Note: the sum of patients with digital clubbing does not yield 46 since only diseases that affected more than 4 patients are shown and, in addition, several patients had more than one disease and were therefore listed more than once.

SWISS MED WKLY 2002;132:132–138 · www.smw.ch

Discussion

In the present study, we found a mean hyponychial angle of right index fingers among healthy subjects that was very similar to those reported in previous studies (see table 1 and reference [2]). Since angles were determined with plaster cast or shadowgraph techniques in earlier reports, this result confirms the accuracy of digital photography and computerised analysis as a modern method for quantification of finger morphology. Also in agreement with previous findings [2], none of the healthy volunteers had angles above approximately 192°; accordingly this value was confirmed as describing the upper limit of normality. Preliminary data obtained in 26 healthy volunteers indicated that hyponychial angles of index and middle fingers may be slightly but significantly different. Therefore in the future it may be important to restrict this range of normality to hyponychial angles of (right) index fingers. To date only very limited information about the prevalence of digital clubbing in unselected medical patients is available.
In one study, clubbed fingers were found in 29 out of 117 adult patients (24.8%) admitted to medical wards [19]. In our group of medical inpatients from a tertiary hospital, mean hyponychial angles of index fingers were significantly larger than those of healthy subjects, although the mean difference was only 2.78° and there was considerable overlap. Nevertheless, 46 patients had hyponychial angles that exceeded 192.1°, resulting in an overall prevalence of digital clubbing of 8.9% when the upper limit of normality is used to define the “beginning” of clubbed fingers. Differences in patient characteristics, assessment method and definition of digital clubbing may explain the difference between the results from our study and previous results. As outlined in detail in table 4, the percentage of patients with clubbed fingers varied substantially among the different disease states: from 0% up to 80% in patients with cystic fibrosis. Mean hyponychial angles in our patients with cystic fibrosis were very similar to those assessed by Bentley et al. [15] and Pitts-Tucker et al. [20] (193.2 ± 8.4° in our study compared to 194.8 ± 8.3° and 192° respectively), once again confirming the accuracy of the method used in the present study. An obvious limitation of our study is that the detailed analysis of the patients’ records necessary to associate digital clubbing with an underlying disease was complicated by the fact that most medical patients had more than one significant disease. Nevertheless, patients with diseases known to be associated with digital clubbing, such as bronchiectasis, cystic fibrosis, pulmonary fibrosis, endocarditis, chronic hepatitis or cirrhosis, HIV infection, infectious lung disease and lung cancer, could be identified. One patient with clubbed fingers was found to have an atrial myxoma, another had a complex cyanotic congenital heart anomaly. Two other patients with pulmonary hypertension were also found to have digital clubbing.
In one, pulmonary hypertension was due to heart failure; in the other, pulmonary hypertension was associated with HIV infection. In addition, there were some diseases that so far have only rarely, if at all, been associated with digital clubbing, e.g. leukaemia (both patients had acute lymphoblastic leukaemia, with one having no other disease, whilst the other suffered from heart failure and fungal pneumonia) or heart failure (among these some merely had underlying ischaemic heart disease and no other concurrent disease). Two patients with anal cancer were also identified as having digital clubbing, and of these only one had further disease (lung cancer). One patient with bronchiectasis and digital clubbing also had chronic hepatitis; a second one concomitantly suffered from ischaemic heart disease. In the group of patients with cystic renal disease, digital clubbing was found in one subject, who also had pulmonary emphysema.

Over the last decades several objective methods have been developed to overcome the limitations of subjective clinical assessment of finger morphology. However, previous techniques had several drawbacks: finger casts took several hours to make, were perhaps faulty and had to be repeated, or were too time-consuming for routine use [11–13]. Shadowgraphs allowed serial measurements with minimal time requirement, but results could not be stored, precluding repeated analysis and long-term studies [14, 15]; shadowgrams were cheap and simple to obtain but necessitated outlining of finger silhouettes on paper [16]. Recently, digital cameras and computerised analysis have been introduced as a means of evaluating digital clubbing [17]. Moreover, the use of digital photography with computerised analysis has proved to be an easy, simple and relatively cheap method of quantification of hyponychial angles.
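To make the photographic quantification concrete, the sketch below computes an angle from digitised landmark coordinates on a lateral finger photograph. It is an illustrative stand-in, not the software used in the study; the landmark names and the sign convention are assumptions, chosen so that a straight profile reads 180° and a raised (convex) profile reads above 180°, in line with the upper limit of normality of roughly 192° discussed above.

```python
import math

def hyponychial_angle(p_dorsal, p_vertex, p_tip):
    """Angle in degrees at p_vertex between the rays to p_dorsal and p_tip.

    Points are (x, y) coordinates digitised from a photograph. With the
    convention below, a perfectly straight profile yields 180 degrees and
    convexity at the vertex yields values above 180 degrees; healthy
    subjects in this study stayed below about 192 degrees.
    """
    ax, ay = p_dorsal[0] - p_vertex[0], p_dorsal[1] - p_vertex[1]
    bx, by = p_tip[0] - p_vertex[0], p_tip[1] - p_vertex[1]
    # Signed angle between the two rays, folded into [0, 360).
    return math.degrees(math.atan2(by, bx) - math.atan2(ay, ax)) % 360.0
```

Because the result depends on image orientation, a real implementation would fix the camera and landmark conventions once and validate them against a protractor measurement, much as the authors validated their method against plaster-cast and shadowgraph results.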
Data thus obtained showed only minimal variation in inter- and intra-rater reliability as determined by repetitive analysis by three different investigators in the present study. Digital acquisition and storage of images offer long-term storage of original “data” and repeated analysis without loss of quality by different investigators at different time points. It can be assumed that determination of both nail-fold angles and phalangeal depth ratios is accurate [2], and direct comparison of these methods awaits further analysis. Since the measurements can be recorded within a few minutes without causing discomfort, this method allows serial measurements or investigation of very large populations, rendering it a useful tool for further longitudinal and cross-sectional studies of finger morphology and digital clubbing. The clinical value for modern medicine of a standardised assessment of digital clubbing remains to be determined.

We thank Silvia Märki for excellent technical support and Valentin Rousson, PhD, Department of Biostatistics, Institute for Social and Preventive Medicine, University of Zürich, Switzerland, for help with the statistical analysis.

Correspondence: Dr. R. Walter, Clinical Research Division, Fred Hutchinson Cancer Research Center, 1100 Fairview Ave. N., D2–373, Seattle, WA 98109–1024, USA. E-Mail: rwalter@fhcrc.org

References
1 Shneerson JM. Digital clubbing and hypertrophic osteoarthropathy: the underlying mechanisms. Br J Dis Chest 1981;75:113–31.
2 Myers KA, Farquhar DRE. Does this patient have clubbing? JAMA 2001;286:341–7.
3 Lovell RRH. Observations on the structure of clubbed fingers. Clin Sci 1950;9:299–321.
4 Lovibond JL. Diagnosis of clubbed fingers. Lancet 1938/I:363–4.
5 Pyke DA. Finger clubbing. Validity as a physical sign. Lancet 1954/II:352–4.
6 Rice RE, Rowlands PW. A quantitative method for the estimation of clubbing [thesis]. Tulane University Medical School.
New Orleans, La, 1961.
7 Carroll DG Jr. Curvature of the nails, clubbing of the fingers and hypertrophic pulmonary osteoarthropathy. Trans Am Clin Climatol Assoc 1972;83:198–208.
8 Spiteri MA, Cook DG, Clarke SW. Reliability of eliciting physical signs in examination of the chest. Lancet 1988/I:873–5.
9 Stavem P. Instrument for estimation of clubbing. Lancet 1959/II:7–8.
10 Cudkowicz P, Wraith DG. An evaluation of the clinical significance of clubbing in common lung disorders. Br J Tuberc Dis Chest 1957;51:14–31.
11 Regan GM, Tagg B, Thompson ML. Subjective assessment and objective measurement of finger clubbing. Lancet 1967/I:530–2.
12 Sly RM, Fuqua G, Matta EG, Waring WW. Objective assessment of minimal digital clubbing in asthmatic children. Ann Allergy 1972;30:575–8.
13 Sly RM, Ghazanshahi S, Buranakul B, Puapan P, Gupta S, Warren R, et al. Objective assessment for digital clubbing in caucasian, negro, and oriental subjects. Chest 1973;63:687–9.
14 Bentley D, Cline J. Estimation of clubbing by analysis of shadowgraph. Br Med J 1970;3:43.
15 Bentley D, Moore A, Shwachman H. Finger clubbing: a quantitative survey by analysis of the shadowgraph. Lancet 1976/II:164–7.
16 Sinniah D, Omar A. Quantification of digital clubbing by shadowgram technique. Arch Dis Child 1979;54:145–6.
17 Goyal S, Griffiths AD, Omarouayache S, Mohammedi R. An improved method of studying fingernail morphometry: application to the early detection of fingernail clubbing. J Am Acad Dermatol 1998;39:640–2.
18 Kitis G, Thompson H, Allan RN. Finger clubbing in inflammatory bowel disease: its prevalence and pathogenesis. Br Med J 1979;2:825–8.
19 Dutta TK, Das AK. Clubbing – a re-evaluation of its incidence and causes. J Assoc Physicians India 1996;44:175–7 [Erratum: J Assoc Physicians India 1996;44:586].
20 Pitts-Tucker TJ, Miller MG, Littlewood JM. Finger clubbing in cystic fibrosis. Arch Dis Child 1986;61:576–9.
Parental provision and children’s consumption of fruit and vegetables did not increase following the Food Dudes programme

Abstract

Purpose: This study builds on previous research, which suggests that the Food Dudes programme increases children’s fruit and vegetable consumption for school-provided meals, by assessing its effectiveness in increasing the provision and consumption of fruit and vegetables in home-provided meals.

Design/methodology/approach: Two cohorts of children participated from 6 schools in the West Midlands in the UK, one receiving the Food Dudes intervention and a matched control group who did not receive any intervention. Participants were children aged 4-7 years from 6 primary schools, 3 intervention (n=123) and 3 control schools (n=156).
Parental provision and consumption of fruit and vegetables was assessed pre-intervention, then 3 and 12 months post-intervention. Consumption was measured across five consecutive days in each school using digital photography.

Findings: No significant increases in parental provision or consumption were found at 3 or 12 months for children in the intervention schools; however, increases were evident for children in the control group.

Research limitations/implications: Further development of the Food Dudes programme could develop ways of working with parents and children to increase awareness of what constitutes a healthy lunch.

Originality/value: This is the first independent evaluation to assess the influence of the Food Dudes programme on parental provision and children’s consumption of lunchtime fruit and vegetables.

Keywords: children; nutrition; eating; parents; school; health promotion

Article Classification: Research Paper

Introduction

The English Department of Health (2000) recommend that adults and children over the age of two years should consume at least five portions of fruit and vegetables per day due to the associated long-term health benefits (Boeing et al., 2012; O’Flaherty et al., 2012). However, evidence suggests that many children in the UK do not consume adequate levels of fruit and vegetables (The Health and Social Care Information Centre, 2012), which has resulted in a number of initiatives to improve children’s eating habits, including the School Fruit and Vegetable Scheme (SFVS) and the introduction of the food and nutrient based standards (School Food Trust, 2008).
However, whilst the SFVS has increased children’s snack time consumption of fruit and vegetables (Department of Health, 2010) and the food and nutrient based standards have improved the nutritional content of school-supplied meals (Haroun et al., 2010), evidence suggests that only 44.1% of children choose to consume school meals, with the majority of children opting to bring in lunches from home (Nelson et al., 2012). It is known that the nutritional content of packed lunches is far lower than that of school-supplied meals (Rees et al., 2008), containing only half the recommended amount of fruit and vegetables (Rogers et al., 2007). As a result, schools are encouraged to develop lunch box policies that support a whole-school healthy eating environment (School Food Trust, 2011). However, such policies may be difficult to implement as they require engagement with both parents and children in addition to involvement of the school. Consequently, school-based interventions, such as the Food Dudes programme, which aim to improve children’s fruit and vegetable consumption, are also recommended. Evidence suggests that the Food Dudes programme can increase children’s lunchtime fruit and vegetable consumption (Horne et al., 2004; Horne et al., 2009; Lowe et al., 2004) and also produce long-lasting increases in the provision of fruit, vegetables and juices for children consuming home-supplied lunches (Horne et al., 2009). However, studies conducted in the UK have mainly focused upon school-supplied meals, neglecting those supplied from home. It is therefore important that the effectiveness of the Food Dudes programme in increasing fruit and vegetable consumption, including for those eating home-supplied lunches, is explored.
The aims of the present study were therefore twofold: firstly, to investigate the effectiveness of the Food Dudes programme in increasing the provision and consumption of fruit and vegetables for children consuming home-supplied meals; and secondly, to establish the extent to which the programme is able to influence long-term maintenance (12 months post intervention) of any behaviour changes observed.

Methods

This study formed part of a large scale independent evaluation of the Food Dudes programme (see Upton et al., 2012).

Design

Two cohorts of children participated in the study; one receiving the Food Dudes intervention and a matched control group who did not receive the intervention. The impact of the Food Dudes programme on provision and consumption of fruit and vegetables was assessed at baseline (prior to the intervention), 3 month follow-up (post intervention) and 12 month follow-up.

Participants

The programme was evaluated in 6 primary schools in the West Midlands, UK. Participants were 279 children aged 4-7 years, 123 in the intervention schools (70 boys and 53 girls) and 156 in the control schools (85 boys and 71 girls). Intervention schools were selected by the local health authority and control schools matched as far as possible in terms of school size, proportion of children entitled to free school meals and proportion of children from ethnic minorities.

Food Dudes Intervention

The Food Dudes programme consists of an initial 16 day intervention phase during which children watch a series of DVD episodes of the Food Dudes adventures. The Food Dudes are four super-heroes who gain special powers by eating their favourite fruit and vegetables, which help them maintain the life force in their quest to defeat General Junk and the Junk Punks. The Dudes encourage children to ‘keep the life force strong’ by eating fruit and vegetables every day. Class teachers also read letters to the children from the Food Dudes to reinforce the DVD messages.
During the intervention, children are given rewards for either tasting or consuming both the target fruit and vegetables. Children are also provided with a Food Dudes home pack containing information and tips for parents on healthy eating, to encourage children to eat fruit and vegetables at home as well as at school (Lowe et al., 2004). Following the intervention, a maintenance phase of up to one year is implemented during which fruit and vegetable consumption is encouraged, but with less intensity than in the intervention phase. Classroom wall charts are used to record consumption of fruit and vegetables, and children are rewarded with further Food Dudes prizes and certificates. This phase of the programme aims to enable the school to develop a self-sustaining approach to rewarding fruit and vegetable consumption and a culture of healthy eating (Lowe and Horne, 2009).

Procedure

The same procedure was employed in both the intervention and control schools at each study phase, and measures were recorded across five consecutive days in each school. Baseline data were recorded in June 2010, 3 month follow-up during October 2010 (due to school summer holidays) and 12 month follow-up in June 2011. In line with guidelines developed by the Health Promotion Agency (2009), a child’s portion of fruit or vegetables was defined as 40g. Control schools remained under baseline conditions during the 16 day intervention phase. At the start of the day, lunchboxes were labelled with the child’s ID number, name and class, and a digital photograph was taken of the lunchbox contents after morning break (see Figure 1). Following lunchtime, lunchboxes were collected and a photograph taken of any leftovers (see Figure 2). Lunchtime staff instructed children to leave any uneaten food or packaging in their lunchboxes at the end of lunchtime.
All rubbish bins were located away from tables to ensure that the children did not throw any food items away, also enabling close monitoring of food disposal by the research team. The number of portions of fruit and vegetables consumed was visually estimated on a five point Likert scale (0, ¼, ½, ¾, 1). Inter-rater reliability analysis was performed using correlation to determine consistency among raters. Agreement was calculated for 10% (n=28) of the study sample at baseline and was found to be excellent (r (26) = .94, p<0.01).

Figure 1. Example of a lunchbox, pre-consumption
Figure 2. Example of a lunchbox, post-consumption

Ethical approval

Ethical approval was gained from the University of Worcester research ethics committee. Consent was sought from head teachers acting in loco parentis, supplemented by parental “opt-out” consent whereby the child is included in the study unless their parents withdraw them (Severson and Biglan, 1989). A letter detailing the purpose of the study was sent to parents prior to the baseline phase and again at 3 and 12 month follow-up, with the option to notify the class teacher by the specified date if they did not wish for their child(ren) to participate.

Data analysis

Mean values were computed for each child to provide an indication of the average amount of fruit and vegetables provided and consumed, with the criterion that data were available for a minimum of 3 out of 5 days during each phase. Data were analysed using the Statistical Package for Social Science version 19.0 (IBM, USA) and differences in consumption tested using repeated measures ANOVA. Paired t tests determined the source of any variance, and effect sizes, using Cohen’s d, were calculated to establish practical significance. An α level of 0.05 was used in all statistical analyses unless otherwise stated.

Results

Figure 3 shows mean provision of fruit and vegetables in the intervention and control schools.
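The scoring and averaging rules described in the Procedure and Data analysis sections can be sketched in a few lines. This is a hedged illustration with hypothetical function names, not the study's actual analysis code: each day's visually estimated fractions are summed into a daily portion score, a child is retained only if at least 3 of 5 days have data, and the 40 g portion definition converts portions to grams.

```python
# One child's portion of fruit or vegetables, per the Health Promotion
# Agency (2009) guideline used in the study.
PORTION_GRAMS = 40

def mean_daily_portions(daily_scores, min_days=3):
    """Mean portions per day for one child across a five-day recording week.

    daily_scores: list of up to 5 daily totals, each a sum of visually
    estimated item fractions (0, 0.25, 0.5, 0.75, 1); None marks a day
    with no usable photograph. Returns None unless at least min_days
    days have data, mirroring the 3-of-5-days inclusion criterion.
    """
    scored = [day for day in daily_scores if day is not None]
    if len(scored) < min_days:
        return None
    return sum(scored) / len(scored)

def portions_to_grams(portions):
    """Convert a portion count to grams under the 40 g portion definition."""
    return portions * PORTION_GRAMS
```

For example, a child photographed on three of five days with daily scores of 1, 0.5 and 0.75 portions would contribute a mean of 0.75 portions (30 g) to the school-level averages.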
Analysis of fruit and vegetable provision indicated a significant main effect of study phase (F(2, 276) = 12.10, p<0.01, ηp² = 0.08) but not school setting (F(1, 277) = 3.34, p>0.05, ηp² = 0.01). The interaction between time and school setting was also not significant (F(2, 276) = 0.74, p>0.05, ηp² = 0.005). Post hoc t tests (Bonferroni adjustment, 0.05/5 = 0.025) indicated no significant difference between the intervention and control schools in parental provision of fruit and vegetables at baseline (t=-0.95, p=0.34, d=0.11). Within group comparisons suggested that in the intervention schools, parental fruit and vegetable provision was not statistically higher at 3 month follow-up compared to baseline (t=2.22, p=0.03, d=0.28, CI=0.16-0.38) or between baseline and 12 month follow-up (t=1.08, p=0.28, d=-0.14, CI=-0.24 to -0.04). However, in the control schools parental provision of fruit and vegetables was statistically higher between baseline and 3 month follow-up (t=-4.01, p<0.001, d=0.46, CI=0.36-0.54) but not between baseline and 12 month follow-up (t=-0.56, p=0.58, d=0.07, CI=-0.03-0.16).

Figure 3. Mean provision (in portions) of fruit and vegetables (N=279). [Bar chart of mean portions provided at baseline, 3 month and 12 month follow-up for the intervention and control schools; data labels: 53g, 58g, 62g, 71g, 48g and 60g.]
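The within-group follow-up comparisons reported here can be illustrated with a short sketch. The helper below is a stand-in for the SPSS procedures actually used, not the authors' code: it computes a paired t statistic, a Cohen's d defined as mean change divided by the standard deviation of the change scores (one common convention for repeated measures), and the standard Bonferroni-adjusted threshold of α divided by the number of comparisons.

```python
import math

def paired_t_and_d(before, after):
    """Paired t statistic and Cohen's d for two repeated measurements.

    d is defined here as the mean change over the SD of change scores;
    other d conventions exist, so reported values may differ slightly.
    """
    n = len(before)
    diffs = [a - b for a, b in zip(after, before)]
    mean_diff = sum(diffs) / n
    sd_diff = math.sqrt(sum((x - mean_diff) ** 2 for x in diffs) / (n - 1))
    t = mean_diff / (sd_diff / math.sqrt(n))
    return t, mean_diff / sd_diff

def bonferroni(alpha, n_comparisons):
    """Standard Bonferroni-adjusted significance threshold: alpha / m."""
    return alpha / n_comparisons
```

A comparison would then be declared significant only when its p-value falls below the adjusted threshold, which keeps the family-wise error rate at or below the nominal α across all post hoc tests.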
Post hoc t tests (bonferroni adjustment, 0.05/5 = 0.025) suggested no significant difference between the intervention and control schools in fruit and vegetable consumption at baseline (t= -0.12, p=0.90, d=0.02). Within group comparisons indicated that, in the intervention schools, consumption of fruit and vegetables was not significantly higher at either 3 or 12 month follow-up relative to baseline (t=-.60, p=0.55, d=0.08, CI= 0.01-0.16 and t= 1.05, p=0.29, d=0.16, CI= -0.24 - -0.09 respectively). In the control schools, mean fruit and vegetable consumption was statistically higher at 3 month follow-up than at baseline (t= -3.08, p=0.002, d=0.35, CI= 0.27-0.42) but not at 12 month follow-up (t=-0.98, p=0.33, d=0.09, CI= 0.02-0.16). (28g) (29g) (30g) (39g) (24g) (31g) 0.0 0.1 0.2 0.3 0.4 0.5 0.6 Intervention Control M e a n p o rt io n s c o n s u m e d Baseline 3 month follow-up 12 month follow-up 9 Discussion This study indicated that the Food Dudes intervention did not lead to short or long-term increases in parental provision of lunchbox fruit and vegetables. Likewise, the intervention did not result in significant increases in consumption of fruit and vegetables at either 3 or 12 month follow-up. In contrast, significant increases in fruit and vegetable provision and consumption were evident in the control schools, and with a medium effect size, suggesting that these increases were of moderate practical significance. However, the non-significant interaction effects for both parental provision and consumption indicate that changes in the provision and consumption of fruit and vegetables over time did not reflect a programme effect and therefore must be attributed to another influence. Nevertheless, no statistically significant baseline differences were evident between the intervention and control schools in terms of provision or consumption of fruit and vegetables. 
Schools were matched as far as possible in terms of: school size, proportion of children entitled to free school meals and proportion of children from ethnic minorities to control for differences in participant demographics. Furthermore, to our knowledge no form of healthy eating intervention was implemented in the control schools during the study which may have contributed to the increases in parental provision or consumption however we cannot be certain that this was not the case. If any form of health promotion programme was implemented during the course of the study, this may have impacted on study findings. Indeed, many, if not all schools are committed to developing whole school approaches to enhance the health and educational outcomes for children and young people and may choose to implement their own health promotion programme independent of interventions such as the Food Dudes programme. Previous research (Horne et al., 2004; Lowe et al., 2004) has largely focused on school- provided meals and not those supplied from home. Whilst one study (Horne et al., 2009) did report increases in parental provision and consumption of fruit and vegetables following the intervention, this study was conducted in Ireland where, unlike the UK, there is no school meal provision and children instead bring in their lunch from home (Horne et al., 2009). Therefore it is possible that there was greater parental involvement or information provided to parents in the programme implemented in Ireland than in the UK. The inability of the 10 present study to replicate these findings suggests that further evidence is required to investigate the potential of the intervention to change fruit and vegetable provision and consumption for home-provided meals in a UK context. In contrast to school provided meals which are required to adhere to food and nutritional based standards (School Food Trust, 2008), there are no such guidelines for meals provided from the home. 
Consequently, the ability of the intervention to modify parental provision and consumption of lunchtime fruit and vegetables may be more difficult to establish as it requires behaviour change from both children and parents. Further development of the Food Dudes programme in the UK could develop ways of working with parents and children to increase awareness of what constitutes a healthy lunch (Rogers et al., 2007). Indeed, we are aware that the programme is currently being developed with this in mind through the implementation of the Food Dudes Forever phase which is designed to strengthen the changes in dietary behaviours following the initial phase of the programme (Lowe, 2013). Strengths of the study A particular strength of this study is the use of digital photography for measuring dietary intake. Evaluations of such interventions should be based upon robust measures of dietary intake (Klepp et al., 2005) however many evaluations of interventions designed to increase children’s fruit and vegetable consumption rely on self-report measures, which are clearly limited by the ability of respondents (in this case children) to accurately recall and record consumption. In contrast, the present study used digital photography, which offers a pragmatic and reliable tool for assessing consumption in the school setting (Swanson, 2008). This method is particularly effective for studies that require rapid acquisition of data and minimal disruption to the eating environment such as the study reported here (Williamson et al., 2003). Conclusion In conclusion, the results offer no support for the effectiveness of the Food Dudes intervention in increasing parental provision or consumption of lunchtime fruit and vegetables for children consuming home-provided meals. Clearly further development work is required to ensure both the short and long term effectiveness of interventions promoting 11 fruit and vegetable consumption in children such as the Food Dudes programme. 
The Food Dudes Forever phase (Lowe, 2013) of the programme, which is currently underway, is one approach that may enhance the short- and long-term effects of the programme on children's eating habits.

References

Boeing, H., Bechthold, A., Bub, A., Elinger, S., Haller, D., Kroke, A., Leschik-Bonnet, E., Müller, M.J., Oberritter, H., Schulze, M., Stehle, P. and Watzl, B. (2012), “Critical review: vegetables and fruit in the prevention of chronic diseases”, European Journal of Nutrition, Vol. 51 No. 6, pp. 637-663.

Department of Health (2000), The National School Fruit Scheme, Department of Health, London.

Haroun, D., Harper, C., Wood, L. and Nelson, M. (2010), “The impact of the food-based and nutrient-based standards on lunchtime food and drink provision and consumption in primary school in England”, Public Health Nutrition, Vol. 14 No. 2, pp. 209-218.

Health Promotion Agency (2009), “Nutritional Standards for School Lunches: A guide for implementation”, available at: http://www.healthpromotionagency.org.uk/Resources/nutrition/pdfs/food_in_school_09/Nutritional_Standard-1EEBDB.pdf (accessed 30 August 2013).

Horne, P., Tapper, K., Lowe, C.F., Hardman, C.A., Jackson, M.C. and Woolner, J. (2004), “Increasing children's fruit and vegetable consumption: a peer-modelling and rewards-based intervention”, European Journal of Clinical Nutrition, Vol. 58 No. 12, pp. 1649-1660.

Horne, P.J., Hardman, C.A., Lowe, C.F., Tapper, K., Le Noury, J., Madden, P., Patel, P. and Doody, M. (2009), “Increasing parental provision and children's consumption of lunchbox fruit and vegetables in Ireland: the Food Dudes intervention”, European Journal of Clinical Nutrition, Vol. 63 No. 5, pp. 613-618.

Klepp, K., Pérez-Rodrigo, C., De Bourdeaudhuij, I., Due, P.P., Elmadfa, I., Haraldsdóttir, J., König, J., Sjostrom, M., Thórsdóttir, I., Vaz de Almeida, M.D., Yngve, A. and Brug, J.
(2005), “Promoting Fruit and Vegetable Consumption among European Schoolchildren: Rationale, Conceptualization and Design of the Pro Children Project”, Annals of Nutrition and Metabolism, Vol. 49 No. 4, pp. 212-220.

Lowe, C.F. (2013), “Children's fruit and vegetable intake, programme evaluation”, Public Health Nutrition, epub ahead of print 2 April 2013, DOI: http://dx.doi.org/10.1017/S1368980013000694.

Lowe, F. and Horne, P. (2009), “Food Dudes: Increasing children's fruit and vegetable consumption”, Cases in Public Health Communication & Marketing, Vol. 3, pp. 161-185.

Lowe, C.F., Horne, P.J., Tapper, K.K., Bowdery, M. and Egerton, C. (2004), “Effects of a peer modelling and rewards-based intervention to increase fruit and vegetable consumption in children”, European Journal of Clinical Nutrition, Vol. 58 No. 3, pp. 510-522.

Nelson, M., Nicholas, J., Riley, K. and Wood, L. (2012), “Seventh annual survey of take up of school lunches in England”, available at: http://www.childrensfoodtrust.org.uk/assets/research-reports/seventh_annual_survey2011-2012_full_report.pdf (accessed 29 August 2013).

O'Flaherty, M., Flores-Mateo, G., Nnoaham, K., Lloyd-Williams, F. and Capewell, S. (2012), “Potential cardiovascular mortality reductions with stricter food policies in the United Kingdom of Great Britain and Northern Ireland”, Bulletin of the World Health Organisation, Vol. 90, pp. 522-531.

Rees, G., Richards, C. and Gregory, J. (2008), “Food and nutrient intakes of primary school children: a comparison of school meals and packed lunches”, Journal of Human Nutrition and Dietetics, Vol. 21 No. 5, pp. 420-427.

Rogers, I.S., Ness, A.R., Hebditch, K.K., Jones, L.R. and Emmett, P.M.
(2007), “Quality of food eaten in English primary schools: school dinners vs packed lunches”, European Journal of Clinical Nutrition, Vol. 61 No. 7, pp. 856-864.

School Food Trust (2008), “A Guide to Introducing the Government's Food-Based and Nutrient-Based Standards for School Lunches”, pp. 2.1-2.4, available at: http://www.healthyeatinginschools.co.uk/pdfs/sft_nutrition_guide.pdf (accessed 12 January 2013).

School Food Trust (2011), “Packed Lunch Policy Toolkit: Step by Step Guide to developing a healthy packed lunch policy”, available at: http://www.childrensfoodtrust.org.uk/schools/packed-lunches/packed-lunch-policy (accessed 30 August 2013).

Severson, H. and Biglan, A. (1989), “Rationale for the use of passive consent in smoking prevention research: politics, policy and pragmatics”, Preventive Medicine, Vol. 18 No. 2, pp. 267-279.

Swanson, M. (2008), “Digital photography as a tool to measure school cafeteria consumption”, Journal of School Health, Vol. 78 No. 8, pp. 432-437.

Teeman, D., Lynch, S., White, K., Scott, E., Waldman, J., Benton, T., Shamsan, Y., Stoddart, S., Ransley, J., Cade, J. and Thomas, J. (2010), The Third Evaluation of the School Fruit and Vegetable Scheme, Department of Health, London.

The Health and Social Care Information Centre (2012), “Statistics on obesity, physical activity and diet: England, 2012”, available at: https://catalogue.ic.nhs.uk/publications/public-health/obesity/obes-phys-acti-diet-eng-2012/obes-phys-acti-diet-eng-2012-rep.pdf (accessed 12 January 2013).

Upton, D., Upton, P. and Taylor, C. (2012), “Increasing children's lunchtime consumption of fruit and vegetables: an evaluation of the Food Dudes programme”, Public Health Nutrition, Vol. 16 No. 6, pp. 1066-1072.
Williamson, D.A., Allen, H., Martin, P.D., Alfonso, A.J., Gerald, B. and Hunt, A. (2003), “Comparison of digital photography to weighed and visual estimation of portion sizes”, Journal of the American Dietetic Association, Vol. 103 No. 9, pp. 1139-1145.

work_k2dcbkbmg5e3pmc3tgal6conrm ----

Hazards of pesticides to bees - 14th international symposium of the ICP-PR Bee protection group, October 23 – 25 2019, Bern (Switzerland)
Julius-Kühn-Archiv, 465, 2020
Abstracts: Poster

Section 3 – Laboratory/Semi-field/Field

3.1.P Do pollen foragers represent a more homogenous test unit for the RFID homing test, when using group-feeding?

Michael Eyer, Daniela Grossar, Lukas Jeker
Agroscope, Swiss Bee Research Center, 3003 Bern, Switzerland
DOI 10.5073/jka.2020.465.045

Abstract

The RFID homing ring test aims at developing a method which can assess sublethal effects of xenobiotic substances on the navigation of foraging bees. Bee biology and the corresponding behavioral processes might strongly influence the output of this test method. Accordingly, previous experiments demonstrated that the homing ability of nectar foragers differed between group- and single-bee feeding, based on uneven crop content of returning bees and/or uneven food distribution via trophallaxis. We therefore evaluated whether pollen foragers represent a more homogenous test unit when test item solutions are administered to groups of bees and are thus distributed between individuals via trophallaxis.
For this, we tested thiamethoxam and thiacloprid (both neonicotinoid insecticides) at field-realistic doses by orally exposing tagged pollen foragers, either in groups of ten bees or in single cages. Our results demonstrate that the homing ability of thiamethoxam-exposed pollen foragers was significantly different from the non-exposed control in the single-bee feeding approach, but not in the ten-bee feeding approach (using conservative Bonferroni correction in nominal pairwise matrices). Similar tests with thiacloprid did not reveal such clear differences between the two feeding approaches; the effect of group size on the homing ability of pollen foragers therefore appears to be compound- and dose-specific. Nevertheless, our results suggest that single-bee feeding yields biologically more robust results for homing ability than group feeding, which should be considered in the development of this new test guideline by ideally performing such tests with single-bee feeding. Moreover, pollen foragers rather than nectar foragers should preferentially be chosen, since they consumed the feeding solution more quickly and more reliably than in previous trials with nectar foragers.

3.2.P Digital Farming & evaluation of side effects on honey bees – first experiences within the Digital Beehive project

Catherine Borrek, Simon Hoff, Ulrich Krieg, Volkmar Krieg, Philipp Senger, Marc Schwering, Silke Andree-Labsch
Bayer AG Division Crop Science, Versuchsgut Höfchen, 51399 Burscheid, Germany
E-Mail: catherine.borrek@bayer.com
DOI 10.5073/jka.2020.465.046

Abstract

Within the framework of the bee pollinator risk assessment of plant protection products, semi-field studies (in net houses) with honey bees (Apis mellifera) are conducted under worst-case exposure conditions to evaluate potential side effects at the colony level.
Therefore, several parameters concerning the bees' health status, activity and behavior, at the level of both individual bees and the entire colony, have to be assessed. These in situ observations and evaluations are necessarily conducted by skilled investigators who are experienced in both bee management and plant protection practices. Furthermore, digital sensor technologies around the beehive can provide additional valuable information to better understand the assessed parameters. A clear advantage of such a digital monitoring system is continuous data acquisition, whereas the required manual assessments represent only short snapshots in time. Especially within the first hours after the application, when observations and assessments are limited for reasons of time and health protection, sensor technology can be utilized to observe the bees' reaction to a test compound, thereby allowing the detection of a potential repellent effect or similar. Additionally, digital sensors can be calibrated to ensure the accuracy of the measurements. In several semi-field trials according to EPPO guideline No. 170, we compared two different digital monitoring systems (ApiSCAN® and Arnia™ remote hive monitoring) and related the sensor-derived data to the usual manual assessments. Based on our findings, we want to highlight benefits and limitations of a digital beehive in the context of assessing potential side effects of plant protection products on pollinators.

3.3.P Bee colony assessments with the Liebefeld method: How do individual beekeepers influence results and are photo assessments an option to reduce variability?
Holger Bargen, Aline Fauser, Heike Gätschenberger, Gundula Gonsior, Silvio Knäbe
Eurofins Agroscience Services Ecotox GmbH, Eutinger Strasse 24, 75223 Niefern-Öschelbronn
E-Mail: Holger Bargen@eurofins.com
DOI 10.5073/jka.2020.465.047

Abstract

Colony strength, food storage and brood development are a fundamental part of each honeybee field study. Colony assessments are used to compare and track these measures for each beehive over time. At present, most colony assessments are made by experienced beekeepers according to the Liebefeld method. This method is based on an estimation of the areas covered by honeybees, food and brood stages on each side of a comb. Areas are counted using a grid separating the comb side into 8 sections and are recorded with an accuracy of 0.5 sections. An assessment of a hive takes up to 20 min, and even with two field locations it is necessary to split assessments between beekeepers, so it is important to make estimates as comparable as possible. For this purpose, beekeepers practice the assessments on pre-determined photographs to "calibrate themselves". The advantage of the Liebefeld assessment is that the condition of the bee hive is estimated with minimum disturbance of the bees. Digital photography is under discussion as a way to gain data with high precision and accuracy, but it has one major disadvantage: to be able to see food and brood stages in photographs, bees have to be removed from the combs. This, however, results in a disturbance of the colony – especially if the assessments take place at short intervals of 7 ± 1 days. An experiment was performed to evaluate the variation between individual beekeepers and to compare the results with data generated from photographs. For the experiment, five colonies were each assessed by four beekeepers independently according to the Liebefeld method.
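The grid bookkeeping described above (each comb side divided into 8 grid sections, recorded to the nearest 0.5 section) can be sketched in a few lines. This is only an illustrative reading of the abstract: the function names and the calibration-factor argument are our own assumptions, not part of the published method.

```python
def coverage_to_sections(coverage, sections_per_side=8, resolution=0.5):
    """Convert a coverage fraction (0..1) of one comb side into Liebefeld
    grid sections, rounded to the 0.5-section recording accuracy."""
    raw = coverage * sections_per_side
    return round(raw / resolution) * resolution

def sections_to_count(sections, items_per_section):
    """Scale recorded sections to an absolute estimate (e.g. number of bees)
    using an assumed calibration factor (items per grid section)."""
    return sections * items_per_section
```

For example, a comb side estimated at 44% coverage would be recorded as 3.5 sections under this scheme.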
Each comb side of the five colonies was photographed with and without the honeybees sitting on it for precise analysis at the computer of the number of bees, nectar cells, pollen cells, eggs, open brood and capped brood. The numbers of bees and of cells with the different contents were generated by an area-based assessment in ImageJ as well as by detailed counting with the help of the HiveAnalyzer® software. Data from the beekeepers' estimations were then compared with the assessments based on digital photography. With the results of the experiment, we tried to answer several questions; in particular, we wanted to determine the level of variation between the beekeepers for the estimated life stages and food stores.

Keywords: honeybee; colony assessment; Liebefeld method; digital photography; HiveAnalyzer®

Introduction

In 1983, Gerig introduced a method to assess the strength, brood and food of a honeybee colony using a pattern of 8 square decimeters (with ½ square being the smallest recorded unit) to assess the content of cells and the number of honeybees on a single comb side. Our intention was to compare this method in terms of accuracy and precision against methods using weighing and photographs, as digital photography offers new technical options that were not available when Imdorf et al. (1987) did their study on the reliability of the Liebefeld method for honeybee colony assessment. Key points such as the health of the colonies and the assessment workload need to be taken into consideration when comparing the methods.

work_k3mcejuiojgzti7xutsx5s4qye ----

In vitro evaluation of prosthodontic impression on natural dentition: a comparison between traditional and digital techniques.
DOI:10.11138/orl/2016.9.1S.021 Corpus ID: 21999174

@article{Malaguti2016InVE,
  title={In vitro evaluation of prosthodontic impression on natural dentition: a comparison between traditional and digital techniques.},
  author={G. Malaguti and R. Rossi and B. Marziali and A. Esposito and G. Bruno and C. Dariol and A. di Fiore},
  journal={ORAL & implantology},
  year={2016},
  volume={9 Suppl 1/2016 to N 4/2016},
  pages={21-27}
}

G. Malaguti, R. Rossi, B. Marziali, A. Esposito, G. Bruno, C. Dariol and A. di Fiore. Published 2016, ORAL & implantology.

OBJECTIVES The aim of this in vitro study is to evaluate the marginal and internal fit of zirconia core crowns manufactured following different digital and traditional workflows.

METHODS A 6° taper shoulder prepared abutment tooth was used to produce 20 zirconia core crowns using four different scanning techniques: scanned directly with the extraoral lab scanner, scanned with intraoral scanner, dental impressions using individual dental tray and polyether, dental casts from a polyether…
work_k4ydrftih5ew5eprtb7puvpxle ----

Anatomic Distribution of the Morphologic Variation of the Upper Lip Frenulum Among Healthy Newborns

Shagnik Ray, BA; William Christopher Golden, MD; Jonathan Walsh, MD

IMPORTANCE The maxillary labial frenulum and its potential contribution to breastfeeding difficulty may substantially affect public health. However, objective studies of the frenulum are limited.

OBJECTIVE To measure the variations in length, thickness, and attachments of the maxillary labial frenulum in healthy newborns and to identify which anatomic measurements could be used in further research investigating the maxillary labial frenulum.
DESIGN, SETTING, AND PARTICIPANTS This prospective cross-sectional study conducted measurements on images of maxillary labial frenula captured by digital photography from 150 healthy newborns admitted to the newborn nursery at a tertiary care children's hospital in Maryland between September 1, 2017, and April 1, 2018.

MAIN OUTCOMES AND MEASURES The primary outcome was the measurement of numerous frenulum morphologic components.

RESULTS Of 150 newborns enrolled, 77 were female, the mean (SD) gestational age was 38.60 (1.72) weeks, and the mean (SD) birth weight was 3180 (570) g. The means and SDs of the morphologic components of the frenulum with the broadest distributions, which were most helpful in differentiating degrees of lip tethering, included the following: alveolar edge to frenulum gingival attachment, 1.53 (0.85) mm; frenulum length on stretch, 5.19 (1.68) mm; frenulum gingival attachment thickness, 0.84 (0.36) mm; frenulum labial attachment thickness, 2.83 (1.33) mm; and the percentage of free lip to total lip length, 87.38% (7.67%). Gingival attachment mean (SD) thickness differed between late-preterm (0.69 [0.24] mm) and term (0.88 [0.37] mm) infants (Cohen d, −0.52; 95% CI, −0.94 to −0.10).

CONCLUSIONS AND RELEVANCE To our knowledge, this cross-sectional study was the first to objectively measure the numerous morphologic components of the upper lip anatomy in healthy newborns. Variations in maxillary labial frenulum morphology were identified, and some combination of the stated measurements may be used to create a more robust classification system to advance quality research in the association of lip-tie with breastfeeding difficulty.

JAMA Otolaryngol Head Neck Surg. 2019;145(10):931-938. doi:10.1001/jamaoto.2019.2302
Published online August 22, 2019. Corrected on October 17, 2019.
Supplemental content Author Affiliations: Department of Otolaryngology–Head and Neck Surgery, Johns Hopkins University School of Medicine, Baltimore, Maryland (Ray, Walsh); Division of Neonatology, Department of Pediatrics, Johns Hopkins University School of Medicine, Baltimore, Maryland (Golden). Corresponding Author: Jonathan Walsh, MD, Department of Otolaryngology–Head and Neck Surgery, Johns Hopkins School of Medicine, Johns Hopkins Hospital, 601 N Caroline St, Sixth Floor, Baltimore, MD 21287 (jwalsh31@jhmi.edu). Research JAMA Otolaryngology–Head & Neck Surgery | Original Investigation (Reprinted) 931 © 2019 American Medical Association. All rights reserved. Downloaded From: https://jamanetwork.com/ by a Carnegie Mellon University User on 04/05/2021 https://jama.jamanetwork.com/article.aspx?doi=10.1001/jamaoto.2019.2302&utm_campaign=articlePDF%26utm_medium=articlePDFlink%26utm_source=articlePDF%26utm_content=jamaoto.2019.2302 https://jama.jamanetwork.com/article.aspx?doi=10.1001/jamaoto.2019.2302&utm_campaign=articlePDF%26utm_medium=articlePDFlink%26utm_source=articlePDF%26utm_content=jamaoto.2019.2302 mailto:jwalsh31@jhmi.edu A s more mothers are encouraged to breastfeed theirinfants, tethering and anatomic positioning of thelabial frenulum and lingual frenulum have surfaced as issues at the center of the breastfeeding difficulty dis- course. Upper lip frenulum tethering (termed lip-tie) has been implicated in addition to ankyloglossia (termed tongue- tie) in various childhood conditions, including breastfeeding difficulty. 
Tethering of the upper lip by the maxillary labial frenulum has been postulated to cause improper latching of the newborn to the mother’s breast, preventing seal forma- tion around the maternal areolar tissue and hypothetically c a u s i ng a c o n c o m it a nt i n c r e a s e i n i n f a nt r e fl u x a n d aerophagia.1,2 With many infants being diagnosed as having upper lip-tie, the role of the labial frenulum in impaired nursing necessitates further study. Originating as a posteruptive remnant of embryonic tec- tolabial bands, the maxillary labial frenulum is a small, some- what triangular fold of nonmuscular connective tissue extend- ing from the midline maxillary gingiva into the vestibule and central upper lip.3,4 At present, the typical anatomic varia- tion of the maxillary labial frenulum has been described in 2 limited studies. In a study of 1021 Swedish newborns, Flinck et al5 noted 76.7% of maxillary labial frenula inserted into the crest of the alveolar ridge, 6.7% inserted into the buccal mu- cosa of the alveolar ridge, and 16.7% inserted into the palatal mucosa of the alveolar ridge. More recently, in a study of 100 newborns, Santa Maria et al6 found that 83% of newborn max- illary labial frenula attach at the gingival margin, whereas 6% attach near the mucogingival junction and 11% attach along the inferior margin at the alveolar papilla and beyond to the pos- terior surface. However, both of those studies described frenu- lum insertion points into the gingiva with nonspecific ana- tomic locations and without measurements of frenulum insertions relative to gingival and alveolar edge landmarks. Furthermore, neither study assessed frenulum length and thickness, which equally may play a role in frenulum tether- ing. Finally, those studies did not assess the potential asso- ciation of the maxillary labial frenulum with ankyloglossia. 
Some health care professionals, in an attempt to improve breastfeeding in neonates, have proposed and performed sur- gical modification, release, or removal of the maxillary labial frenulum in procedures known as labial frenotomy or frenec- tomy. In May of 2015, the Agency for Healthcare Research and Quality surveyed the literature regarding the labial fre- notomy, finding the strength of evidence generally low to in- sufficient based on small, short-term studies with insuffi- cient randomized controlled trials.7 Ghaheri et al8 recently found significant improvement in breastfeeding outcomes in a prospective cohort study after combined tongue-tie and lip- tie release; however, that study lacked a control cohort, and only 1 participant had isolated lip-tie release as opposed to com- bined lip-tie and tongue-tie release. Limited conclusions can be drawn on the effect of lip-tie release based on studies with such confounding data. Ultimately, studies on possible nega- tive effects of the maxillary labial frenulum and surgical ame- lioration of these effects require an objective, consistent, and thorough classification system to enable proper clinical deci- sion-making and consistency among future studies. Currently, 2 classification systems for the maxillary labial frenulum exist. The more commonly used Kotlow clas- sification system denotes 4 frenulum types and focuses on the insertion point of the gingival attachment of the maxil- lary labial frenulum.9 However, this system does not corre- late the epidemiologic variation of frenula with poor breast- feeding outcomes. Santa Maria et al6 subsequently proposed a classification system with 3 frenulum types, again with a focus on the gingival insertion point of the maxillary labial frenulum and attempting to simplify the Kotlow classifica- tion system and improve interrater reliability. 
However, this tool also did not analyze other factors beyond the insertion point, and the absence of objective measurements resulted in an interrater reliability of only 38%.6 In the present article, we report the anatomic distribu- tion of different morphologic components of the maxillary la- bial frenulum that may be associated with lip mobility. We used objective measurements to help assess the components of the maxillary labial frenulum that could potentially be used in a classification system and in further research. We also ana- lyzed these measurements by race/ethnicity, sex, gestational age, birth weight, presence or absence of ankyloglossia, and LATCH score (a commonly used measure of breastfeeding success).10 Methods We performed a prospective cross-sectional study of 150 healthy newborns admitted to the Johns Hopkins Hospital Newborn Nursery (Baltimore, Maryland) between Septem- ber 1, 2017, and April 1, 2018. The Johns Hopkins institutional review board approved this study. Verbal informed consent was obtained from the parents or guardians. Verbal consent was used because of the minimal risk of the study and to reduce the burden on mothers in the perinatal period. No one re- ceived compensation or was offered any incentive for partici- pating in this study. Key Points Question What is the anatomic distribution of the maxillary labial frenulum among newborns, and which anatomic measurements could be useful to create a classification system to advance research investigating this structure? Findings This cross-sectional study of 150 healthy newborns found that the maxillary labial frenulum had numerous morphologic components with varying distributions. 
Several components having means and SDs with broad distributions were helpful in differentiating degrees of lip tethering, including alveolar edge to frenulum gingival attachment; frenulum length on stretch; frenulum gingival attachment thickness; frenulum labial attachment thickness; and the percentage of free lip to total lip length.

Meaning: This new understanding of the anatomy of the maxillary labial frenulum may be useful in future studies investigating the maxillary labial frenulum and neonatal breastfeeding difficulty.

Research Original Investigation: Anatomic Distribution of the Morphologic Variation of the Upper Lip Frenulum Among Healthy Newborns. JAMA Otolaryngology–Head & Neck Surgery, October 2019, Volume 145, Number 10. © 2019 American Medical Association. All rights reserved.

Newborn nursery pediatric nurse practitioners assessed all infants twice weekly for study enrollment, including late-preterm newborns (born at 34 0/7 to 36 6/7 weeks' gestation). Infants with possible or probable craniofacial anomalies or those admitted to the neonatal intensive care unit were excluded. A member of our team (S.R.) imaged the frenulum of each infant using a standardized protocol, beginning with elevation of each infant's upper lip and retraction to the level of the alveolar sulcus. A standardized ruler was then placed along the alveolus, and high-definition digital photographs of the participants' upper lips and gums were obtained with a Canon PowerShot A4000 IS HD camera (Canon Inc). Numerous images were obtained, with the highest-quality image selected for measurements.
LATCH scores, newborn birth weight, gestational age, presence of ankyloglossia, and demographic data were collected from each infant's medical record. ImageJ software, version 1.51j8 (National Institutes of Health) was used to calibrate the measurement scale for each photograph and to adjust for slight variations in focal distance. Digital measurements were obtained for the following components: distance from alveolar edge to frenulum attachment, length of frenulum, distance from frenulum lip attachment to vermilion border, frenulum gingival attachment thickness, frenulum labial attachment thickness, ratio of free gingiva to total gingival length, and ratio of free lip to total lip length. In addition, the frenula were scored based on the Kotlow and Stanford (Santa Maria et al6) classification systems.

All statistical analyses were performed with Stata/SE, version 15.1 (StataCorp LLC) for Windows. The Shapiro-Wilk test was used to assess whether measurements were normally distributed. Cohen d, 95% confidence intervals (CIs), and η2 were used to analyze differences between measured frenulum components and participant characteristics. A Cohen d value of 0.2 is considered a small effect size, 0.5 a medium effect size, and 0.8 a large effect size.11 Eta-squared reflects the proportion of variation in the dependent variable that is accounted for by the groups defined by the independent variable.12 A value of 0.01 is considered small, 0.06 is medium, and 0.14 or above is large. In addition, η2 was used to determine the association between LATCH scores and measurements for the combined and ankyloglossia subgroups. We determined means, percentiles, and SDs for the measurements. Cohen d was used to determine the difference between LATCH scores for patients with or without ankyloglossia. In addition, potential differences in LATCH scores between different Kotlow and Stanford scale scores were determined with η2.
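The two effect size statistics described above, Cohen d with a pooled SD and η² as the between-group over total sum of squares, can be sketched in a few lines. This is an illustrative sketch with made-up data, not the study's analysis code (the authors used Stata); the function names and example values are ours.

```python
import numpy as np

def cohens_d(x, y):
    """Standardized mean difference between two groups, using the pooled SD."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    nx, ny = len(x), len(y)
    pooled_sd = np.sqrt(((nx - 1) * x.var(ddof=1) + (ny - 1) * y.var(ddof=1))
                        / (nx + ny - 2))
    return (x.mean() - y.mean()) / pooled_sd

def eta_squared(groups):
    """Eta-squared: between-group sum of squares over total sum of squares."""
    all_values = np.concatenate([np.asarray(g, dtype=float) for g in groups])
    grand_mean = all_values.mean()
    ss_between = sum(len(g) * (np.mean(g) - grand_mean) ** 2 for g in groups)
    ss_total = ((all_values - grand_mean) ** 2).sum()
    return ss_between / ss_total

# Made-up thickness values (mm) for two groups -- not the study's data.
term = [0.9, 0.8, 1.1, 0.7, 0.95]
late_preterm = [0.6, 0.7, 0.75, 0.65]
d = cohens_d(term, late_preterm)        # interpret: 0.2 small, 0.5 medium, 0.8 large
e2 = eta_squared([term, late_preterm])  # interpret: 0.01 small, 0.06 medium, >=0.14 large
```

Both statistics are interpreted against the benchmarks cited in the text (Cohen11 and Ellis12).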
A validated breastfeeding assessment scale, LATCH has 5 items with a total score range of 0 to 10. Scores of 6 or higher indicate successful breastfeeding.13

Results

We enrolled 150 newborns in the study based on a population estimate of 5% incidence of lip-tie. Of the study patients, 77 participants (51.33%) were female. The race/ethnicity of the patients included 7 Asian (4.67%), 65 black (43.33%), 76 white (50.67%), and 2 (1.33%) other newborns. The mean (SD) gestational age of the study participants was 38.60 (1.72) weeks (range, 34.0-41.6 weeks' gestation). The mean (SD) birth weight of the participants was 3180 (570) g (range, 1850-4480 g). Thirty-one newborns (20.67%) were diagnosed by newborn nursery clinician assessment as having ankyloglossia (based on a modified Coryllos system along with functional assessment). The LATCH scores were routinely obtained only for newborns whose mothers elected to attempt to breastfeed; 129 of 150 newborns had documented LATCH scores. There was no statistical difference between frenulum morphologic components for newborns with or without documented LATCH scores. The mean (SD) LATCH score of the participants was 6.78 (1.62). The mean (SD) LATCH score for black infants (7.84 [1.64]) was slightly lower than that for white infants (8.13 [1.45]), and late-preterm infants had slightly lower mean (SD) LATCH scores (7.19 [1.81]) than term infants (8.28 [1.81]). All statistics involving LATCH scores excluded newborns without a documented LATCH score.

The 7 measurements of the maxillary labial frenulum components obtained from the captured images are shown in Figure 1 along with several examples of photographed maxillary labial frenula. The ratio of free gingiva (ie, gingiva length not covered by the maxillary labial frenulum) to total gingiva was calculated as 100 × [(alveolar edge to gingival attachment)/(alveolus to sulcus)].
The mean (SD) ratio of free lip (ie, lip length not covered by the maxillary labial frenulum) to total lip, calculated as 100 × [(lip attachment to vermilion border)/(sulcus to vermilion border)], was 87.38% (7.67%). The mean and SD for each component measured can be found in the Table. For example, the mean (SD) of the distance from the alveolar edge to the frenulum gingival attachment was 1.53 (0.85) mm, the frenulum length on stretch was 5.19 (1.68) mm, the frenulum gingival attachment thickness was 0.84 (0.36) mm, and the frenulum labial attachment thickness was 2.83 (1.33) mm. Most frenula attached less than 2 mm from the alveolar edge and had a relatively small mean (SD) value. Visual frequency distributions for each measurement can be found in Figure 2 and Figure 3.

Statistical analysis showed that the length from alveolar edge to frenulum gingival attachment, gingival attachment thickness, and lip attachment thickness were log-normally distributed. Frenulum length on stretch, length from lip attachment to vermilion border, distance from alveolus to sulcus, and distance from sulcus to vermilion border were normally distributed. The frenula were graded with the Kotlow and Stanford classification systems, and the results are shown in Figure 4. Using the Kotlow scale, most neonates (101 of 150) scored 3 of 4; when graded with the Stanford scale, 140 neonates scored 2 of 3, with only 2 newborns scoring 3.

There was a medium effect size difference for gingival attachment thickness between mean (SD) late-preterm (0.69 [0.24] mm) and term (0.88 [0.37] mm) infants (Cohen d, −0.52; 95% CI, −0.94 to −0.10), with term defined as gestational age greater than or equal to 37 weeks. Otherwise, there were no meaningful differences for variable measurements between late-preterm and term newborns.
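The two ratio measures defined above are simple percentages of component lengths. A minimal sketch follows; the function names and example values are ours, for illustration only, and do not reproduce study data.

```python
from statistics import mean, stdev

def free_gingiva_ratio(edge_to_attachment_mm, alveolus_to_sulcus_mm):
    """100 x [(alveolar edge to gingival attachment) / (alveolus to sulcus)]."""
    return 100 * edge_to_attachment_mm / alveolus_to_sulcus_mm

def free_lip_ratio(attachment_to_vermilion_mm, sulcus_to_vermilion_mm):
    """100 x [(lip attachment to vermilion border) / (sulcus to vermilion border)]."""
    return 100 * attachment_to_vermilion_mm / sulcus_to_vermilion_mm

def mean_sd(values):
    """Summarize a measurement series as (mean, sample SD), the format used in the Table."""
    return mean(values), stdev(values)

# Invented example: a frenulum attaching 1.5 mm from the alveolar edge on a
# 6.0 mm gingiva leaves 25% of the gingiva "free".
assert free_gingiva_ratio(1.5, 6.0) == 25.0
```

A larger free-lip percentage corresponds to a lip that is less covered by the frenulum and hence, plausibly, more mobile.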
Of note, when comparing differences between black infants and white infants, there were small to medium effect size differences in alveolar attachment (mean, 1.66 [95% CI, 1.46-1.86] vs 1.36 [95% CI, 1.19-1.53] mm; Cohen d = −0.39 [95% CI, −0.72 to −0.05]), frenulum length (4.79 [95% CI, 4.37-5.20] vs 5.54 [95% CI, 5.17-5.92] mm; Cohen d = 0.46 [95% CI, 0.12-0.79]), and the ratio of lip attachment to vermilion border (0.89 [95% CI, 0.87-0.91] vs 0.86 [95% CI, 0.84-0.87] mm; Cohen d = −0.39 [95% CI, −0.73 to −0.06]), respectively. Furthermore, we found no clinically meaningful difference between newborns diagnosed as having or as not having ankyloglossia for each variable measurement, including LATCH score, except lip attachment thickness (mean, 3.3 [95% CI, 2.8-3.9] vs 2.7 [95% CI, 2.5-2.9] mm; Cohen d = 0.46 [95% CI, 0.06-0.86]). The very small η2 effect size estimates (eTable in the Supplement) described no clinically meaningful association between measurements and LATCH scores. These estimates include the alveolar edge to frenulum gingival attachment, which is the functional correlate to the Kotlow and Stanford scales.10 In addition, η2 showed no meaningful differences in LATCH scores between different Kotlow classification system scores and different Stanford classification system scores.

Discussion

Evidence regarding an association between the maxillary labial frenulum and lip-tie is currently sparse.
To improve diagnosis and enable more rigorous study, the present study provided reference values of the various morphologic components of the frenulum from a sample population. Considering that tethering necessitates 2 attachment points, numerous aspects of frenulum anatomy are needed rather than simply the gingival attachment site. Both the Kotlow classification system and the Stanford system proposed by Santa Maria et al6 involve the distance from the alveolar edge to the frenulum gingival attachment.9 In our cohort, as shown in Figure 2, the distance from the alveolar edge to the frenulum gingival attachment had a narrow range of attachment variability, with a mean value of 1.53 mm and an SD of 0.85 mm. Given that most frenula attached less than 2 mm from the alveolar edge and had a relatively small mean and SD, the attachment site for the maxillary labial frenulum alone was not a sufficient population discriminator for lip tethering. Furthermore, the small magnitude of the mean and SD could explain the low interrater reliability found in the Kotlow and Stanford scales because such a small value can be difficult to accurately assess on a crying neonate. When this cohort was graded with the Kotlow scale as shown in Figure 4, the vast majority (101 of 150) of individuals scored 3 of 4, which could suggest a more severe lip-tie by the Kotlow scale. When graded with the Stanford scale as shown in Figure 4, 140 patients scored 2 of 3, with only 2 patients scoring 3, overall providing little ability to discriminate within our population. For both scales, statistical analyses showed no difference in LATCH scores within each scale in the present study. Thus, the current maxillary labial frenulum classification systems may be inadequate to properly assess variation in a general population of newborns.

Table. Data on Measured Frenulum Morphologic Components

                                                        Mean (SD)
Component                                               Combined        Ankyloglossia   Nonankyloglossia
Distance from alveolar edge to frenulum
  gingival attachment, mm                               1.53 (0.85)     1.57 (0.91)     1.52 (0.83)
Length of frenulum on stretch, mm                       5.19 (1.68)     5.56 (1.51)     5.09 (1.72)
Distance from frenulum labial attachment
  to vermilion border, mm                               6.02 (2.01)     5.67 (2.03)     6.11 (2.01)
Gingival attachment thickness, mm                       0.84 (0.36)     0.86 (0.33)     0.84 (0.37)
Labial attachment thickness, mm                         2.83 (1.33)     3.30 (1.50)     2.70 (1.26)
Free gingiva to total gingival length ratio, %          25.49 (13.65)   24.1 (0.12)     25.8 (0.14)
Free lip to total lip length ratio, %                   87.38 (7.67)    85.6 (0.10)     87.8 (0.07)

Figure 1. How Frenulum Morphologic Components Were Measured and Examples of Maxillary Labial Frenula. A, Measured frenulum parameters. B, Examples of maxillary labial frenula. Black vertical line indicates length from alveolar edge to frenulum gingival attachment; yellow vertical line, length of frenulum on stretch; blue vertical line, length from frenulum labial attachment to vermilion border; green horizontal line, frenulum gingival attachment thickness; pink horizontal line, frenulum labial attachment thickness; white vertical line, distance from alveolar edge to sulcus; and orange vertical line, distance from sulcus to vermilion border.

Figure 2.
Frequency Distributions of Different Frenulum Component Measurements With Overlying Fitted Gaussian Distribution Line. Panels show number of participants by length (mm) for: A, alveolar edge to gingival attachment; B, length on stretch; C, lip attachment to vermilion border; D, gingival attachment thickness; E, lip attachment thickness. [Histograms not reproduced.]

An ideal maxillary labial frenulum classification system should involve numerous appropriately selected frenula component measurements based on epidemiologic distributions to achieve accurate description and population discrimination of lip tethering. Establishing a population-based frenulum classification system is critical before any further research can be undertaken regarding the role of the maxillary frenulum and breastfeeding difficulty.

On the basis of findings in the present study, certain measurements appeared well suited to be used in such a classification system or for further study in general. Measurements with broader distributions appeared most helpful in differentiating between degrees of lip tethering in our cohort.
Although not a sufficient population discriminator alone given its narrow distribution, the length from alveolar edge to gingival attachment may be useful because it serves as an important determinant of restriction in 3-dimensional space, describing one of the necessary points of attachment. The remaining frenulum measurements were not accounted for in the Kotlow and Stanford scales, but Figure 1 shows their variability within the population. Lip and gingival attachment thickness may affect lip tethering because thicker frenula are less likely to allow for lip mobility. As given in the Table, lip attachment thickness had a relatively large SD relative to the mean. Thus, this value may help differentiate between degrees of lip tethering within the population. Gingival attachment thickness had a smaller mean and SD but may still provide useful data regarding frenulum morphology. The length of the frenulum on stretch, with a large mean and SD, may help further distinguish degrees of lip tethering. Biomechanically, a frenulum that stretches easily has less risk of labial tethering, whereas a short frenulum that does not stretch much has a higher risk of labial tethering in a 3-dimensional space. Finally, the percentage of the lip that is "free" (ie, not covered by the frenulum) helps indicate how mobile a lip can be. Overall, some combination of these particular component measurements could be used to create a more comprehensive grading scale for both research and clinical purposes to characterize the maxillary labial frenulum.

Our study also analyzed the presence of ankyloglossia based on the frenulum measurements. Because both tongue and upper lip tethering models are based on improper midline attachments in the mouth and functional restriction, one could hypothesize that the 2 conditions could be interrelated.
The presence or absence of ankyloglossia was determined by the newborn nursery clinicians, and they used a modified Coryllos system along with functional assessment. However, our study did not show any association between the measured frenulum morphologic components or the Kotlow and Stanford scales with the presence or lack of ankyloglossia.

Figure 3. Additional Frequency Distributions of Different Component Frenulum Measurements With Overlying Fitted Gaussian Distribution Line. Panels show number of participants for: A, alveolus to sulcus (length, mm); B, sulcus to vermilion border (length, mm); C, free gingiva to total gingiva ratio; D, free lip to total lip length ratio. [Histograms not reproduced.]

These data and our identified anatomic criteria may be useful in understanding the role of the frenulum throughout childhood.
Recent work has suggested that abnormal tethering of the upper lip attributable to the maxillary labial frenulum may lead to formation of dental caries in childhood.1,2,9,14 The labial frenulum also has been implicated in the formation of a midline diastema (a space between the maxillary central incisors), which may then in turn result in dental caries secondary to food trapping.1,15 However, there have been few studies regarding the natural history of the maxillary labial frenulum. In a study of children 1 to 8 years of age, Boutsi and Tatakis16 found that the attachment of the frenulum differed across ages, with older children showing mucosal or gingival frenula rather than papillary penetrating frenula, suggesting that the maxillary labial frenulum may shift the insertion point as a child ages and the maxilla develops. Use of more comprehensive anatomic data may enable researchers to investigate concepts such as potential associations with oral health and feeding, longitudinal assessment of frenulum development, and characterization of changes that may affect oral health and feeding.

Limitations

We noted several limitations of our investigation. This prospective cross-sectional study used digital photography with a standardized examination technique. Minor variations in protocol and image quality could artificially increase variability in the measurements obtained, but the variability was smaller than expected for direct measurements without digital assistance. Given the low SDs noted in all measurements, this association was likely minimal. In addition, LATCH scores for the patients in our study were obtained at different times after birth and by numerous lactation consultants and postpartum nurses. LATCH scores did vary slightly between black infants and white infants as well as between term and late-preterm infants. These factors may have obscured any correlation between degree of frenulum and LATCH score.
The present study was not designed to determine a correlation with lip-tie and breastfeeding difficulty but to describe the population's anatomic variation, which limited the use of other breastfeeding assessment tools. It is possible that the lip anatomy may have limited association with breastfeeding given that no prospective study has rigorously demonstrated such an association. Future investigations should include a more comprehensive maxillary labial frenulum grading scale than the Kotlow and Stanford scales and should follow up with prospective case-control studies to assess for correlations between the more ideal scale and feeding assessments, such as the LATCH score, the Infant Breastfeeding Assessment Tool, or the short-form McGill Pain Questionnaire.17 The LATCH score alone does not fully encapsulate and quantify feeding difficulty among infants, and infant feeding is best assessed with a battery of tests and qualified clinical expertise. Further studies also are necessary to understand the role of the maxillary labial frenulum in breastfeeding and to assess the association of labial frenectomy with breastfeeding.

Conclusions

To our knowledge, the present study is the first to describe detailed, specific measurements of maxillary labial frenulum morphology in newborns. We found little variability in the alveolar attachment location across the study population, with most frenula attaching less than 2 mm from the alveolar edge. These findings may have substantial implications for treatment of the rapidly growing population of infants with suspected lip-tie. The currently available maxillary labial frenulum classification systems provided poor discrimination within the present study population. We identified additional anatomic criteria that may improve future classification.

ARTICLE INFORMATION

Accepted for Publication: June 29, 2019.

Published Online: August 22, 2019.
doi:10.1001/jamaoto.2019.2302

Correction: This article was corrected on October 17, 2019, to fix errors in the mean (SD) birth weight and range of birth weights of 150 newborns.

Author Contributions: Mr Ray and Dr Walsh had full access to all of the data in the study and take responsibility for the integrity of the data and the accuracy of the data analysis. Concept and design: All authors. Acquisition, analysis, or interpretation of data: Ray, Walsh. Drafting of the manuscript: Ray, Walsh. Critical revision of the manuscript for important intellectual content: All authors. Statistical analysis: Ray, Walsh. Administrative, technical, or material support: Ray, Walsh.

Conflict of Interest Disclosures: None reported.

Additional Contributions: We thank the pediatric nurse practitioners and lactation consultants of the Johns Hopkins Newborn Nursery for assisting with data collection, with special thanks to Kristen Byrnes, CRNP; Carol Long, CRNP, IBCLC; Suzanne Rubin, CRNP; Patricia Smouse, CRNP; Jo-Ann Swartz, CRNP; Krystina Mints, CRNP; Heather Sturdivant, RN, IBCLC; and Nadine Rosenblum, MS, RN, IBCLC.

Figure 4. Comparison of Distributions of Kotlow (1-4) and Stanford (1-3) Scale Scores. The chart shows number of participants by score for each scale. [Chart not reproduced.]

REFERENCES

1. Kotlow LA. The influence of the maxillary frenum on the development and pattern of dental caries on anterior teeth in breastfeeding infants: prevention, diagnosis, and treatment. J Hum Lact. 2010;26(3):304-308. doi:10.1177/0890334410362520
2. Kotlow L. Infant reflux and aerophagia associated with the maxillary lip-tie and ankyloglossia (tongue-tie). Clin Lactation. 2011;2(4):25-29. doi:10.1891/215805311807011467

3. Henry SW, Levin MP, Tsaknis PJ. Histologic features of the superior labial frenum. J Periodontol. 1976;47(1):25-28. doi:10.1902/jop.1976.47.1.25

4. Edwards JG. The diastema, the frenum, the frenectomy: a clinical study. Am J Orthod. 1977;71(5):489-508. doi:10.1016/0002-9416(77)90001-X

5. Flinck A, Paludan A, Matsson L, Holm A-K, Axelsson I. Oral findings in a group of newborn Swedish children. Int J Paediatr Dent. 1994;4(2):67-73. doi:10.1111/j.1365-263X.1994.tb00107.x

6. Santa Maria C, Aby J, Truong MT, Thakur Y, Rea S, Messner A. The superior labial frenulum in newborns: what is normal? Glob Pediatr Health. 2017;4:X17718896. doi:10.1177/2333794X17718896

7. Francis DO, Chinnadurai S, Morad A, et al. Treatments for Ankyloglossia and Ankyloglossia With Concomitant Lip-Tie: Comparative Effectiveness Reviews No. 149. Rockville, MD: Agency for Healthcare Research and Quality; 2015. AHRQ Publication No. 15-EHC011-EF.

8. Ghaheri BA, Cole M, Fausel SC, Chuop M, Mace JC. Breastfeeding improvement following tongue-tie and lip-tie release: a prospective cohort study. Laryngoscope. 2017;127(5):1217-1223. doi:10.1002/lary.26306

9. Kotlow L. Oral diagnosis of abnormal frenum attachments in neonates and infants: evaluation and treatment of the maxillary and lingual frenum using the erbium:YAG laser. J Pediatr Dent Care. 2004;10(3):11-14.
https://www.kiddsteeth.com/assets/pdfs/articles/finaslttfrenarticleoct2004.pdf. Accessed July 22, 2019.

10. Jensen D, Wallace S, Kelsay P. LATCH: a breastfeeding charting system and documentation tool. J Obstet Gynecol Neonatal Nurs. 1994;23(1):27-32. doi:10.1111/j.1552-6909.1994.tb01847.x

11. Cohen J. Statistical Power Analysis for the Behavioral Sciences. 2nd ed. London, England: Routledge; 1988.

12. Ellis PD. The Essential Guide to Effect Sizes: Statistical Power, Meta-analysis, and the Interpretation of Research Results. Cambridge, UK: Cambridge University Press; 2015.

13. Sowjanya SVNS, Venugopalan L. LATCH score as a predictor of exclusive breastfeeding at 6 weeks postpartum: a prospective cohort study. Breastfeed Med. 2018;13(6):444-449. doi:10.1089/bfm.2017.0142

14. Kotlow LA. Diagnosing and understanding the maxillary lip-tie (superior labial, the maxillary labial frenum) as it relates to breastfeeding. J Hum Lact. 2013;29(4):458-464. doi:10.1177/0890334413491325

15. Ceremello PJ. The superior labial frenum and the midline diastema and their relation to growth and development of the oral structures. Am J Orthod. 1953;39(2):120-139. doi:10.1016/0002-9416(53)90016-5

16. Boutsi EA, Tatakis DN. Maxillary labial frenum attachment in children. Int J Paediatr Dent. 2011;21(4):284-288. doi:10.1111/j.1365-263X.2011.01121.x

17. Francis DO, Krishnaswami S, McPheeters M. Treatment of ankyloglossia and breastfeeding outcomes: a systematic review. Pediatrics. 2015;135(6):e1458-e1466. doi:10.1542/peds.2015-0658
----

STUDY PROTOCOL Open Access

Promoting a healthy diet and physical activity in adults with intellectual disabilities living in community residences: Design and evaluation of a cluster-randomized intervention

Liselotte Schäfer Elinder1*, Helena Bergström1, Jan Hagberg1, Ulla Wihlman1, Maria Hagströmer1,2

Abstract

Background: Many adults with intellectual disabilities have poor dietary habits, low physical activity and weight disturbances. This study protocol describes the design and evaluation of a health intervention aiming to improve diet and physical activity in this target group. In Sweden, adults with intellectual disabilities often live in community residences where the staff has insufficient education regarding the special health needs of residents. No published lifestyle interventions have simultaneously targeted both residents and staff.
Methods/Design: The intervention is designed to suit the ordinary work routines of community residences. It is based on social cognitive theory and takes 12-15 months to complete. The intervention includes three components: 1) Ten health education sessions for residents in their homes; 2) the appointment of a health ambassador among the staff in each residence and formation of a network; and 3) a study circle for staff in each residence. The intervention is implemented by consultation with managers, training of health educators, and coaching of health ambassadors. Fidelity is assessed based on the participation of residents and staff in the intervention activities. The study design is a cluster-randomised trial with physical activity as primary outcome objectively assessed by pedometry. Secondary outcomes are dietary quality assessed by digital photography, measured weight, height and waist circumference, and quality of life assessed by a quality of life scale. Intermediate outcomes are changes in work routines in the residences assessed by a questionnaire to managers. Adults with mild to moderate intellectual disabilities living in community residences in Stockholm County are eligible for inclusion. Multilevel analysis is used to evaluate effects on primary and secondary outcomes. The impact of the intervention on work routines in community residences is analysed by ordinal regression analysis. Barriers and facilitators of implementation are identified in an explorative qualitative study through observations and semi-structured interviews.

Discussion: Despite several challenges it is our hope that the results from this intervention will lead to new and improved health promotion programs to the benefit of the target group.

Trial registration number: ISRCTN33749876

Background

Dietary habits, physical activity and obesity are strong modifiable risk factors for chronic diseases such as cardiovascular diseases, diabetes and some cancers [1].
People with intellectual disabilities (ID) often have poor dietary habits [2,3], low physical activity [4,5], and weight disturbances [4-8]. According to a report from the Swedish National Institute of Public Health, people with ID carry a higher disease burden than the population in general, and interventions directed at these risk factors could be an important way to improve health in this group [9].

* Correspondence: liselotte.schafer-elinder@ki.se
1 Division of Intervention and Implementation Research, Department of Public Health Sciences, Karolinska Institutet, Stockholm, Sweden. Full list of author information is available at the end of the article.

Elinder et al. BMC Public Health 2010, 10:761. http://www.biomedcentral.com/1471-2458/10/761. © 2010 Elinder et al; licensee BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

In Sweden, individuals with ID who live in community residences are entitled to assistance in everyday life, but the staff often has insufficient education regarding the special needs of people with disabilities [10]. An ID involves a reduced short-term memory and a reduced ability for abstraction, which increases the risk of making unhealthy choices in an obesogenic environment [11]. IDs are usually categorised as mild, moderate, severe or profound [12]. The categories are arbitrary divisions of a complex continuum, and cannot be defined with absolute precision.

Few lifestyle interventions targeting people with ID have been published. Health programmes, including education, exercise, and in a few cases also stress reduction, have shown modest decreases in BMI [13-17].
In two of those studies, significant improvements were seen in the quality of life in the intervention group after completing the programme [13,16]. In another intervention, where staff received training in meal preparation and weekly supervisor feedback, routines improved and were maintained for up to one year [18]. Positive health changes, in terms of weight loss and decreased blood pressure, were seen in the residents. To our knowledge, no interventions have been published targeting both staff and residents in the same community residence. In order to promote sustainability of the intervention, it is designed to fit into the normal work routines of community residences. The aim of this paper is to describe and explain the design and evaluation of this health intervention targeting people with ID. The description of the study protocol follows the checklist of the CONSORT statement for cluster-randomized trials [19].

Methods and design

Study objectives
1) To study effects of a health intervention on residents’ diet quality, physical activity, body mass index (BMI) and quality of life.
2) To study improvements at residence level in work routines and opportunities for healthy diets and physical activity.
3) To describe and analyse barriers and facilitators in the implementation of the intervention.

Hypothesis
We hypothesise that an educational approach directed both at the residents and staff will strengthen the knowledge and skills of the residents to improve their diet and increase physical activity, as well as the ability of staff to provide a supportive social and physical environment for making healthy choices.

Setting and target group
Adult men and women with mild to moderate ID who live in community residences in Stockholm County are eligible for inclusion. There are approximately 500 such residences. For a residence to be included, at least three subjects in each residence have to agree to participate.
All participants need to have the ability to understand simple information about the study and to decide whether they want to participate or not. A letter of invitation is sent to managers, who are asked to contact their subordinate community residences in Stockholm County. After notification of interest from residences, residents receive an easy-to-read letter with information about the purpose of the study and about the intervention itself. Participants who express interest to participate are informed verbally and in writing, and written consent is obtained. Staff at the residences as well as trustees or legal guardians also receive written and verbal information about the intervention and the purpose of the study. All participants are rewarded with a cinema ticket. Quantitative data will be presented in aggregated form. Qualitative data will be abstracted into themes and illustrated by quotations from the interviews and observations. The data will be treated as strictly confidential and it will not be possible to identify individuals or residences. Ethical permission for this study has been obtained from the Regional Ethical Review Board in Stockholm County, No. 2009/1332-31/5.

Planning of the intervention and development of materials

When planning this intervention we employed the step-by-step approach described by Fraser et al. [20]. In step 1 we specified the problem and defined our problem theory. Based on previous research and practitioners’ experience, we identified poor dietary habits, low physical activity and weight disturbances among adults with ID. In addition, the generally low educational level of staff working in community residences has been identified as a barrier to healthy lifestyles among the target group. Our goal with the intervention is therefore to improve the diet and physical activity of the residents as well as the knowledge and skills of both residents and staff.
The intervention is based on social cognitive theory (SCT). This theory explains behaviour in terms of a triadic, dynamic, and reciprocal model in which behaviour, personal factors, and environmental influences all interact [21]. According to this theory, we aim to improve health behaviours through the personal factors (knowledge, skills, preferences, self-efficacy) of residents as well as through improvements in their social and physical environment, which is very much dependent on the knowledge, skills and work routines of the staff.

In step 2 of intervention planning, core components of the intervention are identified and programme materials are developed and pretested. “Fokus hälsa” (Focus health) is a newly developed material with ten themes for use by staff. It aims to increase the knowledge and skills of residence staff with regard to diet, physical activity and health and is based on the principles of peer education [22]. The themes were developed in discussions with managers of community residences and on the basis of their knowledge of the needs of the target group. The themes are: 1) Health and quality of life; 2) Autonomy and ethics; 3) National recommendations concerning diet and health and information in society; 4) Healthy dietary habits; 5) Physical activity for health; 6) Availability and accessibility; 7) Habits and attitudes; 8) Motivation and support for behavioural change; 9) Cooperation; and 10) How to sustain good work. Each theme includes an introductory text, which is read before the meeting, and at the end there are three suggested exercises: 1) Questions for discussion; 2) Identification of strengths and weaknesses in work routines; and 3) Making agreements about new and improved work routines.
A health education material, “Hälsokörkortet” (Driver’s licence for health), has been developed specifically for people with ID and pretested and revised by “Studieförbundet Vuxenskolan”, a national educational association for adults. The material includes ten educational sessions within five areas: diet, physical activity, culture/aesthetics, mental health and stress relief.

Intervention components

The intervention takes 12-15 months to complete and includes three main components: 1) Ten health education sessions for residents; 2) the appointment of a health ambassador among the staff in each residence, including four network meetings; and 3) a study circle for staff based on the principles of peer education.

Health education for residents
The aim of the health education is to support residents to make lifestyle changes in an easy and positive way in their everyday life, by increasing their knowledge, preferences, skills and self-efficacy. Participants get the possibility to try new foods and activities, and are assigned homework. Ten sessions are carried out in the residences by a health educator from “Studieförbundet Vuxenskolan” using the educational material “Hälsokörkortet”. The material includes themes and detailed instructions for the educator for each of the ten sessions. Each session is supposed to last for 2 × 45 minutes and comprises a discussion, a theme activity, testing of healthy food, physical activity and homework.

Health ambassadors
Ambassadors for various issues are common among the staff in community residences and we wanted to build on this practice. Therefore, in every participating community residence a health ambassador is appointed among the staff members. The task of the health ambassador is to provide health information to colleagues, to inspire them, and to plan and organise health-promoting activities for residents.
The health ambassador receives coaching from the research team on issues regarding diet, physical activity and health, and gets the possibility to exchange knowledge and experiences during network meetings with health ambassadors from the other residences. Network meetings are arranged four times during the intervention. The first meeting is an introductory meeting and the following meetings focus on themes chosen by the ambassadors.

Study circle for staff
The aim of the study circle is to increase the knowledge and skills of staff in the area of health promotion in order to empower them to improve work routines and the social and physical environment of residents. Normally, staff in community residences meet once every second or fourth week. We decided that these meetings would be a good opportunity for the staff to conduct this study circle using the study material “Fokus Hälsa” (Focus health). All the staff in every community residence, including the health ambassador, come together to discuss health issues, based on the principles of peer education [22]. A discussion leader is appointed and the group discusses and does the exercises according to the instructions in the material.

Implementation components
Implementation components, or drivers, are factors which enable practitioners to implement the intervention as intended [23]. Typical components used in this programme are consultations with managers and training and coaching of staff. First, managers and staff in the residences are invited to an introductory meeting, where the entire intervention is discussed with the research team. Second, the health educators are trained for one day by the research staff together with experienced educators from “Studieförbundet Vuxenskolan” regarding how to use the material “Hälsokörkortet” (Driver’s licence for health).
During the entire intervention period, the health ambassadors receive coaching on demand from the research group via personal visits, mail, telephone, and at network meetings. Newsletters are sent to all residences to keep staff and residents up to date with the overall state of the project and important news. Figure 1 shows the logic model of the intervention, explaining the hypothetical causal chain that may lead to behaviour changes in the target group.

Evaluation design

Step 3 in intervention research [20] constitutes the evaluation of the intervention with regard to individual outcomes and setting-level impacts. Because we are targeting both residents and staff, the study is designed as a cluster-randomised trial. After baseline measurements are completed, residences are randomised to an intervention group and to a waiting-list control group by a computer-generated list of random numbers, produced by the statistician and concealed from the research staff, who enrol the participants. Evaluation is done by mixed methods, using both quantitative and qualitative methods.

Intervention outcomes
All outcomes at individual level are assessed at baseline, directly after the end of the intervention and again after 6 months. The primary intervention outcome is physical activity, which is assessed by pedometry (Yamax 200). The Yamax 200 pedometer has shown high agreement with accelerometer-measured physical activity [24,25] and is recommended for research purposes [26]. A previous study on adults with ID has shown that three days of measurement are needed to predict the usual weekly number of steps per day [27].
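The concealed cluster allocation described above can be sketched in a few lines. This is an illustration only: the protocol specifies a computer-generated random list produced by the statistician, while the function name and the fixed seed below are assumptions added so the allocation list is reproducible.

```python
import random

# Illustrative sketch of 1:1 cluster randomisation: residences (clusters)
# are shuffled with a seeded generator and split between the intervention
# arm and the waiting-list control arm. Seed value is an assumption.
def allocate_residences(residence_ids, seed=2010):
    """Return a 1:1 split of residence ids into intervention and control."""
    rng = random.Random(seed)      # seeded so the allocation list can be archived
    ids = list(residence_ids)
    rng.shuffle(ids)
    half = len(ids) // 2
    return {"intervention": ids[:half], "control": ids[half:]}

groups = allocate_residences(range(32))
```

Randomising whole residences rather than individuals matches the cluster design: everyone in a residence shares the same staff environment, so arms must be separated at residence level.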
The participants receive the pedometer and, together with staff in the residences, get practical training as well as verbal and easy-to-read instructions from the research staff on how to wear it (attached to a belt or the lining around the waist, vertically in line with the knee), how to record the number of steps per day, and how to reset the pedometer for the next day of measurement. The participants are asked to wear the pedometer for seven consecutive days, but for the analysis we will include participants who have at least three days of measurements. The outcome measure is average total steps per day as an indicator of total physical activity.

Dietary quality, which is a secondary outcome, is assessed by personal digital photography, a method which has been developed and validated for this project and will be published. Each participant, as well as residence staff, is instructed on how to use the camera (Canon PowerShot A480) and receives practical training in taking pictures. Photos are taken by the participants themselves of all foods and beverages consumed during three days, and staff are encouraged to remind participants to take photos or to assist, if necessary. Outcome measures are: 1) Intake occasions of indicator foods and beverages (fruit and berries, vegetables, low nutrient density foods and beverages, and beverages excluding water); 2) Meal quality assessed in comparison to food-based dietary guidelines visualised as the plate model [28]; and 3) Dietary diversity covering nine core food groups.

It is important to assess the effects of health promotion efforts on quality of life, which may improve or worsen as a consequence of the intervention. Quality of life, a secondary outcome, among people with ID is assessed by a multi-factorial quality of life scale which has been developed within this project, because no suitable scale could be identified in the literature.
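The pedometer inclusion rule (seven days worn, at least three valid days required, outcome = average steps per day) can be expressed as a short function. The data shape below, a list of daily step counts with None for a missing day, is an assumption for illustration.

```python
# Minimal sketch of the pedometer analysis rule described above:
# only participants with >= 3 valid measurement days are analysed,
# and the outcome is the mean of their valid daily step counts.
def mean_daily_steps(daily_counts, min_valid_days=3):
    """Average total steps per day over valid days, or None if too few."""
    valid = [steps for steps in daily_counts if steps is not None]
    if len(valid) < min_valid_days:
        return None                      # participant excluded from the analysis
    return sum(valid) / len(valid)
```

For example, a participant who recorded 6000, 7000 and 8000 steps on three valid days out of seven contributes an average of 7000 steps per day; one with fewer than three valid days is excluded.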
We selected questions from various quality of life questionnaires and reconstructed and pretested them. The questions cover quality of life within six domains (home, food and meals, leisure, family and friends, physical health and mental health) and have three response alternatives: good, average or bad.

[Figure 1. Logic model of the intervention in community residences. Inputs (expert support, external funding, local resources and time) feed into activities (health education for residents, health ambassador meetings, study circle for staff, consultations, coaching, training) supported by materials (“Fokus hälsa”, “Hälsokörkortet”, newsletters), leading to impacts (work routines, food supply in residences, opportunities for physical activity) and outcomes (physical activity, diet, BMI status, quality of life).]

The questions in the scale are read to participants in the form of a structured interview by the research team. Residence staff are asked not to be present during the interview, unless they think it is necessary for the sake of the respondent. The scale will be tested for its psychometric properties in participants in the study according to Kline [29].

Height, weight and waist circumference are secondary outcomes and are measured by the research staff at the residence. Height is measured to the nearest cm in a standardised way using a SECA stadiometer (214). Weight is measured using a digital scale (SECA Robusta 813) to the nearest 0.5 kg with light clothing, and the body mass index (BMI, kg/m2) is calculated. Waist circumference is measured to the nearest cm at the midpoint between the lowest rib and the iliac crest at the end of expiration. Participants answer questions regarding age, country of birth, occupation, family and physical functional limitation. If necessary, the staff give supplementary information.
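The BMI calculation above, weight in kg divided by height in metres squared with height recorded in centimetres as in the protocol, in code form (the function name is illustrative):

```python
# BMI (kg/m2) from measured weight in kg and height in cm, as described
# in the anthropometric measurements above.
def body_mass_index(weight_kg, height_cm):
    """Return BMI in kg/m2."""
    height_m = height_cm / 100.0
    return weight_kg / (height_m ** 2)
```

For instance, a participant weighing 70 kg at 175 cm has a BMI of about 22.9 kg/m2.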
Impacts at setting level
Changes in health promotion work routines and healthy living opportunities in community residences are monitored by self-assessment by residence managers. We developed a questionnaire called Work routines for meals, physical activity and health (Additional file 1). It includes 26 items structured into 3 sections: general health promotion, food and meals, and physical activity. The questionnaire has undergone cognitive response testing but is not otherwise validated. The questionnaire is answered at baseline, after the completed intervention and again at the 6-month follow-up in both intervention and control residences.

Fidelity criteria
Fidelity is defined as the extent to which a programme is implemented as intended, and the effectiveness of an intervention is related to fidelity [30]. Fidelity is assessed by counting the number of times that residents and staff participate in the health education and the study circle, respectively. Residents’ participation is documented by the education leaders, staff participation is self-monitored, and attendance at the health ambassadors’ network meetings is documented by the research staff. A scoring system is developed with a 3-grade score: high, middle and low participation. Additional fidelity assessment is performed for the study circle for residence staff. A score is given to each residence based on the number of themes they have covered and the extent to which they have made agreements about new and improved work routines.

Statistical power
Calculation of power is based on the assumption of an average 25% increase in physical activity, assessed as steps per day by pedometry. No Swedish data were available, so we used data from Peterson et al. [31] from the USA for adults with mild to moderate intellectual disabilities, who on average achieved 6621 ± 3366 steps per day.
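The 3-grade participation score could be implemented as below. Note that the protocol names the grades (high, middle, low) but does not publish the cut-offs, so the one-third and two-thirds thresholds here are purely illustrative assumptions.

```python
# Hypothetical sketch of the 3-grade fidelity participation score.
# The thresholds (>= 2/3 -> high, >= 1/3 -> middle) are assumptions;
# the protocol does not report the actual cut-offs.
def participation_grade(sessions_attended, sessions_offered):
    """Grade participation as 'high', 'middle' or 'low'."""
    fraction = sessions_attended / sessions_offered
    if fraction >= 2 / 3:
        return "high"
    if fraction >= 1 / 3:
        return "middle"
    return "low"
```

A grade like this lets intervention outcome be analysed against dose (fidelity score), as described in the data analysis section.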
Calculations were performed with the “Sample size calculator for cluster randomized trials” [32]. The calculation was two-sided, power was set to 80%, the significance level to 5%, and the cluster size to five individuals. The calculation shows that 32 community residences are needed to detect a significant change in physical activity of 25% between the intervention and control groups.

Data analysis
Data analysis will be performed by the statistician, who is blinded to the intervention group assignment. At individual level, parametric and non-parametric tests are used to compare groups at baseline, depending on the distributions of the quantitative variables. In order to account for clustering of the data, multilevel analysis is used to evaluate the effects of the intervention on relevant outcomes. Two levels are defined in the analysis: 1) individual and 2) residence. Linear and logistic regression models are used to study the effect on the outcome variables physical activity, intake occasions of indicator foods, meal quality, dietary diversity, body weight status, BMI and quality of life. Intervention outcome will be evaluated in relation to the intervention dose (fidelity score) but also according to the intention-to-treat principle.

To evaluate the impact of the intervention on work routines and opportunities for a healthy diet and physical activity in community residences, ordinal regression will be used for the data derived from the questionnaire Work routines for meals, physical activity and health (Additional file 1). Multinomial logistic regression might also be used if it is the categories, and not the order of the categories per se, that is of importance for the results. If the dependent variable can be adequately dichotomised, we will use Poisson regression with robust error variance (modified Poisson regression) to estimate risk ratios (relative risks).
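The power calculation can be reconstructed with the standard design-effect formula for cluster-randomised trials, in which the individually randomised sample size is inflated by 1 + (m - 1) × ICC for cluster size m. The protocol does not report the intra-cluster correlation used, so the ICC of 0.05 below is an assumption; under that assumption the formula reproduces the 32 residences stated above.

```python
from math import ceil

# Reconstructing the sample-size calculation with the design-effect formula
# for cluster-randomised trials. Baseline steps/day (6621 +/- 3366) are from
# Peterson et al. [31]; icc = 0.05 is an ASSUMED intra-cluster correlation,
# not a value reported in the protocol.
mean_steps = 6621.0
sd_steps = 3366.0
delta = 0.25 * mean_steps          # detectable difference: a 25% increase
z_alpha = 1.96                     # two-sided test, significance level 5%
z_beta = 0.8416                    # power 80%
cluster_size = 5                   # residents per residence
icc = 0.05                         # assumed intra-cluster correlation

# sample size per arm under individual randomisation
n_individual = 2 * (z_alpha + z_beta) ** 2 * (sd_steps / delta) ** 2
# inflate by the design effect to account for clustering
design_effect = 1 + (cluster_size - 1) * icc
n_clustered = n_individual * design_effect
clusters_per_arm = ceil(n_clustered / cluster_size)
total_residences = 2 * clusters_per_arm    # 32 under these assumptions
```

The design effect captures the loss of statistical efficiency from randomising residences rather than individuals: the more alike residents within a residence are (higher ICC), the more clusters are needed.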
Evaluation of barriers and facilitators of implementation
To define and analyse barriers and facilitators related to the process of implementation, an explorative qualitative study will be performed. Interviews are often used when the aim is to gain a deep understanding of a phenomenon about which not much is previously known [33,34], whereas observations are suitable when the aim is to explore what actually happens during a session [35]. Although it is possible to successfully conduct interviews with people with ID, it involves several difficulties. A low level of responsiveness to open-ended questions is one [36], which is why observation is chosen as the method for the health education sessions for residents. During or directly after the observations, extensive notes will be taken by the observer.

Semi-structured interviews [37] are conducted with health ambassadors and managers after the completed intervention. The number of informants depends on when saturation of information is attained, i.e. when no further information is added, which is usually after about 15-20 interviews [37]. The interviews will be recorded and transcribed verbatim. For the analyses of the interviews and observations, thematic analysis will be used as described by Malterud [38]. Thematic analysis is suitable for explorative analysis in order to develop concepts or themes that are unknown prior to the analysis. The analysis procedure is an iterative process between formulated themes and original data. Trustworthiness of the study will be assured by a thorough description of the methods used, as well as inter-subjective agreement between different researchers in defining the different themes [39]. Anonymous quotations from the original interviews and observations will be used to illustrate the different themes in order to further enhance credibility.
Discussion
To our knowledge this is the first intervention study addressing diet and physical activity habits in people with ID which simultaneously targets staff and residents. People with different kinds of disabilities constitute a vulnerable group with a large and avoidable chronic disease burden, and therefore health interventions are badly needed [9]. Nevertheless, there are several challenges in working with this target group. First, in our experience it is not easy to recruit residences, because they are not only homes but also workplaces for the staff and managers, who are busy doing their job. In addition, not all residents are willing to participate in a study, due to difficulties in understanding the content of the intervention and the consequences of participation. Second, methods for assessment of outcomes as well as programme materials for the intervention have to be adapted to the limited cognitive abilities of the target group. A new method for assessment of diet quality, diet diversity and intake occasions of indicator foods by digital photography has been validated within this project and will be published. Conducting interviews and using questionnaires among people with ID in order to assess quality of life is difficult due to cognitive limitations, and no suitable scale was found in the literature. A scale was developed by using relevant questions from different quality of life questionnaires. The psychometric properties of this scale will be assessed within the frame of this intervention.

In general it is hard to achieve weight loss in healthy adults through interventions targeting diet and physical activity [40]. We therefore do not expect to see significant weight losses in the intervention group. In addition, the study is not powered for this purpose. However, we hope to see changes in work routines at residence level as well as in opportunities for healthy eating and physical activity for residents.
We also expect improvements in residents’ health behaviours. There are also a number of ethical challenges in this intervention, because people with ID are in need of professional care but also have the basic right to autonomy and self-determination [41]. This will be dealt with in a separate qualitative study within this project. It is our hope that the results from this intervention will lead to new and improved health promotion programs to the benefit of the target group. Final results from the intervention study are expected in 2013.

Additional material
Additional file 1: Work routines for meals, physical activity and health. A questionnaire for administrators and managers of community residences concerning health promotion work routines.

Acknowledgements
We want to thank all managers who participated in the initial conception and planning of the intervention and “Studieförbundet Vuxenskolan” for cooperation concerning health education for residents. We are grateful to Dr Emma Patterson for critically reading the manuscript and correcting the language. This study is funded by the Public Health Fund, Stockholm County Council, grant number HSN 0802-0339.

Author details
1 Division of Intervention and Implementation Research, Department of Public Health Sciences, Karolinska Institutet, Stockholm, Sweden. 2 Division of Physiotherapy, Department of Neurobiology, Care Sciences and Society, Karolinska Institutet, Stockholm, Sweden.

Authors’ contributions
LSE and HB conceived the study, applied for funding, designed the intervention and the evaluation, and are responsible for implementation and data collection. LSE prepared the initial draft of the manuscript. UW is responsible for the qualitative methods and the analysis. JH is responsible for the statistical analysis. MH is responsible for assessment and analysis of physical activity. All authors critically reviewed the manuscript and approved the final draft.
Competing interests
The authors declare that they have no competing interests.

Received: 22 November 2010 Accepted: 13 December 2010 Published: 13 December 2010

References
1. Impact of obesity on health. In The challenge of obesity in the WHO European Region and the strategies for response. Edited by: Branca F, Nikogosian H, Lobstein T. Copenhagen: World Health Organization; 2007:20-27.
2. Adolfsson P, Sydner YM, Fjellstrom C, Lewin B, Andersson A: Observed dietary intake in adults with intellectual disability living in the community. Food Nutr Res 2008, 52.
3. Draheim CC, Stanish HI, Williams DP, McCubbin JA: Dietary intake of adults with mental retardation who reside in community settings. Am J Ment Retard 2007, 112(5):392-400.
4. Emerson E: Underweight, obesity and exercise among adults with intellectual disabilities in supported accommodation in Northern England. J Intellect Disabil Res 2005, 49(Pt 2):134-143.
5. Robertson J, Emerson E, Gregory N, Hatto C, Turner S, Kessissoglou S, Hallam A: Lifestyle related risk factors for poor health in residential settings for people with intellectual disabilities. Res Dev Disabil 2000, 21(6):469-486.
6. Bhaumik S, Watson JM, Thorp CF, Tyrer F, McGrother CW: Body mass index in adults with intellectual disability: distribution, associations and service implications: a population-based prevalence study. J Intellect Disabil Res 2008, 52(Pt 4):287-298.
7. Moran R, Drane W, McDermott S, Dasari S, Scurry JB, Platt T: Obesity among people with and without mental retardation across adulthood. Obes Res 2005, 13(2):342-349.
8. Hove O: Weight survey on adult persons with mental retardation living in the community. Res Dev Disabil 2004, 25(1):9-17.
9. Arnhoff Y: Onödig ohälsa. Hälsoläget för personer med funktionsnedsättning. [Unnecessary poor health. Health status for people with disabilities.] Östersund: Swedish National Institute of Public Health; 2008.
10. Handikappomsorg. Lägesrapport 2003. [Disability care. Status report 2003.] Stockholm: The National Board of Health and Welfare; 2003.
11. Elinder LS, Jansson M: Obesogenic environments - aspects on measurement and indicators. Public Health Nutr 2009, 12(3):307-315.
12. International Statistical Classification of Diseases and Related Health Problems, 10th revision. Geneva: WHO; 2007.
13. Bazzano AT, Zeldin AS, Diab IR, Garro NM, Allevato NA, Lehrer D: The Healthy Lifestyle Change Program: a pilot of a community-based health promotion intervention for adults with developmental disabilities. Am J Prev Med 2009, 37(6 Suppl 1):S201-208.
14. Chapman MJ, Craven MJ, Chadwick DD: Following up fighting fit: the long-term impact of health practitioner input on obesity and BMI amongst adults with intellectual disabilities. J Intellect Disabil 2008, 12(4):309-323.
15. Ewing G, McDermott S, Thomas-Koger M, Whitner W, Pierce K: Evaluation of a cardiovascular health program for participants with mental retardation and normal learners. Health Educ Behav 2004, 31(1):77-87.
16. Heller T, Hsieh K, Rimmer JH: Attitudinal and psychosocial outcomes of a fitness and health education program on adults with Down syndrome. Am J Ment Retard 2004, 109(2):175-185.
17. Marshall D, McConkey R, Moore G: Obesity in people with intellectual disabilities: the impact of nurse-led health screenings and health promotion activities. J Adv Nurs 2003, 41(2):147-153.
18. Kneringer MJ, Page TJ: Improving staff nutritional practices in community-based group homes: evaluation, training, and management. J Appl Behav Anal 1999, 32(2):221-224.
19. Campbell MK, Elbourne DR, Altman DG: CONSORT statement: extension to cluster randomised trials. BMJ 2004, 328(7441):702-708.
20. Fraser M, Richman J, Galinsky M, Day S: Intervention research. New York: Oxford University Press; 2009.
21. Baranowski T, Perry CL, Parcel GS: How individuals, environments, and health behavior interact: social cognitive theory. In Health behavior and health education: theory, research, and practice. 3 edition. Edited by: Glanz K, Rimer BK, Lewis FM. San Francisco: Jossey-Bass; 2002.
22. Wadoodi A, Crosby JR: Twelve tips for peer-assisted learning: a classic concept revisited. Med Teach 2002, 24(3):241-244.
23. Fixsen D, Naoom S, Blase K, Friedman R, Wallace F: Implementation research: a synthesis of the literature. Tampa, FL: University of South Florida; 2005.
24. Le Masurier GC, Tudor-Locke C: Comparison of pedometer and accelerometer accuracy under controlled conditions. Med Sci Sports Exerc 2003, 35(5):867-871.
25. Tudor-Locke C, Ainsworth BE, Thompson RW, Matthews CE: Comparison of pedometer and accelerometer measures of free-living physical activity. Med Sci Sports Exerc 2002, 34(12):2045-2051.
26. Schneider PL, Crouter SE, Bassett DR: Pedometer measures of free-living physical activity: comparison of 13 models. Med Sci Sports Exerc 2004, 36(2):331-335.
27. Temple VA, Stanish HI: Pedometer-measured physical activity of adults with intellectual disability: predicting weekly step counts. Am J Intellect Dev Disabil 2009, 114(1):15-22.
28. Camelon KM, Hadell K, Jamsen PT, Ketonen KJ, Kohtamaki HM, Makimatilla S, Tormala ML, Valve RH: The Plate Model: a visual method of teaching meal planning. DAIS Project Group. Diabetes Atherosclerosis Intervention Study. J Am Diet Assoc 1998, 98(10):1155-1158.
29. Kline P: Handbook of psychological testing. London: Routledge; 2000.
Durlak JA, DuPre EP: Implementation matters: a review of research on the influence of implementation on program outcomes and the factors affecting implementation. Am J Community Psychol 2008, 41(3-4):327-350. 31. Peterson JJ, Janz KF, Lowe JB: Physical activity among adults with intellectual disabilities living in community settings. Prev Med 2008, 47(1):101-106. 32. Campbell MK, Thomson S, Ramsay CR, MacLennan GS, Grimshaw JM: Sample size calculator for cluster randomized trials. Comput Biol Med 2004, 34(2):113-125. 33. Bowling A: Research methods in health. 2 edition. New York: Open University Press; 2002. 34. Patton M: Qualitative research and evaluation methods. (3rd edn.) Thousand Oaks: SAGE; 2002. 3 edition. London: Sage Publications, Inc; 2002. 35. Mays N, Pope C: Qualitative research: Observational methods in health care settings. BMJ 1995, 311(6998):182-184. 36. Finlay WM, Lyons E: Methodological issues in interviewing and using self- report questionnaires with people with mental retardation. Psychol Assess 2001, 13(3):319-335. 37. Kvale S, Brinkmann S: Den kvalitativa forskningsintervjun [The qualitative research interview]. 2 edition. Lund: Studentlitteratur; 2009. 38. Malterud K: Kvalitativa metoder i medicinsk forskning [Qualitative methods in medical research] Lund: Studentlitteratur; 1998. 39. Mays N, Pope C: Qualitative research in health care London: BMJ Publishing Group; 1996. 40. Anderson LM, Quinn TA, Glanz K, Ramirez G, Kahwati LC, Johnson DB, Buchanan LR, Archer WR, Chattopadhyay S, Kalra GP, et al: The effectiveness of worksite nutrition and physical activity interventions for controlling employee overweight and obesity: a systematic review. Am J Prev Med 2009, 37(4):340-357. 41. Wullink M, Widdershoven G, van Schrojenstein Lantman-de Valk H, Metsemakers J, Dinant GJ: Autonomy in relation to health among people with intellectual disability: a literature review. J Intellect Disabil Res 2009, 53(9):816-826. 
Pre-publication history The pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1471-2458/10/761/prepub doi:10.1186/1471-2458-10-761 Cite this article as: Elinder et al.: Promoting a healthy diet and physical activity in adults with intellectual disabilities living in community residences: Design and evaluation of a cluster-randomized intervention. BMC Public Health 2010 10:761. Submit your next manuscript to BioMed Central and take full advantage of: • Convenient online submission • Thorough peer review • No space constraints or color figure charges • Immediate publication on acceptance • Inclusion in PubMed, CAS, Scopus and Google Scholar • Research which is freely available for redistribution Submit your manuscript at www.biomedcentral.com/submit Elinder et al. BMC Public Health 2010, 10:761 http://www.biomedcentral.com/1471-2458/10/761 Page 7 of 7 http://www.ncbi.nlm.nih.gov/pubmed/15634322?dopt=Abstract http://www.ncbi.nlm.nih.gov/pubmed/15634322?dopt=Abstract http://www.ncbi.nlm.nih.gov/pubmed/15634322?dopt=Abstract http://www.ncbi.nlm.nih.gov/pubmed/11153830?dopt=Abstract http://www.ncbi.nlm.nih.gov/pubmed/11153830?dopt=Abstract http://www.ncbi.nlm.nih.gov/pubmed/18339091?dopt=Abstract http://www.ncbi.nlm.nih.gov/pubmed/18339091?dopt=Abstract http://www.ncbi.nlm.nih.gov/pubmed/18339091?dopt=Abstract http://www.ncbi.nlm.nih.gov/pubmed/15800293?dopt=Abstract http://www.ncbi.nlm.nih.gov/pubmed/15800293?dopt=Abstract http://www.ncbi.nlm.nih.gov/pubmed/14733973?dopt=Abstract http://www.ncbi.nlm.nih.gov/pubmed/14733973?dopt=Abstract http://www.ncbi.nlm.nih.gov/pubmed/18498677?dopt=Abstract http://www.ncbi.nlm.nih.gov/pubmed/18498677?dopt=Abstract http://www.ncbi.nlm.nih.gov/pubmed/19896020?dopt=Abstract http://www.ncbi.nlm.nih.gov/pubmed/19896020?dopt=Abstract http://www.ncbi.nlm.nih.gov/pubmed/19896020?dopt=Abstract http://www.ncbi.nlm.nih.gov/pubmed/19074936?dopt=Abstract 
http://www.ncbi.nlm.nih.gov/pubmed/19074936?dopt=Abstract http://www.ncbi.nlm.nih.gov/pubmed/19074936?dopt=Abstract http://www.ncbi.nlm.nih.gov/pubmed/14768659?dopt=Abstract http://www.ncbi.nlm.nih.gov/pubmed/14768659?dopt=Abstract http://www.ncbi.nlm.nih.gov/pubmed/14768659?dopt=Abstract http://www.ncbi.nlm.nih.gov/pubmed/15000672?dopt=Abstract http://www.ncbi.nlm.nih.gov/pubmed/15000672?dopt=Abstract http://www.ncbi.nlm.nih.gov/pubmed/12519273?dopt=Abstract http://www.ncbi.nlm.nih.gov/pubmed/12519273?dopt=Abstract http://www.ncbi.nlm.nih.gov/pubmed/12519273?dopt=Abstract http://www.ncbi.nlm.nih.gov/pubmed/10396775?dopt=Abstract http://www.ncbi.nlm.nih.gov/pubmed/10396775?dopt=Abstract http://www.ncbi.nlm.nih.gov/pubmed/15031246?dopt=Abstract http://www.ncbi.nlm.nih.gov/pubmed/15031246?dopt=Abstract http://www.ncbi.nlm.nih.gov/pubmed/12098409?dopt=Abstract http://www.ncbi.nlm.nih.gov/pubmed/12098409?dopt=Abstract http://www.ncbi.nlm.nih.gov/pubmed/12750599?dopt=Abstract http://www.ncbi.nlm.nih.gov/pubmed/12750599?dopt=Abstract http://www.ncbi.nlm.nih.gov/pubmed/12471314?dopt=Abstract http://www.ncbi.nlm.nih.gov/pubmed/12471314?dopt=Abstract http://www.ncbi.nlm.nih.gov/pubmed/14767259?dopt=Abstract http://www.ncbi.nlm.nih.gov/pubmed/14767259?dopt=Abstract http://www.ncbi.nlm.nih.gov/pubmed/19143459?dopt=Abstract http://www.ncbi.nlm.nih.gov/pubmed/19143459?dopt=Abstract http://www.ncbi.nlm.nih.gov/pubmed/9787722?dopt=Abstract http://www.ncbi.nlm.nih.gov/pubmed/9787722?dopt=Abstract http://www.ncbi.nlm.nih.gov/pubmed/9787722?dopt=Abstract http://www.ncbi.nlm.nih.gov/pubmed/18322790?dopt=Abstract http://www.ncbi.nlm.nih.gov/pubmed/18322790?dopt=Abstract http://www.ncbi.nlm.nih.gov/pubmed/18322790?dopt=Abstract http://www.ncbi.nlm.nih.gov/pubmed/18308385?dopt=Abstract http://www.ncbi.nlm.nih.gov/pubmed/18308385?dopt=Abstract http://www.ncbi.nlm.nih.gov/pubmed/14972631?dopt=Abstract http://www.ncbi.nlm.nih.gov/pubmed/7613435?dopt=Abstract 
http://www.ncbi.nlm.nih.gov/pubmed/7613435?dopt=Abstract http://www.ncbi.nlm.nih.gov/pubmed/11556269?dopt=Abstract http://www.ncbi.nlm.nih.gov/pubmed/11556269?dopt=Abstract http://www.ncbi.nlm.nih.gov/pubmed/19765507?dopt=Abstract http://www.ncbi.nlm.nih.gov/pubmed/19765507?dopt=Abstract http://www.ncbi.nlm.nih.gov/pubmed/19765507?dopt=Abstract http://www.ncbi.nlm.nih.gov/pubmed/19646099?dopt=Abstract http://www.ncbi.nlm.nih.gov/pubmed/19646099?dopt=Abstract http://www.biomedcentral.com/1471-2458/10/761/prepub Abstract Background Methods/Design Discussion Trial registration number Background Methods and design Study objectives Hypothesis Setting and target group Planning of the intervention and development of materials Intervention components Health education for residents Health ambassadors Study circle for staff Implementation components Evaluation design Intervention outcomes Impacts at setting level Fidelity criteria Statistical power Data analysis Evaluation of barriers and facilitators of implementation Discussion Acknowledgements Author details Authors' contributions Competing interests References Pre-publication history work_k554vivhnzearkailqslzunesi ---- Clinical and Dermoscopic Stability and Volatility of Melanocytic Nevi in a Population-Based Cohort of Children in Framingham School System Clinical and Dermoscopic Stability and Volatility of Melanocytic Nevi in a Population-Based Cohort of Children in Framingham School System Alon Scope1, Stephen W. Dusza1, Ashfaq A. Marghoob1, Jaya M. Satagopan2, Juliana Braga Casagrande Tavoloni1, Estee L. Psaty1, Martin A. Weinstock3, Susan A. Oliveria1, Marilyn Bishop4, Alan C. Geller5 and Allan C. Halpern1 Nevi are important risk markers of melanoma. The study aim was to describe changes in nevi of children using longitudinal data from a population-based cohort. 
Overview back photography and dermoscopic imaging of up to 4 index back nevi was performed at age 11 years (baseline) and repeated at age 14 years (follow-up). Of 443 children (39% females) imaged at baseline, 366 children (39% females) had repeated imaging 3 years later. At age 14, median back nevus counts increased by two; 75% of students (n¼274) had at least one new back nevus and 28% (n¼103) had at least one nevus that disappeared. Of 936 index nevi imaged dermoscopically at baseline and follow-up, 69% (645 nevi) had retained the same dermoscopic classification from baseline evaluation. Only 4% (n¼13) of nevi assessed as globular at baseline were classified as reticular at follow-up, and just 3% (n¼3) of baseline reticular nevi were classified as globular at follow-up. Of 9 (1%) index nevi that disappeared at follow- up, none showed halo or regression at baseline. In conclusion, the relative stability of dermoscopic pattern of individual nevi in the face of the overall volatility of nevi during adolescence suggests that specific dermoscopic patterns may represent distinct biological nevus subsets. Journal of Investigative Dermatology (2011) 131, 1615–1621; doi:10.1038/jid.2011.107; published online 12 May 2011 INTRODUCTION The Study of Nevi in Children (SONIC) aims to elucidate the epidemiology and biology of nevi, which in turn may inform studies related to melanoma biology and public health efforts in melanoma prevention (Oliveria et al., 2009). Nevi are important risk markers of melanoma (Gandini et al., 2005). Longitudinal studies of nevi have shown that childhood and adolescence are dynamic periods for nevi appearance and evolution (Green et al., 1995; Luther et al., 1996; Siskind et al., 2002; English et al., 2006; Milne et al., 2008). 
More recently, our understanding of nevogenesis during childhood has been advanced with the use of dermoscopy, which allows a detailed classification of nevi based on global dermoscopic pattern (Hofmann-Wellenhof et al., 2001). We previously reported that dermoscopically recognizable globular and reticular nevi in children are two subsets of nevi that are distinguishable based on their anatomic distribution, size, and differing associations with pigmentation phenotype (Scope et al., 2008). These subsets of nevi also correlate with specific histopathological patterns, and taken together, suggest the existence of distinct pathways of nevogenesis (Zalaudek et al., 2006; Argenziano et al., 2007). Previous studies of nevus evolution in children have been based on assessment of changes in overall nevus counts, not on tracking of changes in individual nevi. Advances in high- resolution total body photography and digital dermoscopy allow for longitudinal tracking of evolution of individual nevi (LaVigne et al., 2005). In addition, dermoscopic imaging at baseline and follow-up enables assessment of more subtle changes in the pattern of nevi. The purpose of this study was to describe changes in melanocytic nevi of pre-adolescent children using long- itudinal data from a population-based cohort. We describe dermoscopic changes in individual index nevi and new nevi occurring on the backs of children over a 3-year period. RESULTS The baseline cohort included 443 children imaged in 5th grade (age 11). Of these children, 366 children, 39% females, had repeat imaging at 8th grade (age 14). 
The & 2011 The Society for Investigative Dermatology www.jidonline.org 1615 ORIGINAL ARTICLE Received 20 August 2010; revised 25 December 2010; accepted 18 January 2011; published online 12 May 2011 1Dermatology Service, Memorial Sloan-Kettering Cancer Center, New York, New York, USA; 2Department of Epidemiology and Biostatistics, Memorial Sloan-Kettering Cancer Center, New York, New York, USA; 3Dermatoepidemiology Unit, VA Medical Center, Department of Dermatology, Rhode Island Hospital, Brown University, Providence, Rhode Island, USA; 4School Health Services, Framingham Public Schools, Framingham, Massachusetts, USA and 5Division of Public Health Practice, Harvard School of Public Health, Harvard University, Boston, Massachusetts, USA Correspondence: Allan C. Halpern, Dermatology Service, Memorial Sloan-Kettering Cancer Center, 160 East 53rd Street, New York, New York 10022, USA. E-mail: halperna@mskcc.org Abbreviation: SONIC, Study of Nevi in Children http://dx.doi.org/10.1038/jid.2011.107 http://www.jidonline.org mailto:halperna@mskcc.org overall retention rate was 83%, with 100% retention of the 363 students who remained in the school system. Of the 27 students who moved out of Framingham and an additional 53 who remained local but left the school system, only 3 students who had left the school system agreed to return for repeat imaging despite multiple invitations and offered accommodations. The demographic and phenotypic charac- teristics of children who were imaged in 5th and 8th grade and children lost to follow-up are presented in Table 1. Children who were retained in the study were more likely to have lighter hair color, light skin that burns easily, and to be Caucasian than those lost to follow-up; no differences were observed for sex, degree of freckling, ability to tan, or having previous sunburns. New and disappearing nevi Analysis of overview back images was performed for all participants. 
In general, back nevus counts increased by a median of two nevi. Of the 8th graders, 75% (n¼274) had at least one new back nevus. Of note, 103 (28%) of students had at least one nevus that disappeared. Nine of the disappearing nevi were index nevi for which baseline dermoscopic images were available. In total, 80% of the children had at least one nevus appear or disappear (Figure 1). The change in nevus count from 5th to 8th grade ranged from –2 (indicating that back nevus counts decreased due to disappearing nevi) to þ26. There was a correlation between total back nevus counts at baseline, appearance, and disappearance of nevi. Children with higher back nevus counts at baseline were more likely to develop new nevi (correlation coefficient 0.59, Po0.0001) and to have disappearing nevi (Table 2). Similarly, children with multiple new nevi were more likely to also have disappearing nevi than children without new nevi (Table 2). Dermoscopic patterns of index nevi Analysis of individual dermoscopic images was performed for all participants. A total of 945 index nevi were evaluated for dermoscopic assessment at baseline (5th grade, 2004); of these, 936 nevi were available for dermoscopic assessment at follow-up evaluation (8th grade, 2007), whereas 9 of the index nevi imaged at baseline (i.e., nevi selected for dermo- scopic imaging at baseline) disappeared at follow-up. In addi- tion, up to two new index nevi (i.e., not present at baseline), one from the upper back and one from the lower back, were imaged dermoscopically at follow-up; in total, 186 new index nevi were imaged dermoscopically at follow-up evaluation. A total of 936 nevi were available for baseline and follow- up dermoscopic evaluations (945 nevi at baseline minus 9 nevi that disappeared by follow-up). Table 3 depicts the distribution of the dermoscopic patterns of these nevi at baseline and follow-up. 
At follow-up, 13% of the nevi were reticular, 32% globular, 10% complex, and 45% were homo- genous (i.e., without pattern). Of these nevi, 69% (645 nevi) had retained the same dermoscopic classification from baseline. There was almost no shift in dermoscopic pattern between globular and reticular nevi: only 4% (n¼13) of nevi Table 1. Characteristics of students imaged in 5th and 8th grade (n=366) and students lost to follow-up (n=77) Characteristic Students imaged in 5th and 8th grade Students lost to follow-up n (%) n (%) P-value Sex Female 141 (39) 32 (42) Male 225 (61) 45 (58) 0.62 Race/ethnicity Native American 1 (0) 0 (0) Asian 17 (5) 5 (7) African American 14 (4) 5 (7) Hispanic 65 (18) 26 (34) White 269 (73) 41 (53) 0.01 Skin color Very fair/fair 245 (67) 33 (43) Light olive 30 (8) 4 (5) Dark olive, brown, black 91 (25) 40 (52) o0.001 Hair color Dark brown 212 (58) 62 (81) Light brown 84 (23) 10 (13) Blonde 60 (16) 3 (4) Red 10 (3) 2 (3) 0.005 Skin burns easily No 216 (59) 43 (56) Yes 135 (37) 25 (32) 0.03 Tanning ability Deep tan 108 (30) 24 (31) Moderate tan 130 (36) 27 (35) Mild/occasional tan 60 (16) 11 (14) Not able to tan 18 (5) 2 (3) Do not know 26 (7) 5 (7) 0.80 Back freckles Absent 302 (83) 61 (79) Present 64 (17) 16 (21) 0.50 Nevus count (geometric mean, SD) 5.4 (2.8) 6.3 (2.5) 0.17 Sunburns in summer before study enrollment (2004) None 230 (63) 56 (73) 1 98 (27) 12 (16) Z2 38 (10) 9 (12) 0.12 1616 Journal of Investigative Dermatology (2011), Volume 131 A Scope et al. Stability and Volatility of Nevi in Children assessed as globular at baseline were classified as reticular at follow-up and just 3% (n¼3) of baseline reticular nevi were classified as globular at follow-up. In addition, we observed that 132 (28%) of the 468 nevi that were homogenous in 2004 developed a dermoscopic pattern by 2007 and that 85 (18%) of the 468 nevi that were patterned in 2004 became homogenous by 2007. 
In general, nevi were 65% more likely to become patterned during follow-up than lose their pattern, i.e., become homogeneous (odds ratio¼1.65; 95% confidence interval: 1.3–2.2). Com- pared with lesions ‘‘never’’ having a distinct nevus pattern (homogenous nevi), lesions ‘‘ever’’ having a pattern (reticu- lar, globular, or complex nevi) were 2.4 times more likely (odds ratio¼2.4; 95% confidence interval: 1.2–4.9) to be observed in students with the darkest skin color when com- pared with lesions from those with the lightest skin color. In a multivariate model whereby gender, skin phototype and sunburns were independent predictors of a shift in dermo- scopic classification among index nevi between homo- geneous and patterned, students with fairer skin and a greater tendency to burn were more likely to have at least one nevus that shifted from homogenous to patterned (Po0.001) or from patterned to homogenous over the study period (P¼0.08). Furthermore, a history of two or more sunburns before study inception was associated with the greatest dermoscopic pattern volatility (i.e., at least one homogenous nevus becoming patterned and one patterned nevus becoming homogenous in the same student, Po0.05). Changes in dermoscopic pattern were not associated with gender. Of 936 index nevi that were available for dermo- scopic analysis of both baseline and follow-up, 34 nevi (4%) significantly faded in color between the two time points, 2004 2007 Figure 1. High-resolution overview photography allows for comparison of total back nevus counts from baseline (2004) and follow-up (2007). Side-by-side assessment of images allows for evaluation of new nevi (inset a, arrowhead; the new nevus is also indicated by an arrowhead on 2007 overview), disappearance of nevi (inset b, circle; the disappearing nevus is also indicated on 2004 overview with a light-blue marker) and the stability of lesions (inset b, arrows; stable nevi also indicated on both overview images by the yellow marker). 
Table 2. Association between at least one disappearing nevus and total back nevi at baseline and number of new nevi at the follow-up assessment No disappearing nevi X1 Disappearing nevus Total P-value1 n (%) n (%) Total back nevi at baseline 0–4 132 (85) 24 (15) 156 (100) o0.001 5–9 67 (71) 27 (29) 94 (100) 10–14 31 (57) 23 (43) 54 (100) 15–19 16 (62) 10 (38) 26 (100) Z20 22 (61) 14 (39) 36 (100) Number of new nevi at follow-up assessment 0 78 (89) 10 (11) 88 (100) 0.0092 1 44 (76) 14 (24) 58 (100) 2 39 (72) 15 (28) 54 (100) 3 30 (73) 11 (27) 41 (100) Z4 77 (62) 48 (38) 125 (100) Total 268 (73) 98 (27) 366 (100) 1P-values for trend. 2Association adjusted for total nevus count at baseline. Table 3. Dermoscopic patterns for nevi (n=936) that were evaluated at baseline (2004) and were available for dermoscopic assessment at follow-up (2007) 2007 Reticular Globular Homogeneous Complex Total 2004 Reticular 63 3 12 24 102 Row % 62 3 12 23 100 Col % 50 1 3 27 11 Globular 13 215 72 22 322 Row % 4 67 22 7 100 Col % 10 72 17 24 34 Homogeneous 44 75 336 13 468 Row % 9 16 72 3 100 Col % 35 25 80 14 50 Complex 6 6 1 31 44 Row % 14 14 2 70 100 Col % 5 2 0.2 34 5 Total 126 299 421 90 936 Row % 13 32 45 10 100 Col % 100 100 100 100 100 www.jidonline.org 1617 A Scope et al. Stability and Volatility of Nevi in Children http://www.jidonline.org i.e., became significantly lighter in pigmentation. The base- line dermoscopic patterns of these 34 nevi were homogenous in 17 (50%), globular in 14 (41.2), and reticular in 3 (8.8%); their dermoscopic patterns at follow-up were homogenous in 21 (61%), globular in 8 (24%), and reticular in 5 (15%). In addition, 9 (1%) of the 945 index imaged at baseline (i.e., nevi selected for dermoscopic imaging at baseline) disap- peared at follow-up; their dermoscopic patterns at baseline were reticular in 1 (11%), globular in 4 (44%), and homo- geneous in 4 nevi (44%). 
None of the nevi that completely disappeared showed any evidence of halo or regression structures on baseline dermoscopic images. A total of 186 ‘‘new’’ index lesions were identified in 165 participants. The most frequent dermoscopic pattern expressed by new nevi that were imaged at follow-up was globular (41%, n¼76), followed by homogeneous (28%, n¼52), reticular (19%, n¼35), and complex (11%, n¼20). The overall size of index nevi measured by area was relatively dynamic during the study period with 735 (79%), of 936 nevi with dermoscopic images at baseline and follow-up, either increasing or decreasing in total area by at least 20% between baseline and follow-up (Table 4). The majority of nevi (682 of 936, 73%) increased in area by at least 20% during follow-up, whereas a minority (53 of 936, 6%) of nevi decreased in area by at least 20%. Compared with lesions that remained relatively stable in overall size between baseline and follow-up, lesions that decreased in size by greater than 20% over the course of follow-up were 60% less likely to be patterned (reticular, globular, or complex) at the baseline assessment (odds ratio¼0.4; 95% confidence interval: 0.2–0.8). Lesions that increased in size during follow-up were not more likely to be patterned at the baseline assessment compared with those that remained stable in size (odds ratio¼1.1; 95% confidence interval: 0.8–1.5). Interestingly, of the 34 nevi that became signi- ficantly fainter during follow-up, 18 nevi (53%) actually did so while increasing in overall lesion area. DISCUSSION The SONIC study is a population-based investigation of nevus epidemiology in childhood and adolescence. The student population from Framingham, Massachusetts, has a racial, ethnic, and socioeconomic makeup comparable to the general US population. 
We previously published the baseline (5th grade) analysis of the index back nevi demonstrating an interrelationship between dermoscopic pattern, nevus size, anatomic location, and pigment phenotypes (Scope et al., 2008). We also made the observation of dermoscopic patterns in normal appearing background skin (Scope et al., 2009). Herein, we report the analysis of the initial long- itudinal phase of the study, at 3-year follow-up of the children. Consistent with previous studies of children and adoles- cents (Sigg and Pelloni, 1989; Green et al., 1995; Oliveria et al., 2004; Dogan, 2007; Milne et al., 2008), we found that between 5th and 8th grades, nevus counts increased. High-resolution imaging allowed for detection of changes in individual nevi. We observed that there is ‘‘turnover’’ or volatility of nevi in children, albeit with an overall increase in nevus counts. Children with higher back nevus counts had greater nevus volatility, being more likely to both develop new nevi and to have nevi that disappeared during follow-up. The concept of ‘‘nevus volatility’’ is, to our knowledge, previously unreported; it would be interesting to examine in future studies whether higher nevus volatility (e.g., having more new nevi or disappearing nevi per follow-up period) is an independent predictor of melanoma risk, akin to the well-established risk factor of total nevus counts. In all, 28% of children had at least one nevus that disappeared. The observation of disappearance of nevi in children is intriguing. Nevus involution is well documented in adults and is mostly seen with advanced age; nevus counts have been shown to peak around age 30 years and thereafter decrease in numbers (MacKie et al., 1985). Suggested mechanisms to nevus involution in adults include maturation, neurotization, cellular senescence, and telomere shortening (Bataille et al., 2007; Terushkin et al., 2010). 
An immune- mediated process primarily involving T-lymphocytes (Zeff et al., 1997) has been implicated in nevus involution via a halo phenomenon. Involution via regression can be identified with dermoscopy as regression structures, namely, granularity and white scar-like areas; dermoscopic regression has been shown to correlate on histopathology with presence of melanophages and fibroplasia of the superficial dermis. Transepidermal elimination of melanocytic nests and apop- tosis of melanocytes are other speculated mechanisms of nevus involution (Kantor and Wheeland, 1987; Lee et al., 2000). We did not observe a halo phenomenon or regression structures at baseline in nevi that subsequently disappeared. Table 4. Change in lesion area between baseline and 3-year follow-up by dermoscopic pattern at baseline Dermoscopic pattern at baseline Change in lesion area Reticular Globular Homogeneous Complex Total Decreased by at least 20% 1 12 38 2 53 Row % 1.9 22.6 71.7 3.8 100 Col % 1.0 3.7 8.1 4.6 5.6 Remained within ±20% 20 71 101 9 201 Row % 10.0 35.3 50.3 4.5 100 Col % 19.6 22.1 21.6 20.4 21.5 Increased by at least 20% 81 239 329 33 682 Row % 11.9 35.0 48.2 4.8 100 Col % 79.4 74.2 70.3 75.0 72.9 Total 102 322 468 44 936 Row % 10.9 34.4 50.0 4.7 100 Col % 100 100 100 100 100 1618 Journal of Investigative Dermatology (2011), Volume 131 A Scope et al. Stability and Volatility of Nevi in Children We did, however, see many nevi fade without associated signs of halos or regression structures and speculate that some of these fading nevi may eventually disappear. The biological mechanism of fading nevi is currently not known. It is interes- ting to note that some nevi faded while growing, possibly suggesting a mechanism of senescence. It has been shown that growth driven by BRAF mutation can simultaneously induce senescence in nevi (Michaloglou et al., 2005). 
In the baseline dermoscopic analysis, we observed that two types of nevi (globular versus reticular-patterned nevi) differ in anatomic distributions and in size (Scope et al., 2008). Other studies of nevi showed a difference in distribu- tion of globular and reticular nevi between the trunk and extremities (Seidenari et al., 2006; Changchien et al., 2007). We, therefore, hypothesized that these subsets of nevi are biologically distinct. The findings of the present study support this notion. Most nevi retained the same dermoscopic classification from baseline to follow-up evaluation, whereas new index nevi demonstrated a diversity of dermoscopic patterns. Crossover of pattern between globular and reticular nevi was seen in o2% of nevi. Our study has strengths. First, we retained all students for follow-up imaging who consented for 5th grade assessment and remained in the school system at 8th grade. Second, the unique observations of nevus volatility and relative dermo- scopic pattern stability were made because of the, to our knowledge, previously unreported use of longitudinal track- ing of individual nevi with high-resolution digital photo- graphy and because of the focus on early adolescence, a period with rapid changes in nevus counts. Finally, the fact that these observations were made in a population-based cohort is more likely to make them generalizable. Our study has limitations. First, imaging of nevi was limited to the back, because overview imaging of curved surfaces, such as extremities, is technologically challenging. We are currently testing three-dimensional imaging of curved surfaces. Second, sample size was limited. In addition, although the SONIC cohort encompasses a full spectrum of pigment phenotypes from fair to dark, students lost to follow- up were more likely to have darker skin phenotype and to be non-Caucasian in ethnicity (Table 1); this limits the analysis of the impact of skin phenotype and ethnicity on nevus evolution. 
We are currently expanding the cohort, with particular attention to increasing the number of students across the phenotypic and ethnic spectrum. This will allow for a more comprehensive analysis of predictors of nevus phenotype. In addition, although we were not able to obtain more comprehensive demographic and phenotypic charac- teristics of the source population, the distribution of race/ ethnicity among the enrolled student cohort (73% White, 18% Hispanic, 4% African-American, 5% Asian) is compar- able to that of the Framingham, Massachusetts School District as a whole (70% White, 21% Hispanic, 4% African-American, 5% Asian). Third, the implications of our findings for melanoma risk are not apparent and further study is warranted. We anticipate that more students with a high- risk phenotype (e.g., with atypical nevi) will be seen with aging of the cohort. Fourth, dermoscopic imaging of index nevi samples the student’s nevi, and may not be representa- tive of the student’s signature nevus phenotype (Suh and Bolognia, 2009). With imaging of more nevi per student in the future, we hope to mitigate this potential sampling bias. Fifth, classification of dermoscopic patterns is probably dependent on the level of nevus pigmentation. Classification of nevus pattern (e.g., as homogenous or patterned) when dermoscopic structures are very faint depends on observers’ threshold. Thus, dermoscopy is likely to be an imperfect surrogate of tissue pathology, particularly for less pigmented nevi. We plan to perform dermoscopic–histopathological correlation studies to better understand the limitations of using dermoscopic pattern as proxy for tissue morphology. In addition, we analyzed change in nevus phenotype and dermoscopic pattern using only two time points over 3 years; it is likely that as more time points and longer follow-up are used to assess what has already proven to be a very dynamic process, additional insights may be gained. 
For example, dermoscopic pattern may prove to be less consistent over longer follow-up periods. To this end, the SONIC study will continue to observe this cohort to age 18 years. Finally, we did not address relationship of nevus counts and dermoscopic patterns with sun exposure. Over the 3 years of follow-up, we have obtained annual questionnaires of sun exposure from children and parents. The effects of these factors on nevi will be explored in a separate paper. In conclusion, we found that early adolescence is a period of nevus volatility. Appearance of new nevi and disappear- ance or fading of existing nevi are common events. Despite this volatility, the majority of nevi retain their baseline dermoscopic pattern. In particular, nevi with reticular and nevi with globular dermoscopic patterns appeared to be distinct subsets of nevi that were exceedingly unlikely to crossover in dermoscopic pattern. Finally, none of the nevi that disappeared, grew smaller, or faded showed dermo- scopic evidence of halo or regression. We hypothesize that non-immunological mechanisms of nevus involution exists in early adolescence, which are related to loss of pigmentation, cellular senescence or transepidermal elimination. MATERIALS AND METHODS The study was approved by the Institutional Review Board at Boston University. The study adhered to the guidelines of the Helsinki Declaration. Study population Study population included students from all 10 schools in Framing- ham, Massachusetts school system who were enrolled in 5th grade in Fall 2004. The school system offers a racial/ethnic mix similar to the general US population. A list of all 691 5th-graders (age 11 years) was obtained from the school system. Mailings were sent to all families, requesting participation, and including a description of the study, consent, and assent form (for student). Two weeks after the initial mailing, follow-up telephone calls were conducted. 
Of 691 Framingham families with a 5th grader, 443 (64%) provided written consent for the study; we were also able to reach all but 10% of the 248 non-participants. In addition, skin examination and digital photography of participating students (described in detail below) were carried out in 5th grade (baseline) and 8th grade (follow-up) during the schools' annual scoliosis examinations (mandatory in Massachusetts); skin type and demographics were assessed for all 5th grade students receiving the scoliosis examination, including non-participants. Participating students (n = 443) were more likely to be white (70 versus 58%, P < 0.0014), fair or very fair (63 versus 38%, P = 0.0004), and male (62 versus 48%, P < 0.0001) than non-participants (n = 248). Additional details about the approach used in planning and implementing this study and reasons for nonparticipation have been described (Geller et al., 2007; Oliveria et al., 2009). Data collection At both 5th and 8th grades, students underwent a brief visual examination by the study nurse to assess hair, eye, and skin color and freckling, and standardized high-resolution overview digital photography of the back was performed. Photography at baseline also included close-up clinical and dermoscopic images of up to 4 index back nevi (the largest nevus on the upper and lower back and one randomly selected nevus from the upper and lower back). The definition of upper and lower back and the method of selection of the random nevus have been previously described (Scope et al., 2008). Close-up clinical and dermoscopic photography of index nevi was repeated at 8th grade and, in addition, dermoscopic imaging of up to two new nevi (i.e., that were not present at 5th grade) was obtained, one from the upper and one from the lower back. www.jidonline.org 1619 A Scope et al. Stability and Volatility of Nevi in Children
Digital photography was performed with a Phase One P25 camera back, Hasselblad 503w camera system, and 1 kW studio flash system (Canfield Scientific, Fairfield, NJ). Dermoscopic images were obtained using a Fuji S2 SLR digital camera and 60 mm Macro Nikkor lens with an Epi-Lume dermoscopy attachment (Canfield Scientific). Color-coded adhesive dots were placed inferior to index nevi. These dots permitted consistent tracking, at follow-up photography, of the locations of the four index nevi that had been selected in 5th grade by real-time reference to the original tagged overview photograph. These dots also served as fiducial markers for spectral and spatial calibration of overview images on review and for assessment of nevus surface contour (e.g., flat or raised). Images were archived in DermaGraphix, a database housed on a secured server, at a resolution of 3 million pixels (Canfield Scientific). Children completed self-administered questionnaires. The survey included questions on demographics, phenotype (skin type, eye, and hair color), sun sensitivity, sun exposure, sun protection practices (including use of hats and sunscreen, limiting time in the sun, and seeking shade), and frequency of sunburns. A parent survey was completed by one of the child's parents. Parents were also asked questions regarding their child's sun protection practices and exposure and family history of skin cancer. By collecting data from both parent and child, we were able to assess concordance and examine both sets of responses to identify the best source for each variable of interest. In all, 432 student surveys and 424 parent surveys were obtained. Image analysis Image analysis was performed on high-resolution monitors. Back nevus counts were performed with images projected on two monitors, side-by-side. Reference lines were overlaid on back images to create quadrants for evaluation. All images were viewed at 100% magnification. Total back nevus counts were assessed at both time points.
Individual nevus size was assessed on the monitor using a standardized diameter scale. In addition, side-by-side comparisons of images were made to track longitudinal changes in nevi, thereby identifying new and disappearing nevi, and noting stable nevi and nevi that increased or decreased in size during follow-up. We used anatomic landmarks (e.g., angle of neck and shoulders, scapulae) to triangulate and compare the location of individual nevi on both images. Baseline and follow-up dermoscopic images of individual index nevi were also magnified 100% on the monitors and viewed side-by-side. Dermoscopic images were jointly reviewed by two dermatologists who analyzed each image and compared images for global dermoscopic pattern, color, and dermoscopic structures. Dermoscopic images of new nevi at 8th grade were assessed for the same parameters, without side-by-side comparison. We assigned nevi into three patterned categories based on global dermoscopic pattern (reticular, globular, and complex [reticular–globular]) or, if they lacked these patterns, into a fourth category of homogeneous nevi. Patterned nevi were classified as follows: (i) reticular: the lesion showed pigment network (diffuse or patchy) and no globules were seen; (ii) globular: the lesion showed globules and no network was seen. Of note, globules were considered present only if three or more globules were observed; (iii) complex pattern: both network and globules were seen, with or without structureless areas. Of note, nevi with a reticular pattern and a peripheral rim of globules, known to be a pattern of growing reticular nevi, were coded not as complex but as reticular. Homogeneous (structureless) nevi were defined as lesions in which neither network nor globules were seen. Nevus size (total lesion area) was calculated from dermoscopic images for each index nevus using a measurement tool incorporated into the image archiving software (Mirror, Canfield Scientific). Lesion borders were visually identified.
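The four-category pattern assignment above amounts to a small set of deterministic rules. The following Python sketch restates those rules; the function and argument names are ours, not the study's, and real classification was of course done by the reviewing dermatologists, not software.

```python
def classify_dermoscopic_pattern(has_network: bool, globule_count: int,
                                 peripheral_rim_only: bool = False) -> str:
    """Assign a global dermoscopic pattern using the study's stated rules.

    Globules count as 'present' only when three or more are observed.
    A reticular lesion whose globules form only a peripheral rim (the
    pattern of a growing reticular nevus) is coded reticular, not complex.
    """
    has_globules = globule_count >= 3
    if has_network and not has_globules:
        return "reticular"
    if has_globules and not has_network:
        return "globular"
    if has_network and has_globules:
        # Network plus globules is complex, except for the
        # reticular-with-peripheral-rim special case.
        return "reticular" if peripheral_rim_only else "complex"
    return "homogeneous"   # neither network nor globules seen
```

Note that a lesion with network and only one or two globules falls into the reticular category, because fewer than three globules do not count as present.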
A random sample of 50 study lesions was selected, and lesion measurements were completed by a second dermatologist. Concordance for lesion measurements between the two reviewers was high (rho = 0.96). Lesion area was further classified as percent change compared with the baseline assessment. We considered a change in nevus area greater than ±20% to be clinically significant; this threshold also reduced the likelihood of misclassification of change in nevus area due to inherent measurement inaccuracy. Therefore, change in lesion size was categorized into three groups: (1) lesions that decreased in area by more than 20% during follow-up, (2) lesions that remained within ±20% of the baseline measurement, and (3) lesions that grew by at least 20%. Statistical analysis Descriptive statistics were used to characterize the study population. Descriptive frequencies were calculated to assess student distribution and lesion characteristics. T-tests and χ2 statistics were used to compare characteristics of students lost to follow-up with those who were retained. Lesions were classified by their global dermoscopic pattern. In addition, lesions were also broadly classified as ever having a distinct dermoscopic pattern (reticular, globular, or complex) or never having a pattern (homogeneous). Univariate comparisons of ever having a dermoscopic pattern and baseline phenotypic characteristics of participants were completed. McNemar's χ2 tests were used to assess paired comparisons between baseline and follow-up assessments. As lesions were nested within students, mixed effect regression models were used. In these models, a variable for the student was entered as a random effect. The odds ratios estimated in the dermoscopic pattern analyses were obtained from these random effects models. Journal of Investigative Dermatology (2011), Volume 131, 1620
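The ±20% area-change categorization described above can be sketched as a one-line comparison on the percent change. The boundary case of exactly +20% is ambiguous in the text (group 2 is "within ±20%" while group 3 is "at least 20%"); the sketch below assigns it to the stable group, which is our assumption.

```python
def classify_area_change(baseline_area: float, followup_area: float) -> str:
    """Categorize follow-up lesion area relative to baseline.

    Changes greater than +/-20% are treated as clinically significant;
    a change of exactly 20% is kept in 'stable' (our assumption, since
    the paper's wording is ambiguous at the boundary).
    """
    pct_change = 100.0 * (followup_area - baseline_area) / baseline_area
    if pct_change < -20.0:
        return "decreased"   # group 1: shrank by more than 20%
    if pct_change > 20.0:
        return "grew"        # group 3: grew by more than 20%
    return "stable"          # group 2: within +/-20% of baseline
```

For example, a lesion measuring 10 mm² at baseline and 7 mm² at follow-up (a 30% decrease) falls into the decreased group.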
In these models, the dependent variable was the presence or absence of a discernable dermoscopic pattern at the baseline or follow-up evaluation. The main independent variable was skin color categorized on three levels (very fair, fair to light olive, and light brown to dark brown), with very fair students as the referent category. Student sex was included in all regression models as a potential confounding factor. All analyses were carried out using Stata v.10.1 software (Stata Corporation, College Station, TX). CONFLICT OF INTEREST The authors state no conflict of interest. ACKNOWLEDGMENTS The research was funded through NIH/NIAMS AR049342-02: "The Framingham School Nevus Study". The authors thank Dennis DaSilva and Jed Smith from Canfield Scientific. REFERENCES Argenziano G, Zalaudek I, Ferrara G et al. (2007) Proposal of a new classification system for melanocytic naevi. Br J Dermatol 157:217–27 Bataille V, Kato BS, Falchi M et al. (2007) Nevus size and number are associated with telomere length and represent potential markers of a decreased senescence in vivo. Cancer Epidemiol Biomarkers Prev 16:1499–502 Changchien L, Dusza SW, Agero AL et al. (2007) Age- and site-specific variation in the dermoscopic patterns of congenital melanocytic nevi: an aid to accurate classification and assessment of melanocytic nevi. Arch Dermatol 143:1007–14 Dogan G (2007) Melanocytic nevi in 2783 children and adolescents in Turkey. Pediatr Dermatol 24:489–94 English DR, Milne E, Simpson JA (2006) Ultraviolet radiation at places of residence and the development of melanocytic nevi in children (Australia). Cancer Causes Control 17:103–7 Gandini S, Sera F, Cattaruzza MS et al. (2005) Meta-analysis of risk factors for cutaneous melanoma. I. Common and atypical naevi. Eur J Cancer 41:28–44 Geller AC, Oliveria SA, Bishop M et al. (2007) Study of health outcomes in school children: key challenges and lessons learned from the Framingham Schools' Natural History of Nevi Study.
J Sch Health 77:312–8 Green A, Siskind V, Green L (1995) The incidence of melanocytic naevi in adolescent children in Queensland, Australia. Melanoma Res 5:155–60 Hofmann-Wellenhof R, Blum A, Wolf IH et al. (2001) Dermoscopic classification of atypical melanocytic nevi (Clark nevi). Arch Dermatol 137:1575–80 Kantor GR, Wheeland RG (1987) Transepidermal elimination of nevus cells. A possible mechanism of nevus involution. Arch Dermatol 123:1371–4 LaVigne EA, Oliveria SA, Dusza SW et al. (2005) Clinical and dermoscopic changes in common melanocytic nevi in school children: the Framingham school nevus study. Dermatology 211:234–9 Lee HJ, Ha SJ, Lee SJ et al. (2000) Melanocytic nevus with pregnancy-related changes in size accompanied by apoptosis of nevus cells: a case report. J Am Acad Dermatol 42:936–8 Luther H, Altmeyer P, Garbe C et al. (1996) Increase of melanocytic nevus counts in children during 5 years of follow-up and analysis of associated factors. Arch Dermatol 132:1473–8 MacKie RM, English J, Aitchison TC et al. (1985) The number and distribution of benign pigmented moles (melanocytic naevi) in a healthy British population. Br J Dermatol 113:167–74 Michaloglou C, Vredeveld LC, Soengas MS et al. (2005) BRAFE600-associated senescence-like cell cycle arrest of human naevi. Nature 436:720–4 Milne E, Simpson JA, English DR (2008) Appearance of melanocytic nevi on the backs of young Australian children: a 7-year longitudinal study. Melanoma Res 18:22–8 Oliveria SA, Satagopan JM, Geller AC et al. (2009) Study of Nevi in Children (SONIC): baseline findings and predictors of nevus count. Am J Epidemiol 169:41–53 Oliveria SA, Geller AC, Dusza SW et al. (2004) The Framingham school nevus study: a pilot study. Arch Dermatol 140:545–51 Scope A, Marghoob AA, Dusza SW et al. (2008) Dermoscopic patterns of naevi in fifth grade children of the Framingham school system. Br J Dermatol 158:1041–9 Scope A, Marghoob AA, Chen CS et al.
(2009) Dermoscopic patterns and subclinical melanocytic nests can be observed in background skin. Br J Dermatol 160:1318–21 Seidenari S, Pellacani G, Martella A et al. (2006) Instrument-, age- and site-dependent variations of dermoscopic patterns of congenital melanocytic naevi: a multicentre study. Br J Dermatol 155:56–61 Sigg C, Pelloni F (1989) Frequency of acquired melanonevocytic nevi and their relationship to skin complexion in 939 schoolchildren. Dermatologica 179:123–8 Siskind V, Darlington S, Green L et al. (2002) Evolution of melanocytic nevi on the faces and necks of adolescents: a 4 y longitudinal study. J Invest Dermatol 118:500–4 Suh KY, Bolognia JL (2009) Signature nevi. J Am Acad Dermatol 60:508–14 Terushkin V, Scope A, Halpern AC et al. (2010) Pathways to involution of nevi: insights from dermoscopic follow-up. Arch Dermatol 146:459–60 Zalaudek I, Hofmann-Wellenhof R, Soyer HP et al. (2006) Naevogenesis: new thoughts based on dermoscopy. Br J Dermatol 154:793–4 Zeff RA, Freitag A, Grin CM et al. (1997) The immune response in halo nevi. J Am Acad Dermatol 37:620–4
to respond to the survey, thus creating response bias and overestimating the beneficial effect of photography. Overall, this survey study revealed that dermatopathologists find clinical photography most beneficial in the diagnosis of inflammatory skin diseases, and they would like to receive photographs more frequently. They prefer a convenient method of delivery, most commonly a printed-out photograph attached to the requisition slip. Accepted for Publication: June 4, 2010. Author Affiliations: Department of Dermatology (Drs Mohr and Hood) and Epidemiology and Biostatistics Core (Mr Indika), Eastern Virginia Medical School, Norfolk.
Correspondence: Dr Mohr, Department of Dermatology, Eastern Virginia Medical School, 721 Fairfax Ave, Ste 200, Norfolk, VA 23507 (melinda.mohr@gmail.com). Author Contributions: All authors had full access to all of the data in the report and take responsibility for the integrity of the data and the accuracy of the data analysis. Study concept and design: Mohr and Hood. Acquisition of data: Mohr. Analysis and interpretation of data: Mohr, Indika, and Hood. Drafting of the manuscript: Mohr and Indika. Critical revision of the manuscript for important intellectual content: Hood. Statistical analysis: Indika. Study supervision: Hood. Financial Disclosure: None reported. Additional Contributions: We thank the members of the ASDP who participated in this study and the Epidemiology and Biostatistics Core at Eastern Virginia Medical School, Norfolk, and Old Dominion University, Norfolk, for assisting with statistical analysis. 1. Fogelberg A, Ioffreda M, Helm KF. The utility of digital clinical photographs in dermatopathology. J Cutan Med Surg. 2004;8(2):116-121. 2. Kutzner H, Kempf W, Schärer L, Requena L. Optimizing dermatopathologic diagnosis with digital photography and internet: the significance of clinicopathologic correlation [in German]. Hautarzt. 2007;58(9):760-768. PRACTICE GAPS Submitting Clinical Photographs to Dermatopathologists to Facilitate Interpretations Advances in immunohistochemical stains, molecular analysis, and laboratory technology have facilitated dermatopathologic diagnostic accuracy. Nevertheless, patients and clinicians are often frustrated when dermatopathologists render nonspecific diagnoses, which may lead to diagnostic and/or therapeutic uncertainty.
Given the importance of clinicopathologic correlation (CPC), a practice gap exists between what dermatopathologists desire and what the clinicians provide.1,2 Mohr et al point out that one of the most important tools used to assist accurate dermatopathologic diagnosis is the information supplied on the dermatopathology form accompanying tissue specimens. They also report that dermatopathologists find the addition of a clinical photograph useful in rendering a microscopic diagnosis, especially when dealing with inflammatory skin diseases. The use of clinical photographs may be particularly helpful when dermatopathologists receive specimens with an inadequate clinical description on the dermatopathology form, which may be more of a concern with specimens submitted by nondermatologists who have less CPC experience. Although clinical photographs are desired, it is extremely infrequent for a dermatopathologist to be provided with one. Barriers to sending clinical photographs with biopsy specimens include the time it takes to create and implement standard operating procedures (SOPs), which include identifying the body region to be photographed, obtaining consent from the patient, taking the digital photograph, downloading the photographic file, labeling the photograph, and either printing or electronically sending the picture to the pathologist. Other barriers are limited computer file storage space; the cost of obtaining 1 or more digital cameras for the physician office; and compliance with the secure data transfer standards of the Health Insurance Portability and Accountability Act and the Health Information Technology for Economic and Clinical Health Act. It is also possible that some patients may object to photography, particularly of specific body parts. This gap between what dermatopathologists desire and what the clinicians provide can be narrowed by improving the quality of information supplied by the clinician to the dermatopathologist.
Education directed at office efficiency should include instruction on efficient processes to incorporate patient photography. Mohr et al underscore that patient care will benefit when clinicians improve the quality and quantity of the information provided, and they encourage incorporation of photography as part of routine biopsy procedures. Development of a more comprehensive way of communicating information to dermatopathologists is needed. Clinician-friendly pathology forms and reminder systems to include clinical photographs may help. Considering patient volume and the increasing time limitations of office visits, it would be optimal for clinicians to train an assistant to take and process the photographs for relevant patients. The SOP should be defined for this process to assist personnel in implementation without loss of efficiency. Creating an SOP for the proper and complete provision of information, including completion of requisition forms and taking clinical photographs, will help establish uniformity of photographic information to the dermatopathologist. Hard copies of photographs are not always necessary. Digital technology provides a variety of media to safely transmit images, including secure Internet connections and storage on compact discs and flash drives, to protect the confidentiality of patient photographic information, usually considered personal health information. Data transfer between dermatologists and dermatopathologists can be optimized to maximize the quality of dermatopathology diagnosis. Melinda R. Mohr, MD S. H. Sathish Indika, MS Antoinette F. Hood, MD Alejandra Vivas, MD Robert S. Kirsner, MD, PhD (REPRINTED) ARCH DERMATOL/ VOL 146 (NO. 11), NOV 2010 WWW.ARCHDERMATOL.COM 1308 ©2010 American Medical Association. All rights reserved.
Author Affiliations: Department of Dermatology and Cutaneous Surgery, University of Miami Miller School of Medicine, Miami, Florida. Correspondence: Dr Kirsner, Department of Dermatology and Cutaneous Surgery, University of Miami Miller School of Medicine, 1600 NW 10th Ave, RMSB, Room 2023-A, Miami, FL 33136 (rkirsner@med.miami.edu). Financial Disclosure: None reported. 1. Kutzner H, Kempf W, Schärer L, Requena L. Optimizing dermatopathologic diagnosis with digital photography and internet: the significance of clinicopathologic correlation [in German]. Hautarzt. 2007;58(9):760-768. 2. Fogelberg A, Ioffreda M, Helm KF. The utility of digital clinical photographs in dermatopathology. J Cutan Med Surg. 2004;8(2):116-121. Lentiginous Melanoma In Situ Treatment With Topical Imiquimod: Need for Individualized Regimens Melanoma in situ, lentiginous type (LM), is a precursor lesion for invasive malignant melanoma, lentiginous type (LMM). Already the most prevalent subtype of in situ melanoma, LM has been shown to be increasing in incidence.1 Currently, nonsurgical patients with LM have no treatment alternative but irradiation and so must endure the associated adverse effects of this treatment. In addition, recurrence following standard therapies is unacceptably high (8%-20%).2 For these reasons, a new effective therapy for LM that provides local control, prevents progression to LMM, and decreases morbidity and mortality is clinically desirable. Small studies have reported successful treatment of LM with imiquimod, 5%, cream. The present case series highlights 15 LM lesions in 14 patients treated with topical imiquimod. Histologic tissue specimens obtained before, during, and after treatment were evaluated to assist in directing patient management and in providing objective posttreatment histopathologic response. Methods.
This study was approved by the Saint Louis University institutional review board. After diagnostic biopsy, patients were offered surgical excision or treatment with topical imiquimod, 5%. The risks of each treatment were thoroughly discussed. Patients began imiquimod therapy with topical application 5 to 7 times each week, and this regimen was altered based on clinical response. Patients kept a log of treatment days and were observed closely during treatment. Pretreatment and posttreatment assessments were performed histologically and clinically in all patients. Intratreatment 4-mm punch biopsy specimens were obtained in 6 of the 14 patients. Imiquimod treatment was discontinued only after the tumor clinically resolved with no remaining inflammatory response and biopsy specimens showed no residual tumor histologically. Results. We report 15 LM lesions in 14 patients treated effectively with imiquimod, 5%, cream as determined by clinical and histopathologic assessment. The patient demographics, treatment applications, and histologic and clinical findings are summarized in the Table. Patients were treated over 12 to 20 weeks with a range of 47 to 106 treatment applications (average, 79.5). All patients agreed to posttreatment biopsies, and 6 of the 14 agreed to intratreatment biopsies. Biopsies were performed dur-

Table.
Patient Characteristics and Treatment Summary

Patient No./Sex/Age, y | Lesion Location | Type of Lesion | Treatment Duration, d | Findings of Intratreatment Histologic Evaluation | Findings of Posttreatment Histologic Evaluation | Clinical Posttreatment Follow-up, mo
1/F/82 | Cheek | New | 65 | NP | Prominent basilar pigmentation and solar lentigo, at 2 and 13 mo, respectively | 31
2/M/90 | Cheek | Recurrence | 100 | NP | Postinflammatory hyperpigmentation, 2 mo | 28
3/M/80 | Cheek | Recurrence | 84 | NP | Dermal scar, actinic keratosis, 10 mo | 30
4/M/59 | Scalp | Recurrence | 62 | NP | Dermal fibrosis, 4 mo | 21
5/M/82 | Nose | Recurrence | 84 | NP | Dermal fibrosis, 2 mo | 20
6/M/86 | Neck | New | 106 | Postinflammatory pigment alteration, 47 d | Postinflammatory pigment alteration, 0 mo | 21
7/M/77 | Scalp | New | 60 | NP | Postinflammatory pigment alteration, 1 mo | 14
8/F/71 | Nose | New | 85 | NP | Solar lentigo, 12 mo | 32
9/F/95 | Cheek | New | 84 | Lichenoid dermatitis, postinflammatory pigment alteration, 60 d | Vacuolar interface dermatitis, PIPA, solar elastosis, 0, 2, and 4 mo, respectively | 11
10/M/78 | Cheek | Recurrence | 84 | Lichenoid dermatitis, 71 d | Solar elastosis, solar lentigo, 2 mo | 6
11/F/70 | Nose | New lesion | 47 | Lichenoid dermatitis, 26 d | Dermal lymphocytic infiltrate, 1 mo | 1
12/F/71 | Nose | New | 104 | Melanoma in situ, junctional melanocytic proliferation, dense lichenoid infiltrate, 84 d | Lichenoid dermatitis, 0 mo | 0
13/M/65 | Scalp | New | 48 | NP | Solar elastosis, actinic keratosis, 0 mo | 21
14a/M/87 | Neck | New | 100 | Melanoma in situ, 40 d | Solar elastosis, actinic keratosis, 0 mo | 1
14b/M/87 | Lateral canthus | New | 80 | NP | Solar elastosis, actinic keratosis, 0 mo | 1

Abbreviations: NP, not performed; PIPA, postinflammatory pigment alteration.
EM Advances

Photodocumentation as an emergency department documentation tool in soft tissue infection: a randomized trial

Adam Lund, BSc, MD, MEd*; Daniel Joo, BSc, MD*; Kerrie Lewis, LPN, EMR*; Yasemin Arikan, MD†; Anton Grunfeld, MD*

ABSTRACT Objectives: Current documentation methods for patients with skin and soft tissue infections receiving outpatient parenteral anti-infective therapy (OPAT) include written descriptions and drawings of the infection that may inadequately communicate clinical status. We undertook a study to determine whether photodocumentation (PD) improves the duration of outpatient treatment of skin and soft tissue infections. Methods: A single-blinded, prospective, randomized trial was conducted in the emergency departments of a community hospital and an academic tertiary centre. Participants included consecutive patients age ≥ 14 years presenting with noninvasive skin and soft tissue infections requiring OPAT. Patients in the intervention arm were treated with standard of care plus PD at each emergency physician assessment. Control subjects received care provided at the discretion of the treating physician and non-photographic documentation. The primary outcome was duration of therapy measured in half-days. The required sample size to detect a difference of one half-day was 253 patients per group (α = 0.05). Secondary outcomes included (1) completion and therapeutic failure rates, (2) patient satisfaction, and (3) physician and nurse satisfaction. Results: Enrolment was slower and follow-up rates lower than anticipated, and the trial was terminated when funds were exhausted. A total of 468 subjects with similar age and gender characteristics were enrolled, with 244 receiving the intervention and 224 in the control arm. The mean OPAT duration was similar in the two groups (3.6 days v. 3.5 days, p = 0.73).
No differences in the rate for completion and therapeutic failure were observed (71% v. 68% and < 1% for both, respectively). Survey response rates varied significantly: patients, 65%; nurses, 17%; and physicians, 87%. Physicians endorsed more comfort with their assessment and OPAT judgment with PD (65% and 64%, respectively). Physicians cited too much time lost with technological challenges, which would affect implementation in a busy ED. Conclusions: PD as an intervention is acceptable to patients and has reasonable endorsement by the majority of physicians. This trial had significant limitations that threatened the integrity of the study, so the results are inconclusive. RÉSUMÉ Objective: Current methods of documenting infections of the skin and soft tissues in patients receiving outpatient parenteral anti-infective therapy (OPAT) include written descriptions and drawings of the infected tissue, but these may not fully convey clinical status. We therefore conducted a study to determine whether photodocumentation (PD) could improve the duration of outpatient treatment of skin and soft tissue infections. Methods: A prospective, randomized, single-blinded trial was conducted in the emergency departments of a community hospital and an academic tertiary care centre. Consecutive patients aged 14 years and older with a noninvasive skin and soft tissue infection requiring OPAT took part in the trial. Subjects in the experimental group received standard care supplemented by PD at each emergency physician assessment, while subjects in the control group received care at the discretion of the treating physician, supplemented by non-photographic documentation. The primary outcome was duration of therapy, measured in half-days.
The sample size required to detect a difference of one half-day was 253 patients per group (α = 0.05). Secondary outcomes included (1) treatment completion and failure rates and (2) patient, physician, and nursing satisfaction. From the *Department of Emergency Medicine, University of British Columbia, Vancouver, BC; †Department of Internal Medicine, Division of Infectious Diseases, University of British Columbia, Vancouver, BC. Correspondence to: Dr. Daniel Joo, Emergency Department, Lions Gate Hospital, 231 E 15th Street, North Vancouver, BC V7L 2L7; djoomd@gmail.com. This article has been peer reviewed. CJEM 2013;15(6):345-352 © Canadian Association of Emergency Physicians DOI 10.2310/8000.2013.130726 ORIGINAL RESEARCH • RECHERCHE ORIGINALE CJEM • JCMU 2013;15(6) 345
D’ailleurs, les médecins ont indiqué que la PD leur avait facilité la tâche en ce qui concerne l’évaluation des lésions et la pertinence du TAIPA (65% et 64%, respective- ment). Toutefois, les médecins se sont plaints d’une trop grande perte de temps en raison de difficultés techniques, ce qui pourrait compromettre la mise en œuvre de ce type de documentation dans un service d’urgence achalandé. Conclusions: La PD comme outil d’intervention est accep- table aux yeux des patients et recueille l’appui de la majorité des médecins. Toutefois, des limites importantes sont venues entacher l’essai, mettant même en jeu son intégrité; aussi les résultats ne sont-ils pas concluants. Keywords: hospital emergency service, intravenous injec- tion, photography, randomized controlled trial, soft tissue infections Outpatient parenteral anti-infective therapy (OPAT) has emerged as an effective, safe, and cost-effective treatment option for skin and soft tissue infections (SSTIs).1–3 The increasing use of OPAT prompted the first Infectious Diseases Society of America (IDSA) practice guideline for community-based parenteral anti-infective therapy in 1997, which was recently updated in 2004.4,5 We found no clinical data on the validity or reliability of documenting response to therapy despite many recent published reviews on the topic of SSTIs.6–11 There is a paucity of data to guide physicians on when to switch to oral therapy,12 and the only available guidelines are based on expert opinion from the Clinical Resource Efficiency Support Team (CREST) in 2005.13 These guidelines suggest switching to oral therapy under the following conditions: diminishing pyrexia, less intense erythema, falling inflammatory markers, and stable comorbidities. These criteria can be difficult for emergency physicians (EPs) seeing patients for the first time to judge. 
Several studies have incorporated serial photography for chronic skin lesions such as foot ulcers, dysplastic nevi, and arterial ulcers.14–17 However, to our knowledge, no study has incorporated this tool in OPAT programs for SSTIs, nor has digital photography been formally studied in an emergency department (ED) setting. Some authors have suggested that the duration of OPAT should be 3 to 4 days and that longer therapy does not correlate with improved outcomes.12,18–20 Knowing when to stop OPAT, however, requires some clinical end point. Among SSTI interventional studies there is no consistent definition, but rather a range of end points from complete resolution of symptoms to stabilization or regression of the size of erythema.21–23 We hypothesized that photodocumentation (PD) may assist physician decision making, allowing earlier step-down to oral therapy. This in turn could lead to fewer ED visits, improved patient flow, and decreased costs. Our hypothesis is grounded in the belief that most EPs, in the absence of clear evidence of improvement, will tend to continue OPAT rather than step down to oral therapy. Photographs may provide earlier objective evidence of infection regression, signaling an appropriate time to discontinue OPAT. Conversely, if the photographs were to show no change in the appearance of an area of cellulitis, a physician who might otherwise have discontinued OPAT on the basis of the duration of therapy already given, or the patient's subjective impression of improvement, may instead continue OPAT. Our objective was to compare the duration of OPAT for SSTIs in ED patients managed with digital PD plus standard care versus standard care alone. Secondary outcomes of interest addressed OPAT completion rates and failure rates after conversion to oral therapy, physician and nurse perceptions of quality of care, documentation, and workload, and patient satisfaction.
METHODS

Design and eligibility

This prospective, randomized, single-blinded trial enrolled consecutive patients aged ≥ 14 years presenting to the ED with nonnecrotizing SSTIs requiring OPAT. Patients were excluded if they presented with necrotizing or invasive infections or non–soft tissue infections (e.g., pneumonia, pyelonephritis, osteomyelitis), were under age 14, were previously enrolled in the trial, were unable to consent, or were admitted to hospital. Ethical approval was granted by the institutional research ethics boards of both participating hospitals (one academic, one community).

Intervention

Determination of eligibility and enrolment was done by either the treating physician or the nurse. Missed cases were identified through chart review and subsequently enrolled. Subjects randomized to the PD arm had one to three photographs taken of their infected site initially and at subsequent EP assessments. Photographs were taken by the attending physician or nurse after basic training on the use of the camera equipment and previously published photographic technique.17 Instructions for camera operation and photographic technique were printed on a laminated card easily accessible to staff. A "point and shoot" camera was selected, with settings on automatic, to maximize the simplicity of the PD procedure. Clinicians were encouraged to take a wide (zoomed-out) shot to provide perspective and one or two closer shots, with the area of interest filling the frame.
Photograph rulers were available for inclusion in the shots to provide a measurement scale at the discretion of the clinicians.

Control arm

Subjects allocated to the control group received current standard-of-care documentation without the use of photography, which may have included drawings in the chart and/or skin markings with a surgical ink pen. The level of care provided to the control group was not monitored for compliance with current guidelines and varied at the discretion of the treating physician.

Randomization

The study assignments were randomized through an online random number generator in permuted blocks of 10 and distributed in concealed envelopes. Once a subject was enrolled, the enrolling nurse, EP, or research assistant handed the subject the next study pack from the pile, with group allocation revealed on opening the envelope.

Blinding

After randomization and opening of the envelopes, the assignment was nonblinded except for statistical work.

Outcomes

The primary outcome measure was duration of OPAT in days. The secondary outcome measures were (1) therapeutic failures, defined as reinstitution of OPAT within 7 days of stepping down to oral therapy; (2) patient satisfaction with care; and (3) physician and nurse satisfaction with the PD protocol.

Statistical methods

Our primary outcome was analyzed using the independent-samples t-test. Quantitative secondary outcomes were compared using chi-square testing, and Mann-Whitney U testing was performed for survey data.

Sample size

The sample size calculation was based on a mean of 3.5 treated days with standard care (twice-daily intravenous [IV] therapy) and an estimated standard deviation of 2 days of OPAT, with a clinically meaningful end point defined as a 0.5-day reduction in the mean number of treated days, which translates into a reduction of at least one ED visit to receive IV therapy.
We chose this as a relevant end point because it is the smallest measurable time interval in this setting and would still likely be meaningful to the patient, who would save a visit to the ED, with the associated wait and IV therapy in situ. We calculated a sample size of 253 subjects per group with 80% power and a two-sided α of 0.05 to detect a 0.5-day reduction in the mean number of treated days.

Data collection

All quantitative data were abstracted by staff independent of the clinical relationship but not blinded to the intervention. All subjects in both arms of the trial received six- and seven-item satisfaction questionnaires at their initial assessment (at institution of OPAT) and at the final assessment (discharge from OPAT and/or transition to oral step-down therapy), respectively. These were administered either by the nurse taking care of the patient or by the research assistant. At the end of the trial period, questionnaires were administered to all EPs and nurses staffing both departments. The survey was designed to assess any perceived change in the EPs' confidence in judging clinical progress, as well as EP and nurse overall satisfaction with the feasibility of the protocol.24 Questionnaires were created according to published guidelines25 and reviewed by three experienced clinician-researchers. Questionnaires were drafted using an online software tool (http://www.surveymonkey.com) and sent electronically to all EPs and nurses.
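The sample size calculation above can be reproduced with the standard two-sample formula for comparing means. A minimal sketch using the normal approximation (the z values 1.96 and 0.84 correspond to a two-sided α of 0.05 and 80% power); this yields 252 per group, and a small-sample t correction nudges the figure to the 253 reported here:

```python
import math

def sample_size_two_means(delta, sd, z_alpha=1.959964, z_beta=0.841621):
    """Per-group n to detect a difference `delta` between two means,
    assuming a common standard deviation `sd` (normal approximation)."""
    n = 2 * ((z_alpha + z_beta) * sd / delta) ** 2
    return math.ceil(n)

# Trial assumptions: SD = 2 days of OPAT; clinically meaningful
# difference = 0.5 days (one saved ED visit under twice-daily IV therapy).
n_per_group = sample_size_two_means(delta=0.5, sd=2.0)
print(n_per_group)  # 252 by the normal approximation; the paper reports 253
```

The formula shows why the target is so sensitive to the assumed SD: halving the detectable difference, or doubling the SD, quadruples the required enrolment.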
RESULTS

Quantitative data

The study was terminated before reaching the planned sample size because of slower than anticipated enrolment and higher loss to follow-up, which exhausted the available funding. Of 500 patients initially enrolled, 5 were protocol violations (under 14 years of age) and 27 were lost to follow-up (5%), leaving 468 patients (93% of the planned sample) in the final analysis. Of these, 244 were in the photography arm and 224 in the control arm. Recruitment occurred from October 2009 through December 2010. Baseline subject data are shown in Table 1. The mean OPAT duration was 3.6 days in the intervention group and 3.5 days in the control group (p = 0.73; Table 2). In addition, the completion rate of OPAT from the ED (as opposed to referral to an infectious diseases [ID] clinic) was similar between the two groups, at 71% and 68% (p = 0.54), respectively; therapeutic failure rates were similar as well (0.8% PD and 0.9% control, p = 0.93) (see Table 2).

Patient survey results

The patient completion rate was 87% for the entrance survey and 65% for the exit survey. Tables 3 and 4 document minimal categorical differences in the responses, with a median of 4 (agree) or 5 (strongly agree) across all questions, with the exception of neutral agreement among patients in both arms that they would prefer more attention be given to the status of their infection.

Physician survey results

Twenty-seven of the 31 eligible EPs responded to the online survey (87%) (Figure 1). Physician responses to the PD protocol were predominantly positive. Only 25% of those surveyed believed that the current standard of care represents adequate documentation and assessment of these patients. Sixty-five percent felt more comfortable with their assessment of the patient when a photograph accompanied the chart, and 64% reported improved confidence in their judgment to continue or stop OPAT.
Roughly three-quarters of EPs felt that pictures represent superior documentation. Nineteen percent of participating EPs reported that the PD protocol improved neither assessment nor confidence in decision making. When asked whether PD should be implemented routinely hereafter, 33% favoured implementation in all patients and 81% favoured it in selected patients; however, the sample size was insufficient to explore these subgroups. The majority of physicians had reservations about feasibility, citing the additional time taken to implement the protocol in a busy ED, inconsistency of photographer technique, and limited added value.

Table 1. Baseline subject demographics in the photodocumentation and control arms

                               PD            Control
Number of subjects             244           224
Age (yr), mean (SD)            49.9 (17.0)   49.2 (14.4)
Age (yr), range                15–95         14–89
Male (%)                       60            62

PD = photodocumentation.

Table 2. OPAT duration, completion rate, and treatment failure rate in the photodocumentation and control arms

                                 PD          Control     p
Mean OPAT duration, days (SD)    3.6 (2.4)   3.5 (2.3)   0.73*
Completion rate                  71%         68%         0.54†
Treatment failure                0.8%        0.9%        0.93‡

OPAT = outpatient parenteral anti-infective therapy; PD = photodocumentation.
*Two-tailed t-test [t(466) = 0.34, p = 0.73].
†Chi-square test [χ²(1, N = 468) = 0.37, p = 0.54].
‡Chi-square test [χ²(1, N = 468) = 0.007, p = 0.93].

Nurse survey results

Only 40 of the 242 nurses who were sent the registered nurse survey responded (16.5%).
Unfortunately, owing to the large pool of casual and part-time nursing staff and the inability to track which nurses treated eligible patients, we were obliged to send the survey to the entire group rather than only to those involved in the trial. Seventy percent of responding nurses agreed that the protocol was logistically feasible, 75% believed that PD did not increase wait times in the ED, and 92% supported routine implementation.

DISCUSSION

PD did not significantly reduce the duration of parenteral outpatient therapy for patients with SSTIs in a trial that was stopped prematurely because of cost overruns. Patients, physicians, and nurses perceived PD to be helpful compared with the current standard of written documentation; however, physicians were less likely to endorse it, citing reservations about technological challenges (photograph inconsistency), increased time to use, and limited value added. There are protocol-related issues that can be edifying for those who wish to pursue future evaluations of this treatment adjunct. First, the PD protocol itself encountered difficulties that may have reduced the effectiveness of the images. Although a training session was given to ED staff before the study and an instructional sheet accompanied the workstation, there was nonetheless significant variability in lighting, framing, and staging, resulting in inconsistent colour temperature and use of a flash. These factors at times made direct photograph-to-photograph comparison challenging. A solution might be to create consistent conditions for ED photography, such as an external light source, appropriate draping to minimize distracting background or light reflection, and a ruler to provide scale. Although criteria for grading image quality were not defined, higher-quality images yielded a positive endorsement from participating EPs.

Table 3. Subject survey results at initiation of OPAT, expressed as median values*

Survey question                                                        PD   Control   z score†   p
1. I am satisfied with the care I am receiving for my skin or
   soft tissue infection.                                              4    4         −0.958     0.34
2. I would prefer more attention be given to the status of my
   infection.                                                          3    3         −1.785     0.07
3. I feel the current status of my infected site was adequately
   assessed.                                                           4    4         −1.757     0.08
4. I feel the current status of my infected site can be adequately
   documented in the chart.                                            4    4         −1.471     0.14
5. I am confident that the next physician who sees me will be able
   to determine if my infection is getting better or worse or is
   unchanged.                                                          4    4         −1.108     0.27
6. I am comfortable with the concept of a photo being used in a
   medical chart to document a soft tissue infection.                  5    5         −0.536     0.60

OPAT = outpatient parenteral anti-infective therapy; PD = photodocumentation.
*1 = strongly disagree, 2 = disagree, 3 = neutral/unsure, 4 = agree, 5 = strongly agree.
†Mann-Whitney U test of mean ranks between the two study arms.

Table 4. Subject survey results at discharge from OPAT, expressed as median values*

Survey question                                                        PD   Control   z score†   p
1. I am satisfied with the care I am receiving for my skin or
   soft tissue infection.                                              5    4         −2.461     0.01
2. I feel there was adequate continuity of care from my last
   visit(s) to the current one.                                        5    4         −2.164     0.03
3. I feel the doctor was accurately able to assess the progress of
   my infected site.                                                   5    4         −2.472     0.01
4. I have no problem with the fact that I am seen by different
   doctors and nurses at each visit.                                   4    4         −2.685     0.007
5. Given the option, I would prefer digital photographs be taken
   of my infected site.                                                5    4         −5.482     0.000
6. I am comfortable with the idea of having photographs added to
   my chart to document my progress.                                   5    4         −4.910     0.000
7. Digital photography is a superior method of tracking my
   infected site.                                                      5    4         −3.401     0.001

OPAT = outpatient parenteral anti-infective therapy; PD = photodocumentation.
*1 = strongly disagree, 2 = disagree, 3 = neutral/unsure, 4 = agree, 5 = strongly agree.
†Mann-Whitney U test of mean ranks between the two study arms.

Although workload was reportedly a concern for the EPs, the time burden of such a protocol can be mitigated by streamlining the processes of photography (e.g., clear delegation of responsibility, easy access to equipment) and of appending photographs to the chart (e.g., either hard copies or digital images in an electronic medical record). Given these challenges in implementation, it may be that PD simply does not add value to the current practice of assessment and judgment of SSTIs. It is possible that the current practice of outlining the extent of erythematous spread, in combination with the patient's own perception of improvement or worsening, is all that is needed to guide the physician in the continued use of OPAT. Patients did endorse PD and stated that photographic images, in the setting of outpatient reassessments by different physicians daily, were perceived to be a useful adjunct that may have improved continuity of care. Patients were very comfortable with the idea of having photographs taken and added to their medical chart. Interestingly, however, subjects in the control group generally felt well taken care of (median score 4, or "agree") in terms of satisfaction with care, perceived accuracy of documentation, and continuity from one visit to the next. Even so, they also stated that they would have preferred to have had photographs taken.
Photographed patients reported greater satisfaction in all of the above categories, with a median score 1 point higher than that of their counterparts.

Limitations

We acknowledge several limitations to this study. First, our recruitment fell short of the targeted 506 subjects. Recruitment was a challenge because of time constraints. Although it was intended that consecutive patients would be enrolled, in reality some patients were missed (number not known). This introduced potential for sampling bias into our study. Reasons for not enrolling certain patients related mostly to ED flow and volume but may also have included factors such as perceived willingness to give consent, region of the body affected, availability of the camera equipment, and personal biases for or against the study. To maintain an adequate rate of recruitment, small prizes were offered as incentives to top enrollers, thus introducing some recruitment bias as well. Owing to limited time and resources, valuable covariate data pertaining to the premorbid medical complexity of patients (e.g., diabetes, immunosuppression, resistant organism) were not collected, which meant that it was not possible to perform an adjusted analysis to explore potential benefits in subgroups of SSTI patients.

Figure 1. Emergency physician survey responses (n = 27). DP = digital photography; ED = emergency department; ERH = Eagle Ridge Hospital; OPAT = outpatient parenteral anti-infective therapy; PD = photodocumentation; RCH = Royal Columbian Hospital; SSTI = skin and soft tissue infection.
Additionally, the lack of a standardized protocol for the duration of parenteral therapy before switching to oral therapy may have confounded our results in an unpredictable way, as the current standard of care was left to physician discretion. Compliance with current guidelines was not measured in either arm, so there is no assurance that the standard of care was uniformly applied in all eligible and enrolled cases. Unfortunately, the protocol had to be changed after the 240th subject was enrolled, during a planned interim review of the data and methods (no statistical analyses were done). When the protocol was initially developed, our institutional practice was for EPs to reassess OPAT patients every 24 to 48 hours, thus allowing 1 to 2 days between each set of photographs. After initiation of the trial, however, administrative requirements changed the practice such that EPs were to reassess patients on each visit (once or twice daily). Hence, photographs with each EP assessment meant taking them at least once a day, which resulted in (1) minimal change from one photograph to the next and (2) staff resistance to the unnecessarily labour-intensive protocol. From this point to the end of the study, EPs were encouraged to repeat photographs every 2 days, or more often at their own discretion, depending on whether a noticeable change was present. The timing of the imaging was inconsistent across this subgroup of patients. By design, physicians, nurses, and patients became unblinded to treatment group allocation. This opened the door to bias on the part of the treating physician and nurse, who may have altered their management based on knowledge of group allocation or preconceived ideas of the value of photography rather than solely on the presence or absence of photographs. Similarly, patients in the intervention arm knew they were in a trial assessing the impact of PD and may have altered their perceptions as a result of that knowledge.
However, the finding that differences in satisfaction existed only on the exit survey and not on the initial survey (see Table 3) argues against a major patient bias in this regard. Finally, 30% of patients in either group did not complete OPAT in the ED (i.e., step down to oral therapy). This represents a large subset of patients for whom ultimate outcome data are missing. Most of these patients (66% and 76% in the PD and control groups, respectively; data not shown) were referred to and thereafter followed by ID clinics. Some of the remainder may have been noncompliant with therapy and lost to follow-up, whereas others may have finished their care at another institution or a physician's office. Given these data and the limitations discussed above, the results do not support a change in current practice.

Future directions

Further studies should address the limitations of this feasibility study before implementation. A more rigorous scientific design, with agreement from the treating physicians to implement a standardized treatment protocol in a uniform way, would provide a definitive answer on whether this intervention is feasible and effective. Securing adequate funding to recruit the required sample size and optimize follow-up may identify subgroups in which a clearer benefit of PD can be demonstrated, for example, complicated subsets of SSTIs such as those refractory to first-line treatment or patients with premorbid conditions complicating their course of disease. This intervention, if proven superior or at least equivalent to the current standard, has the potential to be helpful in remote regions where physician access is limited, in patients with reduced mobility who cannot easily commute to a clinic, or in home outreach OPAT programs run by nurses or via IV pumps.
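The primary-outcome comparison in Table 2 can be checked approximately from the published summary statistics alone, using the pooled two-sample t statistic named in the Statistical methods. A minimal pure-Python sketch; because the published means and SDs are rounded to one decimal place, the computed statistic only approximates the reported t(466) = 0.34:

```python
import math

def pooled_t(mean1, sd1, n1, mean2, sd2, n2):
    """Two-sample t statistic with a pooled variance estimate; returns (t, df)."""
    df = n1 + n2 - 2
    pooled_var = ((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / df
    se = math.sqrt(pooled_var * (1 / n1 + 1 / n2))
    return (mean1 - mean2) / se, df

# Summary statistics from Table 2 (PD vs. control), rounded in the paper
t, df = pooled_t(3.6, 2.4, 244, 3.5, 2.3, 224)
print(f"t({df}) = {t:.2f}")  # same order of magnitude as, but not exactly, 0.34
```

With roughly 230 subjects per group the standard error of the difference is about 0.22 days, which is why a 0.1-day observed difference is nowhere near significance.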
CONCLUSIONS

Implementation of this randomized controlled trial revealed significant limitations in its design, in the evolution of care, and in recruitment and follow-up. We were unable to demonstrate that a PD protocol in addition to standard care for patients receiving OPAT for SSTIs was superior to standard care alone with respect to duration of therapy. PD was perceived by the patients, EPs, and nurses to be helpful. The protocol implementation was limited by funding resources and did not achieve the required sample size, had loss to follow-up of greater than 1%, and lacked a standardized clinical approach to the OPAT population across all physicians in the control and treatment arms. Future studies comparing PD of SSTIs should address the shortcomings outlined in this article before implementation.

Acknowledgements: We would like to thank Mr. Michael Wasdell for his statistical support and Fonda Charters, Mathilda Vandermey, Nicole Lascele, and Win Nguyen for their enormous assistance in supporting our study. We would also like to give special thanks to the emergency physicians and nursing staff at Eagle Ridge and Royal Columbian hospitals for their invaluable contributions to patient enrolment, data gathering, photograph acquisition, and facilitation of the forward movement of the study. We gratefully acknowledge Dr. Gary Andolfatto for reviewing the manuscript before submission.
Competing interests: We thank the Columbian Emergency Physicians' Association for their financial support of this project.

REFERENCES

1. Wai AO, Frighetto L, Marra CA, et al. Cost analysis of an adult outpatient parenteral antibiotic therapy (OPAT) programme. A Canadian teaching hospital and Ministry of Health perspective. Pharmacoeconomics 2000;18:451-7, doi:10.2165/00019053-200018050-00004.
2. Deery HG. Outpatient parenteral anti-infective therapy for skin and soft-tissue infections. Infect Dis Clin North Am 1998;12:935-49, doi:10.1016/S0891-5520(05)70029-5.
3. Yong C, Fisher DA, Sklar GE, et al. A cost analysis of outpatient parenteral antibiotic therapy (OPAT): an Asian perspective. Int J Antimicrob Agents 2009;33:46-51, doi:10.1016/j.ijantimicag.2008.07.016.
4. Williams DN, Rehm AD, Tice AD, et al. Practice guidelines for community-based parenteral anti-infective therapy. Clin Infect Dis 1997;25:787-801, doi:10.1086/515552.
5. Tice AD, Rehm SJ, Dalovisio JR, et al. Practice guidelines for outpatient parenteral antimicrobial therapy. Clin Infect Dis 2004;38:1651-72, doi:10.1086/420939.
6. Abrahamian FM, Talan DA, Moran GJ. Management of skin and soft-tissue infections in the emergency department. Infect Dis Clin North Am 2008;22:89-116, doi:10.1016/j.idc.2007.12.001.
7. Eisenstein BI. Treatment challenges in the management of complicated skin and soft-tissue infections. Clin Microbiol Infect 2008;14 Suppl 2:17-25.
8. Brook I. Microbiology and management of soft tissue and muscle infections. Int J Surg 2008;6:328-38, doi:10.1016/j.ijsu.2007.07.001.
9. Stevens DL, Eron LL. Cellulitis and soft-tissue infections. Ann Intern Med 2009;150:ITC11.
10. Stevens DL, Bisno AL, Chambers HF, et al. Practice guidelines for the diagnosis and management of skin and soft-tissue infections. Clin Infect Dis 2005;41:1373-406, doi:10.1086/497143.
11. Phoenix G, Das S, Joshi M. Diagnosis and management of cellulitis. BMJ 2012;345:e955.
12. Eron LJ, Lipsky BA, Low DE, et al.
Managing skin and soft tissue infections: expert panel recommendations on key decision points. J Antimicrob Chemother 2003;52 Suppl S1:i3-17, doi:10.1093/jac/dkg466.
13. Clinical Resource Efficiency Support Team (2005). Guidelines on the management of cellulitis in adults. Available at: http://www.acutemed.co.uk/docs/cellulitis%20guidelines,%20CREST,%2005.pdf (accessed February 2013).
14. Brem H, Kirsner RS, Falanga V. Protocol for the successful treatment of venous ulcers. Am J Surg 2004;188(1 Suppl 1):1S-8S, doi:10.1016/S0002-9610(03)00299-X.
15. Brem H, Sheehan P, Rosenberg HJ, et al. Evidence-based protocol for diabetic foot ulcers. Plast Reconstr Surg 2006;117(7 Suppl):193S-209S.
16. Gupta M, Berk DR, Gray C, et al. Morphologic features and natural history of scalp nevi in children. Arch Dermatol 2010;146:506-11, doi:10.1001/archdermatol.2010.88.
17. Rennert R, Golinko M, Kaplan D, et al. Standardization of wound photography using the wound electronic medical record. Adv Skin Wound Care 2009;22:32-8, doi:10.1097/01.ASW.0000343718.30567.cb.
18. Ahkee S, Smith S, Newman D. Early switch from intravenous to oral antibiotics in hospitalized patients with infections: a 6-month prospective study. Pharmacotherapy 1997;17:569-75.
19. Cox NH, Colver GB, Paterson WD. Management and morbidity of cellulitis of the leg. J R Soc Med 1998;91:634-7.
20. Eron LJ. The admission, discharge and oral switch decision processes in patients with skin and soft tissue infections. Curr Treat Options Infect Dis 2003;5:245-50.
21. Brown G, Chamberlain R, Goulding J, et al. Ceftriaxone versus cefazolin with probenecid for severe skin and soft tissue infections. J Emerg Med 1996;14:547-51, doi:10.1016/S0736-4679(96)00126-6.
22. Corwin P, Toop L, McGeoch G, et al. Randomised controlled trial of intravenous antibiotic treatment for cellulitis at home compared with hospital. BMJ 2005;330:129, doi:10.1136/bmj.38309.447975.EB.
23. Grayson ML, McDonald M, Gibson K, et al.
Once-daily intravenous cefazolin plus oral probenecid is equivalent to once-daily intravenous ceftriaxone plus oral placebo for the treatment of moderate-to-severe cellulitis in adults. Clin Infect Dis 2002;34:1440-8, doi:10.1086/340056.
24. Lund A. Digital photography IVT. Available at: http://www.rchemerg.com/Academics/Academic-Projects-at-ERH-and-RCH/Digital-Photography-IVT (accessed June 7, 2013).
25. Burns K, Duffett M, Kho ME, et al. A guide for the design and conduct of self-administered surveys of clinicians. Can Med Assoc J 2008;179:245-52, doi:10.1503/cmaj.080372.
Sheetfed Coated v2) /sRGBProfile (sRGB IEC61966-2.1) /CannotEmbedFontPolicy /Error /CompatibilityLevel 1.4 /CompressObjects /Off /CompressPages true /ConvertImagesToIndexed false /PassThroughJPEGImages true /CreateJobTicket false /DefaultRenderingIntent /Default /DetectBlends true /DetectCurves 0.1000 /ColorConversionStrategy /LeaveColorUnchanged /DoThumbnails false /EmbedAllFonts true /EmbedOpenType false /ParseICCProfilesInComments true /EmbedJobOptions true /DSCReportingLevel 0 /EmitDSCWarnings false /EndPage -1 /ImageMemory 1048576 /LockDistillerParams false /MaxSubsetPct 100 /Optimize false /OPM 1 /ParseDSCComments false /ParseDSCCommentsForDocInfo true /PreserveCopyPage true /PreserveDICMYKValues true /PreserveEPSInfo true /PreserveFlatness true /PreserveHalftoneInfo false /PreserveOPIComments false /PreserveOverprintSettings true /StartPage 1 /SubsetFonts true /TransferFunctionInfo /Apply /UCRandBGInfo /Remove /UsePrologue false /ColorSettingsFile () /AlwaysEmbed [ true ] /NeverEmbed [ true ] /AntiAliasColorImages false /CropColorImages true /ColorImageMinResolution 150 /ColorImageMinResolutionPolicy /OK /DownsampleColorImages true /ColorImageDownsampleType /Bicubic /ColorImageResolution 600 /ColorImageDepth 8 /ColorImageMinDownsampleDepth 1 /ColorImageDownsampleThreshold 1.50000 /EncodeColorImages true /ColorImageFilter /FlateEncode /AutoFilterColorImages false /ColorImageAutoFilterStrategy /JPEG /ColorACSImageDict << /QFactor 0.15 /HSamples [1 1 1 1] /VSamples [1 1 1 1] >> /ColorImageDict << /QFactor 0.40 /HSamples [1 1 1 1] /VSamples [1 1 1 1] >> /JPEG2000ColorACSImageDict << /TileWidth 256 /TileHeight 256 /Quality 30 >> /JPEG2000ColorImageDict << /TileWidth 256 /TileHeight 256 /Quality 30 >> /AntiAliasGrayImages false /CropGrayImages true /GrayImageMinResolution 150 /GrayImageMinResolutionPolicy /OK /DownsampleGrayImages true /GrayImageDownsampleType /Bicubic /GrayImageResolution 600 /GrayImageDepth 8 /GrayImageMinDownsampleDepth 2 
/GrayImageDownsampleThreshold 1.50000 /EncodeGrayImages true /GrayImageFilter /FlateEncode /AutoFilterGrayImages false /GrayImageAutoFilterStrategy /JPEG /GrayACSImageDict << /QFactor 0.15 /HSamples [1 1 1 1] /VSamples [1 1 1 1] >> /GrayImageDict << /QFactor 0.40 /HSamples [1 1 1 1] /VSamples [1 1 1 1] >> /JPEG2000GrayACSImageDict << /TileWidth 256 /TileHeight 256 /Quality 30 >> /JPEG2000GrayImageDict << /TileWidth 256 /TileHeight 256 /Quality 30 >> /AntiAliasMonoImages false /CropMonoImages true /MonoImageMinResolution 1200 /MonoImageMinResolutionPolicy /OK /DownsampleMonoImages true /MonoImageDownsampleType /Bicubic /MonoImageResolution 1200 /MonoImageDepth -1 /MonoImageDownsampleThreshold 1.50000 /EncodeMonoImages true /MonoImageFilter /CCITTFaxEncode /MonoImageDict << /K -1 >> /AllowPSXObjects false /CheckCompliance [ /PDFX1a:2003 ] /PDFX1aCheck false /PDFX3Check false /PDFXCompliantPDFOnly false /PDFXNoTrimBoxError false /PDFXTrimBoxToMediaBoxOffset [ 34.02000 34.02000 34.02000 34.02000 ] /PDFXSetBleedBoxToMediaBox false /PDFXBleedBoxToTrimBoxOffset [ 8.49600 8.49600 8.49600 8.49600 ] /PDFXOutputIntentProfile (Euroscale Coated v2) /PDFXOutputConditionIdentifier (FOGRA1) /PDFXOutputCondition () /PDFXRegistryName (http://www.color.org) /PDFXTrapped /False /CreateJDFFile false /SyntheticBoldness 1.000000 /Description << /DEU /FRA /JPN /PTB /DAN /NLD /ESP /SUO /ITA /NOR /SVE /ENU (Settings for the Rampage workflow.) >> >> setdistillerparams << /HWResolution [2400 2400] /PageSize [612.000 792.000] >> setpagedevice work_kabcbexhkzgxdfzjclzlblvoiu ---- [PDF] A 1,320-nm Nd: YAG Laser for Improving the Appearance of Onychomycosis | Semantic Scholar Skip to search formSkip to main content> Semantic Scholar's Logo Search Sign InCreate Free Account You are currently offline. Some features of the site may not work correctly. 
DOI: 10.1097/DSS.0000000000000189. Corpus ID: 36748526.

A 1,320-nm Nd:YAG Laser for Improving the Appearance of Onychomycosis. A. Ortiz, Sam V. Truong, K. Serowka, K. Kelly. Dermatologic Surgery 2014;40:1356-1360.

BACKGROUND: Onychomycosis is a therapeutic challenge because of the toxicities of systemic medications. This has led to the investigation of light-based technologies as safe and effective alternative treatment modalities. OBJECTIVE: The purpose of this study was to determine the safety and efficacy of 4 treatments with a 1,320-nm neodymium:yttrium aluminum garnet (Nd:YAG) laser in improving the appearance of onychomycosis. MATERIALS AND METHODS: This study was a 24-week, single-center randomized…

Topics: Onychomycosis; Adverse reaction to drug; Neodymium; Hypertrophy; Yttrium; Structure of nail of toe; CD244 protein, human.

Paper mentions (interventional clinical trial): 1320 nm Nd:YAG Laser for Improving the Appearance of Onychomycosis. The purpose of this clinical study is to improve the appearance of onychomycosis and morphology of the nail (fungal infection). The researcher can use a light based therapy to… Conditions: fungal nail infection. Intervention: device. University of California, Irvine, December 2011 - December 2012.

Citations (42; selected):
- Efficacy and safety of 1064-nm Nd:YAG laser in treatment of onychomycosis. R. Wanitphakdeedecha, K. Thanomkitti, S. Bunyaratavej, W. Manuskiatti. Journal of Dermatological Treatment, 2016.
- The effect of long-pulsed Nd:YAG laser for the treatment of onychomycosis. G. Okan, Nagehan Tarıkçı, G. Gokdemir. Journal of the American Podiatric Medical Association, 2017.
- Real-world efficacy of 1064-nm Nd:YAG laser for the treatment of onychomycosis. J. Rivers, Brianne Vestvik, J. Berkowitz. Journal of Cutaneous Medicine and Surgery, 2017.
- Combination therapy for onychomycosis using a fractional 2940-nm Er:YAG laser and 5% amorolfine lacquer. J. Zhang, S. Lu, et al. Lasers in Medical Science, 2016.
- The innovative management of onychomycosis by the use of diode laser 808-nm: a pilot study. A. Mosbeh, Ahmed M. Al-Adl, H. Abdo. 2017.
- 1340 nm laser therapy for onychomycosis: negative results of prospective treatment of 72 toenails and a literature review. Graciela Araújo Do Espírito-Santo, D. Leite, H. D. Hoffmann-Santos, Luciana Basili Dias, R. Hahn. Journal of Clinical and Aesthetic Dermatology, 2017.
- Treatment of onychomycosis using a 1064-nm diode laser with or without topical antifungal therapy: a single-center, retrospective analysis in 56 patients. G. C. Weber, P. Firouzi, et al. European Journal of Medical Research, 2018.
- A critical review of improvement rates for laser therapy used to treat toenail onychomycosis. A. Gupta, S. Versteeg. Journal of the European Academy of Dermatology and Venereology, 2017.
- Laser treatment for onychomycosis. Wei-wei Ma, C. Si, et al. Medicine, 2019.
- The effectiveness of laser therapy in onychomycosis patients: an evidence-based case report. Rizky Lendl Prayogo, Evangelina Lumban Gaol, et al. 2017.

References (showing 1-10 of 16):
- Long-pulse Nd:YAG 1064-nm laser treatment for onychomycosis. R. Zhang, Dong-kun Wang, F. Zhuo, Xiao-han Duan, X. Zhang, J. Zhao. Chinese Medical Journal, 2012.
- Novel laser therapy in treatment of onychomycosis. Jasmina Kozarev. 2010.
- Oral treatments for toenail onychomycosis: a systematic review. F. Crawford, P. Young, et al. Archives of Dermatology, 2002.
- Double blind, randomised study of continuous terbinafine compared with intermittent itraconazole in treatment of toenail onychomycosis. E. Evans, B. Sigurgeirsson. 1999.
- Comparison of diagnostic methods in the evaluation of onychomycosis. I. Haghani, T. Shokohi, Z. Hajheidari, A. Khalilian, Seyed Reza Aghili. Mycopathologia, 2013.
- Double blind, randomised study of continuous terbinafine compared with intermittent itraconazole in treatment of toenail onychomycosis. The LION Study Group. E. G. Evans, B. Sigurgeirsson. BMJ, 1999.
- Device-based therapies for onychomycosis treatment. A. Gupta, F. Simpson. Skin Therapy Letter, 2012.
- Epidemiology of onychomycosis in special-risk populations. L. Levy. Journal of the American Podiatric Medical Association, 1997.
- Quantification of temperature and injury response in thermal therapy and cryosurgery. X. He, J. Bischof. Critical Reviews in Biomedical Engineering, 2003.
- The cellular and molecular basis of hyperthermia. B. Hildebrandt, P. Wust, et al. Critical Reviews in Oncology/Hematology, 2002.
work_k3c6ehq2krg5pjurkuuh5cxe64 ---- The value of digital imaging in diabetic retinopathy

Methodology. The value of digital imaging in diabetic retinopathy. PF Sharp, S Wallace, J Olson, K Goatman, F Strachan, A Grant, J Hipwell, N Waugh, A Ludbrook, K McHardy, M O'Donnell, JV Forrester. Health Technology Assessment 2003; Vol. 7: No. 30. NHS R&D HTA Programme.

Copyright notice: © Queen's Printer and Controller of HMSO 2003. HTA reports may be freely reproduced for the purposes of private research and study and may be included in professional journals provided that suitable acknowledgement is made and the reproduction is not associated with any form of advertising. Violations should be reported to hta@soton.ac.uk. Applications for commercial reproduction should be addressed to HMSO, The Copyright Unit, St Clements House, 2-16 Colegate, Norwich NR3 1BQ.
HTA: The value of digital imaging in diabetic retinopathy

PF Sharp,1* S Wallace,6 J Olson,2 K Goatman,3 F Strachan,3,4 A Grant,6 J Hipwell,3 N Waugh,7 A Ludbrook,5 K McHardy,8 M O'Donnell,6 JV Forrester9

1 Department of Medical Physics, University of Aberdeen, UK
2 Department of Ophthalmology, Grampian Universities NHS Trust, UK
3 Department of Biomedical Physics and Bioengineering, University of Aberdeen, UK
4 Departments of Ophthalmology and Diabetes, Grampian Universities NHS Trust
5 Health Economics Research Unit, University of Aberdeen, UK
6 Health Services Research Unit, University of Aberdeen, UK
7 Scottish Health Purchasing Information Centre, Grampian Health Board, UK
8 Consultant, Grampian University Hospitals NHS Trust, UK
9 Department of Ophthalmology, University of Aberdeen, UK

* Corresponding author

Declared competing interests of authors: none

Published November 2003

This report should be referenced as follows: Sharp PF, Olson J, Strachan F, Hipwell J, Ludbrook A, O'Donnell M, et al. The value of digital imaging in diabetic retinopathy. Health Technol Assess 2003;7(30).

Health Technology Assessment is indexed in Index Medicus/MEDLINE and Excerpta Medica/EMBASE.

NHS R&D HTA Programme

The NHS R&D Health Technology Assessment (HTA) Programme was set up in 1993 to ensure that high-quality research information on the costs, effectiveness and broader impact of health technologies is produced in the most efficient way for those who use, manage and provide care in the NHS. Initially, six HTA panels (pharmaceuticals, acute sector, primary and community care, diagnostics and imaging, population screening, methodology) helped to set the research priorities for the HTA Programme.
However, during the past few years there have been a number of changes in and around NHS R&D, such as the establishment of the National Institute for Clinical Excellence (NICE) and the creation of three new research programmes: Service Delivery and Organisation (SDO); New and Emerging Applications of Technology (NEAT); and the Methodology Programme. This has meant that the HTA panels can now focus more explicitly on health technologies (‘health technologies’ are broadly defined to include all interventions used to promote health, prevent and treat disease, and improve rehabilitation and long-term care) rather than settings of care. Therefore the panel structure was replaced in 2000 by three new panels: Pharmaceuticals; Therapeutic Procedures (including devices and operations); and Diagnostic Technologies and Screening. The HTA Programme will continue to commission both primary and secondary research. The HTA Commissioning Board, supported by the National Coordinating Centre for Health Technology Assessment (NCCHTA), will consider and advise the Programme Director on the best research projects to pursue in order to address the research priorities identified by the three HTA panels. The research reported in this monograph was funded as project number 94/18/05. The views expressed in this publication are those of the authors and not necessarily those of the HTA Programme or the Department of Health. The editors wish to emphasise that funding and publication of this research by the NHS should not be taken as implicit support for any recommendations made by the authors. HTA Programme Director: Professor Kent Woods Series Editors: Professor Andrew Stevens, Dr Ken Stein, Professor John Gabbay, Dr Ruairidh Milne, Dr Chris Hyde and Dr Rob Riemsma Managing Editors: Sally Bailey and Sarah Llewellyn Lloyd The editors and publisher have tried to ensure the accuracy of this report but do not accept liability for damages or losses arising from material published in this report. 
They would like to thank the referees for their constructive comments on the draft document.

ISSN 1366-5278

© Queen's Printer and Controller of HMSO 2003. This monograph may be freely reproduced for the purposes of private research and study and may be included in professional journals provided that suitable acknowledgement is made and the reproduction is not associated with any form of advertising. Applications for commercial reproduction should be addressed to HMSO, The Copyright Unit, St Clements House, 2-16 Colegate, Norwich, NR3 1BQ.

Published by Gray Publishing, Tunbridge Wells, Kent, on behalf of NCCHTA. Printed on acid-free paper in the UK by St Edmundsbury Press Ltd, Bury St Edmunds, Suffolk.

Criteria for inclusion in the HTA monograph series: Reports are published in the HTA monograph series if (1) they have resulted from work commissioned for the HTA Programme, and (2) they are of a sufficiently high scientific quality as assessed by the referees and editors. Reviews in Health Technology Assessment are termed 'systematic' when the account of the search, appraisal and synthesis methods (to minimise biases and random errors) would, in theory, permit the replication of the review by others.

Abstract

Objectives: To assess the performance of digital imaging, compared with other modalities, in screening for and monitoring the development of diabetic retinopathy.

Design: All imaging was acquired at a hospital assessment clinic. Subsequently, study optometrists examined the patients in their own premises. A subset of patients also had fluorescein angiography performed every 6 months.

Setting: Research clinic at the hospital eye clinic and optometrists' own premises.
Participants: The study comprised 103 patients who had type 1 diabetes mellitus, 481 who had type 2 diabetes mellitus and two who had secondary diabetes mellitus; 157 (26.8%) had some form of retinopathy ('any') and 58 (9.9%) had referable retinopathy.

Interventions: A repeat assessment was carried out of all patients 1 year after their initial assessment. Patients who had more severe forms of retinopathy were monitored more frequently for evidence of progression.

Main outcome measures: Detection of retinopathy, progression of retinopathy and determination of when treatment is required.

Results: Manual grading of 35-mm colour slides produced the highest sensitivity and specificity figures, with optometrist examination recording most false negatives. Manual and automated analysis of digital images had intermediate sensitivity. Both manual grading of 35-mm colour slides and digital images gave sensitivities of over 90% with few false positives. Digital imaging produced 50% fewer ungradable images than colour slides. This part of the study was limited, as patients with the more severe levels of retinopathy opted for treatment. There was an increase in the number of microaneurysms in those patients who progressed from mild to moderate retinopathy. There was no difference between the turnover rate of either new or regressed microaneurysms for patients with mild or with sight-threatening retinopathy. It was not possible in this study to ascertain whether digital imaging systems can determine when treatment is warranted.

Conclusions: In the context of a national screening programme for referable retinopathy, digital imaging is an effective method. In addition, technical failure rates are lower with digital imaging than with conventional photography. Digital imaging is also a more sensitive technique than slit-lamp examination by optometrists. Automated grading can improve efficiency by correctly identifying just under half the population as having no retinopathy. Recommendations for future research include: investigating whether the nasal field is required for grading; a large screening programme to ascertain whether automated grading can safely perform as a first-level grader; whether colour improves the performance of grading digital images; and methods to ensure effective uptake in a diabetic retinopathy screening programme.

Contents

Glossary and list of abbreviations
Executive summary
1 Introduction: Background; Key features of diabetic retinopathy; Early Treatment of Diabetic Retinopathy Study; Risk factors in development of retinopathy; An overview of existing screening methods; Commission from NHS Health Technology Assessment Programme
2 Study design and patient recruitment: Aim of study; Study design; Question 1: can a digital system detect the presence of diabetic retinopathy (of any sort/level)?; Question 2: can a digital system detect progression of retinopathy?; Question 3: can a digital system determine when treatment is required?; Patient recruitment; Conclusions
3 Photography: Choice of digital fundus camera; The EURODIAB protocol; Image artefacts due to camera dust; Grading image quality; Failure to obtain gradable images; Conclusions
4 Automated detection of retinopathy: Why automate?; What lesions should we detect?; Automated detection of retinopathy; Microaneurysm detection; Automated detection of hard exudates; Lesion location and maculopathy; Turnover of microaneurysms; Conclusions
5 Question 1: can a digital system detect the presence of diabetic retinopathy (of any sort/level)?: Introduction; Slit-lamp biomicroscopy; Photography and digital imaging; Distribution of retinopathy amongst patients; Detection of retinopathy
Automated analysis in diabetic macular oedema; Sight-threatening retinopathy; Is the nasal-disc field required?; Comparison of optometrist with ophthalmologist; Discussion
6 Question 2: can a digital system detect progression of retinopathy? Question 3: can a digital system determine when treatment is required?: Introduction; Study protocol; Can digital red-free photography monitor progression of retinopathy?; Can automated analysis of digital images follow progression of retinopathy?; Problem of microaneurysm turnover; Risk factors; Discussion
7 Costs and consequences of screening for diabetic retinopathy: Introduction; Digital photography costs; Automated grading costs; Colour slide photography costs; Optometrist screening costs; Results; Combining modalities – a modelling exercise; Sensitivity analysis; Assumed life of equipment; Conclusions
8 Conclusions: Current study; Relevance to the NHS; Research recommendations; The future
Acknowledgements
References
Appendix 1 Systematic literature review (completed 1998)
Appendix 2 Consent form, letter of invitation and patient information sheet
Health Technology Assessment reports published to date
Health Technology Assessment Programme

Glossary

Biomicroscopy: A technique for examining the structures of the eye.
Capital costs: The costs associated with items that remain useful beyond the period in which these costs are incurred, for example, equipment and buildings.
Cost-effectiveness: A comparison of the cost of an intervention with its effect on the patient.
Creatinine: A component of urine.
Diabetic retinopathy: A complication of diabetes in which the retina of the eye is affected by blocking off of its small blood vessels.
Exudate: Masses of macrophages containing lipids, formed at the edges of areas where plasma has leaked from the capillaries.
Field: A single image taken of the retina.
Fluorescein angiography: A technique for visualising the blood flow through the vessels of the eye. A fluorescent dye is injected intravenously and images of the eye are taken as it flows through the blood vessels.
Fovea: A small pit or depression in the retina; the very centre of the macula.
Fundus: The portion of the interior of the eye visible through the ophthalmoscope.
Fusiform: Spindle shaped.
Glycated haemoglobin: A test to show how well controlled diabetes has been in the preceding months.
Haematocrit: The percentage of a blood sample occupied by cells.
Haemorrhage: Bleeding.
HbA1c: Glycated haemoglobin. See above.
Ischaemia: Reduction of blood supply.
Laser photocoagulation: The use of a highly focused laser to treat diseased tissue.
Macula: The area of the retina that is the centre of sight.
Macular oedema: Fluid in the macula.
Maculopathy: A pathological condition of the macula.
Microalbuminuria: Abnormally increased excretion of albumin in the urine; an early marker of diabetic kidney disease.
Microaneurysms: Localised dilations of retinal capillaries. They may leak, causing oedema and haemorrhage in the retina.
Mydriasis: The use of drops to dilate the pupil.
Neovascularisation: The formation of new blood vessels.
Oedema: A collection of fluid.
Ophthalmoscopy: The use of an optical instrument for examining the interior of the eyeball.
Prognosis: A forecast of the probable course and outcome of the disease.
Proliferative retinopathy: Diabetes can result in small blood vessels being blocked off, depriving the retina of oxygen and nutrients. The eye tries to grow new vessels (proliferative retinopathy), which may bleed and detach the retina.
Recurrent costs: Costs which occur regularly.
Retina: The light-sensitive layer at the back of the eye.
Risk factor: A clearly defined occurrence that increases the probability that a person will get a disease.
Sensitivity: The probability of correctly identifying that a disease is present.
Slit-lamp biomicroscopy: A method of examining the eye using a special microscope.
Specificity: The probability of correctly identifying that a disease is not present.
TIFF format: Tag(ged) Image File Format – an image file format popular owing to its platform independence, extendibility and great flexibility.
Type 1 (insulin-dependent) diabetes: The type of diabetes that develops when the body cannot produce any insulin.
Type 2 (non-insulin-dependent) diabetes: The type of diabetes that develops when the body can make some insulin but not enough, or when the insulin that is produced does not work properly.
Visual acuity: A measure of how well a person sees distant and close objects.
VS2: Greater than or equal to Airlie House Photograph 2B in the equivalent of two Airlie House standard photographic fields.

List of abbreviations

BDR: background diabetic retinopathy
CCD: charge-coupled device
CI: confidence interval
CSMO: clinically significant macular oedema or maculopathy
ETDRS: Early Treatment of Diabetic Retinopathy Study
FAZ: foveal avascular zone
FD: fractal dimension
FROC: free response receiver operating characteristic
ICG: indocyanine green
IRMA: intra-retinal microvascular abnormality
MeSH: medical subject headings
NPDR: non-proliferative diabetic retinopathy
PC: personal computer
PDR: proliferative diabetic retinopathy
PIA: perifoveal intercapillary area
RAM: random access memory
RCT: randomised controlled trial
ROC: receiver operating characteristic
SLO: scanning laser ophthalmoscope

All abbreviations that have been used in this report are listed here unless the abbreviation is well known (e.g. NHS), or it has been used only once, or it is a non-standard abbreviation used only in figures/tables/appendices, in which case the abbreviation is defined in the figure legend or at the end of the table.

Executive summary

Objectives: To undertake a systematic literature review followed by a primary study to assess the performance of digital imaging, compared with other modalities, in screening for, and monitoring the development of, diabetic retinopathy. The study addressed three questions:
1. Can a digital imaging system detect retinopathy irrespective of sort or level?
2. Can a digital imaging system detect progression of retinopathy?
3.
Can a digital imaging system determine when treatment is required?

Design

Question 1: All imaging was acquired at a hospital assessment clinic. Subsequently, study optometrists examined the patients in their own premises.

Questions 2 and 3: In addition to the above, a subset of patients had fluorescein angiography performed every 6 months. The gold standard was clinical examination by an ophthalmologist. All questions were also addressed using automated analysis of digital red-free images.

Subjects

The study invited 1114 patients undergoing direct ophthalmoscopy at the diabetic clinic in Aberdeen; of these, 727 agreed and 387 declined. Of the former, 586 attended. Of these, 103 patients had type 1 diabetes mellitus, 481 had type 2 diabetes mellitus and two had secondary diabetes mellitus; 157 (26.8%) had some form of retinopathy ('any') and 58 (9.9%) had referable retinopathy.

Results

Question 1: can a digital imaging system detect retinopathy irrespective of sort or level?

Any retinopathy: Manual grading of 35-mm colour slides produced the highest sensitivity (89%) and specificity (89%) figures, with optometrist examination recording most false negatives (sensitivity 75%). Manual and automated analysis of digital images had intermediate sensitivity.

Referable retinopathy: Both manual grading of 35-mm colour slides and digital images gave sensitivities of over 90% with few false positives (specificity 89% and 87%, respectively). Digital imaging produced 50% fewer ungradable images than colour slides.

Question 2: can a digital imaging system detect progression of retinopathy?

This part of the study was limited, as patients with the more severe levels of retinopathy opted for treatment. There was an increase in the number of microaneurysms in those patients who progressed from mild to moderate retinopathy. There was no difference between the turnover rate of either new or regressed microaneurysms for patients with mild or with sight-threatening retinopathy.
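The sensitivity and specificity figures quoted throughout are simple ratios over the screening confusion matrix. A minimal sketch, assuming hypothetical true-positive/false-negative/false-positive/true-negative counts (the report gives only percentages; the counts below are illustrative, chosen merely to be consistent with 586 attendees and 58 referable cases):

```python
# Sensitivity/specificity arithmetic as used in screening evaluation.
# The TP/FN/FP/TN split below is HYPOTHETICAL, for illustration only.

def sensitivity(tp: int, fn: int) -> float:
    """Probability of correctly identifying that disease is present."""
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    """Probability of correctly identifying that disease is absent."""
    return tn / (tn + fp)

# Illustrative outcome for 586 screened patients, 58 with referable
# retinopathy (prevalence taken from the study; the split is assumed).
tp, fn = 54, 4       # referable cases detected / missed
tn, fp = 460, 68     # non-referable correctly cleared / falsely referred

print(f"sensitivity = {sensitivity(tp, fn):.1%}")   # 93.1%
print(f"specificity = {specificity(tn, fp):.1%}")   # 87.1%
```

Note that at roughly 10% prevalence even a high specificity produces a substantial absolute number of false referrals, which is why the report weighs false positives against clinic workload.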
Question 3: can a digital imaging system determine when treatment is warranted?

Since there was no definite answer to question 2, the answer must be 'no' at present.

Conclusions

Implications for healthcare

Digital imaging: In the context of a national screening programme for referable retinopathy, digital imaging is an effective method. In addition, technical failure rates are lower with digital imaging than with conventional photography. Digital imaging is also a more sensitive technique than slit-lamp examination by optometrists.

Automated grading of digital images: Automated grading can improve efficiency by correctly identifying just under half the population as having no retinopathy.

Recommendations for future research

1. Is the nasal field required for grading? Our study would suggest not. Single-field imaging could potentially reduce the time taken to perform retinal screening and the number of technical failures.
2. Can automated grading safely perform as a first-level grader? Our study would suggest 'yes', but this needs to be confirmed in a large screening programme.
3. Does colour improve the performance of grading digital images? Although high-resolution colour digital images are now routinely available, their role in screening for diabetic retinopathy has yet to be assessed.
4. Can patient recruitment be improved? Future research is required to ensure effective uptake in a diabetic retinopathy screening programme.

Chapter 1 Introduction

Background

Despite advances in diabetic care, visual impairment in diabetes remains a devastating complication, in terms of both personal loss for the patient and socio-economic costs to the community. It remains the commonest cause of blindness in the working population.1 It is potentially preventable, but this presents an immense challenge to the NHS, principally because the timing of treatment is critical.
If laser photocoagulation is delayed until there is symptomatic visual loss, the outcome may be poor, but too early a referral will overload ophthalmology departments, leading to inefficient practice. Two essential components of an effective and efficient system of diabetic care within the NHS are therefore, first, regular retinal assessment and, second, a method of assessment that allows optimal timing of therapy. The current arrangements within the NHS for identifying diabetic eye disease are widely perceived as being costly and inefficient, and this leads to pressure for change. The development of digital cameras is a promising step forward. As digital imaging may not need a skilled operator, regular assessment is more feasible, and its diagnostic performance may be as good as, or even superior to, that of current methods. It is ideally suited to quality assurance as it produces a hard copy. However, the introduction of this technology into the NHS will have major logistical and resource implications, and it should be a prerequisite that it demonstrably performs better than existing systems.

Key features of diabetic retinopathy

Klein and colleagues2 were the first to suggest that baseline microaneurysm counts in patients with no other evidence of retinopathy can provide a useful predictor of long-term progression to proliferative retinopathy, independent of the effects of glycaemic control and blood pressure. Kohner and Sleightholm3 have provided evidence supporting the importance of microaneurysm detection and quantification from their analysis of fluorescein angiograms. They showed a significant correlation between microaneurysm number and the presence of haemorrhages and cotton-wool spots and, to a lesser extent, the severity of hard exudates and intra-retinal microvascular abnormalities (IRMAs).
From examination of the natural progression of fundus changes in diabetics with at least moderate diabetic retinopathy, key features were identified as having a high predictive value in heralding a deterioration to a proliferative state. The severity of IRMA, venous beading and the number of haemorrhages and microaneurysms were thought to be of the greatest significance in identifying individuals who are likely to develop neovascularisation.4

Early Treatment of Diabetic Retinopathy Study

The Early Treatment of Diabetic Retinopathy Study (ETDRS) was designed to provide answers as to how the devastating morbidity from visual loss could be reduced. The benefits to long-term visual outcome for patients with high-risk proliferative retinopathy undergoing laser panretinal photocoagulation have been recognised and adopted into standard clinical practice.5 However, the authors concluded that for those with mild to moderate retinopathy, and a low risk of progression to severe visual loss, early panretinal photocoagulation was inappropriate given its potentially detrimental effect on the peripheral visual field. They emphasised that the key to successful management was early identification of retinopathy with meticulous monitoring of progression, to allow optimum timing of intervention when a high-risk proliferative stage had been reached.

Risk factors in development of retinopathy

The Diabetes Control and Complications Trial6 has shown that strict metabolic control can both offset the development and slow the progression of diabetic retinopathy in type 1 diabetes. Whilst acknowledging the limitations of their cross-sectional study of 3250 European type 1 diabetes
patients, Sjolie and colleagues7 have also suggested that, in the later stages of retinopathy, the adjustment of blood pressure, fibrinogen and triglyceride levels may affect outcome. The cessation of cigarette smoking may also have a beneficial effect, although the evidence is conflicting.8–10 However, as intervention at both primary and secondary level can now be contemplated, the need for careful screening of diabetic populations and monitoring of established retinopathy becomes even more crucial. The recognition of very early diabetic fundal change may provide the impetus to patients to make positive lifestyle changes and tighten glycaemic control. Accurate grading of retinopathy and recognition of high-risk features are essential to providing appropriately timed laser therapy.

An overview of existing screening methods

Adhering to the principles of the St Vincent Declaration,11 the European Retinopathy Working Party defined a protocol of screening for diabetic retinopathy in 1991, advocating that all patients with diabetes should have an annual eye examination. This was broadly defined as ophthalmoscopy through pharmacologically dilated pupils or retinal photography.12 In the absence of evidence to support a definitive method of performing retinal examination, a study of 3318 patients with diabetes was undertaken on behalf of the NHS by Buxton and colleagues. A comparison was made between direct ophthalmoscopy, performed by general practitioners (GPs), ophthalmic opticians and hospital physicians, and consultant ophthalmologist assessment of images acquired with a non-mydriatic Polaroid fundus camera.13 They concluded that all participants showed relatively poor sensitivities (hospital physicians 67%; GPs 53%; ophthalmic opticians 47%), although specificities were higher (hospital physicians 97%; GPs 91%; ophthalmic opticians 95%), indicating that relatively few inappropriate referrals to ophthalmology services would occur.
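The practical meaning of these sensitivity and specificity figures can be made concrete with a worked example. The sketch below is illustrative only: the 53% sensitivity and 91% specificity are the GPs' figures quoted above, the ~30% retinopathy prevalence is the estimate used elsewhere in this report, and the population size of 1000 is an arbitrary assumption.

```python
def screening_outcomes(n, prevalence, sensitivity, specificity):
    """Expected confusion-matrix counts for a screening test applied
    to a population with the given disease prevalence."""
    diseased = n * prevalence
    healthy = n - diseased
    tp = diseased * sensitivity      # correctly referred cases
    fn = diseased - tp               # retinopathy missed by the screener
    tn = healthy * specificity       # correctly reassured
    fp = healthy - tn                # inappropriate referrals
    ppv = tp / (tp + fp)             # chance that a referral is a true case
    return tp, fn, tn, fp, ppv

# Illustrative: 1000 patients, 30% prevalence, GP figures (53%/91%)
tp, fn, tn, fp, ppv = screening_outcomes(1000, 0.30, 0.53, 0.91)
print(f"missed cases: {fn:.0f}, false referrals: {fp:.0f}, PPV: {ppv:.2f}")
```

Under these assumptions roughly 141 of 300 cases would be missed while only about 63 healthy patients would be referred unnecessarily, which is exactly the asymmetry (few inappropriate referrals, many missed cases) that the text describes.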
However, it was apparent that direct ophthalmoscopy alone was likely to miss a significant proportion of patients with evidence of retinopathy, regardless of who performed the examination. Comparable results were obtained from analysis of fundus photographs, with sensitivities ranging from 35 to 67% and specificities marginally higher than those of the primary screeners at 95–98%. Although subsequent reports indicate that the quality of photography from a non-mydriatic camera may be improved by the use of mydriatic agents,14 the authors were unable to recommend a specific technique for routine screening of a general diabetic population. These results also reflect the outcome of work by previous authors15–19 and suggest that, at present, no individual method of routinely available fundus examination can be judged superior for clinical screening. Although the findings of Klein and colleagues15 seem encouraging, the wide confidence intervals calculated for their relatively small patient population warrant caution in the interpretation of these results. Combining the modalities of ophthalmoscopy and retinal photography appears effective in detecting sight-threatening retinopathy according to O'Hare and colleagues;19 however, their technique does not achieve the recommended threshold of a minimum 80% sensitivity and 95% specificity for detecting mild retinopathy, suggested as an audit standard by the British Diabetic Retinopathy Working Group.20

Commission from NHS Health Technology Assessment Programme

Aims of investigation

Given the above background, there has been significant interest in developing screening techniques and in particular in exploring the potential of digital imaging. As there was no clear information as to how these techniques would fit into a clinical setting, the work in this report was commissioned in 1996 by the NHS Health Technology Assessment Programme.
The intention was first to undertake a systematic literature review and then to carry out a primary study to assess the performance of digital imaging in screening for, and monitoring the development of, diabetic retinal disease. The study aimed to show: (1) whether further technical development will be required; (2) what the diagnostic performance of current digital systems is in comparison with alternative approaches; (3) whether there is any evidence of clinical effectiveness; and (4) whether it is likely to be cost-effective as a screening or disease-monitoring (diagnostic) test.

Overview of the study

The purpose of the study was to address three questions: (1) can a digital imaging system detect retinopathy irrespective of sort or level?; (2) can a digital imaging system detect progression of retinopathy, in particular progression warranting further investigation?; and (3) can a digital imaging system determine when treatment is required? Diabetologists and GPs wish to know the answer to question 1, as they can use this information to motivate patients to improve their metabolic control and prevent, or slow, the progression of retinopathy and the other microvascular complications of diabetes – nephropathy and neuropathy. Diabetologists, GPs and optometrists wish to know the answer to question 2, as this will help reduce the enormous workload of screening for sight-threatening retinopathy. Ophthalmologists wish to know the answer to question 3, as this will help them decide when to initiate laser treatment.
We had previously developed software that could analyse digitised retinal images from fluorescein angiograms for the presence of the cardinal features of diabetic retinopathy, namely microaneurysms/haemorrhages and exudates.21–24 The current project involves using a digital fundus camera, which would provide direct digital images, in conjunction with our software, modified to analyse red-free images, not only to detect retinopathy but to answer the three questions raised above. The study was preceded by a systematic review of the literature to assess the value of currently available digital imaging techniques and compare them with alternative methods.

Layout of the report

In the following chapter we will discuss the design of the study, centred around the three cardinal questions, and patient recruitment. The factors influencing the choice of the digital fundus camera system, and the results of an assessment of the quality of the images produced by it, will be presented in Chapter 3. In Chapter 4, the principles behind the software used to detect microaneurysms and hard exudates automatically will be set out. The study is divided into two branches. The role of digital imaging in screening will be looked at in Chapter 5, addressing question 1 and, in part, question 2. In Chapter 6 the value of digital imaging in monitoring progression of disease will be investigated; this will explore the answers to questions 2 and 3. Finally, costs and consequences will be examined in Chapter 7 and conclusions and proposals for further research are presented in Chapter 8. The systematic literature review is included as Appendix 1.

Chapter 2 Study design and patient recruitment

Aim of study

The systematic literature review (Appendix 1) concluded that digital photography offered a promising alternative to conventional photography and that its diagnostic performance was not impaired by the lower resolution of currently available systems.
It confirmed that further studies were required to compare digital photography with conventional photography, when both sets of images are manually assessed by trained observers. In addition, it recommended that digital photography should be assessed against the performance of those currently providing retinopathy screening by direct ophthalmoscopy. As a result, the original proposal for investigating the value of digital imaging in diabetic retinopathy was followed. Three cardinal questions were to be addressed in this study:

• Question 1: can a digital fundus camera system detect the presence of diabetic retinopathy (of any sort/level)?
• Question 2: can a digital fundus camera system detect progression of retinopathy?
• Question 3: can a digital fundus camera system determine when treatment is required?

In pilot studies it was shown that the contrast of microaneurysms/haemorrhages in digitised red-free images was sufficient to allow accurate quantitative analysis. Hence a digital fundus camera would probably allow the detection of the presence of retinopathy, assuming that the patient has sufficiently clear media to give digital images of sufficiently high quality. The question is whether it is more effective than conventional photography or slit-lamp examination by non-ophthalmologists.

Study design

An outline of the study is shown in Figure 1. Patients were recruited whilst attending for their routine eye screening assessment at the diabetic clinic in Aberdeen. Routine screening usually involved direct ophthalmoscopy with mydriasis. As direct ophthalmoscopy is known to be an insensitive technique, the information from this procedure was not used in the study, as it was not felt to be an acceptable screening method. Once patients had agreed to take part, they were all asked to attend a research clinic at the hospital eye clinic.
There, patients had their visual acuity measured, their fundi photographed and their fundi assessed by one of the study ophthalmologists. All patients were then asked to attend one of the study optometrists in the optometrists' own premises. This appointment was arranged by the research registrar. The information from these first assessments was used to answer question 1: can a digital system detect the presence of diabetic retinopathy (of any sort/level)? The second question, can a digital system detect progression of retinopathy?, was to be answered by repeat assessment of all patients 1 year after their initial assessment. In addition, a subset of patients who had more severe forms of retinopathy was monitored more frequently for evidence of progression. This subset met the following criteria when examined by the study ophthalmologists:

• moderate non-proliferative or worse retinopathy
• retinal thickening within a one disc diameter radius of the centre of the macula.

This subset was also studied to answer question 3: can a digital system determine when treatment is required?

Question 1: can a digital system detect the presence of diabetic retinopathy (of any sort/level)?

This question has two components. First, can a digital system detect the presence of any retinopathy and, second, can it be used to decide automatically that the disease has progressed to a stage where the patient should be transferred to the eye clinic for further action? As it has been estimated that 'only' 30% of all patients with diabetes at any one time have retinopathy, such a system could be of great benefit in reducing clinical workload.25 All patients were studied every 12 months.
As shown in Figure 1, they underwent digital imaging on the Topcon™ system, where red-free images were acquired using two 45° fields, a macular field and a disc/nasal field. These digital images were transferred to a SUN™ system, where the programmes which had been developed for the analysis of microaneurysms and haemorrhages were used. In addition, analogue red-free and colour images were taken. Conventional mydriatic retinal photography was performed using a Topcon 50X fundus camera with 35-mm transparencies on Kodachrome 64 film. Patients were also examined at this visit by an ophthalmologist using slit-lamp biomicroscopy, this being the only widely agreed clinical gold standard for assessing severity of retinopathy.

FIGURE 1 Experimental design. DR, diabetic retinopathy; MA, microaneurysm.
The use of two 45° fields allowed direct comparison with the proposed European grading system developed for the EURODIAB study. This method is directly comparable with the Airlie House grading system – the present gold standard for grading diabetic retinopathy – and has been proposed as a particularly suitable method for large epidemiological studies.26 This study is described in detail in Chapter 4. All patients also had an additional assessment by a study optometrist, a fundus examination using slit-lamp biomicroscopy, performed on a separate occasion in the optometrist's own premises.

Question 2: can a digital system detect progression of retinopathy? Question 3: can a digital system determine when treatment is required?

This arm of the study looked at the ability of the digital system to provide the ophthalmologist with a reliable, quantitative measure of retinal pathology which will allow him or her to monitor the progression of the disease. The crucial questions were whether a digital system could detect progression of retinopathy and whether it could be used to determine when treatment was required. As for the detection of any retinopathy, it was felt that the answer to the first question was probably 'yes'. The answer to the second question may be 'yes' if the measurement of the area of haemorrhage and number of microaneurysms at VS2 (greater than or equal to Airlie House Photograph 2B in the equivalent of two Airlie House standard photographic fields) is the end-point. An important facet of these two questions was to look at the natural history of retinopathy. Therefore, in addition to the subset of patients initially assigned to this study – that is, those diagnosed on the initial screening study as having moderate non-proliferative retinopathy or worse, or clinically significant macular oedema (maculopathy) – a group of patients with lesser degrees of retinopathy was included.
All patients underwent fluorescein angiography and fundus photography every 6 months. Software to measure the number of microaneurysms and area of exudate from such images had previously been developed by our group.21–24 Finally, it was planned to correlate the changes reported from the computer measurements against clinical status. All patients were assessed medically once a year, as various risk factors are known to be associated with the progression of retinopathy. Patients had body mass index, blood pressure, HbA1c and microalbuminuria measured.6,15,27–29 Smoking and alcohol consumption, obstetric history, past medical history and drug history were also ascertained.30,31 This was done to allow the natural history of retinopathy to be put into clinical context and to help identify those patients needing more frequent screening than eye signs alone would suggest.

Patient recruitment

The numbers of patients required for the study were estimated as follows. For the community screening group, assuming that 30% (240) actually have retinopathy, the area under the receiver operating characteristic (ROC) curve will be estimated to ±2%. Differences in specificity of >6% could be identified (80% power, 2p = 0.05). Assuming 560 participants, of whom 10% (56) progress, the area under the ROC curve will be estimated to ±4%. Comparisons with analogue image, colour slide and optometrist will identify differences in specificity of >7% (80% power, 2p = 0.05). For the hospital monitoring group, assuming 160 patients, of whom 10% (16) progress, the area under the ROC curve will be estimated to ±7%. Differences in specificity of >13% could be identified (80% power, 2p = 0.05). For the purposes of analysis, this group may be combined with those with mild diabetic retinopathy. Assuming the combined sample, giving 720 participants of whom 10% (72) may progress, the area under the ROC curve will be estimated to ±3%.
Differences in specificity of >6% could be identified (80% power, 2p = 0.05). The ability of digital imaging to identify progression to retinopathy that warrants treatment will be explored within the cohort of people found to have moderate/severe retinopathy at the first 'inception' examination. Assuming 80 participants, of whom 40% (32) progress, the area under the ROC curve will be estimated to ±5%. Patients were recruited from those referred to the hospital diabetic clinic. At present, 1.4% (7624) of the population of Grampian (528,100) attends these clinics (Table 1). Previous studies have shown that, for Aberdeen patients, almost 100% of those on insulin and 96% of all patients with diabetes are registered at the clinic.31 As a consequence, our study reflected the local diabetic population as a whole; in other parts of the country, hospital diabetic medical clinics are attended mainly by those with complications or on insulin. It was estimated that over the 36-month period of this study about 800 patients could be studied. It was expected that approximately 30% would have abnormal retinal pathology, depending upon duration and level of control.31 The study invited 1114 patients undergoing routine fundoscopy at the diabetic clinic to participate; 727 patients were recruited and 387 declined. Of the former, 586 patients attended an optometrist trained in slit-lamp biomicroscopy and also a hospital assessment clinic where digital and conventional photography was performed. Slit-lamp examination by an ophthalmologist was performed as the gold standard. These examinations were then repeated 1 year later. Patients failing to attend for an appointment were offered a second appointment, but not followed up further. Copies of the consent form, patient information sheet and letter of invitation are provided in Appendix 2.
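The report does not state how these precision figures were derived. One standard way to relate the precision of an ROC-curve area to sample size is the Hanley–McNeil standard-error formula; the sketch below is a rough illustration only, and both the choice of this formula and the assumed AUC of 0.9 are my assumptions, not the authors' stated method.

```python
import math

def hanley_mcneil_se(auc, n_pos, n_neg):
    """Hanley-McNeil standard error of an ROC area, given the numbers of
    subjects with (n_pos) and without (n_neg) the condition of interest."""
    q1 = auc / (2 - auc)              # P(two positives rank above one negative)
    q2 = 2 * auc ** 2 / (1 + auc)     # P(one positive ranks above two negatives)
    var = (auc * (1 - auc)
           + (n_pos - 1) * (q1 - auc ** 2)
           + (n_neg - 1) * (q2 - auc ** 2)) / (n_pos * n_neg)
    return math.sqrt(var)

# Illustrative: 560 screened participants of whom 10% (56) progress,
# with a hypothetical AUC of 0.9.
se = hanley_mcneil_se(0.9, 56, 504)
print(f"standard error of the ROC area: {se:.3f}")
```

With these assumptions the standard error is about 0.028, of the same order as the ±4% precision quoted above for the 560-participant comparison, and the formula shows why precision is driven mainly by the small number of progressors rather than the total sample.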
Of the patients, 103 had type 1 diabetes mellitus (17.6%), 481 had type 2 diabetes mellitus (82.1%) and two had secondary diabetes mellitus (0.3%). The male-to-female ratio was 398:214 (65:35%). The average age at the start of the study was 56.5 years (median 59.3 years) (Table 2). Table 3 summarises the alcohol consumption and smoking history. The definition of level of retinopathy is discussed in the section 'Photography and digital imaging' (p. 21).

TABLE 1 Patients attending Grampian diabetic medical clinics on 10 January 1995
Diet controlled                  1852
Tablet controlled                2858
UK Prospective Diabetes Study     294
Insulin treated                  2620
Total                            7624

TABLE 2 Age distribution of patients entering the study
Age range (years)   Percentage
<25                    2.6
25–34                  6.5
35–44                  9.2
45–54                 20.8
55–64                 30.0
65–74                 27.3
>75                    3.6

TABLE 3 Smoking history and alcohol consumption of patients entering the trial
Level of        Alcohol (units/week),    Non-smoker   Smoker   Ex-smoker
retinopathy     mean and range
None            5.4, 0–60 (n = 427)         209          80        140
Mild            6.4, 0–42 (n = 107)          59          12         37
Moderate        5.3, 0–56 (n = 36)           17           6         13
Severe          10.2, 0–28 (n = 6)            3           1          2
Very severe     –                             0           0          0
Early PDR       4, 0–15 (n = 4)               2           1          1
High-risk PDR   8 (n = 1)                     1           0          0
PDR, proliferative diabetic retinopathy.

Conclusions

For historical reasons, a larger number of patients attend the hospital diabetic clinic than might be expected in other areas. This is reflected in the demographics of the population studied, which has a higher prevalence of type 2 diabetes than might be expected in a hospital clinic. Although the majority of people in the study were of working age, it is of concern that 528 of the 1114 patients approached either declined to take part or failed to attend once recruited.
It can be speculated that this is because these patients were already being screened; if so, it might have implications for any stand-alone retinopathy screening programme.

Chapter 3 Photography

Choice of digital fundus camera

The choice of digital fundus camera was central to the project. At the time the study started, August 1996, there were few systems from which to choose. The defining parameters for selection were as follows:

• A digital image resolution of at least 1024 × 1024 pixels. Previous work by the Aberdeen group on the automated detection of diabetic retinopathy from fluorescein angiograms had used 35-mm film digitised off-line using a Kodak Megaplus camera with a resolution of 1024 × 1024 pixels, a pixel size being equivalent to 13 µm. A coarser digitisation than this was unlikely to yield comparable results.
• Monochrome image acquisition with a pixel depth of at least 8 bits. The project utilised red-free and fluorescein images. Although colour images might contain more information than monochrome images, at the time of the study commercial systems offered colour charge-coupled device (CCD) cameras with a coarser image digitisation, typically 640 × 480 pixels.
• Images must not be compressed prior to storage. Some types of compression can result in loss of information.
• The image file structure must be such that images can be extracted for analysis on a SUN workstation. Our programs had previously been developed in the Unix™ environment on SUN workstations, and time constraints meant that converting them to a personal computer (PC) environment was unrealistic.

The only system able to meet those specifications was the Topcon retinal camera and IMAGEnet™ windows system (Topcon UK, Newbury, Berkshire, UK). The system, shown in Figure 2, consists of a Topcon TRC-50XT™ retinal camera with digital images being acquired with a Kodak Megaplus™ 1.4I CCD camera, with a digitisation of 1024 × 1024 pixels in monochrome and a pixel depth of 8 bits.
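One practical consequence of these selection criteria, not spelled out in the report, is the storage cost of uncompressed acquisition. The figures below are a back-of-envelope calculation under the stated specification (1024 × 1024 pixels, 8-bit monochrome, no compression, and the protocol's two fields per eye, i.e. four images per patient):

```python
# Uncompressed storage implied by the chosen camera specification.
width = height = 1024
bytes_per_pixel = 1                          # 8-bit monochrome
per_image = width * height * bytes_per_pixel # bytes per retinal image
per_patient = 4 * per_image                  # two fields per eye, both eyes

print(f"per image:   {per_image / 2**20:.0f} MiB")
print(f"per patient: {per_patient / 2**20:.0f} MiB")
```

At roughly 1 MiB per image and 4 MiB per patient, a study population of several hundred implies gigabytes of data per screening round, which makes the CD-ROM archiving and off-line transfer to the SUN workstation described below a significant part of the workflow.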
A colour film back was available for the colour slides, which were captured on Kodak Kodachrome 64 film. Data were acquired on a PC for analysis, display and archiving under the IMAGEnet hardware and software. Acquired images were archived on CD-ROM and transferred to the SUN workstation for image analysis. The Topcon TRC-50XT camera was chosen for this study as it had the finest image digitisation of the digital fundus cameras commercially available at the time. The images from other manufacturers (such as Canon) were not comparable, principally because they used colour cameras, which gave a more coarsely digitised image (e.g. 768 × 576 pixels). In our experience, the larger monochrome (red-free) images, with their potentially higher resolution at a given field of view and greater contrast, were more appropriate for computer analysis than smaller colour images. In addition to the finer image pixellation of the Topcon camera, an earlier model of photographic fundus camera from the same company was already being used in the eye clinic. The ophthalmic photographers were familiar with the operation of this camera and the quality of the slides they produced was consistently high. It was these slides, digitised using a Kodak Megaplus 1.4 CCD camera (the same CCD camera as fitted to the Topcon TRC-50XT digital fundus camera), which had been used to develop our automated techniques. Thus the ability to acquire the same images directly using the TRC-50XT camera was a logical progression.

The EURODIAB protocol

For this study we adopted the photographic protocol developed for the EURODIAB IDDM
Complications Study, in which two fields of each eye are obtained for each patient.26 The first, the macular field, extends from the optic disc to the temporal retina, and the second, the nasal-disc field, covers the region from the disc to the nasal retina; 50° fields were obtained, as opposed to the 45° fields used in the EURODIAB study, because this field of view was closest to that available on the Topcon TRC-50XT fundus camera used (Figure 3). The difference, however, is negligible and will, if anything, increase the area of the retina examined.

FIGURE 2 Topcon retinal camera with IMAGEnet system

The EURODIAB protocol was validated against the recognised gold standard of the modified Airlie House system,32 with which it 'compared favourably'. In the latter, seven pairs of 30° stereo photographs are obtained for each eye of each patient. Clearly, the four images per patient of the EURODIAB scheme are vastly preferable to the 28 photographs of the modified Airlie House system if either is being considered for use in a large-scale screening programme. The same protocol was used for both colour slide photography and acquisition of the digital images, permitting their direct comparison. An initial concern was the quality of images produced by the digital fundus camera. In the following section a study is reported of the overall quality of the images with respect to the visibility of structures such as the nerve-fibre layer and the retinal vessels.

Image artefacts due to camera dust

Soon after patient photography got under way, it became apparent that the digital images acquired were being corrupted by the presence of dust inside the camera (Figure 4a) and that this dust threw shadows on the images which closely resembled small, dark microaneurysms. In the 5 months it took to identify and solve the problem, a total of 158 subjects had been photographed.
In order to salvage the images collected and prevent the costly and time-consuming recall of subjects, a program was written to remove these dust artefacts from the images. Although the dust in the camera could be disturbed from day to day, it was found that on any given day the pattern of dust was relatively stationary. Hence, if the average of all the images collected on a given day was calculated, the superimposed dust would produce a large response whereas non-coincident features, such as the retinal vessels, would be suppressed (Figure 4b). From this average image, a 'mask' could be generated (by shade-correcting and thresholding the average image; Figure 4c). This mask could then be used to eliminate the dust artefacts in each image by replacing the image pixels at these locations with the average (actually the median) of their neighbours (Figure 4d).

FIGURE 3 Two 50° red-free images of the right eye taken according to the EURODIAB IDDM Complications Study protocol. Left, macular view; right, nasal disc.

FIGURE 4 Removal of camera dust artefacts from the region of interest in an image. (a) Part of the original image showing the presence of dust as small dark shadows. (b) The average of all images taken on the same day's photography. (c) After processing this average image, a 'mask' is generated giving the locations of the image dust artefacts. (d) This dust mask can then be used to remove the corrupting dust artefacts in the image.

Grading image quality

To assess the quality of the digital images, a grading scheme was developed, adapted from the scoring system for vitritis.33 The categories used to grade the quality of both the colour slides and digital images are listed in Table 4. Q1 indicates the highest quality images, in which the fine structure of the nerve-fibre layer is clearly visible. In images of grade Q2, the nerve-fibre layer is not visible but the smallest vessels in the image are in sharp focus.
In Q3 images, these small vessels are blurred but the larger vessels are well defined. Q4 indicates that the large blood vessels in the major arcades are just blurred and, finally, if the image is judged to be of insufficient quality for a classification of retinopathy to be made, then grade Q5 is assigned. This last grade is also assigned to images in which there is significant blurring of major arcade vessels in one-third or more of the image, in the absence of visible referable retinopathy. This might occur, for instance, as a result of shadows falling across the image, such as those caused by misalignment of the eye. For an overall grade of retinopathy to be assigned to a patient, quality grades between Q1 and Q4 were required for all four fields photographed. This ensures that lesions present in an obscured or degraded region of the retina are not overlooked, leading to potentially serious undergrading of the patient's retinopathy. In practice, we would expect patients whose images are graded as Q5 to be referred for separate examination using slit-lamp biomicroscopy by a trained individual such as an optometrist or an ophthalmologist. Grading of the digital images and colour slides was performed solely by the research fellow. As a result, direct comparison could be made between the grades assigned to each method of photography, avoiding concerns about inter-observer variability. The data in Table 5 are replotted in Figure 5 to show overall image quality for both fields. In Figure 6, the difference in image quality between colour slide and digital image is shown. Individual colour slides had a higher frequency of the highest quality grade Q1 (22%) than digital images (18%) (p < 0.001), but also a higher frequency of the lowest quality grade compared with digital images (5% versus 2%).
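These proportions can be re-derived from the counts in Table 5, in which rows give the colour-slide grade and columns the digital-image grade; for example, in Python:

```python
# Quality cross-tabulation from Table 5.
# Rows: colour-slide grade Q1..Q5; columns: digital-image grade Q1..Q5.
table5 = [
    [492, 329,  64,  35,  1],
    [138, 669, 287, 172,  5],
    [ 41, 249, 159, 156,  2],
    [ 55, 354, 242, 458, 28],
    [  6,  45,  26,  83, 51],
]
slide_totals = [sum(row) for row in table5]                     # per slide grade
digital_totals = [sum(row[c] for row in table5) for c in range(5)]
total = sum(slide_totals)                                       # 4147 gradings

def pct(n):
    """Percentage of all gradings, rounded to the nearest whole number."""
    return round(100 * n / total)

print(pct(slide_totals[0]), pct(digital_totals[0]))              # Q1 alone: 22 18
print(pct(sum(slide_totals[:3])), pct(sum(digital_totals[:3])))  # Q1-Q3: 67 76
```

The marginal totals reproduce the figures quoted in the text: 22% versus 18% for grade Q1 alone, and 67% versus 76% for grades Q1 to Q3 combined.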
If one examines the number of images falling into categories Q1, Q2 and Q3, there is a higher proportion of digital images than colour slides (76% and 67% respectively, p < 0.001).

TABLE 4 Scheme used to grade 35-mm colour slide and digital image quality

Grade   Definition
Q1      Nerve-fibre layer visible
Q2      Nerve-fibre layer not visible
Q3      Small vessels blurred
Q4      Major arcade vessels just blurred
Q5      Significant blurring of major arcade vessels in over one-third of the image, in the absence of visible referable retinopathy

TABLE 5 Comparison of quality of the digital and 35-mm colour slide images (rows: colour slide grade; columns: digital image grade)

Colour slide    Q1     Q2     Q3     Q4    Q5   Total
Q1             492    329     64     35     1     921
Q2             138    669    287    172     5    1271
Q3              41    249    159    156     2     607
Q4              55    354    242    458    28    1137
Q5               6     45     26     83    51     211
Total          732   1646    778    904    87    4147

FIGURE 5 Colour slide (67% grades Q1–Q3) and digital red-free (76% grades Q1–Q3) image quality distributions (data as in Table 5)

Failure to obtain gradable images

In Figure 7, the numbers of patient visits for which one, two, three or four ungradable fields were obtained are compared for the two modalities. Of the 1041 patient visits for which both colour slides and digital red-free photographs were obtained, twice as many visits for colour slide photography (107, or 10%) generated one or more ungradable fields compared with digital photography (54, or 5%). This difference was statistically significant (p < 0.001). Colour slide photography would therefore result in a larger number of patients having to be recalled for photography and, of course, this has associated costs.

Conclusions

One of the main concerns with using digital images is that quality would be impaired by the digitising process.
The choice of the Topcon fundus camera and IMAGEnet software was made primarily on the basis that the Kodak Megaplus CCD camera used by this system had the finest image digitisation available at that time, a 1024 × 1024 pixel image array. To reduce possible problems with image quality further, it was specified that the software should not utilise any form of image compression. Image quality can only be defined in terms of the use to which the image is to be put,34 in this case the detection of diabetic retinopathy. This will be explored further in later chapters. However, as an initial step, a grading system was used that took the clarity of features such as the nerve-fibre layer and vessels as a measure of overall image quality. This demonstrated that the finest quality photographs were produced by conventional photography. This is not surprising, as the maximum resolution of conventional photographs is approximately 2–3 times better than that of the digital camera used in this study. However, if the first three categories of quality are regarded as acceptable, then the digital images were superior to the colour slides. When image quality was analysed by field, the poorest quality field tended to be the nasal field, reflecting the need for a widely dilated pupil and the absence of peripheral cortical cataract (a common clinical finding) to obtain high-quality images. Digital photography produced fewer ungradable images, with approximately half as many visits producing one or more ungradable images compared with colour slide photography. This was mainly because the photographer had instant feedback on image quality, being able to view the images instantaneously on the computer screen. If an image was not of sufficient quality, the photographer simply repeated the photograph until one of sufficient quality was obtained.
This meant that approximately 50% fewer individuals would need to be recalled for repeat photography if a digital rather than a conventional photographic camera were used for image capture.

FIGURE 6 Comparison of colour slide and digital image quality (difference in quality grade; positive: red-free digital better than colour slide; negative: colour slide better than red-free digital)

FIGURE 7 The number of patient visits for which one, two, three or four EURODIAB fields were ungradable (colour slide: 107 visits affected (10%); digital red-free: 54 visits affected (5%))

Why automate?

Establishing a photographic screening programme has workload implications. In addition to the time taken to photograph the diabetic population, trained personnel are then required to view and grade all the acquired images. The acquisition of digital images reduces the time previously associated with conventional photography, because it eliminates the overhead associated with the development of photographic slides. This also accounts for the higher proportion of digital images which are of reasonable quality; they may be viewed immediately and retaken if necessary without the need to recall the patient. There is clearly a case to be made for an automated analysis of images to eliminate the need for tedious and costly manual grading of the images. Digital images are uniquely suited to this task since a well-designed computer program is able to read these images and process them directly, without the need for human intervention. Automated analysis also offers a consistency and reliability of interpretation that is not found with the human observer, who is prone to fatigue.

What lesions should we detect?
Microaneurysms

Previous screening programmes have focused on the need to detect sight-threatening retinopathy; however, opinion has more recently shifted to the detection of any retinopathy. Microaneurysms are the earliest detectable signs of diabetic retinopathy and their numbers correlate closely with the severity of the disease.3,35,36 They are a logical choice for a screening programme, enabling patients with all degrees of disease severity to be detected. Furthermore, since the number of lesions increases with the severity of the disease, the likelihood of detecting patients with more severe retinopathy will increase, helping to ensure that those in need of referral are not missed. The process of manually identifying individual microaneurysms in a fundus image is tedious and prone to operator error. Although computer-assisted localisation of these lesions can help to reduce intra- and inter-observer errors,37 a fully automated and entirely objective approach is clearly preferable.

Hard exudates

Diabetic maculopathy (clinically significant macular oedema) is the most common cause of visual impairment in diabetic patients and is indicated by the presence of retinal thickening or hard exudates within one disc diameter of the fovea. Current screening methods favour non-stereoscopic photography owing to the increased complexity of acquiring and interpreting stereo photographs. These techniques, however, offer no information about the presence of retinal thickening. The reliable detection of patients with macular exudates is of increased importance, therefore, if patients with potential clinically significant macular oedema are not to be overlooked. The established means of manually quantifying exudate presence, via comparison with two standard photographs,38 is unavoidably subjective, has limited precision and, consequently, is inherently inaccurate.
Computerised detection, on the other hand, offers the benefits of a fully automated, and therefore objective and repeatable, analysis with accurate quantification of the extent of the pathology.

Chapter 4 Automated detection of retinopathy

We have previously developed programs for the automated detection and quantification of microaneurysms in retinal fluorescein angiographic images.22,24,39 It has been demonstrated that this automated system detects microaneurysms almost as well as clinicians.39 This program was used to analyse the fluorescein images taken in the disease monitoring part of the study (Chapter 6). For the screening study (Chapter 5), we needed to adapt the technique to the more challenging task of detecting microaneurysms in digital red-free images. The general philosophy of our approach is to construct a definition of the appearance of a microaneurysm or hard exudate. As will be described below, this is done on the basis of defining a number of features of the pathology, such as those relating to shape and intensity. By means of a training set of images, in which the pathology has been identified by an experienced observer, a set of rules is constructed for combining the values of these features in such a way as best to discriminate genuine from false pathology. As will be seen later in this chapter, the choice of an operating point for the program still requires a decision to be taken as to what constitutes an acceptable level of true- and false-positive responses.

Microaneurysm detection

The definition of a microaneurysm

We have defined microaneurysms as small (25–200 µm in diameter), circular features with an approximately Gaussian or 'bell-shaped' profile. In red-free images they appear dark against the background, whereas in the frames of a fluorescein angiogram they are bright.
In both cases, however, we have attempted to mimic the decisions of clinicians viewing these images by measuring each microaneurysm identified by them in a training set of diverse images. Thirteen measurements of each object are taken into account, including size, perimeter, circularity, aspect ratio, intensity and similarity to the ideal Gaussian profile. We do more than simply set limits on these parameters, however. For instance, circularity is less significant for small microaneurysms that lie at the limit of the resolving power of the camera. The criteria for classifying an object as a microaneurysm therefore vary with the size of the object.

Operation of the program

The initial processing of each image is performed in three stages. First, the image is 'shade-corrected' to remove differences between images caused by changes in the illumination conditions under which each was obtained. Second, large features such as vessels and haemorrhages are removed by excluding all structures which are greater than a particular linear extent and, finally, only those remaining features which resemble the approximate shape and size of a microaneurysm are retained for subsequent classification. At this stage of the processing, the image contains a number of candidate microaneurysms, only some of which will be genuine. For each of these candidates the program makes 13 measurements of intensity and shape, such as perimeter length, aspect ratio and circularity. These measurements are then tested using a set of rules which, if all are satisfied, identify the candidate as a microaneurysm. For example, a large object with insufficient average intensity might be rejected, whereas a smaller, less prominent object is accepted owing to its greater circularity. At this stage, it was necessary to draw up the rules for the program by examining the performance of the program on a set of genuine microaneurysms identified by the consultant ophthalmologist.
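As an illustration of such size-dependent rules, a toy classifier for a single candidate might look as follows. The thresholds and feature names here are invented for illustration; the study's program used 13 measurements and rules trained on clinician-marked lesions:

```python
def classify_candidate(size_um, circularity, depth):
    """Illustrative rule-based test for one candidate microaneurysm.

    size_um     -- diameter in micrometres (genuine MAs: ~25-200 um)
    circularity -- 1.0 is a perfect circle
    depth       -- contrast against the shade-corrected background
    """
    if not 25 <= size_um <= 200:
        return False  # outside the defined size range
    if depth < 0.1:
        return False  # too faint to be a genuine lesion
    # Circularity matters less for tiny candidates near the camera's
    # resolving limit, so the rule is relaxed for small objects.
    min_circ = 0.5 if size_um < 50 else 0.8
    return circularity >= min_circ
```

Relaxing or tightening thresholds like these is exactly what moves the program along the FROC curve discussed below.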
Unsurprisingly, it was not possible to classify correctly all the microaneurysms identified by the consultant without also detecting a large number of spurious objects. As a result, it was necessary to decide on the degree of sensitivity and specificity which would be clinically acceptable.

Calibrating and testing the microaneurysm detector

Figure 8 illustrates the performance of the program for the detection of individual microaneurysms. It is expressed in terms of a free-response receiver operating characteristic (FROC) curve that plots the sensitivity of the program in detecting microaneurysms as a function of the number of false positives per image, that is, the average number of spurious microaneurysms reported as being present in the image. The data were generated by analysing a set of images consisting of 44 normals, 41 mild, 12 moderate, three severe and two early.

FIGURE 8 FROC curve comparing the performance of the microaneurysm detector (filled circles and dashed line) on digital red-free images with that of five clinical observers (filled and open triangles), with an ophthalmologist as the gold standard. The operating point of the detector is indicated by the open square.

By varying the program's criteria for what constitutes a microaneurysm, by relaxing or tightening the rules used to classify each detected candidate microaneurysm, the effect on specificity of increasing the program's sensitivity can be investigated. The outcome from the program is compared with the results from five observers looking at the same images: two ophthalmologists, two diabetologists and one physicist experienced in interpreting retinal images. The gold standard was generated by an experienced ophthalmologist. Clearly, the opinions of the human observers vary in sensitivity and specificity (number of false positives per image).
For instance, the observer with the highest sensitivity achieved this result at the expense of the greatest number of false positives (identification of objects that the ophthalmologist did not consider to be microaneurysms). The program's FROC curve shows how changing the rules affects the sensitivity and specificity of microaneurysm detection. Obviously, it is necessary for the purposes of this study to select one operating point. Although a high sensitivity for a screening task is clearly desirable, specificity is also important. This is because the use of four fields per patient increases the likelihood of detecting a falsely positive microaneurysm and hence incorrectly concluding that a normal patient has retinopathy. By relaxing the rules, the number of genuine microaneurysms identified, and hence the sensitivity, could be increased. On the other hand, tightening the rules reduces the number of spurious objects erroneously classified as microaneurysms, so increasing specificity. An estimate of the best compromise between these two parameters was made, based on the analysis of the training set, and this version of the rules was chosen as the 'operating point' of the program, shown in Figure 8 by the open square. This corresponded to a sensitivity of 43% and 0.11 false positives per image.

Automated detection of hard exudates

Hard exudates are composed of lipoprotein and lipid-filled macrophages. This gives them a yellow, waxy appearance which is highly reflective and, as a consequence, they appear as regions of high local intensity in digital red-free photographs. Because they are formed by leakage from microvascular lesions, they have no characteristic shape, but they may be distinguished from similarly intense lesions such as cotton-wool spots or drusen by the strength of their boundary definition. In contrast, cotton-wool spots and drusen are diffuse structures with considerably less intense edges.
This information has been used to classify bright objects in the images according to their size, perimeter length, overall intensity and edge definition. Once again, the choice of appropriate classification rules is dictated by the exudates manually identified by an ophthalmologist in a training set of images. This set consisted of 42 images containing 479 exudate objects.

Operation of the exudate detector

In order to enhance those features with strong edge definition (i.e. potential exudates), the images are first 'sharpened' by application of an appropriate filter. This sharpening operation also helps to ensure that the detected features are accurately delineated prior to their measurement (see below). Shade correction is then performed, as described above, to eliminate the effects of illumination. The next stage in the processing is to locate the optic disc to enable it to be eliminated from subsequent analysis of the image. This is achieved by searching the image for the region which best matches a model disc. This model consists of a bright circle, representing the surface of the disc, bisected by a dark, diverging, vertical stripe which mimics the pattern of the dark blood vessels leaving and entering the eye via the central retinal artery and vein. Having identified the optic disc, the remaining area of the image is thresholded to identify any features which are brighter than the background and therefore could be hard exudates. Once again, these candidates are classified by comparing their measurements with those of the exudates identified by a clinician in the set of training images. This comparison generates a number of rules based on the values, or pairs of values, of these parameters.

Testing the hard exudate detector

As for the microaneurysm detection program, the ability of the program to detect hard exudate objects was assessed using an FROC experiment, the results being shown in Figure 9.
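A single point on such an FROC curve is simply the pair (sensitivity, false positives per image) pooled over the test set. A minimal sketch of this calculation in Python (the tuple format is an assumption for illustration, not the study's data format):

```python
def froc_point(per_image):
    """One FROC operating point pooled over a set of images.

    per_image -- list of (n_true, n_hit, n_false) tuples, one per image:
                 gold-standard lesions present, how many of those the
                 detector found, and how many spurious detections it made.
    """
    total_true = sum(t for t, _, _ in per_image)
    total_hit = sum(h for _, h, _ in per_image)
    total_false = sum(f for _, _, f in per_image)
    sensitivity = total_hit / total_true if total_true else 0.0
    return sensitivity, total_false / len(per_image)
```

Sweeping the strictness of the classification rules and recomputing this pair traces out the whole curve.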
A test set of 50 images was used, in which a total of 215 individual exudates were identified by the clinical research fellow. The performance of the program is varied by relaxing or tightening the rules used to define an exudate, in a manner identical with that used for the microaneurysm program. Once again, the performance of a number of clinicians asked to perform the same task was also included for comparison, and the gold standard against which both are judged is provided by an ophthalmologist. The operating point is chosen to give an extremely low false-positive detection rate, 0.2 false positives per image, so as to maximise the discrimination between exudates and cotton-wool spots or drusen. Consequently, the sensitivity will be low, 44%. However, it must be appreciated that the clinical significance of sensitivity and specificity will vary according to the total number of exudate objects actually present in a given image. For example, a single false positive detected in a normal image is much more serious than a single false positive detected in an image with 30 real lesions present.

Lesion location and maculopathy

Diabetic maculopathy (clinically significant macular oedema) is an important cause of visual impairment in patients with diabetes.40–42 It is indicated by a thickening of the retina, which can only be positively identified using stereo photography. In the absence of stereo photographs, however, a patient will be referred for further examination if he or she exhibits microaneurysms or hard exudates within one disc diameter of the fovea. By locating the fovea in the macular images obtained, we are able to define this region in the image and hence potentially identify patients with referable macular lesions.
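The optic disc search described earlier, matching a bright circle bisected by a dark vertical stripe across the image, can be sketched as brute-force template matching. This is a simplified stand-in for the study's method (assuming numpy; the model size and scoring are illustrative):

```python
import numpy as np

def disc_model(r=5):
    """Model optic disc: a bright circle bisected by a dark vertical
    stripe standing in for the central retinal vessels."""
    y, x = np.mgrid[-r:r + 1, -r:r + 1]
    model = np.where(x * x + y * y <= r * r, 1.0, 0.0)  # bright disc surface
    model[:, r] = -1.0                                  # dark vessel stripe
    return model

def locate_disc(image, model):
    """Return the (row, col) centre where the model correlates best."""
    mh, mw = model.shape
    best_score, best = -np.inf, None
    for y in range(image.shape[0] - mh + 1):
        for x in range(image.shape[1] - mw + 1):
            patch = image[y:y + mh, x:x + mw].astype(float)
            # Mean-subtracted correlation against the fixed model
            score = float(np.sum((patch - patch.mean()) * model))
            if score > best_score:
                best_score, best = score, (y + mh // 2, x + mw // 2)
    return best
```

The same machinery, with a small dark cone in place of the disc model, serves for the fovea search described next.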
The position of the fovea is identified in a similar manner to the optic disc position described above, by finding the location in the image which best matches a model of the fovea. This model is simply a small, dark cone which mimics the increase in pigmentation in this area of the retina. Although an estimate of the macula size (one disc diameter, or 1500 µm) can be made given the approximate resolution of the image, differences in optical power between patients mean that this value is unreliable. A second estimate is therefore made based on a fraction (0.38) of the distance between the fovea and the centre of the optic disc. The average of these two measurements is then used to specify the size of the macular area. Because the locations of any microaneurysms or hard exudates detected in the image are known, we are able to calculate the number of each which lie within one disc diameter of the fovea. In Chapter 5 we will investigate whether this information enables us to detect patients with notable macular lesions, and how many of these patients were, on subsequent examination, found to have clinically significant macular oedema.

Turnover of microaneurysms

We have proposed investigating whether automated digital image processing techniques can follow the progression of retinopathy. Although the total number of microaneurysms has been shown to correlate with the severity and progression of early retinopathy,2,3,43 it has also been observed that these lesions are not static but appear and disappear over a period of time.44–46 This turnover of microaneurysms is not well understood, but it has been suggested that the rate of their formation and regression might be related to the progression of retinopathy.46 In Chapter 6 we will calculate the turnover of microaneurysms detected in the three 6-monthly fluorescein angiograms obtained from these patients and examine how these figures relate to the severity and progression of retinopathy present.
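Given detected lesion sets from two registered visits, the static/new/regressed bookkeeping used for this turnover calculation might be sketched as follows. The dict-of-pixel-sets format and the exact overlap test are assumptions for illustration; the study matched lesions that overlap by more than 10% after registration:

```python
def turnover(first_visit, second_visit, min_overlap=0.10):
    """Classify microaneurysms as static, new or regressed.

    Each visit is a dict mapping a lesion id to its set of (y, x)
    pixels, assumed already registered into a common frame. Two
    lesions are 'the same' if their pixel sets overlap by more
    than min_overlap of the smaller lesion.
    """
    def overlaps(a, b):
        return len(a & b) > min_overlap * min(len(a), len(b))

    static = {
        i for i, px in first_visit.items()
        if any(overlaps(px, qx) for qx in second_visit.values())
    }
    regressed = set(first_visit) - static          # first image only
    new = {
        j for j, qx in second_visit.items()        # second image only
        if not any(overlaps(qx, px) for px in first_visit.values())
    }
    return static, new, regressed
```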
FIGURE 9 FROC curve comparing the performance of the hard exudate detector (filled circles and dashed line) with that of five clinical observers (filled triangles), with an ophthalmologist as the gold standard. The operating point of the detector is indicated by the open square.

Image registration

In order to calculate the turnover of microaneurysms, it is necessary to align, or 'register', the images concerned. This allows corresponding microaneurysms which have been detected in the different images to be identified, according to whether they overlap (by >10%). Owing to the small size of these lesions, the registration is required to be very accurate, and a cross-correlation algorithm47 with minor modifications has been found to give good results for these images. Once registered, each pair of the three 6-monthly fluorescein frames is analysed to give the number of static microaneurysms (those present in both images), new microaneurysms (those on the second image only) and regressed microaneurysms (those on the first image only).

Conclusions

In this chapter we have described the algorithms for the automated detection of microaneurysms and hard exudates that have been used in this study. In the following chapters we will describe the results of our investigation into whether these automated analyses of digital red-free photographs may be used as a screening tool for diabetic retinopathy, and whether changes in the numbers of lesions present can be used to follow the progression of the disease. In addition, by identifying the locations of the fovea and optic disc in each image, the positions of the detected lesions can be related to the location of the macula.
Using this information we are able to investigate whether the detection of macular microaneurysms or hard exudates, or both, provides useful markers for the referral of patients with suspected clinically significant macular oedema. Finally, we are able to calculate the turnover of microaneurysms in individual frames of consecutive fluorescein angiograms. Using this technique, we hope to show that the rate of appearance of new and regressed microaneurysms can provide useful information about the natural history of diabetic retinopathy, and that this information might allow the point at which treatment is required to be determined.

Chapter 5 Question 1: can a digital system detect the presence of diabetic retinopathy (of any sort/level)?

Introduction

This study aims to examine the role of digital imaging in screening for diabetic retinopathy, so that timely intervention, both medical and surgical, can be applied before significant visual impairment occurs. The patient population was described in the section 'Patient recruitment' (p. 7); to recap, 586 patients attended a hospital assessment clinic where digital and conventional photography was performed. Slit-lamp examination by an ophthalmologist was performed as the gold standard. Patients then attended a study optometrist, who performed an examination of the fundi using slit-lamp biomicroscopy. These examinations were then repeated 1 year later.

Slit-lamp biomicroscopy

Six high-street optometrists were recruited through the auspices of the Aberdeen and North-east Scotland branch of the Association of Optometrists. Each underwent a specially devised training programme consisting of a day of lectures, a day of slide examination, practical demonstration of slit-lamp biomicroscopy and regular eye clinic attendance, culminating in a formal examination of their ability to recognise the features of diabetic retinopathy using slit-lamp biomicroscopy. The optometrists examined the subjects using slit-lamp biomicroscopy through dilated pupils in their own premises. A poster with examples of the various features of retinopathy was provided for reference. Patients were graded according to a modified interim ETDRS severity scale. This scale is based on the grading of features of diabetic retinopathy detected in seven-field stereoscopic 35-mm colour slides against standardised slides. The presence of four blot haemorrhages in any one quadrant was used as the definition of severe retinal haemorrhages, as this is a rough approximation to the reference slide used by the ETDRS. The definitions of non-proliferative diabetic retinopathy (NPDR) used in the study are given in Table 6. Proliferative diabetic retinopathy (PDR) was defined as the presence of new vessels in the fundus. Clinically significant macular oedema was defined as the presence of retinal thickening within a one-disc diameter radius of the centre of the macula. The reference standard was taken as slit-lamp biomicroscopy performed by consultants, or their specialist registrars, with a special interest in medical retina. The clinical grading protocol was the same as the optometrists'. This is referred to as the 'ophthalmologist gold standard'.

Photography and digital imaging

Conventional mydriatic retinal photography was performed using a Topcon 50X fundus camera with 35-mm transparencies on Kodachrome 64 film. High-resolution (1024 × 1024 pixels) red-free digital photography was performed on the same camera using the IMAGEnet 1.53 1024 digital image acquisition system, as described in the section 'Choice of digital fundus camera' (p. 11).
TABLE 6 Definitions of NPDR grades

Level         Retinopathy features
Mild          At least one microaneurysm
Moderate      Severe haemorrhages (≥ 4 blot haemorrhages) in one quadrant, and/or cotton-wool spots or venous beading or IRMA definitely present
Severe        Any one of the following: severe haemorrhages in four quadrants; venous beading in two quadrants; severe IRMA in one quadrant
Very severe   Any two of the 'severe' criteria

Photographs were taken according to the EURODIAB protocol (see the section 'The EURODIAB protocol', p. 11). All photographs and digital images were graded by a trained research registrar for image quality (see the section 'Grading image quality', p. 13) and for retinopathy severity according to the EURODIAB protocol (Table 7). This is referred to as 'manual grading'. The research registrar was trained by attending weekly diabetic clinics for 6 months under consultant supervision. Her effectiveness at grading images was then formally evaluated by the consultant before any of the study images was graded. Digital images were stored in their original lossless TIFF format on CD-ROMs and manually transferred to the Department of Bio-medical Physics, University of Aberdeen. The images were analysed using the software described in Chapter 4. The presence of any retinopathy and the quantity of retinopathy were recorded. The presence of exudates and/or haemorrhages within one disc diameter of the centre of the fovea in the macular field was used to infer the possible presence of diabetic macular oedema, as retinal thickening cannot be visualised in monocular images.

Distribution of retinopathy amongst patients

The levels of retinopathy and clinically significant macular oedema detected by the ophthalmologists in patients on their first visit are given in Table 8. Figure 10 shows the distribution of retinopathy amongst eyes and fields.
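The Table 6 criteria above can be expressed as a small rule set. A sketch follows; the field names are invented for illustration, and 'very severe' is read as meeting any two of the 'severe' criteria, in line with the ETDRS 4-2-1 convention:

```python
def npdr_grade(findings):
    """Assign an NPDR grade from the Table 6 criteria (illustrative).

    findings -- dict with (assumed) keys:
      'microaneurysms'        : at least one microaneurysm present (bool)
      'severe_haem_quadrants' : quadrants with >= 4 blot haemorrhages
      'cws_vb_irma'           : cotton-wool spots, venous beading or
                                IRMA definitely present (bool)
      'vb_quadrants'          : quadrants with venous beading
      'severe_irma_quadrants' : quadrants with severe IRMA
    """
    severe_criteria = sum([
        findings.get('severe_haem_quadrants', 0) >= 4,
        findings.get('vb_quadrants', 0) >= 2,
        findings.get('severe_irma_quadrants', 0) >= 1,
    ])
    if severe_criteria >= 2:
        return 'very severe'
    if severe_criteria == 1:
        return 'severe'
    if findings.get('severe_haem_quadrants', 0) >= 1 or findings.get('cws_vb_irma'):
        return 'moderate'
    if findings.get('microaneurysms'):
        return 'mild'
    return 'none'
```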
Detection of retinopathy

The performance of the optometrists, the manual grading of red-free digital images and the manual grading of 35-mm colour slides was determined by using the ophthalmologists' clinical grading as the gold standard. Table 9 shows the sensitivity and specificity achieved by the different screening modalities in terms of whether or not an image was reported as having retinopathy. The actual agreement as to the level of retinopathy is shown in Figure 11. The results have been analysed for all retinopathy, that is, all patients irrespective of the degree of retinopathy, and for early retinopathy, which uses the results from those patients who have either mild or no retinopathy. Of course, the specificity will be the same in both cases.

TABLE 7  EURODIAB retinopathy grading protocol

Level            Retinopathy features
Mild             HMA and/or hard exudates
Moderate         Very severe HMA in one field, or HMA plus CWS and/or IRMA and/or VB
Severe           Very severe HMA in both fields, or severe HMA in one field plus very severe
                 CWS and/or severe IRMA and/or severe VB
Photocoagulated  Photocoagulation scars
Proliferative    New vessels and/or fibrous proliferations, and/or pre-retinal haemorrhage,
                 and/or vitreous haemorrhage

HMA, haemorrhages and microaneurysms; CWS, cotton-wool spots; IRMA, intra-retinal microvascular anomalies; VB, venous beading; severe, lesion present ≥ standard photograph 1 but < standard photograph 2; very severe, lesion present ≥ standard photograph 2.

TABLE 8  Frequency of retinopathy levels and clinically significant macular oedema (maculopathy) in patients at the first visit

Retinopathy level  No.     Clinically significant macular oedema  No.
None               429     Absent                                 559
Mild               108     Present                                24
Moderate           36
Severe             6
Very severe        0
Early PDR          5
High-risk PDR      2

Health Technology Assessment 2003; Vol. 7: No. 30. © Queen's Printer and Controller of HMSO 2003. All rights reserved.
FIGURE 10 (a) The proportion of patients with manually graded retinopathy present in the left eye only, the right eye only, or both. (b) The proportion of digital red-free eye-field pairs with manually graded retinopathy present in either the macular field only, the nasal-disc field only, or both. (c) The distribution of retinopathy throughout the four fields obtained using the EURODIAB protocol. (d) The number of fields exhibiting retinopathy according to the level of retinopathy.

TABLE 9  The detection of any retinopathy

                                     All retinopathy                   Early retinopathy
Screening modality                   Sensitivity (%)  Specificity (%)  Sensitivity (%)  Specificity (%)
35-mm colour slide                   89               89               85               90
Manual grading of digital images     83               79               81               81
Optometrists                         75               82               71               82
Automated grading of digital images  83               71               82               71

Automated analysis in diabetic macular oedema

In this part of the study, 583 patients were entered; 24 cases of clinically significant macular oedema (retinal thickening within a one disc diameter radius of the centre of the macula) were diagnosed by the ophthalmologists (Table 8). The performance of the optometrists, the manual grading of red-free digital images and the manual grading of 35-mm colour slides was determined using the ophthalmologists' clinical grading as the gold standard (Table 10).
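Sensitivity and specificity figures such as those in Table 9 are computed from 2 × 2 agreement counts against the gold standard. A minimal sketch, with counts chosen purely for illustration (they are not the study's data, though they reproduce the automated system's 83%/71% figures):

```python
# Sensitivity and specificity from a 2x2 comparison against a gold standard.
# tp/fn: gold-standard abnormal patients detected/missed by the screening test;
# tn/fp: gold-standard normal patients passed/flagged by the screening test.

def sens_spec(tp, fn, tn, fp):
    sensitivity = tp / (tp + fn)   # proportion of abnormal cases detected
    specificity = tn / (tn + fp)   # proportion of normal cases correctly passed
    return sensitivity, specificity

# Illustrative counts: 83 of 100 patients with retinopathy detected,
# 71 of 100 patients without retinopathy correctly passed.
se, sp = sens_spec(tp=83, fn=17, tn=71, fp=29)
```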
Sight-threatening retinopathy

Sight-threatening retinopathy was defined as that classed as moderate or worse on the retinopathy rating scale and/or the presence of clinically significant macular oedema. The performance of the optometrists, the manual grading of red-free digital images and the manual grading of 35-mm colour slides at detecting retinopathy in such patients, irrespective of agreement in grading, is shown in Table 11. Once again, the ophthalmologists' clinical grading is taken as the gold standard. In these results sight-threatening retinopathy for the automated analysis is defined as patients with microaneurysms or exudates in the macula; as such it does not grade the severity of the retinopathy.

FIGURE 11 Comparison of grading produced by screening techniques with the ophthalmologist gold standard for patients with (a) no retinopathy and (b) mild retinopathy

TABLE 10  The detection of clinically significant macular oedema by optometrists, by grading 35-mm colour slides and by grading digital images

Screening modality                   Sensitivity (%)  Specificity (%)
35-mm colour slides                  83               84
Manual grading of digital images     83               83
Optometrists                         46               92
Automated grading of digital images  76               85

TABLE 11  The detection of sight-threatening retinopathy by optometrists, by grading of 35-mm colour slides and by grading of digital images

Screening modality                       Sensitivity (%)  Specificity (%)
35-mm colour slides                      96               89
Manual grading of digital images         93               87
Optometrists                             73               90
Automated grading of digital images (a)  77               88

(a) Clinically significant macular oedema only.

Is the nasal-disc field required?

Many established screening protocols obtain only a macular view, chiefly because detection of sight-threatening retinopathy has been the main objective in the past. As Figure 10(b) shows, according to a clinician interpreting the 35-mm colour slides, 8% (45/580) of eye-field pairs showed retinopathy in the nasal-disc field only, rising to 14% (91/644) when digital photography was used. If the data are analysed in terms of patients, then had single-shot macula-only fields been used, 22/322 (7%) of patients with any retinopathy and 6/214 (3%) of patients with sight-threatening retinopathy would have gone undetected (Table 12). The corresponding results for manual analysis of the red-free images were 59/366 (16%) for any retinopathy and 1/217 (0.5%) for sight-threatening retinopathy. In terms of sensitivity and specificity (Table 13), the use of a single macular field has no significant effect for sight-threatening retinopathy.

Comparison of optometrist with ophthalmologist

The difference between optometrists and ophthalmologists (gold standard) in the grading of retinopathy is shown in Figure 12. In 74% (531/718) of patients there is agreement over the grading, and the optometrists have a sensitivity of 76% and a specificity of 82% for the task of deciding whether or not the images are abnormal (Table 9), levels comparable with the other modalities. For the detection of macular oedema there is more of a problem. The overall agreement on grading is 90% (631/696, excluding three 'did not attends'), but this largely reflects the 93% agreement over the 676 normals that were present. The optometrists failed to detect clinically significant macular oedema in half of the abnormals (11/22). This probably reflects a lack of experience in detecting macular oedema, as in everyday practice optometrists would see mainly normal eyes.
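The macula-only analysis can be reproduced directly from the Table 12 cross-tabulation (35-mm colour slides; rows are the nasal-field grade, columns the macular-field grade). This sketch recomputes the 22/322 (7%) and 6/214 (3%) figures quoted above:

```python
# Table 12 counts (35-mm colour slides). Each tuple gives, for one nasal-field
# grade, the counts by macular-field grade: (none, mild, sight-threatening).
table12 = {
    "none":              (616, 22, 34),
    "mild":              (22, 64, 84),
    "sight-threatening": (0, 6, 90),
}

# Patients with any retinopathy: everyone except none in both fields.
total_any = sum(sum(row) for row in table12.values()) - table12["none"][0]
# Missed by a macula-only view: macular field normal but nasal field abnormal.
missed_any = table12["mild"][0] + table12["sight-threatening"][0]

# Sight-threatening retinopathy in either field.
total_st = sum(table12["sight-threatening"]) + table12["none"][2] + table12["mild"][2]
# Missed: nasal field sight-threatening but macular field only none or mild.
missed_st = table12["sight-threatening"][0] + table12["sight-threatening"][1]

print(f"any: {missed_any}/{total_any}, sight-threatening: {missed_st}/{total_st}")
```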
Discussion

For historical reasons, a larger number of patients attend the hospital diabetic clinic in Aberdeen than might be expected in other areas. This is reflected in the demographics of the population studied, which has a higher prevalence of type 2 diabetes than might be expected in a hospital clinic. Although the majority of people in the study were of working age, it is of concern that 528 of 1114 patients approached either declined to take part or failed to attend once recruited. This probably reflects the fact that patients were already being screened, and might have implications for any stand-alone retinopathy screening programme.

The reference standard chosen was that of a consultant, or his or her specialist registrar, performing slit-lamp biomicroscopy. Although seven-field stereoscopic photography is often used as a reference in research programmes, it is too cumbersome for everyday clinical practice.

TABLE 12  Number of patient visits with retinopathy by field: analysis made using 35-mm colour slides

                                        Macular field
Nasal field        No retinopathy  Mild retinopathy  Sight-threatening  Total
No retinopathy     616             22                34                 672
Mild retinopathy   22              64                84                 170
Sight-threatening  0               6                 90                 96
Total              638             92                208                938

TABLE 13  Effect of using a single macular view for any retinopathy and sight-threatening retinopathy

                                                         All fields           Macula only
Condition     Screening modality                   Sens. (%)  Spec. (%)  Sens. (%)  Spec. (%)
Any           35-mm colour slides                  89         89         86         92
retinopathy   Manual grading of digital images     83         79         80         88
              Automated grading of digital images  83         71         79         80
Sight-        35-mm colour slides                  96         89         95         89
threatening   Manual grading of digital images     93         87         93         87
retinopathy   Automated grading of digital images  77         88         77         88
Slit-lamp biomicroscopy is the standard tool used by ophthalmologists to examine for diabetic retinopathy and was felt to be the most relevant reference standard for screening. It is also potentially the most sensitive standard, as more retina is visualised than by the two 50° fields of the EURODIAB photographic protocol or the seven 35° fields of the ETDRS photographic protocol.

Two grading protocols were used, one for the 35-mm colour slides and the digital images and the other for slit-lamp biomicroscopy. The EURODIAB grading protocol was used for the slides and the digital images as it has been shown to correlate with the interim ETDRS retinopathy severity grades. The interim ETDRS retinopathy severity scale was modified for slit-lamp biomicroscopy, thus allowing comparison between the two techniques despite the different fields of view. From a practical point of view, the presence of four blot haemorrhages in any one quadrant was used as the definition of severe retinal haemorrhages, as this is a rough approximation to the reference slide used by the ETDRS.

In terms of sensitivity and specificity there was little to choose between conventional photography and digital photography for detecting any or early retinopathy. For sight-threatening retinopathy the optometrists had the lowest overall sensitivity (73%), although their specificity (90%) was comparable to that of the other modalities. When looking at clinically significant macular oedema (maculopathy), a sub-set of sight-threatening retinopathy, the optometrists performed particularly badly, with a sensitivity of only 46%; their specificity of 92% was comparable to the other modalities. Detection of macular oedema is difficult and, with hindsight, requires more practice than they received.
Automated detection of diabetic retinopathy has progressed rapidly in the last decade, and commercial programs are now becoming available.48–50 Recently, Lee and colleagues49 have published their work, with apparently similarly good results. Unfortunately, no details of the methods used or the resolution of the images were published, as they wished to protect this information prior to commercialisation. Our own work started with fluorescein angiography, but for this study it was realised that this was not a practical modality for screening, and the computer algorithms were modified to work with high-resolution red-free digital images instead.

Automated techniques have the advantage of repeatability. Individual human graders tend to have their own varying internal reference standards that are difficult to make conform, despite training, leading to intra- and inter-observer variability. With the algorithm used in our automated detection program, however, it is necessary to select the particular combination of sensitivity and specificity at which the algorithm operates, as discussed in the section 'Calibrating and testing the microaneurysm detector' (p. 16). Grading of photographs or digital images is a repetitive task, and where two-thirds are expected to be normal this leads to fatigue and boredom. Computers suffer from none of these complaints, although they are heavily reliant on images being of sufficient quality to permit analysis.

FIGURE 12 Comparison of retinopathy grades derived by optometrists with those produced by grading 35-mm colour slides or digital red-free images
Images will therefore have to be graded for image quality before being processed. This is, however, a far simpler and quicker task than actually grading the image. Also, as was shown in the section 'Grading image quality' (p. 13), it is easier to obtain good-quality images with the digital system.

We have been able to achieve good sensitivity and specificity for the detection of the presence of any diabetic retinopathy. At face value that may not seem a great achievement, but when one considers the vast number of patients with diabetes in the UK who require screening, it is no mean feat. In a previous study comparing the automated detection of retinopathy, using only microaneurysms, with 35-mm colour slides graded by a research fellow,48 we were able to achieve a sensitivity of 85% and a specificity of 76% for detecting whether or not a patient had retinopathy. As pointed out in that paper, since the decision on the presence of retinopathy requires four images to be analysed, this task will have a higher sensitivity but lower specificity than detecting retinopathy in a single image. Assuming a prevalence for diabetic retinopathy of 30%, this would have meant that 51% of that population would be correctly classified as having no retinopathy. In the current study, in which ophthalmologists using stereo biomicroscopy were the gold standard, considering all grades of retinopathy we achieved a sensitivity of 87% but a lower specificity of 71% with the automated system. Despite this, automated grading would still correctly classify 48% of a theoretical diabetic population as having no retinopathy. Clearly, such results depend on the population examined. For sight-threatening retinopathy the specificity rises to 88%, whereas the sensitivity, at 77%, is lower than that achieved by the other modalities.
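The fraction of a screened population correctly labelled as retinopathy-free is approximately specificity × (1 − prevalence). A rough check of the reasoning above (the report's own 48% and 51% figures were derived from its patient-level data, so this formula only approximates them):

```python
# Fraction of a screened population correctly classified as having no
# retinopathy, given the test's specificity and the disease prevalence.

def fraction_cleared(specificity, prevalence):
    return specificity * (1.0 - prevalence)

# Automated grading: 71% specificity, with the assumed 30% prevalence.
print(round(fraction_cleared(0.71, 0.30), 2))  # prints 0.5
```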
The problem is that we have not been able to detect all forms of sight-threatening retinopathy, as we do not as yet have a computer algorithm to detect new vessel formation. Obviously, automated grading cannot be used on its own, but in the context of a manual grading system it will still greatly reduce the workload by correctly identifying just under half the population as having no retinopathy. In addition, it would appear that automated grading is able to detect the presence of clinically significant macular oedema (maculopathy), using a combination of the microaneurysm detection program and the exudate program, with a specificity comparable to other modalities. This allows graders to target their attention on images with potentially sight-threatening retinopathy.

The use of the automated hard exudate detector as a tool for detecting any retinopathy was not found to be useful. Used in conjunction with the microaneurysm detector program, it reduced the specificity from 71 to 67%, a result not surprising given the specificity shown in Figure 9. As mentioned above, it does have an important role in detecting maculopathy, where the analysis is limited to retinopathy within a one disc diameter radius of the centre of the fovea.

We used the two-field protocol of the EURODIAB study as this has been validated against the seven-field stereoscopic protocol of the ETDRS. Although the EURODIAB retinopathy levels do appear to correlate with the interim ETDRS retinopathy levels, it should be remembered that the protocol was validated against only 24 patients, and thus it cannot be certain that a two-field protocol is sufficient to detect all sight-threatening retinopathy. Using only one field would have failed to detect a significant number of patients with any retinopathy, and in the case of sight-threatening diabetic retinopathy up to 3% of patients would have been missed.
As the sensitivity and specificity changed little, in some circumstances the use of macular fields only might be regarded as acceptable.

Chapter 6

Question 2: can a digital system detect progression of retinopathy? Question 3: can a digital system determine when treatment is required?

Introduction

The previous chapter examined the effectiveness of the digital system in providing the ophthalmologist with a reliable, quantitative measure of retinal pathology. In this chapter, the value of this information as a tool for monitoring the progression of the disease will be evaluated. The important clinical questions are whether a digital system can detect progression of retinopathy and whether it can be used to determine when treatment is required. An additional facet of this study is to look at the natural history of retinopathy. Therefore, in addition to those patients initially assigned to this study, that is, those diagnosed on the initial screening study as having moderate retinopathy or worse, or clinically significant maculopathy, a group of patients with no or mild retinopathy was included.

Study protocol

It had originally been planned to study only patients with sight-threatening retinopathy. It proved impossible to obtain sufficient patients, mainly because those with such retinopathy usually underwent treatment before progression could be measured, and so the entry requirement was relaxed. Thirty-seven patients with mild or no retinopathy who were participating in the screening arm also agreed to take part in this study. A total of 81 patients attended for the first appointment; the spread of retinopathy is shown in Table 14. Of these, 38 patients, 16 of whom had sight-threatening retinopathy, completed the full series of three fluorescein angiograms and fundus photographs taken every 6 months over a period of 12 months.

TABLE 14  Retinopathy of those patients entering the study

Level of retinopathy  Total (a)  Those undergoing three fluorescein studies
None                  1          0
Mild (b)              36         22
Moderate              31         13
Severe                6          3
Very severe           0          0
Early PDR             5          0
High-risk PDR         2          0

(a) Of these patients, 21 also had maculopathy.
(b) Of these patients, eight also had maculopathy.

The fluorescein angiograms were analysed for the presence of microaneurysms and the number of new, static and regressed microaneurysms, as described in the section 'Turnover of microaneurysms' (p. 18). Patients also had a variety of potential risk factors measured, including body mass index, blood pressure, HbA1c and microalbuminuria. These data are presented in the section 'Risk factors' (p. 31).

Can digital red-free photography monitor progression of retinopathy?

Crucial to using the digital fundus camera to measure change in retinopathy is an assessment of the reproducibility of retinopathy gradings. Figure 13(a) shows how the results from a clinician manually grading the digital red-free images vary when assessed against the gold standard; the results for 35-mm colour slides are shown in Figure 13(b). There can be significant variation in the grading given: between 10 and 18% of images classified as mild are graded as moderate, and up to 30% of moderates are graded as mild. This intrinsic variability in grading reproducibility will limit the extent to which a change in retinopathy level can be measured. The optometrists' results (Figure 13c) show a greater spread in gradings; for example, only 52% of the milds were correctly classified.

Can automated analysis of digital images follow progression of retinopathy?
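Grading reproducibility of the kind shown in Figure 13 is conveniently summarised by row-normalising a confusion matrix of assigned grade against gold-standard grade. A minimal sketch; the counts below are invented for illustration (e.g. 15 of 100 gold-standard 'mild' images graded 'moderate', within the 10–18% range reported above), not the study's data:

```python
# Row-normalised confusion matrix: confusion[i][j] counts images with
# gold-standard grade i that a grader assigned grade j. Rows are converted
# to percentages so each row shows the spread of grades for one true grade.

GRADES = ["normal", "mild", "moderate", "severe"]

def row_percentages(confusion):
    out = []
    for row in confusion:
        total = sum(row)
        out.append([100.0 * n / total if total else 0.0 for n in row])
    return out

# Hypothetical counts for one grader against the gold standard.
confusion = [
    [90, 10, 0, 0],    # true normal
    [8, 77, 15, 0],    # true mild: 15% graded moderate
    [0, 25, 70, 5],    # true moderate
    [0, 0, 30, 70],    # true severe
]
pct = row_percentages(confusion)
```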
It was postulated in the proposal for this study that a more objective measure of retinopathy, in particular the number of microaneurysms, might provide a means for assessing progression of the disease. Figure 14 shows, for each patient retinopathy grade, the mean number of microaneurysms per patient and the variation in that number. Results are given for both red-free images (a) and fluorescein angiograms (b). Although there is a general trend towards more microaneurysms with increasing retinopathy grade, there is considerable variability in the mean number per patient in any group. These results, of course, reflect the ability to use the average number of microaneurysms to measure differences in retinopathy grading. It can be argued that measuring the changes in the number of microaneurysms is likely to be more effective.

FIGURE 13 The ability of a clinician grading (a) digital images and (b) 35-mm colour slides to monitor the progression of diabetic retinopathy. The x-axis indicates the reference grade assigned by the ophthalmologist viewing the patient's retina directly and each set of bars indicates the corresponding spread of values assigned by a clinician grading the 35-mm colour slides of each retinopathy severity group. The numbers in parentheses indicate the population in each group. In (c) the results of optometrist grading are given.
Red-free digital images were taken at a 12-month interval and the data are shown (Figure 15) as a function of the change in grade of retinopathy between the two studies. In Figure 15(a) the mean number of microaneurysms is plotted, and in Figure 15(b) the data are shown as a percentage change. Only two of the groups in which the retinopathy showed a deterioration had sufficient patients for a statistical analysis. There was a statistically significant increase in the number of microaneurysms in those patients who progressed from mild to moderate retinopathy (paired t-test, p = 0.02) and from none to mild (p = 0.01).

FIGURE 14 The mean number of microaneurysms per patient detected in (a) digital red-free and (b) fluorescein angiographic images. Each patient has been graded by an ophthalmologist according to the overall level of retinopathy present. Error bars are the standard deviations.

FIGURE 15 (a) Change in the mean number of microaneurysms with progression of retinopathy, and (b) percentage change in the total number of microaneurysms with progression of retinopathy

Problem of microaneurysm turnover

Figure 16 demonstrates the dynamic nature of microaneurysms in fluorescein angiograms. Taking all levels of retinopathy (Figure 16a), only 45% of the microaneurysms seen on the first visit are present 6 months later, and 36% of those at the 6-month visit are still seen at the 12-month visit. Similar turnover is seen in Figure 16(b) for the sight-threatening retinopathy group (42 and 36%, respectively). As the overall total number of microaneurysms shows relatively little change, the regressed microaneurysms are being balanced by new ones. If one examines the microaneurysm turnover rate for individual patients (Figure 17), where the turnover rate is calculated as the ratio of (a) new or (b) regressed microaneurysms to the total number of microaneurysms, a large spread of values is found. There is no statistically significant difference in the turnover rate of either new or regressed microaneurysms between patients with mild retinopathy and those with sight-threatening retinopathy.

Risk factors

As mentioned previously (p. 7), all patients were assessed medically once a year, as various risk factors are known to be associated with the progression of retinopathy. Patients had various parameters measured, including body mass index, blood pressure, HbA1c and microalbuminuria. Table 15 shows the results of a one-way analysis of variance used to study the relationship between risk factors and level of retinopathy. As can be seen, only triglyceride approaches statistical significance. The main reason for this is that, as mentioned earlier, the number of patients with the more severe levels of retinopathy is very limited, as they would be treated rather than remain in the study. This can be seen from the graph of triglyceride level against level of retinopathy (Figure 18), where most patients are in the mild or moderate groups. Further analysis of these data therefore did not appear to be of value.

Discussion

Microaneurysms are a key lesion in studies of early diabetic retinopathy, since their counts are said to correlate with the severity of early retinopathy and its likely progression.2,3,43 However, if microaneurysm change is to be used on a patient-by-patient basis to make a clinical decision, there are a number of other factors to consider. First, there is the question of consistency in grading the level of retinopathy. It has been shown that there is significant variation between the ratings produced by clinicians and optometrists and the gold standard of the ophthalmologist. Hence the actual definition of the degree of retinopathy has an error associated with it.
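The turnover rate described above (new or regressed microaneurysms as a fraction of the total) can be sketched as follows. The IDs and the choice of the union of both visits as the denominator are assumptions for illustration; the study's exact matching procedure is described in its 'Turnover of microaneurysms' section, not reproduced here.

```python
# Microaneurysm turnover between two visits, given sets of lesion IDs that
# have been matched between the images. 'Total' is taken here as the union
# of the two visits (an assumption; the source does not define it exactly).

def turnover(previous_ids, current_ids):
    """Return (new_rate, regressed_rate) for a pair of visits."""
    new = current_ids - previous_ids          # appeared since last visit
    regressed = previous_ids - current_ids    # disappeared since last visit
    total = len(previous_ids | current_ids)
    return len(new) / total, len(regressed) / total

# Illustrative patient: 5 microaneurysms at visit 1, 6 at visit 2, 3 shared.
new_rate, reg_rate = turnover({1, 2, 3, 4, 5}, {3, 4, 5, 6, 7, 8})
```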
Second, although the number of microaneurysms in general increases with the severity of the retinopathy, there is considerable variation in the mean number per patient. Thus the actual number of microaneurysms does not, in itself, allow the retinopathy to be graded. An analysis of how the average number of microaneurysms changed as the disease developed did show an increase in the number of microaneurysms in those groups of patients whose condition had changed from none to mild and from mild to moderate retinopathy. There were insufficient patients in the other groups. However, as can be seen from the error bars (standard deviations) in Figure 15(a), this was not sufficient to predict the change in retinopathy from microaneurysm counts alone.

FIGURE 16 Microaneurysm turnover as measured from fluorescein angiograms for patients with (a) any retinopathy (38 patients) and (b) sight-threatening retinopathy (16 patients) on their first visit

FIGURE 17 Rate of turnover of microaneurysms, showing the number of (a) new microaneurysms and (b) regressed microaneurysms expressed as a percentage of the total. Each point shows the results for one patient and the patients are classified as having either mild (n = 32) or sight-threatening retinopathy (n = 40).
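The paired t-test used for the before/after microaneurysm comparisons can be sketched with the standard library alone; the counts below are invented for illustration, and only the t statistic is computed (the p-value would come from the t distribution with n − 1 degrees of freedom):

```python
# Paired t statistic for before/after counts on the same patients.
import math
from statistics import mean, stdev

def paired_t(before, after):
    """t = mean(d) / (sd(d) / sqrt(n)) for paired differences d."""
    diffs = [a - b for a, b in zip(after, before)]
    n = len(diffs)
    return mean(diffs) / (stdev(diffs) / math.sqrt(n))  # df = n - 1

# Invented microaneurysm counts for six patients at two visits.
t = paired_t(before=[3, 5, 2, 6, 4, 7], after=[5, 8, 4, 9, 5, 10])
```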
In keeping with the results of other studies, it was shown that on fluorescein angiography there is a significant turnover in the number of microaneurysms even over a period of 6 months, with some disappearing between visits and new ones being formed. Hence it is perhaps not surprising that the absolute number of microaneurysms does not provide an accurate measure of the level of retinopathy.

The use of fluorescein angiograms raises the question of which time point in the sequence gives the best image for detecting microaneurysms. Jalli and colleagues51 have suggested that the phase of the angiogram studied has an important bearing on microaneurysm counts. They looked at microaneurysms in the arterial and late phases of fluorescein angiograms, and other microaneurysm counting studies have also used the arterial phase.46,52 This approach is surprising: microaneurysms develop from capillaries, so to be sure that all microaneurysms are perfused with fluorescein the arteriovenous/early venous phase, as we have used, seems more appropriate. Microaneurysms appearing only in the late phase probably contain thrombus and thus are partially occluded; fully occluded microaneurysms do not fluoresce at all. In this study we only looked at microaneurysms present in the early venous/arteriovenous phase, as we felt that this ensured that all microaneurysms present had been perfused. The mean and range of times (in seconds) at which the images were analysed were 37.2 (11.4–111.8) for visit 1, 32.8 (8.3–57.6) for visit 2 and 30.7 (17.7–52.8) for visit 3. For a particular patient, the difference in the time (in seconds) of the analysed image was 4.5 (range 0.60–73.8) between visits 1 and 2 and 2.1 (range 0.10–32.2) between visits 2 and 3. In essence, this means that we only identified perfused microaneurysms, namely those of a fusiform or saccular nature.
By doing so we have included the very beginnings of microaneurysm formation, namely fusiform capillary dilation, and excluded the very late stages where microaneurysms are occluded and no longer increasing in size. This is important, as poorly perfused microaneurysms may exist in this state indefinitely but are no longer 'active'.

Given the dynamics of microaneurysms on fluorescein images, the potential of the rate of microaneurysm turnover as a marker of retinopathy progression was investigated. Although relatively small numbers of patients were available for analysis, there appeared to be no correlation between turnover rate and level of retinopathy over a 6-month period.

TABLE 15  Significance of relationship between risk factors and level of retinopathy

Risk factor          p-Value  No. of readings
Body mass index      0.13     55
Creatinine           0.29     76
Systolic BP          0.25     38
Diastolic BP         0.5      41
Fasting cholesterol  0.92     66
Haematocrit          0.69     69
Haemoglobin          0.59     79
HbA1c                0.61     71
HDL                  0.92     71
LDL                  0.4      73
Microalbuminuria     0.21     52
Platelet count       0.33     73
White cell count     0.79     73
Triglyceride         0.076    73

BP, blood pressure; HDL, high-density lipoprotein; LDL, low-density lipoprotein.

FIGURE 18 Triglyceride level as a function of level of retinopathy (n = 73)

Chapter 7

Costs and consequences of screening for diabetic retinopathy

Introduction

This chapter is concerned with the economic evaluation of three alternative methods of providing screening for diabetic retinopathy as used in this study: digital photography, analogue photography and direct ophthalmoscopy carried out by trained optometrists. The methods have already been described in Chapter 2.
The requirements of the research programme were such that some costs were incurred that would not be part of a normal service setting (research costs) and, where possible, these have been adjusted for or are explained in the text. The comparison of the screening methods does not include the costs of administering a call and recall system, and the equipment does not include visual acuity charts and light boxes, as both of these would be common to all screening programmes. Costs for the photographic screening methods were calculated as a cost per session and converted to a cost per patient using a baseline assumption of 10 patients per session.

Digital photography costs

Capital costs
The photography was carried out in a Portacabin dedicated to the project, although it is possible for this technology to be provided on a mobile basis. The equipment consists of a basic fundus camera with a digital attachment and associated computer equipment and software. The building cost was discounted over 20 years and the equipment cost over 7 years, although it is recognised that technical obsolescence may set in much earlier. Other assumptions will be considered in the sensitivity analysis.

Running costs
Running costs for the building were allocated from hospital costs on the basis of the floor area. Staff costs per session took into account a nurse to carry out initial visual acuity checks and administer eye drops, the photographer, and a senior registrar to read and report on the results. (The research registrar undertook some of these tasks but the grades of staff for the costing are those considered appropriate to the tasks.) The session time for the photographer allows for the carrying out of related administrative tasks.

Consumables
Consumables consisted of eye drops.

Automated grading costs
The costs for screening include the cost of medical staff time to read and report the results. 
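The ‘discounted over N years’ figures above are annual equivalent costs. As a hedged illustration (the discount rate is not stated in this excerpt; 6% and the £100,000 outlay are assumptions purely for the example), the standard capital-recovery formula can be sketched as:

```python
def equivalent_annual_cost(capital, rate, years):
    """Convert a one-off capital outlay into a constant annual charge
    over its assumed life (standard annuity / capital-recovery formula)."""
    return capital * rate / (1 - (1 + rate) ** -years)

# Illustration only: a 100,000 GBP system written off over 7 years at an
# assumed 6% discount rate costs roughly 17,900 GBP per year.
camera_annual = equivalent_annual_cost(100_000, 0.06, 7)
```

Shorter assumed lives raise the annual charge, which is why the sensitivity analysis later in this chapter re-costs the digital equipment over 5 and 3 years.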
This project developed the software for automated grading of digital images. The development costs consist of staff time and this cost has been discounted over 7 years, to be consistent with the equipment costs; in the costing it replaces the medical staff time for reading the images.

Colour slide photography costs

Capital costs
The photography was carried out in the same setting as the digital photography. Equipment consists of a basic fundus camera with a 35-mm slide attachment. The same fundus camera was used for both types of photography but the full cost was included both times, as only one type of photography would be carried out in practice. The building cost was discounted over 20 years and the equipment cost again over 7 years, in the first instance.

Running costs
Running costs for the building and staff costs per session were the same as for the digital photography.

Consumables
Consumables consisted of eye drops, film and developing materials.

Optometrist screening costs
Optometrists carrying out screening in the project were given training at the start of the project, as described in the section ‘Slit-lamp biomicroscopy’ (p. 21), and a poster showing the various features of retinopathy was provided for reference. A standard fee of £18.00 per patient was paid for the screening. The optometrists were using facilities and equipment that would be used for normal sight testing.

Results
The costs for digital and analogue screening are shown in Table 16. The digital camera system is slightly cheaper and both camera systems were less costly than the payment to optometrists. However, these costs are simply for the initial screening visit and the reading of the results. The number of repeat visits, because of poor images, and the referral rate for assessment have to be taken into account. A cohort analysis has been carried out for this purpose and the results are shown in Table 17. 
Repeat screens are only relevant for photographic screening and the rates were 5% for digital and 10% for analogue photography. The number of patients called for an assessment visit depends upon the sensitivity and specificity of the screening test. Based on the figures given in the section ‘Detection of retinopathy’, and assuming a prevalence of sight-threatening diabetic retinopathy of 6%, the number of assessment visits would be 175 for digital (graded by research registrar), 159 for digital (automated grading), 171 for 35-mm colour slides and 138 for optometrists. These visits have been costed at £51 each, the average cost for an outpatient attendance. From Tables 16 and 17, it can be seen that digital screening with automated grading has the lowest cost per screen, and remains the least costly method when the additional factors of repeat visits and number of assessment visits are included. However, it is more expensive than digital imaging with medical staff reading the images or 35-mm colour slide photography in terms of cost per true positive detected. This is largely due to the much lower sensitivity achieved in automated reading of the digital images.

The assumptions for Table 16 were as follows:

- Building area based on two standard rooms plus circulation space, giving 33.8 m².
- Building cost discounted over 20 years gives an annual equivalent cost of £2653 (for comparison, the capital charge on the area assumed would be £2501).
- Session costs based on 460 sessions per year to allow for holidays/sickness/equipment breakdowns, etc.
- Cost per patient based on 10 patients per session.
- Camera costs allocated between fundus/digital/analogue as £54,000 for the entire digital system and £35,000 for the analogue system.
- Running costs cover cleaning/maintenance/rates and power.
- Film costs £9.82 per film, assumed to include processing. 
- Nurse costed as grade E, working 46 weeks per year and allocating 3.5 hours per session to allow for time getting ready or over-running; salary based on scale average including employer’s on-costs.
- Photographer cost based on MLSO-grade salary.
- Senior registrar cost based on scale average including employer’s on-costs; one session taken as 0.1 of the working week; reads 17.5 patient films; session cost calculated for 10 patients.
- Development costs for software taken as the whole of 3 years’ salary for grade RA1.

TABLE 16 Screening costs (£ 1998–9)

                                      Manual interpretation   Automated grading    35-mm
                                      of digital images       of digital images    Colour slides
                                      Session   Patient       Session   Patient    Session   Patient
Building                              5.77      0.58          5.77      0.58       5.77      0.58
Equipment: digital                    21.03     2.10          21.03     2.10
Equipment: analogue                                                                13.63     1.36
Running costs                         4.41      0.44          4.41      0.44       4.41      0.44
Drops                                 0.50      0.05          0.50      0.05       0.50      0.05
Film and processing                                                                24.55     2.46
Staff: nurse                          38.91     3.89          38.91     3.89       38.91     3.89
Staff: photographer                   27.58     2.76          27.58     2.76       27.58     2.76
Staff: senior registrar               42.87     4.29                               42.87     4.29
Automated grading of digital images                           26.09     2.61
Total                                 141.07    14.11         124.29    12.43      158.22    15.82

Combining modalities – a modelling exercise
It is possible that automated grading could be used in conjunction with medical staff and this can be modelled using results from this study. The automated system would be used to report any retinopathy, as it has the better sensitivity (87%), although its specificity is worse (71%). Medical staff carrying out a reading of the reported positives would eliminate some false positives. The medical staff would have to read images for 324 out of 1000 patients (based on a prevalence of 6% for sight-threatening diabetic retinopathy). It is assumed that in this model 48 true positives would be detected (automated grading would detect 52 and medical grading would identify 93% of these). 
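The referral and cost figures in this chapter follow from simple expected-value arithmetic. A minimal sketch (cohort of 1000 at 6% prevalence, using the rounded sensitivities and specificities quoted in the report; the combined-model line items are taken as given from Table 18):

```python
def assessment_visits(sens, spec, n=1000, prevalence=0.06):
    """Expected referrals = true positives + false positives
    for a screened cohort."""
    cases = n * prevalence
    true_pos = sens * cases
    false_pos = (1 - spec) * (n - cases)
    return true_pos + false_pos

# Automated grading (sens 77%, spec 88%) and optometrists (73%, 90%)
auto_visits = round(assessment_visits(0.77, 0.88))   # 159
opto_visits = round(assessment_visits(0.73, 0.90))   # 138

# Combined model: automated first pass, medical regrade of the 324
# flagged patients at 4.29 GBP each, plus repeats and assessment visits.
total = 12_430 + round(324 * 4.29) + 621 + 8_313
cost_per_true_positive = round(total / 48)
```

With these inputs the combined model totals £22,754, or £474 per true positive, matching Table 18.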
It is further assumed that medical reading would continue to grade as false positive the same absolute number of patients (113) as were graded false positive when reading all 1000 cases. These are the least favourable assumptions to be made about the effect of combining the two modalities of reading. The results in Table 18 show that the cost per true positive remains higher than for digital imaging and medical staff grading alone. Altering the assumption about the combined sensitivity of the reading methods makes little difference. However, if the absolute number of false positives was reduced, this could affect the result. The most optimistic assumption would be that medical staff would continue to achieve 87% specificity when reading only the positives selected by automated grading. This would reduce the number of false positives to 36, giving a combined specificity of 96%, and would reduce the cost per true positive to £375. Hence it is the combined specificity of the two modalities that is important in estimating the relative cost-effectiveness of the screening.

TABLE 17 Costs for a cohort of 1000 patients screened (£ 1998–9)

                                    Manual grading      Automated grading   35-mm           Optometrist
                                    of digital images   of digital images   Colour slides
Screening cost                      14110               12430               15820           18000
Repeat screens                      706                 621                 1582
Assessment visits                   8721                8109                8721            6477
Total                               23537               21160               26123           24477
True positives detected (from 60)   56                  46                  58              42
Cost per true positive detected     420                 460                 450             583

TABLE 18 Model for a cohort of 1000 patients – automated grading combined with medical staff grading

Screening cost – automated grading of digital images    £12430
Regrading of positives by medical staff (324 × 4.29)    £1390
Repeat screens                                          £621
Assessment visits                                       £8313
Total                                                   £22754
True positives detected (from 60)                       48
Cost per true positive detected                         £474

Sensitivity analysis

Prevalence
Increasing the assumed prevalence of sight-threatening diabetic retinopathy reduces the cost per true positive of all screening methods without affecting the ranking between methods (Table 19).

Throughput per session
The estimates in Table 16 are based on a throughput of 10 patients per session for both methods of photographic screening. The number of patients per session would have to fall below seven before all of the photographic methods became more expensive than optometrist screening, per patient screened (Table 20). The throughput per session would have to fall below six before it became more expensive than optometrist screening, per true positive detected. The ranking of alternative photographic methods would be the same unless different throughput rates were assumed for different methods. It may be that throughput would be lower for digital imaging because of the opportunity for patient education using the captured images. If throughput per session was eight or less for digital imaging, and remained at 10 for analogue imaging, then analogue imaging would be less expensive in terms of cost per screen and cost per true positive detected.

Assumed life of equipment
Equipment costs have been calculated on an assumed life of 7 years, but replacement may occur much earlier because of developments in technology. This is more likely to affect the digital systems and the costs were recalculated using 5 years and 3 years as the assumed life (Table 21). If digital equipment is replaced after 3 years, the cost per screen becomes more expensive than analogue screening. The cost per true positive remains just below the figure for the analogue screening, at £447 compared with £450. 
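The throughput and software-volume analyses both reduce to dividing a fixed cost by volume. A sketch using the Table 16 session costs and the 460-sessions-per-year costing assumption stated earlier (dictionary keys are illustrative labels, not from the report):

```python
SESSION_COST = {            # GBP per session, from Table 16
    "manual_digital": 141.07,
    "automated_digital": 124.29,
    "colour_slides": 158.22,
}

def cost_per_patient(method, patients_per_session):
    """Table 20: the fixed session cost spread over throughput."""
    return round(SESSION_COST[method] / patients_per_session, 2)

def software_cost_per_screen(screens_per_year,
                             per_session=26.09, sessions_per_year=460):
    """Annual software charge (about 12,000 GBP) spread over total screens."""
    return per_session * sessions_per_year / screens_per_year
```

For example, `cost_per_patient("manual_digital", 8)` reproduces the 17.63 of Table 20, and spreading the software cost over 50,000 screens a year brings it down to about £0.24 per screen.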
Automated grading – software costs
The development costs for the software have been treated as equipment costs and have been spread over 7 years. Reducing this time period would further increase the costs of automated reading. However, the costs have also been applied to screening in one location only. If the system were adopted more widely, the cost per screen would fall. In order for the cost per true positive to equate with the cost based on medical staff grading, the cost per screen for the software would have to fall to £0.24. This is equivalent to screening 50,000 patients per year.

TABLE 19 Costs for a cohort of 1000 patients screened (£ 1998–9), at an assumed prevalence of 12% (upper figures) and 8% (lower figures)

                               Manual interpretation   Automated grading   35-mm           Optometrist
                               of digital images       of digital images   Colour slides
Screening cost                 14110                   12430               15820           18000
Repeat screens                 706                     621                 1582
Assessment visits (12%)        11271                   10098               11322           8364
Assessment visits (8%)         9537                    8772                9588            7089
Total (12%)                    26087                   23149               28724           26364
Total (8%)                     24353                   21823               26990           25089
True positives detected (12%)  112                     92                  116             84
True positives detected (8%)   75                      62                  78              56
Cost per true positive (12%)   232                     252                 248             314
Cost per true positive (8%)    325                     352                 346             448

TABLE 20 Effect of throughput on cost per screen

Cost per patient screened (£)
No. of patients   Manual interpretation   Automated grading   35-mm
per session       of digital images       of digital images   Colour slides
10                14.11                   12.43               15.82
9                 15.67                   13.81               17.58
8                 17.63                   15.54               19.78
7                 20.15                   17.76               22.60
6                 23.51                   20.72               26.37

Conclusions
The costing information presented above reflects those costs incurred in the trial. Their extrapolation to the more general context of a national screening programme needs to be treated cautiously. Optometrist analysis was the least cost-effective. As mentioned in the section ‘Comparison of optometrist with ophthalmologist’ (p. 25), to some extent this probably reflects the lack of practice in detecting macular oedema, as in everyday practice optometrists would see mainly normal eyes. Care was taken in training optometrists for this study, but performance would be expected to improve with further experience. The automated system, either on its own or in conjunction with medical staff, is more expensive per true positive detected than manual grading. To make it competitive, the cost per patient has to be reduced to £0.23 which, at the current cost, implies annual screening of 50,000 patients, a number that could be encountered in a national screening programme. The main problem was the relatively low sensitivity of the software for detecting sight-threatening diabetic retinopathy; in part this was because the software was unable to detect new vessels. Undoubtedly the sensitivity of software can be expected to improve as further development continues. Although manual grading appeared most cost-effective, the main problem, which has not been investigated, is the cost of setting up a manual screening service and ensuring that the quality of reporting remains consistent. Although it is not unreasonable to expect a trained grader to perform as well as the senior registrar used in this study, introducing quality control measures will have a cost implication.

TABLE 21 Effect of alternative assumptions about replacement of digital equipment

Cost per patient screened (£)
                     Manual interpretation   Automated grading   35-mm
                     of digital images       of digital images   Colour slides
Baseline (7 years)   14.11                   12.43               15.82
5 years              14.79                   13.11
3 years              16.40                   14.72

Chapter 8 Conclusions

Current study
It is estimated that there are 1100 new cases of blindness every year secondary to diabetic retinopathy. 
Early detection of sight-threatening retinopathy enables laser therapy to be performed to prevent or slow the onset of visual loss. At present the commonest cause of visual impairment is ischaemic maculopathy, which is untreatable by surgical means. Early detection of retinopathy with aggressive management of metabolic control and blood pressure will therefore be the most effective way of meeting the St Vincent targets for visual impairment, thus reducing the social and economic toll of this complication. The European Retinopathy Working Party recommended a sensitivity of 80% and a specificity of 95% for screening programmes, but conventional screening modalities are failing to achieve these targets and, in addition, the provision of conventional screening modalities is inadequate and inequitable in the UK. The digital fundus camera is a promising development. It makes regular assessment more feasible since, unlike current approaches, it does not require a skilled assessor to see each patient. It also offers the potential for introducing automated screening of retinal images with the associated consistency in interpretation. It is ideally suited to quality assurance as it produces a hard copy that can be assessed, unlike subjective techniques such as slit-lamp biomicroscopy. One potential concern is whether the quality of the digitised image is sufficiently high for the purpose of screening. Ergonomically, digital imaging was found to be more effective than colour slides. Digital photography produced fewer ungradable images, and the number of individuals needing to be recalled for repeat photography was reduced by about 50% when a digital rather than a conventional photographic camera was used for image capture. This was mainly due to the fact that the photographer had immediate feedback on the quality of the images by being able to view them instantaneously on the computer screen. 
If the image was not of sufficient quality the photographer simply repeated the photograph until one of sufficient quality could be obtained. So far as image quality was concerned, a grading system based on the clarity of features such as the nerve-fibre layer and vessels demonstrated that the finest quality photographs were produced by conventional photography. This is not surprising, as the maximum resolution of conventional photographs is approximately 2–3 times better than that of the digital camera used in this study. However, in terms of producing images of a quality acceptable for screening, the digital images were superior to the colour slides. When image quality was analysed by field, the poorest quality field tended to be the nasal field, reflecting the need for a widely dilated pupil and for the absence of peripheral cortical cataract (a common clinical finding) in order to obtain high-quality images. Caution must also be exercised over the effect of artefacts on image quality. The presence of dust on images simulating microaneurysms was one problem that we encountered. Previous publications on screening have tended to compare one modality with another but not all modalities together. In this study we have compared trained optometrists using slit-lamp biomicroscopy against conventional mydriatic photography and digital photography. We did not use direct ophthalmoscopy as this has been shown to be an insensitive technique, although it may have a role in sporadic screening or where no screening at all occurs. Red-free digital images were used as they provide the greatest contrast for red abnormalities on a red background. Although other clinicians may not be used to such monochrome images, it is the standard imaging modality for fluorescein angiography, where the highest quality images are required. It was also felt important to attempt to train the optometrists to the same standard as the ophthalmologists in the clinic. 
In terms of sensitivity and specificity, there was little to choose between conventional mydriatic photography and digital photography for detecting any or early retinopathy. For sight-threatening retinopathy the optometrists had the lowest overall sensitivity (73%), although their specificity (90%) was comparable to that of the other modalities. Digital imaging and conventional photography are acceptable methods for screening. Screening by trained optometrists is not sufficiently sensitive for detecting either any retinopathy or sight-threatening retinopathy. In the context of a national screening programme, automated analysis techniques offer the advantages of repeatability and consistency. Individual human graders tend to have their own varying internal reference standards which are difficult to make conform, despite training, leading to intra- and inter-observer variability. In a previous study comparing the automated detection of retinopathy, using only microaneurysms, with colour slides graded by a research fellow,48 we were able to achieve a sensitivity of 85% and a specificity of 76% for detecting whether or not a patient had retinopathy. In the current study, in which examination by ophthalmologists using stereo biomicroscopy was the gold standard, we achieved a slightly lower sensitivity of 83% and a slightly lower specificity of 71%. Although the sensitivity exceeds the 80% recommended by the European Retinopathy Working Party, the specificity falls short of their value of 95%. Assuming a prevalence for diabetic retinopathy of 30%, this means that 48% of this population would be correctly classified as having no retinopathy. This is a significant achievement when one considers the large number of patients with diabetes in the UK who require screening. Clearly, such results depend upon the population examined. 
For sight-threatening retinopathy, the specificity rises to 88% whereas the sensitivity is lower than that achieved by other modalities at 77%. The problem is that we have not been able to detect all forms of sight-threatening retinopathy as we do not as yet have a computer algorithm to detect new vessel formation. In addition, it would appear that automated grading is able to detect the presence of clinically significant maculopathy (using a combination of the microaneurysm detection program and the exudate program), with a specificity of 85%, which is comparable to other modalities, thus enabling graders to target their attention on images with potentially sight-threatening retinopathy. We conclude that automated grading cannot, at present, be used on its own but in the context of a manual grading system it will still greatly reduce the workload by correctly identifying just under half the population as having no retinopathy. For this investigation, we used the two-field protocol of the EURODIAB study. Although more patients had retinopathy confined to the macular field compared with the nasal field, using the macular field only would significantly reduce the chances of detecting any retinopathy, with between 8 and 14% of cases being missed. In the case of sight-threatening diabetic retinopathy, however, the number of missed patients drops to 3% or less, with the sensitivity and specificity being almost the same for one as for two fields. This raises the possibility of adopting a screening programme based on taking only a macular field. Such a decision will depend on the likely incidence of more advanced retinopathy in the screened population. It should also be noted that the sensitivity and specificity achieved by the automated system will depend upon the number of images taken per patient; analysis of four images will lead to a higher sensitivity but lower specificity than for detecting retinopathy in two images. 
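These automated figures for sight-threatening retinopathy are consistent with the cohort arithmetic of the costing chapter (1000 patients at 6% prevalence: 46 of 60 cases detected, 113 of the 940 unaffected falsely flagged). A sketch — the confusion-matrix counts are inferred from the cohort tables rather than stated explicitly in the report:

```python
def sens_spec(tp, fn, fp, tn):
    """Sensitivity = TP/(TP+FN), specificity = TN/(TN+FP),
    returned as whole percentages."""
    return round(100 * tp / (tp + fn)), round(100 * tn / (tn + fp))

# Automated grading, sight-threatening retinopathy,
# cohort of 1000 at 6% prevalence (counts inferred, not quoted)
sens, spec = sens_spec(tp=46, fn=14, fp=113, tn=827)  # (77, 88)
```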
The role of automated grading for assessing the progression of disease was also explored. The value of this approach was fundamentally limited by two factors. First, there is an intrinsic variability in the grading of level of retinopathy by ophthalmologists and optometrists compared with the gold standard. Second, microaneurysms, as assessed on fluorescein angiography, are subject to a rapid turnover, with only about 45% of the microaneurysms still being seen 6 months later. There was a statistically significant increase in the number of microaneurysms in the group of patients whose condition changed from none to mild and the group who progressed from mild to moderate. However, when the actual individual rate of turnover was analysed, there was no significant difference between the mild and the sight-threatening retinopathy groups. However, this aspect of the study was limited by the small number of patients completing it. At the start of the study (see the section ‘Overview of the study’, p. 2), three questions were posed. First, can a digital imaging system detect retinopathy irrespective of type or level? The answer is that, employing a manual analysis of the images, a sensitivity of 83% and a specificity of 79% can be achieved; using an automated analysis, these become 83% and 71%. Second, can a digital imaging system detect progression of retinopathy? The answer is that although it can measure turnover, it has not been shown that this follows the progression of retinopathy. Third, can a digital imaging system determine when treatment is warranted? Since there was no definite answer to question 2, the answer to the third question must be ‘no’ at present.

Relevance to the NHS
Digital imaging is already superseding conventional photography in eye departments. This study has demonstrated that the digital fundus camera does indeed have a significant role to play in ophthalmology. 
Although the finest quality photographs were produced by conventional photography, in terms of producing images of a quality acceptable for screening the digital images were superior to colour slides. Most importantly, digital imaging was found to be more effective than colour slides, producing fewer ungradable images. With the development of higher resolution digital cameras, it is to be expected that conventional photography will eventually be superseded by digital imaging. Digital imaging and conventional photography are acceptable methods for screening; in terms of sensitivity and specificity there was little to choose between conventional mydriatic photography and digital photography for detecting any or early retinopathy. Screening by trained optometrists is, however, not sufficiently sensitive for detecting either any retinopathy or sight-threatening retinopathy. In the context of a national screening programme, automated analysis techniques offer the advantages of repeatability and consistency. The cost-effectiveness analysis showed that although the automated analysis had the lowest cost per screen, in terms of cost per true positive it was more expensive than clinical staff manually reading digital images or colour slides. However, this analysis did not take account of the cost of training screeners and maintaining the quality of their reporting. Also, the cost would drop to that of other approaches in a system that was screening 50,000 patients or more per year. The performance of the software developed for this project meant that, assuming a prevalence for diabetic retinopathy of 30%, 48% of this population would be correctly classified as having no retinopathy. This suggests a two-step procedure for any screening programme. Where no retinopathy is detected by the software, the patient would be recalled in 1 year’s time for repeat screening. Where retinopathy is detected, the image would be manually graded. 
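The two-step procedure described above is, in effect, a triage rule; a minimal sketch (the function name and return strings are illustrative, not from the report):

```python
def triage(software_detects_retinopathy: bool) -> str:
    """First-level automated grading: negatives are recalled in a year,
    positives go on to a human grader."""
    if software_detects_retinopathy:
        return "refer image for manual grading"
    return "recall in 1 year"

# At the performance reported here (48% of a 30%-prevalence population
# correctly cleared by the software), at most ~520 of every 1000 images
# would still reach a human grader.
manual_workload_per_1000 = round(1000 * (1 - 0.48))  # 520
```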
Thus automated first-level grading could considerably reduce the burden of manual grading.

Research recommendations
1. The two-field protocol of the EURODIAB study was used, but analysis of the data shows that the detection of referable retinopathy was as reliable using only the macular field. Single-field imaging could potentially reduce the time taken to perform retinal screening. This observation requires confirmation with a larger number of patients and different referral criteria.
2. The value of an automated grading system to assist in a screening programme has been demonstrated. Further work is required to improve the sensitivity and specificity of such programmes. In particular, it would be of value to develop software to detect new vessel formation and to investigate the potential of using colour information.
3. An insufficient number of patients was recruited to investigate the value of automated grading for the evaluation of disease progression. This should be studied further in conjunction with the development of automated systems.
4. Patient recruitment was poor. Future research is required to ensure effective uptake in a diabetic retinopathy screening programme.

The future
Any screening programme must be able to detect retinopathy with a significant level of accuracy and confidence. The European Retinopathy Working Party recommendations are achievable in terms of sensitivity, although the required specificity was not achieved by any of the modalities tested. The recommendations of 80% sensitivity and 95% specificity are not based on any scientific evidence and whether such high levels are actually required has yet to be determined. Perhaps the major advantage that digital imaging has over all other screening modalities is its suitability for quality assurance. For any screening programme this must be a major concern. 
Until the skills of slit-lamp biomicroscopy can be raised to those of the ophthalmologist, the role of high-street optometrists might be limited to digital retinal photography as part of a national screening network. The cost-effectiveness of the automated approach will be affected by developments in technology. On the hardware side, the major development is the arrival of digital cameras with a higher resolution than the 1024 × 1024 acquisition matrix of the Topcon system we used. For example, the current generation (2002) of cameras offer a resolution of 2160 × 1440 pixels. There will undoubtedly be improvements in automated analysis software and this has been identified as an area for further research.

Acknowledgements
The authors acknowledge the contributions of Mrs Alison Farrow and Ms Sandra McKay, who were responsible for the retinal photography. The assistance of Dr K Seipman with the translation of foreign language publications is gratefully acknowledged. The Health Services Research Unit is core funded by the Chief Scientist Office of the Scottish Office, Department of Health.

Contributions of the authors
PF Sharp (Professor of Medical Physics) was overall project coordinator. J Olson (Senior Registrar in Medical Ophthalmology) was coordinator for clinical research. F Strachan (Research Fellow) was a clinical researcher and lead for the systematic literature review. J Hipwell (Research Fellow) carried out software development and image analysis. A Ludbrook was responsible for health economics aspects. M O’Donnell (Research Assistant) and S Wallace (Research Assistant) provided assistance with the systematic literature review. K Goatman (Research Fellow) undertook data analysis. 
A Grant (Unit Director) gave advice on the systematic literature review. N Waugh (Director) provided expertise on purchasing perspectives of screening for diabetic eye disease. K McHardy (Consultant General Physician/Diabetologist) and JV Forrester (Professor) gave advice on clinical aspects of diabetic retinopathy screening.

References
1. Ghafour IM, Allan D, Foulds WS. Common causes of blindness and visual handicap in the west of Scotland. Br J Ophthalmol 1983;67:209–13.
2. Klein R, Meuer SM, Moss SE, et al. Retinal microaneurysm counts and 10-year progression to diabetic retinopathy. Arch Ophthalmol 1985;113:1386–91.
3. Kohner E, Sleightholm M. Does microaneurysm count reflect the severity of early diabetic retinopathy? Ophthalmology 1986;93:586–9.
4. Early Treatment Diabetic Retinopathy Study Research Group. Fundus photographic risk factors for progression of diabetic retinopathy. ETDRS Report Number 12. Ophthalmology 1991;98:823–33.
5. Early Treatment Diabetic Retinopathy Study Research Group. Early photocoagulation for diabetic retinopathy. ETDRS Report Number 9. Ophthalmology 1991;98:766–85.
6. The Diabetes Control and Complications Trial Research Group. The effect of intensive treatment of diabetes on the development and progression of long-term complications in insulin-dependent diabetes mellitus. N Engl J Med 1993;329:977–86.
7. Sjolie AK, Stephenson J, Aldington S, et al. Retinopathy and vision loss in insulin-dependent diabetes in Europe. The EURODIAB IDDM Complications Study Group. Ophthalmology 1997;104:252–60.
8. Muhlhauser I. Cigarette smoking and diabetes: an update. Diabet Med 1994;11:336–43.
9. Chaturvedi N, Stephenson JM, Fuller JH. The relationship between smoking and microvascular complications in the EURODIAB IDDM Complications Study. Diabetes Care 1995;18:785–92.
10. Stratton IM, Kohner EM, Aldington SJ, Turner RC, Holman RR, Manley SE, et al. 
UKPDS 50: risk factors for incidence and progression of retinopathy in Type II diabetes over 6 years from diagnosis. Diabetologia 2001;44:156–63.
11. World Health Organization and the International Diabetes Federation. Diabetes care and research in Europe: the Saint Vincent Declaration. Diabet Med 1990;7:360.
12. Retinopathy Working Party. A protocol for screening for diabetic retinopathy in Europe. Diabet Med 1991;8:263–7.
13. Buxton MJ, Sculpher MJ, Ferguson BA, et al. Screening for treatable diabetic retinopathy: a comparison of different methods. Diabet Med 1991;8:371–7.
14. Jones D, Dolben J, Owens DR, et al. Non-mydriatic Polaroid photography in screening for diabetic retinopathy: evaluation in a clinical setting. BMJ 1988;296:1029–30.
15. Klein R, Klein BEK, Neider MW, et al. Diabetic retinopathy as detected using ophthalmoscopy, a non-mydriatic camera and a standard fundus camera. Ophthalmology 1985;92:485–91.
16. Taylor R, Lovelock L, Tunbridge WMG, et al. Comparison of non-mydriatic retinal photography with ophthalmoscopy in 2159 patients: mobile retinal camera study. BMJ 1990;301:1243–7.
17. Harding SP, Broadbent DM, Neoh C, et al. Sensitivity and specificity of photography and direct ophthalmoscopy in screening for sight threatening eye disease: The Liverpool Diabetic Eye Study. BMJ 1995;311:1131–5.
18. Joannou J, Kalk WJ, Mahomed I, et al. Screening for diabetic retinopathy in South Africa with 60° retinal colour photography. J Intern Med 1996;239:43–7.
19. O'Hare JP, Hopper A, Madhaven C, et al. Adding retinal photography to screening for diabetic retinopathy: a prospective study in primary care. BMJ 1996;312:679–82.
20. British Diabetic Association. Retinal photography screening for diabetic eye disease. London: British Diabetic Association; 1994.
21. Phillips RP, Spencer T, Ross PGB, Sharp PF, Forrester JV. Quantification of diabetic maculopathy by digital imaging of the fundus. Eye 1991;5:130–7.
22.
Spencer T, Phillips RP, Sharp PF, Forrester JV. Automated detection and quantification of microaneurysms in fluorescein angiograms. Graefes Arch Clin Exp Ophthalmol 1992;230:36–41.
23. Phillips R, Forrester J, Sharp PF. Automated detection and quantification of retinal exudates. Graefes Arch Clin Exp Ophthalmol 1993;231:90–4.
24. Spencer T, Olson JA, McHardy K, Sharp PF, Forrester JV. An image-processing strategy for the segmentation and quantification of microaneurysms in fluorescein angiograms of the ocular fundus. Comput Biomed Res 1996;29:284–302.
25. Scobie IN, MacCuish AC, Barrie T, Green FD, Foulds WS. Serious retinopathy in a diabetic clinic – prevalence and therapeutic implications. Lancet 1981;2:520–1.
26. Aldington SJ, Kohner EM, Meurer S, Klein R, Sjolie AK. Methodology for retinal photography and assessment of diabetic retinopathy – The EURODIAB IDDM complications study. Diabetologia 1995;38:437–44.
27. Klein R, Klein BE, Moss SE, Davis MD, DeMets DL. Is blood pressure a predictor of the incidence or progression of diabetic retinopathy? Arch Intern Med 1989;149:2427–32.
28. Cruickshanks KJ, Ritter LL, Klein R, Moss SE. The association of microalbuminuria with diabetic retinopathy. The Wisconsin Epidemiologic Study of Diabetic Retinopathy. Ophthalmology 1993;100:862–7.
29. Wang PH, Lau J, Chalmers TC. Meta-analysis of effects of intensive blood-glucose control on late complications of type 1 diabetes. Lancet 1993;341:1306–9.
30. Elman KD, Welch RA, Frank RN, Goyert GL, Sokol RJ. Diabetic retinopathy in pregnancy: a review. Obstet Gynecol 1990;75:119–27.
31. Wong JSK, Pearson DWM, Murchison LE, Williams MJ, Narayan V. Mortality in diabetes mellitus: experience of a geographically defined population. Diabet Med 1991;8:135–9.
32. Diabetic Retinopathy Study Research Group.
A modification of the Airlie House classification of diabetic retinopathy. Report No. 7. Invest Ophthalmol Vis Sci 1981;21:210–26.
33. Nussenblatt RB, Palestine AG, Chan CC, Roberge F. Standardization of vitreal inflammatory activity in intermediate and posterior uveitis. Ophthalmology 1985;92:467–71.
34. Medical imaging – the assessment of image quality. ICRU Report 54. Bethesda, MD: International Commission on Radiation Units and Measurement (ICRU); 1996.
35. Kohner EM, Stratton IM, Aldington SJ, Turner RC, Matthews DR. Microaneurysms in the development of diabetic retinopathy (UKPDS 42). Diabetologia 1999;42:1107–12.
36. Matthews DR, Kohner EM, Aldington S, Stratton IM. Relationship of microaneurysm count to progression of retinopathy over 6 years in non-insulin-dependent diabetes. Diabetes 1995;9:117A.
37. Hellstedt T, Palsi VP, Immonen I. A computerized system for localization of diabetic lesions from fundus images. Acta Ophthalmol 1994;72:352–6.
38. Diabetic Retinopathy Study Research Group. A modification of the Airlie House classification of diabetic retinopathy. Report No. 7. Invest Ophthalmol Vis Sci 1981;21:210–26.
39. Cree MJ, Olson JA, McHardy KC, Sharp PF, Forrester JV. A fully automated comparative microaneurysm digital detection system. Eye 1997;11:622–8.
40. Klein R, Klein BE, Moss SE. Visual impairment in diabetes. Ophthalmology 1984;91:1–9.
41. Moss SE, Klein R, Klein BE. The incidence of vision loss in a diabetic population. Ophthalmology 1988;95:1340–8.
42. Sparrow JM, McLeod BK, Smith TD, Birch MK, Rosenthal AR. The prevalence of diabetic retinopathy and maculopathy and their risk factors in the non-insulin-treated diabetic patients of an English town. Eye 1993;7(Pt 1):158–63.
43. Klein R, Meuer SM, Moss SE, Klein BEK. The relationship of retinal microaneurysm counts to the 4-year progression of diabetic retinopathy. Arch Ophthalmol 1989;107:1780–5.
44. Kohner EM, Dollery CT.
The rate of formation and disappearance of microaneurysms in diabetic retinopathy. Eur J Clin Invest 1970;1:167–71.
45. Feman SS. The natural history of the first clinically visible features of diabetic retinopathy. Trans Am Ophthalmol Soc 1994;92:745–73.
46. Hellstedt T, Immonen I. Disappearance and formation rates of microaneurysms in early diabetic retinopathy. Br J Ophthalmol 1996;80:135–9.
47. Cideciyan AV, Jacobson SG, Kemp CM, Knighton RW, Nagel JH. Registration of high resolution images of the retina. Proc SPIE 1992;1652:310–22.
48. Hipwell JH, Strachan F, Olson JA, McHardy KC, Sharp PF, Forrester JV. Automated detection of microaneurysms in digital red-free photographs: a diabetic retinopathy screening tool. Diabet Med 2000;17:588–94.
49. Lee SC, Lee ET, Kingsley RM, Wang Y, Russell D, Klein R, et al. Comparison of diagnosis of early retinal lesions of diabetic retinopathy between a computer system and human experts. Arch Ophthalmol 2001;119:509–15.
50. Ege BM, Hejlesen OK, Larsen OV, Moller K, Jennings B, Kerr D, et al. Screening for diabetic retinopathy using computer based image analysis and statistical classification. Comput Methods Programs Biomed 2000;62:165–75.
51. Jalli PYI, Hellstedt TJ, Immonen IJR. Early versus late staining of microaneurysms in fluorescein angiography. Retina 1997;17:211–15.
52. Badouin C, Maneschi F, Quentel G, Soubrane G, Hayes T, Jones G, et al. Quantitative evaluation of fluorescein angiograms – microaneurysm counts. Diabetes 1983;32:8–13.

Appendix 1: Systematic literature review (completed 1998)

Introduction

Background

Despite advances in diabetic care, visual impairment in diabetes remains a devastating complication, in terms of both personal loss for the affected individual and socio-economic costs to society.
Of the 12 million citizens of the USA who are recognised as suffering from diabetes mellitus, it has been estimated that the prevalence of those with proliferative retinopathy is 700,000 (6% of the population), with an anticipated further 65,000 cases occurring per annum.1 For the insulin-dependent diabetic population of Europe, a cross-sectional study of patients attending 31 diabetes centres revealed a prevalence of pre-proliferative retinopathy of 35.6% (mild 25.8%; moderate-to-severe 9.8%) and proliferative retinopathy of 10.6%.2 The complications arising from diabetic retinopathy are an estimated 8000 new cases of blindness per year in the USA1 and 1100 new cases per annum in the UK,3 giving rise to a significant problem in the working-age population. Although it remains difficult to quantify the devastating effect of blindness with regard to personal loss, the economic cost to society in terms of unemployment and disablement benefits has been calculated as £3575 for each affected individual per annum in the UK.4 This was weighted against the estimated cost of treatment of £387 for each person at risk of blindness, indicating the economic advantages of successful treatment of those high-risk individuals.

Pathogenesis and key features of diabetic retinopathy

The natural history of retinopathy has been well defined, following a predictable course from the early stage of microaneurysm development to moderate retinopathy, with evidence of cotton-wool spots, intra-retinal microvascular abnormalities (IRMAs) and venous beading indicating a deterioration in retinal blood supply. Hard exudates define areas where retinal capillary leakage is occurring in the presence of endothelial damage. Ultimately, in response to worsening ischaemia, growth of new blood vessels is stimulated. This proliferative stage of retinopathy poses a high risk for the patient, as fragile new vessels have a tendency to haemorrhage, causing potentially significant visual impairment.
Baseline microaneurysm counts in people with diabetes with no other evidence of retinopathy may provide a useful predictor of long-term progression to proliferative retinopathy, independent of the effects of glycaemic control and blood pressure.5 The importance of microaneurysm detection and quantification is supported by the analysis of fluorescein angiograms by Kohner and Sleightholm.6 This analysis showed a significant correlation between microaneurysm number and the presence of haemorrhages and cotton-wool spots and, to a lesser extent, the severity of hard exudates and IRMAs. From examination of the natural progression of fundus changes in individuals with at least moderate diabetic retinopathy, it was found that the severity of IRMA and venous beading and the number of haemorrhages and microaneurysms present were of most significance in identifying those people likely to develop subsequent neovascularisation.7

ETDRS

The Early Treatment Diabetic Retinopathy Study (ETDRS) was carried out to provide answers to the problem of reducing the devastating morbidity from visual loss secondary to diabetes.8 As a consequence of the trial, the benefits of laser panretinal photocoagulation for patients with high-risk proliferative retinopathy have been recognised and adopted into standard clinical practice. However, the authors concluded that early panretinal photocoagulation was inappropriate for those with mild to moderate retinopathy and a low risk of progression to severe visual loss, given its potentially detrimental effect on the peripheral visual field. They emphasised that the key to successful management was early identification of retinopathy, with meticulous monitoring of progression to allow optimum timing of intervention when a proliferative stage had been reached.

Risk factors in development of retinopathy

There are, however, other ways in which careful screening of diabetic populations and monitoring of established retinopathy may be beneficial.
The Diabetes Control and Complications Trial has shown that strict metabolic control can both offset the development and slow the progression of diabetic retinopathy in type 1 diabetics.9 In addition to acknowledging the limitations of their cross-sectional study of 3250 European type 1 diabetics, Sjolie and colleagues have also suggested that, in the later stages of retinopathy, the adjustment of blood pressure, fibrinogen and triglyceride levels may affect outcome.2 The cessation of cigarette smoking may also have a beneficial effect, although further prospective studies are required for clarification.10,11 The recognition of very early diabetic fundal change may provide the impetus to patients to make positive lifestyle changes and tighten glycaemic control.

An overview of existing screening methods

Based on the principles of the St Vincent Declaration,12 the European Retinopathy Working Party defined a protocol of screening for diabetic retinopathy.13 This advocated that all diabetic patients should have an annual eye examination by ophthalmoscopy through pharmacologically dilated pupils or by retinal photography. In 1991, the Department of Health commissioned a study of 3318 diabetic patients to establish the definitive method of performing fundal examination.14 A comparison was made between direct ophthalmoscopy, performed by GPs, ophthalmic opticians and hospital physicians, and an assessment of images acquired with a non-mydriatic Polaroid fundus camera by a consultant ophthalmologist. A fundal examination by an ophthalmologist with access to the Polaroid photographs was used as the 'gold standard'.
Buxton and colleagues concluded that direct ophthalmoscopy by all groups showed relatively poor sensitivities (hospital physician 67%; GP 53%; optician 47%), although specificities were higher (hospital physician 97%; GP 91%; optician 95%), indicating that relatively few inappropriate referrals to ophthalmology services would occur.14 However, it was apparent that direct ophthalmoscopy alone was likely to miss a significant proportion of patients with evidence of retinopathy, regardless of who performed the examination. Analysis of fundus photographs resulted in sensitivities ranging from 35 to 67%, with specificities marginally higher than those of the primary screeners at 95–98%. Although it has been suggested that the performance of non-mydriatic cameras may be enhanced by the use of mydriatic agents,15 Buxton and colleagues' findings are consistent with those of other comparative studies.16–20 These studies, summarised in Table 22, suggest that at present no individual method of routinely available fundus examination can be judged superior, or indeed satisfactory, for clinical screening.
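Sensitivities and specificities such as those quoted above derive from a 2 × 2 comparison of each screening test against the reference standard. A minimal sketch of the arithmetic follows; the counts are hypothetical, chosen only to give figures of the same order as the hospital-physician arm, and are not study data.

```python
# Sensitivity and specificity from a 2 x 2 screening table.
# All counts here are hypothetical illustration, not study data.

def sensitivity(tp: int, fn: int) -> float:
    """Proportion of reference-positive patients the screening test detects."""
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    """Proportion of reference-negative patients the screening test clears."""
    return tn / (tn + fp)

# Hypothetical counts: 40 true positives, 20 false negatives,
# 930 true negatives, 30 false positives.
sens = sensitivity(tp=40, fn=20)    # 0.666... -> 67%
spec = specificity(tn=930, fp=30)   # 0.96875  -> 97%
print(f"sensitivity {sens:.0%}, specificity {spec:.0%}")
```

Note that sensitivity and specificity answer different questions: a test can clear almost all healthy patients (high specificity) while still missing a large fraction of true retinopathy (low sensitivity), which is exactly the pattern reported for direct ophthalmoscopy.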
Although the findings of Klein and colleagues seem encouraging for a single non-mydriatic photograph, the wide confidence intervals calculated for their relatively small patient population warrant caution in the interpretation of these results.16 In the study by O'Hare and colleagues, combining the modalities of ophthalmoscopy and retinal photography appeared effective in the detection of sight-threatening retinopathy.20 However, even their technique does not achieve the recommended threshold of a minimum 80% sensitivity and 95% specificity for mild retinopathy suggested as an audit standard by the British Diabetic Retinopathy Working Group.21

An overview of digital technology

Given the recognised limitations of existing retinopathy screening techniques, there has been a great deal of interest in exploring the potential role of digital photography and imaging techniques, which allow computer-assisted analysis of fundal images. Initially, digital techniques required the use of scanners to convert conventionally acquired images to a digital format. More recently, on-line direct acquisition of images has become increasingly commonplace as technology advances. Within a digital camera system, photographic film is replaced with a CCD (charge-coupled device) sensor, which is composed of a grid of individual picture elements, or pixels.22 As reflected light from a fundus image falls on this sensor, each pixel generates a discrete numerical value representative of the level of luminance to which it is exposed. This digital signal is then relayed to the computer's imaging system and stored temporarily in random access memory (RAM). As monochrome and colour images require 1 and 3 MB of memory for storage, respectively, the number of images that can be acquired before transfer to the hard drive becomes necessary is limited by the computer's RAM size. Therefore, there may be an advantage in systems that allow direct storage on to enhanced hard drives.
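The storage figures quoted above follow directly from the pixel count: a 1024 × 1024 sensor at one byte per pixel gives about 1 MB per monochrome image, and three colour channels triple that. A rough sketch of the arithmetic; the 64 MB RAM budget is an assumed figure for illustration only, not a value from the report.

```python
# Uncompressed image size at 8 bits per channel, and how many images
# fit in a given RAM budget. The 64 MB RAM figure is an assumption.
BYTES_PER_MB = 1024 * 1024

def image_size_mb(width: int, height: int, channels: int) -> float:
    """Uncompressed size in megabytes at one byte per pixel per channel."""
    return width * height * channels / BYTES_PER_MB

mono_mb = image_size_mb(1024, 1024, 1)    # 1.0 MB per monochrome image
colour_mb = image_size_mb(1024, 1024, 3)  # 3.0 MB per colour image

ram_mb = 64  # hypothetical RAM available for image buffering
print(f"{int(ram_mb // colour_mb)} colour images per {ram_mb} MB of RAM")
```

The same arithmetic explains why higher-resolution sensors sharpen the trade-off: doubling the linear resolution quadruples the memory needed per image.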
As the number of pixels in the CCD is increased, image resolution is improved.

TABLE 22 Sensitivity and specificity of retinopathy detection techniques

Non-mydriatic fundus cameras

- Buxton, 1991 (UK).14 Screening test: 45° non-mydriatic Polaroid image read by ophthalmologist. Reference standard: fundal examination by ophthalmic clinical assistant (technique not specified) with access to the Polaroid images. Population: 3318 diabetic patients. Positive test: referral with STDR. Sensitivity 56% (95% CI 48 to 63); specificity 97% (96 to 97).
- Klein, 1995 (USA).16 Screening test: single non-stereoscopic 45° 35-mm photograph centred between disc and fovea, non-mydriatic. Reference standard: 30° stereoscopic colour photographs according to ETDRS fields 1, 2 and 3, read by trained ophthalmic graders. Population: 99 diabetic patients, single eye examined. Positive test: (A) presence of any retinopathy; (B) detection of PDR. Sensitivity A 100% (92 to 100), B 93% (66 to 100); specificity A 91% (59 to 100), B 98% (87 to 100).
- Taylor, 1990 (UK).17 Screening test: non-mydriatic Polaroid photography, number of images taken not specified. Reference standard: comparison with direct ophthalmoscopy examination by general hospital physician participating in diabetic clinic; patients diagnosed with maculopathy referred for examination by ophthalmologist. Population: 2159 adults attending general diabetic clinics. Positive test: presence of retinopathy/maculopathy by either test (overall sensitivity/specificity not calculated). Sensitivity: for new vessels detected by either method, camera 65, ophthalmoscopy 77.5; for diagnosis of exudative maculopathy, camera 74.2, ophthalmoscopy 57.4. Specificity: for new vessels detected by either method, camera 60.3, ophthalmoscopy 39.7; for diagnosis of exudative maculopathy, camera 55.3, ophthalmoscopy 67.0.

Mydriatic fundus cameras

- Harding, 1995 (UK).18 Screening test: three 35-mm non-stereoscopic 45° overlapping images of each fundus (Canon CR4-45NM camera). Reference standard: examination by slit-lamp biomicroscopy by consultant ophthalmologist. Population: 395 diabetic patients. Positive test: presence of STDR. Sensitivity 89% (76 to 96); specificity 86% (82 to 90).
- Klein, 1995 (USA).16 Screening test: single non-stereoscopic 45° 35-mm photograph centred between disc and fovea, with mydriatics. Reference standard: 30° stereoscopic colour photographs according to ETDRS fields 1, 2 and 3, read by trained ophthalmic graders. Population: 99 diabetic patients, single eye examined. Positive test: (A) presence of any retinopathy; (B) detection of PDR. Sensitivity A 98% (90 to 100), B 93% (68 to 100); specificity A 100% (77 to 100), B 100% (93 to 100).
- Joannou, 1996 (South Africa).19 Screening test: single 60° colour slide photograph centred on the macula, photographs graded by clinicians. Reference standard: clinical examination by experienced ophthalmologist (a subgroup of 48 patients underwent ophthalmologist examination in addition to photography). Population: patients attending general diabetic clinic. Positive test: presence of retinopathy, with assessment made of grading accuracy. Sensitivity: detection of any retinopathy 93, detection of severe retinopathy 100. Specificity: detection of any retinopathy 89, detection of severe retinopathy 75.

Direct ophthalmoscopy

- Buxton, 1991 (UK).14 Screening test: direct ophthalmoscopy by (i) hospital physicians, (ii) opticians, (iii) GPs. Reference standard: fundal examination by ophthalmic clinical assistant (technique not specified) with access to non-mydriatic Polaroid photographs. Population: 3318 diabetic patients. Positive test: referral with STDR. Sensitivity (i) 67% (50 to 84), (ii) 47% (23 to 71), (iii) 53% (44 to 62); specificity (i) 97% (96 to 99), (ii) 95% (93 to 97), (iii) 91% (90 to 92).
- O'Hare, 1996 (UK).20 Screening test: direct ophthalmoscopy by (i) opticians, (ii) GPs. Reference standard: fundal examination by staff-grade ophthalmologist (technique not specified) with review of fundus photograph. Population: 493 patients examined by opticians; 517 patients examined by GPs. Positive test: (A) detection of background retinopathy; (B) referral with STDR. Sensitivity A (i) 43, (ii) 22; B (i) 75, (ii) 56. Specificity A (i) 94, (ii) 94; B (i) 93, (ii) 98.
- Klein, 1995 (USA).16 Screening test: direct ophthalmoscopy through an undilated pupil by experienced ophthalmic assistant. Reference standard: 30° stereoscopic colour photographs according to ETDRS fields 1, 2 and 3, read by trained ophthalmic graders. Population: 99 diabetic patients. Positive test: (A) presence of any retinopathy; (B) detection of PDR. Sensitivity A 84% (72 to 93), B 53% (37 to 69); specificity A 75% (51 to 91), B 90% (79 to 96).

Combined techniques

- O'Hare, 1996 (UK).20 Screening test: direct ophthalmoscopy combined with single photograph centred on the macula (field size not specified). Reference standard: fundal examination by staff-grade ophthalmologist (technique not specified) with review of fundus photograph. Population: (i) 493 patients examined by opticians; (ii) 517 patients examined by GPs. Positive test: (A) detection of background retinopathy; (B) referral with STDR. Sensitivity A (i) 71, (ii) 65; B (i) 88, (ii) 80. Specificity A (i) 94, (ii) 92; B (i) 99, (ii) 98.

CI, confidence interval; ETDRS, Early Treatment of Diabetic Retinopathy Study; PDR, proliferative diabetic retinopathy; STDR, sight-threatening diabetic retinopathy.

Commission from the NHS Health Technology Assessment Programme

It is against the above background that there has been significant interest in developing digital imaging techniques in the field of diabetic eye disease. As no clear information was in existence as to how these techniques would fit into a clinical setting, we were commissioned in 1996 by the NHS Health Technology Assessment Programme to undertake both a systematic literature review and a primary study to assess the performance of digital imaging in screening for and monitoring the development of diabetic retinal disease. This document reports the findings of the systematic literature review. Ultimately, the principal interest of the NHS will be the clinical effectiveness and efficiency of digital imaging for screening for diabetic retinopathy and monitoring its progression.
Assessing these qualities requires reliable information about the impact of digital imaging on clinical management of diabetic retinopathy and on later health (for example, sight-years saved), relating these to the resources used and comparing them with alternative policies for screening. However, it was our view that the technology had not yet reached the point of development where these parameters could be assessed reliably. Certainly, a search of MEDLINE using the standard Cochrane search strategy failed to identify any randomised controlled trials (RCTs) of digital imaging in this context. The review was therefore restricted to studies of the diagnostic performance of digital imaging in diabetic retinopathy, compared where possible with existing recognised techniques. In view of the difficulty in defining a gold standard in this field, we allowed the inclusion of comparative studies (studies comparing one 'test' with another, in this case digital imaging). We had hoped to present a meta-analysis of results using receiver operating characteristic (ROC) curves, thereby allowing consideration of the performance of digital techniques against alternatives. Such curves allow direct comparisons of tests while varying the assumptions on which normal and abnormal test results might be defined. However, owing to the diverse nature of the studies and the differing techniques under investigation, no meta-analysis of study results was possible and a qualitative analysis was carried out instead.

Methods

Aim and objectives of the review

The overall aim of the systematic review was to assess the value of currently available digital imaging techniques and compare them with alternative methods.
The objectives of the review were as follows:

- to identify the number and quality of primary studies of digital imaging techniques in diabetic eye disease
- to identify the range of available digital techniques applicable in this field
- to determine whether current digital imaging techniques can detect early diabetic retinopathy when screening a population with no known retinopathy
- to determine whether current digital imaging techniques can detect the progression of established retinopathy
- to determine whether current digital imaging techniques can determine when patients require treatment of retinopathy
- to evaluate 'experimental' techniques that may have a future clinical application
- to compare current digital imaging techniques with alternative methods.

Background

The studies considered in this review investigated the diagnostic performance of digital imaging techniques in the field of diabetic retinopathy. The research methodologies of the studies considered were therefore different from those in reviews of effectiveness, where the emphasis is solely on RCTs. The review was, however, systematic in the sense that there were explicit search strategies for identifying studies, explicit selection criteria for the studies that were considered, a systematic way of appraising the studies, and a standard format chosen for presentation of the data.

Development of protocol

A protocol was written at the start of the project that explicitly described the objectives of the review, the criteria required of studies for inclusion, the search strategy to be used for identification of studies and the methods of quality assessment, data abstraction and presentation of results.

Systematic electronic bibliographic database searching

The following electronic bibliographic databases were searched systematically:

- MEDLINE (National Library of Medicine, electronic version of Index Medicus, USA) on OVID, CD PLUS.
- EMBASE (Elsevier Science Publishers, electronic version of Excerpta Medica, Amsterdam) on BIDS and OVID.
- Science Citation Index (electronic version of the paper publication of the same name, produced by ISI, Institute for Scientific Information, Philadelphia, PA, USA) on BIDS.
- Ei Compendex Plus (Computerized Engineering Index, electronic version of The Engineering Index, produced by Engineering Information, Hoboken, NJ, USA) on BIDS.
- Cochrane Library (database on disk and CD-ROM). The Cochrane Collaboration; 1997, Issue 3. Oxford: Update Software; 1997. Updated quarterly.
- National Research Register (NRR), 14th consolidation, September 1996.

The search strategy was first developed in MEDLINE, as this is one of the best indexed and most 'user-friendly' electronic bibliographic databases. Two sets of search terms were devised: one set to describe diabetic retinopathy and the other to describe digital imaging techniques. These were developed by the research team, which included ophthalmologists, diabetologists, medical physicists and health services researchers experienced in literature searching. The two sets of terms were combined using the Boolean operator 'and'. (The use of a third set of study design terms was considered but was not included, in line with the decision to search for all possible study designs.) The sets of search terms were built up by investigating the medical subject headings (MeSH) terms using the MeSH tree with scope notes and Permuted Index, as well as by textword searching (searching for terms in the title and abstract). As new search terms were added to the search strategy, details of the first 50 titles and abstracts were scanned to assess their relevance to diabetic retinopathy and digital imaging. Terms that retrieved only irrelevant articles were further modified or rejected.
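Combining the two term sets with the Boolean operator 'and' means a record is retrieved only if it matches at least one retinopathy term and at least one digital-imaging term; in set terms, the result is the intersection of the two hit sets. A toy sketch, with invented record identifiers:

```python
# Toy illustration of a Boolean 'and' between two search-term sets:
# only records retrieved by both term sets survive. The record
# identifiers are invented for the example.
retinopathy_hits = {"rec01", "rec02", "rec03", "rec05"}
digital_imaging_hits = {"rec02", "rec04", "rec05"}

combined = retinopathy_hits & digital_imaging_hits  # Boolean 'and'
print(sorted(combined))  # ['rec02', 'rec05']
```

This is why adding a third set of study-design terms (a further intersection) was rejected: each additional 'and' can only shrink the retrieved set, which conflicts with the decision to capture all study designs.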
The MEDLINE search strategy was modified for searching the other databases. The modifications involved changing the syntax to suit the search software of each database and interrogating the thesaurus or indices of each database to identify equivalents of the MeSH terms (keyword system) used in MEDLINE. A more focused search was conducted on the other databases, partly because they had less of a medical emphasis and partly because the strategy used on MEDLINE was fairly broad, resulting in many abstracts being assessed but few proving relevant.

Searching the Internet

We also searched the World Wide Web with the browser Internet Explorer and located relevant websites by using the advanced search options of the search engines Excite and Yahoo. The keywords entered into the query box were: digital imaging, diabetic and retinopathy, combined using the Boolean operator 'and'.

Handsearching

Recent issues of three journals, British Journal of Ophthalmology (Volume 81, Issues 7–12, 1997), Graefe's Archive for Clinical and Experimental Ophthalmology (Volume 235, Issues 7–12, 1997) and Investigative Ophthalmology and Visual Science (Volume 38, Issues 9–13, 1997), were handsearched for relevant publications that would not yet have appeared on the electronic databases owing to delays in indexing. Handsearching was performed by a health services researcher with an interest in the methodology of systematic reviews. This involved going through every page of each issue and reading letters, editorials and conference reports in addition to published papers.

Other methods of ascertainment of studies

Reference lists of relevant studies

The reference lists of relevant studies identified from the electronic databases were searched for references to other studies that might be relevant. Relevance was assessed from a hard copy of each article.
We limited these searches to the 'first-generation' references only; in other words, we did not search the reference lists of studies originally identified from a previous reference list search.

Contacting authors of key articles identified through the electronic searches

Authors of key articles were contacted and asked if they were aware of any other relevant studies. A number of authors were also contacted for further information in relation to their publications.

Other

In addition, the Proceedings of the British Diabetic Association's Education and Care Section Annual Conference, 8–10 October 1997, and of the British Diabetic Association's Medical and Scientific Section Autumn Meeting, 9–10 October 1997, and Spring Meeting, 25–27 March 1998, were searched for relevant abstracts or poster presentations.

Identification of possible studies

All possibly relevant studies were electronically imported or manually entered into the reference managing software package Reference Manager (Version 7.01N; Research Information Systems, Carlsbad, CA, USA). Subject keywords and the source of each article were added.

Register of possible studies

Initially, all electronically derived abstracts and study titles were read by a diabetologist and a health services researcher with an interest in the methodology of systematic reviews to assess subject relevance. However, because of the high degree of concurrence and the greater speed at which abstracts could be assessed by the diabetologist, it was decided that the diabetologist alone should assess the abstracts. All relevant studies were assigned specific topic keywords in Reference Manager and the full published paper was obtained.

Assessment of studies for inclusion

Hard copies of studies were assessed for subject relevance and eligibility by a diabetologist. The assessor was not blinded to author, institution or journal.
Studies were included if they reported on the use of either direct or indirect digital imaging techniques in the field of diabetic retinopathy and involved patients with either type 1 or type 2 diabetes. Owing to the early stage of development of digital technology, poster and abstract presentations were included in the review. Given the original remit of the review, early experimental techniques applied only to animal models were not included. Studies used to monitor response to treatment were also excluded, as we were primarily interested in digital techniques that could be applied to diabetic retinopathy screening and monitoring of retinopathy progression.

Quality assessment of studies to be included

Comparative studies in which digital imaging was assessed against an alternative technique were graded on the following methodological criteria, proposed by Carruthers and colleagues:23

- use of a recognised gold reference standard
- independent assessment of the test under review and the gold reference standard
- test applied to an appropriate study population, that is, diabetic patients suspected but not known to have retinopathy, diabetic patients with established retinopathy and non-diabetic control groups
- avoidance of verification bias, that is, reference standard applied to all patients under study
- reproducible description given of both the test and the reference standard.

Studies were graded according to the number of these criteria met, with a score of V awarded when a study met all five criteria and a score of I given when only one criterion was met. Posters and abstract presentations generally received a low grading owing to a lack of available information on study methodology. It is accepted that this grading may not reflect the value of future publications derived from this work.
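The five-criteria scoring above amounts to counting how many criteria a study satisfies and reporting the count as a Roman numeral. A minimal sketch follows; the function name and the handling of a zero score are our own assumptions, since the report only defines grades for one to five criteria met.

```python
# Methodological grading sketch: one point per criterion met,
# reported as a Roman numeral from I (one criterion) to V (all five).
ROMAN = {1: "I", 2: "II", 3: "III", 4: "IV", 5: "V"}

def methodological_grade(criteria_met: list) -> str:
    """Grade a study by how many of the five criteria it satisfies."""
    if len(criteria_met) != 5:
        raise ValueError("exactly five criteria are assessed")
    # Clamp a zero score to grade I; the report does not define grade 0.
    return ROMAN[max(1, sum(bool(c) for c in criteria_met))]

# Example: gold standard used, independent assessment and appropriate
# population, but verification bias not avoided and no reproducible
# description of the test and reference standard.
print(methodological_grade([True, True, True, False, False]))  # III
```

The four-criteria grading of results described below follows the same counting pattern with a maximum grade of IV.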
The difficulty of defining a gold standard for the evaluation of techniques in this field is recognised by the reviewers; in this context, it is accepted to be either biomicroscopic examination by an experienced ophthalmologist or the use of the seven-field Airlie House photography protocol.

Grading of results section in studies to be included
In addition to the above methodological criteria, an assessment was made of the interpretation of study results by authors, based on the following criteria adapted from those proposed by Carruthers and colleagues:23
- sensitivity and specificity could be correctly calculated from comparison with a recognised gold standard as defined above
- test reproducibility was calculated by the authors to determine the consistency of results obtained
- an appropriate statistical method of analysis was applied to the results
- data were presented to allow confirmation of the authors’ findings.

Studies meeting all four of these criteria were given a grading of IV, those meeting three of the four a grading of III, and so on. The grading was carried out by a diabetologist and then, independently, by two experts in health services research. No significant difference was found between the graders. Posters and abstract presentations generally received a low grading owing to a lack of available information on the authors’ interpretation of results. It is accepted that this grading may not reflect the value of future publications derived from this work.

Data abstraction
The following information was extracted from the individual studies: technique under review; population studied; study aim; description of gold standard or comparative method used; summary of results; summary of reviewers’ comments; reference; and methodological and results grading. This information has been presented in tabular form for ease of reference.

Health Technology Assessment 2003; Vol. 7: No. 30
© Queen’s Printer and Controller of HMSO 2003. All rights reserved.
Data analysis
The original research proposal was for a quantitative analysis of digital imaging techniques as applied to diabetic retinopathy assessment. In view of the early stage of evolution of digital technology in this field, no statistical meta-analysis of study results was possible owing to their diverse nature and differing techniques. Therefore, only a qualitative analysis was possible. The results of the individual studies were summarised systematically and the consistency of similar studies was then considered formally.

Derivation of included studies

Results of systematic literature review
Altogether, 2767 published abstracts and poster presentations were considered for inclusion. A total of 40 studies met the criteria for inclusion; 28 of these were comparative studies comparing digital imaging techniques with alternative methods and 12 were studies describing previously evaluated digital techniques being used in research. Twenty-five studies were found in MEDLINE after reading 679 abstracts, a further five were found among the 1327 abstracts generated by the systematic searches of the other electronic databases, and the remaining ten were found in other ways. Table 23 summarises how studies were first identified, how many were judged to be possibly relevant to the review and how many were confirmed suitable for inclusion.

Reporting the findings of the systematic review
The following three sections provide a qualitative description of the results of the search strategy, broadly discussed under the categories of digital fundus photography, digital angiography and the scanning laser ophthalmoscope. Each section contains a descriptive text, outlining the development of the digital technique and its application to the detection of diabetic retinopathy.
The role of the search strategy was to identify papers where a digital imaging technique had been compared with existing screening and diagnostic techniques or, once validated, used to further our understanding of the pathogenesis of diabetic retinopathy. These papers have been summarised in tabular form as techniques under evaluation or validated techniques used in research. Where digital imaging has been compared with alternative techniques, an assessment of the study methodology and the authors’ interpretation of results is presented. Publications identified by the review that did not fulfil the above criteria but were felt to be of importance for their contribution to the overall development of digital imaging in this field have been referred to in the text only. Papers relating to the treatment of retinopathy or confined to animal models have not been included in the final review.

TABLE 23 Results of systematic literature review

Source                           No. of reports   No. of published     No. of studies      No. of studies
                                 identified(a)    abstracts/posters    possibly relevant   included in
                                                  assessed(b)          to review           final review
Electronic searches
  MEDLINE                        733              679                  72                  25
  EMBASE                         1273             771                  35                  3
  Science Citation Index         626              368                  15                  2
  Ei Compendex                   215              156                  3                   0
  Cochrane Library               32               32                   0                   0
  National Research Register(c)  12               N/A                  3                   0
  Subtotal                       2891             2006                 128                 30
Other
  Proceedings                    371              371                  6                   5
  Experts contacted              38               N/A                  2                   3
  Bibliographies checked         36(d)            N/A                  19                  2
  WWW search: Excite             50               N/A                  0                   0
  WWW search: Yahoo              155              N/A                  0                   0
  Handsearching                  390              390                  0                   0
  Subtotal                       1040             761                  27                  10
Total                            3931             2767                 152                 40

(a) Some reports were identified from more than one source.
(b) The number of abstracts assessed after duplicates had been removed by the Reference Manager database. This is only applicable to searches carried out on the electronic databases.
(c) A national database of information about ongoing research currently taking place in, or of interest to, the NHS.
(d) Number of reference lists checked.
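Many of the study summaries that follow quote sensitivity and specificity figures, some derived by the reviewers from published detection rates rather than from full confusion-matrix data. A minimal sketch of the definitions assumed throughout (standard epidemiological formulae, not code from the report); the worked number reproduces the reviewers’ calculation for background retinopathy in the Ikram study quoted later, where digital imaging detected BDR in 44% of patients against a clinical prevalence of 48%:

```python
# Standard definitions assumed by the sensitivity/specificity figures
# quoted in the study summaries (not code from the report).

def sensitivity(true_pos: int, false_neg: int) -> float:
    """Proportion of genuinely diseased cases the test detects."""
    return true_pos / (true_pos + false_neg)

def specificity(true_neg: int, false_pos: int) -> float:
    """Proportion of disease-free cases the test correctly clears."""
    return true_neg / (true_neg + false_pos)

# Where only detection rates were published, the reviewers take
# sensitivity = rate detected by the test / rate found on clinical
# examination, e.g. BDR in the Ikram study: 44% by digital imaging
# against a clinical prevalence of 48%.
print(f"{44 / 48:.1%}")  # 91.7%
```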
Digital fundus photography

Number and quality of eligible studies
Seventeen relevant publications were identified relating to digital fundus photographic techniques, using both indirect and on-line image acquisition. Four of the 17 included publications were in poster or abstract format. These studies are summarised in Table 24 and evaluated in Table 25.

Manual evaluation of images
Digital imaging versus colour slide photography
Indirect digitisation of original colour slides, to facilitate easier storage and computer analysis, does not appear to degrade image quality to a degree affecting clinical use, as demonstrated by George and colleagues24 (Table 24). Diagnosis from digitised images displayed on a high-resolution video monitor agreed with that from the original colour slides in 95% of cases of sight-threatening retinopathy and 100% of non-sight-threatening retinopathy. It was noted that cotton-wool spots were underdiagnosed in the digital images, although the authors concluded that the decision to refer for specialist advice was unaffected. Realising the potential for improved storage and retrieval of images, Friberg and colleagues (Table 24) attempted an early evaluation of the quality of directly acquired images against conventional imaging modalities.25 Ten diabetic patients, with fundal appearances ranging from mild background retinopathy to proliferative retinopathy, were included in an assessment of 50 patients with retinal pathology. Using a digital system capable of generating images composed of a 512 × 512 pixel array, giving rise to a quarter of a million pixel elements per picture, the authors conceded that picture resolution was less than that of conventional slide photography (a standard 35-mm transparency film may contain up to 4 million pixel elements per image).
However, a correct clinical diagnosis was made for each diabetic patient from review of the digital images alone, and the authors concluded that the system appeared to give sufficient detail for clinical purposes. As they predicted, the technology has advanced rapidly since this early study: commercial cameras delivering an image of 1024 × 1024 pixels are now readily available, and cameras that can image a 2036 × 3060 pixel array are beginning to enter the market, although their higher cost is likely to limit their use at present.26 A Canon CR5 45NM-based digital imaging system has been compared against the results of 35-mm colour slide photography by Ikram and colleagues (Table 24), using slit-lamp biomicroscopy to provide a reference standard for validation of the techniques.27 In a population of 66 diabetic patients attending a mobile screening unit, clinical examination confirmed the presence of background diabetic retinopathy (BDR) in 48%, pre-proliferative or proliferative diabetic retinopathy (PDR) in 3% and clinically significant macular oedema or maculopathy (CSMO) in 15%. The results of photography were graded by a consultant diabetologist and found to be comparable for detection of retinopathy using both techniques (35-mm colour slide: BDR 38%, PDR 3%, CSMO 13%; digital imaging system: BDR 44%, PDR 3%, CSMO 14.5%). All patients with sight-threatening retinopathy were identified by each photographic technique. Although sensitivity and specificity were not calculated directly, the authors conclude from their study that digital imaging is at least comparable to colour slide photography. A similar conclusion was reached by George and colleagues (Table 24) using a Canon CR5 retinal camera in an extension of their earlier work on retinopathy detection.28 Using a study group of 40 patients with a wide spectrum of diabetic retinopathy, the results of directly acquired digital images were compared with conventional slide photography.
An exact agreement in grading was achieved in 93.3% of eyes. When the images of those undergraded as non-sight-threatening retinopathy were evaluated, this discrepancy was attributed to the lower resolution of digital photography. In two of the three cases, this was due to the failure to capture IRMAs, clearly visible on colour slide images. In one case, cotton-wool

TABLE 24 Evaluation of digital photography techniques

Digital imaging in comparison with colour slide photography

Technique under review: Digitisation of 45° macular-view colour transparencies.
Population studied: 150 retinal images from diabetic patients, covering a spectrum of retinopathy and including those with normal fundi.
Study aim: To compare the detection of DR from digitised images displayed on a high-resolution video monitor with diagnosis made from colour slide examination.
Comparative method: Assessment made against diagnosis from the original slide.
Results: Low intra-observer variation of 7% noted when assessing quality-control images from both techniques. Of patients diagnosed with STDR on colour slide, 95% were similarly diagnosed from digital images (84/88; 95% CI 88 to 99%); for those with NSTDR, 100% (62/62; 95% CI 93 to 100%) were diagnosed from digital images. Four cases of STDR were undergraded to NSTDR by digital image analysis. (Difficulties in identifying CWS and macular hard exudates masked by light reflection were noted in digital images; an initially poor-quality slide prevented a macular haemorrhage being identified on the corresponding digitised image.)
Reviewers’ comments: Within a clinical setting, the authors report close agreement between the interpretation of images from both techniques. The under-reporting of STDR is likely to be overcome by direct digital image acquisition, which may have prevented the problems with image quality reported in these particular patients. The 100% agreement for patients with NSTDR suggests both photographic techniques would be equally attractive for screening programmes, although validation against existing techniques has not been performed.
Reference and grading: George, 1997 (ref. 24); Methodology Grade IV; Results Grade III.

Technique under review: Direct acquisition of digital fundus photographs and fluorescein angiograms using a TOPCON TRC 50 camera interfaced with a PAR IS2000 imaging system via a high-resolution video camera.
Population studied: 10 diabetic patients included in a preliminary study of 50 consecutive patients attending retinal specialists’ clinics.
Study aim: To evaluate the role of directly acquired digital imaging in comparison with conventional photography in the diagnosis of retinal disease, including DR.
Comparative method: Comparison of the diagnosis made by a retinal specialist from digital images with that of a colleague using conventional photographs; any discrepancies were reviewed by a third specialist using both sets of images and the patient history.
Results: In all 10 patients with DR under study, both specialists were in agreement: 1 diagnosed with mild NPDR; 7 with exudative retinopathy; 2 with neovascularisation (1 neovascularisation elsewhere; 1 neovascularisation of the disc).
Reviewers’ comments: Although the patient numbers under study are inadequate to influence clinical practice, the authors have demonstrated agreement of both photographic techniques across a spectrum of DR. Despite the limitations of the relatively low-resolution system under review, lesions including neovascularisation were detectable. The study population was biased towards those with exudative or PDR; these results may not reflect the outcome for a general screening population. However, they illustrate that digital imaging has significant advantages over conventional photography in terms of instant availability of images with ready correction of alignment and focusing problems.
Reference and grading: Friberg, 1987 (ref. 25); Methodology Grade III; Results Grade I.

Technique under review: Direct acquisition of digital images using a Canon CR5 45NM fundus camera and electronic imaging system.
Population studied: 66 diabetic patients.
Study aim: To evaluate the role of digital imaging in comparison with standard 35-mm colour slide photography in the evaluation of DR.
Gold standard: Fundal examination by slit-lamp biomicroscopy.
Results: Clinical examination was used to define the prevalence of BDR at 48%, PDR at 3% and CSMO at 15%. Detection of retinal disease was comparable using both camera techniques. Digital imaging: BDR 44%, PDR 3%, CSMO 14.5%. 35-mm colour slides: BDR 38%, PDR 3%, CSMO 13%.
Reviewers’ comments: The authors concluded that digital imaging is at least comparable to conventional 35-mm slide photography in the detection of retinopathy. No patients with STDR went undetected by either photographic technique. Although not presented by the authors, sensitivity was calculated from the available data as follows. Digital imaging: BDR 91.7%, PDR 100%, CSMO 96.7%. 35-mm colour slide: BDR 79.2%, PDR 100%, CSMO 86.7%. Unfortunately, the data were not presented in a manner which clarified whether specificity was also comparable between the techniques. This may become available in a subsequent publication but would clearly have implications for clinical application. However, given the initial results, further assessment against current screening methods should be supported.
Reference and grading: Ikram, 1997 (ref. 27; poster and personal correspondence with author); Methodology Grade V; Results Grade I.

Technique under review: Direct acquisition of images using a Canon CR5 retinal imaging system (2 × 45° fields).
Population studied: 40 diabetic patients with a wide spectrum of established retinopathy, including normal controls (5 patients).
Study aim: To compare the results of directly acquired digital images with conventional colour slide photography.
Gold standard: The EURODIAB photographic protocol of 2 × 45° overlapping fields used in this study has been validated as comparable to the gold standard Airlie House 7 × 35° fields for 35-mm colour slides.
Results: Exact agreement in grading in 93.3% of eyes (95% of STDR and 100% of NSTDR correctly identified). Undergrading of STDR in 3 cases (IRMA not identified in 2 cases; CWS misclassified in 1 case). System limited by the current pixel density of 768 × 512, with loss of fine-detail structures; IRMA clearly evident on colour slide film.
Reviewers’ comments: Using the data presented by the authors, digital imaging sensitivity can be calculated: detection of NSTDR, sensitivity 96.3%; detection of STDR, sensitivity 93.8%. Although the 5 patients without retinopathy were correctly identified, the patient population is biased towards those with established retinopathy; the results obtained are therefore not applicable to a screening programme. However, given the limitations of lower resolution, the digital system performed well in comparison with 35-mm colour slide photography. The detection rate of 95% of STDR is superior to that of alternative screening techniques in current use.
Reference and grading: George, 1998 (ref. 28); Methodology Grade V; Results Grade II.

Digital imaging in comparison with Polaroid photography

Technique under review: On-line acquisition of digital fundus images; electronic imaging system attached to a Canon CR5 45NM fundus camera.
Population studied: 107 diabetic patients photographed after mydriasis; 213 images obtained.
Study aim: To evaluate the diagnosis of STDR from digital images compared with Polaroid.
Comparative method: The overall prevalence of retinopathy was determined from examination of both sets of images; from the information available, no other independent clinical examination appears to have been performed.
Results: DR was present in 58 eyes, of which 55/58 (95%) were detected on digital images and 49/58 (84%) on Polaroid. Thirty-four eyes showed retinopathy meriting ophthalmologist referral: 34/34 (100%) were evident with digital imaging and 24/34 (71%) on Polaroid.
Reviewers’ comments: Digital images were superior to Polaroid in the detection of retinopathy and the identification of those patients requiring ophthalmologist referral. The authors have also highlighted the advantages of electronic imaging systems in terms of patient comfort, enhanced storage and retrieval of images and the potential for transfer for off-site analysis. Future comparison against other screening modalities using a gold standard for validation would be required prior to a recommendation for clinical practice.
Reference and grading: Ryder, 1996 (ref. 29); Methodology Grade IV; Results Grade I.

Technique under review: Direct acquisition of digital images using a TOPCON non-mydriatic camera and Imagenet system or Frost Medical Systems Ris-Lite system.
Population studied: 118 patients photographed after mydriasis.
Study aim: To determine the effectiveness of digital imaging in detecting DR against Polaroid photography performed with a Canon CR4 camera; for both techniques a single 45° field centred between macula and disc was analysed.
Gold standard: Standard seven-field 35° fundus photography.
Results: Detection of any retinopathy: digital imaging sensitivity 74%, specificity 96% (Polaroid: sensitivity 72%, specificity 88%). Detection of retinopathy meriting referral for ophthalmologist review: digital imaging sensitivity 85%, specificity 98% (Polaroid: sensitivity 90%, specificity 98%).
Reviewers’ comments: For single-image comparison, digital images appear as effective as Polaroid systems.
Reference and grading: Taylor, 1998 (ref. 31; poster and personal correspondence with author); Methodology Grade V; Results Grade III.

Digital imaging in comparison with retinal examination

Technique under review: On-line digital photographs, three 30° fields; diagnosis made by a retinal specialist from examination of the photographs.
Population studied: 11 consecutive patients with diabetic retinopathy attending an ophthalmologist clinic.
Study aim: To determine the value of a digital fundus camera as a screening tool for DR.
Comparative method: Clinical examination by another retinal specialist in masked fashion.
Results: Of 22 eyes examined, classification was agreed in 19. The digital image detected one case of new vessels missed on examination; examination confirmed one case of new vessels less distinct on the photograph. One patient was defined as diabetic maculopathy by image and background DR on examination.
Reviewers’ comments: The study population presented is inadequate to allow implications for clinical practice to be determined. Although the authors suggest that the study supports the use of digital imaging as a screening tool, the population studied were all known to have retinopathy and do not fit the criteria for a general diabetic population. The potential advantages of electronic image transfer are recognised.
Reference and grading: Gupta, 1996 (ref. 33; poster and personal correspondence with author); Methodology Grade IV; Results Grade I.

Technique under review: Direct acquisition of digital fundus images.
Population studied: 611 consecutive patients attending a general diabetic clinic, randomised for assessment by either protocol.
Study aim: To determine whether digital imaging enhanced the detection of retinopathy when added to routine screening.
Comparative method: Results of direct ophthalmoscopy and visual acuity assessment alone compared with results of routine screening plus digital image analysis.
Results: For the detection of early background retinopathy, the detection rate in type 1 DM was 42.6% with digital imaging vs 26.7% with routine screening; in type 2 DM, 33.2% with digital imaging vs 20.3% with routine screening.
Reviewers’ comments: The addition of digital imaging significantly increased the detection rate of mild retinopathy in the group under review compared with those undergoing only routine screening. The authors concluded that the detection of early DR is enhanced by fundus photography. These findings did not apply to other degrees of retinopathy. Although the study was weakened by the lack of a gold standard for validation of results, this preliminary report shows a role for digital photography in the early detection of retinopathy.
As advances are made in the prevention of progression of retinopathy, this finding will be of significance should the emphasis in screening move from detection of referable eye pathology to detection of retinopathy at onset Lindsay, 199834 (Poster) Methodology Grade III Results Grade II continued Appendix 1 6 4 TABLE 24 Evaluation of digital photography techniques (cont’d) Technique Population studied Study aim Gold standard Results Reviewers’ comments Reference Methodology under review or comparative and results method grading Automated detection of diabetic retinopathy 60° red-free fundus photographs of posterior pole digitised using a Nikon Coolscan slide reader to provide 700 × 700 pixel images. Prospective study 200 images from patients with diabetic retinopathy and 101 images from non- diabetics with normal fundi Evaluation of the ability of an artificial neural network to detect DR after optimisation of computer protocols Gold standard Evaluation of conventional images by an experienced retinal specialist For detection of DR, system achieved sensitivity of 88.4% and specificity of 83.5% in comparison with specialist. Ability to differentiate exudates or haemorrhage/microaneurysms from normal retina (or normal retinal image containing vessels only) 93% and 73.8%, respectively. To ensure that all patients referred by specialist for evaluation are also detected by the system would require increasing sensitivity to 99% at the expense of reducing specificity to 69%. At present, authors feel the system is comparable to results achieved by optometrists, Polaroids from non- mydriatic cameras and diabetologists using direct ophthalmoscopy Although the neural network can differentiate blood vessels, hard exudates and haemorrhages/microaneurysms with relative accuracy from background retina, the basis for subsequent decision to refer patients remains unclear. 
No features of moderate (CWS, IRMA, VB) or proliferative (neovascularisation) are being detected using the system described. A comparison of the technique with an established screening method would be valuable to determine the role in future clinical management. At present, the system remains untested in a clinical setting Gardner, 199635 Methodology Grade V Results Grade II continued H ealth Technology Assessm ent 2003; Vol. 7: N o. 30 6 5 © Q ueen’s Printer and C ontroller of H M SO 2003. A ll rights reserved. TABLE 24 Evaluation of digital photography techniques (cont’d) Technique Population studied Study aim Gold standard Results Reviewers’ comments Reference Methodology under review or comparative and results method grading Directly acquired on- line digital fundus photographs or digitised colour photographs – seven 35° fields according to Airlie House criteria 100 diabetic patients To develop an automated method of lesion identification and quantification of severity of DR Gold standard Diagnosis by retinal specialist from seven 35° colour fundus photographs For microaneurysms, dot-and-blot haemorrhages and striate haemorrhages, computer showed 99% specificity compared with retinal specialist; sensitivity better than retinal specialist. For CWS, exudates, IRMA and neovascularisation, sensitivity was 93% and specificity 90%. The authors believe that the use of computer-based analysis is comparable to retinal specialists for the interpretation of fundus photographs The authors have attempted to develop a fully automated computer-driven analysis system, with detection of lesions across the whole spectrum of DR. We have been unable to identify subsequent formal publication of this presented abstract to provide clarification of the methodology used in this study. The authors have stated an intention to develop a commercially available system in the near future. 
While highlighting this work as an area of future interest, no comment can be made at present regarding implications for patient management based on the available data Sinclair, 199636 (Abstract and personal correspond- ence with author) Methodology Grade IV Results Grade II continued Appendix 1 6 6 TABLE 24 Evaluation of digital photography techniques (cont’d) Technique Population studied Study aim Gold standard Results Reviewers’ comments Reference Methodology under review or comparative and results method grading Automated detection of hard exudates Direct acquisition of digital fundus images 134 images taken from those routinely acquired at a general diabetic screening clinic To develop an automated DR screening programme, applying a statistically based pattern recognition program Not stated in report Pattern recognition allowed correct identification of optic disc and fovea in 78.4% of images. Microaneurysms, haemorrhages, exudates and CWS were identified with success rates of 66, 89, 97 and 63%, respectively Authors found difficulty differentiating between microaneurysms and haemorrhages and between exudates and CWS at this preliminary stage. They suggest a successful role for their novel statistical approach to pattern recognition in future automated screening programmes. Further development and clinical assessment will be required before this is likely to impact on patient management Ege, 199737 (Abstract) Methodology Grade II Results Grade I Digitisation of colour transparency slides to yield black and white image with 512 × 512 pixel array Standard photographs used in the ETDRS for classification of retinal exudates The semi- automated detection of retinal exudates Gold standard Exudate area determined by program assessed against standard photograph grading of severity With shade correction and contrast enhancement, program could identify 3 distinct grades of severity which correlated with those used in the EDTRS grading. 
With serial analysis of individual images, standard deviations of calculated exudate area were significantly lower in grade 1 compared with grade 3 images, i.e. reproducibility of technique decreased as number of exudates increased Ward, 198939 Methodology Grade II Results Grade II continued H ealth Technology Assessm ent 2003; Vol. 7: N o. 30 6 7 © Q ueen’s Printer and C ontroller of H M SO 2003. A ll rights reserved. TABLE 24 Evaluation of digital photography techniques (cont’d) Technique Population studied Study aim Gold standard Results Reviewers’ comments Reference Methodology under review or comparative and results method grading Indirect digitisation of 30° or 50° field colour slides centred on macula after projection through a red- free filter Diabetic patients with exudative retinopathy. No normal controls were included in this paper Automated detection and quantification of retinal exudates Gold standard Manual estimation of false positive and negative by experienced ophthalmologist Sensitivity 87% (range 61–100%) Coefficient of variation for reproducibility 3% for confluent areas of exudate and 17% for small scattered areas Authors have suggested that results may be improved by using directly acquired digital images. The region of interest was manually delineated prior to processing. It is anticipated that program refinement could enable this function to be automated. The speed of data acquisition and analysis and lack of user interaction will lead to enhanced objectivity in the assessment of exudative retinopathy. This has potential in both screening for retinopathy and in the monitoring of response to treatment. The program presented appears robust and well validated. 
Evaluation of its performance in a study population with and without exudative retinopathy would be of interest Phillips, 1991,40 199341 Methodology Grade IV Results Grade IV continued Appendix 1 6 8 TABLE 24 Evaluation of digital photography techniques (cont’d) Technique Population studied Study aim Gold standard Results Reviewers’ comments Reference Methodology under review or comparative and results method grading Secondary digitisation of colour transparency photographs using filters to give images in three colour planes (red, green and blue) with 512 × 480 pixel resolution Retrospective analysis of images from 30 patients, specifically chosen to contain both the lesion under investigation plus a random distribution of other variables such as background pigmentation. 10 images each with either CWS, exudates or drusen were identified To determine whether fundus lesions can be identified on the basis of colour alone. The effect of luminance on colour was removed to overcome variability in exposure. This allowed a 2- dimensional vector to be used to assign position of individual lesions on a chromaticity scatter diagram Gold standard Examination of colour transparencies by a retinal specialist Different lesions appeared to occupy distinct regions of the chromaticity scatter diagram, with a degree of overlap between CWS and drusen. This was reflected in the greatest error in discrimination between these lesions after application of the Mahalanobis classifier and jackknife technique for assessment of separability of lesions into correct groups (exudates, sensitivity 70%, specificity 95%; CWS, sensitivity 70%, specificity 65%; drusen, sensitivity 50%, specificity 85%; calculated from data presented by the authors) The authors propose that their technique alone is insufficient as a discriminator when lesions are of similar colour. They propose to investigate the use of additional features of size, shape, edge sharpness and texture to aid lesion recognition. 
At this stage in development, it is unlikely that this program will enhance automated image analysis.

Goldbaum, 1990:42 Methodology Grade IV; Results Grade I

TABLE 24 Evaluation of digital photography techniques (cont'd)

Technique under review: Automated detection of VB. Colour transparencies processed by digital slide scanner; regions of interest corresponding to 64 × 64 pixels manually identified for analysis
Population studied: Prospective study of patients attending an ophthalmology department for assessment of diabetic retinopathy. 54 vessel segments from 18 sets of photographs processed for further evaluation
Study aim: Automated assessment of VB
Gold standard or comparative method: Gold standard – assessment by professional photographic graders, using an adaptation of the Airlie House criteria for classification and studying slide reproductions of the digitised vessel segments
Results: 51 slides were considered assessable by 2 graders, achieving an exact match of clinical grade in 76% of cases. For the remaining images, grading differed by one level of severity, with 7 of 12 being in the 'questionable vs definite' category and definitive grading made by a senior colleague. The computer-based VB index was able to differentiate advanced beading significantly from each of the other 3 categories, definite beading from both advanced beading and normal vessels, and normal vessels from both definite and advanced beading (p = 0.05, Tukey's non-parametric test)
Reviewers' comments: The application of Fourier analysis to the measurement of variation in vessel diameter allows quantitative assessment of VB, which has been shown to reflect clinical grading. It is recognised that VB is a difficult lesion to differentiate clinically and a method of reducing subjectivity is welcomed.
Integration with additional lesion detection programs would be beneficial for use in clinical practice. Validation in a clinical setting would assist definition of its role in future use.

Kozousek, 1992:44 Methodology Grade V; Results Grade I

TABLE 24 Evaluation of digital photography techniques (cont'd)

Technique under review: Automated detection of neovascularisation. Low-angle fundus photographs of the optic disc enlarged to allow manual tracing of vessel patterns; images digitised and a density–density correlation method used to calculate fractal dimensions
Population studied: 10 retinal vessel patterns from patients with known new vessels fulfilling criteria of NVD ≥ ETDRS Grade 3; images from 14 healthy fundi as a control group
Study aim: To use the principles of fractal geometry to differentiate normal retinal vasculature from the development of neovascularisation
Gold standard or comparative method: Gold standard – professional photographic graders using modified Airlie House criteria for the definition of neovascularisation
Results: Assuming 100% accuracy by graders, FD = 1.8 yields a sensitivity for detection of NVD ≥ ETDRS Grade 3 of 90%. Under these conditions, specificity is 93%, with a false-positive rate of 7% and a false-negative rate of 10%
Reviewers' comments: The method of deriving digital images of vessel patterns is cumbersome and may be improved by the use of directly acquired digital images. The technique is limited by an inability to identify NVD Grade 2, which also poses a risk of haemorrhage, and by the presence of vitreous haemorrhage, which obscures fundal detail. The area under review is a low-angle 10° view centred on the optic disc, which will not allow detection of early peripheral neovascularisation.
The technique may have implications for automated screening strategies but will require further clinical validation.

Daxer, 1993:47 Methodology Grade V; Results Grade III

TABLE 24 Evaluation of digital photography techniques (cont'd)

Technique under review: Automated detection of vessel diameter. Digitisation of serial 30° 35-mm photographs via a high-resolution video camera and a Context Vision GOP-302 image-analysing computer
Population studied: 53-year-old normotensive male type 1 diabetic with mild background retinopathy and normal renal function
Study aim: Evaluation of a semi-automated computer-driven method of assessing vessel diameter – vessel segment manually selected; vessel diameter analysed by calculating the average grey profile across the vessel at 12 neighbouring parallel cut lines adjacent to the region of interest
Gold standard or comparative method: Comparative study – comparison with observer-dependent evaluation using a graphics-digitising table and manual definition of vessel diameters
Results: Coefficient of variation for a single image 1.5–7.5% for the semi-automated method and 6–34% for the observer method (noted to be dependent on vessel size). Standard deviation of variation from the mean 4.2 µm for the automated method (cf. 18.9 µm for the observer), with 95% CIs of 3.2 to 6.0 and 14.6 to 27.1 µm, respectively (p < 0.001)
Reviewers' comments: The image analysis method appears more reproducible and accurate, particularly with small-diameter vessels. ECG-triggered photography may reduce variability in serial photography analysis by minimising the effect of the cardiac cycle on perfusion pressure.

Abbreviations: CWS, cotton-wool spots; DM, diabetes mellitus; DR, diabetic retinopathy; NPDR, non-proliferative diabetic retinopathy; NSTDR, non-sight-threatening diabetic retinopathy; NVD, new vessels at the disc; VB, venous beading.
This technique will enhance research into retinal blood flow parameters and the pathogenesis of retinopathy. It is unlikely to provide a useful tool in general retinopathy screening and management.

Newsom, 1992:55 Methodology Grade III; Results Grade II

TABLE 25 Assessment of study methodology and results – evaluation of digital photography techniques (methodology criteria: use of gold standard, appropriate assessment, no study population bias, correct independent verification, reproducible description; results criteria: sensitivity/specificity calculated, reproducibility, statistics appropriate, data presented; overall gradings below)

George, 199724: Methodology IV; Results III
Friberg, 198725: Methodology III; Results I
Ikram, 199727: Methodology V; Results I
George, 199828: Methodology V; Results II
Ryder, 199629: Methodology IV; Results I
Taylor, 199831: Methodology V; Results III
Gupta, 199633: Methodology IV; Results I
Lindsay, 199834: Methodology III; Results II
Gardner, 199635: Methodology V; Results II
Sinclair, 199636: Methodology IV; Results II
Ege, 199737: Methodology II; Results I
Ward, 198939: Methodology II; Results II
Phillips, 1991,40 199341: Methodology IV; Results IV
Goldbaum, 199042: Methodology IV; Results I
Kozousek, 199244: Methodology V; Results I
Daxer, 199347: Methodology V; Results III
Newsom, 199155: Methodology III; Results II

spots among panretinal photocoagulation scars were misdiagnosed as laser burns. However, while accepting the limitation of their system operating at a pixel density of 768 × 512, the digital camera shows promise as an additional tool in retinopathy management, with the potential to improve its performance as technology advances.

Digital imaging versus Polaroid photography

A direct comparison of instant electronic imaging systems against Polaroid photography has been presented by Ryder and colleagues (Table 24).29 After administration of mydriatics, 213 eyes were imaged. Diabetic retinopathy was present in 58 eyes, and was identifiable in 55 of the digital images but only 49 of the Polaroid photographs.
Thirty-four eyes were deemed appropriate for referral to an ophthalmologist: evidence of retinopathy requiring referral was present in the digital images in all cases but was seen in only 24 (71%) of the Polaroid photographs. In addition to the superior lesion detection demonstrated, patients preferred the electronic imaging system, as its less intense photographic flash was perceived to be more comfortable. This reflects the findings of Taylor and colleagues (Table 24), who found that 96% of patients surveyed following fundus photography rated the electronic system as comfortable as, or more comfortable (44%) than, Polaroid photography.30 From an educational aspect, 93% of patients agreed that the ability to view their own fundus images was important, with 91% also finding value in an immediate explanation of the images. The same group have also assessed the effectiveness of digital imaging in comparison with Polaroid photography (Table 24).31 They demonstrated a sensitivity of 74% (Polaroid 72%) and a specificity of 96% (Polaroid 88%) for the detection of any retinopathy; for the detection of retinopathy meriting referral to an ophthalmologist, sensitivity was 85% (Polaroid 90%) and specificity 98% (Polaroid 97%). These results were obtained using seven-field stereo photography as the recognised gold standard. Therefore, while retaining the advantages of an instantly acquired image, it would appear that the enhanced picture storage and manipulation offered by a digital system can be gained without any loss of picture quality. St Thomas' Hospital, London, has an established Diabetic Eye Complication Screening programme, allowing open access for GPs who manage their own diabetic clinics.32 Although photography was traditionally performed using a non-mydriatic Polaroid camera, this technique has recently been replaced by digital photography.
The authors confirm that an on-going evaluation of their new system in comparison with conventional imaging is in progress but are unable to provide preliminary results at this stage.

Digital imaging versus retinal examination

In an evaluation of digital cameras in a clinical setting, images were directly acquired from 11 consecutive patients using three 30° fields of view (Table 24).33 The results of image analysis and clinical examination by a retinal specialist showed agreement of classification in 19 of the 22 eyes. Where there was disagreement, one case of new vessels was determined by photography alone; one case of questionable new vessels on photography was clearly evident on examination; and in one eye, diabetic maculopathy was classified on photographic screening with only background retinopathy being present on examination. This latter finding may relate to the difficulty of determining the presence of macular oedema from non-stereoscopic images. The numbers reviewed in this presentation are small, and the authors' claim that digital imaging will provide an efficient screening tool is premature without the benefit of a large-scale clinical trial. However, they do highlight the possibility of using the digital camera to acquire images at the place of delivery of diabetic care, with subsequent electronic transmission elsewhere for further analysis.
With perhaps more relevance to the practical management of diabetic patients within a hospital setting in the UK, the detection rate of retinopathy was noted to be higher in consecutive patients attending a general diabetic clinic when digital imaging was added to routine screening by direct ophthalmoscopy (type 1 diabetes mellitus: prevalence of background retinopathy 42.6% with digital imaging versus 26.7% with routine screening only; type 2 diabetes mellitus: 33.2% versus 20.3%) (Table 24).34 It was assumed that the prevalence of retinopathy would be equal in both groups under study, although details of their randomisation protocol are not available. Although there was no significant difference in detection rates for other degrees of retinopathy, the data presented suggest that the use of digital photography can enhance the detection of mild retinopathy, when early lesions may be difficult to detect with direct ophthalmoscopy alone.

Automated analysis of images

Detection of general retinopathy

Retinal microaneurysm counts have been shown to be a useful indirect marker for grading the severity of retinopathy.
These correlate with the presence of both haemorrhages and cotton-wool spots and, to a lesser degree, with hard exudates and intraretinal microvascular abnormalities.6 In addition, based on a study monitoring the increase in total microaneurysm count over a 4-year period, it is suggested that this change from baseline may be used as a surrogate measure for identifying those patients likely to develop significant diabetic retinopathy in the future.5 This prompted Gardner and colleagues (Table 24) to apply automated computer analysis techniques to digitised colour slides for the detection of early retinopathy.35 In a prospective study of 301 posterior pole fundal images from both diabetic patients with retinopathy and non-diabetic controls with healthy fundi, a neural network system was able to detect retinopathy with a sensitivity of 88.4% and a specificity of 83.5% in comparison with examination of the original fundus photographs by an experienced ophthalmologist. The ability to differentiate lesions from normal background retina was 73.8% for microaneurysms and haemorrhages and 93.0% for hard exudates. Adjustment of the sensitivity thresholds to 99% would allow all cases detected by the specialist to be identified by the system but would have led to a corresponding reduction in specificity to 69%. Their initial results are, however, comparable to those being achieved by optometrists and diabetologists using direct ophthalmoscopy and Polaroids from non-mydriatic cameras (Table 22). In practice, the application of widespread retinal photography on the scale required to undertake regular screening assessment of the diabetic population will be limited by the availability of trained photographic graders. Therefore, there is pressure to develop a fully automated screening programme, taking advantage of the potential for computer-aided analysis of digital images. 
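The screening figures quoted in this section (for example, Gardner and colleagues' 88.4% sensitivity and 83.5% specificity) follow from simple confusion-matrix arithmetic. A minimal sketch, with hypothetical eye counts chosen only to reproduce those quoted rates:

```python
# Sensitivity and specificity from a screening confusion matrix.
# The counts below are hypothetical, chosen only to reproduce the
# quoted rates of 88.4% and 83.5%; they are not study data.

def sensitivity(tp: int, fn: int) -> float:
    """Proportion of eyes with retinopathy that the system flags."""
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    """Proportion of healthy eyes that the system passes."""
    return tn / (tn + fp)

tp, fn = 884, 116   # 1000 hypothetical eyes with retinopathy
tn, fp = 835, 165   # 1000 hypothetical eyes without

print(sensitivity(tp, fn), specificity(tn, fp))  # 0.884 0.835
```

Raising the detection threshold trades one rate for the other, which is why setting the system to 99% sensitivity dropped its specificity to 69%.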
Sinclair and colleagues (Table 24) are currently developing an automated diabetic retinopathy screener, with early results presented at the American Diabetes Association Meeting in June 1996.36 From analysis of the images of 100 patients, using direct acquisition or secondary digitisation of colour slides in seven standard 35° fields, their findings appear encouraging. In comparison with an assessment of the original colour fundus photographs by a retinal specialist, the computer showed a 99% specificity in the detection of microaneurysms and blot and striate haemorrhages. Although the sensitivity was quoted to be better than that of the retinal specialist, exact values were not given. With regard to the detection of the more advanced lesions of nerve-fibre layer infarcts, exudates, intraretinal microvascular abnormalities and neovascularisation, the computer achieved a sensitivity of 93% and a specificity of 90%. Clearly, the need for validation of their program in a larger trial is recognised, but these preliminary results suggest that automated systems may be able to play a role not only in the detection of retinopathy but also in monitoring patient progression to a stage where referral to an ophthalmologist is warranted. Using a statistical approach to pattern recognition, Ege and colleagues (Table 24) report preliminary data on the ability of their automated system to analyse digital images taken as part of the routine monitoring of retinopathy in their clinic setting.37 Based on analysis of 134 images, the optic disc and fovea were correctly identified in 78.4% of cases. Microaneurysms, haemorrhages, exudates and cotton-wool spots were detected with sensitivities of 66%, 89%, 97% and 63%, respectively. Problems were identified in the ability to differentiate microaneurysms from haemorrhages and to distinguish cotton-wool spots from hard exudates.
The authors recognise that their system is at an early stage of development and further refinement is required before it may provide a clinically useful tool. However, their presentation underlines the need for the development of reliable automated screening programmes if digital photographic services are to be more widely offered as a routine part of diabetic care.

Detection of hard exudates

Hard exudates have been noted to be a consistent feature as retinopathy progresses, indicating areas of retinal vascular leakage.38 As manual counting of exudates and calculation of their area is both time consuming and unsatisfactory, several authors have applied computer analysis to this problem (Table 24).39–42 As exudates reflect more light than the background retina, they are represented by pixels of a higher grey-level value in digital images, that is, they appear 'whiter' than the surrounding retina. This property allows their identification by a process of thresholding, where pixels equal to or greater than a chosen grey level are selected by the computer. After a series of shade-correction programs on secondarily digitised images, both Ward and colleagues39 and Phillips and colleagues40,41 were able to apply this principle successfully in an effort to provide a quantitative method of detecting exudates. By monitoring both number and area, this could provide a method of determining response to treatment such as focal laser therapy. Although the above techniques do not distinguish other objects with a similar grey threshold, such as cotton-wool spots or drusen, Goldbaum and colleagues (Table 24) have attempted discrimination of hard exudates from these lesions on the basis of colour alone.42 A training set of images was employed, where groups of pixels from representative lesions were selected manually and used to calculate colour estimates for each lesion type.
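The grey-level thresholding principle applied by Ward and colleagues and Phillips and colleagues can be sketched as follows; the threshold value and area scaling here are illustrative assumptions, not values from the published programs:

```python
# Minimal sketch of grey-level thresholding for exudate quantification:
# exudates reflect more light than background retina, so pixels at or
# above a chosen grey level are flagged and their number/area summed.
# Shade correction is assumed to have been applied beforehand.
import numpy as np

def exudate_mask(image: np.ndarray, grey_threshold: int) -> np.ndarray:
    """Boolean mask of pixels with grey level >= grey_threshold."""
    return image >= grey_threshold

def exudate_area(image: np.ndarray, grey_threshold: int,
                 um2_per_pixel: float = 1.0) -> float:
    """Total flagged area (pixel count scaled to square micrometres)."""
    return float(exudate_mask(image, grey_threshold).sum()) * um2_per_pixel

# Toy fundus patch: three bright 'exudate' pixels on a dark background.
fundus = np.array([[10, 200, 220],
                   [15,  30, 210],
                   [12,  14,  16]], dtype=np.uint8)
print(exudate_area(fundus, grey_threshold=180))  # 3 pixels exceed the threshold
```

Tracking the flagged count and area across serial images is what allows number and area of exudates to be monitored as a measure of response to treatment.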
Using this software program, the authors found a limited ability to discriminate between these lesions, although hard exudates were most likely to be correctly identified (hard exudate: sensitivity 70%, specificity 95%; cotton-wool spot: sensitivity 70%, specificity 65%; drusen: sensitivity 50%, specificity 85%). Further refinement of the technique has been proposed, with the addition of measures of size, shape, edge sharpness and texture to improve accuracy.

Detection of venous beading

The relative importance of individual features of diabetic retinopathy for predicting future progression of disease has been determined by the ETDRS.7 In addition to the severity of intraretinal microvascular abnormalities and haemorrhages/microaneurysms, venous beading was noted to be a powerful predictor of the future development of proliferative retinopathy. Several authors have used a technique of Fourier analysis to determine variation in the diameter of blood vessels, developing a computer-assisted method of detection and quantification of venous beading.43–45 Kozousek and colleagues (Table 24) have compared their program with assessment of the original colour slides by professional photographic graders, using the Airlie House criteria for classification.44 Exact agreement with the photographic graders was achieved in 76% of the 51 digitised slides suitable for analysis. For the remainder, grading differed by one level of severity only, with seven out of 12 cases being in the 'questionable versus definite' category. The calculated venous beading index could be used to differentiate between gradings of severity and to distinguish normal vessels from both definite and advanced beading. Although the regions of interest were manually identified for computer analysis in this initial study, the authors suggested that the technique could prove a valuable addition to future automated general retinopathy screening programmes.
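The Fourier principle behind such a venous beading index can be illustrated with a toy example: a beaded vein varies periodically in calibre along its length, so spectral energy away from the mean diameter rises with beading. The normalisation below is an assumption for illustration only and is not the published index of references 43–45:

```python
# Toy Fourier-based beading measure: sample the vessel diameter along its
# length, remove the mean, and look for energy at non-zero spatial
# frequencies. The normalisation is an illustrative assumption, not the
# index defined in the cited papers.
import numpy as np

def beading_index(diameters: np.ndarray) -> float:
    """Peak spectral magnitude of diameter variation, scaled by mean calibre."""
    spectrum = np.abs(np.fft.rfft(diameters - diameters.mean()))
    return float(spectrum.max() / (diameters.mean() * len(diameters)))

x = np.linspace(0, 4 * np.pi, 128)
smooth = np.full_like(x, 50.0)         # uniform 50-unit calibre: no beading
beaded = 50.0 + 8.0 * np.sin(2 * x)    # periodic narrowing and widening

print(beading_index(smooth), beading_index(beaded))
```

A smooth vessel scores zero; the periodically beaded one scores substantially higher, which is the property a quantitative grading scheme exploits.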
Detection of neovascularisation

The ETDRS has indicated that patients who have progressed to the development of high-risk retinopathy should receive panretinal laser photocoagulation.8 It recommended that those with less severe retinopathy should have laser therapy deferred, with close monitoring of their progress. At this earlier stage, the adverse effects of laser radiation on visual field and central vision outweigh the benefit gained in reduced likelihood of future development of high-risk retinopathy. However, this strategy is dependent on the consistent recognition of advancing eye disease. There is therefore a clear need to develop techniques that will allow the detection of neovascularisation at the earliest possible stage, for appropriate timing of laser therapy. In his overview of fractal analysis applied to the retinal vasculature, Mainster describes the application of a fractal dimension (FD) to describe the properties of a branching vessel pattern.46 For example, a straight line is attributed an FD of 1, an area is given an FD of 2 and a structure filling three-dimensional space is given an FD of 3. The FD of a branching structure lying flat on the retina will therefore lie somewhere between 1 and 2. As it is an indirect measure of how completely a structure fills space, the value attributed to a branching structure will increase as its nature becomes more convoluted and it effectively covers a surface more completely, that is, the FD will approach 2. Using this principle, Daxer (Table 24) has performed computer analysis of digitised images to determine whether fractal properties can confirm the presence of neovascularisation.47 In a comparison of images from 10 diabetic patients with proliferative retinopathy and 14 healthy controls, the FDs of the retinal vasculature were generally higher in the diabetic group.
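The FD scale described above can be made concrete with a box-counting estimator, a standard method for estimating fractal dimension. Note that Daxer used a density–density correlation method, so this sketch illustrates the concept rather than reproducing his calculation:

```python
# Box-counting fractal dimension: cover the binary vessel mask with boxes
# of decreasing size and fit log(occupied boxes) against log(1/size).
# A straight line gives FD ~1 and a filled plane FD ~2, matching the
# scale described in the text.
import numpy as np

def box_count_fd(mask: np.ndarray, sizes=(1, 2, 4, 8, 16, 32)) -> float:
    counts = []
    for s in sizes:
        n = 0
        for i in range(0, mask.shape[0], s):
            for j in range(0, mask.shape[1], s):
                if mask[i:i + s, j:j + s].any():
                    n += 1
        counts.append(n)
    # FD is the slope of log(count) against log(1/size)
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
    return float(slope)

line = np.zeros((64, 64), bool)
line[32, :] = True                 # a straight "vessel": FD near 1
plane = np.ones((64, 64), bool)    # a filled area: FD near 2

print(round(box_count_fd(line), 2), round(box_count_fd(plane), 2))  # 1.0 2.0
```

A convoluted neovascular frond covers the surface more completely than a normal branching tree, pushing its estimated FD towards 2, which is the basis of the threshold classification described next.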
Using a threshold FD value of 1.8, Daxer was able to predict neovascularisation at the optic disc equivalent to ETDRS grade 3 with a sensitivity of 90% and a specificity of 93%, using trained photographic graders as a reference standard. The original work required manual delineation of vessel patterns from colour slide photographs, which then underwent secondary digitisation for analysis. This process could be simplified significantly by direct acquisition of digital fundus images. The technique used in this study was based on the evaluation of a low-angle 10° field centred on the optic disc and does not address the detection of peripheral new vessels elsewhere. However, as optic disc neovascularisation is recognised as the form of proliferative retinopathy carrying the highest risk of subsequent retinal or vitreous haemorrhage, this novel technique remains promising for the future detection of those most in need of urgent laser therapy. In subsequent research, Daxer has demonstrated that FD values could reflect the clinical evidence of development of new vessels in an adult with type 2 diabetes mellitus, and their regression following panretinal laser therapy.48 Although preliminary results were confined to the study of a single patient, it is possible that this technique may also provide a quantitative measure of the effectiveness of laser photocoagulation.

Applications of digital photography in research

In 1986, Brinchmann-Hansen and Engvold49 described a technique for the calculation of vessel diameter from digitised images, which has been applied as a research tool by several authors in the investigation of the pathogenesis of diabetic retinopathy (see Table 26).50–54 Together with the use of laser Doppler velocimetry, an indirect calculation of blood flow through the retinal circulation has been made possible.
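The indirect flow calculation referred to above combines a vessel diameter measured from the digitised image with a centreline red-cell velocity from laser Doppler velocimetry: flow = mean velocity × cross-sectional area. A minimal sketch; the factor of 1.6 converting peak to mean velocity (parabolic flow profile) and the example numbers are illustrative assumptions, not taken from reference 49:

```python
# Volumetric retinal blood flow from vessel diameter and Doppler velocity.
# Assumptions (for illustration only): parabolic flow, so mean velocity is
# peak velocity / 1.6, and a circular vessel cross-section.
import math

def retinal_flow_ul_per_min(diameter_um: float, v_max_mm_s: float) -> float:
    v_mean_um_s = (v_max_mm_s * 1000.0) / 1.6       # peak -> mean velocity
    area_um2 = math.pi * (diameter_um / 2.0) ** 2   # circular cross-section
    flow_um3_s = v_mean_um_s * area_um2
    return flow_um3_s * 60.0 / 1e9                  # um^3/s -> microlitres/min

# e.g. a hypothetical 120 um vein with a 25 mm/s peak red-cell velocity
print(round(retinal_flow_ul_per_min(120.0, 25.0), 2))
```

The result lands in the range of a few to tens of microlitres per minute, the same order of magnitude as the flows reported in Table 26.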
However, as the technique is dependent on the manual assessment of vessel edges, which may induce observer error, Newsom and colleagues (Table 24) have developed a semi-automated technique of image analysis.55 Using digitised 35-mm retinal images, the vessel region of interest was identified by cursor and the computer calculated the average grey-level profile across the vessel from an average of 12 serial measurements. With the ability to perform automatic vessel edge detection, the computer-driven program showed a lower coefficient of variation than the observer-driven method (1.7–7.5% versus 6–34%). The standard deviation of variation from the mean was lower in the semi-automated analysis program at 4.2 µm (95% CI 3.2 to 6.0 µm) compared with 18.9 µm for the observer method (95% CI 14.6 to 27.1 µm), suggesting lower variability of the technique. In summary, therefore, the authors propose that their computer-assisted analysis program will provide a useful research tool, which is likely to be improved by the study of directly acquired digital images.

Other aspects of evaluation – telemedicine

The application of telemedicine to screening for retinopathy has already been adopted in areas where, for economic or geographical reasons, access to ophthalmic services has been limited. There is an on-going collaboration between local physicians and the Ophthalmology Faculty at the University of Texas, San Antonio, TX, USA to improve the delivery of eye care in the community.56 With funding from the South Texas/Border Health Initiative, a bus has been provided to allow a mobile ophthalmic evaluation service to operate. This includes digital fundus photography with direct transmission of images to a grading team at the San Antonio campus, who provide an immediate diagnosis for the examining doctor to discuss with the patient.
Given the high prevalence of diabetes mellitus in the local Hispanic population, this strategy has the potential to revolutionise the delivery of diabetic eye screening although, at present, no formal evaluation of the success of this technique appears to have been carried out. However, even at this early stage of development, digital technology has been adopted as a way forward in diabetes care in the USA, with several ophthalmic imaging networks already in place to provide central image interpretation for patients in locations as diverse as California and Puerto Rico.57 In response to the St Vincent Declaration12 and the low rate of retinopathy screening being achieved with traditional programmes in Germany, Mann and colleagues have presented early work outlining central grading centre support for images transmitted from diabetes centres.58 Although accepting that images are of lower resolution than 35-mm slides, this has not been deemed to obviate their use for population screening in the opinion of experienced ophthalmologists. At this stage, no data on a formal evaluation of the system are available. Within the UK, a similar strategy has been employed to allow rural GPs in remote areas access to specialist ophthalmic services by the direct transmission of fundus images of their diabetic population for specialist grading. In Powys, Wales, GPs are linked to Telemed, described as Europe's most advanced medical telecommunications project.59 This system allows integration of patients, primary care physicians and hospital specialists with applications not only in retinopathy screening but also extending to cover dermatology referrals and video-linked consultations for physiotherapy assessment and asthma care. In practice, the convenience of these
TABLE 26 Fundus photography – applications in research

Technique under review: Secondary digitisation of conventional fundus photographs. Digitisation of 30° red-free fundus photographs to determine vessel diameter, in conjunction with laser Doppler velocimetry of the superior temporal vein, to calculate flow rates
Population studied: 24 healthy volunteers and 76 patients, the latter attending a retinopathy clinic (63 with type 1 DM, 13 with type 2 DM). Diabetic patients comprised 12 with no retinopathy, 27 with background retinopathy, 13 with pre-proliferative and 12 with proliferative retinopathy
Study aim: Determination of the rate of retinal blood flow in healthy and diabetic fundi
Research technique: Technique of vessel diameter assessment validated elsewhere by Brinchmann-Hansen and Engvold.49 Average of three readings of diameter from three photographs, corrected for the patient's refraction, gives a coefficient of variation of 0.26%
Results: Hyperperfusion of the retinal circulation was noted in all patients compared with normal; in comparison with patients with no retinopathy, retinal flow was 33.2% higher with background, 69.4% with pre-proliferative and 50.1% with proliferative retinopathy. Panretinal photocoagulation led to a significant reduction in flow in comparison with all other groups (4.48 compared with 9.52 µl/min in non-diabetic controls)
Authors' conclusions: Flow rates were calculated to be independent of age, sex, BP, blood glucose concentration, HbA1c, intraocular pressure and type and duration of diabetes. The authors propose that the increased shear stress induced by hyperperfusion is an important factor in the pathogenesis of DR
Reference: Patel, 199250

Technique under review: Digitisation of 30° monochromatic fundus photographs to determine vessel diameter, in conjunction with laser Doppler velocimetry, to calculate flow rates.
Photography was taken in mid-diastole to eliminate the effect of pulsatility from the cardiac cycle
Population studied: 10 normal volunteers; 12 type 1 DM – 10 studies performed with blood glucose maintained >10 mmol/l and 10 studies with blood glucose <10 mmol/l (i.e. 8 patients studied twice). All diabetics had mild NPDR. Elevation of MAP above baseline was achieved using a tyramine infusion
Study aim: To investigate the effect of hypertension on retinal vascular autoregulation in diabetic and non-diabetic subjects via estimation of blood flow velocity
Research technique: Techniques validated in earlier work by Brinchmann-Hansen and Engvold49
Results: In non-diabetics, retinal blood flow showed a significant increase only at MAP 40% above baseline (p = 0.012). In contrast, the increased flow in diabetics with low glucose was significant at both 30% and 40% elevation of MAP above baseline (p < 0.05). For diabetics with high blood glucose, only a 15% elevation of MAP was required to give a significant increase in flow (p < 0.03)
Authors' conclusions: By calculation of the coefficient of autoregulation, significant impairment of the response to increased perfusion pressure was evident in all diabetics, most marked in those with blood glucose elevated above 10 mmol/l.
The authors conclude that the resultant increase in flow rate will exacerbate endothelial damage, providing a potential mechanism for accelerating the progression of DR
Reference: Rassam, 199551

TABLE 26 Fundus photography – applications in research (cont'd)

Technique under review: Secondary digitisation of 60° red-free negatives centred on the fovea
Population studied: Retrospective analysis of images from 45 diabetic children with type 1 DM aged 9.4–18.4 years; images from 74 eyes studied
Study aim: To evaluate the effect of long-term glucose control on retinal vessel diameter in children and young adults and to determine whether this may be a predictor of future retinopathy
Research technique: Technique validated by earlier work by Brinchmann-Hansen and Engvold.49 Manual calculation of vessel width directly on the monitor using a mouse-controlled cursor (micrometry)
Results: Coefficient of variation for repeated measurements 1.8–2.0% for veins and 1.8–2.1% for arteries using densitometry, cf. 3.0–3.1% and 3.6–4.1%, respectively, using micrometry. Linear regression analysis showed an association between the HbA1c value on the date of the second photograph (p = 0.049) and the average HbA1c in the year prior to photography (p = 0.051) with an average increase in venous calibre. No association with changes in arterial calibre was noted, although no correction was possible to compensate for the potential effect of perfusion pressure from the cardiac cycle on overall appearance. No correlation was found with sex, age, duration or age of onset of diabetes, stage of puberty or BP
Authors' conclusions: Despite the limitations of a retrospective study using a small number of patients, the authors conclude that the observed increase in venous diameter in association with prolonged hyperglycaemia supports previous research into the hyperperfusion model of DR. Clinically visible venous congestion may herald the development of capillary changes in young diabetics.
Improvements in digital image analysis may allow presented technique to gain application in clinical use for rapid evaluation of vessel diameter Falck, 199552 continued H ealth Technology Assessm ent 2003; Vol. 7: N o. 30 7 9 © Q ueen’s Printer and C ontroller of H M SO 2003. A ll rights reserved. TABLE 26 Fundus photography – applications in research (cont’d) Technique under review Population studied Study aim Research technique Results Authors’ conclusions Reference Digitisation of 30° monochromatic fundus photographs to determine vessel diameter in conjunction with laser Doppler velocimetry to calculate flow rates 4 groups of 15 patients studied: normotensive non- diabetics; normotensive diabetics (12 type 1 and 3 type 2); hypertensive non- diabetics; hypertensive diabetics (5 type 1; 10 type 2). Diabetics were studied under conditions of relative normoglycaemia (<10 mmol/l) and hyperglycaemia (>15 mmol/l); hypertensive patients were studied before and after control achieved. Diabetics had mild NPDR To determine whether DM can alter the autoregulatory vasoconstrictive response to 60% oxygen breathing under hypertensive and hyperglycaemic conditions via measurement of retinal blood flow Technique validated in previous work by Brinchmann-Hansen and Engvold49 Oxygen reactivity reduced in hypertensive and normotensive diabetics in hyperglycaemic conditions compared with normotensive volunteers (p < 0.005). Also reduced in controlled and uncontrolled hypertensive diabetics at hyperglycaemia compared with respective normoglycaemic group. No statistical difference between hyperglycaemic and normoglycaemic normotensive diabetics. 
For non-diabetics, oxygen reactivity reduced in hypertensive compared with normotensive volunteers (for controlled hypertension, p = 0.026; for uncontrolled hypertension, p < 0.059) The authors conclude that retinal vascular reactivity to hyperoxygenation stimuli is significantly reduced in the hypertensive diabetic patient, regardless of blood glucose control. However, the deleterious effect of hyperglycaemia is thought to outweigh that of hypertension. The need for improved glycaemic control and control of hypertension to prevent accelerated progression of retinopathy is discussed Patel, 199453 continued Appendix 1 8 0 TABLE 26 Fundus photography – applications in research (cont’d) Technique under review Population studied Study aim Research technique Results Authors’ conclusions Reference BP, blood pressure; MAP, mean arterial pressure. Analysis of 30° red-free fundus photographs using a Context Vison GOP-302 digital image analysis system; laser Doppler velocimetry used to allow estimation of retinal blood flow rate 10 diabetic patients, both type 1 and 2, recruited from the diabetic retinopathy clinic with NDPR (1 patient ETDRS grade 20; 4 grade 30; 6 grade 41); 16 non-diabetic controls To determine the effect of alcohol on retinal autoregulation in response to the vasoconstrictive challenge of 60% oxygen inhalation in diabetic and non- diabetic subjects Technique validated in previous work by Brinchmann-Hansen and Engvold49 Both groups responded significantly to oxygen challenge before and after dosing with ethanol 0.5 g/kg, which was thought adequate to reflect social blood alcohol levels – reduced maximum red cell velocity, vessel diameter and retinal blood flow were demonstrated in each group in comparison with baseline measurements. 
No statistical difference was noted between diabetic subjects and non- diabetic subjects in their response In the normotensive normoglycaemic diabetic patients with mild non- proliferative retinopathy under review, the autoregulatory response to hyperoxygenation was comparable to that of non- diabetic subjects. It remained unaffected by acute ethanol consumption in both groups, leading the author to suggest that the retinal circulation may be superior over other circulatory systems in its ability to autoregulate in the presence of ethanol. However, the limited number of patients under review and absence of more advanced grades of retinopathy suggest caution in interpretation of these findings Dhasmana, 199454 techniques for the patient are readily apparent although as yet no formal study of the efficacy or effectiveness of digital screening in diabetic retinopathy against formal clinical examination has been identified in this setting. Digital angiography Introduction The use of a fluorescent emission from circulating dye to highlight features of the human retinal vasculature was initially described by Novotny and Alvis in 1961.60 However, the technique was limited in practice owing to the need to capture images on ciné film, requiring subsequent development and projection for viewing. An advance on conventionally acquired serial photographic images was the introduction of the low-light TV camera in 1986 by Korber, which allowed results to be viewed immediately by the clinician.61 The data obtained could then be stored by use of a conventional video cassette recorder. The recording of real-time data allowed the author to develop a quantitative method of assessment of retinal blood flow by the examination of individual still pictures from a sequence using computer-controlled image analysis. 
This method of estimating arteriovenous passage time, arm-to-retina time and mean plasma flow velocity was subsequently adopted widely (refs 62, 63). Using high-quality video recordings, which were then slowed to one-quarter of real-time speed for analysis courtesy of Scotland Yard, Jacobs and colleagues (see Table 29) were able to study the vascular origins of optic disc new vessels (ref. 64). In a study involving 10 people, including seven diabetic patients with proliferative retinopathy, variable origin of new vessel filling was identified. Within the diabetic sub-group, three patients derived flow from the retinal venous circulation, three patients demonstrated filling of new vessels simultaneously with the central retinal artery and one patient with diabetes showed filling that appeared to derive from two cilioretinal arteries, suggesting a choroidal origin. Although the authors were unable to determine the clinical significance of their findings, their work illustrated the superiority of videofluoroscopy, with an image capture rate of 25 frames per second, over conventional fluorescein photography in the dynamic investigation of neovascularisation. The later development of PC-based video frame grabbers allowed conversion of analogue images to a digital format, permitting computerised image analysis. In 1989, OIS and Topcon marketed the first commercially available system capable of direct acquisition of analogue-to-digital images within the camera control unit itself, designed specifically for use with fluorescein angiography. Studies evaluating fluorescein angiography are summarised in Table 27 and their study methodologies and results are evaluated in Table 28.
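The timing parameters described in this section (arm-to-retina time and arteriovenous passage time) reduce to simple arithmetic on video frame indices once the capture rate is known. The sketch below is illustrative only: the 25 frames-per-second rate comes from the text, while the function names and the example frame numbers are invented for demonstration.

```python
# Angiographic timing from video frame indices (illustrative sketch).
# Assumes a constant capture rate; the frame numbers below are invented.

FRAME_RATE = 25.0  # frames per second, as quoted for videoangiography


def arm_to_retina_time(injection_frame: int, artery_fill_frame: int,
                       rate: float = FRAME_RATE) -> float:
    """Seconds from dye injection to first appearance in the retinal arteries."""
    return (artery_fill_frame - injection_frame) / rate


def arteriovenous_passage_time(artery_fill_frame: int, vein_fill_frame: int,
                               rate: float = FRAME_RATE) -> float:
    """Seconds between arterial filling and venous filling in the same sequence."""
    return (vein_fill_frame - artery_fill_frame) / rate


# Example: injection at frame 0, arteries fill at frame 250, veins at frame 300.
print(arm_to_retina_time(0, 250))            # 10.0
print(arteriovenous_passage_time(250, 300))  # 2.0
```

In practice the filling frames would be read off individual still pictures from the sequence, which is exactly what the computer-controlled frame-by-frame analysis described above enables.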
Indirect digitisation of angiograms

An early comparative study to demonstrate that the conversion to digital format did not cause degradation of conventional fluorescein angiograms which would adversely affect clinical information was carried out by Seeley and colleagues in 1989 (Table 27) (ref. 65). Sixty-eight images were selected, giving a representative selection of common retinal pathologies (29 images included evidence of diabetic retinopathy) and an equal number of normal images. After conversion to a 512 × 480 pixel digital format, images were presented on a cathode-ray tube display. Clinical grading was compared with that obtained by examination of back-illuminated photographic transparencies, without projection. Using ROC curves for analysis, no appreciable difference between the information derived from the two sets of images was identified. The inter-observer differences in performance were attributed to varying clinical experience. Therefore, from their preliminary study, the authors supported the concept of development of a digitised angiogram facility, recognising the future potential for easier image storage and retrieval and image enhancement. Other authors have applied indirect digitisation techniques to both conventionally acquired photographic angiogram sequences and videoangiograms in order to provide a more quantitative assessment of background diabetic retinopathy, as summarised in Table 27.

Digital image analysis

Automated assessment of microaneurysms

In response to the difficulty in achieving reliability and reproducibility in the manual counting of microaneurysms, Phillips and colleagues (Table 27) applied computerised image analysis to the assessment of digitised angiogram negatives (ref. 40). Microaneurysms are visible as round hyperfluorescent objects, with a higher grey level than the background retina, providing a method of differentiation.
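The grey-level differentiation just described can be illustrated with a toy example: threshold the digitised image and count connected bright regions as microaneurysm candidates. This is a deliberately simplified sketch, not the published algorithm; the array values, the threshold and the choice of 8-connectivity are all assumptions for demonstration.

```python
# Toy microaneurysm candidate detection: threshold a grey-level image and
# count connected bright blobs using a pure-Python flood fill (8-connectivity).

def candidate_blobs(image, threshold):
    """Count connected regions whose grey level exceeds `threshold`."""
    rows, cols = len(image), len(image[0])
    seen = [[False] * cols for _ in range(rows)]
    blobs = 0
    for r in range(rows):
        for c in range(cols):
            if image[r][c] > threshold and not seen[r][c]:
                blobs += 1
                stack = [(r, c)]  # flood-fill every pixel of this blob
                while stack:
                    y, x = stack.pop()
                    if not (0 <= y < rows and 0 <= x < cols):
                        continue
                    if seen[y][x] or image[y][x] <= threshold:
                        continue
                    seen[y][x] = True
                    for dy in (-1, 0, 1):
                        for dx in (-1, 0, 1):
                            if (dy, dx) != (0, 0):
                                stack.append((y + dy, x + dx))
    return blobs


# Two bright blobs on a darker background (grey levels 0-255):
retina = [
    [10,  10,  10, 10,  10, 10],
    [10, 200, 210, 10,  10, 10],
    [10, 205,  10, 10, 180, 10],
    [10,  10,  10, 10, 190, 10],
]
print(candidate_blobs(retina, 100))  # 2
```

Real detectors must go further, since, as the text notes next, other structures share similar grey levels, so shape and size constraints are needed on top of the raw threshold.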
However, as they are similar in threshold to neighbouring structures such as blood vessels …

TABLE 27 Evaluation of digital fluorescein angiography. Each entry gives: technique under review; population studied; study aim; gold standard or comparative method; results; authors' conclusions; reference; methodology and results grading.

Comparison of digitised angiograms with original 35-mm images

Seeley, 1989 (ref. 65)
Technique under review: 68 conventional FA images selected for digitisation via a video-format camera and frame-grab board to give 512 × 480 pixel images. Images viewed on a CRT display.
Population studied: 29 out of 36 patients under review had DR.
Study aim: To evaluate whether conversion of conventional photographic angiograms to digital images leads to loss of resolution causing failure in diagnosis of DR. Both sets of images assessed by 3 experts in retinal diagnosis.
Gold standard: Opinion of an expert in retinal angiography responsible for selection of the images.
Results: Using ROC curve analysis to evaluate diagnosis and certainty responses, no appreciable difference between the two types of images was noted. Observer 1, AUC for CRT 0.80 and for slides 0.79; observer 2, 0.87 and 0.86, respectively; observer 3, 0.91 and 0.91, respectively. Inter-observer differences attributed to past experience but did not reflect on the methods under review.
Authors' conclusions: The chosen digital technique does not cause loss of diagnostic information. The perceived advantages of the digital system are ease of image archiving and retrieval; immediate review of examinations; and the potential for transmission of images to other centres while retaining the original on file.
Grading: Methodology Grade V; Results Grade IV.

Automated assessment of microaneurysms

Phillips, 1991 (ref. 40)
Technique under review: Indirect digitisation of 30° or 45° macular views from FA negatives performed by a Panasonic WV-CD 20 CCD video camera and Data Translation DT-2861 frame grabber with an IBM-compatible computer.
Population studied: Macular region of eight angiograms from diabetic patients assessed.
Study aim: Automated assessment of MAs.
Gold standard: Manual counting of MAs by a single observer using the digitised angiographic frames under evaluation.
Results: MAs: sensitivity 82–93% in comparison with manual counting, with specificity of 93%. Coefficient of variation 7–14.5%.
Authors' conclusions: The authors conclude that their results for detection of MAs compare favourably with previously published work by other authors using automated methods.
Grading: Methodology Grade IV; Results Grade IV.

Spencer, 1992 (ref. 67)
Technique under review: Fluorescein angiograms converted to a digitised 512 × 512 array using a monochrome CCD camera and Data Translation DT-2861 frame grabber linked to an IBM-compatible PC.
Study aim: Development of an automated method for the detection and quantification of MAs, assessed against results obtained by manual counting by a team of 5 experienced ophthalmologists working independently, grading both analogue and digitised images.
Gold standard: Correct location of the lesions on analogue prints established by two of the authors.
Results: Using FROC curves to evaluate performance, there was no difference between computer and clinician on the analysis of digitised images. A significantly higher true-positive rate (sensitivity) was achieved by the clinicians using analogue images for any given false-positive rate.
Authors' conclusions: The authors suggest that direct on-line acquisition of digital images may reduce the effect of image degradation inherent in the secondary digitisation technique described. Increasing image resolution to the currently available 1024 × 1024 array will also improve sensitivity. With refinement, the automated technique may prove valuable in the detection and monitoring of DR.
Grading: Methodology Grade IV; Results Grade IV.

Spencer, 1996 (ref. 68)
Technique under review: Fluorescein angiogram negatives converted into digital format using a Kodak Megaplus CCD camera and Series-151 image-processing system, achieving images with a resolution of 1024 × 1024 pixels.
Population studied: 13 images from diabetic patients reviewed, with 7 displaying evidence of MAs.
Study aim: An automated method for identification and quantification of MAs from digital images, compared with manual assessment of MA counts from high-quality analogue prints of the original angiogram negatives made by 5 ophthalmologists (2 consultants and 3 registrars).
Gold standard: Authors' assessment of MA counts from the angiogram prints.
Results: Analysis performed using FROC curves, revealing that the computer's performance was satisfactory in comparison with that demonstrated by the clinicians. A maximum sensitivity of 82% was reached by the automated system, imposed by limitations of the present program in recognising lesions leaking fluorescein or conglomerations of MAs calculated as too large for inclusion.
Authors' conclusions: Digital image processing can offer an objective and easily repeatable quantification of retinal MAs. The authors feel that the method has significant advantages over previously described techniques, although no direct comparisons have been made.
Grading: Methodology Grade V; Results Grade IV.

Cree, 1997 (ref. 69)
Technique under review: Indirect digitisation of standard 35° fluorescein angiogram negatives.
Population studied: Patients with evidence of DR (severity of retinopathy and number of patients not stated).
Study aim: Development of a fully automated method of MA detection – comparison made with manual assessment of computerised images by clinicians.
Gold standard: Manual identification of MAs from test images by an experienced study ophthalmologist and a medical physicist.
Results: The automated detector achieved 82% sensitivity with 5.7 false positives per image. Data presented as a FROC curve suggest the automated method is comparable in practice to results from clinicians.
Authors' conclusions: The authors outline a fully automated strategy, with image registration by computer allowing analysis of serial images from an individual. The system can be adjusted to allow greater sensitivity at the expense of specificity, or vice versa, depending on the characteristics of the population under study. In comparison with clinicians, the system appears robust enough for use in clinical practice.
Grading: Methodology Grade IV; Results Grade III.

Baudoin, 1984 (ref. 70)
Technique under review: Conventional angiogram negatives digitised to give 25 overlapping fields for analysis, each comprising 256 × 256 pixels.
Population studied: 25 angiograms chosen randomly from diabetic patients participating in a prospective trial of the effect of antiplatelet agents in microangiopathy (DAMAD study). Type 1 and 2 represented, with at least background retinopathy and ≥ 5 MAs in the posterior pole of the eye.
Study aim: Automated detection of MAs from fluorescein angiograms.
Comparative method: 1. Manual counting of MAs from projected negatives by ophthalmologists and a trained technician, each angiogram assessed twice. 2. A second validation performed by the technician marking the digital image on screen directly with a light-pen, to overcome the influence of magnification on the manual and automated methods.
Results: When computer assessment was validated by the technician using method 2, the authors attribute the automated method with 70% sensitivity and a predictive value of 86%. The technique did not allow calculation of specificity.
Authors' conclusions: The authors conclude that the method of automated detection under review provides MA detection comparable to that achieved by a trained technician, both in accuracy and in performance time (30–40 minutes per angiogram).
Grading: Methodology Grade II; Results Grade II.

Automated assessment of maculopathy

Phillips, 1991 (refs 40, 66)
Technique under review: Indirect digitisation of 30° or 45° macular views from fluorescein angiogram negatives performed by a Panasonic WV-CD 20 CCD video camera and Data Translation DT-2861 frame grabber with an IBM-compatible computer.
Population studied: 10 angiograms analysed, of which 3 demonstrated diabetic retinopathy; of the remainder, 1 central serous retinopathy, 1 age-related macular degeneration, 1 branch retinal vein occlusion and 4 normal controls. Comparison made of the intensity of fluorescence in early and late images to give an indication of the degree of vessel leakage.
Study aim: Automated method of performing an assessment of macular leakage – if this is present, the calculation of the area of an angiogram showing persistence or increase in fluorescence levels is under evaluation as an indirect method of quantitative assessment.
Comparative method: No definitive gold standard available. Threshold gradient value calculated using normal angiograms to minimise false-positive results. Assessments made of repeatability and robustness after calibration against test angiograms. Repeatability – the same images digitised and assessed on 5 consecutive occasions. Robustness improved by analysing 3 early frames, counting only positive pixels appearing in 2 out of 3 images, against those from a single late image.
Results: High degree of repeatability and robustness for single and multiple areas of leakage >1/5 of disc area (coefficient of variation 6%) and for single areas <1/5 of disc area. The system performed less well with multiple small areas with total area <1/5 of disc area (coefficient of variation 27%).
Authors' conclusions: The authors propose that their method provides a simple means of quantifying macular leakage which is superior to manual assessment (although results of a direct comparison are not quoted). This has implications for both diagnosis and monitoring response to treatment. At this stage in development, potentially misleading lesions such as drusen need to be excluded by prior clinical examination.
Grading: Methodology Grade III; Results Grade III.

Assessment of perifoveal and macular microcirculation

Technique under review: 30° fluorescein angiograms centred on the macula obtained – digitised to 512 × 512 pixel resolution.
Manual identification of the FAZ performed using a cursor, and the area and longest diameter calculated. Vitreous fluorophotometry also performed in diabetics as an indirect assessment of dye leakage.
Population studied: Prospective study of 18 diabetic patients with diabetes duration of ≥ 5 years and retinopathy

Food choice, plate waste and nutrient intake of elementary- and middle-school students participating in the US National School Lunch Program.
Stephanie Smith, L. Cunningham-Sabo. Public Health Nutrition 2014; 17(6): 1255-63. DOI: 10.1017/S1368980013001894. Corpus ID: 5040177.
OBJECTIVE: To (i) evaluate food choices and consumption patterns of elementary- and middle-school students who participate in the National School Lunch Program (NSLP) and (ii) compare students' average nutrient intake from lunch with NSLP standards.
DESIGN: Plate waste from elementary- and middle-school students' lunch trays was measured in autumn 2010 using a previously validated digital photography method.
Percentage waste was estimated to the nearest 10 % for the entrée, canned fruit, fresh…
work_kg7tieuavvanroubyivx3ysule ---- Towards a more representative morphology: clinical and ethical considerations for including diverse populations in diagnostic genetic atlases

© American College of Medical Genetics and Genomics. Review.

An important gap exists in the screening toolkit used by physicians and other health-care professionals to help diagnose genetic syndromes in their patients via the observation of phenotypic characteristics. Visual diagnosis relies on textbooks of dysmorphology, which include images of individuals with classic phenotypes for a wide range of genetic diseases. These morphological atlases have been a standard diagnostic tool for clinical geneticists for decades and guide clinicians in their choice of molecular testing (refs 1–4). In the most widely used of these diagnostic atlases, the majority of the images are of individuals of northern European descent, reflecting the patient populations for whom the clinicians who developed these texts originally provided clinical care. However, because many of the genetic conditions profiled in these texts are prevalent in populations across the world, it is now clear that these texts do not sufficiently reflect global ancestral diversity. The lack of a variety of phenotypic images in available atlases potentially limits the utility of these atlases as diagnostic tools in globally diverse populations, causing geneticists difficulty in properly diagnosing conditions in individuals of different ancestral backgrounds who may present with variable morphological features. Even the relatively simple diagnosis of Down syndrome in diverse populations is not straightforward, as seen in Figure 1, which includes photographs of children with Down syndrome from understudied populations.
Towards a more representative morphology: clinical and ethical considerations for including diverse populations in diagnostic genetic atlases

Maya Koretzky,1 Vence L. Bonham,2 Benjamin E. Berkman,1,3 Paul Kruszka,4 Adebowale Adeyemo,5 Maximilian Muenke4 and Sara Chandros Hull1,3

An important gap exists in textbooks (or atlases) of dysmorphology used by health-care professionals to help diagnose genetic syndromes. The lack of varied phenotypic images in available atlases limits the utility of these atlases as diagnostic tools in globally diverse populations, causing geneticists difficulty in diagnosing conditions in individuals of different ancestral backgrounds who may present with variable morphological features. Proposals to address the underinclusion of images from diverse populations in existing atlases can take advantage of the Internet and digital photography to create new resources that take into account the broad global diversity of populations affected by genetic disease. Creating atlases that are more representative of the global population will expand resources available to care for diverse patients with these conditions, many of whom have been historically underserved by the medical system. However, such projects also raise ethical questions that are grounded in the complex intersection of imagery, medicine, history, and race and ethnicity. We consider here the benefits of producing such a resource while also considering ethical and practical concerns, and we offer recommendations for the ethical creation, structure, equitable use, and maintenance of a diverse morphological atlas for clinical diagnosis.

Genet Med advance online publication 10 March 2016. Submitted 22 July 2015; accepted 6 January 2016. doi:10.1038/gim.2016.7

Key Words: ethics; dysmorphology; genetics; global health; diverse populations

1Department of Bioethics, Clinical Center, National Institutes of Health, Bethesda, Maryland, USA; 2Social and Behavioral Research Branch, National Human Genome Research Institute, National Institutes of Health, Bethesda, Maryland, USA; 3Bioethics Core, National Human Genome Research Institute, National Institutes of Health, Bethesda, Maryland, USA; 4Medical Genetics Branch, National Human Genome Research Institute, National Institutes of Health, Bethesda, Maryland, USA; 5Center for Research on Genomics and Global Health, National Human Genome Research Institute, National Institutes of Health, Bethesda, Maryland, USA. Correspondence: Sara Chandros Hull (shull@mail.nih.gov)

Genetics in Medicine | Volume 18 | Number 11 | November 2016

The observer’s gestalt conclusion upon examining these photos may not be the diagnosis of Down syndrome, because the eye and nose differences that occur in various ethnicities mask the “textbook” features described in most medical texts. For example, a feature of Down syndrome is epicanthal folds at the inner portion of the eye; however, this is a normal finding in individuals of Asian descent. This difficulty with diagnosis is not unique to countries with diverse racial and ethnic populations; diagnosis can be challenging even in locations with relatively homogeneous populations if this majority is of non-European ancestral origin. Skilled local clinicians working in these areas may still struggle to identify genetic syndromes by phenotype, because the available training tools and classical phenotype images of particular disorders predominantly feature individuals with European ancestry.

Proposals to address the underinclusion of images of human malformation syndromes from diverse populations in existing morphological atlases, such as the one described by Muenke and colleagues in this issue of Genetics in Medicine,5 can take advantage of the ubiquity of the Internet and the ease of digital photography to create new resources that take into account the broad global diversity of populations affected by genetic disease and can be made widely available. Creating a genetic dysmorphology atlas that is more representative of the global population will help expand the resources available to care for diverse patients with these conditions, many of whom have been historically underserved in various ways by the medical system.

Birth defects are now a leading cause of childhood mortality and morbidity worldwide; however, medical geneticists are most commonly found in university medical centers in developed countries. Most developing countries do not have medical geneticists. For example, although Nigeria is Africa's most populous country, with more than 200 million people, and its largest economy, several of the authors of this article (A.A., P.K., M.M.) travel there regularly and are aware that there are no practicing clinical geneticists there; our collaborators in Nigeria are pediatric cardiologists. This is in contrast to the Washington, DC, area, where there are more than 20 medical (MD) geneticists.
From a clinical perspective, more diverse atlases would enable more accurate and earlier syndromic diagnosis of congenital malformations to be made across patients of a variety of ancestral origins, potentially leading to improved medical care for persons of non-European descent with these conditions.6 Including a wider selection of individuals in morphological atlases could also lay the groundwork for addressing other aspects of medical diagnosis and care, as well as genetic disease, by building relationships and research capacity in the international arena.7 For example, allowing international physicians to participate in the creation of the atlas by contributing images of their patients can facilitate international cooperation and establish networks of clinicians and researchers in underresourced areas. By enabling more accurate diagnosis of individuals, the atlas would also enable researchers to aggregate these data and glean a more accurate picture of the global prevalence of currently underdiagnosed genetic diseases8 (for instance, evidence is emerging that cystic fibrosis is vastly underdiagnosed in populations of non-European origin9). However, the project also raises ethical questions about the selection and portrayal of individuals in the atlas and who will have access to this database. We consider the benefits of producing such a resource and consider the ethical and practical concerns raised. We also offer recommendations for the ethical creation, structure, equitable use, and maintenance of a diverse morphological atlas for clinical diagnosis.

ETHICAL CONSIDERATIONS

The ethical and social concerns that are raised by the creation of a diverse morphological atlas are grounded in the complex intersection of imagery, medicine, history, and race and ethnicity.
These concerns can be sorted into two general categories: (i) historically rooted concerns about reifying racial and ethnic groups as discrete biological classifications and the misuse of racial and ethnic categories and (ii) contemporary considerations regarding access to the database and respecting patient autonomy and privacy in an Internet-based environment. There is a long and complex history of classifying people into groups to search for a biological basis for racial difference,10,11 with race persistently occupying a liminal space between social construct and biological utility.12 The medicalization of race has at times been used as a way to justify discriminatory practices outside of the medical sphere and as a way to challenge these same practices and push back against them.13 This complex relationship between medicine and race politics has been demonstrated by scholars in various contexts, from Lundy Braun's classic work on differing spirometry measures for different races to Nancy Pollock's scholarship on the history and anthropology of how cardiac disease in African Americans has been articulated by the medical establishment.14 (For one example regarding heart disease, see ref. 15.) In many ways, biomedicine continues to reify an interpretation of race as a biological quality that affects health and disease, rather than as an identity that is more indicative of social group and environmental influences than of an underlying physiology. Although the notion of “biological race” has been misused by medical professionals in many instances, it is also true that disease and health risks facing different individuals may be tracked by ancestry, social experiences, and environment—qualities for which race identification may often act as a proxy.16

Figure 1. Children with Down syndrome from Thailand, India, and Nigeria. Courtesy of Ekanem Ekure, S.J. Patil, Girisha K.M., Antonio Richieri-Costa, and Vorasuk Shotelersuk.
Scholars such as David Wasserman and Nancy Krieger, among others, have explored the benefits and pitfalls of using race categories in medicine for individual therapeutic purposes or as a way to talk about social groups and structural racism and their impact on health.17,18 This tension between the medical utility and the medical misuse of race categories is perhaps most salient in the field of genetics, a discipline in which disease risks are correlated with ancestry but that also has historical ties to nineteenth- and twentieth-century eugenics science.19 Scholars from a wide range of disciplines have explored the ethics, anthropology, and history of race and genetics. (For further reading, please see refs. 20–22.) The language used by geneticists may serve to confuse the connections that exist between race and genetics. Although genetic variation correlates with ancestral markers, race is a much more complex and largely self-identified concept that cannot be determined through biomedical testing.23,24 Although guidelines on the use of accurate terminology related to race categories in genetic research publications emphasize the importance of defining how these terms are being used in a given research project (e.g., as a proxy for ancestral origin or socioeconomic status), these guidelines are rarely followed in contemporary genetics publications.25 Medical geneticists have often been in the middle of race–dysmorphology debates. Indeed, they were instrumental in changing one of its more explicit expressions: referring to children with trisomy 21 as “mongols” or as having “mongoloid features” in medical literature as well as in the lay press.
This practice originated in the 1860s, when a physician noted physical similarities in the appearance of people with trisomy 21 and individuals previously described as belonging to the “Mongoloid” race. Nearly a century later, geneticists began to call for discontinuing the use of the term “mongoloid” and its variants, opting for the term “Down syndrome” instead. Although the term remains in use, geneticists and professional organizations have formally recognized that terminology that connects a genetic condition associated with cognitive impairment, physical features of that condition, and a country, such as Mongolia, is derogatory and should be discontinued.26 In addition to these important linguistic nuances, there is also a long history of the unjust use of morphological images of different racial “types” by the biomedical establishment. The populations that have been left out of morphological diagnostic atlases are also populations that have historically been underserved or exploited by the medical professions. In many cases, racialized images played a large role in pseudoscientific research to bolster claims of biological differences between races and to argue for the premise of inferiority of certain groups.27 For example, throughout the nineteenth century, widespread eugenicist atlases and, later, Nazi photographic databases were used for cataloguing and distinguishing between the races, with the ultimate goal of eradicating populations believed to be inferior.28 Images of disabled individuals also figured prominently in medically sanctioned eugenics campaigns well into the twentieth century in both America and Western Europe.29 Although the goals of this project are wholly different from these historical analogues, the very practice of organizing racially labeled images in a medical textbook merits careful scrutiny because of this long and problematic history.
We also considered concerns that have been expressed by contemporary disability rights advocates that by reducing an individual's facial features or distinctive physiological traits to a set of symptoms for determining a genetic diagnosis, dysmorphology atlases might objectify the individual being photographed and could reinforce stereotypes and stigma associated with these conditions. (See, for example, ref. 29.) Although these are important issues to consider, we decided that adequate treatment of disability perspectives is beyond the scope of this article. In addition, there are several important practical decisions to work through regarding the database. For example, who will maintain the database and how will access be regulated? An open-access database would be consistent with the justice-oriented goals of the project to expand the availability of such diagnostic tools around the world. However, the easy accessibility of race-labeled images and data could pose risks such as stigmatizing a particular individual, community, or population.30 The database could potentially be accessed to use the images or other material wrongfully and with malicious intent. This tension between the ideal of open-access medical knowledge and the practical realities of patient privacy is not speculative; similar concerns have begun to emerge for other online platforms for uploading, distributing, and viewing ostensibly anonymized patient images by medical professionals.31 However, we feel that the goals of this project to assist medical providers in making earlier and more accurate diagnoses of dysmorphic syndromes (both environmentally and genetically caused) will provide a benefit that outweighs such concerns about widespread access. In addition to patient privacy, there are other important questions raised by this database at the level of individual participants.
For example, it may be challenging to obtain the informed consent of individual participants to generate and use their images in the atlas for several reasons. Participant families who come from especially remote or underresourced areas may have difficulty understanding the scope of who will be able to access the images. Craniofacial defects require the inclusion of facial pictures for a diagnosis, which increases the risk of individuals being identified. Because many of the diseases that will be profiled manifest in childhood and/or are characterized by cognitive impairment or intellectual disability, many participants will be unable to formally consent on their own behalf, which raises questions about surrogate permission and whether it is sufficient for use of their images. Furthermore, there are questions about whether it will (and should) be possible for participants to remove their images from the database if they change their mind about participation in the project. These questions raise important concerns about autonomy and privacy for individuals who will be photographed and included in the database.

RECOMMENDATIONS

Taking these historical concerns and ethical questions seriously will require careful design and implementation decisions in the creation of this kind of resource (Table 1). Especially important when considering the historical context of morphological atlases are questions of how ancestry will be determined for the purposes of this project, how it will be noted in the atlas, and how individuals of different ancestral backgrounds will be selected to participate.
The goal is to strike a balance between providing useful data to local clinicians (for example, clinicians working in only one country may want to be able to narrow the atlas to only show patient images from that country) and avoiding the reification of racial or ethnic categories. Our recommendations are informed by engagement with an international group of advisors who are clinicians from non-Western countries and regions, including Uganda, Nigeria, South Africa, Rwanda, Mali, Malaysia, India, Thailand, Japan, China, South America, and the Middle East. Both the necessity of new atlases and the structure of the project are grounded in challenges and complexities that were identified by members of this diverse group who are providing oversight of the website described by Muenke et al.5 We believe that the organization of newer atlases can be an important point of departure from the problematic historical examples discussed. Atlases should be structured to allow sorting of images by disease or by the current country or region of residence of the photographed patient so that clinicians could search for all individuals with a particular condition (e.g., any patient with Williams syndrome) and/or by nation or region (e.g., all photographs of individuals living in Sub-Saharan Africa). Patients would then be given the opportunity to identify subjective ethnic, racial, and/or tribal identities that would appear alongside the picture of the individual in addition to data about the national origins of each of the subject's four grandparents, but these would not be searchable variables in the database. For example, the photograph of a Southeast Asian individual with Down syndrome will appear on the screen if “Down syndrome” or “Southeast Asia” is searched.
If the subject self-identified as Khmer and indicated that all four grandparents were born in Thailand, then this information will appear with the image of the subject as additional descriptive information, but it will not be possible for clinicians to search the database by the term “Asian,” “Khmer,” or “Thai” ancestral origin. Our work draws on the model of language for genetic variation and ancestry proposed previously.32 By organizing the searchable features of the atlas by disease or current nationality, and by associating self-identified ethnic/racial/tribal identity information with the images in a non-searchable manner, the clinical utility of the atlas as a diagnostic tool across a wide range of phenotypes can be realized while limiting the possible unintended uses of such data for questions related to eugenic, scientifically misplaced, or other flawed research. The standard method of tracing ancestral origin via self-report of the nationalities of a participant's four grandparents seems to be an appropriate method for morphological atlases as well. We acknowledge that estimating ancestral origin via one's four grandparents may be less informative in some populations with significant diversity of continental ancestry, for example, within the United States.33 However, we emphasize that the goals of the new atlas are to be inclusive of populations from around the world, most of whom will tend not to have the same degree of heterogeneity of continental ancestry. Another important way that new atlases can be distinguished from historical projects involving the classification of individuals by race is to include individuals from a geographically diverse population while avoiding the use of ethnicity-related criteria and language when selecting participants and categorizing them for the atlas,25 by selecting participants from the pool of individuals who arrive at participating medical centers around the world.
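As a minimal sketch of the searchable/display-only split described here (the record layout, field names, and exact-match rule are our own illustrative assumptions, not the atlas's actual implementation), queries can be matched only against condition and region fields, while self-identified identity travels with the record as display-only metadata:

```python
# Illustrative sketch only: hypothetical record layout for an atlas entry.
# "searchable" holds the condition and current region; "display_only" holds
# self-identified identity and grandparents' birthplaces, shown alongside the
# image but never indexed for search.

ATLAS = [
    {
        "image_id": "img-001",
        "searchable": {"condition": "Down syndrome", "region": "Southeast Asia"},
        "display_only": {
            "self_identified": "Khmer",
            "grandparents_born_in": ["Thailand", "Thailand", "Thailand", "Thailand"],
        },
    },
]

def search(term):
    """Return records whose searchable fields match; identity terms never match."""
    term = term.strip().lower()
    return [
        record
        for record in ATLAS
        if any(term == value.lower() for value in record["searchable"].values())
    ]

# Condition and region queries find the record; identity terms do not.
assert len(search("Down syndrome")) == 1
assert len(search("Southeast Asia")) == 1
assert search("Khmer") == [] and search("Thai") == []
```

A production system would use a search index rather than a linear scan, but the design point is the same: identity fields are stored for display with the image, never indexed for querying.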
The goal of the atlas is to reflect the diversity that is seen in clinical practice around the world rather than to parse the origin of patients by self-reported “race,” continental ancestry, or ethnic origin. Following diagnosis of a genetic disease, patients will be approached about participating in the atlas project first, and asked for their ancestral origin data later in the process. No quota of individuals of different ancestral backgrounds to be included in the atlas will be set, nor will attempts be made to find “pure” examples of different ancestral groups. Treating ancestry as data to be collected after participants are selected, asked for their consent, and photographed will help to guard against the selection of subjects who are typical (“pure” or “ideal”) types of any given group, thus distancing this project from prior eugenics-oriented image collecting practices.

Table 1. Summary of recommendations for the creation of new diverse morphological databases

Appropriate use of ancestral categories/avoiding misuse of historic racial/ethnic constructs
1. Organize the atlas by disease rather than ancestral origin.
2. Limit potential search parameters involving ancestral origin.
3. Approach potential participants about being photographed for the atlas before asking them about their ancestral origin.
4. Avoid “race” or “ethnicity” terminology and standardized lists of options when interviewing potential participants and reporting their responses in the published atlas. Instead, use terms and phrases such as “ancestry,” and open-ended questions about ethnic and cultural identity such as “where were your grandparents born?”

Database access and maintenance
5. Seek multinational input in organizing and maintaining the atlas.
6. Involve locals in designing the consent forms for each regional site.

Patient autonomy and privacy
7. Construct an interface for the database that describes its purpose, asks users to certify that their use of the resource will be consistent with its diagnostic intent, and does not allow for the downloading of participant images.

Additionally, we recommend that participants should not be prompted to provide data regarding their ancestral origins from a pre-existing list of possible race or ethnic categories, such as the list suggested by the US Office of Management and Budget (https://www.whitehouse.gov/omb/fedreg_race-ethnicity). Moreover, these categories do not reflect how individuals outside the United States describe themselves. The terms contained in such lists are generally insufficient for capturing the genetic diversity of human populations and do not accurately describe the goal of an atlas that aims to represent individuals with a wide variety of ancestral backgrounds. In making this recommendation, we are following the guidelines set by the Race, Ethnicity, and Genetics Working Group of the American Journal of Human Genetics.12 At best, the terms included in such lists are proxies for constructs that can be more accurately captured through other data, such as the continental origins of four grandparents. In addition, as described, they carry multiple connotations (social, biological, cultural), many of which are inaccurate or simply not relevant to this endeavor.

The question of what person, group, or organization will collect and maintain the images is an important one that is tied to concerns about the potential exploitation of marginalized groups.
Although the United States has resources and expertise to contribute to this initiative, a purely US-led approach would undermine the goal of having this be a more diverse and accessible resource with buy-in from the various communities involved. An advisory board that includes representatives from the continental regions from which clinicians will solicit images to oversee the project and the database will provide a more appropriate mechanism for defining the agenda, goals, and implementation of this international project.34 Involving local physicians and community members in the design of the database will help to craft the resource in a way that maximizes the benefits and minimizes the harms to the diverse communities in which it will be used. In an effort to prevent local power relationships from affecting the collection of the data, all collaborators will be briefed on how to properly ascertain ancestry information, focusing on medical conditions and ancestral geographic origins. A widely accessible morphological database that is freely available on the Internet is important for both justice and transparency reasons. However, it also raises the possibility of undesirable uses and unintended consequences of an open-access morphological database that are causes for concern. Requiring users to access the database via a screen that describes the intended uses of the database and to electronically sign a form agreeing to terms of use should help to mitigate this concern, although it does not provide a mechanism for enforcement of these terms. Unintended uses can also be minimized by digitally protecting images so that they cannot be downloaded from the website and, as mentioned, so that sorting images by ancestral origin alone is prevented. The recommended multinational oversight structure should be consulted to design an appropriate consent process for individuals who participate in the project.
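The terms-of-use screen described above can be sketched roughly as a session gate (our own illustrative assumptions; the class and method names are hypothetical, and a real deployment would also need server-side image protection and logging, since client-side measures alone cannot prevent copying):

```python
# Illustrative sketch: users must certify diagnostic intent before viewing,
# and images are served as view-only renderings rather than downloadable files.

class AccessDenied(Exception):
    """Raised when a user has not yet agreed to the terms of use."""

class AtlasSession:
    def __init__(self):
        self.accepted_terms = False

    def accept_terms(self):
        # A real system would record this certification alongside user identity.
        self.accepted_terms = True

    def view_image(self, image_id):
        if not self.accepted_terms:
            raise AccessDenied("certify diagnostic use before viewing images")
        # Serve a watermarked, view-only rendering; never the raw file.
        return f"view-only rendering of {image_id}"
```

As the article notes, such a gate documents intended use but does not by itself enforce it; it is one layer among the oversight and consent mechanisms discussed here.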
Practically speaking, the consent process needs to address the specific informational needs of each community involved (for example, local literacy levels and educational attainment).35 A well-designed, transparent consent process that is vetted by local leadership and local regulatory experts is more likely to be perceived as trustworthy by individual participants and their communities. In addition to their involvement in writing consent forms, local collaborators should also be directly involved in identifying appropriate participants within their locales.36 There are additional considerations regarding the informed consent process that require further exploration. The process will need to inform potential participants about the privacy risks associated with publishing images and other data on the broad and international scale that is proposed for this project, and it will need to sufficiently address any relevant privacy regulations that govern the sharing of medical images and information via the Internet. Given the young age of many of the potential subjects in the atlas, an appropriate mechanism for parental authorization will be an important component of this project as well. Similarly, a surrogate mechanism to authorize the participation of adults with limited cognitive capacity, as is characteristic of many of the conditions that will be included in the atlas, will be necessary. Decisions also need to be made about whether participants will be offered the choice to withdraw their images from the database at a later date if they so choose, or whether their ability to withdraw will be limited once the images have been published in this resource.

CONCLUSION

The creation of new morphological atlases that take into account the broad diversity of the populations affected by genetic diseases is an important step in extending the benefits of medical genetics to the global populations who are currently underserved.
Knowing that a child has a particular syndromic diagnosis can be lifesaving by providing important information about other significant organ systems that are often affected. In addition, an early and accurate diagnosis can enable physicians to perform appropriate preventative care and give affected families an idea of what lies ahead. At the same time, there are a number of ethical considerations that should be addressed in any project that relies on the publication of images and genetic information from persons with dysmorphic features who come from a variety of ancestral backgrounds. Our goals in this paper were both constructive and preventive in nature, providing recommendations to guide the creation of a maximally beneficial resource while also mitigating potential problems with the project before they arise. Ultimately, we believe that the ethical concerns that have been identified, although serious, are outweighed by the potential benefits to populations who have not been included in such resources to date, and that appropriate steps can be taken to mitigate these ethical concerns.

ACKNOWLEDGMENTS

This work was funded in part by the intramural research program of the National Human Genome Research Institute. The authors acknowledge the helpful feedback of Donald Hadley, Manjit Kaur, and participants at the Mid-Atlantic Bioethics Fellows' Colloquium for earlier versions of this work. Written parental consent was obtained to publish the photographs of children with Down syndrome that are included in Figure 1. The views expressed are the authors' own. They do not represent the position or policy of the National Institutes of Health, the US Public Health Service, or the Department of Health and Human Services.
DISCLOSURE

The authors declare no conflict of interest.

REFERENCES
1. Clarren SK. Book review. Atlas of Clinical Syndromes: A Visual Aid to Diagnosis. N Engl J Med 1992;327:739–740.
2. Reardon W. The Bedside Dysmorphologist. Oxford University Press: New York, 2007.
3. Wiedemann HR, Kunze J, Dibbern H. Atlas of Clinical Syndromes: A Visual Aid to Diagnosis. 2nd edn. Mosby-Year Book: St. Louis, MO, 1992.
4. Baraitser M, Winter R (eds.). Colour Atlas of Congenital Malformation Syndromes. Mosby-Wolfe: London, 1996.
5. Muenke M, Adeyemo A, Kruszka P. An electronic atlas of human malformation syndromes in diverse populations. Genet Med; e-pub ahead of print 3 March 2016.
6. Tekendo-Ngongang C, Sophie D, Seraphin N, Stefania G, Sloan-Béna F, Wonkam A. Challenges in clinical diagnosis of Williams-Beuren syndrome in sub-Saharan Africans: case reports from Cameroon. Mol Syndromol 2014;5:287–292.
7. Zusevics KL. Public health genomics: a new space for a dialogue on racism through Community Based Participatory Research. Public Health 2013;127:981–983.
8. Lim RM, Silver AJ, Silver MJ, et al. Targeted mutation screening panels expose systematic population bias in detection of cystic fibrosis risk. Genet Med 2016;18:174–179.
9. Bobadilla JL, Macek M, Fine JP, Farrell PM. Cystic fibrosis: a worldwide analysis of CFTR mutations—correlation with incidence data and application to screening. Hum Mutat 2002;19:575–606.
10. Yudell M. Race Unmasked: Biology and Race in the 20th Century. Columbia University Press: New York, 2014.
11. Marks J. Human Biodiversity: Genes, Race and History. Aldine de Gruyter: New York, 1995.
12. Race, Ethnicity, and Genetics Working Group. The use of racial, ethnic, and ancestral categories in human genetics research. Am J Hum Genet 2005;77:519–532.
13. Braun L, Fausto-Sterling A, Fullwiley D, et al. Racial categories in medical practice: how useful are they? PLoS Med 2007;4:e271.
14. Braun L. Breathing Race into the Machine. University of Minnesota Press: Minneapolis, MN, 2014.
15. Pollock A. Medicating Race: Heart Disease and Durable Preoccupations with Difference. Duke University Press: Durham, NC, 2012.
16. Bamshad M. Genetic influences on health: does race matter? JAMA 2005;294:937–946.
17. Wasserman D. The justifiability of racial classification and generalizations in contemporary clinical and research practice. Law Prob Risk 2010;3–4:215–226.
18. Krieger N. Stormy weather: race, gene expression, and the science of health disparities. Am J Public Health 2005;95:2155–2160.
19. Comfort N. The Science of Human Perfection: How Genes Became the Heart of American Medicine. Yale University Press: New Haven, CT, 2014.
20. Bliss C. Race Decoded: The Genomic Fight for Social Justice. Stanford University Press: Stanford, CA, 2012.
21. Lee SS, Koenig BA, Richardson SS (eds.). Revisiting Race in a Genomic Age. Rutgers University Press: New Brunswick, NJ, 2008.
22. Reardon J. Race to the Finish: Identity and Governance in an Age of Genomics. Princeton University Press: Princeton, NJ, 2009.
23. Kahn J. Genes, race, and population: avoiding a collision of categories. Am J Public Health 2006;96:1965–1970.
24. Duster T. Race and reification in science. Science 2005;307:1050–1051.
25. Sankar P, Cho MK, Monahan K, Nowak K. Reporting race and ethnicity in genetics research: do journal recommendations or resources matter? Sci Eng Ethics 2015;21:1353–1366.
26. Orr G. Why are the words 'mongol,' 'mongoloid,' and 'mongy' still bandied about as insults? Independent 23 November 2014. http://www.independent.co.uk/arts-entertainment/tv/features/why-are-the-words-mongol-mongoloid-and-mongy-still-bandied-about-as-insults-9878557.html. Accessed 30 June 2015.
27. Blumenbach JF. The Anthropological Treatises of Johann Friedrich Blumenbach. Longman, Roberts & Green: London, 1865.
28. United States Holocaust Memorial Museum. Deadly Medicine: Creating the Master Race. United States Holocaust Memorial Museum: Washington, DC, 2004.
29. Wilson JC. Making disability visible: how disability studies might transform the medical and science writing classroom. Tech Comm Quart 2000;9:149–161.
30. Parr H. New body-geographies: the embodied spaces of health and medical information on the internet. Environ Plann D 2002;20:73–95.
31. Hughes V. Is this doctors app a digital classroom—or medical porn? BuzzFeed News. http://www.buzzfeed.com/virginiahughes/is-this-doctors-app-a-digital-classroom-or medical-form/. Accessed 22 May 2015.
32. Sankar P, Cho MK. Genetics. Toward a new vocabulary of human genetic variation. Science 2002;298:1337–1338.
33. Yaeger R, Avila-Bront A, Abdul K, et al. Comparing genetic ancestry and self-described race in African Americans born in the United States and in Africa. Cancer Epidemiol Biomarkers Prev 2008;17:1329–1338.
34. Lantz PM, Viruell-Fuentes E, Israel BA, Softley D, Guzman R. Can communities and academia work together on public health research? Evaluation results from a community-based participatory research partnership in Detroit. J Urban Health 2001;78:495–507.
35. Buseh AG, Stevens PE, Millon-Underwood S, Townsend L, Kelber ST. Community leaders' perspectives on engaging African Americans in biobanks and other human genetics initiatives. J Community Genet 2013;4:483–494.
36. Polanco FR, Dominguez DC, Grady C, et al. Conducting HIV research in racial and ethnic minority communities: building a successful interdisciplinary research team. J Assoc Nurses AIDS Care 2011;22:388–396.
work_kgksxca36febdanyyj3635nfce ----

A review of the use of information and communication technologies for dietary assessment

Joy Ngo 1 *, Anouk Engelen 1, Marja Molag 2, Joni Roesle 1, Purificación García-Segovia 3 and Lluís Serra-Majem 1,4

1 Community Nutrition Research Centre of the Nutrition Research Foundation, University of Barcelona Science Park, Baldiri Reixac 4, 08028 Barcelona, Spain
2 Division of Human Nutrition, Wageningen University, PO Box 8129, 6700 EV Wageningen, The Netherlands
3 UPV Science Park, Building 8E, Esc.
F, Piso 0 Dpcho 02, Polytechnical University of Valencia, Camino de Vera s/n, 46 022 Valencia, Spain
4 Department of Clinical Sciences, University of Las Palmas de Gran Canaria, PO Box 550, 35080 Las Palmas de Gran Canaria, Spain

(Received 4 February 2009 – Revised 6 May 2009 – Accepted 1 June 2009)

Presently used dietary-assessment methods often present difficulties for researchers and respondents, and misreporting errors are common. Methods using information and communication technologies (ICT) may improve quality and accuracy. The present paper presents a systematic literature review describing studies applying ICT to dietary assessment. Eligible papers published between January 1995 and February 2008 were classified into four assessment categories: computerised assessment; personal digital assistants (PDA); digital photography; smart cards. Computerised assessments comprise frequency questionnaires, 24 h recalls (24HR) and diet history assessments. Self-administered computerised assessments, which can include audio support, may reduce literacy problems, can be translated and are useful for younger age groups, but less so for those unfamiliar with computers. Self-administered 24HR utilising computers yielded results comparable to standard methods, but needed supervision if used in children. Computer-assisted interviewer-administered recall results were similar to conventional recalls, and reduced inter-interviewer variability. PDA showed some advantages but did not reduce underreporting. Mobile phone meal photos did not improve PDA accuracy. Digital photography for assessing individual food intake in dining facilities was accurate for adults and children, although validity was slightly higher with direct visual observation. Smart cards in dining facilities were useful for measuring food choice but not total dietary intake.
In conclusion, computerised assessments and PDA are promising, and could improve dietary assessment quality in some vulnerable groups and decrease researcher workload. Both still need comprehensive evaluation for micronutrient intake assessment. Further work is necessary for improving ICT tools in established and new methods and for their rigorous evaluation.

Diet assessment: Methods: Information and communication technologies: Review

Owing to the complexity of nutrition (and many present-day health) behaviours, it is essential to assess dietary intake adequately, thus providing reliable data to increase the effectiveness of interventions and policies both at the individual and population level. The classic methods to measure food and nutrient intake (food records, 24 h recalls (24HR), dietary history and FFQ) have instrument-specific advantages and disadvantages. Disadvantages include, among others, heavy respondent burden requiring subjects to perform difficult cognitive tasks and to be literate. In addition, researchers need appropriate data on food composition (1–3). A recent review in this supplement has shown that the major factors influencing misreporting (under- and overreporting) in recall methods are due to the reliance on respondents' memory and ability to estimate portion sizes (4). This may result in the unintentional omission or addition of foods.

Information and communication technology

Given these recognised limitations, research has focused on refining assessment methods to more accurately evaluate food intake.
On behalf of EURRECA's RA 1.1 'Intake Methods' members: Serra-Majem L (Coordinator), Cavelaars A, Dhonukshe-Rutten R, Doreste JL, Frost-Andersen L, García-Álvarez A, Glibetic M, Gurinovic M, De Groot L, Henríquez-Sánchez P, Naska A, Ngo J, Novakovic R, Ortiz-Andrellucchi A, Øverby NC, Pijls L, Ranic M, Ribas-Barba L, Ristic-Medic D, Román-Viñas B, Ruprich J, Saavedra-Santana P, Sánchez-Villegas A, Tabacchi G, Tepsic J, Trichopoulou A, van 't Veer P, Vucic V, Wijnhoven TMA.

* Corresponding author: Joy Ngo, fax +34 93 403 45 43, email nutricom@pcb.ub.cat

Abbreviations: ICT, information and communication technologies; IMM, interactive multimedia; MeSH, medical subject headings; PDA, personal digital assistants; YANA-C, young adolescents' nutrition assessment on computer; DH, diet history; EBIS, diet history, consulting and information system; FIRSSt, Food Intake Recording Software System.

British Journal of Nutrition (2009), 101, Suppl. 2, S102–S112 doi:10.1017/S0007114509990638 © The Authors 2009

More recently, the possibilities of developing new applications of information and communication technologies (ICT) to improve dietary as well as physical activity assessment are being explored. The application of ICT in dietary and physical activity assessment offers several potential advantages. Innovative methodological approaches can improve data quality, consistency and completeness.
Furthermore, new technologies hold considerable promise for reducing costs, as the presence of trained interviewers is not required for a complete interview. Moreover, computerised assessment can save considerable time in data coding, as data are immediately stored. Because of lower respondent burden, it may be possible to collect long-term data on both food intake and physical activity. New technologies may also help to simplify the self-monitoring process, which increases compliance and validity of self-reported food and energy intake (5). Subjects' compliance with recording their food intake is often a problem, frequently due to lack of motivation or difficulties in remembering to register intake. This is especially problematic when subjects are required to keep records for longer periods of time, when the novelty wears off or when the recording process is difficult and time consuming (6). ICT may enhance intake assessment by simplifying the process and making it less time consuming to monitor food intake, as well as by increasing the subject's motivation to complete the task. As such, willingness to record intake may increase and therefore more reliable data are obtained. The purpose of this literature review was to describe and evaluate the applications of ICT in dietary intake assessment. The aim was to explore a broad range of studies that assessed the use of new as well as renewed technologies to obtain dietary data. The term 'renewed methods' is applied to existing instruments that have incorporated ICT. New methods consist of those that are distinct from the classic methods and which have also focused on the use of ICT to assess dietary intake.

Methods and materials

A systematic literature search was carried out in Medline to identify articles published between January 1995 and February 2008.
The structured strategy included the following keywords and medical subject headings (MeSH) terms applied as follows: ('food' (MeSH) OR diet OR nutrient OR food consumption OR 'food habits' (MeSH)) AND ('nutrition assessment' (MeSH) OR assessment OR methods OR monitoring OR methodology OR analysis OR evaluation) AND (ICT OR technology OR personal digital assistants (PDA) OR computer OR assisted OR internet OR 'information science' (MeSH) OR 'radio waves' (MeSH) OR radio frequency OR photos OR digital). Titles and abstracts were evaluated to select articles evaluating the use of ICT in new or renewed dietary-assessment methods to assess intake, including studies that applied technology as part of an intervention. The following exclusion criteria were applied for data selection: ICT not forming part of dietary intake measures, ICT in diet assessment tools that only measured specific food items, ICT in nutrition education tools, studies relating to GM food, food technology or those evaluating technologies applied to hygiene or food safety. A manual review of reference lists in the selected articles was conducted by the second author to identify additional articles for possible inclusion. Papers were classified according to ICT method, and information on administration, user evaluation and validation was extracted.

Results

The Medline search on ICT and dietary intake assessment yielded 3992 publications. After applying exclusion criteria, fourteen articles were selected. Examination of the reference lists of these articles yielded two additional articles. In total, sixteen articles were included and classified into four assessment categories: computerised assessment; PDA; digital photography; smart cards. Nine studies evaluated the use of computer-assisted dietary assessment (including interviewer- and self-administered instruments), four reported on PDA, two addressed the use of digital photography and one evaluated the use of smart cards.
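For illustration, the boolean search strategy above can be assembled programmatically. This is a hypothetical sketch, not tooling described by the authors: the three term groups are copied verbatim from the published strategy, while the function and variable names are our own.

```python
# Hypothetical sketch: building the three-group boolean Medline query
# described in the Methods. Term lists are copied from the published
# strategy; the assembly code itself is illustrative only.

food_terms = ["'food' (MeSH)", "diet", "nutrient", "food consumption",
              "'food habits' (MeSH)"]
assessment_terms = ["'nutrition assessment' (MeSH)", "assessment", "methods",
                    "monitoring", "methodology", "analysis", "evaluation"]
ict_terms = ["ICT", "technology", "personal digital assistants (PDA)",
             "computer", "assisted", "internet", "'information science' (MeSH)",
             "'radio waves' (MeSH)", "radio frequency", "photos", "digital"]

def or_group(terms):
    # Join one keyword group into a parenthesised OR clause.
    return "(" + " OR ".join(terms) + ")"

# The three OR groups are combined with AND, as in the Methods section.
query = " AND ".join(or_group(g) for g in (food_terms, assessment_terms, ict_terms))
print(query)
```

Keeping each group as a plain list makes the strategy easy to rerun or extend when the search is updated for later publication windows.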
The study characteristics and results are summarised in tables in the Appendix and further described later.

Renewed classic measurement methods

Computerised assessment

ICT was applied in FFQ, 24HR and diet histories. Out of nine papers, three studies utilised computer-assisted self-administered quantitative FFQ, including one that incorporated a reduced FFQ, and three others applied self-administered 24HR. The remaining studies used computer-assisted interview-administered diet histories (n 2) and 24HR (n 1).

FFQ. An audio computer-assisted self-interviewing questionnaire that recorded data by using a touch screen was developed to collect diet and lifestyle data from the previous 12 months in a population of American Indians and Alaskan Natives (7,8). Recorded audio was utilised, which simultaneously read the text (questions and answers) shown on the screen, and the questionnaires could be administered in English, Navajo and Yupik. Out of 604 participants, 97·2 % reported that the audio computer-assisted self-interviewing questionnaire was easy to use and 96 % found it to be enjoyable. Although 86·2 % would use this tool again, 62 % said that more directions were needed and 10·6 % had difficulty using it, particularly subjects with a lower educational level, infrequent computer use and older age. The diet history analysis showed 18 % of subjects with high energy intake (>33 472 kJ (8000 kcal) or >27 196 kJ (6500 kcal) for men and women, respectively) (8). Validity and reliability of this audio computer-based questionnaire are currently being evaluated. Another application of a FFQ in a reduced form was seen in a computer-tailored fat intake reduction intervention (9) in 220 adults (20–60 years) that utilised a fat intake FFQ as well as questions about demographics and psychosocial determinants (level of motivation).
Most subjects agreed that the questionnaire was comprehensible and easy to complete, according to a five-point Likert scale (3·96 (SD 0·57)). Participants >40 years and those with higher motivation levels showed greater acceptability of the computerised questionnaire, but no associations were seen for sex, education level or computer literacy.

24 h Recalls

Self-administered. The young adolescents' nutrition assessment on computer (YANA-C) is a computerised 24 h dietary recall (10) developed for the self-monitoring of food intake by children 11 years and older. Participants were required to record six eating occasions (breakfast, morning snack, lunch, afternoon snack, dinner and evening snack) using a 400-item food list organised into eighteen food groups. Quantification of intake consisted of selecting from photographs of standard portions for eighty-one items that changed in real time in response to a 'more' or 'less' button feature. Students positively evaluated YANA-C in an opinion survey. Validation against a 1 d estimated food record and a single 24HR interview showed higher estimates for energy and nutrient intakes with YANA-C compared with food records, but no significant differences with the 24HR. Spearman correlation coefficients between YANA-C and the food record were on average 0·62, and between YANA-C and the interview 0·67.
An earlier study also conducted in schoolchildren evaluated a self-administered multiple-pass computerised 24HR using the Food Intake Recording Software System (FIRSSt) (11). An evaluation questionnaire completed by the study participants yielded a positive review. Validity was tested by comparing the food intake data of 138 fourth-grade children obtained by FIRSSt with data from an observed school lunch and a single 24HR conducted by a dietitian. The food intake data obtained by the different dietary-assessment methods were compared item by item for matches, intrusions and omissions. Results demonstrated that FIRSSt was slightly less accurate than a dietitian-conducted 24HR. Issues identified included that some children reported a large number of consumed foods and beverages, probably caused by them pressing all buttons in attempts to try out the program's options. Even though this was corrected for and the number of errors decreased, some uncertainty still remained as to which foods were actually consumed. In adults, an interactive multimedia (IMM) dietary recall was tested against an interviewer-administered recall in eighty low-income participants aged 18–65 years (91 % were female), both recalls being realised on the same day (12). The IMM recall included a touch screen and English and Spanish audio files. Breakfast was separated from the other meals and assessed in another IMM section as a separate recall, which caused confusion for some subjects. Nonetheless, 53 % of the participants preferred the IMM recall to the interviewer-administered recall and a paper and pencil method. Correlation results excluding the breakfast IMM showed a mean of 0·6 between the IMM and the interviewer-administered recall. Noteworthy exceptions were seen for folate (0·29) and alcohol (0·99). In the computerised dietary recall, when four portion size ranges were substituted with standardised portions, the correlations decreased.

Interviewer-administered.
The European Prospective Investigation into Cancer and Nutrition (EPIC), a cohort study on diet and cancer in ten countries, applied a two-step computer-assisted 24 h dietary recall (EPIC-SOFT) in a calibration study to decrease differential measurement errors. To evaluate the standardisation of the 24 h recall, energy intake data from 32 063 participants were collected by seventy female interviewers in eight countries (13). For male subjects, no significant differences were found between interviewers in five out of seven countries, and for women the results showed no significant differences in four out of eight countries. The difference in log-transformed mean energy intake between centres in the same country was in general not significant. Moreover, the percentage of interviewers with an obtained mean energy intake within ±10 % of the country mean energy intake was 98 % for men and 94 % for women.

Dietary history. One study compared two computer-assisted interviewer-administered diet history methods (DH and EBIS) against weighed food intake (14). The two diet history methods were similar in structure, but with EBIS the interviewer had more liberty to ask questions. Validation was conducted via 8 d of weighed food data in twenty hospitalised patients with an average age of 65 years and without special diets, severe diseases or mental confusion. Greater variation was seen for the DH method (mean daily nutrient intake range from −34 % to +20 % with mean SD = 48·1 v. a range from −35 % to +15 % with mean SD = 28·1 for EBIS). The mean of non-adjusted Pearson correlation coefficients was 0·20 for DH and 0·30 for EBIS. Nutrient intake was underestimated by both computerised tools compared with weighed intake data, which may have reflected the context of recalling intake of hospital rather than home-prepared and portioned foods. Estimating portion sizes was reported to be difficult for the patients.
Another interviewer-administered computerised diet history was applied in the Amsterdam Growth and Health Longitudinal Study (AGAHLS), evaluating the development of health, fitness and lifestyle of adolescents in a general population in The Netherlands (15). Participants had already completed seven classic diet history interviews from mean ages of 13 to 32 years. At the mean age of 36 years, a computer-assisted diet history interview was used to measure food intake. In both methods, the previous 4 weeks were used as the reference period. To determine the difference in interviewer bias between the two dietary assessment tools, the inter-interviewer variability was assessed for both methods: the classic diet history interview at the mean age of 32 years (n 436) and the computer-assisted interview at the mean age of 36 years (n 352). The results showed that the computerised tool decreased inter-interviewer variability. For macronutrients, energy, calcium, iron and alcohol, no significant differences were found in the data obtained with the computer-assisted interview (ANOVA range: 0·012–3·829).

Personal digital assistants

A PDA is a handheld computer that can be used for various purposes. This technology has been applied for data collection in medical settings for over 15 years (16). A PDA with a specifically designed dietary software program can be used to register and self-monitor dietary intake. Subjects are required to record their food intake immediately after consumption by scrolling through a list of foods or by selecting a food group and then a specific food item. After food item selection, portion sizes are entered. To assess the accuracy of a PDA-based food record, Beasley et al. (17) compared this method with a 24HR and an observed lunch in thirty-nine adults of mixed ethnic backgrounds. No significant differences were found in measuring energy and macronutrient intake. When comparing the
data obtained by a PDA with those obtained by the other dietary-assessment methods, Pearson correlations for the 24HR ranged from 0·5 to 0·8 and those for the observed lunch from 0·4 to 0·8. Incorrect estimation of portion sizes (49 %) constituted the main source of error in the measurement of food intake. Other sources of error were reporting incorrect foods (25 %), omitting foods (15 %), reporting similar but not identical foods (9 %) and nutrient database differences (2 %). In another study, the use of a PDA to monitor food intake did not improve the validity of self-reported energy intake (18). Energy intakes from 7 d electronic food records were collected over 1 month in sixty-one overweight white adults. Goldberg cut-off values (1·06 and 2·30 for females, 1·05 and 2·28 for males) were used to classify individual subjects as low-energy, valid or overreporters. The premise that self-monitoring via PDA could increase compliance and data validity (through decreased underreporting) was not supported by the results. The use of a PDA for dietary self-monitoring did not decrease the prevalence of low-energy reporting (41 %) compared with what was found in the literature (27–47 %).

Personal digital assistant with camera and mobile phone card

A PDA with a camera and mobile phone card can be used to record individual dietary intake by taking photos of foods and beverages before and after consumption, instead of manually recording them.
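The Goldberg cut-off screening used in the PDA study above reduces to comparing each subject's ratio of reported energy intake to estimated basal metabolic rate (EI:BMR) against sex-specific bounds. The following is an illustrative sketch, not the study's software: only the numeric cut-offs come from the text, and the function and variable names are our own assumptions.

```python
# Illustrative sketch of Goldberg cut-off classification. Only the
# cut-offs (1.06/2.30 for females, 1.05/2.28 for males) come from the
# study described above; names and the EI:BMR framing are assumptions.

CUTOFFS = {"female": (1.06, 2.30), "male": (1.05, 2.28)}

def classify_reporter(reported_energy_kj, estimated_bmr_kj, sex):
    """Label one subject a 'low-energy', 'valid' or 'over' reporter."""
    lower, upper = CUTOFFS[sex]
    ratio = reported_energy_kj / estimated_bmr_kj  # EI:BMR ratio
    if ratio < lower:
        return "low-energy"
    if ratio > upper:
        return "over"
    return "valid"

# A woman reporting 6000 kJ/d against an estimated BMR of 7000 kJ/d
# (ratio ~0.86, below 1.06) would be flagged as a low-energy reporter:
print(classify_reporter(6000, 7000, "female"))  # low-energy
```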
Photos are subsequently sent to a dietitian to estimate and analyse daily nutrient intake. The validity of a PDA with camera and mobile phone card (the Wellnavie instrument) was evaluated by comparing daily nutrient intakes obtained by this instrument with those obtained by weighed food records (19,20). For energy and thirty-two nutrients, researchers found no significant differences between daily nutrient intakes obtained by Wellnavie and those obtained by 1 d weighed food records, with the exception of Zn, Mn, vitamin E, SFA, PUFA and dietary fibre. Spearman correlation coefficients comparing the results obtained from the two methods for thirty-three nutrients ranged from 0·21 to 0·86, with a mean correlation coefficient of 0·62. Study participants reported that the Wellnavie instrument was the least burdensome and least time consuming; however, the twenty-eight study subjects were all female college students majoring in food and nutrition (19). In another study, in seventy-five normal-weight and obese adults, Wellnavie was compared with 5 d weighed food records and the results showed lower but significant Spearman correlations (0·32–0·77; mean 0·47) (20). However, daily nutrient intakes measured with the Wellnavie instrument were significantly lower than those from the weighed records. Regarding underreporting when using Wellnavie, this was found in obese men but not in obese women (20).

Digital photography

Digital photography is similar to the direct observation of food selection and plate waste, but instead of observers being present in the dining room, food selection and plate waste are photographed with a digital video camera. Reference portions are first determined, weighed and photographed for comparison with photos of participants' trays, the latter photographed at the end of the cafeteria line and post-consumption.
In order to accurately estimate portion size and plate waste, participants' trays and reference portions should be photographed at the same angle. All photos of reference portions, food selection and plate waste are stored on a computer and can be viewed simultaneously by researchers. An individual's food selection is estimated based on portion size and plate waste as a percentage of the reference portion. Studies have found that different observers had a high rate of agreement for the estimation of food selection, portion sizes and food intake when using digital photography (21,22). The validity of digital photography for estimating food portion sizes and plate waste was tested by comparing this tool with direct visual estimation in a laboratory setting with adult subjects. Sixty meals consisting of ten different portion sizes from six different university cafeteria menus were prepared and weighed. For each method, three observers independently estimated portion sizes as a percentage of a standard serving. The results supported the validity of digital photography as well as direct visual observation. Although comparable results were found with both methods, Pearson correlations for direct visual estimation (0·95–0·97) were often significantly higher than those for digital photography (0·89–0·94). Data obtained by both methods showed small overestimates or underestimates. The intra-class correlation coefficients were 0·94 for food selection, 0·80 for plate waste and 0·92 for food intake, confirming high levels of agreement between the observers (21). In evaluations of sixth-grade children (mean age 11·7 years), digital photography was found to be valid for measuring forty-three schoolchildren's food intake in a naturalistic setting (22). Data were obtained from five consecutive days of school lunches, with two dietitians estimating food selection, plate waste and intake using digital photography.
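The estimation step used by these digital-photography studies, judging food selection and plate waste as percentages of a weighed reference portion, reduces to simple arithmetic. A minimal sketch follows, with made-up numbers and names of our own choosing rather than the studies' software:

```python
# Minimal sketch of the digital-photography estimation arithmetic:
# selection and plate waste are each judged as a percentage of a weighed
# reference portion, and intake is their difference. The worked numbers
# below are illustrative, not data from the studies.

def estimated_intake_g(reference_g, selection_pct, waste_pct):
    """Estimated intake in grams for one food item on one tray."""
    selected_g = reference_g * selection_pct / 100.0  # amount taken
    wasted_g = reference_g * waste_pct / 100.0        # amount left over
    return selected_g - wasted_g

# A 200 g reference portion, with the tray judged at 120 % of the
# reference selected and 30 % left as plate waste:
print(estimated_intake_g(200, 120, 30))  # 180.0
```

Because every tray is judged against the same weighed reference, the observers never need to weigh individual trays, which is what makes the method fast in dining facilities.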
Results showed a significant association between food intake as estimated using digital photography and adiposity, which supported convergent validity. Discriminant validity was supported by non-significant correlations between food intake and measures of depressed mood and self-esteem. Reliability was confirmed as the two dietitians demonstrated a high degree of agreement in their estimations. The intra-class correlation coefficients were 0·95 for both kilocalories selected and plate waste, and 0·93 for total kilocalorie consumption. In addition, CI widths were calculated for the 5 d included in the study and showed that using digital photography for 3 d was sufficient to obtain a representative measurement of food intake (22).

New measurement methods to assess food intake

Smart card

Smart cards can be employed, among other functions, as payment for meals. The smart card is allocated a certain monetary value that can be spent in participating cafeterias or restaurants. When the consumer pays for a meal using the smart card, the foods on the tray are immediately recorded at the cash desk and sent to a central computer. Through this, information can be collected not only about food choices, but also about the date and time of the transaction, the costs incurred and the smart card number. Subsequently, the data are stored on the computer and can be linked to a nutrient database. Lambert et al.
(23) tested the feasibility and accuracy of the smart card used as a tool to measure the eating behaviour of schoolchildren. In a school cafeteria, food choices of 198 boys aged 7–11 years were recorded by the smart card system as well as by direct observation by researchers in a subsample of sixty-five children, weighing leftover edible foods. During a total period of 10 d, the foods on 265 trays were recorded and the data obtained by the two dietary-assessment methods were compared. The results showed an accuracy rating of 95·9 %. The largest part of the error was caused by the smart card recording no data although the child had actually consumed a school meal. Another identified source of error occurred when the data recorded by the smart card and by the researcher did not match; researchers hypothesised that this occurred because diners exchanged foods or trays or paid for each other's meals. The present study also tested the variation in portion size and plate waste, and examined the relationship between food choice and food intake. As expected, the results showed that portion sizes as well as plate waste varied significantly. Data obtained by the smart card system provided accurate information about food choice, but not about food or nutrient intake. It would be possible to ascribe an edible-wastage correction factor to each food item in the software database; however, the authors viewed this as a time-consuming task (23).

Discussion

Computerised assessment

Self-administered computerised assessment makes it possible for participants to register and assess their dietary intake at their own pace and convenience. The subject immediately stores the data and interviewers do not have to be present during the entire interview, which saves considerable time and decreases costs. Furthermore, computerised assessment tools can directly calculate nutrient intake and energy expenditure, which makes immediate feedback possible (24,25).
However, some subjects may need more instructions before or during completion of the questionnaire. Applying alerts to warn subjects of improbable answers could also decrease the problem of overreporting as well as reduce the amount of data cleaning by researchers. These types of questionnaires may be useful for web-based data collection and could be applied as an alternative to in-person or self-administered questionnaires in studies where participants attend a central study-visit site, as well as for questions on sensitive topics. When using computerised self-assessment, questions about risky or sensitive behaviours may be answered more truthfully (7). In addition, adolescents might be more motivated to report their dietary intake with computer use (10). With the application of recorded audio to complete a computerised dietary questionnaire or recall, literacy problems can be decreased. Interviewer-administered computer-assisted diet histories and 24HR decreased inter-interviewer variability due to standardised questioning protocols. Moreover, there was good agreement between the data obtained by the computerised and the classic methods. However, the EPIC study results showed that problems still existed in the standardisation of the multiple-pass 24HR. Further research was needed to compare the actual validity of 24HR intakes across centres using urinary nitrogen and potassium as gold standards, so as to estimate the magnitude of systematic measurement errors across the EPIC centres (13). A disadvantage of self-administered computerised assessment is that it requires the user to have a minimum level of knowledge about computer use. Certain population groups may have difficulty using a computerised assessment tool, for example, older and less educated individuals (7). This is less problematic with interviewer-administered computerised assessment tools.
Personal digital assistants

PDA-based food records have several advantages, as individuals can be provided with immediate feedback and data stored on the PDA can be reviewed at any point in time. Uploading data to a computer allows the researcher or dietitian to analyse dietary intake as often as the user provides them with information. PDA-based self-monitoring greatly decreases the burden on the researcher or dietitian, as the time required to analyse food records is reduced by removing the need for data entry (17). Another advantage with respect to paper diaries is that it is possible to date and timestamp every recorded food item, which makes it possible to avoid fallacious results on adherence (17–26). Furthermore, audible alarms make it possible to alert the participant at specific times to record food intake. Although the advantages of PDA show their potential to improve data quality, there are several limitations. The use of PDA-based food records increases the respondent burden compared with paper diaries. Studies report that subjects had difficulty using the search function and were unable to find certain foods (27–29). Furthermore, like paper diaries, PDA-based food records require participants to be literate. As such, older or less educated individuals might have difficulty using a PDA for recording food intake. However, although limited in size, a pilot study in a group of older participants with no prior computer experience showed that they easily learned how to use a PDA (26). Despite certain obstacles, studies demonstrated that PDA can simplify intake registration and self-monitoring, thus increasing the quality of dietary-intake data. As this dietary-assessment tool still has limitations, further development of PDA and dietary software programs is necessary.
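The date-and-timestamping behaviour described above can be pictured as a minimal record structure. The layout below is our own illustration; no published PDA software is being reproduced.

```python
# Our own illustration of a timestamped food-record entry: stamping each
# item at the moment of entry is what lets researchers audit adherence
# (recording at eating time rather than retrospectively).

from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class FoodEntry:
    food_item: str
    portion_g: float
    recorded_at: datetime = field(default_factory=datetime.now)

diary = [FoodEntry("wholemeal bread", 70.0),
         FoodEntry("semi-skimmed milk", 250.0)]

# Entries can later be sorted or audited by their timestamps:
for entry in sorted(diary, key=lambda e: e.recorded_at):
    print(entry.food_item, entry.portion_g)
```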
With regard to studies using a PDA with camera, dietitians could not always accurately estimate portion sizes, as subjects took photos at the wrong angle and digital photo images were inadequate. The accuracy of this method can be increased by improving the quality of the digital photos and by including a PDA-based food list for users to select the foods and drinks consumed. More research is needed to determine whether this method can be used in other and more diverse population groups.

Digital photography

The main advantage of digital photography is the possibility to collect dietary-intake data from large groups relatively quickly, with minimal disruption of, and impact on, the eating behaviour of participants. Because data are immediately stored on the computer, the researchers have more time to analyse and process the obtained data. Furthermore, the participants' identities can be kept anonymous, which can be seen as an advantage.

J. Ngo et al. S106 British Journal of Nutrition. Downloaded from https://www.cambridge.org/core (Carnegie Mellon University, 06 Apr 2021), subject to the Cambridge Core terms of use. https://doi.org/10.1017/S0007114509990638

Studies show that digital photography is a reliable and valid tool to measure food intake in dining facilities in both adult and school-age populations. More research is required on the practical usability and on the number of observations needed to obtain reliable food-intake data. In addition, further investigation should address whether this tool can be used to measure dietary intake in different target populations.
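The digital-photography studies summarised in Appendix Table S3 (Williamson et al.) convert observer judgements into food weights by a simple rule: each observer scores the photographed food as a percentage of a weighed standard serving, the percentages are multiplied by the standard-portion weight, and intake is taken as food selection minus plate waste. A minimal sketch of that arithmetic follows; the reference weights, food names and function names are made-up examples of ours, not data from the studies.

```python
# Portion estimation from photographs: observers rate each food as a
# percentage of a weighed standard serving; intake = selection - waste.
# STANDARD_WEIGHT_G holds made-up weighed reference portions in grams.

STANDARD_WEIGHT_G = {"rice": 180.0, "chicken": 120.0}

def estimated_weight(percent_of_standard: float, food: str) -> float:
    """Convert one observer's estimate (% of standard serving) into grams."""
    return percent_of_standard / 100.0 * STANDARD_WEIGHT_G[food]

def mean_weight(estimates_pct: list[float], food: str) -> float:
    """Average several observers' independent estimates, in grams."""
    return sum(estimated_weight(p, food) for p in estimates_pct) / len(estimates_pct)

def intake(selection_pct: list[float], waste_pct: list[float], food: str) -> float:
    """Food intake = estimated food selection minus estimated plate waste."""
    return mean_weight(selection_pct, food) - mean_weight(waste_pct, food)

# Three observers rate the selection photo and the plate-waste photo:
eaten = intake([100.0, 90.0, 95.0], [25.0, 20.0, 30.0], "rice")
print(round(eaten, 1))  # prints 126.0 (grams of rice consumed)
```

Averaging several observers' independent estimates is what the reported intra-class correlations in Table S3 evaluate.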
Smart card

An advantage of using the smart card system to measure food choice is that it can collect long-term data from large groups on individual food behaviour. Furthermore, the costs are relatively low, as smart cards are inexpensive and fewer researchers are needed, since data are stored when the diner uses the smart card to pay for the meal. However, the smart card-based system also has limitations in its usability as a dietary-assessment method. Lambert et al. (23) observed that children exchanged trays or paid for each other's meals. Furthermore, diners could buy foods and beverages for consumption at a different time. As such, this tool is better suited to collecting information about subjects' food selection than about their food intake. Results support the usability and reliability of the smart card as a tool to record food choices in eating facilities. Further research should focus on how the smart card system can be improved as an instrument to measure food consumption. Since the study sample included in this review consisted only of boys between the ages of 7 and 11 years, more research is needed to determine whether this tool can be applied to measure food intake in a variety of population groups.

Conclusion

Several dietary-assessment tools applying ICT have been developed, and some have been shown to be valid and reliable for diverse purposes and target groups. Certain methods, particularly computerised tools and PDA, have the potential to accurately measure dietary intake, may improve dietary-assessment quality in certain vulnerable groups and may decrease the workload of researchers. Both still need further development to improve validity and reliability for the comprehensive assessment of micronutrient intake. Further work is necessary for improving ICT tools in established and new methods and for their rigorous evaluation.
The forthcoming results of a recent European Commission project, Innovative Dietary Assessment Methods in Epidemiological Studies and Public Health (IDAMES) (30), should shed light on this topic. In general, which tool is the most suitable for collecting dietary data depends on the study objectives and the target group. Before selecting a given tool, it is important to review the advantages and disadvantages of each method.

Acknowledgements

The studies reported herein have been carried out within the EURRECA Network of Excellence (www.eurreca.org), financially supported by the Commission of the European Communities, specific Research, Technology and Development Programme Quality of Life and Management of Living Resources, within the Sixth Framework Programme, contract no. 036196. The present report does not necessarily reflect the Commission's views or its future policy in this area. J. N. developed the search strategy, supervised and undertook the analysis and wrote the final draft of the paper. A. E. carried out the search strategy, contributed to the writing of the first draft and commented on following drafts of the paper. J. R., P. G.-S. and M. M. commented on drafts of the paper. L. S.-M. participated in the planning of the strategy, directed and supervised the work and commented on all drafts of the paper. Additional support from Dr Margaret Ashwell and Dr Janet Lambert in reviewing concepts and contents is gratefully acknowledged. The authors have no conflict of interest to report.

References

1. Thompson FE & Byers T (1994) Dietary assessment resource manual. J Nutr 124, 2245S–2317S. 2. Patterson RE & Pietinen P (2004) Assessment of nutritional status in individuals and populations. In Public Health Nutrition, pp. 66–82 [MJ Gibney, BM Margetts, JM Kearney and L Arab, editors, on behalf of the Nutrition Society]. Oxford: Blackwell Science. 3. Rutishauser IHE (2005) Dietary intake measurements. Public Health Nutr 8, 1100–1107. 4.
Poslusna K, Ruprich J, de Vries JHM, et al. (2009) Misreporting of energy and micronutrient intake estimated by food records and 24 hour recalls, control and adjustment methods in practice. Br J Nutr 101, Suppl. 2, S73–S85. 5. Kroeze W, Werkman A & Brug J (2006) A systematic review of randomized trials on the effectiveness of computer-tailored education on physical activity and dietary behaviors. Ann Behav Med 31, 205–223. 6. Gibson RS (2005) Principles of Nutritional Assessment, 2nd ed. New York: Oxford University Press. 7. Edwards SL, Slattery ML, Murtaugh MA, et al. (2007) Development and use of touch-screen audio computer-assisted self-interviewing in a study of American Indians. Am J Epidemiol 165, 1336–1342. 8. Slattery ML, Murtaugh MA, Schumacher MC, et al. (2008) Development, implementation, and evaluation of a computerized self-administered diet history questionnaire for use in studies of American Indian and Alaskan native people. J Am Diet Assoc 108, 101–109. 9. Vandelanotte C, De Bourdeaudhuij I & Brug J (2004) Acceptability and feasibility of an interactive computer-tailored fat intake intervention in Belgium. Health Promot Int 19, 463–470. 10. Vereecken CA, Covents M, Matthys C, et al. (2005) Young adolescents' nutrition assessment on computer (YANA-C). Eur J Clin Nutr 59, 658–667. 11. Baranowski T, Islam N, Baranowski J, et al. (2002) The food intake recording software system is valid among fourth-grade children. J Am Diet Assoc 102, 380–385. 12. Zoellner J, Anderson J & Gould SM (2005) Comparative validation of a bilingual interactive multimedia dietary assessment tool. J Am Diet Assoc 105, 1206–1214. 13. Slimani N, Ferrari P, Ocké M, et al. (2000) Standardization of the 24-hour diet recall calibration method used in the European Prospective Investigation into Cancer and Nutrition (EPIC): general concepts and preliminary results. Eur J Clin Nutr 54, 900–917. 14. Landig J, Erhardt JG, Bode JC, et al.
(1998) Validation and comparison of two computerized methods of obtaining a diet history. Clin Nutr 17, 113–117.

New technologies and diet assessment S107 British Journal of Nutrition

15. Bakker I, Twisk JWR, van Mechelen W, et al. (2003) Computerization of a dietary history interview within the Amsterdam Growth and Health Longitudinal Study. Eur J Clin Nutr 57, 394–404. 16. Koop A & Mosges R (2002) The use of handheld computers in clinical trials. Control Clin Trials 23, 469–480. 17. Beasley J, Riley WT & Jean-Mary J (2005) Accuracy of a PDA-based dietary assessment program. Nutrition 21, 672–677. 18. Yon BA, Johnson RK, Harvey-Berino J, et al. (2006) The use of a personal digital assistant for dietary self-monitoring does not improve the validity of self-reports of energy intake. J Am Diet Assoc 106, 1256–1259. 19. Wang DH, Kogashiwa M & Kira S (2006) Development of a new instrument for evaluating individuals' dietary intakes. J Am Diet Assoc 106, 1588–1593. 20. Kikunaga S, Tin T & Ishibashi G (2007) The application of a handheld personal digital assistant with camera and mobile phone card (Wellnavi) to the general population in a dietary survey. J Nutr Sci Vitaminol (Tokyo) 53, 109–116. 21. Williamson DA, Allen HR & Martin PD (2003) Comparison of digital photography to weighed and visual estimation of portion sizes. J Am Diet Assoc 103, 1139–1145. 22. Martin CK, Newton RL & Anton SD (2007) Measurement of children's food intake with digital photography and the effects of second servings upon food intake.
Eat Behav 8, 148–156. 23. Lambert N, Plumb J & Looise B (2005) Using smart card technology to monitor the eating habits of children in a school cafeteria: 1. Developing and validating the methodology. J Hum Nutr Diet 18, 243–254. 24. Lagerros YT, Mucci LA & Bellocco R (2006) Validity and reliability of self-reported total energy expenditure using a novel instrument. Eur J Epidemiol 21, 227–236. 25. Evers W & Carol B (2007) An Internet-based assessment tool for food choices and physical activity behaviors. J Nutr Educ Behav 39, 105–106. 26. Burke LE, Warziski M, Starrett T, et al. (2005) Self-monitoring dietary intake: current and future practices. J Ren Nutr 15, 281–290. 27. Welch J, Dowell S & Johnson CS (2007) Feasibility of using a personal digital assistant to self-monitor diet and fluid intake: a pilot study. Nephrol Nurs J 34, 43–48. 28. Yon BA, Johnson RK, Harvey-Berino J, et al. (2007) Personal digital assistants are comparable to traditional diaries for dietary self-monitoring during a weight loss program. J Behav Med 30, 165–175. 29. Ma Y, Olendzki BC, Chiriboga D, et al. (2006) PDA-assisted low glycemic index dietary intervention for type II diabetes: a pilot study. Eur J Clin Nutr 60, 1235–1243. 30. IDAMES: Innovative Dietary Assessment Methods in Epidemiological Studies and Public Health, http://nugo.dife.de/twiki41/bin/view/IDAMES/ProjectDescription (accessed December 2008).

Appendix: Characteristics of selected studies on dietary-intake assessment and ICT

Table S1.
Computerised assessment

FFQ, self-administered

Edwards et al. (2007)(7)
Objective: Development, usability and acceptability of ACASI for native Americans.
Sample: Usability subset, n 604 (32·8 % men, 67·2 % women; aged 18 to over 69 years, 75 % aged 18–49 years; self-identified as American Indian or Alaska native; 27·8 % with less than high-school education).
Methods: Analysis based on baseline study data, auxiliary background data and a short usability questionnaire (five-item Likert scale) completed after monitoring food intake, physical activity, medical history and other lifestyle data with ACASI.
Results: 96·0 % of participants found the ACASI questionnaires enjoyable to use, 97·2 % reported ease of use and 82·6 % preferred ACASI for future questionnaires. 62 % indicated that more directions were needed; 10·6 % reported difficulty using ACASI. Lower educational level and less frequent computer use in the past year were associated with usability difficulty.
Correlation study/validated: No

Slattery et al. (2008)(8)
Objective: Descriptive evaluation of a FFQ (DHQ) using ACASI in a population of American Indians and Alaskan natives.
Sample: 6604 study participants (self-identified as American Indian or Alaska native; 36·0 % men, 64 % women; aged 18 to over 65 years, 60 % aged younger than 45 years; 22 % with less than a high-school education, 6 % college graduates).
Methods: Completion of an audio computer-assisted interview; anthropometric measurements, blood pressure and a finger-stick blood draw.
Results: Almost 100 % of participants had complete DHQ data. More difficulties were seen with lower education and acculturation levels, as well as among younger men and the unemployed. Underreporting based on reported energy intake was low, but 18 % reported suspect high non-alcoholic energy intake (over 33 472 kJ (8000 kcal) or 29 176 kJ (6500 kcal) for men and women, respectively). The average time to complete the questionnaire was 36 min.
Correlation study/validated: No

Vandelanotte et al.
(2004)(9)
Objective: Investigate the acceptability and feasibility of an interactive computer-tailored fat intake intervention in a general population.
Sample: 220 participants (20–60 years of age); 67 % with university or college education.
Methods: Participants completed a computerised questionnaire about demographics, fat intake and psychosocial determinants, and received personal fat-intake advice. An evaluation questionnaire was completed during and after the tailored programme.
Results: Participants rated the diagnostic tool positively (Likert-scale scores 3·96 (SD 0·57) and 4·17 (SD 0·58)). No significant differences were found according to sex, education level or computer literacy. Significant differences were found between age groups and stages of change.
Correlation study/validated: No

24HR, self-administered

Vereecken et al. (2005)(10)
Objective: Assess the relative validity and acceptability of the computerised 24HR YANA-C.
Sample: Study 1: 136 pupils of two secondary schools (12–14 years of age). Study 2: 101 pupils of two primary (11–12 years of age) and two secondary schools (12–14 years of age).
Methods: Study 1: pupils completed a 1-d EFR and, the following day, YANA-C (both under supervision); one week later YANA-C was administered a second time. Study 2: pupils completed supervised YANA-C and a 24 h dietary recall interview on the same day. A subsample completed a survey on PC experience, general attitude towards computers and their acceptability of YANA-C, using a five-point scale.
Results: Matches between YANA-C and the standard methods ranged from 67 to 97 % (mean 90 % with the EFR; 89 % with the interview); kappa statistics 0·38–0·92 (mean 0·73 and 0·70 for EFR and interview, respectively). Mean Spearman correlations were 0·62 for YANA-C and EFR and 0·67 for YANA-C and interview. In comparison with the EFR, on average 56 % were classified into the same tertile and 6 % into the opposite tertile, whereas 61 % were classified into the same tertile and 5 % into the opposite tertile with the interview.
Correlation study/validated: Yes

Baranowski et al. (2002)(11)
Objective: Assess the validity and accuracy of the multiple-pass 24HR using FIRSSt.
Sample: n 138 school children (mean age 9·6 years; 33·7 % Euro-American, 30·4 % African-American, 14·5 % Hispanic and 21·4 % other).
Methods: Comparisons between FIRSSt (not specified if supervised), school lunch observations and a dietitian-conducted multiple-pass 24HR. A six-group design was used to test observation and sequencing effects, as well as hair samples. Questionnaire evaluating FIRSSt.
Results: FIRSSt v. observation: accuracy 46 % match; errors 24 % intrusion and 30 % omission rates. 24HR v. observation: 59, 17 and 24 %, respectively. FIRSSt v. 24HR: 60, 15 and 24 %, respectively. Obtaining a hair sample reduced the omission rate for FIRSSt v. 24HR and increased the match rate for 24HR v. observation. Children generally enjoyed using FIRSSt.
Correlation study/validated: Yes

Zoellner et al. (2005)(12)
Objective: Examine the validity of an IMM dietary recall when compared with an interview-administered dietary recall (24HR).
Sample: Eighty low-income English- and Spanish-speaking participants (91 % female; aged 18–65 years; 28 % with less than high-school education).
Methods: Participants completed both an IMM recall (with minimal guidance) and an interview-administered recall consecutively on the same day, and were then asked to complete a brief opinion survey using a five-point Likert scale.
Results: The mean of the unadjusted correlation coefficients for the IMM recall and the 24HR was 0·6.
(Notable exceptions: folate 0·29 and alcohol 0·99.) 53 % of participants preferred the IMM, 39 % preferred an interviewer and 8 % preferred a pencil-and-paper method. Mean time of completion: 12·5 min (IMM) v. 20 min for the 24HR (completion plus analysis).
Correlation study/validated: Yes

24HR, interviewer-administered

Slimani et al. (2000)(13)
Objective: Examine the level of standardisation of a computer-assisted 24HR (EPIC-SOFT).
Sample: 32 063 subjects from ten countries participating in the EPIC calibration study.
Methods: Seventy interviewers in ten countries administered the two-pass computer-assisted 24HR. Differences in energy intakes across interviewers were compared, adjusting for potential confounders.
Results: For men, no significant differences were found between interviewers in five out of seven countries, and for women no significant differences in four out of eight countries. The difference in mean energy intake between centres in the same country was in general not significant. The percentage of interviewers with a mean energy intake within 10 % of the country mean energy intake was 98 % for men and 94 % for women.
Correlation study/validated: NA

Diet history, interviewer-administered

Landig et al. (1998)(14)
Objective: Validate two computerised interviewer-administered methods of obtaining a diet history (EBIS and DH).
Sample: n 20 hospitalised patients (12 men, 8 women; mean age 65 years, range 47–74 years). Exclusion criteria: severe diseases, more than 2 d of fasting, mental confusion and special diets.
Methods: Comparison of actual intake (weighed daily amounts of food consumed) with data from the two computerised interviewer-administered diet history methods. Intakes of macronutrients and ten micronutrients were evaluated and the percentage difference calculated.
Results: Mean daily intakes of nutrients calculated by DH deviated from −34 % to +20 % (mean SD 48·1), and from −35 % to +15 % (mean SD 28·1 %) with EBIS. Nutrient estimates calculated from both methods tended to underestimate intakes, possibly owing to the context of recalling hospital-prepared foods.
Correlation study/validated: Yes

Bakker et al.
(2003)(15)
Objective: Comparison of an interviewer-administered CAFTF with a similar FTF interview, and the effect on interviewer bias.
Sample: n 436, mean age 32 years (FTF); n 352, mean age 36 years (CAFTF); n 82 subjects for the agreement analysis.
Methods: Data from a cross-check FTF interview at cohort mean age 32 years were compared with data collected with the new cross-check CAFTF tool at cohort mean age 36 years, both referring to the prior 4-week intake. Data from eighty-two subjects interviewed by FTF at 32 years and at 36 years by a different interviewer using CAFTF were compared to test agreement.
Results: ANOVA: CAFTF 0·012–3·829 and FTF 1·422–11·583. The paired-sample differences, standard deviations and P-values showed some differences. Pearson's correlation coefficients 0·6–0·9; all intra-class coefficients in the range 0·6–0·9; kappa ranged from 0·4 to 0·8. Bland–Altman plots showed no relevant differences.
Correlation study/validated: Yes

Abbreviations: self-admin, self-administered; ACASI, audio computer-assisted self-interviewing questionnaire; DHQ, diet history questionnaire; 24HR, 24 h recall; YANA-C, young adolescents' nutrition assessment on computer; EFR, estimated food record; FIRSSt, food intake recording software system; IMM, interactive multimedia; interviewer-admin, interviewer administered; EPIC, the European Prospective Investigation into Cancer and Nutrition; NA, not applicable; DH, diet history; EBIS, diet history, consulting and information system; CAFTF, computer-assisted face-to-face diet history interview; FTF, face-to-face.

Table S2.
Personal digital assistant (PDA)

Beasley et al. (2005)(17)
Objective: Evaluate the validity of the DietMatePro program (PDA-based software) and examine sources of error from the PDA-based food record.
Sample: Thirty-nine adults (twenty-one women, eighteen men; thirty-six white, three black, one Hispanic; mean age 53 years; mean BMI 28 kg/m2; mean 16 years of education).
Methods: Three-day PDA-based food records were compared with 24HR and an observed, weighed and timed lunch. Sources of error were quantified using calories as the unit of comparison.
Results: No significant differences in daily totals for calories and macronutrients between PDA data and the comparison measurements. Pearson's correlations for PDA and 24HR: 0·5–0·8; for PDA data and the observed lunch: 0·4–0·8. The largest source of absolute error in caloric estimation was attributable to portion-size estimation error (49 %).
Correlation study/validated: Yes

Yon et al. (2006)(18)
Objective: Investigate whether PDA use for dietary self-monitoring would reduce the prevalence of underreporting and improve the validity of self-reported energy intake.
Sample: Sixty-one white adults (fifty-six women, five men; mean age 48·2 years; mean BMI at baseline 32·2; 66 % with a university degree).
Methods: Part of a 24-week in-person behavioural weight-control programme. Participants were provided with a PDA running Calorie King's handheld diet dietary software. Energy intakes from 7 d PDA food records were collected within the first month. Goldberg cut-off values were used to classify individual subjects as low-energy, valid or overreporters. Underreporting was compared with the prevalence reported in the literature. Questionnaires exploring PDA use were collected at baseline and 6 months.
Results: The prevalence of low-energy reporting observed in the study (41 %) was consistent with the underreporting prevalence reported in the literature (27–46 %).
Correlation study/validated: No

Wang et al. (2006)(19)
Objective: Evaluation of a handheld PDA with camera and mobile phone card (Wellnavi).
Sample: Twenty-eight female university food and nutrition students (mean age 19·3 (SD 0·5) years; mean body weight 53·3 (SD 8·5) kg; mean BMI 21·4 (SD 2·9)).
Methods: One-day WFR with subjects taking digital photos of all recorded foods; photos were sent to the study dietitians by mobile phone card. An unannounced interview-administered 24HR was carried out the following day. The procedures were repeated after 6 months. Subjects completed a self-administered questionnaire regarding the three assessment methods.
Results: Differences between the Wellnavi method and the WFR were not statistically significant for most of the thirty-three nutrients, except Zn, Mn, vitamin E, SFA, PUFA and dietary fibre. Spearman correlation coefficients were stronger for 24HR v. WFR (mean of the two periods measured 0·77) than for Wellnavi v. FR (mean 0·62). 57·1 % of subjects considered the Wellnavi method the least burdensome and the least time-consuming, and 42·9 % stated they could continue using Wellnavi for a month.
Correlation study/validated: Yes

Kikunaga et al. (2007)(20)
Objective: Validation of a new dietary assessment method, a PDA with camera and mobile phone card (Wellnavi), and evaluation of the relation between obesity and underreporting using Wellnavi.
Sample: Seventy-five healthy volunteers (twenty-seven men, forty-eight women; forty-three non-obese and thirty-two obese; aged 30–67 years).
Methods: Subjects took digital photos of their meals and had a PDA display option to write in the ingredients of dishes consumed. Data were sent to the dietitian by mobile phone card and compared with data obtained from a WFR (five consecutive days). The association between obesity and underreporting using Wellnavi was compared with results from both the WFR and a motion and time study.
Results: The Wellnavi method gave significantly lower values for daily nutrient intakes in all subjects than those obtained by the WFR, except for some nutrients.
Significant Spearman correlations (0·32–0·75) for daily nutrient intakes measured by Wellnavi and the WFR method in all subjects, except for some nutrients. Obesity was a factor in underreporting in men but not in women.
Correlation study/validated: Yes

Abbreviations: 24HR, 24 h recall; WFR, weighed food record; FR, food record.

Table S3. Digital photography and smart card technology

Williamson et al. (2003)(21)
Objective: Test the validity of digital photography for measuring food portion sizes compared with weighed foods and with direct visual estimation.
Sample: A simulation of sixty meals consisting of ten different portion sizes from six different university cafeteria menus was prepared and weighed.
Methods: Food selections and plate waste, as estimated by digital photography and direct visual estimation, were compared with weighed foods. For each method, three observers independently estimated portion sizes as a percentage of a standard serving. These percentages were multiplied by the weight of the standard portion to yield estimated weights.
Results: Pearson correlations with actual weighed foods for total grams: 0·89–0·97 for both digital photography and the direct visual method. Correlations for direct visual estimation (between 0·95 and 0·97) were often significantly higher than those for digital photography (between 0·89 and 0·94). Both methods tended to yield small over- or underestimates.
Intra-class correlation coefficients for digital photography were 0·94 for food selection, 0·80 for plate waste and 0·92 for food intake, confirming good agreement among the three observers.
Correlation study/validated: Yes

Martin et al. (2007)(22)
Objective: Test the reliability and validity of digital photography for measuring children's food intake in a school cafeteria.
Sample: Forty-three participants (twenty-three boys and twenty girls; mean age 11·7 years; all Anglo-American).
Methods: Digital photography measured children's food intake at the school lunch cafeteria for five consecutive days. Two registered dietitians estimated food selection, plate waste and food intake based on the digital photography data. Photographs were taken of weighed reference portions of each food item available. Adiposity was assessed with body impedance analysis and BMI expressed as percentile rank; mood and self-esteem were assessed with questionnaires.
Results: A high degree of agreement was observed between the dietitians' estimates. Intra-class correlation coefficients for kilocalories selected and plate waste were both 0·95, and 0·93 for total kilocalories, and 0·93, 0·89 and 0·94 for fat, protein % and CHO, respectively. Assessment over 3 d provided a reliable and representative measure of intake. A significant association between food intake and adiposity supported convergent validity; non-significant correlations between food intake and depressed mood and self-esteem supported discriminant validity.
Correlation study/validated: Yes

Lambert et al. (2005)(23)
Objective: Test the feasibility of using smart card technology to track the eating habits of school children over a prolonged (several months) period.
Sample: Food choices of sixty-five boys (aged 7–11 years).
Methods: Smart cards electronically recorded all transactions at the cash desk. During two 5-d trials (November and June), food choices were directly observed and recorded (plate waste) by researchers for 265 trays from the sixty-five children. The data obtained by both methods were compared.
To test the relationship between foods chosen and the actual amounts of food consumed, the portion size of eighty foods was determined and variations in portion sizes and food wastage were identified.
Results: Out of 265 trays, eleven showed a significant discrepancy between the food choices recorded by the researchers and the smart card data, yielding an accuracy rating of 95·9 %. Prepared, processed food items showed low variation in portion size; foods served by catering staff or diners had far greater variation. Some items produced far less wastage than others. An edible-wastage correction factor is needed for each food item in order to convert food choices into intake data.
Correlation study/validated: No

Abbreviations: Pro, protein.
ABSTRACT

This paper describes the conservation treatment, at the Monastery of St Catherine, Sinai, of an illuminated manuscript preserved in an important but severely damaged sixteenth-century Greek-style Sinai binding. The conservation treatment aimed to restore functionality to this binding with minimum intervention, enabling the important manuscript to travel to an exhibition. The repair techniques evolved during treatment as the two conservators studied and used the techniques of the original binder. The repair included the partial in situ re-sewing of the manuscript, working around and supplementing the original sewing, and reattaching the wooden bookboards. The primary endbands were repaired to preserve and support the unique secondary metal-thread endbands, and a new method was devised to repair an interlaced clasp-strap. The planning, as well as the working methods and techniques described, highlights new avenues both for the conservation of Byzantine manuscripts and for future conservation at this important library.

ÖZET

Bu yazı, Sina Azize Katerina Manastırı'ndaki bir tezhipli elyazmasının konservasyon çalışmalarını anlatmaktadır. Kitap, ciddi hasar görmüş ama önemli bir 16. yüzyıl Yunan tarzı Sina cildine sahiptir. Konservasyonun amacı bu cildin en az müdahaleyle işlevini tekrar kazanması ve böylece yazmanın bir sergiye yollanabilmesiydi. İki konservatör, kitabın mücellidinin tekniklerini inceleyip kullanırken onarım teknikleri de geliştirdiler. Onarım, yazmanın kısmi in situ yeniden dikimini (özgün dikişin çevresinde çalışılıp desteklenmesi) ve cilt kapaklarına yeniden tutturulmasını kapsıyordu.
The primary endbands were repaired while being preserved, and support was provided for the secondary metal-thread endbands. A new method was developed for repairing the interlaced clasp-strap. The planning and working methods described highlight new avenues both for the conservation of Byzantine manuscripts and for future conservation work at this important library.
INTRODUCTION TO SINAI, MS. GREEK 418 AND THE HEAVENLY LADDER
The Heavenly Ladder, written in the seventh century AD by John Climacus, is a treatise on monastic virtues consisting of thirty chapters conceived as rungs of a ladder reaching from earth to heaven. The work was designed for the monks of Raitho, the present day El-Tur on the Red Sea coast of the Sinai peninsula, but found a much wider audience in Greek monastic communities and became "the most widely read of all ascetic writing" [1, p. 6]. John Climacus (before AD 579–c. 650) was a Sinai hermit before becoming abbot of the Monastery of St Catherine, Sinai, and the Monastery, where the work was written, preserves in its library one of the most richly illustrated manuscripts of the Heavenly Ladder. MS. Greek 418, a twelfth-century AD illuminated parchment manuscript of 313 leaves, has 43 miniatures illustrating the prefatory material and the 30 chapters or homilies on spiritual exercises for the monks. For detailed information on the manuscript, with illustrations of all the illuminations and earlier bibliographies, see Martin, and Weitzmann and Galavaris [1, 2]; for the Monastery and its library see Nelson and Collins, and Manafis [3, 4]. Although the provenance of the manuscript is not known, Martin (whose view is repeated by later writers) thought that the manuscript was produced in "a region near Palestine, perhaps actually at Sinai" [1, p. 187].
The sumptuous manuscript is preserved in an equally lavish and rare blind-tooled, Greek-style binding; it is covered with a fine-quality red tanned-goatskin, has silver furniture and elaborate secondary endbands of silver and silver-gilt metal threads, Fig. 1. Weitzmann and Galavaris dated the binding to "the second half of the fourteenth century" [2, p. 153] and Boudalis to the fifteenth century, "possibly in Constantinople" [5, p. 32], but recent work by Sarris has now firmly placed this binding in a group of nine manuscripts bound at the Monastery in the late sixteenth century AD [6]. The quality of the binding of this manuscript is unsurpassed by any other bound at the Monastery and is of great importance. The binding has an unsupported sewing structure, using four stations, with wooden boards attached to the sewn bookblock by means of sewing thread only; a textile spine lining extending onto the outer face of the boards; endbands sewn to the bookblock and the head and tail edges of the boards; and metal bosses and triple interlaced leather clasp-straps laced through the right board with metal rings that fit over edge pins on the left board. For an overview of Greek-style bookbinding with bibliographies of the earlier literature see Federici and Houlis [7] and Szirmai [8]; for the unique all metal-thread endbands see Boudalis [5]; and for a glossary of bookbinding terms see Ligatus [9].
INTRODUCTION TO THE CONSERVATION
MS. Greek 418, with its miniatures documenting monastic life, was requested by the Getty Museum in 2005 to travel to Los Angeles as part of their exhibition in 2006–7: 'Holy Image, Hallowed Ground: Icons from Sinai' [3]. Although the importance of this manuscript to the planned exhibition was clear, the condition of the manuscript was alarming when viewed in May 2005 by Nicholas Pickwoad, Fig. 2. The text leaves survive in excellent condition and have been described as such in print [2, p.
153], but the binding was very damaged and the unique secondary endbands were now very vulnerable, Fig. 3. The sewing structure was broken down along the left joint and the damage extended across the bookblock at sewing station one. The sewn connection of the bookblock to the boards was completely broken and both the spine lining and covering leather were completely split along both joints. The head and tail endband cores were broken at, or near, the joints and both wooden boards were now only attached to the bookblock by the elaborate, secondary endband sewing in silver and silver-gilt metal threads. In addition, these elaborate secondary endbands were partially detached from the primary endband cores. The binding retained both of its triple interlaced leather clasp-straps but two of the strands of the lower strap were broken at the exterior face of the board, making this rare surviving clasp with its cast silver stirrup-ring vulnerable. Any further strain placed on the binding in this state would result in irremediable damage to the binding of this important manuscript.
LEARNING FROM THE PAST: USING ORIGINAL TECHNIQUES TO CONSERVE A TWELFTH-CENTURY ILLUMINATED MANUSCRIPT AND ITS SIXTEENTH-CENTURY GREEK-STYLE BINDING AT THE MONASTERY OF ST CATHERINE, SINAI
Andrew Honey and Nicholas Pickwoad
Fig. 1 Detail of a portion of the intact decorative metal-thread secondary endband before treatment (right or lower board, head).
The authors had been involved with a preservation project to "secure the future of the [Monastery's] library" [10, p. 33] begun in 1998 and jointly organised by the Saint Catherine Foundation and the University of the Arts, London. This project has concentrated to date on a condition survey of the bound manuscripts and early printed books, coinciding with planning work to box and rehouse the manuscripts, remodel the library and provide the Monastery with modern conservation workshops.
An opportunity to carry out a conservation treatment at the Monastery, although daunting, was welcomed as it would highlight the requirements of the future workshop, aid its planning and would allow the project to demonstrate its conservation abilities to the monastic community. Conservation work carried out by the project prior to this had been limited to minor work required for the safe display of the Achtiname within the Monastery, and emergency work to record and safely remove a number of manuscript fragments uncovered during building work [11]. The Monastery did not have suitable conservation facilities in which to carry out detailed and complex treatments, but a converted cell, which had previously been used for some work in caring for the collection, potentially offered a location for emergency work. Unlike other conservation disciplines, the treatment of books and manuscripts is not usually site or time specific and is usually undertaken by a sole conservator. The treatment of this manuscript would require a different approach for its planning and new methods of work.
THE PROPOSED TREATMENT
The manuscript had been surveyed in November 2003 and Nicholas Pickwoad made further notes and took additional digital photographs during his 2005 trip to supplement the existing slide photographs, drawings and survey forms. The authors discussed possible treatments with Christopher Clarkson and a treatment proposal was submitted to the Synaxis of the Monastery in December 2005, with the treatment planned to be completed in a single week during spring 2006. The treatment proposal sought to address four main areas of damage, to restore functionality to the binding and to enable the manuscript to travel safely. The treatments were designed so that they could be carried out in a temporary ad hoc workshop, with all the tools and materials required taken to the Monastery.
The proposed treatment aimed to secure the broken sewing structure with new thread and reconstitute the primary, sewn board attachment, to secure the broken endband cores where they had pulled away from the bookblock, to reattach the secondary endband sewing to the repaired primary-sewn endbands and to reattach the partly broken interlaced clasp-strap. Planning of the work before the trip needed to be thorough and, although the broad aims of the treatment had been agreed, exact methods to achieve them would need to be flexible and to be refined during the work. A wide range of tools and materials were packed to allow for all eventualities. The timetable for the work was both fixed and tight and the project was planned to allow the maximum contact time with the manuscript. Although both authors have experience of conserving western manuscripts, the in situ repair of Greek-style sewing and endband structures was unfamiliar and would require new approaches. Building on the success of the survey method used at the Monastery, the treatment was to be undertaken jointly by two conservators [10, p. 37]. It was envisaged that this would generate a 'dialogue' between the conservators in relation to the manuscript, encouraging each conservator to continually comment on what they saw and what they planned to do. This would speed up the decision-making process and expedite the assessment of treatment ideas. Documentation would use a combination of digital photography and a written work diary to record work in progress, minimising the time taken, while also providing maximum information for a final report on return from the Monastery.
Fig. 2 Manuscript and binding before treatment, left or upper board detached and showing stretched secondary endband sewing.
Fig. 3 Head endband, left or upper joint before treatment showing broken endband cores and partially detached secondary endband.
REPAIR OF SEWING AND SEWN BOARD ATTACHMENT
This Greek-style binding has an unsupported sewing structure, using four stations, with its wooden boards originally attached to the sewn bookblock by means of sewing thread only. This is unlike a Western binding structure, where the boards are attached to the bookblock with sewing supports, and repair techniques would need to take this into account. Before treatment began, the spine leather was lifted dry from the underlying textile lining and retained. The textile spine lining was lifted dry from the spine between stations one and four, the broken external headband tiedowns were pulled clear of the lining and the lining was peeled away from the head, Fig. 4. The internal tiedowns, with three exceptions only, were all broken under the lower endband core and these were cut to allow access. The textile spine lining was removed (and retained), revealing the spine of the manuscript with the damaged endbands peeled back though still attached to the boards. The covering leather was lifted along the back edge of the boards and the textile lining was separated from the leather. This exposed the bridling stations, with remains of the text-block sewing thread in stations one and four on the right board, and at all stations on the left board. It became clear that two threads had been looped under the bridles at each station. The removal of the spine liner and leather exposed the spine and broken sewing, and a great deal of animal glue was also revealed across the spine at the head within the area covered by the endband tiedowns and towards the front joint at the tail. It would appear that this was the result of an earlier repair. The adhesive residue, the repair glue and the remains of the leather on the spine were softened with starch paste and removed mechanically.
Although the sewing structure was broken down along the left joint and the damage extended across the bookblock at sewing station one, and the sewn connection of the bookblock to both boards was completely broken, the newly revealed spine showed that away from these areas of damage the sewing appeared to be sound. Studying the original structure indicated that it might be 'patched' by replicating the original sewing technique. Before this began, the broken sewing threads were removed from the intact bridles on the right joint. These bridles were then prepared to accept the replacement sewing by lacing flexible floss threaders under the bridles on the outside of the board, lacing them from head to tail, Fig. 5. Floss threaders are flexible polyester needles with large eyes designed to thread dental floss under braces, and allowed access to these tight bridles without breaking them [12]. The final four gatherings were then re-sewn with linen seaming twine. As the thread used to sew the endleaf bifolium to the board was pulled under each bridle, a second floss threader was also pulled under the bridles to allow the thread from the final text quire to be pulled under the bridle as well (in the opposite direction). This process essentially repeated the original sewing, an all-along two-step linkstitch, picking up the loops from the original sewing at each station where the new sewing met the original, intact sewing. Wherever possible, the original sewing thread was preserved in place, but any detached threads were removed and retained. The ends of the new sewing thread were secured by winding them in a helix around station four. The same process was repeated along the left joint, sewing the endleaves and the first seven quires all-along, using a two-step linkstitch. Quires 8–16 were sewn only between stations one and two, quires 17–20 were sewn at all stations, and quires 21–28 between stations one and two only, Fig. 4.
It was remarkable how replacing the sewing and catching up the original 'links' in the chain drew the spine back into shape and pulled the head edge of the textblock together, which had spread when the left half of chain one had broken down. It became evident that unsupported structures can be 'patched' in a way that Western structures with broken sewing supports seldom allow, leaving all the intact original structural elements in place, and replacing only those that are damaged or missing. It is a versatile and adaptable method. With the sewing and sewn board attachment repaired, the spine was manipulated into a continuous, comfortable, even round and placed in a press. The loose ends of new sewing thread were frayed out and pasted to the spine, together with loose original threads. The repaired sewing chains were dampened with paste and flattened with a bone folder. The spine was pasted and left for a short while to soften the spine folds, which were then gently flattened with bone folders. A release layer of hand-made Japanese tissue was pasted to the spine and a new linen-textile spine-lining was then pasted to the spine and back edges of the boards.
Fig. 4 Partially removed textile spine lining showing damaged sewing and sewn board attachment (above). Completed repaired sewing and sewn board attachment (below).
Fig. 5 Insertion of floss threaders under bridles on the back edge of the right or lower board.
PRIMARY ENDBAND REPAIRS
Although the sewing and sewn board attachment had been successfully repaired, the condition of the primary endbands was still of great concern. Greek-style primary endbands have an important structural function, adding to board attachment and helping to control the opening of the manuscript [5, p. 30]. In addition, the unique but severely damaged decorative metal-thread secondary endbands could only be preserved if the structural function of the damaged primary endband to which they were still attached was restored.
The head endband cores were broken at both joints and the tail endband cores at one joint. However, the endband cores were still firmly attached to the boards where they were covered by the elaborate secondary endbands and the head cap of the covering leather, Figs 1 and 3. The primary endbands would require in situ treatment to repair the broken cores and provide attachment at every quire of the textblock if they were to regain their structural function and provide a sound support for the decorative secondary endbands. The repairs began with the tail endband. Borrowing a technique used by the original binder to join the two halves of the sewn bookblock, the broken ends of the tailband core at the right joint were 'bridled' with three turns of linen thread. Staggered horizontal loops of thread were taken between the cores on each side of the break, pulling these together to close the gap between the broken ends of the cores and then binding them to the lower core by wrapping the thread around the lower core. A small piece of goat parchment was prepared to bridge the break in the endband cores, cut to the height of the cores and bevelled along all edges to give thin, flexible edges that could be wrapped over the cores. The parchment strip was bound to the back of the cores with the same thread, using a small, tightly-curved, blunt needle, starting from the board in a packed, wrapping movement of the thread around both cores with occasional figure-of-eight stitches penetrating the parchment to prevent slippage along the cores. The loose ends of thread from the bridling and the start of the wrapping were included within this sewing, along the back of the parchment to further reinforce the break, Fig. 6. The packed wrapping continued for about 20 mm from the point of the break across the textblock, making tiedowns in a figure-of-eight pattern at the centre of each quire.
The new thread was taken between the cores to make a figure-of-eight path around the lower core and through the textblock. On emerging from the spine, the thread was taken under the lower core to the front of the endband, and then between the cores to emerge at the back, where it was taken across the back of the lower core at an angle, reflecting the angled lower-core sewing of the original endband; it was then taken under the lower core at the centre of the next quire to form the next internal tiedown, emerging below the lowest main-sewing chain-stitch to repeat the process until the detached core was firmly reattached to the textblock. The repair and reattachment of this endband had a marked effect on the opening characteristics of the tail end of the spine, which now opened in a more controlled manner than the unrepaired head. It noticeably improved board-leverage on the textblock when the board was opened, something that the thread-only attachment of the boards to the sewing structure did not produce. It underlined the structural importance of these endbands within the Byzantine/Greek binding tradition. The head endband was then repaired with the same technique. However, as the head endband cores were broken at both joints, the staggered bridling technique was refined to link the repairs across the spine with a continuous thread, giving extra support to the repair, Fig. 7.
SECONDARY ENDBAND REPAIRS
The original treatment proposal had suggested that the metal threads of the secondary endband could be reattached to the repaired primary endbands by replacing their broken green silk warp threads with a new fine linen thread. However, the time that remained within the planned schedule did not allow for this complex reconstruction of the secondary sewing. The decision was therefore taken to temporarily secure the metal thread to the repaired primary-sewn endband cores with simple individual tackets of linen thread.
The plied metal threads forming the separate rows were first aligned and, where possible, pushed closer together on the green silk warp threads to form a compact endband once more. Individual thread tackets were then loosely tied over the secondary endband, around the primary cores, with a very fine linen thread to hold these in place. These were worked from where the endband was most intact, continuing towards the unravelled areas and aligning the endband in the process, at about 5 mm intervals. Once all the tackets were in place, the volume was gently opened to check that the inevitable flexing of the primary cores did not strain the secondary endband that was now secured. The tackets were then tightened and knotted to hold the secondary endband whilst allowing for some movement. The loose ends of thread were cut and each tacket was turned to move the knot beneath the lower primary core, Fig. 8.
REPAIRS OF THE INTERLACED CLASP-STRAP
The binding retained both of its triple interlaced leather fore-edge clasp-straps, but two of the strands of the lower strap were broken at the exterior face of the board. It is rare for these clasps to survive, and increased handling in preparation for the exhibition put the strap at risk. To make the broken clasp-strap secure, a repair was required that would bridge the break and make it functional. For both of the broken strands, a blunt needle was inserted just above the fourth interlace point from the break, under the upper of the two straps which form each element of the triple strap. A thin linen thread was pulled through the strap and wrapped around the edges of the strap under the chevron which is formed by the interlace technique, so that it could not be seen on the outer surface, Fig. 9.
Fig. 6 Completed repaired and re-sewn tail primary endband.
Fig. 7 Continuous bridling repairs used to bridge breaks in head endband cores at both joints.
This was repeated twice and the resulting six threads were then plaited together on the reverse of the strap to give a tail. The broken end of the strap was whipped to the new plait with a fine linen thread. The new plaits attached to the broken interlace strap were then pulled through the existing holes in the board in front of, and beneath, the broken strap stubs and tied inside, Fig. 10. The loose ends of the new thread were frayed out and pasted under the lifted edges of the parchment.
CONCLUSIONS
The treatment allowed the manuscript to travel safely to its exhibition, but it is not finished. The tight timescale and absence of a conservation workshop precluded some types of work and this will need to be completed when the new workshop is built. Work to reconstitute the secondary endbands remains to be done and the manuscript requires a leather re-back, with the original spine either mounted on this spine or housed separately. This work proved to be challenging for the authors, but also rewarding. The close study of the structure of Greek-style bindings carried out as part of the condition survey of the Monastery's manuscripts, as well as the direct observation of this manuscript, provided models upon which these repairs were based. The in situ repair of the sewing, sewn board attachment and primary endbands was accomplished using techniques adapted from those used by the original binder, and the ability to repair selected parts of damaged structures within Greek-style bindings is a new and exciting avenue for conservators more accustomed to western binding structures. This successful treatment, carried out in less than ideal surroundings, also shows possible approaches that will be useful for the future planning, scheduling and execution of conservation in this important collection.
The use of digital photography, work diaries, and the collaboration of a team of two conservators have all proved to be a useful approach to maximising conservation impact with limited time available, while dealing with a remote desert location.
ACKNOWLEDGEMENTS
All photographs are by permission of the Holy Synaxis of the Monastery of St Catherine, Sinai and Ligatus Research Unit, University of Arts, London. The authors would like to thank Nancy Turner (Conservator, J. Paul Getty Museum, Los Angeles), who read and commented on a draft of this paper. Deborah Evetts (Private Conservator, Connecticut) discovered the value of floss threaders for book conservation, and the suggestion to use them in the treatment described here came from Chris Clarkson.
REFERENCES
1 Martin, J.R., The Illustrations of the Heavenly Ladder of John Climacus, Studies in Manuscript Illumination V, Princeton University Press, Princeton NJ (1954).
2 Weitzmann, K. and Galavaris, G., The Monastery of Saint Catherine at Mount Sinai. The Illuminated Greek Manuscripts. Vol. 1: From the Ninth to the Twelfth Century, Princeton University Press, Princeton NJ (1990).
3 Nelson, R.S. and Collins, K., Holy Image, Hallowed Ground: Icons From Sinai, Getty Publications, Los Angeles (2006).
4 Manafis, K.A. (ed.), Sinai: Treasures of the Monastery of Saint Catherine, Ekdotike Athenon, Athens (1990).
5 Boudalis, G., 'Endbands in Greek-style bindings', The Paper Conservator 31 (2007) 29–49.
6 Sarris, N., Classification of Finishing Tools in Byzantine/Greek Bookbinding: Establishing Links for Manuscripts from the Library of the St. Catherine's Monastery in Sinai, Egypt, PhD dissertation, Camberwell College of Arts, University of the Arts, London (forthcoming).
7 Federici, C. and Houlis, K., Legature Bizantine Vaticane, Istituto Centrale per la Patologia del Libro, Rome (1988).
8 Szirmai, J.A., The Archaeology of Medieval Bookbinding, Ashgate, Aldershot (1999) 62–92.
9 Ligatus Research Unit, 'An English/Greek terminology for the structures and materials of Byzantine and Greek bookbinding', http://www.ligatus.org.uk/glossary (accessed 8 January 2010).
10 Pickwoad, N., 'The condition survey of the manuscripts in the monastery of Saint Catherine on Mount Sinai', The Paper Conservator 28 (2004) 33–61.
11 Pickwoad, N., McAusland, J. and Kalligerou, M., 'The Cell 31A project', http://www.ligatus.org.uk/stcatherines/sites/ligatus.org.uk.stcatherines/files/cell31A.pdf (accessed 7 August 2009).
12 Clarkson, C., private conservator, Oxford, personal communication, November 2005.
Fig. 8 Repaired head primary endband from back with tackets used to secure secondary endband.
Fig. 9 Broken clasp strap, plaiting new threads to provide an extension.
Fig. 10 Pulling the plaited thread extension through a lacing hole with a floss threader.
AUTHORS
Andrew Honey is a book conservator at the Bodleian Library in Oxford. He graduated from Camberwell College of Arts in 1994 with a BA (Hons) in Paper Conservation and studied the conservation of rare books and manuscripts at West Dean College under Christopher Clarkson from 1995–1997. He has been a visiting research fellow at the Ligatus Research Unit, University of the Arts London since 2005, working for the Saint Catherine's Monastery Library Project. Address: Bodleian Library, Broad Street, Oxford, OX1 3BG, UK. Email: andrew.honey@bodleian.ox.ac.uk
Professor Nicholas Pickwoad is the director of the Ligatus Research Unit, University of the Arts London and project leader of the Saint Catherine's Monastery Library Project. After gaining a doctorate at Oxford University he trained with Roger Powell and has been advisor on book conservation to the National Trust since 1978. He was editor of volumes 8–13 of The Paper Conservator, taught book conservation at Columbia University, New York from 1989–1992 and was chief conservator in the Harvard University Library from 1992–1995.
He teaches courses in Europe, Australia and America on the history of European bookbinding. In 2009 he was awarded the Royal Warrant Holders Association's Plowden Medal. Address: Ligatus Research Unit, The University of the Arts London, Wilson Road, London, SE5 8LU, UK. Email: npickwoad@paston.co.uk
work_kmocxtjr65gdjljarca3wnou3e ----
Non-genomic action of beclomethasone dipropionate on bronchoconstriction caused by leukotriene C4 in precision cut lung slices in the horse
M. Fugazzola, Ann-Kristin Barton, F. Niedorf, M. Kietzmann and B. Ohnesorge
BMC Veterinary Research 8 (2012) 160. DOI: 10.1186/1746-6148-8-160
Background: Glucocorticoids have been proven to be effective in the therapy of recurrent airway obstruction (RAO) in horses via systemic as well as local (inhalative) administration. Elective analysis of the effects of this drug on bronchoconstriction in viable lung tissue offers an insight into the mechanism of action of the inflammatory cascade occurring during RAO, which is still unclear.
The mechanism of action of steroids in treatment of RAO is thought to be induced through classical genomic…
work_koahckmutjgttf5zgolv5rbgha ----

Provided by the author(s) and University College Dublin Library in accordance with publisher policies. Please cite the published version when available.

Title: Wheeltracking Fatigue Simulation of Bituminous Mixtures
Authors: Hartman, Anton M.; Gilchrist, M. D.; Nolan, D. B.
Publication date: 2001-06-15
Publication information: Road Materials and Pavement Design, 2 (2): 141-160
Publisher: Informa UK (Taylor & Francis)
Item record/more information: http://hdl.handle.net/10197/5964
Publisher's statement: This is an electronic version of an article published in Road Materials and Pavement Design (2001) 2(2): 141-160. Road Materials and Pavement Design is available online at www.tandfonline.com, DOI: http://dx.doi.org/10.3166/rmpd.2.141-160.
Publisher's version (DOI): 10.3166/rmpd.2.141-160
© Some rights reserved. For more information, please see the item record link above.

Wheeltracking fatigue simulation of bituminous mixtures

A.M. Hartman, M.D. Gilchrist (✉), D. Nolan
Mechanical Engineering Department, National University of Ireland, Dublin, Belfield, Dublin 4, Ireland
Michael.Gilchrist@ucd.ie

ABSTRACT: In order to better simulate the dynamic effects of a rolling wheel travelling over an asphalt pavement, and to better understand the initiation and growth of cracks in bituminous pavement layers, a wheeltracking fatigue simulation facility was developed.
This experimental facility permitted the testing of large slab specimens (305x305x50 mm) using dynamic wheel loadings. The slab specimens were supported on a soft elastomeric foundation that simulated overlay behaviour on top of a weak pavement structure. Digital photography and image analysis techniques were utilised to monitor the initiation and propagation of fatigue cracks on the bottom of these slabs. Two standard Irish mixtures, a Dense Base Course Macadam (DBC) and Hot Rolled Asphalt (HRA), were evaluated with the fatigue simulation facility. Crack damage was seen to initiate on the bottom face of the slab specimen in a direction parallel to the direction of wheel travel. These cracks would interconnect to form a full width crack that propagated through the depth of the slab. Under similar loading conditions the DBC mix had significantly lower fatigue strength (by two orders of magnitude).

KEY WORDS: bituminous mixture, fatigue, crack growth, wheeltracking

1. Introduction

Fatigue testing of bituminous road building materials tends to focus on characterising the fatigue behaviour of relatively small specimens in the laboratory using mainly uniaxial loading conditions [GIL 97, HAR 00]. The effect of a rolling wheel is very difficult to imitate in these experiments and boundary conditions lead to a pattern of crack initiation and propagation that can be quite different to what is observed in-situ. While these simplified laboratory tests provide information on the relative fatigue behaviour of different mixtures, the success with which this information can be related to behaviour under in-situ tri-axial loading conditions remains questionable [HAR 01a, HAR 01b, OWE 01]. Although laboratory scaled wheeltracking equipment has been used extensively to simulate the development of permanent deformation, its application in fatigue studies has been limited.
Commercially available permanent deformation wheeltrackers include the Cooper Research Technology Wheeltracker and the LCPC wheeltracking machine. With these devices, slab specimens are supported on solid foundations that prevent slab flexure and promote failure through densification and shear flow of the tested material. On a larger scale, numerous accelerated loading test facilities exist throughout the world [TAB 89, GIL 97]. These facilities test full-scale pavements and allow the interaction between different forms of pavement distress to be studied. They do, however, involve enormous initial investment and high annual operational and maintenance costs.

In 1975 Van Dijk [VAN 75] pioneered the laboratory scale fatigue wheeltracking experiment by performing a series of wheeltracks to validate beam fatigue test results on a wheeltracker developed at the Shell Amsterdam laboratories. Since then wheeltracking work has been reported from Texas [LEE 93], Nottingham [ROW 96], and elsewhere [HAR 99, SAI 00]. Fatigue simulating rolling wheel test configurations that have been developed include (i) those based on reversing wheels, where a wheel traverses back and forth across a section of pavement, (ii) continuous wheels which travel in the same direction at a constant velocity along a section of pavement and (iii) revolving wheels which travel at a constant velocity along a circular section of pavement.

2. Design and construction of facility

While developing the design specifications for a fatigue simulation wheeltracking facility, the basic concepts of the test were evaluated with a prototype tracker. The experimental arrangement, shown in Figure 1, consisted of a wheelfixture that was mounted in a standard shaping machine. The reciprocating action of the shaping machine pushed the wheel over a slab specimen that was housed in a clamping box fixed to the load table of the machine.

Figure 1.
Development of design specifications for fatigue simulator using a prototype tracker based on the reciprocating action of a shaping machine. [Figure labels: displacement clamp; loading arm; loadcell; wheel fixture; slab specimen; clamping box; arm movement.]

From these initial tests the following conclusions were drawn:

- In order to induce fatigue cracks that are similar to what is observed in-situ, the slab specimen must be clamped onto solid side supports, as shown in Figure 2 (c). A slab that is only supported by an elastomeric cushion (Figure 2 (a)) results in excessive deformation, slab movement during tracking and very localised cracking. Clamping the slab directly onto a cushion (Figure 2 (b)) induces forces that counteract the load of the rolling wheel.

- Monitoring cracks on the bottom of slab specimens is very labour intensive. Consequently, it was considered necessary that the design of the facility should permit easy dismantling of the slab so that the underside, from where damage initiates, could be inspected and photographed at intervals.

- A reciprocating wheel action complicates any subsequent interpretation of results. This type of loading doubles the number of wheelpasses over the centre of the specimen; this occurs at higher loading speeds than over any other section of specimen and the direction of loading is not uniform. Over the short distance (305 mm) available on the particular standard slab specimen, a segment of the wheel is in constant contact with the slab: this causes both the wheel and slab to heat up and ultimately leads to excessive permanent deformation.

- The shaping machine was operated using displacement control: this resulted in rapid load drop-off and reduced crack growth. In order to perform fatigue tests within a reasonable time span a load controlled loading mode should be used. This is probably most easily achieved through a dead weight system.
With due consideration for the aforementioned design features, an experimental facility that included four pavement monitoring stations was designed around a circular wheel tracking arrangement. The essence of the design involved the use of an existing circular motor driven table which spun two preloaded wheel fixtures around a track, which was fitted with four sample stations as shown in Figure 3.

Figure 2. Schematic illustration of three different slab clamping mechanisms investigated with associated deformed shapes: (a) slab lying freely on rubber foundation; (b) slab clamped down directly on rubber foundation; (c) slab clamped on sides parallel to wheel travel. [Figure labels: deformed slab shapes; solid side support; clamping force; wheel load.]

This particular construction ensures that the simulated direction of travel across each pavement specimen is always identical and that the simulated traffic density and velocity, and the weight of axle loadings can also be maintained constant during testing. The substrata conditions under each pavement specimen are made from an elastomeric cushion that can be modified easily by using appropriate materials having different stiffness characteristics.

Figure 3. General layout of rotating fatigue testing facility. Four sample stations house pavement specimens on elastic foundations. (Top left) View of facility during tracking. (Top right) View of facility during crack monitoring with slab specimens turned on their sides and camera box inserted above one slab station. [Figure labels: rotating table; sample station; track; weights to increase load; wheel fixture.]

The facility was designed to operate on a dead load system with the rotating arms linked through hinged joints to the wheel fixtures. This ensures that the load automatically follows the deformation of the sample, ensuring a load control mode of loading.
The large arc of the track, 3 m diameter, reduces the effect of cornering on the specimen, while the influence of centrifugal forces is minimised by the relatively slow operating speed (approximately 3 km/h, or 7 rpm). The large track diameter results in near linear travel over the slab and crack failure patterns that are parallel to the clamped specimen sides. Due to the long track length (9.4 m) the wheel is in contact with a given slab for only approximately 13% of the time. This, together with the better heat dissipation characteristics of the wooden track, significantly reduced the influence of heating on permanent deformation.

Miniature wheels, 200 mm in diameter with solid rubber tyres, were fitted to the fixtures on the ends of the rotating arms. The width of the wheelpath is 36 mm. The size of the contact area can be varied by changing the dead weight fixed to each wheel fixture and was measured using static tyre imprints. Table 1 summarises the contact areas and tyre pressures associated with the three different loads that were used during testing.

Load (kg)   Contact area (mm²)   Tyre surface contact pressure (kPa)
55          1010                 545
80          1150                 695
100         1290                 775

Table 1. Summary of contact area and tyre pressure for the three load settings.

Four sample stations, each of which housed a pavement specimen, were incorporated into the track, in order to facilitate the simultaneous testing of multiple samples. An additional four sample stations, giving a total of eight, could be incorporated easily into this design, to double the number of specimens tested in a given period. However, the present study is only concerned with the arrangement of four specimens. Each specimen is clamped in an aluminium frame along its edges, parallel to the tracking direction. This allows the specimen to deform under the influence of the dynamic load. The constraining clamps are fixed to the foundation box, which contains an elastomeric foundation 170 mm deep with an elastic stiffness of 2 MPa.
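As a rough cross-check on Table 1 (this calculation is not from the paper), the nominal mean contact pressure mg/A can be computed for each dead load; it lands within about 2% of the tabulated imprint pressures, which is consistent with values measured from static tyre imprints:

```python
# Rough consistency check on Table 1 (not from the paper): nominal mean
# contact pressure p = m*g / A for the three dead-load settings.
G = 9.80665  # standard gravity, m/s^2

def contact_pressure_kpa(mass_kg, area_mm2):
    """Mean contact pressure (kPa) of a dead load over a tyre imprint area."""
    force_n = mass_kg * G
    area_m2 = area_mm2 * 1e-6
    return force_n / area_m2 / 1000.0

for mass, area, reported in [(55, 1010, 545), (80, 1150, 695), (100, 1290, 775)]:
    computed = contact_pressure_kpa(mass, area)
    print(f"{mass} kg over {area} mm^2: {computed:.0f} kPa (Table 1: {reported} kPa)")
```

The small gap between mg/A and the tabulated values is unsurprising for pressures derived from static imprints.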
This represents a very weak foundation support but it does permit the pavement specimen to bend under the wheel load without deforming under the influence of gravity.

In order to facilitate and simplify the capturing of images of pavement damage on the bottom face of the slab, the aluminium frame holding the specimen was designed with a hinged joint as shown in Figure 4. By releasing two lock clips, the frame flips back to the vertical position in one single movement, thus revealing the underside of the slab. This allows the insertion of a camera box, housing a digital camera and light source to monitor and record any visible damage.

Figure 4. Layout of a sample station. The foundation box houses the elastic foundation material while the slab specimen is clamped in the aluminium frame. The hinge joining the components allows the specimen to be turned onto its side so that the bottom face can be inspected with ease. [Figure labels: clamp plate; aluminium frame; slab specimen; elastomeric cushion; foundation box.]

In order to guide the wheel onto the slab the track was precisely aligned with the top face of the specimen. As the structural integrity of the slab deteriorates, a slight permanent deflection develops across the slab. The main purpose of the present tests was not to isolate fatigue from the effects of creep, but rather was to simulate the initiation and development of fatigue damage within an actual pavement, which would naturally be subjected to both fatigue and creep. The permanent slab deflection across the wheel path was monitored at the centre of the slab during the tests. Most specimens were found to be severely cracked or to have failed catastrophically by the time a significant deformation was reached.

3. Visual monitoring of slab cracking

A procedure using digital image photography was used to monitor the crack development on the bottom face of the slab specimens in the present investigation.

3.1. Image capturing

Prior to tracking, the undersides of slab specimens were painted with a quick drying road marking spray-paint. Any cracks or damage were therefore anticipated to show up as black against a white background. A number of thin paint layers were applied until a completely white surface was obtained. A 100x100 mm square area was marked directly underneath the wheelpath in the centre of each slab. All crack monitoring was concentrated on this area while all strain monitoring devices were situated away from it. Fatigue damage that initiates on the underside of the pavement samples inside the square area was monitored using two image capturing methods, namely, digital photography and hand traced images that were subsequently digitised.

At discrete intervals throughout a test, tracking was stopped and the specimens were turned on their sides. A specially constructed camera housing was then inserted over the sample stations to provide identical photographic distance and lighting conditions during image capture. Digital photographs of the underside of the samples were captured with an Olympus C-840L digital camera. Clear transparent sheets were then placed over the monitoring areas and all visible crack defects marked by hand. The transparencies were subsequently digitised with an optical scanner and reused to monitor cumulative damage. A schematic illustration of the image capture arrangement is shown in Figure 5.

3.2. Image processing

The manually traced cracks were scanned as a greyscale image. Selection of the monitoring box (100x100 mm square area = 600x600 pixels), clearing of the background and storing of the image data in a compressed format (.JPG) were done using the Ulead PhotoImpact image processing package [ULE 96].

Figure 5. Schematic of arrangement for digital photography: S = sample; C = camera with frame grabber; L = light source; I = image processing.
The digital photographs were all in full colour and were 640x480 pixels in size. They also included the area of the slab around the monitoring square area, as shown in Figure 6. The monitoring area was 230x230 pixels in size and this was converted to greyscale. The greyscale images were imported into UTHSCSA Image Tool [UTH 96] for image processing and analysis. The dimensions of the monitoring area (100x100 mm) were calibrated against the pixel size so that direct dimensional measurements could be established for the digital images. The greyscale images were converted into a binary format (consisting of only pure white and pure black pixels) by applying a threshold value. The selection of this threshold value was achieved by applying a range of thresholds and manually selecting the particular value that best represented the crack damage. This same threshold was then applied to all subsequent images.

Figure 6. Manipulation of the original image to select the monitoring area, convert the image to greyscale and apply a threshold to obtain a binary image for crack analysis. [Panels: colour original (not shown here); greyscale box (100x100 mm); binary box (100x100 mm).]

3.3. Image analysis

To calculate the total area of cracks, an algorithm that counts the number of black pixels was applied to the images that were captured with the digital camera. The spatial calibration of this data allowed the area to be given in mm². With regard to the images obtained from the manually traced transparencies, no distinction was made between narrow and wide cracks and all visible defects were recorded with a marker of standard width. Using the same analysis routine to calculate the amount of black pixels, but normalising the area against the width of the tracing marker, the actual total crack length was determined. Measurements of crack direction were also made on the final traced crack network pattern. An image analysis subroutine was used to identify cracks as objects.
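The quantities described in Section 3.3 (crack area from black-pixel counts with spatial calibration, crack length from the traced transparencies normalised by the marker width, and an effective crack direction) could be sketched in NumPy roughly as follows. This is not the authors' actual Ulead PhotoImpact / UTHSCSA Image Tool routines: the function names are illustrative, and the length-weighted average is one plausible reading of the length normalisation described in the text.

```python
import numpy as np

def crack_area_mm2(grey, threshold, box_size_mm=100.0):
    """Total crack area: count 'black' pixels in a greyscale image of the
    painted (white) slab underside and apply the spatial calibration.

    grey: 2-D uint8 array (0 = black, 255 = white).
    threshold: pixels at or below this grey level count as crack.
    box_size_mm: side length of the square monitoring area.
    """
    binary = grey <= threshold                        # thresholded binary image
    h, w = grey.shape
    mm2_per_pixel = (box_size_mm / h) * (box_size_mm / w)
    return float(binary.sum()) * mm2_per_pixel

def crack_length_mm(traced_grey, threshold, marker_width_mm, box_size_mm=100.0):
    """Total crack length from a hand-traced image: black area divided by
    the constant width of the tracing marker."""
    return crack_area_mm2(traced_grey, threshold, box_size_mm) / marker_width_mm

def effective_crack_direction(angles_deg, lengths_mm):
    """Length-weighted mean of per-crack major-axis angles (degrees from
    the horizontal), one reading of 'normalised to the length of each crack'."""
    angles = np.asarray(angles_deg, dtype=float)
    lengths = np.asarray(lengths_mm, dtype=float)
    return float((angles * lengths).sum() / lengths.sum())
```

For a 600x600-pixel scan of the 100x100 mm monitoring box, each pixel then represents (100/600)² ≈ 0.028 mm².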
The major axis angle, i.e., the angle of the longest line drawn through the object to the horizontal, was determined for each object. The major axis angle, normalised to the length of each crack, represents the effective crack direction of each crack.

4. Strain monitoring

The key parameter that is used in asphalt overlay design is the horizontal tensile strain, the critical value of which is at the bottom of the asphalt layer [ULL 87]. Methods for measuring strain are based on various mechanical, optical, acoustic, pneumatic and electric phenomena. One of the most common methods of measuring strain in engineering applications is the use of wire or foil strain gauges. For pavement applications strain gauges are generally selected on the basis of their gauge length, which is based on the maximum aggregate size of the mixture.

To monitor the horizontal tensile strain in the slab specimens, 120 ohm foil strain gauges with a gauge length of 10 mm were used. To eliminate the influence of aggregates the gauges were glued to thin (0.3 mm) brass sheets (12.5x40 mm). Brass has a stiffness similar to bituminous mixtures (±2000 MPa) and the brass strips were expected to deform in unison with the slab material. Figure 7 shows the basic location of the gauges on a slab. Two gauges were fixed to each slab, one on either side of the crack monitoring square normal to the centre line that runs parallel to the tracking direction. Consequently the gauges measured strains transverse to the tracking direction.

Figure 7. Strain gauges were positioned on either side of the crack monitoring area, along the centre line of the slab and in a direction perpendicular to wheel travel.

Strain levels in the slabs can be changed either by adjusting the load, the specimen thickness or the foundation stiffness. During tracking, both the dynamic strain and the total strain were monitored.
The dynamic strain is the per-cycle strain amplitude of a sample whereas the total strain present within a sample includes strain associated with permanent deformation, as illustrated in Figure 8. A typical strain pulse as sensed by the two gauges when the wheel moves over a specimen is presented in Figure 9. Gauge 1 is positioned on the wheel entry side of the slab and accordingly reaches a peak value before Gauge 2. The wheel remains in contact with the slab for approximately 0.4 s. As the wheel moves off the slab, a permanent level of strain remains in the slab and this dissipates in accordance with the viscoelastic characteristics of the bituminous mixture.

[Figure 7 labels: slab centre line; crack monitoring area (100x100 mm); micro crack; macro crack; brass sheet; foil strain gauge; tracking direction.]

Figure 8. Typical development of dynamic and total strain during a fatigue test. The viscoelastic response of the bituminous pavement during a given cycle is apparent, together with the gradual increase in strain over numerous load cycles (i.e., permanent deformation).

Figure 9. Typical recorded levels of strain as the loaded wheel moves over a slab specimen.

5. Experimental test programme

The composition of the two mixtures used in the present investigation is summarised in Table 2. They represent two different grading profiles, namely, an open graded sandy mix (HRA, [BS 92]) and a continuous grading (DBC, [BS 93]), as shown in Figure 10. For both of the mixtures, binder contents towards the lower end of the design spectrum were purposely chosen to obtain fatigue prone mixtures. Slab specimens (305L x 305W x 50D mm) were compacted with a laboratory roller compactor and cured for 24 hours at 60°C after compaction.
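The split between dynamic strain (the per-cycle amplitude) and the residual strain associated with permanent deformation, as illustrated in Figures 8 and 9, can be sketched as a simple decomposition of a recorded gauge trace. This is an illustrative sketch under assumed conventions (amplitude measured from the strain at the start of each wheel pass, residual taken at the end of the pass), not the authors' actual data reduction:

```python
import numpy as np

def strain_components(trace, cycle_bounds):
    """Split a strain-gauge trace into per-cycle dynamic amplitude and the
    residual strain left after each wheel pass.

    trace: 1-D array of gauge readings.
    cycle_bounds: (start, end) sample indices, one pair per wheel pass.
    Returns (dynamic, residual), one value per cycle.
    """
    dynamic, residual = [], []
    for start, end in cycle_bounds:
        pulse = trace[start:end]
        dynamic.append(pulse.max() - pulse[0])  # per-cycle strain amplitude
        residual.append(pulse[-1])              # strain remaining after the pass
    return np.array(dynamic), np.array(residual)
```

Plotting the residual series against cycle count would reproduce the gradual accumulation of permanent deformation shown in Figure 8, while the dynamic series corresponds to the per-cycle amplitude discussed in Section 6.2.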
Data that were monitored continuously during testing included measurements from strain gauges that were fixed to the bottom of the specimens, digital images of the cracked underside of the slab specimens, the manually traced crack patterns, the permanent slab deflection, and the total number of cycles to failure.

DBC - % by weight passing sieve
Sieve size:  20mm  14mm  10mm  6.3mm  3.35mm  300µm  75µm
% passing:   98    82    66    52     40      16     4.5
Bitumen content: 4.2% (100 Pen). Target void content: 6% (by volume).

HRA - % by weight passing sieve
Sieve size:  20mm  14mm  10mm  2.36mm  212µm  75µm
% passing:   100   97    80    66      30     10
Bitumen content: 7.3% (50 Pen). Target void content: 4% (by volume).

Table 2. Mix constituents.

Initially, the fatigue simulation experiments were carried out on one set of each of the investigated mixtures. The 100 kg load setting, which corresponded to a tyre contact pressure of approximately 775 kPa, was used. The HRA slabs had a fatigue life in excess of 200 000 load cycles and this was achieved after two full weeks of testing. The DBC slabs, tested at the same load level, failed after only 2000 load cycles. Since the fatigue simulation facility was situated within a large laboratory which did not have adequate temperature control, all subsequent tests focused on the DBC mixture as these could be tested within a relatively short time span and thus reduce the effect of overnight temperature changes. Two extra sets of DBC slabs were accordingly prepared and tested at the load levels shown in Table 3. Table 3 also identifies the recorded minimum and maximum slab temperatures during the tests. The HRA samples were tested at lower temperature conditions. Also, the 80 kg DBC specimens were tested at a slightly higher temperature. The lack of temperature control is a limitation of the present experimental set-up, which complicates minimising the effects of creep. Modifications to the set-up, which would include environmental temperature control, are planned for future experiments.

Figure 10.
Aggregate grading of bituminous mixtures under investigation.

Material   Load (kg)   Contact pressure (kPa)   Minimum temp. (°C)   Maximum temp. (°C)
HRA        100         775                      12.6                 18.6
DBC        100         775                      20.7                 23.4
DBC        80          695                      23.6                 25.3
DBC        55          545                      20.5                 23.1

Table 3. Summary of load settings and the specimen temperature range for fatigue simulation tests.

6. Results and discussion

6.1. Crack measurements

Figure 11 summarises the measurement of crack area while measurements of cumulative crack lengths are shown in Figure 12. The general manner in which cumulative crack length varied with fatigue cycles was to initially increase sharply and then increase much more gradually. The rapid initial increase of crack length typically represented 30% of the fatigue life of a specimen. On the other hand, the crack area increased gradually as the test began and progressed, and accelerated growth rates were observed close to specimen failure. In general, a larger area of cracking was observed on the DBC specimens. DBC slabs tested at the same load setting also had longer cumulative crack lengths. The total length of cracks measured in the DBC specimens also decreased with reduced test loads.

Rather than one main crack appearing under the wheel track, what was observed was that small individual cracks opened along the bottom of a sample; these interconnected and combined to establish a crack network. Once a full width crack had formed, this crack propagated through the depth of the slab. Two stages can thus be distinguished: a crack initiation stage, during which the network of cracks is formed, and a crack propagation stage, during which the cracks propagate through the depth of the slab and thus only appear to grow wider on the slab surface. The reduced rate of increase of cumulative crack length can thus be explained as crack propagation, during which the cracks increase in width and depth rather than any new cracks being formed.
The orientation of the final crack pattern to the direction of wheel loading was determined and this is summarised in Table 4. Figure 13 shows typical examples of the final crack pattern within both the HRA and DBC mixtures. Cracks tended to remain relatively parallel to the direction of wheel travel. Since the slabs were clamped on the sides that were perpendicular to the direction of wheel travel, this allowed slab bending and consequently cracking occurred primarily in that direction. As the test load decreased, the direction of cracking deviated slightly from the horizontal. The HRA and DBC slabs that were tested at the same loading conditions displayed similar crack orientations.

Effective crack direction, CDeff (°)
Material   Contact pressure (kPa)   Slab 1   Slab 2   Slab 3   Slab 4   Average
HRA        775                      15.3     14.6     15.5     18.7     16.0
DBC        775                      14.2     16.4     17.0     19.9     16.9
DBC        695                      23.3     25.2     31.5     19.0     24.8
DBC        545                      40.2     37.0     21.6     25.8     31.2

Table 4. Summary of effective crack direction measurements.

Figure 13. Examples of final crack patterns. [Annotations: HRA (775 kPa), CDeff = 15.3°; DBC (545 kPa), CDeff = 21.6°; CDeff measured relative to the tracking direction.]

6.2. Strain measurements

Figure 14 describes the typical manner in which dynamic strain varied after different loading intervals. The strain results from the other pavement samples follow a similar trend and this behaviour is identical to that observed by Rowe [ROW 96]. By correlating these strain measurements against crack behaviour, it is observed that the initial strain level remains relatively constant until small cracks begin to form on the bottom surface of the specimen. From this point onward, a continuing increase in the dynamic strain level implies that cracks are propagating underneath the gauge (Gauge 1 in Figure 14).
A decrease, however, indicates that significant cracks are forming close to the gauge and that the material in this region is absorbing the strain, causing the gauge to detect reduced levels of strain (Gauge 2 in Figure 14). Due to the local strengthening effect caused by the epoxy, a larger percentage (±80%) of readings similar to Gauge 2 were observed.

The HRA mixture had greater fatigue wheeltracking strength than the DBC mix. The initial dynamic strain levels are plotted against the number of wheel loadings in Figure 15. On two of the DBC slabs debonding of the gauges occurred and data associated with these specimens are omitted. Temperature does have an effect on the performance of bituminous mixtures [AIR 95]. The increased life exhibited by the HRA slabs is also thought to be a consequence of the lower temperature range to which they were exposed. The DBC specimens tested at the 80 kg load also showed relatively shorter lives compared to the other load settings. Significantly, these specimens were tested at a slightly higher temperature range than the DBC specimens that had been tested at both 100 kg and 55 kg.

Figure 14. Development of dynamic tensile strain under fatigue loading of DBC slab.

Figure 15. Initial tensile strain versus number of load repetitions for wheeltracking slab specimens.

7. Conclusions

For asphalt slabs that were simply supported on a weak elastomeric foundation and repeatedly tracked with a one directional dynamic wheel load, fatigue cracks were observed to initiate at the bottom surface of the slab and propagate upwards. Rather than one main crack appearing under the wheel track, small individual cracks opened along the bottom of the sample. These interconnected and combined to form a network of cracks. Once a full width crack had formed, this crack propagated through the depth of the slab. Typical behaviours that were observed during the development of crack area and crack length are illustrated in Figure 16.
Measurements of crack length on the bottom face of the specimen can be used successfully to characterise the initial stage of crack formation (Stage I). The crack propagation phase (Stage II), on the other hand, is more successfully characterised by changes in the area of cracking.

[Figure 15 annotations: R² = 0.71; DBC; HRA.]

Figure 16. Typical development of crack area and crack length during wheeltracking tests.

The orientation of cracks remained relatively parallel to the direction of wheel travel throughout the fatigue life of a specimen. As the test load decreased, the direction of cracking veered slightly from the horizontal and the total length of cracks decreased. Although fewer cracks were observed on the HRA slabs, the orientation of these cracks was similar to those in the DBC slabs that had been tested under the same loading conditions.

8. Acknowledgements

The authors would like to express their gratitude to Dr Ian Jamieson and Mr Frank Clancy of the National Roads Authority, to Mr Michael Byrne of Roadstone Dublin Ltd. and to Enterprise Ireland (Applied Research Grants HE/1996/148 and HE/1998/288) for sponsoring this research. The support provided by Ms Amanda Gibney and Messrs Michael MacNicholas and Tom Webster of the Department of Civil Engineering at NUI,D is gratefully acknowledged.

9. References

[AIR 95] Airey, G.D., "Fatigue testing of asphalt mixtures using the laboratory third point loading fatigue testing system." M.Eng. Thesis, University of Pretoria, South Africa, 1995.
[BS 93] BS 4987: Part 1, "Coated Macadams for roads and other paved areas." British Standards Institution, London, England, 1993.
[BS 92] BS 594: Part 1, "Hot Rolled Asphalt for roads and other paved areas." British Standards Institution, London, England, 1992.
[GIL 97] Gilchrist, M.D. & Hartman, A.M., "Performance Related Test Procedures for Bituminous Mixtures." Boole Press, Dublin, 1997. 214pp.
[HAR 99] Hartman, A.M., Nolan, D. & Gilchrist, M.D., "Experimental facility for simulating the initiation and propagation of fatigue damage in bituminous road paving materials." Key Engineering Materials, 167-168, 1999, pp. 27-34.
[HAR 00] Hartman, A.M., "An experimental investigation into the mechanical performance and structural integrity of bituminous pavement mixtures under the action of fatigue load conditions." PhD Thesis, University College Dublin, Ireland, 2000.
[HAR 01a] Hartman, A.M., Gilchrist, M.D. & Walsh, G., "Effect of mixture compaction on indirect tensile stiffness and fatigue." To appear in ASCE Journal of Transportation Engineering, 2001.
[HAR 01b] Hartman, A.M., Gilchrist, M.D., Owende, P., Ward, S. & Clancy, F., "In-situ accelerated testing of bituminous mixtures." Submitted to International Journal of Road Materials and Pavement Design, 2001.
[LEE 93] Lee, J., Hugo, F. & Stokoe, K.H., "Using SASW for monitoring low-temperature asphalt degradation under the model mobile load simulator (MMLS)." Research Report 1934-2, Centre of Transportation Research, University of Texas at Austin, USA, 1993.
[OWE 01] Owende, P.M.O., Hartman, A.M., Ward, S.M., Gilchrist, M.D. & O'Mahony, M.J., "Minimising distress on flexible pavements using variable tire pressure." ASCE Journal of Transportation Engineering, 127 (3), 2001, pp. 1-9.
[ROW 96] Rowe, G.M., "Application of the dissipated energy concept to fatigue cracking in asphalt pavements." PhD Thesis, University of Nottingham, England, 1996.
[SAI 00] Said, S.F., "The VTI wheel tracking test." Nordic Road and Transport Research, No. 1-2000, 2000, p. 19.
[TAB 89] Tabatabaee, N., Sebaaly, P. & Scullion, T., "Instrumentation for flexible pavements." FHWA Report RD-89-084, Washington, USA, 1989.
[ULE 96] Ulead Systems, "Ulead PhotoImpact Version 3.0SE." Ulead Systems International Inc., Taipei, Taiwan, 1996.
[ULL 87] Ullidtz, P., "Pavement Analysis." Elsevier Science, Netherlands, 1987.
[UTH 96] UTHSCSA Image Tool, 'Image Tool Version 2.0.' University of Texas Health Science Centre, Department of Dental Diagnostic Science, San Antonio, Texas, USA, 1996.
[VAN 75] Van Dijk, W., 'Practical fatigue characterisation of bituminous mixtures.' Proceedings of the Association of Asphalt Paving Technologists, vol. 44, 1975, pp. 38-74.

This is a copy of the author's final draft of an article published in the journal Waste Management, deposited in UPCommons (http://upcommons.upc.edu/handle/2117/108970). Published paper: Derqui, B.; Fernandez, V. The opportunity of tracking food waste in school canteens: guidelines for self-assessment. Waste Management, November 2017, vol. 69, p. 431-444, DOI: 10.1016/j.wasman.2017.07.030. © 2017. This version is available under the CC-BY-NC-ND 3.0 licence (http://creativecommons.org/licenses/by-nc-nd/3.0/es/).

The opportunity of tracking food waste in school canteens: guidelines for self-assessment

Derqui, B.; Fernandez, V.

Abstract

Reducing food waste is one of the key challenges of the food system and addressing it in the institutional catering industry can be a quick win. In particular, school canteens are a significant source of food waste and therefore embody a great opportunity to address food waste. The goal of our research is the development of guidelines for audit and self-assessment in measuring and managing food waste produced at school canteens.
The purpose of the tool is to standardise food waste audits to be executed either by scholars, school staff or by catering companies, with the objective of measuring and reducing food waste at schools. We conducted research among public and private schools and catering companies, from which we obtained the key performance indicators to be measured, and then pilot-tested the resulting tool in four schools with over 2,900 pupil participants, measuring plate waste from over 10,000 trays. This tool will help managers in their efforts towards more sustainable organisations, at the same time as the standardisation of food waste audits will provide researchers with comparable data. The study suggests that although there is low awareness of the amount of food wasted at school canteens, managers and staff are highly interested in the topic and would be willing to implement audits and reduction measures. The case study also showed that our tool is easy to implement and not disruptive.

Keywords

Food waste; School catering; Self-assessment tool; Sustainability metrics; Food waste audit; Awareness building; Food waste prevention.

1. Introduction

The global food system still has to solve deep problems in order to be truly sustainable. One of the key sustainability challenges brought up by researchers (e.g. Clarke et al., 2015; Finn, 2014; Garrone et al., 2014) in the last few years is waste. In particular, reducing food waste (FW) would aid the path towards a more sustainable global food system, as it would imply a more efficient (and ethical) use of scarce natural resources while helping reduce its significant environmental footprint (Buzby and Guthrie, 2002). This is particularly challenging in developed countries, as food waste is very closely related to individual behaviour and cultural attitudes towards food (Godfray et al., 2010).
Business managers are at present considered the major actors in implementing sustainable development, as opposed to some years ago, when the focus was on local authorities (Dyllick et al., 2002). In fact, many companies and institutions, particularly schools (Rickinson et al., 2016), have initiated a full set of sustainable development initiatives to address the demands of public and private stakeholders. With regard to food waste, progress has been slow, mostly due to lack of awareness (Finn, 2014). Hence, increasing visibility and awareness of food waste through audits is an obvious place to start. Once food waste has come to light, people will probably be willing to act against it, managers will probably become more concerned about its financial impact and kitchen staff about its social implications (Goonan et al., 2014). In any case, food waste auditing should be the starting point of a food waste awareness campaign. As schools are a natural place for education, and making the most of the near-universal attendance of school by children and the fact that they are on the premises for many hours a day (Dehghan et al., 2005), addressing food waste at school canteens becomes noteworthy. However, regulators, school managers, and catering companies very rarely concentrate on reducing food waste. Instead, they usually focus on analysing how effective nutritional programmes are (Wilkie, 2015). For this reason, most researchers have limited their studies on food waste at schools to the analysis of plate waste (PW), concerned with the nutritional value of effective dietary intake. Our research has a broader purpose, offering a more holistic approach to school food waste.
Indeed, standard criteria for measuring school catering food waste are novel in the literature, particularly as we propose to include both pre-consumer and post-consumer waste in our assessment tool, while most researchers in this area have focused their work on analysing plate waste (e.g. Adams et al., 2005; Byker et al., 2014; Cohen et al., 2013; Marlette et al., 2005; Rodriguez Tadeo et al., 2014). Moreover, through a standardised tool, researchers will be able to compare results and data from different studies. The goal of this research is to provide schools and educators as well as catering companies with a set of principles and tools that unveil and quantify food waste at school canteens and therefore facilitate the implementation of reduction measures and result tracking. With this purpose, we first analyse the nature and types of food wasted at schools as well as cafeteria managers' attitudes toward food waste, and end with the development of a self-assessment waste tool. This research has a very precise managerial implication. As a final outcome, a simple and easy-to-implement auditing tool has been developed. Through it, we aim to help managers and pupils in their efforts to increase the sustainability of the food system. The study is particularly relevant for schools with in-house kitchens, no matter if the service is outsourced (managed by a catering company) or not. Nevertheless, the tool could be applied to other business models too, with little modification. The scope of this research includes school canteens in both public and private schools. To achieve the goals of this research, we collected primary data from public and private schools in Spain.

2. Literature Review

2.1. The opportunity of addressing Food Waste in Institutional Feeding Systems
Food waste can be defined as all the products that are discarded from the food chain while still preserving their nutritional value and complying with safety standards (Falasconi et al., 2015). Estimates of the amount of food wasted globally are striking: FAO estimates that up to one third of global food produce is wasted, a fact that places food waste as one of the top challenges for global sustainability (FAO, 2011). In Europe, despite acknowledging that food waste is a data-poor area across the main sectors in which it arises, the European Commission has quantified current average annual food waste at 200 kilos per capita, stating that this figure will increase significantly in the next years if no action is taken. They therefore recommend member states to act, setting the objective of halving EU disposal of edible food by 2020 (European Union Committee, 2014). On the other hand, researchers mention that a big impact may be achieved when addressing food waste at places where many individuals dine together (Mirosa et al., 2016). This is especially true in the institutional catering industry (schools, hospitals and prisons) where, as underlined by Mirosa et al. (2016), many individuals dine similarly, and therefore both efficiency along the supply chain and plate waste can be addressed. Moreover, Goonan et al. (2014) state that food service institutions are big producers of food waste, mostly during service, but also as a result of overproduction. In particular, researchers state that school canteens embody a significant source of food waste (Adams et al., 2005; Smith and Cunningham-Sabo, 2014) and represent an ideal opportunity for minimising the food waste footprint (Wilkie, 2015).
Wilkie (2015) found food waste to be the predominant component in a school canteen waste audit in three schools in Florida (US): between 58% and 69% of total waste weight was food, far more than paper, plastic and glass wastage. In that research, mean daily food waste per pupil averaged between 60.1 and 95.33 g in schools with an in-house kitchen. Therefore we can state that the institutional catering industry represents an ideal opportunity to divert food waste from landfills thanks to its concentrated food waste stream: a high number of meals is served at a single location, so food waste is collected at only one location too (Wilkie, 2015). As a consequence, the institutional catering industry becomes crucial in the fight against food waste (Mirosa et al., 2016). Food waste at school canteens could be reduced through educating pupils and staff in order to change behaviours that cause food waste (Wilkie, 2015). Youths concerned about food waste were found, by Principato et al. (2015), to be more likely to reduce leftovers. Furthermore, we can assume that these improved behaviours and habits will prevail into their adulthood (Guthrie and Buzby, 2002). Mirosa et al. (2016, p. 12) found that one of the key reasons for consumers not to waste food was a cultural tradition: "those who had grown up with the belief that they need to clean their plates" produced less plate waste. These more sustainable habits could be passed on further and have an effect on the amount of waste produced by future generations (Mirosa et al., 2016). There is evidence in the literature of the effectiveness of waste reduction initiatives. For instance, Engström (2004) carried out research measuring the impact of a food waste reduction campaign in a school in Sweden, which resulted in a 35% reduction in plate waste compared to a baseline score.
It is also acknowledged by researchers that people with a high knowledge of issues related to food waste are more likely to avoid waste (Principato et al., 2015). Reducing food waste has obvious environmental and ethical benefits, and it also has relevant economic implications, as its associated costs relate not only to the procurement of food ingredients but also to disposal (Papargyropoulou et al., 2014). Moreover, both schools and families could save money by reducing food waste: pupils who eat more at school are less likely to spend money on substitutive products outside the canteen (Cohen et al., 2013).

2.2. Food Waste Auditing and Reporting

Good sustainability performance is linked to a full and honest commitment of management to sustainability and to the adoption of incentives, something that should be done by setting appropriate goals, monitoring and evaluating progress (Székely and Knirsch, 2005). As stated by Gerbens (2003), measuring tools that shed light on the sustainability performance of a firm turn out to be the very first move towards sustainability. More precisely, food waste inventories are claimed to be critical for the development of effective reduction initiatives and for monitoring progress over time (Hanson et al., 2016). Conducting a waste audit in both the preparation and display areas (kitchen and service line) as well as in the pupils' canteen is the first step towards reducing food waste produced at schools (Bradley, 2011).

2.2.1. Framework

The World Resources Institute (Hanson et al., 2016), together with partners such as WRAP, UNEP and FUSIONS, has developed a Global Food Loss and Waste Accounting and Reporting Standard aiming to provide guidance for governments and organisations to carry out inventories of food loss and waste. We have used this standard as a framework for waste auditing analysis.
As stated by the WRI, a Food Loss and Waste inventory must be based on the five principles of relevance, completeness, consistency, transparency, and accuracy (Hanson et al., 2016, p. 29). Relevance, because it should contain the necessary information for the intended user to make decisions and because the quantification method should be selected based on the specific goals to achieve. Completeness, because no relevant data or component should be excluded from the inventory, unless justified. WRI researchers go further, adding that auditing methods should be consistent, allowing comparable measurements over time in order to permit the identification of trends and the assessment of the performance of the audited institution. Transparency is gained by clearly reporting the quantification method. Finally, they acknowledge a trade-off between accuracy and completeness on the one hand and cost on the other, and suggest choosing the optimal method based on the needs and resources of the institution. Regardless of the objective and scope of the audit, entities should report on the following four elements (Hanson et al., 2016; World Resources Institute, 2016):

1. Time frame. The exact start and end date of the audit should be recorded. It is recommended to take seasonal variations into account when planning waste audits.
2. Boundary (organisation, geography, etc.) and particularities of the sample.
3. Scope (types of waste included). Records must include the type of food waste, the reason that caused it (e.g. overproduction, spoilage, trim waste…) as well as the estimate of loss (by weight or portions).
4.
Waste destination (where waste goes after being discarded) must be accounted for and reported, because there is a wide range of possible destinations for food waste with very different associated environmental impacts.

The WRI Food Loss and Waste standard (World Resources Institute, 2016) establishes that methods, estimates and possible bias must be clearly documented and disclosed in a neutral manner. The auditing system should also register who recorded the data. Moreover, Bradley (2011) strongly recommends that the results of the audit are shared and discussed with the kitchen team, and suggests that it could also be a great learning opportunity for pupils. Due to their interest and particularities, in this section we shall further develop both the scope of the audit and waste destination.

2.2.2. Audit Scope and Categorisation

The scope of the audit must be clarified before beginning to measure food waste. Papargyropoulou et al. (2014) mention the relevance of distinguishing between avoidable and unavoidable food waste as a key factor in a food waste prevention strategy. WRAP's definition of avoidable food waste includes food discarded because it is unwanted or has been allowed to pass its best (Ventour, 2008); avoidable food waste had therefore previously been edible, although it might or might not be edible at the time of disposal. Papargyropoulou et al. (2014) explain that avoidable food waste includes foods or parts of food usually considered edible, while unavoidable food waste is food that has never been edible, such as bones, fruit skins, etc. As described by WRAP, this includes waste from food that one would not expect people to eat (Wrap, 2011). Despite this classification being subjective, unveiling avoidable food waste reveals the substantial potential for food waste prevention (Papargyropoulou et al., 2014).
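The WRI reporting elements and the avoidable/unavoidable distinction discussed above can be collected into a minimal audit record. The following is an illustrative sketch only; the field names and example values are our own assumptions, not part of the WRI standard.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class FoodWasteRecord:
    # Time frame (element 1): exact start and end date of the audit
    start: date
    end: date
    boundary: str      # element 2: organisation/site, e.g. "School A canteen"
    category: str      # element 3: type of waste, e.g. "plate waste"
    reason: str        # element 3: cause, e.g. "overproduction", "spoilage"
    avoidable: bool    # avoidable vs unavoidable (Papargyropoulou et al., 2014)
    weight_kg: float   # element 3: estimate of loss by weight
    destination: str   # element 4: e.g. "landfill", "compost", "animal feed"
    recorded_by: str   # who recorded the data, as the standard requires

# Hypothetical record from a one-week audit
rec = FoodWasteRecord(date(2017, 5, 8), date(2017, 5, 12),
                      "School A canteen", "plate waste", "overproduction",
                      True, 12.4, "compost", "kitchen staff")
```

A flat record like this keeps every inventory entry traceable to the four reporting elements, which is what makes audits from different schools comparable.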
This leads us to the very first key characterisation when analysing food waste: whether it could possibly be avoided or not. Potentially avoidable waste might not have ended up as waste with better management, while inedible food constitutes unavoidable waste. Whether to quantify both food and associated inedible parts removed from the food supply chain when performing a waste audit, only food, or only associated inedible parts is to be decided depending on the purpose of the waste audit (Hanson et al., 2016). The vast majority of studies use some kind of further classification for the discarded food, usually related to the place or moment where waste is generated. Table 1 shows a few examples of classifications for avoidable and possibly avoidable waste used by researchers when analysing food waste.

Table 1. Characterisation of food waste by researchers, some examples

Additionally, as noted by Papargyropoulou (2014), distinguishing between food waste and food surplus is a must when addressing food waste: food surplus is food produced beyond our nutritional needs, while food waste is a consequence of food surplus. Proper meal planning will help caterers minimise food surplus, and therefore the planning process should in some way be included in a waste audit. With regard to plate waste, there is consensus in the literature on its definition (Mirosa et al., 2016). The term plate waste is used by researchers to refer to the amount of food served to pupils that is finally discarded. Its measures have been used with two main purposes: to decide how much food to prepare or order and, more importantly, to judge how well pupils accept the meals offered (Buzby and Guthrie, 2002) and assess their dietary intakes. On top of the above-mentioned classifications, most researchers measure food types in each of the previous categories separately.
Studies compared in Table 1 (study; sector; boundary):
- Derqui et al., 2016; food service; Spain
- Engström and Carlsson-Kanyama, 2004, p. 206; food service institutions; Sweden
- Ferreira, Martins, & Rocha, 2013, p. 1630; university catering; Portugal
- Falasconi et al., 2015; school catering; Italy
- Clarke et al., 2015, p. 2; consumer (households); USA
Pre-consumer weight categories used across these studies: storage losses; weight of raw and cooked food not distributed ("leftovers"); "avoidable" unserved food; losses during cooking and preparation; preparation losses (mostly seeds, peel, etc. from fruits and vegetables); serving loss (left on serving dishes and in canteen kitchens and food wells); "physiological" unserved food (cooked in excess to ensure some extra portions); food discarded due to preparation of too much food, expired use-by/open dates, or spoilage; leftovers (prepared food never served).
Post-consumer weight categories: plate waste (what the diner leaves on the plate); plate waste (items returned at tray collection, after scraping of non-edible discards such as bones, peels, etc.); food served but not consumed ("serving dish leftovers"); plate waste or loss.

Depending on the purpose of the study, food type classifications can be broad, like the one used by Byker et al. (2014) or Cohen et al. (2013), who classify food types into only four groups (main entree, fruit, vegetables and milk), or more detailed, like Marlette et al. (2005, p. 1), who mention plate waste by the specific food item, such as applesauce, green peas, etc., using a more comprehensive classification with 10 food type groups: (a) mixed dishes, (b) meats, (c) grains, (d) milk, (e) cheese, (f) vegetables, (g) fruits, (h) sweet snacks, (i) savoury snacks and (j) beverages. Moreover, as mentioned before, other researchers use the nutrient content of food for their analysis instead of food types (e.g. Bergman et al., 2004).
2.2.3. Waste Destination

Whenever the goal of the audit includes an analysis of environmental impacts, or at least an increase in awareness of the food waste environmental footprint, waste destination should be recorded. The environmental impact of food waste varies greatly depending on how it is discarded (Creedon et al., 2010; Papargyropoulou et al., 2014). Typical destinations of food waste can be landfills, animal feed, anaerobic digestion, biomaterial and compost, among others (Hanson et al., 2016). In fact, destinations differ significantly, from the most favourable to the least favourable environmental option in the waste management hierarchy (Papargyropoulou et al., 2014). Using the waste hierarchy as a framework, Papargyropoulou et al. suggest different options for dealing with food surplus and food waste, where food surplus prevention is at the highest level of the pyramid. At the following step they suggest redistribution for human consumption, animal feed and compost. Finally, at the lower levels, they list the worst environmental options, such as energy recovery (e.g. anaerobic digestion) and disposing of food waste in landfills, which they state should be used as the last option (Papargyropoulou et al., 2014). Following the above-mentioned hierarchy, Creedon et al. (2010) state that from an environmental perspective the best option would of course be not to produce food waste, preventing waste from over-preparation, over-trimming, etc. Secondly, they mention reusing food for feeding people, by reusing it in other meals, donating to the needy, or even diverting it to feed animals. Thirdly, they state that food waste should be recycled by composting or other processes.
Finally, landfill disposal emerges as the worst option for the environment and is at present regulated in many countries (Creedon et al., 2010).

2.3. Methods for measuring Food Waste

Most of the academic work on food waste in the catering industry has been conducted in schools or hospitals (e.g. Cohen et al., 2013; Williams and Walton, 2011) and is often focused on plate waste (Adams et al., 2005; Buzby and Guthrie, 2002), with researchers concerned with the nutritive intake of children as well as with the efficiency of school nutrition programmes (e.g. Adams et al., 2005; Marlette et al., 2005; Smith and Cunningham-Sabo, 2014). Quantification methods in the literature are diverse. Comstock (1979) analysed and compared seven methods of measuring plate waste in the institutional food service, classifying them into two groups: direct and indirect measures of waste, depending on whether waste was actually weighed or estimated. Direct (physical) measurement of plate waste is the method most commonly used by researchers aiming to measure food intake at schools, through the actual weighing of food discarded by children (e.g. Bergman et al., 2004; Cohen et al., 2013). Aggregate measures involve collecting all food waste and weighing the total bulk amount for a population (e.g. all meals from one sitting), while individual measures record either the total food remaining on each individual tray or the weights of each food component on each plate (Williams and Walton, 2011). Individual weighing is reported by researchers to be more accurate, although its high logistical burden is a relevant disadvantage and may make it difficult to implement without disrupting or delaying normal foodservice operations (Comstock, 1979; Jacko et al., 2007).
Furthermore, when measuring waste individually there is a high risk of children changing their consumption patterns if they are being observed, thus biasing results (Guthrie and Buzby, 2002; Jacko et al., 2007). Moreover, individual or aggregate measurements can be done selectively, that is, differentiating the weight of each food component, or non-selectively. Comstock (1979) criticised aggregate non-selective plate waste for not providing enough information and actually recommended aggregate selective plate waste, defending that it was fast, accurate and easy to learn while at the same time providing adequate information. Going further on aggregate measures of plate waste, Jacko (2007) recommends the plate-waste method, which he describes as follows: first, the mass of food being served is measured by weighing each item on the menu; then, after finishing eating, pupils are asked to discard individual food items into different labelled plastic tubs for waste (e.g. #1 beans, #2 bread, #3 meat,…). Then, total weight per item is recorded (net of the tub weight), obtaining the total amount of food waste. The difference between the mass of each item served and wasted is the estimated food intake. Jacko (2007) concluded from his research that there were no statistically relevant differences between the estimations of energy and nutrient intake in children at school obtained using aggregate selective or individual physical measurements of plate waste. Indirect measures include both visual estimation and dietary recall (named self-estimation of plate waste by Comstock (1979)). Although Comstock (1979) considered visual estimation by trained observers to be a non-obtrusive and not too time-consuming method, they did not recommend it, as its accuracy had not been adequately tested at that time. More recent research (e.g. Rodriguez Tadeo et al., 2014) has concluded that it can be a valuable method. Visual estimation is done based on different grading scales for plate waste; Comstock's is the most commonly used, with 6 grades: full plate, almost full plate, ¾ plate, ½ plate, ¼ plate and empty plate (Rodriguez Tadeo et al., 2014). Despite Buzby (2002) mentioning that ratings can differ among observers as a disadvantage of this method, Rodriguez Tadeo et al.'s (2014) research concluded that the visual scale was a reliable tool for measurement, while acknowledging the need for training catering staff as an inconvenience. Williamson (2004) performed research aiming to validate digital photography for measuring food portions (food served, food intake and plate waste), comparing it with direct visual estimations and weighed foods, and concluded that both the direct visual estimation method and digital photography results were highly correlated with actual weighed food and are therefore valuable methods, although both tended to slightly overestimate portion sizes compared with weighed food methods. Williamson (2004) supports the validity of both digital photography and direct visual estimation methods, based on the results of his research comparing both with actual weighing. He recommends digital photography for being less obtrusive and less disruptive in the eating environment. On the other hand, when using the dietary recall method, children are asked about the type and amount of food eaten. Despite this method being easy to implement and low cost, results are highly biased by children's ability to recall (Jacko et al., 2007), as well as by the fact that children may want to please educators (Buzby and Guthrie, 2002).
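Both families of methods described above reduce to simple arithmetic. A minimal sketch follows; the menu items, masses, tub tare weight and the mapping of Comstock's visual grades to waste fractions are all illustrative assumptions, not figures taken from the cited studies.

```python
# Direct measurement: Jacko's aggregate selective plate-waste method.
# Each menu item is weighed as served; discarded food goes into a
# labelled tub per item and is weighed net of the tub (tare) weight.
served_g = {"beans": 9500.0, "bread": 4200.0, "meat": 7800.0}     # mass served
tub_gross_g = {"beans": 2950.0, "bread": 1100.0, "meat": 1650.0}  # tub + waste
TUB_TARE_G = 400.0                                                # empty tub

waste_g = {item: tub_gross_g[item] - TUB_TARE_G for item in served_g}
intake_g = {item: served_g[item] - waste_g[item] for item in served_g}
plate_waste_pct = 100 * sum(waste_g.values()) / sum(served_g.values())

# Indirect measurement: Comstock's six-grade visual scale, mapped here
# to an assumed fraction of the served portion left on the plate.
COMSTOCK_FRACTION = {"full plate": 1.0, "almost full plate": 0.9,
                     "3/4 plate": 0.75, "1/2 plate": 0.5,
                     "1/4 plate": 0.25, "empty plate": 0.0}

def visual_waste_g(ratings, portion_g):
    """Estimated total plate waste for observer ratings of equal portions."""
    return sum(COMSTOCK_FRACTION[r] * portion_g for r in ratings)

estimate = visual_waste_g(["1/2 plate", "empty plate", "1/4 plate"], 300.0)
```

The aggregate variant needs only one weighing per tub per sitting, which is why Comstock and Jacko report it as far less disruptive than weighing every tray.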
Comstock (1979) criticised both food preference questionnaires and self-estimation for not being reliable. Table 2 summarises the pros and cons mentioned by researchers of the different measurement methods, based on Comstock's (1979) classification of methods into direct or indirect measures of waste.

Table 2. Methods for measuring food waste

DIRECT MEASURES OF WASTE
- Individual plate waste. Advantages: accuracy; specific information provided (e.g. by sex, age, etc.). Disadvantages: high cost; time consuming; biased results.
- Aggregate, selective. Advantages: little disruption; easy to learn. Disadvantages: no specific information provided by pupil.
- Aggregate, non-selective. Advantages: fast and easy. Disadvantages: little information provided.
- Rubbish analysis. Advantages: non-obtrusive. Disadvantages: highly inaccurate; time consuming.

INDIRECT MEASURES OF WASTE
- Visual estimation (direct visual; digital photography). Advantages: non-obtrusive; non-disruptive. Disadvantages: time consuming; subjective ratings; need for training.
- Food preference; dietary recall. Advantages: easy to implement; low cost. Disadvantages: low accuracy; biased results.

Actually, the most accurate method for measuring food intake has been reported to be weighing foods before and after eating, although it is reported to be time consuming, costly and disruptive (Williamson et al., 2004). This said, it is interesting to recall Smith's (2014) research in which, in order to confirm observer reliability, he weighed 20% of pupil trays after consumption and compared the result with visually estimated plate waste using digital photography, resulting in a 92% agreement. This is consistent with the Environmental Protection Agency (EPA, 2014), which suggests that when there are space and time limitations, visual assessment may be more appropriate. Jacko et al. (2007) in their research suggest that an accurate measure of plate waste at schools should be done without direct contact with the children, because this could influence their behaviour and bias results.
They therefore recommend the use of aggregated methods. Comparing aggregate and individual methods of measuring plate waste, they found no significant statistical differences and concluded that aggregated selective plate waste measurements provide accurate results for groups of children without the complexity of implementing actual weighed food measurements (Jacko et al., 2007). However, individual plate waste data provide more specific information, such as correlations with sex and age (Jacko et al., 2007). Therefore, even when using an aggregate method it may be useful to measure a small part of the sample individually. Furthermore, to generate useful comparators when using aggregate methods, total recorded kilograms of waste are usually presented per pupil (Buzby and Guthrie, 2002).

2.4. Food Waste Research Objectives and Indicators in the Literature
Before going into the particularities of our research scope, school canteens, we used researchers' general recommendations on measuring food waste as a baseline. Food waste studies in the catering industry have been performed mainly in the education and health sectors.
Generally speaking, before performing a food waste audit, an entity should clearly define why it wants to quantify food waste. The results may be used for internal decision making, for reporting to institutional stakeholders, or to develop a food waste reduction policy or initiatives (Hanson et al., 2016). The way in which results are presented is closely related to the purpose of the audit; the most recurrent research objectives observed in our review of the literature are assessing novel nutritional policies and analysing the efficiency of the food system.
Food waste audit results are typically expressed by researchers through one or a combination of the following indicators:
• Plate waste weight in grams per pupil (e.g. Ferreira et al., 2013; Wilkie, 2015), calculated either as the mean of individual measures or, when an aggregate method is used, by dividing the total waste obtained in the audit by the number of diners. This output is useful for comparisons between different institutions.
• Plate waste index, calculated as the percentage by weight of served food that is discarded or eaten (e.g. Byker et al., 2014; Rodriguez Tadeo et al., 2014). This more explicit indicator is very often used for its conclusiveness and clarity. Ferreira et al. (2013) highlight that the plate waste index captures the interaction between the diner and the food, regardless of kitchen or system efficiency. Researchers in the literature present this indicator in either of two ways: as the percentage wasted (e.g. Marlette et al., 2005) or as the percentage consumed out of the total amount served (e.g. Cohen et al., 2013).
• Energy value of the waste, expressed as the percentage of nutrients consumed against nutrients offered (e.g. Bergman et al., 2004). This indicator is used when the purpose of the study is to assess the dietary intake of pupils, without considering the sustainability impacts of wasting food.
• Total kilos wasted (e.g. Wrap, 2011). This indicator is normally used together with average grams per pupil with the purpose of raising awareness, as big figures (kilos, tonnes) are more striking than grams.
• Monetary value of waste (e.g. Cohen et al., 2013), which is very seldom used by researchers because research objectives are rarely related to cost.
To determine the cost of plate waste, Buzby and Guthrie (2002) suggest multiplying the percentage waste estimate by the institution's total budget allocation for food, while acknowledging that this method does not adjust for differences in the costs of the food items wasted (e.g. bread vs. meat or processed food).
• Efficiency of the food service system (e.g. Falasconi et al., 2015), a ratio relating processed food (kg) to unserved food (kg and %). As stated by Ferreira et al. (2013, p. 3), the "Leftovers index" relates all food discarded in the food service process to the quantity of food consumed.

3. Research Objective and Methods

3.1. Objectives and Scope of the Study
We conclude from the literature that measuring food waste is relevant and that there is a need for a standardised method that eases its measurement and tracks its evolution over time. The development of a food waste measurement and reduction protocol has been strongly recommended by researchers such as Lipinski et al. (2013), who go further by suggesting the need to link it to reduction targets and to support collaborative initiatives to reduce food waste. Moreover, Wilkie (2015) states that before any food waste reduction or recycling initiative can be implemented, it is necessary to know the amount of food waste that is generated. With regard to plate waste, Jacko et al. (2007) observe that more and more schools are acting to prevent child obesity by initiating changes in dietary education programmes and lunch menus; consequently, it is vital to have an accurate, cost-effective, validated method to measure and track plate waste through which such changes can be assessed.
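The indicators reviewed in Section 2.4 can be computed from a handful of audit figures. A minimal sketch follows; all numbers are illustrative, not data from any cited study:

```python
# Illustrative sketch: computing three of the literature's waste indicators
# from one day's aggregate audit figures. All values below are hypothetical.
served_kg = 120.0        # total food served
wasted_kg = 30.0         # total plate waste collected
diners = 300             # number of pupils eating
food_budget_eur = 900.0  # assumed daily food budget of the institution

# Plate waste index: % by weight of served food that is discarded
plate_waste_index = 100 * wasted_kg / served_kg

# Grams of waste per pupil (aggregate method: total waste / diners)
grams_per_pupil = 1000 * wasted_kg / diners

# Buzby and Guthrie's (2002) rough cost estimate: waste share x food budget
waste_cost_eur = (wasted_kg / served_kg) * food_budget_eur

print(plate_waste_index)  # 25.0
print(grams_per_pupil)    # 100.0
print(waste_cost_eur)     # 225.0
```

As the source notes, the cost estimate is deliberately coarse: it does not adjust for the differing unit costs of the food items actually wasted.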
Given that food waste appears to be a challenge for schools on their path towards sustainability, and since, as stated by Székely and Knirsch (2005, p. 1), there is a need to establish clear, user-friendly methods and tools to measure the progress that companies are making toward sustainability, the availability of a food waste self-auditing tool becomes valuable. A standard criterion for measuring school catering food waste is novel in the literature, particularly as we propose to include both pre-consumer and post-consumer waste in our assessment tool, whereas numerous studies on school food waste focus on analysing plate waste alone (e.g. Adams et al., 2005; Byker et al., 2014; Cohen et al., 2013; Marlette et al., 2005; Rodriguez Tadeo et al., 2014).
To contribute towards filling this gap, we conducted research on the catering industry in school canteens. The central objective of this study is to shed light on how initiatives and practices aiming to reduce food waste at schools can be measured and tracked. To attain this goal, the following specific objectives were set for the research:
O1: To analyse how research measures, assesses and reports food waste at schools.
O2: To comprehend the level of awareness of food waste and its relevance for school and catering managers; to identify the elements that influence the generation of food waste at schools; and to understand its nature, the types of food being wasted, and the points at which waste is generated.
O3: To develop a self-assessment auditing tool to be used by educational centres and researchers to measure and track food waste at school canteens.
Our practical perspective is also novel, giving our research a precise and useful managerial implication.
Our aim is to develop an easy-to-implement self-assessment tool that can be applied by school catering managers without external assistance. Our auditing tool targets not only plate waste but also any losses before food is served, with the purpose of assessing the sustainability of the food service system.

3.2. Research on Food Waste at Schools
With the purpose of analysing in depth how research measures, assesses and reports food waste at schools (our first research objective), we gathered over 20 studies by means of a Scopus search using "food waste" and "schools" as key search words; a few additional studies were later found through bibliographies and citations. We analysed their objectives, methods, procedures and outputs in order to understand their strengths and weaknesses, and then used this knowledge as the foundation for developing a standardised auditing tool.
Studies quantifying the amount of food wasted daily at school dining facilities (e.g. Byker et al., 2014; Falasconi et al., 2015; Smith and Cunningham-Sabo, 2014; Wrap, 2011) show the effect of pupils' preferences and behaviour, and of the food service regime, on food waste from school meals (Wilkie, 2015). Although research objectives are diverse (see Table 3), the vast majority of studies (80%) focus on analysing plate waste. However, most of these studies are not complete food waste audits: they do not account for food waste from kitchen preparation, waste from serving lines, or food pupils bring from home. Although plate waste is the most frequently reported measure in school food waste studies, it is not the only source of food waste at schools.
Interestingly, Falasconi et al. (2015) undertook research in six schools in Italy and found a significant level of inefficiency in school catering services: according to their measurements, over 15% of the overall processed food was never served to the pupils. Nevertheless, only a few of the studies found in the literature aim to measure the efficiency or sustainability of the school food system; most focus on pupils' nutritional intake and therefore limit the analysis to plate waste.
Plate waste measures show considerable variation between schools (Wilkie, 2015). Typical results range from 20% to 50% of the food served being wasted, with vegetables and fruit at the higher end (Wilkie, 2015). For instance, Rodriguez Tadeo et al. (2014) estimated leftovers in Spanish schools by visual estimation at up to 26% of total served food, and Byker et al. (2014) found that 45.3% of total food served was wasted. Other studies mentioned by Wilkie (2015) give results ranging between 52 g and 227 g per pupil per day; he explains that such differences were likely due to the different ages of the pupils and the methods of food service (Wilkie, 2015). It is interesting to point out that there was significant variability in the amount of food wasted during the week: vegetable waste ranged from 26.1% to 80%, depending on the day. Although researchers acknowledge that some plate waste is unavoidable (Cohen et al., 2013), they agree that excessive plate waste is a sign of inefficiency or even irresponsibility (Buzby and Guthrie, 2002). The wide range of waste generation rates shown in these studies also suggests the need for more standardised waste audit methods to measure the waste produced in school cafeterias.
From our review of the literature (n = 20), Table 3 summarises the most relevant features of the studies quantifying food waste in school canteens, together with the weight of each feature among the analysed studies.

Table 3. Empirical research quantifying food waste in schools (% of total analysed studies)
Boundary: USA 75%; UK 10%; Spain 5%; Italy 5%; Australia 5%.
Research scope: plate waste 80%; kitchen and plate waste 10%; kitchen waste 5%; total waste 5%.
Research objective: dietary assessment 40%; drivers of plate waste 30%; method comparison 10%; economic cost of food waste 10%; food service efficiency 5%; waste assessment 5%.
Methods: individual 69%, aggregate 31%; selective 94%, non-selective 6%; weighing 69%, visual estimation 31%.
Indicators used: % waste of food served 29%; % consumed of food served 17%; nutrients consumed or wasted 21%; grams of waste per pupil 13%; economic value of waste 13%; total kilos of waste 4%; food surplus 4%.

3.3. Methods
The development of a standardised self-assessment tool should take into consideration the diverse frameworks in which school canteens operate, which involve a set of complex social phenomena. To analyse this complexity, we designed research with an explorative/inductive approach based primarily on qualitative data, as proposed by Pratt (2009).
With the purpose of developing a useful and practical assessment tool, we designed exploratory research in two phases. First, we collected data through qualitative research with a range of stakeholders in order to understand the factors that generate food waste at school canteens. Semi-structured individual interviews were conducted with 12 managers and staff of 9 different institutions and collectives that play a role in school meals (see Appendix A for details). In this first phase of the research we obtained insights from managers at both schools and catering organisations, from which a first draft of the tool was designed.
In the second phase, once the assessment tool had been pre-designed, we tested it in four of the schools that had participated in the first phase, in order to validate and improve it. While the tool was being tested, we gathered the opinions of canteen and school staff through 9 further individual interviews, as well as the opinions of 8 pupils. Data collection was performed during November and December 2014.
The sample selection for the first part of the study followed a quota strategy according to the type of school (semi-public, public and private institutions) and catering organisation. Given the nature of the research, all schools had to satisfy the following criteria: offering in-house cooked meals in a canteen, with a minimum of 300 pupils having lunch at school daily. Catering companies had to have revenue in Spain of at least €10 M in the last year and a significant market share in the institutional food service channel. To identify our sample, we explored their websites and existing reports and visited their locations. The final sample was made up of 4 catering companies and 5 schools in the city of Barcelona. Semi-structured interviews with school principals, canteen managers and food service organisation management were carried out (see Appendix A for interview and organisation characteristics). Given the complexity of analysing this kind of process, we developed a protocol as a conceptual and practical guide for data collection during the interviews. The protocol proposes a semi-structured interview design with open questions and unlimited time, in order to capture possible unexpected results and redirect the interview according to the interviewee's responses.
The questions were grouped in three sections: the first on the management system, followed by specific questions related to each production stage (procurement, kitchen, service and waste disposal), and finishing with questions on interviewees' interest in applying reduction measures and best practices. The interviews lasted 60 minutes on average, and all of them were conducted in places suggested by the interviewees to maintain their comfort and privacy. The interviews were recorded with an audio recorder, and the protocol also calls for annotating interviewees' reactions (e.g. behaviour or non-verbal communication) when responding to questions. The interviews were transcribed following a process of double review by the authors. In the second phase of the research, more informal interviews were conducted with school and catering staff as well as teachers and pupils.
The next step was the codification of the interviews following the methodological proposals of Bogdan and Biklen (1997), using qualitative data analysis software (MaxQDA). The first step of interview coding was to identify the blocks or paragraphs in which the interviewees spoke about one of the elements suggested by Bogdan and Biklen, such as Setting, Definition, Process and Method. This first coding allowed us to define the starting point from which we analysed the structure of each interview. The second step of coding consisted in assigning to paragraphs (or parts of them) a list of codes preconceived from the theoretical framework of the research; the initial list contained 7 codes (Players, Places, Food Type, Waste Drivers, Initiatives, Waste Hierarchy, Key Performance Indicators (KPIs)). The third and final step consisted in coding the paragraphs with a more inductive approach (in vivo coding), recoding some of the interviews as new codes emerged.
The final code book contains a total of 63 codes, classified into 10 categories (the former 7 plus three new ones: Management, Resources and Culture).
After the encoding process, we analysed each interview separately and then all of them as a block, following the suggestions of Miles and Huberman (1994) and Jurgenson (2005), with the goal of obtaining a specific vision of each case and a final conclusion across all cases. The first step of this part of the analysis was to build a checklist matrix to coherently organise the components of every case. These matrices showed the different sources of data (interviews) in rows and the topics or codes (from both the second and the third steps of the coding process) in columns. The matrices allowed us to display, across interviews, the codified elements and their reliability and importance according to the number of sources that corroborated them.
From each case, we generated a Time-Ordered Matrix showing the several processes throughout the study period. Based on these matrices, we re-analysed the assessment tool we had previously developed. After analysing each case, we carried out a Cross-Case Analysis in order to enhance the generalisability and potential self-execution of the outcome. Following a code-oriented strategy, we developed a Case-Ordered Effects Matrix (based on Miles and Huberman, 1994), which allowed us to see how the effects play out across the seven interviewees; in other words, we could sort the seven cases and show the diverse effects for each case in the same picture. The matrix has the cases in rows, and the main features of the school, its strategy and point of view on sustainability, the point of view of the catering company, and some short-run effects in columns. From this matrix, we were able to start analysing the relationship between schools and food waste.
Once a first draft of the tool had been developed from the insights obtained in the qualitative phase of the research, we approached 4 schools in Barcelona to test its performance and remedy its deficiencies. The test lasted three to five consecutive weekdays at each school, with the objective of covering different menus and thereby avoiding potential bias due to meal preferences. The schools were selected so as to ensure different catering arrangements, medium to large school sizes, public and private institutions, and a mix of socio-economic statuses. Each of the four schools selected for the trial had an in-house kitchen in which daily meals were prepared, managed by a specialised firm, as this is the most common arrangement at Spanish schools, as mentioned by C4 (see Appendix A) in our research. We weighed and measured waste from their canteens over 11 school days across the four schools (Table 4). School staff cooperated in the audits by setting aside the waste collected from the different areas and by providing access to the areas where collection stations were placed. The schools in our sample had different cafeteria layouts, but their lunch schedules were similar. Meals were composed of a starter (legumes, rice, pasta or vegetables), a main dish (meat or fish), white bread, a dessert (fruit or yoghurt) and tap water. Children did not have the option of choosing their menu, except for secondary pupils in school C6, who chose from two different options for each course. Special regime meals were usually also offered on demand. None of the schools offered à la carte items such as potato chips, as this very rarely happens in Spanish schools. Pupils in the study ate in one common lunchroom in three of the schools, while one school had seven different lunchrooms.
This latter school had 4 serving lines, two of the schools had a single serving line, and in one school children were served by staff at their tables. Where there were serving lines, food was presented in stainless steel containers (Gastronorm), and kitchen staff served the food onto students' trays as they passed by.
Following Engström (2004), food waste at the canteens was collected and weighed in aggregate, separated according to the point at which it had been produced (pantry, kitchen, service station or plate waste) and distinguishing whether it was avoidable (e.g. out-of-date ingredients, plate waste) or unavoidable (e.g. bones, peels). Research assistants weighed the aggregated discarded food at each step in the process every day, recording total kilograms as well as the approximate percentage of the different types of food. For this purpose, we used transparent industrial plastic bags (100 litres) so that research assistants could visually estimate the percentage of the different types of food once the bags were full. This was possible because, as mentioned before, the variety of dishes offered at school canteens in Spain on any one day is limited, typically one starter plus one main dish and one dessert, or at most two options of each, resulting in no more than three to five different food types per meal.
Research assistants arrived at the schools three hours before lunchtime in order to prepare collection bins and track kitchen preparation tasks. Bins were placed in different spots, labelled for collecting food at each stage. First of all, the assistants measured food wasted during meal preparation, noting its alleged cause. "Potentially avoidable" waste was differentiated from "unavoidable" waste such as egg shells, bones, etc., and only potentially avoidable waste was weighed.
For this purpose, rubbish bags were placed at different points of the kitchen with specific labels. We therefore used six differently labelled bins, placed at the different collection stations: 1) out-of-date or damaged raw ingredients; 2) unavoidable kitchen scraps; 3) potentially avoidable kitchen scraps; 4) service line leftovers; 5) unavoidable plate waste; and 6) potentially avoidable plate waste. Once the audit was finished, only four of them were weighed (using a Pelouze scale in all but one school, where we used a Campesa K3 balance), as we did not measure unavoidable waste, in accordance with Papargyropoulou et al.'s (2014) suggestion.
We decided to combine a direct measure of waste, weighing waste in aggregate at the different collection stations, with a less accurate method to measure the shares of each food type: once the total weight was measured, research assistants visually estimated the approximate percentage of the total weight per food category. We opted for the aggregate selective method for its simplicity and ease of execution, as schools should later be able to implement it without external help.
Table 4 shows the total number of trays included in the trial as well as the number of days the audit lasted in each school. Overall, we measured the aggregated avoidable waste weight of over 10,000 trays, and 2,991 children took part in the audit.

Table 4. Trays and pupils audited

School | Participating pupils | Trial duration (days) | Elementary pupils' trays | Secondary pupils' trays | Total audited trays
C5 | 986 | 5* | 2,815 | 2,113 | 4,928
C7 | 465 | 2 | 534 | 396 | 930
C6 | 1,316 | 3 | 1,881 | 2,067 | 3,948
C8 | 225 | 1 | 225 | 0 | 225
TOTAL | 2,991 | 11 | 5,455 | 4,576 | 10,031
* Secondary pupils were present on 4 of the 5 days only.

During the audit days, we interviewed 9 canteen and school staff in order to get insights from those who work closely with the day-to-day operations of the canteen.
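The aggregate weighing procedure described above can be sketched as a simple per-day record that feeds the grams-per-pupil indicator Wrap (2011) found most useful for comparing schools. The station names and figures below are hypothetical, not the study's data:

```python
# Illustrative sketch: aggregating per-station weights from one audit day
# into a grams-per-pupil figure. Station names and weights are hypothetical.
from dataclasses import dataclass

@dataclass
class StationRecord:
    station: str         # e.g. "kitchen", "serving_line", "plate_waste"
    avoidable_kg: float  # aggregate weight of potentially avoidable waste

def grams_per_pupil(records, pupils):
    """Total avoidable waste across stations, in grams per pupil."""
    total_kg = sum(r.avoidable_kg for r in records)
    return 1000 * total_kg / pupils

# One hypothetical audit day at a school with 300 diners
day = [
    StationRecord("kitchen", 4.5),
    StationRecord("serving_line", 7.0),
    StationRecord("plate_waste", 18.5),
]
print(grams_per_pupil(day, pupils=300))  # 100.0
```

Recording the station alongside each weight preserves the breakdown by origin (pantry, kitchen, service line, plate) while the headline indicator stays a single per-pupil number.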
We also performed 9 quick interviews with children eating in the canteen. These interviews lasted 20 minutes on average with staff and 10 minutes with pupils, and we encoded the transcripts following the same method and codes as in the former phase of the study.
The number of pupils actually eating lunch in the canteen each day was registered so as to estimate the average waste weight per pupil per day, as this was the measure found by Wrap (2011) to be the most meaningful way of comparing data from different schools. This figure was compared with the planned number of diners, which we asked the cooks for on each audit day, in order to assess potential food surpluses as suggested by Papargyropoulou et al. (2014).
It is important to recall that the primary objective of the auditing tool is to analyse and track the food waste produced at schools, not the amount of food going in or the nutritional intake of pupils. The output is therefore given in grams of waste per pupil, not as a percentage of food prepared or served, nor as a percentage of energy or nutrients consumed versus offered. Nevertheless, the tool can easily be adapted for these purposes.

4. Results and Discussion

4.1. Perspectives on Food Waste by School Caterers and Canteen Managers
We found that managers had very little real awareness of the amount of waste produced in their canteens. Only one of the schools in the sample had ever performed a waste audit of its canteen, and only one of the participating catering companies regularly performs waste audits in the kitchens it operates. That said, we nevertheless found high interest in the topic, especially among publicly funded school managers and personnel: many school managers appeared willing to implement initiatives to measure and minimise the amount of food wasted in their canteens, especially after observing our pilot-test results.
The interviewed managers acknowledged that food waste is a data-poor area, and when suggested, a waste inventory was seen as the starting point for applying reduction initiatives. They largely agreed that measuring waste and tracking the results of reduction initiatives would be a useful way to raise awareness.
Consistent with the literature (Wrap, 2011), avoidable food waste accounted for the greatest amount of waste generated at the schools in our pilot test. Plate waste was the biggest source of food waste, followed by food from the serving lines. The average weight of food wasted per elementary school pupil in Barcelona ranged between 40 and 100 grams per meal. Secondary pupils' average waste was higher in two of the three secondary schools analysed, exceeding 80 grams of daily waste per pupil in two of the four schools studied.
In our trial of the auditing tool, schools' institutional and pedagogical principles had a very direct influence on the amount of food wasted in the canteen. Some schools consider the canteen part of their learning project and therefore try to educate children to finish their food through various activities, training and workshops; these schools showed lower levels of waste, especially plate waste. Conversely, wherever the school's top management did not consider food waste a priority, plate waste ratios were higher, and awareness of the amount of food wasted was very low. Just one school mentioned that it regularly ran initiatives aimed at reducing food waste; in fact, this school showed the lowest rate of plate waste in our pilot test.
We concluded that this was because its management had a strong focus on reducing food waste, translated into multiple ongoing initiatives. C6.1: "We set specific objectives every year. At present we are focusing on three food types: lentils, fish and oranges. Last year we achieved an important reduction in discarded bread. We are also currently focused on reducing dairy packaging, as its disposal costs are high."
Moreover, schools with a stronger management focus on sustainability, or with wider pedagogical objectives, showed high interest in the results of our pilot audit and declared their intention of repeating the audit in the near future.
On the other hand, we also found food service providers with very different perspectives on and visions of food waste. One of the food service managers interviewed, who worked for a catering company with a strong sustainability culture, mentioned that school managers' scepticism and lack of awareness were a barrier to improving results. C1.2: "Implementing sustainable initiatives is difficult sometimes, as schools are often not very sustainability conscious; we have had customer complaints when trying to reduce food waste, arguing that our only purpose was to reduce our costs!"
She nevertheless recalled that when they had previously performed waste audit assessments in schools, the results had been striking for both organisations, and that it had since been easier to introduce reduction initiatives in those institutions. We concluded from this that increasing the visibility of and awareness about food waste is crucial. C1.2: "We recently measured aggregated plate waste at one of our customers, a big school in Madrid, resulting in a daily average of 350 kilos of food discarded.
Then they launched an awareness campaign by putting together 350 kilos of packaged food ingredients at the entrance of the lunchroom with the purpose of increasing awareness of food waste among children."
Moreover, we observed very different attitudes toward plate waste among canteen and school staff, ranging from strict control of pupils so that they completely finish their meals, to passiveness, acceptance or even denial of the real plate waste situation. These diverse attitudes are also related to dissimilar school management ideologies regarding school meals: from those considering the canteen a fringe service offered to parents (with no educational responsibility on the school's part) to those who consider it part of the school's pedagogical mission. This is very closely related to the means and resources dedicated to minimising plate waste, such as the number of caretakers and their role in controlling leftovers and pupils' eating habits, as well as food waste reduction awareness campaigns.
We concluded from these observations that the role played by a school's top management is the most relevant factor influencing sustainability issues such as the level of canteen food waste. Institutions with a strong focus on sustainability, or at an advanced stage of "greening" their organisations, usually allocated more resources to reducing food waste and were thus more likely to be looking for performance indicators and initiatives to reduce waste. This was confirmed in our pilot test, as the one school with a clear focus on sustainability recorded the lowest plate waste rate. The greater management focus on sustainability translated into diverse procedures affecting the different waste driver areas, generally resulting in lower waste rates.
Moreover, green-conscious managers tend to be concerned not only with food waste but also with related packaging waste. An informative campaign addressing publicly funded schools with the purpose of increasing awareness of food waste could therefore be highly effective.

Indeed, as noted by Papargyropoulou et al. (2014), we verified that food waste arises at all the different stages as a result of very diverse causes, and thus the ways to tackle it must differ too. We concluded from our research that food waste drivers can be categorised into three groups. First, those related to management practices, such as the meal planning process or procurement practices. Secondly, infrastructure and equipment also affect food waste levels, especially at the storage and serving stages. Finally, human resources issues, such as staff awareness (or lack of awareness) of food waste, are also reflected in the levels of food waste in canteen operations. In the next paragraphs we develop these drivers, relating them to indicators that will allow managers and researchers to measure and track performance in the related areas.

Regarding management practices, cooks and caterers mention communication between school and kitchen as key to accurately planning the number of menus to prepare. As mentioned by C1.2, this is particularly relevant for special diets such as allergen-free menus: C1.2 "Special menus such as diet or allergen-free ones produce higher amounts of waste per pupil than regular ones, as they are more difficult to plan". From this insight we can infer the relevance of tracking deviations between the planned and real numbers of diners.

Also related to management practices, we found menu planning closely linked to food waste. In fact, many of the pupils interviewed complained about the quality of the food offered.
Pupils' acceptance of food can be increased through menu planning policies. As suggested by C1.2: "The different acceptance rates of dishes by pupils makes a difference. We try to balance our menus: if the first course is 'difficult' (for example chickpeas), the main course should be 'easier' (for instance, not offering fish)". Pupils' acceptance of meals can also be enhanced by giving them the option of choosing between more than one alternative for each course. Only one of the schools studied offered pupils different dish alternatives to choose from.

On the other hand, procurement policies were acknowledged as closely related to waste. Suppliers' delivery frequency and product formats are managed to prevent pantry losses. Public policies were highlighted as a key potential tool to encourage good purchasing practices at schools, although these were not clearly related to the generation of waste and should be tracked through selective measures of plate waste. C2.1: "Public procurement policies aim to guarantee that children have a diverse and complete diet, but effective food intake by children varies a lot between schools, closely related to school management priorities and the consequent education of children in food habits and supervision during meals".

Our research also shows that kitchen food waste is strongly influenced by school infrastructure and equipment. Caterers need to adapt their processes to school facilities and often complain that some of them are very old. They recognise this as a limitation: C2.1 "It is really hard sometimes". Furthermore, the availability of recycling facilities strongly determines the destination of waste. C8: "Since we own a vegetable garden, we compost most of the kitchen scraps and peels we generate". Regularly recording the destination of food waste, as well as its disposal costs, might increase awareness of potential improvements.
Waste bins at the schools in our sample were normally emptied into dumpsters. Although three schools in our sample had a vegetable garden, only one of them composted food waste from the canteen.

Better storage facilities were mentioned by cooks as a way to reduce the amount of raw materials that had to be discarded, and also as a way of allowing excess cooked food to be stored for later consumption. We also found a relevant source of waste related to the number of serving lines at which children were served or could help themselves to food. Wherever there was a single serving station, waste at this stage was significantly lower than where there were several: schools with more than one service line tend to generate more food waste per pupil at this stage. This is because all types of food need to be displayed until the end of service time at every station, inevitably causing a certain amount of waste at each one. One of the schools where we pilot-tested the auditing tool had four serving lines. Waste at this stage in this school varied significantly among the dates studied, and on one day we weighed over 70 kilos of discarded cooked food that had never been served.

Bread plays a relevant role here. In our case study, plate waste accounted for the greatest part of food waste in three of the schools studied, and serving waste in the fourth. Moreover, because bread is low priced, little attention was generally paid to the amount discarded. In most serving lines, bread was placed at the beginning, together with the trays and cutlery, and diners used to take it before knowing whether they were going to like the menu. Bread was one of the food categories with the highest waste in our test.
Finally, the role of canteen supervisors was emphasised as crucial, the lack of control of pupils' leftovers being a relevant driver of plate waste. It was acknowledged that plate waste is closely related to effective supervision. In fact, the schools with the lowest rates of food waste in our pilot were those with stricter control by canteen supervisors alongside a wider educational perspective. Measuring and tracking plate waste can be used by managers to encourage caretaker supervision. Managers will therefore find it useful to make the amount of plate waste visible, as this will allow them to set reduction objectives and measure their effect, or even to compare results with other schools.

Tracking and disseminating these key performance indicators will help school managers choose the most adequate correction measures and evaluate results. The necessary correction measures differ depending on the cause and the place where waste is generated. Table 5 summarises the most relevant school canteen food waste drivers and the indicators or variables that might be useful for running a diagnosis, describing the main improvement areas and helping to manage each of them.

Table 5. School food waste drivers and key performance indicators (KPI)

Related Area                      Food Waste Driver
--------------------------------  ------------------------------------------------
Institution Culture and Values    Top management (low) focus on sustainability
                                  Pedagogical vision
Management Practices              Communication between kitchen and school staff
                                  Meal planning process
                                  Menu planning (and acceptance of food by pupils)
                                  Procurement practices
Infrastructure                    Kitchen equipment and facilities
                                  Recycling & reuse facilities
                                  Canteen layout
Human Resources                   Supervision by caretakers

Age was highlighted as a relevant factor too. Canteen staff and caretakers agree that children of different ages usually have different eating patterns.
There was a consensus that younger children produce less plate waste, as stated by C.5.4a: "The younger they are, the more they eat. Three to five year olds leave no plate waste at all!" This insight sheds light on the relevance of measuring waste from different collectives separately. Interestingly, even though the amount of waste generated per pupil varied widely among the different schools, the food wasted by elementary pupils was much lower than that wasted by secondary graders in our research. This result is consistent with the outcome of the first stage of our research, although opposite results have been reported in several studies (e.g. Guthrie and Buzby, 2002; Niaki et al., 2016).

It is interesting to note that catering and school staff did not consider the proposed auditing method disruptive. On the contrary, the cafeteria staff, teachers and caretakers who collaborated in the trial were proud to share their experience with colleagues. They were often impressed by the results and willing to collaborate when ideas for food waste reduction were brought up. Our findings strongly support the relevance of sharing results with canteen staff, as suggested by the World Resources Institute (Hanson et al., 2016).

4.2. Self-assessment food waste auditing tool

Based on our research, the information to be measured and tracked when auditing food waste at school canteens can be grouped into four categories: accuracy of the planning system, physical measure of waste, waste destination and economic cost of food waste. In the following paragraphs we develop the four categories and describe the related key performance indicators that should be included in a waste audit.

4.2.1. Accuracy of the planning system. Conformity between the real and the planned number of diners should be measured, with the objective of analysing and tracking deviations between the information used by cooks when preparing food and the final amount of food needed at lunchtime. Differences between these two figures are often the cause of food surplus (excess food cooked). In order to assess the accuracy of the planning system, we suggest the following indicator: the deviation rate between planned and served meals.

A daily estimate of the difference between planned meals and the real final number of diners should be tracked. For this purpose, a deviation rate should be recorded daily, noting both the number of planned diners before the cooking process begins and the actual number of pupils eating at the canteen on each auditing day. Deviations should be recorded as the % of actual versus planned diners per sitting (whenever there is more than one). Special menus, such as allergen-free or diet lunches, should also be recorded separately. If there is a known cause for the deviation, it should be briefly explained in the record. Needless to say, elementary and secondary grades should be recorded separately.

4.2.2. Physical measure of waste. Different food categories (e.g. fruit, bread, etc.) should be recorded separately in order to assess the efficiency of the food service system as well as dietary and nutritional intake and food acceptance and preferences. This measure will shed light on the potential improvement that can be achieved by reduction initiatives and will be helpful for their design. Given the nature of the physical measure of waste, we suggest two indicators, the weight of food waste and the number of zero waste trays, discussed below.

(a) Aggregate and selective weight of food waste at each stage of the process.
This should be measured at each collection station, in order to differentiate the four typologies of waste (pantry loss, cooking loss, prepared food surplus and plate waste) explained in section 2.2. At each stage, potentially avoidable food waste should be measured separately from unavoidable waste, which does not need to be included in this record. Collection stations must differentiate the place and stage in the process where the waste was generated, and categorised food should be recorded at each collection station. We suggest estimating the share (percentage of total food waste) of each food type by visual estimation. For this purpose we recommend using transparent rubbish bags or bins for the aggregate measurement, recording the approximate % of each food category after weighing. To do this, we suggest the classification used by Betz et al. (2015): meat/fish, starch, vegetables, fruit, desserts (e.g. yoghurt) and others, adding bread and legumes as separate additional categories. As mentioned before, unavoidable waste such as peels, bones, etc. must be separated at the collection stations and removed before weighing. Recording the total weight of unavoidable waste is optional.

We shall therefore measure four different waste indicators in this section, one per stage of the process:

A. Pantry loss: food waste generated in raw ingredient storage (mostly out-of-date produce). We shall record the total kilos wasted at this stage, the approximate % of total weight per food type and the place where it occurred (e.g. pantry, fridge, etc.), as well as its alleged cause (e.g. out of date, spoilt, etc.).

B. Cooking loss: waste produced during the cooking process. Unavoidable waste should be discarded separately at this stage, because only potentially avoidable waste needs to be weighed.
The total kilos of avoidable waste should be recorded, as well as the approximate % per food type, the place of generation and the most probable cause (e.g. burnt, aesthetics, etc.).

C. Prepared food surplus: food cooked but not served. This comprises waste produced at serving lines or other means of distribution or display. Here, the total weight of cooked food not served to the pupils should be recorded, as well as the approximate percentage per type of food, noting the most probable cause as well as its most likely end: reuse (e.g. staff meals, soups, donations), recycling (e.g. compost) or disposal.

D. Plate waste: food served but not eaten. We recommend measuring plate waste using the aggregated and selective method, after removing inedible food or parts of food. Again, the total kilos of waste should be recorded before noting the approximate percentage per food type, estimated visually.

We suggest weighing discarded food without separating the different types of food at each collection station, as categorisation can be visually estimated after collection thanks to the transparent rubbish bags. This method will ease audit implementation despite possibly being less accurate. This is consistent with the literature: Smith (2014), in a study measuring individual plate waste, concluded that visual estimation was close enough to selective weighing when measuring plate waste. Given the nature of the audit, we prioritise easy execution over accuracy.

Nevertheless, since plate waste is usually the main source of waste at school cafeterias, it can be helpful to deepen the analysis in a small sample of pupils in order to gain insights into the reasons behind leftovers.
This sample should be taken at random, and it is recommended to take digital photos of these pupils' trays, both before they start dining and when they return their trays. The amount of plate waste found in this study is consistent with plate waste reported in previous research in schools, although large differences were found among studies. Moreover, the most wasted food types in our pilot study were legumes, vegetables and bread. This is consistent with the literature, as most studies highlight the high waste of vegetables.

Although the aggregated method is recommended for its convenience, results should also be given in grams per pupil, calculated as the ratio between the total waste amount and the number of real diners, using the figure of real diners previously recorded. It must be kept in mind that this ratio is only comparable among schools with the same catering system; among schools with different catering systems, only plate waste ratios will be comparable. Whenever possible, a measure of efficiency is also recommended, recording the percentage of wasted food relative to prepared food. This ratio is particularly relevant for transported-meal catering systems.

(b) Number of zero waste trays, as a percentage of total trays. Tracking how many pupils empty their trays completely will shed light on meal acceptance and caretakers' control. Moreover, our study suggests that disseminating this information may encourage other pupils to reduce plate waste. C.6.1: "Since we started the zero tray project (a contest among classes in which the class with the highest percentage of fully empty returned trays was rewarded), plate waste has been reduced significantly".

4.2.3. Waste destination or use. Improvement opportunities can also arise from noting and tracking the destination of waste from the canteen.
Good sustainability initiatives could include setting objectives to reduce the waste sent to landfill, shrinking the food waste footprint by cutting the waste discarded at the lower levels of the waste hierarchy pyramid. The indicator proposed to manage waste destinations is simple: we recommend recording the way food waste is discarded (e.g. rubbish bin, compost) or reused. The waste destination indicator implies noting the approximate % of waste that will probably end in landfill or compost, or that will be reused, recording its intended purpose in the latter case. Whenever more than one disposal method is used, the approximate % of total waste weight going to each one should be recorded.

4.2.4. Economic cost of food waste. An economic estimate of food waste is recommended, as it will increase the relevance that school and catering managers give to tracking and measuring waste by framing food waste reduction as a potential profit increase. As mentioned by one of the caterers in our sample, C1.1: "Canteens are a source of business for schools, they make profit out of them". School managers with a low focus on sustainability, and therefore not motivated to reduce food waste for sustainability-related reasons, may find an attractive incentive in this indicator. The approximate cost of waste can be calculated in different ways. We suggest using an average cost per meal estimated on a yearly basis (including procurement and service) and multiplying it by the equivalent number of meals thrown away. This can be calculated by dividing the total kilos of waste by the average weight of a meal (g) and multiplying the result by the average cost per meal. This should be done with the support of the financial manager.
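The cost estimate described above is simple arithmetic and can be scripted so it is repeated consistently across audits. A minimal sketch in Python, where the function name and all figures are hypothetical illustrations, not data from this study:

```python
# Cost of food waste, as suggested above:
#   equivalent meals thrown away = total waste (g) / average meal weight (g)
#   cost of waste = equivalent meals * average cost per meal

def food_waste_cost(total_waste_kg: float,
                    avg_meal_weight_g: float,
                    avg_cost_per_meal: float) -> float:
    """Return the approximate cost of wasted food in the meal's currency."""
    equivalent_meals = (total_waste_kg * 1000) / avg_meal_weight_g
    return equivalent_meals * avg_cost_per_meal

# Hypothetical weekly figures: 120 kg wasted, 450 g average meal served,
# 4.20 EUR average cost per meal (procurement + service).
cost = food_waste_cost(120, 450, 4.20)
print(f"Equivalent meals: {120 * 1000 / 450:.0f}, cost: {cost:.2f} EUR")
# → Equivalent meals: 267, cost: 1120.00 EUR
```

Expressing waste as "equivalent meals thrown away" keeps the estimate intuitive for non-financial staff, which fits the tool's stated priority of ease of execution over accuracy.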
Although this method may not be accurate, as it does not distinguish the diverse costs of different food ingredients, we prioritise ease of execution over accuracy given the purpose of the measurement.

By tracking appropriate KPIs related to the four areas mentioned above and their probable causes, school caterers and managers will be able to diagnose and describe the main improvement areas. The materials needed to perform the audit are a scale, six labelled waste bins or containers and transparent rubbish bags.

Table 6 summarises the four main data categories, relating them to the goal of the analysis and their related KPIs. The auditing tool can be found in Appendix B.

Table 6: Summary of selected KPIs and their purpose.

Data Category                    Purpose                                    Food Waste Indicators (short list)
-------------------------------  -----------------------------------------  -------------------------------------------------
Accuracy of the planning system  Better adjustment of quantities cooked      1. Planned vs real number of meals
Physical measure of waste        Assess system efficiency & dietary intake   2. Selective aggregate food waste by type of food
                                                                             3. Zero waste trays
Waste destination                Reduce environmental impact                 4. Food waste destination
Economic cost of food waste      Increase awareness of food waste            5. Total cost of food waste
                                 relevance to management                        (euros/dollars/pounds)

Kitchen and service staff highlight that some dishes typically generate little or no plate waste, such as rice or pizza, while others, such as fish or vegetables, generate high plate waste rates. Although menu planning often takes this into consideration, we found a wide range of plate waste ratios on different dates, a fact we attributed to the different acceptance of the menus. Plate waste on one day in a specific school could double or even triple the previous day's ratio. For this reason, auditing a full school week is urged in order to include diverse meals and avoid bias due to different meal acceptance from pupils.
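The KPI short list above can be derived from a simple daily audit record. A minimal sketch, where the record structure, field names and figures are hypothetical illustrations chosen for the example, not values prescribed by the tool or taken from the pilot schools:

```python
from dataclasses import dataclass

@dataclass
class DailyAuditRecord:
    """One auditing day; field names are illustrative only."""
    planned_diners: int
    actual_diners: int
    waste_by_stage_kg: dict   # e.g. {"pantry": ..., "cooking": ..., ...}
    zero_waste_trays: int
    total_trays: int

    def deviation_rate(self) -> float:
        """KPI 1: % deviation of actual vs planned diners."""
        return 100 * (self.actual_diners - self.planned_diners) / self.planned_diners

    def grams_per_pupil(self) -> float:
        """Total recorded waste expressed in grams per real diner."""
        total_kg = sum(self.waste_by_stage_kg.values())
        return 1000 * total_kg / self.actual_diners

    def zero_tray_rate(self) -> float:
        """KPI 3: % of trays returned completely empty."""
        return 100 * self.zero_waste_trays / self.total_trays

# Hypothetical example day: 500 meals planned, 480 pupils actually dined.
day = DailyAuditRecord(
    planned_diners=500, actual_diners=480,
    waste_by_stage_kg={"pantry": 2.0, "cooking": 3.5, "surplus": 12.0, "plate": 18.5},
    zero_waste_trays=96, total_trays=480,
)
print(day.deviation_rate(), day.grams_per_pupil(), day.zero_tray_rate())
# → -4.0 75.0 20.0
```

Recording one such entry per auditing day, separately per sitting and per grade group as recommended above, would make a week of audits directly comparable across schools that use the same catering system.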
Strong differences were also found among the sample schools in our pilot-test.

Once the audit is finished, it is recommended to share the results with teachers, supervisors and pupils, as this contributes to increasing awareness of the issue. Lack of visibility, and therefore lack of awareness, is one of the key reasons for the low level of measures taken to reduce food waste in the food service channel (Derqui et al., 2016). The first measurement will serve as a baseline and as the reference for improvement goals. Successive measurements will shed light on the efficiency of initiatives as well as on the remaining room for improvement. We suggest that the audit be led by a "project leader", a person in charge of coordinating the different players needed for the success of each improvement initiative.

5. Conclusion

As suggested by Gerbens-Leenes et al. (2003), it is important to bridge the existing gap between theoretical scientific knowledge and practical company knowledge in measuring sustainability. Nevertheless, as they state, this is generally difficult, as research as a rule emphasises accuracy and completeness, while business needs easy-to-handle, practical and cheap tools to assess its sustainability performance (Gerbens-Leenes et al., 2003). Through our research, we designed a self-assessment tool that can easily be used by schools and caterers to measure and track food waste at school canteens, while remaining comprehensive and accurate. In addition, through the implementation of the tool, academics will gain further relevant quantitative and comparable data, as well as visibility of food waste, a field in which information is not widely available.
Moreover, managers and researchers can adapt and use the tool in different countries and environments in order to obtain metrics and insights on food waste, benefiting from benchmarking and shared experiences under homogeneous criteria and standardised concepts.

Our paper provides new contributions to the literature on food waste. Firstly, a standardised and easy-to-implement self-assessment tool is developed for use at school canteens. Secondly, the paper sheds light on the potentially good acceptance that sustainable initiatives may receive from school managers and staff. Finally, it relates food waste drivers to key performance indicators that would help manage potential initiatives to address them. On the one hand, our main contribution for researchers is the availability of a standardised tool that will permit the comparison of food waste assessments in schools across different cities and environments. On the other hand, we provide school and food service managers with an easy-to-implement tool that will help them along their path towards more sustainable organisations.

6. Acknowledgements

This research was partly funded by the Spanish Ministry of Food and Agriculture. The authors want to thank the Ministry for its initiative "More Food, Less Waste", under whose framework this research was conducted. The authors thank the principals, teachers, pupils and cafeteria staff of the participating schools. The contributions of Antonio Agustin are highly appreciated.

APPENDIX A.
Sample characteristics

C1 SODEXO. Food service. €18,000 million global revenue; 420,000 employees; operates in 80 countries; headquarters in France. Interviewed: C.1.1 Marketing Manager; C.1.2 Opex Manager; C.1.3 Social Responsibility Manager.

C2 CATSCHOOLS. Food service. Headquarters in Spain; operates regionally (Barcelona only). Interviewed: C.2.1 Sales Manager; C.2.2 Purchasing Manager.

C3 EUREST (Compass Group). Food service. Headquarters in the UK; £17,000 million revenue in 50 countries (group). Interviewed: C.3 Regional Sales Manager.

C4 ARAMARK. Food service. USD 14,329 million revenue; 270,000 employees in 21 countries; headquarters in the US. Interviewed: C.4 Regional Sales Manager.

C5 SAGRAT COR SCHOOL. Elementary & secondary school. 1,500 pupils eat daily; two dining rooms and two service lines. Interviewed: C.5.1 Canteen manager; C.5.2 Cook; C.5.3 a & b two kitchen assistants; C.5.4 a, b & c three caretakers; C.5.5 a to e five pupils.

C6 ESCOLA PIA SCHOOL. Private elementary & secondary school. 1,500 pupils eat daily; seven dining rooms and four service lines; compost facilities. Interviewed: C.6.1 Canteen manager; C.6.2 a & b two supervisors; C.6.3 a to d four pupils.

C7 ISABEL DE VILLENA SCHOOL. Private elementary & secondary school. 670 daily diners. Interviewed: C.7.1 Canteen coordinator; C.7.2 Cook.

C8 ESCUELA JUNGFRAU SCHOOL. Public elementary school. 250 daily diners; pupils are served at their table. Interviewed: C.8 Canteen coordinator.

C9 COSTA LLOBERA SCHOOL. Public elementary & secondary school. Interviewed: C.9 Canteen coordinator.

Appendix B. SCHOOL CANTEEN FOOD WASTE SELF-ASSESSMENT TOOL

A. Record the number of planned meals and the real number of diners.
For each of 1st shift, 2nd shift, allergen-free menus and diet menus, record: planned number of diners; actual diners; % deviation; deviation causes (e.g. excursions, sick children).

B. Selective weight by stage of the process.

1. PANTRY LOSS (out-of-date and damaged food): BIN #1.
Record per food type (e.g. fruit, bread): weight, approximate % of total and place where it occurred (e.g. pantry, fridge).
1. Total kilos (approx.): ________ kg
1.b Total potentially avoidable kilos (estimate): ________ kg

2. COOKING LOSS (kitchen waste):
2.a Unavoidable waste, BIN #2 (e.g. potato peels, egg shells): ________ kg
2.b Potentially avoidable waste, BIN #3 (cooked but not served, burnt, damaged, etc.; indicate type). Record per food type: weight, approximate % of total, place where it was produced (e.g. while cooking, already cooked) and cause (e.g. burnt food, fewer diners than expected).
2. Total kilos (approx.): ________ kg

3. PREPARED FOOD SURPLUS (display): BIN #4.
Record cooked food that is not served: food type (e.g. roasted chicken), quantity (kilos), cause and most probable end (disposal or use, e.g. staff meals, soup, donations).
3. Total kilos (approx.): ________ kg
3.b Total potentially avoidable kilos (estimate): ________ kg

4. PLATE WASTE:
4.a Unavoidable waste, BIN #5 (e.g. banana peels, bones): ________ kg
4.b Potentially avoidable waste, BIN #6 (e.g. vegetables, legumes): ________ kg; grams/student; approximate % per food type; % waste on food served (C.2 / C.1).
4. Total kilos (approx.): ________ kg

TOTAL KILOS WASTED (1+2+3+4): ________ kg; average per pupil (grams/student).
TOTAL AVOIDABLE KILOS (1b+2b+3b+4b): ________ kg

C. Waste economic cost.
C.1 Average weight of meal served per tray (g).
C.2 Average cost/meal, including preparation cost (€).
Equivalent of meals thrown away (total food waste kilos / weight of meal).
Cost of food waste (€) (equivalent meals thrown away x average cost per meal).

D. Waste destination.
How it was discarded: garbage bin, compost or reuse (mention for what purpose).

References

Adams, M.A., Pelletier, R.L., Zive, M.M., Sallis, J.F., 2005. Salad bars and fruit and vegetable consumption in elementary schools: a plate waste study. J. Am. Diet. Assoc. 105, 1789–1792. doi:10.1016/j.jada.2005.08.013
Bergman, E., Buergel, N., Englund, T., Femrite, A., 2004. Relationships of meal and recess schedules to plate waste in elementary schools. The Journal of Child Nutrition and Management.
Betz, A., Buchli, J., Göbel, C., Müller, C., 2015. Food waste in the Swiss food service industry: magnitude and potential for reduction. Waste Manag. 35, 218–226. doi:10.1016/j.wasman.2014.09.015
Bogdan, R., Biklen, S., 1997. Qualitative Research for Education.
Bradley, L., 2011. Food Service/Cafeteria Waste Reduction [WWW Document]. Northeast Recycl. Counc. URL https://nerc.org/documents/schools/FoodServiceWasteReductionInSchools.pdf (accessed 6.25.16).
Buzby, J.C., Guthrie, J.F., 2002. Plate Waste in School Nutrition Programs: Final Report to Congress.
Byker, C.J., Farris, A.R., Marcenelle, M., Davis, G.C., Serrano, E.L., 2014. Food waste in a school nutrition program after implementation of new lunch program guidelines. J. Nutr. Educ. Behav. doi:10.1016/j.jneb.2014.03.009
Hanson, C., Lipinski, B., Robertson, K., Dias, D., Gavilan, I., Fonseca, J., 2016. Food Loss and Waste Accounting and Reporting Standard.
Clarke, C., Schweitzer, Z., Roto, A., 2015. Reducing Food Waste: Recommendations to the 2015 Dietary Guidelines Advisory Committee, 1–9.
Cohen, J.F.W., Richardson, S., Austin, S.B., Economos, C.D., Rimm, E.B., 2013. School lunch waste among middle school students: nutrients consumed and costs. Am. J. Prev. Med. doi:10.1016/j.amepre.2012.09.060
Comstock, E., 1979. Plate Waste in School Feeding Programs. USDA.
Creedon, M., Cunningham, D., Hogan, J., 2010.
Less Food Waste, More Profit.
Dehghan, M., Akhtar-Danesh, N., Merchant, A.T., 2005. Childhood obesity, prevalence and prevention. Nutr. J. 4, 24. doi:10.1186/1475-2891-4-24
Derqui, B., Fayos, T., Fernandez, V., 2016. Towards a more sustainable food supply chain: opening up invisible waste in food service. Sustainability 8, 693. doi:10.3390/su8070693
Dyllick, T., Hockerts, K., 2002. Beyond the business case for corporate sustainability. Bus. Strateg. Environ. 11, 130–141. doi:10.1002/bse.323
Engström, R., Carlsson-Kanyama, A., 2004.
Food losses in food service institutions Examples from Sweden. Food 910 Policy 29, 203–213. doi:10.1016/j.foodpol.2004.03.004 911 European Union Committee, 2014. Counting the Cost of Food Waste: EU Food Waste Prevention. Auth. House 912 Lords 78. 913 Falasconi, L., Vittuari, M., Politano, A., Segrè, A., 2015. Food Waste in School Catering: An Italian Case Study. 914 Sustainability 7, 14745–14760. doi:10.3390/su71114745 915 FAO, 2011. Global food losses and food waste: extent, causes and prevention, Save Food! 916 doi:10.1098/rstb.2010.0126 917 Ferreira, M., Martins, M.L., Rocha, A., 2013. Food waste as an index of foodservice quality. Br. Food J. 115, 1628–918 1637. doi:10.1108/BFJ-03-2012-0051 919 Finn, S.M., 2014. VALUING OUR FOOD: MINIMIZING WASTE AND OPTIMIZING RESOURCES. Zygon® 49, 992–920 1008. doi:10.1111/zygo.12131 921 Garrone, P., Melacini, M., Perego, A., 2014. Opening the black box of food waste reduction. Food Policy 46, 129–922 139. doi:10.1016/j.foodpol.2014.03.014 923 Gerbens-Leenes, P.W., Moll, H.C., Schoot Uiterkamp, A.J.M., 2003. Design and development of a measuring 924 method for environmental sustainability in food production systems. Ecol. Econ. 46, 231–248. 925 doi:10.1016/S0921-8009(03)00140-X 926 Godfray, H.C.J., Crute, I.R., Haddad, L., Lawrence, D., Muir, J.F., Nisbett, N., Pretty, J., Robinson, S., Toulmin, C., 927 Whiteley, R., 2010. The future of the global food system. Philos. Trans. R. Soc. B Biol. Sci. 365, 2769–2777. 928 doi:10.1098/rstb.2010.0180 929 Goonan, S., Mirosa, M., Spence, H., 2014. Getting a taste for food waste: A mixed methods ethnographic study into 930 hospital food waste before patient consumption conducted at three new zealand foodservice facilities. J. Acad. 931 Nutr. Diet. 114, 63–71. doi:10.1016/j.jand.2013.09.022 932 Guthrie, J.F., Buzby, J.C., 2002. Several Strategies May Lower Plate Waste in School Feeding Programs. Food Rev. 933 25, 36. 
934 Hanson, C., Lipinski, B., Robertson, Kai; Dias, D., Gavilan, I., 2016. FLW Protocol Steering Committee and Authors 935 Other Contributing Authors, Food and Agriculture Organization of the United Nations. 936 Jacko; C. C.; Dellava; J.; Ensle; K.; & Hoffman; D. J., 2007. Use of the Plate waste. J. Ext. 937 Jurgenson, Á., 2005. Cómo hacer investigación cualitativa. Ecuador: reimp. Paid. pag. 938 Lipinski, B., Hanson, C., Lomax, J., Kitinoja, L., Waite, R., Searchinger, T., 2013. Reducing Food Loss and Waste. 939 World Resour. Inst. 1–40. 940 Marlette, M.A., Templeton, S.B., Panemangalore, M., 2005. Food Type, Food Preparation and competitive food 941 purchases impact school lunch plate waste by sixth-grade students. J. Am. Diet. Assoc. 105, 1779–1782. 942 doi:10.1016/j.jada.2005.08.033 943 Miles, M., Huberman, A., 1994. Qualitative data analysis: An expanded sourcebook. 944 Mirosa, M., Munro, H., Mangan-Walker, E., Pearson, D., 2016. Reducing waste of food left on plates: interventions 945 based on means-end chain analysis of customers in foodservice sector. Br. Food J. 118. 946 Niaki, S.F., Moore, C.E., Chen, T.A., Weber Cullen, K., 2016. Younger Elementary School Students Waste More 947 School Lunch Foods than Older Elementary School Students. J. Acad. Nutr. Diet. 117, 95–101. 948 doi:10.1016/j.jand.2016.08.005 949 Papargyropoulou, E., Lozano, R., K. Steinberger, J., Wright, N., Ujang, Z. Bin, 2014. The food waste hierarchy as a 950 framework for the management of food surplus and food waste. J. Clean. Prod. 76, 106–115. 951 doi:10.1016/j.jclepro.2014.04.020 952 Pratt, M., 2009. From the editors: For the lack of a boilerplate: Tips on writing up (and reviewing) qualitative 953 research. Acad. Manag. J. 954 Principato, L., Secondi, L., Pratesi, C.A., 2015. Reducing food waste: an investigation on the behaviour of Italian 955 27 youths. Br. Food J. 117, 731–748. doi:10.1108/BFJ-10-2013-0314 956 Rickinson, M., Hall, M., Reid, A., 2016. 
Sustainable schools programmes: what influence on schools and how do we 957 know? Environ. Educ. Res. 22, 360–389. doi:10.1080/13504622.2015.1077505 958 Rodriguez Tadeo, A., Patiño Villena, B., Periago Caston, M.J., Ros Berruezo, G., Gonzalez Martinez Lacuesta, E., 959 Alejandra, R.T., Bego??a, P.V., Jesus, P.C.M., Gaspar, R.B., Eduardo, G.M.L., 2014. Evaluando la aceptacion 960 de alimentos en escolares; registro visual cualitativo frente a analisis de residuos de alimentos. Nutr. Hosp. 29, 961 1054–1061. doi:10.3305/nh.2014.29.5.7340 962 Smith, S.L., Cunningham-Sabo, L., 2014. Food choice, plate waste and nutrient intake of elementary- and middle-963 school students participating in the US National School Lunch Program. Public Health Nutr. 17, 1255–63. 964 doi:10.1017/S1368980013001894 965 Székely, F., Knirsch, M., 2005. Responsible leadership and corporate social responsibility: Metrics for sustainable 966 performance. Eur. Manag. J. 23, 628–647. doi:10.1016/j.emj.2005.10.009 967 U.S. Environmental Protection Agency, 2014. A Guide to Conducting and Analyzing a Food Waste Assessment. 968 doi:10.1017/CBO9781107415324.004 969 Ventour, L., 2008. The food we waste, Wrap. 970 Wilkie, A., 2015. Food Waste Auditing at Three Florida Schools. Sustain. 971 Williams, P., Walton, K., 2011. Plate waste in hospitals and strategies for change. E. Spen. Eur. E. J. Clin. Nutr. 972 Metab. 6, e235–e241. doi:10.1016/j.eclnm.2011.09.006 973 Williamson, D.A., Allen, H.R., Martin, P.D., Alfonso, A., Gerald, B., Hunt, A., 2004. Digital photography: A new 974 method for estimating food intake in cafeteria settings. Eat. Weight Disord. 9, 24–28. doi:10.1007/BF03325041 975 World Resources Institute, 2016. Food Loss and Waste Accounting and Reporting Standard, World Resources 976 Institute. 977 Wrap, 2011. Food waste in schools. Banbury, Oxon. 
978 979 work_kscij54fbff23fswb6nowepsq4 ---- R77 Product News Robotic sample processor Rosys Anthos has introduced the new compact Plato 8 robotic sample processor for automated liquid handling and ELISA. Designed to occupy minimal bench space, the Plato 8 utilises the most advanced technology to guarantee maximum performance. With a range of optional modules which can be fully integrated, it offers users the freedom to configure a system which exactly meets their individual requirements. Any or all stages of microplate preparation can be automated, with or without analysis. Users can choose from washable or disposable tips operating in either 2 or 4 tip formats. Circle number 2 on reader response card. New educational materials Hewlett-Packard Europe have recently published a range of educational materials to help instructors, lecturers and tutors teach the basic principles of UV-visible spectroscopy as well as the practical aspects of instrument performance, sample handling and measurement. The materials are available in an instructor’s pack consisting of a primer, workbook and companion CD. The primer describes basic principles and applications of UV-visible spectroscopy, with a particular focus on the advantages of diode-array technology. Information is presented in an easy-to-follow format, with detailed diagrams and graphs. Circle number 3 on reader response card. In Brief Powerful new antibiotic Blasticidin is a nucleoside antibiotic, from Invitrogen, isolated from Streptomyces griseochromogenes. It causes cell death in both prokaryotic and eukaryotic cells by inhibiting protein translation. Resistance to blasticidin is conferred by the bsd gene isolated from Aspergillus terreus. In eukaryotic cells, complete cell death occurs in less then 7 days. Using blasticidin therefore allows you to establish stable cell lines in less than one week. Circle number 4 on reader response card. 
High quality filter papers
Performance and reproducibility are the important characteristics to consider when specifying filter papers. To meet these requirements, filtration specialist Schleicher & Schuell UK has recently extended its range with the introduction of high quality filter papers which combine optimum performance and reproducibility with competitive pricing. These filter papers are manufactured to the highest technical specification using the finest quality of cellulose linters. Over 100 grades of paper can be supplied for an extensive range of applications. Circle number 5 on reader response card.

Automation for microplate assays
Responding to the demand for more choice in assay methodologies, Rosys Anthos has introduced the new AutoFluor system. Bringing together the benefits of robotic plate handling and both fluorometric and photometric analysis, the new AutoFluor introduces walk-away automation for microplate assays. With up to 2 optional dispensers, it is ideal for all fluorescent assays. The new system complements the successful AutoLucy system which offers combined luminometry/photometry. Both models offer an unrivalled level of flexibility and automation for any type of microplate assay. Circle number 6 on reader response card.

Digital photography software
A new, high performance software suite is available from Olympus to process and store digital photomicrographs. DP-SOFT enables the DP10 digital camera to be controlled directly from the PC via the serial interface. It runs on Windows 95 and Windows NT, providing user-friendly tools to calibrate images and perform interactive measurements. Storing digital images on a PC hard disk can be memory intensive. As a result, Olympus developed the Multiple Volume Management (MVM) protocol for DP-SOFT. Up to 230 MB of data can be held on the Olympus PowerMO 230 II Magneto-Optical drive. Circle number 7 on reader response card.
Transfection reagent
Effectene™ Transfection Reagent is a unique new non-liposomal lipid formulation, from Qiagen, offering significant advantages over many liposome reagents and other transfection methods. Effectene Reagent is used together with a DNA-condensing enhancer for exceptionally high transfection efficiencies with a wide variety of cell types, particularly with primary cells. Effectene Reagent is less toxic than many liposome reagents, and enables transfection in the presence of serum. The high stability and consistent structure of the reagent molecules ensure reliable complex formation and exceptional reproducibility. Effectene delivers plasmid DNA into cells with remarkably high efficiency, so that significantly less DNA is required to obtain higher transfection levels. Circle number 1 on reader response card.

work_ksi7u7fhinacjl4qet6m4ojzcq ---- REVIEW

Clinicians taking pictures—a survey of current practice in emergency departments and proposed recommendations of best practice
P Bhangoo, I K Maconochie, N Batrick, E Henry
Emerg Med J 2005;22:761–765. doi: 10.1136/emj.2004.016972

The primary objective of this survey was to establish current practice in emergency departments in the UK. Variation in obtaining consent, how image collection is achieved, and how the images are stored were considered to be important outcomes. An initial postal questionnaire, followed by a phone survey, posed questions about practical and procedural issues when capturing clinical images in emergency departments in the UK.
Altogether, 117 departments replied out of 150 surveyed. Only 21 departments have a written policy permitting medico-legal case photography. A total of 53 do take clinical photographs where no policy exists, seven of which actively take assault/domestic violence images, only four of which document consent. All departments with photographic facilities take images for clinical/teaching purposes. Thirty-two of those without a policy attach the photograph to the clinical notes and so may potentially be called upon for medico-legal proceedings if relevant, which raises issues of adequate consent procedures, storage, and confidentiality. This is particularly pertinent with the increasing use of digital photography and image manipulation. A large variation in current practice has been identified in relation to a number of issues surrounding clinical image handling in emergency departments. Subsequently, recommendations for best practice have been proposed to protect both the patient and the clinician with regard to all forms of photography in the emergency department setting.

See end of article for authors' affiliations. Correspondence to: P Bhangoo, Emergency Department, Central Middlesex Hospital, Acton Lane, Park Royal NW10 7NS, UK; pbhangoo@doctors.org.uk. Accepted for publication 4 August 2004.

Illustrative clinical records are increasingly used in emergency departments and can become part of the patient health records.
Undoubtedly, there are benefits from recording clinical images; a visual record of the presenting physical sign(s) can act as an aide-memoire for clinicians, serial pictures show the progress of the patient's condition over time, and, owing to advances in digital technology and telemedicine, senior specialist opinion may be sought, even remotely from the initial place of presentation. Images taken in potential medico-legal cases, such as non-accidental injury (NAI), assaults, and domestic violence, can provide greater detail than written medical notes alone. Finally, there is an increasing use of these images for medical education and research. There are risks, however, for the patient and the clinician in taking these images if certain safeguards are not followed—for example, breach of confidentiality, invalid consent procedures, the use of material outside of its intended purpose, and access to the images by unauthorised personnel. The use of clinical images is a potential minefield for litigation unless best practice is followed. The primary objective of our postal questionnaire was to establish current practice in emergency departments in the UK on clinical photography—film and digital—and to ascertain how many departments had written guidance about all aspects of this. Following collation and interpretation of the information yielded by the survey, and using other reference sources—for example, General Medical Council consent guidelines—we have proposed specific recommendations of best practice to protect both the patient and clinician in the emergency setting.

METHOD
Altogether, 150 questionnaires with self-addressed return envelopes were sent to accident and emergency (A&E) departments throughout the UK.
It posed the question "Does your department have a written policy for the use of clinical photography (including medico-legal cases)?" and asked which form of photographic media is used in the department (see table 1 for the questionnaire in full). A repeat posting of the same questionnaire was followed by telephoning the departments that had either provided incomplete replies or had failed to respond to the two postings.

RESULTS
Initially 70 questionnaires were returned, after which a second batch was resent to the departments that had failed to reply. This led to a further 27 completed questionnaires being returned. A phone survey was then conducted and a final response rate of 78% (117/150) was achieved. Replies were from consultants, specialist registrars, and senior nursing staff. The data were analysed under two categories: those with and those without a written policy.

Abbreviations: A&E, accident and emergency; NAI, non-accidental injury

Emerg Med J: first published as 10.1136/emj.2004.016972 on 21 October 2005. Downloaded from http://emj.bmj.com/ on 5 April 2021 by guest. Protected by copyright.

The results are tabulated in table 2 and fig 1 and discussed in further detail below.

Departments with written policy on taking images
Altogether, 41 emergency departments (36%) have an existing policy about the use of photography; 20 do not take photographs in cases of assault and domestic violence. Of these departments, four use the images for teaching purposes only, so patients cannot be identified from the image, thereby avoiding any medico-legal difficulties. The remaining 21 have a documented policy for taking medico-legal and clinical/teaching photographs.
Four of the 21 have an annual staff training programme (mainly involving senior nursing staff), detailing the indications for imaging, the need for consent, photographic techniques, and storage of the subsequent images. Those using Polaroid (14) all applied patient labels to the reverse of the image (10 included name and address, 4 only the patient hospital number). Only four departments required images to be signed by the photographer. Those Polaroid images not kept with the notes could only be accessed through the Trust's medico-legal department. Of those using digital cameras (9), two printed the image immediately after capture and labelled the reverse with the patient's label and then deleted the image (only two of which were cross-signed). The remaining departments stored the images on disk, which could only be accessed by named senior staff. However, only two of the seven departments were aware that they were using specific software packages that did not allow image manipulation.

Only two departments have a written policy addressing photography for clinical and educational purposes. A consent form is printed on the A&E card, then the images are taken, either digital or Polaroid, and then scanned by the medical illustration department, who store and categorise them separately from the patient's notes.

Departments without written policy on taking images
Of the 74 (63%) departments that have no written policy, 21 actually have no facilities to take photographs. However, six departments commented that their medical illustration department could take photographs during working hours. If a Polaroid picture is taken then in all cases the image is attached to the medical notes in anticipation of the patient's next attendance—for example, at review clinic. Digital images (21 departments) are stored on a departmental computer in 5/21 cases, with the images being deleted after the patient has been discharged.
The remaining 16 departments only store the image until the referral has been made to the specialist team, at which point the image is shown and then deleted. Written consent for photographs used for clinical assessment is documented in the medical notes in only 10 out of 53 departments. Seven (13%) of those with cameras do take photographs in cases of assault and domestic violence even though there is no formal/written policy. Of these, only 4/7 document consent. All used patient hospital labels for identification. One department reported that they have since discarded their camera after the staff member who took the image for clinical purposes was called to court to testify to its authenticity. None of the departments surveyed, with or without policies, took photographs in suspected NAI cases. All have local departmental guidelines for referral to specialist hospital or community paediatricians.

Table 1 The questionnaire sent to UK accident and emergency departments
1. Does your department have a written policy for the use of clinical photography?
2. Does your department take photographic images in the following cases and, if so, please give details of format, consent, documentation, and image storage:
   - Domestic violence
   - Assault
   - Non-accidental injury
   - Teaching
   - Clinical purposes

Figure 1 Results of the questionnaire. [Flowchart of the 117 replies: 41 (36%) had a policy, comprising 21/117 (17%) where photography is permitted and 20/117 (18%) where photography is not permitted (all images used for teaching); 74 (63%) had no policy, comprising 53/117 (45%) with facilities to take photographs (all take images for clinical/teaching purposes, 7 of these departments also take photographs in medico-legal cases, and 6 use the medical illustration department during working hours only) and 21/117 (17%) with no facilities in the department (4/20 have cameras). Media breakdowns (digital, Polaroid, both, 35 mm film) are given for each group.]
DISCUSSION
Should clinicians be taking images in emergency departments at all, considering the potential legal implications? Certainly the survey has highlighted how much variation there is in practice—for example, from how and if consent is obtained, to patient identification, and the potential for breaches of confidentiality. General Medical Council guidelines published in 2002 on Making and Using Visual and Audio Recordings of Patients1 advise on the use of images for clinical, educational, and research purposes.

1. Consent
Images taken for clinical purposes form part of the patient's health record. Consent to x rays and ultrasound investigations is given implicitly by the patient undergoing those procedures. Similarly, by presenting for treatment and investigation, the patient enters into a tacit agreement to documentation, which includes images as well as written information. An image taken for the purpose of treating a patient must not be used for any other purpose without express consent. However, if such an image is subsequently to be published, or used for educational research, written consent must be sought for that specific purpose. Consent is not required when the image taken for treatment or assessment does not allow the patient to be recognised; in this case, the image can be used for educational or research purposes, with the caveat that "express consent must be sought for any form of publication". When making a judgement about whether the patient may be identifiable, one should bear in mind that apparently insignificant features may still be capable of identifying the patient to others.
As it is difficult to be absolutely certain that a patient will not be identifiable, clinical photographs should not be published or used in any form to which the public may have access without the consent of the patient.1 If the patient is unable to give consent when the image is taken—for example, he or she is unconscious—then the image cannot be used until the patient has the capacity to give consent. Permission may be given by the immediate family if the patient is likely to be permanently incapacitated. Young people aged 16 years or over are assumed to be competent to give consent; under this age, Gillick competency2 (the level of competence of an individual in decision making, referred to in Gillick v. Norfolk & Wisbech Area Health Authority) to give consent may have to be determined. It must be explained when obtaining consent that once the image is in the public domain, it is difficult to control its future use and it may not be possible to withdraw the image. The latest General Medical Council guidelines, published in April 2004, detail all aspects of patient confidentiality and consent procedures.3

2. Confidentiality, document storage, and authenticity
Confidentiality is essential in any clinical consultation. The Data Protection Act 1998 [4] covers all NHS material concerning any individual patient (with the exception of anonymised information), including electronic or paper documentation. "Disclosures required by law or made in connection with legal proceedings" are exempted from the non-disclosure provisions by virtue of section 35 of the Act. The Act aims to protect the controlled flow of data while ensuring that confidentiality is not undermined. More recently, the Caldicott Report 2000 [5] addressed concerns about the ease with which data can be disseminated. Patient confidentiality must be maintained and the patient's best interest ensured when sharing information between NHS and non-NHS bodies.
How the images are stored and shared are key elements in maintaining patient confidentiality. Storage of the image varies according to the media used. Concerns about digital storage and possible digital image manipulation can be minimised by the use of software packages that prevent digital image manipulation (addressed later). Some departments also print off hard copies and subsequently delete the image in the digital camera. This printed image is stored in the written medical notes or is filed separately, and is accessible through the NHS Trust's medico-legal department. If the image is kept on disk it may not be acceptable for legal purposes as it is not stored by a secure system. Polaroid images are stored with the notes or in a separate sealed envelope, and the image may have a patient label affixed to the back; some departments encourage the photographer to sign across and beyond the label to avoid the suggestion of tampering. Departments using film photography have an agreement with their medical illustration department to develop the films in-house to avoid sending films outside for processing; medical illustration departments retain a copy of the image. When printing the image it is recommended that photographic materials are processed within the hospital under secure arrangements.6

Authenticity of digital photography
The authenticity and admissibility of digital images as legal evidence has been addressed by the Fifth and Eighth Reports (House of Lords Select Committee on Science and Technology, 1998) [7, 8], as there is the potential for alteration of the image, so jeopardising its value in legal proceedings. Key recommendations were the use of an audit trail and proposals for the use of image watermarks; the audit trail should follow how an image has been used once obtained. This trail includes using computer software and operating system technology so that any processing an image has undergone is thoroughly documented.
The British Standards Institution Code of Practice for the Legal Admissibility of Information on Electronic Management Systems9 sets out approved procedures and documentation for the monitoring of systems producing evidential images.

Table 2 Results of the questionnaire
Written policy in department for taking photographs (21/117):
- Purpose: 21 medico-legal; 2 teaching/clinical
- Storage of image, Polaroid (14/21): 6 in clinical notes; 2 in medical illustration department; 6 separately in fireproof cabinet
- Storage of image, digital (9/21): 2 immediately deleted after being printed; 7 stored on disc in department, limited access (only 2 used non-manipulation software)
- Consent: 17/21 specific consent form; 3/21 written consent in notes
No policy but facilities to take photographs in department (53/117):
- Purpose: 7 medico-legal; 53 teaching/clinical
- Storage of image, Polaroid (32/53): all attached to written clinical notes, including the assault/domestic violence cases
- Storage of image, digital (21/53): 5 stored and deleted when the patient is discharged; 16 deleted when referred to speciality
- Consent: 43/53 verbal consent; 10/53 written consent in notes; in the 7 departments taking medico-legal photographs, only 4 documented consent

The British Standards Institution Code stresses that the way in which the image is managed is as important as the technology used. CD-ROM is cited as the most up to date storage mode in WORM (write once read many times) format; this is specified as the standard device for storing data in the Code. Watermarks are being developed to "brand" a digital image. The watermark is hidden within the image and requires specific software for it to be visualised. The invisible watermark may be permanent.
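The tamper-evidence goal behind such watermarking can be illustrated with a simpler, widely used mechanism: recording a cryptographic hash of the image file at capture time and re-checking it before the image is relied on. This is only an illustrative sketch, not a technique described in the article; the byte strings and workflow here are hypothetical.

```python
import hashlib

def file_digest(data: bytes) -> str:
    """Return the SHA-256 digest of raw image bytes as a hex string."""
    return hashlib.sha256(data).hexdigest()

# Hypothetical workflow: hash the image at capture and store the digest
# in the audit record; verify it before the image is used in evidence.
original = b"\x89PNG...raw image bytes captured in the department..."
recorded_digest = file_digest(original)

# Any later modification, however small, changes the digest, so tampering
# is detectable (analogous to a fragile watermark being destroyed).
tampered = original + b"\x00"
assert file_digest(original) == recorded_digest
assert file_digest(tampered) != recorded_digest
print("integrity check passed")
```

A hash alone does not prove who took the image or when, which is why the reports cited above pair such integrity checks with a documented audit trail.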
Even more useful for evidential purposes is the development of a fragile watermark that is destroyed by any processing or modification of the original captured image. Image quality Accurate colour rendition and image definition are vital. Studies have shown no gain in diagnostic quality over analogue images.10 11 This debate will continue as there are many variables: from any effect of magnification, to different camera resolutions, and the accuracy of the computer scanners/printers reproduction of the original image. Cost and issues of practicality have a bearing—for example, the ease of use of the chosen type of camera, how to store the image, and its processing, as well as maintaining the image’s authenticity impact on this debate. RECOMMENDATIONS Consent Consent procedures should be followed by those involved with obtaining clinical images, as this will protect the clinician and patient from misuse or misunderstanding about the intended use of the image. Although consent is not required for an anonymised image to be used for educational purposes it is common courtesy to explain how the image will be used, particularly as it may come into the public domain. Although an image that is solely to be used for treatment/ clinical purposes does not require written consent, the existence of a written statement for consent stating the use of the material would help protect the patient and clinician in the event of litigation. Ideally written consent should include: 1. An explanation of the need for and purpose of such documentation. 2. That the images will form part of their confidential health records. 3. These images may be used for research or educational purposes, or both. 4. The name and signature of the medical practitioner and consultant. If consent for clinical purposes only has been given then it must be clearly recorded as such. Identification and verification This is mainly of concern with evidential photography. The following should be recorded: 1. 
Date and time. 2. Name of consultant, photographer, and any others present, including chaperone and parents/guardians in NAI cases. 3. Camera used and film type. 4. Number of images captured. 5. Sites at which photographs were taken, documented on a drawn (or template) body map in the notes. Each image and print should be labelled. There is debate as to whether the patient's hospital number alone is adequate or whether more patient detail (including date of birth and name), as well as the time, date, and name of the photographer, is required; however, it is suggested that more identifiers than the hospital number alone are used, to eliminate difficulties with identification at a later date.

Storage and disposal of unwanted images
For treatment purposes only, storing prints within the patient's medical notes may be the most appropriate option. In medico-legal cases, hard copies should be developed and stored in-house, in a fireproof cabinet, in a sealed envelope away from the notes, accessible only to senior named staff. Alternatively, all hard copy images can be scanned or copied by the medical illustration department, where they can be stored and categorised. Destruction of unwanted hard copies should be by shredding or incineration, ideally in the department.9 Digital images should be captured and stored by software packages that do not allow manipulation; alternatively, hard copies should be produced and stored as above and the original captured image deleted.

Written statement
The photographer is rarely required to give evidence in court; this may be further minimised by a comprehensive contemporaneous written statement made when the image is taken. This should ideally include all the points mentioned under patient verification, as well as:
- That the photographs are a true likeness of the injuries at the time the images were taken.
- At whose request the photographs were taken (particularly important in NAI).
- Who has possession of the original images.
This information can thereafter be transposed onto a police statement form, which will contain the following formal declaration: "This statement, consisting of x pages each signed by me, is true to the best of my knowledge and belief and I make it knowing that, if it is tendered in evidence, I shall be liable to prosecution if I have wilfully stated anything which I know to be false or do not believe to be true." As a general rule, this should obviate the need for oral testimony.

Departmental policy
If any form of patient photography is to be allowed, then a clearly written, easily accessible policy should exist to protect both patients and staff. This should include guidance for clinical and teaching image capture covering the above recommendations. If photography is to be allowed for medico-legal purposes, there should be clear guidelines and proformas to aid adequate and appropriate documentation in these cases, and consideration should be given to senior staff undergoing appropriate training. Guidelines in medico-legal cases should include: 1. Written consent forms (ink stamps for clinical purposes). 2. History and examination sheets, including body maps. 3. If images are not taken, set procedures (for example, police photography or medical illustration) should be encouraged where possible. 4. Identification and verification if images are taken. 5. Written statement (as mentioned above) if photography is to be allowed on site. 6. Image storage. 7. Contact details of appropriate staff, including domestic liaison staff, social workers, local police, and refuge centres.

CONCLUSION
Whether for educational, clinical, or medico-legal purposes, the taking, storing, and use of an image raises the issues of
consent and confidentiality. We have highlighted a large variation in practice among emergency departments in the United Kingdom that use photography routinely (whatever the indication may be), and recommend that a written policy in each department, covering all aspects of photography, be mandatory. By implementing a nationally accepted standard of practice on consent, patient identification, image storage, and documentation, best practice can safeguard the interests of both the patient and the clinician.

Authors' affiliations
P Bhangoo, I Maconochie, N Batrick, Accident and Emergency Department, St Mary's Hospital, London, UK
E Henry, Barrister, Hollis Whiteman Chambers, Temple, London, UK

Competing interests: none declared

REFERENCES
1 Making and using visual and audio recordings of patients. 2002. http://www.gmc-uk.org/standards/default.htm (accessed 29 May 2004).
2 Gillick v West Norfolk and Wisbech Area Health Authority. 1985. http://www.hrcr.org/safrica/childrens_rights/Gillick_WestNorfolk.htm (accessed 30 June 2004).
3 Confidentiality: protecting and providing information. 2004. http://www.gmc-uk.org/standards/default.htm (accessed 29 May 2004).
4 Department of Health. The Data Protection Act. 1998. http://www.doh.gov.uk/dpa98 (accessed 20 October 2003).
5 The Caldicott Committee. Report on the review of patient-identifiable information. 1997. http://www.doh.gov.uk/confiden/crep.htm (accessed 21 October 2003).
6 Cull P, Gilson CC. Confidentiality of illustrative clinical records: code of practice, guidance notes and recommendations. J Audiov Media Med 1986;9:124-30.
7 House of Lords Select Committee on Science and Technology. Fifth Report. 1998.
http://www.parliament.the-statinery-office.co.uk/pa/ld200102/ldselect/lddelreg/50/5002.htm (accessed 14 October 2003).
8 House of Lords Select Committee on Science and Technology. Eighth Report. 1998. http://www.parliament.the-statinery-office.co.uk/pa/ld200102/ldselect/lddelreg/50/5002.htm (accessed 14 October 2003).
9 British Standards Institution. A code of practice for the legal admissibility of information on electronic document management systems. 1996:206-7. http://www.bsi-global.com/portfolio+of+Products+and+services/Books+Guides+Management/pd0008xalter (accessed 5 December 2003).
10 Smith J. Digital imaging: a viable alternative to conventional medico-legal photography? J Audiov Media Med 2001;24(3):129-31.
11 Axelsson B, Boden K, Fransson SG, et al. A comparison of analogue and digital techniques in upper gastrointestinal examinations: absorbed dose and diagnostic quality of the images. Eur Radiol 2000;10(8):1351-4.

Matamalas et al. Scoliosis 2014, 9:23 http://www.scoliosisjournal.com/content/9/1/23

RESEARCH (Open Access)

Reliability and validity study of measurements on digital photography to evaluate shoulder balance in idiopathic scoliosis
Antonia Matamalas1*, Juan Bagó1, Elisabetta D'Agata2 and Ferran Pellisé1

Abstract
Objective: To determine the validity of digital photography as an evaluation method for shoulder balance (ShB) in patients with idiopathic scoliosis.
Material and methods: A total of 80 patients were included (mean age 20.3 years; 85% women). We obtained a full x-ray of the vertebral column and front and back clinical photographs for all patients. On the antero-posterior x-rays we measured the proximal thoracic curve angle (PTC). To evaluate radiological shoulder balance we calculated the clavicle-rib intersection angle (CRIA) and T1-tilt. On the clinical photographs we measured the shoulder height angle (SHA), the axilla height angle (AHA) and the left/right trapezium angle (LRTA). We analyzed the reliability of the different photographic measurements and the correlation between these and the radiological parameters.
Results: The mean magnitudes of PTC, CRIA and T1-tilt were 19°, −0.6° and 1.4°, respectively. Mean SHA from the front was −1.7°. All photographic measurements revealed excellent to near perfect intra- and inter-observer reliability in both photographic projections. No correlation was found between ShB and the magnitude of the PTC. A statistically significant correlation was found between clinical shoulder balance and radiological balance (r between 0.37 and 0.51).
Conclusions: Digital clinical photography appears to be a reliable method for objective clinical measurement of ShB. The correlation between clinical and radiological balance is statistically significant although moderate to weak.
Keywords: Cosmetic, Idiopathic scoliosis, Photography, Shoulder balance

Introduction
Cosmetic disorder is one of the main reasons to treat idiopathic scoliosis patients (SOSORT Consensus) [1]. Shoulder balance (ShB) has been considered a characteristic of the deformity in idiopathic scoliosis [2-4]. According to Raso, this represents 75% of the perceived deformity of the trunk, together with asymmetry of the scapulae and shoulder girdle [2]. To evaluate this balance correctly, reliable tools are necessary.
Different evaluation methods, ranging from radiological and clinical to topographical, have been proposed over the years. Hong et al. [5] recently evaluated the reliability and validity of the different radiological methods to evaluate ShB and concluded that, in general, all outcomes have reliable intra- and inter-observer reliability. Nonetheless, radiological balance does not appear to optimally correspond with clinical balance, which suggests that clinical parameters should complement radiological outcomes [6].

* Correspondence: amatamalasadrover@gmail.com
1 Department of Orthopaedic Surgery, Hospital Vall d'Hebron, P Vall d'Hebron, 119, 08035 Barcelona, Spain. Full list of author information is available at the end of the article.
© 2014 Matamalas et al.; licensee BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.

Different methods have been proposed to assess shoulder imbalance [2-4]. Zaina et al. have developed a tool for routine clinical use (TRACE), consisting of photographs depicting different severities of four aspects of trunk deformity: shoulder, scapulae, hemithorax and waist. This instrument relies on the subjective impression of the observer [7]. Surface topometry methods such as Moiré-Fringe or 3D scan (Vitrus) [8] have been used. Nonetheless, these systems require expensive equipment and a trained operator, which means that their usefulness in clinical practice
remains a moot point. Conversely, clinical photography corrects some of these defects: the equipment is cheap and simple to handle, and images are quick to obtain. These advantages suggest that it may be of considerable interest in daily practice. Two indices have traditionally been used to study the shoulders in clinical photography: the shoulder height difference (in cm or by means of an angular measurement) and the axillary fold height difference [9-11]. Other indices to evaluate ShB have recently been proposed: the shoulder height difference at the level of the upper border of the trapezium muscle [6,9,11,12] and the area of the trapezium muscles [12]. The problem with these latter indices lies in the fact that each author defines them differently. Furthermore, data on the reliability of these outcomes are incomplete because no inter-observer reliability data are available for any of them, especially for front-view photography. We think it is necessary to ascertain the validity of measurements taken from the front, because this view corresponds to the patient's view when they look in a mirror. The aims of our research are twofold: a. to determine the test-retest reliability of various clinical measurements taken with digital photography and to compare the data between front and rear shots; b. to determine the validity of these photographic measurements by analyzing their relationship with the radiological measurements of ShB.

Figure 1 Radiological measurements of shoulder balance. (a) T1-tilt. (b) Clavicle-rib intersection angle (CRIA).

Materials and methods
This is a cross-sectional study approved by the clinical research ethics committee of Hospital Vall d'Hebron. The inclusion criteria were: idiopathic scoliosis with a largest Cobb angle (MLC) greater than 25° in the coronal plane, age between 12 and 40 years, and agreement to take part in the study.
Patients were recruited consecutively; only patients who had not received surgery were included. At the time the pictures were taken, no patient was on active brace treatment. The sample was stratified according to MLC into two groups: <45° and ≥45°. This cut-off value of 45° was chosen because at this magnitude surgical treatment can be recommended. For each patient a postero-anterior x-ray of the full trunk in standing position was performed in the week before taking part in the study.

Radiological measurements
The following were recorded on the postero-anterior x-ray: the magnitudes of the proximal thoracic (PTC), main thoracic (MTC) and thoraco-lumbar/lumbar (TLLC) curves. Furthermore, the tilt with respect to the horizontal of T1 (T1-tilt) (Figure 1), of the lower end vertebra of the PTC (PTC_LEV), of the lower end vertebra of the MTC (MTC_LEV) and of the lower end vertebra of the TLLC (TLLC_LEV) (Figure 2) was measured. Radiological ShB was calculated by means of the clavicle-rib intersection angle (CRIA). CRIA is defined as the angle formed by the horizontal and a line that joins the intersection points of the clavicles with the rib-cage (Figure 1).

Figure 2 Radiological measurements of end vertebra. (a) Lower end vertebra of the proximal thoracic curve (PTC_LEV). (b) Lower end vertebra of the main thoracic curve (MTC_LEV). (c) Lower end vertebra of the thoraco-lumbar/lumbar curve (TLLC_LEV).

Photographic measurements
Each patient underwent clinical photography on the same day as the visit, with just one trained examiner (EA) undertaking the entire process. To acquire the photographs, a digital Nikon D5100 (Nikon Corporation, Tokyo, Japan) camera was used for both photographs, mounted on a tripod at 110 centimeters in height and at a distance of 130 centimeters.
The patients' position was standardized on a cross previously marked on the floor. Patients were told to adopt a relaxed standing position when the photographs were taken. All of them were photographed in an anterior (front) and a posterior (back) view. For each photograph three photographic indices were calculated:
Left/right trapezium angle (LRTA): The trapezium angle is defined as the angle between the line following the external border of the trapezium muscle and the horizontal. The left/right ratio of this angle was used for statistical analysis (Figures 3 and 4).
Shoulder height angle (SHA): The angle formed between the line that joins the upper borders of both acromion processes and the horizontal (Figures 3 and 4).
Axilla height angle (AHA): The angle formed between the line that joins the upper borders of both external axillary folds and the horizontal (Figures 3 and 4).
The SurgimapSpine® (Nemaris Inc, New York, United States) software was used for both the x-ray and the clinical photography measurements. Both the radiological and the photographic measurements were assigned a positive or negative value according to the tilt direction. The right-hand thumb rule was used for this: looking at the individual (or the x-rays) from the back, a clockwise tilt was considered positive and an anti-clockwise tilt negative. For example, tilt of an end vertebra towards the right was assigned a positive value; elevation of the left shoulder was assigned a positive value; curves with convexity to the left and right were assigned positive and negative values, respectively.

Statistical analysis
Descriptive statistics (mean and range) were used to report the patient characteristics and the radiological and photographic measurements. To determine the reliability of the photographic indices, an intra- and inter-observer reliability analysis using the intraclass correlation coefficient (ICC) with absolute agreement and a 95% confidence interval was performed.
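To make the angular indices and the sign convention concrete, the sketch below computes a tilt angle of the SHA/AHA type from two manually marked landmark pixels. This is an illustration only, not part of the study's Surgimap workflow; the function name and coordinates are hypothetical, and image coordinates are assumed to have the y axis pointing down.

```python
import math

def tilt_angle(left_pt, right_pt):
    """Tilt (degrees) of the line joining two bilateral landmarks vs the horizontal.

    Points are (x, y) pixel coordinates with y increasing downwards, as in
    image files. Viewed from the back (patient's left on the viewer's left),
    a positive result means the left landmark is higher, matching the paper's
    convention that elevation of the left shoulder is positive.
    """
    (xl, yl), (xr, yr) = left_pt, right_pt
    # A higher left landmark has the smaller y value, so yr - yl > 0
    # yields a positive (clockwise) tilt.
    return math.degrees(math.atan2(yr - yl, xr - xl))

# Hypothetical acromion landmarks on a back-view photograph:
sha = tilt_angle((420, 310), (880, 335))  # ≈ 3.1°: left shoulder slightly higher
```

The same helper could be applied to two points along each trapezium border line, taking the left/right ratio of the two resulting angles for LRTA as described above.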
To analyze reliability, the photographs of the first 60 patients (30 cases with MLC < 45° and 30 cases with MLC > 45°) were used. Each photograph was measured by three evaluators (AM, JB, EA) on two separate occasions, one week apart. To calculate intra-observer reliability, all three observers' measurements were analyzed (180 measurements in total); the first and second measurements of all observers were jointly compared. To calculate inter-observer reliability, the first measurement made by each observer was used. The intra- and inter-observer intraclass correlation coefficients were obtained for each measurement. Following Landis & Koch [13], the following scale was used to interpret the ICC: <0.20 minimal or inexistent relationship; 0.21-0.40 poor; 0.41-0.60 moderate; 0.61-0.80 excellent; and >0.81 near perfect. To study convergent validity, Pearson correlation coefficients between the photographic and radiological measurements were calculated.

Figure 3 Photographic measures in front view. (a) Right and left trapezium angle. (b) Shoulder height angle. (c) Axilla height angle.
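The reliability analysis above relies on the absolute-agreement ICC. As a sketch only (the authors used SPSS; this is the single-measures, two-way random-effects form, ICC(2,1) in the Shrout-Fleiss notation), it can be computed from a patients-by-observers matrix as follows.

```python
import numpy as np

def icc_2_1(ratings):
    """ICC(2,1): two-way random effects, absolute agreement, single measures.

    `ratings` is an (n_subjects x k_raters) array, e.g. one photographic
    angle measured on every patient by each observer.
    """
    x = np.asarray(ratings, dtype=float)
    n, k = x.shape
    grand = x.mean()
    rows = x.mean(axis=1)   # per-subject means
    cols = x.mean(axis=0)   # per-rater means

    # Mean squares from the two-way ANOVA decomposition
    msr = k * np.sum((rows - grand) ** 2) / (n - 1)   # between subjects
    msc = n * np.sum((cols - grand) ** 2) / (k - 1)   # between raters
    mse = np.sum((x - rows[:, None] - cols[None, :] + grand) ** 2) / ((n - 1) * (k - 1))

    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Two raters in near-perfect agreement on four subjects:
print(round(icc_2_1([[1, 2], [3, 3], [5, 6], [7, 7]]), 3))  # 0.96
```

The standard error of measurement reported with the ICCs in the Results can then be derived from the observed standard deviation and the ICC.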
A total of 68.8% of patients had attained skeletal maturity (Risser 4 and 5) at the time of inclusion into the study and 16.3% were immature patients (Risser 1 and 2). The distribution of frequencies of the different kinds of curve according to the Lenke Figure 4 Photographic measures in back view. (a) Right and left trapez classification [17] was: type 1 (27.5%), type 2 (5%), type 3 (26.3%), type 4 (2.5%), type 5 (32.5%) and type 6 (6.3%). Table 1 shows the means and range of different radiolo- gical and photographic measurements. Reliability and standard error of measurement Table 2 represents intra and inter-observer reliability values and the standard error of measurement for the different photographic measurements. As can be seen in the table, the intra-observer ICC values for SHA and AHA both front and back were >0.80 indicating a near perfect correlation; for LRTA the intra-observer ICC were excellent (0.79 and 0.78 respectively). The inter-observer ICC were slightly less although within the near perfect correlation range (> 0.80) except for the front LRTA which was excellent (0.65). Concordance The SHA and AHA angles in the photograph taken frontally presented poor concordance (CCC 0.66; 95% ium angle. (b) Shoulder height angle. (c) Axilla height angle. 
CI = 0.53-0.76). The same occurs with the SHA angles for front and back, whose concordance was also poor (CCC 0.49; 95% CI = 0.32-0.64).

Table 1 Descriptions of the radiological and photographic outcomes

Variable      Mean (°)   Range (minimum, maximum)
Radiological
PTC           18.9       −15.9, 46.3
PTC_LEV       −17.4      −42, 19.2
MTC           −33.5      −78.0, 53.2
MTC_LEV       16.5       −35, 42.1
TLLC          24.1       −55.6, 60.9
TLLC_LEV      −7.7       −33.7, 36.9
T1 tilt       1.4        −16.9, 29.2
CRIA          −0.6       −12.1, 7.3
Photographic
LRTA back     1.1        0.7, 2.0
SHA back      −0.7       −8.4, 6.2
AHA back      −1.8       −8.2, 7.9
LRTA front    1.2        0.6, 3.4
SHA front     −1.7       −11.1, 5.5
AHA front     −2.3       −11.9, 6.4

PTC (Cobb proximal thoracic curve); PTC_LEV (lower end vertebra of the proximal thoracic curve); MTC (Cobb main thoracic curve); MTC_LEV (lower end vertebra of the main thoracic curve); TLLC (Cobb thoraco-lumbar/lumbar curve); TLLC_LEV (lower end vertebra of the thoraco-lumbar/lumbar curve); T1 tilt (T1 inclination angle); CRIA (clavicle-rib intersection angle); LRTA (left/right trapezium angle ratio); SHA (shoulder height angle); AHA (axilla height angle).
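Lin's concordance correlation coefficient, used in the concordance analysis above, penalizes both location and scale shifts between two measurement series. A minimal sketch (illustrative only; the study used SPSS) is:

```python
import numpy as np

def lin_ccc(x, y):
    """Lin's concordance correlation coefficient between two measurement
    series, e.g. front-view versus back-view SHA for the same patients."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()                 # population (1/n) variances
    cov = ((x - mx) * (y - my)).mean()
    return 2 * cov / (vx + vy + (mx - my) ** 2)

# A constant offset lowers concordance even with perfect correlation:
print(round(lin_ccc([1, 2, 3], [2, 3, 4]), 4))  # 0.5714
```

Note that Pearson's r for the offset series above is 1.0, while the CCC falls below McBride's "poor" threshold of 0.90, which is exactly why the CCC rather than r is used to judge interchangeability of two measurements.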
For MTC we observed a moderate correlation (r = 0.46) with AHA posterior view; there were poor correla- tions with the frontal view clinical parameters (r ranging from −0.27 to 0.27. TLLC was moderately to poorly corre- lated with all photographic parameters both for front and back view (r ranging from −0.45 to 0.35). The end verte- brae of the different curves revealed similar correlations to those of the overall magnitude of the curve. The radiological measurement of ShB (CRIA) showed moderate correlation with the photographic measure- ments, especially the frontal photograph (LRTA r = −0.45; SHA r = 0.48 y AHA r = 0.51). A statistically significant correlation was found between the photographic measure- ments and T1-Tilt, especially for the SHA in the frontal view (r = 0.51). Discussion Shoulder balance is considered characteristic of idio- pathic scoliosis. Use of clinical photography to measure ShB has not been fully analyzed. Clinical photography offers a series of practical advantages: it is cheap, simple to handle and images are almost immediately available. The aims of our research were to determine the reliability of various measurements taken using digital photography and to evaluate their relationship with radiological para- meters in a non-selected population of patients with idio- pathic scoliosis. Selection of measurements An initial step to design the research was to decide which measurement to include in the study. It was decided to select angular measurements to avoid problems from the calibration necessary when using linear measurements. In the case of asymmetry of the trapezium muscles, it was preferred to use angular measurements instead of surface areas as we believe that the latter is more complex and not very useful in daily clinical practice. We also ruled out using skin markers as other authors had done previously. We believe that this methodology lengthens examination time and introduces a new source of bias. 
We therefore preferred to define, a priori, the anatomic points to be used as a reference for the measurements. After these prior considerations, it was decided to record three parameters: 1. Shoulder height angle (SHA): formed between the line that joins the upper border of both acromion processes and the horizontal. This is the parameter which should, a priori, collate shoulder imbalance better. This parameter has been used previously by other researchers both for front [9,12] and back Table 3 Correlation between the clinical outcomes of imbalance of the shoulders and the radiology Correlations Back Front LRTA SHA AHA LRTA SHA AHA r p r p r p r p r p r p CRIA −0.35 0.00 0.39 0.00 0.35 0.00 −0.45 0.00 0.48 0.00 0.51 0.00 T1 −0.35 0.00 0.37 0.00 0.31 0.005 −0.35 0.00 0.51 0.00 0.44 0.00 PTC −0.07 n.s. 0.03 n.s. −0.26 0.02 −0.07 n.s. 0.04 n.s. −0.03 0.02 MTC −0.16 n.s. 0.19 n.s. 0.46 0.00 −0.27 0.02 0.24 0.03 0.27 0.01 TLLC 0.35 0.002 −0.33 0.003 −0.45 0.00 0.23 0.04 −0.26 0.02 −0.30 0.007 PTC_LEV −0.11 n.s. 0.17 n.s. 0.45 0.00 −0.27 0.15 0.25 0.02 0.29 0.008 MTC_LEV 0.19 n.s. −0.19 n.s. −0.42 0.00 0.24 0.29 −0.21 n.s. −0.24 0.03 TLLC_LEV −0.44 0.00 0.39 0.00 0.36 0.001 −0.15 n.s. 0.25 0.02 0.28 0.01 LRTA (Left/right trapezium angle); SHA (Shoulder height angle); AHA (Axilla height angle); CRIA (Clavicle-rib intersection angle); T1 tilt (T1 inclination angle); PTC (Proximal thoracic curve Cobb angle); MTC (Main thoracic curve Cobb angle); TLLC (Thoraco-lumbar/lumbar curve Cobb angle); PTC_LEV (Lower end vertebra of the proximal thoracic curve); MTC_LEV (Lower end vertebra of the major thoracic curve); TLLC_LEV (Lower end vertebra of the thoraco-lumbar/lumbar curve). Matamalas et al. Scoliosis 2014, 9:23 Page 6 of 9 http://www.scoliosisjournal.com/content/9/1/23 [6,9,12] photography. 
Some authors [18] have used the linear measurement by calculating the difference in cm from the upper border of each acromion process to a horizontal line perpendicular to the axillary fold. This methodology requires calibration, whereby it was rejected. Furthermore, for SHA we only have reliable data from back photography [11]. 2. The left/right ratio trapezium angle (LRTA) reported as the angle formed by the external border of the trapezium muscle with the horizontal. We think this could be equivalent to the Ln [L/R Trapezium Area] reported by Ono [12]. These authors found a statistically significant correlation between this parameter and the radiological variables. Nonetheless, there are no reliable data for this measurement and, in our opinion; its calculation is excessively complex for routine use. The possibility of recording an evaluation parameter for the trapezium area was put forth by prior publications which indicate its relationship with the proximal thoracic curve. 3. Axilla height angle (AHA): formed between the line that joins the upper border of both acromion processes and the horizontal. This parameter has also been used previously [6,11]. It was decided to include it to analyze its possible relationship with radiological shoulder imbalance with the intention of having a second parameter to estimate shoulder imbalance for those cases when SHA is not reliable. For AHA we only have reliable data from the measurement during back photography [11]. Reliability and concordance Most of the measurements selected revealed excellent-near perfect intra and inter observer reliability (ICC > 0.70); the inter-observer ICC were slightly less, a data already reported by other authors [19]. The reliability data are very similar for frontal and back views. Yang et al. repor- ted somewhat more reliability (intra-observer reliability 0.97 for both measurements and inter-observer reliability 0.99 and 0.97 respectively) for the back photography [11]. 
The intra- and inter-observer reliability values for LRTA, SHA and AHA from the front, used in our work, have not been previously published. We found poor concordance between SHA and AHA, which suggests that one measurement cannot be estimated from the other when analyzing the clinical balance of the shoulders. Similarly, when evaluating the concordance between front and back SHA we found poor concordance between both measurements (CCC 0.49, 95% CI 0.32-0.64); this indicates that the two measurements are not interchangeable.

Relationship between the photographic and radiological measurements
Overall, the correlations found between clinical and radiological parameters may be considered moderate to poor, and in no case greater than 0.6. Behavior was similar for the three parameters evaluated (SHA, AHA and LRTA) in the two photographic views (front and back). This low correlation is similar to that reported when analyzing correlations between the radiological parameters and those obtained with topographic analysis techniques [20]. From our results, the lack of correlation between clinical ShB and the magnitude of the PTC is notable, because it is usually accepted that one of the factors with an impact on shoulder imbalance is the structural nature of this curve. Other previous publications had found poor or inexistent correlations between radiographic and photographic measurements in type 1 and 2 Lenke curve series [6,9,12,21]. These data suggest that the PTC does not have a significant impact on clinical shoulder balance. Conversely, we have found a moderate correlation between MTC and TLLC and the photographic measurements of ShB, especially with the back AHA (r = −0.44). Yang and Qiu found a similar correlation [6,9] and Hong et al.
reported that post-operative ShB, in a series of patients who had received surgery, was related to the correction of the MTC and TLLC [22]. These findings would indicate that clinical ShB is in part influenced by the magnitude of the main thoracic curve and the lumbar curve. The tilts of the end vertebrae of the different curves correlated with the photographic measurements in a similar way to the overall values of the curves. No especially interesting correlation was found; therefore, the vertebra-by-vertebra analysis does not appear to be useful. Overall, the parameters measured in the frontal view reveal correlations with the radiographic measurements somewhat higher than those found for the rear view. Specifically, we would point out the correlation between SHA and CRIA (r = 0.48) and between SHA and T1-tilt (r = 0.51). Therefore, we would venture to recommend that the study of ShB be performed on photographs taken from a frontal view, although we are aware that this shot may be a reason for conflict or rejection, especially in the case of women. The photographic parameters (SHA, AHA, LRTA) were moderately correlated with CRIA and T1 tilt. We hypothesize that CRIA would be the radiological equivalent of SHA. Different parameters have been used for the radiological measurement of ShB: coracoid height difference (CHD) [5,21,23], clavicular angle (CA) [5,23], clavicle-rib intersection difference (CRID) [23], radiological shoulder height (RSH) [5,23] and first rib angle (FRA) [9,12], among others. Our initial intention was to use the clavicular angle (CA) as the radiological measure of ShB, considering the high reliability of this measurement reported by Hong et al. [5]. Nonetheless, we found that for a high percentage of patients both shoulders could not be observed on the x-rays. For this reason, we decided to use the point where the clavicle crosses the ribcage as a reference point. Bagó et al.
[23] found an excellent correlation between the difference in real shoulder height and the measurement at this reference point. The correlation between SHA and CRIA was lower than expected, taking into account the fact that, theoretically, both measurements evaluate the same feature. In our study, no correlation between the two parameters greater than 0.54 was found. Other authors have found similar correlations between these measures when evaluating Lenke 1 and 2 curves [9]. This low correlation cannot be attributed to the reliability of the parameters evaluated, if we consider that in all published works the reliability of the photographic measurements is excellent [6,11], and the same is true of the radiological measurements [5]. It is possible that the photographic measurements differ from the radiological measurements because of the effect of the soft tissues in the shoulder area. It is clear that the radiological and clinical balance of the shoulders are not an exact reflection of each other, as suggested by Qiu et al. [6]; we need to evaluate both factors when analyzing shoulder balance in patients with scoliosis, not just for Lenke 2 curves but for all kinds of curves. T1 tilt correlates only moderately with the photographic parameters (SHA, AHA, LRTA). Therefore, shoulder position cannot be inferred from a T1 value. In fact, there is a percentage of patients in whom shoulder and T1 tilt are in opposite directions [24]. Other authors have found that the correlation of this measure with shoulder balance, both radiological [23] and clinical [18,21], is lower than for other measures such as CA or CRID. Bearing in mind that T1 is often the upper end vertebra of the PTC and that the magnitude of the PTC is unrelated to ShB, our data indicate that T1 tilt should be the criterion to determine the structural nature of the PTC and its impact on ShB. SHA can be considered the standard parameter to evaluate ShB in clinical photography.
Its intra- and inter-observer reliability is suitable, although the correlation with its radiographic equivalent is less than desirable. AHA is also a reliable measure but has a low correlation with radiological ShB. It is interesting to note its moderate correlation with the magnitude and tilt of the end vertebrae of the MTC, which would suggest that this parameter is more related to deformity of the trunk than to ShB. As we have pointed out above, this parameter was introduced to explore the possibility of having an alternative measure to SHA. The lack of concordance between the two measures has led us to rule out this possibility. The possibility that LRTA would enable evaluation of the PTC led us to introduce this parameter into the analysis. In spite of adequate reliability, it shows only a poor correlation with CRIA and T1, and no correlation with the PTC. Although other authors have suggested that asymmetry in the trapezium area is a parameter to consider when clinically evaluating the shoulder area [12], according to our results this parameter does not provide information additional to SHA and AHA. Consequently, we do not believe that it makes sense to recommend use of this parameter in clinical practice.

Shortcomings

In our opinion this study presents several significant limitations. First, our study did not include analysis of the photographic parameters in relation to the scoliosis pattern. Some authors [11] have suggested that the photographic parameters could differ according to the type of curve. This possibility should be analyzed in further detail in future investigations. Second, we did not correlate ShB and axial plane deformity (angle of trunk inclination or apical vertebral rotation); we took this decision due to the low reliability of the radiographic measures used for this purpose [25].
Third, a single photograph evaluated by different observers on two occasions was used for the reliability analysis. However, the reliability of this shot was not determined. Patients were placed on floor marks and asked to stay in a comfortable position. We think that this methodology was sufficient to ensure that the photograph could be repeated, but we cannot determine the error of measurement related to the patient's position. Fortin et al. found significant reliability for a photography technique similar to that used in our investigation [10,19].

Conclusions

Clinical photography is a reliable method to evaluate clinical shoulder balance in patients with idiopathic scoliosis. Intra- and inter-observer reliability is excellent; ICCs greater than 0.8 were found. The reliability of the front and back views is similar, although concordance analysis reveals that the measurements are not equivalent. These data confirm that ShB is not a pathognomonic sign of structured scoliosis. Based on the present results, the measurement of SHA does not seem an appropriate method to evaluate the effect of treatment on spinal deformity. Consequently, both examinations should be used for shoulder balance evaluation. In the future, it should be analyzed whether the shoulder imbalance pattern varies according to curve pattern. Written informed consent was obtained from the patient for the publication of this report and any accompanying images.
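The agreement values reported in the Discussion (e.g. the CCC of 0.49 between front and back SHA) are instances of Lin's concordance correlation coefficient [15]. As an illustration of how such a value is computed, here is a minimal sketch; the paired angle values below are invented for illustration only and are not data from this study.

```python
def lins_ccc(x, y):
    """Lin's concordance correlation coefficient for two paired sequences.

    Uses population (biased) variances and covariance, as in Lin (1989):
    ccc = 2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))**2)
    """
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    var_x = sum((xi - mean_x) ** 2 for xi in x) / n
    var_y = sum((yi - mean_y) ** 2 for yi in y) / n
    cov_xy = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y)) / n
    return (2 * cov_xy) / (var_x + var_y + (mean_x - mean_y) ** 2)


# Invented paired angle measurements in degrees (illustrative only)
front = [2.1, 3.4, 1.8, 4.0, 2.9]
back = [1.9, 3.0, 2.2, 3.6, 3.1]
print(round(lins_ccc(front, back), 3))  # → 0.893
```

Unlike Pearson's r, the CCC penalizes both location and scale shifts between the two measurements, which is why it is the appropriate statistic for interchangeability questions such as front versus back SHA.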
Abbreviations

ShB: Shoulder balance; PTC: Cobb proximal thoracic curve; MTC: Cobb main thoracic curve; TLLC: Cobb thoracic-lumbar/lumbar curve; MLC: Major large Cobb; T1-Tilt: Inclination of T1; CRIA: Clavicle-rib intersection angle; LRTA: Left-right trapezium angle; SHA: Shoulder height angle; AHA: Armpit height angle; ICC: Intra-class correlation coefficient; SEM: Standard error of measurement; MDC: Minimal detectable change; CCC: Concordance correlation coefficient; CHD: Coracoids' height difference; CA: Clavicle angle; RSH: Radiological shoulder height; FRA: First rib angle; PTC_LEV: Proximal thoracic curve lower end vertebra; MTC_LEV: Main thoracic curve lower end vertebra; TLLC_LEV: Thoraco-lumbar/lumbar lower end vertebra.

Competing interests

The authors declare that they have no competing interests.

Authors' contributions

JB made substantial contributions to conception and design, data measurement, and data analysis and interpretation; was involved in drafting the manuscript and revising it critically for important intellectual content; and gave final approval of the version to be published. AM made substantial contributions to conception and design, acquisition of data, and data analysis and interpretation; was involved in drafting the manuscript; and gave final approval of the version to be published. EA made a substantial contribution to recruitment of patients and acquisition of data by taking the photographs and data measurements. All authors read and approved the final manuscript.

Acknowledgements

This research is supported by a research grant from Biomed/Justimplant and the Spanish Society of Spine Surgery (GEER).

Author details

1 Department of Orthopaedic Surgery, Hospital Vall d'Hebron, P Vall d'Hebron, 119, 08035 Barcelona, Spain. 2 Research Institute, Hospital Vall d'Hebron, P Vall d'Hebron, 119, 08035 Barcelona, Spain.

Received: 1 September 2014. Accepted: 30 November 2014.

References

1.
Negrini S, Grivas TB, Kotwicki T, Maruyama T, Rigo M, Weiss HR: Why do we treat adolescent idiopathic scoliosis? What we want to obtain and to avoid for our patients. SOSORT 2005 Consensus paper. Scoliosis 2006, 1:4.
2. Raso VJ, Lou E, Hill DL, Mahood JK, Moreau MJ, Durdle NG: Trunk distortion in adolescent idiopathic scoliosis. J Pediatr Orthop 1998, 18(2):222–226.
3. Iwahara T, Imai M, Atsuta Y: Quantification of cosmesis for patients affected by adolescent idiopathic scoliosis. Eur Spine J 1998, 7(1):12–15.
4. Theologis TN, Jefferson RJ, Simpson AH, Turner-Smith AR, Fairbank JC: Quantifying the cosmetic defect of adolescent idiopathic scoliosis. Spine 1993, 18(7):909–912.
5. Hong JY, Suh SW, Yang JH, Park SY, Han JH: Reliability analysis of shoulder balance measures: comparison of the 4 available methods. Spine 2013, 38(26):E1684–E1690.
6. Qiu X-s, Ma W-w, Li W-g, Wang B, Yu Y, Zhu Z-z, Qian B-p, Zhu F, Sun X, Ng B-K, Cheng J-c, Qiu Y: Discrepancy between radiographic shoulder balance and cosmetic shoulder balance in adolescent idiopathic scoliosis patients with double thoracic curve. Eur Spine J 2009, 18(1):45–51.
7. Zaina F, Negrini S, Atanasio S: TRACE (Trunk Aesthetic Clinical Evaluation), a routine clinical tool to evaluate aesthetics in scoliosis patients: development from the Aesthetic Index (AI) and repeatability. Scoliosis 2009, 4:3.
8. Gorton GE 3rd, Young ML, Masso PD: Accuracy, reliability, and validity of a 3-dimensional scanner for assessing torso shape in idiopathic scoliosis. Spine 2012, 37(11):957–965.
9. Yang S, Feuchtbaum E, Werner BC, Cho W, Reddi V, Arlet V: Does anterior shoulder balance in adolescent idiopathic scoliosis correlate with posterior shoulder balance clinically and radiographically? Eur Spine J 2012, 21(10):1978–1983.
10. Fortin C, Feldman DE, Cheriet F, Gravel D, Gauthier F, Labelle H: Reliability of a quantitative clinical posture assessment tool among persons with idiopathic scoliosis. Physiotherapy 2012, 98(1):64–75.
11.
Yang S, Jones-Quaidoo SM, Eager M, Griffin JW, Reddi V, Novicoff W, Shilt J, Bersusky E, Defino H, Ouellet J, Arlet V: Right adolescent idiopathic thoracic curve (Lenke 1 A and B): does cost of instrumentation and implant density improve radiographic and cosmetic parameters? Eur Spine J 2011, 20(7):1039–1047.
12. Ono T, Bastrom TP, Newton PO: Defining 2 components of shoulder imbalance: clavicle tilt and trapezial prominence. Spine 2012, 37(24):E1511–E1516.
13. Landis JR, Koch GG: The measurement of observer agreement for categorical data. Biometrics 1977, 33(1):159–174.
14. Beaton DE: Understanding the relevance of measured change through studies of responsiveness. Spine 2000, 25(24):3192–3199.
15. Lin L-I-K: A concordance correlation coefficient to evaluate reproducibility. Biometrics 1989, 45:255–268.
16. Mc Bride G: A Proposal for Strength-of-Agreement Criteria for Lin's Concordance Correlation Coefficient. NIWA Client Report HAM2005-062; 2005. http://www.niwa.co.nz.
17. Lenke LG, Betz RR, Harms J, Bridwell KH, Clements DH, Lowe TG, Blanke K: Adolescent idiopathic scoliosis: a new classification to determine extent of spinal arthrodesis. JBJS Am 2001, 83-A(8):1169–1181.
18. Akel I, Pekmezci M, Hayran M, Genc Y, Kocak O, Derman O, Erdogan I, Yazici M: Evaluation of shoulder balance in the normal adolescent population and its correlation with radiological parameters. Eur Spine J 2008, 17(3):348–354.
19. Fortin C, Ehrmann Feldman D, Cheriet F, Labelle H: Clinical methods for quantifying body segment posture: a literature review. Disabil Rehabil 2011, 33(5):367–383.
20. Patias P, Grivas TB, Kaspiris A, Aggouris C, Drakoutos E: A review of the trunk surface metrics used as Scoliosis and other deformities evaluation indices. Scoliosis 2010, 5:12.
21.
Kuklo TR, Lenke LG, Graham EJ, Won DS, Sweet FA, Blanke KM, Bridwell KH: Correlation of radiographic, clinical, and patient assessment of shoulder balance following fusion versus nonfusion of the proximal thoracic curve in adolescent idiopathic scoliosis. Spine 2002, 27(18):2013–2020.
22. Hong JY, Suh SW, Modi HN, Yang JH, Park SY: Analysis of factors that affect shoulder balance after correction surgery in scoliosis: a global analysis of all the curvature types. Eur Spine J 2013, 22(6):1273–1285.
23. Bagó J, Carrera L, March B, Villanueva C: Four radiological measures to estimate shoulder balance in scoliosis. J Pediatr Orthop B 1996, 5(1):31–34.
24. Ilharreborde B, Even J, Lefevre Y, Fitoussi F, Presedo A, Souchet P, Penneçot GF, Mazda K: How to determine the upper level of instrumentation in Lenke types 1 and 2 adolescent idiopathic scoliosis: a prospective study of 132 patients. J Pediatr Orthop 2008, 28(7):733–739.
25. Morrison DG, Chan A, Hill D, Parent EC, Lou EH: Correlation between Cobb angle, spinous process angle (SPA) and apical vertebrae rotation (AVR) on posteroanterior radiographs in adolescent idiopathic scoliosis (AIS). Eur Spine J, in press.

doi:10.1186/s13013-014-0023-6

Cite this article as: Matamalas et al.: Reliability and validity study of measurements on digital photography to evaluate shoulder balance in idiopathic scoliosis. Scoliosis 2014, 9:23.
Data Descriptor: Spread of a model invasive alien species, the harlequin ladybird Harmonia axyridis in Britain and Ireland

P. M. J. Brown1, D. B. Roy2, C. Harrower2, H. J. Dean2, S. L. Rorke2 & H. E. Roy2

Invasive alien species are widely recognized as one of the main threats to global biodiversity. Rapid flow of information on the occurrence of invasive alien species is critical to underpin effective action. Citizen science, i.e. the involvement of volunteers in science, provides an opportunity to improve the information available on invasive alien species. Here we describe the dataset created via a citizen science approach to track the spread of a well-studied invasive alien species, the harlequin ladybird Harmonia axyridis (Coleoptera: Coccinellidae) in Britain and Ireland. This dataset comprises 48 510 verified and validated spatio-temporal records of the occurrence of H.
axyridis in Britain and Ireland, from first arrival in 2003, to the end of 2016. A clear and rapid spread of the species within Britain and Ireland is evident. A major reuse value of the dataset is in modelling the spread of an invasive species and applying this to other potential invasive alien species in order to predict and prevent their further spread.

Design Type(s): database creation objective • citizen science design • biodiversity assessment objective
Measurement Type(s): population data
Technology Type(s): longitudinal data collection method
Factor Type(s): temporal_interval • body marking • developmental stage
Sample Characteristic(s): Harmonia axyridis • British Isles • habitat

1 Applied Ecology Research Group, Department of Biology, Anglia Ruskin University, Cambridge, CB1 1PT, UK. 2 Biological Records Centre, NERC Centre for Ecology and Hydrology, Wallingford, Oxfordshire, OX10 8BB, UK. Correspondence and requests for materials should be addressed to P.B. (email: peter.brown@anglia.ac.uk)

Received: 28 March 2018. Accepted: 21 August 2018. Published: 23 October 2018.

Scientific Data 5:180239 | DOI: 10.1038/sdata.2018.239

Background & Summary
A number of international agreements recognize the threat posed by invasive alien species, which are designated as a priority within the Convention on Biological Diversity Aichi biodiversity target 9 (https://www.cbd.int/sp/targets/rationale/target-9/) and are relevant to many of the Sustainable Development Goals (http://www.un.org/sustainabledevelopment/sustainable-development-goals/). An EU Regulation on invasive alien species came into force on 1 January 2015 (http://ec.europa.eu/environment/nature/invasivealien/index_en.htm) and subsequently a list of invasive alien species of EU concern was adopted, for which member states are required to take action to eradicate, manage or prevent entry. Rapid flow of information on the occurrence of invasive alien species is critical to underpin effective action. There have been few attempts to monitor the spread of invasive alien species systematically from the onset of the invasion process. Citizen science, i.e. the involvement of volunteers in science, provides an opportunity to improve the information available on invasive alien species7. Here we describe the dataset created via a citizen science approach to track the spread of a well-studied invasive alien species, the harlequin ladybird Harmonia axyridis (Coleoptera: Coccinellidae) in Britain and Ireland. This species was detected very early in the invasion process and a citizen science project was initiated and widely promoted to maximize the opportunity to gather data from the public across Britain and Ireland. Harmonia axyridis was introduced between approximately 1982 and 2003 to at least 13 European countries8 as a biological control agent. It was mainly introduced to control aphids that are pests of a range of field and glasshouse crops. From the early 2000s it subsequently spread to many other European countries, including Britain and Ireland.
It is native to Asia (including China, Japan, Mongolia and Russia)9 and was also introduced in North and South America and Africa10. Harmonia axyridis was introduced unintentionally to Britain from mainland Europe by a number of pathways: some were transported with produce such as cut flowers, fruit and vegetables; others arrived through natural dispersal (flight) from other invaded regions11. To a lesser extent H. axyridis also arrived from North America12. The major pathways of spread to Ireland were probably natural dispersal (from Britain) and arrival with produce. Harmonia axyridis is a eurytopic (generalist) species and may be found on deciduous or coniferous trees, arable and horticultural crops and herbaceous vegetation in a wide range of habitats. It is particularly prevalent in urban and suburban localities (e.g. parks, gardens, and in or on buildings)13. Citizen science approaches to collecting species data are becoming increasingly popular and respected14. Advances in communication and digital technologies (e.g. online recording via websites and smartphone applications; digital photography) have increasingly enabled scientists to collect and verify large datasets of species information15. For a few species groups, including ladybirds, verification to species is possible if a reasonably good photograph of the animal is available. In late 2004, shortly after the first H. axyridis ladybird record was reported, funding was acquired from Defra and the National Biodiversity Network (NBN) to set up and trial an online recording scheme for ladybirds, and H. axyridis in particular. Thus, the online Harlequin Ladybird Survey and UK Ladybird Survey were launched in March 2005. The surveys have been very successful in gaining records from the public since 2005. Innovations such as the launch of a free smartphone application (iRecord Ladybirds) in 2013 helped to maintain the supply of records. The dataset here comprises species records of H. 
axyridis in various life stages (larva, pupa or adult) from Britain and Ireland over the period 2003 to 2016. A major reuse value of the dataset is in modelling the spread of an invasive species and applying this to other potential invasive alien species in order to predict and prevent their further spread. The time period of the study captures the initial fast spread of H. axyridis (principally from 2004 to 2009) plus a further substantial period (2010 to 2016) in which the distribution of the species altered relatively little, despite many further records being received.

Methods

This dataset (Data Citation 1) comprises 48 510 spatio-temporal records of the occurrence of H. axyridis in Britain and Ireland, from first arrival in 2003, to the end of 2016. For its type it is thus an unusually substantial dataset. Whilst the records were collated and verified by the survey organizers, the records themselves were provided by members of the public in Britain and Ireland. Uptake to the Harlequin Ladybird Survey was undoubtedly assisted by the pre-existence of the Coccinellidae Recording Scheme (now the UK Ladybird Survey), supported by the Biological Records Centre (within NERC Centre for Ecology & Hydrology)16. Reflecting the general diversification of citizen science through innovative use of technology17, high levels of public access to the internet and digital photography enabled an online survey form to be established for H. axyridis in Britain and Ireland. The Harlequin Ladybird Survey (www.harlequin-survey.org) was one of the first online wildlife surveys in Britain and Ireland. It was launched in March 2005 in response to the first report of H. axyridis in Britain, in September 200418. The Harlequin Ladybird Survey benefited from high levels of media interest, and members of the public showed great willingness to look for H. axyridis, and to register their sightings with the survey13. There are only three
There are only three www.nature.com/sdata/ SCIENTIFIC DATA | 5:180239 | DOI: 10.1038/sdata.2018.239 2 https://www.cbd.int/sp/targets/rationale/target-9/ https://www.cbd.int/sp/targets/rationale/target-9/ http://www.un.org/sustainabledevelopment/sustainable-development-goals/ http://www.un.org/sustainabledevelopment/sustainable-development-goals/ http://ec.europa.eu/environment/nature/invasivealien/index_en.htm www.harlequin-survey.org www.harlequin-survey.org records from 2003 and no earlier records have been received, supporting the case that the earliest records in the dataset represent the onset of the invasion process for this species. Indeed H. axyridis has a relatively high detectability (e.g.19) and rapid reproductive rate, so is unlikely to have arrived unnoticed. Each record represents a verified sighting of H. axyridis on a given date (or range of dates) and comprises one or more individual ladybirds observed from one or more life stages (larva, pupa, adult). Records are from Britain (England, Wales and Scotland, including offshore islands), Ireland (both Northern Ireland and the Republic of Ireland), the Isle of Man and the Channel Islands (primarily Guernsey and Jersey) and are mainly from the period 2004 to 2016. The earliest record of H. axyridis in Britain was initially thought to be from 3 July 2004, but three earlier records (from 2003) were received retrospectively. The data records represent species presence and there are no absence data available. The majority of the records were received from members of the public via online recording forms (at www.harlequin-survey.org or www.ladybird-survey.org) (Supplementary Figure 1) or via smartphone apps (iRecord Ladybirds or iRecord - www.brc.ac.uk/irecord) (Supplementary Figure 2), with some records (especially in earlier years) received by post. Other records, particularly from amateur expert16 coleopterists and other naturalists, were received in spreadsheets. 
The spatial resolution of the records is variable. Many include an Ordnance Survey grid reference (converted to latitude and longitude), enabling resolution to 100 metres or less, but many others were derived at 1 km resolution from a UK postal code (UK Government Schemas and Standards, http://webarchive.nationalarchives.gov.uk/20101126012154/http://www.cabinetoffice.gov.uk/govtalk/schemasstandards/e-gif/datastandards/address/postcode.aspx). The option on the online recording form to enter the location via a UK postal code was provided to make the entry of records easier for members of the public unfamiliar with grid referencing systems. Whilst the resolution is thus reduced for these records, the reduction in user error (e.g. the problem of grid reference eastings and northings being transposed) is an advantage20. The postal code method was applicable for sightings of H. axyridis made within 200 metres of a specified postal code, so could not be used for a minority of records where the ladybird was seen in a remote semi-natural habitat. The spatial resolution of the records tended to increase over time, as the number of records received via the smartphone apps increased, and these records generally have GPS-generated latitudes and longitudes.

Field — Descriptor
STARTDATE — Start date for the record (DD/MM/YYYY).
ENDDATE — End date for the record (DD/MM/YYYY).
GRIDREF — The grid reference specifying the location of the observation at the fullest precision available. The grid reference is either Ordnance Survey British (OSGB), Ordnance Survey of Ireland (OSI), or a truncated version of the Military Grid Reference System (MGRS), depending on the location of the record. Records from Britain use OSGB, from Ireland use OSI, and from the Channel Islands use MGRS. OSI grid references are distinguished by having a single letter at the start (e.g. N48), while OSGB and truncated MGRS grid references both start with 2 letters (e.g.
SP30), those that start WA or WV belonging to the truncated MGRS and the rest to OSGB. The truncated MGRS grid references used here for the Channel Islands simply omit the zone number and 100 km square code from the start of the grid references, as all squares within the Channel Islands are within the same zone and 100 km square 30U (e.g. WV6548 in this dataset refers to MGRS 1 km grid reference 30UWV6548).
PRECISION — The precision of the grid reference (in metres) given in the GRIDREF field.
VC — The Watsonian vice county code (https://en.wikipedia.org/wiki/Vice-county). Codes 1 to 112 represent the vice counties of Britain, 113 represents the Channel Islands, and codes 201 to 240 represent Irish vice counties (where 201 = vice county H1, 202 = H2, etc).
LATITUDE — Latitude (WGS 84) for the centre of the grid reference supplied in the GRIDREF field.
LONGITUDE — Longitude (WGS 84) for the centre of the grid reference supplied in the GRIDREF field.
GR_1KM — The grid reference of the 1 km grid square in which the occurrence record lies (only populated if the original grid reference is at 1000 m precision or finer).
LATITUDE_1KM — Latitude (WGS 84) for the centre of the 1 km grid square in which the record occurs.
LONGITUDE_1KM — Longitude (WGS 84) for the centre of the 1 km grid square in which the record occurs.
GR_10KM — The grid reference of the 10 km grid square in which the occurrence record lies.
LATITUDE_10KM — Latitude (WGS 84) for the centre of the 10 km grid square in which the record occurs.
LONGITUDE_10KM — Longitude (WGS 84) for the centre of the 10 km grid square in which the record occurs.
ABUNDANCE — A text field containing any abundance information that was supplied with the record.
FORM_ABUNDANCE — Abundance information relating to color forms, if available.
FORM — Color form of the ladybird, if available: conspicua, spectabilis, succinea.
STAGE_ABUNDANCE — Abundance information of different life stages, if available.
STAGE — Ladybird life stage: larva, pupa, adult.
RECORDER — ID(s) for recorder name(s) that submitted the record. In the BRC database recorder information is standardized to Surname followed by initials (e.g. Smith, J.). For this reason, the recorder ID relates to a unique standardized name and not an individual. A single ID can refer to multiple individuals and a single individual can also be associated with multiple recorder IDs.

Table 1. The fields contained in each Harmonia axyridis species record in the database, with a descriptor for each.

Data Records

Repository

The dataset is freely available for download from the Environmental Information Data Centre (EIDC) catalogue (Data Citation 1). The dataset is provided as a single tab-delimited text file, with each line representing a single record.

Constituents of Species Records

Each species record includes 19 fields (Table 1).

Figures and Tables

The figures and tables here show a summary of the dataset, notably the number of verified H. axyridis records received by year (Fig. 1), by month (Fig. 2), by vice county (Fig. 3 and Table 2 (available online only)) and the spread of H. axyridis in Britain, the Channel Islands and Ireland from 2003 to 2016 (Fig. 4).

[Figure 1. The number of verified Harmonia axyridis records received for Britain, the Channel Islands and Ireland by year, from 2003 to 2016.]

[Figure 2. The number of verified Harmonia axyridis records received for Britain, the Channel Islands and Ireland by month, from 2003 to 2016.]

[Figure 3. The number of verified Harmonia axyridis records received for Britain, the Channel Islands and Ireland, split by Vice County, from 2003 to 2016.]

[Figure 4. The spread and distribution in 10 km squares in Britain, the Channel Islands and Ireland of Harmonia axyridis from 2003 to 2016. NB where H. axyridis was recorded in a square in multiple time periods, the older time period overlays the newer one(s).]

Technical Validation

Record Verification

Verification of the records was made by the survey organizers (led by HER and PMJB but also including others) on receipt of either a photograph or ladybird specimen. The records received from amateur expert coleopterists and other naturalists are regarded as accurate (i.e. without the survey organizers seeing a photograph or specimen) and have been included in the dataset. Many further online records were received that remain unverified (i.e. no photograph or specimen was sent, or the photograph was of insufficient quality to enable identification) or were verified as another species. All such unverified or inaccurate records are excluded from this dataset. For discussion of these issues (partly relating to our dataset) see21. Verified records were regularly uploaded to the NBN Gateway (now the NBN Atlas - https://nbnatlas.org/). There the records could be viewed via online maps, which helped to encourage further recording.

Recording Intensity

Recording intensity by the public was not consistent over time and was influenced by media coverage, publicity events by the survey organizers, and other factors.
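Annual totals such as those summarized in Fig. 1 can be tallied directly from the released tab-delimited file using the STARTDATE field described in Table 1. The sketch below makes this concrete; the two inline rows (and only a subset of the 19 fields) are invented for illustration and are not records from the actual dataset.

```python
import csv
import io
from collections import Counter

# Invented sample in the tab-delimited layout described in Table 1
# (subset of fields; these are not real records from the dataset).
sample = (
    "STARTDATE\tENDDATE\tGRIDREF\tSTAGE\n"
    "03/07/2004\t03/07/2004\tTQ8833\tadult\n"
    "15/10/2005\t15/10/2005\tSP3013\tlarva\n"
)


def records_per_year(handle):
    """Count records per year from a tab-delimited file handle.

    The year is the final component of the DD/MM/YYYY STARTDATE field.
    """
    reader = csv.DictReader(handle, delimiter="\t")
    counts = Counter()
    for row in reader:
        counts[row["STARTDATE"].rsplit("/", 1)[-1]] += 1
    return counts


print(records_per_year(io.StringIO(sample)))
```

For the real file, `io.StringIO(sample)` would be replaced by an open file handle on the download from the EIDC catalogue; the same approach extends to tallies by month, vice county (VC) or life stage (STAGE).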
The number of records in a period is also influenced by weather conditions and seasonality: the main peak in record numbers each year tended to be from late October to early November, the period in which H. axyridis generally moves to indoor overwintering sites (hence this is when many people first notice the species in their homes). There is also spatial variability in recording intensity: more records come from areas with high densities of people (Fig. 3). Across Britain and Ireland there were a number of particularly active local groups or individuals that contributed hotspots of recorder activity, e.g. London. To many recorders, juvenile stages (especially pupae and early instar larvae) were less noticeable and more difficult to identify than the adult stage, thus limiting their recording. The possibility of a reporting bias towards sightings early in the season also exists (i.e. some recorders may have reported their first sighting of H. axyridis, but not subsequent sightings). In order to minimize this effect, the importance of recording multiple sightings was stressed to recorders. The peaks in record numbers observed late in each year also suggest that any effect of this potential bias was minor. There is probably a further minor temporal bias towards recording on some days of the week (e.g. weekend days) more than others.

Technical Validation

In addition to the expert verification detailed above, each record has also undergone a series of validation checks that are designed to highlight other potential issues with the data. Checks were performed on the date information supplied with the record to ensure that the start and/or end dates supplied are in recognized formats, are valid dates, are in the past or present (e.g. no future dates), and, where both are supplied, that the start date is prior to or equal to the end date.
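The date checks just described can be sketched as follows. This is a minimal illustration of the stated rules, assuming DD/MM/YYYY date strings as in Table 1; it is not the survey's actual validation code, and the `valid_record_dates` helper is a hypothetical name.

```python
from datetime import date, datetime


def valid_record_dates(start, end=None, today=None):
    """Apply the date checks described above: recognized DD/MM/YYYY format,
    a real calendar date, not in the future, and start <= end when both
    are given. Illustrative sketch only."""
    today = today or date.today()
    try:
        start_d = datetime.strptime(start, "%d/%m/%Y").date()
        end_d = datetime.strptime(end, "%d/%m/%Y").date() if end else None
    except ValueError:  # unrecognized format or impossible date (e.g. 31/02)
        return False
    if start_d > today or (end_d and end_d > today):
        return False  # no future dates
    return end_d is None or start_d <= end_d


print(valid_record_dates("03/07/2004", "05/07/2004",
                         today=date(2016, 12, 31)))  # → True
```

The same pattern extends to the location checks: parse the GRIDREF field, reject unrecognized formats, and confirm the referenced square against a lookup of land-containing 10 km and 1 km squares.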
The location information is also checked to ensure that the supplied grid reference is in a recognized format, is a valid grid reference, and is from a 10 km and/or 1 km square that contains land. If other location fields were supplied with the grid reference (such as 10 km grid reference, vice county, tetrad or quadrant codes, etc.), they were cross-checked to ensure consistency.

References

1. Blackburn, T. M. et al. A proposed unified framework for biological invasions. Trends in Ecology & Evolution 26, 333-339 (2011).
2. Roy, H. E. et al. The contribution of volunteer recorders to our understanding of biological invasions. Biol. J. Linn. Soc. 115, 678-689 (2015).
3. Russell, J. C. & Blackburn, T. M. Invasive Alien Species: Denialism, Disagreement, Definitions, and Dialogue. Trends in Ecology & Evolution 32, 312-314 (2017).
4. Reid, W. V. et al. Ecosystems and human well-being: synthesis. Millennium Ecosystem Assessment (Island Press, 2005).
5. Butchart, S. H. et al. Global biodiversity: indicators of recent declines. Science 328, 1164-1168 (2010).
6. Seebens, H. et al. No saturation in the accumulation of alien species worldwide. Nature Communications 8, 14435 (2017).
7. Latombe, G. et al. A vision for global monitoring of biological invasions. Biol. Conserv. 321, 295-308 (2017).
8. Brown, P. M. J. et al. The global spread of Harmonia axyridis (Coleoptera: Coccinellidae): distribution, dispersal and routes of invasion. BioControl 56, 623-641 (2011).
9. Orlova-Bienkowskaja, M. J., Ukrainsky, A. S. & Brown, P. M. J. Harmonia axyridis (Coleoptera: Coccinellidae) in Asia: a re-examination of the native range and invasion to southeastern Kazakhstan and Kyrgyzstan. Biol. Invasions 17, 1941-1948 (2015).
10. Roy, H. E. et al. The harlequin ladybird, Harmonia axyridis: global perspectives on invasion history and ecology. Biol. Invasions 18, 997-1044 (2016).
11. Jeffries, D. L. et al. Characteristics and drivers of high-altitude ladybird flight: insights from vertical-looking entomological radar. PLoS One 8, e82278 (2013).
12. Majerus, M., Strawson, V. & Roy, H. The potential impacts of the arrival of the harlequin ladybird, Harmonia axyridis (Pallas) (Coleoptera: Coccinellidae), in Britain. Ecol. Entomol. 31, 207-215 (2006).
13. Brown, P. M. J. et al. Harmonia axyridis in Great Britain: analysis of the spread and distribution of a non-native coccinellid. BioControl 53, 55-67 (2008).
14. Silvertown, J. A new dawn for citizen science. Trends in Ecology & Evolution 24, 467-471 (2009).
15. August, T. et al. Emerging technologies for biological recording. Biol. J. Linn. Soc. 115, 731-749 (2015).
16. Pocock, M. J., Roy, H. E., Preston, C. D. & Roy, D. B. The Biological Records Centre: a pioneer of citizen science. Biol. J. Linn. Soc. 115, 475-493 (2015).
17. Pocock, M. J., Tweddle, J. C., Savage, J., Robinson, L. D. & Roy, H. E. The diversity and evolution of ecological and environmental citizen science. PLoS One 12, e0172579 (2017).
18. Majerus, M. The ladybird has landed (University of Cambridge Press, 2004).
19. Pocock, M. J., Roy, H. E., Fox, R., Ellis, W. N. & Botham, M. Citizen science and invasive alien species: predicting the detection of the oak processionary moth Thaumetopoea processionea by moth recorders. Biol. Conserv. 208, 146-154 (2017).
20. Majerus, M., Forge, H. & Walker, L. The geographical distributions of ladybirds in Britain (1984-1989). Br. J. Entomol. Nat. Hist. 3, 153-165 (1990).
21. Gardiner, M., Allee, L., Brown, P. M. J., Losey, J., Roy, H. E. & Smyth, R. Lessons from lady beetles: accuracy of monitoring data from US and UK citizen science programs. Front. Ecol. Environ. 10, 471-476 (2012).

Data Citation

1. Roy, H. E. et al. NERC Environmental Information Data Centre https://doi.org/10.5285/70ee24a5-d19c-4ca8-a1ce-ca4b51e54933 (2017).

Acknowledgements

We are immensely grateful to the thousands of people across Britain and Ireland who have so generously and enthusiastically contributed records to the UK Ladybird Survey and so collaboratively furthered understanding of the ecology of Harmonia axyridis. We thank the National Biodiversity Data Centre of Ireland for their collaboration and sharing of data. We are very grateful to the following organizations for funding and support over the years: Department for Environment Food and Rural Affairs, National Biodiversity Network Trust, Joint Nature Conservation Committee, Natural Environment Research Council Centre for Ecology & Hydrology, Anglia Ruskin University and University of Cambridge.

Author Contributions

H.E.R. and P.M.J.B. co-lead the UK Ladybird Survey and in doing so verified the majority of the ladybird records within this dataset, though a minority of records were verified by others. P.M.J.B. wrote the initial draft of the paper and all authors contributed additions and revisions. C.H., H.J.D., S.L.R. and D.B.R. validated the data and provided technical expertise. C.H. generated the figures.

Additional Information

Table 2 is only available in the online version of this paper.
Supplementary information accompanies this paper at http://www.nature.com/sdata
Competing interests: The authors declare no competing interests.
How to cite this article: Brown, P. M. J. et al. Spread of a model invasive alien species, the harlequin ladybird Harmonia axyridis in Britain and Ireland. Sci. Data 5:180239 doi: 10.1038/sdata.2018.239 (2018).
Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/

The Creative Commons Public Domain Dedication waiver http://creativecommons.org/publicdomain/zero/1.0/ applies to the metadata files made available in this article.

© The Author(s) 2018
Design Type(s): database creation objective • citizen science design • biodiversity assessment objective
Measurement Type(s): population data
Technology Type(s): longitudinal data collection method
Factor Type(s): temporal interval • body mark

work_ku72qmubybbfbb2rrvsj7us6te ----

LARGE STRUCTURES: WHICH SOLUTIONS FOR HEALTH MONITORING?

Géraldine Camp a, Pierre Carreaud b, Hervé Lançon c

a Metrology Instrumentation Survey Department Manager, SITES, Rueil-Malmaison, France, geraldine.camp@sites.fr
b Surveying, Laser and Photogrammetry Project Manager, SITES, Rueil-Malmaison, France, pierre.carreaud@sites.fr
c Technical Director, SITES, Ecully, France, herve.lancon@sites.fr

KEY WORDS: Large structures, dams, viaducts, cooling towers, tunnels, monitoring, instrumentation, geometric survey, 3D laser-scanning, photogrammetry, deformation, degradation, mapping, technology

ABSTRACT: Whatever the age of a large structure (dam, viaduct, cooling tower, nuclear containment, tunnel, ...), it has to be periodically monitored. It is a challenge to carry out these services when access is limited and difficult for people. This paper introduces a global approach, developed by SITES, through examples of application on different concrete dams and cooling towers, and their results.
This global method involves three techniques: the SCANSITES® (a visual inspection system), LIDAR (3D laser scanning) and high resolution photogrammetry.

1. INTRODUCTION

Many large structures such as dams, towers and tunnels were built in the 1960s, and many of them are still standing and in service 60 years later. Some of these industrial projects might be categorised in the coming decades as historical heritage for their rareness, their innovation or their size. It would be catastrophic if these structures were to fail, as they have too great an importance: economic or emblematic significance, or simply safety. Given their exceptional size and constitution, the structural health of dams, cooling towers, viaducts and nuclear containments has to be monitored in order to avoid heavy and expensive repairs that might be done too late and with methods that would require shutdown of the facility. For decades, among the existing monitoring devices and methodologies applied to different types of large structures, two have been widely used for safety management: visual inspection and geometric survey. The first is usually carried out with empirical methods, and the second is realized using accurate but discrete methods such as geodetic micro-triangulation or sensor instrumentation. This paper introduces a new approach, using an exhaustive and numeric method called SCANSITES 3D®. SCANSITES 3D® is based on the combination of the SCANSITES® method, an advanced tool which provides numeric defect inspection of large structures, a new wide-range LIDAR technology aiming to deliver exhaustive 3D geometric mapping, and very high resolution photogrammetric coverage. In the first part of this paper, we introduce the SCANSITES® method, the LIDAR coverage and the photogrammetry. In the second part, we explain how the combination is achieved and which data can be extracted on large structures.
Before concluding, we will extend this paper to tunnel auscultation.

2. SCANSITES® OVERVIEW

Because these structures are made of concrete or metal, knowledge of cracking, corrosion and the visual evolution of the structure is necessary for monitoring. In the past, many owners weren't completely satisfied with the traditional defect mapping process, using binoculars or rope access. The main drawback is the difficulty of producing a scaled defect map enabling accurate and reproducible monitoring (crack evolution, opening measurement, ...). To answer this problem, the SCANSITES® was developed in the 1990s. This system aims to produce a digital defect mapping connected to a database which works as a true real-time G.I.S. (Geographic Information System) "Figures 1 and 2". It is composed of:
- a hardware tool with a robotized video inspection head (focal length from 200 to more than 4000 mm) and its controllers "Figure 1",
- a software suite including a database and several dedicated inspection tools (defect localization, crack opening measurement, geometric and pathological characteristics, mapping tools) "Figure 2".
The whole system is designed to operate on-site, without heavy carriage. Several dozen dams, cooling towers, nuclear containment buildings, viaducts and chimneys have been examined with the SCANSITES®, in France and across the world, by the SITES Company team.

Figure 1. SCANSITES® in operation (diagram labels: motorised head, control equipment in a van, motors control, video and position, spatial intersection, 3D model; inset: picture of a defect observed at 208 meters, 122 meters above the ground)

International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Volume XL-5/W2, 2013. XXIV International CIPA Symposium, 2-6 September 2013, Strasbourg, France. This contribution has been peer-reviewed. The peer-review was conducted on the basis of the abstract.

Figure 2. Defects mapping (conical projection)

Figure 3. Picture of a defect captured with SCANSITES®
3. LIDAR OVERVIEW

The LIDAR "Figure 4" is a device which produces high density surveying in 3D coordinates. This technology is mature and has been used for long range applications for less than 10 years. It is based on two angular coders linked to mirrors (horizontal and vertical) and a remote laser distance measurement device (time of flight of laser emission or phase shift technology). The system works with enough velocity to acquire more than 100,000 points each second. For the majority of large structures, a wide range LIDAR is used. It is able to scan structures up to 1000 meters away. The result of a LIDAR survey is called a "point cloud" "Figure 5". It is composed of dozens of millions of points known in XYZ coordinates and laser intensity. The average density is 1 point each 5 to 20 mm and the accuracy of the modelled surface is a few millimetres (5-10 mm). The surface can be compared to theoretical shapes in order to extract the deviations. Most of the time, 3 to 5 points of view are needed for a dam, 6 to 8 for cooling towers. The different point clouds have to be linked to each other and placed into a single coordinate system based on the gravity. LIDAR is not limited to large structures. Nowadays, short range units with a full dome 360° field of view are available in order to scan and archive the geometry of castles, monuments and other structures. Some of these units can catch up to 1 million points in 1 second with a maximum range of 100 meters. This type of unit is used by the SITES company for tunnel inspection.

Figure 4. LIDAR in operation on the left bank of a dam

Figure 5. Point cloud of a castle – deviation to mean plane

4. HIGH DEFINITION ORTHOPHOTOGRAPHIC COVERAGE

The photogrammetric coverage delivers exhaustive and high definition pictures of the structure, including the parts without any defects. It is possible to produce a visual inspection, using a long focal lens and a high definition digital camera in order to produce referenced pictures "Figure 6". The camera and lenses used can give a pixel equivalent to a few millimetres onto the structure, which makes it possible to detect the main defects. As the photos' positions and orientations are known (XYZ and Euler angles), each photo can be projected on the 3D mesh to texture it. The next step is to project the textured 3D mesh on a primitive (plane, cone or cylinder) to obtain a map. This projected assembly of images is called an orthophotography "Figure 7". At each point of view, complete photo coverage is done with a given focal lens: more than 1000 photos can be taken for one point of view, which represents about 10,000 photos for the complete detailed inspection of a cooling tower. After the projection and assembly of each photo, the size of the orthophotography can reach up to 25 billion pixels. More and more, with the progress of digital photography and computing, the inspection based on orthophotography can completely replace the SCANSITES® inspection. The field operations and associated treatments were designed in order to be fully compatible and replaceable: the tools used to do the inspection, database, mapping and results are the same.

Figure 6. High resolution camera, 600 mm lens and panoramic head facing upstream wall of a dam

Figure 7. Block diagram of orthophotography operations

5. OPERATIONS

The SCANSITES 3D® monitoring requires that all data is known in a single XYZ referential. The method can use the one established for the traditional survey (targets, pillars, geodetic points).
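Placing every scan into the single XYZ referential amounts to one rigid transform per point cloud. A minimal sketch, assuming the transform is already known (e.g. estimated from common targets) and that each scan is levelled on gravity, so that only a rotation about the vertical axis and a translation remain:

```python
import math

def apply_rigid_transform(points, angle_z, t):
    """Place a levelled point cloud into the common XYZ referential.

    angle_z: heading rotation about the vertical (gravity) axis, radians.
    t: (tx, ty, tz) translation in metres.
    A full registration would also estimate these parameters; here they
    are assumed to be known.
    """
    c, s = math.cos(angle_z), math.sin(angle_z)
    return [(c * x - s * y + t[0], s * x + c * y + t[1], z + t[2])
            for x, y, z in points]

# Toy scan: one point, rotated a quarter turn and lifted by 5 m
moved = apply_rigid_transform([(1.0, 0.0, 0.0)], math.pi / 2, (0.0, 0.0, 5.0))
```

Restricting the rotation to the vertical axis is itself an assumption that each scanner setup is levelled, which is why the text speaks of a coordinate system "based on the gravity".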
In case there is no available network, it is necessary to create one, based on singular points on the structure and referenced with traditional survey operations or laser scanning (LIDAR). Regarding the visual inspection, each structure owner has its own requirements. These deal with the defects which have to be surveyed and the associated classification. One of the most important parameters is the minimum opening for a crack that needs to be surveyed. It mainly impacts the focal length used during the inspection (up to 4000 millimetres) and the total number of defects stored. All those considerations help in preparing the mission, mainly the database and the inspection software. The LIDAR, the SCANSITES® and/or the photogrammetric head are set at different locations in order to cover the structure's entire surface. The high gain video camera and quality lenses of the SCANSITES® allow it to work with low ambient luminosity. The LIDAR, for its part, can work without light. With the LIDAR, a complete scan is realized. Based on this point cloud, a triangular meshing "Figure 8" is generated and converted into a 3D shape. The first use of this 3D shape is to allow the SCANSITES® to locate the defects in 3D and/or to project the photos in order to create the very high resolution orthophotography.

Figure 8. Triangular meshing

With this incoming data, the visual inspection is processed. With the SCANSITES®, the operator inspects the entire wall by moving the inspection head with a joystick. When a defect is found, it is photographed. The 3D map is updated in real time with defects and the database is filled with their characteristics and coordinates. In parallel with, or in replacement of, this operation, a complete high definition photogrammetric coverage is done with a high resolution camera and a robotised panoramic head. The operations done to inspect the orthophotography use the same software as the real-time SCANSITES® tool, but this inspection is carried out at the office.
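The photo-to-mesh projection underlying the orthophotography step is the standard pinhole model: knowing a photo's position (XYZ) and orientation (Euler angles), any mesh vertex can be mapped to a pixel of that photo. A minimal sketch; the ZYX Euler convention, the camera-looks-along-+Z choice and all numbers are illustrative assumptions, not taken from the paper:

```python
import math

def rotation_matrix(yaw, pitch, roll):
    """Camera-to-world rotation, ZYX Euler convention (radians)."""
    cy, sy = math.cos(yaw), math.sin(yaw)
    cp, sp = math.cos(pitch), math.sin(pitch)
    cr, sr = math.cos(roll), math.sin(roll)
    # Rz(yaw) @ Ry(pitch) @ Rx(roll), written out term by term
    return [
        [cy * cp, cy * sp * sr - sy * cr, cy * sp * cr + sy * sr],
        [sy * cp, sy * sp * sr + cy * cr, sy * sp * cr - cy * sr],
        [-sp, cp * sr, cp * cr],
    ]

def project(point, cam_pos, R, focal_px, cx, cy):
    """Pinhole projection of a world point into pixel coordinates.

    Returns None when the point lies behind the camera (no texture
    available from this photo).
    """
    d = [point[i] - cam_pos[i] for i in range(3)]
    # world -> camera frame: p_cam = R^T (p - cam_pos)
    p_cam = [sum(R[j][i] * d[j] for j in range(3)) for i in range(3)]
    X, Y, Z = p_cam
    if Z <= 0:
        return None
    return (cx + focal_px * X / Z, cy + focal_px * Y / Z)

# Identity orientation, camera at origin: a point 2 m ahead and slightly
# off-axis lands slightly off the principal point (2000, 2000).
R0 = rotation_matrix(0.0, 0.0, 0.0)
uv = project((1.0, 0.5, 2.0), (0.0, 0.0, 0.0), R0, 1000.0, 2000.0, 2000.0)
```

In a real texturing pipeline this mapping also runs the other way: each pixel of the map primitive is traced back through the mesh to the best covering photo.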
6. TREATMENTS

The treatments aim to produce, in a multilayer file, a map containing all captured defects, the geometric deflections and the photogrammetric coverage. The first step is to compare the structure's 3D shape to the theoretical shape or to a previous survey. The 3D deflections are extracted and a map is generated "Figures 10, 12". Two ways of representation are possible. One is a coloured map, where each colour depends on the deflection value. The other is a contour line representation. The second step is to overlay the defects surveyed with the SCANSITES® or orthophotography, using the same referential network. The last step is to overlay the pictures directly on the structure's 3D shape, enabling the production of an orthophotography. With these files, many views can be generated, such as composite views (defects/deflections, defects/pictures) or thematic views (based on database queries).

6.1 Case study: a concrete arch dam in France

The SCANSITES3D® method was applied to a large dam located in France. Its main dimensions are a height of 95 meters and a crest length of 200 meters. The average distance between the points of view and the downstream facing was about 100 meters. The aim of this job was to connect the geometric deflections to the defects surveyed. The LIDAR survey recorded 70 million points, and the total quantity of defects was near 4000 "Figures 9, 10". The focal lens of the photogrammetric coverage was adjusted between 100 and 500 mm in order to produce an orthophotography of 1.5 billion pixels with a pixel size of 5 mm. The detailed inspection was performed with the SCANSITES® tool with a focal length of 1000 mm. The defects and photogrammetric coverage were mapped using a cylindrical projection based on the mean cylinder of the downstream wall. This monitoring was helpful to locate the parts to repair and to store the visual and geometric aspect of the wall.
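The first treatment step, comparing the as-built surface to a theoretical shape and colouring the result, can be sketched as follows for a vertical cylinder (a cooling-tower course or the mean cylinder of a dam wall); the palette and the ±50 mm scale are illustrative assumptions, not the paper's actual rendering settings:

```python
import math

PALETTE = ["blue", "cyan", "green", "yellow", "red"]   # hypothetical scale

def radial_deviation(point, radius, cx=0.0, cy=0.0):
    """Signed deviation (m) of a scanned point from a theoretical
    vertical cylinder: positive = outside the shell, negative = inside."""
    x, y, _z = point
    return math.hypot(x - cx, y - cy) - radius

def colour_for(dev_m, limit_m=0.05):
    """Linear colour scale from -limit_m to +limit_m, clipped beyond."""
    t = (max(-limit_m, min(limit_m, dev_m)) + limit_m) / (2 * limit_m)
    return PALETTE[min(int(t * len(PALETTE)), len(PALETTE) - 1)]

# A point 12 mm outside a 25 m radius theoretical shell
dev = radial_deviation((25.012, 0.0, 40.0), radius=25.0)
```

Running every cloud point through this pair of functions yields exactly the coloured deflection map described above; contour lines are the same deviations grouped by level instead of by colour bin.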
Figure 7 (block diagram steps): photo coverage → orientation and referencing → projection → texturing → mapping → photo mosaic

Figure 9. Orthophotography of the downstream wall with defects overlaid

Figure 10. Map overlaying deflection/defects and magnifying

6.2 Case study: a concrete cooling tower in France

The second case study concerns a cooling tower located in Europe. Its dimensions are a height of 90 meters and a diameter of 50 meters at the bottom. The aim of the job was to get the tower's geometric deflection and an accurate visual inspection with a global photogrammetric coverage "Figures 11 and 12". As on the previous case study (dam), a LIDAR point cloud was generated, representing 150 million points. In parallel, we covered the whole shell with high resolution pictures, projected them on a 3D shape and then on a projection cone, for a total quantity exceeding 25 billion pixels. In order to generate the orthophotography, several processors and computers were used to reduce the projection time. Without the parallel computing, the treatment time would be more than two weeks. On this type of structure, mostly built in the 70's and 80's, we have observed that the real shape can be far from the theoretical shape.

Figure 11. Representation of 3D model, deflections and photogrammetric coverage

Figure 12. Map overlaying deflection/defects and magnifying

7. RESULTS

7.1 Statistics

Concerning the visual inspection, the traditional way is to produce a sketch based on a close inspection (rope access) or binocular observations. The consequent difficulty of these methods is getting a good location of the defects and their evolution.
SCANSITES3D® provides digital results: a scaled defect map and a defect database. The first result is an accurate report emphasizing defect evolution between two inspections and the number of defects by zone "Figure 13".

Figure 13. Number of defects by family and zone (legend: cracks, seepage, corrosion, miscellaneous, total by zone; totals per zone: A 1478, B 1076, C 1744, D 1145, E 1523, F 811, G 1443, H 1313)

The LIDAR coverage is a guideline for defect analysis, for instance to see if cracks are correlated, or not, with geometrical distortions. Another interesting point is its use on parts covered by moss: by scanning, we get structural information where visual inspection is inefficient. It is also helpful to know where a structural diagnosis has to focus (with concrete samples or testing cores, for example).

7.2 Use of the data for repairing and utility of geometry

The data extracted from the visual inspection is useful to establish an accurate bill of quantities for restoration works, like the total crack length to be treated or the total corroded bar amount for passivation treatment. Providing the exact length and position of cracks and corrosion allows the preparation of a repair intervention with all the necessary information: quantity of product to use, localisation (height from the floor or the crest, ...) and simulations for the access (length of rope and inclination, height of positive or negative crane, ...). As the accurate 3D shape of the structure is known, data for planning sensor installation or well location can be easily computed.

Figure 11 (panel labels): photogrammetric coverage, mesh, deflections, out-of-tolerance deflections
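Once every defect lives in the database with its family and zone, the per-zone statistics of Figure 13 and the repair quantities of section 7.2 are simple aggregations. A toy sketch; the field names and records are hypothetical, not the actual SCANSITES® schema:

```python
from collections import Counter

# Toy defect database records: each defect carries a family, the zone
# of the shell it belongs to, and (for cracks) a length to be treated.
defects = [
    {"family": "crack", "zone": "A", "length_m": 1.2},
    {"family": "crack", "zone": "A", "length_m": 0.4},
    {"family": "seepage", "zone": "B"},
    {"family": "corrosion", "zone": "A"},
]

by_zone = Counter(d["zone"] for d in defects)                    # Fig. 13 totals
by_family_zone = Counter((d["family"], d["zone"]) for d in defects)

# Total crack length per zone feeds the bill of quantities for repairs
crack_length = {}
for d in defects:
    if d["family"] == "crack":
        crack_length[d["zone"]] = crack_length.get(d["zone"], 0.0) + d["length_m"]
```

The same queries, restricted to defects that changed between two inspections, give the evolution report mentioned above.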
7.3 Advantage of LIDAR compared to traditional survey

Traditional geometric survey uses theodolites and well-known micro-triangulation methods. It has the advantage of providing results close to the best possible accuracy (1 to 10 mm). However, the drawback is that it is a "discrete" method, since it focuses on a limited number of points, usually a few dozen (targets, reflectors), not necessarily placed on critical parts. Even if SCANSITES3D® is less accurate, the high density scan produces surface definitions close to a 3-4 mm uncertainty, and we get global continuous information. It greatly improves the sensitivity of the geometric diagnosis, showing all the details. In addition, the geometrical surveys with LIDAR were much quicker to carry out than the traditional geodesic method. Another advantage is that this method works on every structure, even if there is no surveying equipment such as targets.

7.4 GIS (Geographic Information System) consultation

All the data (photo, geometric survey, defect maps and evolutions) is overlaid on a same file. The engineer gets a faster way to make his diagnosis compared to the fastidious data fusion imposed by separate reports. The last advantage is the storage of all collected data in a database, offering efficient tools to measure the structure's ageing. All these jobs are performed without rope access, dramatically improving the safety conditions.

8. EXTENSION TO TUNNEL AUSCULTATION

A version of the SCANSITES3D® has been adapted for tunnel auscultation. The tools are about the same: short range laser scanner, digital camera with wide lens, thermographic camera and odometer mounted on a trolley. The data is mapped on a mean cylinder representing tens of kilometers of tunnel wall, enabling direct access to the visual aspect, the linked distortions and the position in the tunnel (kilometer point - KP). The view rendered and referenced is useful to detect leaks, concretions and parts of the concrete ready to fall "Figures 14, 15 and 16". The resulting file size can reach tens of billions of pixels, depending on the length of the tunnel. Each part of the tunnel can be viewed with a computer: visual aspect, geometry. This helps the owner for repairs and new equipment installation, as the maintenance access is very restricted to a few hours during the night or at the weekend for high traffic tunnels.

Figure 14. Water leakage and concretions (left: orthophoto – right: distortion map)
This and new equipment installation as the maintenance access is very restricted to a few hours end for high traffic tunnels. and concretions Right : distortion map) Figure 15. Orthophotography of Tunnel (cylindrical projection) Figure 16. Distortions of Tunnel (cylindrical project 9. CONCLUSION We’ve presented a new and modern approach for visual inspection and geometric survey, here focused on cooling towers, with SCANSITES particularly convenient for every structure which needs the resuming of its monitoring program, because it provides an exhaustive inventory. It also allows existing monitoring program by completing the done by conventional approaches. Not only is this method adapted to concrete structures, but it can also be used on old constructions, masonry-works, clay works… correlation between defects and deflections information used to locate the areas where geodic surveying and sensors have to focus on. Moreover, besides useful results for the monitoring, the SCANSITES3D mappings which are often lacking on old structures. This method is widely applicable on large structures such as dams, but also skyscrapers, chimneys, cooling towers, pillars of viaducts, tunnels, … The next step, in progress, is to overlay a high accuracy thermographic imagery survey. Its gain in diagnosis, mainly on cracks. . Orthophotography of Tunnel (cylindrical projection) . Distortions of Tunnel (cylindrical projection) CONCLUSION We’ve presented a new and modern approach for visual inspection and geometric survey, here focused on dams and cooling towers, with SCANSITES3D®. This method is every structure which needs the nitoring program, because it provides an allows the readjustment of an existing monitoring program by completing the partial survey approaches. 
Not only is this method adapted to concrete structures, but it can also be used on old , clay works… Finally, the between defects and deflections is precious to locate the areas where geodic surveying and sensors have to focus on. Moreover, besides useful results for the monitoring, the SCANSITES3D® provides as-existing mappings which are often lacking on old structures. This method is widely applicable on large structures such as dams, , cooling towers, pillars of The next step, in progress, is to overlay a high accuracy Its aim is to study the possible gain in diagnosis, mainly on cracks. International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Volume XL-5/W2, 2013 XXIV International CIPA Symposium, 2 – 6 September 2013, Strasbourg, France This contribution has been peer-reviewed. The peer-review was conducted on the basis of the abstract 141 work_kubx7dgntfc7hcn3tn46x5tmvq ---- Relationship between malocclusion, soft tissue profile, and pharyngeal airways: A cephalometric study Original Research Article Relationship between malocclusion, soft tissue profile, and pharyngeal airways: A cephalometric study Kristina Lopatienė *, Antanas Šidlauskas, Arūnas Vasiliauskas, Lina Čečytė, Vilma Švalkauskienė, Mantas Šidlauskas Department of Orthodontics, Medical Academy, Lithuanian University of Health Sciences, Kaunas, Lithuania m e d i c i n a 5 2 ( 2 0 1 6 ) 3 0 7 – 3 1 4 a r t i c l e i n f o Article history: Received 27 April 2016 Received in revised form 10 July 2016 Accepted 26 September 2016 Available online 11 October 2016 Keywords: Soft tissue profile Cephalometric analysis Pharyngeal airway Malocclusion a b s t r a c t Background and objective: The recent years have been marked by a search for new interrela- tions between the respiratory function and the risk of the development of malocclusions, and algorithms of early diagnostics and treatment have been developed. 
The aim of the study was to evaluate the relationships between hard and soft tissues and upper airway morphology in patients with normal sagittal occlusion and Angle Class II malocclusion according to gender.

Materials and methods: After the evaluation of clinical and radiological data, 114 pre-orthodontic patients with a normal or increased ANB angle were randomly selected for the study. The cephalometric analysis was done by using the Dolphin Imaging 11.8 computer software.

Results: A comparison of the cephalometric values of soft tissue and airway measurements was performed. A statistically significant negative correlation between the width of the upper pharynx and the ANB angle was found: the ANB angle was decreasing with an increasing width of the upper pharynx. The airways also showed a statistically significant negative correlation between the width of the lower pharynx and the distance from the upper and the lower lips to the E line. Logistic regression analysis was performed to evaluate significant factors that could predict airway constriction. The upper pharynx was influenced by the following risk factors: a decrease in the SNB angle, an increase in the nose tip angle, and younger age, while the lower pharynx was influenced by an increase in the distance between the upper lip and the E line and by an increase in the upper lip thickness.

Conclusions: During the critical period of growth and development of the maxillofacial system, patients with oral functional disturbances should be monitored and treated by a multidisciplinary team consisting of a dentist, an orthodontist, a pediatrician, an ENT specialist, and an allergologist. Cephalometric analysis applied in our study showed that Angle Class II patients with a significantly decreased facial convexity angle, increased nasomental, upper lip-chin, and lower lip-chin angles, and upper and lower lips located more proximally to the E line more frequently had constricted airways.

© 2016 The Lithuanian University of Health Sciences. Production and hosting by Elsevier Sp. z o.o. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).

Peer review under the responsibility of the Lithuanian University of Health Sciences.

* Corresponding author at: Department of Orthodontics, Medical Academy, Lithuanian University of Health Sciences, J. Lukšos-Daumanto 6, 50106 Kaunas, Lithuania. E-mail address: klopatiene@zebra.lt (K. Lopatienė).

http://dx.doi.org/10.1016/j.medici.2016.09.005

1. Introduction

The functions of the maxillofacial system affect the growth of the face and jaws as well as tooth eruption [1]. Prolonged mouth breathing is associated with impaired speech, maxillofacial deformities, tooth malposition, abnormal posture, and even cardiovascular, respiratory, or endocrine dysfunctions [2,3]. The discussion on the relationship between maxillofacial morphology and upper airway size and resistance has been continuing for over a century. Narrowing of the pharyngeal airway passage caused by various etiological factors, especially in the nasopharyngeal area, results in mouth breathing [2,4].
According to various authors, the main features of upper airway obstruction include excessive anterior face height, a narrowed upper dental arch, a high palatal vault, a steep mandibular plane angle, protruding maxillary teeth, and incompetent lip posture [2,5–7]. Basheer et al. found that the facial profile of patients with a mouth-breathing pattern was more convex than that of those who were breathing through the nose [2]. Other authors determined a relationship between the size of the upper airways and the severity of malocclusion [6]. Obstruction of the upper airways is associated with Angle Class II malocclusion and vertical facial growth impairment [6,8]. Some studies have shown that in patients with Angle Class II malocclusion, the width of the upper pharynx is smaller than in those with Angle Class I or III malocclusion. However, other researchers provided contradicting conclusions and did not find any association between the width of the upper or lower pharynx and malocclusion [6,9,10]; some authors associate this with genetic and environmental factors [11]. The importance of lateral cephalometric radiographs in the evaluation of the morphology of soft and skeletal maxillofacial tissues and the diagnostics of airway pathology is unquestionable [1,12–14]. Cephalometric analysis is a simple, cheap, and sufficiently informative diagnostic technique, and the generated 2D images along with the evaluation results are sufficiently reliable and may be an alternative to 3D imaging in the evaluation of soft tissue and upper airway morphology [1,15]. The recent years have been marked by a search for new interrelations between the respiratory function and the risk of the development of malocclusions, and algorithms of early diagnostics and treatment have been developed.
The aim of the study was to evaluate the relationships between hard and soft tissues and upper airway morphology in patients with normal sagittal occlusion and Angle Class II malocclusion according to gender.

2. Materials and methods

After the evaluation of clinical and radiological data, 114 pre-orthodontic patients (aged 14–16 years) were randomly recruited for the study from the Clinical Department of Orthodontics, Lithuanian University of Health Sciences. The study was conducted with the permission of the Kaunas Regional Biomedical Research Ethics Committee (February 9, 2015, No. BE-2-12). The inclusion criteria were the following: the patients' age, a sagittal jaw relationship angle ANB > 1°, and no previous maxillofacial trauma or surgery, syndromes, clefts, or orthodontic treatment. The study included 114 patients: 71 girls (62.3%) and 43 boys (37.7%). The subjects' mean age was 14.42 ± 0.58 years. The study sample was divided into two groups according to the ANB angle: the first group consisted of subjects with a normal skeletal sagittal jaw relationship (ANB 2° ± 1°, Class I), and the second group consisted of patients with sagittal skeletal malocclusion (ANB > 4°, Class II). Group 1 consisted of 57 subjects, namely 37 girls (64.9%) and 20 boys (35.1%); group 2, of 57 patients, namely 34 girls (59.6%) and 23 boys (40.4%). Lateral cephalometric radiography was performed in a fixed head position. To minimize the radiation dose, digital panoramic systems were used, and the ALARA radiation safety principle was followed. The analysis was done by using the Dolphin Imaging 11.8 (Dolphin Imaging and Management Solution) computer software. Soft tissue analysis was performed manually, using the "Annotations and measurements" function of the Dolphin Imaging software. The cephalometric parameters used for this study are shown in Fig. 1.
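The division into study groups described above reduces to threshold rules on the ANB angle. A minimal sketch follows; the function name is illustrative, and treating angles between 3° and 4° as outside both groups is an assumption inferred from the stated ranges, not something the text spells out.

```python
def assign_group(anb_degrees):
    """Assign a subject to a study group by the ANB angle.

    Class I  : normal sagittal jaw relationship, ANB 2 +/- 1 degrees.
    Class II : sagittal skeletal malocclusion, ANB > 4 degrees.
    Angles between the two ranges are treated here as outside both groups.
    """
    if 1.0 <= anb_degrees <= 3.0:
        return "Class I"
    if anb_degrees > 4.0:
        return "Class II"
    return "excluded"

print(assign_group(2.4))  # within 2 +/- 1  -> Class I
print(assign_group(5.1))  # above 4 degrees -> Class II
```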
For the lateral cephalometric analysis, the error margin was determined by repeating the measurements of the variables on 20 randomly selected radiographic images at 2-week intervals by the same operator; the paired sample t test showed no significant mean differences between the two data sets.

2.1. Statistical analysis

Statistical data analysis was performed using the SPSS (IBM SPSS Statistics 22.0) software. Spearman correlation was applied in order to evaluate the strength of the relationship between two quantitative variables that did not meet the conditions of normal distribution. The mean values of quantitative attributes meeting the conditions of normal distribution in two independent sample groups were compared by applying the parametric Student t criterion, while the comparison of the medians was performed by applying the nonparametric Mann–Whitney U test. Correlation analysis of the SNA, SNB, and ANB angles as well as the parameters of soft tissues and the airways was performed. The most specific predictors of the decrease in the upper and lower pharyngeal width were assessed using logistic regression analysis. Differences and interdependence between the attributes were considered statistically significant if P < 0.05.

Fig. 1 – Cephalometric landmarks and measurements. Landmarks: (A) the deepest point on the curve of the bone between the anterior nasal spine and the dental alveolus; (B) the deepest midline point on the mandible between the infradentale and the pogonion; N (Nasion), the most anterior point of the frontonasal suture in the middle; S (Sella), the center of the sella turcica; U1, the most labial point on the crown of the maxillary central incisor; Cm (Columella), the most anterior point on the columella of the nose; Ns, soft tissue Nasion; Prn (Pronasale), the most protruded point of the nasal apex; Sn (Subnasale), midpoint of the columella base at the apex of the nasolabial angle; Ls (Labiale superius), midpoint of the upper vermilion line; Li (Labiale inferius), midpoint of the lower vermilion line; Pog', soft tissue Pogonion; Me', soft tissue Menton. Measurements: SNA, sagittal position of the maxilla; SNB, sagittal position of the mandible; ANB, sagittal jaw relationship; Cm-Prn-Ns, nose tip angle; Ns-Prn-Pog', facial convexity; Prn-Ns-Pog', nasomental angle; Pog'-Ns-Ls, upper lip-chin angle; Pog'-Ns-Li, lower lip-chin angle; Ls-E (a), distance from the upper lip (Ls) to the E line (the line formed by connecting the Prn and Pog' points); Li-E (b), distance from the lower lip (Li) to the E line; A-Sn (c), upper lip thickness at point A; U1-Ls (d), upper lip thickness at the maxillary central incisor; TFH (e), facial height; LFH (f), lower facial height; UPW (g), width of the upper pharynx, measured as the distance from the point on the posterior outline of the soft palate to the closest point on the posterior pharyngeal wall; LPW (h), width of the lower pharynx, measured as the distance from the intersection of the posterior border of the tongue and the inferior border of the mandible to the closest point on the posterior pharyngeal wall.

3. Results

A comparison of the cephalometric values of soft tissue and airway measurements between Class I and Class II subjects was performed. It showed that patients with Class II anomalies had the upper and the lower lips statistically significantly (P < 0.01) closer to the E line and a smaller facial convexity angle (P < 0.001). They also demonstrated increased nasomental (P < 0.001), upper lip-chin (P < 0.001), and lower lip-chin (P < 0.001) angles, compared to Class I subjects.
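The Spearman correlation named in the Statistical analysis subsection above is simply the Pearson correlation computed on ranks. A minimal no-ties sketch with made-up data follows (the study itself used SPSS; the function name is illustrative):

```python
def spearman_rho(xs, ys):
    """Spearman rank correlation for data without ties."""
    def ranks(values):
        order = sorted(range(len(values)), key=lambda i: values[i])
        r = [0] * len(values)
        for rank, idx in enumerate(order, start=1):
            r[idx] = rank
        return r

    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    # Classic no-ties formula: rho = 1 - 6 * sum(d^2) / (n * (n^2 - 1))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# Made-up data mirroring the reported trend: a wider upper pharynx
# paired with a smaller ANB angle gives a perfect negative rank correlation.
upper_pharynx_mm = [9.0, 11.0, 13.0, 15.0]
anb_degrees = [6.0, 5.0, 3.0, 2.0]
print(spearman_rho(upper_pharynx_mm, anb_degrees))  # -> -1.0
```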
The upper and lower pharyngeal airways were significantly wider (P < 0.05) in Class I than in Class II patients (Table 1). No difference in facial height or in the thickness of the upper lip between Class I and II subjects was found. We evaluated the influence of sex on the soft tissues and the pharyngeal airways. In Class I subjects, differences in soft tissues between males and females were found: girls had a statistically significantly smaller upper lip-chin angle, lower lip-chin angle, upper lip thickness, and lower facial height, compared to boys (Table 2). The comparison of boys to girls in the Class II group showed that in girls, the upper lip was statistically significantly more distal to the E line; girls also had smaller facial convexity angles, smaller upper lip-chin angles, and a smaller upper lip thickness, compared to boys (Table 3). The differences in the morphology of the soft tissues and the airways between boys and girls in the Class I and II groups were evaluated. The comparison of mean values by applying the parametric Student t test showed that girls in the Class II group had statistically significantly smaller facial convexity angles (P < 0.001), smaller distances from the lower lip to the E line (P < 0.05), and increased nasomental (P < 0.001), upper lip-chin (P < 0.001), and lower lip-chin (P < 0.001) angles, compared to girls in the Class I group. The analysis applying the nonparametric Mann–Whitney U test showed that girls with Class II anomalies had smaller distances from the upper lip to the E line (P < 0.05), compared to girls in the Class I group. Upper pharyngeal width was significantly smaller in Class II girls than in the Class I group (P < 0.05).
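The between-group comparisons above rely on the two-sample Student t statistic. A minimal pooled-variance sketch with made-up numbers follows (the paper's computations were done in SPSS; this is only an illustration of the statistic itself):

```python
import math

def student_t(sample_a, sample_b):
    """Two-sample Student t statistic with pooled variance."""
    n_a, n_b = len(sample_a), len(sample_b)
    mean_a = sum(sample_a) / n_a
    mean_b = sum(sample_b) / n_b
    ss_a = sum((v - mean_a) ** 2 for v in sample_a)  # within-group sums of squares
    ss_b = sum((v - mean_b) ** 2 for v in sample_b)
    pooled_var = (ss_a + ss_b) / (n_a + n_b - 2)
    se = math.sqrt(pooled_var * (1 / n_a + 1 / n_b))
    return (mean_a - mean_b) / se

# Illustrative measurements for two small groups:
print(round(student_t([10.0, 12.0, 14.0], [13.0, 15.0, 17.0]), 3))  # -> -1.837
```

The statistic would then be compared against the t distribution with n_a + n_b − 2 degrees of freedom to obtain the P values reported in the tables.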
When analyzing boys, the Student t test showed that boys with Class II malocclusion had statistically significantly smaller distances from the lower lip to the E line (P < 0.05), a smaller thickness of the upper lip (P < 0.05), and increased upper lip-chin angles (P < 0.05), compared to those in the Class I group. No statistically significant differences in other cephalometric measurements were detected between boys of the Class I and II groups.

Table 1 – Soft tissue and upper airway measurements in Class I and II subjects.

Measurement | Class I | Class II | P
Upper lip to E line (Ls-E), mm | −5.20 ± 0.51 | −3.54 ± 0.50 | 0.002
Lower lip to E line (Li-E), mm | −3.33 ± 0.43 | −1.43 ± 0.43 | 0.003
Facial convexity (Ns-Prn-Pog'), ° | 127.41 ± 0.49 | 124.30 ± 0.42 | 0.001
Nose tip angle (Cm-Prn-Ns), ° | 108.55 ± 0.66 | 107.38 ± 0.68 | NS
Nasomental angle (Prn-Ns-Pog'), ° | 30.77 ± 0.35 | 32.88 ± 0.36 | 0.001
Upper lip-chin angle (Pog'-Ns-Ls), ° | 7.13 ± 0.31 | 9.06 ± 0.33 | 0.001
Lower lip-chin angle (Pog'-Ns-Li), ° | 3.49 ± 0.23 | 4.71 ± 0.26 | 0.001
Upper lip thickness at point A (A-Sn), mm | 16.16 ± 0.23 | 15.89 ± 0.23 | NS
Upper lip thickness at upper incisor (U1-Ls), mm | 12.01 ± 0.23 | 11.58 ± 0.23 | NS
Facial height (TFH), mm | 113.15 ± 0.96 | 113.09 ± 0.82 | NS
Lower facial height (LFH), mm | 65.59 ± 0.72 | 66.45 ± 0.94 | NS
Upper pharyngeal width, mm | 13.02 ± 0.48 | 11.68 ± 0.53 | 0.03
Lower pharyngeal width, mm | 10.53 ± 0.34 | 9.37 ± 0.38 | 0.01
NS, not significant.

Table 2 – Soft tissue and upper airway measurements in Class I, according to gender.

Measurement | Male | Female | P
Upper lip to E line (Ls-E), mm | −4.87 ± 4.98 | −5.13 ± 2.87 | NS
Lower lip to E line (Li-E), mm | −3.14 ± 4.06 | −3.04 ± 2.83 | NS
Facial convexity (Ns-Prn-Pog'), ° | 126.68 ± 4.88 | 127.33 ± 2.65 | NS
Nose tip angle (Cm-Prn-Ns), ° | 107.85 ± 5.65 | 108.65 ± 4.79 | NS
Nasomental angle (Prn-Ns-Pog'), ° | 31.53 ± 3.56 | 30.65 ± 1.97 | NS
Upper lip-chin angle (Pog'-Ns-Ls), ° | 8.37 ± 2.66 | 6.60 ± 2.02 | 0.001
Lower lip-chin angle (Pog'-Ns-Li), ° | 4.23 ± 1.84 | 3.17 ± 1.58 | 0.005
Upper lip thickness at point A (A-Sn), mm | 17.14 ± 1.68 | 15.54 ± 1.61 | 0.0001
Upper lip thickness at upper incisor (U1-Ls), mm | 12.75 ± 1.68 | 11.42 ± 1.68 | 0.001
Facial height (TFH), mm | 117.80 ± 6.30 | 110.05 ± 5.63 | NS
Lower facial height (LFH), mm | 68.74 ± 5.06 | 64.38 ± 7.23 | 0.005
Upper pharyngeal width, mm | 11.89 ± 3.02 | 13.06 ± 3.93 | NS
Lower pharyngeal width, mm | 9.83 ± 2.91 | 9.20 ± 2.37 | NS
NS, not significant.

Table 3 – Soft tissue and upper airway measurements in Class II, according to gender.

Measurement | Male | Female | P
Upper lip to E line (Ls-E), mm | −2.27 ± 3.59 | −4.16 ± 4.27 | 0.05
Lower lip to E line (Li-E), mm | −0.62 ± 3.48 | −1.80 ± 2.86 | NS
Facial convexity (Ns-Prn-Pog'), ° | 124.79 ± 2.97 | 123.04 ± 3.03 | 0.03
Nose tip angle (Cm-Prn-Ns), ° | 107.77 ± 4.54 | 106.90 ± 5.69 | NS
Nasomental angle (Prn-Ns-Pog'), ° | 33.05 ± 2.46 | 33.37 ± 3.05 | NS
Upper lip-chin angle (Pog'-Ns-Ls), ° | 10.19 ± 2.41 | 9.02 ± 2.15 | 0.05
Lower lip-chin angle (Pog'-Ns-Li), ° | 5.37 ± 2.09 | 4.71 ± 1.74 | NS
Upper lip thickness at point A (A-Sn), mm | 16.57 ± 1.57 | 15.44 ± 1.88 | 0.01
Upper lip thickness at upper incisor (U1-Ls), mm | 11.62 ± 1.78 | 11.74 ± 1.81 | NS
Facial height (TFH), mm | 115.16 ± 6.57 | 112.92 ± 6.34 | NS
Lower facial height (LFH), mm | 67.78 ± 5.67 | 65.13 ± 4.84 | NS
Upper pharyngeal width, mm | 11.45 ± 4.42 | 12.14 ± 4.06 | NS
Lower pharyngeal width, mm | 9.09 ± 2.80 | 10.08 ± 3.25 | NS
NS, not significant.

We performed correlation analysis in order to evaluate the relationship of the sagittal position of the maxilla (SNA), the sagittal position of the mandible (SNB), and the sagittal jaw relationship (ANB) with the width of the airways. A statistically significant positive correlation was found between the width of the upper pharynx and the position of the jaws: with an increasing width of the upper pharynx, the SNA (rs = 0.238, P < 0.01) and SNB (rs = 0.301, P < 0.001) angles were increasing as well. A negative correlation was found between the width of the upper pharynx and the ANB angle (rs = −0.186, P < 0.05): the ANB angle was decreasing with an increasing width of the upper pharynx (Fig. 2). A statistically significant negative correlation was found between the width of the lower pharynx and the distance from the upper (rs = −0.204, P < 0.05) and the lower (rs = −0.243, P < 0.005) lips to the E line (Fig. 3): the distance from the upper and the lower lips to the E line was increasing with a decreasing width of the lower pharynx.

Fig. 2 – Linear relationship between the size of the ANB angle and the width of the upper pharynx (P < 0.05).

Logistic regression analysis was performed to evaluate significant factors that could predict airway constriction. The most significant risk factors affecting the reduction in the width of the upper and the lower pharynx in all subjects and separately in males and females are presented in Table 4. The reduction in the width of the upper pharynx was influenced by the following risk factors: a decrease in the SNB angle and an increase in the nose tip (Cm-Prn-Ns) angle.
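The percentage statements derived from the logistic regression follow directly from the odds ratios: for a risk factor with OR > 1, each one-unit increase multiplies the odds of a 1-mm reduction by OR, i.e. an increase of (OR − 1) × 100 percent. A minimal check against the reported values follows (protective factors with OR < 1 are phrased differently in the text and are not reproduced here; the dictionary labels are shorthand, not the paper's wording):

```python
def pct_increase_in_odds(odds_ratio):
    """Percent increase in the odds per one-unit increase of a factor
    with OR > 1: (OR - 1) * 100."""
    return (odds_ratio - 1.0) * 100.0

# Odds ratios reported for risk factors with OR > 1:
reported = {
    "Nose tip angle (all)": 1.14,          # paper: +14% per degree
    "Upper lip to E line (all)": 1.15,     # paper: +15% per mm
    "Upper lip thickness (all)": 1.26,     # paper: +26% per mm
    "ANB (females)": 1.86,                 # paper: +86% per degree
    "Upper lip-chin angle (males)": 1.54,  # paper: +54% per degree
}
for name, or_value in reported.items():
    print(f"{name}: +{pct_increase_in_odds(or_value):.0f}%")
```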
A decrease of the SNB angle by 1 degree increased the risk of a 1-mm reduction in the width of the upper pharynx by 17%, and an increase of the Cm-Prn-Ns angle by 1 degree increased this risk by 14%. The reduction in the width of the lower pharynx was influenced by an increase in the distance between the upper lip and the E line and by an increased upper lip thickness. An increase in the distance between the upper lip and the E line by 1 mm increased the risk of a 1-mm reduction in the width of the lower pharynx by 15%, and an increase in the thickness of the upper lip by 1 mm increased this risk by 26%.

The upper pharynx in girls was influenced by the following risk factors: an increase in the ANB angle and an increase in the nose tip (Cm-Prn-Ns) angle. An increase of the ANB angle by 1 degree increased the risk of a 1-mm reduction in the width of the upper pharynx by 86%, and an increase of the Cm-Prn-Ns angle by 1 degree increased this risk by 19%. The upper pharynx in boys was influenced by a decrease in the SNB angle and an increase in the nose tip angle (Cm-Prn-Ns). A decrease of the SNB angle by 1 degree increased the risk of a 1-mm reduction in the width of the upper pharynx by 47%, and an increase of the Cm-Prn-Ns angle by 1 degree increased this risk by 23%. The lower pharynx in boys was influenced by an increase in the upper lip-chin angle: an increase of this angle by 1 degree increased the risk of a 1-mm reduction in the width of the lower pharynx by 54%.

Fig. 3 – Linear relationship between the distance from the upper (a, P < 0.05) and the lower (b, P < 0.01) lips to the E line and the width of the lower pharynx.

Table 4 – Factors affecting the risk of the reduction in the width of the upper and the lower pharynx, adjusted for age.

Factor | OR | 95% CI | P
Reduction in the width of the upper pharynx
  Sagittal position of the mandible (SNB) | 0.83 | 0.73–0.96 | 0.009
  Nose tip angle (Cm-Prn-Ns) | 1.14 | 1.04–1.26 | 0.005
Reduction in the width of the lower pharynx
  Distance between the upper lip and the E line | 1.15 | 1.02–1.29 | 0.024
  Upper lip thickness | 1.26 | 1.00–1.59 | 0.052
Reduction in the width of the upper pharynx in females
  Sagittal jaw relationship (ANB) | 1.86 | 1.16–2.98 | 0.010
  Nose tip angle (Cm-Prn-Ns) | 1.19 | 1.04–1.35 | 0.011
Reduction in the width of the lower pharynx in females
  Nose tip angle (Cm-Prn-Ns) | 0.91 | 0.83–1.01 | 0.066
  Upper lip thickness | 1.38 | 1.01–1.88 | 0.042
Reduction in the width of the upper pharynx in males
  Sagittal position of the mandible (SNB) | 0.68 | 0.52–0.90 | 0.006
  Nose tip angle (Cm-Prn-Ns) | 1.236 | 1.04–1.52 | 0.017
Reduction in the width of the lower pharynx in males
  Upper lip-chin angle (Pog'-Ns-Ls) | 1.54 | 1.13–2.11 | 0.007
OR, odds ratio; CI, confidence interval.

4. Discussion

There is widespread and growing interest in facial esthetics and attractiveness, which has become one of the goals of contemporary orthodontic treatment. Scientific research on the quantitative, measurable bases of facial esthetics is still in progress, and various analyses of the soft tissues are performed to evaluate facial morphology and help plan orthodontic treatment [16–19]. When analyzing the human face, maxillofacial surgeons, plastic surgeons, and orthodontists always try to set certain principles that would help in maxillofacial reconstruction or the treatment of orthodontic malocclusion [17,19–21]. Most frequently, the analysis of the soft tissue profile and the evaluation of the relationships of the nose, lips, and chin are performed [22]. Literature data suggest that the features of the nose and other facial structures depend on a person's race and ethnic group, and a number of analyses have been proposed to evaluate them [16,20]. Various techniques are used for the analysis of the soft tissue profile, including lateral cephalometric radiographs [20], digital photography [22], 3D photography [23,24], and magnetic resonance imaging [25]. Most frequently, the analysis of the soft tissues is performed by evaluating facial profile images or cephalometric analysis. Zhang et al. compared the data of facial profile analysis obtained via cephalometric radiographs or digital photography, and did not find any significant differences between these techniques [26]. The methods used for the evaluation of upper airway patency include fluoroscopy [8], nasal endoscopy [8,27], cephalometric radiographs [1,12–14], 3D computed tomography [8,28], cone beam computed tomography [15], and magnetic resonance imaging [13]. The greatest accuracy may be achieved when analyzing 3D images [8], yet the disadvantages of this technique are high radiation exposure and high costs; therefore, cephalography is used as an alternative technique for the planning of orthodontic treatment [29]. Studies comparing cephalometry with magnetic resonance imaging (MRI) suggest evaluating the nasopharynx and laryngopharynx on cephalograms, refraining from evaluating the oropharynx due to overlapping structures [13].
Scientific literature indicates that cephalography allows for performing high-quality simultaneous evaluations of the airways and of skeletal and soft tissues [11,30], and suggests using cephalography as a simple and sufficiently informative technique [10,31]. In our study, we chose cephalometric radiography images for the evaluation of soft and skeletal tissues and the airways. During this analysis, we evaluated the measurements of skeletal structures in the sagittal plane, the soft tissues of the lips, nose, and chin, and the structure of the airways in the upper and the lower parts of the pharynx. A patient's respiratory function is an important factor in diagnostics and orthodontic treatment planning, and it has a direct correlation with the size of the upper airways [32,33]. In our study, subjects with Angle Class II malocclusion had a significantly more convex facial profile, compared to Class I patients. The increased convexity of the facial profile was associated with the position of the upper and the lower lips closer to the E line, a decreased convexity angle, an increased nasomental angle, and increased upper lip-chin and lower lip-chin angles. Angle Class II patients were found to have significantly narrower upper airways. Researchers analyzing the soft tissue profile state that facial convexity is one of the parameters that have a statistically reliable relationship with pharyngeal airway pathology. Adequate treatment of orthodontic anomalies at a young age may prevent or alleviate pathological changes of the airways [5]. According to Basheer et al., individuals who breathe through the mouth have more convex faces and protruding incisors, compared to those who breathe through the nose [2]. Gulsen et al. stated that the convexity of facial soft tissues is related to the position of the jaws [20].
The findings of this study confirm the aforementioned statements: a greater facial convexity and narrower pharyngeal airways were detected in Angle Class II subjects. The correlation analysis conducted in our study showed that an increasing width of the upper pharynx was associated with an increasing sagittal mandibular angle (SNB) and a decreasing sagittal maxillo-mandibular angle (ANB). This indicates that Angle Class II subjects had narrower upper airways. The evaluation of the changes with respect to sex showed that girls with Angle Class II malocclusion had convex facial profiles: smaller facial convexity angles, smaller distances from the upper and the lower lips to the E line, and greater nasomental, upper lip-chin, and lower lip-chin angles, compared to girls in the Class I group. In boys with Class II malocclusion, increased facial convexity was related to a smaller distance from the lower lip to the E line, a decreased upper lip thickness, and an increased upper lip-chin angle. The evaluation of the influence of sex on the soft tissues showed that girls in the Class I group had smaller upper lip-chin angles, lower lip-chin angles, upper lip thickness (at point A and at the upper incisor), and lower facial height, compared to boys of the same group. Girls with Angle Class II malocclusion had greater distances between the upper lip and the E line, smaller facial convexity angles, smaller upper lip-chin angles, and a smaller upper lip thickness, compared to boys of the same group. There was no difference in airway measurements between the sexes. The retrognathic position of the mandible may cause pharyngeal airway constriction in patients with a convex facial profile. Facial profile convexity may be one of the risk factors of sleep apnea. A study conducted by Ikävalko et al.
showed that, of all healthcare specialists working with children, orthodontists can perform the most exact evaluations of facial convexity, because other specialists lack knowledge about the importance of the facial profile in the diagnostics of airway pathology [5]. The results of studies conducted by Dimaggio et al. [22] and Souki et al. [4] suggest that the nasolabial angle in patients with Angle Class I is statistically significantly greater than in those with Angle Class II malocclusion. A small nasolabial angle may be detected in patients who breathe through the mouth [4]. The data of our study confirm these results, as we found that Class II subjects had more protruded upper lips, an increased upper lip-chin angle, and a decreased distance to the E line, compared to Class I subjects. This is due to the protrusion of the upper incisors and the upper lip, which in turn is caused by an imbalance of the linguolabial positioning. The growth and development of the maxillofacial system should be closely monitored during the critical pre-puberty period in order to prevent future nose breathing disorders, especially in patients with possible nasal breathing disorders. The condition of these patients should be followed by a team of specialists consisting of a dentist, an orthodontist, a pediatrician, an ENT specialist, and an allergist, who should ensure timely correction of nose breathing disorders during the period of the active growth and development of the maxillofacial system. The results of the present study show that upper airway obstruction and malocclusion are interrelated and cause changes in the facial profile.
Since the cause-and-effect correlation between the size of the upper and the lower airways and the type of malocclusion has yet to be confirmed, sagittal and vertical skeletal discrepancies should be corrected interventionally during the period of growth and development, maximally approximating them to the normal status [33]. The function of the airway is of considerable importance in orthodontics and cannot be overlooked during treatment planning.

5. Conclusions

During the critical period of growth and development of the maxillofacial system, patients with oral functional disturbances should be monitored and treated by a multidisciplinary team consisting of a dentist, an orthodontist, a pediatrician, an ENT specialist, and an allergologist. The aim of this team is the timely diagnostics of disorders, adequate early treatment, optimal recommendations for primary and secondary prevention, and patient monitoring throughout this growth period. Cephalometric radiographs producing 2D images are a sufficiently informative, simple, and inexpensive diagnostic modality, which could be recommended for use in daily clinical practice when diagnosing respiratory system disorders and disorders of the morphology of skeletal and soft maxillofacial tissues. Cephalometric analysis applied in our study showed that Angle Class II patients with a significantly decreased facial convexity angle, increased nasomental, upper lip-chin, and lower lip-chin angles, and upper and lower lips located more proximally to the E line more frequently had constricted airways.

Conflict of interest

The authors state that they have no conflicts of interest to declare.

References

[1] Oz U, Orhan K, Rubenduz M. Two-dimensional lateral cephalometric evaluation of varying types of Class II subgroups on posterior airway space in postadolescent girls: a pilot study. J Orofac Orthop 2013;74(1):18–27.
[2] Basheer B, Hegde KS, Bhat SS, Umar D, Baroudi K.
Influence of mouth breathing on the dentofacial growth of children: a cephalometric study. J Int Oral Health 2014;6(6):50–5.
[3] Šidlauskienė M, Smailienė D, Lopatienė K, Čekanauskas E, Pribuišienė R, Šidlauskas M. Relationships between malocclusion, body posture, and nasopharyngeal pathology in pre-orthodontic children. Med Sci Monit 2015;21:1765–73.
[4] Souki BQ, Lopes PB, Veloso NC, Avelino RA, Pereira TB, Souza PE, et al. Facial soft tissues of mouth-breathing children: do expectations meet reality? Int J Pediatr Otorhinolaryngol 2014;78(7):1074–9.
[5] Ikävalko T, Närhi M, Lakka T, Myllykangas R, Tuomilehto H, Vierola A, et al. Lateral facial profile may reveal the risk for sleep disordered breathing in children. Acta Odontol Scand 2015;73(7):550–5.
[6] Indriksone I, Jakobsone G. The upper airway dimensions in different sagittal craniofacial patterns: a systematic review. Stomatologija 2014;16(3):109–17.
[7] Muto T, Yamazaki A, Takeda S. A cephalometric evaluation of the pharyngeal airway space in patients with mandibular retrognathia and prognathia, and normal subjects. Int J Oral Maxillofac Surg 2008;37(3):228–31.
[8] Kula K, Jeong AE, Halum S, Kendall D, Ghoneima A. Three dimensional evaluation of upper airway volume in children with different dental and skeletal malocclusions. J Biomed Graph Comput 2013;3(4):116–26.
[9] Jakobsone G, Urtane I, Terauds I. Soft tissue profile of children with impaired nasal breathing. Stomatologija 2006;8(2):39–43.
[10] Silva NN, Lacerda RH, Silva AW, Ramos TB. Assessment of upper airways measurements in patients with mandibular skeletal Class II malocclusion. Dental Press J Orthod 2015;20(5):86–93.
[11] Sutherland K, Schwab RJ, Maislin G, Lee RW, Benediktsdottir B, Pack AI, et al. Facial phenotyping by quantitative photography reflects craniofacial morphology measured on magnetic resonance imaging in Icelandic sleep apnea patients. Sleep 2014;37(5):959–68.
[12] Ryu HH, Kim CH, Cheon SM, Bae WY, Kim SH, Koo SK, et al. The usefulness of cephalometric measurement as a diagnostic tool for obstructive sleep apnea syndrome: a retrospective study. Oral Surg Oral Med Oral Pathol Oral Radiol 2015;119(1):20–31.
[13] Pirilä-Parkkinen K, Löppönen H, Nieminen P, Tolonen U, Pääkkö E, Pirttiniemi P. Validity of upper airway assessment in children: a clinical, cephalometric, and MRI study. Angle Orthod 2011;81(3):433–9.
[14] Armalaitė J, Lopatienė K. Lateral teleradiography of the head as a diagnostic tool used to predict obstructive sleep apnea. Dentomaxillofac Radiol 2016;45(1):20150085.
[15] Grauer D, Cevidanes LS, Styner MA, Ackerman JL, Proffit WR. Pharyngeal airway volume and shape from cone-beam
computed tomography: relationship to facial morphology. Am J Orthod Dentofacial Orthop 2009;136(6):805–14.
[16] Joshi M, Wu LP, Maharjan S, Regmi MR. Sagittal lip positions in different skeletal malocclusions: a cephalometric analysis. Prog Orthod 2015;16:77.
[17] Anic-Milosevic S, Mestrovic S, Prlić A, Slaj M. Proportions in the upper lip-lower lip-chin area of the lower face as determined by photogrammetric method. J Craniomaxillofac Surg 2010;38(2):90–5.
[18] Anić-Milosević S, Lapter-Varga M, Slaj M. Analysis of the soft tissue facial profile by means of angular measurements. Eur J Orthod 2008;30(2):135–40.
[19] Rose AD, Woods MG, Clement JG, Thomas CD. Lateral facial soft-tissue prediction model: analysis using Fourier shape descriptors and traditional cephalometric methods. Am J Phys Anthropol 2003;121(2):172–80.
[20] Gulsen A, Okay C, Aslan BI, Uner O, Yavuzer R. The relationship between craniofacial structures and the nose in Anatolian Turkish adults: a cephalometric evaluation. Am J Orthod Dentofacial Orthop 2006;130(2):15–25.
[21] Johal A, Patel SI, Battagel JM. The relationship between craniofacial anatomy and obstructive sleep apnoea: a case-controlled study. J Sleep Res 2007;16(3):319–26.
[22] Dimaggio FR, Ciusa V, Sforza C, Ferrario VF. Photographic soft-tissue profile analysis in children at 6 years of age. Am J Orthod Dentofacial Orthop 2007;132(4):475–80.
[23] Nanda V, Gutman B, Bar E, Alghamdi S, Tetradis S, Lusis AJ, et al. Quantitative analysis of 3-dimensional facial soft tissue photographic images: technical methods and clinical application. Prog Orthod 2015;16:21.
[24] Choi JW, Lee JY, Oh TS, Kwon SM, Yang SJ, Koh KS.
Frontal soft tissue analysis using a 3 dimensional camera following two-jaw rotational orthognathic surgery in skeletal class III patients. J Craniomaxillofac Surg 2014;42(3):220–6.
[25] Lee RW, Sutherland K, Chan AS, Zeng B, Grunstein RR, Darendeliler MA, et al. Relationship between surface facial dimensions and upper airway structures in obstructive sleep apnea. Sleep 2010;33(9):1249–54.
[26] Zhang X, Hans MG, Graham G, Kirchner HL, Redline S. Correlations between cephalometric and facial photographic measurements of craniofacial form. Am J Orthod Dentofacial Orthop 2007;131(1):67–71.
[27] Passali FM, Bellussi L, Mazzone S, Passali D. Predictive role of nasal functionality tests in the evaluation of patients before nocturnal polysomnographic recording. Acta Otorhinolaryngol Ital 2011;31(2):103–8.
[28] Mello Junior CF, Guimarães Filho HA, Gomes CA, Paiva CC. Radiological findings in patients with obstructive sleep apnea. J Bras Pneumol 2013;39(1):98–101.
[29] Lee RW, Chan AS, Grunstein RR, Cistulli PA. Craniofacial phenotyping in obstructive sleep apnea – a novel quantitative photographic approach. Sleep 2009;32(1):37–45.
[30] Sutherland K, Lee RW, Cistulli PA. Obesity and craniofacial structure as risk factors for obstructive sleep apnoea: impact of ethnicity. Respirology 2012;17(2):213–22.
[31] Sato K, Shirakawa T, Sakata H, Asanuma S. Effectiveness of the analysis of craniofacial morphology and pharyngeal airway morphology in the treatment of children with obstructive sleep apnoea syndrome. Dentomaxillofac Radiol 2012;41(5):411–6.
[32] Lopatiene K, Smailiene D, Sidlauskiene M, Cekanauskas E, Valaikaite R, Pribuisiene R. An interdisciplinary study of orthodontic, orthopedic, and otorhinolaryngological findings in 12–14-year-old preorthodontic children. Medicina (Kaunas) 2013;49(11):479–86.
[33] Ucar FI, Uysal T. Comparision of orofacial airway dimensions in subject with different breathing pattern. Prog Orthod 2012;13(3):210–7.
Relationship between malocclusion, soft tissue profile, and pharyngeal airways: A cephalometric study

work_kuj33nsvtvdafd7csivg33hycm ---- untitled CASE REPORT A case of digital hoarding Martine J van Bennekom,1 Rianne M Blom,1 Nienke Vulink,1 Damiaan Denys2 1Department of Psychiatry, Academic Medical Center, Amsterdam, The Netherlands 2Netherlands Institute for Neuroscience, Royal Netherlands Academy of Arts and Science, Amsterdam Correspondence to Martine J van Bennekom, m.j.vanbennekom@amc.nl Accepted 16 September 2015 To cite: van Bennekom MJ, Blom RM, Vulink N, et al. BMJ Case Rep Published online: [please include Day Month Year] doi:10.1136/bcr-2015-210814 SUMMARY A 47-year-old man presented to our outpatient clinic, preoccupied with hoarding of digital pictures, which severely interfered with his daily functioning. He was formerly diagnosed with autism and hoarding of tactile objects. As of yet, digital hoarding has not been described in the literature. With this case report, we would like to introduce 'digital hoarding' as a new subtype of hoarding disorder. We conclude with differential diagnostic considerations and suggestions for treatment. BACKGROUND With increasing technological innovation and unlimited possibilities for digital storage, a new subtype of hoarding may have arisen, 'digital hoarding'. Digital hoarding is the accumulation of digital files to the point of loss of perspective, which eventually results in stress and disorganisation. Although digital hoarding does not interfere with cluttering of living spaces, it has an immense impact on daily life functioning.
Although no scientific papers have been published on this subject, the problem is frequently described on the internet by patients and by professionals. Recently, hoarding disorder has been documented as a separate disorder in the fifth edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-5) [1]. The specific phenomenology and development of hoarding justified a classification separate from obsessive-compulsive disorder (OCD) [2]. Hoarding disorder is defined as the extensive collection of objects followed by difficulty discarding them for aesthetic reasons, or because of an object's emotional value or consideration of its possible usefulness in the future [3]. The consequence of hoarding objects is the accumulation of objects in living spaces, leading to limited space, poor hygiene and feelings of embarrassment [4]. We describe the first case of digital hoarding, based on a patient who presented to our outpatient clinic. CASE PRESENTATION A 47-year-old man was referred to our outpatient clinic with a request to treat his hoarding behaviour. The referral was made by his attending psychiatric nurse from an institution specialised in the treatment of autism, who helped the patient with structuring his daily life. In 2007, the patient was diagnosed with an autism spectrum disorder (ASD) in combination with traits of attention deficit disorder (ADD). Furthermore, he was known to have recurrent depressive episodes since 1994, for which he was being treated with venlafaxine 150 mg daily. His object hoarding gradually started when he was a university student, followed by progressive worsening over the years. He had never sought treatment for his object hoarding before presentation to our outpatient clinic. The patient mainly hoarded objects with limited or no economic value, such as paperwork and bike components. He had difficulty discarding these objects and felt comfortable surrounded by them. He believed they would be of use to him in the future.
The accumulation of these objects led to severe cluttering of his house (figure 1). Owing to his object hoarding, the patient felt embarrassed to invite people over. The patient also hoarded digital pictures, which started 5 years earlier when he obtained a digital camera. Digital photography was his main daytime activity; he took up to 1000 images every day, mainly of landscapes. He had difficulty discarding these pictures, even though many were very similar, because they brought back memories. As with his objects, the patient felt attached to his digital pictures. He had four external hard drives containing the original pictures and four external hard drives containing backups. He never used or looked at the pictures he had saved, but was convinced that they would be of use in the future. He planned to merge pictures when new technologies became available and thought some pictures would be suitable for future publication. The patient indicated that organising the large amount of digital pictures caused feelings of frustration and was very time-consuming, taking 3–5 h a day on average. It interfered with his sleeping pattern and kept him from other activities such as cleaning his house, going outside and relaxing. The patient was not married, had no children and was unemployed. He lived on an allowance. He lived by himself in an apartment in the city. He had a fairly good relationship with his father and had two good friends. Two aunts on his mother's side of the family also expressed hoarding behaviour. Figure 1 Picture of patient's living room, published with his permission, Clutter Image Rating #5 [5]. INVESTIGATIONS Mental status examination revealed an unshaved and casually dressed man. His attitude was friendly and enthusiastic.
He presented his case in a rigid way and had a short attention span. However, he was oriented and there were no memory deficits. There were no delusions or hallucinations. He did not have obsessions or compulsions. His mood was normal and he had no suicidal thoughts. His psychomotor behaviour was restless with stereotyped repetitive movements. He had limited insight into his hoarding behaviour. We administered several hoarding questionnaires. His total score on the Saving Inventory-Revised (SI-R) was 47 of 92, with an emphasis on clutter; a total score >40 is classified as hoarding disorder [6]. On the Compulsive Acquisition Scale (CAS), he had a score of 46 [7]. This is just below the cut-off of 47.8 for compulsive buying [8]. On the Saving Cognitions Inventory, the patient scored 82, while the mean score for hoarders is 104 [9]. He scored relatively high on the 'control' subscale, indicating he liked to be in control of his objects and would not let other people throw anything away. The need for control over possessions is a major contributing factor in hoarding disorder [2]. DIFFERENTIAL DIAGNOSIS The patient met all DSM-5 criteria of hoarding disorder for his object hoarding (table 1). We compared the digital hoarding of this patient to the DSM-5 criteria of hoarding disorder and noted many similarities. The patient had difficulty discarding his digital pictures even though they were of limited value. Furthermore, he had stored multiple pictures of the same scene and never inspected them again. He believed these pictures could be of use to him in the future. Moreover, the amount of pictures made it impossible for him to process them on his computer, hence the eight external hard drives. Therefore, this 'digital' cluttering led to disorganisation of the digital pictures and accumulation of digital storage space, which is functionally similar to the 'accumulation of possessions' in object hoarding.
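The questionnaire cut-offs quoted above amount to a simple threshold check. As a minimal sketch (the function name and data structure are ours, not from the case report), assuming the cut-offs as quoted in the text, SI-R total above 40 and CAS above 47.8:

```python
# Published cut-offs as quoted in the text; the SCI is reported against a
# group mean rather than a cut-off, so it is deliberately omitted here.
CUTOFFS = {"SI-R": 40.0, "CAS": 47.8}

def flag_scores(scores):
    """Return the instruments on which a patient's score exceeds its cut-off."""
    return [name for name, value in scores.items()
            if name in CUTOFFS and value > CUTOFFS[name]]

# This patient's scores: SI-R 47 (above 40), CAS 46 (below 47.8), SCI 82.
patient = {"SI-R": 47, "CAS": 46, "SCI": 82}
print(flag_scores(patient))  # → ['SI-R']
```

The sketch mirrors the clinical reading in the text: the SI-R result is consistent with hoarding disorder, while the CAS score falls just short of the compulsive-buying cut-off.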
The patient experienced significant distress because of his digital hoarding. The digital hoarding was not preceded by specific obsessions, nor did the patient suffer from other obsessions. The patient did not feel forced to perform this behaviour and it was not directed at any perceived prevention. Therefore, we did not classify his digital hoarding as a symptom dimension of OCD. Furthermore, the patient did not meet criteria for obsessive-compulsive personality disorder (OCPD) apart from his hoarding. We considered the option of his digital hoarding being part of his ASD. Pertusa et al [10] stated that hoarding behaviour in ASD might be part of a stereotyped interest with related collecting of items. The digital hoarding of our patient could be seen as part of a stereotyped interest in digital photography in the scope of his ASD. However, the processing and saving of the digital pictures caused suffering and distress, and thus was less likely to be part of his stereotyped interest. He claimed he would enjoy his digital photography more without the need to organise and discard the pictures. Furthermore, patients with ASD have problems with executive functions such as planning and organising, which could be the reason for his disorganised digital hoarding. However, the disorganisation of his digital pictures was not solely due to a lack of overview, but mainly a consequence of massive accumulation. Therefore, we did not categorise his digital hoarding as part of his ASD. Apart from the ASD, this patient was also diagnosed with traits of ADD. Individuals with childhood attention deficit hyperactivity disorder (ADHD) symptoms have a higher prevalence of lifetime hoarding symptoms compared with individuals without childhood ADHD (8.9% vs 2.7%) [11]. Inattention in ADHD is an especially strong predictor of hoarding-related behaviour [12]. Also, patients with hoarding disorder show an increased rate of ADHD symptoms compared to healthy controls.
Nearly 30% of hoarding patients fulfil the inattention criteria for adult ADHD [13]. Part of this patient's inability to organise his objects and digital pictures could be attributed to his inattention and lack of overview due to ADD. However, hoarding disorder itself may lead to impaired cognitive functioning, especially inattention and indecisiveness, contributing to disorganisation [14,15]. Furthermore, the disorganisation is also due to massive accumulation. Therefore, the disorganisation in his collection of objects and digital pictures is better explained as a consequence of his hoarding disorder than of his ADD traits. All in all, certain aspects of his digital hoarding, including the lack of overview and disorganisation of his digital pictures, could be partly attributed to his ASD and ADD traits. However, his digital hoarding fulfilled all DSM-5 criteria for hoarding disorder. This justified a separate diagnosis of hoarding disorder alongside his ASD and ADD traits. TREATMENT Object hoarding is mainly treated with cognitive-behavioural therapy (CBT), including motivational interviewing, stimulus control, finding alternative activities and challenging dysfunctional thoughts [16]. We suggest a comparable treatment strategy for digital hoarding. In the case of our patient, we would first apply motivational interviewing to encourage treatment of his digital hoarding, followed by reducing the acquisition of photos, for example by setting a maximum number of pictures to be taken of a scene, and by searching for alternative activities related to digital photography. Finally, his thoughts about storing pictures for future use could be challenged. Ideally, part of this treatment would be performed at the patient's home by home care workers instructed by an experienced cognitive-behavioural therapist.
OUTCOME AND FOLLOW-UP The patient is currently being treated with CBT by a therapist from our psychiatry department, in collaboration with his attending psychiatric nurse. The first priority is treatment of his object hoarding because of the enormous cluttering of his house. So far, the patient has organised a large amount of his objects in a wall cabinet. He is satisfied with the extra floor space gained and this encourages him to keep going. The next focus will be on decreasing acquisition and on discarding objects.

Table 1 Diagnostic criteria for hoarding disorder in the fifth edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-5) [1]
Criterion A: Persistent difficulty discarding or parting with personal possessions, regardless of their actual value
Criterion B: The difficulty is due to strong urges to save items and/or distress associated with discarding
Criterion C: The symptoms result in the accumulation of a large number of possessions, which fill up and clutter active living areas of the home or workplace to the extent that the intended use of those areas is no longer possible. If all living space is uncluttered, it is only because of the interventions of third parties (eg, family members, cleaners, authorities)
Criterion D: The symptoms cause clinically significant distress or impairment in social, occupational or other important areas of functioning (including maintaining a safe environment for oneself and others)
Criterion E: The hoarding symptoms are not due to a general medical condition (eg, brain injury, cerebral vascular disease)
Criterion F: The hoarding symptoms are not restricted to the symptoms of another mental disorder (eg, hoarding due to obsessions in obsessive-compulsive disorder, cognitive deficits in dementia, restricted interests in autism spectrum disorder, food storing in Prader-Willi syndrome)
The treatment progresses slowly because the patient wants to maintain full control over the treatment. The reduction of digital hoarding has recently been included in his treatment, starting with organising his pictures based on subject and deleting pictures of lesser quality. DISCUSSION We presented a patient who, apart from object hoarding, suffered from digital hoarding. The accumulation and disorganisation of his digital pictures resulted in distress and impairment of functioning. The features of his digital hoarding showed a pattern similar to his object hoarding. No scientific papers on digital hoarding have been published to date. Digital hoarding is described as a new concept in online blogs and magazines, which mainly contain calls for help or 'confessions' from possible digital hoarders. On these websites, digital hoarding is defined as a more hidden subtype of hoarding, because it does not interfere with living spaces and hygiene. It is considered a solitary problem, not necessarily existing alongside other forms of hoarding. The unrestricted possibilities for saving and the fear of losing important data lead to disorganisation and loss of perspective, so-called digital cluttering. Digital hoarding is pathological when it crosses the line of interference with other aspects of life. No university or hospital websites have addressed the problem or offered professional treatment. A few clinical psychologists offer treatment in the form of behavioural therapy focusing on distraction by other activities, building on social skills and improving sleep hygiene [17]. Technology companies have reacted to the phenomenon by offering tools and software for archiving and filing computer files [18]. However, the possibility of saving more files might stop any tendency to actually delete files to gain more overview of data. Digital hoarding could be treated based on the principles of CBT for object hoarding.
We suggest digital hoarding as a specific subtype of hoarding disorder. Digital hoarding would fulfil the general DSM-5 criteria of hoarding disorder (table 1), because it involves (A and B) a difficulty to discard due to a strong urge to save files because of perceived need or emotional attachment, (C) an accumulation of files on hard drives and external drives, leading to 'digital clutter', loss of overview and disorganisation, and (D) significant distress and interference with daily functioning. To ensure rapid detection, this subtype of hoarding disorder should be added to hoarding screening instruments [19].

Patient's perspective

I am proud to be the first patient in whom digital hoarding is described in the scientific literature. I hope this will be of help in the future to other patients with problems of digital hoarding.

Learning points

▸ We suggest digital hoarding, characterised by the accumulation and disorganisation of digital files causing distress and impairment in functioning, as a new subtype of hoarding disorder.
▸ Digital hoarding might be less obvious than other subtypes of hoarding, and detecting it requires a perceptive attitude from caregivers.
▸ More case reports and clinical trials are necessary to gain further insight into the phenomenology and optimal treatment strategy of digital hoarding.

Acknowledgements The authors would like to acknowledge Ron de Joode, therapist at our department, for his involvement in the treatment.

Contributors MJVB identified the case. All the authors contributed to the conception, design and draft of the manuscript. All the authors revised the manuscript critically for important intellectual content. DD takes responsibility for the integrity of the data. All the authors gave their final approval of the version to be published.

Competing interests None declared.

Patient consent Obtained.

Provenance and peer review Not commissioned; externally peer reviewed.

REFERENCES

1 American Psychiatric Association.
Diagnostic and Statistical Manual of Mental Disorders. 5th edn. Arlington, VA: American Psychiatric Publishing, 2013.
2 Marchand S, Phillips McEnany G. Hoarding's place in the DSM-5: another symptom, or a newly listed disorder? Issues Ment Health Nurs 2012;33:591–7.
3 Frost RO, Patronek G, Rosenfield E. Comparison of object and animal hoarding. Depress Anxiety 2011;28:885–91.
4 Frost RO, Steketee G, Tolin DF. Diagnosis and assessment of hoarding disorder. Annu Rev Clin Psychol 2012;8:219–42.
5 http://www.ocfoundation.org/hoarding/cir.pdf (accessed Jun 2014).
6 Frost RO, Steketee G, Grisham J. Measurement of compulsive hoarding: saving inventory-revised. Behav Res Ther 2004;42:1163–82.
7 Frost RO, Steketee G, Williams L. Compulsive buying, compulsive hoarding, and obsessive-compulsive disorder. Behav Ther 2002;33:201–14.
8 Frost RO, Tolin DF, Steketee G, et al. Excessive acquisition in hoarding. J Anxiety Disord 2009;23:632–9.
9 Steketee G, Frost RO, Kyrios M. Cognitive aspects of compulsive hoarding. Cognit Ther Res 2003;27:463–79.
10 Pertusa A, Bejerot S, Eriksson J, et al. Do patients with hoarding disorder have autistic traits? Depress Anxiety 2012;29:210–18.
11 Fullana MA, Vilagut G, Mataix-Cols D, et al. Is ADHD in childhood associated with lifetime hoarding symptoms? An epidemiological study. Depress Anxiety 2013;30:741–8.
12 Tolin DF, Villavicencio A. Inattention, but not OCD, predicts the core features of hoarding disorder. Behav Res Ther 2011;49:120–5.
13 Frost RO, Steketee G, Tolin DF. Comorbidity in hoarding disorder. Depress Anxiety 2011;28:876–84.
14 Woody SR, Kellmane-McFarlane K, Welsted A. Review of cognitive performance in hoarding disorder. Clin Psychol Rev 2014;34:324–36.
15 Blom RM, Samuels JF, Grados MA, et al. Cognitive functioning in compulsive hoarding. J Anxiety Disord 2011;25:1139–44.
16 Gilliam CM, Tolin DF. Compulsive hoarding. Bull Menninger Clin 2010;74:93–121.
17 http://drchristinavillarreal.com/tag/digital-hoarding/ (accessed Jun 2014).
18 http://digitalpreservation.gov/personalarchiving/ (accessed Jun 2014).
19 Nordsletten AE, Fernández de la Cruz L, Pertusa A, et al. The structured interview for hoarding disorder (SIHD): development, usage and further validation. J Obsessive Compuls Relat Disord 2013;2:346–50.

van Bennekom MJ, et al. BMJ Case Rep 2015. doi:10.1136/bcr-2015-210814 (New disease)
work_kulpocb6vbh6ho74ysrhyvk7ae ----

The Influence of Age, Duration of Diabetes, Cataract, and Pupil Size on Image Quality in Digital Photographic Retinal Screening
PETER HENRY SCANLON, MRCP,1 CHRIS FOY, MSC,2 RAMAN MALHOTRA, FRCOPHTH,3 STEPHEN J. ALDINGTON, DMS4

OBJECTIVE — To evaluate the effect of age, duration of diabetes, cataract, and pupil size on the image quality in digital photographic screening.
RESEARCH DESIGN AND METHODS — Randomized groups of 3,650 patients had one-field, nonmydriatic, 45° digital retinal imaging photography before mydriatic two-field photography. A total of 1,549 patients were then examined by an experienced ophthalmologist. Outcome measures were ungradable image rates, age, duration of diabetes, detection of referable diabetic retinopathy, presence of early or obvious central cataract, pupil diameter, and iris color.
RESULTS — The ungradable image rate for nonmydriatic photography was 19.7% (95% CI 18.4–21.0) and for mydriatic photography was 3.7% (3.1–4.3).
The odds of having one eye ungradable increased, irrespective of age, by 2.6% (1.6–3.7) for each extra year since diagnosis for nonmydriatic photography and by 4.1% (2.7–5.7) for mydriatic photography, and, irrespective of years since diagnosis, by 5.8% (5.0–6.7) for nonmydriatic photography and by 8.4% (6.5–10.4) for mydriatic photography for every extra year of age. Obvious central cataract was present in 57% of ungradable mydriatic photographs, early cataract in 21%, no cataract in 9%, and 13% had other pathologies. The pupil diameter in the ungradable eyes showed a significant trend (P < 0.001) in the three groups (obvious cataract 4.434, early cataract 3.379, and no cataract 2.750).
CONCLUSIONS — The strongest predictor of ungradable image rates, both for nonmydriatic and mydriatic digital photography, is the age of the person with diabetes. The most common cause of ungradable images was obvious central cataract. Diabetes Care 28:2448–2453, 2005

The use of nonmydriatic photography has been reported from the U.S. (1–4), Japan (5), Australia (6,7), France (8), and the U.K. (9–13). Reports of ungradable image rates for nonmydriatic photography vary between 4% reported by Leese et al. (10) and 34% reported by Higgs et al. (13).
In the U.K., national screening programs for detection of sight-threatening diabetic retinopathy are being implemented in England (14), Scotland (15), Wales, and Northern Ireland. England and Wales are using two-field 45° mydriatic digital photography as their preferred method. Scotland is using a three-stage screening procedure, in which the first stage is one-field nonmydriatic digital photography, with mydriatic photography used for failures of nonmydriatic photography and slit-lamp biomicroscopy for failures of both photographic methods. Northern Ireland is performing nonmydriatic photography in those aged <50 years and mydriatic photography in those aged ≥50 years.
The Gloucestershire Diabetic Eye Study (9) was designed to formally evaluate the community-based nonmydriatic and mydriatic digital photographic screening program that was introduced in October 1998. The current study was designed to evaluate the effect of age, duration of diabetes, cataract, and pupil size on the image quality in nonmydriatic and mydriatic digital photographic screening.

RESEARCH DESIGN AND METHODS — For the comparison of mydriatic photography and nonmydriatic photography in those patients with gradable images, the Gloucestershire Diabetic Eye Study (9) was designed to detect a difference of 2% in the detection of referable diabetic retinopathy between the methods (9% for mydriatic and 7% for nonmydriatic photography). To detect this difference with 80% power and 5% significance level, 3,650 patients had to be examined, allowing for an estimated ungradable image rate of 15% with nonmydriatic photography. Eighty groups of 50 patients from within individual general practices were randomly selected for inclusion as potential study patients. This number allowed for lower rates of screening uptake within some of the study practices.
The patient's history (including diabetes type) was taken and signed consent obtained. Patients classified as type 1 had commenced insulin within 4 months of diagnosis, while patients classified as type 2 were not requiring insulin or commenced insulin after 4 months of diagnosis.
Visual acuity was measured using retroilluminated LogMAR charts modified from those used in the Early Treatment Diabetic Retinopathy Study (16). One 45° nonmydriatic digital photograph was taken of each eye using a Topcon NRW5S camera with Sony 950 video camera centered on the macula, repeated once only if necessary. After mydriasis with Tropicamide 1%, two 45° photographs, macular and nasal, were taken of each eye according to the EURODIAB protocol (17).
Direct ophthalmoscopy was performed, the results of which were recorded. The screener was at liberty to take additional retinal or anterior segment views if he considered this to be appropriate and was specifically requested to take an anterior segment view of an eye with a poor quality image.

From the 1Department of Ophthalmology, Cheltenham General Hospital, Cheltenham, U.K.; the 2R&D Support Unit, Gloucester Hospitals National Health Service Trust, Gloucester, U.K.; the 3Oxford Eye Hospital, Oxford, U.K.; and the 4Retinopathy Grading Centre, Imperial College, London, U.K.
Address correspondence and reprint requests to Dr. Peter Scanlon, Gloucestershire Eye Unit, Cheltenham General Hospital, Sandford Road, Cheltenham, GL53 7AN, U.K. E-mail: peter.scanlon@glos.nhs.uk.
Received for publication 9 February 2005 and accepted in revised form 23 June 2005.
© 2005 by the American Diabetes Association.
Pathophysiology/Complications: Original Article. Diabetes Care, Volume 28, Number 10, October 2005.

Grading
Patients for the reference standard examination (n = 1,549) using 78D lens slit-lamp biomicroscopy and direct ophthalmoscopy were recruited from those attending for photographic screening on days when an experienced ophthalmologist (P.H.S.) was able to attend. A separate study was performed to validate the ophthalmologist's reference standard against seven-field stereophotography (18).
A specialist registrar in ophthalmology (R.M.)
interpreted the images from the study patients who received the reference standard examination (n = 1,549). P.H.S. interpreted the images of all patients who did not receive his reference standard examination (n = 2,062). Graders had a history sheet, including the patient's age, diabetes and ophthalmological history, visual acuity, screener's ophthalmoscopy findings, and reasons for extra views.
Nonmydriatic and mydriatic images were graded using Orion software (Cwmbran, U.K.), with time of grading separated by at least 1 month to prevent bias from a memory effect. It was not possible to mask the grader between methods because one image of each eye was captured without mydriasis and two images with mydriasis. For grading, 19-inch Sony Trinitron monitors were used with a screen resolution of 1,024 × 768 and 32-bit color (although we recognize that the camera system was limited to 24 bit). The Topcon fundus camera with Sony digital camera produced an image of resolution 768 × 568 pixels.
Image grading and the reference standard examination used the Gloucestershire adaptation of the European Working Party guidelines (19) for referable diabetic retinopathy (previously used in the Gloucestershire Diabetic Eye Study [9] and validated against seven-field stereophotography in a separate study [18]), as shown in Table 1. Referable retinopathy was classified as grades three to six on this form. The International Classification (20) was not used because the current study was undertaken before this was introduced and, even if this was available, referral to an ophthalmologist in the U.K. is at a level between level 3 and level 4 of the International Classification.
The ungradable image rate was classified as the number of patients with an ungradable image in one or both eyes unless referable diabetic retinopathy was detected in either eye.
Image quality was judged with reference to each eye on the macular view, and an eye was considered ungradable when the large vessels of the temporal arcades were blurred or more than one-third of the picture was blurred, unless referable retinopathy was detected in the remainder. The nasal view was regarded as providing supplementary information and was not used for image quality assessments.

Reexamination of photographs
P.H.S. reexamined all the anterior segment photographs from eyes with ungradable images and any control eyes (i.e., if an anterior segment photograph had been taken of the patient's other eye) to determine whether cataract was present, using the following classifications: 1) obvious central cataract: impaired central red reflex with obvious cataract almost certainly contributing to poor image quality; 2) early cataract: some impairment of central red reflex with cataract, which may or may not contribute to poor image quality; and 3) no cataract: good central red reflex and either no cataract or early peripheral lens changes not considered to contribute to poor image quality.
The horizontal pupil diameter of all the pupils was measured in the central axis on the 19-inch monitor on which the anterior segment images were displayed. The anterior segment images had been collected using a standardized methodology, so as to maintain near equivalence in image magnification between patients. Any other pathology that might have contributed to impaired quality of retinal images was recorded. Iris color of the ungradable eye was classified as blue, green (including blue with brown flecks or green), light brown, or dark brown.
Table 1

Description | Grade right eye | Grade left eye | Outcome
No diabetic retinopathy | 0 | 0 | 12/12
Minimal nonproliferative diabetic retinopathy | 1 | 1 | 12/12
Mild nonproliferative diabetic retinopathy | 2 | 2 | 12/12
Maculopathy | 3 | 3 | Refer
- Hemorrhage ≤1 DD from foveal center | 3a | 3a | Routine
- Exudates ≤1 DD from foveal center | 3b | 3b | Soon
- Groups of exudates (including circinate and plaque) within the temporal arcades >1 DD from foveal center | 3c | 3c | Soon
- Reduced VA not corrected by a pinhole likely to be caused by a diabetic macular problem and/or suspected clinically significant macular edema | 3d | 3d | Soon
Moderate to severe nonproliferative diabetic retinopathy | 4 | 4 | Refer
- Multiple cotton wool spots (>5) | 4a | 4a | Soon
- and/or multiple hemorrhages | 4a | 4a | Soon
- and/or intraretinal microvascular abnormalities | 4b | 4b | Soon
- and/or venous irregularities (beading, reduplication, or loops) | 4b | 4b | Soon
Proliferative diabetic retinopathy | 5 | 5 | Refer
- New vessels on the disc, new vessels elsewhere, preretinal hemorrhage, and/or fibrous tissue | | | Urgent
Advanced diabetic retinopathy | 6 | 6 | Refer
- Vitreous hemorrhage, traction/traction detachment, and/or rubeosis iridis | | | Immediate
DD, disc diameter; VA, visual acuity.

Statistical methods
Data were entered into a customized database in the Medical Data Index (Patient Administration System) at Cheltenham General Hospital and downloaded into SPSS version 10 (SPSS, Chicago, IL) for
To identify any trends, the diameters in un- gradable and opposite eyes and the differ- ence between them were compared between cataract groups using one-way ANOVA with a linear contrast. Age and duration of diabetes were compared be- tween the no cataract and the obvious central cataract group using Mann- Whitney U tests. RESULTS Acceptance rate of screening invitation and nonattendance rate at screening appointment and identification of the study population Of 11,909 people with diabetes in the county, 74% responded to the screening invitation and attended. Of those who re- sponded to the screening invitation and booked an appointment, the attendance rate was 95%. The high response and at- tendance rates enabled the target popula- tion of 3,650 patients from within 80 groups of 50 patients to be identified and examined. Images of 39 patients from one prac- tice were excluded from the study because the patient images were accidentally cap- tured in JPEG format instead of TIFF format. Ungradable image rates were cal- culated for all remaining 3,611 patients in the study. Seven grading forms were absent from the nonmydriatic group, all of which were from the subgroup of 1,549 patients who had the reference standard examination. Ungradable image rate and age The ungradable image rate for nonmydri- atic photography was 19.7% (95% CI 18.4 –21.0) and for mydriatic photogra- phy was 3.7% (3.1– 4.3). A total of 15 patients in the nonmydriatic group and 8 patients in the mydriatic group who were found to have an ungradable image in one eye were not included in the ungradable image rate because referable retinopathy was detected in the other eye (Fig. 1). Detection of referable retinopathy in different age ranges From the reference standard examination of 1,549 patients, 180 patients were found to have referable diabetic retinopa- thy. 
The grading form for one of these patients (from the nonmydriatic group) was missing, making the maximum possible detection in that group 179. Levels of detection of referable diabetic retinopathy were 82.8% for mydriatic photography (149 of 180) and 57.5% for nonmydriatic photography (103 of 179). Analyzing the nonmydriatic figures in 10-year age-groups, the younger age-groups had better image quality results and better identification of referable diabetic retinopathy (Fig. 2).

Type of diabetes, sex of study patients, and duration of diabetes
Of 3,611 study patients, 16.5% had type 1 diabetes, 81.6% had type 2 diabetes, and 1.9% had unknown diabetes status. Participants were 55% male and 45% female. Duration of diabetes was 41.7% 0–4 years, 26.2% 5–9 years, 13.7% 10–14 years, 7.6% 15–19 years, 10.8% 20+ years, and 0.2% unknown duration. The 1,549 reference standard subgroup patients had very similar characteristics.

Figure 1—Unassessable image patients for mydriatic and nonmydriatic photographic screening.

Ungradable image rate versus age and duration of diabetes
Because an association was found between ungradable image rate and both age and duration of diabetes, and also between age and duration of diabetes, a logistic regression analysis was undertaken to see if the associations were independent of each other. For nonmydriatic photography, the odds of having one eye ungradable increased by 2.6% (95% CI 1.6–3.7) for each extra year since diagnosis, irrespective of age, and by 5.8% (5.0–6.7) for every extra year of age, irrespective of years since diagnosis. For mydriatic photography, the odds of having one eye ungradable increased by 4.1% (2.7–5.7) for each extra year since diagnosis, irrespective of age, and by 8.4% (6.5–10.4) for each extra year of age, irrespective of years since diagnosis.
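Because logistic-regression effects are multiplicative on the odds scale, these per-year increases compound quickly over a decade. A small illustration of that interpretation (the coefficients are the paper's; the code itself is only a sketch with a name of our choosing):

```python
from math import exp, log

def odds_multiplier(pct_per_year: float, years: float) -> float:
    """Compound a per-year percentage increase in odds over a number of years."""
    return exp(years * log(1 + pct_per_year / 100))

# Mydriatic photography, per decade of age: 8.4%/year compounds to about 2.24,
# i.e. the odds of an ungradable eye more than double every 10 years of age.
print(round(odds_multiplier(8.4, 10), 2))
# Per decade of diabetes duration (4.1%/year): about 1.49.
print(round(odds_multiplier(4.1, 10), 2))
```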
The analysis showed that both age and years since diagnosis contributed to the odds of having an ungradable image in one eye.

Influence of cataract and other pathology
Of the 169 ungradable eyes from 133 patients, 8 eyes had no anterior segment image. Of the 161 eyes with an anterior segment image, 92 eyes (57%) had obvious central cataract, 34 eyes (21%) had early cataract, and 15 eyes (9%) had no cataract. The study of other pathology showed 10 eyes (6%) had a corneal scar, 9 eyes (6%) had asteroid hyalosis, and 1 eye (1%) had a history of hemorrhage, glaucoma, and blindness (not from diabetic retinopathy).

Influence of pupil diameter
There were 97 patients in whom one eye was not assessable. In 12 cases, there was a nondiabetic, noncataract pathological reason detected that would explain why imaging was unsuccessful (e.g., corneal scarring), and in 5 cases no anterior segment image was taken of the ungradable eye. In the remaining 80 cases, no obvious other pathology was detected that could explain poor image quality, suggesting a relationship with pupil size. To test this hypothesis, we examined the pupil diameter in those 54 cases in which an anterior segment view was available of both the ungradable eye and the gradable fellow eye. The following comparisons were made between the two eyes. In eight eyes with no cataract seen in the ungradable eye, the mean pupil diameter in the ungradable eye was 2.7 cm and in the gradable control eye was 3.6 cm (difference: 0.9 cm). In 14 eyes with early cataract seen, the mean pupil diameter in the ungradable eye was 3.4 cm and in the gradable control eye was 3.9 cm (difference: 0.5 cm). In 32 eyes with obvious cataract seen, the mean pupil diameter in the ungradable eye was 4.4 cm and in the gradable control eye was 4.3 cm (difference: −0.1 cm).
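The "one-way ANOVA with a linear contrast" named in the statistical methods can be sketched in plain Python. The per-eye diameters below are invented for illustration (the paper reports only group means of roughly 2.7, 3.4 and 4.4 cm), so only the shape of the calculation, not the numbers, reflects the study:

```python
from math import sqrt
from statistics import mean

def linear_trend_t(groups: list[list[float]]) -> float:
    """t statistic for a linear contrast (-1, 0, +1) across three ordered groups,
    using the pooled within-group variance as the one-way ANOVA error term."""
    coeffs = [-1, 0, 1]
    means = [mean(g) for g in groups]
    n_total = sum(len(g) for g in groups)
    # pooled error variance (ANOVA mean square error)
    sse = sum(sum((x - m) ** 2 for x in g) for g, m in zip(groups, means))
    mse = sse / (n_total - len(groups))
    contrast = sum(c * m for c, m in zip(coeffs, means))
    se = sqrt(mse * sum(c ** 2 / len(g) for c, g in zip(coeffs, groups)))
    return contrast / se

# Hypothetical pupil diameters (cm), ordered: no cataract < early < obvious
no_cataract = [2.4, 2.7, 2.6, 2.9, 2.8]
early = [3.1, 3.5, 3.4, 3.3, 3.6, 3.5]
obvious = [4.2, 4.6, 4.4, 4.3, 4.5, 4.4, 4.7]
t = linear_trend_t([no_cataract, early, obvious])
print(t > 2.0)  # a large positive t indicates a significant increasing trend
```

The t statistic is then referred to a t distribution with N − 3 degrees of freedom; a dedicated statistics package would normally report the P value directly.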
The pupil diameter in the ungradable eye and the difference in pupil diameters between the two eyes both showed significant trends (P < 0.001 and P = 0.008, respectively) across the three groups. However, the pupil diameter in the gradable eye did not show a significant trend (P = 0.072).
The eight people in the no cataract group with poor pupillary dilation (mean 2.7 cm) had a mean age of 72.7 years and a mean duration of 20.4 years with diabetes. The 32 people with obvious central cataract and good pupillary dilation (mean 4.4 cm) had a mean age of 78.5 years and a mean duration of 8.7 years with diabetes. The Mann-Whitney U test showed no significant difference for the ages between these two groups but a significant difference for duration of diabetes (P = 0.003).

Iris color in ungradable eyes
Of the 124 patients in whom anterior views enabled color determination, there were 68 blue (55%), 24 green (19%), 21 light brown (17%), and 11 dark brown (9%) eyes. The iris color is in keeping with Gloucestershire's predominantly white Caucasian population, the main ethnic minority groups being Indian/British Indian (0.7%) and Black/Black British (0.8%).

CONCLUSIONS — Several possible factors might have an influence on image quality in retinal photography. Age is suggested in the following studies. Higgs et al. (13) reported that 13% of those <50 years, 39% of those 50–70 years, and 54% of those >70 years had ungradable nonmydriatic images. Buxton et al. (21) reported that the ungradable image rate varied between 2% in the Exeter physician group and 9% in the Oxford general practitioner group. The difference between these two groups was principally related to age, duration of diabetes, and type of diabetes. Some studies (3,8) have reported nonmydriatic ungradable image rates <12%, but the average age of the study population was <55 years. Duration of diabetes is suggested as a factor by Cahill et al.
(22), who in 2001 reported that pupillary autonomic denervation increases with increasing duration of diabetes mellitus. Ethnicity is suggested by Klein et al. (23).
Flash intensity is suggested by Taylor et al. (24), who reported less patient discomfort with the lower flash power (10 W vs. 300 W) of the digital system. In nonmydriatic photography, there is a faster pupil recovery time with lower flash intensities, which may improve image quality in the fellow eye.

Figure 2—Referable retinopathy by age compared to the reference standard examination.

Age, duration of diabetes, and ethnicity were not reported in some studies (7,11,25), while others (1,6) have reported these variables but have not reported an association. The study by Lin et al. (4) excluded 197 patients (48.5%) for unusable seven-field reference standard photos and a further 12 patients (2.96%)
If ungradable images were test posi- tive (i.e., referable), six patients in total would have been missed in the 30 – 49 age- group. On retrospective examination of the mydriatic images, the pathology was visible in five of six of these (two hav- ing received extensive laser treatment and being graded as stable treated diabetic ret- inopathy). There was only one person whose retinopathy visible within the two 45° fields was mild nonproliferative dia- betic retinopathy (i.e., not referable), whereas small new vessels elsewhere were visible in the peripheral retina only on ref- erence standard examination. This is the only patient in this age-group that should have been a definite false negative for the test. While a 20% failure rate for nonmyd- riatic photography might be acceptable because patients could be reexamined by other means, there is a difference in de- tection of referable retinopathy between the two methods, as shown in Fig. 2. The Health Technology Board for Scotland used data from the current study in their report (15) and concluded that similar sensitivities and specificities could be achieved by dilating those patients with ungradable images. However, this relies on the ability of the screener to accurately determine an ungradable image at the time of screening and, in the Scottish sys- tem, relies on the assumption that the grading of one field will detect referable retinopathy with the same degree of accu- racy as the grading of two fields (giving evidence from Olson et al.’s study [26]). There have been differing views on the number of fields required for screening, Bresnick et al. (27) supporting Olson et al.’s view that one field may be sufficient. However, studies by Moss et al. (28), Shiba et al. (5), and von Wendt et al. (29) have suggested that higher numbers of fields give greater accuracy in detection of retinopathy levels. 
Data from the current study indicates that there would potentially be very many occasions on which nonmydriatic imag- ing in patients aged �50 years would re- sult in ungradable images. In the �80 years age-group, the failure rate is re- duced from 41.6 to 16.9% by dilation with G Tropicamide 1%. It is possible that the failure rate of 16.9% following dila- tion with G Tropicamide 1% could be fur- ther reduced by the addition of G Phenylephrine 2.5% for this specific group. Routinely dilating the �50 years age-group with G Tropicamide 1% at out- set could potentially reduce the failure rate by �80%. If screening programs are going to consider nonmydriatic photog- raphy to detect sight-threatening diabetic retinopathy, the findings of the current study largely support the use of this method for the group �50 years of age who are at lowest risk of ungradable im- ages, and yet, this group contains a num- ber of young regular nonattendees, who some authors suggest are at greatest risk of blindness (e.g., MacCuish et al. [30] and Jones [31]). Acknowledgments — This study was funded by the Project Grant South West R&D Direc- torate. P.H.S is submitting this work for an MD thesis to University College London. The study was designed by P.H.S. with the support of C.F, and the article was written by P.H.S. with the help of S.J.A. P.H.S. performed all the clinical examinations, and P.H.S. and R.M. graded all the images. C.F. undertook the data analysis. All coauthors commented on the drafts and helped to interpret the findings. P.H.S. is the guarantor for this publication. References 1. Pugh JA, Jacobson JM, Van Heuven WA, Watters JA, Tuley MR, Lairson DR, Lori- mor RJ, Kapadia AS, Velez R: Screening for diabetic retinopathy: the wide-angle retinal camera. Diabetes Care 16:889 – 895, 1993 2. Lim JI, LaBree L, Nichols T, Cardenas I: A comparison of digital nonmydriatic fun- dus imaging with standard 35-millimeter slides for diabetic retinopathy. 
Image quality in diabetic retinal screening. DIABETES CARE, VOLUME 28, NUMBER 10, OCTOBER 2005
work_kv3bcf23hrhhxfy7a47a6u37be ----

BRIGITTE ZIMMER
Adiantum krameri (Pteridaceae), a new species from French Guiana

Abstract
Zimmer, B.: Adiantum krameri (Pteridaceae), a new species from French Guiana. – Willdenowia 37: 557-562. – ISSN 0511-9618; © 2007 BGBM Berlin-Dahlem. doi:10.3372/wi.37.37215 (available via http://dx.doi.org/)
Adiantum krameri is described as a species new to science and illustrated. It is endemic to French Guiana (NE South America) and clearly differs from A. cordatum, with which it has so far been confused.
Key words: ferns, Adiantum cordatum, NE South America.

In my treatment of the Adiantum petiolatum group for Flora Mesoamericana (Zimmer 1995), I surmised that the French Guiana plants identified as A. cordatum Maxon did in fact belong to a different, undescribed species. A. cordatum was first mentioned as occurring in the centre of French Guiana by Kramer (1978) in his work on the pteridophytes of Suriname. Later a detailed description and drawings of that plant were provided (Cremers & Kramer 1985, Cremers 1997). At that time only collections from the vicinity of Saül were available. Meanwhile the same species has been recorded from further localities (Cremers & Hoff 1990, Cremers 1990). A careful study of several specimens has now confirmed my earlier hypothesis: the French Guiana plants differ from genuine A. cordatum in several features and are easy to separate from it.
Therefore they are here described as a species new to science.

Adiantum krameri B. Zimmer, sp. nov.
Holotype: French Guiana, Montagne de la Trinité, sommet NE, in high forest near creek, c. 300 m, 4.2.1984, Granville & al. 6510 (B 20-80881!; isotypes: BR, CAY!, G, NY, P, U, Z!) – Fig. 1-2.
Ab affini Adianto cordato quocum hucusque confusum differt foliorum lamina late ovata (10-22 × 10-18 cm), latitudine aequilonga vel sublongiore (nec sesquilongiore), membranacea, costa prope basin tantum obvia, in facie adaxiali striis longitudinalibus haud notata.

Plants terrestrial. Rhizome short-creeping, 3-5 mm in diameter, nodose from old pseudopodia, with several 0.5-1 mm thick rhizoids; rhizome scales reddish brown, concolourous to irregularly semi-clathrate, narrowly triangular to linear-lanceolate, often slightly denticulate. Leaves usually simple, 30-70 cm long, very occasionally pinnate with 1(-2) pairs of pinnae, petiole 20-52 cm long, 1½-2 × as long as the lamina, shiny, dark reddish brown, adaxially canaliculate with hair-like scales in the groove; costa dark brown like the petiole abaxially for 1.2-2.6 cm, becoming indistinct in the distal half of the lamina; lamina horizontal, grey-green, membranaceous, glabrescent above and sparingly scaly below when adult, in simple leaves ovate-orbicular, 10-22 × 10-18 cm, scarcely longer than broad, the base deeply and narrowly cordate, often with overlapping lobes, the apex acuminate; when fronds pinnate, pinnae similar to each other, alternate, borne on 0.7-1.4 cm long stalks, with an obliquely truncate base; young leaves pinkish, sterile margins entire; venation reticulate, evident on both sides, with 5-7-seriate, elongate areoles decreasing in size toward the margin and apex; adaxial surface lacking idioblastic streaks.

Fig. 1. Adiantum krameri holotype.
Sori continuous on either side of the lamina, extending from outside the basal sinus almost to the tip; false indusia membranous, glabrous; sporangial annulus of 14-18 thickened cells. Spores trilete, c. 40-45 µm in diameter (Fig. 2).

Fig. 2. Adiantum krameri, scanning electron micrograph of a spore from the holotype. – Scale bar: 10 µm.

Eponymy. – The species is named in honour of Prof. Dr Karl Ulrich Kramer (1928-94), a dedicated, excellent pteridologist who loved to share his broad knowledge with students, fellow scientists and everyone interested in ferns.

Distribution and habitat. – Endemic to central French Guiana and known till now from the following three main areas (Fig. 3): Saül region (Monts La Fumée, Les Eaux-Claires, Carbet Maïs, Pic Matécho); Montagnes de la Trinité; Les Nouragues. Infrequent, terrestrial, growing in moist lowland tropical rainforests, often in deep shade, on sandy to clayey soil and granite outcrops, between 140 and 400 m above sea level.

Additional specimens known (data partly supplied by M. Boudrie, G. Cremers & R. Moran). – French Guiana: Saül, = Circuit ORSTOM “Montagne la Fumée” P.K.3, forêt dense, 18.10.1972, Granville B-4602 (CAY [2 sheets]!, P, U, Z!); Saül, = Circuit ORSTOM des Monts “La Fumée”, P.K.3, forêt sur pente, 14.1.1974, Granville 2017 (CAY!, P); Saül, Layon Est sur le tracé ORSTOM sur La Montagne La Fumée, à 2,4 km, 28.10.1976, Granville B-5388 (CAY!, P, Z!); Saül, circuit La Fumée, 14.9.1978, Prévost 301 (CAY!); Saül, forêt sur colline à 3,5 km environ au Nord de Carbet Maïs (20 km Est de Saül), 9.7.1979, Granville 3072 (CAY [3 sheets]!, NY, P, Z [2 sheets]!); Saül, sous bois de forêt primaire, 9.3.1985, Aumeeruddy 56 (CAY!, Z!); Saül, 3°37'N, 53°12'W, near Eaux Claires along the Sentier Botanique, non-flooded moist forest, c. 200-400 m, 2.11.1990, Mori & al.
21529 (CAY!, NY); Saül, La Fumée W, lowland tropical rainforest, 3°37'N, 53°13'W, 11.1990, Mori 21669 (NY); Saül, Eau Claire, near Saül, vicinity of granitic outcrops, 200 m, 13-15.10.1993, van der Werff & al. 12986 (CAY, MO [2 sheets] photo!, NY); Saül, Pic Matécho, bord de crique, 14.12.2000, Hequet 972 (CAY); Montagnes de la Trinité, sommet nord, forêt de terre ferme, sous bois sur pente forte, 10.1.1984, Granville & al. 5848 (CAY!); Montagnes de la Trinité, zone sud, bassin de la Mana, forêt sur pente, pied de falaises au sud du massif, 300 m, 11.1.1998, Granville & Crozier 13494 (B!, CAY, K, NY, P, U, US); Station des Nouragues – Bassin de l’Arataye, 4°3'N, 52°42'W, vers le petit Plateau, forêt primaire, 30 m, 15.8.1990, Sabatier 3497 (CAY!); Station des Nouragues – Bassin de l’Approuague – Arataye, 4°3'N, 52°42'W, forêt de pente de basse altitude, sous-bois humide dans un petit thalweg, sur sol sablo-argileux et affleurements granitiques, 130 m, 23.2.1991, Granville 11165 (B!, BR, CAY [2 sheets]!, G, MO, NY [2 sheets], P, U, US, Z); Station des Nouragues, 12.3.1996, Solano K297 (CAY).

Delimitation. – Adiantum krameri was up till now mistaken for A. cordatum Maxon (1931), described from Panama (Lectotype [designated by Zimmer 1995]: Pittier 4297, US 670422!; isolectotypes: US 670423! and US 67421!, see Fig. 4). A. cordatum does not occur in the Guianas, but only in Panama. It can be easily distinguished from A. krameri by the costa remaining distinct almost to the tip of the lamina, which is narrowly ovate, about twice as long as broad, lighter green and indistinctly veined abaxially, with parallel running idioblastic streaks adaxially. All material from French Guiana identified as A. cordatum or A. sp. aff. cordatum pertains to the new species, as do the illustrations in Cremers & Kramer (1985: 3) and in Cremers (1997: 146). As A.
krameri has a very distinctive look and is not easily recognized as an Adiantum species, additional, unidentified specimens collected at other localities, perhaps even from the neighbouring Suriname, might well exist.

Fig. 3. The known distribution of Adiantum krameri from French Guiana.
Fig. 4. Adiantum cordatum isotype (US, Pittier 4297).

Acknowledgements
Particular thanks are due to Werner Greuter for his assistance with the Latin diagnosis and fruitful discussions. I am grateful to Michel Boudrie and George Cremers for their careful and competent reviewing of the manuscript, and the curators of CAY, US and Z for providing specimens for study. Furthermore I thank Monika Lüchow (scanning electron microscopy), Nora Schirmer (digital photography), Ingo Haas and Angela Lautsch for technical assistance. Digital photography was funded by the friends of the Botanical Garden and Botanical Museum.

References
Cremers, G. 1991: Studies on the flora of the Guianas 48. Modes de repartition des ptéridophytes de Guyane française. – Compt. Rend. Séances Soc. Biogéogr. 66: 27-42.
— 1997: Adiantum Linnaeus. – Pp. 138-147 in: Mori S. A., Cremers G., Gracie C., de Granville J.-J., Hoff M. & Mitchell J. D. (ed.), Guide to the vascular plants of central French Guiana. 1. Pteridophytes, gymnosperms and monocotyledons. – Mem. New York Bot. Gard. 76.
— & Hoff, M. 1990: Inventaire taxonomique des plantes de la Guyane française I. – Les ptéridophytes. – Invent. Faune Flore. Mus. Nat. Hist. Nat., Secr. Faune Flore, Paris. 54.
— & Kramer, K. U. 1985: Studies on the flora of the Guianas 10. Ptéridophytes nouveaux pour la Guyane française I. – Proc. Kon. Ned. Akad. Wetensch. C 88: 1-14.
Kramer, K. U. 1954: A contribution to the fern flora of French Guiana. – Acta Bot. Neerl. 3: 481-494.
— 1978: The pteridophytes of Suriname. – Uitgaven Natuurwetensch. Studiekring Suriname Ned. Antillen 93: 1-198.
Zimmer, B.
1995: 2. Grupo de Adiantum petiolatum. – Pp. 110-113 in: Davidse, G., Sousa, S. M. & Knapp, S. (ed.), Flora mesoamericana 1. – México.

Address of the author: Brigitte Zimmer, Botanischer Garten und Botanisches Museum Berlin-Dahlem, Freie Universität Berlin, Königin-Luise-Str. 6-8, D-14195 Berlin, Germany; e-mail: b.zimmer@bgbm.org.

work_ky4766vldvhbrc6tzfnjabwbe4 ----

SENSORY SYSTEMS DISORDERS - Cost Studies

PSS5 BEVACIZUMAB VERSUS RANIBIZUMAB FOR AGE-RELATED MACULAR DEGENERATION (AMD): A BUDGET IMPACT ANALYSIS
Zimmermann I, Schneiders RE, Mosca M, Alexandre RF, do Nascimento Jr JM, Gadelha CA
Ministry of Health, Brasília, DF, Brazil
OBJECTIVES: The use of intravitreal injection of vascular endothelial growth factor inhibitors is an effective treatment for AMD, and trials have shown similar clinical effects of bevacizumab and ranibizumab. The aim of this study was to estimate the budget impact for the Brazilian Ministry of Health (MoH) of recommending ranibizumab instead of bevacizumab for AMD. METHODS: We performed a deterministic budget impact analysis, from the MoH perspective, comparing the use of ranibizumab and bevacizumab for wet AMD. The target population was estimated by extrapolating epidemiologic data to the Brazilian population. Data on dosage, administration and fractioning were extracted from the literature. Prices were obtained from the Brazilian regulatory agency, applying potential discounting benefits. This analysis did not consider the cost of the fractioning process because it will be assumed by the states and not by the MoH. RESULTS: The considered price of the ranibizumab vial was US$ 962.86 (fractioning is not an option). In contrast, a 4 mL vial of bevacizumab would cost US$ 410.86 (US$ 5.14 for each 0.05 mL dose, resulting in 80 doses/vial). Therefore, the expenses of one year on ranibizumab would be about US$ 11,554.37, and about US$ 61.63 for bevacizumab (12 injections for both).
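The vial-splitting arithmetic above can be sketched as follows. Only the vial prices, the 80 doses/vial, and the 12-injections-per-year schedule come from the abstract; the function names are illustrative, and the authors' exact rounding is not stated, so small cent-level differences from the published figures are expected.

```python
# Sketch of the budget-impact arithmetic reported in PSS5 (illustrative only).
def annual_drug_cost(vial_price_usd, doses_per_vial, injections_per_year=12):
    """Per-patient annual cost when one vial yields `doses_per_vial` doses."""
    return vial_price_usd / doses_per_vial * injections_per_year

def budget_impact(population, cost_per_patient_new, cost_per_patient_old):
    """Incremental annual spend if `population` uses the new therapy instead."""
    return population * (cost_per_patient_new - cost_per_patient_old)

ranibizumab = annual_drug_cost(962.86, doses_per_vial=1)    # no fractioning possible
bevacizumab = annual_drug_cost(410.86, doses_per_vial=80)   # 80 x 0.05 mL doses/vial
impact = budget_impact(467_600, ranibizumab, bevacizumab)   # ~US$ 5.37 billion
```

The sensitivity analyses quoted in the abstract correspond to re-running the same calculation with `doses_per_vial` set to 1 and 20 for bevacizumab.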
Thus, the use of ranibizumab instead of bevacizumab for treating 467,600 people would be associated with a US$ 5,374,007,960.48 budget impact. The sensitivity analyses also demonstrated a budget impact of US$ 3,097,416,007.65 and US$ 5,287,555,101.51 (1 dose/vial and 20 doses/vial, respectively). CONCLUSIONS: Although not a label indication, bevacizumab has been widely adopted in clinical practice. As presented above, even with inefficient fractioning methods, the use of bevacizumab would bring substantial savings to MoH resources. Even though preserving the sterility of the solution is a real-world concern, stability studies have shown that the solution characteristics are maintained through adequate handling and storage.

PSS6 COST-OF-ILLNESS OF CHRONIC LYMPHOEDEMA PATIENTS IN HAMBURG AND SUBURBAN REGION
Purwins S1, Dietz D1, Blome C2, Heyer K1, Herberger K1, Augustin M1
1University Clinics of Hamburg, Hamburg, Germany; 2University Medical Center Hamburg, Hamburg, Germany
OBJECTIVES: Chronic lymphedema is of particular interest from the socioeconomic point of view, since it is accompanied by high costs, disease burden and a permanent need for medical treatment. The economic and social impact can increase if complications such as erysipelas and ulcers develop. Therefore, the cost-of-illness of patients with lymphoedema or lipoedema should be known. METHODS: Patients with chronic primary or secondary lymph- or lipoedema of upper or lower limbs, with at most 6 months of disease duration, were enrolled in an observational, cross-sectional study in Hamburg and surroundings (population of approximately 4 million inhabitants, 90% of whom are insured in the statutory health insurance (SHI) and 10% in private insurance). Standardized clinical examinations and patient interviews were carried out. The oedemas were documented via digital photography, as were further available patient data. Resource utilizations were collected.
From the societal perspective, direct medical, non-medical and indirect costs were computed. RESULTS: A total of 348 patients were enrolled and interviewed. 90.8% of them were female, with a mean age of 57.3 ± 14.5 years. Mean annual costs per lymphoedema patient were €8121. These costs consisted of 58% direct (€4708) and 42% indirect (€3413) costs. The SHI accounted for about €5552 of expenses, and patients for €494.20 of out-of-pocket costs. Subgroup analyses of (a) arm vs leg oedema and (b) primary vs secondary vs lipo-lymphoedema did not show significant differences in costs. The main cost drivers in this study were medical treatment and disability costs. CONCLUSIONS: The treatment of patients with chronic lymphoedema is associated with high direct and indirect costs.

PSS7 C-REALITY (CANADIAN BURDEN OF DIABETIC MACULAR EDEMA OBSERVATIONAL STUDY): 6-MONTH FINDINGS
Barbeau M1, Gonder J2, Walker V3, Zaour N1, Hartje J4, Li R1
1Novartis Pharmaceuticals Canada Inc., Dorval, QC, Canada; 2St. Joseph’s Health Care, London, ON, Canada; 3OptumInsight, Burlington, ON, Canada; 4OptumInsight, Eden Prairie, MN, USA
OBJECTIVES: To characterize the economic and societal burden of Diabetic Macular Edema (DME) in Canada. METHODS: Patients with clinically significant macular edema (CSME) were enrolled by ophthalmologists and retinal specialists across Canada. Patients were followed over a 6-month period, combining prospective data collected during monthly telephone interviews and at sites at months 0, 3 and 6. Visual acuity (VA) was measured and DME-related health care resource information was collected. Patient health-related quality of life (HRQOL) was measured using the National Eye Institute Visual Functioning Questionnaire (VFQ-25) and the EuroQol Five Dimensions (EQ-5D).
RESULTS: A total of 145 patients [mean age 63.7 years (range: 30-86 yrs); 52% male; 81% Type 2 diabetes; mean duration of diabetes 18 years (range: 1-62 yrs); 72% bilateral CSME] were enrolled from 16 sites across 6 provinces in Canada. At baseline, the mean VA was 20/60 (range: 20/20-20/800) across all eyes diagnosed with CSME (249 eyes). Sixty-three percent of patients had VA severity in the eye diagnosed with DME (worse-seeing eye if both eyes diagnosed) of normal/mild vision loss (VA 20/10 to ≥ 20/80), 10% moderate vision loss (VA < 20/80 to > 20/200), and 26% severe vision loss/nearly blind (VA ≤ 20/200). At month 6, the mean VFQ-25 composite score was 79.6, the mean EQ-5D utility score was 0.78, and the EQ visual analogue scale (VAS) score was 71.0. The average 6-month DME-related cost per patient was $2,092 across all patients (95% confidence interval: $1,694 to $2,490). The cost was $1,776 for patients with normal/mild vision loss, $1,845 for patients with moderate vision loss, and $3,007 for patients with severe vision loss/nearly blind. CONCLUSIONS: DME is associated with limitations in functional ability and quality of life. In addition, the DME-related cost is substantial to the Canadian health care system.

PSS8 NON-INTERVENTIONAL STUDY ON THE BURDEN OF ILLNESS IN DIABETIC MACULAR EDEMA (DME) IN BELGIUM
Nivelle E1, Caekelbergh K1, Moeremans K1, Gerlier L2, Drieskens S3, Van dijck P4
1IMS Health HEOR, Vilvoorde, Belgium; 2IMS Health, Vilvoorde, Belgium; 3Panacea Officinalis, Antwerp, Belgium; 4N.V. Novartis Pharma S.A., Vilvoorde, Belgium
OBJECTIVES: To study real-life patient characteristics, treatment patterns and costs associated with DME and visual acuity (VA) level. METHODS: The study aimed to recruit 100 patients distributed evenly over 4 categories defined by last measured VA. One-year retrospective data were collected from medical records. Annual direct costs were calculated from resource use in medical records and official unit costs (€ 2011). Self-reported economic burden was collected via the Short Form Health and Labour Questionnaire (SF-HLQ). Indirect costs (€ 2011) included personal expenses and caregiver burden (SF-HLQ).
An- nual direct costs were calculated from resource use in medical records and official unit costs (€ 2011). Self-reported economic burden was collected via Short Form Health and Labour Questionnaire (SF-HLQ). Indirect costs (€ 2011) included per- sonal expenses and caregiver burden (SF-HLQ). RESULTS: Thirteen Belgian oph- thalmologists recruited 32, 12, 14 and 6 DME patients for VA categories �20/50, 20/63-20/160, 20/200-20/400 and �20/400 respectively. VA was stable during the study in 86% of patients. Recruitment for lower VA categories was difficult due to long-term vision conservation with current treatments, lack of differentiation be- tween lowest categories in medical records and discontinuation of ophthalmolo- gist care in lowest categories. 75% of patients had bilateral DME. 68% were treated for DME during the study, of which 60% in both eyes. 50% received photocoagula- tion, 33% intravitreal drugs. Less than 4% of patients had paid work; 17% received disability replacement income. Total direct medical costs in patients receiving active treatment ranged from €960 (lowest VA) to €3,058. 59% of direct costs were due to monitoring and vision support, 39% to DME treatment. Indirect cost trends were less intuitive due to small samples and large variations. Annual costs grouped by 2 highest and 2 lowest VA levels, were respectively €114 and €312 for visual aids, €407 and €3,854 for home care. CONCLUSIONS: The majority of DME patients had bilateral disease. Except for the lowest VA, direct medical costs increased with VA decrease. Indirect costs were substantially higher at lower VA levels. Low sample sizes in some categories did not allow statistical analysis of cost differences. PSS9 COST OF BLINDNESS AND VISUAL IMPAIRMENT IN SLOVAKIA Psenkova M1, Mackovicova S2, Ondrusova M3, Szilagyiova P4 1Pharm-In, Bratislava, Slovak Republic, 2Pharm-In, spol. 
s r.o., Bratislava, Slovak Republic; 3Pharm-In, Ltd., Bratislava, Slovak Republic; 4Pfizer Luxembourg SARL, Bratislava, Slovak Republic
OBJECTIVES: To measure the burden of the disease and provide a basis for health care policy decisions. METHODS: The analysis was performed based on several data sources. Data on the prevalence of bilateral blindness and visual impairment were obtained from the official Annual Report on the Ophthalmic Clinics Activities. The cost analysis was performed from the Health and Social Insurance perspective and reflects the real costs of health care payers in 2010. Information on health care and social expenditure was obtained from the State Health and Social Insurance Funds. As detailed data on expenditures were not always available in the necessary structure, the missing data were collected in a retrospective patient survey. Both direct and indirect costs were evaluated and divided by cost type and level of visual impairment. For the estimation of indirect costs the human capital method was used. The patient survey was conducted on a randomly collected, geographically homogeneous sample of 89 respondents from all over Slovakia. RESULTS: A total of 17,201 persons with bilateral blindness or visual impairment were identified in 2010. Total yearly expenditures were €63,677,300. Direct costs accounted for only 7% (€4,468,112) of total costs, and most of them were caused by hospitalisations (€4,001,539) and medical devices (€307,739). The indirect costs amounted to €59,209,188. The highest share was loss of productivity (69%), followed by disability pensions (17%) and compensation of medical devices (14%). CONCLUSIONS: Evidence of cost-effectiveness must be demonstrated in order to obtain reimbursement in Slovakia. According to the Slovak guidelines, indirect costs are accepted only in exceptional cases.
Indirect costs of blindness and visual impairment account for more than two thirds of total costs and therefore should be considered in health care policy evaluations.

PSS10 ECONOMICAL BURDEN OF SEVERE VISUAL IMPAIRMENT AND BLINDNESS – A SYSTEMATIC REVIEW
Köberlein J1, Beifus K1, Finger R2
1University of Wuppertal, Wuppertal, Germany; 2University of Bonn, Bonn, Germany
OBJECTIVES: Visual impairment and blindness pose a significant burden in terms of costs on the affected individual as well as society. In addition to a significant loss of quality of life associated with these impairments, a loss of independence leading to increased dependence on caretakers and an inability to engage in income-generating activities add to the overall societal cost. As there are currently next to no data capturing this impact available for Germany, we conducted a systematic review of the literature to estimate the costs of visual impairment and blindness for Germany and close this gap. METHODS: A systematic literature search of the main medical and economic information databases was conducted from January-April

A569 VALUE IN HEALTH 15 (2012) A277–A575
work_l2mat3t7ybhxvm3tgl5uu77eqy ----

Original Article
Rev Assoc Med Bras 2009; 55(2): 145-8

*Correspondence: UNESP Departamento de Dermatologia e Radioterapia, s/n Campus Universitário de Rubião Jr. – Botucatu – SP CEP 18.618-000 Tel / Fax: (14) 3882-4922 heliomiot@fmb.unesp.br

ABSTRACT
OBJECTIVE. To evaluate the performance of digital image analysis in estimating the area affected by chronic lower-limb ulcers. METHODS. Prospective study in which ulcers were measured using the classic planimetric method: their outlines were traced onto transparent plastic film and the area was later measured on millimetre-ruled paper. These values were used as the standard for comparison with the area estimates obtained from standardized digital photographs of the ulcers and of their tracings on plastic film. To create a reference for converting pixels into millimetres, an adhesive sticker of known size was placed adjacent to the ulcer. RESULTS. Forty-two lesions were evaluated in 20 patients with chronic lower-limb ulcers. Ulcer areas ranged from 0.24 to 101.65 cm2. A strong correlation was observed between the planimetric measurements and the photographs of the ulcers (R2=0.86, p<0.01), but the correlation of the planimetric measurements with the digital photographs of the ulcer tracings was even higher (R2=0.99, p<0.01). CONCLUSION.
The standardized digital photograph proved to be a fast, precise and non-invasive method capable of estimating the area affected by ulcers. Measuring the photographed tracings of the ulcer outlines should be preferred over analysing the direct photograph of the ulcer.
KEYWORDS: Photography. Leg ulcer. Image processing, computer-assisted. Leg dermatoses.

CHRONIC LOWER-LIMB ULCERS: EVALUATION BY DIGITAL PHOTOGRAPHY
HÉLIO AMANTE MIOT*1, THAÍS JUNG MENDAÇOLLI2, SARA VENOSO COSTA2, GABRIELA RONCADA HADDAD3, LUCIANA PATRÍCIA FERNANDES ABBADE4
Work carried out at the Department of Dermatology and Radiotherapy, FMB-Unesp, Botucatu, SP

INTRODUCTION
Chronic lower-limb ulcers (CLLU) affect up to 5% of the adult population of Western countries, causing significant socioeconomic impact and constituting a public health problem. Their aetiology is associated with several factors, such as chronic venous disease, peripheral arterial disease, neuropathies, arterial hypertension, physical trauma, sickle cell anaemia, skin infections, inflammatory diseases, neoplasms and nutritional disorders1,2. Effective treatment involves correcting the underlying condition and using local measures to promote healing1,3. Prolonged treatment, recurrences and the need for strong patient adherence all contribute to the considerable morbidity associated with CLLU2,3. In recent years, many multidisciplinary groups have devoted themselves to research on its pathophysiological and therapeutic aspects.
Measurements of ulcer perimeter and area are important for comparing treatment regimens and for monitoring individual therapeutic response. There is, however, controversy about the best method for estimating the area of cutaneous ulcers2-5.
Because treatment is slow, qualitative assessment is insufficient and imprecise, and manual planimetry with plastic film has become the most widespread quantitative method for this measurement4-6.
Digital photography encodes the image captured by the camera lens into elementary units called pixels, which have fixed values of colour and position in the digital image, allowing its quantification7. Moreover, it permits durable documentation and electronic clinical records, supporting audits, clinical reviews, teleassistance and telemonitoring4,8.
Validating a non-invasive method based on standardized digital photography to estimate the area affected by CLLU, or the area of the tracings of their perimeters, may offer advantages over classic planimetry.

1. Assistant professor, Department of Dermatology, FMB-Unesp, São Paulo, SP
2. Undergraduate student, FMB-Unesp, São Paulo, SP
3. Dermatology resident, FMB-Unesp, São Paulo, SP
4. Assistant professor, Department of Dermatology, FMB-Unesp, São Paulo, SP

METHODS
Prospective study involving photography of the CLLU of 20 consecutive, informed and volunteer patients from a specialized CLLU outpatient clinic at a university centre, with no restriction by sex, phototype, aetiology or age.
Initially, each ulcer was measured using the classic planimetric method (PL): its outline was traced onto transparent plastic film and the area was later measured on millimetre-ruled paper5. These values were used as the standard measurements for comparison with the other methods.
Next, digital photographs were captured directly of the ulcers (FU), as well as photographs of the tracings of the ulcer outlines on the plastic films (FD) that had previously been used for planimetry.
To create a photographic measurement reference, the FU were obtained with the patients lying down, with the lens angled orthogonally to the plane of the ulcer, and an adhesive sticker of known size (13 mm × 8 mm) was placed adjacent to the edge of the ulcer (Figure 1), and likewise next to the tracing of its outline for the FD. From this known external reference, a length scale (pixels/mm) was established individually for each photograph; the ulcer perimeter was traced manually on the computer in the FU and FD, and its area was calculated in pixels and then converted into mm2.
The compact digital camera used was a Nikon 4300, in automatic mode, with the macro function activated and a resolution of 800 × 600 pixels. The images were analysed with the ImageJ 1.37v software, which estimated the number of pixels contained within the manually traced perimeter9.
Statistical analysis was carried out with the Bioestat 3.0 software10. The values obtained were compared by simple linear regression using the adjusted coefficient of determination (R2), without excluding outliers, and scatter plots were produced. Agreement between the estimates was assessed by the Bland-Altman test11. Two-tailed p values below 0.01 were considered significant. The initial sample of 20 patients was tested for a significance level of 0.01 and a power of 99%.
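The pixel-to-millimetre calibration described above can be sketched as follows. Only the 13 mm sticker width comes from the text; the function names and the example pixel counts are illustrative.

```python
# Minimal sketch of the calibration step: a sticker of known width (13 mm)
# photographed next to the ulcer gives the pixels-per-millimetre scale, and
# the pixel count inside the traced perimeter is then converted to mm^2.
def pixels_per_mm(sticker_width_px, sticker_width_mm=13.0):
    """Scale factor for one photograph, derived from the reference sticker."""
    return sticker_width_px / sticker_width_mm

def area_mm2(area_px, scale_px_per_mm):
    """Each pixel covers (1/scale)^2 mm^2, so divide the pixel count by scale^2."""
    return area_px / scale_px_per_mm ** 2

# Example: the sticker spans 65 px in the image -> 5 px/mm; an ulcer region of
# 12,500 px then corresponds to 500 mm^2 (5 cm^2).
scale = pixels_per_mm(65)
ulcer_area = area_mm2(12_500, scale)
```

Because the scale is recomputed for every photograph, variation in camera distance between shots cancels out, which is why the reference of known size must appear in every frame.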
The mean deviation of the photographic measurement of the ulcers from the planimetric measurement (32.4%) likewise showed poorer performance than that of the photographs of the drawings (9.2%). In both experiments, the variance of the photographic measurements was greater than that of the PL method. The initial sample size proved adequate for the predetermined significance and power. Figure 1 - Manual segmentation from the standardized digital photograph Figure 2 - Correlation between the direct photographic measurement of the ulcers and planimetry Figure 3 - Correlation between the photographic measurement of the ulcer drawings and planimetry DISCUSSION Image analysis of standardized digital photographs showed high correlation and agreement with planimetry, which supports its use for estimating UCMI areas. The traditional planimetric method remains the gold standard for assessing UCMI; digital photography, however, has proven an adequate estimator of area. The results of this study, in agreement with the literature, support standardized digital photography as a non-invasive, fast, precise and low-cost documentation method to underpin the specific care of this disease6,12,13. The poorer performance of FU compared with FD, although both derive from the same technique, may be explained by the curvature of the patients' lower limbs: the photographed image is flattened, reducing the area estimated by FU, which is corrected when using the FD of the transparent film that wraps around the whole limb4,13. Another factor to consider is the error superimposed on the direct measurement of the ulcer area by the successive steps of drawing its perimeter and transferring it to the millimeter-grid sheet, which accumulates random error.
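The Bland-Altman agreement test referred to in the results compares paired measurements through the mean difference (bias) and its 95% limits of agreement, conventionally bias ± 1.96 SD of the differences. A minimal sketch, with hypothetical paired area values, might look like this:

```python
import statistics

def bland_altman(a, b):
    """Return the bias (mean difference) and the 95% limits of
    agreement between two series of paired measurements."""
    diffs = [x - y for x, y in zip(a, b)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical paired areas (mm2): planimetry (PL) vs. photos of drawings (FD)
pl = [10.2, 25.1, 40.8, 60.3]
fd = [10.0, 24.7, 41.5, 59.8]
bias, (low, high) = bland_altman(pl, fd)
```

A bias near zero, with most differences falling inside the limits of agreement, indicates that the two methods agree; the actual significance criterion (p>0.1) reported above comes from the Bioestat implementation of the test.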
Although the variances of the digital photographic measurements were greater than those of the planimetric measurements, interobserver variability in the manual demarcation of ulcers is estimated to be very large, which favors digital analysis by the same individual as the more precise method6,14. Another important element is the systematic increase in the area-estimation error as ulcer size increases, which also occurs in other measurement systems4. Estimating UCMI area by FU is also limited when documenting extensive ulcers or ulcers involving the whole circumference of the leg, which require composing several photographs and correcting for area measurement on a non-planar surface4. The FD, because the ulcer perimeter can be transferred regardless of the curvature of the plane, poses no obstacle to this measurement. Photographic estimation of areas with subsequent planimetric measurement was used even before digital photographic technology existed; the availability of microcomputers and the popularization of digital cameras, however, made it possible to employ this technology as a precision instrument7,15-17. Digital photography is low-cost and available even in the most underserved regions, but its main advantage lies in the possibility of automation, documentation and speed of measurement, especially with a large number of patients; although the planimetric method is still cheaper, it is more laborious, more subject to examiner interference, and operationally more difficult to perform6. The development of techniques for estimating the severity and treatment course of UCMI may broaden the reach of specialized medical care, whether in person or through teleconsultations, and may also contribute to medical research on the subject18,19.
Some authors consider photography part of the medical record; in the case of UCMI, beyond research purposes, the images can support the notes in the chart or electronic record and document clinical evolution at different points in time20,21. Although the digital photographic technique was validated in this study for measuring UCMI, the fundamentals of the method can be transposed to measuring areas of other structures on the skin surface, such as pressure ulcers, or to demarcating non-planar surfaces, such as the margins of cutaneous tumors on the face4,14,16,17. Even though it does not quantitatively estimate depth or clinical elements such as infection, secretion, necrosis, wound-bed quality and trophic characteristics, the FU permits a qualitative assessment of these elements, with the advantage of lasting visual documentation that can be compared later or used for telemonitoring through electronic clinical records, which may, if necessary, be sent by telemedicine for specialist opinion, extending the coverage of specialized care to the population12,18,19,21. The authors recommend the use of standardized digital photography in the documentation and evaluation of UCMI, whether in the therapeutic follow-up of patients or in specific clinical studies. CONCLUSION Standardized digital photography proved to be a valid, fast and non-invasive method capable of precisely estimating the affected area of UCMI. Evaluation of the photographic measurements of the ulcer contour drawings on plastic film should be preferred over direct photographic analysis of the ulcers. Conflict of interest: none SUMMARY CHRONIC ULCERS OF THE LOWER LIMBS: AREA EVALUATION BY DIGITAL PHOTOGRAPHY OBJECTIVES. To evaluate results of digital imaging analysis in estimating the areas of chronic ulcers of the lower limbs. METHODS.
In a prospective study, the ulcer areas were estimated by the classic planimetric method, in which ulcer perimeters are drawn on a transparent plastic film; areas were then measured on millimeter-grid paper. These values were taken as the gold standard against which standardized digital photographs of the ulcers and of the drawings were evaluated for area estimation. An adhesive of known length was placed adjacent to the ulcers to establish the ratio of pixels to real millimeters. RESULTS. Forty-two lesions from 20 patients with chronic lower limb ulcers were evaluated. Areas ranged from 0.24 to 101.65 cm2. Planimetric measures correlated strongly with photos of the ulcers (R2=0.86, p<0.01); their correlation with digital photos of the ulcer drawings was even higher (R2=0.99, p<0.01). CONCLUSIONS. Standardized digital photography proved to be a quick, precise and non-invasive method for estimating ulcer areas. Evaluation of measurements from drawings of ulcer perimeters should be preferred to direct photographic analysis of the ulcers. [Rev Assoc Med Bras 2009; 55(2): 145-8] KEY WORDS: Photography. Leg ulcer. Leg dermatoses. Image processing, computer-assisted. REFERENCES 1. Abbade L, Lastoria S. Venous ulcer: epidemiology, physiopathology, diagnosis and treatment. Int J Dermatol. 2005;44:449-56. 2. Reichenberg J, Davis M. Venous ulcers. Semin Cutan Med Surg. 2005;24:216-26. 3. Abbade LPF, Lastória S. Abordagem de pacientes com úlcera da perna de etiologia venosa. An Bras Dermatol. 2006;81:509-19. 4. Pressley ZM et al. Digital image analysis: a reliable tool in the quantitative evaluation of cutaneous lesions and beyond. Arch Dermatol. 2007;143:1331-3. 5. Oien RF, Håkansson A, Hansen BU, Bjellerup M. Measuring the size of ulcers by planimetry: a useful method in the clinical setting. J Wound Care. 2002;11:165-8. 6. Stremitzer S, Wild T, Hoelzenbein T.
How precise is the evaluation of chronic wounds by health care professionals? Int Wound J. 2007;4:156-61. 7. Miot HA, Paixão MP, Paschoal FM. Fundamentos da fotografia digital em dermatologia. An Bras Dermatol. 2006;81:174-80. 8. Miot HA, Paixão MP, Wen CL. Teledermatologia: passado, presente e futuro. An Bras Dermatol. 2005;80:523-32. 9. ImageJ - Image Processing and Analysis in Java v1.34. National Institutes of Health (NIH). 2007 [cited 2007 Jun 25]. Available from: http://rsb.info.nih.gov/ij/. 10. Ayres M, Ayres Jr M, Ayres DL, dos Santos AS. Bioestat 3.0: aplicações estatísticas nas áreas das ciências biológicas e médicas. Belém: Sociedade Civil Mamirauá MCT/CNPq Conservation International; 2003. 11. Bland JM, Altman DG. Statistical methods for assessing agreement between two methods of clinical measurement. Lancet. 1986;1:307-10. 12. Louis DT. Photographing pressure ulcers to enhance documentation. Decubitus. 1992;5:38-40,42,44-5. 13. Griffin JW, Tolley EA, Tooms RE, Reyes RA, Clifft JK. A comparison of photographic and transparency-based methods for measuring wound surface area. Phys Ther. 1993;73:117-22. 14. Samad A, Hayes S, French L, Dodds S. Digital imaging versus conventional contact tracing for the objective measurement of venous leg ulcers. J Wound Care. 2002;11:137-40. 15. Cutler NR, George R, Seifert RD, Brunelle R, Sramek JJ, McNeill K, et al. Comparison of quantitative methodologies to define chronic pressure ulcer measurements. Decubitus. 1993;6:22-30. 16. Solomon C, Munro AR, Van Rij AM, Christie R. The use of video image analysis for the measurement of venous ulcers. Br J Dermatol. 1995;133:565-70. 17. Rajbhandari SM, Harris ND, Sutton M, Lockett C, Eaton S, Gadour M, et al. Digital imaging: an accurate and easy method of measuring foot ulcers. Diabet Med. 1999;16:339-42. 18. Hofmann-Wellenhof R, Salmhofer W, Binder B, Okcu A, Kerl H, Soyer HP. Feasibility and acceptance of telemedicine for wound care in patients with chronic leg ulcers.
J Telemed Telecare. 2006;12(Suppl 1):15-7. 19. Salmhofer W, Hofmann-Wellenhof R, Gabler G, Rieger-Engelbogen K, Gunegger D, Binder B, et al. Wound teleconsultation in patients with chronic leg ulcers. Dermatology. 2005;210:211-7. 20. Scheinfeld N. Photographic images, digital imaging, dermatology, and the law. Arch Dermatol. 2004;140:473-6. 21. Binder B, Hofmann-Wellenhof R, Salmhofer W, Ocku A, Kerl H, Soyer P. Teledermatological monitoring of leg ulcers in cooperation with home care nurses. Arch Dermatol. 2007;143:1511-4. Article received: 28/11/07. Accepted for publication: 01/07/08. ---- Lehmuskallio et al. 2017. Accepted version, pre-print. Accepted May 12 2017 for publication in Visual Communication, officially published in Vol.18(1) (Feb 2019) or Vol.18(2) (May 2019). Asko Lehmuskallio, Jukka Häkkinen, Janne Seppänen. Photorealistic computer-generated images are difficult to distinguish from digital photographs: A case study with professional photographers and photo-editors. Abstract: There are strict guidelines on photo editing in newsrooms, and serious professional repercussions for failing to adhere to them, while computer-generated imagery is increasingly used in other areas of visual communication. This paper presents empirical research on the ability of professional photographers and editors to distinguish photographs from photorealistic computer-generated images by looking at them on a screen. Our results show clearly that those studied (n = 20) are unable to distinguish these from one another, suggesting that it is increasingly difficult to make this distinction, particularly since most viewers are not as experienced in photography as those studied. Interestingly, those studied continue to share a conventional understanding of photography that is not in line with current developments in digital photography and digital image rendering.
Based on our findings, we suggest the need to develop a particular visual literacy that understands the computational in digital photography, and grounds the use of digital photography among particular communities of practice. When seeing photographs on screens, in journals, exhibitions, or newspapers, we might actually be looking at computer-generated simulations, and vice versa. Keywords: Computer-generated images; photography; simulation; digital culture; representation; photojournalism 1. Introduction Press photographers need to adhere to strict guidelines on how photographs are shot, edited and published. With the advent of digital photography, discussions on the ethics of photo editing have resurfaced, discussions that documentary and news photographers have repeatedly had to deal with throughout the history of using photographic equipment. In recent years, photographers have been fired for removing or adding content to images, Pulitzer Prize-winning Narciso Contreras being only one famous example among many. He had retouched a photo of a fighter in Syria by removing the video camera of a colleague from it before sending the picture to the Associated Press.i The discipline demanded in press photography concerns, in particular, the photographic processes, which should leave decipherable traces in the images taken for, and published by, news media. Of particular concern in published guidelines are the technical characteristics of the photographs published.ii Has the picture been staged or re-enacted? Has content been removed or added? In the field of journalism, these questions regarding digital alteration have gained notable attention since the beginning of the 1990s, and continue to do so to this day (Reaves, 1992; Reaves, 1995; Schwartz, 1992; Huang, 2001; Lowrey, 2003; Mäenpää and Seppänen, 2010; Mäenpää, 2014).
The discussions revolve around the ethical codes of photo editing (Solaroli 2015), the alleged objectivity of news photographs (Carlson 2009) and the empirical editing practices in newsrooms (Gürsel 2016). All of these discussions place particular emphasis on the role that digitization plays, particularly since digital photographic practices seem to be more difficult to control and verify within journalistic practice. While strict guidelines regarding photo editing remain standard practice among photojournalists, computer-generated imagery is increasingly used in other fields of visual communication, such as advertising, cinema, and TV series. When creating computer-generated imagery, render software is used to create visualizations that have not been captured with a camera. If computer-generated imagery is photorealistic, it visually resembles images originally captured with a camera. While photojournalistic practice focuses on uses of digital photo editing, render engines, the software used to render images, already allow for creating photorealistic visualizations that may be difficult to distinguish from photographs. The at times heated discussions regarding uses of digital photography within press photography, and recent developments in computer-generated imagery, point toward a seeming paradox in our understandings of photography. The images we see as photographs might not contain any trace of an event outside of the photographic technology. We will call the need to maintain a purity in photographic images a conventional understanding of photography, due to its reliance on conventions to maintain a particular causality in photographs. Of importance here is the idea that, at the time a photographic image is taken, something outside the camera actually has an effect both on the material surface of the photographic film and, after developing and printing, on the photographic prints created.
Here, the questions used to assess the images are of the kind: has the picture been staged or re-enacted? Has content been removed or added? In the second case, developments in digital photography have been used to challenge this conception. Many scholars argue that digitalization has undermined the causal connection between a pre-photographic reality and the photographic image. WJT Mitchell claims that “[a]lthough a digital image may look just like a photograph when it is published in a newspaper, it actually differs as profoundly from a traditional photograph as does a photograph from a painting” (Mitchell, 2001, p. 3). Fred Ritchin (1991) suggests that the arrival of digital photography questions the role of photojournalism as bearing witness to events in the news, due to the possibilities opened up for digital manipulation. And Rubinstein and Sluis (2013, p. 27-29) even write that the digital photograph “has to be considered as a kind of program, a process expressed as, with, in or through software. When the photograph became digital information, it not only became malleable and non-indexical, it became computational and programmable.” Film-based and digital photography thus seem to be two completely different processes, digital photography being equated with computer-generated imagery. We approach this apparent paradox by focusing on one detail that is of importance within these discussions: can professional photographers, for whom a conventional understanding of photography is important to their professional practice, distinguish digital photographs from photorealistic computer-generated images by looking at them on a computer screen? The rationale for the question is evident: if computational techniques for photorealistic image rendering are indeed so advanced that professional viewers cannot distinguish them from photos taken with a camera, our understanding of the digital photographic image, and of its relations, has to be reassessed.
This is particularly so, since many professional photographers still hold that it is vitally important to be able to make a distinction between strongly edited digital photographs – let alone purely computer-generated pictures – and photographic images taken according to a conventional understanding (e.g. Mäenpää and Seppänen, 2010). Basically, we are interested in finding out if ‘the aesthetic qualities of photography are […able] to lay bare the realities,’ as André Bazin (1980) might have suggested, or if digital photographs and computer-generated imagery have truly become indistinguishable, not just for ordinary people but also for highly skilled professionals. Therefore, we showed 20 professional photographers and photo-editors 37 pictures on a computer screen and asked them to look at each picture and answer a simple question: “Do you think the picture shown is a photograph, or is it computer-generated?” Given their expertise, these people should be particularly well suited to making a correct distinction, especially since the majority of respondents were acclaimed professionals working in news and press photography. To make the task somewhat more interesting, we did not choose just any pictures, but border cases, that is, images that we ourselves found difficult to distinguish, thus acknowledging that there continue to be a variety of computer-generated images, as well as photographs, that remain relatively easy to tell apart. The computer-generated images we chose looked very much like actual photographs taken with a camera, whereas some of the pictures recorded with a camera contained elements that an untrained eye might take to result from render engines. After each decision, we asked the research participants to justify their choices in writing, and later to answer questions in a short interview.
This material was analyzed by paying special attention to two aspects: 1) are those studied able to make a correct distinction between photographs and computer-generated images?, and 2) what knowledge of photography do they rely on when making their distinctions, in both their written justifications and the interviews? In the following, we provide an overview of prevailing discussions about our themes and present our methodology, including the choice of images, the experimental setting, and the choice of research participants. In going through the results of their decisions, we discuss the rationales those studied gave for choosing between digital photographs made with a camera and photorealistic computer-generated images. Finally, we discuss the repercussions our results have for the practices of looking at photographic images, arguing for the need to develop a visual competence that understands the computational in digital photography, and grounds the use of digital photography among particular communities of practice. 2. Related research Particularly in the sphere of photojournalism and news imagery, the question of the objectivity and trustworthiness of the images published is paramount. Photojournalism as a profession is a precarious undertaking, with reports of mass layoffs in many media outlets, a shift to assigning freelance work (Thomson 2016), and increased buying of images from stock-photo agencies (Machin 2004; Frosh 2013) and citizen photographers (Andén-Papadopoulos & Pantti 2013) for press publication. Photojournalists face various risks in their day-to-day work, and, besides living a precarious existence economically, many fear physical harm during working hours (Hadland et al. 2016). Photos, especially from conflict zones, must often also be visually dramatic in order to be published (Solaroli 2015).
Professional photojournalists learn a way of seeing, becoming skilled in particular photo aesthetics, which they can distinguish reasonably well from other aesthetics, such as those of citizen photographs (Quinn 2015). When searching for the right kinds of images while trying to make a living, some photographers have cut corners and manipulated photos before sending them out for publication. Publication of fake photos remains a major reason for firing photojournalists, and such fakes are widely discussed in news media (Carlson 2009). As a complicating factor, in the era of digital photographic technologies the rules for what counts as fake and what does not are not always straightforward. A rule of thumb adopted by many photojournalists and professional societies is the imaginary darkroom principle (Mäenpää & Seppänen 2010). In essence, the idea is that the kind of post-processing that could be performed in darkrooms during the film era, and that was usual then, is acceptable with digital photo-editing software. The imaginary-darkroom metaphor follows the broader logic of a conventional understanding of photography. Debates of this nature have arisen within photojournalism because they affect hiring decisions, publications, and the winning of prestigious prizes. Since the early 1990s, media theorists of numerous stripes have asserted that digital photographs not only are easy to manipulate but differ significantly from analogue photographs, maintaining that digital photographs should be considered to be, at base, computer-generated imagery. The argument is made by highlighting differences in how the image captured with a camera is processed from a latent image into a visible photograph.
If the image is captured on a light-sensitive emulsion on film, its processing is seen as following a “continuous translation,” as, for example, Bernd Stiegler puts it, whereas capture on a light-sensitive semiconductor, such as a CCD chip, “occurs within a fixed grid and graticulate pattern in which each individual point or pixel is determined by a numerical value and may accordingly be arbitrarily processed and transformed” (Stiegler 2004, p. 108–109; translated in Schröter 2011, p. 51). This focus on the material basis of the technologies used even grounds claims that “[t]he pictorial evidence of photography is dead, […p]hotography as an authentic document has played out its role” (Stiegler 2004, p. 109–110; translated in Schröter 2011, p. 51). Daniel Rubinstein and Katharina Sluis (2013) argue for a fundamental dichotomy between analogue and digital photography by suggesting that the “algorithmic image” is non-indexical, without a causal relationship to the objects depicted, and that the only reason images on screens look like photographs is “because of algorithmic interventions that ensure that what is registered on the camera’s CCD/CMOS sensor is eventually output as something that a human would understand as a photograph” (location 897). These examples show that, in this line of thought, digital photography is equated with computer-generated imagery. The reference to something “out there” seems to be lost, constructed mainly within computing systems. As will become clear from the discussion later, our perspective offers an alternative line of argumentation, calling for a rethinking of the computational in digital photography, applying neither the darkroom principle nor the claim about non-indexical algorithmic images.
Meanwhile, photorealistic computer-generated imagery has been explicitly developed within computer science, gaining particular interest among those working with digital signal processing, computer vision, computer graphics, and digital forensics. Early on, specific computer-generated simulations were already able to trick the human eye, undoubtedly spurring research and development activity. Human judgements of photographs and computer-generated images have been studied experimentally with relatively simple objects under basic illumination and, later, in somewhat more complex settings (McNamara et al. 2005). Recently, Shaojing Fan et al. (2014) asked participants in a study to compare photographed and computer-generated faces. Since their interest lay mainly in understanding which parts of faces are important for judging whether images of faces are photographs or computer-generated, in order to enable the creation of better computer-generated simulations, we know from their work primarily that distinguishing these classes of image from each other is not straightforward and at times proves especially difficult. Photorealistic images are nowadays widely used in advertising, catalogues, brochures, and news propaganda, and, interestingly, even some computer scientists have become concerned about how to differentiate between photographs and computer-generated images. For example, Lyu and Farid (2005, p. 845) cite as a motivation for their computational work: “If we are to have any hope that photographs will again hold the unique stature of being a definitive recording of events, we must develop technology that can differentiate between photographic and photorealistic images.” They have created computational techniques to distinguish the two kinds of images from each other, with a correct classification rate of approx. 67% for the photorealistic images tested and approx. 99% for the photographs in their test set.
Others have followed the same path (for an overview, see Stamm et al. 2013), but, as far as we know, perfect computational solutions for differentiation are lacking. This is partly because computer science has to deal with a specific tension in the field: while some researchers are especially interested in reliably distinguishing photographs from computer-generated images, others aim to create photorealistic renderings for which this distinction cannot be made. To the best of our knowledge, systematic judging between photographs and computer-generated images has not been explored among professional photographers and photo-editors, a group that should be especially well suited to distinguishing between these two kinds of images. Adding to the importance of studying photo professionals is their need to make these distinctions in their everyday work, particularly in the field of journalism. 3. Methodology In order to learn whether professional photographers can tell, by looking at pictures displayed on a computer screen, whether these are photographs or computer-generated images, we recruited 20 professional photographers and photo editors for an experiment: 8 males and 12 females, born between 1958 and 1991. All had experience with photo-editing software such as Adobe Photoshop and Lightroom. Only two of those studied had experience with rendering photorealistic images. All worked in Finland, mainly within photojournalism, although some worked additionally in other areas, such as advertising. The experiment consisted of showing the research subjects 37 pictures one at a time, displayed on a 29.7 inch Apple Cinema Display from an adjustable viewing distance, basically a chair in front of a work desk. The research laboratory had ordinary office lighting in order to simulate an encounter with these kinds of visualizations when working on one’s own computer screen.
The experiments were conducted in a research laboratory of the [blinded for review] by one of the authors and a research assistant. Each image was shown once, in sequential order, for as long as the research participants needed. The respondents were asked to decide, while looking at each picture revealed, whether the picture shown was a photograph or generated with a computer.iii We asked the respondents to mark each decision on a separate laptop and to justify the decision in writing. After that, they could select the next picture and repeat the procedure. After the respondents had gone through all the pictures, they were asked oral questions about this differentiation, based on semi-structured questions that focused on the importance of this differentiation, its facility, the criteria that other people use for this distinction, and their definitions of photography and of computer-generated images. The interviews were recorded and transcribed for analysis. After analysing our findings, we organized a meeting with the participants to discuss them. Of all those studied, seven were able to participate in this latter meeting. The pictures displayed were collected by two authors [blinded for review] using Internet search tools, focusing on a pre-selected photographic genre: landscapes and sceneries. The most important criterion for choosing computer-generated images was that they looked like photographs taken with cameras and fitted the selected genre. Photographs, in turn, only had to meet the criterion of being landscapes. Landscapes were chosen as a motif because software render engines are already very good at creating computer-generated landscapes, and pictures were widely available. Sources for the pictures collected include image galleries from software developers (e.g. http://planetside.co.uk/galleries/terragen-gallery) and online photo-sharing sites (e.g. https://www.flickr.com/ and http://www.deviantart.com/).
After we had collected the pictures, two of the authors went through all of them separately, choosing only those computer-generated images that were hard to distinguish from photographs. In order to test the feasibility of our setting, we conducted a pilot study with nine participants, mainly psychology students and research colleagues. Encouraged by our initial results, we contacted our networks for help in recruiting research participants. In much the same vein as has been identified in science studies, we worked long in advance to create a setting that would, at least hopefully, provide interesting results vis-à-vis our research question (Latour, 1999). Research subjects were recruited via a personal visit to a major Finnish daily, by posting announcements to message boards and e-mail lists frequented by photojournalists, and by asking colleagues to help with recruitment. The material collected for analysis consisted of the choices between a photograph and a computer-generated image, the written justifications for each selection, and the oral interviews, which were transcribed for analysis. The analysis of this material is explained as we go through the findings. 4. Findings 4.1. Distinguishing photographs from computer-generated images We received a total of 725 answers to the selection task, of which 371 opted for “photograph” and 354 for “computer-generated image.” Of the possible 740 answers (37 pictures seen by 20 research participants), 15 are missing because one research participant was not able to finish the experiment in the time he had allotted to it, and two of those studied seem to have inadvertently skipped one question each. None of the respondents was able to categorize all images correctly; the best result was 10 mistakes out of 37, the worst 23 out of 37. This means that the success rate varied between 37.84% and 72.97%.
When we averaged the ratings over the whole experiment, the mean percentage of correct answers was 55.49%, which is near the guessing rate. For photographs and computer-generated images the percentages were 55.85% and 55.02%, respectively. In order to assess the statistical significance of our findings, we conducted a logistic regression analysis on the results with the R statistical software. Logistic regression analysis is particularly suitable for analysing frequency data. The results show that the image type did not predict the participants’ categorization performance (Chi-square = 0.0384, p=0.84 with df=1); in other words, the participants could not differentiate between photographs and computer-generated images. On the other hand, image content predicted categorization performance (Chi-square = 105.00, p<0.001 with df=35), which means that some images were easy and some images difficult to categorize. This is why we compared the odds ratios of the images (for odds ratios, see Peng et al., 2002). Table 1 presents the results for the photographs displayed. A small odds ratio indicates that the photograph was commonly confused with a computer-generated image. A large value indicates that the photograph was often recognized as a photograph. The table is sorted by odds ratio, and the statistically significant odds ratios are indicated in bold. From the table it can be seen that there were seven photographs that were very often thought to be computer-generated images (images 4, 5, 7, 10, 17, 18, 30). This means that, in addition to not being able to categorize these images correctly, in seven cases the respondents confused images from one category (photographs) with the other category (computer-generated images).

Table 1. Results for photographs

Table 2 presents the results for computer-generated images.
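The per-image odds ratios come from the authors' logistic regression in R, whose exact model specification is not given in the text. As a hedged sketch only, the following Python shows one common way a per-image odds ratio with a Wald test could be computed against the overall 55.49% accuracy rate; the function name, the 0.5 continuity correction, and the choice of baseline are our assumptions, so the values it yields are illustrative and not expected to reproduce the tables.

```python
import math

def normal_sf(z):
    # Survival function of the standard normal distribution (stdlib only).
    return 0.5 * math.erfc(z / math.sqrt(2.0))

def image_odds_ratio(correct, total, baseline_rate):
    """Odds of a correct answer for one image, relative to a baseline rate.

    Illustrative only: the 0.5 continuity correction keeps the odds defined
    even when all or no answers are correct; the paper's R model may differ.
    """
    odds_image = (correct + 0.5) / (total - correct + 0.5)
    odds_base = baseline_rate / (1.0 - baseline_rate)
    odds_ratio = odds_image / odds_base
    se = math.sqrt(1.0 / (correct + 0.5) + 1.0 / (total - correct + 0.5))
    z = math.log(odds_ratio) / se   # Wald statistic on the log odds ratio
    p = 2.0 * normal_sf(abs(z))     # two-sided p-value
    return odds_ratio, z, p

# Image 10 (3 of 20 answers correct) against the overall 55.49% accuracy rate:
or_, z, p = image_odds_ratio(3, 20, 0.5549)
print(f"OR={or_:.2f}, z={z:.2f}, p={p:.4f}")
```

Run on image 10, this convention still flags the image as answered correctly far less often than the baseline (odds ratio well below 1, small p-value), which is the qualitative pattern the significant rows of Table 1 report.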
A small odds ratio indicates that the computer-generated image was commonly taken to be a photograph rather than a computer-generated image. A large value indicates that the computer-generated image was indeed often recognized as a computer-generated image. The table is sorted by odds ratio, and the one statistically significant odds ratio is indicated in bold (image 28). From the table it can be seen that there was only one computer-generated image that was very commonly recognized as rendered, adding to our earlier conclusion that in all other cases the respondents were not able to distinguish between photographs and computer-generated images.

Image | Total answers | Correct answers | Odds ratio | z-value | p-value
10 | 20 | 3 | 0.05 | -3.63 | p<0.001
7 | 20 | 4 | 0.07 | -3.41 | p<0.001
4 | 18 | 5 | 0.10 | -2.96 | p<0.001
17 | 20 | 7 | 0.14 | -2.65 | p=0.01
30 | 19 | 8 | 0.19 | -2.25 | p=0.02
5 | 20 | 9 | 0.22 | -2.11 | p=0.03
18 | 20 | 9 | 0.22 | -2.11 | p=0.03
12 | 20 | 10 | 0.27 | -1.84 | p=0.07
32 | 20 | 11 | 0.33 | -1.56 | p=0.12
8 | 19 | 11 | 0.37 | -1.37 | p=0.17
31 | 20 | 12 | 0.40 | -1.26 | p=0.21
15 | 19 | 12 | 0.46 | -1.06 | p=0.29
20 | 20 | 13 | 0.50 | -0.96 | p=0.34
3 | 20 | 14 | 0.62 | -0.64 | p=0.52
26 | 20 | 14 | 0.62 | -0.64 | p=0.52
9 | 19 | 14 | 0.75 | -0.38 | p=0.70
34 | 19 | 15 | | |
11 | 20 | 16 | 1.07 | 0.08 | p=0.94
24 | 19 | 16 | 1.42 | 0.42 | p=0.68
21 | 20 | 17 | 1.51 | 0.49 | p=0.62

Image | Total answers | Correct answers | Odds ratio | z-value | p-value
2 | 19 | 6 | 0.56 | -0.86 | p=0.39
23 | 19 | 6 | 0.56 | -0.86 | p=0.39
25 | 20 | 8 | 0.81 | -0.32 | p=0.75
13 | 20 | 9 | 1.00 | 0.00 | p=1.00
35 | 20 | 9 | 1.00 | 0.00 | p=1.00
33 | 19 | 9 | 1.10 | 0.15 | p=0.88
1 | 20 | 9 | 1.10 | 0.15 | p=0.88
27 | 20 | 10 | 1.22 | 0.32 | p=0.75
19 | 20 | 10 | 1.22 | 0.32 | p=0.75
22 | 19 | 10 | 1.36 | 0.48 | p=0.63
37 | 20 | 11 | 1.49 | 0.63 | p=0.53
36 | 20 | 12 | 1.83 | 0.95 | p=0.34
29 | 19 | 13 | 2.65 | 1.46 | p=0.14
6 | 20 | 14 | 2.85 | 1.58 | p=0.11
14 | 19 | 14 | 3.42 | 1.79 | p=0.07
16 | 20 | 15 | 3.67 | 1.90 | p=0.06
28 | 20 | 18 | 11.00 | 2.75 | p=0.01

Table 2.
Results for computer-generated images

These results clearly lead to the conclusion that the studied image professionals, who work day to day with photography as photographers and/or editors, are unable to distinguish correctly between photographs captured with a camera and computer-generated images by looking at them on a computer screen. We believe this result also holds within a wider population, since the respondents in our sample were explicitly selected as being among the most proficient people at distinguishing photographs from other kinds of images, owing to the importance of high standards in the ‘purity’ of photographs within their profession. For aspects that remain difficult to render photorealistically today, such as faces (Fan et al. 2014), the results would surely look different. Neither have we tested this categorization task with those who explicitly create rendered images that look like photographs, to see whether they would fare better. Possibly, working explicitly with rendering engines helps in learning more appropriate diagnostic criteria for distinguishing photographs from computer-generated images. Nevertheless, there is so much interest in advancing techniques for photorealistic rendering that we are confident that, in the future, photographed faces will be as difficult to distinguish from computer-generated faces as we found landscape images to be for photo professionals. This is why our results have serious implications for our understanding of photography after its digitization, and particularly for the work practices in which photographs are embedded.

4.2. Justifying distinctions between photographs and computer-generated images

Besides asking respondents to choose whether a shown image is a photograph or computer-generated, we asked them to justify their decisions in order to discover how they understand photographs and computer-generated images. Additionally, we were interested in the ways in which they distinguish the two.
Using an open-ended semi-structured interview, we thus asked the respondents how they define a photograph, how they define a computer-generated image, how they think others would fare in distinguishing photographs from computer-generated images, and whether they think this distinction is important. The distinction was considered especially important by those who work in photojournalism. The authenticity of photographs, and their connection to something out there in the vein of a conventional understanding of photography, was a dominant idea brought up by several respondents. Because the respondents worked with photography, and mainly had little or no experience with rendering engines for creating photorealistic pictures, they justified their distinctions mainly by comparing their knowledge of photographs to the pictures shown to them on a computer screen. Thus their idea of photography served as an orientational device for making their choices and justifying them. The reasons given for deciding whether the picture shown was a photograph or not can be divided into four types of rationales that focus on an understanding of 1) what photographs look like (their iconic quality), 2) how photographic equipment affects photographic imagery (the indexical quality of the equipment), 3) what kinds of scenes exist “somewhere out there” and thus might be depicted in photographs (the indexical quality of the scenes depicted), and 4) what kinds of images photographs are, especially when contrasted with other kinds of images (the symbolic dimension of photographs). These categories are informed by a Peircean analysis of visual semiotics (Houser et al. 1992; 1998; Jappy 2013). Taken together, these four rationales provide a core understanding of how the distinction between a photograph and a computer-generated image was assessed.
Although the respondents differed in their evaluations of which pictures shown were photographs and which computer-generated images, both in written form and in the oral interviews they gave strikingly similar evaluations of what kinds of images photographs are. Thus, their understanding of photography seemed to follow a shared path.

4.2.1. Is this distinction important?

All interviewees except two maintained that this distinction is especially important, either on a general level when thinking about photographs, or as part of contextual uses such as within journalism. For illustrative purposes, or in advertisements, this distinction was not considered by many to be as important, since laying bare realities is not considered a main task of pictures used for those purposes. We quote a few interviewees in order to give a sense of the force of the felt importance of this distinction:

“It is important that photo professionals are able to distinguish these images from each other, so that we would not present computer-generated images in a wrong context.” R08

“Considering news imagery or journalism, then it is absolutely important. An image has to be authentic, if it is used to convey information, it is used in news, or it is used to convey facts. Then it is pivotal that the image is authentic.” R10

“Yes, definitely. Especially now that there have been several cases in which manipulated imagery has been published as news images. It is very important to be able to differentiate these from each other.” R16

Since this distinction is considered to be especially important, we analysed more closely the ways in which the research participants verbally explained the grounds on which they had made their decisions. By doing this, we wanted to know what the research participants used as diagnostic criteria to assess what photographs look like.

4.2.2. What do photographs look like?
An important rationale given for deciding whether a picture shown is a photograph or a computer-generated image dealt with the qualities of the picture seen. For photographs, the amount of detail, the apparent randomness of that detail, a perceived natural rhythm, and at times an unsightliness became important diagnostic criteria. Computer-generated images, in turn, were thought to be unable to produce enough detail, to create predictable, repeated shapes, and to look unnatural and at times too perfect, or else not good enough, displaying for example too many pixels. In the following, these iconic criteria (following Peirce) have been marked in italics:

“The amount of detail, the variation in light and shadows. The patches and fallen trees in the forest, the areas where stone changes to forest.” R10

“It might be a computer-generated image, but since there are not too many recurring patterns, I’d say it’s a photograph.” R6

“The clouds between the hills, the amount of detail also there, where one’s look would not directly go to.” R8

“The amount of detail, the randomness and the “mistakes” within the world of the image, bad lighting in the foreground.” R5

These visual, iconic qualities give a sense of how the researched identified the particular visual characteristics of photographs. Many of these characteristics have fascinated photographers since the beginnings of the craft. The amount of detail, also in areas that one would not directly look at, as articulated by R8, is a good example of the special charm that many feel towards photography.

4.2.3. How does photographic equipment affect photographic imagery?

An important diagnostic criterion for distinguishing photographs from computer-generated images was to infer how the technical equipment used for creating images affects how the pictures displayed might look.
Here the professional photographers studied were able to infer a variety of possibilities in regard to photographic technology, even if they were not always correct in their assessments. They had prior experience of the effects that, for example, long exposure times have on resulting images, or of how the use of different lenses affects, for example, the rendering of perspective. This inferred indexical relation between the visual characteristics of an image and how it has come about was described repeatedly. Again, articulations relating to these criteria are shown in italics.

“you see motion blur, and in the foreground a lot of details.” R17

“Angle of view, distortion due to a wide-angle lens and the vanishing of the horizon.” R17

“A misty pine forest: a flattened reality depicted with a telephoto lens – the tree trunks are precisely arranged but seem to be real.” R3

“When taken with a long exposure during dusk, a landscape can look like this as well.” R16

“depicted with a small image sensor” R9

In these examples, the descriptions explicitly relate to the assumed indexical traces marked by the equipment used for creating pictures. Photographic technology is considered to have affected the visual characteristics of the picture shown, even in the case of computer-generated images. Thus one’s knowledge of photographic technology, and of its possible effects, plays an important role in assessing the pictures.

4.2.4. What kinds of scenes exist “somewhere out there”?

For photographs of landscapes, the characteristics of landscapes are of obvious interest when deciding whether a picture shown is a photograph or a computer-generated image. These are also indexical signs, referring in this case to the scenery depicted affecting the photographs. Here one’s own experience of particular places is used for assessing the credibility of the landscapes depicted, as well as one’s understanding of how light, water, rocks, and similar elements behave in landscapes.
Although those studied refer at times explicitly to their own experiences, at other times they clearly indicate reasoning that refers to possibly existing places.

“I’ve seen once this kind of water. The variation between grass and stone in the forefront, the amount of details, plausible color range for that kind of weather and time of day.” R11

“I’ve taken once at the sea a similar kind of image.” R1

“Isn’t this a picture taken by me when flying over the Alps to Milan in autumn 2007?” R6

“A fell landscape: the weather phenomenon seems characteristic for the fell highlands.” R7

“somewhere on the Southside of New Zealand I’m sure you’ll find these kinds of places” R13

“It’s a possible morning fog.” R4

In these examples, the diagnostic criteria used refer to one’s own experience, or to an assessment of probability based on one’s experience. The first three examples, “I’ve seen once this kind of water,” “I’ve taken once at the sea a similar kind of image,” and “Isn’t this a picture taken by me,” refer explicitly to personal, lived experience that is used in the assessment. The latter examples, again, are abductions, in which the possibility of particular sceneries is evaluated vis-à-vis their existence on earth. Although the respondents do not claim to be sure of the existence of these places, they indicate a possibility that these kinds of sceneries actually do exist and are not figments of imagination.

4.2.5. Qualifying photographs

An important set of justifications for deciding whether a picture shown was a photograph or a computer-generated image dealt not with iconic or indexical signs but with more general, symbolic signs. These are helpful for gaining an understanding of photography as articulated in the justifications, and for contrasting it with the justifications given for computer-generated images. The following symbolic qualities expressed have to do with understandings related to photography that expand the iconic and indexical descriptions.
Although how photographs look and what they refer to (the photographic equipment and the sceneries depicted) are of particular importance, we focus here on those descriptions that go beyond these categories.

“a lot of life” R4

“believable, feels real” R7

“I don’t believe that a machine can create that much life” R1

“there’s a kind of natural line” R3

“the image feels natural” R16

Here photographs are understood to show life, to be real, authentic, natural, or believable, in line with descriptions within the conventional understanding of photography outlined earlier, as well as with the reasons respondents gave for why the distinction between photographs and computer-generated images matters. The symbolic dimension of the descriptions shows how the respondents use very abstract criteria (“life,” “real,” “natural”) for making their distinctions, qualifying photographic images as special in contrast to the pejoratively discussed computer-generated images.

5. Discussion: The difficulty of telling photographs from computer-generated images

5.1. Conventional understanding of the photograph

Our findings show that, while the respondents were unable to distinguish photographs from computer-generated images, they did have a very particular understanding of photography. We call this a conventional understanding of the photograph, because it is based on conventions and because it can be found in a variety of other settings as well – it is closely related to the darkroom principle outlined in our discussion of related research (Mäenpää & Seppänen 2010; Solaroli 2015). The conventional conception of the photograph, as suggested by Florian Rötzer, relies on the indexical quality of the photographic representation, wherein a photograph is a combination of material traces left by light emitted or reflected from the photographed objects and an iconic picture that depicts, more or less accurately, the photographed scene.
The material trace thus is seen as binding the photographic representation in time and space, working as an epistemological support for the iconic content. In addition, there is a time-space delay between the photographic representation and the represented scene with its objects. The delay is in accordance with the fundamental sine qua non of the representation itself: for there to be a representation, there must be a difference between the represented and the representation. Hence, the represented precedes the representation chronologically and spatially. In a conventional understanding of photography, it is logically impossible for a representation to precede the represented by existing before it or by being situated at exactly the same spatial coordinates. Hence, their ontological status is different. The particular power of photographic representation stems here from it being a representation that is created on the basis of traces, from surface markings made with light (Maynard 1997). The representation is deemed to give witness to an event that really has happened, even if the visual representation itself is blurry, underexposed, or even undecipherable. Therefore, in this understanding, photography provides a representation in which a trace of an actual event somehow has a presence in the picture taken. This is described very lucidly as the spectral, haunted, uncanny quality of photography by authors such as Walter Benjamin, Roland Barthes, and Susan Sontag. In this light, the exclamation by one of the interviewees regarding the importance of a difference between photographs and computer-generated images echoes a discourse on photography that has been established since the beginning of the medium: “It is absolutely important […]. I work with, and have always been interested only in, images that speak the truth. Speak or, rather, depict the truth” (R01). 
The main understanding of photographs found among the individuals studied can accordingly be seen as an explicit continuation of this position: a photograph contains a seed of that which has really happened; it is a very particular kind of witness (Peters 2001; Greenwood & Reinardy 2011). In contrast to this understanding, our findings problematize the ability of professional photographers to distinguish between pictures taken with cameras, fitting their idea of the photograph containing a trace of a scene out there, and those created with computers. By merely looking at images, our subjects could not correctly assess whether the representation contains an indexical trace of the scene depicted. As the results show, the percentage of correct answers given is near the guessing rate, meaning that extremely skilled professionals already have serious difficulties in distinguishing between these two kinds of images. Additionally, in some cases, those studied took pictures captured with cameras to be computer-generated images. We can only imagine how difficult the distinction will be for non-professionals, especially as computational tools for image rendering advance further. At the same time, there is a profound desire for the kind of images in which “natural images […] imprint themselves durably,” as Henry Fox Talbot (1980, p. 29) already suggested. As our research material shows, the participants in the study felt that the difference between a photograph and a computer-generated image is of particular importance, and they had a surprisingly uniform understanding of what a photograph looks like. In contrast to the symbolic qualities attributed to photographs, such as looking natural, being plausible, having a lot of life, or being simply true, computer-generated images were characterized as inauthentic: simulations, artificial constructs, essentially artificial copies of an original they can never replace.
However, as our findings show, the difference between “an original” and “a copy” is increasingly difficult to pinpoint in the realm of digital photography. The difficulty of choosing between an idea of an authentic image, “a photograph,” and a computer-generated image underscores Hans Belting’s (2005) claim regarding images in general: we want to find images to trust, because we have never ceased to search for these kinds of images. Various kinds of images, long before the advent of photography, were considered to be authentic, true representations of a reality that had left its traces in the images.iv It is often the case that the representational surface does not suffice for making a correct distinction between images created by different means, adding weight to the suggestion by John Durham Peters (1999, location 4413) that “[i]dentical objects invite radically different hermeneutic stances.” In our case, the images are interpreted in very different ways, depending on their categorization as either photographs or computer-generated images.

5.2. The photorealistic computer-generated image

In contrast to the ways in which our research subjects spoke of photographs, computer-generated images were discussed particularly in opposition to any claims to authenticity. Whereas photographs were described with terms such as “authentic,” “natural,” “true,” and “trustworthy,” computer-generated images were considered artificial, unnatural, made, depicting a parallel reality, and too perfect. For the participants, computer-generated images therefore have little in common with photographs. Interestingly, this position echoes important contributions to discussions on digital photography such as those introduced by Rubinstein and Sluis (2013), Stiegler (2004), and Ritchin (1991), who all maintain that digital photographs are fundamentally different from analogue, film-based photographs.
Also, Rötzer suggests that “[d]igital photography provides images that only seem to be realistic, in which all kinds of image production techniques can be blended at will, in which there is no end to the possibilities of arbitrary manipulation: photography is the perfect painting of a digital surrealism, the image a naked surface on which the imagination can make subjective drawings” (p. 21). Both among our research participants and in the work cited, the viewpoints are constructed in explicit contrast to a conventional understanding of photography, which thereby serves as a point of reference. Whereas among our participants the contrast was provided by descriptions of computer-generated imagery, in the works cited it is digital photography itself that is considered suspect. In both cases, the role of digitization and software in the creation of images is taken as proof enough that it overrides the agency of any other kind of mediation. Our inability to distinguish increasing quantities of photorealistic renderings from photographs created with a camera points to a need to take photorealistic computer-generated images seriously and to take into account that some of the images we see and use are not photography. Our results might also be read as confirming the role of algorithms and software in digital photography. We could conclude that, yes, digital photographs are unreliable, particularly when compared with their film-based, analogue predecessors, but this stance would remain unconvincing, not least because there are diverse examples of manipulated film-based photographs.
Instead, we want to point in another direction, suggesting a fundamental tension in how digital photographs are approached: on the one hand, the digitization of photography has raised, with considerable force, questions about their status as particular kinds of pictures, while, on the other hand, they continue to be used and valued in accordance with a conventional understanding of photography. The claims of a fundamental break between photography and digital photography have not been able to convince professional photographers, photo editors, or the social settings in which images made with digital photographic equipment are used.

5.3. Communities of practice and the computational in digital photography

Instead of claiming an ontological break per se between images created with analogue and digital cameras, it seems more useful to acknowledge that the question of authentic vs. artificial images has been a constant companion in assessing photographs throughout their history, analogue and digital alike. Just as the computer-generated image might seem suspect, it has long been suggested that photography, in the wrong hands, might actually be a means against the truth. Whatever the conventional understanding of photography, photographs have, in fact, always remained suspect. Have the scenes photographed been staged? Have people been cut out of the negatives produced? What has been left out of the images? Whose story is being told? In contrast to reliance on a conventional understanding of photography or on claims of digital photographs being non-indexical algorithmic images, we suggest that the computational in digital photography should be assessed from a clearer foundation, one that takes into account the role of particular communities of practice in directing the use of specific digital photographs.
Here, two technical issues should receive particular attention: the first is the merging of consecutively taken images into a single picture file, as is the case with high-dynamic-range (HDR) images, and the other is the status of RAW image files in the assessment of photographic objectivity (Winslow 2009; Mäenpää 2014; Solaroli 2015). These have become prominent topics particularly because of decisions on including or excluding photojournalistic work from competitions. As Marco Solaroli (2015) notes, when Swedish photojournalist Paul Hansen won the World Press Photo of the Year award in 2013, Neal Krawetz, a computer scientist working in digital forensics, claimed that Hansen’s picture was not a photograph but an HDR composite. Such pictures are composites of several shots, taken with different exposure settings and merged into a single picture. This technique allows for creating pictures with a wider dynamic range than those based on a single image, and it is particularly useful in difficult lighting conditions, as well as for creating visually dramatic images. While Hansen was allowed to keep the award and arguably benefited from the new visual aesthetics that HDR allows for, the example shows how digital photographic technologies are not just a faster, lighter, or more convenient way of doing photography but effectively change what can be done photographically. The role of the RAW file in the assessment of images has sparked even more pointed criticism of the “imaginary darkroom” idea within photojournalism itself. As Jenni Mäenpää (2014) notes, photojournalist Klavs Bo Christensen contested a Picture of the Year decision against him after the judges, upon seeing a RAW file, decided that too much editing had been done. This decision, based on a conventional understanding of photography, treats RAW files as digital negatives, a view that is increasingly contested.
Christensen, as both Mäenpää (2014) and Solaroli (2015) attest, said that “a RAW file […] has nothing to do with reality and I do not think you can judge the finished images and the use of Photoshop by looking at the RAW file.” Although this statement seems counterintuitive from the imaginary-darkroom perspective, from a computational perspective it becomes understandable. After all, RAW files are unprocessed, latent images that can be made visible to human eyes only through software (Munhoz 2014; Solaroli 2015). And the software for processing RAW images is developing rapidly. Francesco Zizola, a photojournalist, describes it as follows:

When you take a photograph, the photo is there but – at the same time – isn’t there. It’s there but it cannot yet be seen […]. The raw is the latent image of the digital age. It is a numeric record of light-generated data that are not yet elaborated. As such, it cannot be seen. When you “open” it in order to see it, you do it with a software [sic] that makes you see it on the basis of its own degree of technological sophistication. […T]he raw remains the same, but software improves. Today we can’t see everything in the raw. In six months we’ll see more. In one year perhaps even more. (Zizola in Solaroli 2015, p. 523)

Zizola’s assessment of the RAW file as “a numeric record of light-generated data that are not yet elaborated” opens up an alternative path between a view that dismisses the usefulness of digital photographs in photojournalism and one that treats digital photography in terms of an imaginary darkroom, fundamentally in line with a conventional understanding of photography. As a photojournalist, Zizola obviously knows whether he has been on location actually taking a photograph, affected by light, and whether he has tampered with the image and in which ways.
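Zizola's point that the same RAW data can be "seen" differently depending on the software is easy to illustrate. The following Python sketch is a deliberately minimal, hypothetical rendering pipeline, not any real converter's algorithm; its parameter names and defaults are our assumptions. It shows that one latent numeric record yields different visible pixel values depending on the gain and tone curve that the rendering software applies.

```python
def develop(raw_values, gain=1.0, gamma=2.2):
    """Render latent, linear sensor values (0.0-1.0) into displayable 8-bit pixels.

    A toy stand-in for a RAW converter: real software adds demosaicing,
    colour matrices, noise reduction, and so on, and keeps improving,
    which is why the same RAW file can "show more" with newer software.
    """
    out = []
    for v in raw_values:
        v = min(max(v * gain, 0.0), 1.0)   # exposure/white-balance gain, clipped
        v = v ** (1.0 / gamma)             # tone (gamma) curve for display
        out.append(round(v * 255))         # quantize to 8-bit pixel values
    return out

latent = [0.02, 0.10, 0.45, 0.80]          # the same numeric record of light
print(develop(latent))                     # one software's rendering
print(develop(latent, gain=1.6, gamma=1.8))  # another rendering of the same data
```

The latent list never changes between the two calls; only the rendering parameters do, yet the visible pixel values differ, which is the sense in which the RAW "remains the same, but software improves."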
These records of light-generated data are ever more often visualized in computing environments not just as classic photographs, so-called simulative pictures, but increasingly also as heuristic pictures, visualized in essence as data visualizations, as Asko Lehmuskallio (2016) has shown with reference to work among computer scientists. Digital photography does not have to be presented in only one way, and numeric records of light-generated data allow for a wide variety of computational processes. Our empirical findings show that our ways of differentiating various photo technologies from each other, such as analogue from digital or camera-based from computer-rendered, need to be reassessed. Instead of standing in stark contrast to each other, in practice they are much more entwined and entangled with each other than we might initially wish to acknowledge. If they are to serve as records of events that really happened, their particular purity has to be painstakingly created, every time anew. Jens Schröter (2011) shows, in relation to scientific imaging, that the capture, development, and visualization of a photographic trace have to be carefully manipulated in order to assure its authenticity. The authentic trace is an accomplishment assured by codes of conduct, education, and peer review in particular communities. Our understandings of particular photographs are hence tied to the communities of practice in which they are used, and in this context it is seldom a technology alone that is taken as a guarantee of authenticity. As both our findings and the related work among photojournalists show, the mechanisms of authentication are difficult to keep pure.

6. Conclusions: A plea for photojournalism

Our results show clearly that the people we studied are unable to distinguish correctly between digital photographs and photorealistic computer-generated images.
Since they, somewhat paradoxically, continue to hold to a conventional understanding of photography, we pay attention to this contrariness by turning to related literature on the digitization of photography in general and to discussions surrounding its use in photojournalism in particular. In contrast to recent lamentations over the demise of the photojournalistic profession, which list various threat scenarios due to editorial use of stock-photo agencies’ work, citizen photographers, changes in business models, or the further digitalization of the profession, we are confident that professionals in visual cultures are increasingly needed. Photojournalism has always reacted to societal and technological changes, and this adaptive stance is needed as we start to uncover the various ways in which digital photography may be thought of. In underscoring the future need for photojournalists and professionals in visual cultures, we maintain that photographic images are not trustworthy simply because they have been taken with particular equipment; rather, they are made trustworthy by being embedded within the conventions of communities of practice in which people strive to create images with a particular representational logic, as is the case in photojournalism. Since, as our results show, it is increasingly difficult to assess how images in digital environments were originally created merely by looking at them, we cannot rely on an understanding of digital photography that rests on a darkroom principle. Instead, we have to develop a visual literacy that acknowledges the difficulty of separating digital photographs from computer-generated images. When seeing photographs on screens, in journals, at exhibitions, or in newspapers, we might actually be looking at computer-generated simulations, and the reverse applies to images that we might expect to be computer-rendered.
At the same time, we must be aware of the myriad forms that digital photographs may take in the visualization of numeric records of light-generated data, all of which make distinctions in how photographs have been created even more difficult. We always have to negotiate the delicate equilibrium between representation and belief, but our task is made easier if we can rely on communities of practice with solidly established criteria for doing so, even if these criteria, as we suggest, do have to be changed.

7. References

Andén-Papadopoulos K and Pantti M (2013) Re-imagining crisis reporting: Professional ideology of journalists and citizen eyewitness images. Journalism 14(7): 960–977.
Baer J (2010) Introduction: Photography's "And/Or" Nature. Visual Resources: An International Journal on Images and Their Uses 26(2): 89–93.
Bazin A (1980) The Ontology of the Photographic Image. In: Trachtenberg A (ed) Classic Essays on Photography. Stony Creek, CT: Leete's Island Books, pp. 237–244.
Belting H (2005) Einleitung. In: Ibid. Das echte Bild. Bildfragen als Glaubensfragen. München: C.H. Beck, pp. 7–44.
Carlson M (2009) The Reality of a Fake Image: News norms, photojournalistic craft, and Brian Walski's fabricated photograph. Journalism Practice 3(2): 125–139.
Fan S, Wang R, Ng T-T, et al. (2014) Human Perception of Visual Realism for Photo and Computer-Generated Face Images. ACM Trans. Appl. Percept. 11(2): 21 pages, http://doi.acm.org/10.1145/2620030.
Frosh P (2013) Beyond the Image Bank: Digital Commercial Photography. In: Lister M (ed) The Photographic Image in Digital Culture. London: Routledge, pp. 131–.
Greenwood K and Reinardy S (2011) Self-trained and self-motivated: Newspaper photojournalists strive for quality during technological challenges. Visual Communication Quarterly 18(3): 155–166.
Gursel ZD (2016) Image Brokers: Visualizing World News in the Age of Digital Circulation. Berkeley: Univ of California Press.
Hadland A, Lambert P and Campbell D (2016) The Future of Professional Photojournalism: Perceptions of Risk. Journalism Practice 10(7): 820–832.
Houser N and Kloesel C (1992) The Essential Peirce: Selected Philosophical Writings (1867–1893). Indiana Univ. Press.
Houser N et al. (1998) The Essential Peirce: Selected Philosophical Writings (1893–1913). Indiana Univ. Press.
Huang ES (2001) Readers' Perception of Digital Alteration in Photojournalism. Journalism & Communication Monographs 3(3): 147–182.
Jappy T (2013) Introduction to Peircean Visual Semiotics. Kindle Edition, London et al.: Bloomsbury.
Latour B (1999) Circulating Reference: Sampling the Soil in the Amazon Forest. In: Ibid. Pandora's Hope. Essays on the Reality of Science Studies. Cambridge/London: Harvard University Press, pp. 24–79.
Lehmuskallio A (2016) The camera as a sensor: The visualization of everyday digital photography as simulative, heuristic and layered pictures. In: Gómez Cruz E and Lehmuskallio A (eds) Digital Photography and Everyday Life: Empirical Studies on Material Visual Practices. New York/London: Routledge, pp. 243–266.
Lowrey W (2003) Normative Conflict in the Newsroom: The case of digital photo manipulation. Journal of Mass Media Ethics 18(2): 123–142.
Lyu S and Farid H (2005) How Realistic is Photorealistic? IEEE Transactions on Signal Processing 53(2): 845–850.
Machin D (2004) Building the world's visual language: The increasing global importance of image banks in corporate media. Visual Communication 3(3): 316–336.
Mäenpää J (2014) Rethinking Photojournalism: The Changing Work Practices and Professionalism of Photojournalists in the Digital Age. Nordicom Review 35(2): 91–104.
Mäenpää J and Seppänen J (2010) Imaginary Darkroom: Digital photo editing as a strategic ritual. Journalism Practice 4(4): 454–475.
Maynard P (1997) The Engine of Visualization: Thinking Through Photography. Ithaca, NY: Cornell University Press.
McNamara A, et al. (2005) Exploring perceptual equivalence between real and simulated imagery. Proceedings of the 2nd Symposium on Applied Perception in Graphics and Visualization: 123–128.
Mitchell W (2001) The Reconfigured Eye: Visual Truth in the Post-Photographic Era. Fourth printing, Cambridge, MA/London: MIT Press.
Munhoz P (2014) Manipulation, professional practices and deontology in informational photography: identifying new parameters. Brazilian Journalism Research 10(1 EN): 210–237.
Peng CJ, Lee KL and Ingersoll GM (2002) An Introduction to Logistic Regression Analysis and Reporting. The Journal of Educational Research 96(1): 3–14.
Peters JD (1999) Speaking into the Air: A History of the Idea of Communication. Kindle Edition, Chicago and London: The University of Chicago Press.
Peters JD (2001) Witnessing. Media, Culture & Society 23: 707–723.
Quinn S (2015) A question of quality: How research participants described photographs in the NPPA study. NPPA: https://nppa.org/news/question-quality-how-research-participants-described-photographs-nppa-study.
Reaves S (1992) What's Wrong With This Picture? Daily newspaper photo editors' attitudes and their tolerance toward digital manipulation. Newspaper Research Journal 13/14(4/1): 131–155.
Reaves S (1995) The Vulnerable Image: Categories of photos as predictor of digital manipulation. Journalism & Mass Communication Quarterly 72(3): 706–715.
Ritchin F (1991) Photojournalism in the Age of Computers. In: Squiers C (ed) The Critical Image: Essays on Contemporary Photography. London: Lawrence & Wishart, pp. 28–37.
Rubinstein D and Sluis K (2013) The digital image in photographic culture: Algorithmic photography and the crisis of representation. In: Lister M (ed) The Photographic Image in Digital Culture. Second Edition, Kindle. New York/London: Routledge.
Rötzer F (1996) Re: Photography. In: von Amelunxen H, Iglhaut S, Rötzer F, et al. (eds) Photography after Photography: Memory and Representation in the Digital Age. Amsterdam: G+B Arts, pp. 13–25.
Schröter J (2011) Analogue/Digital: Referentiality and intermediality. Kunstlicht 32(3): 50–57.
Schwartz D (1992) To Tell the Truth: Codes of objectivity in photojournalism. Minneapolis, MN: Gordon and Breach.
Solaroli M (2015) Toward a New Visual Culture of the News: Professional photojournalism, digital post-production, and the symbolic struggle for distinction. Digital Journalism 3(4): 513–532.
Stamm MC, Wu M and Liu KJR (2013) Information Forensics: An Overview of the First Decade. IEEE Access 1: 167–200.
Stiegler B (2004) Digitale Photographie als epistemologischer Bruch und historische Wende. In: Engell L and Neitzel B (eds) Das Gesicht der Welt: Medien in der digitalen Kultur. München: Fink, pp. 102–128.
Talbot WHF (1980) A Brief Historical Sketch of the Invention of the Art. In: Trachtenberg A (ed) Classic Essays on Photography. Stony Creek, CT: Leete's Island Books, pp. 27–36.
Thomson T (2016) Freelance Photojournalists and Photo Editors: Learning and adapting in a (mostly faceless) virtual world. Journalism Studies: 1–21.

• Funding: This research has been funded by the Academy of Finland projects Mind, Picture, Image (MIPI) and Digital Face (DIFA).
• Acknowledgements: We would like to thank Milla Huuskonen and Hannu Alén for assistance in conducting the empirical work, as well as Anna-Kaisa Rastenberger, Hanna Weselius, and Matilda Katajamäki for helping to recruit participants. Additionally, we have benefitted from discussing the research with Kari-Jouko Räihä, Göte Nyman, Poika Isokoski, Jenni Mäenpää and Erhard Schüttpelz.
• Contact details for the corresponding author: Dr. Asko Lehmuskallio, asko.lehmuskallio@uta.fi, +358-50-3187013, Finland.
• Asko Lehmuskallio, University of Tampere
• Jukka Häkkinen, University of Helsinki
• Janne Seppänen, University of Tampere

• Author biographies:

Asko Lehmuskallio (PhD) is a University Researcher at the Faculty of Communication Sciences at the University of Tampere. His research focuses on media anthropology, visual studies, and digital culture. Lehmuskallio is a founding member of the Nordic Network for Digital Visuality (NNDV) and chair of the ECREA TWG Visual Cultures. His recent books include Pictorial Practices in a 'Cam Era' (2012, Tampere Univ Press), #snapshot: Cameras amongst us (co-ed. with A. Rastenberger, 2014, Finnish Museum of Photography) and Digital Photography and Everyday Life (co-ed. with E. Gómez Cruz, 2016, Routledge).

Jukka Häkkinen (PhD) is a University Researcher at the Institute of Behavioural Sciences at the University of Helsinki and Principal Investigator of the Visual Cognition Research Group. He works on perceptual psychology, particularly image perception, and has studied the use of a wide range of image technologies, including sickness symptoms with various consumer head-mounted displays. His work is widely cited, and he has an h-index of 23.

Janne Seppänen (PhD) is a professor at the Faculty of Communication Sciences at the University of Tampere. He has been pivotal in advancing research on visual cultures in Finland, and works additionally on photojournalism, advertising and psychoanalysis. His recent books include Levoton valokuva ('The unruly photograph', 2014, Vastapaino), Mediayhteiskunta (with Esa Väliverronen, 'Media Society', 2013, Vastapaino), The Power of the Gaze (2006, Peter Lang) and Visuaalinen kulttuuri ('Visual Culture', 2005, Vastapaino).

Footnotes:
i See e.g.
https://www.theguardian.com/artanddesign/photography-blog/2014/jan/23/associated-press-narciso-contreras-syria-photojournalism and http://www.dailymail.co.uk/news/article-2544662/Pulitzer-Prize-winning-photographer-fired-admitting-doctored-Syrian-war-rebel-picture-photoshopping-camera-original-image.html.
ii See e.g. http://www.worldpressphoto.org/activities/photo-contest/verification-process/what-counts-as-manipulation
iii The experiments were conducted in Finnish, and the participants were asked to answer the following question: "Onko kuva mielestäsi valokuva vai tietokoneella tehty?" ("In your opinion, is the picture a photograph or made with a computer?")
iv Belting (2005) discusses the problematics between trace and simulacrum in 'true representations of Christ' as an example that might be helpful for elucidating a broader issue with representation within a so-called Western history. The Mandylion of Edessa, as well as the Shroud of Turin, provide only two examples in a rich history of searching for the authentic image.

----

SHORT NOTE

Michael J. Raupach · Sven Thatje

New records of the rare shrimp parasite Zonophryxus quinquedens Barnard, 1913 (Crustacea, Isopoda, Dajidae): ecological and phylogenetic implications

Received: 25 April 2005 / Accepted: 22 August 2005 / Published online: 19 October 2005. © Springer-Verlag 2005

Abstract The rare dajid Zonophryxus quinquedens represents the only known isopod parasitizing shrimps in Antarctic waters. In contrast to the Bopyridae, which typically live in the gill cavity of their crab host, dajid isopods are normally attached to the carapace of the parasitized shrimp. Four specimens of Z. quinquedens Barnard, 1913 were collected in the eastern and western Weddell Sea, Antarctica, during the expeditions ANT XXI/2 in 2003/2004 and ANT XXII/3 in 2005. Molecular phylogenetic analyses, based on small subunit rRNA gene sequences, indicate a close relationship of Z. quinquedens to the Bopyridae.
Possible ecological and physiological aspects of the parasite–host interaction are discussed.

Introduction

The scarcity of decapod crustaceans is one of the most striking biodiversity characteristics of Antarctic waters when compared to other seas (Thatje and Arntz 2004), caused by physiological as well as ecological factors (e.g., Clarke 1983; Thatje et al. 2003). On the high Antarctic continental shelf only five benthic shrimp species are known (Gorny 1999). In contrast to the decapod crustaceans, the peracarid crustaceans, especially the Amphipoda and Isopoda, have flourished in terms of diversity in Antarctic waters (e.g., Brandt 1991, 2000). Among these, parasitic forms have received little attention in the past. Parasitic isopods with a highly modified morphology, strong sexual dimorphism, and strange life cycles occur in the taxon Cymothoida Wägele, 1989 (e.g., the Gnathiidae Leach, 1814). In particular, the Bopyridae are important isopod parasites of decapods. Although our knowledge of their life history and physiology is extremely limited, they are known to frequently infest lithodid crabs (Roccatagliata and Lovrich 1999, and references therein). Host–parasite interactions in the Lithodidae are known from subantarctic waters, albeit records of bopyrids are still lacking from high Antarctic lithodids. As for the Antarctic shrimp, only the host–parasite interaction between the enigmatic isopod Z. quinquedens (Isopoda, Cymothoida, Dajidae) (Fig. 1a) and Nematocarcinus lanceopes has been described so far (Brandt and Janssen 1994). Shrimps of the genus Nematocarcinus (see Fig. 1b) are common in Antarctic deep waters (Wägele and Sieg 1990), but specimens were found even in the Cape basin (see also Thatje et al. 2005a; Linse et al. 2005). Morphological studies indicate a close relationship of the Dajidae and Bopyridae (Wägele 1989), but the classification is still under discussion (Martin and Davis 2001; Brandt and Poore 2003). In contrast to the Bopyridae, which are known to infest their host in the branchial chamber below the carapace, the Dajidae are parasites typically attached to the carapace of euphausiids, mysids, and shrimps (see Fig. 1c), although some may also be found on the gills of their hosts. Within the genus Zonophryxus, six different species have been described from polar, temperate, and tropical waters until now: Z. dodecapus Holthuis, 1949 (Canary Islands); Z. grimaldii Koehler, 1911 (Spain); Z. quinquedens Barnard, 1913 (South Africa); Z. retrodens Richardson, 1904 (Hawaii); Z. similes Richardson, 1914 (Hudson Bay); and Z. trilobus Richardson, 1910 (Philippines). Z. quinquedens is currently known only from three different locations, all in the Southern Hemisphere: 18°29′E 34°21′S (Cape Point area) at 840–1,250 m (Barnard 1913), 48°W 62°S (Powell Basin) at unknown depth (Lopretto 1983), and 05°08′W 69°58′S (eastern Weddell Sea) at 665 m (Brandt and Janssen 1994).

M. J. Raupach (corresponding author), Lehrstuhl für Spezielle Zoologie, Fakultät für Biologie, Ruhr-Universität Bochum, Universitätsstraße 150, Bochum 44780, Germany. E-mail: michael.raupach@rub.de
S. Thatje, National Oceanography Centre, University of Southampton, European Way, Southampton SO14 3ZH, UK
Polar Biol (2006) 29: 439–443. DOI 10.1007/s00300-005-0069-2

The present work gives insight into the phylogenetic position of Z. quinquedens within the Cymothoida using ssu rRNA gene sequences and discusses some possible ecological and physiological aspects of the parasite–host interaction.

Material and methods

Specimens

A single female specimen of Z.
quinquedens was separated from an epibenthic sledge sample taken in the eastern Weddell Sea (PS65/232-1: 13°56′W 71°18′S) at 900–910 m water depth during the expedition ANT XXI/2 in December 2003. In addition, one female and one male were collected in the eastern Weddell Sea (PS67/074-7: 13°58′W 71°18′S, 1,030–1,040 m depth) and one female in the western Weddell Sea (PS67/151-1: 47°07′W 61°45′S) during ANT XXII/3 (ANDEEP III) in February 2005. The animals were immediately transferred into 96% ethanol for preservation. The benthos material obtained from the epibenthic sledge (ANT XXI/2) and Agassiz trawl (ANT XXII/3) contained 74 and 84 specimens, respectively, of the deep-sea shrimp N. lanceopes (Bate 1888), known as the host of Z. quinquedens (Brandt and Janssen 1994).

Fig. 1 a Female and dwarf male (arrow) of Z. quinquedens Barnard, 1913 (Crustacea, Isopoda, Dajidae) in ventral (left) and dorsal (right) view, scale bar = 5 mm; b N. lanceopes (Bate 1888), lateral view, scale bar = 1 cm; c drilling hole (arrow) of a female specimen of Z. quinquedens on the carapace of N. lanceopes

Additional sequences included in the DNA analyses were obtained from GenBank and included all available cymothoids except Paragnathia formica (AF255687), which is known to be a long-branched taxon (see Dreyer and Wägele 2001 for details). The following sequences were used for outgroup comparison: Decapoda: Astacus astacus (AF235959); Stomatopoda: Squilla empusa (X01723); Isopoda (Anthuridea): Cyathura carinata (AF332146) and Paranthura nigropunctata (AF279598).

DNA extraction, PCR, and sequencing

Methods for DNA extraction, amplification (including primers), and sequencing are given elsewhere (Raupach et al. 2004). Total genomic DNA was extracted from one leg of one specimen (female, ANT XXI/2). The new ssu rDNA sequence can be retrieved from GenBank (DQ008451).
Alignment and phylogenetic analyses

All 14 ssu rDNA sequences used were aligned using CLUSTAL X on default parameters (Thompson et al. 1997), generating an alignment of 3,546 base pairs (bp). Variable regions within ribosomal RNAs, especially in isopods, can vary greatly in length, which makes it almost impossible to establish base homology between distantly related species (e.g., Choe et al. 1999; Dreyer and Wägele 2001). Highly variable and non-alignable regions within the alignment were identified using the secondary structure of the ssu rRNA in the decapod A. astacus (Wuyts et al. 2002). Therefore, most parts of the expansion segments V4, V7, V9 and some other helices were excluded from further phylogenetic analyses (definitions in parentheses refer to the helix numbering of ssu rRNAs): 76–86 [E_6], 133–151/183–447 [E_8–E_11], 821–1974 [E_23/1–E_23/17 = V4], 2505–3011 [E_43/1–E_43/4 = V7], and 3336–3460 [E_49 = V9]. The final alignment used for this study had 1,464 bp; both alignments are available from the authors. The homogeneity of base frequencies across taxa was tested using the χ² test implemented in PAUP*4.0b10 (Swofford 2002). Sequences were analysed using a Bayesian approach with the program MrBayes 3.1 (Huelsenbeck and Ronquist 2001), with clade support assessed by posterior probability, on default parameters. Trees were sampled every 100 generations, yielding 9,000 samples of the Markov chain after a "burn-in" of 1,000 generations. The appropriate model of nucleotide substitution for the Bayesian analyses was determined using the Akaike Information Criterion (Akaike 1974), implemented in MODELTEST version 3.7 (Posada and Crandall 1998), which has several important advantages over the hierarchical likelihood ratio test (hLRT) (see Posada and Buckley 2004 for details).
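The base-composition homogeneity check mentioned above (a χ² test across taxa, as implemented in PAUP*) is a standard contingency-table computation: observed A/C/G/T counts per sequence are compared with counts expected from the pooled frequencies. A minimal pure-Python sketch of the statistic, using toy counts rather than the study's data (obtaining a P-value from the statistic additionally requires the χ² distribution, e.g. scipy.stats.chi2):

```python
# Chi-square homogeneity statistic for base composition across taxa:
# rows are sequences, columns are A/C/G/T counts.  Expected counts follow
# the usual contingency-table rule (row total * column total / grand total),
# with df = (rows - 1) * (columns - 1).

def chi2_homogeneity(counts):
    row_totals = [sum(row) for row in counts]
    col_totals = [sum(col) for col in zip(*counts)]
    grand_total = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(counts):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / grand_total
            stat += (observed - expected) ** 2 / expected
    df = (len(counts) - 1) * (len(counts[0]) - 1)
    return stat, df

# Toy A/C/G/T counts for three hypothetical sequences of equal composition;
# identical rows give a statistic of 0 (no compositional heterogeneity).
toy_counts = [[260, 230, 270, 240],
              [260, 230, 270, 240],
              [260, 230, 270, 240]]
stat, df = chi2_homogeneity(toy_counts)
print(stat, df)  # 0.0 6
```

With 14 sequences and four bases, the degrees of freedom come out as (14 − 1) × (4 − 1) = 39, matching the df reported in the Results.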
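The model-selection step above relies on the Akaike Information Criterion, AIC = 2k − 2 ln L, where k is the number of free model parameters and ln L the maximized log-likelihood; the candidate with the lowest AIC is preferred. A minimal sketch of that ranking (the model names and log-likelihood values below are illustrative placeholders, not values from this study):

```python
# Rank candidate substitution models by the Akaike Information Criterion,
# AIC = 2k - 2*lnL; the model with the lowest AIC is preferred.

def aic(n_free_params: int, max_log_likelihood: float) -> float:
    return 2 * n_free_params - 2 * max_log_likelihood

# Hypothetical (free parameters, maximized log-likelihood) per model;
# these numbers are illustrative, not output from MODELTEST.
candidates = {
    "JC69":    (0, -9250.0),
    "HKY85+G": (5, -8640.0),
    "GTR+G+I": (10, -8510.0),
}

scores = {name: aic(k, lnl) for name, (k, lnl) in candidates.items()}
best = min(scores, key=scores.get)
print(best, scores[best])  # GTR+G+I 17040.0
```

In this toy comparison the extra parameters of the GTR+G+I model are justified by its higher likelihood, mirroring how a GTR model with gamma rates and invariant sites was selected for this dataset.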
Results

All analysed sequences deviate somewhat from the expected base frequencies (A:C:G:T = 0.26:0.23:0.27:0.24), but there are no significant differences in base composition (χ² test: df = 39, P = 0.99). Plots of transitions and transversions versus evolutionary distances indicated no substitution saturation (not shown). The Akaike Information Criterion suggests the use of the general time-reversible model (Tavaré 1986) with gamma-distributed rates for the ssu rRNA gene dataset (alpha = 0.68, Pinvar = 0.40, R(AC) = 1.05, R(AG) = 2.73, R(AT) = 1.46, R(CG) = 0.72, R(CT) = 4.17, R(GT) = 1.00) as parameters for the Bayesian analyses. The tree resulting from the Bayesian analysis is provided in Fig. 2. The stomatopod S. empusa and the decapod A. astacus were used to root the trees. The topology supports the monophyly of the Cymothoida (1.00). Further groups recovered with high support are the Cymothoidae (1.00) and Bopyridae (1.00). Z. quinquedens appears as the sister taxon of the Bopyridae (0.95). The monophyly of the Cirolanidae is not supported.

Discussion

Despite the small number of cymothoid sequences, these analyses reveal some insights into the phylogeny of this taxon. Nevertheless, additional sequences are urgently needed to reconstruct the phylogeny of the Cymothoida. The Bayesian analyses strongly support the monophyly of the Cymothoidae and Bopyridae, but the results do not support a sister-group relationship of these two taxa as suggested by other molecular studies (Dreyer and Wägele 2001, 2002). In addition, no evidence was found for the monophyly of the Cirolanidae. However, ssu rDNA data support the hypothesis that Z. quinquedens is closely related to the Bopyridae, as is also suggested by some morphological evidence (Wägele 1989; Brandt and Poore 2003). Mobility represents an important trait for the female Z. quinquedens because the parasitic isopod has to move from the cast exuvia to the fresh carapace of its host when it moults.
Moulting in adult Antarctic shrimp has only been described in Chorismus antarcticus from the Weddell Sea shelf (Thatje et al. 2005b). The moulting of one female of C. antarcticus observed in the laboratory, from rupture of the carapace to leaving the exuvia, surprisingly lasted only about 3 min (Thatje et al. 2005b). A few abdominal flappings, as typically found in escape attempts of shrimps, were performed to leave the exuvia. If this moulting pattern is similar in N. lanceopes, the moult of the host should be a crucial moment in the life history of the isopod. This may be determined by changes in the host's steroid levels in the haemolymph. As an ectoparasite, Z. quinquedens should use its reduced but functional peraeopods to move from the cast exuvia to the fresh carapace of the host during moult; all this has to take place within a very short time period. The fresh and still soft carapace should facilitate the infestation of the host by the isopod, and it is remarkable to observe that the reduced and modified legs of Zonophryxus do indeed superficially penetrate the carapace of Nematocarcinus (Fig. 1c), which is a strong hint of continuous infestation after moult. However, most aspects of this enigmatic isopod are still unknown and thus remain speculative, and biochemical analyses are urgently needed to understand the physiology of this remarkable parasite.

Acknowledgements We are grateful to Angelika Brandt for organizing the expedition ANDEEP III (ANT XXII/3), and to Wolf Arntz for managing ANT XXI/2. We would like to thank Manuel Ballesteros (Centro de Estudios Avanzados de Blanes) for providing the material (ANT XXI/2) and Katrin Linse (British Antarctic Survey) for her help with digital photography of Z. quinquedens. Bhavani Narayanaswamy (Scottish Association for Marine Science) provided constructive comments.
The first author is indebted to the German Science Foundation (DFG) for co-financing his participation on the Polarstern cruise (grant WA 530/26). We would like to thank three anonymous reviewers for comments on the manuscript.

References

Akaike H (1974) A new look at the statistical model identification. IEEE Trans Autom Contr 19:716–723
Barnard KH (1913) Contributions to the crustacean fauna of South Africa. Ann S Afr Mus 10:197–240
Bate CC (1888) Report on the Crustacea Macrura collected by HMS Challenger during the years 1873–76. Part 1. Rep Voy Challenger 24:1–929

Fig. 2 Bayesian consensus tree based on an alignment of 1,464 bp from the ssu rRNA gene. Model choice based on the Akaike Information Criterion: six substitution types (GTR model) with gamma-distributed rates (alpha = 0.68) and invariant positions (Pinvar = 0.40). Numbers at the nodes represent posterior probabilities; values below 0.50 are not shown
Oceanogr Mar Biol Ann Rev 21:341– 453 Dreyer H, Wägele JW (2001) Parasites of crustaceans (Isopoda: Bopyridae) evolved from fish parasites: molecular and mor- phological evidence. Zoology 103:157–178 Dreyer H, Wägele JW (2002) The Scutocoxifera tax nov. (Crusta- cea, Isopoda) and the information content of nuclear ssu rDNA sequences for reconstruction of isopod phylogeny (Peracarida: Isopoda). J Crust Biol 22:217–234 Gorny M (1999) On the biogeography and ecology of the Southern Ocean decapod fauna. Scientia Marina 63 (Suppl 1):367–382 Holthuis LB (1949) Zonophryxus dodecarpus nov. spec., a remarkable species of the family Dajidae (Crustacea: Isopoda) from the Canary Islands. Koninklijke Nederlandsche Akademie van Wetenschnappen 53(3):3–8 Huelsenbeck JP, Ronquist F (2001) MrBayes: Bayesian inference of phylogenetic trees. Bioinformatics 17:754–755 Koehler R (1911) Isopodes nouveaux de la famille Dajides prove- nant des campagnes de la ‘‘Princesse Alice’’. Bull Inst Oceanogr Monaco 196:1–34 Leach WE (1814) Crustaceology. In: Brewster‘s Edinburgh Ency- clopedia 7:383–437 Linse K, Brandt A , Bohn J, Danis B, Heterier V, Janussen D, Lopéz Gonzales PJ, Schwabe E, Thomson MR (2005) Mega- benthos collected by the Agassiz trawl. In: Fahrbach E, Brandt A (eds): Die Expedition ANT XXII/3 des Forschungsschiffes Polarstern. Ber Polar Meeresforsch (Rep Polar Mar Res) (in press) Lopretto EC (1983) Zonophryxus quinquedens Barnard (Isopoda, Epicaridea, Dajidae) in South Orkney Islands waters. In: El- Sayed SZ, Tomo AP (eds) Biological investigations of Marine Antarctic Systems and Stocks (Biomass) vol. 7: Ant Aquat Biol:87–97 Martin JW, Davis GE (2001) An updated classification of the re- cent Crustacea. Sci Ser 39:1–124 Posada D, Crandall KA (1998) MODELTEST: testing the model of DNA substitution. 
Bioinformatics 14:817–818
Posada D, Buckley TR (2004) Model selection and model averaging in phylogenetics: advantages of Akaike information criterion and Bayesian approaches over likelihood ratio tests. Syst Biol 53:793–808
Raupach MJ, Held C, Wägele JW (2004) Multiple colonization of the deep sea by the Asellota (Crustacea: Peracarida: Isopoda). Deep-Sea Res II 51:1787–1795
Richardson H (1904) Contributions to the natural history of the Isopoda. Proc US Nat Mus 27:657–681
Richardson H (1910) Marine isopods collected in the Philippines by the US Fisheries Commission Steamer "Albatross" in 1907–1908. Doc US Bur Fish 736:1–44
Richardson H (1914) Reports on the scientific results of the expedition to the tropical Pacific in charge of Alexander Agassiz, on the U.S. Fish Commission Steamer "Albatross". Bull Mus Comp Zool Harvard 58:361–372
Roccatagliata D, Lovrich GA (1999) Infestation of the false king crab Paralomis granulosa (Decapoda: Lithodidae) by Pseudione tuberculata (Isopoda: Bopyridae) in the Beagle Channel, Argentina. J Crust Biol 19(4):720–729
Swofford DL (2002) PAUP*: phylogenetic analysis using parsimony (* and other methods), version 4.0b10. Sinauer Associates, Sunderland, Massachusetts
Tavaré S (1986) Some probabilistic and statistical problems in the analysis of DNA sequences. In: Miura RM (ed) Some mathematical questions in biology—DNA sequence analysis. pp 57–86
Thatje S, Schnack-Schiel S, Arntz WE (2003) Developmental trade-offs in Subantarctic meroplankton communities and the enigma of low decapod diversity in high southern latitudes. Mar Ecol Prog Ser 260:195–207
Thatje S, Arntz WE (2004) Antarctic reptant decapods: more than a myth? Polar Biol 27:195–201
Thatje S, Bacardit R, Arntz WE (2005a) Larvae of the deep-sea Nematocarcinidae (Crustacea: Decapoda: Caridea) from the Southern Ocean. Polar Biol 28:290–302
Thatje S, Lavaleye M, Arntz WE (2005b) Reproductive strategies of Antarctic decapod crustaceans.
Ber Polar Meeresforsch (Rep Polar Mar Res) 503:28–30
Thompson JD, Gibson TJ, Plewniak F, Jeanmougin F, Higgins DG (1997) The Clustal X windows interface: flexible strategies for multiple sequence alignment aided by quality analysis tools. Nucl Acids Res 24:4876–4882
Wägele JW (1989) Evolution und phylogenetisches System der Isopoda: Stand der Forschung und neue Erkenntnisse. Zoologica 140:1–262
Wägele JW, Sieg J (eds) (1990) Fauna der Antarktis. Verlag Paul Parey, 197 pp
Wuyts J, Van de Peer Y, Winkelmans T, De Wachter R (2002) The European database on small subunit ribosomal RNA. Nucl Acids Res 30:183–185

----

Dry-swabbing/image analysis technique for the pharmaceutical equipment cleaning validation

Procedia Engineering 42 (2012) 447–453. ISSN 1877-7058. © 2012 Published by Elsevier Ltd. doi: 10.1016/j.proeng.2012.07.436
20th International Congress of Chemical and Process Engineering CHISA 2012, 25–29 August 2012, Prague, Czech Republic

P. Zámostný a,*, K. Punčochová a, Z. Vltavský b, J. Patera a, Z. Bělohlav a
a Department of Organic Technology, Faculty of Chemical Technology, Institute of Chemical Technology Prague, Technická 5, 166 28 Prague, Czech Republic
b Zentiva a.s., U Kabelovny 130, 102 37 Prague 10, Czech Republic

Abstract

This paper presents the development of a new technique using the dry-swabbing method for monitoring the contamination of pharmaceutical equipment. Black polyester wipes were used to improve the detection limit of the visual inspection.
A standardized method of producing model impurity was used to produce known contamination of the model surface by a variety of compounds ranging from 0 to 500 μg.dm-2. The sample contaminations were dry- swabbed and evaluated by measuring the intensity of contamination using the computer image analysis. The detected intensities of contamination were always proportional to the amount of the impurity applied. The dry-swabbing method has been proven to be at least by one order of magnitude more sensitive than mere visual check. © 2012 Published by Elsevier Ltd. Selection under responsibility of the Congress Scientific Committee (Petr Kluson) Keywords: Cleaning validation; dry-swabbing; image analysis; decontamination 1. Introduction Cleaning validation is a collection of techniques and processes aimed at maintaining the cleanness standards for the pharmaceutical equipment regardless its processing history. The cleaning validation procedures are generally aimed at checking and proving, that the residues of the active pharmaceutical ingredient, remaining at the surface of the machinery are acceptable after finished cleaning; the * Petr Zámostný. Tel.: +420-2-2044-4222; fax: +420-2-2044-4340. E-mail address: petr.zamostny@vscht.cz. Available online at www.sciencedirect.com Open access under CC BY-NC-ND license. Open access under CC BY-NC-ND license. http://creativecommons.org/licenses/by-nc-nd/3.0/ http://creativecommons.org/licenses/by-nc-nd/3.0/ 448 P. ZaÅLmostny_ et al. / Procedia Engineering 42 ( 2012 ) 447 – 453 acceptance value being related to the toxicity of API in question. In general, the acceptable level of residual contamination [1, 2] can vary depending on the compound from several hundreds μg per dm2 of the equipment surface to several μg per dm2. Unassisted human eye can only identify contamination levels of the several hundred μg per dm2 magnitude [3], thus in most cases some instrumentation is required to improve sensitivity. 
The analysis (usually UV or HPLC) of wet swabs from the equipment surface, or of rinse water, represents the industrial standard [4]. A common drawback of both approaches is the necessity of taking the sample to the laboratory and hence an inevitable delay in the continued operation of the production equipment. This study reports the development of a new technique using the dry-swabbing method for monitoring the contamination of pharmaceutical equipment, one that is more readily available for routine monitoring of the contamination. The dry-swabbing/image analysis (DSIA) technique employs black polyester wipes for dry-swabbing the equipment surface, so as to transfer all, or at least a representative portion, of the contamination to the wipe, creating a visible stain on its surface. Digital photography and computer image analysis can convert the visual information into a numerical intensity, which is proportional to the amount of contamination. 2. Materials and methods The study involved a variety of tested compounds, including amlodipin, ibuprofen, paracetamol, caffeine, rutin, esculin, losartan, etc., and pharmaceutical formulations thereof, as model substances and formulations for investigating the test performance. Those substances were kindly provided by Zentiva (Czech Republic). 2.1. Testing equipment and procedure The experiments were carried out in the laboratory using plain stainless steel plates with marked square 1 dm2 sample areas. Simulated contamination by any of the model contaminants was created by spraying a pre-determined amount of substance solution or suspension over the sample area (fig. 1a). The sprayed volume was kept constant in order to improve reliability, and the contamination level was varied using different concentrations of the sprayed solution/suspension. The steel plates were then left to dry. This procedure was used to produce a series of stainless steel plates carrying known surface contamination by the selected model contaminant.
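The dilution-series arithmetic implied by this procedure (constant sprayed volume, varying concentration) can be sketched as follows. The spray volume used here is a hypothetical placeholder, since the paper states only that the sprayed volume was held constant:

```python
# Sketch of the dilution-series arithmetic behind the known-contamination
# plates. SPRAY_VOLUME_ML is an assumed placeholder value, not taken from
# the paper, which states only that the sprayed volume was held constant.
SAMPLE_AREA_DM2 = 1.0   # marked square sample area (stated in the paper)
SPRAY_VOLUME_ML = 1.0   # assumption for illustration only

def solution_concentration(target_load_ug_per_dm2):
    """Concentration (ug/mL) at which the fixed sprayed volume deposits
    the target surface load (ug/dm2) on the sample area after drying."""
    return target_load_ug_per_dm2 * SAMPLE_AREA_DM2 / SPRAY_VOLUME_ML

loads = [0, 25, 50, 75, 150, 300, 500]   # ug/dm2 levels used in the study
series = [solution_concentration(load) for load in loads]
```

With a different real spray volume, the required concentrations simply scale by the area-to-volume ratio.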
Contamination levels ranging from 0 to 500 μg.dm-2 were generally used. Contaminations over 300 μg.dm-2 were above the visual detection limit for most compounds; contaminations below 150 μg.dm-2, on the other hand, were not visually detectable. Then, each sample area was wiped with a folded Black Inspection Wiper Class 10.000 (Vestilab SA, Spain) held by forceps. The wiping proceeded in a scanning-like manner from the left to the right edge of the sample area and then again in the top-down direction. The contamination was transferred at least partially onto the wiper, producing a "dry-swab" containing a visible stain if there was any contamination to be detected. Examples of obtained dry-swabs are provided in fig. 2. The figure shows that the size and/or intensity of the stain generally increase with increasing surface contamination. Therefore, it should be possible to use the swabs to quantify the contamination. Fig. 1. (a) spraying the contaminant solution on the model surface area; (b) surface contaminated by 500 μg.dm-2 caffeine; (c) surface contaminated by 75 μg.dm-2 caffeine. Fig. 2. Dry-swabs of a stainless-steel sample area contaminated by Ibalgin® suspension (a) 500 μg.dm-2; (b) 300 μg.dm-2; (c) 75 μg.dm-2. 2.2. Dry-swabs image processing and analysis The shape, size, and intensity of the stain depend not only on the contamination of the tested surface but also on the fine details of the swabbing procedure. Among other effects, the intensity of the stain is negatively correlated with its surface area. Hence, it would be difficult to find a reliable relationship between any individual parameter of the stain and the contamination level. Therefore, the obtained swabs were processed by image analysis. Digital images of the obtained swabs were taken using a digital camera. An integral value of luminance was chosen as a representative quantity characterizing the overall intensity of the swab.
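The integral-luminance idea (sum the luminance over a circular region containing the stain, then subtract an equal-area reading taken from a clean part of the wiper) can be illustrated with a short NumPy sketch. The function and variable names are ours, assuming a grayscale image already loaded as a 2-D array:

```python
import numpy as np

def net_integral_intensity(gray, stain_center, bg_center, radius):
    """Sum luminance over a circular region around the stain and subtract
    the sum over an equal-sized region of clean, unused wiper (I - I0)."""
    h, w = gray.shape
    yy, xx = np.ogrid[:h, :w]

    def circle_sum(center):
        cy, cx = center
        mask = (yy - cy) ** 2 + (xx - cx) ** 2 <= radius ** 2
        return float(gray[mask].sum())

    return circle_sum(stain_center) - circle_sum(bg_center)
```

Because both circles have the same pixel count, a uniform background cancels exactly and only the stain's excess luminance remains.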
It was obtained from digital images using the following procedure in Adobe Photoshop software (Adobe, USA): the effect of varying light conditions during imaging was corrected by calibration using white and black standards; the noise and the wiper texture were suppressed by interpolation; the integral value of luminance I of the circular area containing the stain was determined; the background luminance I0 was determined over a similar circular area of the same size, but on a clean, unused wiper; and the difference Inet = I – I0 was taken as the net integral intensity of the swab. The repeatability of the procedure, including taking the swab, taking the digital image, and processing it by image analysis, varied between RSD = 5 – 10 % for all tested compounds and mixtures. 3. Results and discussion The dry-swabs/image analysis technique was tested on a variety of selected APIs and pharmaceutical formulations. Each compound or formulation was tested at five levels of contamination, supplemented by a blank test. The overview of actual contamination levels for all tested species and the obtained results are provided in Table 1. The table shows that the blanks normally exhibit very low swab intensities. Figures 3 and 4 show that the swab intensity is proportional to the contamination level for all tested species, but the slope of the proportion varies from one case to another. The relationship is hence specific to any particular tested material, but it can be approximated by a linear dependence, at least for relatively low contamination. Thus, for each material a contamination range can be found where the relationship between the contamination and the swab intensity is linear. Within such a range, the technique can therefore be used to quickly quantify the amount of contamination on the tested surface. Table 1.
Swab intensities (I × 10-3) for contamination of the model surface by different contaminants:

c, μg.dm-2      500     300     150     75      50      25      5       0
Amlodipin       -       79.2    43.7    24.1    21.1    6.9     -       0.0
Caffeine        226.6   172.5   109.8   57.0    8.7     0.0     -       0.0
Ibuprofen       -       105.0   47.9    25.0    17.3    12.8    -       0.0
Losartan        -       41.2    44.1    55.8    73.3    40.0    -       2.3
Nifuroxazide    -       446.2   249.1   149.7   121.7   128.5   -       9.9
Paracetamol     -       36.5    36.4    20.6    15.1    1.9     -       0.0
Rutin           -       250.3   208.8   191.9   130.1   53.9    -       0.4
Valsartan       -       320.4   147.1   128.9   87.1    -       56.9    0.0
Endiex®         -       386.1   241.9   175.4   145.3   -       71.8    2.4
Ibalgin®        207.3   236.7   87.1    42.2    32.0    0.0     -       0.2
Lozap H®        -       216.2   122.2   102.8   92.6    -       40.3    3.9
Valzap®         -       224.1   108.7   71.2    22.9    -       5.9     2.4

Figs. 3 - 4 and the data in Table 1 show a certain relationship between each active ingredient and the respective formulation. However, this relationship is far from being a universally valid rule, so the relevant compound or formulation, whichever is appropriate for the surface being examined, has to be used for the method calibration. The intensity-contamination relationships are linear over a wide range for some samples (fig. 3), or they may deviate from linearity at higher concentrations (fig. 4). The deviation can be due to stain oversaturation, as is the case for Ibalgin, or due to the unfavorable optical properties of the tested material, as in the case of losartan. The observations above are summarized for all tested compounds in Table 2, showing a conservative estimate of the detection limit (LOD) and the linear range of quantification (ROQ). The LOD estimate is expressed as the lowest contamination that was actually tested and produced a swab intensity 3 times higher than that of the blank sample. The ROQ reports the range from the LOD up to the limit of calibration curve linearity.
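To illustrate how such a calibration could be applied in practice, the sketch below fits a line to the caffeine row of Table 1 over its roughly linear region and applies the 3x-blank LOD rule from the text. This is an illustration only; a real application needs its own per-substance calibration data:

```python
import numpy as np

# Caffeine data from Table 1 (intensities are I x 10^-3), restricted to
# the roughly linear region reported for this compound.
c = np.array([0.0, 25.0, 50.0, 75.0, 150.0, 300.0])   # ug/dm2
I = np.array([0.0, 0.0, 8.7, 57.0, 109.8, 172.5])     # swab intensity

slope, intercept = np.polyfit(c, I, 1)                # linear calibration

def contamination_estimate(intensity):
    """Invert the linear calibration to estimate the surface load."""
    return (intensity - intercept) / slope

# Conservative LOD rule from the text: the lowest tested level whose swab
# intensity exceeds 3x the blank intensity.
blank = I[c == 0.0][0]
lod = min(ci for ci, Ii in zip(c, I) if ci > 0 and Ii > 3 * blank)
```

For this data set the rule lands on the 50 μg.dm-2 level, consistent with the 25 - 50 LOD bracket reported for caffeine in Table 2.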
Fig. 3. Relationship between the swab intensity and contamination of the sample surface by (a) Endiex® formulation and nifuroxazide; (b) Valzap® formulation and valsartan. Fig. 4. Relationship between the swab intensity and contamination of the sample surface by (a) Ibalgin® formulation and ibuprofen; (b) Lozap H® formulation and losartan. Table 2. Limits of detection and ranges of quantification of surface contamination by the dry-swabbing method for a variety of pharmaceutical ingredients/formulations:

c, μg.dm-2      LOD         ROQ
Amlodipin       < 25        25 - 300
Caffeine        25 - 50     50 - 300
Ibuprofen       < 25        25 - 300
Losartan        < 25        25 - 75
Nifuroxazid     < 25        25 - 300
Paracetamol     < 25        25 - 150
Rutin           < 25        25 - 100
Valsartan       < 5         5 - 300
Endiex®         < 5         5 - 300
Ibalgin®        25 - 50     50 - 300
Lozap H®        < 5         5 - 300
Valzap®         5 - 50      50 - 300

4. Conclusions The developed dry-swabs/image analysis technique proved useful for the quick determination of surface contamination by pharmaceutical formulations. All tested substances and formulations exhibited a statistically significant intensity-contamination relationship. The detected intensities of contamination were always proportional to the amount of the impurity applied. Even the smallest test contamination of 25 μg.dm-2 left apparent contamination stains on the swab, while the zero-level sample showed no visible trace of contamination. Most relationships exhibited very good linearity in the contamination ranges of interest (25 - 250 μg.dm-2).
For higher contamination, the linearity was poor due to stain oversaturation, but those contamination levels are so high that they are detectable by the unassisted eye. Distinct differences among the test compounds were observed; thus, a "per substance" calibration must be performed to obtain relevant results. It can be concluded that for practical application of this technique, it would be best to prepare a set of standard swabs in a laboratory environment first. These standardized swabs could afterwards be compared visually with swabs taken under operating conditions. This would enable the final users to make rapid and reliable estimates of the intensity of contamination. The dry-swabbing method has been proven to be at least one order of magnitude more sensitive than a simple visual check. Acknowledgements Financial support from specific university research (MSMT No 21/2012, project A1_FCHT_2012_007) is gratefully acknowledged. References [1] Hall WE. Cleaning and validation of cleaning for coated pharmaceutical products. Drug Manuf Technol Ser 1999;3:269-98. [2] McMenamin M. Establishing acceptance criteria for cleaning validation. American Chemical Society; 2006. p. MRM-106. [3] Forsyth RJ, Roberts J, Lukievics T, Van NV. Correlation of visible-residue limits with swab results for cleaning validation. Pharm Technol 2006;30:90, 92, 94-96, 98, 100. [4] Zaheer Z, Zainuddin R. Analytical methods for cleaning validation. Pharm Lett 2011;3:232-239. work_leajt7jphzblvkmpqlfmtbch24 ---- Research Article Effects of Deep Cervical Flexor Training on Forward Head Posture, Neck Pain, and Functional Status in Adolescents Using Computer Regularly Isha Sikka,1 Chandan Chawla,2 Shveta Seth,3 Ahmad H.
Alghadir,4 and Masood Khan4. 1 Optum, Noida 201304, India; 2 Ability Physiotherapy and Sports Injury Clinic, Hauz Khas, New Delhi 110016, India; 3 AIIMS, New Delhi 110029, India; 4 Rehabilitation Research Chair, College of Applied Medical Sciences, King Saud University, P.O. Box 10219, Riyadh 11433, Saudi Arabia. Correspondence should be addressed to Masood Khan; masoodkhan31@rediffmail.com. Received 23 July 2020; Revised 21 September 2020; Accepted 22 September 2020; Published 6 October 2020. Academic Editor: Ali I. Abdalla. Copyright © 2020 Isha Sikka et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. In contemporary societies, computer use by children is a necessity and thus highly prevalent. Using computers for long hours is related to a higher risk of computer-related muscular disorders like forward head posture (FHP) and neck pain (NP). Deep cervical flexor (DCF) muscles are important head-on-neck posture stabilizers; thus, their training may lead to an improvement in FHP and NP. The aim of this study was to determine whether 4 weeks of DCF training is effective in alleviating NP and improving FHP and functional status in adolescent children using computers regularly. A pretest-posttest experimental group design was used. Subjects were randomly assigned to the experimental group (receiving DCF training and postural education) and the control group (receiving postural education only). 30 subjects with a mean age of 15.7 ± 1.725 years with NP and FHP, all using computers regularly, participated in the study. Dependent variables were measured on day 1 (at baseline) and after 4 weeks of training. Photographic analysis was used for measuring FHP, the visual analog scale for NP intensity, and the neck disability index for functional status. Data analysis showed that in both groups, no significant improvement occurred in FHP.
In both groups, there was a significant improvement in functional status and NP. There was no significant difference between the groups for FHP and NP. There was a significant improvement in functional status in the experimental group in comparison to the control group. Four weeks of DCF training does not cause a significant improvement in FHP in 13- to 18-year-old adolescents using computers regularly. 1. Introduction There is a high prevalence of musculoskeletal pain among young people. One study found neck pain to be the most common of all musculoskeletal pain syndromes, affecting around 17.2% of adolescents [1]. In comparison to asymptomatic controls, adult patients having neck pain are found to have increased forward head posture (FHP) and impaired performance of the neck flexor and extensor muscles [2–4]. It is proposed that greater use of information and communication technologies has concurrently increased the prevalence of neck pain. Computer use has been associated with adolescent neck pain, with daily use of computers exceeding 2-3 hours as a threshold for neck pain [5, 6]. Using a computer for long periods results in a static posture maintained for a long time, particularly in the neck and shoulder regions [7]. Several studies found a relationship between neck pain and computer usage [6, 8, 9]. It has been proposed that the repetitive use of mobile phones, laptops, computers, TVs, video games, and even backpacks has forced the body to adapt to FHP and kyphosis [10]. One study reported that subjects assumed a significant FHP while viewing mobile phones in comparison to standing neutrally.
(Hindawi BioMed Research International, Volume 2020, Article ID 8327565, 7 pages; https://doi.org/10.1155/2020/8327565.) Greater head tilt angles and smaller neck tilt angles were found when subjects looked at mobile phones in comparison to a neutral standing posture [11]. The craniovertebral angle (CVA) is defined as the angle made by the intersection of a line joining the midpoint of the tragus of the ear to the skin overlying the C7 spinous process and a horizontal line passing through the C7 spinous process. There is a correlation between FHP, neck pain, and CVA. One study reported that subjects having a smaller CVA had FHP and were prone to an increased severity of neck pain [12]. FHP may involve both lower cervical flexion and upper cervical extension [13], together with tightness of the posterior region muscles and weakness and lengthening of the anterior cervical muscles [14]. Deep cervical flexor (DCF) muscles have been found to have a significant role in supporting and strengthening the cervical spine [15]. Studies have also suggested that in the case of cervical disorders, a rehabilitation approach will be more effective if the DCF muscles are recruited properly before strengthening of the global cervical muscles [13]. The use of a pressure biofeedback unit has also been suggested as a more effective way of DCF strengthening than conventional exercises [16–18]. Sitting in a flexed static posture is of greater significance especially during adolescence because growth in spinal structures is rapid during this period. Understanding the mechanism of neck pain associated with FHP and its associated changes in the adolescent age group will help us in framing better therapeutic strategies for this particular age group.
Also, this would contribute to the understanding of neck pain in older age groups and will raise awareness of the need for early interventions to prevent such problems. The DCFs have an important role in head-on-neck posture stabilization. Chiu et al. reported that patients who underwent DCF training showed less increase in neck pain severity than patients who did not [19]. Another study reported that a rehabilitation program that included DCF training effectively alleviated headache symptoms in patients. The best method to specifically activate the DCFs while reducing the involvement of the sternocleidomastoid (SCM) muscles is craniocervical flexion training (CCFT). Since postural abnormalities cause pain and injuries, postural correction and education have been used for the treatment of such pains [20]. As the mechanism of pain development in computer users has been hypothesized to be the prolonged holding of a static posture, the importance of postural awareness in school children has been emphasized. The objective of the present study is to determine whether FHP and neck pain are improved with 4 weeks of DCF training in adolescents of 13 to 18 years of age who use computers regularly. Hence, we hypothesize that 4 weeks of DCF training will significantly improve FHP, functional status, and neck pain in computer-using adolescents of 13 to 18 years of age. 2. Materials and Methods 2.1. Study Design. A pretest-posttest experimental design was used. DCF training and postural education were the independent variables, and FHP, functional status, and neck pain were the dependent variables. FHP was measured through the CVA in photographic analysis, functional status through the neck disability index (NDI) score, and neck pain through the visual analog scale (VAS) score. 2.2. Participants. For experimental research, group sizes of about 30 participants are considered the minimum to make a valid generalization [21, 22].
Therefore, 30 subjects, students of the 8th to 12th standard from 2 CBSE-affiliated schools (16 males and 14 females with a mean age of 15.7 ± 1.725 years), took part in the study (Table 1). Subjects having neck pain with or without headache, of duration more than 3 months and less than 1 year and 6 months, as identified by the body discomfort chart, and an NDI value less than 24 (mild to moderate disability scores on the NDI) were included in the study. Subjects also had to have FHP, as identified by a straight line down from the external meatus falling anterior to the shoulder and the mid thorax. Subjects were required to use a computer for at least 3 hours a day on at least 4 days a week. Subjects with an ongoing or previous history of spinal fracture, history of cervical spinal surgery, neurological signs, inflammatory disease, spinal instability, spinal tumor, spinal infection, spinal cord compression, or congenital or acquired postural deformity were excluded from the study. The study conforms to "The Code of Ethics of the World Medical Association (Declaration of Helsinki)." The ethical committee of the institutional review board (file id: RRC-2019-08) approved this study. This study has been registered at ClinicalTrials.gov (ID: NCT04463199). Subjects were selected depending upon the inclusion and exclusion criteria, then randomly allocated to either of the 2 groups by lottery, with 15 subjects in each group: an experimental group and a control group. The participants and the researcher were unaware of the random sequence. The outcome assessor was kept blinded to the allocation. The experimental group was given DCF training and postural education; the control group was given postural education only. 2.3.
Instrumentation: (i) pressure biofeedback unit (Stabilizer™, Chattanooga Group, Inc., Chattanooga, TN); (ii) digital camera (Nikon Coolpix L16); (iii) Image Tool UTHSCSA version 3.0 (University of Texas Health Sciences Center, San Antonio, TX); (iv) adjustable camera stand; (v) VAS and NDI scales; (vi) plumb line; (vii) anatomical markers. 2.4. Protocol. Consent was obtained from the school principals, parents, and students. The risks and benefits of the study were discussed with them. The study was conducted on school premises. The study was divided into 3 phases: (1) preintervention evaluation, (2) intervention, and (3) postintervention evaluation. (1) Preintervention evaluation: values of the dependent variables were taken on day 1 of the study. Subjects were given a body discomfort chart to mark the area of pain/discomfort. The NDI and VAS were given to indicate the level of functional status and level of pain, respectively. FHP was measured by calculating the CVA using a digital photography technique. Values taken on day 1 were designated as CVA Pre, NDI Pre, and VAS Pre. (2) Intervention: for the experimental group, DCF training and postural education were given for 4 weeks. For the control group, only postural education was given, verbally as well as in print, for 4 weeks. Under the supervision of a physiotherapist, the exercise regimen was performed for a total of 4 weeks. (3) Postintervention evaluation: CVA, NDI, and VAS were again measured at the end of the 4th week and designated as CVA Post, NDI Post, and VAS Post. 2.5. Measurement of FHP. FHP was measured by taking lateral photographs, which were then analyzed through a digital photography technique with the help of digitizing software (Image Tool UTHSCSA version 3.0). The CVA was measured by the software by drawing a line from the tragus of the ear to the 7th cervical vertebra.
The angle this line makes with the horizontal is the CVA. The subjects sat on a chair. A plumb line fixed to the wall served as a reference for verticality and was included in the picture. The digital camera was mounted on an adjustable tripod camera stand. At a distance of 0.8 m from the subject, the camera was placed at the level of the subject's head and neck region. The camera base was adjusted to the subject's shoulder height. The subjects were asked to look directly ahead. The C7 spinous process was palpated, and the C7 spinous process and the tragus of the ear were marked. A retroreflective marker was placed over the skin at the level of the C7 spinous process and secured with tape. A total of 3 lateral photographs of each subject were taken, and the average of the CVA was recorded as the final score. The markers were highlighted in the photographs and analyzed using computer software. 2.6. Intervention 2.6.1. Craniocervical Flexion Training (1) Preparation of Subjects. Subjects were made to lie in the crook lying position with their craniocervical region in a midrange neutral position. Folded towels of appropriate thickness were placed under the head (not the neck), if required, to maintain the cervical spine's neutral position. (2) Preparation of the Pressure Biofeedback Unit (PBU). The PBU airbag was clipped together, folded in, fastened, and placed suboccipitally. The uninflated pressure sensor was placed below the neck so that it touched the occiput and was then inflated to a stable baseline pressure of 20 mmHg, just filling the space below the neck without pushing it into lordosis. (3) Patient Instruction. Subjects were shown the correct action of the DCF, which is a gentle nodding of the head as if saying "yes." With gentle nodding, the patient was instructed to target one mark, corresponding to 2 mmHg, on the pressure dial at a time.
The pressure that the patient could hold steady for 10 seconds with minimal superficial muscle activity was taken as the baseline endurance capacity (10 repetitions of a 10-second hold). The action was ensured to be a pure nod, with no head retraction and no head lifting. Subjects were told to perform the head nodding action to gradually target and hold, for 10 seconds each, the 5 pressure levels between 22 mmHg and 30 mmHg. The minimum satisfactory performance requirement was 26 mmHg. Each session of CCFT consisted of 3 sets, with each set having 10 repetitions. Sessions were performed for a total of 4 weeks, 4 days a week, under the supervision of the therapist. A two-minute rest was given between the sets. 3. Data Analysis FHP was measured using the CVA in photographic analysis, functional status was measured using NDI scores, and pain intensity was measured using VAS scores. All statistical analyses were performed using the SPSS statistical software version 26 (SPSS Inc., Chicago, IL, USA). The Shapiro-Wilk test of normality was performed to assess the normal distribution of the demographic data of all participants and of the dependent-variable (CVA, NDI, and VAS) data in both groups. Levene's test for equality of variances was performed to compare baseline values of the dependent variables across both groups and revealed no significant difference: CVA (p = 0.88), NDI (p = 0.16), and VAS (p = 0.41). Baseline values and values at the end of the 4th week were compared for all dependent variables. For the within-group comparison of CVA, a paired t-test was applied, and for NDI and VAS, the Wilcoxon signed-rank test was applied, because NDI data are considered ordinal data and the VAS data did not support normality. The ANOVA test was performed for the between-group analysis of the mean differences of CVA at baseline and after 4 weeks of intervention. For the between-group analysis of NDI and VAS, the Mann-Whitney U test was applied.
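The test battery just listed can be sketched with scipy.stats; the data below are synthetic stand-ins for illustration, not the study's raw measurements:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic paired data for one group of n = 15 (illustration only)
cva_pre  = rng.normal(44.0, 7.0, 15)
cva_post = cva_pre + rng.normal(0.3, 2.0, 15)
ndi_pre  = rng.normal(13.0, 6.0, 15)
ndi_post = ndi_pre - rng.normal(3.0, 2.0, 15)

# Shapiro-Wilk normality check
_, p_norm = stats.shapiro(cva_pre)

# Within-group comparisons
_, p_paired   = stats.ttest_rel(cva_pre, cva_post)   # CVA: paired t-test
_, p_wilcoxon = stats.wilcoxon(ndi_pre, ndi_post)    # NDI/VAS: signed-rank

# Between-group comparisons on the pre-post changes
ctrl_change = rng.normal(0.2, 2.0, 15)               # stand-in control group
_, p_anova = stats.f_oneway(cva_pre - cva_post, ctrl_change)      # CVA
_, p_mannw = stats.mannwhitneyu(ndi_pre - ndi_post, ctrl_change)  # NDI/VAS
```

The choice of nonparametric tests for NDI and VAS mirrors the paper's reasoning: ordinal or non-normal data call for rank-based methods.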
Results in this study were considered significant if p < 0.05. 4. Results The Shapiro-Wilk test of normality revealed a normal distribution of the demographic data of all participants. Table 1: Respondent's demographic data, n = 15 each group, mean ± SD, and p values for Shapiro-Wilk tests of normality:

              Experimental group   p value   Control group   p value
Age (years)   15.46 ± 1.88         0.07      15.93 ± 1.57    0.33
Height (cm)   165.2 ± 5.97         0.14      164.6 ± 6.55    0.16
Weight (kg)   48.8 ± 4.69          0.30      48.46 ± 3.50    0.99
BMI (kg/m2)   17.86 ± 0.84         0.57      17.90 ± 0.77    0.45

CVA, NDI, and VAS mean values at baseline (Pre) and at the 4-week interval (Post) are presented in Table 2 and Figures 1 and 2. The Shapiro-Wilk test of normality of the mean values of the dependent variables (CVA, NDI, and VAS) revealed a normal distribution except for the NDI Post mean value in the experimental group, the VAS Pre value in the control group, and the VAS Post value in the experimental group (Table 2). 4.1. Craniovertebral Angle (CVA). The within-group analysis (paired-sample test) revealed no significant improvement in CVA in either group: experimental (p = 0.797) and control (p = 0.563) (Table 3). This means significant changes in CVA (i.e., CVA Pre-CVA Post) did not occur in either group. Between-group (ANOVA) analysis of changes in CVA (i.e., CVA Pre-CVA Post) revealed no significant difference between the experimental group and the control group (p = 0.542) (Table 4). 4.2. NDI. The within-group analysis (Wilcoxon signed-rank test) revealed significant improvement in NDI in both groups: experimental (p = 0.001) and control (p = 0.036) (Table 3). This means NDI was improved significantly in both groups. Between-group analysis (Mann-Whitney U test) revealed that changes in NDI (i.e., NDI Pre-NDI Post) in the experimental group were significantly greater than changes in NDI in the control group (p = 0.019) (Table 4). 4.3. VAS.
The within-group analysis (Wilcoxon signed-rank test) revealed significant improvement in VAS in both groups: experimental (p = 0.001) and control (p = 0.010) (Table 3). This means VAS was improved significantly in both groups. Between-group analysis (Mann-Whitney U test) of changes in VAS (VAS Pre-VAS Post) revealed no significant difference between the experimental group and the control group (p = 0.412) (Table 4). 5. Discussion Results of the present study showed that pain and functional status were significantly improved in both the experimental group and the control group; however, there was no significant change in FHP in either group. Also, there was no significant difference between the groups for FHP and neck pain. There was a significant improvement in functional status in the experimental group in comparison to the control group. This is, to our knowledge, the first study where CCFT is used in computer-using adolescents (age group 13 to 18 yrs.) having FHP and neck pain. Therefore, there is limited scope for comparison with other studies due to the lack of literature in this area. In patients with neck pain, the DCF muscles' endurance capacity is lost [23]. These muscles have a significant role in maintaining cervical lordosis, particularly in the functional midranges of the cervical spine [24]; therefore, DCF training was chosen as a treatment for FHP in this study. Our findings suggest that DCF training given along with postural education, as in the experimental group, or postural education alone, as in the control group, has no significant effect on FHP. Our findings are consistent with the results of Jull et al. [18], where the effectiveness of a 6-week low-load craniocervical flexion exercise program in cervicogenic headache patients was studied through a randomized controlled trial.
The result showed that there was a substantial decline in pain related to joint palpation and neck movement; however, the photographic measure of CVA representing FHP was unchanged [18]. Our results are also consistent with the findings of Kang, who reported significant improvement in neck muscle endurance and ROM but no significant improvement in the CVA from pressure biofeedback-guided DCF muscle training [25]. Grant et al. performed a single case study and reported no significant change in the posture parameters (CVA) of a screen-based operator even though the endurance of the DCF increased and the mechanosensitivity of articular, muscular, and neural structures was reduced after 4 weeks of DCF and lower scapular muscle group stabilization training [26]. However, a study by Gupta et al. showed results contrasting with our findings and reported a significant improvement in FHP as a result of DCF training [15]. A possible attempt has been made to explain the above findings. One of the primary reasons could be that school pupils' head and neck posture is affected by multiple factors [27]. Computer furniture, anthropometric variations, reports of pain and visual factors, and potential harmful developmental effects occurring as a result of consistent postural stresses can all affect school pupils' posture. Hence, just analyzing one factor in isolation and rectifying it would not prevent pupils from developing these musculoskeletal symptoms. Therefore, it is necessary to have a multidimensional approach if we want to achieve a significant and sustainable improvement in symptoms [27]. Another possible factor that could have influenced our results is that, according to Janda, from the viewpoint of muscle analysis, FHP is a result of weakness of the DCF and dominance or even tightness of the sternocleidomastoid, along with tightness of the cervical extensor muscles [28]. There may be a muscle imbalance around the cervical spine; therefore, activating a single muscle group in isolation may not be expected to be beneficial. Table 2: CVA, NDI, and VAS, n = 15 each group, mean ± SD values at baseline (Pre) and after 4 weeks of intervention (Post), and p values for Shapiro-Wilk tests of normality:

                     Experimental group   p value   Control group   p value
CVA Pre (degrees)    44.85 ± 7.54         0.48      42.55 ± 8.04    0.55
CVA Post (degrees)   45.13 ± 5.93         0.36      41.83 ± 8.33    0.08
NDI Pre (points)     13.00 ± 6.61         0.07      12.26 ± 5.29    0.38
NDI Post (points)    8.26 ± 5.67          0.02*     10.53 ± 4.79    0.23
VAS Pre (cm)         5.33 ± 1.67          0.17      5.66 ± 1.91     0.02*
VAS Post (cm)        3.33 ± 1.39          0.01*     4.33 ± 1.58     0.18
*Significant (p < 0.05).

Therefore, the conclusion that could be drawn from our study is that DCF training in isolation may be ineffective in the treatment of FHP, rather than
There may therefore be muscle imbalance around the cervical spine, and activating a single muscle group in isolation may not be expected to be beneficial. The conclusion to be drawn from our study is thus that DCF training in isolation may be ineffective in the treatment of FHP, rather than that the exercise program has no effectiveness at all. Addressing the muscle imbalance is important, so that optimal flexibility of the muscles that are tight can be achieved; improving the strength of the muscles that are prone to weakness is also necessary [28].

Table 2: CVA, NDI, and VAS, n = 15 in each group, mean ± SD values at baseline (Pre) and after 4 weeks of intervention (Post), with p values for Shapiro-Wilk tests of normality.

Measure | Experimental group | p value | Control group | p value
CVA Pre (degrees) | 44.85 ± 7.54 | 0.48 | 42.55 ± 8.04 | 0.55
CVA Post (degrees) | 45.13 ± 5.93 | 0.36 | 41.83 ± 8.33 | 0.08
NDI Pre (points) | 13.00 ± 6.61 | 0.07 | 12.26 ± 5.29 | 0.38
NDI Post (points) | 8.26 ± 5.67 | 0.02* | 10.53 ± 4.79 | 0.23
VAS Pre (cm) | 5.33 ± 1.67 | 0.17 | 5.66 ± 1.91 | 0.02*
VAS Post (cm) | 3.33 ± 1.39 | 0.01* | 4.33 ± 1.58 | 0.18
*Significant (p < 0.05).

BioMed Research International

Apart from the above explanation, another possible cause of our findings could be that a static measure of FHP, i.e. CVA in the photographic analysis, may not be an adequate outcome measure. Szeto, in his study comparing computer workers with and without pain, reported that computer users having neck pain drift into a more forward head posture when distracted [29]. This may indicate that the muscles required to maintain the posture of the cervical spine have an impaired or low level of endurance. It could also be argued that using photographs to measure spinal posture in an institution may not reflect ongoing posture.
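For reference, the CVA discussed above is conventionally obtained from a lateral photograph as the angle between the horizontal line through the C7 spinous process and the line joining C7 to the tragus of the ear; a smaller angle indicates greater forward head posture. A minimal sketch, assuming the two landmarks have already been digitised as image coordinates (the coordinate convention is an assumption, not the authors' software):

```python
import math

def craniovertebral_angle(c7, tragus):
    """CVA in degrees from two (x, y) landmarks on a lateral photograph.
    x increases toward the front of the subject, y increases upward."""
    dx = tragus[0] - c7[0]
    dy = tragus[1] - c7[1]
    # atan2 gives the angle of the C7->tragus line above the horizontal
    return math.degrees(math.atan2(dy, dx))

# Tragus level with C7 -> 0 degrees; 45 degrees when dx == dy.
print(craniovertebral_angle((0.0, 0.0), (10.0, 10.0)))  # ~45.0
```

With such a function, the Pre/Post CVA values in Table 2 could be reproduced from the digitised photographs.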
Also, the outcome of this study could have been influenced by several other, less significant factors such as neck length, height, body build, genetic predisposition, or the recreational activities of the students. The group receiving CCFT along with postural education showed greater statistical improvement in functional status (disability) than the group receiving only postural education. However, changes in FHP and neck pain were statistically similar in the group receiving both CCFT and postural training and the group receiving postural education only. A possible explanation is that in our study the total duration of DCF training was modified to once a day, 4 days a week, for a total of 4 weeks, from the protocol of Jull et al., which comprised exercise twice a day, every day, for a total of 6 weeks. In our study, pupils performed every session under the supervision of a therapist, whereas in the study of Jull et al. subjects performed exercises at home and were supervised by a therapist only once a week [30]. The protocol was modified to suit pupils' schedules and the school timetable; also, since it is better to perform exercises under a therapist's supervision than at home, each session was performed under the therapist's guidance. It is possible that the 4-week duration of DCF training was not sufficient to produce adequate changes in the DCF.

[Figure 1: Graph depicting CVA, NDI, and VAS Pre and Post mean values in the experimental group.]
[Figure 2: Graph depicting CVA, NDI, and VAS Pre and Post mean values in the control group.]
Another significant outcome of this study is the role of postural instruction in bringing significant improvement in functional status (disability) and pain in both groups. Deviation of posture from normal alignment causes imbalances and abnormal strains on the musculoskeletal system [20]. Based on the concept that postural imbalances or abnormalities cause injuries and pain, postural correction and education were used as a treatment approach for alleviating symptoms [20] and proved successful in this study.

5.1. The Relevance of the Study. Since the number of children using computers/laptops will increase in the years to come, the risk of developing musculoskeletal disorders at a younger age will also increase, which could in turn increase sick leave and early retirement. Education in correct body posture, ergonomic advice, and good work practices, when established early in life, significantly reduces the chances of developing musculoskeletal problems in later life.

5.2. Recommendation for Further Studies. Electromyography can be used in future studies to identify the simultaneous muscle activation occurring with DCF training along with postural changes. Future studies are needed that also address musculoskeletal imbalances (muscle shortening, etc.). Finally, instead of a static photographic measure of FHP, a more dynamic outcome measure should be used.

6. Conclusion

This study showed that 4 weeks of DCF training and postural education do not cause any significant improvement in FHP in adolescent pupils using computers regularly. However, neck pain and functional status (perceived disability) improved significantly with DCF training and postural education, whether given alone or in combination with each other.
Significant improvement occurred in functional status when DCF training and postural education were given in combination; however, in terms of reduction of neck pain and FHP, there was no difference whether DCF training and postural education were given in combination or alone. Therefore, this study concludes that 4 weeks of DCF training did not cause any significant improvement in FHP in adolescent pupils who use computers regularly.

Data Availability

The data associated with the paper are not publicly available but are available from the corresponding author on reasonable request.

Conflicts of Interest

The authors declare that there is no conflict of interest regarding the publication of this paper.

Acknowledgments

The authors are grateful to the Deanship of Scientific Research, King Saud University, for funding through the Vice Deanship of Scientific Research Chairs.

References

[1] G. B. Hoftun, P. R. Romundstad, J.-A. Zwart, and M. Rygg, "Chronic idiopathic pain in adolescence-high prevalence and disability: the young HUNT Study 2008," Pain, vol. 152, no. 10, pp. 2259-2266, 2011.
[2] D. Falla, G. Jull, and P. Hodges, "Feedforward activity of the cervical flexor muscles during voluntary arm movements is delayed in chronic neck pain," Experimental Brain Research, vol. 157, no. 1, pp. 43-48, 2004.
[3] A. G. Silva, P. Sharples, and M. I. Johnson, "Studies comparing surrogate measures for head posture in individuals with and without neck pain," Physical Therapy Reviews, vol. 15, no. 1, pp. 12-22, 2013.
[4] J. Schomacher and D. Falla, "Function and structure of the deep cervical extensor muscles in patients with neck pain," Manual Therapy, vol. 18, no. 5, pp. 360-366, 2013.
[5] J. C. Day, A. Janus, and J. Davis, Computer and Internet Use in the United States: 2003 Special Studies, US Census Bureau Current Population Report, 2005.

Table 3: Within-group comparison for dependent variables, mean difference ± SD in both groups.
Measure | Experimental group | SEM | p value | Control group | SEM | p value
CVA Post-CVA Pre | 0.28 ± 4.16 | 1.07 | 0.79 | -0.71 ± 4.70 | 1.21 | 0.56
NDI Post-NDI Pre | -4.73 ± 3.49 | 0.90 | 0.00* | -1.73 ± 2.84 | 0.73 | 0.03*
VAS Post-VAS Pre | -2.00 ± 1.64 | 0.42 | 0.00* | -1.33 ± 1.67 | 0.43 | 0.01*
*Significant (p < 0.05); SEM = standard error of the mean.

Table 4: Between-group comparison of dependent variables, mean difference ± SD.

Measure | Experimental group | Control group | p value
CVA Post-CVA Pre | 0.28 ± 4.16 | -0.71 ± 4.70 | 0.54
NDI Post-NDI Pre | -4.73 ± 3.49 | -1.73 ± 2.84 | 0.01*
VAS Post-VAS Pre | -2.00 ± 1.64 | -1.33 ± 1.67 | 0.41
*Significant (p < 0.05).

[6] P. T. Hakala, A. H. Rimpelä, L. A. Saarni, and J. J. Salminen, "Frequent computer-related activities increase the risk of neck-shoulder and low back pain in adolescents," The European Journal of Public Health, vol. 16, no. 5, pp. 536-541, 2006.
[7] G. Ariëns, P. Bongers, M. Douwes et al., "Are neck flexion, neck rotation, and sitting at work risk factors for neck pain? Results of a prospective cohort study," Occupational and Environmental Medicine, vol. 58, no. 3, pp. 200-207, 2001.
[8] J. Auvinen, T. Tammelin, S. Taimela, P. Zitting, and J. Karppinen, "Neck and shoulder pains in relation to physical activity and sedentary activities in adolescence," Spine, vol. 32, no. 9, pp. 1038-1044, 2007.
[9] A. M. Briggs, L. M. Straker, N. L. Bear, and A. J. Smith, "Neck/shoulder pain in adolescents is not related to the level or nature of self-reported physical activity or type of sedentary activity in an Australian pregnancy cohort," BMC Musculoskeletal Disorders, vol. 10, no. 1, p. 87, 2009.
[10] S. Kumar, "Theories of musculoskeletal injury causation," Ergonomics, vol. 44, no. 1, pp. 17-47, 2010.
[11] X. Guan, G. Fan, X. Wu et al., "Photographic measurement of head and cervical posture when viewing mobile phone: a pilot study," European Spine Journal, vol. 24, no. 12, pp. 2892-2898, 2015.
[12] C. H. T. Yip, T. T. W. Chiu, and A. T. K.
Poon, "The relationship between head posture and severity and disability of patients with neck pain," Manual Therapy, vol. 13, no. 2, pp. 148-154, 2008.
[13] Z. A. Iqbal, R. Rajan, S. A. Khan, and A. H. Alghadir, "Effect of deep cervical flexor muscles training using pressure biofeedback on pain and disability of school teachers with neck pain," Journal of Physical Therapy Science, vol. 25, no. 6, pp. 657-661, 2013.
[14] M.-Y. Lee, H.-Y. Lee, and M.-S. Yong, "Characteristics of cervical position sense in subjects with forward head posture," Journal of Physical Therapy Science, vol. 26, no. 11, pp. 1741-1743, 2014.
[15] B. D. Gupta, S. Aggarwal, B. Gupta, M. Gupta, and N. Gupta, "Effect of deep cervical flexor training vs. conventional isometric training on forward head posture, pain, neck disability index in dentists suffering from chronic neck pain," Journal of Clinical and Diagnostic Research, vol. 7, no. 10, pp. 2261-2264, 2013.
[16] G. Jull, D. Falla, B. Vicenzino, and P. Hodges, "The effect of therapeutic exercise on activation of the deep cervical flexor muscles in people with chronic neck pain," Manual Therapy, vol. 14, no. 6, pp. 696-701, 2009.
[17] O. M. Giggins, U. M. Persson, and B. Caulfield, "Biofeedback in rehabilitation," Journal of Neuroengineering and Rehabilitation, vol. 10, no. 1, p. 60, 2013.
[18] G. Jull, P. Trott, H. Potter et al., "A randomized controlled trial of exercise and manipulative therapy for cervicogenic headache," Spine, vol. 27, no. 17, pp. 1835-1843, 2002.
[19] T. T. Chiu, T.-H. Lam, and A. J. Hedley, "A randomized controlled trial on the efficacy of exercise for patients with chronic neck pain," Spine, vol. 30, no. 1, pp. E1-E7, 2005.
[20] S. Sahrmann, Diagnosis and Treatment of Movement Impairment Syndromes, Mosby, St. Louis, MO, USA, 2002.
[21] A. Fink, How to Sample in Surveys, Sage, 2003.
[22] H. C. Kraemer and C. Blasey, How Many Subjects?
Statistical Power Analysis in Research, Sage Publications, 2015.
[23] D. H. Watson and P. H. Trott, "Cervical headache: an investigation of natural head posture and upper cervical flexor muscle performance," Cephalalgia, vol. 13, no. 4, pp. 272-284, 1993.
[24] D. Falla, G. Jull, T. Russell, B. Vicenzino, and P. Hodges, "Effect of neck exercise on sitting posture in patients with chronic neck pain," Physical Therapy, vol. 87, no. 4, pp. 408-417, 2007.
[25] D. Y. Kang, "Deep cervical flexor training with a pressure biofeedback unit is an effective method for maintaining neck mobility and muscular endurance in college students with forward head posture," Journal of Physical Therapy Science, vol. 27, no. 10, pp. 3207-3210, 2015.
[26] R. Grant, G. Jull, and T. Spencer, "Active stabilisation training for screen based keyboard operators: a single case study," The Australian Journal of Physiotherapy, vol. 43, no. 4, pp. 235-242, 1997.
[27] P. Grimes and S. Legg, "Musculoskeletal disorders (MSD) in school students as a risk factor for adult MSD: a review of the multiple factors affecting posture, comfort and health in classroom environments," Journal of the Human-Environment System, vol. 7, no. 1, pp. 1-9, 2004.
[28] V. Janda, "Muscles and motor control in cervicogenic disorders: assessment and management," in Physical Therapy of the Cervical and Thoracic Spine, 1994.
[29] G. P. Szeto, "Potential health problems faced by an Asian youth population with increasing trends for computer use," in Proceedings of the XVth Triennial Congress of the International Ergonomics Association, Seoul, Korea, 2003.
[30] G. Jull, D. Falla, J. Treleaven, M. Sterling, and S. O'Leary, A Therapeutic Exercise Approach for Cervical Disorders, 2004.

Effects of Deep Cervical Flexor Training on Forward Head Posture, Neck Pain, and Functional Status in Adolescents Using Computer Regularly
J Range Manage 57:675-678 | November 2004

Technical Note: Lightweight Camera Stand for Close-to-Earth Remote Sensing

D. Terrance Booth,1 Samuel E. Cox,2 Mounier Louhaichi,3 and Douglas E.
Johnson4

Authors are 1Rangeland Scientist and 2Remote Sensing Technician, USDA-ARS, High Plains Grasslands Research Station, 8408 Hildreth Road, Cheyenne, WY 82009; and 3Professor and 4Research Associate, Department of Rangeland Resources, 302B Strand Agriculture Hall, Oregon State University, Corvallis, OR 97331.

Abstract

Digital photography and subsequent image analysis for ground-cover measurements can increase sampling rate and measurement speed and probably can increase measurement accuracy. Reduced monitoring time (labor cost) can increase monitoring precision by allowing for increased sample numbers. Multiple platforms have been developed for close-to-earth remote sensing. Here we outline a new, 5.8-kg aluminum camera stand for acquiring stereo imagery from 2 m above ground level. The stand is easily transported to, from, and within study sites owing to its low weight, excellent balance, and break-down multipiece construction.
Key Words: rangeland monitoring, vertical photography, nadir, digital photography

Research was funded in part by a grant from the Wyoming State Office of the Bureau of Land Management. Mention of trade names is for information only and does not imply an endorsement. Correspondence: D. Terrance Booth, Agricultural Research Service, 8408 Hildreth Road, Cheyenne, WY 82009. Email: Terry.Booth@ars.usda.gov. Manuscript received 6 December 2003; manuscript accepted 5 July 2004.

Introduction

The first use of vertical photography for plant cover analysis was reported by Cooper (1924), who used a wooden camera stand to acquire photographs of permanent plots. Between 1924 and the present, a succession of camera-stand designs has been used in the study of rangeland vegetation (Table 1). Claveran (1966) was the first of these researchers to use a camera stand for acquiring stereophotographs of quadrats. Key aspects of a successful stand design include low weight, ease of use, and simplicity. Here we present a design that combines rigidity, low weight, and balanced construction in a stand suitable for a single user working in a variety of plant communities.

Lightweight Camera Stand With Quadrat Base

For monitoring all types of rangeland, a highly portable, yet rigid, camera stand is desirable. In consultation with the Colorado State University Agricultural Engineering Center, we designed and constructed an aluminum camera stand similar to that described by Louhaichi et al (2001). The new stand is 2 m in height with a 1-m² base, constructed of 2.25-cm thin-walled aluminum tubing with custom-milled joints (Fig. 1). The base breaks down into four 1-m lengths and the 2 vertical poles each break down into two 1-m lengths. Each joint has a removable pin around the base and top, permanently attached by a 10-cm cable to one side of each joint to avoid pin misplacement (Fig. 1, upper inset).
These pins allow the stand to be rapidly disassembled into nine 1-m segments for transport and storage. The 2 segments of the 2 vertical poles are connected via a flared coupler with hand-tightened setscrews (Fig. 1, lower inset). The stand weighs 5.8 kg, not including the camera, and is easily balanced and carried in the field by a single operator. The horizontal top bar is square in cross section. A quick-release camera mount is attached to a carriage that rolls laterally along the top bar, allowing for stereo image acquisition. Setscrews along the top bar regulate lateral movement of the mount to control the degree of parallax in the stereo imagery. Taller features require less parallax to achieve optimal stereo effect. Too much parallax can prevent stereo viewing or lead to difficulty in focusing the images, so attention must be paid to proper adjustment of these setscrews. An Olympus E20, 5-megapixel, color digital camera with infrared remote control is mounted on the camera stand to acquire nadir imagery from 2 m above ground level (AGL). Each image covers a 1 × 1.4 m area at wide-angle (35 mm) zoom and produces a pixel resolution of 1.16 mm².

Table 1. Review of camera stands used for vertical ground photography.

Author | Year | AGL¹ (m) | Field of view² (m²) | Description
Cooper | 1924 | 1.8 | 1 | Wooden, offset tripod with all parts on 1 side of quadrat
Rowland and Hector | 1934 | NG | NG | Wooden trestle over square-meter quadrat
Winkworth et al | 1962 | NG | 2.5 | Tall stepladder
Claveran | 1966 | 1.7 | 1 | Metal tripod opened at top with 15-cm wooden bars
Wimbush et al | 1967 | 1.2 | 0.9 | Rectangular frame supported with 4 spreading, detachable legs
Pierce and Eddleman | 1970 | 1.5 | 1 | Aluminum angle bar supported between 2 standard camera tripods
Wells | 1971 | 1.3 | 1.5 | Wimbush stand modified for 2 cameras
Tueller et al | 1972 | 2.5 | 2.3 | Tripod supporting 2 cameras
Ratliff and Westfall | 1973 | 1.2 | 0.09 | Square base ~0.09 m², camera handheld against cross bar between uprights
Pierce and Eddleman | 1973 | 1.5 | 1 | Tripod (HighBoy IV; Quick-Set, Skokie, IL)
Owens et al | 1985 | <7 | <6 × 9 | Offset tripod with adjustable camera boom
Roshier et al | 1997 | Variable | Variable | Gantry connected to automobile
Northrup et al | 1999 | <5.5 | 16 | Telescoping camera boom mounted on all-terrain vehicle
Bennett et al | 2000 | 2 | 1 | Offset aluminum tripod with collapsible camera arm, bubble levels
Richardson et al | 2001 | 1.5 | NG | Monopod of PVC
Louhaichi et al | 2001 | 1.7 | 3.5 | A PVC prototype of the stand described here
VanAmberg | 2003 | 2 | 1.4 | Wheeled, with a telescoping vertical post and a 0.5-m camera arm
¹AGL indicates camera altitude above ground level; NG, not given; PVC, polyvinyl chloride.
²The given field of view is that obtained by the authors using their own particular camera and lens settings.

When sampling in tall shrub areas where it is difficult to place the 1-m quadrat base flat on the ground, sections of the stand (one vertical pole and one 1-m base length) can be fitted together via the attached pins to create a monopod.
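The per-pixel ground footprint quoted above follows from dividing the imaged area by the pixel count. A rough check; the effective pixel count below is an assumption for illustration, since the paper does not state which resolution setting was used:

```python
def pixel_footprint_mm2(width_m, height_m, px_w, px_h):
    """Ground area (mm^2) covered by one pixel of a nadir image whose
    footprint on the ground is width_m x height_m metres."""
    return (width_m * 1000.0 / px_w) * (height_m * 1000.0 / px_h)

# A 1.4 x 1 m footprint at an assumed effective resolution of 1280 x 960
# gives ~1.14 mm^2 per pixel, the same order as the quoted 1.16 mm^2.
print(round(pixel_footprint_mm2(1.4, 1.0, 1280, 960), 2))  # 1.14
```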
The monopod and the attachment of a specialized aluminum camera mounting plate hold the camera 2 m above the ground and 1 m from the vertical pole, allowing for easier nadir image acquisition (stereo imagery has not yet been acquired using the monopod configuration). Shadows from tall vegetation confound many types of image analysis, either by hiding areas of interest or by altering color in shaded areas. We mounted a 183-cm-long × 104-cm-wide roll-up window shade along the base of the stand such that it could be pulled up and attached anywhere along the vertical side support to shade the entire plot (Fig. 1). The shade is made of medium-weight light-filtering vinyl that allows even illumination of the entire plot, eliminating shadows and providing more saturated colors, thus improving the quality of the imagery obtained. The shade can be removed when not needed. The entire plot can be shaded except when the sun is higher than 67.5°. Thus, the plot can be shaded during approximately 75% of the daylight hours at an equatorial location at equinox; northern latitudes have more daylight hours with the sun below this angle.

Discussion and Conclusions

High-resolution digital images are useful for several types of data gathering and have proven to be a quick and accurate means of vegetation classification (Bennett et al 2000; Louhaichi et al 2001). As indicated in Table 1, various types of camera stands and other ground-based platforms have been used to collect nadir imagery. Some of the more recent designs include that of Northrup et al (1999), who constructed a telescoping camera boom from aluminum channel stock and mounted it to the front of an all-terrain vehicle at 45°. Bennett et al (2000) constructed a portable aluminum stand equipped with a collapsible camera arm and two telescopic legs.
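The 75% figure above can be checked with a little spherical astronomy. At the equator at equinox, solar elevation e obeys sin(e) = cos(H), where the hour angle H runs from -90° at sunrise to +90° at sunset; elevation therefore exceeds a threshold t exactly when |H| < 90° - t:

```python
def shaded_fraction(threshold_deg):
    """Fraction of daylight at an equatorial equinox with solar elevation
    below threshold_deg (i.e., when the stand's shade covers the plot)."""
    # elevation > t  <=>  cos(H) > sin(t)  <=>  |H| < 90 - t degrees
    unshaded_half_width = 90.0 - threshold_deg
    return 1.0 - unshaded_half_width / 90.0

print(shaded_fraction(67.5))  # 0.75, i.e. the quoted 75% of daylight hours
```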
Richardson et al (2001) used a 1.5-m monopod made of 10-cm-diameter polyvinyl chloride (PVC) tubing, with a horizontal arm extending 1 m away from the top of the vertical axis. A camera mounted on the end of the arm was used to acquire nadir images in dense vegetation. Louhaichi et al (2001) mounted a 35-mm camera on a lightweight stand of PVC tubing, with the camera mounted 1.7 m AGL above a 1-m² base. The use of PVC as a construction material resulted in a lightweight stand (5.1 kg); however, PVC lacks rigidity. VanAmberg (2003) constructed a wheeled camera stand out of steel corner stock that consisted of a 1-m² base, a single 1- to 3-m telescoping vertical post attached to the center of a base length, and a horizontal arm projecting 0.5 m from the top of the vertical post, to which was attached a digital camera. Although convenient for open grassland, this rolling stand is difficult to maneuver in areas with shrub cover, and it is too heavy to carry. Construction cost for the stand described here was $250 for materials and $390 for labor.

[Figure 1. Aluminum camera stand with roll-up vinyl shade and shaded 1-m² plot. The stand breaks down into nine 1-m lengths, adjusts for stereo imagery, and weighs 5.8 kg. The base is 1 m². The height is 2 m. Insets show enlargements of the milled aluminum connector with attached connector pin, and the flared coupler with hand-tightened setscrew for vertical pole segment connection.]
After more than a year of using the stand, we conclude that it is a highly practical rangeland monitoring tool with advantages that include: 1) the stand can be carried easily over uneven terrain and through most rangeland vegetation types; 2) it is stable in high-wind situations owing to its square base and rigid, durable aluminum construction; 3) the square-meter frame (quadrat) is included as part of the camera-stand base; 4) stereo digital imagery can be acquired with a single camera; 5) the roll-up vinyl shade allows for evenly illuminated, shadow-free, color-saturated imagery during more than 75% of the available daylight hours; and 6) the stand can be broken down and stored in a 1.1-m-long case.

Literature Cited

BENNETT, L. T., T. S. JUDD, AND M. A. ADAMS. 2000. Close-range vertical photography for measuring cover changes in perennial grasslands. Journal of Range Management 53:634-641.
BOOTH, D. T., S. E. COX, AND D. E. JOHNSON. 2004. Calibration of threshold levels in vegetation-cover classification software. Abstract. Society for Range Management Meeting Abstracts.
CLAVERAN, R. A. 1966. Two modifications to the vegetation photographic charting method. Journal of Range Management 19:371-373.
COOPER, W. S. 1924. An apparatus for photographic recording of quadrats. Journal of Ecology 12:317-321.
LOUHAICHI, M., M. M. BORMAN, AND D. E. JOHNSON. 2001. Spatially located platform and aerial photography for documentation of grazing impacts on wheat. Geocarta 16:63-68.
NORTHRUP, B. K., J. R. BROWN, C. D. DIAS, W. C. SKELLY, AND B. RADFORD. 1999. A technique for near ground remote sensing of herbaceous vegetation in tropical woodlands. Rangeland Journal 21:229-243.
OWENS, M. K., H. G. GARDINER, AND B. E. NORTON. 1985. A photographic technique for repeated mapping of rangeland plant populations in permanent plots. Journal of Range Management 38:231-232.
PIERCE, W. R., AND L. E. EDDLEMAN. 1970.
A field stereophotograph technique for range vegetation analysis. Journal of Range Management 23:218-220.
PIERCE, W. R., AND L. E. EDDLEMAN. 1973. A test of stereophotograph sampling in grassland. Journal of Range Management 26:148-150.
RATLIFF, R. D., AND S. E. WESTFALL. 1973. A simple stereophotographic technique for analyzing small plots. Journal of Range Management 26:147-150.
RICHARDSON, M. D., D. E. KARCHER, AND L. C. PURCELL. 2001. Quantifying turfgrass cover using digital image analysis. Crop Science 41:1884-1888.
ROSHIER, D., S. LEE, AND F. BORELAND. 1997. A digital technique for recording of plant population data in permanent plots. Journal of Range Management 50:106-109.
ROWLAND, J. W., AND J. M. HECTOR. 1934. A camera method for charting quadrats. Nature 133:179.
TUELLER, P. T., G. LORAIN, K. KIPPING, AND C. WILKIE. 1972. Methods for measuring vegetation changes on Nevada rangelands. Reno, NV: Agricultural Experiment Station, University of Nevada, Reno. Report T16. Available from: College of Agriculture, Biotechnology and Natural Resources, Mail Stop 22, 1660 N. Virginia St., Reno, NV 89557-0107.
VANAMBURG, L. 2003. Digital imagery to estimate canopy characteristics of shortgrass prairie vegetation [MS thesis]. Fort Collins, CO: Colorado State University. 122 p. Available from: Department of Forest, Rangeland and Watershed Stewardship, The College of Natural Resources, Colorado State University, Fort Collins, CO 80523-1472.
WELLS, K. F. 1971. Measuring vegetation changes on fixed quadrats by vertical ground stereophotography. Journal of Range Management 24:233-236.
WIMBUSH, D. J., M. D. BARROW, AND A. B. COSTIN. 1967. Color stereophotography for the measurement of vegetation. Ecology 48:150-152.
WINKWORTH, R. E., R. A. PERRY, AND C. O. ROSSETTI. 1962. A comparison of methods of estimating plant cover in an arid grassland community. Journal of Range Management 15:194-196.
Real-time Medical Visualization of Human Head and Neck Anatomy and its Applications for Dental Training and Simulation

Paul Anderson1, Paul Chapman, Minhua Ma, and Paul Rea2

1 Digital Design Studio, Glasgow School of Art, The Hub, Pacific Quay, Glasgow, G51 1EA, UK. {p.anderson, p.chapman, m.ma}@gsa.ac.uk. Phone: +44 (0)141 566-1478
2 Laboratory of Human Anatomy, School of Life Sciences, College of Medical, Veterinary and Life Sciences, University of Glasgow, G12 8QQ, UK. Paul.Rea@glasgow.ac.uk. Phone: +44 (0)141 330-4366

Running title: Real-time Visualization of Head and Neck Anatomy

Abstract

The Digital Design Studio and NHS Education Scotland have developed ultra-high-definition, real-time interactive 3D anatomy of the head and neck for dental teaching, training and simulation purposes. In this paper we present an established workflow using state-of-the-art 3D laser scanning technology and software for the design and construction of medical data, and describe the workflow practices and protocols in the head and neck anatomy project. Anatomical data was acquired through topographical laser scanning of a destructively dissected cadaver. Each stage of model development was clinically validated to produce a normalised human dataset, which was transformed into a real-time environment capable of large-scale 3D stereoscopic display in medical teaching labs across Scotland while also supporting single users with laptops and PCs. Specific functionality supported within the 3D Head and Neck viewer includes anatomical labelling, guillotine tools and selection tools to expand specific local regions of anatomy. The software environment allows thorough and meaningful investigation of all major and minor anatomical structures and systems while providing the user with the means to record sessions and individual scenes for learning and training purposes.
The model and software have also been adapted to permit interactive haptic simulation of the injection of a local anaesthetic.

Keywords: dental simulation, haptic interaction, head and neck anatomy, laser scanning, medical visualization, real-time simulation, real-time visualization

1 Corresponding author

Background

3D scanning technology, including laser scanning and white-light scanning, is being used to explore applications in medicine that are already routine in other fields. In healthcare, the technology has been used in the development of prostheses, as translated scan data is immediately usable in computer-aided design software, improving the speed of prosthesis development. One limitation of laser scanning technology is that it can only capture and reconstruct the outer surface of the body [1]; the scans therefore carry no internal structure or physical properties for the skeleton, skin or soft tissues of the scanned human body, unless scanning is combined with cadaveric dissection [2]. Medical visualization based on direct and indirect volumetric visualization, on the other hand, uses data derived from 3D imaging modalities such as CT, MRI, cryosection images, or confocal microscopy. Although such visualization is generally accurate, it represents only one particular human body or cadaveric specimen. Demonstrating a normalised, anatomically correct model is difficult because of the source of the data: the largely elderly population of cadaveric specimens. In indirect volume visualization, where individual surface models are reconstructed, mistakes and inaccuracy may be introduced by the manual or automatic segmentation
disassemble/reassemble, and studying individual substructures are not possible. Furthermore, each imaging modality has its limitations, for example, for cryosections, the cadaver had to be segmented into large blocks which results in a loss of data at certain intervals; for CT/MRI images, segmentation rarely focuses on very thin anatomic structures [3] such as fascia. Developing a model that can present thin anatomic structures would be of great interest to medical professionals and trainees. However, the ability to accurately segment very thin structures is a challenging and substantial task. In this paper we present an established workflow using state-of-the-art laser scanning technology and software for design and construction of 3D medical data and describe the workflow practices and protocols in the Head and Neck Anatomy project at the Digital Design Studio (DDS). The workflow overcomes the above limitations in volumetric visualization and surface anatomy. This work was conducted by a well-established, unique, multi-disciplinary team drawn from art, technology and science. The team includes computer scientists, 3D modellers and animators, mathematicians, artists and product designers, and champions a culture of research and creativity which is fast moving, highly productive, externally engaged and autonomous. This successful academic hybrid model is built upon strong collaborative partnerships directly with industry and end users, resulting in tangible real-world outputs. We have sought to establish a balanced portfolio of research, teaching and commercialisation operating within a state-of-the-art, custom built facility located within the heart of the Digital Media Quarter in Glasgow. 
The DDS houses one of the largest virtual reality and motion capture laboratories in the world ensuring that we are at the forefront of digital innovation, developing products focussed on simulated virtual environments, highly realistic digital 3D models and prototypes, user interfaces and avatars. The virtual reality laboratory is at a huge scale, enabling 30-40 users simultaneously to experience live real-time simulation and interaction as shown in Figure 1. Figure 1. The Head and Neck system was presented in the large scale Virtual Reality laboratory The work also involves the Scottish Medical Visualization Network, a collaborative initiative that has brought together 22 different medical disciplines and allied healthcare professionals across 44 organisations in Scotland to pursue excellence in medical visualization. Through this network, we created 3D digital models of selected anatomy that have been used to educate health professionals [4] and have also been used to support activities such as pre-operative planning, risk reduction, surgical simulation and increased patient safety. The work has also received significant recognition in recent major publications such as the RCUK report Big Ideas for the Future [5]. Development of Head and Neck Anatomy NHS Education Scotland (NES) launched a European tender to develop a four-strand work package to develop digital content for interactive Head and Neck anatomy, instrument decontamination, virtual patients and common disease processes for dentistry. This was a complex project requiring high levels of interaction across multidisciplinary development partners, in order to build digital anatomy (the 3D Definitive Head and Neck) and other digital products for medical teaching, across a distributed network of centres and primary care settings. 
The research took place within established protocols concerned with government legislation and patient interaction where the security of data and appropriate interface development were key factors. This paper will discuss and focus on Work Package A, i.e. the 3D interactive Head and Neck Anatomy. The aim of this project, commissioned by NES, was to complete the construction of the world’s most accurate and detailed anatomical digital model of the head and neck using state-of-the-art data acquisition techniques combined with advanced, real-time 3D modelling skills and interactive visualization expertise. It was felt essential that this digital model be capable of real-time interaction supporting both medical training and personal exploration, with all 3D data models and information being fully annotated, medically validated, and interactively “disassembled” to isolate and study individual substructures then reassembled at the touch of a button. In order to create a truly accurate interactive digital 3D model of head and neck anatomy it was essential to base the models upon real data acquired from both cadaveric and live human subjects. The full range of digital products was formally launched in April 2013. User feedback from NES medical teaching centres and primary care settings has been extremely positive. This model integrates different tissue types, vasculature, and numerous substructures that are suitable for both the casual user and, in particular, those engaged in medical learning and teaching. Our model and software interface provide fluid and intuitive ways to visualise human anatomy and encourage meaningful engagement amongst diverse audiences, and are currently well received by medical trainees, clinicians and the general public. 
Our 3D digital model development process, including data acquisition, model construction, interface design and implementation has been critically evaluated and validated by multi-disciplinary experts in the fields of medicine and computing science. Our extensive collaborative research network includes senior clinicians, surgical consultants, anatomists and biomedical scientists within the NHS, formal links with the medical schools within the Universities of Glasgow, Edinburgh, Dundee, Manchester and London, and other key specialists in the Scottish Medical Visualization Network. A Review of Anatomical Visualization Systems We continually monitor the medical visualization landscape for similar visualization products, and conduct regular exhaustive reviews of other digital datasets to ensure we maintain high levels of innovation and that our activities make a significant contribution to the field. Our development team, consisting of licensed anatomists, computer scientists and 3D modellers, has scrutinised all relevant competition worldwide. It is well known that there have been numerous datasets, models and atlases of human anatomy developed over the years. The first such major undertaking was the Visible Human which produced whole-body datasets based on male and female cadavers. The value and impact of this initial dataset were immediate: clinicians, scientists and educators throughout the world quickly accessed and downloaded this information, which became the de facto dataset on human anatomy for years (despite its limitations). The massive and strongly positive response to the introduction of the Visible Human demonstrates the need for the existence of, and accessibility to, anatomical information that is comprehensive, reliable and easy to use. Since then, several datasets have emerged that have attempted to contain either partial or whole-body anatomical information. 
There are approximately twelve anatomical visualization datasets available, which vary widely in both quality and viability. We outline the functionality and fidelity of seven of them below, in order to provide a comparison with our Head and Neck project. VisibleBody [6] is a web-based, downloadable 3D human full body viewer. The dataset is proprietary and claimed to be ‘extremely accurate’. Tools include rotate, zoom, select single or multiple parts, labels, transparency, search and locate anatomical structures by name, and cut and view internal structure of organs. Google Body Browser [7] is a basic interactive 3D human body viewer using Zygote Media Group’s dataset. The software is free to use. Body Browser requires WebGL, which requires the latest versions of browsers. It features rotate, zoom, select single parts, labels on each part, transparency, and iteration between layers (skin, muscles, bones, organs, brain and nerves). Primal Pictures [8] produces various pieces of software that each visualise a particular part of the body, e.g. one for the head and neck, another for shoulders and hands. The dataset is proprietary and also includes volumetric data from MRI, CT and cryosection images. It provides rotate, zoom, dynamic search, part selection, labels and extraction of metadata tools, including extra educational information for each body part. Cyber Anatomy Med [9] is a medical visualization educational package with models constructed from CT/MRI volumetric datasets. It features rotation, translation, zoom, selection of parts, dissection mode, search, transparency, labels, culling planes that use MRI, CT and cryosection data, hide, reveal, explode, implode, screenshot capturing and stereoscopic 3D support. The Visible Human Project [10] is a complete, anatomically detailed, 3D representation of the normal male and female human. Acquisition of transverse CT, MR and cryosection images of representative male and female cadavers has been completed. 
Volumetric datasets are difficult to visualise due to the many complex steps required to process the data: it requires a significant level of user expertise, and the segmentation of the datasets is equally complex. 3D4Medical [11] produces medical visualization software for reference in health and fitness markets. Their latest applications are ‘interactive’, although the models are not truly interactive as they use pre-rendered animations that only allow the user to manipulate viewpoints in predetermined vertical or horizontal paths. This pre-rendered approach also results in a ‘grainy’ display when the user zooms into the 3D model. BodyViz [12] is a 3D medical imaging software package that combines volumetric visualization with a game controller interface. Apart from basic navigation controls, it provides: clipping planes at arbitrary angles; an Xbox controller; viewing of different tissue types (e.g. bones, muscles); and customer-created annotations. It can also visualise live patients’ CT/MRI data. We believe that the Head and Neck project improves on the above products in several ways. • Accuracy/clinical value. Some of the products described above are oversimplified, lacking detail and reproducing anatomical errors due to a reliance on 2D sources such as Netter-type illustrations. Several omit relevant data or were assessed to be inaccurate by the project’s anatomist advisors, particularly in the placement of fine detail of nerves and blood vessels. Some products demonstrate a poor relationship between their source data (e.g. MRI) and the modelled structures. Due to our rigorous validation process, the Head and Neck project improves on the accuracy of other anatomical models and has high clinical value, at very fine levels of detail. • Realism. Several of the anatomical models above are largely diagrammatic, low resolution, and lack definition and detail. The Head and Neck project used laser scanning to capture extremely detailed structural information (e.g. 
in the surface of the skull) and high resolution digital photography of live patients to ensure realistic textures, at very high levels of definition and detail. The result is colour and structural information that accurately represents the real thing. • Normalised anatomy. The Head and Neck project is not based on one particular patient but represents a normalised adult male, based on the input and validation of the project’s clinical advisory team. • Functionality/appropriateness to teaching. Whilst many of the products above are highly appropriate for basic medical study, our evaluation showed that many are not of sufficient quality to be useful in a clinical or higher education context. Our Head and Neck model is appropriate for learning, teaching and simulation at an expert level and the functionality supports a wide range of scenarios including: true interactivity (not based on pre-rendered animations); a full suite of interaction tools including labelling, zoom, selection, rotation, translation, transparency, guillotine, explode/implode and reassemble; large-scale 3D stereoscopic projection (with optional head-tracking view); control via a PC or game-controller; and an intuitive user interface. Our systematic review of the medical visualization landscape indicated that there was an unmet need for validated anatomical visualizations of the healthy human body that could be relied upon by viewers as accurate and realistic reproductions of the real thing. Indeed, within medical, dental and surgical curricula, the number of actual contact hours for teaching has markedly reduced over recent years. Within the medical curriculum alone, the General Medical Council issued guidelines to medical schools in the United Kingdom (Tomorrow’s Doctors, 1993) requesting a reduction in the amount of factual information [13], [14]. This has happened across many medical schools across the world [13-17]. 
However, medical training programs also began to change to a more integrated curriculum with various teaching methodologies adopted [18], [19]. More recently in the UK, Tomorrow’s Doctors 2009 has placed more emphasis on the medical sciences related to clinical practice. This directly reflects opinion from academics, clinicians and students that the anatomy content had been significantly “dumbed down” previously [17], [20]. Thankfully, this is changing within medical, dental and surgical curricula. Interestingly, with these changes, it has also been shown that to optimise learning, a variety of teaching modalities need to be used alongside traditional techniques. There is now an increased demand from anatomical educators for additional teaching resources, including those of a virtual nature and involving interactive multimedia [21-23]. In addition, the general public as yet has no means to view and interact with a truly representative visual simulation of the human body in a way that provides a genuine educational experience to promote public understanding of health and wellbeing. Using visualization experience gained within the automotive, defence and built environment sectors alongside medical visualization research, we sought to address these shortfalls by constructing a high fidelity 3D dataset, supporting meaningful user engagement, viewing and real-time interaction. The construction of head and neck anatomy sits very well within the established DDS academic, commercial and research programmes where the primary focus is on user interaction with real-time digital data that supports multi-disciplinary skill sets. It embraces and builds upon our technical development platform for research and commercial development through 3D laser scanning, 2D data capture, data processing and optimisation, 3D construction of objects and environments, photo-realistic rendering, user interface design, real-time display and cross-platform development. 
There is a direct connection between the development of this anatomical head and neck dataset and our joint MSc in Medical Visualization and Human Anatomy, an innovative programme designed and delivered by the DDS in partnership with the University of Glasgow’s Laboratory of Human Anatomy, School of Life Sciences, part of the College of Medical, Veterinary and Life Sciences. Indeed, this collaboration with the Laboratory of Human Anatomy ensures access to one of Europe’s largest anatomical facilities with cadaveric material. Continuing collaboration ensures beneficial knowledge exchange and user feedback from this postgraduate student community, continuously informs future research development, and in turn positively impacts upon pedagogy and the overall student learning experience. The Workflow Figure 2 shows the development workflow, which at a high level consists of identification of a suitable donated cadaver, dissection, 3D laser scanning capturing surface measured data, 3D computer modelling of all structures, digital photography from surgical procedures, texture mapping (applying colour information onto 3D surfaces) and interface development to support user interactions, trials and testing. Verification and validation are conducted at every development stage, with final results being presented to a clinical advisory panel which met every three months throughout the project period. Figure 2. Development workflow Data Construction An important consideration with a project of this size is the sustainability of the workflow, tools and datasets that are generated over its duration. One of the dangers of working within the computing industry is that software and proprietary data formats can often become obsolete over time, resulting in data that cannot be read or used. Consequently, it is good practice to ensure that any valuable data is stored using open and well-documented formats to ensure that it is not reliant on a single company or piece of software to be used. 
We adopted such an approach for the storage and preservation of the head and neck anatomical datasets. The data generated over the course of this project can be separated into two groups. The first is the raw data such as photographic evidence and point cloud data generated from white light and laser scanners. The second is the processed data created by the modelling team. The non-specialist nature of the raw data enables us to store all of it using open file formats such as the Portable Network Graphics (PNG) format for photographs and images and the Stanford Triangle Format (PLY) for scan data. However, in order to process this data and create the anatomical models, more specialist proprietary tools and data formats are required. Since this cannot be avoided, we used industry standard tools such as Autodesk’s Maya and Pixologic’s ZBrush to generate and store these models. In order to ensure long-term sustainability, the textured meshes are also exported and stored using an open format such as COLLADA to insure against any of these products becoming obsolete in the future. Data Acquisition We have developed a completely new approach to medical visualization. Our novel data construction workflow and validation process uses donor cadaveric material and through the process of destructive dissection and staged, high resolution laser scanning allows us to produce an accurate 3D anatomical model of head and neck anatomy. To ensure accuracy and a true likeness of the human body, this dataset was validated by the project’s Clinical Advisory Board, which comprises anatomists, clinicians and surgical specialists. Our dataset and interface development focuses on real-time interaction and simulation (not pre-rendered animations) that allows users to fully interact with all aspects of the anatomy and can be fully investigated in 3D at any scale, from a laptop or mobile device to a fully immersive environment (see Figure 1). 
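Storing scan data in an open format such as the Stanford Triangle Format (PLY) keeps it readable without any proprietary tooling. To illustrate why this matters, the sketch below is a minimal ASCII PLY reader in Python using only the standard library; the one-triangle sample file is illustrative data invented for this example, not project data.

```python
# Minimal ASCII PLY (Stanford Triangle Format) reader: a sketch showing
# that open, well-documented formats can be parsed with a few lines of
# standard-library code. Binary PLY and extra properties are out of scope.

def read_ascii_ply(lines):
    """Parse ASCII PLY given as an iterable of lines.
    Returns (vertices, faces): vertices as (x, y, z) tuples,
    faces as tuples of vertex indices."""
    it = iter(lines)
    assert next(it).strip() == "ply"
    counts, order = {}, []              # element name -> count, in order
    for line in it:
        tok = line.split()
        if tok and tok[0] == "element":
            counts[tok[1]] = int(tok[2])
            order.append(tok[1])
        elif tok and tok[0] == "end_header":
            break
    vertices, faces = [], []
    for name in order:
        for _ in range(counts[name]):
            tok = next(it).split()
            if name == "vertex":
                vertices.append(tuple(float(v) for v in tok[:3]))
            elif name == "face":
                n = int(tok[0])         # first value is the vertex count
                faces.append(tuple(int(v) for v in tok[1:1 + n]))
    return vertices, faces

# Tiny illustrative PLY file: three vertices, one triangle.
sample = """ply
format ascii 1.0
element vertex 3
property float x
property float y
property float z
element face 1
property list uchar int vertex_indices
end_header
0 0 0
1 0 0
0 1 0
3 0 1 2
""".splitlines()

verts, tris = read_ascii_ply(sample)
print(len(verts), tris[0])   # 3 (0, 1, 2)
```

Because the format is plain text with a self-describing header, archived point clouds remain usable even if every tool that wrote them becomes obsolete.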
• Selection of Specimen The identification of a suitable embalmed male Caucasian cadaver between the ages of 50 and 65 was the starting point for this work package. This was identified from the regular stock in the Laboratory of Human Anatomy, School of Life Sciences, College of Medical, Veterinary and Life Sciences at the University of Glasgow. The cadaver had no obvious signs of pre-existing craniofacial abnormalities. All procedures were carried out under the Anatomy Act 1984 [24] and the Human Tissue (Scotland) Act 2006, part 5 [25]. This was undertaken by the government’s Licensed Teacher of Anatomy, one of the co-authors (PR). This formed the basis of the definitive 3D digital human that was developed by DDS, with the head and neck region comprising the element to be described. Minimal pre-existing age-related changes were present; this is significant because the majority of cadavers are of those who have died in old age, many of whom are edentulous. The alveolar bone and teeth were recreated digitally based on laser scans of disarticulated teeth held at Glasgow Dental School. • Dissection and data capture of head and neck soft tissue Ultra-high resolution 3D laser scanning supported by high-resolution colour imaging capture was performed on the cadaver before formaldehyde embalming. A Perceptron Scanworks V5 3D laser scanner [26] was used to capture accurate data of the surface geometry. Intra-oral scanning was also performed prior to the preservation process to allow accurate reconstruction of dental-related anatomy (and to establish key anatomical and clinically relevant landmarks) while there is pliability/mobility at the temporomandibular joint. The embalming procedure was carried out through the collaboration of a mortician and a qualified embalmer, supervised by a Licensed Teacher of Anatomy in the Laboratory of Human Anatomy, University of Glasgow. 
The eyes were injected with a saline solution post-embalming to maintain their life-like contour, a technique established in anatomical and surgical training for ocular (and ocular-related) surgical procedures designed by the Canniesburn Plastic Surgery Unit, an international leader in plastic and reconstructive training, research and clinical expertise. Skin and subcutaneous tissue were meticulously dissected (using standard anatomical techniques) from the head and neck territories, with the health and safety precautions typical when working with cadaveric tissue. Superficial muscles, nerves, glands and blood vessels were identified. Scanned muscles and attachments included the sternocleidomastoid, infrahyoid muscles, muscles of facial expression (including those around and within the eyes and nose) and the superficial muscles of mastication, including masseter and temporalis, all of which have important clinical and functional applications. The superficial nerves captured at this stage were the major sensory and motor innervations of the head and neck, including the trigeminal and facial nerves, and specifically the termination of the facial nerve onto the muscles of facial expression. Scanned glands included the major salivary glands, i.e. the parotid, submandibular and sublingual glands, as well as the endocrine thyroid gland. The blood vessels identified at this stage were the facial vessels as well as the jugular venous drainage of superficial anatomical structures. Deeper dissection of the head included data capture of the oral-related musculature, including genioglossus, geniohyoid, mylohyoid, the lateral and medial pterygoids, digastric and buccinator, amongst others. These muscles are significantly important in oral function and have immense clinical importance for dental training. The related nerve and blood supply to these structures were captured as previously described. 
Neck dissection (down to the thoracic inlet) proceeded deeper to identify and capture major and minor structures at this site. Blood vessels (and related branching) were meticulously dissected to demonstrate arterial supply and venous drainage, including the common carotid, subclavian and brachiocephalic trunk. Venous drainage includes the internal jugular and subclavian veins, and all tributaries. The relationship of these blood vessels to important nerve structures in the neck demonstrated their close proximity to other structures, including the vagus and phrenic nerves, the sympathetic trunk and the brachial plexus in the neck (supplying motor and sensory innervation to the upper limbs), and other closely related sensory and motor innervations. The larynx and trachea, as well as the oesophagus, were also included in soft tissue structure identification and data capture to clearly show anatomical relations for important clinical procedures, e.g. cricothyroidotomy, tracheostomy and airway intubation. Following every stage of identification of all relevant anatomical structures in the head and neck, Perceptron Scanworks V5 laser scanning [26], supported by the 3D mesh processing software package PolyWorks V12 [27] (which aligns the partial scans and generates a mesh surface), was performed prior to the next, deeper dissection. (Figure 3 shows a polygon mesh model generated from raw high-density 3D point clouds). This enabled a complete dataset to be recorded at all stages of dissection, which can then be constructed and deconstructed by users as relevant to the training required, thus creating unique spatial awareness training. Figure 3. 
Surface mesh generated from raw point cloud data • Scanning of skeletal structures After 3D data capture of the soft tissue, it was removed (apart from the ocular structures) to demonstrate the skeletal structures of the head and neck, including the neurocranium (calvaria and cranial base), viscerocranium (facial skeleton), mandible and vertebrae. This enables identification of all relevant foramina through which the cranial nerves exit/enter the skull and mandible. The Perceptron Scanworks V5 3D laser scanner and PolyWorks were used again to capture the skeletal structures and to process the point cloud data. Individual laser scans from multiple views of the skull were fused to create a high-poly surface mesh; the skull is the only structure where the mesh generated from scan data is used directly in the Head and Neck model. The geometric models of soft tissue from scan data were found not to accurately represent their natural shape, due to soft-tissue deformation resulting from gravity and realignment of the underlying bone structure. Therefore, soft tissue scan data and their meshes are used as references in the modelling process, and all structures are validated by anatomy specialists to ensure accuracy. • Intracranial scanning At this stage, the vault of the skull and the brain were removed and the same laser scanning and 3D meshing were performed to record specifically all twelve pairs of cranial nerves; cerebral tissue, including the parietal, frontal, temporal and occipital lobes with related gyri and sulci; and cerebral vasculature (e.g. the Circle of Willis). Where the cranial nerves had been detached, repeat scanning of the base of the skull was undertaken to allow reconstruction of the full intracranial path of these nerves through the numerous foramina to their termination sites. 
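Fusing individual laser scans taken from multiple views, as done here with PolyWorks, rests on rigid registration of overlapping point clouds. As a sketch only (not the PolyWorks implementation), the Python below shows the Kabsch/Procrustes step that ICP-style alignment repeats once point correspondences are available; the point set and ground-truth transform are synthetic, chosen just to demonstrate recovery.

```python
# Sketch of the rigid-registration core used when fusing partial scans:
# given corresponding point sets P and Q, find rotation R and translation
# t minimising ||R @ P[i] + t - Q[i]|| (the Kabsch algorithm). NumPy is
# assumed to be available.
import numpy as np

def kabsch(P, Q):
    """Best-fit rigid transform with R @ P[i] + t ~= Q[i]."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)                 # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cQ - R @ cP
    return R, t

# Synthetic check: rotate and translate a point set, then recover the transform.
rng = np.random.default_rng(0)
P = rng.standard_normal((10, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.5, -0.2, 1.0])
Q = P @ R_true.T + t_true
R, t = kabsch(P, Q)
print(np.allclose(R, R_true), np.allclose(P @ R.T + t, Q))
```

In a full ICP loop, correspondences are re-estimated (typically by nearest neighbour) and this closed-form step is applied repeatedly until the alignment converges; commercial packages add robustness to noise and partial overlap.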
At this stage the visual pathway was established using the gross anatomy of the optic nerves, optic chiasm and optic tracts, with modelling of the lateral geniculate body combined with the previous capture of the midbrain and the occipital cortices. Intracranial vascular circulation was modelled based on standard anatomical and applied surgical knowledge. After the base of the skull was scanned, the roof of the bony orbit was exposed to capture the extra-ocular muscles, namely the levator palpebrae superioris, the superior, inferior, lateral and medial recti, and the superior and inferior obliques, incorporating their individual nerve supplies and the related vasculature and nerves surrounding these structures in each of the orbits. The cornea, anterior segment and retina were modelled based on existing anatomical knowledge and understanding. • Photorealistic Texturing At each stage, the capture of 3D topographic data using the Perceptron Scanworks V5 3D laser scanner was supported by high-resolution colour imaging. Since cadavers differ considerably in colour and texture from living tissue, and the shadows, specular highlights and occlusions in photographs also make them unsuitable for texture mapping [28], the photographic data of soft tissue were mainly used as references when building the geometry of the models. In order to produce a photorealistic and accurate model of the aforementioned structures, the skin surface, muscles and skeletal elements consist of several texture layers describing colour, glossiness and surface structure subtleties. These were achieved using a combination of photographs of living tissue, the poly-painting tool in ZBrush, and various other tools in Photoshop and Autodesk Maya. We produced visually realistic organ textures and appearances that are as close as possible to the natural colour of skin and healthy living tissue. Figure 4 shows work-in-progress texturing in Maya. Figure 4. 
Texturing in Maya The Complete Dataset of Head and Neck The produced high resolution measured dataset is grouped and identified in Maya in order to serve a wide range of anatomical outputs. This enables the interface to present appropriately tagged data to the user, encapsulated in logical subsets. The tagging process includes clinically relevant, anatomically built structures that present context-specific information on a case-by-case basis. A rendered head and neck model showing the muscle, nerve, and vascular layers is presented in Figure 5. The resulting complete dataset does not represent any one specific human body but comprises a comprehensive structure that captures and presents a normalised, unbiased anatomical model. Figure 5. Rendered head and neck showing the muscle, nerve, and vascular layers Interactive Software The uniqueness of the DDS model comes from the forging together of three key components: • the anatomical precision and accuracy of the constructed datasets; • the specialist clinical input to challenge and validate the model; and • the seamless interface allowing proficient user-interactivity in real-time, enabling meaningful feedback and learning. In order to use the head and neck model in a real-time application, low-poly models were created with normal maps baked from a ZBrush sculpt, decimating or re-topologizing to simplify the high-poly mesh. Another important and unique characteristic is the accompanying suite of manipulation tools, which afford both interactivity (for individual users) and interconnectivity (to support collaborative group usage). Figure 6 is a screenshot of the interactive Head and Neck Anatomy, showing a clipping plane at an arbitrary angle and the available functions in the vertical toolbar on the right. Apart from basic manipulation (such as object translation and rotation) and navigation controls (zoom, pan, etc.), it provides orthographic clipping planes as well as clipping planes at arbitrary angles. 
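An arbitrary-angle clipping plane of the kind provided here reduces to a per-vertex signed-distance test: the plane is a point p0 and a normal n, and a vertex v lies on the visible side when dot(v - p0, n) >= 0. The Python sketch below (illustrative only; the function name and data are invented for this example) culls whole triangles on the hidden side. Production viewers additionally split triangles that straddle the plane to produce the clean cross-section surface.

```python
# Sketch of arbitrary-plane clipping as used in interactive mesh viewers:
# classify each vertex by its signed distance to the plane and keep only
# triangles entirely on the visible side. NumPy is assumed available.
import numpy as np

def clip_triangles(vertices, triangles, p0, n):
    """Keep triangles whose vertices all satisfy dot(v - p0, n) >= 0."""
    signed = (vertices - p0) @ n          # signed distance per vertex
    keep_vertex = signed >= 0.0
    return [t for t in triangles if all(keep_vertex[i] for i in t)]

# Illustrative use: two triangles on either side of the plane x = 0.5,
# clipped with a normal pointing toward the -x half-space.
verts = np.array([[0.0, 0, 0], [0.4, 0, 0], [0.0, 1, 0],
                  [0.6, 0, 0], [1.0, 0, 0], [1.0, 1, 0]])
tris = [(0, 1, 2), (3, 4, 5)]
visible = clip_triangles(verts, tris,
                         p0=np.array([0.5, 0.0, 0.0]),
                         n=np.array([-1.0, 0.0, 0.0]))
print(visible)   # [(0, 1, 2)]
```

Because the test is a single dot product per vertex, the same classification can run every frame (or on the GPU as a shader discard), which is what makes interactive dragging of the plane responsive.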
The virtual cutting planes reveal cross-sections that resemble cryosection or CT imaging. The users can interact with the Head and Neck either through conventional input methods or through a combination of an Xbox controller and head tracking. Together with stereoscopic projection, the latter interface provides an immersive experience for users exploring the internal structure of the head and neck. The user can hide and reveal various tissue types, e.g. bones, muscles, nerves, vasculature (Figure 6-C), and conduct virtual dissection via a drag-and-drop function (Figure 6-A). An ‘explode’ mode (Figure 6-B) is available to allow the user to control an explosion via a slider, making the parts separate away from their original locations to reveal the inner details of the head and neck anatomy. Users can also save particular viewpoints and settings which can be loaded in the future (Figure 6-D). The interactive application also provides data annotation to support teaching and learning. Where clinically relevant, anatomically built structures are appropriately annotated with context-specific information on a case-by-case basis to include text-related content, linked diagrams and 3D images. Figure 6. A screenshot of the interactive Head and Neck Anatomy showing a clipping plane at an arbitrary angle and available features Verification / Sign off Working in collaboration with the 3D modellers in the team, every structure which had been dissected was validated during the modelling process, to ensure anatomical accuracy and a true likeness of the human body. The dataset was validated by the project’s Clinical Advisory Board, which comprised anatomists, clinicians and surgical specialists. Figure 7 shows an event of clinical validation. PR (one of the co-authors), a senior clinical anatomist and government Licensed Teacher of Anatomy, ensured complete anatomical accuracy of all structures constructed at every stage. Figure 7. 
Clinical validation by a clinical advisory board The skull and mandible were initially reconstructed from the laser-captured skull. A stringent schedule was then created at the beginning of the project, and adhered to, for creating each and every musculoskeletal, vascular, neural and glandular structure. The approach undertaken was to create each anatomical structure from the deepest (closest to the skull) to the most superficial, i.e. the skin (the reverse of a superficial-to-deep dissection). As every muscle, bone, nerve, blood vessel and glandular structure was created, it had to be moulded around the framework of the skull. This involved using the laser-scanned material and also the high-resolution digital photography which was captured for all anatomical structures, ensuring direct correlation of every structure with the dissected, laser-scanned and photographed components. All attachments, origins, terminations and pathways of every anatomical structure were meticulously created; the anatomist and digital modellers would assign a set of structures to be modelled each week over the duration of the project. This ensured the creation of a catalogue of all anatomical structures in the head and neck that had to be modelled from the dissected material. On completion of the modelling of each anatomical structure, it was reviewed for accuracy, initially as an individual element. As the work progressed and the model became more complex, each structure that was created and validated also had to be examined for its accuracy relative to each and every surrounding anatomical structure, to ensure complete rigour and accuracy in the position of all structures, including those nearby. This ensured a completely anatomically correct and robust model was being developed.
Where relevant, the model was also examined by a clinician operating in that field, to ensure not just that exceptionally accurate anatomical datasets were being created, but that all relevant surgical anatomy was clearly and accurately identifiable. This was crucial where the anatomy of the oral cavity, including the teeth, had to be created to an exceptionally high level of accuracy not seen to date. This involved a team of oral surgeons and senior dental clinicians working side by side with the modellers for the duration of the project. As each anatomical area (e.g. floor of mouth, intracranial territory, orbit, etc.) was created in the model, the modellers, anatomist, dental clinicians and surgeons reviewed that set of structures and went through a rigorous "signing off" process when the work in that area was complete. To verify the accuracy of the model, it was also externally examined and validated by senior academic clinicians specialising in head and neck anatomy. Again, this ensured that each and every anatomical structure gave a true-to-life representation, as well as ensuring the highest degree of accuracy of the anatomy created. Haptic Injection One of the procedures most commonly performed by a dental practitioner is anaesthetising the inferior alveolar nerve. The development of haptic feedback utilises rigid-body simulation with great accuracy to enhance student learning. Force feedback devices vary in complexity and feedback resolution and have a wide range of downstream applicable scenarios. To this end we use a six-degrees-of-freedom PHANTOM Omni for haptic interaction; as a cost-effective solution for larger numbers of users, off-the-shelf console controllers can also be utilised in this application. Figure 8 shows the haptic interface for training dental anaesthesia, i.e.
an injection which blocks sensation in the inferior alveolar nerve, which runs from the angle of the mandible down the medial aspect of the mandible and innervates the lower teeth, lower lip, chin, and tongue. The position, orientation and movement of the PHANTOM Omni stylus are linked to a dental syringe. To anaesthetise the nerve, the user inserts the needle (stylus) posterior to the model's last molar. The user can then press one of the PHANTOM Omni pen buttons (Figure 9), which triggers an injection animation, i.e. anaesthetic solution in a breech-loading syringe being injected into the soft tissues of the 3D model. The user feels resistance (force feedback) when soft tissue is touched, and the syringe tip moves smoothly while the injection button is pressed. Visual feedback is also provided through a viewpoint (Figure 8, the third small window on the left of the screen), with the anaesthetised areas turning red. The application also displays warning messages when, for example, the needle is positioned too far posteriorly and anaesthetic may be delivered into the parotid gland. This application allows dental students unlimited practice opportunities to become familiar with local anaesthesia at zero risk to real patients. Figure 8. Haptic interaction for training the injection of a local anaesthetic Figure 9. The two PHANTOM Omni pen buttons: one for injection, the other for reset. Conclusion and Future Work We have described the workflow of data construction, development, and validation of an interactive high-resolution three-dimensional anatomical model of the head and neck. The Head and Neck datasets represent a step change in anatomical construction, validation, visualization and interaction for viewing, teaching, training and dissemination. In the long term, the results obtained from this three-year project can be viewed as the framework on which to build future efforts.
Examples are: to expand models to whole-body systems (a project creating a female breast model for early detection of breast cancer has already begun); the inclusion of physiological processes (a pilot project on this, funded by the Physiological Society, has also begun); dynamic representations of the progression of diseases; deformable simulations to model the dynamics of living tissue and physiology, e.g. nerve movement, pulse, blood flow, hemodynamics, collision detection and elasticity, and to support surgical rehearsal; and the exploitation of new directions that may include internationalisation initiatives, commercialisation opportunities, and the establishment of partnerships with other socially concerned organisations.

www.palgrave-journals.com/dam © 2006 Palgrave Macmillan Ltd 1743-6540 $30.00 Vol.
2, 5, 231–236 JOURNAL OF DIGITAL ASSET MANAGEMENT 231 OVERVIEW Enterprise digital imaging departments are facing massive challenges in storage and asset management, with few complete solutions on the market that address their workflow needs. Photography has gone digital but the supporting workflows largely remain analog. The baseline for growth will not be set until true digital workflows are designed and built for enterprise digital photography users. Have a look at the delegate list for the 2006 HS Digital Asset Management (DAM) Symposium in NYC and you'll get a quick sense of how important asset management is becoming to networks, ad agencies and publishers. Every major enterprise has a VP- or Director-level position dedicated to asset management, as executives have seen the importance of managing the image lifecycle within the enterprise. This paper will focus on the pitfalls involved in managing enterprise digital photography. Professional photography is moving steadily towards a 100 per cent digital environment. The doubts regarding quality and cost are being removed month by month — 8-megapixel capture backs have become 22-megapixel backs, and are becoming 39-megapixel backs, hugely improving image detail and quality. The rest of the value chain — capture, retouching and print — is working with digital files and producing quality final visuals, which are the real priority. Rethinking asset management for enterprise digital photography Aaron Holm has been at the forefront of media technology innovation throughout his career, working in animation, audio engineering, broadcast television and digital asset management. In 2002, Aaron was recognized by the US Department of Homeland Security for his outstanding contributions in the field of interactive media technology. In 2004, Aaron founded Markham Street Media, a company dedicated to the development and deployment of advanced digital asset management systems.
Working closely with Industrial Color, MSM was instrumental in the technical design and development, project management and market development for GlobalEdit, a powerful on-line image management service that is experiencing explosive growth as a pioneer in the DAM industry. Since the successful collaboration, Aaron Holm has joined Industrial Color as Vice-President of Development and Integration. Industrial Color builds proprietary digital capture and asset management systems, with offices in New York City, Los Angeles and Miami. The company has 25 employees and produces and manages an average of 10 shoots per day in locations across the globe. Keywords: digital photography, digital workflow, creative group, megapixel, metadata structure Abstract Enterprise digital imaging departments are facing massive challenges in storage and asset management, with few complete solutions on the market that address their workflow needs. Photography has gone digital but the supporting workflows largely remain analog. The baseline for growth will not be set until true digital workflows are designed and built for enterprise digital photography users. Despite the tremendous qualitative improvements in digital photography, the first thing usually done after capture is printing contact sheets and shipping them around the country. Instead, better color calibration, bandwidth, web standards, metadata flexibility, industry knowledge and realized ROI are creating a sweet spot that will allow forward-thinking companies to revolutionize the medium of photography. This paper outlines the asset management issues that enterprise digital photography departments are facing today and seeks to provide a framework for how asset management technology and techniques can be critical in helping the industry move towards a fully digital imaging workflow. Journal of Digital Asset Management (2006) 2, 231–236.
doi: 10.1057/palgrave.dam.3650037 Aaron Holm Industrial Color Tel: +212 334 3353 Email: aholm@industrialcolor.com So the digital workflow exists, but with major analog gaps. What's the problem? Digital distribution is limited. Metadata structures are rigid. Storage is an afterthought. DISTRIBUTION IS LIMITED — CLOSER TO THE EDIT A typical photo shoot day will create 2,000 digital images. The Art Director and Creative group must find the images that they believe will ultimately produce a winning visual. Out of 2,000 images, only ten or 15 will survive the edit and become selects. The edit process usually consists of printing reams of contact sheets, which will be marked up by art directors, creative directors and photographers with grease pencils until a group of selects is chosen. This process has not changed since the analog days. The promises of speed and efficiency remain trapped because the required workflow is not available in digital form. The main issues here are that: 1. It's impossible to distribute 2,000 48MB RAW or 64MB processed TIFF files over the internet. 2. Manually shipping drives will not enable the collaboration required in the edit. Until recently there have been no real solutions to allow for the distribution of images from capture. Traditional software vendors have no incentive to transform editing into a digital process because they are focused on asset management as an end result, not as an integrated workflow. RIGID METADATA STRUCTURES The most basic issue presents a major challenge: how does one collect and catalog the date, location and talent info from the shoot? There are many mature methods of handling this issue, including pre-configured XMP or EXIF files, stamping images with metadata during processing, or assigning staff to catalog images that return from the shoot.
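The stamping approach can be sketched in a few lines. This is a hypothetical illustration (the field names and function are my own, not from any particular DAM product): shoot-level metadata is captured once and applied to every frame at ingestion, leaving edit-phase fields to be filled in later without a separate cataloguing step.

```python
# Minimal sketch (assumed field names, not a real DAM schema) of stamping
# shoot-level metadata onto every image at ingestion time, so cataloguing
# never becomes a separate manual step after the shoot.

def stamp_shoot_metadata(image_names, shoot_info):
    """Return one metadata record per image, seeded from the shoot."""
    records = {}
    for name in image_names:
        records[name] = {
            "file": name,
            # Shoot-level facts captured once, applied to every frame.
            "date": shoot_info["date"],
            "location": shoot_info["location"],
            "talent": list(shoot_info["talent"]),
            # Edit-phase fields start empty and are filled in later.
            "rating": None,
            "select": False,
        }
    return records

shoot = {"date": "2006-03-14", "location": "NYC studio 4",
         "talent": ["model_a", "model_b"]}
library = stamp_shoot_metadata(["IMG_0001.raw", "IMG_0002.raw"], shoot)
print(len(library))  # 2 records, each pre-stamped with the shoot facts
```

In practice the same record could be serialized to an XMP sidecar; the point of the sketch is only that the capture of metadata is scheduled at a known moment, with a known owner.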
It will always be a challenge to gather this data, but not enough attention is currently paid to understanding when the initial metadata should be captured and who should be responsible. Also, once the initial metadata is in place, it is often impractical or difficult to change it. The solution: design the metadata capture and edit process to individual enterprise requirements — then get it out of the way. Most people in the DAM industry view metadata as an archiving mechanism rather than part of the workflow, and thus divorce it from the creative process. The accurate capture and editing of metadata throughout the image lifecycle is crucial, as it will drive the options available to every other component. However, we needn't treat metadata as a sacred cow that is agonizingly detailed in its initial spec but then rigid and inflexible as our asset passes through the many hands that assign its value. Metadata should be seamlessly added and edited during every phase of our required workflow, using tools that require no knowledge of metadata technology from the user. Photographers are not required to understand the structural difference between a RAW and a TIFF file; why should Art Directors have to know what XMP stands for? Additionally, metadata should be considered a component of the overall quality of the image. As much as lighting, processing and color management are crucial to enterprise quality assurance standards for imaging, so too is metadata. It's helpful to view metadata as a QA issue to ensure the topic is a priority. STORAGE — THE SCARY MATH All of these images do have to make it somewhere for storage. They may be in transit during production or may be reaching their final destination.
This will depend on enterprise requirements and will vary from company to company, but one thing remains true: most participants are not prepared for the storage requirements of enterprise digital photography. Let's go back to our sample shoot: • A typical shoot will last 2–5 days. • One shoot day produces an average of 2,000 shots (Raw = 2,000, Jpeg = 2,000, High Res = 100 Tiffs; note: for entertainment = 2,000 Tiffs). • Current professional capture backs are 22 megapixel. File sizes for 22MP: Raw = 48MB, Jpeg = 2–6MB, Tiff = 64MB. • The new generation of digital capture backs available as of January are 39 megapixel. • Enterprise digital imaging departments often produce 50 or more shoots a year. If you don't have your calculator, that's a lot of data. That does not take into account the multiple retouched versions, layouts, proofs, etc. — tack on another 10 Terabytes for these requirements. Here's where we can see the true interdependence of the solutions suggested thus far. If we have not captured the important info regarding these image files, we are storing meaningless chaos and adding time to every subsequent image-related task. In addition, if the value judgments made during the edit phase were not captured electronically in metadata, we have no idea which files deserve the most attention. So, even if we've found a budget and hardware solution to store this mass of data, without an integrated workflow we don't have meaningful storage. Taking a drive of images and dumping it onto enterprise storage is not a workflow solution. The problem with this scenario is that we have introduced an additional requirement — that the images be catalogued before they can be found and worked on by others. To get around this snag, we should build systems that ingest jobs rather than dump images. A job is a unit that is relevant to the enterprise and can be understood by everyone.
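The "scary math" can be made concrete with a quick back-of-the-envelope script using the per-day figures quoted above. The assumptions are mine, for the sketch: a 3-day shoot (midpoint of 2–5), a 4MB average JPEG, and decimal units (1 TB = 1,000,000 MB); the 10 TB allowance for versions is the article's own estimate.

```python
# Back-of-the-envelope storage estimate using the article's figures.
# Assumptions (mine): 3-day shoot, 4 MB average JPEG, decimal units.

RAW_MB, JPEG_MB, TIFF_MB = 48, 4, 64          # 22-megapixel capture back
SHOTS_PER_DAY = 2000                          # Raw and Jpeg counts per day
HIGH_RES_TIFFS_PER_DAY = 100
DAYS_PER_SHOOT = 3
SHOOTS_PER_YEAR = 50
VERSIONS_TB = 10                              # retouches, layouts, proofs

day_mb = (SHOTS_PER_DAY * RAW_MB
          + SHOTS_PER_DAY * JPEG_MB
          + HIGH_RES_TIFFS_PER_DAY * TIFF_MB)
year_tb = day_mb * DAYS_PER_SHOOT * SHOOTS_PER_YEAR / 1_000_000

print(f"per day: {day_mb / 1000:.1f} GB")          # 110.4 GB
print(f"per year: {year_tb:.2f} TB")               # 16.56 TB of originals
print(f"with versions: {year_tb + VERSIONS_TB:.2f} TB")
```

Roughly 110 GB per shoot day and well over 25 TB a year once versions are included, which is why "just buy more drives" is not a workflow answer.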
Cataloging software should be replaced by ingestion workflow software that is capable of taking a meaningful collection of images and adding them to the library at the appropriate time, in a single action. There is a need to make storage meaningful. Ultimately, storage is a software problem, not a hardware problem. Disk space is not the issue. Go to most photographers' studios and you'll see the results of years of disk-drive marketing: daisy-chained 500GB and Terabyte drives booting in sequence as the petrified photographer's assistant watches. The story is typical in many enterprises. People have been told that disk space is cheap, but meaningful disk space is not. If these experienced knowledge workers actually have access to the images they need via our current generation of DAM systems, they will need to pull them out, do their work, and somehow put them back without offending the rigid structure of the DAM logic container; faced with this, creative workers often stay with the solution that works: just make prints. How do we begin to create solutions and truly digital workflows? Understand the businesses involved — advertising, entertainment, publicity and photography — and design DAM solutions for creative professionals. Understand the technology that's available and what's coming down the pipe. Avoid rigid structures. Understand where the analog workflow is limiting growth. Find good technology and project managers. Find new solution providers. What are the needs? The first need is for technology design and user requirements to find alignment. Asset management technology is typically designed for asset managers rather than the people who will do the valuable work. In this industry, the photographer, model, art directors, agencies, retouchers, publicists, talent and publishers all need to do work on an image in order for it to become valuable.
In addition, each of these contributors will work with the image in a different way, using different tools. Many of the tools required are available in software form. Products like Photoshop, and now Aperture, are industry standard and more than capable of providing a stage for creative people to work. The problem is that these software tools are not collaborative. In order to build the next generation of tools, we'll need to understand the needs of the current generation of creative professionals. It seems unlikely that the existing crop of DAM software vendors will be able to make this leap. They have years of development investment supporting technology that exists to find digital things, not to work on them. Apple and Adobe will likely continue to lead the way in creating desktop tools for creatives, but the door is wide open on the back end. The most important back-end enabler is the internet, and the first requirement for using the internet in a digital imaging workflow is to address the issue of versioning. Through its lifecycle, an image will travel over FireWire, SCSI, FTP, HTTP, Ethernet, etc. Systems built around the web must work with low-resolution images to be practical, but the retouchers and printers will ultimately require a high-resolution visual. To address this, we must find ways to link these assets and provide users with the images they require, over the medium available, at the time they are needed. In addressing this need at Industrial Color, we've learned that what's ultimately required is to design better interfaces, image processing systems, presentation rules, version control mechanisms and integrated storage and backup solutions. This is a significant and continuing challenge. The good news is that the technology available to technology and project designers is ready for the challenge.
More than ever before, if you can dream up a workflow, you can find capable partners to build your new generation of asset management tools. Typical asset management tools are designed to allow users to find stuff. There's usually a group of people that work to enter information to describe a given image in order to make that image searchable within the DAM system. The problem is not the ability of existing technology to match the needs of this market; it's that the technology is deployed for the wrong group of people. The Art Directors and Creatives that will ultimately work with images use Photoshop because it was designed to meet their needs. Most asset management systems were designed to meet the needs of people looking for stuff, not people looking to do work with stuff. Who benefits from this solution and how? Entertainment and advertising enterprises. Forward-thinking digital photography technology and service vendors. Heavy users and resellers of photography and imaging. Photographers, agents and agencies will benefit from the falling price and rising quality of asset management services. The greatest leaps are available for companies still on the fence about asset management, or with systems, budgets and staff that are flexible enough to move beyond monolithic systems. Entertainment companies, publishers and advertising agencies stand to gain the most, as they shoot the most. Here's a breakdown of some of the benefits lurking within the new, creative workflows: Increased production capacity through greater efficiencies in distribution and search. Increased production capacity through direct integration with electronic production systems – publishing, broadcast, print, pre-press, lab services, legal, HR. Decreased cost due to reduction in re-shoots and lost visuals. Decreased cost due to reduction in printing and shipping.
Increased client and partner satisfaction resulting from more uniform and reliable color management. Reduced legal expenditure resulting from formally tracked access and distribution. Expansion of the partner and value chain resulting from the removal of geographic constraints. Decreased time to market for final visuals. Asset management executives know very well how important the economic arguments are to funding new workflow projects and changing existing processes. As the image lifecycle involves much travel, it should follow that the new workflows will involve a mix of vendor services, DAM-enabled web applications, enterprise DAM at the client and, decreasingly, desktop software, all working together in a well-designed, defined and documented workflow. A good example of a new beneficiary is Stockland Martel — www.stocklandmartel.com. Representing some of the world's top assignment photographers, they receive a steady stream of digital images from shoots around the world. Considering the possibilities ahead of the hazards, they have found ways to place digital at the foundation of new business opportunities. The once obscure field of asset management is of great importance to companies like Stockland Martel. Matthew Goodrich, VP of Marketing, says, "New advances and techniques in asset management have helped us run better, market better, manage better and, most importantly, consider new markets and businesses." What are the risks? It requires heavy investment by both enterprise customers and technology service providers. Disrupting successful workflows in the name of innovation can lead to disaster. The biggest risk in adopting new techniques is that the existing, successful processes will be destroyed.
If you're a publisher who's currently operating at capacity, you are doing something right. There will be no room for error, as the magazines, catalogs and advertisements still must go out the door. For this reason, it's absolutely essential that companies in the existing value chain step up to work with their clients to improve their processes and begin to consider technology project management as a core business. What's the good news? The technology required to make this work has been proven in other sectors. Metadata standards can connect the dots between traditional photography and advanced software development techniques. Distribution and search. New business opportunities. Where we're all going is towards the digital workflow. Despite the tremendous qualitative improvements in digital photography, the first thing usually done after capture is printing contact sheets and shipping them around the country. Instead, better color calibration, bandwidth, web standards, metadata flexibility, industry knowledge and realized ROI are creating a sweet spot that will allow forward-thinking companies to revolutionize the medium of photography. The same requirements will exist in this as in previous industry changes: Competent managers making competent decisions in full view of the economic realities. Capable, experienced vendors and partners fully committed to the long-term success of every project. Well-researched decisions regarding technology and consulting spending. Experienced project management with a healthy fear of hubris. Careful vendor selection matching strengths with requirements. We shouldn't have to invent any core technology here.
We should thank the folks in the financial, telecom and insurance sectors for making massive investments in R&D that have led to the current level of sophistication available for our development purposes. Forward-thinking organizations like DISC (http://www.disc-info.org) have also provided industry leadership. Storage, security, metadata, bandwidth and browser technology are more than adequate for our needs – the challenge is good user interface design. Show an art director an environment that they understand immediately and they won't care how it brings everything together; they'll just get to work.
work_llfv4rdto5bp7nd63czfdkriou ---- Mapping ice front changes of Müller Ice Shelf, Antarctic Peninsula. Antarctic Science 7 (2): 197–198 (1995)
Short note
Mapping ice front changes of Müller Ice Shelf, Antarctic Peninsula
CAROLINE G. WARD
British Antarctic Survey, High Cross, Madingley Road, Cambridge, CB3 0ET
Present address: Coventry University, Priory Street, Coventry, CV1 5FB
Introduction
Müller Ice Shelf (67°15'S, 66°52'W) is situated at the southern end of Lallemand Fjord. It is a small ice shelf (c. 80 km²) fed by Brückner and Antevs glaciers, which both flow northward off the central peaks of Arrowsmith Peninsula; the ice shelf contains an ice rise (Humphreys Ice Rise). Data sources have indicated that not only has the ice front retreated since 1947 but also that there have been two advances. This paper describes how these changes were recorded using simple photogrammetric techniques.
Methodology
Data sources
Seven sources of data were used (Table I), spanning the period 1947–1993. These included both oblique and vertical aerial photographs, and visible satellite imagery. The quality of the images used varied greatly.
Techniques
A variety of techniques were used to compile the map of Müller Ice Shelf, depending upon the format of the data. The ice fronts identified on the oblique aerial photographs were sketched in by hand, those from satellite images digitized directly from the photographic product or digital data on screen, and vertical aerial photographs were interpreted using a radial-line plotter. Once the ice fronts had been drawn onto an existing base map, the cartographic data were edited using a Laser-Scan software package for editing geographic information and map data. The initial base map had been compiled by conventional cartographic methods, at 1:250 000 scale, for a BAS geological project (Moyes et al. 1994) and digitized subsequently.
The details of the base map were improved by overlaying it on the 1986 digital Landsat image and, with the help of the IfAG photographs, the coastline and grounding line were digitized at a larger scale.
Oblique aerial photographs. By far the most approximate ice fronts mapped were those sketched from the oblique aerial photographs. These were estimated by noting significant features on the photographs (e.g. the position of the ice front at the mainland coast and at Humphreys Ice Rise) and extrapolating a line from these points.
Satellite images. The satellite images, available as photographic products at 1:250 000 scale and as digital data, are of a resolution that clearly depicts the ice front. Using known control points from the BAS triangulation network, the images could be positioned accurately with respect to the base map. Ice fronts were digitized directly using a graphics package, which allowed accurate mapping to better than 100 m.
Vertical aerial photographs. Radial distortion from the principal point of the image is enhanced by the change in elevation of the ground surface, although on an ice shelf this change is negligible. Because large-scale photographs, such as the 1956 FIDASE series, would have produced a noticeable radial error in the ice front data, tracing was regarded as too inaccurate for this project. A Watts radial-line plotter, which is designed to transfer planimetric detail directly from vertical stereoscopic photographs onto a base map, was used instead. By using radial-line triangulation, the displacement produced by relief and radial-lens distortion is compensated for by the intersection of images from an overlapping pair. The plotter was used to create a larger-scale base map of the coast, grounding line and rock outcrop polygons of the area around Müller Ice Shelf; these details were subsequently digitized and merged with the existing map data to enhance the earlier map.
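The digitized ice-front polygons ultimately feed an area-per-year analysis performed in the GIS. As a minimal illustration of the underlying computation (not the Laser-Scan/GIS workflow used in the study), the area enclosed by a closed polygon of digitized vertices can be obtained with the shoelace formula; the coordinates below are invented, not survey data:

```python
def polygon_area_km2(vertices_km):
    """Area enclosed by a closed polygon, via the shoelace formula.

    vertices_km: list of (x, y) vertex coordinates in km, in order;
    the polygon is closed implicitly (last vertex joins the first).
    """
    twice_area = 0.0
    n = len(vertices_km)
    for i in range(n):
        x1, y1 = vertices_km[i]
        x2, y2 = vertices_km[(i + 1) % n]
        twice_area += x1 * y2 - x2 * y1
    return abs(twice_area) / 2.0

# Hypothetical digitized outline (km): a 10 x 8 km rectangle.
outline = [(0, 0), (10, 0), (10, 8), (0, 8)]
print(polygon_area_km2(outline))  # → 80.0
```

Running the same computation on each year's digitized outline yields an area-versus-time series of the kind graphed in Fig. 1.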
After transferring the digital data into a GIS package (ARC/INFO software), the area of the ice shelf could be calculated for each year mapped and the variations with time assessed (Fig. 1).

Table I. Details of data sources used.

Source              Format                                  Date   Scale of data
RARE                Oblique aerial photography              1947   –
FIDASE              Vertical aerial photography             1956   1:27 000
TMA                 Oblique aerial photography              1963   –
LANDSAT             Satellite image (photographic product)  1974   1:250 000
LANDSAT             Satellite image (digital)               1986   1:250 000
IfAG                Vertical aerial photography             1989   1:70 000
BAS                 Vertical aerial photography             1993   1:25 000, 1:20 000
BAS (Moyes et al.)  Digital topographic base map            1994   1:250 000

RARE = Ronne Antarctic Research Expedition, FIDASE = Falkland Islands Dependencies Aerial Survey, TMA = Trimetrogon aerial photographs, IfAG = Institut für Angewandte Geodäsie, BAS = British Antarctic Survey.

https://doi.org/10.1017/S0954102095000265 Downloaded from https://www.cambridge.org/core (Carnegie Mellon University, 06 Apr 2021), subject to the Cambridge Core terms of use, available at https://www.cambridge.org/core/terms

Fig. 1. Map showing the retreat and advance of the Müller ice front between 1947 and 1993. The graph indicates changes in the area of Müller Ice Shelf with time; patterned lines (with dates) correspond to ice-front ornaments on the map.

Results
The map of the Müller Ice Shelf (Fig. 1) with seven ice fronts indicates that the ice shelf has decreased in size over the 46-year period. However, the regression pattern is not simple, since a rapid advance of the ice front occurred between 1947 and 1956, increasing the ice shelf area from approximately 51 km² to c. 78 km². A second period of ice front advance, 1974–1986, represents an area increase of c. 4 km². These results do not agree with those documented by Domack et al.
1995, who found no noticeable change during the period 1947–1974 and from 1974 to the present day. Despite the range in quality of the source data, the variations in the position of the ice front with time, as recorded by this study, are considered to be real. The ice shelf does appear to be undergoing some form of change, reaching a minimum extent in 1993. Over the past 46 years it has experienced a variation of c. 31 km², 40% of its maximum extent in 1956. Similar changes in ice shelf areas have been recorded elsewhere in the Antarctic Peninsula (Doake & Vaughan 1991, Skvarca 1993), possibly reflecting recent climate variability in the region (Morrison 1990, King 1994).
Acknowledgements
I wish to thank Paul Cooper (British Antarctic Survey) for his help in providing the area analysis data and the graph included in Fig. 1, and also Adrian Fox, Andrew Perkins and Janet Thomson (British Antarctic Survey) for their technical advice during this project. The project is being carried out as part of an undergraduate final year thesis.
References
Doake, C.S.M. & Vaughan, D.G. 1991. Rapid disintegration of the Wordie Ice Shelf in response to atmospheric warming. Nature, 350, 328–330.
Domack, E.W., Ishman, S.E., Stein, A.B., McClennen, C.E. & Jull, A.J.T. 1995. Late Holocene advance of the Müller Ice Shelf, Antarctic Peninsula: sedimentologic, geochemical and palaeontologic evidence. Antarctic Science, 7, 159–179.
King, J.C. 1994. Recent climate variability in the vicinity of the Antarctic Peninsula. International Journal of Climatology, 357–369.
Morrison, S.J. 1990. Warmest year on record in the Antarctic Peninsula? Weather, 45, 231–232.
Moyes, A.B., Wnuw, C.F.H., Thomson, J.W. et al. 1994. Geological map of Adelaide Island to Foyn Coast. BAS GEOMAP Series, Sheet 3, 1:250 000 scale map with supplementary text, 60 pp. Cambridge: British Antarctic Survey.
Skvarca, P. 1993. Fast recession of the northern Larsen Ice Shelf monitored by space images. Annals of Glaciology, 17, 317–321.
work_lm6seekudfetnfywvi36z4d4xe ---- Instrumentation Viewpoint / 11 / MARTECH 11
Abstract – Determining the volume of an animal can help us to understand certain aspects of its hydrodynamics and thermodynamics, such as behavioral thermoregulation and energy consumption. This determination is difficult due to the irregular shapes of the animal's body. On the other hand, large calibrated tanks are needed for measurements and this is not practical. We have designed an innovative mechanism to register morphological characteristics in a fast, low-cost way, using standard digital photography.
Keywords – measure morphological data, photography, mesh, low-cost, volume, shark
I. INTRODUCTION
Pelagic sharks play a vital role in maintaining the balance of marine ecosystems. They have been under increasingly intense fishing pressure due to a higher demand for shark products. This over-exploitation affects populations that are generally fragile and is leading some species to the brink of extinction. In this context, more investigation is necessary to understand these animals. Determining the volume of an animal helps us to understand aspects of its hydrodynamics and thermodynamics, such as behavioral thermoregulation and energy consumption. Calculating the volume of animals is complicated because they have irregular body shapes. On the other hand, large calibrated tanks are needed for measurements and this is not practical. In this work we present an innovative system to calculate volumes by image analysis. In a first step, we need to create a mesh in 3 dimensions from two photographs of 2 dimensions.
In a second step, a program determines the volume by analyzing the figure. It is important that the mesh is well adapted to the body. This system could be used for the automatic counting and monitoring of other species in the environment.
II. OBJECTIVES
1. Finding a way to accurately measure the volume of large sharks. This method should be usable on board ships.
2. Quantifying the error of creating a 3D model from 2 photos.
3. Implementing the process for measuring volumes in other species.
III. EXPERIMENTAL CHARACTERIZATION
We worked with the blue shark (Prionace glauca). Preliminary tests had helped us to set the range of measures that we were bound to work with, from 0.5 to 15 l. We conducted experiments with small sharks. These sharks' volume was calculated accurately as they could be introduced in test tubes. Indeed, the only way to accurately measure volume is through volume displacement techniques, using a big test tube (2 meters high and 400 mm in diameter), specifically designed for the test. In a second step we set the characteristics of the photos that we would use for the estimation of the volume: zenithal position and lateral position. Finally, we took zenithal and lateral pictures of the shark against a background scale. This enabled us to establish common axes. On the computer, the photos are transferred to the Blender program, an open-source 3D graphics application that can be used for modeling and creating interactive 3D applications. It creates a 3D NURBS mesh and converts it into a triangulated mesh to calculate the volume. This calculation is performed by AdMesh software (a program for processing triangulated solid meshes in STL file format) that checks the integrity of the mesh and closes the holes. The software gives the result of the volume by finite elements. This result is compared with the one obtained in the test tube.
IV. PRELIMINARY RESULTS
The first 3D models had an error of approximately 24%.
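The mesh-volume step (the computation AdMesh performs on the closed, triangulated STL mesh) is conventionally a divergence-theorem sum of signed tetrahedron volumes, one per facet. A minimal sketch, with a unit cube standing in for a shark mesh (illustrative only, not the actual toolchain code):

```python
def mesh_volume(triangles):
    """Volume of a closed, consistently oriented triangulated mesh.

    triangles: iterable of ((x, y, z), (x, y, z), (x, y, z)) facets.
    Each facet contributes the signed volume of the tetrahedron it
    forms with the origin; contributions outside the solid cancel.
    """
    total = 0.0
    for (ax, ay, az), (bx, by, bz), (cx, cy, cz) in triangles:
        # Scalar triple product a . (b x c) = 6 * signed tetra volume
        total += (ax * (by * cz - bz * cy)
                  + ay * (bz * cx - bx * cz)
                  + az * (bx * cy - by * cx))
    return abs(total) / 6.0

# Unit cube as 12 outward-oriented triangles (vertex indices into V).
V = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0),
     (0, 0, 1), (1, 0, 1), (1, 1, 1), (0, 1, 1)]
F = [(0, 2, 1), (0, 3, 2), (4, 5, 6), (4, 6, 7), (0, 1, 5), (0, 5, 4),
     (2, 3, 7), (2, 7, 6), (0, 4, 7), (0, 7, 3), (1, 2, 6), (1, 6, 5)]
cube = [(V[i], V[j], V[k]) for i, j, k in F]
print(mesh_volume(cube))  # → 1.0
```

A mesh with holes gives a wrong answer here, which is why the hole-closing integrity check mentioned above matters before the volume is computed.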
As we made new measurements with larger animals and obtained new meshes, the error decreased to under 10%.

                           Tiny shark   Medium shark   Large shark
Real measure (l)           0.185        1.98           10.95
3D mesh, with fins (l)     0.230        2.28           13.3
3D mesh, without fins (l)  0.195        2.15           11.9
Error, with fins           24.3%        15.1%          51.4%
Error, without fins        5.4%         8.6%           8.6%

An interesting observation is that taking the shark fins into account on the 3D model increased the final error by a constant value. On the other hand, the mesh is easily adaptable to a spindle-shaped smooth profile, but does not register the nooks and crevices of a real animal (mouth, holes). We believe that both errors compensate each other.
V. CONCLUSIONS
The volume measurement of large sharks is not easy to perform accurately, let alone on board a ship. The proposed model is the indirect calculation through photographs and finite approximations through a digital model of the shark. This process gives us a 10% error in the calculation, which is better than the one obtained in the field.
VI. REFERENCES
http://www.blender.org/
https://sites.google.com/a/varlog.com/www/admesh-htm
Sims, D.W., Wearmouth, V.J., Southall, E.J., Hill, J.M., Moore, P., Rawlinson, K., Hutchinson, N., Budd, G.C., Righton, D., Metcalfe, J.D., Nash, J.P. and Morritt, D. (2006), Hunt warm, rest cool: bioenergetic strategy underlying diel vertical migration of a benthic shark. Journal of Animal Ecology, 75: 176–190.
doi: 10.1111/j.1365-2656.2005.01033.x
Ignacio Gonzalez 1, Gonzalo Mucientes 2,3
1 Unidad de Tecnologías Marinas (UTMAR), Fundación CETMAR (Centro Tecnológico del Mar); 2 Área de Transferencia de Tecnología, Fundación CETMAR (Centro Tecnológico del Mar); 3 Instituto de Investigaciones Marinas (IIM-CSIC). All at: C/ Eduardo Cabello 6, 36208 Vigo (Pontevedra). E-mail: igonzalez@cetmar.org. Tel: (+34) 986 247047
SYSTEM CALCULATION OF VOLUMES BY IMAGE ANALYSIS ON SHARKS
work_lml6aldnt5bqzmwe23lnobnmdi ---- Études photographiques, 24 | novembre 2009: Elites économiques et création photographique [Economic elites and photographic creation]
From the Picture Archive to the Image Bank. Commercializing the Visual through Photography: The Bettmann Archive and Corbis
Estelle Blaschke
Electronic version: URL http://journals.openedition.org/etudesphotographiques/3435; ISSN 1777-5302. Publisher: Société française de photographie. Printed version: date of publication 9 November 2009; ISBN 9782911961243; ISSN 1270-9050.
Electronic reference: Estelle Blaschke, « From the Picture Archive to the Image Bank », Études photographiques [Online], 24 | novembre 2009, online since 21 May 2014, connection on 19 April 2019. URL: http://journals.openedition.org/etudesphotographiques/3435
1 Acting as both filters and catalysts, photographic agencies and commercial photo archives have widely influenced photographic production and reproduction in their attempt to satisfy and to continuously stimulate the ever-increasing demand for visual imagery.
The acceptance of the substitution of the referent by a photograph underpins the creation and growth of the picture market1 and the development of a ‘picture economy.’2 This acceptance, plus the reproducibility of the image allow for the establishment of the market value for photography. In 1859, Oliver Wendell Holmes, in reference to the sales potential of stereoscopic photographs, stated ‘Form is henceforth divorced from matter.’3 Ever since, photographic agencies and commercial archives have focused on how to sell photographic pictures as surrogates of the objects and the ideas they represent. The accumulation, management, and archiving of photographs are fundamental conditions in the context of the picture market, as they guarantee the sustained exploitation of the imagery and constitute the economic basis of their distribution. These private agencies and archives have become repositories for a facet of photographic history, one in which the product is shaped by its particular setting and economic parameters. This article examines the construction of value for both analogue and digital photography by tracing the development of two different economic models – the picture archive and the image bank – using the examples of the Bettmann Archive and Corbis respectively. It also considers the consequences of economic efficiency on the management and use of photographs. 
Producing a 'Circulating Library of Authentic Photo-prints' 2 Otto Bettmann, born in Leipzig, Germany, in 1903, studied history, art history, and sociology and was awarded a doctorate for his dissertation on the rise of professionalism in the book trade in eighteenth century Germany.4 From 1928 to 1932, he was employed as librarian and rare books curator by the Staatliche Kunstbibliothek, the state art library in Berlin, where he began photographing illustrations from art books and other printed matter with a 35 mm camera.5 This became the foundation for a photographic collection, with a series of medical illustrations from historic prints constituting the first systematic group. Dismissed from his position at the library in April 1933, the result of the Law for the Restoration of the Professional Civil Service implemented by the National Socialist regime, Bettmann established the commercial photo archive Bildarchiv Dr. Otto Bettmann/Berlin. 3 Though this endeavor was destined to fail under the worsening political situation, Bettmann used the time, until his emigration to New York in November 1935, to develop a more comprehensive business model and to systematically expand his collection. By the time he left for the United States, his collection comprised ten thousand photographic reproductions, notably of illustrations and paintings housed in European museums and libraries. 4 Parallel to collecting images, Bettmann developed a particular indexing system, applying his training in classification systems, reference work, and cataloguing to the needs of a picture archive.6 This system relied on a card index combining two types of data: the visual in the form of a 'thumbnail' photograph, and the textual information.
The decisive factor for selecting pictures was their iconographic content, their potential as 'subject pictures.'7 Bettmann, 'the picture-librarian,'8 applied this method of viewing pictures according to their depicted themes in the 1931 exhibition on Reading and Books in Graphic Art and Painting which he had organized for the annual book day at the state art library. By analyzing an image with 'subject eyes,'9 Bettmann identified a number of descriptive and associative keywords, which, in combination with the caption and date, would form the metadata of the picture. 5 Aware of the inherent value of his collection, Bettmann took the negatives and the corresponding card index with him to New York. This was the initial capital of the Bettmann Archive, established in 1936. Soon, the classification system was augmented by new categories, including that of 'Americana,' illustrations of American history, politics, and society, which came to represent the majority of the archive. As he had done in Germany, Bettmann obtained pictures by leafing through and photographing the material in public libraries, such as the Library of Congress in Washington, DC, and the New York Public Library, in particular its prints and photographs division. Owing to their liberal policies (libraries were considered the guardians of the books, but not of their visual contents), Bettmann was able to amass a considerable quantity of reproductions, paying little or no fees. His business idea entailed transforming publicly accessible image resources, whether inherently valuable or not, into commercially viable products by reproducing, evaluating, and indexing them. Issues of ownership or copyright were ignored completely. 6 Fascinated by the wealth of American libraries and their general accessibility, Bettmann appropriated what, in his view, was public property – the picture – by means of photography.
This appropriation expressed itself unmistakably in the form of the release agreement printed on the back of the photograph, insisting on the mention of the Bettmann Archive in the credit line whenever the image was used. Yet, in an article published in the Wilson Bulletin for Librarians in April 1939, Bettmann pleaded: the ‘thousands of pictures in art books, periodicals, manuscripts, etc. … these picture sources have to be freed and made available in a systematic form.’10 The Bettmann Archive sought to be understood as a ‘circulating library of authentic photo-prints,’ 11 as a library of pictures. 7 Besides the appropriation or ‘liberation’ of library stock, the Bettmann Archive acquired various private and commercial picture collections, as well as books and magazines, which, after being photographed, were resold or put up at auction, if a profit could be expected. The criteria for selection consisted of three vital preconditions: good reproducibility of the source image, instant readability, and thematic expressivity.12 Under the guiding notion of reviving historical representations, the depicted scenes of human experience and achievement had to prove relevant for the present day in order to be marketed and, as Bettmann put it, ‘to make history a useful and slightly profitable thing.’13 8 The material was, above all, directed towards publishers, advertisers, and designers. Whether understood as a resource for individual pictures, portfolios, illustrated texts, or as a ‘complete historic museum of your trade or industry,’14 the Bettmann Archive was to be a catalyst of ideas and also a picture finding machine – a service, culminating in the person of Otto Bettmann, the ultimate picture editor, who ‘finds pictures and eliminates them’ 15 to save the client work and time. 
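Mechanically, a card index of this kind, in which descriptive keywords attached to each picture record lead back to the categories holding a print, behaves like what would today be called an inverted index. A toy sketch with invented records and keywords (not Bettmann's actual holdings or categories):

```python
from collections import defaultdict

# Hypothetical picture records: id -> (caption, keywords).
pictures = {
    "B-0001": ("Apothecary at work, woodcut, 1568", {"medicine", "trade", "print"}),
    "B-0002": ("Reading room of a lending library, 1840", {"books", "reading", "library"}),
    "B-0003": ("Street vendor selling almanacs, 1790", {"trade", "books", "street life"}),
}

# Build the inverted index: keyword -> set of picture ids.
index = defaultdict(set)
for pic_id, (_, keywords) in pictures.items():
    for kw in keywords:
        index[kw].add(pic_id)

def lookup(*keywords):
    """A research request: intersect the id sets of all given keywords."""
    hits = set(pictures)
    for kw in keywords:
        hits &= index.get(kw, set())
    return sorted(hits)

print(lookup("books"))           # → ['B-0002', 'B-0003']
print(lookup("books", "trade"))  # → ['B-0003']
```

Each additional keyword narrows the result by set intersection, which mirrors how combining keywords narrowed a request in the card-based system.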
One Word Is Worth a Million Pictures 9 The efficiency of picture research, as eagerly promoted by the Bettmann Archive, relied on its particular classification and research method, based on its card index. Subsequent to a thorough and rigorous selection process, an individual record was created for each picture accepted into the archive. The index of keywords pointed to a register of categories and sub-categories in which a photographic print of the picture was to be found. Hence, one picture could appear in several categories and, depending on the client's request, could be used in a variety of contexts. This is why the extensive list of categories, the company's index, played such a crucial role in promoting the Bettmann Archive and attracting new clients. 'One word is worth a million pictures'16: in contrast to the practice of most photographic agencies and commercial picture archives, the Bettmann Archive did not advertise its most prominent images, but rather presented extracts of its index. While the archive comprised two hundred thematic categories with ten thousand reproductions in 1936, the number of categories had increased to ten thousand with more than one million pictures by 1961.17 The aim was not to have as many pictures as possible in any single category, but to have as many categories as possible in order to further the vision of an encyclopedic picture collection, a 'vast collection of photographic reproductions recording man's progress in every art, profession and trade – in all countries – in all ages.'18 10 The promotional index presented clients with an initial orientation to the collection and marked the quality that distinguished the Bettmann Archive from its competitors and from the public libraries and other institutions whose non-exclusive visual holdings Bettmann was offering for sale.
Through its research, screening, and re-evaluation of the images and the creation of a complex classification system, the Bettmann Archive claimed superiority over similar picture agencies. Despite the ‘uniqueness’19 of the available pictures, the prices, ranging from $5 to $500, were not higher than the usual rates for news or contemporary stock photography.20 Thus, the distinctive characteristic of the Bettmann Archive and the key to its success depended upon both the stringent selection of relevant historical illustrations and the instant retrieval of the photographs. 11 The classification system, and in particular the card index, was simplified over the years to increase economic efficiency. While the first series of German card templates, mostly handwritten, indicated the keywords under the field of ‘prospective buyers,’ the English templates listed them, in typewritten form, under the field ‘index,’ replacing the space for the provenance of the picture resources. Also the reverse side of the cards changed: anticipating the idea of a holistic ‘commodity tracking,’ the early templates were designed to indicate the clients’ names and addresses, as well as the pictures’ use and commercial success. However, these fields were later removed, as filling them in proved to be too time consuming and therefore not practical. In addition to the changes and adjustments that were made between the German and the English card templates, the translation of the metadata also implied the revision of keywords, whether in terms of addition, removal, or reinterpretation. 12 The card index as it survives today suggests that this visual indexing system, which, through its meticulousness reflected the academic approach promoted by the Bettmann Archive, applied only to those negatives and picture sources that were relocated from Europe to New York. 
It seems, in fact, that the indexing system was maintained only in the early years of the archive’s commercial activity and was abandoned with the development and increased professionalization of the firm. However, the thematic classification and the continuous expansion of categories remained integral to the structure of the archive. The picture research was now conducted with the help of a ‘category book,’ a system which could also be found at other photo agencies. 13 The simplification of the visual indexing system seems to reflect the paradox of the idea of the archive – with the theory of its classification systems on the one hand and their practical maintenance on the other hand, between the desire to accumulate and control masses of objects and the inevitable impossibility of doing so. Thus the idealistic aspirations of the librarian and academic were in opposition to the demands on the ‘picture man,’ who needed to uphold his competitiveness within the picture market. The fact that ‘commerce and culture are standing in a dialectic relation to one another’21 and that the one does not necessarily exclude the other has been pointed out by Cheryce Kramer in her remarks on Otto Bettmann’s doctoral dissertation.22 It is this dichotomy that characterized the Bettmann Archive throughout its existence and that was its driving force. 14 This becomes particularly apparent with respect to the books and booklets published by the Bettmann Archive and later by its publishing company Picture House Press. 
Starting in 1939 with a visual history of libraries and a series of Educational Forum Portfolios,23 Bettmann compiled several illustrated books using pictures from the archive, such as the Pictorial History of Medicine,24 the Pictorial History of Music,25 The Good Old Days; They Were Terrible!26 and The Bettmann Archive Picture History of the World,27 and indicated clearly that the pictures could be acquired via the Bettmann Archive. The fine line between visual historiography and sales promotion, between an educational or epistemological tool and a catalogue of goods, is particularly blurred in the case of the Bettmann Portable Archive, first published in 1966. As a 'graphic history of almost everything,' it presented 3,669 pictures, thematically arranged and cross-referenced. This portable version of the Bettmann Archive was meant to inspire, to convey knowledge, and to be 'a window in the house of pictures,'28 for both the general public and for potential clients internationally.29 The self-representation of Otto Bettmann as a restless, if somewhat peculiar academic, in combination with the institutional aura of the company's name, contributed to the archive's credibility and played a decisive role in establishing it as an authoritative source for historical pictures. 'Either You Grow or You Go' 15 While the Bettmann Archive30 had specialized in making historical imagery available through its service, it had to adapt to the fact that photography had become the leading picture medium; no longer merely a tool for reproduction, photography was now also a means of producing 'autonomous' pictures.
In response to this situation, the Bettmann Archive acquired, in the early 1970s, large photographic collections and former agency stock, ‘entire photo-morgues’31 such as the Gendreau Collection and the Underwood & Underwood Collection of stereographs, which the Bettmann Archive distributed on a commission basis. The archive’s stock grew sharply, as a result, adding historic photography to its vast stock of photographic reproductions of illustrations in various media. In the case of the Gendreau Collection, however, Bettmann objected that it scarcely added a more modern facet to the existing stock, as it was static, ‘neither old enough to fall within my own collection nor current enough to qualify as truly contemporary.’ 32 16 Moreover, every acquisition necessitated the integration of a foreign, often less developed, classification system into the existing one, leading to an undermining of the complexity of Bettmann’s original system. This dilution of complexity can be regarded as a direct consequence of the shift of the picture market towards the need to always supply more pictures and to continuously stimulate demand. 17 While Bettmann had anticipated the growing need to incorporate photographs into his collection of reproductions, his interest in the medium was clearly business oriented. He valued its unrivaled efficiency for reproducing and its reproducibility, as well as its standardizing format. Through photography, picture sources and materials of all kinds could be transferred into a single format, allowing not only for the amassing of images, but also the creation of an operational classification system. In the context of a commercial picture archive, the question of the photographic original – in any case a highly disputed concept in terms of photography – was irrelevant or handled pragmatically, since the original was defined as the available picture source, the master. 
18 Upon demand, prints were made and mailed out to customers as 13 x 18 cm 'authentic photo-prints.' The Bettmann Archive requested the return of the prints after their one-time use. If a master was damaged or lost, a new negative was produced. Thus, the negative became the vehicle for preserving a damaged Bettmann 'original' and thus ensuring its continued exploitation. And the negative was easily manageable in terms of archival storage. With the abandonment of the visual card index and the growing number of transactions, the registers were increasingly filled with 'authentic photo-prints,' often with several copies of the same image. 19 Regardless of the changing copyright laws for photography, the Bettmann Archive rarely credited the photographer. The reproductions were deemed to have become the property of the archive once a collection was purchased or reproduced. This is apparent in the 'photography chapter' of the Bettmann Portable Archive of 196633: for example, the reproduction of Walker Evans's photograph 'Penny Picture Display, Savannah, Georgia, 1932' is presented with the caption 'Advertising in Window of Midwestern Small Town Portrait Studio, 1932.' There is no mention of the photographer's name. Photographs, whether art, news, or object photography, were not considered any different from other picture sources. Photographs were 'subject pictures,' endlessly reproducible, authorless, and, therefore, quasi-public property. The self-designation of Otto Bettmann as 'photo-historian'34 or 'backward photographer'35 and the advertising slogan 'Call on us for anything in photo-history'36 articulates a different historiography of photography – not the history of photography as a medium, but photography as a medium of history.
20 In 1981, Otto Bettmann retired and sold his company, with its collection of approximately three million pictures, to the publishing firm Kraus-Thomson Organization, which later merged the Bettmann stock with the photo collections of United Press International (UPI) and Reuters, bringing its holdings to a total of over sixteen million objects, including duplicates. Tellingly, Kraus-Thomson did not change the archive’s name, despite the takeover and relocation into new offices, and even though the Bettmann Archive was fused with the significantly larger UPI collection.37 From this time forward, the stock, including numerous ‘iconic images from the 20th century,’38 as well as photographs dating back to the 1860s, was marketed under the name The Bettmann Archive and also its photo-journalistic branch, Bettmann News Photos.39

A Formidable Digitized Oak: Juxtaposition vs. Hierarchy

21 In 1995, the Bettmann Archive changed hands once again, but in contrast to the Kraus-Thomson Organization, Corbis, the new owner, was a priori unfamiliar with the traditional picture market. A ‘visual content provider’40 specializing in the commercialization of digitized reproductions, Corbis was created in 1989 by Microsoft founder and owner William H. Gates under the name Interactive Home Systems. The initial business concept consisted of reproducing art works and famous photographs in order to display these digital images on wall monitors in private homes or to sell them as thematic compilations on CD-ROMs. Corbis negotiated non-exclusive licensing rights with a number of museums to market their collections by means of digital images, also referred to as ‘media-files.’ The museums were to benefit from this arrangement not only financially, since part of the generated profit was to be shared by Corbis, but also through the indirect promotion of their collections.
22 But Corbis claimed a separate copyright protection, arguing that the digital reproduction of an artwork or photograph could be considered fundamentally distinct from its ‘original.’ Through the potential adjustment of colour, brightness, and contrast, the digital reproduction could be interpreted as a unique work of art.41 The museums balked, fearing a loss of control over the use of their holdings, especially given that copyright legislation had yet to be adjusted to reflect the rapidly changing technology. Corbis countered allegations of taking possession of and commercializing common property through digitization by stating that the copyright protection applied only to the digital image produced by Corbis. The company argued that it was not preventing anyone from reproducing and subsequently disseminating the same original work of art.

23 Unlike Bettmann’s earlier approach, Corbis’s initial idea was barely successful and led to the development of a modified business model. In 1995, with the aid of major financial investments, Corbis began taking over numerous photo agencies and picture archives, and establishing several commission contracts. By purchasing inventory outright, the company avoided lengthy negotiations with museums and libraries, while seeking to gain a leading position in the picture market by eliminating potential competitors.
24 Similar to the sales rhetoric used by the Bettmann Archive (and many other photo agencies), Corbis’s mandate was ‘to build a comprehensive visual encyclopedia, a Britannica without the body text,’42 a ‘digital Alexandria’43 with a stock that now amounts to more than one hundred million creative, entertainment, and archival images.44 As with the invention of photography, digital reproduction technology reanimated the vision of eventually being able to capture ‘the world’ – the fantasy of the archive, an idea that was deliberately propagated by Corbis.

25 The Bettmann Archive, Hulton-Deutsch, Sygma, Condé Nast, the Ansel Adams Collection, and the Andy Warhol Foundation are but a few of the ‘iconic and historical collections’ whose images Corbis features. Corbis has defined its value primarily through the quantity of available pictures, in both analogue and digital form. With these acquisitions and their successive digitization, the company assumed a key position in the distribution chain of this new product. Parallel to its major competitor Getty Images, Corbis anticipated the radical transformation in the field of communications with its new way of consuming and distributing pictures.

26 Otto Bettmann himself approved of the purchase of his former company, claiming to be pleased ‘to have seen my original acorn nourished and cultivated into a formidable digitized oak,’45 and that ‘picture seekers no longer have to consult a multiplicity of sources to fill their graphic needs, all can be satisfied in one well-organized picture emporium,’46 thereby reaffirming his pragmatism and his view of photographic materiality.
27 In turn, Corbis – the IT company and newly created ‘super-agency’47 – stated: ‘When we acquired the Bettmann Archive in 1995, both Bill and I immediately recognized not only its commercial potential, but even more important, our stewardship obligation.’48 With the mention of the company’s stewardship obligation, Steve Davis, former chief executive of Corbis, points to an episode in the company’s history that generated much criticism in the media and in the writings and works of artists.49 Corbis solicited a group of photography conservators to develop a preservation plan to stop the deterioration of large numbers of negatives. In keeping with this plan, in 2002 Corbis transferred the Bettmann Archive and UPI files to Iron Mountain, an underground storage facility located in a former limestone mine north of Pittsburgh.

28 The facility’s operator, an information protection and storage company also called Iron Mountain, holds the documents and data of approximately 2,300 clients, including government departments and private companies, as well as libraries, museums, and media corporations.50 Temperature and humidity controls ensure the long-term preservation of the analogue materials. For a selection of approximately 28,000 ‘icons of photography’ – a set of vintage negatives of best-selling and best-known pictures – the preservation plan foresees storage at −20 degrees Celsius, thereby freezing the negatives.

29 In contrast to the initial plan to digitize the entire stock, the preservation of the Bettmann Archive resulted in a reduction of the ‘visible’ and instantly retrievable photographs. First, the sixteen million analogue objects were viewed and evaluated; the reproduction sources were repackaged, and low-quality pictures or duplicates were sorted out – they were archived but did not qualify for digitization.
Then, after the re-verification of copyrights and research on captions, Corbis digitized the negatives and prints, starting with the best-selling pictures and with those pictures for which the company expected future demand, namely pictures that could be relevant for the present day. However, the initial objective of reproducing more than five thousand photographs per month was soon reversed. Not only were the costs of digitization and digital storage soaring, but the company had underestimated the difficulties in bringing together the diverse collections and in migrating the existing metadata into the new visual database. Today, the number of scanned pictures of the Bettmann and UPI masters adds up to about 250,000 items, of which only a fraction can be displayed on the Corbis website. New scans are predominantly carried out upon the clients’ request. Thus, as with many other digitization projects, it is the client who will determine which pictures will be made ‘visible.’

30 Yet, one has to acknowledge that the electronic display has led to an unprecedented virtual availability of a selection of the stock in the form of standard, medium-sized and watermarked screen pictures. The screen picture is the alias of the digital picture as well as a surrogate of the analogue photograph archived at Iron Mountain. With its search mechanism employing both keywords and cross-referencing, the electronic database realizes what Otto Bettmann had envisioned with his card index, as mentioned in the Bettmann Portable Archive of 1966: a ‘mobile catalogue.’

31 ‘Ideally, picture retrieval should work in the following manner (and perhaps one day it will): The picture user in search of “Melba eating Melba toast” will teletype his coded request to an electronic picture research pool. After a few minutes’ wait, a Western Union messenger will arrive with a fat envelope containing pictures of Melba eating Melba toast, dry, buttered or with marmalade!
Only a digit here and there has to be changed should the request happen to be for “Thomas Jefferson eating spaghetti” or a reproduction of Leonardo da Vinci’s “Mona Lisa” … This Pictorial Futurama is not offered facetiously. We are getting there … To help in such pursuits and to speed up the retrieval of pictures – the right pictures – The Bettmann Archive has developed a visual index.’51

32 But as the example of the ‘ideal visual index’ conceived of by Otto Bettmann has shown, the ‘ideal’ research scenario becomes deficient when offering too many images, and yet offering as many pictures as possible is the underlying condition of the picture market. In practice, the question of how to find an individual picture or a ‘fat envelope’ of pictures has become ever more urgent: it is, and will continue to be, the key concern of all picture providers.

33 One way of facilitating the research is to present pictures in portfolios such as ‘Bettmann Premium’ or ‘Great Historical Moments’ and to rate pictures, since the creation of a hierarchy serves to structure the body of the available stock. It may be this hierarchy that most distinguishes the image bank from the picture archive, as it replaces the principle of juxtaposition of equal elements – the basic principle of the library – and the principle that characterized the structure of the Bettmann Archive.

NOTES

1. For further reading, see Paul FROSH, The Image Factory. Consumer Culture and the Visual Content Industry (Oxford: Berg, 2003).
2. See also Matthias BRUHN, Bildwirtschaft. Verwaltung und Verwertung von Sichtbarkeit (Weimar: VDG Verlag, 2003).
3. Oliver WENDELL HOLMES, ‘The Stereoscope and the Stereograph,’ The Atlantic Monthly, no. 3 (June 1859): 738–49.
4. See Otto BETTMANN, Die Entstehung buchhändlerischer Berufsideale im Deutschland des XVIII. Jahrhunderts (Ph.D. diss., Leipzig, 1927).
5. Otto BETTMANN, The Picture Man (Gainesville: University Press of Florida, 1992), 26.
6. Ibid., 23.
7. Otto BETTMANN, ‘A Picture Index,’ Wilson Bulletin for Librarians, April 1939: 536.
8. Otto BETTMANN, The Picture Man (note 5), 23.
9. Ibid., 26.
10. Otto BETTMANN, ‘A Picture Index’ (note 7), 537.
11. See Advertising for the Bettmann Archive, The Bettmann Archive Newsletter, no. 4 (March 1942): ‘Regard the Archive as a circulating library of authentic photo-prints.’
12. O. BETTMANN, ed., The Bettmann Portable Archive (New York City, 1966), 81.
13. Leslie HANSCOM, ‘A “Little Bettmann Archive” for Everyone,’ Newsday, New York City (November 19, 1978).
14. See Advertising, The Bettmann Archive Newsletter (note 11), no. 3 (May 1941).
15. Leslie HANSCOM, ‘A “Little Bettmann Archive” for Everyone’ (note 13): ‘Essentially what the Bettmann Archive has to sell is a service that saves a client infinite, perhaps prohibitive labor. “People pay me for finding pictures, and for eliminating them,” says Dr. Bettmann – that is, for screening thousands of pictures so that only a good choice remains.’
16. See slogan on figure 4.
17. ‘Dr. Bettmann and his Picture Archive,’ Publishers Weekly, December 1961 [no author credited].
18. See Advertising, The Bettmann Archive Newsletter (note 11), no. 3 (May 1941).
19. Ibid., no. 4 (March 1942).
20. Ibid., nos. 1–4 (1941–2).
21. Cheryce KRAMER, ‘“©Bettmann/CORBIS” – Techniken der Sichtbarmachung von historischem Bildmaterial,’ in Konstruieren, kommunizieren, präsentieren. Bilder von Wissenschaft und Technik, ed. Alexander GALL, 259–91 (Göttingen: Wallstein Verlag, 2008), 259.
22. Original title: ‘Die Entstehung buchhändlerischer Berufsideale im Deutschland des XVIII. Jahrhunderts’ (dissertation, University of Leipzig, 1927).
23.
The Bettmann Archive, ed., The Educational Forum Portfolio, 1–4: ‘The Story of the Wheel’ (November 1940); ‘The Story of Roads’ (January 1941); ‘The Story of Vehicles’ (March 1941); ‘Street Life and Inns Through the Ages’ (May 1941).
24. Otto L. BETTMANN, Pictorial History of Medicine (Springfield: Thomas, 1956).
25. Paul Henry LANG and Otto BETTMANN, Pictorial History of Music (New York City: W. W. Norton, 1960).
26. Otto L. BETTMANN, The Good Old Days; They Were Terrible! (New York City: Random House, 1974).
27. The Bettmann Archive, ed., The Bettmann Archive Picture History of the World. The Story of Western Civilization Retold in 4460 Pictures (New York City: Random House, 1978).
28. Interview with Otto BETTMANN recorded for the William E. Wiener Oral History Library of the American Jewish Committee (New York City, June 12 and June 24, 1971).
29. Otto BETTMANN, The Picture Man (note 5), 135.
30. Ibid., 93.
31. Ibid., 101.
32. Ibid., 101–2.
33. O. BETTMANN, ed., The Bettmann Portable Archive (note 12), 160–4. The ‘chapters’ of the Portable Archive are defined by the selection of thematic keywords.
34. Otto BETTMANN, The Picture Man (note 5), 84.
35. Ibid., 85.
36. See Advertising, The Bettmann Archive Newsletter (note 11), no. 1 (October 1940).
37. UPI is in itself a conglomerate of several photo agencies, such as ACME (1923–60) and INP (1912–58).
38. See Corbis Film Preservation and the Bettmann Archive factsheet, updated 2007, http://www.corbis.com/corporate/PressRoom/PressFactSheet.asp. Similar notions appear in several articles, such as Mary BATTIATA, ‘Buried Treasure,’ Washington Post Magazine, May 2003: ‘millions of the greatest images of the 20th century’; Dirck HALSTEAD, ‘A Visit to the Corbis Picture Mine,’ The Digital Journalist, June 2003, http://www.digitaljournalist.org/issue0306/cpmine.html: ‘a vast proportion of the world’s visual legacy’; Scott WILLIAMS, ‘Freezing Time,’ Washington CEO Magazine 15, no. 5 (May 2004): 69–70: ‘most famous pictures in history.’
39. Information sheet for the merged company The Bettmann Archive and Bettmann News Photos, ca. 1989.
40. Richard RAPAPORT, ‘In His Image,’ Wired 4, no. 11 (November 1996), http://www.wired.com/wired/archive/4.11/corbis_pr.html.
41. Jane LUSAKA, Susannah CASSEDY O’DONNELL, and John STRAND, ‘Whose 800-lb Gorilla Is It? Corbis Corporation Pursues Museums,’ Museum News (May/June 1996).
42. Richard RAPAPORT, ‘In His Image’ (note 40): quote by Charles Mauzy, director of media development at Corbis.
43. Ibid.
44. See Corbis corporate fact sheet, August 2009, www.corbis.com.
45. Jesse BIRNBAUM, David BJERKLIE, and Patrick E. COLE, ‘Gates Snaps Top Pix,’ Time (October 23, 1995): 107.
46. Otto BETTMANN, The Picture Man (note 5), 144.
47. Paul FROSH, The Image Factory (note 1), 27.
48. Henry WILHELM, ‘High-Security, Sub-Zero Cold Storage for the Permanent Preservation of the Corbis-Bettmann Archive Photography Collection,’ in Final Program and Proceedings: IS&T Archiving Conference (Springfield: IS&T, 2004), 122–7.
49. See the most prominent works addressing the issue: Hal FOSTER, ‘The Archive without Museums,’ OCTOBER 77 (Summer 1996): 97–119; Geoffrey BATCHEN, ‘Photogenics/Fotogenik,’ Camera Austria 62–3 (1998): 5–16; Allan SEKULA, ‘Between the Net and the Deep Blue Sea (Rethinking the Traffic of Photographs),’ OCTOBER 102 (Fall 2002): 3–34.
See also artworks by Alfredo JAAR, ‘Lament of the Images’ (2002), installation work, Documenta XI (Kassel, 2002); and Ines SCHABER, ‘Culture Is Our Business’ (2004), photographs and dia-projection presented in the framework of the group exhibition No Matter How Bright the Light, the Crossing Occurs at Night (Berlin: KW Institute for Contemporary Art, 2006).
50. www.ironmountain.com/company.
51. O. BETTMANN, ed., The Bettmann Portable Archive (note 12), 81.

ABSTRACTS

Contrary to the art market of photography, the picture market is based on the reproducibility and indexability of the medium. As the protagonists of this market, photographic agencies and commercial photo archives are seeking ways to maximize revenues from picture licensing and to continuously exploit their collections. But how exactly is monetary value attributed to these pictures produced and reproduced through photography? How does the information associated with a picture, as well as the specific forms of visual archiving and picture retrieval, contribute to its value? How does the dictate of economic efficiency shape and influence the materiality and status of photography? Using the example of the Bettmann Archive and Corbis, this article analyses the construction of value through analogue and digital photography and investigates the differences and continuities between a former commercial picture archive evolving in response to changing market needs (Bettmann Archive) and a visual content provider (Corbis) that emerged from the transformation in the fields of communication and stimulated new ways of consuming and distributing pictures.

AUTHOR

ESTELLE BLASCHKE

Estelle Blaschke is an art historian as well as a research associate and teacher of the history and theory of photography at the University of Duisburg-Essen.
From 2003 to 2007, she worked as a consultant in UNESCO’s Culture Sector, before going on to write her doctoral dissertation, ‘La production du patrimoine dans les agences photographiques,’ under the direction of André Gunthert, Michel Poivert, and Herta Wolf. Together with Herta Wolf, she co-organized the conference Dépôt et Plateforme. L’Archive Visuelle dans l’Ère Post-Photographique, which was held in Cologne in June 2009; she also writes the blog Post-Photographic Archive.
Dietary studies give vital insights into foraging behaviour, with implications for understanding changing environmental conditions and the anthropogenic impacts on natural resources. Traditional diet sampling methods may be invasive or subject to biases, so developing non-invasive and unbiased methods applicable to a diversity of species is essential.

2. We used digital photography to investigate the diet fed to chicks of a prey-carrying seabird, and compared our approach (photo-sampling) to a traditional method (regurgitations) for the greater crested tern Thalasseus bergii.

3. Over three breeding seasons, we identified >24,000 prey items of at least 47 different species, more than doubling the known diversity of prey taken by this population of terns. We present a method to estimate the length of the main prey species (anchovy Engraulis encrasicolus) from photographs, with an accuracy <1 mm and precision ~0.5 mm. Compared to regurgitations at two colonies, photo-sampling produced similar estimates of prey composition and size, at a faster species accumulation rate. The prey compositions collected by two researchers photo-sampling concurrently were also similar.

4. Photo-sampling offers a non-invasive tool to accurately and efficiently investigate the diet composition and prey size of prey-carrying birds. It reduces biases associated with observer-based studies and is simple to use. This methodology provides a novel tool to aid conservation and management decision-making in light of the growing need to assess environmental and anthropogenic change in natural ecosystems.
Key-words: diet, digital photography, non-invasive monitoring, prey-carrying birds, rarefaction curves, Thalasseus bergii, regurgitation

Introduction

Dietary studies are essential to understand animal ecology, temporal changes in the environment, and to establish sustainable management strategies for natural resources (Jordan 2005). In complex natural systems, top predators can act as indicators of environmental conditions, and their diet, in particular, can provide important information on prey species abundance, occurrence and size, which may reflect processes over short time-frames (e.g. Suryan et al. 2002; Parsons et al. 2008). As such, outcomes from diet studies are important tools for monitoring changes in demographic parameters or behaviour, themselves a product of changing diet (Sherley et al. 2013). Moreover, dietary studies can provide powerful indicators of anthropogenic impacts and environmental change on food-webs (e.g. Piatt et al. 2007; Green et al. 2015), facilitating conservation biology and ecosystem-based management (Grémillet et al. 2008; Sherley et al. 2013). The importance of monitoring diet thus demands the development of simple, efficient, non-invasive methods applicable to a diversity of species.

Numerous techniques exist to investigate bird diets (Jordan 2005; Inger & Bearhop 2008; Karnovsky, Hobson & Iverson 2012). Invasive techniques include induced regurgitations (Diamond 1984), stomach flushing of live birds (Wilson 1984), application of neck-collars on chicks (Moreby & Stoate 2000) and the dissection of birds collected specifically for this purpose (Doucette, Wissel & Somers 2011). These methods describe short-term diet composition accurately (González-Solís et al. 1997), despite some errors introduced by differential prey regurgitation or digestion (e.g. Jackson & Ryan 1986).
More recent biochemical methods involving isotopic, lipid and DNA analyses provide complementary approaches, but generally cannot be used alone due to their coarse taxonomic resolution (Karnovsky, Hobson & Iverson 2012). Moreover, these approaches typically require disturbance or capture of birds, which can impact their physiology and behaviour (e.g. Ellenberg et al. 2006; Carey 2009).

Accurate, non-invasive diet sampling is therefore required to give fine-scale indicators of prey availability or prey selection. One of the least invasive methods is to observe birds carrying visible prey with binoculars or video recording systems, from a safe distance. This typically involves birds feeding offspring or incubating partners (e.g. Safina et al. 1990; Redpath et al. 2001; Tornberg & Reif 2007). Such studies are generally limited to assessing chick diet, but have the potential to reveal changes in prey communities (Anderson et al. 2014). However, observer-based diet studies are subject to several methodological limitations (Cezilly & Wallace 1988; González-Solís et al. 1997; Lee & Hockey 2001), calling for further development of this approach.

Digital photography represents an excellent alternative tool to study the diet fed to chicks of prey-carrying birds, because 1) there is virtually no limit to the number of pictures that can be taken, 2) species identification is possible in most cases, 3) prey can potentially be measured accurately and precisely, 4) images can be re-analysed without loss of data quality, i.e. samples do not deteriorate over time, and 5) storage is simple. Over the last decade, the use of digital photography for dietary studies has included camera-traps to investigate the diet of nesting raptors (García-Salgado et al.
2015; Robinson et al. 2015), and the combined use of digital compact cameras with spotting scopes (digiscoping) to assist prey identification (made primarily by observation) for Caspian terns (Hydroprogne caspia) and common murres (Uria aalge) (Larson & Craig 2006; Gladics et al. 2015). However, both techniques have limitations, including poor image quality and difficulty in capturing images of birds carrying prey in flight or during fast delivery to chicks (see Larson & Craig 2006; García-Salgado et al. 2015).

Recent advances in performance and reductions in the price of digital single lens reflex (DSLR) cameras, combined with autofocus telephoto lenses, make digital photography an affordable option for prey identification, even for birds in flight. In the last few years, DSLRs have been used opportunistically to identify items carried by a variety of birds (e.g. Woehler et al. 2013; Gaglio, Sherley & Cook 2015; Tella et al. 2015), but a systematic approach and an accurate method to estimate prey dimensions are lacking. We developed a standardised application of digital photography using DSLR cameras and telephoto lenses to investigate chick diet composition and prey size in prey-carrying birds. We tested the method on the colonially breeding greater crested tern Thalasseus bergii in South Africa. We compared the efficacy of photo-sampling to the more traditionally used regurgitation method (Walter et al. 1987) using prey identified to species level collected from chicks, and assessed the accuracy and precision of length measurements of the main prey made from photographs. We also evaluated the potential for observer bias in this system. Finally, we discuss the validity of applying our non-invasive approach to any prey-carrying bird and the potential to develop a simple and effective tool-box to accurately identify and estimate the size of any carried item.
Methods

STUDY SPECIES AND SITES

The greater crested tern (hereafter ‘tern’) is distributed from the Namibian coast eastwards to the central Pacific. It feeds mostly at sea by dipping onto the surface or plunge diving up to ca 1 m (Crawford, Hockey & Tree 2005). During breeding, adults usually return from foraging with a single prey item, which is either offered to the partner during courtship or delivered to the offspring (Crawford, Hockey & Tree 2005). In South Africa, the sub-species Thalasseus bergii bergii breeds mostly on islands in the Western Cape (Crawford 2003). Since 2008, Robben Island (33°48’S, 18°22’E), Table Bay, has hosted the largest southern African colony, reaching ~13,000 breeding pairs in 2010 (Makhado et al. 2013). A few hundred pairs breed in the Eastern Cape, mostly on Seal Island (33°50’S, 26°17’E), Algoa Bay (Makhado et al. 2013). We studied their diet at both Robben and Seal Islands.

PHOTO-SAMPLING

We investigated the diet of breeding terns at Robben Island during 2013 (February–June), 2014 (January–June) and 2015 (February–June), and at Seal Island during June 2015. Adult terns returning with prey were photographed from a vantage point 50–80 m from the edge of their colony (Fig. 1a). At Seal Island (~300 pairs) we were able to photograph all adults returning to the colony during our photo-sampling sessions. At Robben Island, colonies were much larger (>6,000 pairs), so we could not photograph all individuals. However, every attempt was made not to bias selection towards individuals carrying particularly conspicuous prey items. The distance to the flying birds ranged between 6.5 and 25 m. Total sampling effort represented ~50 h of photography per year. For each individual, we typically took a sequence of three photos (a ‘photo set’) for identification and prey measurements (Fig. 1b).
We found by trial and error that three images provided the best trade-off to balance processing time with obtaining at least one sharp image. To avoid biasing the results and to maintain independence among photo sets, ad-hoc image analysis was performed for each sampling session to discard repeated photo sets of the same adults carrying the same prey item. Recurrent birds were identified using distinguishable feather patterns, the presence of colour or metal rings, the type and position of prey in the bill while flying, and distinctive markings on the prey.

Photos were taken using Canon 7D and 7D Mark II cameras, fitted with Canon EF 100–400 mm f/4.5-5.6L IS USM zoom lenses. We set the cameras to (i) shutter speed priority (1/2500 s); (ii) automatic ISO (or aperture priority mode that provided shutter speeds of at least 1/2500 s); (iii) high-speed continuous shooting; (iv) autofocus on AI Servo (for moving subjects) using AF point expansion; and (v) large JPEG file format for high-speed recording. We set the telephoto lens to autofocus, the image stabilizer to on, and the closest focal point to 6.5 m to increase autofocus speed.

IDENTIFICATION OF PREY SPECIES

All blurred or otherwise non-identifiable images (due to e.g. distance, an unfavourable position of prey in the bill, or lighting) were discarded. From the remaining photographs (e.g. Fig. 1), we determined the numerical abundance (Duffy & Jackson 1986) of prey (usually at species level) using fish guides (Smith & Heemstra 2003; Branch et al. 2010) and assistance from experienced observers (see Acknowledgements). In some instances, good quality photographs contained prey that could not be identified (<0.01% of total prey items). For example, some adults returned with pieces of fish flesh, possibly originating from kleptoparasitism disputes or scavenging. These images were excluded from our analyses.
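The ad-hoc screening for repeated photo sets of the same adult carrying the same prey item amounts to a simple de-duplication on the distinguishing characters listed under Photo-sampling. The sketch below is illustrative only, not the authors' actual workflow (which was done by eye during image review); the field names `session`, `bird_marks` and `prey_marks` are hypothetical stand-ins for the recorded features (feather patterns, rings, prey position, prey markings):

```python
def drop_repeat_photo_sets(photo_sets):
    """Keep one photo set per (session, bird, prey) signature.

    Each photo set is a dict holding a sampling-session identifier plus
    whatever distinguishing features the observer recorded. Repeated sets
    of the same adult carrying the same prey item within a session are
    dropped, preserving the order of first occurrence.
    """
    seen = set()
    unique = []
    for ps in photo_sets:
        signature = (ps["session"], ps["bird_marks"], ps["prey_marks"])
        if signature not in seen:
            seen.add(signature)
            unique.append(ps)
    return unique
```

In practice the signature would be assigned by the observer after inspecting the images, so the grouping key is only as reliable as the distinguishing characters themselves.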
Approximately 45% of photo sets were suitable for prey identification; there was no evidence of bias towards particular prey types among discarded images.

ESTIMATION OF PREY STANDARD LENGTH

Dietary studies of piscivorous birds commonly measure the standard length (SL) of the fish (the length from the tip of the snout to the posterior edge of the hypural plate) to compare prey size (Barrett 2002; Smith & Heemstra 2003). We estimated SL from photographs for anchovy Engraulis encrasicolus, the most common species in the tern’s diet. As prey tended to flex to differing degrees in the adults’ bills, direct SL measurement from the image underestimates fish length. Thus, we estimated SL from measurements of individual body parts (eye diameter, operculum width and head width, all measured dorsoventrally), which were less distorted in the image and generally in a plane parallel to the bird’s bill and the camera (Figs 1b and 2).

To do this, we first assessed the accuracy of predicted SLs based on these morphological measurements using cross-validation, by fitting log-linear allometric regressions to a training dataset (n = 50) and comparing model predictions to a test dataset (n = 20) of anchovies measured by hand (see Appendix S1). Next, we measured 37 additional anchovies with Vernier callipers (to the nearest 0.1 mm) and then photographed them held in the bill of a dead tern for which the culmen length was known (Fig. 2 in Appendix S1). For each image, we used the ‘line selection tool’ in ImageJ (Schneider et al.
2012) to estimate eye diameter (E), operculum width (O) and head width (H) for each fish by scaling the pixel length in the image to (1) the length of the dead tern's culmen (62.1 mm; measured with Vernier callipers), (2) the mean culmen length for this species (61.2 mm, n = 128; Crawford, Hockey & Tree 2005) and (3) the minimum and maximum recorded culmen lengths (range: 54.5–67.6 mm; Crawford, Hockey & Tree 2005). We used the estimates of E, O and H to obtain three estimates of SL (ŜL) using the log-linear allometric regressions (see also Appendix S1), calculated their arithmetic mean (combined ŜL) and used this value in further analyses (since it was generally most accurate; Appendix S1).

To determine the accuracy (γ) of the combined ŜL estimates from the images, we compared them to the known SL of each fish. We defined the mean percentage accuracy (γ̄) of the combined ŜL estimates as:

γ̄ = (100 / n) Σ_{i=1}^{n} [ 1 − |SL_i − combined ŜL_i| / SL_i ]    (eqn 1)

where i indexes each of the n = 37 fish. As the absolute difference was computed, both overestimates and underestimates of e.g. 2% would yield γ = 98%. In addition, we assessed the mean difference between the known SLs and the combined ŜL estimates using permutation tests with 10,000 Monte Carlo iterations (perm library v. 1.0-0.0 for R).

To determine the precision (or repeatability) of the method, we repeated the measurement process in ImageJ to obtain six E, O and H values and the corresponding combined ŜL values for 17 of the 37 fish (using a known length on the ruler in each photograph). We calculated the combined ŜL as above and used this to assess precision. Precision (τ) was defined as:

τ_{f,j} = | (1/n) Σ_{j=1}^{n} combined ŜL_{f,j} − combined ŜL_{f,j} |    (eqn 2)

where j indexes each of the n = 6 combined ŜL values for the f = 17 fish. We report the mean precision (in mm) of all (6 × 17 = 102) values of τ_{f,j}.
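Equations 1 and 2 are simple enough to state directly in code. The following Python sketch (illustrative only; the study used R) implements the accuracy and precision definitions above; the example values are hypothetical.

```python
import numpy as np

def mean_percentage_accuracy(sl_known, sl_combined):
    """Eqn 1: mean percentage accuracy of combined SL estimates."""
    sl_known = np.asarray(sl_known, float)
    sl_combined = np.asarray(sl_combined, float)
    return float(np.mean(100 * (1 - np.abs(sl_known - sl_combined) / sl_known)))

def precision_deviations(sl_repeats):
    """Eqn 2: |mean of repeated estimates - each estimate| for one fish.

    sl_repeats: the n = 6 repeated combined-SL estimates (mm) of one fish.
    """
    sl_repeats = np.asarray(sl_repeats, float)
    return np.abs(sl_repeats.mean() - sl_repeats)

# Hypothetical values, for illustration only
known = [110.0, 120.0]
estimated = [107.8, 122.4]                      # one -2% and one +2% error
acc = mean_percentage_accuracy(known, estimated)  # ~98.0, as in the text
tau = precision_deviations([112.0, 112.4, 111.8, 112.2, 112.0, 111.6])
```

As the text notes, over- and underestimates contribute symmetrically because the absolute difference is taken before averaging.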
In addition, we examined whether either precision or accuracy were influenced by the SL of a fish. For accuracy, we used a linear model of the form:

logit(γ_i) = α + β × SL_i + ε_i    (eqn 3)

where α and β are estimated from the data, γ_i are the accuracy estimates (as proportions), SL_i the known standard length for fish i and ε_i ~ N(0, σ) the residual error, with σ estimated from the data. For precision, we used a linear mixed model (LMM: lme4 library for R) of the form:

τ_{f,j} = β × SL_{f,j} + δ_{f,j} × η_j + ε_{f,j}    (eqn 4)

where β is the fixed effect parameter, η_j ~ N(0, ς) the random effect parameter, ε_{f,j} ~ N(0, σ) the residual error, δ_{f,j} the vector of fish IDs, τ_{f,j} the vector of precision values and SL_{f,j} the vector of known standard lengths for each measurement j of fish f, with β, σ and ς estimated from the data.

Finally, we used the above approach to estimate the SL of prey in a subset of the digital images collected in the field in which the bird's bill and the head of the prey were clearly visible and approximately parallel to the camera (Fig. 1b). For each image, we used the combined ŜL and assumed the length of the bird's culmen to be 61.2 mm (see above).

COMPARISON BETWEEN PHOTO-SAMPLING AND REGURGITATION-SAMPLING

To compare photo-sampling and regurgitation-sampling, we collected images of adults carrying prey and regurgitations from chicks concurrently on 18 and 19 April 2015 at Robben Island (photo-sampling effort: 600 min) and on 9 June 2015 at Seal Island (photo-sampling effort: 132 min). Regurgitates were collected from the ground while chicks were inside a pen during ringing operations (chicks often regurgitate when disturbed). Prey were later identified from whole prey or diagnostic prey remains resistant to digestion, such as otoliths and squid beaks, using Clarke (1986), Smith & Heemstra (2003), Smale, Watson & Hecht (1995), Branch et al.
(2010) and the Port Elizabeth Museum's reference collection. Prey items that were not identified mainly consisted of fish flesh and were excluded from our analysis. The SL of whole anchovies collected from regurgitations was measured using a ruler.

We compared the number of prey items from different taxa between methods using χ² tests and assessed differences in the estimated anchovy SLs using permutation tests (10,000 iterations) for each island separately, as the SL variance between islands was heterogeneous (Levene's test: W(1,164) = 5.8, p = 0.017).

We examined prey diversity using sample-based rarefaction curves, as these allow for standardized comparison across collections that differ in sample size (Gotelli & Colwell 2001). Using 1,000 random permutations of both the photo-samples and regurgitations from 18 and 19 April 2015, we produced curves of the mean (± asymptotic 95% confidence intervals, CI) species accumulation rate (species identified per sample made). We then compared this rate at a sample size of n = 190. In addition, by fitting a Generalised Additive Model (GAM) to the photo-sample means and by assuming equal accumulation rates for extrapolation, we also compared the predicted species accumulation rate for regurgitations to the mean rate for photo-sampling at n = 1,500. The chosen sample sizes approximate those obtained in the field.

Finally, to evaluate any possible observer effect on photo-sampling, two different researchers (observer-A and observer-B) simultaneously collected photographs at Robben Island on 18 and 19 April 2015. The two observers used the same equipment (Canon 7D Mark II camera, Canon 100–400 mm lens) and had similar experience in wildlife photography. All other procedures were the same as described above. We compared the samples from the two observers using χ² tests.
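The two-sample permutation test used for the SL comparisons can be sketched as follows. This is a generic Monte Carlo permutation test in Python, not the perm R package the authors used, and the sample values are hypothetical.

```python
import numpy as np

def permutation_test(x, y, n_iter=10_000, seed=1):
    """Two-sample Monte Carlo permutation test for a difference in means.

    Returns the two-sided p-value: the proportion of random label
    shufflings whose absolute mean difference is at least as large
    as the observed one.
    """
    rng = np.random.default_rng(seed)
    x, y = np.asarray(x, float), np.asarray(y, float)
    observed = abs(x.mean() - y.mean())
    pooled = np.concatenate([x, y])
    count = 0
    for _ in range(n_iter):
        rng.shuffle(pooled)
        diff = abs(pooled[:len(x)].mean() - pooled[len(x):].mean())
        if diff >= observed:
            count += 1
    return count / n_iter

# Hypothetical anchovy SL samples (mm), for illustration only
photo = [91.0, 93.5, 88.2, 95.1, 90.4, 92.3]
regurg = [90.1, 92.8, 89.5, 91.7, 90.9]
p = permutation_test(photo, regurg)
```

Because the null distribution is built by reshuffling the pooled samples, no normality assumption is needed, which suits the heterogeneous variances reported above.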
Unless otherwise stated, all means are presented ± 1 SD and all statistics were performed using R v.3.2.1.

Results

PHOTO-SAMPLING VS. REGURGITATION-SAMPLING

In total, ~160,000 photos were taken during the three breeding seasons on Robben Island, yielding images of 24,211 prey items identifiable to species (96%, 48 species) or family (98%, 49 families) level (a total of 51 prey taxa; Table 1). During the regurgitation comparison trial at Robben Island, we identified 27 species from 1,510 photo-samples compared to 11 species from 198 regurgitated prey items. At Seal Island, we identified 11 species from 157 photo-samples and 6 species from 103 regurgitated prey items (Appendix S2). The mean species accumulation rate at 190 samples was 0.075 (95% CI: 0.058–0.089) for photo-sampling and 0.057 (95% CI: 0.053–0.058) for regurgitations; however, at this sample size, the 95% CIs overlapped (Fig. 3). The number of species predicted from 1,500 regurgitations was 23.4 (based on the GAM extrapolation) versus 27.0 for photo-sampling (Fig. 3). The diet composition of main prey did not differ significantly between the two methods for Robben Island (χ² = 47, d.f. = 42, p = 0.26) or Seal Island (χ² = 18, d.f. = 15, p = 0.26; Table S3 in Appendix S2).

ACCURACY AND PRECISION IN ESTIMATING ANCHOVY STANDARD LENGTH

The mean SL of the 50 anchovy used to calculate the allometric regressions between the morphometric measurements (training set) was 109.6 ± 13.5 mm (range = 83.3–130.5 mm), similar to that of the 20 anchovy in the test set (SL 112.8 ± 3.0 mm; range = 107.6–116.8 mm). The predicted ŜLs of the test set predominantly fell within the 95% prediction intervals for all three body-part-specific models (Fig. S1, Appendix S1). The mean accuracy (γ̄) for the combined ŜL was 97.9 ± 1.7% (range 93.0–99.9%) for the training set and 97.3 ± 1.8% (range 92.5–100%) for the test set.
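The sample-based rarefaction underlying the species-accumulation comparison can be sketched as follows. This Python version averages species counts over random orderings of the samples (illustrative only: the study used 1,000 permutations plus a GAM for extrapolation, and the species lists below are invented).

```python
import numpy as np

def rarefaction_curve(samples, n_perm=1000, seed=2):
    """Sample-based rarefaction: mean number of species identified after
    1..N samples, averaged over random orderings of the samples.

    samples: list of per-sample species lists (one photo-sample or
    regurgitate each). Entry k-1 of the result is the mean species
    count after k samples.
    """
    rng = np.random.default_rng(seed)
    n = len(samples)
    totals = np.zeros(n)
    for _ in range(n_perm):
        order = rng.permutation(n)
        seen = set()
        for k, idx in enumerate(order):
            seen.update(samples[idx])
            totals[k] += len(seen)
    return totals / n_perm

# Hypothetical photo-samples, each with the species identified in it
photo_samples = [["anchovy"], ["anchovy"], ["sardine"], ["anchovy"],
                 ["saury"], ["anchovy", "sardine"]]
curve = rarefaction_curve(photo_samples)
# curve is non-decreasing and its last point equals the total richness (3)
```

The species accumulation rate compared in the text corresponds to the slope of this curve at a given sample size.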
Accuracy was not affected by SL in either case (linear models: p > 0.05; see Appendix S1).

The mean SL of the 37 photographed anchovy was 113.4 ± 6.7 mm. With the culmen length of the dead tern (62.1 mm) as the reference, the mean accuracy (γ̄) for the combined ŜL was 98.3 ± 1.5% (range 93.8–100%), yielding a mean combined ŜL of 114.0 ± 7.1 mm (Table S2 in Appendix S1). With the species' mean culmen length (61.2 mm) as the reference, the mean combined ŜL was 112.7 ± 7.0 mm (γ̄ = 98.1 ± 1.5%, range 92.2–99.9%; Fig. 4, Table S2). The length of a fish (actual SL) did not influence the accuracy in either case (linear models: p > 0.05, Fig. 4) and neither of the combined ŜLs differed significantly from the actual SLs (permutation tests: p > 0.05). The mean accuracy (γ̄) was reduced to 88.9 ± 3.3% and 91.3 ± 3.2% for the minimum (54.5 mm) and maximum (67.6 mm) recorded culmen lengths respectively (Table S2), and these combined ŜL series did differ significantly from the actual SLs (permutation tests: p < 0.001; see Appendix S1).

The mean precision of the combined ŜL estimates was 0.52 ± 0.38 mm or 99.6 ± 0.3%, with an absolute range of 0.02–1.58 mm or 98.6–99.99%. Precision was not related to the actual SL of the fish being measured (LMM: χ² = 0.02, p = 0.89).

COMPARISONS OF PREY SIZE BETWEEN PHOTO-SAMPLING AND REGURGITATIONS

At Robben Island, 116 anchovy from photo-samples (10% of anchovy photographed) and 20 from regurgitates (12%) could be measured, while at Seal Island the corresponding values were 21 (18%) and nine (9%) respectively. Overall, the anchovy were longer at Seal Island (mean = 120.3 ± 8.2 mm, n = 30) than at Robben Island (91.2 ± 13.2 mm, n = 136; p < 0.001; Fig. 5). For Robben Island, the mean combined ŜL of anchovy in the photo-samples was 91.3 ± 13.6 mm compared to 90.8 ± 11.1 mm for regurgitates (Fig. 5).
At Seal Island, the corresponding values were 121.6 ± 9.3 mm and 117.4 ± 3.6 mm respectively. The SL estimates from the two methods did not differ statistically for either Robben Island (p = 0.85) or Seal Island (p = 0.21).

COMPARISON BETWEEN OBSERVERS

We identified 1,510 prey items of 22 species from the photographs taken by observer-A and 1,625 prey items of 21 species from observer-B. Prey composition did not differ significantly between the two (χ² = 72, d.f. = 64, p = 0.23). However, three species were not recorded in common: observer-A photographed one horsefish Congiopodidae sp. and one eel Ophichthidae sp., while observer-B recorded three individuals of Cape hake Merluccius capensis.

Discussion

Photo-sampling offers an effective, low-impact alternative to traditional diet studies for birds that carry prey items in their bill, with accurate prey identification and size estimates possible. Samples can be acquired quickly and equivalent diet compositions obtained with relatively low effort (Fig. 3). In three breeding seasons, we sampled 24,211 prey items and identified 51 prey taxa (Table 1) with this approach; the most comprehensive diet analysis for terns in southern Africa prior to our study identified 25 species from 1,311 regurgitated prey items over 10 breeding seasons (1977–1986; Walter et al. 1987). Despite ~55% of photos being discarded, our approach yielded an order of magnitude more samples and identified twice as many species, with minimal disturbance to breeding birds.

The photo-sampling approach has several other advantages over traditional diet sampling. First, terns often regurgitate only the posterior body and caudal fin of a fish, making identification of similar species difficult (McLeay et al. 2009). Photo-sampling records the entire prey and, if there is doubt as to the identification, images can be shared easily with global experts or on specialized websites (e.g. I-spot).
Second, photo-sampling can be used in a range of situations (e.g. on land or from a boat) by one individual (collection of regurgitations often involves many people), with minimal training in photography (cameras can be pre-set). Third, the photographic equipment is relatively affordable and, once purchased, can be used for several years, at multiple colonies and for several species. Also, although processing the photographs can be time-consuming, taking about 30 min for an average of 100 prey identified, the images can be stored and analysed multiple times if needed, without loss of data quality or metadata (e.g. date and location).

Possible drawbacks associated with photo-sampling include the repeated photography of prey items, especially those with long handling times, leading to the frequency of these items being over-estimated. This is predominantly a problem in larger colonies, where it is difficult to follow the fate of individual prey items, and one that could be countered using delays (e.g. 5 min) between photo sets. When only a subset of prey is sampled, large or conspicuous prey items may induce an observer bias if they are easier to photograph, more readily identified to species level or more interesting to the photographer. Training photographers to randomise the photo-sampling as much as possible should help reduce this potential bias. Differences in photographic experience between observers could also create bias and should be examined in future studies. Photo-sampling is difficult in bad weather (strong wind, rain or mist), which may also introduce bias in some situations. Finally, one constraint of our study is that photo-sampling was applied to study chick diet. Although this can provide important insights into changes in prey communities (Anderson et al.
2014), it may not always represent adult diet, or diet outside the breeding season (McLeay et al. 2009). We thus suggest implementing indirect methods, such as measuring stable isotope ratios in e.g. blood and feathers of adults (Inger & Bearhop 2008), concurrently with photo-sampling. Moreover, applying both methods concurrently on marked individuals would allow the development of trophic discrimination factors in wild animals (Newsome et al. 2010).

More broadly, ecologists now use digital photography to study animals across a wide range of taxa (e.g. Morrison et al. 2011; Marshall & Pierce 2012; Gregory et al. 2014). Opportunistic observations have documented novel behaviours and trophic interactions (e.g. Gaglio, Sherley & Cook 2015; Tella et al. 2015), suggesting that standardised approaches to studying species that bring items to a known location have great potential for ecological monitoring. This approach could also be applied to a diversity of taxa in addition to birds that carry prey (e.g. carnivores bringing prey to their offspring, or ants and termites carrying items to their nests). In any of these applications, photo-sampling could provide high-quality photographic data to complement the now extensive use of camera-traps.

The ecological information provided by prey size is almost as important as prey species, giving information on the targeted prey cohort and the predator's energetics. We demonstrated that prey size (anchovy SL) can be estimated accurately (~98%) and precisely (~99%) from images. The approach could be used with a wide variety of predators and prey species to eliminate biases associated with in situ visual observation (Lee & Hockey 2001). Even if photo-sampling is unlikely to obtain measurements as accurate or precise as those from regurgitated or dropped prey, the sample size from photo-sampling is always likely to be greater than the number of prey found undigested.
A crucial step in estimating absolute prey size is identifying a reference object (e.g. culmen, eye diameter) of known size to provide a scale for prey measurements. These reference objects should be chosen carefully, and the degree to which the selected trait varies within the population should be assessed to constrain and minimise errors where possible (see Results). Additional studies could photograph birds of known bill length, age and sex (e.g. colour-banded individuals) with prey held at different angles to the body, and compare larger numbers of observers photo-sampling concurrently, to further quantify the errors associated with prey measurements. For prey species that are not distorted in images (e.g. some insects do not bend over a bird's bill), size can be estimated directly, and even when absolute estimates are not possible, the method can still be used to assess changes in relative prey size, allowing for spatial and temporal comparisons.

Crucially, the photo-sampling method caused little if any disturbance to the nesting birds. Distances from animals can be selected to balance each species' sensitivity against image quality. The opportunity to record the number and size of prey brought to offspring remotely and in real time, without influencing behaviour, allows for accurate monitoring of temporal variability. For threatened or declining species (e.g. many seabirds; Croxall et al. 2012), such non-invasive methods can help elucidate functional links between population dynamics, environmental variability and anthropogenic pressures (Saraux et al. 2011). Incorporating these observations into detailed information on species composition and energy content for energetic models offers great potential for indicators of long-term and large-scale ecosystem change (Furness & Cooper 1982). Furthermore, with standardized protocols, digital images can be shared easily using digital platforms (e.g.
I-spot, Google Images) to facilitate global collaborations (e.g. González-Solís et al. 2011; Lynch et al. 2015), encourage community involvement in citizen science projects (e.g. Newman et al. 2012), and develop data archives to answer as yet unforeseen questions. Given the growing need to assess environmental changes and human impacts on natural ecosystems (Hobday et al. 2015), our methodology offers a novel tool for collaborative efforts in conservation.

Acknowledgements

Our research was supported by a Department of Science and Technology-Centre of Excellence grant to the Percy FitzPatrick Institute of African Ornithology. SANParks and Robben Island Museum provided logistical support and access to the tern colonies. We thank: Pierre Pistorius (Nelson Mandela Metropolitan University) for supporting the research on Seal Island; Barrie Rose, Bruce Dyer, Carl van der Lingen, Rob Leslie, Bryan Maritz, Charles Griffiths, Mike Picker, Jean-Paul Roux and Malcolm Smale for assistance with prey identification; Malcolm Smale for access to the otolith and squid beak reference collections at the Port Elizabeth Museum; Carl van der Lingen and Cecile Reed for samples of anchovy; numerous volunteers who collected regurgitations; and Alistair McInnes (Observer A) and Dominic Rollinson (Observer B). Thanks to Tom Flower and Stephen Votier for constructive comments on an earlier draft. This research was approved by SANParks (CONM1182), the Department of Environmental Affairs (RES2013/24, RES2014/83, RES2015/65) and the animal ethics committees of the University of Cape Town (2013/V3/TC) and NMMU (A14-SCI-ZOO-003).

Data accessibility

All data used in this article are available in the Supplementary materials or the Dryad Digital Repository (Gaglio et al. 2016): http://dx.doi.org/10.5061/dryad.j647p.

References

Anderson, H.B., Evans, P.G.H., Potts, J.M., Harris, M.P. & Wanless, S.
(2014) The diet of Common Guillemot Uria aalge chicks provide evidence of changing prey communities in the North Sea. Ibis, 156, 23–34.
Barrett, R.T. (2002) Atlantic puffin Fratercula arctica and common guillemot Uria aalge chick diet and growth as indicators of fish stocks in the Barents Sea. Marine Ecology Progress Series, 230, 275–287.
Branch, G.M., Griffiths, C.L., Branch, M.L. & Beckley, L.E. (2010) Two Oceans: A Guide to the Marine Life of Southern Africa. Randomhouse/Struik, Cape Town.
Carey, M.J. (2009) The effects of investigator disturbance on procellariiform seabirds: a review. New Zealand Journal of Zoology, 36, 367–377.
Cezilly, F. & Wallace, J. (1988) The determination of prey captured by birds through direct field observations: a test of the method. Colonial Waterbirds, 11, 110–112.
Clarke, M.R. (1986) A Handbook for the Identification of Cephalopod Beaks. Clarendon Press, Oxford.
Crawford, R.J.M. (2003) Influence of food on numbers breeding, colony size and fidelity to localities of Swift Terns in South Africa's Western Cape, 1987-2000. Waterbirds, 26, 44–53.
Crawford, R.J.M., Hockey, P.A.R. & Tree, A.J. (2005) Swift Tern Sterna bergii. Roberts Birds of Southern Africa (7th edn) (eds P.A.R. Hockey, W.R.J. Dean & P.G. Ryan), pp. 453–455. Trustees of the John Voelcker Bird Book Fund, Cape Town.
Croxall, J.P., Butchart, S.H., Lascelles, B., Stattersfield, A.J., Sullivan, B., Symes, A. & Taylor, P.H.I.L. (2012) Seabird conservation status, threats and priority actions: a global assessment. Bird Conservation International, 22, 1–34.
Diamond, A.W. (1984) Feeding overlap in some tropical and temperate seabird communities. Studies in Avian Biology, 8, 24–46.
Doucette, J.L., Wissel, B. & Somers, C.M. (2011) Cormorant-fisheries conflicts: stable isotopes reveal a consistent niche for avian piscivores in diverse food webs. Ecological Applications, 21, 2987–3001.
Duffy, D.C. & Jackson, S. (1986) Diet studies of seabirds: a review of methods. Colonial Waterbirds, 9, 1–17.
Ellenberg, U., Mattern, T., Seddon, P.J. & Jorquera, G.L. (2006) Physiological and reproductive consequences of human disturbance in Humboldt Penguins: the need for species-specific visitor management. Biological Conservation, 133, 95–106.
Furness, R.W. & Cooper, J. (1982) Interactions between breeding seabird and pelagic fish populations in the Southern Benguela region. Marine Ecology Progress Series, 8, 243–250.
Gaglio, D., Sherley, R.B. & Cook, T.R. (2015) Insects in the diet of the Greater Crested Tern Thalasseus bergii bergii in southern Africa. Marine Ornithology, 43, 131–132.
Gaglio, D., Cook, T.R., Connan, M., Ryan, P.G. & Sherley, R.B. (2016) Data from: Dietary studies in birds: testing a non-invasive method using digital photography in seabirds. Dryad Digital Repository, doi: 10.5061/dryad.j647p.
García-Salgado, G., Rebollo, S., Pérez-Camacho, L., Martínez-Hesterkamp, S., Navarro, A. & Fernández-Pereira, J.-M. (2015) Evaluation of trail-cameras for analyzing the diet of nesting raptors using the Northern Goshawk as a model. PLoS One, 10, e0127585.
Gladics, A.J., Suryan, R.M., Parrish, J.K., Horton, C.A., Daly, E.A. & Peterson, W.T. (2015) Environmental drivers and reproductive consequences of variation in the diet of a marine predator. Journal of Marine Systems, 146, 72–81.
González-Solís, J., Oro, D., Pedrocchi, V., Jover, L. & Ruiz, X. (1997) Bias associated with diet samples in Audouin's Gulls. Condor, 99, 773–779.
González-Solís, J., Smyrli, M., Militão, T., Gremillet, D., Tveraa, T., Phillips, R.A. & Boulinier, T. (2011) Combining stable isotope analyses and geolocation to reveal kittiwake migration. Marine Ecology Progress Series, 435, 251–261.
Gotelli, N.J. & Colwell, R.K.
(2001) Quantifying biodiversity: procedures and pitfalls in the measurement and comparison of species richness. Ecology Letters, 4, 379–391.
Green, D.B., Klages, N.T.W., Crawford, R.J.M., Coetzee, J.C., Dyer, B.M., Rishworth, G.M. & Pistorius, P.A. (2015) Dietary change in Cape Gannets reflects distributional and demographic shifts in two South African commercial fish stock. ICES Journal of Marine Science, 72, 771–781.
Gregory, T., Carrasco Rueda, F., Deichmann, J., Kolowski, J. & Alonso, A. (2014) Arboreal camera trapping: taking a proven method to new heights. Methods in Ecology and Evolution, 5, 443–451.
Grémillet, D., Pichegru, L., Kuntz, G., Woakes, A.G., Wilkinson, S., Crawford, R.J.M. & Ryan, P.G. (2008) A junk-food hypothesis for gannets feeding on fishery waste. Proceedings of the Royal Society of London, Series B, Biological Sciences, 275, 1149–1156.
Hobday, A.J., Bell, J.D., Cook, T.R., Gasalla, M.A. & Weng, K.C. (2015) Reconciling conflicts in pelagic fisheries under climate change. Deep Sea Research Part II, 113, 291–300.
Inger, R. & Bearhop, S. (2008) Applications of stable isotope analyses to avian ecology. Ibis, 150, 447–461.
Jackson, S. & Ryan, P.G. (1986) Differential digestion rates of prey by White-chinned Petrels (Procellaria aequinoctialis). Auk, 103, 617–619.
Jordan, M.J.R. (2005) Dietary analysis for mammals and birds: a review of field techniques and animal-management applications. International Zoo Yearbook, 39, 108–116.
Karnovsky, N.J., Hobson, K.A. & Iverson, S.J. (2012) From lavage to lipids: estimating diets of seabirds. Marine Ecology Progress Series, 451, 263–284.
Larson, K. & Craig, D. (2006) Digiscoping vouchers for diet studies in bill-load holding birds. Waterbirds, 29, 198–202.
Lee, N.M. & Hockey, P.A.R. (2001) Biases in the field estimation of shorebird prey sizes. Journal of Field Ornithology, 72, 49–61.
Lynch, T.
P., Alderman, R. & Hobday, A.J. (2015) A high-resolution panorama camera system for monitoring colony-wide seabird nesting behaviour. Methods in Ecology and Evolution, 6, 491–499.
Makhado, A.B., Dyer, B.M., Fox, R., Geldenhuys, D., Pichegru, L., Randall, R.M., Sherley, R.B., Upfold, L., Visagie, J., Waller, L.J., Whittington, P.A. & Crawford, R.J.M. (2013) Estimates of numbers of twelve seabird species breeding in South Africa, updated to include 2012. Department of Environmental Affairs, Internal Report, pp. 1–16.
Marshall, A.D. & Pierce, S.J. (2012) The use and abuse of photographic identification in sharks and rays. Journal of Fish Biology, 80, 1361–1379.
McLeay, L.J., Page, B., Goldsworthy, S.D., Ward, T.M. & Paton, D.C. (2009) Size matters: variation in the diet of chick and adult crested terns. Marine Biology, 156, 1765–1780.
Moreby, S.J. & Stoate, C. (2000) A quantitative comparison of neck-collar and faecal analysis to determine passerine nestling diet. Bird Study, 47, 320–331.
Morrison, T.A., Yoshizaki, J., Nichols, J.D. & Bolger, D.T. (2011) Estimating survival in photographic capture–recapture studies: overcoming misidentification error. Methods in Ecology and Evolution, 2, 454–463.
Newman, G., Wiggins, A., Crall, A., Graham, E., Newman, S. & Crowston, K. (2012) The future of citizen science: emerging technologies and shifting paradigms. Frontiers in Ecology and the Environment, 10, 298–304.
Newsome, S.D., Bentall, G.B., Tinker, M.T., Oftedal, O.T., Ralis, K., Estes, J.A. & Fogel, M.L. (2010) Variation in δ13C and δ15N diet–vibrissae trophic discrimination factors in a wild population of California sea otters. Ecological Applications, 20, 1744–1752.
Parsons, M., Mitchell, I., Butler, A., Ratcliffe, N., Frederiksen, M., Foster, S. & Reid, J.B. (2008) Seabirds as indicators of the marine environment. ICES Journal of Marine Science, 65, 1520–1526.
Piatt, J.F., Harding, A.M.A., Shultz, M., Speckman, S.G., van Pelt, T.I., Drew, G.S. & Kettle, A.B. (2007) Seabirds as indicators of marine food supplies: Cairns revisited. Marine Ecology Progress Series, 352, 221–234.
Redpath, S.M., Clarke, R., Madders, M. & Thirgood, S.J. (2001) Assessing raptor diet: comparing pellets, prey remains, and observational data at Hen Harrier nests. Condor, 103, 184–188.
Robinson, B.G., Franke, A. & Derocher, A.E. (2015) Estimating nestling diet with cameras: quantifying uncertainty from unidentified food items. Wildlife Biology, 21, 277–282.
Safina, C., Wagner, R.H., Witting, D.A. & Smith, K.J. (1990) Prey delivered to Roseate and Common Tern chicks: composition and temporal variability. Journal of Field Ornithology, 61, 331–338.
Saraux, C., Le Bohec, C., Durant, J.M., Viblanc, V.A., Gauthier-Clerc, M., Beaune, D., Park, Y.-H., Yoccoz, N.G., Stenseth, N.C. & Le Maho, Y. (2011) Reliability of flipper-banded penguins as indicators of climate change. Nature, 469, 203–206.
Schneider, C.A., Rasband, W.S. & Eliceiri, K.W. (2012) NIH Image to ImageJ: 25 years of image analysis. Nature Methods, 9, 671–675.
Sherley, R.B., Underhill, L.G., Barham, B.J., Barham, P.J., Coetzee, J.C., Crawford, R.J.M., Dyer, B.M., Leshoro, T.M. & Upfold, L. (2013) Influence of local and regional prey availability on breeding performance of African Penguins Spheniscus demersus. Marine Ecology Progress Series, 473, 291–301.
Smale, M.J., Watson, G. & Hecht, T. (1995) Otolith atlas of southern African marine fishes. Ichthyological Monographs of the JLB Smith Institute of Ichthyology 1.
Smith, M.M. & Heemstra, P.C. (2003) Smiths' Sea Fishes. Struik Publishers, Cape Town.
Suryan, R.M., Irons, D.B., Kaufman, M., Benson, J., Jodice, P.G.R., Roby, D.D. & Brown, E.D.
(2002) Short-term fluctuations in forage fish availability and the effect on prey selection and brood-rearing in the Black-legged Kittiwake Rissa tridactyla. Marine Ecology Progress Series, 236, 273–287.
Tella, J.L., Banos-Villalba, A., Hernández-Brito, D., Rojas, A., Pacifico, E., Diaz-Luque, J.A., Carrete, M., Blanco, G. & Hiraldo, F. (2015) Parrots as overlooked seed dispersers. Frontiers in Ecology and the Environment, 13, 338–339.
Tornberg, R. & Reif, V. (2007) Assessing the diet of birds of prey: a comparison of prey items found in nests and images. Ornis Fennica, 84, 21.
Walter, C.B., Cooper, J. & Suter, W. (1987) Diet of Swift Tern chicks in the Saldanha Bay Region, South Africa. Ostrich, 58, 49–53.
Wilson, R.P. (1984) An improved stomach pump for penguins and other seabirds. Journal of Field Ornithology, 55, 109–112.
Woehler, E.J., Saviolli, J.Y., Bezerra-Francini, C.L., Neves, T. & Bastos-Francini, R. (2013) Insect prey of breeding South American Terns. Marine Ornithology, 41, 199–200.

Supporting Information

Additional Supporting Information may be found in the online version of this article.
Appendix S1. Additional methods and results for estimation of prey standard length.
Appendix S2. Results of the comparison between photo-sampling and regurgitation.

Table 1. Prey families in the greater crested tern diet identified by photo-sampling on Robben Island during the 2013, 2014 and 2015 breeding seasons. N = number of prey items identified.

Prey type     Family             Species   N
Fish          Engraulidae        1         16206
              Dussumieriidae     1         2557
              Scomberesocidae    1         1658
              Syngnathidae       2         866
              Clupeidae          1         545
              Carangidae         2         409
              Gonorynchidae      1         351
              Atherinidae        1         198
              Mugilidae          1         117
              Merlucciidae       1         76
              Pomatomidae        1         67
              Soleidae           Unid.     63
              Champsodontidae    1         58
              Clinidae           Unid.     63
              Clinidae           5         25
              Holocentridae      1         47
              Nomeidae           2         46
              Triglidae          Unid.     43
              Blenniidae         Unid.     38
              Myctophidae        1         23
              Gobiidae           Unid.     9
              Gobiidae           1         23
              Scombridae         1         22
              Scyliorhinidae     Unid.     16
              Macrouridae        Unid.     12
              Congridae          1         12
              Coryphaenidae      1         10
              Sebastidae         Unid.     6
              Gobiesocidae       1         6
              Trichiuridae       2         9
              Tetraodontidae     Unid.     5
              Cheilodactylidae   1         4
              Ophichthidae       Unid.     5
              Bregmacerotidae    Unid.     4
              Ophidiidae         1         3
              Ophidiidae         Unid.     1
              Sparidae           2         3
              Congiopodidae      Unid.     2
              Berycidae          1         2
              Centriscidae       1         1
              Chlorophthalmidae  1         1
              Batrachoididae     1         1
              Aulostomidae       Unid.     1
Cephalopods   Loliginidae        2         54
              Sepiidae           1         85
              Octopodidae        1         11
Crustaceans   Squillidae         1         244
              Brachyura*         Unid.     2
              Portunidae         1         1
              Palinuridae        1         3
Insects       Gryllidae          1         191
              Gryllotalpidae     1         2
              Sphingidae         1         2
              Sphingidae         Unid.     1
              Coleoptera**       Unid.     1

*Infraorder; **Order

Figure Legends

Fig. 1 a) Example of capturing a photo-sample of an adult greater crested tern carrying prey to the colony without causing disturbance to nesting birds and b) the resulting close-up image of the prey used for identification (anchovy) and standard length measurements. From c) to p), examples of tern prey items: c) sardine Sardinops sagax; d) Atlantic saury Scomberesox saurus; e) multi-prey load (3 anchovy and 1 sardine); f) dolphinfish Coryphaena hippurus; g) snake eel Ophichthidae sp.; h) sole Austroglossus sp.; i) longsnout pipefish Syngnathus temminckii; l) shyshark Haploblepharus sp.; m) cuttlefish Sepia vermiculata; n) common squid Loligo vulgaris; o) rock lobster Jasus lalandii; p) two-spotted cricket Gryllus bimaculatus.

Fig. 2 Example of the application (in ImageJ) of the 'line selection tool' to measure the linear distances for the three morphometric parameters: (1) eye diameter (E); (2) head width (H); and (3) operculum width (O).

Fig. 3 Sample-based rarefaction and species accumulation curves for greater crested tern diet at Robben Island. Accumulation curves show the observed species accumulation from 1,510 photo-samples (orange points) and 198 regurgitations (blue points) collected on 18 and 19 April 2015.
Rarefaction curves (solid lines) and 95% asymptotic confidence intervals (shaded areas) are based on 1,000 random permutations (shown as light grey points) of the observed data. The rarefaction curve for regurgitations is extrapolated (blue dashed line) based on a GAM fit to the photo-sampling, assuming an equal species accumulation rate beyond the range of the observed data. Vertical dotted lines show sample sizes of 190 and 1500 used to compare the methods.

Fig. 4 Accuracy of estimated standard length (SL) (y-axis) compared with actual SL values (x-axis) of anchovy from photographs in ImageJ, using allometric regressions based on estimates of eye diameter (E, open orange circles), operculum width (O, open blue circles), head diameter (H, open purple circles) and the mean of all three (mean SL, black closed circles). The mean culmen length of greater crested terns (61.2 mm) was used as the reference length to scale the pixel-based length estimates in ImageJ. The grey dashed line represents 100% accuracy.

Fig. 5 Frequency distribution of anchovy standard length from photo-samples and regurgitations (A = Robben Island; B = Seal Island).

----

BOOKS | MRS BULLETIN • VOLUME 42 • FEBRUARY 2017 • www.mrs.org/bulletin

This book gives an overview of silver (Ag) and silver halide (AgX, X = Br, I) nanoparticles used in the field of photography and other applications. Topics include structure, synthesis, photophysics, catalysis, photovoltaics, and stability. Chapter 1 introduces metal nanoparticles, plasmonics, and AgX photography. Chapter 2 reviews the shape and structure of metal, Ag, and AgX nanoparticles. The structures of nuclei and seeds, single-crystalline nanoparticles, nanoparticles modified by crystal defects, and composite structures are described. The chapter then gives methods for characterizing the crystal structures.
Chapter 3 reviews the preparation of Ag nanoparticles and related materials for plasmonics and AgX photography. The chemistry of nanoparticle synthesis is given, including nuclei and seeds, preparation of single-crystalline nanoparticles, and growth of asymmetric nanoparticles through the introduction of defects and surfactants. The preparation of AgX nanoparticles for photography focuses on AgX-gelatin interactions and a discussion of various methods for preparation of single-crystalline and tabular AgX nanoparticles. There is a description of industrial-scale AgX nanoparticle synthesis. Methods for the arrangement of AgX and Ag nanoparticles needed for fine imaging and fabrication of photographic film are described. Chapter 4 covers light absorption and scattering of Ag, including molecule-scale Ag nanoparticles as well as larger isotropic and anisotropic Au nanoparticles, nanorods, and nanoplates. There is a discussion of light absorption of J- and H-aggregated chromophores, Ag nanoparticles, and related materials in AgX photography. Chapter 5 discusses catalysis by Ag and other metal nanoparticles in plasmonics and photography. Topics discussed include photocatalytic water splitting and hydrogen production. There follows a discussion of the role of Ag catalysts in the mechanism of photographic development. Chapter 6 focuses on the photovoltaic effect in Ag and other metal nanoparticles, covering light-induced charge separation in inorganic and organic semiconductors. The chapter also covers light-induced charge separation in Ag/AgX nanoparticle systems in relation to photography. Chapter 7 covers stability of Ag and other metal nanoparticles in AgX photography. There is an extensive discussion of the effect of gelatin on the electrochemical properties of Ag nanoparticles and the reasons why it performs better than other polymers.
The chapter also discusses the electronic structure of Ag nanoparticles in gelatin layers in ambient atmosphere, and stabilization of Ag and other metal nanoparticles in photographic materials and plasmonic devices. Historically, Ag and AgX nanoparticles have played central roles in photography. Due to the rise of digital photography, knowledge of AgX photography risks being lost. This book is important because it gives an overview of this field drawn from the history of photography and how it can be applied to emerging technologies such as catalysis, photovoltaics, and plasmonics. The author has worked in the photography industry for nearly 50 years and has a deep knowledge that is reflected in this book. There are 566 references, including critical articles from the early history of AgX photography, and 168 figures. This book is a useful reference for researchers and graduate students interested in all aspects of plasmonics and metal nanoparticles.

Reviewer: Thomas M. Cooper of the Air Force Research Laboratory, USA.

Silver Nanoparticles: From Silver Halide Photography to Plasmonics
Tadaaki Tani
Oxford University Press, 2015
240 pages, $110.00
ISBN 978-0-19-871460-6

In the interest of transparency, MRS is a co-publisher of this title. However, this review was requested and reviewed by an independent Book Review Board.

This authoritative volume introduces the reader to computational thermodynamics and the use of this approach to the design of material properties by tailoring the chemical composition. The text covers applications of this approach, introduces the relevant computational codes, and offers exercises at the end of each chapter. The book has nine chapters and two appendices that provide background material on computer codes. Chapter 1 covers the first and second laws of thermodynamics, introduces the spinodal limit of stability, and presents the Gibbs–Duhem equation. Chapter 2 focuses on the Gibbs energy function.
Starting with a homogeneous system with a single phase, the authors proceed to phases with variable compositions and polymer blends. The discussion includes the contributions of

Computational Thermodynamics of Materials
Zi-Kui Liu and Yi Wang
Materials Research Society and Cambridge University Press, 2016
260 pages, $89.99 (e-book $72.00)
ISBN 9780521198967

----

Comparison of macular hole size measured by optical coherence tomography, digital photography, and clinical examination
SE Benson, PG Schlottmann, C Bunce and DG Charteris

Abstract
Aim: To report on the agreement of macular hole size as measured using optical coherence tomography (OCT), Topcon digital photography, and surgeon estimate on clinical examination.
Methods: Observational cohort series of patients who underwent macular hole surgery over an 18-month period. Patients had an OCT scan and digital fundus photographs preoperatively. At operation the surgeon estimated the size of the macular hole. The agreement between methods was assessed using the technique described by Bland and Altman.
Results: There was good repeatability of photographic and OCT assessment and no evidence of systematic bias between repeated macular hole measurement by digital photography (P = 0.36) or by OCT (P = 0.58). There was evidence of systematic bias between photographic and surgeon measurements (P < 0.001), and between OCT and surgeon (P < 0.001), with photographic and OCT assessment being greater.
There was also evidence of bias between OCT and photographic measurements, with photographic measurement tending to be greater than the OCT measurement for smaller holes and lower for larger holes (P = 0.02).
Conclusions: OCT and Topcon digital photography have good repeatability for measurement of macular hole size. Both these methods measured larger hole sizes compared to surgeon estimate. Digital photography and OCT methods did not agree.
Eye (2008) 22, 87–90; doi:10.1038/sj.eye.6702947; published online 28 September 2007
Keywords: digital photography; macular hole; optical coherence tomography; size

Introduction
Macular hole size is a prognostic indicator for both visual outcome and anatomical success of surgery.1–3 Various imaging modalities have been used to measure the size of macular holes, including optical coherence tomography (OCT),1–3 digital photography, and the confocal scanning laser ophthalmoscope.4 This study was undertaken to assess the repeatability of OCT and digital photographic analyses and to determine the agreement between these two methods and clinical assessment.

Materials and methods
Patients were a cohort who underwent macular hole surgery over an 18-month period at Moorfields Eye Hospital. All patients underwent analysis by a STRATUSOCT Model 3000 scanner (Zeiss Humphrey Instruments, Dublin, CA, USA) radial line OCT scan producing 6 × 6 mm scans, and digital fundus photographs taken with the Topcon TRC 50IA retinal camera (Topcon Corporation, Tokyo, Japan) the day prior to surgery. OCT scans were considered of good quality and used only if all six radial line images had a signal-to-noise ratio higher than 35 dB, more than 95% of accepted A-scans and a signal strength of 5 or more, as recommended by the OCT scanner manufacturers.5 All six scans were examined by an ophthalmologist experienced in OCT interpretation.
Macular hole size was measured using callipers in the 'retinal thickness analysis' mode (Figure 1); the scan with the largest distance between the edges of the hole was taken to be the most accurate, as this was more likely to represent the true diameter of the hole rather than an 'off centre' measurement. This is particularly relevant due to the central scotoma and poor central fixation in many macular hole patients. From the chosen scan, the shortest distance across the full-thickness defect was defined as the size of the hole, as described in previous studies.2,3 A photographic technician experienced in measuring images using the Topcon IMAGEnet Digital Systems (Topcon America Corporation, Paramus, NJ, USA) software system analysed one digital fundus photograph from each patient. If more than one photograph had been taken, the image with the best quality was assessed. Images were considered to be of good quality when the retinal nerve fibre layer was visualised. All images were of 35° field size, two-dimensional (stereo images were not used), of 1024 pixels per inch (ppi) resolution, and were viewed in colour mode. Both a horizontal and a vertical diameter measurement was recorded for each photograph (Figure 2). The vertical and horizontal measurements were then averaged for each photograph, resulting in a single diameter measurement per patient.

Received: 15 December 2006; accepted in revised form: 27 April 2007; published online: 28 September 2007. Correspondence: SE Benson, Moorfields Eye Hospital, City Road, London EC1V 2PD, UK. Tel: +020 7566 2283; Fax: +020 7566 2285. E-mail: sarah.benson@moorfields.nhs.uk. Eye (2008) 22, 87–90. © 2008 Nature Publishing Group. All rights reserved 0950-222X/08. doi:10.1038/sj.eye.6702947. www.nature.com/eye
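As a toy illustration of the photographic protocol described above (a horizontal and a vertical diameter are measured and averaged into a single estimate), the arithmetic can be sketched as follows. The micrometres-per-pixel calibration factor and the pixel values are assumed for the example only; they are not taken from the study.

```python
# Assumed calibration: micrometres represented by one image pixel at the
# acquisition settings used (hypothetical value, for illustration only).
MICRONS_PER_PIXEL = 5.4


def hole_diameter_um(horizontal_px, vertical_px, scale=MICRONS_PER_PIXEL):
    """Single diameter estimate per photograph: mean of the horizontal and
    vertical pixel measurements, converted to micrometres."""
    return (horizontal_px + vertical_px) / 2 * scale


# Hypothetical pixel measurements of one hole
diameter = hole_diameter_um(100, 110)  # about 567 micrometres
```

The averaging step simply smooths out any slight ellipticity of the hole; the conversion factor would in practice come from the camera and field-size calibration.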
The operating surgeon (of which there were four in total) was asked to estimate hole size at operation, in relation to the perceived size of the optic disc, to the nearest 50 μm, using either the BIOM wide-angle viewing system or the Machemer contact lens. Assessment of agreement between the methods of measurement was performed using the Bland and Altman technique.6,7 To assess repeatability, we performed repeat measurements of macular hole size by masked analysis of a second digital photograph and a second OCT for each patient. Repeat surgeon estimate was not performed, as the surgeon could not be adequately masked to their previous estimate during surgery. All statistical analyses were conducted using STATA version 7 (College Station, TX, USA). We certify that all applicable institutional and governmental regulations concerning the ethical use of human volunteers were followed during this research.

Results
Fifty patients were recruited, of which three did not have digital photographs and 11 had no surgeon estimate of hole size. A further three patients did not have an OCT scan. The median size of all macular holes measured by OCT was 512 μm and the range was 212–1073 μm. The median as measured by surgeon estimate was 400 μm (range 100–800) and the median as measured by digital photography was 578 μm (range 153.5–996).

Repeatability
There was no evidence of systematic bias between repeat macular hole measurements by digital photography (47 patients, P = 0.36) or by OCT (47 patients, P = 0.58). The 95% limits of agreement for photographic readings were from −36.99 to 32.31 and for OCT readings from −121.66 to 112.22.

Figure 1 OCT measurement of macular hole diameter.
Figure 2 Macular hole diameter measurement via Topcon digital photography.
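The agreement statistics reported above (a mean difference with its standard error, and 95% limits of agreement taken as the mean difference ± 1.96 SD of the paired differences) can be sketched in a few lines. This is a generic illustration of the Bland–Altman computation, not the authors' code, and the paired hole diameters below are hypothetical.

```python
import statistics


def bland_altman(m1, m2):
    """Bland-Altman agreement statistics for two paired measurement methods:
    mean difference, standard error of the mean difference, and the 95%
    limits of agreement (mean difference +/- 1.96 * SD of the differences)."""
    diffs = [a - b for a, b in zip(m1, m2)]
    mean_diff = statistics.mean(diffs)
    sd = statistics.stdev(diffs)            # sample SD of the differences
    se = sd / len(diffs) ** 0.5             # standard error of the mean difference
    limits = (mean_diff - 1.96 * sd, mean_diff + 1.96 * sd)
    return mean_diff, se, limits


# Hypothetical paired hole diameters in micrometres (not data from this study)
photo = [578, 450, 610, 720, 300]
oct_m = [512, 470, 590, 760, 340]
mean_diff, se, (lower, upper) = bland_altman(photo, oct_m)
```

A systematic bias shows up as a mean difference far from zero relative to its standard error, while wide limits of agreement indicate that the two methods cannot be used interchangeably even without bias.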
Comparison of assessments
There was evidence of systematic bias between digital photography and surgeon macular hole measurements, with photographic readings being almost always higher than the surgeon assessment (37 patients, mean difference (SE) 173.7 μm (29.00), P < 0.001). Likewise, between OCT and surgeon (37 patients, mean difference (SE) 150.3 μm (29.57), P < 0.001), OCT measurements were usually higher than surgeon estimate. The 95% limits of agreement for photographic and surgeon readings were from −179.0 to 526.4 μm and for OCT and surgeon readings from −209.4 to 509.9 μm. Figure 3 illustrates that while photographic measurements tended to be higher than the corresponding OCT measurement for smaller holes, the reverse was true for larger holes, with the photographic measurement being lower than that of the OCT. Overall, there was evidence of systematic bias, with a mean (SE) difference of −61 μm (24.1), P = 0.02. The 95% limits of agreement were wide, ranging from −349.7 to 227.7 μm.

Discussion
Accurate assessment of the size of macular holes is important both for research studies and to guide clinical management, size of hole having been shown to affect anatomical and visual success.3,8,9 This study was undertaken to determine the extent of agreement between commonly used methods of macular hole size measurement (OCT, digital photography, and clinical examination), and the results show that while OCT and digital photography have good repeatability of results, the two methods do not have a high level of agreement in their assessment of macular hole size. Since OCT and digital photography are intuitively less subjective than a clinician's estimate, it might be expected that there would be closer agreement between these two methods than of either with the surgeon's estimate.
OCT and digital photographic analysis are, however, not totally objective: in OCT the callipers must be placed by an observer; likewise, the observer must choose where to start and end the measurement when using Topcon digital photography. Detailed study of individual pixels of digital images may allow more scientific and accurate measurement of hole size, but would be both time-consuming and impractical clinically. It is notable that the photographic measurement tended to result in a larger hole size compared to OCT measurement for small macular holes, and conversely gave a smaller hole size compared to OCT for larger holes. It may be that patients with a larger macular hole, and therefore a larger central scotoma, tracked the OCT scanning line, resulting in a falsely elongated macular hole size measurement. Our results suggest that clinical examination underestimates macular hole size compared to OCT and digital photography. It is possible that clinical assessment may underestimate hole size because of reflected light from around the hole or due to glial tissue along the hole edges.10 The statistical assessment of agreement as described by Bland and Altman6,7 was chosen as it is applied when direct measurement without adverse effects is difficult or impossible (in this instance, direct measurement of the macular hole). As the true values remain unknown, we are using indirect methods of measurement and these are assessed in pairs (ie, OCT, digital photography, and clinical examination).
Bland and Altman6,7 also stress the importance of repeatability in assessing agreement between methods, as a method with poor repeatability will never agree well with another method.6,7 The ideal model for assessment of agreement as described would have involved the same observer taking the different measurements to avoid interobserver variability;6,7 however, because of the skill and experience required by each of the three methods, the measurements in this study were performed by different individuals. Interobserver variability was also introduced by having different surgeons (four in total) estimating the hole size at the time of operation; however, we consider that this represents the 'real-life' situation encountered by vitreoretinal surgeons. We performed repeatability tests for the OCT and digital photography, an essential step in the assessment of agreement. It was not possible in the operative time frame for surgeons to repeat measurement by clinical examination while remaining masked to their previous result, and we consequently do not have a repeatability analysis for this. A number of variables were considered in planning measurement of hole size by each method. For example, the difficulty of fixation with a central scotoma may have prevented a true maximum diameter section through the hole being obtained with both the digital photographs and OCT.

Figure 3 Agreement between OCT and Topcon digital photographic measurements. For smaller macular holes (to the left of the horizontal axis), there is a tendency for photographic measurements to be higher than the OCT measurements (observations tend to be above the horizontal line), whereas for larger holes the photographic measurements are lower than those of the OCT (observations are below this line).
In an attempt to reduce this problem, the scans and photographs were either repeated until optimal fixation was obtained or, for the OCT, the external fixator light was used. Refractive errors can affect magnification of absolute measurements; as surgeon estimate of macular hole size (although using two different viewing systems) was a relative measurement (relative to optic nerve size), this does not apply.11 No correction was made for different refractive errors in our methods of measurement (OCT or digital photography). Therefore, any bias introduced would have applied to all sets of measurements, although possibly in unequal proportions. We limited surgeons to estimating hole size to the nearest 50 μm, as we felt any smaller increment would be impractical in the clinical setting. We acknowledge, however, that limiting only this one method of measuring hole size could introduce bias. Using the Topcon photographic system, an infinite number of measurements can be taken, and we concluded that averaging two was practical in a clinical setting. Furthermore, we selected the OCT scanning mode of '6 × 6 radial line', providing six scans evenly spread at a 60° angle centred at the macula. This was considered a clinically practical compromise likely to incorporate the maximum hole diameter in one of the six scans. The 'radial line' scanning mode of the OCT, while taking a few seconds longer than the 'fast macula scan', provides greater resolution of image (512 A-scans vs 128 A-scans),5 and was therefore chosen to determine the hole edges more accurately. OCT and digital photography demonstrated good repeatability in the measurement of macular hole size, although agreement was not found between the measurements obtained using OCT, photography, and clinical examination.
Therefore, we would advise that caution should be exercised in comparing studies where two different methods of macular hole size measurement are used, and in generalising the results when advising patients on their prognosis. To determine which indirect measurement modality is the most accurate, agreement would ideally be assessed against a direct measurement of hole size. Potentially, this could be performed using optical assessment at operation or on pathology specimens, although both such approaches have inherent weaknesses because of optical aberrations and processing artefacts. In clinical practice, it is likely that either OCT or photography would be used to measure hole size and advise patients of their prognosis. As both methods are reliably repeatable, they would appear to be equally valid (and better than clinical assessment) if individual units audit surgical results against hole size measurements. In addition, it may well be that there is no arbitrary cutoff hole size (eg, 400 μm) where the surgical prognosis significantly alters, and that outcome varies directly with the continuum of hole size. The absolute measurement of the size of the macular hole in comparison to previous studies may therefore be only of secondary importance.

Acknowledgements
Grant information: financial support provided by Pfizer, Tadworth, Surrey, grant no. PARA-0505-084. The authors have no competing interests to declare.

References
1 Ullrich S, Haritoglou C, Gass C, Schaumberger M, Ulbig MW, Kampik A. Macular hole size as a prognostic factor in macular hole surgery. Br J Ophthalmol 2002; 86: 390–393.
2 Ip MS, Baker BJ, Duker JS, Reichel E, Baumal CR, Gangnon R et al. Anatomical outcomes of surgery for idiopathic macular hole as determined by optical coherence tomography. Arch Ophthalmol 2002; 120: 29–35.
3 Kang SW, Ahn K, Ham D-I. Types of macular hole closure and their clinical implications. Br J Ophthalmol 2003; 87: 1015–1019.
4 Kobayashi H, Kobayashi K.
Correlation of quantitative three-dimensional measurements of macular hole size with visual acuity after vitrectomy. Graefes Arch Clin Exp Ophthalmol 1999; 4: 283–288.
5 Zeiss. STRATUSOCT User Manual P/N 55556 Rev A. 2004.
6 Bland JM, Altman DG. Statistical methods for assessing agreement between two methods of clinical measurement. Lancet 1986; i: 307–310.
7 Bland JM, Altman DG. Measuring agreement in method comparison studies. Stat Methods Med Res 1999; 8: 135–160.
8 Puliafito CA, Hee MR, Lin CP, Reichel E, Schuman JS, Duker JS et al. Imaging of macular diseases with optical coherence tomography. Ophthalmology 1995; 102: 217–229.
9 Ryan EH, Gilbert HD. Results of surgical treatment of recent-onset full-thickness idiopathic macular holes. Arch Ophthalmol 1994; 112: 1545–1553.
10 Madreperla SA, Geiger GL, Funata M, de la Cruz Z, Green R. Clinicopathologic correlation of a macular hole treated by cortical vitreous peeling and gas tamponade. Ophthalmology 1994; 101: 682–686.
11 Pach J, Pennell DO, Romano PE. Optic disc photogrammetry: magnification factors for eye position, centration and ametropias, refractive and axial; and their application in the diagnosis of optic nerve hypoplasia. Ann Ophthalmol 1989; 21: 454–462.

----

Soft tissue manipulation for single implant restorations
© 2011 Macmillan Publishers Limited. All rights reserved.
A. Alani1 and M.
Corson2

VERIFIABLE CPD PAPER

Achievement of optimal aesthetics on implants in the anterior region can be difficult due to inherent differences to the natural dentition. An important consideration is the peri-implant soft tissues, which can be modified to create a more natural emergence profile and contour. The methods with which this can be achieved vary, as can methods for recording soft tissue changes and relaying this to technician colleagues. This review appraises some techniques available for the manipulation of the soft tissue profile on single implant restorations.

IN BRIEF
• Identifies the importance of soft tissue aesthetics for single implant restorations.
• Details some of the techniques currently available for soft tissue modification of peri-implant tissues.
• Illustrates the advantages and disadvantages of the different techniques available for manipulation and subsequent impression procedures.

1*SpR in Restorative Dentistry, 2Consultant in Restorative Dentistry, Department of Prosthodontics, Newcastle Dental Hospital, Richardson Road, Newcastle, NE2 4AZ. *Correspondence to: Dr Aws Alani. Email: awsalani@hotmail.com. Refereed Paper. Accepted 9 September 2011. DOI: 10.1038/sj.bdj.2011.904. ©British Dental Journal 2011; 211: 411-416. BRITISH DENTAL JOURNAL VOLUME 211 NO. 9 NOV 12 2011.

Fig. 1 Single implant restoration (21). Note the gingival blanching, abrupt emergence and poor gingival contour when compared to adjacent teeth
Fig. 2 Definitive restoration on implant 21 subsequent to tissue modification. Note the healthy band of tissue associated with the gingival margin
Fig. 3 Circumferentially, teeth are inherently ovoid in shape in comparison to implants, which are circular
Fig. 4 The cemento-enamel margin of teeth is undulating, varying between interproximal areas mesially and distally to the buccal. In comparison, the implant restoration margin is circumferentially uniform
Fig. 5 Two different types of healing abutment. The left abutment has some taper incorporated whereas the right abutment is parallel sided
Fig. 6 Superficially placed implant – the scope for emergence and soft tissue modification is limited
Fig. 7 Implant with healing abutment in the 23 site
Fig. 8 Modification of the interim restoration using flowable composite to improve the emergence profile

INTRODUCTION
The interface between fixed prostheses and soft tissues is important for the success of restorations in the anterior region.1 The success rates of implants can be assured as long as appropriate surgical protocols are followed in the presence of adequate quality and quantity of bone.2 Unfortunately, the presence of adequate bone for implant placement does not always coincide with the ideal soft tissue architecture for an aesthetic result (Fig. 1). This can be considered more significant with the provision of implant restorations replacing multiple tooth units, or the solitary implant adjacent to natural teeth, especially in patients with a high smile line (Fig. 2).3

Where teeth, implants and edentate spaces require restoration, achieving aesthetic harmony can be difficult. In reality, the provision of adequate function may be easier to achieve than aesthetics due to a variety of factors. The nature and quality of junctional tissues differ between natural teeth and implants.4 As such, creation of an optimal 'emergence profile' on implant retained crowns can be challenging. This can be compounded when the placement of implants contrasts with the position of natural teeth axially, mesio-distally and bucco-lingually.

The difficulties in creating a natural transition between single implant restorations and teeth are primarily associated with the differences in circumferential dimensions, shapes and position (Figs 3-4). Differences in the nature and form of the junctional epithelium between implants and teeth are also an important consideration. Due to these inherent variations, the discrepancies in the emergence profiles of teeth and implants are more apparent in the aesthetic zone. In contrast to crowns on natural teeth, implant crown restorations can vary in their depth subgingivally between the implant fixture and the gingival margin. Indeed, once implants have fully integrated, the connection of a healing abutment is usually the initial stage in emergence development. At abutment connection surgery the majority of healing abutments are cylindrical; this is also a feature of the impression copings, and so they do not simulate the more oval cross section of anterior teeth (Figs 4-5).7

Once optimal healing is achieved, the soft tissues require manipulation to produce a more oval and funnel shape rather than the cylindrical and parallel emergence which would otherwise be created from the initial impression and subsequent master cast. Implant restoration emergence can be improved by a period of soft tissue sculpturing before the definitive restoration.8 Depending on the depth of the implant and the gingival biotype, the potential for soft tissue manipulation can differ. Where an implant is placed sufficiently deep in a thick biotype, the potential for gradual emergence of the final crown is good. In contrast, the potential for manipulation is limited where the implant is shallow to the gingival margin in a thin biotype, which can make the emergence profile abrupt and unnatural (Fig. 6). The vertical distance between the implant and the gingival margin is important, as the potential for sculpturing is directly proportional to the depth of the implant.9

Laboratory duplication of the soft tissue profile encircling implant restorations can be achieved using a soft tissue substitute.
These techniques sculpt the soft tissues by way of modification of the interim restoration. The changes in the gingival tissues are then recorded and reproduced in the definitive restoration. Historically, autopolymerised resin has been used to modify interim abutments for emergence modification.16 The use of resin intra-orally and subgingivally that may not have fully polymerised could cause trauma to tissue that may already be delicate post implant surgery.12 This can be overcome with the use of direct composite (Fig. 8).17 The use of composite can be considered advantageous due to the material's command, light set and favourable handling properties. Where direct composite resin is applied to a laboratory-made interim restoration, the shade matching between the two materials can be difficult, although as the shade difference is likely to be subgingival this factor can be considered less crucial.17 One notable advantage is the ability to bond to composite incrementally; this is important if the soft tissue cuff is taut and will only allow for gradual modification of emergence over a period of time without undue strain to the gingiva. Where the gingival margin requires more manipulation in the mesial as opposed to the distal portion (or vice versa), composite application can be customised for these differences and can aid in the production of papilla in favourable situations. The use of different types of composite may aid in the sculpturing process. Traditional composite resin can be employed where a significant amount of soft tissue manipulation is required. Where finer adjustments are required, flowable composite can be used.

Where gingival blanching occurs due to changes in emergence, the restoration can be gradually torqued over a period of 15 minutes (Fig. 9).18 Once torqued, any residual blanching should ease after 15 minutes. If this does not occur, the restoration should be removed and its dimensions modified to prevent undue trauma, as this is likely to reduce control of emergence aesthetics.12 Modification of the subgingival portion may therefore take several visits. This soft tissue maturation process may take up to two months depending on the degree of manipulation required.19

Where there is a need for implant restoration removal, the soft tissues can collapse due to loss of support. This can be overcome by placing an appropriately contoured healing abutment or customised index while the restoration is out of the mouth (Figs 10-11). The advantages and disadvantages of each technique are outlined in Table 1.

Table 1 Advantages and disadvantages of the soft tissue profile transfer techniques

Customised impression coping
Advantages: Allows dimensional changes that may have occurred to adjacent teeth to be recorded, eg composite augmentation procedures.
Disadvantages: Second impression required. Due to the additional stages required, may introduce inaccuracies due to volumetric changes of the materials used. Requires a short period without the restoration and may result in soft tissue collapse.

Use of the provisional restoration as an impression coping
Advantages: As for the customised impression coping.
Disadvantages: Second impression required. Requires a longer period without the restoration and may result in soft tissue collapse. The patient may go without the provisional restoration during fabrication of the definitive restoration, resulting in soft tissue collapse and the need for development again at definitive fit.

Injection of soft tissue substitute around the provisional restoration on the master cast
Advantages: Can be done chairside. The patient retains the provisional restoration during definitive restoration construction. No need for a second impression.
Disadvantages: The patient goes without the customised restoration for a short period of time. The soft tissue substitute may require further modification to resemble the clinical situation.

Fig. 9 On torqueing, blanching resulted in the adjacent soft tissues, which ceased after 15 minutes
Fig. 10 Virtually parallel emergence profile soon after the healing abutment has been removed to commence the restorative phase
Fig. 11 Emergence profile after eight weeks of modification using flowable composite and an interim restoration. The restoration was modified at initial fit and then again approximately four weeks later
Fig. 12 Use of a silicone-based inter-occlusal registration material to maintain the soft tissue profile while the restoration is out of the mouth

RECORDING SOFT TISSUE CHANGES AND COMMUNICATION WITH TECHNICAL COLLEAGUES

Once the soft tissue and emergence profile have reached an optimum level, the new soft tissue profile needs to be recorded on an implant level working model with an appropriate soft tissue substitute for definitive restoration construction.13 The production of a new implant level impression and subsequent new master cast is one option, although there are numerous alternatives to this. The record of the new soft tissue profile can take place outside of the clinical environment once the interim restoration has been sterilised.20 One option is to place an implant analogue in dental stone to which the customised restoration is attached. An impression of this is taken in silicone and from this a customised impression post is fabricated for the definitive impression (Figs 12-16).20 Alternatively, the customised restoration itself can be used as an impression coping or relocated into the set impression (Fig. 17).21,22 The disadvantage of this technique is that the patient will be without a restoration for some period while a working model is fabricated. The time lapse between model fabrication and reinsertion of the interim restoration may result in relapse of the modified soft tissues.

To prevent the need for a delay in reinserting the interim restoration, techniques have been described that avoid the need for a second implant level impression. The placement of the customised restoration on an accurate master cast and the injection of soft tissue analogue around this can recreate the soft tissue profile for fabrication of the definitive prosthesis.23 This technique can be achieved at the chairside, but the importance of recording the gingival profile and level in addition to the emergence is central to the success of the technique (Figs 18-19). Identification of the peri-implant margin intra-orally before transfer to the working model can be achieved using an indelible ink pencil. Intraoral scribing around the margin clinically and, once sterilised, trimming of the soft tissue analogue to this level on the model will complete the registration process (Figs 20-22). Alternatively, a silicone impression of both the interim crown and the soft tissue can be taken. The interim crown can then be relocated into the impression and an implant analogue connected before pouring of the working model. This technique requires the patient to be without the restoration for some time while the working model is produced.
The use of digital photography to provide technical colleagues with adequate information in shade development has been described.24 This practice is also advantageous in recreating optimal restorations for crown emergence. The gingival biotype adjacent to the implant restoration can be assessed in addition to papilla dimensions using digital photography. The advantages and disadvantages of each technique are outlined in Table 2.

Fig. 13 Customised restoration removed and healing abutment placed
Fig. 14 Customised restoration located onto an implant replica embedded in some laboratory stone
Fig. 15 After putty was applied around the restoration, the matrix created was then used to fabricate a customised impression coping with composite
Fig. 16 Customised impression coping in vivo
Fig. 17 Relocation of the interim crown restoration into a silicone impression subsequent to completion of soft tissue modification. The implant analogue was connected and a new working model produced

Table 2 Advantages and disadvantages of the emergence modification techniques

Laboratory modification of the working model by way of burring material away from around the implant replica
Advantages: Quick and easy. The clinician and technician can decide on the optimal emergence early in the restorative phase.
Disadvantages: Gives neither the clinician nor the technician an appreciation of the gingival biotype. This can result in over-contouring of the restoration and so the development of complications such as resorption or recession. An incision may be needed to seat the temporary, with potential for further recession or resorption.

Surgical modification of peri-implant tissues
Advantages: Establishes the soft tissue profile by direct manipulation of the tissues as required. Can give the clinician the opportunity to use adjunctive surgical procedures, such as graft placement to bulk tissues, if required.
Disadvantages: If conducted at the abutment connection stage, the stability of the soft tissues after healing may be difficult to assess. Further healing and soft tissue changes may result in a suboptimal definitive restoration.

Modification of the emergence profile on the interim restoration
Advantages: Gives the clinician an opportunity to gradually modify the restoration while gauging the effect on the peri-implant tissues, so optimising the aesthetic result. Allows the soft tissues to mature around a restoration before providing the definitive restoration. Gives the patient the opportunity to trial the restoration with respect to aesthetics, function and speech.
Disadvantages: Takes time and visits to reach the optimal result, so elongates treatment. Loss of the interim restoration during the sculpting process can result in soft tissue regression to previous dimensions.

Indexing of the implant at first stage surgery and subsequent fitting of an optimised temporary at abutment connection
Advantages: Avoids multiple visits for modification of the temporary.
Disadvantages: Need for an index of the implant at placement, so good primary stability is necessary. May result in an overcontoured emergence with potential for recession and resorption.

ABUTMENT MODIFICATION TO OPTIMISE OUTCOMES

Where the soft tissue profile is suboptimal or the gingival biotype is thin, modification of the abutment to either improve the gingival tone or relocate the cement lute can be considered. The pink aesthetic score identifies numerous factors in the achievement of the ideal aesthetic gingival profile.25 One of these is the colour of the peri-implant mucosa, which can be influenced by the depth and position of the implant.
Where implant placement is more superficial (and so the peri-implant soft tissues are inherently more translucent), the use of gingival-toned porcelain directly on the abutment or implant restoration can be considered (Fig. 23). In some cases the cement lute can be relocated supragingivally by way of ceramic pressed directly onto the zirconia abutment.26 This technique allows the emergence profile to be replicated on the custom abutment while accommodating continuity of shade between the sub- and supragingival portions of the restoration. This technique does have some disadvantages. Ceramic has shown inferior biocompatibility to peri-implant tissues when compared to zirconia and titanium.27,28 Relocation of the cement lute supragingivally may result in deterioration over time, resulting in microleakage and discolouration.

CONCLUSION

Where adequate soft tissue mass and a favourable biotype are present around a single implant restoration, the manipulation of soft tissues to create optimal marginal shape and position can be considered to improve the aesthetics of the definitive restoration. This can be achieved in a number of ways and the recording of these changes can also vary. Care needs to be taken not to manipulate the tissues to a degree that would cause undue trauma or prolonged discomfort.

The authors would like to thank Newcastle Dental Hospital Conservation Laboratory for the technical work presented in this manuscript.

Fig. 18 Implant (11) restored with an interim progressively modified restoration
Fig. 19 Indelible ink pencil used to mark the gingival margin on the crown before removal and relocation on the working model
Fig. 20 Gingival substitute applied to the model and trimmed to the recorded gingival level. Note the soft tissue substitute present on the distal margin, which does not represent the clinical situation. This required further trimming before construction of the definitive restoration
Fig. 21 Implant retained bridge cantilevered from the 21 site into 22. Note the pink porcelain glazed onto the subgingival portion
Fig. 22 Ceramic pressed directly onto a zirconia abutment. The dimensions of the subgingival portion were transferred from a previously modified interim restoration
Fig. 23 The final cemented restoration. Note the cement lute line is supragingival

1. LeSage B P. Improving implant aesthetics: prosthetically generated papilla through tissue modelling with composite. Pract Proced Aesthet Dent 2006; 18: 257–263.
2. Adell R, Eriksson B, Lekholm U, Brånemark P I, Jemt T. Long-term follow-up study of osseointegrated implants in the treatment of totally edentulous jaws. Int J Oral Maxillofac Implants 1990; 5: 347–359.
3. Alani A, Maglad A, Nohl F. The prosthetic management of gingival aesthetics. Br Dent J 2011; 210: 63–69.
4. Yeung S C. Biological basis for soft tissue management in implant dentistry. Aust Dent J 2008; 53 Suppl 1: S39–S42.
5. McArdle B F, Clarizio L F. An alternative method for restoring single-tooth implants. J Am Dent Assoc 2001; 132: 1269–1273.
6. Belser U C, Grütter L, Vailati F, Bornstein M M, Weber H P, Buser D. Outcome evaluation of early placed maxillary anterior single-tooth implants using objective esthetic criteria: a cross-sectional, retrospective study in 45 patients with a 2- to 4-year follow-up using pink and white esthetic scores. J Periodontol 2009; 80: 140–151.
7. Bain C A, Weisgold A S. Customized emergence profile in the implant crown - a new technique. Compend Contin Educ Dent 1997; 18: 41–45.
8. Azer S S. A simplified technique for creating a customized gingival emergence profile for implant-supported crowns. J Prosthodont 2010; 19: 497–501.
9. Stein J M, Nevins M. The relationship of the guided gingival frame to the provisional crown for a single-implant restoration. Compend Contin Educ Dent 1996; 17: 1175–1182.
10. Mihram W L. Dynamic biologic transformation of the periodontium: a clinical report. J Prosthet Dent 1997; 78: 337–340.
11. Biggs W F, Litvak A L Jr. Immediate provisional restorations to aid in gingival healing and optimal contours for implant patients. J Prosthet Dent 2001; 86: 177–180.
12. Macintosh D C, Sutherland M. Method for developing an optimal emergence profile using heat-polymerized provisional restorations for single-tooth implant-supported restorations. J Prosthet Dent 2004; 91: 289–292.
13. Tarlow J L. Procedure for obtaining proper contour of an implant-supported crown: a clinical report. J Prosthet Dent 2002; 87: 416–418.
14. Reikie D F. Restoring gingival harmony around single tooth implants. J Prosthet Dent 1995; 74: 47–50.
15. Davidoff S R. Late stage soft tissue modification for anatomically correct implant-supported restorations. J Prosthet Dent 1996; 76: 334–338.
16. Spyropoulou P E, Razzoog M, Sierraalta M. Restoring implants in the esthetic zone after sculpting and capturing the periimplant tissues in rest position: a clinical report. J Prosthet Dent 2009; 102: 345–347.
17. Al-Harbi S A, Edgin W A. Preservation of soft tissue contours with immediate screw-retained provisional implant crown. J Prosthet Dent 2007; 98: 329–332.
18. Kim T H, Cascione D, Knezevic A. Simulated tissue using a unique pontic design: a clinical report. J Prosthet Dent 2009; 102: 205–210.
19. Elian N, Tabourian G, Jalbout Z N et al. Accurate transfer of peri-implant soft tissue emergence profile from the provisional crown to the final prosthesis using an emergence profile cast. J Esthet Restor Dent 2007; 19: 306–314.
20. den Hartog L, Raghoebar G M, Stellingsma K, Meijer H J. Immediate loading and customized restoration of a single implant in the maxillary esthetic zone: a clinical report. J Prosthet Dent 2009; 102: 211–215.
21. Jansen C E. Guided soft tissue healing in implant dentistry. J Calif Dent Assoc 1995; 23: 57–58, 60, 62.
22. Attard N, Barzilay I. A modified impression technique for accurate registration of peri-implant soft tissues. J Can Dent Assoc 2003; 69: 80–83.
23. Neale D, Chee W W. Development of implant soft tissue emergence profile: a technique. J Prosthet Dent 1994; 71: 364–368.
24. Amet E M, Milana J P. Restoring soft and hard dental tissues using a removable implant prosthesis with digital imaging for optimum dental esthetics: a clinical report. Int J Periodontics Restorative Dent 2003; 23: 269–275.
25. Fürhauser R, Florescu D, Benesch T, Haas R, Mailath G, Watzek G. Evaluation of soft tissue around single-tooth implant crowns: the pink esthetic score. Clin Oral Implants Res 2005; 16: 639–644.
26. Kamalakidis S, Paniz G, Kang K H, Hirayama H. Nonsurgical management of soft tissue deficiencies for anterior single implant-supported restorations: a clinical report. J Prosthet Dent 2007; 97: 1–5.
27. Abrahamsson I, Berglundh T, Glantz P O, Lindhe J. The mucosal attachment at different abutments. An experimental study in dogs. J Clin Periodontol 1998; 25: 721–727.
28. Welander M, Abrahamsson I, Berglundh T. The mucosal barrier at implant abutments of different materials. Clin Oral Implants Res 2008; 19: 635–641.
Soft tissue manipulation for single implant restorations. Contents: Introduction; Techniques in modifying the emergence profile of restorations; Indirect techniques; Direct techniques; Recording soft tissue changes and communication with technical colleagues; Abutment modification to optimise outcomes; Conclusion; Acknowledgements; Note; References

work_lumdbt76xvdmrkwcndrnwovhpa ---- Emergency Preparedness: Life, Limb, the Pursuit of Safety and Social Justice

DOI: 10.4018/javet.2012040103
Corpus ID: 41068673

@article{Russo2012EmergencyPL, title={Emergency Preparedness: Life, Limb, the Pursuit of Safety and Social Justice}, author={M. Russo and V. Bryan and Gerri Penney}, journal={Int. J. Adult Vocat. Educ. Technol.}, year={2012}, volume={3}, pages={23-34} }

M. Russo, V. Bryan, Gerri Penney. Published 2012. Political Science, Computer Science. Int. J. Adult Vocat. Educ. Technol.

Abstract: Since 9-11, emergency preparedness has been the focus on federal, state, tribal, and local levels. Although current research describes emergency management response, many barriers may exist that affect response systems, including the role of first responders, social vulnerability, and the way technology interfaces with these variables. Several factors determine the success of emergency preparedness and its ultimate impact on the health and safety of the community, including social media…

work_lvmsj5dp6fb2fbncsniceo6s4u ---- Digital Image Analysis: A Reliable Tool in the Quantitative Evaluation of Cutaneous Lesions and Beyond

CORRESPONDENCE / RESEARCH LETTERS

Digital Image Analysis: A Reliable Tool in the Quantitative Evaluation of Cutaneous Lesions and Beyond

Quantitative evaluation of cutaneous lesions in clinical trials can be problematic for diseases such as lower extremity ulcers, vitiligo, and alopecia. Because of the irregular shapes of these lesions, calculation of their circumference, diameter, and area can be cumbersome using traditional manual tracings. As a result, most trials using manual tracings will measure the longest diameter or approximate circumference of a target lesion. To compound the problem, in some diseases, access to the lesions is difficult. These lesions include erosions of the oral or genital mucosa. New software is now readily available that incorporates digital technologies and allows for digital image analysis (DIA) to circumvent these traditional problems. Digital image analysis provides a means to calculate desired target diameter and area with ease. Our objective was to compare the interrater reliability and application feasibility of Image Pro Express (Media Cybernetics, Silver Spring, Maryland) DIA software with traditional manual tracings to determine the area of lower extremity ulcers.

Methods.
Our study was embedded in a larger randomized, double-blind, placebo-controlled trial; the inclusion and exclusion criteria were previously described by Sumpio et al.1 After approval from the respective institutional review boards, participants were recruited from vascular surgery and podiatry clinics at a total of 16 sites in the United States.

Each site provided both digital images and transparencies with manual tracings of the target ulcers (Figure 1). Of the 3 possible readers, 2 readers outlined the target ulcers from both the digital images and the transparencies. At all sites, digital images were obtained using a Nikon (Melville, New York) Coolpix 8800 camera. Furthermore, all images were obtained using standardized procedures; notably, the target ulcers were positioned 6 inches away from the camera and facing the light, to avoid any shadows. A metric ruler was placed on both the digital images and transparencies to allow for measurement calibration. Once the digital images were obtained, they were uploaded into an IBM (Chicago, Illinois) desktop computer, and the perimeter of each ulcer was outlined electronically by the 2 readers using a wireless mouse (Figure 2). Concurrently, the respective transparencies with manual tracings outlined by the 2 readers were scanned into the computer using a standard IBM flatbed scanner. We then used the Image Pro Express DIA software to determine the diameter and subsequent area of the target ulcers for both the digital images and the manual tracings.

Continuous variables were calculated as means ± SDs and were compared by t test or analysis of variance where appropriate. Categorical variables were compared by χ² analysis. Finally, agreement between raters was determined by intraclass correlation coefficient. Reliability of DIA was determined by comparing the area obtained from DIA to the area obtained from manual tracings. P < .05 was considered statistically significant.

Results.
A total of 99 patients with lower extremity ulcers were recruited into the study. The mean ± SD patient age was 66 ± 13 years; 65% were men (n = 65), and 64% were white (n = 64). In addition, 77% of the lower extremity ulcers were from participants who were diagnosed as having diabetes mellitus (n = 76).

Of the 99 patients in the study, 91 patients had manual tracings and 91 had digital images; 87 had both. The mean ± SD lower extremity ulcer area measured by reader 1 was 3.27 ± 3.53 cm² in the manual tracings and 3.08 ± 3.15 cm² in the digital images. Similarly, the lower extremity ulcer mean ± SD area measured by reader 2 was 3.28 ± 3.50 cm² in the manual tracings and 3.11 ± 3.18 cm² in the digital images. The intraclass correlation coefficient between the 2 readers for manual tracings and digital images was 0.9994 and 0.9978, respectively. There was no statistically significant difference in areas between manual tracings and digital images (paired t test P = .12; 95% confidence interval, −0.51 to 0.06 cm²).

Comment. In our study, we demonstrated that area can be reliably measured across raters using DIA in targeted lower extremity ulcers. Also, we have shown that the results from the DIA approach are not significantly different from those of traditional manual tracings.

While traditional manual tracings have been the gold standard method by which to evaluate lower extremity ulcer circumference, diameter, and area, manual tracings can be cumbersome given the irregular shapes of these lesions. However, DIA can provide quantitative evaluation of target lower extremity ulcers with greater ease and efficiency. Therefore, we propose that if DIA is as reliable as traditional manual tracings, DIA may be used to estimate the size of lower extremity ulcers when the investigator sees fit.

The use of DIA is not limited to lower extremity ulcers but also potentially extends to other difficult-to-measure lesions. Specifically, DIA may serve a vital role in the quantitative evaluation of other diseases involving irregular-border cutaneous lesions, such as vitiligo and alopecia, as well as difficult-to-access lesions, such as erosions of the oral or genital mucosa. In fact, DIA may prove to be not only useful and reliable but also more precise than traditional manual tracings.

(REPRINTED) ARCH DERMATOL/ VOL 143 (NO. 10), OCT 2007 WWW.ARCHDERMATOL.COM 1331
©2007 American Medical Association. All rights reserved.
Downloaded From: https://jamanetwork.com/ by a Carnegie Mellon University User on 04/05/2021
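The area computation at the core of this DIA workflow reduces to two steps: scale the traced pixel outline by a calibration factor read off the imaged ruler, and apply the shoelace formula to the resulting polygon. The sketch below is illustrative only, not the Image Pro Express implementation; the function names and the 0.1 mm/pixel calibration value are assumptions.

```python
def shoelace_area(points):
    """Area of a simple polygon from its [(x, y), ...] vertices (shoelace formula)."""
    total = 0.0
    n = len(points)
    for i in range(n):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]  # wrap around to close the traced outline
        total += x1 * y2 - x2 * y1
    return abs(total) / 2.0

def ulcer_area_cm2(trace_px, mm_per_px):
    """Convert a traced outline in pixel coordinates to an area in cm^2,
    using a ruler-derived calibration factor (mm per pixel)."""
    area_mm2 = shoelace_area(trace_px) * mm_per_px ** 2
    return area_mm2 / 100.0  # 100 mm^2 = 1 cm^2

# A 200 x 200 pixel square trace at 0.1 mm/pixel is 20 mm x 20 mm, ie 4 cm^2.
square = [(0, 0), (200, 0), (200, 200), (0, 200)]
print(round(ulcer_area_cm2(square, 0.1), 6))
```

Because the calibration factor enters as its square, any relative error in the mm-per-pixel estimate is roughly doubled in the reported area, which is one reason a ruler is placed in every image.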
Specifically, DIA may serve a vital role in the quantitative evaluation of other diseases involving irregular-border cutaneous lesions, such as vitiligo and alopecia, as well as difficult to access lesions, such as erosions of the oral or genital mucosa. In fact, DIA may prove to be not only useful and reli- able but also more precise than traditional manual tracings. (REPRINTED) ARCH DERMATOL/ VOL 143 (NO. 10), OCT 2007 WWW.ARCHDERMATOL.COM 1331 ©2007 American Medical Association. All rights reserved. Downloaded From: https://jamanetwork.com/ by a Carnegie Mellon University User on 04/05/2021 The greatest limitation of the DIA technology is for use on lesions that cannot be accurately represented in 2 dimensions, such as large ulcers that wrap around the curvature of the limb. In this circumstance, several standardized images of the ulcer and complex recon- struction may be necessary. However, In general, patient positioning, body curvature, or tapering of the limbs, as well as compromised accessibility can be mm mm mm Figure 1. Three digital images with electronic manual tracings (left) and corresponding traditional manual tracings (right). All scales are given in millimeters. (REPRINTED) ARCH DERMATOL/ VOL 143 (NO. 10), OCT 2007 WWW.ARCHDERMATOL.COM 1332 ©2007 American Medical Association. All rights reserved. Downloaded From: https://jamanetwork.com/ by a Carnegie Mellon University User on 04/05/2021 sources of measurement error for manual tracings, and in general, DIA provides a technology that can at least partially overcome these problems. Finally, it is important to note that DIA provides information beyond the quantitative evaluation of traditional manual tracings. Because DIA involves digital photog- raphy, it provides an image and thus a basis for objec- tive evaluation of other end points such as infection, lesion thickness, granulation status, surrounding edema, and lesion progression over time, if used at sequential time points. 
Overall, additional studies are needed to further assess the uses of DIA in quantita- tive evaluation of other cutaneous lesions and beyond. Correspondence: Dr Chen, Department of Dermatol- ogy, Emory University School of Medicine, 101 Woo- druff Cir, Atlanta, GA 30322 (schen2@emory.edu). Financial Disclosure: None reported. Funding/Support: This project was supported in part by an unrestricted educational grant from Otsuka Pharma- ceuticals; National Institutes of Health (NIH) grant NHLBI R01-47345 and the Veterans Administration Merit Re- view Board (Dr Sumpio); and Mentored Patient Ori- ented Career Development Award K23AR02185-01A1 from the National Institute on Arthritis and Musculo- skeletal and Skin Disease, NIH, and the American Skin Association David Martin Carter Research Scholar Award (Dr Chen). 1. Sumpio BE, Chen SC, Moran E, et al. Adjuvant pharmacological therapy in the management of ischemic foot ulcers: results of the HEALing of Ischemic Foot Ulcers With Cilostazol Trial (HEAL-IT). Int J Angiol. 2006;15(2):76-82. Wound Assessment by 3-Dimensional Laser Scanning R ecent advances in our understanding of the biol-ogy of cutaneous tissue repair have influencedcurrent therapeutic strategies for chronic wound management and will continue to influence chronic wound management strategies into the future.1 An effective and accurate monitoring of skin lesions should be performed by measuring in an objective, pre- cise, and reproducible way the complete status and evo- lution of the wound.2 The main goal of current research projects is to design an easy-to-use technological sys- tem that can monitor the qualitative and quantitative evo- lution of a skin lesion. 
This level of monitoring can be achieved by using 3-dimensional scanners, in particular, systems based on active optical approaches.3 There are 2 different areas of potential applications of such types of devices: in medical treatment (to improve the efficacy of therapeutic regimens)4 and pharmacologic scientific research (to assess the quality and effectiveness of new chemicals or clinical procedures).5

Methods. We prospectively examined 15 patients with venous leg ulcers. The patients who underwent sequential imaging of chronic wounds for this study all attended the leg ulcer clinic of the Wound Healing Research Unit at the University of Pisa, Pisa, Italy. Our sequential imaging system is equipped with a Vivid 900 laser scanner (Minolta, Osaka, Japan), which is used for digitizing or scanning the wound shape. With regard to the calculation of the "external" surface and volume of a wound, it is necessary to assess its original shape to determine the missing volume virtually. At the time of patient presentation, information on the shape of the skin before the wound occurred is missing, and the technique for virtual reconstruction of the original wound surface must be as easy and user-friendly as possible. The system, relying on an analysis of the shape of the surface immediately outside the wound perimeter, creates an interpolating virtual surface that is continuously connected to the existing surface outside the wound and to that covering it.

The parameters we studied were the mean wound area (measured in square centimeters) and mean volume (cubic centimeters). To assess interrater reproducibility, scans were evaluated by 2 independent investigators. For assessment of intrarater reproducibility, a single investigator performed 2 consecutive measurements 5 minutes apart. Immediately after the first wound assessment of the first observer, a second observer, blinded to the findings of the first analysis, measured the same wound.
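The duplicate-determination protocol above lends itself to an intraclass correlation coefficient. Since the letter reports an ICC without specifying the variant, the sketch below assumes a one-way random-effects formulation, ICC(1,1); the wound areas are purely illustrative.

```python
# Sketch: one-way random-effects ICC(1,1) from duplicate wound measurements.
# The ICC variant is an assumption; the letter only says "intraclass
# correlation coefficient" without specifying the formulation.

def icc_1_1(pairs):
    """pairs: list of (first, second) duplicate measurements, one per wound."""
    n, k = len(pairs), 2
    grand_mean = sum(a + b for a, b in pairs) / (n * k)
    # Between-wound mean square
    ms_between = k * sum(((a + b) / k - grand_mean) ** 2 for a, b in pairs) / (n - 1)
    # Within-wound mean square (for k = 2, each wound contributes (a - b)^2 / 2)
    ms_within = sum((a - b) ** 2 / 2 for a, b in pairs) / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

# Illustrative wound areas (cm^2) from two blinded observers
areas = [(12.1, 12.3), (8.4, 8.2), (15.0, 15.4), (6.7, 6.6), (10.2, 10.1)]
print(round(icc_1_1(areas), 3))  # values near 1 indicate high reproducibility
```

An analogous call on the two consecutive measurements made by a single investigator would give the intrarater ICC.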
The means and standard deviations of duplicate determinations for each wound were used for analysis. The reproducibility of measurements was evaluated by means of an intraclass correlation coefficient (ICC) and its 95% confidence interval (CI).

Results. The measured total areas and volumes for independent raters and for subsequent measures of 1 rater are

Figure 2. Procedure for using the Image Pro Express (Media Cybernetics, Silver Spring, Maryland) digital image analysis (DIA) software: (1) position target lesion toward camera at a set distance and toward a light source for standardization; (2) place ruler for calibration (present scale is in millimeters); (3) take digital photograph; (4) download image into computer running DIA software; (5) trace target, as demonstrated in this image; and (6) query DIA software to perform diameter and area calculations.

Zakiya M. Pressley, MD; Jovonne K. Foster, MS; Paul Kolm, PhD; Liping Zhao, MS; Felicia Warren, BA; William Weintraub, MD; Bauer E. Sumpio, MD, PhD; Suephy C. Chen, MD, MS

Inexpensive diffuse reflectance spectroscopy system for measuring changes in tissue optical properties

Diana L. Glennie,a,* Joseph E. Hayward,a,b Daniel E. McKee,c and Thomas J.
Farrell a,b

a McMaster University, Department of Medical Physics and Applied Radiation Sciences, 1280 Main Street West, Hamilton, Ontario L8S 4L8, Canada
b Juravinski Cancer Centre, Department of Medical Physics, 699 Concession Street, Hamilton, Ontario L8V 5C2, Canada
c McMaster University, Department of Surgery, Division of Plastic and Reconstructive Surgery, 1280 Main Street West, Hamilton, Ontario L8S 4L8, Canada

Abstract. The measurement of changes in blood volume in tissue is important for monitoring the effects of a wide range of therapeutic interventions, from radiation therapy to skin-flap transplants. Many systems available for purchase are either expensive or difficult to use, limiting their utility in the clinical setting. A low-cost system, capable of measuring changes in tissue blood volume via diffuse reflectance spectroscopy, is presented. The system consists of an integrating sphere coupled via optical fibers to a broadband light source and a spectrometer. Validation data are presented to illustrate the accuracy and reproducibility of the system. The validity and utility of this in vivo system were demonstrated in a skin blanching/reddening experiment using epinephrine and lidocaine, and in a study measuring the severity of radiation-induced erythema during radiation therapy. © 2014 Society of Photo-Optical Instrumentation Engineers (SPIE) [DOI: 10.1117/1.JBO.19.10.105005]

Keywords: integrating sphere; tissue optics; chromophores; erythema; diffuse reflectance spectroscopy; visible light.

Paper 130855SSR received Nov. 29, 2013; revised manuscript received Jan. 31, 2014; accepted for publication Feb. 3, 2014; published online Oct. 7, 2014.

1 Introduction

The ability to quantify changes in the concentration of chromophores in the skin (particularly oxy- and deoxy-hemoglobin) in vivo and in real time has many applications in healthcare.
For example, a complication of radiation therapy is radiation-induced erythema, which, if not monitored closely, can progress to painful moist desquamation.1–4 In photodynamic therapy, tissue oxygenation can be used to indicate treatment efficacy5 since oxygen is required for the activation of the cytotoxic photochemicals.6,7 Finally, in plastic surgery, proper blood flow is integral to the success of free tissue transplants and is used to indicate whether or not a return to the operating room is necessary.8

Several methods have been validated for measuring skin redness. In increasing complexity and accuracy, they are visual assessment (with or without a color chart), colorimetry/photography, and spectroscopy.9 Although the visual assessment technique10 is the most common, it is qualitative in nature. Due to its subjective nature and the nonlinearity of human vision, it is highly prone to interobserver as well as intraobserver variations.11 The subjectivity of this method can be minimized by the introduction of color charts; however, a very large number of color shades would be required to best account for the effect of pigmentation on the perceived redness. Despite these difficulties, visual assessment remains the gold standard for measuring skin redness.12,13

Digital photography is usually approached as a two-dimensional implementation of colorimetry.
In colorimetry, the color is quantified using a set of three specifically tuned color sensors (usually RGB) that represent the color using a standard color map, such as the L*a*b* system from the International Commission on Illumination (CIE).14 Colorimetry (and digital photography) is made extremely difficult by the necessity to calibrate and standardize the results to allow for intermeasurement comparison (between days or between individuals).15 Following correct calibration, both methods are capable of detecting changes in blood and oxygen saturation but, since the relationship between the measured data and skin redness is not fully characterized, they are only capable of indicating whether the skin is more or less red in comparison to previous or baseline measurements.16–19

Spectroscopy-based methods, such as reflectance spectroscopy and hyperspectral imaging, are the most complex of the methods used for measuring skin color.20–24 Spectroscopy provides quantitative data across a range of wavelengths, allowing for different parameters to be extracted from its measurements, depending on the scope of the investigation and the apparatus used. User-friendly commercial models capable of monitoring relative erythema and tissue oxygen saturation are expensive and use single-use detection probes. For example, the T-Stat® (Spectros, Portola Valley, California) costs approximately $25,000 US.25 Cheaper models are less user-friendly and mostly provide only a single value for oxygen saturation. As a result, these systems are primarily used by highly trained investigators at research institutions and are rarely utilized in a typical clinical setting, where they could be used routinely and would prove most beneficial.
In order to facilitate the translation of spectroscopy systems from the research laboratory to the routine clinical setting for use on human skin in vivo, an economic integrating sphere-based diffuse reflectance spectroscopy (DRS) unit was developed and characterized. The system designs and specifications will be outlined. To illustrate the validity and utility of the assembled system, the results of two ongoing clinical studies measuring erythema under different conditions will be presented.

*Address all correspondence to: Diana L. Glennie, E-mail: glennid@mcmaster.ca
0091-3286/2014/$25.00 © 2014 SPIE
Journal of Biomedical Optics 19(10), 105005 (October 2014)
Journal of Biomedical Optics 105005-1 October 2014 • Vol. 19(10)

2 Design of the Total DRS System

Simply, the total DRS system consists of a white light source coupled to an integrating sphere via an optical fiber. A second detection optical fiber directs the reflected light to a spectrometer. The spectrometer is controlled by a computer on which the required processing software was installed. A schematic of the system design is shown in Fig. 1. A detailed description of the selection of each component is presented below.

2.1 Light Source

Oxy- and deoxy-hemoglobin have spectral absorption features within the visible light range.26,27 Therefore, a light source encompassing this range, without any narrow-bandwidth spectral excitation features, is required.
In addition, a stable output over the measurement period (minutes to hours) is required for proper reflectance calculation. An Oriel 77501 Radiometric Fiber Optic Source (Newport, Irvine, California) was chosen with a 100 W quartz tungsten halogen lamp to produce a highly stable output within the visible-NIR wavelength range that can be easily coupled to an optical fiber. It also has an adjustable iris to allow for output optimization.

2.2 Spectrometer and Optical Fibers

The spectrometer must be capable of detecting light with high sensitivity across the visible spectrum. It must also have sufficient spectral resolution to allow for differentiation between the spectral features of oxy- and deoxy-hemoglobin (<13 nm for oxy-hemoglobin). An ideal spectrometer would also be small for ease of portability. The S2000 Miniature Fiber Optic Spectrometer from Ocean Optics (Dunedin, Florida) was chosen for this system. It has a wavelength range of 340 to 1000 nm and a dynamic range of 2000 for a single scan. The 2048-element linear CCD array results in a pixel width of approximately 0.35 nm, and the integration time can range from 3 ms to 60 s. Its small size (<150-mm cube) allows for easy transportation between clinical sites.

The fiber optic connector specifications for the Ocean Optics spectrometer are for an SMA 905 to single-strand 0.22 NA optical fiber. The optical fiber acts in place of a slit in the spectrometer's hardware. A relatively large fiber core of 400 μm was chosen to maximize light collection. This resulted in a spectral resolution of 10 nm, as determined from the measurement of a mercury–argon calibration source (HG-1 Mercury Argon Calibration Source, Ocean Optics, Dunedin, Florida). The effect of this spectral resolution on the reflectance spectrum analysis is described in Sec. 4.2. The final criterion for the fibers was high transmission in the visible spectrum.
Two such fibers, with a wavelength range between 400 and 2200 nm, were purchased from Thorlabs (Newton, New Jersey).

2.3 Integrating Sphere

The size of the integrating sphere is dictated by its use to measure light reflectance from human skin. As such, the integrating sphere should be relatively small (on the order of 5 to 10 cm in diameter) so that it can fit onto the various curves of the human body. A small sphere would also be easier to maneuver and keep stationary, resulting in more stable measurements. The size of the measurement port of the sphere should be sufficiently large that local inhomogeneities in the measurement area (such as small freckles or hairs) do not overwhelm the result, but should result in the sphere's port fraction (the ratio of the total port area to the total internal surface area of the sphere) falling between 2% and 5%.28 For the range of sphere sizes suggested above, the port diameter would fall somewhere within 1.5 to 5 cm.

The sphere should also have a high internal reflectance (greater than 94%) and produce a uniform light field at the measurement port.29–31 If the input light is directly incident on the detection port, the sphere should include a baffle blocking this path. For spheres of the size used in this experiment, baffles should be avoided when possible as they disrupt the internal surface of the sphere, reducing the uniformity of the illumination within the sphere.

The integrating sphere was made from a cube of Spectralon® (Labsphere®, North Sutton, New Hampshire) with side lengths of 2 in. (50.8 mm). The cube was bisected and a hemispherical cavity was machined into both halves using a 1¼ in. (31.75 mm) ball-end mill. The bottom of one of these halves was milled down, creating a port measuring 15.2 mm in diameter.
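As a quick arithmetic check on the 2% to 5% port-fraction guidance, the sketch below computes the port fraction from the dimensions quoted above, treating each port as a flat disc and ignoring the two small fiber-connector openings (a simplification). For the 15.2 mm measurement port alone it evaluates to roughly 5.7%, reflecting the fact that this sphere is smaller than the 5 to 10 cm range for which the port-diameter guidance was stated.

```python
import math

def port_fraction(sphere_diameter_mm, port_diameters_mm):
    """Ratio of total (flat-disc) port area to the sphere's internal surface area."""
    sphere_area = math.pi * sphere_diameter_mm ** 2  # 4*pi*r^2 == pi*d^2
    port_area = sum(math.pi * (d / 2.0) ** 2 for d in port_diameters_mm)
    return port_area / sphere_area

# Dimensions from the text: 31.75 mm cavity diameter, 15.2 mm measurement port
print(f"{port_fraction(31.75, [15.2]):.1%}")  # → 5.7%
```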
The parts were assembled to form the sphere, and holes were drilled through the center of the unmilled half as well as through one side at the junction to accommodate SMA 905 connectors, which would become the detection and illumination ports, respectively (see Fig. 2).

2.4 Implementation Costs

The specifications for each individual component have some flexibility; therefore, a DRS system can be built within a wide range of costs while still achieving the same measurement results. In choosing the light source, it is only important that it covers the desired wavelength range and be stable to within 1%. Although a uniform spectral output is ideal to keep the signal uncertainty relatively constant, it is not necessary. A quartz tungsten halogen lamp provides smooth spectral features and high output powers; however, a less expensive alternative would be a white LED. These provide excellent illumination and are relatively stable, although they do have an emission peak in the blue region (∼465 nm).

Fig. 1. A schematic of the measurement system (not to scale). The light source is connected to the side port of the integrating sphere. Light collected through the overhead port is detected by the spectrometer and processed by the laptop.

Integrating spheres can be purchased from an optical device supplier; however, spheres can be built for costs as low as $100 US by obtaining a suitable block of Spectralon. Since the sphere is not being used for radiometric purposes, it can deviate from an ideal integrating sphere and still provide an accurate reflectance measurement.
Spheres can also be constructed by vacuum forming plastic styrene about a spherical mold and coating the inside with barium sulfate paint.32 The most expensive component is the spectrometer, and its price will depend on the detection sensitivity and grating size. The average cost for a common fiber-based spectrometer is around $2000 US. Although not recommended, spectrometers can also be built cheaply if necessary.33 The computer must be able to interface with the spectrometer and run the necessary software. Therefore, an inexpensive netbook or laptop will be sufficient. Optical fibers are uniformly priced in the market and will contribute very little to the total cost of the system.

A list of itemized expenses is shown in Table 1, assuming new materials were required. For comparison, a hand-held colorimeter is available for $6000 US from Derma Spectrometer (MIC Global, London, United Kingdom).

3 Procedure and Performance

3.1 Integrating Sphere Configuration

The optical fibers are connected to the integrating sphere following a d/0 deg (diffuse illumination/direct detection) geometry such that the input light is first incident on the sphere wall before encountering the tissue surface, and the output fiber is directly across from the measurement port (as shown in Fig. 1). In this geometry, the sample is more uniformly illuminated compared with a 0 deg/d geometry due to the multiple reflections of the light prior to exiting the sphere at the tissue. In addition, since the illumination is diffuse rather than normally incident, the penetration of light is more superficial due to the oblique entrance angle (average 55 deg). Thus, a greater percentage of spectroscopic information originates from the upper layers of skin where the chromophores of interest are located. Due to the small size of this integrating sphere, a baffle was not used.
The geometry and the detector fiber acceptance angle (0.22 NA) allowed only light that was specularly or diffusely reflected from the tissue surface to be collected.

3.2 Calculating Spectral Reflectance

The spectral reflectance of a tissue sample was normalized by dividing the spectral count rate with the detection port on the tissue, S_t(λ), by the spectral count rate from a highly reflecting standard, S_norm(λ). Both of these were adjusted by subtracting the background signal rate, S_bg(λ), so that the modified total diffuse reflectance, R*_m(λ), is given by

R*_m(λ) = [S_t(λ) − S_bg(λ)] / [S_norm(λ) − S_bg(λ)].   (1)

Normalizing to a reflectance standard eliminates the need to correct the measured signal rate for the system spectral response. A 99% reflectance standard (SRS-99-010, Labsphere, North Sutton, New Hampshire) was used as the normalization standard, while a 2% reflectance standard (SRS-02-010, Labsphere, North Sutton, New Hampshire) was used for the background. The 2% standard was used instead of directing the detection port into a dark room in order to avoid changes in ambient lighting conditions, should the system be used in different locations, which would affect the calculated reflectance. This substitution did not affect the accuracy or precision of the measurement. If reflectance standards are not available, a piece of thick, matte black cloth may be substituted for the 2% standard, and a piece of highly diffusely reflective material, such as a piece of Spectralon or a flat surface coated with barium sulfate, may be substituted for the normalization standard.

For each spectral count rate measurement, the integration time was set such that the maximum intensity was approximately 90% of the dynamic range. This allowed for optimal precision while ensuring that the signal would not saturate. Five measurements were averaged to further reduce the noise. The averaged measurements were converted into a count rate by dividing by the integration times.

Fig.
2. A cross-sectional diagram of the integrating sphere. The block of Spectralon® used to make the sphere is bisected before processing and then reattached to form the sphere.

Table 1. Cost estimates for the DRS system. Listed prices are based on the purchase of new material.

Component            Low (USD)   High (USD)
Light source         $100        $500
Integrating sphere   $100        $1500
Spectrometer         $500        $3000
Computer             $250        $500
Connection cables    $100        $200
Total                $1050       $5700

3.3 Sphere Preparation

The measurement port of the integrating sphere was covered with a sheet of occlusive dressing (Tegaderm™ film, 3M Health Care, St. Paul, Minnesota) in order to prevent dirt and other material from contaminating the inside of the sphere. A new sheet was applied for each patient before any measurements to ensure sterility. The dressing was left on for the normalization and background measurements and, therefore, did not modify the resulting reflectance. Reflectance measurements were performed on calibrated diffuse reflectance standards ranging from 2% to 99% (RSS-08-010, Labsphere, North Sutton, New Hampshire) with the dressing in place and removed. Both sets of measurements showed no measurable difference.

3.4 Correcting for Single Beam Substitution Error

Single-beam integrating spheres used for reflectance spectroscopy suffer from single beam substitution error34,35 due to the decrease of the total flux within the sphere when the normalization plate is replaced with the sample. This can be corrected using Eq. (2).
The parameters (a, b, c), as a function of wavelength, were determined empirically by measuring the calibrated reflectance standards described in the previous section and developing a relationship between the measured and calibrated reflectances (represented by R*_m and R_m, respectively), based on the fraction of reflected light. If reflectance standards are not available, Intralipid™ (Baxter, Deerfield, Illinois) and India ink liquid phantoms can be used as they have well-characterized extinction coefficients.36,37

R_m = (a·R*_m + b) / (R*_m + c).   (2)

These data were fit using a nonlinear least-squares algorithm at each of the wavelengths. A typical fit for a single wavelength is shown in Fig. 3. This correction was applied to the modified total diffuse reflectance, resulting in a corrected total diffuse reflectance (R_m). A set of colored diffuse reflectance standards (CSS-04-010, Labsphere, North Sutton, New Hampshire) was measured and, following correction, the measured reflectance was within 0.01 of the calibrated reflectance specified by the supplier (Fig. 4).

3.5 Reflectance Measurement Reproducibility

The reproducibility of the system was tested using the green reflectance standard (SCS-GN-010) because it had reflectance similar to human skin and spectral features in the same region as hemoglobin. Reflectance was measured every day for 30 days, and the standard deviation across the 500 to 700 nm spectral region never exceeded 1%. As expected, it varied with the spectral reflectance of the reflectance standard (i.e., the uncertainty was lower when the reflectance/signal was higher). Reproducibility measurements were also performed on human skin, with a similar result.

4 Experimental Validation

4.1 Study Overviews

In order to demonstrate the use and validity of the DRS system, sample data from two ongoing erythema studies are presented.
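Before turning to the study data, the processing chain of Secs. 3.2 and 3.4 (normalization via Eq. (1), then the substitution-error correction of Eq. (2)) can be sketched as follows. The count rates and calibration values are illustrative, not the authors' data, and the fit here rearranges Eq. (2) into a linear system, which is exact for this noiseless example; the authors used a nonlinear least-squares algorithm on real measurements.

```python
import numpy as np

def modified_reflectance(s_tissue, s_norm, s_bg):
    """Eq. (1): background-subtracted count rates, normalized to the 99% standard."""
    return (s_tissue - s_bg) / (s_norm - s_bg)

def corrected_reflectance(r_star, a, b, c):
    """Eq. (2): single-beam substitution-error correction."""
    return (a * r_star + b) / (r_star + c)

# Synthetic calibration data at one wavelength, generated from known parameters
# purely to illustrate the fit (real data would come from measuring the
# calibrated reflectance standards)
true_a, true_b, true_c = 3.0, 0.0, 2.0
r_star_std = np.array([0.02, 0.10, 0.30, 0.50, 0.70, 0.90])   # measured R*_m
r_cal = corrected_reflectance(r_star_std, true_a, true_b, true_c)

# Eq. (2) rearranges to a*R* + b - c*R = R*R, which is linear in (a, b, c)
design = np.column_stack([r_star_std, np.ones_like(r_star_std), -r_cal])
a, b, c = np.linalg.lstsq(design, r_cal * r_star_std, rcond=None)[0]

# Apply the chain to an illustrative tissue measurement
r_star = modified_reflectance(s_tissue=4200.0, s_norm=9800.0, s_bg=150.0)
r_m = corrected_reflectance(r_star, a, b, c)
print(round(r_star, 3), round(r_m, 3))  # → 0.42 0.52
```

In practice the fit is repeated at each wavelength, giving wavelength-dependent (a, b, c) arrays that are applied to the whole spectrum.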
In the first study, erythema and skin blanching were induced via subcutaneous injection of lidocaine (a vasodilator and anesthetic) with or without epinephrine (a vasoconstrictor) over the deltoid muscles of volunteers' upper arms. The aim of this study was to determine the time to maximal effect of injected epinephrine. In the second study, serial skin reflectance measurements were taken on head and neck cancer patients undergoing intensity-modulated radiation therapy (IMRT). The goal of this study was earlier detection of radiation-induced erythema compared with visual assessment methods. Both studies received Hamilton Health Sciences Research Ethics approval.

4.2 Erythema Index Analysis

The measured reflectance spectra were processed using the Dawson erythema index (EI).38 This model was chosen because of its wide acceptance and use (over 280 citations to date),39 as well as its straightforward calculation method. Briefly, the EI is the area under the curve of the log of the inverse reflectance spectrum between 510 and 610 nm (encompassing the absorption features of oxy- and deoxy-hemoglobin). The influence of melanin on the EI can be approximately corrected using reflectance data between 650 and 700 nm (EIc). For serial measurements on an individual, a relative erythema index (EIr) can also be calculated. Simply, a baseline EIc is obtained either at time zero or at a nearby reference location. This is subtracted from EIc values measured at later time points such that, in the absence of changes in hemoglobin, EIr would be zero.

Fig. 3. The integrating sphere calibration curve at 600 nm. The fitted function corrects the measured reflectance for single-beam substitution error. The dots are the calibrated and measured reflectance pairs and the dashed line is the fit to these data.

Fig. 4. The measured (symbols) and calibrated (lines) reflectance spectra for a set of four colored diffuse reflectance standards: red (•), yellow (□), green (▴), and blue (⋄). Correction for the single-beam substitution error brought the measured reflectance values to within 0.01 of the calibrated reflectance specified by the supplier.

The 10 nm FWHM spectral resolution of the system has the effect of broadening spectral features in the measured reflectance. Although this would be problematic for narrow features, the absorption features in the hemoglobin spectra are very broad and were not strongly affected. To verify the effect on the EI, spectra derived from the literature40 were convolved with a 10 nm FWHM Gaussian function and the EI calculated before and after. Small differences in the calculated EI were noted (data not shown); however, changes in EI with respect to an increase or decrease in hemoglobin were insensitive to the spectrometer's spectral resolution.

4.3 Study Results

In the first study, the reflectance was measured serially for 2 h following the injection of lidocaine (with or without epinephrine) and the measurements were processed to calculate the EIr as a function of time. A time course for one volunteer is shown in Fig. 5 along with the reflectance spectra at specific time points. For both injections, there was a rapid increase in the EIr, indicating an increase in the hemoglobin content. For the combined lidocaine and epinephrine injection, the EIr then decreased to a minimum of approximately −16 at the 22 min mark, indicating a reduction in hemoglobin content.
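The EI and EIr calculations described in Sec. 4.2 reduce to a numerical integral of log10(1/R) over 510 to 610 nm; a minimal sketch is below. The trapezoidal integration and the flat illustrative spectra are assumptions for demonstration, and the melanin correction (EIc) from reflectance at 650 to 700 nm is not reproduced here.

```python
import numpy as np

def erythema_index(wavelength_nm, reflectance, lo=510.0, hi=610.0):
    """Area under log10(1/R) between lo and hi nm (Dawson-style EI).
    A trapezoidal rule stands in for whatever discretization Dawson used."""
    mask = (wavelength_nm >= lo) & (wavelength_nm <= hi)
    x, y = wavelength_nm[mask], np.log10(1.0 / reflectance[mask])
    return float(np.sum((y[:-1] + y[1:]) * np.diff(x) / 2.0))

# Illustrative spectra on a 1 nm grid: a deeper dip in the hemoglobin band
# (more absorption) should raise the index
wl = np.arange(500.0, 701.0, 1.0)
baseline = np.full_like(wl, 0.50)                          # flat 50% reflectance
reddened = np.where((wl > 520) & (wl < 590), 0.35, 0.50)   # hemoglobin-band dip

# Relative index: baseline-subtracted, positive when redness increases
ei_relative = erythema_index(wl, reddened) - erythema_index(wl, baseline)
print(ei_relative > 0)  # → True
```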
An analysis of the EIr for all subjects indicated that the maximum epinephrine-induced blanching occurred approximately 25 min following injection, after which surgical incision may commence.41

In the second study, the reflectance was measured daily over the course of the patients' head and neck IMRT treatments. During this study, it was necessary to have multiple investigators operate the DRS system. This requirement illustrated the ease of training associated with the system, as all investigators were capable of properly using the system following a short 15 min tutorial. Greater variation was observed in the daily measurements compared with the short-term measurements of the first study (see Fig. 6). The EIr was not calculated because the baseline consisted of a single measurement. The variation is the result of daily changes such as time of day and patient temperature.42 An increase in EIc was observed over the course of the 35 days. Erythema was first visually diagnosed on day 18 of treatment. This study is ongoing.

5 Discussion

This paper illustrates two clinical applications of a DRS system. These results demonstrate that a low-cost spectroscopy system is capable of measuring spectral changes in reflectance due to changes in the concentration of hemoglobin. These changes were quantified using the Dawson EI. The system is easy to operate and yields valuable clinical data with little training required. The system described here may be found to have a wide range of clinical assessment roles, which would make it an even more useful tool for health care practitioners. However, since the system collects a full spectrum, it is capable of generating much more valuable information than just a single EI value. For example, correcting for background chromophores is only approximate and any changes over the measurement period could register as incorrect increases or decreases in skin redness.
An alternative modeling approach, using a spectrally constrained diffuse reflectance model to fit the measured reflectance spectrum with concentrations of the major tissue chromophores, may be advantageous. This will allow for the detection of skin color changes in reference to their responsible chromophore, but would require the measured spectrum to be extremely accurate and precise.

Fig. 5. (a) A sample time course for one volunteer in the study involving lidocaine and epinephrine. (b) Full reflectance spectra from before injection (□), 1 min following the injection of lidocaine alone (Δ), and 26 min following the injection of lidocaine and epinephrine (○).

Fig. 6. Corrected erythema index for a head and neck IMRT cancer patient. Daily measurements were taken over the course of treatment. Erythema was not visually noted until day 18.

One of the limitations of this spectroscopy system is that the signal is normalized using a highly scattering Spectralon® standard. In comparison, human skin is much less scattering, and therefore the true reflectance is under-represented due to scattering losses. These scattering losses are not large but, since they
vary with the tissue optical properties, they would need to be accounted for in a spectrally dependent model.

This paper presents a low-cost, user-friendly DRS system for measurement of changes in skin hemoglobin concentration. The performance of the system was characterized in terms of wavelength accuracy and measurement stability, uncertainty, and reproducibility. The validity and utility of the system were demonstrated through a skin reddening/blanching experiment and a radiation-induced erythema study, followed by analysis with a simple erythema model. Further uses of the system have yet to be investigated.

Acknowledgments

The authors would like to thank Kevin R. Diamond and Gabriel A. Devenyi for their assistance in the preparation of this paper. This work was financially supported by the Natural Sciences and Engineering Research Council of Canada.

References

1. J. W. Hopewell, "The skin: its structure and response to ionizing radiation," Int. J. Radiat. Biol. 57(4), 751–773 (1990).
2. N. S. Russell et al., "Quantification of patient to patient variation of skin erythema developing as a response to radiotherapy," Radiother. Oncol. 30(3), 213–221 (1994).
3. J. Nyström et al., "Objective measurements of radiotherapy-induced erythema," Skin Res. Technol. 10(4), 242–250 (2004).
4. T. J. Fitzgerald et al., "Radiation therapy toxicity to the skin," Dermatol. Clin. 26(1), 161–172 (2008).
5. J. H. Woodhams, A. J. MacRobert, and S. G. Bown, "The role of oxygen monitoring during photodynamic therapy and its potential for treatment dosimetry," Photochem. Photobiol. Sci. 6(12), 1246–1256 (2007).
6. M. S. Patterson, E. Schwartz, and B. C. Wilson, "Quantitative reflectance spectrophotometry for the noninvasive measurement of photosensitizer concentration in tissue during photodynamic therapy," Proc.
SPIE 1065, 115–122 (1989). 7. B. C. Wilson and M. S. Patterson, “The physics, biophysics and tech- nology of photodynamic therapy,” Phys. Med. Biol. 53(9), R61–R109 (2008). 8. M. H. Steele, “Three-year experience using near infrared spectroscopy tissue oximetry monitoring of free tissue transfers,” Ann. Plast. Surg. 66(5), 540–545 (2011). 9. P. Agache, “Assessment of erythema and pallor,” in Measuring the Skin, 1st Ed., P. Agache and P. Humbert, Eds., pp. 591–601, Springer-Verlag, New York (2004). 10. A. Trotti et al., “CTCAE v3.0: development of a comprehensive grading system for the adverse effects of cancer treatment,” Semin. Radiat. Oncol. 13(3), 176–181 (2003). 11. M. Bodekaer et al., “Good agreement between minimal erythema dose test reactions and objective measurements: an in vivo study of human skin,” Photodermatol. Photoimmunol. Photomed. 29(4), 190–195 (2013). 12. D. Basketter et al., “Visual assessment of human skin irritation: a sen- sitive and reproducible tool,” Contact Dermatitis 37(5), 218–220 (1997). 13. Y. Wengström et al., “Quantitative assessment of skin erythema due to radiotherapy—evaluation of different measurements,” Radiother. Oncol. 72(2), 191–197 (2004). 14. Colorimetry—Part 3: CIE Tristimulus Values. ISO 11664-3:2012(E)/ CIE S 014-3/E:2011. Geneva, Switzerland : ISO. (2012). 15. B. Jung et al., “Real-time measurement of skin erythema variation by negative compression: pilot study,” J. Biomed. Opt. 17(8), 081422 (2012). 16. N. Kollias and G. N. Stamatas, “Optical non-invasive approaches to diagnosis of skin diseases,” J. Invest. Dermatol. 7(1), 64–75 (2002). 17. J. Canning et al., “Use of digital photography and image analysis tech- niques to quantify erythema in health care workers,” Skin Res. Technol. 15(1), 24–34 (2009).. 18. I. Nishidate et al., “Noninvasive imaging of human skin hemodynamics using a digital red-green-blue camera,” J. Biomed. Opt. 16(8), 086012 (2011). 19. M. Setaro and A. 
Sparavigna, “Quantification of erythema using digital camera and computer-based colour image analysis: a multicentre study,” Skin Res. Technol. 8(2), 84–88 (2002). 20. R. Zhang et al., “Determination of human skin optical properties from spectrophotometric measurements based on optimization by genetic algorithms,” J. Biomed. Opt. 10(2), 024030 (2005). 21. G. N. Stamatas et al., “In vivo measurement of skin erythema and pig- mentation: new means of implementation of diffuse reflectance spec- troscopy with a commercial instrument,” Br. J. Dermatol. 159(3), 683–690 (2008). 22. N. Kollias, I. Seo, and P. R. Bargo, “Interpreting diffuse reflectance for in vivo skin reactions in terms of chromophores,” J. Biophotonics 3(1–2), 15–24 (2010). 23. D. Yudovsky, A. Nouvong, and L. Pilon, “Hyperspectral imaging in diabetic foot wound care,” J. Diabetes Sci. Technol. 4(5), 1099– 1113 (2010). 24. M. S. Chin et al., “Hyperspectral imaging for early detection of oxygenation and perfusion changes in irradiated skin,” J. Biomed. Opt. 17(2), 026010 (2012). 25. P. Fox et al., “White light spectroscopy for free flap monitoring,” Microsurgery 33(3), 198–202 (2013). 26. A. R. Young, “Chromophores in human skin,” Phys. Med. Biol. 42(5), 789–802 (1997). 27. T. Lister, P. A. Wright, and P. H. Chappell, “Optical properties of human skin,” J. Biomed. Opt. 17(9), 090901 (2012). 28. L. M. Hanssen and K. A. Snail, “Integrating spheres for mid- and near- infrared reflection spectroscopy,” in Handbook of Vibrational Spectroscopy, J. M. Chalmers and P. R. Griffiths, Eds., Wiley & Sons, Chichester, United Kingdom (2002). 29. Labsphere, Technical Guide: Integrating Sphere Theory and Applications, pp. 1–19, North Sutton, NH, http://www.labsphere .com/technical/technical-guides.aspx. 30. Labsphere, Technical Guide: Integrating Sphere Radiometry and Photometry, p. 26, North Sutton, New Hampshire. 31. Labsphere, Technical Guide: Reflectance Spectroscopy, pp. 1–40, Labsphere, North Sutton, New Hampshire. 
32. D. L. Glennie, Use of Integrating Spheres for Improved Skin PDT Treatment, pp. 1–96, M.Sc. Thesis, McMaster University, Hamilton, Ontario (2009). 33. S. Sumriddetchkajorn and Y. Intaravanne, “Home-made N-channel fiber- optic spectrometer from a web camera,” Appl. Spectrosc. 66(10), 1156– 1162 (2012). 34. A. W. Springsteen, “Reflectance spectroscopy: an overview of classifi- cation and techniques,” in Applied Spectroscopy: A Compact Reference for Practitioners, 1st ed., J. Workman and A. W. Springsteen, Eds., pp. 193–224, Academic Press, New York (1998). 35. Labsphere, Application Note No. 01: Quantitation of Single Beam Substitution Correction in Reflectance Spectroscopy Accessories, North Sutton, New Hampshire. 36. S. T. Flock et al., “Optical properties of Intralipid: a phantom medium for light propagation studies,” Lasers Surg. Med. 12(5), 510–519 (1992). 37. S. J. Madsen, M. S. Patterson, and B. C. Wilson, “The use of India ink as an optical absorber in tissue-simulating phantoms,” Phys. Med. Biol. 37(4), 985–993 (1992). 38. J. B. Dawson et al., “A theoretical and experimental study of light absorption and scattering by in vivo skin,” Phys. Med. Biol. 25(4), 695–709 (1980). 39. B. Riordan, S. Sprigle, and M. Linden, “Testing the validity of erythema detection algorithms,” J. Rehabil. Res. Dev. 38(1), 13–22 (2001). 40. S. L. Jacques, “Skin Optics Summary,” Oregon Med. Laser Cent. News, 1998, http://omlc.ogi.edu/news/jan98/skinoptics.html (20 April 2012). 41. D. E. McKee et al., “Optimal time delay between epinephrine injection and incision to minimize bleeding,” Plast. Reconstr. Surg. 131(4), 811– 814 (2013). 42. A. Fullerton et al., “Guidelines for measurement skin colour and eryth- ema: a report from the Standardization Group of the European Society of Contact Dermatitis*,” Dermatitis 35(1), 1–10 (1996). Biographies of the authors are not available. Journal of Biomedical Optics 105005-6 October 2014 • Vol. 
19(10) Glennie et al.: Inexpensive diffuse reflectance spectroscopy system for measuring changes. . . Downloaded From: https://www.spiedigitallibrary.org/journals/Journal-of-Biomedical-Optics on 05 Apr 2021 Terms of Use: https://www.spiedigitallibrary.org/terms-of-use http://dx.doi.org/10.1080/09553009014550911 http://dx.doi.org/10.1016/0167-8140(94)90460-X http://dx.doi.org/10.1111/srt.2004.10.issue-4 http://dx.doi.org/10.1016/j.det.2007.08.005 http://dx.doi.org/10.1016/j.det.2007.08.005 http://dx.doi.org/10.1039/b709644e http://dx.doi.org/10.1117/12.978011 http://dx.doi.org/10.1117/12.978011 http://dx.doi.org/10.1088/0031-9155/53/9/R01 http://dx.doi.org/10.1097/SAP.0b013e31820909f9 http://dx.doi.org/10.1016/S1053-4296(03)00031-6 http://dx.doi.org/10.1016/S1053-4296(03)00031-6 http://dx.doi.org/10.1111/phpp.2013.29.issue-4 http://dx.doi.org/10.1111/cod.1997.37.issue-5 http://dx.doi.org/10.1016/j.radonc.2004.04.011 http://dx.doi.org/10.1016/j.radonc.2004.04.011 http://dx.doi.org/10.1117/1.JBO.17.8.081422 http://dx.doi.org/10.1046/j.1523-1747.2002.19635.x http://dx.doi.org/10.1111/srt.2009.15.issue-1 http://dx.doi.org/10.1117/1.3613929 http://dx.doi.org/10.1034/j.1600-0846.2002.00328.x http://dx.doi.org/10.1117/1.1891147 http://dx.doi.org/10.1111/bjd.2008.159.issue-3 http://dx.doi.org/10.1002/jbio.200900066 http://dx.doi.org/10.1177/193229681000400508 http://dx.doi.org/10.1117/1.JBO.17.2.026010 http://dx.doi.org/10.1117/1.JBO.17.2.026010 http://dx.doi.org/10.1002/micr.22069 http://dx.doi.org/10.1088/0031-9155/42/5/004 http://dx.doi.org/10.1117/1.JBO.17.9.090901 http://www.labsphere.com/technical/technical-guides.aspx http://www.labsphere.com/technical/technical-guides.aspx http://www.labsphere.com/technical/technical-guides.aspx http://www.labsphere.com/technical/technical-guides.aspx http://dx.doi.org/10.1366/11-06522 http://dx.doi.org/10.1002/(ISSN)1096-9101 http://dx.doi.org/10.1088/0031-9155/37/4/012 http://dx.doi.org/10.1088/0031-9155/25/4/008 
Nitte University Journal of Health Science (NUJHS), Vol. 6, No. 4, December 2016, ISSN 2249-7110
Original Article

Dynamic Smile Visualization and Quantification in Different Age Groups

Harshitha V.¹, M.S. Ravi², Reshma Raveendran³, Raed Saeed⁴, Kiran Kumar C.⁵
¹,³ Senior Lecturer, ² Professor, ⁴ Post Graduate, ⁵ Former Lecturer, Dept. of Orthodontics and Dentofacial Orthopaedics, A.B. Shetty Memorial Institute of Dental Sciences, Deralakatte, Mangalore - 575 018, India.

*Corresponding Author: M.S. Ravi, Professor, Dept. of Orthodontics and Dentofacial Orthopedics, A.B. Shetty Memorial Institute of Dental Sciences, Deralakatte, Mangalore - 575 018, India.

Received: 06.06.2016 | Review completed: 27.08.2016 | Accepted: 15.10.2016
Published online: 2020-04-22

Abstract
Aims and Objectives: To assess the posed and dynamic smile and to compare the various attributes of smile in frontal, oblique and sagittal dimensions in two different age groups (10-15 years and 18-25 years).
Materials and Methods: The posed and dynamic smile parameters were measured using digital video clips in 80 subjects of two different age groups (10-15 years and 18-25 years). A total of 15 parameters were studied in 3 planes of space. The data were analysed using Student's t-test to compare smile parameters across the age groups; the paired t-test was used to analyse the parameters of posed and unposed smile within the same age group, and the chi-square test was performed for the discrete data.
Results: The present study revealed significant differences in dynamic smile parameters between the two age groups. Parameters such as philtrum height and smile index are greater in the older age group, whereas the buccal corridor was greater in the younger age group. Significant differences were also recorded in various parameters in both groups when the posed smile was compared with the dynamic smile.
Conclusion: In both age groups, the dynamic and posed smile attributes are significantly different, except for buccal corridor and interlabial gap.
Keywords: Digital video clips, Dynamic smile, Posed smile, Smile analysis, Different age groups.

Introduction
Nature has endowed every individual with a definite pattern of smile. A smile, when pleasing and attractive, enriches not only the one who smiles, but also those who observe it. The clinician's ability to recognise the positive elements in an individual patient's smile and to devise strategies around them increases the aesthetic attributes that lie outside the general aesthetic concept. The latest advances in different fields have significantly improved our ability to view our patients in a more dynamic way and have helped us improve the quantification and communication of newer concepts of function and appearance. Today, the orthodontist's ability to evaluate the patient clinically in three dimensions and to use the latest technologies (computer databasing of the clinical examination and digital videography) to document, define and communicate the treatment strategy to patients and colleagues involved in interdisciplinary care leads to the concept of the "art of smile".¹ The orthodontist has an all-important role in establishing a pleasant smile through his clinical skill and knowledge of various treatment procedures. The orthodontic speciality is presently focusing its attention on the multifactorial nature of smile, combined with a shift towards patient-driven aesthetic diagnosis and problem-oriented treatment planning.² Therefore this study was carried out with the objective of assessing the posed and dynamic smile and comparing the various attributes of smile in frontal, oblique and sagittal dimensions in two different age groups. The results of this study would help the orthodontist establish a better diagnosis by identifying the attributes of smile that need correction, improvement or enhancement, and determine whether an attribute contributes positively or negatively to the design of a patient's smile. An attractive, well-balanced smile is a paramount treatment objective of modern orthodontics.¹

Materials and Methods
A total of 80 subjects were selected to measure the lip-tooth characteristics in the 10-15 years and 18-25 years age groups. The sample contained a uniform proportion of girls (20 in each group) and boys (20 in each group) across both age groups. The subjects selected had a Class I skeletal pattern with near-normal dental occlusion. None of the subjects had undergone prior orthodontic or surgical treatment.

Method of Collection of Data
To capture a patient's speech, oral and pharyngeal function and smile at the same time, the best method is standardised digital videography.³ The patient was seated in a cephalostat and placed in natural head position. Ear rods were used to stabilise the head and avoid excess motion. The digital video camera (Canon PowerShot A630) was mounted on a camera tripod stand and set at a fixed distance of 2.5 ft from the patient. The lens of the camera was positioned parallel to the true perpendicular of the face in natural head position, and the camera was raised to the level of the patient's lower facial third; the patient was asked to relax and then smile.
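The videography protocol above produces a short clip from which a single best frame is later chosen for measurement. A minimal, stdlib-only sketch of that bookkeeping follows; the per-frame scores are a hypothetical stand-in for the reviewer's judgement of smile naturalness, and `frame_budget`/`select_best_frame` are illustrative helpers, not the study's software.

```python
# Sketch of the clip-review step: a short clip at a fixed frame rate
# yields a stack of candidate frames; the highest-scoring frame stands
# in for the "most natural social smile" chosen by the reviewer.
def frame_budget(fps: int = 30, duration_s: int = 5) -> int:
    """Number of candidate frames available for review."""
    return fps * duration_s

def select_best_frame(scores):
    """Index of the highest-scoring frame; ties resolve to the earliest.

    `scores` is a hypothetical per-frame rating (e.g. a reviewer's
    naturalness score), one entry per frame of the clip.
    """
    best_idx, best_score = 0, scores[0]
    for i, s in enumerate(scores):
        if s > best_score:
            best_idx, best_score = i, s
    return best_idx

print(frame_budget())                           # 150
print(select_best_frame([0.2, 0.9, 0.9, 0.4]))  # 1
```

At the study's settings (30 frames/s for 5 s) this reproduces the 150 candidate frames per subject described below.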
Pictures were made from the oblique and sagittal views, and video was recorded in the frontal dimension. Tooth display varies during speech and smiling, so all aspects of anterior tooth display were evaluated from a video clip recorded during speech and smiling at the equivalent of 30 frames per second; each subject had 5 seconds of video, yielding 150 frames for comparison. The raw clip was loaded into Windows Movie Maker for reviewing and selecting the frame that best represents the patient's natural, unstrained social smile (Figure 1).

Smile Analysis
The frame that best represented the subject's social smile was captured using Windows Movie Maker and saved as a JPEG file. The smile image was then measured for the fifteen attributes using the Adobe Photoshop CS2 version 9.0 scaling and measuring grid (Figures 2 and 4) and MS Excel. Still pictures (Figure 3) were also made to measure the various attributes of the posed smile. Fifteen attributes of smile in 3 dimensions were measured (Table 1). The data thus obtained were subjected to statistical analysis using Student's t-test to compare smile parameters across the age groups. This method of analysis assumes that the data are symmetrically distributed around the mean and that the standard error of each sample is approximately the same. The chi-square test was performed for discrete data.

Results
The present study analysed fifteen different parameters of posed and dynamic smile of 80 individuals of two different age groups, 10-15 years and 18-25 years, with Class I skeletal base. Data analysed using Student's t-test and the chi-square test revealed the following:
• Means and standard deviations (SD) for each of the measurements during dynamic ("chee" articulation) and posed social smile were calculated
(Table 2).
• The parameters significantly different in hypothesis testing for means of dynamic smile parameters between the age groups 10-15 years and 18-25 years are philtrum height, buccal corridor and smile index (p = 0, 0.003 and 0.004 respectively). The parameters not significantly different are commissure height, interlabial gap, maxillary incisor show, crown height and gingival display (p = 0.298, 0.233, 0.513, 0.804 and 0.2 respectively) (Table 3).
• The chi-square test revealed no statistically significant values. The most prevalent parameters for both age groups were: the consonant smile arc in both frontal and oblique dimensions, broad arch form, symmetrical transverse cant of the maxillary occlusal plane, consonant orientation of the palatal plane, normal overjet and upright incisor angulation (Table 4).

Figure 1: Dynamic frames from patient's video clip
Figure 2: Smile grid applied to patient's dynamic smile
Figure 3: Still pictures of patient in the order: frontal photograph, profile photograph and oblique photograph
Figure 4: Smile grid applied to patient's social smile

Table 1: Attributes of smile parameters in the study

1. PHILTRUM HEIGHT: measured from subspinale to the most inferior portion of the upper lip on the vermilion tip beneath the philtral columns.
2. COMMISSURE HEIGHT: measured from a line constructed from the alar bases through the subspinale, and then from the commissures perpendicular to this line.
3. INTERLABIAL GAP: distance between the upper and lower lips when lip incompetence is present.
4. MAXILLARY INCISOR SHOW: the amount of maxillary incisor exposed vertically on smiling.
5. CROWN HEIGHT: vertical height of the maxillary central incisors.
6. GINGIVAL DISPLAY: amount of gingiva exposed vertically.
7. SMILE ARC (FRONTAL): relationship of the curvature of the incisal edges of the maxillary incisors and canines to the curvature of the lower lip in the posed social smile.
8. ARCH FORM: shape of the arch (broad, narrow or normal).
9. BUCCAL CORRIDOR: measured from the mesial line angle of the maxillary first premolars to the interior portion of the commissure of the lips.
10. TRANSVERSE CANT OF MAXILLARY OCCLUSAL PLANE: symmetrical or asymmetrical.
11. ORIENTATION OF PALATAL PLANE: downward cant of the posterior maxilla, upward cant of the anterior maxilla, or variations of both; can be consonant or non-consonant.
12. SMILE ARC (OBLIQUE): relationship of the curvature of the incisal edges of the maxillary incisors, canines, premolars and molars to the curvature of the lower lip in the posed smile; can be consonant, non-consonant or a reverse smile arc.
13. OVERJET: normal (2-4 mm), positive (>4 mm) or negative (<2 mm).
14. INCISOR ANGULATION: upright, proclined or retroclined.
15. SMILE INDEX: determined by dividing the intercommissure width by the interlabial gap during smile.

Table 2: Summary of descriptive statistics of smile across age groups

Attribute | Smile | Age | N | Max | Mean | SE Mean | Median | SD | Min
PHILTRUM HEIGHT | Dynamic | 10-15 | 40 | 17.33 | 9.79 | 0.39 | 8.81 | 2.49 | 6.27
PHILTRUM HEIGHT | Dynamic | 18-25 | 40 | 22.61 | 12.27 | 0.44 | 11.90 | 2.80 | 8.50
PHILTRUM HEIGHT | Posed | 10-15 | 40 | 12.36 | 8.40 | 0.38 | 8.23 | 2.38 | 2.72
PHILTRUM HEIGHT | Posed | 18-25 | 40 | 14.80 | 9.40 | 0.36 | 8.87 | 2.29 | 5.56
COMMISSURE HEIGHT | Dynamic | 10-15 | 40 | 31.10 | 22.27 | 0.53 | 21.87 | 3.38 | 16.50
COMMISSURE HEIGHT | Dynamic | 18-25 | 40 | 35.61 | 22.79 | 0.62 | 21.72 | 3.89 | 15.79
COMMISSURE HEIGHT | Posed | 10-15 | 40 | 22.67 | 15.79 | 0.58 | 16.15 | 3.67 | 7.06
COMMISSURE HEIGHT | Posed | 18-25 | 40 | 24.12 | 16.37 | 0.50 | 16.13 | 3.15 | 12.14
INTERLABIAL GAP | Dynamic | 10-15 | 40 | 13.90 | 9.71 | 0.33 | 9.00 | 2.11 | 6.00
INTERLABIAL GAP | Dynamic | 18-25 | 40 | 15.42 | 9.30 | 0.44 | 9.15 | 2.76 | 5.40
INTERLABIAL GAP | Posed | 10-15 | 40 | 12.67 | 8.49 | 0.39 | 8.76 | 2.48 | 3.37
INTERLABIAL GAP | Posed | 18-25 | 40 | 17.72 | 8.93 | 0.49 | 8.36 | 3.13 | 3.93
MAXILLARY INCISOR SHOW | Dynamic | 10-15 | 40 | 9.41 | 6.06 | 0.28 | 6.20 | 1.78 | 2.87
MAXILLARY INCISOR SHOW | Dynamic | 18-25 | 40 | 10.80 | 6.01 | 0.36 | 5.31 | 2.29 | 1.70
MAXILLARY INCISOR SHOW | Posed | 10-15 | 40 | 10.82 | 7.14 | 0.31 | 7.09 | 1.98 | 3.26
MAXILLARY INCISOR SHOW | Posed | 18-25 | 40 | 11.82 | 7.39 | 0.31 | 7.64 | 1.93 | 3.32
CROWN HEIGHT | Dynamic | 10-15 | 40 | 9.41 | 6.37 | 0.30 | 6.20 | 1.92 | 2.87
CROWN HEIGHT | Dynamic | 18-25 | 40 | 10.92 | 6.49 | 0.35 | 5.60 | 2.22 | 1.70
CROWN HEIGHT | Posed | 10-15 | 40 | 11.05 | 7.88 | 0.32 | 8.30 | 1.99 | 3.48
CROWN HEIGHT | Posed | 18-25 | 40 | 11.82 | 8.24 | 0.26 | 8.36 | 1.65 | 5.00
GINGIVAL DISPLAY | Dynamic | 10-15 | 40 | 0.50 | 0.03 | 0.02 | 0 | 0.11 | 0
GINGIVAL DISPLAY | Dynamic | 18-25 | 40 | 10.92 | 6.49 | 0.35 | 5.60 | 2.22 | 1.70
GINGIVAL DISPLAY | Posed | 10-15 | 40 | 1.55 | 0.15 | 0.06 | 0 | 0.39 | 0
GINGIVAL DISPLAY | Posed | 18-25 | 40 | 3.94 | 0.55 | 0.17 | 0 | 1.08 | 0
BUCCAL CORRIDOR | Dynamic | 10-15 | 40 | 7.41 | 4.70 | 0.24 | 4.82 | 1.51 | 0
BUCCAL CORRIDOR | Dynamic | 18-25 | 40 | 6.10 | 3.78 | 0.18 | 3.67 | 1.14 | 0.68
BUCCAL CORRIDOR | Posed | 10-15 | 40 | 8.30 | 3.87 | 0.27 | 3.56 | 1.70 | 1.41
BUCCAL CORRIDOR | Posed | 18-25 | 40 | 11.18 | 4.05 | 0.42 | 3.53 | 2.68 | 0
SMILE INDEX | Dynamic | 10-15 | 40 | 7.83 | 5.38 | 0.17 | 5.38 | 1.10 | 3.38
SMILE INDEX | Dynamic | 18-25 | 40 | 6.10 | 3.78 | 0.18 | 3.67 | 1.14 | 0.68
SMILE INDEX | Posed | 10-15 | 40 | 17.8 | 7.43 | 0.46 | 6.44 | 2.93 | 5.11
SMILE INDEX | Posed | 18-25 | 40 | 15.15 | 7.44 | 0.38 | 7.01 | 2.39 | 3.61

SE = standard error of the mean; SD = standard deviation

Table 3: Hypothesis testing for means of dynamic smile parameters between age groups 10-15 years and 18-25 years

Parameter | 10-15 years | 18-25 years | t-test P-value | Significance of difference
Philtrum height | 9.78 +/- 3 x 2.50 | 12.27 +/- 3 x 2.80 | 0 | Significantly different
Commissure height | 22.27 +/- 3 x 3.37 | 22.79 +/- 3 x 3.89 | 0.298 | Not significantly different
Interlabial gap | 9.7 +/- 3 x 2.10 | 9.30 +/- 3 x 2.76 | 0.233 | Not significantly different
Maxillary incisor show | 6.06 +/- 3 x 1.78 | 6.01 +/- 3 x 2.29 | 0.513 | Not significantly different
Crown height | 6.36 +/- 3 x 1.92 | 6.5 +/- 3 x 2.22 | 0.804 | Not significantly different
Gingival display | 0.03 +/- 3 x 0.11 | 0.09 +/- 3 x 0.27 | 0.2 | Not significantly different
Buccal corridor | 4.7 +/- 3 x 1.51 | 3.78 +/- 3 x 1.15 | 0.003 | Significantly different
Smile index | 5.37 +/- 3 x 1.10 | 6.12 +/- 3 x 1.65 | 0.004 | Significantly different

Table 4: Hypothesis testing for means of discrete dynamic smile parameters between age groups

Parameter | Chi-square test P-value | Significance of difference
Frontal smile arc | 0.8303 | Not significantly different
Frontal arch form | 0.152 | Not significantly different
Frontal transverse cant | NA | No difference
Orientation of palatal plane | 0.358 | Not significantly different
Oblique smile arc | 0.833 | Not significantly different
Overjet | 0.34 | Not significantly different
Incisor angulation | 0.34 | Not significantly different

Discussion
Generally, a smile is considered a friendly greeting in all cultures. In modern society an attractive smile is often considered an asset in interviews, work settings, social interactions, and even the quest to attract a mate. Nowadays the smile is given much importance, and there is increasing emphasis on aesthetics in our society. But a perusal of the dental and orthodontic literature shows that, while there is much conjecture about "smile design" and treatment for smile aesthetics, sound scientific data are actually quite sparse.³ Recent studies have described a new method of capturing and analysing the smile with videography and computer software, in contrast to older scientific studies which examined smile aesthetics using static photographs. The credit for the use of
Analyzing the smile and obtaining averages for various components can shed light on a standard of normalcy to serve as a 1, 4,5,6,7. guideline for the creation of an aesthetic smile. Clinically and statistically significant changes in anterior lip- tooth relationships were found between speech and smile. Soft tissue dimensional changes occur between saying “Chee” and the posed social smile. The commissures of the lips move significantly more superiorly and laterally in the posed social smile. Hence, the spatial change at the commissures directly affects the amount of percent incisor below the intercommissure line, and the increase in smile width will proportionately increase smile index. Two dimensionally and morphologically different lip frameworks are present in the “Chee” articulation and the posed social smile. When compared with single frame capture method with digital photography, standardized digital videography provides the clinician a wider range of images for selecting the parameters of lip-tooth relationships during facial animation. Because there is variability in the posed social smile in adolescents with time, a single digital photography is insufficient for the 8 evaluation of treatment effects or maturational changes. The present study analyzed fifteen different parameters of dynamic and posed smile of 80 individuals of two different age groups, 10-15 years and 18-25 years with class I Skeletal base and no gross deformities. Male and female distribution in each group was equal (20 each).The parameters were analyzed in all the three dimensions. Digital video clips of the 80 subjects in the frontal dimension and pictures of the same patients in the frontal, oblique and sagittal dimension were taken and analyzed based on the above mentioned parameters. The various attributes of dynamic smile were compared for the two age groups. 
Statistical analysis indicated the following results:
• Philtrum height: When compared between the two age groups, the philtrum height during dynamic smile in the 10-15 years age group was less than in the 18-25 years age group (Table 2). A review of the literature did not reveal any previous studies comparing philtrum heights.
• Commissure height: When compared between the two age groups, the commissure height during dynamic smile was almost the same for both age groups (Table 2). Commissure height has also not been addressed by any previous studies.
• Interlabial gap: When compared between the two age groups, the interlabial gap during dynamic smile was almost the same for both age groups (Table 2).
• Maxillary incisor show: When compared between the two age groups, the maxillary incisor show during dynamic smile was almost the same for both age groups (Table 2).
• Crown height: When compared between the two age groups, the crown height during dynamic smile was almost the same for both age groups (Table 2), which is not in accordance with the study by Gillen RJ et al., which indicates that the age of the patient is a factor in crown height as a cause of apical migration in adolescence.⁹
• Gingival display: When compared between the two age groups, the gingival display during dynamic smile was almost the same for both age groups (Table 2), which is not in accordance with the study by Sarver DM and Ackerman MB,¹ which says that aging will diminish gingival display.
• Buccal corridor: When compared between the two age groups, the buccal corridor during dynamic smile was greater for the 10-15 years age group (Table 2).
• Smile index: When compared between the two age groups, the smile index during dynamic smile was greater for the 18-25 years age group (Table 2), which is
in accordance with the study by Sarver DM and Ackerman MB.¹ It is said that as the smile index decreases, the smile appears less youthful.
• Smile arc: The consonant smile arc was the most common, both frontally and obliquely, in both age groups. It is well known that, compared with a non-consonant smile, a consonant smile arc is more attractive.⁷,¹⁰ Orthodontists should try not to disturb consonant smiles, but create them by accurate and precise bracket positioning.
• Arch form: As viewed frontally, broad arch form was the most common in both age groups.
• Transverse cant of maxillary occlusal plane: A symmetrical cant was observed in all the subjects.
• Orientation of palatal plane: A consonant palatal plane was the most common, followed by the non-consonant plane with upward cant of the anterior maxilla, then the non-consonant plane with downward cant of the posterior maxilla, in both age groups.
• Overjet: Overjet was normal in the majority of subjects in both age groups.
• Incisor angulation: Upright incisors were the most common in both age groups.

Therefore, the results of this study show that maturation and aging have a significant effect on the soft tissues, as observed in the decrease in incisor and gingival display at rest and during smile, the decrease in turgor (or tissue "fleshiness"), and the lengthening of the resting commissure and philtrum heights. These changes need to be kept in mind during diagnosis and treatment planning with orthodontic mechanics in different age groups of individuals.
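The smile index discussed above is defined in Table 1 as the intercommissure width divided by the interlabial gap during smile, so larger values correspond to a wider smile relative to its height. A minimal helper, with illustrative numbers:

```python
def smile_index(intercommissure_width_mm: float, interlabial_gap_mm: float) -> float:
    """Smile index per Table 1: smile width divided by smile height."""
    if interlabial_gap_mm <= 0:
        raise ValueError("interlabial gap must be positive during smile")
    return intercommissure_width_mm / interlabial_gap_mm

# Illustrative values: a 45 mm intercommissure width with a 9 mm
# interlabial gap gives an index of 5.0, of the same order as the mean
# dynamic value reported for the 10-15 years group in Table 2 (5.38).
print(smile_index(45.0, 9.0))  # 5.0
```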
Conclusion
This study helps establish that dynamic smile parameters such as smile index and philtrum height increase from adolescence to adulthood, whereas the buccal corridor decreases with age. It shows that dynamic smile parameters differ between adolescents (10-15 years) and adults (18-25 years), which may be due to growth, maturation and ageing. The study also shows that dynamic smile visualization is a better method of studying the smile than still pictures for enhancing facial aesthetics. Further studies are required to assess the soft-tissue parameters in different age groups and also in different ethnic groups.

References
1. Sarver DM, Ackerman MB. Dynamic smile visualization and quantification: Part 1. Evolution of the concept and dynamic records for smile capture. Am J Orthod Dentofacial Orthop 2003;124:4-12.
2. Sarver DM, Ackerman JL. Orthodontics about face: the re-emergence of the aesthetic paradigm. Am J Orthod Dentofacial Orthop 2000;117:575-6.
3. Maulik C, Nanda R. Dynamic smile analysis in young adults. Am J Orthod Dentofacial Orthop 2007;132(3):307-15.
4. Ackerman MB, Ackerman JL. Smile analysis and design in the digital era. J Clin Orthod 2002;36:221-36.
5. Tjan AHL, Miller GD. Some aesthetic factors in smile. J Prosthet Dent 1984;51(1):24-8.
6. Peck S, Peck L, Kataja M. The gingival smile line. Angle Orthod 1992;62:91-100.
7. Sarver DM, Ackerman MB. Dynamic smile visualization and quantification: Part 2. Smile analysis and treatment strategies. Am J Orthod Dentofacial Orthop 2003;124:116-27.
8. Ackerman MB, Brensinger C, Landis JR. An evaluation of dynamic lip-tooth characteristics during speech and smile in adolescents. Angle Orthod 2003;74(1):43-50.
9. Gillen RJ, Schwartz RS, Hilton TJ, Evans DB. An analysis of selected normative tooth proportions. Int J Prosthodont 1994;7:410-17.
10. Sarver DM.
The importance of incisor positioning in the aesthetic smile: the smile arc. Am J Orthod Dentofacial Orthop 2001;120:98-111.

Agricultura – Ştiinţă şi practică nr. 3-4 (75-76)/2010

NEED TO IMPLEMENT A GIS FOR ARCHEOLOGICAL SITES

Ungur Andreea
"1 Decembrie 1918" University of Alba Iulia, Romania; email: andreeaungur@yahoo.com

Abstract. Given the large amount of data arising from multidisciplinary archaeological research, the destructive nature of archaeological research, and the present possibilities for modelling and managing digital data, there is a need to manage and digitally model these data in order to obtain digital databases, maps, analyses and reports. All these requirements can be met, given their complexity, by Geographical Information Systems.

Keywords: archaeological site, preventive archeology, proximity analysis, spatial distribution

INTRODUCTION
A key concept in the approach adopted in this paper is the archaeological cadastre which, in accordance with the latest regulations in the field, is considered a specific information system for archaeological sites. The registration and recording of archaeological sites is thus an activity of utmost importance, since it provides the minimum conditions necessary to protect the sites. For archaeology, each GIS project is a unique case, depending on the research methods and techniques used, the nature and complexity of the site, the archaeologist's requirements, etc. Therefore, the working and operation mode of a GIS in archaeology cannot be standardized. The reasons why archaeological sites should be reconstructed in digital form are the following: the large quantity of data in graphic form (maps, plans) kept as paper sheets; difficult access to information; the deterioration of graphical information over time, which leads to loss of information.
Loss of information is especially serious in archaeology because, given the destructive nature of archaeological research, lost information cannot be recovered. Further reasons include: the need for precise positioning data for archaeological discoveries and for measurements based on precision maps, so that information can be stored from year to year (especially in rescue excavations); the spatial-analysis facilities of a GIS, owing to its ability to work on the multiple logical levels in which graphic information is structured, so that from a single project covering one area or a wider region, thematic maps of the spatial distribution of one or more types of discovery can be created, yielding information on the connections and links between them; easier handling, storage and management of information in digital rather than analogue form (rolls of maps, etc.); the ability to print and view maps at different scales in digital format, together with the ease of correcting digital maps; the possibility of generating 3D models; the possibility of storing and running complex queries over graphic and non-graphic data; the possibility of associating images (digital or scanned photographs) with graphic elements; and the possibility of integrating both graphic and non-graphic data into a GIS project in a uniform format, with well-established links between them and access in both directions: from a graphic element to a record in the database, and from a database record to a graphic element. A GIS APPLICATION FOR ARCHAEOLOGICAL SITES By its very nature, the protection of archaeological sites requires handling data with varying degrees of detail, the data being used at different scales and resolutions.
Given this reality, I propose a geographical information system built on two levels, creating a multi-resolution spatial database with two corresponding levels of detail: the "area" level and the "site (monument)" level. Technical, legal and qualitative cadastral elements will be implemented within the system for each level, and each of the two levels will use its own specific spatial data. "Area" level system At this level, the GIS application uses digital maps of the study areas as its support. The level holds elements of the technical cadastre (location, neighbourhood, area, cadastral number, protection area), qualitative cadastre elements (the overall condition of the building, the state of the elements of the monument) and legal elements (owner, legal situation) (Fig. 1). Fig. 1. Application interface At the "area" level, the application covers all thirteen archaeological sites from Roşia Montană. A database was built in Microsoft Access containing data on site location (territorial administrative unit, city, geographical landmarks, hydrographical landmarks, latitude, longitude, altitude), the legal situation of the site, its area, type of research, epoch code, conservation status and mobile treasures found. When a site is chosen from the database, it appears marked on the A-Cad plan together with its photograph and its characteristics, and selecting the "RAN" code displays the archaeological site record (Fig. 2). Fig. 2. Database query "Site (monument)" level The "site (monument)" level is built for the particular case of each archaeological site. It holds elements of the technical and qualitative cadastre. The technical elements include the site location in a coordinate system which, depending on the configuration and complexity of the site, can be defined as unique per site or unique per individual grave.
Qualitative elements describe the state of the site and its components in order to produce classifications, such as classifications by cause, type of degradation, type of treatment, urgency, etc. For the "site (monument)" level of the application, the Circular Monument was chosen, which is part of the "Găuri – Hop – Hăbad - Tăul Ţapului" archaeological site. For this level, a database was created containing general data on the Circular Monument as well as data on the four graves found at this site (dimensions at the upper part of the grave pit, dimensions at the bottom of the grave pit, depth at the upper part of the grave pit, depth at the bottom of the grave pit, mobile treasures found, dating, age, destination) (Fig. 3). Fig. 3. The database for the "site (monument)" level When the monument is selected, its location on the site appears, together with its characteristic data and the locations of the four graves (Fig. 4). Fig. 4. The location of the Circular Monument within the "Găuri – Hop – Hăbad - Tăul Ţapului" site When a particular grave is selected, it appears marked on the Circular Monument, and the characteristics of that grave are shown, together with the mobile treasures found and their assignment to a period of time based on dating. Fig. 5. Database query The types of analysis that can be carried out in such a GIS project include: decision-making in preventive archaeology; the spatial distribution of a given type of object found, and correlations between objects found at different occupation levels; percentage analyses of the presence of an object or of a certain type of complex across layers, levels or larger areas (several settlements from the same archaeological period); and proximity analyses showing the areas of origin of materials.
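The two-level structure described above can be sketched as a small relational database: an "area"-level table of sites and a "site (monument)"-level table of graves, queryable in both directions. This is an illustrative sketch only; the table and column names (site, grave, ran_code, and the sample values) are hypothetical, and the original application was built with Microsoft Access and Visual Basic rather than SQLite.

```python
import sqlite3

# Illustrative two-level schema; all names and sample values are hypothetical.
con = sqlite3.connect(":memory:")
cur = con.cursor()

# "Area" level: one record per archaeological site.
cur.execute("""CREATE TABLE site (
    ran_code TEXT PRIMARY KEY,      -- code in the national archaeological registry (RAN)
    name TEXT, commune TEXT,
    latitude REAL, longitude REAL, altitude REAL,
    legal_status TEXT, conservation_state TEXT)""")

# "Site (monument)" level: one record per grave, linked to its site.
cur.execute("""CREATE TABLE grave (
    grave_id INTEGER PRIMARY KEY,
    ran_code TEXT REFERENCES site(ran_code),
    depth_top_cm REAL, depth_bottom_cm REAL,
    mobile_treasures TEXT, dating TEXT)""")

cur.execute("INSERT INTO site VALUES (?,?,?,?,?,?,?,?)",
            ("RAN-0001", "Circular Monument", "Rosia Montana",
             46.30, 23.13, 900.0, "state property", "good"))
cur.executemany("INSERT INTO grave VALUES (?,?,?,?,?,?)",
                [(1, "RAN-0001", 30.0, 85.0, "ceramic urn", "Roman"),
                 (2, "RAN-0001", 28.0, 90.0, "coin", "Roman")])

def graves_of(site_name):
    """From a site record to its graves (the graphic-to-database direction)."""
    return cur.execute("""SELECT g.grave_id, g.dating
                          FROM grave g JOIN site s ON s.ran_code = g.ran_code
                          WHERE s.name = ?""", (site_name,)).fetchall()

print(graves_of("Circular Monument"))
```

The foreign key from grave to site is what gives the bidirectional access described in the text: joining downward lists a monument's graves, and the same key read upward recovers the parent site of any grave record.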
Besides its facilities for storing, managing and retrieving information, a GIS application also offers far superior facilities for presenting data. PROPOSALS FOR FURTHER RESEARCH Proposals for further research include: a) generalizing the GIS application to monuments and archaeological sites nationwide; b) extending the GIS application to other archaeological sites; c) extending the research into the use of 3D information. CONCLUSIONS The requirement of efficient administration of the national cultural heritage leads to the need for operational geographic information systems based on the urban real-estate cadastre and the cadastre of historical monuments. The system developed in this paper is a geographic information system for historical monuments, in line with national and international trends in cadastre, urban real-estate cadastre, the cadastre of historical monuments and the management of cultural heritage. It is necessary for such an information system to become operational because it is primarily intended for specialists in archaeology at all levels: national, zonal and that of the individual monument.
work_mcf6ugzy5ndytlbfeciazd6nlm ---- [PDF] The Diabetic Retinopathy Screening Workflow | Semantic Scholar
DOI: 10.1177/1932296815617969 Corpus ID: 22659520. The Diabetic Retinopathy Screening Workflow. Nigel M. Bolster, M. Giardini, A. Bastawrous. Journal of Diabetes Science and Technology, 2015, vol. 10, pp. 318-324. Abstract: Complications of diabetes mellitus, namely diabetic retinopathy and diabetic maculopathy, are the leading cause of blindness in working aged people. Sufferers can avoid blindness if identified early via retinal imaging. Systematic screening of the diabetic population has been shown to greatly reduce the prevalence and incidence of blindness within the population. Many national screening programs have digital fundus photography as their basis. In the past 5 years several techniques and adapters…
work_mcotuxxeqvho3kzxxi2wvqie2m ---- www.palgrave-journals.com/dam © 2006 Palgrave Macmillan Ltd 1743–6540 $30.00 Vol. 2, 5 209 JOURNAL OF DIGITAL ASSET MANAGEMENT In this issue, we explore issues of and around the topic of image asset management. Our papers focus on digital photography (Stanton), image assets (Norris), enterprise digital photography (Holm), change management (Kirsch), SaaS (Schupp/Krishna), POP (Cass) and videogame production (Horodyski). In Michael Moon's Cycle Time column, we visit issues of the human side of supply chains, and he introduces his Project-Event Lifecycle model as a tool to benchmark hidden costs in your digital supply chain. Jennifer Binder of SD Assets contributes a piece on the importance of XMP in protecting the metadata attached to assets, particularly in exchanging data across platforms. Russ Stanton of BBDO contributes a case study of BBDO and General Electric creating a global photo library for GE. What delighted the agency and client was the ease with which they were able to customize Xinet's DAM solution to fit their workflow.
David Norris of On Request Images is next with a paper outlining six different factors to consider when planning an image management system, and the unique challenges that can be mitigated with careful planning. Aaron Holm of Markham Street Media then takes a look at the management of enterprise digital photography and the importance of digital workflow to the process. Next, Kenny Kirsch of NAPC reminds us that it's not just about the technology: finding a leader within the organization with the 'will and skill' to drive change is essential to the lasting success of a DAM implementation. Jon Schupp of Corbis (with Mukul Krishna) explores why the growing market acceptance of SaaS solutions such as Salesforce.com is bringing explosive growth to the hosted DAM market sector. Danielle Cass of Xinet uses a case study of Macy's art department to show the rapid return on investment gained by the use of DAM in the retail space, enhancing creative and production workflows. John Horodyski of Electronic Arts rounds out this issue with a paper on the challenges of working with metadata for different types of rich media assets, specifically the unique digital assets used in a videogame's creation, with some examples from Electronic Arts. Iris AlRoy, Managing Editor. Editorial. Journal of Digital Asset Management (2006) 2, 209.
doi: 10.1057/palgrave.dam.3650044
work_mctb6gevmzh3vgs6h5fx3y3xxy ----
work_mdbmojwdifdopiqzuuyof4o4im ---- RESEARCH ARTICLE Open Access Effectiveness of the Healthy Start-Départ Santé approach on physical activity, healthy eating and fundamental movement skills of preschoolers attending childcare centres: a randomized controlled trial Anne Leis1*, Stéphanie Ward2, Hassan Vatanparast3, M.
Louise Humbert4, Amanda Froehlich Chow4, Nazeem Muhajarine1, Rachel Engler-Stringer1 and Mathieu Bélanger5,6,7 Abstract Background: Since young children spend approximately 30 h per week in early childcare centres (ECC), this setting is ideal to foster healthy behaviours. This study aimed to assess the effectiveness of the Healthy Start-Départ Santé (HSDS) randomized controlled trial in increasing physical activity (PA) levels and improving healthy eating and fundamental movement skills in preschoolers attending ECC. Methods: Sixty-one ECC were randomly selected and allocated to either the usual practice (n = 30; n = 433 children) or intervention group (n = 31; n = 464 children). The HSDS intervention group was provided a 3-h on-site training for childcare educators which aimed to increase their knowledge and self-efficacy in promoting healthy eating, PA and the development of fundamental movement skills in preschoolers. PA was measured during childcare hours for five consecutive days using the Actical accelerometer. Preschoolers' fundamental movement skills were assessed using the standard TGMD-II protocol and POMP scores. Food intake was evaluated using digital photography-assisted weighed plate waste at lunch, over two consecutive days. All data were collected prior to the HSDS intervention and again 9 months later. Mixed-effect models were used to analyse the effectiveness of the HSDS intervention on all outcome measures. Results: The total numbers of children who provided valid data at baseline and endpoint for PA, food intake and fundamental movement skills were 259, 670 and 492, respectively. Children in the HSDS intervention group had, on average, a 3.33-point greater increase in their locomotor skills scores than children in the control group (β = 3.33, p = 0.009). No significant differences in effects were observed for object control, PA and food intake.
However, results demonstrated a marginal increase in portions of fruits and vegetables served in the intervention group compared to the control group (β = 0.06, p = 0.05). © The Author(s). 2020 Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data. * Correspondence: anne.leis@usask.ca 1Department of Community Health & Epidemiology, College of Medicine, University of Saskatchewan, Health Sciences E Wing, 104, Clinic Place, Saskatoon, SK S7N 5E5, Canada Full list of author information is available at the end of the article Leis et al.
BMC Public Health (2020) 20:523 https://doi.org/10.1186/s12889-020-08621-9 Conclusion: Of the 12 outcome variables investigated in this study, 10 were not different between the study groups and two of them (locomotor skills and vegetable and fruit servings) showed a significant improvement. This suggests that HSDS is an effective intervention for the promotion of some healthy behaviours among preschoolers attending ECC. Trial registration: ClinicalTrials.gov NCT02375490. Registered on February 24, 2015; retrospectively registered. Keywords: Preschool, Physical activity, Eating behaviours, Food intake, Fundamental movement skills, Population health intervention Background It is well documented that early childhood (0–5 years) sets the foundation for a lifetime of health and well-being [1]. However, research indicates that very young Canadian children are not active enough [2] and may not have a sufficiently nutritionally balanced diet for optimal growth and development [3]. Given that young children in many developed countries spend approximately 30 h per week in early childcare centres (ECC) [4], this setting has been identified as an ideal environment for implementing strategies to foster the development of healthy behaviours [5, 6]. While much effort has already been invested in either improving physical activity or healthy eating among school age children and preschoolers, interventions have rarely assessed both behaviours simultaneously. A key aspect of increasing physical activity is the development of fundamental movement skills (e.g., object control and locomotor skills); to date this has often been overlooked. Several reviews have also highlighted the limited impact of single domain interventions [7, 8].
For example, one systematic review reports that the least successful interventions in improving physical activity levels, dietary behaviours, or body composition focused on only one or two outcomes; conversely, the most successful interventions aimed to positively influence several factors, such as knowledge, abilities and competence [9]. Accordingly, interventions should be grounded in comprehensive behaviour change models and include a multipronged approach [10]. Interventions promoting healthy weights in children should therefore encompass a broad spectrum of concerted actions targeting both physical activity and healthy eating [6] and should be based on the best available knowledge from research and practice [8, 11]. Built on a socioecological model, Healthy Start-Départ Santé (HSDS) was developed following the principles described above and includes strategies for each level of influence. HSDS is a multilevel, intersectoral population health intervention designed to empower childcare educators to enhance physical activity, fundamental movement skills and healthy eating opportunities within the daily routine of preschoolers (3 to 5 years old) who attend ECC (i.e., licenced childcare centres or preschools). HSDS adheres to the population health approach, which posits that to positively influence population-level health outcomes, interventions must take into account the wide range of health determinants [12], recognize the importance and complexity of potential interplay among these determinants, and reduce social and material inequities [13].
The HSDS evaluation reported here aimed to assess the effectiveness of the HSDS intervention in increasing physical activity levels and healthy eating as well as improving fundamental movement skills in preschoolers attending ECC. It was hypothesized that, in comparison to a control group (usual practice), exposure to the HSDS intervention would result in increased opportunities for physical activity and healthy eating, which in turn would lead to increased physical activity levels, improved fundamental movement skills and healthier eating among preschoolers. Methods Trial design This study used an ECC-based cluster randomized controlled trial design, where ECC were randomly allocated to either the intervention (HSDS) or control group (usual practice). A complete description of the trial protocol was published in 2016 and is registered (ClinicalTrials.gov #NCT02375490) [14]. The study protocol was implemented as planned; however, modifications were made in the method used to score fundamental movement skills, as explained below. Further, as detailed in the analysis section, the amount of missing data for the outcomes forced us to modify the analysis plan from an intention-to-treat to a complete-cases analysis approach. The study received ethics approval from Health Canada, the University of Saskatchewan, and the Université de Sherbrooke. Participants Provincial registries of licenced ECC in Saskatchewan and New Brunswick, Canada, were used as sampling frames.
ECC were stratified ac- cording to province, geographical location (urban/rural) [15] and their respective school division (English or French). Once stratification was completed, project co- ordinators randomly selected ECC using the Stata SE statistical sequence generator software. ECC were then contacted, provided information about the study, and in- vited to participate. ECC which agreed to participate in the study were sent a consent form, as well as parental consent forms to recruit preschoolers attending their ECC on a full-time basis. If the ECC declined, they were replaced by another randomly selected ECC from the same stratum. Once informed consent was obtained, simple randomization was used to allocate ECC to either the intervention or control group with a 1:1 ratio. Par- ents of all participating children provided signed, in- formed consent. Prior to initiating recruitment and based on pilot work, we estimated that 700 children (350 per group) would provide 80% power to detect a 10% between-group difference in outcomes, considering a within-group standard deviation of 40%, a two-sided α of 0.05, an intra-class correlation of 0.02 and an esti- mated multiple correlation of 0.15 between the interven- tion and other explanatory variables. To compensate for losses to follow-up, our target was to recruit a minimum of 735 participants (5% over the 700 calculated). Intervention The HSDS intervention was delivered over the course of 6 to 8 months, and included a 3-h on-site training, re- sources (i.e. an implementation manual, physical activity and healthy eating manuals, an active play equipment kit), and on-going on-line and telephone support and monitoring; centres were also offered a tailored 90-min booster session at the midway point of the intervention period. ECC randomly allocated to the control group continued their usual practice and were not provided with any training, resources or support. 
However, once the study was completed, all childcare centres from the control group were offered the HSDS intervention. On-site training and resources All ECC allocated to the intervention group were provided with a 3-h on-site training, which was offered to childcare educators, directors and cooks after work hours. This training session was delivered by trained specialists (dietitians, kinesiologists or other experts in the fields of nutrition and physical activity), and covered best practices in physical activity and healthy eating in early childhood, including topics such as the importance of physical activity and healthy eating for preschoolers, how to easily integrate physical activity and healthy eating in the ECC's daily routine, how to introduce and encourage children to try new and healthy foods, and how to help children develop their fundamental movement skills. ECCs were also provided with the evidence-based LEAP BC™-GRANDIR CB resources, which included a physical activity and healthy eating manual. In addition, a New Brunswick-developed fundamental movement skills manual (Active Kids Toolkit Foundations for All©), a kit with active play equipment, an implementation manual, and other complementary resources for childcare staff and families were shared with all participating sites. On-going support and monitoring ECC were encouraged to identify a Healthy Star, a childcare staff member who was a champion for physical activity and healthy eating and who was a knowledge-sharing contact between the ECC and the HSDS coordinators. The HSDS team checked in with the intervention ECC on a regular basis by phone or email and provided them with support and encouragement. Monthly newsletters were also sent to all intervention ECC, which included tips on how to get children moving or on how to improve healthy eating. ECC were encouraged to share these newsletters with parents.
Booster session A 90-min booster session was offered to all intervention ECC approximately three months after the initial training. This on-site session was personalized based on challenges identified by each individual ECC, and was offered as a staff meeting, an in-class demonstration, a parent presentation, a cooking class, or a staff mini-training. Outcomes Each participating ECC was visited by two trained research assistants over two weekdays to collect data prior to the start of the intervention period and again 9 months later. This two-day data collection period was chosen for feasibility and logistical purposes, as well as to reduce the burden on ECCs. While blinding was not possible for the ECC, parents and children were not informed about group assignment. Research assistants responsible for collecting data were not told about the ECC's group allocation. Physical activity Physical activity was assessed using the Actical accelerometer (B and Z-series, Mini Mitter/Respironics, Oregon, USA) [16], which has been shown to be a valid tool for measuring physical activity in preschoolers [17].
Counts of less than 25 per 15 s represented sedentary time (which included nap time) [18], counts between 25 and 714 per 15 s represented LPA [17, 18], while counts of 715 and above defined MVPA [19]. Non-wear time was defined as any period of 60 consecutive minutes during which no counts were measured. To provide the most reliable data while maximizing sample size, it was determined that children had to have worn the accelerometer for a minimum of 2 h on at least 4 days to be included in the analyses [19]. To control for within- and between-participant wear time variations, accelerometer data were standardized to an 8-h period [20], which represents the typical number of hours children in our study attended the ECC. The SAS code used to clean and manage raw accelerometer data for this study is available as open source [21].

Fundamental movement skills

The Test of Gross Motor Development (TGMD-II), a valid and reliable tool used to assess fundamental movement skills among children 3–11 years of age, was used to measure children's fundamental movement skills [22]. Children were videotaped while completing two trials of each locomotor skill (run, hop, gallop, leap, horizontal jump) and object control skill (catch, kick, overhand and underhand throw), using the standard TGMD-II protocol. Videos were then reviewed by trained assessors who scored each skill and calculated a total raw score for locomotor skills and object control. The TGMD-II scoring protocol uses raw locomotor and object control skill scores to calculate an age-adjusted Gross Motor Quotient (GMQ). The GMQ score applies a denominator which assumes that the child has performed each skill. However, some items of the TGMD were eliminated (slide, striking a stationary ball, and stationary dribble) due to the young age of the children.
As a result, the GMQ could not be accurately calculated for these children, and thus the Percent of Maximum Possible (POMP) scoring system was applied to score children's fundamental movement skills [23]. The children's raw object control and locomotor scores were converted to POMP scores relative to the maximum possible score for the skills included. This also maximized the use of data in cases where children had missing data for a particular skill. For example, if a child had missing data for the run skill (i.e. because they did not want to run at the time of testing), the score for that child would be calculated out of a maximum of 40 instead of the usual 48. POMP scores were computed and age-adjusted as defined by the TGMD-II protocol.

Food intake and food served

The amount of food served by educators or cooks and children's intake of vegetables and fruit (servings), fiber (g) and sodium (mg) were measured at lunch on 2 consecutive weekdays using weighed plate waste enhanced with digital photography. The intent of capturing at least two consecutive days of usual intake was to minimize day-by-day variation in order to obtain a more representative measure while remaining logistically feasible. The weighed plate waste method has been shown to be a precise measurement of dietary intake [24, 25] and has been previously used in studies conducted among school-aged children [26–28]. This method consisted of weighing a standard serving of each food item served to the children. Digital photography was also used to document the weight of the food item sitting on the scale and its type or composition (e.g. mixed dish versus a single-ingredient item). Each child's plate was weighed and photographed before each serving and after the child was done eating.
In cases where children served themselves rather than being provided a pre-plated meal, each child's individual servings of food were weighed and photographed in the same manner. If a second serving was requested by a child, the same procedure was repeated. With digital photography it was possible to estimate the quantity of individual food items first served and then left on each child's plate. Food intake was calculated as the difference in weight between the total amount of food served and the amount of food left over [25]. Plate waste data and recipes obtained from the childcare centres were used to assess the amount of vegetables and fruit, fibre and sodium served and consumed by each child, using the ESHA Food Processor nutritional analysis software, version 10.10.00 (Salem, Oregon). Finally, the amounts of vegetables and fruit (servings), fibre (g) and sodium (mg) served and children's average intake over the 2 days of data collection were calculated.

Other variables

Children's age and sex were obtained through a questionnaire administered to parents. The number of children in each ECC was based on the total number of preschoolers attending the centre. The ECCs were categorized as having 20 preschoolers or fewer, between 21 and 26 preschoolers, or more than 26 preschoolers. The socioeconomic status of each ECC was estimated based on the median income of individuals aged 15 years and older living within the same postal code as the ECC, using data from the Canadian 2011 National Household Survey [29]. Each ECC was placed into one of four socioeconomic status categories according to whether its regional median income was less than $40,000, between $40,000 and $59,999, between $60,000 and $69,999, or $70,000 and above.
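The four-band socioeconomic categorisation just described amounts to a simple threshold mapping. A minimal sketch, with the cut-offs as stated in the text and hypothetical band labels and function name:

```python
# Map an ECC's regional median income to one of the four SES bands
# described in the text (cut-offs: $40,000, $60,000, $70,000).
def ses_category(median_income: float) -> str:
    if median_income < 40_000:
        return "<$40,000"
    if median_income < 60_000:
        return "$40,000-$59,999"
    if median_income < 70_000:
        return "$60,000-$69,999"
    return "$70,000 and above"

# The control-group mean income from Table 1 falls in the second band:
print(ses_category(54_773))  # -> $40,000-$59,999
```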
As for geographical location, centres were defined as urban if they were in a census metropolitan area or a census agglomeration with a strong metropolitan influenced zone (MIZ), as defined by the Community Information Database, 2006 [15]. Centres were categorised as being in a rural area if they were in an area with moderate, weak or no MIZ.

Opportunities for physical activity and healthy eating were assessed using 55 items (25 items related to nutrition and 30 items related to physical activity) of the Nutrition and Physical Activity Self-Assessment for Child Care (NAP SACC) [30, 31] by two trained research assistants, who scored the childcare centre's environment over the two days of data collection. Each research assistant recorded their observations independently, and the two compared their observations at the end of the second day. Excellent inter-rater reliability was shown between the research assistants (Cohen's kappa = 0.942, p < 0.001). The mean ± SD of the nutrition and physical activity components of the NAP SACC are reported separately for the intervention and control groups at baseline (Table 1). The 55 items were summarized into fewer categories using principal component analysis. Given that the NAP SACC-derived variables were ordinal, we used the untie method (PRINQUAL procedure in SAS 9.4) to transform the data, which helps retain the variance of the original data when identifying correlations. The factor loadings are the correlation coefficients between categories of the NAP SACC and the underlying factors [32]. For labelling the factors, we considered all questions with factor loadings above or below the cut-off of ±0.4. Four groupings were identified to represent environmental factors related to physical activity and nutrition in ECCs (see Additional file 1).

Analyses

We used complete case analysis, such that only participants with complete outcome data were included.
This represents a deviation from our original protocol, which planned for analyses to be pursued according to the intention-to-treat principle [14]. This modification was necessary because missing data largely affected outcome variables, and it is generally the norm not to impute missing data for outcome variables, especially when the proportion of missing data is large [33]. To assess the effect of the intervention, measures of the outcomes of interest were fitted in mixed-effect models using time of measurement (baseline or endpoint), group (intervention or control), and an interaction between time and group as fixed effects (Models 1). Additional models (Models 2) were built on these initial models to account for potentially confounding variables identified using Directed Acyclic Graphs (DAG) for each outcome [34]. These graphs are frequently used in epidemiological studies as they help illustrate potential sources of bias and identify confounding variables which should be controlled for in the statistical analyses [34]. DAGs help researchers visually represent their hypotheses and the relationships between the variables of interest.
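The effect of interest in Models 1 is the time-by-group interaction, which in a linear model corresponds to a difference-in-differences: the pre-to-post change in the intervention group minus the pre-to-post change in the control group. A toy illustration with invented scores (the actual analysis used SAS PROC MIXED and additionally modelled participant and centre random effects):

```python
# Difference-in-differences: what the time x group interaction beta
# captures in the mixed models described in the text. All data invented.
def did_estimate(pre_ctrl, post_ctrl, pre_int, post_int):
    """(post - pre) change in intervention minus (post - pre) change in control."""
    mean = lambda xs: sum(xs) / len(xs)
    return (mean(post_int) - mean(pre_int)) - (mean(post_ctrl) - mean(pre_ctrl))

# Control improves by 1 point on average, intervention by 5 points:
print(did_estimate(pre_ctrl=[40, 44], post_ctrl=[41, 45],
                   pre_int=[38, 40], post_int=[43, 45]))  # -> 4.0
```

The mixed-model betas reported in Table 3 are this same contrast, estimated jointly with the random effects and (in Models 2) the confounder adjustments.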
Specifically, Models 2 were adjusted for age, sex, size of ECC, neighbourhood income, language, province, rurality, and a physical activity environment score for physical activity and gross motor skills outcomes, or a nutrition environment score for food intake and food served outcomes. To account for clustering related to repeated measures and to the sampling of participants within ECCs, variables representing participants and ECCs were included as random effects in all models. In a secondary set of analyses, we tested additional interaction terms to assess whether the intervention had different effects across strata (i.e., sex; province: Saskatchewan or New Brunswick; language of centre: English or French; location of centre: urban or rural). Analyses were conducted with the MIXED procedure in SAS, version 9.4.

Table 1 Descriptive characteristics of the study sample

                                                  Control          Intervention
Children-level variables                          (n = 433)        (n = 462)
Age (years)                                       4.1 ± 0.75       4.1 ± 0.77
Sex (boys)                                        235 (54%)        237 (51%)
Height (cm)                                       102.8 ± 6.6      103.4 ± 6.6
Weight (kg)                                       17.1 ± 3.0       17.4 ± 3.1
Waist circumference (cm)                          53.6 ± 4.6       53.6 ± 4.5
Age-adjusted BMI (kg/m2)                          20.4 ± 3.7       20.3 ± 3.8
Province: Saskatchewan                            230 (53%)        272 (58%)
Province: New Brunswick                           203 (47%)        192 (42%)
Language of ECC: English                          265 (61%)        310 (67%)
Language of ECC: French                           168 (39%)        154 (33%)
Location of ECC: Rural                            152 (35%)        197 (42%)
Location of ECC: Urban                            281 (65%)        267 (58%)
Median household income (before taxes)            54,773 ± 10,790  54,769 ± 11,067
Childcare centre-level variables                  (n = 30)         (n = 31)
Number of children in ECC                         27 ± 12          28 ± 15
Nutrition environment score (scale from 0 to 75)  38.31 ± 7.88     39.39 ± 7.23
PA environment score (scale from 0 to 90)         43.5 ± 10.27     42.0 ± 10.03

Results

Sixty-one childcare centres were randomly selected and allocated to the intervention or control group (Fig. 1). In total, 895 children (4.1 years old ±0.8) were recruited in September of 2013, 2014 and 2015.
Of these children, 462 attended an ECC randomly allocated to the intervention group and 433 attended an ECC allocated to the control group. All ECCs allocated to the intervention group received the HSDS intervention, except for one centre (n = 9 participating children), which dropped out of the study due to a change of management. Losses to follow-up were similar in both groups across all primary outcomes, except for physical activity, where the percentage of follow-up loss was greater in the control group (25%) than in the intervention group (16%).

Fig. 1 CONSORT flow diagram of participants through each stage of the intervention

The total numbers of children who provided valid data at both baseline and endpoint for PA, food intake and fundamental movement skills were 259, 670 and 492, respectively. Following recruitment of one of the childcare centres in the usual practice arm, it was found that it had the same director and shared staff with a nearby ECC which had been recruited in the intervention arm. Given that the risk of contamination was all but certain, it was decided to amalgamate the two centres as one intervention centre. Children in both groups were similar on all baseline characteristics, as demonstrated in Table 1. On average, children lost to follow-up engaged in more physical activity, displayed less sedentary time and had better scores on the object control component of the fundamental movement skills evaluation at baseline than children retained for the follow-up evaluation (Table 2). The models showed a positive effect of the intervention on locomotor skills (Table 3). Specifically, in the model controlling for potentially confounding variables, children in the intervention group had, on average, a 3.33-point greater increase in their locomotor skills scores than children in the control group.
The intervention was not associated with statistically significant effects on object control or physical activity variables. Overall, children in both the intervention and control groups increased their time spent in total PA by approximately 10 min over an 8-h period, on average, between baseline and endpoint. Specifically, this represents an average increase of 7 min of MVPA and 3 min of LPA per childcare day. However, no significant differences in total PA, MVPA, LPA or sedentary time were found between the intervention and control groups between baseline and endpoint. Whereas the intervention was not associated with differences in children's food intake, the models suggested a marginal difference in food served following the intervention. Specifically, the adjusted model suggested a larger increase in portions of vegetables and fruit served in the intervention group compared to the control group. None of the interaction analyses suggested any difference in the effect of the intervention across sexes, provinces, language groups, or locations of centres (data not shown).

Discussion

The effectiveness of the HSDS intervention in increasing physical activity levels and healthy eating as well as improving fundamental movement skills in preschoolers attending ECCs was only partially demonstrated. The intervention was associated with a statistically significant improvement in locomotor skills. This finding is not surprising, as a process evaluation of the HSDS demonstrated that most ECCs which received the HSDS intervention reported using the physical literacy HOP resource on a weekly basis, and that 74% of those activities emphasized locomotor skills over object control skills [35].
Our findings are consistent with previous studies which have found that physical activity interventions targeting gross motor skills result in an increase in locomotor skills but not necessarily object control skills [36, 37].

Table 2 Baseline values of outcome variables among participants retained and lost to follow-up (mean ± standard deviation)

                                       Retained          Lost to follow-up   p-value (t-test)
Physical activity                      (n = 259)         (n = 176)
Total physical activity (minutes/day)  179.33 ± 44.41    189.98 ± 48.82      0.03
MVPA (minutes/day)                     28.93 ± 15.34     32.11 ± 21.73       0.1
LPA (minutes/day)                      150.41 ± 36.08    157.87 ± 38.53      0.06
Sedentary time (minutes/day)           300.67 ± 44.41    290.02 ± 48.82      0.03
Fundamental movement skills            (n = 492)         (n = 63)
Locomotor (score)                      41.6 ± 15.96      41.4 ± 16.09        0.9
Object control (score)                 42.04 ± 15.72     46.47 ± 14.95       0.05
Food intake                            (n = 670)         (n = 117)
Fiber (g)                              2.41 ± 1.34       2.39 ± 1.24         0.9
Vegetables and fruit (servings)        0.66 ± 0.43       0.61 ± 0.43         0.4
Sodium (mg)                            502.48 ± 386.45   494.48 ± 324.32     0.9
Food served                            (n = 670)         (n = 117)
Fiber (g)                              2.84 ± 1.50       2.69 ± 1.35         0.4
Vegetables and fruit (servings)        0.80 ± 0.49       0.71 ± 0.47         0.2
Sodium (mg)                            575.24 ± 415.94   553.92 ± 352.03     0.6

Interventions designed to increase gross motor skills in children often target locomotor skills such as jumping and running rather than object control skills. Wang et al. discussed this phenomenon and suggested that more targeted approaches should be employed when designing interventions aimed at supporting children in developing and improving both locomotor and object control skills [38]. The development of fundamental movement skills (locomotor, object control) in childhood provides essential building blocks for participation in physical activity across the lifespan [36, 39].
Teaching children these essential movement skills may lead to a greater willingness to participate in physical activity of all types during early childhood and beyond [40]. Nevertheless, no between-group differences were observed for physical activity outcomes in this study, although physical activity levels did improve in both groups. This positive change could be indicative of a study effect ("Hawthorne effect"), in that children who wore accelerometers, regardless of group assignment, were more active than when they were not wearing the devices. This is consistent with Waters et al.'s systematic review, in which control groups improved in 30% of the included physical activity intervention trials, with repeated measurement identified as one of the associated factors [41]. Another possible explanation is a seasonal effect, as physical activity levels of preschoolers tend to increase during spring [42, 43]. In terms of food intake, results did not reach statistical significance after controlling for confounding factors.
Table 3 Differences in PA, fundamental movement skills, and food intake/served between the intervention and control groups (a)

                                  Control, pre / post (b)            Intervention, pre / post (b)       Models 1: beta (SE), p (c)   Models 2: beta (SE), p (d)
Physical activity                 (n = 119)                          (n = 140)
Total PA (minutes/day)            189.00 (42.23) / 193.87 (44.12)    170.34 (44.70) / 180.28 (48.11)    6.42 (6.18), p = 0.3         5.98 (6.28), p = 0.3
MVPA (minutes/day)                32.31 (16.98) / 37.65 (18.36)      25.78 (12.97) / 34.31 (17.65)      2.68 (2.49), p = 0.3         2.01 (2.54), p = 0.4
LPA (minutes/day)                 156.69 (35.10) / 156.22 (34.54)    144.56 (36.18) / 145.98 (35.32)    3.81 (4.8), p = 0.4          4.11 (4.86), p = 0.4
Sedentary time (minutes/day)      291.00 (42.23) / 286.13 (44.12)    309.66 (44.71) / 299.72 (48.11)    −6.42 (6.18), p = 0.3        −5.98 (6.28), p = 0.3
Fundamental movement skills       (n = 236)                          (n = 256)
Locomotor (score)                 44.35 (16.93) / 44.72 (15.49)      38.55 (15.92) / 43.02 (15.61)      3.84 (2.09), p = 0.001       3.33 (1.28), p = 0.009
Object control (score)            45.41 (16.55) / 43.69 (14.80)      43.02 (15.61) / 44.08 (14.85)      3.38 (2.63), p = 0.201       1.61 (2.55), p = 0.5
Food intake                       (n = 314)                          (n = 356)
Fiber (g)                         2.42 (1.42) / 2.67 (1.74)          2.40 (1.44) / 2.46 (1.37)          −0.09 (0.04), p = 0.05       −0.068 (0.047), p = 0.1
Vegetables and fruit (servings)   0.63 (0.52) / 0.76 (0.69)          0.66 (0.46) / 0.81 (0.57)          0.01 (0.03), p = 0.7         0.02 (0.03), p = 0.6
Sodium (mg)                       474.94 (307.73) / 485.79 (328.91)  528.92 (418.72) / 521.07 (326.90)  −0.05 (0.7), p = 0.9         −0.12 (0.76), p = 0.9
Food served                       (n = 314)                          (n = 356)
Fiber (g)                         2.84 (1.52) / 2.99 (1.76)          2.68 (1.34) / 2.73 (1.43)          −0.1 (0.04), p = 0.02        −0.07 (0.045), p = 0.1
Vegetables and fruit (servings)   0.76 (0.56) / 0.85 (0.70)          0.76 (0.46) / 0.92 (0.57)          0.04 (0.03), p = 0.2         0.06 (0.03), p = 0.05
Sodium (mg)                       544.68 (336.68) / 545.39 (348.24)  586.64 (430.55) / 581.53 (387.46)  0.29 (0.7), p = 0.7          0.26 (0.73), p = 0.7

(a) Between-group differences are based on mixed-effect linear regression models which include variables representing participants and childcare centres as random effects, to account for repeated measures and for clustering of participants in childcare centres.
(b) Means in these columns are based on all data available at each measurement period.
(c) Represents the interaction term between time and group. To be included in this analysis, participants had to have provided data for both the pre and post measurement periods.
(d) Represents the interaction term between time and group, with adjustments for age, sex, size of childcare centre, neighbourhood income, language, province, rurality, and a physical activity environment score for physical activity and gross motor skills outcomes or a nutrition environment score for food intake and food served outcomes.

According to a systematic review by Golley & Bell, previous interventions which provided nutrition training for ECC staff have found positive effects on children's dietary intake or on centre food provision [44]. However, few studies showed a positive effect when these outcomes were assessed using objective methods [45, 46], as was done in this study. Overall, children's food intake increased slightly in both groups. This could be attributed to a maturation effect, as children's daily energy requirements increase by approximately 100 kcal each year between the ages of 2 and 5 [47]. Furthermore, beside the marginal increase in portions of vegetables and fruit served in the intervention group compared to the control group, no significant between-group differences in fiber and sodium were found. The ECCs' environment in which food is prepared and served is influenced by provincial standards.
Yet, the implementation of these standards in ECCs is limited by a lack of enforcement [48], and their interpretation may vary as a function of the presence of a dedicated cook, access to fresh and affordable healthy food, and other contextual factors such as childcare leadership and priorities, which are difficult to standardize, thus possibly explaining the modest direct impact on children's diet. As a population health intervention, HSDS's main target was change in the childcare centre's environment, with the hypothesis that, in comparison to the usual practice group, exposure to the HSDS intervention would increase opportunities for physical activity and healthy eating in those centres, which in turn would increase healthy behaviours in children. Impact on children hinged on the full deployment of the intervention as intended. According to our pilot study [49] and the HSDS process evaluation [35], educators were very responsive to HSDS, felt more confident in their own skills after the intervention training, and were willing to organize the childcare centre's environment and act as role models in order to facilitate behaviour changes in children. Reported changes were usually simple to implement, low cost, and at the centre level. Educators' modelling behaviours, skill development, and increased self-efficacy are recognized in the literature as key strategies to effect change [49, 50], and our previously reported findings concur with these. While implementation fidelity of the intervention was high, process evaluation results also showed that more ECCs used the physical activity resource than the healthy eating resource. This could partially explain the modest findings with regard to food intake and food served.
Further, ECCs generally reported lack of time, lack of support from childcare staff, and low parental engagement as key challenges to full implementation and sustainability of HSDS, which could also explain the lack of significant results in this study. In Nixon's systematic review of interventions in childcare settings targeting healthy behaviours and obesity prevention [10], 6 out of 12 studies documenting a significant improvement in outcomes were associated with medium to high parental involvement. This level of engagement was missing in the intervention reported here; future interventions should therefore more systematically target the whole family in addition to the ECC. Another variable to consider is the length of deployment in centres; the intervention may have been too short or not sufficiently intensive. The robust design of the study, using a control group and objective pre and post measures, is a core strength of the HSDS study. However, despite the randomisation scheme, not all centres started at the same point at baseline, and this may have overshadowed the true impact of HSDS. Moreover, as a population health intervention operating in real-world settings, children and ECCs in the usual practice group knew the purpose of the study (in order to consent) and were aware they would receive the intervention in a delayed fashion. The most obvious and visible measurements were related to the wearing of accelerometers, which may explain the increase in physical activity in both groups. The lack of significant results could also be attributed to the following factors. It is possible that our a priori statistical power estimates following pilot work were based on an optimistic expected difference of 10%, which was not reached. Estimates may not have taken into consideration an adequate sample size for achieving meaningful sub-group analyses.
Although the number of enrolled children surpassed our initial target, the lower than anticipated number of participants who provided valid data at both time points, especially for physical activity, represents a limitation of the study which may have prevented us from demonstrating significant effects. In addition to reducing our statistical power, the loss of participants during follow-up may have affected the internal and external validity of our results. In particular, children who contributed to the outcome measures at both time points were generally less physically active and had poorer object control motor skills at baseline. Valid pre- and post-intervention food intake data were easier to collect and control, as lunchtime was a structured and routine activity in the ECC, and therefore expected by children. The large proportion of missing data for the outcome variables also precluded the adoption of the intention-to-treat principle. It has been documented that deviating from the intention-to-treat principle may yield biased estimates, especially in cases where it is replaced by a per-protocol analysis [51]. In the current study, we used complete case analyses, which qualify as a modified intent-to-treat approach in which all participants for whom data were available were included, regardless of whether their exposure occurred as planned in the protocol [52]. Although not as susceptible to bias as a per-protocol analysis, the complete case analyses used carry a higher risk than an intention-to-treat analysis that the study groups being compared differ in terms of potentially confounding variables [53]. Another limitation may be the length of the intervention, which was shorter than in studies reporting significant behaviour changes [54–56].
Conclusions

In summary, HSDS, a population health intervention in ECC settings, combines increased opportunities for physical activity and healthy eating, which constitute two core components of effective childhood obesity prevention according to a recent systematic review by Bleich et al. [57]. However, although many (n = 12) outcome indicators were investigated, only locomotor skills at the child level and vegetables and fruit servings at the centre level significantly increased at follow-up, suggesting that the HSDS intervention was effective in promoting only some healthy behaviours among preschoolers attending ECCs. No differences between the intervention and control groups were noted for the other variables assessed.

Supplementary information

Supplementary information accompanies this paper at https://doi.org/10.1186/s12889-020-08621-9.

Additional file 1. Principal component analysis of the NAP SACC questionnaire.

Abbreviations

BMI: Body mass index; DAG: Directed Acyclic Graphs; ECC: Early childhood centres; GMQ: Gross Motor Quotient; HSDS: Healthy Start-Départ Santé; LPA: Light intensity physical activity; MVPA: Moderate-to-vigorous physical activity; NAP SACC: Nutrition and Physical Activity Self-Assessment for Child Care; PA: Physical activity; POMP: Percent of Maximum Possible; SD: Standard deviation; TGMD-II: Test of Gross Motor Development II

Acknowledgements

The authors are grateful for the important contribution of Gabrielle Lepage-Lavoie, project manager, and Roger Gautier, director of the Réseau Santé en Français de la Saskatchewan, for leading the implementation of Healthy Start-Départ Santé. We also thank the directors of the ECCs, educators, parents, and children for their valued collaboration.

Authors' contributions

Development of the intervention was led by AL, LH, and NM, and development of the evaluation protocol was led by AL, MB, LH, HV, and NM. AL acted as the study's principal investigator.
AL and MB each oversaw the study in their respective province. Physical activity components were led by MB and NM, nutrition components were led by HV, SW, and RES, and fundamental movement skills components were led by LH and AFC. AL, SW and MB wrote the first draft of this manuscript. All of the authors reviewed the manuscript critically for important intellectual content and approved the final version submitted for publication.

Funding

This study was financially supported by a grant from the Public Health Agency of Canada (# 6282-15-2010/3381056-RSFS), a research grant from the Consortium national de formation en santé (# 2014-CFMF-01), and a grant from the Heart and Stroke Foundation of Canada (# 2015-PLNI). AFC was funded through a postdoctoral fellowship from the Saskatchewan Health Research Foundation, and SW was funded through a Canadian Institutes of Health Research Charles Best Canada Graduate Scholarships Doctoral Award and a Gérard-Eugène-Plante Doctoral Scholarship from the Faculty of Medicine and Health Sciences at the Université de Sherbrooke.

Availability of data and materials

The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request.

Ethics approval and consent to participate

The study received ethics approval from Health Canada, the University of Saskatchewan, and the Université de Sherbrooke. All ECCs and parents of participating children provided written informed consent.

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Author details

1 Department of Community Health & Epidemiology, College of Medicine, University of Saskatchewan, Health Sciences E Wing, 104 Clinic Place, Saskatoon, SK S7N 5E5, Canada. 2 École des sciences des aliments, de nutrition et d'études familiales, Faculté des sciences de la santé et des services communautaires, Université de Moncton, Moncton, Canada.
3College of Pharmacy and Nutrition/School of Public Health, University of Saskatchewan, Saskatoon, Saskatchewan S7N 0Z2, Canada. 4College of Kinesiology, University of Saskatchewan, Saskatoon, Canada. 5Department of Family Medicine, Université de Sherbrooke, 18 avenue Antonine-Maillet, Moncton, New Brunswick E1A 3E9, Canada. 6Centre de formation médicale du Nouveau-Brunswick, 18 avenue Antonine-Maillet, Moncton, New Brunswick E1A 3E9, Canada. 7Vitalité Health Network, 330 Université Avenue, Moncton, New Brunswick E1C 2Z3, Canada.

Received: 5 July 2019 Accepted: 30 March 2020

Leis et al. BMC Public Health (2020) 20:523

Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
work_men7qdl2ijdnnpjyunt7arj43i ---- OPTH-61483-grader-agreement--sensitivity-and-specificity-of-digital-pho

© 2014 Sellahewa et al. This work is published by Dove Medical Press Limited, and licensed under Creative Commons Attribution – Non Commercial (unported, v3.0) License. The full terms of the License are available at http://creativecommons.org/licenses/by-nc/3.0/. Non-commercial uses of the work are permitted without any further permission from Dove Medical Press Limited, provided the work is properly attributed. Permissions beyond the scope of the License are administered by Dove Medical Press Limited.
Information on how to request permission may be found at: http://www.dovepress.com/permissions.php

Clinical Ophthalmology 2014:8 1345–1349. Original Research, Open Access Full Text Article. http://dx.doi.org/10.2147/OPTH.S61483

Grader agreement, and sensitivity and specificity of digital photography in a community optometry-based diabetic eye screening program

Luckni Sellahewa,1,2 Craig Simpson,2 Prema Maharajan,2 John Duffy,2 Iskandar Idris3
1Diabetic Medicine Department, Nottingham University Hospitals, 2North Nottinghamshire Eye Screening Service, Sherwood Forest Hospitals Foundation Trust, 3Division of Medical Sciences and Graduate Entry Medicine, School of Medicine, University of Nottingham, Nottingham, UK

Correspondence: Iskandar Idris, Division of Medical Sciences and Graduate Entry Medicine, School of Medicine, University of Nottingham, Royal Derby Hospital, Uttoxeter Road, DE22 3NE, UK. Tel +44 1332 724 668; Fax +44 1332 724 697; Email iskandar.idris@nottingham.ac.uk

Background: Digital retinal photography with mydriasis is the preferred modality for diabetes eye screening. The purpose of this study was to evaluate agreement in grading levels between primary and secondary graders and to calculate their sensitivity and specificity for identifying sight-threatening disease in an optometry-based retinopathy screening program.
Methods: This was a retrospective study using data from 8,977 patients registered in the North Nottinghamshire retinal screening program. In all cases, the ophthalmology diagnosis was used as the arbitrator and considered to be the gold standard. Kappa statistics were used to evaluate the level of agreement between graders.
Results: Agreement between primary and secondary graders was 51.4% and 79.7% for detecting no retinopathy (R0) and background retinopathy (R1), respectively.
For preproliferative (R2) and proliferative retinopathy (R3) at primary grading, agreement between the primary and secondary grader was 100%. Where there was disagreement between the primary and secondary grader for R1, only 2.6% (n=41) were upgraded by an ophthalmologist. The sensitivity and specificity for detecting R3 were 78.2% and 98.1%, respectively. None of the patients upgraded from any level of retinopathy to R3 required photocoagulation therapy. The observed kappa between the primary and secondary grader was 0.3223 (95% confidence interval 0.2937–0.3509), ie, fair agreement, and between the primary grader and ophthalmology for R3 was 0.5667 (95% confidence interval 0.4557–0.6123), ie, moderate agreement.
Conclusion: These data provide information on the safety of a community optometry-based retinal screening program in which optometrists act as both primary and secondary graders. The level of agreement between the primary and secondary grader at higher levels of retinopathy (R2 and R3) was 100%. Sensitivity and specificity for R3 were 78.2% and 98.1%, respectively. None of the false-negative results required photocoagulation therapy.
Keywords: retinopathy, screening, public health, community, optometry, diabetes

Introduction
Diabetic retinopathy is a highly specific microvascular complication of diabetes and the leading cause of blindness in people under the age of 60 years in industrialized countries.1–4 Data from the Early Treatment of Diabetic Retinopathy Study showed that early laser treatment would be more than 90% effective in preventing blindness,4 and as such, early detection of sight-threatening disease is crucial in preventing blindness in this group of patients. To this end, previous studies have shown the effectiveness of diabetes eye screening programs to prevent blindness in patients with diabetes.2–9 The United Kingdom National Screening Committee therefore recommended a systematic population screening program,10 which was implemented in 2003.
As a result, the current National Health Service (NHS) Diabetic Eye Screening Programme is in place.11

Digital retinal photography with mydriasis is the preferred modality for diabetic eye screening based on its reported values for sensitivity and specificity,12–15 and its ability to quality assure screening standards.16,17 This modality of retinopathy screening fulfils the Exeter minimum standard for sensitivity and specificity of 80% and 95%, respectively, for robust and safe diabetic retinopathy screening.18,19 Conventionally, this utilizes technicians to perform the primary grading, with secondary grading performed by more experienced screeners or clinicians, and arbitration grading performed by an ophthalmologist or a diabetologist with expertise in diabetic retinopathy screening. However, in selected screening programs, primary and secondary grading are performed by trained opticians.
Whilst data are available on the effectiveness of individual screening modalities,10–13,17–19 there is currently only one study that has looked at the interobserver agreement between primary graders and an expert grader.20 Information on the safety, effectiveness, and agreement between primary and secondary graders for images of patients undergoing routine diabetic eye screening in a community optometry-based retinopathy screening program has not yet been reported.

Materials and methods
The North Nottinghamshire diabetic retinopathy screening service has utilized an optometry-based model since April 2006 and involves 36 optometrists across 21 sites. Screening is undertaken by local optometrists, and two-field digital images of the retina are recorded in the database and graded. All models and makes of the retinal cameras in use, as well as their age, are approved based on criteria set by the NHS Diabetic Eye Screening Programme. Tropicamide 1% is used to dilate the pupils to an acceptable size for screening, which is performed according to a standard national screening protocol. Primary and secondary grading is carried out by optometrists on the digital retinal images, and a web-based referral to an ophthalmologist is required if there is disagreement between primary and secondary graders or if sight-threatening retinopathy is observed. For this study, data were collected retrospectively between January 2011 and December 2011 from a cohort of 8,977 patients registered in an optometry-based retinal screening program database currently in place in North Nottinghamshire. These patients were reviewed by optometrists who carried out digital retinal photography. Images were stored in a web-based database and graded according to the national screening standard.11 Grading levels were as follows: no retinopathy (R0), background retinopathy (R1), preproliferative retinopathy (R2), proliferative retinopathy (R3), and maculopathy (M1).
Any retinopathy detected by a primary grader (R1, R2, M1), together with 10% of images with no evidence of retinopathy (R0), were sent for secondary grading performed by another optometrist. If there was any disagreement between the primary and secondary grader, the images were sent to arbitration, which was performed by an ophthalmologist. The presence of proliferative retinopathy (R3) would require an urgent referral to ophthalmology. However, during 2011, due to an internal quality audit that was being undertaken, all patients with R1 were referred to the ophthalmologist for screening. Retinal images that were not gradable by the primary grader for reasons such as previous surgery or cataracts were referred directly to ophthalmology. Patients under ophthalmology follow-up were kept under ophthalmology review with follow-up appointments until their retinopathy was stable. The screening program also has in place a fail-safe mechanism (monitored by a fail-safe officer) whereby images of patients subsequently found to have R3 or to have undergone photocoagulation therapy are traced back, on an ongoing basis, to see whether this was missed during screening. No R3 was missed at screening during the period of this audit. Once the patients had stable retinopathy with no immediate intervention required, they were referred back into the local retinal screening recall process. We calculated the agreement between the primary and secondary grader, as well as between individual graders and ophthalmologists, by means of Kappa statistics.21 We also looked at the proportion of disagreement leading to an upgrading of the retinopathy level. Assessment of sensitivity and specificity values in this study was limited to images graded as R3, since all R3 are referred to an ophthalmologist for arbitration or a final grading. R3 grading from the primary grader was compared against the "gold standard" ophthalmological diagnosis.
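The grading and referral pathway described above can be summarized as a small triage rule. The sketch below is illustrative only — the function names and return strings are ours, not part of the program's software — but it encodes the grade labels and referral rules exactly as described in the text:

```python
def triage(grade: str, gradable: bool = True, in_qa_sample: bool = False) -> str:
    """Next step for an image after primary grading (illustrative sketch).

    Grades: R0 no retinopathy, R1 background, R2 preproliferative,
    R3 proliferative, M1 maculopathy.
    """
    if not gradable:
        # e.g. previous surgery or cataracts
        return "refer directly to ophthalmology"
    if grade == "R3":
        # sight-threatening disease
        return "urgent referral to ophthalmology"
    if grade in ("R1", "R2", "M1"):
        # any detected retinopathy goes to a second optometrist
        return "secondary grading"
    # R0: only a 10% quality-assurance sample is regraded
    return "secondary grading" if in_qa_sample else "routine recall"


def after_secondary(primary: str, secondary: str) -> str:
    """Disagreement between the two graders goes to ophthalmologist arbitration."""
    if primary != secondary:
        return "arbitration by ophthalmologist"
    return "routine recall"
```

For example, `triage("R3")` returns an urgent referral, while an R0 image outside the 10% quality-assurance sample goes back to routine recall.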
Sensitivity is calculated as true positives/(true positives + false negatives), while specificity is calculated as true negatives/(true negatives + false positives). This work is labeled as service evaluation. The audit work and data derived from this work are part of the program's ongoing clinical governance exercise to maintain standards of retinopathy screening within the service. The statistical analysis was performed using SPSS version 14 software (SPSS Inc., Chicago, IL, USA).

Results
Of 8,977 patients (15,583 images), 734 patients were graded as R0 by the primary grader. Of these, 377 were graded as R0 by the secondary grader. This resulted in 51.4% agreement between the primary and secondary grader for patients graded as R0 at primary grading. The other 357 patients had no agreement between the primary and secondary grader. From these, 4.8% (n=17) were downgraded and 3.6% (n=13) were upgraded by ophthalmology (Table 1).
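These two definitions can be checked numerically. In the sketch below, the true-positive and false-positive counts follow the R3 results reported in this article (79 of 249 primary-grader R3 referrals confirmed), while the false-negative and true-negative counts are illustrative values inferred from the reported 78.2% and 98.1% — they are not stated directly in the text:

```python
def sensitivity(tp: int, fn: int) -> float:
    # true positives / (true positives + false negatives)
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    # true negatives / (true negatives + false positives)
    return tn / (tn + fp)

tp = 79            # R3 referrals confirmed by ophthalmology
fp = 249 - 79      # primary-grader R3 referrals not confirmed
fn, tn = 22, 8728  # assumed counts, inferred from the reported percentages

print(round(sensitivity(tp, fn) * 100, 1))  # 78.2
print(round(specificity(tn, fp) * 100, 1))  # 98.1
```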
For the proportion in which there was disagreement between the primary and secondary grader, 0.8% (n=13) were downgraded to a differ- ent grade by ophthalmology (Table 1). Where patients were graded R2 (n=210) at primary grading, agreement between the primary and secondary grader was 100% (Table 1); 207 of the 210 that were graded as R2 by the primary grader were graded by the secondary grader as well as ophthalmology. This was due to an internal quality assurance audit that was taking place in 2011. Proliferative retinopathy (R3) was detected in 249 patients by the primary grader, but only 31.7% (79) of these were subsequently confirmed as R3 by ophthalmology. Of the total population screened (n=8,977), 8,728 were found not to have R3 by the primary grader, while 1,777 patients were confirmed by ophthalmology not to have R3. From these data, the sensitivity and specificity for R3 in our cohort is 78.2% and 98.1% (Table 1); 3.6% of normal (R0) and 2.6% of background retinopathy (R1) had a disagreement in grading, leading to an upgrading of retinopathy level by ophthalmology. Ten percent of images graded as R0 went through to ophthalmology for arbitration. Of these, there was no agreement between the primary and secondary grader, but there was 56.6% agreement between the primary grader and ophthalmology, and 36.6% agreement between the secondary grader and ophthalmology. We used Kappa statistics to evaluate the level of agree- ment between primary and secondary graders and between primary and arbitration graders for R0–R2. There was an observed kappa of 0.3223 (95% confidence interval 0.2937–0.3509) and 0.269 (95% confidence interval 0.216–0.321), respectively (Tables 2 and 3). The level of agreement between the primary grader and ophthalmology for R3 using Kappa statistics gives an observed kappa of 0.5667 (95% confidence interval 0.4557–0.6123). Discussion For a systematic screening program to be effective, it needs a database that is robust and well maintained. 
The system currently in place in North Nottinghamshire uses a central call/recall center with ongoing quality assurance taking place at all stages of the process. In addition to their professional qualification registered by the General Optical Council which regulates dispensing opticians and optometrists, all screeners/graders would have undertaken a certificate for diabetic retinopathy screening by City and Guilds, as well as undergoing a test training set mandated by the NHS Dia- betic Eye Screening Programme. During the period of the audit, one test training set was performed by the opticians. However, data for the intergrader agreement based on this exercise were not available. Although the national program recommended only 10% of R0 to be secondarily screened, we performed an internal audit for the year 2009–2010, where all R0 underwent secondary grading as a result of a quality Table 1 Percentage of agreement, disagreement, upgrading, and downgrading of images in the north nottingham screening program R0 (n=734) n (%) R1 (n=7,784) n (%) R2 (n=210) n (%) R3 (n=249) n (%) agreement between primary and secondary grader 377 (51.4%) 6,204 (79.7%) 210 (100%) 249 (100%) agreement between primary grader and ophthalmology not evaluated 1,207 (15.5%) 78 (37.1%) 79 (31.7%) agreement between secondary grader and ophthalmology not evaluated 835 (10.7%) 78 (37.1%) not evaluated Disagreement leading to downgrading by ophthalmologist 17 (4.8%) not evaluated not evaluated 113 (45.4%) Disagreement leading to upgrading by ophthalmologist 13 (3.6%) 41 (2.6%) not evaluated not evaluated Disagreement leading to upgrading to r3 by ophthalmologist not evaluated 13 (0.8%) not evaluated not evaluated Notes: Using Kappa statistics to evaluate agreement between primary grader and ophthalmology for r3, the observed κ is 0.57 (95% confidence interval 0.46–0.61), ie, moderate agreement. Sensitivity and specificity for detecting R3 are 78.2% and 98.1%, respectively. 
Abbreviations: r0, no retinopathy; r1, background retinopathy; r2, preproliferative retinopathy; r3, proliferative retinopathy. C lin ic a l O p h th a lm o lo g y d o w n lo a d e d f ro m h tt p s: // w w w .d o ve p re ss .c o m / b y 1 2 8 .1 8 2 .8 1 .3 4 o n 0 6 -A p r- 2 0 2 1 F o r p e rs o n a l u se o n ly . Powered by TCPDF (www.tcpdf.org) 1 / 1 www.dovepress.com www.dovepress.com www.dovepress.com Clinical Ophthalmology 2014:8submit your manuscript | www.dovepress.com Dovepress Dovepress 1348 sellahewa et al assurance exercise recommended by the NHS Retinopathy Screening Programme. No sight-threatening retinopathy (R2 or higher) was identified. The above study provides novel information on the safety and effectiveness of a community-based retinal screening program that uses optometrists at both the primary and secondary grader level compared with other optometry or nonoptometry-based programs that use senior graders, dia- betologists, or ophthalmologists as secondary graders. Evidence for the effectiveness of screening is based on evidence of treatment efficacy especially after early detection and on cost-effectiveness. Comparing this screening program with the Exeter standards,18,19 ours achieved a specificity level above the expected 95% but the sensitivity level was marginally short of the recommended 80% threshold. Of note, the sensitivity data here refer to data analysis specific to R3 rather than data from the whole program. Moreover, it is conceivable that the slightly higher level of false-positives observed here reflects a slightly overcautious approach by optometrists to grading in patients with a higher likelihood of abnormalities in their eyes. In addition, image arbitration was performed by an ophthalmologist who may decide on the final “grade” based on clinical need for photocoagulation therapy rather than actual reporting of the images. 
Nevertheless, appropriate sensitivity and specificity for any screening modality have become more important in view of some recent evidence advocating a different frequency of retinopathy screening for different individuals, depending on the risk of retinopathy progression based on baseline and/or previous screening results.24 Despite a high false-negative rate, none of the false negatives required urgent photocoagulation therapy, which reflects a subsequent "clinical" diagnosis by the ophthalmologist rather than a misdiagnosis by the optometrist. This has been confirmed by regular audit of our data based on the governance structure currently in place in our screening program. It was also reassuring to note that the levels of agreement between primary and secondary graders for higher levels of retinopathy (R2 and R3) were both 100%. For lower levels of retinopathy, ie, R0 and R1, agreement between primary and secondary graders was lower, at 51.4% and 79.7%, respectively. Of these, 3.6% of normal (R0) and 2.6% of background (R1) retinopathy showed a disagreement in grading leading to an upgrading of retinopathy level by ophthalmology, but none required photocoagulation therapy. Some limitations of this study need to be highlighted. To calculate sensitivity and specificity, we analyzed data specific to R3 only. This was because only 10% of R0 and some of R1 and R2 were referred to ophthalmology, whereas all R3 were referred to an independent ophthalmologist. Because of this, we were unable to look at the sensitivity and specificity for the whole cohort, which affects the results reported in our study. We used the ophthalmologist's grade as the gold standard, so it would be important to have all retinopathy graded as R2 by the primary grader reviewed by ophthalmology to ensure that none of these would need to be upgraded to R3, which would mean they would need ophthalmology follow-up and potential treatment.
The study was carried out by retrospective data collection, which would also be considered a limitation due to the presence of confounding biases. We were also not able to reliably determine results for maculopathy within our program. Further, we were not able to accurately adjust results for ungradable images, due to poor patient compliance with the screening protocol, poor mydriasis, or other factors. Interpretation of the results is limited to this program and cannot necessarily be generalized to other programs. Lastly, although the kappa statistic is a recognized method for assessment of agreement, the magnitude of kappa reflecting adequate agreement is unclear. Arbitrary guidelines are available to indicate the level of agreement, although these are not evidence-based; generally, however, it is accepted that a kappa score above 0.80 would suggest very good agreement.25,26 Despite this, due to methodological limitations of other research in this area, and due to a lack of data and evidence on optometrists as primary and secondary graders in detecting R3 in a retinopathy screening program, we believe data from this study would enhance available knowledge concerning the safety and effectiveness of an optometry community-based retinopathy screening program.

Table 2 Agreement and disagreement for primary grader (horizontal axis) and secondary grader (vertical axis)

      R0    R1     R2
R0    17    185    6
R1    12    1,207  122
R2    0     36     78

Notes: Using kappa statistics to evaluate the overall level of agreement between primary and secondary graders for R0-R2, the observed κ is 0.3223 (95% confidence interval 0.2937-0.3509).
Abbreviations: R0, no retinopathy; R1, background retinopathy; R2, preproliferative retinopathy.

Table 3 Agreement and disagreement for primary grader (horizontal axis) and arbitration grader (vertical axis)

      R0    R1     R2
R0    377   1,107  0
R1    354   6,204  0
R2    3     261    210

Notes: Using kappa statistics to evaluate the overall level of agreement between primary and secondary graders for R0-R2, the observed κ is 0.269 (95% confidence interval 0.216-0.321).
Abbreviations: R0, no retinopathy; R1, background retinopathy; R2, preproliferative retinopathy.

There is no clear evidence suggesting who has the best sensitivity and specificity for detecting sight-threatening retinopathy, ie, whether it is independent graders, optometrists, diabetologists, general practitioners, or ophthalmologists.
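The unweighted Cohen's kappa used in the table notes can be computed directly from a confusion matrix of grader-pair counts. A minimal sketch, applied here to the Table 2 counts as printed; note that the 95% confidence intervals quoted in the notes come from the paper's own analysis and are not produced by this point estimate alone:

```python
def cohens_kappa(matrix):
    """Unweighted Cohen's kappa for an n x n agreement matrix:
    kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed agreement
    (diagonal fraction) and p_e the chance agreement from the marginals."""
    total = sum(sum(row) for row in matrix)
    p_o = sum(matrix[i][i] for i in range(len(matrix))) / total
    row_sums = [sum(row) for row in matrix]
    col_sums = [sum(col) for col in zip(*matrix)]
    p_e = sum(r * c for r, c in zip(row_sums, col_sums)) / total**2
    return (p_o - p_e) / (1 - p_e)

# Primary grader (columns) vs secondary grader (rows), counts from Table 2.
table2 = [[17, 185, 6],
          [12, 1207, 122],
          [0, 36, 78]]
print(round(cohens_kappa(table2), 3))
```

The chance-correction term p_e is what distinguishes kappa from raw percentage agreement, which is why the 79.7% R1 agreement in Table 1 does not translate into a high kappa.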
A single study showed that retinal photographs assessed by optometrists could achieve 91% sensitivity in detecting R3 or sight-threatening retinopathy.20 Data on the effectiveness of individual screening modalities are widely available.13,17,19,23 However, our study provides unique data on the safety, effectiveness, and agreement between primary and secondary graders for images of patients undergoing routine diabetes eye screening in a community optometry-based retinopathy screening program.

Author contributions
LS contributed to the data acquisition and analysis, and interpretation of the data, and wrote the first draft of the manuscript. CS supported the acquisition and analysis of the data. JD and PM contributed to analysis or interpretation of data. II conceptualized the study and contributed to the design, analysis, and interpretation of the data. II is the guarantor for this study. All authors contributed to the writing of the manuscript and agreed on the final draft.

Disclosure
The authors report no conflicts of interest in this work.

References
1. Owens DR, Gibbins RL, Kohner E, et al. Diabetic retinopathy screening. Diabet Med. 2000;17(7):493–393.
2. Stefánsson E, Bek T, Porta M, et al. Screening and prevention of diabetic blindness. Acta Ophthalmol Scand. 2000;78(4):374–385.
3. Garvican L, Clowes J, Gillow T. Preservation of sight in diabetes: developing a national risk reduction programme. Diabet Med. 2000;17(9):627–634.
4. Early Treatment Diabetic Retinopathy Study Research Group. Early photocoagulation for diabetic retinopathy. ETDRS report number 9. Ophthalmology. 1991;98(5):766–785.
5. James M, Turner D, Broadbent D, et al. Cost effectiveness analysis of screening for sight threatening diabetic eye disease. BMJ. 2000;320(7250):1627–1631.
6. Buxton M, Sculpher M, Ferguson B, et al. Screening for treatable diabetic retinopathy: a comparison of different methods. Diabet Med. 1991;8(4):371–377.
7. Sculpher M, Buxton M, Ferguson B, et al. A relative cost-effectiveness analysis of different methods of screening for diabetic retinopathy. Diabet Med. 1991;8(7):644–650.
8. Bachmann MO, Nelson S. Impact of diabetic retinopathy screening on a British district population: case detection and blindness prevention in an evidence based model. J Epidemiol Community Health. 1998;52(1):45–52.
9. Davies R, Roderick P, Canning C, et al. The evaluation of screening policies for diabetic retinopathy using simulation. Diabet Med. 2002;19(9):762–770.
10. UK National Screening Committee. Available from: http://www.screening.nhs.uk. Accessed May 31, 2013.
11. NHS Diabetic Eye Screening Programme. Available from: http://diabeticeye.screening.nhs.uk. Accessed May 31, 2013.
12. Ferguson BA, Humphreys JE, Altman JFB, et al. Screening for treatable diabetic retinopathy: a comparison of different methods. Diabet Med. 1991;8(4):371–377.
13. Hutchinson A, McIntosh A, Peters J, et al. Effectiveness of screening and monitoring tests for diabetic retinopathy – systematic review. Diabet Med. 2000;17(7):495–506.
14. Scanlon PH, Wilkinson CP, Aldington SJ, et al. Screening for diabetic retinopathy. In: Scanlon PH, Wilkinson CP, Aldington SJ, Matthews DR, editors. A Practical Manual of Diabetic Retinopathy Management. Oxford, UK: Wiley-Blackwell; 2009.
15. Taylor D, Fisher J, Jacob J, et al. The use of digital cameras in a mobile retinal screening environment. Diabet Med. 1999;16(8):680–686.
16. Goatman KA, Philip S, Fleming AD, et al. External quality assurance for image grading in the Scottish diabetic retinopathy screening programme. Diabet Med. 2012;29(6):776–783.
17. Sallam A, Scanlon PH, Stratton IM, et al. Agreement and reasons for disagreement between photographic and hospital biomicroscopy grading of diabetic retinopathy. Diabet Med. 2011;28(6):741–746.
18. Harding SP, Broadbent DM, Neoh C, et al. Sensitivity and specificity of photography and direct ophthalmoscopy in screening for sight threatening eye disease: the Liverpool Diabetic Eye Study. BMJ. 1995;311(7013):1131–1135.
19. Harding S, Greenwood R, Aldington S, et al. Grading and disease management in national screening for diabetic retinopathy in England and Wales. Diabet Med. 2003;20(12):965–971.
20. Patra S, Gomm EM, Macipe M, et al. Interobserver agreement between primary graders and an expert grader in the Bristol and Weston diabetic retinopathy screening programme: a quality assurance audit. Diabet Med. 2009;26(8):820–823.
21. Donner A, Shoukri M, Klar N, et al. Testing the equality of two dependent kappa statistics. Stat Med. 2000;19(3):373–387.
22. Gibbins RL, Owens DR, Allen JC, et al. Practical application of the European field guide in screening for diabetic retinopathy by using ophthalmoscopy and 35 mm retinal slides. Diabetologia. 1998;41(1):59–64.
23. Olson J, Strachan F, Hipwell J, et al. A comparative evaluation of digital imaging, retinal photography and optometrist examination in screening for diabetic retinopathy. Diabet Med. 2003;20(7):528–534.
24. Stratton IM, Aldington SJ, Taylor DJ, Adler AI, Scanlon PH. A simple risk stratification for time to development of sight threatening diabetic retinopathy. Diabetes Care. 2013;36:580–585.
25. Landis JR, Koch GG. The measurement of observer agreement for categorical data. Biometrics. 1977;33(1):159–174.
26. Fleiss JL. Statistical Methods for Rates and Proportions. 2nd ed. New York, NY: John Wiley; 1981.
work_mi6hrpr7yzb5di7lxupmggih7u ---- Zoom in on dental photography

Dr Philip Wander* explores applications of dental photography in general dental practice and explains why it is so important.

vital | www.nature.com/vital | PHOTOGRAPHY

The ultimate patient record
First and foremost, photographs complement patient records. Photographs, along with written description, paint a complete picture of a case.
They serve as an ideal and informative way to document the progress of a case from every angle and during each visit. By charting the progress of a case you can monitor everything from healing after surgery to soft tissue pathology and cracks in fillings, and, most importantly, you can have a visual reference to all cases available in your patient charts. The concept of digital photography enhances the ultimate goal of many practices to move toward a paperless practice. Digital images can be added directly to practice management programs to make for the ultimate patient record. For years photography has been an integral component of dental education. Today, with the increase in medico-legal problems, the difficult economic times, and the increased need for patient motivation, communication and practice management, practitioners are looking for ways both to increase patient flow and to expand the scope of their practice.

Digital dental photography
Photography is a hobby enjoyed by millions. For many years it has played a vital part in our lifestyle, at work and at play. It is found in almost every aspect of day-to-day living, and is an important source of information, education, advertising and revenue. Digital dental photography is one of the most versatile and easiest of media to work with, and it is rapidly becoming one of the tools used routinely in dentistry today. Progress has been the bottom line in dentistry over the past decade. The advances in technique and materials have astounded all, and have led us into a new age of clinical dentistry. With the new techniques comes a renewed commitment to quality and excellence, from clinical dentistry to practice management. Photography stands at the forefront of these changes; it enables the practitioner to record patient cases for review with colleagues, to record the patient's condition for malpractice and practice management situations, and, of course, for protection against litigation.
Every practitioner has the obligation to do the best work he or she possibly can, and constant self-examination and evaluation are required. Photographs of before and after cases clearly show the dentist where the work can be improved. Even when the finished product appears clinically sound, viewing the image from a clinical photograph will reveal hidden defects in many restorations, problems that the dentist can address during the next clinical procedure. Increased competition amongst professionals in the form of advertising, large group practices, chain operations and marketing has sent most practitioners searching for new ways to find and keep patients. Internal and external marketing practices can be employed by dental professionals.

Practice marketing
The most effective means of internal marketing is proper practice management, and the use of photographs can help to market the practice in numerous ways, such as newsletters and welcome packs. For external marketing the most effective route is your website, which can be greatly enhanced by including photographs of work you have carried out, to engage prospective patients. Increasing awareness on the public's part of legal recourse against professionals has sent medical and dental practitioners scurrying to find protection.

*Phil is best known for his extensive articles as well as his BDJ textbook, co-authored with Peter Gordon, on clinical dental photography - a topic on which he has lectured and demonstrated extensively. Phil qualified from the University of Liverpool in 1966 and obtained his MGDS from the Royal College of Surgeons in London in 1980. He recently obtained a Fellowship in Homeopathy from the Faculty of Homeopathy in London. Phil is a consultant to a number of general dental practices in Manchester.

© 2011 Macmillan Publishers Limited. All rights reserved.
The public's realisation of the fallibility of the professional has caused malpractice cases to soar. The best defence against a malpractice suit is proper record keeping and clinical competency. Photography provides the answer to the first, and certainly helps the practitioner achieve clinical excellence by way of self-review. The use of photography as a standard part of the patient's records is becoming universal procedure among medical and dental professionals. Almost all specialists are now supplementing records with photographs of their cases, and general practitioners are following suit.

The 'team' approach
Photographic techniques can be taught and applied by the whole dental team. The basics are very easy to learn and follow - the best way is to attend a suitable 'hands on' course. Interaction between staff and patients is unbelievably powerful. Reception, chairside, dental therapists and hygienists - the whole of your team needs, and should want, to be involved in the whole experience of encouraging the patient to want the very best for their dental health, appearance and function. The use of photography goes a huge way to help in understanding and solving dental problems. Equipment is simple to use, reliable, specialised for our purposes, and cost effective. Some of the benefits of using photography in practice are its versatility, flexibility, durability and low cost. The applications of intraoral photography are countless, limited only by the imagination of the practitioner.

Receptionist/treatment coordinator
In many practices a specific member of staff is empowered to undertake initial photographs at the patient's first visit, especially for treatment planning and cosmetic imaging, and can save valuable time by downloading images ready for the dentist to analyse.

Practice promotion
Photography in the dental practice can help stimulate patient awareness, motivation and interest.
Dental photography has revolutionised the concept of professional-patient communication. Now, through photography, a patient can see what a dental professional sees, and therefore understand the importance of treatment and be involved in their own treatment planning. Images can be imported into cosmetic imaging programs and edited to show potential 'before and afters'. With the art of cosmetic dentistry growing and elective cosmetic procedures on the rise, this form of imaging is instrumental in selling cases by actually showing patients how potential changes could affect the way they look. This concept has become so popular and widely marketed that today many prospective patients not only enjoy seeing the potential before and after picture, but will often select a practice that offers this technology.

TELL or SHOW?
Digital photography enables you to view and review images taken and then display and relay the information to patients. Photographs make it easier for the patient to comprehend the conditions present, and to understand both the treatment required and the procedures involved. This understanding by the patient allows for open communication and prevents any misunderstanding. The increased awareness also increases patient motivation towards their dental health. A simple way of presenting the photographs is to view the images on a laptop or iPad. Trouble spots in the mouth can be identified which cannot be seen by the patient using direct vision or mirrors. The patient now actually sees the cavities, misplaced teeth and broken fillings, rather than wondering if the practitioner has fabricated any imaginary problems. Full arch occlusal photographs are particularly useful.

A rewarding experience
Photography is a tool, and when properly used in dentistry it makes dental practice a much easier and more rewarding experience. Patients will benefit from greater understanding and better quality dentistry being performed.
Staff will benefit from an exciting atmosphere and have greater ease and understanding of their role in the management of the practice. And the practitioner will benefit, as improved quality of dentistry is sure to follow after closely scrutinising photographs of work done. The improved record keeping will offer security against law suits, and the improvement in practice patient flow and increased level of treatment plan acceptance is sure to put a smile on the face of every practitioner. The camera is truly an eye to a better practice, and great fun too.

The BDA Training essentials course portfolio offers a one-day interactive course for the whole dental team, 'Clinical photography in the dental practice'. The course covers the techniques of how to take quality, predictable dental photographs and incorporate them into your day-to-day practice. Visit www.bda.org/training or call BDA Events on 020 7563 4590 for further information.

[Figure: Comprehensive series of photographs for a new patient]

work_mkbi6rhqozexrcj3ev2kmf57we ---- Reprinted with permission from American Institute of Physics, License Number 1643741256210

Instrumented Taylor anvil-on-rod impact tests for validating applicability of standard strength models to transient deformation states
D. Eakins and N.N. Thadhani

Reprinted with permission from D. E. Eakins, Journal of Applied Physics, 100, 073503 (2006). Copyright 2006, American Institute of Physics. This article may be downloaded for personal use only.
Any other use requires prior permission of the author and the American Institute of Physics.

Instrumented Taylor anvil-on-rod impact tests for validating applicability of standard strength models to transient deformation states
D. E. Eakins and N. N. Thadhani(a)
School of Materials Science and Engineering, Georgia Institute of Technology, 771 Ferst Drive, Love Building, Atlanta, Georgia 30332
(Received 12 December 2005; accepted 20 July 2006; published online 3 October 2006)

Instrumented Taylor anvil-on-rod impact tests have been conducted on oxygen-free electronic copper to validate the accuracy of current strength models for predicting transient states during dynamic deformation events. The experiments coupled the use of high-speed digital photography to record the transient deformation states and laser interferometry to monitor the sample back (free surface) velocity as a measure of the elastic/plastic wave propagation through the sample length. Numerical continuum dynamics simulations of the impact and plastic wave propagation employing the Johnson-Cook (Proceedings of the Seventh International Symposium on Ballistics, 1983, The Netherlands (Am. Def. Prep. Assoc. (ADPA)), pp. 541-547), Zerilli-Armstrong (J. Appl. Phys. 61, 1816 (1987)), and Steinberg-Guinan (J. Appl. Phys. 51, 1498 (1980)) constitutive equations were used to generate transient deformation profiles and the free surface velocity traces. While these simulations showed good correlation with the measured free surface velocity traces and the final deformed sample shape, varying degrees of deviation were observed between the photographed and calculated specimen profiles at intermediate deformation states. The results illustrate the usefulness of the instrumented Taylor anvil-on-rod impact technique for validating constitutive equations that can describe the path-dependent deformation response and can therefore predict the transient and final deformation states. © 2006 American Institute of Physics.
[DOI: 10.1063/1.2354326]

I. INTRODUCTION
The response of a material to high-strain-rate loading is an important subject, owing to the innumerable applications for materials in dynamic environments, such as high-rate forming, ballistics, and impact on vehicle structures. Understanding this behavior is essential for predictive purposes, reducing the iterative nature of designing components. The heart of predictive material modeling is the constitutive equation, which relates the application of load to deformation response on the basis of material properties. The validity of the constitutive equation depends upon its ability to describe fully and predictively any experimentally obtained information throughout the deformation event (and not just the end state) as a function of strain, strain rate, and temperature. Several of the more widely used constitutive equations have either empirical forms resulting from test data spanning a certain range of strain rates,1 or are based on physically based models relying on effects of thermally activated dislocation motion.2-4

The rod-on-rigid-anvil impact experiment performed by Taylor in 1948 remains a popular method of investigating the mechanical response of a material to dynamic loading.5 In the original configuration, a cylindrically shaped specimen is impacted against a rigid anvil at velocities sufficient to induce plastic deformation without failure; strain rates in the range of 10³-10⁵ s⁻¹ are typical. By comparing the shape (length change) between the final deformed and initial specimens for a given impact velocity, estimates of an average dynamic flow stress of the material may be made.
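The length-change estimate described above can be sketched with the classical Taylor formula, sigma = rho*v^2*(L0 - X) / (2*(L0 - Lf)*ln(L0/X)), which assumes perfectly plastic response. The specimen lengths in the example below are hypothetical values for illustration, not measurements from this paper:

```python
import math

def taylor_flow_stress(rho, v, L0, Lf, X):
    """Classical Taylor estimate of the average dynamic flow stress from
    the initial length L0, final overall length Lf, and length X of the
    undeformed rear section of the recovered rod. rho in kg/m^3, v in m/s;
    lengths in any consistent unit (they enter only as ratios/differences),
    result in Pa."""
    return rho * v**2 * (L0 - X) / (2.0 * (L0 - Lf) * math.log(L0 / X))

# Illustrative (hypothetical) post-test lengths for a 76.2 mm copper rod
# impacted at 83 m/s; copper density ~8940 kg/m^3.
sigma = taylor_flow_stress(rho=8940.0, v=83.0, L0=76.2, Lf=69.0, X=55.0)
print(f"estimated dynamic flow stress ~ {sigma / 1e6:.0f} MPa")  # ~278 MPa
```

Because the whole deformation history collapses into three lengths, the estimate is an average over the event, which is exactly the limitation the instrumented test in this paper addresses.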
Early work on the construction/validation of constitutive relationships was similarly based upon comparisons between numerically simulated and experimentally obtained deformed specimen geometry.1,2,6-8 However, such comparisons do not provide a true validation, since the models are not used to generate the deformation path (just the end state) and the estimate of dynamic yield strength is based on the assumption of perfectly plastic material response. The inability to record the evolving profile during the deformation event prevented a robust validation of the early constitutive models.

The modern Taylor test has evolved considerably from its early start. In recent years, use of high-speed digital photography has made the capture of the deformation history possible. With interframe times in some cases of as few as several tens of nanoseconds, the time-resolved transient specimen profiles permit the validation of material models that can describe deformation occurring over a range of time and not just limited to the final deformed state. In addition, experiments conducted in the reverse configuration, i.e., impacting a rigid anvil against a stationary rod, permit spatially dependent measurements to be made that would be impossible if the sample were otherwise in motion. For example, VISAR (velocity interferometry system for any reflector, Ref. 9) can be employed to monitor the particle velocity at the sample's back (free) surface or at nearly any location within the specimen periphery.10,11 Coupling the multiple forms of time-resolved measurement techniques, complementary information can be obtained against which current and future constitutive models can be more robustly evaluated.

a)Electronic mail: naresh.thadhani@mse.gatech.edu

To date, a number of popular strength models have been
To date, a number of popular strength models have beena�Electronic mail: naresh.thadhani@mse.gatech.edu JOURNAL OF APPLIED PHYSICS 100, 073503 �2006� 0021-8979/2006/100�7�/073503/8/$23.00 © 2006 American Institute of Physics100, 073503-1 Downloaded 03 Nov 2006 to 130.207.165.29. Redistribution subject to AIP license or copyright, see http://jap.aip.org/jap/copyright.jsp http://dx.doi.org/10.1063/1.2354326 http://dx.doi.org/10.1063/1.2354326 http://dx.doi.org/10.1063/1.2354326 used to predict the final geometry of Taylor impact speci- mens tested over a range of strain rates. The models by Johnson and Cook,1 Zerilli and Armstrong,2 and Steinberg et al.4 are some of the most widely known and frequently used, partly owing to their incorporation into many private and commercial hydrocodes. The applicability of these models to transient deformation states however, has been largely ig- nored. To address path-dependent effects, a few models have recently been developed, such as the modified Armstrong- Zerilli model by Goldthorpe et al.12 and Gould and Goldthorpe13 and the empirical model of Frechard et al.14 Validation of these path-linked models through comparison with transient profiles has shown improvements over the tra- ditional models.14,15 However, the use of approaches that provide complementary measures of transient deformation states and require adherence to transient phenomenon has yet to be adopted as a general rule for model validation. The goal of the work described in this paper is to estab- lish a method for validating constitutive equations using multiple time-resolved measurement techniques. Instru- mented Taylor anvil-on-rod impact experiments were thus performed on an oxygen-free electronic �OFE� Cu standard at 83 m / s, utilizing high-speed digital photography to obtain transient deformation profiles and velocity interferometry to record the sample back surface velocity revealing the various elastic, plastic, and radial wave interactions. 
Continuum finite-element simulations of the deformation event were made using the Johnson-Cook empirical equation and the physically based Zerilli-Armstrong and Steinberg-Guinan models, and the constant parameters were adjusted in each case to match the experimentally derived VISAR trace. The three models with respective fitted constants were then compared with the final deformation state of the recovered sample and the recorded transient deformation profiles for the 83 m/s experiment for further evaluation and validation. The models with the same constants were also applied to the results from the experiment conducted at 205 m/s for evaluation and validation at the higher velocity.

II. EXPERIMENTAL PROCEDURE
A. Instrumented Taylor anvil-on-rod impact test
An 80 mm diameter, 7.6 m long helium driven single-stage gas gun was used to perform the instrumented Taylor anvil-on-rod impact experiments (Fig. 1). The projectile is comprised of a 76.2 mm diameter rigid flyer plate, machined from 6 mm thick hardened maraging 300 steel, mechanically fastened to a 178 mm long solid aluminum carrier sabot. All contacting surfaces were lapped prior to assembly. A wrap-around breech firing mechanism was used to propel the projectiles at velocities of 83 and 205 m/s, measured by a series of shorting pins located at the muzzle face. To avoid unwanted specimen damage imparted by secondary impacts, a soft-catch recovery technique was used in order to slow the projectile shortly after impacting the sample. An Imacon 200 high-speed digital framing camera capable of 50 ns interframe time was positioned at one of the experiment chamber windows to record the deformation event, while a flash placed at the opposite side was used to achieve backlighting. The free surface velocity interferometry measurements were collected by a VISAR probe positioned approximately 30 mm from the specimen back surface and recorded on a gigahertz frequency oscilloscope.
Triggering of the camera and the digital oscilloscope for the VISAR was accomplished using Dynasen end-impact crush pins placed 2 mm ahead of the specimen impact face.

B. Specimen preparation

OFE grade copper (C10100) was supplied as 3/4 in. diameter round rod as specified by ASTM B187. Cylindrical Taylor specimens were prepared from the stock, observing the length-to-diameter ratio of 4:1 chosen for the series of experiments. The ends of the rods were similarly lapped parallel to within 0.008°, using 15 μm diamond suspension for a rough polish. The remaining cylinder surface was untreated and had a smooth unpolished mill finish. The copper rods were supported by a 1/8 in. thick poly(methyl methacrylate) (PMMA) disk, offset from the specimen back surface by 9 mm to allow imaging of the sample's entire length. Perpendicularity between the sample face and barrel axis to within 3 mrad was achieved for each experiment through the use of a two-axis target adjustment ring and laser alignment system.

C. Experimental measurements

The high-speed framing camera allows the transient profiles to be captured at any time during the deformation event. Use of backlighting has become standard for increasing the contrast between the specimen and the surrounding environment, reducing the uncertainty in measurements. A total of 16 frames were captured in each experiment, each image consisting of 1200 × 980 pixels. Most frames were spent at the early stages of deformation while the free end was still in view. The first two frames captured the sample and projectile before impact, providing another measure of the impact velocity. The next 11 frames were spaced evenly at short time intervals while maintaining full visibility of the back surface. Images captured with the back end obscured or beyond the framing window were undesirable, since the specimen length could not be determined for data analysis.
The remaining three frames were spent at later times for the purpose of determining the time of separation between projectile and sample.

Using VISAR, the free surface velocity of a specimen may be recorded during impact, giving an accurate measurement of the particle velocity Up and the overall back-end speed.

FIG. 1. Schematic of the instrumented Taylor anvil-on-rod impact experiment showing the arrangement of the projectile, sample, VISAR probe, and camera.

Unlike the camera images, which are limited in number and resolution, VISAR data evolve throughout the deformation event and depend on the mechanical wave behavior and the wave interactions during propagation across the sample length. For the experiments using VISAR, a green laser interferometry system manufactured by VALYN and nanosecond-resolution recording devices were used.

D. AUTODYN-2D simulations

Following each experiment, dynamic deformation simulations using the experimental impact velocity were run with the AUTODYN-2D hydrocode originally developed by Century Dynamics.16 AUTODYN-2D is a commercial finite-element package that specializes in the computation of dynamic simulations and contains a variety of solvers (Lagrange, Euler, arbitrary Lagrangian-Eulerian (ALE), and smoothed particle hydrodynamics (SPH)). Of those available, the Lagrangian solver is most appropriate for simulating deformations where the strains are small. An axisymmetric model of the reverse-Taylor experiment was constructed utilizing a graded mesh for the copper specimen (Fig. 2). A mesh density study indicated that the resolution employed (0.2 mm at the impact face) was adequate to converge upon steady elastic and plastic responses.
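The mesh density study mentioned above can be abstracted as a generic refinement loop: halve the element size until the monitored response stops changing. This is only a sketch of the idea, not the authors' AUTODYN workflow; `solve` stands in for a full simulation run at a given element size:

```python
def refine_until_converged(solve, h0, tol=0.01, max_levels=8):
    """Halve the element size h until the monitored scalar response changes
    by less than `tol` (relative) between successive refinement levels.

    `solve(h)` stands in for a simulation at element size h returning one
    scalar of interest (e.g. a steady plastic-response amplitude).
    Returns the converged element size and the response at that size.
    """
    h = h0
    prev = solve(h)
    for _ in range(max_levels):
        h /= 2.0
        cur = solve(h)
        if abs(cur - prev) <= tol * abs(prev):
            return h, cur
        prev = cur
    raise RuntimeError("response did not converge within max_levels refinements")

# Toy response with a first-order error term: value(h) = 80.0 + 5.0*h,
# so successive halvings approach 80.0 and trip the 1% criterion.
h_conv, v = refine_until_converged(lambda h: 80.0 + 5.0 * h, h0=1.6)
```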
To simulate the VISAR data recorded from the back surface, a gauge was placed on the last node to record the velocity component in the x direction. Additional gauges were also placed in adjacent nodes of the flyer and specimen impact end to assist in determining the time of separation between projectile and sample. As a first approximation, the Johnson-Cook and Zerilli-Armstrong viscoplastic models [Eqs. (1) and (2)] and the Steinberg-Guinan strength model popularly used for the high-strain-rate regime [Eqs. (3) and (4)], with parameters for OFHC copper, were used to simulate the deformation response of the OFE Cu samples investigated in this study:

\sigma = (\sigma_0 + B\varepsilon^n)(1 + C \ln \dot{\varepsilon}^{*})(1 - T^{*m}), (1)

\sigma = \sigma_G + C_2 \varepsilon^{1/2} \exp(-C_3 T + C_4 T \ln \dot{\varepsilon}) + k d^{-1/2}, (2)

\sigma = \sigma_0 [1 + \beta(\varepsilon + \varepsilon_i)]^{n} [1 + (\sigma_P'/\sigma_0)\, P/\eta^{1/3} + (G_T'/G_0)(T - 300)], (3)

G = G_0 [1 + (G_P'/G_0)\, P/\eta^{1/3} + (G_T'/G_0)(T - 300)]. (4)

The parameters in the above equations correspond to standard terminology.1,2,4 Adjustments to the starting model parameters were made until the simulated velocity trace matched the experimental VISAR data. Simulated images of the specimen profiles were then constructed for the final deformed state, and at times coinciding with the experimental camera data, to determine if the model followed the same deformation path. An adjusted Johnson-Cook strength model incorporating several measured elastic constants was used for the steel flyer and left unchanged for each simulation. Though the validity of the Johnson-Cook model is under scrutiny, the choice of strength model for the maraging steel flyer plate bears little weight since it is assumed to be nearly rigid.

III. RESULTS AND DISCUSSION

Impact experiments on OFE copper were performed at 83 and 205 m/s to showcase the extremes in deformation response (ever-convex profile at 83 m/s; concave/convex transition and barreling at 205 m/s).
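As a numerical illustration of the Johnson-Cook relation, Eq. (1), the sketch below evaluates the flow stress. The constants are commonly quoted OFHC-copper values (A = σ0 = 90 MPa, B = 292 MPa, n = 0.31, C = 0.025, m = 1.09), used here only for illustration and distinct from the fitted constants discussed later in the paper:

```python
import math

def johnson_cook_stress(eps, eps_rate, T, *, A, B, n, C, m,
                        eps_rate_ref=1.0, T_ref=300.0, T_melt=1356.0):
    """Flow stress per Eq. (1): sigma = (A + B*eps^n)(1 + C*ln(rate*))(1 - T*^m).

    rate* = eps_rate / eps_rate_ref (dimensionless strain rate) and
    T* = (T - T_ref) / (T_melt - T_ref); valid for T_ref <= T < T_melt.
    """
    T_star = (T - T_ref) / (T_melt - T_ref)
    rate_star = eps_rate / eps_rate_ref
    return (A + B * eps**n) * (1.0 + C * math.log(rate_star)) * (1.0 - T_star**m)

# Commonly quoted OFHC-copper constants (illustrative only; SI units, Pa):
ofhc = dict(A=90e6, B=292e6, n=0.31, C=0.025, m=1.09)

# Quasistatic, room temperature, 10% strain: the rate and thermal factors drop out.
sigma = johnson_cook_stress(0.1, 1.0, 300.0, **ofhc)  # ~233 MPa
```

Lowering the hardening exponent n toward the fitted value of 0.05 raises the hardening term B·ε^n at small strains, which is consistent with the stiffer early response the fitting required.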
The lengths and diameters of the initial and final impacted rods from each experiment are listed in Table I, where the final diameter refers to that of the crushed end.

A. Experimental camera and free surface velocity data

Selected high-speed camera images showing the transient deformation states for the experiment performed at an impact velocity of 83 m/s are presented in Fig. 3. For the field of view shown, each pixel corresponds to approximately 81 μm, for an image resolution of 12.3 pixel/mm. The images reveal deformation occurring without movement of the free surface until perhaps 58 μs; however, determining the exact time of movement from camera data is complicated by the variable perspectives of each image channel. Separation between the sample and flyer appears to occur between 134.7 and 169.2 μs, as shown from the later images in the inset of Fig. 3.

The free surface velocity trace recorded by VISAR from the 83 m/s experiment is shown in Fig. 4, where the onset of disturbance at the free surface is chosen for the start time, t0. The free surface reaches a velocity of 80 m/s nearly 100 μs after t0 before the signal is lost. The oscillations early in the trace are caused by the alternating states of tension and compression at the back surface due to radial reflection of the elastic waves, and are most pronounced when the VISAR probe is focused at the center of the rod's back face.

FIG. 2. AUTODYN-2D meshes used for all simulations, with graded mesh density toward the impact surface and gauge points for recording x velocity at the impact and free surfaces.

TABLE I. Initial and final dimensions of OFE copper.

Velocity (m/s)   Initial diameter, d0 (mm)   Initial length, l0 (mm)   Final diameter, df (mm)   Final length, lf (mm)
205              18.9                        75.0                      36.6                      54.5
83               19.1                        75.1                      23.6                      70.3
B. Simulations and model refinement

Simulations using the AUTODYN-2D hydrocode with the Johnson-Cook (JC), Zerilli-Armstrong (ZA), and Steinberg-Guinan (SG) models were first used to fit the experimentally recorded VISAR velocity trace for the 83 m/s experiment. Using the fitted parameters, the simulations were used to obtain the final deformed state and transient deformation profiles of the 83 m/s impacted sample.

1. Free surface velocity simulations

Equivalent velocity records simulated using the JC, ZA, and SG models with parameters for oxygen-free high conductivity (OFHC) copper (available from the AUTODYN library) are shown as dashed lines along with the experimental data in Figs. 4(a)-4(c) for the experiment performed at 83 m/s. Each of the models produced characteristics similar to the experiment, exhibiting equivalently spaced oscillations early in the velocity traces. However, predicted rise times to 80 m/s were considerably longer than the 100 μs rise time observed experimentally, indicating a lower degree of strength and strain hardening. These differences are not unexpected, since the parameters used in the models pertain to OFHC Cu and are not intended for the OFE Cu material tested in this work. Maintaining the form of the Johnson-Cook, Zerilli-Armstrong, and Steinberg-Guinan equations, adjustments were made to the starting parameters until an agreeable match between simulation and experiment was reached. The resulting fitted VISAR traces following the changed constant parameters (listed in Table II along with the original constants) are also shown as solid traces in Figs. 4(a)-4(c).

In the Johnson-Cook equation, the value of the work-hardening exponent n (=0.05) was determined from quasi-static compression tests of the starting material.
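Extracting a hardening exponent from quasi-static compression data can be sketched as a log-log linear fit of the hardening term, since σ − σ0 = B·ε^n implies ln(σ − σ0) = ln B + n·ln ε. The data below are synthetic, and this is only one plausible reduction, not necessarily the authors' procedure:

```python
import math

def fit_power_law_hardening(strains, stresses, sigma0):
    """Fit sigma = sigma0 + B*eps**n by linear regression of
    ln(sigma - sigma0) against ln(eps); returns (B, n)."""
    xs = [math.log(e) for e in strains]
    ys = [math.log(s - sigma0) for s in stresses]
    k = len(xs)
    mx = sum(xs) / k
    my = sum(ys) / k
    n = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))   # slope = hardening exponent
    B = math.exp(my - n * mx)                # intercept gives ln(B)
    return B, n

# Synthetic check: data generated with sigma0 = 90 MPa, B = 292 MPa, n = 0.31.
eps = [0.02, 0.05, 0.1, 0.2, 0.4]
sig = [90e6 + 292e6 * e**0.31 for e in eps]
B, n = fit_power_law_hardening(eps, sig, sigma0=90e6)  # recovers B and n
```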
Such a low value of the exponent is typical of cold-worked Cu.17 Decreasing the hardening exponent had the effect of increasing the VISAR trace slope, in addition to increasing the amount of curvature. To compensate for the increase in curvature, changes in the hardening constant C were made until good agreement was reached between the traces.

FIG. 3. High-speed digital images capturing the transient deformation profiles of the 83 m/s experiment, where times shown are referenced from impact. Separation between the flyer and specimen occurs between 135 and 169 μs, as shown in the inset.

FIG. 4. Free surface velocity trace determined experimentally at 83 m/s and through simulation utilizing the starting OFHC (dashed) and fitted (solid) (a) Johnson-Cook, (b) Zerilli-Armstrong, and (c) Steinberg-Guinan strength models.

Similarly, the yield strength determined through the same compression test was used to refine the Hall-Petch strength in the Zerilli-Armstrong model. Afterwards, the value of C3 in the ZA equation was increased to reduce the curvature in the simulated trace. Fitting of the Steinberg-Guinan model was more iterative. Though the best fit yielded values of the strength parameter Y, hardening constant β, and hardening exponent n that did not exactly match those determined by compression tests, the physical significance behind the modifications (increased yield strength, decreased hardening) was consistent.

In each case, fitting of the VISAR traces was achieved by reducing the number of free variables through experimentation, but it still remains an iterative process. The fits reached through this method are not necessarily exclusive and should not rule out the possibility of alternate parametric solutions to the experimental VISAR results.

2.
Transient and final deformation state simulations

Before comparing the simulated profiles to the transient deformation states captured by the high-speed digital camera, an initial comparison was conducted between the final sample dimensions of the 83 m/s experiment and those simulated using the three models. Table III lists the final crushed-end diameters, lengths, and associated errors using the original and fitted parameters for the Johnson-Cook, Zerilli-Armstrong, and Steinberg-Guinan models. It can be seen that with the use of original parameters, each model exhibits good prediction of the final impact-face diameter (0.6%-1.3% error), but a varying degree of error in specimen length (2.1% error for the JC model, 5.9% for the ZA model, and 4.9% for the SG model). Use of model parameters obtained following fitting of the free surface velocity traces to the VISAR data demonstrated substantial improvements in the predicted final length, with less than a 1% difference for each model, while errors in impact-face diameter showed a marginal increase (3.2% for JC, 2.6% for ZA, and 0.87% for SG). These results suggest that the VISAR trace collected from the sample back (free) end is most sensitive to overall changes in length and may be used to improve the final fit of the current models. The final simulated specimen geometries are shown along with the recovered sample images in Fig. 5(a). The comparisons reveal a fit within the resolution limit of the digital images.

The transient specimen lengths for each selected frame shown in Fig. 3 for the 83 m/s experiment, and their comparisons with simulated results using the fitted parameters, are shown in Table IV. In each case, the error observed with fitted parameters for the respective models throughout the deformation event is well below 1%. These results reemphasize that VISAR data collected from the free end capture the elastic and plastic disturbances responsible for length changes.

C.
Constitutive model validation of transient states

Comparisons between the photographed and simulated crushed-end profiles (using fitted parameters with each of the models) at several stages during deformation of the 83 m/s sample are shown in Figs. 5(b)-5(f). The Johnson-Cook empirical model, with the fitted constants, follows the crushed-end diameter fairly well and overall shows a better match to the experimental profiles in comparison to the ZA and SG models. Though it appears that the model has still not fully converged upon complete agreement with the recorded data, the results are almost entirely within the uncertainty in camera measurements. While the Johnson-Cook model performs best at the impact face, the fitted Steinberg-Guinan model does particularly well in matching the region immediately following the flared end. In contrast, the transient deformation profiles obtained using the Zerilli-Armstrong model with the fitted parameters reveal a much poorer agreement with the observed transient deformation profiles.

TABLE II. Constitutive model adjustments through VISAR matching.

            Johnson-Cook      Zerilli-Armstrong                   Steinberg-Guinan
Parameter   C        n        σG + kd^(-1/2) (MPa)   C3           Y (MPa)   β     n
Starting    0.025    0.31     65                     0.0028       120       36    0.45
Fitted      0.005    0.05     280                    0.004        320       5     0.3

TABLE III. Final specimen dimensions and associated error for the 83 m/s experiment.

                                    Expt.    Starting   Error (%)   Fitted    Error (%)
Johnson-Cook       Diameter (mm)    23.6     23.3       1.31        24.4      3.18
                   Length (mm)      70.3     68.8       2.12        70.1      0.30
Zerilli-Armstrong  Diameter (mm)    23.6     23.4       1.00        23.0      2.62
                   Length (mm)      70.3     66.2       5.90        70.0      0.43
Steinberg-Guinan   Diameter (mm)    23.6     23.776     0.60        23.428    0.87
                   Length (mm)      70.3     66.84      4.93        70.059    0.35
The model greatly underpredicts the instantaneous crushed-end diameter and exaggerates the extent of deformation in the latter portion of the specimen. The results reveal that while constitutive equations validated against free surface velocity measurements can predict the final deformed state of impacted samples within a few percent error, the transient deformation states are not equally well predicted.

D. Validation of transient and end states at high velocity

While the fits to transient profiles were improved by matching the experimentally recorded free surface velocity traces (most notably in refining the length), it remains to be seen whether the performance of the fitted models can be extended to a range of velocities, or rather is unique to the fitting velocity (83 m/s in this case). To investigate this,

FIG. 5. (Color online) Comparison of the experimental and simulated profiles using the fitted Johnson-Cook, Zerilli-Armstrong, and Steinberg-Guinan strength models at (a) the final deformed state and (b)-(f) transient states. The fitted Johnson-Cook model matches the experimental profile almost entirely within the limits of measurement uncertainty (pixel error), while the fitted Steinberg-Guinan model closely follows the latter rod region, only slightly underestimating the crushed-end radius. The fitted Zerilli-Armstrong model underpredicts the crushed-end radius at all stages of deformation and exaggerates the deformation in the region immediately following the impact face (arrow in (a)).

TABLE IV. Transient specimen lengths in mm.

Time (μs)   Expt.   JC fit (% error)   ZA fit (% error)   SG fit (% error)
20.23       73.4    73.3 (0.11)        73.4 (0.03)        73.5 (0.15)
32.95       72.5    72.5 (0.03)        72.5 (0.02)        72.7 (0.26)
45.67       71.7    71.7 (0.08)        71.7 (0.06)        71.9 (0.25)
58.40       71.1    71.0 (0.08)        71.1 (0.01)        71.3 (0.33)
71.12       70.6    70.5 (0.13)        70.5 (0.12)        70.7 (0.13)
the VISAR-fitted JC, ZA, and SG models for the 83 m/s experiment were employed to simulate the transient and final profiles of the similar experiment performed at 205 m/s. The higher velocity produces a markedly different final profile, characterized by enhanced localized deformation in the region following the flared end (barreling).

The results of simulating the 205 m/s experiment using the constants derived from the 83 m/s experiment reveal that both the Zerilli-Armstrong and Steinberg-Guinan fitted models are able to reproduce the entire specimen profile (every location along the rod length) to within 5% throughout the deformation event. Furthermore, the average error at any given frame is less than 2% for the fitted SG model and less than 3% for the ZA model. The fitted Johnson-Cook model, on the other hand, does not predict the formation of a barreled region and suffers from considerably larger errors. Figure 6 shows a comparison of the transient frames at early times (up to 60 μs after impact) and the final deformed state based on the SG model, which shows the best match. Note the development of a barreled region closely, but not perfectly, matching that observed experimentally.

It should be restressed that it is not the intent of this work to determine the Johnson-Cook, Zerilli-Armstrong, or Steinberg-Guinan constants for OFE copper, but rather to present an instrumented anvil-on-rod impact experimental method of validating these and similar models against the entire deformation process. The development of such path-linked models is a critical step in understanding the behavior of even more complex material systems, whereby dynamic mechanical testing may be complicated by fracture or other physical/chemical changes.

IV.
CONCLUSIONS

A method of model validation using a combination of high-speed camera images of transient deformation states, velocity interferometry data, and finite-element simulations has been presented for the instrumented Taylor anvil-on-rod impact test performed at 83 and 205 m/s. Matching the specimen rear free surface velocity through adjustment of several parameters for the Johnson-Cook, Zerilli-Armstrong, and Steinberg-Guinan constitutive models resulted in accurate prediction of the final deformed state of the impacted samples at 83 m/s, as well as the transient specimen lengths, though examination of the crushed-end transient profiles revealed a decent fit accomplished only by the Johnson-Cook model. For the higher-velocity (205 m/s) impact experiment, the ZA and SG models reproduce the entire specimen profile to within 5% throughout the deformation event, while the JC model generates considerably larger errors.

In consideration of the respective constitutive models, the Johnson-Cook model is not path linked as is: adjusting the parameters does not make it applicable to higher velocities and, correspondingly, higher strain rates. The Zerilli-Armstrong model does not match the transient profiles at 83 m/s, but matches both the transient and final profiles at the higher velocity. The Steinberg-Guinan model performs satisfactorily at low velocity, and very well at high velocity. The results illustrate that instrumented anvil-on-rod impact tests employing the combined use of a high-speed camera to capture transient deformation states and velocity interferometry to monitor elastic-plastic wave propagation are necessary to ensure proper validation of path-linked constitutive models.

FIG. 6. (Color online) Transient and final deformed profiles of the 205 m/s experiment simulated using the fitted Steinberg-Guinan model, as compared to high-speed camera data and the recovered specimen.
The simulated transient profiles (white) match the experimental images to within 5% at early stages of deformation and do well to predict the barreled zone at the final state.

1 G. R. Johnson and W. H. Cook, Proceedings of the Seventh International Symposium on Ballistics, 1983, The Netherlands (Am. Def. Prep. Assoc. (ADPA)), pp. 541-547.
2 F. J. Zerilli and R. W. Armstrong, J. Appl. Phys. 61, 1816 (1987).
3 P. S. Follansbee and U. F. Kocks, Acta Metall. 36, 82 (1988).
4 D. J. Steinberg, S. G. Cochran, and M. W. Guinan, J. Appl. Phys. 51, 1498 (1980).
5 G. Taylor, Proc. R. Soc. London, Ser. A 194, 289 (1948).
6 J. B. Hawkyard, Int. J. Mech. Sci. 11, 313 (1969).
7 M. L. Wilkins and M. W. Guinan, J. Appl. Phys. 44, 1200 (1973).
8 W. H. Gust, J. Appl. Phys. 53, 3566 (1982).
9 L. M. Barker and R. E. Hollenbach, J. Appl. Phys. 43, 4669 (1972).
10 I. Rohr, H. Nahme, and K. Thoma, J. Phys. IV 110, 513 (2003).
11 I. Rohr, H. Nahme, and K. Thoma, Int. J. Impact Eng. 31, 401 (2005).
12 B. D. Goldthorpe, A. L. Butler, and P. Church, J. Phys. IV 4, 471 (1994).
13 P. J. Gould and B. D. Goldthorpe, J. Phys. IV 10, 39 (2000).
14 S. Frechard, A. Lichtenberger, F. Rondot, N. Faderl, A. Redjaimia, and M. Adoum, J. Phys. IV 110, 9 (2003).
15 S. M. Walley, P. D. Church, R. Townsley, and J. E. Field, J. Phys. IV 10, 69 (2000).
16 Autodyn 2D, Version 6.0, Century Dynamics.
17 H. Conrad, High-Strength Materials (Wiley, New York, 1965), p. 884.
New Reflection Transformation Imaging Methods for Rock Art and Multiple-Viewpoint Display

Mark Mudge, Tom Malzbender*, Carla Schroer, Marlin Lum (* Hewlett Packard Labs)
VAST/CIPA 2006, Nicosia, Cyprus, November 4, 2006

Topics
• What is RTI?
• How RTI works
• Fundamental Understandings
• Existing RTI Capture Systems
• New Method, Highlight RTI
• Multi-view RTI
• Future Work

Reflection Transformation Imaging (RTI)
• Term coined by our coauthor Tom Malzbender and Dan Gelb of HP Labs, inventors of Polynomial Texture Mapping (PTMs)
• Stores surface reflection information for each image pixel
• 2D images with true 3D information
• Information is image based, not requiring geometry in Cartesian space
• No data loss from shadows and specular highlights
• Mathematical explanation available at: hpl.hp.com/research/ptm

RTI Basics
• Fixed camera position
• Multiple images illuminated from different known light positions
• Images synthesized into a single RTI image
• RTI captures "real world" reflectance characteristics of the subject
• Reflectance information generates perception of shape
• Enhancement discloses additional information
Ceramic Stamping, Archaeological Research Collection, University of Southern California

How RTIs work

Fundamental Understandings
• Digital technology for CH must be:
– Adapted to the cultural heritage (CH) community
– Adopted by the CH community
– The result of early and on-going collaboration and evaluation with CH professionals
– Freely available to the CH community

Fundamental Understandings
• Based on digital photography because those skills are already widespread
• Compatible with existing skill sets and working cultures
• Possible to automatically process the photos
•
No need for help from digital imaging experts during empirical data capture

Fundamental Understandings
• New Standards of Best Practice through:
– New tools and methods
– Worldwide communication
– Pilot projects and demonstrations
– Automatic "Empirical Provenance"

Empirical Provenance
• Access to process history and raw data
• When included with digital representations of cultural heritage materials:
– Permits the qualitative evaluation of digital information
– Increases the acceptance of online information by scholars, educators, and the public
– Promotes collaborative, distributed scholarship
• Now being mapped to the CIDOC/CRM

Known Light Position RTI Capture Techniques

HP's RTI Capture Systems
Photo: courtesy HP Labs

CHI's Manual, Low Cost, Template System

CHI's Automatic RTI Capture Dome

ISTI/CNR PISA Quality Assessment

New Method: Highlight RTI
No prior knowledge of light position needed

Highlight RTI in the Field
Rocha 2 da Ribeira de Piscos, Parque Arqueologico Vale do Coa, 5 June 2006

Highlight RTI in the Laboratory
Instituto Portugues de Arqueologica, Centro Nacional de Arte Rupestre, 7 June 2006

Equipment Required
• SLR digital camera
• Light-reducing neutral density filters
• Tripods
• Black ball on length- and angle-adjustable boom
• Measuring tape
• Retractable surveyor's plumb-bob string (bob removed)
• Light source
– 1 to 1/32 power adjustable 320 watt-second (minimum) flash, with battery pack and radio flash triggers, OR
– intensity-adjustable continuous light (generator or indoors)
• Laptop computer
– remote camera control software
– image viewing software (Photoshop, Irfanview, etc.)
– black cloth

Field Considerations
• Ambient light
– Use neutral density filters to block daylight
• Light radius management
– Illumination intensity central to accurate normal calculation
– Low dynamic range in digital cameras
– All the available histogram range used by the incident light angle range
Grazing incident angle / 'High Noon' incident angle

Estimating Lighting Direction From Highlight Location
• V is the view vector pointing to the camera
• N is the surface normal
• L is the unknown, normalized light vector we solve for
Prior use of spheres to collect lighting direction: Masseulus '02, Einarsson '04

Results: Goat Petroglyph
68 cm by 46 cm area; normal every 166 microns, or 36 samples per sq. mm

Results: Portable Rock Art
'Portable Art', Magdalenian period (14,000-12,000 BP); stone is 18 cm wide; normal captured every 65.7 microns, or 231 per sq. mm; inset petroglyph is only 3.1 cm in length

Multi-View RTI
• Multiple RTIs are captured from different viewpoints around the subject
• These viewpoints are integrated in an interactive viewer
• Image-based 3D representation of objects from multiple viewing angles without 3D geometric models
First Multi-view RTI: Collection Archéologie du Musée de l'Hospice du Grand St. Bernard, Bronze Age torque, 1600-2000 BCE

Highlight RTI Future Work
• Automatically detect highlights on the ball
• Make capture process more efficient
• Explore stereo camera setup
• Use 2 black spheres for precise light positions
• Explore non-photorealistic rendering
Legend Rock, Wyoming Petroglyph Site, August 2006

Multi-View RTI Future Work:
• Recent CHI funding from the U.S.
Institute of Museum and Library Services (IMLS) National Leadership Grant will:
– Create easy-to-use and cost-effective Multi-View RTI capture techniques
• Require only object placement and digital photography skill sets
• Provide an automatic RTI processing pipeline
• Compatible with existing professional cultures
– Automatic generation of empirical provenance

3D Range Geometry Future Work:
• New research suggests possible automatic extraction of full 3D geometry from multi-view RTI image sequences
• Geometry extraction enables:
– Viewing from any direction
– Measurement
– Re-lighting from any direction or source
– Reduction in the number of required RTI capture angles
– Physical reconstruction, animation, and analysis of the subject

Acknowledgements
Thanks to:
• Parque Arqueologico do Vale do Coa (PAVC)
• Centro Nacional de Arte Rupestre (CNART)
• Universidade do Minho
• The Congregation of the Grand St. Bernard
• Szymon Rusinkiewicz – Princeton
• James Davis – UC Santa Cruz
• U.S. Bureau of Land Management – Denver Technology Center, Archaeologist Mike Bies
• CHI supporters and the CHI Board of Directors

Contact: mark@c-h-i.org, carla@c-h-i.org, www.c-h-i.org
Copyright Cultural Heritage Imaging. All Rights Reserved.

Studia Culturae: Issue 4 (34): Photo: Ya. Toister. Pp. 148-155.

YA. TOISTER
PhD in Philosophy, Senior Lecturer and Head of the «Cultural Studies» programme, Shenkar College of Engineering, Design and Art

WHY BE A PHOTOGRAPHIC IMAGE?

Many contemporary imaging systems possess a striking capability: their output need not consist of images alone. The data these systems collect and generate can be presented as acoustic signals or as text. And only a portion of those images takes the form of photographs. Both of these phenomena grow out of our unfounded preference for images in general and out of the intelligibility of photographic images in particular.
This should be regarded as a unique epistemic advantage. Arguably, this advantage has largely persisted over the last several decades. This is especially surprising given the widespread anxiety that photography has «ended» and about what will come, or has already come, in its place. Confusingly, this anxiety has spread the term «analogue photography», a term that is not quite accurate. I propose to revise the definitions of the terms «analogue» and «digital» with respect to photography, in order to develop an alternative understanding of what a photographic image is. Keywords: image, photograph, cognitive accessibility, epistemic advantage, analogue photography, digital photography, information.

YA. TOISTER
PhD of Philosophy, Senior Lecturer and Director of the Cultural Studies programme
Shenkar College of Engineering, Design and Art, Israel

WHY BE A PHOTOGRAPHIC IMAGE?

Many contemporary imaging systems share a striking quality: their output need not be limited to images. Instead the raw data such systems collect and generate can just as easily appear as acoustic signals or as text. Furthermore, of those images an unexpected proportion bears the familiar form of the photograph. These two phenomena, I argue, stem from an unconditioned bias towards images, on our behalf, and from the cognitive accessibility of photographic images in particular. This should be seen as a unique epistemic advantage. Arguably, this advantage has not diminished much in the course of the last decades. This is especially surprising given the wide anxiety about the end of photography and what will, or has, come after it. Confusingly, this anxiety has given rise to the term analogue photography, which is, I argue, somewhat misguided. Accordingly I propose revised definitions for the terms «analogue» and «digital» in respect to photography.
These facilitate an alternative understanding of what photographic images are.

Keywords: image, photograph, cognitive accessibility, epistemic advantage, analogue photography, digital photography, information.

WHY IMAGES?

The very words «imaging technologies» seem to refer to the production of images. Those images, as is well known, may also be of objects that can never be seen in humanly inhabited existence, objects that have never been bathed in any kind of light. This is the case with systems designed to «image» the inner organs of the human body, or galaxies in outer space. There are other examples – in fields like geology, archaeology, agriculture and law enforcement, to name but a few. Moreover, there are also cases when images may even be generated of 'objects' that simply do not occupy space and time at all – at least not in the ordinary sense of space and time as these phenomena have been popularly understood since the European Enlightenment. Yet a striking feature of many imaging technologies today is that their output does not need to be in the form of an image at all. This is certainly the case with some medical imaging technologies like MRI and fMRI and with astronomical imaging systems like radio astronomy. The raw data used to create images in these technologies is often captured numerically – that is, not by an optical apparatus. In a brain MRI the radio frequency signals that brain nuclei emit (after they have been subjected to high magnetization levels) are measured with a coil and mathematically processed with a computer. Even in the case of NASA's Hubble Ultra-Deep Field (HUDF) and eXtreme Deep Field (XDF) «telescopes», the full range of electromagnetic radiation from ultraviolet to infra-red is traced mathematically, not optically, in order to computationally create beautiful images.
Thus we could say that, once these images are in existence, the only thing undoubtedly «optical» about them is that they are transmitted as optical signals through optical communication networks. It should be noted that raw data is, by default, completely indifferent towards its content and towards the 'sensory field' within which it will later appear. In other words there is nothing 'natural' in an MRI image or a Hubble image, as these technologies can offer a choice between various data display formats. An MRI scan can be outputted as acoustic signals, as encrypted text or simply as an indecipherably long numerical chain. The same is true for the 'observations' made by various astronomical instruments. One important question then deserves to be asked: «Why are images the predominant output form of imaging technologies?» Perhaps the lure of images is, in some cases, an end in itself. After all, the visual spectacle in images of distant galaxies and nebulas is undeniable. Perhaps there is something in us which is deeply committed to vision. Committed even to the extent that we feel that observation must be intimately connected to visuality. Nowhere is this more evident than in the urge to think of an ultrasound «image» as the first «sighting» of an unborn baby. Thus, even when it comes to numerical observation instruments, we maintain a bias that precludes the possibility that certain data clusters will appear as anything other than an image. The point I am trying to make here is simple – the preference for images may in fact be preventing us from fully understanding the unique form of knowledge that some images offer us. Clearly, visual representations (including not just images but also diagrams, maps, etc.) supply us with large amounts of complex information in ways that are more easily comprehensible to our human processing capabilities than raw data.
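The indifference of raw data to its display format, described above, can be illustrated with a minimal sketch. The numeric field and the three mappings below are invented for illustration; nothing here models a real MRI or telescope pipeline.

```python
# The same raw numeric field rendered three ways: as an ASCII "image",
# as audio tone frequencies, and as a bare numerical chain.
# (Illustrative data only -- not real instrument output.)
data = [
    [0.1, 0.3, 0.9, 0.3, 0.1],
    [0.3, 0.9, 1.0, 0.9, 0.3],
    [0.1, 0.3, 0.9, 0.3, 0.1],
]

RAMP = " .:*#"  # characters of increasing "brightness"

def as_image(field):
    """Display the field as an ASCII picture."""
    top = len(RAMP) - 1
    return "\n".join(
        "".join(RAMP[min(int(v * top + 0.5), top)] for v in row)
        for row in field
    )

def as_audio(field):
    """Display the field as tone frequencies in Hz (200-2000)."""
    return [round(200 + v * 1800) for row in field for v in row]

def as_text(field):
    """Display the field as an 'indecipherably long numerical chain'."""
    return " ".join(f"{v:.2f}" for row in field for v in row)

print(as_image(data))
print(as_audio(data))
print(as_text(data))
```

One data cluster, three sensory renderings; the choice of the image over the other two is ours, not the data's.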
This, however, does not explain another question: «Why do such images so often appear photograph-like? Why are they taken to be photographic images?»

WHY PHOTOGRAPHS?

Two separate explanations can account for the ubiquity of photograph-like images: historical preference and cognitive accessibility. The former explanation is probably straightforward – in the last 180 years we have grown increasingly accustomed to 'seeing' by means of photographs. We recognize no other way of understanding ourselves; we know no better way to define the world around us. Photographs are a dominant form throughout our culture. The latter explanation is slightly more complicated. Photographs are so cognitively accessible because of three separate qualities. The first reason is granularity. Photograph-like images usually maintain consistency of granularity from object through instrument to depiction. Granularity is the extent to which a system can be broken down into small parts, either by the system itself or by its observation or description. It is the extent to which a larger entity is subdivided. For example, if a room can be broken into units of centimetres then it is a system with fine granularity. If, for whatever reason, this room can only be broken into units of metres then it is a system with coarse granularity. (Granularity is always relative to some framework or purpose; in this example the relative framework is of course human activity.) Secondly, the data structure of photographs can preserve the visual properties of objects in a consistent way. In other words it 'breaks' those objects efficiently, it then reassembles them within predetermined, finite error bounds, and it resembles them more than do other forms of data.¹ The third and perhaps most important reason for the ubiquity of photograph-like images concerns the mode of data preservation and presentation.
The implication is that consistency of granularity and structure preservation come unencumbered. The photographic image itself preserves only certain forms of information but not other forms. Most notably it does not preserve egocentric information – that is, information about the object's locale and chronology (Cohen and Meskin 2004). This information may at times be preserved by other means – previously as part of the title or label and now most often in what we call metadata (Rubinstein 2014). Either way, such means, old or new, are never a part of the photographic image itself. Thus, the cognitive accessibility of photographs provides a unique epistemic structure, which I call 'select information with no strings attached'. Such an epistemic structure is, in some cases, an epistemic advantage.

ANALOGUE AND DIGITAL

Arguably, this epistemic advantage has not diminished much in the course of the last decades. This is especially surprising given the omnipresent anxiety in discourse about photography during recent decades. Think, for example, how often expressions such as «Post-Photography» and «After Photography» have appeared in exhibition and book titles. (The first such instance that I am acquainted with was as early as 1992 in William J. Mitchell's book The Reconfigured Eye [Mitchell, 1992]. There have since been many subsequent publication titles expressing a similar sentiment. For example: the exhibition and catalogue Photography after Photography [Hubertus von Amelunxen et al., 1996], the book After Photography [Fred Ritchin, 2009], and the SFMOMA symposium «Is Photography Over?» [SFMOMA, 2010]. The most recent contribution to this list is probably Joan Fontcuberta's collection of essays titled Pandora's Camera: Photogr@phy After Photography [Fontcuberta, 2014].)
Curiously, such expressions of concern have not appeared in reference to the processes and technologies that I previously described but rather in reference to a much simpler technology – the digital sensor and the digital camera. Purportedly it is these that have been responsible for 'the digital turn' in photography, not optical communication networks, Moore's law or anything of that sort. I would therefore like to briefly refer to the confusing terms «analogue» and «digital», which are commonly used in reference to photography. In fact they are used so freely that, at times, their use is simply misguided. The term analogue of course derives from the same ancestor as the English word analogy (the Greek word analogia). Thus, in philosophy and in literature analogue means a condition of shared elements. This enabled Roland Barthes, for example, to argue that photography is a procedure of analogical representation (Barthes 1977). In mathematics, electrical engineering or signal processing the term analogue describes a non-quantized (continuous) signal, only some of whose features correspond to features in another signal. Thus, contrary to common belief, analogue information does not mean information accurately copied but rather that particular type of information that can never be accurately copied. In fact, it is within this meaning and only within this meaning that use of this term in the context of photography can be warranted. Interestingly, it was not until the 1990s that the expression «analogue photography» made its first appearance in literature. This happened at almost the same time as, and as a direct result of, the emergence of the expression «digital photography». The term «digital», on the other hand, comes from the word digit. Digital marks the creation of discrete units of things.

¹ The first two qualities are defined by Delehanty (2010); the third quality is a condition I posit.
A digital signal is a non-continuous, quantized signal. A digital instrument simply takes something that is undivided and divides it (or presents it as divided). Thus, when we discuss digital representation we refer to a system of generating divisions. True, digital systems usually require multiplicity to generate value (and that is what we call calculation) but «digital» in and of itself does not mean anything other than divisions. A simple yet useful way to visualize the difference between digital and analogue is to think of fuel gauges and fuel warning signs. The fuel gauge in most cars, with its needle going from full to ¾ to ½ to ¼, with infinite stops in between, is an analogue indication instrument. On the other hand, the fuel warning sign, which has only two distinct modes of display – red light off and red light on – is a digital indication instrument. Put differently, «analogue» and «digital» can simply be distinct manifestations of the same condition – for example lack of fuel in one's car. This is why it is wrong to understand them as definitions that are mutually exclusive. Seen in this way it should become obvious that analogue photography did not go anywhere when the digital camera was invented in 1975, or when it hit consumer markets in the mid-1990s. Similarly, digital photography did not emerge from 1970s sensor technology, nor is it dependent on it. Accordingly, we could even say that in a certain way photography has always been digital – I see no other way to understand the faculty of a silver halide (the sensitive component in photographic emulsion). A silver halide, it should be stated, does not and cannot turn grey. It can only turn black or remain unchanged. Thus, if the strict sense of the term digital means non-continuous, discrete, then the silver halide is a digital instrument.
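The black-or-unchanged behaviour described above can be sketched with a toy model. This is an illustrative assumption, not emulsion chemistry: each 'halide' is strictly binary, and grey exists only as the ratio of blackened halides within a patch.

```python
import random

def expose(intensity, n_halides=10_000, seed=0):
    """Expose a patch of binary 'halides' to light of a given intensity
    in [0, 1]. Each halide either turns black (1) or stays unchanged (0);
    the perceived grey level is just the fraction that turned black."""
    rng = random.Random(seed)
    blackened = sum(1 for _ in range(n_halides) if rng.random() < intensity)
    return blackened / n_halides

# No single halide is ever grey, yet the patch as a whole is:
for light in (0.0, 0.25, 0.5, 1.0):
    print(light, expose(light))
```

The instrument is digital in the strict sense (each element is discrete), while the depiction it yields reads as continuous tone.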
To the extent that the colour grey exists on a given section of photographic emulsion it is there simply because silver halides are very small. In other words it is only the combination of black and «white» halides (in varying ratios) that creates grey emulsion. This is just one example but there are others. Perhaps it is time to drop the use of the term analogue in reference to «traditional» photography. The term «digital» can stay, even though we can find better terms – for example coded, algorithmic or simply mathematical.

CONCLUSION

Accordingly, to be an image today does not necessarily entail being tied to an image-carrier – a material thing in the world.¹ This is especially noticeable whenever we think about photographic images. Since photography completed its migration into the computerized habitat, photographs no longer require material supports. But photographs have not only untethered themselves from their 'photographic image-carriers', they are also becoming increasingly removed from the technologies and sciences from which they emerged. In other words, it is not only chemistry that has become extinct from the ecology of photography; mechanics and traditional optics may soon follow. This is why the question of «why be a photographic image?» matters today more than it did before. What my brief comments have attempted to demonstrate is that it is increasingly more problematic to think of photography as a technology. It is complex not only because photography seems to be everywhere and nowhere at once. It is difficult because we are now realizing that perhaps photography was never the technology that we thought it to be. Perhaps it was never a technology but rather a family of technologies² that has always been integrated with, embedded in and part of other families and groups. Be that as it may, it is futile to expect photographs to adhere to definitions that have previously been put forth for the medium.
Much like the output of other media forms, today's photography cannot be measured and subjected to a quantitative analysis subsequent to its production processes. The photographic images we are working with today, and the ones that will be available to us in the near future, will not comply with theoretical accounts that attempt to position them as the epitome of indexical representation. They will only be understood within a theoretical context that describes photography as an extreme form of mathematical abstraction. Such descriptions are rare, but the one that resonates most powerfully is Vilém Flusser's assertion that any attempt to distinguish the photograph from other forms of data is simply a lost cause.³ If the photograph is anything, then it is only an image with fine granularity.

¹ This term is taken from Lambert Wiesing (2011).
² This idea is adapted from Patrick Maynard (1997).

Acknowledgments: I would like to thank Ross Gibson, Romi Mikulinsky, Daniel Rubinstein and Dawn M. Wilson for comments on earlier versions of this text.

REFERENCES
1. Barthes, R. Rhetoric of the image. In: Heath, Stephen (ed.) Image, Music, Text (trans. Stephen Heath). New York: Hill and Wang, 1977, pp. 32–51.
2. Cohen, J., Meskin, A. On the epistemic value of photographs. The Journal of Aesthetics and Art Criticism, 62:2, 2004, pp. 197–210.
3. Delehanty, M. 'Why images?'. Medicine Studies, 2:3, 2010, pp. 161–73.
4. Flusser, V. Into the Universe of Technical Images (trans. N. A. Roth). Minneapolis: University of Minnesota Press, 2011.
5. Fontcuberta, J. Pandora's Camera: Photogr@phy after Photography. London: Mack, 2014.
6. Maynard, P. The Engine of Visualization: Thinking Through Photography. Ithaca, NY: Cornell Univ. Press, 1997.
7. Mitchell, W. J. The Reconfigured Eye: Visual Truth in the Post-Photographic Era. Cambridge, MA: The MIT Press, 1994.
8. Ritchin, F. After Photography. New York: W.W. Norton & Company, 2009.
9. Rubinstein, D.
The digital image. Mafte'akh: Lexical Review of Political Thought, 6:1, 2014, pp. 1–21.
10. SFMOMA. «Is Photography Over?», 2010. http://www.sfmoma.org/about/research_projects/research_projects_photography_over, accessed 1 June 2015.
11. von Amelunxen, H., Iglhaut, S., Rotzer, F. (eds). Photography after Photography: Memory and Representation in the Digital Age. Munich: G+B Arts, 1996.
12. Wiesing, L. Pause of participation. On the function of artificial presence. Research in Phenomenology, 41:2, 2011, pp. 238–52.

³ What do I actually mean when I say a photograph of a house depicts that house, and a computer image of an airplane yet to be built is a model? … Any way I formulate the difference between depiction and model, I come to grief… It can therefore be said of a photographer that he has made a model of a house in the same sense that the computer operator has made a model of a virtual airplane. (Flusser 2011: 42–43)

work_mmwes5pkanegrp6nzykjeamtte ---- Development of Uniform Protocol for Alopecia Areata Clinical Trials

James A. Solomon 1,2,3

Developing a successful treatment for alopecia areata (AA) clearly has not been at the forefront of the agenda for new drug/device development among the pharmaceutical and medical device industry. The National Alopecia Areata Foundation (NAAF), a patient advocacy group, initiated a plan to facilitate and drive clinical research toward finding safe and efficacious treatments for AA. As such, Alopecia Areata Uniform Protocols for clinical trials to test new treatments for AA were developed. The design of the uniform protocol is to accomplish the development of a plug-and-play template as well as to provide a framework wherein data from studies utilizing the uniform protocol can be compared through consistency of inclusions/exclusions, safety, and outcome assessment measures. A core uniform protocol was developed for use by pharmaceutical companies in testing proof of concept for investigational products to treat AA.
The core protocol includes a standardized title, informed consent, inclusion/exclusion criteria, disease outcome assessments, and safety assessments. The statistical methodology to assess successful outcomes will also be standardized. The protocol and the informed consent form have been approved in concept by Liberty IRB and are ready to present to pharmaceutical companies.

The Journal of Investigative Dermatology Symposium (2015) 17, 63–66; doi:10.1038/jidsymp.2015.45

INTRODUCTION

Developing a successful treatment for alopecia areata (AA) clearly has not been at the forefront of the agenda for new drug/device development among the pharmaceutical and medical device industry. Currently, no Federal Food and Drug Administration (FDA)-approved therapy exists (Shapiro et al., 1997; Lachgar et al., 1998; Wiseman et al., 2001; Olsen et al., 2004; Blume-Peytavi et al., 2011; Gilhar et al., 2012). The National Alopecia Areata Foundation (NAAF), a patient advocacy group, initiated a plan to facilitate and drive clinical research toward finding safe and efficacious treatments for AA. NAAF, following FDA guidelines (FDA, 2010), obtained support in principle from the FDA to develop Alopecia Areata Uniform Protocols for clinical trials to test new treatments for AA. The design of the uniform protocol is meant to accomplish two overall goals: firstly, the uniform protocol is to be a plug-and-play template to entice biopharmaceutical and medical device companies to test medications and devices on AA. Secondly, the AAUP is to be a framework wherein data from clinical trials for the treatment of AA can be easily compared through consistency of inclusions/exclusions, safety, and outcome assessment measures.

RESULTS

A core protocol for a pharmaceutical investigative product proof of concept was developed and approved by the SAC.
The title accepted is "Uniform Core Protocol for Phase (II/III) of a Double-Blind, Vehicle-Controlled, Randomized, Multi-Center Study to Evaluate the Safety and Therapeutic Efficacy of <ENTER IP> Treatment of Adult Patients with Moderate to Severe Scalp Alopecia Areata."

The calculation of power follows d1:d2:v (Treatment Dose 1 : Treatment Dose 2 : Vehicle). For a given percentage difference between treatment Dose 1 (d1), Dose 2 (d2), and vehicle (veh)-treated subjects on the primary efficacy end point, treatment d1, d2, and veh-treated subjects will be required to provide p% power with a two-sided test at a significance level of 0.05. Screened: d1 + d2 + v + s. Enrolled (randomized): (d1 + d2 + v + s) × %. Planned complete: d1 + d2 + v (Dell et al., 2002).

Primary objective: The primary objective is to assess the safety and therapeutic efficacy of a 24-week regimen of administration of investigational products (IP) with x treatment frequency, in adult subjects with chronic moderate-to-severe scalp AA.

Secondary objectives include: assessment of the durability of the response over a 12-week post-treatment period of observation; the subjects' perceptions of their scalp disease with treatment, and upon withdrawal of treatment, in relationship to baseline; the change from baseline of the number and width of terminal hairs in the target site using digital photography; and IP-specific changes in the biomarker associated with the IP.

Inclusion criteria include: subjects >18 years of age with a diagnosis of scalp AA and at least 25% scalp hair loss due to AA, of at least 6 months' and up to 2 years' duration. All subjects taking thyroid medication or hormonal therapy must be on a stable dose for 6 months and maintain it throughout the study. Female subjects who are pregnant, nursing, or planning pregnancy during the study period are restricted from participation.

Exclusion criteria include: less than 25% scalp AA involvement; any co-existent androgenetic alopecia; prior treatment with the IP; active scalp inflammation; history of systemic or cutaneous medical, or psychiatric, disease that would put the patient at risk or interfere with assessments.

Disease outcome assessments include: Severity of Alopecia Tool (SALT) score (Figure 2); digital photography; IP-related biomarkers; subject-oriented AA assessments (Figure 3); Skindex; and the Mendoza AA QoL Burden of Disease Questionnaire.

Safety assessments include: adverse event history, changes in concomitant medications and/or diseases, physical exam, vital signs, electrocardiogram, chest X-ray, safety blood and urine labs, and IP-specific safety labs.

REVIEW
1 Ameriderm Research, Ormond Beach, Florida, USA; 2 Department of Dermatology, University of Central Florida College of Medicine, Orlando, Florida, USA; 3 Department of Medicine, University of Illinois College of Medicine, Urbana, Illinois, USA
Correspondence: James A. Solomon, Ameriderm Research, 725 West Granada Boulevard, Suite 44, Ormond Beach, Florida 32174, USA. E-mail: drjsolomon@ameridermresearch.com
Abbreviations: AA, Alopecia areata; AT, Alopecia totalis; AU, Alopecia universalis; FDA, Federal Food and Drug Administration (USA); ICF, Informed Consent Form; IP, Investigative Product; NAAF, National Alopecia Areata Foundation; SAC, Scientific Advisory Committee; SALT, Severity of Alopecia Tool
© 2015 The Society for Investigative Dermatology www.jidonline.org
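The SALT computation referenced above (per-quadrant hair-loss percentages weighted by each quadrant's share of scalp area: left 18%, right 18%, top 40%, back 24%, per Olsen et al., 2004) can be sketched as follows; the patient values are invented for illustration.

```python
# SALT score: weighted sum of per-quadrant hair-loss percentages.
SALT_WEIGHTS = {"left": 0.18, "right": 0.18, "top": 0.40, "back": 0.24}

def salt_score(percent_loss):
    """percent_loss maps each scalp quadrant to its % hair loss (0-100).
    Returns the whole-scalp loss percentage (0-100)."""
    return sum(SALT_WEIGHTS[q] * percent_loss[q] for q in SALT_WEIGHTS)

# Illustrative patient: heavy loss on top, moderate elsewhere.
score = salt_score({"left": 30, "right": 30, "top": 80, "back": 20})
print(score)  # 47.6 -> falls in the 25-95% "patchy" stratum
```

Because the weights sum to 1.0, uniform loss in every quadrant yields that same percentage as the whole-scalp score.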
Analysis: Analyses will be performed according to standard statistical methods, chosen based on knowledge of the investigative product or device being tested. A minimal significance level of P<0.05, two-tailed, will be maintained. Where possible, one should justify the number of subjects with a preliminary power calculation, as published by Dell et al. (2002) for calculating sample size, using the formula n = 1 + 2C(s/d)², where n = number to enroll, C is a constant dependent on desired power and significance, s is the SD, and d is the difference to detect.

Figure 1. Map of core protocol for pharmaceutical investigative drug. The pharmaceutical company would edit this protocol with input specific to the investigative drug. (Panel labels: topical, systemic, biological and immunological arms, each covering AA, AU/AT and other variants, feeding into the core pharmaceutical protocol.)

Figure 2. Severity of Alopecia Tool (SALT) score methodology (reprinted from Olsen et al., 2004, with permission from the Journal of the American Academy of Dermatology). Scalp quadrants and multipliers: left side 18% (×0.18), right side 18% (×0.18), top 40% (×0.40), back 24% (×0.24); the score is the sum across quadrants of percentage involved × multiplier.

In summary, the development of the AA core uniform protocols and their modules hopefully will encourage and facilitate the testing of new drugs and devices in patients with AA. Moreover, the uniformity of the process will allow for comparisons between and among the treatments studied. Thus, by having uniform protocols available for industry, more treatments can be developed and tested, with an improved ability to compare the efficacy of one treatment or another for each and all of the AA variants.
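The Dell et al. (2002) sample-size formula quoted above, n = 1 + 2C(s/d)², can be sketched directly. The C values below are the commonly tabulated constants for a two-sided significance level of 0.05 (an assumption to be checked against the cited paper), and the SD and difference are invented example inputs.

```python
import math

# n = 1 + 2*C*(s/d)^2, per Dell et al. (2002).
# C depends on the desired power at two-sided alpha = 0.05
# (assumed tabulated values; confirm against the cited reference).
C_TABLE = {0.80: 7.85, 0.90: 10.51, 0.95: 13.00}

def n_per_group(sd, diff, power=0.80):
    """Subjects per group needed to detect a difference `diff`
    given a standard deviation `sd` at the chosen power."""
    c = C_TABLE[power]
    return math.ceil(1 + 2 * c * (sd / diff) ** 2)

# e.g. SD of 20 SALT points, 15-point treatment-vs-vehicle difference:
print(n_per_group(sd=20, diff=15, power=0.80))  # -> 29 per group
```

Rounding up with `ceil` keeps the achieved power at or above the target rather than just below it.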
MATERIALS AND METHODS

Members of the NAAF Scientific Advisory Committee (SAC) provided historical protocols from a variety of previous AA investigations. The SAC concurred with an overall plan to develop a uniform protocol made up of two core uniform protocols covering all forms of AA: one for pharmaceutical agents and another for medical devices. Since there are currently no FDA-approved treatments for AA, the initial core protocols were developed for proof-of-concept investigations. The core has a standardized title, power of study, informed consent form (ICF), inclusion/exclusion criteria set, outcome assessments, and safety assessments (Olsen et al., 2004; Blume-Peytavi et al., 2011; Olsen, 2011; Gilhar et al., 2012). In light of the variable responses that may occur between patients with patchy AA and those with alopecia totalis (AT) and/or alopecia universalis (AU), subjects with AT (>95% scalp) or AU (>95% scalp as well as diffuse non-scalp involvement) are included, but are evaluated as separate groups from those with patchy (25–95%) alopecia. From these core protocols, modules can be developed for specific variants in mode of application (topical, oral, injected), frequency of application (twice daily, monthly), pharmacological/immunological concepts, and device modes of administration (laser, non-laser). A pharmaceutical or device company may then modify these modules with factors specific to the investigative product or device (Figure 1). Corresponding to this core protocol, an ICF and source documents were also developed. The protocol and ICF were presented to Liberty IRB, DeLand, FL, and approved in concept.

CONFLICT OF INTEREST
A grant from NAAF (a patient advocacy group) went to the author's employer to develop the Uniform Protocol. The authors declare no conflict of interest.

ACKNOWLEDGMENTS
This work was funded by a grant from NAAF to the author's institution.
Funding for the Summit and the publication of this supplement was provided by the National Alopecia Areata Foundation and was made possible (in part) by a grant (R13AR067088-01) from the National Institute of Arthritis and Musculoskeletal and Skin Diseases, and all co-funding support was provided by the National Center for Advancing Translational Sciences.

DISCLAIMER
The views expressed in written conference materials or publications and by speakers and moderators do not necessarily reflect the official policies of the Department of Health and Human Services; nor does mention of trade names, commercial practices, or organizations imply endorsement by the U.S. Government.

Figure 3. Alopecia Areata Symptom Impact Scale (AASIS). Part 1: "Alopecia areata is a condition that may affect you. Please rate how severe the following symptoms of your alopecia areata have been in the past week. Please select one response from 0 (symptom has not been present) to 10 (the symptom was as bad as you can imagine it could be) for each item." Items: scalp hair loss; body or eyelash hair loss; tingling/numbness of the scalp; itchy or painful skin; irritated skin; feeling anxious or worried; feeling sad. Part 2: "Your alopecia areata may interfere with your daily functioning. Please rate how the following items were interfered with by alopecia areata in the past week. Please select one response from 0 (did not interfere) to 10 (interfered completely) for each item." Items: work; enjoyment of life; interaction with others; daily activities; sexual relationships; quality of life.

REFERENCES
Blume-Peytavi U, Hillmann K, Dietz E et al. (2011) A randomized, single-blind trial of 5% minoxidil foam once daily versus 2% minoxidil solution twice daily in the treatment of androgenetic alopecia in women. J Am Acad Dermatol 65:1126–36
Dell RB, Holleran S, Ramakrishnan R (2002) Sample size determination. ILAR J 43:207–13
FDA (2010) Draft Guidance/Guidance for Industry: Adaptive Design Clinical Trials for Drugs and Biologics. Center for Biologics Evaluation and Research (CBER), Center for Drug Evaluation and Research (CDER). US Department of Health and Human Services. http://www.fda.gov/downloads/Drugs/Guidances/ucm201790.pdf
Gilhar A, Etzioni A, Paus R (2012) Alopecia areata. N Engl J Med 366:1515–25
Lachgar S, Charveron M, Gall Y et al. (1998) Minoxidil upregulates the expression of vascular endothelial growth factor in human hair dermal papilla cells. Br J Dermatol 138:407–11
Olsen EA (2011) Investigative guidelines for alopecia areata. Dermatol Ther 24:311–9
Olsen EA, Hordinsky MK, Price VH et al. (2004) Alopecia areata investigational assessment guidelines Part II. J Am Acad Dermatol 51:440–7
Shapiro J, Lui H, Tron V et al. (1997) Systemic cyclosporine and low-dose prednisone in the treatment of chronic severe alopecia areata: a clinical and immunopathologic evaluation. J Am Acad Dermatol 36:114–7
Wiseman MC, Shapiro J, MacDonald N et al. (2001) Predictive model for immunotherapy of alopecia areata with diphencyprone.
Arch Dermatol 137:1063–8

work_mn5z5fohw5cshcafxw6n2shruy ---- Nasal Prosthesis Rehabilitation: A Case Report | Semantic Scholar
DOI:10.1007/s13191-011-0094-5 Corpus ID: 19212514
Jain S, Maru K, Shukla J, Vyas A, Pillai R, Jain P (2011) Nasal Prosthesis Rehabilitation: A Case Report. The Journal of Indian Prosthodontic Society 11:265–269.
Abstract: Facial defects resulting from neoplasm, congenital malformation or trauma can be restored with a facial prosthesis, using different materials and retention methods to achieve a life-like look and function. A nasal prosthesis can re-establish esthetic form and anatomic contours for mid-facial defects, often more effectively than surgical reconstruction, as the nose is a relatively immobile structure. For successful results, factors such as harmony, texture, color matching and blending of…

work_mnhmljxjcnfe5mwhpfums352fy ----

New Review of Hypermedia and Multimedia. Publication details, including instructions for authors and subscription information: http://www.informaworld.com/smpp/title~content=t713599880
Designing for playful photography. Marianne Graves Petersen (Department of Computer Science, University of Aarhus, Aarhus N, Denmark); Sara Ljungblad (SICS, Kista, Sweden); Maria Håkansson (SICS, Kista, Sweden). Online publication date: 16 December 2009.
To cite this Article: Petersen, Marianne Graves, Ljungblad, Sara and Håkansson, Maria (2009) 'Designing for playful photography', New Review of Hypermedia and Multimedia, 15: 2, 193–209.
To link to this Article: DOI: 10.1080/13614560903204653 URL: http://dx.doi.org/10.1080/13614560903204653
Designing for playful photography

MARIANNE GRAVES PETERSEN*, SARA LJUNGBLAD and MARIA HÅKANSSON
Department of Computer Science, University of Aarhus, Aabogade 34, DK-8200 Aarhus N, Denmark (M.G.P.); SICS, Box 1263, SE-164 29 Kista, Sweden (S.L. and M.H.)
(Received 20 February 2009; final version received 23 July 2009)

This paper highlights the concept of playful photography as an emerging and important area for Human Computer Interaction (HCI) research, bringing together three research projects that investigate new ways of engaging with digital photography with theories of playfulness and experience-centred design. Drawing upon this, we start to unpack playful photography and its characteristics. Instead of aiming for a unifying theory of photography within experience-centred research, we take a reflective stance on our own research work. This is intended to encourage a critical discussion about playful photography, as well as to support on-going research in this area with a possible theoretical perspective.
Keywords: Playful photography; Experience-centred design; Design cases

1. Introduction
In this paper, we highlight and start to unpack the concept of playful photography, based on our experiences from designing and studying digital photography applications. We suggest this is an interesting design space that awaits further exploration as a research strand. We position playful photography in the history of photography both inside and outside of the HCI area and within the emerging experience orientation within HCI. Furthermore, the concept is discussed on the basis of three different design cases, all exemplifying characteristics that can be associated with playful photography. The characteristics of playful photography can be considered as a resource for others developing photo applications in this direction, but we do not claim that the list of characteristics is exhaustive. We see them as a starting point and as describing how playful photography can represent an activity that is done as an enjoyment in itself, how it can involve taking advantage of the physical or social surroundings, and be bodily engaging. Moreover, the characteristics show how such bodily actions can be social and involve co-experiences and, for example, how viewing pictures means that several people are physically active together.

*Corresponding author. Email: mgraves@cs.au.dk
New Review of Hypermedia and Multimedia, Vol. 15, No. 2, August 2009, 193–209. ISSN 1361-4568 print/ISSN 1740-7842 online. © 2009 Taylor & Francis. http://www.tandf.co.uk/journals DOI: 10.1080/13614560903204653

Photography became a mundane practice after the introduction of roll-film cameras, almost a hundred years ago (Wells 2004).
Amateur practices and perspectives have interested both anthropologists like Chalfen (1987) and literature theorists like Sontag (1977). Chalfen studied how snapshot pictures of everyday life in the USA were possessed and valued, and what made people consider something to be a good picture. His studies led to questions about whether and how people would create new photographic practices as novel technologies, such as video, became part of everyday life. Sontag has written about the early significance of everyday photography, how it developed in tandem with tourism, and how, for example, the camera was used to memorialise special events and could give people something to do in a new situation. Along with Barthes (1982), a twentieth-century French literary theorist and sociologist, these authors often discuss photography from the perspective of the resulting pictures: what they picture, and why and how people value them. With the advent of digital photography, photography has become the subject of HCI research, now involving a perspective of how people manage their pictures on computers (Frohlich et al. 2002). Some of this work has been situated within a "usability paradigm" seeking to optimise the handling, storing and organisation of photos, focusing less on the experiential aspects of photography. However, in recent years, we have also seen a number of investigations of how to design for new photographic experiences in HCI. This strand focuses on playful, creative and aesthetic practices surrounding digital photography rather than on optimising the usability, efficiency or technical quality of digital photography. We have worked independently within this strand, more specifically on designing experience-oriented photo applications beyond the desktop, seeking to explore alternatives (Gaver and Martin 2000) to current applications for digital photography.
Our theoretical starting point has been the concepts of homo ludens (Huizinga 1955) and ludic engagement (Gaver et al. 2004). In addition, the emerging experience-centred perspective grounded in pragmatist aesthetics (McCarthy and Wright 2004, Petersen et al. 2004) has informed the design processes. In a number of projects, we have designed for playful and engaging experiences with digital photography taking advantage of ubiquitous computing technologies. The result of this research has been the development and trial uses of three digital photography applications. Bringing together these research experiences, it becomes clear that there are several strong similarities with respect to both design intentions and the experiences from field trials. As a result, we want to highlight what we consider to be "playful photography". This is an emerging research strand that deserves special attention, as digital photography is already a ubiquitous technology and this specific design space is currently not very widely explored, while complementing other kinds of photography research within HCI. Moreover, the notion of playful photography can be considered as its own strand within experience-centred design, providing a specific lens that may support researchers designing for other types of game or leisure activities, or even support taking a new perspective on more traditionally designed work applications.

2. Background
The following section gives a background to how digital photography has previously been approached in HCI, and more recently in experience-centred HCI. Besides providing a general overview of related work, it will help to point out how the previous work differs from what we consider to be playful photography.
2.1 Photography and HCI
Within the field of HCI, photography started to gain attention as it became a digital technology. People were suddenly able to take lots of pictures at low cost, and share these almost instantly with others, which created both new opportunities and challenges. A number of studies therefore investigate how photographs are managed and shared on computers (Frohlich et al. 2002, Rodden and Wood 2003), and how desktop applications could be designed to support such new photo practices (Voida and Mynatt 2005, Kirk et al. 2006). Kirk et al. (2006) have identified "photowork" as the management of a picture collection done after capture and prior to sharing. Photowork includes downloading, selecting, organising, editing and filing pictures, primarily done to prepare for sharing but also for other purposes. Photowork is often complex and time consuming, and Kirk et al. (2006) suggest a need for new software tools to make pre-sharing activities easier and more enjoyable. Overall, the early focus on photography in HCI was on how to design tools for people to manage large amounts of digital images more effortlessly and efficiently, and share them with others. With the advent of camera phones, researchers have taken an interest in communication and photography, for example, investigating the reasons for taking pictures with and sharing pictures from mobile phones (Kindberg et al. 2005, Van House et al. 2005). As shown in these studies, people take and share such pictures for a number of different reasons, such as taking an image as a reminder for oneself, as a visual part of an on-going discussion with friends, or as a way to express oneself to others. Recently in HCI, there has also been an interest in photo-sharing and social network sites like Flickr, what motivates people to share their pictures there, and even tag them for various purposes (Ames and Naaman 2006).
Along with improved mobile and sensor-based technologies, there has also been an interest within the HCI field in moving away from the desktop and designing photo/video applications that take advantage of aspects of the physical world. Rather than using sensors to create bodily engaging applications, several of these designs have had a predominant focus on tasks, such as making browsing easier for people. For instance, LAFCam used sensors to detect laughter and automatically mark a video sequence as interesting, in order to make it easier for the user to later find the "fun" parts of a recording (Lockerd and Mueller 2002). StartleCam also used different sensors to automatically trigger
In order to explore ways, technology could support a broader range of values such as play, exploration and personal reflection, Gaver (2002) has been working with the concept of ludic design. Ludic design is influenced by Huizinga’s theory of play, which argues that humans are inherently playful creatures who want and need to engage in activities that are not related to utility, duty or truth (Huizinga 1955). Forlizzi and Battarbee (2004) have stressed that experience need not be an individual thing only. They challenge the assumption that experience is seen as entirely private and subjective, as argued in some areas of experience- centred HCI. They suggest co-experience as the experience created through social interaction and point to creativity in collaboration as potentially contributing to co-experience. Whereas Forlizzi and Battarbee focus on establishing a framework for experience and co-experience, our interest lies in designing for engaging individual experiences as well as co-experiences. 2.3 Photography and experience-centred HCI There are a number of photography projects within HCI that, we argue, have been experience-centred in design. What is common for these projects is that they try to open for new experiences of capturing or viewing photographs that have little or nothing to do with efficiency or ease of use. In doing this, they differ from the previously mentioned efforts in HCI to support, e.g. photowork (Kirk et al. 2006). We argue that the following examples are related to, or even belong to, the notion of ‘‘playful photography’’ that we will outline next. An early example is Audiophotography, in which Frohlich et al. (2000) explored ways of using sound to add value to still images. In Audiophotography, a sound clip is recorded along with taking a photograph, and then played when viewing the image. Based on the concept of Audiophotography, Martin and Gaver (2000) further explored speculative design proposals for digital camera 196 M. G. 
Petersen et al. D o w n l o a d e d B y : [ A a r h u s U n i v e r s i t e t s B i b l i o t e k e r ] A t : 1 9 : 5 7 2 F e b r u a r y 2 0 1 0 technology that illustrated intriguing, playful, or provocative ways of taking pictures. In a similar way, Bitton et al. (2004) explore in the project RAW how sound can alter the act of viewing pictures. RAW is a novel photo-viewing application that plays sound automatically taken at the moment of capture. A more recent example is Columbus (Rost et al. 2008), which is a photo application for exploring geo-tagged images by physically going to the places they are located. Columbus is inspired by old-fashioned adventures of discovering unknown territories, and is deliberately limited to show only local pictures so that the user gradually ‘‘discovers’’ the physical and digital worlds as she moves around. Applications supporting more playful approaches to digital photography are thus emerging, and are in this way promoting qualities that were already recognised in roll-film cameras, but which so far have been relatively unexplored in the digital realm. We see ourselves as contributing to investigat- ing playful approaches to digital photography by taking advantage of the properties of the digital material in developing innovative design concepts, as well as seeking to elicit the more generic qualities of playful photography. However, it is important to stress that even designs that are not originally intended for playful use can obviously allow this, depending on what the user wants to do (Ljungblad 2009). In the same way, existing commercial applications such as Flickr (www.flickr.com) and Facebook (www.facebook. com) also allow for playful uses and purposes, even though they lack some of the characteristics that we outline in this paper as being important to playful photography. 3. 
Three examples of playful photography
Below, we will describe three different photo applications that illustrate properties of what we consider to be playful photography. We briefly describe their functionality, design rationale and user experiences from field studies. The three applications have been developed independently from each other, but there are striking similarities as well as some interesting differences between them that motivate the notion of playful photography, which we explore and develop in this paper.

3.1 Context photography
Context photography allows photographers to take pictures not only of light but also of movement and sound, which create different real-time visual effects in the pictures depending on these conditions in the immediate surroundings (Håkansson et al. 2006, Håkansson and Gaye 2008, Håkansson 2009). The Context Camera, which is an application for camera phones, uses the built-in microphone and camera to sense sound and movement, respectively, and then maps this information to graphical effects that affect the picture in real time (see Figure 1). This means, for instance, that being in a setting with lots of noise and action will create different visual effects in the photograph than if it had been taken in a quiet setting. Context photography thus differs from Audiophotography (Frohlich et al. 2000) in that sound and movement visually affect the appearance of a picture in real time, rather than being associated as an audio or motion clip with the still image. Context photography allows for completely new ways of taking digital pictures, where the photographer might take pictures of moving objects and/or noisy settings to create aesthetically pleasing pictures with these visual effects.
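A rough sketch of this kind of sensing-to-effect mapping is shown below. The parameter names, the 0..1 normalisation, and the toy brightness effect are our own illustrative assumptions; the actual Context Camera maps the sensor data to richer graphical effects than this.

```python
# Illustrative sketch (not the published Context Camera implementation):
# sound level and motion magnitude, sampled at the moment of capture, are
# mapped to parameters of a visual effect applied to the frame in real time.

def clamp(x, lo=0.0, hi=1.0):
    """Keep a sensor reading within the normalised 0..1 range."""
    return max(lo, min(hi, x))

def context_effect_params(sound_level, motion_magnitude):
    """Map normalised sensor readings (0..1) to hypothetical effect parameters."""
    return {
        "colour_shift": clamp(sound_level),        # louder surroundings -> stronger colour distortion
        "streak_length": clamp(motion_magnitude),  # more movement -> longer motion streaks
    }

def apply_effect(pixels, params):
    """Toy effect: brighten greyscale pixels in proportion to the sensed context."""
    boost = 1.0 + 0.5 * (params["colour_shift"] + params["streak_length"])
    return [min(255, int(p * boost)) for p in pixels]
```

In a quiet, still setting both parameters stay near zero and the frame is left largely untouched; a noisy, busy setting pushes them towards one, so the same scene yields a visibly different picture.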
The photographer can also actively create sounds or movement, or ask someone to participate in making noise or movement in order to get an interesting picture. The Context Camera illustrates how the use of sensor technologies can create opportunities for new applications and modes of interaction for digital cameras, which can encourage new creative and playful photo practices. The design process of Context photography involved learning about engaging and meaningful photography experiences from a specific group of amateur photographers, called Lomographers (Ljungblad 2007). Their enjoyment and everyday practice of taking pictures were studied in order to inform novel playful photography experiences. The final design of the Context Camera was not intended for the Lomographers per se, but for people interested in engaging, creative and playful digital photography in general.

Figure 1. Context photography implemented on a camera phone (left); photographs with visual effects created by sound and motion in the moment of taking pictures.

In a six-week study involving seven amateur photographers who used the Context Camera on their camera phones, we found that Context photography changed the perceived enjoyment of taking photographs (Håkansson et al. 2006). One participant described how using the Context Camera changed his way of taking pictures: "You move yourself or the camera more. Spin it etc. just to try to get a fun effect". We also found that users enjoyed not being in
.’’ As argued in Håkansson and Gaye (2008) and Håkansson (2009), the combination of explicit interaction (actively creating input) and implicit interaction (letting a certain environment create visual effects simply by being there) can invite playful exploration. How the pictures turn out also involves a moment of surprise. We suggest it is the combination of exploiting aspects of the physical environment, physical interaction, unexpectedness and visual effects that are ambiguous (rather than ready- made with a particular meaning and purpose) which makes this playful photography. 3.2 Autonomous wallpaper Autonomous wallpaper combines picture taking and home decoration (Ljungblad and Holmquist 2007). It introduces picture taking as a playful way to actively contribute to changing the interior design in the home. People can take pictures of everyday things with their mobile phones, send the pictures to the application, and let the colours and patterns in the images be transformed Figure 2. Autonomous wallpaper lets users place pictures from their camera phone on their living room wall. Each pictures becomes a unique flower, growing dynamically with other flowers on the wall. Designing for Playful Photography 199 D o w n l o a d e d B y : [ A a r h u s U n i v e r s i t e t s B i b l i o t e k e r ] A t : 1 9 : 5 7 2 F e b r u a r y 2 0 1 0 into a unique decoration on their living room wall (see Figure 2). For instance, it is possible to take a picture of green leaves intended to match the sofa, or take pictures of colourful patterns or people to create an outstanding and dynamic ‘‘party wall’’ when throwing a party. Users send pictures from their camera phones via Bluetooth or email and then position a flower on the wall by pointing the phone to a position where they want the flower to grow. 
Unique flowers with specific behaviour and appearance are then created from each picture, and they grow among and adapt themselves to other flowers on the wall, and can even create new flowers. Currently, the prototype is projected on a wall from a PC, and uses an ultra-sonic positioning system to allow the user to physically position a flower on the wall. The design is grounded in studies of people owning pets such as lizards, spiders and snakes, and the kind of passive, yet everyday engaging experience that caring for such pets may involve (Ljungblad and Holmquist 2007). The joy in using Automomous wallpaper involves both caring for the interior design by planning and taking interesting-looking pictures, and waiting to see how the picture will appear as a flower on the wall. 3.3 Squeeze Squeeze is an oversized interactive sack-chair which is intended as a site for collective and playful exploration of the history of the home as captured through the digital photos taken with a house camera (see Figure 3). Figure 3. Squeeze. Pictures are taken through squeezing the camera; the pictures on the wall can be explored through physical activity in the chair. 200 M. G. Petersen et al. D o w n l o a d e d B y : [ A a r h u s U n i v e r s i t e t s B i b l i o t e k e r ] A t : 1 9 : 5 7 2 F e b r u a r y 2 0 1 0 The design of the camera seeks to make picture taking possible and attractive for all members of a family, even for small children. As a picture is taken, it is immediately put on display on a wall close to the sack-chair. The pictures can be explored from the sack-chair through movement in the piece of furniture and through manipulation of the active zones on the sack-chair. It is possible to stretch and rotate pictures, and it is possible to navigate back and forth in history. 
The active zones of the sack-chair are deliberately distributed over the entire chair in a way that allows for collective control, and requires collaboration and physical activity in order to explore the photos. The physical shape of the over-sized sack-chair is designed to accommodate multiple people, its flexibility allowing for a changing number of people's presence, adapting to the shifting circumstances of the home. The furniture was brought into two homes for trial use. In general, the families were keen to engage with each other and explore Squeeze. While one father commented at first that the furniture looked more appropriate for a kindergarten, they were all engaging physically and actively in exploring the pictures, as exemplified by Figure 4, where a mother grabs her son's foot and bumps it into the sack-chair as a way of browsing photos. We were further surprised to see how a six-year-old boy started to take pictures of beautiful patterns in the home, for instance, close-ups of a patterned carpet, and then eagerly awaited their presence on the wall. The family members were also sometimes playfully fighting over control of the pictures, e.g. as one navigated forward in the pictures, another went back.

Figure 4. A mother grabs her son's foot and bumps it into the sack-chair to browse the collection.

4. Unpacking playful photography
In the following, we will discuss a number of characteristics that we suggest are of importance to playful photography, based on our three designs presented
Rather than attempting to define an overall framework that would include or exclude specific designs, we suggest that different photo-related designs may share one or several of these characteristics of playful photography, as well as include others that are different from the ones that we outline here. In line with the experience perspective furthered by Blythe et al. (2004) and McCarthy and Wright (2004), we see playful photography as something that can only be designed for, but not prescribed by, design. As argued above, designs that primarily focus on efficiency, for example, organising pictures in the most efficient way, would not be categorised as playful photography. Similarly, a camera intended to log everyday life without any user involvement is not necessarily playful photography. However, it is important to acknowledge that such designs may still end up involving some elements of playful photography, if users start to explore and engage in playful ways beyond the designers' intentions.

4.1 Part of mundane everyday life

All three designs are intended for non-professional settings in everyday life, and they represent different approaches to this. Squeeze is integrated as a piece of furniture in the home environment and designed to be appealing to all members of a family, including small children. Autonomous wallpaper is part of the interior design. Both Context photography and Autonomous wallpaper represent activities that can be done as quick leisure in between other mundane activities. Snapshots can be taken, for example, when waiting for the bus, without allocating time for this as a separate activity. Thus, this is different from, for example, photowork and working with photo albums, which can be considered separate allocated activities rather than in-between activities.

4.2 Activity as engaging in itself

Squeeze, Autonomous wallpaper and Context photography are all designed for exploratory activities that are done as enjoyment in itself.
With Squeeze, the exploration of digital pictures is designed to be an engaging social and physical activity; with Context photography, the act of taking pictures is made novel and playful; and finally, with Autonomous wallpaper, a novel playful display of pictures in physical space is explored. Research on photowork has suggested that post-capture and pre-sharing activities should be made easier and more enjoyable (Kirk et al. 2006). These three playful photography applications focus primarily on novel ways of capturing and sharing pictures, and while photowork is not made completely redundant in these contexts, the need for it might be changed or reduced. In the case of Squeeze, for example, there is no work process between taking a picture and sharing and experiencing it on the wall. In addition, since the intention behind taking pictures changes in playful photography, people might not take the same number of pictures, or with the same frequency, as with regular cameras used for, e.g. documentation. As a result, the need for photowork will probably be different, and photowork may not even be conducted or considered meaningful by users when they engage in playful photography applications. Moreover, the purposes of engaging with the prototypes are strongly related to activities that are attractive in themselves and thus detached from more task-oriented activities where the purpose is to ‘‘get the work done’’. For example, Context photography invites playful exploration of taking pictures with sound and movement as new parameters, Autonomous wallpaper invites playful exploration of taking different pictures as a way to dynamically decorate one's home, and finally, Squeeze offers a social site for sharing memories within the family.
Another characteristic which our applications point to is the issue of time efficiency. As argued by Kirk et al. (2006), there has been a tendency towards evaluating photo applications in terms of the time it takes to retrieve a photo. For all the playful photography applications we have developed and explored, time is not a critical issue in this way. On the contrary, the situations are characterised by excess time: time spent with the purpose of spending time in an engaging and playful way, where the situation at hand becomes the purpose in itself. This is in line with the need for designing for pottering, as called for by Taylor et al. (2008). The characteristic of having the activity as engaging in itself is also well in line with the experience perspective furthered by pragmatist aesthetics, which promotes curiosity, engagement and imagination in the exploration of an interactive system (Petersen et al. 2004). We see this in the trial uses of the systems in that, for instance, for Context photography it is argued that ‘‘Much of the fun with context photography is that you feel you are not entirely in control over how the picture will turn out’’, illustrating how curiosity and imagination motivate the engagement with the system. Similarly with Squeeze, the physical activity that the interaction design invited contributed to engaging the families in investigating the digital photos, leading to the next characteristic of supporting bodily engagement.

4.3 Supporting bodily engagement

The designs in this paper represent applications that are mobile or beyond the desktop, and they take advantage of this by making use of the physical surroundings and bodily actions to open for playfulness and exploration. In Context photography, sound and movement have become new parameters, allowing the photographer to either create sounds and movement by bodily actions, or explore how aspects of the physical surroundings may affect the pictures by, e.g.
seeking out a busy setting. With Squeeze, pictures are explored through shared physical activity with other people. With Autonomous wallpaper, users can physically ‘‘plant’’ flowers on their wall. As opposed to adding sensors to a camera for passively logging activities (e.g. Holleis et al. 2005, Sellen et al. 2007), playful photography applications exploit such technical opportunities to encourage bodily engagement from the user. This is in line with aesthetic interaction (Petersen et al. 2004), which also emphasises the value and potential in invoking the whole body and the senses in the interaction with technology. Furthermore, both Squeeze and Autonomous wallpaper also operate on a bodily scale in the way the pictures are explored. With Squeeze, the area with embedded sensors invites engaging the whole body, e.g. the foot (Figure 4), and even makes room for more than one person contributing directly to the interactive control and exploration, in this way supporting co-experiences (Forlizzi and Battarbee 2004). With Autonomous wallpaper, the pictures grow on the wall, making it possible to relate to the contents of the pictures on a bodily scale.

4.4 Moments of surprise

Context photography, Autonomous wallpaper and Squeeze all exploit ‘‘moments of surprise’’ as important parts of the overall experience. The Context Camera is deliberately designed so that the visual effects are not displayed until the picture has been taken. If the effects were constantly visible in the viewfinder, this might create specific expectations for each picture and diminish the fun. Our field study showed that an element of surprise is essential in making Context photography exciting, and that the lack of user control that it brings with it can also be part of the fun (Håkansson et al. 2006).
Similarly, in Autonomous wallpaper, it is difficult to foresee how a specific picture will turn out as a flower, which then leads to a surprise once the picture is sent to the wall. In Squeeze, the element of surprise lies instead in when and which picture will appear on the wall, which depends on how people engage with the sack-chair, e.g. by playfully counteracting each other's actions. In contrast, an example where this kind of creative 'surprise' has not been taken into consideration is conventional digital cameras that give a warning when the lighting is not appropriate or the picture might get blurred. As Dunne (1999) critically argues, this is ‘‘as if to warn the user that she is breaking the norm and is about to become creative’’. The applications in this paper further illustrate how lack of control can be part of an engaging experience. In all three examples, the relationship between the process of capturing a picture and the ‘‘resulting’’ picture is subject to exploration, and this is part of the enjoyment itself. For example, in Context photography the surrounding sound and movement are not necessarily possible to control. However, this can still be perceived as a fun challenge, leading to unpredictable but not necessarily undesired effects. Similarly, it is not possible to foresee how the flowers will appear in Autonomous wallpaper, unless the exact same picture has been used before. This lack of control is, however, considered an important part of the system. Interestingly, the lack of control conflicts with classic ideals of direct manipulation and points to the value of moments of surprise, which is not seen as an interaction ideal in more traditional usability-oriented paradigms of HCI.
4.5 Open for social interaction

Social interaction around photographs in HCI has often involved viewing or organising photos together, for example in a desktop application, which usually implies that one person is controlling and structuring while other people are more passively observing. Playful photography can support an alternative and preferably more engaging form of social interaction. It is apparent that Squeeze supports and builds upon social interaction and shared experiences of photography. In fact, the design requires more than one person to take full advantage of its functionality, through the physical distribution of the controls on the sack-chair. This is in line with the emphasis on co-experience as promoted by Forlizzi and Battarbee (2004), who suggest that experiences can be enhanced through sharing. Thus, this should be considered an important issue for playful photography. Context photography and Autonomous wallpaper do not require several people to interact with each other, but allow for various social experiences nonetheless. In fact, it can be argued that both designs could be meaningful as a shared activity. Context photography was inspired by lomographers, who share a very specific practice, interest and community. In a similar way, Context photography could lead to emerging groups of people who enjoy this particular way of taking pictures and want to share their experiences with others. In the case of Autonomous wallpaper, it could become a conversation piece as well as a collaboratively created ‘‘garden’’ of flowers at, for instance, a social event.

4.6 The purpose of taking pictures changes

Playful photography provides new opportunities for why pictures are taken. Early purposes of amateur photography were to memorialise a vacation, and more recent purposes of using camera phones include taking a picture to keep a record of a receipt or to send someone a visual reminder. With new technology, the reasons why people take pictures might change.
One important part of playful photography is that this ‘‘why’’ is also likely to be open for interpretation (Sengers and Gaver 2006). This means that the goal of taking pictures, and how to enjoy them, is ambiguous and something that users actively engage in. For example, using Squeeze together with others to explore the sack-chair and look at pictures on the wall potentially allows for a more engaging experience of the resulting pictures that is affected by the presence of several people. Context photography changes the overall experience in the moment of taking pictures, as sound and movement usually do not affect pictures in this way. However, how such pictures are interpreted, as the ‘‘truth’’ of a situation or simply as a fun effect, is left open to users to decide. In a similar way, users decide for themselves how and when they use Autonomous wallpaper. Pictures that are explicitly taken to be sent to the wallpaper are likely to be different from pictures that are taken as a note or as a memory. For instance, a picture could be taken of a pattern, or even just a colour, that the user wants to see as a pattern on the wall.

5. Discussion

Above, we have unpacked playful photography based on the experiences from our design concepts and the trial uses of these. The ambition is not to outline the defining characteristics of playful photography, but rather to suggest characteristics of a design space, in this way pointing towards such a space and supporting designers who want to design similar applications and explore it. Furthermore, with the characteristics we wish to invite a debate and a sharing of experiences more generally within this area, beyond singular point designs.
Our concepts point to the potential of moving interaction and experience around digital photography into physical space, where it in fact came from, although with the digitisation of photography it has for a while resided on the desktop platform. With this work we wish to challenge and complement this research by pointing to other directions. By bringing our independently developed cases together in this way, we also wish to call for design-oriented research that further investigates motivations and purposes of photography. Through innovative playful photography design, we can make new kinds of experiences of, and relations to, a variety of digital photography applications possible. Even though the characteristics of playful photography were derived from having theories of homo ludens, pragmatist aesthetics and experience meet the area of digital photography, the characteristics seem to point to more generic qualities of playful ways of engaging with digital materials, which go beyond digital photography. It would be relevant for future design-oriented research to investigate how the above characteristics can inform design for other domains within a playful realm. Furthermore, future playful photography research could investigate the significance of the resulting pictures, and how these are interesting to the spectator as an individual or as a society, as exemplified in, for example, Camera Lucida (Barthes 1982). Here Barthes reflected on the role and meaning of photographs from the perspective of the 'spectator', the viewer, of a photograph. As a means of reflection, Barthes defined two themes, studium and punctum, where the studium is the subject of an image or its symbolic meaning, and the punctum is what makes it interesting for a particular spectator. This could be a curious detail, or, as in Barthes' case, a personal relationship with the subject in the picture together with a sense of time that triggers punctum.
Barthes was interested in the relation between these two. Theoretical themes like those presented by Barthes could be valuable in guiding the further exploration of playful photography. The examples in this paper focus on novel playful ways of supporting the act of taking photographs and displaying/sharing them afterwards, but not on the visual qualities of 'playful photographs'. If looking at the visual qualities, the themes of studium and punctum could be relevant to consider: do they exist in the pictures resulting from playful photo applications, and how can we speak of them? What is it in a picture taken with a playful camera application that triggers the spectator (as opposed to the operator, the photographer)? Finally, as Barthes stressed in his work, looking at and appreciating pictures is something that is highly subjective. We can speak of certain qualities in pictures, but it is the spectator who ultimately decides what makes an image interesting to her. This supports the value of the subjective proposed by experience-centred HCI, and is therefore of high relevance to playful photography.

6. Conclusion

In this paper, we have furthered the concept of playful photography based on theories of homo ludens and experience-centred perspectives, as well as the development and trial use of a number of design concepts supporting playful photography. In this way, we have described how playful photography is grounded in theoretical perspectives, and we have unpacked characteristics of playful photography based on the theories and design cases. We have positioned playful photography with respect to the history of photography, and we suggest that some of the qualities of early photography, e.g.
of having interaction and experience around photography as an integral part of physical space, have been lost with digital desktop-based photography. Our cases and trial uses suggest that ubiquitous computing opens up new opportunities for re-establishing and even improving the experiences around photography in this direction. Furthermore, we have positioned playful photography within the field of HCI, and we have pointed out how playful photography can serve to complement research into, e.g. photowork (Kirk et al. 2006), which focuses on improving the processes between capture and sharing. Both our design concepts and the characteristics of playful photography we have established suggest that there is an unexplored potential in designing for new experiences around the processes of capturing and sharing digital photographs. We have established a number of characteristics of playful photography as part of our unpacking of the concept. Our goal with this unpacking is to promote the design space of playful photography and to encourage others to explore this further. Furthermore, we wish to invite a debate around this space, i.e. what other characteristics of playful photography can be established, and how can the experiences from this design-oriented research into playful photography bring about lessons for designing for playful relations to other types of digital materials.

Acknowledgements

We thank the people who participated in the trial uses of the prototypes and our colleagues at SICS, Sweden and Center for Interactive Spaces, University of Aarhus.

References

M. Ames and M.
Naaman, ‘‘Why we tag: Motivations for annotation in mobile and online media’’, in Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI'07), San Jose, CA, April 28–May 03, 2007, New York, NY: ACM, pp. 971–980, 2007.
R. Barthes, Camera Lucida: Reflections on Photography, New York: Hill and Wang, 1982.
J. Bitton, S. Agamanolis and M. Karau, ‘‘RAW: Conveying minimally-mediated impressions of everyday life with an audio-photographic tool’’, in Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI'04), Vienna, Austria, April 24–29, 2004, New York, NY: ACM, pp. 495–502, 2004.
M. Blythe, K. Overbeeke, A.F. Monk and P.C. Wright, Funology: From Usability to Enjoyment, Dordrecht, The Netherlands: Kluwer Academic, 2004.
R. Chalfen, Snapshot Versions of Life, Bowling Green, OH: Bowling Green State University Popular Press, 1987. (Paperback edition 2008)
A. Dunne, Hertzian Tales, London: MIT Press, 1999.
J. Forlizzi and K. Battarbee, ‘‘Understanding experience in interactive systems’’, in Proceedings of the 5th Conference on Designing Interactive Systems: Processes, Practices, Methods, and Techniques (DIS'04), Cambridge, MA, August 01–04, 2004, New York, NY: ACM, pp. 261–268, 2004.
D. Frohlich, G. Adams and E. Tallyn, ‘‘Augmenting photographs with audio’’, Personal Technologies, 4(4), pp. 205–208, 2000.
D. Frohlich, A. Kuchinsky, C. Pering, A. Don and S. Ariss, ‘‘Requirements for photoware’’, in Proceedings of the 2002 ACM Conference on Computer Supported Cooperative Work (CSCW'02), New Orleans, Louisiana, November 16–20, 2002, New York, NY: ACM, pp. 166–175, 2002.
B. Gaver and H. Martin, ‘‘Alternatives: Exploring information appliances through conceptual design proposals’’, in Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI'00), The Hague, The Netherlands, April 01–06, 2000, New York, NY: ACM, pp. 209–216, 2000.
W. Gaver, ‘‘Designing for Homo Ludens’’, I3 Magazine, No. 12, June, 2002.
W.W.
Gaver, J. Bowers, A. Boucher, H. Gellerson, S. Pennington, A. Schmidt, A. Steed, N. Villars and B. Walker, ‘‘The drift table: Designing for ludic engagement’’, in CHI'04 Extended Abstracts on Human Factors in Computing Systems (CHI'04), Vienna, Austria, April 24–29, 2004, New York, NY: ACM, pp. 885–900, 2004.
P. Holleis, M. Kranz, M. Gall and A. Schmidt, ‘‘Adding context information to digital photos’’, in Proceedings of the Fifth International Workshop on Smart Appliances and Wearable Computing (ICDCSW), Volume 05, June 06–10, 2005, Washington, DC: IEEE Computer Society, pp. 536–542, 2005.
J. Healey and R.W. Picard, ‘‘StartleCam: A cybernetic wearable camera’’, in Proceedings of the 2nd IEEE International Symposium on Wearable Computers (ISWC), October 19–20, 1998, Washington, DC: IEEE Computer Society, pp. 42–49, 1998.
J. Huizinga, Homo Ludens: A Study of the Play Element in Culture, Boston, MA: Beacon Press, 1955.
M. Håkansson, Playing with Context: Explicit and Implicit Interaction in Mobile Media Applications, PhD Dissertation, Department of Computer and Systems Sciences, Stockholm University, Sweden, 2009.
M. Håkansson and L. Gaye, ‘‘Bringing context to the foreground: Designing for creative engagement in a novel still camera application’’, in Proceedings of the 7th ACM Conference on Designing Interactive Systems (DIS'08), Cape Town, South Africa, February 25–27, 2008, New York, NY: ACM, pp. 164–173, 2008.
M. Håkansson, L. Gaye, S. Ljungblad and L.E. Holmquist, ‘‘More than meets the eye: An exploratory study of context photography’’, in Proceedings of the 4th Nordic Conference on Human-Computer Interaction: Changing Roles (NordiCHI'06), Oslo, Norway, October 14–18, 2006, vol. 189, A. Mørch, K. Morgan, T. Bratteteig, G. Ghosh, and D. Svanaes, (Eds), New York, NY: ACM, pp. 262–271, 2006.
T. Kindberg, M. Spasojevic, R. Fleck and A.
Sellen, ‘‘I saw this and thought of you: Some social uses of camera phones’’, in CHI'05 Extended Abstracts on Human Factors in Computing Systems, Portland, OR, April 02–07, 2005, New York, NY: ACM, pp. 1545–1548, 2005.
D. Kirk, A. Sellen, C. Rother and K. Wood, ‘‘Understanding photowork’’, in Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI'06), Montréal, Québec, Canada, April 22–27, 2006, R. Grinter, T. Rodden, P. Aoki, E. Cutrell, R. Jeffries, and G. Olson, (Eds), New York, NY: ACM, pp. 761–770, 2006.
S. Ljungblad, ‘‘Designing for new photographic experiences: How the lomographic practice informed context photography’’, in Proceedings of the 2007 Conference on Designing Pleasurable Products and Interfaces (DPPI'07), Helsinki, Finland, August 22–25, 2007, New York, NY: ACM, pp. 357–374, 2007.
S. Ljungblad, ‘‘Passive photography from a creative perspective: 'If I would just shoot the same thing for seven days, it's like . . . What's the point?'’’, in Proceedings of the 27th International Conference on Human Factors in Computing Systems (CHI'09), Boston, MA, April 04–09, 2009, New York, NY: ACM, pp. 829–838, 2009.
S. Ljungblad and L.E. Holmquist, ‘‘Transfer scenarios: Grounding innovation with marginal practices’’, in Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI'07), San Jose, CA, April 28–May 03, 2007, New York, NY: ACM, pp. 737–746, 2007.
A. Lockerd and F. Mueller, ‘‘LAFCam: Leveraging affective feedback camcorder’’, in CHI'02 Extended Abstracts on Human Factors in Computing Systems, Minneapolis, MN, April 20–25, 2002, New York, NY: ACM, pp. 574–575, 2002.
H. Martin and B.
Gaver, ‘‘Beyond the snapshot: From speculation to prototypes in audiophotography’’, in Proceedings of the 3rd Conference on Designing Interactive Systems: Processes, Practices, Methods, and Techniques (DIS'00), New York City, NY, August 17–19, 2000, D. Boyarski and W.A. Kellogg, (Eds), New York, NY: ACM, pp. 55–65, 2000.
J. McCarthy and P. Wright, Technology as Experience, Cambridge: MIT Press, 2004.
M.G. Petersen, O.S. Iversen, P.G. Krogh, and M. Ludvigsen, ‘‘Aesthetic interaction: A pragmatist's aesthetics of interactive systems’’, in Proceedings of the 5th Conference on Designing Interactive Systems: Processes, Practices, Methods, and Techniques (DIS'04), Cambridge, MA, August 01–04, 2004, New York, NY: ACM, pp. 269–276, 2004.
M.G. Petersen, ‘‘Squeeze: Designing for playful experiences among co-located people in homes’’, in CHI'07 Extended Abstracts on Human Factors in Computing Systems, San Jose, CA, April 28–May 03, 2007, New York, NY: ACM, pp. 2609–2614, 2007.
K. Rodden and K.R. Wood, ‘‘How do people manage their digital photographs?’’, in Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI'03), Ft. Lauderdale, FL, April 05–10, 2003, New York, NY: ACM, pp. 409–416, 2003.
M. Rost, F. Bergstrand, M. Håkansson, and L.E. Holmquist, ‘‘Columbus: Physically exploring geo-tagged photos’’, Demonstration at UbiComp 2008, South Korea, 2008.
A.J. Sellen, A. Fogg, M. Aitken, S. Hodges, C. Rother, and K. Wood, ‘‘Do life-logging technologies support memory for the past?: An experimental study using SenseCam’’, in Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI'07), San Jose, CA, April 28–May 03, 2007, New York, NY: ACM, pp. 81–90, 2007.
P. Sengers and B. Gaver, ‘‘Staying open to interpretation: Engaging multiple meanings in design and evaluation’’, in Proceedings of the 6th Conference on Designing Interactive Systems (DIS'06), University Park, PA, June 26–28, 2006, New York, NY: ACM, pp. 99–108, 2006.
S.
Sontag, On Photography, London: Penguin Group, 1977.
A.S. Taylor, S.P. Wyche and J. Kaye, ‘‘Pottering by design’’, in Proceedings of the 5th Nordic Conference on Human-Computer Interaction: Building Bridges (NordiCHI'08), Lund, Sweden, October 20–22, 2008, vol. 358, New York, NY: ACM, pp. 363–372, 2008.
A. Voida and E.D. Mynatt, ‘‘Six themes of the communicative appropriation of photographic images’’, in Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI'05), Portland, OR, April 02–07, 2005, New York, NY: ACM, pp. 171–180, 2005.
L. Wells, (Ed.), Photography: A Critical Introduction, Third edition, New York: Routledge, 2004.

BMC Ophthalmology (BioMed Central) Open Access research article

The ChromaTest, a digital color contrast sensitivity analyzer, for diabetic maculopathy: a pilot study

Roger Wong*, Jaheed Khan, Temi Adewoyin, Sobha Sivaprasad, Geoffrey B Arden and Victor Chong
Retinal Research Unit, Ophthalmology Department, King's College Hospital NHS Trust, Denmark Hill, London SE5 9RS, UK
Email: Roger Wong* - drrogerwong@gmail.com; Jaheed Khan - jaheedkhan@yahoo.co.uk; Temi Adewoyin - telade@yahoo.co.uk; Sobha Sivaprasad - sobhasiva@hotmail.com; Geoffrey B Arden - g.arden@city.org.uk; Victor Chong - victor.chong@kingsch.uk
* Corresponding author

Abstract

Background: To assess the ability of the ChromaTest in investigating diabetic maculopathy.

Method: Patients with Type 2 diabetes and no concurrent ocular pathology or previous laser photocoagulation were recruited. Visual acuities were assessed, followed by colour contrast sensitivity testing of each eye using the ChromaTest.
Dilated fundoscopy with slit lamp biomicroscopy with a 78 D lens was then performed to confirm the stage of diabetic retinopathy according to the Early Treatment Diabetic Retinopathy Study.

Results: 150 eyes in 150 patients were recruited into this study. On fundus biomicroscopy, 35 eyes with no previous laser photocoagulation were shown to have clinically significant macular oedema (CSMO), and 115 eyes had untreated non-proliferative diabetic retinopathy (NPDR). A statistically significant difference was found between CSMO and NPDR eyes for the protan colour contrast threshold (p = 0.01). A statistically significant difference was also found between CSMO and NPDR eyes for the tritan colour contrast threshold (p = 0.0002). Sensitivity and specificity for screening of CSMO using a pass-fail criterion for age-matched TCCT results achieved 71% (95% confidence interval: 53–85%) and 70% (95% confidence interval: 60–78%), respectively. However, threshold levels were derived using the same data set for both training and testing the effectiveness, since this was the first study of NPDR using the ChromaTest.

Conclusion: The ChromaTest is a simple, cheap, easy to use, and quick test for colour contrast sensitivity. This study did not achieve results to justify use of the ChromaTest for screening, but it reinforced the changes seen in tritan colour vision in diabetic retinopathy.

Background

The debilitating nature of untreated diabetic retinopathy promotes the need for cost-effective screening methods. Various studies have shown that cost-effective screening can reduce blind registration due to diabetes [1-3]. Although seven-field 30 degree stereo colour fundus photographs are the gold standard for diabetic screening, they remain relatively expensive and difficult to obtain [4,5]. In the UK, the National Screening Program for Diabetic Retinopathy utilises non-stereo digital photography, as this meets the Diabetes UK standards for sensitivity and specificity.
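The Results report significance tests comparing colour contrast thresholds between the CSMO and NPDR groups. Purely as an illustration of such a two-group comparison (the paper itself reports a one-sample t-test, and the values below are invented, not taken from the study data), a minimal Welch's t statistic can be computed from the two samples:

```python
# Illustrative two-group comparison of tritan colour contrast thresholds.
# The threshold values are HYPOTHETICAL, invented only to show the arithmetic.
from statistics import mean, variance

def welch_t(a, b):
    """Welch's unequal-variance t statistic for two independent samples."""
    va, vb = variance(a), variance(b)          # sample variances (n - 1 denominator)
    se = (va / len(a) + vb / len(b)) ** 0.5    # standard error of the mean difference
    return (mean(a) - mean(b)) / se

csmo_tcct = [18.2, 22.5, 15.0, 19.8, 24.1, 16.7]   # hypothetical CSMO eyes
npdr_tcct = [8.1, 9.4, 7.2, 10.3, 6.8, 9.9]        # hypothetical NPDR eyes

t = welch_t(csmo_tcct, npdr_tcct)
print(t > 0)  # True: higher (worse) tritan thresholds in the CSMO group
```

A large positive t here simply reflects that the hypothetical CSMO sample has markedly higher thresholds; in practice the statistic would be referred to a t distribution to obtain a p-value.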
Published: 17 August 2008. BMC Ophthalmology 2008, 8:15. doi:10.1186/1471-2415-8-15. Received: 30 June 2007; Accepted: 17 August 2008. This article is available from: http://www.biomedcentral.com/1471-2415/8/15. © 2008 Wong et al; licensee BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Page 1 of 6 (page number not for citation purposes)

Non-stereo fundus imaging is easier to obtain but has limitations in establishing macular oedema [6]. There is evidence that tritan colour vision is diminished in patients with diabetic maculopathy, but testing with the FM100 hue and Farnsworth-Lanthony D-15 tests is labour-intensive and time-consuming [7]. Colour vision testing with a computer graphics system is an effective alternative [8]. This study assesses the ability of an automated, digital colour contrast sensitivity program in investigating diabetic maculopathy.

Methods

Patients from either the Diabetic Eye Screening Service or patients returning for their follow-up appointment in the Medical Retina Service were recruited for this study. Inclusion criteria included Type 2 diabetic patients with untreated non-proliferative diabetic retinopathy (NPDR) and untreated clinically significant macular oedema (CSMO).
Exclusion criteria included Type 1 diabetes, proliferative diabetic retinopathy, previous laser photocoagulation, and concurrent ocular pathology including infection, trauma, amblyopia, glaucoma, and/or vascular occlusion. Medical history including duration of diabetes, hypertension, renal disease, recent HbA1c, and smoking was recorded. Concurrent eye disease and previous treatment were also recorded. Examination of best corrected logMAR visual acuity (BCVA) was followed by colour contrast sensitivity testing of each eye, occluding the fellow eye and using the diabetic module of the ChromaTest, a software program analysing the age-corrected tritan (TCCT) and protan (PCCT) colour contrast thresholds. A brief explanation of what the patient was expected to see, and their expected response, was given before the test. The right eye was tested first, followed by the left.

For the ChromaTest, the subject is seated at a fixed distance from the monitor so that the letter displayed on the computer screen subtends a constant angle on the retina. The letter size creates an image that tests the central 6.5 degrees of the retina. The letters are displayed on an equiluminant background. The operator has no influence on the contrast of the test letter given. The computer finds the endpoint of the test by a Modified Binary Search method: if the response is correct, the colour difference between letter and background is halved on the next presentation; if the response is incorrect, the colour contrast is doubled. Incorrect responses prolong the test but do not influence the final threshold. This method of determining thresholds leads to finite steps which reach a plateau at the colour contrast sensitivity threshold. The reproducibility of this measurement is 1%, which is the sensitivity of the test. The ChromaTest has been described further in various articles [8-10]. Control data were obtained from unpublished data collected by G.B.
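The halving/doubling rule described above amounts to a bracketing binary search on contrast. The sketch below is one illustrative reading of that procedure, not the ChromaTest's actual implementation: `run_mbs` and the idealised deterministic `observer` are hypothetical, since a real observer's responses are noisy.

```python
def run_mbs(respond, lo=0.0, hi=100.0, resolution=1.0):
    """Estimate a colour contrast threshold (% contrast) by a
    modified binary search.  `respond(contrast)` returns True when
    the observer correctly identifies the letter at that contrast.

    The bracket [lo, hi] always contains the threshold: a correct
    response lowers the upper bound (the next letter is harder),
    an incorrect response raises the lower bound (the next letter
    is easier), so the step sizes shrink until the bracket
    plateaus at the requested resolution.
    """
    while hi - lo > resolution:
        contrast = (lo + hi) / 2.0   # next presentation
        if respond(contrast):
            hi = contrast            # reduce the colour difference
        else:
            lo = contrast            # make the letter more visible
    return (lo + hi) / 2.0

# Idealised observer: sees the letter whenever contrast >= threshold.
observer = lambda c, t=15.4: c >= t
print(round(run_mbs(observer), 2))   # → 15.23
```

With the bracket initialised at 0–100% contrast, each response halves the remaining search interval, so the estimate settles within the requested resolution after roughly log2(100) ≈ 7 presentations.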
Arden from diabetic patients without any diabetic retinopathy prior to this study (Table 1). Test and training sets are both from the group studied in this report. Dilated fundoscopy with slit lamp biomicroscopy and a 78 D lens was performed by a specialist registrar (RW) to confirm the grading of CSMO according to the Early Treatment Diabetic Retinopathy Study extension of the modified Airlie House classification [11]. CSMO is defined as any retinal thickening within 500 microns of the centre of the fovea; hard, yellow exudates within 500 microns of the centre of the fovea with adjacent retinal thickening; or at least 1 disc area of retinal thickening, any part of which is within 1 disc area of the centre of the fovea.

Each age group (e.g. 30–49, 50–69 and 70–89 years old), separated by two decades, was assigned a pass-fail criterion for TCCT, as previous data suggest an age-related change in the tritan colour threshold. Since this is the first study of NPDR using the ChromaTest, threshold levels were derived using the same data set for both training and testing the effectiveness. The pass-fail criterion for each age group was chosen piecewise, and sensitivity/specificity calculations were made according to these arbitrarily assigned levels.

Table 1: Colour Contrast Sensitivity in Patients with Diabetes and No Clinical Retinopathy (N = 30)

Age  Tritan  Protan
37   12.4    2.5
44    9.4    3.1
48    4.2    4.2
48    4.1    2.9
48    4.2    2.4
51   11.3    5.9
51    4.2    2.5
51    5.9    4.7
54    6.9    6.6
54    4.1    4.8
54    7.9    3.7
57    6.8    2.5
59    8.6    2.5
59    9.4    2.4
60   15.7    2.6
60    6.2    5.4
61   15.7   11.6
62    7.1    2.7
62    8.6   11.4
64    7.9    3.7
67    9.4    5.1
67   13.6    5.4
68   17.3    5.4
68   11.7    5.71
69    6.8    6.8
69   13.9    4.7
70   17.3    4.7
70   12.4    5
71    6.7    3.8
72   21.7    5.4

Control: Age, TCCT, PCCT
Sensitivity, specificity, confidence intervals, and the χ2 test were calculated with the web-based statistical calculator made available by Professor Lowry at Vassar College, New York (http://faculty.vassar.edu/lowry/VassarStats.html). The Wilcoxon Rank Sum Test for non-parametric statistical analysis was performed using web software (http://www.fon.hum.uva.nl/Service/Statistics/Wilcoxon_Test.html).

Results
150 eyes of 150 patients were included in this study. Of the 150 eyes, 115 had untreated NPDR (Table 2) and 35 had untreated CSMO (Table 3). Median age was 60 years. Median duration of diabetes was 16.0 years. Median logMAR BCVA was 0.20 for NPDR patients and 0.20 for CSMO patients; the interquartile ranges for visual acuity were 0.20 and 0.30, respectively. Median PCCT was 3.9% for NPDR and 5.6% for CSMO patients. Wilcoxon Rank Sum Test analysis revealed a statistically significant difference between CSMO and NPDR eyes for PCCT (p = 0.01). When compared to controls (N = 30, Table 1), PCCT for NPDR was not statistically significant (p = 0.15) whereas PCCT for CSMO was significant (p = 0.002). Median TCCT was 15.4% for NPDR and 29.6% for CSMO patients. A statistically significant difference was found between CSMO and NPDR eyes for TCCT (p = 0.0002). Both were also statistically significant when compared to controls (p < 0.001).

The piecewise pass/fail criterion for TCCT for each age group was as follows: 11.0 (30–49 years old); 23.0 (50–69 years old); 32.0 (70–89 years old). Sensitivity and specificity for screening of CSMO using the above pass-fail criterion for age-matched TCCT results reached 71% (95% confidence interval: 53–85%) and 70% (95% confidence interval: 60–78%), respectively (Table 4). When repeating the analysis in Table 4 for only subjects with logMAR BCVA ≥ 0.1, sensitivity to detect CSMO improved to 75% (CI: 47–91%) and specificity to 85% (CI: 67–89%), p = 0.0002.
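The age-banded pass/fail criterion and the headline accuracy figures can be re-derived from the published counts. The sketch below is an assumption-laden re-computation, not the authors' code: it uses the Wilson score interval (a common choice for proportion CIs; whether it matches the calculator the authors used is an assumption), so the bounds come out close to, but not exactly, the quoted 53–85%.

```python
import math

def tcct_cutoff(age):
    """Piecewise pass/fail criterion for tritan thresholds (TCCT, %)."""
    if 30 <= age <= 49:
        return 11.0
    if 50 <= age <= 69:
        return 23.0
    if 70 <= age <= 89:
        return 32.0
    raise ValueError("age outside the bands used in the study")

def wilson_ci(k, n, z=1.96):
    """95% Wilson score interval for a proportion k/n."""
    p = k / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return centre - half, centre + half

# 2x2 counts reported in Table 4 (CSMO vs NPDR, age-matched TCCT)
tp, fn, fp, tn = 25, 10, 35, 80
sensitivity = tp / (tp + fn)   # 25/35 ≈ 0.71
specificity = tn / (tn + fp)   # 80/115 ≈ 0.70
print(round(sensitivity, 2), wilson_ci(tp, tp + fn))
```

An eye "fails" (screens positive) when its TCCT exceeds `tcct_cutoff(age)`; the sensitivity is then the proportion of CSMO eyes that fail, and the specificity the proportion of NPDR eyes that pass.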
Similarly, when repeating the analysis in Table 4 for only subjects with CSMO with central macular thickening, sensitivity to detect CSMO improved to 83.3% (CI: 58–96%), p < 0.0001.

Discussion
Cost-effective screening for chronic and debilitating disorders such as diabetic retinopathy is important not only to the well-being of the patient: these healthy adults also contribute to the economy of a nation. With the rise in type 2 diabetes in obese adolescents due to dietary and lifestyle changes, the need for an optimal method of screening for sight-threatening diabetic retinopathy becomes critical [12].

Table 2: Colour Contrast Sensitivity in Patients with NPDR (N = 115)

Age  LogMAR VA  Tritan  Protan
31   0          13.6    3.4
32   0           5.2    3.2
32   0           6.7    2
32   0.2        15.4    3.2
41   0          16.1   15.4
41   0           6.1    2.1
41   0           6.2    2.1
41   0           6      1.7
41   0           8.4    3.9
42   0          11.4    3
44   0.2         9.6    4.8
44   0.2        13.3    8.1
45   0.2        16.1    4.2
45   0.2        22.1    5.5
45   0.4        19.9    5.8
48   0           5.6    2.9
48   0.5        20.6    3.8
48   0.6        29.5    5
49   0           7.4    3.4
49   0           6.3    2.2
49   0           8.4    3.9
49   0           8.4    2.6
49   0           9.4    3.1
49   0           9.9    3.4
49   0          10.3    2.9
49   0          30.5    6.1
49   0          34.5    4
49   0.1        33.6    6
49   0.7         9.2    2.6
49   0.7        12.2    3.6
51   0          13.6    4.4
51   0.1        18      5.8
51   0.2        19.1    7
52   0          10.8    2.6
52   0.2        82.4    9.3
54   0           9      3.1
54   0          22.1    4.6
54   0.2        23.6    4.3
55   0          14.4    3.1
55   0          20.2    5.4
55   0.2        18.4    3.5
55   0.2        17.6    2.1
55   0.3        19.6    4.4
55   0.3        85.9    7.7
55   0.4        22.1    7.7
56   0           8.1    2.7
56   0          11.1    2.5
56   0.1         6.6    2.6
57   0          10.3    3.6
57   0.1         6.7    2.9
57   0.1         7.2    2.1
57   0.2        14.9    2.9
58   0.1        13.9    3.8
58   0.2        11      3.3
58   0.2        21.4    2.8
58   0.2        38      3.8
59   0.2         6.8    2.1
59   0.2         6.3    1.4
Abnormal protan and especially tritan colour vision is associated with diabetic retinopathy [13]. A blue-yellow defect has also been described in both diabetic retinopathy and glaucoma [14]. In contrast to the optotype used for testing macular function, the ChromaTest has a separate glaucoma module designed to measure peripheral colour sensitivity changes in an arcuate manner using a central fixation point. This study did not cross-examine patients with glaucoma and diabetic retinopathy using both the glaucoma and macular modules, but it is feasible that further testing may reveal an overlap in colour defect for these patients. Although the mechanism of altered colour vision is unknown, there is evidence that reduced retinal oxygen saturation is associated with impaired colour vision in diabetics [15]. Error scores in colour vision have been found to be directly correlated with severity of macular oedema [16]. This may be similar to the effects of retinal detachment, where photoreceptors are shifted obliquely [16]. Correlation between selective loss of short wavelength pathway sensitivity and the severity of diabetic macular oedema has been demonstrated [17,18]. Therefore, we have concentrated on the study of untreated CSMO to ascertain the viability of such a screening method. The use of smaller letters (1.5 degrees; the ChromaTest module for age-related macular degeneration) might give better results for CSMO, as it may test macular function better than the larger 6.5 degree optotype.

This study included only patients with type 2 diabetes to reduce possible variability in pathogenesis. Although the mechanism of diabetic retinopathy is likely to be identical in type 1 and type 2 diabetes, previous studies such as the Early Treatment Diabetic Retinopathy Study and the Diabetic Retinopathy Study have investigated each type of diabetes separately. Laser photocoagulation was an exclusion criterion as it affects tritan colour vision [19].
Cataract and pseudophakia were not excluded, as both are more common in diabetics and their exclusion would have limited the usefulness of the ChromaTest in screening. It is understood that lens-yellowing due to cataract may cause pre-retinal absorption of short-wavelength light, resulting in tritan deficits. This may have influenced the overall sensitivity and specificity of the study, but it represents the realistic setting clinicians experience in their practice.

In colour contrast testing, the higher the TCCT or PCCT score, the more abnormal the result compared with age-matched normal levels. 30% (35 of 115) of patients with NPDR had TCCT above normal levels. 12 male patients were suspected of having congenital colour blindness, as their PCCT values were considerably worse than normal and did not correspond to their visual acuity or their fundus appearance.

Table 2: Colour Contrast Sensitivity in Patients with NPDR (N = 115) (Continued)

Age  LogMAR VA  Tritan  Protan
59   0.2        10.1    2.7
60   0.2         8      3.1
60   0.2        12.2    4.4
61   0           5.7    2.7
61   0           7.5    2.5
61   0.2         8.6    2.7
61   0.2        13.4    2.8
62   0          10.4    2.8
62   0.3        98.7   78.2
62   0.3        98.7   75.7
63   0           9.9    4
63   0.1        15.4    5
63   0.1        25.3    6.5
64   0          18.5    3.7
64   0.2        20.2    4
64   0.2        75.7   21.4
65   0.3        15.4    6.3
65   0.3        37.9   19.9
67   0          18.3    7.7
67   0          20.6    6.7
67   0.1        19.9    4.6
67   0.1        57.7    3.8
67   0.2         8.1    2.5
67   0.3        20      6.5
67   0.3        50.4    2.9
67   0.5        52.4    8.4
67   0.6        18.1    6.7
68   0.1        32.7    6
68   0.2        10.6    2.7
68   0.2        31.5    3.9
69   0          14.4    4.4
69   0.1        49.6    6.2
69   0.5        19.9    5.2
71   0           9.2   13.3
71   0          11.1    3.8
71   0.1         7.2   13.7
71   0.2         9.6    2.5
72   0.2        21.5    5.7
72   0.4         5.5    2.6
72   0.4        60.3    6.1
72   0.5        34.8    6.4
72   0.6        18.6    3.3
75   0          12.9    2.2
75   0.1        19.9    4
75   0.3        40.4    3.6
76   0.3        27.6    4.4
76   0.3        70.5    9.6
77   0.1        11.9    3.6
78   0          24      5.2
78   0.2        17.6    4
78   0.2        20.9    7.1
78   0.3        22.4   12.9
79   0.5        52.6   21.7
79   0.5        98.7   67.6
82   0          13.5    5.2
82   0.2        23.6    6.8

NPDR patients: Age, VA, TCCT, PCCT
This was not confirmed with any other mode of investigation, as the study was aimed at mimicking a realistic clinical setting where high-volume testing can be conducted without further time-consuming tests. 16 cases had severe NPDR and may have contributed to the poor results, whereas the remaining 7 had results not corresponding to their fundus appearance. We postulate that these 7 eyes may have had concurrent disease indistinguishable by indirect biomicroscopy, such as more advanced ischaemia. Ultimately, fluorescein angiography may have further elucidated the true pathology.

29% (10 of 35) of CSMO patients had TCCT better than normal levels. 8 eyes had CSMO qualified as 1 disc area of retinal thickening within 1 disc area of the fovea. 2 eyes had exudates with associated retinal thickening within 500 microns of the fovea, but both were left eyes, and it is possible that the patients were able to make educated guesses because they had been conditioned by testing with their right eye.

Unfortunately, we were forced to obtain normal threshold levels from the same dataset. These levels were obtained through analysis of cases without CSMO; therefore, the results may be biased. However, because this device is relatively new and further data from diabetics are of limited availability, we are limited to using this dataset to obtain "normal" threshold values. Further data will strengthen our case for the power of this diagnostic tool. The ChromaTest is unable to successfully screen patients with congenital colour blindness and performs less well for patients without foveal pathology. Conditioning following testing of the right eye may also allow patients to perform better with their left eye. Anecdotally, the time taken to test the second eye was observed by the investigators to be shorter than for the first. Repeated testing, which was not done in our study, may alleviate this problem.
This study has examined more untreated CSMO eyes with colour vision testing than any other published to date, but more data are required to solidify our findings. Colour contrast analysis may become a useful tool for defining the need for laser treatment, but so far our experience fails the Exeter Standards of the British Diabetic Association (Diabetes UK), which established screening levels of at least 80% sensitivity and 95% specificity [20].

Despite the limitations of the results, there was no discrimination by age or visual acuity, owing to the ease of the test. All patients were able to perform this test, unlike the 1.5% of patients failing to perform another automated TCCT test [21]. The average test time was fast at 5 minutes, and the test requires no mydriasis, unlike fluorescein angiography and fundus photography. Conditioning after repeated testing is an issue for reliability, but this study was aimed at mimicking realistic clinical settings where patients have no experience of colour contrast testing. Further studies to establish repeatability, and data for distinguishing normal results from abnormal ones, are planned.
The equipment required is relatively cheap and readily available compared with that required for optical coherence tomography or stereomacular photographs. It is also a non-invasive procedure and less labour intensive than fluorescein angiography.

Table 3: Colour Contrast Sensitivity in Patients with CSMO (N = 35)

Age  LogMAR VA  Tritan  Protan
31   0           8.5    3.6
31   0          11.1    4
42   0.2        14.1    4.5
44   0           7      1.9
44   0          18.8    2.6
51   0.2         8.8    2.6
52   0          29.6    3.5
52   0.3        72.3   10.7
55   0.2        18.4    3.5
56   0.3        18.4    2.9
56   0.5        36      5.6
58   0.1         7.7    2.7
58   0.3        78.2   13.7
59   0.2        23.6    3
62   0          70.5    7.7
62   0.1        49.9   11.4
63   0.4        27.3    6.7
65   0.1        85.9   14.4
65   0.3        98.7   16.9
67   0.1        16.1    3.2
67   0.2        11.8    3
67   0.3        80.8   12.4
68   0.2        13.3    3.2
69   0.1        23.3    5.3
69   0.5        30.3   16.1
70   0          21.5    6.8
70   0          35.4    5.6
70   0          32.7    5.5
70   0          62.8    9
70   0.5        98.7   20.8
71   0          98.7   14.7
71   0.2        64.8   20
71   0.3        98.7   42.3
72   0.7        68     18.4
72   0.9        57.7   16.9

CSMO patients: Age, VA, TCCT, PCCT

Table 4: χ2 test for TCCT detection of CSMO

               CSMO present  CSMO absent  Total
Test Positive       25            35        60
Test Negative       10            80        90
Total               35           115       150

Sensitivity = 71% (CI: 53–85%); Specificity = 70% (CI: 60–78%). χ2 test: p < 0.0001, comparing proportions of true positives among the test-positive versus test-negative subjects.

Conclusion
Non-ophthalmic doctors can have a retinopathy detection rate of 49%, compared with 96% for ophthalmologists [22]. A cost-effective method of screening for diabetic retinopathy is therefore essential. Screening by digital photography, proposed under the National Service Framework, is offered to all patients with diabetes in the United Kingdom. It is supplemented by biomicroscopy by ophthalmologists in monitoring and treating sight-threatening disease.
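The χ2 result quoted for Table 4 can be checked by hand from the four cell counts. A minimal sketch (Pearson's chi-square without continuity correction; `chi_square_2x2` is an illustrative helper, not a library function):

```python
def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic (no continuity correction) for a
    2x2 table [[a, b], [c, d]], computed from expected cell counts."""
    n = a + b + c + d
    row_totals = [a + b, c + d]
    col_totals = [a + c, b + d]
    expected = [[r * s / n for s in col_totals] for r in row_totals]
    observed = [[a, b], [c, d]]
    return sum((observed[i][j] - expected[i][j]) ** 2 / expected[i][j]
               for i in range(2) for j in range(2))

# Table 4: rows = test positive/negative, cols = CSMO present/absent
stat = chi_square_2x2(25, 35, 10, 80)
print(round(stat, 2))   # → 18.79
```

The statistic of roughly 18.8 on 1 degree of freedom comfortably exceeds 10.83, the critical value for p = 0.001, which is consistent with the reported p < 0.0001.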
Furthermore, optical coherence tomography has become a powerful tool in screening and monitoring CSMO, with sensitivity and specificity rates of near 80% and 90%, respectively [23]. Perhaps with further investigation, TCCT testing may become a supplement for detecting and monitoring sight-threatening pathology without much equipment or trained technicians. However, with current data, all forms of TCCT testing, including the ChromaTest, do not qualify for use in screening for CSMO.

Competing interests
The authors declare that they have no competing interests.

Authors' contributions
RW examined patients, conducted the investigation, conceived the study and drafted the manuscript. TA performed the statistical analysis. JK compiled the patient list and conducted the investigation. SS compiled the patient list and conducted the investigation. GA performed the statistical analysis. VC conceived of the study, and participated in its design and coordination. All authors read and approved the final manuscript.

Acknowledgements
No funding was obtained for this study.

References
1. Rohan TE, Frost CD, Wald J: Prevention of blindness by screening for diabetic retinopathy; a quantitative assessment. Brit Med J 1989, 299:1198-201.
2. Olafsdóttir E, Stefánsson E: Biennial eye screening in patients with diabetes without retinopathy: 10-year experience. Br J Ophthalmol 2007, 91:1599-601.
3. Whited JD, Datta SK, Aiello LM, et al.: A modeled economic analysis of a digital tele-ophthalmology system as used by three federal health care agencies for detecting proliferative diabetic retinopathy. Telemed J E Health 2005, 11:641-51.
4. Singer DE, Nathan DM, Fogel HA, et al.: Screening for diabetic retinopathy. Ann Intern Med 1991, 116:660-71.
5. Moss S, Meuer SM, Klein R, et al.: Are seven standard photographic fields necessary for classification of diabetic retinopathy? Invest Ophthalmol Vis Sci 1989, 30:823-8.
6.
Harding SP, Broadbent DM, Neoh C, et al.: Sensitivity and specificity of photography and direct ophthalmoscopy in screening for sight threatening eye disease: the Liverpool diabetic eye study. Brit Med J 1995, 311:1131-5.
7. Bresnick GH, Condit R, Palta M, et al.: Association of hue-discrimination loss and diabetic retinopathy. Arch Ophthalmol 1985, 103:1317-24.
8. Arden GB, Gunduz K, Perry S: Colour vision testing with a computer graphics system. Clin Vis Sci 1988, 2:303-20.
9. Arden GB: Testing contrast sensitivity in clinical practice. Clin Vis Sci 1987, 2:213-24.
10. Arden GB, Gunduz K, Perry S: Colour vision testing with a computer graphics system; preliminary results. Doc Ophthalmol 1988, 69(2):167-174.
11. ETDRS Research Group: Grading diabetic retinopathy from stereoscopic color fundus photographs: an extension of the modified Airlie House classification: ETDRS report number 10. Ophthalmology 1991, 98:786-806.
12. Caprio S, Tamborlane WV: Metabolic impact of obesity in childhood. Endocrinology & Metabolism Clinics of North America 1999, 28(4):731-47.
13. Treager SD, Knowles PI, De Alwys DV, Reffin JP, Ripley LG, Casswell AG: Colour vision deficits predict the development of sight-threatening disease with background retinopathy. Invest Ophthalmol Vis Sci 1993, 34:719. (ARVO Abstracts no 81)
14. Nitta K, Saito Y, Kobayashi A, Sugiyama K: Influence of clinical factors on blue-on-yellow perimetry for diabetic patients without retinopathy: comparison with white-on-white perimetry. Retina 2006, 26:797-802.
15. Dean FM, Arden GB, Dornhorst A: Partial reversal of protan and tritan colour vision defects with inhaled oxygen in insulin dependent diabetic patients. Br J Ophthalmol 1997, 81:27-30.
16. Verriest G, van Laethem J, Uvijls A: A new assessment of the normal ranges of Farnsworth-Munsell 100-Hue Test scores. Am J Ophthalmol 1982, 93:635-42.
17.
Ueda M, Adachi-Usami E: Assessment of central visual function after successful retinal detachment surgery by pattern visual evoked cortical potential. Br J Ophthalmol 1992, 76:482-485.
18. Greenstein VC, Sarter B, Hood D, et al.: Hue discrimination and S cone pathway sensitivity in early diabetic retinopathy. Invest Ophthalmol Vis Sci 1990, 31:1008-1014.
19. Ulbig MR, Arden GB, Hamilton AM: Color contrast sensitivity and pattern electroretinographic findings after diode and argon laser photocoagulation in diabetic retinopathy. Am J Ophthalmol 1994, 117:583-588.
20. British Diabetic Association: Retinal photography screening for diabetic eye disease. London: British Diabetic Association Report; 1997.
21. Ong GL, Ripley LG, Newsom RSB, Casswell AG: Assessment of colour vision as a screening test for sight threatening diabetic retinopathy before loss of vision. Br J Ophthalmol 2003, 87:747-752.
22. Sussman EJ, Tsiaras WG, Soper KA: Diagnosis of diabetic eye disease. JAMA 1982, 247:3231-3234.
23. Virgili G, Menchini F, Dimastrogiovanni AF, Rapizzi E, Menchini U, Bandello F, Chiodini RG: Optical coherence tomography versus stereoscopic fundus photography or biomicroscopy for diagnosing diabetic macular edema: a systematic review. Invest Ophthalmol Vis Sci 2007, 48:4963-73.
Pre-publication history
The pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1471-2415/8/15/prepub
work_mnwnzmmu7rgafmw2o5bminopaq ----

Assessing 3D metric data of Digital Surface Models for extracting archaeological data from archive stereo-aerial photographs

Heather Papworth (corresponding author)a, Andrew Fordb, Kate Welhama, and David Thackrayc

aDepartment of Archaeology, Anthropology & Forensic Science
bDepartment of Life & Environmental Sciences
cICOMOS UK
Bournemouth University
Faculty of Science and Technology
Talbot Campus, Fern Barrow
Poole, Dorset. BH12 5BB
Email: h.e.papworth@gmail.com

Abstract
Archaeological remains are under increasing threat of attrition from natural processes and the continued mechanisation of anthropogenic activities. This research analyses the ability of digital photogrammetry software to reconstruct extant, damaged, and destroyed archaeological earthworks from archive stereo-aerial photographs.
Case studies of Flower's Barrow and Eggardon hillforts, both situated in Dorset, UK, are examined using a range of imagery dating from the 1940s to 2010. Specialist photogrammetric software SocetGXP® is used to extract digital surface models, and the results compared with airborne and terrestrial laser scanning data to assess their accuracy. Global summary statistics and spatial autocorrelation techniques are used to examine error scales and distributions. Extracted earthwork profiles are compared to both current and historical surveys of each study site. The results demonstrate that metric information relating to earthwork form can be successfully obtained from archival photography. In some instances, these data out-perform airborne laser scanning in the provision of digital surface models with minimal error. The role of archival photography in regaining metric data from upstanding archaeology and the consequent place for this approach to impact heritage management strategies is demonstrated.

Keywords
Digital photogrammetry; archaeology; archive stereo-photographs; earthworks; reconstruction; digital surface models; laser scanning.

1. Introduction

Archaeological sites are subject to substantive ongoing decay and damage caused by a variety of both natural and human behaviours (Rowley and Wood 2008). Factors such as increased storm rates and sea-level rise, and the sharp growth in the efficiency, rate and scale at which many anthropogenic activities occur within the UK landscape, are a pressing issue for many regions (Oxford Archaeology 2002; Murphy et al. 2009). Subsequently it has been estimated that, of the 600,000 sites in England, one has been lost per day since 1945 (Darvill and Fulton 1998).
This equates to a projected disappearance rate of at least 25,000 sites over the past 70 years, and nowhere is this more apparent than in the tangible loss and damage of earthwork features.

In an attempt to mitigate this loss a number of conservation charters exist that advocate recording archaeological sites before they are destroyed (Bassegoda-Nonell et al. 1964; ICOMOS General Assembly 1996), but the reality of achieving this ideal is an immense challenge. Within the UK alone the high density and sheer variety of monuments mean that there are inevitably large numbers of sites where damage or destruction has occurred, and the level of recording undertaken has been minimal, or in some extreme cases non-existent. The ability to utilise archive data from the periods subsequent to the disappearance of such sites to reconstruct what previously existed would be of significant benefit, not only for the understanding of these monuments but, just as importantly, to inform the conservation, management and interpretation of our heritage assets in the future.

Fortunately, an archive containing these data does exist. In the UK, stereo-aerial photographs (SAPs) have been gathered regularly, and on a large scale, since the 1940s. These images hold within them the potential for landscape reconstruction. Three-dimensional (3D) data can be extracted using digital photogrammetry methods to create digital surface models (DSMs); an approach that has already been successfully used in the geomorphology and surveying disciplines to assess terrain change over time (Chandler 1989, Adams and Chandler 2002, Walstra et al. 2004, Walstra 2006, Miller et al. 2008, Aguilar et al. 2013).
In contrast, although archaeologists have been utilising aerial photography for over a century and are experts in prospecting for and mapping earthwork features from them in two-dimensions (Wilson 2000, p.16; Barber 2011, p.215), very few have acknowledged the inherent 3D properties of SAPs (Verhoeven et al. 2012) beyond the use of stereoscopes.

This research employs qualitative and quantitative methods to assess the ability of archive SAPs to reconstruct extant, damaged and destroyed earthworks. Two case study sites are used to compare the results obtained from digital surface models created from a range of SAPs to those achieved via modern metric survey techniques commonly used in the archaeology and heritage sectors: global navigation satellite systems (GNSS), and terrestrial and airborne laser scanners. The utility of these data when compared to both objective (metric) and interpretative (such as hachure plans) survey is discussed (Bowden and McOmish 2012; Blake 2014).

2. Data and Methods

2.1 Field Site Selection

The field study sites of Flower's Barrow and Eggardon Iron Age hillforts (Figures 1 and 2) were selected as they both contain a mixture of subtle and pronounced earthworks, some of which are well preserved and stable whilst others have been damaged or destroyed via natural or anthropogenic agents.

Figure 1: Location of Flower's Barrow hillfort within the United Kingdom including an orthophotograph of the site and examples of earthworks within the hillfort (Bottom Left) Occupation Platforms and (Bottom Right) a linear annex.

Coverage of each area with archive SAPs devoid of cloud cover and with suitable stereo-overlap is available, with a range of imagery for each decade from the 1940s to the present day. Flower's Barrow is situated on Defence Estates land and the site is only accessible to the public on weekends and during major school holidays, thus footfall is limited.
A condition assessment completed by Wessex Archaeology (2001) identified the hillfort and its environs as being in good condition, although the southern ramparts have been lost to cliff erosion since construction. The terrestrial hinterland is, however, stable. Eggardon is unique in that the northern half of the hillfort interior has been damaged by irregular ploughing since the 1940s, whilst the southern half is separated by a fence denoting the parish boundary and has remained in the custody of the National Trust, facilitating its preservation.

Figure 2: Location of Eggardon within the United Kingdom, including an orthophotograph of the site and examples of earthworks within the hillfort: (bottom left) linear features and pits, (bottom upper right) a damaged barrow and (bottom lower right) a henge monument.

2.1.1 Baseline Data Collection

A baseline reference metric survey was collected at each site using a Leica C10 terrestrial laser scanner (TLS), whose station locations were ascertained using a Leica Viva GNSS. A sub-10cm point cloud was achieved within the hillfort at each site, with the Mean Absolute Error for the dataset calculated by Leica Cyclone as 0.011m at Flower’s Barrow and 0.014m at Eggardon hillfort. The TLS point density created at each field site is shown in Figure 3.

Figure 3: Diagrams illustrating TLS point density created at Flower’s Barrow (above) and Eggardon hillfort (below).

To identify systematic errors in the TLS dataset prior to undertaking analysis with it, a number of random points were collected across each field site using a Leica Viva GNSS. The residual differences between TLS and GNSS elevation values are illustrated in Figure 4. The majority of these residuals fall below 20cm, with very few values exceeding this figure.
The graphs in Figure 5 illustrate the lack of correlation between the TLS–GNSS residual elevation values and both the proximity of measurements to the scanner and the TLS point density. It can therefore be said that neither the proximity of the TLS data to the scanner nor the point density of the data influences error in the TLS dataset.

2.2. Archive Stereo-Aerial Photographs and Airborne Laser Scanning

Archive SAPs for both sites were obtained from the National Monuments Record (NMR) in Swindon, Bournemouth University (BU), and Dorset County Council (DCC) (Table 1).

Figure 5: Scatter plots demonstrating the lack of a relationship between residual elevation values when examined as a function of proximity to the location of the C10 TLS (above) and as a function of TLS point density (below).

The prints from BU and DCC were scanned using an A3 desktop scanner, whilst the NMR scanned the requisite negatives using a Vexcel Photogrammetric Scanner. Each image was scanned at a resolution of 2400 dots-per-inch (dpi) and saved in the lossless TIFF file format. A recent set of commercially available digital SAPs was obtained from GetMapping Ltd, created using a Vexcel UltraCamX digital aerial camera and delivered in JPEG format from the RGB (not panchromatic) sensors.

Archive airborne laser scanning (ALS) data from the Environment Agency (EA) was obtained for Flower’s Barrow in its raw format to ensure that processing of the data could be undertaken in a transparent way, as the methods employed by the EA are not disclosed. A parallel dataset was not available for Eggardon.
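The relationship between scan resolution, photo scale and ground sample distance can be illustrated with a short sketch (this is an illustrative calculation, not code from the study): the scanned pixel size in metres multiplied by the photo scale number gives the approximate GSD, which is consistent with the ~0.10–0.13m values listed for the scanned photographs in Table 1.

```python
# Approximate ground sample distance (GSD) of a scanned aerial photograph.
# Illustrative sketch only: GSD = scanned pixel size (m) x photo scale number.

def scanned_gsd_m(scan_dpi: float, scale_number: float) -> float:
    pixel_size_m = 0.0254 / scan_dpi   # 1 inch = 0.0254 m
    return pixel_size_m * scale_number

# A 2400 dpi scan of a 1:10000 photograph (as used for the NMR negatives):
print(round(scanned_gsd_m(2400, 10000), 3))  # 0.106 m, in line with Table 1

# A related sanity check against Table 1: focal length x scale number
# approximates flying height, e.g. 0.1524 m x 10000 = 1524 m (August 1968).
```

The same relation explains why the digital UltraCamX imagery is quoted with a fixed 0.150m GSD: its resolution is set by the sensor rather than by a scanning step.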
Flower’s Barrow archive stereo-aerial photographs:

Date Flown | Archive/Creator | Scale | Focal Length (mm) | Flying Height (m) | Ground Sample Distance (m) | Image Type (Verticals) | Format (cm) | Original Media*
March 1945 | NMR/RAF | 1:10500 | 203.2 | 2133.6 | 0.110 | B&W | 12.7x12.7 | Negative
August 1968 | NMR/RAF | 1:10000 | 152.4 | 1524 | 0.118 | B&W | 23x23 | Negative
June 1972 | BU/J.A. Storey | 1:12000 | 151.85 | 1822.2 | 0.132 | B&W | 23x23 | Print
April 1982 | NMR/OS | 1:8000 | 304.8 | 2438.4 | 0.081 | B&W | 23x23 | Negative
June 1986 | DCC/C.E.G.B. Winfrith | 1:12000 | 153.05 | 1836.6 | 0.135 | B&W | 23x23 | Print
1997 | DCC/C.E.G.B. Winfrith | 1:10000 | 153.15 | 1531.5 | 0.114 | Colour | 23x23 | Print
Sept. 2009 | GetMapping Ltd. | - | 100.5 | - | 0.150 | Colour | 10.39x6.78 | Digital

Eggardon hillfort archive stereo-aerial photographs:

Date Flown | Archive/Creator | Scale | Focal Length (mm) | Flying Height (m) | Ground Sample Distance (m) | Image Type (Verticals) | Format (cm) | Original Media*
January 1948 | NMR/RAF | 1:10000 | 508 | 5029.2 | 0.104 | B&W | 20.95x19.05 | Negative
March 1948 | NMR/RAF | 1:10000 | 508 | 5029.2 | 0.114 | B&W | 20.95x19.05 | Negative
April 1969 | NMR/OS | 1:7500 | 304.8 | 2286 | 0.072 | B&W | 23x23 | Negative
October 1972 | DCC/J.A. Story | 1:12000 | 151.85 | 1822.2 | 0.122 | B&W | 23x23 | Print
April 1984 | NMR/OS | 1:8000 | 304.8 | 2438.4 | 0.078 | B&W | 23x23 | Negative
July 1989 | NMR/OS | 1:8200 | 304.8 | 2499.4 | 0.084 | B&W | 23x23 | Negative
Sept. 1997 | DCC/NRSC Airphoto Group | 1:10000 | 153.15 | 1531.5 | 0.104 | Colour | 23x23 | Print
2010 | GetMapping Ltd. | - | 100.5 | - | 0.150 | Colour | 10.39x6.78 | Digital

*Prior to purchase/digitisation, each image was examined as a photographic print to ensure suitable coverage and sufficient image quality.

Table 1: List of archive stereo-aerial photographs and their associated metadata for the field study sites, Flower’s Barrow and Eggardon hillforts.

2.3. Photogrammetric Processing

High-end photogrammetric software was chosen for processing archive SAPs, despite the popularity of Structure-from-Motion (SfM) software with the archaeological community (Hullo et al. 2009; Ducke et al.
2011; Verhoeven 2011; Plets et al. 2012; Verhoeven et al. 2012a, 2012b; Koutsoudis et al. 2013; De Reu et al. 2013; Green et al. 2014; McCarthy 2014). Although SfM was initially considered as a means of processing archive SAPs, it was disregarded for its lack of optimisation for use with traditional, high-resolution vertical stereo-aerial photographs. SfM is designed for use with lower-resolution photographs taken with a suggested overlap of at least 80% forward and 60% side (AgiSoft LLC 2014), not the 60% forward and 20-30% side overlap present in SAPs. Whilst it is possible to obtain a DSM from SfM using archive SAPs, any metric information extracted from them should be examined carefully.

DSMs were created from the SAPs using the SocetGXP® Automatic Terrain Extraction (ATE) algorithm, the settings for which were determined via experimentation as described in Papworth (2014, p.153). SocetGXP® leads the user through a step-by-step workflow, prompting the input of interior and exterior orientation parameters. As with other such software packages, the best results are obtained if camera calibration data, namely fiducial coordinates, principal point location and lens distortion parameters, are provided (Figure 6).

Figure 6: Diagram illustrating the components of interior orientation (top) and the information strip often provided with an aerial photograph (below) (Papworth 2014, p.59).

However, many of these measures are not available for archive SAPs; to account for this, the software includes a self-calibrating bundle adjustment routine. With the exception of the 2009 and 2010 SAPs from GetMapping Ltd, which were provided with camera calibration and exterior orientation information, the remaining SAP datasets were processed using the self-calibrating bundle adjustment to obtain the missing camera parameters.
To mitigate for the lack of interior orientation data and exterior orientation information (i.e. GNSS camera positions at the time of exposure and the associated rotation measures describing the attitude of the camera at this time), a large number of ground control points (GCPs) were collected (Figure 7). The GCPs were gathered as close to the point of interest (each hillfort) as possible. The objects used as GCPs were features identifiable in the archive SAPs, such as gate posts, road intersections and the corners of structures such as buildings.

Figure 7: Location map showing the distribution of GNSS ground control points, shown in red, for (a.) Flower’s Barrow (GCPs from alternative mapping sources shown in green) and (b.) Eggardon hillfort (Papworth 2014).

Flower’s Barrow proved challenging because of its location on a live firing range and the restricted access to the area surrounding the site. A large number of GCPs were recorded using a Leica Viva GNSS that, when operating in Network RTK mode (i.e. receiving real-time positional corrections from static reference stations within the UK via a mobile phone), has a stated accuracy of 8mm horizontally (+0.5 parts per million, or ppm) and 15mm vertically (+0.5ppm) (Leica Geosystems AG, no date). However, many of the GCPs situated to the north of the hillfort, within the fields and the farm area, were extracted from 1:1000 scale Ordnance Survey mapping and a 3rd-party 2m DSM. The fields and roads surrounding Eggardon were fully accessible, facilitating the collection of GCPs across the area of interest. Due to the lack of mobile phone reception in the area, Network RTK was not available and thus GCPs were gathered using a Leica GS10 reference station in combination with the GS15 rover. These data were subsequently post-processed using Leica GeoOffice software, which resulted in a mean accuracy per GCP of 0.014m.
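The quoted RTK specification combines a fixed term with a distance-dependent ppm term, where 1 ppm corresponds to 1mm of additional uncertainty per kilometre of baseline to the reference station. A minimal sketch of how such a specification is evaluated (the 30km baseline is a hypothetical illustration, not a figure from the study):

```python
# Evaluate a GNSS RTK accuracy specification of the form "a mm + b ppm",
# where the ppm term scales with baseline length (1 ppm = 1 mm per km).
# The 30 km baseline below is a hypothetical example.

def rtk_accuracy_mm(base_mm: float, ppm: float, baseline_km: float) -> float:
    return base_mm + ppm * baseline_km

print(rtk_accuracy_mm(8, 0.5, 30))   # horizontal: 23.0 mm
print(rtk_accuracy_mm(15, 0.5, 30))  # vertical:   30.0 mm
```

At the baseline lengths typical of a dense national reference network, the ppm term therefore remains small relative to the 20cm allowance later added for GCP identification.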
The workflow developed for this research utilised a small number of initial tie points to ensure an acceptable solution was achieved for the relative orientation between the SAPs when the bundle adjustment was first run. The overall root mean square error (RMSE) was used to assess the orientation result, as it represents a global measure of the accuracy with which the software has calculated the solution for the relationship between the SAPs and the GCPs. A minimum RMSE value was sought by variously tightening and loosening the exterior orientation accuracy values and removing tie points with the largest errors. Subsequently, more tie points were added along with a small number of GCPs. These were required to provide locational information in an appropriate coordinate system and to strengthen the relationship between the images. As each GCP recorded with the Viva GNSS is stored with data quality information, it was possible to input these values into SocetGXP®, adding an extra 20cm to the x/y values, as suggested by Walstra et al. (2011), to account for potential offsets caused by GNSS errors and by the observer identifying GCP locations within the imagery.

The process was continued until no further decrease could be obtained in the root mean square (RMS) value. The accuracies achieved during triangulation of the imagery are shown in Table 2. A DSM of 1m resolution was extracted from the data using the ATE adaptive algorithm and exported from SocetGXP® as a point cloud for interpolation in ArcMap 10.1. As these data were to be validated using independent DSM datasets, described in Section 2.4.1, standardising the interpolation algorithm used was necessary to limit the variables capable of influencing data quality and subsequently its analysis.
The ‘natural neighbour’ interpolator was employed because it has been identified as an accurate method to apply to high-resolution datasets (Abramov and McEwen 2004; Bater and Coops 2009) and is stated by Maune et al. (2007) to work well with both regularly and irregularly-spaced point cloud data without being prone to introducing artefacts.

2.4. Validation Methods

2.4.1. Quantitative Assessment

Objective assessment of error in the SAP DSMs was undertaken on their elevation values in comparison with those of the TLS collected at each field site (see Section 2.1). This was achieved by subtracting each of the SAP DSMs from the TLS DSM to create a DSM of Difference (DoD) for each SAP epoch that contained the residual values between the terrain models. These values were extracted from each cell of the DoD raster to create a table of residuals, which in turn was used to create summary statistics as described in Section 2.4.2. The desktop scanned SAPs for Flower’s Barrow were not assessed, as the pilot study utilised only the NMR imagery to determine the viability of the research.

Error assessment was enhanced by converting the elevation DSMs from both the SAPs and the TLS into first-order derivatives, namely ‘slope’ and ‘aspect’, as per the approach advocated by Gallant and Wilson (2000). Whilst useful in their own right as terrain attributes, first-order derivatives can enhance noise or other errors contained in the original elevation dataset, helping to identify problematic regions within it. Therefore each of the SAP slope DSMs was subtracted from the TLS slope model to create a slope DoD, and the same process was repeated for the aspect datasets. The residual values from each of the slope and aspect DoDs were exported to SPSS for statistical analysis (Section 2.4.2).
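The DoD and first-order derivative comparison can be sketched in a few lines of numpy on toy rasters (this is an illustrative sketch, not the SocetGXP®/ArcMap workflow itself; the surfaces and noise level are invented, and the aspect convention shown is one of several used by GIS packages):

```python
import numpy as np

# Sketch of a DSM of Difference (DoD) and first-order derivatives on toy
# 1 m rasters. The 'TLS' and 'SAP' surfaces below are invented examples.

def slope_aspect_deg(dsm: np.ndarray, cell: float = 1.0):
    """Slope and aspect (degrees) from an elevation raster."""
    dzdy, dzdx = np.gradient(dsm, cell)                  # row, column gradients
    slope = np.degrees(np.arctan(np.hypot(dzdx, dzdy)))
    aspect = np.degrees(np.arctan2(-dzdx, dzdy)) % 360.0  # one common convention
    return slope, aspect

rng = np.random.default_rng(0)
x, y = np.meshgrid(np.arange(50.0), np.arange(50.0))
tls = 100 + 0.2 * x                              # reference 'TLS' surface
sap = tls + rng.normal(0, 0.3, tls.shape)        # noisier 'SAP' surface

dod = tls - sap                                  # elevation DoD (residuals)
slope_dod = slope_aspect_deg(tls)[0] - slope_aspect_deg(sap)[0]
print(dod.shape, round(float(np.mean(dod)), 3))
print(round(float(np.mean(np.abs(slope_dod))), 2))
```

Note how the slope DoD amplifies the noise added to the ‘SAP’ surface, which is exactly why the derivatives are useful for exposing problem regions.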
Flower’s Barrow triangulation root mean square residuals:

Date Flown | Image (pixels) | X (m) | Y (m) | Z (m) | Total RMS (m)
Mar 1945 | 8.333 | 3.592 | 2.440 | 4.480 | 5.915
Aug 1968 | 1.004 | 2.776 | 2.120 | 6.494 | 7.374
Apr 1982 | 1.518 | 1.588 | 1.783 | 2.204 | 3.142
Sept 2009 | 1.136 | 8.326 | 9.699 | 3.157 | 1.317

Eggardon hillfort triangulation root mean square residuals:

Date Flown | Image (pixels) | X (m) | Y (m) | Z (m) | Total RMS (m)
Jan/Mar 1948 | 3.198 | 2.807 | 1.244 | 5.220 | 5.791
Apr 1969 | 1.142 | 3.797 | 8.092 | 4.085 | 8.948
Oct 1972 | 6.564 | 2.115 | 1.744 | 1.138 | 2.969
Apr 1984 | 6.243 | 5.900 | 3.164 | 1.815 | 1.934
Jul 1989 | 6.385 | 3.758 | 5.879 | 2.670 | 7.471
Sept 1997 | 2.466 | 7.829 | 1.452 | 5.256 | 1.731
May 2010 | 6.675 | 7.325 | 7.668 | 2.428 | 1.088

Table 2: Results of the image triangulation process in SocetGXP®.

2.4.2. Summary Statistics

A number of statistical measures were used to assess whether systematic or random errors were present in the SAP DSMs. Systematic errors are caused by a bias within the photogrammetric workflow, such as errors in the pixel geometry of a sensor (camera or scanner) or lens distortion (Mitchell 2007). These errors can be mitigated if they have been measured and modelled, and a correction for them applied (Wolf and Dewitt 2000). Random errors are sometimes referred to as noise and are related to data quality, although they tend to be difficult to predict. Within the SAPs, these errors are caused by poor image geometry and image blur, for example, although other sources of random error, such as image resolution and scale, can to some extent be predicted. In TLS data, random errors are caused by adverse influences on the laser beam, such as particulate matter in the air (i.e. water droplets), or by strong winds destabilising the instrument during data collection. Systematic errors tend to be a consistent value (i.e.
photographs taken with the same lens will all contain the same amount of lens distortion) and affect the accuracy of the data by shifting calculated values by a known quantity away from the ‘true’ value (the latter of which can never truly be determined). However, as systematic errors are consistent, they do not influence precision, which is related to the repeatability of measurements. Precision is instead influenced by random errors, which do not have consistent values and are often difficult to account for.

Global indicators of DSM error, namely root mean square error (RMSE), mean error (ME) and standard deviation (SD), were calculated to objectively compare the TLS DSM to each SAP DSM (Baily et al. 2003; Walstra et al. 2004; Papasaika et al. 2008; Aguilar et al. 2009; Walstra et al. 2011; Perez Alvarez et al. 2013). RMSE was used as an indicator of accuracy and to identify the presence of systematic errors within the SAP DSMs (Dowman and Muller 2011); the larger the RMSE, the poorer the accuracy of a dataset in comparison with the baseline TLS. SD was used to indicate the precision of the SAP DSMs, as this value is influenced by the presence of random errors in a dataset. ME values reveal bias (Fisher and Tate 2006), which can be introduced by systematic errors that cause under- or over-estimation of elevation values. With the exception of the RMSE, which was calculated in Microsoft Excel, the ME and SD for each DoD were computed using SPSS.

Frequency histograms of the residual values were generated from the data to determine whether the residual distribution between each SAP DSM and that of the TLS is normal. Such distributions are generally indicative of random errors, with increasing histogram width illustrating a decrease in precision.
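The three global indicators are linked by the identity RMSE² = ME² + SD² (using the population SD), which is why ME isolates bias while SD isolates random spread. A minimal sketch on invented residuals (the bias and spread values below are illustrative, not from the study):

```python
import numpy as np

# Global DoD error indicators: mean error (bias), standard deviation
# (precision) and RMSE (accuracy). Note RMSE^2 = ME^2 + SD^2.
# The residuals below are an invented example.

def summary_stats(residuals: np.ndarray):
    me = float(np.mean(residuals))                  # bias (systematic error)
    sd = float(np.std(residuals))                   # precision (random error)
    rmse = float(np.sqrt(np.mean(residuals ** 2)))  # overall accuracy
    return me, sd, rmse

rng = np.random.default_rng(1)
res = rng.normal(-0.1, 0.6, 50_000)   # toy residuals: -0.1 m bias, 0.6 m spread
me, sd, rmse = summary_stats(res)
print(round(me, 2), round(sd, 2), round(rmse, 2))
```

A dataset can therefore have a small ME yet a large RMSE: an unbiased but noisy DSM, which is the pattern the SD is there to expose.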
A skewed distribution, which contains the majority of residual values in either the left- or right-hand section of the graph, may indicate the presence of systematic bias in a dataset. In this particular instance, camera calibration information detailing the lens distortion parameters, fiducial mark coordinates and the principal point was not available for any of the SAP datasets, with the exception of the 2009 and 2010 images for Flower’s Barrow and Eggardon hillfort respectively, and thus it was anticipated that the results would highlight the presence of systematic errors.

The peak of the distribution also provides an important indicator of difference. If all systematic errors were eliminated via the use of an appropriate photogrammetric model, only the remaining random errors would influence this peak. The size of these errors is therefore represented by the kurtosis (‘spikiness’) of the histogram.

Scatter plots were also created to identify whether a linear relationship exists between the SAP DSMs and the independent dataset, namely the TLS. This process plots the elevation values of one dataset against another and can reveal how similar or different these data are. A positive, linear relationship, or correlation, between the two datasets would indicate their similarity.

2.4.3 Spatial Autocorrelation of Errors

In addition to the absolute (global) indicators of error calculated above, local Moran’s I analysis was undertaken to determine how errors were spatially distributed across a DSM. This was important because, although the DoDs for elevation, slope and aspect illustrate the spatial distribution of residual values, they do not indicate whether these differences are statistically significant.
The results generate a diagram indicating where, across the area of interest, clusters of statistically significant high (shown in red) or low (blue) residual values occur, as well as regions with non-significant values (grey); an example is provided in Figure 8. Statistically significant values that are surrounded by either much lower (light blue) or higher (orange) values are indicative of outliers. Together, both the clusters and outliers are suggestive of values that exceed what would be expected from random error (ESRI 2014).

Figure 8: Example of a Moran’s I diagram illustrating the statistical significance of residual slope values between the 1984 SAP data and that of the TLS across Eggardon. HH = statistically significant high value, HL = high value surrounded by low values, LH = low value surrounded by high values, LL = statistically significant low value. The significance level is set at 95%.

2.4.4. Qualitative Assessment

In addition to the metric performance of each dataset, the SAP DSMs were assessed against the interpretive surveys produced by the Royal Commission on the Historical Monuments of England (RCHME) for each site. Profile data were extracted from each SAP DSM epoch and their form compared to that of the profile lines recorded by the RCHME on their hachure plans of Eggardon (RCHME 1952) and Flower’s Barrow (RCHME 1970). Point data were generated along the location of the profile line as shown on each hachure plan by importing the georeferenced RCHME data into ESRI ArcMap v10. These were then extracted to provide locational information for the GNSS, and the stakeout function was used to locate the profile line in the field and re-measure it. This approach facilitated the direct comparison of profile data as captured during a conventional archaeological survey (RCHME) with that created by mass-capture and GNSS methods, to assess how representative the latter were of archaeological earthworks.
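The local Moran’s I statistic used in Section 2.4.3 can be sketched as follows (a minimal numpy illustration with queen contiguity and row-standardised weights; the cluster/outlier classes of Figure 8 additionally require a permutation-based significance test, which is omitted here, and the residual grid is invented):

```python
import numpy as np

# Minimal sketch of the local Moran's I statistic on a residual raster:
# I_i = (z_i / m2) * spatial_lag_i, with queen-contiguity, row-standardised
# weights. Significance testing (permutation) is omitted.

def local_morans_i(grid: np.ndarray) -> np.ndarray:
    z = grid - grid.mean()
    m2 = np.mean(z ** 2)
    rows, cols = z.shape
    padded = np.pad(z, 1, mode="constant")
    ones = np.pad(np.ones_like(z), 1, mode="constant")
    nsum = np.zeros_like(z)
    count = np.zeros_like(z)
    for dr in (-1, 0, 1):                 # sum over the 8 queen neighbours
        for dc in (-1, 0, 1):
            if dr == 0 and dc == 0:
                continue
            nsum += padded[1 + dr:1 + dr + rows, 1 + dc:1 + dc + cols]
            count += ones[1 + dr:1 + dr + rows, 1 + dc:1 + dc + cols]
    lag = nsum / count                    # row-standardised spatial lag
    return z * lag / m2                   # positive: cluster, negative: outlier

rng = np.random.default_rng(2)
g = rng.normal(0.0, 1.0, (20, 20))        # invented residual grid
g[5:10, 5:10] += 5.0                      # a block of clustered high residuals
li = local_morans_i(g)
print(li.shape, float(li[7, 7]) > 0)      # centre of the block: positive I
```

Positive values mark cells that resemble their neighbours (HH or LL clusters), negative values mark outliers (HL or LH), matching the classes annotated in Figure 8.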
3. Results

3.1 Quantitative Assessment of DSM Quality

3.1.1 Summary Statistics

The summary statistics from both Flower’s Barrow and Eggardon, based on the comparison of SAP DSM elevation, slope and aspect values with those of the TLS data, are presented in Table 3. The general trend in the results supports the observation that DSM quality increases as SAP age decreases. This is illustrated by the decrease in ME, SD and RMSE values, although there are two important exceptions: the DSMs derived from the desktop scanned prints within the Eggardon dataset and the 2009 digital photography obtained for Flower’s Barrow. The differences between these datasets and the others are more easily discernible by examining the slope and aspect results, which exhibit greater disparities due to the conversion of elevation values into first-order derivatives.

The residual histograms (Figures 9 and 10) display normal distributions for both field sites when comparing the slope and aspect differences between the SAP and TLS values.

Flower’s Barrow | TLS minus 1945 | TLS minus 1968 | TLS minus 1982 | TLS minus 2009 | TLS minus 2009 ALS
Number of values | 47361 | 51759 | 51759 | 51759 | 51759
Elevation Mean Error (m) | 0.035 | -0.163 | -0.126 | -0.097 | -0.399
Elevation Std. Deviation (m) | 1.204 | 0.741 | 0.213 | 0.619 | 0.394
Elevation RMSE (m) | 1.205 | 0.759 | 0.248 | 0.626 | 0.561
Slope Mean Error (degrees) | -2.210 | 1.630 | 0.213 | 0.584 | -0.582
Slope Std. Deviation (degrees) | 8.914 | 8.250 | 3.532 | 7.830 | 6.059
Slope RMSE (degrees) | 9.184 | 8.410 | 3.538 | 7.852 | 6.087
Aspect Mean Error (degrees) | -5.278 | 3.949 | -0.093 | -0.354 | -1.920
Aspect Std. Deviation (degrees) | 47.767 | 44.873 | 20.453 | 31.628 | 28.032
Aspect RMSE (degrees) | 48.057 | 45.046 | 20.453 | 31.629 | 28.097

Eggardon hillfort | TLS minus 1948 | TLS minus 1969 | TLS minus 1972 (DTS)* | TLS minus 1984 | TLS minus 1989 | TLS minus 1997 (DTS)* | TLS minus 2010
Number of values | 89021 | 89021 | 89021 | 89021 | 89021 | 89021 | 89021
Elevation Mean Error (m) | 0.172 | -0.044 | -0.145 | -0.386 | 0.076 | 0.171 | 0.282
Elevation Std. Deviation (m) | 2.572 | 0.390 | 0.664 | 0.248 | 0.150 | 0.505 | 0.155
Elevation RMSE (m) | 2.577 | 0.392 | 0.6796 | 0.459 | 0.168 | 0.533 | 0.321
Slope Mean Error (degrees) | -14.410 | -1.171 | -3.540 | -0.666 | 0.191 | -3.166 | 0.481
Slope Std. Deviation (degrees) | 12.253 | 4.249 | 6.385 | 3.300 | 3.021 | 5.959 | 2.278
Slope RMSE (degrees) | 18.915 | 4.408 | 7.301 | 3.366 | 3.027 | 6.748 | 2.328
Aspect Mean Error (degrees) | -13.663 | -4.581 | -7.232 | -1.202 | -0.681 | -7.487 | 1.569
Aspect Std. Deviation (degrees) | 68.852 | 52.521 | 61.173 | 48.557 | 44.069 | 62.097 | 39.191
Aspect RMSE (degrees) | 70.195 | 52.721 | 61.599 | 48.572 | 44.074 | 62.546 | 39.223

*DTS = desktop scanned prints

Table 3: Summary statistics for Flower’s Barrow (top) and Eggardon hillfort (bottom) showing the global errors between the TLS DSM and the SAP DSMs and their derivatives.

The majority of the histograms are leptokurtic, i.e. demonstrate a high peak. Greater variation in histogram shape was observed across the SAP DSM elevation values for both sites (Figures 9 and 10). The Flower’s Barrow elevation residual histograms were positively skewed for the 1945 and 1968 SAPs whilst negatively skewed for the 2009 SAPs and the 2009 ALS data, with the latter exhibiting a more leptokurtic peak. The 1982 SAP residual histogram was, however, normally distributed and displayed a leptokurtic peak. Bimodal distributions were observed in the Eggardon elevation residuals for the 1948, 1972 and 2010 SAP DSMs, indicating the presence of two modes (i.e. two residual values that most commonly occur) within the data. The second, much smaller peak of residuals for the 1948 and 1972 datasets comprises negative values, whilst the second peak within the 2010 data has a very small spread but a leptokurtic peak that occurs around 0m. The remaining epochs, namely the 1969, 1984, 1989 and 1997 SAP DSM residual histograms, are all normally distributed with leptokurtic peaks.

Figure 9: Residual histograms of DSM difference for Flower’s Barrow.
The normal distribution is represented by the bell-shaped line.

Scatter plot results for discerning the relationship between the SAP DSM elevations and the TLS DSM (Figures 11 and 12) generally increase in positive linearity as SAP age decreases for both sites, although there are a number of exceptions highlighted in the Eggardon dataset (Figure 12).

Figure 10: Residual histograms of DSM difference for Eggardon. The normal distribution is represented by the bell-shaped line.

The 1972 and 1997 Eggardon scatter plots (Figure 12), whilst broadly linear, contain a large amount of noise, as demonstrated by the spread of the values within the graph, which thickens the appearance of the linear scatter. However, this does not greatly affect their correlation values, as shown in Table 4, which contains Pearson’s ‘r’ values, where any value from 0.5 to 1.0 indicates a high, positive correlation (Field 2013, p.82).

Figure 11: Flower’s Barrow elevation scatter plots.

Figure 12: Eggardon elevation scatter plots.

The 1948 elevation results from Eggardon exhibit the weakest relationship, with an ‘r’ value of 0.717; this differs markedly from the results obtained for the 1945 SAPs at Flower’s Barrow, which obtained a value of 0.993. It is also evident that, for the data at both field sites, the correlation between the TLS and SAP data decreases when converted to a first-order derivative (Table 4). These statistical data also indicate that correlation improves as the age of the SAPs decreases.

3.1.2 Spatial Autocorrelation of DSM Errors

The residual values from the elevation, slope and aspect Moran’s I maps all demonstrate that the residual distribution is clustered for both sites, as illustrated in Figures 13 and 14.

Flower’s Barrow correlations | N* | Elevation Correlation | Slope Correlation | Aspect Correlation
TLS vs. 1945 | 47361 | 0.993 | 0.710 | 0.562
TLS vs. 1968 | 51759 | 0.997 | 0.700 | 0.595
TLS vs. 1982 | 51759 | 1.000 | 0.949 | 0.922
TLS vs. 2009 SAPs | 51759 | 0.998 | 0.750 | 0.814
TLS vs. 2009 ALS | 51759 | 0.999 | 0.856 | 0.856

Eggardon hillfort correlations | N* | Elevation Correlation | Slope Correlation | Aspect Correlation
TLS vs. 1948 | 89021 | 0.717 | 0.300 | 0.144
TLS vs. 1969 | 89021 | 0.981 | 0.725 | 0.510
TLS vs. 1972 | 89021 | 0.949 | 0.419 | 0.323
TLS vs. 1984 | 89021 | 0.992 | 0.832 | 0.572
TLS vs. 1989 | 89021 | 0.997 | 0.851 | 0.656
TLS vs. 1997 | 89021 | 0.967 | 0.423 | 0.308
TLS vs. 2010 | 89021 | 0.997 | 0.917 | 0.729

*N = number of elevation/slope/aspect values compared

Table 4: Pearson’s ‘r’ correlation values obtained by comparing the TLS elevation and first-order derivative data to that of the SAPs at both Flower’s Barrow and Eggardon hillfort.

Figure 13: Flower’s Barrow local Moran’s I results.

Figure 14: Eggardon local Moran’s I results.

An examination of the Flower’s Barrow elevation datasets in Figure 13 indicates consistent clustering of values on the rampart slopes for each epoch, including the ALS, with the exception of 1945. Within this diagram there appears to be some distinction between clustered low values to the south-west and clustered high values to the north-east. This is suggestive of a systematic error in the photogrammetric model created from the 1945 SAPs which, at first glance, may indicate a tilt within the SAP DSM. However, if this were the case, it would be expected that a distinctive cluster of high values would be followed by an area of non-significant values where the pivot point of the tilt would be, which would then turn into a region of low values. The pattern shown within the 1945 elevation Moran’s I diagram may show large blocks of high and low values, but each is interspersed with clusters of values of the opposite sign. Subsequently, it is unlikely that tilting of the SAP DSM is the cause of this difference.
This same pattern is not apparent in the slope and aspect derivatives from the 1945 DSM although, as derivatives of elevation, they are independent of the factors affecting a tilt in elevation models.

A pattern of high and low value clustering along the rampart slopes and in the ditches of Flower’s Barrow can be identified in all of the Moran’s I diagrams, with the exception of the 1945 elevation map. Based upon the quality assessment undertaken on the TLS data in Section 2.1.1, these errors are not caused by low point densities or accuracies in this dataset. Whilst the largest errors between the TLS and GNSS datasets were located along the innermost north-facing rampart slope, they do not occur sufficiently frequently elsewhere within the hillfort to suspect that deficiencies in the TLS data influence the pattern of spatial clustering within the Moran’s I results. The pattern of clustering suggests that, as slope angle increases, so too do the differences between the TLS and SAP datasets.

Within the Eggardon hillfort results, as shown in Figure 14, there are different patterns of clustering. Most notable within the elevation diagrams are the stripes of clustered values that are especially evident in the 1972 and 1997 DTS datasets. These stripes do not necessarily coincide with features in the imagery, such as the information strips, the edge of a frame or an artefact in the images themselves, although they have structure and are suggestive of a systematic error. As this pattern is noted in the two datasets scanned from prints using a desktop scanner, it is likely that the source of error is related to the use of print materials and an uncharacterised scanner, issues which are discussed in Section 4.3.
A further systematic error appears to be present in the 2010 elevation diagram, as shown by the clustering of significant high values to the west and significant low values to the east. The transition between these significant values occurs on the boundary between coverage of six stereo-pairs (west) and four stereo-pairs (east) in the image block. It is possible that this difference in stereo coverage created differences in the number and accuracy of tie points in the block adjustment, and also a difference in the number of stereo-matched points in the cloud used to interpolate the DSM. This would result in different significant residuals across the DSM when compared with reference data. Were the block extended to create consistent stereo coverage across the site, such that six stereo-pairs covered the east of the hillfort, it is likely that these significant differences in residuals would be of the same sign, minimised, or absent.

As with the Flower’s Barrow datasets, there are noticeable clusters of significant values along the rampart slopes of the hillfort in the majority of the elevation and slope diagrams for all SAP epochs. These are most pronounced along the slope of the east-facing rampart, situated to the east of the hillfort. As with the Flower’s Barrow results, an examination of the TLS data density and accuracy in Section 2.1.1 does not reveal any obvious deficit in the data which may influence the outcome of the Moran’s I analysis. There is one exception to this observation: the line of high values running from the north-west to the south-east of the hillfort, which coincides with the fence that denotes the boundaries of ownership. This feature was not removed from the TLS dataset and thus contains elevation values that are expected to be much higher than those within the SAP DSMs.
Overall, these results support the same conclusion as Flowers Barrow: that an increase in the slope angle of the ramparts decreases the accuracy of their representation in comparison with the TLS data.

3.2 Qualitative Assessment of DSM Data

The profiles illustrated in Figure 15 show that the results for Eggardon appear to create a more reliable reconstruction of the ramparts than those for Flower's Barrow. There is close conformity between the GNSS profile and the RCHME survey of Flower's Barrow, with the exception of the plateau in the middle of the profile. Aside from the 1945 and 1968 DSMs, the profiles from the remaining SAP DSMs are remarkably similar. The same can be said of the Eggardon profile, whereby SAP data from 1969 onwards conforms to the RCHME survey. In corroboration with the quantitative results, the DSMs from newer SAPs create profiles that are more consistent with the shape of the rampart banks and ditches than those from the older photography and the images scanned from prints using a desktop scanner.

Figure 15: Profile comparisons between the RCHME survey, GNSS and SAP DSM results.

Subsequently, it can be said that SAP DSMs can be used to provide archaeological survey data, although photography from the 1940s and desktop-scanned photographic prints cannot.

4. Discussion

The results, both empirical and qualitative, demonstrate that archive SAPs can be used to generate data akin to that of the RCHME surveys, whilst highlighting a number of issues that should be considered when undertaking DSM generation with these data.

4.1 SAP Age

It has been demonstrated that the quality of DSM output generally improves with decreasing imagery age: a trend which has also been identified by previous studies conducted by Walstra et al. (2004) and Aguilar et al. (2012). Both authors provide tables showing a decrease in RMSE values as the age of SAPs also decreases.
However, the relationship is complex and influenced by a variety of factors, such as camera and film quality, the availability of suitable GCPs and the state of preservation of photographic materials.

The weakest performing DSMs were produced from the 1945 and 1948 photographs of Flower's Barrow and Eggardon respectively (see Section 3.1.1). The 1945 SAP DSM of Flower's Barrow is missing the western tip of the hillfort due to a lack of stereo-overlap in this region. However, when comparing the appearance of the 1945 and 1948 DSMs with one another (see Figures 13 and 14), the 1945 data has a less noisy appearance and the lack of stereo-overlap does not appear to have hindered reconstruction of the hillfort to the centre and east of the earthwork. This disparity between two SAP datasets of similar age illustrates the variation in image quality, depending on location, for the period 1945-'51. During this period the RAF were responsible for the acquisition of aerial photography over the United Kingdom on behalf of the OS. The cameras and aircraft used were not designed for photogrammetric use (Owen and Pilbeam 1992). For instance, the most widely employed cameras (the F24 and F52) were non-metric, relatively small-format (5 x 5" and 8.5" x 7" respectively) and designed instead for aerial reconnaissance (Conyers Nesbit 2003). A minority of sorties were flown at relatively low altitude with comparatively short focal lengths (2133.6 m and 8" over Flower's Barrow in 1945), but the majority, especially in rural areas, were flown at greater altitudes with longer focal lengths (5029.2 m and 20" over Eggardon in 1948). The resulting photo scales were very similar (1:10500 and 1:10000 respectively) but with markedly different baseline-to-altitude ratios, with the latter making photogrammetry very difficult.
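The quoted photo scales follow directly from the flying heights and focal lengths given above, via the standard vertical-photography relation scale ≈ f/H. A quick check (figures from the text; the helper function itself is ours):

```python
# Verifying the quoted photo scales: scale denominator = flying height / focal length.

INCH_M = 0.0254  # metres per inch

def scale_denominator(flying_height_m: float, focal_length_in: float) -> float:
    """Approximate photo scale denominator for vertical photography
    over flat terrain: H / f, with both in the same units."""
    return flying_height_m / (focal_length_in * INCH_M)

flowers_1945 = scale_denominator(2133.6, 8)    # -> 10500.0, i.e. 1:10500
eggardon_1948 = scale_denominator(5029.2, 20)  # -> 9900.0, i.e. ~1:10000
```

The 1948 sortie works out at 1:9900, consistent with the rounded 1:10000 quoted; the near-identical scales despite very different altitudes are what make the baseline-to-altitude ratios, rather than scale, the discriminating factor.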
These are the most likely explanations for the differences between the 1945 and 1948 photogrammetric results presented here.

These problems were further exacerbated by the use of "split-vertical" camera pairs. In the case of the Spitfire PR XIX, as used over Eggardon in 1948, this arrangement had two F52 cameras angled 5° 20′ from the vertical (Evidence in Camera, March 1945). This resulted in low-oblique photographs with twice the coverage of a single camera, which was ideal for aerial reconnaissance but came at the expense of photogrammetry. With few photogrammetric plotters available (Whitaker 2014), the emphasis was instead on the creation of photo-mosaics for reconstruction purposes. The RAF were also unable to guarantee a supply of aircrew experienced in aerial survey, given problems of recruitment and retention (Macdonald 1992).

From 1951 the OS resumed control of aerial photography acquisition, but aircraft and aircrew were provided by the Ministry of Transport & Civil Aviation, who proved unsatisfactory for reasons similar to those affecting the RAF. For the majority of the 1950s and early 1960s aerial survey was limited in quantity and extent (Macdonald 1992), and thus archival photography was not available for either Flowers Barrow or Eggardon.

It was only after the purchase of 9 x 9" format metric cameras (e.g. the Zeiss RMK 30/23 and Wild RC8R) and the use of civilian contractors for flying that matters finally improved from the mid-1960s (Survey Season, FLIGHT International, 4th January 1965, pp. 49-50). Stricter quality-control requirements stipulating the timing and weather conditions (including sun angle) were also imposed by the OS and contractors. Therefore, DSMs from SAPs dating from the mid-1960s onwards for both sites do not demonstrate as stark a difference as the data from the 1940s.
Further developments saw the RAF re-equipped with the Canberra PR 3, 7 and 9 aircraft which, whilst still employing split-vertical pairs of F52s, also carried the F49 9 x 9" format survey camera (Oakey 2008). These resulted in superior photogrammetric products, as demonstrated by the SAP acquired over Flower's Barrow in 1968.

Based on the reasons stated above, the authors recommend that archive SAPs of the UK dating from the 1945-'65 period require independent consideration and verification, especially those acquired from 1945-'51.

In the 1980s the OS purchased cameras with forward motion compensation (e.g. the Zeiss RMK 15/23 and Wild RC10A/RC20), which further improved photogrammetric products (Macdonald 1992). However, it is arguable that the introduction of colour film at the OS and commercial companies, in part to meet the growing requirement for colour orthophotographs, may have negatively impacted photogrammetry in terms of the lower granularity associated with greater film density (the opposite of black and white films) (Langford 1998).

4.2 Image Resolution

In comparison with the analogue archive SAPs, the most recent 2009 digital photography captured over Flower's Barrow by GetMapping Ltd. did not perform as favourably as the ALS and 1982 SAP datasets. This is illustrated by the summary (global) statistics provided in Table 3. The elevation data from the 2010 digital photography of Eggardon was also out-performed by the 1989 OS SAPs on comparison of the RMSE values, which are 0.321 m and 0.168 m respectively. The cause of this is postulated to be the reduced detail captured by the digital multispectral camera system of the Vexcel UltraCam X, from which the images were provided (unfortunately, the panchromatic images were not available within the scope of this project). Unlike the 6 µm pixel size of its panchromatic camera, the multispectral imagery delivers a pixel size of 18 µm (Microsoft Corp.
2008), providing an image with pixel dimensions of 5770 x 3770 pixels and a ground-sample distance (GSD) of 0.15 m. Therefore, the newest SAP datasets for both field sites have the coarsest GSD of the group and contain less information per pixel than the film-based imagery of earlier sorties.

It should be noted that the image resolution of scanned photographs is dependent on the abilities of the scanner used to digitise them. Whilst many of the issues surrounding scanners and the effect of photographic materials on resolution are discussed in Section 4.3, a brief appraisal of scan resolutions is given here. Walstra et al. (2011) state that a scanned pixel size of 6 to 12 µm would be required to preserve a resolution of 30 to 60 lines/mm as provided by original film, based on a paper by Baltsavias (1999), who states that good DSM results can be acquired when using a scan resolution of 25-30 µm. The photographs for this research were scanned at a resolution of 2400 dots-per-inch, or 10.6 µm, whilst those utilised by Walstra et al. (2011) and Aguilar et al. (2013) range between 15 and 42 µm. However, the increased resolution of the scanned images used in this research will not necessarily increase the precision of the resultant DSMs if the original film quality, atmospheric conditions at exposure and processing methods, for example, are such that image quality is already degraded prior to scanning. Aguilar et al. (ibid.) compared the overall RMSE results obtained from 1977 imagery with values obtained by Walstra et al. (2007) for 1971 (1.31 m).

A further consideration regarding the GetMapping imagery is its provision in the compressed JPEG file format, as was the case for this research, and whether this may have had a negative impact on DSM quality. Lam et al.
(2001) have shown that utilising JPEG files with a compression ratio of less than 10 should have no significant impact upon DSM quality, unless the image is texturally rich; if so, there will be a more significant degradation of image quality, which in turn affects the quality of the resultant DSM. As no information relating to the compression ratio was provided with the GetMapping data for this research, the adverse influence of JPEG compression on the DSMs created using this imagery cannot be dismissed.

4.3 Photographic Materials and Scanning Technologies

The positive influence of well-defined photogrammetric scanning technology upon archive SAPs can be seen by examining the Eggardon dataset. Tables 2 and 3 demonstrate the overall weak performance of the 1972 and 1997 SAP DSMs. This disparity is due to the scanning of these images using commercially available desktop equipment that had not been characterised to provide error correction factors for any geometric or radiometric distortions it may introduce to the digitised photography. Geometric errors (e.g. lens distortion, defective pixels and CCD misalignments) and radiometric errors (e.g. illumination instabilities, stripes and electronic noise) are rarely, if ever, accounted for by manufacturers of desktop scanners (Baltsavias and Waegli 1996; El-Ashmawy 2014), with the radiometric errors in particular varying on a frequent basis (Baltsavias and Waegli ibid.), thus illustrating the instability of this equipment. El-Ashmawy (ibid.) states that photogrammetric scanners are manufactured to robust standards to ensure the accuracy of analogue-to-digital image production at every stage of the scanning process, which also explains the inflated cost of these scanners.
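The scan-resolution figures quoted above and in Section 4.2 mix units (dots per inch and micrometres). The conversion, and the line-pair resolution a given pixel pitch can preserve at the Nyquist limit, can be sketched as follows; this is a generic back-of-envelope helper, not part of the published workflow.

```python
# Converting scanner resolution (dpi) to pixel pitch (µm) and estimating
# the film resolution that pitch can preserve (Nyquist: two samples per
# line pair).

def dpi_to_micron(dpi: float) -> float:
    """Pixel pitch in micrometres for a given scan resolution in dots per inch."""
    return 25400.0 / dpi  # 1 inch = 25400 µm

def resolvable_line_pairs_per_mm(pixel_um: float) -> float:
    """Nyquist limit: two samples are needed per resolvable line pair."""
    return 1000.0 / (2.0 * pixel_um)

pixel = dpi_to_micron(2400)                  # ~10.58 µm, matching the quoted 10.6 µm
limit = resolvable_line_pairs_per_mm(pixel)  # ~47 line pairs/mm
```

At 2400 dpi the sampling limit of roughly 47 line pairs/mm sits within the 30-60 lines/mm credited to original film, consistent with the 6-12 µm requirement cited from Walstra et al. (2011).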
All other SAPs from this site were scanned from original negatives using a photogrammetric scanner (see Section 2.2), and the DSMs produced from this imagery performed well. The 1948 SAPs form the only exception to this result, producing data inferior to both the 1972 and 1997 SAPs; here, age is thought likely to be the dominant factor (see Section 4.1). A comparative study on the influence of scanning technologies upon archive SAPs and the extraction of DSMs using SfM software has been undertaken by Sevara (2013). The author also concluded that DSMs obtained using photography scanned from negatives contained less distortion and performed better than scans obtained from print materials. Print materials produce photographs with lower detail due to the larger size of the silver crystals used in their emulsions, as compared to those used in film (Walstra et al. 2011), and are known to be more susceptible to degradation (Jacobson 2000, p. 373). Degradation can take a variety of forms, such as image fading and staining (caused by residual chemicals in the photographic material or oxidisation of silver particles by atmospheric gases, for example) or microbial attack on the gelatin contained within a negative or print, and on the paper substrate of print materials. High humidity will cause the gelatin in photographic materials to swell which, in a photographic print, can also encourage some of the layers to detach, particularly around the edges, giving rise to 'frilling' as well as blistering on the print surface (Weaver 2008, p. 13). Weaver (ibid., p. 14) also highlights further issues with humidity, namely 'cockling' of print materials, which can occur differentially across a photograph as the gelatin and paper expand at different rates, and the reduction of paper flexibility, which increases the likelihood of tearing and cracking in this substrate.
These factors highlight the challenges in removing geometric distortions from archive photographs, particularly from prints.

Overall, the results from this section of our study reflect the findings of Walstra et al. (ibid.), in that the values obtained from scanned prints are worse than those from scanned negatives. Walstra et al. (ibid.) state that the accuracy values from scanned prints may be worse by a factor of 3.1 in comparison with scanned diapositives, although the authors also note that this value is based upon limited data. It should also be remarked that different photogrammetric software, namely Leica Photogrammetry Suite (LPS), was utilised by that study, and thus the results from an alternative package, such as SocetGXP, may differ. However, it is nevertheless a useful indication of the quality of data achievable when utilising poorer-quality source material.

4.4 Control and Tie Point Distribution

The distribution of tie points and GCPs across each set of imagery is suspected to be the cause of the bimodal distributions observed in the elevation residual histograms obtained for the 1948, 1972 and 2010 Eggardon datasets. A lack of sufficient control close to the edges of the photographs, where lens distortion is at its peak, may cause the triangulation routine to perform sub-optimally. Triangulation utilises both tie points and GCPs to calculate missing exterior orientation and camera calibration information, including the lens distortion parameters. Subsequently, image matching in these regions may perform sub-optimally, particularly if radial distortion has been modelled poorly, and the triangulation solution may contain errors that increase with distance from the principal point. This issue will propagate into the terrain extraction process, whereby unrepresentative elevation values may appear in areas where the SAPs overlap, causing artefacts to appear in the DSM.
These artefacts were observed in the 1972, 1997 and 2010 Eggardon DSMs, manifesting as a stripe effect running north-south through the hillfort (Figure 11). However, the 1997 residual histogram does not display a bimodal distribution, which may suggest that the elevation offsets caused by the stripe in the SAP DSM are masked by the other elevation values within the data.

Studies by Walstra (2006) and Aguilar et al. (2012) have referred to optimal GCP numbers when processing archive SAPs, with Walstra (ibid.) using between 4 and 9 per stereopair and Aguilar et al. (ibid.) identifying similar triangulation accuracies irrespective of whether 12 or 24 GCPs were utilised. Subsequently, the latter research recommended the use of 6 to 9 GCPs per stereopair (14 when using very old photographs of lower geometric and radiometric quality and/or smaller scale) if self-calibrating bundle adjustment was required, as was the case for this project due to the deficit of camera calibration data. However, as Walstra et al. (2011) note, the selection of suitable control in historical imagery is often limited, relying instead on site accessibility and/or change since the photography was captured, and must thus be considered based upon the features visible in any one image set. Subsequently, the number of GCPs identified per stereopair, per epoch, for this research was akin to that used by Walstra (2006), namely between 4 and 9 GCPs. It should also be noted that a number of existing scene features (e.g. fence and gate posts, road intersections) were used as GCP locations, which have been identified by Walstra et al. (2011) as a potentially large source of uncertainty, particularly when compared with artificial targets deployed as a control network during a contemporary photogrammetric survey.
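The sensitivity of edge regions to unmodelled radial distortion, noted above, can be illustrated with a simple odd-polynomial (Brown-style) distortion model. The coefficients below are hypothetical illustrative values, not calibration data from this project; the point is only that the displacement grows rapidly with radial distance, so sparse control near the frame edges leaves the largest distortions unconstrained.

```python
# Illustrating why poor control near photo edges matters: with a
# polynomial radial distortion model, displacement grows with r^3 and
# r^5, so it is largest near the frame edges.

def radial_displacement(r_mm: float, k1: float, k2: float) -> float:
    """Radial displacement dr = k1*r^3 + k2*r^5 (Brown-style odd polynomial)."""
    return k1 * r_mm ** 3 + k2 * r_mm ** 5

k1, k2 = 1e-5, 1e-9  # hypothetical coefficients, for illustration only
centre = radial_displacement(20.0, k1, k2)   # near the principal point
edge = radial_displacement(100.0, k1, k2)    # towards the frame edge
# Unmodelled distortion therefore mainly corrupts image matching (and
# hence DSM heights) in the overlap regions near the photo edges.
```

This is why the bimodal residual histograms plausibly trace back to poorly constrained distortion parameters rather than to the terrain itself.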
4.5 Camera Calibration Information

Camera calibration information was only available for the most recent digital photography, namely the 2009 Flower's Barrow and 2010 Eggardon datasets from GetMapping Ltd. However, this did not appear to influence the quality of the DSM results, particularly for the Flower's Barrow field site, over which the 1982 SAP DSM performed most favourably in comparison with the TLS, as highlighted by the summary statistics provided in Table 3. In addition, the inclusion of camera calibration information for the 2009 and 2010 digital photography did not prevent the clustering of residual values (Table 4). These results suggest that calibration information does not greatly influence the occurrence of systematic errors. Therefore, whilst it is good practice to work with camera calibration information where possible, for archival imagery its absence should not significantly impair the results obtained.

4.6 Archaeological Implications

Archaeological earthworks, particularly those at threat from attrition, should be regularly recorded for management and monitoring purposes (ICOMOS General Assembly 1996). Metric information is crucial for these purposes, playing an important role in the analytical process of understanding a monument and in mapping changes that are occurring or may occur in due course. However, as demonstrated by the field sites utilised for this study, the only known survey data for each consists of an interpretive hachure plan and selected profiles, which depict the form of the earthworks but do not facilitate future mapping and monitoring practices. Subsequently, regaining metric data for such purposes at these sites is only possible via the use of SAPs.
This research has demonstrated that many of the photogrammetrically scanned SAPs, and the DSMs created from them, have been able to provide metric information akin to that recorded using a TLS. This has been demonstrated by the high correlation values between the elevation data produced by photogrammetry and laser scanning. It has also been shown that profiles similar to the RCHME interpretive surveys can be extracted from SAP DSMs, thus illustrating the capability to extract archaeological information from archive SAPs.

5. Conclusion

In conclusion, this research has demonstrated that archive SAPs can produce data of archaeological utility. Archaeologists wishing to extract DSMs from these data are advised to consider a number of factors when pursuing their own SAP datasets. Firstly, SAP age influences DSM quality, with photography from the 1940s generating poorer results than DSMs generated using desktop-scanned prints. However, the 1945 SAPs processed for Flower's Barrow did show promise, particularly when reconstructing the rampart slopes, as illustrated by its profile. The digitisation method was also a key factor in producing high-quality DSMs, and thus obtaining digital images scanned from negatives using a photogrammetric scanner will provide the best results.

Modern aerial photography providers should be asked to supply data from the panchromatic sensors in their digital cameras in a lossless format (e.g. TIFF), as the RGB imagery processed by this research provided results that were, in the case of the Flower's Barrow study, poorer than imagery captured in 1982. Whilst camera calibration information did not appear to greatly influence the quality of the DSM results, GCPs should be gathered using GNSS equipment and, if possible, well distributed across the area of interest and throughout the photographic pair, strip or block.
The results presented by this research demonstrate that archaeologists must not rely solely on empirical analysis to assess DSM quality, but should continue to conduct visual assessments of the data. This approach has been advocated previously by the archaeological community with reference to ALS DSMs (Doneus et al. 2008; Corns and Shaw 2009). The implications of the success of this method for archaeological research management and the mitigation of earthwork loss are significant. It provides a means by which to recover metric information lost through attrition, particularly if no prior record has been created. The reconstruction of larger earthworks has been demonstrated; further work is required to assess this method for smaller earthwork features, such as pits and housing platforms.

6. Acknowledgements

This research was supported by a doctoral research studentship jointly funded by Bournemouth University (BU) and the National Trust. The authors would like to thank the technical staff at BAE Systems UK (Rick Mort, Steve Foster and Rut Gallmeier) for their help and advice regarding photogrammetric workflows, and Professor Stuart Robson and Dietmar Backes (University College London) for training and assistance with SocetGXP. Thanks are also due to Martin Schaefer (University of Portsmouth) for initially providing photogrammetric scans of BU imagery, and to members of staff at the National Trust (Dr Martin Papworth, Guy Salkeld and Alison Lane) for providing help and access to National Trust resources. Grateful thanks are also extended to the two anonymous reviewers who took the time and trouble to make valuable comments and suggestions regarding the content of this paper.

7. References

Abramov, O. and McEwen, A., 2004. An evaluation of interpolation methods for Mars Orbiter Laser Altimeter (MOLA) data. International Journal of Remote Sensing, 25 (3), 669-676.
Adams, J.
and Chandler, J., 2002. Evaluation of Lidar and Medium Scale Photogrammetry for Detecting Soft-Cliff Coastal Change. The Photogrammetric Record, 17 (99), 405-418.
AgiSoft LLC, 2014. Agisoft PhotoScan User Manual: Professional Edition, Version 1.1. AgiSoft LLC.
Aguilar, M., Aguilar, F. and Negreiros, J., 2009. Self-Calibration Methods for Using Historical Aerial Photographs with Photogrammetric Purposes. In: Ingegraf XXI, Lugo.
Aguilar, M. A., Aguilar, F. J., Fernández, I. and Mills, J. P., 2013. Accuracy Assessment of Commercial Self-Calibrating Bundle Adjustment Routines Applied to Archival Aerial Photography. The Photogrammetric Record, 28 (141), 96-114.
Baily, B., Collier, P., Farres, P., Inkpen, R. and Pearson, A., 2003. Comparative assessment of analytical and digital photogrammetric methods in the construction of DSMs of geomorphological forms. Earth Surface Processes and Landforms, 28 (3), 307-320.
Baltsavias, E. P. and Waegli, B., 1996. Quality Analysis and Calibration of DTP Scanners. In: International Archives of Photogrammetry and Remote Sensing, Vol. XXXI/B1, 13-19. Paper presented at the 18th ISPRS Congress, Vienna.
Baltsavias, E. P., 1999. On the performance of photogrammetric scanners. Photogrammetric Week '99, 155-173.
Barber, M., 2011. A History of Aerial Photography and Archaeology: Mata Hari's glass eye and other stories. Swindon: English Heritage.
Bassegoda-Nonell, J., Benavente, L., Boskovic, D., Daifuku, H., Flores, C., Gazzola, P., Langberg, H., Lemaire, R., Matteucci, M., Merlet, J., Pane, R., Pavel, S. C. J., Philippot, P., Pimentel, V., Plenderleith, H., Redig de Campos, D., Sonnier, J., Sorlin, F., Stikas, E., Tripp, G., de Vrieze, P. L., Zachwatovicz, J. and Zbiss, M. S., 1964. The Venice Charter. In: ICOMOS (Ed.). Venice.
Bater, C. W. and Coops, N. C., 2009. Evaluating error associated with lidar-derived DSM interpolation.
Computers and Geosciences, 35 (2), 289-300.
Blake, B., 2014. The slippery slope: CAD hachures. billboyheritagesurvey, 13th March. Available from: http://wp.me/pYvo7-1ZR [Accessed 14th March 2014].
Bowden, M. and McOmish, D., 2012. A British Tradition? Mapping the Archaeological Landscape. Landscapes, 12 (2), 20-40.
Chandler, J., 1989. The Acquisition of Spatial Data from Archival Photographs and Their Application to Geomorphology. (PhD). City University, London.
Conyers Nesbit, R., 2003. Eyes of the RAF: A History of Photo-reconnaissance. Stroud: Sutton Publishing Ltd.
Darvill, T. and Fulton, A., 1998. Monuments at Risk Survey 1995. Bournemouth and London: Bournemouth University and English Heritage.
De Reu, J., Plets, G., Verhoeven, G., De Smedt, P., Bats, M., Cherretté, B., De Maeyer, W., Deconynck, J., Herremans, D., Laloo, P., Van Meirvenne, M. and De Clercq, W., 2013. Towards a three-dimensional cost-effective registration of the archaeological heritage. Journal of Archaeological Science, 40 (2), 1108-1121.
Dowman, I. J. and Muller, J. P., 2011. Evaluation of Spaceborne DSMs: A Guide to Methodology. Committee on Earth Observation Satellites, Terrain Mapping Subgroup.
Ducke, B., Score, D. and Reeves, J., 2011. Multiview 3D reconstruction of the archaeological site at Weymouth from image series. Computers & Graphics, 35 (2), 375-382.
ESRI, 2014. ArcGIS 10.1 Desktop Help. Cluster and Outlier Analysis (Anselin Local Moran's I) (Spatial Statistics).
Field, A., 2013. Discovering Statistics Using IBM SPSS Statistics. London: Sage.
Fisher, P. F. and Tate, N. J., 2006. Causes and consequences of error in digital elevation models. Progress in Physical Geography, 30 (4), 467-489.
Gallant, J. C. and Wilson, J. P., 2000. Primary Topographic Attributes. In: Wilson, J. P. and Gallant, J. C., eds. Terrain Analysis: Principles and Practice. John Wiley & Sons Inc.
Green, S., Bevan, A.
and Shapland, M., 2014. A comparative assessment of structure from motion methods for archaeological research. Journal of Archaeological Science, 46, 173-181.
Hullo, F., Grussenmeyer, P. and Fares, S., 2009. Photogrammetry and dense stereo matching approach applied to the documentation of the cultural heritage site of Kilwa (Saudi Arabia). International Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, XXXVIII-3.
ICOMOS General Assembly, 1996. Principles for the Recording of Monuments, Groups of Buildings and Sites. Sofia, Bulgaria, 5-9 October.
Jacobson, R. E., 2000. Life expectancy of imaging media. In: Jacobson, R. E., Ray, S. F., Attridge, G. G. and Axford, N., eds. The Manual of Photography. Ninth Edition. Focal Press.
Koutsoudis, A., Vidmar, B. and Arnaoutoglou, F., 2013. Performance evaluation of a multi-image 3D reconstruction software on a low-feature artefact. Journal of Archaeological Science, 40 (12), 4450-4456.
Lam, K. W. K., Li, Z. and Yuan, X., 2001. Effects of JPEG compression on the accuracy of digital terrain models automatically derived from digital aerial images. The Photogrammetric Record, 17 (98), 331-342.
Langford, M., 1998. Advanced Photography. 6th Edition. Focal Press.
Leica Geosystems AG (no date). Leica Viva GS15 Data Sheet [online]. Available from: http://www.leica-geosystems.co.uk/downloads123/zz/gpsgis/Viva%20GS15/brochures-datasheet/Leica_Viva_GS15_DS_en.pdf [Accessed 22nd November 2015].
Macdonald, A. S., 1992. Air Photography at the Ordnance Survey from 1919 to 1991. The Photogrammetric Record, 14 (80), 249-260.
Maune, D. F., Kopp, S. M., Crawford, C. A. and Zervas, C. E., 2007. Introduction. In: Maune, D. F., ed. Digital Elevation Model Technologies and Applications: The DSM Users Manual. 2nd Edition. Bethesda, Maryland: The American Society for Photogrammetry and Remote Sensing.
McCarthy, J., 2014.
Multi-image photogrammetry as a practical tool for cultural heritage survey and community engagement. Journal of Archaeological Science, 43, 175-185.
Miller, P., Mills, J., Edwards, S., Bryan, P., Marsh, S., Mitchell, H. and Hobbs, P., 2008. A robust surface matching technique for coastal geohazard assessment and management. ISPRS Journal of Photogrammetry and Remote Sensing, 63 (5), 529-542.
Mitchell, H., 2007. Fundamentals of photogrammetry. In: Fryer, J., Mitchell, H. and Chandler, J., eds. Applications of 3D Measurement from Images. Caithness, Scotland: Whittles Publishing.
Murphy, P., Thackray, D. and Wilson, E., 2009. Coastal Heritage and Climate Change in England: Assessing Threats and Priorities. Conservation and Management of Archaeological Sites, 11 (1), 9-15.
Oakey, T., 2008. Candid Canberra. Aeroplane Monthly, 36 (2), 34-38.
Odle, J. E., 1965. The Williamson F49 Mark 4 Air Survey Camera. The Photogrammetric Record, 5 (25), 37-49.
Owen, T. and Pilbeam, E., 1992. Ordnance Survey: map makers to Britain since 1791. London: HMSO.
Oxford Archaeology, 2002. Management of Archaeological Sites in Arable Landscapes. London: DEFRA.
Papasaika, H., Poli, D. and Baltsavias, E., 2008. A Framework for the Fusion of Digital Elevation Models. In: XXI ISPRS Congress "Silk Road for Information From Imagery", Beijing, China.
Papworth, H., 2014. An assessment of archive stereo-aerial photographs for 3-dimensional reconstruction of damaged and destroyed archaeological earthworks [online]. (PhD). Bournemouth University. Available from: http://eprints.bournemouth.ac.uk/21780/ [Accessed 6th January 2016].
Pérez Álvarez, J. A., Herrera, V. M., Martínez del Pozo, J. Á. and de Tena, M. T., 2013. Multi-temporal archaeological analyses of alluvial landscapes using the photogrammetric restitution of historical flights: a case study of Medellin (Badajoz, Spain).
Journal of Archaeological Science, 40 (1), 349-364.
Plets, G., Verhoeven, G., Cheremisin, D., Plets, R., Bourgeois, J., Stichelbaut, B., Gheyle, W. and De Reu, J., 2012. The Deteriorating Preservation of the Altai Rock Art: Assessing Three-Dimensional Image-Based Modelling in Rock Art Research and Management. Rock Art Research, 29 (2), 139-156.
Royal Commission on Historical Monuments, 1970. An Inventory of the Historical Monuments of the County of Dorset: Volume II, South-East, Part 3.
Royal Commission on Historical Monuments, 1952. An Inventory of the Historical Monuments in Dorset: Volume I, West, Part 1.
Rowley, T. and Wood, J., 2008. Deserted Villages. Shire Publications Ltd.
Sevara, C., 2013. Top Secret Topographies: Recovering Two and Three-Dimensional Archaeological Information from Historic Reconnaissance Datasets Using Image-Based Modelling Techniques. International Journal of Heritage in the Digital Era, 2 (3), 395-418.
Walstra, J., Chandler, J. H., Dixon, N. and Dijkstra, T. A., 2004. Time for change - quantifying landslide evolution using historical aerial photographs and modern photogrammetric methods. In: Proceedings of the 20th ISPRS Congress, Istanbul, Turkey, 12-23 July 2004.
Walstra, J., 2006. Historical Aerial Photographs and Digital Photogrammetry for Landslide Assessment. (PhD). Loughborough University.
Walstra, J., Chandler, J. H., Dixon, N. and Wackrow, R., 2011. Evaluation of the controls affecting the quality of spatial data derived from historical aerial photographs. Earth Surface Processes and Landforms, 36 (7), 853-863.
Wessex Archaeology, 2001. Lulworth Ranges, South Dorset. Archaeological Integrated Land Management Plan (ILMP) Baseline Study. Desk-Based Archaeological Assessment and Monument Condition Survey.
Whitaker, K. A., 2014. The Wild Heerbrugg A5 in Britain in 2014. The Photogrammetric Record, 29 (148), 456-462.
Wilson, D. R., 2000.
Air Photo Interpretation for Archaeologists. Stroud: Tempus Publishing Ltd. 689 Wolf, P. R. and Dewitt, B. A., 2000. Elements of Photogrammetry: with Applications in GIS. 3rd. Boston, USA and 690 London, UK: McGraw-Hill. 691 Verhoeven, G., 2011. Taking computer vision aloft – archaeological three-dimensional reconstructions from aerial 692 photographs with photoscan. Archaeological Prospection, 18 (1), 67-73. 693 Verhoeven, G., Doneus, M., Briese, C. and Vermeulen, F., 2012a. Mapping by matching: a computer vision-based 694 approach to fast and accurate georeferencing of archaeological aerial photographs. Journal of Archaeological 695 Science, 39 (7), 2060-2070. 696 Verhoeven, G., Taelman, D. and Vermeulen, F., 2012b. Computer Vision-Based Orthophoto Mapping of Complex 697 Archaeological Sites: The Ancient Quarry of Pitaranha (Portugal-Spain). Archaeometry, 54 (6), 1114-1129. 698 1. Introduction 2. Data and Methods 2.1 Field Site Selection 2.1.1 Baseline Data Collection 2.2. Archive Stereo-Aerial Photographs and Airborne Laser Scanning 2.3. Photogrammetric Processing 2.4. Validation Methods 2.4.1. Quantitative Assessment 2.4.2. Summary Statistics 2.4.3 Spatial Autocorrelation of Errors 2.4.4. Qualitative Assessment 3. Results 3.1 Quantitative Assessment of DSM Quality 3.1.1 Summary Statistics 3.1.2 Spatial Autocorrelation of DSM Errors 3.2 Qualitative Assessment of DSM Data 4. Discussion 4.1 SAP Age 4.2 Image Resolution 4.3 Photographic Materials and Scanning Technologies 4.4 Control and Tie Point Distribution 4.5 Camera Calibration Information 4.6 Archaeological Implications 5. Conclusion 6. Acknowledgements 7. References work_mo5fsncxijaztgfuatvjl4thgm ---- untitled STUDY Development of a Photographic Scale for Consistency and Guidance in Dermatologic Assessment of Forearm Sun Damage Naja E. McKenzie, PhD, RN; Kathylynn Saboda, MS; Laura D. Duckett, CCRS; Rayna Goldman, MBA; Chengcheng Hu, PhD; Clara N. 
Curiel-Lewandrowski, MD

Objectives: To develop a photographic sun damage assessment scale for forearm skin and test its feasibility and utility for consistent classification of sun damage.

Design: For a blinded comparison, 96 standardized 8 × 10 digital photographs of participants' forearms were taken. Photographs were graded by an expert dermatologist using an existing 9-category dermatologic assessment scoring scale until all categories contained photographs representative of each of 4 clinical signs. Triplicate photographs were provided in identical image sets to 5 community dermatologists for blinded rating using the dermatologic assessment scoring scale.

Setting: Academic skin cancer prevention clinic with high-level experience in assessment of sun-damaged skin.

Participants: Volunteer sample including participants from screenings, chemoprevention, and/or biomarker studies.

Main Outcome Measures: Reproducibility and agreement of grading among dermatologists by Spearman correlation coefficient to assess the correlation of scores given for the same photograph, κ statistics for ordinal data, and variability of scoring among dermatologists, using analysis of variance models with evaluating physician and photographs as main effects and interaction effect variables to account for the difference in scoring among dermatologists.

Results: Correlations (73% to >90%) between dermatologists were all statistically significant (P < .001). Scores showed good to substantial agreement but were significantly different (P < .001) for each of 4 clinical signs and the difference varied significantly (P < .001) among photographs.

Conclusions: With good to substantial agreement, we found the development of a photographic forearm sun damage assessment scale highly feasible. In view of significantly different rating scores, a photographic reference for assessment of sun damage is also necessary.

Arch Dermatol.
2011;147(1):31-36

The quest for consistency in clinical assessment of sun damage has led to the development of objective grading methods for characterization and quantification of sun damage. The methods include descriptive,1,2 visual analog,3,4 and photographic grading scales.2,5 Published scales have been for facial assessment only, but when skin biopsies are required, forearms are preferable rather than cosmetically sensitive facial areas. Weiss and colleagues2 developed a descriptive scale for the assessment of overall cutaneous photoaging to be used along with facial photographic samples but did not discuss agreement or validity. The R.W. Johnson Pharmaceutical Research Institute descriptive scale1 achieved a chance-corrected agreement (κ coefficient) of 0.11. Dermatologic research protocols rely on consistent clinical identification, description, and quantification of sun damage in forearm skin. To date, no valid and reliable photographic assessment scale of forearm skin sun damage has been developed. The clinical assessment of human skin for sun damage is a highly subjective but vital part of evaluating the effectiveness of agents and interventions for their ability to reduce or reverse sun damage. Since histopathologic evaluation is a regulatory requirement along with clinical evaluation to assess safety and efficacy of test articles, biopsied tissue must be obtained. Human subjects considerations suggest that forearm skin, rather than facial skin, should be used for this purpose. This consideration alone makes an objective grading scale for forearm skin essential. Furthermore, a standardized teaching set will be valuable for developing a reproducible method and can support the comparison of findings from a variety of studies. The objective of this study was to begin the development of a consistent photographic assessment scale of sun damage in forearm skin, complemented with a descriptive scale, that can become a criterion standard in dermatologic studies. This study is the first step toward this objective.

Author Affiliations: College of Nursing (Dr McKenzie), Skin Cancer Prevention Annex (Mss Saboda, Duckett, and Goldman), Mel and Enid Zuckerman College of Public Health (Dr Hu), and College of Medicine (Dr Curiel-Lewandrowski), Arizona Cancer Center, University of Arizona, Tucson.

(REPRINTED) ARCH DERMATOL/VOL 147 (NO. 1), JAN 2011 WWW.ARCHDERMATOL.COM ©2011 American Medical Association. All rights reserved.

METHODS

A criterion standard is a performance standard with which experts or peers agree and with which individual practice can be compared.6 Establishing such a criterion standard requires a strong empirical relationship between the scale and the variable it represents.7 Forearm photodamage assessment in current studies8-10 is performed using a subjective 10-point scale for each of 4 clinical signs of UV-induced skin damage: fine wrinkling, coarse wrinkling, abnormal pigmentation, and a global assessment. The global assessment is used to give an overall impression of sun damage. Each clinical sign is ranked and subdivided as follows: absent (0), mild (1-3), moderate (4-6), and severe (7-9). This approach is similar to the R.W. Johnson Pharmaceutical Research Institute descriptive scale1,11 which is used for assessment of photodamage in facial skin. Our scale, the Dermatologic Assessment Form Forearm Photographic Assessment Scale, is presented in Figure 1.

PARTICIPANTS

In the spring and summer of 2007, a total of 48 adults (26 women [54.2%] with a mean age of 52 years and 22 men [45.8%] with a mean age of 63 years) were recruited for this study. Participants identified themselves as white (n = 47) or African American (n = 1).
Participants further identified themselves as Hispanic (n = 6) or non-Hispanic (n = 36); 6 did not provide any ethnic identification. The sample included community volunteers and participants taking part in screenings and clinical studies. Individuals whose dorsal forearms were unsuitable for use in a photographic scale, including those with significant inflammation or irritation, tattoos, or other markings, were not eligible. Individuals on the extremes, almost no sun damage and very severe sun damage, had to be sought by referral and invited to participate. One academic physician (C.C.-L.) and 5 community dermatologists agreed to assist with the study as raters. Of these, the academic physician was designated as the project's expert dermatologist and reference standard. This dermatologist is the primary study physician leading our clinical trials and therefore has the most experience assessing skin photodamage involving the forearm. This physician's initial grading was designated as the reference standard for subsequent gradings using the photographic scale.

ETHICAL APPROVAL AND INFORMED CONSENT

This study was approved by the institutional review board of the University of Arizona, which has a Federalwide Assurance with the US Office of Human Research Protections and functions under a Statement of Compliance. All participants provided signed informed consent.

DIGITAL PHOTOGRAPHY

Digital photographs were taken of the dorsal forearms from knuckle (metacarpal-phalangeal) to elbow to avoid personal identification. Both forearms of each of the 48 participants were photographed, for a total of 96 unique photographs. A Nikon COOLPIX 4300 digital camera (Nikon, Tokyo, Japan) was used with standardized methods to ensure consistency. Standardized lighting consisted of available overhead lighting in a windowless studio with no separate skin illumination.
The Anytime Flash setting was used with maximum aperture (preset between 2.8 and 7.6), and all photographs were taken on a uniform blue background. Additional settings included image size, 2272 × 1704; image quality, fine; focus, macro close-up automatic single mode; and sensitivity, 100 ISO. The focal length of the COOLPIX lens system is 8 to 24 mm. The expert dermatologist scored the photographs by clinical sign using our existing clinical sun damage assessment scale until all score categories were saturated for each clinical sign.

RANDOMIZATION AND GRADING

Each photograph was printed unedited in triplicate, coded, and paired with a blank dermatologic assessment scale form (Figure 1). The expert dermatologist performed the initial grading of the photographs, thus establishing our reference standard for comparison (Table 1). The triplicate image sets, consisting of 288 photographic pages, were randomly ordered in binders and delivered to the 5 evaluating dermatologists. They each blindly evaluated the 96 unique photographs 3 times. Finally, the dermatologist designated as the reference standard repeated evaluation of the randomized set of photographs.

Figure 1. Dermatologic Assessment Form Forearm Photographic Assessment Scale:

Clinical Sign          Absent  Mild   Moderate  Severe
Fine wrinkling         0       1 2 3  4 5 6     7 8 9
Coarse wrinkling       0       1 2 3  4 5 6     7 8 9
Abnormal pigmentation  0       1 2 3  4 5 6     7 8 9
Global                 0       1 2 3  4 5 6     7 8 9

Table 1. Distribution of Reference Standard Initial Grading by Category and Clinical Sign (No. of Photographs)

Category  Level  Fine Wrinkling  Coarse Wrinkling  Abnormal Pigmentation  Global Assessment
None      0       2               9                 5                      6
Low       1      12               5                12                     10
Low       2       7              11                 7                      8
Low       3      11               8                10                      9
Moderate  4       6               5                 4                      5
Moderate  5      10              14                13                     14
Moderate  6      21              10                19                     17
Severe    7      15              19                13                     15
Severe    8       9               7                10                      6
Severe    9       3               8                 3                      6
Two dermatologists were able to review only 287 photographs due to a missing image at the time of the evaluation. The remaining dermatologists evaluated the complete set of 288 photographs. The 2 missed evaluations were treated as missing data and imputed using an average of available data for the same reviewer and photograph.

ANALYSIS AND RESULTS

The nonparametric Spearman ρ was used to study the correlation of all scores given for each photograph. Analysis of variance models with random effects were used to study the difference in scores by different dermatologists. All analyses, random ordering, and graphs were carried out in Stata version 10 (StataCorp LP, College Station, Texas). We first analyzed the relationship among the 4 scores given to each photograph by the expert dermatologist (when setting the reference standard and as assessments 3 months later). Table 2 summarizes the Spearman ρ correlation coefficients. The expert dermatologist's assessments of the same photographs over time were highly and significantly correlated near or above 90% for all 4 clinical signs. The correlation between the expert dermatologist's assessment and the scores given by the 5 community dermatologists ranged from 73% to above 90% (Table 3) and were all statistically significant (P < .001). These results show that assessments by all dermatologists had a strong linear relationship with the reference standard scores. However, strongly correlated scores can be quite different in magnitude and ultimately fail to show agreement.12 Therefore, to quantify agreement among the community dermatologists and the reference standard, we calculated the κ statistic for ordinal data. Calculation of the κ statistic is based on the ratio of the observed to the expected (ie, by chance) agreement. All κ statistics (Table 4) fell between 0.28 and 0.76.
Guidelines for interpretation of κ vary. Landis and Koch13 would categorize 0.28 as "fair" and 0.76 as "substantial." Percentage of agreement among raters, calculated as part of the κ statistic (Table 4), showed that raters agreed with the reference standard 71% to 92% of the time. The highest percent agreement was between the original and final, blinded rating session of the expert dermatologist. Figure 2 shows the distribution of maximum deviation from the reference standard for each dermatologist and each clinical sign. Deviation is defined as the difference between a given score and the reference standard, and the maximum deviation is the one with the greatest magnitude (positive or negative) among the 3 scores for each photograph. Here, a deviation of ±3 was not rare and could exceed 5.

Table 2. Correlation of Reference Standard to Repeated Screening by Expert Dermatologist at 3 Months (Spearman ρ Correlation Coefficients)

Clinical Sign          Image Set 1  Image Set 2  Image Set 3
Fine wrinkling         0.87         0.92         0.91
Coarse wrinkling       0.92         0.91         0.91
Abnormal pigmentation  0.91         0.90         0.91
Global assessment      0.92         0.93         0.93

Table 3. Correlation of Reference Standard to Community Dermatologists (Spearman Correlation Coefficients)a

Clinical Sign / Dermatologist     Set 1  Set 2  Set 3
Fine wrinkling          B         0.79   0.87   0.87
                        C         0.88   0.89   0.88
                        D         0.71   0.69   0.74
                        E         0.81   0.81   0.83
                        F         0.74   0.86   0.86
Coarse wrinkling        B         0.90   0.89   0.91
                        C         0.92   0.93   0.91
                        D         0.82   0.83   0.85
                        E         0.82   0.83   0.87
                        F         0.86   0.85   0.88
Abnormal pigmentation   B         0.88   0.86   0.92
                        C         0.91   0.91   0.89
                        D         0.89   0.89   0.89
                        E         0.86   0.92   0.90
                        F         0.89   0.85   0.91
Global assessment       B         0.90   0.90   0.93
                        C         0.92   0.92   0.92
                        D         0.90   0.90   0.91
                        E         0.86   0.85   0.89
                        F         0.90   0.88   0.92

a All correlations are statistically significant at P < .001.

Table 4. κ Statistics by Clinical Sign for Average Rater Specific Agreement vs Reference Standarda

Clinical Sign / Dermatologist     Agreement, %   κ (95% Confidence Interval)
Fine wrinkling          A         92.1           0.76 (0.68-0.79)
                        B         90.3           0.69 (0.65-0.70)
                        C         90.7           0.71 (0.67-0.75)
                        D         71.4           0.28 (0.26-0.35)
                        E         77.3           0.37 (0.31-0.43)
                        F         90.2           0.68 (0.63-0.71)
Coarse wrinkling        A         91.7           0.76 (0.70-0.80)
                        B         88.4           0.70 (0.64-0.72)
                        C         91.0           0.76 (0.74-0.80)
                        D         71.1           0.29 (0.24-0.35)
                        E         86.3           0.61 (0.58-0.68)
                        F         89.5           0.72 (0.62-0.75)
Abnormal pigmentation   A         92.2           0.76 (0.74-0.79)
                        B         90.0           0.71 (0.68-0.76)
                        C         91.7           0.76 (0.73-0.79)
                        D         81.0           0.47 (0.41-0.55)
                        E         88.6           0.66 (0.63-0.69)
                        F         89.3           0.70 (0.63-0.75)
Global assessment       A         92.4           0.77 (0.72-0.78)
                        B         91.2           0.75 (0.69-0.76)
                        C         91.9           0.76 (0.74-0.80)
                        D         81.8           0.50 (0.44-0.54)
                        E         87.1           0.64 (0.55-0.69)
                        F         90.2           0.70 (0.69-0.75)

a A is the reference standard dermatologist; B through F, the community dermatologists.

We used 2-way analysis of variance to examine the dermatologist effect and the photograph effect. All of the expert dermatologist's assessments were excluded from the data to avoid potential bias. Analysis of variance indicated that the scores given by the 5 remaining dermatologists were significantly different (P < .001) for each of the 4 clinical signs, and the differences tended to vary among photographs (P < .001).

COMMENT

Current clinical protocols rely on consistent clinical assessment of sun damage in forearm skin to evaluate baseline and efficacy. To date, no valid and reliable photographic assessment scale of forearm skin sun damage has been developed.
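The chance-corrected agreement statistic reported in Table 4 can be illustrated with a small pure-Python sketch. The paper reports a κ "for ordinal data" without naming its exact weighting scheme, so the linear-weighted κ below is one standard choice for 0-9 ratings and should be read as illustrative, not as the authors' exact computation; the scores are invented.

```python
# Linear-weighted kappa for two raters' ordinal scores: disagreements are
# penalised in proportion to their distance on the scale, and the observed
# weighted disagreement is compared with that expected by chance.

def weighted_kappa(scores_a, scores_b, categories=10):
    """kappa_w = 1 - sum(w*observed) / sum(w*expected), with w_ij = |i - j|."""
    n = len(scores_a)
    observed = [[0.0] * categories for _ in range(categories)]
    for a, b in zip(scores_a, scores_b):
        observed[a][b] += 1.0 / n
    marg_a = [sum(observed[i]) for i in range(categories)]
    marg_b = [sum(observed[i][j] for i in range(categories))
              for j in range(categories)]
    num = sum(abs(i - j) * observed[i][j]
              for i in range(categories) for j in range(categories))
    den = sum(abs(i - j) * marg_a[i] * marg_b[j]
              for i in range(categories) for j in range(categories))
    return 1.0 - num / den

# A reference standard and one community rater (invented 0-9 scores):
reference = [0, 1, 3, 5, 5, 7, 8, 9]
rater     = [0, 2, 3, 4, 6, 7, 8, 9]
print(round(weighted_kappa(reference, rater), 3))  # → 0.891
```

Perfect agreement gives κ = 1 and complete reversal gives κ = −1, matching the −1 to +1 range of chance-corrected agreement discussed in the comment below; one-step disagreements on a 10-point scale lower κ only slightly.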
The purpose of this study was to develop and test a forearm photographic assessment scale that can be used to ensure such consistency when adopted by study dermatologists who are required to clinically assess photodamage. We plan to subject the scale to expanded testing in order to propose this scale as a criterion standard for general use in dermatologic studies. Weiss and colleagues,2 in studying the effect of topical tretinoin, used a paper scale that included clinical signs for the assessment of overall improvement in cutaneous photoaging of the face to be used along with photographic samples, but they did not discuss agreement or validity. Griffiths and colleagues1 developed a photonumeric scale that included the most common features of interest in the evaluation of photodamage of facial skin. The R.W. Johnson Pharmaceutical Research Institute descriptive scale1 included a detailed description of the manifestations of sun damage with a chance-corrected agreement (κ coefficient) of 0.11 without, and 0.31 with, accompanying facial photographs. Chance-corrected agreement ranges from −1 to +1, with scores of 0.40 to 0.75 considered fair and greater than 0.75 considered excellent or substantial.13,14 This scale is similar to our clinical assessment scale, but for facial skin. On our scale, hyperpigmentation and mottling have been combined into a single clinical sign and renamed abnormal pigmentation because, in the opinion of all of our principal investigators, pigmentation is difficult to separate into 2 different features. Visual analog scales rely on health care practitioners to estimate features visually on a metrically defined horizontal line. Developers of such scales for assessment of sun damage3,4 have described them as more sensitive than descriptive scales and highly reproducible, but they have not reported chance-corrected agreement or repeatability.
Our 10-step clinical assessment scale consists of 3 levels of severity: mild, moderate, and severe. Each of these is subdivided into 3 numerical grades, allowing for a more nuanced scale not unlike a visual analog scale. Photographic scales have the advantage of providing a consistent visual frame of reference, thus minimizing variability in perception and subjectivity. The photographic scale of Larnier et al5 consists of a set of 3 standardized photographs to represent each of 6 grades of sun damage, ranging from mild to very severe. The photographs were taken in a standard manner, from the same angle and of the same side (left) and region of the face. On assessment of interobserver agreement, chance-corrected κ scores ranged from 0.44 to 0.76 on the first and second occasions.

Figure 2. A, Distribution of maximum fine wrinkling scoring deviation from the reference standard for each dermatologist (A is the reference standard dermatologist; B-F are the community dermatologists). B, Distribution of maximum coarse wrinkling scoring deviation from the reference standard for each dermatologist. C, Distribution of maximum abnormal pigmentation scoring deviation from the reference standard for each dermatologist. D, Distribution of maximum global assessment scoring deviation from the reference standard for each dermatologist.
In addition, dermatologists with and without experience with sun-damaged skin scored similarly, supporting the notion that a photographic scale increases objectivity and standardization. Testing of our scale achieved similar or better interobserver agreement using blinded image sets. Figure 3 shows the global assessment photographs with the best agreement. An upper-extremity photonumeric scale was developed to assess skin aging in smokers and nonsmokers on the protected upper inner arm.15 The scale was effective in showing greater skin aging in smokers than nonsmokers. Efficacy and safety of a topical agent were evaluated using a photographic method consisting of baseline and repeated side-by-side projection of before-and-after images during 36 weeks of treatment,16 but the standard was relative and relevant only to that study. The quality of digital photography has improved greatly since the original description of the photographic method,17 justifying the establishment of an absolute standard for photodamage in forearm skin. Photographic evaluation of photodamage improvement has also been used in laser resurfacing and remodeling18; however, the photographs were facial and therefore not applicable to our scale. Forearm skin is also used to establish combination laser procedures before clinical use and the availability of a forearm skin scale may be useful in nonpharmaceutical approaches to photodamage. Shoshani and colleagues19 made a case for a clinically validated scale for the assessment of facial wrinkling. We propose that a forearm scale is equally necessary.

Figure 3. Global assessment: severity score 0 (A); 1 (B); 2 (C); 3 (D); 4 (E); 5 (F); 6 (G); 7 (H); 8 (I); and 9 (J).
Our findings support the ability of blinded, independent dermatologists to achieve good to excellent agreement and strong linear correlation among their scores as well as internal consistency of ratings, all at a level of high statistical significance. Nevertheless, there were differences in how the dermatologists rated the photographs. All dermatologists in this study have similar years of experience and we cannot immediately explain the differences in how the community dermatologists rated the photographs, although one of them sees primarily a retiree population and did rate the photographs less severely. The size of maximum differences may be related to the type of patients typically seen in the practices of the community dermatologists. However, even without training, our dermatologists achieved high agreement and significant correlation in how they rated the photographs. The high percentage of agreement testifies to the potential for improvement in consistency with training among dermatologists for whom agreement is vital. The inability of our photographic scale to account directly for hyperkeratotic features, for both extension of skin surface involvement and thickness, must be acknowledged as a limitation of our study. The next phase of scale development will include an objective form of categorical validation, such as optical coherence tomography20 or microscopy. We also acknowledge that the composition of our sample with regard to race and ethnicity, being mainly white, may limit generalization across all populations. Our sample heterogeneity is representative of our US and local populations; however, it will be expanded in the next phase of scale development.

CONCLUSIONS

Based on these results, the expanded Dermatologic Assessment Form Forearm Photographic Assessment Scale has great potential to yield highly consistent scoring of forearm sun damage in study participants.
Further steps are needed to create a training image set that can be considered the criterion standard for forearm sun damage.

Accepted for Publication: July 2, 2010.
Correspondence: Naja E. McKenzie, PhD, RN, Arizona Cancer Center, University of Arizona, PO Box 245024, 1515 N Campbell Ave, Tucson, AZ 85724-5024 (nmckenzie@azcc.arizona.edu).
Author Contributions: All authors had full access to all the data in the study and take responsibility for the integrity of the data and the accuracy of the data analysis. Study concept and design: McKenzie, Saboda, Duckett, Goldman, and Curiel-Lewandrowski. Acquisition of data: Saboda, Duckett, and Curiel-Lewandrowski. Analysis and interpretation of data: McKenzie, Saboda, Hu, and Curiel-Lewandrowski. Drafting of the manuscript: McKenzie, Goldman, and Curiel-Lewandrowski. Critical revision of the manuscript for important intellectual content: McKenzie, Saboda, Duckett, Hu, and Curiel-Lewandrowski. Statistical analysis: McKenzie, Saboda, and Hu. Obtained funding: Goldman. Administrative, technical, or material support: Duckett and Goldman. Study supervision: Hu and Curiel-Lewandrowski.
Financial Disclosure: None reported.
Funding/Support: This study was supported by grant P01 CA27502 from the Chemoprevention of Skin Cancer Program Project (principal investigator David S. Alberts, MD) and grant R25T CA78447 from the Cancer Prevention and Control Training Program (principal investigator David S. Alberts, MD).
Additional Contributions: We gratefully acknowledge Elka Eisen, MD, Stuart Salasche, MD, Gerald N. Goldberg, MD, Linda Ilizaliturri, MD, and Richard C. Miller, MD, the dermatologists who graded our image sets.

REFERENCES

1. Griffiths CE, Wang TS, Hamilton TA, Voorhees JJ, Ellis CN. A photonumeric scale for the assessment of cutaneous photodamage. Arch Dermatol. 1992;128(3):347-351.
2. Weiss JS, Ellis CN, Goldfarb MT, Voorhees JJ. Tretinoin therapy: practical aspects of evaluation and treatment.
J Int Med Res. 1990;18(3)(suppl 3):41C-48C.
3. Lever L, Kumar P, Marks R. Topical retinoic acid for treatment of solar damage. Br J Dermatol. 1990;122(1):91-98.
4. Marks R, Edwards C. The measurement of photodamage. Br J Dermatol. 1992;127(suppl 41):7-13.
5. Larnier C, Ortonne JP, Venot A, et al. Evaluation of cutaneous photodamage using a photographic scale. Br J Dermatol. 1994;130(2):167-173.
6. JAMA Instructions for Authors. JAMA. 1995;273(1):27-28.
7. DeVellis RF. Scale Development. Newbury Park, CA: Sage; 1991.
8. Einspahr JG, Bowden GT, Alberts DS, et al. Cross-validation of murine UV signal transduction pathways in human skin. Photochem Photobiol. 2008;84(2):463-476.
9. Stratton SP, Alberts DS, Einspahr JG, et al. A phase 2a study of topical perillyl alcohol cream for chemoprevention of skin cancer. Cancer Prev Res (Phila). 2010;3(2):160-169.
10. Stratton SP, Saboda KL, Myrdal PB, et al. Phase 1 study of topical perillyl alcohol cream for chemoprevention of skin cancer. Nutr Cancer. 2008;60(3):325-330.
11. Olsen EA, Katz HI, Levine N, et al. Sustained improvement in photodamaged skin with reduced tretinoin emollient cream treatment regimen: effect of once-weekly and three-times-weekly applications. J Am Acad Dermatol. 1997;37(2, pt 1):227-230.
12. Jakobsson U, Westergren A. Statistical methods for assessing agreement for ordinal data. Scand J Caring Sci. 2005;19(4):427-431.
13. Landis JR, Koch GG. The measurement of observer agreement for categorical data. Biometrics. 1977;33(1):159-174.
14. Landis JR, Koch GG. An application of hierarchical kappa-type statistics in the assessment of majority agreement among multiple observers. Biometrics. 1977;33(2):363-374.
15. Helfrich YR, Yu L, Ofori A, et al. Effect of smoking on aging of photoprotected skin: evidence gathered using a new photonumeric scale [published correction appears in Arch Dermatol. 2007;143(5):633]. Arch Dermatol. 2007;143(3):397-402.
16.
Maddin S, Lauharanta J, Agache P, Burrows L, Zultak M, Bulger L. Isotretinoin improves the appearance of photodamaged skin: results of a 36-week, multicenter, double-blind, placebo-controlled trial. J Am Acad Dermatol. 2000;42(1, pt 1):56-63.
17. Armstrong RB, Lesiewicz J, Harvey G, Lee LF, Spoehr KT, Zultak M. Clinical panel assessment of photodamaged skin treated with isotretinoin using photographs. Arch Dermatol. 1992;128(3):352-356.
18. Marini L. SPF-RR sequential photothermal fractional resurfacing and remodeling with the variable pulse Er:YAG laser and scanner-assisted Nd:YAG laser. J Cosmet Laser Ther. 2009;11(4):202-211.
19. Shoshani D, Markovitz E, Monstrey SJ, Narins DJ. The modified Fitzpatrick Wrinkle Scale: a clinical validated measurement tool for nasolabial wrinkle severity assessment. Dermatol Surg. 2008;34(suppl 1):S85-S91.
20. Korde VR, Bonnema GT, Xu W, et al. Using optical coherence tomography to evaluate skin sun damage and precancer. Lasers Surg Med. 2007;39(9):687-695.

COMMUNICATIONS OF THE ACM, April 2006/Vol. 49, No. 4

FIND THAT PHOTO! Interface Strategies to Annotate, Browse, and Share

By Ben Shneiderman, Benjamin B. Bederson, and Steven M. Drucker

As digital photos become the standard media for personal photo taking, supporting users to explore those photos becomes a vital goal. Dominant strategies that have emerged involve innovative user interfaces that support annotation, browsing, and sharing that add up to rich support for exploratory search. Successful retrieval is based largely on attaching appropriate annotations to each image and collection since automated image content analysis is still limited. Therefore, innovative techniques, novel hardware, and social strategies have been proposed. Interactive visualization to select and view dozens or hundreds of photos extracted from tens of thousands has become a popular strategy. And since the goal of photo search is to support sharing, storytelling, and reminiscing, experiments with new collaborative strategies are being examined.

While digital photographic databases and retrieval systems have been in use for many years, these systems were typically designed for professionals in museums, libraries, advertising, and journalism, to name a few specialties. Such systems employed a cadre of financially motivated individuals to hand-annotate the pictures with metadata such as keywords, dates, and locations, often using fixed vocabularies, to support traditional search techniques. By contrast, consumers typically put little effort into photo annotation; they are more focused on exploratory search and serendipitous discovery of photos with a stronger emphasis on entertainment. This leads to a very different set of requirements for personal photo use where ease of annotation, support for exploratory browsing, and convenient sharing is crucial.

Annotate. In textual exploratory search, users can enter key phrases from a document to retrieve similar content. But for images, retrieval based on content through automated analysis is often limited to some forms of shape analysis (such as finding the presence of faces in an image) and color matching to find sunrises or determine whether an image was taken inside or outside. To support effective exploratory search on photos, appropriate annotations must be associated with the images either by the camera or by users of the images, such as the photographer or potentially a larger community of users.
Cameras are increasingly recording information about the photograph, including time and date stamps, tilt sensors for orientation, light levels, focal distances, and even global position. Barcodes, RFID tags, or other labeling methods could enable a higher percentage of photos to be annotated automatically. Many interfaces enable manual annotation of photographs by "painting" keywords [3] or dragging and dropping names onto images. Commercial tools such as Adobe PhotoShop Album make tags draggable onto photo borders. Other tools perform temporal clustering to create a more manageable set of photo groups [1]. As with many tasks, manual annotation can be improved by designing interfaces that support faster and easier annotation as well as by making the future benefits more apparent. Automatic and manual annotations are valuable in supporting both searching and browsing. Browse. Users browse for fun and to find a specific photograph. They may be looking for photos of their grandfather, their hike down the Grand Canyon, or a friend's wedding. They also may be looking for a great photo to accompany a story of a sunrise hike or memorable baseball game. Clearly, if the photo collection has been extensively annotated, techniques such as faceted search (see Hearst's article in this section) can help users filter down a collection and show potential targets for browsing. User-controlled visualization of photos grouped by date, location, or annotation can greatly facilitate browsing and increase enjoyment [4]. Different layouts of photos can exploit this metadata to help people find desired photos and discover new ones. In particular, geo-tagging of photos and interfaces like WWMX allow people to find all those photographs associated with a particular area (see Figure 1).
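The temporal clustering used to form manageable photo groups [1] can be approximated with a simple rule: start a new event whenever the gap between consecutive timestamps exceeds a threshold. A minimal sketch (the two-hour threshold and the timestamps are illustrative assumptions, not values taken from the cited systems):

```python
def cluster_by_time(timestamps, gap_seconds=7200):
    """Group photo timestamps (seconds) into events; a new event starts
    whenever consecutive shots are more than gap_seconds apart."""
    events = []
    for t in sorted(timestamps):
        if events and t - events[-1][-1] <= gap_seconds:
            events[-1].append(t)   # same event: small gap since last shot
        else:
            events.append([t])     # large gap: begin a new event
    return events

# Three shots in quick succession, then one taken the next day:
shots = [0, 600, 1200, 90000]
events = cluster_by_time(shots)    # -> [[0, 600, 1200], [90000]]
```

The published systems go further, choosing representative photos per cluster and adapting the threshold to each user's shooting rhythm, but the gap rule captures the essence.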
Chronological displays work well for dates as well, but large numbers of photos can be overwhelming, so groups of photos can be clustered by date and representative photos can be manually or automatically chosen for each cluster [1, 2]. These representative photos again help to provide landmarks that let users locate photos from particular events. Interfaces such as PhotoMesa use powerful filtering tools, plus flexible grouping and rapid zooming, to enable users to explore thousands of photos fluidly (see Figure 2). Figure 1. The WorldWide Media Exchange (WWMX) interface showing map and calendar views along with images, as published in ACM Multimedia 2003; wwmx.org. Share. Sharing photos by email, instant messaging, Web sites, and cell phones is a growing success story. When users select photos and make them available to others, they seem to be willing to invest more effort in annotation. Also, by making them public, they invite others to comment and add annotations. More elaborate story-generating tools invite users to provide slideshow sequences with text captions and audio narration. Recent innovations in social experiences on the Web have sought to encourage annotation by increasing satisfaction and making the benefits immediately apparent. A game-like approach to image annotation gets players to cooperate with anonymous, remotely located partners in assigning keywords for photographs [5]. This surprisingly addictive game has succeeded in labeling over 10 million images as of August 2005 (since its introduction in 2003). Other communities, such as Flickr, allow users to share and annotate images on a Web site using tags. These "folksonomies" have now gone past photos to Web pages and blogs as well (such as Technorati and del.icio.us).
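The agreement rule behind that labeling game [5] is simple to state: a keyword is accepted for an image only if both players propose it independently (optionally excluding overused "taboo" words). A toy sketch of the rule, with invented label sets:

```python
def agreed_labels(player_a, player_b, taboo=()):
    """Accept a label for an image only if both players proposed it
    and it is not on the taboo list (a labeling-game-style rule)."""
    return (set(player_a) & set(player_b)) - set(taboo)

a = ["dog", "beach", "sunset"]
b = ["sunset", "dog", "ocean"]
labels = agreed_labels(a, b, taboo=["dog"])   # -> {"sunset"}
```

Requiring independent agreement is what makes the collected keywords trustworthy: a label survives only when two strangers, seeing the same image, converge on it.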
The trend toward annotating, browsing, and sharing your photos via Web sites such as Flickr, Ofoto, and Shutterfly is perhaps one of the biggest changes enabled by the transformation from analog to digital photography. Photos no longer sit unattended in shoeboxes stored in attics, but are available for ready viewing by friends and family distributed around the world. SUMMARY. A combination of annotation, browsing, and sharing of photos can support the special exploratory search needs of personal digital photo users by getting around the fact that direct search of image content continues to be beyond the capabilities of current systems. The special needs of amateur digital photographers are pushing the photo industry to support users in their desired activities. Social networking, in combination with innovative user interfaces and visualization, is just beginning to support everyday photographers. However, we see significant work remaining, especially in metadata standardization to help users cope with their rapidly growing and increasingly valued collections. References 1. Graham, A., Garcia-Molina, H., Paepcke, A., and Winograd, T. Time as essence for photo browsing through personal digital libraries. In Proceedings of the 2nd ACM/IEEE-CS Joint Conference on Digital Libraries (2002). ACM Press, NY, 326–335. 2. Huynh, D., Drucker, S., Baudisch, P., and Wong, C. Time Quilt: Scaling up zoomable photo browsers for large, unstructured photo collections. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (2005). ACM Press, NY, 1937–1940. 3. Kuchinsky, A., Pering, C., Creech, M., Freeze, D., Serra, B., and Gwizdka, J. FotoFile: A consumer multimedia organization and retrieval system. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (1999). ACM Press, NY, 496–503. 4. Kustanowitz, J. and Shneiderman, B.
Meaningful presentations of photo libraries: Rationale and applications of bi-level radial quantum layouts. In Proceedings of the 5th ACM/IEEE-CS Joint Conference on Digital Libraries (2005). ACM Press, NY, 188–196. 5. von Ahn, L. and Dabbish, L. Labeling images with a computer game. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (2004). ACM Press, NY, 319–326. Ben Shneiderman (ben@cs.umd.edu) is a professor and the founding director of the Human-Computer Interaction Lab, Computer Science Department, at the University of Maryland, College Park, MD. Benjamin B. Bederson (bederson@cs.umd.edu) is an associate professor and director of the Human-Computer Interaction Lab, Computer Science Department, at the University of Maryland, College Park, MD. Steven M. Drucker (sdrucker@microsoft.com) is lead researcher of the Next Media Research Group, Microsoft Research, Redmond, WA. © 2006 ACM 0001-0782/06/0400 $5.00. Figure 2. PhotoMesa showing 114 photos in six groups in a single view with integrated annotation and search tools, as published in ACM UIST 2001 (courtesy of www.photomesa.com).
work_mppsgsojybcnxlptuups522f4a ---- A Preliminary Report of the Virtual Craniofacial Center: Development of Internet-/Intranet-Based Care Coordination of Pediatric Craniofacial Patients | Semantic Scholar
DOI:10.1097/00000637-200105000-00010 Corpus ID: 1342741
@article{Goodwin2001APR, title={A Preliminary Report of the Virtual Craniofacial Center: Development of Internet-/Intranet-Based Care Coordination of Pediatric Craniofacial Patients}, author={M. Goodwin and L. R.
Otake and J. Persing and J. H. Shin}, journal={Annals of Plastic Surgery}, year={2001}, volume={46}, pages={511-516}}
M. Goodwin, L. R. Otake, J. Persing, and J. H. Shin. Annals of Plastic Surgery, 2001. The authors present preliminary information regarding the development of an Internet-based Virtual Craniofacial Center that provides access to a patient database with visual and textual data. Patients are photographed by digital camera with standardized images. Through a Web site linked to a remote database, patient demographics, management data, reports, and acquired digital photographic images are stored and retrieved. The database can be used to sort and to present data as desired by…
work_mqiwdizcorctbajejdbvzd5wnq ---- 'Revolutionary' is used too often to describe advances in science. When applied to the invention of the charge-coupled device (CCD) array by Willard Boyle and George Smith, however, it is not far off. Yet, the CCD was not originally intended for applications in digital imaging, for which Boyle and Smith received the 2009 Nobel Prize in Physics, but rather as a potential new form of digital memory. In the late summer of 1969, Boyle and Smith, who were working at Bell Laboratories, were told to come up with a semiconductor memory that could compete with the so-called 'magnetic bubble' memory that was being developed by a rival group of their division. Bubble memory worked by injecting magnetic domains into garnet patterned with an array of ferromagnetic bars. Applying an alternating magnetic field caused these domains, or bubbles, to hop within the garnet from underneath one bar to the next, like packages on a conveyor belt. By taking the presence or absence of a bubble to represent a 1 or a 0, respectively, such a device could be used to store a series of digital bits. Boyle and Smith spent barely an hour at the blackboard devising an electronic alternative. Instead of magnetic bubbles, they proposed to use electronic charges injected into metal-oxide-semiconductor (MOS) capacitors grown on silicon. By placing two capacitors close to each other and applying electric voltages, they could induce the charge to move from one to the next. In this way, packets of charge could be passed down a linear array of 'charge-coupled' MOS capacitors, mimicking the operation of a bubble array. Ironically, although the operation of the device was a success, neither it nor bubble memory ever took off as a means of storing digital information. But it did not take long for Boyle and Smith to realize that it might have other uses. At around the same time, Bell Laboratories was working hard to develop the Picturephone, a crude videoconferencing system. The commercial cathode-ray-tube cameras it used were notoriously unreliable, and a more dependable alternative was eagerly sought. The CCD provided the solution. The simplicity of fabricating large sensor arrays, combined with a linear optical response to even the faintest light sources, has meant that 40 years after their invention CCDs are still used in large-scale optical telescopes, including the Hubble Space Telescope. It has also allowed them to become cheap enough to be integrated into most modern mobile phones, a fact that news agencies increasingly rely on for important events. So, although the revolution might not be televised, thanks to the CCD it will almost certainly be photographed. Ed Gerstner, Senior Editor, Nature Physics. ORIGINAL RESEARCH PAPERS Boyle W. S. & Smith, G. E. Charge coupled semiconductor devices. Bell Syst. Tech. J. 49, 587–592 (1970) | Amelio, G.
F., Tompsett, M. F. & Smith, G. E. Experimental verification of the charge coupled device concept. Bell Syst. Tech. J. 49, 593–600 (1970). FURTHER READING Smith, G. E. The invention and early history of the CCD. Nucl. Instr. Meth. Phys. Res. A 607, 16 (2009). MILESTONE 14: Digital photography is born. The CCD inventors, Willard Boyle (left) and George Smith (right). Image courtesy of Alcatel-Lucent/Bell Labs. Milestones, NATURE MILESTONES | PHOTONS, MAY 2010. © 2010 Macmillan Publishers Limited. All rights reserved.
work_mrkoqblu2zeada67u2je34tsqm ----
work_mu2w2g2xajevxiwhfpv77glezy ---- © 2008 Palgrave Macmillan Ltd 1743–6540 Vol. 4, 3 JOURNAL OF DIGITAL ASSET MANAGEMENT www.palgrave-journals.com/dam In this issue (Volume 4, Number 3), our topic is Digital Value Chains.
Digital value chains entail the next phase of digital asset management: the linking up of several "loosely coupled" systems of multichannel publishing and pan-regional marketing communications. In this issue, we bring you a selection of interviews with industry leaders: Dennis Pannuto, Principal at Aha! Insight Technology, speaks with us about the transformation of agencies into digital service providers; BJ Gray, AVP of Marketing Operations for Victoria's Secret, talks about automating the workflow process that starts with in-field digital photography shoots or sessions and ends with finalized, color-corrected retail displays; Rak Bhalla, Marketing Manager for Adobe Systems, gives some insight into how Acrobat 3D supports the review, approval, and distribution of 3D CAD models, drawings, and engineering data; and John Hingley, CEO at Andiamo Systems, rounds out the theme of this issue by discussing the emergence of social media as a marketing tool and the need for low-cost systems to track conversations, themes, and sentiments about your brand throughout the blogosphere and social networking sites. In this issue's installment of Cycle Time, Michael Moon clarifies the relationship between digital supply chains and digital value chains, kicking off with some insights on the value-add of each to digital asset management before drilling down to a definition of a supply chain and a value chain, and the distinctive value each offers toward marketing and innovation for the enterprise. Looking at the current landscape of business as a digital supply chain, what is required of the new digital agency? Dennis Pannuto looks at the shift from the traditional agency to a future where specialists are closely aligned with the business and senior management is IT savvy. In other words, it leads to the merging of the CMO and CIO roles.
Dennis also spoke with us regarding the new generation of self-directed consumers and how the digital agency can address the new set of dynamics they bring, employing lessons learned from successful social networking ventures. What buying criteria do marketing executives employ to go digital with successful multi-channel publishing? Michael Moon and Andrew Salop (of Metaseed.NET) interview BJ Gray on how she researched different types of systems to find a solution that would work around the needs, language, and established workflow at Victoria's Secret. The lifecycle for a product has historically been three years to market, while now it is not much more than one year. How can one software family help facilitate collaboration in this type of fast-moving, global work environment? According to Rak Bhalla, product lifecycle management is a full-spectrum process, and the expertise lies in streamlining the supply chain, in many cases by getting suppliers involved very early in the process. Adobe 3D and the entire PDF family facilitate workflow efficiency, in effect fully leveraging CAD data much earlier in the process than has traditionally been possible. With the advent of social media, the ease with which consumers can get information and start to share their opinions turns a lot of this traditional campaign planning on its ear. How can the new digital agency use this situation to its advantage? We spoke with John Hingley, founder of Andiamo Systems, a social media analysis company, on how he advises companies to effectively use consumer-generated content. Thanks for joining us! In our next issue, we'll cover innovations in Marketing Operations. Iris AlRoy, Managing Editor. Editorial. Journal of Digital Asset Management (2008) 4, 123.
doi: 10.1057/dam.2008.26
work_mufkjzxvnne7fmkzrhaypixeny ---- Coincident Tick Infestations in the Nostrils of Wild Chimpanzees and a Human in Uganda
Citation: Hamer, Sarah A., Andrew B. Bernard, Ronan M. Donovan, Jessica A. Hartel, Richard W. Wrangham, Emily Otali, and Tony L. Goldberg. 2013. "Coincident Tick Infestations in the Nostrils of Wild Chimpanzees and a Human in Uganda." American Journal of Tropical Medicine and Hygiene 89 (5): 924–927. Published version doi:10.4269/ajtmh.13-0081. Citable link: http://nrs.harvard.edu/urn-3:HUL.InstRepos:13456933
Am. J. Trop. Med. Hyg., 89(5), 2013, pp. 924–927 doi:10.4269/ajtmh.13-0081 Copyright © 2013 by The American Society of Tropical Medicine and Hygiene
Coincident Tick Infestations in the Nostrils of Wild Chimpanzees and a Human in Uganda
Sarah A. Hamer, Andrew B. Bernard, Ronan M. Donovan, Jessica A. Hartel, Richard W. Wrangham, Emily Otali, and Tony L.
Goldberg* Department of Veterinary Integrative Biosciences, Texas A&M University, College Station, Texas; Freelance Nature Photographer, Merchantville, New Jersey; Freelance Nature Photographer, Helena Montana; Department of Biological Sciences, University of Southern California, Los Angeles, California; Department of Human Evolutionary Biology, Harvard University, Cambridge Massachusetts; Makerere University Biological Field Station, Fort Portal, Uganda; Department of Pathobiological Sciences, University of Wisconsin, Madison, Wisconsin Abstract. Ticks in the nostrils of humans visiting equatorial African forests have been reported sporadically for decades, but their taxonomy and natural history have remained obscure. We report human infestation with a nostril tick in Kibale National Park, Uganda, coincident with infestation of chimpanzees in the same location with nostril ticks, as shown by high-resolution digital photography. The human-derived nostril tick was identified morphologically and genetically as a nymph of the genus Amblyomma, but the mitochondrial 12S ribosomal RNA or the nuclear intergenic transcribed spacer 2 DNA sequences of the specimen were not represented in GenBank. These ticks may represent a previously uncharacterized species that is adapted to infesting chimpanzee nostrils as a defense against grooming. Ticks that feed upon apes and humans may facilitate cross-species transmission of pathogens, and the risk of exposure is likely elevated for persons who frequent ape habitats. INTRODUCTION Africa contains a great diversity of ticks, many of which transmit diseases of considerable importance to global human and animal health.1,2 Over several decades, sporadic cases of nostril ticks have been reported in persons who have visited equatorial African forests. In 1960, Walton documented three cases of nostril ticks in visitors to the forests of western Uganda, including Kibale Forest,3 which is now Kibale National Park, the location of our study. 
Walton speculated that such ticks "might normally infest the nasal passages of anthropoid apes," on the basis that chimpanzees (Pan troglodytes schweinfurthii) were common in each forest where nostril ticks had been reported. Morphologic examination proved inconclusive at the time, but it was hypothesized that these nostril ticks might be nymphal stages of A. paulopunctatum, which infests wild suids.4 In 2008, Aronsen and Robbins reported feeding to repletion of a nostril tick in a researcher visiting Kibale National Park to study primates and elephants.5 Morphologic examination showed the tick to be a nymph of the genus Amblyomma, consistent with earlier descriptions of Walton, but identification to species was again not possible. In this study, we report molecular characterization of a nostril tick from a researcher visiting Kibale National Park, Uganda. In addition, using high-resolution digital photography, we document coincident infestation of many young chimpanzees in Kibale with nostril ticks. Our analyses suggest that nostril ticks may commonly infest wild apes and may sporadically infest human visitors to ape habitats. MATERIALS AND METHODS Kibale National Park, western Uganda (0°13′–0°41′N, 30°19′–30°32′E) is a semi-deciduous forested park with an area of 795 km2 that contains a high diversity and biomass of wild non-human primates.6 Chimpanzees have been studied continuously in Kibale since 1987, and research projects of varying durations have focused on other non-human primates in Kibale, other animals, plants, persons, and the physical environment.7 These activities create a community of researchers and associated personnel who spend considerable time in the forest.
Such activities increase rates of microbial transmission between humans and wild apes.8,9 A multi-decade, concerted effort has led to a remarkable degree of habituation of the Kanyawara community of chimpanzees in Kibale,10,11 which at the time of our study contained approximately 55 chimpanzees. As part of the regular monitoring of this chimpanzee community, digital photographs are obtained. In 2011 and 2012, several young chimpanzees were observed with nostril ticks, and efforts were made to document these cases by using high-resolution photography. Approximately 45 chimpanzees were regularly photographed during the study period. Photographs were taken opportunistically at close range with both digital SLR and point-and-shoot cameras. The cameras were set to a high ISO speed (at least 2,000), a high frame rate (at least 6 frames per second), and a wide aperture (f/2.8 at 200 mm). Effort was made to shoot at an angle that captured the interior of the nostrils. A chimpanzee was considered infested if a smooth, rounded, tick-like object, able to be differentiated from debris, was seen partially or fully occluding the nostril in one or more photographs. In June 2012, immediately upon returning to the United States after approximately three weeks in Kibale, one of us (TLG) discovered a tick in his right nostril. The tick was attached to the upper lateral nasal cartilage at the level of the rhinion (osseocartilaginous junction with the nasal bone). As reported in previous studies,3,5 the infestation caused mild irritation but no epistaxis. The tick was removed intact by using forceps and placed in RNAlater solution (Invitrogen, Grand Island, NY). Dorsal and ventral photographs of the tick were obtained by using a SMZ-745T Zoom Stereo Photo Microscope with 2+ auxiliary objective (Nikon Instruments, Melville, NY), and a leg plus an additional tibia and tarsus from a second leg were removed for DNA extraction. *Address correspondence to Tony L.
Goldberg, Department of Pathobiological Sciences, University of Wisconsin-Madison, Madison, WI 53706. E-mail: tgoldberg@vetmed.wisc.edu Total DNA was extracted by using the DNeasy Blood and Tissue kit (QIAGEN, Valencia, CA) and used as template in separate touchdown polymerase chain reactions to amplify a 360-basepair fragment of the tick mitochondrial 12S ribosomal RNA (rRNA)12 and the 1,041-basepair tick nuclear 5.8S–28S rRNA intergenic transcribed spacer 2 (ITS2).13 DNA extracts from two ticks of known identity (A. maculatum and A. americanum, collected in Texas) were used as positive controls. After purification of the amplicons (ExoSAP-IT; Affymetrix, Santa Clara, CA), bidirectional Sanger sequencing was performed on an ABI 3730xl DNA Analyzer (Applied Biosystems, Foster City, CA) at Eton Bioscience (San Diego, CA). Based on preliminary assessments of tick morphology and DNA sequences, 12S rRNA and ITS2 sequences of all African Amblyomma ticks in GenBank were downloaded and used in phylogenetic analyses. Sequences were aligned by using Clustal Omega14 with manual adjustment. Phylogenetic trees were constructed from aligned sequences by using the neighbor-joining method15 implemented in the computer program MEGA5,16 with a maximum composite likelihood distance correction and 1,000 bootstrap replicates of the data. Sequences of 12S rRNA and ITS2 from the nostril tick, A. maculatum, and A. americanum were deposited in GenBank (accession numbers KC538939–KC538944). RESULTS During January 2011–July 2012, high-resolution photography identified ticks in the nostrils of nine of 45 chimpanzees photographed, a period prevalence of 20% (95% confidence interval = 10.9–33.8%; Figure 1). Some chimpanzees were photographed with nostril ticks on more than one occasion, including in alternating nostrils, indicating independent infestations (Figure 1).
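The reported interval can be reproduced from the raw counts (nine infested of 45 photographed). The 10.9–33.8% bounds are consistent with a Wilson score interval for a binomial proportion, although the text does not state which method was used; a sketch:

```python
import math

def wilson_interval(k, n, z=1.96):
    """Wilson score confidence interval for a binomial proportion
    (k successes out of n trials, z = 1.96 for a 95% interval)."""
    p = k / n
    center = p + z * z / (2 * n)
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    denom = 1 + z * z / n
    return (center - half) / denom, (center + half) / denom

# 9 of 45 chimpanzees with nostril ticks over the study period:
low, high = wilson_interval(9, 45)
```

With z = 1.96 this evaluates to approximately 10.9% and 33.8%, matching the interval reported above; the Wilson interval is a common default for proportions from small samples because it behaves better near 0 and 1 than the simple normal approximation.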
The tick removed from the nostril of the researcher was identified morphologically as a partially engorged nymph belonging to the genus Amblyomma (Figure 2), but species-level morphologic identification was not possible; it is generally not possible in practice to identify wild-caught African nymphal ticks of this genus to the species level based on morphologic characteristics.17 DNA sequences of 12S rRNA and ITS2 did not match any sequences in GenBank but were closest to A. rhinocerotis in the case of 12S rRNA (85.2% nucleotide similarity) and A. variegatum in the case of ITS2 (94.4% nucleotide similarity). Phylogenetic trees of 12S rRNA and ITS2 showed the nostril tick to cluster with known African Amblyomma species but to be divergent from any published sequence13,18 (Figure 3).

DISCUSSION

We are aware of only two previous reports of infestations of human nostrils with ticks, and both infestations occurred in visitors to the forests of western Uganda.3,5 Anecdotally, however, such infestations are not uncommon. For example, the researcher infested in the current study recalls two prior tick infestations of his nostrils during previous visits to Kibale over approximately nine years. In addition, two of us (JAH and RWW) recall hosting nostril ticks during visits to Kibale, and reports of such infestations from other researchers and field personnel in Kibale occur intermittently. The nostril tick that we identified was a member of the genus Amblyomma, as has been reported previously,3,5 but DNA sequences of this particular species did not previously exist in GenBank. Thus, this tick is either a recognized species that has not yet been sequenced or an unrecognized species. Adult specimens would likely be necessary to make this determination, which might necessitate either trapping an adult or allowing a nymph to feed to repletion and molt. To date, however, our efforts to recover ticks from the forest floor in locations frequented by chimpanzees have been unsuccessful, and we have not yet had the opportunity to allow a nymph to feed to repletion.

Figure 1. Nostril ticks in young chimpanzees in Kibale National Park, Uganda, 2011–2012. A, Thatcher, born in November 2011, with right nostril tick on February 27, 2012. B, Thatcher with a left nostril tick on July 24, 2012. C, Thatcher with both nostrils occluded by ticks on December 2, 2011. D, Betty, born in January 2011, with a left nostril tick on January 23, 2011. Photo credits: Andrew B. Bernard, Ronan M. Donovan, and Jessica A. Hartel.

Figure 2. Tick removed from a human nostril in July 2012 after field research in Kibale National Park, Uganda. A, Dorsal. B, Ventral. Note that leg 4 on the right side and the tibia and tarsus of leg 2 on the left side were removed for DNA extraction. Scale bar = 1 mm. Photo credit: Gabriel L. Hamer.

NOSTRIL TICKS IN CHIMPANZEES AND A HUMAN 925

In Africa, a total of 22 tick species in the genus Amblyomma have been described,19 of which 10 are known to occur in Uganda and 5 others possibly occur there.17 Limited genetic data on these species are available; for example, as of July 7, 2013, mitochondrial 12S rRNA or ITS2 sequences of only four such species were represented in GenBank, and even fewer African Amblyomma species were represented by sequences of other loci. We note that we likely underestimated infestation rates of chimpanzees because ticks may have been present at times when animals were not photographed, and ticks may have been present but minimally engorged or attached deep within the nostril and therefore not visible. We also note that it is not definitive that the tick removed from the human nostril was of the same species as the ticks observed in chimpanzee nostrils. Despite the remarkable degree of habituation of the Kanyawara chimpanzees, it remains impractical to collect ticks directly from the nostrils of these wild apes.
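Nucleotide-similarity figures such as the 85.2% (12S rRNA) and 94.4% (ITS2) values discussed above are conventionally computed as pairwise identity over aligned positions. The helper below is an illustrative sketch with made-up toy sequences, not the study's data; whether gap columns count toward the denominator varies between tools, so it is an explicit option here:

```python
def pairwise_identity(a, b, count_gaps=False):
    """Percent identity between two aligned, equal-length sequences.

    Positions where either sequence has a gap ('-') are skipped
    unless count_gaps is True (tools differ on this convention).
    """
    assert len(a) == len(b), "sequences must be aligned to equal length"
    pairs = [(x, y) for x, y in zip(a, b) if count_gaps or "-" not in (x, y)]
    matches = sum(1 for x, y in pairs if x == y)
    return 100.0 * matches / len(pairs)

# Toy aligned fragments (hypothetical, not the tick sequences)
seq1 = "ACGT-TACGGA"
seq2 = "ACGTATACGCA"
print(round(pairwise_identity(seq1, seq2), 1))  # → 90.0
```

In the toy pair, the gap column is skipped, leaving 10 comparable positions with 9 matches, hence 90.0% identity.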
Nevertheless, the coincidence of the human infestation with a high frequency of chimpanzee infestation in the same location is suggestive. Despite close daily observation and high-resolution photography, we did not observe engorged ticks on the bodies of the chimpanzees studied, even on hairless regions. Grooming is a frequent activity among chimpanzees, and its importance for social bonding cannot be overstated.20 A tick that specializes on chimpanzees would clearly benefit from defenses against grooming, such as seeking an attachment site anatomically inaccessible to manual removal; infestation of the nostril would appear to be ideal in this regard. In this light, it is noteworthy that the chimpanzee louse (Pediculus schaeffi) will suddenly become stiff and motionless when exposed to light, even falling from the hair like a piece of debris, presumably to evade detection when the hair of a chimpanzee is parted during grooming.21 Such countermeasures to grooming may be common adaptations of primate ectoparasites. Many Amblyomma ticks are three-host ticks, requiring blood meals from three separate vertebrate hosts, with off-host time spent on the forest floor or on vegetation, including underbrush and the forest canopy.22 In sub-Saharan Africa, Amblyomma spp.
ticks transmit Ehrlichia ruminantium, Rickettsia africae, and Coxiella burnetii, which are the agents of heartwater disease, African tick bite fever, and Q fever, respectively.4,23,24 Our findings raise the possibility that nostril ticks could transmit these or similar diseases between humans and chimpanzees, as well as to and from other terrestrial or arboreal species on which the ticks feed.4 The risks of zoonotic transmission may be increased for researchers, research personnel, and others who spend time in forested habitats frequented by chimpanzees.8,9 Finally, we note that we did not discover our nostril tick until after the researcher had returned to his home country, similar to a previous report.5 The subtle host-seeking and attachment behavior that must accompany tick infestation of the highly innervated mammalian nostril supports the hypothesis that this tick is adapted to infesting nasal passages. More generally, although infestation with nostril ticks is a rare event, a great many persons visit wild ape habitats each year for such purposes as tourism and research.7 Rapid international travel of persons unknowingly infested with nostril ticks and other similar parasites could, under the right circumstances, lead to the establishment of exotic tick populations or the transmission of tick-borne diseases far from their countries of origin.25,26

Received February 12, 2013. Accepted for publication August 5, 2013. Published online September 30, 2013.

Acknowledgments: We thank the Uganda Wildlife Authority and the Uganda National Council for Science and Technology for granting us permission to conduct research in Kibale National Park; the Makerere University Biological Field Station, the Kibale Chimpanzee Project, and the Kibale EcoHealth Project for providing support in the field; and Lorenza Beati-Ziegler and Richard G. Robbins for helpful comments and discussion.

Figure 3. Phylogenetic trees of the nostril tick and other ticks of the genus Amblyomma from 12S ribosomal RNA (rRNA) (top) and nuclear intergenic transcribed spacer 2 (ITS2) (bottom) sequence alignments constructed by using the neighbor-joining method. All African Amblyomma species in GenBank were included, as were A. americanum and A. maculatum from Texas that were processed along with the nostril tick as positive controls. Alignment lengths were 319 and 1,015 nucleotide positions, respectively. GenBank accession numbers are shown in parentheses. Numbers above branches indicate bootstrap values based on 1,000 resamplings of the data. Scale bars indicate nucleotide substitutions per site.

926 HAMER AND OTHERS

Financial support: This study was supported in part by National Institutes of Health grant TW009237 as part of the joint National Institutes of Health–National Science Foundation Ecology of Infectious Disease program and the United Kingdom Economic and Social Research Council.

Authors' addresses: Sarah A. Hamer, Department of Veterinary Integrative Biosciences, Texas A&M University, College Station, TX, E-mail: shamer@cvm.tamu.edu. Andrew B. Bernard, Merchantville, NJ, E-mail: andrew.b.bernard@gmail.com. Ronan M. Donovan, Helena, MT, E-mail: ronmdon@gmail.com. Jessica A. Hartel, Department of Biological Sciences, University of Southern California, Los Angeles, CA, E-mail: hartelj@gmail.com. Richard W. Wrangham, Department of Human Evolutionary Biology, Harvard University, Cambridge, MA, E-mail: wrangham@fas.harvard.edu. Emily Otali, Makerere University Biological Field Station, Fort Portal, Uganda, E-mail: eotali@yahoo.co.uk. Tony L. Goldberg, Department of Pathobiological Sciences, University of Wisconsin-Madison, Madison, WI, E-mail: tgoldberg@vetmed.wisc.edu.

REFERENCES

1. Hotez PJ, Kamath A, 2009. Neglected tropical diseases in sub-Saharan Africa: review of their prevalence, distribution, and disease burden. PLoS Negl Trop Dis 3: e412.
2. Cumming GS, 2000.
Using habitat models to map diversity: pan-African species richness of ticks (Acari: Ixodida). J Biogeogr 27: 425–440.
3. Walton GA, 1960. A tick infecting the nostrils of man. Nature 188: 1131–1132.
4. Van der Borght-Elbl A, 1977. Ixodid Ticks (Acarina, Ixodidae) of Central Africa, Volume 5. Tervuren, Belgium: Musée Royal de l'Afrique Centrale.
5. Aronsen GP, Robbins RG, 2008. An instance of tick feeding to repletion inside a human nostril. Bulletin of the Peabody Museum of Natural History 49: 245–248.
6. Struhsaker T, 1997. Ecology of an African Rain Forest: Logging in Kibale and the Conflict between Conservation and Exploitation. Gainesville, FL: University Press of Florida.
7. Wrangham R, Ross E, 2008. Science and Conservation in African Forests: The Benefits of Long-Term Research. Cambridge, UK: Cambridge University Press.
8. Goldberg TL, Gillespie TR, Rwego IB, Wheeler ER, Estoff EE, Chapman CA, 2007. Patterns of gastrointestinal bacterial exchange between chimpanzees and humans involved in research and tourism in western Uganda. Biol Conserv 135: 511–517.
9. Rwego IB, Isabirye-Basuta G, Gillespie TR, Goldberg TL, 2008. Gastrointestinal bacterial transmission among humans, mountain gorillas, and livestock in Bwindi Impenetrable National Park, Uganda. Conserv Biol 22: 1600–1607.
10. Smith TM, Machanda Z, Bernard AB, Donovan RM, Papakyrikos AM, Muller MN, Wrangham R, 2013. First molar eruption, weaning, and life history in living wild chimpanzees. Proc Natl Acad Sci USA 110: 2787–2791.
11. Wilson ML, Kahlenberg SM, Wells M, Wrangham RW, 2012. Ecological and social factors affect the occurrence and outcomes of intergroup encounters in chimpanzees. Anim Behav 83: 277–291.
12. Hickson RE, Simon C, Cooper A, Spicer GS, Sullivan J, Penny D, 1996. Conserved sequence motifs, alignment, and secondary structure for the third domain of animal 12S rRNA. Mol Biol Evol 13: 150–169.
13.
Beati L, Patel J, Lucas-Williams H, Adakal H, Kanduma EG, Tembo-Mwase E, Krecek R, Mertins JW, Alfred JT, Kelly S, Kelly P, 2012. Phylogeography and demographic history of Amblyomma variegatum (Fabricius) (Acari: Ixodidae), the tropical bont tick. Vector Borne Zoonotic Dis 12: 514–525.
14. Sievers F, Wilm A, Dineen D, Gibson TJ, Karplus K, Li W, Lopez R, McWilliam H, Remmert M, Söding J, Thompson JD, Higgins DG, 2011. Fast, scalable generation of high-quality protein multiple sequence alignments using Clustal Omega. Mol Syst Biol 7: 539.
15. Saitou N, Nei M, 1987. The neighbor-joining method: a new method for reconstructing phylogenetic trees. Mol Biol Evol 4: 406–425.
16. Tamura K, Peterson D, Peterson N, Stecher G, Nei M, Kumar S, 2011. MEGA5: molecular evolutionary genetics analysis using maximum likelihood, evolutionary distance, and maximum parsimony methods. Mol Biol Evol 28: 2731–2739.
17. Matthysse JG, Colbo MH, 1987. The Ixodid Ticks of Uganda: Together with Species Pertinent to Uganda because of Their Present Known Distribution. College Park, MD: Entomological Society of America.
18. Beati L, Keirans JE, 2001. Analysis of the systematic relationships among ticks of the genera Rhipicephalus and Boophilus (Acari: Ixodidae) based on mitochondrial 12S ribosomal DNA gene sequences and morphological characters. J Parasitol 87: 32–48.
19. Voltzit OV, Keirans JE, 2003. A review of African Amblyomma species (Acari, Ixodida, Ixodidae). Acarina 11: 135–214.
20. Goodall J, 1986. The Chimpanzees of Gombe: Patterns of Behavior. Cambridge, MA: Harvard University Press.
21. Kuhn HJ, 1967. Parasites and the phylogeny of the catarrhine primates. Chiarelli B, ed. Taxonomy and Phylogeny of Old World Primates with References to the Origin of Man. Torino, Italy: Rosenberg and Sellier, 187–195.
22. Sonenshine DE, 1991. Biology of Ticks. New York: Oxford University Press.
23. Cazorla C, Socolovschi C, Jensenius M, Parola P, 2008.
Tick-borne diseases: tick-borne spotted fever rickettsioses in Africa. Infect Dis Clin North Am 22: 531–544, ix–x.
24. Walker JB, 1987. The tick vectors of Cowdria ruminantium (Ixodoidea, Ixodidae, genus Amblyomma) and their distribution. Onderstepoort J Vet Res 54: 353–379.
25. Randolph SE, Rogers DJ, 2010. The arrival, establishment and spread of exotic diseases: patterns and predictions. Nat Rev Microbiol 8: 361–371.
26. Hamer SA, Goldberg TL, Kitron UD, Brawn JD, Anderson TK, Loss SR, Walker ED, Hamer GL, 2012. Wild birds and urban ecology of ticks and tick-borne pathogens, Chicago, Illinois, USA, 2005–2010. Emerg Infect Dis 18: 1589–1595.

Unilateral Condylar Hyperplasia: A 3-Dimensional Quantification of Asymmetry

Tim J. Verhoeven1*., Jitske W. Nolte2., Thomas J. J. Maal1, Stefaan J. Bergé1, Alfred G. Becking3

1 Department of Oral and Maxillofacial Surgery, Radboud University Medical Centre, Nijmegen, The Netherlands, 2 Department of Oral and Maxillofacial Surgery/Oral Pathology, VU University Medical Centre and Academic Centre of Dentistry, Amsterdam, The Netherlands, 3 Department of Oral and Maxillofacial Surgery, Academic Medical Centre/Emma Children Hospital Amsterdam, Amsterdam, The Netherlands

Abstract

Purpose: Objective quantification of facial asymmetry in patients with Unilateral Condylar Hyperplasia (UCH) has not yet been described in the literature. The aim of this study was to objectively quantify soft-tissue asymmetry in patients with UCH and to compare the findings with a control group using a new method.
Material and Methods: Thirty 3D photographs of patients diagnosed with UCH were compared with 30 3D photographs of healthy controls. As UCH presents particularly in the mandible, a new method was used to isolate the lower part of the face and evaluate the asymmetry of this part separately.

Results: A significant difference (0.79 mm) between patients' and controls' whole-face asymmetry was found. Intra- and inter-observer differences of 0.011 mm (−0.034 to 0.011) and 0.017 mm (−0.007 to 0.042), respectively, were found. These differences are irrelevant in clinical practice.

Conclusion: After objective quantification, a significant difference was identified in soft-tissue asymmetry between patients with UCH and controls. The method used to isolate mandibular asymmetry was found to be valid and a suitable tool to evaluate facial asymmetry.

Citation: Verhoeven TJ, Nolte JW, Maal TJJ, Bergé SJ, Becking AG (2013) Unilateral Condylar Hyperplasia: A 3-Dimensional Quantification of Asymmetry. PLoS ONE 8(3): e59391. doi:10.1371/journal.pone.0059391

Editor: Jonathan A. Coles, Glasgow University, United Kingdom

Received November 6, 2012; Accepted February 14, 2013; Published March 27, 2013

Copyright: © 2013 Verhoeven et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Funding: The authors have no support or funding to report.

Competing Interests: The authors have declared that no competing interests exist.

* E-mail: t.verhoeven@mka.umcn.nl

. These authors contributed equally to this work.

Introduction

Unilateral Condylar Hyperplasia (UCH) is a rare disorder that has been researched and discussed in numerous publications. Uncertainty exists about the aetiology.
The condition is characterized by asymmetry of the lower part of the face, due to persistent or renewed growth-like activity in one of the mandibular condyles [1]. Varying degrees of mandibular overgrowth can be detected clinically in UCH patients. A classification into three categories has been described: hemimandibular elongation (HE), hemimandibular hyperplasia (HH), and a combination of the two (hybrid form) [2]. The asymmetrical development in UCH patients often results in functional and aesthetic problems [3]. No gold standard for diagnosis and treatment is available. (Hetero-)anamnesis in combination with clinical and radiological documentation and a positive SPECT scan are currently used to identify ongoing disease. Patients are considered to have hyperactivity of one condyle if the bone scintigram shows a >10% left-to-right difference [4,5]. Treatment of these patients consists of removal of the growth centre by a partial condylectomy. Secondly, the facial asymmetry needs to be corrected, usually with a combination of orthodontics and surgery [6]. Although the disease is self-limiting, the asymmetry can become excessive. Especially in patients in whom the degree of growth activity is hard to rate, for example when clinical evaluation indicates progression whereas bone scintigraphy does not show a >10% right-to-left difference, accurate (imaging) documentation for monitoring is of utmost importance. Facial asymmetry in patients with condylar hyperplasia has been described subjectively before, but an objective quantification is lacking [2,6]. Objective quantification would offer the possibility to evaluate the development of the facial asymmetry over time. Secondly, it would offer a possibility to evaluate the effect and accuracy of treatment. With recent advances in 3D technology, objective quantification of facial asymmetries can be performed without ionizing radiation or other invasive measures [7,8].
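The >10% scintigraphic criterion can be made concrete with a small helper. The exact formula is an assumption here for illustration only — protocols differ, and refs [4,5] define the clinical criterion. One common formulation expresses each condyle's uptake as a share of the summed counts of both condyles:

```python
def condylar_uptake_difference(left_counts, right_counts):
    """Percentage-point difference in relative condylar uptake.

    Each condyle's counts are expressed as a share of the summed counts
    of both condyles; the >10% criterion is then a difference between
    these two shares (one common formulation -- an assumption here,
    since scintigraphy protocols differ).
    """
    total = left_counts + right_counts
    left_pct = 100.0 * left_counts / total
    right_pct = 100.0 * right_counts / total
    return abs(left_pct - right_pct)

# Example: 58% vs 42% relative uptake -> 16 percentage points, positive scan
diff = condylar_uptake_difference(5800, 4200)
print(diff, diff > 10.0)  # → 16.0 True
```

The same helper returns 0.0 for perfectly symmetric uptake, which would be a negative scan under this criterion.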
The aim of this study was to objectively quantify facial and mandibular soft-tissue asymmetry in patients with unilateral condylar hyperplasia, and to evaluate whether this method is applicable for routine diagnostic and follow-up procedures. A new method based on 3D stereophotogrammetry to isolate the lower part of the face was validated and used to compare the patients to a control group.

PLOS ONE | www.plosone.org 1 March 2013 | Volume 8 | Issue 3 | e59391

Materials and Methods

Thirty patients with proven unilateral condylar hyperplasia (UCH) and available 3D stereophotogrammetric images from September 2009 until November 2011 were included in the study. UCH was defined using the following inclusion criteria: progressive mandibular asymmetry, supported by a positive bone scan (difference between affected and non-affected regions of interest >10%) and/or a performed condylectomy. Exclusion criteria were a proven mandibular fracture, previous mandibular surgery, and facial asymmetry suspected to have a non-UCH cause. A control group of 30 age- and gender-matched healthy volunteers, without a prior history of facial surgery or existing facial deformities, was selected. This study was presented to the institutional review board of the VU University Medical Center, and it was decided that no ethical approval was needed because of its retrospective and non-invasive nature. All patients were informed about the use of their photographs for research purposes besides the normal use for diagnosis and treatment. For all controls used in this study, written informed consent was obtained prior to photo acquisition and use. A consent protocol was developed and used; this procedure was discussed and approved by the ethics committee. The data were processed anonymously. The patient depicted in the article has given her written consent for publication.
For all patients and controls, 3D photographs were acquired using a stereophotogrammetric camera set-up (3dMDface™ System, 3dMD, Atlanta, GA, USA). The 3D photographs were taken with the subject in a natural head position, eyes open and facial musculature relaxed [9]. All 3D photographs were taken by an experienced co-worker. Asymmetry quantification of the whole face was achieved using an existing method previously published by Verhoeven et al. [10], which includes the following steps:

1. Using 3dMDpatient software (3dMDpatient™ v3.1.0.3 Software Platform, 3dMD), the neck, ears and hair were removed to exclude confounding regions (figure 1) [7].
2. In Maxilim® (Medicim NV, Mechelen, Belgium), a sagittal plane was constructed and used to create a mirrored 3D photograph (figure 1).
3. The original and the mirrored 3D photograph were matched using the Iterative Closest Point algorithm [11]. This registration procedure was performed in Maxilim® using selected areas (forehead, upper nasal dorsum and zygoma [12]) (figure 1).
4. The registration procedure resulted in a color map which illustrates the distances between two corresponding points on the original and the mirrored 3D photographs [13]. These distances were used as a direct measurement of the facial asymmetry. The absolute mean and the 95th percentile of the distances were calculated in millimeters using Matlab® (7.4.0 (R2007a), Mathworks, Natick, MA, USA) (figure 1).

As UCH is a mandibular disorder, most of the asymmetry is expected in this region, and separate evaluation of asymmetry in this particular area is desirable. A fifth step was therefore added to isolate the lower part of the face:

5. The original photograph was imported into Maxilim® and a reference frame was set up [14]. The subnasal landmark was identified and a plane, parallel to the horizontal plane of the reference frame, was constructed through this landmark [15]. The new plane was used to remove the upper part of the distance map.
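The mirror-and-distance-map idea behind the steps above can be sketched in miniature. The toy version below mirrors a point cloud across the x = 0 plane and uses a brute-force nearest-neighbour search in place of the ICP refinement and Maxilim's implementation — a simplification for illustration, not the authors' code:

```python
from math import sqrt

def mirror_x(points):
    """Mirror a 3D point cloud across the x = 0 (sagittal) plane."""
    return [(-x, y, z) for x, y, z in points]

def nearest_distance(p, cloud):
    """Euclidean distance from p to its nearest neighbour in cloud."""
    return min(sqrt(sum((a - b) ** 2 for a, b in zip(p, q))) for q in cloud)

def distance_map(original, mirrored):
    """Per-point distances between a mesh and its mirror image."""
    return [nearest_distance(p, mirrored) for p in original]

def percentile(values, pct):
    """Nearest-rank percentile (one of several common conventions)."""
    s = sorted(values)
    idx = max(0, int(round(pct / 100.0 * len(s))) - 1)
    return s[idx]

# Toy 'face': symmetric except one point offset to one side
face = [(1.0, 0.0, 0.0), (-1.0, 0.0, 0.0), (2.0, 1.0, 0.0), (-1.6, 1.0, 0.0)]
dmap = distance_map(face, mirror_x(face))
print(round(sum(dmap) / len(dmap), 3), round(percentile(dmap, 95), 3))
# → 0.2 0.4
```

The symmetric points contribute zero distance, while the offset pair contributes 0.4 mm on each side, so the absolute mean (0.2) and the 95th percentile (0.4) behave like the paper's two summary measures.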
Now the asymmetry of the lower face could be calculated (figure 2).

Statistical Analysis and Validation

The described methods, for the complete face and the lower face, were applied to the patient and control groups. The patients and controls were compared on the absolute mean and the 95th percentile of the asymmetry. Significance of differences (P<0.05) was tested using an unpaired Student's t-test. To investigate the inter-observer reproducibility of the lower face method, it was applied to the 3D photographs of five patients and five controls by two observers. To investigate the intra-observer error, one of the observers repeated the measurements one week later. The absolute mean asymmetry and the 95th percentile of the measurements were compared. A difference of less than 0.5 mm was considered clinically acceptable [14,16]. The difference in means (95% confidence interval [CI]) and the standard error of the mean (SEM = SD/√N) were calculated to represent the systematic error. The measurement error (ME = SD/√2) was calculated to represent the random error.

Categories

Apart from calculating the absolute mean and 95th percentile, the latter was used to divide all patients and controls into four categories (figure 3):

- symmetry (0–2 mm)
- minor asymmetry (2–4 mm)
- asymmetry (4–6 mm)
- strong asymmetry (>6 mm)

Results

Validation of the Lower Face Method

Table 1 shows the intra- and inter-observer performances of the lower face method. The intra-observer difference of the absolute mean asymmetry is 0.011 mm (−0.034 to 0.011) with a measurement error of 0.022 mm. The inter-observer difference is 0.017 mm (−0.007 to 0.042) with a measurement error of 0.024 mm.

Study

The study method was applied to 30 patients and 30 controls. The average age of the patient group was 22 years (±9, range 11–41 years); the group included sixteen women and fourteen men. The asymmetry of the complete face for both the patient and the control group is presented in table 2.
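The error measures and category bands above can be written out directly (SEM = SD/√N for the systematic error, ME = SD/√2 for the random error, and 2 mm bands on the 95th percentile). The helpers below are an illustrative sketch, not the authors' Matlab code:

```python
from math import sqrt
from statistics import stdev

def systematic_and_random_error(differences):
    """SEM = SD/sqrt(N) and measurement error ME = SD/sqrt(2)
    for a list of paired measurement differences."""
    sd = stdev(differences)
    return sd / sqrt(len(differences)), sd / sqrt(2)

def asymmetry_category(p95_mm):
    """Band the 95th-percentile asymmetry into the paper's four classes."""
    if p95_mm <= 2:
        return "symmetry"
    if p95_mm <= 4:
        return "minor asymmetry"
    if p95_mm <= 6:
        return "asymmetry"
    return "strong asymmetry"

# Patients' vs. controls' whole-face 95th percentiles from the Results
print(asymmetry_category(5.44), "/", asymmetry_category(2.12))
# → asymmetry / minor asymmetry
```

Note that how values falling exactly on a 2, 4 or 6 mm boundary are classified is not stated in the paper; the `<=` convention above is an assumption.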
The absolute mean asymmetry in patients (1.57 mm) and controls (0.78 mm) showed a significant difference of 0.79 mm. In the 95th percentile of the asymmetry a significant difference (3.32 mm) between controls (2.12 mm) and patients (5.44 mm) was also found. For assessment of the lower face asymmetry two individuals in the patient group and five individuals in the control group had to be excluded because of overlying hair in the ear region, making it impossible to set up a reference frame. Therefore, the lower face asymmetry was measured for 28 patients and 25 controls. The results of the included subjects are presented in table 3. The absolute mean (2.64 mm vs. 1.01 mm) and 95 th percentile (6.47 mm vs. 2.29 mm) of the asymmetry both showed a significant difference between patients and controls of 1.63 mm and 4.18 mm respectively. Discussion Unilateral condylar hyperplasia is inextricably linked to facial asymmetry, most visible in the lower third of the face. This has Asymmetry in Unilateral Condylar Hyperplasia PLOS ONE | www.plosone.org 2 March 2013 | Volume 8 | Issue 3 | e59391 been published on extensively, classifying two characteristical patterns: hemimandibular elongation and hemimandibular hyper- plasia, and a third hybrid or mixed form of these two (HH/HE). HE exerts a horizontal asymmetry with a clear horizontal deviation of the chin. HH demonstrates a more vertical asymmetry with minor horizontal chinpoint deviation and/or cant. Usually regular photographs are taken to subjectively evaluate the asymmetry [2,3,17,18,19,20]. To our knowledge objective 3D-quantification of the asymmetry has not been performed before. The aim of this study was to objectively quantify facial and mandibular soft-tissue asymmetry in patients with unilateral condylar hyperplasia, and to evaluate whether this Figure 1. Illustrating step 1 removal of the confounding regions. Step 2 computing of a mirror image. Step 3 registration procedure using the selected areas. 
Step 4 creation of a distance map. (The individual in this photograph has given written informed consent (as outlined in PLOS consent form) to publish this picture). doi:10.1371/journal.pone.0059391.g001 Figure 2. Illustrating step 5. A reference frame is set up. The subnasal landmark (Sn) is indicated through which a plane, perpendicular to the horizontal plane of the reference frame, is computed. The new plane is used to split the (in step 4 computed) distance map. (The individual in this photograph has given written informed consent (as outlined in PLOS consent form) to publish this picture). doi:10.1371/journal.pone.0059391.g002 Asymmetry in Unilateral Condylar Hyperplasia PLOS ONE | www.plosone.org 3 March 2013 | Volume 8 | Issue 3 | e59391 method is applicable for routine diagnostic and follow-up procedures. A new method based on 3D stereophotogrammetry to isolate the lower part of the face was validated and used to compare the patients to a control group. The used method is based on a 3D stereophotogrammetry system with a well-researched error of 0.1 mm and an acquisition time of 2 ms [21]. The small system error and fast acquisition time makes the system very suitable for quantifying soft-tissue facial asymmetry. Because the system is based on digital photography it is only a small burden and not invasive for the subject compared to radiographs including cone beam computed tomography. Digital photography is not able to image underlying bony structures and therefore is unable to assess the tissue-origin of asymmetry. Another limitation is the inability to capture the fine structures of the hair. The study method to isolate and quantify mandibular asymmetry specifically, showed clinically acceptable intra- and inter-observer performance scores. The method is a modified version of the method described by Verhoeven et al. in 2012 [10]. The intra- and inter-observer performance scores (0.02 mm and 0.04 mm respectively) of the method described by Verhoeven et al. 
were mainly influenced by two manually performed steps in the procedure. The first is the removal of the confounding regions and secondly, the selection of the regions of interest for surface based registration. In this study the only variable was the indication of the subnasal landmark, as the other steps were already validated. This could explain the low intra- and inter- observer differences. An advantage for both the previously described method and the newly modified method is that it is not based on a facial midline but on surface based registration. The midline, especially in this patient group, does not naturally coincide with the facial symmetry axis [22]. Another advantage of the method is the possibility to measure asymmetry in the whole face independent of the direction of the asymmetry. This makes it applicable to various pathologies. In addition the analysis is easy and quick to use which makes it applicable to quickly measure asymmetry in a clinical setting. Differences in the amount of asymmetry within one group and between the patient and control groups are illustrated using the 95 th percentile. Categories were made with a 2 mm difference, clearly visualizing the differences between the patient group and Figure 3. Histogram of the number of persons per category based on the 95 th percentile. doi:10.1371/journal.pone.0059391.g003 Table 1. Intra- and inter-observer performances of the lower face method. Absolute mean 95 th percentile Mean SE ME Mean SE ME Intra-observer 0.011 0.010 0.022 0.0054 0.007 0.015 (95%-CI) (20.034– 0.011) (20.021– 0.010) Inter-observer 0.017 0.011 0.024 0.010 0.008 0.017 (95%-CI) (20.007– 0.042) (20.007– 0.028) The results for the absolute mean and the 95 th percentile are shown. The difference in means (Mean) (95% CI) (mm), standard error (SE) (mm) and the measurement error (ME) (mm). doi:10.1371/journal.pone.0059391.t001 Table 2. The asymmetry of the whole face in patients and controls. 
Absolute mean 95 th percentile Patients Mean (mm) 1.57 5.44 SD (mm) 0.62 2.59 Controls Mean (mm) 0.78 2.12 SD (mm) 0.20 0.57 Difference Mean (mm) 0.79 3.32 (95% CI) (0.55–1.02) (2.35–4.29) doi:10.1371/journal.pone.0059391.t002 Asymmetry in Unilateral Condylar Hyperplasia PLOS ONE | www.plosone.org 4 March 2013 | Volume 8 | Issue 3 | e59391 control group. All but one control are within the categories symmetry (0–2 mm) or minor asymmetry (2–4 mm). On the contrary, all but one of the patients are within the three asymmetric categories. The patients whole face asymmetry is equally distributed over the three categories. While the patients lower face asymmetry has a striking peek in the strong asymmetry (.6 mm) category (figure 3). By categorizing patients in different asymmetry groups, a systematic approach to diagnosis, treatment and follow-up would become possible. Hwang et al. performed a classification of facial asymmetry by cluster analysis, using measurements on frontal cephalograms and photographs [23]. According to the results 100 patients were divided in five asymmetry subgroups. Each group appeared to have a specific etiology and different treatment modality. The classification system proved to be of great help in accurate diagnosis and treatment planning of facial asymmetries. The four categories in this study show a severity of asymmetry and do not differentiate in location of asymmetry (such as elongation or hyperplasia). Secondly, there is no discrimination in origin (such as mandibular asymmetry or muscular/soft tissue hyperplasia). However, these categories in severity could be a prognostic factor for treatment, and could lead to earlier intervention in patients that at first presentation are already scheduled in category four. In 2010, Meyer-Marcotty et al. compared subjective ratings of pictures with objective measurements of asymmetry [24]. 
A 3D optical sensor was used and asymmetry was calculated by mirroring, surface based registering and calculating the average absolute mean distance between the original and the mirrored 3D surfaces. Eighteen unilateral cleft lip and palate patients were compared with eighteen random control persons. A significant difference in whole face asymmetry (patients 0.87 mm - controls 0.59 mm) as well as a positive correlation between objective asymmetry and appearance rating were found. Apart from the whole face asymmetry they isolated the lower face (subnasale to gnathion). The lower faces had a mean asymmetry of 0.79 mm in patients and 0.59 mm in controls. These asymmetries are rather small compared to this study. Part of this difference can be explained by the difference in pathology. Condylar hyperplasia is expected to result in more overall facial asymmetry than cleft lip and palates. This is especially clear in the patients’ lower face regions 2.59 mm (this study) vs. 0.79 mm (Meyer-Marcotty) of asymmetry. Part of the difference might also be explained by the exclusion of confounding regions. This was not described by Meyer-Marcotty et al. If the excluded regions contain more of the facial asymmetry it will not be taken into account in the mean facial asymmetry measurement and therefore result in a lower mean for both controls and patients. Furthermore, the difference in isolating the lower face might also influence the outcome. Meyer-Marcotty et al. described the method of isolation only in 2D and no validation study was reported. In 2009 and 2011, Primozic et al. studied the correction of unilateral posterior crossbite in the primary dentition using an acrylic plate expander [25,26]. Two Konica/Minolta Vivid 910 laser scanners were used for image acquisition. The images were mirrored, surface based registered and the absolute mean distance between surfaces was calculated as a measure of asymmetry. 
Facial images of 30 children with a unilateral posterior crossbite (with at least 2 mm midline deviation) and 30 without malocclusion were compared. No significant difference in whole-face asymmetry was found between patients and controls: 0.50 mm for patients and 0.44 mm for controls. These are small asymmetries compared with the results of this study on condylar hyperplasia. The difference can again be explained by the different pathologies and by a possible difference in the exclusion of confounding regions; the authors mentioned the removal of unwanted data but did not specify exactly which data. In a previous study by Verhoeven et al., patients with mandibular reconstruction were compared with an age- and gender-matched control group [10]. For the whole-face asymmetry measurement, the same method as in this study was used. For the lower-face asymmetry measurement, a non-validated plane through three landmarks (subnasale, left and right alar curvature) was used. Significant differences were found between patients and controls for both whole-face (2.21 mm vs. 1.02 mm) and lower-face asymmetry (3.37 mm vs. 1.25 mm). The larger values compared with this study can be partially explained by the different pathologies. However, these differences do not explain the discrepancy between the whole-face control groups of the two studies (0.78 mm in this study vs. 1.02 mm in Verhoeven [10]), which were measured with the exact same method. A possible explanation is the difference in age between the control groups: a mean age of 22 years (range 11–41) vs. 54 years (range 15–74). This leads to the presumption that facial asymmetry increases with ageing, an interesting hypothesis for further studies. Time is an important factor in progressive disorders such as UCH and has been referred to as "the fourth dimension".
Kaban describes progression of deformity with time in mandibular asymmetry as a result of undergrowth and overgrowth, and states that understanding this process is the basis for diagnosis and treatment [27]. Although UCH is a self-limiting disease, prevention of end-stage gross deformities is crucial, and the development of secondary midfacial deformities should be avoided. With increasing severity of asymmetry, surgical correction becomes more extensive and usually has to be performed as a bilateral and bimaxillary procedure. This can be prevented by timely removal of the abnormal growth center with a partial condylectomy. Thus, identifying progression of the disease at the time of diagnosis is of utmost importance for further treatment planning. Establishing the presence of progression is difficult: history-taking and earlier documentation such as photographs, radiographs, and even bone scans are only of relative value, since no gold standard is available. This study demonstrates that 3D stereophotogrammetry is a useful tool for quantification of overall facial and lower-face asymmetry. With a substantial database of patients with unilateral condylar hyperplasia, it might be interesting to perform a more extensive study on follow-up from the moment of established diagnosis until the moment of finalizing treatment.

Table 3. The asymmetry of the lower face in patients and controls.

                          Absolute mean    95th percentile
Patients     Mean (mm)    2.64             6.47
             SD (mm)      1.35             2.97
Controls     Mean (mm)    1.01             2.29
             SD (mm)      0.40             0.74
Difference   Mean (mm)    1.63             4.18
             (95% CI)     (1.04–2.12)      (2.89–5.23)
doi:10.1371/journal.pone.0059391.t003

Conclusion

There is a significant difference in facial and mandibular soft-tissue asymmetry between patients with unilateral condylar hyperplasia and controls.
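The group differences in Tables 2 and 3 are reported with 95% confidence intervals. A minimal normal-approximation sketch of such an interval follows; the group sizes and the paper's exact statistical method are not stated in this excerpt, so n = 30 per group is an assumption.

```python
import math

def diff_of_means_ci(m1, s1, n1, m2, s2, n2, z=1.96):
    """Normal-approximation 95% CI for the difference of two independent
    group means. The group sizes used below are assumptions; the paper's
    exact method is not given in this excerpt."""
    diff = m1 - m2
    se = math.sqrt(s1**2 / n1 + s2**2 / n2)  # standard error of the difference
    return diff, (diff - z * se, diff + z * se)

# Lower-face absolute mean asymmetry, Table 3: patients 2.64 (SD 1.35),
# controls 1.01 (SD 0.40); n = 30 per group assumed.
diff, (lo, hi) = diff_of_means_ci(2.64, 1.35, 30, 1.01, 0.40, 30)
```

With these assumed sizes the interval comes out near, but not identical to, the published (1.04–2.12), consistent with the true group sizes or interval method differing from this sketch.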
The new method used to isolate mandibular asymmetry proved to be valid and is a suitable tool for a more in-depth evaluation of asymmetry of the lower face. 3D stereophotogrammetry is easily applicable in routine diagnostic procedures and seems useful for follow-up of UCH patients.

Author Contributions

Conceived and designed the experiments: TJV JWN TJJM SJB AGB. Performed the experiments: TJV JWN TJJM. Analyzed the data: TJV JWN TJJM SJB AGB. Contributed reagents/materials/analysis tools: TJV JWN TJJM SJB AGB. Wrote the paper: TJV JWN TJJM SJB AGB.

References

1. Egyedi P. (1969) Etiology of condylar hyperplasia. Aust Dent J 14: 12–23.
2. Obwegeser HL, Makek MS. (1986) Hemimandibular hyperplasia–hemimandibular elongation. J Maxillofac Surg 14: 183–208.
3. Nitzan DW, Katsnelson A, Bermanis I, Brin I, Casap N. (2008) The clinical characteristics of condylar hyperplasia: experience with 61 patients. J Oral Maxillofac Surg 66: 312–318.
4. Gray RJM, Sloan P, Quayle AA, Carter DH. (1990) Histopathological and scintigraphic features of condylar hyperplasia. Int J Oral Maxillofac Surg 19: 65–71.
5. Saridin CP, Raijmakers PG, Tuinzing DB, Becking AG. (2011) Bone scintigraphy as a diagnostic method in unilateral hyperactivity of the mandibular condyles: a review and meta-analysis of the literature. Int J Oral Maxillofac Surg 40: 11–17.
6. Obwegeser HL, Makek MS. (1986) Hemimandibular hyperplasia–hemimandibular elongation. J Maxillofac Surg 14: 183–208.
7. Maal TJ, Plooij JM, Rangel FA, Mollemans W, Schutyser FA, et al. (2008) The accuracy of matching three-dimensional photographs with skin surfaces derived from cone-beam computed tomography. Int J Oral Maxillofac Surg 37: 641–646.
8. Kau CH, Richmond S, Incrapera A, English J, Xia JJ. (2007) Three-dimensional surface acquisition systems for the study of facial morphology and their application to maxillofacial surgery. Int J Med Robot 3: 97–110.
9.
Maal TJ, Van Loon B, Plooij JM, Rangel F, Ettema AM, et al. (2010) Registration of 3-dimensional facial photographs for clinical use. J Oral Maxillofac Surg 68: 2391–2401.
10. Verhoeven TJ, Coppen C, Barkhuysen R, Bronkhorst EM, Merkx MAW, et al. (2013) Three-dimensional evaluation of facial asymmetry after mandibular reconstruction: preliminary validation of a new method using stereophotogrammetry. Int J Oral Maxillofac Surg 42: 19–25.
11. Besl PJ, McKay HD. (1992) A method for registration of 3-D shapes. IEEE Trans Pattern Anal Mach Intell 14: 239–256.
12. Maal TJJ, Verhamme LM, Van Loon B, Plooij JM, Rangel FA, et al. (2011) Variation of the face in rest using 3D stereophotogrammetry. Int J Oral Maxillofac Surg 40: 1252–1257.
13. Groeve PD, Schutyser F, Cleynenbreugel JV, Suetens P. (2001) Registration of 3D photographs with spiral CT images for soft tissue simulation in maxillofacial surgery. Med Image Comput Comput Assist Interv 2208: 991–996.
14. Plooij JM, Swennen GR, Rangel FA, Maal TJ, Schutyser FA, et al. (2009) Evaluation of reproducibility and reliability of 3D soft tissue analysis using 3D stereophotogrammetry. Int J Oral Maxillofac Surg 38: 267–273.
15. Swennen GRJ, Schutyser FAC, Hausamen J-E. (2005) Three-Dimensional Cephalometry: A Color Atlas and Manual. Berlin: Springer GmbH: 320.
16. Hajeer MY, Ayoub AF, Millett DT, Bock M, Siebert JP. (2002) Three-dimensional imaging in orthognathic surgery: the clinical application of a new method. Int J Adult Orthodon Orthognath Surg 17: 318–330.
17. Ianetti G, Cascone P, Belli E, Cordaro L. (1989) Condylar hyperplasia: cephalometric study, treatment planning, and surgical correction (our experience). Oral Surg Oral Med Oral Pathol 68: 673–681.
18. Jones RH, Tier GA. (2012) Correction of facial asymmetry as a result of unilateral condylar hyperplasia. J Oral Maxillofac Surg 70: 1413–1425.
19. Villanueva-Alcojol L, Monje F, González-García R.
(2011) Hyperplasia of the mandibular condyle: clinical, histopathologic, and treatment considerations in a series of 36 patients. J Oral Maxillofac Surg 69: 447–455.
20. Chia MSY, Naini FB, Gill DS. (2008) The aetiology, diagnosis and management of mandibular asymmetry. Ortho Update 1: 44–52.
21. Boehnen C, Flynn P. (2007) Accuracy of 3D scanning technologies in a face scanning scenario. Proc. 5th Int. Conf. on 3-D Digital Imaging and Modeling.
22. Ferrario VF, Sforza C, Poggio CE, Tartaglia G. (1994) Distance from symmetry: a three-dimensional evaluation of facial asymmetry. J Oral Maxillofac Surg 52: 1126–1132.
23. Hwang HS, Youn IS, Lee KH, Lim HJ. (2007) Classification of facial asymmetry by cluster analysis. Am J Orthod Dentofacial Orthop 132: 279.e1–6.
24. Meyer-Marcotty P, Alpers GW, Gerdes AB, Stellzig-Eisenhauer A. (2010) Impact of facial asymmetry in visual perception: a 3-dimensional data analysis. Am J Orthod Dentofacial Orthop 137: 168–169.
25. Primozic J, Ovsenik M, Richmond S, Kau CH, Zhurov A. (2009) Early crossbite correction: a three-dimensional evaluation. Eur J Orthod 31: 352–356.
26. Primozic J, Richmond S, Kau CH, Zhurov A, Ovsenik M. (2011) Three-dimensional evaluation of early crossbite correction: a longitudinal study. Eur J Orthod 350: 7–13.
27. Kaban LB. (2009) Mandibular asymmetry and the fourth dimension. J Craniofac Surg 20: 622–631.

Nanoscale Advances PAPER

Open Access Article. Published on 17 August 2020. This article is licensed under a Creative Commons Attribution-NonCommercial 3.0 Unported Licence.
Coaxial double helix structured fiber-based triboelectric nanogenerator for effectively harvesting mechanical energy†

Jinmei Liu,a Nuanyang Cui,a Tao Du,a Gaoda Li,b Shuhai Liu,b Qi Xu,a Zheng Wang,a Long Gu*a and Yong Qin*b

a School of Advanced Materials and Nanotechnology, Xidian University, Xi'an 710071, China
b Institute of Nanoscience and Nanotechnology, Lanzhou University, Lanzhou 730000, China. E-mail: qinyong@lzu.edu.cn

† Electronic supplementary information (ESI) available: additional figures, including a schematic diagram illustrating the fabricating process of the FTNG, the structure and working process of the FTNG under the lateral sliding mode and the vertical contact-separation mode, and output performance, together with the ESI videos (washing video, wearing video, whirligig video). See DOI: 10.1039/d0na00536c

Cite this: Nanoscale Adv., 2020, 2, 4482. Received 28th June 2020, Accepted 14th August 2020. DOI: 10.1039/d0na00536c. rsc.li/nanoscale-advances

Harvesting energy from the surrounding environment, particularly from human body motions, is an effective way to provide sustainable electricity for low-power mobile and portable electronics. To get adapted to the human body and its motions, we report a new fiber-based triboelectric nanogenerator (FTNG) with a coaxial double helix structure, which is appropriate for collecting mechanical energy in different forms. With a small displacement (10 mm at 1.8 Hz), this FTNG could output 850.20 mV voltage and 0.66 mA m⁻² current density in the lateral sliding mode, or 2.15 V voltage and 1.42 mA m⁻² current density in the vertical separating mode. Applications on the human body are also demonstrated: an output of 6 V and 600 nA (3 V and 300 nA) could be achieved when the FTNG was attached to a cloth (worn on a wrist). The output of the FTNG was maintained after washing or long-time working. This FTNG is highly adaptable to the human body and has the potential to be a promising mobile and portable power supply for wearable electronic devices.
Introduction

Nowadays, mobile and portable electronics have been widely applied in communication, personal health-care monitoring, environmental-safety monitoring, and so on.1–9 For powering mobile and portable electronics, numerous batteries and supercapacitors have been utilized, but their lifetime is usually limited, which leads to problems such as frequent recharging and greatly hinders their applications.10–12 An effective way to solve such issues is to harvest and utilize the widely existing energy from the surrounding environment. In our ambient environment, numerous energy sources could be harvested and utilized, such as solar energy, mechanical energy, thermal energy, and chemical energy. As a kind of mechanical energy, human body motion energy is closely related to human activities, which makes it ubiquitously available in the applicable environment for mobile and portable devices. Thus, it is important to collect and convert human body motion energy into electricity as a mobile energy source for mobile and portable electronics. Aimed at harvesting the widely existing mechanical energy, triboelectric nanogenerators (TENGs) were invented based on triboelectrification and electrostatic induction.13–19 Using this technology, many groups have developed wearable TENGs that harvest and convert human body motion energy into electricity,20–35 which makes body motion energy a feasible and available power source for mobile and portable electronics. Because of fibers' merits of being small, lightweight, bendable, and washable, fiber-based wearable TENGs have been widely studied.36–38 Zhong et al.
fabricated a ber-based TENG to convert the biomechanical motions/vibration energy into electricity using one CNT-coated cotton ber, and one PTFE and CNT-coated cotton ber in an entangled structure.39 Kim et al. fabricated a fabric-based TENG for powering wearable electronics by weaving bers consisting of Al bers and PDMS tubes.40 Zhao et al. developed a wearable TENG by directly weaving Cu-coated PE bers and polyimide-coated Cu-PET bers in two vertical directions.41 Dong et al. developed a 3- dimensional TENG for harvesting biomechanical energy using three types of bers: blended ber consisting of stainless steel ber and polyester ber, PDMS-coated energy-harvesting ber, and binding bers in three directions.42 Wen et al. demon- strated a TENG built using a Cu-coated-EVA tube along with PDMS and Cu-coated EVA tube to collect random body motion energies.43 Aer that, He et al. fabricated a ber-based TENG with a silicone rubber ber, in which the CNT layer and the copper ber function as two electrodes.44 Chen et al. reported a wearable TENG from commercial PTFE, carbon, and cotton bers with the traditional shuttle weaving technology.45 By This journal is © The Royal Society of Chemistry 2020 http://crossmark.crossref.org/dialog/?doi=10.1039/d0na00536c&domain=pdf&date_stamp=2020-10-10 http://orcid.org/0000-0002-5249-0422 http://orcid.org/0000-0002-6713-480X http://creativecommons.org/licenses/by-nc/3.0/ http://creativecommons.org/licenses/by-nc/3.0/ https://doi.org/10.1039/d0na00536c https://pubs.rsc.org/en/journals/journal/NA https://pubs.rsc.org/en/journals/journal/NA?issueid=NA002010 Paper Nanoscale Advances O pe n A cc es s A rt ic le . P ub li sh ed o n 17 A ug us t 20 20 . D ow nl oa de d on 4 /6 /2 02 1 2: 16 :2 7 A M . T hi s ar ti cl e is l ic en se d un de r a C re at iv e C om m on s A tt ri bu ti on -N on C om m er ci al 3 .0 U np or te d L ic en ce . 
choosing and processing materials together with designing a flexible structure, the fiber-based TENGs in all the above-mentioned studies can effectively scavenge mechanical energy in different forms. However, the complexity and fragility of hybrid composites composed of cotton thread, carbon nanotube, PDMS, etc. could reduce their robustness, lead to disconnection or short-circuits, and further affect the output performance. Since robustness is an important factor in a TENG's capacity for practical applications, it is attractive to implement a highly robust fiber-based TENG with cost-effective materials and a flexible structure. Herein, we report a fiber-based triboelectric nanogenerator (FTNG) with high robustness that can effectively scavenge biomechanical energy from both weak physiological motions and vigorous behavior. In the FTNG, the positive and negative triboelectric layers (Nylon and PTFE fibers) were first wrapped around their electrodes to form the two core–shell parts of the FTNG, and then these two core–shell fibers were twined into a coaxial double helix structure. The FTNG can be worn on the human wrist or attached to the cloth to harvest the gentle energy of body motion. Besides, it could also be used to harvest the spinning energy from a rapid rotation.

Results and discussion

As shown in Fig. 1a, the FTNG consists of one Nylon fiber-wrapped Cu fiber and one PTFE fiber-wrapped Cu fiber, and they are twined into a double helix structure. Nylon and PTFE act as the frictional surfaces, and the Cu fibers present inside work as the positive and negative electrodes, respectively. The fabrication process of the FTNG is illustrated in Fig. S1a† and described in detail in the experimental section.

Fig. 1 (a) The structure of the FTNG. (b) Digital photography of the FTNG. (c) An enlarged view of the part marked in the red square of (b). (d) Stress–strain curves of the Nylon-wrapped Cu fiber, PTFE-wrapped Cu fiber, and the composite FTNG fiber. The inset is a digital photography of the FTNG hanging a 500 g weight.

The optical photo of the as-fabricated FTNG shown in Fig. 1b indicates the high flexibility of this structure. An enlarged view of the part in the red square is shown in Fig. 1c, in which the Nylon/Cu fiber and PTFE/Cu fiber were interwoven at regular intervals along the axial direction. The lateral image and the cross-sectional image of the FTNG are shown in Fig. S1b and c,† respectively. The FTNG weighs only 0.58 g, as shown in Fig. S1d,† which demonstrates that it is light enough to be used as a wearable power source with little discomfort. Since tensile loading is common in daily activities, a tensile-loading test was essential to examine the ability of the FTNG to endure mechanical operations. As shown in Fig. 1d, the composite fiber exhibits a strength of 200 MPa at a tensile strain of 150%, which is a little higher than that of the Nylon-wrapped Cu fiber (~150 MPa) and the PTFE-wrapped Cu fiber (~95 MPa). Moreover, a weight of 500 g can be steadily hung on the composite fiber, as shown in the inset of Fig. 1d. The double helix structure thus improves the tensile properties, which is beneficial for the robustness of the FTNG and enables it to harvest mechanical energy from violent activities. To test the output performance of the FTNG, it was tensioned by fixing its two ends, and a sewing polyester thread of 0.20 mm in
diameter as a contact object was rubbed against the FTNG, as shown in Fig. S1e.† When sliding and contact occur, the FTNG starts to work and generates electricity. Fig. 2a shows a full working cycle of the FTNG's operation in the sliding mode. When the polyester fiber moves towards the Nylon fiber, the surfaces of the two fibers come into contact and rub against each other. According to the triboelectric series, the electron-gaining ability of polyester is relatively stronger than that of Nylon, so electrons migrate from the Nylon surface to the polyester surface, leaving the polyester negatively and the Nylon positively charged. As displayed in Stage I, when the polyester fiber moves to the right, the contact surfaces of polyester and PTFE rub against each other, and electrons migrate from the polyester to the PTFE, making the PTFE fiber a negatively charged surface. Meanwhile, the net electric field drives electrons from the PTFE's electrode to the Nylon's electrode until it is screened by the induced charges moved between the two electrodes. As shown in Stage II, when the polyester wire continues sliding to the right, the contact reaches the aligned position, where the positive and negative triboelectric charges are completely balanced. When the polyester wire slides towards the left, the contact position returns to the misaligned condition, and the free electrons are driven from the Nylon's electrode to the PTFE's electrode, as presented in Stage III, leading to a backflow of induced free electrons. This process continues until the polyester wire, still sliding towards the left, reaches the aligned position (Stage IV). When the leftward sliding continues into a misaligned position, a reversed flow of induced electrons is observed (Stage V). Consequently, the power generation process of the FTNG in one cycle is completed. By sliding the polyester fiber back and forth along the FTNG, charges are alternately transferred between the two electrodes of the Nylon and PTFE fibers.

Fig. 2 (a) Working mechanism of the FTNG under a lateral sliding mode. (b) Enlarged output voltage of the FTNG under a frequency of 1.8 Hz with a displacement of 10 mm. (c) The current density of the FTNG obtained by dividing the output current in Fig. S1e† by the contact area. (d) The accumulative charge quantity obtained by integrating the output current in Fig. S1e.†

Under a 10 mm displacement at 1.8 Hz, the FTNG generates an output voltage of 850.20 mV and an output current of 19.52 nA, as shown in Fig. S1f and g,† respectively. An enlarged view of the output voltage peak in one cycle is shown in Fig. 2b, from which the four working stages can be observed clearly. As demonstrated in Fig. 2c, the current density could reach 0.66 mA m⁻², which is the ratio of the output current to the sliding contact area. The actual contact area in the sliding process was about 29.41 mm², as detailed in Table 1 in the ESI.† Fig. 2d is the integral of the output current curve, from which the accumulative charge quantity reaches 16.73 nC and the charging rate reaches 1.67 nC s⁻¹. Here, the charging rate is the average electric quantity per second, calculated from the integral of the current curves. The moving speed and frictional area can largely affect a TENG's output performance. As shown in Fig. 3a and b, with a constant 10 mm sliding displacement, both the FTNG's output voltage and its output current increase with increasing frequency.

Fig. 3 (a) Output voltage and (b) output current of the FTNG under different lateral sliding frequencies. (c) Output voltage and (d) output current of the FTNG under different lateral sliding displacements. (e) Output current before and after the washing operation. (f) Output current when continually working for 3 hours.

With the increase in the sliding frequency, the sliding speed becomes larger, which shortens one working cycle and increases the number of working cycles in a fixed time. Consequently, the peak value of the open-circuit voltage increased from 140.53 mV at 0.3 Hz to 688.27 mV at 1.5 Hz (Fig. 3a). As shown in Fig. 3b, the current peak value increases from 3.13 nA at 0.3 Hz to 14.43 nA at 1.5 Hz, which means that the peak current density increases from 0.11 mA m⁻² at 0.3 Hz to 0.48 mA m⁻² at 1.5 Hz. During the relative sliding process, the actual frictional area between the FTNG and the polyester fiber increases with the sliding displacement. To investigate this aspect, the FTNG was made to work at different sliding displacements at a fixed sliding frequency of 1 Hz. When the sliding displacement varied between 3 mm and 7 mm, the voltage peak value increased from 70.56 mV to 441.02 mV (Fig. 3c), and the current peak value increased from 0.77 nA to 6.45 nA (Fig. 3d). When divided by the contact area of 29.41 mm², the peak current density increased from 0.03 mA m⁻² at 3 mm to 0.22 mA m⁻² at 7 mm. This can be explained by the larger displacement leading to a larger contact area. These measurements demonstrate the FTNG's ability to scavenge mechanical energy from the low-frequency, small-amplitude motions that widely exist in the daily activities of humans. As shown in Fig.
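The quoted current densities and accumulated charge follow directly from the measured current trace. The sketch below reproduces the arithmetic; the constant synthetic trace is illustrative and is not the paper's data. The unit bookkeeping is the key point: 1 nA mm⁻² is numerically equal to 1 mA m⁻², which is why dividing nanoamps by square millimetres yields the mA m⁻² figures, and nA·s is nC.

```python
import numpy as np

def output_summary(current_nA, t_s, contact_area_mm2):
    """Peak current density (mA m^-2), accumulated charge (nC), and
    average charging rate (nC/s) from a current trace, as described in
    the text. Unit check: nA / mm^2 == mA / m^2, and nA * s == nC."""
    peak_density = np.max(np.abs(current_nA)) / contact_area_mm2
    # trapezoidal integration of |I(t)| dt gives the transferred charge
    dt = np.diff(t_s)
    charge_nC = np.sum(0.5 * (np.abs(current_nA[1:]) + np.abs(current_nA[:-1])) * dt)
    charging_rate = charge_nC / (t_s[-1] - t_s[0])
    return peak_density, charge_nC, charging_rate

# Illustrative trace: a constant 19.52 nA over the 29.41 mm^2 sliding
# contact area reproduces the quoted 0.66 mA m^-2 peak density.
t = np.linspace(0.0, 10.0, 1001)
i = np.full_like(t, 19.52)
density, charge, rate = output_summary(i, t, 29.41)
```

Applied to the measured trace in Fig. S1e,† the same integration yields the reported 16.73 nC and 1.67 nC s⁻¹.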
3e and ESI Video S1,† the FTNG was immersed in water and stirred with a glass rod to test whether it can be washed like clothes without a reduction in performance. The comparison of the output currents before and after the washing process (Fig. 3e) shows no obvious reduction, which indicates that the FTNG possesses good washing durability. Further, the robustness of the FTNG was tested by continuous operation for 3 h (~10 000 cycles) at a 1 Hz sliding frequency and 10 mm sliding displacement. As shown in Fig. 3f, after the charge accumulation process, the output current remains stable, which indicates the high robustness and long-term stability of the FTNG. Aside from harvesting the sliding mechanical energy along the fiber's length direction in the above sliding mode, the FTNG can also harvest mechanical energy when it contacts and separates from other fabrics, such as polyester fabric, as shown in Fig. S2a,† in a contact-separation TENG working mode. As the FTNG contacts and separates from the fabric, electrons flow back and forth between the two electrodes in an external circuit, generating an alternating current output. Here, a polyester fabric with fibers 0.20 mm in diameter was used to contact and separate from the FTNG. At a 1.8 Hz frequency and 20 mm displacement, an output voltage of 2.15 V and an output current of 69.3 nA were achieved (Fig. S2b and c†). The corresponding peak output current density was 1.42 mA m⁻². The actual contact area in the vertical separating process was about 48.80 mm², as detailed in Table 2 in the ESI.† The effect of the separation frequency on the FTNG's output performance was investigated with the separating distance set at 10 mm. As shown in Fig. S2d and e,† when the frequency increases from 0.3 to 1.5 Hz, the output voltage and current increase from 436.78 mV and 4.97 nA to 1291.83 mV and 22.05 nA, respectively.
Also, the corresponding current density, obtained by dividing by the contact area of 48.80 mm², increases from 0.10 mA m⁻² at 0.3 Hz to 0.47 mA m⁻² at 1.5 Hz. To test the ability to harvest energy from human activity, 3 FTNGs were woven into a bracelet in a parallel manner and placed on one experimenter's wrist, as presented in Fig. 4a and ESI Video S2.† In this case, the friction from shaking hands occurs between the FTNG and the wrist. Fig. 4b and c show the corresponding output voltage and output current generated by the experimenter's hand-shaking motion. The output signals easily reach 3 V and 200 nA (an enlarged view of the output signals is shown in Fig. S3†). Then, 7 FTNGs were attached to the waist of one experimenter in parallel, as shown in Fig. 4d and ESI Video S3.† When the experimenter swings his arm alternately along the lateral and vertical directions, the friction occurs between the FTNG and the clothes.

Fig. 4 (a) Demonstration of the application of the FTNG to harvest the wrist motion energy. (b) Output voltage and (c) output current of the FTNG driven by the wrist's motion. (d) Photograph showing the application of the FTNG attached on the cloth to harvest the body motion energy in walking or jogging. (e) Output voltage and (f) output current of the FTNG attached on the cloth.

Fig. 4e and f show the output voltage and output current generated by the swing-arm movements, respectively. The output signals easily reach 6 V and 600 nA, and every wave packet in them corresponds to one arm swing. An enlarged view of the output signals can be found in Fig. S4.† Therefore, this FTNG has the potential to act as a mobile and portable power supply for wearable devices. To further test the adaptability of the FTNG for versatile mechanical energy harvesting, a whirligig was used to drive the FTNG. The whirligig is a circular disc that spins when strings passing through its center are pulled (the radius of the central disc was ~35 mm). As shown in Fig. 5a, two FTNGs are inserted into the two center holes, respectively.

Fig. 5 (a) Schematic of the application of the FTNG for harvesting spinning energy. (b) Output voltage and (c) output current of the FTNG driven by the spinning movement. The corresponding enlarged view of one wave packet of the output voltage (d) and output current (e).

One cycle of movement of the circular disc consists of two processes, forward and backward winding (ESI Video S4†). During the forward winding process, the input stretching force on the FTNG (exerted by the human hands) accelerates the circular disc to its maximum rotating speed. Simultaneously, the two FTNGs begin to come into contact with each other and reach a tightly coiled state. Then, in the backward winding process, no input force acts on the FTNG, and the disc rotates in the reverse direction. Consequently, the two FTNGs separate from each other and return to a parallel state. After this position, the inward force is applied again, and the two FTNGs come into contact with each other again. This cycle of winding and unwinding of the FTNGs repeats itself, generating electricity from the two FTNGs. Fig. 5b and c show the output voltage and current driven by the spinning of the circular disc. The corresponding voltage and current were 1.2 V and 40 nA, respectively. An enlarged view of one wave packet of the output voltage and current is shown in Fig. 5d and e, respectively, each corresponding to a winding or unwinding process. This demonstration implies that such a tough FTNG can also be extended to harvest motion energy from high-speed, vigorous movements.

Conclusions

In summary, we developed an FTNG with a coaxial double helix structure that utilizes a general Nylon/Cu fiber and PTFE/Cu fiber for effectively harvesting mechanical energy from the human body. Under a small displacement (10 mm, 1.8 Hz), this FTNG could output a voltage of 850.20 mV and a current density of 0.66 mA m⁻² in the lateral sliding mode, and 2.15 V and 1.42 mA m⁻² in the vertical separating mode. Even after a washing process or 3 hours of continuous working (~10 000 cycles), no observable degradation of the output performance was found. The large output of the FTNG and its flexible, lightweight, and robust structure make it suitable for continuous power harvesting.
When the FTNG was worn on a human wrist, it delivered an output of 3 V and 200 nA when shaking hands. It can also harvest the energy of arm swings during walking, generating an output of 6 V and 600 nA when attached to the clothing. Furthermore, this FTNG is highly adaptable for harvesting rotating mechanical energy. These features make this FTNG a promising mobile and portable power supply for wearable electronic devices.

Methods

Fabrication of the FTNG

Here, an enameled Cu wire (0.14 mm in diameter) was chosen as the inner electrode, and Nylon fibers (0.15 mm in diameter) and PTFE fibers (0.25 mm in diameter) were chosen as the frictional surface materials. First, the Cu wire was fixed in the middle and then wrapped with Nylon fibers, using a homemade rotating support, to fabricate the Nylon-coated Cu fiber with a core–shell structure. The diameter of this core–shell composite fiber was about 0.65 mm. Using the same method, PTFE fibers were twined around a central Cu wire to fabricate the PTFE-wrapped Cu fiber with a core–shell structure. The diameter of this fabricated fiber was about 0.72 mm. Finally, the Nylon-wrapped Cu fiber and the PTFE-wrapped Cu fiber were twined with each other to form a composite fiber with a double helix structure, yielding a FTNG 1.22 mm in diameter.

Measurement of FTNG

During the output performance test of the FTNG, a commercial linear motor was used to apply the external force through stretching and releasing operations. A low-noise current preamplifier (SR570) was used to measure the FTNG's output current, and a low-noise preamplifier (SR560) was used to measure the FTNG's output voltage.

Author contributions

The manuscript was written through contributions of all authors. All authors have given approval to the final version of the manuscript.

Conflicts of interest

There are no conflicts to declare.
Acknowledgements

Research was supported by the Joint Fund of Equipment Pre-Research and Ministry of Education (6141A02022518), the Fundamental Research Funds for the Central Universities (No. XJS191401, JB191403, JB191401, JB191407, No. lzujbky-2018-ot04), and the National Natural Science Foundation of Shaanxi Province (No. 2018JQ5153, 2020JM-182).
Coaxial double helix structured fiber-based triboelectric nanogenerator for effectively harvesting mechanical energy: Electronic supplementary information.
doi:10.1016/j.joms.2005.05.227

trauma. Roughly half of the laryngeal fractures in our series were managed non-operatively, although approximately three-quarters required airway intervention ranging from intubation to emergent cricothyroidotomy. Clinicians treating maxillofacial trauma need to be familiar with the signs and symptoms of this injury. A timely evaluation of the larynx and rapid airway intervention are essential for a successful outcome. The Schaefer classification of injury severity and corresponding treatment guidelines were consistent with our study.

References

Leopold DA: Laryngeal trauma. A historical comparison of treatment methods.
Arch Otolaryngol 109:106, 1983

Schaefer SD, Stringer SP: Laryngeal trauma, in Bailey BJ, Pillsbury HC, Driscoll BP (eds): Head and Neck Surgery: Otolaryngology. Philadelphia, PA, Lippincott-Raven, 1998, pp 947-956

Short and Long Term Effects of Sildenafil on Skin Flap Survival in Rats

Kristopher L. Hart, DDS, 705 Aumond Rd, Augusta, GA 30909 (Baur D; Hodam J; Wood LL; Parham M; Keith K; Vazquez R; Ager E; Pizarro J)

Statement of the Problem: Annually in the United States, approximately 175,000 people sustain severe facial trauma requiring major surgical repair. These injuries often cause significant loss of facial skin, leading to severe aesthetic and functional deficits. Skin flaps are the foundation for reconstructing such defects. The most important factor determining the survival of these flaps is the delivery of oxygen via the circulation. A number of therapeutic modalities have been explored to improve blood flow and oxygenation of flap tissue. One principal approach has been to increase blood flow by vasodilation. However, due to their hypotensive effects, the vasodilators tested thus far have not been utilized in surgical repair of facial skin. Phosphodiesterase (PDE) inhibitors, which include the drug sildenafil, are a relatively new class of FDA-approved drugs whose effect on tissue viability has not been widely explored. The vasodilatory effects of these drugs have the potential to enhance blood flow to wound sites, improve oxygen supply, and promote wound repair. In this study, we examined whether administering sildenafil intraperitoneally at a dose of 45 mg/kg/d has a beneficial effect on the survival of surgical skin flaps in rats.

Materials and Methods: Surgical skin flaps were evaluated using orthogonal polarization spectral imaging, flap image analysis, and histology at 1, 3, 5, and 7 days. Orthogonal polarization spectral imaging provides high-quality, high-contrast images of the microcirculation of skin flaps.
Areas of normal capillary flow are easily differentiated from areas of stasis and areas completely devoid of vessels. First, rats were assigned to either a sildenafil-treated (45 mg/kg/day IP), vehicle control, or sham (no injection) group. Second, caudally based dorsal rectangular (3 x 10 cm) flaps were completely raised and then stapled closed. Third, spectral imaging was used to determine the distances from the distal end of the flap to the zones of stasis and zones of normal flow. Finally, animals were sacrificed and the flaps removed and photographed. Digital images of the flaps were used to determine the percent of black, discolored (gray/red), and normal tissue.

Method of Data Analysis:
a. Sample size: N = 152 rats
b. Duration of study: 3 months
c. Statistical methods: One-way analysis of variance (ANOVA)
d. Subjective analysis: No

Results: The orthogonal polarization spectral imaging results showed a significant decrease in the zone of necrosis (no vessels present) in rats treated with sildenafil one and three days after surgery. We also found a significant decrease in the total affected area, which consists of the zones of necrosis and stasis, in treated rats three days after surgery. Digital photography analysis also showed a significant decrease in the area of necrosis (black tissue) at three days. These findings support the results obtained using spectral imaging. No significant differences were found between sildenafil-treated and control animals five and seven days after surgery.

Conclusion: These results demonstrated that 45 mg/kg/d IP of sildenafil may have a beneficial effect on skin survivability at the early stages of wound healing. Orthogonal polarization spectral imaging has been proven to predict areas of necrosis more accurately than photographic analysis. This method allowed us to observe differences between sildenafil-treated and control rats as early as 24 hours and as late as three days after surgery.
Although we did not see any benefit when animals were treated with 45 mg/kg/d IP five and seven days after surgery, we believe that changes in the treatment regimen may enhance long-term flap survivability.

References

Olivier WA, Hazen A, Levine JP, et al: Reliable assessment of skin flap viability using orthogonal polarization imaging. Plast Reconstr Surg 112:547, 2003

Sarifakioglu N, Gokrem S, Ates L, et al: The influence of sildenafil on random skin flap survival in rats: An experimental study. Br J Plast Surg 57:769, 2004

Funding Source: United States Army

Oral Abstract Session 4 AAOMS • 2005 63

2005 Straumann Resident Scientific Award Winner

Histomorphometric Assessment of Bony Healing of Rabbit Critical-Sized Calvarial Defects With Hyperbaric Oxygen Therapy

Ahmed M. Jan, DDS, The Hospital for Sick Children, S-525, 555 University Ave, Toronto, Ontario M5G 1X8,
Method of Data Analysis: Data analysis included qual- itative assessment of the calvarial specimens as well as quantitative histomorphometric analysis to compute the amount of regenerated bone within the defects. Hema- toxylin and eosin stained sections were sliced and cap- tured by a digital camera (RT Color; Diagnostic Instru- ments Inc, Sterling Heights, MI). A blinded investigator examined merged images and analyzed them for quantity of new bone regeneration. Statistical significance was established with a p value � .05. Results: The HBO group showed bony union and demon- strated more bone formation than the control group at 6 weeks (p � .001). The control group did not show bony union in either defect by 12 weeks. There was no significant difference in the amount of new bone formed in the HBO group at 6 weeks compared with 12 weeks (p � .309). However, the bone at 6 weeks was more of a woven charac- ter, while at 12 weeks it was more lamellated and more mature. Again, in the HBO group both the critical-sized and the supracritical-sized defects healed equally (p � .520). Conclusion: HBO therapy has facilitated the bony heal- ing of both critical-sized and supracritical-sized rabbit calvarial defects. Since bony healing was achieved early, it is reasonable to assume that an even larger than 18 mm defect (if it were technically feasible) might have healed within the 12 week period of study aided by HBO. Adjunctive HBO, based on histomorphometrics, doubles the amount of new bone formed within both the critical sized and the supracritical-sized defects. It allowed an increase in the critical size by more than 20%. References Moghadam HG, Sàndor GK, Holmes HI, et al: Histomorphometric evaluation of bone regeneration using allogeneic and alloplastic bone substitutes. 
J Oral Maxillofac Surg 62:202, 2004 Muhonen A, Haaparanta M, Gronroos T, et al: Osteoblastic activity and neoangiogenesis in distracted bone of irradiated rabbit mandible with or without hyperbaric oxygen treatment. Int J Oral Maxillofac Surg 33:173, 2004 Craniofacial Growth Following Cytokine Therapy in Craniosynostotic Rabbits Harry Papodopoulus, DDS, MD, University of Pittsburgh School of Dental Medicine, 3501 Terrace Street, Pittsburgh, PA 15261 (Ho L; Shand J; Moursi AM; Burrows AM; Caccamese J; Costello BJ; Morrison M; Cooper GM; Barbano T; Losken HW; Opperman LM; Siegel MI; Mooney MP) Statement of the Problem: Craniosynostosis affects 300- 500/1,000,000 births. It has been suggested that an over- expression of Tgf-beta 2 leads to calvarial hyperostosis and suture fusion in craniosynostotic individuals. This study was to test the hypothesis that neutralizing antibodies to Tgf-beta 2 may block its activity in craniosynostotic rabbits, preventing coronal suture fusion in affected individuals, and allowing unrestricted craniofacial growth. Materials and Methods: Twenty-eight New Zealand White rabbits with bilateral delayed-onset coronal suture synostosis had radiopaque dental amalgam markers placed on either side of coronal sutures at 10 days of age (synos- tosis occurs at approximately 42 days of age). At 25 days, the rabbits were randomly assigned to three groups: 1) Sham control rabbits (n � 10); 2) Rabbits with non-spe- cific, control IgG antibody (100ug/suture) delivered in a slow release collagen vehicle (n � 9); and 3) Rabbits with Tgf-beta 2 neutralizing antibody (100ug/suture) delivered in slow release collagen (n � 9). The collagen vehicle in groups Two and Three was injected subperiosteally above the coronal suture. Longitudinal lateral and dorsoventral head radiographs and somatic growth data were collected from each animal at 10, 25, 42, and 84 days of age. 
Method of Data Analysis: Significant mean differences were assessed using a one-way analysis of variance. Results: Radiographic analysis showed significantly greater (p � 0.05) coronal suture marker separation, overall craniofacial length, cranial vault length and height, cranial base length, and more lordotic cranial base angles in rabbits treated with anti-Tgf-beta-2 anti- body than groups at 42 and 84 days of age. Conclusion: These data support our initial hypothesis that interference with Tgf-beta-2 production and/or function may rescue prematurely fusing coronal sutures and facilitate craniofacial growth in this rabbit model. These findings also suggest that this cytokine therapy may be clinically significant in infants with insidious or progressive postgestational craniosynostosis. References Poisson E, Sciote JJ, Koepsel R, et al: Transforming growth factor- beta isoform expression in the perisutural tissue of craniosynostotic rabbits. Cleft Palate Craniofac J 41:392, 2004 Oral Abstract Session 4 64 AAOMS • 2005 work_mzlxcrzjcrh2dmyl22ywegzpfu ---- doi:10.1016/j.jembe.2006.02.012 y and Ecology 334 (2006) 316–323 www.elsevier.com/locate/jembe Journal of Experimental Marine Biolog A clockwork mollusc: Ultradian rhythms in bivalve activity revealed by digital photography David L. Rodland ⁎, Bernd R. Schöne, Samuli Helama, Jan K. Nielsen, Sven Baier INCREMENTS Research Group, Institute for Geology and Paleontology, Johann Wolfgang Goethe University — Frankfurt am Main, Senckenberganlage 32, 60325 Frankfurt am Main, Germany Received 16 June 2005; received in revised form 28 November 2005; accepted 16 February 2006 Abstract Time-lapse digital images can be acquired and archived using web-cameras, allowing non-invasive analysis of behavior patterns of bivalve molluscs at ultradian (sub-daily) time-scales over long intervals. These records can be analyzed directly by a human operator or through properly calibrated image analysis software. 
Preliminary results using species of marine and freshwater bivalves identify several ultradian biological rhythms of similar duration. Wavelet analysis indicates strong periodicity in mantle and siphon activity in the 3 to 7 min range, with longer duration shell contraction periods at 60–90 min. The recurrence of these rhythms among marine and freshwater bivalve species maintained under constant (but differing) conditions suggests the influence of common intrinsic drivers (chemico-physical mechanisms or biological clocks). Sub-daily growth increments preserved in the shells of rapidly growing bivalve species are potentially related to these biological rhythms, with implications for shell growth, biomineralization, and the temporal resolution of paleoclimate proxy data. © 2006 Elsevier B.V. All rights reserved. Keywords: Anodonta cygnea; Arctica islandica; Mytilus edulis; Sclerochronology; Wavelet analysis 1. Introduction The periodicity of growth increments in the skeletons of bivalve molluscs has been documented extensively ranging from tidal cycles to seasonal variability and annual reproduction breaks, and can even preserve decadal climatic oscillations (e.g., Richardson, 1988; Goodwin et al., 2001; Schöne et al., 2004). Because of their periodicity, these increments have played a critical role in studies that depend on establishing growth chronologies. Examples include: establishing the or- ⁎ Corresponding author. Tel.: +49 69 798 22974; fax: +49 69 798 22958. E-mail address: d.rodland@em.uni-frankfurt.de (D.L. Rodland). 0022-0981/$ - see front matter © 2006 Elsevier B.V. All rights reserved. 
doi:10.1016/j.jembe.2006.02.012 ganic origin of fossils (e.g., Leonardo da Vinci's Leicester codex; see Gould, 1997), reconstructing paleoclimate records (e.g., Schöne, 2004), determining the seasonality of shellfish predation at prehistoric human settlements (e.g., Deith, 1983), determining modes of evolutionary change (e.g., Jones and Gould, 1999), and estimating changes in the distance from the Earth to the Moon over geologic time (e.g., Barker, 1968). While the identification of annual bands is well documented (e.g., Clark, 1974), the precise nature and origin of smaller growth increments remains a matter of some dispute. Early work in the field suggested that these were produced by circadian cycles, entrained by daily light variations (e.g., Pannella and MacClintock, mailto:d.rodland@em.unirankfurt.de http://dx.doi.org/10.1016/j.jembe.2006.02.012 A clockwork mollusc: Ultradian rhythms in bivalve activity revealed by digital photography Introduction Materials and methods Results Arctica islandica Mytilus edulis Anodonta cygnea Discussion Acknowledgements References work_mzshbmxslnc5bivllvr2d4cglm ---- Review began 11/05/2020 Review ended 11/14/2020 Published 11/26/2020 © Copyright 2020 Chung et al. This is an open access article distributed under the terms of the Creative Commons Attribution License CC-BY 4.0., which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. Type 2 Diabetic Retinopathy Screening in a General Practice: A Five-Year Retrospective Analysis Anthony J. Chung , My Nhi Dang 1. Medicine and Surgery, Queen Elizabeth Hospital Woolwich, London, GBR 2. General Practice, Imperial College London, London, GBR Corresponding author: Anthony J. Chung, anthonyjychung@yahoo.co.uk Abstract Aim To assess current standards of diabetic retinopathy screening in primary care against the National Institute of Clinical Excellence (NICE) guidelines for type 2 diabetes mellitus. 
Moreover, to determine whether individuals with diabetes were screened for diabetic retinopathy no later than three months from referral to the local eye screening service and no later than one year from their last retinal screen.

Materials and methods

A single-center, retrospective audit was undertaken at a small general practice. Data was collected from the health records of individuals placed on the type 2 diabetes register from 01/01/2013 to 01/01/2018. Individuals who were diagnosed with diabetes whilst registered at a different practice, who had pre-diabetic retinal screening, or who were referred onto a different screening pathway were excluded. A total of 50 records were audited; data collection covered demographics, the dates individuals were placed on the diabetes register, and dates of attendance and non-attendance at screening.

Results

16.0% of individuals with type 2 diabetes underwent retinal screening which adhered to the NICE guidelines. Of the cohort which did not adhere, 59.5% experienced an interval greater than three months between diagnosis and first retinal screening, and 64.3% experienced a screening interval greater than one year.

Conclusions

Diabetic retinopathy screening of individuals must be improved to meet the NICE standards. Interventions should be implemented to increase awareness among general practitioners and practice nurses to ensure all people with diabetes receive their first retinal screen within the first three months of diagnosis, with regular annual screening thereafter.

Categories: Endocrinology/Diabetes/Metabolism, Family/General Practice, Ophthalmology
Keywords: type 2 diabetes, diabetic retinopathy, diabetes mellitus, screening, primary care

Introduction

Diabetic retinopathy, a common complication of both type 1 and type 2 diabetes mellitus, is the single largest cause of blindness before old age. However, it takes several years for diabetic retinopathy to progress to a stage where sight is threatened.
Diabetic Eye Screening, a national screening program introduced in 2003, was initiated with the aims of reducing the risk of sight loss through early detection and treatment, where necessary, of sight-threatening retinopathy. This is achieved through examination of the retina via mydriatic digital photography, and screening is offered to all people with diabetes aged 12 years and over. By 2008, the screening program achieved population coverage across the entirety of England. In 2015-16, the annual uptake of the program was 82.8% [1]. In 2012, the Royal College of Ophthalmologists (RCOphth) authored the Diabetic Retinopathy Guidelines, providing valuable clinical guidance for the management of diabetic eye disease [2]. The RCOphth guidelines make particular reference to the National Institute of Clinical Excellence (NICE) guidelines and support the use of a population-based digital photography screening program for diabetic retinopathy in the United Kingdom. The current standards outlined for adults with type 2 diabetes [3] state that on diagnosis, general practitioners should immediately refer adults with type 2 diabetes to the local eye screening service; perform screening as soon as possible and no later than three months from referral; and arrange repeat structured eye screening annually.

Open Access Original Article. DOI: 10.7759/cureus.11713
How to cite this article: Chung A J, Dang M (November 26, 2020) Type 2 Diabetic Retinopathy Screening in a General Practice: A Five-Year Retrospective Analysis. Cureus 12(11): e11713. DOI 10.7759/cureus.11713

This retrospective audit was undertaken with the aim of determining whether current diabetic retinopathy screening in patients registered to one general practice adhered to the NICE guidelines.
In particular, this audit assessed whether individuals with type 2 diabetes had undergone screening for diabetic retinopathy no more than three months from referral to the local eye screening service. Furthermore, we investigated whether people with diabetes had undergone retinal screening no later than a year from their most recent screen. Compliance with these objectives was determined by obtaining retinal screening dates from health records. This article was previously presented as an oral presentation at the Advanced Ophthalmologic Practice 2020 Annual Scientific Meeting on January 11, 2020.

Materials And Methods
A single-center audit was undertaken at a small general practice serving approximately 6000 patients. Data was collected retrospectively and extracted using Vision healthcare software and Docman electronic document management software. All cases read-coded as “Type 2 diabetes mellitus. Placed on register” over the period of five years (1st January 2013 - 1st January 2018) were audited. The audit was registered and approved in line with general practice processes. A total of 89 people were added to the type 2 diabetes register during the data collection period (Figure 1). After examining patient documentation, 29 individuals had been diagnosed with diabetes prior to the data collection period despite being read-coded, and thus were excluded. Additionally, only individuals registered to the practice prior to their diagnosis of diabetes were included. Individuals who had been diagnosed with diabetes in other practices prior to registering with the audited practice were excluded. Furthermore, one person was excluded due to attendance at pre-diabetic retinal screening and another person was excluded due to referral onto a slit lamp biomicroscopy screening pathway. After exclusion, 50 cases were included in this study.
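The interval criteria applied in this audit reduce to a pair of date comparisons per patient: a first retinal screen within three months, and no more than one year between consecutive screens. A minimal sketch of such a check follows; the function name, the field layout, and the 91-day approximation of "three months" are illustrative assumptions, not the audit's actual tooling:

```python
from datetime import date, timedelta

# Hypothetical compliance check mirroring the NICE criteria used in this audit:
# first retinal screen within ~3 months of being placed on the register, and
# no more than one year between consecutive screens.
THREE_MONTHS = timedelta(days=91)  # approximation of "three months"
ONE_YEAR = timedelta(days=365)

def meets_nice_criteria(registered: date, screen_dates: list) -> bool:
    """Return True if a patient's screening history satisfies both intervals."""
    if not screen_dates:
        return False  # never screened
    screens = sorted(screen_dates)
    if screens[0] - registered > THREE_MONTHS:
        return False  # first screen later than three months after registration
    # every subsequent screening interval must be at most one year
    return all(b - a <= ONE_YEAR for a, b in zip(screens, screens[1:]))
```

For example, a patient registered on 4 January 2016 and screened on 1 February 2016 and 20 January 2017 would be counted as compliant, while a patient first screened five months after registration would not.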
The audit collected data on the following domains: age and gender, dates individuals were placed on the diabetes register, dates of attendance and non-attendance to diabetic retinopathy screening, and postponements to screening appointments.

FIGURE 1: Flow chart of selection of patient records for inclusion in audit analysis.

Results
Of the 60 people diagnosed with type 2 diabetes in this audit period, 50 fulfilled the relevant audit criteria and were thus included in the analysis. As seen in Table 1, the overall mean age for people with type 2 diabetes at diagnosis was 59.5 years, with a range from 33 to 83. Additionally, there were a greater number of males (n = 34) than females (n = 16) in this audit.

TABLE 1: Clinical and demographic information for all patients included in analysis.
Variable | Patients (n = 50)
Age, years, mean (range) | 59.5 (33-83)
Female, n (%) | 16 (32.0)
Male, n (%) | 34 (68.0)

In the audit period, 16.0% of people with type 2 diabetes were screened for diabetic retinopathy in line with NICE guidelines. No persons diagnosed with diabetes in 2013 and 2014 had retinal screening which fully met the criteria defined by NICE (Figure 2). From 2014 onwards, the percentage of individuals with diabetes who underwent satisfactory screening increased with each consecutive year. There was a large increase in the percentage of individuals fulfilling the NICE criteria between those diagnosed in 2015 (8.3%) and 2016 (40.0%). The increasing trend peaks with individuals diagnosed with diabetes in 2017 (45.5%).

FIGURE 2: Percentage of diabetic patients diagnosed between 2013-2018 who underwent annual retinal screening in accordance with the NICE guidelines.
Of the 42 individuals determined not to have met the NICE guidelines, 59.5% had a time period of greater than three months between their diagnosis and their first appointment, 64.3% attended screening but had screening intervals greater than a year, and 26.2% did not attend their scheduled screening appointments (Figure 3). One individual (2.4%) had formally postponed their screening appointment, while six patients’ records (14.3%) contained missing documentation regarding whether they were screened or not.

FIGURE 3: Reasons why diabetic patients diagnosed between 2013-2018 were determined not to have met the NICE retinal screening guidelines.

Of the individuals who attended their first offered retinal screening appointment, the mean period of time between diagnosis and their initial screening appointment was 108.7 days (Figure 4). In comparison, individuals who did not attend their first offered retinal screening appointment underwent their first retinal screen an average of 435.2 days following their date of diagnosis.

FIGURE 4: Average period of time between diabetes diagnosis and first retinal screening appointment.

In summary, 16.0% of individuals included in this audit had diabetic retinal screening which completely fulfilled the NICE criteria; this includes attending the diabetic screening program within three months of referral and being screened within one year of their last retinal screening appointment. Of the individuals who did not meet the NICE guidelines, 59.5% were due to the interval between the diagnosis date and first screening being greater than three months, and 64.3% were due to screening intervals of greater than one year in length.
Moreover, there is a 326.5 day difference in the time taken from diagnosis to first retinal screening appointment between individuals who attend their first appointment and those who do not.

Discussion
This audit was undertaken with the aim of determining whether current diabetic retinopathy screening in people with type 2 diabetes registered to one general practice in London adhered to the NICE guidelines. Furthermore, by identifying areas of improvement, implementing an intervention and re-auditing at a later stage, this audit provides the groundwork to evaluate whether progress has been made in aligning current retinal screening practices with those set out by NICE as part of the United Kingdom's National Health Service (NHS) retinopathy screening program.

Main findings
This audit revealed a positive trend between the year of diagnosis and the percentage of people with type 2 diabetes who underwent retinal screening in accordance with the NICE guidelines. As seen in Figure 2, no people diagnosed with diabetes in 2013 and 2014 underwent retinal screening which fully complied with the NICE guidelines. However, with each consecutive year of diagnosis from 2014, the percentage of individuals compliant with the NICE guidelines progressively increased. This may be explained through several factors. Firstly, reduced awareness of the NICE guidelines and a lack of diabetic screening pathway training among healthcare professionals in the past may have contributed to the low compliance rate. Additionally, changes in patient attitudes towards attending screening programs may account for these results.
Patient attitudes may be influenced by greater patient education on the detrimental effects of diabetic retinopathy [4], social media usage, NHS promotional materials and increased recommendations by healthcare providers [5]. Despite the improvement in compliance rate, people diagnosed with diabetes in 2017 still fell short of the standards outlined by the NICE guidelines, with only 45.5% successfully undergoing retinal screening. Our in-depth examination of why the diabetic retinopathy screening guidelines were not being met identified two main causes (Figure 3). The primary reason was that, despite patient attendance, screening intervals were greater than the NICE-recommended one-year period (64.3%). Secondly, the time period between diagnosis and the first screening appointment was found to be greater than the NICE-recommended maximum of three months (59.5%). Both findings may be explained through a multitude of factors, including administrative errors, long waiting lists, delayed referrals and rescheduling of appointments. Furthermore, 26.2% of people with diabetes did not attend their scheduled screening appointments. It was beyond the scope of our study to identify the reasons why people did not attend their screening appointments; however, this topic has been explored in detail in the literature [6-8]. In particular, Strutton et al. performed a quantitative analysis to identify explanations for why individuals registered to one South London diabetic eye screening program had never attended a screening appointment [8]. The study categorized factors contributing to non-attendance into patient-level factors and system-level factors. Patient-level factors were factors determined to some extent by the patient, while system-level factors were those determined by the healthcare provider.
The study established that reasons for non-attendance included patients having other commitments, patient anxiety and miscommunication regarding the patient’s address and clinical condition. The results of the study by Strutton et al. are useful in providing insight into our study, as our cohort is drawn from a similar demographic and geographic population [8]. It is important to highlight that 14.3% of people with diabetes had incomplete documentation regarding diabetic screening appointments. This included a lack of documentation concerning dates of screening appointments and patient attendance. Whilst missing data is common in any retrospective study, greater attention could be paid in future to minimize the amount of missing data for re-audits. This could be achieved through the implementation of a proforma which would aid healthcare professionals in submitting completed documentation. Our study emphasizes the importance of following up individuals with diabetes who do not attend their first screening appointment. As seen in Figure 4, individuals who did not attend their first offered screening appointment were seen on average 326.5 days later than individuals who did attend their first offered appointment. It appears that non-attendance at the first offered screening appointment is a strong indicator that the individual will not have their first screen until more than a year has passed. Furthermore, our audit has revealed that even with attendance at the first offered screening appointment, the average number of days between diagnosis and that appointment was 108.7 days. This is slightly greater than the maximum three-month interval outlined by NICE, suggesting that system-level factors are the predominant cause of the failure to adhere to the NICE criteria.

Strengths and limitations
There are many strengths associated with this study.
This audit incorporated a large number of people with diabetes during its phase one period, thus allowing it to stand in good stead for future comparisons among other phases. Additionally, by collecting data over a five year period, this study had a substantial data set and was able to identify reliable trends. Finally, the retrospective nature of this study ensures that the data collected was not influenced by any changes in behavior or patient care by healthcare providers. Whilst our study was able to provide valuable insights into the factors underlying why diabetic retinopathy screening of primary care patients is not currently in line with the NICE guidelines, there are some limitations to note. Our study utilized the date individuals were placed on the diabetes register as a substitute for the date of referral to the local eye screening service. This likely resulted in an overestimation of people determined to have not met the NICE criteria, as the date of referral is typically a few days after the date individuals were placed on the diabetes register. Moreover, the shortcomings associated with the single-center study design of this audit should be considered. The lack of external validity in single-center studies means that the results of this study may not be representative of individuals registered to other practices. Differences in patient demographics, varying attitudes and socioeconomic disparities make it difficult to generalize our results to the patient population. This, combined with the different priorities of healthcare providers across the country, introduces further challenges to the applicability of our results. Also, it would have been useful to collect data on travel distance to the nearest screening center, as this is a major factor influencing the elderly population of diabetic individuals, who are at a greater risk of developing diabetic complications.
Conclusions
Overall, this audit explored the key factors that influence whether individuals with type 2 diabetes receive retinopathy screening in accordance with the NICE guidelines. These include long periods of time between diagnosis and screening, screening intervals of greater than a year, non-attendance and lack of documentation. Thus, several interventions are recommended to improve adherence to the diabetic retinopathy screening guidelines. Interventions can be classified into those aimed at healthcare professionals and those aimed at individuals with diabetes. Healthcare professionals in general practice need increased awareness of the NICE diabetic retinopathy guidelines. This may be facilitated through annual training sessions, as well as including this as a mandatory topic of discussion in general practice meetings. Targeted reminder notifications on Vision software to prompt action from healthcare professionals may be helpful in practice. Additionally, implementation of a proforma tailored towards people with diabetes will encourage complete documentation from healthcare professionals. Regular audits are essential to assess compliance with the NICE standards. We also recommend incorporation of regular follow-ups for individuals with diabetes who do not attend their first offered screening appointment, as this cohort is particularly at risk of delayed retinopathy screening. This could be through an escalation in contact methods including text messages, letters and phone calls.

Additional Information
Disclosures
Human subjects: All authors have confirmed that this study did not involve human participants or tissue. Animal subjects: All authors have confirmed that this study did not involve animal subjects or tissue.
Conflicts of interest: In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work. Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work. Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work.

Acknowledgements
The authors thank the administrative staff for their help with the search, the reception staff for their kind words of encouragement, and Dr Zehra Rashid and Dr Saikat Adhikari for their support in conducting the audit and in contributing to and implementing the recommendations.

References
1. Scanlon PH: The English national screening programme for diabetic retinopathy 2003-2016. Acta Diabetol. 2017, 54:515-525. 10.1007/s00592-017-0974-1
2. Diabetic retinopathy guidelines 2012. (2019). Accessed: November 4th: https://www.rcophth.ac.uk/wp-content/uploads/2014/12/2013-SCI-301-FINAL-DR-GUIDELINES-DEC-2012-updated-July-2013.pdf.
3. Type 2 diabetes in adults: management. (2019). Accessed: November 4th: https://www.nice.org.uk/guidance/ng28/chapter/1-Recommendations.
4. Murray CM, Shah BR: Diabetes self-management education improves medication utilisation and retinopathy screening in the elderly. Prim Care Diabetes. 2016, 0:179-85. 10.1016/j.pcd.2015.10.007
5. Eijk KNDV, Blom JW, Gussekloo J, Polak BCP, Groeneveld Y: Diabetic retinopathy screening in patients with diabetes mellitus in primary care: incentives and barriers to screening attendance. Diabetes Res Clin Pract. 2012, 96:10-6. 10.1016/j.diabres.2011.11.003
6. Kashim RM, Newton P, Ojo O: Diabetic retinopathy screening: a systematic review on patients’ non-attendance. Int J Environ Res Public Health.
2018, 15:157. 10.3390/ijerph15010157
7. Graham-Rowe E, Lorencatto F, Lawrenson JG, et al.: Barriers to and enablers of diabetic retinopathy screening attendance: a systematic review of published and grey literature. Diabet Med. 2018, 35:1308-1319. 10.1111/dme.13686
8. Strutton R, Du Chemin A, Stratton IM, Forster AS: System-level and patient-level explanations for non-attendance at diabetic retinopathy screening in Sutton and Merton (London, UK): a qualitative analysis of a service evaluation. BMJ Open. 2016, 6:e010952. 10.1136/bmjopen-2015-010952
work_n3pjlavgijapbhyialpwso6chy ----

Spectral unmixing of multiple lichen species and underlying substrate

International Journal of Remote Sensing
Matthew Morison, Edward Cloutis & Paul Mann
Department of Geography, University of Winnipeg, Winnipeg, Canada
Published online: 02 Jan 2014.
To cite this article: Matthew Morison, Edward Cloutis & Paul Mann (2014) Spectral unmixing of multiple lichen species and underlying substrate, International Journal of Remote Sensing, 35:2, 478-492, http://dx.doi.org/10.1080/01431161.2013.871085
Spectral unmixing of multiple lichen species and underlying substrate
Matthew Morison, Edward Cloutis*, and Paul Mann
Department of Geography, University of Winnipeg, Winnipeg, Canada
(Received 4 July 2013; accepted 24 October 2013)

Cryptogamic covers are a wide range of photoautotrophic organisms which synthesize their own food while using sunlight as an energy source. Globally, cryptogamic covers (such as cyanobacteria, algae, fungi, lichens, and bryophytes) annually take up about 7% of the net primary production of terrestrial vegetation and account for about half of annual biological terrestrial nitrogen fixation. On the basis of these contributions to global carbon and nitrogen cycling, it is crucial to be able to accurately monitor seasonal and regional patterns of cryptogamic cover distribution and abundance. However, lichen-encrusted rock seldom comprises 100% of the ground cover within a pixel of remote-sensed imagery, and challenges thereby arise in lichen mapping and monitoring. Here we explore spectroscopic methods and spectral mixture analysis (SMA) to overcome the challenges of reflectance spectroscopy-based optical remote sensing detection and characterization of crustose lichen species. One suite of discrete wavelengths (λ1 = {400, 470, 520, 570, 680, 800, 1080, 1120, 1200, 1300, 1470, 1670, 1750, 2132, 2198, 2232 nm}) and two wavelength regions (λ2 = {λ: 800 nm ≤ λ ≤ 1300 nm} and λ3 = {λ: 2000 nm ≤ λ ≤ 2400 nm}) were investigated for their ability to discriminate between substrate and different lichen species. We found that the spectral region 800–1300 nm performed best at lichen-substrate differentiation and interspecific lichen differentiation.
Furthermore, measures of central tendency from multiple wavelength regions are superior to most individual wavelength regions, particularly for lichen-rock unmixing.

1. Introduction
Lichens are a type of cryptogamic cover, consisting of a symbiotic relationship between a fungus (mycobiont) and a photosynthetic green algal or cyanobacterial partner referred to as a photobiont (Rees, Tutubalina, and Golubeva 2004). Globally, cryptogamic covers annually take up around 3.9 Pg of carbon, about 7% of the net primary production of terrestrial vegetation, and around 49 Tg of nitrogen, about half of the annual biological terrestrial nitrogen fixation (Elbert et al. 2012). The geographic distribution of many lichen species makes in situ monitoring costly and inefficient in areas such as boreal forest and tundra zones, where lichens comprise up to 70% of the terrestrial ground cover (Solheim et al. 2000), and remote areas where lichens function as the dominant terrestrial organism, such as Antarctica (Kappen 1983), European mountainous areas, and northern hemispheric continental subarctic regions (Nordberg and Allard 2002). Hyperspectral remote-sensing systems have been identified as a desired methodology to monitor lichen extent and abundance (Zhang, Rivard, and Sánchez-Azofeifa 2005; Rees, Tutubalina, and Golubeva 2004; Feng et al. 2013). Several key requirements for remote lichen monitoring are met. Lichen transmission is <3% through the 350–2500 nm range, allowing for reflectance data from lichen species to be spectrally unaffected by the underlying substrate (Bechtel, Rivard, and Sánchez-Azofeifa 2002).

*Corresponding author. Email: e.cloutis@uwinnipeg.ca
In addition, many regions characterized by abundant lichen populations are free of other diverse vegetation (Bjerke et al. 2011), such as the subarctic region, where tree cover is very sparse and the land surface is dominated by rock and lichen coverage (Bartalev et al. 2003). Further, in areas such as the continental subarctic, poor drainage conditions result in wetland cover and large proportions of the region are covered by water. However, this is overcome in this case as lichens lack water-absorbing structures, a waxy cuticle and stomata (Brodo, Sharnoff, and Sharnoff 2001). As a result, water absorption occurs over the whole vegetative body (Loppi et al. 1997). Ager and Milton (1986) determined that changes in the moisture content of the lichen do not affect the locations of their absorption bands. However, there are challenges to the remote monitoring of lichen species. Lichen-encrusted rock seldom comprises 100% of the ground cover within a pixel (Zhang, Rivard, and Sánchez-Azofeifa 2005), and thus it is not feasible to determine lichen extent directly from imagery. SMA is an established technique which uses a linear combination of endmember spectral components to approximate the abundance of each endmember within a mixture spectrum (Somers et al. 2011; Song 2005). This study attempts to address the issue of monitoring the abundance and distribution of terrestrial crustose lichen coverage through hyperspectral remote sensing systems, and to assess the use of SMA as a tool in this regard. Previous research has focused on the capability of remotely sensed imagery to infer reflectance signatures of minerals on the ground by unmixing substrate and lichen reflectance spectra (Zhang, Rivard, and Sánchez-Azofeifa 2005; Rogge et al. 2009).
The application of this capability to mineral exploration and geological mapping is noted; however, from the perspective of lichen detection, characterization, and monitoring, this approach proverbially 'throws the baby out with the bathwater'. In particular, the amount of carbon and nitrogen that lichens are capable of fixing from the atmosphere depends on the lichen species (Lange et al. 2004; Gavazov et al. 2010). Thus, it is important to be able not only to discriminate substrate from encrusting lichen, but also to discriminate between lichen species within a mixed pixel or spectrum.

2. Methods
2.1. Lichen-encrusted rock samples
Fifteen lichen-encrusted rocks were selected from several localities, including Hecla Island, Manitoba (n = 4), Pinawa, Manitoba (n = 3), Churchill, Manitoba (n = 4), Eagle Butte, Alberta (n = 1), and Contwoyto Lake, Nunavut (n = 3). These samples represent a wide range of lichen species (n = 16) and substrate compositions, including basalts, limestones, quartzites, dolostone, schist, and mafic metatuff. A summary of locality, lichen species (classified into rough groupings by colour as orange, green, black, or white/grey), and substrate composition for each sample is shown in Table 1. Ten of the 15 samples contained two endmembers: a single lichen species and underlying substrate. Five of the 15 samples contained three or more endmembers: two or more lichen species and underlying substrate. All samples were kept air-dried from their collection until spectral reflectance measurements were made. Spectral reflectance between wet and dry samples is reported by Rees, Tutubalina, and Golubeva (2004) to differ by less than ±5%, with dry samples exhibiting slightly higher reflectance values.

2.2.
Spectral and endmember abundance data

Reflectance spectra were collected for each endmember (each lichen species on each sample and the underlying substrate of each sample), as well as for a circular area containing one or more lichen species and substrate on each sample to represent a spectral mixture. All spectral data were acquired with an ASD FieldSpec Pro HR spectrometer, which acquires data from 350 to 2500 nm with a spectral resolution of between 2 and 7 nm and a spectral sampling interval of 1.4 nm, internally resampled by the spectrometer to provide 1 nm output data. Sample spectra were measured relative to a calibrated Spectralon reflectance standard, then corrected for minor irregularities in the absolute reflectance of the standard in the 2.0–2.5 μm region and for dark current. Spectral measurements were collected using a fibreoptic (FOV = 25.4°), employing a viewing geometry of incidence = 30° and emission = 0°, with an in-house collimated 50 W quartz tungsten halogen lamp light source at the Planetary Spectrophotometer Facility at the University of Winnipeg. Each spectral curve is an average of 500 scans of a constant field of view. This observational set-up is properly referred to as biconical-bidirectional (Schaepman-Strub et al. 2006). In the ensuing discussion we will refer to it simply as reflectance or bidirectional reflectance.

To model the situation of a mixed pixel in which spectral information from both the substrate and the lichen contributes to the pixel’s spectral composition, two circular carbon black masks were constructed (radii = 6.90 and 3.50 cm) to coincide with viewing heights of 15.1 and 7.7 cm, respectively, from the spectrometer’s optical fibre bundle. Each sample was covered with a circular carbon black mask such that the remaining visible area comprised, heterogeneously, one or more lichen species and the underlying substrate, with the abundance of lichen coverage and substrate coverage varying by sample and mask position. Once the mask was fixed on an area, high-resolution digital photographs were taken to determine areal coverage of the different lichen species and substrate using ImageJ software (Schneider, Rasband, and Eliceiri 2012). ImageJ was used to manually trace polygons delineating areas of lichen and substrate on each sample and to calculate the relative abundance of each.

Table 1. Summary of locality, lichen species (roughly classified by colour as orange, green, black, or white/grey), and substrate composition for each sample.

Sample | Locality                | Substrate composition | Lichen species (colour)
1      | Hecla Island, Manitoba  | Limestone      | Placynthium nigrum (black)
2      | Contwoyto Lake, Nunavut | Basalt         | Dimelaena oriena (orange)
3      | Contwoyto Lake, Nunavut | Mafic metatuff | Umbilicaria deusta (black)
4      | Eagle Butte, Alberta    | Sandstone      | Aspicilia cincera (white/grey)
5      | Pinawa, Manitoba        | Granite        | Physcia caesia (white/grey)
6      | Hecla Island, Manitoba  | Limestone      | Placynthium nigrum (black)
7      | Churchill, Manitoba     | Quartzite      | Melanalia stygia (black)
8      | Churchill, Manitoba     | Pink dolostone | Melanelia disjuncta (black)
9      | Churchill, Manitoba     | Grey schist    | Rhizocarpon geographicum (black)
10     | Hecla Island, Manitoba  | Limestone      | Placynthium nigrum (black)
11     | Churchill, Manitoba     | Granite        | Caloplaca cernia (orange), Rinodina turfacea (black)
12     | Hecla Island, Manitoba  | Limestone      | Candelaria concolor (orange), Placynthium nigrum (black)
13     | Contwoyto Lake, Nunavut | Granite        | Arctoparmelia centrifuga (green), Umbilicaria deusta (black), Lecanora dispersa (white/grey)
14     | Pinawa, Manitoba        | Granite        | Arctoparmelia centrifuga (green), Placynthium asperllum (black), Lecanora muralis (white/grey)
15     | Pinawa, Manitoba        | Granite        | Candelariella aurella (orange), Placynthium asperllum (black), Lecanora muralis (white/grey)

[480 M. Morison et al. Downloaded by [University of Winnipeg] at 07:16, 08 January 2014]
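The areal-coverage computation derived from the traced ImageJ polygons reduces to counting labelled pixels. A minimal numpy sketch of that step (the toy mask, class labels, and function name here are hypothetical — the study traced polygons manually in ImageJ; pixels under the carbon-black mask would be excluded first):

```python
import numpy as np

def areal_abundance(label_mask, class_names):
    """Percentage cover per class from an integer-labelled image mask.

    label_mask : 2-D integer array; pixel value i means the pixel belongs
                 to class_names[i].
    """
    counts = np.bincount(label_mask.ravel(), minlength=len(class_names))
    total = counts.sum()
    return {name: 100.0 * c / total for name, c in zip(class_names, counts)}

# Toy 4x4 mask: 0 = substrate, 1 = lichen
mask = np.array([[0, 0, 1, 1],
                 [0, 0, 1, 1],
                 [0, 0, 0, 1],
                 [0, 0, 0, 1]])
cover = areal_abundance(mask, ["substrate", "lichen"])
# → {'substrate': 62.5, 'lichen': 37.5}
```

The percentages sum to 100 by construction, mirroring the constraint later imposed on the SMA abundances.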
Bidirectional reflectance spectra as defined above (representing a remotely sensed mixed pixel) were then obtained for the identical area on each sample.

2.3. Spectral mean normalization

Spectral mean normalization is a technique used to suppress wavelength-independent differences between spectral features (Wu 2004; Zhang, Rivard, and Sánchez-Azofeifa 2005). The reflectance value at each wavelength is divided by the mean reflectance value over some set of wavelengths:

$$ r'(\lambda) = \frac{r(\lambda)}{\Big(\sum_{\lambda \in \lambda_j} r(\lambda)\Big) / |\lambda_j|}, \qquad (1) $$

where r′(λ) is the mean normalized reflectance at wavelength λ, r(λ) is the raw reflectance at wavelength λ, and λ_j is a set of wavelengths forming a wavelength region for analysis (for example, λ_vis = {λ : 390 nm ≤ λ ≤ 700 nm}, representing the visible region), indexed over j for some number of wavelength regions. |λ_j| is the size of the set λ_j; for example, for λ_vis as defined previously, |λ_vis| = 310. Mean normalized reflectance values may range above 100%. For example, a mean normalized reflectance value of 150% at some wavelength λ represents a reflectance 50% greater than the mean reflectance across all wavelengths under consideration (precisely the set λ_j).

2.4. Spectral mixture analysis

SMA models a mixture spectrum as a linear combination of its component spectral endmembers weighted by their abundance within a field of view, and can be inverted to determine the percentage abundance of known endmembers from a mixture spectrum. The abundances are determined by minimizing the squared difference function d subject to the constraints that the abundances of all k distinct endmembers sum to one and that each endmember has non-negative abundance:

$$ d\big(f_1, f_2, \ldots, f_k; \{e(\lambda)\}_{\lambda \in \lambda_j}\big) = \sum_{\lambda \in \lambda_j} \left[ r'_m(\lambda) - \left( \sum_{i=1}^{k} r'_i(\lambda)\, f_i + e(\lambda) \right) \right]^2, \qquad (2) $$

$$ f_i \geq 0, \qquad (3) $$

$$ \sum_{i=1}^{k} f_i = 1, \qquad (4) $$

where f_i is the percentage abundance of the ith endmember, r′_m(λ) is the mean normalized reflectance of the mixture spectrum at wavelength λ, r′_i(λ) is the mean normalized reflectance of the ith endmember spectrum at wavelength λ, e(λ) is an error term as a function of wavelength, and λ_j is a set of wavelengths forming a wavelength region for analysis. The Generalized Reduced Gradient (GRG) algorithm (Lasdon et al. 1978) was employed to minimize the squared difference function d and to output optimal values for the percentage abundance of each endmember and the error functions e(λ).

2.5. Selecting spectral regions λ_j

Zhang, Rivard, and Sánchez-Azofeifa (2005) demonstrated that a single lichen endmember is suitable for performing SMA in the wavelength region from 2000 to 2400 nm. This technique is preferable when discriminating rock from lichen cover, but an extension of this technique is required in order to differentiate between multiple lichen species in the same mixture spectrum. One set of discrete wavelengths and two wavelength regions were defined in order to highlight both spectral reflectance differences between lichen species and spectral differences between lichens and their underlying substrate. In addition, these regions and the wavelength set purposely avoided the water bands present at 1400 and 1900 nm: although characteristic lichen absorption bands are unaffected by moisture content (Ager and Milton 1986), many regions dominated by lichen species, such as the northern continental subarctic, are dominated by wet conditions year-round (Stow et al. 2004).
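Equations (1)–(4) together amount to a fully constrained linear unmixing problem. A sketch, assuming scipy is available (the paper itself used a GRG solver; here the per-wavelength error term e(λ) is absorbed into the least-squares residual, the sum-to-one constraint is enforced with the standard heavily weighted extra row, and the two endmember spectra are synthetic):

```python
import numpy as np
from scipy.optimize import nnls

def mean_normalize(r):
    """Eq. (1): divide reflectance by its mean over the chosen wavelength set."""
    r = np.asarray(r, dtype=float)
    return r / r.mean()

def unmix(mixture, endmembers, delta=1e3):
    """Constrained linear unmixing: f_i >= 0 and sum(f_i) = 1 (Eqs. (2)-(4)).

    mixture    : (n_wavelengths,) mean-normalized mixture spectrum r'_m
    endmembers : (k, n_wavelengths) mean-normalized endmember spectra r'_i
    nnls enforces non-negativity; the appended row weighted by delta pushes
    the abundances towards summing to one.
    """
    endmembers = np.asarray(endmembers, dtype=float)
    A = np.vstack([endmembers.T, delta * np.ones(len(endmembers))])
    b = np.append(np.asarray(mixture, dtype=float), delta)
    f, _ = nnls(A, b)
    return f

# Synthetic two-endmember check: a mixture of 70 % substrate and 30 % lichen
wl = np.linspace(0.0, 1.0, 50)
substrate = mean_normalize(0.4 + 0.2 * wl)            # smooth, bright spectrum
lichen = mean_normalize(0.1 + 0.05 * np.sin(6 * wl))  # darker, structured spectrum
mix = 0.7 * substrate + 0.3 * lichen
fractions = unmix(mix, np.array([substrate, lichen]))  # ~[0.7, 0.3]
```

Because each mean-normalized spectrum averages to one, any convex combination of them is itself mean-normalized, which is why the synthetic mixture can be built directly from the normalized endmembers.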
We identified one suite of discrete wavelengths and two wavelength intervals as being potentially useful for lichen-substrate discrimination. The first suite of discrete wavelengths (wavelength set) is a collection of 16 distinct wavelengths selected by Rees, Tutubalina, and Golubeva (2004) in their principal components analysis of reflectance spectra of subarctic lichen species from 400 to 2400 nm. The selected wavelengths, termed ‘interesting wavelengths’ by the authors, in nanometres, are λ1 = {400, 470, 520, 570, 680, 800, 1080, 1120, 1200, 1300, 1470, 1670, 1750, 2132, 2198, 2232 nm}. The first spectral region also follows up on the work of Rees, Tutubalina, and Golubeva (2004) and is based upon their recommendation for an ideal wavelength region to discriminate between lichen species, representing the infrared spectrum below the hydroxyl absorption band at 1400 nm: λ2 = {λ: 800 nm ≤ λ ≤ 1300 nm}, with a 1 nm wavelength interval (integer values) between members. We have chosen this wavelength region λ2 to determine whether inter-species discrimination in this region can be extended to discriminating both between species and between lichen and substrate. The second wavelength region is adapted from Zhang, Rivard, and Sánchez-Azofeifa (2005) and takes advantage of the spectral distinction between rock and lichen in the range 2000–2400 nm: λ3 = {λ: 2000 nm ≤ λ ≤ 2400 nm}, also with a 1 nm wavelength interval (integer values) between members.

To minimize over- or under-representation of lichen or substrate endmember spectra in any one particular wavelength region, two measures of central tendency (mean and median values) were applied to the percentage abundances yielded by SMA in each of the three wavelength sets or regions.

3. Results & discussion

3.1.
Endmember spectra

Figure 1 shows mean-normalized bidirectional reflectance spectra of all lichen species and substrate endmembers. The variability (or lack thereof) in lichen spectra (across species, across colours, and between substrate and lichen) presented here provides a preliminary understanding of which wavelength regions may perform best for spectral unmixing.

3.2. Discriminating lichen cover from substrate

Table 2 summarizes the results of SMA to determine percentage abundances for each of the three wavelength sets or regions, as compared with the percentage abundances derived from digital photography. Figure 2 displays the unmixing results for each of the three wavelength sets or regions, as well as the mean and median abundance values over all three. Only substrate abundance is displayed, as linear regression is invariant under linear transformations, and thus the results would be identical for lichen abundance. The wavelength set or region performing best for lichen-substrate discrimination was λ2 = {λ: 800 nm ≤ λ ≤ 1300 nm} (coefficient of determination R² = 0.90) versus λ3 = {λ: 2000 nm ≤ λ ≤ 2400 nm} (R² = 0.80).
This is surprising, as λ3 was selected to take advantage of the spectral differences between lichen cover and the underlying substrate, whereas λ2 was selected to take advantage of inter-species spectral distinctions between lichens. For example, for sample 9, SMA in λ2 (78.4% substrate, 21.6% lichen) outperformed SMA in λ3 (89.6% substrate, 10.4% lichen) when compared with the abundance derived from the image (72.3% substrate, 27.7% lichen).

[Figure 1. Normalized reflectance spectra (350–2500 nm) of endmembers in the study: (a) orange and green lichen species (Dimelaena oriena, Candelariella aurella, Candelaria concolor, Arctoparmelia centrifuga); (b) black lichen species (Melanalia stygia, Placynthium nigrum, Melanelia disjuncta, Rhizocarpon geographicum, Placynthium asperllum, Umbilicaria deusta); (c) substrates (basalt, granite, grey schist, quartzite, limestone); (d) white/grey species (Aspicilia cincera, Lecanora muralis, Lecanora dispersa, Physcia caesia).]

Table 2. Percentage abundance results of lichen cover and substrate by spectral mixture analysis (SMA) for spectral set and regions λ1 (= {400, 470, 520, 570, 680, 800, 1080, 1120, 1200, 1300, 1470, 1670, 1750, 2132, 2198, 2232 nm}), λ2 (= {λ: 800 nm ≤ λ ≤ 1300 nm}), and λ3 (= {λ: 2000 nm ≤ λ ≤ 2400 nm}) for samples 1–10 (single lichen species and substrate mixtures). All values are abundance (%).

Analysis method | Cover class | S1   | S2    | S3   | S4   | S5   | S6   | S7   | S8   | S9   | S10
Digital photo   | Substrate   | 77.1 | 94.5  | 16.6 | 72.9 | 57.3 | 17.9 | 65.8 | 66.6 | 72.3 | 37.3
Digital photo   | Lichen      | 22.9 | 5.5   | 83.4 | 27.1 | 42.7 | 82.1 | 34.2 | 33.4 | 27.7 | 62.7
SMA λ1          | Substrate   | 87.0 | 98.4  | 19.8 | 88.6 | 92.1 | 19.7 | 57.5 | 84.1 | 76.5 | 24.9
SMA λ1          | Lichen      | 13.0 | 1.6   | 80.2 | 11.4 | 7.9  | 80.3 | 42.5 | 15.9 | 23.5 | 75.1
SMA λ2          | Substrate   | 79.8 | 98.3  | 20.6 | 73.8 | 81.6 | 27.2 | 78.1 | 84.5 | 78.4 | 32.0
SMA λ2          | Lichen      | 20.2 | 1.7   | 79.4 | 26.2 | 18.4 | 72.8 | 21.9 | 15.5 | 21.6 | 68.0
SMA λ3          | Substrate   | 77.8 | 100.0 | 17.4 | 73.2 | 15.9 | 8.4  | 67.1 | 77.1 | 89.6 | 33.6
SMA λ3          | Lichen      | 22.2 | 0.0   | 82.6 | 26.8 | 84.1 | 91.6 | 32.9 | 22.9 | 10.4 | 66.4
SMA mean        | Substrate   | 81.5 | 98.9  | 19.3 | 78.5 | 63.2 | 18.4 | 67.6 | 81.9 | 81.5 | 30.2
SMA mean        | Lichen      | 18.5 | 1.1   | 80.7 | 21.5 | 36.8 | 81.6 | 32.4 | 18.1 | 18.5 | 69.8
SMA median      | Substrate   | 79.8 | 98.4  | 19.8 | 73.8 | 81.6 | 19.7 | 67.1 | 84.1 | 78.4 | 32.0
SMA median      | Lichen      | 20.2 | 1.6   | 80.2 | 26.2 | 18.4 | 80.3 | 32.9 | 15.9 | 21.6 | 68.0
[Figure 2. Results of spectral unmixing analysis to determine substrate abundance for lichen-substrate discrimination using (a) wavelength set λ1 (y = 1.13x − 0.17, R² = 0.83); (b) wavelength region λ2 (y = 1.02x + 6.58, R² = 0.90); (c) wavelength region λ3 (y = 1.17x − 11.64, R² = 0.80); (d) mean substrate abundance values for all three wavelength sets and regions (y = 1.10x − 1.74, R² = 0.97); and (e) median substrate abundance values for all three wavelength sets and regions (y = 1.06x + 2.28, R² = 0.91). Axes: substrate abundance (%) from SMA versus substrate abundance (%) from photo.]

Figure 3 shows the normalized reflectance of the mixture as well as the substrate and lichen endmembers for sample #9 (a) between 800 and 1300 nm (λ2) and (b) between 2000 and 2400 nm (λ3). The lichen and substrate endmembers appear, on quick examination, to be spectrally distinct in the wavelength region λ2. Also, the mixture spectrum through λ2 appears to be an approximate linear combination of 80% of the substrate endmember spectrum and 20% of the lichen endmember spectrum, as was confirmed by SMA.
This suggests that although the region between 800 and 1300 nm was selected to highlight spectral differences between lichen species, it appears capable of discriminating between lichen cover and substrate better than the wavelength region selected for this purpose on the basis of the results of Zhang, Rivard, and Sánchez-Azofeifa (2005).

The mean (R² = 0.97) and median (R² = 0.91) unmixing results performed better than any of the three individual wavelength sets or regions. By capturing a wider range of variation between spectral signatures in different regions, potential over- or under-representation of any particular endmember abundance in any one wavelength region is reduced by averaging these abundance values over the three sets or regions.

[Figure 3. Normalized reflectance spectra of the mixture, substrate (grey schist), and lichen (Rhizocarpon geographicum) for sample #9 in the wavelength regions (a) λ2 (800–1300 nm) and (b) λ3 (2000–2400 nm).]

3.3. Discriminating between lichen species

Table 3 summarizes the results of SMA to determine percentage abundances for various lichen species compared with percentage abundances derived from digital photography. Figure 4 displays the unmixing results for each of the three wavelength sets or regions, as well as the mean and median values over all three. Abundance values from SMA and digital photography are displayed for each lichen species and substrate cover class.
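The ensembling across wavelength regions and the regression agreement reported above can be reproduced directly from the Table 2 substrate abundances. A numpy sketch (R² is taken here as the squared Pearson correlation, which equals the R² of the least-squares regression line used in Figures 2 and 4):

```python
import numpy as np

# Substrate abundances (%) for samples 1-10, from Table 2
photo  = np.array([77.1, 94.5, 16.6, 72.9, 57.3, 17.9, 65.8, 66.6, 72.3, 37.3])
sma_l1 = np.array([87.0, 98.4, 19.8, 88.6, 92.1, 19.7, 57.5, 84.1, 76.5, 24.9])
sma_l2 = np.array([79.8, 98.3, 20.6, 73.8, 81.6, 27.2, 78.1, 84.5, 78.4, 32.0])
sma_l3 = np.array([77.8, 100.0, 17.4, 73.2, 15.9, 8.4, 67.1, 77.1, 89.6, 33.6])

# Per-sample ensembling across the three wavelength sets/regions
stacked = np.vstack([sma_l1, sma_l2, sma_l3])
sma_mean = stacked.mean(axis=0)          # reproduces the "SMA mean" row of Table 2
sma_median = np.median(stacked, axis=0)  # reproduces the "SMA median" row

def r_squared(x, y):
    """R^2 of the least-squares line, i.e. the squared Pearson correlation."""
    return float(np.corrcoef(x, y)[0, 1] ** 2)

r2_l2 = r_squared(photo, sma_l2)      # ~0.90, as in Figure 2(b)
r2_mean = r_squared(photo, sma_mean)  # ~0.97, as in Figure 2(d)
```

Averaging the three estimates before regressing against the photo-derived abundances recovers the improvement the authors report for the ensemble over any single wavelength set or region.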
Similar to the results for discriminating between mixtures of substrate and a single lichen species, the best wavelength set or region for discriminating between substrate and multiple lichen species was λ2 = {λ: 800 nm ≤ λ ≤ 1300 nm} (R² = 0.47), compared with λ1 (R² = 0.36) or λ3 (R² = 0.41). This is not surprising, as λ2 was selected for inter-species discrimination as described by Rees, Tutubalina, and Golubeva (2004).

Table 3. Percentage abundance results for multiple lichen species and substrate by spectral mixture analysis (SMA) for spectral set and regions λ1 ({400, 470, 520, 570, 680, 800, 1080, 1120, 1200, 1300, 1470, 1670, 1750, 2132, 2198, 2232 nm}), λ2 ({λ: 800 nm ≤ λ ≤ 1300 nm}), and λ3 ({λ: 2000 nm ≤ λ ≤ 2400 nm}) for samples 11–15 (multiple lichen species and substrate mixtures), in rough colour classes of species (orange, green, black, and white/grey). All values are abundance (%).

Analysis method | Cover class        | S11   | S12  | S13  | S14  | S15
Digital photo   | Orange species     | 0.6   | 27.4 | 0.0  | 0.0  | 0.8
Digital photo   | Green species      | 0.0   | 0.0  | 12.6 | 0.5  | 0.0
Digital photo   | Black species      | 41.6  | 12.9 | 15.2 | 12.9 | 2.8
Digital photo   | White/grey species | 0.0   | 0.0  | 19.2 | 20.5 | 48.6
Digital photo   | Substrate          | 57.8  | 59.7 | 53.0 | 66.1 | 47.8
SMA λ1          | Orange species     | 0.0   | 47.1 | 0.0  | 0.0  | 27.9
SMA λ1          | Green species      | 0.0   | 0.0  | 27.1 | 0.0  | 0.0
SMA λ1          | Black species      | 0.0   | 0.0  | 11.1 | 14.1 | 5.4
SMA λ1          | White/grey species | 0.0   | 0.0  | 7.4  | 58.4 | 29.7
SMA λ1          | Substrate          | 100.0 | 52.9 | 54.4 | 27.5 | 37.0
SMA λ2          | Orange species     | 0.0   | 51.2 | 0.0  | 0.0  | 0.0
SMA λ2          | Green species      | 0.0   | 0.0  | 10.5 | 10.5 | 0.0
SMA λ2          | Black species      | 0.0   | 0.5  | 0.5  | 10.7 | 11.9
SMA λ2          | White/grey species | 0.0   | 0.0  | 21.9 | 50.6 | 32.0
SMA λ2          | Substrate          | 100.0 | 48.4 | 67.2 | 28.1 | 56.0
SMA λ3          | Orange species     | 0.0   | 46.6 | 0.0  | 0.0  | 8.4
SMA λ3          | Green species      | 0.0   | 0.0  | 39.8 | 52.9 | 0.0
SMA λ3          | Black species      | 0.0   | 0.0  | 1.8  | 0.2  | 0.0
SMA λ3          | White/grey species | 0.0   | 0.0  | 0.7  | 3.1  | 47.5
SMA λ3          | Substrate          | 100.0 | 53.4 | 57.6 | 43.8 | 44.2
SMA mean        | Orange species     | 0.0   | 48.3 | 0.0  | 0.0  | 12.1
SMA mean        | Green species      | 0.0   | 0.0  | 25.8 | 21.1 | 0.0
SMA mean        | Black species      | 0.0   | 0.2  | 4.5  | 8.3  | 5.8
SMA mean        | White/grey species | 0.0   | 0.0  | 10.0 | 37.4 | 36.4
SMA mean        | Substrate          | 100.0 | 51.6 | 59.7 | 33.2 | 45.7
SMA median      | Orange species     | 0.0   | 47.1 | 0.0  | 0.0  | 8.4
SMA median      | Green species      | 0.0   | 0.0  | 27.1 | 10.5 | 0.0
SMA median      | Black species      | 0.0   | 0.0  | 1.8  | 10.7 | 5.4
SMA median      | White/grey species | 0.0   | 0.0  | 7.4  | 50.6 | 32.0
SMA median      | Substrate          | 100.0 | 52.9 | 57.6 | 28.1 | 44.2

For example, spectral unmixing of sample 13 was more successful (although still not to an ideal standard) at inter-species lichen discrimination, compared with digital photography (12.6% green lichen species, 15.2% black lichen species, 19.2% grey lichen species, 53.0% substrate), in the wavelength region λ2 (27.1% green lichen species, 11.1% black lichen species, 7.4% grey lichen species, 67.2% substrate) than in the wavelength region λ3 (39.8% green lichen species, 1.8% black lichen species, 0.7% grey lichen species, 57.6% substrate).

[Figure 4. Results of spectral unmixing analysis to determine the abundance of multiple lichen species and substrate using (a) wavelength set λ1 (y = 0.71x + 7.94, R² = 0.36); (b) wavelength region λ2 (y = 0.86x + 3.86, R² = 0.47); (c) wavelength region λ3 (y = 0.83x + 4.65, R² = 0.41); (d) mean abundance values for all three wavelength sets and regions (y = 0.80x + 5.48, R² = 0.47); and (e) median abundance values for all three wavelength sets and regions (y = 0.80x + 4.66, R² = 0.44). Cover classes plotted: substrate, orange, green, black, and white/grey species.]
Figure 5 displays the normalized reflectance spectra for each lichen endmember, the substrate endmember, and the mixture spectrum. These results confirm the work of Rees, Tutubalina, and Golubeva (2004), who suggested that lichen species may be spectrally differentiated in the region between 800 and 1300 nm. In this case, λ3 performed better than λ2 at discriminating between total lichen coverage (all species) and substrate, consistent with previous work by Zhang, Rivard, and Sánchez-Azofeifa (2005) showing that the wavelength region between 2000 and 2400 nm is optimal for distinguishing between lichen spectra (independent of species) and substrate. This suggests that, when not concerned with lichen species composition but faced with the challenge of multiple lichen species, spectral unmixing in the 2000 to 2400 nm region is preferred.

The mean (R² = 0.47) and median (R² = 0.44) unmixing results performed better than unmixing in the λ1 wavelength set and λ3 wavelength region; however, in this case, both measures of central tendency performed equally well as, or worse than, the unmixing results from the λ2 region.

[Figure 5. Normalized reflectance spectra of the mixture, substrate (granite), and lichens (Umbilicaria deusta, Lecanora dispersa, Arctoparmelia centrifuga) for sample #13 in the wavelength regions (a) λ2 (800–1300 nm) and (b) λ3 (2000–2400 nm).]

In all cases, R² values for unmixing results were much lower for multiple lichen species (ranging from 0.36 to 0.47) than for single lichen species and substrate mixtures (0.80 to 0.97).
The complexity of three or four endmembers in an unmixing model makes the problem much more difficult, particularly when multiple endmembers are spectrally ambiguous (as is the case with many lichen species in the selected wavelength set or regions), as illustrated in the case of sample #13 in Figure 4. Of note is the fact that all lichen species under consideration in this study were of the crustose type. Many environments investigated for lichen abundance and coverage with hyperspectral remote sensing systems may be covered (predominantly or peripherally) by other lichen morphologies (foliose and fruticose). These other morphologies may have spectral properties that were not accounted for in this work and are deserving of similar treatment to determine whether spectral unmixing is a feasible method to discriminate lichen cover intramorphologically, and between fruticose and foliose lichens and substrate.

4. Conclusions

This study has assessed the potential of SMA to determine the abundance of substrate and one or more lichen species within a mixture. One wavelength set and two wavelength regions (λ1 = {400, 470, 520, 570, 680, 800, 1080, 1120, 1200, 1300, 1470, 1670, 1750, 2132, 2198, 2232 nm}, λ2 = {λ: 800 nm ≤ λ ≤ 1300 nm}, and λ3 = {λ: 2000 nm ≤ λ ≤ 2400 nm}) were investigated for their ability to discriminate between substrate and different lichen species. Water absorption bands at 1400 and 1900 nm were purposely avoided in each wavelength set or region. The mean and median unmixing results performed better than unmixing in the λ1 set and λ3 region; however, in this case, both measures of central tendency performed equally well as, or worse than, the unmixing results from the λ2 region. In all cases, R² values for unmixing results were much lower for multiple lichen species (ranging from 0.36 to 0.47) than for single lichen species and substrate mixtures (0.80 to 0.97).
The complexity of three or four endmembers in an unmixing model makes the problem much more difficult, particularly when multiple endmembers are spectrally ambiguous (as is the case with many lichen species in the selected wavelength set and regions). In both lichen-substrate differentiation and inter-species lichen differentiation, the wavelength region between 800 and 1300 nm performed better than the region between 2000 and 2400 nm and than the wavelength set λ1 = {400, 470, 520, 570, 680, 800, 1080, 1120, 1200, 1300, 1470, 1670, 1750, 2132, 2198, 2232 nm}. However, when considering the challenge of lithological mapping in areas with multiple lichen species, unmixing in the spectral region from 2000 to 2400 nm performed best for determining total lichen abundance (species-independent) and substrate abundance. The mean and median abundance values from SMA over several different wavelength sets or regions performed better than SMA in any individual wavelength set or region.

Previous studies employing SMA to unmix lichen and substrate have focused on the application of this procedure to lithologic mapping using published spectra of rock-encrusting lichens, combined with normalization procedures in different spectral regions applied to both image and endmember spectra. In doing so, some methodologies ‘provide geologists with an opportunity to group all lichens into one endmember’ (Zhang, Rivard, and Sánchez-Azofeifa 2005). Although such approaches are effective for geological mapping, recent emphasis on the importance of lichens in global carbon and nitrogen cycling underscores the importance of long-term lichen detection, characterization, and monitoring.
In the future, a closer examination of lichen/substrate unmixing in the visible region may provide a solution to the difficulties inherent in intraspecies (or intramorphological) discrimination.

Acknowledgements

The authors wish to thank the Canada Foundation for Innovation, the Manitoba Research Innovations Fund, the Canadian Space Agency, and the University of Winnipeg for supporting the establishment of the Planetary Spectrophotometer Facility where this work was conducted. Additional thanks are extended to G. Bedard, J. McCorquedale, R. St J. Lambert, and M. Zalick for assistance with sample collection; to M. Bennett for assistance with figure preparation; and to J. Marsh at the University of Alberta for assistance with the identification of lichen species.

Funding

This research was funded by the Natural Sciences and Engineering Research Council of Canada, the Canadian Space Agency, and the University of Winnipeg.

References

Ager, C. M., and N. M. Milton. 1986. “Spectral Reflectance of Lichens and Their Effects on Reflectance of Rock Substrates.” Geophysics 52: 898–906.
Bartalev, S. A., A. S. Belward, D. V. Erchov, and A. S. Isaev. 2003. “A New SPOT4-VEGETATION Derived Land Cover Map of Northern Eurasia.” International Journal of Remote Sensing 24: 1977–1982.
Bechtel, R., B. Rivard, and A. Sánchez-Azofeifa. 2002. “Spectral Properties of Foliose and Crustose Lichens Based on Laboratory Experiments.” Remote Sensing of Environment 82: 389–396.
Bjerke, J. W., S. Bokhorst, M. Zielke, T. V. Callaghan, F. W. Bowles, and G. K. Pheonix. 2011. “Contrasting Sensitivity to Extreme Winter Warming Events of Dominant Sub-Arctic Heathland Bryophyte and Lichen Species.” Journal of Ecology 99: 1481–1488.
Brodo, I. M., S. D. Sharnoff, and S. Sharnoff. 2001. Lichens of North America. New Haven, CT: Yale University Press.
Elbert, W., B. Weber, S. Borrows, J. Steinkamp, B. Büdel, M. O. Andreae, and U. Pöschl. 2012.
“Contribution of Cryptogamic Covers to the Global Cycles of Carbon and Nitrogen.” Nature Geoscience 5: 459–462. doi:10.1038/ngeo1486.
Feng, J., B. Rivard, D. Rogge, and A. Sánchez-Azofeifa. 2013. “The Longwave Infrared (3–14 μm) Spectral Properties of Rock Encrusting Lichens Based on Laboratory Spectra and Airborne SEBASS Imagery.” Remote Sensing of Environment 131: 173–181. doi:10.1016/j.rse.2012.12.018.
Gavazov, K. S., N. A. Soudzilovskaia, R. S. P. van Logtestijn, M. Braster, and J. H. C. Cornelissen. 2010. “Isotopic Analysis of Cyanobacterial Nitrogen Fixation Associated with Subarctic Lichen and Bryophyte Species.” Plant and Soil 333 (1–2): 507–517.
Kappen, L. 1983. “Ecology and Physiology of the Antarctic Lichen Usnea Sulphurea (Koenig) Th. Fries.” Polar Biology 1: 249–255.
Lange, O. L., B. Büdel, A. Meyer, H. Zellner, and G. Zotz. 2004. “Lichen Carbon Gain Under Tropical Conditions: Water Relations and CO2 Exchange of Lobariaceae Species of a Lower Montane Rainforest in Panama.” The Lichenologist 36: 1–14. doi:10.1017/S0024282904014392.
Lasdon, L. S., A. D. Waren, A. Jain, and M. Ratner. 1978. “Design and Testing of a Generalized Reduced Gradient Code for Nonlinear Programming.” ACM Transactions on Mathematical Software 4 (1): 34–50.
Loppi, S., L. Nelli, S. Ancora, and R. Bargagli. 1997. “Accumulation of Trace Elements in the Peripheral and Central Parts of a Foliose Lichen Thallus.” The Bryologist 100: 251–253.
Nordberg, M.-L., and A. Allard. 2002. “A Remote Sensing Methodology for Monitoring Lichen Cover.” Canadian Journal of Remote Sensing 28 (2): 262–274. doi:10.5589/m02-026.
Rees, W. G., O. V. Tutubalina, and E. I. Golubeva. 2004. “Reflectance Spectra of Subarctic Lichens Between 400 and 2400 nm.” Remote Sensing of Environment 90: 281–292.
Rogge, D. M., B. Rivard, J. Harris, and J. Zhang. 2009.
“Application of Hyperspectral Data for Remote Predictive Mapping, Baffin Island, Canada.” Reviews in Economic Geology 16: 209–222.
Schaepman-Strub, G., M. E. Schaepman, T. H. Painter, S. Dangel, and J. V. Martonchik. 2006. “Reflectance Quantities in Optical Remote Sensing – Definitions and Case Studies.” Remote Sensing of Environment 103: 27–42.
Schneider, C. A., W. S. Rasband, and K. W. Eliceiri. 2012. “NIH Image to ImageJ: 25 Years of Image Analysis.” Nature Methods 9: 671–675.
Solheim, I., O. Engelsen, B. Hosgood, and G. Andreoli. 2000. “Measurement and Modeling of the Spectral and Directional Reflection Properties of Lichen and Moss Canopies.” Remote Sensing of Environment 72: 78–94.
Somers, B., G. P. Asner, L. Tits, and P. Coppin. 2011. “Endmember Variability in Spectral Mixture Analysis: A Review.” Remote Sensing of Environment 115: 1603–1616.
Song, C. H. 2005. “Spectral Mixture Analysis for Subpixel Vegetation Fractions in the Urban Environment: How to Incorporate Endmember Variability?” Remote Sensing of Environment 95: 248–263.
Stow, D. A., A. Hope, D. McGuire, D. Verbyla, J. Gamon, F. Huemmrich, S. Houston, C. Racine, M. Sturm, K. Tape, L. Hinzman, K. Yoshikawa, C. Tweedie, B. Noyle, C. Silapaswan, D. Douglas, B. Griffith, G. Jia, H. Epstein, D. Walker, S. Daeschner, A. Petersen, L. Zhou, and R. Myneni. 2004. “Remote Sensing of Vegetation and Land-Cover Change in Arctic Tundra Ecosystems.” Remote Sensing of Environment 89: 281–308.
Wu, C. 2004. “Normalized Spectral Mixture Analysis for Monitoring Urban Composition Using ETM+ Imagery.” Remote Sensing of Environment 93: 480–492.
Zhang, J., B. Rivard, and A. Sánchez-Azofeifa. 2005. “Spectral Unmixing of Normalized Reflectance Data for the Deconvolution of Lichen and Rock Mixtures.” Remote Sensing of Environment 95 (1): 57–66. doi:10.1016/j.rse.2004.11.019.
work_n3tpbupcrjeqrawxl2il6ndoxe ----

Two-dimensional digital photography for child body posture evaluation: standardized technique, reliable parameters and normative data for age 7-10 years

REVIEW (Open Access)

L. Stolinski 1,2,3*, M. Kozinoga 1,2, D. Czaprowski 4,5, M. Tyrakowski 6, P. Cerny 7,8,9, N. Suzuki 10 and T. Kotwicki 1

Abstract

Background: Digital photogrammetry provides measurements of body angles or distances which allow for quantitative posture assessment, with or without the use of external markers. It is becoming an increasingly popular tool for the assessment of the musculoskeletal system. The aim of this paper is to present a structured method for the analysis of posture and its changes using a standardized digital photography technique.

Material and methods: The study was twofold. The first part comprised 91 children (44 girls and 47 boys) aged 7–10 (8.2 ± 1.0), students of a primary school; its aim was to develop the photographic method, choose the quantitative parameters, and determine the intraobserver reliability (repeatability) along with the interobserver reliability (reproducibility) of measurements in the sagittal plane using digital photography, as well as to compare Rippstein plurimeter and digital photography measurements. The second part involved 7782 children (3804 girls, 3978 boys) aged 7–10 (8.4 ± 0.5), who underwent digital photography postural screening.
The methods consisted of measuring and calculating selected parameters, establishing the normal ranges of the photographic parameters, presenting percentile charts, and noting common pitfalls and possible sources of errors in digital photography.

Results: A standardized procedure for the photographic evaluation of child body posture was developed. The photographic measurements revealed very good intra- and inter-rater reliability for the five sagittal parameters and good agreement with Rippstein plurimeter measurements. The parameters displayed insignificant variability over time. Normative data were calculated based on the photographic assessment, and percentile charts were provided to serve as reference values. The technical errors observed during photogrammetry are discussed in detail in this article.

Conclusions: Technical developments allow for the regular use of digital photogrammetry in body posture assessment. The specific child positioning described here makes it possible to avoid incidentally modified posture. Image registration is simple, quick, harmless, and cost-effective. The semi-automatic image analysis, together with the normal values and percentile charts, makes the technique reliable for documenting a child's posture and monitoring the effects of corrective therapy.

Keywords: Standardization, Digital photography, Photogrammetry, Percentile charts, Normative data, Primary school children

* Correspondence: stolinskilukasz@op.pl
1Department of Spine Disorders and Pediatric Orthopedics, University of Medical Sciences, 28 Czerwca 1956r. no. 135/147, 61-545 Poznan, Poland
2Rehasport Clinic, Poznan, Poland
Full list of author information is available at the end of the article

© The Author(s).
2017 Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.

Stolinski et al. Scoliosis and Spinal Disorders (2017) 12:38 DOI 10.1186/s13013-017-0146-7

Background

Human body posture

Body posture is defined as the alignment of body segments and is considered an important health indicator [1]. Human body posture is also described as a motor habit accompanying daily activities [2]. Normal human posture is characteristic of the vertical position and relies on spinal alignment and its position relative to the head and pelvis [3, 4]. Human body posture undergoes large variability depending on age, sex, body growth, environmental factors, and the psychophysical status of the individual [5–7]. The accurate description of human body posture is a topic of interest for scientists aiming to measure and document posture. For clinicians, posture evaluation plays a role in the global health assessment. On the one hand, faulty posture may result from various disorders, while the posture itself may even be pathognomonic for certain diseases (e.g., spondylolisthesis).
On the other hand, incorrect body posture can have a negative impact on overall health, leading to pain or functional disorder, which means that it can affect the quality of life both in childhood and adulthood [8]. The quality of body posture results from the individual settings of the respective body parts, especially the alignment of the spine [9] and pelvis [10] in the sagittal plane. The gravity line is defined as the vertical line passing through the center of gravity of the entire body. For a standing subject, the reference posture is described by the relations between the gravity line and the body segments [11]. A balanced arrangement of body parts provides the basis for the position of the center of mass. Such an arrangement enables the maintenance of horizontal gaze as well as effective muscle contraction and stretching without unnecessary loss of energy [12]. Diagnostic tools measuring the sagittal spinal curvatures and the pelvis alignment can be used to describe a correct posture while standing [13]. The multitude of methods and diagnostic tools makes it difficult to standardize the assessment of body posture. In addition, there is no clear boundary between normal and faulty posture, in particular in terms of quantitative posture parameters. Thus, the data on the prevalence of faulty posture are very divergent and based on different diagnostic criteria [14].
The content of the paper fulfills the following objectives: (1) to standardize the digital photography technique for posture assessment; (2) to determine the intra-observer reproducibility and the inter-observer reliability of the photographic sagittal parameters: sacral slope (SS), lumbar lordosis (LL), thoracic kyphosis (TK), chest inclination (CI), and head protraction (HP); (3) to check the validity of photographic measurements against the Rippstein plurimeter measurements; (4) to analyze the variability of the five sagittal photographic angles (SS, LL, TK, CI, HP) and two coronal parameters, the Anterior Trunk Symmetry Index (ATSI) and the Posterior Trunk Symmetry Index (POTSI), over time (1 week); (5) to present the normative values of the sagittal photographic parameters based on the photographic assessment of 7782 children aged 7–10; and (6) to discuss common pitfalls and sources of errors in digital photography used for posture evaluation.

Methods

Standardization of posture assessment with digital photography

The use of reliable tools and methods for clinical measurements is the first step towards evidence-based medicine [15] as the foundation of effective and safe clinical practice. Just like any tool, the photographic technique for posture evaluation should be checked and validated before use. The standardization required to assess body posture was performed as part of this study.

Preparing a patient for photogrammetry

Marking anatomical body landmarks In the procedure below, body posture is assessed without the use of external markers attached to the skin. Dots corresponding to the anatomical body landmarks are drawn on the skin with a non-toxic color pencil. The following body landmarks are marked (Fig. 1):

Fig. 1 Anatomical points marked on the body

– The center of the sternal notch
– Anterior superior iliac spine (ASIS), right and left
– Posterior superior iliac spine (PSIS), right and left
– Spinous process of C7
– The point between the T12 and L1 spinous processes
– The point between the L5 and S1 spinous processes
– The center of the acromion, right and left
– The center of the greater trochanter, right and left
– The center of the external malleolus of the ankle joint

Positioning the patient

Positioning children during body posture evaluation The standardized procedure for photographic body posture evaluation includes the photos presented in Fig. 2: spontaneous standing frontal posture (2a), sagittal profiles including photos of the left side (2b), left side actively corrected (2c), left side in forward bending (2d), spontaneous standing posture of the back (2e), right side (2f), right side actively corrected (2g), right side in forward bending (2h), as well as front (2i) and back (2j) forward bending.

Positioning children during scoliosis rib and lumbar prominence evaluation In order to document the angle of trunk rotation at different trunk levels, one can take a sequence of photos (5–15) during forward bending of the child (Fig. 3).

Lower limb positioning in photographic examination The undressed child (wearing underwear and, for girls, a narrow bra) stands barefoot with the knees extended and the feet hip-width apart. The feet are placed on longitudinal and crosswise lines marked on the ground so that the lateral malleoli are situated over the center of the crosswise line and the feet stay parallel to the longitudinal line (Fig. 4). Most of the upper part of the intergluteal cleft should be uncovered.

Upper limb and head positioning in photographic examination The hair is tied with a hair clip to make the external auditory meatus and the upper body contours visible. Children are asked to look forward at eye level. For the front and back photos, the upper limbs hang down loosely.
For the lateral photos, in order to uncover the contour of the back, the upper limbs are slightly flexed in the gleno-humeral and elbow joints, at approx. 10°–20° and 20°–30° respectively. The gleno-humeral joint flexion is performed slowly to avoid any trunk movement, especially backward trunk hyperextension (Fig. 5). For the front photos taken during forward bending, the upper limbs are kept together and directed forward towards the ground, as in Adam's test (Fig. 3). For the lateral photos taken during forward bending, the upper limbs hang down loosely (Fig. 2d, h).

Photographic parameters for the frontal plane evaluation

There are two main photographic parameters for the frontal plane trunk assessment and two for the lower limb assessment. The two trunk parameters are the Anterior Trunk Symmetry Index and the Posterior Trunk Symmetry Index.

Anterior Trunk Symmetry Index (ATSI)—the parameter is defined as the sum of six indices: three frontal plane asymmetry indices (sternal notch, axilla folds, and waist lines) and three frontal plane height difference indices (acromions, axilla folds, and waist lines). The frontal asymmetry index at sternal notch level (FAI-SN) is calculated by dividing the distance between the center of the sternal notch and the midline by the height of the trunk. The height of the trunk (e) is the vertical distance between the navel and the center of the sternal notch. The frontal asymmetry indices at axilla level (FAI-A) and at trunk level (FAI-T) are calculated by dividing the difference in the distance between each trunk edge and the midline (c − d, a − b) by the width of the trunk (c + d, a + b). The height difference indices of trunk asymmetry are calculated by dividing the difference in height at three levels of the trunk (HDI-S for the shoulders, HDI-A for the axillas, and HDI-T for the trunk waistline) by the trunk height measured from the navel to the center of the sternal notch (e).
The shoulder point is the point of intersection, at shoulder level, of a vertical line drawn from each axilla. ATSI was introduced by Stolinski et al. in 2012 [16] (Fig. 6).

ATSI = (FAI-SN + FAI-A + FAI-T) + (HDI-S + HDI-A + HDI-T)

Fig. 2 a–j Standardized positions for posture photogrammetry

Posterior Trunk Symmetry Index (POTSI)—similarly to the ATSI, the POTSI parameter is defined as the sum of six indices: three frontal plane asymmetry indices (C7, axilla folds, and waist lines) and three frontal plane height difference indices (acromions, axilla folds, and waist lines). The frontal asymmetry index at C7 level (FAI-C7) is calculated by dividing the distance between the C7 point and the midline by the height of the trunk. The height of the trunk (e) is the vertical distance between C7 and the beginning of the gluteal cleft. The frontal asymmetry indices at axilla level (FAI-A) and trunk level (FAI-T) are calculated by dividing the difference in distance between each trunk edge and the midline (c − d, a − b) by the width of the trunk (c + d, a + b). The height difference indices of trunk asymmetry are calculated by dividing the difference in height at three levels of the trunk (HDI-S for the shoulders, HDI-A for the axillas, and HDI-T for the trunk waistline) by the trunk height (e). The shoulder point is the point of intersection, at shoulder level, of a vertical line drawn from each axilla. POTSI was introduced by Suzuki et al. in 1999 [17, 18] (Fig. 7).

POTSI = (FAI-C7 + FAI-A + FAI-T) + (HDI-S + HDI-A + HDI-T)

The two photographic postural parameters for lower limb frontal plane assessment are the tibiofemoral angle and the tibiocalcaneal angle.

Tibiofemoral angle (TFA)—the angle between the line drawn from the center of the ankle joint to the center of the knee joint and the line drawn from the center of the knee joint to the ASIS of the same lower limb (Fig. 8a) [19, 20].
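The six-index sum defining ATSI and POTSI can be computed directly from landmark distances measured on the photograph. The sketch below is a minimal illustration of that arithmetic, not the SCODIAC implementation; the function name, the argument layout, and the scaling of the sum to a percentage are our assumptions:

```python
def trunk_symmetry_index(midline_offset, a, b, c, d,
                         dh_shoulders, dh_axillas, dh_waist, e):
    """Sum of three frontal asymmetry indices and three height
    difference indices, following the ATSI/POTSI definitions.

    midline_offset -- distance of the reference point (sternal notch
                      for ATSI, C7 for POTSI) from the midline
    a, b           -- left/right trunk-edge-to-midline distances at waist level
    c, d           -- left/right trunk-edge-to-midline distances at axilla level
    dh_*           -- height differences at shoulder, axilla, and waist level
    e              -- trunk height used to normalize the vertical indices

    All inputs share one unit (pixels or mm); the result is unitless
    and is scaled by 100 here (an assumption, by analogy with the
    percentage form of the original POTSI index).
    """
    fai_ref = abs(midline_offset) / e   # FAI-SN or FAI-C7
    fai_a = abs(c - d) / (c + d)        # asymmetry at axilla level
    fai_t = abs(a - b) / (a + b)        # asymmetry at waist level
    hdi_s = abs(dh_shoulders) / e       # height difference, shoulders
    hdi_a = abs(dh_axillas) / e         # height difference, axillas
    hdi_t = abs(dh_waist) / e           # height difference, waistline
    return 100.0 * (fai_ref + fai_a + fai_t + hdi_s + hdi_a + hdi_t)
```

A perfectly symmetric trunk yields 0; larger values indicate greater asymmetry.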
Tibiocalcaneal angle (TCA)—the angle between a line drawn between the center of the calcaneus and the Achilles tendon, and a second line drawn from the Achilles tendon to the mid-calf of the same lower limb (Fig. 8b) [21].

Photographic parameters for the sagittal plane evaluation

The following photographic parameters are used for the sagittal plane assessment:

Sacral slope angle (SS)—the angle between the vertical line and the line tangent to the body contour at the sacral area (Fig. 9a) [22].

Lumbar lordosis angle (LL)—the angle between the line tangent to the body contour at the level of the T12–L1 spinous processes and the line tangent to the body contour at the level of the L5–S1 spinous processes (Fig. 9b) [23].

Thoracic kyphosis angle (TK)—the angle between the line tangent to the body contour at the level of the C7–Th1 spinous processes and the line tangent to the body contour at the level of the Th12–L1 spinous processes (Fig. 9c) [24].

Chest inclination angle (CI)—the angle between the horizontal line and the line connecting the C7 spinous process with the point at the anterior neck–anterior thorax junction (Fig. 9d) [25].

Head protraction angle (HP)—the angle between the horizontal line and the line connecting the C7 spinous process and the external auditory meatus (Fig. 9e) [26].

Fig. 3 Photographic documentation of trunk rotation/trunk inclination deformity revealed during Adam's forward bending test (left to right—progressive forward bending)

Fig. 4 Feet positioning for posture photographic evaluation: a front view, b lateral left view, c back view, and d lateral right view

Sagittal pelvic tilt (SPT)—the angle between the horizontal line and the line joining the anterior and the posterior superior iliac spine (Fig. 10a) [27].
Trochanter-ankle angle (TA)—the angle between the vertical line drawn from the center of the external malleolus of the ankle joint and the line drawn from the center of the external malleolus to the top of the greater trochanter (Fig. 10b).

Acromion-ankle angle (AA)—the angle between the vertical line drawn from the center of the external malleolus of the ankle joint and the line drawn from the center of the external malleolus to the center of the acromion (Fig. 10c).

Ear-ankle angle (EA)—the angle between the vertical line drawn from the center of the external malleolus of the ankle joint and the line drawn from the center of the external malleolus to the external auditory meatus (Fig. 10d).

Enlarged photos of the coronal and sagittal parameters are presented in Additional file 1: Appendix 1.

Semi-automatic measurements of postural photographic parameters

All the abovementioned parameters can be measured manually, in ink on a print, or digitally on the monitor screen. To facilitate the measurement, a semi-automatic software application named SCODIAC was created [28]. The software is available online and free to download [https://www.ortotika.cz/download/SetupSCODIAC_Full.zip]. The landmarks are placed manually on the screen; afterwards, the software calculates the values of the required parameters. The initial version of the software was checked against x-ray measurements [29]. The current version is focused on digital photography images (Fig. 11). Placing the landmarks consists of manually moving small circles provided on the screen to the required anatomical points. The software calculations are automatic, and all functions are explained in a user-friendly way.

Validation of the photographic technique

We checked the reliability of the photographic technique described above.
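Each of the angles above reduces to the orientation of a line through two landmarks relative to the horizontal or the vertical, so a landmark-based tool can compute them in a few lines of code. The following is a hedged sketch of that geometry; the function names and the image-coordinate convention are our assumptions, not SCODIAC's API:

```python
import math

def angle_to_horizontal(p1, p2):
    """Acute angle (degrees) between the horizontal and the line p1->p2,
    as used e.g. for head protraction (p1 = C7, p2 = external auditory
    meatus) or chest inclination.  Points are (x, y) image coordinates."""
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    return math.degrees(math.atan2(abs(dy), abs(dx)))

def angle_to_vertical(p1, p2):
    """Acute angle (degrees) between the vertical and the line p1->p2,
    as used e.g. for the trochanter-ankle angle (p1 = lateral malleolus,
    p2 = greater trochanter)."""
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    return math.degrees(math.atan2(abs(dx), abs(dy)))
```

Tangent-line angles such as TK and LL are then the angle between two such lines, i.e. the sum or difference of their individual inclinations.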
Our objectives in this part of the study were (1) to determine the intra-observer reproducibility and the inter-observer reliability of the photographic sagittal parameters: sacral slope angle (SS), lumbar lordosis angle (LL), thoracic kyphosis angle (TK), chest inclination angle (CI), and head protraction angle (HP) (Fig. 9); and (2) to check the validity of the photographic measurements against the Rippstein plurimeter measurements by analyzing correlations between the corresponding angles.

Fig. 5 Child's posture taken for sagittal plane assessment: a spontaneous standing posture, b "actively corrected posture", which in this child reveals backward trunk hyperextension and should be avoided; such an image points out the importance of educating children in what correct human body posture consists of

Fig. 6 Diagram illustrating the measurements of the ATSI Index

The study group consisted of 91 healthy volunteers (44 girls and 47 boys) aged 7–10 (mean 8.2 ± 1.0 years). The exclusion criteria were a history of any spine disorder, an angle of trunk rotation (ATR) of 7° or more, lower limb discrepancy, and refusal to participate. Children were photographed in a relaxed (spontaneous, habitual) posture from the left (Fig. 2b) and right side (Fig. 2f). The study was performed in accordance with the 1964 Helsinki Declaration. All studies reported here were approved by the Institutional Review Board of Poznan University of Medical Sciences (No. 832/11, date 6/10/2011).

Intra-observer reproducibility

One observer (a physiotherapist with 10 years' experience) performed three series of photographic measurements. Each series comprised three measurements, with a 2-day interval between the series. The observer measured the photographic parameters of 30 randomly selected healthy children.
Five photographic parameters (SS, LL, TK, CI, and HP) were measured using the aforementioned methodology. The intra-observer reproducibility was quantified by the use of the intraclass correlation coefficient (ICC) and the standard error of a single measurement (SEM) [30].

Inter-observer reliability

Three observers, physiotherapists with 10, 8, and 2 years' experience respectively, performed three series of photographic measurements. Each series included three measurements, with a 2-day interval between the series. Each observer measured the photographic parameters of 30 randomly selected healthy children. Five photographic parameters (SS, LL, TK, CI, and HP) were measured using the methodology described above. The inter-observer reliability was quantified by the use of the intraclass correlation coefficient (ICC) and the standard error of a single measurement (SEM) [30].

Validation of the photographic technique against the Rippstein plurimeter

In order to determine the correlation of the photographic parameters versus the Rippstein plurimeter measurements, three observers measured the sagittal curvatures (sacral slope, lumbar lordosis, and thoracic kyphosis) of 91 children three times with the Rippstein plurimeter (Fig. 12) immediately after the children had their photos taken, one from the left side and one from the right side, under the standardized conditions described above. The values of the corresponding parameters (photographic thoracic kyphosis angle versus plurimeter thoracic kyphosis angle, etc.) were compared.

Fig. 7 Diagram illustrating the measurements of the POTSI Index

Fig. 8 a Diagram illustrating the measurements of TFA. b Diagram illustrating the measurements of TCA
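Both reliability analyses rest on a two-way-model intraclass correlation coefficient and the standard error of measurement. As a hedged illustration of the underlying arithmetic, the sketch below implements the plain ICC(2,1) estimator of Shrout and Fleiss; the paper does not state which variant of the two-way model was used, so this choice is an assumption:

```python
def icc2_1(ratings):
    """ICC(2,1): two-way random effects, absolute agreement, single
    rating (Shrout & Fleiss).  `ratings` is a list of per-subject
    lists, one value per rater, e.g. 30 subjects x 3 raters."""
    n = len(ratings)            # number of subjects
    k = len(ratings[0])         # number of raters
    grand = sum(v for row in ratings for v in row) / (n * k)
    row_means = [sum(row) / k for row in ratings]
    col_means = [sum(row[j] for row in ratings) / n for j in range(k)]
    ss_total = sum((v - grand) ** 2 for row in ratings for v in row)
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)   # subjects
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)   # raters
    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_err = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)

def sem_from_icc(values, icc):
    """Standard error of measurement: sample SD x sqrt(1 - ICC)."""
    m = sum(values) / len(values)
    var = sum((v - m) ** 2 for v in values) / (len(values) - 1)
    return var ** 0.5 * (1.0 - icc) ** 0.5
```

Perfect agreement between raters gives an ICC of 1 and an SEM of 0; disagreement lowers the ICC and inflates the SEM.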
Variability of photographic sagittal parameters over time

The aim of the second part of the study was to analyze the variability over time (zero time, after 1 h, and after 1 week) of five 2D photographic angles: sacral slope (SS), lumbar lordosis (LL), thoracic kyphosis (TK), chest inclination (CI), and head protraction (HP). The study group comprised 30 healthy volunteers (13 girls and 17 boys) aged 7–10 (mean 8.2 ± 1.0 years). The same exclusion criteria as in the photographic technique validation were used. Children were photographed in a standardized relaxed (spontaneous, habitual) posture (Fig. 2b). At each of the three exposures, digital photographs of the left profile of the body were taken three times, one after another, within 5 s. The exposures were made (1) at time zero, (2) 1 h later, and (3) 1 week later. In total, 270 photos were assessed. Five photographic parameters were calculated on each photo.

Variability of photographic coronal parameters over time

The aim of this part of the study was to analyze the variability over time (zero time, after 1 h, and after 1 week) of two coronal photographic parameters, ATSI (Anterior Trunk Symmetry Index) (Fig. 6) and POTSI (Posterior Trunk Symmetry Index) (Fig. 7), which serve to evaluate the symmetry of the trunk in the coronal plane. The study group comprised 30 healthy volunteers (13 girls and 17 boys) aged 7–10 (mean 8.1 ± 1.1 years). The same exclusion criteria as in the photographic technique validation were used. Children were photographed in a standardized relaxed (spontaneous, habitual) posture in the coronal plane. Three digital photographs were taken within 5 s, including the front (Fig. 13) and back (Fig. 14) views.

Fig. 9 a Diagram illustrating the measurements of SS. b Diagram illustrating the measurements of LL. c Diagram illustrating the measurements of TK. d Diagram illustrating the measurements of CI.
e Diagram illustrating the measurements of HP

Fig. 10 a Diagram illustrating the measurements of SPT. b Diagram illustrating the measurements of TA. c Diagram illustrating the measurements of AA. d Diagram illustrating the measurements of EA

Fig. 11 SCODIAC print-screen images illustrating coronal and sagittal plane parameters

Fig. 12 Areas of application of the Rippstein plurimeter to the patient's spine: a lumbo-sacral junction, b thoraco-lumbar junction, and c cervico-thoracic junction. The angular parameters are calculated as the differences between two positions: lumbar lordosis = a, b; thoracic kyphosis = b, c

The same procedure was repeated after 1 h and after 1 week (540 photos were assessed).

Normative values of sagittal photographic parameters in children aged 7–10

Normative values of the sagittal photographic parameters were calculated based on the photographic assessment of 7782 children of both sexes, aged 7–10. All photographs were taken respecting the abovementioned procedures.

Statistical analysis

Statistical analyses were performed using Statistica 10 (StatSoft), Gretl, and Microsoft Excel software. The statistical significance level was defined as P < 0.05. Reliability was determined with the intraclass correlation coefficient (ICC) by means of the two-way model and Cronbach's alpha [30, 31]. The scale from Bland and Altman was used in the classification of the reliability values and of the relationship between the plurimeter and photography [30].

Fig. 13 Positioning of the child during photographic documentation of the front view: a zero time, b after 1 h, and c after 1 week

Fig. 14 Positioning of the child during photographic documentation of the back view: a zero time, b after 1 h, and c after 1 week
ICC values smaller than or equal to 0.20 were considered poor, 0.21–0.40 fair, 0.41–0.60 moderate, 0.61–0.80 good, and 0.81–1 very good [32]. The standard error of measurement (SEM) was calculated according to Shrout [33]. Analysis of variance, homogeneity of variance, normality of distribution, and post hoc tests were used to examine the variation of the five photographic sagittal parameters over time.

Results

Photogrammetry reliability studies

Validation of the photographic technique

Photographic measurements The reliability of the photographic measurements is shown in Table 1. The ICC values for the sacral slope angle, lumbar lordosis angle, thoracic kyphosis angle, chest inclination angle, and head protraction angle revealed very good reliability, with the SEMs of the measurement ranging between 0.7 and 1.3.

Photogrammetry versus plurimeter The correlation of the measurements using the plurimeter and digital photography is shown in Table 2. The ICC values for the sacral slope angle (0.93), lumbar lordosis angle (0.97), and thoracic kyphosis angle (0.95) revealed very good reliability. All ICC values for the three angles reported very good inter-observer repeatability, with the SEMs of the measurement ranging between 0.9 and 1.4.

Variability of photographic sagittal parameters over time There were no significant differences between the measurements (p > 0.05) at zero time, after 1 h, and after 1 week in any of the five sagittal photographic parameters. In the case of SS and CI, the 1-week measurement differed from the zero-time and the 1-h measurements, but the differences were not statistically significant (using analysis of variance and post hoc tests).
The measured values of both parameters increased with time, so the largest difference was observed between the measurement carried out at time zero and the one carried out 1 week later. In the case of the remaining three parameters (TK, LL, HP), we could not find such a trend (Table 3).

Table 1 Reliability of the photographic technique for measuring the sagittal trunk alignment

                 Intraobserver reproducibility        Interobserver reliability
Variables        ICC   95% CI       SEM [°]  P value  ICC   95% CI       SEM [°]  P value
Left side of the body
SS               0.93  0.88–0.97    1.0      0.957    0.93  0.86–0.97    0.9      0.658
LL               0.97  0.95–0.99    1.0      0.975    0.97  0.95–0.99    1.0      0.987
TK               0.93  0.87–0.96    1.2      0.974    0.94  0.89–0.97    0.9      0.811
CI               0.96  0.92–0.98    0.7      0.953    0.92  0.83–0.96    0.9      0.540
HP               0.90  0.83–0.95    1.0      0.990    0.84  0.74–0.92    1.2      0.984
Right side of the body
SS               0.93  0.88–0.97    1.0      0.952    0.92  0.86–0.96    1.1      0.954
LL               0.96  0.93–0.98    1.2      0.936    0.96  0.92–0.98    1.1      0.852
TK               0.91  0.85–0.95    1.3      0.990    0.92  0.86–0.96    1.2      0.726
CI               0.93  0.87–0.96    0.9      0.931    0.88  0.79–0.94    1.1      0.689
HP               0.94  0.89–0.97    0.7      0.989    0.85  0.75–0.92    1.0      0.913
Mean of the left and right side of the body
SS               0.95  0.91–0.97    0.9      0.977    0.94  0.89–0.97    0.9      0.836
LL               0.97  0.95–0.98    1.0      0.952    0.98  0.95–0.99    0.9      0.936
TK               0.93  0.88–0.97    1.1      0.986    0.92  0.86–0.96    0.9      0.777
CI               0.96  0.92–0.98    0.7      0.950    0.92  0.84–0.96    0.9      0.622
HP               0.94  0.89–0.97    0.8      0.995    0.89  0.80–0.94    1.0      0.979

ICC intraclass correlation coefficient, CI confidence interval, SEM standard error of measurement
*Statistically significant difference (P < .05)

Table 2 Correlation of Rippstein plurimeter versus photographic measurements

Variables                     ICC   95% CI      SEM [°]  P value
Plurimeter SS—photo SS        0.93  0.89–0.95   1.2      0.712
Plurimeter LL—photo LL        0.97  0.93–0.98   0.9      0.425
Plurimeter TK—photo TK        0.95  0.93–0.97   1.4      0.945

ICC intraclass correlation coefficient, CI confidence interval, SEM standard error of measurement
*Statistically significant difference (P < .05)
Variability of photographic coronal parameters over time There was no statistically significant difference between the measurements (p > 0.05) for ATSI at zero time, after 1 h, and after 1 week. There was likewise no statistically significant difference between the measurements (p > 0.05) for POTSI at zero time, after 1 h, and after 1 week (Table 3). A slight tendency regarding the difference between the 1-week measurement and the zero-time and 1-h measurements was not statistically significant. This observation needs further study in a bigger sample (p values in post hoc tests were between 0.15 and 0.30).

Normative values of sagittal photographic parameters for children 7–10

Five sagittal photographic parameters (SS, LL, TK, CI, HP) were measured for each child. The data were analyzed separately for boys and girls and for each year of age, from 7 to 10. Numerical values (Additional file 2: Appendix 2A) and percentile charts for sex and age (Additional file 2: Appendix 2B) are presented in Additional file 2: Appendix 2. Table 4 contains exemplary numerical values of the five photographic parameters (all values in degrees).

Pitfalls and sources of errors in photogrammetry used for posture evaluation

Errors may occur during the photographic examination and the photograph evaluation. Attention should be paid to preparing and positioning the child according to the protocol. Incorrect preparation or positioning is illustrated below with examples identified within our study group of 7782 children participating in the local school screening program. In total, 46,595 digital photos were analyzed. The following problems were noted; each is reported as (1) the type of error and (2) its consequence for posture assessment. The figures illustrate the following:

– Protraction of the shoulders—the upper limbs cover the body contours and anatomical points (Fig. 15)
– Incorrect head position and gaze direction—impact on cervical spine parameters (Fig. 16)
– Inability to adopt a spontaneous relaxed posture—impact on lumbar lordosis and thoracic kyphosis angles (Fig. 17)
– Hair covering the body contours—impossibility of measuring photographic parameters (Fig. 18)
– Gluteal cleft covered with underpants—impossible calculation of the POTSI index (Fig. 19)
– Bra or swimsuit with limited body contact, obscuring the trunk—construction and calculation of sagittal angles not possible (Fig. 20)
– One-leg standing—impact on coronal plane symmetry (Fig. 21)
– Incorrect rotational foot positioning—introduction of rotation to the whole body (Fig. 22)
– Digital camera not level—possible modification of photographic parameters (Fig. 23)
– Limited communication with the child, which can be treated as a contraindication for photographic measurements—standardized position not possible (Fig. 24)
– Insufficient image sharpness—difficulties with photographic angle measurement (Fig. 25)

These errors can influence the photographic evaluation and should be avoided.

Table 3 Variability of sagittal and frontal parameters over time

                SS          LL          TK          CI          HP          ATSI         POTSI
Measurements    Mean  SD    Mean  SD    Mean  SD    Mean  SD    Mean  SD    Mean  SD     Mean  SD
Zero time       24.7  7.0   41.7  9.5   44.5  6.7   27.1  7.1   53.3  3.1   21.6  11.6   21.3  10.0
After 1 h       25.3  7.2   42.0  8.3   43.6  7.6   27.6  7.5   52.8  5.8   21.8  11.4   21.9  10.6
After 1 week    26.8  8.1   43.0  9.1   45.0  8.9   29.3  6.8   52.8  5.1   20.1  8.8    18.3  6.1
P               0.533       0.854       0.786       0.478       0.926       0.798        0.288

Mean mean value of three measurements, SD standard deviation of three measurements
*Statistically significant difference (P < .05)

Table 4 Exemplary numerical values for 7-year-old girls (N = 1083)

Percentile   SS   LL   TK   CI   HP
97           44   52   62   41   71
90           39   45   55   36   66
75           33   37   48   32   63
50           27   29   42   27   58
25           22   23   35   22   54
10           17   18   28   18   49
3            12   13   23   14   45
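Percentile tables such as Table 4 can be reproduced from the raw angle measurements of each sex-and-age group. The sketch below is illustrative only; the paper does not state which percentile estimator was used to build the charts, so the linear-interpolation rule here is an assumption:

```python
def percentile(values, p):
    """p-th percentile (0-100) with linear interpolation between
    order statistics (the rule used by common spreadsheet software)."""
    vals = sorted(values)
    idx = (len(vals) - 1) * p / 100.0
    lo = int(idx)
    hi = min(lo + 1, len(vals) - 1)
    frac = idx - lo
    return vals[lo] * (1.0 - frac) + vals[hi] * frac

def percentile_row(angle_values, levels=(97, 90, 75, 50, 25, 10, 3)):
    """One row of a normative chart: the percentile levels used in
    Table 4, rounded to whole degrees."""
    return {p: round(percentile(angle_values, p)) for p in levels}
```

Applied to, say, all SS measurements of 7-year-old girls, `percentile_row` would yield one line of the normative table.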
Discussion

Radiological assessment as the current gold standard for scoliosis evaluation but not for child body posture evaluation

Radiological imaging remains the gold standard for the diagnosis and evaluation of idiopathic scoliosis (IS) [34–37]. It enables the identification of primary and secondary curves, Cobb angle measurement, axial vertebral rotation assessment, and Risser sign grading. It also differentiates idiopathic scoliosis from congenital scoliosis. However, for large cohort studies or for school screening purposes, children are not exposed to radiography because of the radiation risk [38, 39]. In screening conditions, suspected idiopathic scoliosis is detected with manual anthropometric devices, such as the scoliometer [40–43] or a smartphone with a specific device [44–46]. The basic method of school screening for idiopathic scoliosis is a clinical examination in the forward bending position (Adam's test) with the use of a scoliometer [47, 48]. Surface topography methods based on computerized image capturing and digitally calculated parameters have also been proposed for the evaluation of patients suffering from idiopathic scoliosis. These techniques utilize raster stereography based on the distortion of a grid projected onto the back [49–51] or body scanning using a light beam and analysis of its distortion [49, 52, 53]. The evaluation of the physical deformity developing in idiopathic scoliosis has some areas in common with body shape evaluation in postural disorders, and similar diagnostic tools are often used. In children, it is especially important to apply techniques which do not involve exposure to x-ray radiation.
Several methods have been proposed for body posture assessment: simple photographic techniques and plumbline measures [54–57]; goniometers, inclinometers, and linear devices [58–60]; computer-assisted methods including electrogoniometers [61]; electromagnetic movement systems [62, 63]; computer-assisted digitization systems [64–66]; and 3D ultrasound-based motion analysis devices [67]. Finally, digital photography is gaining ground in the assessment of trunk alignment [68].

Overview of photographic parameters proposed for posture evaluation
Photographic parameters for posture evaluation have been presented by several authors. The parameters proposed in this study were selected based on the authors' personal experience and a careful analysis of previous publications.

Canales et al. [69] reported the following posterior and sagittal parameters: head position, thoracic kyphosis, lumbar lordosis, pelvic inclination, and knee position, together with the following anatomical points to be considered: scapulas, shoulders, and ankles (Fig. 26).

Fig. 15 Technical error—protraction of the shoulders

Cerruto et al. [70] reported the following anterior, posterior, and sagittal parameters: P1, P2, L1, L2, L3, AR, and AL angles, measured from lines drawn through the following anatomical points: superior and inferior scapular angles, vertical lines related to the ear lobe, acromion, and scapular prominence, and vertical lines related to the manubrium and coracoid process (Fig. 27).

Pausić et al. [71] proposed an assessment based on the following anatomical points: head and neck, trunk, pelvis, knee joints, and ankle joints (for the coronal plane), or head and neck, trunk, pelvis, and knee joints (for the sagittal plane) (Fig. 28). Penha et al.
[8] reported the following posterior and sagittal parameters: lumbar lordosis, thoracic kyphosis, pelvic inclination, head position, and lateral spinal deviations, based on anatomical points to be considered (Fig. 29).

Ruivo et al. [72] reported the following sagittal parameters: head angle, cervical angle, and shoulder angle (Fig. 30).

Sacco et al. [73] reported the following sagittal parameters: tibiotarsal angle, knee extension/flexion angle, Q angle, and subtalar angle (Fig. 31).

Canhadas et al. [74] proposed the following anatomical points to be considered: external orbicularis, commissura labiorum, acromioclavicular joint, sternoclavicular joint, ear lobe, antero-superior iliac spines, postero-superior and postero-inferior iliac spines, inferior angles of the scapula, olecranon central region, and popliteal line. In addition, the following angles were evaluated: bilateral foot inclination, forward inclination of the fibula, knee angle, cervical lordosis, thoracic kyphosis, lumbar lordosis, knee flexor, tibiotarsal angle, forward head position, and sternal angles (Fig. 32).

Matamalas et al. [75] reported the following posterior parameters: waist height angle, waist angle, and waistline distance ratio (Fig. 33).

Matamalas et al. [76] published the following anterior and posterior parameters: trapezium angle, shoulder height angle, and axilla height angle (Fig. 34).

Current opinions on digital photography technique
Digital photography completed with analyzing software can be viewed as digital photogrammetry and can be found in several areas of life and technology: architecture, psychology, medicine, rehabilitation, and other fields [77–80]. For the purpose of posture assessment, this technique is easy to access and cost-effective [81, 82]. The technique provides measurement of body angles and distances, which allows for a quantitative posture assessment.
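Most photographic angles of this kind reduce to the inclination of a segment joining two digitized landmarks relative to the image vertical. The sketch below shows this generic computation; it is an illustration only (not the software used in any of the cited studies), and the landmark names and pixel coordinates are hypothetical.

```python
import math

def inclination_from_vertical(p1, p2):
    """Angle (degrees, 0-90) between the segment p1->p2 and the image
    vertical. Points are (x, y) pixel coordinates; y grows downward."""
    dx = p2[0] - p1[0]
    dy = p2[1] - p1[1]
    return math.degrees(math.atan2(abs(dx), abs(dy)))

# Hypothetical markers: tragus at (410, 220), C7 spinous process at (380, 330).
angle = inclination_from_vertical((410, 220), (380, 330))
print(round(angle, 1))  # ~15.3 degrees
```

Because only a ratio of pixel distances enters the arctangent, the result is independent of image resolution, provided the camera is level and the image plane is parallel to the subject.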
Remaining non-invasive, digital photography is becoming an increasingly popular tool for assessing the musculoskeletal system, including the sagittal and coronal curvatures of the spine, in both clinical practice and research [83, 84]. In recent years, the photographic technique has been used to assess the posture of healthy and unhealthy children and adults [69, 74, 85]. Digital photography has been applied to assess the body posture of children carrying heavy backpacks [86], to evaluate the quality of posture while standing [87, 88] and sitting [89], and for quantifying foot shape [90]. Several studies described the usefulness of the photographic technique for assessing patients with idiopathic scoliosis [75, 91–94].

Fig. 16 Technical error—incorrect head position and/or gaze direction
Fig. 17 Technical error—lack of spontaneous relaxed posture

Technical procedures of posture photogrammetry
Camera resolution
Different resolutions of digital cameras were used in previous studies, ranging from 2.0 megapixels (Mpx) [73], 4.1 Mpx [95, 96], through 5.1 Mpx [84], 6.0 Mpx [81], and 6.3 Mpx [97], to 7.2 Mpx [98]. For this study, a Canon PowerShot A590 IS with a 1/2.5″ CCD sensor, 8.3 megapixels, and a 35–140-mm lens (Canon Inc., Tokyo, Japan) was used. A resolution of 1600 × 1200 (2 Mpx) provided sufficient photo quality [99].

Camera position—distance and height
In previous photographic studies, the distance between the camera and the object was reported to be 173 cm [97, 100], 300 cm [73, 81, 95, 101], or 400 cm [55]. The camera was positioned at a height of 70, 127, 80, or 90 cm [81], while other authors set the camera by centering the lens at half of the child's height [81, 95, 98]. In our previous experiments, the camera was placed on a stable tripod at a height of 90 cm and a distance of 300 cm. These settings were previously suggested for children aged 7–10 [81, 98].
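A rough geometric check makes the adequacy of these settings plausible. The sketch below assumes the stated 35 mm figure is a 35 mm-equivalent focal length at the wide end and uses the standard 24 × 36 mm reference frame with a thin-lens approximation; both are assumptions for illustration, not specifications from the study.

```python
import math

def vertical_coverage_cm(distance_cm, focal_mm_equiv, frame_height_mm=24.0):
    """Approximate vertical field of view (cm) at a given distance, for a
    35 mm-equivalent focal length (thin-lens approximation)."""
    half_angle = math.atan((frame_height_mm / 2.0) / focal_mm_equiv)
    return 2.0 * distance_cm * math.tan(half_angle)

# Landscape orientation (24 mm frame side vertical) at 300 cm:
print(round(vertical_coverage_cm(300, 35)))  # ~206 cm
# Portrait orientation (36 mm frame side vertical) at 300 cm:
print(round(vertical_coverage_cm(300, 35, frame_height_mm=36.0)))  # ~309 cm
```

Either orientation comfortably exceeds the stature of a child aged 7–10, which is consistent with the report that the whole silhouette was captured without moving the camera.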
Such a combination of distance and height enabled covering the whole silhouette without moving the camera [12].

Child positioning
Some authors proposed performing the photographic examination with the standing child wearing casual clothes, sportswear (shorts and a T-shirt) [87], just shorts [64], or a swimsuit [102]. Unfortunately, clothes may slightly distort the body contour. Producing and registering images of undressed children seems to be a potential challenge for posture photogrammetry. Nowadays, it involves both the imperative to adopt procedures respecting individual sensitivity and the protection of image processing and storage. Yet, here we are, proposing the evaluation of the child's body posture without a T-shirt, tights, or socks, wearing only underwear and a bra [103], which is not commonly accepted in our society (with individual cases of parents' refusal noted). However, the local cultural background should be considered. Longer hair of the person examined should be tied or pinned up with a clip so as not to cover the external auditory meatus or the neck contour.

Fig. 18 Technical error—body contours covered by hair
Fig. 19 Technical error—gluteal cleft upper contour covered by underpants
Fig. 20 Technical error—the trunk obscured by bra or swimsuit

Lower limb positioning—the feet
In previous studies, some authors proposed setting the feet at 30° of external rotation within drawn triangles [97] or freely within defined field lines [84]. In our observation, the 30-degree external rotation of the feet may undesirably impact the position of other parts of the body, especially the ankle joint in relation to the vertical projection of the quadrangle of support.

Fig. 21 Technical error—one-leg standing
Fig. 22 Technical error—feet incorrectly positioned, not parallel
Fig. 23 Technical error—digital camera not leveled
We decided to position the feet over longitudinal and crosswise lines marked on the ground, so that the lateral malleoli were situated over the center of the crosswise line and the feet were parallel to the longitudinal line and hip-width apart. We found this to be the most neutral feet position, one which does not interfere with the spontaneous posture [63]. It also has the advantage of being suitable for assessing the tibio-calcaneal angle. In our experiments, most children needed assistance to place the feet correctly.

Fig. 24 Technical error—limited communication with child
Fig. 25 Technical error—photo taken without proper focusing

Lower limb positioning—the knees
The position of the knee joints and symmetric lower limb loading are also objects of standardization, as some children tend to stand for the photographic evaluation with one lower limb more loaded or one knee more visibly bent. Such a position would influence the whole body posture, especially the trunk. We recommend positioning the child with equally loaded feet, in a neutral setting of the knees, without flexion or hyperextension.

Upper limb positioning—the elbows
Most authors suggest using the position with the upper limbs hanging loosely [87, 96, 98, 102, 104, 105] in order not to influence the trunk [106]. The problem of upper limb positioning is well studied in the case of lateral spine radiography, and different solutions have been proposed to avoid the spine being obscured by the upper limbs [106–108]. Moreover, in the course of the standardization studies, we observed that the relaxed upper limbs sometimes covered the lumbar lordosis contour and the greater trochanter. Similar observations have been made by other authors, who suggested carrying out the photographic sagittal evaluation with the elbow joints bent at 90° [73, 87, 109].
Finally, we recommend setting the upper limbs slightly flexed, at about 10°–20° at the gleno-humeral joints and about 20°–30° at the elbow joints. The upper limb flexion at the gleno-humeral and elbow joints is performed slowly to avoid any involuntary trunk movement towards trunk hyperextension [87], which can increase lower thoracic spine extension [110] or even create a pathological lordosis in this region [111]. During this movement, the child is watched, and if any accompanying trunk movement happens, the child is asked to repeat the upper limb movement. In some cases, passive positioning of the upper limbs is needed. In addition, we observed that during the upper limb movement, some children performed elevation or protraction of the shoulders, which covered the neck contour and the upper thoracic spine contour. Therefore, during the positioning of the upper limbs, we make sure that the shoulder girdle stays down. It is important to note that the presence of shoulder protraction with loosely hanging upper limbs is common in the population of children aged 7–10 [8].

Fig. 26 a–b Photographic assessment as proposed by Canales et al. in 2010 ([69], reprinted by permission)
Fig. 27 Photographic assessment as proposed by Cerruto et al. in 2012 ([70], reprinted by permission)

Head position and gaze direction
During the standardization of the photographic technique, we checked the effect of the head position and the gaze direction on postural parameters. Our preliminary studies have shown that the head position affects the angular size of thoracic kyphosis and lumbar lordosis. Initially, we were planning to ask the child to look at a specific point marked in front of her/him, as proposed in the literature [87].
We then noted that this created an additional problem because of differences in children's height, which is why we proceeded with the "look ahead" command. Nevertheless, we noticed that some children, even when looking ahead, maintained a voluntarily lowered head position with flexion of the cervical spine. Therefore, in order to achieve standardized conditions, each child was instructed to keep the eyes open and to direct the gaze at eye level the moment he or she received the "look ahead" command [71, 109, 112, 113]. Consequently, if an inappropriate voluntary head position was observed, we explained to the child once again how the head should be set. In rare situations, we helped the child by modifying the head position in a gentle way, trying not to trigger any artificially corrected position. In our practice, we also found it useful to ask children not to smile or laugh while taking the photos, as this could affect postural parameters [112].

Fig. 28 Photographic assessment as proposed by Pausić et al. in 2010 ([71], reprinted by permission)
Fig. 29 a–d Photographic assessment as proposed by Penha et al. in 2009 ([8], reprinted by permission)

Digital photography technique for body posture evaluation and documentation
During this study, postural photogrammetry proved to be a simple and quick procedure. Photographic measurements can be performed with a simple digital camera or a mobile camera, provided the standardized conditions for photographic evaluation are observed. A tripod proved to be a helpful device to stabilize the camera and control its position. The time needed to prepare the child for the examination, together with the time for taking photographic exposures in two sagittal projections, was ca. 5 min, whereas the time required for calculation of the five standardized sagittal parameters was ca. 3 min.
This study confirmed the usefulness of the photographic method for body posture documentation and evaluation. The digital photography technique can be used in research on the development and variability of posture in children. The developed procedure allows physiotherapists to compile accurate and uniform photographic documentation and to obtain good-quality research data in line with EBM rules. Due to its non-invasiveness, the technique can be promoted in scientific and clinical research. Parents' concerns regarding the use of radiography are avoided. The low cost of producing and archiving digital photos further favors the technique: there is no need to acquire expensive, specialized equipment or software.

Fig. 30 Photographic assessment as proposed by Ruivo et al. in 2015 ([72], reprinted by permission)
Fig. 31 Photographic assessment as proposed by Sacco et al. in 2007 ([73], reprinted by permission)

Digital photogrammetry screening can also bring budget savings to the individual units which organize screening (e.g., the local government), which is often crucial in financing various types of research projects. Specific numerical values of the normal range for quantitative and validated parameters are presented in this paper. According to van Maanen et al., the simplicity of assessing posture on photos is at the core of this technique—it is objective, easy to use, and of low cost [114]. For Cobb et al. [90], digital photography for a two-dimensional assessment of body shape is a valuable method for recording body posture and calculating quantitative parameters in everyday clinical practice. Fortin et al. [109] claim that the digital photography technique can be used for scientific assessment provided that the procedures in question are taken into account. Galera et al.
mention that current studies present new diagnostic possibilities of digital photography, which is a common procedure for two-dimensional evaluation of body posture [71]. Digital photography has some limitations. The major limitation of the technique is the two-dimensional character of the body posture assessment, as it is not possible to measure trunk rotation. The method may also not be suitable for children under 7 years of age.

Conclusions
In summary, although digital photography can replace neither surface topography (for 3D imaging) nor radiological evaluation (for skeletal imaging), the technique offers a new additive value to human posture imaging. The development of the digital photography technique allows for its regular use in the assessment of body posture. The method of child preparation and positioning described above allows us to avoid an incidentally modified posture. The registration of images is simple, quick, harmless, and cost-effective. A semi-automatic image analysis has been developed. The choice of postural parameters was based on previous publications and on personal experience, and can be modified. The photographic method of body posture assessment developed during this study is characterized by high measurement reliability.

Fig. 32 Photographic assessment as proposed by Canhadas et al. in 2009 ([74], reprinted by permission)
Fig. 33 a–c Photographic assessment as proposed by Matamalas et al. in 2016 ([75], reprinted by permission)
Fig. 34 Photographic assessment as proposed by Matamalas et al. in 2014 ([76], reprinted by permission)
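Reliability claims of this kind are conventionally quantified with the intraclass correlation coefficient (ICC; see the abbreviation list and refs [30, 33]). As an illustration of the statistic itself, the sketch below implements the Shrout and Fleiss ICC(2,1) (two-way random effects, absolute agreement, single measurement) from scratch; the rating matrix is synthetic, not study data.

```python
def icc_2_1(data):
    """ICC(2,1) per Shrout & Fleiss: two-way random effects, absolute
    agreement, single rater. data[i][j] = rating of subject i by rater j."""
    n, k = len(data), len(data[0])
    grand = sum(sum(row) for row in data) / (n * k)
    subj_means = [sum(row) / k for row in data]
    rater_means = [sum(data[i][j] for i in range(n)) / n for j in range(k)]
    ss_total = sum((data[i][j] - grand) ** 2 for i in range(n) for j in range(k))
    ss_subj = k * sum((m - grand) ** 2 for m in subj_means)
    ss_rater = n * sum((m - grand) ** 2 for m in rater_means)
    ss_err = ss_total - ss_subj - ss_rater
    msr = ss_subj / (n - 1)              # between-subjects mean square
    msc = ss_rater / (k - 1)             # between-raters mean square
    mse = ss_err / ((n - 1) * (k - 1))   # residual mean square
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Three subjects, each measured twice (synthetic angle values, degrees):
ratings = [[24.0, 25.0], [31.0, 30.0], [42.0, 43.5]]
print(round(icc_2_1(ratings), 3))  # ~0.992
```

Values close to 1 indicate that between-subject differences dominate measurement error, which is the sense in which a photographic parameter can be called highly reliable.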
The five developed and calculated photographic parameters (sacral slope, thoracic kyphosis, lumbar lordosis, chest inclination, and head protraction) describe the child's body posture in the sagittal plane and demonstrate good repeatability and reproducibility, which may become a standard for body posture evaluation in children. Performing such a large series of measurements in children resulted in the preparation of normal values and percentile charts for age and sex, making it possible to employ the photographic parameters in the diagnosis of child posture pathology as well as to monitor the effects of corrective therapy.

Additional files
Additional file 1: Appendix 1. Coronal and sagittal photographic parameters. (PDF 268 kb)
Additional file 2: Appendix 2. Normative values of sagittal photographic parameters for children aged 7–10 based on the tables and percentile charts for sex and age. (PDF 1000 kb)

Abbreviations
AA: Acromion-ankle angle; ASIS: Anterior superior iliac spine; ATSI: Anterior Trunk Symmetry Index; C7: Seventh cervical vertebra; CI: Chest inclination angle; EA: Ear-ankle angle; EBM: Evidence-based medicine; FAI-A: Frontal Asymmetry Index at axilla level; FAI-C7: Frontal Asymmetry Index at C7 level; FAI-SN: Frontal Asymmetry Index at sternal notch level; FAI-T: Frontal Asymmetry Index at trunk level; HDI-A: Height Difference Index for axillas; HDI-S: Height Difference Index for shoulders; HDI-T: Height Difference Index for trunk waistline; HP: Head protraction angle; ICC: Intraclass correlation coefficient; IS: Idiopathic scoliosis; L1: First lumbar vertebra; L5: Fifth lumbar vertebra; LL: Lumbar lordosis angle; POTSI: Posterior Trunk Symmetry Index; PSIS: Posterior superior iliac spine; S1: First sacral vertebra; SEM: Standard error for single measurement; SPT: Sagittal pelvic tilt; SS: Sacral slope angle; T12: Twelfth thoracic vertebra; TA: Trochanter-ankle angle; TCA: Tibio-calcaneal angle; TFA: Tibio-femoral angle; TK:
Thoracic kyphosis angle

Acknowledgements
The authors would like to thank Prof. Pawel Ulman for his contribution to the statistical analysis, and Mr. Krzysztof Korbel, PT, and Ms. Katarzyna Politarczyk, PT, for assistance with the measurements. The study was performed as part of the local prevention projects "Skierniewice Chooses Health – Bad Posture and Postural Defects Prophylaxis in Class I-III Primary School Children" and "Poznan Chooses Health – Bad Posture Prophylaxis in Class I-IV Primary School Children".

Funding
No sources of funding were utilized for the study.

Availability of data and materials
The datasets analyzed during the current study are available from the corresponding author on reasonable request.

Authors' contributions
LS performed the study design, developed the study protocol, carried out data collection and compilation, described the result analysis, and drafted, revised, and critically revised the manuscript. MK described the result analysis, participated in the acquisition of the data, and critically revised the manuscript. DC performed the study design, developed the study protocol, carried out data collection and compilation, described the result analysis, and drafted, revised, and critically revised the manuscript. MT participated in the acquisition of the data and the critical revision of the manuscript. PC prepared the computer software for the study. NS developed the study protocol and critically revised the manuscript. TK performed the study design, developed the study protocol, carried out data collection and compilation, described the result analysis, and drafted, revised, and critically revised the manuscript. All authors read and approved the final manuscript.

Ethics approval and consent to participate
All study participants gave written consent, and the study was approved by the Institutional Review Board of Poznan University of Medical Sciences (832/11, date 6/10/2011).

Consent for publication
Not applicable.
Competing interests The authors declare that they have no competing interests. Publisher’s Note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. Author details 1Department of Spine Disorders and Pediatric Orthopedics, University of Medical Sciences, 28 Czerwca 1956r. no. 135/147, 61-545 Poznan, Poland. 2Rehasport Clinic, Poznan, Poland. 3Rehasport Clinic Licensed Rehabilitation Center, Skierniewice, Poland. 4Department of Physiotherapy, Józef Rusiecki University College, Olsztyn, Poland. 5Center of Body Posture, Olsztyn, Poland. 6Department of Orthopaedics, Pediatric Orthopaedics and Traumatology, The Centre of Postgraduate Medical Education in Warsaw, Otwock, Poland. 7Faculty of Health Studies, University of West Bohemia, Pilsen, Czech Republic. 8Faculty of Physical Education and Sport, Charles University, Prague, Czech Republic. 9ORTOTIKA, s. r. o, Faculty at Motol University Hospital, Prague, Czech Republic. 10Scoliosis Center, Medical Scanning Tokyo, Tokyo, Japan. Received: 30 July 2017 Accepted: 4 December 2017 References 1. Kendall FP, McCreary EK, Provance PG. Muscles testing and function with posture and pain. 4th ed. USA: Lippincott Williams and Wilkins; 2005. 2. Kiebzak W, Szmigiel C, Kowalski I, Sliwinski Z. Importance of risk factors in detecting psychomotor development disorders in children during their first year of life. Advances in Rehabilitation. 2008;22:29–33. 3. Blaszczyk JW, Cieslinska-Swider J, Plewa M, Zahorska-Markiewicz B, Markiewicz A. Effects of excessive body weight on postural control. J Biomech. 2009;42:1295–300. https://doi.org/10.1016/j.jbiomech.2009.03.006. 4. Kowalski IM, Protasiewicz-Falowska H. Trunk measurements in the standing and sitting posture according to evidence based medicine (EBM). J Spine Surg. 2013;1:66–79. 5. Kowalski IM, Protasiewicz-Faldowska H, Siwik P, Zaborowska-Sapeta K, Dabrowska A, Kluszczynski M, Raistenskis J. 
Analysis of the sagittal plane in standing and sitting position in girls with left lumbar idiopathic scoliosis. Pol Ann Med. 2013;20:30–4. https://doi.org/10.1016/j.poamed.2013.07.001.
6. Komro KA, Tobler AL, Delisle AL, O'Mara RJ, Wagenaar AC. Beyond the clinic: improving child health through evidence-based community development. BMC Pediatr. 2013;13:172. https://doi.org/10.1186/1471-2431-13-172.
7. Sitarz K, Senderek T, Kirenko J, Olszewski J, Taczala J. Sensomotoric development assessment in 10 years old children with posture defects. Polish J Phys. 2007;3:232–40.
8. Penha P, Joao S, Casarotto R, Amino C, Penteado D. Postural assessment of girls between 7 and 10 years of age. Clinics. 2005;60:9–16.
9. Vrtovec T, Pernus F, Likar B. A review of methods for quantitative evaluation of spinal curvature. Eur Spine J. 2009;18:593–607. https://doi.org/10.1007/s00586-009-0913-0.
10. Vrtovec T, Janssen MMA, Likar B, Castelein RM, Viergever MA, Pernus F. A review of methods for evaluating the quantitative parameters of sagittal pelvic alignment. Spine J. 2012;12:433–46. https://doi.org/10.1016/j.spinee.2012.02.013.
11. Gangnet N, Pomero V, Dumas R, Skalli W, Vital JM. Variability of the spine and pelvis location with respect to the gravity line: a three-dimensional stereoradiographic study using a force platform. Surg Radiol Anat. 2003;25:424–33. https://doi.org/10.1007/s00276-003-0154-6.
12. Lamartina C, Berjano P. Classification of sagittal imbalance based on spinal alignment and compensatory mechanisms. Eur Spine J. 2014;23:1177–89. https://doi.org/10.1007/s00586-014-3227-9.
13. Araujo F, Lucas R, Alegrete N, Azevedo A, Barros H.
Individual and contextual characteristics as determinants of sagittal standing posture: a population-based study of adults. The Spine J. 2014;14:2373–83. https://doi.org/10.1016/j.spinee.2014.01.040.
14. Gorecki A, Kiwerski J, Kowalski IM, Marczynski W, Nowotny J, Rybicka M, Jarosz U, Suwalska M, Szelachowska-Kluza W. Prophylactics of postural deformities in children and youth carried out within the teaching environment—experts recommendations. Pol Ann Med. 2009;16:168–77.
15. Czaprowski D, Pawlowska P, Gebicka A, Sitarski D, Kotwicki T. Intra- and interobserver repeatability of the assessment of anteroposterior curvatures of the spine using Saunders digital inclinometer. Ortop Traumatol Rehabil. 2012;14:145–53. https://doi.org/10.5604/15093492.992283.
16. Stolinski L, Kotwicki T, Czaprowski D, Chowanska J, Suzuki N. Analysis of the anterior trunk symmetry index (ATSI). Preliminary report. Stud Health Technol Inform. 2012;176:242–6.
17. Suzuki N, Inami K, Ono T, Kohno K, Asher MA. Analysis of posterior trunk symmetry index (POTSI) in scoliosis. Part 1. Stud Health Technol Inform. 1999;59:81–4. https://doi.org/10.3233/978-1-60750-903-5-81.
18. Inami K, Suzuki N, Ono T, Yamashita Y, Kohno K, Morisue H. Analysis of posterior trunk symmetry index (POTSI) in scoliosis. Part 2. Stud Health Technol Inform. 1999;59:85–8.
19. Culik J, Marik I. Nomograms for determining the tibia-femoral angle. Locomotor System J. 2002;9:81–90.
20. Cheng J, Chan P, Chiang S, Hui P. Angular and rotational profile of the lower limb in 2,630 Chinese children. J Pediatr Orthop. 1991;11:154–61.
21. Fortin C, Feldman DE, Cheriet F, Denis E, Gravel D, Gauthier F, Labelle H. Reliability of a quantitative clinical posture assessment tool among persons with idiopathic scoliosis. Physiotherapy. 2012;98:64–75. https://doi.org/10.1016/j.physio.2010.12.006.
22. Wiltse LL, Winter RB. Terminology and measurement of spondylolisthesis. J Bone Joint Surg. 1983;65:768–72.
23.
Stagnara P, DeMauroy JC, Dran G, Gonon GP, Costanzo G, Dimnet J, Pasquet A. Reciprocal angulation of vertebral bodies in a sagittal plane: approach to references for the evaluation of kyphosis and lordosis. Spine. 1982;7:335–42.
24. Boulay C, Tardieu C, Hecquet J, Benaim C, Mouilleseaux B, Marty C, Prat-Pradal D, Legaye J, Duval-Beaupère G, Pélissier J. Sagittal alignment of spine and pelvis regulated by pelvic incidence: standard values and prediction of lordosis. Eur Spine J. 2006;15:415–22. https://doi.org/10.1007/s00586-005-0984-5.
25. Kuo YL, Tully EA, Galea MP. Video analysis of sagittal spinal posture in healthy young and older adults. J Manip Physiol Ther. 2009;32:210–5. https://doi.org/10.1016/j.jmpt.2009.02.002.
26. Bolzan GP, Souza JA, Boton LM, da Silva AMT, Corrêa ECR. Facial type and head posture of nasal and mouth-breathing children. J Soc Bras Fonoaudiol. 2011;23:315–20.
27. Preece SJ, Willan P, Nester CJ, Graham-Smith P, Herrington L, Bowker P. Variation in pelvic morphology may prevent the identification of anterior pelvic tilt. J Man Manip Ther. 2008;16:113–7. https://doi.org/10.1179/106698108790818459.
28. Cerny P, Stolinski L, Drnkova J, Czaprowski D, Kosteas A, Marik I. Skeletal deformities measurements of x-ray images and photos on the computer. Locomotor System J. 2016;23(Suppl 2):32–6. ISSN 2336-4777.
29. Cerny P, Marik I. Anglespine—program for metrology of spinal and knee deformities in growth period. Locomotor System J. 2014;21:276–84.
30. Weir JP. Quantifying test-retest reliability using the intraclass correlation coefficient and the SEM. J Strength Cond Res. 2005;19:231–40.
31. Bland JM, Altman DG. Statistics notes: Cronbach's alpha. BMJ. 1997;314:572.
32. Keszei AP, Novak M, Streiner DL. Introduction to health measurement scales. J Psychosom Res. 2010;68:319–23. https://doi.org/10.1016/j.jpsychores.2010.01.006.
33. Shrout PE, Fleiss JL. Intraclass correlations: uses in assessing rater reliability. Psychol Bull.
1979;86:420–8.
34. Knott P, Pappo E, Cameron M, deMauroy JD, Rivard C, Kotwicki T, Zaina F, Wynne J, Stikeleather L, Bettany-Saltikov GTB, Durmala J, Maruyama T, Negrini S, O'Brien JP, Rigo M. SOSORT 2012 consensus paper: reducing x-ray exposure in pediatric patients with scoliosis. Scoliosis. 2014;9:4. https://doi.org/10.1186/1748-7161-9-4.
35. Kotwicki T, Durmała J, Czaprowski D, Głowacki M, Kolban M, Snela S, Sliwinski Z, Kowalski IM. Conservative management of idiopathic scoliosis—guidelines based on SOSORT 2006 Consensus. Ortop Traumatol Rehabil. 2009;5:379–95.
36. Czaprowski D, Kotwicki T, Durmała J, Stolinski L. Physiotherapy in the treatment of idiopathic scoliosis—current recommendations based on the recommendations of SOSORT 2011 (Society on Scoliosis Orthopaedic and Rehabilitation Treatment). Advances in Rehabilitation. 2014;1:23–9. https://doi.org/10.2478/rehab-2014-0030.
37. Kotwicki T, Chowanska J, Kinel E, Czaprowski D, Tomaszewski M, Janusz P. Optimal management of idiopathic scoliosis in adolescence. Adolesc Health Med Ther. 2013;4:59–73. https://doi.org/10.2147/AHMT.S32088.
38. Richards SB, Vitale MG. Screening for idiopathic scoliosis in adolescents. An information statement. J Bone Joint Surg. 2008;90:195–8. https://doi.org/10.2106/JBJS.G.01276.
39. Dutkowsky JP, Shearer D, Schepps B, Orton C, Scola F. Radiation exposure to patients receiving routine scoliosis radiography measured at depth in an anthropomorphic phantom. J Pediatr Orthop. 1990;10:532–4.
40. Fong DY, Lee CF, Cheung KM, Cheng JC, Ng BK, Lam TP, Mak KH, Yip PS, Luk KD. A meta-analysis of the clinical effectiveness of school scoliosis screening. Spine (Phila Pa 1976). 2010;35:1061–71. https://doi.org/10.1097/BRS.0b013e3181bcc835.
41. Sabirin J, Bakri R, Buang SN, Abdullah AT, Shapie A. School scoliosis screening programme—a systematic review. Med J Malaysia. 2010;65:261–7.
42.
Sox HC Jr, Berwick DM, Berg AO, Frame PS, Fryback DG, Grimes DA, Lawrence RS, Wallace RB, Washington AE, Wilson MEH, Woolf SH. Screening for adolescent idiopathic scoliosis: review article. JAMA. 1993;269:2667–72. https://doi.org/10.1001/jama.1993.03500200081038.
43. Bunnell WP. An objective criterion for scoliosis screening. J Bone Joint Surg. 1984;66:1381–7.
44. Balg F, Juteau M, Theoret C, Svotelis A, Grenier G. Validity and reliability of the iPhone to measure rib hump in scoliosis. J Pediatr Orthop. 2014;34:774–9. https://doi.org/10.1097/BPO.0000000000000195.
45. Izatt MT, Bateman GR, Adam CJ. Evaluation of the iPhone with an acrylic sleeve versus the scoliometer for rib hump measurement in scoliosis. Scoliosis. 2012;7:14. https://doi.org/10.1186/1748-7161-7-14.
46. Driscoll M, Fortier-Tougas F, Labelle H, Parent S, Mac-Thong J. Evaluation of an apparatus to be combined with a smartphone for the early detection of spinal deformities. Scoliosis. 2014;25:10. https://doi.org/10.1186/1748-7161-9-10.
47. Grivas TB, Vasiliadis ES, Mihas C, Triantafyllopoulos G, Kaspiris A. Trunk asymmetry in juveniles. Scoliosis. 2008;3:13. https://doi.org/10.1186/1748-7161-3-13.
48. Kotwicki T, Chowanska J, Kinel E, Lorkowska M, Stryla W, Szulc A. Sitting forward bending position versus standing position for studying the back shape in scoliotic children. Scoliosis. 2007;2(Suppl 1):S34. https://doi.org/10.1186/1748-7161-2-S1-S34.
49. McCarthy RE. Evaluation of the patient with deformity. In: Weinstein SL, editor. The pediatric spine. New York: Raven Press; 1994. p. 185–224.
50. Drerup B, Hierholzer E, Ellger B. Shape analysis of the lateral and frontal projection of spine curves assessed from rasterstereographs. In: Sevastik JA, Diab KM, editors. Research into spinal deformities. Amsterdam: IOS Press; 1997. p. 271–5.
51. Zubairi J. Applications of computer-aided rasterstereography in spinal deformity detection. Image Vis Comput. 2002;20:319–24.
52.
Upadhyay SS, Burwell RG, Webb JK. Hump changes on forward flexion of the lumbar spine in patients with idiopathic scoliosis. Spine (Phila Pa 1976). 1988;13:146–51. 53. Turner-Smith AR, Harris JD, Houghton GR, Jefferson RJA. Method for analysis of back shape in scoliosis. J Biomech. 1988;21:497–509. Stolinski et al. Scoliosis and Spinal Disorders (2017) 12:38 Page 22 of 24 http://dx.doi.org/10.1007/s00586-009-0913-0 http://dx.doi.org/10.1007/s00586-009-0913-0 http://dx.doi.org/10.1016/j.spinee.2012.02.013 http://dx.doi.org/10.1016/j.spinee.2012.02.013 http://dx.doi.org/10.1007/s00276-003-0154-6 http://dx.doi.org/10.1007/s00586-014-3227-9 http://dx.doi.org/10.1016/j.spinee.2014.01.040 http://dx.doi.org/10.1016/j.spinee.2014.01.040 http://dx.doi.org/10.5604/15093492.992283 http://dx.doi.org/10.3233/978-1-60750-903-5-81. http://dx.doi.org/10.1016/j.physio.2010.12.006 http://dx.doi.org/10.1016/j.physio.2010.12.006 http://dx.doi.org/10.1007/s00586-005-0984-5. http://dx.doi.org/10.1007/s00586-005-0984-5. http://dx.doi.org/10.1016/j.jmpt.2009.02.002 http://dx.doi.org/10.1179/106698108790818459 http://dx.doi.org/10.1179/106698108790818459 http://dx.doi.org/10.1016/j.jpsychores.2010.01.006 http://dx.doi.org/10.1016/j.jpsychores.2010.01.006 http://dx.doi.org/10.1186/1748-7161-9-4 http://dx.doi.org/10.1186/1748-7161-9-4 http://dx.doi.org/10.2478/rehab-2014-0030 http://dx.doi.org/10.2478/rehab-2014-0030 http://dx.doi.org/10.2147/AHMT.S32088. http://dx.doi.org/10.2106/JBJS.G.01276 http://dx.doi.org/10.2106/JBJS.G.01276 http://dx.doi.org/10.1097/BRS.0b013e3181bcc835. http://dx.doi.org/10.1097/BRS.0b013e3181bcc835. 
http://dx.doi.org/10.1001/jama.1993.03500200081038 http://dx.doi.org/10.1097/BPO.0000000000000195 http://dx.doi.org/10.1186/1748-7161-7-14 http://dx.doi.org/10.1186/1748-7161-9-10 http://dx.doi.org/10.1186/1748-7161-9-10 http://dx.doi.org/10.1186/1748-7161-3-13 http://dx.doi.org/10.1186/1748-7161-3-13 http://dx.doi.org/10.1186/1748-7161-2-S1-S34 http://dx.doi.org/10.1186/1748-7161-2-S1-S34 54. Zonnenberg AJJ, Maanen V, Elvers JWH, Oostendorp RAB. Intra/interrater reliability of measurements on body posture photographs. J Craniomandibular Pract. 1996;14:326–31. https://doi.org/10.1080/08869634. 1996.11745985. 55. Raine S, Twomey LT. Head and shoulder posture variations in 160 asymptomatic women and men. Arch Phys Med Rehabil. 1997;78:1215–21. https://doi.org/10.1016/S0003-9993(97)90335-X. 56. Vernon H. An assessment of the intra- and inter-reliability of the posturometer. J Manipulative and Physiol Ther. 1983;6:57–60. 57. Bullock-Saxton J. Postural alignment in standing: a repeatable study. Austr J Physiother. 1993;39:25–9. https://doi.org/10.1016/S0004-9514(14)60466-9. 58. Braun BL, Amundson LR. Quantitative assessment of head and shoulder posture. Arch Phys Med Rehabil. 1989;70:322–9. 59. Grimmer K. An investigation of poor cervical resting posture. Aust J Physiother. 1997;43:7–16. https://doi.org/10.1016/S0004-9514(14)60398-6. 60. Nilsson BM, Soderlund A. Head posture in patients with whiplash-associated disorders and the measurement method’s reliability—a comparison to healthy subjects. Adv Physiother. 2005;7:13–9. https://doi.org/10.1080/ 14038190510010278. 61. Christensen HW, Nilsson N. The ability to reproduce the neutral zero position of the head. J Manip Physiol Ther. 1999;22:26–8. https://doi.org/10. 1016/S0161-4754(99)70102-8. 62. Swinkels A, Dolan P. Regional assessment of joint position sense in the spine. Spine. 1998;23:590–7. 63. Swinkels A, Dolan P. Spinal position sense is independent of the magnitude of movement. Spine. 2000;25:98–105. 64. 
Dunk NM, Chung YY, Compton DS, Callaghan JP. The reliability of quantifying upright standing postures as a baseline diagnostic clinical tool. J Manip Physiol Ther. 2004;27:91–6. https://doi.org/10.1016/j.jmpt.2003.12. 003. 65. Dunk NM, Lalonde J, Callaghan JP. Implications for the use of postural analysis as a clinical diagnostic tool: reliability of quantifying upright standing spinal postures from photographic images. J Manip Physiol Ther. 2005;28:386–92. https://doi.org/10.1016/j.jmpt.2005.06.006. 66. Beaudoin L, Zabjek KF, Leroux MA, Coillard C, Rivard CH. Acute systematic and variable postural adaptations induced by an orthopaedic shoe lift in control subjects. Eur Spine J. 1999;8:40–5. https://doi.org/10.1007/ s005860050125. 67. Strimpakos N, Sakellari V, Gioftsos G, Papathanasiou M, Brountzos E, Kelekis D, Kapreli E, Oldham J. Cervical spine ROM measurements: optimizing the testing protocol by using a 3D ultrasound-based motion analysis system. Cephalgia. 2005;25:1133–45. https://doi.org/10.1111/j.1468-2982.2005.00970. x. 68. Zaina F, Atanasio A, Negrini S. Clinical evaluation of scoliosis during growth: description and reliability. In: Grivas TB, editor. The conservative scoliosis treatment. Studies in health technology and informatics, vol. 135. Amsterdam: IOS Press; 2008. p. 123–54. 69. Canales JZ, Cordas TA, Fiquer JT, Cavalcante AF, Moreno RA. Posture and body image in individuals with major depressive disorder: a controlled study. Rev Bras Psiquiatr. 2010;32:375–80. https://doi.org/10.1590/S1516- 44462010000400010. 70. Cerruto C, Di Vece L, Doldo T, Giovannetti A, Polimeni A, Goracci C. Computerized photographic method to evaluate changes in head posture and scapular position following rapid palatal expansion: a pilot study. J Clin Pediatr Dent. 2012;37:213–8. https://doi.org/10.17796/jcpd.37.2.11q670. 71. Pausic J, Pedisic Z, Dizdar D. Reliability of a photographic method for assessing standing posture of elementary school students. 
J Manip Physiol Ther. 2010;33:425–31. https://doi.org/10.1016/j.jmpt.2010.06.002.34vlw000wx. 72. Ruivo RM, Pezarat-Correia P, Carita AI. Intrarater and interrater reliability of photographic measurement of upper-body standing posture of adolescents. J Manip Physiol Ther. 2015;38:74–80. https://doi.org/10.1016/j.jmpt.2014.10. 009. 73. Sacco ICN, Alibert S, Queiroz BWC, Pripas D, Kieling I, Kimura AA, Sellmer AE, Malvestio RA, Sera MT. Reliability of photogrammetry in relation to goniometry for postural lower limb assessment. Rev Bras Fisioter. 2007;11: 411–7. https://doi.org/10.1590/S1413-35552007000500013. 74. Canhadas Belli JF, Chaves TC, Siriani de Oliveira A, Grossi DB. Analysis of body posture in children with mild to moderate asthma. Eur J Pediatr. 2009; 168:1207–16. https://doi.org/10.1007/s00431-008-0911-y. 75. Matamalas A, Bago J, D’Agata E, Pellise F. Validity and reliability of photographic measures to evaluate waistline asymmetry in idiopathic scoliosis. Eur. Spine J. 2016;25:3170–9. https://doi.org/10.1007/s00586-016- 4509-1. 76. Matamalas A, Bago J, D’Agata E, Pellise F. Reliability and validity study of measurements on digital photography to evaluate shoulder balance in idiopathic scoliosis. Scoliosis. 2014;9:23. https://doi.org/10.1186/s13013-014- 0023-6. 77. Yoder J. Review: photographic architecture in the twentieth century, by Claire Zimmerman. J Soc Archit Hist. 2016;75:110–2. https://doi.org/10.1525/ jsah.2016.75.1.110. 78. Beilin H. Understanding the photographic image. J Appl Dev Psychol. 1999; 20:1–30. https://doi.org/10.1016/S0193-3973(99)80001-X. 79. Ellenbogen R, Jankauskas S, Collini FJ. Achieving standardized photographs in aesthetic surgery. Plast Reconstr Surg. 1990;86:955–61. 80. do R’r JLP, Nakashima IY, Rizopoulos K, Kostopoulos D, Marques AP. Improving posture: comparing segmental stretch and muscular chains therapy. Clin Chiropr. 2012;15:121–8. https://doi.org/10.1016/j.clch.2012.10. 039. 81. 
Santos MM, Silva MPC, Sanada LS, Alves CRJ. Photogrammetric postural analysis on healthy seven to ten-year-old children: interrater reliability. Rev Bras Fisioter. 2009;13:350–5. https://doi.org/10.1590/S1413- 35552009005000047. 82. Giglio CA, Volpon JB. Development and evaluation of thoracic kyphosis and lumbar lordosis during growth. J Child Orthop. 2007;1:187–93. https://doi. org/10.1007/s11832-007-0033-5. 83. do Rosário JLP. Photographic analysis of human posture: a literature review. J Bodyw Mov Ther. 2014;18:56–61. https://doi.org/10.1016/j.jbmt.2013.05.008. 84. Ferreira EAG, Duarte M, Maldonado EP, Burke TN, Marques AP. Postural assessment software (PAS/SAPO): validation and reliability. Clinics. 2010;65: 675–81. https://doi.org/10.1590/S1807-59322010000700005. 85. Neiva PD, Kirkwood RN, Godinho R. Orientation and position of head posture, scapula and thoracic spine in mouth-breathing children. Int J Pediatr Otorhinolaryngol. 2009;73:227–36. https://doi.org/10.1016/j.ijporl. 2008.10.006. 86. Grimmer-Somers K, Milanese S, Louw Q. Measurement of cervical posture in the sagittal plane. J Manip Physiol Ther. 2008;31:509–17. https://doi.org/10. 1016/j.jmpt.2008.08.005. 87. McEvoy MP, Grimmer K. Reliability of upright posture measurements in primary school children. BMC Musculoskelet Disord. 2005;6:35. https://doi. org/10.1186/1471-2474-6-35. 88. Gadotti IC, Magee DJ. Validity of surface measurements to access craniocervical posture in the sagittal plane: a critical review. Phys Ther Rev. 2008;13:258–68. https://doi.org/10.1179/174328808X309250. 89. Perry M, Smith A, Straker L, Coleman J, O'Sullivan P. Reliability of sagittal photographic spinal posture assessment in adolescents. Adv Physiother. 2008;10:66–75. https://doi.org/10.1080/14038190701728251. 90. Cobb SC, James R, Hjertstedt M, Kruk J. A digital photographic measurement method for quantifying foot posture: validity, reliability, and descriptive data. J Athl Train. 2011;46:20–30. 
https://doi.org/10.4085/1062- 6050-46.1.20. 91. Guan X, Fan G, Wu X, Zeng Y, Su H, Gu G, Zhou Q, Gu X, Zhang H. Photographic measurement of head and cervical posture when viewing mobile phone: a pilot study. Eur Spine J. 2015;24:2892–8. https://doi.org/10. 1007/s00586-015-4143-3. 92. Matamalas A, Bago J, D’ Agata E, Pellise F. Does patient perception of shoulder balance correlate with clinical balance? Eur Spine J. 2016;25:3560– 7. https://doi.org/10.1007/s00586-015-3971-5. 93. Sai-hu M, Benlong S, Xu S, Zhen L, Ze-zhang Z, Bang-ping Q, Yong Q. Morphometric analysis of iatrogenic breast asymmetry secondary to operative breast shape changes in thoracic adolescent idiopathic scoliosis. Eur Spine J. 2016;25:3075–81. https://doi.org/10.1007/s00586-016-4554-9. 94. Saad KR, Colombo AS, Ribeiro AP, Joao SMA. Reliability of photogrammetry in the evaluation of the postural aspects of individuals with structural scoliosis. J Bodyw Mov Ther. 2012;16:210–6. https://doi.org/10.1016/j.jbmt. 2011.03.005. 95. Souza JA, Pasinato F, Basso D, Castilhos Rodrigues Correa E, Toniolo da Silva AM. Biophotogrammetry: reliability of measurementsobtained with a posture assessment software (SAPO). Rev Bras Cineantropom Desempenho Hum. 2011;13:299–305. https://doi.org/10.5007/1980-0037.2011v13n4p299. 96. Penha PJ, Baldini M, Amado João SM. Spinal postural alignment variance according to sex and age in 7- and 8-year-old children. J Manip Physiol Ther. 2009;32:154–9. https://doi.org/10.1016/j.jmpt.2008.12.009. Stolinski et al. Scoliosis and Spinal Disorders (2017) 12:38 Page 23 of 24 http://dx.doi.org/10.1080/08869634.1996.11745985 http://dx.doi.org/10.1080/08869634.1996.11745985 http://dx.doi.org/10.1016/S0003-9993(97)90335-X http://dx.doi.org/10.1016/S0004-9514(14)60466-9. 
http://dx.doi.org/10.1016/S0004-9514(14)60398-6 http://dx.doi.org/10.1080/14038190510010278 http://dx.doi.org/10.1080/14038190510010278 http://dx.doi.org/10.1016/S0161-4754(99)70102-8 http://dx.doi.org/10.1016/S0161-4754(99)70102-8 http://dx.doi.org/10.1016/j.jmpt.2003.12.003 http://dx.doi.org/10.1016/j.jmpt.2003.12.003 http://dx.doi.org/10.1016/j.jmpt.2005.06.006 http://dx.doi.org/10.1007/s005860050125 http://dx.doi.org/10.1007/s005860050125 http://dx.doi.org/10.1111/j.1468-2982.2005.00970.x http://dx.doi.org/10.1111/j.1468-2982.2005.00970.x http://dx.doi.org/10.1590/S1516-44462010000400010 http://dx.doi.org/10.1590/S1516-44462010000400010 http://dx.doi.org/10.17796/jcpd.37.2.11q670. http://dx.doi.org/10.1016/j.jmpt.2010.06.002.34vlw000wx. http://dx.doi.org/10.1016/j.jmpt.2014.10.009 http://dx.doi.org/10.1016/j.jmpt.2014.10.009 http://dx.doi.org/10.1590/S1413-35552007000500013. http://dx.doi.org/10.1007/s00431-008-0911-y http://dx.doi.org/10.1007/s00586-016-4509-1. http://dx.doi.org/10.1007/s00586-016-4509-1. http://dx.doi.org/10.1186/s13013-014-0023-6 http://dx.doi.org/10.1186/s13013-014-0023-6 http://dx.doi.org/10.1525/jsah.2016.75.1.110. http://dx.doi.org/10.1525/jsah.2016.75.1.110. http://dx.doi.org/10.1016/S0193-3973(99)80001-X http://dx.doi.org/10.1016/j.clch.2012.10.039 http://dx.doi.org/10.1016/j.clch.2012.10.039 http://dx.doi.org/10.1590/S1413-35552009005000047. http://dx.doi.org/10.1590/S1413-35552009005000047. http://dx.doi.org/10.1007/s11832-007-0033-5 http://dx.doi.org/10.1007/s11832-007-0033-5 http://dx.doi.org/10.1016/j.jbmt.2013.05.008 http://dx.doi.org/10.1590/S1807-59322010000700005. 
http://dx.doi.org/10.1016/j.ijporl.2008.10.006 http://dx.doi.org/10.1016/j.ijporl.2008.10.006 http://dx.doi.org/10.1016/j.jmpt.2008.08.005 http://dx.doi.org/10.1016/j.jmpt.2008.08.005 http://dx.doi.org/10.1186/1471-2474-6-35 http://dx.doi.org/10.1186/1471-2474-6-35 http://dx.doi.org/10.1179/174328808X309250 http://dx.doi.org/10.1080/14038190701728251 http://dx.doi.org/10.4085/1062-6050-46.1.20 http://dx.doi.org/10.4085/1062-6050-46.1.20 http://dx.doi.org/10.1007/s00586-015-4143-3 http://dx.doi.org/10.1007/s00586-015-4143-3 http://dx.doi.org/10.1007/s00586-015-3971-5. http://dx.doi.org/10.1007/s00586-016-4554-9 http://dx.doi.org/10.1016/j.jbmt.2011.03.005 http://dx.doi.org/10.1016/j.jbmt.2011.03.005 http://dx.doi.org/10.5007/1980-0037.2011v13n4p299 http://dx.doi.org/10.1016/j.jmpt.2008.12.009 97. Fortin C, Feldman DE, Cheriet F, Labelle H. Validity of a quantitative clinical measurement tool of trunk posture in idiopathic scoliosis. Spine. 2010;35: E988–94. https://doi.org/10.1097/BRS.0b013e3181cd2cd2. 98. Milanesi JM, Borin G, Correˆa ECR, da Silva AMT, Bortoluzzi DC, Souza JA. Impact of the mouth breathing occurred during childhood in the adult age: biophotogrammetric postural analysis. Int J Pediatr Otorhinolaryngol. 2011; 75:999–1004. https://doi.org/10.1016/j.ijporl.2011.04.018. 99. Young S. Research for medical photographers: photographic measurement. J Audiov Media Med. 2002;25:94–8. https://doi.org/10.1080/ 014051102320376799. 100. Fortin C, Feldman DE, Cheriet F, Labelle H. Differences in standing and sitting postures of youth with idiopathic scoliosis from quantitative analysis of digital photographs. Phys Occup Ther Pediatr. 2013;33:1–14. https://doi. org/10.3109/01942638.2012.747582. 101. Galera S, Nascimento L, Teodoro E, Tomazini J. Comparative study on the posture of individuals with and without cervical pain. IFMBE Proc. 2009;25: 131–4. https://doi.org/10.1007/978-3-642-03889-1_36. 102. Lafond D, Descarreaux M, Normand MC, Harrison DE. 
Postural development in school children: a cross-sectional study. Chiropr Osteopat. 2007;15:1–7. https://doi.org/10.1186/1746-1340-15-1. 103. O’Sullivan PB, Grahamslaw KM, Kendell M, Lapenskie SC. Mo¨ller NE, Richards KV. The effect of different standing and sitting postures on trunk muscle activity in a pain-free population. Spine. 2002;27:1238–44. 104. Normand MC, Descarreaux M, Harrison DD, Harrison DE, Perron DL, Ferrantelli JR. Three dimensional evaluation of posture in standing with the posture print: an intra- and inter-examiner reliability study. Chiropr Osteopat. 2007;15:1–11. https://doi.org/10.1007/s00586-005-0984-5. 105. Smith A, O’Sullivan P, Straker L. Classification of sagittal thoraco-lumbo- pelvic alignment of the adolescent spine in standing and its relationship to low back pain. Spine. 2008;33:2101–7. https://doi.org/10.1097/BRS. 0b013e31817ec3b0. 106. Vedantam R, Lenke LG, Bridwell KH, Linville DL, Blanke K. The effect of variation in arm position on sagittal spinal alignment. Spine. 2000;25:2204–9. 107. Tyrakowski M, Janusz P, Mardjetko S, Kotwicki T, Siemionow K. Comparison of radiographic sagittal spinopelvic alignment between skeletally immature and skeletally mature individuals with Scheuermann's disease. Eur Spine J. 2015;24:1237–43. https://doi.org/10.1007/s00586-014-3595-1. 108. Tyrakowski M, Mardjetko S, Siemionow K. Radiographic spinopelvic parameters in skeletally mature patients with Scheuermann disease. Spine (Phila Pa 1976). 2014;39:E1080–5. https://doi.org/10.1007/s00586-014-3595-1. 109. Fortin C, Feldman DE, Cheriet F, Labelle H. Clinical methods for quantifying body segment posture: a literature review. Disabil Rehabil. 2011;33:367–83. https://doi.org/10.3109/09638288.2010.492066. 110. Czaprowski D, Pawlowska P, Stolinski L, Kotwicki T. Active self-correction of back posture in children instructed with ‘straighten your back’ command. Man Ther. 2014;19:392–8. https://doi.org/10.1016/j.math.2013.10.005. 111. 
Stolinski L, Kotwicki T, Czaprowski D. Active self correction of child’s posture assessed with plurimeter and documented with digital photography. Progress in Medicine. 2012;25:484–90. 112. Grimmer KA, Williams MT, Gill TK. The associations between adolescent head-on-eck posture, backpack weight and anthropometric features. Spine. 1999;24:2262–7. 113. Solow B, Sandham A. Cranio-cervical posture: a factor in the development and function of the dentofacial structures. Eur J Orthod. 2002;5:447–56. https://doi.org/10.1093/ejo/24.5.447. 114. Van Maanen CJ, Zonnenberg AJ, Elvers JW, Oostendorp RA. Intra/interrater reliability of measurements on body posture photographs. Cranio. 1996;14: 326–31. • We accept pre-submission inquiries • Our selector tool helps you to find the most relevant journal • We provide round the clock customer support • Convenient online submission • Thorough peer review • Inclusion in PubMed and all major indexing services • Maximum visibility for your research Submit your manuscript at www.biomedcentral.com/submit Submit your next manuscript to BioMed Central and we will help you at every step: Stolinski et al. Scoliosis and Spinal Disorders (2017) 12:38 Page 24 of 24 http://dx.doi.org/10.1097/BRS.0b013e3181cd2cd2 http://dx.doi.org/10.1016/j.ijporl.2011.04.018. http://dx.doi.org/10.1080/014051102320376799. http://dx.doi.org/10.1080/014051102320376799. http://dx.doi.org/10.3109/01942638.2012.747582 http://dx.doi.org/10.3109/01942638.2012.747582 http://dx.doi.org/10.1007/978-3-642-03889-1_36. http://dx.doi.org/10.1186/1746-1340-15-1. http://dx.doi.org/10.1007/s00586-005-0984-5 http://dx.doi.org/10.1097/BRS.0b013e31817ec3b0. http://dx.doi.org/10.1097/BRS.0b013e31817ec3b0. http://dx.doi.org/10.1007/s00586-014-3595-1 http://dx.doi.org/10.1007/s00586-014-3595-1. 
http://dx.doi.org/10.3109/09638288.2010.492066 http://dx.doi.org/10.1016/j.math.2013.10.005 http://dx.doi.org/10.1093/ejo/24.5.447 Abstract Background Material and methods Results Conclusions Background Human body posture Methods Standardization of posture assessment with digital photography Preparing a patient to photogrammetry Photographic parameters for the frontal plane evaluation Photographic parameters for the sagittal plane evaluation Semi-automatic measurements of postural photographic parameters Validation of the photographic technique Intra-observer reproducibility Inter-observer reliability Validation of the photographic technique against Rippstein plurimeter Variability of photographic sagittal parameters over time Variability of photographic coronal parameters over time Normative values of sagittal photographic parameters in children aged 7–10 Statistical analysis Results Photogrammetry reliability studies Validation of the photographic technique Discussion Radiological assessment as the current gold standard for scoliosis evaluation but not for child body posture evaluation Overview of photographic parameters proposed for posture evaluation Current opinions on digital photography technique Technical procedures of posture photogrammetry Camera resolution Camera position—distance and height Child positioning Lower limb positioning—the feet Lower limbs positioning—the knees Upper limb positioning—the elbows Head position and gaze direction Digital photography technique for body posture evaluation and documentation Conclusions Additional files Abbreviations Funding Availability of data and materials Authors’ contributions Ethics approval and consent to participate Consent for publication Competing interests Publisher’s Note Author details References work_n4dolm7g5vfenjfnwpoebnnk7a ---- 123VELASCO, Nina. Fotografia digital, estética e sociedade de controle. Revista Galáxia, São Paulo, n. 16, p. 123-133, dez. 2008. 
Fotografia digital, estética e sociedade de controle Nina Velasco Resumo: A união entre a fotografia e a tecnologia digital produziu um híbrido cujas consequências sociais e estéticas foram pouco questionadas. A fotografia numérica vem sendo largamente utilizada em práticas as mais distintas. Ao nos depararmos com a produção de arte con- temporânea, encontramos algumas experiências que nos fazem refletir sobre o papel dessa tecnologia na arte e na sociedade, de uma forma geral. Duas obras que tematizam e proble- matizam a relação entre a fotografia e o poder servirão de base para uma reflexão acerca de algumas consequências do uso da fotografia nos diagramas disciplinar e de controle. Palavras-chave: fotografia; arte e tecnologia; poder Abstract: Digital photography, esthetics and society of control — The association of photography and digital technology has created a hybrid whose social and aesthetic consequences have still scarcely been addressed. Numerical photography has been used widely in the most diverse functions. Upon encountering the production of contemporary art, we find that some experiences lead us to question the role of this technology in art and society as a whole. Two works of art that center around and discuss the relation between photography and power will serve as the basis for a reflection about some of the consequences of the use of photography in disciplinary and control diagrams. Keywords: photography; art and technology; power Fotografia digital, estética e sociedade de controle A tecnologia é social antes de ser técnica. Gilles Deleuze A união entre a fotografia e a tecnologia digital produziu um híbrido cujas consequên- cias sociais e estéticas ainda foram pouco questionadas. A fotografia numérica digital vem 124 VELASCO, Nina. Fotografia digital, estética e sociedade de controle. Revista Galáxia, São Paulo, n. 16, p. 123-133, dez. 2008. 
sendo largamente utilizada (podendo-se dizer que praticamente extinguiu a modalidade analógica) em práticas as mais distintas: desde a clássica fotografia amadora da família e em viagens, passando pelo fotojornalismo e por todo o tipo de fotografia profissional, até a fotografia considerada artística. Ao nos depararmos com a produção de arte contemporânea, encontramos algumas experiências que nos fazem refletir sobre o papel dessa tecnologia na arte e na sociedade, de uma forma geral. Duas obras que tematizam e problematizam a relação entre a fotografia e o poder servirão de base para uma reflexão acerca de algumas consequências do uso da fotografia nos diagramas disciplinar e de controle. A fotografia surge exatamente no período em que o diagrama disciplinar se consolida e se aparelha. Foucault (1999, p. 170) relaciona o “diagrama” ao panótico. O diagrama (ou máquina abstrata), para Deleuze (2005, p. 46), é “uma causa imanente não-unificadora que se estende por todo campo social”. Walter Benjamin (1989, p. 45), ao teorizar sobre o processo de modernização da sociedade, já ressaltava as relações entre a fotografia e o poder policial: Nos primórdios dos procedimentos de identificação, cujo padrão da época é dado pelo método Bertillon, encontramos a definição da pessoa através da assinatura. Na história desse processo, a descoberta da fotografia representa um corte. Para a criminalística não significa menos do que a invenção da imprensa para a literatura. Pela primeira vez, a fotografia permite registrar vestígios duradouros e inequívocos de um ser humano. A comparação da fotografia com a imprensa, em Benjamin, nos faz pensar imediatamente na questão da reprodutibilidade técnica, que permeia quase todo o pensamento do autor sobre as mudanças na percepção do sujeito moderno. O enfoque dado na maioria de suas leituras costuma se ater à questão da decadência da aura e do fim da autenticidade da obra de arte com o advento da fotografia. 
Esse trecho deixa claro, no entanto, que o processo a que Benjamin se refere é bem mais amplo e faz parte de toda uma reconfiguração do campo social. Benjamin analisa o momento em que a esfera privada se torna cada vez mais pública, em que “uma múltipla estrutura de registros” faz com que o ser humano desapareça nas massas da cidade grande (BENJAMIN, 1989, p. 44). Além da importância da fotografia para o controle policial, entre os exemplos que o autor elenca, estão o registro das partidas e das chegadas de carruagens em praças públicas; a contagem das cartas pelo correio; e a numeração das casas nos bairros populares. O uso da fotografia pela criminalística representa um corte por seu caráter documental, verdadeiro “vestígio”, considerado “inequívoco”. Assim como uma im- pressão digital, a fotografia opera pela contiguidade do referente. Essa característica metonímica confere uma autoridade inédita à produção imagética, especialmente dos rostos de seres humanos. 125VELASCO, Nina. Fotografia digital, estética e sociedade de controle. Revista Galáxia, São Paulo, n. 16, p. 123-133, dez. 2008. John Tagg (2005, p. 89-134), em seu artigo sobre a fotografia como prova jurídica, faz um panorama das relações entre a fotografia e a instituição da polícia. Coinciden- temente, os primeiros anos do desenvolvimento da técnica fotográfica correspondem ao período da criação do serviço policial na Inglaterra. Não demorou muito para que a polícia percebesse o importante instrumento de vigilância que a fotografia poderia se tornar. Ainda na década de 1840, a polícia inglesa contratou fotógrafos civis para retratar possíveis criminosos presos e acrescentar às impressões digitais uma fotografia que os identificasse. Torna-se evidente que a imagem fotográfica passa a ter um papel importante para a constituição das novas estratégias de poder na modernidade. No entanto, não se trata aqui do poder clássico, centralizado e restrito aos aparatos estatais. 
Deleuze identifica a posição teórica de Foucault a partir do abandono de uma série de postulados tradicionais da esquerda à cerca da noção de poder. Entre eles, estaria o “postulado da localização” (DELEUZE, 1992, p. 35), em que o poder poderia ser localizado no aparelho do Estado e em suas instituições. Para Foucault (1988, p. 35), o Estado “aparece como efeito de conjunto ou resultante de uma multiplicidade de engrenagens e de focos que constituem por sua conta uma ‘microfísica do poder’”. A disciplina não pode ser identificada com uma instituição, como a polícia, nem por um aparelho, pois ela é um tipo de tecnologia que atravessa diversas instituições, um modo de exercer o poder compartilhado por diversas instituições. É preciso ressaltar que a polícia não foi a única instituição a fazer uso da fotografia como forma de aperfeiçoar o controle dos corpos a serem normatizados. Praticamente todas as instituições chamadas disciplinares por Foucault, como o manicômio, a escola e o exército, absorveram imediatamente a fotografia em suas práticas cotidianas, dando grande relevância à documentação fotográfica. O uso de autorretratos pela instituição policial é bastante antigo e data, possi- velmente, da mesma época do uso de fotografias. O cruzamento dessas duas técnicas resulta em um mecanismo bastante eficaz de vigilância e disciplina. A pesquisa no arquivo fotográfico da polícia é o primeiro passo em uma investigação criminal que possui testemunhas oculares capazes de fornecer um autorretrato do delinquente. Essas são estratégias evidentemente disciplinares, ao produzir o efeito do panóptico descrito por Foucault (1999, p. 167): automatização e desindividualização do poder, que pode ser exercido por qualquer um que faça funcionar a máquina. A dissociação do par “ver/ser visto” se dá a partir do momento em que qualquer um é uma teste- munha em potencial. Novamente podemos citar Benjamin como um dos primeiros a atentar para essa característica na modernidade. 
Ao mesmo tempo em que a massa pode ser o refúgio para o criminoso (que busca o anonimato em meio à multidão), cometer um delito em praça pública é bastante temerário. Seria esse o paradoxo condensado na literatura policial: “O 126 VELASCO, Nina. Fotografia digital, estética e sociedade de controle. Revista Galáxia, São Paulo, n. 16, p. 123-133, dez. 2008. conteúdo social primitivo do romance policial é a supressão dos vestígios do indivíduo na multidão da cidade grande” (BENJAMIN, 1989, p. 41). Deleuze faz uma ressalva, entretanto, para o fato de que a sociedade disciplinar descrita por Foucault estaria passando por alterações significativas na contemporaneidade. O próprio Foucault situa o apogeu desse diagrama de poder no início do século XX e percebe uma progressiva mutação das instituições disciplinares a partir de meados desse século, após a Segunda Guerra Mundial. Encontramo-nos numa crise generalizada de todos os meios de confinamento, prisão, hospital, fábrica, escola, família. A família é um “interior”, em crise como qualquer ou- tro interior, escolar, profissional, etc. Os ministros competentes não param de anunciar reformas supostamente necessárias. Reformar a escola, reformar a indústria, o hospital, o exército, a prisão; mas todos sabem que essas instituições estão condenadas, num prazo mais ou menos longo. (DELEUZE, 1992, p. 220) Para Deleuze, as instituições de confinamento, modelares, funcionavam como variáveis dependentes, em que sempre se começava do zero, operando pela lógica ana- lógica. A sociedade do controle, que estaria dando lugar à sociedade disciplinar, por sua vez, possui variações inseparáveis, modulações que operam por uma lógica numérica. Enquanto na sociedade disciplinar o indivíduo estava sempre recomeçando, ao passar de uma instituição de confinamento a outra, na sociedade de controle, nunca se termina nada. O autor identifica, ainda, para cada modo de poder (soberania, disciplina e con- trole) técnicas ou máquinas distintas. 
Os equipamentos da sociedade disciplinar seriam “máquinas energéticas”, ao passo que a sociedade do controle opera por máquinas informáticas e computadores. Será, então, que a polícia e a prisão, enquanto agenciamentos pertencentes ao diagrama disciplinar, estariam se tornando obsoletos? Não parece ser o caso. O próprio Deleuze admite que não se trata necessariamente de anunciar apenas o fim dos mecanis- mos de poder antecedentes, mas de pensar o que está em vias de ser implantado, mesmo que meios antigos retornem (ou nunca deixem de existir, acrescentamos) devidamente adaptados. No caso específico da instituição de punição disciplinar por excelência (a prisão), o autor identifica a busca de penas “substitutivas” e o uso de coleiras eletrônicas como novos mecanismos de poder na lógica do controle. Poderíamos acrescentar o uso crescente de câmeras de vigilância que pulverizam o olhar centralizado do antigo panóptico. Mesmo que a lógica que o rege tenha mudado pouco em sua essência, (nunca se têm certeza se realmente existe alguém na torre observando, assim como a câmera não necessariamente precisa estar ligada para que seu efeito seja conseguido), a diferença é clara. “Não se está mais diante do par massa-indivíduo. Os indivíduos se tornaram ‘dividuais’, divisíveis, e as massas tornaram-se amostras, dados, mercados ou ‘bancos’” (DELEUZE, 1992, p. 222). 127VELASCO, Nina. Fotografia digital, estética e sociedade de controle. Revista Galáxia, São Paulo, n. 16, p. 123-133, dez. 2008. Outra questão a ser levantada diz respeito às consequências do uso da tecnologia digital para a produção de imagens fotográficas no aparato policial. Será que a fotografia numérica perde o seu caráter documental ao deixar de ser uma reprodução puramente mecânica de um referente? A fotografia digital teria a mesma credibilidade do que a analógica? Será que haveria uma diferença de grau ou de natureza entre uma imagem fotográfica analógica e uma fotografia numérica? 
To begin, we should ask whether an ontology of the photographic image is possible. For some authors, such as Roland Barthes, photography is characterized by a direct relation to its referent, without which it cannot exist (BARTHES, 1984, p. 116). This is the idea condensed in the noeme "that-has-been." Philippe Dubois (1993, p. 94), in turn, defines photography by its indexical character; like every index, the photograph proceeds from a physical connection with its referent: it is constitutively a singular trace that attests to the existence of its object and points to it, as with a finger, through its power of metonymic extension.1 What persists in these and other definitions is the "imprint" produced by the light emanating from the referent on the light-sensitive surface of silver salts, generating the image automatically. Digital photography retains part of this (optical) process, but the morphogenesis2 of the image is decidedly different. Image capture by the digital camera still follows the projection model inherent in the camera obscura, like Renaissance pictorial representation, photography and cinema. What distinguished the Renaissance pictorial image from the photographic one, for example, was the automatism of the chemical process; here, we cannot fail to take into account the algorithmic process that characterizes the production of the digital photograph. Antonio Fatorelli (2003, p. 16-17), however, draws attention to the fact that all attempts to find an ontology of the photographic image, which he calls "essentialist," are always grounded in the technical nature of the photographic process. Only in this way would it be possible to arrive at a definition generic enough to encompass the heterogeneity of images and social practices gathered under the label "photography" over more than a century of history. It is never a matter of one photography, but always of photographies in the plural, as André Rouillé (2005, p. 16) points out.
"Photography" can therefore only be understood as a set of practices and images related to the historical context and the material reality in which they are embedded. Pierre Bourdieu's classic analysis of the social role of photography and of its status as a middle-brow aesthetic, between the popular and the elite — Un art moyen — has already been the target of many critiques and reconsiderations, in sociology as much as in art criticism. The claim that photography served as an instrument of family cohesion and of the representation of that cohesion within the middle class, privileging certain ritual themes as "photographable" (weddings, honeymoon trips, baptisms, birthdays, etc.), seems quite valid for the moment when the text was written. It remains to be seen whether digital technology is not giving the photographic image other social functions in the contemporary world. Certainly, some changes impose themselves on the photographic act itself: the immediacy between making the image and viewing it; the possibility of "erasing," "throwing away," un-producing; the exponential increase in the quantity of images; the flexibility of the lighting conditions needed for an intelligible image; among others.

Notes: 1 Index understood as one of Peirce's three categories of sign, distinguished from the others by a relation of physical connection with the referent, rather than the conventional relation of the symbol or the analogical relation of the icon. 2 A concept coined by Edmond Couchot to differentiate the regime of representational images (created through the projection model) from virtual images (created by simulation).
Never have so many photographic images been produced as today, after the popularization of digital cameras in every format and price range (from sophisticated professional cameras to small point-and-shoot pocket cameras, or those built into mobile phones). These images are viewed and circulated in multiple ways: e-mail exchanges, virtual photo albums, picture messages, or even high-quality prints that may return to the old picture frame on the living-room dresser. At first glance, the use of digital photography by the disciplinary institutions seeking to adapt to the new logic of control has not changed significantly. The 3x4 portrait remains the standard for identifying the individual before any institution (at the macro level on identity cards, for example, and at the micro level in gyms or video-rental stores). Increasingly, however, these images are produced, stored and consulted only digitally. What might seem a mere financial saving relative to the previous technology in fact yields a new strategy of power. The use of digital image banks creates a whole range of new possibilities for investigation and control. One example is the identification of burned corpses (a way criminals sought to hide evidence) through computer comparison of images of various kinds and origins: photographs taken by friends and relatives, X-rays from the family doctor's files, dental prostheses, and so on. Another example of how digital technology can refine techniques of control and surveillance is the PhotoComposer Plus program, created by professor Isnard Martins for producing composite sketches.
Using a database of thousands of digital photographs, the program lets one create a digital portrait from decomposed facial features (the list of templates includes: eyes, nose, mouth, chin, head, hair, moustache, goatee, eyebrows, bags under the eyes, forehead creases, facial creases and glasses). Even without specific training, anyone can produce a composite sketch in a few minutes, choosing directly among the templates that seem closest to the face to be portrayed. Apparently little changes relative to traditional composite sketches, apart from saving time and money — and, of course, the high definition of the image the program creates. The web page advertising the software offers the following description: "An unprecedented technique for generating templates with the appearance of photography, it offers fidelity and ease to interviewer and interviewee in the generation of composite portraits with the PhotoComposer Plus system." The image created does indeed seem highly precise, transforming what should be a mere sketch of a criminal's face into irrefutable documentary proof. At least this is how it proclaims itself in the slogan that follows the product description on the site: "When recording your next incident, record the face of the crime as well."3 It is precisely the "appearance of photography" that makes the portrait generated by this program attractive and innovative. Automation and fidelity are the technology's great selling points, as the "before and after" advertising on the site makes clear. The comparison between the pencil-drawn composite sketch, the photograph and the computer-generated portrait exposes the discrepancy between the verisimilitude of the digital image composed of photographs and the analog image of the drawing.
Just as photography was quickly absorbed by the police institution in its early days, its digital version, with its new potentialities, has been widely deployed as a technology of surveillance and control.

Photography and aesthetics

Much has been said about the importance of photography for modern art, as in Walter Benjamin's classic essays on photography and cinema. The debate over whether photography was a legitimate medium of aesthetic production marked the first decades after the emergence of this technology. Benjamin was the first to point to a possible way out of these questions: rather than asking whether photography was art, it became necessary to think of "art as photography" (BENJAMIN, 1985, p. 104). Or, as Susan Sontag (2004, p. 164) put it decades later: "more important than the question of whether photography is an art is the fact that it announces (and creates) new ambitions for art." One of the main characteristics of modern art was precisely the feeling of a strong relation between art and life, extending aesthetic perception into fields as distinct as industrial design, politics and war. Benjamin warns of the consequences of the aestheticization of politics, which converges necessarily on war, because "only war makes it possible to mobilize all of today's technical resources while maintaining the current relations of production" (BENJAMIN, 1985, p. 195). His critique is clearly aimed at the fascist project, but it also grazes projects of modern art, such as Futurism. In his manifesto, Marinetti declares: "For twenty-seven years we Futurists have rebelled against the claim that war is anti-aesthetic [...] Accordingly we say: [...] war is beautiful" (MARINETTI apud BENJAMIN, 1985, p. 195).

Note: 3 Available at: . Accessed: 25 Jul. 2008.
Paul Virilio (1994, p. 21), in his studies of vision machines, identifies a promiscuous relation between certain audiovisual techniques and the experience of war. Military actions have always had to be organized at a distance, favoring the emergence of a delocalized language made possible by technical apparatuses such as the telegraph, photography, cinema and infography (virtual images obtained, perceived and analyzed by computer). What Virilio points to, in reality, is the hybrid nature of these techniques. The invention of photography was only possible thanks to the important artistic inheritance of lithography and the camera obscura, and to the equally important scientific contribution of astronomical optical instruments. The controversy between the documentary and the aesthetic character of photography returns whenever one of these two sides is emphasized (VIRILIO, 1994, p. 73). Digital technology was incorporated into the instruments of art from its emergence in the mid-twentieth century. The field of "art and technology" — or "media art," as Arlindo Machado prefers — consolidated itself as one of the main currents of contemporary art. Naturally, digital photography has been widely used by many artists in the most diverse experiments. What interests us here, however, is how two specific works by Brazilian artists thematize or problematize the relation between this technology and power in the society of control. Take as a first example the series Vulgo, by the artist Rosângela Rennó. For this work, the artist chose twelve images from an archive resulting from a photographic survey conducted between 1920 and 1940 in the Psychiatry and Criminology sector of the São Paulo State Penitentiary, all depicting hair whorls of inmates. Like fingerprints, these physical traits are entirely unique and served, in their origin, to identify convicts and to recognize possible fugitives.
In none of the images is the face of the photographed subject visible; most focus only on the nape and scalp of the models. The artist's digital intervention is restricted (at least apparently) to a light-red coloring added precisely at the center of each individual's hair whorl. The images resulting from this digital treatment are then enlarged to a large format (165 cm x 115 cm), acquiring a monumental dimension when exhibited in the gallery. At first apprehension, the spectator stands before enigmatic and unsettling images. They are photographs visibly displaced from their original place, carrying marks of the context they belonged to: along the top edge, some still bear the regular round holes typical of archive filing; in five cases one can make out series of letters and numbers scratched onto the negative, which appear in the margins of the printed photograph (W104 — fig. 1 — and P1337 in pen; P104, 4250 and P777 with a blade; in every case the numbers and letters appear inverted); the mold staining some images was not removed in the digital treatment; the perforated edges and scratches remain, indicating the passage of time; in one image, a stain gives the impression that the top of the inmate's head has been eaten away, leaving him incomplete. These are vestiges, marks, signs that refer to a time, to a history. Yet the spectator is not facing an official exhibition of a penitentiary museum. The subtle intervention of color, the exaggerated size of the images and the relation they establish with the other works in the gallery create an evident displacement of meaning. What is at stake is not the image's relation to its referent; this is not about the representation of real individuals who once stood before a camera.
At the same time, neither are these image-objects belonging to the scientistic discourse of criminological study. Without the history of these images being denied, a new discursive context is born in the displacement the artist produces — a slippage that does not erase the original meaning but produces a new discursive experience. Another example to be analyzed is the installation Autorretrato Falado (a "spoken self-portrait," playing on retrato falado, the Brazilian term for a police composite sketch), by Jair de Souza. In the exhibition, the interactor4 entered a booth where they were first photographed with a webcam and then composed their own composite self-portrait, with the aid of the PhotoComposer Plus software, in just fifteen minutes. Finally, self-portrait and photograph were printed and displayed on a panel assembling all the results of the experiences with the work. On the work's official page, the author explains that his inspiration came from contact with the Italian book Wanted!, on the history, technique and aesthetics of criminal photography. Jair de Souza recounts that for years this book intrigued him, until he read a newspaper article about the PhotoComposer Plus program. Only then did the creation of the installation become possible. One problem still had to be solved, however, for the intended effect to work: a greater identification between the public and the images created by the program. To resolve this impasse, the project produced an unprecedented database of a thousand faces of Brazilians (residents of the city of Rio de Janeiro). In the author's words: "This represents a genuine public donation of thousands of facial elements."5 A team of students from the National School of Fine Arts then processed these images, producing around 8,000 facial elements coded and stored by group. These images made the program far more effective at reproducing faces with the specific characteristics of the national population. After the exhibition, this image bank was made available to the Rio de Janeiro police.
The psychoanalyst and visual artist Graça Pizá (BOUSSO; PIZÁ, 2007, p. 8-9) finds one possible meaning for the installation: "the spoken self-portrait, by provoking this search for form, for the harmony of proportion of the ideal face, is teaching how to make art." The premise of this claim, however, seems mistaken to us: harmony and ideal proportions have long since ceased to be the concerns of artistic practice. For the critic Daniela Bousso (BOUSSO; PIZÁ, 2007, p. 4-5), the work puts "into play the difference between the mythical and the illusory, between what we are and what we suppose ourselves to be," because the image of the spoken self-portrait is necessarily "very different from a photographic snapshot or a candid shot." This would be one of the work's merits. The author starts from the idea that a photographic snapshot is in fact capable of representing "what we are," equivalent to a candid shot — forgetting that photography itself can surprise us with "one of your facets that you do not yet know" (another merit of the work, for Bousso). The press in general highlighted the playful aspect of the installation (the word "game" appears in several articles about the work).6 Indeed, the success of the exhibition at the CCBB in Rio de Janeiro, where it was shown, owed much to the installation's entertainment character. The challenge of composing a self-portrait from a jigsaw of facial elements within a time limit makes the experience curious and amusing, and the comparison of the digital self-portrait with the photograph often produces a comic or embarrassing result.

Notes: 4 The term "interactor" has been used by several authors to account for the new nature of the spectator in interactive artworks. 5 Available at: . Accessed: 30 Jul. 2008.
If the distinction between entertainment and art becomes quite tenuous in the contemporary world, we might ask whether the installation has a utilitarian meaning, as many reports in the mainstream press lead us to believe. Practically every article emphasized that the database resulting from the project would later be donated to the Civil Police of Rio de Janeiro. Since the birth of Aesthetics there has been much debate over whether or not art should be interested or useful. Even if, once again, the answer is not so simple where contemporary art is concerned, it must be stressed that no work of art can be ethically neutral. Some ambiguities become evident in this case: did those who donated their image to the project do so for an artwork, or did they work for the police? Did the fine-arts students work on an aesthetic project or on a policing one? The answers to these questions are contradictory, to say the least. The fact is that the Rio de Janeiro police now have at their disposal a surveillance tool and an unprecedented image bank, and the work Autorretrato Falado has become the main advertising piece for the PhotoComposer Plus program, aimed at sales to other municipal police forces in Brazil. Whereas Rosângela Rennó's work produces a slippage in the meaning of photographs originally belonging to a disciplinary institution (the police), bringing out their aesthetic character and critiquing the modality of power in which they were originally embedded, the opposite seems to happen in Jair de Souza's work. In Autorretrato Falado, photographs first produced within an essentially aesthetic discourse are subsequently taken up and integrated into the contemporary diagram of power, without raising any question about it.

Note: 6 A press clipping on the exhibition is available at: .
References
BARTHES, R. (1984). A câmera clara. Rio de Janeiro: Nova Fronteira.
BENJAMIN, W. (1985). Obras escolhidas I: magia e técnica, arte e política. São Paulo: Brasiliense.
BENJAMIN, W. (1989). Obras escolhidas III: Charles Baudelaire: um lírico no auge do capitalismo. São Paulo: Brasiliense.
BOURDIEU, P.; BOLTANSKI, L. (1965). Un art moyen: essais sur les usages sociaux de la photographie. Paris: Minuit.
BOUSSO, D.; PIZÁ, G. (2007). Autorretrato falado. Rio de Janeiro: Centro Cultural do Banco do Brasil.
DELEUZE, G. (1992). Conversações. São Paulo: Ed. 34.
DELEUZE, G. (2005). Foucault. São Paulo: Brasiliense.
DUBOIS, P. (1993). O ato fotográfico. Campinas: Papirus.
FATORELLI, A. (2003). Fotografia e viagem: entre a natureza e o artifício. Rio de Janeiro: Relume Dumará.
FOUCAULT, M. (1999). Vigiar e punir. Petrópolis: Vozes.
KRAUSS, R. (1990). Lo fotográfico: por una teoría de los desplazamientos. Barcelona: Gustavo Gili.
MACHADO, A. (2007). Arte e mídia. Rio de Janeiro: Jorge Zahar.
ROUILLÉ, A. (2005). La photographie: entre document et art contemporain. Paris: Gallimard.
SONTAG, S. (2004). Sobre fotografia. São Paulo: Companhia das Letras.
TAGG, J. (2005). El peso de la representación. Barcelona: Editorial Gustavo Gili.
VIRILIO, P. (1994). A máquina de visão: do fotograma à videografia, holografia e infografia (computação eletrônica): a humanidade na "era da lógica paradoxal". Rio de Janeiro: José Olympio.

NINA VELASCO is an adjunct professor in the Graduate Program in Communication at the Universidade Federal de Pernambuco and holds a doctorate in Aesthetics and Technology of Communication from the Universidade Federal do Rio de Janeiro. nina_velasco@yahoo.com.br
Article received in September 2008 and approved in November 2008.
ORAL PRESENTATION — Open Access

Technical error of vertebral rotation measurement directly on the computer screen, according to the Raimondi method
Manuel D Rigo*, Monica Villagrasa
From the 7th International Conference on Conservative Management of Spinal Deformities, Montreal, Canada, 20-22 May 2010

Introduction
The purpose of the study was to check the intrarater reliability of the Raimondi system when measuring vertebral rotation directly on a computer screen.

Background
The measurement of vertebral axial rotation according to Perdriolle and to Raimondi is reliable on standard films, where apical vertebrae appear at a proper size for the scale of both systems.
Current digital radiographs are delivered in CD format (viewed on screen) or in film format, usually at a size unsuited to the two methods mentioned above. No system has been described for rapid assessment of vertebral rotation directly from the screen. The Raimondi system, which requires no reference marks, could be considered a proper tool for that purpose. The size of the apical vertebra can be increased as much as necessary when using the CD format, as well as digital photography; however, this could be considered, together with other factors, a source of error.

Methods
Vertebral rotation was measured twice, according to Raimondi, by the same observer on 45 apical vertebrae (thoracic, thoracolumbar and lumbar) from a set of 27 radiographs (12 CDs and 15 digital photos), directly on the computer screen.

Results
Intrarater reliability was r = .968 (r = .991 on standard films). The mean intrarater error was 2.4 degrees (0.4 degrees on standard films).

Discussion
No rapid and practical way to measure vertebral rotation has been described for assessing scoliosis directly from CDs. On the other hand, when patients provide films from digital radiographs, such films tend to be so small that the transverse diameter of the apical vertebra is less than 20 mm; thus neither the Perdriolle method nor the Raimondi method is a useful tool for measuring vertebral rotation on those films. An alternative is to use a digital photo of the film, in which the size of the apical vertebra can be adjusted with any of the common photo viewers available on PCs. The Raimondi method requires no reference marks and showed good intrarater reliability as well as a low mean measurement error when used on standard radiographs. For this reason it was considered the best candidate method for direct measurement of vertebral rotation on the computer screen from both CDs and digital photos.
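The resizing step described above amounts to a simple magnification calculation: the image must be zoomed until the apical vertebra's transverse diameter reaches the minimum size usable by the rotation scales. A minimal sketch (the 20 mm threshold is taken from the Discussion; the measured value is hypothetical):

```python
def zoom_factor(measured_mm: float, target_mm: float = 20.0) -> float:
    """Magnification needed so the apical vertebra's transverse diameter
    on screen reaches the minimum size usable by the Perdriolle/Raimondi
    scales (about 20 mm, per the text)."""
    if measured_mm <= 0:
        raise ValueError("measured diameter must be positive")
    return target_mm / measured_mm

# e.g. a vertebra measuring 12 mm on screen needs roughly 1.67x magnification
print(round(zoom_factor(12.0), 2))
```

Any common photo viewer's zoom control can then be set to (at least) this factor before applying the Raimondi template.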
Although intrarater reliability is still good, the results of this study do not reproduce those of previous studies measuring on classical radiographs, suggesting that errors could come from different sources: the screen itself, or the variability introduced by changing the vertebral size to reach a proportion suited to this particular method.

Conclusions
The Raimondi method is a useful and practical tool for the measurement of vertebral rotation directly on a computer screen in a busy clinical setting; however, due to the relatively high mean intrarater error found in the present study, it should be used cautiously, even by an experienced clinician, when making decisions.

Published: 10 September 2010. doi:10.1186/1748-7161-5-S1-O14
Cite this article as: Rigo and Villagrasa: Technical error of vertebral rotation measurement directly on the computer screen, according to Raimondi method. Scoliosis 2010, 5(Suppl 1):O14.
Author affiliation: Institut Elena Salvá, Barcelona, Spain. © 2010 Rigo and Villagrasa; licensee BioMed Central Ltd.
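The intrarater statistics reported above — the correlation between two measuring sessions and the mean measurement error — can be computed as follows. The readings below are hypothetical, for illustration only; the abstract does not publish its raw data:

```python
from math import sqrt

def pearson_r(a, b):
    """Pearson correlation between two sessions of measurements."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / sqrt(va * vb)

def mean_abs_error(a, b):
    """Mean absolute difference between paired readings, in degrees."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

# hypothetical repeated rotation readings (degrees) for 6 apical vertebrae
session1 = [5, 10, 15, 20, 25, 30]
session2 = [7, 9, 17, 21, 23, 32]
print(round(pearson_r(session1, session2), 3), round(mean_abs_error(session1, session2), 2))
```

A high r with a non-trivial mean absolute error — the pattern reported in the abstract (r = .968, 2.4 degrees) — is exactly why the authors advise caution: correlation alone does not bound the per-measurement error.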
Comparison of Healing Effect of Aloe Vera Extract and Silver Sulfadiazine in Burn Injuries in Experimental Rat Model
Mohammad Reza Akhoondinasab (1*), Motahhare Akhoondinasab (2), Mohsen Saberi (3)
World J Plast Surg 2014;3(1):29-34 (www.wjps.ir)

ABSTRACT
BACKGROUND: Wound healing is widely discussed in the medical literature. This study compared the healing effect of aloe vera extract and silver sulfadiazine in burn injuries in an experimental rat model.
METHODS: Sixteen rats were randomly assigned to one of two groups of 8 rats each. A deep second-degree burn on the lower back and a third-degree burn on the upper back of each rat were created with a standard burning procedure. Burns were dressed daily with aloe vera extract in group 2 and silver sulfadiazine in group 1. Response to treatment was assessed by digital photography during treatment until day 32. Histological parameters (PMN, epithelialization, fibrosis and angiogenesis) were assessed after biopsy of the scar at the end of the study.
RESULTS: Wound healing was more visible in the aloe vera group, and healing was also faster in the aloe vera group than in the silver sulfadiazine group.
CONCLUSIONS: Based on our findings, aloe vera can be a therapy of choice for burn injuries.
KEYWORDS: Aloe vera; Silver sulfadiazine; Burn; Rat
Please cite this paper as: Akhoondinasab MR, Akhoondinasab M, Saberi M.
Comparison of Healing Effect of Aloe Vera Extract and Silver Sulfadiazine in Burn Injuries in Experimental Rat Model. World J Plast Surg 2014;3(1):29-34.

Original Article. Affiliations: 1. Faculty of Plastic and Reconstructive Surgery, Burn Research Center, Iran University of Medical Sciences, Tehran, Iran; 2. Faculty of Pharmacy, Tehran University of Medical Sciences, Tehran, Iran; 3. Medicine, Quran and Hadith Research Center, Department of Community Medicine, Faculty of Medicine, Baqiyatallah University of Medical Sciences, Tehran, Iran. *Corresponding author: Mohammad Reza Akhoondinasab, MD; Faculty of Plastic and Reconstructive Surgery, Burn Research Center, Motahhari Hospital, Iran University of Medical Sciences, Tehran, Iran. E-mail: akhoondinasab@yahoo.com. Received: Aug. 14, 2013; Accepted: Nov. 8, 2013.

INTRODUCTION
Wound healing is widely discussed in the medical literature. Much research has been carried out to develop more sophisticated dressings that expedite the healing process and diminish the bacterial burden in wounds.1 Traditional forms of medicine, especially the herbal products deployed for centuries in Africa and Asia, are under scientific investigation for their attributes in the treatment of wounds. Avicenna, the Persian physician and scholar (980-1037 AD), recommended medicinal plants for the dressing of wounds in his famous book, the Canon of Medicine.1
Some clinical effects of kiwi fruit ingredients such as ascorbic acid (as a scavenger), antibacterial agents, and actinidin (a potent protein-dissolving enzyme) have been reported in the literature.8 Burn wound healing is one of major indications of aloe vera gel use in many countries.9,10 Clinical data on the treatment of psoriasis and Lichen ruber planus have confirmed long lasting ameliorative effects of BAC-3 (existing with high concentration in dirhamnolipid) when compared to conventional therapy using corticosteroids.11 For many years the effect of herbal medicine on burn wound has been noted. Herbal products seem to possess moderate efficacy with no or less toxicity and are less expensive as compared with synthetic drugs. Many plants and plants- derived products have been shown to possess potent wound-healing activity.12 Spathodea campanulata Beauv. (Bignoniaceae) is widely distributed through Africa and found in particular in Cameroon and Senegal. It is used in traditional herbal medicine for the treatment of ulcers, filaria, gonorrhea, diarrhea and fever. S. campanulata was also known in Cameroon traditional medicine to have a healing activity in burn wounds.8 Combudoron, composed of extracts from arnica and stinging nettle is used for the treatment of partial thickness burns and insect bites in Europe. Nettle root extracts contain at least 18 phenolic compounds and 8 lignans.13 Healing of burn is still a challenge in modern medicine and there are a few drugs capable of accelerating wound healing. 
Plants, as an alternative, are rich sources to survey.14 Traditionally, fresh leaves or a decoction of Chromolaena odorata have been used throughout Vietnam for many years, as well as in other tropical countries, for the treatment of leech bites, soft-tissue wounds, burn wounds, skin infection and dento-alveolitis.15 Combudoron also seems to have positive effects on the healing of grade 2 laser-induced burns, which deserve further investigation.16 Swift eschar separation, with a resulting wound bed that appeared pink and viable, suggests that kiwifruit may help in the management of patients with deep burns.17,18 This study compared the healing effect of aloe vera extract and silver sulfadiazine in burn injuries in an experimental rat model.

MATERIALS AND METHODS
In a randomized trial, 16 Wistar-albino male rats (average weight: 300-350 g, average age: 3-4 months) were randomly divided into 2 equal groups (group 1: topical silver sulfadiazine; group 2: topical aloe vera). All rats were kept in a sheltered environment (temperature: 20-25°C; humidity: 65-75%) under the supervision of a veterinarian. During the experiment, the rats were fed standard rat chow and tap water, and each rat was housed in a separate cage. All rats were handled according to the ethical principles for animal experiments of the international council for animal protection, and all experimental procedures were approved by the research ethics committee of the university. Rats were anesthetized with xylazine (10 mg/kg) and ketamine hydrochloride injection (50-100 mg/kg intramuscularly) to increase the depth of anesthesia. The skin on the dorsum was shaved with an electric clipper.
A deep second-degree burn wound was created with a hot plate (size: 4×2 cm) at an identical temperature (warmed for 5 minutes in boiling water and placed on the skin for 10 seconds with equal pressure) over the lower back, and a third-degree burn was created over the upper back by 30 seconds of pressure (Figure 1).1 The surface of each wound was then covered with the corresponding ointment, and no dressing was applied. The ointments were applied daily. For assessment of wound healing, digital photographs were taken every 4 days under general anesthesia. The photographs were then assessed with the ImageJ software and the percentage of healing was determined. Histologic parameters (PMN, epithelialization, fibrosis and angiogenesis) were assessed on biopsy specimens of the wounds at the end of the study. Each specimen was obtained under general anesthesia by resection of the healed area and the surrounding normal skin. [Akhoondinasab et al., www.wjps.ir, Vol.3/No.1/January 2014] Histological criteria were defined as follows: fibrosis (collagen bundles): normal bundles: 2, disorganized/edematous: 1, amorphous: 0; PMN (per 40x field): 0-10: 2, 11-40: 1, >40: 0; angiogenesis was graded in three degrees: mild, moderate and severe; epithelialization was expressed as positive or negative.
Fig. 1: 3rd degree burn over the upper back and 2nd degree burn over the lower back in the 2nd session.

RESULTS
This was an experimental study using male Sprague–Dawley rats. We investigated the healing properties of aloe vera leaf extract. One of the animals died in the silver sulfadiazine group. In 3rd degree burns, wound healing was significantly greater in the aloe vera group (Figures 2 and 3), but for second-degree burns the difference was not as significant as in third-degree burns. Pathological assessment of the specimens encompassed fibrosis, angiogenesis, inflammation and epithelialization. Epithelialization was more evident in the aloe vera group.
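The percentage-of-healing calculation and the histological scoring scheme above can be expressed as small helper functions. This is an illustrative sketch, not the authors' code: the percent-healing formula (area reduction relative to the initial wound area) is an assumption about how the ImageJ area measurements were compared, and the function names are hypothetical; the score thresholds follow the text.

```python
def percent_healing(initial_area_cm2, current_area_cm2):
    """Healing as the percent reduction of wound area relative to the
    initial area (assumed interpretation of the ImageJ assessment)."""
    if initial_area_cm2 <= 0:
        raise ValueError("initial wound area must be positive")
    return 100.0 * (initial_area_cm2 - current_area_cm2) / initial_area_cm2

def fibrosis_score(bundle_state):
    """Collagen bundles: normal -> 2, disorganized/edematous -> 1, amorphous -> 0."""
    return {"normal": 2, "disorganized": 1, "amorphous": 0}[bundle_state]

def pmn_score(count_per_40x_field):
    """PMN per 40x field: 0-10 -> 2, 11-40 -> 1, >40 -> 0."""
    if count_per_40x_field <= 10:
        return 2
    return 1 if count_per_40x_field <= 40 else 0
```

For the 4×2 cm burn plate used here, a wound that shrinks from 8.0 to 2.0 cm2 would correspond to 75% healing under this definition.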
In second-degree wounds, except in the 2nd, 8th and 11th sessions, the difference between the groups was significant (P<0.005), and the best results belonged to the aloe vera group. In third-degree burns, except in the 2nd, 4th and 11th sessions, the difference between the groups was significant (P<0.005), and aloe vera had the greater healing effect.

DISCUSSION
The skin is one of the largest organs of the body and performs numerous vital functions, including fluid homeostasis, thermoregulation, and immunologic, neurosensory and metabolic functions. The skin also provides primary protection against infection by acting as a physical barrier.8 When this barrier is damaged, pathogens have a direct route to infiltrate the body, potentially resulting in infection. The sequence of events that repairs the damage is categorized into three overlapping phases: inflammation, proliferation and tissue remodeling. The normal healing process can be impeded at any step along its path by a variety of factors that can contribute to impaired healing. Impaired wound healing may be a consequence of pathologic states associated with diabetes, immune disorders, ischemia, venous stasis, and injuries such as burns, frostbite and gunshot wounds.8
Fig. 2: Comparison of healing of third-degree burns.
The final step of the proliferative phase is epithelialization. It involves migration, proliferation and differentiation of epithelial cells from the wound edges to resurface the defect. In open full-thickness burn wounds, epithelialization is delayed until a bed of granulation tissue is established to allow migration of epithelial cells.12 Several studies have shown that burn infection is the main cause of mortality in patients with extensive burns.
Therefore, many researchers have tried to develop appropriate treatment methods to reduce the risk of wound infections and to shorten the period of treatment of patients with burn wounds.1 Some of these treatments involve topical antimicrobial agents, which effectively reduce the mortality rate of burns.4-7 One of these topical antimicrobial ointments is 1% silver sulfadiazine; its advantages, such as easy and convenient use, painless application, low toxicity and sensitization, and antibacterial effect, have made it known as the gold standard of topical antimicrobial drugs for patients with burns and the most widely used medicine in the treatment of burn wounds around the world.3,9 Burn management entails a significant duration of hospital stay, expensive medication, multiple operative procedures and a prolonged period of rehabilitation. Topical antibacterial agents and disinfectants are good at protecting against infection, but the occurrence of allergic reactions and skin irritation to these agents reduces the rate of skin regeneration and increases the recovery time.8 The ultimate burn wound dressing would be inexpensive and comfortable; it would not only allow the burn to heal rapidly, but would also clean the wound, debride fragments of separated eschar and devitalized tissue, and have antibacterial activity. A wide variety of substances have been reported to be useful in the treatment of burn wounds.4-9 Healing of burns is still a challenge in modern medicine, there are few drugs capable of accelerating wound healing, and as an alternative, plants are rich sources to survey.4-7,15 For many years, the effect of herbal medicines on burn wounds has been noted. Herbal products seem to possess moderate efficacy with little or no toxicity and are less expensive than synthetic drugs.
Many plants and plant-derived products have been shown to possess potent wound-healing activity.4-8 Eupolin ointment, a formulation prepared from the aqueous extract of the leaves of C. odorata (formerly Eupatorium odoratum), has been licensed for clinical use in Vietnam.17 Most of these medicines are mixtures of several plants, but none of these traditional ointments have been scientifically studied. In our study, aloe vera extract was compared with silver sulfadiazine as the standard treatment for burn wounds in rats. The actual mechanism of the improved healing is still unclear.
Fig. 3: Comparison of healing in second degree burns.
The probable mechanisms are provision of the material necessary for healing, increased blood flow to the burn area, a decreased inflammatory response, and a decreased rate of infection. The healing time of grade 3 burns in the aloe vera group was significantly shorter than in the silver sulfadiazine group. This effect might be due to the major role of wound contraction in third-degree burn wounds in the skin of rats. Wound healing in rat skin does not perfectly mimic human skin wound healing because the skin morphology is different (rats are described as loose-skinned animals), and "loose" skin allows wound contraction to play a significant role in closing rat skin wounds. Consequently, wound contraction is usually more rapid than epithelialization.12 Humans have tight skin, and this difference makes the comparison with loose-skinned animals more difficult. Although there are inherent drawbacks in using rats for comparisons with human skin wound healing, there are also advantages in the use of rats as a research model, such as the availability of a broad knowledge base on rat wound healing gained from years of previous research.12 Aloe vera (Aloe vera Linn., synonym: Aloe barbadensis Mill.) is in the family Liliaceae; it is a tropical plant easily grown in hot and dry climates, including Thailand.
Numerous cosmetic and medicinal products are made from the mucilaginous tissue, called aloe vera gel, located in the center of the aloe vera leaf. Aloe vera gel has been used for many indications since the Roman era or even long before. Burn wound healing is one of the major indications of aloe vera gel use in many countries.10 A recent review of four clinical trials investigating the effect of aloe vera on burn wounds found that aloe vera significantly shortened the wound healing time (by approximately eight days) compared to controls. The authors concluded that it may be an effective treatment for first- and second-degree burns.14 The results of the present study provide a rationale for starting human studies. We hope a new burn ointment based on herbal medicines can be introduced, with fewer adverse effects and a shortened healing period, thus decreasing the rate of hypertrophic scarring. Our findings support the recommendation of aloe vera for the healing of burn injuries as an inexpensive and readily available herbal medicine.

ACKNOWLEDGEMENT
The authors appreciate the kind support of Iran University of Medical Sciences.

CONFLICT OF INTEREST
The authors declare no conflict of interest.

REFERENCES
1 Manafi A, Kohanteb J, Mehrabani D, Japoni A, Amini M, Naghmachi M, Zaghi AH, Khalili N. Active immunization using exotoxin A confers protection against Pseudomonas aeruginosa infection in a mouse burn model. BMC Microbiol 2009;9:23.
2 Daryabeigi R, Heidari M, Hosseini SA, Omranifar M. Comparison of healing time of the 2 degree burn wounds with two dressing methods of fundermol herbal ointment and 1% silver sulfadiazine cream. Iran J Nurs Midwifery Res 2010;15:97-101.
3 Kimura Y, Sumiyoshi M, Kawahira K, Sakanaka M. Effects of ginseng saponins isolated from Red Ginseng roots on burn wound healing in mice. Br J Pharmacol 2006;148:860-70.
4 Hazrati M, Mehrabani D, Japoni A, Montasery H, Azarpira N, Hamidian-shirazi AR, Tanideh N.
Effect of Honey on Healing of Pseudomonas aeruginosa Infected Burn Wounds in Rat. J Appl Anim Res 2010;37:161-165.
5 Amini M, Kherad M, Mehrabani D, Azarpira N, Panjehshahin MR, Tanideh N. Effect of Plantago major on Burn Wound Healing in Rat. J Appl Anim Res 2010;37:53-56.
6 Hosseini SV, Niknahad H, Fakhar N, Rezaianzadeh A, Mehrabani D. The healing effect of honey, putty, vitriol and olive oil in Pseudomonas aeruginosa infected burns in an experimental rat model. Asian J Anim Vet Adv 2011;6:572-579.
7 Hosseini SV, Tanideh N, Kohanteb J, Ghodrati Z, Mehrabani D, Yarmohammadi H. Comparison between Alpha and silver sulfadiazine ointments in treatment of Pseudomonas infections in 3rd degree burns. Int J Surg 2007;5:23-6.
8 Upadhyay NK, Kumar R, Siddiqui MS, Gupta A. Mechanism of Wound-Healing Activity of Hippophae rhamnoides L. Leaf Extract in Experimental Burns. Evid Based Complement Alternat Med 2011;2011:659705.
9 Mohajeri G, Masoudpour H, Heidarpour M, Khademi EF, Ghafghazi S, Adibi S, Akbari M. The effect of dressing with fresh kiwifruit on burn wound healing. Surgery 2010;148:963-8.
10 Maenthaisong R, Chaiyakunapruk N, Niruntraporn S, Kongkaew C. The efficacy of aloe vera used for burn wound healing: a systematic review. Burns 2007;33:713-18.
11 Cuttle L, Kempf M, Kravchuk O, George N, Liu PY, Chang HE, Mill J, Wang XQ, Kimble RM. The efficacy of Aloe vera, tea tree oil and saliva as first aid treatment for partial thickness burn injuries. Burns 2008;34:1176-82.
12 Stipcevic T. Enhanced healing of full-thickness burn wounds using di-rhamnolipid. Burns 2006;32:24-32.
13 Sy GY, Nongonierma RB, Ngewou PW, Mengata DE, Dieye AM, Cisse A, Faye B. [Healing activity of methanolic extract of the barks of Spathodea campanulata Beauv (Bignoniaceae) in rat experimental burn model]. Dakar Med 2005;50:77-81.
14 Chrubasik JE, Roufogalis BD, Wagner H, Chrubasik S.
A comprehensive review on the stinging nettle effect and efficacy profiles. Part II: urticae radix. Phytomedicine 2007;14:568-79.
15 Kahkeshani N, Farahanikia B, Mahdaviani P, Abdolghaffari A, Hasanzadeh Gh, Abdollahi M, Khanavi M. Antioxidant and burn healing potential of Galium odoratum extracts. Res Pharm Sci 2013;8:197-203.
16 Thang PT, Patrick S, Teik LS, Yung CS. Antioxidant effects of the extracts from the leaves of Chromolaena odorata on human dermal fibroblasts and epidermal keratinocytes against hydrogen peroxide and hypoxanthine-xanthine oxidase induced damage. Burns 2001;27:319-27.
17 Huber R, Bross F, Schempp C, Gründemann C. Arnica and stinging nettle for treating burns: a self-experiment. Complement Ther Med 2011;19:276-80.
18 Hafezi F, Rad HE, Naghibzadeh B, Nouhi A, Naghibzadeh G. Actinidia deliciosa (kiwifruit), a new drug for enzymatic debridement of acute burn wounds. Burns 2010;36:352-5.
work_nfslpsv45zbtdceyjke5v4tglq ---- Journal of Digital Contents Society Vol. 15 No. 6 Dec. 2014 (pp. 745-749) http://dx.doi.org/10.9728/dcs.2014.15.6.745

The Trivium of the Digital Media Art
Eui Na Kim*, Tae Eun Kim**

Abstract
Mankind has long made images. From cave paintings in the Old Stone Age, when the dyes were red clay, charcoal, or red iron ore, to the modern digital images made with 3D tools like Maya, images have been created through various methods. The basis of society's classification is the development of human beings and technologies. When we are familiar with three elements (technology, human, and art), we are able to create an image suitable for the trend of the times. As a consequence, we borrowed the trivium that had been taught by monks in the Middle Ages and set our subject as the trivium of the digital age. In this paper, we put technology, human, and art into actual animation to check its theoretical discussion value.
Keywords: digital imaging, technology, human beings, art, trend of the times, animation

1. Introduction
An image can convey the thoughts of its maker while also embodying the state of technology of the time.
※ Corresponding Author: Eui Na Kim. Received: September 30, 2014; Revised: December 30, 2014; Accepted: December 31, 2014. * Namseoul University, Department of Multimedia, Tel: +82-10-8776-6085, Fax: +82-41-581-2321, email: eu15196@hotmail.com. ** Namseoul University, Department of Multimedia.
Artists often apply the latest technologies to create images that reflect current viewpoints and technical trends. To begin with, we are going to associate the past image with the trend of the times and apply the pattern to the present age to figure out what kind of image the present age is seeking. The first images created by humans are thought to be mural paintings. The mural paintings of Lascaux Cave (BC 15,000 to BC 13,000) in the southern part of France are credited with triggering the primitive image era. At that time, our ancestors supposed that "the more mural paintings we draw, the more likely we can have the animal alive." The paintings functioned not for visual pleasure but as magical and incantatory elements and as a means to achieve wealth. Man has evolved to call these images art. Drawings and paintings are now largely enjoyed as a form of expression in Western culture. Before the Middle Ages, people placed a high value on drawings that exactly reproduced a scene (regeneration). Such appreciation for the replication of nature in the form of images regressed during the Middle Ages, when Christianity was the dominant religion. It was believed that God said, "All is not as it seems." During the Renaissance movement, however, the development of perspective and chiaroscuro, as well as a focus on human form and anatomy, reignited interest in realism in fine art, and "regeneration" retook its position as the main topic of the art world. The invention of photography at the turn of the 19th century changed the art world.
Paintings were not as accurate as photos in terms of shape or color regeneration and, as such, paintings were deprived of their identity of "regeneration". This shock made the art world turn its back on realism. Painters started to deconstruct what they saw. They dismantled the world into basic shapes (Cubism), liberated colors (Fauvism), and even distorted space. It was through deconstructionism that fine art regained its identity. [1] Photography began to reach into people's daily lives. At first, it innovated art itself, to say nothing of the landscape and portrait genres that had traditionally been performed by painting. As photography techniques improved, the "aura" in art slowly collapsed. Aura refers to the atmosphere emanated by an original artwork that confers its uniqueness in time and space, often associated with religious images and objects. The ease of reproducing images en masse via photography democratized fine art, especially religious objects and images. The camera, the successor of painting, set about recreating all existing content by way of a new medium, called the movie. All content, including the Bible, novels, ghost stories, cartoons, folktales, fables, and plays, was recreated as movies, a vehicle which reached its heyday in the 20th century. [2] Photography and film went through another transformation when digitalization techniques were developed at the turn of the 21st century. Although analog film can capture near-perfect "regenerations", it is hard to handle. Digital photography and film can be artificially manipulated. In fact, the period has come when we are able to express all the images in our imagination through digital film. Images evolve with advancements in technology. In primitive times, a magician drew mural paintings; in the Middle Ages, a painter drew paintings; and in the Industrial Revolution period, a photographer shot photos. Who, then, will make images in the digital age?
That person is a programmer. He or she is not confined to simply programming in computer languages but can merge technology with art by creating digital images. Technology, art, and the human are termed the Trivium of the Digital Age. [3] In the Middle Ages, the trivium referred to subjects in medieval universities: dialectic, grammar, and rhetoric. Marshall McLuhan, a media philosopher, said that the trivium is a set of subjects that changes with every age it represents. The goal of this study is to determine whether technology, art, and the human represent the current society and, if so, whether one can create images which make people feel good.

2. Discussion
2.1 Technology
The core technologies in the digital age are the pixel and the vector. A man must use pixels and vectors to express what he imagines in an image. Glue and scissors were used for editing analog objects. They were extremely coarse units when compared with the exquisitely fine particles called atoms, the micro-units of the analog period. As a result, the viewer notices the manipulation techniques while watching the screen, thereby destroying the continuity of the object viewed. By contrast, the unit of the image in the digital age is the pixel. The pixel allows a man to express whatever he imagines. Therefore, he must be familiar with the smallest digital units in order to program content. As a way of using the pixel, we used the software MAYA, a 3D tool. Using MAYA, we modeled and rigged Alien Pupu, whom we imagined.
(Figure 1) Rigged Character
The alien in Figure 1 is Pupu, a rigged character. Pupu moves in combination with its skeleton, but rigging is necessary to give shape to the movement, while the controllers at each part are used for animation production. Prior to skeleton fabrication, we analyzed the shape, structure, and specific movement of the character to determine how the skeleton would be structured and how it might be controlled.
We motion-tested our design to compose a precise skeleton, along with rigging and controlling on the basis of a more realistic movement analysis, in order to express the desired movement. First, we made joints and bones using the minimum units of the skeleton. Second, we bound them so that the skeleton could transform the object. We put the character object and the skeleton together using smooth binding, and we modified the weights using a paint skin weights tool so that the object could be naturally transformed. Here, the Smooth Bind function was adopted to allow smooth transformation of the object by the skeleton. Blend Shape was applied to the character so that personality and emotion could be naturally expressed and transmitted. Meanwhile, a Graphical User Interface (GUI) was composed for look control. The GUI improves convenience and enables a delicate look to be displayed. Finally, the main controller was made to control the movement, rotation, and size of the character.

2.2 Human Being
Humans are able to pass judgment quickly. As such, if a product is to appeal to people, it must elicit a good feeling in a short amount of time. Accordingly, we must figure out what characteristics people unconsciously like or hate and apply those characteristics to the image as early as possible. In addition, we did not want to simply rely on intuition but aimed to study how well the personal mind and intellectual system can be handled.
(Figure 2) Uncanny Valley
The robot engineer Mori Masahiro believes that robots should be made in human form to elicit joy, happiness, or a smile in the simplest manner. People tend to feel a sense of kinship and comfort toward an object if they feel that it is similar to them. However, when the object becomes too similar to them, the degree of good feeling dramatically drops, to a point called the Uncanny Valley.
Of course, the degree of good feeling surges again past the Uncanny Valley, but such a point is difficult to achieve using robotics. Judging from this, it may be effective to design on the 1st peak just before the Uncanny Valley. This concept may be applied to animation. We created a model on the 1st peak rather than the 2nd peak past the Uncanny Valley. [4]

2.3 Sympathy: Theoretical Discussion and Application
Social interactions and the ability to sympathize are important human characteristics. People often seek to confirm the good feelings of others. Furthermore, they are able to enhance good feelings by receiving the same information and sympathizing with each other. A good example of this is the incident called the "Truce on Christmas Eve", which occurred at the beginning of World War I, on December 24, 1914, in Flanders. German and British soldiers established an unofficial truce to collect and bury those killed in action. A number of soldiers on opposite sides of the conflict walked toward each other, shook hands, shared cigarettes and biscuits, and laughed at the absurd war. This incident teaches us about the power of sympathy. Tens of thousands of people put aside their nation, ideology, or class and made peace during the terrible war out of sympathy, even though the peace was sustained for only a couple of hours.

3. Art
Technology has its limits, including the limit of the pixel. Other factors contribute to these limitations, including the limits of the technique of our team members, our 3D modeling devices, and our storytelling. Art can overcome such limits. Early fine art had been monopolized by noblemen. However, after the Industrial Revolution, technological advances allowed replication and duplication of fine art and images, which popularized art. Considering this, technology makes art plentiful while art mitigates the limits of technology.
4.
Conclusion
The pixel is important, for it is the basic unit that allows artists to create an image. Its flexibility enables us to venture into new artistic realms. Although modern techniques allow artists to create near-perfect replica images, the new technologies also enable artists to bring their imaginations to life through image manipulation. Because digital images can be rapidly replicated and sold to the masses, it is important to understand how the public recognizes and understands superficiality and to apply that knowledge to the image. Finally, we must make use of design to engage the customer on an emotional level. In conclusion, images of the future cannot simply refer to art or to technology but must refer to a combination of technology, humanity, and art to be an appealing and desired product.
(Figure 3) Disappearing Pupu

References
[1] Jin, Jung Kwon, "The Story of Western Art History," Humanist Publication, pp.36-45, July 2011.
[2] Beiner, Ronald, "Walter Benjamin's Philosophy of History," Political Theory, Vol. 12, No. 4, Aug. 1984.
[3] Flusser, Vilem, "Lob der Oberflächlichkeit," Bollmann VLG, Köln, Sep. 1998.
[4] Mori, Masahiro, "Bukimi no Tani Genshō: 不気味の谷現象," Kosei Publishing Company, pp.59-68, December 1989.
[5] Bussi, C. A., "Virtual Reality-based investigation of four cognitive theories for navigation," Unpublished doctoral dissertation, Virginia Polytechnic Institute and State University, Blacksburg, VA, 1995.
[6] Ko, Eung-Nam, Hong, Sung-Ryong, "A Web Based Error Manager for Societal Security Service," Korea Digital Contents of Society, Vol. 15, No. 1, 2014.
[7] Kulkarni, S.D.; Minor, M.A.; Deaver, M.W.; Pardyjak, E.R.; Hollerbach, J.M., "Design, Sensing, and Control of a Scaled Wind Tunnel for Atmospheric Display," Mechatronics, IEEE/AS Journal of Digital Contents Society on, vol. 15, no. 4, pp. 635-645, Nov. 2014.
Eui Na Kim
2003: Bachelor's degree, Art Center College of Design, USA. 2006: Master's degree, Art Center College of Design, USA. 2006-2009: Sony Pictures Entertainment, USA. 2013-present: Assistant Professor, Department of Multimedia, Namseoul University. Research interests: multimedia, film editing, graphic design, photography, video, animation, etc.
Tae Eun Kim
1999: B.S., Department of Electrical Engineering, Chung-Ang University. 2005: M.S., Department of Electronic Engineering, Chung-Ang University. 1997: Ph.D., Department of Electronic Engineering, Chung-Ang University. 1995: Silver Prize, Samsung Electronics HumanTech Paper Award. 1997: three image-processing patents granted. 1993-1996: participating researcher, Korea Foundation. 1997-present: Professor, Department of Multimedia, Namseoul University. Research interests: multimedia systems, image recognition, augmented reality, web 3D processing, etc.

work_nfw762pwkrhk3a3mfhrpxoal4q ---- Digital Cover Photography for Estimating Leaf Area Index (LAI) in Apple Trees Using a Variable Light Extinction Coefficient
Sensors 2015, 15, 2860-2872; doi:10.3390/s150202860; ISSN 1424-8220; www.mdpi.com/journal/sensors
Article
Digital Cover Photography for Estimating Leaf Area Index (LAI) in Apple Trees Using a Variable Light Extinction Coefficient
Carlos Poblete-Echeverría 1,*, Sigfredo Fuentes 2, Samuel Ortega-Farias 1, Jaime Gonzalez-Talice 3 and Jose Antonio Yuri 3
1 Research and Extension Center for Irrigation and Agroclimatology (CITRA), Universidad de Talca, Talca 3460000, Chile; E-Mail: sortega@utalca.cl
2 Faculty of Veterinary and Agricultural Sciences, University of Melbourne, Melbourne, VIC 3010, Australia; E-Mail: sigfredo.fuentes@unimelb.edu.au
3 Centro de Pomaceas, Universidad de Talca, Talca 3460000, Chile; E-Mails: jgonzalezt@utalca.cl (J.G.-T.); ayuri@utalca.cl (J.A.Y.)
* Author to whom correspondence should be addressed; E-Mail: cpoblete@utalca.cl; Tel.: +56-71-2200426.
External Editor: Gonzalo Pajares Martinsanz
Received: 7 August 2014 / Accepted: 10 December 2014 / Published: 28 January 2015
Abstract: Leaf area index (LAI) is one of the key biophysical variables required for crop modeling. Direct LAI measurements are time consuming and difficult to obtain for experimental and commercial fruit orchards. Devices used to estimate LAI have shown considerable errors when compared to ground-truth or destructive measurements, requiring tedious site-specific calibrations.
The objective of this study was to test the performance of a modified digital cover photography method to estimate LAI in apple trees using conventional digital photography and instantaneous measurements of incident radiation (Io) and transmitted radiation (I) through the canopy. The leaf area of 40 single apple trees was measured destructively to obtain the real leaf area index (LAID), which was compared with the LAI estimated by the proposed digital photography method (LAIM). Results showed that LAIM was able to estimate LAID with an error of 25% using a constant light extinction coefficient (k = 0.68). However, when k was estimated using an exponential function based on the fraction of foliage cover (ff) derived from the images, the error was reduced to 18%. Furthermore, when measurements of light intercepted by the canopy (Ic) were used as a proxy value for k, the method presented an error of only 9%. These results show that using a proxy k value, estimated from Ic, helped to increase the accuracy of LAI estimates from digital cover images for apple trees with different canopy sizes under field conditions.
Keywords: light intercepted by the canopy; gap fraction; clumping index; remote sensing

1. Introduction
Accurate modeling of the energy balance, gas exchange processes and light distribution occurring within the canopy of fruit orchards requires the characterization and assessment of canopy vigor and structure. Leaf area index (LAI) is one of the most important variables used to assess canopy structure [1]. Moreover, LAI is a key variable used by a variety of physiological and functional plant models [2] and by remote sensing models at large scales [3–5]. Direct measurements of LAI in field conditions are extremely difficult to obtain for experimental and commercial fruit orchards.
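Digital cover photography starts from an image of the canopy that is binarized into canopy and sky (gap) pixels; the fraction of foliage cover (ff) mentioned in the abstract is simply the canopy fraction of the image. The sketch below assumes the image has already been thresholded into 0/1 values; the full procedure of Fuentes et al. additionally separates large and small gaps to derive a clumping index, which is omitted here.

```python
def foliage_cover(binary_image):
    """ff from a binarized cover photo: 1 = canopy pixel, 0 = sky/gap pixel."""
    pixels = [p for row in binary_image for p in row]
    return sum(pixels) / len(pixels)

# Toy 4x4 "image" with 12 canopy pixels out of 16 -> ff = 0.75
image = [
    [1, 1, 1, 0],
    [1, 1, 1, 0],
    [1, 1, 1, 0],
    [1, 1, 1, 0],
]
ff = foliage_cover(image)   # 0.75
gap_fraction = 1.0 - ff     # 0.25
```

In practice the binarization step (separating canopy from sky in the blue channel, for instance) dominates the accuracy of ff; the arithmetic itself is trivial.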
The usual technique to determine LAI is based on the analysis of destructive samples, collecting leaves and subsequently measuring their area using leaf area meters (e.g., Li-3100C; Li-Cor, Lincoln, NE, USA) or by digitally acquiring leaf scans and image processing. These methods are destructive, labor-intensive, and can be significantly expensive when applied to large trees [6]. In general, these methods have been used to develop empirical allometric equations based on measurements of total leaf area relative to shoot length or plant area related to trunk diameter. Allometric equations are widely used to estimate LAI for different crops and to validate other indirect LAI estimation methods [7]. However, these equations are site-specific and may vary with changes in canopy size and climatic conditions [8,9]. Alternatively, indirect optical methods have been developed based on measurements of direct or diffuse light penetration through the canopy [10–13]. Different devices have been developed to estimate LAI using light transmittance measurements, such as: (I) the plant canopy analyzer (PCA) LAI-2000 (or 2200) (Licor Inc., Lincoln, NE, USA), which uses a fisheye light sensor that measures diffuse radiation simultaneously at five distinct angles [6,10,14]; (II) the DEMON (Centre for Environmental Mechanics, Canberra, Australia), which is based on measurements of direct radiation to estimate LAI [10,13]; (III) the SunScan (Delta-T Devices Ltd., Cambridge, UK) and AccuPAR (Decagon Devices, Pullman, WA, USA), which measure photosynthetically active radiation (PAR) at wavelengths between 400–700 nm [10]; and (IV) hemispherical and fisheye photography, which assess canopy structure through image analysis. The problem with the latter imagery techniques is that they require a complex image analysis procedure and specific software [2,10,15,16]. The optical methods to assess LAI have been successfully tested for different crops.
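These transmittance-based devices ultimately invert the Beer-Lambert law, LAI = -ln(I/Io)/k, where Io and I are the incident and below-canopy radiation and k is the light extinction coefficient. A minimal sketch of that inversion, using the constant k = 0.68 reported in this study's abstract (the study's refinement replaces the constant with a proxy k derived from canopy light interception):

```python
import math

def lai_from_transmittance(I, Io, k=0.68):
    """Invert the Beer-Lambert law: LAI = -ln(I/Io) / k.
    I: transmitted radiation below the canopy; Io: incident radiation
    (same units). k = 0.68 is the constant coefficient from this study."""
    if not (0 < I <= Io):
        raise ValueError("require 0 < I <= Io")
    return -math.log(I / Io) / k
```

For example, with Io = 1000 and I = 250 (25% transmittance), LAI = -ln(0.25)/0.68, roughly 2.04; the sensitivity of this estimate to k is exactly why the paper explores a variable coefficient.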
However, the direct application of these techniques requires the use of destructive measurements to calibrate the instruments for each specific condition, species and location, since they usually underestimate LAI compared to direct measurements [10,16–18]. Due to the practical constraints of the current methodologies available to estimate LAI, a simple, accurate and practical method is required to assess the parameters needed to estimate LAI, particularly under field conditions for experimental purposes. Recently, accurate and rapid estimations of LAI have been made possible through the development of a simple method that uses digital cover photography and gap fraction analysis [2,16–18]. The main objective of this study was to analyze the performance of a modified automated procedure proposed by Fuentes et al. [2,19]. This new procedure was developed to estimate LAI in apple orchards using a proxy light extinction coefficient (k) obtained from digital photography and instantaneous measurements of incident (Io) and transmitted radiation below the canopy (I) [20].

2. Materials and Methods

2.1. Experimental Sites Description

The study was conducted in the 2009–2010 agricultural season in two commercial apple orchards. The first trial (Trial 1) was carried out in an orchard planted in 2007 with apple trees cv. Cripp’s Pink/M-7, located in the Maule Region, Pelarco, Chile (35°25′ S, 71°23′ W; 189 m a.s.l.). The planting distance was 3 m (between rows) × 1.5 m (between plants) (2,222 trees·ha−1) in East-West oriented rows with a Solaxe training system. The experimental plot had 10 trees with heights ranging from 2.5 to 3.0 m. The second trial (Trial 2) was carried out with apple trees cv. Ultra Red Gala/MM 111, planted in 2003 and located in the Maule Region, San Clemente (35°30′ S, 71°28′ W; 83 m a.s.l.).
The planting distance was 4 m (between rows) × 2 m (between plants) (1,250 trees·ha−1) in East-West oriented rows with the Solaxe training system [21]. In Trial 2 the experimental plot had 18 trees with heights ranging from 3.8 to 4.0 m (cv. Ultra Red Gala1) and 12 trees with heights ranging from 2.5 to 3.0 m (cv. Ultra Red Gala2). For both experimental sites, topping was performed just above a lateral productive branch. The climate at both orchards is Mediterranean, with a mean maximum temperature of 30 °C in the warmest month (January) and a mean minimum temperature of 3.5 °C in the coldest month (July). The mean annual rainfall is 700 mm, with a dry period of six months (November to April). In this study, digital cover photography acquisition and canopy light interception measurements were carried out in February and April for apple trees cv. Ultra Red Gala and cv. Cripp’s Pink, respectively.

2.2. Destructive Estimation of Leaf Area Index (LAI)

Leaf area (LA) per plant was assessed for the 40 trees used in this study (10 trees cv. Cripp’s Pink, 18 trees cv. Ultra Red Gala1 (3.8–4.0 m tall) and 12 trees cv. Ultra Red Gala2 (2.5–3.0 m tall)). After taking digital images and light measurements from the tree canopies, the plants were completely defoliated. Leaves were stored in cooler containers and immediately taken to the laboratory for analysis. A sub-sample of 200 g of fresh leaves per tree was taken and its area was measured with a leaf area meter (LI-3100 Area Meter, Li-Cor Biosciences, Lincoln, NE, USA). This procedure allowed measurement of the specific leaf area (SLA), expressed in cm2·g−1. Total tree leaf area was obtained by multiplying the total leaf mass of the whole tree by the SLA obtained. Then, the destructive leaf area index (LAID) was calculated by dividing total tree leaf area by the ground area assigned per tree (distance between rows multiplied by distance between plants, assuming equal-size trees).
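As a worked example of the destructive LAID computation described above, the sketch below applies the SLA sub-sample procedure. All numeric values are purely illustrative and are not data from the study.

```python
# Destructive LAI estimation from a fresh-leaf sub-sample (Section 2.2).
# Numbers below are illustrative only.

def specific_leaf_area(subsample_area_cm2: float, subsample_mass_g: float) -> float:
    """SLA in cm2 per g, measured on a ~200 g sub-sample of fresh leaves."""
    return subsample_area_cm2 / subsample_mass_g

def destructive_lai(total_leaf_mass_g: float, sla_cm2_per_g: float,
                    row_spacing_m: float, plant_spacing_m: float) -> float:
    """LAI_D = total tree leaf area / ground area assigned per tree."""
    total_leaf_area_m2 = total_leaf_mass_g * sla_cm2_per_g / 1e4  # cm2 -> m2
    ground_area_m2 = row_spacing_m * plant_spacing_m
    return total_leaf_area_m2 / ground_area_m2

# Hypothetical tree: a 200 g sub-sample covering 5980 cm2, a 4 kg total
# leaf mass and a 4 m x 2 m spacing (Trial 2 geometry).
sla = specific_leaf_area(subsample_area_cm2=5980.0, subsample_mass_g=200.0)
lai_d = destructive_lai(total_leaf_mass_g=4000.0, sla_cm2_per_g=sla,
                        row_spacing_m=4.0, plant_spacing_m=2.0)
```

The spacing values match the Trial 2 planting distances quoted above; the leaf mass and sub-sample area are invented for the example.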
The size of the sub-sample used in this study was validated in a previous study of apple trees in the same orchards, through comparison between the LAID estimated from the sub-sample and the LAID obtained by measuring all leaves of the trees [22]. Furthermore, since fruits and branches also contribute to light extinction through the canopy, digital image acquisition and canopy light interception measurements were carried out with fruits present on the trees, for comparative purposes between trials.

2.3. Digital Cover Photography Acquisition

A conventional digital camera (Digimax A503, Samsung, Korea) with a focal length of 36 mm and angles of view of 53.13° horizontal, 36.87° vertical and 62° diagonal (field of view (FOV) at 1 m distance equal to 0.964 m wide × 0.643 m high) was mounted on a tripod with a bubble level to ensure the camera was horizontal for each image. Digital images were acquired at a resolution of 2048 × 1536 pixels in the Joint Photographic Experts Group (JPEG) format at the zenith angle from all plants, as described by Fuentes et al. [2]. Digital images were collected with automatic exposure at 0.3 m from the ground, 4 h before noon, to avoid direct sunlight shining into the lens of the camera. Apple trees were divided into four quadrants and four images were obtained per tree (Figure 1), resulting in a total of 160 digital images (40 trees).

Figure 1. Example of typical upward-looking digital images for an apple tree, considering the four quadrants, with an image sub-division of 7 (49 sub-samples per image).

2.4. Canopy Light Interception Measurements

Incident radiation (Io) and the radiation transmitted below the canopy (I) were measured using a commercial ceptometer (AccuPAR LP-80, Decagon Devices Inc., Pullman, WA, USA), by taking one measurement above the canopy and five measurements distributed below the canopy parallel to the tree rows.
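The interception calculation from the ceptometer protocol reduces to averaging the below-canopy readings and applying Ic = 1 − I/Io. A minimal sketch, with made-up PAR readings in place of real instrument output:

```python
# Canopy light interception from ceptometer readings (Section 2.4):
# one above-canopy reading (Io) and five below-canopy readings (I),
# averaged, give Ic = 1 - I/Io. Readings below are invented examples.

def light_interception(io: float, below_readings: list) -> float:
    """Ic (dimensionless) from an above-canopy reading and a list of
    below-canopy readings."""
    i_mean = sum(below_readings) / len(below_readings)
    return 1.0 - i_mean / io

ic = light_interception(io=1800.0,
                        below_readings=[520.0, 610.0, 575.0, 560.0, 535.0])
```

With these hypothetical values the mean transmitted reading is 560, giving Ic of roughly 0.69, in the range reported in Table 1.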
The measurements were regularly spaced between the two rows, every 0.3 m for Trial 1 and 0.4 m for Trial 2, on both sides of the tree trunks. Ceptometer measurements were taken in parallel with the digital images (8 February for cv. Ultra Red Gala and 25 April for cv. Cripp’s Pink). The below-canopy readings were averaged to calculate the light intercepted by the canopy (Ic) as follows: Ic = 1 − I/Io (dimensionless).

2.5. Leaf Area Index Estimated by the Digital Photography Method (LAIM)

The analysis script developed by Fuentes et al. [2] performs a cloud filtering process and automatic gap analysis of upward-looking digital images. The cloud filtering process is applied by analyzing image color and brightness. The blue band (450–495 nm) is used to filter clouds, since it provides the best contrast between foliage cover and sky plus clouds [2]. The blue band of each image was extracted as a histogram and explored to identify a suitable threshold between foliage and sky. The code developed allows the automation of this process by automatically identifying the minimum value between the peaks corresponding to sky and foliage. The automatic gap analysis is performed by dividing each binary image into a number of sub-images defined by the user. For this study, we used 4 images per tree with an image sub-division of 7 (49 sub-images per digital image, 196 sub-images per tree) as standard parameters (Figure 1). From each sub-image, the program counts the total number of pixels corresponding to sky (S) and leaves (L). A large gap is considered to occur when the ratio S/L in a sub-image is larger than a user-specified value. In this study, a large gap threshold equal to 0.75 was used; this value was proposed by Fuentes et al. [2] for evergreen Eucalyptus woodland. The same study showed, through sensitivity analysis, that changes in this threshold contribute little to the final LAI estimate.
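The sub-image gap analysis described above can be sketched as follows. This is a simplified re-implementation, not the authors' MATLAB code; it takes an already-binarized image where 1 marks a sky (gap) pixel and 0 marks foliage:

```python
# Automatic gap analysis (Section 2.5): a binarized image is split into
# subdiv x subdiv sub-images; a sub-image whose sky/leaf pixel ratio
# exceeds the large-gap threshold contributes its sky pixels to the
# large-gap count. Any pixels left over by integer division are ignored.

def gap_counts(binary, subdiv=7, large_gap_ratio=0.75):
    """Return (total_gap_pixels, large_gap_pixels) for a 2D 0/1 image."""
    rows, cols = len(binary), len(binary[0])
    step_r, step_c = rows // subdiv, cols // subdiv
    g_total = g_large = 0
    for i in range(subdiv):
        for j in range(subdiv):
            block = [binary[r][c]
                     for r in range(i * step_r, (i + 1) * step_r)
                     for c in range(j * step_c, (j + 1) * step_c)]
            sky = sum(block)
            leaf = len(block) - sky
            g_total += sky
            # ratio S/L; a block with no leaf pixels is a pure gap
            if leaf == 0 or sky / leaf > large_gap_ratio:
                g_large += sky
    return g_total, g_large

# Tiny 4x4 example with a 2x2 sub-division, for illustration only.
img = [[1, 1, 0, 0],
       [1, 1, 0, 0],
       [1, 0, 1, 1],
       [0, 0, 1, 0]]
g_total, g_large = gap_counts(img, subdiv=2)
```

Real inputs would be the 2048 × 1536 binarized photographs with the default sub-division of 7, as stated in the text.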
When this threshold criterion is met for a sub-image, the pixel count for S is added to the big gap count of that particular full image. If the observed ratio is smaller than the user-specified value for a specific sub-image, the pixel count contribution of that sub-image to the total big gap count is zero. The light extinction coefficient (k) can be incorporated either as a fixed value or as a variable value per image. The fractions of foliage cover (ff) and crown cover (fc) are calculated following Macfarlane et al. [16] using the following equations:

ff = 1 − gT/TP (1)

fc = 1 − gL/TP (2)

where gT is the total number of gap pixels, gL is the total number of large-gap pixels and TP is the total number of pixels. Using the calculated ff and fc values, the crown porosity (Φ) can be calculated as follows:

Φ = 1 − ff/fc (3)

The clumping index at the zenith (Ω(0)) and the effective leaf area index (LAIM) are calculated from Beer's law as follows [2,16,23]:

Ω(0) = [(1 − Φ)·ln(1 − ff)] / [ln(Φ)·ff] (4)

LAIM = −fc·ln(Φ)·Ω(0)/k (5)

Finally, the measured light extinction coefficient (kM) was calculated by inverting Equation (5) using the measured value of LAI (LAID) as follows:

kM = −fc·ln(Φ)·Ω(0)/LAID (6)

2.6. Performance Evaluation of the Digital Photography Method (LAIM)

The evaluation of the performance of LAIM included a linear regression between LAID and LAIM, and the calculation of the root mean square error (RMSE), mean absolute error (MAE), mean bias error (MBE) and index of agreement (d) [24–26]. Additionally, a sensitivity analysis was carried out to evaluate the effect of variations (of ±30%) in k on the estimation of LAIM. Furthermore, the relationships between kM and ff and between kM and Ic were evaluated with linear and exponential models.

3. Results

The averaged values obtained for LAID, kM, ff, fc, Φ and Ω(0) for the cv. Cripp’s Pink (Trial 1) and cv. Ultra Red Gala1 and cv.
Ultra Red Gala2 (Trial 2) are summarized in Table 1. When comparing both experimental sites, the cv. Cripp’s Pink presented the lowest values of LAID and SLA, with average values of 1.84 and 28.0 cm2·g−1, respectively. Similar results were observed for the rest of the parameters analyzed, with the exception of crown porosity (Φ), which was the highest. The cv. Ultra Red Gala2 presented the highest LAID (2.96) and correspondingly lowest Φ (0.14). The tallest trees (cv. Ultra Red Gala1, 3.8–4.0 m tall) presented the highest kM, with an average value of 0.79. Additionally, cv. Ultra Red Gala1&2 presented Ω(0) values closest to 1. The variability among trees was low, as indicated by the standard deviation (S.D.) values for the different parameters. The sensitivity analysis of kM (used to estimate LAIM) showed that LAIM was significantly affected by a ±30% variation of kM: the relative LAIM changes generated by kM + 30% and kM − 30% were on the order of −23.1% and +42.9%, respectively.

Table 1. Average and standard deviation values obtained for destructive leaf area index (LAID), specific leaf area (SLA), light intercepted by the canopy (Ic), measured extinction coefficient (kM), fraction of foliage cover (ff), crown cover (fc), crown porosity (Φ) and clumping index at the zenith angle (Ω(0)) for both trial sites.

Trial                                  LAID   SLA   Ic     kM     ff    fc    Φ     Ω(0)
cv. Cripp’s Pink (n = 10, Trial 1)
  * Avg.                               1.84   28.0  0.551  0.553  0.66  0.76  0.17  0.75
  ** S.D.                              0.49   2.92  0.165  0.179  0.11  0.09  0.04  0.07
cv. Ultra Red Gala1 (n = 18, Trial 2)
  * Avg.                               2.46   28.2  0.795  0.793  0.86  0.98  0.12  0.97
  ** S.D.                              0.34   1.25  0.083  0.087  0.03  0.01  0.02  0.02
cv. Ultra Red Gala2 (n = 12, Trial 2)
  * Avg.                               2.96   34.1  0.632  0.625  0.83  0.97  0.14  0.96
  ** S.D.                              0.38   1.16  0.067  0.084  0.05  0.03  0.04  0.04
Total (n = 40)
  * Avg.                               2.46   29.9  0.683  0.680  0.80  0.92  0.14  0.91
  ** S.D.                              0.57   3.54  0.145  0.152  0.11  0.11  0.04  0.10

* Avg. is the average value; ** S.D.
is the standard deviation; SLA is expressed in cm2·g−1.

3.1. Estimation of Light Extinction Coefficient

An exponential relationship was found when comparing the light extinction coefficient estimated from the fraction of foliage cover (ff) with the kM obtained by inverting Equation (5). Considering the whole dataset, the r2 for the exponential model was 0.67; a similar fit was obtained with a linear model (r2 = 0.62) (Table 2). Figure 2 shows that the exponential relationships obtained are site and cultivar specific. For cv. Cripp’s Pink, the exponential model showed an r2 of 0.93, while for cv. Ultra Red Gala1&2 the r2 of the exponential model was 0.46 (Table 2).

Table 2. Models for the light extinction coefficient as a function of the fraction of foliage cover (ff) for both trial sites. r2 is the coefficient of determination (dimensionless).

Trial 1 (cv. Cripp’s Pink):
  Exponential  kM = 0.031·exp(4.44·ff)   r2 = 0.93
  Linear       kM = 2.02·ff − 0.73       r2 = 0.87
Trial 2 (cv. Ultra Red Gala1&2):
  Exponential  kM = 0.096·exp(2.37·ff)   r2 = 0.46
  Linear       kM = 1.63·ff − 0.65       r2 = 0.42
Whole dataset:
  Exponential  kM = 0.136·exp(1.99·ff)   r2 = 0.67
  Linear       kM = 1.08·ff − 0.17       r2 = 0.62

Figure 2. Exponential relationship between the measured light extinction coefficient (kM) and the fraction of foliage cover (ff). The dashed line represents the exponential model for the whole dataset, the continuous line the exponential model for Trial 1 (cv. Cripp’s Pink) and the dotted line the exponential model for Trial 2 (cv. Ultra Red Gala1 and cv. Ultra Red Gala2).

Furthermore, we tested the use of instantaneous canopy light interception measurements as a proxy for kM (Figure 3). When the whole dataset was considered (n = 40), the instantaneous fraction of light intercepted by the canopy (Ic) showed a linear relationship with kM.
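Equations (1)–(6) of Section 2.5 can be transcribed directly into code. The sketch below is a plain re-implementation of those formulas, not the authors' script; the k_from_ff helper uses the whole-dataset exponential model from Table 2.

```python
# Equations (1)-(6): foliage cover, crown cover, crown porosity, zenith
# clumping index, LAI via Beer's law, and the inverted extinction
# coefficient. Direct transcription of the formulas in Section 2.5.
import math

def canopy_parameters(g_total, g_large, total_pixels):
    ff = 1.0 - g_total / total_pixels            # Eq. (1) foliage cover
    fc = 1.0 - g_large / total_pixels            # Eq. (2) crown cover
    phi = 1.0 - ff / fc                          # Eq. (3) crown porosity
    omega0 = ((1.0 - phi) * math.log(1.0 - ff)   # Eq. (4) clumping index
              / (math.log(phi) * ff))
    return ff, fc, phi, omega0

def lai_beer(fc, phi, omega0, k):
    """Eq. (5): LAI_M = -fc * ln(phi) * omega0 / k."""
    return -fc * math.log(phi) * omega0 / k

def k_measured(fc, phi, omega0, lai_d):
    """Eq. (6): k_M from inverting Eq. (5) with the destructive LAI_D."""
    return -fc * math.log(phi) * omega0 / lai_d

def k_from_ff(ff):
    """Whole-dataset exponential model of Table 2: k_M = 0.136*exp(1.99*ff)."""
    return 0.136 * math.exp(1.99 * ff)

# Illustrative gap counts (not study data): 20 gap pixels, 8 of them in
# large gaps, out of 100 pixels, with the constant k = 0.68.
ff, fc, phi, omega0 = canopy_parameters(g_total=20, g_large=8, total_pixels=100)
lai_est = lai_beer(fc, phi, omega0, k=0.68)
```

As a consistency check, plugging the cv. Cripp's Pink averages from Table 1 (fc = 0.76, Φ = 0.17, Ω(0) = 0.75, LAID = 1.84) into k_measured returns approximately 0.55, matching the tabulated kM of 0.553.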
The linear regression analysis (forced to pass through the origin) between kM and Ic was highly significant (p < 0.01), with an r2 value of 0.90 and a slope b = 0.99. Figure 3 shows that the data points were distributed close to the 1:1 line, with kM and Ic ranging from a minimum of 0.25 to a maximum of 0.95.

Figure 3. Relationship between the measured light extinction coefficient (kM) and the fraction of light intercepted by the canopy (Ic = 1 − I/Io) for both trials.

3.2. Determination of LAIM

The LAIM was calculated using three approaches: (I) using parameters derived from the image analysis and a constant light extinction coefficient (k = 0.68) (LAIM1); (II) using parameters derived from the image analysis with kM estimated by an exponential function of ff (LAIM2); and (III) using fc, ff, Φ and Ω(0) derived from the image analysis and the instantaneous values of the fraction of incident radiation absorbed by the canopy (LAIM3). The comparison between LAID and LAIM1, LAIM2 and LAIM3 is presented in Table 3. LAIM1 had an RMSE of 0.61, equivalent to a 25% error. In this case the method was tested using a common extinction value (k = 0.68) for all images (n = 160), which resulted in a poor correlation between LAID and LAIM1 (r2 = 0.30). LAIM2 had an RMSE of 0.44 (18% error) and r2 = 0.40. Finally, LAIM3 displayed a much better agreement with LAID, with a d value of 0.96; its RMSE and MAE were 0.22 (9% error) and 0.17 (7% error), respectively. The linear regression between LAID and LAIM3 for the whole dataset (40 apple trees) was highly significant, with an r2 value of 0.85 (Figure 4). The null hypotheses of intercept = 0 and slope = 1.0 were not rejected at the 95% confidence level (p-values of 0.55 and 0.58, respectively). Figure 4 shows the linear relationship between LAID and LAIM3 for the three measurement sites.
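The through-origin regression used for kM versus Ic has the closed-form slope b = Σ(x·y)/Σ(x²). A minimal sketch with synthetic data (the values below are illustrative, not the study's measurements):

```python
# Least-squares slope for a regression forced through the origin,
# as used for k_M vs I_c. Data points below are synthetic.

def slope_through_origin(x, y):
    """Slope b minimizing sum((y - b*x)^2) with zero intercept."""
    sxy = sum(xi * yi for xi, yi in zip(x, y))
    sxx = sum(xi * xi for xi in x)
    return sxy / sxx

ic_vals = [0.25, 0.45, 0.60, 0.80, 0.95]   # hypothetical I_c values
km_vals = [0.26, 0.44, 0.61, 0.79, 0.94]   # hypothetical k_M values
b = slope_through_origin(ic_vals, km_vals)  # close to 1 when k_M tracks I_c
```

A slope near 1, as in the reported b = 0.99, is what makes Ic usable directly as a proxy for kM.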
The same figure shows one data point lying outside the main data cloud, corresponding to a tree of cv. Cripp’s Pink with a LAID value of around 1.0. The range of LAID in the dataset varied from 1.0 to a maximum of 3.75.

Table 3. Statistical analysis of leaf area index estimated by the LAIM method using a constant k value (LAIM1), a k derived from ff (LAIM2) and using the instantaneous fraction of incident radiation absorbed by the canopy as a proxy value for k (LAIM3).

Approach  RMSE         MAE          MBE            r2    d
LAIM1     0.61 (25%)   0.46 (19%)   0.07 (3%)      0.30  0.70
LAIM2     0.44 (18%)   0.36 (15%)   −0.06 (−2.4%)  0.40  0.71
LAIM3     0.22 (9%)    0.17 (7%)    0.01 (0.2%)    0.85  0.96

RMSE is the root mean square error; MAE is the mean absolute error; MBE is the mean bias error (all expressed in LAI units, m2·m−2); r2 is the coefficient of determination (dimensionless); d is the index of agreement (dimensionless).

Figure 4. Comparison between the leaf area index estimated from digital cover images using the instantaneous fraction of incident radiation absorbed by the canopy (LAIM3) and the leaf area index obtained by the defoliation method (LAID). Data shown consider all varieties and study fields.

4. Discussion

In this study, measurements of canopy light interception and digital cover images were taken four hours before midday to avoid direct sunlight within the pictures. This timing consistency is very important, because radiation transmittance in fruit tree orchards varies greatly throughout the day [27–30]. Additionally, Zhao et al. [30] showed that the time of day significantly influences the pixel classification between vegetation and background in vertical upward-oriented digital images. Their main recommendation was to avoid images taken under direct light conditions, in order to ensure that the sensed radiation does not include any radiation reflected or transmitted by leaves.
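The agreement statistics used in Table 3 (RMSE, MAE, MBE and Willmott's index of agreement d, cited as [26]) can be computed as follows. These are generic implementations of the standard definitions, not the authors' code:

```python
# Agreement statistics from Section 2.6: root mean square error, mean
# absolute error, mean bias error and Willmott's index of agreement.
import math

def rmse(obs, est):
    return math.sqrt(sum((e - o) ** 2 for o, e in zip(obs, est)) / len(obs))

def mae(obs, est):
    return sum(abs(e - o) for o, e in zip(obs, est)) / len(obs)

def mbe(obs, est):
    # positive when the estimate is biased high relative to observations
    return sum(e - o for o, e in zip(obs, est)) / len(obs)

def index_of_agreement(obs, est):
    """Willmott's d: 1 means perfect agreement, 0 means none."""
    mean_o = sum(obs) / len(obs)
    num = sum((e - o) ** 2 for o, e in zip(obs, est))
    den = sum((abs(e - mean_o) + abs(o - mean_o)) ** 2
              for o, e in zip(obs, est))
    return 1.0 - num / den
```

Applied to the paired LAID and LAIM values per tree, these functions would reproduce the Table 3 columns.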
Previous studies have proposed the use of an automated cover photography procedure with a single averaged k per tree species. Specifically, for eucalyptus a k = 0.5 has been recommended [2,16]. In the case of grapevines, a calculated k = 0.7 has been reported [31] and used as a constant [32]. However, the present study demonstrated that the cover photography method (LAIM) is sensitive to the k values required to obtain accurate LAI from isolated apple trees. Therefore, the accuracy of the method depends on a good estimator of k for individual images. According to the results presented here, using a constant k for apples (k = 0.68, the average value obtained from Table 1) would overestimate the averaged k for Cripp’s Pink by 20% and underestimate the averaged k by 16% for Ultra Red Gala1 and by 7.4% for Ultra Red Gala2. Furthermore, it was shown that underestimating kM by 30% overestimated LAID by 43%, while overestimating kM by 30% underestimated it by 23%, both significant differences from the real values. The differences in k obtained per apple cultivar and per trial site highlight how crucial local calibration is for any LAI methodology. This can be considered the main constraint for indirect measurement methods that rely on light transmission through the canopy. Another constraint of common LAI measurement methodologies is that local calibrations are generally valid only for the same conditions of canopy vigor and site; therefore, they cannot be transferred to other cultivars, or to the same cultivar with a different canopy structure (Table 3). Differences between the two experimental sites could be related mainly to the age of the orchards. The three-year-old plants had not yet reached their productive potential and had not yet occupied the total assigned ground area.
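Because LAIM in Equation (5) is inversely proportional to k, a relative error in k maps deterministically onto the LAI estimate. The short check below reproduces the −23.1%/+42.9% figures quoted for a ±30% variation of kM:

```python
# Eq. (5) gives LAI proportional to 1/k, so a relative error in k
# translates directly into a relative LAI error.

def lai_relative_change(k_error: float) -> float:
    """Relative LAI change caused by a relative error in k (LAI ~ 1/k)."""
    return 1.0 / (1.0 + k_error) - 1.0

over = lai_relative_change(0.30)    # k overestimated by 30% -> LAI low
under = lai_relative_change(-0.30)  # k underestimated by 30% -> LAI high
```

The asymmetry (−23.1% versus +42.9%) follows from the reciprocal relationship, which is why underestimating k is the more damaging direction.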
Therefore, a higher proportion of leaf area growth in these younger trees is vertically elongated towards areas with large gaps and areas with low porosity, whereas in more balanced trees growth is characterized by a more uniform leaf area distribution. The automated methodology presented in this paper can incorporate an independent k value per analyzed image. This value can be obtained either from a parameter measured in the same image, such as ff, or from parallel measurements of the fraction of light intercepted by the canopy (Ic). Macfarlane et al. [17] showed that ff was highly correlated with LAI obtained using allometry and the plant canopy analyzer (LAI-2000) in E. globulus stands (r2 values of 0.78 and 0.82, respectively). In this study, using a proxy k obtained from ff (LAIM2), the model explained only 40% of the LAID variability, with an associated RMSE of 18%. A positive aspect of this approach is that it is completely automated, since the analysis code can calculate k directly from the fitted kM–ff equation. Its downside is that it requires calibration of ff against kM to obtain the algorithm for other cultivars and species, since the relationship between ff and kM depends on the characteristics of the orchard, specifically the plant density. The results in Figure 2 show that the relationship between ff and kM is site and cultivar specific. When a specific model is used for each of Trials 1 and 2, the RMSE of the LAIM2 method is reduced to 15%. Most significantly, by using a proxy k value obtained from the relationship between kM and Ic = 1 − I/Io, which explained 85% of the variability of kM, the LAIM3 estimation improved markedly, with an associated RMSE of only 9%. The remaining error could be attributed to the inclusion of fruits and non-leaf material, such as branches, in the images. A positive aspect of this method is that the Ic factor can be a direct input in the estimation of the k value required per image to calculate LAIM.
This relationship was also consistent across both cultivars, unlike the relationship found with ff. The downside of this latter method is that it requires the measurement of I and Io per image using an additional instrument. To aid in this process, a light sensor for smartphone and tablet platforms is currently under development, allowing easy assessment of light interception above and below the canopy [32]. Furthermore, the Ic factor takes into consideration the specific canopy architecture and vigor of the trees. Therefore, this procedure can be considered an easy per-image auto-calibration that offers more accurate LAI estimates while effectively accounting for site specificity.

5. Conclusions

The automated methodology demonstrated here for estimating LAI of apple trees, using digital cover photography and a variable light extinction coefficient, has been shown to be an accurate and inexpensive technique to estimate LAI and other canopy architectural parameters. Results showed that the LAIM method was able to estimate LAID with an error of 18% when the light extinction coefficient (k) was modeled as a function of the fraction of foliage cover (ff) derived from the digital images. However, when measurements of light intercepted by the canopy (Ic) were used as a proxy for k, the digital photography method presented an RMSE of only 9%. These results demonstrate that using a variable k, estimated from Ic, significantly increased the accuracy of LAI estimates for apple trees with variable canopy sizes under field conditions. This method can be used for experimental and practical applications. The latter could allow growers to identify spatial differences in vigor that could be correlated to unseen factors, such as soil differences. Plant functional models and remote sensing techniques could benefit from the proposed methodology as a consequence of simplifying the acquisition and analysis of accurate LAI estimates.
Acknowledgments

The research leading to this report was supported by the Chilean government through the CONICYT project (No. 79090035) and by Universidad de Talca through the research program “Adaptation of Agriculture to Climate Change (A2C2)”. We thank Professor Thomas A. Lehe for the English revision.

Author Contributions

Carlos Poblete-Echeverría and Sigfredo Fuentes contributed equally to the development of the idea, methodology and manuscript writing. Jaime Gonzalez-Talice and Jose Antonio Yuri contributed to the acquisition of the images required to perform the experiments and participated in the revision process. Samuel Ortega-Farias contributed to the revision and editing of this paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Chason, J.W.; Baldocchi, D.D.; Huston, M.A. A comparison of direct and indirect methods for estimating forest canopy leaf area. Agric. For. Meteorol. 1991, 57, 107–128.
2. Fuentes, S.; Palmer, A.R.; Taylor, D.; Zeppel, M.; Whitley, R.; Eamus, D. An automated procedure for estimating the leaf area index (LAI) of woodland ecosystems using digital imagery, MATLAB programming and its application to an examination of the relationship between remotely sensed and field measurements of LAI. Funct. Plant Biol. 2008, 35, 1070–1079.
3. Pekin, B.; Macfarlane, C. Measurement of crown cover and Leaf Area Index using digital cover photography and its application to remote sensing. Remote Sens. 2009, 1, 1298–1320.
4. Tang, H.; Dubayah, R.; Swatantran, A.; Hofton, M.; Sheldon, S.; Clark, D.B.; Blair, B. Retrieval of vertical LAI profiles over tropical rain forests using waveform lidar at La Selva, Costa Rica. Remote Sens. Environ. 2012, 124, 242–250.
5. Tang, H.; Dubayah, R.; Brolly, M.; Ganguly, S.; Zhang, G. Large-scale retrieval of leaf area index and vertical foliage profile from the spaceborne waveform lidar (GLAS/ICESat). Remote Sens. Environ. 2014, 154, 8–18.
6.
Villalobos, F.J.; Orgaz, F.; Mateos, L. Non-destructive measurement of leaf area in olive (Olea europaea L.) trees using a gap inversion method. Agric. For. Meteorol. 1995, 73, 29–42.
7. Jonckheere, I.; Fleck, S.; Nackaerts, K.; Muys, B.; Coppin, P.; Weiss, M.; Baret, F.D.R. Review of methods for in situ leaf area index determination: Part I. Theories, sensors and hemispherical photography. Agric. For. Meteorol. 2004, 121, 19–35.
8. Le Dantec, V.; Dufrene, E.; Saugier, B. Interannual and spatial variation in maximum Leaf Area Index of temperate deciduous stands. For. Ecol. Manag. 2000, 134, 71–81.
9. Mencuccini, M.; Grace, J. Climate influences the leaf area/sapwood area ratio in Scots pine. Tree Physiol. 1995, 15, 1–10.
10. Bréda, N.J.J. Ground-based measurements of leaf area index: A review of methods, instruments and current controversies. J. Exp. Bot. 2003, 54, 2403–2417.
11. Chen, J.M.; Cihlar, J. Plant canopy gap-size analysis theory for improving optical measurements of Leaf-Area Index. Appl. Opt. 1995, 34, 6211–6222.
12. Lang, A.R.G.; McMurtrie, R.E. Total leaf areas of single trees of Eucalyptus grandis estimated from transmittances of the sun’s beam. Agric. For. Meteorol. 1992, 58, 79–92.
13. Lang, A.R.G.; Yueqin, X. Estimation of Leaf Area Index from transmission of direct sunlight in discontinuous canopies. Agric. For. Meteorol. 1986, 37, 229–243.
14. Welles, J.M.; Cohen, S. Canopy structure measurement by gap fraction analysis using commercial instrumentation. J. Exp. Bot. 1996, 47, 1335–1342.
15. Frazer, G.W.; Fournier, R.A.; Trofymow, J.A.; Hall, R.J. A comparison of digital and film fisheye photography for analysis of forest canopy structure and gap light transmission. Agric. For. Meteorol. 2001, 109, 249–263.
16. Macfarlane, C.; Hoffman, M.; Eamus, D.; Kerp, N.; Higginson, S.; McMurtrie, R.; Adams, M. Estimation of Leaf Area Index in eucalypt forest using digital photography. Agric. For. Meteorol. 2007, 143, 176–188.
17.
Macfarlane, C.; Arndt, S.K.; Livesley, S.J.; Edgar, A.C.; White, D.A.; Adams, M.A.; Eamus, D. Estimation of Leaf Area Index in eucalypt forest with vertical foliage, using cover and fullframe fisheye photography. For. Ecol. Manag. 2007, 242, 756–763.
18. Macfarlane, C.; Grigg, A.; Evangelista, C. Estimating forest leaf area using cover and fullframe fisheye photography: Thinking inside the circle. Agric. For. Meteorol. 2007, 146, 1–12.
19. Fuentes, S.; Poblete-Echeverría, C.; Ortega-Farias, S.; Tyerman, S.; de Bei, R. Automated estimation of Leaf Area Index from grapevine canopies using cover photography, video and computational analysis methods. Aust. J. Grape Wine Res. 2014, 20, 465–473.
20. Lindroth, A.; Perttu, K. Simple calculation of extinction coefficient of forest stands. Agric. Meteorol. 1981, 25, 97–110.
21. Yuri, J.A.; Ibarra-Romero, M.; Vásquez, J.L.; Lepe, V.; González-Talice, J.; del Pozo, A. Reduction of apple tree height (Malus domestica Borkh) cv. Ultra Red Gala/MM111 does not decrease fruit yield and quality. Sci. Hortic. 2011, 130, 191–196.
22. Lepe, V.; Yuri, J.; Moggia, C. Estimación del índice de área Foliar a Través de Fotografía Hemisférica Deshoje Manual en Manzanos Cerezos. Master’s Thesis, Universidad de Talca, Talca, Chile, 2004.
23. Leblanc, S.G. Correction to the plant canopy gap-size analysis theory used by the tracing radiation and architecture of canopies instrument. Appl. Opt. 2002, 41, 7667–7670.
24. Legates, D.R.; McCabe, G.J. Evaluating the use of “goodness-of-fit” measures in hydrologic and hydroclimatic model validation. Water Resour. Res. 1999, 35, 233–241.
25. Mayer, D.; Butler, D. Statistical validation. Ecol. Model. 1993, 68, 21–32.
26. Willmott, C.J. Some comments on the evaluation of model performance. Bull. Am. Meteorol. Soc. 1982, 63, 1309–1313.
27. Flenet, F.; Kiniry, J.R.; Board, J.E.; Westgate, M.E.; Reicosky, D.C.
Row spacing effects on light extinction coefficients of corn, sorghum, soybean, and sunflower. Agron. J. 1996, 88, 185–190.
28. Kiniry, J.R.; Landivar, J.A.; Witt, M.; Gerik, T.J.; Cavero, J.; Wade, L.J. Radiation-use efficiency response to vapor pressure deficit for maize and sorghum. Field Crops Res. 1998, 56, 265–270.
29. Yunusa, I.A.M.; Walker, R.R.; Blackmore, D.H. Characterization of water use by Sultana grapevines (Vitis vinifera L.) on their own roots or on Ramsey rootstock drip-irrigated with water of different salinities. Irrig. Sci. 1997, 17, 77–80.
30. Zhao, D.; Xie, D.; Zhou, H.; Jiang, H.; An, S. Estimation of Leaf Area Index and Plant Area Index of a submerged macrophyte canopy using digital photography. PLoS One 2012, 7, e51034.
31. Cavallo, P.; Poni, S.; Rotundo, A. Ecophysiology and vine performance of cv. “Aglianico” under various training systems. Sci. Hortic. 2001, 87, 21–32.
32. Fuentes, S.; de Bei, R.; Pozo, C.; Tyerman, S.D. Development of a smartphone application to characterize temporal and spatial canopy architecture and leaf area index for grapevines. Wine Vitic. J. 2012, 6, 56–60.

© 2015 by the authors; licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution license (http://creativecommons.org/licenses/by/4.0/).

work_nijostrtxvh4vljw6enxeny5qi ----
This article is part of the Research Topic “Veterinary Sports Medicine and Physical Rehabilitation”. Edited by David Levine, University of Tennessee at Chattanooga, United States. Reviewed by Maria Fahie, Western University of Health Sciences, United States, and Narda G. Robinson, CuraCore Integrative Medicine & Education Center, United States.

Original Research Article, Front. Vet. Sci., 29 January 2019 | https://doi.org/10.3389/fvets.2018.00349

Laser Therapy for Incision Healing in 9 Dogs

Jennifer L. Wardlaw1,2*, Krista M. Gazzola1, Amanda Wagoner1, Erin Brinkman1, Joey Burt1, Ryan Butler1, Julie M. Gunter1 and Lucy H. Senter3

1 Department of Clinical Sciences, Animal Health Center, College of Veterinary Medicine, Mississippi State University, Starkville, MS, United States; 2 Gateway Veterinary Surgery, St. Louis, MO, United States; 3 Office of Research and Economic Development, Department of Clinical Sciences, College of Veterinary Medicine, Mississippi State University, Starkville, MS, United States

Laser therapy is becoming commonplace in veterinary medicine, with little evidence proving efficacy or dosages. This study evaluated surgical wound healing in canines.
Twelve Dachshunds underwent thoraco-lumbar hemilaminectomies for intervertebral disc disease (IVDD). Digital photographs were taken of their incisions within 24 h of surgery and 1, 3, 5, 7, and 21 days postoperatively. The first three dogs were used to create a standardized scar scale for scoring the other dogs' incision healing. The remaining 9 dogs were randomly assigned either to receive 8 J/cm2 laser therapy once a day for 7 days or to a non-laser control group. Incision healing was scored on the scar scale from 0 to 5, with 0 being a fresh incision and 5 being completely healed with scar contraction and hair growth. All scar scores improved significantly with increasing time from surgery (p < 0.001). Good agreement was achieved for inter-rater reliability (p = 0.9). Laser therapy increased the scar scale score, indicating improved cosmetic healing, by day 7, and scores remained significantly higher on day 21 compared with control dogs (p < 0.001). Daily application of laser therapy at 8 J/cm2 hastened wound healing in Dachshunds that received thoracolumbar hemilaminectomies for IVDD. It also improved the cosmetic appearance. Introduction Laser therapy is a novel rehabilitation technique used in veterinary medicine for both rehabilitation and therapeutic purposes. Photobiomodulation (PBM) induced by laser therapy is the application of electromagnetic radiation within the near-infrared spectrum, aimed at stimulating healing or analgesia within the target tissue. Currently laser therapy is being advocated for a variety of conditions, including musculoskeletal pain, osteoarthritis, joint pain and inflammation, neuropathic pain, otitis, dermatitis, chronic or non-healing wounds, and decubital ulcers (1–5). There are three phases of wound healing: the inflammatory, proliferative, and remodeling phases. The inflammatory phase is initiated at the time of injury and begins with hemostasis and formation of the platelet plug.
Platelets release platelet-derived growth factor, which attracts neutrophils and, more importantly, macrophages. Macrophages attract fibroblasts and thereby commence the proliferative phase. Fibroblasts differentiate into myofibroblasts and cause tissue contraction. Tensile strength is increased by collagen reorganization, and the eventual outcome is a wound that reaches 80% of the strength of uninjured tissue (6, 7). Experimental studies have shown that laser therapy reduces pain, positively influences the inflammatory, proliferative, and maturation phases of wound healing, and increases wound tensile strength (6, 8–10). However, most of these studies have been in laboratory animals and do not account for differences in wound healing between species. Despite the numerous accounts of the potential positive effects of laser therapy in various applications in both human and veterinary medicine, exact protocols for various conditions and tissue healing do not exist. Recent studies in veterinary medicine show the potential benefit to wound healing of PBM induced by laser therapy, including accelerated wound healing in equine distal limb wounds using a wavelength of 635 nm and an energy output of 17 mW per diode for an energy density of 5.1 J/cm2 (11). Another recent canine study showed that surgery in combination with PBM decreases the time to ambulation in dogs with T3-L3 myelopathy secondary to intervertebral disk herniation, using a wavelength of 810 nm and an energy output of 200 mW for an energy density of 2–8 J/cm2 (4). In other laser therapy reports in the literature, wound healing protocols vary from 1 to 40 J/cm2, necessitating continued controlled research studies to evaluate the efficacy of proposed protocols using specified energy densities on specific target tissues for defined clinical indications (8, 12–17).
This study attempts to objectively measure the ability of PBM induced by laser therapy to accelerate the healing of surgically created wounds by using a previously described scar scale that corresponds with histopathology (18). This scar scale using photography has shown in numerous species that scar cosmesis is a consistent and sensitive indicator of histologic healing and is independent of the reviewer (3, 18–21). By using digital photographs, blinded to the reviewers, the present study evaluated healing while avoiding tissue sample collection. The goal of the study was to objectively evaluate the use of laser therapy as a treatment modality for surgical incision healing in canine intervertebral disc disease (IVDD) patients. Materials and Methods All procedures were approved by the Mississippi State University Institutional Animal Care and Use Committee, and all treatment and control animal participants had documented informed client consent before enrollment in the study. The use of veterinarians to score incision healing was approved by the Mississippi State University Institutional Review Board for the Protection of Human Subjects in Research. Dachshund patients presenting to the College of Veterinary Medicine, Mississippi State University, for a hemilaminectomy in the thoraco-lumbar region were included in this study. Dachshunds with a dapple-colored coat, known systemic health issues, incisions closed with staples, or owners that did not consent were not included in the study. Dogs that had received previous back surgeries were also excluded from the study population. Dogs that met the inclusion criteria had data collected with regard to signalment, weight, body condition score, coat color, location of disc herniation, surgical incision length, type of suture used, fat pad graft or gel foam usage, steroid administration, bladder medications, neurologic status at presentation, and neurologic status at the time of discharge.
A 10 mega-pixel camera1 was used to take digital photographs. Photographs were taken at an equal distance (15 cm), angle, setting, and lighting on days 0, 1, 3, 5, and 7 and again at day 21 for all dogs, with day 0 being the day of surgery. All photographs were taken at the standardized distance from the incision, on the same exam table, with the same camera settings and 90° from the dog's back, by one of two authors (KG or AW). A clinical scar scale using digital photography was created as previously described (18). The first three dogs presented that met our inclusion criteria had digital photographs taken at days 0, 1, 3, 5, 7, and 21 using the standardized variables listed. These three dogs were used to create the scar scale from 0 to 5: a score of 0 was a fresh surgical incision (day 0 picture); a score of 1 was a fresh incision with no hemorrhage present (day 1 picture); a score of 2 was an incision with some scabbing, swelling, or bruising (day 3 picture); a score of 3 had visible healing with ongoing skin remodeling but resolving bruising or inflammation (day 5 picture); a score of 4 had healing progressing but a visible scar present (day 7 picture); and a score of 5 was a completely healed surgical incision with epithelialization, contraction, and hair regrowth (day 21 picture) (Figures 1A,B). FIGURE 1 Figure 1. Examples of the scar scale images used for reviewer ranking of surgical incisions for both treatment groups. (A) This image was defined as a scar scale score of zero. (B) This image was defined as a scar scale score of five. The first three dogs enrolled in the study were selected and photographed to assure appropriate and expected healing without complication. After the initial three dogs were used to create the visually representative scar scale, the next qualifying dog was assigned to one of two treatment groups (laser or non-laser) using a coin flip. Subsequent dogs were then assigned alternately to maintain similar treatment group sizes.
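This assignment scheme (a coin flip for the first dog, then alternation to keep group sizes similar) can be sketched as follows. This is an illustrative reconstruction in Python, not the authors' actual procedure; the function name and group labels are ours:

```python
import random

def assign_groups(n_dogs, seed=None):
    """Assign dogs to treatment groups: the first by coin flip,
    then alternating so group sizes differ by at most one."""
    rng = random.Random(seed)
    first = rng.choice(["laser", "non-laser"])
    other = "non-laser" if first == "laser" else "laser"
    return [first if i % 2 == 0 else other for i in range(n_dogs)]

# With 9 dogs, one group ends up with 5 and the other with 4.
groups = assign_groups(9)
```

Alternation after a single random start is a pragmatic way to balance small groups, though it is weaker than full randomization because every assignment after the first is predictable.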
Laser therapy dogs received 8 J/cm2 daily for 7 days, beginning on day 1, using a class 3B veterinary laser2. The laser was wiped clean before and after each patient with the Novus polish system cloth provided by the laser manufacturer. The incision and one additional probe head spot size (7.55 cm2) around the entire incision (cranial, bilaterally the length of the incision, and caudal) were treated, except for over the laminectomy site. Total patient irradiation times varied due to varying lengths of incisions. The probe3 was applied with contact, but no pressure, perpendicular to the skin at all times. The manufacturer's pre-programmed muscle/increase local circulation setting for acute, low pulse was used. This setting pulses at 8 Hz with 90% on, 10% off emission at 850 nm for the laser diodes and 670 nm for the LEDs, and defaults to 4 J/cm2. All treatment spots received 4 J/cm2 in this described spot pattern, twice during each treatment, totaling an 8 J/cm2 treatment dosage. Digital photographs were taken at days 0, 1, 3, 5, 7, and 21. The non-laser group did not receive laser treatment and had pictures taken at days 0, 1, 3, 5, 7, and 21. Upon completion of the study, all pictures were randomly assigned a number, 1 through 125, using a computer-generated4, unsorted list. The scar scale dog photographs were arranged on white cork board and labeled as 0, 1, 2, 3, 4, or 5 to define the scar scale. The board represented a score of 0 (day 0), 1 (day 1), 2 (day 3), 3 (day 5), 4 (day 7), and spaced linearly until the day 21 photographs, which represented a scar scale score of 5. The treatment group photos were arranged in ascending numerical order according to their randomly assigned image number. Veterinary volunteers not involved in the surgery, patient care, laser therapy, or photograph acquisition were recruited to score the images (JW, EB, JB, RB, JG, LS). All scoring sessions were performed privately, in the same room, and during daylight hours.
All volunteers were instructed to score the images using whole integers from 0 to 5 based on cosmetic healing by considering incision oozing, bruising, crusting, inflammation, edema, granulation, epithelialization, contraction, and hair regrowth at the incision site. Each volunteer scored all the photographs once and did so in one sitting. Once all six evaluators had ranked the images, the author (JW) was given the treatment group assignments from the co-investigators (KG, AW) to send for statistical analysis. The tests for significance of sources of variation and inter-rater reliability were determined using covariance analysis performed with Statistical Analysis System's GLIMMIX program5. The clinical importance of statistically significant differences in treatments was assessed using confidence intervals (22). A P-value of < 0.05 was considered significant. Free-marginal kappa values were used to assess inter-rater agreement. Results This study was performed entirely at the Animal Health Center, College of Veterinary Medicine, Mississippi State University, from September 2010 to May 2012. Twelve dogs met our inclusion criteria during this timeframe and were designated as three scar scale example dogs, five non-laser dogs, and four laser therapy dogs. Signalment, body condition score, incision length, lesion location, use of fat graft and/or gel foam, suture material used, steroid administration, and neurologic status at the time of surgery did not differ between laser therapy and non-laser dogs. All the laser therapy dogs were brown, whereas the non-laser group contained three black and two brown dogs. All dogs enrolled in the study appeared calm and comfortable for photograph acquisition and laser therapy and were therefore able to complete the study. All dogs' neurologic status at the time of discharge remained static or improved; none declined. Most of the dogs received steroids either at the time of surgery or before referral (67%).
Some of the non-laser dogs (n = 4) and laser dogs (n = 2) received steroids. Steroid type, dosage, time of administration, and frequency varied within the study population. The first three dogs healed without incident, appeared uniformly cosmetic, and were selected to make the scar scale score. Scar scale score of the study dogs was significantly associated with the day of the image (p < 0.0001) and with whether laser therapy was used (p < 0.001), but not with reviewer (p = 0.9). The scar scale rankings were similar for all six veterinary reviewers, who consisted of a dermatologist, radiologist, laboratory animal specialist, general practitioner, and two surgeons (JG, EB, LS, JB, RB, JW). There were no differences between scar scale scores on days 0, 1, 3, or 5 between groups. There was a statistically significant improvement in scar scores on days 7 and 21 for laser therapy vs. non-laser dogs (p < 0.001). The mean scar score was significantly higher for the laser (95% CI = 3.21-4.12) than the non-laser (95% CI = 1.85-2.56) dogs at day 7. The mean scar score was significantly higher for the laser (95% CI = 4.52-5.03) than non-laser (95% CI = 3.25-4.21) dogs at day 21 (Figures 2A–C). Dogs on day 21 that received laser therapy had less variation in their score, with an average score of 4.78 ± 0.54 vs. the non-laser dogs' average of 3.73 ± 1.34 (Figure 3). FIGURE 2 Figure 2. Representative images of the two treatment groups at day 21. (A) A non-laser patient illustrating the continued presence of a scab over some of the incision's epithelium. (B) A non-laser patient with a wide scar area and one remaining pink area of granulation toward the right side of the incision. (C) A laser therapy patient showing a completely healed incision with contraction and hair regrowth around and on the incision. FIGURE 3 Figure 3.
Frequency histogram showing individual scar scale scores (n = 48) on day 21 for the low-intensity laser therapy (n = 3) and non-laser treatment (n = 5) groups from the six veterinary reviewers. Day 21 results were further compared for laser therapy and non-laser groups for eight patients using the six reviewers. One laser therapy dog did not return for the 21-day photograph. Dogs that received steroid treatment, regardless of laser therapy, had a clinically significant lower scar score (95% CI 3.41–4.25) vs. dogs that did not receive steroids (mean 5 ± 0.0) at day 21. The dogs that received steroids had a higher mean scar score if they also received laser therapy (95% CI = 4.30-5.32) compared with the steroid-administered dogs in the non-laser group (95% CI = 2.89-3.94) at day 21. The two dogs that received steroids and laser therapy had a median score of 5 and a mean score of 4.8 at day 21. Dogs that received steroids but no laser treatment scored 1–3 points lower on the scar scale at day 21. The average score by day 5 was a full point higher on the scar scale for the laser group and continued to be one point higher on average through day 21. The standard deviation (SD) ranged from 0.85 to 1.6 for all scores except the day 21 laser group, which had an SD of only 0.35, indicating strong agreement between the reviewers (Table 1). Medians were equally elevated for the laser-treated group starting at day 3 and continuing until the end of the study. Median scores for the laser group at days 0, 1, 3, 5, 7, and 21 were 1, 2, 2.5, 3, 4, and 5, respectively. The median scar scores for the non-laser group at days 0, 1, 3, 5, 7, and 21 were 1, 2, 2, 2, 2.5, and 4, respectively. Overall there was a strong predictive value for scar scale score with laser therapy treatment in this study (Figure 4). Spearman's correlation coefficient was statistically significant for scar score and day for the reviewers (rs = 0.80).
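Spearman's rank correlation used above can be computed directly from paired day/score data. A minimal sketch (not the authors' SAS code) that uses average ranks to handle tied scores; the example data are the laser group's median scores reported above, used purely for illustration:

```python
def spearman_rho(x, y):
    """Spearman's rank correlation: Pearson correlation of average ranks."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        i = 0
        while i < len(order):
            j = i
            # group tied values and assign them their average rank
            while j + 1 < len(order) and v[order[j + 1]] == v[order[i]]:
                j += 1
            avg = (i + j) / 2 + 1          # 1-based average rank
            for t in range(i, j + 1):
                r[order[t]] = avg
            i = j + 1
        return r

    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

days = [0, 1, 3, 5, 7, 21]
laser_medians = [1, 2, 2.5, 3, 4, 5]
rho = spearman_rho(days, laser_medians)    # perfectly monotone: rho = 1.0
```

The study's rs = 0.80 was computed over all individual reviewer scores; the medians here are perfectly monotone in day, so this toy example returns 1.0.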
Kappa values comparing inter-rater agreement are shown for each day (Table 2). TABLE 1 Table 1. The scar scores for the two treatment groups collected on each day. FIGURE 4 Figure 4. A constant interval graph showing plotted predicted values of scar scale scores for laser-treated and non-laser-treated patients. TABLE 2 Table 2. Free-marginal kappa for inter-rater agreement of scar scores. Discussion Laser therapy has been found to accelerate wound healing, possibly by stimulating oxidative phosphorylation and thereby reducing the inflammatory response and pain (6, 8–10). This study helps to further support PBM induced by laser therapy as a vital component of wound healing and rehabilitation by showing the improved outcome of laser therapy on the surgical incisions of dogs with IVDD that underwent a hemilaminectomy. Five different specialty veterinarians and a general practitioner were recruited and given the same instructions on ranking the overall cosmetic appearance by looking at incision oozing, bruising, crusting, inflammation, edema, granulation, epithelialization, contraction, and hair regrowth at the incision site. Other studies have also evaluated the ability to assess wound healing without using histologic evidence and have found successful modalities, such as a clinical scar scale using digital photography, that are indicated in ante-mortem studies (3, 18, 21). The use of digital photography as a valid means to evaluate the efficacy of wound healing using specific treatment modalities is of particular value because it allows for gross patient evaluation and does not necessitate histologic confirmation (3, 21). This method of evaluation has been more specifically explored using porcine burn scars, human skin grafts, and a more recent canine laser study (18, 23, 24). Such studies have confirmed the correlation between histologic characteristics of wound healing and visual assessment of clinical scar outcome.
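The free-marginal (Randolph) kappa reported in Table 2 compares observed pairwise rater agreement with the 1/k agreement expected by chance when raters are free to use any of the k categories. A minimal sketch, assuming every image is scored by the same number of raters; this is illustrative, not the authors' SAS implementation:

```python
from collections import Counter

def free_marginal_kappa(ratings, k):
    """Randolph's free-marginal multirater kappa.

    ratings: one list of categorical scores per subject (same rater count)
    k: number of rating categories (6 for a 0-5 scar scale)
    """
    n = len(ratings[0])                      # raters per subject
    agree = 0.0
    for subject in ratings:
        counts = Counter(subject)
        # proportion of agreeing rater pairs for this subject
        agree += sum(c * (c - 1) for c in counts.values()) / (n * (n - 1))
    p_obs = agree / len(ratings)
    p_chance = 1.0 / k                       # chance level with free marginals
    return (p_obs - p_chance) / (1 - p_chance)
```

For example, six raters all assigning the same score to every image yields kappa = 1.0, while a single image scored 0-5 by six raters (no two agree) yields -0.2.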
The information extrapolated from this study and the Gammel et al. (23) study shows that it is achievable to establish a reliable clinical scar scale using digital photography (23). This high level of correlation therefore indicates that this is a viable method for analyzing surgically created wounds treated with ancillary therapy such as PBM induced by laser therapy. We saw a large increase in inter-rater agreement after day 7, with the laser group at day 21 showing excellent correlation between reviewers (kappa 0.79). While the variance in scar scores improved with healing, the variance in the scores for the laser group was lower throughout the study, and the laser group had a higher numerical score on the scale throughout the study. This, coupled with the excellent kappa value on day 21, demonstrated that reference scars are a potentially reliable method to assess clinical healing from photographs. However, higher agreement was seen with increased familiarity with the scar scale score and repeated scoring of photographs (18). Wang et al. demonstrated an increase in correlation coefficients from 65% to over 80% simply by repeating ratings of the photographs. Perhaps we would have had higher agreement earlier in the results had reviewers scored the same images several times. Steroid administration for IVDD patients continues to be a bone of contention among veterinarians. In this prospective study the type, dosage, and length of administration of steroids were not controlled, so no conclusions could be drawn. It should also be noted that perioperative high-dose steroids are not thought to statistically affect wound healing, in contrast to chronic usage (25). Steroid usage over 10 days in humans can increase wound complication rates two- to five-fold, but this varies with comorbidities, dose, and surgery (25). We also avoided the laser beam directly over the hemilaminectomy site, due to unwarranted concerns of potential contraindication for laser therapy directly over the spinal cord.
This potential concern has been refuted in the literature, and laser therapy over the surgical site has been shown to improve neurologic outcome (4). Combined with our results, these two studies suggest increased surgical healing in IVDD canines using 8 J/cm2 daily for the first week after surgery. Our study used a higher dosage than previously reported for a case report of a chronic canine wound (5 J/cm2), for wound healing (5 J/cm2), and for open wounds (1 J/cm2) (12, 23, 26). It should also be noted that previous studies treated for 4 or 5 days, whereas this study treated for 7 consecutive days (4, 12). An additional study found no apparent benefit from PBM given 3 times a week over 32 days of treatment using 1 J/cm2 (26). This suggests that a higher dose and perhaps a more concentrated treatment schedule may be more successful for PBM. But since these papers were not performed in a parallel manner, it is unclear whether our success in speeding wound healing would be seen with an altered protocol. While every-other-day laser therapy, a higher or lower dose in J/cm2, or fewer treatment days might alter the findings, canine epidermal keratinocytes have shown detrimental effects at 10 J/cm2 (27). Since the range in the literature for wound healing is 1–40 J/cm2, we chose a middle-range, but still aggressive, dosage to increase the chance of seeing a benefit while avoiding tissue damage. Recently the World Association for Laser Therapy indicated that at least 5–7 J/cm2 is needed to induce cellular change, suggesting this to be the low end of therapeutic dosages (28). This is potentially confirmed by the two recent wound healing studies showing no apparent benefit with PBM using 1 or 5 J/cm2 in canines (23, 26). In our study, many of the patients were ready to go home before the full 7-day treatment was complete. Therefore, the success of laser therapy would need to be weighed against the cost to the client of remaining in hospital or returning for daily outpatient treatments.
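The fluence figures discussed above translate directly into probe contact times, since delivered energy is power multiplied by time. A minimal sketch, assuming the cluster probe's nominal output power density of 0.138 W/cm2 reported in the equipment footnote; the function name is ours:

```python
def irradiation_time_s(fluence_j_per_cm2, power_density_w_per_cm2):
    """Contact time (seconds) needed to deliver a target fluence
    at a given probe output power density (energy = power x time)."""
    return fluence_j_per_cm2 / power_density_w_per_cm2

# One 4 J/cm2 pass with the probe's nominal 0.138 W/cm2 output:
seconds_per_pass = irradiation_time_s(4.0, 0.138)   # roughly 29 s per spot
# Two passes per spot deliver the study's 8 J/cm2 total dose.
```

This also shows why total treatment time varied between patients: the per-spot time is fixed by the dose and probe output, so longer incisions with more spots simply take proportionally longer.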
Non-laser treatment dogs appeared to have a wider scar and more crusting still present, with no hair growth over the incision or bridging epithelium at day 21. Perhaps both groups would eventually look the same further out in time; this is unclear, since this study did not follow patients beyond 21 days. While this study focused on IVDD patients for a more consistent surgical scar, perhaps these findings could be extrapolated to PBM treatment for other canine surgical incisions or wounds. Similar wounds in the inflammatory phase of wound healing should theoretically respond similarly favorably as in this study. However, since these were uncomplicated, clean, surgical incisions and not open wounds, these conclusions cannot be drawn. It is also unclear whether feline patients would need an altered dosage due to their different skin vascularity and healing properties. A major limitation of this study is the small sample size, but because of the large differences between the groups, statistically significant values were still found, which were likely clinically significant. Due to the sample size, we could not draw conclusions on other interesting variables such as pain, steroid usage, neurologic function, or metabolic variables. This is the first veterinary study to use digital pictures and a scar scale score for surgical incisions. The reviewers were blinded to the treatment groups, and pictures were taken uniformly to prevent patient identification by excluding dapple coloring and previous back surgery and by cropping to the incision and skin. However, the photographs were colored images, and the lengths of the incisions and axial muscle were not uniform. So, while reviewers were blinded to the treatment group, they may have recognized the shape of an incision or the curve of a spine while scoring the randomly numbered images. Therefore, they may have noticed a healing trend in a patient's incision, but they remained blinded to the treatment group.
More complete healing, with minimal scarring of wounds, has been shown to correlate very strongly with overall tensile strength (3, 10, 18, 21, 23, 26). There are obvious incentives to a non-invasive wound healing scoring system. While a true test of tensile strength of wound healing between the two groups would have been more conclusive, the strong agreement between reviewers shows these animals did not need the added morbidity of serial biopsies. This study did not record the total time of PBM for each patient. The dosage and the incision-encompassing treatment area were predetermined, but the size of the incision and patient varied; therefore, total joules and treatment times varied. In future studies, this would be a potentially valuable piece of information to record and would help uniformity in PBM study reporting and comparison. Additional study limitations were the inability to draw conclusions about many known risk factors for abnormal wound healing and operative factors (i.e., details of scrub, core body temperature, surgery time). Conclusion The surgical incisions in these four dogs healed faster and more cosmetically with PBM induced by laser therapy using 8 J/cm2 daily for 7 days. The improved healing and cosmetic score could be seen beginning at day 7 and continued to be improved for 3 weeks after surgery. Authors Contributions JW, KG, and AW contributed to conception and design of the study. AW organized the database. JW, EB, JB, RB, JG, and LS analyzed the pictures and scored the patients. KG wrote the original grant application for this project. JW wrote the original draft of this manuscript. All authors contributed to manuscript revision, read, and approved the submitted version. The corresponding author takes primary responsibility for communication with the journal and editorial office during the submission process, throughout peer review, and during publication.
The corresponding author is also responsible for ensuring that the submission adheres to all journal requirements including, but not limited to, details of authorship, study ethics, and ethics approval, clinical trial registration documents and conflict of interest declaration. The corresponding author should also be available post-publication to respond to any queries or critiques. Conflict of Interest Statement The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. Acknowledgments The authors acknowledge the assistance of Dr. Dennis E. Rowe for performing the statistical analysis. Supported in part by the Office of Research and Graduate Studies, College of Veterinary Medicine, Mississippi State University. Footnotes 1. ^Canon PowerShot SD1200 IS Digital ELPH, 10.0 megapixel. Canon U.S.A. Inc. Lake Success, NY 11042, USA. 2. ^Vectra Genisys Transport Laser Model 2784. DJO, LLC Vista, CA 92083, USA. 3. ^9 Diode Cluster Applicator, 1040 mW total power; (5) 850 nm Laser diodes at 200 mW each, with a spot size of 0.188 cm2, and (4) 670 nm LED diodes at 10 mW each, with a spot size of 0.64 cm2 each. Total contact area 7.55 cm2 with a power density of 0.138 W/cm2. Hudson Aquatics, Angola IN 46703, USA. 4. ^Microsoft Office Excel 2007, Microsoft. Redmond, WA 98052, USA. 5. ^SAS Institute. 2008. SAS/STAT Online 9.2 Users Guide. SAS Institute, Cary, NC 27512, USA. Abbreviations PBM, Photobiomodulation; IVDD, Intervertebral Disc Disease. References 1. Bartels KE. Lasers in medicine and surgery. Vet Clin Small Anim. (2002) 32:495–515. doi: 10.1016/S0195-5616(02)00002-5 2. Bartels KE. Lasers in veterinary medicine: where we are and where are we going? In: Proceedings of the 2009 81st Western Veterinary Conference. Las Vegas, NV (2009). 3.
Schwoebel F, Barsig J, Wendel A, Hamacher J. Quantitative assessment of mouse skin transplant rejection using digital photography. Lab Anim. (2005) 39:209–14. doi: 10.1258/0023677053739792 4. Draper WE, Schubert TA, Clemmons RM, Miles SA. Low-level laser therapy reduces the time to ambulation in dogs after hemilaminectomy: a preliminary study. J Small Anim Pract. (2012) 53:465–9. doi: 10.1111/j.1748-5827.2012.01242.x 5. Millis DL, Francis D, Adamson C. Emerging modalities in veterinary rehabilitation. Vet Clin Small Anim. (2005) 35:1335–55. doi: 10.1016/j.cvsm.2005.08.007 6. Medrado AR, Pugliese LS, Reis SR, Andrade ZA. Influence of low level laser therapy on wound healing and its biological action upon myofibroblasts. Lasers Surg Med. (2003) 32:239–44. doi: 10.1002/lsm.10126 7. Swaim SF. Advances in wound healing in small animal practice: current status and lines of development. Vet Dermatol. (1997) 8:249–57. doi: 10.1111/j.1365-3164.1997.tb00271.x 8. Gal P, Mokry M, Vidinsky B, et al. Effect of equal daily doses achieved by different power densities of low-level laser therapy at 635 nm on open skin wound healing in normal and corticosteroid-treated rats. Lasers Med Sci. (2009) 24:539–47. doi: 10.1007/s10103-008-0604-9 9. Peavy GM. Lasers and laser-tissue interaction. Vet Clin Small Anim. (2002) 32:517–34. doi: 10.1016/S0195-5616(02)00003-7 10. Stadler I, Lanzafame RJ, Evans R, Narayan V, Dailey B, Buehner N, et al. 830-nm Irradiation increases the wound tensile strength in a diabetic murine model. Lasers Surg Med. (2001) 28:220–6. doi: 10.1002/lsm.1042 11.
Jann HW, Bartels K, Ritchey JW, Payton ME. Equine wound healing: influence of low level laser therapy on an equine metacarpal wound healing model. Photon Lasers Med. (2012) 1:117–22. doi: 10.1515/plm-2012-0004 12. Lucroy MD, Edwards BF, Madewell BR. Low-intensity laser light-induced closure of a chronic wound in a dog. Vet Surg. (1999) 28:292–5. doi: 10.1053/jvet.1999.0292 13. In de Braekt MM, van Alphen FA, Kuijpers-Jagtman AM, Maltha JC. Effect of low laser therapy on wound healing after palatal surgery in beagle dogs. Lasers Surg Med. (1991) 11:462–70. doi: 10.1002/lsm.1900110512 14. Cury V, Bossini PS, Fangel R, Crusca Jde S, Renno AC, Parizotto NA, et al. The effect of 660 nm and 780 nm laser irradiation on viability of random skin flap in rats. Photomed Laser Surg. (2009) 27:721–4. doi: 10.1089/pho.2008.2383 15. Kawalec JS, Hetherington VJ, Pfennigwerth TC, Dockery DS, Dolce M. Effect of a diode laser on wound healing by using diabetic and nondiabetic mice. J Foot Ankle Surg. (2004) 43:214–20. doi: 10.1053/j.jfas.2004.05.004 16. Ribeiro MA, Albuquerque RL, Barreto AL, Moreno de Oliveira VG, Santos TB, Freitas Dantas CD, et al. Morphological analysis of second-intention wound healing in rats submitted to 16 J/cm2 lambda 660-nm laser irradiation. Indian J Dent Res. (2009) 20:390–3. doi: 10.4103/0970-9290.57360 17. Chyczewski M, Holak P, Jalynski M, Kasprowicz A. Effect of laser biostimulation on the healing of cutaneous surgical wounds in pigs. Bull Vet Inst Pulawy. (2009) 53:135–8. 18. Wang XQ, Kravchuk O, Liu PY, Kempf M, Boogaard CV, Lau P, et al. The evaluation of a clinical scar scale for porcine burn scars. Burns (2009) 35:538–46.
doi: 10.1016/j.burns.2008.10.005 19. Ghoghawala SY, Mannis MJ, Murphy CJ, Rosenblatt MI, Isseroff RR. Economical LED based, real-time, in vivo imaging of murine corneal wound healing. Exp Eye Res. (2007) 84:1031–8. doi: 10.1016/j.exer.2007.01.021 20. Wilmink JM, van den Boom R, van Weeren PR, Barneveld A. The modified Meek technique as a novel method for skin grafting in horses: evaluation of acceptance, wound contraction and closure in chronic wounds. Equine Vet J. (2006) 38:324–29. doi: 10.2746/042516406777749290 21. Ribeiro MS, Silva DF, Maldonado EP, de Rossi W, Zezell DM. Effects of 1047-nm neodymium laser radiation on skin wound healing. J Clin Laser Med Surg. (2002) 20:37–40. doi: 10.1089/104454702753474995 22. Braitman LE. Confidence intervals assess both clinical significance and statistical significance. Ann Intern Med. (1991) 114:515–7. doi: 10.7326/0003-4819-114-6-515 23. Gammel JE, Biskup JJ, Drum MG, Newkirk K, Lux CN. Effects of low-level laser therapy on healing of surgically closed incisions and surgically created open wounds in dogs. Vet Surg. (2018) 47:499–506. doi: 10.1111/vsu.12795 24. Rennekampff HO, Fimmers R, Metelmann HR, Schumann H, Tenenhaus M. Reliability of photographic analysis of wound epithelialization assessed in human skin graft donor sites and epidermolysis bullosa wounds. Trials (2015) 16:235–42. doi: 10.1186/s13063-015-0742-x 25. Wang AS, Armstrong EJ, Armstrong AW. Corticosteroids and wound healing: clinical considerations in the perioperative period. Am J Surg. (2013) 206:410–7.
doi: 10.1016/j.amjsurg.2012.11.018 PubMed Abstract | CrossRef Full Text | Google Scholar 26. Kurach LM, Stanley BJ, Gazzola KM, Fritz MC, Steficek BA, Hauptman JG, et al. The effect of low-level laser therapy on the healing of open wounds in dogs. Vet Surg. (2015) 44:988–96. doi: 10.1111/vsu.12407 PubMed Abstract | CrossRef Full Text | Google Scholar 27. Gagnon D, Gibson TW, Singh A, zur Linden AR, Kazienko JE, LaMarre J. An in vitro method to test the safety and efficacy of low-level laser therapy (LLLT) in the healing of canine skin model. BMC Vet Res. (2016) 12:73–83. doi: 10.1186/s12917-016-0689-5 PubMed Abstract | CrossRef Full Text | Google Scholar 28. Bjordal JM. Low level laser therapy (LLLT) and world association for laser therapy (WALT) dosage recommendations. Photomed Laser Surg. (2012) 30:61–2. doi: 10.1089/pho.2012.9893 PubMed Abstract | CrossRef Full Text | Google Scholar Keywords: laser, wound healing, canine, scar scale, IVDD, incision, photobiomodulation Citation: Wardlaw JL, Gazzola KM, Wagoner A, Brinkman E, Burt J, Butler R, Gunter JM and Senter LH (2019) Laser Therapy for Incision Healing in 9 Dogs. Front. Vet. Sci. 5:349. doi: 10.3389/fvets.2018.00349 Received: 01 September 2018; Accepted: 31 December 2018; Published: 29 January 2019. Edited by: David Levine, University of Tennessee at Chattanooga, United States Reviewed by: Narda Gail Robinson, CuraCore Integrative Medicine and Education Center, United States Maria Fahie, Western University of Health Sciences, United States Copyright © 2019 Wardlaw, Gazzola, Wagoner, Brinkman, Burt, Butler, Gunter and Senter. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. 
No use, distribution or reproduction is permitted which does not comply with these terms. *Correspondence: Jennifer L. Wardlaw, wardlaw@gatewayvetsurgery.com
work_njb3mgl7yfafplmx2os5fsz63i ---- Prevalence and risk factors of epiretinal membranes: a systematic review and meta-analysis of population-based studies
DOI: 10.1136/bmjopen-2016-014644 Corpus ID: 10283485
W. Xiao, X. Chen, William Yan, Z. Zhu and Mingguang He. BMJ Open (2017), volume 7.
Objective This study was to aggregate the prevalence and risks of epiretinal membranes (ERMs) and determine the possible causes of the varied estimates. Design Systematic review and meta-analysis. Data sources The search strategy was designed prospectively. We searched PubMed, Embase and Web of Science databases from inception to July 2016. Reference lists of the included literatures were reviewed as well.
Study selection Surveys published in English language from any population were included…
work_njonjnfp4zgozn4punqxkimwc4 ---- ORIGINAL ARTICLE T. T. Carpenter · A. S. H. Kent A new method of quantifying endometriosis using digital photography Received: 19 December 2004 / Accepted: 18 February 2005 / Published online: 13 May 2005 © Springer-Verlag Berlin / Heidelberg 2005

Abstract The revised American Fertility Society scoring system for quantifying endometriosis is a relatively insensitive tool when assessing peritoneal endometriosis. We describe a new technique that can be used to quantify endometriosis which uses digital photography and a specifically designed computer analysis package to calculate lesion surface area. Using this we were able to demonstrate good intra-observer reproducibility, although inter-observer variability was relatively poor.

Keywords Endometriosis · Quantification

Introduction

Despite the fact that the revised American Fertility Society scoring system (rAFS) [1–3] is the standard technique for classifying endometriosis, it is rather limited in its discriminatory powers when dealing with peritoneal disease. By virtue of the fact that it was developed as a scoring system aimed to correlate with fertility, points are allocated heavily for disease affecting the fallopian tubes and ovaries, with very few points being available for peritoneal disease.
In fact, for peritoneal disease alone the highest possible score is six, which places it in the mild category. Thus if one is specifically interested in change in peritoneal disease the rAFS is a rather blunt tool. In view of this, we undertook a pilot study to assess the feasibility and reproducibility of using digital imaging to assess the surface area of specific endometriotic lesions.

Method

Subjects

This was a three-centre study with multicentre regional ethics committee approval. Six patients were recruited (two from each site). Inclusion criteria were patients who were due to undergo a laparoscopy for suspected or known endometriosis, were over 18 years old, had all gynaecological organs present and had given informed consent. There was no limitation on stage of disease.

Image capture

All patients underwent laparoscopy in the proliferative phase of the cycle. Laparoscopic entry was carried out in the usual way as per the practice of the gynaecologist, with particular care being taken when the uterus was instrumented not to be too vigorous with bimanual examination. On entry, the abdominopelvic cavity was inspected in the usual way, at which time an endometriotic "index" lesion was selected. This was a lesion that was anatomically relatively easy to photograph with no/minimal manipulation needed, next to which could be placed a needle. The operating camera system was then detached from the laparoscope and replaced by a Nikon COOLPIX 4500 digital camera with a special adaptor to allow attachment of the camera to the end of the laparoscope. The camera had 4.0 megapixel resolution.

Camera set-up

A straight 13 mm surgical needle (with 2.0 vicryl attached) was introduced via a second port and placed as close to the lesion as possible in the same plane. The light setting on the camera was set to incandescent with a light intensity of zero (range −3 to +3).

T. T. Carpenter · A. S. H. Kent, Dept. of Gynaecology, Royal Surrey County Hospital, Egerton Road, Guildford, GU2 7XX, UK. E-mail: t.carpenter@doctors.org.uk. T. T. Carpenter, 7 Pointout Road, Southampton, SO16 7DL, UK. Gynecol Surg (2005) 2: 119–125. DOI 10.1007/s10397-005-0095-7

Using auto focus, supplemented by manual focus where necessary, the lesion was photographed close-up, ensuring that the whole lesion and needle were in view. The laparoscope was then pulled back, the camera refocused, and a wide-angle picture taken to allow the location of the lesion to be demonstrated. The laparoscope and needle were then removed and the camera system detached. Following this, the whole process was then repeated to obtain a second set of images of the same lesion. The remainder of the laparoscopy was completed by the laser ablation of all visible endometriosis and closure of the abdominal port sites was by the usual practice of the operating gynaecologist. Images captured on the digital flash card were then downloaded onto the hard drive of a Compaq Evo computer and copies of each pair (close-up and wide angle) of images were stored on individual compact discs.

Image analysis

The images were analysed independently by two gynaecologists using a specifically-prepared software package produced by Virtualscopics, Rochester, NY, USA. The 12 individual close-up images were presented in random order for each assessor to quantify. The surface area was calculated by first defining the needle for scale. The individual components of each lesion (red, black and white, as defined by the rAFS definition [1]) were then circumscribed by the investigators using any combination of the various functions. Functions available were:

Filters: Red, Green, Blue, Black, White

Delineators:
Live wire mode: Allows you to optimise the path drawn between successively user-defined points.
Shrink-wrap mode: Allows you to define a structure by roughly tracing the outside perimeter of a well-defined structure.
Region growth mode: Allows you to identify, with one mouse click, an entire well-defined structure.
3-D region growth mode: Similar to the region growth mode, but the growth will proceed in three dimensions.
Geometrically constrained region growth (GEORG) mode: Allows you to define the shape of the geometric model. The model is used to smooth region boundaries and limit growth outside a structure of interest.
3-D GEORG mode: Operates on the same principle as GEORG, with the additional functionality of growth proceeding in three dimensions.
Add mode: Allows you to use free hand tracing to modify a currently finalized (red) contour by adding a new area.
Adjust mode: Allows you to modify the active contour. Each time the left mouse button is clicked, the contour is forced to pass through the clicked point.
Continuous trace mode: Allows you to perform free hand tracing of structure boundaries.
Erase mode: Allows you to modify the currently finalized contour by using free hand tracing to delete the unwanted portion of the region.
Polygon mode: Allows you to manually trace structure boundaries by connecting points that the user made in the Image window. Straight lines are used to connect points defined by successive mouse clicks.
Rectangle mode: Allows you to define a rectangle.
Select mode: Allows you to convert a finalized (red) contour to an active (blue) contour.

When the investigator was happy with the defined area, the image was finalised and the computer calculated the surface area using the needle as a reference.

Statistical analysis

Intra- and inter-observer reproducibility was assessed by variance, coefficient of variation, and subjectively using plots.

Results

The total lesion area and the breakdown of red, black and white components as assessed by the two assessors are shown in Table 1. The range in sizes of the index lesions selected for assessment is wide (0.5–69 mm²).
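The needle-based scaling that the package performs reduces to simple arithmetic: the 13 mm needle visible in each image gives a linear pixels-to-millimetres factor, and area scales with the square of that factor. The sketch below illustrates the idea only; the pixel values are hypothetical, and the Virtualscopics package itself is not reproduced here.

```python
# Sketch of needle-referenced scaling: the 13 mm needle in the image gives a
# pixels-to-millimetres conversion, which turns a circumscribed region's pixel
# count into a surface area in mm^2. Pixel values are hypothetical.

NEEDLE_LENGTH_MM = 13.0  # straight surgical needle used as the in-image reference

def lesion_area_mm2(needle_length_px: float, region_pixel_count: int) -> float:
    """Convert a traced region's pixel count to mm^2 using the needle scale."""
    mm_per_px = NEEDLE_LENGTH_MM / needle_length_px   # linear scale factor
    return region_pixel_count * mm_per_px ** 2        # area scales quadratically

# e.g. if the needle spans 260 px and a traced red area covers 5,200 px:
area = lesion_area_mm2(260, 5200)
print(round(area, 2))  # 13.0 (mm^2)
```

Because the needle must lie in the same plane as the lesion for this linear factor to hold, the protocol's insistence on placing it "as close to the lesion as possible in the same plane" is doing real work here.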
The index lesions for three out of the six subjects contained only red tissue and had no black or white component. These were also the three smallest lesions. The remaining three lesions had all three component areas, with the largest component being white scar tissue. Plots of the lesion areas for the duplicate assessments made on each subject are presented by area type in Figs. 1, 2, 3 and 4. The within- and between-subject variance components of the index lesion assessments are shown in Table 2 and the within-subject coefficients of variation (SD/mean) are shown in Table 3.

Discussion

The ranges in lesion size and lesion components in these patients are quite wide, with some small lesions having only red components and other larger lesions being dominated by white areas. To assess the reproducibility of a test there is no standard single "test" that can be applied. Instead one must use a combination of quantitative and qualitative assessments. In this experiment we used the coefficient of variation (CV) to quantitatively assess variability; however, this should be interpreted with great caution as the number of assessments is small and as such will be markedly influenced by any outlying values. As a general rule, a CV of 100 is used as the cut-off for an acceptable lack of variation, but the figure is obviously a continuum, with lower figures indicating less variability. The CVs obtained in this study for both assessors show good reproducibility for within-subject analysis, with all figures being below 100.
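The CV used here is simply the standard deviation divided by the mean, expressed as a percentage. The paper does not spell out how the duplicate pairs were pooled, so the sketch below just averages per-subject CVs; the input pairs are assessor 1's total areas for subjects 1 to 3, taken from Table 1.

```python
# Within-subject coefficient of variation (CV = SD / mean) from duplicate
# assessments (images A and B) of the same lesion. The pooling shown here
# (a plain average of per-subject CVs) is an assumption for illustration,
# not necessarily the authors' exact method.
from statistics import mean, stdev

def cv_percent(a: float, b: float) -> float:
    """CV of a duplicate pair, as a percentage of the pair mean."""
    return 100.0 * stdev([a, b]) / mean([a, b])

# Assessor 1 total areas (mm^2) for subjects 1-3, images A/B, from Table 1
pairs = [(0.54, 0.53), (2.01, 2.18), (68.94, 66.52)]
per_subject = [cv_percent(a, b) for a, b in pairs]
within_subject_cv = mean(per_subject)
print(round(within_subject_cv, 1))  # 3.2
```

Note how insensitive the per-pair CV is to absolute lesion size: the largest lesion (subject 3) contributes a CV of about 2.5% despite an absolute disagreement of over 2 mm².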
Variability in all cases was less for assessor one than for assessor two, with the within-subject variation contributing between only 0.7 and 3.6% of the total variance of assessor 1 compared to a within-subject variance for assessor 2 of between 6.7 and 34%. However, it is important to note that the percentage variance figures given in Table 2 are percentages, so the figures are proportions. Therefore, if the between-subject variance component is high, as in the case of assessor 1, then proportionately, and thus as a percentage, the within-subject variance will be low. This can therefore give an artificial impression of a smaller within-subject variance than is actually the case. Looking at the true figures for variance, it is still clear that assessor 1 gave consistently less within-subject variability than assessor 2, although to a lesser extent than is apparent from the percentage values.

Table 1 Lesion surface areas (mm²) as assessed by each assessor (assessor 1 / assessor 2)
Subject, Image | Total | Red | Black | White | Subject mean | Centre mean
1 A | 0.54 / 0.48 | 0.54 / 0.48 | 0 / 0 | 0 / 0 | 0.54 / 0.55 | 1.32 / 3.04
1 B | 0.53 / 0.62 | 0.53 / 0.62 | 0 / 0 | 0 / 0 | |
2 A | 2.01 / 3.74 | 2.01 / 3.74 | 0 / 0 | 0 / 0 | 2.10 / 5.53 |
2 B | 2.18 / 7.32 | 2.18 / 7.32 | 0 / 0 | 0 / 0 | |
3 A | 68.94 / 15.57 | 25.72 / 9.6 | 9.03 / 5.97 | 34.19 / 0 | 67.73 / 17.34 | 45.63 / 21.85
3 B | 66.52 / 19.1 | 26.11 / 10.68 | 10.21 / 8.42 | 30.2 / 0 | |
4 A | 19.13 / 17.94 | 1.75 / 0.91 | 0.25 / 0.6 | 17.13 / 16.43 | 23.54 / 26.37 |
4 B | 27.94 / 34.79 | 2.29 / 1.47 | 0.65 / 1.28 | 25 / 32.04 | |
5 A | 14.38 / 11.4 | 14.38 / 7.3 | 0 / 0 | 0 / 4.1 | 14.29 / 11.63 | 22.29 / 19.42
5 B | 14.19 / 11.85 | 14.19 / 8.91 | 0 / 0 | 0 / 2.94 | |
6 A | 33.09 / 35.43 | 12.5 / 8.03 | 1.78 / 1.19 | 18.81 / 26.21 | 30.29 / 27.22 |
6 B | 27.49 / 19.01 | 9.68 / 6.65 | 1.56 / 1.55 | 16.25 / 10.81 | |

Fig. 1 Plots of total lesion surface area for each pair of lesions and for each assessor

Upon reviewing the techniques used by both assessors, assessor 1 used significantly more manual tracing of the regions via the live wire mode than assessor 2, who predominantly used the more automated functions of regional growth and geometrically constrained regional growth modes. This would imply that, although more time-consuming, semi-automated manual drawing of the lesions is a more reproducible technique. In both cases the most reproducible component of analysis was the red area. This is encouraging, as red areas are considered most active [4], and thus if one were to test any new treatment for active endometriosis, one would expect these areas to respond first and possibly to a greater extent. The total CVs for the two assessors were clearly far less reproducible. As a function of both between-subject variance and within-subject variance, this would be expected due to the wide ranges in lesion sizes and compositions. When assessing the efficacy of any treatment (or indeed a simple change in disease) one is interested in the changes in individual lesions or components of individual lesions, not the changes in total areas of lesions in a combined set of patients. As such this variation is unimportant. As explained earlier, assessment of reproducibility is enhanced by the combination of both quantitative and qualitative methods.

Fig. 2 Plots of red lesion surface area for each pair of lesions and for each assessor
Fig. 3 Plots of black lesion surface area for each pair of lesions and for each assessor

Thus the plots of lesion size are equally important when drawing conclusions on
1, 2, 3 and 4, the intra-observer reproducibility of the red and black areas appears very good in both observers. Note, however, that assessor 1 found no black areas in three of the subjects and assessor 2 found no black areas in two subjects, thereby making the plots of the black areas for the six subjects look better than perhaps is truly the case. Red areas, however, were present in all subjects, and the reproducibility is good throughout. The total lesion areas and white lesion areas do seem to show greater variability, with the greatest variability being apparent in the larger lesions. The re- duced reproducibilities of these areas compared to the red and black areas are probably related to the fact that the red and black areas have more obvious borders and thus confines. We are looking at lesions on a back- ground of peritoneum which itself has a white/greyish appearance. Identifying a border between a white area of endometriosis and normal peritoneum is therefore considerably harder, and hence more prone to error, than defining a border between a red or black area and Fig. 
4 Plots of white lesion surface area for each pair of lesions and for each assessor T a b le 2 V a ri a n ce o f le si o n su rf a ce a re a s fo r ea ch a ss es so r L es io n a re a B et w ee n -s u b je ct v a ri a n ce co m p o n en t W it h in -s u b je ct v a ri a n ce co m p o n en t T o ta l v a ri a n ce P ro p o rt io n o f to ta l v a ri - a n ce fr o m b et w ee n -s u b je ct v a ri a n ce (% ) P ro p o rt io n o f to ta l v a ri - a n ce fr o m w it h in -s u b je ct v a ri a n ce (% ) A ss es so r 1 A ss es so r 2 A ss es so r 1 A ss es so r 2 A ss es so r 1 A ss es so r 2 A ss es so r 1 A ss es so r 2 A ss es so r 1 A ss es so r 2 T o ta l 6 0 9 .5 8 9 4 .6 9 .5 7 4 8 .3 6 1 9 .1 5 1 4 2 .9 9 8 .5 6 6 2 .5 3 4 R ed 9 6 .8 2 1 4 .1 7 0 .7 1 1 .5 7 9 7 .5 3 1 5 .7 4 9 9 .3 9 0 0 .7 1 0 B la ck 1 4 .4 5 7 .6 2 0 .1 3 0 .5 5 1 4 .5 8 8 .1 7 9 9 .1 9 3 .3 0 .9 6 .7 W h it e 1 8 6 .9 5 9 7 .0 3 7 .0 3 4 0 .1 8 1 9 3 .9 8 1 3 7 .2 1 9 6 .4 7 0 .7 3 .6 2 9 .3 123 normal peritoneum. The greater variability seen in the larger lesions is most likely to be as a consequence of size. The potential for error is going to increase with lesion size since the border to be defined is longer. In addition, we show actual lesion size in the plots rather than relative differences between plots. Thus, a 20% difference in actual surface area between plots in a small lesion will be markedly smaller than a 20% difference in a larger lesion. In reviewing plots of the two assessments, the larger lesion will have a much steeper slope between the two plots than the smaller lesion, despite the relative difference in the two assessments being the same. Assessment of inter-observer variability is best made by viewing Figs. 1, 2, 3 and 4. The more horizontal the line, the lower the intra observer variability, and the closer each pair of lines lie to each other, the lower the inter observer variability. 
From all plots it is clear that the reproducibility of analysis is significantly worse between assessors. The differences between the assessors when assessing the black areas does not seem great; however, as previously mentioned this is because no black areas were present in two of the subjects (as as- sessed by both assessors), and with the exception of subject 3 the black areas in the other subjects were very small. The lesion from subject 3 showed wide variation in all areas of assessment between the two assessors. On review, this lesion is a very complex lesion, with a potentially large chance of error. There was no consistent difference between the assessors, although it is interesting to note that whilst the red area was most reproducible in within-observer analysis, it was in fact the area that showed most vari- ation between observers. Thus it would appear that, although borders of these areas are easier than other areas to elucidate, the subjective decision as to whether an area is red, black or white still remains to be made by the assessor, and this appears to show more variability. When assessing reproducibility one must be aware of the components contributing to the variability. The ta- bles and figures concentrate on the variability within and between the two assessors. In fact the pairs of lesions analysed in each case are not the same image of the lesion but two different images of the same lesion taken at different times (albeit the same operation). Thus the within-subject variation is not only a function of the variability of the assessor but also the variability of the image taken. Whilst efforts have been made to min- imise the variability between the images, inevitably the photographs will not be taken at exactly the same angle, the needle will not be in exactly the same position for each photograph, and other variables will be slightly different between them too. 
It is important to be aware of these differences, as they are likely to be minimised in this study by the fact that the images were captured during the same operation a short duration apart. In studies assessing the efficacy of a treatment over time, the images will be captured during separate operations some time apart, almost certainly increasing the variability. The aim of developing new analysis techniques is to facilitate the detection of clinically-significant effects of a treatment on a disease, which in this case means that we would like to detect a clinically-significant difference in endometriotic lesions. In endometriosis, whilst one can make assumptions as to what would be a clinically-significant reduction in pain, for example, because of the lack of correlation of symptoms with disease it is not possible to logically select a value for clinically-significant change (with the exception of total disease eradication) in lesion surface area. This technique is therefore most useful as a research tool. For ethical and economic reasons it is not possible to run long, large-scale trials of new treatments without some indication of the efficacy to begin with. Research tools such as this will allow investigators to detect changes, which may be relatively small, over a short period of time, which may then be used to justify a larger, more pragmatic study of a particular treatment. What is important, however, is to be aware of the ability, or limitations, of a test used to detect a difference. A test that has a relatively high variability is going to be unable to detect small differences between two groups because the difference will be "masked" by the background intrinsic variability. From the results obtained using this technique, assessor 1 should be able to detect a within-subject variance of more than 3.6%, and assessor 2 a within-subject variance of more than 34%. These figures should be kept in mind when interpreting the results.
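The paper does not show the variance calculation itself. A standard one-way random-effects decomposition of the duplicate (A/B) assessments is one plausible route, and, applied to assessor 1's total areas from Table 1, it reproduces the "Total" row of Table 2 (within-subject 9.57, between-subject 609.58). The following is a sketch under that assumption, not the authors' actual analysis package:

```python
# One-way random-effects variance decomposition for duplicate (A/B)
# assessments of each subject's lesion: total variance is split into
# between-subject and within-subject components.
# Input values are assessor 1's total areas (mm^2) from Table 1.
from statistics import mean, variance

pairs = {1: (0.54, 0.53), 2: (2.01, 2.18), 3: (68.94, 66.52),
         4: (19.13, 27.94), 5: (14.38, 14.19), 6: (33.09, 27.49)}

k = 2  # replicate images per subject
within_ms = mean(variance(p) for p in pairs.values())         # MS_within
between_ms = k * variance([mean(p) for p in pairs.values()])  # MS_between
sigma2_within = within_ms                       # within-subject component
sigma2_between = (between_ms - within_ms) / k   # between-subject component

print(round(sigma2_within, 2), round(sigma2_between, 2))  # 9.57 609.58
```

The huge between-subject component relative to the within-subject one is exactly why the "% of total variance" columns in Table 2 can make the within-subject variability look deceptively small, as discussed above.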
Because of the relatively small number of lesions used in this study and the differences between each pair of lesions examined, it was not possible to assess whether there was a "learning curve" with this technique. One criticism of this technique is that it only measures the diseased surface and takes no account of depth of invasion and volume of disease. This would be particularly applicable to patients with relatively advanced disease or those in areas such as the uterosacral ligaments, which often have little visible disease at the surface but have deep deposits within. At present, the most likely route to quantifying this sort of disease would be via other means of imaging, such as magnetic resonance imaging. While work has been undertaken in this area, a reproducible technique has not been found to date [5].

Table 3 Coefficients of variation for each assessor (assessor 1 / assessor 2)
Lesion area | Overall mean | CV (total) | CV (within-subject)
Total | 23.08 / 14.77 | 107.8 / 80.93 | 13.4 / 47.1
Red | 9.32 / 5.48 | 106.0 / 72.4 | 9.0 / 22.9
Black | 1.96 / 1.58 | 194.8 / 180.9 | 18.4 / 46.9
White | 11.80 / 7.71 | 118.0 / 151.9 | 22.5 / 82.2

Conclusions

This technique demonstrates acceptable intra-observer variability for both assessors; however, there is significant operator dependence for reproducibility. The inter-observer reproducibility is relatively poor. This technique allows us to estimate an observer's variability, thereby highlighting what might be considered a clinically-significant change when power calculations are performed for future studies.

Acknowledgments We would like to acknowledge Pfizer UK for provision of the digital equipment and statistical analysis. We would also like to acknowledge Mr. S. Ewen and Mr. A. Pooley who undertook the digital photography at two of the sites.

References

1. American Society for Reproductive Medicine (1997) Revised American Society for Reproductive Medicine classification of endometriosis.
Fertil Steril 67(5):817–821
2. American Fertility Society (1985) Revised American Fertility Society classification of endometriosis. Fertil Steril 43(3):351–352
3. American Fertility Society (1979) American Fertility Society for Classification of endometriosis. Fertil Steril 32(6):633–634
4. Khan KN et al (2004) Higher activity by opaque endometriotic lesions than nonopaque lesions. Acta Obstet Gynecol Scand 83(4):375–382
5. Kinkel K et al (1999) Magnetic resonance imaging characteristics of deep endometriosis. Hum Reprod 14(4):1080–1086
work_nkbhwpxwkjcldbf7viskuxap6a ---- Archaeology International News Obituaries Barney Harris1,* How to cite: Harris, B. 'Obituaries'. Archaeology International, 2020, 23 (1), pp. 26–33 • DOI: https://doi.org/10.14324/111.444.ai.2020.02 Published: 30 December 2020 Copyright: © 2020, Barney Harris. This is an open-access article distributed under the terms of the Creative Commons Attribution Licence (CC-BY) 4.0 https://creativecommons.org/licenses/by/4.0/, which permits unrestricted use, distribution and reproduction in any medium, provided the original author and source are credited • DOI: https://doi.org/10.14324/111.444.ai.2020.02 Open access: Archaeology International is a peer-reviewed open-access journal. *Correspondence: barnabas.harris.14@ucl.ac.uk 1UCL Institute of Archaeology, UK
Obituaries
Barney Harris

Several distinguished archaeologists who had close links to the Institute have died during the past year. Brief obituaries are given here and reference made to some of the obituaries available elsewhere.

Colin McEwan, 1951–2020
Colin McEwan (Figure 1) was Curator of Latin American Collections (1993–2004) and Head of the Americas Section (2005–12) at the British Museum, ending his career as Director of Pre-Columbian Studies at Dumbarton Oaks in Washington, DC (2012–18). Colin was born in Falkirk, Scotland (11 August 1951) and raised in New Zealand. Back in Scotland, he was awarded a BSc degree in Geography at Aberdeen University (1973), then worked as a 'rough-neck' in the North Sea oil rigs to fund his graduate studies at Cambridge University, obtaining a Certificate in Prehistoric Archaeology (1975). While at Cambridge he participated in Eric Higgs' site-catchment surveys in Asturias-Lérida and in the Somerset Levels Project led by John Coles (1974). In 1975 he travelled to South America to join Earle Saxon's (University of Durham) excavation at the famous Paleo-American 'Mylodon Cave' site. He then travelled through the Andes, reaching Ecuador, where he participated in the Buena Vista Valley excavations led by Liz Carmichael (British Museum) and Warwick Bray (Institute of Archaeology), completing the tour in Yucatán (1976). Later Warwick recruited Colin for the 1977 Anglo-Colombian Expedition along the Caquetá River (Amazonian forest).

Figure 1 McEwan, a keen mountaineer, in his native Scottish Highlands. (Image credit: Norma Rosso)
In 1978 he worked at Isla de La Plata (Ecuador), excavating an Inca ritual offering. In 1979 Colin joined the graduate programme at the University of Illinois. There Zuidema and Lathrap became the most influential figures in Colin's scholarly career. His PhD research (1983–1990) at the Agua Blanca site, Ecuador, focused on political-religious power as expressed by its famous U-shaped stone-sculptured seats, architecture and landscape, with attention to human cognition and agency. Colin also steered the Agua Blanca community in the creation of a cultural centre-cum-museum, where the locals (not state authorities) successfully claimed ownership and displayed their heritage – a first for South America. After working at the Art Institute of Chicago (1990–2), he began curatorial work at the British Museum, bringing Pre-Columbian Latin America to a worldwide audience through major exhibitions, accompanied by conferences and publications. First was the Mexican Gallery (1994), followed by exhibits on Pre-Columbian Gold (1996), Patagonia (1997), Unknown Amazon (2001), Pre-Columbian Caribbean (2008) and Moctezuma-Aztec Ruler (2009). Colin forged strong research links between the British Museum and the Institute's faculty. He was instrumental in Oliver joining Bray (1994) to double its Americanist presence. He collaborated in various projects with Graham, Baquedano and Sillar and was a frequent fixture at the Institute's South American Seminars. In 2006 he was named Honorary Visiting Professor of the Institute, of which he was extremely proud. Colin unfailingly and energetically supported the Institute's students by opening the British Museum's collections for their dissertation research and providing internship opportunities, an ethos he took to Dumbarton Oaks. Colin passed away in Tampa, Florida (28 March 2020), after a valiant battle against aggressive leukaemia.
His presence at the Institute is sorely missed, but his legacy will live on.

Dr José R. Oliver
Reader in Latin American Archaeology, Institute of Archaeology, UCL

References
Anonymous. 2020. 'Colin McEwan: Un estudioso de las culturas originarias'. Semblanzas, Museo Antropológico y de Arte Contemporáneo, 2–5. Quito: Ministerio de Cultura y Patrimonio. Accessed 5 October 2020. http://www.portalcultural.culturaypatrimonio.gob.ec/redmuseos/semblanzas.pdf.
Boomhower, Daniel. 2020. 'In Memoriam: Colin McEwan'. Accessed 5 October 2020. https://www.doaks.org/newsletter/in-memoriam-colin-mcewan.
Borrero, Luis. 2020. 'Obituario – Colin McEwan', Intersecciones en Antropología 21(1): 115–16. https://doi.org/10.37176/iea.21.1.2020.544.
López Luján, Leonardo. 2020. 'Semblanza – Colin McEwan', Arqueología Mexicana 163: 12.
Oliver, José R. 2021. 'Colin McEwan (1951–2020) – In Memoriam of the Complete Americanist from Scotland'. Andean Past.

Stuart J.A. Laidlaw, 1956–2019
I had the very great pleasure of travelling to Jerusalem and Jericho with Stuart as part of preparations for the exhibition A Future for the Past (2007) featuring the Petrie Palestinian collection. Our brief was to undertake interviews with those involved in the heritage and archaeology of the region that reflected upon Sir Flinders Petrie's complex legacy. As part of the research trip we made a special pilgrimage to Petrie's grave in the Protestant Cemetery on Mount Zion (Figure 2).
Once there Stuart asked me to take his photo while, true to form, he mocked my rookie camera skills, which were indeed spectacularly inferior to his own. Stuart proved an utterly fabulous travelling companion with his own unique brand of humour, irony and relentless yet affectionate teasing that my colleagues, former students, family and friends would certainly recognise. It was, however, our visit to Jericho that was obviously of most immense interest to Stuart. Using his own 'photographic memory' of Jericho – gleaned from the IoA archives – he recognised all the images taken of Kathleen Kenyon's excavations from the 1950s onwards. For over two hours in the hot sun he was engrossed in taking as many photographs as possible (sometimes at great risk in terms of the precarious angles needed to get the right shots) and he obviously gained great pleasure from following in the footsteps of his mentor Peter Dorrell and Kenyon herself. Stuart not only displayed his ongoing deep commitment to the Institute of Archaeology, his depth of knowledge and enthusiasm, but also his pride at seeing himself as part of this genealogy as the 'Institute's Photographer'. Indeed, Stuart was a greatly esteemed colleague at the Institute of Archaeology for 40 years. Joining the Institute in 1979, Stuart worked closely with Peter Dorrell and, following the latter's retirement, began his teaching of archaeological photography and illustration. As Lecturer in Archaeological Photography, he was an exemplary teacher of practice and taught many cohorts of undergraduate and graduate students in archaeological photography, latterly including digital imaging techniques. Stuart was always a clear champion of students and strove to make their experience at the Institute an enjoyable, as well as a productive, one.
It was no surprise when he received a UCLU Student Choice award for Outstanding Support for Teaching in 2013–14, awards which celebrate outstanding teaching and recognise those that support students in their learning at UCL. These awards are completely student-led – students determine the award categories, set the criteria, nominate the potential awardees and decide who wins the awards. Stuart's work at the Institute took him to a variety of locations including Libya, Greece, Belize and Russia. He ran short courses on Digital Photography for the International Association of Photographers for 25 years in London, as well as around the world, including in Sri Lanka, Tenerife and at UCL-Qatar. His research interests included the evolution of new techniques in archaeological photography and he was always ready to experiment with new technology. Within the IoA, Stuart worked closely in the photo lab with Ken Walton for many years. He was central to the current refurbishment of the photographic facilities at the Institute and won (with Sandra Bond) a UCL Sustainability Gold Award for the photography studio. While Stuart participated in, and supported, the research of other Institute staff, including recent work on the making of the Terracotta Army, his major focus was the Ivories from Nimrud. He worked with Georgina Herrmann and Helena Coffey on this series of publications, which provide a seminal photographic documentation of an Iraqi heritage that has now suffered so much destruction. He was working on the final, eighth volume of the series at the time of his death.

Figure 2 Stuart Laidlaw standing by Petrie's grave, Protestant Cemetery, Mount Zion, Jerusalem. (Image credit: Beverley Butler 2006)
While I treasure my memories of our alternative pilgrimage to Jerusalem and Jericho, in sharing their memories of Stuart colleagues have stated how they will 'miss his stories and jokes and choice of music', and how there now exists 'a very large, Stuart-sized hole on the 4th floor of the Institute building'. These sentiments and Stuart's larger than life character were captured too in a very apt reading from his funeral service, reproduced here:

'A Successful Man' by Bessie Anderson Stanley
That man is a success
Who has lived well, laughed often and loved much;
Who has gained the respect of intelligent men and the love of little children;
Who has filled his niche and accomplished his task;
Who leaves the world better than he found it;
Who has never lacked appreciation of his family and friends or failed to express it;
Who looked for the best in others and gave the best he had.

Beverley Butler

Theresa O'Mahony, 1958–2019
It is never easy to say goodbye to a friend and loved one, particularly when they have had such an impact on your life. Theresa (Figure 3) was an incredible woman with a fiery spirit, brilliant sense of humour and an abundance of love and energy for her family, friends and the causes she believed in. It was a privilege to call her a friend, and I am thankful for having had the opportunity to get to know her over the past five years. An Institute of Archaeology alumna, Theresa received a BA(Hons) in Archaeology in 2015, followed by an MA in Public Archaeology in 2016. Her published research into contemporary disability and inclusion in archaeology has reached over 3.3 million people in the UK and abroad, and served as the basis for the establishment of the Enabled Archaeology Foundation.
Theresa partnered with Breaking Ground Heritage (part of Operation Nightingale), the Bamburgh Research Project and the Thames Discovery Programme, as well as other projects, in order to test her inclusion methods and ideas concerning permanent long-term sustainable employment of the disabled in community, academic and commercial archaeology. Never one to back down from a fight, Theresa challenged the establishment through her work with the Chartered Institute for Archaeology's Equality and Diversity Group, through various conferences and often by going directly to the sources themselves. She leaves behind a legacy of activism for the disabled, particularly in the archaeological community. As anyone who knew her can attest, she always put the needs of others far ahead of her own; she saved lives and, despite a myriad of health problems, she fought for those who were unable to do so themselves. She touched and inspired those who came to know her and challenged those who were in need of expanding their worldview. For each and every one of us who came to join her in her cause, there is no doubt that we will carry on from her example and ensure her message of unity and equality resonates through time. May your light shine brightly always, Theresa. You will be missed, but never forgotten.

Erik De'Scathebury

You can read more about Theresa's personal reflections on UK archaeology here: https://doi.org/10.1080/20518196.2018.1487624.

Figure 3 Theresa representing the Foreshore Recording & Observation Group and Enabled Archaeology at the Thames Discovery Programme Foreshore Forum, October 2018. (Image credit: Chris Haworth/Thames Discovery Programme)

Development and Validation of a Method to Measure Lumbosacral Motion Using Ultrasound Imaging
VU Research Portal
van den Hoorn, W.; Coppieters, M.W.J.; van Dieën, J.H.; Hodges, P.W. Published in: Ultrasound in Medicine and Biology, 2016. DOI: 10.1016/j.ultrasmedbio.2016.01.001
Document version: version created as part of the publication process; publisher's layout; not normally made publicly available.

Citation for published version (APA): van den Hoorn, W., Coppieters, M. W. J., van Dieën, J. H., & Hodges, P. W. (2016). Development and validation of a method to measure lumbosacral motion using ultrasound imaging. Ultrasound in Medicine and Biology, 42(5), 1221–1229. https://doi.org/10.1016/j.ultrasmedbio.2016.01.001

Ultrasound in Med. & Biol., Vol. -, No. -, pp. 1–9, 2016. Copyright © 2016 World Federation for Ultrasound in Medicine & Biology. Printed in the USA.
All rights reserved. 0301-5629/$ - see front matter. http://dx.doi.org/10.1016/j.ultrasmedbio.2016.01.001

Original Contribution

DEVELOPMENT AND VALIDATION OF A METHOD TO MEASURE LUMBOSACRAL MOTION USING ULTRASOUND IMAGING

WOLBERT VAN DEN HOORN,*† MICHEL W. COPPIETERS,*† JAAP H. VAN DIEËN,† and PAUL W. HODGES*
*Centre for Clinical Research Excellence in Spinal Pain, Injury and Health, School of Health and Rehabilitation Sciences, The University of Queensland, Brisbane, Queensland, Australia; and †MOVE Research Institute Amsterdam, Department of Human Movement Sciences, Faculty of Behavioural and Movement Sciences, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands

(Received 9 July 2015; revised 20 November 2015; in final form 3 January 2016)

Abstract—The study aim was to validate an ultrasound imaging technique to measure sagittal plane lumbosacral motion. Direct and indirect measures of lumbosacral angle change were developed and validated. Lumbosacral angle was estimated by the angle between lines through two landmarks on the sacrum and lowest lumbar vertebrae. Distance measure was made between the sacrum and lumbar vertebrae, and angle was estimated after distance was calibrated to angle. This method was tested in an in vitro spine and an in vivo porcine spine and validated to video and fluoroscopy measures, respectively. R², regression coefficients and mean absolute differences between ultrasound measures and validation measures were, respectively: 0.77, 0.982, 0.67° (in vitro, angle); 0.97, 0.992, 0.82° (in vitro, distance); 0.94, 0.995, 2.1° (in vivo, angle); and 0.95, 0.997, 1.7° (in vivo, distance). Lumbosacral motion can be accurately measured with ultrasound. This provides a basis to develop measurements for use in humans. (E-mail: w.vandenhoorn@uq.edu.au) © 2016 World Federation for Ultrasound in Medicine & Biology.

Key Words: Spine motion, Ultrasound, In vitro model, In vivo model.
INTRODUCTION

Intervertebral motion can be measured accurately with radiologic techniques (Breen et al. 1989; Dvořák et al. 1991a, 1991b; Pearcy 1985; Pearcy and Whittle 1982). However, repeated exposure to ionizing radiation poses health risks. Alternative methods using markers attached to the skin to represent intervertebral motion (Lee et al. 1995; Mörl and Blickhan 2006; Vanneuville et al. 1994) are not ideal because movement of the skin relative to the vertebral body means that markers may not reflect true intervertebral motion. Although this problem can be overcome by attachment of markers directly to spinous processes (Steffen et al. 1997), this is invasive and impractical for widespread use. Accurate measurement of intervertebral motion in clinical settings requires a method which is non-invasive, easy to use and readily available.

Measurement of intervertebral motion could help to explore the relation among muscle dysfunction, reduced proprioception and low back pain, and could therefore direct rehabilitation. There is accruing evidence that control of motion at a single segment may be relevant for low back pain (Breen et al. 2012). For example, function of deep erector spinae muscles is affected at a single segment by low back pain (MacDonald et al. 2010) and spinal injury (Hodges et al. 2006), and atrophy and inhibition of these muscles might affect segmental motion (Quint et al. 1998). Reduced segmental motion would affect proprioception (Burke et al. 1978), and lower back proprioception is affected in people with low back pain (Brumagne et al. 2000).

Address correspondence to: Wolbert van den Hoorn, Centre for Clinical Research Excellence in Spinal Pain, Injury and Health, School of Health and Rehabilitation Sciences, The University of Queensland, 84A Services Road, Brisbane, Queensland 4072, Australia. E-mail: w.vandenhoorn@uq.edu.au
A non-invasive method to measure intervertebral motion is necessary to investigate the relation between low back pain and dys- functions in clinical practice and in large cohorts. Ultrasound imaging allows direct and accurate imaging of vertebral structures (from reflection of the ultrasound beam at the periosteum) and surrounding soft tissues, is readily available in clinical settings and has the potential to measure intervertebral motion accu- rately and non-invasively. Anatomic landmarks of the sacrum and lumbar vertebrae can be visualized reliably Delta:1_given name Delta:1_surname Delta:1_given name Delta:1_surname mailto:w.vandenhoorn@uq.edu.au http://dx.doi.org/10.1016/j.ultrasmedbio.2016.01.001 http://dx.doi.org/10.1016/j.ultrasmedbio.2016.01.001 mailto:w.vandenhoorn@uq.edu.au 2 Ultrasound in Medicine and Biology Volume -, Number -, 2016 with ultrasound (Zieger and D€orr 1988). We proposed that a set of anatomic landmarks could provide the basis to estimate intervertebral motion with ultrasound imaging. The aim of the current investigation was to develop techniques using ultrasound imaging to accurately and non-invasively measure intervertebral motion of the lumbosacral spine in the sagittal plane. A second aim was to compare these techniques against measurement of intervertebral motion made either with a video when the technique was applied to an in vitro spine model, or fluoroscopy when the technique was applied in vivo using an anesthetized pig. MATERIALS AND METHODS Traditionally, angle between two segments is defined by two anatomic landmarks identified on each segment (Morrissy et al. 1990). Lines can be drawn through these pairs of anatomic landmarks and the angle between the two lines used as a representation of the angle between these segments. This technique was tested using two points identified on the L5 vertebra and sacrum in a single image. 
An alternative method to estimate intervertebral angle change may be based on one anatomic landmark per segment when it is difficult to accurately track two points on two structures as the structures move relative to each other. This technique is based on the assumption that the distance between bony landmarks on adjacent segments changes with spinal movement in the sagittal plane and, when calibrated against a known angle change, a change in distance between posterior elements of adjacent vertebrae could be used to estimate intervertebral movement. Both methods were evaluated in two separate experiments. In the first experiment, lumbosacral motion of an in vitro spine model was measured with ultrasound imaging and digital photography. In the second experiment, motion measured with ultrasound imaging was compared with measures made in vivo with fluoroscopy in an anesthetized pig.

EXPERIMENT 1: MEASUREMENTS OF INTERVERTEBRAL MOTION USING AN IN VITRO SPINE MODEL

Experimental set-up
A synthetic anatomic model of the spine (3B Scientific, Lumbar Spinal Column, Hamburg, Germany) was used to simulate lumbosacral motion. The model was modified by placing a one-degree-of-freedom hinge at the position of the approximate instantaneous axis of rotation (approximately one third of the distance from the dorsal aspect of the vertebral body) (Pearcy and Bogduk 1988). The hinge allowed motion in the sagittal plane (Fig. 1a). With the sacrum fixated, L5 could be moved manually (approximately 4°/s) through a physiologic range of motion (ROM) of 14° (White and Panjabi 1978).

Fig. 1. Experiment 1 set-up. (a) Sagittal plane image of the spinal model used; * indicates the estimated position of the axis of rotation of L5. (b) The spinal model (1) was submerged in a water-filled tank for ultrasound coupling. The ultrasound transducer was held manually. Motion of L5 in relation to the sacrum was recorded simultaneously by the ultrasound transducer (2) and digital camera (3). The angle between the sacrum and L5 estimated from the ultrasound measures were compared to the angle between the rods (4) inserted into L5 and the sacrum. (c) Angle between a line drawn through two prominent landmarks of the sacrum and a line drawn through the lamina and mammillary process of L5 used in the ultrasound measure (experiment 1, method 1).

The model was placed in a water-filled tank to facilitate coupling for ultrasound imaging (Fig. 1b). The transducer (7–10 MHz linear array) of the ultrasound system Logiq 9 (GE Healthcare, Little Chalfont, Buckinghamshire, UK) was submerged and held manually approximately 50 mm above the model to record the relevant anatomic landmarks of the sacrum and L5 (see below). For validation of the measures, motion of L5 was recorded simultaneously with a digital camera (Kodak V530; Eastman Kodak Company, Rochester, New York, USA) (Fig. 1b). To facilitate accurate measurement of the angle change from the digital images, metal rods (length = 80 mm) were inserted into the sacrum and L5 (Fig. 1b). Two points on each rod were visually identified, and the angle between the lines through the points was calculated.

Measurement technique 1: Direct measurement of angular change
The lamina and mammillary process of L5 and two points on the surface of the sacrum immediately lateral to the dorsal processes (Fig. 1c) served as bony landmarks to estimate lumbosacral movement. This is referred to as a 'direct' technique, as an angle is directly calculated from the ultrasound images. To enable simultaneous visualization of the four landmarks, the ultrasound transducer was held at an angle in the frontal plane of approximately 25° relative to the longitudinal axis of the spine (Fig. 2a).

Fig. 2. (a) Position of the ultrasound transducer to visualize two prominent landmarks of the sacrum, the dorsal aspect (p1) and the cranial edge of the dorsal aspect of the sacrum (p2), and of L5, the lamina (p3) and mammillary process (p4). (b) Ultrasound image obtained with the ultrasound transducer positioned as in (a). The angle (180° − θ) between a line through p1 and p2 and a line through p3 and p4 depicts an anatomic angle between the sacrum and L5. (c) Position of the ultrasound transducer to visualize the cranial edge of the dorsal aspect of the sacrum and the L5 lamina, where D displays the distance between the sacrum and L5 points. (d) Ultrasound image obtained with the ultrasound transducer positioned as in (c), where D displays the distance between the sacrum and L5 points.

Measurement technique 2: Indirect measurement of angular change
The change in the distance between the lamina of L5 and the cranial edge of the dorsal aspect of the sacrum during spinal movements was used to estimate lumbosacral angle. To visualize these landmarks, the ultrasound transducer was held parallel to the spine (Fig. 2b). To estimate an angle change from a distance change, knowledge about the relation between a change in distance and a change in angle is required. To establish this relation, data from measurement technique 1 were used, as both distance and angle information were available in ultrasound images using that technique. The distance change from when the ultrasound transducer was held parallel to the spine was then transformed to an angle change. We refer to this as an 'indirect' technique, as the change in angle is estimated from a change in distance.

Data analysis
The digital video and ultrasound movie frames were converted into JPEG images and imported into MATLAB (The MathWorks, Inc., Natick, MA, USA). To correct for sampling rate differences (ultrasound: 29 fps; camera: 30 fps), images were synchronized to the first and last frame in which movement of the L5 vertebra could be visually determined. To reduce bias from sequential presentation of the images, images were analyzed in random order. To ensure a large enough angle change between frames during the slow movement of the spinal model, every 20th digital video frame was analyzed. Data were recorded for 1 min of repeated movement between the extremes of ROM (approximately 16 cycles).

Measurement technique 1. Because the ultrasound transducer was placed approximately 25° oblique (β) to the spine to visualize the bony landmarks in measurement technique 1, and because motion of L5 occurred in the sagittal plane, the calculated angle (θ) from the ultrasound was corrected for the out-of-plane movement using eqn (1):

θ_corrected = θ / cos(β)   (1)

where 0° ≤ β < 90°.

Measurement technique 2. Because the available range of lumbosacral movement used for calibration of the distance data may be smaller than 14° in real-life situations, intervertebral angle change was also estimated with calibration ratios calculated for smaller ROM. This enabled assessment of the degree to which calibration range affected the accuracy of the indirect technique (11.2° [80% of 14°]; 8.4° [60%]; 5.6° [40%]; and 2.8° [20%]). Calibration ratios to convert distance change (Δd) to angular change (Δφ) were determined for each range of motion using eqn (2):

Δφ = (Δθ_ROM / Δd_ROM) Δd   (2)

where ROM relates to 11.2°, 8.4°, 5.6° and 2.8°. As the ultrasound transducer was placed at approximately a 25° angle in the frontal plane relative to the longitudinal axis during the calibration measurements, the measured distance change was adjusted by multiplication of the values by cos(25°). Although the projected distance between the two points measured should approximate a sine function of the angle, a linear estimation of the distance was used, as there were only small changes in angle.

EXPERIMENT 2: IN VIVO MEASUREMENTS USING AN ANESTHETIZED PIG

Experimental set-up
A 4-mo-old domestic pig (Swedish Landrace), weighing 45 kg, was used.
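The two conversions defined in eqns (1) and (2) of experiment 1 are simple enough to state in code. The sketch below is illustrative only (plain Python rather than the authors' MATLAB analysis, and the calibration numbers are made up, not study data):

```python
import math

def correct_out_of_plane(theta_deg, beta_deg):
    """Eqn (1): theta_corrected = theta / cos(beta), for a transducer
    tilted beta degrees (0 <= beta < 90) out of the movement plane."""
    return theta_deg / math.cos(math.radians(beta_deg))

def distance_to_angle_change(delta_d, delta_theta_rom, delta_d_rom):
    """Eqn (2): convert a distance change delta_d to an angle change
    using the calibration ratio delta_theta_rom / delta_d_rom."""
    return (delta_theta_rom / delta_d_rom) * delta_d

# 10 degrees measured with the 25-degree transducer tilt of experiment 1:
theta_c = correct_out_of_plane(10.0, 25.0)        # ~11.03 degrees
# Hypothetical calibration: an 8.4-degree ROM produced 18.3 mm of change,
# so a 5.0 mm distance change maps to ~2.30 degrees:
dphi = distance_to_angle_change(5.0, 8.4, 18.3)
```

As in the text, a distance change measured with the transducer held oblique would first be multiplied by cos(25°) before applying the calibration ratio.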
Ethical approval was obtained from the institutional ethics committee. The animal was sedated by a 30-mg/kg intramuscular injection of ketamine (Ketalar, Pfizer, New York, NY, USA) and, after 10 min, anesthetized intravenously with 20 mg/kg of propofol (Rapinovet, Schering-Plough, Kenilworth, NJ, USA). Maintenance doses were given as required. The animal was placed prone on a table. Although the pig is a quadruped, its lumbar spine approximates the human lumbar spine in size, shape and biomechanics (Smit 2002). Fluoroscopy and ultrasound recordings of intersegmental motion were made simultaneously during lumbopelvic movements. The lumbar spine was moved by changing the orientation of the rear legs and pelvis (Fig. 3). Ultrasound images were recorded with an Acuson SC2000 7–10 MHz linear array transducer (Siemens Healthcare, Malvern, PA, USA).

Fig. 3. Experiment 2 set-up. The lumbar spine of the pig was extended in small steps. The lumbar spine at the extremes of flexion (a) and extension (b) are shown. The left side shows the corresponding fluoroscopic images and the right side shows the corresponding ultrasound images. The sacrum (S1) and the sixth lumbar vertebra (L6) are highlighted in both fluoroscopic and ultrasound images.

Measurement technique 1: Direct measurement of angular change
An angle was calculated between the lines fitted parallel to the sacrum and parallel to the spinous process of the last lumbar vertebra (L6) to represent the angle between these two structures. The ultrasound transducer was placed in the midsagittal plane to image the cranial aspect of the sacrum and the caudal aspect of the dorsal processes of L6 (Fig. 4a, 4b). Two points could be identified on each of these structures for orientation of the lines for calculation of angle change.

Measurement technique 2: Indirect measurement of angular change
To assess whether the change in linear distance between the cranial edge of the dorsal aspect of the sacrum and lamina of L6 could be used to quantify lumbosacral motion, the same indirect method was used as in experiment 1, with the exception that the lamina of L6 was visualized in the pig.

Fig. 4. Angle (a, b) and distance (c, d) between the sacrum and L6 calculated from the ultrasound images of the animal (pig) experiment.

Data analysis
The change in angle between the sacrum and L6 was analyzed by overlaying consecutive fluoroscopy images (Cakir et al. 2006). For technique 2, the distance measure could not be calibrated as it had been in experiment 1, because it was difficult to obtain an accurate estimation of distance and angle from the same image in technique 1. To be able to estimate angle change from a distance change, the angle information extracted from the fluoroscopy images was used. To calibrate a distance change to an angle change, the linear relation between distance measured from ultrasound and angle measured from fluoroscopy was determined by estimating the linear regression coefficients. The linear regression coefficients were then used to transform distance measured from ultrasound to angle.

STATISTICAL ANALYSIS

Statistical analyses were performed using MATLAB (The MathWorks, Inc.). To investigate the relationship between the angles and distances estimated from the ultrasound images, and the angles measured either from the digital images (experiment 1) or fluoroscopy (experiment 2), linear regressions were calculated with the intercept forced through zero. Regression coefficients and explained variance (R²) were extracted. A regression coefficient smaller than one indicates that the ultrasound measure tends to underestimate the lumbosacral angle compared to the measure used for validation. To determine error, the mean absolute differences between the measures made with ultrasound and the respective technique used for validation were calculated.
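The validation statistics just described — a zero-intercept regression slope, explained variance and mean absolute difference — can be sketched in a few lines. This is a plain-Python illustration (the study used MATLAB), with made-up angle pairs rather than study data; the uncentered R² used here is one common definition for regression through the origin:

```python
def zero_intercept_slope(x, y):
    """Least-squares slope b of y = b * x, intercept forced through zero."""
    return sum(xi * yi for xi, yi in zip(x, y)) / sum(xi * xi for xi in x)

def validate(reference_deg, ultrasound_deg):
    """Slope, uncentered R^2 and mean absolute error of ultrasound
    estimates against the validation measure."""
    b = zero_intercept_slope(reference_deg, ultrasound_deg)
    ss_res = sum((yi - b * xi) ** 2
                 for xi, yi in zip(reference_deg, ultrasound_deg))
    ss_tot = sum(yi ** 2 for yi in ultrasound_deg)
    r2 = 1.0 - ss_res / ss_tot
    mae = sum(abs(yi - xi)
              for xi, yi in zip(reference_deg, ultrasound_deg)) / len(reference_deg)
    return b, r2, mae

# Made-up lumbosacral angles (degrees): validation measure vs. ultrasound
reference = [2.0, 4.0, 6.0, 8.0, 10.0]
ultrasound = [1.9, 4.1, 5.7, 7.9, 9.8]
b, r2, mae = validate(reference, ultrasound)
# b < 1 here, i.e. the ultrasound estimate slightly underestimates the angle
```

A slope near one with a small mean absolute error is the pattern reported for the techniques in this study.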
RESULTS
Experiment 1: In vitro measurements
For technique 1, the explained variance between the lumbosacral angles measured by digital photography and ultrasound was 97%. The corresponding linear regression coefficient was 0.992. The mean absolute prediction error was 0.82°. For technique 2, the explained variance between the lumbosacral angles measured by digital photography and ultrasound was 77%. Table 1 summarizes the comparison between measures made from photography and ultrasound when distance was calibrated at different ranges of movement. When the distance change was calibrated to the angle change at 60% (8.4°) of the total range of technique 1, the mean error of the estimated angle change was the lowest, and the regression coefficient was the closest to one. When the range used for calibration was reduced to 20% (2.8°) of the total range, the mean error of the estimated angle change was the highest, and the regression coefficient was the furthest from one. Figure 5 shows an example of the estimated and observed angle change when the available range was 60% (8.4°) of the total range.

Fig. 4. Angle (a, b [β]) and distance (c, d [D]) between the sacrum and L6 calculated from the ultrasound images of the animal (pig) experiment.

Ultrasound in Medicine and Biology, Volume -, Number -, 2016

Experiment 2: In vivo measurements
The explained variance between the lumbosacral angles measured by fluoroscopy and ultrasound (technique 1) was 93.8% (Fig. 6). The corresponding linear regression coefficient was 0.995. The mean absolute error was 2.1°. When estimating changes in lumbosacral angle from changes in distance between

Table 1.
Statistical results for estimated angle change between the sacrum and L5.

ROM             Mean error   Regression coefficient   R²      Calibration ratio
14.1° (100%)    0.710°       1.024                    0.774   0.4801
11.3° (80%)     0.696°       1.019                    0.774   0.4752
8.5° (60%)      0.663°       0.982                    0.774   0.4603
5.6° (40%)      0.709°       0.850                    0.774   0.3987
2.8° (20%)      0.801°       0.789                    0.774   0.3697

ROM = range of motion used for calibration.

the cranial edge of the dorsal aspect of the sacrum and the lamina of L6 (technique 2), the explained variance was 95.0% (Fig. 6). The corresponding linear regression coefficient was 0.997. The mean error was 1.7°.

DISCUSSION
This study aimed to develop and validate techniques to measure lumbosacral motion with ultrasound imaging. The findings of the in vitro and in vivo experiments demonstrated the viability of ultrasound to estimate lumbosacral movement.

Measurement technique 1: Direct measurement of angular change
The results demonstrate that the amplitude of lumbosacral motion measured from the angle between a line drawn through two prominent landmarks on the sacrum and a line drawn through the lamina and mammillary processes of L5 was highly correlated with the angle measured with digital photography. This confirms the accuracy of the technique.

Fig. 5. Comparison between the angular data in the in vitro spinal model using the distance between the cranial edge of the dorsal aspect of the sacrum and the L5 lamina (dashed black line), after angle change was calibrated with the 60% (8.2°) range of motion calibration, with the angles calculated from the digital camera (solid black line). The dotted grey line shows the absolute errors. (Plot annotations: axes, angle change (°) vs. samples; mean difference = 0.60°; regression coefficient = 0.982; explained variance = 77.4%.)

The same principle is
commonly used in X-ray analysis, where the angle between lines that pass through anatomic landmarks on adjacent vertebrae represents a segmental angle (Breen et al. 1989; Pearcy 1985; Pearcy and Whittle 1982). The quality of the ultrasound images was very good in the in vitro model as a result of the absence of soft tissues, and is most likely better than the quality that could be expected in vivo. However, the 93.8% explained variance between fluoroscopy and ultrasound angles in the animal study indicates that angles measured with ultrasound can be used to accurately measure an angle between the sacrum and an adjacent vertebra in vivo. The average error of the estimated angle was greater in the in vivo than in the in vitro experiment (2.10° vs. 0.82°). A larger error in the in vivo experiment could be related to reduced clarity of the landmarks of the sacrum and L6, or be a result of greater complexity of motion (combined translation and rotation), as motion in the in vitro model was limited to a single degree of freedom. Furthermore, the ultrasound measures were compared with angles calculated from fluoroscopy images, which probably have larger errors than the angles measured from the digital camera images. Regardless, the errors are relatively small considering the total lumbosacral ROM (approximately 25°) available in the porcine spine.

Fig. 6. Comparison between the angular data in the in vivo pig model using the distance change between the cranial edge of the dorsal aspect of the sacrum and the L6 lamina (dashed black lines) and fluoroscopy images (solid black lines). The black dots show the absolute errors. (Plot annotations: axes, angle (°) vs. samples; mean difference = 1.69°; regression coefficient = 0.997; explained variance = 95.0%.)
In the in vitro experiment, the ultrasound transducer was held at an angle in the frontal plane of approximately 25° relative to the longitudinal axis of the spine to allow visualization of the anatomic landmarks used for the direct lumbosacral angle measure. Deviations of the ultrasound transducer from 25° could introduce an error in the lumbosacral angle calculation, as the ultrasound transducer was held manually throughout the experiment. Although the ultrasound transducer angle was not measured during the experiment, the potential error can be calculated. With knowledge of the real lumbosacral angle change (measured with the video camera) and the actual orientation angle of the ultrasound transducer, the error can be calculated using eqn (3):

Error = θ (cos ε / cos γ) − θ   (3)

where θ is the real lumbosacral angle change, γ is the real ultrasound transducer angle and ε is the erroneous ultrasound transducer angle. For example, a deviation of the ultrasound transducer of 5° toward the longitudinal axis of the spine from 25° (i.e., 20°) with a lumbosacral angle change of 5° (observed from the video camera) would give an overestimation error of 0.18° (3.7%); a deviation of the ultrasound transducer of 5° away from the longitudinal axis of the spine from 25° (i.e., 30°) with a lumbosacral angle change of 5° (observed from the video camera) would give an underestimation error of −0.22° (4.4%). Although the ultrasound transducer was held as still as possible, small errors could have been introduced by small changes in ultrasound head orientation. Accurate control of ultrasound transducer orientation can avoid additional errors in the measurement of lumbosacral angle.
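The two worked examples above can be checked numerically. A minimal sketch of eqn (3) follows; note that the extracted typography of the equation is ambiguous about which cosine sits in the numerator, so the orientation of the ratio below is reconstructed to reproduce the +0.18° and −0.22° figures quoted in the text.

```python
import math

def tilt_error(theta_deg, real_tilt_deg, erroneous_tilt_deg):
    """Error in the measured lumbosacral angle change (eqn 3) when the
    transducer is held at erroneous_tilt_deg instead of the intended
    real_tilt_deg relative to the longitudinal axis of the spine."""
    ratio = math.cos(math.radians(erroneous_tilt_deg)) / math.cos(math.radians(real_tilt_deg))
    return theta_deg * ratio - theta_deg

# 5° tilt toward the long axis (25° -> 20°): overestimation of about +0.18°
# 5° tilt away from the long axis (25° -> 30°): underestimation of about -0.22°
```

A positive result means the tilted transducer overestimates the true 5° angle change; a negative result means it underestimates it, matching the two cases discussed in the text.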
It is likely to be challenging to image four anatomic landmarks simultaneously to calculate the lumbosacral angle, because of muscle contraction and body contour changes during movement of a human spine, and there is increased potential for operator error with this technique. For these reasons, we also investigated a simpler technique to evaluate whether a change in distance between posterior elements of adjacent vertebrae can be used to estimate intervertebral movement.

Measurement technique 2: Indirect measurement of angular change
Change in the distance between landmarks on adjacent vertebrae is related to a change in intervertebral angle and can be converted to an angle via calibration with a known angle change. Distance between the cranial edge of the dorsal aspect of the sacrum and the lamina of L5 measured with ultrasound was highly correlated with the angle calculated from the digital camera and could be used to estimate lumbosacral angle change. However, the error of the angle calculated from the distance change was slightly larger when calibrated at smaller ROMs. This may have consequences when this method is applied in clinical situations, for example, if lumbosacral range of motion is reduced in people with low back pain, which would limit the largest angle available for calibration. The error in the in vitro experiment suggests that angle changes smaller than 0.66° cannot be detected reliably. However, this error is relatively small in relation to the total ROM of 14° of the lumbosacral spine, set to represent average ROM in humans. In vivo flexion and extension movements in the lumbar spine involve rotational and translational components (Ogston et al. 1986). Both components are tracked by the path of the instantaneous axis of rotation. In our spine model, the axis of rotation was fixed by a hinge, and therefore, translational components were restricted.
It is possible that in in vivo situations the distance can change through a pure translation, which would erroneously be reported as an angle change. However, small distributions of instantaneous axes of rotation in healthy people have been reported in vivo (Pearcy and Bogduk 1988). Although Ogston et al. (1986) reported larger distributions of the instantaneous axis of rotation, they did not normalize for vertebral size. Given that the instantaneous axis of rotation changes little in vivo, this supports the use of a single instantaneous axis of rotation for validation of our method in our in vitro model. Furthermore, the in vivo animal experiment showed that ultrasound distance related well to angle measurement with fluoroscopy. This suggests that a change in instantaneous axis with sagittal motion did not influence the measurement of the distance between the sacrum and the last lumbar vertebra. However, this can only be assumed, as we did not measure the position of the instantaneous axis of rotation in experiment 2. Another option to reduce the effect of translation on the calculation of distance, and the related error in the estimated angle change, is to use only the caudal-cranial distance component. This would reduce the error in the angle estimation, as the translational component would not contribute to the distance calculation. In addition, in older patients, spondylosis and related osteophytes could hamper reliable identification of two landmarks on L5 in the same image, which is required to calibrate distance change to angle change. The extent to which this affects our measures should be explored in future studies. Both experiments indicate that a change in distance between adjacent segments can be used to estimate angle change. In theory, it does not matter which two anatomic landmarks are used, as long as they can be followed reliably during motion. Thus, this technique is potentially very adaptable.
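The suggestion above, keeping only the caudal-cranial component of the landmark separation so that a translation of one landmark does not masquerade as an angle change, can be illustrated with a small sketch. The coordinate convention (x = caudal-cranial, y = ventral-dorsal) and the function names are ours, for illustration only.

```python
import math

def full_distance(sacrum_pt, lamina_pt):
    """Euclidean distance between the two landmarks (x, y in mm)."""
    return math.hypot(lamina_pt[0] - sacrum_pt[0], lamina_pt[1] - sacrum_pt[1])

def caudal_cranial_distance(sacrum_pt, lamina_pt):
    """Separation projected onto the caudal-cranial (x) axis only;
    unaffected by a pure ventral-dorsal translation of one landmark."""
    return abs(lamina_pt[0] - sacrum_pt[0])
```

A pure ventral-dorsal shift of the lamina landmark changes the full Euclidean distance, and hence the calibrated angle estimate, but leaves the caudal-cranial component untouched, which is why restricting the measure to that component reduces translation-induced error.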
More superficial landmarks such as the dorsal processes could be used to detect change in distance between segments with motion in the sagittal plane; this has been explored between lumbar segments in humans by Chleboun et al. (2012) and, more recently, by Cuesta-Vargas (2015).

CONCLUSION
The results show that ultrasound can be used to measure intervertebral motion in an in vitro and in an in vivo model. The ultrasound method has the potential to provide a measure of intervertebral motion in clinical conditions, using real-time ultrasound imaging, without invasive procedures or exposure to ionizing radiation. In a logical next phase of this research, the reliability of the developed methods needs to be examined in humans.

Acknowledgments: W. van den Hoorn was supported by the Gerrit Jan van Ingen Schenau Promising Young Scientist Award from the Vrije Universiteit Amsterdam. P.W. Hodges was funded by a National Health and Medical Research Council of Australia Research Fellowship.

REFERENCES
Breen AC, Allen R, Morris A. Spine kinematics: A digital videofluoroscopic technique. J Biomed Eng 1989;11:224–228.
Breen AC, Teyhen DS, Mellor FE, Breen AC, Wong KWN, Deitz A. Measurement of intervertebral motion using quantitative fluoroscopy: Report of an international forum and proposal for use in the assessment of degenerative disc disease in the lumbar spine. Adv Orthop 2012;2012:802350.
Brumagne S, Cordo P, Lysens R, Verschueren S, Swinnen S.
The role of paraspinal muscle spindles in lumbosacral position sense in individuals with and without low back pain. Spine (Phila Pa 1976) 2000;25:989–994.
Burke D, Hagbarth KE, Löfstedt L. Muscle spindle activity in man during shortening and lengthening contractions. J Physiol 1978;277:131–142.
Cakir B, Richter M, Käfer W, Wieser M, Puhl W, Schmidt R. Evaluation of lumbar spine motion with dynamic X-ray: a reliability analysis. Spine (Phila Pa 1976) 2006;31:1258–1264.
Chleboun GS, Amway MJ, Hill JG, Root KJ, Murray HC, Sergeev AV. Measurement of segmental lumbar spine flexion and extension using ultrasound imaging. J Orthop Sports Phys Ther 2012;42:880–885.
Cuesta-Vargas AI. Development of a new ultrasound-based system for tracking motion of the human lumbar spine: Reliability, stability and repeatability during forward bending movement trials. Ultrasound Med Biol 2015;41:2049–2056.
Dvořák J, Panjabi MM, Chang DG, Theiler R, Grob D. Functional radiographic diagnosis of the lumbar spine. Flexion-extension and lateral bending. Spine (Phila Pa 1976) 1991a;16:562–571.
Dvořák J, Panjabi MM, Novotny JE, Chang DG, Grob D. Clinical validation of functional flexion-extension roentgenograms of the lumbar spine. Spine (Phila Pa 1976) 1991b;16:943–950.
Hodges P, Holm AK, Hansson T, Holm S. Rapid atrophy of the lumbar multifidus follows experimental disc or nerve root injury. Spine (Phila Pa 1976) 2006;31:2926–2933.
Lee YH, Chiou WK, Chen WJ, Lee MY, Lin YH. Predictive model of intersegmental mobility of lumbar spine in the sagittal plane from skin markers. Clin Biomech (Bristol, Avon) 1995;10:413–420.
MacDonald D, Moseley GL, Hodges PW. People with recurrent low back pain respond differently to trunk loading despite remission from symptoms. Spine (Phila Pa 1976) 2010;35:818–824.
Mörl F, Blickhan R. Three-dimensional relation of skin markers to lumbar vertebrae of healthy subjects in different postures measured by open MRI. Eur Spine J 2006;15:742–751.
Morrissy RT, Goldsmith GS, Hall EC, Kehl D, Cowie GH. Measurement of the Cobb angle on radiographs of patients who have scoliosis. Evaluation of intrinsic error. J Bone Joint Surg Am 1990;72:320–327.
Ogston NG, King GJ, Gertzbein SD, Tile M, Kapasouri A, Rubenstein JD. Centrode patterns in the lumbar spine. Baseline studies in normal subjects. Spine (Phila Pa 1976) 1986;11:591–595.
Pearcy MJ. Stereo radiography of lumbar spine motion. Acta Orthop Scand Suppl 1985;212:1–45.
Pearcy MJ, Bogduk N. Instantaneous axes of rotation of the lumbar intervertebral joints. Spine (Phila Pa 1976) 1988;13:1033–1041.
Pearcy MJ, Whittle MW. Movements of the lumbar spine measured by three-dimensional X-ray analysis. J Biomed Eng 1982;4:107–112.
Quint U, Wilke HJ, Shirazi-Adl A, Parnianpour M, Löer F, Claes LE. Importance of the intersegmental trunk muscles for the stability of the lumbar spine. A biomechanical study in vitro. Spine (Phila Pa 1976) 1998;23:1937–1945.
Smit TH. The use of a quadruped as an in vivo model for the study of the spine: biomechanical considerations. Eur Spine J 2002;11:137–144.
Steffen T, Rubin RK, Baramki HG, Antoniou J, Marchesi D, Aebi M. A new technique for measuring lumbar segmental motion in vivo. Method, accuracy, and preliminary results. Spine (Phila Pa 1976) 1997;22:156–166.
Vanneuville G, Poumarat G, Chandezon R, Guillot M, Garcier JM, Coillard C. [Reliability of cutaneous markers in kinetic studies of the human thoracic and lumbar spine]. Bull Assoc Anat (Nancy) 1994;78:19–21.
White AA 3rd, Panjabi MM. The basic kinematics of the human spine. A review of past and current knowledge. Spine (Phila Pa 1976) 1978;3:12–20.
Zieger M, Dörr U. Pediatric spinal sonography. Part I: Anatomy and examination technique. Pediatr Radiol 1988;18:9–13.
Document outline: Development and Validation of a Method to Measure Lumbosacral Motion Using Ultrasound Imaging; Introduction; Materials and Methods (Experiment 1: Measurements of Intervertebral Motion Using an In Vitro Spine Model; Experiment 2: In Vivo Measurements Using an Anesthetized Pig; Statistical Analysis); Results (Experiment 1: In vitro measurements; Experiment 2: In vivo measurements); Discussion (Measurement technique 1: Direct measurement of angular change; Measurement technique 2: Indirect measurement of angular change); Conclusion; References.

work_nmnqo7n23fapfkxlxv6bxvslh4 ----

MICROSCOPY 101
We appreciate the response to this publication feature and welcome all contributions. Contributions may be sent to José A. Mascorro, our Technical Editor, at his e-mail address: jmascor@tulane.edu. José may also be reached at the Department of Structural and Cellular Biology, Tulane University Health Sciences Center, 1430 Tulane Ave., New Orleans, LA 70112, Ph: (504) 584-2747, (504) 584-1687.

New Resin for Repair of Bell Jar Chips
Owen P. Mills and Matt Huuki*
Michigan Technological University, *Matt's Auto Glass, Houghton, MI
opmills@mtu.edu and mhuuki@global.net

As you ease the bell jar back down onto its seat, past the evaporation fixture, it is quite easy to chip a $600 jar.
Chipped bell jars are a perennial problem in EM labs. They degrade vacuum in evaporators, resulting in lower quality films, and ultimately shorten the life of the vacuum system. This note describes a new resin that I have used for repairing bell jar chips. A common chip can be seen in Fig. 1. It was caused by crashing the bell jar into the evaporation fixture. The chip is less than 0.5 mm deep. Beside it you can see another chip that will not be treated, as it does not extend to the jar edge. IMPORTANT NOTE TO READERS: It would not be wise to attempt repairs on deep chips of > 0.10 mm or those spreading over 30 mm laterally. Finally, DO NOT attempt repairs on cracked bell jars. Bell jars must resist the stress of vacuum, and cracked bell jars must be taken out of service.

I worked with Matt Huuki, owner of Matt's Auto Glass in Houghton, MI, to repair this bell jar chip. The resin Matt uses is Glas-Weld #2020 Clear Resin. This is the same product he uses to repair rock chips in automotive windshields. It is a UV sensitive resin that sets in an anaerobic environment. We made the repair as shown in the following sequence of photos. In Fig. 2, a dam of masking tape is applied to both sides of the jar to contain the thin epoxy. Slightly more resin is applied than needed, and a mylar sheet is set over the wet resin in Fig. 3. The UV light is then applied for ~10 minutes (Fig. 4). The UV light is applied for a longer period of time for this bell jar repair, since the resin is thicker compared with a windshield chip repair, where the resin is normally very thin. After the UV is applied, the resin is dry and the tape can be removed. Next, the resin can be ground down to the same height as the ground edge of the bell jar. To accomplish this I used 240 grit SiC pressure sensitive adhesive (PSA) backed wet-dry sandpaper that we temporarily stuck down to a lab bench top. I wet the paper and slid the bell jar back and forth across the wet sandpaper, taking care to keep the bell jar flat against the bench top surface at all times (Fig. 5). The finished repair is shown in Fig. 6. This is the second jar I have repaired using this technique. Both repaired bell jars hold vacuum and have performed well in all ways. I would recommend this technique to those with shallow chips of small extent. Be aware that your local auto glass repair shop will have no idea what a bell jar is, so be prepared to explain your situation. Better yet, bring this copy of MT with you!

†Please read the Microscopy Today Disclaimer on page 6. This procedure is potentially dangerous. DO NOT ATTEMPT this repair on ANY size bell jar defect without an understanding of the risks involved and your commitment to taking ALL appropriate precautions during the repair process and after the repaired bell jar is returned to service. If you are unsure of the consequences of your actions, don't do it! ... Editor

Inexpensive Digitization of an SEM
Henry C. Aldrich and Donna S. Williams, University of Florida, Gainesville, FL
haldrich2@cox.net

Because of the high cost of Polaroid film, many years ago we fitted our Hitachi S-450 scanning electron microscope with a 35 mm camera. At that time, we used a Pentax ME Super, which was totally manual and had to have the film advanced by a hand lever. This was an annoyance, but when we set up the system, Polaroid Type 55 film was about $2.00 per photo, and the cost of 35 mm spooled in our lab ran about $.10 per photo. When we traded the Hitachi S-450 for the later Hitachi S-570, we moved the 35 mm system to this microscope. About 1999, when the Pentax ZX-50 with motorized film advance became available, we adapted it to the S-570, using the Pentax electric shutter release. The lens used with both of these cameras was an elderly 50 mm screw-mount Pentax Macro lens that focused well on the CRT of the SEM.
Then, when Canon introduced the Digital Rebel for under $1000 about a year ago, we convinced a local Canon dealer to lend us one to try on the SEM. It worked quite well, and so we purchased it and converted the Hitachi S-570 to digital photography for about $1000: the cost of the Digital Rebel with its standard zoom lens plus the electric cable release. We also purchased an AC adapter to replace the AA batteries. The Rebel uses CompactFlash cards, which can then be removed from the camera and taken to any computer with a card reader to download the images. We have not experimented with other lenses. A dedicated macro lens might produce slightly sharper images, but the standard Rebel zoom lens seems quite adequate. The Canon electric cable release has a locking feature, so

46 MICROSCOPY TODAY March 2005
work_nnlzx2hrfjfxxdncxllrczd3iq ----

Visual Communication, 2009;8:123. DOI: 10.1177/1470357209102110
Paul Cobley and Nick Haeffner, "Digital cameras and domestic photography: communication, agency and structure"
The online version of this article can be found at: http://vcj.sagepub.com/cgi/content/abstract/8/2/123
Published by SAGE Publications.

ARTICLE
Digital cameras and domestic photography: communication, agency and structure
PAUL COBLEY AND NICK HAEFFNER
London Metropolitan University, UK

ABSTRACT
This article seeks to open up debate on the nature of communication in digital domestic photography. The discussion locates itself between the putative poles of 'digital democracy' and 'digital literacy', questioning the communicative co-ordinates of the snapshot and identifying the 'idiomatic genres' in which it takes place. The authors argue that digital cameras enable domestic photographers to take 'good' or professional-looking photographs and make certain capacities of professional cameras available for consumer use.
Conversely, however, they argue that the question of critical understanding of the politics of representation in domestic camera use remains, since technical proficiency is not necessarily always accompanied by analysis. One reason suggested for this is that, frequently, the uses of photography are insufficiently analysed. The article therefore criticizes the idea that (domestic) photography can be understood in terms of 'language' without paying due attention to the use of photography to capture the nonverbal.

KEYWORDS
'affordance' • agency • digital democracy • digital literacy • genre • idiom • 'language' • nonverbal communication • politics of representation • structure

In the popular imagination, digital imaging has been seen as a matter of modification and mutability. The modification arises from all the post hoc touching up that was employed in analogue photography in such spheres as advertising and fashion and that is now, through specialized software, available to domestic camera users. This is coupled with the mutability of the image at the point of 'production' (as opposed to 'post-production') in the touch-of-a-button effects that digital cameras offer.

SAGE Publications (Los Angeles, London, New Delhi, Singapore and Washington DC). Copyright © The Author(s), 2009. Vol 8(2): 123–146. DOI: 10.1177/1470357209102110
Digital imaging has played a role in ‘virtual’ existence, particularly as it has sustained some aspects of internet communication, but also in the sense that it has contributed to the putative unreality and the unreliability of mass mediated communication (Wheeler, 2002). Conversely, digitality has also contributed to feelings of reification in which the only yardstick of truth is that which proceeds from heavily mediated messages.

In terms of practice and sign making in a digital age, however, two further, possibly artificial, poles have emerged. The first is ‘digital democracy’, in which digital technologies, particularly those to do with imaging, grow at a very rapid rate and become available to consumers outside a purely industrial setting to the extent that information imbalance is, in some measure, ameliorated. The second is the more sobering perspective of ‘visual literacy’, typically associated with the Halliday-influenced work of Kress and Van Leeuwen (e.g. 2006), which identifies in visual ‘language’ a series of constraints that are in some ways analogous to the constraints of verbal language. This position is well known and represents a reasonable understanding of the checks and balances that characterize the contemporary visual, and, perhaps more specifically, digital age (cf. Kress, 2003). Nevertheless, in what follows we wish to question some of Kress and Van Leeuwen’s imperatives both in the consideration of ‘critical literacy’ and in considering the idea of digital democracy: that is, digital democracy as a potential; not so much a utopia, but rather as part of quotidian attempts to enhance communication. We present a necessarily provisional overview of domestic digital camera use, an area which, although thus far under-theorized and even less well researched ethnographically, is the subject of a major consumer boom.
Digital cameras are one of the fastest growing consumer markets in the West, partly as a result of their incorporation into the latest generation of mobile phones. At the same time, digital imaging is fast rendering film unfashionable and economically unattractive to many of the big players in camera sales. The vastly accelerated process whereby a photographer can now capture a digital image and dispatch it for publication via the internet almost immediately afterwards has, by comparison, made film seem far too slow and costly for most businesses. It has led to the rise of ‘citizen journalism’, the ‘digital amateur’ and further erosion of the authority of the professional photographer. For this, and other reasons, the consumer boom in digital cameras has not simply amounted to another straightforward step in the onward march of established capital. Major players such as Kodak underestimated the rise of digital cameras, were slow to enter the market and were unready for some aspects of the new era of the visual heralded by digital imaging technology at a domestic level. The ‘domestic’ component of this phenomenon has been crucial, both because it entails a mass market and because it puts into the hands of some members of the public means of imaging which were, until very recently, only in the hands of industry and which thus contributed to an imbalance of power in the economy of signs.

Put simply, the question for visual theory that arises from the digital camera boom concerns whether digital cameras teach, facilitate or otherwise enhance visual literacy among the public. Do they enable command over imaging? Do they foster any sense of the world of representation beyond the act of taking domestic photographs and communicating on a very localized, personal level?
We would begin by suggesting that widespread uptake of digital cameras can inculcate the disposition of seeing the world through a viewfinder or a screen; it can also encourage the making of basic choices about representation. As such, domestic digital cameras harbour the potential to induce a more self-reflexive attitude towards media in general. Before widespread empirical work on digital camera use proceeds, we would argue that it is worth taking seriously the capacity of self-reflexivity inherent in domestic photography.

COMMUNICATION IN DOMESTIC PHOTOGRAPHY

What little available literature there is concerning vernacular photography has tended to be equivocal in its categorizations. The general catch-all concept of ‘the snapshot’ tended to dominate discussion in the past, although writers and critics rarely distinguish between wedding snaps, travel snaps, pet snaps and family snaps. Although his own work is focused on the ‘home mode’, Chalfen (1987) notes that the ‘snapshot’ can refer broadly to any hastily taken picture; the term actually derives from hunting, denoting a hurried gun shot taken without deliberate aim, and was applied to photography for the first time as early as the 1860s (p. 72). The casual domestic photographer, the mobile phone snapper and the amateur enthusiast may also need to be considered under separate headings at specific junctures in the debate, possibly according to the degrees of deliberation they employ in their photographs and the specific uses to which they are put. Yet, on reviewing the literature, it is apparent that academic books and articles dealing with photography as an art, and photographers as artists, far outweigh the meagre proportion of texts dealing with vernacular photography. Even the recent special issue of the journal Source (2005, dedicated to vernacular photography) looks at the subject with a museum and gallery interest.
One honourable exception would be the edited collection Photography’s Other Histories (Pinney and Peterson, 2003), which remedies a gap in the scholarly literature. The collection consists of a set of papers contributed by anthropologists looking at the uses of photography in developing nations. Stallabrass (1996) has discussed domestic digital photography but his discussion is largely orientated towards a Frankfurt School influenced critique, an approach we have explicitly avoided here. Chalfen’s study of analogue photography, Snapshots: Versions of Life (1987), remains one of the key texts on domestic photography considered in terms of its ‘uses’. His work provides an extensive meditation, based on research into family albums, on ‘Kodak culture’ and the ‘home mode’, taking in home movies as well as snapshots. Much of what we argue here concurs with Chalfen’s findings, although there are significant differences, as will be seen. Family Snaps, edited by Jo Spence and Pat Holland (1991), also provided one of the most important critical discussions of domestic photography, while Annette Kuhn’s Family Secrets (1995) offers a psychoanalytic approach to her family, which relies on an analysis of family photographs.

More recently, the so-called ‘snapshot aesthetic’ has become a high profile concept, following the success of photographers such as Nan Goldin and Wolfgang Tillmans, whose work self-consciously draws on the idea. Partly as a result of the fashion for the snapshot aesthetic among artists, exhibitions of vernacular photography have been organized in art galleries and museums, making the humble snapshot available for appropriation by art lovers. As Douglas R.
Nickel (1995), the curator of one such exhibition, remarks:

There is a fascination to certain examples that allows them a kind of afterlife, a license to circulate in other contexts. When the image is severed from its original, private function, it also becomes open, available to a range of readings wider than those associated with its conception. (p. 13)

Nickel also points out that ‘the snapshot remains by far the most populous class of photographic object we have, and it is as yet, without a theory’ (p. 9). Elsewhere in the exhibition catalogue, Lori Fogerty notes that:

most of the criteria we usually associate with photographs in a museum – works of personal expression, made with an aesthetic or at least social intent, by a self-conscious artist or professional – are absent. The very idea of the unique or rare object is thrown into question by the snapshot, since all of us own them, have taken them, have been their subject. (p. 8)

It is interesting that Nickel ultimately defines the snapshot in terms of the emotions, acknowledging that the topic may be met with distaste in much academic discourse:

we must be prepared to enter the terrain customarily regarded with much suspicion by the scholar: that of affect. The snapshot is, by design, an object of sentiment . . . the family photograph is forged in the emotional response its maker has to a subject, a relationship characterised by its sincerity. (p. 14)

Like Nickel, Don Slater (1995) finds the essence of the snapshot in its affective tonality, although unlike the former, he is much less sympathetic to the idea, moving from the term ‘sentiment’ to the more pejorative ‘sentimentality’.
Slater describes the snapshot as an ‘idealisation’, which imposes ‘a filter of sentimentality’ over its subject matter:

The most common photographs are of loved ones – partners and children – taken during leisure time, times of play . . . [The snapshot] is sentimental because it attempts to fix transcendent and tender emotions and identifications on people and moments hauled out of ordinary time and mundanity, the better to foreground an idealised sense of their value and the value of our relationship to them, in the present and in memory. (p. 134)

Despite this equation of snapshots with domestic sentimentality, emotion in communication demands analysis. Future research in media and communication studies will have to take more seriously the findings of Damasio (1995) and others that the emotions are central to critical reasoning, not a distorting ‘filter’ that has to be removed before we can see clearly. Part of this shift will also involve the acknowledgement that there are many kinds of code that may be in play in the production, reception and taking of photographs, including, but not limited to, the verbal, the written, the visual, the tactile – each of which carries with it a dynamic affective charge. This is not confined to the photograph itself: it is central to the success of leading brands. Reviewers frequently comment, for example, on the unique qualitative experience afforded by Nikon cameras (see, for instance, dpreview.com’s review of the Nikon D40 entry level DSLR).

The emotional charge of the snapshot, of course, is also bound to its cultural and economic co-ordinates. The snapshot was born with the introduction of the Kodak camera in the summer of 1888. The camera had a basic lens with a barrel shutter and came pre-loaded with a 100-exposure roll of film which could then be posted to Eastman’s Kodak factory for processing. Kodak’s famous slogan ran: ‘you press the button – we do the rest’.
The success of the camera was phenomenal: as Nickel (2005) remarks, Eastman created not just a product but a culture (p. 10; cf. Chalfen, 1987). Yet, along with the rank amateurs, grew enthusiasts with aspirations to higher technical and aesthetic standards. Indeed, there was (and is) a considerable middle ground which makes it difficult to delineate between these two categories. Many casual domestic photographers took a lot of photographs and learned about framing and composition as they went. As Alden (2005) puts it: ‘knowingly or not, amateurs would adopt the rhetoric of professional photographers’ (p. 8).

However, serious film-based photography is a notoriously expensive hobby, with initial outlays of £25,000 not uncommon for a fully equipped studio with darkroom. Although Photoshop now puts advanced darkroom capabilities in the hands of amateur digital photographers for a mere £600 (or c. £60 for Photoshop Elements, the cut-down version of the software), there is still a significant economic and status divide between amateur and professional, artisan and artist. Mastery of Photoshop also involves a steep learning curve. Unlike professional industry or art photographers, amateur photographers have traditionally not had access to a regular public audience for their work. In spite of the fact that magazines such as Amateur Photographer have featured the work of enthusiasts, few will become well known (although with the arrival of new internet sites such as Flickr, ImageShack, Fotolog, Fotki and PBase, new spaces of exhibition and discussion are opening up). As a consequence, genres of amateur photography have tended to remain fairly limited in scope.
However, this does not mean that they can necessarily be reduced to ‘language’ or delimited codes; rather, they are bounded by audiences and what camera users do, with available technology, in respect of them.

GENRES OF DOMESTIC PHOTOGRAPHY

Given that domestic photography is concerned with a limited space of production, dissemination and consumption, one useful way to consider its communicative action is through the figure of the idiom. Notwithstanding its linguistic bearing, Feldges (2008) suggests that most photography can be considered idiomatic because of its constraints of production and the limits on its audience. He therefore identifies four main ‘idiomatic genres’: idiomatic micro-communication; creative macro-communication; the presentational spectacular; and the scientific idiom. The photographs that follow, bar the ‘scientific’ one, have all been taken by amateurs using consumer digital cameras. They show that the existing idioms are readily available for amateurs to produce pictures. But there are a number of other issues that they illustrate which, although seemingly straightforward, are rather important.

The following seven pictures of a 16-month-old boy on a swing are an example of idiomatic micro-communication. The pictures were taken by the parents solely for the viewing, in the first instance, of the parents and the boy’s grandmother, all members of the same idiomatic network. The idiom is limited by the intended audience, but also by the purpose of the sign making in the production of the photograph. It is pretty clear from this selection of photographs that the purpose was ‘simply’ to capture the boy’s enjoyment of the swing, his expression of delight, a full view of his face, a sense of what he looked like in general at that age and that moment, and to do so with certain fundamental aesthetic factors taken into account (for example, instances when the early Spring sun, low in the sky, was not shining in his face).
In short, the main aim was to capture nonverbal communication, from the boy and by the scene. As such, this set of parameters does not differ from those in operation with idiomatic micro-communication in analogue, film-based photography. The difference, though, is to do with choice: in the past, the expense of getting films developed prevented domestic photographers from making numerous pictures in the search for an image that approached perfection for the purposes at hand. In digital photography, multiple pictures can be taken at no further expense and low-level aesthetic judgements can be made at the point of production (by viewing each picture in the LCD monitor) or later, on a computer (if the pictures are being uploaded there). Although it may seem like small beer, digital cameras facilitate the executing of minor decisions about the effectiveness of sign making in the idiom, as well as an apprehension of the diversity of what signs can signify.

Figures 1–7 Idiomatic micro-communication. Photos published with permission.

The next picture (Figure 8) is a landscape taken by an amateur photographer from a moving coach and is an example of what Feldges calls creative macro-communication. The pocket digital camera easily facilitated a slow shutter speed for taking the photograph. The photo has subsequently been rendered in monochrome using Photoshop. This represents a considerably larger idiom than that of micro-communication.
It is possible to make out what it is a picture of, but there are no specific co-ordinates for reading the picture and thoroughly delimiting the audience in the way there were with the photographs of the boy (Figures 1–7). At the same time, there is sufficient doubt about what is pictured to raise some interest, and that interest is potentially harboured by an audience whose size is dictated by their capacity to appreciate, broadly, ‘creative’ image making in general. The other interesting feature of this idiom is that it downplays the authorship function relative to that of micro-communication. Domestic analogue photography, of course, does not prohibit creative macro-communication; it is possible, although less probable, to produce a macro-communication that is creative using a traditional analogue camera. However, digital cameras make it that much easier and that much more likely, since they come with an array of technology that was only previously available to professionals at a very high price.

Figure 8 Creative macro-communication. Photo: Nick Haeffner.

Even more characteristic of idiomatic modes shared by both digital and analogue photography is the category of the presentational spectacular, as exemplified in the photograph in Figure 9, taken by an amateur.

Figure 9 The presentational spectacular. Photo: Nick Haeffner.
Amateur photographers who are planning to take anything more than ‘point and shoot’ family snaps will customarily seek to photograph naturally occurring objects in a ‘realistic’ way, eschewing the kind of ‘trick effects’ Barthes (1977[1960]) identified early in photographic theory and focusing, instead, on the spectacle of the object itself. Frequently, this involves the voluntary or involuntary photographing of an object with concomitant attention to the way it may impose itself on a particular purview in an unusual or impressive manner. In this idiomatic genre, the spectacle of naturally occurring events can increase in magnitude and intensity according to the expansion of the idiomatic network. A common example of this is those photographs that become news items. The benefits that domestic digital cameras offer to this process are banal, but worth noting: they are mainly to be found in the ability to quickly select higher ISO numbers for low light photography and the ready accessibility of pocket cameras themselves, often built into mobile phones.

The scientific idiom, instanced in the photograph in Figure 10, is frequently closely related to the presentational spectacular in its effects, but is marked off from it in its usual intent, and in that it is generally the preserve of professionals. Feldges suggests that this idiom constitutes the ‘most rational’ use of photography because it is employed solely for the purpose of scientific explanation and exploration. The interpretation of scientific idiom photographs such as this one relies on empirical codes rather than symbolic ones.

Figure 10 The scientific idiom. Photo: Nick Haeffner.
Such photographs are generally repeatable in similar form and, within their idiomatic, scientific communities, audiences will impute authorship to their producers, particularly if they are presenting new knowledge through the photographs. Domestic digital camera use seldom exemplifies the scientific idiom; yet it sometimes acts in a similar, partly traditional, way in the process of revelation and ‘truth’. It is here that the pronounced drive in snapshots to ‘show’ an event or people, so closely associated with the ‘home mode’ or family orientation (see, especially, Chalfen, 1987: 98–9), evades its domestic moorings in a fashion that is especially facilitated by the availability of consumer digital cameras. There are already a number of celebrated instances of this, one of which is discussed below.

All these examples evince a kind of ‘literacy’, to use the common linguistically orientated parlance once more. The final three idioms, in particular, represent an informed use of photography which has some sense of technique, practice and tradition. They are reminiscent of the work of enthusiastic amateurs or what is sometimes called the camera club mentality, operating efficiently in a very limited idiom. The scientific idiom is a developed version of this; the idiomatic micro-communication examples, on the other hand, only exemplify ‘literacy’ at the level of anticipating choices to be made in the selection of pictures at the moment of uploading. Yet, it should be noted that while all this sign making constitutes a kind of literacy, it is not necessarily a critical literacy which one would, perhaps, hope to be unleashed by ‘digital democracy’. Proficiency at the formal level in such idiomatic photography can be high, but frequently absent is critical reflection on both the politics of representation and the referent.
The question that follows from these observations on photographic idioms and the sign-making practices associated with them, then, concerns the possibility that widespread digital camera use may contribute to the democratization of critical insight or, put another way, will lead to greater media literacy. Certainly, the surveillance and legal functions of photography would seem to have been further problematized in recent years. Echoing some of the popular concerns and opportunities we outlined at the start of this article, Mitchell (1992) suggested that:

Protagonists of the institutions of journalism, with their interest in being trusted, of the legal system, with their need for provably reliable evidence, and of science, with their foundational faith in the recording instrument, may well fight hard to maintain the hegemony of the standard photographic image – but others will see the emergence of digital imaging as a welcome opportunity to expose the aporias in photography’s construction of the visual world, to deconstruct the very ideas of photographic objectivity and closure, and to resist what has become an increasingly sclerotic pictorial tradition. (p. 8)

The challenge of domestic digital photography to the ‘increasingly sclerotic pictorial tradition’ is located in the potential for critical understanding of the photographic text that the technology can facilitate. Re-framing the matter, we might declare it to be a metamorphosis of the roles of structure and agency in this field of representation.

STRUCTURE AND AGENCY

The idea of amateur photography transforming the polis is not exactly new (see Benjamin, 1973[1935] and Kracauer, 1995).
The proliferation of non-professional photographers led to hopes in the 1970s that the medium could inaugurate a demystification and politicization of the media. Among those who advanced this view of domestic photography was Don Slater, who edited the journal Camerawork in the early 1980s. As Slater (1995) explains:

the 1970s saw numerous currents of radical photography which considered the potential of photography for empowerment in everyday life: in education, documentation, alternative politics, etc. To some extent these currents touched upon domestic photography, particularly through Jo Spence’s work. (p. 143)

Those who sponsored the idea of empowerment envisaged a progressive politics of image making in which practice itself would transform ideology and social institutions:

why could we not see people – cameras in hand – telling their lives – to themselves and to others – in a narrative cut to their own dreams, desires, anger? . . . Could that arch-enemy, the root of representational tyranny – the myth of realism, of the factuality of images, of the naturalness of meaning – survive the people’s own experience of making images? . . . Could we not enlist photography in the ranks of counter-hegemony and prefigurative culture? (p. 144)

In Slater’s view, ‘the question of the fate of the photographic image in everyday digital photography resolves into the tangled structuring of leisure experience at the meeting point of consumer capitalism and the construction of family identity’ (p. 137). The agency which photography seems to offer as a potential has to be considered in relation to the structures within which it operates and which it brings into being:

Taking photographs itself is structured (with Kodak mass photography as the paradigm of structuring a complex skill into a few simple actions – ‘You press the button . . .’) and is regarded as an intrinsic part of other leisure event-structures: holidays, time-off, special occasions (Christmas or wedding).
It fits into the commodification of leisure generally and is part of their commodification: we are encouraged to photograph our lives in such a way as to frame them as leisure events. (p. 141)

Furthermore, when domestic users get to the point of developing their relationship with photography, Slater argues that questions of technique serve to obscure any consideration of the ideology that lies behind image making. An important factor in this craft/technique/consumerist nexus is the consumer press, which offers a large range of magazines promoting lenses, tripods, filters, carrying cases and other accessories, often through editorial pieces explaining how to get the ‘best’ images (the aforementioned ‘camera club mentality’ or ‘hobbyist photography’) (p. 142).

It could be argued that since Slater published his article in 1995, a wider range of amateur publications now exists, with some titles, such as Digital Photographer, showcasing the pictures of photographers whose work is celebrated for its challenging nature. Such publications can count on the readership of a growing band of workers in the culture industries of design, music and IT, characterized by Bourdieu (1986) as the ‘new petit bourgeoisie’, a rapidly expanding social group which has an ambivalent relationship to traditional bourgeois norms. Such people are highly likely to pursue digital photography as a hobby, partly because their workflow is most likely to have been thoroughly digitized in the last 10 years, in the interests of business efficiency. Such users have an easy familiarity with digital technology and an interest in new developments.
They also value forms of culture previously held to be bohemian (jazz, exotic travel to unusual destinations, modern art), which, in contrast with their scruffy, rough and ready forebears, they usually consume in thoroughly bourgeois surroundings. However, a quick survey of the shelves of a London newsagent suggests that consumerism, not photography, is still the dominant theme of these magazines, with titles such as What Camera?, Which Digital Camera?, Digital Camera Buyer and Digital Camera Shopper making explicit what titles such as Photography Monthly and Amateur Photographer are too reticent to admit: that their primary purpose is to deliver readers to advertisers.

Since we have raised the question of how easy users feel with digital technology, it is worth briefly returning here to the haptic dimension of digital photography, especially since product designers put so much thought into this aspect, in contrast to the relative neglect it has suffered at the hands of scholars of communication. In this respect, the concept of ‘affordance’ has recently achieved some prominence in discussions of the agency which users may exercise in relation to technology. Gibson (1979) described affordance as all the ‘action possibilities’ available to the actor, independent of the individual’s ability to recognize these possibilities. The concept was given influential revision by Norman (1999), who not only distinguished between ‘real affordances’, ‘perceived affordances’ and ‘cultural conventions’ but also illustrated their haptic co-ordinates. The real affordances are linked to physical constraints on action. The perceived affordances take account of an actor’s goals, values, beliefs and interests. The cultural constraints, however, are conventions shared by a cultural group.
Norman writes:

A convention is a cultural constraint, one that has evolved over time. Conventions are not arbitrary: they evolve, they require a community of practice. They are slow to be adopted, and once adopted, slow to go away. So although the word implies voluntary choice, the reality is that they are real constraints upon our behaviour. (p. 41)

Norman asks, ‘what is it about this object that makes people want to use it in this way?’ He concludes that the object (in our case, the digital camera or computer software interface) must ‘talk’ to us in some kind of a ‘language’, recommending some uses and discouraging others. Norman’s work has been very influential on a generation of designers working in technology, many of them camera and computer designers, preoccupied with creating user-friendly interfaces that require a careful consideration of the sense of touch. However, Norman’s work fits in with what has been called the administrative tradition in communication studies (identified with North America and emphasizing improving communication, often with a business model in the background). This contrasts with critical European communications research, which places much greater emphasis on the social and political aspects of communication.

One could therefore view the notion of affordances in the light of two traditions of thought. In the North American model, the emphasis would be on making digital imaging technology ever more efficient and popular with consumers. In the European tradition, however, this kind of ‘means–end’ or ‘instrumental’ rationality is viewed as one of the ways in which potential citizen photographers are (lamentably) turned into consumer photographers. From a Foucauldian perspective, the ‘freedoms’ afforded by consumer technology turn out to be simply more efficient ways of ensuring our subjectivization to consumer society and all the hidden assumptions that underwrite it. A further issue lurks behind the concept of affordances, too.
It has recently been incorporated into actor-network-theory, which distinguishes between prescription, proscription, affordances and allowances, and concerns itself with what a device allows or forbids in relation to the actor (Latour, 2005). Actor-network-theory, with its emphasis on the idea that researchers are always lagging behind the changing world that informants are involved in making, would be a fruitful ally in the ethnographic research that is needed to begin to understand domestic digital camera use. A third position can also be developed in response to a criticism that both the positive and negative approaches just discussed take too much of a broad-brush approach. Rather than asking how digital imaging devices can be purified of inefficiency or ideological distortion on a grand scale, we will try, instead, to get a more nuanced understanding of how people interact with them, what they want to get out of them and why.
Visual Communication 8(2), p. 136
As a preliminary measure, this would entail a consideration of the diverse amateur uses of cameras, what users are trying to get out of the photograph and what they are trying to get out of the technology, and how they place these within, and possibly expand, genres of sign making. In this configuration, it is not sufficient to just posit a dialectic of structure and agency. The same goes for the role of language in photographic sign making. It is not enough to rest between a hard version of the Hallidayan perspective (to the pole of linguistic structure) and the soft version (to the pole of choice). Rather, it is necessary to trace out the fluctuations between the two in concrete situations.
Consider the photographs taken at Abu Ghraib prison (Figure 11, below) which suggest that the practices of amateur digital photography can rupture the synergy of public discourses which surround it in consumer culture.

Figure 11 Abu Ghraib. Digital image: public domain.

Ironically, in this case, such a rupture was unintentional and it is possible that the photograph originally arose in a different idiom. Nevertheless, it points to an interesting contradiction in the relations between established power and mass media. In our current climate of feral competition for ratings and circulation, the media are generally perceived as trying to appear more populist and have become more sensationalist. This does not mean that they are necessarily ‘speaking the truth to power’ – far from it in many cases. However, it does mean that the amateur photographs taken at Abu Ghraib, which proved devastating to the political establishment in the US and in the UK, proved to be an extremely lucrative commodity for the media who ensured that their dissemination was the best that modern media can achieve. The photographs also led to much discussion on enthusiast websites such as dpreview.com, which normally specializes in reviews of the latest photographic gear. For months, the site teemed with furious postings from visitors arguing about the ‘appropriate’ uses of digital photography and the veracity of the digital image. For once, anger and passion about politics ruptured the otherwise bland discourse on hobbyist photography. For many, the personal connected with the political in reasoned and informed discourse about the wider implications of digital technologies.
However, other respondents, echoing our opening comments on views of the digital age, refused to believe that the Abu Ghraib images were ‘true’. Still others were led to a crisis of faith in their support for ‘the war in Iraq’. Were these images not ‘snapshots’ in the broad sense, outside of the family, in which we have begun to discuss them earlier? And, if so, do generalizations about snapshots still hold? Although the circumstances in which these photographs were taken and the motivations of the camera users are still not clear, it is nevertheless true that their subsequent circulation gave them the status of artefacts of citizen journalism. Furthermore, one final point should be made: although the referents are clearly to be understood as linguistically placed social actors, the most striking fact about the Abu Ghraib pictures is that the nonverbal communication, captured in all its naked brutality, is so overwhelming as to precede such linguistic placing. From the mobile phone photos used to rouse a demonstration (Robertson in Langford, 2005) to Eliot Ward’s amateur images of the London 7/7 terrorist attack, digital amateurs have been not just consumers of the media but producers of it too. Sontag (2004) has criticized the tendency of some theorists to speak of ‘spectacle’ and ‘spectators’ when referring to photographic reportage of such events:

To speak of reality becoming a spectacle is a breathtaking provincialism. It universalises the viewing habits of a small educated population living in the rich part of the world, where news has been converted into entertainment . . . it assumes that everyone is a spectator. (pp. 109–11)

Nevertheless, as Crawford (2008) points out: If one grants Sontag a victory, it comes with two qualifications.
First, in an image-event such as 7/7, both the producers and consumers of imagery are likely to be ‘a small educated population living in a rich part of the world.’ Where there is a tendency in counter-cultural circles to refer to the media as an ‘it’ or a ‘they’, we are no longer permitted the luxury of this separation. We are the media, as Ward’s mobile phone photograph clearly demonstrates. Perhaps, however, this is a somewhat isolated example, untypical of the practice and consumption of amateur digital photography, although, crucially, it is an amateur snapshot facilitated by a domestic technology. Nevertheless, it is still necessary to revisit the questions posed earlier. Is there an amateur digital photography practice which questions ideology – (a) at the level of the politics of representation; and (b) at the level of the referent? Our preliminary thesis is that there is but that it is limited. Much-needed ethnographic research in the area would have to pose classic questions of communication:

● How was the image captured?
● What was the object of representation?
● Where does it circulate?
● To whom and why?

Where can these questions be posed and how can we attempt to get the public to begin asking them more systematically? One answer is within the institution of the university. However, it is still necessary to acknowledge that the university is not in reality an institution of free enquiry. Teaching and research are carried out within specific frameworks that involve powerful imperatives and constraints. Perhaps it is not enough to get our students to read articles and carry out their own research. It may even be the case that among the alternative methods to inculcate self-reflexivity, digital imaging itself can be used to look critically at the institutions in which it is studied. The snapshot shown in Figure 12, speedily facilitated by a digital camera, was taken on a holiday visit to the US outside a well-established university.
It shows a sculpture of birds in flight in front of university buildings. Beneath the statue is a person, probably a student. A preferred reading of the image would talk about the statue as a symbol of all that is great about the experience of university study: the freedom to allow your ideas and your spirit to soar to new heights, like the birds in the sculpture. The (incidental) presence of a student of non-Western appearance could be seen as evidence of the inclusive and universalistic aspirations of the university. Such a reading fits well with what Wernick (1991) has called our ‘promotional culture’, in which modern public communications have become all about selling something and in which public institutions are constantly trying to sell themselves to potential customers. However, other readings of the image are readily available. For instance, note that the statue is made of metal and is therefore an image, not of dynamism but of stasis, suggesting not freedom but rather inflexible authority. Is the intellectual achievement apparently celebrated that of the statue, that of the student, or that of the intellectuals whose work they must learn to cite? The scale of the photo has the student dwarfed beneath this monumental metal monolith, suggesting perhaps the extent to which universities still require the subjection of those who study within them to a vast administrative bureaucracy and an institutionalized mode of life. Certainly, the student’s nonverbal communication suggests a reading other than the ‘preferred’ one.

Figure 12 The university experience. Photo: Nick Haeffner.
Thus it is necessary to acknowledge that while, on the one hand, the modern university can and should encourage critical literacy in relation to the image, it is nevertheless part of the problem as well as part of the solution. It may be that the development of a critical disposition towards digital imaging in the future will come just as much from outside the university as from inside it. It may happen as the previous decorum of public discourse, conducted through the appropriate authorized public channels, is rudely disrupted by the explosion of web-enabled chatter, about which it is unwise to pass judgement based on generalizations. For it is through public discourse on the ubiquity of nonverbal communication in the digital age that we are witnessing not only the questioning of technology and the politics of representation, but also the putative linguistic basis of both.

CONCLUSION

In following suggestions that troubling dichotomies have emerged in the digital age, we have attempted to place consumer digital cameras and domestic photography in the prominent position we believe they demand. As with all technologies, digital cameras are embedded in discourse. There is simply no escaping the fact that amateur digital photography is caught up in a defined politics of representation. Yet, equally, it would be folly to assume that technology is automatically complicit with existing discursive structures. We need to take seriously the capacity of self-reflexivity inherent in digital domestic photography, defined as the non-professional use of consumer digital cameras including, but not confined to, family snapshots. Digital domestic photography is also embedded in ‘uses’; indeed, it is constituted by its uses. Chalfen’s work (1987) makes this very clear by demonstrating how the study of social and cultural contexts of camera use in the ‘home mode’ creates a ‘symbolic world’.
Similarly, in a more recent investigation of a new technology, Horst and Miller’s (2006) anthropology of mobile phone use in Jamaica, it is observed that in the literature on telephony ‘texts that consider the widest possible context for understanding the usages and consequences of the telephone [are] much more effective than those that start too narrowly from a supposed intrinsic quality of the technology itself’ (p. 11). This echoes Chalfen’s (1987) finding that ‘technological innovations are, and will continue to be, less important than culture’s contribution to providing a continuity in a model and pattern of personal pictorial communication’ (p. 166). These are strong points with which, in some measure, we would concur. The danger of any investigation of a specific and new medium, particularly an investigation as preliminary as this one, is that it tends to identify novel features in relation to other media and, often, to imagine that those features are immutable characteristics of the medium. As Mitchell (2005) argues:

accounts of media tend to disavow their constructed character, presenting the medium as possessed of essential characteristics and a certain natural destiny. This is especially true of photography, which seems to license every commentator to make pronouncement on its essential character, even when their aim is to deny any essentialism. Thus, the very theorists of photography who have done the most to open up the limitless variety and complexity of photographic images invariably wind up at some point declaring an essential teleology, a fixed center in the labyrinth. Photography’s true nature is found in its automatic realism and naturalism, or in its tendency to aestheticize and idealize by rendering things pictorial.
It is praised for its incapacity for abstraction, or condemned for its fatal tendency to produce abstractions from human reality. It is declared to be independent of language, or riddled with language. Photography is a record of what we see, or a revelation of what we cannot see, a glimpse of what was previously invisible. Photographs are things we look at, and yet, as Barthes also insists, ‘a photograph is always invisible, it is not what we see’. (p. 474)

Yet, in focusing on the possible domestic uses of digital cameras and the location of those uses within loosely established idioms, we have hopefully circumvented much of the essentialism that Mitchell identifies. Furthermore, we would argue that there is a need to at least consider the potentialities of the new medium. Chalfen’s (1987) work, for example, does not (and cannot) speculate on equivalent communication in the ‘home mode’ before snapshots became a widespread technological reality. In addition, his and other investigators’ studies of family albums necessarily neglect to study all the photographs that families might discard or fail to place in albums or frames (because they are not good technically, contain a bad pose, a frown, and so forth). One of our points is that the technology of consumer digital photography allows the cheap generation of many more dispensable pictures which can be discarded at the click of a switch rather than forcing the photographer to wait and be disappointed after paying for their development on paper. Indeed, this may have a bearing on content and framing in the ‘home mode’ of domestic photography. One of the reasons that analogue photographs of people were taken in (often family) groups, for example, is the cost implication: taking one photograph of a number of people was cheaper than taking numerous photographs of different individuals.
This is not to say that group photographs have died out with the advent of affordable digital cameras; clearly, they have not, because they are an entrenched mode of picturing for all the good reasons that Chalfen lists. But the possibilities for domestic photography offered by digital cameras beyond such modes should not be overlooked. Digital cameras are a fast-moving new medium: for example, the increasing pixel count of cameras and the fall in price of digital equipment in the West, coupled with the increasing audio–video components of contemporary consumer cameras that make this a converged medium, threaten to supersede the current account. Nevertheless, we argue that, since it is a new medium, it is necessary to investigate and theorize digital photography’s possibilities even as we proceed to examine uses. And there is a need to question the basis upon which ‘uses’ are conceived. Lurking behind anthropological accounts of the ‘uses’ of communication technology is a phenomenon which we are compelled to comment on in relation to the sign-making function of domestic digital photography in a visual/digital age: that is, language. Thus, we have tried to question at least one of the views on photography that Mitchell identifies, that photography is riddled with language. Only a scant perusal of the foregoing is needed, however, to demonstrate that the present foray into the parameters of digital domestic photography is itself riddled with linguistic metaphors. The paradigm of ‘language’, with all its connotations of richness and constraints, is difficult to evade and is one contributing factor in the current predilection for, especially, Halliday’s perspective on linguistic limits and opportunities.
Yet, linguistic metaphors should not be confused with either linguistic determinations or even determination by language in the last instance. Despite the element of choice that is evident in much Hallidayan discourse theory, ‘language’ as constraining is central to its understanding of discourse. In Multimodal Discourse (2002), Kress and Van Leeuwen see their work in its relation to ‘questions about cognition, learning, knowledge, subjectivity’ within the frame of reference provided by ‘the so-called Sapir-Whorf hypothesis’:

That argument has remained inconclusive: in its strong form – that we cannot see, perceive, think outside of the categories provided by language – it seems untenable. Patently enough, we can, though we need to work harder to do so. In its weak form – that the categories of language provide grooves for habituated thought – it seems difficult to escape. (pp. 127–8)

Despite the difficulty of escaping language or verbal expression, we would venture a criticism of Kress and Van Leeuwen’s palpably reasonable position. For all its even-handedness, this position remains a glottocentrist one, predicated on the primacy of verbal expression. Given the centrality of language in many human affairs, this must, of course, make sense. However, there is the risk of downplaying the nonverbal in the idea of ‘literacy’, while, at the same time, reducing the possibility of any escape from the clutches of language. In both ontogenetic and phylogenetic terms, this is a major mistake.
As the work of Sebeok (especially 2001, 1991, 1988) and others working in contemporary semiotics is at pains to demonstrate, nonverbal communication is so much a part of the human repertoire of communication, particularly in human embeddedness in the predominant universe of non-human nature, that it is frequently repressed rather than simply ignored. So, too, with photography. Effectively, what is captured in domestic digital photography is nonverbal communication, even if it is only the cheesy smile of the relative posing before the camera. Leaving aside the audio–video facilities available in many contemporary digital cameras on the amateur market, the message in a photograph is overwhelmingly nonverbal. For this reason, and bearing in mind the question of digital democracy, we would opt for the ‘weak version’ of Kress and Van Leeuwen. Where Kress and Van Leeuwen follow Halliday in identifying the semantic dimension as the realm of choice, we would seek to push matters further by shifting debate onto the issue of what people want from photographs (in the situation of ‘utterance’) and its nonverbal co-ordinates: the required pose, proper lighting, colour and exposure, and mimicry of the situation (see Chalfen, 1987: 71–99). Indeed, there may even be the desire for deception on the part of the photographer resulting in a photograph that is even more perfect than the real situation. Furthermore, that desire is often extended in the service of a belief in ‘full communication’, a false dream that the image will ‘say’ everything that was desired.
Digital photography serves as part of the embellishment of such a dream through the fine tuning and tinkering with the image that it allows at the point of domestic production and, subsequently, through widely adopted image programs such as Photoshop. The popularly conceived mutability of digital imaging mentioned at the outset, then, offers a new opportunity or a new choice. It offers a ‘digital democracy’ where, seemingly, domestic camera use entails autonomy over one’s own images. But the opportunity occurs on a limited basis and does so not least because of a continued belief in the possibility of attaining a more perfect communication. That language would enable such perfection is obviously as much a fallacy as the idea of a photograph, conversely, saying ‘ain’t’ (Worth, 1981). Language can provide metaphors for understanding photography; but to insist that it is the basis for other forms of communication by humans invites misconceptions about the nature of signs as well as the nature of agency.

REFERENCES

Alden, T. (2005) Real Photo Postcards: Unbelievable Images from the Collection of Harvey Tulchensky. New York: Princeton Architectural Press.
Barthes, R. (1977[1960]) ‘The Photographic Message’, in Image–Music–Text, trans. and ed. S. Heath. London: Fontana.
Benjamin, W. (1973[1935]) ‘The Work of Art in the Age of Mechanical Reproduction’, in Hannah Arendt (ed.) Illuminations, trans. H. Zohn. Glasgow: Fontana.
Bourdieu, P. (1986) Distinction: A Social Critique of the Judgement of Taste. London: Routledge.
Chalfen, R. (1987) Snapshots: Versions of Life. Bowling Green, OH: Bowling Green State University Press.
Crawford, D. (2008) ‘Realism vs Reality in the War on Terror: Artworks as Models of Interpretation’. URL (consulted 31 March 2008): www.code-flow.net/fake/book/crawford-realism-en.html
Damasio, A.
(1995) Descartes’ Error: Emotion, Reason and the Human Brain. New York: Vintage.
Feldges, B. (2008) American Icons: The Genesis of a National Visual Language. London: Routledge.
Gibson, J. (1979) The Ecological Approach to Visual Perception. New York: Houghton Mifflin.
Horst, H. and Miller, D. (2006) The Cell Phone: An Anthropology of Communication. Oxford: Berg.
Kracauer, S. (1995) The Mass Ornament: Weimar Essays, trans. T.Y. Levin. Cambridge, MA: Harvard University Press.
Kress, G.R. (2003) Literacy in the New Media Age. London: Routledge.
Kress, G.R. and Van Leeuwen, T. (2002) Multimodal Discourse: The Modes and Media of Contemporary Communication. London: Arnold.
Kress, G.R. and Van Leeuwen, T. (2006) Reading Images, 2nd edn. London: Routledge.
Kuhn, A. (1995) Family Secrets: Acts of Memory and Imagination. London: Verso.
Langford, M. (ed.) (2005) Image and Imagination. Montreal: McGill-Queens University Press.
Latour, B. (2005) Reassembling the Social: An Introduction to Actor-Network-Theory. Oxford: Oxford University Press.
Mitchell, W.J.T. (1992) The Reconfigured Eye: Visual Truth in the Post-Photographic Era. Cambridge, MA: MIT Press.
Mitchell, W.J.T. (2005) What Pictures Want: The Lives and Loves of Images. Chicago: University of Chicago Press.
Nickel, D. (2005) Snapshots: The Photography of Everyday Life: 1888 to the Present. San Francisco: Museum of Modern Art.
Norman, D. (1999) ‘Affordance, Conventions and Design’, Interactions, May: 38–43.
Pinney, C. and Peterson, N. (2003) Photography’s Other Histories. Durham, NC: Duke University Press.
Sebeok, T.A. (1988) ‘In What Sense Is Language a “Primary Modeling System”?’, in H. Broms and R. Kaufmann (eds) Semiotics of Culture. Helsinki: Arator.
Sebeok, T.A. (1991) ‘Communication’, in A Sign Is Just a Sign. Bloomington: Indiana University Press.
Sebeok, T.A. (2001) ‘Nonverbal Communication’, in P. Cobley (ed.) The Routledge Companion to Semiotics and Linguistics. London: Routledge.
Slater, D.
(1995) ‘Domestic Photography and Digital Culture’, in M. Lister (ed.) The Photographic Image in Digital Culture. London: Routledge.
Sontag, S. (2004) Regarding the Pain of Others. Harmondsworth: Penguin.
Source (2005) Special issue on Vernacular Photography, issue 43 (Summer).
Spence, J. and Holland, P. (1991) Family Snaps. London: Virago.
Stallabrass, J. (1996) Gargantua: Manufactured Mass Culture. London: Verso.
Wernick, A. (1991) Promotional Culture: Advertising, Ideology and Symbolic Expression. London: Sage.
Wheeler, T.H. (2002) Phototruth or Photofiction? Ethics and Media Imagery in the Digital Age. Guildford: Erlbaum.
Worth, S. (1981) Studying Visual Communication. Philadelphia: University of Pennsylvania Press.

BIOGRAPHICAL NOTES

PAUL COBLEY is Reader in Communications at London Metropolitan University, an Executive Committee Member of the International Association for Semiotic Studies (IASS), and a member of the Semiotic Society of America and of the Media Communication and Cultural Studies Association (MeCCSA). He is the author of a number of books, including The American Thriller (Macmillan, 2000) and Narrative (Taylor & Francis, 2001). He edited The Communication Theory Reader (Routledge, 1996), The Routledge Companion to Semiotics and Linguistics (2001), Communication Theories (four volumes, Routledge, 2006), and (with Adam Briggs) The Media: An Introduction, 2nd edn (Longman, 2001); he co-edits two journals, Subject Matters and Social Semiotics, and is associate editor of Cybernetics and Human Knowing. Address: London Metropolitan University, 31 Jewry Street, London EC3N 2EY, UK.
[email: p.cobley@londonmet.ac.uk]

NICK HAEFFNER is Senior Lecturer in Communications at London Metropolitan University and a visiting professor at Boston University, British Programmes. In 2005, he published a monograph on Alfred Hitchcock (Longman). In 2006, he co-devised and co-curated an interactive new media exhibition called RePossessed, which is partly inspired by Hitchcock’s film Vertigo (1958). He is a member of the Society for Cinema and Media Studies and the Media, Communication and Cultural Studies Association. He is co-founder of the journal Subject Matters. Address: London Metropolitan University, 31 Jewry Street, London EC3N 2EY, UK. [email: nickhaeffner@hotmail.com]

----

A comparative study of digital watermarking in JPEG and JPEG 2000 environments

M.A. Suhail a, M.S. Obaidat b,*, S.S. Ipson b, B. Sadoun c
a Monmouth University, UK
b University of Bradford, UK
c Al-Balqa’ Applied University (BAU), Jordan

Received 1 June 2001; accepted 27 April 2002. doi:10.1016/S0020-0255(02)00291-8

Abstract

JPEG 2000 is a new compression technology that achieves very high compression rates while maintaining visual quality. Digital watermarking techniques have been developed to protect the copyright of media signals. The goal of this paper is to put into perspective joint photographic experts group (JPEG) and JPEG 2000 concepts along with the watermarking principle. It provides an evaluation of the compatibility of the JPEG 2000 and JPEG standards with watermarking. Various experiments have been conducted to compare the performance of both standards under various conditions. An outlook on the future of digital image watermarking within JPEG 2000 is introduced. © 2002 Elsevier Science Inc. All rights reserved.
Keywords: JPEG; JPEG 2000; Watermarking; Image processing; DCT; DWT; Image compression

Information Sciences 151 (2003) 93–105, www.elsevier.com/locate/ins
* Corresponding author. Address: Department of Computer Science, Monmouth University, W. Long Branch, NJ 07764, USA. Tel.: +1-732-571-4482; fax: +1-732-263-5202. E-mail address: obaidat@monmouth.edu (M.S. Obaidat).

1. Introduction

Many watermarking schemes have been suggested for images and some for audio and video streams. A large number of these schemes address the problems of implementing invisible watermarks. Basic watermarking concepts are discussed in [1,2]. Researchers define a digital watermark as an identification code carrying information (an author’s signature, a company’s logo, etc.) about the copyright owner, the creator of the work, the authorized consumer and so on. It is permanently embedded into digital data for copyright protection and may be used for checking whether the data have been modified. Visible and invisible watermarking are the two categories of digital watermarking. The concept of visible watermarking is very simple; it is analogous to stamping a mark on paper, and the data is said to be digitally stamped. An example of visible watermarking is seen in television channels when the station’s logo is visibly superimposed in the corner of the screen. Invisible watermarking, on the other hand, is a far more complex concept. It is most often used to identify copyright data, such as author, distributor, etc. Image compression, on the other hand, is the process of reducing the size of a digital image while retaining the highest possible visual quality. JPEG is the very well known ISO/ITU-T standard for image compression. Several modes are defined in JPEG [3]. It was released in the late 1980s.
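The idea of an invisible watermark can be made concrete with a minimal sketch. The following is our own illustration, not a scheme from this paper: an additive spread-spectrum watermark in the spatial domain, where a secret key seeds a pseudo-random ±1 sequence, a small-amplitude copy of that sequence is added to the pixel values (keeping the mark invisible), and detection correlates the image against the same key sequence. All function names here are hypothetical.

```python
import random

def prn_sequence(n, seed):
    # The owner's secret key seeds a pseudo-random +/-1 sequence.
    rng = random.Random(seed)
    return [1 if rng.random() < 0.5 else -1 for _ in range(n)]

def embed(pixels, seed, alpha=2):
    # Add a low-amplitude copy of the key sequence to each pixel,
    # clamped to the valid 8-bit range so the change stays invisible.
    s = prn_sequence(len(pixels), seed)
    return [max(0, min(255, p + alpha * si)) for p, si in zip(pixels, s)]

def detect(pixels, seed):
    # Correlate the (mean-removed) image with the key sequence.
    # A value near alpha indicates the watermark is present.
    s = prn_sequence(len(pixels), seed)
    mean = sum(pixels) / len(pixels)
    return sum((p - mean) * si for p, si in zip(pixels, s)) / len(pixels)
```

With the wrong key the correlation stays near zero, which is what lets the embedded mark act as an identification code tied to the copyright owner.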
JPEG 2000 is a new compression technology that achieves very high compression rates while maintaining visual quality. JPEG 2000 has now been issued to become an International Standard (IS) [3,4]. This paper answers a common question about the robustness of watermarking techniques against the compression attacks of JPEG and JPEG 2000. An outlook on the future of digital image watermarking within JPEG 2000 will be given. Section 2 introduces an overview of the watermarking concept. Section 3 provides background on JPEG and JPEG 2000 and compares both standards in terms of technology, performance and applications. Section 4 presents the experimental work and the results. Section 5 provides a look into the future of image coding and data security. Section 6 concludes this paper.

2. Watermarking concept and techniques

2.1. Watermarking applications and properties

The two major applications of watermarking are protecting copyrights and authenticating photographs. The main reason for protecting copyrights is to prevent image piracy when the provider distributes the image on the Internet [2]. One method used to authenticate digital images is to embed a digital watermark that breaks or changes as the image is tampered with. This informs the authenticator that the image has been manipulated. Watermarking techniques that are intended to be widely used must satisfy several requirements, and the type of application decides which watermarking technique is to be used. However, three requirements have been found to be common to most practical applications: robustness, invisibility and detectability. Some watermarking requirements compete with each other, and other requirements may also be significant [1,2].

2.2. Digital watermarking approaches

There are two main generations of watermarking: first generation watermarking and second generation watermarking [5].
Both approaches can be realized with spatial-domain or transform-domain techniques such as the discrete cosine transform (DCT) and the discrete wavelet transform (DWT). First generation watermarking (1GW) methods have mainly focused on applying the watermark over the entire image/video domain. However, this approach is not compatible with novel approaches to still image and video compression: JPEG 2000 and MPEG-4/7, the new techniques for image and video compression, are region- or object-based, as can be seen in the compression process. By contrast, second generation watermarking (2GW) was developed to increase robustness and invisibility and to overcome the weaknesses of 1GW. 2GW takes region, boundary and object characteristics into account by exploiting salient regions, object features and other characteristics of the image. Such watermarking methods may present additional advantages over 1GW in terms of detection and recovery from geometric attacks [5,6], and they may be designed so that selective robustness to different classes of attacks is obtained, which improves the watermark's flexibility.

3. Joint Photographic Experts Group standards

This section discusses the JPEG and JPEG 2000 standards. JPEG is the most widely used standard. JPEG 2000, however, is a new standard that will appear in various applications in the near future and represents the state of the art in image coding. This section explains the principles behind the algorithms used in both standards.

3.1. JPEG standard

JPEG is the well-known ISO/ITU-T standard, released in the late 1980s. Several modes are defined in JPEG; the baseline and lossless modes are the most popular ones [7,8]. The baseline mode supports lossy coding, whereas the lossless mode is intended for lossless coding only.
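To make the baseline pipeline concrete before the detailed description, here is a minimal Python sketch of the lossy path: level shift, 8 × 8 block DCT, uniform quantization and zig-zag reordering. A single scalar quantization step stands in for the full 64-entry quantization table, the Huffman stage is omitted, and all numeric choices are illustrative.

```python
import numpy as np

N = 8  # JPEG baseline block size

def dct_matrix(n=N):
    # Orthonormal DCT-II basis matrix; row k holds the k-th cosine basis vector.
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0, :] /= np.sqrt(2)
    return C * np.sqrt(2.0 / n)

def zigzag_indices(n=N):
    # (row, col) pairs along anti-diagonals, alternating direction as in JPEG.
    return sorted(((r, c) for r in range(n) for c in range(n)),
                  key=lambda rc: (rc[0] + rc[1],
                                  rc[0] if (rc[0] + rc[1]) % 2 else rc[1]))

def encode_block(block, qstep=16):
    # Level shift by 2**(n-1) for n = 8 bit samples, 2-D DCT, quantize, reorder.
    C = dct_matrix()
    coeffs = C @ (block.astype(np.float64) - 128.0) @ C.T
    quantized = np.round(coeffs / qstep)
    return np.array([quantized[r, c] for r, c in zigzag_indices()])
```

Encoding a uniform block leaves only the DC coefficient nonzero; the long runs of zero-valued AC coefficients are what make the subsequent run-length/Huffman stage effective.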
In the baseline mode, the image is subdivided into blocks of 8 × 8 pixels (64 pixels each). Each pixel of such a subimage is level shifted by subtracting the quantity 2^(n−1), where n is the number of bits per sample. The DCT of each block is then computed. After that, the block is quantized and reordered using a zig-zag pattern to form a 1-D sequence. The AC coefficients of this 1-D sequence are coded using a variable-length code, while the DC coefficient is coded relative to the DC coefficient of the previous subimage. An excellent background and examples are given in [8]. The transformation and normalization process produces a large number of zero-valued coefficients, which are discarded after normalization; entropy coding with Huffman codes is then performed. The quantization step size for each of the 64 DCT coefficients is specified in a quantization table, which remains the same for all blocks. To decompress a JPEG-compressed subimage, the decoder must first recreate the normalized transform coefficients that led to the compressed bit stream. Because a Huffman-coded binary sequence is instantaneous and uniquely decodable, this step is easily accomplished using a lookup table. Any difference between the original and reconstructed subimage is the result of the lossy nature of the JPEG compression and decompression processes [3,8]. Fig. 1 shows a JPEG block diagram for lossy compression. The lossless mode, however, is based on a completely different algorithm: a predictive scheme based on the nearest three causal neighbors, for which seven different predictors are defined (the same one is used for all samples). The prediction error is entropy coded with Huffman coding. The other modes defined in JPEG provide variants of these two basic modes, such as progressive bit streams and arithmetic entropy coding [8,9].

3.2.
JPEG 2000 standard

Fig. 1. Block diagram of JPEG algorithm (lossy mode encoder).

JPEG 2000, also developed by the International Standards Organization (ISO), is the new image compression standard. It handles both lossy and lossless compression within the same transform-based framework [4] and is based on the DWT, which provides a number of benefits over the DCT used in the previous JPEG compression techniques. The DWT encodes the image as a continuous stream, avoiding the tendency toward visible artifacts that sometimes results from the DCT's division of an image into discrete compression blocks [3,4]. The model also relies on scalar quantization, context modeling, arithmetic coding and post-compression rate allocation. The DWT used in JPEG 2000 is dyadic. It can be performed with a reversible filter (the Le Gall (5,3)-tap filter) [9], which provides for lossless coding, or with a non-reversible filter (the Daubechies (9,7)-tap bi-orthogonal filter) for higher compression, which is lossy only. The quantizer follows an embedded dead-zone scalar approach and is independent for each subband. Each subband is divided into 64 × 64 blocks, which are entropy coded using context modeling and bit-plane arithmetic coding. The coded data is organized in layers (quality levels) using the post-compression rate allocation and output to the code-stream in packets [3]. The basic scheme of JPEG 2000 can be seen in Fig. 2. The above describes Part 1 of the JPEG 2000 standard, which defines the core system; Part 2 is still in preparation [9].

3.2.1. JPEG 2000 functionality and features

JPEG 2000 includes many advanced features and supports a number of functionalities, many of which are inherent in the algorithm itself [4,9,10]. These features and functionalities are:
• High compression ratio.
• Lossy and lossless compression.
• Progressive recovery by fidelity or resolution.
• Visual (fixed and progressive) coding.
• Good error resilience.
• Arbitrarily shaped region of interest coding.
• Random access to specific regions in an image.
• Security.
• Multiple component images.
• Palletized images.
• Support for image widths and heights from 1 up to 2^32 − 1.

3.2.2. JPEG 2000 applications

Examples of potential applications that will benefit directly from JPEG 2000 are listed in Table 1.

Fig. 2. Basic encoding scheme of JPEG 2000.

Table 1. Examples of potential applications of JPEG 2000: document imaging, digital photography, scanning, color facsimile, medical imaging, Internet/Web browsing, e-commerce, image archiving, remote sensing, digital libraries, mobile.

3.3. Comparison between JPEG and JPEG 2000

In this section, we present some comparative experimental results to show the difference between JPEG and JPEG 2000. A boat image is compressed at a very low bit rate using JPEG and, at the same time, to the same degree using JPEG 2000; see Figs. 3–5. The compression ratio is 30:1. The images compressed using JPEG degrade significantly, while the images compressed using JPEG 2000 at the same compression rates do not suffer the same degree of degradation. The noise artifacts, such as blockiness, that are clearly evident with JPEG are reduced with JPEG 2000. At very high compression rates the image content is easily recognizable with JPEG 2000 but not with JPEG. This shows that JPEG 2000 outperforms JPEG at higher compression ratios. Table 2 shows the main differences between JPEG and JPEG 2000.

Fig. 3. Original boat image.

4. Experimental work

4.1. DCT-based watermarking algorithms

The two-dimensional forward DCT kernel is used here; it is defined below.

Fig. 4.
Compressed image using JPEG standard.

Fig. 5. Compressed image using JPEG 2000 with the same compression ratio as in Fig. 4.

g(x, y, 0, 0) = 1/N,   (4a)
g(x, y, u, v) = (2/N) [cos((2x + 1)uπ/2N)] [cos((2y + 1)vπ/2N)]   (4b)

for x, y = 0, 1, ..., N − 1 and u, v = 1, 2, ..., N − 1 [8]. The DCT scheme relies on some of the ideas proposed by Cox et al. [11], who propose a watermark consisting of a sequence of randomly generated real numbers with a normal distribution of zero mean and unit variance:

W = {w1, w2, ..., wN}.   (5)

Then the DCT of the whole image is computed and the DCT coefficients to be watermarked are chosen:

C = {c1, c2, ..., cN}.   (6)

The watermark is added by modifying the DCT coefficients according to

c′i = ci + α ci wi,   (7)

where i = 1, 2, ..., N and α = 0.1. If we denote the original image by I0 and the watermarked, possibly distorted, image by I*w, then a possibly corrupted watermark W* can be extracted by reversing the embedding procedure, using the inverse DCT.

Table 2. Major differences between JPEG and JPEG 2000:
JPEG (by ISO/IEC) — technologies and features: DCT, perceptual quantization, zig-zag reordering, Huffman coding, arithmetic coding; applications: Internet imaging, digital photography, image and video editing.
JPEG 2000 (by ISO/IEC) — technologies and features: DWT, new functionalities, reversible integer-to-integer and non-reversible real-to-real DWT, ROI, error resilience, progression orders, lossy to lossless in one system, better compression at low bit-rates, better at compound images and graphics (palletized); applications: digital libraries, e-commerce, Internet, digital photography, image and video editing, printing, medical imaging, mobile, color facsimile, satellite imaging, scanning, remote sensing.

The watermark is embedded in subimages of the image.
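The multiplicative embedding rule of Eq. (7), and its inversion during extraction, can be sketched as follows. This is a minimal illustration on a bare coefficient vector rather than the full image pipeline; the selection of mid-band DCT coefficients is assumed to have already taken place, and the coefficient values are synthetic:

```python
import numpy as np

def embed(coeffs, watermark, alpha=0.1):
    # Eq. (7): c'_i = c_i + alpha * c_i * w_i = c_i * (1 + alpha * w_i).
    return coeffs * (1.0 + alpha * watermark)

def extract(marked, original, alpha=0.1):
    # Reverse the embedding: w*_i = (c'_i - c_i) / (alpha * c_i).
    # Requires the original image (non-blind detection) and nonzero coefficients.
    return (marked - original) / (alpha * original)

rng = np.random.default_rng(1)
w = rng.standard_normal(1000)        # zero mean, unit variance, as in Eq. (5)
c = rng.uniform(20.0, 100.0, 1000)   # stand-in DCT coefficients (all nonzero)
recovered = extract(embed(c, w), c)
```

In the absence of an attack the recovered watermark matches exactly; compression perturbs the marked coefficients, which is what the correlation detector of Section 4.3 is designed to absorb.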
Therefore, the N × M image I is subdivided into blocks of 16 × 16 pixels (256 pixels each). The DCT of each block is computed, and the DCT coefficients are reordered into a zig-zag scan, similar to the JPEG compression algorithm [8]. Coefficients in the zig-zag ordering of the DCT spectrum are then selected and modified according to (7), where ci is the original DCT coefficient, wi is the watermark coefficient and c′i is the modified coefficient. The term α is used to tune the watermark energy: the higher the α value, the more robust, but also the more visible, the watermark. Finally, the above procedure is reversed to obtain the watermarked image: the modified DCT coefficients are reinserted in the zig-zag scan, the inverse DCT is applied, and the blocks are merged to obtain the watermarked image Iw.

4.2. DWT-based watermarking algorithms

A DWT-based approach is used. The image (I) and the watermark (W) are transformed into the DWT domain. The host image is transformed into three levels (L = 3) of DWT. Each of these levels (l = 1 to 3) produces a sequence of three detail images (j = 1, 2, 3); a gross approximation of the image at the coarsest resolution is also generated at level three (l = 3 and j = 4) [12]. The resulting coefficients are then watermarked according to

Iw j,l(x, y) = I j,l(x, y) + β(f1, f2) W j,l(x, y),

where I(x, y) is the DWT of the host image, Iw(x, y) is the watermarked image, W is the watermark, l is the DWT resolution level and j is the DWT frequency orientation. The watermarking algorithm is made adaptive by exploiting human visual system (HVS) characteristics, which increases robustness and invisibility at the same time. The HVS weight β(f1, f2) can be represented by [13,14]:

β(f1, f2) = 5.05 e^(−0.178(f1 + f2)) (e^(0.1(f1 + f2)) − 1),

where f1 and f2 are the spatial frequencies (cycles/visual angle).
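A sketch of the HVS-weighted subband embedding described above. The weight implements the β(f1, f2) expression as reconstructed from the text; for brevity, a single-level Haar analysis stands in for the three-level dyadic DWT, and the frequency arguments passed to β are illustrative:

```python
import numpy as np

def hvs_weight(f1, f2):
    # beta(f1, f2) = 5.05 * exp(-0.178(f1+f2)) * (exp(0.1(f1+f2)) - 1)
    f = f1 + f2
    return 5.05 * np.exp(-0.178 * f) * (np.exp(0.1 * f) - 1.0)

def haar_dwt2(img):
    # One dyadic analysis level; Haar stands in for the 5/3 or 9/7 filters.
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # vertical lowpass
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # vertical highpass
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    hl = (a[:, 0::2] - a[:, 1::2]) / 2.0
    lh = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, (hl, lh, hh)

def embed_subband(subband, watermark, f1, f2):
    # I_w = I + beta(f1, f2) * W, applied per detail subband.
    return subband + hvs_weight(f1, f2) * watermark
```

The weight vanishes at zero frequency and peaks in the mid frequencies, so the watermark energy is concentrated where the eye is least sensitive while the approximation subband is left untouched.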
The watermark will, however, be spatially localized at the high-resolution levels of the host image, which makes it more robust. Finally, the inverse DWT is applied to form the watermarked image. Fig. 3 shows the block diagram of the proposed method.

4.3. Watermarking detection

The embedding function makes small modifications to Iorig. For example, if W = (w1, w2, ...) = (1, 0, 1, 1, 0, ...), the embedding operation may involve adding or subtracting a small quantity α from each pixel or sample of Iorig when wi is 1 or 0, respectively. During the second stage of the watermarking system, the detecting function D uses knowledge of W, and possibly Iorig, to extract a sequence W* from the signal R undergoing testing:

D(R, Iorig) = W*.

The signal R may be the watermarked signal Iw, a distorted version of Iw resulting from attempts to remove the watermark, or an unrelated signal. The extracted sequence W* is compared with the watermark W to determine whether R is watermarked. The comparison is usually based on a correlation measure ρ and a threshold ρ0 used to make the binary decision (Z) on whether the signal is watermarked. To check the similarity between W (the embedded watermark) and W* (the extracted watermark), the correlation between them can be found using

ρ(W, W*) = (W · W*) / √(W* · W*),

where W · W* is the scalar product of the two vectors. The decision function is

Z(W*, W) = 1 if ρ ≥ ρ0, 0 otherwise,

where ρ is the value of the correlation and ρ0 is a threshold. A '1' indicates that a watermark has been detected, while a '0' indicates that it has not.
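The correlation measure and decision function just defined translate directly into code (ρ0 = 0.1 is the paper's empirical threshold; note that with this unnormalized ρ, a genuine match grows with the watermark length):

```python
import numpy as np

def correlation(w, w_star):
    # rho(W, W*) = (W . W*) / sqrt(W* . W*)
    return float(np.dot(w, w_star) / np.sqrt(np.dot(w_star, w_star)))

def detect(w, w_star, rho0=0.1):
    # Z = 1 if rho >= rho0, else 0.
    return 1 if correlation(w, w_star) >= rho0 else 0
```

A self-match gives ρ equal to the Euclidean norm of W, far above the threshold, while an orthogonal (unrelated) sequence gives ρ near zero.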
In other words, if W and W* are sufficiently correlated (ρ greater than some threshold ρ0), the signal R has been verified to contain the watermark, which confirms the author's ownership rights to the signal; otherwise, the owner of the watermark W has no rights over the signal R. The detection threshold ρ0 was set empirically to 0.1 in our experiments, based on examination of the correlation of random sequences.

4.4. Results and discussion

Figs. 6–9 show the experimental results. The DCT- and wavelet-based watermarking algorithms described in this paper were implemented in the Matlab environment. Different watermarked images were exposed to JPEG and JPEG 2000 at different compression ratios. The results are recorded in Table 3, which shows the correlation coefficients of the watermarking detector after compression using JPEG and JPEG 2000. The compression ratio was varied from 5 to 45. The recorded data cover the wavelet and DCT techniques.

Fig. 6 shows the DCT-based watermarking algorithm exposed to both JPEG and JPEG 2000. It is clear from the figure that the robustness of this watermarking algorithm against JPEG is better than against JPEG 2000. On the other hand, the wavelet-based watermarking technique is less robust when exposed to a JPEG attack than to JPEG 2000, as shown in Fig. 7. In another view of the recorded data, Fig. 8 shows the results of JPEG attacks on the DCT and wavelet watermarking techniques: the robustness of the DCT algorithm is better than that of the wavelet algorithm. However, the robustness of the wavelet algorithm against JPEG 2000 outperforms that of the DCT algorithm under the same attack. One can conclude from this that compatibility between the watermarking algorithm and the compression standard may play an important role in the robustness of the watermarking. When images are compressed at a very high compression ratio, images compressed using JPEG-DCT degrade significantly, whereas images compressed using JPEG 2000 at the same compression rates do not suffer the same degree of degradation. The study needs further investigation over a wide range of watermarking algorithms, in either the DCT or the wavelet domain, to generalize this result.

Fig. 6. Comparison between JPEG and JPEG 2000 on a DCT-based watermarking technique.
Fig. 7. Comparison between JPEG and JPEG 2000 on a wavelet-based watermarking technique.
Fig. 8. Results of JPEG-DCT attacks on DCT and wavelet techniques.

5. Concluding remarks

The goal of this paper is to evaluate the performance of DCT and wavelet watermarking techniques against JPEG and JPEG 2000 attacks. The paper shows that there are compatibility issues between the robustness of the watermarking algorithms and the compression standards. More investigation is needed on a wide range of DCT- and wavelet-based watermarking algorithms to investigate the generality of this conclusion.

Fig. 9. Results of JPEG 2000 attacks on DCT and wavelet techniques.

Table 3. Experimental results (detector correlation coefficient vs. compression ratio, CR):

CR | DCT-based: JPEG-DCT | DCT-based: JPEG 2K | Wavelet-based: JPEG-DCT | Wavelet-based: JPEG 2K
5  | 1     | 1     | 1    | 1
10 | 0.99  | 0.99  | 0.98 | 1
15 | 0.99  | 0.975 | 0.97 | 1
20 | 0.98  | 0.960 | 0.95 | 1
25 | 0.96  | 0.945 | 0.93 | 0.985
30 | 0.95  | 0.940 | 0.92 | 0.985
35 | 0.94  | 0.92  | 0.90 | 0.985
40 | 0.93  | 0.91  | 0.89 | 0.985
45 | 0.925 | 0.9   | 0.79 | 0.970

During the standardization process of JPEG 2000 there was discussion about how, and whether, watermarking should form part of the standard. The requirements regarding security have been identified in the framework of JPEG 2000. However, there has been neither in-depth clarification nor a harmonized effort to address watermarking issues.
The initial drafts of the JPEG 2000 standard did not mention the issue of watermarking. However, there is a plan to examine how watermarking might best be applied within JPEG 2000. The potential, therefore, is that watermarking technology will be used in conjunction with JPEG 2000.

References

[1] C. Busch, W. Funk, S. Wolthusen, Digital watermarking: from concepts to real-time video applications, IEEE Computer Graphics and Applications 19 (1) (1999) 25–35.
[2] F. Hartung, M. Kutter, Multimedia watermarking techniques, Proceedings of the IEEE 87 (7) (1999) 1079–1106.
[3] D. Santa-Cruz, T. Ebrahimi, A study of JPEG 2000 still image coding versus other standards, in: Proceedings of the X European Signal Processing Conference (EUSIPCO), Tampere, Finland, September 5–8, vol. 2, 2000, pp. 673–676.
[4] ISO/IEC JTC 1/SC 29/WG 1, ISO/IEC FCD 15444-1: Information technology JPEG 2000 image coding system: core coding system, WG 1 N 1646, pp. 1–205, March 2000. Refer to: http://www.jpeg.org/FCD15444-1.htm.
[5] http://www.tsi.enst.fr/~maitre/tatouage//icip2000.html.
[6] M. Kutter, S.K. Bhattacharjee, T. Ebrahimi, Towards second generation watermarking schemes, in: Proceedings of the International Conference on Image Processing (ICIP 99), vol. 1, October 1999, pp. 320–323.
[7] http://www.jpeg.org/.
[8] R. Gonzalez, P. Wintz, Digital Image Processing, second ed., Addison-Wesley, Reading, MA, 1987.
[9] D. Santa-Cruz, T. Ebrahimi, J. Askelöf, M. Larsson, C. Christopoulos, JPEG 2000 still image coding versus other standards, in: Proceedings of the SPIE's 45th Annual Meeting, Applications of Digital Image Processing XXIII, vol. 4115, San Diego, California, July 30–August 4, 2000, pp. 1–10.
[10] D. Santa-Cruz, T. Ebrahimi, An analytical study of JPEG 2000 functionalities, in: Proceedings of the IEEE International Conference on Image Processing (ICIP), vol. 2, Vancouver, Canada, September 10–13, 2000, pp. 49–52.
[11] I. Cox, J. Kilian, T. Leighton, T.
Shamoon, Secure spread spectrum watermarking for multimedia, Tech. Rep. 95-10, NEC Research Institute, 1995.
[12] C. Burrus, H. Guo, Introduction to Wavelets and Wavelet Transforms: A Primer, first ed., Prentice-Hall, Englewood Cliffs, NJ, 1998.
[13] M. Levine, Vision in Man and Machine, McGraw-Hill, New York, 1985.
[14] R. Clark, An introduction to JPEG 2000 and watermarking, IEE Seminar on Secure Images and Image Authentication, 2000, pp. 3/1–3/6. http://ltswww.epfl.ch/~ebrahimi/JPEG2000Seminar.

A comparative study of digital watermarking in JPEG and JPEG 2000 environments

work_nrngwd3qmzffvdajnng5kylrmi ----

079 A NEW INSULT TECHNIQUE FOR A LARGE ANIMAL SURVIVAL MODEL OF HUMAN INTRA-ARTICULAR FRACTURE

Osteoarthritis and Cartilage Vol. 17, Supplement 1, S51

ated with cartilage volume loss (p=0.284, rho=-0.60). Similarly, the increase in GCA correlated significantly with less severe cartilage defects (p=0.001, rho=-0.99), joint effusion (p=0.041, rho=-0.89) and BML (p=0.004, rho=-0.97). At week 26, higher PVF correlated significantly (p=0.013, rho=0.95) with more severe meniscal tears, while higher CGA correlated (p=0.037, rho=-0.90) with cartilage volume loss.
In line with these findings, the evolution of meniscal tears correlated significantly with less osteophytosis (p=0.013, rho=-0.95) and joint effusion (p=0.028, rho=-0.92).

Conclusions: This exploratory study reveals multiple binary associations between a number of joint structural defects and the extent of OA-induced functional disability. The data revealed that PVF and GCA are mainly affected by BML and cartilage defects, whereas meniscal integrity is more affected by gait biomechanics. These results highlight the need for a physiopathologically based statistical analysis strategy to better understand the structure-activity relationships of the injured joint.

078 EARLY SYNOVIAL RESPONSES TO ANTERIOR CRUCIATE LIGAMENT AUTOGRAFTING IN THE OVINE STIFLE JOINT

B.J. Heard, Y. Achari, M. Chung, N.G. Shrive, C.B. Frank. Univ. of Calgary, Calgary, AB, Canada

Purpose: Anterior cruciate ligament (ACL) reconstruction using tendon autografts aims to restore the function of a completely damaged ACL. However, evidence suggests that such grafts may be less than ideal due, in part, to abnormal graft tensioning and perhaps related post-surgical inflammation. We have developed a biomechanically idealized ACL autograft model (using the native ACL itself as an immediate replacement, with and without excessive 'graft' tensioning) to study the biological responses to such surgery. The present study focussed on identifying alterations in early markers of synovial inflammation and tissue remodelling proteinases. The hypothesis was that all grafts would induce inflammation, but that overtensioning of ACL grafts would further increase the expression of inflammatory and tissue remodelling bio-markers.

Methods: All surgeries were performed using protocols approved by the animal care committee of the University of Calgary.
Fifteen skeletally mature, 3–4-year-old female Suffolk-cross sheep were allocated equally into 5 groups: anatomical ACL-core, twist tight ACL-core, twist loose ACL-core, sham, and non-operated controls. The ACL core surgeries were accomplished via arthrotomy of the right stifle joint. The patella was dislocated medially to expose the ACL. The proximal head of the lateral femoral condyle was the entry point for a guide pin that was inserted to mark the femoral insertion of the ACL. A dry nitrogen drill was used to core down to the marked insertion. After the ACL insertion was freed, the core was either a) immediately fixed in place (anatomical), b) pulled 1 mm away from the joint while being twisted 90 degrees and then fixed (twist tight), or c) pushed 1 mm into the joint while being untwisted 90 degrees and then fixed (twist loose). For shams, the core was stopped at the halfway mark between the surface of the proximal femoral condyle and the femoral ACL insertion, a distance of roughly 1.5 cm. The non-operated controls were age matched and housed for the same duration as the experimental subjects. All animals were sacrificed 2 weeks post-injury. At dissection, synovium from both left and right stifle joints was isolated and examined for different matrix metalloproteinases, interleukins and lubricin using real-time RT-PCR.

Results: Synovial tissue from the treated joint of the anatomical, twist tight and twist loose core groups all exhibited significant increases in the mRNA levels of the matrix metalloproteinases examined. MMP-1 and MMP-3 mRNA levels exhibited maximum elevation in the twist-tight core group, followed by the anatomically placed ACL and twist-loose core groups. However, MMP-13 mRNA levels exhibited maximum elevation in the anatomical core group, followed by the twist tight and twist loose groups.
The matrix metalloproteinase mRNA levels did not change in either the contralateral limbs of the treated groups or the limbs of the non-operated controls. Investigation of IL-1β mRNA levels revealed an 8–10 fold increase in the three treated groups, with little variation between the groups. Interestingly, IL-6 did not exhibit any change in mRNA levels in any of the groups. Lubricin mRNA levels followed the same pattern as MMP-1 and MMP-3.

Conclusions: The tension of an ACL graft can influence the mRNA levels of certain MMPs, interleukins and lubricin in the synovium, which in turn may influence the structure and biomechanical properties of the graft.

079 A NEW INSULT TECHNIQUE FOR A LARGE ANIMAL SURVIVAL MODEL OF HUMAN INTRA-ARTICULAR FRACTURE

Y. Tochigi, T.E. Baer, P. Zhang, J.A. Martin, T.D. Brown. Univ. of Iowa, Iowa City, IA

Purpose: To model human intra-articular fractures (IAFs) in the porcine hock joint (human ankle analogue) in vivo, a new fracture insult technique/system has been developed. In this technique, a joint is subjected to an injurious transarticular compressive force pulse, so as to replicate the most typical mechanism of human distal tibial "plafond" fracture. Figure 1 shows the custom interface device developed for this technique. The "tripod" of pins connects the distal impact face to the talus without soft tissue intervention, while the proximal fixator holds the tibial shaft tilted posteriorly. In this "offset" condition, a force pulse applied to the joint causes sudden elevation of vertical shear stresses in the anterior tibial juxta-articular bone. With guidance from a stress-rising sawcut placed in the anterior cortex, well-controlled, reproducible anterior malleolar fractures are created (Figure 2). For an animal model of IAF to be scientifically meaningful, pathophysiological realism of fracture-associated cartilage injury is essential.
The purpose of this study was to document the cell-level cartilage pathology introducible using this "offset" fracture impact technique.

Figure 1. The "tripod" device-to-bone interface system.

Methods: Four fresh porcine hock specimens, in which chondrocytes were fully viable, were utilized. Of these, two were subjected to fracture insult using the offset impact technique, with a force pulse (30 joules) delivered by a drop-tower device. In the other two, morphologically similar distal tibial simulated fractures were created using a sharp osteotome (non-impact osteotomy control). Macroscopic fracture morphology was recorded by means of digital photography. The fractured distal tibial surface, harvested as osteoarticular fragments, was then incubated in culture medium.
Cells stained green are alive, while those stained red are dead. Conclusions: The cell death spatial distribution patterns in the impacted fracture fragments, concentrated near fracture edges, were analogous to those in human clinical IAFs (Kim et al., OA & Cartilage 2002; Hembree et al., JBJS-Br 2007). The striking difference in chondrocyte viability with respect to non-impact os- teotomy fragments suggests that the acute mechanical damage to cartilage associated with the impaction fracture insult was very different from that with non-impact “fracture” simulation by os- teotomy. This new fracture insult technique appears to have the potential to replicate the cell-level pathology of human IAFs in a large animal survival model. 080 EVALUATION OF A NOVEL SURGICAL TECHNIQUE IN AN IN VIVO OVINE MODEL TO ACCESS THE POSTERIOR HORN OF THE MEDIAL MENISCUS R.B. Modesto 1, R.L. Mauck 2, B.M. Baker 2, C. Underwood 1, T.P. Schaer 1 1Univ. of Pennsylvania Sch. of Vet. Med. Dept. of Clinical Studies New Bolton Ctr., Kennett Square, PA; 2Univ. of Pennsylvania Sch. of Med. Dept. of Orthopaedic Surgery, Philadelphia, PA Purpose: Despite advances in surgical technique and fixation devices, a pressing clinical demand exists for new strategies for repairing the knee meniscus. Appropriate animal models are required to test engineered construct safety and efficacy in the context of the demanding loadbearing environment of native tissue. Ovine models are particularly useful, as their menisci are closer in size to that of human beings and show similar loading patterns. However, current open surgical approaches to ovine menisci do not provide full access and visualization of the red-white and white zone in the posterior horn, the most clinically relevant portion of the medial meniscus. The purpose of this in vivo pilot study was to evaluate a new surgical approach allowing for improved access to the posterior horn of the medial meniscus. 
Methods: Under general anesthesia, the medial compartment was accessed via a lateral parapatellar arthrotomy subluxating the patella medially. A medial femoral condyle osteotomy was planned as a vertical cut starting directly medial to the femoral origin of the caudal cruciate ligament aiming to the medial transition of the femoral diaphysis and metaphysis. The osteotomized femoral condyle was then reflected to reveal the medial meniscus in its entirety. The osteotomy was anatomically reduced and repaired with two 4.5mm cortical bone screws placed in lag fashion. A teno- tomy of the common calcanean tendon was performed about 3 cm proximal to the calcaneus to obtain temporary non-weight bearing in the operated limb. Animals were confined until weight bearing. During that period they obtained three times daily physiotherapy of passive range of motion. Pre-, postoperative and endterm ra- diographs were obtained. Animals were euthanized at 6 months and both stifles were grossly examined, followed by histological, biochemical and mechanical analyses of medial tibial plateau and histological analysis of the medial meniscus. Contralateral limbs were used as controls. Results: The medial femoral condyle osteotomy consistently of- fered full access to the medial meniscus and allowed for excellent visualization of the pars intermedia and pars posterior. Animals had uneventful recoveries and did not bear weight on the oper- ated limb for 5-6 weeks postoperative while tolerating physiother- apy well. Gross visualization of the articular surfaces showed no marked change in cartilage appearance in operated or contralat- eral limbs. Histopathology of the medial menisci demonstrated mild decrease in proteoglycan content, but this was not significant. Histological analysis of the articular cartilage of the tibial plateau revealed no adverse changes in cartilage structure or differences in intensity of collagen or proteoglycan staining between groups. 
Biochemical analysis of proteoglycan content showed no significant changes between operated limbs and contralateral controls. Conclusions: Although preclinical models replicate some of the features of the disease process modeled, they invariably fail to reproduce the complexity of the degenerative disorder. The current study is an attempt to create a relevant animal model for studying regenerative repair of the posterior horn of the medial meniscus for the human patient. Use of a medial femoral condylotomy greatly improves surgical access to the meniscus without injuring associated soft tissue structures. Our findings and observations thus far suggest that this model could greatly enhance regenerative research of the knee meniscus in large animal models when macroscopic devices need to be placed and evaluated in vivo. work_nsc7kqqlwrecrhsri6pwo5onpm ---- Medical Teacher, ISSN 0142-159X (Print), 1466-187X (Online). Ahmed Yaqinuddin (2012) An innovative approach to inculcate analytical and non-analytical clinical reasoning among medical students, Medical Teacher, 34:6, 511-512, DOI: 10.3109/0142159X.2012.675098. Published online: 10 Apr 2012.
more about the discussed conditions. On the other hand, digital photographs represent the basic tool of many modern educational methods (online dermatology courses, problem-based learning), and may have a place in teledermatology (Whited et al. 1998). Despite these advantages of digital photograph-based teaching, we emphasize that this modality is not a substitute for formal clinical teaching; it is rather a good supplement. Amri, M., ELHani, I., & Alkhateeb, A.A., College of Medicine, King Faisal University, Al Ahsa, Kingdom of Saudi Arabia. E-mail: montassaramri@yahoo.fr References Kaliyadan F, Manoj J, Venkitakrishnan S, Dharmaratnam AD. 2008. Basic digital photography in dermatology. Indian J Dermatol Venereol Leprol 74:532–536. Whited JD, Mills BY, Hall RP, Drugge RJ, Grichnik JM, Simel DL. 1998. A pilot trial of digital imaging in skin cancer. J Telemed Telecare 4:108–112. Thomas the Tank Engine significantly improves the understanding of oxygen delivery and hypoxaemia Dear Sir Physiological concepts are often poorly understood, with students 'knowing' but not 'understanding' (Berlucchi & Di Benedetta 2000).
Therefore, with respect to oxygen delivery and hypoxaemia, we have used images of Thomas the Tank Engine delivering coal to improve comprehension (Cosgrove et al. 2006). To evaluate the process we presented two 30-minute lectures to Year One medical students. The control lecture was a conventional presentation; the study lecture contained additional images of Thomas the Tank Engine. LREC approval was unnecessary; HiT Entertainment-UK granted permission to use the imagery. Students were randomised into four groups (A n = 73, B n = 56, C n = 59, D n = 53). Groups A and B received the control lecture, while C and D received the study lecture. Groups A and C undertook a pre-lecture MCQ to assess background knowledge. All students completed a post-lecture MCQ and a qualitative evaluation of the lecture. Evaluation scores (out of 20 for the MCQs) were collected using an ARS-KEEpad system and compared using the Mann-Whitney U-test. A p-value ≤ 0.05 was regarded as significant. The post-lecture MCQ scores failed to demonstrate any evidence of gender stereotyping or priming (A vs. B, p = 0.6 and C vs. D, p = 0.4). Both lectures significantly improved post-lecture MCQ scores (p < 0.001). Group A had a significantly higher pre-lecture MCQ score compared to group C (median 16 vs. 12, p < 0.001); there was no difference post-lecture between A and C (median 18 vs. 17, p = 0.4). Qualitatively, the imagery also made the lecture significantly more organised (p = 0.006), more interesting and stimulating (p < 0.001) and improved understanding (p < 0.001). At 6 months there was no significant difference in MCQ scores (p = 0.4). Groups A and B had the same median MCQ score at 6 months as the group A pre-lecture MCQ (p = 0.9). Groups C and D had a significantly higher median MCQ score at 6 months compared to group C pre-lecture (p < 0.001).
In conclusion, analogous imagery significantly improved aspects of understanding hypoxaemia; in students with lower levels of background knowledge, the imagery allowed them to attain knowledge levels similar to their peers and minimised knowledge decline at 6 months. Thus, compared to a didactic lecture, imagery of Thomas the Tank Engine delivering coal can provide an improved structure for lecture delivery and knowledge retention, assisting medical students in engaging with learning and understanding the processes of oxygen delivery and hypoxaemia. The improvement in scores post-lecture can also assist the lecturers (both registered with the General Medical Council) in their revalidation process. J. Cosgrove, I. Nesbitt, P. Laws & M. Baruch, Newcastle upon Tyne Hospitals NHS Foundation Trust, Perioperative and Critical Care, Freeman Hospital, Freeman Road, High Heaton, Newcastle upon Tyne, NE7 7DN, UK. E-mail: joe.cosgrove@nuth.nhs.uk M. Sawdon, School of Medicine and Health, Stockton-on-Tees, Durham University, TS17 6BH, UK J. Green, Department of Anaesthesia, Intensive Care and Pain Management, University of Alberta, Edmonton, Alberta, Canada K. Fordy, Department of Anaesthesia, Intensive Care, Sunderland Royal Hospitals, Sunderland, UK D. Kennedy, School of Medicine, Newcastle University, Newcastle upon Tyne, NE7 7DN, UK References Berlucchi G, Di Benedetta C. 2000. The harmonisation of physiology teaching: A tool for its recognition in European countries. Eur J Physiol 441:165–166. Cosgrove JF, Fordy K, Hunter I, Nesbitt ID. 2006. Thomas the Tank Engine and Friends improve the understanding of oxygen delivery and the pathophysiology of hypoxaemia. Anaesthesia 61:1069–1074.
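The MCQ comparisons in the letter above were made with the Mann-Whitney U-test. As a minimal sketch of how such a rank-based comparison works (pure Python, normal approximation without tie correction; the function name and any score lists passed to it are illustrative, not the study's data):

```python
import math

def mann_whitney_u(a, b):
    """Two-sided Mann-Whitney U test for two independent samples,
    using the large-sample normal approximation (no tie correction).
    Returns (U statistic for sample a, two-sided p-value)."""
    n1, n2 = len(a), len(b)
    pooled = sorted((value, idx) for idx, value in enumerate(a + b))
    ranks = [0.0] * (n1 + n2)
    i = 0
    while i < len(pooled):
        # walk over a run of tied values and give them their average rank
        j = i
        while j < len(pooled) and pooled[j][0] == pooled[i][0]:
            j += 1
        avg_rank = (i + j + 1) / 2.0  # ranks are 1-based
        for k in range(i, j):
            ranks[pooled[k][1]] = avg_rank
        i = j
    rank_sum_a = sum(ranks[:n1])          # indices 0..n1-1 belong to sample a
    u = rank_sum_a - n1 * (n1 + 1) / 2.0  # U statistic for sample a
    mu = n1 * n2 / 2.0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    z = (u - mu) / sigma
    p = 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))
    return u, p
```

For two samples drawn from the same distribution the approximation returns p close to 1; for clearly separated samples it returns a very small p. A real analysis would normally rely on an exact method or a vetted library implementation, especially for small groups.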
An innovative approach to inculcate analytical and non-analytical clinical reasoning among medical students Dear Sir Clinical reasoning is a cognitive process by which clinicians diagnose and manage illnesses, and it is considered the primary determinant of clinical competence (Charlin et al. 2000). Over the past 15 years, research directed at better understanding this complex, multifaceted skill has led to the consensus that clinicians generally use two types of approaches in clinical reasoning: analytical and non-analytical (Pelaccia et al. 2011). Non-analytical reasoning is a tacit, heuristic approach that is based on previous experience (Pelaccia et al. 2011). It matches the specific clinical presentation under consideration with previously encountered similar cues, signs and situations. The advantage of this type of reasoning is that it is effortless and quick, and the decision is made at the subconscious level. By contrast, analytical reasoning is a rational assessment of the clinical problem based on in-depth analysis of potential causes. The decision is based on conscious application of rules that have been acquired through learning. The advantage of this type of reasoning is that the decision is made after consideration and elimination of several possible causes, thereby increasing the accuracy of diagnosis. At the College of Medicine, Alfaisal University, we have developed an innovative approach which allows students to develop both heuristic and analytical reasoning using a case-based method of learning. In this approach, both the students and an experienced clinical tutor work as a group to solve a clinical case without prior knowledge of the diagnosis. Both students and tutors are presented with the clinical case and its complete medical record. The group works together to analyse the available information and reach a definitive diagnosis.
During the process the tutor shares with the students his experiential reasoning regarding the case. After the correct diagnosis is revealed to the group, they discuss the clues which led them to the diagnosis and the distractors which misled them. In this method, since the tutors are unaware of the final diagnosis of the case, they use their heuristic as well as analytical skills to solve the problem alongside the students. The approach of using experienced clinical tutors to work with novices to solve a clinical case is currently used successfully at Alfaisal. This represents an innovative case-based approach which inculcates both heuristic and analytical thinking. Ahmed Yaqinuddin, Department of Anatomy and Cell Biology, College of Medicine, Alfaisal University, Riyadh, KSA. E-mail: ayaqinuddin@alfaisal.edu References Charlin B, Tardif J, Boshuizen HP. 2000. Scripts and medical diagnostic knowledge: Theory and applications for clinical reasoning instruction and research. Acad Med 75:182–190. Pelaccia T, Tardif J, Triby E, Charlin B. 2011. An analysis of clinical reasoning through a recent and comprehensive approach: The dual-process theory. Med Educ (online) 16. A proposed program for revising training in diagnostic and interventional cardiac catheterization using a didactic lecture series and virtual reality simulation Dear Sir Cardiac catheterization is an invasive cardiology procedure learned during cardiology fellowship. The current paradigm of training cardiology fellows in the skills required for cardiac catheterization involves direct patient contact/care, i.e., learning on patients in an "apprenticeship" model (Gallagher & Cates 2004). In most cardiology programs there is no formalized curriculum, despite the existence of knowledge and competency standards.
At the University of Pittsburgh we have endeavored to develop a more complete training process for cardiology fellows in the adult learner model, which will enable them to meet the Accreditation Council for Graduate Medical Education's (ACGME) six core competencies while achieving a higher level of learner satisfaction. To accomplish this, we first performed a task analysis of training components in cardiac catheterization. We then performed a targeted needs assessment of our graduating fellows (N = 11), which showed poor learner satisfaction with current teaching and a need for simulation training, averaging 2.2 and 4.0, respectively, on a 5-point Likert scale. We then used the content of the task analysis, the data from the needs assessment, and the guidelines created by the ACGME and the Core Cardiology Training Symposium to create a comprehensive invasive cardiology curriculum using the latest in modern technology. Identified learner needs were molded into a workable and deliverable curriculum in invasive cardiology using an active, adult learner format addressing all three major learner domains (knowledge, psychomotor, and affective). The final curriculum involved: (1) delivery of didactic lectures via a web portal to improve fellows' knowledge in core areas of cardiology, (2) virtual reality cardiac catheterization simulation to improve psychomotor skills, and (3) formative feedback from cardiology attendings on performance of cardiac catheterization with real patients. This curriculum was initiated at the University of Pittsburgh in July 2011 and will end its pilot phase in July 2012. We anticipate the data generated will show that innovative use of technology in an adult learner format can improve the abilities and satisfaction of interventional cardiology trainees.
This curriculum can serve as a model for other work_nufed6ib65da3pigbzr5fo64ae ---- Crime Prevention and Community Safety: An International Journal 2005, 7 (3), 63. Review: Contrast: An Investigator's Basic Reference Guide to Fingerprint Identification Concepts by Craig A. Coppock. Springfield, IL: Charles C. Thomas (2001). ISBN 0 398 07130 6 (131 pages, £20.91). Reviewed by Sabrina Henze. Contrast was compiled by the author while teaching fingerprint science to experienced police officers, new crime-scene investigators and criminal justice students. The book covers several major topics, including friction skin characteristics, exemplar and latent prints, identification errors, photography and computerised databases. This is a valuable basic reference or introductory text.
Unfortunately, it is very poorly edited and suffers from numerous distracting typographical, punctuation and printing errors (averaging at least one per page). The section on fingerprint identification concepts in Chapter 8 would be more logically placed near the beginning of the book. Better cross-referencing might have eliminated the need for some repetition: for example, the illustrations of common friction skin characteristics in Chapter 2 appear again in Chapter 8. The author digresses at a few points, and some portions of the text do not belong in an introductory guide, e.g. 'Friction skin has even been found on the gripping side of the tail of a red howler monkey' and 'In Heisenberg's indefinable quantum world, our classical laws and logic do not apply. Of course, fingerprint identification need not go this far'. Most illustrations are clear, especially the reproductions of ten-print cards, the FBI diagram of friction skin structure, and the ninhydrin and cyanoacrylate flowcharts. Others are less so, like the illustration of a fingerprint impression made through a thin latex glove onto adhesive tape. Colour photos would have enriched the chapter on photography, image enhancement and colour. The useful description of the National Crime Information Center fingerprint classification system and its drawbacks would perhaps be easier to assimilate in tabular form. As Contrast is also designed as a reference guidebook, a few additional tools would make it more user-friendly. A glossary, including terms like 'dactylography'/'dactyloscopy', 'dysplasia', 'exemplar' versus 'latent' prints, 'incipient ridges', 'reversals', 'tension' and 'flexion' creases, and 'edgeology' and 'poroscopy', would be helpful, along with an expanded index.
The most interesting passages are the comparison of inked and computerised prints, the discussion of reversals and why they are problematic, and the side note about footprint identification (plantar surfaces are often better protected in fires and aeroplane or car accidents, though this technique has largely been replaced by DNA identification). A discussion of automated fingerprint identification systems (AFIS) worldwide and of foreign networking and cooperation would have added an interesting international perspective. While those working regularly with fingerprint evidence would probably wish to consult more in-depth and up-to-date sources on AFIS, digital photography and live-scan technology (though ideas can be found in the solid bibliography), the book gives a clear outline of the possibilities and limitations of fingerprint collection, development and classification techniques. Sabrina Henze, Security Without Borders, Paris, France. work_nujhfqdhhrbcbjndylpymkirim ---- NALDC: The New National Agricultural Library Digital Collections. The NALDC offers access to collection materials available in digital format. Use this site to explore several collections of full-text historical documents on the agricultural and food sciences.
Access more digitized materials through the National Agricultural Library collection available in the Internet Archive: https://archive.org/details/usdanationalagriculturallibrary NALDC Collection Lists: Historical Dietary Guidance Digital Collection; Pomological Watercolor Collection; Animal Welfare Act History Digital Collection; Historical Gypsy Moth Publications Collection; World Poultry Science Association Collection; Plant Inventory, U.S. Department of Agriculture Collection; Fruit and Vegetable Market News Collection; USDA Publications Collection; Rural Development Publications Collection; Organic Roots Collection; Reports of Bean Improvement Cooperative and National Dry Bean Council Research Conference; NAL Information Centers Collection. work_nwilyqphvzc7rgd5cko3vcggc4 ---- © 2008 Acta Dermato-Venereologica. ISSN 0001-5555 doi: 10.2340/00015555-0464 Acta Derm Venereol 2008; 88: 474–479 CLINICAL REPORT Topical tacrolimus was recently introduced as a novel therapeutic option in vitiligo. Excellent results were seen mainly on the face and neck areas. We treated 30 adult vitiligo patients with tacrolimus 0.1% ointment twice daily, and compared the results with those of placebo ointment. In 20 patients, defined areas on the right arm or leg were occluded overnight with 3 different dressings. Repigmentation was evaluated quantitatively and qualitatively. Quality of life changes were assessed with the Dermatology Life Quality Index. After 6 months, treatment was stopped in 7 of 30 patients as they did not show any repigmentation; 5 of them had had no occlusive therapy.
After 12 months, 17 of 21 patients (81%) with facial involvement showed repigmentation of the face. Although no or minimal repigmentation occurred on the extremities when using tacrolimus ointment alone, 80% of the patients showed repigmentation on the arms when using additional occlusive, especially hydrocolloid, dressings. Hands, feet and lower legs were unresponsive. The best results were obtained in patients with long-standing vitiligo. Only minimal side-effects were noted. There was no significant elevation in tacrolimus blood levels, taking into account that occlusion was performed only on limited parts of the body. In conclusion, tacrolimus 0.1% ointment proved an effective and safe treatment option for adult patients with vitiligo. Beyond the face and neck areas, repigmentation could be achieved only by additional occlusion. Key words: vitiligo; tacrolimus; calcineurin inhibitors; occlusive dressings; DLQI. (Accepted January 31, 2008.) Acta Derm Venereol 2008; 88: 474–479. Anke Hartmann, Department of Dermatology, University of Erlangen, Hartmannstraße 14, DE-91052 Erlangen, Germany. E-mail: anke.hartmann@uk-erlangen.de Vitiligo is a common pigmentary disorder that may seriously affect quality of life (QoL). Autoimmune mechanisms are considered responsible for the destruction of melanocytes. This theory is supported by the association of vitiligo with other autoimmune disorders and by the identification of autoantibody-dependent and direct autocytotoxic mechanisms (1). Therefore, besides surgical regimens, indicated only in stable vitiligo (2, 3), several immunomodulating strategies are increasingly used for repigmentation of active disease (4–6). Among these, topical and systemic corticosteroids and ultraviolet A (UVA) and ultraviolet B (UVB) phototherapy are the most common.
However, limited effectiveness, particularly in acral regions, and local as well as systemic side-effects, especially in long-term therapy, are the main drawbacks of these strategies and restrict their use, particularly in children. Recently, a new class of topical immunomodulators, tacrolimus and pimecrolimus, was introduced as a treatment modality for vitiligo. Both have been demonstrated to inhibit the proliferation and activation of T-lymphocytes and to be highly effective and well-tolerated in the treatment of atopic dermatitis (7). Compared with cyclosporine they are 10–100 times more effective and suited for topical use, accumulating neither locally in the skin nor systemically (8). Since 2002, clinical studies of up to 6 months treatment duration in a limited number of patients have favoured the use of calcineurin inhibitors in vitiligo, especially in children, in whom they were shown to be equally as effective as corticosteroids (9–11). However, in adult patients the effect was restricted to the face and neck, whereas lesions in other body regions showed no or little remission (12–14). In a prospective placebo-controlled right-left comparison study we investigated the efficacy and safety of tacrolimus 0.1% ointment for up to 12 months in 30 adult patients with vitiligo, and tested the influence of additional occlusive treatment. The results show that long-term treatment with topical tacrolimus is effective and safe in patients with long disease duration and that considerable repigmentation on the limbs may be achieved by occlusive therapy. PATIENTS AND METHODS Patients Thirty-one adult vitiligo patients (24 women, 7 men) with symmetrical depigmented lesions were enrolled in a prospective study after obtaining written informed consent. Prior to implementation, this investigator-driven study was approved by the ethics committee of the local medical faculty.

Occlusive Treatment Enhances Efficacy of Tacrolimus 0.1% Ointment in Adult Patients with Vitiligo: Results of a Placebo-controlled 12-month Prospective Study. Anke Hartmann, Eva-Bettina Bröcker and Henning Hamm, Department of Dermatology, University of Würzburg, Würzburg, Germany.

Patients with spontaneous repigmentation and those who had received topical or systemic treatment during the last 6 months were excluded. Thirty patients (23 women, 7 men) with a mean age of 43.7 years (age range 19–65 years) completed the study. Twenty-two patients had vitiligo vulgaris and 8 had vitiligo acrofacialis, according to the classification of Fitzpatrick (15). The mean disease duration was 15.8 years (range 8 months to 40 years; 11 patients with a disease duration of less than 10 years, 9 patients with more than 20 years, and 10 patients with a duration in between). Two patients had stable disease for more than one year; 28 had a progressive course. Half of the patients had previously been treated unsuccessfully, including with corticosteroids, UVB and UVA phototherapy, excimer laser therapy, and topical pseudocatalase. Forty percent of the patients reported a positive family history of vitiligo and 48% a positive history of atopy. Prior to study entry, 2 patients were known to have autoimmune gastritis, 2 patients psoriasis, and one patient alopecia areata. Two patients had photobiological skin type I, 9 had type II, 16 type III, 2 type IV, and one type V. Treatment regimen A thin film of tacrolimus 0.1% ointment (Protopic®, Astellas Deutschland GmbH, formerly Fujisawa Deutschland GmbH, Munich, Germany) was applied twice daily on the depigmented lesions of the face and neck (21 of 31 patients) as well as of the right upper and lower extremity (31 patients). On the left side of the limbs a bland emollient (Unguentum Cordes, Ichthyol-Gesellschaft, Hamburg, Germany) was used as placebo.
In 20 patients with widespread depigmentation on the right arm and leg, tacrolimus ointment was combined with overnight occlusive dressings in previously defined areas. A transparent household polyethylene foil (12 defined areas), a polyurethane foil (Opsite Flexifix, Smith and Nephew, Hull, UK; 7 areas), or a hydrocolloid wound dressing (Askina Transparent Biofilm, Braun, Melsungen, Germany; 5 areas) was used on depigmented areas of 80–150 cm2 in size. After 6 months, treatment was stopped if no repigmentation was observed. The responding regions were treated continuously for 12 months. Evaluation of treatment efficacy and safety Patients were examined monthly for 4 months, and thereafter bimonthly. At study entry and every 2 months, the size of the treated lesions was documented by digital photography and measured by planimetry. The latter was performed by outlining the margin of the depigmented lesion onto a transparent film with square-centimetre divisions, and calculating the area by addition of the squares included. Treatment efficacy was classified by a grading system used by most investigators: grade 1: 0–25% repigmentation (no or weak effect), grade 2: >25–50% (moderate effect), grade 3: >50–75% (good effect), and grade 4: >75% (excellent effect). Colorimetric assessment of pigmentation (melanin content) was performed with a Mexameter MX 18 (Courage & Khazaka Electronics GmbH, Cologne, Germany) based on photometry. Melanin content of the vitiligo lesions was measured before treatment and after 6 and 12 months at 4 different points randomly chosen within the depigmented areas, and was compared with uninvolved surrounding skin. Before and at the end of treatment, disease activity was assessed by the Vitiligo Disease Activity (VIDA) score described by Njoo et al. (16). In this scoring system, based on the patient's own judgement of present disease activity, the term "active" is defined as expansion of existing lesions or appearance of new ones.
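The four-point efficacy grading described above is a simple threshold mapping from percentage repigmentation; a minimal sketch (the function name is illustrative, the cut-offs are those stated in the Methods):

```python
def repigmentation_grade(percent):
    """Map percentage repigmentation to the study's four-point scale:
    grade 1: 0-25% (no or weak effect), grade 2: >25-50% (moderate effect),
    grade 3: >50-75% (good effect), grade 4: >75% (excellent effect)."""
    if not 0 <= percent <= 100:
        raise ValueError("repigmentation must be between 0 and 100%")
    if percent <= 25:
        return 1
    if percent <= 50:
        return 2
    if percent <= 75:
        return 3
    return 4
```

For example, the mean facial repigmentation of 60.5% reported in the Results falls in grade 3 ("good effect"), while 86.7% (arms under hydrocolloid dressing) falls in grade 4.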
Laboratory blood tests, including differential blood count and serum analysis of anti-thyroid antibodies, antinuclear antibodies and total IgE levels, were performed before and after 6 and 12 months of therapy. Tacrolimus serum concentrations were measured after 6 and 12 months, between 6 h and 8 h post-dose, using a micro-enzyme immunoassay (MEIA, Abbott, Wiesbaden, Germany) with a quantification limit of 1.5 ng/ml. Quality of life assessment All patients completed the Dermatology Life Quality Index (DLQI) questionnaire introduced by Finlay & Khan (17) immediately before and at the end of the study. The total DLQI score was calculated by adding the scores of each of the 10 questions, resulting in a score between 0 (no impairment of QoL) and 30 (maximal impairment of QoL). In addition, the following 2 questions relating to special problems of vitiligo patients were addressed: (i) How extensively do you have to protect your skin from sunlight? and (ii) How much are you afraid that the skin lesions will expand? As in the DLQI assessment, 4 alternative responses were allowed: "not at all", "a little", "a lot", or "very much", with corresponding scores of 0, 1, 2 and 3. Statistical analysis For statistical evaluation, Student's t-test was used, applying Applet Java software. The significance level was set at p = 0.05. RESULTS Patient profile Thirty patients (23 women, 7 men) completed 6 months of treatment; one patient dropped out because of non-compliance. After evaluation of treatment outcome at 6 months, all 23 patients responding to therapy continued the study and completed 12 months of treatment. Mean disease activity before therapy, assessed by the VIDA score, was +2.76 (Table I). In 15 patients (50%), elevated anti-thyroid antibodies were found, in 4 patients associated with slightly elevated serum thyroxin levels. In 6 patients, IgE levels were raised above the cut-off of 100 kU/l. Ten of 30 patients had positive antinuclear antibodies.
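The DLQI total described in the Methods is simply the sum of ten item scores, each rated 0 to 3. A minimal sketch of that scoring rule (the function name is illustrative):

```python
def dlqi_total(item_scores):
    """Total DLQI score: the sum of the 10 item scores, each rated
    0 ('not at all') to 3 ('very much'), giving a total between
    0 (no impairment of QoL) and 30 (maximal impairment of QoL)."""
    if len(item_scores) != 10:
        raise ValueError("the DLQI has exactly 10 questions")
    if any(score not in (0, 1, 2, 3) for score in item_scores):
        raise ValueError("each item is scored 0, 1, 2 or 3")
    return sum(item_scores)
```

The two vitiligo-specific extra questions described above use the same 0-3 response scale but are reported separately, so they are deliberately not folded into this total.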
Treatment outcome
The mean ± standard deviation (SD) amount of tacrolimus 0.1% ointment used per month was 41.6 ± 16.9 g.

Table I. Vitiligo Disease Activity (VIDA) score at entry and end of the study (before and after treatment)

VIDA score    Patients (n), before treatment    Patients (n), after treatment
+4            12                                1
+3            7                                 1
+2            5                                 -
+1            4                                 -
0             2                                 5
-1            -                                 23
Mean score    +2.76                             -0.5

After 6 months, 23 of 30 patients (77%) showed distinct repigmentation, either on the face (mean repigmentation 34.0%) or on the arms (mean repigmentation 7.3%) when treated with tacrolimus ointment. The best results were achieved when a hydrocolloid foil was used as the occlusive dressing (mean repigmentation 50%); no or weak response was seen in the other lesions. Initial repigmentation was observed in periorbital sites after a mean of 8.9 weeks (range 2–18 weeks), in perioral sites after 12.3 weeks (range 6–24 weeks), and on the lower arms after a mean of 27.8 weeks (range 8–40 weeks). On the dorsum of the hands, the lower legs (with and without occlusion) and the feet, no treatment response was seen in any of the patients. Seven of 30 patients (23%; 5 women, 2 men) did not respond to therapy at all, neither on the face nor on the extremities. Therefore, treatment was stopped in these patients after 6 months. Five of them had used no occlusive therapy; 2 had used polyurethane foil. After 12 months, 17 of 21 patients (81%) with vitiligo lesions on the face showed repigmentation. Treatment response was rated excellent (grade 4) in 10 patients (Fig. 1), good in 5 patients, moderate in one, and weak or absent (grade 1) in 5 patients. Overall repigmentation was 60.5 ± 40% in the responding patients, and the size of involved areas diminished by 79.4 ± 27.1% on average. Response to treatment showed a slight correlation with skin phototype, with a better outcome in darker-skinned patients.
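As a quick consistency check, the mean VIDA scores in Table I can be recomputed from the per-score patient counts (a sketch; the counts below are transcribed from Table I, and both columns account for all 30 patients):

```python
# Patient counts per VIDA score, transcribed from Table I
# (before treatment / after treatment).
vida_before = {4: 12, 3: 7, 2: 5, 1: 4, 0: 2}
vida_after = {4: 1, 3: 1, 0: 5, -1: 23}

def mean_vida(counts):
    """Weighted mean VIDA score across all patients in a distribution."""
    n_total = sum(counts.values())
    return sum(score * n for score, n in counts.items()) / n_total

# Each column sums to the 30 patients who completed the study.
assert sum(vida_before.values()) == 30
assert sum(vida_after.values()) == 30
```

The recomputed means agree with the reported +2.76 before and -0.5 after treatment to within rounding.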
In 16 of 17 patients (94%) who used additional occlusive dressings on the arms, repigmentation could be observed. Whereas no occlusion or the use of household polyethylene foil had no or only a weak effect, polyurethane foil and hydrocolloid dressing led to moderate and excellent repigmentation, respectively (Table II, Fig. 2). These results were confirmed by colorimetric measurement, which showed a significant increase in melanin index when polyurethane foil and hydrocolloid dressing were used (data not shown). Repigmentation started significantly earlier in the hydrocolloid group (11.3 ± 3.4 weeks) compared with the polyurethane group (29.3 ± 4.6 weeks, p < 0.0001). However, good therapeutic results could also be achieved in the polyurethane foil group despite later onset of repigmentation (Fig. 3). It was notable that patients with long disease duration showed significantly better results than those with a shorter history. Patients with a disease duration of more than 10 years had an overall mean repigmentation of lesions of the face and arms of 49.7 ± 37.9%, compared with 14.7 ± 27.3% in patients with a disease duration of less than 10 years (p = 0.0009). Overall disease activity, as assessed by the VIDA score, was significantly decreased at the end of the study (p < 0.0001) (Table I). On the placebo-treated sites, 3 patients showed initial repigmentation, and 4 patients had progressive disease despite contralateral repigmentation.

Fig. 1. A 61-year-old woman with progressive vitiligo since 14 years. (A) Before therapy. (B) After 9 months of treatment with 0.1% tacrolimus ointment.

Table II. Mean ± SD response after 12 months of treatment with tacrolimus 0.1% ointment in 30 patients. All patients had lesions on the extremities; 21 of 30 patients had lesions on the face. Seven patients stopped treatment after 6 months because of lack of response.

Site    Occlusion (no. of lesions)     % Repigmentation
Face    None (21)                      60.5 ± 40
Arms    None (26)                      1.5 ± 4.7
        Polyethylene foil (10)         10.5 ± 7.9
        Polyurethane foil (4)          38.8 ± 20.9
        Hydrocolloid dressing (3)      86.7 ± 5.8
Legs    None (13)                      0
        Polyethylene foil (2)          0
        Polyurethane foil (3)          0
        Hydrocolloid dressing (2)      0

SD: standard deviation.

Fig. 2. A 43-year-old woman with progressive vitiligo since 22 years. (A) Before therapy. (B) After 9 months of treatment with 0.1% tacrolimus ointment applied on the entire right arm, combined with hydrocolloid dressing (arrow).

In all but 2 patients, tacrolimus serum concentrations were below the detection level (1.5 ng/ml) after 6 and 12 months. Two patients showed detectable levels of 3.0 and 3.6 ng/ml, respectively, after 6 months, and levels below 1.5 ng/ml at the end of the study. However, occlusive treatment was performed on only limited parts of the body. There was no significant change in laboratory parameters or in ANA titres at the end of the treatment period. All of the patients showing elevated anti-thyroid antibodies were induced to take a specific medication. In 3 patients the anti-thyroid antibodies were increased, and in 8 of 15 patients the initially elevated levels were lowered at the end of treatment. Total IgE levels were increased in 3 patients and decreased in one at the end of the study.

Adverse effects
Side-effects were documented in 80% of the patients. In most cases they were of minor degree and confined to the application sites. Adverse events included transient facial flushing (n = 16), enhanced heat intolerance (n = 9), especially when drinking alcohol, burning (n = 4), mild pruritus (n = 2), and mild perioral folliculitis (n = 2). Facial flushing occurred irrespective of whether or not tacrolimus ointment was applied on the face.
In 11 patients it was present during the entire treatment period, and in 3 patients it was described as moderate to severe. Interestingly, all 6 patients reporting none of these adverse events belonged to the treatment group with no or minimal repigmentation. None of the side-effects led to discontinuation of therapy. No skin atrophy, telangiectasia, cutaneous infections or ocular problems were observed or reported by the patients. No systemic side-effects related to administration of tacrolimus ointment were noticed.

Quality of life

The mean ± SD DLQI score was 12.4 ± 6.5 (range 2–27) before treatment and decreased to 9.3 ± 5.6 (range 1–23) after 12 months of therapy, indicating a significant improvement in QoL (p = 0.001). The initial mean DLQI score for patients with no or poor treatment response (group A, 12.0 ± 7.2) was comparable to that for patients with moderate to excellent results (groups B–D, 12.6 ± 6.2). However, at the end of therapy, the mean DLQI score in the latter groups was 8.6 ± 4.9, in contrast to 10.3 ± 6.9 for patients in the group treated without success. Thus, response to treatment improved the DLQI score significantly (p = 0.006), whereas no or poor outcome did not. As to the 2 additional questions, 50% of the patients agreed to the need for protection against sunlight and 80% agreed to fear of progression, with the highest scores at the beginning of the study. At the end of treatment, question 1 improved by 40% and question 2 by 29.3%, as calculated according to the DLQI.

DISCUSSION

Topical corticosteroids and phototherapy are the most common treatment options currently used in vitiligo. Long-term therapy, however, which is mostly required to obtain and maintain efficient repigmentation, is hampered by local and systemic side-effects. Topical calcineurin inhibitors, especially tacrolimus, have recently been shown to be a promising new therapeutic tool in vitiligo (9–14).
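The 32% and 14% relative QoL improvements quoted in this Discussion follow from the group DLQI means reported above (groups B–D: 12.6 to 8.6; group A: 12.0 to 10.3). A minimal sketch of that arithmetic; the function and variable names are ours for illustration, not the paper's:

```python
def percent_improvement(before, after):
    """Relative improvement from baseline, as a percentage."""
    return (before - after) / before * 100

# Group mean DLQI scores from the text.
responders = percent_improvement(12.6, 8.6)       # moderate to excellent outcome
non_responders = percent_improvement(12.0, 10.3)  # no or poor outcome
print(round(responders), round(non_responders))   # -> 32 14
```

Rounding to whole percentages reproduces the figures cited in the text.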
In a right-left comparison study of 2 months' duration in children, tacrolimus 0.1% ointment was found to be as effective as 0.05% clobetasol propionate (9). In a recent open-label study of 19 patients with a minimum age of 15 years, twice-daily application of 0.1% tacrolimus over 6 months led to excellent repigmentation of the face and neck areas in 68% of patients, whereas no significant effect could be achieved on the trunk and extremities (12). Patients with a disease duration of more than 10 years were excluded. Although protection from the sun during treatment with tacrolimus ointment is recommended by the manufacturer, several clinical studies report an enhancing effect of UVB phototherapy in combination with tacrolimus in inducing repigmentation (18–20). In a study of 110 patients, combined treatment with tacrolimus and narrow-band UVB led to a good to excellent response in 42% of the lesions; however, the effect on the extremities was disappointing (18). Whereas in placebo-controlled studies the combined use of tacrolimus ointment and the 308-nm excimer laser was superior to laser treatment alone (19, 20), tacrolimus in combination with narrow-band UVB did not prove more efficient than UVB phototherapy alone (14).

Fig. 3. A 26-year-old man with progressive vitiligo for 14 years. (A) Before therapy. (B) After 12 months of treatment with 0.1% tacrolimus ointment applied on the elbows combined with occlusion by polyurethane foil.

In the present prospective study we evaluated the placebo-controlled application of tacrolimus 0.1% ointment for up to 12 months in 30 adult patients with vitiligo. By the end of treatment, 17 of the 21 patients with facial lesions (81%) showed repigmentation of the face and neck areas (mean response of 79.4% surface area). On the extremities there was almost no effect when applying tacrolimus ointment openly, which is in accordance with previous results (12).
However, when using polyurethane foil or hydrocolloid dressing for overnight occlusive treatment, moderate to excellent repigmentation could be achieved, depending on the dressing used. Thus, we showed for the first time that occlusion therapy can enhance the effect of topical tacrolimus therapy in vitiligo. As hydrocolloid dressings lead to higher stratum corneum water holding capacity compared with polyurethane foils (21), they may be more suitable for enhancing transcutaneous penetration of topically applied agents. In the past, tacrolimus and pimecrolimus have been shown to be effective in descaled plaque-type psoriasis and pyoderma gangrenosum, when applied under plastic foils as occlusive bandages, with no local or systemic side-effects (22, 23). However, serum concentrations of the drugs were not measured. In our study, all patients had tacrolimus serum levels below the detection limit of 1.5 ng/ml after 12 months, indicating that long-term topical treatment with additional long-term occlusion of areas up to 150 cm2 does not lead to accumulation of tacrolimus in the blood. Detectable tacrolimus blood levels were found in only 2 patients after 6 months, but below the level of 5–20 ng/ml intended for systemic treatment (24). In both patients, tacrolimus levels were below the detection limit in subsequent measurements. Several retrospective studies on more than 300,000 children and adults did not indicate an increased risk of non-melanoma skin cancer or lymphoma in those treated with topical calcineurin inhibitors (25, 26). Like other topical treatment modalities in vitiligo, tacrolimus achieves the best results when used on the face and neck areas. This fact has been explained by the greater density of hair follicles in these areas and, thus, the greater melanocyte reservoir (27, 28). Recently, it could be shown that tacrolimus significantly enhances the proliferation of both melanocytes and melanoblasts (29).
In our study, treatment response showed a slight correlation with skin type, as excellent results were more often seen in darker-skinned patients. Previous studies have not analysed the relationship between the duration of vitiligo and the therapeutic success of topical immunomodulators; in one recent study, patients with a disease duration of greater than 10 years were even excluded (12). Interestingly, in our study patients with a history of more than 10 years benefited more from tacrolimus treatment than patients with shorter histories. In accordance with previous observations, our study showed that vitiligo affects the QoL of patients to a significant extent, comparable to that in patients with atopic dermatitis (4, 16, 30). Depending on the treatment outcome, QoL may improve significantly (32% in patients with moderate to excellent results compared with 14% in cases with poor outcome). In conclusion, tacrolimus ointment seems to be a safe and effective treatment option for vitiligo in adult patients, even in those with long disease duration and in long-term treatment. It may significantly improve the QoL of affected patients. In regions beyond the face and neck, additional occlusion may significantly enhance the therapeutic result and may shorten the time until the start of repigmentation. Larger placebo-controlled studies using calcineurin inhibitors in combination with occlusion, penetration enhancers, or phototherapy, or in higher concentrations, are required to determine the exact role of these potent drugs in vitiligo treatment and their optimal mode of use.

ACKNOWLEDGEMENTS

The authors wish to thank Fujisawa Deutschland GmbH, Munich, Germany, for financial support of this study and Mr G. A. Krämer for performing digital photography.

Conflict of interest: This investigator-driven study was sponsored by Fujisawa Deutschland GmbH, Munich, Germany. The publication was supported by Astellas Pharma GmbH.
The authors have no other financial or non-financial conflicts of interest.

REFERENCES

1. Ongenae K, Van Geel N, Naeyaert JM. Evidence for an autoimmune pathogenesis of vitiligo. Pigment Cell Res 2003; 16: 90–100.
2. Njoo MD, Westerhof W, Bos JD, Bossuyt PM. A systematic review of autologous transplantation methods in vitiligo. Arch Dermatol 1998; 134: 1543–1549.
3. Hartmann A, Brocker EB, Hamm H. Repigmentation of depigmented hairs in stable vitiligo by transplantation of autologous melanocytes in fibrin suspension. J Eur Acad Derm Venereol 2008; 22: 624–626.
4. Hartmann A, Lurz C, Hamm H, Brocker EB, Hofmann UB. Narrow-band UVB311 nm vs. broad-band UVB therapy in combination with topical calcipotriol vs. placebo in vitiligo. Int J Dermatol 2005; 44: 736–742.
5. Tjoe M, Gerritsen MJ, Juhlin L, van de Kerkhof PC. Treatment of vitiligo vulgaris with narrow band UVB (311 nm) for one year and the effect of addition of folic acid and vitamin B12. Acta Derm Venereol 2002; 82: 369–372.
6. Hartmann A, Brocker EB, Becker JC. Hypopigmentary skin disorders: current treatment options and future directions. Drugs 2004; 64: 89–107.
7. Paller AS, Lebwohl M, Fleischer AB Jr, Antaya R, Langley RG, Kirsner RS, et al. Tacrolimus ointment is more effective than pimecrolimus cream with a similar safety profile in the treatment of atopic dermatitis: results from 3 randomized, comparative studies. J Am Acad Dermatol 2005; 52: 810–822.
8. Cather JC, Abramovits W, Menter A. Cyclosporine and tacrolimus in dermatology. Dermatol Clin 2001; 19: 119–137.
9. Lepe V, Moncada B, Castanedo-Cazares JP, Torres-Alvarez MB, Ortiz CA, Torres-Rubalcava AB. A double-blind randomized trial of 0.1% tacrolimus vs 0.05% clobetasol for the treatment of childhood vitiligo. Arch Dermatol 2003; 139: 581–585.
10. Coskun B, Saral Y, Turgut D. Topical 0.05% clobetasol propionate versus 1% pimecrolimus ointment in vitiligo. Eur J Dermatol 2005; 15: 88–91.
11. Silverberg NB, Lin P, Travis L, Farley-Li J, Mancini AJ, Wagner AM, et al. Tacrolimus ointment promotes repigmentation of vitiligo in children: a review of 57 cases. J Am Acad Dermatol 2004; 51: 760–766.
12. Grimes PE, Morris R, Avaniss-Aghajani E, Soriano T, Meraz M, Metzger A. Topical tacrolimus therapy for vitiligo: therapeutic responses and skin messenger RNA expression of proinflammatory cytokines. J Am Acad Dermatol 2004; 51: 52–61.
13. Dawid M, Veensalu M, Grassberger M, Wolff K. Efficacy and safety of pimecrolimus cream 1% in adult patients with vitiligo: results of a randomized, double-blind, vehicle-controlled study. J Dtsch Dermatol Ges 2006; 4: 942–946.
14. Mehrabi D, Pandya AG. A randomized, placebo-controlled, double-blind trial comparing narrowband UV-B plus 0.1% tacrolimus ointment with narrowband UV-B plus placebo in the treatment of generalized vitiligo. Arch Dermatol 2006; 142: 927–928.
15. King RA, Oetting WS. Disorders of melanocytes. In: Freedberg IM, Eisen AZ, Wolff K, Austen F, Goldsmith LA, Katz SI, Fitzpatrick TB, editors. Dermatology in general medicine. New York: McGraw-Hill, 1999: p. 925–936.
16. Njoo MD, Bos JD, Westerhof W. Treatment of generalized vitiligo in children with narrow-band (TL-01) UVB radiation therapy. J Am Acad Dermatol 2000; 42: 245–253.
17. Finlay AY, Khan GK. Dermatology Life Quality Index (DLQI) – a simple practical measure for routine clinical use. Clin Exp Dermatol 1994; 19: 210–216.
18. Fai D, Cassano N, Vena GA. Narrow-band UVB phototherapy combined with tacrolimus ointment in vitiligo: a review of 110 patients. J Eur Acad Derm Venereol 2007; 21: 916–920.
19. Kawalek AZ, Spencer JM, Phelps RG. Combined excimer laser and topical tacrolimus for the treatment of vitiligo: a pilot study. Dermatol Surg 2004; 30: 130–135.
20. Passeron T, Ostovari N, Zakaria W, Fontas E, Larrouy JC, Lacour JP, Ortonne JP. Topical tacrolimus and the 308-nm excimer laser: a synergistic combination for the treatment of vitiligo. Arch Dermatol 2004; 140: 1065–1069.
21. Berardesca E, Vignoli GP, Fideli D, Maibach H. Effect of occlusive dressings on the stratum corneum water holding capacity. Am J Med Sci 1992; 304: 25–28.
22. Mrowietz U, Graeber M, Brautigam M, Thurston M, Wagenaar A, Weidinger G, Christophers E. The novel ascomycin derivative SDZ ASM 981 is effective for psoriasis when used topically under occlusion. Br J Dermatol 1998; 139: 992–996.
23. Schuppe HC, Homey B, Assmann T, Martens R, Ruzicka T. Topical tacrolimus for pyoderma gangrenosum. Lancet 1998; 351: 832.
24. Prograf 5 mg/ml. Astellas Pharma GmbH, Fachinformation, April 2006.
25. Arellano FM, Wentworth CE, Arana A, Fernandez C, Paul CF. Risk of lymphoma following exposure to calcineurin inhibitors and topical steroids in patients with atopic dermatitis. J Invest Dermatol 2007; 127: 808–816.
26. Margolis DJ, Hoffstad O, Bilker W. Lack of association between exposure to topical calcineurin inhibitors and skin cancer in adults. Dermatology 2007; 214: 289–295.
27. Ortonne JP, Schmitt D, Thivolet J. PUVA-induced repigmentation of vitiligo: scanning electron microscopy of hair follicles. J Invest Dermatol 1980; 74: 40–42.
28. Cui J, Shen LY, Wang GC. Role of hair follicles in the repigmentation of vitiligo. J Invest Dermatol 1991; 97: 410–416.
29. Lan CC, Chen GS, Chiou MH, Wu CS, Chang CH, Yu HS. FK506 promotes melanocyte and melanoblast growth and creates a favourable milieu for cell migration via keratinocytes: possible mechanisms of how tacrolimus ointment induces repigmentation in patients with vitiligo. Br J Dermatol 2005; 153: 498–505.
30. Parsad D, Pandhi R, Dogra S, Kanwar AJ, Kumar B. Dermatology Life Quality Index score in vitiligo and its impact on the treatment outcome. Br J Dermatol 2003; 148: 373–374.
work_nyypqkrs2fha5mv63vcjkhqxmy ----

BioMed Central
BMC Musculoskeletal Disorders
Open Access Study protocol

The Clinical Assessment Study of the Hand (CAS-HA): a prospective study of musculoskeletal hand problems in the general population

Helen Myers*1, Elaine Nicholls1, June Handy1, George Peat1, Elaine Thomas1, Rachel Duncan1, Laurence Wood1, Michelle Marshall1, Catherine Tyson2, Elaine Hay1 and Krysia Dziedzic1

Address: 1Primary Care Musculoskeletal Research Centre, Keele University, Keele, Staffordshire, ST5 5BG, UK and 2North Staffordshire Combined Healthcare NHS Trust, Stoke-on-Trent, Staffordshire, ST2 8LD, UK

Email: Helen Myers* - h.l.myers@cphc.keele.ac.uk; Elaine Nicholls - e.nicholls@cphc.keele.ac.uk; June Handy - j.e.handy@cphc.keele.ac.uk; George Peat - g.m.peat@cphc.keele.ac.uk; Elaine Thomas - e.thomas@cphc.keele.ac.uk; Rachel Duncan - r.c.duncan@cphc.keele.ac.uk; Laurence Wood - l.r.j.wood@cphc.keele.ac.uk; Michelle Marshall - m.marshall@cphc.keele.ac.uk; Catherine Tyson - CatherineA.Tyson@northstaffs.nhs.uk; Elaine Hay - e.m.hay@cphc.keele.ac.uk; Krysia Dziedzic - k.s.dziedzic@cphc.keele.ac.uk

* Corresponding author

Abstract

Background: Pain in the hand affects an estimated 12–21% of the population, and at older ages the hand is one of the most common sites of pain and osteoarthritis.
The association between symptomatic hand osteoarthritis and disability in everyday life has not been studied in detail, although there is evidence that older people with hand problems suffer significant pain and disability. Despite the high prevalence of hand problems and the limitations they cause in older adults, little attention has been paid to the hand by health planners and policy makers. We plan to conduct a prospective, population-based, observational cohort study designed in parallel with our previously reported cohort study of knee pain, to describe the course of musculoskeletal hand problems in older adults and investigate the relative merits of different approaches to classification and defining prognosis.

Methods/Design: All adults aged 50 years and over registered with two general practices in North Staffordshire will be invited to take part in a two-stage postal survey. Respondents to the survey who indicate that they have experienced hand pain or problems within the previous 12 months will be invited to attend a research clinic for a detailed assessment. This will consist of clinical interview, hand assessment, screening test of lower limb function, digital photography, plain x-rays, anthropometric measurement and brief self-complete questionnaire. All consenting clinic attenders will be followed up by (i) general practice medical record review, (ii) repeat postal questionnaire at 18 months, and (iii) repeat postal questionnaire at 3 years.

Discussion: This paper describes the protocol for the Clinical Assessment Study of the Hand (CAS-HA), a prospective, population-based, observational cohort study of community-dwelling older adults with hand pain and hand problems based in North Staffordshire.
Published: 30 August 2007
BMC Musculoskeletal Disorders 2007, 8:85 doi:10.1186/1471-2474-8-85
Received: 25 July 2007 Accepted: 30 August 2007
This article is available from: http://www.biomedcentral.com/1471-2474/8/85
© 2007 Myers et al; licensee BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Page 1 of 9 (page number not for citation purposes)

Background

Musculoskeletal diseases have a major impact on the health of the population [1]. In adults aged 50 years and over, osteoarthritis (OA) is the cause of the majority of musculoskeletal pain and disability [2]. Although the projected increase in the proportion of older people in the population has propelled OA up the agenda of health planners and policy makers, the main focus of attention has been on lower limb OA. Less attention has been given to the hand, despite the fact that the prevalence of hand pain in the general population has been estimated between 12% and 21% [3-5] and at older ages the hand is one of the most common sites of pain and OA [6].
The relationship between symptomatic hand OA and disability in everyday life has not been studied in detail [7], and although there is some evidence that older people with hand problems suffer significant pain and disability [8] and psychological and emotional distress as a result of functional limitation [9], little is known about the specific ways in which these problems interfere with daily life, or how their impact varies with age, gender and pain severity. Although older people with hand problems view OA as a serious condition, the majority do not consult their general practitioner with their hand problem over the course of a year, even when severely affected [8].

Defining hand OA for epidemiological research and in clinical practice is problematic. Clinical criteria [10] and radiographic grading [11] for the classification of hand OA have been developed to establish uniformity in the reporting of this disease. However, population studies have shown that symptoms are only present in a minority of those with radiographic changes [12], suggesting that the clinical syndrome and the structural disease of OA appear to be separate, albeit related, entities. Consequently, it is doubtful whether the "true" prevalence of symptomatic hand OA can be captured from clinical or radiographic studies alone [10].

In North Staffordshire a programme of research into osteoarthritis in primary care is being undertaken. The programme comprises a series of linked studies designed to establish the optimal management of osteoarthritis in older adults in primary care. The clinical assessment studies are part of this programme and are prospective cohort studies whose main objective is to provide population-based evidence that will indicate the most useful way of assessing older adults with hand pain and problems and knee pain in primary care.
The studies will provide primary care practitioners with a description of the population of older adults with hand pain and problems and knee pain in clinically meaningful terms, i.e. using simple clinical history and examination techniques. Additionally, they should help to determine if clinical classification of musculoskeletal hand and knee conditions is useful at the population level and what simple questions and assessment tools identify important groups, both cross-sectionally and longitudinally. The aim of this paper is to outline the protocol for the Clinical Assessment Study of the Hand (CAS-HA). The protocol for the Clinical Assessment Study (Knee) (CAS(K)) has been reported previously [13].

Cross sectional study

The general aim of the cross sectional component of the CAS-HA is to provide population-based evidence that will indicate the most useful way of assessing older adults with hand pain or hand problems in primary care. Additionally, we aim to identify clinical, functional and radiographic sub-groups within the study population. Specifically our study will consider the following questions:

• What is the prevalence of clinical signs and symptoms? How does this relate to hand function?
• What is the prevalence of 'red flags' indicative of possible serious joint pathology?
• In what respect do consulters and non-consulters differ at baseline?
• Can simple signs and symptoms accurately identify older adults with radiographic hand OA?
• What is the relationship between symptomatic hand OA and soft tissue syndromes, e.g. carpal tunnel syndrome?

Longitudinal study

Accurate information on the likely course of hand pain and problems in this population will play an important role in deciding how best to manage these problems and may possibly help to inform preventative measures in the future.
To address this we intend to establish a cohort at baseline that will be followed up at 18-month intervals (subject to further funding and ethical approval). The study is designed in accordance with previously published requirements for reporting longitudinal studies in rheumatology [14,15]. The general aim of the longitudinal component of the CAS-HA is to determine the course of hand pain and problems over time. Specifically, our study will address the following questions:

• How common is deterioration in terms of hand pain, hand problems and functional limitation? Can this be predicted?
• Does radiographic OA predict change in severity and characteristics of symptomatic hand OA?
• What proportion of this sample consult their general practitioner for hand pain or problems within the follow-up period? Can this be predicted by information collected at baseline?
• What is the relative contribution of clinical history, hand assessment, digital imaging, x-rays and lower limb function as prognostic markers?

Methods/Design

A population-based prospective observational cohort study of hand pain and problems in older people (50 years and over) has been designed in parallel to our previously reported cohort study of knee pain in older people [13]. The hand cohort study will be conducted in 5 phases with a sample of people, aged 50 years and over, registered with two local general practices (Figure 1). Ethical approval for CAS-HA baseline and 18-month follow up has been obtained from the North Staffordshire Local Research Ethics Committee. Ethical approval for 3-year follow up has been obtained from the Hereford and Worcester Local Research Ethics Committee.
Phase 1: Baseline two-stage mailed survey
Phase 2: Baseline clinical assessment study of the hand (CAS-HA)
Phase 3: Eighteen month prospective review of general practice medical records
Phase 4: Follow-up mailed survey at 18 months
Phase 5: Follow-up mailed survey at 3 years

Phase 1: Baseline two-stage mailed survey

Full details of Phase 1 design and methods have been previously reported [16]. Briefly, Phase 1 consists of a Health Survey questionnaire that will be mailed to all adults aged 50 years and over registered with the two participating practices. Respondents who provide written consent to further contact and who report pain or problems (e.g. stiffness or knobbly swellings) in the hands, or pain in the hips, knees or feet, will be sent a second questionnaire (the Regional Pains Survey questionnaire). These two questionnaires include measures of general health status, socio-demographic characteristics, psychological and lifestyle variables, and pain and disability (general and site specific). Hand specific questions are provided in Table 1. Non-responders to each questionnaire will be sent a reminder postcard at two weeks and, for those who do not respond to the postcard, a repeat questionnaire at 4 weeks.

Phase 2: Baseline clinical assessment study of the hand (CAS-HA)

Respondents to the Regional Pains Survey questionnaire who report experiencing hand pain or problems within the last 12 months and who provide written consent to further contact will be sent a letter of invitation to the CAS-HA research clinic and an information sheet outlining the study. The process of recruiting participants and the practical organisation and running of the CAS-HA research clinic will follow the same procedures as those reported previously for CAS(K) [13]. Briefly, participants will be offered an appointment to attend the research clinic where they will be assessed by a trained research therapist after giving written, informed consent.
Research clinics will be held at a local National Health Service Trust Hospital and will offer a maximum of 16 appointments per week. Participants will undertake the following standardised assessment: digital photography of the hands, clinical interview and hand assessment, lower extremity function test, brief self-complete questionnaire, plain radiography of the hands and knees, and simple anthropometric measurement.

Digital photography of the hands

Each participant will have four photographs taken of their hands by an assessor using a digital camera (Olympus Camedia C-4040 ZOOM: resolution 2272 × 1704 pixels) attached to a copy stand. The dorsal and palmar aspects of both hands, including the wrists, will be photographed. Photographs will be taken according to pre-defined written protocols that include standard positioning of participants.

Clinical interview and hand assessment

Participants will be interviewed and examined by a trained assessor blinded to the findings from radiography and digital photography. The proposed content of the interview and assessment is provided in Table 2. Briefly, this procedure will comprise two components. Firstly, participants will be screened to identify possible red flags indicative of potentially serious pathology, namely recent trauma to the hands likely to have resulted in significant tissue damage, and acutely swollen, painful hands or knees. Secondly, a structured, standardised clinical interview and hand assessment developed and piloted for the study will be conducted [17,18]. For assessments requiring instrumented measures, equipment will be calibrated prior to the start of the study.

Lower extremity function

The Short Physical Performance Battery (SPPB) [19] will be conducted in all participants.
This includes a standing balance test, a timed repeated chair stand test (5 repetitions) and a 4-metre gait speed test. The conduct and scoring of the SPPB will be as recommended on the training CD-ROM (Guralnik, personal communication).

Figure 1: Flowchart of study procedures. Data collection points are in shaded boxes.

Brief self-complete questionnaire

During the clinic visit, participants will complete a brief self-complete questionnaire containing questions relating to their hand problem (Table 2). Questions relating to knee problems will also be asked – days of pain, aching or stiffness in previous month, days in pain in the previous 6 months [20], episode duration [21], the Chronic Pain Grade [22] and symptom satisfaction (adapted from [23]).

Radiography and anthropometric measurement

Radiography of both hands and knees will be obtained for all participants. Plain radiographs of each hand will be taken (1 hand per film). A posteroanterior (PA) view will be taken, where the palmar aspect of the hand will be placed on the film with the fingers extended, separated slightly and spaced evenly (Buckland-Wright, personal communication). Imaging of the tibiofemoral joint of the knee will be undertaken using the weight-bearing semiflexed (MTP) posteroanterior (PA) view according to a defined protocol [24]. The patellofemoral joint of the knee will be imaged with the lateral and skyline view, both in a recumbent position with the knee flexed to 45°. Weight (kg) and height (cm) of each participant will be measured using digital scales (Seca Ltd., Birmingham, UK) and a wall-mounted height meter (Holtain Ltd., Crymych, UK) respectively.
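The SPPB described above combines three performance tests, each scored 0–4 and summed to a 0–12 total. The study scores the battery according to the official training CD-ROM; the sketch below instead uses the commonly published cutpoints from the original SPPB studies, which are assumptions here rather than this protocol's specification:

```python
# Sketch of SPPB-style scoring (0-4 per test, 0-12 total).
# Cutpoints are the widely published ones from the original SPPB work,
# assumed for illustration only.

def gait_score(seconds):
    """0-4 subscore for the 4-metre walk (faster of two attempts)."""
    if seconds is None:        # unable to complete the walk
        return 0
    if seconds < 4.82:
        return 4
    if seconds <= 6.20:
        return 3
    if seconds <= 8.70:
        return 2
    return 1

def chair_score(seconds):
    """0-4 subscore for five timed chair rises."""
    if seconds is None or seconds > 60:  # unable, or over 60 seconds
        return 0
    if seconds <= 11.19:
        return 4
    if seconds <= 13.69:
        return 3
    if seconds <= 16.69:
        return 2
    return 1

def sppb_total(balance_subscore, gait_seconds, chair_seconds):
    """Total score; balance_subscore is the hierarchical 0-4 standing-balance result."""
    return balance_subscore + gait_score(gait_seconds) + chair_score(chair_seconds)

print(sppb_total(4, 5.0, 12.0))  # 4 + 3 + 3 -> 10
```

The hierarchical balance test (side-by-side, semi-tandem, tandem stands) is passed in here as an already computed 0–4 subscore to keep the sketch short.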
Table 1: Hand specific data to be collected at baseline (Regional Pains Survey Questionnaire)

Characteristic of complaint
- Hand dominance: right, left, both
- Duration of hand problem: years/months
- Hand problem in past 12 months*§: yes, no
- Hand pain in past 12 months*§: yes, no
- Side of pain in past 12 months*§: right, left, both
- Duration of pain in past 12 months*§: <7 days, 1–4 weeks, 1–3 months, 3+ months
- Most problematic hand*§: right, left, both

Hand pain, symptoms and physical features
- AIMS2*§ [30]: pain sub-scale
- AUSCAN*§ [34]: pain and stiffness sub-scales
- In past month, severity of stiffness, aching, tenderness, weakness, clumsiness, burning, tingling, numbness*§: severe, moderate, mild, very mild, none
- In past month, days of joint warmth, dropping objects, frustration*§: all, most, some, few, no
- Hand pain lasting ≥ 1 day in past month*§‡: yes, no
- Painful areas in last month, hand drawings [31]*§: shaded areas
- Nodes, hand drawings*§ [32]: circled joints

Aesthetics
- Michigan Hand Outcomes Questionnaire§ [33]: appearance sub-scale

Function
- AIMS2*§‡ [30]: hand and finger function sub-scale
- AIMS2§ [30]: arm function sub-scale
- AUSCAN*§ [34]: physical function sub-scale
- Difficulty with usual activities (pick up coins, hold book, clench fist, self-care, open packets): no, mild, moderate, severe, unable to do

Illness perceptions
- Illness Perceptions Questionnaire Revised (IPQ-R) [35]: 9 dimensions: illness coherence, treatment control, personal control, timeline (acute/chronic), timeline (cyclical), consequences, emotional representation, identity, causes

Health care related to hand problem
- AIMS2*§ [30]: medication sub-scale
- Hand injuries ... ever: yes, no (right, left, both)
- Hand operations ... ever: yes, no (right, left, both)
- Consulted GP in past 12 months§: yes, no
- NHS and private services used in past 12 months§ (adapted from [36]): yes, no to physiotherapy, occupational therapy, hospital specialist, acupuncture, osteopath/chiropractor, drugs on prescription, hand operation, hand injection, other

Occupational impact
- Excessive use of hands in occupation: yes, no

Pastimes and hobbies
- Excessive use of hands in pastimes and hobbies: yes, no

Impact of symptoms
- AIMS2*§ [30]: impact sub-scale

*Also gathered at 18 months; § Also gathered at 3 years; ‡ Minimum data to be sought at 18 months and 3 years from non-responders

Table 2: Hand specific data to be collected during clinical assessment (CAS-HA)

Clinical interview questions:

Characteristic of complaint
- Duration of hand problem: <12 months, 1–<5 years, 5–<10 years, 10 years+
- Onset (sudden, gradual): yes, no, for right and left hands
- Onset following accident or injury: yes, no, for right and left hands

Hand pain and hand symptoms
- Pain/tenderness in past month: yes, no
- Hand pain descriptors from McGill Pain Questionnaire§ [37]: 15 descriptors
- Pain location (hand drawing): shading, both hands front and back
- Pain present all the time: yes, no
- Pain related to sleep disturbance: yes, no
- Pain limits activity: yes, no
- Hand stiffness in past month: yes, no
- Side of stiffness: right, left, both
- Hand stiffness on waking in past month: yes, no
- Duration of morning stiffness: ≤30 mins, 30+ mins
- Finger locking, triggering: yes, no
- Release of locking: yes, no
- Altered sensation (pins and needles, tingling, numbness) in past month: yes, no
- Altered sensation location (hand drawing): shading, both hands front and back
- Altered sensation worse at night: yes, no

Occupational impact
- Stop work due to hand problem: yes, no
- Absence from work due to hand problem: yes, no

Management/self-help
- Adaptation (gadgets, help, avoidance, change method, stop/reduce activity, take longer, other): yes, no
- 17 treatments/self-help activities tried recently: yes, no
- Any treatments effective: yes, no

Family history of joint problems
- Relatives with joint problems (father, mother, brother, sister): yes, no
- Hand involvement: yes, no

Diagnostic and causal attributions
- Open-ended questions: free text

Health problems
- Open-ended question (2 most important health problems): free text

Hand assessment (right and left hands):

Upper limb screen
- 9 movements (adapted from [38]): yes, no, unable to assess

Observation/palpation
- Swelling, nodes, bony enlargement, deformity at selected joints: yes, no
- Thenar muscle wasting: yes, no
- Dupuytren's: yes, no

Measurement
- Thumb opposition [39]: yes, no, for 10 positions
- Thumb extension: degrees
- Wrist extension: degrees
- Wrist flexion: degrees

Tests
- Phalen's [40,41]: positive, negative, unable to assess
- Grind [42,43]: positive, negative, unable to assess
- Finkelstein's [42,44]: positive, negative, unable to assess

Hand function
- Grip Ability Test [45]: timed (seconds)
- Power grip (JAMAR dynamometer) [46]: lbs
- Pinch grip (B&L pinch gauge) [46]: lbs

Brief self-complete questionnaire:

Hand pain and hand symptoms
- Days of hand pain, ache or stiffness in past month*§ [10]: all, most, some, few, no
- Severity of hand pain in past month*§: numerical rating scale (0–10)
- Thumb pain during activity in past month*§: yes, no
- Swelling in hands in past month: yes, no

Impact of symptoms
- Severity of overall hand problems in past month*§: none, very mild, mild, moderate, severe
- Bothersomeness of hand problem in past 2 weeks*§ (adapted from [47]): not at all, slightly, moderately, very much, extremely
- Symptom satisfaction*§ (adapted from [23]): 5-point Likert scale, very dissatisfied to very satisfied

*Also gathered at 18 months; § Also gathered at 3 years; ‡ Minimum data to be sought at 18 months and 3 years from non-responders
Post-clinic procedure

The practical organisation, administration and communication post-clinic will be identical to that described by Peat et al [13], but with emphasis on the hand rather than the knee. A trained observer with a background in diagnostic radiography will score the hand radiographs. Standardised coding of radiographic features using the Kellgren and Lawrence [11] grading system will be completed for sixteen joints in each hand and wrist: the distal interphalangeal joints (DIP), the proximal interphalangeal joints (PIP), the interphalangeal joint of the thumb (IP), the metacarpophalangeal joints (MCP), the thumb carpometacarpal joint (CMC) and the trapezioscaphoid joint (TS). Knee films will be scored for individual radiographic features, including osteophytes, joint space narrowing, sclerosis and subluxation. The Altman Atlas [25] and scoring system [26] are to be used for the PA and skyline views, and the Burnett Atlas [27] for the lateral view. Additionally, PA and skyline views will be assigned a Kellgren and Lawrence grade [11].

Quality assurance and quality control

Quality assurance and control are important in longitudinal studies, especially when using observers to gather data [28]. In the current study, the clinical interview, hand assessment, lower limb screen, and the taking and scoring of radiographs will be subject to a number of quality assurance and control procedures. The study protocol and inter- and intra-assessor reliability of the clinical interview and hand assessment have been formally tested in a pilot study [18]. Reliability studies investigating inter- and intra-observer reproducibility will be conducted for the scoring of radiographs. All assessors will receive training using the study protocols prior to the commencement of data collection.
Assessors will practice interviews and assessments using the protocols with healthy volunteers and expert participants. All radiographers participating in the study will also receive training prior to the start of the research clinics. A detailed assessor manual containing study protocols will be provided to all members of the CAS-HA team for reference during the study period. A programme of quality control measures previously reported [13] will be implemented throughout the course of the study.

Phase 3: Prospective review of general practice medical records

All participants in Phase 1 who give written consent for their GP records to be accessed will have their computerised medical records tagged by a member of the Centre's Health Informatics team. The protocol for this phase of the study has been previously reported [13].

Table 3: Hand specific data to be collected only at 18 months and 3 years

Perceived change in hand problem since baseline
- Transition index [48]‡: completely recovered, much better, better, no change, worse, much worse

Health care related to hand problem since baseline
- Hand injury: yes, no
- Hand operation: yes, no
- Consulted GP in past 18 months†: yes, no
- NHS and private services used in past 18 months† (adapted from [36]): yes, no to physiotherapy, occupational therapy, hospital specialist, acupuncture, osteopath/chiropractor, drugs on prescription, hand operation, hand injection, other

Occupational impact since baseline
- Time off work: yes, no
- Stopped work: yes, no

Hand pain and hand symptoms
- Days of hand swelling in past month: all, most, some, few, no
- Days of hand pain in past 6 months^ [22]: no, 1–30, 31–89, 90+ days
- Hand pain severity in past 6 months^: numerical rating scale (1–10)

Coping strategies for hand pain
- Single-item Coping Strategies Questionnaire (CSQ) [49]: numerical rating scale (0–7) with verbal anchors (never do that, always do that)

Illness perceptions
- Shortened version adapted from IPQ-R [35]: 6 dimensions: illness coherence, personal control, timeline (acute/chronic), timeline (cyclical), consequences, emotional representation

Management/self-help
- 7 treatments/self-help activities tried in past month (simple painkiller; anti-inflammatory tablets; creams, gels, or rubs; glucosamine or chondroitin sulphate; warmth, heat; cold; hand exercises): yes, no

Narrative account
- Open-ended question, course of hand pain and problems‡: free text

‡ Minimum data to be sought at 18 months and 3 years from non-responders; † Data only gathered at 18 months; ^ Data only gathered at 3 years

Phase 4 and 5: Follow-up mailed survey at 18 months and 3 years

A follow-up survey will be mailed to all Phase 2 participants 18 months and 3 years after their baseline clinical assessment. The focus of follow-up will be on clinical change in symptoms and function and possible determinants of this. The proposed content of these surveys is provided in Tables 1, 2, 3. Primary outcome data will be sought from non-respondents by telephone or post. Participants who have moved practice during the follow-up period will be traced using the NHS tracing service and their new general practitioner will be asked for permission to include them in the follow-up.

Sample size

The sample size for this study was determined by the estimated numbers of participants needed in Phase 2 to ensure sufficient power for both cross-sectional and longitudinal analyses. A target sample of 500 was set. We estimate that 90% of follow-up questionnaires will be returned and that approximately 70 participants (12%) will report clinically significant deterioration over the 18-month period [29]. With this number of participants, we will have 80% power to detect a risk ratio of 1.6 or greater with a minimum 64% exposure rate (e.g.
presence of radiographic OA) in those who have deteriorated, and a 50% exposure rate in those who do not, at the 95% level of confidence.

Statistical analysis

Linking data collected at the clinical assessment with that from the 18-month and 3-year follow-up questionnaires, we will be able to determine prospectively the factors that are related to clinical deterioration, using risk ratios and associated 95% confidence intervals.

Discussion

The Clinical Assessment Study of the Hand (CAS-HA) is a prospective, population-based, observational cohort study based in North Staffordshire that intends to investigate issues surrounding the classification and course of hand pain, hand problems and hand osteoarthritis in community-dwelling adults aged 50 years and over. This study will complement our previous study on knee pain in older people [13].

Abbreviations

AIMS2, Arthritis Impact Measurement Scale 2; AUSCAN, AUStralian CANadian Osteoarthritis Hand Index; CAS-HA, Clinical Assessment Study of the Hand; CAS(K), Clinical Assessment Study of the Knee; CMC, carpometacarpal; CSQ, Coping Strategies Questionnaire; DIP, distal interphalangeal; GP, General Practitioner; IP, interphalangeal; IPQ-R, Illness Perceptions Questionnaire Revised; MCP, metacarpophalangeal; MTP, metatarsophalangeal; OA, osteoarthritis; PA, posteroanterior; PIP, proximal interphalangeal; SPPB, Short Physical Performance Battery; TS, trapezioscaphoid.

Competing interests

The author(s) declare that they have no competing interests.

Authors' contributions

All authors participated in the design of the study and drafting the manuscript. All authors read and approved the final manuscript.

Acknowledgements

This study is supported financially by a Programme Grant awarded by the Medical Research Council, UK (Grant Code: G9900220) and by Support for Science funding secured by North Staffordshire Primary Care Research Consortium for NHS service support costs.
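Returning to the statistical analysis described above: the planned risk-ratio estimate with a log-scale 95% confidence interval can be sketched as below. This is a minimal illustration only; the helper name and the example counts are ours, not from the study.

```python
import math


def risk_ratio_ci(exposed_events, exposed_total,
                  unexposed_events, unexposed_total, z=1.96):
    """Risk ratio for a 2x2 table of counts, with a confidence
    interval computed on the log scale (z=1.96 gives a 95% CI)."""
    risk_exposed = exposed_events / exposed_total
    risk_unexposed = unexposed_events / unexposed_total
    rr = risk_exposed / risk_unexposed
    # Standard error of log(RR) from the cell counts
    se = math.sqrt(1 / exposed_events - 1 / exposed_total
                   + 1 / unexposed_events - 1 / unexposed_total)
    lower = math.exp(math.log(rr) - z * se)
    upper = math.exp(math.log(rr) + z * se)
    return rr, lower, upper


# Hypothetical example: 20/100 exposed vs 10/100 unexposed deteriorate
rr, lo, hi = risk_ratio_ci(20, 100, 10, 100)
```

With these made-up counts the point estimate is a risk ratio of 2.0; a confidence interval whose lower bound crosses 1 would not reach statistical significance at the 95% level, which is why the power calculation above matters.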
KD was supported by a grant from the Arthritis Research Campaign. The authors would like to thank the administrative and health informatics staff at Keele University's Primary Care Musculoskeletal Research Centre, especially Charlotte Clements; staff of the participating general practices and Haywood Hospital, especially Dr Jackie Saklatvala, Carole Jackson and the Radiographers at the Department of Radiography; and Carol Graham and Nikki Edwards at the Department of Occupational Therapy. The authors would like to thank the following for permission to use published measures at baseline: Prof N Bellamy (AUSCAN), Dr K Chung (Michigan Hand Outcomes Questionnaire), Prof M Doherty (finger nodes drawings), Prof R Meenan (AIMS2), Prof D Symmons (hand pain drawings), and Prof J Weinman (IPQ-R). The authors gratefully acknowledge the advice and permission to use the SPPB training CD-ROM from Dr Jack Guralnik. We also gratefully acknowledge the assistance of Prof Chris Buckland-Wright for advice and training for the x-ray protocols.

References

1. Woolf AD, Pfleger B: Burden of major musculoskeletal conditions. Bull World Health Organ 2003, 81:646-656.
2. Zhang Y, Niu J, Kelly-Hayes M, Chaisson CE, Aliabadi P, Felson DT: Prevalence of symptomatic hand osteoarthritis and its impact on functional status among the elderly: The Framingham Study. Am J Epidemiol 2002, 156:1021-1027.
3. Urwin M, Symmons D, Allison T, Brammah T, Busby H, Roxby M, Simmons A, Williams G: Estimating the burden of musculoskeletal disorders in the community: the comparative prevalence of symptoms at different anatomical sites, and the relation to social deprivation. Ann Rheum Dis 1998, 57:649-655.
4. Walker-Bone K, Palmer KT, Reading I, Coggon D, Cooper C: Prevalence and impact of musculoskeletal disorders of the upper limb in the general population. Arthritis Rheum 2004, 51:642-651.
5.
Dahaghin S, Bierma-Zeinstra SM, Reijman M, Pols HA, Hazes JM, Koes BW: Prevalence and determinants of one month hand pain and hand related disability in the elderly (Rotterdam study). Ann Rheum Dis 2005, 64:99-104.
6. Buckwalter JA, Martin J, Mankin HJ: Synovial joint degeneration and the syndrome of osteoarthritis. Instr Course Lect 2000, 49:481-489.
7. Maheu E, Dreiser RL, Lequesne M: Methodology of clinical trials in hand osteoarthritis. Issues and proposals. Rev Rhum Engl Ed 1995, 62:55S-62S.
8. Dziedzic K, Thomas E, Hill S, Wilkie R, Peat G, Croft P: The impact of musculoskeletal hand problems in older adults: the findings from the North Staffordshire Osteoarthritis Project (NorStOP). Rheumatology 2007, 46:963-967.
9. Hill S, Ong BN, Choi KS, Dziedzic KS: The impact of hand osteoarthritis on the individual [abstract]. Rheumatology 2004, 43:s153.
10. Altman R, Alarcon G, Appelrouth D, Bloch D, Borenstein D, Brandt K, Brown C, Cooke TD, Daniel W, Gray R, Greenwald R, Hochberg M, Howell D, Ike R, Kapila P, Kaplan D, Koopman W, Longley S, McShane DJ, Medsger T, Michel B, Murphy W, Osial T, Ramsey-Goldman R, Rothschild B, Stark K, Wolfe R: The American College of Rheumatology criteria for the classification and reporting of osteoarthritis of the hand. Arthritis Rheum 1990, 33:1601-1610.
11. Kellgren JH, Lawrence JS: Radiological assessment of osteoarthrosis. Ann Rheum Dis 1957, 16:494-502.
12. Cicuttini FM, Spector TD: The epidemiology of osteoarthritis of the hand. Rev Rhum Engl Ed 1995, 62:3S-8S.
13. Peat G, Thomas E, Handy J, Wood L, Dziedzic K, Myers H, Wilkie R, Duncan R, Hay E, Hill J, Croft P: The Knee Clinical Assessment Study – CAS(K). A prospective study of knee pain and knee osteoarthritis in the general population. BMC Musculoskelet Disord 2004, 5:4.
14.
Silman A, Symmons D: Reporting requirements for longitudinal observational studies in rheumatology. J Rheumatol 1999, 26:481-483.
15. Wolfe F, Lassere M, van der Heijde D, Stucki G, Suarez-Almazor M, Pincus T, Eberhardt K, Kvien TK, Symmonds D, Silman A, van Riel P, Tugwell P, Boers M: Preliminary core set of domains and reporting requirements for longitudinal observational studies in rheumatology. J Rheumatol 1999, 26:484-489.
16. Thomas E, Wilkie R, Peat G, Hill S, Dziedzic K, Croft P: The North Staffordshire Osteoarthritis Project – NorStOP: prospective, 3-year study of the epidemiology and management of clinical osteoarthritis in a general population of older adults. BMC Musculoskelet Disord 2004, 5:2.
17. Myers H, Dziedzic K, Thomas E, Hay E, Croft P: The development of a hand assessment for clinical research: A consensus study using a modified Delphi approach [abstract]. Rheumatology 2004, 43:s154.
18. Myers H, Dziedzic K, Thomas E, Hay E, Croft P: Classifying hand OA in a population of older people: A reliability study [abstract]. Rheumatology 2005, 44:s14.
19. Guralnik JM, Simonsick EM, Ferrucci L, Glynn RJ, Berkman LF, Blazer DG, Scherr PA, Wallace RB: A short physical performance battery assessing lower extremity function: association with self-reported disability and prediction of mortality and nursing home admission. J Gerontol 1994, 49:M85-M94.
20. Von Korff M, Jensen MP, Karoly P: Assessing global pain severity by self-report in clinical and health services research. Spine 2000, 25:3140-3151.
21. de Vet HC, Heymans MW, Dunn KM, Pope DP, van der Beek AJ, Macfarlane GJ, Bouter LM, Croft PR: Episodes of low back pain: a proposal for uniform definitions to be used in research. Spine 2002, 27:2409-2416.
22. Von Korff M, Ormel J, Keefe FJ, Dworkin SF: Grading the severity of chronic pain. Pain 1992, 50:133-149.
23. Cherkin DC, Deyo RA, Street JH, Barlow W: Predicting poor outcomes for back pain seen in primary care using patients' own criteria.
Spine 1996, 21:2900-2907.
24. Buckland-Wright JC, Wolfe F, Ward RJ, Flowers N, Hayne C: Substantial superiority of semiflexed (MTP) views in knee osteoarthritis: a comparative radiographic study, without fluoroscopy, of standing extended, semiflexed (MTP), and schuss views. J Rheumatol 1999, 26:2664-2674.
25. Altman RD, Hochberg M, Murphy WA Jr, Wolfe F, Lequesne M: Atlas of individual radiographic features in osteoarthritis. Osteoarthritis Cartilage 1995, 3(Suppl A):3-70.
26. Altman RD, Fries JF, Bloch DA, Carstens J, Cooke TD, Genant H, Gofton P, Groth H, McShane DJ, Murphy WA: Radiographic assessment of progression in osteoarthritis. Arthritis Rheum 1987, 30:1214-1225.
27. Burnett S, Hart D, Cooper C, Spector T: A radiographic atlas of osteoarthritis. London: Springer-Verlag; 1994.
28. Whitney CW, Lind BK, Wahl PW: Quality assurance and quality control in longitudinal studies. Epidemiol Rev 1998, 20:71-80.
29. Elliott AM, Smith BH, Hannaford PC, Smith WC, Chambers WA: The course of chronic pain in the community: results of a 4-year follow-up study. Pain 2002, 99:299-307.
30. Meenan RF, Mason JH, Anderson JJ, Guccione AA, Kazis LE: AIMS2. The content and properties of a revised and expanded Arthritis Impact Measurement Scales Health Status Questionnaire. Arthritis Rheum 1992, 35:1-10.
31. Ferry S, Pritchard T, Keenan J, Croft P, Silman AJ: Estimating the prevalence of delayed median nerve conduction in the general population. Br J Rheumatol 1998, 37:630-635.
32. O'Reilly S, Johnson S, Doherty S, Muir K, Doherty M: Screening for hand osteoarthritis (OA) using a postal survey. Osteoarthritis Cartilage 1999, 7:461-465.
33. Chung KC, Pillsbury MS, Walters MR, Hayward RA: Reliability and validity testing of the Michigan Hand Outcomes Questionnaire. J Hand Surg [Am] 1998, 23:575-587.
34.
Bellamy N, Campbell J, Haraoui B, Gerecz-Simon E, Buchbinder R, Hobby K, MacDermid JC: Clinimetric properties of the AUSCAN Osteoarthritis Hand Index: an evaluation of reliability, validity and responsiveness. Osteoarthritis Cartilage 2002, 10:863-869.
35. Moss-Morris R, Weinman J, Petrie KJ, Horne R, Cameron LD, Buick D: The revised Illness Perception Questionnaire (IPQ-R). Psychology & Health 2002, 17:1-16.
36. Jinks C, Lewis M, Ong BN, Croft P: A brief screening tool for knee pain in primary care. 1. Validity and reliability. Rheumatology (Oxford) 2001, 40:528-536.
37. Melzack R: The short-form McGill Pain Questionnaire. Pain 1987, 30:191-197.
38. Clinical Assessment of the Musculoskeletal System: A Handbook for Medical Students [http://www.arc.org.uk/arthinfo/documents/6321.pdf]
39. Kapandji IA: Clinical evaluation of the thumb's opposition. J Hand Ther 1992, 2:102-106.
40. Cailliet R: Hand Pain and Impairment. 4th edition. Philadelphia: F.A. Davis Company; 1994.
41. Boyling J: The prevention and management of occupational hand disorders. In Hand Therapy: Principles and Practice. Edited by Salter M, Cheshire L. Oxford: Butterworth-Heinemann; 2000:211-225.
42. Lister G: The Hand: Diagnosis and Indications. 2nd edition. Edinburgh: Churchill Livingstone; 1978.
43. Aulicino P: Clinical Examination of the Hand. In Rehabilitation of the Hand: Surgery and Therapy. Edited by Hunter J, Mackin E, Callahan A. St Louis: Mosby; 1995:53-75.
44. Simpson C: Hand Assessment: A clinical guide for therapists. 1st edition. Wiltshire: APS Publishing; 2002.
45. Dellhag B, Bjelle A: A Grip Ability Test for use in rheumatology practice. J Rheumatol 1995, 22:1559-1565.
46. Mathiowetz V, Weber K, Volland G, Kashman N: Reliability and validity of grip and pinch strength evaluations. J Hand Surg [Am] 1984, 9:222-226.
47. Dunn KM, Croft PR: Classification of low back pain in primary care: using "bothersomeness" to identify the most severe cases. Spine 2005, 30:1887-1892.
48.
van der Windt DA, Koes BW, Deville W, Boeke AJ, de Jong BA, Bouter LM: Effectiveness of corticosteroid injections versus physiotherapy for treatment of painful stiff shoulder in primary care: randomised trial. BMJ 1998, 317:1292-1296.
49. Jensen MP, Keefe FJ, Lefebvre JC, Romano JM, Turner JA: One- and two-item measures of pain beliefs and coping strategies. Pain 2003, 104:453-469.

Pre-publication history

The pre-publication history for this paper can be accessed here:
http://www.biomedcentral.com/1471-2474/8/85/prepub
work_nzwuuevxpffc5ndp77jmrw3e3a ---- CSDL | IEEE Computer Society work_o2gjbwrygnecnbm3h7foyretnm ---- Provided by the author(s) and NUI Galway in accordance with publisher policies. Please cite the published version when available. Downloaded 2021-04-06T01:15:20Z Some rights reserved. For more information, please see the item record link above. Title PTP-IP a new transport specification for wireless photography Author(s) Corcoran, Peter M. Publication Date 2005 Publication Information P. Bigioi, G. Susanu, E. Steinberg, P. Corcoran, (2005) "PTP-IP a new transport specification for wireless photography", IEEE Transactions on Consumer Electronics, Vol. 51, No. 1, pp. 240-244. Publisher IEEE Item record http://hdl.handle.net/10379/295 https://aran.library.nuigalway.ie http://creativecommons.org/licenses/by-nc-nd/3.0/ie/ IEEE Transactions on Consumer Electronics, Vol. 51, No.
1, FEBRUARY 2005 Manuscript received January 14, 2005 0098 3063/05/$20.00 © 2005 IEEE 240 PTP/IP - A New Transport Specification for Wireless Photography Petronel Bigioi, George Susanu, Eran Steinberg and Peter Corcoran Abstract – PTP (Picture Transfer Protocol, ISO-15740) is an international standard for the connectivity and interfacing of digital photography devices. In this paper we describe how PTP can be extended to work over generic TCP/IP networks. As digital cameras and printers become wireless, PTP/IP will provide the necessary flexibility of applications infrastructure for networked digital photography. Index Terms — digital camera, DSC, digital printers, connectivity, PTP, PTP/IP, wireless, PictBridge, ISO-15740. I. INTRODUCTION The Picture Transfer Protocol (PTP) [1] was developed to address industry-driven requirements for two-way communication between digital still cameras (DSCs) and external devices such as printers, host desktop computers, and handheld devices. PTP is an internationally published standard (ISO-15740), designed and refined by the electronic photography working group, WG-18, of the photography technical committee, TC-42, of the International Organization for Standardization (ISO). This activity was successfully conducted at an industry-wide level and involved practically all major digital camera manufacturers as well as imaging chipset manufacturers and digital imaging and operating systems software companies. As of late 2004, at least 95% of all consumer digital cameras include PTP, either as the sole means of camera connectivity or in parallel with USB mass storage solutions. An important validation of the PTP standard is the fact that PTP became the underlying protocol for PictBridge [3] (standardized by CIPA, the Camera and Imaging Products Association, Japan), which is the high-level protocol for direct connectivity between cameras and printers.
Recently, Microsoft® Corporation announced MTP (Media Transfer Protocol), a superset of the PTP standard. The ability of PTP to run over networking transports therefore also enables MTP to work over IP networks. This enables a number of new usage models for portable media players and devices in the context of wireless 802.11 networks. PTP involves two components, which need not be symmetrical: an initiator, typically the desktop device or the printer, and a responder, typically the digital camera. 1 Petronel Bigioi is Director of Camera Technology at FotoNation Ireland Ltd, Galway (e-mail: petronel@fotonation.com). Eran Steinberg is CEO of FotoNation Inc, San Francisco, CA (e-mail: erans@fotonation.com). George Susanu is a Senior Research Engineer at FotoNation Ireland Ltd (e-mail: george@fotonation.com). Peter Corcoran is the Director of IP at FotoNation Ireland Ltd (e-mail: pcor@fotonation.com). The features supported by PTP include defined mechanisms for image referencing, command responses, events, device properties, datasets, and data formats to ensure interoperability. In addition to standard operations and formats, optional operations and formats, as well as extension mechanisms, are supported, allowing manufacturer-specific customization of imaging device behavior while still conforming to the PTP standard. II. THE NEED FOR ADDITIONAL TRANSPORTS An important guideline for the WG-18 working group has been to define PTP as a transport-independent protocol. Thus, in theory, solutions are available for PTP over multiple transport layers. For example, a discussion of the broader applicability of PTP was presented at ICCE 2002 [4]. That paper introduced some preliminary ideas for implementing PTP over Bluetooth and TCP/IP networks. Other references include PTP over IEEE 1394. In practice, because most consumer-level cameras of the last three years are USB based, current product support of PTP is limited to USB (1.0, 1.1, and 2.0) devices.
As part of related activities by the USB standards working group, a Still Image Device Definition was presented in conjunction with the PTP specification [2]. This specification, approved by the USB-WG, became the basis for the implementation of PTP over USB. However, it is important to note that PTP was developed to be a much more flexible standard than the current USB-based implementations would suggest. As consumer electronic devices become wireless, there is an obvious need to support this trend in the digital imaging domain, in particular for DSCs, digital printers, and supporting desktop devices. As such devices begin to require access to IP transport layers, most notably WLAN and TCP/IP based, there is a need to extend the capabilities of PTP to operate over such transport layers. In this paper we present and discuss the specification of a new PTP transport, namely PTP/IP. This transport is designed to transparently support, over a dual TCP/IP network connection, a superset of the functionality available for PTP running over a USB transport. This will provide a significantly more flexible infrastructure for digital photography applications, especially in the context of the growth of home WLAN networks. III. THE SOLUTION A. Transport Requirements PTP is a transport-independent protocol. Referring to the PTP standard [1], Clause 7 defines the transport requirements, which include: o Disconnection Event – the transport layer has to report device disconnection to higher layers. The implementation of the transport layer has to generate a DeviceDisconnection notification. ([1] Clause 7.1) o Reliable, Error-Free Channel – the transport must provide a reliable, error-free channel, meaning that these issues have to be solved in the transport-level protocol; PTP itself does not deal with them.
([1] Clause 7.2) o Asynchronous Event Support – the transport has to provide a separate physical or logical way of transferring events asynchronously to transactions. ([1] Clause 7.3) o Device Discovery and Enumeration – the transport must support device discovery and enumeration. ([1] Clause 7.4) The PTP/IP transport fulfills all of the above requirements but one: Device Discovery and Enumeration (Clause 7.4). For reasons beyond the scope of the specification, it was considered unwise to select a single device-discovery and service-enumeration protocol. Instead, the decision was to separate device discovery from the internals of the PTP/IP specification. An informative annex describes how PTP/IP discovery can be implemented using Rendezvous. However, discovery is by no means limited to Rendezvous; current implementations include manual (no-discovery), proprietary UDP-based discovery, UPnP™, etc., as described in Section E of this paper. The PTP/IP transport specification is based on the use of the TCP (Transmission Control Protocol) layer as its transport layer, as shown in Fig 1. TCP (in the TCP/IP protocol stack) is the natural transport layer to provide reliable, error-free data communication services to the PTP/IP layer. TCP is a stream-based transport layer that provides multiple communication channels (TCP connections) and error-free data delivery. B. PTP/IP as part of the Communication Stack PTP/IP supports the requirements of the PTP layer on a network stack. It was determined that mapping PTP onto the TCP layer using TCP stream sockets would provide a suitable means of communication between two PTP devices (i.e., the Initiator and Responder of PTP).
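The four Clause 7 requirements amount to a transport-independent contract of the kind the authors later expose as a "transport API" in their reference implementation. The sketch below is illustrative only: the class and method names are hypothetical inventions for this example, not names taken from the specification or from the authors' software.

```python
from abc import ABC, abstractmethod
from typing import Callable, List


class PtpTransport(ABC):
    """Hypothetical transport contract mirroring ISO 15740 Clause 7.

    Concrete subclasses (USB, PTP/IP, ...) must supply a reliable,
    error-free channel (Clause 7.2); PTP itself does no retransmission.
    """

    def __init__(self) -> None:
        # Clause 7.1: the transport must surface device disconnection.
        self.on_disconnect: Callable[[], None] = lambda: None
        # Clause 7.3: events travel on a separate physical/logical path.
        self.on_event: Callable[[bytes], None] = lambda _evt: None

    @abstractmethod
    def send(self, data: bytes) -> None: ...      # command/data phase

    @abstractmethod
    def receive(self) -> bytes: ...               # response phase

    @staticmethod
    @abstractmethod
    def discover() -> List[str]: ...              # Clause 7.4: enumeration


class LoopbackTransport(PtpTransport):
    """Toy in-memory transport used only to exercise the interface."""

    def __init__(self) -> None:
        super().__init__()
        self._queue: List[bytes] = []

    def send(self, data: bytes) -> None:
        self._queue.append(data)

    def receive(self) -> bytes:
        return self._queue.pop(0)

    @staticmethod
    def discover() -> List[str]:
        return ["loopback:0"]

    def simulate_unplug(self) -> None:
        # The transport, not the PTP layer, reports disconnection (7.1).
        self.on_disconnect()
```

With this shape, the same PTP logic can sit above any subclass, which is exactly the property the paper attributes to PTP's transport independence.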
Fig 1: PTP/IP over TCP/IP The place of the PTP/IP layer in the communication stack is presented in Fig 1. The ability to open multiple concurrent sockets between the devices provides a mechanism to separate (i) the command, data, and response (CDR) PTP primitives from (ii) the event-based ones. The use of TCP is attractive from two points of view: (i) it is supported in almost all types of LANs and (ii) it provides error-free data delivery and supports concurrent logical connections. C. PTP/IP Connection In order for the PTP layer on the Initiator to be able to establish a session with the remote PTP layer on the Responder device, a PTP/IP connection needs to be established first. This consists of two TCP/IP connections (Fig 2), one for a CDR channel and one to provide an event channel. The former carries PTP commands, data, and responses, while the latter asynchronously carries PTP events related to the CDR channel. If an Initiator requires multiple PTP sessions to the same Responder device, then multiple PTP/IP connections need to be established. In a networking environment, multiple Initiators may attempt to open connections with the same Responder. Thus, special care needs to be taken for the Responder to make sure that the two TCP connections are associated with the same PTP/IP connection. Fig 2: PTP/IP Connections A PTP/IP connection should be fully established before any event and data communication can take place. This is described in Fig 3.
The Initiator begins by establishing a CDR TCP connection with the Responder (1) and immediately afterwards sends the Init_Command_Request PTP/IP packet (2), which contains its unique device identifier and friendly name. Fig 3: PTP/IP Connection Establishment The Responder may answer either with an Init_Command_Ack (3), which returns a unique connection number, its unique device identifier, and its friendly name, and lets the Initiator know that it can continue with the connection establishment procedure; or with an Init_Fail, which denies the Initiator access to Responder services and closes the CDR channel. When the Initiator receives the Init_Fail PTP/IP packet, it also closes its end of the CDR channel. If the Initiator received an Init_Command_Ack packet, it will initiate the event channel TCP connection (4) and then send the Init_Event_Request PTP/IP packet (5), which carries the previously assigned connection number. The Responder uses this number to associate the two TCP connections, CDR and event, with the same PTP/IP connection and the same PTP session. In response, the Responder sends an Init_Event_Ack packet on the event channel (6). The PTP/IP connection is now considered established (7) and further data communication can take place. Note that, as opposed to USB, where a responder is limited to a single initiator (a camera can only interact with one tethered device), in the wireless world the responder, such as a digital camera, can concurrently interface with more than a single initiator (e.g., a camera may communicate with a PC to download images and with a wireless printer to make prints). D. Transport Configuration PTP/IP is executed on top of TCP/IP.
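To make the connection-establishment exchange of Fig 3 concrete, the following sketch builds and parses the Init_* packets. The little-endian 32-bit length and packet-type header follows the general shape of such framing, but the numeric type codes and payload layout used here are illustrative assumptions, not the normative encoding from the draft specification.

```python
import struct
import uuid

# Illustrative packet-type codes (assumed, not taken from the draft).
INIT_CMD_REQ, INIT_CMD_ACK, INIT_EVT_REQ, INIT_EVT_ACK, INIT_FAIL = 1, 2, 3, 4, 5
HDR = struct.Struct("<II")  # total length (incl. header), packet type


def packet(ptype: int, payload: bytes = b"") -> bytes:
    """Frame a payload with a length+type header."""
    return HDR.pack(HDR.size + len(payload), ptype) + payload


def parse(data: bytes):
    """Return (packet type, payload) for one framed packet."""
    length, ptype = HDR.unpack_from(data)
    return ptype, data[HDR.size:length]


def init_cmd_req(guid: bytes, name: str) -> bytes:
    # Step 2: Initiator sends its 16-byte GUID and friendly name
    # on the freshly opened CDR channel.
    return packet(INIT_CMD_REQ, guid + name.encode("utf-16-le") + b"\x00\x00")


def init_cmd_ack(conn_no: int, guid: bytes, name: str) -> bytes:
    # Step 3: Responder grants access, assigning a connection number.
    return packet(INIT_CMD_ACK,
                  struct.pack("<I", conn_no) + guid
                  + name.encode("utf-16-le") + b"\x00\x00")


def init_evt_req(conn_no: int) -> bytes:
    # Step 5: sent on the event channel; the connection number lets the
    # Responder associate both TCP connections with one PTP/IP connection.
    return packet(INIT_EVT_REQ, struct.pack("<I", conn_no))


if __name__ == "__main__":
    pc_guid, cam_guid = uuid.uuid4().bytes, uuid.uuid4().bytes
    ptype, payload = parse(init_cmd_req(pc_guid, "Desktop-PC"))
    assert ptype == INIT_CMD_REQ and payload[:16] == pc_guid
    ptype, payload = parse(init_cmd_ack(1, cam_guid, "Camera"))
    conn_no = struct.unpack_from("<I", payload)[0]
    # Steps 5-7: event channel association, then Init_Event_Ack.
    assert parse(init_evt_req(conn_no))[0] == INIT_EVT_REQ
    assert parse(packet(INIT_EVT_ACK))[0] == INIT_EVT_ACK
```

An Init_Fail at step 3 would simply be `packet(INIT_FAIL)` followed by both sides closing the CDR channel, as the text describes.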
Correct handling of addressing is a key component of the implementation. The Responder and/or Initiator must obtain valid IP addresses using one of the following methods: o Manual configuration – the IP address and other associated information is configured manually by the user, to reflect the topology and address schema of the local area network in which the imaging devices will function. o DHCP server running in the local network – in this case, the imaging devices should implement a DHCP client that will automatically obtain an IP address from the DHCP server running in the system. There may be cases where DHCP is not available (such as a camera connecting to a single PC at home via an ad hoc wireless connection, or a LAN that does not implement DHCP). o Dynamic configuration, for example IPv4 Link-Local Addresses (v4LL) – a standard that describes how an IP address is automatically configured so that a new device can work in a local area network without having to set up a DHCP server. The PTP/IP specification neither mandates nor limits the method to be used; it is up to the specific platform which method is used to configure the TCP/IP stack for normal functionality. E. Device and Service Discovery Although the PTP/IP transport for the PTP protocol opens a number of interesting new usage scenarios for an imaging device, if no device and service discovery protocol is used, then the PnP features of a device will be lost. The PTP standard requires device discovery and enumeration. In PTP/IP this requirement can be fulfilled using one of several approaches: o Manual configuration – instruct the Initiator manually where to connect (IP:port for the remote Responder). o Rendezvous – a device discovery and service discovery mechanism defined by Apple® Computer, Inc. o UPnP – the family of protocols defined by the UPnP Forum [5] to facilitate automatic device configuration, service discovery, and invocation. Only the device and service discovery features (SDP) of UPnP are to be used.
o A custom device and service discovery mechanism, specific to PTP/IP, based on the UDP protocol. Each Responder (camera), on arriving in a new network and periodically thereafter, uses UDP to broadcast a defined packet that contains information about itself (such as the supported protocol – i.e., PTP – the port numbers on which it waits for incoming connections, its IP address, and other useful information). All devices that may need to act as Initiators run background processes that intercept this message and configure the PTP/IP layer. The PTP/IP layer can work without a service discovery protocol; specifying such a service is therefore outside the scope of the PTP/IP specification. F. Device Bonding The PTP/IP protocol also provides support for device bonding, better known as device pairing. The protocol provides transfer mechanisms for device unique identifiers as well as device friendly names during PTP/IP connection establishment. Each device in the PTP/IP context has a unique GUID (a 16-byte unique number) that represents the device in the communication process. The exchange of GUIDs allows the application layer on each PTP/IP device to implement device-level access mechanisms. Initiators have the ability to filter out Responders they are not configured to connect to, and Responders have the ability to reject a PTP/IP connection request from unknown Initiators. This mechanism is required in order to avoid accidental picture transfer to an unauthorized Initiator, for example when a Responder (a digital camera user) enters the wireless proximity of an Initiator (the neighbor's printer). Regardless of which service discovery and enumeration mechanism is used, it is recommended that the device GUID also be part of the announcement packets, so that the Initiator is able to filter specific devices (if it wants) and generate selective notifications. G.
Implementation Initiator and responder reference designs, as well as devices based on PTP/IP, are currently in the initial stages of production. Such devices have implemented PTP/IP over TCP/IP over 802.11b/g. A possible PC software architecture for PTP-related services is presented in Fig 4. The operating system running on the PC is Windows XP, which has its own imaging architecture (WIA – Windows Image Acquisition). The PTP logic is implemented as a standard WIA minidriver, for integration with the imaging support of the operating system. Such reference software is available from the authors. The PTP transports (only PTP-USB and PTP/IP are presented) are accessed through a custom transport API that is the same for all the available transports. In this way, the same PTP logic can be used to access different types of PTP devices (USB devices, IEEE 1394 devices, Bluetooth devices, or PTP/IP devices). Fig 4: PC WinXP Software Architecture for PTP For transports that don't have built-in device discovery and enumeration, the authors have implemented an external plug-and-play manager (PtpEnum) that can accommodate various plug-and-play mechanisms, external to the transports. This is the case with the PTP/IP transport. PtpEnum has two main functions: acting as a plug-and-play manager, in charge of receiving and enumerating all PTP devices on the local LAN, and filtering those devices (based on the GUID of the devices and/or the device friendly name). Fig 5: Responder Software Architecture If a device arrival notification is received (either using Rendezvous™ or the custom PnP mechanism), PtpEnum decides whether the device is allowed to be reported. If the device is allowed to communicate with the host PC, then PtpEnum notifies a kernel-mode bus device driver, which generates the Windows XP specific PnP events that trigger the creation of a local imaging device (WIA will load the PTP WIA minidriver). All XP imaging applications (including Explorer) will be able to work with the newly arrived device. The virtual bus device driver and its children don't contain any PTP logic at all; their existence is somewhat forced, but necessary for an easy, seamless integration with the Windows XP imaging architecture. A possible software architecture for PTP-related services in a Responder device is presented in Fig 5. The PTP Responder, which implements the PTP logic, can work with different transports through the transport API. The same PTP logic can be used whether the imaging device connects over a USB cable or over a networking interface (Ethernet or WiFi physical connections). IV. CONCLUSIONS The PTP/IP transport describes how the PTP data structures need to be transported over a TCP/IP network protocol and is the foundation for establishing wireless camera and printer communication. PTP/IP provides the necessary basis for direct printing to wireless printers as well as wireless connection to desktop devices. A detailed draft of the specification is available from the authors [6]. A reference implementation as well as feasibility tests were conducted and successfully concluded by the authors.
Initial devices using PTP/IP were introduced by Fall 2004. PTP/IP is scheduled to be presented to standards bodies by Winter 2005. REFERENCES [1] PTP/ISO-15740, "Picture Transfer Protocol Specification", http://www.i3a.org/downloads_it10.html [2] USB Device Working Group, "USB Still Image Capture Device Definition", http://www.usb.org/developers/devclass_docs/usb_still_img10.pdf [3] CIPA, "CIPA DC-001-2003 Digital Photo Solutions for Imaging Devices", http://www.cipa.jp/pictbridge/contents_e/03overview_e.html [4] P. Bigioi, G. Susanu, P. Corcoran and I. Mocanu, "Digital Camera Connectivity Solutions using the Picture Transfer Protocol (PTP)", ICCE 2002 and IEEE Transactions on Consumer Electronics, vol. 48, no. 3, pp. 417-427, August 2002 [5] UPnP Forum, http://ww.upnp.org [6] PTP/IP Draft Specification – for review purposes only, www.fotonation.com/ Petronel Bigioi received his B.S. degree in Electronic Engineering from "Transylvania" University of Brasov, Romania, in 1997. At the same university he received an M.S. degree in Electronic Design Automation in 1998. He received an M.S. degree in electronic engineering from the National University of Ireland, Galway in 2000. He is currently Director of Camera Technology at FotoNation. His research interests include VLSI design, communication network protocols, digital imaging, and embedded systems. He is a member of IEEE and a member of the technical committee of ICCE. George Susanu received his B.S. degree in microelectronics from the "Kishinev Politechnical Institute", Kishinev, Republic of Moldova. With eight years of experience in RTOS and embedded systems and wide experience in C/C++ programming, he is currently working as a senior R&D engineer with FotoNation Ireland Ltd. His research areas include real-time operating systems and device connectivity. Eran Steinberg received his BSc. in mathematics from HU, Jerusalem, Israel.
He received his BFA in Photography from the School of Visual Arts, NY, and an MSc. in Imaging Science from RIT, Rochester, NY. With over 15 years of industry experience, he was project leader of ISO/TC42/WG18 (15740) and holds 12 granted patents; he is currently CEO of FotoNation. His research interests include digital image processing and connectivity. Peter Corcoran received the BAI (Electronic Engineering) and BA (Mathematics) degrees from Trinity College Dublin in 1984. He continued his studies at TCD and was awarded a Ph.D. in Electronic Engineering for research work in the field of dielectric liquids. In 1986 he was appointed to a lectureship in Electronic Engineering at NUI, Galway. His research interests include microprocessor applications, home networking, digital imaging, and wireless networking technologies. He is a member of IEEE. work_o3cfr6ydsna5rbilixfzlffhku ---- Impact of Long-Wavelength UVA and Visible Light on Melanocompetent Skin Bassel H. Mahmoud1, Eduardo Ruvolo2, Camile L. Hexsel1, Yang Liu2,3, Michael R. Owen1, Nikiforos Kollias2, Henry W. Lim1 and Iltefat H. Hamzavi1 The purpose of this study was to determine the effect of visible light on the immediate pigmentation and delayed tanning of melanocompetent skin; the results were compared with those induced by long-wavelength UVA (UVA1). Two electromagnetic radiation sources were used to irradiate the lower back of 20 volunteers with skin types IV–VI: UVA1 (340–400 nm) and visible light (400–700 nm). Pigmentation was assessed by visual examination, digital photography with a cross-polarized filter, and diffuse reflectance spectroscopy at 7 time points over a 2-week period.
Confocal microscopy and skin biopsies for histopathological examination using different stains were carried out. Irradiation was also carried out on skin type II. Results showed that although both UVA1 and visible light can induce pigmentation in skin types IV–VI, pigmentation induced by visible light was darker and more sustained. No pigmentation was observed in skin type II. The quality and quantity of pigment induced by visible light and UVA1 were different. These findings have potential implications for the management of photoaggravated pigmentary disorders, the proper use of sunscreens, and the treatment of depigmented lesions. Journal of Investigative Dermatology (2010) 130, 2092–2097; doi:10.1038/jid.2010.95; published online 22 April 2010 INTRODUCTION Electromagnetic radiation exists as a spectrum. It is classified based on its wavelength into radio waves, microwaves, infrared (IR), visible light, UV, X-rays, and γ radiation. Studies on human photobiology have focused primarily on UV radiation and, more recently, on IR (Schieke et al., 2003). The visible spectrum, used for general illumination, is defined as the portion of electromagnetic radiation visible to the human eye, which corresponds to wavelengths from 400 to 700 nm (Diffey and Kochevar, 2007). The absorption of visible light by chromophores in the skin is the principle underlying its use in laser therapy, intense pulsed light therapy, and photodynamic therapy. However, the effect of visible light on pigmentary alterations has not been explored. This is especially relevant, as the visible spectrum comprises 38.9% of sunlight when it reaches the surface of the earth (Frederick et al., 1989). The limited information on visible light is partly due to the lack of a readily available broad-spectrum light source that emits only in the visible spectrum, without UV or IR components.
In this study, we evaluated the effects of a light source that emits 98.3% visible light on cutaneous pigmentary alterations in individuals with Fitzpatrick skin types IV–VI, and compared these effects with those induced by UVA1 (340–400 nm). Furthermore, because it is known that responses to electromagnetic radiation in the UV range differ in individuals with different skin types (Jackson, 2003), the results obtained were compared with those in individuals with skin type II. ORIGINAL ARTICLE Received 16 June 2009; revised 7 February 2010; accepted 23 February 2010; published online 22 April 2010. © 2010 The Society for Investigative Dermatology. 1Department of Dermatology, Multicultural Dermatology Center, Henry Ford Hospital, Detroit, Michigan, USA; 2Johnson & Johnson, Skillman, New Jersey, USA. 3Current address: Department of Medicine and Bioengineering, University of Pittsburgh, 5117 Centre Avenue, Pittsburgh, Pennsylvania 15232, USA. Correspondence: Iltefat H. Hamzavi, Department of Dermatology, Multicultural Dermatology Center, Henry Ford Hospital, 3031 West Grand Boulevard, Suite 800, Detroit, Michigan 48202, USA. E-mail: ihamzav1@hfhs.org. Abbreviations: DRS, diffuse reflectance spectroscopy; IPD, immediate pigment darkening; IR, infrared. RESULTS The spectral irradiance of the visible light source used in this study is shown in Figure 1; the dose range used is shown in Table 1. The UVA1 light source's spectral distribution was as follows: 99.7% UVA1, 0.12% UVA2, 0.17% visible light, and less than 0.00001% UVB radiation. This made the effects of visible light, UVA2, and UVB radiation negligible, because the highest UVA1 dose used in this study was 60 J cm−2. The visible light source emitted 0.19% UVA1 (340–400 nm), 98.3% visible light (400–700 nm), and 1.5% IR (700–1800 nm) radiation. Considering that the highest dose of visible light used in this study was 480 J cm−2, there was less than 1 J cm−2 of UVA1 emitted. Effects of long-wavelength UVA1 (340–400 nm) on skin types IV–VI The lowest dose at which pigmentation developed for all patients was 5 J cm−2; as the dose was increased, darker pigmentation was observed. Pigmentation was more intense in volunteers with darker skin, and was still evident after diascopy, suggesting that the observed skin darkening was indeed due to pigmentary alteration rather than dilatation of cutaneous vasculature. The immediate pigmentation was characterized by being well defined and grayish in color (Figure 2a). With time, the skin color changed to brown and faded rapidly over the course of 2 weeks following irradiation. No erythema was observed at any time point following irradiation. The diffuse reflectance spectroscopy (DRS) results for oxy-hemoglobin level and melanin content after irradiation with the UVA1 light source were analyzed and graphed. The assessment of oxy-hemoglobin levels is a reflection of erythema clinically, whereas the measurement of melanin content reflects cutaneous pigmentation. There was no dose–response or time-course relationship between UVA1 radiation and oxy-hemoglobin levels assessed using DRS; this finding correlated with our clinical observation of no erythema induced by UVA1 irradiation. In contrast, there was a dose–response correlation between the melanin content and the UVA1 dose delivered at all of the time points studied, corresponding to the pigmentation observed clinically. Confocal microscopy did not show any significant difference in melanocyte density or in the amount of melanin between irradiated and non-irradiated control sites.
Effects of visible light (400–700 nm) on skin types IV–VI Immediately after visible light irradiation, there was an induction of immediate pigmentation; the lowest dose at which pigmentation developed was 40 J cm−2. As the dose was increased, pigmentation became darker (Figure 2b). Similar to the results of UVA1, pigmentation was still evident after diascopy and was more intense in skin type V volunteers. However, in contrast to the UVA1 effect, the immediate pigmentation was characterized by being dark brown from the start and surrounded by ill-defined erythema, which disappeared in less than 2 hours. Furthermore, pigmentation induced by visible light was sustained during the 2-week period of the study and did not fade away, even at lower doses. DRS results for oxy-hemoglobin and melanin after irradiation with the visible light source are illustrated in Figures 3 and 4, respectively. DRS findings showed a direct correlation between the concentration of oxy-hemoglobin and the visible light doses delivered, which correlates with the clinical finding that the higher the dose, the more intense the erythema (Figure 3). The threshold for a minimal perceptible erythema, based on clinical observation and correlation with DRS measurements, is approximately 0.45 in terms of relative difference in absorbance values (irradiated minus control sites). As shown in Figure 3, only the two highest doses, 320 and 480 J cm−2, have values consistently above 0.45 for time points from immediately after exposure (0 hour) to 2 hours, which correlate with the erythema observed clinically (Figure 2b). After 2 hours, even at 480 J cm−2, oxy-hemoglobin levels dropped to less than 0.45, consistent with the clinical observation that the erythema had resolved beyond the 2-hour time point.
For all the doses studied, the melanin contents at 1 day after irradiation were lower than those at the earlier time points; this difference was statistically significant for the 160 and 320 J cm⁻² doses (P < 0.05). At 1 week and 2 weeks after irradiation, for all the doses studied, melanin contents increased above the levels observed at the 1-day time point. This DRS-measured decrease at 24 hours corresponds clinically to the transitional zone between persistent pigment darkening and delayed tanning due to new melanin formation. Confocal microscopy performed following the highest dose (480 J cm⁻²) at 2 and 24 hours after irradiation showed redistribution of pigment, in the form of migration of melanin from basal cells to the upper epidermal cell layers, in the irradiated site compared with control.

Histopathological results
The histopathological examination carried out following exposure to the highest dose (480 J cm⁻²) of visible light at 24 hours after irradiation showed no difference between irradiated and non-irradiated control sites when stained with hematoxylin and eosin, Acid Orcein, or P53 stains. Specifically, no thermal or actinic damage was observed in the dermis of the irradiated skin. On the other hand, using the Fontana–Mason stain, there was a redistribution of melanin pigment.

Figure 1. Spectral irradiance of the visible light source (irradiance, W cm⁻² per 2 nm, versus wavelength, 200–2,000 nm). Note that the vertical axis is in logarithmic scale.

Table 1. Irradiation doses and fluence rates

                 Visible light   UVA1
Fluence rate     200 mW cm⁻²     25 mW cm⁻²
Dose (J cm⁻²)    8               1
                 40              5
                 80              10
                 160             20
                 320             40
                 480             60

Abbreviation: UVA1, long-wavelength ultraviolet A.

www.jidonline.org 2093 BH Mahmoud et al. Impact of Visible Light on the Skin
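The doses and fluence rates in Table 1 imply fixed exposure times, since dose equals fluence rate multiplied by time. The sketch below (not part of the study; the numbers are taken from Table 1) tabulates those implied times:

```python
# Sketch: exposure times implied by the Table 1 doses and fluence rates,
# using dose (J/cm^2) = fluence rate (W/cm^2) x time (s).

def exposure_seconds(dose_j_cm2: float, fluence_mw_cm2: float) -> float:
    """Exposure time in seconds for a given dose and fluence rate."""
    return dose_j_cm2 / (fluence_mw_cm2 / 1000.0)

# Dose series and fluence rates from Table 1:
visible = {d: exposure_seconds(d, 200.0) for d in (8, 40, 80, 160, 320, 480)}
uva1 = {d: exposure_seconds(d, 25.0) for d in (1, 5, 10, 20, 40, 60)}

print(visible[480] / 60, uva1[60] / 60)  # highest dose of each source: 40.0 min
```

Note the symmetry of the protocol: the visible-light fluence rate is 8 times the UVA1 rate and the top doses differ by the same factor, so both maximum exposures take the same time.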
In the non-irradiated control site, the pigment was seen just in the basal keratinocytes, whereas in the irradiated site the pigment was redistributed into the keratinocytes in the upper spinous cell layers (Figure 5).

Effects of UVA1 and visible light on skin type II
In contrast to the pigmentation induced in individuals with skin types IV–VI, no pigmentation was induced in subjects with skin type II using the same UVA1 and visible light sources, as well as the same doses, that were used for volunteers with skin types IV–VI (Figure 6). DRS results also confirmed our clinical observation, for there was no significant difference between the oxy-hemoglobin and melanin concentrations measured at the irradiated site compared with the control site in subjects with skin type II.

DISCUSSION
Studies on cutaneous pigmentary changes have focused primarily on UV radiation, especially UVA. Whereas visible light has been used extensively in laser therapy, intense pulsed light therapy, and photodynamic therapy, very little is known about its effect on the time course and quality of pigmentation, and its effect on different skin types. Furthermore, the majority of studies on the effect of UV have focused on Caucasian populations. Population statistics in the United States show significantly changing demographics in the past decade. According to the 2000 census, 29% of the United States population, representing approximately 85 million people, are individuals of color (Taylor and Cook-Bolden, 2002). In 1983, Kollias and Baqer (1984) conducted an in vivo study on the changes in pigmentation induced by visible and near-IR light using a polychromatic light source of 390–1,700 nm. They observed that pigmentation could occur without a significant UV component. Porges et al. (1988) exposed 20 healthy individuals with skin types II, III, and IV to a visible light source of a compact 150-W xenon-arc solar simulator, with a spectral distribution between 385 and 690 nm.
Both IPD and immediate erythema faded over a 24-hour period. The residual tanning response remained unchanged for the remaining 10-day observation period. The threshold dose for IPD with visible light was between 40 and 80 J cm⁻², whereas the threshold dose for delayed tanning was closer to 80–120 J cm⁻². Owing to the lack of standardization pertaining to the spectrum of the visible-light-producing sources in the aforementioned two studies, it is difficult to compare the results. The filter used in the latter study was a 3-mm Schott GG385 (Schott Optical Company, Duryea, PA), which should have removed most of the short-wavelength UV radiation; however, a part of the long UVA spectrum, together with visible light, was still present in this filtered light source.

Figure 3. Oxy-hemoglobin concentration. Oxy-hemoglobin concentration, as measured using DRS, at sites irradiated with visible light (8–480 J cm⁻²), with measurements taken from immediately after irradiation (0 minutes) to 2 weeks later. Oxy-hemoglobin values higher than 0.45 correspond to visually perceptible erythema. Values represent the average difference between the irradiated- and control-site values of 20 healthy volunteers having skin types IV–VI. Error bars denote standard error.

Figure 2. Clinical appearance of irradiated sites immediately after exposure. (a) Six photos showing sites irradiated with UVA1 at doses of 1, 5, 10, 15, 20, 40, and 60 J cm⁻², respectively, as indicated in the figure; (b) six photos showing sites irradiated with visible light at doses of 8, 40, 80, 160, 320, and 480 J cm⁻², respectively, as indicated in the figure.

2094 Journal of Investigative Dermatology (2010), Volume 130 BH Mahmoud et al. Impact of Visible Light on the Skin
In our study, a light source emitting 98.3% visible light was used to evaluate the effect of visible light on skin. In addition, melanocompetent volunteers with skin types IV–VI were studied, who differ from the samples evaluated in the aforementioned studies. The results obtained for skin types IV–VI were then compared with those obtained for skin type II. Although there was evident pigmentation following UVA1 and visible light irradiation, with threshold doses of 5 and 40 J cm⁻², respectively, no pigmentation was induced on skin type II for all doses used and at all the time points studied (Figure 6). These results were also confirmed by DRS, which objectively assesses the degree of pigmentation in skin. When comparing the quality of pigmentation observed following UVA1 and visible light irradiation in skin types IV–VI, it was noted that pigmentation induced by UVA1 was initially gray in color and then turned brown after 24 hours, whereas pigmentation induced by visible light was dark brown from the start (Figure 6). Also, UVA1-induced pigmentation was well defined and not surrounded by erythema at any time point. This was confirmed by DRS, which showed no increase in the levels of oxy-hemoglobin at any time point following irradiation. On the other hand, following exposure to visible light, erythema appeared immediately after irradiation, surrounding the pigmentation. It started to fade after half an hour and completely disappeared 2 hours after irradiation (Figure 6). The light source used in this study had three KG5/3-mm filters to block IR radiation, which generates heat. Therefore, only a minimal IR component (1.5%) was detected (Figure 1). It is possible that the erythema following visible light irradiation in skin types IV–VI is due to the fact that heat is produced within melanocompetent skin from the absorption of visible light by melanin pigment. Heat in turn can lead to dilatation of deep dermal vessels.
In this scenario, the heat generated would be independent of the heat in the light source; however, it would depend on the concentration of the melanin chromophore and the amount of visible light delivered to the skin. This could be the reason why no skin response was seen in subjects with skin type II. Pigmentation induced by visible light was darker and more sustained than the pigmentation induced by UVA1 (Figure 6). It should be noted that the UVA1 and visible light doses used in this study are those easily obtained from daily sun exposure. The fluence rate of the solar spectrum of visible light, during clear-sky conditions at sea level, is about 15 times higher than that of UVA. Therefore, a dose of 20 J cm⁻² of UVA would correspond to about 300 J cm⁻² of visible light for an exposure time of about 1 hour. At 20 J cm⁻², pigmentation induced by UVA1 was faint and faded rapidly over the course of 2 weeks; on the other hand, at 320 J cm⁻² of visible light, pigmentation was much darker and remained unchanged until the end of the study period, which was 2 weeks (Figure 6). Thus, our results strongly suggest that visible light could have a significantly greater role than UVA1 radiation in producing darker and longer-lasting pigmentation in populations with skin types IV–VI. When specifically looking for thermal and actinic DNA damage after a single irradiation, histopathological examination of irradiated sites compared with non-irradiated control sites showed no difference using hematoxylin and eosin, Acid Orcein, and P53 stains. A noticeable change observed in our histopathological examination was the redistribution of melanin pigments, in the form of migration of melanin from basal cells to the upper layers of the epidermis, in comparison with the non-irradiated control sites. This redistribution corresponds clinically to the persistent pigment darkening seen in our volunteers 24 hours following irradiation, which was the time when biopsies were taken.
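The solar-equivalence arithmetic in the paragraph above can be checked directly. The sketch below is illustrative only; the 15-fold ratio and the 1-hour/20 J cm⁻² UVA exposure are taken from the text, and the fluence rates are back-calculated from them:

```python
# Illustrative check of the solar-dose correspondence described above.
# Assumptions (from the text): visible-light fluence is ~15x that of UVA,
# and ~1 hour of sun delivers ~20 J/cm^2 of UVA.

def dose_j_per_cm2(fluence_mw_per_cm2: float, seconds: float) -> float:
    """Dose (J/cm^2) = fluence rate (W/cm^2) x exposure time (s)."""
    return fluence_mw_per_cm2 / 1000.0 * seconds

# Back-calculate the solar UVA fluence rate implied by 20 J/cm^2 in 1 hour:
uva_fluence_mw = 20.0 / 3600.0 * 1000.0      # ~5.6 mW/cm^2
visible_fluence_mw = 15.0 * uva_fluence_mw   # ~83 mW/cm^2

visible_dose = dose_j_per_cm2(visible_fluence_mw, 3600.0)
print(round(visible_dose))  # ~300 J/cm^2, matching the stated correspondence
```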
Figure 4. Melanin concentration. Melanin concentration, as measured using DRS, at sites irradiated with visible light (8–480 J cm⁻²), with measurements taken from immediately after irradiation (0 minutes) to 2 weeks later. Values represent the average difference between the irradiated- and control-site values of 20 healthy volunteers having skin types IV–VI. Error bars denote standard error.

Figure 5. Histological changes at 24 hours after irradiation with 480 J cm⁻² of visible light. (a) Unirradiated control site: pigment was seen just in the basal keratinocytes; (b) irradiated site: the circled area highlights the redistribution of melanin pigment into the keratinocytes in the upper spinous cell layers. Fontana–Mason stain; bar = 0.33 mm.

This work has implications for the use of sunscreens. UV filters are divided into organic (also known as chemical) and inorganic (also known as physical) filters. There is no effective organic filter for visible light. As only optically opaque filters are able to absorb visible light, only optically opaque inorganic filters can protect against visible light. The two generally available inorganic sunscreen agents are zinc oxide and titanium dioxide (Lim and Honigsmann, 2007). Considering that the results of this study showed that visible light can produce sustained dark pigmentation in individuals with skin types IV–VI, there may be a need for the development of filters that protect against visible light. Such filters could be useful for the management of patients with photoaggravated dermatoses, such as melasma.
In conclusion, this study described a method to assess pigmentation induced by pure UVA1 and visible light, and then applied this method to both type II and types IV–VI skin. We have also developed a visible light source that can produce these wavelengths with minimal UV and IR contamination. A difference exists between the quality, time course, and duration of pigmentation produced by UVA1 and pigmentation produced by visible light. Furthermore, the response to UVA1 and visible light irradiation depends on skin type, as no pigmentation was induced on skin type II. Although currently no standardized visible light source is used in all studies, it would be ideal if such a light source could be agreed upon and used in future studies. The fact that visible light can induce dark and relatively sustained pigmentation has a clinical implication for the treatment of photodermatoses. In addition, it shows the need for sunscreens with better coverage in the visible light range.

MATERIALS AND METHODS
This study was reviewed and approved by the Institutional Review Board, Henry Ford Hospital. Study procedures were followed in accordance with the ethical standards of the Institutional Review Board and the principles of the Helsinki Declaration of 1975. Informed consent was obtained from all participants before the initiation of the study.

Volunteers
To be included in the study, volunteers had to be at least 18 years old and not taking any photosensitizing drugs. Women who were lactating, pregnant, or planning to become pregnant, and patients with serious systemic disease, immunosuppression, skin cancers, a recent history of vitiligo, melasma or other disorders of pigmentation, or photosensitivity were excluded. The study was conducted throughout the calendar year. In all, 22 normal healthy volunteers were recruited; 20 of them had Fitzpatrick skin types IV–VI (4 type IV, 12 type V, and 4 type VI) and 2 volunteers had skin type II.
Regarding gender, there were 4 males and 18 females, with a mean age of 36 years (range 20–60 years). We chose the lower back as a non-sun-exposed area on which to conduct our study, and a non-irradiated nearby skin site served as a control. Volunteers were instructed to avoid sun exposure or tanning beds on the irradiated as well as the control areas during the period of the study.

Materials
Light sources and irradiation steps. We used two targeted light sources, a UVA1 light source and a visible light source. The UVA1 light source was a Hamamatsu LightingCure UV Spot Light 200, 240–400 nm, 200 W (Hamamatsu Photonics K.K., Shimokanzo, Toyooka-village, Iwata-gun, Shizuoka-ken, Japan). For irradiation purposes a 9-mm Hamamatsu liquid light guide was used. The UVA1 light source was filtered using one UV Hot Mirror/3-mm (Newport Thin Film Laboratories, Chino, CA), UG11/1-mm, and WG 345/3-mm filter (Schott Optical Company). The light source irradiance spectrum was measured using a calibrated Optronics OL754 spectroradiometer (Optronics, Orlando, FL). The visible light source used was the Fiber-Lite Model 170-D (Dolan-Jenner Industries, Boxborough, MA) with a 150-W quartz-halogen lamp; a straight 8-mm Dolan-Jenner glass optical fiber was used for irradiation.

Figure 6. Clinical photos of skin type V irradiated with UVA1 and visible light, and skin type II irradiated with visible light, at different time points. Cross-polarized images of sites irradiated with UVA1 (a–d) and visible light (e–l) at different times on both type V (e–h) and type II (i–l) skin. (a, e, and i) Immediately after irradiation; (b, f, and j) 1 day after irradiation; (c, g, and k) 1 week after irradiation; (d, h, and l) 2 weeks after irradiation.
In all, three KG5/3-mm Schott glass filters and one GG400/3-mm filter (Schott Optical Company) were used to filter IR and UV radiation, respectively, from the light source. Spectral irradiance of the visible light source was measured using a calibrated Optronics OL750 spectroradiometer (Figure 1). The fluence rate for both light sources was adjusted, using a calibrated Oriel Thermopile Model 71767 (Oriel, Stamford, CT), to 25 and 200 mW cm⁻² for UVA1 and visible light, respectively. An acrylic holder, to which the optical fibers and the liquid light guide were attached, was specifically designed for the study. The optical fiber holder was then fixed on the lower backs of volunteers during the period of delivery of the required doses. The ranges of delivered irradiation doses are shown in Table 1: sites were irradiated with visible light ranging from 8 to 480 J cm⁻² and with UVA1 from 1 to 60 J cm⁻².

Clinical and instrumental assessment
Clinical and instrumental assessment for immediate pigment darkening, persistent pigment darkening, erythema, and edema was done at seven time points: immediately after irradiation, and then at 30 minutes, 1 hour, 2 hours, 1 day, 1 week, and 2 weeks following irradiation. For each time point, cross-polarized digital photography was used to document the exposed sites. The advantage of a cross-polarized filter is that it prevents the glare coming from the skin surface and thus leads to a better visualization of sub-surface chromophores. Erythema and pigmentation were also assessed by measuring the levels of oxy-hemoglobin and melanin, respectively, using DRS. Biopsies were carried out at 24 hours at the sites exposed to the highest visible light dose used; biopsies from an unexposed surrounding site served as control specimens.

Diffuse reflectance spectroscopy.
DRS is an analytical tool used to investigate the optical properties of an absorbing molecule and to measure the scattering and absorption properties of the skin that a beam of light penetrates. The DRS instrument consisted of a quartz-halogen light source (Ocean Optics, Boca Raton, FL), a bifurcated fiber bundle (Multimode Fiber Optics, East Hanover, NJ), an S2000 spectrometer (Ocean Optics), and a laptop computer. Before data acquisition, the spectrophotometer was calibrated using a pure white tile, with a dark background for compensation. Apparent concentrations of hemoglobin and melanin were calculated from the diffuse reflectance spectra as described by Kollias et al. (1992). Measurements were taken from the irradiated site and the adjacent unirradiated control site. In order to calculate the changes in the chromophore concentrations, the value for the control site was subtracted from that for the irradiated site. Therefore, if there is no detectable change in the irradiated site, the value would be zero. Based on clinical correlation, the threshold for minimal perceptible erythema that correlated with DRS measurements was 0.45 in terms of relative difference in absorbance values (irradiated minus control sites).

Confocal microscopy. Pigmentation induced by the highest doses of UVA1 and visible light, as well as the control site, was assessed using a hand-held confocal microscope (VivaScope 3000, Lucid, Rochester, NY) designed specifically for clinical imaging of the skin, at 2 and 24 hours after the highest dose of visible light irradiation.

Statistical analysis
DRS measurements were taken in triplicate with the 2.5-mm-diameter fiber on the irradiated as well as the control skin sites. The results were presented as means and standard deviations; standard least-squares statistical model analysis was carried out using JMP 7 (SAS Institute, Cary, NC).
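The DRS bookkeeping described above (triplicate readings, control-site subtraction, and the 0.45 erythema threshold) can be sketched as follows; the readings used here are invented for illustration:

```python
# Sketch of the DRS analysis described in the text: triplicate readings are
# averaged, the control-site value is subtracted from the irradiated-site
# value, and an oxy-hemoglobin difference above 0.45 is taken as minimally
# perceptible erythema. The numeric readings below are made-up examples.
from statistics import mean

ERYTHEMA_THRESHOLD = 0.45  # relative absorbance difference (irradiated minus control)

def chromophore_difference(irradiated: list[float], control: list[float]) -> float:
    """Mean irradiated-site value minus mean control-site value."""
    return mean(irradiated) - mean(control)

def perceptible_erythema(oxy_hb_difference: float) -> bool:
    return oxy_hb_difference > ERYTHEMA_THRESHOLD

delta = chromophore_difference([0.82, 0.79, 0.84], [0.30, 0.28, 0.29])
print(round(delta, 2), perceptible_erythema(delta))  # 0.53 True
```

With identical irradiated and control readings the difference is zero, matching the text's statement that no detectable change yields a value of zero.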
Histopathology and immunohistochemistry
Histological changes induced by the highest dose of visible light were assessed by histopathological examination 24 hours after irradiation, for the irradiated site and for the non-irradiated (control) area, using hematoxylin and eosin, Fontana-Masson, Acid Orcein, and P53 stains.

CONFLICT OF INTEREST
The authors state no conflict of interest.

ACKNOWLEDGMENTS
This study was supported by an unrestricted grant from Johnson & Johnson Consumer Companies, Skillman, NJ; the Shahani Fund; and the CS Livingood Fund from the Department of Dermatology, Henry Ford Hospital, Detroit, MI.

REFERENCES
Diffey BL, Kochevar IE (2007) Basic principles of photobiology. In: Photodermatology (Lim HW, Hönigsmann H, Hawk JL, eds), New York: Informa Healthcare USA, 15–27
Frederick JE, Snell HE, Haywood EK (1989) Solar ultraviolet radiation at the earth's surface. Photochem Photobiol 50:443–50
Jackson BA (2003) Lasers in ethnic skin: a review. J Am Acad Dermatol 48:S134–8
Kollias N, Baqer A (1984) An experimental study of the changes in pigmentation in human skin in vivo with visible and near infrared light. Photochem Photobiol 39:651–9
Kollias N, Baqer A, Sadiq I et al. (1992) In vitro and in vivo ultraviolet-induced alterations of oxy- and deoxyhemoglobin. Photochem Photobiol 56:223–7
Lim H, Honigsmann H (2007) Photoprotection. In: Photodermatology (Lim HW, Hönigsmann H, Hawk JL, eds), New York: Informa Healthcare USA, 267–78
Porges SB, Kaidbey KH, Grove GL (1988) Quantification of visible light-induced melanogenesis in human skin. Photodermatology 5:197–200
Schieke SM, Schroeder P, Krutmann J (2003) Cutaneous effects of infrared radiation: from clinical observations to molecular response mechanisms. Photodermatol Photoimmunol Photomed 19:228–34
Taylor SC, Cook-Bolden F (2002) Defining skin of color. Cutis 69:435–7
Impact of Long-wavelength UVA and Visible Light on Melanocompetent Skin

doi:10.1016/j.joms.2005.05.227
trauma. Roughly half of the laryngeal fractures in our series were managed non-operatively, although approximately three-quarters required airway intervention ranging from intubation to emergent cricothyroidotomy. Clinicians treating maxillofacial trauma need to be familiar with the signs and symptoms of this injury. A timely evaluation of the larynx and rapid airway intervention are essential for a successful outcome. The Schaefer classification of injury severity and corresponding treatment guidelines were consistent with our study.
References
Leopold DA: Laryngeal trauma. A historical comparison of treatment methods. Arch Otolaryngol 109:106, 1983
Schaefer SD, Stringer SP: Laryngeal trauma, in Bailey BJ, Pillsbury HC, Driscoll BP (eds): Head and Neck Surgery: Otolaryngology. Philadelphia, PA, Lippincott-Raven, 1998, pp 947-956

Short and Long Term Effects of Sildenafil on Skin Flap Survival in Rats
Kristopher L.
Hart, DDS, 705 Aumond Rd, Augusta, GA 30909 (Baur D; Hodam J; Wood LL; Parham M; Keith K; Vazquez R; Ager E; Pizarro J)
Statement of the Problem: Annually in the United States, approximately 175,000 people sustain severe facial trauma requiring major surgical repair. These injuries often cause significant loss of facial skin, leading to severe aesthetic and functional deficits. Skin flaps are the foundation for reconstructing such defects. The most important factor determining the survival of these flaps is the delivery of oxygen via the circulation. A number of therapeutic modalities have been explored to improve blood flow and oxygenation of flap tissue. One principal approach has been to increase blood flow by vasodilation. However, due to their hypotensive effects, the vasodilators tested thus far have not been utilized in surgical repair of facial skin. Phosphodiesterase (PDE) inhibitors, which include the drug sildenafil, are a relatively new class of FDA-approved drugs whose effect on tissue viability has not been widely explored. The vasodilatory effects of these drugs have the potential to enhance blood flow to wound sites, improve oxygen supply, and promote wound repair. In this study, we examined whether administering sildenafil intraperitoneally at a dose of 45 mg/kg/d has a beneficial effect on the survival of surgical skin flaps in rats.
Materials and Methods: Surgical skin flaps were evaluated using orthogonal polarization spectral imaging, flap image analysis, and histology at 1, 3, 5, and 7 days. Orthogonal polarization spectral imaging provides high-quality, high-contrast images of the microcirculation of skin flaps. Areas of normal capillary flow are easily differentiated from areas of stasis and areas completely devoid of vessels. First, rats were assigned to either sildenafil-treated (45 mg/kg/day IP), vehicle control, or sham (no injection) groups.
Second, caudally based dorsal rectangular (3 × 10 cm) flaps were completely raised and then stapled closed. Third, spectral imaging was used to determine the distances from the distal end of the flap to the zones of stasis and zones of normal flow. Finally, animals were sacrificed and the flaps removed and photographed. Digital images of the flaps were used to determine the percent of black, discolored (gray/red), and normal tissue.
Method of Data Analysis:
a. Sample size: N = 152 rats
b. Duration of study: 3 months
c. Statistical methods: One-way analysis of variance (ANOVA)
d. Subjective analysis: No
Results: The orthogonal polarization spectral imaging results showed a significant decrease in the zone of necrosis (no vessels present) in rats treated with sildenafil one and three days after surgery. We also found a significant decrease in the total affected area, which consists of the zones of necrosis and stasis, in treated rats three days after surgery. Digital photography analysis also showed a significant decrease in the area of necrosis (black tissue) at three days. These findings support the results obtained using spectral imaging. No significant differences were found between sildenafil-treated and control animals five and seven days after surgery.
Conclusion: These results demonstrated that 45 mg/kg/d IP of sildenafil may have a beneficial effect on skin survivability at the early stages of wound healing. Orthogonal polarization spectral imaging has been proven to predict areas of necrosis more accurately than photographic analysis. This method allowed us to observe differences between sildenafil-treated and control rats as early as 24 hours and as late as three days after surgery. Although we did not see any benefit when animals were treated with 45 mg/kg/d IP five and seven days after surgery, we believe that changes in the treatment regimen may enhance long-term flap survivability.
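The one-way ANOVA named under "Statistical methods" compares between-group variance with within-group variance. A minimal, stdlib-only sketch of the F statistic is shown below; the group data are made-up numbers, not the study's measurements:

```python
# Minimal one-way ANOVA F statistic (illustrative; example data are invented).
from statistics import mean

def one_way_anova_f(*groups: list[float]) -> tuple[float, int, int]:
    """Return (F statistic, df between, df within) for k groups."""
    all_values = [x for g in groups for x in g]
    grand_mean = mean(all_values)
    k, n_total = len(groups), len(all_values)
    # Between-group and within-group sums of squares:
    ss_between = sum(len(g) * (mean(g) - grand_mean) ** 2 for g in groups)
    ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)
    df_between, df_within = k - 1, n_total - k
    f_stat = (ss_between / df_between) / (ss_within / df_within)
    return f_stat, df_between, df_within

f, df1, df2 = one_way_anova_f([1.0, 2.0, 3.0], [4.0, 5.0, 6.0])
print(f, df1, df2)  # 13.5 1 4
```

The resulting F is then compared against the F distribution with (df between, df within) degrees of freedom to obtain the p value reported in studies such as this one.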
References
Olivier WA, Hazen A, Levine JP, et al: Reliable assessment of skin flap viability using orthogonal polarization imaging. Plast Reconstr Surg 112:547, 2003
Sarifakioglu N, Gokrem S, Ates L, et al: The influence of sildenafil on random skin flap survival in rats: An experimental study. Br J Plast Surg 57:769, 2004
Funding Source: United States Army

Oral Abstract Session 4 AAOMS • 2005 63

2005 Straumann Resident Scientific Award Winner
Histomorphometric Assessment of Bony Healing of Rabbit Critical-Sized Calvarial Defects With Hyperbaric Oxygen Therapy
Ahmed M. Jan, DDS, The Hospital for Sick Children, S-525, 555 University Ave, Toronto, Ontario M5G 1X8,
Method of Data Analysis: Data analysis included qual- itative assessment of the calvarial specimens as well as quantitative histomorphometric analysis to compute the amount of regenerated bone within the defects. Hema- toxylin and eosin stained sections were sliced and cap- tured by a digital camera (RT Color; Diagnostic Instru- ments Inc, Sterling Heights, MI). A blinded investigator examined merged images and analyzed them for quantity of new bone regeneration. Statistical significance was established with a p value � .05. Results: The HBO group showed bony union and demon- strated more bone formation than the control group at 6 weeks (p � .001). The control group did not show bony union in either defect by 12 weeks. There was no significant difference in the amount of new bone formed in the HBO group at 6 weeks compared with 12 weeks (p � .309). However, the bone at 6 weeks was more of a woven charac- ter, while at 12 weeks it was more lamellated and more mature. Again, in the HBO group both the critical-sized and the supracritical-sized defects healed equally (p � .520). Conclusion: HBO therapy has facilitated the bony heal- ing of both critical-sized and supracritical-sized rabbit calvarial defects. Since bony healing was achieved early, it is reasonable to assume that an even larger than 18 mm defect (if it were technically feasible) might have healed within the 12 week period of study aided by HBO. Adjunctive HBO, based on histomorphometrics, doubles the amount of new bone formed within both the critical sized and the supracritical-sized defects. It allowed an increase in the critical size by more than 20%. References Moghadam HG, Sàndor GK, Holmes HI, et al: Histomorphometric evaluation of bone regeneration using allogeneic and alloplastic bone substitutes. 
J Oral Maxillofac Surg 62:202, 2004 Muhonen A, Haaparanta M, Gronroos T, et al: Osteoblastic activity and neoangiogenesis in distracted bone of irradiated rabbit mandible with or without hyperbaric oxygen treatment. Int J Oral Maxillofac Surg 33:173, 2004 Craniofacial Growth Following Cytokine Therapy in Craniosynostotic Rabbits Harry Papodopoulus, DDS, MD, University of Pittsburgh School of Dental Medicine, 3501 Terrace Street, Pittsburgh, PA 15261 (Ho L; Shand J; Moursi AM; Burrows AM; Caccamese J; Costello BJ; Morrison M; Cooper GM; Barbano T; Losken HW; Opperman LM; Siegel MI; Mooney MP) Statement of the Problem: Craniosynostosis affects 300- 500/1,000,000 births. It has been suggested that an over- expression of Tgf-beta 2 leads to calvarial hyperostosis and suture fusion in craniosynostotic individuals. This study was to test the hypothesis that neutralizing antibodies to Tgf-beta 2 may block its activity in craniosynostotic rabbits, preventing coronal suture fusion in affected individuals, and allowing unrestricted craniofacial growth. Materials and Methods: Twenty-eight New Zealand White rabbits with bilateral delayed-onset coronal suture synostosis had radiopaque dental amalgam markers placed on either side of coronal sutures at 10 days of age (synos- tosis occurs at approximately 42 days of age). At 25 days, the rabbits were randomly assigned to three groups: 1) Sham control rabbits (n � 10); 2) Rabbits with non-spe- cific, control IgG antibody (100ug/suture) delivered in a slow release collagen vehicle (n � 9); and 3) Rabbits with Tgf-beta 2 neutralizing antibody (100ug/suture) delivered in slow release collagen (n � 9). The collagen vehicle in groups Two and Three was injected subperiosteally above the coronal suture. Longitudinal lateral and dorsoventral head radiographs and somatic growth data were collected from each animal at 10, 25, 42, and 84 days of age. 
Method of Data Analysis: Significant mean differences were assessed using a one-way analysis of variance.
Results: Radiographic analysis showed significantly greater (p ≤ 0.05) coronal suture marker separation, overall craniofacial length, cranial vault length and height, cranial base length, and more lordotic cranial base angles in rabbits treated with anti-Tgf-beta-2 antibody than in the other groups at 42 and 84 days of age.
Conclusion: These data support our initial hypothesis that interference with Tgf-beta-2 production and/or function may rescue prematurely fusing coronal sutures and facilitate craniofacial growth in this rabbit model. These findings also suggest that this cytokine therapy may be clinically significant in infants with insidious or progressive postgestational craniosynostosis.
References
Poisson E, Sciote JJ, Koepsel R, et al: Transforming growth factor-beta isoform expression in the perisutural tissue of craniosynostotic rabbits. Cleft Palate Craniofac J 41:392, 2004
Oral Abstract Session 4 • AAOMS • 2005

Digital dental photography. Part 3: principles of digital photography
I. Ahmad1 VERIFIABLE CPD PAPER

We are currently living in the digital revolution: digital broadcasting, digital consumer goods, digital dental radiography, and photography is no exception. Without doubt, digital is the future. However, the natural world is analogue; everything around us is continuous: colour, space, time and sound are all sinuous, without discrete separations. We have separated nature, or digitised it, for the purpose of convenience, utilisation and manipulation. An example is time, which we have divided into days, hours, minutes and seconds, but which in reality, similarly to our surroundings, is not intermittent but passes continuously, without divisions or separations. In a similar vein, digital photography offers many benefits compared to conventional photography, including:
• Instantaneity and convenience
• Flexibility for editing, copying and disseminating images
• Environmentally greener by eliminating toxic dyes and processing chemicals
• Long-term economy by reusing storage media such as memory cards.
As with any new technology, there is, however, a learning curve to fully utilise the benefits and avoid the pitfalls. This chapter describes the fundamental aspects of digital photography, which serve as an essential foundation for subsequent chapters. The starting point is the quintessential item for digital photography: the sensors.

Although we live in a digital age, our knowledge of the processes and technology involved is often limited. As a foundation to understanding the subsequent parts of this series, this part describes the fundamental aspects of digital photography, which includes the sensors, processing and display.

THE SENSORS
Light sensors can be categorised into three basic types: ocular, digital and chemical. Surprisingly, the fundamental principles of the three types are very similar. The ocular apparatus consists of the eyes, optic nerve and the brain, which is the ultimate arbitrator for assessment irrespective of the method used to create an image. In the light-sensitive retina of the eyes, coloured dyes are stimulated by incoming light, triggering neural responses to the brain, which subsequently computes the image of the object being viewed. In digital photography, light-sensitive diodes act as the sensors, which create an electrical current, or charge, which is eventually processed into an image. Ocular and digital imagery share many similarities and are both extremely flexible. For example, if we see something we do not like, we can look away (with digital photography, unwanted parts of an image can be cropped).
If something attracts our attention, the eyes concentrate on that specific part of the object or subject (with digital photography, any point of interest can be enlarged). Also, if we do not like what we see, the brain can change the context of reality so that we find the apparent unsightly representation more pleasurable (with digital photography, software manipulation can alter an image to any desired parameter). These few examples highlight the flexibility and uncanny similarities of ocular and digital imagery.

1General Dental Practitioner, The Ridgeway Dental Surgery, 173 The Ridgeway, North Harrow, Middlesex, HA2 7DF. Correspondence to: Irfan Ahmad. Email: iahmadbds@aol.com; www.IrfanAhmadTRDS.co.uk. Refereed Paper. Accepted 15 November 2008. DOI: 10.1038/sj.bdj.2009.416. © British Dental Journal 2009; 206: 517-523
• We live in a digital world, and recent technological advances have offered conveniences and facilities that were once only the stuff of dreams
• The eyes and digital sensors share uncanny similarities, unlike film photography, which is rigid and inflexible
• Digital photography can be summarised by the acronym CPD (capture, processing and display).

Conversely, chemical or film photography is rigid, with little scope for manipulation, and therefore requires that all settings be exact if an acceptable image is to be produced. The basis of chemical photography is photosensitive coloured layers painted onto a film emulsion which, following development, reveals the registered image on a cellulose sheet. To produce a correctly exposed and high-quality image, every setting needs to be accurate: for example, sharp focusing, correct orientation, proper framing and composition, and precise aperture openings and shutter speeds. In addition, the colour temperature of the ambient light must match that of the film emulsion, and the developing chemicals need to be precisely diluted and at the correct working temperature. It is obvious
that with so many variables, the scope of error is magnified, and even if camera settings are correct, incorrect developing can produce unsatisfactory results. Besides the obvious convenience and instantaneity of digital photography, a major advantage is the ability to correct technical errors at a later stage using software manipulation for rectifying exposure, white balance, framing, orientation, sharpening, etc.
A comparison of the three sensors, ocular, digital and chemical, is summarised in Table 1. It is worth noting that for chemical photography, the film sheet serves as the light-sensitive sensor, storage and reproduction media. However, with ocular and digital imagery, each of these three entities uses different media, which obviously expands possibilities for manipulation and offers unparalleled flexibility.

IN BRIEF
1. Digital dental photography: an overview
2. Purposes and uses
3. Principles of digital photography
4. Choosing a camera and accessories
5. Lighting
6. Camera settings
7. Extra-oral set-ups
8. Intra-oral set-ups
9. Post-image capture processing
10. Printing, publishing and presentations

FUNDAMENTALS OF DIGITAL DENTAL PHOTOGRAPHY • BRITISH DENTAL JOURNAL VOLUME 206 NO. 10 MAY 23 2009 517 © 2009 Macmillan Publishers Limited. All rights reserved.

TECHNICAL ASPECTS OF DIGITAL PHOTOGRAPHY
The easiest way to describe the basic principles of digital photography is by dividing them into three processes, forming the acronym CPD:
• C for capture
• P for processing
• D for display.

Capture
The heart of all products based on silicon technology, such as computers, storage media, scanners and digital cameras, is a semiconductor. With image sensors, the semiconductors are photosensitive units composed of tiny light-detecting units called pixels. The latter are a substitute for emulsion in conventional film cameras. Pixels come in many shapes and qualities, varying in size from 5 μm to 12 μm.
Basically, the image sensor is a collection of silicon photodiodes (pixels), which register the intensity of brightness and darkness of an object. In effect, they are only capable of producing a black and white image of the object being photographed. To create a colour image requires using appropriate filters corresponding to the three additive primary colours: red, green and blue.
Currently there are two types of image sensor competing in the market, the CCD (charge-coupled device) and CMOS (complementary metal oxide semiconductor), each having advantages and disadvantages. The CCDs can be further divided into full-frame and interline CCDs (Figs 1-2). The former, full-frame CCDs, allow the entire frame viewed in the viewfinder to be captured onto the sensor. The pixels are arranged in a line and, once stimulated by light, convey the electrical charge to the end of the line, where it is processed to form an image. This procedure is time consuming and must be carried out in darkness, that is, after the camera shutter is closed. To expedite the process, the interline CCDs have non-light-sensitive rows between the pixels which convey the electrical charge simultaneously as the pixels are 'stimulated' by the incoming light. This accelerates the process of creating an image but, due to the conducting rows (non-light-sensitive areas), the light-sensitive area (fill factor) available is lower compared with full-frame CCDs. This is a major advantage of full-frame CCDs, since a fill factor of 70% to 90% means that less image information is lost compared to an interline CCD, which has a fill factor of 30% to 50%.
The other competitors to CCDs are the CMOS sensors. These devices register light similarly to a CCD, but processing is performed on each pixel rather than being conducted to the end of a line. Due to the circuit integration within each pixel, the CMOS sensors have a smaller surface area with a reduced fill factor of only 30%. Other limitations of CMOS sensors are a low dynamic range and increased noise levels, which are both detrimental to image quality. The advantages are lower power consumption and elimination of blooming (overflow of excess electrical charges to adjacent pixels). To circumvent the low fill factor of both interline CCDs and CMOS sensors, either micro-lenses or octagonal-shaped pixels with larger surface areas arranged diagonally are used to increase the light-sensitive potential or fill factor (Fig. 3).

Table 1 Comparison of the three types of sensors
                         Ocular    Digital                      Chemical
Light-sensitive sensor   Retina    Electrical diodes (pixels)   Dyes on film sheets
Storage/relay media      Nerves    Memory cards or disc         Film sheet
Reproduction media       Brain     Monitor, projector, print    Film sheet

Fig. 1 Full-frame CCD with a large fill factor
Fig. 2 Interline CCD with a small fill factor
Fig. 3 Interline CCD with micro-lenses, which increase the fill factor

Processing
There is a misconception equating an image sensor with digital images. However, the contrary is true. A sensor is only capable of delivering an analogue signal. This signal is an electrical charge, obtained as the result of exposing the pixels to light. Each pixel creates an electrical charge depending on the intensity of incoming light and the duration of exposure. Further technology is necessary to transform the analogue electrical signal into binary (digital) numbers. This is achieved by an analogue-to-digital converter (A/D converter). Once converted, the digital data is processed by microchips either within the camera, or downloaded to a PC and manipulated with appropriate software (for example, Adobe® Photoshop). Since all digital equipment and software work with a binary code, a digression about digitisation is essential.
All computers function with a digital binary code, that is, they can only comprehend 0 and 1. These binary digits are termed bits, and eight bits are equivalent to 1 byte. A byte is the minimum number of bits required to make up a single alphabetic character. The storage and memory capacities of computers are therefore quoted in megabytes (MB) or gigabytes (GB).
In digital photography the primary function of the A/D converter is to reproduce the pure analogue signal as a digital code that is as close as possible to the original. Since digital data is composed of discrete entities, lacking homogeneity, with noticeable banding, in order for the human eye to visualise a continuous tonal range (greyscale) a minimum of 8 bits is necessary (2⁸), which equates to 256 levels (Figs 4-6).
Consequently, for colour images comprising the three primary colours red, green and blue, each colour channel must have a minimum of 8 bits so that a continuous tonal range is perceived. This means 8 bits for red, 8 bits for green and 8 bits for blue (256 levels for each), or a total of 24 bits (2²⁴). This is referred to as the bit or colour depth of an image. The greater the bit depth per primary colour, the greater the accuracy of recorded detail. The colour depth is an important point to consider when purchasing digital equipment or software. The stated colour depth can either be for each primary colour (per channel) or the total bit depth of the three primary colours. Many manufacturers quote 8-bit depth/channel, indicating 8 bits per primary colour (that is, 8 for red, 8 for green and 8 for blue), which equates to a total bit depth of 24. However, other manufacturers state the total bit depth of, say, 24.
Returning to the image sensor, each pixel is assigned a binary number according to the magnitude of its charge. The A/D converter assigns the level of brightness in steps. As mentioned above, the greater the number of steps, the smoother the transition on a greyscale and the more precise the rendition of an image. Most cameras use an 8-bit-per-channel A/D converter, coding for 256 different levels of brightness and darkness, while professional systems use 16 bits per channel, which translates to 65,536 brightness levels. To calculate the number of colours possible in a given system, the tonal levels for each colour are multiplied. For example, for a camera with a total colour depth of 24, the number of colours coded is 256R × 256G × 256B, which results in 16.7 million possible colours (Fig. 7); for a total 48-bit system (16 bits/channel), or 2⁴⁸, the number of colours is approximately 2.8 × 10¹⁴. In comparison, the difference threshold for colour of the human eye is low enough to discriminate 7 million colours.
In reality, only 8 bits are necessary for the eyes to visualise an uninterrupted smooth greyscale. However, once an 8-bit image is manipulated using photo-editing software, there is degradation of the original 8-bit signal, resulting in jagged steps at the periphery of objects. To avoid these unwanted artefacts it is therefore wiser to start with a 16-bit image, allowing for degradation while still maintaining the minimum requisite 8-bit colour depth.

Fig. 4 A small bit depth results in a pronounced jagged edge
Fig. 5 Increasing the bit depth creates a smoother edge
Fig. 6 To create a seamless transition between black and white and a smooth edge, a minimum bit depth of 8, or 256 levels, is necessary

Besides colour depth, the other factor to consider is the dynamic range of the sensor (to be discussed further in Part 6). This is determined by the amount of charge that a pixel can accept, or its saturation level, termed full well capacity.
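The tonal-level and colour-count arithmetic above is easy to verify. The sketch below (plain Python; the function names are ours, purely illustrative) reproduces the figures quoted in the text:

```python
def tonal_levels(bits_per_channel: int) -> int:
    """Distinct levels one channel can encode: 2 raised to the bit depth."""
    return 2 ** bits_per_channel

def colour_count(bits_per_channel: int, channels: int = 3) -> int:
    """Total colours of an RGB system: per-channel levels multiplied together."""
    return tonal_levels(bits_per_channel) ** channels

print(tonal_levels(8))    # 256 greyscale levels, the minimum for a smooth gradient
print(colour_count(8))    # 16777216, the "16.7 million colours" of a 24-bit system
print(colour_count(16))   # 281474976710656, about 2.8 x 10^14 for a 48-bit system
```

The same multiplication also confirms the point made above: even a 24-bit system already exceeds the roughly 7 million colours the eye can discriminate.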
The larger the physical size of a pixel, the greater the charge it can hold, and the greater the dynamic range. Therefore, in high-end digital cameras the sensors have larger pixels of approximately 12 μm, compared with the 5 μm pixels that are used in amateur or compact cameras.
As previously stated, the image captured by a sensor is in essence black and white (Fig. 8). Colour is achieved by adding three channels representing the three primary colours of additive mixing, that is, red, green and blue. A variety of ingenious methods are utilised for creating coloured images, including rotating colour filters, beam splitters, or coating each pixel with filters of the three primary colours. The latter is the most popular method, creating a mosaic of red, green and blue on the image sensor (Fig. 9). When exposed, each pixel registers the intensity of light for one of the three colours, and when collated together this yields a colour image (Fig. 10). The latest technology is sandwiching three separate pixel layers of red, green and blue, similar to the dye emulsions of film. The rationale is that the red, green and blue components of white light penetrate to different depths: red is deepest, green intermediate and blue superficial. When these multi-layer sensors are exposed to white light, each individual layer registers red, green or blue, and when combined they form a coloured image.

Fig. 7 Schematic representation of a 24-bit colour depth system (8 bits/channel: 2⁸ = 256 levels for each of the red, green and blue channels, giving 16.7 million colours)
Fig. 8 The pixels of an image sensor only detect the brightness levels of an object, creating a black and white image
Fig. 9 Colour is captured by placing filters of red, green and blue onto the pixels, such as this Bayer pattern arrangement

The actual image capture is performed by numerous methods, for example scanning,
3-shot, 1-shot, 4-shot, microscan and macroscan. The method used depends on the application in question. For example, for photographing static still-life compositions or documenting works of art and sculptures, the ideal is the scanner system. However, a scanning system is inappropriate for moving objects, and for sports photography the 1-shot system is the ideal choice.
By far the most popular system, and one that is suitable for dental photography, is the 1-shot system. As the name implies, a single exposure is required to capture the object being photographed. The set-up is as follows. The image sensor consists of pixels with a mosaic of filters, for example in the Bayer pattern arrangement for the three primary colours, red, green and blue (Fig. 9). Once exposed, the sensor records the corresponding amounts of red, green and blue in the prevailing composition. The entire process is summarised in Figs 11-15. Depending on the proximity of the pixels, small amounts of detail are lost, which are interpolated using information from adjacent pixels. The disadvantages of this system are that the resulting image is prone to interpolation errors, such as colour fringes at the edges of objects and Moiré patterns on strongly chequered materials. The interpolation algorithms mitigate these errors by using software to suppress these unwanted artefacts. However, if suppression is too great, genuine detail is lost, while a lengthy computation time may be unacceptable to expedite workflow. But taking all factors into consideration, the benefits of a 1-shot system, such as the ability to record moving subjects, compactness, light weight and reduced cost, outweigh the minor and perhaps imperceptible loss in image quality.

Fig. 10 The filtered pixels record the amount of red, green and blue at each site to form a colour image; missing inter-pixel information is interpolated from adjacent pixels
Figs 11-15 The pixels are only capable of measuring the dark and bright parts of an image, and in effect are only capable of producing a black and white image (Fig. 11). Colour is created by the three channels red (Fig. 12), green (Fig. 13) and blue (Fig. 14), and combining these channels produces a colour image (Fig. 15)

Before an image can be viewed a certain amount of processing is necessary. Firstly, the captured image must be processed by software in the camera as a digital file. The format of the file at this stage can either be proprietary, that is, specific to a particular camera manufacturer, or generic, such as RAW (DNG), TIFF or JPEG. Secondly, the size of the ensuing file depends on the format in which it is saved. The file size is a crucial determinant of the final image quality. The file size of an image can be calculated according to the formula:

Number of pixels × (total bit depth ÷ 8) = image size in bytes

For example, the maximum file size that a digital camera with a 10-million-pixel image sensor and a bit depth of 24 (8 bits per primary colour) is capable of creating in an uncompressed state is 30 MB:

[10 × 10⁶ × (24 ÷ 8)] = 30 MB

The file format, and hence its size, is primarily dependent on the intended use of the image. This is usually a RAW or TIFF format. If a smaller file is required, a low-resolution file format such as JPEG can be chosen. The latter format compresses the original digital file at the expense of detail loss, but is easier to store, manipulate and disseminate. The JPEG format also has a range of resolutions from low to high, with corresponding file sizes. If a proprietary format is chosen, the file is in a raw state and requires processing by specific software before it can be viewed and stored in a generic format.
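The file-size formula above can be checked in a few lines. A minimal sketch (plain Python; the helper name is ours), using decimal megabytes (1 MB = 10⁶ bytes) as the worked example does:

```python
def uncompressed_size_bytes(pixel_count: int, total_bit_depth: int) -> float:
    """Number of pixels x (total bit depth / 8 bits per byte) = size in bytes."""
    return pixel_count * (total_bit_depth / 8)

# The worked example from the text: a 10-million-pixel sensor at 24-bit depth.
size = uncompressed_size_bytes(10_000_000, 24)
print(size / 1e6)   # 30.0, i.e. the 30 MB uncompressed file quoted above
```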
On the other hand, if a generic format is chosen at the outset, no further processing is necessary to view or store the file.

Display
After in-camera processing, the image can be displayed via electronic or printed media. Electronic media consist of monitors and projectors, and printed media of photographic paper or printing paper.
The first time that an image is usually viewed is on the LCD monitor on the camera back (Fig. 16). The size of these monitors varies from 2 inches to 3.5 inches, with a resolution ranging from ¼ million to 1 million pixels. The monitors allow instantaneous viewing of the image for assessing composition, framing, orientation and exposure. However, they are of little use for determining fine detail or sharp focus due to their low resolution, usually no more than 1 megapixel.
The second type of electronic viewing is with a computer monitor or an LCD projector (beamer). The resolution of both computer monitors and projectors varies enormously. For the former, the resolution ranges from as little as 720 × 480 (0.3 megapixels) to 1,440 × 900 (1.3 megapixels). Even state-of-the-art true high-definition projectors are only capable of delivering a resolution of 2 megapixels (1,920 × 1,080), far short of what is achievable with even the most inexpensive digital compact cameras. This is the reason that an image taken with a 3-megapixel camera will look the same as one from a camera with a 10-megapixel sensor. If no difference is visually discernible, why bother with expensive, high-megapixel cameras? The reason is as follows: the resultant image quality is not solely dependent on the number of pixels. Other more important factors include the resolving power of the lens, the tonal range of the entire system (such as bit depth, dynamic range, file format and size), camera hardware (A/D converter, cooling), and image-processing software (interpolation and colour reproduction), etc. Hence an elaborate camera system usually offers more than just higher megapixels, but also the features cited above to produce high-quality images. This is an important point to remember before choosing a camera system (covered in Part 4), because two cameras with identical megapixels will produce drastically different quality images.
The final point worth mentioning about megapixels and monitors or projectors is as follows. While both have relatively low pixel counts compared to digital cameras, a difference is noticeable when an image is enlarged. For example, this is particularly relevant when photographing pathological changes to the oral mucosa. If a small lesion, in its early stages, is detected, it is useful to magnify the area for detailed visual assessment. However, if the image deteriorates when enlarged, it is clinically useless, giving few clues to the pathological process. To illustrate this point, consider the two images in Figures 17 and 18, which were taken with identical lighting, lens, etc. but with digital backs of different pixel-count image sensors. The image in Figure 17 is a 30.3 MB file, with a pixel count of 2,797 × 1,895 (5.3 megapixels), while the image in Figure 18 is a 113.4 MB file with a pixel count of 5,329 × 3,717 (19.8 megapixels); that is, the second image has a nearly four times greater pixel count. When both images are viewed full-frame on a standard, 1.3-megapixel computer monitor, no difference is visually perceptible. The reason is that the pixel count of both images (5.3 megapixels and 19.8 megapixels) exceeds that of the monitor (1.3 megapixels). However, if both images are now enlarged by 100% to concentrate on the lower mandibular incisors, the pixel count of the enlarged section of Figure 17 is 639 × 616 (0.4 megapixels) (Figure 19), while for Figure 18 it is 1,140 × 1,153 (1.3 megapixels) (Figure 20). With this enlargement, the pixel count of Figure 19 is lower (0.4 megapixels) than that of the monitor (1.3 megapixels), and the image appears grainy and is seen to be breaking down, with loss of detail. However, the enlarged image in Figure 20 matches the pixel count of the monitor (1.3 megapixels) and still appears sharp and retains detail. Notice the scratches on the enamel surface of the mandibular right lateral incisor in Figure 20, which are indiscernible in Figure 19. This example emphasises the need to use camera equipment with high specifications, including a high pixel count, to retain quality when an image is enlarged.
The final method of viewing an image is printing, which can be done either with an office printer or with a professional printing press. Both methods are ubiquitously popular, each having unique benefits and drawbacks. Printing is discussed further in Part 10.

Fig. 16 LCD display on camera back
Fig. 17 An image with a file size of 30.3 MB and 2,797 × 1,895 pixels (5.3 megapixels)
Fig. 18 An image with a file size of 113.4 MB and 5,329 × 3,717 pixels (19.8 megapixels)
Fig. 19 A 100% enlargement of a section of the image shown in Figure 17, with a file size of 2.3 MB and 639 × 616 pixels (0.4 megapixels)
Fig. 20 A 100% enlargement of a section of the image shown in Figure 18, with a file size of 7.6 MB and 1,140 × 1,153 pixels (1.3 megapixels)

Single-Pass Carbon Dioxide Versus Multiple-Pass Er:YAG Laser Skin Resurfacing: A Comparison of Postoperative Wound Healing and Side-Effect Rates
ELIZABETH L. TANZI, MD, AND TINA S.
ALSTER, MD
Washington Institute of Dermatologic Laser Surgery, Washington, DC
BACKGROUND. Ablative laser skin resurfacing with carbon dioxide (CO2) and erbium:yttrium-aluminum-garnet (Er:YAG) lasers has been popularized in recent years and their side effects individually reported. No prior study, however, has directly compared the relative healing times and complication rates of the two different systems.
OBJECTIVE. To evaluate and compare postoperative wound healing and short- and long-term side effects of single-pass CO2 and multiple-pass, long-pulsed Er:YAG laser skin resurfacing for the treatment of facial photodamage and atrophic scars.
METHODS. A retrospective chart review and analysis of sequential clinical photographs were performed in 100 consecutive patients who underwent laser skin resurfacing with single-pass CO2 (Ultrapulse 5000; Coherent, Palo Alto, CA; N = 50) or multiple-pass, long-pulsed Er:YAG laser resurfacing (Contour; Sciton, Palo Alto, CA; N = 50). All laser procedures were performed by a single operator for the amelioration of facial rhytides or atrophic scars. The rate of re-epithelialization, duration of erythema, and presence of complications were tabulated.
RESULTS. The average time to re-epithelialization was 5.5 days with single-pass CO2 and 5.1 days with long-pulsed Er:YAG laser resurfacing. Postoperative erythema was observed in all patients, lasting an average of 4.5 weeks after single-pass CO2 laser treatment and 3.6 weeks after long-pulsed Er:YAG laser treatment. Hyperpigmentation was seen in 46% of the patients treated with single-pass CO2 and 42% of the patients treated with the long-pulsed Er:YAG laser (average duration of 12.7 and 11.4 weeks, respectively). No incidences of hypopigmentation or scarring were observed.
CONCLUSION. Skin resurfacing with single-pass CO2 or multiple-pass long-pulsed Er:YAG laser techniques yielded comparable postoperative healing times and complication profiles.
E.L. TANZI, MD, AND T.S.
ALSTER, MD HAVE INDICATED NO SIGNIFICANT INTEREST WITH COMMERCIAL SUPPORTERS.
LASER SKIN resurfacing is an effective treatment option for many patients with cutaneous photodamage, wrinkles, and acne scarring.1–4 Based on the principles of selective photothermolysis,5 ablative resurfacing lasers target and effectively vaporize water-containing tissue. Collagen shrinkage and remodeling are initiated by controlled thermal injury to the dermis.6–8 Several laser systems are currently available for cutaneous laser resurfacing, including high-energy pulsed and scanned carbon dioxide (CO2) and erbium:yttrium-aluminum-garnet (Er:YAG) lasers. Although excellent improvement of photodamaged skin, rhytides, and atrophic scars can be achieved with a multiple-pass treatment technique with these laser systems,1,3,9–15 an extended recovery period and, in some cases of CO2 laser resurfacing, prolonged erythema have diminished the enthusiasm for multipass CO2 procedures.16,17 Moreover, delayed-onset permanent hypopigmentation has been shown to occur in upward of 20% of those treated with multiple-pass CO2 laser skin resurfacing.17,18 In response to these disadvantages, refinements in CO2 surgical technique and Er:YAG laser technology have been developed.
In 1997, a minimally traumatic single-pass CO2 laser resurfacing procedure was described that resulted in faster re-epithelialization and an improved side-effect profile compared with those typically observed after use of the multiple-pass technique.19 After application of the CO2 laser scans, partially desiccated skin is left intact (rather than removed, as is typical with multipass procedures) to serve as a biologic wound dressing. Additional passes with the CO2 laser may be performed focally in areas of more extensive involvement to limit unnecessary thermal and mechanical trauma to less involved skin.
Subsequent reports have substantiated the improved side-effect profile of this less aggressive procedure.20,21
© 2003 by the American Society for Dermatologic Surgery, Inc. • Published by Blackwell Publishing, Inc. ISSN: 1076-0512/02/$15.00/0 • Dermatol Surg 2003;29:80–84
Address correspondence and reprint requests to: Tina S. Alster, MD, Washington Institute of Dermatologic Laser Surgery, 2311 M Street, N.W., Suite 200, Washington, DC, 20037, or e-mail: talster@skinlaser.com.
In addition to the development of minimally traumatic CO2 laser techniques, the search for alternative methods of cutaneous resurfacing led to the development of the Er:YAG laser. At a wavelength of 2,940 nm, the Er:YAG laser corresponds to the peak absorption coefficient of water and is absorbed 12 to 18 times more efficiently by cutaneous water-containing tissue than is the 10,600-nm wavelength of the CO2 laser.22 At a fluence of 5 J/cm2, a typical short-pulsed (250 µs) Er:YAG laser reliably ablates 10 to 20 µm of tissue per pass, producing a residual zone of thermal injury not exceeding 15 µm.4,23 In contrast, CO2 laser skin resurfacing produces 20 to 60 µm of tissue ablation and up to 150 µm of residual thermal injury per pass. As a result of the minimal thermal injury induced by short-pulsed Er:YAG laser resurfacing, faster re-epithelialization and an improved side-effect profile are effected (as compared with CO2 laser skin resurfacing).24–26 On the other hand, minimal thermal injury in the dermis provides insufficient vascular coagulation (resulting in poor intraoperative hemostasis) and reduced collagen contraction and remodeling (resulting in less impressive clinical results).4,23
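The per-pass depths quoted above can be turned into simple cumulative arithmetic. The sketch below (Python; constants taken from the figures in the text, helper name ours, not from any laser-control software) shows why several Er:YAG passes are needed to match the ablation of one CO2 pass while depositing far less residual heat per pass:

```python
# Per-pass depth ranges quoted in the text, in micrometers (um)
ERYAG_ABLATION_PER_PASS = (10, 20)   # short-pulsed Er:YAG at 5 J/cm^2
ERYAG_THERMAL_MAX_UM = 15            # residual thermal injury per pass
CO2_ABLATION_PER_PASS = (20, 60)
CO2_THERMAL_MAX_UM = 150

def cumulative_ablation(per_pass_range: tuple, n_passes: int) -> tuple:
    """Cumulative (lower, upper) ablation depth after n passes, in um."""
    lo, hi = per_pass_range
    return lo * n_passes, hi * n_passes

# Three Er:YAG passes ablate 30-60 um, comparable to a single CO2 pass
# (20-60 um), yet each Er:YAG pass leaves at most ~15 um of thermal
# injury versus up to 150 um for CO2.
print(cumulative_ablation(ERYAG_ABLATION_PER_PASS, 3))   # (30, 60)
print(cumulative_ablation(CO2_ABLATION_PER_PASS, 1))     # (20, 60)
```

This is illustrative arithmetic only; actual tissue response is not strictly linear in the number of passes.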
Modulated Er:YAG laser systems allow precise control of ablation while increasing the ability to induce collagen formation and achieve hemostasis through increased thermal injury.18 Although the previously described single-pass CO2 and modulated Er:YAG laser skin resurfacing techniques for facial photodamage, rhytides, and atrophic scarring have gained popularity among cutaneous laser surgeons, long-term studies comparing their relative side effects and complications have not been performed. Therefore, the objective of this study was to evaluate and compare postoperative wound healing and side-effect profiles of these two techniques for the treatment of photodamage and atrophic scarring.

Methods

A retrospective chart review and analysis of digital photography was performed in 50 consecutive patients (49 females and 1 male; mean age, 51; skin phototypes I–V) who received single-pass CO2 laser resurfacing (Ultrapulse 5000; Lumenis Laser Corp., Santa Clara, CA) and 50 consecutive patients (47 females and 3 males; mean age, 47; skin phototypes I–V) who received multiple-pass, long-pulsed Er:YAG laser (Contour; Sciton Laser Corp., Palo Alto, CA) resurfacing (Table 1). All laser procedures were performed by a single surgeon (T.S.A.) over a 2-year period for the indication of photodamage, rhytides, or atrophic scarring on the face. Anesthesia was obtained with regional nerve blocks using 1% lidocaine with 1:200,000 epinephrine. For full-face procedures, intravenous anesthesia was administered by a certified nurse anesthetist using a combination of propofol, midazolam, fentanyl, and ketamine. The CO2 laser was calibrated to 300-mJ energy and 60-W power through an 8-mm square scanning handpiece, and the entire face was treated with adjacent nonoverlapping laser scans in a single laser pass at CPG density 5. The 3-mm collimated handpiece was used at 300- to 500-mJ energy and 5- to 7-W power to refine treatment edges.
Partially desiccated tissue remained intact to serve as a biologic wound dressing. Er:YAG laser resurfacing was performed in dual mode (sequential ablation/coagulation) after being calibrated to 90-μm ablation (22.5 J/cm2) with 50% spot overlap and 50-μm coagulation. A square scanning handpiece was used to vaporize the epidermis in a single pass over the entire face. An additional one to two regional passes were delivered to the involved areas using identical laser settings. Laser scans were placed in an adjacent nonoverlapping manner, carefully removing all partially desiccated skin with saline-soaked gauze between each laser pass. The partially desiccated tissue remaining from the final laser pass was left intact as a biologic wound dressing. The laser-irradiated skin showed a clean, pale pink hue with minimal to no bleeding. Immediately after laser treatment, Aquaphor ointment (Beiersdorf Inc., Wilton, CT) was applied to the irradiated skin. Each patient was instructed to perform gentle facial rinses with dilute acetic acid soaks several times daily, followed by an application of ointment and a cooling masque (SkinVestment Inc., Washington, DC).

Table 1. Patient Characteristics

Procedure          Female  Male  Mean Age (Years)  SPT I  SPT II  SPT III  SPT IV  SPT V
One-pass CO2       49      1     51                13     26      6        4       1
Multipass Er:YAG   47      3     47                20     16      7        5       2

Abbreviation: SPT, skin phototype.

Dermatol Surg 29:1:January 2003 TANZI AND ALSTER: SINGLE-PASS CO2 VERSUS MULTIPLE-PASS ER:YAG SKIN RESURFACING 81

A 10-day course of prophylactic antiviral treatment (valacyclovir 500 mg twice daily) was initiated on the morning of surgery. Patients were followed closely during the first postoperative week, during which time any residual coagulated debris was gently removed with cool water and dilute acetic acid compresses. All patients were able to apply camouflage make-up within 7 to 10 days postoperatively.
Patients were formally evaluated by a physician on postoperative days 3 through 7, and at 1, 3, 6, and 12 months after the procedure. If prolonged erythema or hyperpigmentation was noted, the patient was evaluated every 2 weeks until complete resolution. The incidence, severity, and duration of side effects and complications were recorded at each postoperative patient visit. Patient satisfaction surveys (poor, fair, good, or excellent results) were obtained 12 months after the procedure.

Results

The average time to re-epithelialization was 5.5 days (range, 5–7 days) with single-pass CO2 and 5.1 days (range, 5–8 days) with long-pulsed Er:YAG laser resurfacing (Table 2). Postoperative erythema was observed in all patients, lasting an average of 4.5 weeks (range, 3–12 weeks) after single-pass CO2 laser treatment and 3.6 weeks (range, 3–14 weeks) after long-pulsed Er:YAG laser treatment. Hyperpigmentation was seen in 46% of patients treated with single-pass CO2 and 42% of patients treated with the long-pulsed Er:YAG laser (average duration of 12.7 and 11.4 weeks, respectively) (Figures 1A,B and 2A,B). The majority of patients experiencing postinflammatory hyperpigmentation had darker skin tones (skin phototypes III–V); however, nearly 40% of patients with skin phototype II also hyperpigmented. Mild acne occurred in 12 patients (5 after CO2 and 7 after Er:YAG) during the first postoperative week, presumably because of the use of occlusive ointment. All cases of acne responded without recurrence to oral minocycline (75 mg twice daily for 1 week). Seven patients (three after CO2 and four after Er:YAG) developed transient milia requiring no intervention. Dermatitis was noted in five patients (one after CO2 and four after Er:YAG) and responded to mild topical corticosteroid cream.
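The incidence figures quoted in this section (and tabulated in Table 2) are simple count-over-cohort percentages, with 50 patients per treatment arm. A minimal sketch recomputing them; the counts are taken from the Results text and Table 2, while the function and variable names are ours, purely illustrative:

```python
# Side-effect counts per treatment arm, as reported in the Results section.
counts = {
    "One-pass CO2": {"hyperpigmentation": 23, "acne": 5, "milia": 3,
                     "dermatitis": 1, "infection": 4},
    "Multipass Er:YAG": {"hyperpigmentation": 21, "acne": 7, "milia": 4,
                         "dermatitis": 4, "infection": 5},
}
N = 50  # patients per arm

def incidence(count: int, n: int = N) -> float:
    """Incidence as a percentage of the cohort (count / n * 100)."""
    return 100.0 * count / n

for arm, events in counts.items():
    summary = ", ".join(f"{name} {incidence(c):.0f}%" for name, c in events.items())
    print(f"{arm}: {summary}")
```

Running this reproduces the percentages in Table 2, e.g. hyperpigmentation in 46% of the CO2 arm and 42% of the Er:YAG arm.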
No cases of herpetic or fungal infections were encountered; however, nine patients (four after CO2 and five after Er:YAG) experienced localized superficial bacterial infections that fully resolved with oral ciprofloxacin (500 mg twice daily for 5 days). No hypopigmentation or hypertrophic scarring was observed in any study patient throughout the 12-month study period. Patient satisfaction surveys revealed good to excellent ratings in 85% of the patients after Er:YAG laser skin resurfacing and in 87% of the patients after single-pass CO2 laser treatment.

Table 2. Postoperative Healing Time and Side Effect/Complication Rates

Procedure          N   Re-epithelialization (avg days)  Erythema (avg weeks)  Hyperpigmentation (n (%); avg weeks)  Hypopigmentation  Acne     Milia   Dermatitis  Infection  Scar
One-pass CO2       50  5.5                              4.5                   23 (46%; 12.7)                        0                 5 (10%)  3 (6%)  1 (2%)      4 (8%)     0
Multipass Er:YAG   50  5.1                              3.6                   21 (42%; 11.4)                        0                 7 (14%)  4 (8%)  4 (8%)      5 (10%)    0

Figure 1. (A) Hyperpigmentation seen 4 weeks after single-pass CO2 laser treatment in a patient with skin phototype III. (B) Hyperpigmentation resolved 13 weeks postoperatively.
Figure 2. (A) Hyperpigmentation observed 3 weeks after multipass Er:YAG laser resurfacing (skin phototype III). (B) Complete normalization of skin pigmentation 11 weeks postoperatively.

Discussion

Although the demand for ablative laser skin resurfacing procedures has been in recent decline because of the development of nonablative laser technology and concerns regarding postoperative morbidity, few modalities can rival the impressive clinical results that ablative lasers can achieve.6,26 Less invasive CO2 laser resurfacing techniques and Er:YAG laser technology have been developed to reduce the postoperative morbidity associated with traditional multiple-pass CO2 laser resurfacing.
Although these techniques have gained widespread acceptance among cutaneous laser surgeons, studies comparing their long-term side effects and complications are limited. Ruiz-Esparza and Gomez20 evaluated 15 patients after one-pass CO2 laser skin resurfacing for a follow-up period of 18 months. All patients were re-epithelialized by 7 days, and continued clinical improvement of rhytides was observed throughout the length of the study. No cases of scarring or persistent dyspigmentation were reported. Ross et al.27 evaluated 13 patients over a 6-month period following single-pass CO2 laser resurfacing on one side of the face and multiple-pass, short-pulsed Er:YAG laser resurfacing on the contralateral side. Their histologic results demonstrated that when CO2 and Er:YAG lasers produce equal levels of thermal destruction, equivalent healing and clinical improvement are effected. Delayed-onset permanent hypopigmentation, a serious complication that has been observed several months after multiple-pass CO2 laser skin resurfacing, has not yet been seen following single-pass treatment. Although no incidences of hypopigmentation occurred in this study population at the 1-year follow-up evaluation, the frequency of hypopigmentation following modulated Er:YAG laser skin resurfacing remains unknown. To date, only three cases of hypopigmentation following modulated Er:YAG laser skin resurfacing have been reported.18,28,29 Because it is possible for hypopigmentation to present several years postoperatively, additional studies are necessary to assess its true incidence after either single-pass CO2 or modulated Er:YAG laser skin resurfacing.

Conclusion

Single-pass CO2 laser resurfacing has a comparable postoperative period and complication profile to that of multiple-pass, long-pulsed Er:YAG laser resurfacing, even in patients with dark skin tones.
Thus, the CO2 laser can still be employed when invasive skin resurfacing is indicated, effecting relatively few side effects and complications (compared with multipass CO2 procedures). Reliable comparisons of clinical improvement between modalities in a retrospective review are tenuous at best, as digital photography was not standardized for all of the patients studied. As such, a comparison of clinical improvement between the two systems was not reported herein. The high degree of patient satisfaction reported after treatment with each of the two systems, however, indicates an equivalent clinical effect. Clearly, additional long-term comparison studies between one-pass CO2 and modulated Er:YAG laser skin resurfacing are warranted to delineate fully the advantages and disadvantages of each technique. Continued research and advances in ablative technology should further enhance the ability to achieve minimal-risk wrinkle or scar effacement.

References

1. Alster TS, Garg S. Treatment of facial rhytides with a high-energy pulsed CO2 laser. Plast Reconstr Surg 1996;98:791–4.
2. Alster TS, Kauvar ANB, Geronemus RG. Histology of high-energy pulsed CO2 laser resurfacing. Semin Cutan Med Surg 1996;15:189–93.
3. Fitzpatrick RE. Maximizing benefits and minimizing risk with CO2 laser resurfacing. Dermatol Clin 2002;20:77–86.
4. Sapijaszko MJA, Zachary CB. Er:YAG laser skin resurfacing. Dermatol Clin 2002;20:87–96.
5. Anderson RR, Parrish JA. Selective photothermolysis: precise microsurgery by selective absorption of pulsed radiation. Science 1983;220:524–7.
6. Alster TS. Cutaneous resurfacing with CO2 and erbium:YAG lasers: preoperative, intraoperative, and post-operative considerations. Plast Reconstr Surg 1999;103:619–32.
7. Ross EV, McKinlay JR, Anderson RR. Why does carbon dioxide resurfacing work? A review. Arch Dermatol 1999;135:444–54.
8. Smith KS, Skelton HG, Graham JS, et al.
Depth of morphologic skin damage and viability after one, two and three passes of a high-energy, short-pulse CO2 laser (TruPulse) in pig skin. Dermatol Surg 1997;37:204–10.
9. Alster TS, Nanni CA, Williams CM. Comparison of four carbon dioxide resurfacing lasers: a clinical and histopathologic evaluation. Dermatol Surg 1999;25:153–9.
10. Apfelberg DB. Ultrapulse carbon dioxide laser with CPG scanner for full-face resurfacing of rhytides, photoaging, and acne scars. Plast Reconstr Surg 1997;99:1817–25.
11. Fitzpatrick RE, Goldman MP, Satur NM, et al. Pulsed carbon dioxide laser resurfacing of photoaged facial skin. Arch Dermatol 1996;132:395–402.
12. Alster TS, West TB. Resurfacing atrophic facial scars with a high-energy, pulsed carbon dioxide laser. Dermatol Surg 1996;22:151–5.
13. Waldorf HA, Kauvar ANB, Geronemus RG. Skin resurfacing of fine to deep rhytides using a char-free carbon dioxide laser in 47 patients. Dermatol Surg 1995;21:940–6.
14. Weinstein C, Alster TS. Skin resurfacing with high-energy, pulsed carbon dioxide lasers. In: Alster TS, Apfelberg DB, eds. Cosmetic Laser Surgery. New York: John Wiley & Sons, 1996:9–28.
15. Tanzi EL, Alster TS. Treatment of atrophic facial scars with a dual-mode erbium:YAG laser. Dermatol Surg 2002;28:551–5.
16. Nanni CA, Alster TS. Complications of CO2 laser resurfacing: an evaluation of 500 patients. Dermatol Surg 1998;24:315–20.
17. Bernstein LJ, Kauvar ANB, Grossman MC, et al. The short and long term side effects of carbon dioxide laser resurfacing. Dermatol Surg 1997;23:519–25.
18. Zachary CB. Modulating the Er:YAG laser. Lasers Surg Med 2002;26:223–6.
19. David L, Ruiz-Esparza J. Fast healing after laser skin resurfacing: the minimal mechanical trauma technique. Dermatol Surg 1997;23:359–61.
20. Ruiz-Esparza J, Gomez JMB. Long-term effects of one general pass laser resurfacing: a look at dermal tightening and skin quality. Dermatol Surg 1999;25:169–74.
21. Khosh MM, Larrabee WF, Smoller B.
Safety and efficacy of high fluence CO2 laser skin resurfacing with a single pass. J Cutan Laser Ther 1999;1:37–40.
22. Walsh JT, Flotte TJ, Deutsch TF. Er:YAG laser ablation of tissue: effect of pulse duration and tissue type on thermal damage. Lasers Surg Med 1989;9:314–26.
23. Alster TS. Clinical and histological evaluation of six erbium:YAG lasers for cutaneous resurfacing. Lasers Surg Med 1999;24:87–92.
24. Bass LS. Erbium:YAG laser skin resurfacing: preliminary clinical evaluation. Ann Plast Surg 1998;40:328–34.
25. Khatri KA, Ross V, Grevelink JM, et al. Comparison of erbium:YAG and carbon dioxide lasers in resurfacing of facial rhytides. Arch Dermatol 1999;135:391–7.
26. Tanzi EL, Alster TS. Side effects and complications of variable-pulsed erbium:YAG laser skin resurfacing: extended experience with 50 patients. Plast Reconstr Surg, in press.
27. Alster TS, Lupton JR. Update of dermatologic laser surgery: new trends for 2002. Cosmet Dermatol 2002;15:33–6.
28. Ross VE, Miller C, Meehan K, et al. One-pass CO2 versus multiple-pass Er:YAG laser resurfacing in the treatment of rhytides: a comparison side-by-side study of pulsed CO2 and Er:YAG lasers. Dermatol Surg 2001;27:709–15.
29. Jeong JT, Kye YC. Resurfacing of pitted facial acne scars with a long-pulsed Er:YAG laser. Dermatol Surg 2001;27:107–10.

work_o6lv3trgcjhx3mrg5qowpkb4ti ----

From PhotoWork to PhotoUse: exploring personal digital photo activities

Citation for published version (APA): Broekhuijsen, M. J., van den Hoven, E. A. W. H., & Markopoulos, P. (2017). From PhotoWork to PhotoUse: exploring personal digital photo activities. Behaviour & Information Technology, 36(7), 754-767.
DOI: 10.1080/0144929X.2017.1288266
Document status and date: Published: 03/07/2017
Document Version: Publisher's PDF, also known as Version of Record (includes final page, issue and volume numbers)
Behaviour & Information Technology, ISSN: 0144-929X (Print) 1362-3001 (Online)
© 2017 The Author(s). Published by Informa UK Limited, trading as Taylor & Francis Group. Published online: 15 Feb 2017.
From PhotoWork to PhotoUse: exploring personal digital photo activities

Mendel Broekhuijsen a,b, Elise van den Hoven a,b,*,** and Panos Markopoulos a,b
a Department of Industrial Design, Eindhoven University of Technology, Eindhoven, Netherlands; b Faculty of Design, Architecture and Building, School of Design, University of Technology Sydney, Sydney, Australia

ABSTRACT
People accumulate large collections of digital photos, which they use for individual, social, and utilitarian purposes. In order to provide suitable technologies for enjoying our expanding photo collections, it is essential to understand how and to what purpose these collections are used. Contextual interviews with 12 participants in their homes explored the use of digital photos, incorporating new photo activities that are offered by new technologies.
Based on the qualitative analysis of the collected data, we give an overview of current photo activities, which we term PhotoUse. We introduce a model of PhotoUse, which emphasises the purpose of photo activities rather than the tools to support them. We argue for the use of our model to design tools to support the user's individual and social goals pertaining to PhotoUse.

ARTICLE HISTORY Received 10 August 2015; Accepted 25 January 2017
KEYWORDS Digital photography; PhotoUse; contextual interviews; design research; interaction design

1. Introduction

Nowadays most of us deal with unprecedented quantities of personal media, such as photographs, messages, status updates, and e-mails. Some of these media we create ourselves, and some we receive from others or result from our use of new technologies. The study presented in this paper describes the use of personal digital photos, one of the most prevalent records people keep of autobiographical content. Photos can be considered digital objects as long as they exist in digital form (Kirk et al. 2009), for example, on camera SD cards, computer hard drives, or cloud storage. Before the introduction of digital photography, the size of an individual's photo collection was in the order of hundreds, and now it is in the order of tens of thousands. This changes not only the nature of the tools needed for using them, but also the significance and the way in which such collections may support autobiographical memory processes. People use their personal digital photo collections regularly, for example, to browse them, organise them, or share them. Our work builds on the seminal PhotoWork paper by Kirk et al. (2006) in which the activities that lead towards sharing of digital photos are described in a model. In our research, we are looking at activities that involve the personal use of digital photos, to identify opportunities for the design of novel supportive tools.
The majority of domestic photo collections contain photos related to holidays, birthdays, and other personal events. But other kinds of photos end up in our collection as well, such as saved Internet images, screenshots, snapshots of receipts, and boarding passes. In the remainder of this paper, we refer to all the activities that involve the use of any of these digital photos as photo activities, covering the moment that a photo is captured or collected, to the moment it is used for, for example, formative, communicative, experiential, or remembering purposes (van Dijck 2008). In our research, we are interested in the relation between media and remembering, so the remembering purpose is of special interest to us. The photos that we capture or collect for our personal collections often acquire personal value as external representations that can cue autobiographical remembering (Hoven and Eggen 2014). The autobiographical value of photos can support our interactions with others, for example, telling the story of one's holiday while viewing a slideshow of preselected photos. Great leaps have been made in capturing moments and experiences and creating digital records (e.g. see Frohlich and Tallyn 1999; Hodges et al. 2006). As a result, we have too many photos, and people lack the time, the tools, and the patience to organise them effectively (Bergman et al. 2009; Kirk et al. 2006), which hinders them from fully enjoying their photo collection.

This is an Open Access article distributed under the terms of the Creative Commons Attribution-NonCommercial-NoDerivatives License (http://creativecommons.org/Licenses/by-nc-nd/4.0/), which permits non-commercial re-use, distribution, and reproduction in any medium, provided the original work is properly cited, and is not altered, transformed, or built upon in any way.
The challenge emerging pertains to how to help people to better curate their collections of digital photos. Curation in the context of digital media involves deciding on what to keep and in what format and structure to preserve it, how the information can be retrieved, and deciding on the methods of capturing, (re)presentation, and reproduction (Van House and Churchill 2008). This paper describes an interview study involving 12 participants that examined how people use photo collections at home, with the aim to identify opportunities for better supporting photo activities for remembering purposes, through interactive technology. In the next section, we review related work, then we present the aims and methods of the study, and summarise its results. We introduce the term PhotoUse to make a distinction between the purposive use of photographic material, and the work that is associated with, for example, managing, organising, and retrieving photos. We present our model of PhotoUse, with the aim to illustrate our holistic view on the personal and social use of digital photos, and discuss the implications for design and research in this field.

CONTACT Mendel Broekhuijsen m.j.broekhuijsen@tue.nl
*Present address: DJCAD, University of Dundee, Dundee, UK.
**Present address: ARC Centre of Excellence in Cognition and its Disorders, Sydney, Australia.

2. Related work

In this section we discuss related work within the fields of Human–Computer Interaction, information science, and psychology, surrounding digital photo activities.

2.1. Photo technology

In 1998, Frohlich et al. provided an inventory of activities that are related to the use of digital photos for sharing. The hardware and software tools that are required to facilitate these activities were described as PhotoWare (Frohlich et al. 2002). There are many examples reported in the related literature of devices designed to enable people to display and share their photographs; for example, PhotoBox, a wooden box that slowly prints digital photos to display them (Odom et al. 2012), and Shoebox, a combination of storage and display (Banks and Sellen 2009). Sharing and commenting on digital photos used to be a complicated task (Frohlich et al. 2002), but many opportunities have been addressed by commercial technologies that enable us to share and comment on photographs instantly (e.g. Flickr, Instagram, Dropbox, Facebook, and Whatsapp). Recent example tools for sharing and storytelling (e.g. Cueb (Golsteijn and Hoven 2013) and 4 Photos (O'Hara et al. 2012)) demonstrate the influence of such technologies on our communication, with the new possibilities to share experiences and activities. Notable examples of research works that deliberately support autobiographical remembering include Living Memory Box (Stevens et al. 2003) and The Family Archive device (Kirk et al. 2010). These designs incorporate the contextual and chronological information to display a family archive. They also address multiple users, opportunities for storytelling, and the need for curation, which is important when designing for media-supported remembering (Van House and Churchill 2008).
Research into browsing, sharing, or viewing photographs is usually not concerned with curation, although selecting the required subset of the photo collection is a crucial prerequisite for successful viewing and sharing (Whittaker, Bergman, and Clough 2010). Moreover, research into photo browsing typically approaches photo collections as databases rather than cues for remembering, thereby ignoring important activities such as reminiscing and storytelling, and instead focusing on media retrieval tasks. However, from the perspective of photos as memory cues, the experience of remembering is more important than the accuracy of retrieval. Most applications have made use of new interaction techniques that are supported by smartphones or multi-touch surfaces, to make pleasurable and efficient manipulation and access to photo collections. However, this does not suffice to address the difficulties of organising or retrieving of photos in its totality.

2.2. Photo activities

Studies that describe how people use their collection have focussed on managing collections (e.g. see Rodden and Wood 2003), tools for efficient search, and retrieval (e.g. see Whittaker, Bergman, and Clough 2010). They identify design opportunities to support these activities with new tools. Kirk and colleagues (Kirk et al. 2006) introduced a descriptive flow-chart model covering the most common interactions with digital photographs between capturing and sharing, which they termed PhotoWork. This model can be used to develop and assess new digital photo management tools. It identifies three stages between capturing and sharing photos:

- Pre-download stage: just after capturing a photo; includes triaging on the capturing device;
- At-download stage: when transferring photos to the computer; includes triaging on computer, editing, organising, filing, and backup;
- Pre-share stage: work that is necessary before being able to share the photo; includes sorting, selecting a subset, simple editing, copying, printing, and sending.

The PhotoWork model provides a linear, waterfall-like description of the life cycle of a digital photo file, with a clear progression and separation between capturing, organising and sharing of photos. Figure 1 shows the PhotoWork life cycle by Kirk et al. as it appeared in Banks et al. (2012). Kirk et al. (2006) suggested that the activities listed in the PhotoWork model consume all the time that people are prepared to spend on their growing media collection. Several designs (e.g. Hilliges, Baur, and Butz 2007) have been based on the PhotoWork model, exploring technological solutions within the stage-based framework it provides. However, the steps described appear to be those that photo technology necessitates rather than how people want to do things. As a model it also draws attention to user tasks, such as curation, and designing for their efficient performance rather than opportunities to design for an enhanced user experience.

2.3. Photo purpose

To further underline the importance of research into digital photo use, let us consider the value of photos for different purposes. In the analogue era, personal photos mainly served autobiographical remembering, and archiving the family history (e.g. van Dijck 2008; Sarvas and Frohlich 2011). Photos can serve as cues for the memories of the events in our own lives: our autobiographical memory (AM) (see e.g. Conway and Pleydell-Pearce 2000; Hoven and Eggen 2014; Tulving 2007). Memories themselves cannot be stored as a digital or other record, but are reconstructed every time they are recalled (Guenther 1998). What can be stored are the external items that can cue the reconstruction of memories (Hoven and Eggen 2009; Sellen and Whittaker 2010).
This implies that curation of memory cues is an important aspect of media-supported autobiographical remembering. In the last two decades, the purposes of photos have changed along with the advancements of digital photography and camera phone use. Especially in the younger generations, the use of photos for communication and identity formation is more prevalent than using photos for remembering purposes (van Dijck 2008). But in line with the arguments of van Dijck, we believe that the remembering purpose of digital photos is still very important, and helps to determine the value of the photos. Petrelli and Whittaker (2010) argue that digital items are valuable, though compared to physical items they are not very frequently accessed, because they are hidden away in our computers. However, digital photo collections are increasingly more valuable for cueing our memory.

2.4. Photo curation

Although we are putting a lot of deliberate effort into building photo collections that portray our lives, poor organisation makes it harder to find the photos (Whittaker, Bergman, and Clough 2010). Whittaker, Bergman, and Clough (2010) reported that software tools intended to support retrieval activities fail to aid the process of photo retrieval activities in families, and found that their participants were not successful in almost 40% of photo retrieval tasks (although long-term retrieval is the major motivation for families to capture the photos). Other issues concerned remembering the storage location of items, and the amount of time it took participants to find items in their collection (up to 4 minutes), mainly caused by the large number of photographs to search through. Whittaker, Bergman, and Clough (2010) concluded that there is a need for new tools to filter, evaluate, maintain, and share photo collections to enjoy their value. Despite their intentions to get organised (Frohlich et al.
2002), on most occasions people lack the time and motivation to properly curate their personal photo collections. Existing studies emphasise that ‘work needs to be done’ when it comes to organising photos, illustrating the understanding that curation is considered unsatisfying work, despite the promise of a well-organised collection (Frohlich et al. 2002). An example of the research efforts to specifically address the curation issue of digital photos is Pearl (Jansen, Hoven, and Frohlich 2014), which projects multiple photos on the wall, allowing participants to select, favour, and organise the content while viewing. Despite these efforts, there is as yet insufficient understanding of how to design photo retrieval solutions that reduce the workload of curation and focus on pleasurable photo activities.

Figure 1. Model of PhotoWork by Kirk et al. (2006), as it appeared in Banks et al. (2012). Re-used with permission from ACM. (756 M. BROEKHUIJSEN ET AL.)

3. Field study

The aim of the study was to explore photo activities in the home environment, identifying what kind of photography-related activities participants engage in, and in what fashion they engage with their collection. To make this inventory, we conducted in-depth contextual interviews in the homes of the participants and analysed the results using open and selective coding (Corbin and Strauss 2008). To create an overview of digital photo activities, and of the opportunities for supportive tools, we interviewed 12 participants about their use of personal digital photography. The study described in this paper focused primarily on the home environment, because at home people typically have access to their entire personal collection and the opportunity to demonstrate their usual practices to the researchers.
We explored all possible photo activities of our participants in this context, including all existing technologies used as retrieval devices, such as smartphones, laptops, smart televisions, and tablets.

3.1. Participants

Twelve participants, 6 male and 6 female, were recruited based on their photo use and social situation. The participants differed in age, profession, demographics, social situation, and interest in digital photography, to provide a broad overview of personal photo usage. Participants were selected based on whether they were well acquainted with digital photography, whether they owned a digital camera and/or a smartphone, whether they lived with a partner (and/or children), and whether they owned a collection of at least 2,000 digital photos. The latter requirement was formulated to make sure that the participants would have experience with curating digital media. The selected participants were mentally healthy, well-educated Dutch people between the ages of 18 and 69 (mean = 39.7). They owned between 2,000 and 300,000 photographs (mean = 42,083). The participants were socially active, both online (social media) and ‘off-line’ (friends, sports clubs, societies, etc.). Three participants were not living with a partner at the time, but shared an apartment with friends; eight of the participants lived with their partner, of whom four had children. The youngest participant still lived with his parents.

3.2. Procedure

Participants took part in a semi-structured interview, lasting 1–1.5 hours. The interviews were held in the homes of the participants, to give us a clear understanding of the context, the practices and tools, and the issues that the participants encountered. In the first part, the interview focused on understanding how participants use photos. The participants were asked to talk about their photo activities, how often they use their photo collection, and how much time they spend on it.
The explorative nature of this research required an open interview approach, in which the participants were asked to explain as many photo activities as possible in detail. When usual practices were mentioned, the participants were invited to demonstrate a typical photo activity by showing a few digital photographs, using the tools and procedures they would normally use to browse and view their photos. After the demonstration, participants were asked to talk about the purpose of their photo activities, with questions such as ‘Why do you want to make a printed photo album?’ By elaborating on the purpose of using photos, participants were invited to reflect on whether current activities facilitated that purpose adequately, and to think about possible improvements. All the interviews were audio recorded and transcribed in full. The photo activities that participants demonstrated were captured on video, for later reference. At the end, the participants were instructed to estimate the size of their entire digital photo collection, across devices (external hard drive, laptop, PC, Mac, smartphone, and tablet). The estimation was based on the file counters in the software they used (Picasa (for web), Aperture, iPhoto, and native photo applications on smartphones and tablets), or on the file count property of the appropriate folders in Windows Explorer. The estimation was made based on the total count, rounded to the nearest thousand.

3.3. Analysis

The interview transcripts were analysed with a focus on identifying the detailed activities described by the participants, and, where possible, identifying their purpose. This inductive analysis was based on the method of open coding, as described in Corbin and Strauss (2008). The qualitative data analysis was aimed at identifying interesting patterns in the use of photo collections and problems that participants experience with their photo activities.
To find the codes for the categories, the open coding procedure followed the operational approach discussed in Corbin and Strauss (2008). We were interested in generating theory and hypotheses about what motivates people to engage in photo activities, by relating behaviours to needs and motives, rather than describing activities at a phenomenological level. After analysis, the important quotes were translated from Dutch to English.

4. Results

In this section we share the most important observations that emerged from the analysis.

4.1. Photo activities

The participants provided descriptions of 171 photo activities. In this section we describe the activities, giving a description and examples from the participants. When describing the photo activities, the differences between them are important. For example, the difference between the activities managing and organising is very subtle, and the distinction for labelling was made on a technical level: managing in our definition includes everything that is done with the digital files, whereas organising is done using the metadata of the files. Another example is browsing vs. sharing, which in many cases occur simultaneously; sharing a story with friends usually involves browsing, and so these labels were often applied together. Here the distinction was made based on whether the focus of the activity was on the social aspects (labelled sharing) or on memory retrieval (labelled browsing). Based on these characteristics, the comments of the participants regarding their photo activities were divided into 13 activity categories (see Table 1 for the two-level classification) and then into four activity types: accumulating, curating, retrieving, and appropriating.
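The two-level classification can be sketched as a simple lookup, using the 13 categories and 4 types listed in Table 1. The code below is an illustrative sketch only; the dictionary and helper function are our own naming for exposition, not tooling used in the study:

```python
# Illustrative sketch (not part of the study's tooling): the two-level
# classification of photo activities, grouping the 13 activity categories
# under the 4 activity types from Table 1.
ACTIVITY_TYPES = {
    "accumulating": {"capturing", "collecting"},
    "curating": {"triaging", "organising", "managing", "editing"},
    "retrieving": {"browsing", "viewing", "searching"},
    "appropriating": {"sharing", "printing", "tinkering", "collaging"},
}

def activity_type(category: str) -> str:
    """Return the activity type for one of the 13 activity categories."""
    for type_name, categories in ACTIVITY_TYPES.items():
        if category.lower() in categories:
            return type_name
    raise ValueError(f"unknown activity category: {category!r}")
```

For example, `activity_type("Triaging")` returns `"curating"`; composite activities, such as browsing that ends in sharing, span more than one type and are therefore coded as chains of categories rather than a single label.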
The accumulating type contains all the activities that are responsible for expanding the photo collection; curating is done in order to manage and edit the existing collection; the retrieving type is the largest and contains all the activities that are done in order to find, browse, or view photos in an existing collection, which is needed for most other activities; appropriating consists of all the activities that are done in order to share or show photos, either digitally or physically. An overview of the activity categories and types can be found in Table 1.

4.1.1. Accumulating

People engage in activities that are part of the process of expanding a personal photo collection, starting with capturing the photos. The activities that are part of accumulating photos are the following:

- Capturing: taking pictures; on-device quality triage to determine whether to retake the picture
- Collecting: adding pictures to your collection that you did not capture yourself

On-device triaging is done as part of capturing: camera phones and digital cameras make it possible to immediately assess the picture quality, and to discard and retake a shot in case the quality is not good enough:

Sometimes [I do the selection] on the camera […] if it is clear that the picture is not usable. – P10 (female; age 34; 3,000 photos)

The other accumulative activity is collecting. It involves getting images from friends or family members, images from the Internet, scans from newspapers, etc. P09 explained the trouble she had with getting images into her collection that other people took:

My brother, my father and my husband all take pictures which need to be added. I curse them for throwing away the EXIF data […] [because] I have to add this manually […] and that takes way more time.
– P09 (female; age 38; 28,000 photos)

The collecting activity becomes important when broadening the scope of ‘media’ beyond photographs, which poses an additional challenge: the memory cues that we need to curate, but that are not created by ourselves, may at some point in time include media that we do not yet consider autobiographical digital media, for example, public transport timestamps or electronic shopping receipts. These media need to be considered in future curation solutions (see Whittaker 2013).

Table 1. Two-level classification of photo activities, based on the descriptions of the participants.

Photo activity type: Photo activity category
Accumulating: Capturing, Collecting
Curating: Triaging, Organising, Managing, Editing
Retrieving: Browsing, Viewing, Searching
Appropriating: Sharing, Printing, Tinkering, Collaging

Notes: Based on their characteristics, the 171 activities from 12 participants were divided into 13 categories, and the categories were then divided into 4 activity types: accumulating, curating, retrieving, and appropriating. Each row lists a photo activity type and the photo activity categories it contains.

4.1.2. Curating

There are many different activities that people engage in to curate their collection. The participants engaged in file managing, organising, triaging, and editing pictures in their collection. The activities are listed below:

- Organising: tagging, moving, categorising, naming, captioning, archiving, and deleting
- Triaging: assessing, selecting for a specific purpose (e.g. sharing, decorating, and presenting)
- Managing: filing, backup, downloading, and uploading
- Editing: retouching, cropping, combining, correcting, and changing

Some of the participants took the organising of photos very seriously:

I have several scripts [on the computer] that rename the photos based on year, month, day, minute, second. And then they are automatically moved to a folder, which is imported into Aperture, per year. And recently I sometimes create a smart folder with a specific start and end date. – P09 (female; age 38; 28,000 photos)

Triaging is reported to be one of the most burdensome parts of curating, unless it is done when the participants require a specific subset of their collection:

Only if I want to start a new project or a photo album or a collage […] but otherwise I would not browse through [my photos]. – P03 (female; age 30; 18,000 photos)

It depends […] especially after traveling, everything goes onto the computer, and then there will be a round of selection, and a selection round for the photos that I want to be able to see more often, which I put into a Dropbox folder to be able to view them on another computer. And [some go] into the shared folder with my boyfriend. So then I am actually selecting three times. – P10 (female; age 34; 3,000 photos)

When editing, multiple copies are generated, and thus the collection expands. For example, P01 and P09 had many copies of pictures that they had retouched or changed into black and white, but of which they had kept the originals. Some participants really enjoyed editing:

[after the holiday] I will do some editing, yeah, for a couple of days. – P02 (male; age 29; 25,000 photos)

I love making the picture […] creating a good end result […] is very satisfying. – P08 (male; age 40; 300,000 photos)

Curating is important, because it is key to the success of the rest of the activities. The variety of activities in our study revealed that for different occasions a different subset of the collection is needed:

These [Picasa] folders are to show to other people […] And I sent these photos to the manager of a museum shop, for inspiration. – P07 (female; age 66; 40,000 photos)

After my children were born I made folders for sharing and especially the grandmothers liked that. In Picasa I had a shared folder with a few people […] in which I put the most beautiful photos.
I did that until 2012, and after that I did not have time for it anymore. Which is a shame, because those are the folders that I actually look at. – P09 (female; age 38; 28,000 photos)

This is also in line with the findings of, for example, Odom, Zimmerman, and Forlizzi (2011) that people experience the need to express themselves differently in different situations. The variety of activities that we found seemed to depend on the audience and the context. This form of identity display in different social contexts, established using digital photographs, was also reported by Frohlich et al. (2002).

4.1.3. Retrieving

The retrieving type consists of all the activities that are done in order to interact with the file system:

- Browsing: casual viewing of pictures while interacting with them
- Viewing: passive viewing of slideshows
- Searching: goal-directed retrieving and searching

The participants enjoy browsing, and browsing on mobile devices especially is often done to pass the time:

I guess just on the couch, being bored. Or during boring moments, in the train or something. When you have nothing to do, and are sitting alone. […] I really forget […] many situations you know, and when you see the photos you start to think about it, which brings back the memory. And that is of course nice when you are bored, because then you think about pleasant moments. – P02 (male; age 29; 25,000 photos)

[Browsing happens] on my phone sometimes. When I am thinking ‘I am going to browse through my photos, that’ll be fun’ … actually when I am bored. – P12 (male; age 18; 2,000 photos)

The only moment I look at my photos is when I connect my iPhone to my computer, because then iPhoto opens and then I look at the photos for a while. […] if it is just in front of me, that is more likely to happen than that I think of a specific moment and start to search for that specific photo.
– P03 (female; age 30; 18,000 photos)

Searching was reported to be cumbersome, especially when the participants own large collections and do not access them very often. Many of the activities of other types start with accessing the file system.

4.1.4. Appropriating

With appropriating we mean to cluster the activities that are part of modifying and/or sharing (physical) instances of the digital collection:

- Sharing: remote sharing (online, on social media, or sending postcards), collocated sharing
- Printing: printing photos, a poster, or family albums
- Collaging: making a collage from (printed) photos, making (digital) booklets
- Tinkering: tinkering with printed photos, cutting and pasting printed photos

Printing includes printing selected photographs, and using online services to lay out and print photo albums. P03 explained why she makes printed albums:

You frame the memory; the album is always about a specific moment […] and in an album you can recreate the atmosphere […]. And it has to look nice of course […]. It is also about just ‘owning’, because a beautiful small album will still be the same small album in 100 years’ time […]. – P03 (female; age 30; 18,000 photos)

Most of the participants engage in some form of sharing. Some of the remote sharing involved social media, others used e-mail, while others engaged in collocated sharing.

[after the holiday] I sent some photos to my parents [via email], saying ‘I am home again, here are already ten photos; the rest will follow soon’ – which never happens. – P04 (male; age 28; 2,000 photos)

When we were rebuilding our house, I would often open Aperture […] that was nice, many people were interested to see what [the house] used to look like. – P09 (female; age 38; 28,000 photos)

I brought the booklet that my parents made for my 18th birthday. […] The first few days I went through it once every day. […] I also show it to other people; everyone who comes here.
– P12 (male; age 18; 2,000 photos)

The appropriation activities that were reported by the participants are in many cases linked to activities in the curation category, which illustrates that the participants spend a lot of time selecting and editing a subset in order to share, or otherwise use, the result of their work.

4.2. New technologies: new behaviour

Although most of the activities that we found could be mapped onto the previously mentioned PhotoWork model (Kirk et al. 2006), the temporal sequence appears to be more varied, without a distinct start and finish. Figure 2 illustrates an example of the multithreaded and iterative process of the photo activities that we found: the case where browsing through your photos reminds you of a set of photos of you and your friend, which you decide to retrieve, select (triaging), crop (edit), print, and send (share). A single activity has then moved rapidly through several activity categories. The dotted line in Figure 2 illustrates that the PhotoWork process is one of the possible sequences of use, and that the activities from the PhotoWork model are still present in current practices, but not necessarily as a temporally bounded process.

As illustrated by Figure 2, the constraints of technologies and the resulting practices have changed slightly since the work of Kirk et al. (2006). The specific stages and the strict linear workflow of early digital photo activities seem to arise from the lack of, for example, network connectivity for cameras, and the fact that people used personal computers to store and manage collections.

Figure 2. Overview of all photo activity categories and activity types. In the figure, all the photo activity categories and photo activity types are displayed in a continuous model. The outer circle displays the 4 photo activity types; the inner circle displays the 13 photo activity categories. The solid line, starting from browsing and leading to sharing, illustrates one of the many possible photo activities that can take place. The dotted line describes the workflow of the PhotoWork process (Kirk et al. 2006), which is just one of the possible sequences of use.

New activities that come with the use of new technologies have had an impact on available photo activities: social media platforms have innovated photo sharing, and smart camera phones have changed the way we think about photography, and the frequency and context in which we capture and share photos.

I have grandchildren in Italy, and one of them recently got an iPad, and I got a new washing machine. He […] is crazy about washing machines. So I have a maid […] who takes a photo with her smartphone, and then sends it to me via email, so I can forward the photo to my grandson. – P06 (female; age 69; 7,000 photos)

4.3. Composite photo activities

Our data strongly indicate that the boundary between different photo activities is not clearly defined. We found that participants engaged in multiple photo activities that follow one another, or between which they alternate. As a consequence, it was difficult to determine what participants consider a singular photo activity. For example, the activity sharing was reported as being a single activity, but in fact the actual sharing of a photo appeared to be part of a chain of activities. In these examples, sharing a photo starts with capturing:

In certain weather conditions, for example snow, I also take a photo to share on Facebook, like ‘this is how it was’. – P04 (male; age 28; 2,000 photos)

I take, for example, many photos, which are for Whatsapp – then I send a photo of where I am, what I am looking at or what I am doing. My mother, for example, likes that very much.
– P02 (male; age 29; 25,000 photos)

On many occasions, the participants did not even realise that sharing is part of a chain, as camera phones with Internet access enable faster ways of sharing compared to the previous workflow, which involved getting images from the camera to the computer and sharing them over the Internet using a desktop computer. Examples from the participants include the activity of ‘selecting a photo on a camera phone for sharing on Facebook’ (as demonstrated by P01); ‘sharing a photo directly from the camera phone with Whatsapp Messenger’ (P05); and ‘making a postcard for a friend, using the in-app printing service from Instagram on a smartphone’ (P03). Figure 3 illustrates all the activities that were explained by the participants as a single activity, but that in reality consist of multiple activities.

4.4. Purpose of photo activities

The purposes of the activities could be identified in 163 of the 171 activities. We identified the purposes by analysing the complete description of each photo activity. Eight activities were described in general terms or without context, so the purpose of these activities was not clear. The findings show that people engage with their photos with the aim to serve a personal, social, or utilitarian purpose. The social purpose was the main purpose of using photos, but individual use of photos, for example, reminiscing, was also reported as an important motivator for photo activities. Here we make a distinction between individual and utilitarian purposes based on the data: many photo libraries contain a combination of leisure photos as well as practical photos, for example, photographed receipts and screenshots of online purchases. All purposes resulting in information-driven retrieval, as well as content-independent file management, are labelled as having a utilitarian purpose instead of an individual one, although these purposes are usually individual.
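The coding scheme described above can be sketched as a small record type: each of the 171 described activities is a chain of activity categories carrying at most one purpose label. The record type, its field names, and the example instance below are hypothetical, introduced purely to illustrate the scheme; only the three purpose labels and the snow-photo example (P04) come from the study:

```python
# Hypothetical record for one coded photo activity. The three purpose
# labels come from the analysis; the data structure itself is illustrative.
from dataclasses import dataclass
from typing import List, Optional

PURPOSES = {"social", "individual", "utilitarian"}

@dataclass
class CodedActivity:
    categories: List[str]   # chain of activity categories, e.g. ["capturing", "sharing"]
    purpose: Optional[str]  # None for the 8 activities whose purpose was unclear

    def __post_init__(self):
        if self.purpose is not None and self.purpose not in PURPOSES:
            raise ValueError(f"unknown purpose label: {self.purpose!r}")

# Example based on P04: capturing a snow photo in order to share it on Facebook,
# a composite activity coded with a social purpose.
snow_photo = CodedActivity(categories=["capturing", "sharing"], purpose="social")
```

Representing an activity as a chain rather than a single category mirrors the composite activities discussed in Section 4.3, where one reported ‘activity’ moves through several categories before reaching its purpose.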
The purposes of photo activities, according to the participant data, include the following:

- Social purpose: for example, storytelling, viewing together with others, and shared reminiscing
- Individual purpose: for example, individual reminiscing, thinking about past events; browsing for enjoyment; viewing slideshows; and creating collages for decoration
- Utilitarian purpose: for example, optimising the organisation as part of a hobby or technical interest; searching for specific information

Figure 3. Illustration of all the photo activity categories that are linked to each other, derived from the description of 171 photo activities. The outer circle displays the 4 photo activity types; the inner circle displays the 13 photo activity categories. Each line describes connected activities, from the examples given by the participants. Sequences depend on the context, and so there is no predetermined order, starting point, or end.

Some examples revealed several photo activities with the motivation to reminisce. For example, P07 described a combination of editing, triaging, collecting, organising, and viewing:

I browse through [my] digital diary often, to see what I did a few years ago. Sometimes I delete something […]. In there is [described] what I have done […] and how I liked it. – P07 (female; age 66; 40,000 photos)

The following example not only describes individual activities to reach a social goal, but also illustrates efforts of self-presentation:

I put every now and then something on Facebook […] when I went somewhere together with other people […]. I like […] tagging people, and the comments that follow. But also my background and profile pictures are from my holidays, to show […] that I have been to a nice place.
– P04 (male; age 28; 2,000 photos)

As an example of storytelling, P01 described the use of photos to complement a story he shared with friends:

It can complement the conversation, for example the picture of a ring, when I just proposed to my girlfriend and I was talking about it. – P01 (male; age 33; 50,000 photos)

Another participant triaged his collection specifically to support telling a story about his holiday:

I thought it would be nice to have a selection with me all the time. So I would be able to show it to my grandma or my friends. Because I forget many things, I can tell a better story if I have the photos in front of me. – P02 (male; age 29; 25,000 photos)

In an example of reminiscing, one participant indicated that the location information on his camera phone supported his memory:

Sometimes I just browse through them […] and review what I have been doing […]. I like the GPS tracker, because now I have all these pins [showing] the places I visit. – P02 (male; age 29; 25,000 photos)

People are oriented towards the purpose of engaging with their photo collection, and the participants described many activities that illustrated purposes such as ‘sharing an experience’, ‘revitalising friendships’, and ‘browsing to fight boredom’.

4.5. PhotoUse

To make a distinction between the purposive use of photographic material and the work of photo accumulating, curating, retrieving, and appropriating, we suggest the term PhotoUse. In line with the suggestion of Kirk et al. (2006) that searching and browsing tools should perhaps be part of other activities within PhotoWork, we believe that designers can benefit from focusing on design opportunities that enhance the experience of burdensome photo work, by directing their designs towards contributing to one of the purposes of the photo activities.
To be able to use the activities that we found in a constructive way for the design and assessment of tools to support purposive PhotoUse, and to emphasise the user experience, we propose an alternative way of visualising the photo activities. By analysing the activities and demonstrations of the 12 participants in our study, we were able to conceptualise a model that gives an overview of PhotoUse. The model shifts the focus away from the tasks and work involved, and their temporal ordering, to the instrumental purpose such work serves, identifying the different ways in which subordinate activities relate to each other to serve different needs. The PhotoUse model can be found in Figure 4.

The PhotoUse model contains all the activities that together describe PhotoUse. To illustrate our focus on the purposes that motivate behaviour, the purposes are placed around the photo activities. As an example, one can think of capturing a photo with the motivation of future reminiscing, or capturing a photo simply to share an activity via social media.

Figure 4. Model of PhotoUse. The outer circle displays the social, individual, and utilitarian purposes that motivate the photo activities. The middle circle displays the 4 photo activity types; the inner circle displays the 13 photo activity categories. All the activity categories join in the centre of the model, together forming the whole of PhotoUse.

We believe that this abstracted model can cater for a holistic approach towards photo activities, because it illustrates a process without a clear beginning or end. Furthermore, it illustrates the importance of the user needs that motivate behaviour. The model might serve designers who are developing solutions for purposive PhotoUse, and researchers who focus on media-supported social activities, such as storytelling and reminiscing. More detailed implications of the PhotoUse model can be found in the next section.

5. Implications for design

There are plenty of technological offerings aiming to support capturing, retrieving, and sharing. Many technological solutions enable people to reminisce and browse photos, but since those solutions are not specifically designed for mnemonic purposes, they are in many cases less suitable than the participants would like. Most tools are developed with a focus on productivity, and not on the user experience. We believe, based on the findings, that people want to move freely between different photo activities, in varying order. The tools that they use should support and facilitate such freedom. The PhotoUse model can be used in the design process to keep an overview of all the photo activities, thus making sure that challenges (such as retrieving, triaging, and searching) are not addressed in isolation, but are considered within the context of a complex chain of activities. In other words, we see opportunities for interactive tools that support the purposes of PhotoUse while people engage in photo activities, such as photo curating. The following recommendations are intended to contribute to the design and assessment of such tools, with a specific focus on the autobiographical purposes of PhotoUse, either individual (reminiscing) or shared (storytelling).

5.1. Purposive PhotoUse

We encourage designers, especially of photo curation tools, to emphasise that less enjoyable activities can be part of the PhotoUse activities that people enjoy, and do not have to be designed and perceived as tasks that have to be done. From the interviews, we got the impression that the participants were overall satisfied with the way they view and browse their collection, as well as with the technologies that they use for capturing, sharing, viewing, and browsing their media.
But curation especially does not seem to be intrinsically motivating: all the participants reported that they had curation-related activities in mind that they wanted to do but were postponing, such as organising, printing, sorting, and sharing printed images. One participant explained that she had been determined to do the curation at the start of her retirement 8 years earlier, but she still had not done it:

My only consolation is that I know hardly anyone who has everything in flawless order. – P06 (female; age 69; 7,000 photos)

Another participant felt the need to organise her photos on the computer every time she saw someone else's organised collection:

A friend of us […] makes printed albums from every event […] one for herself and one for her child. […] And then I am thinking ‘Wow!’ but I will never have the patience for that; there will always be other things that need to be done … – P05 (female; age 31; 14,000 photos)

Other participants also expressed their discontent with the way their collection was organised. We believe that we need to ensure that enjoyment becomes part of curation work, or simply eliminate curation work by automating it. One opportunity for encouraging curation lies in our observation that triaging as the final step before sharing is done frequently, because participants do not consider this ‘work’; instead they consider it gratifying. Social activities such as weddings and anniversaries especially inspire the curation and careful triaging of photos. This implies that it is possible to make curation more enjoyable if the purpose of the activity is clear. Since curation seems to be inevitable and important, tools can emphasise for the user what the purpose of the curation activity is, seamlessly integrating into and contributing to other pleasurable photo activities people engage in, such as browsing and reminiscing.
The PhotoUse model can be used to find the most promising combinations of burdensome activities, enjoyable activities, and purpose. A possible design could motivate people to engage in burdensome photo tasks when doing so contributes to their well-being. People generally enjoy the fact that they are building up a life story, and photos can be very empowering for this narrative identity (McAdams 2011).

5.2. System-mediated curation

Supporting curation is important for the successful use of the whole photo collection. To help participants think about their curation issues, we asked what they thought about delegating the curation of the collection to an intelligent system that could do the triaging for them. Almost all participants were hesitant, and reported that they would like to keep having influence on their own collection:

BEHAVIOUR & INFORMATION TECHNOLOGY 763

But […] how would they know what structure I want? Perhaps you need an intake. […] Some simple things might be nice to have done […] but they can only do that with the items that I have not yet organized […] but they should do it the same way as the rest. – P10 (female; age 34; 3000 photos)

It is dangerous to let a program manage your database, as it went wrong when I used iPhoto, and that is not what you want. Because you don't know anymore what is going on. – P02 (male; age 29; 25000 photos)

This presents designers with an interesting challenge: we found that the participants do not automatically accept support from automated curation systems to do the curation for them, but are also unwilling to do the curation themselves. The balance between automation and control is not a new challenge (see e.g. Parasuraman, Sheridan, and Wickens 2000). Even in systems that appear to have been successfully automated – for example, online shops, helpdesks, and computer-aided learning – people prefer to have control over their actions.
In these cases, technology has re-established opportunities for human contact via chat windows, teacher contact, community networks, etc. But human contact is not the only argument for avoiding complete automation: findings from the work of, for example, Stevens et al. (2003) show that the activity of curating and annotating can also increase the value of the objects. In line with their findings, P03 explained the process of making an album as a way to 'frame the memory', which was clearly an enjoyable process that should not be automated. Curation can therefore be aided by a semi-automated solution, one in which the user is helped by the system but remains in control. Such systems might be more helpful to the user if they act more in line with the purposes that motivate the curation activities. The PhotoUse model can provide the designer with insight into those purposes.

5.3. Context-dependent PhotoUse

The finding that the participants want a different subset of their photos for different social situations is in line with the literature on storytelling, which occurs when the conversation has been adapted and tailored to a specific listener. Selection of words, topic, and ordering sequences are adapted to the recipient of the story (for more elaboration on recipient design, see Sacks, Schegloff, and Jefferson 1974). Current tools do not incorporate the flexibility that is needed to support different contexts. In short, people do not have the right picture at hand when they need it, as illustrated by P01:

When I […] go to my parents with the goal to view my holiday pictures, I again have to make a selection. That means extra effort because I need to make that selection prior to the visit.
– P01 (male; age 33; 50000 photos)

Another comment, made by several participants, illustrates the need for adaptable intelligent systems:

I browse for 10 minutes, and then I am distracted by something on the Internet, and then I think of something funny for which I switch back to my photo library, […] and so it goes back and forth. – P10 (female; age 34; 3000 photos)

Due to technological advancement, the life cycle of photos is becoming more complex and more unpredictable, and users are becoming more demanding. Tools that support storytelling need to adapt to the different contexts and demands of the user. We therefore see opportunities for context-dependent selections to serve as a valuable contribution to storytelling.

5.4. Collaborative PhotoUse

In agreement with Kirk et al. (2006) and Frohlich et al. (2002), we see opportunities to better deal with shared collections, for example, curation solutions for photo collections owned by multiple users, a problem that remains relevant and unresolved to date. The activities described in this research were in many cases done by the participants alone, while at the same time many of them were done with a social purpose. Although our research set-up was geared towards individual photo use, the number of individual activities with a social purpose provides opportunities to improve the experience of media-supported remote sharing. Sharing more than a single photo with friends and relatives complicates photo use. Especially the participants who are parents (3 of 12) or grandparents (1 of 12) reported the difficulty of shared responsibility over a family collection. A similar complexity was seen where people depend on the curation skills of others, for example, children depending on parents for the curation of their childhood:

[The children] are sloppier. […] It wouldn't surprise me if [the pictures] are vanished at some point, so that's why I ask them to send them to me.
– P11 (male; age 55; 16000 photos)

I try to capture the nice events […]. I got [an album] from my parents from my own childhood as well […] so it is nice to have and it is after all a testimonial of your childhood, I want to have that for my daughter as well – P08 (male; age 40; 300000 photos)

Designing for collaborative PhotoUse provides opportunities for reducing the workload of collection management, is a promising direction for easing the perceived burden of photo curation, and also provides opportunities for shared remembering.

6. Discussion

We have argued in favour of a broader consideration of PhotoUse that relates the activities that users engage in with the purposes that motivate them. This perspective arose out of interviews with users that focused on the needs, experiential goals, and purposes motivating PhotoUse rather than just the operation of the technology involved. The method of contextual interviews in the homes allowed us to explore whole collections, but may also have skewed our study towards homebound media, missing out on the nomadic aspects of photo activities that are on the rise with the use of camera phones (e.g. Kindberg et al. 2005; Sarvas and Frohlich 2011). Perhaps due to the set-up of the study, some activities were less represented in the results than one would expect: for example, capturing is very important, but the activities were focussed on the moment after capturing, and therefore only some of the 171 activities were tagged with capturing. Instead participants mentioned using their photos very actively for sharing, browsing, and retrieving. Sharing was also less often mentioned as an activity than one would expect. Sharing is a very important aspect of (mobile) digital photography, and the emphasis on the other aspects might be caused by the individual set-up of the interview in the home.
We did not include the frequency of occurrence of the different activities in the findings, nor did we add occurrence percentages to the purposes. The goal of our study was to reveal which activities occur and how they relate to each other. We did not want to emphasise certain activities, even though they occur more often in this particular group of participants. Despite the small sample size, we believe that the differences in age, demographics, and life stages of the participants provided sufficient variety to consider these results for designing tools to support photo activities. It would be useful to extend our research to include extreme cases in terms of, for example, collection size, age, or familiarity with technology. Not all photo activities serve a higher purpose, since some photo activities are intended to be entertaining in themselves. The participants selected for this study were photo enthusiasts, and therefore the enjoyment of photography, editing, and organising was present. The model that we presented in this paper centres around the purpose of user behaviour. We want to stress the difficulty of asking participants about their motivation for behaviour, as they are generally unaware of the underlying motivations for their behaviour. By looking at the motivations and purposes of domestic photography that are described in the related literature (e.g. van Dijck 2008; Sarvas and Frohlich 2011), we can relate our findings to the purposes that they found. They made a distinction between purposes related to (a) communication, social bonding, and demonstration of cultural membership; (b) self-presentation and identity formation; and (c) preservation and retention of (family) memories (van Dijck 2008; Sarvas and Frohlich 2011). The purposes that we defined based on our data cover the same array of purposes, although we made a division between individual and social purposes.
In reality the line between a social and an individual purpose is not so well defined, because, for example, self-presentation to a social group is also part of identity formation. Most of the motivations for photo activities were related to a social endeavour (e.g. sharing and storytelling), using the photos as cues for our AM. Bluck et al. (2005) summarised some important functions of AM in our lives, including a social function, a self-preservation function, and a directive function. In addition to these functions, an adaptive function has been described by, for example, Bluck (2003) and Cohen (1996). In an effort to generalise the purposes that motivate engagement with memory-inducing photos, we related the purposes of photo activities to the earlier mentioned functions of AM. The result can be found in Table 2. Our proposal that the purposes of PhotoUse relate to the functions of AM supports our finding that people are generally motivated to engage with digital photos to support autobiographical remembering. We are aware that successful curation tools and applications might exist that do not fit our vision. We suggest that the PhotoUse model can be used to model and illustrate the dynamic and flexible set of photo activities that people engage in, inspiring the design of novel technologies, and stimulating research into the use of photographic material to support AM reconstruction.

Table 2. Purposes of PhotoUse related to the functions of AM (Bluck 2003).

PhotoUse purpose      Functions of AM (Bluck 2003)
Social purposes       Social function (e.g. bonding, maintaining relationships)
Individual purposes   Adaptive function (mood regulation); Self function (construction and maintenance of self-concept and self-history)
Utilitarian purposes  Directive function (making plans for the future based on past experiences)

7. Conclusions

This paper has contributed an investigation into the use of photos for autobiographical purposes. Based on contextual interview data that were analysed qualitatively, we have argued for an alternative perspective on photo activities: from the life cycle of a single photo to the interplay of different photo-related activities and the purposes that motivate them. Our PhotoUse model can be used to emphasise the complexity and flexibility that are required when designing tools for photo use for individual and shared autobiographical remembering.

Acknowledgements

The UTS Human Research Ethics Committee approved this study, reference number 2013000681. We would like to thank all the participants who made this study possible. We would also like to thank S. Jumisko-Pyykkö for her advice on the paper, and lastly we would like to thank all the reviewers for their insightful comments.

Disclosure statement

No potential conflict of interest was reported by the authors.

Funding

This research was funded by Stichting voor de Technische Wetenschappen (STW) VIDI [grant number 016.128.303] of The Netherlands Organization for Scientific Research (NWO), awarded to Elise van den Hoven.

ORCID

Mendel Broekhuijsen http://orcid.org/0000-0002-0517-0039
Elise van den Hoven http://orcid.org/0000-0002-0888-1426
Panos Markopoulos http://orcid.org/0000-0002-2001-7251

References

Banks, R., N. Duffield, A. Sellen, and A. S. Taylor. 2012, July 4. "Things We've Learnt about Memory." Insights Magazine (2): 1–72.
Banks, R., and A. Sellen. 2009. "Shoebox: Mixing Storage and Display of Digital Images in the Home." Proceedings of the 3rd international conference on tangible and embedded interaction, 35–40. New York: ACM.
Bergman, O., S. Tucker, R. Beyth-Marom, E. Cutrell, and S. Whittaker. 2009. "It's Not That Important: Demoting Personal Information of Low Subjective Importance Using GrayArea." Proceedings of the SIGCHI conference on human factors in computing systems, 269–278. New York: ACM.
Bluck, S. 2003. "Autobiographical Memory: Exploring its Functions in Everyday Life." Memory 11 (2): 113–123. doi:10.1080/741938206.
Bluck, S., N. Alea, T. Habermas, and D. C. Rubin. 2005. "A TALE of Three Functions: The Self-Reported Uses of Autobiographical Memory." Social Cognition 23 (1): 91–117. doi:10.1521/soco.23.1.91.59198.
Cohen, G. 1996. Memory in the Real World. 2nd ed. East Sussex: Psychology Press.
Conway, M. A., and C. W. Pleydell-Pearce. 2000. "The Construction of Autobiographical Memories in the Self-Memory System." Psychological Review 107 (2): 261–288. doi:10.1037/0033-295X.107.2.261.
Corbin, J., and A. Strauss. 2008. Basics of Qualitative Research: Techniques and Procedures for Developing Grounded Theory. Los Angeles, CA: Sage.
van Dijck, J. 2008. "Digital Photography: Communication, Identity, Memory." Visual Communication 7 (1): 57–76.
Dropbox. Accessed August 28, 2014. https://www.dropbox.com.
Facebook. Accessed April 29, 2015. https://www.facebook.com.
Flickr. Accessed August 28, 2014. https://www.flickr.com.
Frohlich, D. M., A. Kuchinsky, C. Pering, A. Don, and S. Ariss. 2002. "Requirements for Photoware." Proceedings of the 2002 ACM conference on computer supported cooperative work, 166–175. New York: ACM Press.
Frohlich, D. M., and E. Tallyn. 1999. "Audiophotography: Practice and Prospects." CHI EA '99: CHI '99 extended abstracts on human factors in computing systems, 296–297. New York: ACM Press.
Golsteijn, C., and E. van den Hoven. 2013. "Facilitating Parent-Teenager Communication Through Interactive Photo Cubes." Personal and Ubiquitous Computing 17 (2): 273–286. doi:10.1007/s00779-011-0487-9.
Guenther, R. K. 1998. Human Cognition. Upper Saddle River, NJ: Prentice Hall.
Hilliges, O., D. Baur, and A. Butz. 2007. "Photohelix: Browsing, Sorting and Sharing Digital Photo Collections." Second Annual IEEE International Workshop on Horizontal Interactive Human-Computer Systems, TABLETOP '07, 87–94. doi:10.1109/TABLETOP.2007.14.
Hodges, S., L. Williams, E. Berry, S. Izadi, J. Srinivasan, A. Butler, G. Smyth et al. 2006. "SenseCam: A Retrospective Memory Aid." Proceedings of the 8th international conference on ubiquitous computing, 177–193. Berlin: Springer-Verlag.
Hoven, E. van den, and B. Eggen. 2009. "The Effect of Cue Media on Recollections." Human Technology: An International Journal on Humans in ICT Environments 5 (1): 47–67.
Hoven, E. van den, and B. Eggen. 2014. "The Cue is Key: Design for Real-Life Remembering." Zeitschrift für Psychologie 222 (2): 110–117. doi:10.1027/2151-2604/a000172.
Instagram. Accessed August 28, 2014. http://instagram.com/.
Jansen, M., E. van den Hoven, and D. M. Frohlich. 2014. "Pearl: Living Media Enabled by Interactive Photo Projection." Personal and Ubiquitous Computing 18 (5): 1259–1275. doi:10.1007/s00779-013-0691-x.
Kindberg, T., M. Spasojevic, R. Fleck, and A. Sellen. 2005. "The Ubiquitous Camera: An in-Depth Study of Camera Phone Use." IEEE Pervasive Computing 4 (2): 42–50. doi:10.1109/MPRV.2005.42.
Kirk, D. S., S. Izadi, A. Sellen, S. Taylor, R. Banks, and O. Hilliges. 2010. "Opening up the Family Archive." Proceedings of the 2010 ACM conference on computer supported cooperative work, 261–270. New York: ACM Press.
Kirk, D. S., A. Sellen, C. Rother, and K. R. Wood. 2006. "Understanding Photowork." Proceedings of the SIGCHI conference on human factors in computing systems, 761–770. New York: ACM Press.
Kirk, D. S., A. Sellen, S. Taylor, N. Villar, and S. Izadi. 2009. "Putting the Physical into the Digital: Issues in Designing Hybrid Interactive Surfaces." Proceedings of the 23rd British HCI group annual conference on people and computers, 35–44. British Computer Society.
McAdams, D. P. 2011. "Narrative Identity." In Handbook of Identity Theory and Research, edited by S. J. Schwartz, K. Luyckx, and V. L. Vignoles, 99–115. New York, NY: Springer. doi:10.1007/978-1-4419-7988-9_5.
O'Hara, K., J. Helmes, A. Sellen, R. Harper, M. ten Bhömer, and E. van den Hoven. 2012. "Food for Talk: Phototalk in the Context of Sharing a Meal." Human-Computer Interaction 27 (1–2): 124–150. doi:10.1080/07370024.2012.656069.
Odom, W., M. Selby, A. Sellen, D. S. Kirk, R. Banks, and T. Regan. 2012. "Photobox: On the Design of a Slow Technology." Proceedings of the designing interactive systems conference, 665–668. New York: ACM Press.
Odom, W., J. Zimmerman, and J. Forlizzi. 2011. "Teenagers and Their Virtual Possessions: Design Opportunities and Issues." Proceedings of the SIGCHI conference on human factors in computing systems, 1491–1500. New York: ACM Press.
Parasuraman, R., T. B. Sheridan, and C. D. Wickens. 2000. "A Model for Types and Levels of Human Interaction with Automation." IEEE Transactions on Systems, Man, and Cybernetics, Part A: Systems and Humans 30 (3): 286–297. doi:10.1109/3468.844354.
Petrelli, D., and S. Whittaker. 2010. "Family Memories in the Home: Contrasting Physical and Digital Mementos." Personal and Ubiquitous Computing 14 (2): 153–169. doi:10.1007/s00779-009-0279-7.
Rodden, K., and K. R. Wood. 2003. "How do People Manage their Digital Photographs?" Proceedings of the SIGCHI conference on human factors in computing systems, 409–416. New York: ACM Press.
Sacks, H., E. A. Schegloff, and G. Jefferson. 1974. "A Simplest Systematics for the Organization of Turn-Taking for Conversation." Language 50 (4): 696–735.
Sarvas, R., and D. M. Frohlich. 2011. From Snapshots to Social Media: The Changing Picture of Domestic Photography. London: Springer-Verlag. doi:10.1007/978-0-85729-247-6.
Sellen, A., and S. Whittaker. 2010. "Beyond Total Capture." Communications of the ACM 53 (5): 70–77. doi:10.1145/1735223.1735243.
Stevens, M. M., G. D. Abowd, K. N. Truong, and F. Vollmer. 2003. "Getting into the Living Memory Box: Family Archives & Holistic Design." Personal and Ubiquitous Computing 7 (3–4): 210–216. doi:10.1007/s00779-003-0220-4.
Tulving, E. 2007. "Are there 256 Different Kinds of Memory." In The Foundations of Remembering: Essays in Honor of Henry L. Roediger, edited by J. S. Nairne, 39–52. New York: Psychology Press. http://alicekim.ca/Roediger07_39.pdf.
Van House, N. A., and E. F. Churchill. 2008. "Technologies of Memory: Key Issues and Critical Perspectives." Memory Studies 1 (3): 295–310. doi:10.1177/1750698008093795.
WhatsApp. Accessed April 29, 2015. https://www.whatsapp.com.
Whittaker, S. 2013. "Personal Information Management: From Information Consumption to Curation." Annual Review of Information Science and Technology 45 (1): 1–62. doi:10.1002/aris.2011.1440450108.
Whittaker, S., O. Bergman, and P. Clough. 2010. "Easy on that Trigger Dad: A Study of Long Term Family Photo Retrieval." Personal and Ubiquitous Computing 14 (1): 31–43. doi:10.1007/s00779-009-0218-7.
© 2010 Macmillan Publishers Ltd. 1743–6540 Journal of Digital Asset Management Vol. 6, 3, 139–146 www.palgrave-journals.com/dam/

Correspondence: Patricia Russotti, School of Photographic Arts and Sciences, Rochester Institute of Technology, One Lomb Memorial Drive, Rochester, NY 14623-5603, USA. E-mail: patti.russotti@rit.edu

INTRODUCTION

This book is a guide for photographers, image-makers, or anyone who will be working with image files during the course of their work.
It is also a handbook for knowing who should be doing what, with an eagle's view of the workflow as well as all the iterations that a file will take on during its lifespan. It is for anyone wanting to know the choices available for developing a workflow that creates consistent and reliable results.

Original Article

Getting to work – Image ingestion

Patricia Russotti, visual artist and educator, is a Professor in the School of Photographic Arts and Sciences, College of Imaging Arts & Sciences at Rochester Institute of Technology. She is an active digital and photographic imaging artist producing a wide range of work for corporations, public service organizations, museums, individual artistic commissions, funded projects and public exhibitions. She has done extensive consulting for in-house graphic service departments, ad agencies and photography studios. These endeavors focus on assessing, refining and creating workflows and best practices, and providing training programs. Patti is also committed to assisting educators with the integration of current industry best practices, technology and software into their curriculums. Patti has been training and presenting on Adobe Photoshop since the first version of the application. She is a long-time Adobe beta tester. She develops and presents technical and creative corporate seminars, workshops and training programs internationally. Patti's career has a breadth and depth of experience and skill in workflow, image making (analog, digital, alternative and historic processes), the creative process, design and education. She is the co-author of Digital Photography Best Practices and Workflow Handbook, A Guide to Staying Ahead of the Workflow Curve. © 2010, published by Elsevier Inc, Focal Press. Patti's positions previous to and concurrent with RIT include the management and operation of a multimedia lab, ownership and operation of an advertising photography studio, and an imaging services consulting group.
Patti holds MS and EdS degrees from Indiana University.

Richard Anderson is an advertising and corporate photographer based in Baltimore, MD (USA). He made the transition from film to digital photography in 1999 and now uses a completely digital workflow. Anderson has conducted numerous seminars and written magazine articles on digital workflow and quality issues for photographers. He is an ASMP Board officer and chairs the ASMP Digital Standards Committee. He was the principal author of the Universal Photographic Digital Imaging Guidelines (UPDIG). He is the project leader for dpBestflow, a project jointly funded by the US Library of Congress and the ASMP to promote best practices for digital imaging workflow and digital image preservation. In addition to the aforementioned book, the project has published a website, dpBestflow.org, which is a resource for the digital photography community worldwide. Anderson received a 2009 International Photographic Council (IPC) Photographer Leadership Award in recognition of his volunteer work as an ASMP board member, his contributions to the trade through his role in developing UPDIG, his efforts in securing a 3-year award from the Library of Congress to further his work on digital standards, and his educational outreach to the photographic community.

ABSTRACT

Chapter 7, Getting To Work – Image Ingestion, covers the key components of getting digital images from the camera into the computer. The ingestion step is one of the most important steps in a safe and efficient digital image workflow. Performing this step properly provides the best opportunity to automate the digital workflow and ensure the organization and safe-keeping of the photographer's work.
We discuss best practices for:

• options for safe transfer of images utilizing dedicated ingestion software;
• creating a file and folder backup;
• file naming conventions and strategies;
• adding metadata and building templates for reuse; and
• visually verifying images in an image browser before the memory cards are reformatted.

Journal of Digital Asset Management (2010) 6, 139–146. doi: 10.1057/dam.2010.16

Keywords: digital; photography; workflow; best practices

The goals are to:

• show how each piece of hardware and software might fit into the bigger digital workflow picture;
• help our audience understand how to choose hardware, software and a process that is optimal for their photographic goals;
• show how to make informed decisions about choosing the correct workflow for your needs;
• provide a framework for keeping up with the evolving digital photographic workflow and ecosystem; and
• provide realistic strategies to preserve digital image files (and the work that you do to them) for the short and especially the long term, as they are an important part of our history and culture.

During the last few years, the term 'workflow' has migrated from a primarily print and publishing, graphic arts industry persona to an imaging-at-large term. This has resulted in the need for photographers, artists, trainers and educators to integrate this concept and material into their vocabulary and curriculums. Yet, up until now, there has not been a single and complete resource that puts each of the seemingly disparate processes together. There has been a need to create a harmonious map and guide among all the disciplines that ultimately need to work together. In other words, a resource that unites and leads to all the things you need to do within a workflow. Perhaps the root cause of this lack of understanding and consistency lives in the many definitions or views of workflow that exist depending on one's background and discipline. This text provides the necessary components to help build a cohesive workflow that implements best practices and provides solutions for imaging professionals and educators.

Digital Photography Best Practices and Workflow is:

• a coherent, concise guide to the key aspects of the digital photography workflow;
• a list of resources and links to stay current and up to speed with the rapid changes in technology; and
• a glossary to help standardize the language and definitions as one implements best practices and synthesizes the workflow and communication process: what works, why things work, and what choices you need to make for consistency, efficiency and repeatability.

It is sometimes difficult to get direct answers to specific questions; everyone has a position and wants to share it with you. Who really knows the best path and has the most accurate information? Many tools are currently incomplete or are still evolving. Many of today's requisite skill sets may require a consultation with an expert in that particular step to ensure accuracy. Collaboration is not just a buzzword; it is a necessary part of our workflow. How do you work smart now? Accepting things verbatim may not always be the best way to go. Use this information to make enlightened decisions while tapping into the growing community of imaging professionals. Digital Photography Best Practices and Workflow is a culmination of the knowledge and experience of many of those experts.

Working with technology requires us to consider the following aspects:

• Embrace the concept of integrated information and critical thinking to make good decisions.
• Have an open and informed mind.
• Accept that things evolve and change.
• 'Knowing' the solution before understanding the new process may hinder an otherwise appropriate action.
• Technology often becomes like religion and politics – be a good consumer and employ bi-partisan tactics.
• Learn to examine others' ideas, maintain an open mind, and allow yourself to adjust your position and known facts.
• All applications have issues – being wedded to one mind-set can hinder a better understanding of the process and a more efficient workflow.
• Flexibility is paramount – yet consistency and discipline are essential.

These are the fundamentals of what workflow is about and what is required to stay on top of, and ahead of, the curve. Accumulating knowledge and experience has become what we call the best practices – these are the things that work if a little bit of discipline and consistency are applied. Our digital careers are about living through the technological shifts and synthesizing what works and will continue to work as the technology evolves. Figures 1 and 2 provide a workflow overview from Chapter 4 of 'Digital Photography Best Practices and Workflow.'

DATA PRESERVATION AND DATA BACKUP

The primary goal of image ingestion is to preserve image data and create backups. Image ingestion from memory card(s), tethered cable or wireless requires similar steps. In the capture section, we referred to the ideal setup of using a backup card when shooting if your camera model supports that function. If you do not use two cards for capture, you should:

• regularly download your single cards to a disk or storage device;
• not format cards until the downloaded images can be reviewed and backed up to a second drive;
• not connect the camera directly to the computer to ingest images (unless shooting tethered);
• always use download software as opposed to using the Finder application to copy the image files.
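The download-and-verify discipline described above can be sketched in a few lines of code. This is only an illustration of the pattern – copy with dedicated software rather than the Finder, verify every copy, and keep a second copy before any card is touched – not the dpBestflow tooling itself; the function names, folder layout, and the use of SHA-256 checksums are our own assumptions.

```python
import hashlib
import shutil
from pathlib import Path


def sha256(path: Path) -> str:
    """Return the SHA-256 digest of a file, read in 1 MB chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()


def ingest(card_dir: Path, primary: Path, backup: Path) -> list:
    """Copy every file off the card to two destinations, verifying each
    copy against the original. The card itself is never written to,
    moved from, or formatted here."""
    copied = []
    for src in sorted(card_dir.rglob("*")):
        if not src.is_file():
            continue
        for dest_root in (primary, backup):
            dest_root.mkdir(parents=True, exist_ok=True)
            dest = dest_root / src.name
            shutil.copy2(src, dest)          # copy, never move
            if sha256(dest) != sha256(src):  # verify before trusting the copy
                raise IOError(f"Verification failed for {dest}")
        copied.append(src)
    return copied
```

Only after both verified copies exist (and the images have been visually reviewed in a browser) should the card be reformatted, in the camera.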
Figure 1: The dpBestflow generic workflow, excerpted from Chapter 4 of 'Digital Photography Best Practices and Workflow' (Russotti and Anderson).

Using the Finder to copy files is potentially unreliable, sometimes resulting in skipped or missing files. Never use the Finder to move files from the memory card. If you move (not copy) files or folders between the memory card and computer disk and some glitch happens to the destination volume, or your connection with it, all your files vanish – both on the memory card and on the destination hard drive. There are many other good reasons to use dedicated ingestion software, but image preservation is the most important one.

DPBESTFLOW FILE NAMING CONVENTIONS

Naming (or re-naming) digital image files is a key organizational task in digital image workflow, as it is the most basic element of your file system structure. Digital cameras do not currently have very sophisticated naming options, and the default names are confusing and lack one of the most important criteria for digital image file naming – each file name must be unique.

The first criterion for file naming is that it must follow these basic computer system rules:
- Letters in the name should only be the letters of the Latin alphabet (A–Z, a–z).
- Numbers should only be the numerals 0–9.
- Only use hyphens and underscores. Avoid any other punctuation marks, accented letters, non-Latin letters and other non-standard characters such as forward and back slashes, colons, semicolons, asterisks, angle brackets or brackets.
- File names should end in a three-letter file extension preceded by a period, such as .CR2, .JPG, .TIF and so on.

Figure 2: Workflow parameter decision points.

The next important criterion is that each digital image file should have a unique file name. Having multiple files with the same name is confusing to the photographer and clients alike. There is the danger that a file might automatically overwrite another file with the same name. How you arrive at unique file names will require some thought. Once you have developed a system, it is important to standardize and adhere to it. Some parameters that can be used to develop unique names include the following:
- Your name or initials as the beginning of the string. Keep in mind that you are limited to 31 characters, so short names or initials work best.
- The date of the photography session. This works best if you use year, month and day to keep files lined up in chronological order.
- A job sequence number. The first project of the year might have a date/job string such as 09001, and so on. This will cause files to line up in job number order.
- A sequence number. This number can begin at 0001 and go until the end of the job (make sure to have enough digits to contain the total number of files, or your files will not line up correctly). Many photographers like this model because they can tell at a glance whether the total number of files matches the final file number. Another reason to use this system is that the order in which files line up (and are likely viewed and proofed) can be controlled easily by organizing the files in a desired order and then renaming them. Another approach is to use the automatically generated numbers from the camera. One problem with this approach is that it cannot guarantee unique names if you use more than one camera when you shoot.
- Avoid incorporating a job name or description in the file name. Although you can do this, it is hard to avoid running into an overly long file name using this approach.
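A naming scheme built from these parameters can be sketched in a few lines. The pattern below follows the article's RNA_08002_001-style examples; the function itself and its argument names are illustrative assumptions, not part of any dpBestflow tool:

```python
import re

def build_name(initials, year, job, frame, ext, variant=""):
    """Compose a unique file name such as RNA_08002_001.cr2 or
    RNA_08002_001_M.PSD.

    initials : short photographer string, e.g. "RNA"
    year     : two-digit year string forming the job prefix, e.g. "08"
    job      : job sequence number within the year (1, 2, ...)
    frame    : frame number within the job
    variant  : optional descriptor such as "M", "LM", "BW" or "CMYK"
    """
    parts = [initials, "%s%03d" % (year, job), "%03d" % frame]
    if variant:
        parts.append(variant)
    name = "_".join(parts)
    # Enforce the basic rules: Latin letters, digits, hyphen, underscore,
    # and the 31-character limit mentioned in the text.
    if not re.fullmatch(r"[A-Za-z0-9_-]+", name):
        raise ValueError("use only Latin letters, digits, - and _")
    if len(name) > 31:
        raise ValueError("file name exceeds 31 characters")
    return "%s.%s" % (name, ext)
```

Because the year/job prefix sorts chronologically and the frame number is zero-padded, files line up in shoot order in any file browser.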
Another consideration is that if you do a lot of shoots for a particular client, or at a particular location, you will have to use some other naming string to differentiate the shoots from one another, so the descriptive component of the name is not particularly helpful.

One final criterion is to have a file naming system that allows you to easily tell whether a file is an original file, masterfile or derivative file. There are any number of possible variants, such as a black and white version, a CMYK version or a standard file format version from a raw original, such as JPEG or TIFF. Plan for this, and incorporate enough headroom into your file naming schema to add a descriptor for these variations. For instance, a masterfile could have the letter M, or MF, or Master added at the end of the file name, but just in front of the three-letter extension. A CMYK version could have CMYK added, a black and white version could have BW added, and so on. Once you have created a file naming system, use it consistently for all files.

Just as with the optional workflow step of converting proprietary raw files to DNG, there can be early binding file naming or late binding file naming. Although many prefer to implement a file naming schema on ingestion, those who use multiple cameras when they shoot, or those who want to organize image files in a particular order and have the file names preserve that bit of organizational effort, prefer to rename image files after the editing workflow step.

We recommend appending short descriptors for different versions of an image file as part of your file naming schema. For instance:
- Original: RNA_08002_001.cr2
- DNG version: RNA_08002_001.DNG
- Masterfile: RNA_08002_001_M.PSD
- Layered masterfile: RNA_08002_001_LM.PSD
- Flattened masterfile: RNA_08002_001_FM.PSD
- Web version: RNA_08002_001_web.jpg
- Offset printing version: RNA_08002_001_CMYK.TIFF
- Desktop or lab print version: RNA_08002_001_print.TIFF

Tripwire: One of the very worst things you can do with file naming is to create a situation in which the same image file has two different names. This can occur if you make an immediate duplicate copy of the ingested files, put away a copy and go on to edit and rename the other copy. A workflow like that will be doubly confusing because not only will the file names not match up, but the total number of files may be different as well. Although there are workarounds, such as matching up image files by capture time, or putting the original file name into a metadata field and matching files up that way, it creates more work and wastes time.

However, putting the file name into a metadata field, such as the 'Title' field, is useful for delivery files. This allows you to easily recover from situations in which the files' recipient renames them and then needs another version of the same image. You can direct the person to where the original file name is stored, and your search time becomes much shorter. In addition, keeping the original file name in the metadata can be useful when you need to rename a file for use on your web page. This will make it more discoverable by search engines.

INGESTION TOOLS

Card readers

Card readers come in many configurations to accommodate different computer connections (USB, USB2, FireWire 400/800) and several types of memory cards (CompactFlash (CF), Secure Digital, Memory Sticks and so on). If you have a collection of cameras, from point-and-shoot to high-end digital, you may need an all-in-one card reader to accept a wide variety of card sizes. This type of card reader is usually a USB device. You may also need a higher speed FireWire 400 or 800 CF card reader. If you photograph events or sports and have a need for speed, you may want to have a set of FireWire 800 card readers. These stackable units can download from multiple cards at the same time.
The gold standard for information on flash memory cards and card readers is the Rob Galbraith site ( www.robgalbraith.com/bins/reader_report_multi_page.asp?cid=6007-9392 ). As with most other things digital, plan to periodically replace older memory cards with newer ones that have greater storage capacity as well as increased read and write speed. Luckily, they have gone down in price.

Download/ingest software

Many software applications used for image ingestion are multifunctional – meaning they can be used as a browser, parametric image editor, cataloging application or all of the above, as is the case for all-in-one programs such as Adobe Lightroom and Apple Aperture. One application stands out as purely dedicated to image ingestion: the aptly named ImageIngester. Another widely used application, PhotoMechanic, adds a browsing function as well. Camera manufacturers make downloading utilities and browsers, although their metadata support varies. Adobe Lightroom, Apple Aperture, Adobe Bridge and PhotoMechanic 4.6 include image ingestion along with tethered shooting (Aperture directly, and Lightroom, Bridge and PhotoMechanic indirectly by means of auto import from a watched folder fed by the manufacturer's capture utility). Some cataloging software, such as Expression Media, has import functions, although few use it for that purpose as it jumps ahead too many steps in the workflow. Phase One's Capture One software is widely used for tethered ingestion, although its metadata support is extremely minimal. Bibble Labs's Bibble software also offers tethered shooting; its metadata support is only slightly better than Capture One's, although, like Capture One, it offers advanced parametric image editing capabilities.

ADD METADATA

The second goal of the ingest step is to advance the workflow by adding metadata and image adjustment presets; applications such as ImageIngester Pro can even add cataloging information as the files come into the computer. At the cost of a little bit of setup time, batch metadata can be added to the image files via metadata templates. This will save lots of time later in the workflow and does not add significantly to the download time.
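A batch metadata template is, at heart, a fixed set of fields merged with per-file values. The sketch below is a generic illustration of that idea in Python; the field names loosely echo IPTC-style metadata and the values are placeholders, not the schema of any particular ingestion tool:

```python
BASE_TEMPLATE = {
    # Batch fields applied to every file on ingest (placeholder values).
    "Creator": "Jane Photographer",
    "CreatorAddress": "1 Studio Lane, Anytown",
    "CreatorWorkTelephone": "+1 555 0100",
    "CreatorWorkEmail": "jane@example.com",
    "CopyrightStatus": "Copyrighted",
    "CopyrightNotice": "(c) 2010 Jane Photographer. All rights reserved.",
    "RightsUsageTerms": "No reproduction without prior written permission.",
}

def metadata_for(file_name, **job_fields):
    """Merge the batch template with per-file and per-job fields.
    The original file name goes into the document title field, and
    job-specific entries (client, location, keywords, star ratings...)
    are layered on top as the files move through editing."""
    record = dict(BASE_TEMPLATE)
    record["Title"] = file_name
    record.update(job_fields)
    return record
```

Building the record this way mirrors the workflow: the base template is set up once, and custom fields accumulate later without touching the originals' shared metadata.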
Adding basic metadata at this stage helps to ensure that all derivative files will have the same base metadata as the originals. We recommend that your basic metadata template consist of the following:
- File name (in the document title field)
- Name of the creator/author
- Address
- Phone
- E-mail
- Copyright status
- Copyright notice
- Generic rights usage

This basic set of metadata can be built upon to provide more specific custom metadata as the image files go through the editing process. This additional metadata might include the following:
- Description and/or headline
- Specific rights usage (possibly PLUS code or usage description)
- Client name
- Location
- Keywords
- Star ratings
- Special instructions

INGESTION SETUP

Card handling

Have a plan for keeping shot cards separate from un-shot cards. Make sure that the un-shot cards are formatted and ready to go. Establish a numbering or coding system for your cards. If you find corrupted files, this will make it easier to track down which card may be creating the problem. It also just helps you keep track of your cards.

Backing up downloaded cards

The ideal plan is to use software that writes files to two separate drive destinations. Use of a RAID 1 external drive is highly recommended. But if your software only supports writing to one destination, the second choice is to regularly download to a second drive, although this doubles your download time. The third choice is to regularly copy files from the first download to a second drive. In all cases, memory cards should not be formatted until downloads have been verified.
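The third choice (copying the first download to a second drive and verifying it) can be sketched with ordinary shell tools. Temp folders stand in for the two drives here; the folder names are hypothetical:

```shell
# Sketch: mirror the first download to a second location, then verify
# byte-for-byte before any memory card is formatted.
SRC=$(mktemp -d)                       # stands in for the drive-1 shoot folder
DST=$(mktemp -d)/09001_shoot_backup    # stands in for the drive-2 copy
echo "demo raw data" > "$SRC/IMG_0001.CR2"

mkdir -p "$DST"
cp -a "$SRC/." "$DST/"   # copy, preserving timestamps and attributes
diff -r "$SRC" "$DST"    # recursive byte compare; silent when identical
echo "backup verified"
```

Dedicated ingestion software performs the same copy-and-verify pass for you; the point of the sketch is only that verification means re-reading both copies, not trusting the copy operation.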
The easiest method of verification is to put the downloaded files into a raw processor and let it build new thumbnails from the raw data.

Tripwire: Earlier, we said that managing image metadata can be confusing. We face that confusion firsthand in the ingestion step. Some examples are as follows:
- ImageIngester writes only the XMP variety of metadata.
- Others (for example, Aperture) write only the legacy Information Interchange Model (IIM) International Press Telecommunications Council (IPTC) variety.
- Adobe software writes both varieties.
- PhotoMechanic lets the user decide whether to use XMP or legacy IPTC and where to put it.

PhotoMechanic issues: PhotoMechanic can embed XMP metadata in proprietary raw files (although no longer as a default setting). Not only is this a potentially dangerous practice, it can also cause a metadata collision with Adobe Camera Raw, where a proprietary raw file can have embedded XMP metadata from PhotoMechanic coexisting with attached XMP (a sidecar file) from Adobe Camera Raw. This will not appear to be a problem unless the same files are put back into PhotoMechanic and any change is made to the metadata. A change will cause Adobe Camera Raw to read only the newer XMP, which is embedded, and ignore the older XMP (camera settings in the sidecar file). Result: when this happens, all your parametric edits will be lost, and you will have to start over with your color and tone edits. Best practice: our advice is to make sure that the PhotoMechanic preference is set to always create a sidecar file for the XMP data.

Tripwire: Browsers set to use the embedded camera previews cannot be used to verify raw files. The preview JPEG can appear fine, but the underlying raw data can be corrupt.
When using parametric image editors (or browsers set to build previews from the raw data), be aware that you need to let them build their cache and create thumbnails from the raw data before you can judge whether the underlying raw data is good.

HANDLING YOUR CAPTURE FORMAT CHOICE

Capturing and downloading JPEGs is simplest and has the fewest issues. Shooting and downloading Raw + JPEG is trickier. Be sure to test your workflow, as weird things can happen. We found that some software gives the raw files and the JPEG files sequential numbers instead of the same number with different extensions. This behavior can vary according to the model of the camera and whether ingest is done from cards or auto-imported when the camera is tethered. Downloading proprietary raw, whether tethered or from cards, takes longer than JPEG, but is otherwise straightforward. Some software supports converting files to DNG format on ingestion. There are pros and cons, which we discuss in Chapter 9, Optimization.

Table A1: Comparison chart of ingestion software features

| Feature | PhotoMechanic | Lightroom | Bridge | Downloader Pro | ImageIngesterPro |
|---|---|---|---|---|---|
| Ingest to multiple locations | Yes, up to 2 | Yes, up to 2 | Yes, up to 2 | Yes, up to 3 | Yes, up to 3 |
| Choose images to be ingested | No | Yes | Yes | Yes | No |
| Allow automatic ingestion when card is detected | No, but ingest window can pop up | No, but ingest window can pop up | No | Yes | Yes |
| Rename files during ingestion | Yes | Yes | Yes | Yes | Yes |
| Arrange images into folders by date and time or by name | Yes | Yes | Yes | Yes | Yes |
| Convert to DNG during ingestion | No | Yes | Yes | Yes, with plug-in | Yes |
| Verify image during ingestion | No | Only if DNG conversion is used | Only if DNG conversion is used | Only if DNG conversion is used | Yes |
| Embed metadata during ingestion | Yes | Not in raw files | Not in raw files | Yes | Yes |
| Sidecar metadata during ingestion | Yes | Yes | Yes | Yes | Yes |
| Metadata variables | Yes | No | No | Yes | Yes |
| Metadata memory | Yes | Yes | Yes | Yes | Yes |
| Rename differently for different cameras | Yes | Yes | No | Yes | Yes |
| Filter files being ingested | Yes | No | No | Yes | Yes |
| Tag GPS during ingestion | No, but tags can be synced after ingestion | No | No | Yes | Yes |
| Eject card after ingestion | Yes | Yes | No | Yes | Yes |
| Launch external viewer after ingestion | No | No | No | Yes | Yes |
| Download multiple cards at once | No | No | No | No | Yes |

Book Review

Computational Photography: Methods and Applications, Rastislav Lukac, Ed., 564 pp., ISBN: 1439817499, CRC Press, Digital Imaging and Computer Vision Series (2010), $110.53 hardcover.
Reviewed by Hayder Radha, Michigan State University, East Lansing, Michigan

Computational Photography: Methods and Applications offers a rather comprehensive coverage of both the analytical principles and practical algorithms that have been developed for digital photography. Similar to other books that are coordinated by editors rather than written by one or a few authors, this book is a collection of chapters authored by many experts in the broad area of computational photography. A total of 40 authors contributed to 18 chapters, resulting in more than 500 pages. The book also includes 32 pages of color images that are replicated from the book chapters; these color images, which form the color insert of the book, are printed on glossy paper for better viewing of the results and performance of the different techniques and methods covered within the book's 18 chapters.

First Impression

The book begins with a carefully written preface where the editor, Rastislav Lukac, outlines the book chapters, their objectives, and targeted coverage. I believe that Lukac provides a good first impression at the onset of the book by attempting to define the area of computational photography in the first paragraph of the preface. As he highlights, computational photography ". . . refers broadly to computational imaging techniques that enhance or extend the capabilities of digital photography" (p. xi). This statement roughly defines the objective of the book as well. Consequently, one can view the book as a collection of (mainly) image-processing methods and algorithms that have been developed to "enhance and extend the capabilities" of modern digital photography. By focusing on this image-processing dimension, the book could arguably be included in any list of recommended reading for a traditional graduate-level image-processing course.
Although it could be a stretch, I even found the book a potential candidate (with some adjustments and updates) to serve as a textbook for a more specialized course in digital photography and imaging. In this latter case, the main missing ingredient relative to a good textbook is a set of homework problems or MATLAB-like projects that are normally found in traditional textbooks. This missing ingredient, however, does not necessarily provide justifiable grounds for criticizing the book in its current version; I do not believe that the book was intended to serve as a textbook. However, I believe that the book has many positive elements that enable it to become a member of a recommended reading list for a graduate-level image-processing course. Despite the clear dominance of "image processing" methods within the book, one encounters many aspects of computer vision, computer graphics, and machine learning when reading different chapters. Indeed, two chapters (15 and 16) have titles related to machine learning.

Audience

The clarity in exposition, which begins with the preface, makes the book accessible to a broader audience than experts in the field. Although this is a common claim offered by many editors and book authors, I found the book strikes a reasonable balance between being accessible and analytically deep at the same time. Many parts of this book could be as useful to professionals, who may not have the technical background to appreciate some of the analytical subtleties, as to graduate students, faculty in academia, and other experts, who would probably enjoy some of the deep analysis provided by many chapters. Almost every chapter begins with a high-level description and a big-picture view of the problem at hand. When appropriate, a system-level view is also shown, which reinforces the accessibility of the book to a broader audience than just experts in the field.
Each chapter also provides a very decent yet brief overview of major contributions within the particular theme of the chapter. Consequently, the book in its current version also serves as a good pointer to the leading and latest references in digital photography over the past decade and sometimes beyond.

Parts versus Chapters

Overall, the coverage and the flow of the book chapters are well organized. Although the book chapters are not divided into explicit parts, one can identify a few possible ways of dividing the book chapters into parts that are associated with general themes or areas within computational photography. The editor hints at his own categorization of the book chapters in an implicit way. This categorization can be found in the fifth paragraph of the preface. The editor alludes that the book consists of the following five parts: (a) single-capture image fusion; (b) different steps of the imaging pipeline of consumer cameras, including compression, color correction and enhancement, denoising, demosaicking, super-resolution imaging, deblurring, and high-dynamic-range imaging; (c) a variety of rather loosely coupled topics including bilateral filtering, painterly rendering, shadow detection, and document rectification; (d) machine-learning-based methods; and (e) light-field rendering and dynamic-view synthesis.

Color-Filter-Array Chapters

After reading the book, I found myself dividing the book into slightly different parts than what was suggested by the editor. For example, I found a relatively clear and common theme among five nonsequential chapters.* Chapters 1–3, 5 and 6 all focus on addressing different aspects of the dominant practice of utilizing color filter array (CFA)-based architecture in digital cameras.
Journal of Electronic Imaging Oct–Dec 2011/Vol. 20(4) 049901
Within these five chapters, the first two essentially address the foundation of recovering missing color while maximizing the signal-to-noise ratio of images captured by a single-sensor array. In particular, I found Chap. 1 among the most critical chapters because it describes the foundation of many of the technical and practical building blocks of the popular single-sensor camera architecture. Basic topics such as CFA-based imaging, human visual color response, demosaicking, and other introductory concepts in digital photography are covered. Hence, being the first of 18 chapters, it lays the foundation and describes some of the concepts of the CFA-based camera architecture that are necessary for understanding later chapters. Meanwhile, Chap. 1 also provides a new contribution by advocating the use of a fourth panchromatic color channel (which includes a mix of all three primary colors, and hence captures the ever-important luminance signal) in addition to the traditional three-color CFA channels (chrominance) that are found in the well-known Bayer pattern. The theme of the chapter is the fusion of these different color and luminance channels for the recovery of the original colors while minimizing noise. Hence, the chapter covers many of the foundations of single-sensor cameras in addition to its own contribution related to image fusion of luminance and chrominance information. Consequently, Chap. 1 is the lengthiest, spanning close to 60 pages.

Chapter 2 builds upon Chap. 1 by addressing the crucial issue of motion deblurring from a single capture. The chapter describes an interesting approach that exploits the intriguing framework of utilizing a fourth panchromatic channel in addition to the three color channels of the same CFA. This approach is based on the notion of using different exposure times for the different pixel sensors, where the panchromatic channel is sensed (integrated) over a shorter time than the chrominance channels. Both Chaps.
1 and 2 complement each other quite well, which is not surprising given that both were written by common authors, all of whom are leading experts in the area.

Chapter 3 addresses the issue of compressing CFA images in general and Bayer CFA images in particular. The chapter begins by outlining the two major options for incorporating an image codec (encoder and decoder) within a CFA-based camera. The conventional approach would be to perform demosaicking prior to any encoding and decoding operation. In other words, under a traditional approach, standard image compression can be applied directly to a demosaicked image. This chapter describes and advocates the alternative approach of compressing CFA images captured directly by modern digital cameras and storing these compressed CFA images. This alternative approach requires the development of methods that compress CFA images directly, which is the main theme of this chapter.

By continuing within the theme of CFA-related methods, the next chapter in this part of the book is Chap. 5; hence, we postpone the discussion of Chap. 4 to a later point in this review. Chapter 5 addresses the issue of denoising CFA images. In particular, the chapter describes a principal component analysis (PCA)-based approach for CFA image denoising, and consequently it provides a brief yet accessible review of PCA, which I believe adds tangible value not only to the chapter itself but to the book as well. The chapter also reinforces topics introduced by other chapters, such as camera imaging fundamentals, including CFA-based camera architecture, demosaicking, the traditional camera image-processing pipeline, and, last but not least, denoising, which is the main focus of the chapter. Although many of these topics are covered by earlier chapters, I found the slightly repetitive nature of these topics, within the book in general and within the CFA-theme chapters in particular, useful. This brief repetition adds to the accessibility of the book chapters; in addition, it strikes a balance between keeping each chapter self-contained and preventing the book from becoming overly repetitive. Going back to the focus of Chap. 5, and after covering the aforementioned basic topics, the chapter moves on to its description of PCA and noise modeling. Despite its adequate coverage of these two fundamental topics, I believe the chapter could have served the reader better by elaborating further on both. For example, I believe that PCA is probably among the most important topics that should be covered thoroughly by any image-processing book. In particular, making a connection with image transforms in general, and with the well-known optimal Karhunen–Loève transform, would have been welcome material for many readers. Regarding noise models, the chapter quickly resorts to adopting signal-independent models after a brief admission that in real life the noise captured by typical image sensors is signal-dependent. Nevertheless, the chapter strikes a reasonable balance by adopting a channel (color)-dependent noise model, yet maintains the traditional signal-independent noise view for tractability and ease of treatment.

*It would have been nice if these chapters, which focus on CFA-related topics, were put in sequence. In particular, by considering the first six chapters, one may argue that Chapter 4 represents a bit of a disruption of the flow within the context of CFA-based techniques. Nevertheless, this disruption does not take away from the merit of Chapter 4. Furthermore, the placement of Chapter 4 within the first six chapters is consistent with the Editor's own categorization as explained above.
Other positive aspects of the chapter include its coverage of the denoising procedure, which is based on maximum likelihood estimation (MLE), another fundamental topic that should be included in any viable imaging book. One strong aspect of this topic within this chapter is the connection made among MLE, PCA, and the noise model, based on a rather pragmatic yet relatively sound approach.

The last chapter in this part of CFA-related material is Chap. 6. This chapter addresses the core topic of demosaicking from an optimization point of view. In particular, it describes a regularization-based demosaicking framework, and hence this chapter includes several analytically sound concepts that are of interest to theoretically inclined readers. In this context, the chapter touches upon a few multirate, multidimensional signal-processing concepts, including periodic sampling and lattices in the spatial and Fourier domains, and integrates these concepts in a rather cohesive analytical framework. Chapter 6 also presents the ill-posed nature of the demosaicking problem and how regularization can help. To that end, the chapter provides a rather compact and relatively accessible treatment of the general topic of regularization for graduate students or other readers who may not be familiar with this topic. Nevertheless, the analytical nature of the material in this chapter makes it inaccessible to a more general audience or those who are not theoretically inclined. This chapter is one example of why I believe the book can serve as recommended reading for a graduate-level image-processing course.

CFA-less Chapter about Colors

Chapter 4 addresses the problem of color correction and color constancy. This problem is viewed in this chapter as color restoration and enhancement, which is reflected explicitly by the title of the chapter.
Regardless of the problem's name or the chosen title, color constancy is a fundamental issue faced by digital photography, and the introduction section of this chapter provides a brief overview of the nature of the problem and its importance. In particular, Chap. 4 provides the traditional explanation of illumination-independent color representation as the de facto objective of color constancy or color-correction algorithms. The theme of the chapter is not color constancy per se, but rather how to approach this problem in the compressed domain. Given that the DCT is the dominant transform for leading image compression methods, the chapter provides a brief overview of DCT fundamentals. The chapter provides very extensive coverage of simulation results, with plenty of examples. A color-enthusiast reader, however, might be disappointed when reading the simulation results. Both the unfortunate exclusion and the print quality of some of the key color images (from/within the color insert of the book) may cause a bit of distress for color fanatics. Although a few important images, which are shown as gray-level images within the corresponding book chapters, have unfortunately been excluded from the designated color pages, I believe all images of Chap. 4 dealing with color correction should have been replicated and included in the color insert of the book. More importantly, I believe that the quality of some of the images included in the color insert does not serve the reader much in appreciating the subtle differences among the different color-correction methods. Many of the images from Chap. 4 included within the color insert are a bit dark with little or no visible color information.
Nevertheless, a couple of color examples do provide visual confidence in the chapter's theme and contribution.

Super-Resolution with Super Rigor

For an academic reader, Chap. 7 is arguably the best chapter in this book. It provides an excellent exposition of the important area of super-resolution (SR). For a start, the topic of the chapter is among the most interesting and important areas within computational photography. More importantly, the way the chapter is written and the way the topic is covered make Chap. 7 worthy material to browse, read, and even study for anyone who is interested in the area of computational photography. One of the reasons why an academic person (like me) would enjoy this chapter is that it covers a crucial set of fundamental topics in a comprehensive and well-written way, with the appropriate level of analytical depth. Another reason for the success of the chapter is its avoidance of presenting a new contribution. As highlighted by the author, the chapter "aims to serve as an introductory material for SR imaging, and also provide a comprehensive review of SR methods" (p. 177). The chapter brings together topics such as aliasing, blurring, and noise models in a cohesive way, and integrates them quite clearly into the SR imaging inverse problem. A multitude of core and popular approaches for SR are covered. These range from frequency-domain methods to joint interpolation and deconvolution, least-squares estimation, and Bayesian estimation. More recent and advanced approaches are also covered, such as projection onto convex sets and training-based SR imaging. The chapter goes even further by addressing another set of important topics and how they impact the problem of SR. These topics include image registration, regularization and parameter estimation, blur modeling, and other advanced imaging model issues and problems.
The chapter concludes its excellent exposition by addressing the CFA architecture, and hence it positions itself as self-contained as any other chapter in the book. In short, Chap. 7 is must-read material for a broad range of readers in the area.

Traditional Application with More Rigor

Despite a difference in application, there is a common formulation between the topics of Chap. 7 (super-resolution) and Chap. 8 (image deblurring). Chapter 8 presents a general degradation model that captures image blurring. This model includes both the traditional convolution by a shift-invariant point spread function and an additive noise model. Both of these degradation elements can be found in the SR formulation of Chap. 7; however, Chap. 7 addresses further issues, such as aliasing (due to subsampling) and many other fundamental and advanced topics, as outlined above. Chapter 8 subsequently addresses the problem of deblurring using a classical deconvolution approach based on a regularization framework. Other image-deblurring aspects, which do not rely on deconvolution, are also covered. Overall, Chap. 8 covers one of the most important and traditional topics in digital photography, namely deblurring, yet it is rather brief in its treatment, especially when compared to Chap. 7. Meanwhile, I found that the two chapters complement each other in the sense of addressing two important problems in digital photography while outlining similar analytical formulations, with some welcome rigor, to address these problems.

High-Dynamic-Range Imaging

The next two chapters of the book, Chaps. 9 and 10, cover the increasingly popular tool of high-dynamic-range (HDR) imaging. In particular, Chap. 9 provides a very good introductory overview of HDR imaging. The chapter covers both the capturing (composition) and display (tone mapping) aspects of the HDR problem in a way accessible to a broad set of readers.
Similar to other chapters (e.g., the one on super-resolution and the chapter on deblurring), Chap. 9 addresses the problem of HDR using multiple exposures. One positive aspect of the chapter is that it covers the HDR problem using a rather generic luminance-chrominance space. Consequently, the chapter begins with a very easy-to-read review of popular color spaces, especially those that are relevant to the HDR problem (e.g., opponent, YUV, and HSV color spaces). This type of coverage makes the chapter self-contained and comprehensive as an introductory tutorial to the important topic of HDR imaging. For the capturing (composition) side of HDR, the chapter addresses key algorithms for monochrome and luminance-chrominance-based HDR composition methods. This is followed by a parallel coverage of monochromatic and luminance-chrominance display (tone mapping) methods. The chapter also includes a brief overview of hardware-based HDR solutions. Plenty of illustrative examples are included as well, many of which can be found in the color insert pages of the book. Chapter 10 continues the topic of HDR but adds a new and practical dimension to the problem, specifically the issue of achieving HDR imaging under motion and dynamic scenes. To be self-contained, the chapter begins with a very brief overview of the topic of HDR and the more specific issue of HDR imaging from multiple exposures. This overview intersects with the material from Chap. 9 and may seem redundant; however, it reinforces some of the key concepts. One arguably negative issue, which is probably unavoidable in an edited book, is a minor inconsistency in the notations used in Chaps. 9 and 10. Nevertheless, I found that both chapters provide a very good overview and detailed discussion of the increasingly important topic of HDR, which, in my judgment, represents one of the strong and positive aspects of the book.
New and Emerging Problems and Applications

The remainder of the book covers a rather interesting and sometimes intriguing set of applications and related imaging problems. Starting with Chap. 11, and arguably with the exception of Chap. 13, the book takes the reader on a tour of new problems ranging from the practical, such as surveillance, to the more artistic, such as painting and colorization. Chapter 11 provides an interesting overview and describes a new approach for shadow detection. The chapter outlines a very easy-to-read taxonomy of (a) the broad set of applications that benefit from or need shadow detection, and (b) the different approaches and algorithms used for shadow detection. In particular, the chapter focuses on region-based parametric modeling of the shadow-detection problem for surveillance video applications. The chapter categorizes the problem of shadow detection as nonparametric (illumination independent) and parametric; then it divides parametric approaches into local and global ones; finally, it highlights its focus on a global parametric framework that it introduces. The chapter approaches the description of this framework in a very clear way (at least at a high level). This clarity is accomplished by distinguishing and addressing the two issues of shadow detection: statistical feature modeling and color space analysis. As the chapter author explains, this strategy in exposition was done by design, not by accident. Despite the extensive interaction between these two issues/aspects (statistical feature modeling and color space analysis) of the shadow-detection problem, treating both subjects in separate sections emphasizes both the importance of, and the distinction between, these two issues.
Both benchmark sequences and real-life surveillance video were used to present many illustrative results that attempt to validate the framework introduced in this chapter. Despite the potentially perceived narrow application space (surveillance) and described framework (parametric-based global) outlined in this chapter, I found it a very helpful reference for this increasingly important area of computational photography. Chapter 12 covers an intriguing and emerging digital imaging application: scanning with digital cameras as opposed to traditional flatbed scanners. This application requires what is known as document dewarping or document rectification, which is the focus of this chapter. As highlighted by the chapter, document dewarping/rectification is needed to address the perspective and geometric distortions that result from using regular digital cameras to capture documents. After a very brief overview of different approaches for document rectification, the chapter makes arguments about the disadvantages of these approaches in preparation for motivating the authors' proposed methods that are covered in this chapter. This aspect of the chapter, although consistent with its objective of presenting the authors' own approaches, has a bit of a negative tone, even for an edited book.† The chapter proceeds by focusing on two major digital-camera-driven methods for image rectification: (1) a stereo-based approach for document rectification, and (2) a single-image-based method for figure rectification. Both approaches are covered with sufficient analytical depth and many illustrative examples. As anyone in the area may expect, the stereo-based method is based on explicit three-dimensional (3D) reconstruction. This approach is naturally more versatile than the second (single-image-based) method in terms of its ability to work in conjunction with optical character recognition.
The focus of the single-view-based approach is on rectifying a figure within a scanned document. Although the chapter makes a clear case for the viability and potential of camera-driven document scanning, it could have served a more comprehensive purpose by surveying other techniques and, more importantly, by addressing related key imaging issues. To the authors' credit, they admit, albeit late in the conclusion section, that other important issues related to document rectification, such as uneven illumination and motion blur, were not addressed by the chapter. Despite these shortcomings, the chapter does bring this very interesting and relatively new topic to the reader, with a good list of references and illustrative figures.

Old Filter with a New Name, Improved Design, and More Rigor

Chapter 13 provides an overview of the theory and applications of bilateral filters. Before proceeding with my review of the chapter, it is worth noting that I found the chapter's placement within this last part of the book (Chaps. 11–18) a bit odd. This last part of the book apparently focuses on new and emerging applications, yet Chap. 13 addresses a specific class of filters.

†Naturally, this is a common struggle for anyone writing a book chapter: how much of it should be a fair survey, and how much should be a coverage of the authors' own and preferred methods. Therefore, this comment is not intended as a criticism of this particular chapter, but rather an observation that applies to all chapters that emphasize the authors' contributions over surveying related work.

Probably the rationale for placing Chap.
13 within this part of the book is the following: bilateral filters, which take into consideration both the spatial distance and the intensity distance of neighboring pixels with respect to the pixel under consideration, have found many compelling applications due to their robustness in preserving critical image details, such as edges, while achieving superior smoothing results. The chapter provides a very good overview of the theoretical aspects and applications of bilateral filters, yet I believe that this topic has much deeper roots than the coverage provided by the chapter. For example, from the early and mid-1980s, one active area of research was order-statistic filters and related designs (such as alpha-trimmed filters and even special cases of median filters). I believe that the chapter would have served the reader more by providing some mention of these related topics. Nevertheless, the chapter provides an excellent outline of the utility of bilateral filters in a broad range of important areas such as denoising, tone mapping, artifact reduction (e.g., due to compression), and other related applications. Fast algorithms for bilateral filters are also covered.

Back to the Intriguing Application

Chapter 14 covers arguably one of the most intriguing applications of modern digital photography, called painterly rendering. In its most simplistic form, this area addresses the basic notion of converting natural images taken by digital cameras into pieces of visual art. The chapter captivates the reader from the first paragraph by explaining and motivating this new area through examples of fascinating interactions between art, psychology, and digital imaging.
The chapter briefly outlines a relatively long list of references related to this emerging area, and then it explains its focus, namely unsupervised painterly rendering, which automatically converts any natural image into what is known as a "painterly image." The chapter explores two sets of algorithms: (1) ones that simulate putting paint on paper or canvas, and (2) algorithms for abstracting from the classical tools used by artists over the centuries up to our modern time. The chapter focuses on describing algorithms and related models for the simulation of brush strokes. Furthermore, the chapter addresses the visual properties that distinguish an artistic work, such as a classical painting, from a natural image taken by a camera. This latter topic also covers adding synthetic texture based on the theory of glass patterns. The chapter includes a good number of beautiful images, which appear in the color insert. In summary, for those who enjoy painting and visual art, this is probably the most enjoyable chapter in the book! Chapter 15 arguably continues the artistic theme of Chap. 14, yet it covers a completely different imaging application. In particular, Chap. 15 addresses the ill-posed problem of automatic colorization of grayscale images. Automatic is a key word in this context. The introductory part of the chapter makes a convincing argument about the limitation of manual colorization methods when applied to grayscale visual content, and hence it provides good motivation for approaching the problem using learning techniques developed within the area of machine learning (ML). More importantly, the chapter outlines well-established ML methods for fully automatic image colorization, which can be applied to this problem without any user intervention.
The chapter also argues for adopting global methods (as opposed to local ones) to achieve this rather challenging objective of fully automating the colorization process. The chapter formulates image colorization as a prediction problem, and it outlines nonparametric methods, namely the Parzen window and the well-known support vector machine (SVM) methods, for tackling this problem. The chapter also covers graph-cut algorithms, which the author argues provide the best results, due in part to their ability to combine local predictions with spatial coherency functions. Overall, I found the chapter interesting (considering both the application and the ML tools), informative (in terms of its coverage of well-established ML frameworks and concepts), and relatively accessible, especially for those who have some background in ML or signal processing. Otherwise, some parts of the chapter might be a bit challenging to follow for readers who are not very familiar with basic ML concepts. Nevertheless, I believe the chapter provides a very decent coverage of these ML concepts, such as SVMs, or at least provides sufficient pointers for those who are interested in learning more. In summary, even those who are not familiar with ML would leave the chapter familiar with some of its most common jargon, and with a sense of the viability and applicability of ML for automatic image colorization. Chapter 16 has commonalities with both Chaps. 14 and 15: it arguably maintains the book's artistic theme, which begins in Chap. 14, and it covers some ML approaches, which represent a major theme of Chap. 15. The focus of Chap. 16 is face beautification, an application that is both intriguing and controversial!
The tools covered in the chapter for this application are drawn from the rich area of machine learning. Although most humans will not argue against the viability of face beautification as an application, the chapter begins with strong motivating statements, such as the human face being the "most frequently photographed object on earth" (p. 421). The chapter also provides a brief window into some of the philosophical, psychological, and artistic arguments for face beautification. One of the salient points highlighted in this chapter is the observation that many studies have shown a consistent perception and agreement across many cultures regarding the human perception of facial attractiveness. As the author states, ". . . people share a common taste for facial attrac- tiveness" (p. 422). I believe that this statement, which is cleverly repeated and strategically placed on multiple occasions within the first few pages of this chapter, provides a convincing reason for the interested reader to continue reading, or at least glancing at, this chapter. After the well-written introductory part, the chapter covers supervised learning methods for building a facial beauty model. This is followed by descriptions of both an optimization approach and a heuristic method for face beautification. One of the excellent aspects of the chapter is its coverage of the details of the whole process of this application. This coverage is initially presented well by showing a comprehensive overall system diagram of the different functions and processes needed to develop a face beautification application. Then each of the individual components of the system is covered, usually within a designated section of the chapter. The chapter could have easily stopped its coverage of the topic after the first four sections; however, the author continues by presenting details regarding methods for locating face features and for identifying canonical points on frontal images of human faces.
It also includes coverage of how to perform the required image warping, which ensures unmistakable resemblance to the original image while achieving the sought-after beautification. The chapter even includes a brief appendix that elaborates further on one of the key ML approaches presented in the chapter, called support vector regression.

Light Field Rendering

The last two chapters, 17 and 18, cover the area of light field imaging. Although both chapters provide some form of an introduction to the area of light field rendering, Chap. 18 outlines a more comprehensive overview (or survey) of the area's major contributions. Hence, the reader might have been served better if the order of these two chapters had been reversed. The focus of Chap. 17 is on addressing two major issues with traditional light field rendering methods: (a) the trade-off between spatial resolution and angular resolution, and (b) what the authors call "common photometric distortion and aliasing" (p. 446). As a consequence, the chapter is outlined and written like a journal paper in which the authors "propose" a new framework that includes a hardware solution, a programmable aperture, and post-processing algorithms for removing distortions associated with light field rendering and for generating view-dependent depth maps for view interpolation. Overall, Chap. 17 is well written and should be accessible to a relatively broad audience, despite its shortcoming in terms of providing an overview of the area and related work. Nevertheless, this shortcoming of Chap. 17 is compensated for in Chap. 18, where a much more comprehensive survey of related work in the area is provided. Meanwhile, as part of its outline of the different contributions in the area, Chap. 18 provides an in-depth description of the authors' contributions in dynamic view synthesis using arrays of cameras. One of the positive aspects of Chap.
18 is that it provides a very good perspective of the area of light field modeling and rendering, and of how it fits within the larger context of image-based modeling and rendering. Furthermore, in its overview of the different contributions, which tends to be mostly chronological, the chapter moves from the most general five-dimensional plenoptic function model toward four-dimensional, 3D, and even two-dimensional light field imaging models. This flow of presentation, I believe, can be very helpful for readers who are looking for guidance on how these different areas relate to each other and how they fit within the larger contexts of light field modeling and image-based rendering.

The Last Word

Overall, the book presents very well-balanced material for a broad audience in digital and computational photography. The book provides arguably excellent coverage ranging from analytical tools (from traditional image processing to computer vision and machine learning) and traditional applications (such as demosaicking, denoising, deblurring, and super-resolution) to emerging and intriguing applications (such as painterly rendering and automatic colorization). It is always unknown to the reader of an anthology how the editor chooses the particular experts who end up writing the chapters of such a book, especially when there are many competing research groups covering a particular theme of a chapter. For this book, I believe that the editor has made first-rate choices, and as a consequence, I believe that the chosen authors and their contributions have served their editor and their readers quite well.

Hayder Radha received PhM and PhD degrees from Columbia University in 1991 and 1993, an MS degree from Purdue University in 1986, and a BS degree (with honors) from Michigan State University (MSU) in 1984, all in electrical engineering.
Currently, he is a professor of electrical and computer engineering at MSU and the director of the Wireless and Video Communications Laboratory. He was with Philips Research from 1996 to 2000, where he worked as a consulting scientist and fellow in the Video Communications Research Department. He was a member of the technical staff (MTS) and then a distinguished MTS at Bell Laboratories, where he worked between 1986 and 1996 in the areas of digital communications, image processing, and broadband multimedia. He is a Fellow of the IEEE, and his current research areas include multimedia communications and coding, image processing, compressed sensing, social networks, sensor networks, and network coding. He has authored more than 150 peer-reviewed papers and 30 U.S. patents in these areas.

Jurusan Desain Komunikasi Visual, Fakultas Seni dan Desain – Universitas Kristen Petra
http://www.petra.ac.id/~puslit/journals/dir.php?DepartmentID=DKV

TEACHING VISUAL LITERACY THROUGH DIGITAL PHOTOGRAPHY AND IMAGING EXERCISES

Aristarchus Pranayama
Visual Communication Design Department, Art and Design Faculty, Petra Christian University Surabaya
E-mail: arispk@petra.ac.id

ABSTRACT

Students of visual communication design need the ability to analyze, compose, and interpret images that speak in a visual language. Teaching visual literacy is one way to enable students to be aware and critical of the images that surround their lives every day. Visual literacy gives students the ability to actively unravel and deconstruct the codes given by an image, rather than become a passive receiver of it.
As a method for teaching, exercises or challenges in creating or composing images can be a good complement in developing this ability, rather than mere analyzing. Digital photography is an excellent medium for this, since it is quick at capturing and producing the many images that we want, and very lenient on mistakes or technical errors, so students are less afraid to take risks and are more productive. Through strategic exercises in digital photography and imaging, students can learn visual literacy in a very dynamic way: not only reading images, but also creating them and reinterpreting them through class presentations and discussions.

Keywords: teaching method, visual literacy, digital photography, digital imaging, visual communication design, analyzing, interpreting

ABSTRAK

Mahasiswa Desain Komunikasi Visual memerlukan kemampuan untuk menganalisa, membuat, dan menginterpretasi gambar-gambar yang berbicara dalam bahasa visual. Mengajarkan Visual Literacy adalah salah satu cara agar mahasiswa mampu menyadari dan kritis terhadap gambar-gambar yang ada di sekeliling kehidupan mereka sehari-hari. Visual Literacy memberi mahasiswa kemampuan untuk secara aktif mengungkap dan mendekonstruksi kode-kode dari sebuah gambar, dari pada menjadi seorang penerima yang pasif.
Lewat latihan-latihan yang dibuat secara strategis lewat fotografi digital dan digital imaging, mahasiswa dapat mempelajari visual literacy secara dinamis; tidak hanya belajar membaca gambar-gambar, tetapi juga menciptakan dan me-reinterpretasikan lewat presentasi dan diskusi dalam kelas.

Kata kunci: metode mengajar, visual literacy, fotografi digital, digital imaging, desain komunikasi visual, analisis, interpretasi

INTRODUCTION

The illiterate of the future will be the person ignorant of the use of the camera as well as of the pen. -- Laszlo Moholy-Nagy, 1936, quoted in Goin, 2001, 91

Visual literacy has been described as an understanding and critical analysis of all visual imagery presented to the individual in a culture (Golubieski, 2006). Actually, there are numerous definitions that differ slightly from one publication to another. There are also differing names as to who thought up the term first. Wikipedia (2006) notes that: Visual literacy is the ability to interpret, negotiate, and make meaning from information presented in the form of an image. Visual literacy is based on the idea that pictures can be "read" and that meaning can be communicated through a process of reading. The term "visual literacy" (VL) is credited to Zach Flo, who in 1969 offered a tentative definition of the concept: "Visual literacy refers to a group of vision-competencies a human being can develop by seeing and at the same time having and integrating other sensory experiences" (Avgerinou, 1997).
Surprisingly, the same contributor, Avgerinou, on a different site gives credit for coining the term to a different name, John Debes (www.ivla.org): The term "Visual Literacy" was first coined in 1969 by John Debes, one of the most important figures in the history of IVLA. Debes offered (1969b, 27) the following definition of the term: "Visual Literacy refers to a group of vision-competencies a human being can develop by seeing and at the same time having and integrating other sensory experiences. The development of these competencies is fundamental to normal human learning. When developed, they enable a visually literate person to discriminate and interpret the visible actions, objects, symbols, natural or man-made, that he encounters in his environment. Through the creative use of these competencies, he is able to communicate with others. Through the appreciative use of these competencies, he is able to comprehend and enjoy the masterworks of visual communication." However these explanations differ, they all involve "seeing" and developing something from that seeing: an understanding, a found meaning, or a competence in "reading" what we see and making something of it. Visual literacy is something we learn, and it can be categorized as a skill, just like our text-reading or verbal skills. In a world like today's, where we are constantly surrounded by images through television, magazines, and other media, and where we are at the same time visual communicators, we need to learn to decode and encode these images as a visual language. It is important that we be alert readers of visuals rather than passive receivers, so that we may also use this "reading" skill to communicate through visuals strategically, responsibly, and innovatively.
DIGITAL-VISUAL CULTURE

Students of the Visual Communication Design department at Petra Christian University are engaged with many visually excessive media. Some are involved professionally, others for fun. They may be working in photography, video, animation, graphics, film, illustration, or interactive media, among others. Today, images are the students' language, because we now live in a visual culture. However, being alert and critical about what they are doing or seeing is something different and has to be taught through a learning process. It is the same with language classes: Bahasa Indonesia is still taught to Indonesians who already speak the language. Grammar and structure have to be addressed, and habitual language mistakes have to be straightened out. The same goes for visual language. Furthermore, it is the digital era, where everything is instant and transferable. You may take a picture with your mobile phone and then print it out onto a banner, or upload it and share it with the rest of the world through the Internet. This fast and instantaneous character is what the students are now living through and accepting as the norm, and it is becoming their character as well. And it is what teachers must face so that we can still "speak" the students' language, earn their acknowledgement, and guide them through to a better understanding of the digital-visual world. The challenge is to make them able to interpret what they see and know exactly what they give out for others to see. And digital photography is the perfect medium for that.

THE EVOLVING MEDIUM

The concept of photography in this digital age has shifted a little. It is now more of an image-capture medium, since, just like with a scanner, everything inputted is almost always run through the computer first before being outputted to various media. We can look back to its history, when only a handful of people could shoot photographs, and photography had only a few purposes, functions, and genres.
Today, however, almost everybody can take pictures or shoot a camera, and photos have many varied purposes and have also developed into many genres. Therefore, as the medium has become simplified, or human friendly, the results have become more complex and as diversified and eclectic as humans. Digital photography is the perfect medium to teach visual literacy because it mirrors the characteristics of students today. It is a high-quality, quick-capture medium that is intuitive and very lenient on mistakes. It gives students the opportunity to be very productive and not afraid to take risks. The computer and digital imaging software are also inseparable from digital photography, because all images captured through the camera are then downloaded into the computer for further processing. Thus, the computer and the digital imaging software also play big roles in digital photography. With these mediums in mind, it is our concern to explore their capabilities, their advantages and positive aspects, and make use of them to their maximum capacities for teaching visual literacy.

NIRMANA, VOL.8, NO. 2, Juli 2006: 58-64

DESIGNING THE VISUAL EXERCISES

Making visual exercises through photography is a constant challenge. There are a few basic exercises, however, that need to be given to students. They involve basic design elements, such as line, shape, form, texture, pattern, and color. Before creating an exercise, though, it is important to first determine the objective. In the Experimental Photography class I teach, the first assignment given to each student is to think of a color he/she likes and, over a week, take 100 shots of anything that has that color. The specific color in the pictures must be dominant or the focus (not necessarily meaning "in focus" or not blurred) in each photograph.
This photo exercise challenges the students not only to start taking lots of pictures, but also to be alert and aware with their eyes as to what they see around them. The requirement of 100 pictures sets a standard of productivity, not yet a standard of quality. It sets a standard of how often each student should be taking pictures, while taking advantage of the digital camera's capability of capturing lots of images. From there, students are to pick only the best 10 pictures of the 100 to present in front of the class. The discussion topics would include not only color and its language, expression, or mood, but also composition and other design elements. This exercise could be applied to other design elements as well, for example, taking pictures of patterns and textures, lines, forms, and shapes.

Figure 1. "Crawling Spider" by student Kevin Winaldy

Another important subject for visual literacy is point of view, where images can tell the same story differently, just by shedding a different light, changing where to focus, changing the viewing angle, and cropping, that is, choosing what to leave in or take out of the picture. Every decision that we make in fitting our subject matter into a frame says something about the subject matter and about ourselves as the photographer. Covering a story about someone of interest to the students would be a good exercise in such a topic. Discussions could include the documentary and photojournalism genres. A third example would be to challenge the students to create pictures inspired by sounds or music. The challenge is to pick out a song. It could be instrumental or with lyrics. Then try to photograph something inspired by that music. It might be better if the teacher picks the song and lets the students listen and create photographs from that one song. There are two sub-challenges for this. One is to aim for abstractions or, in other words, make abstract photographs.
The second is to make non-abstract photographs, with clear representation of the subject matter. The goal of this exercise is to translate the language of music into visual language, which can often speak about similar things: rhythm/pattern, melody/line, moods/textures or colors, and so on. Interpretation comes into play as students listen to the music or lyrics and transfer them into an abstract picture and a representational one. Abstraction forces students to use the basic forms of design to communicate, and to experiment with the camera in making abstract images.

Figure 2. An example of an abstract photo that uses an extreme close-up technique to create an abstraction. It is a picture of the Virgin Mary, entitled "Sacrifice", by student Yohan Ariel

Pranayama, Teaching Visual Literacy Through Digital Photography

The fourth exercise, and usually the hardest, is the juxtaposition of unrelated things to create new meaning. One of the trade secrets, or strategies, of advertising and contemporary art is the unexpected juxtaposition of unrelated things to attract attention, shock, puzzle, or question people's perception of those things. Consider Oliviero Toscani's photographs for the United Colors of Benetton campaign, one example among many. Once we realize this, we become aware that we are part of the photograph: as viewers, we interpret what we see according to our background knowledge and give new meanings to the things posed together in the same scene or photo. Meaning, therefore, does not necessarily come from the photo, but from the viewer. This photo exercise can be explored in two ways. One is the juxtaposition of related things, considered pairs or opposites.
The other is the juxtaposition of unrelated things that gives new meaning(s) to the things pictured together. Although it is more difficult, students should aim for the second type. A good example is a Popeye action figure next to an olive: although the two objects are unrelated in kind, when juxtaposed in the same scene the olive gains new meaning, because we know the name of Popeye's girlfriend, Olive.

Figure 3. Another example of an abstract photo, using a blurring technique to create an abstraction, by student Tri Gusti Irmawanto

Figure 4. Oliviero Toscani's photo for an ad campaign (http://www.olivierotoscanistudio.com/)

A fifth exercise is to use the camera to create a sequence that creates humor. It may be a short sequence using the continuous-shooting ability of the digital SLR, or one spread over an expanded period of time. The purpose of this exercise is to let the students experience the making of a story through a sequence, and to make them understand that sometimes a story cannot be interpreted from one picture but only from a series of pictures, as in documentary photography. This helps them understand the significance of theme and narrative in photographs, and bringing humor into play introduces a fun, creative atmosphere into the process. Students may put the humor in each picture or at the end of the sequence as a punch line.

Figure 5. Duane Michals, I Build a Pyramid, 1978 (Ingledew, 2005)

The sixth exercise is a technical process in which we introduce students to digital imaging. High Dynamic Range (HDR) photography has been a popular technique since the emergence of digital cameras. Three or more different exposures of the same scene are taken and merged, or composited, so that each photo contributes its best-detailed areas, giving the maximum range of detail and "information" from the darkest to the lightest parts (the dynamic range) in "one" photo.
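The merging step just described can be illustrated numerically. A minimal sketch of exposure fusion (my own simplification, not the software used in class: the mid-tone weighting scheme and the function name are assumptions standing in for full HDR merging and tone mapping):

```python
import numpy as np

def merge_exposures(exposures):
    """Naive exposure fusion: weight each pixel by how far it sits
    from the clipped extremes (0 and 1), then blend the stack.

    exposures : list of (H, W) float arrays in [0, 1]
                (e.g. dark, normal and bright shots of the same scene)
    """
    stack = np.stack(exposures)                  # (N, H, W)
    weight = 1.0 - np.abs(stack - 0.5) * 2.0     # weight peaks at mid-tones
    weight += 1e-6                               # avoid 0/0 where all frames clip
    return (weight * stack).sum(0) / weight.sum(0)

# The dark shot keeps highlight detail, the bright shot keeps shadow detail:
dark   = np.array([[0.0, 0.4]])
normal = np.array([[0.1, 0.9]])
bright = np.array([[0.5, 1.0]])
fused = merge_exposures([dark, normal, bright])
```

In each pixel the best-exposed frames dominate the blend, which is the intuition behind "each photo contributes its best-detailed areas".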
This is closer to the dynamic range of the human eye; a digital camera can capture only about a third of our dynamic range in a single shot (Dickman, 2005:213). The resulting HDR photo generally looks very sharp, with long depth of field, clear detail from the darkest to the lightest areas, and a wider range of colors, often resembling contemporary hyperrealistic paintings. It seems "more real" than reality. Even though it only approaches our eyes' dynamic range, an HDR photo somehow shows an "enhanced" or clearer reality. This is the purpose of the exercise: to show students such capabilities of digital photography and imaging, and to let them explore and discuss this hyperrealism. It is an experiment that was impossible with traditional photographic methods. The techniques for compositing HDR pictures, at the photo shoot with the DSLR camera and later in the computer, have to be explained in detail before and after the exercise.

Figure 6. An HDR photograph by student Heru Wibowo

Figure 7. Example of an HDR photo by Pete Carr (http://www.petecarr.net, via http://www.vanilladays.com/hdr-guide/)

The seventh exercise, which deals more with digital imaging, explores the world of surrealism in the digital age. Today, with the advances of computer imaging technology, almost any kind of creative imagery is possible: combining different images into one (montage), rearranging them, retouching them, remaking them, and so on.
To direct students in using those techniques, a good exercise is to have them create an image of their dream or dreams by combining many different photos they take. They should first write about their thoughts, feelings, and dreams, perhaps even some poetry, and then think of symbols or metaphors that would visualize these in a photo illustration. Students should also study Surrealism and the works of Surrealist artists and photographers, from Salvador Dali to contemporary photographers, for references on the diverse look and feel of Surrealist work. The same digital imaging exercise can be assigned with another theme, a "world record of something." This is adapted from Wilde's Visual Literacy book (Wilde, 1991:166), where it appears as an illustration problem but can be adopted here as a photographic digital imaging exercise. It is a very good alternative or addition to the previous one. In the book, Wilde states the problem: "Illustrate (in our case, with photos) a world record of any fact, statistic, deed, or achievement, in any field or subject area. Consider the natural world, outer space, bizarre and outrageous stunts, challenging exploits, great sports accomplishments, or occurrences that are ridiculous, provocative, or shocking." These digital imaging exercises also demand titles, which further challenge students to direct people's interpretations of their photo illustrations. At the same time, students learn essential digital image manipulation techniques.

CONCLUSION

These exercises are just examples that have been implemented in the Experimental Photography class and have shown great results; other exercises can be developed and designed further. The basic considerations and objectives of each exercise must be apparent.
Example pictures may or may not be shown during the briefing of a photo exercise, depending on whether they are needed for the students to understand the expected results. The exercises should be somewhat limiting, yet open to the students' creativity and interpretation. Discussion and reading of the results are critical to learning and understanding visual literacy: analyzing an image often reveals unintentional messages, and this is usually where the best learning takes place. Technical and aesthetic aspects of effective visual communication can also be discussed.

Figure 9. Scheme of the visual literacy exercise in class: photo exercise design, image-making process, presentation of results, and discussion of interpretations, all within the frame of visual literacy.

Figure 8. HDR image exposure compared to other media (Dickman, 2005:213)

Continued guidance for the students can be achieved through online discussions. From the beginning of the class, students are required to keep an online portfolio or photoblog, where they upload their best pictures and share them with class members and the rest of the world. They are required to be active members of, and contributors to, online groups. It is important for students to be aware of the myriad styles of photos and references available. In this way, the learning process takes place not only in class but also outside it, and even with people around the world. The future is not only the digitality of everything, but how everything is interconnected in the global network: contributing, taking and giving, affecting, inspiring, and continuously innovating.

REFERENCES

Avgerinou, M. & Ericson, J. (1997).
"A review of the concept of visual literacy." British Journal of Educational Technology, 28(4), 280-291.

Dickman, Jay, and Jay Kinghorn. (2005). Perfect Digital Photography. McGraw-Hill Osborne Media, 213.

Goin, Peter. (2001). "Visual literacy." Geographical Review, 91(1-2), Jan-Apr 2001; Academic Research Library, 363.

Golubieski, Mary R. (2006). "Visual literacy in the artroom." SchoolArts (web), April.

Ingledew, John. (2005). Photography. Laurence King Publishing.

Wilde, Judith, and Richard Wilde. (1991). Visual Literacy. New York: Watson-Guptill Publications, a division of VNU Business Media, Inc.

http://www.wikipedia.org. "Visual Literacy."

http://www.ivla.org. "What is 'Visual Literacy'?", contributed by Maria Avgerinou.

ALBATROSSES, PETRELS AND SHEARWATERS OF THE WORLD. Derek Olney and Paul Scofield. 2007. London: Christopher Helm. 240 p, illustrated, soft cover. ISBN 978-0-7136-4332-9. £19.99. doi:10.1017/S0032247407006997

When Peter Harrison's book Seabirds: an identification guide was published by Croom Helm in 1983, it caused a minor sensation in the birdwatching world.
For the first time, all the seabirds of the world were to be found illustrated and described in a single, relatively portable volume, with accurate distribution maps. Keen amateur ornithologists were made aware of the treasures to be found around the oceans of the planet, and many have since travelled to exotic and distant destinations in search of the many species of seabird that this book depicted. Albatrosses, petrels and shearwaters of the world is the latest in a long line of first-class field guides from the excellent Helm publishing house, and seeks to improve our knowledge of this group of birds, the majority of which occur in the tropics or Southern Hemisphere. Have the authors achieved this aim? The current book is a work dedicated to three families of seabirds, namely albatrosses, petrels, and shearwaters. The book begins with a list of the species and subspecies covered. There are 137 species recognised by the authors, and all are described and illustrated, with a distribution map for each. The authors then outline the taxonomic debate, and describe both the phylogenetic and biological species concepts well, pointing out the limitations of each, in terms that are sure to be understood by the general reader. Since 1983, there have been significant advances in the study of seabirds. New species have been described, and birds previously considered subspecies have been promoted to specific status. The authors discuss these changes in taxonomy in their introduction. More importantly, advances in DNA analysis have led to changes in the taxonomic relationships between species. Although this is true for many bird families, nowhere has it been more so than for seabirds. For example, when I was a teenage birdwatcher in the 1970s, the field guides of the day described only the wandering albatross (Diomedea exulans). Now, we are informed that no fewer than six species of great albatross are recognised by most authorities.
One reason for the taxonomic changes may be that many species breed on remote islands that had been difficult for researchers to access. Indeed, the Procellariiformes, which comprise the birds in this guide, are among the most enigmatic of birds and have drawn the increasing attention of birdwatchers and research workers in recent years. Much of this interest is due to the fact that for most of their lives these birds are only to be found across the vast oceans of the planet, coming to land only in order to breed. As the authors state, most members of the general public are not aware of the existence of many of these species. A historical background to the group is given, and we learn that the first petrels can be found in fossils between 40 and 45 million years old. The next section describes the four groups that make up the species covered by this book. This is a well-written text and gives a good basic introduction to the species of the group. I was interested to read that the wedge-rumped storm petrel (Oceanodroma tethys) is the only storm petrel to visit its breeding burrow in the daytime; for all the others, night-time visits to the nest afford better protection from predation by larger seabirds. Of similar interest, we find that the sooty shearwater (Puffinus griseus) and short-tailed shearwater (P. tenuirostris) are both able to reach up to 70 m below the surface of the sea in search of prey. In a well-written section on identification, there are some useful guidelines for identifying a mystery seabird from this group. Pitfalls of identifying seabirds in different weather and lighting conditions are well covered, with due warning to the observer, and we are also presented with a discussion of the variability in the plumages that many of this group display. The relevance of moult in seabirds is also discussed.
I like the attitude that comes across in this book: that one cannot hope to identify all the seabirds one sees, and that if one doesn't know what species a bird was, it is safer for the scientific record not to guess. A section on the conservation of this group of seabirds makes for bleak reading, with 10% of the species 'threatened with extinction in the near future.' The main reason put forward for this by the authors is predation of one form or another. There is interesting comment on the evidence for the use of seabirds as a food resource by early Polynesian explorers as far back as 3000 years ago. The authors describe in some detail the damage inflicted on the seabird populations of the world by man since those early days. Non-human influence has also led to serious declines in seabirds that breed on offshore islands; the 'usual suspects' include introduced rats, pigs, and dogs. Seabirds are also increasingly at risk when at sea, and the authors identify the well-known threats from drift nets and long-lining as being of major importance in the decline of many seabird species. More worryingly, the authors speculate on the possible effects on seabirds of global warming. They highlight the submergence of breeding colonies under raised sea levels, and also the reduction in phytoplankton populations as a consequence of warmer sea temperatures. This section of the book ends on an optimistic note, however, with a brief discussion of how some of the above issues are being addressed, together with pointers as to how the individual birdwatcher may help. The body of the book is contained in the 45 plates that depict all of the species, together with the accompanying text for each species. The plates are arranged in systematic order, on the right-hand page, with brief descriptions of the species on the page opposite. This is a well-tried and successful format, especially for a book designed to be used in the field.
The majority of the birds are shown in flight, with a few images of birds on water. Confusion species are often shown together on the same plate, which helps in field identification. On the whole, the plates are reasonable. Two points of criticism apply, however. Firstly, the style of the artist is too 'loose' for my liking; I prefer illustrations in identification guides to be more detailed. I realize this is a personal view and others may have a different opinion. The second criticism is much more important. The majority of the plates illustrate birds against a white background. For species with significant amounts of white in the plumage, this makes seeing the bird very difficult: for example, snowy albatross (Diomedea exulans), Figures 1a, 1d (Plate 1); southern royal albatross (D. epomophora), Figures 2a, 2c (Plate 1); and the fulmars (Fulmarus sp.) (Plate 12). The book would, in my opinion, be enhanced enormously if the artist had painted the birds against a non-white background. I also feel that the illustration of the waved albatross (Phoebastria irrorata), Figure 1a (Plate 5), looks a little light and is not an accurate representation of the species. The detailed text for each species follows on from the plates, and sections cover taxonomy, distribution, behaviour, 'jizz,' size, plumage variations, moult and wear, and identification points. Happily, there is an accurate, up-to-date, and well-drawn distribution map for each species, a very helpful tool for assessing whether the bird you've just seen could actually be the one you think it is! In the case of the Pacific storm petrels, we are informed that neither author has much field experience in this part of the world (an honest approach) and that the texts for these species were difficult to prepare. They suggest that more care and attention is needed in identifying white-rumped storm petrels photographed in places such as the Galapagos.
I, for one, will now be re-examining my own photographs to confirm identification, based on the information in this book. Overall, this book is a very good attempt to represent the known extant seabirds in these groups, and is up to date. For example, the recent rediscovery of the New Zealand storm petrel (Pealeornis maoriana), which had been presumed extinct, is covered. Interestingly, this species was identified from digital photographs taken at sea on a pelagic trip; the authors quite rightly point out the importance of obtaining photographs of seabirds as a means of confirming identification. For many of the seabirds described in this guide, the birdwatcher is often given only the briefest of glimpses, as the bird zooms past a boat, in a sea that is anything other than flat calm. The advent of digital photography has helped enormously, something the authors emphasise. However, they rightly make the point that a photograph can only be one part of the observation of a seabird, and that photography does not replace field craft (the observation by eye of the bird and the recording of contemporaneous field notes), a skill that has, it seems, declined over the years. The authors are at pains to point out that as time goes on and more research is undertaken, the position in terms of number of species and their relationships may well change. However, for a representation of the current position, this is an admirable effort. It is a compact volume, and although not really pocket-sized, could easily accompany the birdwatcher on pelagic trips or cruises. Harrison's book generated a hunger amongst birdwatchers to get to know more about seabirds, and the book reviewed here represents a major step forward in knowledge about the pelagic seabirds of the world. With the reservations outlined above in mind, I have no hesitation in recommending this volume to all birdwatchers and seabird workers. (Kevin Elsby, Chapel House, Bridge Road, Colby, Norwich NR11 7EA.)
Reference

Harrison, P. 1983. Seabirds: an identification guide. London: Croom Helm.

THE COLDEST CRUCIBLE: ARCTIC EXPLORATION AND AMERICAN CULTURE. Michael F. Robinson. 2006. Chicago and London: University of Chicago Press. xii + 206 p, illustrated, hardcover. ISBN: 978-0-226-72184-2. US$39.00. doi:10.1017/S0032247407007000

Guided by previous scholarship that engaged the rich social and institutional contexts of exploration, Michael Robinson turns the frame of focus away from the north toward its domestic audience, in a timely and resonant attempt to consider the cultural importance of the Arctic itself. From the outset, Robinson pays particular attention to what may be called the 'culture' of exploration, to the competing demands placed upon explorers by a range of public audiences, the struggles to build support for expeditions before departure and to defend claims upon their return, and the ongoing efforts of explorers to cast themselves as individuals worthy of the nation's full attention. The Arctic became a stage for the performance of strident patriotism as well as becoming a platform for
Techniques for measuring high-resolution firn density profiles: case study from Kongsvegen, Svalbard

Journal of Glaciology, Vol. 54, No. 186, 2008, 463. Instruments and Methods

Robert L. HAWLEY,1 Ola BRANDT,2 Elizabeth M. MORRIS,1 Jack KOHLER,2 Andrew P. SHEPHERD,3 Duncan J. WINGHAM4
1Scott Polar Research Institute, University of Cambridge, Lensfield Road, Cambridge CB2 1ER, UK. E-mail: rlh45@cam.ac.uk
2Norwegian Polar Institute, Polar Environmental Centre, NO-9296 Tromsø, Norway
3Department of Geography, University of Edinburgh, Drummond Street, Edinburgh EH8 9XP, UK
4Centre for Polar Observation and Modelling, University College London, Gower Street, London WC1E 6BT, UK

ABSTRACT. On an 11 m firn/ice core from Kongsvegen, Svalbard, we have used dielectric profiling (DEP) to measure electrical properties, and digital photography to measure a core optical stratigraphy (COS) profile. We also used a neutron-scattering probe (NP) to measure a density profile in the borehole from which the core was extracted. The NP- and DEP-derived density profiles were similar, showing large-scale (>30 cm) variation in the gravimetric densities of each core section. Fine-scale features (<10 cm) are well characterized by the COS record and are seen at a slightly lower resolution in both the DEP and NP records, which show increasing smoothing. A combination of the density accuracy of NP and the spatial resolution of COS provides a useful method of evaluating the shallow-density profile of a glacier, improving paleoclimate interpretation, mass-balance measurement and interpretation of radar returns.

1. INTRODUCTION

The snow densification process is of fundamental importance in many aspects of glaciology, including regional climatology, watershed runoff forecasting and interpretation of ice-surface elevation changes.
In particular, high-resolution density profiles are critical for accurate modelling and interpretation of ground-penetrating radar (GPR) returns. In climatology studies, the thickness and frequency of refrozen melt layers are used to infer summer climate conditions (Okuyama and others, 2003), highlighting the importance of detecting thin ice layers in a density profile. In addition, the slowly varying depth–density relationship is indicative of long-term climate (e.g. Herron and Langway, 1980), and highly accurate density data are needed to exploit this for paleoclimate interpretation. In the traditional gravimetric method, the mass and volume of a sample are measured, with resolution and accuracy being dependent upon the sample size. Such samples are generally obtained in three ways: direct sampling from a snow-pit wall; bulk sampling of core sections; and subsamples cut from core sections. The practical depth limit for snow-pit sampling is a few metres in most locations. For core sampling, bulk densities of whole sections of core will not necessarily reveal fine density stratigraphy or ice layers (as thin as ∼1 mm) that would have an effect on GPR signals (Kohler and others, 1997; Arcone and others, 2004). Making smaller subsamples from the core gives finer resolution, but is time-consuming and thus rarely carried out (Clark and others, 2007) over the full length of a core. Such sampling does not coexist easily with chemistry studies on a core, so non-destructive methods are clearly beneficial. Alternative methods for measuring the fine-scale density and stratigraphy in the firn are therefore desirable. Wilhelms (2005) expanded on the work of Wilhelms and others (1998) and Wolff (2000) using dielectric profiling (DEP) to infer firn density from conductivity and permittivity measurements. Morris and Cooper (2003) described the adaptation of a soil moisture instrument, the neutron-scattering probe (NP), to measure snow and firn density in a borehole.
The utility of this method for interpreting firn stratigraphy has been shown (Hawley and Morris, 2006; Hawley and others, 2006), and Morris (in press) presented a physically based scattering model for calibration. Hawley and Morris (2006) demonstrated a link between firn density and optical properties using borehole optical stratigraphy (BOS), and many ice-core processing lines currently use digital photography or scanning to image core sections (McGwire and others, 2007), yielding data from which to produce a similar core optical stratigraphy (COS) record. Gamma-attenuation profiling, in which density can be inferred from the absorption and scattering of γ-rays by ice, has also been used (Eisen and others, 2006) to measure density non-destructively on core sections. We use detailed measurements of density acquired by DEP and NP methods, along with traditional gravimetric measurements and COS, to assess the utility of each technique for determining detailed firn density profiles for a complex stratigraphy with snow, firn and ice layers.

2. METHODS

2.1. Study area

Kongsvegen is a 25 km long polythermal, surge glacier in Svalbard, which has been the subject of extensive mass-balance campaigns (e.g. Hagen and others, 1999). Our study site is located at mass-balance stake 8 on the glacier, at an elevation of ∼700 m (Fig. 1). Melting or rain events can occur year-round in Svalbard, and air temperatures during summer remain above freezing (Førland and Hanssen-Bauer, 2000). Meltwater can therefore percolate deep into the firn, forming ice layers up to several tens of centimetres thick.

Fig. 1. Map of the Ny-Ålesund region, with the glacier Kongsvegen in the lower right. Our study site is located at stake 8.

2.2.
Drilling

At our field site, we collected a firn core to a depth of ∼11 m using a Polar Ice Coring Office (PICO) 10 cm hand auger with a power head. We measured and weighed the 35 core sections in the field to obtain gravimetric densities. Core quality was consistently poor in the unconsolidated winter snow comprising the uppermost 2.8 m, making it impractical to return these sections to the laboratory. We transported the remaining 23 core sections (listed in Table 1) to the laboratory for repeat gravimetric measurements, DEP and COS analysis. For registration with borehole measurements, each core section was located in depth with respect to its neighbours and several control points, at which we had measured the depth of the drill cutting head.

2.3. Neutron-probe logging

Once the drilling was complete, we lowered the neutron probe to the bottom of the borehole and raised it slowly to the surface at approximately 7–10 cm min−1, with the probe offset and resting against the side of the hole, logging the density at 1 cm intervals (Morris and Cooper, 2003; Hawley and Morris, 2006). The calibration of the NP measurement depends on the exact diameter of the borehole, which is larger near the surface due to the repeated passage of the drill. The NP-measured density profile shown in Figure 2 was determined from the measured count rate of neutrons returning to the detector using three-group neutron-scattering theory (Morris, in press). The count rate depends on the characteristics of the probe, the snow/firn/ice density, temperature and the diameter of the borehole. The diameter of most boreholes drilled in firn with a hand-held coring drill is likely to vary; the hole will be larger near the surface from the repeated removal and insertion of the drill. We know that, in this case, the diameter of the borehole was not constant. Specifically, the topmost ∼2 m of the hole were enlarged as a result of repeated raising and lowering of the drill.
This effect is likely to be largest in the upper few metres and smallest at the bottom, which has seen the fewest passes of the drill. In the absence of a caliper log of the hole, we estimated the borehole diameter to be 11 cm (0.5 cm on each side of the drill head) throughout most of its depth, flaring to 14 cm at the surface (estimated visually). As can be seen from the comparison between gravimetric and NP data in Figure 2, there is good agreement between the bulk gravimetric densities and those measured by NP. The log stops before the bottom of the core because the bottom of the hole was filled with ice chips that could not be removed by the drill.

2.4. Dielectric profiling

We measured conductivity and permittivity profiles in the laboratory using DEP at 250 kHz. We made measurements along the core sections at 5 mm increments with 10 mm electrodes. For each core section, we made four measurements, rotating the core by 0, 90, 180 and 270°. DEP measures the conductance and capacitance of the ice core, and we use the capacitance C_p, following Kohler and others (2003), to estimate the relative permittivity ε_r:

ε_r = C_p / C_air,   (1)

where C_air = 64.5 × 10^−15 F m^−1 is a constant obtained using a blank reading with an empty instrumental set-up. See Wilhelms and others (1998) for a more thorough description of

Table 1.
Laboratory measurements of the core sections

Piece  Length (m)  Diameter (m)  Mass (kg)  Density (kg m^−3)
1      0.290       0.0750        0.568      444 ± 10
2      0.255       0.0740        0.527      481 ± 12
3      0.285       0.0760        0.761      589 ± 13
4      0.460       0.0760        0.153      733 ± 13
5      0.400       0.0750        0.961      544 ± 10
6      0.500       0.0750        1.191      539 ± 9
7      0.370       0.0750        0.994      608 ± 12
8      0.540       0.0760        2.074      847 ± 14
9      0.570       0.0760        1.391      538 ± 9
10     0.720       0.0755        1.976      613 ± 9
11     0.465       0.0760        1.505      714 ± 12
12     0.335       0.0760        0.929      612 ± 12
13     0.330       0.0770        1.256      818 ± 16
14     0.205       0.0770        0.816      855 ± 24
15     0.155       0.0770        0.593      822 ± 29
16     0.440       0.0755        1.306      663 ± 12
17     0.090       0.0770        0.241      575 ± 33
18     0.625       0.0760        1.706      602 ± 9
19     0.290       0.0760        0.872      663 ± 14
20     0.095       0.0770        0.264      597 ± 33
21     0.110       0.0760        0.329      660 ± 31
22     0.550       0.0760        1.950      782 ± 13
23     0.130       0.0770        0.402      664 ± 27

the instrument and discussion of the technique. We then use the relationship between density ρ and relative permittivity ε_r given by Kovacs and others (1995),

ε_r = (1 + 8.45 × 10^−4 ρ)^2,   (2)

to calculate the firn/ice density. Since DEP measures over a finite volume, the measurements near the ends of each core section are subject to end effects. We excised these low-density end-effect anomalies by eye. The full set of DEP profiles is shown in Figure 2. Note that the DEP-derived density profile appears to capture a significant amount of high-frequency variability, agrees well with gravimetric densities in high-density areas and slightly underestimates the density of the lower-density layers.

2.5. Core optical stratigraphy

Visible stratigraphy analysis on ice cores has a long history of success (e.g. Alley, 1988; Alley and others, 1997). More recently, Hawley and others (2003) have developed BOS, which takes the visual analysis concept to boreholes and measures a log of brightness vs depth. To create a COS profile, we imaged the core sections illuminated from the side with a digital camera. The details of the camera system are presented by Sjögren and others (2007).
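Both density reductions used so far are short enough to sketch in code. A minimal Python sketch (my own illustration, not the authors' software) that recomputes a Table 1 bulk density from a cylindrical section's dimensions and mass, and inverts Equations (1) and (2) to turn a DEP capacitance reading into density:

```python
import math

def gravimetric_density(length_m, diameter_m, mass_kg):
    """Bulk density of a cylindrical core section, kg m^-3."""
    volume = math.pi * (diameter_m / 2.0) ** 2 * length_m
    return mass_kg / volume

def dep_density(c_p, c_air=64.5e-15):
    """Density from a DEP capacitance reading.

    Chains Eq. (1), eps_r = C_p / C_air, with Eq. (2) from
    Kovacs and others (1995), eps_r = (1 + 8.45e-4 * rho)**2,
    solved here for rho.
    """
    eps_r = c_p / c_air
    return (math.sqrt(eps_r) - 1.0) / 8.45e-4

# Table 1, piece 1: 0.290 m long, 0.0750 m diameter, 0.568 kg
rho_grav = gravimetric_density(0.290, 0.0750, 0.568)  # ~443 kg m^-3 (tabulated: 444 +/- 10)

# Solid ice (rho ~ 917 kg m^-3) corresponds via Eq. (2) to eps_r ~ 3.15,
# i.e. a capacitance reading of about 3.15 * c_air:
rho_ice = dep_density(3.15 * 64.5e-15)
```

As a quick sanity check on the blank calibration, an air reading (C_p = C_air) gives ε_r = 1 and hence zero density.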
We processed the images to obtain a brightness log by subsampling the centre portion of the image, avoiding the edges of the core, and taking the mean value of the pixels at a given depth of the core. A subsection of the optical profile with the accompanying imagery is shown in Figure 3. With side illumination, light detected by the camera is primarily scattered from within the firn, so the relatively low-scattering ice layers appear dark.

Fig. 2. Density data (black) and data averaged over core sections (grey) to facilitate comparison: (a) gravimetric density data; (b) NP data; (c) DEP data (thin grey lines depict four runs at 0, 90, 180 and 270° rotation, and the black line depicts the mean); and (d) imagery of the core on a black background with side illumination, the basis of the COS shown in Figure 3.

3. DISCUSSION

3.1. Accuracy of the methods

As can be seen in Figure 2, there are differences between the absolute densities measured by the three methods. We evaluate the accuracy of each technique. The gravimetric technique using core samples has the longest history and has proven utility. The depth resolution, however, is usually insufficient to resolve the shorter-scale spatial fluctuations in density, and the accuracy can be affected by several factors. The diameter of the core is generally measured at several places along the core, but may not be consistent. The length of the core is measured, but,

Fig. 3.
(a) NP (grey) and DEP (black) density profiles; (b) COS profile, a mean brightness from the core imagery; and (c) digital imagery of the core, the basis of the COS profile. In this detailed view, the effect of core breaks on the DEP and core-optical measurements can be seen; there are short data gaps where the end effects in the data have been eliminated, delineated with horizontal grey lines. Note that thin ice layers as detected by COS are smoothed in the NP profile; this is because the 13.5 cm neutron detector behaves as a low-pass filter on the measurement. The smoothing effect is also present in the DEP profile (e.g. depths 4.4 m and 4.6 m).

in the event of an uneven core break, this length might not be uniform. This can result in density being either over- or underestimated. Often a core will be in many pieces upon extraction from the drill. If a piece is lost, the measured mass of the core section will be reduced, resulting in an underestimation of density. This is particularly troublesome when core quality is poor or in unconsolidated snow, where care is required to obtain good results. Systematic density overestimation is rare, because this would require excess weight or volume to be underestimated. Core loss also introduces an ambiguity in the depth positioning of a density measurement, although measurements of the drill depth for any given core can help to resolve this.

Density is measured by NP by counting the rate of neutrons slowed by scattering in the snow and then absorbed by the detector. The diameter of the borehole has an effect on the relation between density and count rate, as does the position of the probe in the hole, i.e. whether the probe is centred in the hole or lying against the side (Morris, in press). On Kongsvegen we did not measure borehole diameter but were careful to align the probe with the side of the hole when we set up the measurement.
Although it is reasonable to assume the probe did not at any depth lose contact with the wall, we cannot categorically exclude this possibility. In future tests, use of a pressure shoe to hold the tool against the borehole wall would eliminate this source of uncertainty. We have assumed in section 2.3 that the diameter of the hole is 11 cm through most of its depth, but that it flares from 11 cm at the first hard layer (∼2 m) to 14 cm at the surface. We do not know the error in this estimate, but sensitivity calculations by Morris (in press) indicate that for a 10% error in borehole diameter the resulting error in the derived density would be of the order of 8–10%. A caliper log of the hole, showing the exact diameter, would allow us to account for the effect of variation in borehole diameter more accurately, although the calipers may not work properly in unconsolidated snow. In standard practice, when co-registration of NP profiles with core is not required, the borehole can be drilled using a 5 cm non-coring auger and a rigid guide tube up to several metres long. The guide tube is made of aluminium, which does not affect the neutrons, and can be left in place during logging. This prevents collapse of lower-density layers and ensures the hole is of constant diameter through the uppermost (generally lowest-density and weakest) part of the firn. In addition, the accuracy of the method is much improved by using a small-diameter access hole (Morris and Cooper, 2003; Morris, in press). DEP uses the electrical properties of the ice, as outlined above, to calculate density. As can be seen in Figure 2, DEP appears to underestimate in the lower-density sections of core but agrees with the gravimetric measurements for the higher-density sections. We suspect the underestimation at lower densities is caused by thinner core diameter (typically 7.3–7.6 cm compared to 7.8 cm for ice layers).
The air gap between the core, guarding and electrodes in the DEP affects the capacitance readings. In essence, by reducing the core diameter by 5 mm, 13% less core material will occupy the cradle and thus the relative permittivity will be underestimated. Propagating this error through Equation (2) leads to a possible error of up to 20% in the low-density sections and 10–13% in the high-density sections. However, since the higher-density sections typically have a smaller air gap, the uncertainty is further reduced. The actual observed underestimation in the low-density parts of the core is ∼15%. In addition, there could also be a change in conductivity with density which is unaccounted for in the density calculation, or an inaccuracy of the blank measurement. For the purposes of characterizing the large-scale fluctuations of this layered (firn/ice) core, the present method is sufficient. For a polar firn core, where density is more slowly varying, one would process the DEP using the full permittivity and conductivity values following Wilhelms (2005).

3.2. Details in the density profile

The three density measurements are plotted together in Figure 2. Both DEP and NP techniques show finer spatial resolution compared to the gravimetric method. Figure 3 shows a section of the DEP, NP and COS profiles, along with composite imagery of the core. The sharpest contrasts are seen in the COS profile. Thin ice layers such as that seen at ∼4.4 m are spatially resolved by COS, and can be seen smoothed in the NP and DEP profiles. The smoothing of thin ice layers in the NP data is readily apparent. This is because the active length of the neutron detector (13.5 cm) acts as a low-pass filter (with a cut-off of approximately half the active length, or 6.75 cm).
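The detector's low-pass behaviour can be mimicked to first order by a moving-average (boxcar) filter over the 13.5 cm active length. The sketch below is only an illustration of that idea, not the instrument's true response function; the function name and the 1 cm sample spacing are our own assumptions.

```python
def smooth_np(profile, sample_spacing=0.01, detector_length=0.135):
    """Boxcar approximation of the NP detector response: each output
    sample is the mean of the input over the 13.5 cm active length.

    profile         density samples on a uniform depth grid
    sample_spacing  grid spacing in metres (assumed value)
    """
    n = max(1, round(detector_length / sample_spacing))  # samples per window
    half = n // 2
    out = []
    for i in range(len(profile)):
        lo = max(0, i - half)
        hi = min(len(profile), i + half + 1)
        out.append(sum(profile[lo:hi]) / (hi - lo))
    return out
```

Running this on a synthetic profile with a thin 900 kg m^−3 ice layer embedded in 500 kg m^−3 firn attenuates the spike strongly, much as the thin ice layers are smoothed in the NP log.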
This is particularly noticeable at ∼3.75 m in Figure 3. Less obvious but also apparent is the smoothing effect of DEP, whose 10 mm electrodes sample a finite volume. A first estimate of the sensing volume can be found when excising the core-break end effects from the DEP record. Generally, ∼2 cm was removed from each end of a core section, implying that the sensing length along the core is ∼4 cm. In principle, the true density profile could be extracted by inverse methods, using the NP or DEP densities and geometry constrained by COS. Since optical stratigraphy can be obtained using down-hole techniques (i.e. BOS), a combination of NP and BOS would allow a very accurate and detailed density profile to be constructed.

3.3. Pseudo-density from COS

Since the optical signal is affected by factors other than density, such as grain size and shape and other aspects of firn microstructure, the inversion of optical brightness to find density is not straightforward. A true inversion is beyond the scope of this study. We can, however, exploit the strong correlation between brightness and density (Hawley and Morris, 2006; Sjögren and others, 2007) to investigate the potential for an optical record (COS or BOS) to be used in combination with a high-resolution density profile (such as that from NP or DEP) to produce a detailed and accurate interpretation of density. Simplifying the procedure of Sjögren and others (2007) for obtaining a density profile from COS, we apply a linear transformation y = Ax + B to the intensity data, and vary A and B to minimize the mismatch over a short depth range between this 'pseudo-density' profile and the density profile measured by NP. The optimum values are A = −2.4 and B = 1000, and the resulting profile is shown in Figure 4. Note that the ice layers are very clearly seen in the pseudo-density profile, and the background density is in agreement with the NP density profile.

4.
CONCLUSIONS

We have made side-by-side density measurements in mixed firn and ice using NP, DEP, COS and gravimetric density measurement techniques. Unconsolidated snow near the surface affects the measurements derived from all the techniques, and further refinements are needed for these conditions. Although dependent on an accurate measurement of borehole diameter, the NP method does not require the collection and shipping of core and is relatively simple to deploy in the field. This means that the NP method is free of the problems associated with core depth registration, core breaks, poor core quality or melting of cores during shipping. In fact, NP

Fig. 4. A 'pseudo-density' profile, derived from the COS profile and a simple linear transformation, is shown in black with the NP-density data in grey. Note that while not giving a true physically based measurement of density, the COS-derived pseudo-density profile clearly captures the details of the ice layers.

can be deployed in a rapidly drilled hole with a 5 cm non-coring auger. The DEP method has the fine resolution needed to characterize thin (<5 cm) ice layers, but suffers from the problems associated with collecting cores mentioned above. Although both NP and DEP smooth the density profile to some extent, both offer a vast improvement over gravimetric methods of density profiling, with increased spatial resolution and precision and reduced potential for human error. Optical stratigraphy in the borehole or on the core, while not providing a quantification of density, can be combined with either NP or DEP to improve the ability of either technique to resolve thin ice layers. A combined NP, optical and caliper down-hole tool may prove to be the ideal means of measuring a continuous, high-resolution, high-accuracy profile of density from the surface to any depth accessible via drilling.

ACKNOWLEDGEMENTS

We thank K. Christianson for assistance with drilling.
This work was supported by the European Space Agency, by the UK Natural Environment Research Council under grant No. NER/0/S/2003/00620, and by the Norwegian Research Council. We thank D. Peel (Scientific Editor), C. Shuman and an anonymous reviewer for insightful comments which improved the manuscript.

REFERENCES

Alley, R.B. 1988. Concerning the deposition and diagenesis of strata in polar firn. J. Glaciol., 34(118), 283–290.
Alley, R.B. and 11 others. 1997. Visual-stratigraphic dating of the GISP2 ice core: basis, reproducibility, and application. J. Geophys. Res., 102(C12), 26,367–26,382.
Arcone, S.A., V.B. Spikes, G.S. Hamilton and P.A. Mayewski. 2004. Stratigraphic continuity in 400 MHz short-pulse radar profiles of firn in West Antarctica. Ann. Glaciol., 39, 195–200.
Clark, I.D. and 8 others. 2007. CO2 isotopes as tracers of firn air diffusion and age in an Arctic ice cap with summer melting, Devon Island, Canada. J. Geophys. Res., 112(D1), D01301. (10.1029/2006JD007471.)
Geophys. Res. Lett., 30(15), 1788. (10.1029/2003GL017675.)
Eisen, O., F. Wilhelms, D. Steinhage and J. Schwander. 2006. Improved method to determine radio-echo sounding reflector depths from ice-core profiles of permittivity and conductivity. J. Glaciol., 52(177), 299–310.
Førland, E.J. and I. Hanssen-Bauer. 2000. Increased precipitation in the Norwegian Arctic: true or false? Climatic Change, 46(4), 485–509.
Hagen, J.O., K. Melvold, T. Eiken, E. Isaksson and B. Lefauconnier. 1999. Mass balance methods on Kongsvegen, Svalbard. Geogr. Ann., 81A(4), 593–601.
Hawley, R.L. and E.M. Morris. 2006. Borehole optical stratigraphy and neutron-scattering density measurements at Summit, Greenland. J. Glaciol., 52(179), 491–496.
Hawley, R.L., E.D. Waddington, R.A. Alley and K.C. Taylor. 2003. Annual layers in polar firn detected by borehole optical stratigraphy. Geophys. Res. Lett., 30(15), 1788. (10.1029/2003GL017675.)
Hawley, R.L., E.M. Morris, R. Cullen, U. Nixdorf, A.P. Shepherd and D.J. Wingham. 2006. ASIRAS airborne radar resolves internal annual layers in the dry-snow zone of Greenland. Geophys. Res. Lett., 33(4), L04502. (10.1029/2005GL025147.)
Herron, M.M. and C.C. Langway, Jr. 1980. Firn densification: an empirical model. J. Glaciol., 25(93), 373–385.
Kohler, J., J. Moore, M. Kennett, R. Engeset and H. Elvehøy. 1997. Using ground-penetrating radar to image previous years' summer surfaces for mass-balance measurements. Ann. Glaciol., 24, 355–360.
Kohler, J., J.C. Moore and E. Isaksson. 2003. Comparison of modelled and observed responses of a glacier snowpack to ground-penetrating radar. Ann. Glaciol., 37, 293–297.
Kovacs, A., A.J. Gow and R.M. Morey. 1995. The in-situ dielectric constant of polar firn revisited. Cold Reg. Sci. Technol., 23(3), 245–256.
McGwire, K.C. and 6 others. In press. An integrated system for optical imaging of ice cores. Cold Reg. Sci. Technol.
Morris, E.M. In press. A theoretical analysis of the neutron-scattering method for measuring snow and ice density. J. Geophys. Res.
Morris, E.M. and J.D. Cooper. 2003. Density measurements in ice boreholes using neutron scattering. J. Glaciol., 49(167), 599–604.
Okuyama, J., H. Narita, T. Hondoh and R.M. Koerner. 2003. Physical properties of the P96 ice core from Penny Ice Cap, Baffin Island, Canada, and derived climatic records. J. Geophys. Res., 108(B2), 2090. (10.1029/2001JB001707.)
Sjögren, B. and 6 others. 2007. Determination of firn density in ice cores using image analysis. J. Glaciol., 53(182), 413–419.
Wilhelms, F. 2005. Explaining the dielectric properties of firn as a density-and-conductivity mixed permittivity (DECOMP). Geophys. Res. Lett., 32(16), L16501. (10.1029/2005GL022808.)
Wilhelms, F., J. Kipfstuhl, H. Miller, K. Heinloth and J. Firestone. 1998. Precise dielectric profiling of ice cores: a new device with improved guarding and its theory. J.
Glaciol., 44(146), 171–174.
Wolff, E. 2000. Electrical stratigraphy of polar ice cores: principles, methods, and findings. In Hondoh, T., ed. Physics of ice core records. Sapporo, Hokkaido University Press, 155–171.

MS received 19 February 2008 and accepted in revised form 14 March 2008

----

Remote Sens. 2015, 7, 13410–13435; doi:10.3390/rs71013410
Remote Sensing, ISSN 2072-4292, www.mdpi.com/journal/remotesensing

Article

Extracting Leaf Area Index by Sunlit Foliage Component from Downward-Looking Digital Photography under Clear-Sky Conditions

Yelu Zeng 1,2,3, Jing Li 1,2,*, Qinhuo Liu 1,2,*, Ronghai Hu 1, Xihan Mu 1, Weiliang Fan 1, Baodong Xu 1,3, Gaofei Yin 1,4 and Shengbiao Wu 1,3

1 State Key Laboratory of Remote Sensing Science, Jointly Sponsored by Institute of Remote Sensing and Digital Earth, Chinese Academy of Sciences and Beijing Normal University, No. 20A, Datun Road, Beijing 100101, China; E-Mails: zengyl@radi.ac.cn (Y.Z.); rhhu@mail.bnu.edu.cn (R.H.); muxihan@bnu.edu.cn (X.M.); fanweiliang@163.com (W.F.); xubd@radi.ac.cn (B.X.); yingf@radi.ac.cn (G.Y.); wushengbiao13@mails.ucas.ac.cn (S.W.)
2 Joint Center for Global Change Studies (JCGCS), Beijing 100875, China
3 College of Resources and Environment, University of Chinese Academy of Sciences, Beijing 100049, China
4 Institute of Mountain Hazards and Environment, Chinese Academy of Sciences, Chengdu 610041, China
* Authors to whom correspondence should be addressed; E-Mails: lijing01@radi.ac.cn (J.L.); liuqh@radi.ac.cn (Q.L.); Tel./Fax: +86-010-6485-1880 (J.L.).

Academic Editors: Alfredo R. Huete and Prasad S.
Thenkabail

Received: 20 July 2015 / Accepted: 1 October 2015 / Published: 13 October 2015

Abstract: The development of near-surface remote sensing requires the accurate extraction of leaf area index (LAI) from networked digital cameras under all illumination conditions. The widely used directional gap fraction model is more suitable for overcast conditions because of the difficulty of discriminating the shaded foliage from the shadowed parts of images acquired on sunny days. In this study, a new method is proposed for extracting LAI from the sunlit foliage component of downward-looking digital photography under clear-sky conditions. In this method, the sunlit foliage component is extracted by an automated image classification algorithm named LAB2; the clumping index is estimated by a path length distribution-based method; the LAD and G function are quantified from leveled digital images; and, eventually, the LAI is obtained by introducing a geometric-optical (GO) model that quantifies the sunlit foliage proportion. The proposed method was evaluated at the YJP site, Canada, using the 3D realistic structural scene constructed from the field measurements. Results suggest that the LAB2 algorithm makes automated image processing and accurate sunlit foliage extraction possible, with a minimum overall accuracy of 91.4%. The widely used finite-length method tends to underestimate the clumping index, while the path length distribution-based method can reduce the relative error (RE) from 7.8% to 6.6%. Using the directional gap fraction model under sunny conditions can lead to an underestimation of LAI by (1.61; 55.9%), which is significantly outside the accuracy requirement (0.5; 20%) of the Global Climate Observation System (GCOS). The proposed LAI extraction method has an RMSE of 0.35 and an RE of 11.4% under sunny conditions, which can meet the accuracy requirement of the GCOS.
This method relaxes the diffuse illumination conditions required for digital photography, and can be applied to extract LAI from downward-looking webcam images, which is expected to support regional- to continental-scale monitoring of vegetation dynamics and validation of satellite remote sensing products.

Keywords: leaf area index; near-surface remote sensing; digital photography; gap fraction; clumping index; sunlit foliage component; clear-sky conditions

1. Introduction

Leaf area index (LAI) is a key biophysical variable to characterize vegetation canopy structure and functioning in most ecosystem productivity and land surface process models [1,2]. LAI is defined as half the total foliage area per unit ground surface area [3]. Satellite remote sensing provides the unique way to obtain LAI in long-term time series and at the global scale [2,4]. However, the accuracy of remotely sensed LAI can be affected by land surface heterogeneities, the impact of clouds and aerosols in the atmospheric correction, the uncertainties from the forward model used to create the look-up tables, and the saturation of optical signals over dense canopies when the lower layer is obscured by the upper layer [5–8]. Thus, it is necessary to validate remotely sensed LAI products with ground-based LAI measurements for product utilization and algorithm improvement. In general, LAI can be measured through direct and indirect methods in the field campaign [9]. Direct LAI measurements include destructive sampling and non-harvest litter traps for deciduous forests, which are the most accurate, but are extremely labor-intensive and time-consuming [9–11]. Indirect methods using optical radiometric or imaging sensors, e.g., the LAI-2000 Plant Canopy Analyzer (PCA), Digital Hemispherical Photography (DHP), and Tracing Radiation and Architecture of Canopies (TRAC), are based on gap fraction or gap size distribution analysis [9,12–14]. Indirect LAI measurements are widely used in field campaigns because of their high efficiency, but the temporal revisit frequency and the spatial coverage are limited by the available manpower [4,15]. Recently, near-surface remote sensing using networked digital cameras or radiometric sensors has provided a low-cost way to continuously monitor vegetation dynamics at high temporal frequency (several measurements per day) over the regional to continental scale [15–20]. For example, the PhenoCam dataset has collected time series of red-green-blue (RGB) camera images for more than
Indirect LAI measurements are widely used in the field campaign due to the high efficiency, but the temporal revisit frequency and the spatial coverage are limited by the manpower [4,15]. Recently, near-surface remote sensing using networked digital cameras or radiometric sensors provides a low-cost way to continuously monitor the vegetation dynamics at high temporal frequency (several measurements per day) over the regional to continental scale [15–20]. For example, the PhenoCam dataset has collected time series of red-green-blue (RGB) camera images for more than Remote Sens. 2015, 7 13412 200 forest sites across North America [16,20]. The downward inclined cameras are installed at the top of instrument towers, such as the flux towers for eddy covariance measurements, which are several or tens of meters above the canopy [17]. The cameras are set at automated or fixed exposure, and webcam images are widely used to extract color indices in long-term time-series, such as the excess green (ExG) and the green chromatic coordinate (gcc) for vegetation phonological monitoring [16,17,20]. Recently much attention has been paid to use LAI time-series for tracking vegetation dynamic changes, because color indices or spectral indices, which are derived from camera RGB channel digital numbers (DNs) or reflectances, can be affected by leaf optical properties (e.g., chlorophyll a+b content and water amount), canopy structure (e.g., LAI, leaf angle distribution, and clumping index), soil background, sun-target-sensor geometry, and diffuse irradiance ratio [21–23]. Additionally, compared with vegetation indices, the quantitative biophysical meaning of LAI is clear, and LAI will not get saturated on the mature phase at least by definition [24–26]. 
However, little research has been conducted to extract LAI from such downward-looking webcam images, which is expected for the regional to continental scale monitoring of vegetation dynamics and validation of satellite remote sensing products. Current indirect LAI measurements by upward-pointing or downward-looking imaging sensors, such as DHP or non-fisheye digital photography, require stable diffuse illumination conditions, including uniform overcast skies or the periods just before sunrise and after sunset [15,27]. Downward-looking cameras have already been used to extract LAI over short vegetation, such as agricultural crops, but were not often used for tall forest canopies until tower-based webcams made it possible to extract forest LAI by near-surface remote sensing [27–30]. The series of acquired webcam images can be taken not only under overcast conditions, but also under clear-sky conditions with direct sunlight [16,17]. According to the theory of the geometric-optical (GO) model, the difference between the two illumination conditions is that on overcast days the images contain two scene components, foliage and background, while on sunny days they contain four: sunlit foliage, sunlit background, shaded foliage, and shaded background [31–33]. The directional gap fraction model has been widely used on overcast days, while until now little research has been dedicated to the extraction of LAI on sunny days [29,30]. The main challenge for images acquired on sunny days is that it is difficult to discriminate the shaded foliage and shaded background within the shadows, where the signal variations in the three color channels are small [28–30].
Misclassification of shaded foliage as background on sunny days will lead to an overestimation of the extracted gap fraction [29], which may eventually push the LAI estimate outside the relative accuracy (20%) and absolute accuracy (0.5) requirements of the Global Climate Observation System (GCOS, http://www.wmo.int/pages/prog/gcos). To avoid the potential impact of such misclassification on sunny days, the goal of this study is to extract LAI from the area ratio of the sunlit foliage component using the GO model, instead of using the directional gap fraction model for images acquired under overcast conditions. The sunlit foliage component extraction, the clumping index estimation and the LAI retrievals by different methods will be presented in detail in this study. The accurate extraction of LAI from downward-looking digital photography under clear-sky conditions will make it possible to monitor LAI dynamics from low-cost webcams under all illumination conditions.

2. Methodology

2.1. Theory

According to the GO model, the digital image of the scene within the field of view can be divided into four components: sunlit foliage, sunlit background, shaded foliage, and shaded background [31–33]. P_T, P_G, Z_T, and Z_G are the area ratios of the four components, respectively, and thus P_T + P_G + Z_T + Z_G = 1 holds. The area ratio of the sunlit foliage P_T for discontinuous canopies can be expressed as [33]:

P_T = 1 − exp( −sqrt( (G_s G_v Ω_s Ω_v) / (μ_s μ_v) ) · w · LAI ),    (1)

where μ_s = cos θ_s, μ_v = cos θ_v, and θ_s and θ_v are the solar and view zenith angles, respectively. Ω_s and Ω_v are the clumping indices in the solar and view directions [12], and G_s and G_v are the mean projections of unit foliage area on a plane perpendicular to the solar and view directions [34].
The hot spot function w can be derived as:

w = (1/H) ∫_0^H exp(−z δ / s_L) dz = s_L / (H δ) · (1 − exp(−H δ / s_L)),    (2)

where H is the canopy height, and s_L is the characteristic linear dimension of the foliage in the Kuusk hot-spot model [35]. For a spherical orientation of leaves, the foliage dimension is s_L = d_L π²/16, and for horizontal leaves s_L = d_L π/4, where d_L is the mean foliage diameter [36]. δ = sqrt( 1/μ_s² + 1/μ_v² − 2 cos ξ / (μ_s μ_v) ), and ξ is the scattering phase angle, which can be calculated from the solar and view zenith angles θ_s and θ_v and the solar and view azimuth angles φ_s and φ_v:

cos ξ = cos θ_s cos θ_v + sin θ_s sin θ_v cos(φ_v − φ_s).    (3)

From Equations (1)–(3), LAI can be derived from the area ratio of the sunlit foliage component P_T as:

LAI = −ln(1 − P_T) / ( sqrt( (G_s G_v Ω_s Ω_v) / (μ_s μ_v) ) · w ),    (4)

where the variables have the same meanings as in Equations (1)–(3). Thus, the following procedure is proposed to extract LAI from downward-looking digital images acquired on sunny days (Figure 1):

1. Estimate the area ratio of the sunlit foliage component P_T from digital images by an automated image classification algorithm.
2. Extract the clumping index Ω from digital images by a path length distribution-based method [37].
3. Characterize the leaf angle distribution (LAD) and calculate the leaf projection function (G).
4. Acquire the canopy height H, the foliage diameter d_L and the solar/view geometric information by field measurements.
5. Derive LAI by Equation (4) with the above variables.

Steps (1–2) are described in detail in the following Sections 2.2–2.3.

Figure 1. The flow chart of procedures to extract LAI from downward-looking digital photography under clear-sky conditions.

2.2. Sunlit Foliage Component

An automated image classification algorithm called LAB2 is adopted in this study to extract sunlit foliage from images acquired on sunny days [38].
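Before turning to the image classification, the geometric part of the retrieval, Equations (1)–(4) above, can be sketched numerically. This is an illustrative implementation only: the function names are ours, and angles are assumed to be in radians.

```python
import math

def hotspot_w(H, s_L, theta_s, theta_v, phi_s, phi_v):
    """Hot-spot factor w of Eq. (2) for the geometry of Eq. (3).
    H is canopy height (m), s_L the characteristic foliage dimension (m)."""
    mu_s, mu_v = math.cos(theta_s), math.cos(theta_v)
    cos_xi = (mu_s * mu_v + math.sin(theta_s) * math.sin(theta_v)
              * math.cos(phi_v - phi_s))                       # Eq. (3)
    delta = math.sqrt(1 / mu_s ** 2 + 1 / mu_v ** 2
                      - 2 * cos_xi / (mu_s * mu_v))
    if delta == 0.0:            # exact hot-spot direction: w -> 1
        return 1.0
    return s_L / (H * delta) * (1.0 - math.exp(-H * delta / s_L))  # Eq. (2)

def lai_from_sunlit_fraction(P_T, G_s, G_v, omega_s, omega_v,
                             theta_s, theta_v, w):
    """Eq. (4): invert the sunlit-fraction model of Eq. (1) for LAI."""
    mu_s, mu_v = math.cos(theta_s), math.cos(theta_v)
    k = math.sqrt(G_s * G_v * omega_s * omega_v / (mu_s * mu_v))
    return -math.log(1.0 - P_T) / (k * w)
```

As a consistency check, a P_T value generated forward from Equation (1) for a given LAI is inverted back to that LAI by Equation (4).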
The LAB2 method was originally proposed to estimate foliage cover from digital nadir-view images, and has been shown to be the most accurate method for foliage cover > 10% when compared with five other classification methods [38]. Firstly, the green leaf algorithm (GLA) value for each pixel is calculated as [39]:

GLA = (2G − R − B) / (2G + R + B),    (5)

where R, G, and B are the DNs of the RGB channels of the image. GLA values range between −1 and +1, and positive values indicate green vegetation, because such pixels have higher levels in the green channel than in the red and blue channels. Secondly, the digital image is converted from the RGB color space to the CIE L*a*b* color space, where L* is the luminance component, a* represents color on the green-magenta axis, and b* represents color on the blue-yellow axis. The CIE L*a*b* color space separates the luminance channel L* from the two chromaticity channels a* and b*, which minimizes the correlations between channels and, compared with the RGB space, reduces the potential impact of illumination changes on image processing. Negative a* indicates green, while positive a* indicates red, which has been exploited in previous studies to quantify the greenness of foliage [38,40]. Thirdly, the background and sunlit foliage training sets are identified automatically from the DNs of the RGB image. Pixels with GLA ≤ 0 are classified as definite background training pixels, while pixels satisfying the criterion (G > R) & (G > B) & (G > 25) are identified as sunlit foliage training pixels, following the suggestions in [38] for detecting the background and foreground training sets. Mean values of GLA, a*, and b* are calculated for the background and sunlit foliage training sets. Finally, each pixel is classified by its Euclidean distance from the sunlit foliage and background means of GLA, a*, and b* using the minimum-distance-to-means classifier.
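As a minimal sketch of this classification chain, the snippet below implements Equation (5) and the minimum-distance-to-means step on the GLA axis alone; the full LAB2 method also uses the CIELAB a* and b* channels, which are omitted here for brevity. Function names are illustrative.

```python
def gla(r, g, b):
    """Green leaf algorithm of Eq. (5); in [-1, 1], positive = green."""
    denom = 2 * g + r + b
    return 0.0 if denom == 0 else (2 * g - r - b) / denom

def classify_sunlit_foliage(pixels):
    """Simplified LAB2-style classifier on the GLA axis only.

    pixels: iterable of (R, G, B) digital numbers.
    Returns a list of booleans, True = sunlit foliage.
    """
    values = [gla(*p) for p in pixels]
    # automatic training sets, following the criteria in the text
    bg = [v for v, p in zip(values, pixels) if v <= 0]
    fg = [v for v, p in zip(values, pixels)
          if p[1] > p[0] and p[1] > p[2] and p[1] > 25]
    m_bg = sum(bg) / len(bg) if bg else -1.0
    m_fg = sum(fg) / len(fg) if fg else 1.0
    # minimum-distance-to-means classification
    return [abs(v - m_fg) < abs(v - m_bg) for v in values]
```

For example, a bright green pixel such as (10, 200, 10) is assigned to sunlit foliage, while a grey shadow pixel such as (120, 100, 90) falls to the background class.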
The area ratio of the sunlit foliage P_T can then be calculated as N_T/N, where N is the total number of pixels in the digital image, and N_T is the number of pixels classified as sunlit foliage.

2.3. Clumping Index

The clumping index (CI), which quantifies the degree of nonrandom foliage distribution in space, is defined as [12]:

Ω(θ_v, φ_v) = LAI_eff(θ_v, φ_v) / LAI_true,    (6)

where LAI_true is the true LAI of the scene, and LAI_eff(θ_v, φ_v) is the effective LAI that can be derived from the directional gap fraction model:

LAI_eff(θ_v, φ_v) = −cos θ_v · ln P(θ_v, φ_v) / G(θ_v, φ_v),    (7)

where P(θ_v, φ_v) is the directional gap fraction at view zenith angle θ_v and view azimuth angle φ_v, and G(θ_v, φ_v) is the foliage projection function as in Equation (1). P(θ_v, φ_v) can be extracted from images acquired on adjacent overcast days by the LAB2 image classification algorithm of Section 2.2. However, in practice the true LAI is usually unknown, which means the clumping index cannot be derived directly from its definition in Equation (6). The total clumping index Ω can be divided into two components:

Ω = Ω_E / γ_E,    (8)

where Ω_E is the element clumping index quantifying the degree of foliage clumping at scales larger than the shoot, and γ_E is the needle-to-shoot area ratio, which is usually assumed to be 1 for broadleaf forests [14]. In general, the element clumping index Ω_E can be estimated by the finite-length logarithmic gap averaging (LX) method of [41] or the gap size distribution-based (CC) method of [42] from sampled transects in the image. In this study, a path length distribution-based method is adopted to estimate the clumping index, since it has been shown to improve the accuracy of the widely used LX and CC methods over heterogeneous canopies [37].
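Equations (6) and (7) reduce to two one-line functions. The sketch below is illustrative only; the function names, and the default G = 0.5 (the value for a spherical LAD), are our own choices.

```python
import math

def effective_lai(gap_fraction, theta_v, G=0.5):
    """Eq. (7): effective LAI from a directional gap fraction.
    theta_v in radians; G defaults to 0.5 (spherical LAD, an assumption)."""
    return -math.cos(theta_v) * math.log(gap_fraction) / G

def clumping_index(lai_eff, lai_true):
    """Eq. (6): clumping index as the ratio of effective to true LAI."""
    return lai_eff / lai_true
```

For a random canopy, the nadir gap fraction exp(−G · LAI) returns exactly the true LAI, so the clumping index is 1; clumped canopies have larger gap fractions, lower effective LAI, and hence Ω < 1.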
The estimate of the true LAI in Equation (6) by the path length-based method can be expressed as [37]:

LAItrue = ∫₀¹ FAVD · lmax · cosθv · lr · plr(lr) d(lr) (9)

where FAVD is the foliage area volume density, lmax is the maximum path length along the transect, θv is the view zenith angle, lr is the relative path length, and plr(lr) is the path length distribution function inverted from the gaps measured in the sliding windows. Similar to the segment length in the LX method, the size of the sliding window is set to 10 times the foliage characteristic width, and 40 transects are employed per image in this study [41]. A more detailed description of the variables in Equation (9) can be found in [37]. Eventually the clumping index can be calculated by Equations (6)–(9) from digital images acquired on adjacent overcast days.

2.4. Leaf Angle Distribution and Leaf Projection Function

The leaf projection function (G) is the mean projection of unit foliage area on a plane perpendicular to the solar or view direction [34]. Assuming a uniform leaf azimuth distribution, the G function can be expressed as [43]:

G(θ) = ∫₀^(π/2) A(θ, θL) f(θL) dθL (10)

A(θ, θL) = cosθ cosθL, if |cotθ cotθL| > 1;
A(θ, θL) = cosθ cosθL [1 + (2/π)(tanψ − ψ)], otherwise (11)

where θ is the solar or view zenith angle, θL is the leaf inclination angle, and ψ = cos⁻¹(cotθ cotθL).
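The integral in Equation (9) can be evaluated by simple trapezoidal quadrature once a path length distribution is available. The sketch below is illustrative only: `p_lr` is a user-supplied density over relative path length (assumed normalized on [0, 1]), standing in for the distribution inverted from measured gaps.

```python
import numpy as np

def true_lai_path_length(favd, l_max, theta_v_deg, p_lr, n=2001):
    """Trapezoidal quadrature of Eq. (9):
    LAI_true = int_0^1 FAVD * l_max * cos(theta_v) * l_r * p_lr(l_r) d l_r."""
    lr = np.linspace(0.0, 1.0, n)
    y = favd * l_max * np.cos(np.radians(theta_v_deg)) * lr * p_lr(lr)
    # manual trapezoid rule (version-proof alternative to np.trapz)
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(lr)))
```

With a uniform density p(lr) = 1, the integral collapses to FAVD · lmax · cosθv / 2, which is a convenient sanity check.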
The two-parameter Beta-distribution has been evaluated as the most accurate description of the leaf inclination distribution function f(θL) [44,45]:

f(t) = (1 / B(μ, ν)) (1 − t)^(μ−1) t^(ν−1) (12)

where t = 2θL/π, μ and ν are the two parameters, and B(μ, ν) is the Beta function defined as:

B(μ, ν) = ∫₀¹ (1 − x)^(μ−1) x^(ν−1) dx = Γ(μ)Γ(ν) / Γ(μ + ν) (13)

where Γ is the Gamma function, and μ and ν are calculated as:

μ = (1 − t̄)(σ₀²/σt² − 1) (14)

ν = t̄(σ₀²/σt² − 1) (15)

where t̄ and σt² are the mean and variance of t, and σ₀² is the maximum variance of t, which can be calculated as:

σ₀² = t̄(1 − t̄) (16)

From Equations (10)–(16), the leaf angle distribution (LAD) and the G function can be described by the two parameters μ and ν. Traditionally, LAD can be measured by mechanical inclinometers, but this direct method is labor-intensive and may not be feasible for tall forest canopies [46]. A newly developed photographic method that analyzes leveled digital camera images taken at different heights in the canopy allows a rapid and accurate estimation of LAD over broadleaf canopies [46–48]. The leveled digital images can be taken along a vertical crown profile at 2-m height increments from the crown bottom to the top [48]. Leaves in the images oriented perpendicularly to the view direction of the camera are selected, and their leaf inclination angles between the leaf surface normal and the zenith direction are estimated using the angle measurement tool of image processing software (ImageJ; http://rsbweb.nih.gov/ij/) [47]. The LAD has an impact on the regional CO2 and H2O fluxes [49], and can be affected by the trunk, stems, branches, and twigs. The spatial variability of LAD can be accounted for by dividing the canopy into multiple layers, and then calculating the LAD of each layer and its contribution to the four components layer by layer.
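Equations (10)–(16) can be chained into a short numerical routine: fit μ and ν from the moments of t, express f(θL) as a density over θL (the factor 2/π comes from the change of variable t = 2θL/π), and integrate Equations (10)–(11) by quadrature. This is a sketch under our own naming; it assumes θ is given in radians and strictly between 0 and π/2.

```python
import numpy as np
from math import gamma, pi

def beta_params(t_mean, t_var):
    """Fit the Beta parameters from the mean and variance of t = 2*thetaL/pi,
    Eqs. (14)-(16)."""
    s0 = t_mean * (1.0 - t_mean)              # maximum variance, Eq. (16)
    mu = (1.0 - t_mean) * (s0 / t_var - 1.0)  # Eq. (14)
    nu = t_mean * (s0 / t_var - 1.0)          # Eq. (15)
    return mu, nu

def lad_pdf(theta_L, mu, nu):
    """Beta leaf inclination distribution, Eqs. (12)-(13), written as a
    density over thetaL in [0, pi/2] (hence the 2/pi Jacobian)."""
    t = 2.0 * theta_L / pi
    B = gamma(mu) * gamma(nu) / gamma(mu + nu)  # Eq. (13)
    return (2.0 / pi) * (1.0 - t) ** (mu - 1.0) * t ** (nu - 1.0) / B

def G_function(theta, mu, nu, n=4000):
    """Projection function G(theta) for uniform leaf azimuths, Eqs. (10)-(11),
    by trapezoidal quadrature over the leaf inclination angle."""
    thL = np.linspace(1e-6, pi / 2 - 1e-6, n)
    cotcot = 1.0 / (np.tan(theta) * np.tan(thL))
    psi = np.arccos(np.clip(cotcot, -1.0, 1.0))       # Eq. (11) branch variable
    A = np.where(np.abs(cotcot) > 1.0,
                 np.cos(theta) * np.cos(thL),
                 np.cos(theta) * np.cos(thL) * (1.0 + (2.0 / pi) * (np.tan(psi) - psi)))
    y = A * lad_pdf(thL, mu, nu)
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(thL)))
```

For instance, t̄ = 0.5 and σt² = 0.05 give the symmetric case μ = ν = 2, from which G can be evaluated at any zenith angle.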
The temporal variability of LAD can be quantified by a time-series model that extracts the trend of temporal changes from LAD measurements taken at several times. The spatial representativeness of the LAD measurements may be limited in a heterogeneous forest when the tower has access to only a few species. The potential impact of light-environment changes caused by the tower on the canopy structure can be reduced by masking the canopies near the tower. In general, a minimum sample of 75 measured leaves across the whole canopy is considered sufficient to characterize LAD reliably [48]. The leaf inclination angle measurements are used to fit the two parameters μ and ν in Equations (14)–(16), with which f(θL) and G(θ) can be derived by Equations (10)–(12). If no LAD field measurement data are available, a typical LAD function for the specific tree species can be assumed as the back-up approach [48].

3. Materials

Firstly, the different LAI extraction methods were evaluated with the 3D realistic structural scene at the YJP site. Then the proposed LAI extraction approach for sunny days was applied to extract LAI from actual images acquired at the Huailai Remote Sensing Experimental Station in Beijing, China, from July to September 2014.

3.1. Field Data Collection at the YJP Site

The YJP site in this study is part of the Boreal Ecosystem-Atmosphere Study (BOREAS) eddy covariance tower sites [14]. YJP denotes young jack pine (Pinus banksiana); the site is located in the BOREAS southern study area near Candle Lake, Saskatchewan, Canada (53.975°N, 104.650°W), as in Figure 2. The jack pine trees are on average 11–16 years old, and the stand density of the YJP site is about 4000 stems per hectare. The crown shape of the jack pine consists of a conical top and a cylindrical bottom with a mean radius of 0.85 m, and the understory is composed of thin grasses, lichens, and some bearberry [50].
Three parallel transects, each 300 m long, were separated by 10 m and oriented in the northwest–southeast direction. The flux tower at the YJP site was located exactly at the center of the middle transect. The effective LAI measurements with the LAI-2000 Plant Canopy Analyzer (LI-COR, Lincoln, NE, USA) were taken along the transects every 10 m, marked by forest flags; thus, 90 readings were eventually made within 20 min under overcast conditions. The 90° view caps were used to hide the operator from the sensor's view during the LAI-2000 measurements. The element clumping index ΩE was measured by TRAC along the transects under clear-sky conditions. The sensors were carried about 0.1–0.2 m above the forest floor at a walking speed of 1 m per 3 s. The needle-to-shoot area ratio (γE) was 1.43, and the mean width of foliage elements projected on a plane perpendicular to the solar direction (Ws) was 0.17 m [12]. The spherical LAD (i.e., G = 0.5) was assumed, which means leaves have no preferred orientation; this is often considered a reasonable assumption for conifer shoots when no LAD measurement data are available [48]. The canopy structure parameters of the YJP site are shown in Table 1. The same dataset of the YJP site has been used to evaluate the optical-based LAI measurement techniques and the improved four-scale model in previous studies [12,14,50].

Figure 2. The young jack pine (YJP) site located in the BOREAS southern study area near Candle Lake, Saskatchewan, Canada (53.975°N, 104.650°W). The projection is UTM zone 13 North, WGS84. The background image is a true-color composite from Landsat8/TM acquired on 8 August 2014.

Table 1. The canopy structure parameters of the YJP site.
Ha is the height of the lower, foliage-free part of the tree; Hb is the height of the crown; R is the mean radius of the tree crowns; Ws is the mean width of foliage elements projected on a plane perpendicular to the solar direction; and G is the leaf projection function in Equation (10).

Tree Density: 4000 trees/ha | LAItrue: 2.7 | ΩE: 0.72 | γE: 1.43 | Ha: 0.5 | Hb: 2.5 | R: 0.85 m | Ws: 0.17 m | G: 0.5

3.2. 3D Reference Scene Construction

Ground truth LAI can be difficult to acquire by destructive methods for tall forest canopies, so previous studies usually relied on indirect LAI measurements by optical sensors, such as LAI-2000 and TRAC, as the reference values for comparing different methods [30]. [12] reported that 80% accuracy can be achieved by LAI-2000 and TRAC when operated carefully, which suggests it is difficult to draw firm conclusions in method comparisons if the uncertainty of the reference values is relatively large. The 3D realistic structural scenes can account for the location, orientation, size, and shape of every individual leaf within a canopy, and thus have the advantage that the true value of each parameter, such as LAI, is known exactly, which provides an objective ground truth for the validation of various indirect LAI measurement methods [37,51]. In fact, 3D realistic structural scene-based approaches have been widely used as the "surrogate truth" in the RAdiative transfer Model Intercomparison (RAMI) exercise conducted by the Joint Research Centre, European Commission [52]. It should be noted that the following analysis was based on synthetic images from the 3D scene instead of real images. The canopy structure parameters acquired in the field measurements at the YJP site (Table 1), except γE, were used as inputs to construct the 3D scene with the 3ds Max software and the MAXScript language [53].
Since it was almost impossible to construct conifer forests at the needle level due to the huge number of needles, the shoot was used as the elementary unit in the 3D reference scene construction. The needle-to-shoot area ratio (γE) was therefore set to 1 and, thus, Ω = ΩE, which was equivalent to treating the forest as broadleaf trees. The constructed 3D scene has an area of 125 m² with 50 crowns. For synthetic images under sunny conditions, the solar zenith angle θs and solar azimuth angle φs were set to −30° and 180°, respectively. Multi-angular downward-looking synthetic images under axonometric projection in the principal plane (PP) and in the cross plane (CP) were then generated at 900 × 900 pixels with 8-bit digitization per color channel, with the view zenith angle θv ranging from −50° to 50° at 10° intervals, as partly shown in Figure 3. For synthetic images under overcast conditions, the illumination was set to be isotropic, so no significant shadows appear in the synthetic images. The 3D software can determine whether each leaf is both sunlit and visible by a ray tracing method. The sunlit foliage component (PT), shaded foliage component (ZT), gap fraction (GF), and true LAI can be directly extracted from the constructed 3D scene by definition as the ground truth values. The 3D reference values for the clumping index (CI) at different view angles can also be derived from the 3D scene by the definition in Equation (6), which is more direct than methods such as the LX, CC, or path length distribution-based method in Section 2.3. PT was extracted from synthetic images on sunny days; GF and CI were derived from synthetic images on overcast days; ZT was derived from all of the synthetic images, because ZT is the difference between the probability of seeing the foliage and that of seeing the sunlit foliage.

Figure 3. The downward-looking synthetic images from the constructed 3D scene of the YJP site under clear-sky conditions.
The components in bright green, bright brown, dark green, and dark brown correspond to sunlit foliage, sunlit background, shaded foliage, and shaded background, respectively. (a–c) correspond to synthetic images generated in the principal plane with view zenith angles θv of 30°, 0°, and −30°, respectively. The solar zenith angle θs and solar azimuth angle φs are −30° and 180°, respectively.

3.3. In Situ Measurements at the Huailai Site

The Huailai Remote Sensing Experimental Station is located in Beijing, in northern China (40.348°N, 115.783°E). The camera was fixed on a tower crane about 23 m above the ground surface, and the size of the images was 2736 × 1824 pixels. In total, 18 actual digital images were acquired under clear-sky conditions by the tower crane at nadir view (θv = 0°) in 2014 over two different vegetation types: aspen and corn. Among them, seven images were taken over the aspen on 27 July, and four, three, and four images were taken over the corn on 27 July, 18 August, and 20 September, respectively. The illumination geometry, including the solar zenith angle and the solar azimuth angle for each image, was determined from the location of the Huailai site and the exact acquisition time of each image. The heights of the aspen were measured at 35 sampled trees in the seven images by a laser distance meter, and the corn heights were measured at 20 sampled plants for each time phase. The average canopy height of the aspen was about 7.8 m on 27 July, and the average corn height was about 1.6 m, 1.9 m, and 2.0 m on 27 July, 18 August, and 20 September, respectively. The mean foliage diameter dL of the sampled aspen leaves was 0.16 m, and dL of the sampled corn leaves was 0.15 m, 0.18 m, and 0.20 m on the three dates, respectively. The spherical LAD was assumed for the aspen and corn, a reasonable assumption when no LAD measurement data are available [48].
The size of the ground surface corresponding to each image was about 25.2 m × 16.8 m, and the four corners of the ground area in each image were labeled for the LAI and element clumping index (ΩE) measurements. The effective LAI was measured by LAI-2000 at nine points evenly distributed over each image area after sunset on the image acquisition day. The element clumping index (ΩE) for the aspen was measured by TRAC along two transects under clear-sky conditions. The corn was considered a homogeneous canopy because its row structure was not pronounced, and thus the element clumping index (ΩE) for the corn was set to 1. The needle-to-shoot area ratio (γE) was set to 1 because both the aspen and the corn are broadleaf plant species.

4. Results and Analysis

4.1. Scene Component Extraction

The sunlit foliage component (PT) extracted by the automated LAB2 image classification algorithm from 22 images in the principal plane and the cross plane under sunny conditions was compared with that determined by the 3D software, as in Figures 4 and 5. The minimum and average overall accuracies of the LAB2 algorithm for the 22 images were 91.4% and 93.6%, respectively. The overall accuracy is the proportion of pixels correctly identified as sunlit foliage or background among all pixels of the image. From visual inspection of Figures 3b and 4, the sunlit foliage was effectively extracted by the LAB2 algorithm. The slight misclassification mainly occurred on the blurred edges of sunlit leaves, which might be due to the smoothing of the penumbra effect and to light scattering and diffraction [54]. Figure 5 suggests that the R², root-mean-square error (RMSE), and bias of the PT extracted by the LAB2 algorithm from the 22 images were 0.98, 0.04, and −0.03, respectively.
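The two bookkeeping quantities used throughout this section, overall accuracy and the component area ratio, amount to simple pixel counts over boolean masks; a minimal sketch (function names are ours) is:

```python
import numpy as np

def overall_accuracy(pred_mask, ref_mask):
    """Proportion of pixels whose predicted class (sunlit foliage vs.
    background) agrees with the reference classification."""
    return float(np.mean(pred_mask == ref_mask))

def area_ratio(mask):
    """Area ratio of a component, e.g. PT = NT / N for sunlit foliage."""
    return float(np.mean(mask))
```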
The intercept of the regression line was negative, which suggests that the PT extracted by the LAB2 algorithm was negatively biased when PT was small. The slope of the regression line was larger than 1, which indicates that the LAB2 algorithm can overestimate PT when PT is large.

Figure 4. The classification results of sunlit foliage (in black) and background (in white) by the automated LAB2 algorithm (a) and by the 3D software (b). The original RGB image was acquired at nadir view (θv = 0°) as in Figure 3b.

Figure 5. Comparison between the area ratio of the sunlit foliage component (PT) extracted by the LAB2 algorithm (vertical axis) and the 3D reference values (horizontal axis) from 22 images in the principal plane and in the cross plane.

The area ratios of the different scene components, including sunlit foliage (PT), shaded foliage (ZT), and gap fraction (GF), extracted by the LAB2 algorithm, together with the 3D reference values at different view zenith angles in PP and CP, are shown in Figure 6. In PP, the sunlit foliage component (PT) reached its maximum in the hot spot direction, where the illumination and view directions coincide (θv = −30°) and no shadows are visible, but dropped quickly with increasing phase angle between the directions to the sun and the camera. By contrast, a valley can be seen around the hot spot for the shaded foliage component (ZT), and the value of ZT increased significantly on both sides of the hot spot region. In CP, PT increased slightly from nadir view to large view zenith angles, while ZT increased more sharply than PT. In both PP and CP, the gap fraction (GF) reached its maximum at nadir view and decreased smoothly with increasing view zenith angle. Compared with PT and ZT, the gap fraction (GF) extracted by the LAB2 algorithm exhibited the best performance, with an RMSE of 0.03.
The shaded foliage (ZT) had the lowest accuracy, with an RMSE of 0.07.

Figure 6. The area ratios of the sunlit foliage component (PT), shaded foliage component (ZT), and gap fraction (GF) extracted by the LAB2 algorithm, together with the 3D reference values, in the principal plane (a) and in the cross plane (b), with a solar zenith angle (SZA) of −30°.

4.2. Clumping Index Estimation

The clumping index (CI) estimated by two indirect methods and the 3D reference values at different view zenith angles in PP and CP are shown in Figure 7. In both PP and CP, the CI increased with increasing view zenith angle (θv) and reached its minimum at nadir view, which suggests the clumping effect for such discontinuous canopies is most significant when θv = 0°. In PP, the CI was underestimated by both the path length-based method and the LX method compared with the 3D reference values. Both methods performed slightly better in CP than in PP, and the accuracy of the image classification distinguishing foliage from background in the previous step may have an impact on the CI estimation. In both PP and CP, the variation of the CI estimated by the LX method against θv was much smoother than that of the path length-based method and the 3D reference values, which suggests that the LX method may not be very sensitive to view zenith angle variations in the CI estimation. The statistical results for the evaluation of the path length-based method and the LX method against the 3D reference values at different view zenith angles in PP and CP are shown in Figure 8. The clumping index (CI) was slightly underestimated by both methods, with negative biases, while the scatter points of the path length-based method lay closer to the 1:1 line than those of the LX method.
The R², RMSE, and Relative Error (RE) of the path length-based method were 0.86, 0.05, and 6.6%, while those of the LX method were 0.46, 0.07, and 7.8%, respectively, which suggests that the path length-based method improved the accuracy of the clumping index (CI) estimation compared with the LX method.

Figure 7. The clumping index (CI) estimated by the path length distribution-based method in this study (CI_Path) and the LX method (CI_LX), compared with the 3D reference values (CI_3D), in the principal plane (a) and in the cross plane (b).

Figure 8. Comparison of the clumping index (CI) estimated by the path length distribution-based method (CI_Path) or the LX method (CI_LX) with the 3D reference values (CI_3D) from 22 images in the principal plane and in the cross plane.

4.3. Comparison of LAI Retrievals with Other Methods

The retrieved LAI and the corresponding RE of four different methods at different view zenith angles in PP and CP are shown in Figure 9. The four methods comprise the retrieval of LAI by the sunlit foliage component under sunny conditions with the CI estimated by the path length-based method (LAI_Path) and by the LX method (LAI_LX), and the retrieval by the directional gap fraction model under overcast conditions (LAI_Overcast) and under sunny conditions (LAI_Sunny) with the CI estimated by the widely used LX method. In PP, the LAI_Path and LAI_LX methods underestimated the LAI in the forward scattering directions with view zenith angle θv > −30°, while overestimating the LAI in the backward directions with θv < −30°. The largest RE for the two methods in PP occurred at large view zenith angles in the backward directions, which was probably due to the symmetrical characteristic of the Kuusk hot spot function [35]. In CP, the two methods underestimated the LAI when the view zenith angle θv ≤ 30°, while overestimating the LAI at large view zenith angles when θv > 30°.
The LAI_Overcast method overestimated the LAI in PP, especially at large view zenith angles, while achieving the highest accuracy among the four methods in CP.

Figure 9. The retrieved LAI and the corresponding Relative Error (RE) of four different methods in the principal plane (a) and in the cross plane (b). LAI_True is the true LAI set in the 3D realistic structural scene, which serves as the ground reference value; LAI_Path and LAI_LX are retrievals of LAI by the sunlit foliage component under sunny conditions with the CI estimated by the path length-based method and the LX method, respectively; LAI_Overcast and LAI_Sunny are retrievals of LAI by the directional gap fraction model under overcast and sunny conditions, respectively.

The directional gap fraction model underestimated the LAI in both PP and CP under sunny conditions (LAI_Sunny), as in Figure 9, because of the difficulty of correctly extracting the shaded foliage component from the shadowed parts of the images in the automated image classification. The misclassification of shaded foliage as background led to an overestimation of the gap fraction, and eventually resulted in the underestimation of LAI by the gap fraction model. In PP, LAI_Sunny achieved its highest accuracy in the hot spot direction, where the area ratio of the shaded foliage component is minimal, and the amplitude of the RE increased as the view direction moved away from the hot spot direction, as in Figure 9a. The trend of the RE amplitude in PP was correlated with that of the shaded foliage fraction (ZT) in Figure 6a, which also increased with increasing phase angle between the directions to the sun and the camera. The largest RE amplitude in PP occurred at a large view zenith angle (e.g., θv = 50°) in the forward scattering direction. In CP, the RE amplitude reached its minimum at nadir view and increased with increasing view zenith angle (θv), as in Figure 9b.
The trend of the RE amplitude in CP was also similar to that of the shaded foliage fraction (ZT) in Figure 6b, which was symmetrical about the nadir view direction. The evaluation results for the performances of the four different methods over the 22 images in PP and CP are shown in Table 2. The LAI_Overcast method achieved the highest accuracy among the four methods, with an RMSE of 0.32 and an RE of 9.0%, which suggests that the directional gap fraction model is the best choice for LAI extraction under overcast conditions. The LAI_Path method proposed in this study performed best among the three methods using images acquired under sunny conditions, with an RMSE of 0.35 and an RE of 11.4%, which can meet the accuracy requirement (0.5; 20%) of the GCOS. The LAI_Path method performed slightly better than the LAI_LX method, mainly because of the more accurate estimation of the clumping index described in Section 4.2. The LAI_Sunny method had the lowest accuracy among the four methods, with an RMSE of 1.61 and an RE of 55.9%, significantly outside the accuracy requirement (0.5; 20%) of the GCOS. This suggests that the directional gap fraction model is inappropriate for images acquired under clear-sky conditions, due to the uncertainties in the shaded foliage extraction, and that it is more accurate to extract LAI from the sunlit foliage component on sunny days.

Table 2. The statistical results for the performances of the four LAI retrieval methods over 22 images in the principal plane and in the cross plane. The four methods (LAI_Path, LAI_LX, LAI_Overcast, and LAI_Sunny) are the same as in Figure 9.

Method: LAI_Path | LAI_LX | LAI_Overcast | LAI_Sunny
RMSE: 0.35 | 0.40 | 0.32 | 1.61
Relative Error (RE): 11.4% | 12.1% | 9.0% | 55.9%

4.4. Applications at the Huailai Site

The actual downward-looking digital images acquired under clear-sky conditions over a corn region and an aspen region at the Huailai site are shown in Figure 10, together with the classification results of the automated LAB2 algorithm. From visual interpretation, the sunlit foliage components of the two vegetation types were generally well extracted by the LAB2 algorithm, which laid the foundation for the further steps of LAI extraction. Uncertainties in the image classification will affect the final LAI extraction accuracy through error propagation in the subsequent procedures.

Figure 10. The actual downward-looking digital images acquired under clear-sky conditions over a corn region on 20 September 2014 (a) and over an aspen region on 27 July 2014 (b) at the Huailai site. Both images were acquired at nadir view (θv = 0°). The classification results of sunlit foliage (in black) and background (in white) by the automated LAB2 algorithm for the corn region and the aspen region are shown in (c,d), respectively.

The comparison between the LAI retrieved from 18 images acquired on sunny days and the field-measured true LAI at the Huailai site in 2014 is shown in Figure 11. The field-measured true LAI was calculated as the ratio of the effective LAI measured by LAI-2000 to the element clumping index measured by TRAC. The two methods compared are the proposed LAI extraction method by the sunlit foliage component under sunny conditions with the clumping index estimated by the path length-based method (LAI_Path), and the retrieval of LAI by the directional gap fraction model under sunny conditions with the clumping index estimated by the widely used Lang and Xiang (LX) method (LAI_Sunny).
It can be seen from the 1:1 line that the directional gap fraction model underestimated the LAI, with a bias of −1.38, under sunny conditions (LAI_Sunny); this was due to the misclassification of shaded foliage as background, which resulted in an overestimation of the gap fraction. The proposed LAI extraction method by the sunlit foliage component (LAI_Path) lay closer to the 1:1 line, with a slope closer to 1, than the directional gap fraction model. The proposed method slightly overestimated the LAI at low vegetation cover (LAI < 3), while underestimating the LAI at high vegetation cover (LAI > 5). The R², RMSE, and Relative Error (RE) of the directional gap fraction model were 0.86, 1.55, and 36.2%, while those of the proposed LAI extraction method were 0.89, 0.49, and 11.6%, respectively. This suggests that the directional gap fraction model may not meet the accuracy requirement (0.5; 20%) of the GCOS on sunny days, and that the proposed LAI extraction method by the sunlit foliage component improved the accuracy of the LAI estimation from (1.55; 36.2%) to (0.49; 11.6%) compared with the directional gap fraction model.

Figure 11. Comparison between the LAI retrieved from 18 images acquired on sunny days, with the corresponding Relative Error (RE), and the field-measured true LAI by LAI-2000 and TRAC at the Huailai site in 2014. LAI_Path is the retrieval of LAI by the sunlit foliage component under sunny conditions with the clumping index estimated by the path length-based method, and LAI_Sunny is the retrieval of LAI by the directional gap fraction model under sunny conditions with the clumping index estimated by the widely used Lang and Xiang (LX) method.

5. Discussion

One of the advantages of near-surface remote sensing is its high spatial resolution, which can distinguish the foliage from the background and makes it possible to extract LAI continuously by near-surface imaging sensors [15,16,54].
The directional gap fraction model can achieve the highest accuracy under overcast conditions (LAI_Overcast), as in Figure 9, while the performance of the traditional gap fraction model on sunny days was quite poor (LAI_Sunny), significantly outside the accuracy requirement (0.5; 20%) of the GCOS. The poor performance of the gap fraction model on sunny days has limited the extraction of LAI by near-surface remote sensing under different illumination conditions. The proposed LAI extraction method by the sunlit foliage component (LAI_Path) achieved an accuracy of (0.35; 11.4%) on sunny days, which can meet the accuracy requirement (0.5; 20%) of the GCOS. Although the accuracy of the proposed method on sunny days (LAI_Path) was slightly lower than the (0.32; 9.0%) of the gap fraction model under overcast conditions (LAI_Overcast), partly due to the penumbra smoothing effect under sunny conditions, the proposed method greatly improved the accuracy from the (1.61; 55.9%) of the gap fraction model on sunny days (LAI_Sunny). The proposed method relaxes the required illumination conditions for the extraction of LAI by near-surface remote sensing from overcast conditions only to all light conditions. The uncertainties of the non-fisheye or fisheye digital photography-based LAI extraction methods mainly come from the image classification, the canopy structure parameter estimation (e.g., Ω and G), and the LAI inversion model [9,47]. Illumination conditions and camera exposure settings have a crucial impact on the image classification for gap fraction estimation, and sunny illumination conditions can lead to poor gap fraction estimates due to the misclassification of the shadowed parts of the images, as in previous studies [29,54]. In this study, the average overestimation of the gap fraction, which is in fact the proportion of the shaded foliage component (ZT), can be as large as 0.23 in the principal plane and 0.26 in the cross plane.
The consequent underestimation of LAI is (1.61; 55.9%), which is clearly outside the accuracy requirement (0.5; 20%) of the GCOS. The underestimation of LAI on sunny days is consistent with previous studies, which strongly recommend that overcast conditions be preferred, when feasible, to guarantee the image classification accuracy [9,29,30]. Misclassification can occur on the blurred edges of leaves due to light scattering and diffraction or the penumbra smoothing effect under sunny conditions, as also reported by [54]. While tedious and time-consuming image processing used to be considered the main weakness of DHP [9], the LAB2 algorithm enables automated image processing and accurate extraction of the sunlit foliage component, with a minimum overall accuracy of 91.4%. The Lang and Xiang (LX) method is shown in this study to be not very sensitive to variations of the view zenith angle (θv) in the clumping index (CI) estimation. The underestimation of CI by the LX method has also been reported in previous studies; it can be explained by the assumption that the foliage elements are randomly distributed within the finite-length segment, while empty segments (no gaps) or large gaps between tree crowns give erroneous results [37,41,55,56]. The path length distribution-based method avoids the assumption of a constant path length and can effectively characterize the non-randomness within canopies, which improved the accuracy of the CI estimation compared with the LX method. For the leaf projection function (G) and LAD estimation, previous studies have started to use multi-directional gap fraction measurements from the five rings of LAI-2000 or from different angular sectors of the DHP image for a joint retrieval of LAI, Ω, and G [9,29], while [57] argued that further work was still needed to separate the effects of foliage clumping (Ω) on the estimation of G.
The photographic method adopted in this study, which analyzes leveled digital camera images, provides an independent way to estimate LAD and G that avoids the influence of other canopy structure parameters, e.g., Ω and LAI [47,48]. The development of the GO model that quantifies the sunlit foliage component relaxes the required illumination conditions for digital photography from overcast conditions only to all light conditions [33]. Indeed, the state of the art in forward models can significantly improve the theory and methods of ground observations. For example, the apparent clumping index proposed by [58] makes it possible for the LAI-2000 instrument to quantify clumping effects, and the 1D bidirectional transmission model developed by [59] overcomes the traditional assumption of LAI-2000 that foliage absorbs all radiation in the blue band, providing a mechanism for the use of the instrument under direct sunlight in the latest version, LAI-2200C. The largest overestimation of LAI by the proposed method (LAI_Path) occurred at large view zenith angles on the backward side of the principal plane (e.g., θv = −50°). This is mainly due to the underestimation of the sunlit foliage component by the current GO model in the backward directions, a phenomenon also reported by [53] and [60]. Thus, further development is still needed for the GO model to simulate the sunlit foliage component more accurately in the backward scattering directions. In this study, the woody parts are neglected, because the shadowing effect of the trunk, stems, branches, and twigs is less of a problem when extracting sunlit foliage from downward-looking images, while woody elements may significantly influence the extinction and gap fraction measurements of upward-pointing cameras and LAI-2000 [15,47].
Additionally, the YJP site in this study lacks a significant understory and its canopies are relatively open, so the sunlit background can be easily distinguished; in general, the contrast between the foliage and the background understory affects the accuracy of image classification and sunlit foliage extraction [38,50]. It is promising to use photographs taken at different times on a sunny day, with varying solar zenith angles, to reduce the LAI inversion uncertainty. Finally, the digital photography method may fail due to gap saturation when canopies reach closure and the visible sunlit foliage component reaches its upper limit, especially in relatively dense tropical rainforest; this is a limitation shared by other indirect methods [30]. The sensitivity to saturation differs slightly between viewing geometries, because saturation is related to reflectance for downward-looking sensors, such as satellite or near-surface remote sensing, whereas it is linked to transmittance for upward-pointing sensors such as the LAI-2000. The proposed downward-looking method underestimated LAI at high vegetation cover (LAI > 5) compared with the upward-pointing LAI-2000 in Figure 11, which suggests that downward-looking methods might be more sensitive to saturation than upward-pointing methods, because the light intensity can still decrease while penetrating the canopy even after the canopy has closed and no sky can be observed from the ground surface.

6. Conclusions

Near-surface remote sensing using networked digital cameras has recently provided a low-cost way to continuously monitor vegetation dynamics at regional to continental scales. However, current indirect LAI measurements based on the directional gap fraction model can only use images acquired under diffuse sky conditions, whereas tower-based webcam images are also acquired under clear-sky conditions with direct sunlight.
The main challenge for images acquired on sunny days is that the shaded foliage and the shaded background are difficult to discriminate within the shadows. This study proposes a new method to extract LAI from the sunlit foliage component of downward-looking digital photography under clear-sky conditions. The method extracts the sunlit foliage component with the automated LAB2 image classification algorithm, estimates the clumping index with a path length distribution-based method, quantifies the LAD and the G function from leveled digital images, and eventually obtains the LAI by introducing a GO model that quantifies the sunlit foliage fraction. The proposed method was evaluated at the YJP site, Canada, using a 3D realistic structural scene constructed from field measurements. The proposed LAI extraction approach for sunny days was then applied to images acquired at the Huailai Remote Sensing Experimental Station in Beijing, China, from July to September 2014. The following conclusions can be drawn from this study: (1) The LAB2 algorithm enables automated image processing and accurate sunlit foliage component extraction with a minimum overall accuracy of 91.4%. (2) The widely used LX method tends to underestimate the clumping index, while the path length distribution-based method reduces the RE from 7.8% to 6.6%. (3) Using the current directional gap fraction model under sunny conditions can lead to an underestimation of LAI of (1.61; 55.9%), significantly outside the accuracy requirement (0.5; 20%) set by the GCOS. (4) The proposed LAI extraction method has an RMSE of 0.35 and an RE of 11.4% under sunny conditions, meeting the accuracy requirement of the GCOS.
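The directional gap fraction model underlying these comparisons is a Beer-Lambert relation, P(θv) = exp(−G(θv)·Ω·LAI/cos θv), and both its inversion and the gap-saturation effect noted in the discussion can be sketched numerically. The parameter values below are illustrative only, not the paper's YJP or Huailai results:

```python
import numpy as np

def gap_fraction(lai, G=0.5, ci=0.9, theta_v=0.0):
    """Directional gap fraction from the Beer-Lambert canopy model."""
    return np.exp(-G * ci * lai / np.cos(theta_v))

def invert_lai(P, G=0.5, ci=0.9, theta_v=0.0):
    """Invert the same model: LAI from a measured gap fraction."""
    return -np.cos(theta_v) * np.log(P) / (G * ci)

# Round trip: the inversion recovers the true LAI exactly.
print(invert_lai(gap_fraction(3.0)))  # 3.0

# Saturation: at LAI = 6 the nadir gap fraction is already ~0.07, so a
# 1% absolute classification error shifts the retrieved LAI noticeably:
print(invert_lai(gap_fraction(6.0) + 0.01) - 6.0)  # a ~0.3 drop in LAI
```

Because the gap fraction decays exponentially, the same absolute classification error costs far more retrieval accuracy in dense canopies than in open ones, which is the mechanism behind the LAI > 5 underestimation discussed above.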
The new method relaxes the required illumination conditions for digital photography from overcast conditions only to all light conditions, and can be applied to extract LAI from downward-looking webcam images, which is promising for regional- to continental-scale monitoring of vegetation dynamics and for validation of satellite remote sensing products.

Acknowledgments

This research was supported by the National Basic Research Program of China (No. 2013CB733401), the National Natural Science Foundation of China (No. 41271366), the National High Technology Research and Development Program of China (No. 2012AA12A304) and the CAS/SAFEA International Partnership Program for Creative Research Teams (No. KZZD-EW-TZ-09). The authors would like to thank the three anonymous reviewers for the constructive comments and suggestions that helped to improve the manuscript.

Author Contributions

Yelu Zeng designed this research and wrote the manuscript. Jing Li and Qinhuo Liu provided guidance and gave important suggestions during the revisions. Ronghai Hu and Xihan Mu calculated the clumping index and analyzed the data. Weiliang Fan designed and performed the experiments. Baodong Xu and Gaofei Yin offered support for the optimization of the algorithm. Shengbiao Wu interpreted the results and checked the writing.

Nomenclature

CC: The gap size distribution-based method to estimate CI, by Chen and Cihlar
CI: Clumping index
CP: The cross plane
DHP: Digital hemispherical photography
GCOS: Global Climate Observing System
GF: Gap fraction
GLA: Green leaf algorithm
GO: Geometric-optical model
LAD: Leaf angle distribution
LAI: Leaf area index
LAI_Path: The retrieval of LAI by the sunlit foliage component under sunny conditions, with the CI estimated by the path length-based method
LAI_LX: The retrieval of LAI by the sunlit foliage component under sunny conditions, with the CI estimated by the LX method
LAI_Overcast: The retrieval of LAI by the directional gap fraction model under overcast conditions, with the CI estimated by the LX method
LAI_Sunny: The retrieval of LAI by the directional gap fraction model under sunny conditions, with the CI estimated by the LX method
LX: The finite-length logarithmic gap averaging method to estimate CI, by Lang and Xiang
PCA: Plant Canopy Analyzer
PP: The principal plane
PG: Sunlit background component
PT: Sunlit foliage component
RE: Relative error
SZA: Solar zenith angle
TRAC: Tracing Radiation and Architecture of Canopies
YJP: The young jack pine site
ZG: Shaded background component
ZT: Shaded foliage component

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Bonan, G.B. Land-atmosphere interactions for climate system models: Coupling biophysical, biogeochemical, and ecosystem dynamical processes. Remote Sens. Environ. 1995, 51, 57–73.
2. Myneni, R.; Hoffman, S.; Knyazikhin, Y.; Privette, J.; Glassy, J.; Tian, Y.; Wang, Y.; Song, X.; Zhang, Y.; Smith, G. Global products of vegetation leaf area and fraction absorbed PAR from year one of MODIS data. Remote Sens. Environ. 2002, 83, 214–231.
3. Chen, J.M.; Black, T. Defining leaf area index for non-flat leaves. Plant Cell Environ. 1992, 15, 421–429.
4. Zeng, Y.; Li, J.; Liu, Q.; Qu, Y.; Huete, A.R.; Xu, B.; Yin, G.; Zhao, J. An optimal sampling design for observing and validating long-term leaf area index with temporal variations in spatial heterogeneities. Remote Sens. 2015, 7, 1300–1319.
5. Zeng, Y.; Li, J.; Liu, Q.; Li, L.; Xu, B.; Yin, G.; Peng, J.
A sampling strategy for remotely sensed LAI product validation over heterogeneous land surfaces. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2014, 7, 3128–3142.
6. Yang, W.; Tan, B.; Huang, D.; Rautiainen, M.; Shabanov, N.V.; Wang, Y.; Privette, J.L.; Huemmrich, K.F.; Fensholt, R.; Sandholt, I. MODIS leaf area index products: From validation to algorithm improvement. IEEE Trans. Geosci. Remote Sens. 2006, 44, 1885–1898.
7. Yin, G.; Li, J.; Liu, Q.; Fan, W.; Xu, B.; Zeng, Y.; Zhao, J. Regional leaf area index retrieval based on remote sensing: The role of radiative transfer model selection. Remote Sens. 2015, 7, 4604–4625.
8. Yin, G.; Li, J.; Liu, Q.; Li, L.; Zeng, Y.; Xu, B.; Yang, L.; Zhao, J. Improving leaf area index retrieval over heterogeneous surface by integrating textural and contextual information: A case study in the Heihe River Basin. IEEE Geosci. Remote Sens. Lett. 2015, 12, 359–363.
9. Jonckheere, I.; Fleck, S.; Nackaerts, K.; Muys, B.; Coppin, P.; Weiss, M.; Baret, F. Review of methods for in situ leaf area index determination. Agric. For. Meteorol. 2004, 121, 19–35.
10. Gower, S.T.; Norman, J.M. Rapid estimation of leaf area index in conifer and broad-leaf plantations. Ecology 1991, 1896–1900.
11. Marshall, J.; Waring, R. Comparison of methods of estimating leaf-area index in old-growth Douglas-fir. Ecology 1986, 975–979.
12. Chen, J.M. Optically-based methods for measuring seasonal variation of leaf area index in boreal conifer stands. Agric. For. Meteorol. 1996, 80, 135–163.
13. Leblanc, S.G.; Chen, J.M.; Fernandes, R.; Deering, D.W.; Conley, A. Methodology comparison for canopy structure parameters extraction from digital hemispherical photography in boreal forests. Agric. For. Meteorol. 2005, 129, 187–207.
14. Chen, J.M.; Rich, P.M.; Gower, S.T.; Norman, J.M.; Plummer, S. Leaf area index of boreal forests: Theory, techniques, and measurements. J. Geophys. Res. Atmos. 1997, 102, 29429–29443.
15.
Ryu, Y.; Verfaillie, J.; Macfarlane, C.; Kobayashi, H.; Sonnentag, O.; Vargas, R.; Ma, S.; Baldocchi, D.D. Continuous observation of tree leaf area index at ecosystem scale using upward-pointing digital cameras. Remote Sens. Environ. 2012, 126, 116–125.
16. Richardson, A.D.; Braswell, B.H.; Hollinger, D.Y.; Jenkins, J.P.; Ollinger, S.V. Near-surface remote sensing of spatial and temporal variation in canopy phenology. Ecol. Appl. 2009, 19, 1417–1428.
17. Sonnentag, O.; Hufkens, K.; Teshera-Sterne, C.; Young, A.M.; Friedl, M.; Braswell, B.H.; Milliman, T.; O'Keefe, J.; Richardson, A.D. Digital repeat photography for phenological research in forest ecosystems. Agric. For. Meteorol. 2012, 152, 159–177.
18. Qu, Y.; Han, W.; Fu, L.; Li, C.; Song, J.; Zhou, H.; Bo, Y.; Wang, J. LAINet—A wireless sensor network for coniferous forest leaf area index measurement: Design, algorithm and validation. Comput. Electron. Agric. 2014, 108, 200–208.
19. Ryu, Y.; Baldocchi, D.D.; Verfaillie, J.; Ma, S.; Falk, M.; Ruiz-Mercado, I.; Hehn, T.; Sonnentag, O. Testing the performance of a novel spectral reflectance sensor, built with light emitting diodes (LEDs), to monitor ecosystem metabolism, structure and function. Agric. For. Meteorol. 2010, 150, 1597–1606.
20. Hufkens, K.; Friedl, M.; Sonnentag, O.; Braswell, B.H.; Milliman, T.; Richardson, A.D. Linking near-surface and satellite remote sensing measurements of deciduous broadleaf forest phenology. Remote Sens. Environ. 2012, 117, 307–321.
21. Verhoef, W.; Bach, H. Coupled soil-leaf-canopy and atmosphere radiative transfer modeling to simulate hyperspectral multi-angular surface reflectance and TOA radiance data. Remote Sens. Environ. 2007, 109, 166–182.
22. Myneni, R.B.; Hall, F.G. The interpretation of spectral vegetation indexes. IEEE Trans. Geosci. Remote Sens. 1995, 33, 481–486.
23. Morton, D.C.; Nagol, J.; Carabajal, C.C.; Rosette, J.; Palace, M.; Cook, B.D.; Vermote, E.F.; Harding, D.J.; North, P.R.
Amazon forests maintain consistent canopy structure and greenness during the dry season. Nature 2014, 506, 221–224.
24. Guan, K.; Medvigy, D.; Wood, E.F.; Caylor, K.K.; Li, S.; Jeong, S.J. Deriving vegetation phenological time and trajectory information over Africa using SEVIRI daily LAI. IEEE Trans. Geosci. Remote Sens. 2014, 52, 1113–1130.
25. Garrity, S.R.; Bohrer, G.; Maurer, K.D.; Mueller, K.L.; Vogel, C.S.; Curtis, P.S. A comparison of multiple phenology data sources for estimating seasonal transitions in deciduous forest carbon exchange. Agric. For. Meteorol. 2011, 151, 1741–1752.
26. Zhang, X.; Friedl, M.A.; Schaaf, C.B. Global vegetation phenology from moderate resolution imaging spectroradiometer (MODIS): Evaluation of global patterns and comparison with in situ measurements. J. Geophys. Res. Biogeosci. 2006, 111, G04017.
27. Jonckheere, I.; Nackaerts, K.; Muys, B.; Coppin, P. Assessment of automatic gap fraction estimation of forests from digital hemispherical photography. Agric. For. Meteorol. 2005, 132, 96–114.
28. Baret, F.; De Solan, B.; Lopez-Lozano, R.; Ma, K.; Weiss, M. GAI estimates of row crops from downward looking digital photos taken perpendicular to rows at 57.5° zenith angle: Theoretical considerations based on 3D architecture models and application to wheat crops. Agric. For. Meteorol. 2010, 150, 1393–1401.
29. Demarez, V.; Duthoit, S.; Baret, F.; Weiss, M.; Dedieu, G. Estimation of leaf area and clumping indexes of crops with hemispherical photographs. Agric. For. Meteorol. 2008, 148, 644–655.
30. Liu, J.; Pattey, E. Retrieval of leaf area index from top-of-canopy digital photography over agricultural crops. Agric. For. Meteorol. 2010, 150, 1485–1490.
31. Li, X.; Strahler, A.H. Geometric-optical modeling of a conifer forest canopy. IEEE Trans. Geosci. Remote Sens. 1985, doi:10.1109/TGRS.1985.289389.
32. Chen, J.M.; Leblanc, S.G. A four-scale bidirectional reflectance model based on canopy architecture. IEEE Trans. Geosci. Remote Sens.
1997, 35, 1316–1337.
33. Fan, W.; Gai, Y.; Xu, X.; Yan, B. The spatial scaling effect of the discrete-canopy effective leaf area index retrieved by remote sensing. Sci. China Earth Sci. 2013, 56, 1548–1554.
34. Ross, J. The Radiation Regime and Architecture of Plant Stands; Tasks for Vegetation Science; Springer Netherlands: The Hague, The Netherlands, 1981; Volume 3.
35. Kuusk, A. The hot spot effect of a uniform vegetative cover. Sov. J. Remote Sens. 1985, 3, 645–658.
36. Kuusk, A. The hot spot effect in plant canopy reflectance. In Photon-Vegetation Interactions; Springer: Berlin, Germany, 1991; pp. 139–159.
37. Hu, R.; Yan, G.; Mu, X.; Luo, J. Indirect measurement of leaf area index on the basis of path length distribution. Remote Sens. Environ. 2014, 155, 239–247.
38. Macfarlane, C.; Ogden, G.N. Automated estimation of foliage cover in forest understorey from digital nadir images. Methods Ecol. Evol. 2012, 3, 405–415.
39. Louhaichi, M.; Borman, M.M.; Johnson, D.E. Spatially located platform and aerial photography for documentation of grazing impacts on wheat. Geocarto Int. 2001, 16, 65–70.
40. Liu, Y.; Mu, X.; Wang, H.; Yan, G. A novel method for extracting green fractional vegetation cover from digital images. J. Veg. Sci. 2012, 23, 406–418.
41. Lang, A.; Xiang, Y. Estimation of leaf area index from transmission of direct sunlight in discontinuous canopies. Agric. For. Meteorol. 1986, 37, 229–243.
42. Chen, J.M.; Cihlar, J. Plant canopy gap-size analysis theory for improving optical measurements of leaf-area index. Appl. Opt. 1995, 34, 6211–6222.
43. Wilson, J.W. Inclined point quadrats. New Phytol. 1960, 59, 1–7.
44. Goel, N.S.; Strebel, D.E. Simple beta distribution representation of leaf orientation in vegetation canopies. Agron. J. 1984, 76, 800–802.
45. Wang, W.-M.; Li, Z.-L.; Su, H.-B. Comparison of leaf angle distribution functions: Effects on extinction coefficient and fraction of sunlit foliage. Agric. For. Meteorol.
2007, 143, 106–122.
46. Zou, X.; Mõttus, M.; Tammeorg, P.; Torres, C.L.; Takala, T.; Pisek, J.; Mäkelä, P.; Stoddard, F.; Pellikka, P. Photographic measurement of leaf angles in field crops. Agric. For. Meteorol. 2014, 184, 137–146.
47. Ryu, Y.; Sonnentag, O.; Nilson, T.; Vargas, R.; Kobayashi, H.; Wenk, R.; Baldocchi, D.D. How to quantify tree leaf area index in an open savanna ecosystem: A multi-instrument and multi-model approach. Agric. For. Meteorol. 2010, 150, 63–76.
48. Pisek, J.; Sonnentag, O.; Richardson, A.D.; Mõttus, M. Is the spherical leaf inclination angle distribution a valid assumption for temperate and boreal broadleaf tree species? Agric. For. Meteorol. 2013, 169, 186–194.
49. Pisek, J.; Ryu, Y.; Alikas, K. Estimating leaf inclination and G-function from leveled digital camera photography in broadleaf canopies. Trees 2011, 25, 919–924.
50. Leblanc, S.G.; Bicheron, P.; Chen, J.M.; Leroy, M.; Cihlar, J. Investigation of directional reflectance in boreal forests with an improved four-scale model and airborne POLDER data. IEEE Trans. Geosci. Remote Sens. 1999, 37, 1396–1414.
51. Yin, T.; Lauret, N.; Gastellu-Etchegorry, J.-P. Simulating images of passive sensors with finite field of view by coupling 3-D radiative transfer model and sensor perspective projection. Remote Sens. Environ. 2015, 162, 169–185.
52. Widlowski, J.-L.; Pinty, B.; Lopatka, M.; Atzberger, C.; Buzica, D.; Chelle, M.; Disney, M.; Gastellu-Etchegorry, J.-P.; Gerboles, M.; Gobron, N. The fourth radiation transfer model intercomparison (RAMI-IV): Proficiency testing of canopy reflectance models with ISO-13528. J. Geophys. Res. Atmos. 2013, 118, 6869–6890.
53. Fan, W.; Li, J.; Liu, Q. GOST2: The improvement of the canopy reflectance model GOST in separating the sunlit and shaded leaves. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2015, 8, 1423–1431.
54. Zhang, Y.; Chen, J.M.; Miller, J.R. Determining digital hemispherical photograph exposure for leaf area index estimation. Agric. For.
Meteorol. 2005, 133, 166–181.
55. Pisek, J.; Lang, M.; Nilson, T.; Korhonen, L.; Karu, H. Comparison of methods for measuring gap size distribution and canopy nonrandomness at Järvselja RAMI (radiation transfer model intercomparison) test sites. Agric. For. Meteorol. 2011, 151, 365–377.
56. Gonsamo, A.; Pellikka, P. The computation of foliage clumping index using hemispherical photography. Agric. For. Meteorol. 2009, 149, 1781–1787.
57. Macfarlane, C.; Hoffman, M.; Eamus, D.; Kerp, N.; Higginson, S.; McMurtrie, R.; Adams, M. Estimation of leaf area index in eucalypt forest using digital photography. Agric. For. Meteorol. 2007, 143, 176–188.
58. Ryu, Y.; Nilson, T.; Kobayashi, H.; Sonnentag, O.; Law, B.E.; Baldocchi, D.D. On the correct estimation of effective leaf area index: Does it reveal information on clumping effects? Agric. For. Meteorol. 2010, 150, 463–472.
59. Kobayashi, H.; Ryu, Y.; Baldocchi, D.D.; Welles, J.M.; Norman, J.M. On the correct estimation of gap fraction: How to remove scattered radiation in gap fraction measurements? Agric. For. Meteorol. 2013, 174–175, 170–183.
60. Kuusk, A.; Kuusk, J.; Lang, M. Modeling directional forest reflectance with the hybrid type forest reflectance model FRT. Remote Sens. Environ. 2014, 149, 196–204.

© 2015 by the authors; licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution license (http://creativecommons.org/licenses/by/4.0/).

Experimental research on overlying strata movement and fracture evolution in pillarless stress-relief mining

Junhua Xue1 • Hanpeng Wang2 • Wei Zhou1 • Bo Ren1 • Changrui Duan1 • Dongsheng Deng1

Received: 20 February 2015 / Revised: 11 March 2015 / Accepted: 15 March 2015 / Published online: 21 May 2015
© The Author(s) 2015.
This article is published with open access at Springerlink.com.

Abstract In multiple-seam mining, the seam with relatively low gas content (the protective seam) is often extracted prior to mining its overlying and/or underlying seams of high gas content and low permeability, to minimize the risk of high gas emission and outbursts of coal and gas. A key to success with this mining sequence is a detailed understanding of the movement and fracture evolution of the overlying and underlying strata after the protective seam is extracted. In Zhuji mine, the No. 11-2 seam is extracted as a protective seam, using the pillarless mining method with retained goaf-side roadways, prior to its overlying No. 13-1 seam. An investigation has been undertaken in panel 1111(1) of Zhuji mine to physically simulate the movement and fracture evolution of the overlying strata after the No. 11-2 seam is extracted. In the physical simulation, the displacement, strain, and the deformation and failure process of the model were acquired by various means, such as grating displacement meters, strain gauges, and digital photography. The simulation results show that: (1) The initial caving interval of the immediate roof was 21.6 m; the first weighting interval was 23.5–37.3 m, with an average of 33.5 m; and the periodic weighting interval of the main roof was in the range 8.2–20.55 m, averaging 15.2 m. (2) The maximum height of the caving zone after the extraction of the No. 11-2 seam was 8.0 m, 4 times the seam mining height, and the strata within the caving zone collapsed irregularly. The mining-induced fractures developed 8–30 m above the mined No. 11-2 seam, 7.5–25 times the seam mining height; the fracture zone extended upward at about 65° from the seam open-off cut toward the goaf; the height of longitudinal joint growth was 4–20 times the seam mining height; and the height of lateral joint growth was 20–25 times the seam mining height.
(3) The "arch-in-arch" mechanical structure of the goaf interior was bounded by the expansion angle of the broken strata on the lateral side of the retained goaf-side roadway. The spatial and temporal evolution of the overburden's displacement and stress fields, and the dynamic development and distribution of the fracture field, were analyzed. Based on the simulation results, it is recommended that several goaf drainage methods, i.e. gas drainage with buried pipes in the goaf, surface goaf gas drainage, and cross-measure boreholes, be implemented to ensure the safe mining of panel 1111(1).

Keywords Low-permeability coal seam · Pillarless stress-relief mining · Overburden movement · Fracture evolution · Physical simulation

Corresponding author: Junhua Xue, xuejunhua2003@163.com
1 State Key Laboratory of Deep Coal Mining & Environment Protection, Huainan 232001, China
2 Geotechnical & Structural Engineering Research Center, Shandong University, Jinan 250012, China
Int J Coal Sci Technol (2015) 2(1):38–45. DOI 10.1007/s40789-015-0067-0

1 Introduction

The permeability of coal reservoirs in China is generally low. The coal seams in Huainan are typical "three-high, one-low, one-soft" seams, namely high gas content (12–26 m³/t), high gas pressure (up to 6.2 MPa), high rock stress (maximum principal stress up to 26.8 MPa), low seam permeability (0.001 mD), and soft coal strength (Yuan 2006). These complex geological conditions pose serious threats to coal mining. In high-gas seams, gas must be extracted before coal mining; however, the low permeability of the coal seams seriously impairs the gas drainage effect.
Based on the geological conditions of the coal seams in the Huainan mining area, the technology of integrated coal production and gas extraction was developed by Yuan Liang, through mining a protective seam with retained goaf-side roadways and a "Y"-type ventilation system. A key element of the technology is to significantly increase the permeability of the overlying and underlying protected seams, followed by gas extraction from the protected coal seams through boreholes drilled from the retained roadways (Yuan 2008). The technology of integrated coal production and gas extraction requires a detailed understanding of the deformation and failure of the surrounding rocks in working faces, and of gas migration and enrichment in and around the working faces, especially the fracture evolution of the overlying strata after the protective seam is extracted, as the fracture distribution significantly affects the target zones of gas drainage by boreholes. A large number of studies have been undertaken to understand the mining-induced fracture distribution in overlying strata (Karmis et al. 1983; Hasenfus et al. 1988; Bai and Elsworth 1990; Shu and Bhattacharyya 1993; Palchik 2002; Unver and Yasitli 2005; Cao et al. 2011; Li et al. 2012; Usanov et al. 2013; Liu et al. 2014). It is generally recognized that the overlying strata in longwall extraction can be divided into three different zones: the caving zone, the fracture zone and the continuous deformation zone. Qian et al. (2003) established the key strata theory of ground control. Song (1988) proposed the theory of the transferring rock beam. Liu (1995) attributed the damage of the overlying rock strata to "three zones", namely the caving zone, fracture zone and continuous deformation zone, and put forward the general understanding of "three horizontal zones" and "three vertical zones". Based on the findings of simulation tests, Deng et al. (1998), Guo et al. (2009) and Guo et al.
(2012) obtained the evolution characteristics of mining-induced fracture regularity and of the fractured rock mass. Xue and Xue (2012), Li et al. (2012) and Li and Zhang (2011) developed methods to predict the propagation height of mining-induced fractures, the height of the fractured zone in top-coal caving mining, and the non-steady-state evolution of fractures. They also revealed the development of two types of fractures in the overlying strata: delamination fractures and vertical fractures. Most of the above-mentioned studies were undertaken at normal mining depths; as mining depth increases, ground stress becomes quite high while the pore-forming rate and effective drainage time are relatively low (Palchik 2005; Li 2012; Szlazak et al. 2014). A physical modeling test has been undertaken to simulate the goaf-side roadway retaining process in mining the first panel 1111(1) of the protective No. 11-2 seam at Zhuji mine in the Huainan mining area. This paper presents the test findings, including the spatial and temporal evolution of the overburden's displacement and stress fields, and the dynamic development and distribution of the overburden's fracture field. The annual production of Zhuji mine is 4.0 million tons, and the currently mined seams are the No. 11-2 seam and the No. 13-1 seam. The No. 11-2 seam lies about 65 m below the No. 13-1 seam. Both seams are outburst prone. The No. 11-2 seam is mined as a protective seam to protect the No. 13-1 seam (the protected seam). The No. 11-2 seam is 1.26 m thick on average, flat, and has an average gas content of 4.95 m³/t. The No. 13-1 seam lies 65 m above the No. 11-2 seam and has an average gas content of 6.98 m³/t. The first extraction panel, 1111(1), is in the No. 11-2 seam and has a mining depth of about 910 m. The panel is 220 m wide and 1618 m long. It is mined with the fully-mechanized longwall retreat method. A goaf-side roadway is retained (pillarless mining) and Y-type ventilation is adopted for the panel extraction (Lu et al.
2013) (Fig. 1). A physical model was made to simulate panel 1111(1), as shown in Fig. 2. The simulation test was performed with a high-stress modeling system. The system has a double-gantry structure and a double-cylinder installation. The test conditions of plane strain and plane stress were simulated by installing or removing the front and rear rigid reaction beam devices. A touch-screen hydraulic loading control system was used to achieve synchronous non-uniform gradient loading and long-duration loading. The model strain and displacement were obtained with high-precision equipment, including a high-speed static strain test system, micro-precision grating displacement meters, a digital camera measuring system, and a high-precision total station.

Fig. 1 Zhuji mine in China
A total of 23 rock strata were laid in the model, containing three coal seams: No. 11-2 seam, No. 11-1 seam, and No. 13-1 seam. The No. 11-2 coal seam was 61.8 m below the No. 13-1 seam and 4.5 m above the No. 11-1 seam. A pillar of 200 mm wide was retained on mining boundaries of modeled working face and a pillar of 583 mm wide was retained on coal side to ensure similarity of the boundary conditions. In the model, a mining height was 35 mm, excavation length was 1100 mm; the cross- sectional dimension of the goaf-side retained roadway was 83.3 mm 9 50 mm (width 9 height), the left lane di- mension of the roadway was 66.7 mm 9 50 mm (width 9 height) and the support side lane of the roadway was 50 mm in width. The average density of the actual rock was 2.5 g/cm 3 , the average density of the simulated rock was 1.5 g/m 3 , and density similarity ratio was 5:3. Stress similarity constant was then calculated as 100 while similarity constants of rock tensile strength rc, compressive strength rf, bending strength rf, shear strength rs and elastic modulus E were all at 100. As the modeled mining depth was 910 m, the vertical stress of roadway original rock rvp = 22.75 MPa, and horizontal stress rHp = 11.4 MPa, therefore the vertical and horizontal stress in the model were set at rvm = 0.23 MPa, rHm = 0.11 MPa respectively. The strata numbered from 1 to 9 in Fig. 3 represents mudstone, No. 13-1 coal seam, sandy shale, No. 1 mud- stone, siltstone and fine sandstone, No. 2 mudstone, No. 11-2 coal seam, No. 11-1 coal seam, and other rock strata, respectively. 2.3 Simulation materials The fundamental forces that the overlying strata bear are the tension and compression, their damage take the form of shear and tensile failure, the deformation and failure of the strata are related to their elastic modulus, Poisson’s ratio and strength. 
Therefore simulation materials were selected by similarity conditions as follows: sand of particle size less than 1.5 mm as aggregate, gypsum and lime bound with water, mica powder as a laminated material to simu- late rock strata structure. Specific rock strata parameters are in Table 1. Fig. 2 High stress model test system Fig. 3 Test model layout and strata histogram 40 J. Xue et al. 123 2.4 Testing procedures The simulation test was carried out with the following procedures: (1) Making the modeled rock layer by layer, tamping and burying displacement and strain sensors in pre- determined position until the model was completed. (2) Removing the front and rear modules after the completed model was ready for 15 days, marking the grid of 100 mm 9 100 mm, and then let the model dry naturally for 7 more days. (3) Connecting the sensor lines to their respective instrument, testing the data acquisition system to make sure that it operate normally. (4) Applying boundary stress through the hydraulic system to compensate simulated stress field and allow it stabilizing for 3 days. (5) Excavating the roadway, recording surrounding rock deformation. (6) Opening a gap beside the roadway, filling the gap, and letting it dry naturally for 1 day. (7) Excavating along the edge of the filled gap along the direction of mining boundary, excavating 9 cm (equivalent to 5.4 m in actual roadway development rate), continue to excavate every 5 min, recording the time of each excavation, observing and pho- tographing after each excavation, while recording deformation and stress data until excavation reached the mining boundary line. 3 Analysis of test results 3.1 Goaf overburden fracture growth evolution In the test, the whole process was photographed with a high resolution digital single-lens reflex (SLR) camera (Fig. 4) and the overburden movement and fracture evolution of the pillarless mining can be clearly seen. As you can be seen in Fig. 
4, when the longwall face in the model retreated 36 cm, or 21.6 m in the field, the immediate roof mudstone caved in; when the retreat distance in the model was 54 cm, or 32.4 m in the field, the immediate roof mudstone caved in for the second time; when the distance in the model was 63 cm, or 37.8 m in the field, the fine sandstone layer of the basic roof initially caved in and the first weighting occurred; when the distance in the model was 72 cm, or 43.2 m in the field, the siltstone layer of the main roof caved in and bed-separation fractures of the overlying strata began to develop; and when the distance in the model was 99 cm, or 59.4 m in the field, the fine sandstone layer of the basic roof caved in for the second time, the initial periodic weighting interval of the main roof was measured to be 15.6 m in the field, a wide range of lateral and longitudinal fractures had developed above the main roof, and the maximum height of bed-separation fractures above the coal seam was 35 cm. The fine sandstone layer of the basic roof caved for the third time after the face had stopped for 1 day; the second periodic weighting interval of the main roof was 14.8 m in the field, the height of bed-separation fractures above the coal seam was 50 cm, some lower fractures were closed, and the bending zone, fracture zone and caving zone were clearly observed. The maximum height of the caving zone was 9.0 m, which was 7.5 times the mining height, and the rock blocks in the caving zone were irregularly shaped. The height of the mining-induced fractures ranged from 8.0 to 30 m, 7.5–25 times the mining height. The bed-separation fractures in the lower fracture zone were obvious. The fracture zone extended upward at about 65° from the face toward the goaf. The height of the longitudinal fractures was 4–20 times the mining height, and the height of the lateral fractures was 20–25 times the mining height. The bending zone was above the fracture zone.
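The model-to-field conversions quoted in this section all follow from a single geometric ratio (9 cm in the model corresponds to 5.4 m in the field, per the testing procedures). A minimal sketch reproducing the quoted retreat distances:

```python
GEOM_RATIO = 60  # 9 cm in the model = 5.4 m in the field (Sec. 2.4)

def model_cm_to_field_m(cm: float) -> float:
    """Convert a model length in centimetres to the field length in metres."""
    return cm * GEOM_RATIO / 100.0

# The retreat distances quoted in the text (model cm -> field m):
for cm, m in [(36, 21.6), (54, 32.4), (63, 37.8), (72, 43.2), (99, 59.4)]:
    assert abs(model_cm_to_field_m(cm) - m) < 1e-9
```

The same ratio converts the 50 cm bed-separation height in the model to 30 m in the field, matching the upper bound quoted for the mining-induced fracture height.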
Table 1 Physical and mechanical parameters of rocks

Strata No. | Lithology | Height (mm) | Density (kg/m³) | Compressive strength σcm (MPa)
1 | Mudstone | 220 | 1650 | 0.18
2 | No. 13-1 coal seam | 70 | 1650 | 0.08
3 | Sandy mudstone | 60 | 1650 | 0.25
4 | Mudstone | 110 | 1650 | 0.20
5 | Siltstone | 35 | 1650 | 0.32
6 | Fine sandstone | 90 | 1650 | 0.45
7 | Mudstone | 50 | 1500 | 0.18
8 | No. 11-2 coal seam | 20 | 1500 | 0.08
9 | No. 11-1 coal seam | 15 | 1500 | 0.08

Experimental research on overlying strata movement and fracture evolution… 41 123

The height of the bending zone was more than 25 times the mining height, and the rock strata movement in the bending zone was observed to be continuous. Generally, the stress-relieved coal seam was dominated by transverse fractures and swelled longitudinally. There was no longitudinal fracture between the bending zone and the fracture zone. These results suggest that the roof on the goaf side of the retained roadway broke in the form of a "cantilever" because of the support of the filling wall; the unanchored end of the cantilever pointed upward and lay on the outer side of the filling wall. There was no significant deformation or failure in the roof of the retained roadway, indicating that the filling wall could ensure the stability of a retained roadway. In the inclination direction, an "arch-in-arch" space–time structure evolved in the pillarless mining: the initial caving arch and the periodic caving arches constituted the lower arch, the overlying fracture zone and bending zone made up the upper arch, the lower arch was contained within the upper arch, and the upper arch expanded constantly in the longitudinal and vertical directions while the lower arch moved forward at the same time.

3.2 Displacements

Rock strata displacements obtained by the multi-point displacement meter and total station are shown in Figs. 5 and 6. As can be seen from Fig. 5, the roof displacement at the level of 52 cm above the coal seam was very small, suggesting a caving-zone height of 50 cm.
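Under the stated stress similarity constant of 100, the model strengths in Table 1 imply in situ strengths one hundred times larger. The sketch below illustrates this scaling; the prototype values are an inference from the similarity constant, not figures reported in the paper.

```python
C_SIGMA = 100  # stress similarity constant from Section 2.2

# Model compressive strengths from Table 1 (MPa); a subset for illustration
model_mpa = {
    "mudstone": 0.18,
    "siltstone": 0.32,
    "fine sandstone": 0.45,
    "coal": 0.08,
}

# Implied prototype strengths (MPa), inferred via the similarity constant:
prototype_mpa = {rock: s * C_SIGMA for rock, s in model_mpa.items()}
# e.g. fine sandstone: 0.45 MPa in the model corresponds to ~45 MPa in situ
```

This is why the weak sand/gypsum/lime mixes of Section 2.3 can stand in for rock: strength, like stress, scales by the same constant of 100.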
Roof displacement at the level of 12 cm above the coal seam, as a function of excavation step, is shown in Fig. 6. As can be seen from Fig. 6, the area 0–15 m ahead of the working face was intensively affected by mining, and the convergence velocity between the roof and floor increased rapidly; the area 15–45 m ahead of the working face was also affected by mining to some extent, but much less than the area 0–15 m ahead of the face; and the area 70 m ahead of the face was basically unaffected by mining.

Fig. 4 Overburden movement and fracture evolution in pillarless mining
Fig. 5 Roof displacement distribution at different heights
Fig. 6 Displacement at the level of 12 cm above the seam with excavation step

Figure 7 shows the roof vertical displacement contour and vector plots. The colour in the figure represents the intensity of displacement and fracturing; a darker colour indicates relatively more displacement and fracturing.

3.3 Analysis of strain field

Figure 8 shows the strain field distribution as vertical strain and shear strain contours. As can be seen from the vertical strain contour, the tensile strain was larger where strata breakage and fracturing were developing in the caving zone and fracture zone. As can be seen from the shear strain contour, there was an obvious zonal distribution of shear strain extending upward from the working face, inclined toward the mining direction, and above the goaf on the side of the retained roadway.

3.4 Roof movement characteristics on goaf side

According to the results of the simulation test, the overburden fracture region can be divided into three zones: the fracture initiation and development zone, the fracture closure zone, and the fracture stable development zone. The specific overlying strata movement characteristics related to these fracture zones are shown in Fig. 9. As can be seen in Fig.
9, there were rupture-dilative rock areas along the goaf-side direction of the retained roadway, bounded by the rupture line; this is the fracture stable development zone defined in the previous paragraph. The overburden caving pattern in the goaf under the retained roadway was similar to that at the open-off cut on the goaf side, and periodic roof caving occurred in both directions of the working face. The roof on the goaf side under the retained roadway broke in the form of a "cantilever" with layered caving, and the cantilever breaking point acted as a rotation axis for the overlying strata. Owing to differences in the physical, mechanical and geometric parameters, the breaking period of the overlying strata varies, resulting in the formation of a stable structure of fracture development; this structure cannot be compacted because of the balancing action of the lateral stability region. The fracture closure zone lies in the middle of the goaf. The fracture stable development zone is on the side of the retained roadway; a fan-shaped fracture region (the "O" ring) is formed in this area, which is also a gas accumulation area. The fracture initiation and development zone is close to the rear side of the working face; it differs from the fracture stable development zone in that it is still in the initiation and development process because of the continuous stoping. The fracture closure zone lies between the fracture initiation and development zone and the fracture stable development zone, and it evolves from the fracture initiation and development zone.

Fig. 7 Roof vertical displacement contour and vector
Fig. 8 Roof vertical strain and shear strain contour
Fig. 9 Overburden movement characteristics of pillarless mining

Stress relief in the protected No.
13 seam occurred between the stress relief lines owing to the subsidence of the lower strata; a "stress relief and permeability enhancement" effect therefore appeared in the stress relief zone, resulting in a significant increase in the permeability of the rock and coal seam in this zone. Effective gas drainage of high flow and purity should be obtainable in the "O" ring and the stress relief zone.

3.5 Engineering application of test results

Gas drainage has proven to be an effective method to control gas emission and outbursts of coal and gas in underground coal mining (Black and Aziz 2009; Yuan 2010; Karacan et al. 2011). Based on the simulation test, gas control measures were developed to manage gas issues in panel 1111(1). These measures mainly included goaf gas drainage with buried pipes in the goaf, surface vertical wells, and cross-measure boreholes. The return air volume was 2290–2700 m³/min during the gas drainage period, and the gas concentration in the return air was less than 0.6%. The gas extraction rate in the working face was more than 60%, with an average of 70%. With these gas control measures, panel 1111(1) was mined without encountering any gas-related safety issues. With the stress relief mining of the No. 11-2 coal seam, the protected area of the No. 13 coal seam was extensively de-stressed and de-gassed. The risk of coal and gas outburst during both the development of headings and panel extraction was minimized, and a significant gain was achieved in the production of panel 1111(1).

4 Conclusions

An investigation was undertaken in panel 1111(1) of the Zhuji mine to physically simulate the movement and fracture evolution of the overlying strata after the No. 11-2 seam was extracted. In the physical simulation, the displacement, strain, and deformation and failure process of the model were acquired by various means, such as grating displacement meters, strain gauges, and digital photography. The simulation results showed that: 1.
The initial caving interval of the immediate roof was 21.6 m, and the first weighting interval was from 23.5 to 37.3 m (33.5 m on average). The periodic weighting interval of the main roof varied between 8.2 and 20.55 m (15.2 m on average). 2. The maximum height of the caving zone after the No. 11-2 seam was mined was 8.0 m, or 4 times the mining height, and the strata within the caving zone collapsed irregularly. The mining-induced fractures developed 8.0–30 m above the mined No. 11-2 seam, which was 7.5–25 times the seam mining height; the fracture zone extended upward at about 65° from the seam open-off cut toward the goaf; the height of longitudinal joint growth was 4–20 times the seam mining height; and the height of lateral joint growth was 20–25 times the seam mining height. 3. The spatial and temporal evolution regularities of the overburden's displacement field and stress field, and the dynamic development process and distribution of the fracture field, were analyzed, and it was found that there was an "arch-in-arch" mechanical structure within the goaf, bounded by the expansion angle in the lateral direction of the retained goaf-side roadway. 4. Based on the simulation results, it is recommended that several goaf drainage methods, i.e. gas drainage with buried pipes in the goaf, surface goaf gas drainage, and cross-measure boreholes, should be implemented to ensure the safe mining of panel 1111(1).

Acknowledgments The program was supported by the National Natural Science Foundation of China (51427804) and the Open Fund of the State Key Laboratory of Deep Coal Mining & Environment Protection.

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
References

Bai M, Elsworth D (1990) Some aspects of mining under aquifers in China. Min Sci Technol 10(1):81–91
Black DJ, Aziz NI (2009) Developments in coal mine methane drainage and utilisation in Australia. In: Proceedings of the Ninth International Mine Ventilation Congress, Department of Mining Engineering, Indian School of Mines University, Dhanbad, India, 10–13 Nov 2009, pp 445–460
Cao AY, Dou LM, Li FC, Jiang H, Qu Y, Han RJ (2011) Study on characteristic of overburden movement in unsymmetrical isolated longwall mining using microseismic technique. Procedia Eng 26:1253–1262
Deng KZ, Zhou M, Tan ZX, Xu NZ (1998) Study on laws of rock mass breaking induced by mining. J China Univ Min Technol 27(3):261–264
Guo GL, Zha JF, Miao XX, Wang Q, Zhang XN (2009) Similar material and numerical simulation of strata movement laws with long wall fully mechanized gangue backfilling. Procedia Earth Planet Sci 1(1):1089–1094
Guo H, Yuan L, Shen BT, Qu QD, Xue JH (2012) Mining-induced strata stress changes, fractures and gas flow dynamics in multi-seam longwall mining. Int J Rock Mech Min Sci 54:129–139
Hasenfus GJ, Johnson KL, Su DWH (1988) A hydro geomechanical study of overburden aquifer response to long wall mining. In: Proceedings of the 7th conference of ground control in mining, pp 144–152
Karacan CÖ, Ruiz AF, Cotè M, Phipps S (2011) Coal mine methane: a review of capture and utilization practices with benefits to mining safety and to greenhouse gas reduction. Int J Coal Geol 86(2-3):121–156
Karmis M, Triplett T, Haycocks C, Goodman G (1983) Mining subsidence and its prediction in Appalachian coalfield. Rock Mech 22:665–675
Li YB (2012) Study on the integrated pillarless coal production and methane extraction technology. Adv Mater Res 1792(524):735–738
Li K, Zhang YB (2011) Research on overlying strata movement and deformation under the conditions of shortwall and longwall mining.
Appl Mech Mater 1446(90):1299–1302
Li CY, Chen J, Wu HF (2012) Field measurement research on overburden stratum movement and deformation. Appl Mech Mater 170:1151–1157
Liu TQ (1995) Influence of mining activities on mine rockmass and control engineering. J China Coal Soc 20(1):1–5
Liu JK, Dong C, Zhang SQ, Zhang Y (2014) Research on the rules of overlying rock movement by similar material simulation in fully mechanized top coal caving mining with high cutting height. Appl Mech Mater 3013(522):1419–1425
Lu P, Fang LC, Tong YF, Li GH, Zhang GF, Deng Z (2013) Relieved gas drainage and comprehensive control in gob of Y-type coal face in the first coal seam mining of deep multi-seams. J Min Saf Eng 30(3):456–462
Palchik V (2002) Influence of physical characteristics of weak rockmass on height of caved zone over abandoned sub-surface coalmines. Environ Geol 42(1):92–101
Palchik V (2005) Localization of mining-induced horizontal fractures along rock layer interfaces in overburden: field measurements and prediction. Environ Geol 48:68–80
Qian MG, Miao XX, Xu JL (2003) Dominant stratum theory for control of strata movement. China University of Mining and Technology Press, Xuzhou
Shu DM, Bhattacharyya AK (1993) Prediction of sub-surface subsidence movements due to underground coal mining. Geotech Geol Eng 11(4):221–234
Song ZQ (1988) Practical ground pressure control. China University of Mining and Technology Press, Xuzhou
Szlazak N, Obracaj D, Swolkien J (2014) Methane drainage from roof strata using an overlying drainage gallery. Int J Coal Geol 136:99–115
Unver B, Yasitli NE (2005) Modelling of strata movement with a special reference to caving mechanism in thick seam coal mining. Int J Coal Geol 66(4):227–252
Usanov SV, Mel'nik VV, Zamyatin AL (2013) Monitoring rock mass transformation under induced movements. J Min Sci 49(6):913–918
Xue JH, Xue S (2012) Retaining a goaf-edge gateroad in a three-entry panel layout.
Adv Mater Res 1792(524):681–685
Yuan L (2006) Key technique to high efficiency and safe mining in highly gassy mining area with complex geologic condition. J China Coal Soc 31(2):174–178
Yuan L (2008) Key technique of safe mining in low permeability and methane-rich seam group. Chin J Rock Mech Eng 27(7):1370–1379
Yuan L (2010) Concept of gas control and simultaneous extraction of coal and gas. China Coal 6:5–12

work_oerhj4bfabhnvlfgtgfld6kaji ----

Clinical Study
Sirolimus Ointment for Facial Angiofibromas in Individuals with Tuberous Sclerosis Complex

S. Amin,1 A. Lux,2 A. Khan,3 and F. O'Callaghan4
1 Paediatric Neurology, University College London and University Hospitals Bristol, London, UK
2 Paediatric Neurology, University Hospitals Bristol, Bristol, UK
3 Dermatology, University Hospital of North Durham, Durham, UK
4 Paediatric Neurosciences, University College London, London, UK

Correspondence should be addressed to S. Amin; sam.amin.14@ucl.ac.uk
Received 8 August 2017; Accepted 18 October 2017; Published 15 November 2017
Academic Editor: José María Huerta

Copyright © 2017 S. Amin et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Background. Facial angiofibromas affect most patients with tuberous sclerosis complex.
They tend to progress, can cause recurrent bleeding and facial disfigurement, and have significant psychological effects. We reviewed the effectiveness and safety of topical sirolimus ointment 0.1%. We also assessed the effect of treatment on quality of life. Methods. We report our experience in using sirolimus ointment in 14 patients with TSC (9 children and 5 adults). The impact of sirolimus ointment was monitored with digital photography, dermatological review using a validated Facial Angiofibroma Severity Index (FASI), and quality of life assessments using the questionnaires PedsQL for children and SF36 for adults. Results. The FASI scores improved in 12/14 cases after six months' treatment, and improvement was more likely in children (the median FASI score improvement after treatment was 3 points for children and 1 for adults). Proxy-reported PedsQL scores for the total psychosocial domain improved significantly with treatment in the children in the cohort. Conclusions. Sirolimus ointment 0.1% administered once a day was effective in treating facial angiofibromas. It appears to be safe and well tolerated and to have a positive impact on patients' quality of life. It appeared to be most beneficial when started in childhood.

1. Introduction

Tuberous sclerosis complex (TSC) is a neurocutaneous condition with an autosomal dominant pattern of inheritance; approximately two-thirds of cases are sporadic. The birth incidence has been estimated as 1 in 5,800 per year [1]. The condition is caused by mutations in the tumour suppressor genes TSC1 and TSC2. TSC1 encodes the protein hamartin and is located on chromosome 9, and TSC2 encodes the protein tuberin and is located on chromosome 16 [2, 3]. Hamartin and tuberin function together within the cell as a complex and have an inhibitory effect on the mammalian Target Of Rapamycin (mTOR), a protein kinase that affects cell growth and division through the regulation of protein synthesis [4, 5].
Mutation in either the TSC1 or TSC2 gene leads to overactivation of the mTOR pathway, which leads to uncontrolled cell growth. This, in turn, causes growth of benign tumours (hamartomas) in multiple organs such as the brain, kidneys, skin, heart, and lungs, which are the clinical hallmarks of the disease. Approximately 86% of patients with TSC suffer from facial angiofibromas, which are benign tumours on the face. They generally become noticeable at around the age of five years, and their severity varies significantly between patients. They can have a huge impact on a patient's self-esteem [6]. In addition, they are known to cause recurrent bleeding, irritation, infection, facial scarring, and disfigurement [7]. The mTOR inhibitors mimic the action of the TSC gene products by inhibiting the action of mTOR and thereby regulating the PI3 kinase-mTOR-S6 kinase intracellular growth pathway. Sirolimus (also known as rapamycin®) and its 40-O-(2-hydroxyethyl) derivative, everolimus, effectively inhibit mTOR [8–10]. In recent years, systemic mTOR inhibitors have been used to treat the complications of TSC, and it has been shown that they shrink TSC lesions such as renal angiomyolipomas and subependymal giant cell astrocytomas [11, 12]. However, their use is constrained by concerns about systemic side effects. There have been several case reports of sirolimus ointment for the treatment of facial angiofibromas, in which different sirolimus ointment strengths, preparations, and regimes were used. None of these studies used a validated tool to assess the severity of the rash and its response to treatment [13].

Hindawi International Scholarly Research Notices, Volume 2017, Article ID 8404378, 6 pages. https://doi.org/10.1155/2017/8404378
The best-established treatment options to date for facial angiofibromas are laser therapy and surgical removal of the large lesions. However, many patients also have learning difficulties and do not tolerate the sessions of laser therapy without multiple general anaesthetics. We report our experience in using sirolimus ointment 0.1% for facial angiofibromas in children and adults with TSC, using a standardised and validated Facial Angiofibroma Severity Index. We also assessed the effect of this treatment on quality of life.

2. Materials and Methods

Fourteen sequential patients with a definite diagnosis of TSC, as defined by the International Tuberous Sclerosis Complex Consensus Group [14], from our TS clinic at the Royal United Hospital in Bath started using sirolimus ointment from May 2014. All suitable patients were offered the treatment; we had 14 suitable patients at the time and did not exclude any. All patients who attended the clinic in May 2014 and had facial angiofibromas agreed to try sirolimus ointment. None of the patients were on systemic mTOR inhibitors before starting, or during, this treatment with topical sirolimus. The gender balance and prevalence of learning disabilities in our clinic cohort are similar to previously reported population-based TSC cohorts, suggesting that this clinic population is not grossly dissimilar from the TSC population at large [15].

Topical 0.1% sirolimus ointment was used. The patients were advised to apply a thin coating to the affected areas of the face, once a day in the evening. Each ointment pot contained 30 mg of sirolimus in 30 grams of ointment; this consisted of 15 sirolimus tablets (rapamycin 2 mg tablets) crushed and mixed with white soft paraffin. One pot was given for 6 weeks, and each pot cost approximately £180. The treatment was not funded by any pharmaceutical companies; the cost was covered by the NHS, as the treatment was part of the patients' standard care.
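The strength of the preparation described above can be checked arithmetically. This sketch treats the tablet excipient mass as negligible, as the text's "30 mg in 30 grams" implies:

```python
# Arithmetic check on the 0.1% w/w ointment preparation described above
# (tablet excipient mass treated as negligible, per the text's figures).
tablets = 15
mg_per_tablet = 2                      # rapamycin 2 mg tablets
sirolimus_mg = tablets * mg_per_tablet # 30 mg of sirolimus per pot

ointment_g = 30                        # grams of finished ointment per pot
percent_w_w = sirolimus_mg / 1000 / ointment_g * 100
print(round(percent_w_w, 3))           # 0.1, i.e. the 0.1% w/w strength used
```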
An information leaflet about topical sirolimus therapy was given to our patients at the start of the therapy. Patients were advised to use hydrocortisone cream should they suffer from irritation or a burning sensation. We routinely use digital photography to monitor the effectiveness of topical sirolimus in our patients; digital photography was undertaken at baseline and then at six months. Six photographs were taken with a digital camera from each patient at each visit: three with the camera flash on and three with the flash in automatic mode. One photograph was taken in full face view directly facing the camera and the other two were side profiles. The preferred facial expression was natural with both eyes open, although a smile was acceptable. Patients were advised not to wear makeup before having their photographs taken, and none of the patients in this series wore makeup during the visits. During each visit, patients, parents, and carers were asked to report on the degree of improvement, and the treating physicians also reported the effect of the ointment. In order to obtain a more objective assessment of the treatment, our consultant dermatologist analysed the photographs and scored treatment response in a blinded fashion: the dermatologist was given the pre- and posttreatment photographs for each patient without knowing which were taken before and which after treatment. The Facial Angiofibroma Severity Index (FASI) was used by the dermatologist to assess the severity of the rash. The FASI is an objective clinical tool that assesses the severity of facial angiofibromas and treatment response, and it has good interrater reliability (correlation coefficient s > 0.98; range 0.97–0.99) [16]. The score is obtained by summing the partial scores assigned to each of three features: erythema, size, and extent of lesions. Erythema and lesion size are scored from 0 to 3, while the extent is scored 2 to 3.
Mild, moderate, and severe facial angiofibromas correspond to FASI activity score ranges of ≤5, 6-7, and ≥8, respectively (Table 1). We use the Pediatric Quality of Life Inventory (PedsQL) in our clinic to monitor children's quality of life. PedsQL is a reliable and valid assessment tool for quality of life in children; it is also concise, has age-appropriate versions, and has parallel forms for children and parents [17]. For the adult patients, we use the Short Form 36 Health Survey Questionnaire (SF-36) to assess quality of life [18]. FASI scores pre- and posttreatment were compared using the Wilcoxon signed-rank test. The Mann–Whitney U test was used to compare the FASI scores between children and adults, and quality of life scores were compared using Student's paired t-test. Ethical approval was not required, as this treatment was used as part of the patients' routine care; this was not a clinical trial, and the treatment was available to all patients who were suitable. Consent was taken from each patient for taking facial photographs.

3. Results

3.1. Patient Characteristics. Fourteen patients (9 children and 5 adults; 7 male) were in this cohort. The median age was 16 years (range 9 to 40). Five cases had learning difficulties. Six cases had already received laser therapy or surgery for their facial rash in the past, but significant angiofibromatosis remained; none of these patients received laser therapy or surgery in the 6 months prior to starting, or during, the topical treatment. None of the patients in this group had received topical mTOR inhibitors before starting the topical sirolimus therapy, and none had received systemic mTOR inhibitors before or during the course of the study. Before the treatment, 4 patients had the maximum FASI score of 9, 7 patients scored 8, and 3 had a FASI score of 7 (Table 1).
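The FASI scoring rules described above can be summarised in a short sketch (the function names are ours, not part of the published index):

```python
def fasi(erythema: int, size: int, extension: int) -> int:
    """Facial Angiofibroma Severity Index: the sum of three partial scores.
    erythema 0-3, lesion size 1-3, extension 2-3 (see the key to Table 1)."""
    return erythema + size + extension

def severity(score: int) -> str:
    """Map a FASI score onto the mild/moderate/severe bands quoted above."""
    if score <= 5:
        return "mild"
    if score <= 7:
        return "moderate"
    return "severe"

# Patient 1 of Table 1: pre 2 + 3 + 3 = 8 (severe), post 0 + 2 + 3 = 5 (mild)
print(severity(fasi(2, 3, 3)), severity(fasi(0, 2, 3)))  # severe mild
```

Note that the minimum possible score is 3 (erythema 0, size 1, extension 2), so the "mild" band spans scores 3-5.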
Table 1: Patient characteristics and Facial Angiofibroma Severity Index (FASI) scores before and after treatment.

Patient | Age | Sex | Learning disabilities | Pre-erythema (0-3) | Pre-size (1-3) | Pre-extension (2-3) | Pre-FASI | Post-erythema (0-3) | Post-size (1-3) | Post-extension (2-3) | Post-FASI
1 | 12 | M | Yes | 2 | 3 | 3 | 8 | 0 | 2 | 3 | 5
2 | 11 | F | No | 2 | 3 | 3 | 8 | 1 | 2 | 2 | 5
3 | 13 | F | No | 2 | 3 | 3 | 8 | 0 | 1 | 3 | 4
4 | 14 | M | No | 2 | 3 | 3 | 8 | 1 | 2 | 3 | 6
5 | 16 | F | No | 2 | 2 | 3 | 7 | 0 | 1 | 2 | 3
6 | 9 | M | No | 2 | 3 | 3 | 8 | 1 | 2 | 3 | 6
7 | 17 | M | No | 2 | 3 | 3 | 8 | 1 | 2 | 3 | 6
8 | 14 | M | Yes | 2 | 2 | 3 | 7 | 1 | 2 | 2 | 5
9 | 9 | F | Yes | 3 | 3 | 3 | 9 | 2 | 2 | 2 | 6
10 | 23 | F | No | 3 | 3 | 3 | 9 | 3 | 3 | 3 | 9
11 | 27 | F | No | 3 | 3 | 3 | 9 | 3 | 3 | 3 | 9
12 | 24 | M | No | 2 | 3 | 3 | 8 | 1 | 2 | 3 | 6
13 | 22 | F | Yes | 3 | 3 | 3 | 9 | 2 | 3 | 3 | 8
14 | 40 | M | Yes | 1 | 3 | 3 | 7 | 0 | 3 | 3 | 6

Erythema (skin colour): 0 none, 1 light red, 2 red, 3 dark red/purple. Size: 1 small (<5 mm), 2 large (>5 mm), 3 confluent. Extension: 2 for <50% of cheek area, 3 for >50% of cheek area. FASI score: ≤5 mild, 6-7 moderate, ≥8 severe.

3.2. Treatment Response. Based on the FASI photographic assessments by our dermatologist, FASI scores improved in twelve out of fourteen patients (Figures 1 and 4 and Table 1) (two-tailed Wilcoxon signed-rank test, p = 0.002). Of the remaining 2 patients, 1 had improvement in the rash but no FASI score change, and one had no detectable response. The treating physicians reported improvement in 13 out of 14 patients. Thirteen out of fourteen patients and their parents or carers reported facial angiofibroma improvement, and no one had worsening of facial angiofibromas after the therapy. Children (age 0–16 years) in this cohort responded better to the treatment than the adults: the median FASI score improvement after treatment was three points for children and one point for adults (Mann–Whitney U test, p = 0.053). All the children had an improvement in their facial angiofibromas after 6 months, compared with four out of five adults.
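The headline figures above can be reproduced directly from the pre- and posttreatment FASI scores in Table 1 (a sketch using the table's values; patients are listed in table order, and the first nine rows are taken as the paediatric group, matching the paper's count of nine children):

```python
from statistics import median

# Pre- and posttreatment FASI scores, in the row order of Table 1
pre  = [8, 8, 8, 8, 7, 8, 8, 7, 9, 9, 9, 8, 9, 7]
post = [5, 5, 4, 6, 3, 6, 6, 5, 6, 9, 9, 6, 8, 6]
improvement = [p - q for p, q in zip(pre, post)]  # points of FASI improvement

improved = sum(1 for d in improvement if d > 0)   # patients whose score fell

# The paper's paediatric group corresponds to the first nine table rows
children, adults = improvement[:9], improvement[9:]
print(improved, median(children), median(adults))  # 12 3 1, as reported
```

The two patients with zero improvement (rows 10 and 11) are the adults whose scores stayed at 9, which is consistent with the text's "twelve out of fourteen".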
The presence of learning disabilities had no apparent relationship with response to treatment: the median FASI score improvement after treatment was 2 points both for patients with and for patients without learning disabilities. Female and male patients were equally responsive to the treatment.

3.3. Quality of Life. PedsQL was used in nine patients to assess their quality of life. Six children had no learning difficulties and completed the self-reported PedsQL; the self-reported scores for the total psychosocial domain improved in five out of six patients (Figure 2(a)). The parents of all nine paediatric patients completed the proxy PedsQL. The proxy-reported scores for the total psychosocial domain improved significantly after treatment (paired t-test: t = −3.09, p = 0.014), with individual scores improving in five patients, staying the same in two, and marginally declining in two (Figure 2(b)). Four adult patients had their quality of life assessed using the SF36. The vitality/energy domain scores improved markedly in two patients and stayed the same in two (Figure 3(a)); the social functioning domain scores improved in three patients and declined in one (Figure 3(b)). For the patient who had an unchanged FASI score but still reported rash improvement and also reported improvements in quality of life, the social functioning domain score improved from 50 to 100 and vitality/energy from 56 to 69. The patient who had neither FASI nor facial angiofibroma improvement also had no improvement in quality of life.

3.4. Adverse Reactions. One patient developed facial redness two weeks after commencing the treatment. Hydrocortisone cream was used successfully to treat the reaction; he continued with sirolimus ointment and had no further reaction. No other patients reported any adverse events.

4.
Discussion

This report shows that facial angiofibromas, assessed by a dermatologist's review of photographs using the FASI, improved in 12/14 TSC cases after 6 months of treatment with sirolimus ointment 0.1%. The strength of this study is that it has attempted to assess the response of facial angiofibromatosis to topical mTOR inhibition objectively, using a validated instrument and an assessor blinded to treatment status. It has also attempted to describe the impact of treatment on patients' quality of life. Sirolimus ointment and cream have been used in smaller case series; however, none of these reports have been from the UK, and none have objectively assessed effectiveness and/or quality of life [19, 20].

Figure 1: (a) Before treatment; (b) after treatment.
Figure 2: (a) Self-reported responses by the children to the PedsQL psychosocial domain before and 6 months after the treatment; (b) proxy-reported responses by the parents of all the children to the PedsQL psychosocial domain before and 6 months after the treatment.
Figure 3: (a) SF36 vitality/energy scores for the adult patients before and after the treatment; (b) SF36 social functioning scores for the adult patients before and after the treatment.

One of the patients who did not show a FASI score improvement had a very severe rash before treatment, and it remained very severe after treatment.
His rash had improved, but not enough to change the FASI score. It may be that the FASI is not sensitive enough to pick up improvement in patients with a very severe rash. In particular, this patient's rash was less erythematous after treatment. Erythema is an important component of facial angiofibromatosis, and it is the component that demonstrated the greatest improvement in the FASI scores. The degree of erythema noticed on visual inspection is reliably correlated with blood flow detected by Doppler scans [21]. It is probable that the topical mTOR inhibitor has been especially successful in reducing the vascularity of the angiofibromatosis. It is known that systemic mTOR inhibitors have a potent effect on the vascularity of other TSC lesions such as renal angiomyolipomas.

Figure 4: FASI scores (0-10) for all 14 patients before (Pre-FASI) and after (Post-FASI) treatment.

The better response to this treatment amongst children compared with adults is possibly because the children's rash was less severe before treatment, or because a still-developing rash is more susceptible to intervention; we know from clinical experience that facial angiofibromas tend to stabilise in adulthood. The median FASI score for children improved from 8 to 5.5, while for adults it improved from 9 to 8. These results suggest that early intervention may be more effective and therefore justifiable. The treatment improved the quality of life of the children and adults in this study, with a significant impact specifically on the psychosocial domain of the quality of life scores. The psychological impact of facial angiofibromas on TSC patients may be under-appreciated by healthcare professionals.
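Returning to the median FASI comparison above, the per-group summaries could be computed as in this sketch; the score lists are invented placeholders, not the study data:

```python
from statistics import median

# Hypothetical FASI scores (0-10) before and after treatment, split by
# age group. These are placeholders for illustration, NOT the study data.
children = {"pre": [8, 9, 7, 8, 10, 6, 8, 9, 8], "post": [5, 6, 5, 6, 8, 4, 5, 7, 6]}
adults = {"pre": [9, 10, 8, 9, 9], "post": [8, 9, 8, 8, 7]}

for name, group in (("children", children), ("adults", adults)):
    pre_med, post_med = median(group["pre"]), median(group["post"])
    # Lower FASI is better, so pre - post is the improvement in points.
    print(f"{name}: median {pre_med} -> {post_med} (improvement {pre_med - post_med})")
```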
In our experience, many young children with TSC who are only mildly affected by the facial rash still get teased or picked on at school by their peers. This can have a significant bearing on their school progress and can be associated with numerous physical, mental, and social harms [22]. The physical domains of quality of life were also assessed, as they are part of the PedsQL and SF36 questionnaires; however, there was no difference in the physical domains before and after treatment in these patients. The parents and carers of one adult patient in this study did not complete an SF36 form on behalf of the patient, as they found the questions difficult to answer. The SF36 is a reliable tool with content validity [18], but it may not be as reliable for adults with learning difficulties, and there is no standardised, validated tool for quality of life assessment in adults with severe learning difficulties. There are limitations to this case series report. Firstly, it is not a randomised placebo-controlled trial, so the results may be subject to bias, and any statistical analyses are likely to be underpowered. Secondly, we did not systematically measure compliance with treatment, so we cannot exclude that a lack of response reflects a lack of compliance. However, it is reasonable to suggest that the improvement in facial angiofibromatosis demonstrated in this series is due to the topical treatment, because we know from clinical experience that facial angiofibromas generally do not improve without intervention. Another limitation is that the quality of life questionnaires for patients with severe learning difficulties were completed by parents and carers. This method of assessment is less reliable than self-reported outcomes, but it is the only practicable method in people with learning disability, and it remains of interest.
It is possible that the improvement we have seen in quality of life is due to a placebo effect or to close follow-up. However, there was good correlation between response of the rash to treatment and improvement in quality of life: all the patients whose facial angiofibromas improved also had improved quality of life, and we saw more improvement in the psychosocial domains than in the physical domains after treatment. All of the patients treated with this ointment have requested to continue the treatment, and this high retention rate further suggests that it is an effective intervention with limited inconvenience or adverse effects. It will be interesting to see whether long-term treatment results in continued improvement; we do know from clinical experience that stopping treatment results in recurrence. We will continue to review these patients regularly, as this treatment constitutes part of their ongoing clinical care. Unfortunately, few patients in the UK have access to this ointment because of funding constraints. We hope that reporting our experience of sirolimus ointment use will support family groups in streamlining funding for this treatment.

5. Conclusion

Sirolimus ointment 0.1% once a day was apparently effective in treating facial angiofibromas in our clinical cohort. It also appeared to be safe and well tolerated and had a significant positive impact on patients' quality of life. We saw a greater effect in children than in adults, so early treatment may be advisable. The positive outcomes from this case series suggest that a larger prospective placebo-controlled randomised trial is justified. The safety and efficacy of 0.1% sirolimus in this group provide a baseline platform for a larger national study comparing the 0.1% strength with a higher strength and with placebo.
There is a need for a national study to determine the optimal dosing regimen of topical sirolimus for facial angiofibromas in TSC, to facilitate an EMA licence application for this indication, and to facilitate NHS funding for this treatment. Currently, this treatment is not available through the NHS in all the Tuberous Sclerosis Clinics in the United Kingdom.

Conflicts of Interest
The authors declare that they have no conflicts of interest.

References
[1] J. P. Osborne, A. Fryer, and D. Webb, "Epidemiology of tuberous sclerosis," Annals of the New York Academy of Sciences, vol. 615, pp. 125-127, 1991.
[2] M. van Slegtenhorst, R. de Hoogt, C. Hermans et al., "Identification of the tuberous sclerosis gene TSC1 on chromosome 9q34," Science, vol. 277, no. 5327, pp. 805-808, 1997.
[3] The European Chromosome 16 Tuberous Sclerosis Consortium, "Identification and characterization of the tuberous sclerosis gene on chromosome 16," Cell, vol. 75, no. 7, pp. 1305-1315, 1993.
[4] M. Nellist, M. A. Van Slegtenhorst, M. Goedbloed, A. M. W. Van Den Ouweland, D. J. J. Halley, and P. Van Der Sluijs, "Characterization of the cytosolic tuberin-hamartin complex. Tuberin is a cytosolic chaperone for hamartin," The Journal of Biological Chemistry, vol. 274, no. 50, pp. 35647-35652, 1999.
[5] A. R. Tee, D. C. Fingar, B. D. Manning, D. J. Kwiatkowski, L. C. Cantley, and J. Blenis, "Tuberous sclerosis complex-1 and -2 gene products function together to inhibit mammalian target of rapamycin (mTOR)-mediated downstream signaling," Proceedings of the National Academy of Sciences of the United States of America, vol. 99, no. 21, pp. 13571-13576, 2002.
[6] Skin (Dermatological) Manifestations in TSC, Tuberous Sclerosis Alliance, http://tsalliance.org/pages.aspx?content=601.
[7] M. K. Koenig, A. A. Hebert, J. Roberson et al., "Topical rapamycin therapy to alleviate the cutaneous manifestations of tuberous sclerosis complex: a double-blind, randomized, controlled trial to evaluate the safety and efficacy of topically applied rapamycin," Drugs in R&D, vol. 12, no. 3, pp. 121-126, 2012.
[8] J. B. Easton and P. J. Houghton, "mTOR and cancer therapy," Oncogene, vol. 25, no. 48, pp. 6436-6446, 2006.
[9] C. Vézina, A. Kudelski, and S. N. Sehgal, "Rapamycin (AY 22,989), a new antifungal antibiotic. I. Taxonomy of the producing streptomycete and isolation of the active principle," The Journal of Antibiotics, vol. 28, no. 10, pp. 721-726, 1975.
[10] D. R. Plas and G. Thomas, "Tubers and tumors: rapamycin therapy for benign and malignant tumors," Current Opinion in Cell Biology, vol. 21, no. 2, pp. 230-236, 2009.
[11] J. J. Bissler, F. X. McCormack, L. R. Young et al., "Sirolimus for angiomyolipoma in tuberous sclerosis complex or lymphangioleiomyomatosis," The New England Journal of Medicine, vol. 358, no. 2, pp. 140-151, 2008.
[12] D. A. Krueger, M. M. Care, K. Holland et al., "Everolimus for subependymal giant-cell astrocytomas in tuberous sclerosis," The New England Journal of Medicine, vol. 363, no. 19, pp. 1801-1811, 2010.
[13] R. S. Foster, L. J. Bint, and A. R. Halbert, "Topical 0.1% rapamycin for angiofibromas in paediatric patients with tuberous sclerosis: a pilot study of four patients," Australasian Journal of Dermatology, vol. 53, no. 1, pp. 52-56, 2012.
[14] H. Northrup and D. A. Krueger, "Tuberous sclerosis complex diagnostic criteria update: recommendations of the 2012 International Tuberous Sclerosis Complex Consensus Conference," Pediatric Neurology, vol. 49, no. 4, pp. 243-254, 2013.
[15] F. J. O'Callaghan, M. J. Noakes, C. N. Martyn, and J. P. Osborne, "An epidemiological study of renal pathology in tuberous sclerosis complex," BJU International, vol. 94, no. 6, pp. 853-857, 2004.
[16] R. Salido, G. Garnacho-Saucedo, I. Cuevas-Asencio et al., "Sustained clinical effectiveness and favorable safety profile of topical sirolimus for tuberous sclerosis-associated facial angiofibroma," Journal of the European Academy of Dermatology and Venereology, vol. 26, no. 10, pp. 1315-1318, 2012.
[17] P. Upton, C. Eiser, I. Cheung et al., "Measurement properties of the UK-English version of the Pediatric Quality of Life Inventory 4.0 (PedsQL) generic core scales," Health and Quality of Life Outcomes, vol. 3, article no. 22, 2005.
[18] V. Burholt and P. Nash, "Short Form 36 (SF-36) Health Survey Questionnaire: normative data for Wales," Journal of Public Health, vol. 33, no. 4, pp. 587-603, 2011.
[19] M. Tanaka, M. Wataya-Kaneda, A. Nakamura, S. Matsumoto, and I. Katayama, "First left-right comparative study of topical rapamycin vs. vehicle for facial angiofibromas in patients with tuberous sclerosis complex," British Journal of Dermatology, vol. 169, no. 6, pp. 1314-1318, 2013.
[20] J. Tu, R. S. Foster, L. J. Bint, and A. R. Halbert, "Topical rapamycin for angiofibromas in paediatric patients with tuberous sclerosis: follow up of a pilot study and promising future directions," Australasian Journal of Dermatology, vol. 55, no. 1, pp. 63-69, 2014.
[21] A. Lahti, H. Kopola, A. Harila, R. Myllylä, and M. Hannuksela, "Assessment of skin erythema by eye, laser Doppler flowmeter, spectroradiometer, two-channel erythema meter and Minolta chroma meter," Archives of Dermatological Research, vol. 285, no. 5, pp. 278-282, 1993.
[22] J. A. Dake, J. H. Price, and S. K. Telljohann, "The nature and extent of bullying at school," Journal of School Health, vol. 73, no. 5, pp. 173-180, 2003.
work_ohiqcfcllzf37bq3iqtq5qreky ---- ORAL PRESENTATION Open Access A new CMR protocol for
non-destructive, high resolution, ex-vivo assessment of the area at risk simultaneous with infarction: validation with histopathology
Lowie M Van Assche*, Han W Kim, Christoph J Jensen, Ki-Young Kim, Michele Parker, Raymond J Kim
From 15th Annual SCMR Scientific Sessions, Orlando, FL, USA, 2-5 February 2012

Background
In the setting of acute myocardial infarction (AMI), the aim of reperfusion and pharmacologic therapies is to salvage areas of ischemic, but reversibly injured, myocardium within the area-at-risk (AAR). The current histopathologic reference standard requires administration of microspheres, destructive sectioning of the heart, the use of tissue stains, and digital photography to delineate the AAR and infarction and to calculate salvage. We evaluated a newly developed CMR protocol that potentially provides non-destructive, high-resolution, ex-vivo assessment of the AAR simultaneous with infarction.

Methods
Four canines underwent 50-minute occlusion of the LAD coronary artery followed by reperfusion. Imaging was performed 5 days post-AMI. The main goal of the protocol was to create 3 distinct myocardial gadolinium concentrations delineating viable AAR, infarcted AAR, and remote myocardium. This was achieved by (1) injecting gadolinium (0.3-0.4 mmol/kg) in vivo to differentially accumulate in infarction, (2) providing a wait time to allow washout of gadolinium from viable myocardium, and (3) injecting another dose of gadolinium prior to sacrifice, using the same process as if administering microspheres for determining the AAR by pathology; for validation purposes, microspheres (2-8 μm, ThermoScientific) were mixed with gadolinium in this step. Finally, (4) the heart was extracted and ex-vivo 3-dimensional delayed-enhancement CMR imaging was performed. The heart was sectioned into short-axis slices and photographed under UV light to delineate the AAR (absence of microspheres) and after staining with triphenyltetrazolium chloride to delineate infarction.
Histopathology and ex-vivo CMR images were analyzed by 2 blinded observers.

Results
Figure 1 shows histopathology of a subendocardial AMI with a transmural AAR. On the matched delayed-enhancement CMR image, 3 distinct image intensities are present: (1) myocardium with the lowest signal matches the location and shape of the viable AAR by microspheres, (2) the region with the highest signal matches infarction on pathology, and (3) the remote zone shows intermediate signal. On an animal basis, ex-vivo CMR AAR was similar to that by microspheres (33%±7 vs. 35%±6, respectively, p=0.4). On a slice basis (n=27), there was a strong linear correlation between the ex-vivo CMR and microsphere-defined AAR (r=0.98, slope=0.96±0.04, p<0.0001). Similarly, CMR infarct size matched that by triphenyltetrazolium chloride (r=0.97, p<0.0001). Calculated CMR salvage was also highly correlated with that of histopathology (r=0.92, p<0.0001).

Conclusions
We developed a new CMR protocol that provides high-resolution, ex-vivo images of the AAR simultaneous with infarction in the same 3-dimensional dataset. This can serve as an alternative to histopathology as a truth-standard measurement of the AAR and salvage that is non-destructive, allows for multiplanar reconstruction, and is automatically registered with the spatial map of infarction.

Cardiology, Duke University, Durham, NC, USA
Van Assche et al. Journal of Cardiovascular Magnetic Resonance 2012, 14(Suppl 1):O7 http://www.jcmr-online.com/content/14/S1/O7
© 2012 Van Assche et al; licensee BioMed Central Ltd. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Funding
Funded in part by 5R01HL064726-07.
Published: 1 February 2012
doi:10.1186/1532-429X-14-S1-O7
Cite this article as: Van Assche et al.: A new CMR protocol for non-destructive, high resolution, ex-vivo assessment of the area at risk simultaneous with infarction: validation with histopathology. Journal of Cardiovascular Magnetic Resonance 2012, 14(Suppl 1):O7.

Figure 1: Example of sub-endocardial AMI with transmural risk region.

work_oi2u55qtgbejreqrgp5l4ze36u ---- Special Section Guest Editorial: Digital Photography
Peter B. Catrysse, Stanford University, Department of Electrical Engineering, Stanford, California 94305-4088, pcatryss@stanford.edu
Sabine Süsstrunk, École Polytechnique Fédérale de Lausanne (EPFL), School of Communication and Computer Sciences, 1015 Lausanne, Switzerland, sabine.susstrunk@epfl.ch

Digital cameras have revolutionized photography. Never before in the history of photography has it been so easy to capture, display, and share images. It is no surprise that consumer and professional photographers alike have quickly adopted digital camera systems, and that digital photography has seen explosive growth over the past decade. More than one billion digital cameras are now being sold every year. Beyond the sheer volume of digital cameras for consumer and professional use, digital photography offers unique opportunities and challenges for imaging scientists and system designers.
An exciting development in recent years has been the integration of digital cameras with mobile-communication devices, such as cellular phones, portable electronic organizers, and laptops. Sales of cellular phone cameras already dwarf those of all other digital camera systems combined. Integrated cameras have become one of the most ubiquitous imaging devices. Most of us carry a cell phone at all times, and the presence of a camera during our daily routines is starting to change the way we think about photography. From a developer's point of view, integrated digital cameras present an exciting opportunity, which comes with challenges in terms of system footprint and computational-processing limitations.

Another development in digital photography is the emergence of computational photography. Computational photography provides the capability to record much more information and offers the possibility of processing this information afterwards. In essence, it blurs the line between digital-image capture and subsequent image processing. Using modified digital-imaging systems, it enables features such as digital refocusing or extended depth of field.

This special section on digital photography highlights state-of-the-art research in imaging-component technologies, optical-imaging systems, and image-processing techniques. The papers in this special section are extended versions of papers presented at Digital Photography Conferences IV-V (2009-2010) at the IS&T/SPIE Electronic Imaging Symposium. This conference has been very successful in bringing together academic and industry experts in all the technical fields associated with digital photography, including optics, image-sensor design, color and image processing, and image quality.

At the center of any digital-photography system is a solid-state image sensor, i.e., the light-sensitive element of a digital camera. Digital photography would not be possible without it.
Charge-coupled device technology was the long-time incumbent in image-sensor technology. More recently, complementary metal-oxide semiconductor (CMOS) technology has been a great enabler and cost driver in solid-state imaging, especially for cell-phone cameras. The sensing and sampling methods in the image sensor have a large influence on subsequent image reconstruction and thus ultimately on image quality. All papers in this special section revolve around the sensor design, architecture, and initial image reconstruction. Research is very active in this area, and we are pleased to provide here five state-of-the-art articles on these important topics.

Color images are traditionally acquired through a color filter array (CFA), a mosaic of red (R), green (G), and blue (B) color filters affixed to the image sensor. Thus, only one color is captured at each spatial position. To reconstruct the missing colors and obtain full RGB values at every pixel location, an algorithm called demosaicking has to be applied. M. Guarnera, G. Messina, and V. Tamaselli propose a new adaptive demosaicking method for the Bayer CFA, which analyzes the local neighborhood and applies different interpolation depending on the detection and orientation of gradients. The authors also propose a false-color-removal algorithm to eliminate residual color errors as a postprocessing step. With the advent of cell-phone cameras that have very limited processing power, the computational complexity of imaging algorithms has become even more of an issue. Chung and Chan propose an efficient decision-based demosaicking method using a new edge-sensing algorithm. The proposed integrated gradient method simultaneously extracts gradient information on both the color-intensity and color-difference domains. Their algorithm thus avoids re-estimation of local gradients based on intermediate interpolation. Tamburrino et al.
present a new CMOS image-sensor design where the blue and red filters of the RGB Bayer CFA are replaced by a magenta filter. Under each of those filters they place two stacked, pinned photodiodes; one absorbs mostly blue light and the other mostly red. To complement this sensor design, they implement a suitable demosaicking algorithm and show that their approach outperforms the demosaicking of the Bayer pattern in terms of image quality.

Scientific and industrial applications often require image sensors with high sensitivity and high speed over a wide range of illumination conditions. In their contribution, Stern and Cole perform a detailed design study for a solid-state focal plane array consisting of silicon avalanche photodiodes. Each detector in the array is capable of operating with wide dynamic range in linear or in Geiger mode. Linear mode allows the sensor to operate with high quantum efficiency and speed. In Geiger mode, the sensor performs as a single-photon detector. In a noise analysis, the authors predict imaging performance at ultralow illuminance (10⁻⁴ lux) with a signal-to-noise ratio greater than seven at near room temperature.

Conventional digital cameras capture the spatial information (intensity) of a scene. Plenoptic cameras are capable of capturing both spatial and angular information (radiance). Such architectures enable, for example, refocusing of the image or extension of the depth of focus after the image has been captured. This is achieved by employing an internal microlens array, which trades off spatial information (resolution) for angular information. To improve over current designs, Georgiev and Lumsdaine developed the focused plenoptic camera.
Sabine Süsstrunk is a professor for im- ages and visual representation in the School of Communication and Computer Sciences at the École Polytechnique Fédérale in Lausanne, Switzerland, since 1999. Her main research areas are in com- putational photography, color imaging, image-quality metrics, image indexing, and archiving. Sabine has authored or coau- thored over 80 peer-reviewed papers and holds 5 patents. She is an associate editor for the IEEE Transactions on Image Processing and served as chair or committee member in many international conferences on color imaging, digital photography, and image-systems engineering. She was a cochair of the 2009 Digital Photography Conference at the IS&T/SPIE Electronic Imaging Symposium 2009, the EI sympo- sium’s cochair in 2010, and the chair in 2011. She is a senior mem- ber of IS&T and IEEE. Journal of Electronic Imaging Apr–Jun 2010/Vol. 19(2)021101-2 Downloaded From: https://www.spiedigitallibrary.org/journals/Journal-of-Electronic-Imaging on 05 Apr 2021 Terms of Use: https://www.spiedigitallibrary.org/terms-of-use work_oiimw5lghjgwjdz6nlh7c2ka5q ---- Phenotypic techniques and applications in fruit trees: a review Huang et al. Plant Methods (2020) 16:107 https://doi.org/10.1186/s13007-020-00649-7 R E V I E W Phenotypic techniques and applications in fruit trees: a review Yirui Huang, Zhenhui Ren* , Dongming Li and Xuan Liu Abstract Phenotypic information is of great significance for irrigation management, disease prevention and yield improvement. Interest in the evaluation of phenotypes has grown with the goal of enhancing the quality of fruit trees. Traditional techniques for monitoring fruit tree phenotypes are destructive and time-consuming. The development of advanced technology is the key to rapid and non-destructive detection. 
This review describes several techniques applied to fruit tree phenotypic research in the field, including visible and near-infrared (VIS-NIR) spectroscopy, digital photography, multispectral and hyperspectral imaging, thermal imaging, and light detection and ranging (LiDAR). The applications of these technologies are summarized in terms of architecture parameters, pigment and nutrient contents, water stress, biochemical parameters of fruits and disease detection. These techniques have been shown to play important roles in fruit tree phenotypic research.

Keywords: Phenotype, VIS-NIR spectroscopy, Spectral imaging, Thermal imaging, LiDAR

© The Author(s) 2020. This article is licensed under a Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).

Background
Plant phenotype describes the expression of plant traits. Phenotypes are studied at multiple levels, including cells, tissues, organs, individual plants and the whole orchard [1].
Plant phenotypic traits include but are not limited to plant height, biomass content, water state, and yield [2]. The expression of phenotypic traits is controlled by a large number of genetic factors. Therefore, accurate analysis of phenotypic traits is of great significance for the selection of dominant genes and for marker-assisted selection [3, 4].

Fruit tree planting is an important part of agricultural production. Studies on fruit tree phenotypes have shown great reference value for accurate irrigation [5, 6], disease control [7, 8], and fruit quality evaluation [9]. In the past, digital callipers were used to measure tree height and crown diameter [10], and physicochemical methods were applied to detect the pigment and nutrient content of leaves, for example, the Kjeldahl method for the measurement of nitrogen (N) and the oven-drying method for the determination of moisture [11]. These methods are valuable but time-consuming and destructive to the plant.

With the development of technology, researchers began to develop rapid and non-destructive methods for the study of plant phenotypes. Spectroscopy has been found to be able to detect the contents of biochemical substances [12], and visible and near-infrared (VIS-NIR) spectrometers have become effective instruments for spectral data collection because of their convenience [13, 14]. Imaging devices are being used to speed up information acquisition [15, 16]. These techniques help to extend research from the level of a single leaf to the level of the whole orchard, promoting the study of high-throughput phenotypes [3, 15, 17]. Present research not only focuses on phenotypic information itself but also seeks to describe the spatial distribution of phenotypic traits. Light detection and ranging (LiDAR) scanning [18] can measure the spatial coordinates of monitoring points and provide reliable location information for describing the spatial variability in phenotypic traits.
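To illustrate the kind of derivation a LiDAR point cloud makes possible, a minimal tree-height estimate from (x, y, z) points might look like the sketch below. The points are synthetic and the percentile choices are illustrative assumptions, not a method taken from the review:

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic LiDAR returns for one tree (x, y, z in metres): ground hits
# near z = 0 and canopy hits between 1.5 m and 4.0 m. Illustrative only.
ground = rng.normal(0.0, 0.5, size=(200, 3))
ground[:, 2] = rng.normal(0.0, 0.05, 200)      # ground elevation noise
canopy = rng.uniform([-1.0, -1.0, 1.5], [1.0, 1.0, 4.0], size=(800, 3))
cloud = np.vstack([ground, canopy])

# Robust height estimate: a high canopy percentile minus a low ground
# percentile is less sensitive to stray returns than raw max - min.
z = cloud[:, 2]
height = np.percentile(z, 99.5) - np.percentile(z, 0.5)
print(f"estimated tree height: {height:.2f} m")
```

In practice the ground surface is usually estimated per-cell from a digital terrain model rather than from a single global percentile.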
In addition, many efforts have been made to replace manual labour with automated mechanical equipment to build automated phenotypic platforms [2, 19].

[Huang et al. Plant Methods (2020) 16:107. Open Access. Correspondence: renzh68@163.com, College of Mechanical and Electrical Engineering, Hebei Agricultural University, Baoding 071001, China]

This paper aims to provide a comprehensive and in-depth review of the techniques for fruit tree phenotypic studies. We summarize the technologies and applications in the field around five aspects of fruit tree phenotypes (Fig. 1). The development trends and future challenges of phenotypic techniques are discussed at the end of this paper.

VIS–NIR spectroscopy
VIS–NIR spectroscopy is a relatively new non-destructive measurement technique. Based on the different reflection and radiation information of different substances in the same spectral band, VIS–NIR spectroscopy is widely used to detect chemical substances [20, 21], soil [22], minerals [23] and food [24]. This section describes the principle of VIS–NIR spectroscopy and its application in the study of fruit tree phenotypes.

The principle of VIS–NIR spectroscopy
Electromagnetic waves in the range of 400–2500 nm are often used in VIS–NIR spectroscopy [25]. Some of the groups in a substance, especially those containing hydrogen (C–H, O–H, N–H), absorb energy in this range, resulting in changes in the reflected or transmitted spectrum [26]. Samples with different substance contents therefore generate different spectral curves.
The spectrum comprises broad bands arising from overlapping absorptions [27, 28]. Therefore, the corresponding relationship between spectra and the parameters to be measured can be established based on spectral features, to carry out quantitative analysis of the parameters.

The application of VIS–NIR spectroscopy
The portable spectrometer is a frequently used instrument in VIS–NIR spectroscopy studies and can be applied for non-destructive detection. The sample can be measured directly with a light probe [27]. Moving the spectrometer to the sample location, rather than moving the sample to the laboratory, is the most convenient feature of portable spectrometers [28]. Different spectrometers have different spectral band ranges, and it is critical to select a suitable one for detection. Table 1 summarizes the application of VIS–NIR spectroscopy in fruit tree phenotypic studies in the field, and the details are described in the following sections.

Detection of pigment and nutrient contents
Spectra are recorded by spectrometers at nanoscale sampling resolution, so hundreds or thousands of spectral variables are obtained for each sample. Such a large amount of data often makes prediction of the dependent variable unreliable. Many variable selection methods have been developed to eliminate variables containing mostly noise, such as partial least squares (PLS), artificial neural networks (ANN) and genetic algorithms (GA). For a detailed introduction to these methods, the reader is referred to [29].

Fig. 1 Five aspects and related phenotypic parameters of fruit trees
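The spectra-to-parameter calibration workflow described above can be sketched as follows. This is a minimal illustration on synthetic data: ordinary least squares on a few hand-picked "sensitive" wavelengths stands in for PLS, and all numbers (wavelengths, chlorophyll range, noise level) are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic example: 50 leaf spectra sampled at 200 wavelengths (400-1000 nm).
wavelengths = np.linspace(400, 1000, 200)
n_samples = 50
chl = rng.uniform(20, 60, n_samples)  # "true" chlorophyll content (arbitrary units)

# Reflectance dips around 680 nm in proportion to chlorophyll, plus noise.
absorption = np.exp(-((wavelengths - 680.0) ** 2) / (2 * 30.0 ** 2))
spectra = (0.5 - 0.004 * chl[:, None] * absorption[None, :]
           + rng.normal(0, 0.002, (n_samples, 200)))

# "Variable selection": keep a few wavelengths near the pigment absorption band.
sel = [np.argmin(np.abs(wavelengths - w)) for w in (530, 581, 680, 734)]
X = np.column_stack([spectra[:, i] for i in sel] + [np.ones(n_samples)])

# Least-squares calibration (a simple stand-in for a PLS model).
coef, *_ = np.linalg.lstsq(X, chl, rcond=None)
pred = X @ coef
ss_res = np.sum((chl - pred) ** 2)
ss_tot = np.sum((chl - chl.mean()) ** 2)
r2 = 1 - ss_res / ss_tot
print(f"calibration R2 = {r2:.3f}")
```

In practice PLS (or BiPLS, as in [31]) would be used instead of plain least squares, because real spectra have far more collinear variables than samples.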
Table 1 The applications of VIS–NIR spectroscopy in the study of fruit tree phenotypes
Note: In the “scale” column of the table, the fruit tree objects are divided into single leaves (L), individual trees (T) and fruits (F)

Pigment and nutrient contents:
- Apple orchard, L, 253–922 nm, Ocean Optics FiberSpec; Chl: R = 0.91, RMSEP = 0.22 [31]. Advantages: easy to operate; non-destructive for leaves. Limitations: more samples lead to longer testing time.
- Apple orchard, L, 350–2500 nm, FieldSpec 3; Chl: R2 = 0.6213 [30].
- Vineyard, L, 350–1000 nm, Ocean Optics USB2000; Chl: R2 = 0.8–0.9; carotenoid (SIPI): R2 = 0.49 [33].
- Vineyard, L, 350–2500 nm, FieldSpec 3; moisture: R2 = 0.96; N: R2 = 0.95; Mg: R2 = 0.77 [34].

Water stress:
- Citrus orchard, T, 350–2500 nm, FieldSpec Pro; water status: R2 = 0.89 [36]. Advantages: obtain the spectral value directly; not affected by sunlight conditions. Limitations: detection at the orchard level is limited.
- Olive orchard, T, 350–2500 nm, FieldSpec Pro; Ψleaf (NDGI): R2 = 0.57, RMSE = 0.37; Ψleaf (MSI): R2 = 0.48, RMSE = 0.41 [37].
- Olive orchard, L; Ψleaf (MSI): R2 = 0.45, RMSE = 0.72; Ψleaf (NDWI): R2 = 0.45, RMSE = 0.75 [37].
- Vineyard, L, 350–2500 nm, FieldSpec 4; EWT: R2 = 0.681 [39].
- Vineyard, T, 325–1075 nm, FieldSpec HandHeld 2; Ψpd (VARI): R2 = 0.79; Ψpd (NDGI): R2 = 0.79 [38].
- Vineyard, L, 1600–2400 nm, MicroPHAZIR; Ψs: Rcv = 0.77–0.93; RWC: Rcv = 0.66–0.81 [40].
- Vineyard, T, 1100–2100 nm, NIR Spec; gs: R2 = 0.95 [41].
- Vineyard, T, 1100–2100 nm, PSS 2120; Ψs: R2 = 0.86; gs: R2 = 0.66 [42].
- Vineyard, T, 1100–2100 nm, PSS 2120; Ψs: R2 = 0.68–0.85 [43].

Biochemical parameters:
- Mango orchard, F, 302–1148 nm, FieldSpec; TSS: R2 = 0.72; TA: R2 = 0.64 [45]. Advantages: the method of data processing is simple. Limitations: interfered with by the non-targets’ spectral information.
- Vineyard, F, 570–900 nm, PSS 1050; TSS: R2 = 0.95; anthocyanins: R2 = 0.79; polyphenols: R2 = 0.43 [46].

Photosynthesis is an important process in the growth of green plants. Chlorophyll absorbs light energy, which drives the conversion of water and carbon dioxide into organic matter via photosynthesis. The chlorophyll content in leaves can therefore reflect the photosynthetic capacity and growth status of fruit trees [30]. An optical fiber spectrometer within the range of 500–1100 nm was used to determine the chlorophyll content in apple tree leaves [31]. A backward interval partial least squares (BiPLS) algorithm was applied to spectral data processing. From 1490 measured bands, 71 bands with valid information were selected as input variables of the prediction model of chlorophyll content, and the value of R was 0.91. Wang et al. used the first derivative (FD) for the pre-processing of spectral data [30]. Wavelengths of 530 nm, 581 nm, 697 nm and 734 nm were selected as sensitive wavelengths. The FDs were treated by ratio and normalization methods, and four new parameters, FD530, FD734 − FD530, (FD734 − FD530)/(FD734 + FD530) and FD697 − FD581, were chosen to establish the PLS model. The PLS model exhibited an R2 value of 0.6213 for estimating the chlorophyll content in young apple leaves.

The vegetation index (VI) is the integration of spectral data from two or more bands after a certain mathematical transformation [32].
Researchers usually establish mathematical prediction models by calculating the spectral value of the sensitive wavelength or by calculating and optimizing vegetation indices defined in botany. Zarco-Tejada et al. used an optical USB2000 spectrometer to detect the chlorophyll and carotenoid contents in grape leaves [33]. The spectrometer has a sampling interval of 0.5 nm, which is beneficial for calculating narrow-band spectral indices. Several indices calculated within the range of 700–750 nm yielded good results, with R2 values of 0.8–0.9 for chlorophyll estimation. The Structure Insensitive Pigment Index (SIPI), calculated by R430/R680, was used to estimate carotenoids with R2 = 0.49. The Photochemical Reflectance Index (PRI), calculated by (R570 − R539)/(R570 + R539), had a clear correlation with chlorophyll-carotenoid ratios. The results of the experiments above indicate that some spectral bands within the spectral ranges of green light (490–560 nm) and red light (620–780 nm) are significantly correlated with pigment contents.

For the estimation of nutrient contents, Ordonez et al. used a FieldSpec 3 to characterize the components of vine leaves [34]. They applied functional nonparametric methods to establish prediction models. A curve was fitted to the discrete spectral data of different wavelengths by a smoothing process. Moisture and nitrogen were predicted with high R2 values (R2 = 0.96 and R2 = 0.95, respectively). The relationship between the amount of Ca in leaves and reflectance was not significant, which may be due to the lack of comparative experiments on different fertilizers. The functional model uses all the spectral features detected by the spectrometer instead of characteristic wavelength values, so the utilization rate of hyperspectral information is improved [35].
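The narrow-band indices above reduce to simple ratios of reflectance at named wavelengths. A minimal numpy sketch, using the SIPI and PRI formulas as stated above and a made-up reflectance curve purely for illustration:

```python
import numpy as np

def reflectance_at(wavelengths, spectrum, target_nm):
    """Reflectance at the sampled band closest to target_nm."""
    return spectrum[np.argmin(np.abs(wavelengths - target_nm))]

def sipi(wavelengths, spectrum):
    # Structure Insensitive Pigment Index, as used above: R430 / R680.
    return (reflectance_at(wavelengths, spectrum, 430)
            / reflectance_at(wavelengths, spectrum, 680))

def pri(wavelengths, spectrum):
    # Photochemical Reflectance Index: (R570 - R539) / (R570 + R539).
    r570 = reflectance_at(wavelengths, spectrum, 570)
    r539 = reflectance_at(wavelengths, spectrum, 539)
    return (r570 - r539) / (r570 + r539)

# Toy spectrum with 0.5 nm sampling, like the USB2000 setup described above.
wl = np.arange(400.0, 800.0, 0.5)
refl = 0.2 + 0.1 * np.sin(wl / 60.0)  # placeholder reflectance curve
print(sipi(wl, refl), pri(wl, refl))
```

Real spectra would come from the spectrometer; the nearest-band lookup is what makes a fine sampling interval valuable for narrow-band indices.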
The accuracy of portable spectrometers is lower than that of laboratory spectrometers, but portable spectrometers are affordable, small and easy to use, which makes them useful for non-scientists [28]. VIS–NIR spectroscopy could be an effective diagnostic tool for predicting nutrient deficiencies in fruit leaves [34] and implementing reasonable fertilization management.

Detection of water stress
The evaluation of water stress aims to determine the status of the water deficit in orchards. Before water stress has a significant impact on fruit trees, reasonable irrigation will reduce the degree of damage to trees. Stomatal conductance (gs) and stem water potential (Ψs) are two representative indicators of vegetation water status.

A FieldSpec Pro with a spectral range of 350–2500 nm was applied to detect the water status of citrus trees [36]. The study found significant differences in the monthly mean reflectance of the citrus canopy between summer (~ 22%) and winter (~ 15%), which indicated that canopy reflectance can be used to assess the water condition of citrus trees. Rallo et al. used a portable spectrometer to collect the spectral information of the paraxial position of the blade on a one-year-old shoot in olive groves [37]. The spectrometer was placed on an aluminium mast mounted on a horizontal arm at a distance of 1 m above the canopy. The angle of view of the sensor was vertically downward, covering an area of approximately 0.12 m2 of the canopy. The results showed that optimized indices, the Normalized Difference Greenness Vegetation Index (NDGI) and the Normalized Difference Water Index (NDWI), had strong correlations with leaf water potential (Ψleaf).

For field spectrometers, the spectral resolution can be 2 nm, which means that there are many closely spaced bands with the same property. The selection of the most suitable wavelength for mathematical modelling will improve the accuracy of phenotypic parameter estimation.
Rallo et al. utilized the NDWI and Moisture Spectral Index (MSI) to evaluate water content [37]. In the calculation, they found that the central wavelength of the NIR band should be selected at 715 nm for estimation at the canopy level and 750 nm for estimation at the leaf level, which are lower than the standard of 858 nm. Pôças et al. also analysed the optimal wavelengths when evaluating the water status of a vineyard [38]. The wavelengths 520 nm (blue), 539 nm (green) and 586 nm (red) were selected as the best wavelengths for the calculation of the Visible Atmospherically Resistant Index (VARI). González-Fernández et al. used a FieldSpec 4 to detect the water status of a vineyard [39]. Spectral acquisition experiments were carried out at both the leaf and canopy levels; the canopy measurements were made at nadir, 0.30 m above the canopy. The researchers used continuum removal analysis to highlight the absorption and reflection characteristics of the spectral curves. In relation to the equivalent water thickness of the blade, the band area at 1450 nm contributes a higher correlation than that at 1200 nm or 1950 nm.

The advent of field spectrometers allowed spectral detection in the field, but the instruments are large and heavy for workers. The application of handheld spectrometers makes it possible to measure without complex optical fiber connections and backpacks. Diago et al. used a handheld digital transform spectrometer working in the spectral range of 1600–2400 nm to detect water stress at the leaf level in different vineyards [40]. Reliable predictions of Ψs and leaf relative water content (RWC) were achieved from regression models. In addition, handheld spectrometers can also be used at the canopy level.
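The continuum removal analysis mentioned above normalizes each spectrum by its convex-hull "continuum" so that absorption features can be compared across spectra. A minimal numpy sketch, assuming reflectance sampled on ascending wavelengths (the synthetic spectrum is illustrative only):

```python
import numpy as np

def continuum_removed(wl, refl):
    """Divide a spectrum by its upper convex-hull continuum."""
    hull = []  # upper convex hull, built left to right (monotone chain)
    for p in zip(wl, refl):
        while len(hull) >= 2:
            (x1, y1), (x2, y2) = hull[-2], hull[-1]
            # Drop the last hull point if it lies on or below the line hull[-2] -> p.
            if (x2 - x1) * (p[1] - y1) - (p[0] - x1) * (y2 - y1) >= 0:
                hull.pop()
            else:
                break
        hull.append(p)
    hx, hy = zip(*hull)
    continuum = np.interp(wl, hx, hy)  # piecewise-linear hull over all bands
    return refl / continuum

# Demo: a synthetic spectrum with one absorption feature near 1450 nm.
wl = np.linspace(1000.0, 2000.0, 101)
refl = 0.8 - 0.3 * np.exp(-((wl - 1450.0) ** 2) / (2 * 50.0 ** 2))
cr = continuum_removed(wl, refl)
print(round(1 - cr.min(), 3))  # band depth of the continuum-removed feature
```

Band areas such as the 1450 nm feature used by González-Fernández et al. can then be computed by integrating 1 − cr over the feature's wavelength range.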
Similar to the way portable spectrometers are used in canopy-level experiments, the handheld spectrometer sensor is held above the canopy with the angle of view vertically downward. The detectable canopy diameter should be smaller than the actual canopy diameter to eliminate interference from soil [38].

Using portable and handheld spectrometers in the field avoids the destruction of vegetation and improves collection speed; however, there are still challenges in terms of time and manpower for collecting large amounts of data. Realizing the automation of data collection is the key to research on high-throughput fruit tree phenotypes. Diago et al. installed a VIS–NIR spectrometer on an all-terrain vehicle, with the sensor head mounted at a height of 1.40 m above the ground and fixed at a distance of 25–50 cm from the canopy [41]. When detecting the spectral information of grape leaves, the vehicle needed to be stopped first. In the following year, the same team, using a similar device, performed spectral measurements while the vehicle was in continuous motion [42]. In this case, the original spectral information obtained contains information about voids, wood, metal, etc. To filter the canopy information from the original data, static blade characteristics should be collected before the experiment. A spectral detection instrument mounted on a vehicle to operate contactless detection is called on-the-go spectroscopy [41]. Instead of human workers, the vehicle carries the spectrometer for movement. The automation of acquisition tools greatly improves the efficiency of data acquisition.

Compared with traditional methods, the application of VIS–NIR spectroscopy to evaluate phenotypic information reduces damage to fruit trees. At the leaf level, detection is rapid and effective. At the canopy level, a spectrometer on a tripod is used to detect spectral information for individual trees.
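Filtering canopy spectra out of an on-the-go stream can be done in several ways; the cited studies do not publish their exact filter, so the sketch below uses a simple NDVI threshold against leaf-like reference characteristics as an illustrative stand-in (band indices, threshold and spectra are all made up):

```python
import numpy as np

def ndvi(red, nir):
    return (nir - red) / (nir + red)

def filter_canopy(spectra, red_idx, nir_idx, threshold=0.5):
    """Keep spectra whose NDVI exceeds threshold (leaf-like);
    drop voids, wood, metal and other background returns."""
    red = spectra[:, red_idx]
    nir = spectra[:, nir_idx]
    mask = ndvi(red, nir) > threshold
    return spectra[mask], mask

# Toy stream: two leaf-like spectra (strong NIR) and one background return.
stream = np.array([
    [0.05, 0.45],   # leaf: red 0.05, NIR 0.45 -> NDVI = 0.8
    [0.30, 0.35],   # soil/wood: NDVI ~ 0.08
    [0.04, 0.50],   # leaf
])
kept, mask = filter_canopy(stream, red_idx=0, nir_idx=1)
print(mask)  # [ True False  True]
```

In practice the threshold would be calibrated from the static blade characteristics collected before the run, as described above.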
The emergence of on-the-go spectroscopy speeds up data collection and contributes to the study of high-throughput phenotypes. On-the-go spectroscopy can take measurements across multiple rows and enables mapping of the variability in fruit tree water status in orchards, which is of great value for formulating reasonable irrigation measures. The orchard can be divided into differentiated zones according to the variability in water status. Different watering schedules and doses for different zones can greatly reduce water waste [43], in line with the policy of sustainable development.

Detection of biochemical parameters of fruits
Chlorophyll, carotenoids, total soluble solids (TSS) content and titratable acidity (TA) are biochemical parameters of fruit, and they change gradually as the fruit grows [44]. Accurate prediction of biochemical parameters will contribute to judging the maturity of fruit and determining whether it is suitable for harvest. This section mainly focuses on the detection of fruit in living conditions.

Elsayed et al. used a handheld spectrometer with wavelengths of 302–1148 nm to test the biochemical parameters of mangoes [45]. The optical fiber probe was placed at a zenith angle of 30 degrees and 0.15 m above the mango fruit for non-contact detection. A contour map was made of the coefficients of determination of all biochemical parameters of mango fruits with all possible wavelength (302–1048 nm) combinations. Twelve wavelengths (810, 780, 760, 750, 730, 720, 710, 686, 620, 570, 550 and 540 nm) were selected to estimate TSS (R2 = 0.72) and TA (R2 = 0.64). The results of partial least squares regression (PLSR) models revealed that the newly developed index (NDVI − VARI)/(NDVI + VARI) (NDVI: Normalized Difference Vegetation Index) showed a close association with chlorophyll meter readings (R2 = 0.78).
In addition to the detection of specific points, on-the-go spectroscopy has successfully realized continuous spatial detection. A PSS 1050 spectrometer operating in the 570–990 nm spectral range was installed on an all-terrain vehicle [46]. To align the detection probe with the position of the grape cluster, the height of the spectrometer sensor was adjusted to 0.8 m above the ground, the angle was adjusted to level, and the sensor was kept at a distance of 0.3 m from the canopy. According to the spectral characteristics of grape clusters obtained manually, a threshold was adjusted iteratively to separate the true berry spectrum from the raw data. TSS was estimated with an R2 value of 0.95.

On-the-go spectroscopy has proven feasible for the detection of canopy water stress and fruit biochemical parameters in vineyards. It should be noted that the canopy of a vineyard is continuous, unlike that of citrus or apple orchards. Extracting effective spectral information is the key to data processing when applying on-the-go devices in orchards with discontinuous canopies. The calculation of spectral indices based on sensitive wavelengths is convenient, but the spectral characteristics of other wavelengths are neglected in this process. Establishing prediction models for fruit tree phenotypes based on the full spectral information would greatly improve the utilization of spectral information and yield results with higher accuracy.

Digital photography
With the rapid development of digital computers and image processing technology, digital photography is becoming increasingly popular in scientific research and daily life. The approach of obtaining plant colour and spatial information from digital images has been successfully applied in the study of plant phenotypes [47–49].
The principle of digital photography
The charge-coupled device (CCD) is a semiconductor device applied in imaging technology as an image capture component. A CCD directly converts optical signals into analogue current signals and realizes image acquisition and reproduction through analogue-to-digital conversion. With the continuous progress of chip technology, the complementary metal oxide semiconductor (CMOS) sensor has gradually replaced the CCD, with the advantages of low energy consumption and moderate price [50]. Digital photography is an image acquisition technology for colour communication [51]. Digital images can be taken instantly and easily transmitted and edited. Owing to these advantages, digital photography has rapidly been adopted in scientific research. This section mainly focuses on the application of digital photography to the study of fruit tree phenotypes in the field.

The application of digital photography
In the study of fruit tree phenotypes, digital photography is mainly used for the determination of canopy structural and biochemical parameters. Fisheye photography and digital cover photography are two techniques with different lenses; both are useful in plant phenotypic analysis, especially in the determination of leaf area index (LAI) [52, 53]. A summary of applications of digital photography in fruit tree phenotypic studies is given in Table 2.

Detection of architecture parameters
Digital images have high resolution, which is valuable for the calculation of canopy architecture parameters. The architecture parameters include tree height, crown diameter, crown volume (Cv), leaf area (LA) and LAI. LAI is the total one-sided area of leaf tissue per unit ground surface area [54]; it can be regarded as a reliable basis for pruning branches and leaves to improve light transmittance and promote the growth of branches and leaves. Digital hemispherical photography (DHP) is a type of digital imaging with fisheye lenses.
Pictures are usually acquired from beneath the canopy towards the zenith, or from above the canopy looking downward, in phenotypic research. Jonckheere et al. reviewed methods for the indirect measurement of LAI using DHP technology [47]. The advantage of using DHP is that several commercial integrated instruments have been developed for LAI estimation, with image processing that reduces the intervention of operators. Each system contains a specific imaging device and free analysis software [54, 55].

Illumination conditions and shooting distance are intuitive factors affecting image quality. To improve the accuracy of LAI estimation, Knerl et al. conducted multiple experiments to determine the optimal shooting environment [56]. Two kinds of coloured anti-hail nets (blue and pearl) were artificially placed over the apple trees to mimic uniform overcast and ideal clear sky conditions. The images were taken at distances of 10, 20 and 40 cm above the ground under the canopy. The OTSU algorithm was selected for threshold prediction. The processing results showed that when the images were taken of a tree group from approximately 10 cm above the ground in a net-free environment, the predicted LAI had the smallest deviation from the destructive LAI. In threshold selection, Zarate-Valdez et al. [57] discovered that the contrast threshold for distinguishing leaves from the sky needed to be verified many times to generate a reliable LAI.

Digital cover photography (DCP) has become a substitute for DHP, with the advantage of high resolution. DCP uses a narrow field-of-view lens aimed at the zenith for imaging [58]. Compared with hemispherical photography, DCP is not sensitive to image exposure; however, there is a lack of software for processing digital cover images automatically [53, 59]. To improve the automation of the analysis methods for cover images, Fuentes et al.
used a script written in MATLAB 7.4 to replace the manual technique for LAI estimation of eucalyptus woodland [60]. The developed script can directly connect a laptop to the digital camera to obtain cover photographs and LAI analysis in real time. In subsequent research, the script was also applied to determine the LAI of fruit trees in apple orchards and vineyards [61]. In addition, Fuentes et al. added an automated module to the original code, and frames (images) were extracted from videos by commands from the Image Analysis Toolbox [62]. The new script could be successfully applied to analyse the LAI of grape vines from videos.

The development of specific software and automation programs for hemispherical and digital cover images provides an accurate and rapid method for the determination of the LAI of fruit trees. However, some studies have indicated that an ordinary consumer digital camera without special sensors can also be used to detect phenotypic information of fruit trees through its ability to perceive colour information.

Taking advantage of the high resolution of digital images, Klodt et al. presented an image segmentation method based on colour information [63]. Image pairs with overlapping information were obtained from different locations for each plant. A depth map was constructed by calculating the depth information according to the displacement of the target point between the image pair. Fruits, leaves, stems and background in the image were segmented according to colour information. Using the depth information, the pixel size in the segmented image was weighted to calculate the vine leaf area. This method has been successfully applied to the calculation of LA and fruit-to-leaf ratios in vineyards. In addition, structure from motion (SfM) reconstruction of orchards can be carried out using digital images, which is convenient for detecting the canopy volume of fruit trees. Haris et al.
obtained low-altitude images of a citrus orchard by UAV and generated a 3D map of the orchard [64]. They proposed a method to divide the 3D image of trees into a collection of voxels for the estimation of canopy volume. A voxel is an element of a 3D array representing the depth of the image. The canopy volume was calculated by counting the number of voxels occupied by each canopy and multiplying by the volume of each voxel. The canopy volumes of 78 trees can be measured in 15 min by this method, a significant improvement in efficiency over manual measurement (10 min per tree).

LAI is a dimensionless quantity representing the canopy and a significant parameter for the quantitative analysis of ecosystem productivity [54]. In traditional measurement approaches, the LAI is equivalent to the cumulative leaf area over the leaf fall period in a known collection area [65]. Although this calculation method obtains the most realistic results, it requires a long process. Studies have shown that digital photography is a reliable method for the measurement of LAI. Moreover, the estimation of LA and Cv by digital imaging can help farmers monitor the growth condition of fruit trees.
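The voxel-counting idea behind the canopy volume estimate above can be sketched in a few lines of numpy. This is a toy illustration, not the authors' implementation: the point cloud, voxel size and coordinate handling are all assumptions.

```python
import numpy as np

def canopy_volume(points, voxel_size=0.25):
    """Estimate canopy volume by counting occupied voxels in a 3D point cloud.

    points: (N, 3) array of x, y, z coordinates in metres (e.g. from an
    SfM reconstruction). voxel_size: edge length of a cubic voxel in metres.
    """
    # Snap each point to its voxel index, then keep unique occupied voxels.
    idx = np.floor(points / voxel_size).astype(int)
    occupied = np.unique(idx, axis=0)
    return occupied.shape[0] * voxel_size ** 3

# Toy cloud: points densely filling a 1 m x 1 m x 1 m crown.
rng = np.random.default_rng(1)
cloud = rng.uniform(0.0, 1.0, size=(20000, 3))
print(round(canopy_volume(cloud), 3))  # ~1.0 m^3 (64 occupied voxels of 0.25 m)
```

The voxel size trades off accuracy against robustness to sparse points; per-tree volumes would require segmenting the cloud into individual crowns first.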
Table 2 The applications of digital photography in the study of fruit tree phenotypes
Note: In the “scale” column of the table, the fruit tree objects are divided into individual trees (T) and fruits (F)

Architecture parameters:
- Apple orchard, T, Nikon FC-E8 fisheye lens; LAI, PAI [55]. Advantages: low cost; high accuracy; especially suitable for the determination of LAI. Limitations: susceptible to uneven light and overlapping blades.
- Apple orchard, T, CID CI-110 fisheye lens; LAI: error = 13% [56].
- Apple orchard, T, Samsung Digimax A503; LAI: R2 = 0.85, RMSE = 0.22 [61].
- Almond orchard, T, Nikon FC-E8 fisheye lens; LAI (NDVI): R2 = 0.88; LAI (NDWI): R2 = 0.91 [57].
- Vineyard, T, Samsung Digimax A503; LAI: R2 = 0.97, RMSE = 11.5% [62].
- Vineyard, T, Canon EOS 60D; LA: R2 = 0.93, RMSE = 3.0% [63].
- Citrus orchard, T, Canon EOS 6D; Cv [64].

Biochemical parameters:
- Mango, F, Kodak D5100; Chl-a ((NDVI − VARI)/(NDVI + VARI)): R2 = 0.71; Chl-t: R2 = 0.71; Chl-b ((R − B)/(R + B)): R2 = 0.57; carotenoids: R2 = 0.53; TSS: R2 = 0.57; TA: R2 = 0.59 [45]. Advantages: RGB images can reflect colour information well. Limitations: the evaluation accuracy is not high enough.

Detection of biochemical parameters of fruits
The colour digital image represented by red, green and blue components is called an RGB image [50]. RGB images can accurately reflect the colour information of the target. Extracting the three colour components R, G and B is the key to RGB image processing [66]. Some vegetation indices (VIs) expressed by colour components can be used to predict the biochemical parameters of fruits. Elsayed et al. proposed a method for the determination of the chlorophyll content of mango fruits using the VARI and the NDVI, calculated by (R − B)/(R + B) and (G − R)/(G + R − B), respectively [45].
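These colour-component indices are computed per pixel directly from the image channels. A minimal numpy sketch with made-up pixel values (the pixel array and index names are illustrative only):

```python
import numpy as np

def rgb_indices(img):
    """Per-pixel colour indices from an RGB image (float array, shape H x W x 3)."""
    R, G, B = img[..., 0], img[..., 1], img[..., 2]
    gri = (G - R) / (G + R - B)   # (G - R)/(G + R - B), as given above
    rb = (R - B) / (R + B)        # (R - B)/(R + B), used for Chl-b and TA above
    return gri, rb

# Toy 2x2 "image" of a greenish fruit surface (channel values in [0, 1]).
img = np.array([[[0.30, 0.55, 0.20], [0.35, 0.50, 0.25]],
                [[0.40, 0.45, 0.30], [0.32, 0.52, 0.22]]])
gri, rb = rgb_indices(img)
print(gri.mean(), rb.mean())
```

For calibration against measured biochemistry, the per-pixel values would typically be averaged over the segmented fruit region before regression.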
According to the PLSR models, the newly developed index (NDVI − VARI)/(NDVI + VARI) showed close and highly significant associations with chlorophyll a and chlorophyll t (the sum of chlorophyll a and chlorophyll b). In addition, the index (R − B)/(R + B) was a good predictor of TA.

The determination of phenotypic information of fruit trees by digital photography causes no damage to the trees, and the ability to view images instantly, without film development, brings great convenience to data acquisition. In addition, the segmentation of fruit trees and backgrounds based on colour information provides a new method for image processing.

Multispectral and hyperspectral imaging
Spectral imaging is a technique that divides the electromagnetic radiation of ground objects into several narrow spectral segments and obtains information in different bands of the same target at the same time by means of photography or scanning. Spectral imaging sensors can detect information in spectral bands beyond the visible range, such as infrared wavelengths, providing researchers with additional raw data [67].

The principle of multispectral and hyperspectral imaging
The visible to long-wave infrared spectrum (0.4–14 µm) is commonly used in scientific research. The electromagnetic waves in this range can be divided into four categories: the VIS band (400–700 nm), NIR band (700–1000 nm), short-wave infrared band (1000–2500 nm) and long-wave infrared band (7.5–14 µm) [68].

Spectral imaging is a technology that can simultaneously obtain the two-dimensional spatial information and one-dimensional spectral information of a target, drawing on disciplines such as spectroscopy, optics, computer technology, electronics and precision machinery [69]. Multispectral imaging adopts parallel sensor arrays and detects reflection in a small number of broad wavelength bands, generally three to six discontinuous bands.
Hyperspectral imaging detects reflection in hundreds of continuous spectral bands, whose widths are narrower than those of multispectral bands [5]. Therefore, hyperspectral imaging can yield in-depth information about specimens that is easily lost in multispectral imaging.

The application of multispectral and hyperspectral imaging
As computer technology and new optical equipment have evolved, many kinds of multispectral and hyperspectral imaging devices have been developed. The spectral imager needs to be stable during image acquisition. A darkroom and halogen lamps are usually used for spectral image acquisition in the laboratory [16, 70]. Ground-based spectral imaging systems are suitable for experiments in the field, with tripods and vehicles used as bearing devices for the camera [36, 71]. To quickly obtain spectral data for a whole orchard, unmanned aerial vehicles (UAVs) have been applied for imaging [72]. As the UAV flies along its route, the spectral camera takes continuous images at regular intervals [19]. In addition, spectral cameras mounted on manned spacecraft and satellites can capture spectral images on a large scale. The acquisition and processing methods of multispectral and hyperspectral imaging in the study of fruit tree phenotypes are shown in Fig. 2. The application of spectroscopy in phenotypic studies has a long history [16], and this review mainly focuses on research over the last 5 years. A summary is listed in Table 3, and some details are described in the following sections.

Detection of architecture parameters
It is useful to establish digital terrain models (DTMs) of orchards by using low-altitude images and the global positioning system (GPS) for the identification of canopy architectural features. A DTM is an ordered numerical array that describes the spatial distribution of various kinds of information on the Earth's surface.
A DTM without ground objects is referred to as a digital elevation model (DEM), and a DTM with ground objects is known as a digital surface model (DSM). Agisoft PhotoScan is a computer vision software package that can automatically identify and match features across multiple images and build a DTM of the research area by combining ground control point parameters, GPS positioning and the internal parameters of the camera. Matese et al. measured the canopy height of vine rows by constructing DSMs and DTMs [73]. Images of the vineyard in the R, G and NIR bands were obtained by a multispectral camera. The canopy height model, representing the relief of the vine row surface, was obtained by subtracting the DTM from the DSM. The estimated canopy height was approximately 0.5 m lower than the actual canopy height. They also built an NDVI map of the vineyard and found a good correlation between NDVI values and canopy heights in the areas with high canopies. This finding provided an idea for estimating canopy architecture parameters using VIs.

Pixel-based segmentation results are prone to salt-and-pepper noise because the size of a single pixel is much smaller than the detected object. Therefore, object-based image segmentation techniques are increasingly used in phenotypic studies [74]. Díaz-Varela used multi-resolution segmentation and supervised classification algorithms to segment the olive canopy and background from UAV images captured by a modified RGB camera [75]. The segmentation of single crowns was performed by the watershed algorithm. The canopy was isolated by a segmented contour line, and tree height was retrieved from the DSM based on the identification of local maxima. As a result, crown diameter was predicted with R2 = 0.58 and R2 = 0.22 in discontinuous and continuous canopies, respectively, and tree height was estimated with R2 = 0.07 and R2 = 0.58. Koc-San et al.
proposed a circular Hough transform algorithm to extract citrus trees from the DSM [76]. Combined with the specific canopy size and spacing, the images were processed by threshold analysis, median filtering and edge detection to obtain the edges of tree shadows. Then, according to the azimuth of the sun, the circular shadow was shifted to obtain the exact boundary of the tree crowns. This method is of great value for distinguishing tree crowns from other plants with similar radiation conditions. It can be concluded from this result that the circular Hough transform algorithm is suitable for the identification and feature extraction of fruit trees with green, round and compact crowns. Torres-Sánchez et al. classified vegetation and bare land areas based on vegetation index values, and the DSM layer was applied to separate trees from the surrounding soil according to the difference in height [77]. This method provides a good estimation of tree height (R2 = 0.90) and canopy area (R2 = 0.94). By considering spatial characteristics and contextual features, the object-oriented classification method takes spatial pixel clusters rather than single pixels as the classification feature, which makes it suitable for high-resolution image processing.

RGB images have high spatial resolution, which is conducive to the accurate acquisition and matching of ground control points when modelling DTMs. The spatial resolution of multispectral images is slightly lower than that of RGB images, so similarities are easily lost when matching multispectral images. However, multispectral cameras can detect reflection beyond the RGB bands, which is valuable for segmenting vegetation from background pixels given their significant contrast in the infrared bands.

Fig. 2 The acquisition and processing methods of multispectral and hyperspectral imaging in the study of fruit tree phenotypes.
The analysis has four steps, as shown in the figure.

Table 3 The applications of multispectral and hyperspectral imaging in the study of fruit tree phenotypes

| Applications | Species | Scale | Spectral range | Devices | Detected parameters (index): performance | References |
| Architecture parameters | Olive orchard | O | B, G, R, red edge, NIR | Tetracam mini-MCA-6 | Canopy area: R2 = 0.94, RMSE = 1.44 m2; tree height: R2 = 0.90, RMSE = 0.24 m; Cv: R = 0.65 | [77] |
| Architecture parameters | Olive orchard | O | IR | Panasonic Lumix DMC-GF1 | Tree height: R2 = 0.22; crown diameter: R2 = 0.58 | [75] |
| Architecture parameters | Vineyard | O | G, R, NIR | ADC-Snap | Tree height (NDVI) | [73] |
| Pigment and nutrient contents | Apple orchard | O | VIS, red edge, NIR | Multispectral imager | Chl (NDVI): R2 = 0.667, RMSE = 0.178 | [32] |
| Pigment and nutrient contents | Citrus orchard | O | 490–950 nm | Mini-MCA 12 | Total N: R = 0.6469, RMSEP = 0.1296; total soluble sugar: R = 0.6398, RMSEP = 8.8891; starch: R = 0.6822, RMSEP = 14.9303 | [82] |

Advantages and limitations. Architecture parameters: suitable for information acquisition over the whole orchard canopy, but high cost and difficulty in accurately detecting blade orientation. Pigment and nutrient contents: wide spectral band range and real-time monitoring of a large area, but not suitable for parameter determination of single blades.
Table 3 (continued)

| Applications | Species | Scale | Spectral range | Devices | Detected parameters (index): performance | References |
| Pigment and nutrient contents | Citrus orchard | O | 400–885 nm | Micro-Hyperspec VNIR model | Chlorophyll fluorescence (FLD): R2 = 0.72 | [80] |
| Pigment and nutrient contents | Pear orchard | O | 550–810 nm | Tetracam Micro-MCA | Leaf %N (M3CI): R2 = 0.67, RMSE = 0.24 | [83] |
| Pigment and nutrient contents | Vineyard | O | 515, 530, 570, 670, 700, 800 nm | Multispectral sensor | Carotenoid content (R515/R570): R2 = 0.43 | [78] |
| Pigment and nutrient contents | Vineyard | O | 400–885 nm | Micro-Hyperspec VNIR model | Carotenoid content (R515/R570): R2 = 0.48; (R515/R570, TCARI/OSAVI): R2 = 0.42, RMSE = 0.87 | [78] |
| Biochemical parameters | Mango orchard | T | 390.9–887.4 nm | Resonon Pika II | DM: R2 = 0.64 | [71] |
| Biochemical parameters | Mango orchard | T | 390.9–887.4 nm | Resonon Pika II | Yield: R2 = 0.83 | [88] |
| Biochemical parameters | Vineyard | T | 400–1000 nm | Resonon Pika L | TSS: R2 = 0.91; anthocyanin concentration: R2 = 0.72 | [87] |

Advantages and limitations. Biochemical parameters: quick detection, time saving and no need for chemical treatment, but advanced image processing techniques are required.
Note: in the "Scale" column of the table, the fruit tree objects are divided into individual trees (T) and the whole orchard (O).

Table 3 (continued)

| Applications | Species | Scale | Spectral range | Devices | Detected parameters (index): performance | References |
| Disease detection | Avocado orchard | O | 390–520 nm; 470–570 nm; 670–750 nm | Modified Canon camera | Distinguish laurel wilt disease (B/G) | [94] |
| Disease detection | Avocado orchard | O | 580, 650, 740, 750, 760, 850 nm | Tetracam mini-MCA-6 | Distinguish laurel wilt disease (TCARI760-650, NIR/G) | [95] |
| Disease detection | Avocado orchard | O | 560, 660, 830 nm | ADC Micro | Identify white root rot disease (NDVI): accuracy 82% | [96] |
| Disease detection | Olive orchard | O | 400–885 nm | Micro-Hyperspec VNIR | VW severity levels (FLD3): accuracy 79.2% | [92] |
| Disease detection | Almond orchard | O | 400–885 nm | Micro-Hyperspec VNIR | Red leaf blotch development (FLD2, Chl a+b, carotenoid) | [93] |

Advantages and limitations. Disease detection: suitable for disease detection over large scales and not influenced by variation in agronomic characteristics, but lacks the ability to diagnose disease.

The segmentation of canopy and background pixels is an important part of image processing, and an algorithm suited to the distribution characteristics of the fruit trees will help to obtain ideal results. In summary, it is necessary to balance the accuracy of DTMs against the complexity of image processing when selecting a technology for phenotypic research. Biomass is one of the most important parameters of canopy management, and architecture parameters can be used as the basis for assessing biomass [73].
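The DSM/DTM arithmetic underlying these studies can be sketched in a few lines of numpy. This is a minimal illustration, not code from any of the cited works; the rasters, the 0.5 m height threshold and all values are hypothetical.

```python
import numpy as np

def canopy_height_model(dsm, dtm):
    """Canopy height model (CHM): digital surface model minus
    digital terrain model, both given as 2-D rasters in metres."""
    return dsm - dtm

def tree_mask(chm, min_height=0.5):
    """Boolean mask of pixels taller than a height threshold,
    separating tree crowns from bare soil and low vegetation."""
    return chm > min_height

# Toy 3x3 rasters (metres); values are illustrative only.
dsm = np.array([[101.2, 100.1, 100.0],
                [102.5, 100.2, 100.1],
                [103.0, 100.3, 100.0]])
dtm = np.array([[100.0, 100.0, 100.0],
                [100.1, 100.1, 100.1],
                [100.2, 100.2, 100.0]])

chm = canopy_height_model(dsm, dtm)
mask = tree_mask(chm, min_height=0.5)
print(int(mask.sum()))  # → 3
```

Thresholding the CHM by height is the same design idea used to separate trees from the surrounding soil in the object-based approaches above.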
The estimation of architecture parameters of fruit trees with UAV imaging at the orchard level allows the creation of maps of orchard heterogeneity and the observation of zones with different tree sizes, which provides a prerequisite for precision agriculture.

Detection of pigment and nutrient contents

At different growth stages, the pigment and nutrient contents of fruit leaves change accordingly, generating different reflectance under light radiation. Spectral imaging records the spectral information of the target, which can be used to analyse the growth conditions of the plant.

The Transformed Chlorophyll Absorption in Reflectance Index (TCARI) and the Optimized Soil-Adjusted Vegetation Index (OSAVI) are usually applied to minimize the effects of soil and LAI during pigment estimation. Zarco-Tejada et al. estimated the leaf carotenoid content of vineyards using UAV multispectral and hyperspectral images [78]. The combination of the R515/R570 and TCARI/OSAVI indices provided a good prediction of carotenoid content. However, multispectral imagery yielded lower R2 values (R2 = 0.43) than hyperspectral imagery (R2 = 0.48). A reason might be that multispectral cameras have independent lenses, resulting in pixel-matching errors between wavebands.

Chlorophyll fluorescence is a probe for the study of photosynthesis; it reflects the photochemical reaction process and is related to the chlorophyll content. The quantification of chlorophyll fluorescence aims to evaluate photosynthesis. The nonuniformity of the canopy affects the measurement of the fluorescence signal, so to extract the pure canopy fluorescence emission from clustered pixels, the coverage range of each pixel should be fully considered [79]. The Fraunhofer line depth (FLD) principle is the fundamental principle of chlorophyll fluorescence detection. Zarco-Tejada et al. captured spectral images of a citrus orchard from a UAV [80].
Irradiance spectra at wavelengths of 763, 750 and 780 nm were selected as parameters of the model. They compared fluorescence retrieval models established with structural indices and a chlorophyll index against the FLD model and found that the prediction of the FLD model was clearly better.

N is the main mineral nutrient needed for chlorophyll production and other plant cell components (proteins, nucleic acids and amino acids) [81]. The determination of N can support the timely management of nitrogen in orchards to ensure growth vitality. Xuefeng et al. obtained spectral images of a citrus orchard at a height of 100 m above the canopy using a multispectral camera mounted on a UAV [82]. The camera had eleven spectral channels at wavelengths of 490, 550, 570, 671, 680, 700, 720, 800, 840, 900 and 950 nm. Mature and young leaf areas were selected manually in the images. The PLS model based on the original spectrum was the best prediction model for total nitrogen content, with R2 = 0.6469. A model combining support vector machine (SVM) and least squares methods could estimate the starch content of mature leaves with R = 0.6822. In a red-blush pear orchard, Perry et al. used a six-band multispectral camera (550, 660, 710, 720, 730 and 810 nm; all bands 10 nm wide) to collect canopy images with a UAV [83]. They proposed a new index, the Modified Canopy Chlorophyll Content Index (M3CI_710nm), for the assessment of canopy nitrogen. M3CI_710nm is calculated as (RNIR + RRed − RRE)/(RNIR − RRed + RRE), where RNIR is the measured reflectance in the 810-nm band, RRed the reflectance in the 660-nm band, and RRE the reflectance in the 710-nm band. Regression results showed the highest R2 value for leaf %N (R2 = 0.67) with the new index.
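The M3CI formula quoted above translates directly into a small function. The reflectance values in the example call are hypothetical, chosen only to illustrate the computation.

```python
def m3ci(r_nir, r_red, r_re):
    """Modified Canopy Chlorophyll Content Index (M3CI_710nm), as given
    in the text: (R_NIR + R_Red - R_RE) / (R_NIR - R_Red + R_RE), with
    reflectances from the 810 nm, 660 nm and 710 nm bands."""
    return (r_nir + r_red - r_re) / (r_nir - r_red + r_re)

# Illustrative reflectances for a single canopy pixel (hypothetical).
print(round(m3ci(r_nir=0.45, r_red=0.05, r_re=0.20), 3))  # → 0.5
```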
A spectral camera mounted on a UAV can capture canopy images of an orchard in a short time, but flights are constrained by air traffic control and battery capacity. Remote sensing satellites are man-made satellites used as remote sensing platforms in outer space, capable of covering the Earth or designated areas, and satellite data can be used for agricultural research. Multispectral sensors carried by satellites mainly include blue, green, red and NIR bands. Sentinel-2, launched by the European Space Agency, additionally carries sensors in the red-edge bands. Li et al. used Sentinel-2A remote sensing images to estimate the chlorophyll content of apple canopies [32]. The combined index (NDVIgreen + NDVIred + NDVIre) was the best predictor of chlorophyll content, and the SVM model (R2 = 0.729) provided better predictions than the back-propagation neural network (BPNN) method.

The above results indicate that spectral imaging has great value for monitoring the pigment and nutrient contents of fruit trees. Satellite spectral remote sensing has a broad field of vision and can record macro features of large ground areas; nonetheless, the spatial resolution of the images is much lower than that of UAV images. The spectral imaging sensors carried by a UAV also have more bands than satellite sensors. Thus, spectral imaging with a UAV is a suitable method for agricultural phenotypic research when time and space permit.

Compared with VIS–NIR spectroscopy, spectral imaging technology can obtain information more quickly and with less labour. It is noteworthy that spectral imaging cannot yield spectral data directly, so complex image processing techniques are needed to extract spectral information from the images.
Detection of biochemical parameters of fruits

Experiments on fruit detection using spectral imaging have mainly been carried out in the laboratory under controlled conditions, including illumination, temperature and distance [84–86]. Fruits were tested separately, which takes a long time when there are many samples. Recently, on-the-go spectral imaging devices have been successfully applied to fruit detection [87].

Gutiérrez et al. installed a hyperspectral camera (400–1000 nm) on an all-terrain vehicle to obtain dynamic hyperspectral images of a vineyard [87]. A relation matrix was established between all the pixels in the spectral image and the characteristic spectrum of the grape, and pixels whose correlation coefficients reached a predetermined value were selected as grape pixels. An epsilon-SVM algorithm was applied for the prediction of TSS (R2 = 0.91) and anthocyanin concentration (R2 = 0.72). On-the-go hyperspectral imaging thus accomplished the detection of fruit components in the field, with results comparable to those obtained under laboratory conditions.

Replacing the all-terrain vehicle with field robotics, Wendel et al. implemented a driverless, automatic spectral scanner to predict the dry matter (DM) content of mangoes [71]. They developed an analytical method that unified the classification and regression analysis of hyperspectral images based on a convolutional neural network (CNN) and the PLS algorithm. The DM content was predicted not for individual fruit but as an average over each tree. The CNN model had a higher prediction accuracy (R2 = 0.64) than the PLS model (R2 = 0.58). To make a more accurate estimation of mango yield, the research team also counted the number of mangoes on each tree [88]. RGB images and hyperspectral images of mango trees were obtained simultaneously.
After classification of mango and non-mango pixels, the width and height of local regions of mango pixels were parameterized to determine local maxima, and the number of mangoes was taken as the number of local maxima. The estimation of mango counts showed that the accuracy of hyperspectral counting was lower than that of RGB imaging. Although the resolution of RGB imaging is higher than that of spectral imaging, which is more conducive to image segmentation, spectral imaging can be applied to many aspects of phenotypic research and brings much more information to researchers than RGB imaging. Estimating the ripeness and number of fruits by spectral imaging helps farmers make a detailed harvest plan and maximize profit [88].

Detection of diseases

Plant diseases can cause considerable losses of plant quality and yield. Hence, effective identification methods should be adopted to prevent disease aggravation and infection [7]. Traditional detection methods are visual feature analysis and laboratory microbiological methods [89, 90]. However, these methods require specialized pathological knowledge and a long detection process, so the best opportunity for treatment is often missed. Non-invasive spectral imaging provides a rapid, non-destructive testing method for plant disease detection. This section mainly focuses on the application of hyperspectral and multispectral imaging to the disease detection of fruit trees in the field.

Verticillium wilt (VW), caused by the soil-borne fungus Verticillium dahliae Kleb., is the most limiting disease in all traditional olive-growing regions worldwide. To detect VW, Calderón et al. captured airborne thermal, multispectral and hyperspectral images of a 7-ha commercial orchard.
Through general linear model analysis, visible ratios (B/BG/BR) and a fluorescence index (FLD3) were found to be effective in detecting VW at early stages of disease development [91]. To verify the applicability of spectral imaging methods in large-scale orchards, the research team then carried out VW detection experiments over a 3000-ha commercial olive area. A manned aircraft replaced the UAV for image acquisition, since the UAV cannot fly for long periods. Linear discriminant analysis (LDA) and SVM algorithms were used to classify healthy and diseased trees. Over the whole data set, SVM achieved a high classification accuracy of 79.2%, while LDA achieved 59.0%. FLD3 proved a good indicator for identifying olive trees at the early stages of disease development at the orchard scale and even larger scales [92]. López-López et al. used the same analytical algorithms to detect red leaf blotch disease in an almond orchard [93]. Pigment indices (chlorophyll and carotenoid) and chlorophyll fluorescence could identify infected trees effectively at the early stage.

Laurel wilt (LW) is a lethal disease that has spread throughout the southeastern United States and severely affected the avocado industry. A digital colour camera was modified by adding a 37-mm filter ring to the front nose to capture images in the blue (390–520 nm), green (470–570 nm) and red-edge (670–750 nm) bands [94]. The M-statistic was applied to evaluate the separability of healthy and diseased trees. According to the analysis of variance of the spectral images of the avocado canopy, B/G was capable of separating healthy trees from laurel wilt-affected trees with M = 1.53. However, the researchers suggested using a camera with higher spectral resolution to improve the classification accuracy.
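The M-statistic used for such separability analyses is commonly defined as the difference of the two class means divided by the sum of their standard deviations; assuming that definition, a minimal sketch is below. The B/G band-ratio values are hypothetical.

```python
import numpy as np

def m_statistic(class_a, class_b):
    """Separability of two pixel populations, assumed here to be
    |mean_a - mean_b| / (std_a + std_b); larger M means the two
    classes are easier to separate by a simple threshold."""
    a, b = np.asarray(class_a, float), np.asarray(class_b, float)
    return abs(a.mean() - b.mean()) / (a.std() + b.std())

# Hypothetical B/G band-ratio values for canopy pixels.
healthy_bg  = np.array([0.80, 0.82, 0.78, 0.81])
diseased_bg = np.array([1.10, 1.15, 1.05, 1.12])
print(round(m_statistic(healthy_bg, diseased_bg), 2))
```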
A Tetracam mini-MCA-6 multispectral camera with six individual digital sensors (green 580 nm, red 650 nm, red edge 740 and 750 nm, NIR 760 and 850 nm; bandwidths of 10–40 nm) was applied to obtain spectral images of an avocado orchard [95]. To make the tests more accurate, the researchers divided the degree of infection into four stages. The vegetation indices TCARI760-650, NIR/G and Redge/G were able to discriminate LW at each developmental stage, with M values of up to 2.1. Although the modified digital camera offered a significant cost reduction, the multispectral camera had more bands and narrower bandwidths, so more spectral information could be applied to the classification of diseased trees, achieving improved accuracy. Perez-Bueno et al. mounted a multispectral camera limiting the radiation to bands at 560, 660 and 830 nm on a UAV [96]. ANN, logistic regression analysis (LRA), LDA and SVM classifiers were trained on NDVI to identify white root rot disease in avocado orchards. All four algorithms had the same resolution capability. The sensitivity of the LDA model was 55.5%, lower than that of the ANN and SVM models (78.6%). LRA had higher generality and a lower false-negative rate than SVM. These conclusions can provide a reference for the selection of classification models.

When infected fruit trees show response characteristics different from those of healthy trees, spectral imaging technology can provide reliable information for identifying the infected trees, and various forms of VIs can serve as indicators. Effective identification of disease facilitates the implementation of health-control and yield-optimization measures, rather than relying on the chemical action of pesticides [90].
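To illustrate the simplest of these classifiers: a two-class linear discriminant on a single NDVI feature reduces, for equal priors and a pooled variance, to a midpoint threshold between the two class means. This is a generic sketch, not the pipeline of [96]; all NDVI values below are hypothetical.

```python
import numpy as np

class NDVILinearDiscriminant:
    """Two-class linear discriminant on a single NDVI feature.

    With Gaussian classes, a pooled variance and equal priors, the
    decision boundary is the midpoint between the two class means."""

    def fit(self, ndvi_healthy, ndvi_diseased):
        self.mu_healthy = float(np.mean(ndvi_healthy))
        self.mu_diseased = float(np.mean(ndvi_diseased))
        self.threshold = 0.5 * (self.mu_healthy + self.mu_diseased)
        return self

    def predict(self, ndvi):
        # Healthy trees are assumed to show the higher NDVI.
        ndvi = np.asarray(ndvi, dtype=float)
        return np.where(ndvi >= self.threshold, "healthy", "diseased")

# Hypothetical per-tree NDVI values.
model = NDVILinearDiscriminant().fit(
    ndvi_healthy=[0.82, 0.85, 0.80, 0.84],
    ndvi_diseased=[0.55, 0.60, 0.52, 0.58],
)
print(model.predict([0.83, 0.57]))  # → ['healthy' 'diseased']
```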
Multispectral cameras have separate sensors for each spectral band, and a multispectral image provides information on all pixels in the corresponding bands. Hyperspectral cameras adopt a push-broom method to obtain the full spectral information for all pixels in the bands [67]. The essence of a hyperspectral image is a cube composed of a large number of images: two dimensions are pixels, and the third dimension is the spectrum of each pixel [97]. For multispectral images, high precision is needed when matching pixels of images obtained simultaneously from different sensors. We can conclude that spectral imaging is an effective method for contactless and spatially continuous monitoring in fruit tree phenotypic studies at the orchard level.

Thermal imaging

Thermal imaging produces digital images and draws a thermal map of the scene in false colour [98]. Traditionally, temperature is measured with thermometers, thermocouples, thermistors and temperature detectors. These techniques are limited to specific points, whereas thermal imaging enables spatially continuous monitoring [99].

The principle of thermal imaging

Everything in nature whose temperature is above absolute zero emits infrared radiation, and this radiation carries information about the characteristics of the object. The thermal motion of molecules or atoms becomes more intense with increasing temperature, and the infrared radiation is correspondingly enhanced [99]. The core of a thermal imaging camera is the infrared detector, which absorbs the infrared energy emitted by the object and converts it into a voltage or current [100]. Thermal imaging can visualize the temperature information of the detected object and has played an important role in meteorological disaster management [101, 102], animal behaviour recognition [103, 104] and medical research [105, 106].
The application of thermal imaging

The applications of thermal imaging in the study of fruit tree phenotypes over recent years are summarized in Table 4. Details and analysis are given in the following sections, focusing in particular on the detection of water stress and disease.

Detection of water stress

A lack of sufficient moisture in fruit trees can be considered water stress. Water stress is the most harmful environmental stress for the development and production of fruit trees and can affect cell division and vegetative growth. Decreasing water content in plants reduces the photosynthetic rate and increases stomatal closure, which lowers CO2 uptake and transpiration and thus raises plant temperature [107]. Although gs cannot be measured directly by thermal imaging, canopy temperature (Tc) can be measured to reflect stomatal status [108].

Table 4 The applications of thermal imaging in the study of fruit tree phenotypes
Note: in the "Scale" column of the table, the fruit tree objects are divided into individual trees (T) and the whole orchard (O).

| Applications | Species | Scale | Spectral range | Devices | Detected parameters (indices): performance | References |
| Water stress | Citrus orchard | T | 7.5–13 µm | IR thermal camera | Ψs (Tc − Ta): R2 = 0.42–0.76 | [108] |
| Water stress | Pear orchard | T | 8–12 µm | FLIR 400 | gs (Tc) | [107] |

Advantages and limitations. Suitable for canopy temperature detection at the orchard level, reducing the deployment of large numbers of temperature sensors; however, high cost and vulnerability to shadow, uneven lighting and other environmental effects.
Table 4 (continued)

| Applications | Species | Scale | Spectral range | Devices | Detected parameters (indices): performance | References |
| Water stress | Vineyard | O | 7.5–13 µm | FLIR Tau II 320 | Pn (CWSI): R = −0.80 | [113] |
| Water stress | Vineyard | O | 7.5–13 µm | FLIR Tau II 320 | Ψs (CWSI): R2 = 0.6931; gs: R2 = 0.7061 | [114] |
| Water stress | Olive orchard | O | 7.5–13.5 µm | Tau 2 324 | Ψs (CWSI): R2 = 0.60–0.73; gs: R2 = 0.91 | [115] |
| Water stress | Olive orchard | T | 7.5–13 µm | ThermaCAM SC2000 | Water status (CWSI) | [110] |
| Water stress | Olive orchard | T | 8–14 µm | Flir One | Ψleaf (Tc): R2 = 0.81; (CWSI): R2 = 0.73 | [119] |
| Water stress | Almond orchard | O | 8–12 µm | Miricle 307 | Ψs (CWSI): R2 = 0.67; (Tc − Ta): R2 = 0.65 | [111] |
| Water stress | Peach orchard | O | 8–12 µm | Miricle 307 | Ψs (CWSI): R2 = 0.92; (Tc − Ta): R2 = 0.65 | [111] |
| Water stress | Apricot orchard | O | 8–12 µm | Miricle 307 | Ψs (CWSI): R2 = 0.64; (Tc − Ta): R2 = 0.65 | [111] |
| Disease detection | Apple orchard | T | 8–12 µm | Varioscan 3201 ST | Infected area of scab disease (MTD): R2 = 0.85; severity levels: R2 = 0.71 | [121] |
| Disease detection | Olive orchard | O | 8–12 µm | Miricle 307 | Severity levels of VW (Tc − Ta): R2 = 0.76; (CWSI): R2 = 0.83 | [91] |
| Disease detection | Olive orchard | O | 7.5–13 µm | FLIR SC655 | Severity levels of VW (Tc − Ta) | [92] |
| Disease detection | Almond orchard | O | 7.5–13 µm | FLIR SC655 | Severity levels of red leaf blotch (Tc − Ta) | [93] |

To reduce the influence of field variability, Struthers et al. adjusted the irrigation amount and conducted control experiments on 30 pear trees [107]: a stress treatment of 18 canopies and a control treatment of 12 canopies (normal irrigation). A long-wave thermal imager (7.5–13 µm) was attached to a mechanical lift, and thermal images of the canopy were acquired at nadir with a 25-degree field of view from 1.3 m above the canopy. Multivariate analysis showed that the Tc obtained by thermal imaging varied with gs, but this change may lag because of the influence of air temperature (Ta) and vapour pressure deficit.
The Crop Water Stress Index (CWSI) is a reasonable quantitative evaluation parameter for crop water stress under evaporative demand [109–111]. The CWSI can be calculated as follows:

CWSI = [(Tc − Ta) − (Tc − Ta)ll] / [(Tc − Ta)ul − (Tc − Ta)ll]    (1)

where Tc − Ta is the temperature difference between the crop canopy and the air; (Tc − Ta)ul is the upper limit of (Tc − Ta), corresponding to a fully stressed, dry canopy; and (Tc − Ta)ll is the lower limit of (Tc − Ta), corresponding to a canopy under good irrigation conditions [112]. The estimation of (Tc − Ta)ll and (Tc − Ta)ul must be careful and accurate, as both play important roles in the calculation.

Remote and proximal sensing measurements were compared with plant physiological variables by Matese et al. [113]. A small thermal imaging camera (7.5–13 µm) was mounted on a UAV as the remote sensing device, and images were collected at 70 m above the ground with a resolution of 9 cm/pixel. Proximal sensing images were collected at a distance of 1.5 m from the lateral canopy with an infrared thermal imaging camera (8–14 µm). In the calculation of the CWSI, the researchers revised the formula according to the actual situation:

CWSI = (Tleaf − Twet) / (Tdry − Twet)    (2)

where Tdry and Twet represent the temperatures of a stressed leaf and an unstressed wet leaf, respectively, while Tleaf, the leaf surface temperature, replaces Tc − Ta. Leaves were treated with petroleum jelly or wetted to simulate leaf stress and wetting. The results showed that the remote sensing data agreed with the proximal data. The CWSI increases when the net photosynthesis (Pn) rate decreases under water stress; therefore, the CWSI can be used as an indicator to evaluate the water status of a vineyard. In addition, the research team also detected the variation trend of the water state at the seasonal scale in the vineyard [114].
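The two CWSI formulations, Eq. (1) and Eq. (2), can be written directly as functions; the temperature values in the example calls are illustrative, not measurements from the cited studies.

```python
def cwsi_baseline(tc_minus_ta, lower, upper):
    """Eq. (1): CWSI from the canopy-air temperature difference and
    its well-watered (lower) and fully stressed (upper) limits."""
    return (tc_minus_ta - lower) / (upper - lower)

def cwsi_reference_leaves(t_leaf, t_wet, t_dry):
    """Eq. (2): CWSI from the leaf temperature and the temperatures
    of wetted (unstressed) and dry (stressed) reference leaves."""
    return (t_leaf - t_wet) / (t_dry - t_wet)

# Illustrative values (deg C): canopy 2 K above air, limits -2 K and +5 K.
print(cwsi_baseline(2.0, lower=-2.0, upper=5.0))            # ≈ 0.571
print(cwsi_reference_leaves(28.0, t_wet=25.0, t_dry=33.0))  # → 0.375
```

Both functions map a well-watered canopy to 0 and a fully stressed canopy to 1, which is what makes the CWSI comparable across dates and sites once the limits or reference-leaf temperatures are estimated carefully.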
The CWSI correlated well with Ψs (R2 = 0.6931) and gs (R2 = 0.7061). These results suggest that high-resolution thermal images can create great value for precise vineyard management.

Egea et al. proposed a method to calculate the CWSI at different times of day with Non-Water-Stressed Baselines (NWSBs) [115]. The NWSB was derived from Tc measured by infrared sensors mounted above olive trees and is associated with weather variables such as solar radiation. The slope and intercept of the NWSB change at different times within a day. To prevent the influence of rainy weather on leaf temperature and humidity, NWSB measurements were made only on consecutive sunny days. This method is practical for simplifying the calculation of the CWSI at different times. García-Tejero et al. evaluated NWSBs in an orchard with three almond varieties [116]. The slopes of the NWSB were similar among the varieties, but the intercepts differed, which also indicates that the NWSB intercept is related to weather conditions. The definition of the NWSB provides a reference for irrigation treatment under different water stress levels.

Thermal imagery is a spatial image with many mixed pixels, similar to spectral imagery, so separating the study area from the background is still the critical step in image processing. Moller et al. aligned a digital colour image with a thermal image and used the segmentation of the digital image as a mask to perform a statistical analysis of the temperature in thermal images [117]. Agisoft PhotoScan has been used to create a 3D point cloud and DEM from thermal images and GPS positions, after which pixels of soil and leaves can be separated by setting a height threshold [113, 114]. All of these steps require a high level of image processing technology and related procedures. Salgadoe et al. proposed a method for automatically segmenting canopy pixels according to temperature histograms [118].
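A simplified numpy sketch of histogram-based canopy segmentation in that spirit is shown below. The gradient rule, bin count and synthetic temperatures are illustrative assumptions, not the exact procedure of [118].

```python
import numpy as np

def canopy_temperature_bounds(temps, bins=60, min_grad=5):
    """Walk outward from the dominant histogram peak (assumed to be
    the canopy mode) until the bin-to-bin gradient falls below a
    predefined value; the stopping bins give the lowest and highest
    canopy temperatures."""
    counts, edges = np.histogram(np.ravel(temps), bins=bins)
    grad = np.abs(np.diff(counts.astype(float)))
    peak = int(np.argmax(counts))
    lo_bin, hi_bin = peak, peak
    while lo_bin > 0 and grad[lo_bin - 1] >= min_grad:
        lo_bin -= 1
    while hi_bin < bins - 1 and grad[hi_bin] >= min_grad:
        hi_bin += 1
    return edges[lo_bin], edges[hi_bin + 1]

# Synthetic thermal frame: cool canopy (~22 C) against hot soil (~35 C).
rng = np.random.default_rng(0)
temps = np.concatenate([rng.normal(22.0, 0.5, 4000),
                        rng.normal(35.0, 2.0, 1000)])
lo, hi = canopy_temperature_bounds(temps)
canopy_mask = (temps >= lo) & (temps <= hi)
```

Because the rule operates on the histogram rather than on individual pixels, it needs no manual mask and works at any image resolution, which is the advantage the authors emphasize.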
A histogram gradient threshold was set with a predefined local gradient to identify the highest and lowest canopy temperatures. Compared with segmentation methods that operate on specific pixels, histogram-based threshold segmentation saves time and labour, suits images of various resolutions, and can be a reliable method for fast, standardized thermal analysis.

Although thermal cameras have contributed significantly to canopy temperature and water stress assessments, their cost is a burden for ordinary farmers. To reduce the cost, García-Tejero et al. used a thermal imaging camera connected to a smartphone (Flir One) alongside a conventional thermal imaging camera, the Flir SC600, to capture images of almond trees [119]. The
Flir One camera has a lower resolution (80 × 60 pixels) than the Flir SC600 (640 × 480 pixels). Nevertheless, there was a strong agreement between Tc obtained by the Flir One and that measured by the Flir SC600 (R2 = 0.90), indicating that the Flir One is suitable for water status assessment. Thermal imaging devices connected to mobile phones not only speed up the monitoring process but also facilitate use by fruit growers.

Traditionally, plant water status is estimated with diffusion porometers or pressure chambers [6], but these manual measurement methods are not timely. Thermal imaging technology can analyse the water status of fruit trees in a short time through the evaluation of Tc. By using thermal imaging to monitor the spatial variation in orchard water status, data from a large orchard area can be obtained quickly without installing an unreasonable number of on-site sensors. In addition, UAV-based thermal imaging can map the water status of a whole orchard, providing a more detailed reference for modulated irrigation strategies.

Detection of diseases

Plant disease pathogens may damage the cuticular cell structure of plant tissues, affect stomatal conductance and transpiration, and cause changes in leaf temperature [120]. The ability of thermal imaging to evaluate canopy temperature makes it possible to detect plant diseases.

The apple scab pathogen grows under the epidermis of apple leaves, absorbing nutrients from the subcuticular space and destroying the cuticle, causing water loss and temperature changes. Oerke et al. found significant differences in thermal images corresponding to different stages of disease severity [121]. The maximum temperature difference (MTD) between the infected area and the healthy area increased with the degree of infection and was correlated with the infected area (R2 = 0.85) and the overall infection severity (R2 = 0.71). Polystigma amygdalinum PF Cannon is another fungus that lives on the leaf surface, causing almond trees to be infected with red leaf blotch disease. López-López et al. [93] collected thermal images in an almond orchard and found that Tc − Ta increased with the severity of the disease, especially at moderate or severe infection stages.

When a plant is affected by VW, the vascular system is damaged, which impedes the flow of water and results in water stress [122, 123]. Calderón et al. identified VW severity levels in olive orchards with airborne thermal imagery [91]. The gs was measured in the leaf and near-canopy fields at the tree level, and Tc and Ta were estimated from the thermal images. The measurements showed that Tc − Ta became higher and gs became lower as the severity level increased, which proved that crown temperature estimated by thermal imaging is effective for detecting VW in the early stage of disease development. The team then expanded the experiment by selecting nine areas in a larger commercial olive-growing region [92].
The nine areas covered different tree species, tree ages, planting densities and soil management techniques. The results showed that Tc − Ta remained an effective indicator for VW detection in large-scale orchards.
The studies mentioned above suggest that changes in Tc caused by disease can be monitored with thermal imaging techniques. Thermal imaging can help to separate healthy trees from infected trees, but it lacks diagnostic capability: it is difficult to determine whether a temperature change is in fact caused by disease [121]. Combining thermal imaging with other imaging techniques to address this problem is a current focus of fruit tree disease detection.
LiDAR scanning
Radar is an electronic device that transmits electromagnetic waves towards a target and receives their echo to obtain the distance and orientation of the target relative to the transmission point. LiDAR (light detection and ranging) is an analogous system that transmits a laser beam to detect the position, velocity and other characteristics of a target [124, 125].
The principle of LiDAR
A LiDAR system consists of a single narrowband laser and a receiving system [126]. The laser fires a pulse of light at the target, and the reflected wave is picked up by the receiver, which accurately measures the propagation time of the light pulse from transmission to reception. Because light pulses travel at the speed of light, the distance from the laser to the target can be calculated from the speed of light and the propagation time. The position of the target can then be determined from the height and scanning angle of the laser [127].
The application of LiDAR
Because of its ability to measure distance, LiDAR provides great value in estimating the architecture parameters of fruit trees [128–131]. The application of LiDAR in phenotypic analysis has been reviewed by Colaço et al. [18]. This section therefore focuses on the combination of LiDAR with other technologies.
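The time-of-flight principle described above reduces to simple geometry: the one-way range is the speed of light times half the round-trip time, and a scan angle plus the sensor height locate the return in space. A minimal planar (2D) sketch, with the angle convention (0° = straight down) and the flat-ground datum being illustrative assumptions:

```python
import math

C = 299_792_458.0  # speed of light in vacuum, m/s

def lidar_point(round_trip_s, scan_angle_deg, sensor_height_m):
    """Convert one pulse's round-trip time and scan geometry to a point.

    The pulse travels out and back, so the one-way range is c*t/2.
    In a planar scan, the scan angle (0 deg = straight down, assumed
    convention) and the sensor height give the target's horizontal
    offset and its height above the ground datum.
    """
    rng = C * round_trip_s / 2.0             # one-way range, metres
    a = math.radians(scan_angle_deg)
    x = rng * math.sin(a)                    # horizontal offset from sensor
    z = sensor_height_m - rng * math.cos(a)  # target height above datum
    return rng, x, z
```

Real airborne systems add GPS position and platform attitude to turn each (range, angle) pair into a georeferenced 3D point, but the core range equation is the one shown here.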
In the study of chlorophyll content, Ma et al. proposed a method to estimate the chlorophyll content in areas of different light intensity by using 3D models with colour characteristics [132]. A 3D laser scanner was used to acquire 3D data of apple trees; it was equipped with an internal colour camera that enabled the building of a colourful 3D model, in which the colours represented different light intensities. They found that the colour index (R − B)/(R + B) was suitable for describing the chlorophyll content under different light conditions. Similarly, a method fusing multispectral camera imagery and 3D portable LiDAR data was proposed by Hosoi et al. [133]. The multispectral camera was placed at points along the lines connecting the sample and the LiDAR to ensure that the spectral images had the same angle of view as the LiDAR data. The VI value of each pixel was added to the LiDAR projection image as an additional attribute reflecting the spatial distribution of chlorophyll. This method provides both the horizontal and vertical distribution of chlorophyll content over the canopy.
Combining LiDAR with colour imaging also benefits the detection of fruits. Underwood et al. used a mobile ground vehicle robot equipped with a 2D LiDAR and a machine vision camera to scan almond trees [134]. Within the LiDAR-based canopy mask, image classification was performed on the images associated with each tree to estimate canopy volume. To reduce the error caused by fruit occlusion, Stein et al. collected data from multiple viewpoints [135]. Based on spatial position coordinates, the fruits in each image were matched across viewpoint images to avoid double counting. The error between the number of fruits calculated by this method and the true value was 1.36%, which was considered high precision.
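Counting fruit from 3D positions while avoiding double counting amounts to grouping nearby detections into one cluster per fruit. The following single-linkage sketch illustrates that idea only; it is not the method of [134] or [135], and the linkage radius is an illustrative assumption:

```python
import numpy as np

def count_fruit_clusters(points, link_dist=0.05):
    """Group 3D detections within `link_dist` metres and count the groups.

    points: (n, 3) array of fruit detections, possibly containing
    duplicates of the same fruit seen from several viewpoints.
    Uses a brute-force union-find; fine for illustration, not for
    large point clouds.
    """
    pts = np.asarray(points, dtype=float)
    n = len(pts)
    parent = list(range(n))

    def find(i):  # union-find root lookup with path halving
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            if np.linalg.norm(pts[i] - pts[j]) <= link_dist:
                parent[find(i)] = find(j)   # merge the two groups
    return len({find(i) for i in range(n)})
```

Two detections of the same fruit from different viewpoints fall within the linkage radius and collapse into one cluster, so the cluster count approximates the fruit count.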
In addition to precise 3D coordinates, LiDAR systems also record "intensity", which is roughly defined as the backscattering intensity of the echo at each test point and refers to the amplitude of the returned signal [125]. Different spectral reflectance properties result in different backscattered intensities. Gené-Mola et al. converted the backscattering intensity at a laser wavelength of 905 nm into reflectance to separate apple fruits from canopy branches [136]. Exploiting the fact that the reflectance of apples at 905 nm is higher than that of leaves and branches, the points corresponding to leaves and branches were removed from the point clouds, and the remaining points were clustered to obtain the number of apples. The fusion of reflectance information and LiDAR data gave results comparable to those of colour imagery. In terms of obtaining plant reflectance, LiDAR is less affected by illumination conditions than spectral imaging.
When the spatial information of orchards is detected with UAV or satellite imagery, the spatial resolution is limited by the flight altitude, and the observation angle is restricted to a top-down view. The integration of ground-based LiDAR with other technologies can facilitate the study of phenotypic characteristics of fruit trees from multiple lateral perspectives.
Discussion
So far, much progress has been made in the phenotypic study of fruit trees, but efforts are still needed in the combination of technologies and the improvement of equipment. In further research, more attention should be paid to the practicability of the technology so that it can make a real contribution to the development of agriculture. To this end, we propose the following focus areas and challenges for future fruit tree phenotypic research.
The applications of spectrometers and spectral imagers indicate that changes in fruit tree pigment contents and water status can cause clear spectral responses in the VIS, NIR and short-wave infrared bands. Moreover, hyperspectral sensors in the ultraviolet (UV) range have been demonstrated to detect salt stress in barley leaves [137], and UV–VIS spectroscopy has been used for the classification of tea types [138]. Whether the spectral information of the UV band or other bands is useful for the study of fruit tree phenotypes remains to be verified in the future.
Cost reduction of optical imaging sensors will be an emphasis of fruit tree phenotypic techniques, so that they can serve farmers rather than only scientists. The Flir One camera mentioned in the thermal imaging section is a good example [119]: it costs less than professional optical imaging devices and can satisfy research demands in agriculture. Maintaining high resolution while keeping the cost low is a manufacturing challenge. In addition, it is necessary to develop image processing software with broad applied value so that mobile phones can replace computers in calculating the phenotypic characteristics of fruit trees.
LiDAR and imaging systems are complementary techniques for creating spatial coordinate descriptions and 3D image displays of plants [139]. LiDAR systems provide precise elevation information, which benefits the establishment of DSMs and DTMs. Wang et al. utilized airborne LiDAR and optical remote imagery to identify tree species in urban forests, and the classification accuracy was greatly improved compared with optical image analysis alone [74]. Consequently, identifying fruit trees with airborne LiDAR combined with optical imaging may be a new approach in the study of fruit tree phenotypes.
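The DSM and DTM mentioned above are commonly combined into a canopy height model (CHM), the per-cell difference between the surface elevation and the bare-earth elevation, which isolates vegetation height; a minimal sketch, where clipping small negatives to zero is an assumed cleanup step for interpolation noise:

```python
import numpy as np

def canopy_height_model(dsm, dtm):
    """CHM = DSM - DTM: top-of-surface height minus bare-earth height.

    dsm, dtm: elevation rasters of the same shape (metres).
    Small negative differences caused by interpolation noise are
    clipped to zero (assumed cleanup, not a universal convention).
    """
    chm = np.asarray(dsm, dtype=float) - np.asarray(dtm, dtype=float)
    return np.clip(chm, 0.0, None)
```

On such a CHM raster, individual tree crowns appear as local maxima, which is the starting point for crown delineation and tree identification.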
Conclusion
We have reviewed the non-destructive technologies applied in field studies of fruit tree phenotypes, including VIS–NIR spectroscopy, digital photography, multispectral and hyperspectral imaging, thermal imaging, and LiDAR. These techniques are feasible and valuable for phenotypic studies of fruit trees, such as the detection of architecture parameters, pigment and nutrient contents, water status, biochemical parameters of fruits, and plant disease. In particular, the combination of data obtained by LiDAR and imaging techniques can advance the evaluation of phenotypic characteristics of fruit trees in three-dimensional space. Spatial characteristics contribute greatly to the monitoring of spatial variability in pigment contents, the detection of fruit locations and the prediction of fruit yield.
The combination of non-destructive monitoring technology and automatic machinery enables the automation of phenotypic research equipment. Ground-based devices are used for the detailed study of fruit trees at the tree level; however, detecting large orchard areas with terrestrial devices takes a long time. Imaging techniques based on UAVs and satellites have therefore facilitated high-throughput phenotypic studies. The study of fruit tree phenotypes will benefit rational irrigation, disease prevention, and yield improvement. Furthermore, phenotypic information can serve as a basis for screening excellent fruit tree species and promoting planting research on fruit trees.
Abbreviations
ANN: Artificial neural network; BiPLS: Backward interval partial least squares; BPNN: Back-propagation neural network; CCD: Charge-coupled device; CMOS: Complementary metal oxide semiconductor; CNN: Convolutional neural network; Chl: Chlorophyll; Chl-a: Chlorophyll-a; Chl-b: Chlorophyll-b; Chl-t: Chlorophyll-t; Cv: Crown volume; CWSI: Crop water stress index; DCP: Digital cover photography; DEM: Digital elevation model; DHP: Digital hemispherical photography; DM: Dry matter; DSM: Digital surface model; DTM: Digital terrain model; EWT: Equivalent water thickness; FD: First derivative; FLD: Fraunhofer line depth principle; FLD3: Fraunhofer line depth principle based on three spectral bands; FLDn: FLD3 normalized; GA: Genetic algorithm; GPS: Global positioning system; gs: Stomatal conductance; LA: Leaf area; LAI: Leaf area index; LDA: Linear discriminant analysis; LiDAR: Light detection and ranging; LRA: Logistic regression analysis; LW: Laurel wilt; M3CI: Modified canopy chlorophyll content index; MSI: Moisture spectral index; MTD: Maximum temperature difference; N: Nitrogen; NDGI: Normalized difference greenness vegetation index; NDVI: Normalized difference vegetation index; NDWI: Normalized difference water index; NIR: Near-infrared; NWSB: Non-water-stressed baseline; OSAVI: Optimized soil-adjusted vegetation index; PAI: Plant area index; PLS: Partial least squares; PLSR: Partial least squares regression; Pn: Net photosynthesis; PRI: Photochemical reflectance index; UV: Ultraviolet; R: Red (spectral band of red); R: Correlation coefficient (parameter of performance evaluation); R2: Coefficient of determination; Rcv: Cross-validation correlation coefficient; RMSE: Root mean square error; RMSEP: Root mean square error of prediction; RWC: Relative water content; SfM: Structure from motion; SIPI: Structure insensitive pigment index; Spec: Spectrometer; SVM: Support vector machine; Ta: Air temperature; TA: Titratable acidity; Tc: Canopy temperature;
TCARI: Transformed chlorophyll absorption in reflectance index; TSS: Total soluble solids; UAV: Unmanned aerial vehicle; VARI: Visible atmospherically resistant index; VI: Vegetation index; VIS: Visible; VIs: Vegetation indices; VW: Verticillium wilt; Ψleaf: Leaf water potential; Ψpd: Predawn leaf water potential; Ψs: Stem water potential.
Acknowledgements
The authors thank American Journal Experts (AJE) for editing the language of this paper.
Authors' contributions
H-YR collected and analysed references and drafted the manuscript; R-ZH provided financial support; R-ZH and L-DM proposed the subject and revised the manuscript; L-X offered much help during revision. All authors read and approved the final manuscript.
Funding
The authors are grateful for the financial support from the Hebei Provincial Department of Science and Technology (Grant number 19227211D).
Availability of data and materials
Not applicable.
Ethics approval and consent to participate
Not applicable.
Consent for publication
Not applicable.
Competing interests
The authors declare that they have no competing interests.
Received: 7 November 2019 Accepted: 30 July 2020
References
1. Dhondt S, Wuyts N, Inzé D. Cell to whole-plant phenotyping: the best is yet to come. Trends Plant Sci. 2013;18(8):428–39.
2. Shakoor N, Lee S, Mockler TC. High throughput phenotyping to accelerate crop breeding and monitoring of diseases in the field. Curr Opin Plant Biol. 2017;38(C):184–92.
3. Mir RR, Reynolds M, Pinto F, et al. High-throughput phenotyping for crop improvement in the genomics era. Plant Sci. 2019;282(SI):60–72.
4. Mahlein A. Plant disease detection by imaging sensors-parallels and specific demands for precision agriculture and plant phenotyping. Plant Dis. 2016;100(2):241–51.
5. Chetty K, Govender M, Bulcock H. A review of hyperspectral remote sensing and its application in vegetation and water resource studies. Water Sa. 2007;33(2):145–51.
6. Jones HG.
Irrigation scheduling: advantages and pitfalls of plant-based methods. J Exp Bot. 2004;55(407):2427–36.
7. Alemu K. Detection of diseases, identification and diversity of viruses: a review. J Biol Agric Healthcare. 2015;5(1):204–13.
8. Ali MM, Bachik NA, Bachik NA, Muhadi NA, et al. Non-destructive techniques of detecting plant diseases: a review. Physiol Mol Plant P. 2019;108:101426.
9. Qin J, Chao K, Kim MS, et al. Hyperspectral and multispectral imaging for evaluating food safety and quality. J Food Eng. 2013;118(2):157–71.
10. Morgan KT, Scholberg JMS, Obreza TA, et al. Size, biomass, and nitrogen relationships with sweet orange tree growth. J Am Soc Hortic Sci. 2006;131(1):149.
11. Zhang Y, Zheng L, Sun H. An optical detector for determining chlorophyll and nitrogen concentration based on photoreaction in apple tree leaves. Intell Autom Soft Co. 1995;21(3):409–21.
12. Sari M, Sonmez NK, Karaca M. Relationship between chlorophyll content and canopy reflectance in Washington navel orange trees (Citrus sinensis (L.) Osbeck). Pak J Bot. 2006;38(4):1093–102.
13. Fernández-Novales J, Garde-Cerdán T, Tardáguila J, et al. Assessment of amino acids and total soluble solids in intact grape berries using contactless Vis and NIR spectroscopy during ripening. Talanta. 2019;199:244–53.
14. Wang H, Peng J, Xie C, et al. Fruit quality evaluation using spectroscopy technology: a review. Sensors. 2015;15(5):11889–927.
15. Raychaudhuri B. Imaging spectroscopy: origin and future trends. Appl Spectrosc Rev. 2016;51(1):23–35.
16. Mishra P, Asaari MSM, Herrero-Langreo A, et al. Close range hyperspectral imaging of plants: a review. Biosyst Eng. 2017;164:49–67.
17. Zhao C, Zhang Y, Du J, et al. Crop phenomics: current status and perspectives. Front Plant Sci. 2019;10:714.
18. Colaço AF, Molin JP, Rosell-Polo JR, et al.
Application of light detection and ranging and ultrasonic sensors to high-throughput phenotyping and precision horticulture: current status and challenges. Hortic Res-England. 2018;5(1):35.
19. Roth L, Hund A, Aasen H. PhenoFly planning tool: flight planning for high-resolution optical remote sensing with unmanned areal systems. Plant Methods. 2018;14(1):116.
20. Wagner A, Hilgert S, Kattenborn T, et al. Proximal VIS-NIR spectrometry to retrieve substance concentrations in surface waters using partial least squares modelling. Water Sci Tech-W Sup. 2019;9(4):1204–11.
21. Czechlowski M, Marcinkowski D, Golimowska R, et al. Spectroscopy approach to methanol detection in waste fat methyl esters. Spectrochim Acta Part A Mol Biomol Spectrosc. 2019;210:14–20.
22. Wang J, Wang J, Chen Z, et al. Development of multi-cultivar models for predicting the soluble solid content and firmness of European pear (Pyrus communis L.) using portable vis–NIR spectroscopy. Postharvest Biol Tec. 2017;29:143–51.
23. Yang E, Ge S, Wang S. Characterization and identification of coal and carbonaceous shale using visible and near-infrared reflectance spectroscopy. J Spectrosc. 2018;2018:1–13.
24. You H, Kim Y, Lee J, et al. Food powder classification using a portable visible-near-infrared spectrometer. J Electromagn Eng Sci. 2017;17(4):186–90.
25. Xie LJ, Wang AC, Xu HR, et al. Applications of near-infrared systems for quality evaluation of fruits: a review. T Asabe. 2016;59(2):399–419.
26. Arendse E, Fawole OA, Magwaza LS, et al. Non-destructive prediction of internal and external quality attributes of fruit with thick rind: a review. J Food Eng. 2018;217:11–23.
27. Nicolaï BM, Beullens K, Bobelyn E, et al. Nondestructive measurement of fruit and vegetable quality by means of NIR spectroscopy: a review. Postharvest Biol Tec. 2007;46(2):99–118.
28. Crocombe RA. Portable spectroscopy. Appl Spectrosc. 2018;72(12):1701–51.
29. Xiaobo Z, Jiewen Z, Povey MJW, et al.
Variables selection methods in near-infrared spectroscopy. Anal Chim Acta. 2010;667(1–2):14–32.
30. Wang Z, Zhu X, Fang X, et al. Hyperspectral models for estimating chlorophyll content of young apple tree leaves. Intell Autom Soft Co. 2015;21(3):383–93.
31. Guo Z, Zhao C, Huang W, et al. Nondestructive quantification of foliar chlorophyll in an apple orchard by visible/near-infrared reflectance spectroscopy and partial least squares. Spectrosc Lett. 2014;47(6):481–7.
32. Li C, Zhu X, Wei Y, et al. Estimating apple tree canopy chlorophyll content based on Sentinel-2A remote sensing imaging. Sci Rep-UK. 2018;8(1):3756.
33. Zarco-Tejada PJ, Berjón A, López-Lozano R, et al. Assessing vineyard condition with hyperspectral indices: leaf and canopy reflectance simulation in a row-structured discontinuous canopy. Remote Sens Environ. 2005;99(3):271–87.
34. Ordonez C, Rodriguez-Perez JR, Moreira JJ, et al. Using hyperspectral spectrometry and functional models to characterize vine-leaf composition. IEEE T Geosci Remote. 2013;51(5):2610–8.
35. Ordoñez C, Martínez J, Matías JM, et al. Functional statistical techniques applied to vine leaf water content determination. Math Comput Model. 2010;52(7–8):1116–22.
36. Dzikiti S, Verreynne SJ, Stuckens J, et al. Seasonal variation in canopy reflectance and its application to determine the water status and water use by citrus trees in the Western Cape, South Africa. Agr Forest Meteorol. 2011;151(8):1035–44.
37. Rallo G, Minacapilli M, Ciraolo G, et al. Detecting crop water status in mature olive groves using vegetation spectral measurements. Biosyst Eng. 2014;128:52–68.
38. Pôças I, Rodrigues A, Gonçalves S, et al. Predicting grapevine water status based on hyperspectral reflectance vegetation indices. Remote Sens-Basel. 2015;7(12):16460–79.
39. González-Fernández AB, Rodríguez-Pérez JR, Marcelo V, et al. Using field spectrometry and a plant probe accessory to determine leaf water content in commercial vineyards.
Agr Water Manage. 2015;156:43–50.
40. Diago MP, Tardaguila J, Fernández-Novales J, et al. Non-destructive assessment of grapevine water status in the field using a portable NIR spectrophotometer. J Sci Food Agr. 2017;97(11):3772–80.
41. Diago MP, Bellincontro A, Scheidweiler M, et al. Future opportunities of proximal near infrared spectroscopy approaches to determine the variability of vineyard water status. Aust J Grape Wine R. 2017;23(3):409–14.
42. Diago MP, Fernández-Novales J, Tardaguila J, et al. In field quantification and discrimination of different vineyard water regimes by on-the-go NIR spectroscopy. Biosyst Eng. 2018;165:47–58.
43. Diago MP, Fernández-Novales J, Gutiérrez S, et al. Development and validation of a new methodology to assess the vineyard water status by on-the-go near infrared spectroscopy. Front Plant Sci. 2018;9:59.
44. Cruz-Hernandez A, Paredes-Lopez O. Fruit quality: new insights for biotechnology. Crit Rev Food Sci Nutr. 2012;52(3):272–89.
45. Elsayed S, Galal H, Allam A, et al. Passive reflectance sensing and digital image analysis for assessing quality parameters of mango fruits. Sci Hortic-Amsterdam. 2016;212:136–47.
46. Fernandez-Novales J, Tardaguila J, Gutierrez S, et al. On-the-go VIS + SW-NIR spectroscopy as a reliable monitoring tool for grape composition within the vineyard. Molecules. 2019;24(15):2795.
47. Jonckheere I, Fleck S, Nackaerts K, et al. Review of methods for in situ leaf area index determination. Agr Forest Meteorol. 2004;121(1–2):19–35.
48. Madec S, Baret F, de Solan B, et al. High-throughput phenotyping of plant height: comparing unmanned aerial vehicles and ground LiDAR estimates. Front Plant Sci. 2017;8:2002.
49. Watanabe K, Guo W, Arai K, et al. High-throughput phenotyping of sorghum plant height using an unmanned aerial vehicle and its application to genomic prediction modeling. Front Plant Sci. 2017;8:421.
50. Kazlauciunas A. Digital imaging – theory and application Part 1: theory. Surf Coat Int.
2001;84(B1):1–9.
51. Hong G, Luo MR, Rhodes PA. A study of digital camera colorimetric characterization based on polynomial modeling. Color Res Appl. 2001;26(1):76–84.
52. Macfarlane C, Hoffman M, Eamus D, et al. Estimation of leaf area index in eucalypt forest using digital photography. Agr Forest Meteorol. 2007;143(3–4):176–88.
53. Macfarlane C, Grigg A, Evangelista C. Estimating forest leaf area using cover and fullframe fisheye photography: thinking inside the circle. Agr Forest Meteorol. 2007;146(1–2):1–12.
54. Breda NJJ. Ground-based measurements of leaf area index: a review of methods, instruments and current controversies. J Exp Bot. 2003;54(392):2403–17.
55. Liu C, Kang S, Li F, et al. Canopy leaf area index for apple tree using hemispherical photography in arid region. Sci Hortic-Amsterdam. 2013;164:610–5.
56. Knerl A, Anthony B, Serra S, et al. Optimization of leaf area estimation in a high-density apple orchard using hemispherical photography. HortScience. 2018;53(6):799–804.
57. Zarate-Valdez JL, Whiting ML, Lampinen BD, et al. Prediction of leaf area index in almonds by vegetation indexes. Comput Electron Agr. 2012;85:24–32.
58. Pekin B, Macfarlane C. Measurement of crown cover and leaf area index using digital cover photography and its application to remote sensing. Remote Sens-Basel. 2009;1(4):1298–320.
59. Alivernini A, Fares S, Ferrara C, et al. An objective image analysis method for estimation of canopy attributes from digital cover photography. Trees. 2018;32(3):713–23.
60. Fuentes S, Palmer AR, Taylor D, et al. An automated procedure for estimating the leaf area index (LAI) of woodland ecosystems using digital imagery, MATLAB programming and its application to an examination of the relationship between remotely sensed and field measurements of LAI. Funct Plant Biol. 2008;35(10):1070.
61. Poblete-Echeverría C, Fuentes S, Ortega-Farias S, et al.
Digital cover photography for estimating leaf area index (LAI) in apple trees using a variable light extinction coefficient. Sensors-Basel. 2015;15(2):2860–72.
62. Fuentes S, Poblete-Echeverría C, Ortega-Farias S, et al. Automated estimation of leaf area index from grapevine canopies using cover photography, video and computational analysis methods. Aust J Grape Wine R. 2014;20(3):465–73.
63. Klodt M, Herzog K, Töpfer R, et al. Field phenotyping of grapevine growth using dense stereo reconstruction. BMC Bioinform. 2015;16(1):143.
64. Haris M, Ishii K, Ziyang L, et al. Construction of a high-resolution digital map to support citrus breeding using an autonomous multicopter. Acta Hort. 2016;1135:73–84.
65. Chason JW, Baldocchi DD, Huston MA. A comparison of direct and indirect methods for estimating forest canopy leaf area. Agr Forest Meteorol. 1991;57(1):107–28.
66. Pei S, Cheng C. Extracting color features and dynamic matching for image database retrieval. IEEE T Circ Syst Vid. 1999;9(3):501.
67. Carlsohn MF. Spectral imaging in real-time—imaging principles and applications. Real-Time Imag. 2005;11(2):71–3.
68. Araus JL, Kefauver SC, Zaman-Allah M, et al. Translating high-throughput phenotyping into genetic gain. Trends Plant Sci. 2018;23(5):451–66.
69. Garini Y, Young IT, McNamara G. Spectral imaging: principles and applications. Cytom Part A. 2006;69A(8):735–47.
70. Oerke E, Herzog K, Toepfer R. Hyperspectral phenotyping of the reaction of grapevine genotypes to Plasmopara viticola. J Exp Bot. 2016;67(18):5529–43.
71. Wendel A, Underwood J, Walsh K. Maturity estimation of mangoes using hyperspectral imaging from a ground based mobile platform. Comput Electron Agr. 2018;155:298–313.
72. Zhang C, Kovacs JM. The application of small unmanned aerial systems for precision agriculture: a review. Precis Agric. 2012;13(6):693–712.
73. Matese A, Di Gennaro SF, Berton A.
Assessment of a canopy height model (CHM) in a vineyard using UAV-based multispectral imaging. Int J Remote Sens. 2017;38(8–10):2150–60.
74. Wang K, Wang T, Liu X. A review: individual tree species classification using integrated airborne LiDAR and optical imagery with a focus on the urban environment. Forests. 2019;10(1):1.
75. Díaz-Varela R, de la Rosa R, León L, et al. High-resolution airborne UAV imagery to assess olive tree crown parameters using 3D photo reconstruction: application in breeding trials. Remote Sens-Basel. 2015;7(4):4213–32.
76. Koc-San D, Selim S, Aslan N, et al. Automatic citrus tree extraction from UAV images and digital surface models using circular Hough transform. Comput Electron Agr. 2018;150:289–301.
77. Torres-Sánchez J, López-Granados F, Serrano N, et al. High-throughput 3-D monitoring of agricultural-tree plantations with unmanned aerial vehicle (UAV) technology. PLoS ONE. 2015;10(6):e0130479.
78. Zarco-Tejada PJ, Guillén-Climent ML, Hernández-Clemente R, et al. Estimating leaf carotenoid content in vineyards using high resolution hyperspectral imagery acquired from an unmanned aerial vehicle (UAV). Agr Forest Meteorol. 2013;171–172:281–94.
79. Zarco-Tejada PJ, Suarez L, Gonzalez-Dugo V. Spatial resolution effects on chlorophyll fluorescence retrieval in a heterogeneous canopy using hyperspectral imagery and radiative transfer simulation. IEEE Geosci Remote S. 2013;10(4):937–41.
80. Zarco-Tejada PJ, González-Dugo MV, Fereres E. Seasonal stability of chlorophyll fluorescence quantified from airborne hyperspectral imagery as an indicator of net photosynthesis in the context of precision agriculture. Remote Sens Environ. 2016;179:89–103.
81. Islam MS. Sensing and uptake of nitrogen in rice plant: a molecular view. Rice Sci. 2019;26(6):343–55.
82. Xuefeng L, Qiang L, Shaolan H, et al. Estimation of carbon and nitrogen contents in citrus canopy by low-altitude remote sensing. Int J Agric Biol Eng. 2016;9(5):149–57.
83.
Perry EM, Goodwin I, Cornwall D. Remote sensing using canopy and leaf reflectance for estimating nitrogen status in red-blush pears. HortScience. 2018;53(1):78–83.
84. Inácio MRC, de Lima KMG, Lopes VG, et al. Total anthocyanin content determination in intact açaí (Euterpe oleracea Mart.) and palmitero-juçara (Euterpe edulis Mart.) fruit using near infrared spectroscopy (NIR) and multivariate calibration. Food Chem. 2013;136(3–4):1160–4.
85. Galvez-Sola L, García-Sánchez F, Pérez-Pérez JG, et al. Rapid estimation of nutritional elements on citrus leaves by near infrared reflectance spectroscopy. Front Plant Sci. 2015;6:571.
86. Nagy A, Riczu P, Tamás J. Spectral evaluation of apple fruit ripening and pigment content alteration. Sci Hortic-Amsterdam. 2016;201:256–64.
87. Gutiérrez S, Tardaguila J, Fernández-Novales J, et al. On-the-go hyperspectral imaging for the in-field estimation of grape berry soluble solids and anthocyanin concentration. Aust J Grape Wine R. 2019;25(1):127–33.
88. Gutiérrez S, Wendel A, Underwood J. Ground based hyperspectral imaging for extensive mango yield estimation. Comput Electron Agr. 2019;157:126–35.
89. Zhang J, Huang Y, Pu R, et al. Monitoring plant diseases and pests through remote sensing technology: a review. Comput Electron Agr. 2019;165:104943.
90. Mahlein AK, Kuska MT, Thomas S, Bohnenkamp D, Alisaac E, Behmann J, Wahabzada M, Kersting K. Plant disease detection by hyperspectral imaging: from the lab to the field. Adv Animal Biosci. 2017;8(2):238–43.
91. Calderón R, Navas-Cortés JA, Lucena C, et al. High-resolution airborne hyperspectral and thermal imagery for early detection of Verticillium wilt of olive using fluorescence, temperature and narrow-band spectral indices. Remote Sens Environ. 2013;139:231–45.
92. Calderón R, Navas-Cortés J, Zarco-Tejada P. Early detection and quantification of Verticillium wilt in olive using hyperspectral and thermal imagery over large areas. Remote Sens-Basel. 2015;7(5):5584–610.
93.
López-López M, Calderón R, González-Dugo V, et al. Early detection and quantification of almond red leaf blotch using high-resolution hyperspectral and thermal imagery. Remote Sens-Basel. 2016;8(4):276.
94. de Castro AI, Ehsani R, Ploetz RC, et al. Detection of laurel wilt disease in avocado using low altitude aerial imaging. PLoS ONE. 2015;10(4):e124642.
95. De Castro AI, Ehsani R, Ploetz R, et al. Optimum spectral and geometric parameters for early detection of laurel wilt disease in avocado. Remote Sens Environ. 2015;171:33–44.
96. Perez-Bueno ML, Pineda M, Vida C, et al. Detection of white root rot in avocado trees by remote sensing. Plant Dis. 2019;103(6):1119–25.
97. Hagen N, Kudenov MW. Review of snapshot spectral imaging technologies. Opt Eng. 2013;52(9):90901.
98. Tattersall GJ. Infrared thermography: a non-invasive window into thermal physiology. Comp Biochem Physiol A Mol Integr Physiol. 2016;202:78–98.
99. Vadivambal R, Jayas DS. Applications of thermal imaging in agriculture and food industry—a review. Food Bioprocess Tech. 2011;4(2):186–99.
100. Meola C, Carlomagno GM. Recent advances in the use of infrared thermography. Meas Sci Technol. 2004;9(15):27–58.
101. Berger C, Rosentreter J, Voltersen M, et al. Spatio-temporal analysis of the relationship between 2D/3D urban site characteristics and land surface temperature. Remote Sens Environ. 2017;193:225–43.
102. Stow D, Riggan P, Schag G, et al. Assessing uncertainty and demonstrating potential for estimating fire rate of spread at landscape scales based on time sequential airborne thermal infrared imaging. Int J Remote Sens. 2019;40(13):4876–97.
103. Kays R, Sheppard J, Mclean K, et al. Hot monkey, cold reality: surveying rainforest canopy mammals using drone-mounted thermal infrared sensors. Int J Remote Sens. 2019;40(2):407–19.
104. Giro A, Pezzopane JRM, Barioni Junior W, et al.
Behavior and body surface temperature of beef cattle in integrated crop-livestock systems with or without tree shading. Sci Total Environ. 2019;684:587–96.
105. Koprowski R. Automatic analysis of the trunk thermal images from healthy subjects and patients with faulty posture. Comput Biol Med. 2015;62:110–8.
106. Childs C, Siraj MR, Fair FJ, et al. Thermal territories of the abdomen after caesarean section birth: infrared thermography and analysis. J Wound Care. 2016;25(9):499–512.
107. Struthers R, Ivanova A, Tits L, et al. Thermal infrared imaging of the temporal variability in stomatal conductance for fruit trees. Int J Appl Earth Obs. 2015;39:9–17.
108. Ballester C, Jiménez-Bello MA, Castel JR, et al. Usefulness of thermography for plant water stress detection in citrus and persimmon trees. Agr Forest Meteorol. 2013;168:120–9.
109. Jackson RD, Idso SB, Reginato RJ, et al. Canopy temperature as a crop water stress indicator. Water Resour Res. 1981;17(4):1133–8.
110. Ben-Gal A, Agam N, Alchanatis V, et al. Evaluating water stress in irrigated olives: correlation of soil water status, tree water status, and thermal imagery. Irrigation Sci. 2009;27(5):367–76.
111. Zarco-Tejada P, Gonzalez-Dugo V, Nicolás E, et al. Using high resolution UAV thermal imagery to assess the variability in the water status of five fruit tree species within a commercial orchard. Precis Agric. 2013;14(6):660–78.
112.
Jackson RD, Kustas WP, Choudhury BJ. A reexamination of the crop water stress index. Irrigation Sci. 1988;9(4):309–17. 113. Matese A, Baraldi R, Berton A, et al. Estimation of water stress in grape- vines using proximal and remote sensing methods. Remote Sens-Basel. 2018;10(1):114. 114. Santesteban LG, Di Gennaro SF, Herrero-Langreo A, et al. High-resolu- tion UAV-based thermal imaging to estimate the instantaneous and seasonal variability of plant water status within a vineyard. Agr Water Manage. 2017;183:49–59. 115. Egea G, Padilla-Díaz CM, Martinez-Guanter J, et al. Assessing a crop water stress index derived from aerial thermal imaging and infrared thermometry in super-high density olive orchards. Agr Water Manage. 2017;187:210–21. 116. García-Tejero IF, Gutiérrez-Gordillo S, Ortega-Arévalo C, et al. Thermal imaging to monitor the crop-water status in almonds by using the non- water stress baselines. Sci Hortic-Amsterdam. 2018;238:91–7. 117. Moller M, Alchanatis V, Cohen Y, et al. Use of thermal and visible imagery for estimating crop water status of irrigated grapevine. J Exp Bot. 2006;58(4):827–38. 118. Salgadoe A, Robson A, Lamb D, et al. A non-reference temperature histogram method for determining tc from ground-based thermal imagery of orchard tree canopies. Remote Sens-Basel. 2019;11(6):714. 119. García-Tejero I, Ortega-Arévalo C, Iglesias-Contreras M, et al. Assess- ing the crop-water status in almond (Prunus dulcis Mill.) trees via thermal imaging camera connected to smartphone. Sensors-Basel. 2018;18(4):e1050. 120. Kaim W, Fiedler J. Spectroelectrochemistry: the best of two worlds. Chem Soc Rev. 2009;38(12):3373–82. 121. Oerke EC, Fröhling P, Steiner U. Thermographic assessment of scab disease on apple leaves. Precis Agric. 2011;12(5):699–715. 122. Tsror Lahkim L. Epidemiology and control of Verticillium wilt on olive. Israel J Plant Sci. 2011;59(1):59–69. 123. Jiménez-Díaz RM, Cirulli M, Bubici G, Jiménez-Gasco LM, et al. 
Verticil- lium wilt, a major threat to olive production: current status and future prospects for its management. Plant Dis. 2012;96(3):304–29. 124. Colaço A, Trevisan R, Molin J, et al. A method to obtain orange crop geometry information using a mobile terrestrial laser scanner and 3D modeling. Remote Sens-Basel. 2017;9(8):763. 125. Kashani AG, Olsen MJ, Parrish CE, et al. A review of LIDAR radiometric processing: from ad hoc intensity correction to rigorous radiometric calibration. Sensors. 2015;15(11):28099–128. 126. Gondal MA, Mastromarino J. Lidar system for remote environmental studies. Talanta. 2000;53(1):147–54. 127. Lim K, Treitz P, Wulder M, et al. LiDAR remote sensing of forest structure. Progress Phys Geography Earth Environ. 2016;27(1):88–106. 128. Del-Moral-Martínez I, Rosell-Polo J, Company J, et al. Mapping vineyard leaf area using mobile terrestrial laser scanners: should rows be scanned on-the-go or discontinuously sampled? Sensors-Basel. 2016;16(1):119. 129. Chakraborty M, Khot LR, Sankaran S, et al. Evaluation of mobile 3D light detection and ranging based canopy mapping system for tree fruit crops. Comput Electron Agr. 2019;158:284–93. 130. Pfeiffer SA, Guevara J, Cheein FA, et al. Mechatronic terrestrial LiDAR for canopy porosity and crown surface estimation. Comput Electron Agr. 2018;146:104–13. 131. Arnó J, Escolà A, Masip J, et al. Influence of the scanned side of the row in terrestrial laser sensor applications in vineyards: practical conse- quences. Precis Agric. 2015;16(2):119–28. 132. Ma X, Feng J, Guan H, et al. Prediction of chlorophyll content in differ- ent light areas of apple tree canopies based on the color characteristics of 3D reconstruction. Remote Sens-Basel. 2018;10(3):429. 133. Hosoi F, Umeyama S, Kuo K. Estimating 3D chlorophyll content distribu- tion of trees using an image fusion method between 2D camera and 3D portable scanning lidar. Remote Sens-Basel. 2019;11(18):2134. 134. James P, Underwood CHBW, Sukkarieh S. 
Mapping almond orchard canopy volume, flowers, fruit and yield using lidar and vision sensors. Comput Electron Agr. 2016;130:83–96. 135. Stein M, Bargoti S, Underwood J. Image based mango fruit detection, localisation and yield estimation using multiple view geometry. Sen- sors. 2016;16(11):1915. 136. Gené-Mola J, Gregorio E, Guevara J, et al. Fruit detection in an apple orchard using a mobile terrestrial laser scanner. Biosyst Eng. 2019;187:171–84. 137. Brugger A, Behmann J, Paulus S, et al. Extending hyperspectral imaging for plant phenotyping to the UV-range. Remote Sens-Basel. 2019;11(12):1401. 138. Dankowska A, Kowalewski W. Tea types classification with data fusion of UV-Vis, synchronous fluorescence and NIR spectroscopies and chemometric analysis. Spectrochim Acta Part A Mol Biomol Spectrosc. 2019;211:195–202. 139. Rosell JR, Sanz R. A review of methods and applications of the geo- metric characterization of tree crops in agricultural activities. Comput Electron Agr. 2012;81:124–41. Publisher’s Note Springer Nature remains neutral with regard to jurisdictional claims in pub- lished maps and institutional affiliations. 
Phenotypic techniques and applications in fruit trees: a review

Abstract
Background
VIS–NIR spectroscopy
The principle of VIS–NIR spectroscopy
The application of VIS–NIR spectroscopy
Detection of pigment and nutrient contents
Detection of water stress
Detection of biochemical parameters of fruits
Digital photography
The principle of digital photography
The application of digital photography
Detection of architecture parameters
Detection of biochemical parameters of fruits
Multispectral and hyperspectral imaging
The principle of multispectral and hyperspectral imaging
The application of multispectral and hyperspectral imaging
Detection of architecture parameters
Detection of pigment and nutrient contents
Detection of biochemical parameters of fruits
Detection of diseases
Thermal imaging
The principle of thermal imaging
The application of thermal imaging
Detection of water stress
Detection of diseases
LiDAR scanning
The principle of LiDAR
The application of LiDAR
Discussion
Conclusion
Acknowledgements
References

work_oiobgvue2rfwxikg6jajnuolpi ---- Abstract: Long-Term Outcomes at Skeletal Maturity Following Cranial Vault Remodeling and Fronto-Orbital Advancement for Metopic Craniosynostosis | Semantic Scholar

DOI: 10.1097/01.gox.0000526331.21932.b9
Corpus ID: 37415259

@article{Suri2017AbstractLO,
  title={Abstract: Long-Term Outcomes at Skeletal Maturity Following Cranial Vault Remodeling and Fronto-Orbital Advancement for Metopic Craniosynostosis},
  author={Simrat Suri and M. Alperovich and Roberto L Flores and D. Staffenberg},
  journal={Plastic and Reconstructive Surgery Global Open},
  year={2017},
  volume={5}
}

Simrat Suri, M. Alperovich, Roberto L Flores, D. Staffenberg. Published 2017 in Plastic and Reconstructive Surgery Global Open.

RESULTS: SSO demonstrated decreased whole-brain intrinsic connectivity compared to controls in left BA-39 and bilateral BA-7's (p=0.071), which are the superior parietal lobules and the angular gyrus. UCS had significantly decreased intrinsic connectivity throughout the prefrontal cortex (PFC, p=0.031). On seed-based analysis, UCS had significantly increased connectivity between left BA-40 and bilateral BA-6, BA-8, and BA-9, and left BA-32 (p=0.050), between left BA-7 and the anterior PFC (p=0…

Cited by 1 paper: Temporal Fat Grafting in Children With Craniofacial Anomalies. Artur Fahradyan, P. Goel, Madeline Williams, A. Liu, Daniel Gould, M. Urata. Annals of Plastic Surgery, 2020.

work_ok6dv3mg35cx5gsnffxzho4yqe ---- Non-therapist identification of falling hazards in older adult homes using digital photography

Preventive Medicine Reports 2 (2015) 794–797
journal homepage: http://ees.elsevier.com/pmedr

Non-therapist identification of falling hazards in older adult homes using digital photography
Katherine C. Ritchey a,⁎, Deborah Meyer b, Gillian H.
Ice c

a Department of Geriatrics and Gerontology, Ohio University Heritage College of Osteopathic Medicine Health Sciences Center, Athens, OH, United States
b Department of Interdisciplinary Health Studies, Ohio University College of Health Sciences and Professions, Athens, OH, United States
c Department of Social Medicine and Director of Global Health, Ohio University Heritage College of Osteopathic Medicine Health Sciences Center, Athens, OH, United States

⁎ Corresponding author at: Geriatric Fellow, University of Washington, V.A. Puget Sound Health Care System, 1660 S. Columbian Way (S-182-GRECC), Seattle, WA 98108-1597, United States. Fax: +1 206 764 2569. E-mail address: krhanke@uw.edu (K.C. Ritchey).

http://dx.doi.org/10.1016/j.pmedr.2015.09.004
2211-3355/Published by Elsevier Inc. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).

Article info
Available online 21 September 2015
Keywords: Fall prevention; Home safety; Falling hazards; Home assessments

Abstract
Evaluation and removal of home hazards is an invaluable method for preventing in-home falls and preserving independent living. Current processes for conducting home hazard assessments are impractical from a whole-population standpoint given the substantial resources required for implementation. Digital photography offers an opportunity to remotely evaluate an environment for falling hazards. However, the reliability of this method has only been tested under the direction of skilled therapists.
Ten community-dwelling adults over the age of 65 were recruited from local primary care practices between July 2009 and February 2010. In-home (IH) assessments were completed immediately after a photographer, blinded to the assessment form, took digital photographs (DP) of the participant home. A different non-therapist assessor then reviewed the photographs and completed a second assessment of the home. The Kappa statistic was used to analyze the reliability between the two independent assessments.
Home assessments completed by a non-therapist using digital photographs had substantial agreement (Kappa = 0.61, p < 0.001) with in-home assessments completed by another non-therapist. Additionally, the DP assessments agreed with the IH assessments on the presence or absence of items 96.8% of the time. This study showed that non-therapists can reliably conduct home hazard evaluations using digital photographs.

Introduction
Falls are the leading cause of unintentional fatal and non-fatal injury in those over the age of 65, with nearly half of older individuals falling in their home (Gill et al., 1999; Centers for Disease Control and Prevention NCfIPaC, 2010, 2014). Studies suggest that most homes occupied by older adults have at least four falling hazards and that hazards are involved in 30–40% of in-the-home falls (Carter et al., 1997; Wyman et al., 2007; Stevens et al., 2014). Evaluation and removal of home hazards is an invaluable method for preventing falls, reducing the risk of injury and preserving independent living in the elderly (Stevens et al., 2001; Clemson et al., 1996; Gillespie et al., 2012; Robertson and Gillespie, 2013).
Several randomized control trials and subsequent meta-analyses have shown that home hazard assessments reduce the rate of falls by nearly 20%, therefore making them recommended components of multifactorial fall interventions (Clemson et al., 1996, 2008; Nikolaus, 2003; Campbell et al., 2005; Lord et al., 2006; Anon, 2011, 2012, 2013; Cumming et al., 1999; Day et al., 2002). However, effective hazard removal programs are cost-prohibitive from a public health perspective (Gillespie et al., 2012; Clemson et al., 2008; Lord et al., 2006).
The significant time and labor required for in-home assessments performed by a skilled assessor, typically an occupational therapist (OT), and absent reimbursement for home safety services highlight factors impeding the provision of home assessments (Pynoos and Nishita, 2003).
Digital photography offers an opportunity to remotely evaluate the environment for falling hazards. Limited studies suggest that digital photography reliably identifies hazards related to falling (Daniel et al., 2013; Sanford and Butterfield, 2005). These studies relied on OTs to supervise photography training and conduct the digital home evaluation. This report, therefore, investigates the concordance of digitally based home hazard assessments to in-home assessments completed by novice evaluators. By demonstrating that non-therapists can assess photographs for falling hazards, results of this study may encourage more sustainable approaches to home safety programs.

Table 1. Participant demographics.

                      All        Female     Male
Total (n)             10         6          4
Age (mean)            78         78.4       77.2
Fallen in past year   63.8% (7)  57.1% (4)  60% (3)
Fear of falling       18.1% (2)  28.5% (2)  0
Live alone            63.8% (7)  85.7% (6)  20% (1)
Home assistance       18.1% (2)  28.5% (2)  0

Methods

Participants
Primary care physicians (8 physicians in two separate practices) located in the Athens County region of Ohio were approached to help in the recruitment of participants.
These physicians identified patients meeting the following criteria: 65 years or older; live independently at home; no hospitalizations in the prior month; no history of mild cognitive impairment or dementia as determined by their primary care physician. Recruitment occurred between July 2009 and February 2010. Participants were contacted, informed about the study and provided consent to the primary investigator. The study protocol, risks of procedures and consent were reviewed and approved by the Ohio University Institutional Review Board.

Intervention design
A de novo assessment was created from validated home hazard forms but reduced in length to focus on three areas of high risk for falls, in order to determine the feasibility and provide proof of concept that the use of photographs by non-therapists is a reliable method to assess home safety (Fischer et al., 2007; Clemson et al., 1999; La Grow et al., 2006). The final form had a total of 44 items limited to the living room (18), bedroom (18), and staircase (4), as these are high-risk areas for falls (Carter et al., 1997; Clemson et al., 1996; La Grow et al., 2006). Items comprised hazardous conditions associated with entrances, walkways, sitting areas, beds, handrails, steps, and lighting. Each item was scored independently, where "yes" indicated a hazardous condition (HC) was present, "no" indicated a HC was not present, and "not applicable" indicated the HC or location did not exist.
In order to ensure the de novo home hazard instrument was reliable, we first analyzed the agreement between two independent raters completing in-home (IH) assessments on the same home with the de novo home hazard form. Two medical student evaluators without prior home safety, occupational, or physical therapy training were recruited to complete ten (10) independent IH assessments. These assessments were done on the same day, in sequential order, and blinded to each other's findings.
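The three-way item scoring described above can be modeled with a minimal sketch. The room names and item counts follow the 44-item form described in the text; the function and variable names are illustrative assumptions, not part of the study's materials.

```python
# Illustrative model of the 44-item de novo form: 18 living-room items,
# 18 bedroom items and 4 staircase items, each scored "yes" (hazardous
# condition present), "no" (absent) or "na" (not applicable).
FORM_ITEM_COUNTS = {"living room": 18, "bedroom": 18, "staircase": 4}
VALID_SCORES = {"yes", "no", "na"}


def tally_hazards(responses):
    """Count hazardous conditions ("yes" scores) per room.

    `responses` maps each room name to its list of item scores.
    """
    tallies = {}
    for room, scores in responses.items():
        if len(scores) != FORM_ITEM_COUNTS[room]:
            raise ValueError(f"expected {FORM_ITEM_COUNTS[room]} scores for {room}")
        if not set(scores) <= VALID_SCORES:
            raise ValueError(f"unexpected score in {room}")
        tallies[room] = scores.count("yes")
    return tallies
```

Keeping "not applicable" as a distinct score (rather than dropping the item) matters later: it is what turns a 2 × 2 agreement table into the 3 × 3 table mentioned under Statistical analysis.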
Substantial reliability between the two IH assessments was obtained (Kappa = 0.681, p < 0.001) and confirmed that the digital photograph (DP) study could proceed with a reliable home hazard form. The reliability of home hazard identification by non-therapists using DP was evaluated by comparing a DP home assessment by one rater to an IH assessment of the same home by a different rater. Two different medical students, without prior home safety, occupational, or physical therapy experience, used the same de novo assessment form described above for either the DP or the IH assessments. A third non-therapist medical student acted as the photographer and was blinded to the content of the home safety assessment form used by the DP and IH evaluators. A photographer's protocol developed specifically for this study by the primary investigator is described here in brief. The photographer received a succinct (<20 min) training session provided by the primary investigator but otherwise had no other formal training on photography, home hazards or the de novo home assessment. The protocol specified room locations, camera angle, and distance and position to stand from landmarks (i.e. entrances, walls, the bed) to ensure that the rooms were captured in their entirety. One photograph was taken of the room entrance, five for each wall in a room and one for each side of the bed exposed (not touching a wall). On the day of the IH assessment, the photographer first entered the home and completed the photography protocol using a standard, commercially available digital camera. Once the photographer had left the premises, the IH evaluator entered the home and completed the IH assessment. The DP evaluator, blinded to the content of the IH assessment, then completed the DP assessment from the digital photographs once downloaded on a computer.
The IH assessor provided all participants with a home safety checklist and brief education regarding home hazards after the assessment was completed. It took an average of 25 min to complete an IH assessment, 13 min to complete the photographs and 22 min to complete the DP assessment.

Statistical analysis
Power analysis completed prior to the study indicated that ten home assessments yielding 440 variables were required to detect substantial agreement (Kappa > 0.6) with a power of 0.8 between two independent raters (Cohen, 1960). An inter-rater reliability test using a generalized non-weighted Kappa statistic was performed with SPSS, version 18.0, in March 2010. Observed agreement was calculated between each independent rater's responses to the 44 items over the 10 home assessments (for a total of 440 variables). Percentage agreement (PA) was derived by subtracting the percent disagreement obtained from the 2 × 2 or 3 × 3 table established by SPSS from 100.
There was no source of funding, grant or otherwise, which supported this work.

Results
A total of 11 participants were recruited to participate in IH and DP home assessments. One participant declined, leaving 10 available for demographic information (Table 1) and analysis. DP assessments were concordant with the IH assessments on the presence or absence of HC 96.8% of the time. Discordant information was observed in four paired home assessments. In one case, two HC were present in the IH but were marked as "not applicable" in the DP assessment (0.4% of questions). For the other three cases, 10 HC were present in the DP but were absent in the IH assessment (3% of questions).
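As a worked illustration of the agreement statistics described under Statistical analysis, percent agreement and Cohen's (1960) kappa can be computed directly from two raters' item scores. This is a generic sketch, not the study's SPSS procedure; the function names and toy ratings are illustrative only.

```python
from collections import Counter


def percent_agreement(rater_a, rater_b):
    """Percent of items scored identically by the two raters
    (equivalently, 100 minus the percent disagreement)."""
    assert len(rater_a) == len(rater_b)
    matches = sum(a == b for a, b in zip(rater_a, rater_b))
    return 100.0 * matches / len(rater_a)


def cohens_kappa(rater_a, rater_b):
    """Cohen's (1960) kappa: chance-corrected agreement for two raters."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: fraction of items both raters labelled identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected chance agreement from each rater's marginal score frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)
```

Because kappa subtracts the agreement expected by chance, it is always at or below the raw percent agreement; values above 0.6 are conventionally read as substantial agreement, which is how the paper interprets its Kappa = 0.61.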
Photographs captured HC associated with walkways, beds, entrances, or exits with reasonable reliability (Table 2). The bed had the strongest agreement followed by walkways and entrance ways in the bedroom and living room. Moderate reliability was found for sitting areas and staircases but the Kappa value was not significant. Percent agreements for each of these areas were 62.5% and 83%, respectively (Table 2). Informal comments provided by 10 participants to IH assessor indi- cated that the older adults appreciated the assessment of the home and education materials provided. There was no negative feedback suggest- ing that the participants felt reluctant to have strangers into their home to either take photographs or complete a home hazard assessment. The medical student assessors and photographer also had positive interac- tions with the older adults. Table 2 Inter-rater reliability and percent disagreement for the living room, bedroom and items within. Areas Number of questionsa Percent Agreement Kappa p-value Living room 180 73 0.443 0.000 Entrances/exits 100 70 0.379 0.000 Walkways 40 87.5 0.679 0.000 Bedroom 180 78 0.568 0.000 Entrances/exits 100 74 0.461 0.000 Walkways 40 82.5 0.478 0.002 Bed 40 85 0.683 0.000 Boldface indicates statistical significance (p b 0.01) a Reflects the number of questions combined for all 10 assessments 796 K.C. Ritchey et al. / Preventive Medicine Reports 2 (2015) 794–797 Discussion Despite established efficacy, the time, resources, and cost to perform in-home assessments prohibit widespread implementation of home hazard fall prevention programs. (Gillespie et al., 2012; Clemson et al., 2008; Lord et al., 2006) Herein, we report that non-therapists could re- liably conduct a home hazard evaluation using digital photographs. This is the first study of its kind to show that non-skilled assessors can utilize photographs to identify falling hazards. 
Though the primary aim of the study was overall reliability, analysis of individual rooms and sections within the rooms provided useful information regarding variability in photograph sensitivity and highlighted possible strengths and weaknesses of this approach. Agreement was stronger for bedroom HC compared to living room HC. Though assessment form questions and directions were similar for these locations, bedroom assessments included 3–4 more photographs capturing the area around the bed. This additional set of photographs could have increased the sensitivity of DP assessments, as indicated by the greater reliability of the bedroom assessments and by the observation that DP evaluations captured more HCs in 30% of cases. On the other hand, entrances/exits had the lowest reliability of any item section. A few explanations for this finding are the limited number of pictures (1) per entrance/exit, the limited ability of photographs to distinguish small items (i.e. light switches, thresholds) assessed in this location, or poor conditions (i.e. low light) diminishing the quality and sensitivity of photographs. Though beyond the scope of this project, a digital home hazard assessment holds significant potential to support the remote implementation of home hazard prevention programs and could possibly improve upon the current standard of care.
There were some limitations to our study. For one, the small sample size reduced the power necessary to conduct analysis of individual items and identify the aspects of the assessment procedure reducing overall reliability. Secondly, the photographer was not reflective of a community-dwelling older adult; however, given the simplicity of the photograph instructions and the number of photographs required, it is plausible that an older adult or surrogate could produce similar results. Lastly, we investigated the reliability of home hazard assessments as conducted by two non-therapists, omitting a comparison to a skilled assessor.
Therefore, it is possible that our results indicate high concordance between novices but are inaccurate as compared to the gold standard. Studies have shown that non-therapists can identify hazards with consistency comparable to therapists (Day et al., 2002; Pighills et al., 2011). Thus, the study's conclusions retain clinical merit and serve as a foundation for future investigations.

Conclusions
Remote home safety and fall hazard assessments have the potential to transform community-based therapist services and public health interventions supporting aging in place. Results from this study support not only non-therapist use of digital technology as a means of evaluating the home for falling hazards, but also the use of non-skilled assistants as a means of expanding therapist home safety services. Future clinical trials will need to establish the sustainability and clinical efficacy of remote assessments that involve all high-risk locations of the home with varying degrees of therapist, community resident or older adult involvement. These studies would need to address how reliable assessments completed by novices (i.e. older adults or confidants) are, and how efficacious they are at reducing falls or encouraging older adult adherence to technology-based home safety modifications. Though further refinement remains before digital evaluations replace in-home hazard assessments, there is strong evidence that mobile technologies utilized by non-clinicians can improve the provision of this much needed fall prevention service.

Financial disclosure
This study did not receive any financial support or funding.

Author contributions
KCR and GHI were responsible for the study concept and design. DM significantly contributed to study design. KCR and GHI were responsible for data analysis. All authors were responsible for interpretation of results, writing and editing of this manuscript.
The content of this article has not been previously published elsewhere. No financial disclosures were reported by the authors of this paper.

Conflict of interest statement
Katherine C. Ritchey: I had no conflict of interest or financial interest to disclose in relation to this manuscript.
Deborah Meyer: I had no conflict of interest or financial interest to disclose in relation to this manuscript.
Gillian H. Ice: I had no conflict of interest or financial interest to disclose in relation to this manuscript.

Acknowledgments
The authors would like to thank the physicians at University Medical Associates, Athens, Ohio who supported the recruitment of participants for the study. The authors are indebted to the contributions of the many medical students who volunteered their time completing the photography and home assessments. We would also like to thank Dr. Wayne Carlson and Dr. Victor Heh, who provided additional support in the study design and mentorship of the primary author. Finally, we would like to note special recognition of Dr. Alvin Matsumoto, Dr. Elizabeth Phelan and Dr. Mark Hanke, who provided manuscript guidance and review.

References
Anon, 2011. Summary of the updated American Geriatrics Society/British Geriatrics Society clinical practice guidelines for prevention of falls in older adults. J. Am. Geriatr. Soc. 59 (1), 148–157.
Anon, 2012. Prevention of falls in community-dwelling older adults: USPSTF.
Anon, 2013. Falls: assessment and prevention of falls in older adults. National Institute of Clinical Excellence, London, June 2013.
Campbell, A., Robertson, M., La Grow, S., et al., 2005. Randomized controlled trial of prevention of falls in people aged >75 with severe visual impairment: the VIP trial. BMJ 331 (7520), 817–825.
Carter, S., Campbell, E., Sanson-Fisher, R., Redman, S., Gillespie, W., 1997. Environmental hazards in the homes of older people. Age Ageing 26, 195–202.
Centers for Disease Control and Prevention NCfIPaC, 2010, 2014. Web-based Injury Statistics Query and Reporting System (WISQARS).
Clemson, L., Cumming, R., Roland, M., 1996. Case–control study of hazards in the home and risk of falls and hip fractures. Age Ageing 25 (2), 97–101.
Clemson, L., Fitzgerald, M., Heard, R., Cumming, R., 1999. Inter-rater reliability of a home fall hazard assessment tool. Occup. Ther. J. Res. 19 (2), 171–179.
Clemson, L., Mackenzie, L., Ballinger, C., Close, J., Cumming, R., 2008. Environmental interventions to prevent falls in community-dwelling older people: a meta-analysis of randomized trials. J. Aging Health 20 (8), 954–971.
Cohen, J., 1960. A coefficient of agreement for nominal scales. Educ. Psychol. Meas. 20, 37–46.
Cumming, R., Thomas, M., Szonyi, G., et al., 1999. Home visits by an occupational therapist for assessment and modification of environmental hazards: a randomized trial of falls prevention. J. Am. Geriatr. Soc. 47 (12), 1397–1402.
Daniel, H., Oesch, P., Stuck, A., Born, S., Bachmann, S., Schoenengerger, A., 2013. Evaluation of a novel photography-based home assessment protocol for identification of environmental risk factors for falls in elderly persons. Swiss Med. Wkly. 12 (143).
Day, L., Fildes, B., Gordon, I., Fitzharris, M., Flamer, H., Lord, S., 2002. A randomized factorial trial of falls prevention among older people living in their own homes. BMJ 325, 128–133.
Fischer, G., Baker, A., Koval, D., Lishok, C., Maisto, E., 2007. A field test of the cougar home safety assessment (version 2.0) in the home of older persons living alone. Aust. Occup. Ther. J. 54 (2), 124–130.
Gill, T., Williams, C., Robison, J., Tinetti, M., 1999. A population-based study of environmental hazards in the homes of older persons. Am. J. Public Health 89 (4), 553–556.
Gillespie, L., Robertson, M., Gillespie, W., Sherrington, C., Gates, S., Clemson, L., 2012. Interventions for preventing falls in older people living in the community. Cochrane Database Syst. Rev. 12 (9).
La Grow, S., Robertson, M., Campbell, A., Clarke, G., Kerse, N., 2006. Reducing hazard related falls in people 75 years and older with significant visual impairment: how did a successful program work? Inj. Prev. 12, 296–301.
Lord, S., Menz, H., Sherrington, C., 2006. Home environment risk factors for falls in older people and the efficacy of home modifications. Age Ageing 35 (2), ii55–ii59.
Nikolaus, T.B.M., 2003. Preventing falls in community-dwelling frail older people using a Home Intervention Team (HIT): results from the randomized falls—HIT trial. J. Am. Geriatr. Soc. 51 (3), 300–305.
Pighills, A., Torgerson, D., Sheldon, T., Drummond, A., Bland, J., 2011. Environmental assessment and modification to prevent falls in older people. J. Am. Geriatr. Soc. 59 (1), 26–33.
Pynoos, J., Nishita, C., 2003. The cost and financing of home modifications in the United States. J. Disabil. Policy Stud. 14, 68–73.
Robertson, M.C., Gillespie, L.D., 2013. Fall prevention in community-dwelling older adults. JAMA 309 (13), 1406–1407.
Sanford, J., Butterfield, T., 2005. Using remote assessment to provide home modification services to underserved elders. The Gerontologist 45 (3), 389–398.
Stevens, M., Holman, C., Bennett, N., 2001. Preventing falls in older people: impact of an intervention to reduce environmental hazards in the home. J. Am. Geriatr. Soc. 49 (11), 1442–1447.
Stevens, J., Mahoney, J., Ehrenreich, H., 2014. Circumstances and outcomes of falls among high risk community-dwelling older adults. Inj. Epidemiol. 1 (5), 1–9.
Wyman, J., Croghan, C., Nachreiner, N., et al., 2007. Effectiveness of education and individualized counseling in reducing environmental hazards in the homes of community-dwelling older women. J. Am. Geriatr. Soc. 55 (10), 1548–1556.
http://refhub.elsevier.com/S2211-3355(15)00127-8/rf0035 http://refhub.elsevier.com/S2211-3355(15)00127-8/rf0070 http://refhub.elsevier.com/S2211-3355(15)00127-8/rf0070 http://refhub.elsevier.com/S2211-3355(15)00127-8/rf0025 http://refhub.elsevier.com/S2211-3355(15)00127-8/rf0025 http://refhub.elsevier.com/S2211-3355(15)00127-8/rf0025 http://refhub.elsevier.com/S2211-3355(15)00127-8/rf0020 http://refhub.elsevier.com/S2211-3355(15)00127-8/rf0020 http://refhub.elsevier.com/S2211-3355(15)00127-8/rf0015 http://refhub.elsevier.com/S2211-3355(15)00127-8/rf0015 http://refhub.elsevier.com/S2211-3355(15)00127-8/rf0015 Non-�therapist identification of falling hazards in older adult homes using digital photography Introduction Methods Participants Intervention design Statistical analysis Results Discussion Conclusions Financial disclosure Author contributions Conflict of interest statement Acknowledgments References work_omjmhksrxbgvtbegl5bn67skzu ---- Ugyldig lenke til dokument i vitenarkiv | Unit Hopp til hovedinnhold Direktoratet for IKT og fellestjenester i høyere utdanning og forskning Tjenester Handlingsplan for digitalisering Digitalisering Om Unit Søk Meny Digitalisering Digitaliseringsstyret Handlingsplan for digitalisering Fagutvalgene Tjenesterådene Tjenester Utdanningstjenester Forskningstjenester Administrative tjenester Bibliotekstjenester Digitalt læringsmijø Generelle verktøy Vårt arbeid Prosjekter Prosjektrammeverk Styringsmodell for informasjonssikkerhet i høyere utdanning og forskning Open tilgang (open access) Klagenemnder Om Unit Organisering Ansatte Jobb i Unit Units vedtekter Offentlig journal Årsrapportar og tildelingsbrev Personvern og informasjonskapsler Generelle tjeneste- og leveransebetingelser Dette er de vi er til for Aktuelt Aktuelt Arrangement Nyhetsbrev Kontakt oss Om Unit Fakturainformasjon Presse English Norsk Ugyldig lenke til dokument i vitenarkiv Brage - åpent vitenarkiv Sist endret: 24.04.2020 Del: Share to LinkedIn Share to Facebook 
BMC Ophthalmology. Research article (Open Access).

Diagonal ear lobe crease in diabetic south Indian population: Is it associated with Diabetic Retinopathy? Sankara Nethralaya Diabetic Retinopathy Epidemiology And Molecular-genetics Study (SN-DREAMS, Report no.
3)

Rajiv Raman1, Padmaja Kumari Rani1, Vaitheeswaran Kulothungan2 and Tarun Sharma*1

Address: 1Shri Bhagwan Mahavir Vitreoretinal Services, 18, College Road, Sankara Nethralaya, Chennai-600 006, Tamil Nadu, India and 2Department of Molecular Genetics, 18, College Road, Sankara Nethralaya, Chennai-600 006, Tamil Nadu, India

Email: Rajiv Raman - rajivpgraman@gmail.com; Padmaja Kumari Rani - rpk111@gmail.com; Vaitheeswaran Kulothungan - drp@snmail.org; Tarun Sharma* - drtaruns@gmail.com

* Corresponding author

Abstract
Background: To report the prevalence of ear lobe crease (ELC), a sign of coronary heart disease, in subjects (more than 40 years old) with diabetes and to examine its association with diabetic retinopathy.
Methods: Subjects were recruited from the Sankara Nethralaya Diabetic Retinopathy Epidemiology And Molecular-genetics Study (SN-DREAMS), a cross-sectional study conducted between 2003 and 2006; the data were analyzed for the 1,414 eligible subjects with diabetes. All patients' fundi were photographed using 45° four-field stereoscopic digital photography. The diagnosis of diabetic retinopathy was based on the modified Klein classification. The presence of ELC was evaluated on physical examination.
Results: The prevalence of ELC among the subjects with diabetes was 59.7%. The ELC group was older, had a longer duration of diabetes, had poorer glycemic control and had a higher socio-economic status than the group without ELC, and these differences were statistically significant. There was no statistically significant difference in the prevalence of diabetic retinopathy between the two groups.
On multivariate analysis for any diabetic retinopathy, the adjusted OR for women was 0.69 (95% CI 0.51-0.93) (p = 0.014); for age >70 years, 0.49 (95% CI 0.26-0.89) (p = 0.024); for increasing duration of diabetes (per year increase), 1.11 (95% CI 1.09-1.14) (p < 0.0001); and for poor glycemic control (per unit increase in glycosylated haemoglobin), 1.26 (95% CI 1.19-1.35) (p < 0.0001). For sight-threatening diabetic retinopathy, no variable was significant on multivariable analysis. In predicting any diabetic retinopathy, the presence of ELC had a sensitivity of 60.4% and a specificity of 40.5%. The area under the ROC curve was 0.50 (95% CI 0.46-0.54) (p = 0.02).
Conclusion: The ELC was observed in nearly 60% of the urban south Indian population. However, the present study does not support the use of ELC as a screening tool for either any diabetic retinopathy or sight-threatening retinopathy.

Published: 29 September 2009. BMC Ophthalmology 2009, 9:11 doi:10.1186/1471-2415-9-11. Received: 3 February 2009; Accepted: 29 September 2009. This article is available from: http://www.biomedcentral.com/1471-2415/9/11. © 2009 Raman et al; licensee BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Background
Coronary heart disease (CHD) is a leading cause of mortality in persons with type 2 diabetes [1].
In the general population, macrovascular disease is the primary pathogenic mechanism causing CHD; in populations with diabetes, however, microvascular disease, in addition to the macrovascular component, may play a role in the development of CHD. Recent evidence from the ARIC (Atherosclerosis Risk In Communities) study suggests that the presence of diabetic retinopathy (microvascular disease) poses a two-fold higher risk of CHD (macrovascular disease) and a three-fold higher risk of fatal CHD; there is therefore a link between macrovascular and microvascular pathogenic mechanisms [2].

The presence of ear lobe crease (ELC) and its association with CHD was first described in 1973 [3]. Blodgett et al found that 75% of CHD cases had an ear lobe crease, compared to 35% of age- and gender-matched controls [4]. Hence, there is a link between ELC and macrovascular disease (CHD), and between microvascular disease and macrovascular disease. What link exists between ELC and microvascular disease, however, is not known. The present study aimed to determine the prevalence of ELC in a south Indian diabetic population and its association with diabetic retinopathy in a population-based study.

Methods
The study design and research methodology of SN-DREAMS 1 are described in detail elsewhere [5]. The sample was stratified based on socio-economic scoring: low (score, 0-14), middle (score, 15-28), and high (score, 29-42) [6]. The score was calculated on the basis of several parameters such as ownership/type of residence (rented or owned), number of rooms in the house, educational status, salary, occupation, material possessions (cycle, TV, audio, car, etc.) and house/land value. Eligible patients, above 40 years, were enumerated using the multistage random sampling method.
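The socio-economic stratification described above is a simple banding of a 0-42 composite score. A minimal sketch of that banding (the function name is ours; the bands are those quoted in the Methods):

```python
def ses_stratum(score: int) -> str:
    """Map a 0-42 composite socio-economic score to a stratum,
    using the bands quoted in the Methods:
    low (0-14), middle (15-28), high (29-42)."""
    if not 0 <= score <= 42:
        raise ValueError("score outside the 0-42 composite range")
    if score <= 14:
        return "low"
    if score <= 28:
        return "middle"
    return "high"

print(ses_stratum(20))  # middle
```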
For all those not previously diagnosed with diabetes, fasting blood glucose estimation was done twice: first, in the field using capillary blood, and second, at the base hospital using the laboratory method (glucose oxidase method) [7]. Patients were considered newly diagnosed with diabetes if the fasting blood glucose level was ≥110 mg/dl on both occasions, as described above [8]. The study was approved by the Institutional Review Board, and informed consent was obtained from all individuals.

Demographic data, socio-economic status, physical activity, risk of sleep apnea, dietary habits, and anthropometric measurements were collected. A detailed medical and ocular history and a comprehensive eye examination, including stereo fundus photographs, were taken at the base hospital. Biochemical investigations (blood sugar, total serum cholesterol, high-density lipoproteins, serum triglycerides, hemoglobin, and glycosylated hemoglobin HbA1c) were conducted at the base hospital in the fasting state.

The presence of a diagonal ear lobe crease was assigned to a person with a crease stretching obliquely from the outer ear canal towards the border of the ear lobe of both ears; examiners were guided in diagnosing the ELC by comparing the features with a standard photograph provided to them.

Diabetic retinopathy was clinically graded using Klein's classification (modified Early Treatment Diabetic Retinopathy Study scales) [9]. This alternative method involves grading all stereoscopic standard fields as a whole and assigning a level of severity for the eye according to the greatest degree of retinopathy, using a modified Airlie House Classification scheme. Retinal photographs were taken after pupillary dilatation (Carl Zeiss fundus camera, Visucamlite, Jena, Germany); all patients underwent 45° four-field stereoscopic digital photography (posterior pole, nasal, superior, and inferior fields).
All photographs were graded by two independent observers in a masked fashion; the grading agreement was high (k = 0.83) [5]. Sight-threatening diabetic retinopathy was defined as the presence of severe non-proliferative diabetic retinopathy, proliferative diabetic retinopathy or clinically significant macular edema [10].

Of the 5,999 subjects enumerated, 5,778 (96.32%) responded for the first fasting blood sugar estimation. Of the 5,778 subjects, 1,816 individuals (1,349 with a known history of diabetes and 467 with provisionally diagnosed diabetes) were invited to visit the base hospital. Provisionally diagnosed diabetes was defined as a new asymptomatic individual with a first fasting blood glucose level ≥110 mg/dl. Of the 1,563 (85.60%) who responded, 138 were excluded: two did not meet the age criterion, and in 136 the second fasting blood sugar was <110 mg/dl. An additional 11 individuals were excluded because their digital fundus photographs were of poor quality, making them ungradable for further analysis. Thus, a total of 1,414 individuals were analyzed for the study.

Results
Of the 1,414 subjects analyzed for diabetic retinopathy, the mean age was 56.3 ± 10 years; 750 (53.04%) were men and 664 (46.96%) were women. The prevalence of ELC in the diabetic population was 844 (59.7%) (95% CI 57.1-62.2); ELC was evident in both ears in all subjects. Diabetic retinopathy was seen in 255 (18.03%) (95% CI 16.06-20.13) subjects.
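The inter-grader agreement quoted in the Methods (k = 0.83) is a Cohen's kappa statistic: observed agreement corrected for the agreement expected by chance. A minimal sketch of its calculation from two graders' labels (the labels here are synthetic, not the study's gradings):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters labelling the same items:
    (observed agreement - chance agreement) / (1 - chance agreement)."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    classes = set(rater_a) | set(rater_b)
    expected = sum(freq_a[c] * freq_b[c] for c in classes) / n ** 2
    return (observed - expected) / (1 - expected)

# Synthetic example: two graders agree on 3 of 4 films.
print(cohens_kappa([1, 1, 0, 0], [1, 0, 0, 0]))  # 0.5
```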
No differ- ences were observed between the two groups with regard to gender, smoking and alcohol, diabetic retinopathy, BMI, Blood pressure, serum lipids and microalbuminuria. Table 2 shows the results of logistic regression analysis, keeping diabetic retinopathy as an outcome variable. For any diabetic retinopathy, the univariate analyses identi- fied several associated factors: women OR 0.64 (95% CI 0.49-0.85) (p = 0.002), for 50-59 years, OR was 1.7 (95% CI 1.18-2.44) (p = 0.012), for 60-69 years, OR was 1.7 (95% CI 1.13-2.45) (p = 0.010), increasing duration of diabetes per year OR 1.1 (95% CI 1.09-1.13) (p < 0.0001), and poor glycemic control (per unit increase in glyco- sylated heamoglobin) OR 1.3 (95% CI 1.21-1.36) (p < 0.0001). In multivariate analysis, the adjustments were done for age, gender, duration of diabetes, serum lipids, blood pressure, socio-economic status and glycemic con- trol. From the multivariate analysis, the adjusted OR for women was 0.69 (95% CI 0.51-0.93) (p = 0.014); for age >70 years, 0.49 (95% CI 0.26-0.89) (p = 0.024); for increasing duration of diabetes (per year increase), 1.11(95% CI 1.09-1.14) (p < 0.0001); for presence of ELC, 0.88 (95% CI 0.65-1.19) (p = 0.422) and for poor glycemic control (per unit increase in glycosylated heamo- globin), 1.26 (95% CI 1.19-1.35) (p < 0.0001). For sight-threatening diabetic retinopathy, the univariate analysis revealed the following risk factors: presence of ear lobe crease OR was 2.02 (95% CI 1.02-4.73) (p = 0.04) and the OR for the increasing duration of diabetes (per year) was 1.05 (95% CI 1.00-1.10) (p = 0.045). No varia- ble was significant on the multivariable analysis. The adjusted OR for ELC was 2.00 (95% CI 0.91-4.35) (p = 0.086). Compared to the gold standard — photographic classifi- cation — in diagnosing sight-threatening diabetic retin- opathy, the presence of ELC had a sensitivity of 75%, and specificity of 42.3%. The positive predictive value was only 19.8%. 
The area under the ROC curve was 0.59 (95% CI 0.50-0.68) (p = 0.047). In predicting any diabetic retinopathy, the presence of ELC had a sensitivity of 60.4% and a specificity of 40.5%. The area under the ROC curve was 0.50 (95% CI 0.46-0.54) (p = 0.02).

Table 1: Clinical and laboratory characteristics in the ELC group versus the non-ELC group

Characteristic                       ELC present (n = 844)   ELC absent (n = 570)   P value
Age (years)                          57.39 ± 10.13           54.75 ± 9.65           <0.0001
Gender: male                         463 (54.9)              287 (50.4)             0.1076
Gender: female                       381 (45.1)              283 (49.6)             0.1076
Smoking                              169 (20.0)              108 (18.9)             0.6576
Alcohol                              175 (20.73)             135 (23.68)            0.2114
Retinopathy: no DR                   690 (81.8)              469 (82.3)             0.8654
Retinopathy: any DR                  154 (18.2)              101 (17.71)            0.8657
Retinopathy: NST                     124 (14.7)              91 (16.0)              0.5538
Retinopathy: ST                      30 (3.6)                10 (1.8)               0.0680
Duration of DM (months)              79.90 ± 76.66           70.86 ± 73.91          0.0275
BMI (kg/m2)                          25.48 ± 4.16            25.22 ± 3.98           0.2410
BP systolic (mmHg)                   139.51 ± 21.13          138.57 ± 20.27         0.4044
BP diastolic (mmHg)                  81.98 ± 11.33           81.93 ± 11.44          0.9354
Glycosylated Hb                      8.34 ± 2.25             7.99 ± 2.11            0.0033
Serum cholesterol (mg/dL)            186.77 ± 39.77          186.16 ± 42.19         0.7826
Serum HDL (mg/dL)                    39.35 ± 9.66            39.06 ± 10.91          0.5994
Ratio HDL/cholesterol                0.200 ± 0.059           0.204 ± 0.064          0.2272
SES: lower                           653 (77.36)             500 (87.71)            <0.0001
SES: upper                           191 (22.6)              70 (12.3)              <0.0001
Microalbuminuria: normal             681 (80.7)              469 (82.3)             0.4914
Microalbuminuria: microalbuminuria   139 (16.5)              87 (15.3)              0.6670
Microalbuminuria: clinical           24 (2.8)                14 (2.5)               0.8612

ELC: ear lobe crease; DR: diabetic retinopathy; NST: non-sight-threatening diabetic retinopathy, including mild and moderate non-proliferative diabetic retinopathy; ST: sight-threatening diabetic retinopathy, including severe non-proliferative diabetic retinopathy, proliferative diabetic retinopathy and diabetic macular edema; DM: diabetes mellitus; BMI: body mass index; BP: blood pressure; Hb: haemoglobin; HDL: high-density lipoprotein; SES: socio-economic status. Values in parentheses are percentages.
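The screening statistics for any diabetic retinopathy follow directly from the 2×2 counts in Table 1 (154 of 255 subjects with any retinopathy had an ELC; 469 of the 1,159 without retinopathy had no ELC), and for a test reported at a single binary threshold the ROC area is simply the average of sensitivity and specificity. A sketch of that arithmetic:

```python
# 2x2 counts taken from Table 1 (any diabetic retinopathy vs ELC).
tp, fn = 154, 101   # any DR: with ELC / without ELC
fp, tn = 690, 469   # no DR:  with ELC / without ELC

sensitivity = tp / (tp + fn)            # 154/255
specificity = tn / (tn + fp)            # 469/1159
ppv = tp / (tp + fp)                    # positive predictive value
auc = (sensitivity + specificity) / 2   # ROC area for a single binary test

print(f"sens={sensitivity:.1%} spec={specificity:.1%} AUC={auc:.2f}")
# sens=60.4% spec=40.5% AUC=0.50
```

The output matches the 60.4% sensitivity, 40.5% specificity and 0.50 AUC reported in the Results for any diabetic retinopathy.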
Discussion
The diagonal ELC has been suggested as a simple marker of vascular disease in the general population, but in populations with diabetes (populations at increased risk of microangiopathy) only limited data are available [11]. The Fremantle diabetes study reported the prevalence of ELC to be 55% in a western Australian population [11]. Our data show that ELC was present in 59.7% of the diabetic population aged over 40 years in the urban south Indian population.

The present study shows that the subjects in the ELC group were older, had a longer duration of diabetes and had poorer glycemic control. Similar observations were made in the Fremantle study [11]. Subjects with ELC had a higher socio-economic status than the group without ELC; this could be an indirect marker of a population at greater risk of coronary artery disease.

With regard to the association between ELC and any diabetic retinopathy, we noted that increasing age, poor glycemic control and increasing duration of diabetes were significant variables in both univariate and multivariate models. Similar observations were made in other population-based studies of diabetic retinopathy [12-15].

Regarding the association between ELC and sight-threatening diabetic retinopathy, the univariate analysis showed that subjects with ELC had almost twice the risk of developing sight-threatening diabetic retinopathy. Possible explanations for this association include loss of elastin, which might be responsible for the ear lobe crease; a similar loss of elastin in retinal blood vessels might account for increased leakage and dilatation. Another speculation is ischemia: focal ischemia of the dermal fat might cause the ear lobe crease, while ischemia in the retina causes sight-threatening changes. However, these speculations need further study.
Further, in multivariate analysis, when the effect of other variables was adjusted for, the association between ELC and sight-threatening diabetic retinopathy was no longer significant. Taking into consideration the low sensitivity and specificity, along with the low positive predictive value, the presence of ELC cannot be used as a screening tool to predict diabetic retinopathy, including sight-threatening diabetic retinopathy.

Table 2: Regression analysis of the effect of various risk factors on diabetic retinopathy and sight-threatening diabetic retinopathy. Values are OR (95% CI), p.

Any diabetic retinopathy
Risk factor                        Univariate                    Multivariate
Gender: women (vs men)             0.64 (0.49-0.85), 0.002       0.69 (0.51-0.93), 0.014
SES: upper (vs lower)              0.90 (0.63-1.29), 0.584       0.72 (0.48-1.07), 0.104
ELC: present (vs absent)           1.036 (0.79-1.37), 0.800      0.88 (0.65-1.19), 0.422
Age 50-59 years (vs 40-49)         1.698 (1.18-2.44), 0.012      1.38 (0.94-2.04), 0.100
Age 60-69 years (vs 40-49)         1.665 (1.13-2.45), 0.010      1.08 (0.71-1.66), 0.710
Age ≥70 years (vs 40-49)           1.120 (0.66-1.88), 0.673      0.49 (0.26-0.89), 0.024
Duration of diabetes (per year)    1.110 (1.09-1.13), <0.0001    1.11 (1.09-1.14), <0.0001
HbA1c (per unit increase)          1.280 (1.21-1.36), <0.0001    1.26 (1.19-1.35), <0.0001

Sight-threatening diabetic retinopathy
Risk factor                        Univariate                    Multivariate
Gender: women (vs men)             0.86 (0.42-1.73), 0.667       0.92 (0.44-1.91), 0.822
SES: upper (vs lower)              1.24 (0.53-2.92), 0.617       1.00 (0.39-2.52), 0.996
ELC: present (vs absent)           2.02 (1.02-4.73), 0.043       2.00 (0.91-4.34), 0.086
Age 50-59 years (vs 40-49)         2.39 (0.76-7.15), 0.135       2.13 (0.66-6.80), 0.204
Age 60-69 years (vs 40-49)         2.86 (0.88-9.24), 0.079       2.43 (0.73-8.08), 0.148
Age ≥70 years (vs 40-49)           3.22 (0.78-13.30), 0.106      2.83 (0.65-12.32), 0.167
Duration of diabetes (per year)    1.050 (1.00-1.10), 0.046      1.04 (0.99-1.09), 0.123
HbA1c (per unit increase)          1.060 (0.93-1.22), 0.383      1.07 (0.93-1.25), 0.343

OR: odds ratio; SES: socio-economic status; CI: confidence interval; HbA1c: glycosylated haemoglobin.

The strength of this study was its use of photography and standard grading techniques for diabetic retinopathy. Further, the study was representative of a large population, so the results can be extrapolated to the whole of urban India. One limitation of the study was that the non-diabetic population was not studied with regard to ELC; it would have been interesting to compare the findings with subjects without diabetes. A second limitation was that the ear lobe crease was not graded; recent evidence has pointed to a relationship between increasing grades of ELC and increasing severity of coronary artery disease [16]. Thirdly, the sample in the sight-threatening diabetic retinopathy subgroup was small, and thus the power of this subgroup analysis was low.

Conclusion
ELC is present in nearly 60% of the urban south Indian population with diabetes aged above 40 years. The presence of ELC is somewhat related to sight-threatening diabetic retinopathy on univariate analysis.

Competing interests
The authors declare that they have no competing interests.

Authors' contributions
RR carried out the clinical evaluation in the study and wrote the manuscript. PKR participated in the design of the study, VK performed the statistical analysis, and TS conceived the study and participated in its design and coordination. All authors read and approved the final manuscript.

Acknowledgements
RD TATA Trust, Mumbai, India, for funding the study.

References
1.
Ghaffar A, Reddy KS, Singhi M: Burden of non-communicable diseases in South Asia. BMJ 2004, 328:807-10.
2. Klein R, Sharrett AR, Klein BE, Moss SE, Folsom AR, Wong TY, Brancati FL, Hubbard LD, Couper D, ARIC Group: The association of atherosclerosis, vascular risk factors, and retinopathy in adults with diabetes: the atherosclerosis risk in communities study. Ophthalmology 2002, 109:1225-34.
3. Frank ST: Aural sign of coronary-artery disease. N Engl J Med 1973, 289:327-8.
4. Blodgett G: The presence of a diagonal ear-lobe crease as an indicator of coronary artery disease. Thesis, University of Utah, Salt Lake City; 1983.
5. Agarwal S, Raman R, Paul PG, Rani PK, Uthra S, Gayathree R, McCarty C, Kumaramanickavel G, Sharma T: Sankara Nethralaya-Diabetic Retinopathy Epidemiology and Molecular Genetic Study (SN-DREAMS 1): study design and research methodology. Ophthalmic Epidemiol 2005, 12:143-53.
6. Oakes JM, Rossi PH: The measurement of SES in health research: current practice and steps towards a new approach. Soc Sci Med 2003, 56:769-784.
7. Expert Committee on the Diagnosis and Classification of Diabetes Mellitus: Report of the Expert Committee on the Diagnosis and Classification of Diabetes Mellitus. Diabetes Care 2003, 26(suppl):S5-20.
8. Sharma T: How necessary is the second estimation of fasting plasma glucose level by laboratory venous blood to diagnose Type 2 diabetes, particularly in epidemiological studies? Ophthalmic Epidemiol 2006, 13:281-2.
9. Klein R, Klein BEK, Magli YL, et al.: An alternative method of grading diabetic retinopathy. Ophthalmology 1986, 93:1183-7.
10. Baeza M, Orozco-Beltrán D, Gil-Guillen VF, Pedrera V, Ribera MC, Pertusa S, Merino J: Screening for sight threatening diabetic retinopathy using non-mydriatic retinal camera in a primary care setting: to dilate or not to dilate? Int J Clin Pract 2009, 63(3):433-8.
11. Davis TM, Balme M, Jackson D, Stuccio G, Bruce DG: The diagonal ear lobe crease (Frank's sign) is not associated with coronary artery disease or retinopathy in type 2 diabetes: the Fremantle Diabetes Study. Aust N Z J Med 2000, 30:573-7.
12. Xie XW, Xu L, Jonas JB, Wang YX: Prevalence of diabetic retinopathy among subjects with known diabetes in China: the Beijing Eye Study. Eur J Ophthalmol 2009, 19(1):91-9.
13. Raman R, Rani PK, Reddi Rachepalle S, Gnanamoorthy P, Uthra S, Kumaramanickavel G, Sharma T: Prevalence of diabetic retinopathy in India: Sankara Nethralaya Diabetic Retinopathy Epidemiology and Molecular Genetics Study report 2. Ophthalmology 2009, 116(2):311-8.
14. Klein R, Knudtson MD, Lee KE, Gangnon R, Klein BE: The Wisconsin Epidemiologic Study of Diabetic Retinopathy: XXII. The twenty-five-year progression of retinopathy in persons with type 1 diabetes. Ophthalmology 2008, 115(11):1859-68.
15. Lim A, Stewart J, Chui TY, Lin M, Ray K, Lietman T, Porco T, Jung L, Seiff S, Lin S: Prevalence and risk factors of diabetic retinopathy in a multi-racial underserved population. Ophthalmic Epidemiol 2008, 15(6):402-9.
16. Higuchi Y, Maeda T, Guan JZ, Oyama J, Sugano M, Makino N: Diagonal earlobe crease are associated with shorter telomere in male Japanese patients with metabolic syndrome. Circ J 2009, 73(2):274-9.
Pre-publication history The pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1471-2415/9/11/prepub Page 5 of 5 (page number not for citation purposes) http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&dopt=Abstract&list_uids=15070638 http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&dopt=Abstract&list_uids=15070638 http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&dopt=Abstract&list_uids=12093643 http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&dopt=Abstract&list_uids=12093643 http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&dopt=Abstract&list_uids=12093643 http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&dopt=Abstract&list_uids=4718047 http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&dopt=Abstract&list_uids=16019696 http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&dopt=Abstract&list_uids=16019696 http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&dopt=Abstract&list_uids=12560010 http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&dopt=Abstract&list_uids=12560010 http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&dopt=Abstract&list_uids=12560010 http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&dopt=Abstract&list_uids=12502614 http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&dopt=Abstract&list_uids=12502614 http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&dopt=Abstract&list_uids=16877287 http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&dopt=Abstract&list_uids=16877287 http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&dopt=Abstract&list_uids=16877287 http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&dopt=Abstract&list_uids=3101021 
http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&dopt=Abstract&list_uids=3101021 http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&dopt=Abstract&list_uids=19222628 http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&dopt=Abstract&list_uids=19222628 http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&dopt=Abstract&list_uids=19222628 http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&dopt=Abstract&list_uids=11108067 http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&dopt=Abstract&list_uids=11108067 http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&dopt=Abstract&list_uids=11108067 http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&dopt=Abstract&list_uids=19123155 http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&dopt=Abstract&list_uids=19123155 http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&dopt=Abstract&list_uids=19123155 http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&dopt=Abstract&list_uids=19084275 http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&dopt=Abstract&list_uids=19084275 http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&dopt=Abstract&list_uids=19068374 http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&dopt=Abstract&list_uids=19068374 http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&dopt=Abstract&list_uids=19068374 http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&dopt=Abstract&list_uids=19065433 http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&dopt=Abstract&list_uids=19065433 http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&dopt=Abstract&list_uids=19060421 http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&dopt=Abstract&list_uids=19060421 
work_oncpamj66zgrvj5wnwxajv6yaa ---- Investigation of a novel approach to scoring Giemsa-stained malaria-infected thin blood films. Malaria Journal (BioMed Central, Open Access Methodology). Owen Proudfoot*1, Nathan Drew2, Anja Scholzen3, Sue Xiang3 and Magdalena Plebanski3. Address: 1 Burnet Institute, Austin Hospital campus, Studley Road, Heidelberg, Melbourne, 3084, Australia; 2 Independent computing advisor, Melbourne, Australia; 3 Department of Immunology, Monash University, Melbourne, Australia. Email: Owen Proudfoot* - o.proudfoot@burnet.edu.au; Nathan Drew - nathan.drew@au.transport.bombardier.com; Anja Scholzen - Anja.Scholzen@med.monash.edu.au; Sue Xiang - sue.xiang@med.monash.edu.au; Magdalena Plebanski - magda.plebanski@med.monash.edu.au. * Corresponding author

Abstract: Daily assessment of the percentage of erythrocytes that are infected ('percent-parasitaemia') across a time-course is a necessary step in many experimental studies of malaria, but represents a time-consuming and unpopular task among researchers. The most common method is extensive microscopic examination of Giemsa-stained thin blood films. This study explored a method for the assessment of percent-parasitaemia that does not require extended periods of microscopy and results in a descriptive and permanent record of parasitaemia data that is highly amenable to subsequent 'data-mining'. Digital photography was utilized in conjunction with a basic purpose-written computer programme to test the viability of the concept.
Partial automation of the determination of percent parasitaemia was then explored, resulting in the successful customization of commercially available broad-spectrum image-analysis software towards this aim. Lastly, automated discrimination between infected and uninfected RBCs based on analysis of digital parameters of individual cell images was explored in an effort to completely automate the calculation of an accurate percent-parasitaemia.

Background: When conducting blood-stage malaria trials involving live malaria challenges in mice, frequent assessment of their 'percent parasitaemia' is necessary throughout the duration of the experiment. Similarly, the in vitro expansion of the human malaria parasite Plasmodium falciparum requires frequent monitoring of the percent parasitaemia of cultures [1]. While FACS-based experimental methods have been reported [2,3], the most common method used to assess percent parasitaemia remains extensive microscopic scrutiny of Giemsa-stained thin blood films. Lengthy sessions of microscopy are tedious and uncomfortable, however, so this is not a popular task among researchers. Particularly where new models are being explored by a laboratory, or where future data scrutiny or mining may be required, retention of all thin blood films is desirable. However, the slides can become a storage burden and degrade over time. Additionally, they do not represent a record of the exact microscopic field from which the data were derived at the time of recording. Published: 21 April 2008. Malaria Journal 2008, 7:62. doi:10.1186/1475-2875-7-62. Received: 8 July 2007; Accepted: 21 April 2008. This article is available from: http://www.malariajournal.com/content/7/1/62. © 2008 Proudfoot et al; licensee BioMed Central Ltd.
This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Initially, this study sought to improve the qualitative substance of 'percent parasitaemia' data utilizing a combination of microscopy, digital photography and computer software. A simple programme was written towards this aim in the Visual Basic [4] computer language. Once a digital image of the Giemsa-stained blood film is captured, the programme enables the researcher to 'score' each cell from a comfortable distance on a monitor via mouse clicks, rather than while looking down a microscope. Every decision at the cell level is recorded via a small colour-coded addition to the on-screen image. This approach negates long microscopy sessions and the possibility of cells inadvertently being included twice ('double-counts') in the assessment of percent-parasitaemia. Optional parameters include scoring a lymphocyte count per microscopic field, subdivision of uninfected RBCs based on maturity, and subdivision of malaria-infected RBCs based on parasite stage. Regardless of the parameters chosen, overall percent-parasitaemia is calculated as scoring data are entered.
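The running-tally behaviour described above can be sketched in a few lines. This is an illustrative Python analogue, not the published tool (Plasmoscore was written in Visual Basic), and the category names are invented for the sketch:

```python
# Sketch of a Plasmoscore-style running tally (illustrative; the published
# tool was written in Visual Basic and the category names here are made up).
INFECTED = {"infected RBC", "ring-infected RBC", "trophozoite-infected RBC"}
UNINFECTED = {"uninfected RBC", "reticulocyte"}

class ParasitaemiaScore:
    def __init__(self):
        self.counts = {}  # cell category -> number of scored cells

    def record_click(self, category):
        # One colour-coded on-screen mark corresponds to one increment here.
        self.counts[category] = self.counts.get(category, 0) + 1

    def percent_parasitaemia(self):
        # Infected RBCs as a percentage of all scored RBCs; optional extras
        # such as a lymphocyte count do not enter the denominator.
        infected = sum(self.counts.get(c, 0) for c in INFECTED)
        total = infected + sum(self.counts.get(c, 0) for c in UNINFECTED)
        return 100.0 * infected / total if total else 0.0
```

Because the tally updates per click, the overall figure can be displayed continuously as scoring data are entered, matching the behaviour described in the text.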
Via this approach, the microscopy and scoring need not be performed by the same researcher, and the database created represents a legitimate substitute for the actual blood-film slides, one more amenable to long-term storage and future scrutiny. The programme ('Plasmoscore') is freely available for download [5] as a stand-alone programme.

After development of a simple programme affording a permanent digital record of the precise derivation of parasitaemia data, partial automation of the process via customization of commercially available digital-image analysis software (Image-Pro® [6]) was attempted. A 'macro' combining an automated total cell count for each microscopic field with manual user determination of each infected cell was developed. Lastly, automated discrimination between infected and uninfected cells was attempted in an effort to completely automate the determination of the percent-parasitaemia depicted in digital thin-blood-film images.

Results

Manual scoring of a digital image (Plasmoscore)

The Plasmoscore programme was written in the Visual Basic [4] computer language. Upon execution of the programme, the user is prompted to load a digital image of a Giemsa-stained thin blood film. The user then selects which of a series of cell types (infected RBC, uninfected RBC, ring-infected RBC, etc) they will score, and clicks every cell in the image that falls into this category. Each decision made by the user is recorded via a small colour-coded mark superimposed onto the image (Figures 1a and 1b). After scoring all desired cell types, a total 'percent parasitaemia' is calculated, and this and the scored image can be saved as a permanent record.

Figure 1. Screen-shots of Giemsa-stained thin films of malaria-infected murine and human blood scored using the freely available 'Plasmoscore' software. a. A scored image from a C57BL/6 mouse infected with Plasmodium yoelii YM, captured at 1000× magnification. Green arrows indicate uninfected RBCs, and red arrows indicate infected RBCs. In this example the user has opted not to discriminate between the different stages of infected RBC. b. A scored image of a human RBC culture experimentally infected with P. falciparum, captured at 600× magnification. In this example the user has chosen to specifically determine the number of trophozoites present (blue arrows), and infected cells at any other stage of development have simply been scored as 'infected RBC' (red arrows).

Semi-automated scoring of a digital image (Image-Pro®)

A commercially available image-analysis software package, 'Image-Pro' [6], was explored as a vehicle for accurate automation of a total cell count. A macro imposing a standard series of manipulations on the loaded digital images, including hue, sharpness, contrast and background-flattening adjustments, was generated. This series of adjustments yields an image from which the software can automatically determine an accurate count of the total number of cells depicted (Figure 2a). Boundaries are then superimposed around each individual cell (Figure 2b). At this point the user can scrutinize the automated total cell count (utilizing a zoom function if necessary) and correct any inaccuracies via a mouse click (Figure 3a). Once an accurate total cell count has been determined, the user clicks on individual infected cells in the blood film. The zoom function can also be utilized at this point to afford closer examination of cells whose infection status is ambiguous (Figure 3b). The percent parasitaemia of the film is thus calculated semi-automatically, via an analysis sequence safeguarded by user scrutiny at each crucial stage.
All data, including the total cell count before and after correction, the infected cell count, the percent parasitaemia and the associated image files, are automatically exported and saved to a running Excel file.

The reliability and reproducibility of percent-parasitaemia data obtained via this method were formally assessed using archival blood films from mice that had been experimentally infected with malaria and assessed via the traditional method. In an effort to gauge the validity of semi-automated parasitaemia determination as compared to entirely manual percent-parasitaemia assessment, semi-automated parasitaemia scores were compared with those derived from entirely manual assessment of the same blood film by a different researcher (Figure 4a). Internal (sample) reproducibility of the method was gauged by comparing semi-automated percent-parasitaemias derived from images of different areas of the same slide, assessed by the same researcher (Figure 4b).

Figure 2. Screen shots of data derived from a P. yoelii-infected thin blood film before and after partially automated determination of percent-parasitaemia. a. A section of a blood film derived from a P. yoelii YM-infected C57BL/6 mouse. b. The same section after automatic total cell count, user correction thereof and parasitaemia scoring; blue crosses indicate where the user has added cells to the total cell count, and green dots indicate where the user has subtracted cells. Yellow circles indicate cells that the user has deemed infected.

Figure 3. Screen shots depicting utilisation of the mouse-wheel-linked 'enhanced magnification' (zoom) function to aid accurate scoring. a. The user has 'zoomed in' on two potentially agglutinated cells to an ultimate magnification of 16000×, to confirm that the software has accurately identified them as two separate cells. b. The user has zoomed in on a cell whose infection status is ambiguous under standard magnification, and confirmed that it is infected.

Figure 4. Comparison of manually and semi-automatically scored malaria-infected blood films. a. Scores yielded by semi-automated percent-parasitaemia determination plotted against scores derived from the same slide via manual scoring by a different researcher (correlation = 0.96). b. Semi-automated scores of two images derived from different areas of the same slide, analysed by the same researcher, plotted against each other (correlation = 0.99).

Completely automated scoring of a digital image

Attempts were made to identify a combination of parameters of isolated objects (in this case 'infected cells'), including size, internal contrast and brightness, in order to automatically discriminate infected from uninfected RBCs. Based on such parameters, cells were 'ranked' and organized into a grid via a macro, according to the likelihood that they were infected. While 'obviously infected' and 'obviously uninfected' cells were accurately discriminated via this approach, the interphase of the ranked grid contained substantial overlap (Figure 5), necessitating substantial user correction.

Discussion and conclusion

There are several advantages conferred by using digital-image-based methods of scoring thin blood films for percent-parasitaemia over the traditional method. Foremost, large numbers of films can be scored without extended hours of microscopy.
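The scorer-agreement analysis summarized in Figure 4 reduces to a Pearson correlation over paired percent-parasitaemia scores. A self-contained sketch, using illustrative paired values rather than the study's measurements:

```python
import math

def pearson(xs, ys):
    # Pearson correlation coefficient between two paired score lists.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Illustrative paired percent-parasitaemia scores from two scoring methods
# (made-up numbers, not the data behind Figure 4).
manual = [2.1, 5.4, 11.0, 19.8, 33.5]
semi_auto = [2.4, 5.1, 11.9, 18.7, 34.2]
```

A coefficient near 1.0, as reported in Figure 4 (0.96 between scorers, 0.99 within a slide), indicates close agreement between the two scoring routes.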
Automating the total cell count via the above-described method substantially reduces the time taken to score each film; if parasitaemia appears low (<50%) then the user need only score infected cells, and if it appears high the user need only score uninfected cells, to derive the overall percent-parasitaemia. The graphic nature of the permanent records generated enables data of interest to be perused by multiple researchers simultaneously, emailed to distant researchers or retrospectively 'mined' for additional details such as differential parasite-stage analysis and lymphocyte counts. Unlike an actual blood film, the digital image does not degrade over time and is not an encumbrance to store or transport.

Monetary considerations limit the utilisation of this approach. The most obvious is the requirement for a high-resolution, wide-lens, microscope-mounted digital camera. At the operational level, if the camera used cannot capture at least 400 cells per image, multiple images per blood film will be required, and the programme will need to combine user input appropriately to derive a valid percent parasitaemia per sample. Commercial software was used to enable an automatic cell count, which would represent a financial limitation for many laboratories. Notably, there are various free 'open-source' software projects in a constant state of development, such as NIH-image [7] and Image-J [8], which may prove useful to this end in the future. While accurate automated determination of the total cell count of Giemsa-stained thin-blood-film images was ultimately achievable via commercial software [6], its success was affected by the uniformity with which thin blood films were generated, processed and photographed. In particular, thickly spread, over-fixed or over-stained blood films were not accurately processed by automated cell-counting software.
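As a rough open-source analogue of the automated total-cell-count step (the study used the commercial Image-Pro package), the core idea can be sketched with a fixed grayscale threshold plus 4-connected component labelling. This stdlib-only toy omits the hue, contrast and background-flattening adjustments the macro applies first, which real blood-film images would need:

```python
# Toy connected-component count over a thresholded grayscale image, standing
# in for an automated total cell count (illustrative only; the study used
# commercial software, and real films require preprocessing first).
def count_cells(image, threshold=128):
    """image: 2D list of grayscale values (0-255). Returns the number of
    connected dark regions (4-connectivity), each taken to be one cell."""
    rows, cols = len(image), len(image[0])
    seen = [[False] * cols for _ in range(rows)]
    cells = 0
    for r in range(rows):
        for c in range(cols):
            if image[r][c] < threshold and not seen[r][c]:
                cells += 1
                stack = [(r, c)]  # flood-fill one cell region
                while stack:
                    y, x = stack.pop()
                    if 0 <= y < rows and 0 <= x < cols \
                            and image[y][x] < threshold and not seen[y][x]:
                        seen[y][x] = True
                        stack.extend([(y + 1, x), (y - 1, x),
                                      (y, x + 1), (y, x - 1)])
    return cells
```

Free tools such as ImageJ implement far more robust versions of this step (watershed splitting of touching cells, size filtering), which is why the authors point to them as possible future substitutes for commercial software.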
The ultimate aim of completely automated scoring of malaria-infected thin-blood-film images was explored by the authors, but remains elusive. Attempts were made to utilize parameters such as background flattening, internal contrast and brightness of isolated objects (in this case 'cells') to discriminate infected from uninfected cells. Various factors, including the presence of lymphocytes and the increased incidence of reticulocytes, hampered these efforts, however, resulting in a requirement for extensive 'user correction'. The automated generation of a graphic grid 'ranking and grouping' cells based on a combination of criteria (predicting the likelihood that cells were infected) was achieved, which could substantially reduce the user time required to score cells as infected or uninfected. It is anticipated that digital imaging processes will be widely utilized to determine the percent parasitaemia of malaria-infected thin blood films in the future. The extent to which this can be automated will rely on the adaptability of the software utilized. Given the complexity of the task, completely accurate yet entirely automated (no user correction required) determination of percent-parasitaemia seems distant.

Figure 5. The 'interphase' after automatic identification of individual cells and gridding of these, ranked on various features potentially discriminating infected and uninfected cells.
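The ranked-grid idea, ordering segmented cells by a combined score over per-cell features so that a reviewer can concentrate corrections on the ambiguous 'interphase', might be sketched as follows. The feature names and weights are invented for illustration; they are not the criteria used by the published macro:

```python
# Rank segmented cells by an illustrative infection-likelihood score and lay
# them out as a grid for review (weights are made up for this sketch; they
# are not the criteria used by the published Image-Pro macro).
WEIGHTS = {"internal_contrast": 0.5, "size": 0.3, "brightness": -0.2}

def infection_score(cell):
    # Higher internal contrast (a stained parasite inside the cell) and
    # larger size push a cell up the ranking; high brightness pushes it down.
    return sum(w * cell[feature] for feature, w in WEIGHTS.items())

def ranked_grid(cells, columns=4):
    """Sort cells most-to-least likely infected and chunk them into grid
    rows, so review can start at the ambiguous middle of the ranking."""
    ordered = sorted(cells, key=infection_score, reverse=True)
    return [ordered[i:i + columns] for i in range(0, len(ordered), columns)]
```

As the article notes, such a ranking cleanly separates the obvious extremes but leaves overlap in the middle, so it reduces, rather than eliminates, manual correction.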
Competing interests: The authors declare that they have no competing interests.

Authors' contributions: OP: Created the project and conceptualised the idea of automated scoring via software; involved in the generation, integration and testing of computational and laboratory methods at all levels. ND: Principal computing advisor; conceptualised the idea of automated scoring via software; generated the original computer programmes (written in Visual Basic) and wrote all third-party macros for the image-analysis software package Image-Pro®; involved in ongoing testing throughout the project. AS: Generated and tested malaria-infected human blood samples utilizing the methods described herein, providing evidence that the novel scoring method was useful in the context of human malaria research. SX: Principal microscopy advisor; introduced the image-analysis software package Image-Pro® to the project and formed the conceptual idea of using it to generate an automated scoring system. MP: Facilitated the ongoing development of the project from its inception via the provision of practical resources. All authors have read and approved the final manuscript.

Acknowledgements: Because the Vaccine and Infectious Diseases (VID) Laboratory headed by Magdalena Plebanski was already a well-equipped malaria research facility at the time of the project's inception, it was able to be conducted with minimal funding. Notably, the project would also not have been possible without the computing advisor/programmer (Nathan Drew) contributing his services entirely gratis, apparently due to the humanitarian aims of the project.
The costs of publication were covered by Monash University's ongoing subscription to the BMC journal network.

References
1. Trager W, Jensen JB: Human malaria parasites in continuous culture. Science 1976, 193:673-675.
2. Sanchez B, Mota M, Sultan A, Carvalho L: Plasmodium berghei parasite transformed with green fluorescent protein for screening blood schizontocidal agents. Int J Parasitol 2004, 34:485-490.
3. Pattanapanyasat K, Webster H, Udomsangpetch R, Wanachiwanawin W, Yongvanitchit K: Flow cytometric two-color staining technique for simultaneous determination of human erythrocyte membrane antigen and intracellular malarial DNA. Cytometry 1992, 13:182-187.
4. Microsoft: Visual Basic. 2005 [http://msdn2.microsoft.com/en-au/vbasic/default.aspx].
5. Proudfoot O, Drew N: Plasmoscore 1.3. Burnet Institute 2006 [http://www.burnet.edu.au/home/publications/software/plasmoscore].
6. MediaCybernetics: Image-Pro. 2004 [http://www.mediacy.com].
7. Research Services Branch of the National Institute of Mental Health: NIH-image. 2005 [http://rsb.info.nih.gov/nih-image].
8. Rasband W: Image-J. 2006 [http://rsb.info.nih.gov/ij/index.html].
work_ooxtk4mscfadrmo2445ekg5cl4 ---- Photographic Standards for Patients With Facial Palsy and Recommendations by Members of the Sir Charles Bell Society. Katherine B. Santosa, MD; Adel Fattah, MD, PhD; Javier Gavilán, MD; Tessa A. Hadlock, MD; Alison K. Snyder-Warwick, MD

IMPORTANCE There is no widely accepted assessment tool or common language used by clinicians caring for patients with facial palsy, making exchange of information challenging. Standardized photography may represent such a language and is imperative for precise exchange of information and comparison of outcomes in this special patient population.
OBJECTIVES To review the literature to evaluate the use of facial photography in the management of patients with facial palsy and to examine the use of photography in documenting facial nerve function among members of the Sir Charles Bell Society—a group of medical professionals dedicated to care of patients with facial palsy. DESIGN, SETTING, AND PARTICIPANTS A literature search was performed to review photographic standards in patients with facial palsy. In addition, a cross-sectional survey of members of the Sir Charles Bell Society was conducted to examine use of medical photography in documenting facial nerve function. The literature search and analysis was performed in August and September 2015, and the survey was conducted in August and September 2013. MAIN OUTCOMES AND MEASURES The literature review searched EMBASE, CINAHL, and MEDLINE databases from inception of each database through September 2015. Additional studies were identified by scanning references from relevant studies. Only English-language articles were eligible for inclusion. Articles that discussed patients with facial palsy and outlined photographic guidelines for this patient population were included in the study. The survey was disseminated to the Sir Charles Bell Society members in electronic form. It consisted of 10 questions related to facial grading scales, patient-reported outcome measures, other psychological assessment tools, and photographic and videographic recordings. RESULTS In total, 393 articles were identified in the literature search, 7 of which fit the inclusion criteria. Six of the 7 articles discussed or proposed views specific to patients with facial palsy. However, none of the articles specifically focused on photographic standards for the population with facial palsy. Eighty-three of 151 members (55%) of the Sir Charles Bell Society responded to the survey. 
All survey respondents used photographic documentation, but there was variability in which facial expressions were used. Eighty-two percent (68 of 83) used some form of videography. From these data, we propose a set of minimum photographic standards for patients with facial palsy, including the following 10 static views: at rest or repose, small closed-mouth smile, large smile showing teeth, elevation of eyebrows, closure of eyes gently, closure of eyes tightly, puckering of lips, showing bottom teeth, snarling or wrinkling of the nose, and nasal base view.

CONCLUSIONS AND RELEVANCE There is no consensus on photographic standardization to report outcomes for patients with facial palsy. Minimum photographic standards for facial paralysis publications are proposed. Videography of the dynamic movements of these views should also be recorded.

LEVEL OF EVIDENCE NA.

JAMA Facial Plast Surg. 2017;19(4):275-281. doi:10.1001/jamafacial.2016.1883. Published online January 26, 2017. Author Affiliations: Author affiliations are listed at the end of this article. Corresponding Author: Alison K. Snyder-Warwick, MD, Facial Nerve Institute, Division of Plastic and Reconstructive Surgery, Department of Surgery, Washington University School of Medicine, 660 S Euclid Ave, Campus Box 8238, St Louis, MO 63110 (snyderak@wudosis.wustl.edu). © 2017 American Medical Association. All rights reserved. Downloaded by Carnegie Mellon University from www.liebertpub.com at 04/05/21. For personal use only.
To date, facial expression has been found to be the richest source of information about emotions. Paul Ekman1(p3449)

Regarded as a pioneer in the study of emotions and microexpressions, psychologist Paul Ekman1 captures the integral role of facial expressions in recognizing emotions, which allows us to connect with one another on the most basic level. However, for patients with facial palsy, this fundamental visual tool of communication is compromised. The inability to communicate thoughts and interact with others is debilitating and results in substantial psychosocial morbidity in this patient population.2,3

In addition, treatment of facial nerve disorders is challenging. Patients are seen with a diverse array of deficits, and various procedures and interventions exist for management. Given the amount of variability that can exist in how patients with facial palsy are seen and treated, it is imperative that clinicians caring for these patients use a uniform tool to document their outcomes. In essence, a single language is needed to allow meaningful comparison of outcomes.
The use of standardized high-quality medical photography is one method that can improve information exchange and outcome comparison in this patient population. Standards and guidelines for medical photography within various disciplines in plastic surgery, such as hand surgery, body contouring, and cosmetic surgery, have been thoroughly described.4-6 For example, in cosmetic surgery alone, a literature review revealed 79 articles describing photographic standards for various procedures, with most dedicated to standards for patients with rhinoplasty. Moreover, outcomes of facial reanimation procedures, similar to cosmetic surgery cases, are largely based on visual improvements after surgery.3 However, while a few studies7-9 have proposed the need for photographic standards for patients with facial palsy, no studies have addressed this specific topic in detail.

In this study, we performed a literature review of photographic standards among patients with facial palsy and performed a cross-sectional survey of members of the Sir Charles Bell Society (SCBS) (http://www.sircharlesbell.org) to examine the use of medical photography among clinicians devoted to treating patients with facial palsy. The SCBS is an international multidisciplinary group of health care professionals dedicated to progressing the management of facial nerve disorders. Because this group of practitioners focuses on care of the population with facial palsy, the society membership is a valuable group from which to canvass information. From these data, we propose a set of minimum photographic standards for patients with facial palsy and anticipate that they will be widely adopted and ultimately improve outcomes and enhance progress in the management of patients with facial paralysis.
Methods

Institutional review board approval was not required because this study involved a survey of health care professionals and a literature review; it did not involve patients or interventions and did not include research on patients or animals. Patient information and care were not included in the study. The cross-sectional survey was disseminated to SCBS members in electronic form using an online tool. Participation in this web-based survey was voluntary, all responses were anonymous, and informed consent was waived given the minimal risks to participants. Waiver of informed consent was the consensus of multiple participating institutions.

The literature review searched EMBASE, CINAHL, and MEDLINE databases from inception of each database through September 2015. Under the guidance of a medical librarian, this search strategy involved Medical Subject Headings and free-text terms relating to facial palsy and photography (ie, facial paralysis, facial reanimation, facial nerve weakness, facial nerve dysfunction, facial palsy, photography, and standards). Additional studies were identified by scanning references from relevant studies. Only English-language articles were eligible for inclusion. These citations were imported into a computer program (EndNote X7; Clarivate Analytics), and any duplicates were identified and discarded. Articles that were included met the following criteria: they (1) discussed patients with facial palsy and (2) outlined photographic guidelines for this patient population. The lead authors (K.B.S. and A.K.S.-W.) reviewed each study according to these criteria. The literature review and analysis were performed in August and September 2015.

The cross-sectional survey (eFigure 1 in the Supplement) was disseminated to SCBS members in electronic form using an online tool (Qualtrics; http://www.qualtrics.com).
This survey consisted of 10 questions related to facial grading scales, patient-reported outcome measures, other psychological assessment tools, and photographic and videographic recordings. Of these 10 questions, 6 were specifically directed toward the use of photography and videography in the practice of these clinicians. Careful attention to the creation of these questions was taken to avoid bias in responses. Closed single-response questions were used to categorize responses. Where necessary, open-ended "other" categories were used to allow respondents to specify details in free-text form. The survey and analysis were performed in August and September 2013.

Key Points. Question: How do clinicians specializing in facial paralysis use photography and videography to document facial nerve function in this special patient population? Findings: A cross-sectional survey of the Sir Charles Bell Society members, a group of professionals dedicated to caring for patients with facial palsy, revealed that all members used some form of photography to document outcomes; however, there was variability in which facial expressions were used. Meaning: There is no consensus on photographic standardization to report outcomes for patients with facial palsy; therefore, we propose minimum photographic standards for facial paralysis publications to improve exchange of information and comparison of outcomes.
Results
Summary of the Literature Review
In total, the database searches retrieved 435 citations. After review in the computer program, 43 duplicates were identified, leaving 392 citations that fit our search criteria. One additional citation was included after scanning references of relevant articles. After title, abstract, and full-text analysis by the lead authors, 7 studies met the final inclusion criteria. Publication dates ranged from 1979 to 2015. Six of the 7 articles discussed or proposed views specific to patients with facial palsy. However, none of the articles specifically focused on photographic standards for the population with facial palsy. Details of the literature search are listed in Table 1.

Survey Findings
We received responses from 83 of the 151 SCBS members (55%). All survey respondents used still photography. Of all views to capture facial nerve function, the 4 views most commonly used by respondents were at rest or repose (98.8% [82 of 83]), large smile showing teeth (92.8% [77 of 83]), elevation of eyebrows (88.0% [73 of 83]), and closure of eyes gently (88.0% [73 of 83]) (Figure 1A). These views are included in the House-Brackmann Facial Nerve Grading System and the Sunnybrook Facial Grading System, which were the 2 grading systems most commonly used by respondents in our survey. Of all respondents, 60.2% (50 of 83) reported using the House-Brackmann Facial Nerve Grading System to monitor facial nerve function, while 51.8% (43 of 83) reported using the Sunnybrook Facial Grading System.
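The reported rates are simple proportions of the 83 respondents and can be sanity-checked mechanically (for example, 82 of 83 rounds to 98.8%). A minimal sketch, with the counts taken from the Results text:

```python
# Recompute the response rates quoted in the Results from their raw
# counts (n = 83 respondents), rounded to one decimal place.

TOTAL = 83

counts = {
    "at rest or repose": 82,
    "large smile showing teeth": 77,
    "elevation of eyebrows": 73,
    "closure of eyes gently": 73,
    "House-Brackmann users": 50,
    "Sunnybrook users": 43,
}

pct = {view: round(100 * n / TOTAL, 1) for view, n in counts.items()}
```

Running the same check against any survey table is a cheap way to catch transcription slips in percentages.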
The views not specifically mentioned in common grading scales were used less frequently and included closure of eyes tightly (78.3% [65 of 83]), small closed-mouth smile (73.4% [61 of 83]), showing bottom teeth (51.8% [43 of 83]), and nasal base view (18.1% [15 of 83]) (Figure 1B). Puckering of lips and snarling or wrinkling of the nose, which are described in the Sunnybrook Facial Grading System but not in the House-Brackmann Facial Nerve Grading System, were used by 81.9% (68 of 83) and 61.4% (51 of 83) of SCBS members, respectively. Additional free-text views included puffing out cheeks (n = 1) and lateral views (n = 2).
Our group has reported videography use rates among SCBS members in a previous publication.14 In summary, 81.9% (68 of 83) of survey respondents reported using videography, of whom 70.6% (48 of 68) reported documenting the same movements on videography as in still photography.

Discussion
Although photographic standards have been well described and adopted in other highly visual fields of plastic surgery, such as cosmetic surgery, to our knowledge, no universal standards exist for patients with facial palsy or facial reanimation. Our literature search revealed few studies of photographic standards in patients with facial palsy, with only 7 articles describing the importance of medical photography in our patient population. Even among the 6 of 7 articles that propose specific photographic views for patients with facial palsy, there are notable differences in the number and selection of still views (Table 1). For example, the only view that was consistently proposed throughout the 6 articles was the full frontal view at rest or repose.

Table 1. Results From the Literature Review
- Ellis and Gillies,10 1979, "Evaluation of the Paralyzed Face." Views discussed: resting, frowning, squinting, eyebrow lifting, smiling, smiling with lower lip depression, pursing the lips. Summary: The authors describe how clinicians should evaluate the paralyzed face. They assert that standardized photographs serve as medicolegal documentation and will allow the surgeon to select the appropriate management strategy.
- Mitchell,11 1981, "Preoperative and Postoperative Photographic Standards." Views discussed: repose, maximal mimic muscle contraction. Summary: This letter to the editor was written in response to an article on interfascicular facial nerve repair. The author's critique was that the original article inadequately documented the patient's outcome, and the letter argues for standardization of photography in publications for this patient population.
- el-Naggar et al,9 1995, "Life-Size Photograph Transparencies: A Method for the Photographic Detection and Documentation of Recovery From Facial Paralysis." Views discussed: not applicable. Summary: The authors argue that overlapping life-size photographic transparencies provides an objective method to track the progress of a patient with facial palsy.
- Barrs et al,12 2001, "Digital Camera Documentation System for Facial Nerve Outcome Assessment." Views discussed: raising eyebrows, eyes closed tightly, smiling, puckering of lips, wrinkling of nose, drawing corner of mouth down. Summary: The authors describe an easy and efficient method to document facial nerve function by digital photography.
- Henderson et al,6 2005, "Photographic Standards for Facial Plastic Surgery." Views discussed: anterior-posterior, left and right lateral, left and right oblique, smile, pucker, wrinkle nose, close eyes tight, eyebrows raised. Summary: Through a series of photographs, the article aims to provide the reader with guidelines and photographic standards for specific facial plastic and reconstructive procedures (ie, rhinoplasty, mentoplasty, blepharoplasty, rhytidectomy or facial reanimation, browplasty, cheiloplasty, and cleft lip repair).
- Schaaf et al,13 2006, "Standards for Digital Photography in Cranio-Maxillo-Facial Surgery, Part II: Additional Picture Sets and Avoiding Common Mistakes." Views discussed: full frontal view, closed eyes, wrinkling forehead, smiling, front view whistling, blowing out cheeks. Summary: The 2-part publication provides a set of photographic views for different craniofacial procedures. These standardized sets of photographs were approved and adopted by the Council of the European Association for Cranio-Maxillo-Facial Surgeons in 2005.
- Niziol et al,7 2015, "Is There an Ideal Outcome Scoring System for Facial Reanimation Surgery? A Review of Current Methods and Suggestions for Future Publications." Views discussed: repose, eyebrows maximally raised, eyes tightly closed, narrow smile, maximum smile. Summary: This literature review sought an ideal scoring system for facial palsy. In their discussion, the authors note the abundance of grading scales in the literature and propose a "quick and simple system" using a set of standardized still photographs.

Similar to the literature review findings, our survey of SCBS members demonstrated variability in the types of views used by clinicians. Most respondents documented similar static views; however, not all of these views were used by all respondents.
The 4 most common views (ie, at rest or repose, large smile showing teeth, elevation of eyebrows, and closure of eyes gently) are those described in the 2 assessment scales most commonly used by SCBS members, the House-Brackmann Facial Nerve Grading System and the Sunnybrook Facial Grading System. Less frequently obtained views, such as closure of eyes tightly, small closed-mouth smile, showing bottom teeth, and nasal base view, coincidentally are those not described in the 2 common scales. Visual documentation of outcomes is incomplete even among SCBS members, whose practices focus more on the care of patients with facial palsy. In other words, our findings reveal that almost half of the SCBS members are not documenting the function of all branches of the facial nerve. Dysfunction of the marginal mandibular branch commonly occurs in isolation, such as iatrogenically, acquired in association with a mandible fracture, or congenitally (eg, in asymmetric crying facies). Regardless of the mechanism of injury or presentation, testing and documenting the function of this branch is an integral aspect of a comprehensive facial nerve examination. Similarly, another view substantially underused by survey respondents and absent from our literature review was the nasal base view, which has also been discussed by Bhama and colleagues.8 This view is especially important to assess symmetry of the external valve. Nasal obstruction is a common but underrecognized problem among patients with facial palsy, and static or dynamic support of the external nasal valve can improve patients' perception of nasal airflow.15
As demonstrated by our literature review and survey findings, there is substantial heterogeneity and a lack of consensus in photographic documentation of this patient population. Therefore, we propose a standardized set of photographs that should be used in outcomes analysis to improve information exchange and data comparison.
This minimum set consists of the following 10 static views (Figure 2): (1) at rest or repose, (2) small closed-mouth smile, (3) large smile showing teeth, (4) elevation of eyebrows, (5) closure of eyes gently, (6) closure of eyes tightly, (7) puckering of lips, (8) showing bottom teeth, (9) snarling or wrinkling of the nose, and (10) nasal base view. Table 2 lists the different views in greater detail. Another view that can be considered is having patients puff out their cheeks. This view can help evaluate the severity of perioral incompetence in paralyzed faces and severity and tightness in patients with synkinesis (eFigure 2 in the Supplement). Moreover, because many approaches to reanimate the face include nerve substitutions or transfers (eg, masseteric nerve or hypoglossal nerve), it is imperative for clinicians to assess and document the function of any potential donor nerve on physical examination. The function of these common donors will not be captured with the standardized photographic and videographic views discussed in this study.
While these 10 static views are important, we should not underestimate the power of videography in capturing spontaneity of movement, synkinesis, and speech production. We encourage clinicians to record patients performing the views mentioned previously and to have patients recite simple sentences that require the production of bilabial and plosive sounds to assess speech (Video). We recommend using a sentence from the Goldman-Fristoe Test of Articulation 3 (Pearson Education, Inc), which incorporates most motor movements required for articulate speech production. Incorporating speech assessment may provide important assessments of improvement when comparing preintervention and postintervention function.
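Because the proposal is a fixed checklist, it lends itself to a simple completeness check at the point of capture. A minimal sketch, using the 10 view names listed above; the session data and helper function are hypothetical:

```python
# Encode the proposed 10-view minimum photographic set so a capture
# workflow can flag missing views before the patient leaves.

MINIMUM_VIEWS = [
    "at rest or repose",
    "small closed-mouth smile",
    "large smile showing teeth",
    "elevation of eyebrows",
    "closure of eyes gently",
    "closure of eyes tightly",
    "puckering of lips",
    "showing bottom teeth",
    "snarling or wrinkling of the nose",
    "nasal base view",
]

def missing_views(captured):
    """Return the required views absent from a photo session."""
    captured = {v.lower() for v in captured}
    return [v for v in MINIMUM_VIEWS if v not in captured]

# Hypothetical session that skipped the last two required views.
session = MINIMUM_VIEWS[:8]
```

An empty return value would indicate the session satisfies the proposed minimum standard.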
Figure 1. Survey Findings. A, Use of views described in common grading scales (at rest or repose, large smile showing teeth, elevation of eyebrows, closure of eyes gently). B, Use of views not described in common grading scales (closure of eyes tightly, small closed-mouth smile, showing bottom teeth, nasal base view). Bar charts of survey respondents, %. Responses were received from 83 of the 151 Sir Charles Bell Society members (55.0%). All survey respondents used still photography.

Strengths and Limitations
A strength of our study is the inclusion of a cross-sectional survey of SCBS members, an international group of clinicians who specialize in the care of patients with facial palsy. By leveraging the expertise of this group of practitioners, we present a minimum set of photographs and videography that can be easily adopted by others caring for this special patient population. However, there are limitations that remain.
Because of the paucity of literature dedicated to photographic standards for patients with facial palsy, we were not able to conduct a meta-analysis or systematic review, which would provide more robust evidence. In addition, because our study was cross-sectional in design, we capture the practice patterns of clinicians at a given snapshot in time, which in this study was August and September 2013. Since the surveying of these SCBS members in 2013, practice patterns, specifically the use of medical photography and videography, could have evolved. Moreover, while out of the scope of this study, we acknowledge the need for a standardized approach to incorporate medical photography and videography along with patient-reported outcomes for patients who undergo facial reanimation surgery for facial palsy.
The benefits of standardized medical photography and videography in this patient population are several. First, images and video obtained before and after surgery can demonstrate functional deficits or improvements over time, which can provide invaluable feedback to the surgeon and reassurance to the patient. This feedback to the patient with facial palsy is especially critical because improvements in facial nerve function are often subtle and slow progressing. Standardized photographs and videos in tandem with objective scoring and patient-reported outcome measures provide a comprehensive evaluation of the patient's progression over time. Second, medical photography is a useful teaching tool. High-quality images of our proposed 10 static views can help trainees better understand how different functions are related to the intricate branching patterns and course of the facial nerve, which can sometimes be challenging to comprehend. Third, and most important, standardized medical photography and videography are integral to communicating results to other clinicians caring for these patients. Studies involving this patient population are often underpowered and anecdotal because the clinical volume of patients with facial nerve disorders is low. One solution to this problem is to use standardized medical photographs and videos in scientific presentations and publications. It is equally important to adopt standardized videography to monitor the progress of these patients before and after surgery. Because medical photography captures the patient's facial nerve function at a single moment in time, it can occasionally be misleading and should be supported and corroborated by standardized videography.

Figure 2. Facial Palsy Static Views: at rest or repose, small closed-mouth smile, large smile showing teeth, elevation of eyebrows, closure of eyes gently, closure of eyes tightly, puckering of lips, showing bottom teeth, snarling or wrinkling of the nose, and nasal base view. A standardized set of photographs is proposed that should be used in outcomes analysis to improve information exchange and data comparison. This minimum set consists of 10 static views.
While grading of these suggested views and movements was out of the scope of the present article, other groups have successfully developed and validated an electronic, clinician-graded facial function scale, referred to as the eFACE,16 which can be applied to these proposed standards. Furthermore, as a result of the findings of the present study, the SCBS urges practitioners to use these minimum photographic and videographic standards in practice. In addition, we also urge journal editors to adopt these standards for submission of manuscripts related to facial palsy. This requirement would improve exchange of information and may assist with the execution of high-quality, multicenter studies needed in this patient population.8

Conclusions
With currently available techniques, it is difficult to compare outcomes of patients with facial palsy. To bridge this gap, we propose guidelines for photographic and videographic standards for patients with facial palsy consisting of 10 static views and their respective dynamic functions. Scientific presentations and publications related to facial palsy should fulfill these minimum photographic standards to encourage uniformity in depicting outcomes. Standardized medical photography is integral to allow exchange of information and ultimately to optimize the management of this complex and challenging condition.

ARTICLE INFORMATION
Accepted for Publication: September 30, 2016.
Published Online: January 26, 2017.
doi:10.1001/jamafacial.2016.1883
Author Affiliations: Division of Plastic and Reconstructive Surgery, Department of Surgery, Washington University School of Medicine, St Louis, Missouri (Santosa); Facial Nerve Programme, Regional Paediatric Burns and Plastic Surgery Service, Alder Hey Children's National Health Service Foundation Trust, Liverpool, England (Fattah); Department of Otorhinolaryngology, La Paz University Hospital, Madrid, Spain (Gavilán); Facial Nerve Center, Division of Facial and Plastic and Reconstructive Surgery, Department of Otology and Laryngology, Harvard Medical School and Massachusetts Eye and Ear, Boston (Hadlock); Facial Nerve Institute, Division of Plastic and Reconstructive Surgery, Department of Surgery, Washington University School of Medicine, St Louis, Missouri (Snyder-Warwick).

Table 2. Minimum Standard Photographic Views for Patients With Facial Palsy
- At rest or repose. Branches tested: all. Muscles activated: not applicable. Notes: Symmetry at rest is one of the most important views to capture. Specifically, the clinician should evaluate the position of the eyebrows, canthi, and oral commissures from side to side. With regard to the eye in this view, specific attention should be paid to the height of the palpebral aperture and the lower eyelid position. Patients with unilateral facial palsy typically deviate the tip of the nose and the upper and lower lips toward the uninjured or unaffected side. This view can also help assess the patient's resting tone.
- Small closed-mouth smile. Branches tested: zygomatic, buccal. Muscles activated: upper lip levators (major is levator labii superioris).
- Large smile showing teeth. Branches tested: zygomatic, buccal, marginal mandibular. Muscles activated: upper lip levators, levators of the corner of the mouth, depressors of the lower lip (innervated by the marginal mandibular branch). Notes: Damage to the marginal mandibular branch produces little alteration in a smile showing no teeth; this view, however, captures obvious asymmetry of the lower lip depressors.
- Elevation of eyebrows. Branch tested: frontal. Muscle activated: frontalis.
- Closure of eyes gently. Branch tested: frontal. Muscle activated: orbicularis oculi. Notes: Eye closure is a complex action. The upper lid is controlled by 2 synergistic muscles and nerves: the palpebral portion of the orbicularis oculi (innervated by the frontal branch of the facial nerve) and the levator palpebrae (innervated by the superior ramus of the oculomotor nerve).
- Closure of eyes tightly. Branch tested: frontal. Muscles activated: orbicularis oculi, corrugator supercilii.
- Puckering of lips. Branch tested: buccal. Muscles activated: orbicularis oris, buccinator.
- Showing bottom teeth. Branches tested: marginal mandibular, cervical. Muscles activated: depressors of the lower lip (innervated by the marginal mandibular branch), platysma.
- Snarling or wrinkling of the nose. Branches tested: frontal, buccal. Muscles activated: corrugator supercilii, nasalis, depressor septi nasi, levator labii superioris alaeque nasi.
- Nasal base view. Branch tested: buccal. Muscles activated: nasalis, depressor septi nasi, levator labii superioris alaeque nasi. Notes: This view allows an improved assessment of midline structures and symmetry. In addition, external nasal valve symmetry is best assessed using this view.
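Table 2's mapping from views to the facial nerve branches they test can also be used programmatically, for example to flag which branches a photo session leaves undocumented — the gap the Discussion highlights. An illustrative sketch (the at-rest view, which assesses overall static symmetry rather than a specific branch, is omitted from the dynamic mapping; the helper function and session below are hypothetical):

```python
# Map each dynamic view to the facial nerve branches it tests
# (per Table 2), then report undocumented branches for a session.

BRANCHES_TESTED = {
    "small closed-mouth smile": {"zygomatic", "buccal"},
    "large smile showing teeth": {"zygomatic", "buccal", "marginal mandibular"},
    "elevation of eyebrows": {"frontal"},
    "closure of eyes gently": {"frontal"},
    "closure of eyes tightly": {"frontal"},
    "puckering of lips": {"buccal"},
    "showing bottom teeth": {"marginal mandibular", "cervical"},
    "snarling or wrinkling of the nose": {"frontal", "buccal"},
    "nasal base view": {"buccal"},
}

ALL_BRANCHES = {"frontal", "zygomatic", "buccal", "marginal mandibular", "cervical"}

def undocumented_branches(views):
    """Branches whose function no captured view tests."""
    covered = set().union(*(BRANCHES_TESTED.get(v, set()) for v in views))
    return ALL_BRANCHES - covered

# A session limited to some of the survey's most common dynamic views
# leaves the cervical branch entirely untested.
common = ["large smile showing teeth", "elevation of eyebrows",
          "closure of eyes gently"]
```

Capturing the full 10-view set drives the undocumented-branch set to empty, which is the point of the proposed standard.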
Author Contributions: Drs Fattah and Snyder-Warwick had full access to all the data in the study and take responsibility for the integrity of the data and the accuracy of the data analysis.
Study concept and design: Santosa, Fattah, Hadlock, Snyder-Warwick.
Acquisition, analysis, or interpretation of data: All authors.
Drafting of the manuscript: Santosa, Snyder-Warwick.
Critical revision of the manuscript for important intellectual content: Fattah, Gavilán, Hadlock, Snyder-Warwick.
Statistical analysis: Santosa, Snyder-Warwick.
Administrative, technical, or material support: Santosa, Fattah, Hadlock, Snyder-Warwick.
Study supervision: Santosa, Fattah, Snyder-Warwick.
Conflict of Interest Disclosures: None reported.
Funding/Support: Dr Santosa was supported in part by grant F32NS098561 from the National Institute of Neurological Disorders and Stroke, National Institutes of Health.
Role of the Funder/Sponsor: The National Institutes of Health did not have a role in any of the following: design and conduct of the study; collection, management, analysis, or interpretation of the data; preparation, review, or approval of the manuscript; or decision to submit the manuscript for publication.
Disclaimer: The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.
Additional Contributions: Andrew Yee, BS, provided the photographs and video, and Kim Lipsey, MLS, assisted with the literature search. Both are affiliated with Washington University in St Louis and received no compensation outside of their usual salary.
We thank members of the Sir Charles Bell Society for their participation and input. We thank the patient for granting permission to publish her images.

REFERENCES
1. Ekman P. Darwin's contributions to our understanding of emotional expressions. Philos Trans R Soc Lond B Biol Sci. 2009;364(1535):3449-3451.
2. Walker DT, Hallam MJ, Ni Mhurchadha S, McCabe P, Nduka C. The psychosocial impact of facial palsy: our experience in one hundred and twenty six patients. Clin Otolaryngol. 2012;37(6):474-477.
3. Dey JK, Ishii M, Boahene KD, Byrne PJ, Ishii LE. Facial reanimation surgery restores affect display. Otol Neurotol. 2014;35(1):182-187.
4. Wang K, Kowalski EJ, Chung KC. The art and science of photography in hand surgery. J Hand Surg Am. 2014;39(3):580-588.
5. Wong MS, Vinyard WJ. Photographic standards for the massive weight loss patient. Ann Plast Surg. 2014;73(suppl 1):S82-S87.
6. Henderson JL, Larrabee WF Jr, Krieger BD. Photographic standards for facial plastic surgery. Arch Facial Plast Surg. 2005;7(5):331-333.
7. Niziol R, Henry FP, Leckenby JI, Grobbelaar AO. Is there an ideal outcome scoring system for facial reanimation surgery? a review of current methods and suggestions for future publications. J Plast Reconstr Aesthet Surg. 2015;68(4):447-456.
8. Bhama P, Gliklich RE, Weinberg JS, Hadlock TA, Lindsay RW. Optimizing total facial nerve patient management for effective clinical outcomes research. JAMA Facial Plast Surg. 2014;16(1):9-14.
9. el-Naggar M, Rice B, Oswal V. Life-size photograph transparencies: a method for the photographic detection and documentation of recovery from facial paralysis. J Laryngol Otol. 1995;109(8):748-750.
10. Ellis DA, Gillies TM. Evaluation of the paralyzed face. J Otolaryngol. 1979;8(6):473-476.
11. Mitchell SA. Preoperative and postoperative photographic standards [letter]. Arch Otolaryngol. 1981;107(2):130.
12. Barrs DM, Fukushima T, McElveen JT Jr. Digital camera documentation system for facial nerve outcome assessment. Otol Neurotol. 2001;22(6):928-930.
13. Schaaf H, Streckbein P, Ettorre G, Lowry JC, Mommaerts MY, Howaldt HP. Standards for digital photography in cranio-maxillo-facial surgery, part II: additional picture sets and avoiding common mistakes [published correction appears in J Craniomaxillofac Surg. 2006;34(7):443]. J Craniomaxillofac Surg. 2006;34(6):366-377.
14. Fattah AY, Gavilan J, Hadlock TA, et al. Survey of methods of facial palsy documentation in use by members of the Sir Charles Bell Society. Laryngoscope. 2014;124(10):2247-2251.
15. Kayabasoglu G, Nacar A. Secondary improvement in static facial reanimation surgeries: increase of nasal function. J Craniofac Surg. 2015;26(4):e335-e337. doi:10.1097/SCS.0000000000001769
16. Banks CA, Bhama PK, Park J, Hadlock CR, Hadlock TA. Clinician-graded electronic facial paralysis assessment: the eFACE. Plast Reconstr Surg. 2015;136(2):223e-230e. doi:10.1097/PRS.0000000000001447
International Journal of Management Sciences and Business Research, Nov-2018, ISSN (2226-8235), Vol-7, Issue 11 (http://www.ijmsbr.com)
Integrated Enterprise Resilience Architecture Framework for Surviving Strategic Disruptions
Author's Details: (1) Hassan Ahmed Hassan Mohamed, Ph.D. Student, Faculty of Computers and Information, Cairo University; Title: Business Architect. (2) Prof.
Galal Hassan Galal-Eldeen, Faculty of Computers and Information, Cairo University

1 Abstract
Resilient business enterprises are able to survive strategic disruptions, such as technology disruptions, and come back more successful. They succeed because they develop and effectively implement the resilience strategies of mitigation, adaptation, and transformation. This paper proposes an integrated resilience framework based on a combination of enterprise architecture and business architecture frameworks. At the core of the proposed framework are a meta-model and a method. The framework guides the development of a unified vision of how a business enterprise can address a specific strategic disruption and transform itself successfully. The framework articulates the vision through the lens of business blueprint views that guide the formation of transformation initiatives. Through the mapping capabilities of the framework, the transformation initiatives cross the boundaries between organization structures and domains. In the last section, we demonstrate the proposed method and meta-model with the help of a case study.

Key Words: disruption; strategic disruption; resilience; mitigation; adaptation; transformation; operating model; competitive strategy; business model; enterprise architecture; business architecture; capability; value stream; value proposition

2 Introduction
We live in a world of change and disruptions. When they happen, the typical response is, "Who would have thought this would happen?" Whether the economy is strong or weak, competition is fiercer than ever and change comes faster than ever; if a business wants to survive difficult times, it has to prepare itself to make the right shift at the right time in response to disruptions and changes (Bossidy and Charan 2002).
Disruptions can be rooted in new technologies, new disruptive business models, the emergence of new regulatory and market forces, or changes in the availability of resources (Fiksel 2003). Some disruptions are game-changing phenomena that create storms threatening the business enterprises caught in them. These kinds of disruptions are called strategic disruptions (Schwartz and Randall 2007). An example of such a strategic disruption is digital photography technology, which threatened the core businesses of two global enterprises, Fujifilm and Kodak (Komori 2015). Business enterprises going through these storms are not equal in their approaches to dealing with them and end up with different results; some succeed while others fail. For example, Fujifilm succeeded while Kodak failed in facing the digital photography disruption (Komori 2015). EMC succeeded in facing the disruption of new storage technologies and the shift in customer preference toward low-tier, low-cost storage solutions, while Sun Microsystems failed in facing the disruption of the technology bubble burst and the associated shift in customer preference toward open, low-cost solutions (Bossidy and Charan 2002). Most often, business enterprises are able to identify the threat of a strategic disruption. Kodak identified the threat of digital photography long before it materialized but failed to transform its business in response (Komori 2015). In contrast, Fujifilm redesigned itself, leveraging its core competencies and making targeted acquisitions with synergetic or transformational intent (Komori 2015). In the same vein, many current business enterprises see emerging digital technologies, including social, mobile, big data and analytics, IoT, AI, machine learning, cloud computing, and blockchain, as threatening the profitability and even the survivability of their businesses.
They also see that these technologies present opportunities to offer new, compelling value propositions that combine their existing competencies with the capabilities of the new technologies. The difference between successful and unsuccessful enterprises is that successful enterprises build resilience capabilities to prepare for strategic disruptions using resilience strategies (Hamel and Välikangas 2003). A resilience strategy is not concerned with quickly stabilizing business enterprises under small shocks; rather, it is concerned with enabling business enterprises to continuously survive large strategic disruptions in the long term. A resilience strategy is concerned with surviving different strategic disruptions through continuously monitoring, interpreting, and adapting to sustained trends that would otherwise cause business enterprises to permanently lose the profitability and growth of their core businesses (Hamel and Välikangas 2003).

2.1 What are Resilience and Resilience Strategies?

Resilience (with its roots in the Latin word resilio) means to adapt and "bounce back" from a disruptive event (Longstaff, Armstrong, et al. 2010). Similarly, it is the capacity of a system to absorb disturbance, undergo change, and retain the same essential functions, structure, identity, and feedbacks (Holling 1973). Within the resilience view, a system like a business enterprise can exist in one of several basins of attraction, called regimes. The system shifts from one basin of attraction, or regime, to another if it passes the threshold of a controlling variable (Holling 1973).
International Journal of Management Sciences and Business Research, Nov-2018 ISSN (2226-8235) Vol-7, Issue 11 http://www.ijmsbr.com

Figure 1: Basins of attraction

A threshold of a controlling variable is the level or amount of change in that controlling variable that causes a change in critical feedback, causing the system to self-organize along a different trajectory towards a different attractor (Walker and Meyers 2004). Although complex adaptive systems like business enterprises are affected by many variables, they are usually driven by only a handful of key controlling variables (Walker and Meyers 2004). This is an important concept that is used to create and execute strategies to respond to disruptions. For example, if we want to prevent the system from flipping into another regime, we should prevent the thresholds of the system's controlling variables from being crossed. (Folke, Carpenter, et al. 2002) introduced three kinds of resilience strategies: mitigation, adaptation, and transformation. The mitigation strategy is the capacity to initiate counter-forces that keep the controlling variables within their thresholds, or delay crossing these thresholds. This prevents or delays the impactful changes in structure and critical feedback that would cause the system to flip into an alternate, undesirable stability regime (Walker and Meyers 2004). The adaptation strategy represents the capacity to adjust responses to changing external drivers, controlling variables, and internal processes, and thereby allow a return to the current trajectory (stability domain). It takes the system into a temporary recovery state in which adaptive responses work to bring the controlling variables back across their thresholds, return to the current regime, and then move away from those thresholds (Walker and Meyers 2004). The transformation strategy is the capacity of the system to cross thresholds into new development trajectories.
It is, literally, the capacity of the system to transform itself into a different kind of system. The transformation strategy becomes very important when a system is in a stable regime that is considered undesirable, and it is either impossible, or progressively harder, to engineer a 'flip' back to the original regime or to some other regime of that same system. The system will have a different identity (Folke, Carpenter, et al. 2010).

2.2 Problem Definition and Research Objective

The problem that this work addresses is how business enterprises can formulate a resilience strategy and develop and deploy a resilience roadmap when faced with strategic disruptions, in a way that ensures their survivability. Traditional strategic management approaches are not enough to address this problem. This is clear when we look at the difference in results between Fujifilm and Kodak. Both enterprises faced the same disruption, digital photography, which impacted their core film businesses. Both enterprises had been successful in applying traditional strategic management approaches for decades. However, in facing the storm of the digital disruption, Fujifilm responded differently than Kodak. After the storm, Fujifilm became a much more successful company with diversified businesses, ranging from optical devices to radiopharmaceuticals, while Kodak filed for bankruptcy in 2012 (Komori 2015). Fujifilm was a resilient enterprise while Kodak was not. This points clearly to a gap: the lack of a clear resilience approach that stitches together strategies and actions in a way that enables the business enterprise to survive the storm successfully. The goal and contribution of this work is to propose a resilience-based framework (figure 2) for addressing strategic disruptions that can be used independently of other domains, such as strategic management or Enterprise Risk Management, but also in collaboration with these domains.
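The threshold-and-regime mechanics described in section 2.1 can be illustrated with a minimal sketch (the variable names and numbers are hypothetical, chosen only for illustration): a single controlling variable drifts under an external driver, and once it crosses its threshold the system settles into an alternate regime.

```python
# Minimal sketch of threshold-driven regime change (illustrative only;
# real enterprises are driven by a handful of controlling variables,
# here reduced to one scalar for clarity).

def regime(value, threshold):
    """The system stays in its current regime until the threshold is crossed."""
    return "current" if value < threshold else "alternate"

def simulate(start, drift, threshold, periods):
    """Let the controlling variable drift and record the regime per period."""
    value, history = start, []
    for _ in range(periods):
        value += drift          # external driver pushes the variable
        history.append(regime(value, threshold))
    return history

# Mitigation would reduce `drift`; adaptation would pull `value` back
# below `threshold`; transformation would accept the new regime.
print(simulate(start=0.0, drift=0.3, threshold=1.0, periods=6))
```

In this picture, the three resilience strategies correspond to acting on the drift (mitigation), on the variable itself (adaptation), or on which regime the system treats as its identity (transformation).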
The proposed resilience-based framework is overlaid over the enterprise architecture framework. The reason for this is that, when enterprises are engaged in strategic transformation in response to strategic disruptions, they make use of enterprise architectures to direct the development and change of the enterprise as a whole, since enterprise architecture is concerned with the overall steering of the direction in which the enterprise aims to transform itself (Lankhorst 2009). The enterprise architecture should provide an elaboration of the enterprise's vision such that it enables the steering and coordination of all the actions involved in the transformation. In that sense, the enterprise architecture is a bridge from vision to implementation (Fehskens 2008).

Figure 2: Resilience-Based Framework

3 Methodology

For this work, the design science research methodology (DSRM) suggested by (Peffers, Tuunanen, et al. 2007) was adopted. This process proposes six consecutive steps, where the output of each step is treated as input to the next, with some iteration. The first step is problem identification and motivation, where the specific research problem is defined and the value of the solution is justified. The second step is the definition of the objectives for a solution, during which the objectives of the solution are deduced from the problem definition in the previous step and from what is feasible. During the third activity, design and development, the actual artefact is created. In the fourth activity, the use of the artefact is demonstrated. Evaluation of the artefact is the fifth activity, with observation and measurement of how well the designed artefact supports a solution to the problem.
In the final activity, communication takes place about the problem and its importance, and about the artefact and its quality characteristics.

3.1 Synthesizing the Integrated Resilience Framework

Only resilient business enterprises like Fujifilm (Komori 2015) and IBM (Garr and Redux 2000) are able to survive game-changing strategic disruptions and come back as more successful enterprises than they were before the disruptions. Resilient business enterprises apply resilience concepts to build the components and capabilities that enable them to survive and transform themselves when they need to face strategic disruptions. The management of resilient business enterprises uses the resilience strategies of mitigation, adaptation, and transformation and executes them at the right times, and in the right combinations, when facing strategic disruptions (Folke, Carpenter, et al. 2002). Concepts applied by successful, resilient enterprises like Fujifilm are captured and used to develop an integrated resilience-based framework. Business enterprises can use the proposed integrated resilience-based framework to prepare themselves and guide their actions to survive strategic disruptions. The foundation of the framework is the resilience concepts and resilience strategies. The framework is synthesized from a set of tools, strategies, frameworks, and information derived from the nature of the behaviours of business enterprises facing disruptions, the stages these enterprises go through when facing disruptions, and the characteristics of the ones that survive them.
3.2 Framework Requirements

In order for the resilience-based framework to be an effective framework that can guide business enterprises to survive game-changing strategic disruptions, it must fulfil the following requirements:

- Monitor and interpret shifts in the environment: The framework must allow monitoring changes in the environment and interpreting these changes into possible trajectories of the future. By environment we mean the pattern of all the external conditions and influences that affect the life and development of an enterprise, across the social, technological, environmental, economic, and political dimensions. This requirement matters because a business enterprise cannot be resilient against all possible types of disruptions, since this is economically impossible (May, Levin, et al. 2008).
- Apply the operating efficiency scenario: The framework must allow applying the scenario of moving parts of the enterprise's operating model to their efficiency frontier. By operating model, we mean all the components that depict how the business operates on a daily basis (Winter and Fischer 2006). Changing the operating model in this way has two outcomes: the first is reversing or slowing the negative impact of the strategic disruption, and the second is accumulating more resources that will be needed if a subsequent transformation phase takes place.
- Apply the adaptation scenario: The framework must allow applying the adaptation scenario (Walker and Meyers 2004) to recover from the impact of a strategic disruption. Business enterprises recover from the impact of a strategic disruption either by finding other markets for their products and services or by scaling down to match the impact of the disruption. The goal of the adaptation strategy is to survive the impact, minimize cost, and liquidate the released resources, adding them to the resource base needed during the transformation strategy phase.
- Apply the transformation scenario: The framework must allow applying the transformation scenario to redesign the business enterprise deliberately. The resilient business enterprise applies the resilience transformation scenario by changing the business model of the enterprise. The transformation scenario shakes the very foundation of the enterprise, transforms it into a different kind of enterprise, and changes its identity (Folke, Carpenter, et al. 2010).
- Articulate the core capabilities of the business: The framework must articulate the core capabilities of the business enterprise that will be the base for transformation, based on diversifying their uses and applications. The reason for this requirement is that resilient business enterprises build in-house core capabilities that are valuable, rare, inimitable, and non-substitutable. Around these core capabilities, the business models of these enterprises can be changed (Barney 1991).
- Organize enterprise concepts into layers with different rates of change: This kind of organization makes the enterprise more adaptive and the transformation process smoother. We learn this from the concept of systems architectonics, which describes how to design buildings that can learn by proposing several constructional layers that change at different rates. The more these layers can evolve without requiring changes to other layers, the more adaptable the building is (Galal-Edeen 2008).
- Develop an IT architecture that is business driven: The framework must allow developing the IT architecture based on the required transformation of the business. This requirement can be realized through a mapping process from business concepts to IT concepts.

Table 1: Framework Requirements

4 Integrated Enterprise Resilience Architecture Framework

The proposed resilience-based framework is overlaid over the enterprise architecture framework, since enterprise architecture is the tool that is concerned with the whole enterprise: business, information, and technology (Lankhorst 2009). Enterprise architecture is a tool that can translate a business vision into effective enterprise change by creating, communicating, and improving the key requirements, principles, and models that describe the enterprise's future state and enable its evolution (Lapkin, Allega, et al. 2008). The defining characteristic of enterprise architecture is that it crosses internal organizational boundaries and provides coordinated views of the entire enterprise, acting as a single source of reference and thus efficiently supporting management planning and decision making (Bernard 2012). For this work, we use the TOGAF framework (Josey 2011) and the business architecture framework (GUILD 2014) as the foundation for the integrated enterprise resilience architecture framework. The TOGAF framework is composed of many different parts, but the largest and most well-known is the Architecture Development Method (ADM). The architectural domains are described in terms of phases of the ADM, starting with Business, then Information Systems (a combination of Data and Application), and Technology. While TOGAF does describe some artefacts, there is significant flexibility in which artefacts should be produced and in the degree of formality present (Josey 2011).
The business architecture framework represents holistic, multidimensional business views of capabilities, end-to-end value delivery, information, and organizational structure, and the relationships among these business views and strategies, products, policies, initiatives, and stakeholders (GUILD 2014). The reason for choosing this combination is that the mix of the two frameworks addresses the framework requirements presented above. Another reason is that the two frameworks can be combined and integrated seamlessly. TOGAF is a generic and customizable framework that can be combined and integrated with other frameworks for processes and/or contents (Josey 2011). TOGAF has a business architecture development phase (Josey 2011) that can be integrated with business views created by the business architecture framework (GUILD 2014). There are three main usage scenarios for the enterprise architecture within the context of the resilience analysis: changing the operating model of the enterprise, changing the competitive strategies of the enterprise, and changing the business model of the enterprise. The three scenarios correspond to the three resilience strategies of mitigation, adaptation, and transformation. The mitigation strategy in this context has the mission of moving the operating model to the efficiency frontier. The adaptation strategy in this context applies several competitive strategies to recover from the impacts of the strategic disruptions. The transformation strategy in this context changes the business model, which transforms the enterprise into a new identity.

4.1 Enterprise Resilience Architecture Development Method

Based on the combination of the TOGAF ADM, the business architecture framework, and the framework requirements, we have developed a method that can guide business enterprises in addressing strategic disruptions, as per figure 3. The details of the method are shown in figure 4.
Figure 3: Enterprise Resilience Architecture Development Method

Figure 4: Detailed Enterprise Resilience Architecture Development Method

4.1.1 Prepare and Sense

In this phase, the business enterprise prepares itself to deal with strategic disruptions by instilling the resilience design characteristics throughout the organization. These characteristics enable the enterprise to apply the required resilience strategies to survive and persist when facing strategic disruptions (Reeves, Levin, et al. 2016). The business enterprise develops and deploys strategies to instil the necessary redundancy, diversity, connectivity, innovation, and core capabilities throughout the organization. Also in this phase, the business enterprise monitors the controlling variables that, if their thresholds are crossed, can shift the business enterprise into an undesirable regime. Approaching the threshold of one or more controlling variables indicates the possible emergence of a strategic disruption and kicks off the next phase of diagnosing the situation.

4.1.2 Diagnose Strategic Disruptions

In this phase, the business enterprise conducts environment analysis to understand the forces that cause the strategic disruptions. Strategic disruptions create drivers for the business enterprise to transform itself. The business enterprise needs to assess these drivers, a process that results in a set of transformation goals.

4.1.3 Develop Business Vision

In this phase, the business enterprise revises its business scope, business model, and value network in light of the transformation goals identified in the previous phase. Based on these revisions, the business enterprise formulates a transformation vision that will guide all the architecture effort that follows.
4.1.4 Develop Current Enterprise Architecture

In this phase, the business enterprise captures the current enterprise architecture in terms of capabilities, value propositions, value streams, organization structure, information, products & services, applications, data, and technology. The business enterprise then uses these concepts to create business blueprint views of the current state of the business.

4.1.5 Conduct Resilience Scenario Analysis

The blueprint views created in the previous phase are analysed in light of the strategic disruption dimensions and the transformation vision created earlier. These analyses will typically be part of the resilience scenarios mentioned in the framework requirements: the mitigation, adaptation, and transformation scenarios. For example, as part of the mitigation scenario, the business enterprise may ask: for a specific customer segment, which value streams, if streamlined and optimized, will maximize the value delivered to this segment? The business enterprise can then determine which capabilities enable these value streams and which information systems support these capabilities.

4.1.6 Develop Target Enterprise Architecture

Based on the analyses done in the previous phase, the business enterprise develops a target enterprise architecture in terms of target capabilities, value propositions, value streams, organization structure, information, products & services, applications, data, and technology.
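The cross-mapping question in 4.1.5 above (which capabilities enable a value stream, and which information systems support those capabilities) amounts to a traversal over stored mappings. A minimal sketch, with hypothetical insurance-flavoured names that are not part of the framework itself:

```python
# Illustrative cross-mappings between value streams, capabilities, and
# information systems; all names are hypothetical example data.
value_stream_to_capabilities = {
    "Settle Claim": ["Claims Handling", "Fraud Detection"],
    "Acquire Policy": ["Underwriting", "Billing"],
}
capability_to_systems = {
    "Claims Handling": ["Claims Application"],
    "Fraud Detection": ["Analytics Platform"],
    "Underwriting": ["Policy Administration System"],
    "Billing": ["Billing System"],
}

def systems_supporting(value_stream):
    """Traverse value stream -> enabling capabilities -> supporting systems."""
    systems = []
    for capability in value_stream_to_capabilities.get(value_stream, []):
        systems.extend(capability_to_systems.get(capability, []))
    return systems

print(systems_supporting("Settle Claim"))
```

Optimizing a value stream then translates directly into a worklist of capabilities and systems, which is what allows the scenario analysis to cross organizational boundaries.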
The business enterprise conducts an architecture gap analysis to define the gaps between the current enterprise architecture and the target enterprise architecture.

4.1.7 Implement Resilience Transformation Programme

In this phase, the business enterprise consolidates the enterprise architecture gaps identified in the previous phase, develops a consolidated enterprise architecture solution that addresses these gaps, and creates a transformation programme and roadmap that crosses business lines, departments, products & services, customer segments, and information technology. A transformation roadmap created this way ensures integrated execution, effective investment, and initiatives that are neither duplicated nor fragmented. In this phase, the business enterprise also ensures that the execution of the programme's projects conforms to the target enterprise architecture.

4.2 Enterprise Resilience Architecture Meta-Model

At the core of the integrated enterprise resilience architecture framework is the framework's Meta-Model. The contents of the resulting architectures are created based on this Meta-Model. These enterprise architecture contents form what is called the enterprise architecture knowledge base, which provides the foundational perspective for formalizing the definition, relationships, and management of the enterprise architecture artefacts. The knowledge base is the centrepiece of the enterprise resilience architecture framework, and its foundation is the enterprise architecture Meta-Model. The Meta-Model identifies the artefacts and relationships that serve as the foundation for storing and automating an enterprise architecture practice. It is based upon a set of core concept terms, or "domain categories", and relationships among those domain categories (Josey 2011).
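The architecture gap analysis in 4.1.6 and 4.1.7 above is, at its core, a domain-by-domain comparison of the current and target architectures. A minimal sketch over one domain, using hypothetical capability names and plain set difference:

```python
# Architecture gap analysis as a set comparison between the current and
# target capability maps (capability names are hypothetical examples).
current_capabilities = {"Claims Handling", "Underwriting", "Billing"}
target_capabilities = {"Claims Handling", "Underwriting", "Billing",
                       "Customer Analytics", "Digital Engagement"}

def gap_analysis(current, target):
    """Return what must be built (target only) and retired (current only)."""
    return {"build": target - current, "retire": current - target}

gaps = gap_analysis(current_capabilities, target_capabilities)
print(sorted(gaps["build"]))
```

The same comparison repeated over value streams, applications, data, and technology yields the consolidated gap list that the transformation programme in 4.1.7 addresses.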
The following figure (figure 5) shows the concepts of the Meta-Model that we use for creating the knowledge base of the integrated enterprise resilience architecture framework:

Figure 5: Enterprise Resilience Architecture Meta-Model

- Capability Concept: Capabilities describe what the business does and what it will need to do differently in response to strategic challenges and opportunities. They combine resources, competencies, information, processes, and their environments to deliver consistent outcomes (Burton 2010).
- Organization and Business Unit Concept: The business unit is the main concept used to establish organization maps. It is defined as "a logical element or segment of a company (such as accounting, production, marketing) representing a specific business function, and a definite place on the organization chart, under the domain of a manager. Also called department, division, or a functional area" (Ulrich and Rosen 2014).
- Stakeholder Concept: A stakeholder is defined as an internal or external individual or organization with a vested interest in achieving value through a particular outcome (Ulrich and Rosen 2014).
- Value and Value Proposition Concept: Value can be defined as the benefit that is derived by an organization's stakeholder while interacting with that organization. Value is fundamental to everything that an organization does; in fact, the only reason an organization exists is that it provides value to one or more stakeholders (Brandenburger and Stuart 1996). A value proposition is defined as "an innovation, service, or feature intended to make a company, product, or service attractive to customers or related stakeholders" (Frow and Payne 2011).
- Information Concept: Accurate, timely, relevant information is crucial to good decision-making, including strategic decisions (Choo 1996). Information and knowledge are key assets in the current knowledge-worker-driven economy. It has been consistently shown that information is essential for innovation in a culture that encourages and rewards intelligent risk taking. Information facilitates the assessment of both the upside and downside risk associated with a course of action (De Jong, Marston, et al. 2013).
- Outcome Concept: An outcome represents an end result that has been achieved. Outcomes are high-level, business-oriented results produced by the capabilities of an organization, and by inference by the core elements of its architecture that realize these capabilities. Outcomes are tangible, possibly quantitative, and time-related, and can be associated with assessments. An outcome may have a different value for different stakeholders (Josey, Lankhorst, et al. 2016).
- Product Concept: A product can be defined as a good, idea, method, information, object, or service that is the end result of a process and serves as a need or want satisfier. It is usually a bundle of tangible and intangible attributes (benefits, features, functions, uses) that a seller offers to a buyer for purchase. Products can be goods or services, and are distinguished by tangibility: goods are tangible, and services are intangible. The product can also refer to the overall experience provided by the combination of goods and services that satisfies the customer's needs (Geracie and Eppinger 2013).
- Strategy Concept: A strategy is an approach or plan for configuring some capabilities and resources of the enterprise, undertaken to achieve a goal. It is the pattern or plan that integrates an organization's major goals, policies, and action sequences into a cohesive whole (Quinn 1980).
- Application Concept: "Application" is the common terminology used to characterize a collection of software assets that automates and enables a bounded set of capabilities and is identifiable by name and other characteristics. These assets must be assessed for investment purposes just like any other asset. An application may decompose into smaller chunks; these chunks have historically been called subsystems, but other terms may also apply (Kellerman and Löfgren 2008).
- Data Concept: Data is often defined as "discrete, objective facts or observations, which are unorganized and unprocessed and therefore have no meaning or value because of lack of context and interpretation." Information may be built on top of data but may also only exist in the mind of a person or be conveyed in speech or ephemeral documents; information is the combination of data and a context for interpreting that data (Ulrich and Rosen 2014).
- Orchestration Concept: Services and application components automate business capabilities, and value stream / capability cross-mappings provide insights into service and application orchestration. When a business needs to improve or even add capabilities based on any number of business scenarios, capabilities and value streams provide architects with a framework for business service and service orchestration requirements (Ulrich and Rosen 2014).

Table 2: Meta-Model Concepts

5 Demonstrating the Method - ArchiSurance Case Study

ArchiSurance is a company that was created from the merger of three previously independent insurance companies to take advantage of the numerous synergies between them: to control costs, maintain customer satisfaction, invest in new technology, and take advantage of emerging markets with high growth potential.
They realized that only a larger, combined company could achieve these goals when lower-cost competitors started entering their markets and, at the same time, new opportunities in high-growth regions emerged; thus, they decided to join forces (Jonkers, Band, et al. 2012). The three original organizations were 'Home & Away,' which provided home and travel insurance to its clients; 'PRO-FIT,' which provided auto insurance; and 'Legally Yours,' which specialized in legal expense insurance. Although the three pre-merger companies sold different types of insurance, they had similar business models: they all sold directly to consumers and small businesses through the Web, email, telephone, and postal mail channels, without using an intermediary channel. The merged company, operating as ArchiSurance, now provides all the aforementioned services of the three pre-merger companies (as shown below in Figure 6). Like its three predecessors, ArchiSurance sells directly to customers via print, Web, and direct marketing, and intends to frequently adjust its offerings in response to changing market conditions (Jonkers, Band, et al. 2012).

Figure 6: ArchiSurance: the result of a merger of three insurance companies

After the merger, ArchiSurance set up a shared front office as a multi-channel contact centre for sales and customer service at the pre-merger headquarters of Home & Away. There are still three separate back offices that handle the insurance products of the three original companies. A Shared Service Centre (SSC) has been established for document processing at the pre-merger headquarters of PRO-FIT (Jonkers, Band, et al. 2012). The organization structure of the merged ArchiSurance company is shown in figure 7.
Figure 7: Global Organizational Structure of ArchiSurance

5.1.1 Diagnose strategic disruptions

In spite of the successful take-off of ArchiSurance, the enterprise faces a wave of decreasing profitability and rapidly increasing migration of customers to competitors. The company is struggling to cope with huge social changes in consumer attitudes and behaviours. The traditional insurance model adopted by ArchiSurance is being challenged by the competition's adoption of innovative usage-based business models and telematics, as well as by increased capital requirements and regulatory oversight across the world. ArchiSurance is not the only insurance enterprise facing this wave. The first thing the enterprise decided to do was to understand the driving forces behind the strategic shifts that shape the sector's landscape and cause the disruption wave. ArchiSurance conducted a STEEP (Social, Technological, Environmental, Economic, and Political) analysis to understand these driving forces.

Social:
- The ongoing social trend is causing insurance to be transformed from being "sold or pushed to customers" to being "bought" by customers. This requires insurance companies, including agents, advisors, and carriers, to re-examine their roles in the insurance value chain (Yoder, Rao, et al. 2012).
- The rapid adoption and fast evolution of social networks continue to empower both consumers and businesses and create what are called virtual communities (Yoder, Rao, et al. 2012).
Technology:
- The growth in smartphones and tablets, the growth in cloud computing, the explosion of computing power and storage, and the growth in active sensors and devices connected to the internet create big data that, when accumulated and analysed, can provide insurance companies a competitive advantage in pricing, underwriting, and loss control (Yoder, Rao, et al. 2012).
- Digital technologies, including social, mobile, analytics, IoT, AI, machine learning, and blockchain, present opportunities to offer new, compelling value propositions that combine existing competencies with the capabilities of new technologies (Yoder, Rao, et al. 2012).

Environment:
- The severity and frequency of catastrophic events, both natural and man-made, have been increasing over the years (Yoder, Rao, et al. 2012).
- With continued fossil fuel use, pollution will remain a significant health issue, threatening the well-being of populations in developed and developing countries (Yoder, Rao, et al. 2012).
- Life and health insurers will need to monitor trends in atmospheric pollution closely in order to assess risk in different regions accurately (Yoder, Rao, et al. 2012).

Economic:
- The world economy is shifting from one dominated by developed markets to one in which the majority of growth is in emerging markets (Yoder, Rao, et al. 2012).
- In the developed world, the old outnumber the young. In emerging markets (except China), the working-age population will continue to outnumber the dependent population, resulting in more productive growth (Yoder, Rao, et al. 2012).
- The rise of the middle class in emerging markets is fuelling increased consumption, which is leading to impressive small business growth (Yoder, Rao, et al. 2012).
 In developing countries, government infrastructure investment, population growth, new businesses, and wealth creation are driving growth in construction, land development, energy, and transportation sectors, all of which are creating a greater need for insurance (Yoder, Rao, et al. 2012). Political  Consumers lacking faith in the solvency of social security programmes will begin to focus on providing their own savings for retirement, away from government programmes (Yoder, Rao, et al. 2012).  This will create new opportunities for life and annuity insurers (Yoder, Rao, et al. 2012).  Over the past 3 decades, there has been an increase in terrorist attacks around the world. These terrorist attacks often impact multiple product lines, which are often modelled independently. Detailed modelling is required to understand the capacity requirements for terrorism coverage (Yoder, Rao, et al. 2012). Table 3: STEEP Analysis of Insurance Industry Based on the STEEP analysis, ArchiSurance diagnosed the situation as a strategic disruption caused by the interaction of STEEP forces shifts. Figure 8: Strategic Disruption Diagnosis 5.1.2 Develop Business Vision ArchiSurance created a vision (Figure 9) for a new business model that is based on customer engagement and preventive insurance strategies. The target business model is enabled by a digital core that transforms the customer interaction approach and delivers personalized value propositions based on the preventive insurance concept. 
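The framework's first requirement, monitoring and interpreting shifts in the environment, is easier to repeat across scanning cycles when an analysis like the STEEP table above is captured in a structured form rather than as free prose. The sketch below is our own illustration, not part of the proposed framework; the class and field names (`SteepForce`, `SteepAnalysis`, `by_aspect`) are invented for demonstration.

```python
from dataclasses import dataclass, field

@dataclass
class SteepForce:
    aspect: str        # "Social", "Technology", "Environment", "Economic", "Political"
    assessment: str    # the observed shift, stated in one sentence
    source: str = ""   # where the observation comes from

@dataclass
class SteepAnalysis:
    forces: list = field(default_factory=list)

    def by_aspect(self, aspect):
        """Return all recorded shifts for one STEEP aspect."""
        return [f for f in self.forces if f.aspect == aspect]

# A few of the ArchiSurance observations from Table 3, recorded as data.
scan = SteepAnalysis([
    SteepForce("Social",
               "Insurance shifts from being sold to being bought by customers",
               "Yoder, Rao, et al. 2012"),
    SteepForce("Technology",
               "Connected devices and cloud computing create big data useful "
               "for pricing, underwriting and loss control",
               "Yoder, Rao, et al. 2012"),
    SteepForce("Economic",
               "The majority of growth moves to emerging markets",
               "Yoder, Rao, et al. 2012"),
])
print(len(scan.by_aspect("Social")))  # 1
```

Recording each scan this way lets successive diagnosis cycles compare which forces appeared, intensified, or faded between scans.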
Figure 9: Transformation Vision

The vision depends on creating an integrated digital core, as shown in Figure 10:

Figure 10: Envisioned Integrated Digital Core

ArchiSurance created a new platform-based business model, as shown in Figure 11:

Figure 11: ArchiSurance new business model

5.1.3 Develop current enterprise architecture

ArchiSurance mapped the current core capabilities, as shown in Figure 12:

Figure 12: ArchiSurance current core capabilities

ArchiSurance mapped the current application landscape, as shown in Figure 13:

Figure 13: ArchiSurance Current Application Landscape

5.1.4 Conduct Resilience Scenario Analysis

ArchiSurance conducted resilience scenario analysis as follows:

Mitigation Scenario
ArchiSurance decided to move to the operational frontier and enhance the customer experience through the following initiatives:
1- Automate the underwriting process
2- Automate real-time fraud detection
3- Enable customers to submit claims through smartphones
These initiatives enable ArchiSurance to grow its current markets and boost current customer loyalty. This way it can sustain the current business model, delay the impact of the disruptive forces, and provide a strong base for the business model transformation.

Adaptation Scenario
ArchiSurance started a restructuring initiative to reduce cost and match the declining trend. Saved resources are used to fuel the transformation scenario.

Transformation Scenario
ArchiSurance launched a Transformation Programme to transform the business model from a product-based insurance business model into a platform-based insurance business model. ArchiSurance created several initiatives to build a digital core that will form the foundation of the new business model.
Table 4: Resilience Scenarios

5.1.5 Develop Target Enterprise Architecture

ArchiSurance mapped the target core capabilities. The new architecture will transform the current core capabilities and add new capabilities to them, as shown in figure 14:

Figure 14: ArchiSurance target core capabilities

The new digital core will be deployed as shown in figure 15:

Figure 15: ArchiSurance digital technologies deployment

The ArchiSurance target application portfolio is shown in figure 16:

Figure 16: ArchiSurance target application portfolio

5.1.6 Implement Resilience Transformation Programme

ArchiSurance created a transformation programme comprising the following initiatives (figure 17):

Figure 17: ArchiSurance transformation programme

6 Conclusions and Future Work

In this paper, we propose an integrated resilience framework to guide business enterprises in designing and implementing the right changes when they are faced with game-changing strategic disruptions. To be effective, the framework must fulfil a set of requirements: the ability to monitor and interpret shifts in the environment, the ability to apply the operating-efficiency scenario, the ability to apply the adaptation scenario, the ability to apply the transformation scenario, the ability to articulate the core capabilities of the business enterprise, the ability to organize enterprise concepts into layers with different rates of change, and the ability to develop an IT architecture that is business driven. Since traditional strategic management approaches have failed to address this problem, we had to choose a tool that is capable of steering the whole enterprise.
We therefore built the framework on a combination of two frameworks: the enterprise architecture framework and the business architecture framework. These two can be combined and integrated in a way that addresses the requirements listed above. The resulting framework is composed of two main components: the enterprise resilience architecture development method and the enterprise resilience architecture meta-model.

There are several limitations to the work we have presented. We have stated that the framework can integrate with other domains, such as strategic management and enterprise risk management; further research should elaborate on the possibilities of these integrations. Also, we have demonstrated our proposed framework with the help of a single case study. Although this is sufficient for stating that our approach is viable for the organisation under analysis, we cannot state that it is applicable to all organisations. Further research is therefore needed to investigate the generalizability of our framework.

7 Bibliography

i. Barney, J. B. (1991). "Firm Resources and Sustained Competitive Advantage." Journal of Management 17: 99-120.
ii. Bernard, S. A. (2012). An Introduction to Enterprise Architecture. AuthorHouse.
iii. Bossidy, L. and R. Charan (2002). Execution: The Discipline of Getting Things Done. Crown Business, New York.
iv. Brandenburger, A. M. and H. W. Stuart (1996). "Value-based business strategy." Journal of Economics & Management Strategy 5(1): 5-24.
v. Burton, B. (2010). "Eight Business Capability Modeling Best Practices." Gartner Research, ID G00175782.
vi. Choo, C. W. (1996). "The knowing organization: How organizations use information to construct meaning, create knowledge and make decisions."
International Journal of Information Management 16(5): 329-340.
vii. De Jong, M., et al. (2013). "The eight essentials of innovation performance." McKinsey Strategy.
viii. Fehskens, L. (2008). Re-thinking architecture: the architecture of enterprise architecture. 20th Enterprise Architecture Practitioners Conference, The Open Group, Reading, UK.
ix. Fiksel, J. (2003). "Designing Resilient, Sustainable Systems." Environmental Science and Technology.
x. Folke, C., et al. (2002). "Resilience and sustainable development: building adaptive capacity in a world of transformations." Ambio (31): 437-440.
xi. Folke, C., et al. (2010). "Resilience Thinking: Integrating Resilience, Adaptability and Transformability." Ecology and Society 15(4).
xii. Frow, P. and A. Payne (2011). "A stakeholder perspective of the value proposition concept." European Journal of Marketing 45(1/2): 223-240.
xiii. Galal-Edeen (2008). "Cairo University Strategy Formulation: an Architectonic View." Conference on Enhancing the Competitiveness of Universities, Faculty of Commerce, Cairo University.
xiv. Garr, D. (2000). IBM Redux: Lou Gerstner and the Business Turnaround of the Decade. HarperInformation.
xv. Geracie, G. and S. D. Eppinger (2013). The Guide to the Product Management and Marketing Body of Knowledge. Product Management Educational Institute (PMEI).
xvi. Business Architecture Guild (2014). A Guide to the Business Architecture Body of Knowledge (BIZBOK Guide), V04.
xvii. Hamel, G. and L. Välikangas (2003). "The Quest for Resilience." Harvard Business Review.
xviii. Holling, C. S. (1973). "Resilience and Stability of Ecological Systems." Annual Review of Ecology and Systematics (4).
xix. Jonkers, H., et al. (2012). "The ArchiSurance Case Study." White paper, The Open Group, Spring.
xx. Josey, A. (2011). TOGAF Version 9.1 Enterprise Edition: An Introduction. The Open Group.
xxi. Josey, A., et al. (2016). "An Introduction to the ArchiMate® 3.0 Specification." White paper, The Open Group.
xxii. Kellerman, J. and P.
Löfgren (2008). Application Portfolio Management.
xxiii. Komori, S. (2015). Innovating Out of Crisis: How Fujifilm Survived (and Thrived) as Its Core Business Was Vanishing. Stone Bridge Press, Inc.
xxiv. Lankhorst, M. (2009). Enterprise Architecture at Work: Modelling, Communication and Analysis (The Enterprise Engineering Series). Springer.
xxv. Lapkin, A., et al. (2008). "Gartner Clarifies the Definition of the Term Enterprise Architecture." Research G00156559, Gartner.
xxvi. Longstaff, P. H., et al. (2010). "Building Resilient Communities: A Preliminary Framework for Assessment." Homeland Security Affairs VI(3).
xxvii. May, R. M., et al. (2008). "Complex systems: Ecology for bankers." Nature 451(7181): 893-895.
xxviii. Peffers, K., et al. (2007). "A design science research methodology for information systems research." Journal of Management Information Systems 24(3): 45-77.
xxix. Quinn, J. B. (1980). Strategies for Change: Logical Incrementalism. Irwin Professional Publishing.
xxx. Reeves, M., et al. (2016). "The biology of corporate survival." Harvard Business Review 94(1): 2.
xxxi. Schwartz, P. and D. Randall (2007). Ahead of the Curve: Anticipating Strategic Surprise. Monitor Group.
xxxii. Ulrich, W. and M. Rosen (2014). "The Business Capability Map: Building a Foundation for Business/IT Alignment (2011)." Cutter Consortium for Business and Enterprise Architecture.
xxxiii. Walker, B. and J. A. Meyers (2004). "Thresholds in Ecological and Social-Ecological Systems: a Developing Database." Ecology and Society 9(2).
xxxiv. Winter, R. and R. Fischer (2006). Essential layers, artifacts, and dependencies of enterprise architecture. Enterprise Distributed Object Computing Conference Workshops (EDOCW'06), 10th IEEE International, IEEE.
xxxv. Yoder, J., et al. (2012). Insurance 2020: Turning Change into Opportunity. PricewaterhouseCoopers Insurance: http://www.pwc.com/gx/en/insurance/pdf/insurance-2020-turning-change-into-opportunity.pdf
Rev Bras Ortop. 2015;50(3):266–269. www.rbo.org.br

Original Article

Reproducibility of the AO/ASIF and Gartland classifications for supracondylar fractures of the humerus in children

Igor Tadeu Silveira Rocha, André de Siqueira Faria, Carlos Fontoura Filho, Murilo Antônio Rocha
Universidade Federal do Triângulo Mineiro (UFTM), Uberaba, MG, Brazil

Article history: Received 2 April 2014; Accepted 15 May 2014; Available online 28 May 2015

Keywords: Fractures of the humerus/classification; Children; Observer-dependent variations; Reproducibility of results

Abstract

Objective: To evaluate the reproducibility of the radiographic classifications of Gartland and of the Association for Osteosynthesis/Association for the Study of Internal Fixation (AO/ASIF) for supracondylar fractures of the humerus in children.

Methods: On two occasions, 50 radiographs in anteroposterior and lateral views were evaluated by three pediatric orthopedists in accordance with the Gartland and AO/ASIF pediatric classifications. Their responses were subjected to statistical analysis consisting of calculation of the κ coefficient to assess the intra- and interobserver concordance in both classifications.

Results: The strength of the intraobserver concordance was high or near perfect for the three examiners in the two classification systems. The strength of the interobserver concordance was high in the two systems, with κ coefficients of 0.756 for the Gartland classification and 0.766 for the AO/ASIF classification.

Conclusion: The Gartland and AO/ASIF classification systems showed similar reproducibility and performance. High strength of concordance was seen in the intra- and interobserver analyses.

© 2014 Sociedade Brasileira de Ortopedia e Traumatologia. Published by Elsevier Editora Ltda. All rights reserved.

Work developed in the Discipline of Orthopedics and Traumatology, Universidade Federal do Triângulo Mineiro, Uberaba, MG, Brazil. Corresponding author: I.T.S. Rocha (igorsilveira2003@yahoo.com.br, doutorigorsilveira@hotmail.com). http://dx.doi.org/10.1016/j.rboe.2015.05.001
Introduction

Supracondylar fractures are the commonest type of elbow fracture in children and the second commonest type of fracture during childhood, accounting for more than 60% of the cases.1–4 They occur most frequently between the ages of five and ten years.5 The various classification systems proposed for these fractures have had the aims of guiding the treatment, estimating the prognosis and enabling standardization and comparison among the many scientific studies. These classifications need to be simple, easy to apply clinically and reproducible, with high concordance between surgeons.6–8

The Gartland classification for supracondylar fractures of the humerus is the one most used.9,10 In this classification system, fractures are grouped according to their degree of displacement. Although the LaGrange11 classification is more descriptive and detailed in cases of greater displacement, it is not the system most used. In turn, the system adopted by the AO group12 for fractures of the long bones in children combines the classification of Müller et al.13 for adults with an additional description focused on the immature skeleton.8 This is an alphanumeric system that includes the bone affected, the location and the severity, along with the peculiarities of the growing bone. Thus, supracondylar fractures would be described as 13-/9.1 with an ending of I, II, III or IV, according to whether the fracture was complete or incomplete, and with or without contact between the fragments. In this manner, only the exception component (I–IV) of the morphological segment of the AO/ASIF classification was taken into consideration in the present study.

The objective of this study was to assess the reproducibility of the Gartland and AO/ASIF classifications for supracondylar fractures of the humerus in children, by investigating the levels of intra- and interobserver concordance.

Methods

This study was conducted in a referral hospital that attends orthopedic trauma cases, after receiving approval from the institution's ethics committee. Fifty conventional radiographs (anteroposterior and lateral views) originating from initial attendance of patients with supracondylar fractures of the humerus, produced between January and June 2013, were selected for evaluation. The radiographic images for the study were obtained by means of high-resolution digital photography, with preservation of the original characteristics of the film. The selection did not take into consideration the quality of the radiography. Images from patients over the age of 16 years, from those who presented a closed growth plate line and from those presenting multiple fractures on radiographs were excluded.

The images were evaluated by three pediatric orthopedists who had had previous access to the classification systems. Seven days of training before the analysis was permitted. The examiners evaluated the 50 images over a maximum time of two hours and made a second evaluation of the same duration two weeks later. The order of the 50 images was varied through randomization. The examiners did not have access to the responses of their peers or to their own responses given on the previous occasion. The responses given by each examiner to the radiographic evaluations were written on a printed chart that was handed out to each participant, together with a free and informed consent statement.

The results were gathered and analyzed with the aid of the SPSS® software, version 12.0 (Chicago, USA), in order to determine the κ coefficient, which inferred the degree of concordance beyond what would be expected only by chance. The strength of the intra- and interobserver concordance of the two classification systems was then determined, as detailed in Table 1.14

Table 1 – Association between the κ coefficient and the strength of concordance.14
κ coefficient    Strength of concordance
Less than zero   Poor
0–0.20           Negligible
0.21–0.40        Slight
0.41–0.60        Moderate
0.61–0.80        High
0.81–1.00        Almost perfect

Results

The intraobserver concordance according to the κ coefficient, relating to the Gartland classification for supracondylar fractures of the humerus in children and the AO/ASIF classification for fractures in children, as presented in Table 2, was high or almost perfect for all the examiners in relation to both classifications. For two of the three examiners, the concordance for the AO/ASIF system was slightly higher.

Table 2 – Intraobserver concordance level according to the κ coefficient, in relation to the Gartland and AO classifications for supracondylar fractures of the humerus in children.
             Gartland   AO
Examiner 1   0.781      0.767
Examiner 2   0.859      1
Examiner 3   0.719      0.782

Tables 3 and 4 present the interobserver analyses for the Gartland and AO classifications, respectively. It can be seen that the interobserver concordance decreased with regard to category II, in both classification systems.

Table 3 – Interobserver analysis of the κ coefficient for the Gartland classification.
                    Gartland I    Gartland II    Gartland III
κ                   0.945         0.535          0.677
p-value of κ        <0.001        <0.001         <0.001
95% CI of κ         0.785–1.0     0.375–0.695    0.517–0.837

As shown in Table 5, the interobserver evaluation showed κ of 0.756 for the Gartland classification and 0.766 for the AO/ASIF classification, which thus shows high concordance between the two systems.
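The chance-corrected agreement that the κ coefficient measures can be made concrete with a small worked sketch. This is purely illustrative: the study performed its calculations in SPSS 12.0, the ratings below are invented, and for simplicity a two-rater (Cohen's) κ is shown rather than the multi-rater form the study required.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Two-rater (Cohen's) kappa: observed agreement corrected for chance."""
    n = len(rater_a)
    # Observed agreement: fraction of cases both raters classified identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected chance agreement, from each rater's marginal category frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum((freq_a[c] / n) * (freq_b[c] / n) for c in freq_a.keys() | freq_b.keys())
    return (p_o - p_e) / (1 - p_e)

def strength(kappa):
    """Verbal strength of concordance, following the bands of Table 1."""
    if kappa < 0:
        return "poor"
    for upper, label in [(0.20, "negligible"), (0.40, "slight"), (0.60, "moderate"),
                         (0.80, "high"), (1.00, "almost perfect")]:
        if kappa <= upper:
            return label

# Invented Gartland-type ratings of ten radiographs by two hypothetical raters.
rater_a = ["I", "II", "III", "III", "I", "II", "III", "I", "II", "III"]
rater_b = ["I", "II", "III", "II", "I", "II", "III", "I", "III", "III"]
k = cohens_kappa(rater_a, rater_b)  # observed 0.8, chance 0.34
print(round(k, 3), strength(k))     # 0.697 high
```

Here the raters agree on 8 of 10 cases (0.8), but 0.34 of agreement is already expected by chance from the category frequencies, so κ = (0.8 − 0.34)/(1 − 0.34) ≈ 0.70. On this scale, the study's general interobserver values of 0.756 (Gartland) and 0.766 (AO/ASIF) both fall in the "high" band of Table 1.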
Table 4 – Interobserver analysis of the κ coefficient for the AO classification.
                    AO I         AO II          AO III       AO IV
κ                   0.865        0.435          0.75         1.0
p-value of κ        <0.001       <0.001         <0.001       <0.001
95% CI of κ         0.705–1.0    0.275–0.595    0.59–0.91    0.84–1.0

Table 5 – General κ coefficient for interobserver evaluation, according to classification system.
           Number of radiographs   General κ   General p-value   95% CI of κ
Gartland   50                      0.756       <0.001            0.637–0.874
AO/ASIF    50                      0.766       <0.001            0.665–0.868

Discussion

The diversity of classification systems for a group of fractures that is published over the course of time may give rise to interpretational conflicts. Thus, the validity, reproducibility and correlations of well-established classifications need to be verified, given that comparisons between different evaluations, with exclusion of causality and personal bias, can demonstrate the qualities or weaknesses of a given system under examination. According to Audigé et al.,6 for these objectives to be attained, the classification system needs to go through three research phases before it is validated for clinical use.6,14

To know whether a given characterization or classification for an object is reliable, this object needs to be evaluated several times, by more than one examiner. For this, in the present study, the κ coefficient was used. This infers the degree of concordance beyond what would be expected purely by chance. It is based on the number of concordant responses, i.e. the number of cases for which the result is the same among the examiners.15,16

In the present study, the examiners seemed to be "well calibrated", both within themselves and with the others. The interobserver concordance values were within the 95% confidence interval, with p < 0.001 in both classification systems. Therefore, these values presented statistical significance. As also found by Brandão et al.,14 our interobserver concordance index was no greater than 0.8, even though the observers were all pediatric orthopedists.

The concordance found between the Gartland and AO/ASIF classification systems was satisfactory (high or almost perfect). These systems had similar performance, despite the greater complexity of the AO/ASIF system and the examiners' lower degree of familiarity with this system. In the present study, the lowest strength of concordance (moderate) in the interobserver analysis was found in type II of the Gartland and AO/ASIF classifications. However, according to Heal et al.,10 the lowest level of interobserver concordance for the Gartland classification occurred in type I.

It was observed that variations in the degree of concordance in the interobserver analysis of different studies10,14 did not invalidate the constant observation that the two classifications have good reproducibility. Evaluation of the reproducibility of these classifications is of importance insofar as they guide the type of treatment instituted for these fractures (conservative versus surgical). They also enable standardization of the orthopedic language for comparing studies from different centers. Now that the reproducibility of these classification systems has been verified, it becomes necessary to conduct further studies to ascertain whether one of them might be superior to the other and thus to determine a standard system.

Conclusion

The Gartland and AO/ASIF classification systems showed similar reproducibility and the intra- and interobserver analyses showed high strength of concordance, even though use of the AO/ASIF system remains limited among orthopedists and, consequently, their familiarity with this method is lower.

Conflicts of interest

The authors declare no conflicts of interest.

References

1. Lins RE, Simovitch RW, Waters PM. Pediatric elbow trauma.
Orthop Clin North Am. 1999;30(1):119–32.
2. Cheng JC, Shen WY. Limb fracture pattern in different pediatric age groups: a study of 3350 children. J Orthop Trauma. 1993;7(1):15–22.
3. Blount WP. Fractures in children. Baltimore: Williams and Wilkins; 1955.
4. Smith FM. Children's elbow injuries: fractures and dislocations. Clin Orthop Relat Res. 1967;(50):7–30.
5. Kasser JR, Beaty JH. Supracondylar fractures of the distal humerus. In: Beaty JH, Kasser JR, editors. Rockwood and Wilkins' fractures in children. 5th ed. Philadelphia: Lippincott Williams & Wilkins; 2001. p. 577.
6. Audigé L, Bhandari M, Kellam J. How reliable are reliability studies of fracture classifications? A systematic review of their methodologies. Acta Orthop Scand. 2004;75(3):184–94.
7. Garbuz DS, Marsi BA, Esdaile J, Duncan CP. Classification systems in orthopaedics. J Am Acad Orthop Surg. 2002;10(4):290–7.
8. Slongo T, Audigé L, Schlickewei W, Clavert J, Hunter J. Development and validation of the AO paediatric comprehensive classification of long-bone fractures. J Pediatr Orthop. 2006;26(1):43–9.
9. Gartland JJ. Management of supracondylar fractures of the humerus in children. Surg Gynecol Obstet. 1959;109(2):145–54.
10. Heal J, Boud M, Livingstone J, Blewitt N, Blom AW. Reproducibility of the Gartland classification for supracondylar humeral fractures in children. J Orthop Surg (Hong Kong). 2007;15(1):12–4.
11. LaGrange JRP. Fractures supracondyleennes. Rev Chir Orthop. 1962;48:337–414.
12. Slongo T, Audigé L, Clavert JM, Lutz N, Frick S, Hunter J. AO comprehensive classification of pediatric long-bone fractures: a web-based multicenter agreement study. J Pediatr Orthop. 2007;27(2):171–80.
13. Müller ME, Nazarian S, Koch P. The comprehensive classification of fractures of long bones. Berlin: Springer-Verlag; 1990.
14. Brandão G, Teixeira L, Américo L, Soares C, Caldas L, Azevedo A, et al. Reprodutibilidade da classificação da AO/ASIF para fraturas dos ossos longos na criança. Rev Bras Ortop. 2010;45 Suppl.:37–9.
15. Siegel S, Castellan N. Nonparametric statistics for the behavioral sciences. New York: McGraw-Hill; 1988.
16. Fleiss JL. The measurement of interrater agreement. In: Statistical methods for rates and proportions. New York: John Wiley and Sons; 1981.
http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0105 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0105 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0105 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0105 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0105 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0105 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0105 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0105 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0105 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0105 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0105 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0105 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0105 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0105 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0105 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0105 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0110 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0110 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0110 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0110 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0110 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0110 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0110 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0110 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0110 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0110 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0110 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0110 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0110 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0110 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0110 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0110 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0110 
http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0110 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0110 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0110 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0110 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0110 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0110 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0110 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0110 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0110 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0110 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0110 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0110 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0110 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0110 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0110 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0110 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0110 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0110 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0110 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0110 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0110 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0110 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0110 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0110 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0110 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0110 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0115 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0115 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0115 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0115 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0115 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0115 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0115 
http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0115 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0115 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0115 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0115 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0115 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0115 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0115 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0115 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0115 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0115 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0115 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0115 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0115 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0115 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0115 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0115 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0115 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0115 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0120 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0120 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0120 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0120 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0120 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0120 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0120 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0120 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0120 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0120 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0120 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0120 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0120 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0120 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0120 
http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0120 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0120 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0120 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0120 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0120 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0120 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0120 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0120 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0120 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0120 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0120 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0120 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0120 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0120 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0120 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0120 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0120 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0120 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0120 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0120 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0120 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0120 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0120 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0120 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0120 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0120 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0120 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0120 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0120 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0120 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0120 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0120 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0120 
http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0120 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0120 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0125 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0125 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0125 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0125 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0125 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0125 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0125 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0125 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0125 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0125 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0125 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0125 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0125 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0125 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0125 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0125 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0125 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0125 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0125 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0125 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0125 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0125 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0125 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0125 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0125 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0125 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0125 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0130 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0130 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0130 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0130 
http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0130 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0130 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0130 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0130 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0130 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0130 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0130 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0130 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0130 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0130 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0130 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0130 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0130 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0130 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0130 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0130 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0130 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0130 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0130 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0130 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0130 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0130 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0130 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0130 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0130 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0130 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0130 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0130 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0130 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0130 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0130 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0130 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0130 
http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0130 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0130 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0130 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0130 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0130 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0130 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0130 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0130 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0130 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0130 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0135 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0135 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0135 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0135 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0135 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0135 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0135 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0135 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0135 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0135 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0135 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0135 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0135 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0135 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0135 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0135 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0135 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0135 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0135 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0135 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0140 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0140 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0140 
http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0140 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0140 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0140 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0140 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0140 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0140 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0140 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0140 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0140 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0140 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0140 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0140 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0140 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0140 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0140 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0140 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0140 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0140 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0140 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0140 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0140 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0140 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0140 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0140 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0140 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0140 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0140 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0140 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0140 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0140 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0140 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0140 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0140 
http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0140 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0140 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0140 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0140 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0140 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0140 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0140 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0140 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0140 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0140 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0140 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0140 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0140 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0140 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0140 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0140 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0140 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0140 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0145 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0145 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0145 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0145 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0145 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0145 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0145 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0145 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0145 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0145 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0145 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0145 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0145 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0145 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0145 
http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0145 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0145 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0145 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0145 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0145 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0145 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0145 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0145 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0145 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0145 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0145 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0145 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0145 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0145 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0145 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0150 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0150 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0150 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0150 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0150 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0150 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0150 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0150 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0150 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0150 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0150 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0150 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0150 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0150 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0150 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0150 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0150 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0150 
http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0150 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0150 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0150 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0150 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0150 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0150 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0150 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0150 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0150 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0150 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0150 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0150 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0150 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0150 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0150 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0150 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0150 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0150 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0150 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0150 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0150 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0150 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0150 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0150 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0150 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0150 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0150 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0150 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0150 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0150 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0150 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0150 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0150 
http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0150 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0150 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0150 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0150 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0155 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0155 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0155 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0155 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0155 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0155 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0155 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0155 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0155 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0155 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0155 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0155 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0155 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0155 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0155 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0155 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0155 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0155 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0155 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0155 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0155 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0155 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0155 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0155 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0155 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0160 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0160 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0160 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0160 
http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0160 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0160 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0160 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0160 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0160 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0160 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0160 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0160 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0160 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0160 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0160 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0160 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0160 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0160 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0160 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0160 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0160 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0160 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0160 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0160 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0160 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0160 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0160 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0160 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0160 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0160 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0160 http://refhub.elsevier.com/S2255-4971(15)00062-2/sbref0160 Reproducibility of the AO/ASIF and Gartland classifications for supracondylar fractures of the humerus in children Introduction Methods Results Discussion Conclusion Conflicts of interest References work_owatozixpratpe7hryhyvlkf24 ---- wp-p1m-39.ebi.ac.uk Params is empty 404 sys_1000 exception wp-p1m-39.ebi.ac.uk no 218565660 Params is empty 
work_ox4pmfmzmnaoxkhg5fv5id7z34 ----

DIGITAL DOCUMENTATION IN THE CONSERVATION OF CULTURAL HERITAGE: FINDING THE PRACTICAL IN BEST PRACTICE

L. S. Beck
Institute of Archaeology, University College London (l.stewart.11@ucl.ac.uk)

KEY WORDS: Conservation, Documentation, Digital Photography, Method, Research

ABSTRACT: Documentation of treatment is one of the central tenets of conservation as a profession, and a necessary aspect of the preservation of cultural heritage. Photographic documentation has been an essential technique for recording the nature of heritage objects and illustrating conservation procedures. The routine use of digital photography in recent years has opened many avenues to conservators, but also poses unique threats to the long-term stability of the conservation record.
Digital documentation is subject to decay just as physical or 'analogue' records are, with the stark difference that digital data corrupts absolutely, whereas physical records can remain legible through various stages of deterioration. It is therefore necessary to understand the options conservators have with regard to preserving their records for the future. The various guidelines presently available regarding digital documentation may be synthesized into a coherent 'best practice' specific to digital conservation documentation. This practice, however, must be reconsidered within the framework of what is necessary to ensure that photographic records are preserved versus what is feasible. To determine whether conservators are aware of the limitations of digital technology, thirty practicing conservators were asked to respond to a questionnaire regarding their own documentation practices. The responses identified a lack of best practice and indicated that multiple factors prevent conservators from developing effective methods for creating, storing, and accessing documentation. To address this, a modified form of best practice, the 'best practical' method, is developed as a series of guidelines intended to be feasible for practicing conservators. This method aims to reduce the time and economic costs required of best practice while minimizing the risk to the conservation record. The 'best practical' guidelines are designed to be applicable to a wide range of professional contexts, from large public institutions to independent private contractors. The significance of selecting documentation for long-term survival is also emphasized. The value of these guidelines lies in the identification of small changes to current practice that have the potential to make large differences in the amount of information preserved for future conservators, scholars, and other interested parties.

1. INTRODUCTION

Photography has existed for more than 150 years, but only recently have advances in technology made high-quality cameras and devices readily available, affordable, and easy to use. Digital technology in particular allows users to take more pictures than ever before. Museums have been able to make use of these advances, and today many conservators use digital photography for the purposes of documenting their work. However, the advent of digital photography also brings about new issues that conservators must be prepared to address. The lifespan of the hardware used to store digital media can be as low as five years; paper and printed photographs can last as long as 100 years with minimal intervention (Carlston 1998). Extra resources must be dedicated to addressing this issue and prolonging the lifespan of photographic documentation. Are conservators prepared to address these issues? Have they begun to do so, or are future conservators already facing a span of time from which information will be lost? Many conservators bemoan the lack of documentation from before the end of the 20th century, before conservation was established as a profession (Pye 2001). Conservators must acknowledge that when they use digital photography, they face many issues that have the potential to dramatically decrease the lifespan of their documentation. Failing to prepare for the future of their photographs could leave future conservators without records from our time.

1.1 Motivation and aims

Documentation is one of the fundamental responsibilities of the modern conservation professional, and is emphasized in institutional policies as well as professional ethical guidelines. When properly completed, it can inform future conservators of previous work, allowing them to track the efficacy of treatments over time. It can also allow comparison between a previous state and the current state of an object, illuminating the rate of deterioration.
To other museum professionals, documentation can be used to demonstrate the work that conservators do. These are only a few of the many benefits that documentation provides to conservators and others within the museum and academic communities. The lack of documentation records from the early years of conservation parallels the loss of digital data through obsolescence and corruption that has occurred over the past quarter century in many fields. Some go so far as to say that the human record from the late 20th and early 21st century may be completely unreadable in the future (Deegan and Tanner 2006). A valuable point about the longevity of digital media has been made on multiple occasions: digital media corrupts absolutely, a stark difference from analogue media, which has various stages of deterioration over time, many of which are still legible (Gschwind et al. 2005). Three significant issues have been isolated with respect to the longevity of digital media: the storage medium itself, the hardware, and the necessary interpretive software. For each of these, there are dangers of corruption of data or obsolescence of the software or hardware (Carlston 1998). Multiple strategies have been suggested as a response to the frailty of digital media, and are widely used in the technical community. Refreshing, or copying material from one medium to another, is an effective strategy because no data is lost during replication (Gschwind et al. 2005). Another way to accomplish this is by 'exercising' data, as using and resaving also prevents data loss and ensures that it will remain usable (Lyman and Besser 1998).
International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Volume XL-5/W2, 2013. XXIV International CIPA Symposium, 2-6 September 2013, Strasbourg, France. This contribution has been peer-reviewed. The peer-review was conducted on the basis of the abstract.
Another commonly used strategy, migration, involves transferring data onto newer hardware, protecting it from technical obsolescence (Greenstein and Beagrie 1998). The aim of this research was to understand how conservators use photography as a form of documentation today. How do the issues presented affect conservators, and are conservators addressing these issues in a way that will allow their records to be accessible well into the future?
1.2 Methodology
The questions of recording, storage, and access were probed with respect to theoretical best practice as well as the current practices of conservators. Current practices in documentation were analysed using a questionnaire, with selected follow-up interviews. The questionnaire consisted of eighteen short questions, and was targeted at practicing conservators. Any interested conservator was welcome to respond to the questionnaire. Responses from 30 conservators were recorded. Of these, 18 responses were from conservators in museums, 7 were from conservators in private practice, and 5 were from conservators who chose to submit anonymously.
2. DEFINING BEST PRACTICE
There are a variety of sources that document 'best practice' when it comes to documentation, particularly photography. The AIC recently published a guide to digital photography for conservators, which is the first of its kind aimed specifically at the field of conservation. There are a good number of guidelines and standards that were not developed for the heritage sector but are still applicable, such as the Universal Photographic Digital Imaging Guidelines (UPDIG) and guidelines published by the Scientific Working Group on Imaging Technology (SWGIT). The goal of each of these sources comes down to the same ideal: what can be done to ensure the preservation of photographs, especially digital photographs, for as long as possible?
The recommendations of all of these sources are synthesized below into one set of 'best practice' standards for photographic conservation documentation.
2.1 Documenting Conservation
The first thing to consider when creating documentation is the stages within the treatment process where photographs will be significant. The AIC Code of Ethics suggests that records must be created before and after treatment documenting condition, but should also detail "examination, sampling, scientific investigation, and treatment" (AIC 1994). While this refers to all forms of documentation, not just photography, it illuminates the decision-making process for conservators. Photographs should be taken whenever written documentation could not succinctly describe condition before, during, and after treatment. Given that conservators are not professional photographers, there are few photography-specific details that are required by best practice. However, it is imperative that images are taken with colour targets and size scales included. In addition to using a colour target, the digital camera itself should be colour characterised in order to create photographs with the most accurate colour reproduction. This can be accomplished with colour calibration software from companies such as X-Rite (formerly GretagMacbeth).
2.2 Storage of Documentation
First and foremost, best practice in the storage of documentary photographs involves not only multiple digital copies but also physical or 'analogue' copies. In order to discuss the specifics of each type of storage, they will be dealt with separately, but it is important to remember that both methods of storage must be used. The first thing that must be considered when storing digital data is the file format. The current standard is a TIFF file, with EXIF metadata attached, for uncompressed photographs. EXIF is a form of metadata that records information about the camera used to take the photograph as well as other technical metadata.
Images should be named in such a way that the content of the file is easily understood; naming should be standardised across the institution (Keene 1998). The image should be converted to a standardised colour profile. The AIC guide suggests a variety of colour profiles, including but not limited to Adobe RGB, which has been adopted as a standard by many museums. Another important consideration is the resolution of the image. A minimum resolution of 300 dpi (dots per inch) is required, with 600 dpi as the target resolution. The next consideration is how many digital copies to store. It is recommended that a high-resolution master file is maintained, unedited and uncompressed in TIFF format, while a lower-resolution, compressed JPEG is also maintained as an 'access copy' of the image. The master file should never be manipulated; only the 'access' JPEG or another copy of the original should be changed in any way (Gschwind et al. 2005). Maintenance undertaken on digital storage is just as important as the methods of storage, as it ensures against catastrophic failures and keeps archives in working order. The AIC guide recommends maintaining multiple backup copies of data in separate locations, and randomly surveying them to check for corruption. Backups of all images should be made at least weekly, ideally daily, on a rotation of four different hard drives, one per week. In addition to this, migration should be used to ensure that data remains readable. Every five years, data should be copied to new, up-to-date hardware (Ball 1998). Analogue storage is just as significant as digital storage in the preservation of digital images. It is important that photographs are printed on acid-free paper with archivally sound inks, from a printer that has been colour calibrated. Prints should have a minimum resolution of 300 dpi. Once printed, they should be stored in archival sleeves in order to protect the surface.
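The four-drive weekly rotation recommended above is easy to get wrong by hand. The sketch below shows one way to schedule it, assuming Python and hypothetical drive labels; the guide itself does not prescribe any particular tooling.

```python
import datetime

# Hypothetical drive labels for the AIC-style four-drive weekly rotation.
DRIVES = ["drive_A", "drive_B", "drive_C", "drive_D"]

def backup_drive_for(date: datetime.date, drives=DRIVES) -> str:
    """Return the drive to use for the backup made in the ISO week of `date`.

    One drive per week, cycling through the full set, so that a single
    drive failure costs at most one week of backup history.
    """
    iso_week = date.isocalendar()[1]  # 1..53
    return drives[iso_week % len(drives)]

# Four consecutive Mondays cycle through all four drives.
monday = datetime.date(2013, 9, 2)
for i in range(4):
    week = monday + datetime.timedelta(weeks=i)
    print(week, backup_drive_for(week))
```

Note that at a year boundary the ISO week number resets, so two adjacent weeks can occasionally map to the same drive; a production script would rotate on a running counter instead.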
2.3 Access to Documentation
Access to documentation is not well-defined within existing best practice. A central archive is significant, such that conservation images are not separated from other images and documentation of objects (Stevenson 2006). If one central database is not possible, photographs should be linked or associated in some way with all other records concerning an object (Keene 1998). These connections are not only important to ease of access, but to ensuring that no information or documentation regarding an object, once created, is lost.
2.4 Lifespan
One important thing that current best practice literature does not address, but must be established, is the lifespan required of the documentation produced by conservators. The best way to understand how the conservation record will be used in the future is to first understand how current conservators use old photographs. In order to better understand current usage, follow-up interviews were conducted with questionnaire respondents at two museums which have substantial photographic archives. The conservators both noted that, at the moment, the benefit that would come of finding something useful in the archive is outweighed by the cost in terms of the time it would take to find that resource, as the archives are all physical records. However, there is also a significant barrier in terms of time and economic cost to digitise all of the records and photography. That these conservators would use photos from 100 years ago, if access were easier and faster, allows us to extrapolate that conservators 100 years in the future will use photographs from the present day.
Consider, however, that 100 years encompasses almost the entire history of photography. It is therefore possible that the acceptable lifespan should in fact be longer, something that may be observed as the length of the history of photography approaches and passes the span of utility of a photograph. 'Best practice', then, should indicate a lengthier preservation goal, on the order of 500 years.
3. SURVEYING CONSERVATORS
The questionnaire was used to determine how conservators document their work in practice, and to understand how these practices differ from the 'best practice' suggested by various guidelines. This serves to illuminate which aspects of 'best practice' are, in fact, impractical for conservators, and potentially why these practices are troublesome. All of the conservators surveyed indicated that they were producing digital photographs as opposed to still using film photography, and 25 out of 30 indicated that they photograph at least 75% of objects. This confirms the initial hypothesis: digital photography has become the norm in conservation, creating an enormous archive of digital images across the field.
3.1 Storage of Photographs
Responding conservators were asked how and where photographs were stored, and whether or not analogue copies were kept as well as digital copies. Figure 1 compiles the responses to this question.
Figure 1. Breakdown of Storage Methods
For institutional conservators, the split between using only digital storage and keeping both physical and digital copies is almost even. However, some of the conservators who responded that they keep both physical and digital copies do not do so for every object that they treat. The selection process for these conservators is based on the scope of treatment or loan requirements. Records of minor treatments were noted by one conservator as being kept in digital format only, and two responses stated that records were only printed for loans.
Conservators in private practice largely favoured storage of records in solely digital formats, although some indicated that they would print records on request.
3.2 Maintenance of Storage
Given that all of the responding conservators use digital storage in some manner for their records, the maintenance of this storage is extremely important. As such, respondents were asked if and how their digital storage systems are maintained. Figure 2 shows the different maintenance techniques undertaken in institutions and private practice. Note with this chart that some conservators use more than one technique; for example, each conservator who listed migration as a technique also employs refreshing in conjunction.
Figure 2. Data maintenance practices of conservators in both private practice and within heritage institutions
Two private practice conservators responded that their form of maintenance for their digital storage is to store their photographs in two separate locations, on two separate hard drives. While this practice, known as doubling, is a safeguard against catastrophic loss, it is not really a form of maintenance: each separate digital storage area is subject to deterioration and obsolescence, and simple doubling does not prevent either of these (MacLean 1998).
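To make concrete why doubling without verification is risky: two copies can silently diverge through bit-level corruption, and only a periodic checksum comparison reveals whether they still agree. A minimal sketch in Python follows (the file paths would be the conservator's own; nothing here is prescribed by the cited guidelines):

```python
import hashlib

def sha256_of(path: str) -> str:
    """Checksum a file in 1 MB chunks, so large TIFF masters need not fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def copies_match(primary: str, duplicate: str) -> bool:
    """True if the two stored copies are still bit-identical."""
    return sha256_of(primary) == sha256_of(duplicate)
```

Storing the checksums alongside the images at backup time would also let a later survey tell which of two diverging copies still matches the original.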
Another problematic response, given by six institutional conservators, is that they are unaware of what maintenance is undertaken because it is all handled by their IT department. Together, 14 of the 30 conservators surveyed either undertake no maintenance on their digital storage, or are unaware of what is done by others, if anything.
3.3 Associated Information
Another aspect of storage is the connection made between documentation and other information, such as catalogues or older records. Conservators were asked what curatorial or other information is kept or associated with photographs. Responses to this question were textual and very varied, and as such are not shown in visual form here. The most common response was that photographs are associated in some way with the treatment record for the object. The methods of doing so ranged from naming the photograph file in a way that includes the treatment number to inserting images into textual reports. More than 65% of conservators surveyed directly associate their documentary photographs with the treatment record of the object. Some institutional conservators are able to do this very easily, and to include much more associated information, by putting photos into their institution's database. In most cases, this associates them not only with treatment records but with records from other departments, such as curatorial records. However, some conservators store photographs in their own folders, and label the images with the accession number, object number, or other museum identification number. While this means that records are stored apart from each other, the numbering system means that locating information related to the object is not particularly difficult.
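A naming scheme of the kind described above can be captured in a single helper so that it is applied consistently. The field order and separators below are purely illustrative assumptions, not a published standard:

```python
def photo_filename(accession: str, treatment: str, stage: str,
                   seq: int, ext: str = "tif") -> str:
    """Build a sortable file name that ties a photograph to its object
    (accession number) and its treatment record."""
    allowed = ("before", "during", "after")
    if stage not in allowed:
        raise ValueError(f"stage must be one of {allowed}")
    return f"{accession}_T{treatment}_{stage}_{seq:02d}.{ext}"

print(photo_filename("1984.0032", "2013-117", "before", 1))
# 1984.0032_T2013-117_before_01.tif
```

Because the accession number leads the name, files sort together with the object's other records even when they live in separate folders, which is the property the numbering systems described above rely on.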
3.4 Accessing Documentation
Conservation records are not created solely for the use of the conservator; at the very least, they provide a record of treatment for future conservators, but they also have the potential to be accessed by others in the heritage sector for a variety of reasons. The responding conservators were asked if other departments within their institution were given access, and if other institutions or outside persons were given access in any form. 7 of the 18 institutional conservators explained that they have an institution-wide database, which grants any member of staff access to these photographs. One conservator indicated that their institution has a database that is not only available to others within the museum, but is made available online to the general public. On the other end of the spectrum, 7 conservators said that photographs were rarely or never made accessible to anyone outside of conservation. Some of these same conservators also noted that they did not think other departments or institutions would be interested, but that they tend to grant access to photos if requested. Conservators in private practice indicated that fewer people were granted access to their records. For the most part, it was explained that due to client confidentiality and copyright, records are either never shared, or only shared with the owner's permission when deemed justifiable. It was not explained what reasons for access constituted 'justifiable'. The records of private conservators are divided into two categories: those records which stay with the conservator and those which stay with the object when it is returned to its owner, both of which are significant.
3.5 Issues Raised by Survey Responses
If we assume that conservators are aware of best practice, but are not currently following best practice guidelines routinely or consistently, then one must conclude that there are other factors preventing them from doing so.
The questionnaire results reveal that many conservators do not retain a physical copy; lack of time or economic resources could be the cause of this. The reasons for the general lack of proper maintenance of digital storage must also be considered in order to develop a methodology that is useful to conservators. If conservators do not know that data deteriorates, or that the lifespan of a hard drive is less than ten years, they will not be able to account for that and prevent it. There is also the potential that conservators do not see this as part of their job, as it is something handled by other departments in many institutions. Maintenance of digital storage also has resource costs, both economic and in terms of time spent. Economic costs come not only when the storage system is first created, in terms of buying the necessary hardware and software, but also as annual running costs. The amount of time spent on maintaining digital storage can also be costly: it is beneficial for data to be refreshed as often as daily, and even without such frequent intervention, the larger tasks of migration or emulation are particularly time-consuming. Smaller tasks such as colour calibration and formatting of files when moved from the camera to the computer can become very time-consuming as the quantity of data involved increases. Overall, the costs of maintenance and preservation of digital storage can be high, and it is understandable that conservators must make changes to 'best practice' in order to make their processes practical and applicable.
4. A NEW METHODOLOGY
The questionnaire revealed that conservators do not always engage in 'best practice' methods. It is for this reason that a consistent and practical methodology must be developed, one that is feasible for conservators. The goal remains the same: to ensure that conservation records, in particular digital records, are set up to be preserved for as long as possible.
This method must be 'practical' in that the cost of engaging in various aspects, in terms of time, money, or other resources, cannot outweigh the benefit gained. The 'consistent' aspect of this new methodology is significant as well. It is important to have some degree of standardisation across conservation as a profession. If this is accomplished, the future of the conservation record will not only be secured, but it will be ensured that the record does not have gaps.
4.1 Practical Documentation
Some conservators noted in the questionnaire that they do not take before and after photos of very minor treatments, or those that do not enact any visible change on the object. This is acceptable under the 'best practical' method, as it not only saves time and effort, but in the long run reduces the storage space necessary. It should be noted that this is a form of objective selection: instead of later choosing that these photographs do not need to be preserved, it is decided from the outset that they are unnecessary. In addition to this, it is largely unnecessary for conservators to colour calibrate their digital cameras. Colour standardisation can be reached to an acceptable degree through the use of colour targets, which along with scales must be included in every photograph. Conservators are also still expected to take before, during, and after-treatment photographs for the majority of treatments.
4.2 Modified Storage Methods
The first thing to consider with storage is whether it is cost-effective to use both digital and analogue storage.
Consider that the lifespan being aimed for, as determined by the case studies, is 500 years; hard drives must be replaced every 5 years, which results in 100 migrations and thousands of daily or weekly backups, following the 'best practice' methodology. Even with a hypothetical 1% failure rate per migration, the probability of a record surviving all 100 transfers is only about 37%; sooner or later the chain breaks and data is lost. Archivally sound paper, properly stored, can last for the full 500 years, making hard copies invaluable. Therefore even the 'best practical' method still involves both analogue and digital copies of documentation. Many conservators already acknowledge using this practice. Those who only use digital storage may need to invest in printers, ink, paper, and the like; however, the cost of this is clearly outweighed by the benefit of having a storage system that is stable on the same order of magnitude as the desired lifespan of documentation. The first step in the storage of digital photographs is selection for survival. Conservators must actively choose which photographs they think best represent an object before and after treatment. There is the risk that future conservators will not find useful the same things that current conservators do; however, there is also the risk that attempting to preserve too much will result in none of it surviving. One to two photographs each for before, during, and after treatment, for a total of three to five photographs, should suffice for almost all objects. Fewer are necessary for simpler treatments. Then there are specifics of digital storage that must be considered. A TIFF file with EXIF metadata remains the recommended file type. Although many conservators were unaware of what metadata is kept, metadata remains significant, and conservators should set their cameras to automatically capture and include EXIF data. The previous recommendations regarding resolution are still valid, requiring a minimum of 300 dpi.
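The fragility of the 100-step migration chain discussed above follows from simple compounding: if each migration independently succeeds with probability 0.99, survival of the whole chain is 0.99 raised to the 100th power. A quick check of the arithmetic (assumptions: independent failures, hypothetical 1% rate):

```python
def chain_survival(failure_rate: float, migrations: int) -> float:
    """Probability that a record survives every migration in the chain,
    assuming each migration fails independently."""
    return (1.0 - failure_rate) ** migrations

# 500 years / one migration every 5 years = 100 migrations.
p = chain_survival(0.01, 100)
print(f"{p:.1%}")  # about 36.6% -- most records would eventually be lost
```

This is why the paper pairs digital storage with archival paper, whose stability is on the same order as the 500-year goal.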
What is not required of conservators in the 'best practical' method is to keep multiple variations of their images for 'archive' and 'access', as this increases the amount of data storage needed. An 'access' copy should still be made if the conservator intends to manipulate the photo in any way. The benefit of having the original version to return to, if something goes wrong during manipulation or the copy is damaged, outweighs the cost of doing so. One of the major revelations of the questionnaire results is that not all guidelines will work for all conservators. Nowhere was this more evident than in the difference between the storage maintenance practices of conservators in large institutions and conservators in private practice or at small institutions. It is acceptable for institutional conservators to rely on their IT department for the maintenance of their data, if that option is available. However, an effort should be made to understand what maintenance the IT department performs. Ensuring the longevity of the conservation record is the conservator's responsibility, and while delegation is fine, it must be overseen. It is recommended that conservators with the benefit of an institutional archive make their own backups of records once every six months, backing up on a rotation of two hard drives. In the case of institutional loss, the conservator's archives will be no more than six months out of date, a very small loss when considered on the order of 100 years' preservation. On the other hand, conservators in private practice and those at smaller institutions must be more individually responsible for their records. It is recommended that private practice conservators establish a schedule for backing up to multiple hard drives in rotation, as suggested as 'best practice' by the AIC. However, backups can be made monthly rather than more frequently, reducing some of the time cost. Conservators should set aside the funding to purchase new hard drives every five to seven years.
These purchases allow the use of migration as a maintenance technique as well as refreshing, another layer of protection ensuring that their hardware does not become obsolete.
Figure 3. Table showing the previous best practice recommendations in comparison with the recommendations of the new methodology, for both institutional and private practice conservators.
Digital storage practices:
- File format: best practice, TIFF file format with EXIF metadata; new methodology (both groups), TIFF file format with automatically-captured EXIF metadata.
- Copies: best practice, 'access' and 'archival' versions of every photo; new methodology (both groups), 'access' copy only needed when manipulating images.
- Selection: best practice, apply procedures to all photos taken; new methodology (both groups), select for survival 3-5 photos per object, based on complexity of treatment and change in appearance.
Data maintenance practices:
- Backups: best practice, daily or weekly backups; institutional conservators, biannual backups plus an understanding of the institution's procedures; private practice, monthly backups.
- Rotation: best practice, 4 hard drives in rotation; new methodology, 3 hard drives in rotation.
- Hardware: best practice, replace HDD every 5 years; new methodology, replace HDD every 5-7 years.
4.3 Practical Use & Access
In general, institutional systems have already put into place the 'best practical' method for accessing photographs. Many are available at least across departments. However, these decisions are mostly up to the institutions, not the individual conservators. What conservators can do is promote the sharing of knowledge and information. Conservators should also ensure that the file system and record labelling associate objects with their records. For both institutional and private practice conservators, the importance of a structured filing system and a readily understandable, consistently applied naming system cannot be stressed enough. This not only ensures that files are not accidentally 'lost' within the drive or network, but ensures that the images and records are easily accessible for the creating conservator, their colleagues, and future conservators.
There are specific aspects to access that private practice conservators must consider. Their records are much more private than those of institutions, due to copyright and the owner's wishes, and this is acceptable. What private conservators must ensure is that clients are provided with not only digital but also hard copies of any documentation produced, such that the object treated is never separated from records about it.
5. CONCLUSIONS
Digital photography has provided conservators with the opportunity to quickly and easily document their work, and share that documentation with not only their fellow conservators, but other museum professionals and even sometimes with the public. Although it has proven to be an invaluable tool, it has been shown that there are many issues that come alongside digital media. Overall, this research has been successful in understanding how aware conservators are of these issues and how their practice reflects them. The questionnaire has provided insight into the practices of modern conservators, illuminating where they are successful in preserving their documentation and where there must be change. These changes come in the form of a methodology that is more appropriate for the needs of the modern conservator. This includes variation of the recommended practices for conservators in private practice or small institutions versus those in large institutions. The time between backups is greater for conservators who have the benefit of an institutional archive, though the importance of personal responsibility in the maintenance of records is stressed.
While this revised methodology lessens the burdens of „best practice‟ on finances and time, it does not sacrifice the longevity of conservation documentation. To answer a question posed at the very beginning of this paper, conservators have in fact begun to address the issues pertaining to digital media. Their practices are not perfect; there are aspects that could stand to be improved in order to ensure viability of digital resources. The future is not yet facing a „blank‟ in the history of conservation. As long as conservators continue to improve and work at maintaining their documentation, future conservators may never again face a span of time from which information will be lost. ACKNOWLEDGEMENTS This paper is based on a dissertation submitted in partial fulfilment of the requirements for the degree of MA in Principles of Conservation at the Institute of Archaeology, University College London in 2012. My utmost appreciation goes to the thirty conservators who responded to my questionnaire, and those who responded to follow-up questions. Without their cooperation and insight, this research would not have been possible. Sincere thanks are also due to my advisor, Dr John Merkel, Elizabeth Pye, James Hales, and Dean Sully for all of their assistance throughout this process. REFERENCES AIC, 1994. Code of Ethics & Guidelines for Practice. American Institute for Conservation, Washington D.C. Ball, S., Clark, S., and Winsor, P., 1998. The Care of Photographic Materials & Related Media. Museums & Galleries Commission, London. Carlston, D., 1998. Storing Knowledge. In: B. Davis and M. MacLean (eds.), Time & Bits: Managing Digital Continuity. The J. Paul Getty Trust, Los Angeles, pp. 21-31. de Polo, A., and Minelli, S., 2006. Digital Access to a Photographic Collection. In: L. MacDonald (ed.), Digital Heritage: Applying Digital Imaging to Cultural Heritage. Elsevier, Amsterdam, pp. 93-114. Deegan, M., and Tanner, S., 2006. Key Issues in Digital Preservation. In: M. 
work_ox6couolxja37g5yk2rcrjyqmq ----

Estimation of leaf area index in isolated trees with digital photography and its application to urban forestry

Urban Forestry & Urban Greening 14 (2015) 377–382 (Short communication)

Francesco Chianucci a,*, Nicola Puletti a, Elena Giacomello b, Andrea Cutini a, Piermaria Corona a
a Consiglio per la Ricerca in Agricoltura e l'Analisi dell'Economia Agraria, Forestry Research Centre, Arezzo, Italy
b Università IUAV di Venezia, Venice, Italy

Keywords: Image analysis; Leaf area index; Urban greening; Urban tree; Vegetation index

Abstract

Accurate estimates of leaf area index (L) are strongly required for modelling ecophysiological processes within urban forests. The majority of the methods available for estimating L are ideally applicable at stand scale and are therefore poorly suited to urban settings, where trees are typically sparse and isolated. In addition, accurate measurements in urban settings are hindered by the proximity of trees to infrastructure elements, which can strongly affect the accuracy of tree canopy analysis. In this study we tested whether digital photography can be used to obtain indirect estimates of L for isolated trees. The sampled species were Platanus orientalis, Liquidambar styraciflua and Juglans regia.
Upward-facing photography was used to estimate gap fraction and foliage clumping from images collected in unobstructed (open areas) and obstructed (nearby buildings) settings; two image classification methods provided accurate estimates of gap fraction, based on comparison with measurements obtained from a high-quality quantum sensor (LAI-2000). Leveled photography was used to characterize the leaf angle distribution of the examined tree species. L estimates obtained by combining the two photographic methods agreed well with direct L measurements obtained from harvesting. We conclude that digital photography is suitable for estimating leaf area in isolated urban trees, due to its simple, fast and cost-effective procedures. Use of vegetation indices significantly extends the applicability of the photographic method in urban settings, including green roofs and vertical greenery systems.

© 2015 Elsevier GmbH. All rights reserved.

Introduction

Green infrastructures are key components in the planning of sustainable cities. The fields of landscape ecology and urban ecology have emerged from research to become the primary promoters of ecological design in cities (Ong, 2003; Roy et al., 2012). Determining the role urban forests have in mitigating heat island effects, removing air pollutants and cooling buildings represents a central issue (Hardin and Jensen, 2007; Nowak et al., 2006). The influence of urban forests and individual tree species on carbon cycling and greenhouse gas reduction represents another active research topic (Manes et al., 2012). Specific surveys on urban forests have been carried out to expand the domain of multipurpose forest inventories (e.g., Corona et al., 2012).

* Corresponding author. Tel.: +39 0575 353021; fax: +39 0575 353490. E-mail addresses: francesco.chianucci@entecra.it, fchianucci@gmail.com (F. Chianucci).

The ability to estimate leaf area index (L) is essential to accurately model these physiological processes.
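As a purely illustrative sketch of the indirect approach this paper builds on, the Beer–Lambert gap-fraction model presented below (Eq. (1)) can be inverted to recover a plant area index from a measured gap fraction. The numbers, the spherical-leaf-angle assumption G = 0.5, and the function name here are illustrative assumptions, not values from the study:

```python
import math

def plant_area_index(gap_fraction, zenith_deg, G=0.5, clumping=1.0):
    """Invert the gap-fraction model P(theta) = exp(-G * Omega * Lt / cos(theta))
    for the plant area index: Lt = -cos(theta) * ln(P) / (G * Omega).

    G = 0.5 corresponds to a spherical leaf angle distribution (an assumption
    for illustration only); clumping (Omega) = 1 means randomly dispersed foliage.
    """
    theta = math.radians(zenith_deg)
    return -math.cos(theta) * math.log(gap_fraction) / (G * clumping)

# Illustrative numbers: a 30% gap fraction measured near the 57.5 degree
# "hinge" zenith angle, where G is close to 0.5 for most leaf angle distributions.
print(round(plant_area_index(0.30, 57.5), 2))
```

Applied at several zenith angles with a measured clumping index, such an inversion is the core of the indirect optical methods reviewed in this Introduction; note that Lt is plant (not leaf) area index, so woody contributions still have to be removed to obtain L.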
The amount of leaf area is directly related to the pollutant interception and emission rates of individual tree species (Nowak, 1994). L also influences rainfall storage and its effect on reducing water runoff (Xiao et al., 2000). L is also used as a main input variable in building energy balance simulations (Sailor, 2008). Among the various existing techniques, indirect optical methods have been widely used to estimate L from measurements of radiation transmittance through the canopy (for a review, see Jonckheere et al., 2004). The Beer–Lambert law has often been used to model canopy transmittance (Eq. (1), based on Nilson, 1971) as

P(θ) = exp(−G(θ) Ω(θ) Lt / cos θ)    (1)

where P is the canopy gap fraction, G is the foliage projection function, which is related to the foliage angle distribution, Ω is the foliage clumping index at zenith angle θ, and Lt is the plant area index.

http://dx.doi.org/10.1016/j.ufug.2015.04.001

work_oxahoyaoozcz3ngm52bvnortcy ----

Imaging Ejecta and Debris Cloud Behavior Using Laser Side-Lighting

Procedia Engineering 58 (2013) 363–368. doi: 10.1016/j.proeng.2013.05.041
© 2013 The Authors. Published by Elsevier Ltd. Selection and peer-review under responsibility of the Hypervelocity Impact Society.
The 12th Hypervelocity Impact Symposium

J.M. Mihaly a,*, A.J. Rosakis a, M.A. Adams a, J.T.
Tandy a

a Graduate Aerospace Laboratories, California Institute of Technology, Pasadena, CA, United States

Abstract

The Caltech Small Particle Hypervelocity Impact Range (SPHIR Facility) utilizes a two-stage, light gas gun to accelerate Nylon 6/6 right cylinders (d = 1.8 mm, L/D = 1, 5.5 mg) and spheres (d = 1.8 mm, 3.6 mg) to impact speeds of 5 km/s and above. The projectiles impact aluminum 6061-T6 plate targets. An optical technique was employed to produce images of the hypervelocity impact event with short exposure times (20 ns) and short inter-frame times. The illumination is directed orthogonal to the projectile flight direction to provide a series of shadowgraph images of the impact on the target. An expanded beam from a 532 nm continuous-wave laser is used as the illumination source. The beam is expanded to illuminate a 10 cm diameter area and is then directed to a gated, intensified high-speed CCD camera. Ejecta in front of the target and the debris cloud created behind the target are simultaneously imaged with this system. An edge-finding algorithm has been developed to provide a consistent method for identifying the position of the debris front in sequential images. This technique enables a regular method to investigate the debris cloud evolution and to characterize its asymmetrical features. Furthermore, with the Laser Side-Lighting system, atmospheric waves emanating from the impact site are also visible. Increasing the atmospheric pressure in the target chamber (above the nominal 1.5 Torr) significantly increases the observable features of these shock waves. The behaviour of these waves provides an improved understanding of the temporal sequence of the impact phenomena.
Keywords: High-Speed Imaging, Laser Shadowgraph, Debris Cloud, Ejecta

Nomenclature
SPHIR  Small Particle Hypervelocity Impact Range
LSL    Laser Side-Lighting
d      Impactor diameter, mm
h      Target plate thickness, mm
v      Impact speed, km/s

1. Introduction

Damage from the hypervelocity impact of meteoroids and orbital debris (MOD) poses a serious and growing threat to spacecraft. Historically, the cost and complexity of hypervelocity impact experiments have been a limiting factor in the study of hypervelocity impact phenomenology. The aerospace community would benefit greatly from a low-cost facility capable of providing a mass-velocity analog to the MOD threat. Such a facility, with in situ measurement capabilities enabling quantitative characterization of impact phenomena such as debris cloud formation and evolution, would strongly support the development of foundational knowledge and promote advances in spacecraft shield design.

* Corresponding author. Tel.: 1-626-395-3664. E-mail address: jmmihaly@caltech.edu
Open access under CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/3.0/).

1.1 Experimental Facility

To address the inherent challenges in understanding hypervelocity impact phenomenology, the California Institute of Technology, in a joint effort with NASA Jet Propulsion Laboratory, has established the Small Particle Hypervelocity Impact Range (SPHIR). The light-gas gun system used in this facility was designed, developed and fabricated by engineers at the Southwest Research Institute [1] and installed at Caltech in 2006.
This facility [2] utilizes a two-stage light-gas gun with a 1.8 mm bore diameter launch tube capable of producing mass-dependent velocities ranging from 2 to 10 km/s. Most commonly, the facility is used to launch 5.5 mg Nylon 6/6 right cylinders (L/D = 1) to impact speeds between 5 and 7 km/s. The facility has also been used to accelerate 3.6 mg Nylon 6/6 spheres and 22.7 mg 440C steel spheres to impact speeds ranging from 5 to 6 km/s and 2 to 3 km/s, respectively. The impactors are launched, without a sabot, into a 1 m x 1 m x 2 m target chamber which is evacuated to a selectable pressure between 1 and 50 Torr. The impactor velocity is measured using a Photron SA-1 FASTCAM high-speed camera [2]. Typically, this camera is operated at 150,000 frames per second. When the impactor is traveling faster than 3 km/s, the low-pressure atmosphere (as low as 1 Torr) in the evacuated target chamber is ionized, forming a sheath surrounding and trailing the impactor. This hot plasma sheath radiates sufficient light to enable high-speed imaging of the projectile location by self-illumination. Measurement of the position of the impactor ahead of the target and the time of flight to the target enables determination of the impact speed. The continuously recording, "looping" camera is triggered by the target impact flash, which is detected with a photodiode.

2. Laser Side-Lighting System

Historically, flash x-ray systems have been used [3-5] to observe and analyse the evolution of debris clouds. High-speed photography has also provided an alternative for imaging hypervelocity impact debris formation [6]. Advances in modern digital photography have improved both the quality and utility of high-speed photography as a method to study debris phenomena in hypervelocity impact experiments [7]. Such digital photography systems [8] typically utilize flash lamps to provide diffuse white light as an illumination source.
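The impact-speed measurement described in section 1.1 (projectile positions tracked across successive 150,000 fps frames, i.e. distance over time of flight) reduces to simple arithmetic. Before turning to the LSL system itself, a minimal sketch; the centroid positions below are invented for illustration, not facility data:

```python
def impact_speed_km_s(positions_mm, fps):
    """Mean projectile speed from centroid positions (mm) measured in
    consecutive high-speed camera frames at a fixed frame rate (frames/s)."""
    dt_us = 1e6 / fps  # inter-frame time in microseconds
    steps = [b - a for a, b in zip(positions_mm, positions_mm[1:])]
    mean_step_mm = sum(steps) / len(steps)
    return mean_step_mm / dt_us  # mm per microsecond equals km/s

# Invented example: the centroid advances ~36.7 mm per frame at 150,000 fps,
# i.e. roughly 5.5 km/s, typical of the Nylon cylinder shots described above.
print(round(impact_speed_km_s([0.0, 36.7, 73.4], 150_000), 2))
```

Averaging over several frame-to-frame steps, as here, reduces the sensitivity of the estimate to the centroid-location error in any single frame.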
Coherent laser light has also recently been implemented as the basis for diagnostics used in the study of ejecta [9] and debris [10] phenomena. The Laser Side-Lighting (LSL) system described herein, which utilizes monochromatic, collimated, coherent light from a continuously operated laser, provides several advantages over other imaging techniques used to observe hypervelocity impact events. The use of collimated light not only enables high-speed imaging of the event with low exposure times (15 ns), but is capable of doing so with relatively low illumination intensities (600 mW). The laser side-lighting system with an enclosed free beam provides several operational benefits vis-à-vis flash lamp and flash x-ray systems. The continuous operation of the laser produces a relatively constant illumination source throughout the entire period of the experiment, offering reduced operational complexity compared to pulsed laser photography systems [11]. Furthermore, given that the illumination source can be turned on before an experiment, this system is advantageous for facilities without a reliable method to pre-trigger the illumination source; it therefore reduces the complexity of triggering the instrumentation during the experiment. The use of coherent light allows the LSL system, with small modification, to be used for several interferometric techniques, such as schlieren imaging [12] and Coherent Gradient Sensing [13], to measure the impact phenomena. Lastly, the use of directed (collimated), monochromatic light does not interfere with any simultaneous spectroscopic measurements of the impact event during experiments.

2.1 Hardware

The Laser Side-Lighting System produces side-profile shadowgraphs using a Cordin 214-8 gated, intensified CCD camera. The Cordin camera contains 4 double-exposed CCD sensors to provide 8 images with 1000 x 1000 pixel resolution.
The camera is capable of providing exposure and inter-frame times as low as 10 ns. The second exposure recorded on a given CCD must wait for the micro-channel plate (MCP) intensifier to reset. However, four consecutive images may be obtained by using each of the four CCDs once; this mode enables a maximum framing rate of 50 x 10^6 fps. A Coherent Verdi V6 diode-pumped solid-state laser is used to provide 532 nm wavelength (continuous wave) light as the illumination source. The laser beam is expanded to a 10 cm diameter collimated beam using two Keplerian beam expanders and then directed into the target tank. A mirror is used to steer the laser illumination towards the imaging optics; the laser itself is capable of producing 6 W of illumination power. Given that only a small fraction (approximately 10%) of the potential laser power is required to provide sufficient illumination intensity, a more uniform intensity can be delivered to the high-speed camera. This is achieved through over-expansion of the beam before collimation in the second Keplerian beam expander.

2.2 System Setup and Specifications

The laser illumination provided by the Verdi V6 laser is delivered into the target tank orthogonal to the impactor velocity vector. Figure 1 provides a conceptual illustration of the LSL system setup. The LSL system therefore provides a series of shadowgraph images with a perspective similar to the flash x-ray technique used by Piekutowski [4]. The primary distinction of this method from flash x-ray is that the shadowgraphs generated by this method are produced by absorption of the laser illumination by debris particles and subsequent interference of the coherent light. Constructive interference of the collimated laser source is created by gradients in the index of refraction corresponding to gradients in density, pressure, and temperature of the atmosphere surrounding the debris.
Image Analysis The LSL system has been used to observe d = 1.8 mm diameter Nylon 6/6 right cylinders (L/d = 1) impacting 6061-T6 aluminum target plates at speeds between 5 and 7 km/s. A characteristic result for a h = 1.5 mm thick aluminum target impacted at 5.54 km/s is provided in Figure 2. As shown, the formation of ejecta in front of the target and debris behind the target can be observed up-range (left) and down-range (right) from the target, respectively. The ejecta up-range of the target is consistently observed to be highly asymmetric, which is believed to be a consequence of impactor geometry and impactor tumbling. Note that the time-stamps labelling each image refer to the time from triggering. Diffraction patterns (large bullseye rings) in the background illumination are present in all of the images shown below. These artifacts are produced by internal reflections in optics inside the Cordin camera, which lack anti-reflection coatings. For the hardware currently presented, these aberrations are unavoidable, differ slightly on each CCD and typically have minimal effect on the image processing. However, in principle these aberrations can be eliminated in future LSL systems with the use of anti-reflection lenses throughout the imaging system. Fig.2. A h = 1.5 mm 6061-T6 aluminum target plate impacted at 5.5 km/s by a d = 1.8 mm Nylon 6/6 right cylinder at 0 degrees target obliquity. The impactor travelled from left to right and the target chamber pressure is 1.2 Torr. 3.1 Debris Cloud Measurement An edge-finding algorithm has been developed to identify the boundary of the debris front in each image of the earlier images. The algorithm analyzes grayscale data from each image along lines in the longitudinal z-direction (constant y). Along each line, grayscale values are considered as a moving average. The moving standard deviation and gradient of this moving average of grayscale is then computed. 
The boundary is then identified at points where the gradient is large and positive and the standard deviation is a maximumm . Note that in the convention used here, a white pixel has a grayscale of 255. Figure 3 provides an example of this algorithm and edge-finding result with an illustration of the defined boundary and corresponding grayscale analysis. The uncertainty of the debris-front position can then be quantified based on the length of the transition region of grayscale and standard deviation. In the example shown in Figure 3, this would correspond to an error of approximately 5 pixels or less resulting in an accuracy of ± 0.3 mm. This methodology provides a consistent way to define debris-front position in a single image. The method is applied to each image in the sequence of images obtained during an experiment. An example result is shown in Figure 4, which illustrates a measured debris front in one image while plotting the positions of debris fronts taken from the following two images (taken 0.9 and 1.9 micro-seconds later). This data can be used to measure the longitudinal speed of the debris produced as a function of radial distance from the impact site. Measurement of the speed of debris thrown from the target early in the formation of the debris cloud provides one metric for comparison and validation of numerical simulations [14]. Furthermore, this method provides a regular methodology to quantify asymmetrical debris front velocities in each experiment. Given the described accuracy of the debris position measurement and excellent temporal uncertainty of the Cordin camera (less than 10 nanoseconds), measurement of the debris velocity is computed with an accuracy of ± 0.1 km/s. In the example presented in Figure 4, a 6.3 km/s impact of a 1.8 mm Nylon right- cylinder (L/D=1) on a 1.5 mm thick 6061-T6 aluminum plate produces an initial debris front velocity (separate but collinear with the impactor velocity vector) of 1.8 km/s. 367 J.M. Mihaly et al. 
Fig. 3. (a) Enlarged view of the image from Figure 2. The blue dot is the boundary identified by the described edge-finding algorithm. Grayscale values along the red line are analyzed in the adjacent image. (b) The moving average of the grayscale is plotted in red. The pink line is the moving standard deviation of the grayscale and the blue line is the gradient of the grayscale.

Fig. 4. (a) The debris-front is identified (in red) for the image shown using the described edge-finding algorithm. The debris-fronts identified in the subsequent images (0.9 and 1.9 microseconds later) are painted in blue and magenta. (b) The positions of the identified debris fronts are plotted with respect to the impact location.

3.2 High-Pressure Measurements

The use of collimated, coherent light in the LSL system significantly increases the observable impact features in experiments where the atmospheric pressure in the target chamber is increased above the nominal 1 Torr. At higher pressures, waves emanating from the impact site are visible, much as they are in schlieren shadowgraphs [12]. The observation of these phenomena is enabled by strong gradients in the index of refraction of the rarefied atmosphere constructively interfering with the coherent light source. An example of this observation is provided in Figure 5 for a h = 0.5 mm plate impacted at 4.7 km/s in a 50 Torr atmosphere. Slight defocussing of the imaging system improves the contrast of the observed shock as a result of caustic effects. Measurement of these waves can enhance understanding of the temporal sequence of the impact phenomena.

Fig. 5. A h = 0.5 mm 6061-T6 aluminum target plate impacted at 4.7 km/s by a d = 1.8 mm Nylon 6/6 right cylinder at 0 degrees target obliquity.
The impactor travelled from left to right and the target chamber pressure is 50 Torr.

3.3 Future Work

The LSL technique is currently being utilized in an extended series of experiments to characterize debris cloud and ejecta behaviour and other impact phenomena produced by d = 1.8 mm Nylon 6/6 right cylinders impacting 6061-T6 aluminum plates at impact speeds between 5 and 7 km/s. The campaign features significant replication of experiments with identical test conditions in order to characterize the variation inherent in the impact phenomena. Aluminum target plate thicknesses have been selected to provide a variety of plate deformation and perforation mechanisms. Previous work [15] at the SPHIR facility involved uncertainty quantification and validation of numerical models simulating high-speed impact. Measurements obtained using the LSL system for Nylon cylinders impacting aluminum plates will be implemented in a similar campaign comparing results to those of a numerical simulation [14]. Additionally, LSL results will be complemented by a suite of in situ and post mortem optical diagnostics. Such diagnostics will relate results to the post mortem damage of the target and help describe the content, scale and lethality of the debris cloud.

Acknowledgements

This material is based upon work supported by the Department of Energy National Nuclear Security Administration under Award Number DE-FC52-08NA28613. The authors would also like to thank Mike Mello for his assistance with the optomechanical design of the LSL system and Petros Arakelian for his assistance in installing the optical benches and safety features.

References
[1] Grosch, D.J., Riegel, J.P., 1993. Development and optimization of a "micro" two-stage light-gas gun, International Journal of Impact Engineering 14, p. 315.
[2] Mihaly, J.M., Lamberson, L.E., Adams, M.A., Rosakis, A.J., A low cost, small bore light-gas gun facility, Proceedings of the 11th Hypervelocity Impact Symposium, p. 675-686.
[3] Friend, W.H., Murphy, C.L., Gough, P.S., Review of meteoroid-bumper interaction studies at McGill University. NASA CR-54858.
[4] Piekutowski, A.J., Characteristics of Debris Clouds Produced by Hypervelocity Impact of Aluminum Spheres with Thin Aluminum Plates, International Journal of Impact Engineering 14, p. 573.
[5] Piekutowski, A.J., 1987. Debris Clouds Generated by Hypervelocity Impact of Cylindrical Projectiles with Thin Aluminum Plates, International Journal of Impact Engineering 5, p. 509.
[6] Kassel, P.C., DiBattista, J.D., 1971. An ultra-high-speed photographic system for investigating hypervelocity impact phenomena. NASA TN D-6128.
[7] Francesconi, A., Pavarin, D., Giacomuzzo, C., Angrilli, F., 2006. Impact experiments on low-temperature bumpers, International Journal of Impact Engineering 33, p. 264-272.
[8] Putzar, R., Schaefer, F., Lambert, M., Vulnerability of spacecraft harnesses to hypervelocity impacts, International Journal of Impact Engineering 35, p. 1728-1734.
[9] Lunar and Planetary Science Conference.
[10] Zhang, Q., Chen, Y., Huang, F., Long, R., 2008. Experimental Study on Expansion Characteristics of Debris Clouds Produced by Oblique Hypervelocity Impact of LY12 Aluminum Projectiles with Thin LY12 Aluminum Plates, International Journal of Impact Engineering 35, p. 1884.
[11] Isbell, W.M., 1987. Historical overview of hypervelocity impact diagnostic technology. International Journal of Impact Engineering 5, p. 389-410.
[12] Settles, G.S., 2001. Schlieren and shadowgraph techniques: Visualizing phenomena in transparent media, Berlin: Springer-Verlag.
[13] Rosakis, A.J., 1993. Two Optical Techniques Sensitive to Gradients of Optical Path Difference: The Method of Caustics and the Coherent Gradient Sensor (CGS), in Experimental Techniques in Fracture, p. 327.
[14] Optimal Transportation Meshfree (OTM) simulations of hypervelocity impact, Hypervelocity Impact Symposium.
[15] M. Adams, A. Lashgari, B. Li, M. McKerns, J. Mihaly, M. Ortiz, H.
Owhadi, A.J. Rosakis, M. Stalzer, T.J. Sullivan, Rigorous model-based uncertainty quantification with application to terminal ballistics - Part II. Systems with uncontrollable inputs and large scatter, Journal of the Mechanics and Physics of Solids, ISSN 0022-5096.

work_oydlukuhhnagbjsfacswbybhmu ----

The transmembrane chemokines CXCL16 and CX3CL1 and their receptors are expressed in human meningiomas.
G. Li, K. Hattermann, R. Mentlein, H. Mehdorn, J. Held-Feindt. Oncology Reports 29(2), 2013, pp. 563-570. DOI: 10.3892/or.2012.2164.

Abstract: Meningiomas are common, slowly growing benign tumors; however, anaplastic meningiomas have an aggressive biological and clinical behavior associated with high rates of recurrence and unfavorable prognosis.
Since the molecular mechanisms involved in the progression of meningiomas are not yet fully understood, and recent investigations have suggested a possible role of chemokines in tumor biology, the aim of the study was to investigate the…

work_ozmttazhnvfnpko35zjltnw6oe ---- CSDL | IEEE Computer Society

work_p2lqc5vfvbhd3nyyehdmg5eate ---- Journal of Electronic Imaging 13(2), 264–277 (April 2004). Adaptive hybrid mean and median filtering of high-ISO long-exposure sensor noise for digital photography. Tamer Rabie, UAE University, College of Information Technology, P.O. Box 15551, Al-Ain, United Arab Emirates. E-mail: tamer@cs.toronto.edu

Abstract. This paper presents a new methodology for the reduction of sensor noise from images acquired using digital cameras at high-ISO (International Organization for Standardization) and long-exposure settings.
The problem lies in the fact that the algorithm must deal with hardware-related noise that affects certain color channels more than others and is thus nonuniform over all color channels. A new adaptive center-weighted hybrid mean and median filter is formulated and used within a novel optimal-size windowing framework to reduce the effects of two types of sensor noise, namely blue-channel noise and JPEG blocking artifacts, common in high-ISO digital camera images. A third type of digital camera noise that affects long-exposure images and causes a type of sensor noise commonly known as "stuck-pixel" noise is dealt with by preprocessing the image with a new stuck-pixel prefilter formulation. Experimental results are presented with an analysis of the performance of the various filters in comparison with other standard noise reduction filters. © 2004 SPIE and IS&T. [DOI: 10.1117/1.1668279]

Paper 03-014 received Feb. 3, 2003; revised manuscript received Apr. 30, 2003 and Sep. 11, 2003; accepted for publication Sep. 15, 2003. 1017-9909/2004/$15.00 © 2004 SPIE and IS&T.

1 Introduction

With the advent of the inexpensive charge-coupled device (CCD) on a chip (Fig. 1), the widespread move from traditional 35 mm film photography to digital photography is becoming increasingly apparent, especially among journalists and professional photographers. This has prompted digital camera manufacturers to try to implement most of the legacy techniques common among traditional film cameras, such as high-ISO film, long exposures, and high-speed shutters, in digital cameras. One technique that is of utmost importance to a large community of photographers is the digital camera equivalent of the traditional high-speed silver-based film sensitivity, commonly known as the ISO sensitivity number. An ISO number that appears on regular camera film packages specifies the speed, or sensitivity, of that type of silver-based film. The higher the number, the "faster" or more sensitive the film is to light. Typical ISO speeds for silver-based film include 100, 200, or 400. Each doubling of the ISO number indicates a doubling in film speed, so each of these films is twice as fast as the next fastest.

264 / Journal of Electronic Imaging / April 2004 / Vol. 13(2)

Image sensors used in digital cameras are also rated using equivalent ISO numbers. Just as with film, an image sensor with a lower ISO needs more light for a good exposure than one with a higher ISO. In poorly lit conditions, a longer exposure of the image sensor is needed for more light to enter. This, however, will lead to the acquired images being blurred, unless the scene being imaged is completely still. It is, therefore, better to set the image sensor to a higher ISO setting, because this will enhance freezing of scene motion and shooting in low light. Typically, digital image sensor ISOs range from 100 (fairly slow) to 3200 or higher (very fast). Some digital cameras have more than one ISO rating. In low-light situations, the sensor's ISO can be increased by amplifying the image sensor's signal (increasing its gain). Some cameras even increase the gain automatically. This not only increases the sensor's sensitivity, but, unfortunately, also increases the noise or "grain," thus generating images that are contaminated with random noise effects.

2 Sensor Noise Types

Noise can be summarized as the visible effects of an electronic error (or interference) in the final image from a digital camera. Noise is a function of how well the image sensor and digital signal processing systems inside the digital camera are prone to, and can cope with or remove, these errors (or interference). Noise significantly degrades the image quality and increases the difficulty of discriminating fine details in the image. It also complicates further image processing, such as image segmentation and edge detection.
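The degradations this section goes on to catalog can be mimicked in a few lines. The sketch below is our own illustration, not code from the paper: it adds zero-mean Gaussian noise whose standard deviation differs per channel (blue amplified most, matching the channel-gain behavior described later), plus occasional saturated "stuck" pixels; the function name and all numeric values are illustrative assumptions.

```python
# Illustrative simulation (ours, not from the paper) of two sensor
# degradations: additive zero-mean Gaussian noise whose power grows
# with channel gain (blue amplified most), and saturated "stuck"
# pixels caused by long exposures.
import random

def degrade(pixel, gain_sigma=(4.0, 2.0, 8.0), stuck_prob=0.001):
    """pixel: (r, g, b) in 0..255; gain_sigma: assumed per-channel
    noise standard deviations (blue largest)."""
    if random.random() < stuck_prob:          # stuck pixel: saturate
        return (255, 255, 255)
    noisy = [c + random.gauss(0.0, s) for c, s in zip(pixel, gain_sigma)]
    return tuple(min(255, max(0, round(c))) for c in noisy)

random.seed(1)
print(degrade((120, 130, 110)))
```

Averaging many such degraded frames of the same scene would recover the clean pixel for the Gaussian component, but not for stuck pixels, which is why the paper treats them with a separate prefilter.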
The type of high-ISO sensor noise produced by a typical digital camera CCD imaging sensor can be modeled as an additive white Gaussian distribution with zero mean and a variance (noise power) proportionate to the amount of amplification applied to the image sensor's signal to boost its gain.1-3

Visible noise in a digital image is often affected by temperature (high worse, low better) and ISO sensitivity (high worse, low better). Some cameras exhibit almost no noise, and some a lot, all the time. It has certainly been the challenge of digital camera developers to reduce noise and produce a "cleaner" image, and indeed some recent digital cameras are improving this situation greatly, allowing higher and higher ISOs to be used without too much noise. In general, image artifacts produced by digital cameras can be divided into three types.

Fig. 1 The CCD imaging sensor chip used in the "Kodak Professional DCS720x" digital camera (courtesy http://www.dpreview.com/).

• Stuck-pixel noise: also known as impulse-type noise, this is created by many digital cameras and is caused by long exposure times of the CCD elements in dim lighting conditions, when a bright image is required without the use of the flash. During the exposure, some CCD cells become saturated and stuck at a bright color, and show up in the acquired image as bright impulse-noise pixels [see Fig. 2(a)]. Removing this type of noise is usually not a difficult task, but comes at the cost of blurring the other, uncorrupted pixels.

• Blue-channel noise: a common problem in digital photographs, especially those created by high-ISO professional news and sports digital cameras. A typical digital camera sensor [CCD or complementary metal-oxide-semiconductor (CMOS)] is more sensitive to certain primary colors than others (often sensors are less sensitive to blue light), so to compensate, these channels are amplified more than the others [see Fig. 2(d)]. This noise has hampered the acceptance of digital cameras for quality reasons, and it has limited their use for some techniques such as automasking based on chrominance (e.g., "blue screen" backgrounds).

• JPEG artifacts: also known as JPEG blocking artifacts, these are due to the 8-by-8 block size used by the JPEG compression standard. High compression ratios result in images with blockiness in the blue and red channels. These blocks are especially obvious in the flat areas of an image. In high-detail areas, artifacts called "mosquito noise" become noticeable. This term comes from the ripple effect that mosquitoes make when their legs touch water.

The sensor noise filtering techniques that will be described shortly present a solution to these types of artifacts, which are common in most color images acquired by digital cameras.

3 Background of Noise Reduction Techniques

The primary concern in digital photography is the visual fidelity of the acquired images. What professional photographers demand from a digital camera is fast and precise image acquisition (high sensitivity in low-light conditions and exact-moment capture) coupled with the best visual results. Previous noise reduction techniques available in the literature do not take into account the physics of the CCD photo-capture element used in the majority of video and digital still cameras today. The CCD image sensor tends to be highly sensitive to the green light frequencies and less sensitive to the blue and red light waves. The CCD hardware controller is thus tuned to increase the signal gain of the blue CCD elements more than that of the green elements.
In normal and good lighting conditions there are no visible effects due to this difference in signal gain, but in low-light conditions, where the CCD signal gain is increased more for the less sensitive color channels, this produces high-frequency noise that contaminates the blue channel, and, to a lesser extent, the red channel, more severely than the green channel. In general, the chrominance channels of the acquired images will be more severely affected by this noise than the luminance channel of the image, as shown in Fig. 3. The resulting effect is the visibility of random noise artifacts in the acquired image that differ in severity from acceptable (at low-ISO settings < ISO 400) to completely contaminating the picture (at very high ISO settings > ISO 2000), such that it becomes visually unacceptable. More details on separating a color image into luminance (brightness) and chrominance (color) channels will be presented in Sec. 4.

Fig. 2 (a) Long-exposure stuck-pixel noise (courtesy http://www.dpreview.com/), (b) red channel, (c) green channel, and (d) blue channel showing excessive noise.

Fig. 3 An image acquired in low light at an ISO-400 setting with a Minolta Dimage digital camera, then separated into its component channels using the L*a*b* color space to show the severity of noise in the chrominance a and b channels as compared to the luminance channel, due to the limitations of the CCD image sensor.

Statistical characteristics of images are of fundamental importance in many areas of image processing. Incorporation of a priori statistical knowledge of spatial correlation in an image, in essence, can lead to considerable improvement in many image processing algorithms. For noise filtering, the well-known Wiener filter for minimum mean-squared error (MMSE)
estimation is derived from a measure or an estimate of the power spectrum of the image, as well as the transfer function of the spatial degradation phenomenon and the noise power spectrum.4 Unfortunately, the Wiener filter is designed under the assumption of wide-sense stationary signal and noise. Although the stationarity assumption for additive, zero-mean, white Gaussian noise is valid in most cases, it is not reasonable for most realistic images, apart from the uninteresting case of uniformly gray image fields. What this means in the case of the Wiener filter is that we will experience uniform filtering throughout the image, with no allowance for changes between edges and flat regions, resulting in unacceptable blurring of high-frequency detail across edges and inadequate filtering of noise in relatively flat areas. Noise reduction filters have been designed in the past with this stationarity assumption. These have the effect of removing noise at the expense of signal structure. Examples such as the fixed-window Wiener filter and the fixed-window mean and median filters have been the standard in noise smoothing for the past two decades.4-6 These filters typically smooth out the noise, but destroy the high-frequency structure of the image in the process. This is mainly due to the fact that these filters treat the fixed-window region as having sample points that are stationary (belonging to the same statistical ensemble). For natural scenes, any given part of the image generally differs sufficiently from the other parts that the stationarity assumption over the entire image, or even inside a fixed-window region, is not generally valid.
Newer adaptive Wiener filtering techniques that take into account the nonstationary nature of most realistic images have been used as an alternative, to preserve signal structure as much as possible.7 Many, however, do this at the expense of proper noise reduction: the high-frequency areas will be insufficiently filtered, which results in a large amount of high-amplitude noise remaining around edges in the image. Another shortcoming is the failure of these filters to remove stuck-pixel (impulse-type) noise that appears in the acquired images due to long CCD exposure times in dim light. A fixed-window median filter will remove this type of impulse noise, but will also alter important signal structure, due to the same assumption that the image samples in the fixed window can be modeled by a stationary random field, which is not valid for a fixed window that cannot inherently differentiate between edge and flat image regions.8 Another shortcoming of many noise filtering techniques that deal with color digital images is the application of the same filter evenly to the three color channels (R,G,B),9,10 under the assumption that the sensor noise is equally distributed among the three color channels, which is an erroneous assumption, as explained at the start of this section. To the author's knowledge, the issue of high-ISO noise reduction for digital cameras has not received much attention in the literature. Although current work in the literature on adaptive noise reduction filters (such as Smolka et al.11 and Eng and Ma12) may be used to reduce high-ISO digital camera noise, these filters have been developed
The result is either insufficient noise reduction in the chrominance channels, or too much smoothing in the luminance channel of the fil- tered image. Also, many filters only deal with gray-level images6,8,13–17 and, as such, are not very effective for the type of images acquired by the digital cameras discussed in this work. We will compare one of the commonly used adaptive spatial noise reduction filters, namely the adaptive local statistic MMSE filter, with our work to show the ef- fectiveness of our technique in producing visually superior Rabie filtered images that can directly be used for further analysis, such as edge detection and image understanding. Digital camera manufacturers are only now beginning to realize the importance of incorporating noise reduction fil- ters in the hardware image acquisition pipeline for their digital cameras. To the author’s knowledge, the only known digital camera to actually attempt to incorporate a noise reduction algorithm is the Kodak Professional DCS720x digital camera.18 Kodak rates the camera as ‘‘calibrated’’ up to ISO 4000 and capable of ISO 6400. This means that shooting in extremely low light/high shutter speed condi- tions is possible with this camera, but at the expense of increased noise in the acquired images. With noise reduc- tion activated the images are acquired with much less noise and are more visually pleasing. Images from this camera are used as a comparison with the techniques described in this paper. In the next sections, a new adaptive technique is de- scribed that is highly tuned to produce visually pleasing filtered color digital images that have been acquired using digital sensor-based cameras. This new technique differs from previous filtering methods in that it is geared towards the type of color images obtained from digital cameras, and thus takes into account the physical limitations of the CCD and the specific types of CCD sensor noise produced. 
4 Color Spaces Before going into the details of the new color filtering methods, it is important to give a brief background of the most popular color spaces used in separating color images into their component color channels before filtering each channel. A color space is a model for representing color in terms of intensity values. It defines a one-, two-, three-, or four- dimensional space whose dimensions, or components, rep- resent intensity values. A color component is also referred to as a color channel. For example, RGB space is a three- dimensional color space whose components are the red, green, and blue intensities that make up a given color. Color spaces can be divided into two general categories; device dependent and device independent color spaces. • Device dependent color spaces: These include the family of RGB Spaces. The RGB space is a three- dimensional color space whose components are the red, green, and blue intensities that make up a given color. Most CCD and CMOS based digital camera im- aging sensors use the RGB color space by reading the amounts of red, green, and blue light reflected from the scene that fall on the CCD elements and then con- vert those amounts into digital values. These values are device dependent and one CCD may produce a different RGB value from another depending on how it is manufactured. HSV space and HLS space are transformations of RGB space that can describe colors in terms more natural to an artist. The name HSV stands for hue, saturation, and value, and HLS stands for hue, lightness, and saturation. The CMY color space is sometimes used in CCD and CMOS image sensors. The name CMY refers to cyan, magenta, and yellow, which are the three primary colors in this color space, and red, green, and blue are the three second- aries. • Device independent color spaces: Some color spaces can express color in a device-independent way. 
Whereas RGB colors vary with CCD and CMOS sensor hardware characteristics, device-independent colors are meant to be true representations of colors as perceived by the human eye. These color representations, called device-independent color spaces, result from work carried out in 1931 by the Commission Internationale de l'Eclairage (CIE) and for that reason are also called CIE-based color spaces. The CIE created a set of color spaces that specify color in terms of human perception. It then developed algorithms to derive three imaginary primary constituents of color, namely X, Y, and Z, that can be combined at different levels to produce all the colors the human eye can perceive. The resulting color model, and other CIE color models, form the basis for all color management systems. Although RGB and CMY values differ from device to device, human perception of color remains consistent across devices. Colors can be specified in the CIE-based color spaces in a way that is independent of the characteristics of any particular imaging device. The goal of this standard is for a given CIE-based color specification to produce consistent results on different devices, up to the limitations of each device.19 One problem with representing colors using the XYZ color space is that it is perceptually nonlinear: it is not possible to accurately evaluate the perceptual closeness of colors based on their relative positions in XYZ space. Colors that are close together in XYZ space may seem very different to observers, and colors that seem very similar to observers may be widely separated in XYZ space.
L*a*b* space is a nonlinear transformation of XYZ space that creates a perceptually linear color space, designed to match perceived color difference with quantitative distance in color space.20,21 As stated earlier, the CCD sensor of a typical digital camera is less sensitive to the blue and red channels, and this causes amplified noise artifacts in the chromatic channels in low light or at high-ISO settings. Moreover, there seems to be general agreement that spatial resolution is markedly lower in chromatic channels than in the achromatic one (see Fig. 3); hence, high-frequency information, i.e., edges, comes mainly from this achromatic channel.22 Another important consideration is that, in order to avoid chromatic artifacts in the filtered image, a nonlinear operator cannot be applied to each RGB component separately.22 These two considerations and experimental results suggest that a color model which separates luminance from chrominance is suitable. We thus choose to separate our acquired images using the L*a*b* color space because of the merits stated earlier. The L*a*b* color space separates the RGB image into a luminance channel L and two chrominance channels (a, b). This allows us to use different filter parameters specifically tuned for each channel. In general, the luminance channel suffers fewer noise artifacts than the (a, b) chrominance channels. We therefore take this into consideration when filtering each channel, by allowing more smoothing in the (a, b) channels to correct for color artifacts, while passing more high frequency in the filtered luminance channel. This will be further emphasized when presenting the experimental results in a later section.
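The RGB-to-L*a*b* separation relied on above can be sketched with the standard sRGB-to-XYZ-to-L*a*b* formulas (D65 white point). This single-pixel Python sketch is our own illustration, not code from the paper, and the function name is hypothetical.

```python
import math

def srgb_to_lab(r, g, b):
    """Convert one 8-bit sRGB pixel to CIE L*a*b* (D65 white point)."""
    # 1. Undo the sRGB gamma to get linear light in [0, 1].
    def lin(c):
        c /= 255.0
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
    rl, gl, bl = lin(r), lin(g), lin(b)
    # 2. Linear RGB -> CIE XYZ (sRGB primaries, D65 white).
    x = 0.4124564 * rl + 0.3575761 * gl + 0.1804375 * bl
    y = 0.2126729 * rl + 0.7151522 * gl + 0.0721750 * bl
    z = 0.0193339 * rl + 0.1191920 * gl + 0.9503041 * bl
    # 3. XYZ -> L*a*b*, normalized by the D65 reference white.
    xn, yn, zn = 0.95047, 1.0, 1.08883
    def f(t):
        return t ** (1 / 3) if t > 0.008856 else 7.787 * t + 16.0 / 116.0
    fx, fy, fz = f(x / xn), f(y / yn), f(z / zn)
    L = 116.0 * fy - 16.0
    a = 500.0 * (fx - fy)
    b_star = 200.0 * (fy - fz)
    return L, a, b_star

# White maps to L* ~ 100 with a* ~ b* ~ 0 (no chroma).
print(srgb_to_lab(255, 255, 255))
```

After filtering L, a, and b with their channel-specific parameters, the inverse transform (not shown) returns the result to RGB for display.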
5 Adaptive-Window Signal Equalization Hybrid Filter

The quantity of light falling on an image sensor array (e.g., a CCD array) is a real-valued function q(x,y) of two real variables x and y. An image is typically a degraded measurement of this function, where degradations may be divided into two categories: those that act on the domain (x,y) and those that act on the range q. Sampling, aliasing, and blurring act on the domain, while noise (including quantization noise) and the nonlinear response function of the camera act on the range.23 We are concerned with the latter type of camera sensor degradation. Digital camera sensor noise reduction is the process of removing unwanted noise from a digital image. It falls into two main categories: reduction or removal of noise from high-ISO images, including JPEG compression artifacts, and reduction or removal of noise from long-exposure images (with "stuck pixels"). In this section a detailed description of the adaptive hybrid filter for sensor noise removal is presented, with experimental results showing its performance in comparison with other standard noise reduction filters.

5.1 Hybrid Mean and Adaptive Center Weighted Median Filter

The median filter is a class of order-statistic filters, where filter statistics are derived from ordering (ranking) the elements of a set rather than computing means, etc. The median filter is a nonlinear neighborhood operation, similar to convolution, except that the calculation is not a weighted sum. Instead, the pixels in the neighborhood are ranked in the order of their gray levels, and the midvalue of the group is stored in the output pixel. In probability theory, the median M of a random variable x is the value for which the probability of the outcome x < M is 0.5.6 Median filtering is normally a slower process than convolution, due to the requirement of sorting all the pixels in each neighborhood by value.
There are, however, algorithms that speed up the process.24,25 The median filter is popular because of its demonstrated ability to reduce random impulsive noise without blurring edges as much as a comparable linear low-pass filter. However, it often fails to perform as well as linear filters in providing sufficient smoothing of nonimpulsive noise components, such as additive Gaussian noise. In order to achieve removal of noise with various distributions, as well as detail preservation, it is often necessary to combine linear and nonlinear operations.26-29 In this section we introduce a hybrid filter combining the best of both worlds: proper smoothing in flat regions and detail preservation in busy regions of the image.

One of the main disadvantages of the basic median filter is that it is location-invariant in nature, and thus also tends to alter the pixels not disturbed by noise. The center weighted median filter (CWMF) was developed to address this limitation in the basic median filter.30,31 This filter gives the pixel at the center of the window more weight (> 1) than the other pixels in the window before determining the median. This has the effect of preferentially preserving that pixel's value, so both fine detail and noise are more preserved. In the extreme, one could make the center pixel's weight equal to the entire weight of the rest of the window, in which case the value of the center pixel is assured of being the output of the median operation. This is the identity filter, where the output is equal to the input. In general, a CWMF can be varied over the range from the median filter to the identity filter by varying the central weight. This corresponds to the range from strong noise and detail removal (basic median filtering) to none (identity filtering). In the original CWMF implementation, the central weight is constant over the entire image.
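A minimal sketch of the center-weighted median idea on a single window, assuming an integer central weight; `cwm` and the sample window are our own illustration, not code from the paper.

```python
# Center-weighted median on one grayscale window: the center pixel is
# counted `weight` times before ranking. weight = 1 is the plain
# median filter; weight = len(window) behaves as the identity filter.
from statistics import median

def cwm(window, weight):
    """window: flat odd-length list of pixel values, center at the
    middle index; weight: integer central weight (>= 1)."""
    center = window[len(window) // 2]
    return median(window + [center] * (weight - 1))

window = [10, 12, 200, 11, 90, 13, 9, 14, 12]  # 3x3 window, center = 90
print(cwm(window, 1))   # plain median: the impulse 200 is rejected
print(cwm(window, 9))   # heavy center weight: center value 90 survives
```

With weight 1 the output is the ordinary median (12 here, discarding the impulse 200); with weight 9 the nine copies of the center outvote the eight neighbors, so the center value 90 is returned unchanged.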
In this paper, we make use of the CWMF concept and implement it as an adaptive CWMF ~ACWMF! by varying the central weight based on signal and noise estimates in- side an adaptive window framework, which will be de- scribed in detail in the next section. In formulating our center weighted median-based filter we use an image model with additive noise as follows: y ~ k , l !5 x~ k , l !1 n~ k , l !, ~1! where k P@ 0,M 2 1 #, and l P@ 0,N 2 1 # for an M 3 N sized image. n ( k , l ) is a zero-mean additive white Gaussian noise random variable, of variance s n 2 , and uncorrelated to the ideal image x ( k , l ) , which is assumed to be of zero mean and variance s x 2 , and y ( k , l ) is the noise-corrupted input image. For the purpose of the following analysis, we assume that both x ( k , l ) and n ( k , l ) are ergodic random variables. The implication of this assumption is that although we do not have a priori knowledge of the signal and noise statis- tical variance and mean, we can still capture samples of x ( k , l ) and n ( k , l ) and determine their variance and mean, which are, in turn, representative of their respective en- sembles. It should also be noted that although the noise variance, s n 2 , is not known a priori, it is easily estimated from a window in a flat area of the degraded image y ( k , l ) . 32 We begin by setting an objective criterion of optimality for deriving the central weight at each pixel location. We use a similar criterion to that used in deriving the power spectrum equalization filter,33 by seeking a linear estimate, x̂ ( k , l ) , such that the signal variance of the estimate is equal to the variance of the ideal image x ( k , l ) . Assuming this estimate is of the form x̂~ k , l !5 a•y ~ k , l !, ~2! we can express our criterion as s x 25 E$x̂ 2%5 E$~ a•y !2%, ~3! where E$•% is the expectation operator. 
In general, the acquired images have a nonzero mean, and we can account for this by subtracting the mean of each image from both random variables of Eq. (2). For zero-mean noise, the a posteriori sample mean (the local mean inside the adaptive window) of the degraded pixel $y(k,l)$, denoted by $m_y$, is equal to the a priori sample mean of the ideal pixel $x(k,l)$. After dropping the $(k,l)$ notation for readability, we have

$$\hat{x} - m_y = a \cdot (y - m_y), \qquad (4)$$

and we can write our criterion after accounting for the mean as follows:

$$\sigma_x^2 = E\{(\hat{x} - m_y)^2\} = E\{[a \cdot (y - m_y)]^2\} = a^2 \sigma_y^2 = a^2 (\sigma_x^2 + \sigma_n^2). \qquad (5)$$

Therefore, the signal equalization estimator $a$ becomes

$$a = \sqrt{\frac{\sigma_x^2}{\sigma_x^2 + \sigma_n^2}}. \qquad (6)$$

If the number of pixels in the adaptive window, of size $L_x \times L_y$, is $(L_x \cdot L_y)$, then the central weight for the pixel under analysis at pixel position $(k,l)$ is given as

$$C_w = a \cdot (L_x \cdot L_y - 1) + 1. \qquad (7)$$

This central weight can be used to give the value of the pixel at $(k,l)$ more weight than the other pixels in the adaptive window before determining the median; i.e., we count it as if it were $C_w$ pixels rather than just one pixel. Thus, $a = 0$ gives the basic median filter, $a = 1$ gives the identity filter, and $0 < a < 1$ spans the range in between.

… >25º) were obtained. Shoulder, armpit, and waist angles, in addition to trunk asymmetry indexes, were calculated on front and back photographs with Surgimap software. On AP radiographs, coronal Cobb angles and radiological shoulder imbalance were calculated using the CRIA angle (the angle between a horizontal line and a line drawn between the right and left intersection points of the clavicle and rib cage). The intra-class correlation coefficient was used to assess intra- and inter-rater reliability. Pearson correlation coefficients (r) were used to estimate concurrent validity between both methods.

Results
80 patients (68 females), mean age 20.3 years (range 12-40 years), were included.
The mean maximum Cobb angle (CobbMax) was 45.9º (range 25.1º-77.2º). All measures had good to excellent intra- and inter-rater reliability in both front and back photographs. The waist height angle and the CobbMax angle were significantly correlated in both front and back photographs (r=0.42 back / r=0.29 front view). There was no significant correlation between proximal thoracic curve magnitude and any of the shoulder measures. The correlations between the shoulder and armpit height angles and radiographic clavicle tilt were -0.44 and -0.41, respectively, on frontal view. There was a correlation between the trapezium angle ratio and clavicle tilt in both views (r=0.43 back / r=0.32 front view). No other statistically significant correlations between the two methods were found.

Conclusions
Digital photography measurement of waist height is useful to assess trunk deformity in idiopathic scoliosis, given its good correlation with the Cobb angle. Shoulder asymmetry indexes such as the shoulder height angle can serve as clinical measures of shoulder imbalance. Trunk asymmetry indexes are not correlated with radiological measures.

Published: 4 December 2014
doi:10.1186/1748-7161-9-S1-O6
Cite this article as: Matamalas et al.: Validity of a quantitative tool of trunk asymmetry based on digital photographs in patients with idiopathic scoliosis. Scoliosis 2014, 9(Suppl 1):O6.
Hospital Vall Hebron, Barcelona, Spain
© 2014 Matamalas et al; licensee BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0).
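The concurrent-validity statistic used throughout the abstract above is the Pearson correlation coefficient; a minimal implementation for paired measurement series (the helper name and the sample angle values are illustrative, not the study's data):

```python
import math

def pearson_r(xs, ys):
    """Sample Pearson correlation coefficient between two measurement
    series (e.g. a photographic angle vs. a radiographic angle)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Illustrative paired angles in degrees (hypothetical values)
photo = [4.1, 6.3, 2.0, 7.5, 5.2]
xray = [30.0, 41.0, 26.0, 44.0, 38.0]
print(round(pearson_r(photo, xray), 2))
```

Values of r near ±1 indicate strong linear agreement between the photographic and radiographic measures; the study's reported coefficients (e.g. r=0.42) indicate moderate agreement.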
Études photographiques, 26 | novembre 2010
Saisi dans l'action : repenser l'histoire du photojournalisme [Caught in the Act: Rethinking the History of Photojournalism]

Reaching Beyond the Photograph, redefining the Press
Thierry Gervais, Christian Delage, and Vanessa R. Schwartz

Electronic edition: http://journals.openedition.org/etudesphotographiques/3126, ISSN 1777-5302. Publisher: Société française de photographie. Print edition: published 30 November 2010, ISBN 9782911961267, ISSN 1270-9050.
Electronic reference: Thierry Gervais, Christian Delage, and Vanessa R. Schwartz, 'Reaching Beyond the Photograph, redefining the Press,' Études photographiques [online], 26 | novembre 2010, published online 27 February 2012, accessed 22 April 2019.

1 Perhaps no technology has done more to alter the visual landscape of modernity than the appearance of photographs within the mass press. This visual landscape has also come to stand at the center of the formation of what we consider 'global consciousness.' As historian of photography Gisèle Freund remarked in 1936, 'Photography opened a window, as it were. The faces of public personalities became familiar and things that happened all over the world were his to share.
As the reader's outlook expanded, the world began to shrink.'¹ It is the alchemy of the photographic image combined with the public vision and distribution goals of the press that works to create a global vision. This role has made photojournalism one of the most influential of photographic discourses.

2 Several of the articles in this new issue of Études photographiques began as part of a colloquium, 'Caught in the Act: Re-Thinking the History of Photojournalism,' organized in June 2009.² The essays by Thierry Gervais, Jason Hill, Will Straw, and Vanessa Schwartz are indebted to that larger group's conversation. Some of the important materials explored at the colloquium include, at Rockefeller Center, the American government's Office of War Information's own sort of photojournalism (Laura Wexler), the seemingly incongruous status of war reporting in a fashion magazine (Becky Conekin on Lee Miller during World War II in Vogue), the important contribution of early newsreels to news practice (Magdalena Mazaraki), the effects of photography and film on modern city life (Stéphane Füzesséry), the shared aesthetics of photography and film as exemplified by the Film and Photo League (Christian Delage), the powers, perils, and possibilities of the representation of children as and in the news (Anne Higonnet), and more recent developments of amateur photojournalism on the web (André Gunthert) and the market for photojournalistic images as art photography in the work of Luc Delahaye (Richard Meyer). These papers addressed such problems as the powerful decontextualization of the press image, its broad and often unpredictable circulation, and its aestheticization - fundamental issues, but ones that could not be fleshed out here due to the space limitations of a single journal issue and the other publication commitments of the authors.
3 This issue of Études photographiques follows in the footsteps of that of June 2007, La trame des images. Histoires de l'illustration photographique [The Image Screen: Histories of Photographic Illustration].³ That issue challenged the accepted account of newspapers – assumed to be impartial – as simply coming together with recorded images, pointing to editorial choices dictated by competition and various economic criteria. Within this analytical framework, the authors analyzed photographic production in terms of its relations with the modes of distribution and emphasized how permeable the barrier is that ostensibly separates the two – photography and the press. Finally, the articles shone a spotlight on the aesthetic ambitions of news photographs, which harmonize with the editorial goals of the media of distribution. In this issue, number 26, the authors go even further, revealing the photograph's shortcomings as a news vehicle and the solutions adopted by the editors of illustrated newspapers and magazines to remedy its inadequacies. This issue also looks beyond the production of images to focus on the role of editors in chief and art directors, those men and women who every week labor in the shadows to construct the narrative of current events in images. For them, 'stolen' pictures, photographic sequences, and retouched photographs became means for articulating a position or, for some, a code of journalistic ethics.

4 More specifically, this issue examines how press photographs were connected to other visual modes such as cinema, studio photos, and radiophotography. Taken together, several of the essays demonstrate that the press and its images were not hermetically sealed as a singular discourse but rather were influenced by and helped to shape related image cultures. The articles highlight and help define photojournalistic practice in an intervisual field.
This issue features articles whose authors have paid careful attention to the archives of both the production and diffusion of images – reading the images as historians – and by doing so the images are contextualized and made to resonate with broader social and cultural issues.

5 The self-conscious 'image' of the photojournalist is described in his own representation in images in the press (Thierry Gervais and Vanessa Schwartz) and in the formation of such organizations as the National Press Association (Vincent Lavoie). But such self-consciousness also bears witness to the way the press itself, through a specifically visual rhetoric, questioned the neutrality of information thought to characterize modern press practice. This is particularly revealing, as we often consider the project of photojournalism to have been an enthusiastic and naïve engagement with presenting 'things as they are.'⁴ The essays demonstrate this more complex awareness through such cases as the employment of obviously retouched images in radiophotography (Jason Hill), in the changing look of pulp crime tabloids (Will Straw), and in the strategic uses of black and white as opposed to color images (Audrey Leblanc). They argue that the history of photojournalism has always included a sophisticated critique of visual objectivity - long before such issues became generalized as part of the easy manipulation of images during the digital age. The press, they suggest, has even contributed to transmitting ideas about the fallacy of reliable information.

6 Finally, these articles look at practices associated with mainstream news sources (such as L'Illustration and Paris-Match) as well as those established by intellectual tabloids such as PM.
They insist that such genres as the pulp true crime image and red carpet and paparazzi photographs – the underbelly of the news industry – be pondered alongside the bulk of what has stood as such legitimate fields of photojournalism as war reporting. In fact, while there is a continuous history of news and war reporting from illustration to photography, such dogged invasions of privacy and the construction of a certain kind of celebrity are specifically photographic. With this in mind, tabloid photography may be one of the singular contributions of photojournalism.

7 Although the relative novelty of the web as a means of distribution and the advent of digital photography have enlivened scholarship about photojournalism, they have also produced an archaeological impulse. The essays collected here should make clear that this transformation has also facilitated a fruitful return to the equally troubled pre-digital archive of the photographic press – challenged by the collecting habits of research libraries. These libraries often did not care to buy such publications as tabloids and were oriented toward the disposal of the hard copy of press publications in favor of their substitution by poor microfilms. The lack of consistent archiving practices of both photographers and publications, not to mention the continued commercial value of press images, serves to further decontextualize them from the moment of their production while making them inaccessible to scholars, except as discrete objects for sale. Despite these less than perfect research conditions, our return to the history of photojournalism in its heyday suggests that it never really presented 'things as they are,' and that its advocates and practitioners actually knew this and tried to convey this information to their audience. It seems we are finally learning to see and hear photojournalism as it really was.

NOTES

1.
Gisèle Freund, Photography and Society (Boston: Godine, 1980 [1936]): 103.
2. 'Caught in the Act: Re-Thinking the History of Photojournalism,' international conference co-organized by Christian Delage and Vanessa Schwartz and funded and hosted by the Borchard Foundation at the Château de la Bretesche in Missillac, June 21-24, 2009. Catherine Clark, Ryan Linkof, and Curtis Fletcher provided research support for the colloquium.
3. La trame des images. Histoire de l'illustration photographique, Études photographiques, no. 20 (June 2007).
4. Mary Panzer and Christian Caujolle, Things as They Are. Photojournalism in Context Since 1855 (London: Chris Boot, 2005).

Moghimi et al. / J Zhejiang Univ-Sci C (Comput & Electron) 2010 11(8):598-606, 598

Studying pressure sores through illuminant invariant assessment of digital color images

Sahar MOGHIMI†1, Mohammad Hossein MIRAN BAYGI†‡1, Giti TORKAMAN2, Ehsanollah KABIR1, Ali MAHLOOJIFAR1, Narges ARMANFARD1
(1 Department of Electrical Engineering, Tarbiat Modares University, P.O. Box 14115-111, Tehran, Iran)
(2 Department of Physical Therapy, Tarbiat Modares University, P.O. Box 14115-111, Tehran, Iran)
† E-mail: {moghimi, miranbmh}@modares.ac.ir
Received Sept. 7, 2009; Revision accepted Jan. 29, 2010; Crosschecked July 1, 2010

Abstract: Methods for pressure sore monitoring remain both a clinical and research challenge. Improved methodologies could assist physicians in developing prompt and effective pressure sore interventions. In this paper a technique is introduced for the assessment of pressure sores in guinea pigs, using captured color images. Sores were artificially induced, utilizing a system particularly developed for this purpose. Digital images were obtained from the suspicious region on days 3 and 7 post-pressure sore generation.
Different segments of the color images were divided and labeled into three classes, based on their severity status. For quantitative analysis, a color-based texture model, which is invariant against monotonic changes in illumination, is proposed. The texture model has been developed based on the local binary pattern operator. Tissue segments were classified using the texture model and its features as inputs to a combination of neural networks. Our method is capable of discriminating tissue segments in different stages of pressure sore generation, and therefore can be a feasible tool for the early assessment of pressure sores.

Key words: Local binary pattern (LBP), Automatic assessment, Neural networks, Color-based texture model, Pressure sores, Digital color images
doi:10.1631/jzus.C0910552    Document code: A    CLC number: TP391

1 Introduction

Pressure sores occur when an unrelieved pressure causes ischemia, which, if prolonged, can lead to the development of necrotic tissue (Salcido et al., 1995). Pressure sores are notably different from acute wounds in that, unlike acute wounds, they can develop both superficially and/or from within the deep tissue, depending on the nature of the surface loading (Bader et al., 2005). Pressure sores are a major problem for patients with impaired mobility and reduced ability to sense injury. Therefore, daily checks are required in at-risk patients for signs of erythema or redness which may indicate the generation of pressure sores. Different imaging techniques, such as digital photography, high frequency ultrasound, computerized tomography (CT), and magnetic resonance imaging (MRI), can be used for the early assessment of pressure sores. CT and MRI are not economical and cannot be employed in small offices and/or clinics. In addition, CT and MRI have the concerns of exposure to X-rays, injected dyes, magnetic fields, and also a practical delay in final report generation.
High frequency ultrasound does not have the above problems, but it is rather expensive and the training is intensive as well as user-dependent. Digital photography has been widely used for the assessment of wounds and pressure ulcers (Belem, 2004; Salter et al., 2006; Treuillet et al., 2009). Although this technique does not provide information about the underlying tissue, its low cost and ease of use and interpretation have made it a desirable tool for the assessment of pressure sores. Also, different image processing techniques have been developed to provide quantitative interpretation tools for studying the captured images.

Journal of Zhejiang University-SCIENCE C (Computers & Electronics), ISSN 1869-1951 (Print); ISSN 1869-196X (Online); www.zju.edu.cn/jzus; www.springerlink.com; E-mail: jzus@zju.edu.cn. ‡ Corresponding author. © Zhejiang University and Springer-Verlag Berlin Heidelberg 2010.

Changes in the color content of digital photographs have been demonstrated to provide a viable tool for assessment of wounds in general. Hansen et al. (1997) used color images to score artificially-induced pressure sores in animals based on changes in the hue component. Herbin et al. (1993) used color images of the wound surface to extract the area and a color index for monitoring tissue changes during wound healing. Berriss and Sangwine (1997) used three-dimensional (3D) RGB histograms for clustering different tissue types in the pressure ulcer bed. Several researchers tried to cluster or segment wound images using different sets of features, including textural features as well as those extracted from RGB, HSV, HIS, LAB, and LUV histograms (Jones and Plassmann, 1995; Berriss and Sangwine, 1997; Nischik and Forster, 1997; Bon et al., 2000; Perez et al., 2001; Belem, 2004; Zheng et al., 2004; Galushka et al., 2005; Kolesnik and Fexa, 2005). Bon et al.
(2000) used the color content of captured images to define a healing curve for the assessment of wound healing. Another research team used an adaptive spline technique to segment the wound boundary in images of venous leg ulcers based on hue, saturation, and intensity measures (Oduncu et al., 2004). Although color features have proven to be useful for the segmentation of wound tissue and assessment of healing, their values are dependent on the effect of surrounding light and the specifications of the image capturing system (Plassmann et al., 1995; Plassmann and Jones, 1998; Malian et al., 2005). Therefore, precautions should be taken in image acquisition. Comparison of wound status at different time intervals is not reliable unless images are obtained under the same surrounding light conditions, or a color calibration phase is included in the preprocessing steps. Researchers have exercised different color calibration techniques to address the above issue (Diao et al., 2005; Bianco et al., 2007; Lee and Choi, 2008).

In this paper, we introduce a technique for studying pressure sore generation from the color content of captured images, regardless of the surrounding light. It is illustrated that, as color changes are measured with respect to the adjacent healthy tissue, no precaution has to be taken about the surrounding light. In fact, we have shown that the introduced model is invariant against monotonic color variations across the image. For evaluating the efficiency of the proposed model, different segments of the captured images are labeled based on their severity status and classified using the proposed model and a combination of neural networks.

2 Materials and methods

2.1 Pressure sore generation

2.1.1 Animals

We used six healthy male albino guinea pigs (Dunkin-Hartley; Pasteur Institute of Iran, Tehran, Iran) that were 4 to 6 months old and weighed 400 to 450 g.
The animals were maintained in special cages under controlled conditions according to the experimental guidelines of Tarbiat Modares University. Up to 12 h before the anesthesia and pressure application, the animals had unlimited access to food. The Ethics Commission of Tarbiat Modares University approved the study.

2.1.2 Procedure

After weighing the animals, we prepared the anesthesia using a mixture of xylazine (20 mg/ml) and ketamine hydrochloride (Brown et al., 1995) (100 mg/ml; 1 to 8 cc, injection of 1 cc/kg).

The system for generation of pressure sores included a PWM (pulse width modulation) card (Control of Biological Systems LAB, Amir Kabir University, Tehran, Iran), an A/D (analog to digital) card, a PC, a mechanical arm, and two 12 V DC motors (MicroMo, USA). The rotational output of the motors was converted to a linearly driven indentor, using a mechanical interface. Two strain gauges (HBM, Germany) were assembled on a small horizontal arm facing each other to measure the applied force. The sensor signal was linearized after passing through the A/D card. The A/D output provided the control signal. A PID (proportional-integral-derivative) controller (Control of Biological Systems LAB, Amir Kabir University, Tehran, Iran) was used to keep the difference between the desired and applied loads as small as possible. The user was able to monitor the applied pressure throughout the wound generation sessions for the duration of the experiment. A block diagram of the system is illustrated in Fig. 1 (controller, PWM driver, motors, power supplies, indenter, force sensor, amplifier, and A/D converter in a closed loop). The ability of the system to produce pressure sores was previously tested based on histological analysis (Torkaman et al., 2000). A photograph of the system used for generating pressure sores is shown in Fig. 2.

Pressure was uniformly applied to a disk of diameter 0.75 cm. The load was kept constant at (400±5) g for 5 h over the trochanter region of the right animal hind limb.
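The load-regulation loop described above can be illustrated with a textbook discrete PID controller; this is a sketch with made-up gains and a toy first-order plant, not the authors' controller parameters:

```python
class PID:
    """Discrete PID loop of the kind used to hold the applied load at a
    set point (here 400 g, as in the experiment); all gains and the time
    step are illustrative assumptions."""
    def __init__(self, kp, ki, kd, setpoint, dt):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint, self.dt = setpoint, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, measured):
        error = self.setpoint - measured
        self.integral += error * self.dt            # I term accumulates error
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Toy plant: the applied load integrates the control signal over time
pid = PID(kp=0.5, ki=0.1, kd=0.05, setpoint=400.0, dt=0.1)
load = 0.0
for _ in range(500):
    load += pid.update(load) * pid.dt
print(round(load, 1))
```

The integral term drives the steady-state error to zero, which is what lets the real system hold the load within the (400±5) g band reported above.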
2.2 Data preparation

Digital images were obtained from the region of interest (ROI) on days 3 and 7 post-pressure sore generation using a 7-megapixel Canon digital camera in normal daylight in the laboratory, without any internal illumination. The reason for choosing these days for tissue monitoring is that a pressure sore induced in this manner reaches its maximum severity after seven days (Daniel et al., 1981; Torkaman et al., 2000), and the full extent of tissue damage is not apparent until the fifth through the seventh day. It has also been stated that before this time the actual extent of deep necrosis is difficult to define, and after it healing reverses some of the more obvious signs of damage (Hansen et al., 1997). By monitoring the tissue on the above days, we were more certain that we were assessing pressure sore generation. The ROI was marked on the sample's skin. The animals were kept in a restrainer during image acquisition sessions to provide more accurate images and avoid motion artifacts.

The size of the captured images was reduced to 480×480 pixels, with the suspicious region at the center of the image. Every 40×40-pixel segment of the RGB images was labeled by three physicians. Each physician was asked to classify every region into one of three different classes, with I, II, and III referring to intact, slightly damaged, and considerably damaged tissues, respectively. The label of the class corresponding to the highest number of physician votes was considered the final label of the region under study. Table 1 illustrates the results of labeling different regions (segments) into the above three classes.

The RGB color images were converted to the HSV color space, and the hue component was considered for further processing steps. This allowed us to develop our model based on the color content of the captured images. The hue component values were normalized to take values in the interval [0, 1].
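The two preprocessing steps above, majority-vote labeling of each 40×40 segment and extraction of a normalized hue channel, can be sketched with the standard library's `colorsys` (the function names are mine, not the paper's):

```python
import colorsys
from collections import Counter

def majority_label(votes):
    """Final label of a segment: the class named by the most physician
    votes. With three raters a 1-1-1 tie is possible; Counter then falls
    back to first-seen order, which a real protocol would need to resolve."""
    return Counter(votes).most_common(1)[0][0]

def hue_channel(rgb_pixels):
    """Normalized hue in [0, 1] for 8-bit RGB pixels, as in the paper's
    HSV conversion step (colorsys already returns hue scaled to [0, 1])."""
    return [colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)[0]
            for r, g, b in rgb_pixels]

print(majority_label(["II", "I", "II"]))        # class II wins 2-1
print([round(h, 3) for h in hue_channel([(200, 30, 30)])])  # reddish pixel, hue near 0
```

Erythema shows up as reddish hues clustered near 0 (and, by wrap-around, near 1), which is why the hue channel is a natural input for the texture model that follows.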
2.3 Modeling the color content

We utilized texture information when modeling the color content of each segment. The local binary pattern (LBP) texture operator was selected as the measure of texture because it is invariant with respect to monotonic grayscale changes (Ojala et al., 1996; Heikkila and Pietikainen, 2006). This operator has shown excellent performance in different applications (Haralick et al., 1973; Ahonen et al., 2006; Heikkila and Pietikainen, 2006; Armanfard et al., 2008; Mackiewicz et al., 2008; Tajeripour et al., 2008).

Fig. 2 The setup used for generating pressure sores
Fig. 1 Block diagram of the pressure sores generation system (controller, PWM driver, motors, power supplies, indenter, sensor, amplifier, and A/D converter; PWM: pulse width modulation)

Table 1 Number of segments in each class
Class  Description            Number of members
I      Intact                 1552
II     Slightly damaged       137
III    Considerably damaged   39

The LBP operator labels the pixels of an image
Dif- ferent threshold values were tested in ascending order until the LBP values of the intact regions became very small with respect to the LBP values of the wound tissue. Fig. 3 shows the LBP values of the pixels ob- tained using different threshold values. For visual interpretation these values are illustrated as grayscale images. While higher values of t cause meaningful changes in hue values to be undetectable, for very smaller values of t the LBP operator detects unde- sirable changes in the hue values due to the presence of hair roots. The LBP operator is formulated as 0 0 0 0 1, ( , ) ( , ) , 0, ( , ) ( , ) , i i i i i I x y I x y t b I x y I x y t − >⎧ = ⎨ − ≤⎩ (1) 1 0 0 0 LBP( , ) 2 , N i i i x y b − = = ∑ (2) where I(xi, yi) is the value of the pixel location (xi, yi) in the neighborhood, i=0, 1, …, N−1. The value of i increases in a counter clockwise manner starting from the top left corner of the neighborhood. The process discussed above is illustrated in Fig. 4. Note that different neighborhood sizes may be adopted for calculating the LBP values. Based on our previous experiments the value of t was set to 0.05. The texture model for a pixel developed in this research consisted of its LBP value, computed over a defined region around that pixel (a 3×3 pixel neighborhood). In the first attempt, the LBP histogram of each segment was used as its feature vector. We applied this operator to the hue channel of the HSV space for detecting changes in the tissue color. The model of every 40×40 region of the 480×480 images was generated as mentioned above. To reduce the processing time, in the next step, the original LBP histograms were replaced by three features extracted (a) (b) (c) (d) (e) Fig. 3 (a) The hue component of a wound image. Local binary pattern (LBP) values of (a), calculated for different threshold values and illustrated as grayscale images: t=0.01 (b), t=0.02 (c), t=0.04 (d), and t=0.05 (e) Fig. 
4 An example for calculating the local binary pattern (LBP) values From left to right: the hue component, the magnified hues and their values, and the threshold binary values Thresholding = (11010011)2 0.965 0.973 0.0150.894 0.011 0.074 0.031 0.023 1 1 1 1 1 1 0 0 0 Moghimi et al. / J Zhejiang Univ-Sci C (Comput & Electron) 2010 11(8):598-606 602 from them. The extracted features were their entropy, variance, and kurtosis (Theodoridis and Koutroumbas, 2003). Entropy: g 1 2 0 ( )log ( ). N I H P I P I − = = −∑ (3) Variance: g 1 2 2 0 ( ) ( ). N I I m P Iμ − = = −∑ (4) Kurtosis: g 1 4 4 0 ( ) ( ). N I I m P Iμ − = = −∑ (5) Herein Ng is the total number of possible LBP values. P(I) and m represent the probability of the presence of I and the mean LBP value in the neighborhood, re- spectively. Both ideas (original LBP histogram and the extracted features) were tested for their capabili- ties to discriminate segments belonging to different classes introduced earlier. 2.4 Classification We tried to classify the labeled segments based on the features extracted from the texture model. Multi-layered perceptron neural networks (MLP NNs) were used to classify the segments into the three dif- ferent classes. Three networks with similar structures were used. Each network was trained to discriminate one specific class from the others. All nodes had a tan-sigmoid transfer function. The weights and biases of the networks were initialized with random values, taken from −1 to 1. We used batch training where the weights between the neurons are updated only after all the training examples are exposed to the network. We also used Levenberg-Marquardt back-propagation for training and found that, in practice, this method produced the best result. For the classification of segments using the LBP histograms, the networks had 256 input neurons and 2 hidden layers with 20 and 7 neurons respectively. 
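Eqs. (1)-(5) can be sketched directly in NumPy. This is a minimal illustration (function names are mine; the 3×3 neighborhood, t=0.05, and 256-bin histogram follow the paper, but the exact counter-clockwise bit ordering shown is one plausible reading of the stated convention):

```python
import numpy as np

# Neighbor offsets (row, col), counter-clockwise from the top-left corner
OFFS = [(-1, -1), (0, -1), (1, -1), (1, 0),
        (1, 1), (0, 1), (-1, 1), (-1, 0)]

def lbp_thresholded(hue, t=0.05):
    """Eqs. (1)-(2): neighbor i sets bit i only when it exceeds the
    center pixel by more than t, so small hue wobbles are ignored."""
    H, W = hue.shape
    out = np.zeros((H, W), dtype=np.int32)
    for r in range(1, H - 1):
        for c in range(1, W - 1):
            for i, (dr, dc) in enumerate(OFFS):
                if hue[r + dr, c + dc] - hue[r, c] > t:
                    out[r, c] |= 1 << i
    return out

def histogram_features(codes, n_bins=256):
    """Eqs. (3)-(5): entropy, variance, and fourth central moment
    ("kurtosis" in the paper) of the normalized LBP histogram."""
    p = np.bincount(codes.ravel(), minlength=n_bins) / codes.size
    I = np.arange(n_bins)
    nz = p > 0
    entropy = -np.sum(p[nz] * np.log2(p[nz]))
    m = np.sum(I * p)                       # mean LBP value
    mu2 = np.sum((I - m) ** 2 * p)
    mu4 = np.sum((I - m) ** 4 * p)
    return entropy, mu2, mu4

flat = np.full((6, 6), 0.3)                 # homogeneous "intact" tissue
print(histogram_features(lbp_thresholded(flat)))
```

For a homogeneous segment every code is 0, so all three features vanish; a damaged segment with inhomogeneous hue spreads probability mass over many codes and all three features grow, which is what makes them usable as classifier inputs.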
For classifying the histogram features, a network with 3 input neurons and 1 hidden layer (with 8 neurons) produced the best result. Note that the number of hidden layers as well as their neurons was chosen based on the experiments we conducted. Due to the small number of samples in two of our classes (classes II and III) and the dimensions of the feature vectors, we used 5-fold cross-validation to avoid any bias or error during segment classification.

Since three neural networks were trained on the data, the outputs of the networks had to be combined to obtain the final decision. The final decision (D) was produced as D=i if di=max(d1, d2, d3), where i=1, 2, or 3 and d1, d2, and d3 were the decisions of the three classifiers. This means that a class label was assigned as the final decision if the soft decision of the network trained on the samples of that specific class was higher than those of the other two networks. In fact, di was the probability of assigning a feature vector to class i by the ith classifier.

3 Results and discussions

Fig. 5 shows two RGB color images obtained from the same guinea pig on days 3 and 7 post-pressure sore generation. Figs. 5b and 5c illustrate the polar histograms of the hue of these segments. Although both digital color images (Figs. 5a and 5f) were captured from the same animal, the hue shift is noticeable in the transition from polar histograms Figs. 5b and 5c to polar histograms Figs. 5g and 5h. The reason for this artifact is that no precautions were taken about the light conditions. On the other hand, the peaks of the four LBP histograms (Figs. 5d, 5e, 5i, and 5j), calculated over the hue component, happen to occur at the same location (at zero). This phenomenon has a simple analytic meaning. Since the monotonic color shift can be expressed as a constant offset (os) in the hue value of each pixel, the left term of the condition in Eq.
(1) under a monotonic color shift becomes (I(xi, yi)−os)−(I(x0, y0)−os), which is exactly the same as I(xi, yi)−I(x0, y0). This illustrates that the LBP histogram is invariant to monotonic color shifts from one day to another, and is therefore a reliable tool for comparing color images obtained under monotonically varying illumination. Fig. 6 shows the LBP histograms of three different segments, each belonging to one of the classes under study. The value of the histograms at zero was omitted for illustration purposes. The histograms were normalized by dividing their values by the number of pixels in each segment. It is obvious that the LBP histogram of a healthy segment had very small values at locations other than zero. This is because the color content of a healthy segment has a homogeneous pattern. Pressure sore generation causes inhomogeneities in the color content, as a result of the appearance of skin redness and erythema. As the pressure sore reaches more severe stages, or in other words progresses to the superficial layers, the color inhomogeneities become significant, resulting in a rough histogram. An LBP histogram of a segment that belongs to class III is presented in Fig. 6c. The threshold value introduced in Eq. (1) plays an important role in detecting the color differences. Therefore, the sensitivity of the model in the assessment of pressure sore generation is highly related to this threshold value. Fig. 6 shows the difference between the three observation classes heuristically. It is desirable to know whether the features extracted from the LBP histogram have the ability to discriminate the segments that illustrate different stages of pressure sore generation. Table 2 shows the significance of the extracted features, examined by analysis of variance and multiple comparisons.
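The cancellation argument — (I(xi, yi)−os)−(I(x0, y0)−os) = I(xi, yi)−I(x0, y0) — can be checked numerically on a single neighbourhood. The sketch below is our own illustration, not the authors' code; the helper name and neighbour ordering are assumptions, and the hue values are quantized to multiples of 1/256 so that the floating-point arithmetic is exact and the cancellation holds bit-for-bit.

```python
import numpy as np

rng = np.random.default_rng(1)
patch = rng.integers(0, 256, size=(3, 3)) / 256.0  # one 3x3 hue neighbourhood
offset = 37 / 256.0                                # constant hue shift (os)

def lbp_code(p, threshold=0.0):
    """LBP code of a 3x3 patch: bit i is set when neighbour_i - centre > threshold."""
    centre = p[1, 1]
    neighbours = np.delete(p.ravel(), 4)  # the 8 pixels around the centre
    bits = (neighbours - centre > threshold).astype(int)
    return int((bits * (1 << np.arange(8))).sum())

print(lbp_code(patch) == lbp_code(patch + offset))  # True: the offset cancels
```

Because the offset cancels in every pairwise difference, the LBP code of each pixel, and hence the whole LBP histogram, is unchanged by a constant hue shift between imaging sessions.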
Based on the confidence intervals, which do not include zero, the means of these three classes are considerably different, and there is a meaningful difference between the three classes under study. Therefore, it may be concluded that the extracted features are able to discriminate the samples belonging to different classes, and are good candidates for the purpose of classification. The box-plots of the extracted features are presented in Fig. 7 to provide a better sense of the class separability. On each box, the central mark is the median. The edges of the box are the 25th and 75th percentiles. The whiskers extend to the most extreme data points, not considering outliers. The outliers are plotted individually. Table 3 shows the classification results over the test samples, using two different feature vectors, namely the LBP histograms and the features extracted from them. As one would predict, feeding the entire LBP histogram into the classifiers results in greater accuracy. The highest percentage of misclassified samples belongs to class II. This was expected, since this class includes the

Fig. 6 Local binary pattern (LBP) histograms of hue segments belonging to classes I (a), II (b), and III (c). The value of the histograms at zero is omitted for illustration purposes

Fig. 5 Wound images and their hue and local binary pattern (LBP) histograms. (a) is the RGB color image obtained from one animal on day 7. (b) and (c) are polar histograms of hue computed for the dashed squares of (a). (d) and (e) are LBP histograms of the same regions used in (b) and (c). (f) is the RGB color image obtained from the same animal on day 3. (g) and (h) are polar histograms of hue computed for the dashed squares of (f). (i) and (j) are LBP histograms of the same regions used in (g) and (h).
The dashed squares in (a) and (f) represent two healthy segments of tissue on days 7 and 3, respectively

segments that were only slightly damaged. Therefore, tissue segments with faint erythema or redness can be misinterpreted as intact tissue. As the pressure sores were induced in guinea pigs, the hypothesis could not be tested on more severe sores, due to ethical considerations. The number of segments belonging to class III was very small compared with that of class I. This was due to the small dimension of the pressure disk used for inducing the sores. The size of the pressure applicator disk was chosen based on the size of the animal's trochanter. We could not reduce the image dimension either, as it was necessary to monitor the adjacent tissue for tracking the pressure sores. Increasing the number of images would not help, as the ratio of samples in class III to the samples in class I would still be small. Nevertheless, the test samples belonging to class III, with the smallest numbers, were still classified with high accuracy. The number of segments labeled into class III by the clinicians was higher in the images of day 7 than in the images of day 3. This implies that the images of day 7 illustrated more severe stages of pressure sores. Therefore, different captured images may also be compared based on pressure sore severity by counting the number of segments classified into the three mentioned classes. This would provide a relative measure for monitoring pressure sore generation. Bon et al.
(2000) also introduced a relative measure of wound healing, but it was based on the changes in the RGB components. Herbin et al. (1993) used a color index for monitoring healing, but they provided their image database with precautions about the surrounding light.

Table 3 Classification results

Input         | Total accuracy (%) | Training time (min) | Misclassified, Class I* (%) | Class II** (%) | Class III*** (%)
LBP histogram | 97.97              | 420                 | 0.32                        | 19             | 0.00
LBP features  | 95.94              | 1.5                 | 0.67                        | 38             | 0.00

* n=311; ** n=26; *** n=9. n is the number of samples in each class

Fig. 7 Box-plots of the extracted features. (a) Entropy; (b) Variance; (c) Kurtosis. These values are plotted for all the samples (including test and training)

Table 2 Class separability studied through one-way analysis of variance and multiple comparisons

Class  | Entropy MD (95% CI)         | Variance MD (95% CI)     | Kurtosis MD (95% CI)
I/II   | −0.3799 [−0.4036, −0.3562]  | 0.0103 [0.0098, 0.0108]  | 0.9972 [0.8947, 1.0997]
I/III  | −0.9904 [−1.0335, −0.9473]  | 0.0289 [0.0279, 0.0298]  | 3.3084 [3.1220, 3.4948]
II/III | −0.6105 [−0.6588, −0.5622]  | 0.0186 [0.0175, 0.0196]  | 2.3112 [2.1025, 2.5199]

MD: mean difference; CI: confidence interval

4 Conclusions

In this study, a technique was proposed for studying pressure sore generation, based on numerical analysis of the color content of captured images. Analysis of wounds has previously been carried out by monitoring changes in wound color. Our approach was more specifically established to study the changes in tissue color through a method which is invariant to monotonic changes in tissue color under uncontrolled surrounding light. Small variations in the light conditions during image acquisition can affect the final decision on the state of the tissue being monitored for the assessment of pressure sores.
This is because superficial signs of pressure sores initiate as small changes in the color content of the superficial layers of suspicious areas. Therefore, a quantitative technique which is invariant to small changes in the environmental light, and yet is able to distinguish these changes in tissue color, may be helpful for monitoring tissue status. It was demonstrated that the proposed technique can be used to discriminate intact tissue from slightly and considerably damaged tissues. In this paper we dealt with artificially-induced sores which were generated in the controlled environment of the laboratory. We controlled and took into account factors such as temperature, moisture and delivered food, which could affect tissue conditions. It remains to be seen how these results will translate to patient studies, where there may be other factors affecting tissue conditions.

Acknowledgements

We would like to thank Drs. Afsar HADDADIAN and Parvin MANSOORI for assistance in data analysis. This research has been supported with the resources and use of facilities at Tarbiat Modares University, Tehran, Iran.

References

Ahonen, T., Hadid, A., Pietikainen, M., 2006. Face description with local binary patterns: application to face recognition. IEEE Trans. Pattern Anal. Mach. Intell., 28(12):2037-2041. [doi:10.1109/TPAMI.2006.244]
Armanfard, A., Komeili, M., Kabir, E., 2008. TED: a Texture-Edge Descriptor Based on LBP for Pedestrian Detection. IEEE Int. Symp. on Telecommunications, p.643-648. [doi:10.1109/ISTEL.2008.4651380]
Bader, D., Bouten, C., Colin, D., Oomens, C., 2005. Pressure Ulcer Research: Current and Future Perspectives. Springer Verlag, London, p.1-7. [doi:10.1007/3-540-28804-X]
Belem, B., 2004. Non-invasive Wound Assessment by Image Analysis. PhD Thesis, University of Glamorgan, UK.
Berriss, W.P., Sangwine, S.J., 1997. A Colour Histogram Clustering Technique for Tissue Analysis of Healing Skin Wounds. Proc. 6th Int. Conf.
on Image Processing and Its Applications, p.693-697. [doi:10.1049/cp:19970984]
Bianco, S., Gasparini, F., Russo, A., Schettini, R., 2007. A new method for RGB to XYZ transformation based on pattern search optimization. IEEE Trans. Consum. Electron., 53(3):1020-1028. [doi:10.1109/TCE.2007.4341581]
Bon, F.X., Briand, E., Guichard, S., Couturaud, B., Revol, M., Servant, J.M., Dubertret, L., 2000. Quantitative and kinetic evolution of wound healing image analysis. IEEE Trans. Med. Imag., 19(7):767-772. [doi:10.1109/42.875206]
Brown, M., Gogia, P.P., Sinacore, D.R., Menton, D.N., 1995. High-voltage galvanic stimulation on wound healing in guinea pigs: longer-term effects. Arch. Phys. Med. Rehabil., 76(12):1134-1137. [doi:10.1016/S0003-9993(95)80122-7]
Daniel, R.K., Priest, D.L., Wheatley, D.C., 1981. Etiologic factors in pressure sores: an experimental model. Arch. Phys. Med. Rehabil., 62:492-498.
Diao, C.Y., Lu, D.M., Liu, G., 2005. Relighting multiple color textures. J. Zhejiang Univ. Sci., 6A(11):1284-1289. [doi:10.1631/jzus.2005.A1284]
Galushka, M., Zheng, H., Patterson, D., Bradley, L., 2005. Case-Based Tissue Classification for Monitoring Leg Ulcer Healing. Proc. 18th IEEE Symp. on Computer-Based Medical Systems, p.353-358. [doi:10.1109/CBMS.2005.39]
Hansen, G.L., Sparrow, E.M., Kokate, J.Y., Leland, K.J., Iaizzo, P.A., 1997. Wound status evaluation using color image processing. IEEE Trans. Med. Imag., 16(1):78-86. [doi:10.1109/42.552057]
Haralick, R.M., Shanmugan, K., Dinsten, I., 1973. Textural features for image classification. IEEE Trans. Syst. Man Cybern., 3(6):610-621. [doi:10.1109/TSMC.1973.4309314]
Heikkila, M., Pietikainen, M., 2006. A texture-based method for modeling the background and detecting moving objects. IEEE Trans. Pattern Anal. Mach. Intell., 28(4):657-662. [doi:10.1109/TPAMI.2006.68]
Herbin, M., Bon, F.X., Venot, A., Jeanlouis, F., Dubertret, M.L., Dubertret, L., Strauch, G., 1993.
Assessment of healing kinetics through true color image processing. IEEE Trans. Med. Imag., 12(1):39-43. [doi:10.1109/42.222664]
Jones, B.F., Plassmann, P., 1995. An instrument to measure the dimensions of skin wounds. IEEE Trans. Biomed. Eng., 42(5):464-470. [doi:10.1109/10.376150]
Kolesnik, M., Fexa, A., 2005. Multi-dimensional color histograms for segmentation of wounds in images. LNCS, 3656:1014-1022. [doi:10.1007/11559573_123]
Lee, S.H., Choi, J.S., 2008. Design and implementation of color correction system for images captured by digital camera. IEEE Trans. Consum. Electron., 54(2):268-276. [doi:10.1109/TCE.2008.4560085]
Mackiewicz, M., Berens, J., Fisher, M., 2008. Wireless capsule endoscopy color video segmentation. IEEE Trans. Med. Imag., 27(12):1769-1781. [doi:10.1109/TMI.2008.926061]
Malian, A., Azizi, A., van den Heuvel, F.A., Zolfaghari, M., 2005. Development of a robust photogrammetric metrology system for monitoring the healing of bedsores. Photogr. Rec., 20(111):241-273. [doi:10.1111/j.1477-9730.2005.00319.x]
Nischik, M., Forster, C., 1997. Analysis of skin erythema using true color images. IEEE Trans. Med. Imag., 16(6):711-715. [doi:10.1109/42.650868]
Oduncu, H., Hoppe, A., Clark, M., Williams, R.J., Harding, K.J., 2004. Analysis of skin wound images using digital color image processing: a preliminary communication. Int. J. Lower Extrem. Wounds, 3(3):151-156. [doi:10.1177/1534734604268842]
Ojala, T., Pietikäinen, M., Harwood, D., 1996. A comparative study of texture measures with classification based on feature distributions. Pattern Recogn., 29(1):51-59. [doi:10.1016/0031-3203(95)00067-4]
Perez, A.A., Gonzaga, A., Alves, J.M., 2001. Segmentation and Analysis of Leg Ulcers Color Images. Proc. Int. Workshop on Medical Imaging and Augmented Reality, p.262-266. [doi:10.1109/MIAR.2001.930300]
Plassmann, P., Jones, T.D., 1998.
MAVIS: a non-invasive instrument to measure area and volume of wounds. Med. Eng. Phys., 20(5):332-338. [doi:10.1016/S1350-4533(98)00034-4]
Plassmann, P., Jones, B.F., Ring, E.F.J., 1995. A structured light system for measuring wounds. Photogr. Rec., 15(86):197-204. [doi:10.1111/0031-868X.00025]
Salcido, R., Fisher, S.B., Donofrio, J.C., Bieschke, M., Knapp, C., Liang, R., LeGrand, E.K., Carney, J.M., 1995. An animal model and computer controlled surface pressure delivery system for the production of pressure ulcers. J. Rehabil. Res. Dev., 32(2):149-161.
Salter, R., Love, H., Fright, R., Nixon, M., 2006. PDA-based, portable laser scanner measurement of wound size: accuracy and reproducibility. ANZ J. Surg., 76(Suppl 1):A59.
Tajeripour, F., Kabir, E., Sheikhi, A., 2008. Fabric defect detection using modified local binary patterns. EURASIP J. Adv. Signal Process., 2008, Article ID 783898, p.1-12. [doi:10.1155/2008/783898]
Theodoridis, S., Koutroumbas, K., 2003. Pattern Recognition (2nd Ed.). Academic Press, London, p.270-272.
Torkaman, G., Sharafi, A.A., Fallah, A., Katoozian, H.R., 2000. Biomechanical and Histological Studies of Experimental Pressure Sores in Guinea Pigs. Proc. 10th Int. Conf. on Biomedical Engineering, p.463-469.
Treuillet, S., Albouy, B., Lucas, Y., 2009. Three-dimensional assessment of skin wounds using a standard digital camera. IEEE Trans. Med. Imag., 28(5):752-762. [doi:10.1109/TMI.2008.2012025]
Zheng, H., Bradley, L., Patterson, D., Galushka, M., Winder, J., 2004. New Protocol for Leg Ulcer Classification from Colour Images. Proc. IEEE 26th Annual Int. Conf. on Engineering in Medicine and Biology Society, p.1389-1392.
[doi:10.1109/IEMBS.2004.1403432]
work_pctpxe6kvbgvvnj6wnk6kvri5e ---- International Journal of Circumpolar Health: Vol 80, No 1

International Journal of Circumpolar Health. An open access journal. Publishes international open access research on circumpolar health covering issues related to the health of indigenous peoples and high latitude environments. Published in cooperation with the Circumpolar Health Research Network (CircHNet).
International Journal of Circumpolar Health, Volume 80, Issue 1 (2021). Issue In Progress.

Original Research Article

Patient healthcare experiences in the Northwest Territories, Canada: an analysis of news media articles. Rhiannon Cooper, Nathaniel J.
Pollock, Zander Affleck, Laura Bain, Nanna Lund Hansen, Kelsey Robertson & Susan Chatwood. Article: 1886798. Published online: 18 Mar 2021

Incidence and survival of head and neck cancer in the Faroe Islands. Sunnvá Hanusardóttir Olsen, Jeppe Friborg, Bjarki Ellefsen, Kathrine Kronberg Jakobsen & Kasper Aanæs. Article: 1894697. Published online: 14 Mar 2021

Suicides in Nunavik: a life course study. William Affleck, Nadia Chawky, Guy Beauchamp, Martha Malaya Inukpuk, Ellasie Annanack, Véronique Paradis & Monique Séguin. Article: 1880143. Published online: 11 Mar 2021

A sense of health and coherence in young rural schoolchildren in Sweden. Eva Randell, Camilla Udo & Maria Warne. Article: 1893534. Published online: 11 Mar 2021

Equivalent servings of free-range reindeer promote greater net protein balance compared to commercial beef. Melynda S. Coker, Scott E. Schutzler, Sanghee Park, Rick H. Williams, Arny A. Ferrando, Nicolaas E. P. Deutz, Robert R. Wolfe & Robert H. Coker. Article: 1897222. Published online: 11 Mar 2021

An experimental exposure study revealing composite airway effects of physical exercise in a subzero environment. Linda Eklund, Filip Schagatay, Ellen Tufvesson, Rita Sjöström, Lars Söderström, Helen G.
Hanstock, Thomas Sandström & Nikolai Stenfors. Article: 1897213. Published online: 08 Mar 2021

Community-driven research in the Canadian Arctic: dietary exposure to methylmercury and gastric health outcomes. Emily V. Walker, Safwat Girgis, Yan Yuan & Karen J. Goodman. Article: 1889879. Published online: 01 Mar 2021

Human change and adaptation in Antarctica: Psychological research on Antarctic wintering-over at Syowa station. Tomoko Kuwabara, Nobuo Naruiwa, Tetsuya Kawabe, Nanako Kato, Asako Sasaki, Atsushi Ikeda, Shinji Otani, Satoshi Imura, Kentaro Watanabe & Giichiro Ohno. Article: 1886704. Published online: 22 Feb 2021

School professionals committed to student well-being. Anna Onnela, Tuula Hurtig & Hanna Ebeling. Article: 1873589. Published online: 25 Jan 2021

Diabetic retinopathy awareness and eye care behaviour of indigenous women in Saskatoon, Canada. Valerie Umaefulam & Kalyani Premkumar. Article: 1878749. Published online: 25 Jan 2021

Daily moderate-intensity physical activities and optimism promote healthy ageing in rural northern Sweden: a cross-sectional study.
Katarina Öjefors Stark & Niclas Olofsson. Article: 1867439. Published online: 19 Jan 2021

Review Article (Scoping and Systematic)

Sami dietary habits and the risk of cardiometabolic disease: a systematic review. IK Dahl & C Dalgård. Article: 1873621. Published online: 19 Jan 2021

Original Research Article

Acute otitis media and pneumococcal vaccination – an observational cross-sectional study of otitis media among vaccinated and unvaccinated children in Greenland. Simon Imer Jespersen, Malene Nøhr Demant, Michael Lynge Pedersen & Preben Homøe. Article: 1858615. Published online: 07 Jan 2021

Metabolic features of adiposity and glucose homoeostasis among school-aged Inuit children from Nunavik (Northern Quebec, Canada). Thierry Comlan Marc Medehouenou, Cynthia Roy, Pierre-Yves Tremblay, Audray St-Jean, Salma Meziou, Gina Muckle, Pierre Ayotte & Michel Lucas. Article: 1858605. Published online: 04 Jan 2021

First Nations' hospital readmission ending in death: a potential sentinel indicator of inequity?
Josée Lavoie, Wanda Phillips-Beck, Kathi Avery Kinew, Grace Kyoon-Achan & Alan Katz. Article: 1859824. Published online: 13 Dec 2020

Indigenous women's reproductive health in the Arctic zone of Western Siberia: challenges and solutions. Elena Bogdanova, Sergei Andronov, Andrey Lobanov, Ruslan Kochkin, Andrei Popov, Ildiko Asztalos Morell & Jon Øyvind Odland. Article: 1855913. Published online: 07 Dec 2020

Polysubstance abuse among sexually abused in alcohol, drug, and gambling addiction treatment in Greenland: a cross sectional study. Sara Viskum Leth, Maibritt Leif Bjerrum & Birgit V. Niclasen. Article: 1849909. Published online: 29 Nov 2020
work_pdfsgqs5evg4ve5v2x7vpiqo24 ---- Shibboleth Authentication Request If your browser does not continue automatically, click work_pdki4fybifdqblxwrhto77hw7q ---- 12-0078.pmd 71 CLINICS 2008;63(1):71-6 BASIC RESEARCH Instituto de Ortopedia e Traumatologia, Hospital das Clínicas, Faculdade de Medicina da Universidade de São Paulo - São Paulo/SP, Brazil. sandraort@uol.com.br Received for publication on September 19, 2007. Accepted for publication on September 26, 2007. AN IN VITRO BIOMECHANICAL COMPARISON OF ANTERIOR CRUCIATE LIGAMENT RECONSTRUCTION: SINGLE BUNDLE VERSUS ANATOMICAL DOUBLE BUNDLE TECHNIQUES Sandra Umeda Sasaki, Roberto Freire da Mota e Albuquerque, César Augusto Martins Pereira, Guilherme Simões Gouveia, Júlio César Rodrigues Vilela, Fábio de Lima Alcarás Sasaki SU, da Mota e Albuquerque RF, Pereira CAM, Gouveia GS, Vilela JCR, Alcará F de L. An in vitro biomechanical comparison of anterior cruciate ligament reconstruction: single bundle versus anatomical double bundle techniques. Clinics. 2008;63(1):71-6. INTRODUCTION: Anterior cruciate ligament ruptures are frequent, especially in sports. Surgical reconstruction with autologous grafts is widely employed in the international literature. Controversies remain with respect to technique variations as continuous research for improvement takes place. One of these variations is the anatomical double bundle technique, which is performed instead of the conventional single bundle technique. More recently, there has been a tendency towards positioning the two bundles through double bone tunnels in the femur and tibia (anatomical reconstruction). OBJECTIVES: To compare, through biomechanical tests, the practice of anatomical double bundle anterior cruciate ligament reconstruction with a patellar graft to conventional single bundle reconstruction with the same amount of patellar graft in a paired experimental cadaver study. 
METHODS: Nine pairs of male cadaver knees ranging in age from 44 to 63 years were randomized into two groups: group A (single bundle) and group B (anatomical reconstruction). Each knee was biomechanically tested under three conditions: intact anterior cruciate ligament, reconstructed anterior cruciate ligament, and injured anterior cruciate ligament. Maximum anterior dislocation, rigidity, and passive internal tibia rotation were recorded with knees submitted to a 100 N horizontal anterior dislocation force applied to the tibia with the knees at 30, 60 and 90 degrees of flexion. RESULTS: There were no differences between the two techniques for any of the measurements by ANOVA tests. CONCLUSION: The technique of anatomical double bundle reconstruction of the anterior cruciate ligament with bone-patellar tendon-bone graft has a similar biomechanical behavior with regard to anterior tibial dislocation, rigidity, and passive internal tibial rotation. KEYWORDS: Anterior cruciate ligament. Knee. Anatomy. Comparative Study. Surgery. INTRODUCTION Anterior Cruciate Ligament (ACL) ruptures are a fre- quent injury, especially in sports. Surgical reconstruction with autologous grafts is widely employed in the international lit- erature. Controversies remain with respect to technique vari- ations as continuous research for improvement takes place. One of these variations is the anatomical double bundle tech- nique, which is performed in place of the conventional sin- gle bundle (antero-medial bundle) technique1-8. Recently, there has a tendency towards positioning of the two bundles through a double tunnel technique in the femur and the tibia9-13, the so called “anatomical technique.” The effectiveness of the “Double Bundle” technique has been questioned by some authors14-16, who found similar 72 CLINICS 2008;63(1):71-6An in vitro biomechanical comparison of anterior cruciate ligament reconstruction Sasaki SU et al. 
outcomes when these were compared to conventional “Sin- gle Bundle” reconstruction. Harner17, in 2004, posed the question: “Double Bundle or Double Trouble?” Morbidity, biomechanical advantages, and the duration of surgery have been specifically questioned. In light of this tendency, we propose our study, a di- rect comparison between the two techniques involving a biomechanical evaluation of the single bundle ACL tech- nique versus anatomical double bundle reconstruction us- ing cadaveric knees and patellar grafts (with no graft amount variation). MATERIALS AND METHODS Specimen Preparation Eighteen fresh frozen human undamaged cadaveric knees (nine pairs), all from males ranging in age from 44 to 63 years, were used in this study. The femur was cut 20 cm, and the tibia 30 cm from the joint line. The iliotibial tract up to mid-tight, the popliteus musculotendinous unit, and the joint capsule were left intact. The fibula was se- cured to the tibia to simulate the restraint provided by the interosseous membrane. The knees were stored at -20°C, and were thawed for 24 hours at room temperature before testing. Prior to the procedures, they were submitted to a primary arthroscopic inspection to rule out any previous intra-articular lesions. Biomechanical tests Anterior tibial displacement simulating the anterior drawer test with the knees at 30, 60 and 90 degrees flexion were the biomechanical tests performed. All knee speci- mens were tested under three conditions (“Intact,” “Recon- structed,” and “Injured,” in this order), and every test had three cycles, for which the third test’s data were recorded. The knees were first tested after an initial inspection by arthroscopy (“Intact Condition”). 
They were set up in a Kratos 5002 Universal Biomechanical Test Machine with a 100 kgf load cell connected to a computer system, where anterior displacement (millimeters) and stiffness (Newtons/ milimiters) data were recorded, while concomitant inter- nal tibia rotation (degrees) data was recorded by digital photography. A continuous velocity (20mm/min) was ap- plied by a 100N load cell in the tests. The tibia and femur were fixed to steel tubes with screws. Mounting was accomplished using grips for each bone that allowed their precise positions to be adjusted. The tibia was secured first, with its shaft ranging at 30, 60 and 90 degrees to the load cell axis. Tibia rotation, varus, val- gus and translation were allowed by the system. There was a degree graduation mark on the tibial steel tube and a nee- dle to register initial and final tibial rotation values (Fig- ure 1). The unsecured femur was placed with its shaft aligned along the axis of the load cell and its weight sup- ported by the lower grip (Figure 2). Before each test, the tibia was placed in a rotational position midway between its limits of internal and external rotation, and its rotational position in degrees was recorded. The “zero test point” was determined by a previous short cycle with a posterior drawer followed by an ante- rior drawer under 50 N load (Figure 3), and the inflexion point of this curve was registered as the “zero point.” All Figure 1 - Tibial rotation data recording device. Figure 2 - Biomechanical test apparatus. 73 CLINICS 2008;63(1):71-6 An in vitro biomechanical comparison of anterior cruciate ligament reconstruction Sasaki SU et al. biomechanical tests in the protocol were begun with a short posterior drawer to ensure the presence of the predeter- mined “zero point” on the anterior drawer curves of the three cycles (Figure 4). 
The specimens were not discon- nected from the original steel tube fixation at any time, in- cluding during surgeries, when the whole system was dis- connected from the Kratos Machine grips. Lesions The ACL lesions were made through an arthroscopic procedure after the “intact” tests and before the “recon- structed” tests. After the “reconstructed” tests, all bicortical screws were removed followed by the removal of the patellar grafts from the knees. Groups Specimens were divided in two groups, with one knee from each pair. In group A, single bundle reconstruction was performed, and in group B, the opposite side knees were submitted to anatomic double bundle reconstruction (Figure 5). Surgical technique 1) Group A (single bundle): After the original ACL resection, the group A knees were submitted to ACL conventional single bundle recon- struction with a bone-tendon-bone patellar graft. The tibial tunnel parameters for the tibial guide location were 7 mm in front of the PCL (Posterior Cruciate Ligament) at the medial tibial spine basis. For the femoral tunnel, a 7 mm offset guide was positioned in the notch at 11 h in the right knees and 1 h in the left knees. The grafts’ bone plug di- mensions were 2.5 cm in length, 10 mm wide and 10 mm deep. Tibial and femoral graft fixation was performed with two sutures (polyester No. 5) through the bone plugs at- Figure 3 - “Zero point” cycle. Figure 4 - Displacement versus three cycles of load curve. Figure 5 - Biomechanical tests and groups. 74 CLINICS 2008;63(1):71-6An in vitro biomechanical comparison of anterior cruciate ligament reconstruction Sasaki SU et al. tached to a transversal bicortical screw and washer. 2) Group B ( Anatomic Double Bundle): Anatomical double bundle reconstruction through dou- ble femoral and double tibial tunnels was performed using a patellar graft divided longitudinally into two parts with the same length, 5 mm thickness, and 5 mm depth (Figure 6). 
The posterior lateral (PL) tibial tunnel location param- eters were 7 mm in front of the PCL, behind the lateral tibial spine18, with the extra-articular location at the me- dial tibial anterior cortical diverging 70 degrees from the coronal plane. The anterior medial (AM) tunnel’s location was 7 mm in front of the posterior lateral guide wire, be- tween the tibial spines, with 50 degrees angulation to the coronal plane. After each tunnel was ready, we confirmed its integrity by introducing an arthroscope into the tunnel. For the femoral tunnels, we used a 5 mm offset femo- ral guide introduced through the tibial tunnel when possi- ble. On two occasions, for the PL femoral tunnel, we had to introduce the femoral guide through the anteromedial portal. The PL tunnel was located on the notch at 9:30 hs on the right knees and 14:30 hs on the left knees. The sec- ond femoral tunnel (AM) was at 11 hs on right knees and 13 hs on the left knees (Figure 7). The posterior lateral patellar graft was introduced first through the tunnels, followed by the AM graft. The PL graft was tensioned and fixed at 15 degrees and the AM graft at 90 degrees of knee flexion by two sutures (polyester No. 5) through each bone plug attached to transversal bicortical screws and washers, two on the femur, and two on the tibia. Statistics Statistical analysis was performed with Analysis of Vari- ance (ANOVA) of groups with the third cycle data tests of dislocation (millimeters) and stiffness media (Newtons/ millimeters). Evaluation Condition test differences were determined by the Bonferroni test method. The á value was 5%, and lower p values represented statistically significant differences. RESULTS The anterior drawer dislocation results are presented in millimeters and stiffness media in Newtons per millimeter for each evaluation condition: intact, injured, and repaired at 30, 60, and 90 degrees of knee flexion (Table 1). Statis- tical analysis showed no differences between groups A and B. 
The tibial passive internal rotation data are presented in degrees for the intact and repaired conditions at 30, 60, and 90 degrees of knee flexion (Table 2). Again, there were no statistical differences between groups A and B DISCUSSION Today’s technical improvement theme for ACL reconstruc- tion certainly includes the double bundle proposal1-8,11-13. Recently, there has been a tendency towards the “anatomi- cal double bundle reconstruction” technique with four bone tunnels, of which two are in the femur and two are in the tibia.5,11-13 The superiority of the double bundle technique in the literature is questionable as some authors have found simi- lar results when comparing it with the single bundle tech- nique15,16,19, while others have found better results with the Figure 6 - Patellar graft division. Figure 7 - Group B: four tunnels. 75 CLINICS 2008;63(1):71-6 An in vitro biomechanical comparison of anterior cruciate ligament reconstruction Sasaki SU et al. double bundle method.13,14 In 1999, Edwards20 showed the influence of the amount of graft on double bundle ACL reconstruction results, and questioned the real advantage of this technique. It is pos- sible that the better double bundle results found in some publications13,14 stem from this facet. In light of these studies, we present a direct biomechanical comparison between single and anatomical double bundle ACL reconstructions with the same total amount of bone-patellar tendon-bone graft. One of the rea- sons for the patellar graft choice was the possibility of ex- act division into two equal parts. The other reason was to avoid overstraining of the bundles, a problem documented by Miura21 in 2006 for hamstrings grafts. One difficulty of double bundle technique execution was the femoral guide location on the tibial tunnel. As such, it was alternatively introduced through the anteromedial por- tal in two specimens. Our results showed no differences between the two tech- niques for any of the measurements. 
This does not neces- sarily mean that the two techniques are similar under clini- cal conditions, as testing was limited to a non-cyclical load- ing biomechanical comparison (limited by our conventional tensile tester) on cadaveric specimens, which evaluated only immediate biomechanical results under experimental con- ditions. However, we could compare the results with the “intact condition” parameters (not possible in patients), concluding that both techniques could not restore them completely but have the same capacity to improve the “lesioned condition” parameters. CONCLUSIONS Reconstructions of anterior cruciate ligaments with the anatomical double bundle technique and with the conven- tional single anterior medial bundle technique with the same amount of bone-patellar tendon-bone graft have simi- lar initial biomechanical behaviors with regard to anterior tibial dislocation, rigidity and passive internal tibial rota- tion. Table 1 - Anterior drawer and stiffness. Data Registered Dislocation (mm) Stiffness (N/mm) Flexion Angle 30° 60° 90° 30° 60° 90° Knee condition A B A B A B A B A B A B Intact 5.93 6.41 5.52 6.41 4.72 4.91 27.56 28.74 34.02 33.62 40.03 40.93 Injured 17.07 18.57 13.99 14.72 11.14 12.75 15.93 15.17 16.20 17.43 18.91 20.77 Repaired 10.90 11.20 8.46 9.13 6.60 7.65 16.24 16.08 19.03 18.32 22.90 23.71 p P=0.47 p=0.59 P=0.23 P=0.93 P=0.97 P=0.45 Table 2 - Tibial passive internal rotation. Flexion Angle 30° 60° 90° Knee condition A B A B A B Intact 2.50 1.94 3.44 2.89 2.61 3.17 Repaired 2.28 1.89 4.17 3.67 4.17 4.33 p P=0.59 P=0.67 0.74 REFERENCES 1. Andrews JR, Sanders R. A “mini-reconstruction” technique in treating anterolateral rotatory instability (ALRI). Clin Orthop Relat Res. 1983;172: 93-6. 2. Mott HW. Semitendinosus Anatomic Reconstruction for Cruciate Ligament Insufficiency. Clin Orthop Relat Res.1983;172: 90-2. 3. Zaricznyj B. Reconstruction of the anterior cruciate ligament of the knee using a doubled tendon graft. Clin Orthop Relat Res. 
1987; 220:162-75. 4. Sakane M, Fox RJ, Woo SLY, Liversay GA, Li G, Fu FH. In situ forces in the anterior cruciate ligament and its bundles in response to anterior tibial loads. J Orthop Res. 1997;2: 285-93. 5. Muneta T, Sekyia I, Yagishita K, Ogiuchi T, Yamamoto H, Shinomiya K. Two bundle reconstruction of the anterior cruciate ligament using semitendinosus tendon with endobuttons: operative technique and preliminary results. Arthroscopy. 1999;6:618-24. 6. Kubo T, Hara K, Suginoshita T, Shimizu C, Tsujihara T, Honda H, et al. Anterior cruciate ligament reconstruction using the double method; J Orthop Surg. 2000;2: 59-63. 76 CLINICS 2008;63(1):71-6An in vitro biomechanical comparison of anterior cruciate ligament reconstruction Sasaki SU et al. 7. Hara K , Arai Y, Ohta M, Minami G, Urade, H hirai N, Watanabe N, et al. A new double-bundle anterior cruciate ligament reconstruction using the posteromedial portal technique with hamstrings. Arthroscopy. 2005;10:1274-6. 8. Yagi M, Wong EK, Kanamori A, Debski RE, Fu FH, Woo SLY. Biomechanical analysis of an anatomic anterior cruciate ligament reconstruction. Am J Sports Med. 2002;30:660-6. 9. Franceschi P, Sbihi A, Champsaur P. Recontruction arthroscopique à double faisceau antéro–médial et postéro-latéral du ligament croisé antérieur. Rev Chir Orthop. 2002;6:691-7. 10. Ishibashi Y, Tsuda E, Tazawa K, Toh S. Intraoperative evaluation of the anatomical double – bundle anterior cruciate ligament reconstruction with the orthopilot navigation system. Orthopedics. 2005;28:s1277-82. 11. Fu FH, Starman JS. Honing the anatomic double-bundle ACL reconstruction surgical technique. Orthopedics Today on line by Orthosupersite on web. 2006. 12. Zelle BA, Brucker PU, Feng MT, Fu FH. Anatomical double-bundle anterior cruciate ligament reconstruction. Sports Med. 2006;2:99-108. 13. Yasuda K, Kondo E, Ichiyama H, Tanabe Y, Tohyama H. 
Clinical evaluation of anatomic double –bundle anterior cruciate ligament reconstruction procedure using hamstring tendon grafts: comparisons among 3 different procedures. Arthroscopy. 2006;22:240-51. 14. Mae T, Shino K, Miyama T, Shinjo H, Ochi T, Yoshikawa H, et al. Single- versus two-femoral socket anterior cruciate ligament reconstruction technique: biomechanical analysis using a robotic simulator. Arthroscopy. 2001;17:708-16. 15. Albuquerque RFM, Sasaki SU, Amatuzzi MM, Angelini FJ. Anterior Cruciate Ligament reconstruction with double bundle versus single bundle: experimental study. Acta Ortop Bras. 2007;3:335-44. 16. Adachi N, Ochi Y, Iwasa J, Kuriwaka M, Ito Y. Reconstruction of the Anterior cruciate ligament. J Bone Joint Surg Br. 2004;4:516-20. 17. Harner CD. Double Bundle or Double Trouble? Arthroscopy. 2005;10:1013-4. 18. Hamada M, Shino K, Miyama T, Shinjo H, Ochi T, Yoshikawa H, et al. Single versus bi–socket anterior cruciate ligament reconstruction using autogenous multiple-stranded harmstring tendons with endobutton femoral fixation: a prospective study. Arthroscopy. 2001;17:801-7. 19. Sbihi A, Franceschi JP, Christel P, Colombet P, Dijian P, Bellier G. Reconstruction du ligament croisé antérieur par greffe de tendons de la patte d’oie à un ou à deux faisceux. Rev Chir Orthop. 2004;7:643-50. 20. Edwards TB, Guanche CA, Petrie SG, Thomas KA. In vitro comparison of elongation of the anterior cruciate ligament and single-and dual-tunnel anterior cruciate ligament reconstructions. Orthopedics. 1999; 22:577- 84. 21. Miura K, Savio L-Y W, Brinley YCF, Noorani S. Effects of knee flexion angles for graft fixation on force disrtribution in double bundle anterior cruciate ligament grafts. Am J. Sports Med. 2006;34:577-85. 
work_pfotatpxqrfqji5buv2wwy647e ---- [PDF] Selfie stick: An extension of the photographer's hand in operation room conditions | Semantic Scholar Skip to search formSkip to main content> Semantic Scholar's Logo Search Sign InCreate Free Account You are currently offline. Some features of the site may not work correctly. DOI:10.4103/ijps.IJPS_187_16 Corpus ID: 11122419Selfie stick: An extension of the photographer's hand in operation room conditions @article{encan2017SelfieSA, title={Selfie stick: An extension of the photographer's hand in operation room conditions}, author={A. Şencan and Mehmet Baydar and K. Ozturk and O. Orman}, journal={Indian Journal of Plastic Surgery : Official Publication of the Association of Plastic Surgeons of India}, year={2017}, volume={50}, pages={115 - 116} } A. Şencan, Mehmet Baydar, +1 author O. Orman Published 2017 Medicine Indian Journal of Plastic Surgery : Official Publication of the Association of Plastic Surgeons of India Best photographs are captured by professional photographers in standard conditions. Medical photographers are professionalists who photograph patients in clinics and operation rooms and aware of sterile procedures. Unfortunately, not every medical facility employs a medical photographer. Shooting a picture intraoperatively can sometimes become a challenge when there is no one but untrained staff available.  View on PubMed thieme-connect.de Save to Library Create Alert Cite Launch Research Feed Share This Paper Figures and Topics from this paper figure 1 figure 2 Infertility Clinic Health care facility photograph One Citation Citation Type Citation Type All Types Cites Results Cites Methods Cites Background Has PDF Publication Type Author More Filters More Filters Filters Sort by Most Influenced Papers Sort by Citation Count Sort by Recency Selfie stick: An extension of the photographer's hand in operation room conditions A. Şencan, Mehmet Baydar, K. Ozturk, O. 
Orman Medicine Indian journal of plastic surgery : official publication of the Association of Plastic Surgeons of India 2017 PDF Save Alert Research Feed References SHOWING 1-4 OF 4 REFERENCES Waterproof Camera Case for Intraoperative Photographs M. Raigosa, J. Benito-Ruiz, J. Fontdevila, J. Ballesteros Medicine Aesthetic Plastic Surgery 2007 8 Save Alert Research Feed A useful tool for intraoperative photography: underwater camera case. Soner Tatlidede, O. Egemen, L. Bas Medicine Annals of plastic surgery 2008 5 Save Alert Research Feed A safe and efficient method for intra-operative digital photography using a waterproof case. J. Tsai, Han-Tsung Liao, +4 authors C. Chen Medicine Journal of plastic, reconstructive & aesthetic surgery : JPRAS 2011 6 Save Alert Research Feed Selfie stick: An extension of the photographer's hand in operation room conditions A. Şencan, Mehmet Baydar, K. Ozturk, O. Orman Medicine Indian journal of plastic surgery : official publication of the Association of Plastic Surgeons of India 2017 Save Alert Research Feed Related Papers Abstract Figures and Topics 1 Citations 4 References Related Papers Stay Connected With Semantic Scholar Sign Up About Semantic Scholar Semantic Scholar is a free, AI-powered research tool for scientific literature, based at the Allen Institute for AI. Learn More → Resources DatasetsSupp.aiAPIOpen Corpus Organization About UsResearchPublishing PartnersData Partners   FAQContact Proudly built by AI2 with the help of our Collaborators Terms of Service•Privacy Policy The Allen Institute for AI By clicking accept or continuing to use the site, you agree to the terms outlined in our Privacy Policy, Terms of Service, and Dataset License ACCEPT & CONTINUE work_ph3if5to6jbpbectvb2k3zttmi ---- Fedora: 404 Not Found 404 Not Found No such object, datastream, or dissemination. [DefaulAccess] No datastream could be returned. 
Either there is no datastream for the digital object "uk-ac-man-scw:179097" with datastream ID of "null " OR there are no datastreams that match the specified date/time value of "null " . work_phuy56bf4rcwlp7f5ehoygvaue ---- CCID-27902-emerging-technologies-for-the-detection-of-melanoma---achiev © 2012 Herman, publisher and licensee Dove Medical Press Ltd. This is an Open Access article which permits unrestricted noncommercial use, provided the original work is properly cited. Clinical, Cosmetic and Investigational Dermatology 2012:5 195–212 Clinical, Cosmetic and Investigational Dermatology Emerging technologies for the detection of melanoma: achieving better outcomes Cila Herman Department of Mechanical Engineering, Johns Hopkins University, Baltimore, MD, USA Correspondence: Cila Herman Department of Mechanical Engineering, Johns Hopkins University, 102 Latrobe Hall, 3400 North Charles Street, Baltimore, MD 21218-2682, USA Tel +1 410 516 4467 Fax +1 410 516 7254 Email cherman@jhu.edu Abstract: Every year around 2.5–3 million skin lesions are biopsied in the US, and a fraction of these – between 50,000 and 100,000 – are diagnosed as melanoma. Diagnostic instruments that allow early detection of melanoma are the key to improving survival rates and reducing the number of unnecessary biopsies, the associated morbidity, and the costs of care. Advances in technology over the past 2 decades have enabled the development of new, sophisticated test methods, which are currently undergoing laboratory and small-scale clinical testing. This review highlights and compares some of the emerging technologies that hold the promise of melanoma diagnosis at an early stage of the disease. The needs for detection at different levels (patient, primary care, specialized care) are discussed, and three broad classes of instruments are identified that are capable of satisfying these needs. 
Technical and clinical requirements on the diagnostic instruments are introduced to aid the comparison and evaluation of new technologies. White- and polarized-light imaging, spatial and spectroscopic multispectral methods, quantitative thermographic imaging, confocal microscopy, Optical Coherence Tomography (OCT), and Terahertz (THZ) imaging methods are highlighted in light of the criteria identified in the review. Based on the properties, possibilities, and limitations of individual methods, those best suited for a particular setting are identified. Challenges faced in development and wide-scale application of novel technologies are addressed. Keywords: Infrared imaging, thermography, melanoma detection and diagnosis, quantitative imaging, in vivo diagnostics Introduction and background Melanoma incidence is increasing at one of the fastest rates for all cancers in the US, with a current lifetime risk of one in 55.1 Every year, approximately 68,000 melanomas will be diagnosed, with around 8700 resulting in death. If melanoma is detected at an early stage, before the tumor has penetrated the epidermis, the 5-year survival rate is about 99%.2 However, the 5-year survival rate drops dramatically – to 15% – for patients with advanced disease.2,3 At present, there are no systemic treatments available that significantly extend the life span of patients with advanced melanoma;4,5 therefore, the key to extended survival is early detection and treatment.6,7 In order to enable early detection and diagnosis, avoid unnecessary biopsies, and ultimately reduce the cost of care, it is essential to develop accurate, sensitive, and objective quantitative diagnostic instruments that have the potential to have a deep impact on the disease. Dramatic advances in solid-state electronics and instrumentation, imaging hardware, computers, and software tools, including image-analysis techniques, have opened new avenues for quantitative imaging for the detection of melanoma. 
This review highlights and Dovepress submit your manuscript | www.dovepress.com Dovepress 195 R E v I E w open access to scientific and medical research Open Access Full Text Article http://dx.doi.org/10.2147/CCID.S27902 C lin ic a l, C o sm e tic a n d I n ve st ig a tio n a l D e rm a to lo g y d o w n lo a d e d f ro m h tt p s: // w w w .d o ve p re ss .c o m / b y 1 2 8 .1 8 2 .8 1 .3 4 o n 0 6 -A p r- 2 0 2 1 F o r p e rs o n a l u se o n ly . Powered by TCPDF (www.tcpdf.org) 1 / 1 mailto:cherman@jhu.edu www.dovepress.com www.dovepress.com www.dovepress.com http://dx.doi.org/10.2147/CCID.S27902 Clinical, Cosmetic and Investigational Dermatology 2012:5 compares some of the emerging technologies that hold the promise of melanoma diagnosis. The emphasis of the review is placed on the technology aspect of the instrumentation, recent advances, challenges, and research needs. While most cancers develop deeper in the body and cannot be viewed with the naked eye, melanoma most often develops on the skin and is easily visible. The challenge that remains is recognizing it early, so that treatment can be initiated before the tumor reaches a thickness of 0.76 mm.8 Detecting melanoma with the human eye can be challenging, because of the diversity of possible presentations during the early stage of the disease. While trained dermatologists can reliably recognize advanced melanoma by visual inspection (relying on the Asymmetry, Border irregularity, Color variegation, Diameter . 6 mm, Evolution [ABCDE] criteria) melanomas at their very early and most curable stages often mimic histologically benign look-alike lesions. Instruments that can capture and quantify information not readily available to the human eye and aid in discerning whether a lesion is benign or malignant could dramatically improve survival rates and reduce health-care costs. 
Early detection of melanoma relies on periodic (yearly) complete and thorough examinations by physicians, supplemented by more frequent (monthly) self- exams by patients. Both physicians and patients would benefit from accurate and reliable instruments able to yield objective and quantitative assessment of lesions. These scanners can serve as preliminary screening tools, allowing more time for the clinician and specialist to focus on lesions that are more likely to be malignant. Types of melanoma detectors based on end user Therefore, the long-term goals of research efforts when devel- oping instruments for the noninvasive in vivo diagnosis of mela- noma are to develop three key types of devices (scanners): • Type I: Relatively low-cost, easy-to-use, handheld scanners for screening purposes (for self-exams of at-risk patients or screening in primary care facilities), providing an indication whether a lesion has a potential for malignancy (in this case, the patient would be advised to seek medical care with advanced diagnostics methods). Currently, full-body photography is the tool available for patients’ self-exams, and this method relies on the subjective visual comparison of the real-life lesion and the previously photographed lesion. Modern technologies offer a range of more objective methods to aid the self-exams of patients. • Type II: Precision scanners that provide accurate and detailed visual and/or quantitative characterization of individual lesions for use in specialized clinics by trained professionals, to aid decisions regarding the need for biopsy or the extent of surgery. The focus of current research efforts is the development of these instruments, and several promis- ing techniques have emerged over the past two decades. The transition from laboratory demonstrations and small-sample pilot trials to clinical practice remains a challenge. 
• Type III: Full-body-imaging systems that would ideally map the entire body surface of the patient in three dimensions, with sufficient resolution to provide three-dimensional (3D) information similar to the currently used digital full-body photography. This imaging system would serve as a catalog of all lesions. White-light imaging could be combined and coupled with one of the more advanced and sensitive imaging systems with advanced diagnostic capabilities. These imaging systems are essential to provide objective reference baseline information when tracking the evolution of lesions; however, they also hold the promise of diagnostic capabilities. With advances in telecommunication and information technologies, telemedicine, the technology of providing clinical health care at a distance is becoming a reality. All three types of scanners would support telemedicine applications by allowing the development of an image repository for a patient as well as the tracking of the patient’s lesions over time and enable long-term follow-up care. Since melanoma is becoming a widespread health problem, telemedicine combined with quality information from scanners would be an invaluable tool in geographic regions where specialized care is not easily accessible. It would also help in eliminating distance barriers and improving access to medical services that would often not be consistently available. Requirements Ideally, a melanoma-detection instrument (scanner) should satisfy a range of requirements: 1. The scanner should provide an objective assessment of the lesion by measuring some physical properties of the tissue (by quantitatively comparing healthy and diseased tissue) and lesion, or by comparing the parameters of the diseased tissue with a previously established scale in order to eliminate the subjective component in the interpretation of the results. 2. 
The outcome of the scanning and data-analysis process should be delivered in quantitative form (for example, by characterizing the malignant potential of skin lesions with a simple, numeric, quantitative scale), followed by submit your manuscript | www.dovepress.com Dovepress Dovepress 196 Herman C lin ic a l, C o sm e tic a n d I n ve st ig a tio n a l D e rm a to lo g y d o w n lo a d e d f ro m h tt p s: // w w w .d o ve p re ss .c o m / b y 1 2 8 .1 8 2 .8 1 .3 4 o n 0 6 -A p r- 2 0 2 1 F o r p e rs o n a l u se o n ly . Powered by TCPDF (www.tcpdf.org) 1 / 1 www.dovepress.com www.dovepress.com www.dovepress.com Clinical, Cosmetic and Investigational Dermatology 2012:5 a conclusion/recommendation regarding the nature of the lesion. 3. In its basic form (type I and type III scanners used for screening), the screening scanner would not require a trained radiologist for interpretation of the images. Data and images from a type II device would be both quantitative and qualitative, and allow analysis of the characteristic features of a lesion in more detail. Specialized training would be necessary for the interpretation of data delivered by type II scanners. 4. The device and the measurement method should be sensitive enough to deliver a strong signal even for very early stage melanoma lesions. 5. The measurement method should be robust and insen- sitive to differences in ambient and measurement conditions and local and individual variations of skin properties. 6. Ideally, measurement should be relative, in that it compares the properties of the lesion and the healthy tissue of the same subject, rather than comparing the responses of different individuals. In this way, the influence of individual variations of skin-tissue properties can be greatly reduced or even eliminated. 7. 
The scanner should be easy to use without sophisticated and specialized training (except for some applications of type II scanners), so that it is suitable for self-exams of patients between appointments. 8. Ideally, the device would not require perfect contact with the skin or the use of an immersion fluid for imaging. 9. The measurement should be noninvasive, by either not requiring irradiation of the tissue or keeping irradiation low, to minimize affecting or causing damage to the skin. Some scanning methods require, for example, irradiation with light, sound, or electromagnetic (EM) radiation, while others measure the radiation naturally emitted by the skin/human body. Other interactions with the skin of the patient, eg, thermostimulation by cooling, should be moderate, in order to avoid or minimize discomfort during the scanning process. 10. Ideally, the same measurement method should be used for scanning of individual lesions (type I and type II devices), as in a full-body scanner (type III). 11. The exclusion criteria should not limit the use of the device significantly. 12. The scanner should be reliable in operation, small in size and be low-cost, sensitive in terms of measure- ment capabilities, and robust in use, with a strong support base for manufacturing, customer support, and repair. 13. The scanner(s) should be easy to integrate into telemedicine applications and systems. 14. The scanner(s) should have the ability to interface with the Internet in order to deliver information and images into the patient’s record. 15. The scanner should have the ability to interface with the manufacturer’s database and update the image library as new lesions are scanned. The interaction with the manufacturer’s system should also allow seamless updates of the software as the database and software are updated (as opposed to purchasing upgrades every couple of years, for example). 
As the calibration database is continuously enhanced and upgraded, the improvements should be made available to the end user continuously and seamlessly.
16. The technology should be readily available commercially within a short time (a few years, depending on the length of the FDA approval process) and should be reliable and relatively low-cost.
The imaging systems currently under development as candidates for melanoma detection satisfy different subsets of this list: no single technology can satisfy all requirements at the same time. The emphasis in this review is on technologies that offer the potential of in vivo measurement, since in vivo diagnostic tools are essential and very much in demand in many fields of medicine, and in melanoma diagnosis in particular. In general, they can provide guidance for surgical interventions by detecting lesion margins, or allow general biopsies of suspicious tissues to be replaced by more targeted biopsies (typically, only one out of 30–40 lesions biopsied is a melanoma). In this way, in vivo imaging methods would reduce unnecessary tissue excisions and the associated pathology costs, as well as biopsy-associated risks and morbidity. Other clinical applications of these imaging instruments include monitoring the effect of therapies, in order to tailor drug or radiation treatments to the individual responses of patients. While a considerable amount of effort has been devoted to the development of type II methods and devices, much less effort has been dedicated to type I and type III scanners. The existence of an accurate and sensitive screening tool for patient self-exams (similar to pressure cuffs or blood glucose monitors) would have a dramatic impact on the early detection of melanoma. Some methods are well suited for in vitro imaging and tests, which is useful when there is a need for fast processing of large numbers of samples.
By quantifying the imaging outcomes and screening out cases that are not likely to be malignant, the pathologist is able to devote more attention to the samples that indicate possible malignancy.
Classification of detectors based on detection method
Direct-contact methods vs noncontact methods
The methods suitable for melanoma diagnosis can be classified into those that require direct contact between the radiation source, skin, and receiving optics, and methods that do not require direct contact (remote methods). Direct-contact methods sometimes require immersion fluids between the skin and the optics to avoid losses caused by the mismatch of the optical properties of the media through which the signal propagates. The use of immersion fluids adds complexity to the measurement, and these methods are typically not suitable for full-body scanners. Noncontact or remote measurements can be affected by involuntary movement of the subject, or even breathing, and they require motion-tracking algorithms to be included in the image-analysis and processing steps when high-accuracy measurements are needed. Another classification is based on the need to apply external forcing to the investigated system before or during a measurement.
Active vs passive methods
Active methods involve forcing (heating, cooling, application of pressure, application of a tracer or a contrast medium, etc) and measure the response of the system to the applied forcing.
Passive methods characterize the system as is, ie, they measure steady-state properties. Passive methods are easier to apply; however, active methods may be faster, yield a stronger signal, and be more sensitive and accurate. The diagnostic method may require exposing the subject to an external source of radiation (X-rays, ultrasound, or light, measuring the reflected, scattered, or transmitted portion of that radiation) or may measure the radiation naturally emitted by the subject. The latter approach is easier to apply and does not require a radiation source (making the instrument smaller and less complex); however, there is little flexibility in influencing the nature or magnitude of the measurement signal.
Image-forming vs quantitative methods
Diagnostic methods in medicine can be classified into image-forming methods (information characterizing the lesion is contained in an image, such as a digital photograph, ultrasound image, or X-ray image) and quantitative methods, which deliver a quantitative signal (for example, in the form of an electric current or potential difference). Visual inspection using the ABCDE criteria is essentially an image-forming method, with the visual information regarding the characteristics of light reflected from the skin being detected by the human eye and processed by the brain. The human brain is capable of processing large amounts of visual information quickly and, after suitable training, of comparing this information with a mental database of images. The diagnosis has a subjective component, and the experience of the person conducting the analysis plays a significant role. Describing a visual image with numbers (quantifying it) to allow rigorous quantitative comparison with other images is a very complex process. Fully emulating the visual image analysis and decision-making of the human brain with a computer is not possible at this stage of technical development.
However, modern computer-based image-processing tools can aid the quantification process by reducing the vast amount of information contained in visual images to a limited amount of quantitative data relevant for a particular diagnosis. Quantitative methods deliver data that are easily quantified and compared (such as peaks in a radiation spectrum); however, they lack the subjective visual detail that visual information can easily convey. An essential component of a melanoma-detection system is a calibration database that is proprietary to the manufacturer of the instrument. This database evolves as more information (test data) becomes available during the development and use of the instrument. The calibration database is established by comparing the measurement signal to the gold standard, the pathology result, for the evaluated lesion. The accuracy of detection is expected to improve over time as new lesions are added to the library of lesions. The lesion-classification outcomes will vary from method to method, and it is essential to "teach" the computer how to discriminate lesions based on the data accumulated in the calibration database. The lesion-classification outcomes are established using biostatistical methods. An example of this kind of database is the one established for MelaFind based on clinical studies.
Stages of instrument development
Instrumentation for the early detection of melanoma is complex, both in terms of hardware and software and in terms of clinical testing, and the development of a new detection or imaging system is a multidisciplinary effort accomplished in several stages:
1. The first step is the proof-of-concept study, involving the development and testing of a laboratory prototype instrument in a laboratory environment on an experimental phantom, in vitro, or on animal models.
2. This step is followed by modifying the hardware for clinical studies and demonstrating the feasibility of the method in patient studies with a limited number of subjects. Initially, the image and data analysis are manual; after the initial learning phase, the objective is to develop automated analysis and classification criteria in the development steps to follow, which is a challenging task.
3. Successful feasibility testing on a limited number of subjects is followed by the development of equipment suitable for clinical testing on a larger number of patients to establish the calibration database. Automated image-processing algorithms and data-processing software are developed based on the data collected in the calibration database (image library). The creation and initial validation of the database, software, and decision-making criteria is an iterative process requiring testing on a large number of patients and assessment of a large number of benign and malignant lesions.
4. Once both the software and the hardware have reached the necessary level of technological readiness (the accuracy and sophistication required for clinical evaluation), the FDA approval process can begin. In the US, the first step in the approval is the clinical assessment of the instrument to gather data for the premarket approval (PMA) report and documentation.
5. After receiving PMA, the market launch of the instrument follows. For example, it took around 10 years for MelaFind to mature from early proof-of-concept efforts to PMA.
There are numerous hurdles along the path of new medical device development, and the journey is difficult, costly, and time-consuming. In spite of the many sophisticated laboratory imaging systems for melanoma described in the literature, automated quantitative detection has remained an elusive goal, primarily because of the cost, effort, and time needed to develop a sophisticated, sensitive, and reliable device. The recent launch of the MelaFind scanner for the detection of melanoma is a step in the right direction. The development of screening instruments remains a challenge for the future, and it will require significant investment of resources, both public and private, to evaluate different technologies in order to develop instruments best suited for a particular application. It appears that justifying the need for investing in diagnostic tools for skin cancer, which is directly visible on the skin surface, is more difficult than for other conditions that cannot be directly visualized by the naked eye.
Electromagnetic radiation in medical diagnostics
EM radiation is the most commonly used means of probing human tissue in medical diagnostic and imaging instruments (X-rays, magnetic resonance imaging [MRI], digital photography, etc). EM radiation is a form of energy emitted or absorbed by matter, which behaves like a wave when traveling through space. It is characterized by electric and magnetic field components, which oscillate in phase perpendicular to each other and perpendicular to the direction of propagation of the wave and energy. In vacuum, EM radiation propagates at the speed of light, c0 = 2.998 × 10^8 m/s. EM radiation has practically no inertia; therefore, it can be used to analyze high-speed, unsteady processes in the human body. Its interaction with living tissue depends on the properties of the radiation: the frequency ν (or wavelength λ) and the power.
For radiation propagating in a particular medium, these two properties are related as λ = c/ν. EM radiation carries energy through space, away from the source and towards the matter with which it interacts. The EM spectrum is illustrated in Figure 1, in order of increasing frequency and decreasing wavelength. The black-body radiation spectra for thermal radiation (left) and IR radiation (right) are depicted in the top portion of Figure 1. The longest wavelengths in the EM spectrum are called radio waves, followed by microwaves, IR radiation (responsible for thermal effects), visible light, ultraviolet radiation, X-rays, and gamma rays. The intermediate portion of the spectrum, which extends from 0.1 to 100 µm, is called thermal radiation, and it includes portions of the ultraviolet and all of the visible and IR spectra, as shown in Figure 1. The human eye is sensitive to a relatively small window of wavelengths of EM radiation called the visible spectrum, which is in the range of 0.4–0.7 µm. The nature of the interaction of EM radiation with biological tissue depends on the properties of the radiation: its power and frequency (wavelength). For longer wavelengths and lower frequencies (radio waves, microwaves, IR radiation), the interaction with cells (and other materials), and the potential damage to them, is characterized by heating effects associated with the radiation power. For shorter-wavelength, higher-frequency radiation (ultraviolet frequencies and above), such as X-rays and gamma rays, the potential damage to chemical structures and bonds and to living cells can be far greater than that done by simple heating.
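The wavelength-frequency relation λ = c/ν quoted above can be made concrete with a short numeric check; the sketch below is illustrative (the function name is invented), using the speed of light and visible-band limits stated in the text:

```python
# Illustrative numeric check of lambda = c / nu for EM radiation in
# vacuum (so nu = c / lambda). Constants are taken from the text:
# c0 = 2.998e8 m/s, visible band 0.4-0.7 um.
C0 = 2.998e8  # speed of light in vacuum, m/s

def frequency_hz(wavelength_m: float) -> float:
    """Frequency (Hz) corresponding to a vacuum wavelength (m)."""
    return C0 / wavelength_m

nu_violet = frequency_hz(0.4e-6)  # short-wavelength edge of the visible band
nu_red = frequency_hz(0.7e-6)     # long-wavelength edge of the visible band
# Shorter wavelength -> higher frequency, matching the ordering in Figure 1.
print(f"visible band: {nu_red:.2e} Hz to {nu_violet:.2e} Hz")
```

The visible band thus corresponds to frequencies on the order of 10^14 Hz, consistent with the frequency axis of Figure 1.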
Figure 2 (A) Interaction of radiation with tissue (absorption, reflection), radiation source, and detector; (B) digital white-light and (C) dermoscopy images of a melanoma lesion, capturing the reflected portion of the incident EM radiation.
Figure 1 EM radiation spectrum shown in terms of wavelength and frequency, with characteristic ranges used in medical diagnostics. Note: The black-body radiation spectrum for thermal and IR radiation is shown in the top portion of the image.
This can be attributed to the ability of photons to damage individual molecules chemically by affecting the chemical bonds. Whenever using EM radiation for probing human tissue in vivo, the frequency (wavelength) and power (and their combination) have to be selected in such a way as to avoid affecting and damaging the tissue.
More power can yield a stronger signal, but it is also more likely to cause damage; therefore, safety considerations limit the magnitude of the probing radiation and thereby the measured signal. A typical instrument or imaging system for medical diagnostic purposes consists of a source of radiation and a detector. The radiation emitted by the source (human body, sun, light bulb, laser, etc) is reflected, absorbed, and scattered by human tissue, as illustrated in Figure 2A. Some forms of radiation, eg, X-rays, can also be transmitted through the human body. The human body also emits radiation in the IR domain of the spectrum (not visible to the naked eye), and this IR emission spectrum is shown in the top-right portion of Figure 1. The detector captures the reflected, scattered, transmitted, or emitted radiation. The reflection, transmission, scattering, and emission properties of healthy and diseased tissue usually differ, and a sensitive detector will image or quantify this difference. Radiation sources and detectors have evolved greatly over the past few decades, enabling the measurement of very small signals; these technological advances allow measurements today that were impossible in the recent past. Depending on the wavelength range used for sensing, the design and complexity of the radiation source and the detector can vary greatly, and so do the challenges and costs involved in building the imaging hardware.
For example, sources and detectors for X-ray instruments in medical diagnostics are well established and readily available, whereas THz sources and detectors suitable for medical applications emerged on the market relatively recently. Older technologies are often less costly, readily available, and already have good distribution and support (repair) networks. This is especially the case when a technology is widely used in other fields, as with IR imaging and its established use in night-vision systems. Reliability testing and cost may initially pose challenges when adopting a new technology. An essential component of most medical instruments nowadays is a computer that stores, analyzes, and displays data to the user and interfaces with the Internet for information access and storage. Over the past few decades, computer hardware and software have experienced dramatic advances, allowing fast processing of large amounts of complex data, which is essential for medical diagnostic systems. Through access to the Internet, safe, centralized storage and sharing (as needed) of large amounts of data have become a reality.
Melanoma-detection methods
The majority of cutaneous melanomas appear as pigmented lesions of the skin.
The most effective screening tool for the detection of atypical lesions, and the standard of care, has been visual examination: assessing the lesion according to the ABCDE criteria, introduced in 1985.9,10 The last criterion, evolution, was added in 2004, after recognition of the importance of changes in the size and shape of the lesion in the screening process.9 The source of radiation is white light (from natural or artificial sources), and the radiation reflected by the skin is detected by the human eye. The diagnostic accuracy of the ABCDE criteria has been verified in clinical practice; however, the criteria only provide qualitative guidelines for melanoma identification and yield high rates of false positives and frequent false negatives. Studies suggest that almost one out of three melanomas cannot be diagnosed using these criteria.10 Over the past three decades, there have been improvements in imaging tools for melanoma detection; however, clinicians still primarily rely on their eyes, experience, and clinical judgment. Therefore, accurate, sensitive, objective, and quantitative instruments to aid the diagnosis of melanoma are still needed in clinical practice. In addition, screening tools that would alert patients during self-exams, coupled with the ability to store images and data online in a repository for long-term follow-up, would be invaluable in reducing the mortality associated with the disease. The melanoma-detection methods discussed in this review, and their key features, are summarized in Table 1.
Digital photography and dermoscopy
Current in vivo imaging tools commonly used by dermatologists are digital photography (total cutaneous imaging or the imaging of individual lesions) and dermoscopy. In digital photography, serial images recorded over time (photographic follow-up) are compared in order to find changes in the size, shape, and color of pigmented lesions that might suggest malignancy.
It was found that the relative change of a lesion is a sensitive marker for early melanoma.4,7–10,14 However, additional studies also showed that only 35% of diagnosed melanomas were identified on the basis of the relative change.11,12

Table 1 Melanoma-detection methods considered in this review and their key features

Method                 | Type       | Quantitative | Image-forming | Active/passive | Contact/remote | Radiation type        | Technology readiness
Digital photography    | I, II, III | No           | Yes           | Passive        | Remote         | White light           | Available
Dermoscopy             | II         | No           | Yes           | Passive        | Both           | Polarized light       | Available
MSS                    | I, II      | Yes          | Yes           | Active         | Contact        | Different wavelengths | PMA
MSF                    | I, II      | Yes          | Usually no    | Active         | Contact        | Different wavelengths | Research phase
THz imaging            | I?, II     | Yes          | Possible      | Active         | Both           | THz range             | Research phase
IR in spatial domain   | I, II, III | Yes          | Yes           | Can be both    | Remote         | Naturally emitted IR  | Research phase
CSLM                   | II         | No           | Yes           | Active         | Contact        | Different wavelengths | Available
OCT                    | II         | No           | Yes           | Active         | Contact        | Near-IR               | Research phase

Abbreviations: CSLM, confocal scanning laser microscopy; IR, infrared; MSF, multispectral imaging in the frequency domain; MSS, multispectral imaging in the spatial domain; OCT, optical coherence tomography; PMA, premarket approval.

Dermoscopy involves the use of a handheld microscope and polarized light, which allows imaging of the deeper structures of skin lesions.
Reported sensitivity and specificity of dermoscopy are 98% and 68%, respectively.12 Dermoscopy has been reported to improve diagnostic accuracy for melanoma by 5%–30%. However, studies have also confirmed that diagnostic accuracy is heavily dependent on the operator's experience,11–14 and both techniques are highly subjective, without broadly applicable standards or quantitative criteria. A digital photo and a dermoscopy image of a melanoma lesion are displayed in Figure 2B and C. Whole-body photography (also referred to as full-body photography), which is essentially white-light, high-resolution digital photography, has become an important aid in patient self-exams. It involves taking photographs of the entire body, in order to identify suspicious lesions that may be malignant melanomas. This method is frequently used for high-risk patients. The patient uses an album with a set of professional, high-resolution photographs to compare the appearance of a lesion at the time of the self-exam with a baseline recorded earlier. Photographs of single lesions are used for monitoring a particular lesion when biopsy does not seem warranted. The expectation is that the use of photography combined with visual inspection will reduce unnecessary biopsies and allow for early detection of melanoma. As far as insurance coverage is concerned, photographic methods (and other imaging techniques) in melanoma screening are often still considered experimental or investigational, since there are insufficient published clinical data to support a benefit for high-risk patients, coupled with a lack of standardization of methods and of guidelines regarding when a lesion needs to be biopsied as opposed to monitored.
Many insurance companies claim that there is currently insufficient evidence of improved patient outcomes (early detection or a reduction in the number of unnecessary biopsies) to justify the additional cost of imaging, and so many high-risk patients are forced to pay for the imaging (for example, total-body photography) out of pocket. However, considering the costs of treating advanced melanoma and the outcomes, it seems that encouraging the use of imaging in diagnostics would make sense and would reduce morbidity and mortality, as well as health-care costs, in the long run. Commercial systems for the assessment of lesions using digital photography already exist; however, they are not in widespread use. Implementing the ABCDE diagnostic criteria in an automated evaluation algorithm on a computer is extremely challenging and has not yet been accomplished, because of the diversity and complexity of lesions and images. Any commercial system to be used as a diagnostic tool in clinical practice would need extensive and expensive validation in clinical trials and FDA approval. With enhancements in imaging and computer technologies as well as image-processing software, the method of the future would be high-resolution 3D mapping of the entire body as a periodic screening tool. Ideally, the photographic imaging technique would be coupled with another, more quantitative diagnostic technique.
Multispectral imaging
Spectral methods fall into the class of emerging new technologies that are currently being investigated to determine their ability to diagnose melanoma. Multispectral information can be acquired and analyzed both in the spatial domain (the detector measures radiation intensity at a particular point or region) and in the spectral domain (the detector captures radiation intensity as a function of wavelength, ie, the spectrum, for the selected point or region). The imaging equipment and the signal-analysis methods strongly depend on the approach taken.
These methods hold the promise of offering quantitative criteria for melanoma diagnosis.
Multispectral imaging in the spatial domain
One of the most significant advances in the detection of melanoma over the past decade has been the development of the MelaFind15 technology by Mela Sciences.16 After a 2-year approval process, this technology received PMA (P090012) from the FDA for use in the US in 2011. The PMA was followed by approval for sale of the system in the European Union as well. MelaFind delivers objective additional information (based on quantitative characterization of lesions) about clinically atypical cutaneous pigmented skin lesions with either clinical or historical characteristics of melanoma during skin examinations.17–21 The role of the instrument is to aid the dermatologist's decision regarding diagnosis and the need for biopsy. The multicenter blinded study, using histologic data as the reference standard, supporting the PMA application for MelaFind involved 1383 patients in the US (enrolled from January 2007 to July 2008). The study demonstrated a sensitivity of over 98% for MelaFind (95% lower confidence bound) with a biopsy ratio of 10.8:1.22 Most lesions were thin melanomas or borderline lesions. The MelaFind outcomes were binary: positive (the lesion should be considered for biopsy) and negative (the lesion should be considered for later evaluation).
MelaFind had an average specificity of 9.5%, whereas the specificity of the investigators was 3.7%.22 Clinicians participating in the study were blinded to the output of MelaFind, and patient care was managed based on clinical information. A lesion was considered positive when the prebiopsy dermatologic diagnosis was either melanoma or melanoma could not be ruled out. The study could not determine the true sensitivity of the participating dermatologists, since only lesions scheduled for biopsy were evaluated; melanomas missed by the examining clinicians were not included in the study. The sensitivity of physicians can be assessed in longitudinal (long-term follow-up) studies, and such results were not available for this PMA because of the short time frame. Other studies list biopsy sensitivity values in the range of 86.7%–93.7%.22 In addition, a pilot study investigated the biopsy sensitivity of dermatologists in a reader study using 25 randomly selected melanomas and 25 nonmelanomas. The readers were 39 dermatologists who did not participate in the clinical trial; they reviewed the clinical history and the images of the lesions to decide whether to biopsy the lesion or rule out melanoma. The average biopsy sensitivity in the reader study was 78%,22 a result similar to some prior studies. More details about the MelaFind study are available in the original publication.22 The MelaFind instrument characterizes and classifies lesions using three outcomes, based on the degree of 3-D morphological disorganization of the lesion. The first outcome describes "high disorganization" lesions (such as malignant melanoma, melanoma in situ, high-grade dysplastic nevi, and atypical melanocytic proliferation), which are candidates for biopsy. The second outcome is "nonevaluable"; the biopsy decision for these lesions is based on careful evaluation of other clinical criteria.
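The study metrics discussed above (sensitivity, specificity, biopsy ratio) all derive from the same confusion-matrix counts. The sketch below illustrates the arithmetic with invented counts, chosen only to give numbers of the same order of magnitude as those reported; they are not the actual trial data:

```python
# Derive sensitivity, specificity, and biopsy ratio from confusion-matrix
# counts. The counts below are invented for illustration only; they are
# NOT the MelaFind trial data.
def sensitivity(tp: int, fn: int) -> float:
    """Fraction of melanomas flagged for biopsy: TP / (TP + FN)."""
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    """Fraction of benign lesions correctly ruled out: TN / (TN + FP)."""
    return tn / (tn + fp)

def biopsy_ratio(tp: int, fp: int) -> float:
    """Lesions flagged per melanoma found: (TP + FP) / TP."""
    return (tp + fp) / tp

tp, fn, fp, tn = 98, 2, 960, 120  # hypothetical counts
print(f"sensitivity {sensitivity(tp, fn):.1%}, "
      f"specificity {specificity(tn, fp):.1%}, "
      f"biopsy ratio {biopsy_ratio(tp, fp):.1f}:1")
```

With these hypothetical counts, the sensitivity is 98.0% and the biopsy ratio about 10.8:1, which shows how a high-sensitivity device can still flag many benign lesions when specificity is low.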
The MelaFind scanner is suitable for noninvasive assessment of lesions that are sufficiently pigmented to be clearly discerned from the surrounding normal skin using automated image-processing tools. Lesion diameter should be in the range of 2 mm to 22 mm, a dimension determined by the imaging optics. Other restrictions and requirements involve accessibility by the handheld component and intact skin without scars, fibrosis, or foreign matter. MelaFind is not suitable for use on several anatomic sites, such as acral, palmar, plantar, mucosal, or subungual areas, or near the eyes. The handheld component of the MelaFind imaging system consists of a radiation source (illuminator) that sequentially irradiates the lesion with 10 wavelengths of light (it is therefore classified as a multispectral system that collects information in the spatial domain), including visible and near-IR bands. Light scattered back from regions beneath the surface of the lesion is collected by a lens system that focuses it onto a detector, forming one image for each of the wavelengths. The spatial resolution of the detector is around 3 melanocytes (20 µm). Multispectral data are captured from a region up to 2.5 mm deep into the skin, and they provide unique 3-D information regarding the morphological organization of the lesion. The information collected by the detector (ten images) is then analyzed by MelaFind's automatic data-analysis algorithms. The data-processing steps include calibration (reduction of noise and artifacts), followed by quality control (detection of imaging problems such as overexposure, underexposure, inappropriate lesion size, too much hair, etc). When problems are detected, the system instructs the operator to rescan the lesion.
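The quality-control step described above can be pictured as a simple pass over the stack of per-wavelength images. The sketch below is a generic illustration of exposure checking on a multispectral stack; the thresholds, function name, and data layout are invented assumptions, not MelaFind's actual algorithm:

```python
# Illustrative quality-control pass over a multispectral image stack:
# one grayscale image (0-255 intensities) per wavelength band; bands
# whose mean intensity is too dark or too bright are flagged for rescan.
# Thresholds are invented for illustration.
from statistics import mean

def exposure_problems(stack, low=10.0, high=245.0):
    """Return indices of bands that look underexposed or overexposed."""
    bad = []
    for band_index, image in enumerate(stack):
        pixels = [p for row in image for p in row]
        m = mean(pixels)
        if m < low or m > high:
            bad.append(band_index)
    return bad

# Tiny 2x2 "images" standing in for three of the ten bands.
stack = [
    [[100, 120], [110, 130]],   # fine
    [[250, 252], [255, 249]],   # overexposed
    [[2, 1], [0, 3]],           # underexposed
]
print(exposure_problems(stack))  # -> [1, 2]
```

In a real system such a check would run per band immediately after acquisition, so the operator can rescan before the patient leaves.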
The data-processing algorithms include the application of image-processing tools to the data received by the detector, such as lesion segmentation to identify portions of the image that belong to the lesion, followed by feature extraction to quantify the parameters that characterize lesions. Parameters of the lesion, such as asymmetry, color variation, and border changes (characteristics similar to the ABCDE criteria, evaluated by the computer) are quantified for the ten wavelengths used by MelaFind. There are differences in the skin’s response (penetration depth, scattering characteristics) to the ten wavelengths used, and the differences between the images recorded at different wavelengths yield useful information for the decision-making process. The collected data are compared with information stored in the proprietary database of melanomas and benign lesions assembled by MelaSciences. The database was developed, trained, and tested with over 10,000 in vivo lesions and corresponding histological results in over 7000 patients. The sample included 600 melanomas, most of which were early stage lesions. This lesion database evolves and “learns” whenever a new case is evaluated and its parameters added to the database. The final element of the lesion assessment is classification based on the level of disorganization. MelaFind is currently in phase III clinical trials, and the technology holds the promise of being incorporated into clinical practice after further research and evaluation. 
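One of the lesion parameters mentioned above, asymmetry, can be quantified once a binary lesion mask is available from segmentation. The score below is a generic illustration of such a feature (mirror-mismatch fraction under a left-right flip), not MelaFind's proprietary computation:

```python
# Illustrative asymmetry feature for a segmented lesion mask:
# the fraction of lesion pixels that do not overlap their mirror image
# when the mask is flipped left-to-right. 0.0 means perfectly symmetric
# about the vertical axis; values near 1.0 mean highly asymmetric.
def asymmetry_score(mask):
    """mask: list of rows of 0/1 values (1 = lesion pixel)."""
    flipped = [row[::-1] for row in mask]
    lesion = sum(p for row in mask for p in row)
    mismatched = sum(
        1
        for row, frow in zip(mask, flipped)
        for p, fp in zip(row, frow)
        if p == 1 and fp == 0
    )
    return mismatched / lesion if lesion else 0.0

symmetric = [[0, 1, 1, 0],
             [1, 1, 1, 1]]
lopsided = [[1, 1, 0, 0],
            [1, 0, 0, 0]]
print(asymmetry_score(symmetric))  # -> 0.0
print(asymmetry_score(lopsided))   # -> 1.0
```

A production system would compute such features at each of the ten wavelengths and feed them, together with color and border descriptors, into the classifier trained on the calibration database.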
MelaFind is a type II instrument according to the classification introduced in this review, and early clinical studies are very promising.22 MelaFind also holds the promise of becoming a screening device (type I in this review) after being proven as a type II device and after appropriate modifications. Since the method requires irradiation by ten wavelengths, implementation as a type III scanner poses significant challenges, and it is less likely to prove feasible for full-body screening applications. Being recognized as a test method by insurance carriers would be a key step toward using diagnostic instruments to detect early melanoma, and extensive clinical validation will be needed to achieve this goal.

Technologies for detection of melanoma | Clinical, Cosmetic and Investigational Dermatology 2012:5

Multispectral imaging in the frequency domain: spectroscopic methods

Multispectral methods (in the spectral domain, with radiation data for a point or region presented as intensity as a function of frequency or wavelength) are widely used in engineering for the characterization of materials, and they have also found their way into ex vivo and, more recently, into in vivo medical diagnostics. In this review, advances and important features of IR and Raman spectroscopy, as representatives of spectroscopic methods, are summarized.
IR and Raman spectroscopy provide details regarding the chemical composition and molecular structure of substances in cells and biological tissues, and they are classified as vibrational spectroscopic techniques.23,24 Over time, with the progress of technology, they have also evolved into effective visualization tools. Since pathological changes and diseases cause chemical and structural changes in tissues at the molecular level, the resulting differences in the vibrational spectra serve as sensitive markers or unique fingerprints of the disease.25 These spectra can be captured by spectroscopic methods, which are noninvasive and do not require the administration of contrast agents. Early biomedical applications of these two spectroscopic methods allowed the measurement of spectra only at a particular point in the tissue and therefore required knowledge of the exact location of the diseased tissue. With advances in optical imaging systems over the past 15 years, sensitive, high-throughput instruments have become available. These enhanced instruments collect a large number of spectra from larger samples, faster and with improved spatial resolution, with the aid of a microscopic imaging system. Also, over the past decade, fiber-optic probes have been developed, and their feasibility for in vivo applications has been demonstrated. Both spectra – the IR absorption spectrum and the Raman scattering spectrum – carry the same type of information regarding the energies of molecular vibrations. However, the underlying physics of the two methods is different, and each offers specific advantages (and disadvantages) in medical applications.

IR spectroscopy

IR spectroscopy measures absorbed radiation and can serve as a visualization tool to aid the pathologist in evaluating tissue specimens. IR instrumentation is usually less complex than devices for Raman spectroscopy, which measure scattered light.
Collection of IR data is faster, and signal-to-noise ratios are higher, than for Raman spectrometers. The disadvantage of the method is that sample thickness for aqueous biological systems is limited to a few micrometers in transmission or attenuated total reflection (ATR) measurements, because water is a strong IR-absorbing medium. Samples are prepared on special substrates, and it is important to avoid affecting the composition of the sample during preparation. Advanced Fourier transform IR (FTIR) spectrometers allow two-dimensional (2-D) and 3-D mapping of relevant spectral characteristics of the tissue. In IR spectroscopic imaging, the field of view is irradiated and imaged onto a detector array whose elements are sensitive to IR radiation. Each sensor element in the array collects a spectrum, and the number of spectra collected simultaneously depends on the number of sensor elements in the array. IR radiation can be guided to the sample, and the reflected or transmitted radiation routed to the spectrometer, by means of IR fiber optics. Fiber-optic systems are suitable for in vivo endoscopic measurements; however, contact between the tissue and the probe, as well as probe contamination, remain challenges to be resolved in the future.

Raman spectroscopy

Since the presence of water affects Raman spectra little, Raman spectroscopy is better suited for medical diagnostic applications. Raman spectra are typically measured in the reflection mode, and Raman spectroscopy can also be applied in vivo. Penetration of light can vary, depending on the wavelength, power, and sample properties (geometry and composition). Human skin is a multilayered system, with layer thicknesses in the 10–500 µm range. Because of the absorption of IR radiation by water, the depth of penetration of IR radiation into skin tissue in in vivo IR spectroscopy measurements is only a few micrometers.
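The statement that each sensor element collects a full spectrum means the FTIR imaging data form a hypercube (two spatial axes plus a spectral axis), from which 2-D chemical maps are extracted by integrating over a band. The sketch below illustrates this with synthetic data; the band position (near the amide I region) and all values are made up for illustration.

```python
import numpy as np

# Synthetic FTIR data cube: 32x32 pixels, 256 spectral points in the
# fingerprint region. A Gaussian "absorption band" is planted in one patch.
wavenumbers = np.linspace(900, 1800, 256)          # cm^-1
cube = np.random.default_rng(0).random((32, 32, 256)) * 0.05
cube[10:20, 10:20] += np.exp(-((wavenumbers - 1650.0) / 15.0) ** 2)

def band_image(cube, wavenumbers, center, width):
    """Integrate absorbance over a spectral band -> 2-D chemical map."""
    sel = (wavenumbers >= center - width) & (wavenumbers <= center + width)
    return cube[:, :, sel].sum(axis=2)

amide_map = band_image(cube, wavenumbers, 1650.0, 20.0)
# The patch carrying the simulated band stands out against the background:
print(amide_map[15, 15] > amide_map[0, 0])          # → True
```

In practice, the spectral axis is calibrated by the interferometer, and maps like `amide_map` are what the pathologist inspects.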
Therefore, the ATR mode of spectroscopy can be used to analyze the outermost layer of the stratum corneum.26 FTIR spectroscopy has also been used to analyze healthy skin and lesions.27–29 The ATR technique coupled to IR fiber optics has been applied for FTIR imaging of the stratum corneum to evaluate intact, thin skin sections and the effects of exogenous materials.30,31 FTIR spectroscopy has been combined with Raman imaging to analyze the stratum corneum and the permeation of lipids into the skin.32–34 Confocal Raman microspectroscopy has also been used to study structures in the stratum corneum35–37 to a depth of up to 200 µm. The combination of Raman spectroscopy and confocal scanning laser microscopy (CSLM) offers the ability to analyze sections and layers of the skin without physically dissecting the tissue.38 The feasibility of these methods for the diagnosis of melanoma and other skin cancers has been demonstrated in several studies.39–43 Spectroscopic methods are very promising tools for the early diagnosis of melanoma. In addition to ex vivo characterization of diseased tissue, current research is focusing on in vivo applications. After adequate calibration with a large number of samples, automated evaluation of the data should be possible. Spectroscopic systems are best suited for type II devices, and a simplified instrument could possibly be converted into a type I device for screening purposes.
Since the investigated tissue has to be irradiated with EM radiation of the appropriate wavelength, it would be hard to accomplish both the irradiation and the capture of the scattered and reflected radiation over a large surface area in full-body measurements with type III devices.

Terahertz imaging

THz (1 THz = 10¹² Hz) imaging uses radiation whose frequency lies between the microwave and IR regions of the EM spectrum (wavelengths longer than IR). Until recently, it was the least explored and utilized region of the spectrum, owing to the difficulties involved in building THz radiation sources and detectors. Advances in optics and instrumentation over the past two decades have led to the development of a range of sources and detectors. One of the many applications of THz imaging is the detection of disease, in particular skin cancer. The aim is to develop small, portable, fast, handheld scanners for the early detection of cancer. The technology has shown promise; however, more research and development is needed to produce an imaging system capable of reliable automated detection of melanoma.

Infrared imaging in the spatial domain: infrared thermography revisited

Advances in IR (thermographic) imaging technology, which captures spatial IR-emission data (rather than spectral characteristics), along with advances in computers and image-analysis techniques over the past two decades, have led to renewed interest in IR imaging, in particular quantitative imaging, in a variety of medical and engineering applications. Since cancerous lesions are metabolically more active than healthy tissue and the blood supply to the lesion is increased, cancerous lesions are warmer, and under certain conditions the difference in the IR radiation emitted by the lesion and by healthy tissue can be readily detected by thermographic imaging. Thermal imaging has been used as a research tool in medicine for over 50 years44–47 to analyze medical conditions associated with changes in body or skin temperature.
Skin cancer and melanoma detection are among the earliest applications of IR imaging, and the value of the method has been a subject of controversy in the past. However, recent studies with advanced instrumentation and image-analysis tools have shown characteristic differences in thermal signatures between healthy skin or benign lesions and cancerous lesions.48–50 Some new results and methods will be discussed in this review. In this section, we highlight general issues and challenges in thermographic imaging, as well as our recent efforts in quantitative thermal imaging of melanoma. Thermographic techniques capture the EM radiation naturally emitted by the subject, the human body; this radiation closely resembles black-body radiation at 36°C–37°C, and its spectrum is shown in Figure 1 (top right). The emissivity of the skin is in the range 0.96–0.98. Capturing naturally emitted radiation is an advantage over methods that require irradiation of the skin by an external source (to measure the reflected or scattered radiation in order to analyze skin structures), such as MelaFind, in particular for full-body imaging applications; the method is therefore inherently safe and noninvasive. The emitted radiation is collected by the imaging optics and focused onto a focal-plane array sensitive to IR wavelengths, thereby delivering the information in a spatial (as contrasted with the spectral information in spectroscopic methods) and two-dimensional (2-D) format. Recently, high-sensitivity uncooled focal-plane-array detectors have become available, which have greatly simplified the technology, with the promise of significant cost reductions in the future.
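As a worked example of the black-body physics above: Wien's displacement law places the emission peak of skin at body temperature near 9.4 µm, which is why thermographic cameras operate in the long-wave IR band, and the Stefan-Boltzmann law (with the stated skin emissivity of about 0.97) gives the emitted power per unit area.

```python
# Standard physical constants (CODATA values); skin treated as a gray body.
WIEN_B = 2.897771955e-3       # Wien displacement constant, m*K
SIGMA = 5.670374419e-8        # Stefan-Boltzmann constant, W/(m^2*K^4)

def peak_wavelength_um(temp_c):
    """Wavelength of maximum black-body emission, in micrometers."""
    return WIEN_B / (temp_c + 273.15) * 1e6

def radiant_exitance(temp_c, emissivity=0.97):
    """Emitted power per unit area, W/m^2 (gray-body approximation)."""
    return emissivity * SIGMA * (temp_c + 273.15) ** 4

print(round(peak_wavelength_um(36.5), 2))   # → 9.36 (micrometers)
print(round(radiant_exitance(36.5)))        # ≈ 506 W/m^2
```

The small temperature contrast between a lesion and its surroundings shifts these values only slightly, which is why sensitive detectors and careful calibration are required.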
Another favorable feature of the method is that it is both image-forming (it allows the visual mapping of thermal structures on the skin surface and underneath the skin) and quantitative (it allows measurement of temperature, metabolic activity rates, and other thermophysical properties based on temperature data). Depending on the number of detectors in the focal-plane-array matrix, a single image can contain 2-D information with thousands of temperature measurements, recorded in a fraction of a second. Modern IR cameras offer excellent time resolution, so the temperature changes of the subject can be recorded over time in the form of a movie. Images can be displayed as gray-scale or color-coded with a color palette. The rainbow palette is preferred in medical applications, with blue corresponding to low and red to high temperatures. Assigning absolute temperatures to emitted radiation poses a significant challenge and requires careful calibration of the instrument, as well as standardization of the imaging process. In addition to the visual information in the form of a 2-D image or movie, quantitative information about the temperature evolution of a point or a region over time can be extracted from the 2-D data by postprocessing.
The value of thermographic imaging goes beyond providing visual insight into the thermal characteristics of the skin and the human body: proper processing of the temperature data offers the possibility of quantitative comparison, automated detection and quantification of abnormal events or processes, and the establishment of other associations with medical conditions of interest.51–55 A good understanding of the thermal physiology of the body is essential for the interpretation of temperature data and of spatiotemporal temperature changes in thermographic images. To enhance the understanding and interpretation of the information contained in thermographic images, computational modeling of the physiological processes has to go hand in hand with the thermographic imaging steps.52 When applying thermography in medical diagnostics, the voluntary and involuntary movement of the subject during the imaging process introduces additional complexity into the image-analysis and quantification process. While specific image features are easy to detect and track in a series of images recorded over time in white-light imaging, the same features are not readily visible in the IR image sequence. Therefore, motion-tracking procedures developed specifically for IR imaging are needed for quantitative thermographic imaging. IR imaging can be implemented either as a passive (static) or an active (dynamic) visualization method. Active IR imaging involves introducing external forcing, such as heating, cooling (also called thermostimulation in breast cancer imaging), or pressure, to induce or enhance relevant thermal contrasts between the investigated lesion and healthy tissue. A general advantage of active measurement methods is that they do not require the subject to reach a well-defined steady state, which can be very time-consuming and challenging.
For example, in the past, when static applications prevailed, the patient was required to spend one or more hours in a thermally conditioned exam room in order to fully acclimate (reach a thermal steady state) to the environment. Reaching a true steady state may not always be possible, and individual variations between test subjects will remain. Therefore, standardization of the imaging process – the key to comparing images recorded at different times or of different subjects – was and remains a significant challenge in passive imaging. In the dynamic method, the thermal response to an excitation, often applied as a step change in the thermal boundary conditions involving the application or removal of heating or cooling, is measured. This is a much faster and more robust approach than the passive method. The dynamic method is well established in numerous engineering tests and applications, such as thermophysical property measurements, and it offers advantages in clinical applications, for which the duration of the measurement and the ease of use are critical. It allows the detection of abnormal features by self-referencing, ie, comparing the responses of healthy and diseased tissue of the same subject, rather than comparing responses of different subjects. While the application of external forcing increases the complexity of the measurement, the benefits outweigh the drawbacks. The required heating or cooling has to induce only small changes in skin temperature; therefore, the duration of cooling is short, and the temperature levels applied do not cause significant discomfort to the patient. When the skin surface is cooled in active IR imaging, the difference in the thermophysical properties of the lesion (located near or underneath the surface) compared to healthy tissue results in identifiable temperature contours and temperature differences between the lesion and the surrounding healthy tissue during the thermal recovery phase.
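The self-referencing idea in dynamic IR imaging can be sketched numerically: after a cooling step, the recovery curve of a lesion region of interest is compared against that of a healthy-skin region on the same subject. The exponential recovery model, time constants, and temperatures below are illustrative assumptions, not measured data.

```python
import numpy as np

t = np.linspace(0, 180, 91)    # recovery time points, seconds

def recovery(time_s, baseline=33.0, cooled=25.0, tau=40.0):
    """Exponential return from the cooled temperature toward baseline (deg C)."""
    return baseline - (baseline - cooled) * np.exp(-time_s / tau)

healthy = recovery(t)                  # healthy-skin ROI
lesion = recovery(t, tau=25.0)         # lesion ROI: faster reheating (assumed)

delta_T = lesion - healthy             # self-referenced thermal contrast
print(round(float(delta_T.max()), 2))  # → 1.37 (peak contrast mid-recovery)
```

Note that the contrast is zero at the start of recovery and decays again as both regions return to baseline, so the informative window is the transient itself; this is why the dynamic method needs no absolute steady-state standardization.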
These surface-temperature contours differ from those present in the steady-state situation observed during passive IR imaging. Therefore, the measurement process consists of three phases: (1) the initial phase, when the skin is exposed to nominal ambient conditions; (2) the cooling phase; followed by (3) the thermal recovery phase. In the Heat Transfer Lab of Johns Hopkins University, we developed a thermographic imaging system that allows accurate measurement of transient skin-surface temperature distributions51,53–57 and can be used for the detection of melanoma. It relies on active (dynamic) IR imaging based on cooling for the characterization of the thermal response of skin lesions and healthy skin. Cancerous lesions, such as melanomas, generate more heat and reheat faster than the surrounding healthy skin, thereby creating a marker for melanoma detection. The aim of the imaging is to distinguish benign lesions (which behave thermally like healthy skin) from malignant pigmented lesions. The steps in the imaging process are illustrated in Figure 3. The imaging starts with applying a rectangular template to the region of interest, centered around the lesion. A white-light photograph of the domain is taken first, followed by an IR image of the reference steady-state situation under ambient conditions (Figure 3A). The lesion is not visible in the reference steady-state IR image, implying that the heat generation is too small to be measured at the skin surface under static conditions.
The skin is then cooled by blowing cold air or by applying a cooling patch at 15°C–25°C for up to 1 minute. After removing the cooling load, the dynamic thermal response of the structure is acquired using IR imaging. Color-coded IR images recorded 2 seconds into the thermal recovery are shown in Figure 3B. The results suggest that the thermal contrast between the lesion and the healthy surrounding tissue is enhanced by the cooling. After the postprocessing steps, the temperatures of the lesion and of the healthy tissue are plotted as a function of time during the thermal recovery, as shown in Figure 3C. There is a significant difference in temperature between the melanoma lesion and healthy tissue during thermal recovery, whereas benign pigmented lesions show the same thermal recovery as the healthy tissue. The temperature difference can be measured accurately using modern IR cameras. More details about the method are available in the literature.51–57 This imaging technique holds the promise of staging melanomas based on the magnitude of temperature differences and other thermal characteristics of the lesion during the recovery process.

Motion tracking in dynamic IR imaging

IR imaging is a noncontact method, a feature that offers advantages in terms of ease of application and the ability to image larger surface areas and multiple lesions. Therefore, the method is suitable for type III full-body imaging systems. However, during the thermal recovery phase, involuntary movement of the patient is unavoidable in a clinical environment, and even breathing can cause small spatial displacements of lesions that can lead to deterioration of the measurement data. The measurement involves accurate tracing of the transient temperature response at any specific point on the skin. To accomplish this, it is necessary to apply motion-tracking/compensation processing to the IR video sequence.
The framework of motion tracking can generally be summarized in two steps.57

Step 1: registration between the white-light image and the first IR image frame. Since the lesion is visible in the white-light image only, the first task in identifying its corresponding points in the IR image is registration via feature-point correspondence. An adhesive marker, visible in both white-light and IR images, is used for the registration computations. As shown in Figure 4A, the four corners of the marker are first identified in the white-light image, either manually or by an edge-detection algorithm, such as the Harris method.58 Next, the corresponding points are identified in the first IR image frame, which serves as the reference image in step 2 (Figure 4B). Based on the four pairs of points for coordinate correspondence, a 2-D projective transformation matrix can be identified that maps any point in the white-light image to its corresponding point in the reference IR image frame.59 The lesion region can be delineated in the white-light image via an interactive segmentation algorithm, such as a random walker.60 Once the transformation matrix is solved, the delineated lesion can be transferred to the registered IR image (Figure 4C).

Figure 3 (A) White-light photograph of the larger body-surface area with a cluster of pigmented lesions, the adhesive window serving as a thermal marker, and the reference IR image of the region at ambient temperature; (B) the same area 2 s into the thermal recovery, with a magnified section of the melanoma lesion and its surroundings; (C) temperature profiles of the lesion and the surrounding normal skin during the thermal recovery process.51–53

Step 2: registration between the consecutive IR image frames in the recovery IR video sequence. After the lesion location is identified in the first IR image frame, registration can be applied between every consecutive IR frame in the video sequence. Any point of interest can then be tracked in the video in the presence of involuntary subject movement. The registration between the consecutive IR frames can be achieved using a quadratic motion model.61 Transformations of the image during the motion-correction steps are illustrated in Figure 5. The magnitude and direction of the motion are indicated as a vector in each image frame. A sequence of IR image frames recorded during the recovery phase can be aligned to compensate for involuntary body/limb movement of the patient by applying a quadratic motion model for registration. The registration of the lesion region enables an accurate comparison between the transient thermal response of healthy skin and that of the lesion, and provides critical information that allows identification of the malignant lesion, as shown by Pirtini Cetingul and Herman.51,52,54 Since IR imaging (with proper calibration) yields quantitative data, the method (analysis of the thermal recovery process) holds the potential of allowing the staging of the disease.
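Step 1 above, estimating the 2-D projective transformation from the four marker-corner correspondences, can be sketched with the standard direct linear transform (DLT). The corner coordinates below are made up; the cited work's exact solver may differ.

```python
import numpy as np

def homography(src, dst):
    """Estimate the 3x3 projective transform H from 4+ point pairs (DLT),
    such that dst ~ H @ src in homogeneous coordinates."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The solution is the right singular vector for the smallest singular value.
    _, _, Vt = np.linalg.svd(np.asarray(A, float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def warp_point(H, p):
    """Map a point through H, with the homogeneous divide."""
    q = H @ np.array([p[0], p[1], 1.0])
    return q[:2] / q[2]

# Hypothetical marker corners: white-light image (src) and IR frame (dst).
src = [(0, 0), (100, 0), (100, 100), (0, 100)]
dst = [(10, 12), (110, 8), (115, 112), (8, 108)]
H = homography(src, dst)

# The estimated transform maps each white-light corner onto its IR counterpart:
print(np.allclose(warp_point(H, src[2]), dst[2], atol=1e-6))   # → True
```

With `H` in hand, every pixel of the lesion contour delineated in the white-light image can be transferred into the reference IR frame, as in Figure 4C.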
This version of quantitative thermography has the potential to be implemented as a type I, type II, or type III imaging device, since the radiation naturally emitted by the body is measured and direct contact between the skin and the probe is not required. The drawback is the need for sophisticated motion tracking in the image-processing phase to compensate for involuntary movement of the subject. IR thermography alone cannot distinguish between different types of skin cancer; however, it can alert the patient to the need to seek professional help (type I device for screening). As a type II device, with proper calibration, it could potentially measure the magnitude of excess heating (caused by increased metabolic activity and blood supply in the cancerous lesion) and provide a quantitative measure of the malignant potential. The discipline of IR thermography has been experiencing dramatic growth over the past decade, primarily driven by night-vision and surveillance applications, accompanied by improvements in hardware and reductions in the cost of instrumentation. If current trends continue, thermography would be a good option for large-scale screening efforts at the level of the patient or in primary care facilities.

Figure 5 Motion correction for compensation of the subject's involuntary movement in IR images recorded at different time instants. Notes: The effects of motion correction can be observed in the corrected columns as dark regions around the edges of the images. White arrows in the uncorrected images represent the direction and magnitude of motion of the subject.

Figure 4 Image registration steps in melanoma detection: (A) white-light image with characteristic points for image analysis (rectangular marker and lesion center); (B) edge and corner correspondence in the IR image; (C) lesion contour registration in the IR images via the transformation matrix.

Confocal scanning laser microscopy

CSLM is a noninvasive imaging method that can assess the cellular and nuclear details of skin lesions in vivo, with detail similar to histology.62,63 CSLM detects backscattered light, and contrast is caused by natural variations of the refractive index of tissue microstructures,64 such as organelles and melanosomes. Since the skin pigment melanin has a high refractive index, it behaves as a contrast agent for cytoplasm.65 Therefore, the cytoplasm of melanocytes in pigmented and amelanotic melanomas appears brighter and can be easily detected. Past small-scale clinical studies focused on the detection of skin cancers, such as melanomas,66–69 basal and squamous cell carcinomas,70 and benign lesions, and on the detection of margins between lesions and surrounding normal tissue.71 Clinical studies have shown that the technique has a sensitivity in the range of 89%–97% and a specificity in the range of 80%–85% (depending on the study),62–71 after appropriate training of the physician in the interpretation of the patterns observed using CSLM. As with other diagnostic methods that involve interpretation of visual images, diagnostic accuracy depends on the experience of the operator.
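A short worked example shows what the reported sensitivity and specificity imply in practice: the positive predictive value (PPV) of any such test depends strongly on melanoma prevalence in the examined population. The prevalence figures below are hypothetical, chosen only to illustrate Bayes' rule with accuracy numbers in the reported ranges.

```python
def ppv(sensitivity, specificity, prevalence):
    """Bayes' rule: P(disease | positive test)."""
    true_pos = sensitivity * prevalence
    false_pos = (1.0 - specificity) * (1.0 - prevalence)
    return true_pos / (true_pos + false_pos)

# Referral population (assumed 5% prevalence) vs broad screening (assumed 0.5%):
print(round(ppv(0.93, 0.82, 0.05), 3))    # → 0.214
print(round(ppv(0.93, 0.82, 0.005), 3))   # → 0.025
```

This is one reason the review distinguishes type I screening from type II diagnostic use: at screening-level prevalence, most positives are false positives, so a positive result should trigger referral rather than diagnosis.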
A commercially available CSLM imaging system (VivaScope) is shown in Figure 7A. The method is capable of assessing parallel layers of the skin to a depth of up to 300 µm (Figure 7B). The head of the imaging system contains the optical components (light source and imaging optics), as well as a computer-controlled linear translation stage for sequential scanning of the area of interest. To minimize optical losses between media with very different refractive indices, an immersion fluid is applied between the skin and the optics to enable direct contact between detector and skin. The positioning of the detector head relative to the lesion may pose a challenge for some lesion locations. A CSLM system and images of four skin layers are displayed in Figure 7; they deliver fine detail and high-quality information on skin structures. CSLM is a commercially available type II imaging system suitable for use in specialized centers by trained operators. Since it requires direct contact between the optics and the skin by means of an immersion fluid, this imaging system is not suitable for a full-body scanner. Also, since the interpretation of the images requires specialized knowledge and automated interpretation of the images would pose a major challenge, it is not likely that CSLM will be suitable as a type I screening instrument.

Figure 6 Impact of the motion correction on the reconstructed temperature profiles. Notes: Without motion correction (left), the noise is of the order of magnitude of the measured signal. With motion correction, the temperature of the melanoma lesion is higher than the temperature of the surrounding healthy tissue. This temperature difference can be used to quantify the malignant potential of the cancerous lesion.

Figure 7 (A) VivaScope imaging system for dermatology; (B) parallel skin layers imaged by the CSLM system; (C) CSLM images of different skin layers.

Optical coherence tomography

OCT is a noninvasive, in vivo imaging method that captures high-resolution (µm-scale), 3-D images of biological tissue. OCT is an interferometric technique using relatively long-wavelength light in the near-IR portion of the spectrum, which is able to penetrate deeper into the scattering medium than CSLM. This technique has higher resolution, greater detection depth and image size, and faster scanning time than CSLM. Although in OCT a melanoma shows increased light scattering and a more homogeneous signal distribution than healthy skin, more work is needed to improve its utility for skin cancer detection.12,13 OCT is suitable as a type II device for use in specialized centers by operators trained in the interpretation of the images.
Conclusions

This brief review has discussed some of the promising current technologies, as well as the needs and challenges in developing sensitive and reliable diagnostic tools for the early detection of melanoma. Melanoma is the fastest-growing cancer in terms of incidence, and the need for accurate diagnostic tools is increasing. Every year, around 2.5–3 million skin lesions are evaluated in the US, and over 100,000 are diagnosed as melanoma. The objective is to develop automated diagnostic instruments for the screening of individual lesions and for full-body screening, as well as sophisticated instruments that can provide dermatologists with fine detail regarding the structure of a lesion and staging information in vivo. Screening instruments would alert patients to seek the care of a dermatologist and would be intended for use in primary care facilities or by patients themselves, similar to blood pressure cuffs or diabetes-testing equipment. With recent progress in electronics and instrumentation, several sophisticated and very promising imaging methods have emerged and are being investigated in small trials. One of the key challenges is that diagnostic instruments are expected to compete in price and ease of use with visual inspection, which is the current standard of care. Clearly, using a diagnostic instrument would increase the duration and cost of the exam; therefore, insurance coverage would be a key driving factor for technology development. The funding of technology development beyond initial feasibility studies, the funding of large-scale studies to demonstrate the effectiveness of imaging systems, and the complex and lengthy governmental approval process are the main challenges on the path of these imaging systems finding their place in medical care.

Acknowledgments

Images and photographs for this review were taken or contributed by Rajeev Hatwar, Tze-Yuan Cheng, and Muge Pirtini Cetingul.
This research was funded by the National Institutes of Health NCI (grant no 5R01CA161265-02), the National Science Foundation (grant no 0651981), and the Alexander and Margaret Stewart Trust through the Cancer Center of Johns Hopkins University.

Disclosure

The author reports no conflicts of interest in this work.

References

1. National Cancer Institute. SEER Stat Fact Sheets: Melanoma of the Skin. 2012. Available from: http://seer.cancer.gov/statfacts/html/melan.html. Accessed August 3, 2012.
2. Skin Cancer Foundation. Skin Cancer Facts. 2012. Available from: http://www.skincancer.org/Skin-Cancer-Facts. Accessed August 3, 2012.
3. Jemal A, Siegel R, Ward E, Hao Y, Xu J, Thun MJ. Cancer statistics, 2009. CA Cancer J Clin. 2009;59(4):225–249.
4. Elder D. Tumor progression, early diagnosis and prognosis of melanoma. Acta Oncol. 1999;38(5):535–547.
5. Fecher LA, Cummings SD, Keefe MJ, Alani RM. Toward a molecular classification of melanoma. J Clin Oncol. 2007;25(12):1606–1620.
6. Wartman D, Weinstock M. Are we overemphasizing sun avoidance in protection from melanoma? Cancer Epidemiol Biomarkers Prev. 2008;17(3):469–470.
7. Geller AC, Swetter SM, Brooks K, Demierre MF, Yaroch AL. Screening, early detection, and trends for melanoma: current status (2000–2006) and future directions. J Am Acad Dermatol. 2007;57(4):555–572; quiz 573–576.
8. Friedman RJ, Rigel DS, Kopf AW. Early detection of malignant melanoma: the role of physician examination and self-examination of the skin. CA Cancer J Clin. 1985;35(3):130–151.
9. Abbasi NR, Shaw HM, Rigel DS, et al. Early diagnosis of cutaneous melanoma: revisiting the ABCD criteria. JAMA. 2004;292(22):2771–2776.
10. Thomas L, Tranchand P, Berard F, Secchi T, Colin C, Moulin G. Semiological value of ABCDE criteria in the diagnosis of cutaneous pigmented tumors. Dermatology. 1998;197(1):11–17.
11. Wang SQ, Rabinovitz H, Kopf AW, Oliviero M. Current technologies for the in vivo diagnosis of cutaneous melanomas. Clin Dermatol. 2004;22(3):217–222.
12. Psaty EL, Halpern AC. Current and emerging technologies in melanoma diagnosis: the state of the art. Clin Dermatol. 2009;27(1):35–45.
13. Patel JK, Konda S, Perez OA, Amini S, Elgart G, Berman B. Newer technologies/techniques and tools in the diagnosis of melanoma. Eur J Dermatol. 2008;18(6):617–631.
14. Andreassi M, Andreassi L. Utility and limits of noninvasive methods in dermatology. Expert Rev Dermatol. 2007;2(3):249–255.
15. http://www.melafind.com.
16. http://www.melasciences.com.
17. Gutkowicz-Krusin D, Elbaum M, Jacobs A, et al. Precision of automatic measurements of pigmented skin lesion parameters with a MelaFind(TM) multispectral digital dermoscope. Melanoma Res. 2000;10(6):563–570.
18. Elbaum M, Kopf AW, Rabinovitz HS, et al. Automatic differentiation of melanoma from melanocytic nevi with multispectral digital dermoscopy: a feasibility study. J Am Acad Dermatol. 2001;44(2):207–218.
19. Elbaum M. Computer-aided melanoma diagnosis. Dermatol Clin. 2002;20(4):735–747, x–xi.
20. Elbaum M. Automated diagnosis: illustrated by the MelaFind system. In: Marghoob AA, Kopf AW, Braun R, editors. An Atlas of Dermoscopy. Abingdon: Taylor and Francis; 2004:325–341.
21. Friedman RJ, Gutkowicz-Krusin D, Farber MJ, et al. The diagnostic performance of expert dermoscopists vs a computer-vision system on small-diameter melanomas. Arch Dermatol.
2008;144(4):476–482.
22. Monheit G, Cognetta AB, Ferris L, et al. The performance of MelaFind: a prospective multicenter study. Arch Dermatol. 2011;147(2):188–194.
23. Rigel DS, Roy M, Yoo J, Cockerell CJ, Robinson JK, White R. Impact of guidance from a computer-aided multispectral digital skin lesion analysis device on decision to biopsy lesions clinically suggestive of melanoma. Arch Dermatol. 2012;148(4):541–543.
24. Krafft C, Sergo V. Biomedical applications of Raman and infrared spectroscopy to diagnose tissues. Spectroscopy. 2006;20(5–6):195–218.
25. Garidel P. Insights in the biochemical composition of skin as investigated by micro infrared spectroscopic imaging. Phys Chem Chem Phys. 2003;5:2673–2679.
26. Lucassen GW, Caspers PJ, Puppels GJ. In vivo infrared and Raman spectroscopy of stratum corneum. Proc SPIE. 1998;3257:52–60.
27. Mendelsohn R, Flach CR, Moore DJ. Determination of molecular conformation and permeation in skin via IR spectroscopy, microscopy and imaging. Biochim Biophys Acta. 2006;1758(7):923–933.
28. Mendelsohn R, Rerek ME, Moore DJ. Infrared spectroscopy and microscopic imaging of stratum corneum models and skin. Phys Chem Chem Phys. 2000;2:4651–4657.
29. Mendelsohn R, Chen HC, Rerek ME, Moore DJ. Infrared microscopic imaging maps the spatial distribution of exogenous molecules in skin. J Biomed Opt. 2003;8(2):185–190.
30. Xiao C, Moore DJ, Flach CR, Mendelsohn R. Permeation of dimyristoylphosphatidylcholine into skin – structural and spatial information from IR and Raman microscopic imaging. Vib Spectrosc. 2005;38(1–2):151–158.
31. Xiao C, Moore DJ, Rerek ME, Flach CR, Mendelsohn R. Feasibility of tracking phospholipid permeation from IR and Raman microscopic imaging. J Invest Dermatol. 2005;124(3):622–632.
32. Zhang G, Moore DJ, Mendelsohn R, Flach CR. Vibrational microspectroscopy and imaging of molecular composition and structure during human corneocyte maturation. J Invest Dermatol. 2006;126(5):1088–1094.
33.
Bommannan D, Potts RO, Guy RH. Examination of stratum corneum barrier function in vivo by infrared spectroscopy. J Invest Dermatol. 1990;95(4):403–408.
34. Bhargava R, Levin IW. Gram-Schmidt orthogonalization for rapid reconstruction of Fourier transform infrared spectroscopic imaging data. Appl Spectrosc. 2004;58(8):995–1000.
35. Xiao C, Flach CR, Marcott C, Mendelsohn R. Uncertainties in depth determination and comparison of multivariate with univariate analysis in confocal Raman studies of a laminated polymer and skin. Appl Spectrosc. 2004;58(4):382–389.
36. Caspers PJ, Lucassen GW, Carter EA, Bruining HA, Puppels GJ. In vivo confocal Raman microspectroscopy of the skin: noninvasive determination of molecular concentration profiles. J Invest Dermatol. 2001;116(3):434–442.
37. Caspers PJ, Lucassen GW, Puppels GJ. Combined in vivo confocal Raman spectroscopy and confocal microscopy of human skin. Biophys J. 2003;85(1):572–580.
38. Percot A, Lafleur M. Direct observation of domains in model stratum corneum lipid mixtures by Raman microspectroscopy. Biophys J. 2001;81(4):2144–2153.
39. Tfalyli A, Piot O, Durlach A, Bernard P, Manfait M. Discriminating nevus and melanoma on paraffin-embedded skin biopsies using FTIR microspectroscopy. Biochim Biophys Acta. 2005;1724(3):262–269.
40. Lasch P, Naumann D. FT-IR microspectroscopic imaging of human carcinoma thin sections based on pattern recognition techniques. Cell Mol Biol (Noisy-le-grand). 1998;44(1):189–202.
41. Gniadecka M, Philipsen PA, Sigurdsson S, et al. Melanoma diagnosis by Raman spectroscopy and neural networks: structure alteration in proteins and lipids in intact cancer tissue. J Invest Dermatol. 2004;122(2):443–449.
42. Nijssen A, Bakker Schut TC, Heule F, et al. Discriminating basal cell carcinoma from its surrounding tissue by Raman spectroscopy. J Invest Dermatol. 2002;119(1):64–69.
43. Huang Z, Lui H, Chen XK, Alajlan A, McLean DI, Zeng H. Raman spectroscopy of in vivo cutaneous melanin. J Biomed Opt.
2004;9(6):1198–1205.
44. Cristofolini M, Piscioli F, Valdagni C, Della Selva A. Correlations between thermography and morphology of primary cutaneous malignant melanomas. Acta Thermogr. 1976;1(1):3–11.
45. Hartmann M, Kunze J, Friedel S. Telethermography in the diagnostic and management of malignant melanomas. J Dermatol Surg Oncol. 1981;7(3):213–218.
46. Di Carlo A. Thermography and the possibilities for its applications in clinical and experimental dermatology. Clin Dermatol. 1995;13(4):329–336.
47. Diakides NA. Infrared imaging: an emerging technology in medicine. IEEE Eng Med Biol Mag. 1998;17(4):17–18.
48. Fauci M, Breiter R, Cabanski W, et al. Medical infrared imaging – differentiating facts from fiction, and the impact of high precision quantum well infrared photodetector camera systems, and other factors, in its reemergence. Infrared Phys. 2001;42(3–5):337–344.
49. Gulyaev YV, Markov AG, Koreneva LG, Zakharov PV. Dynamical infrared thermography in humans. IEEE Eng Med Biol Mag. 1995;14:766–771.
50. Ring EFJ, Ammer K. Infrared thermal imaging in medicine. Physiol Meas. 2012;33(3):R33–R46.
51. Herman C, Pirtini Cetingul M. Quantitative visualization and detection of skin cancer using dynamic thermal imaging. J Vis Exp. 2011;(51):e2679.
52. Pirtini Cetingul M, Herman C. Heat transfer model of skin tissue for the detection of lesions: sensitivity analysis. Phys Med Biol. 2010;55(19):5933–5951.
53. Pirtini Cetingul M, Herman C. Quantification of the thermal signature of a melanoma lesion. Int J Therm Sci. 2011;50(4):421–431.
54. Pirtini Cetingul M, Herman C. The assessment of melanoma risk using the dynamic infrared imaging technique. J Therm Sci Eng Appl. 2011;3(3):031006.1–031006.9.
55. Pirtini Cetingul M, Alani RM, Herman C. Quantitative evaluation of skin lesions using transient thermal imaging. Proceedings of the International Heat Transfer Conference, IHTC14; August 8–13, 2010; Washington, DC, USA.
56. Pirtini Cetingul M, Alani RM, Herman C. Detection of skin cancer using skin transient thermal imaging. Proceedings of the ASME 2010 Summer Bioengineering Conference, SBC2010; June 15–19, 2010; Naples, Florida, USA.
57. Pirtini Cetingul M, Cetingul HE, Herman C. Analysis of transient thermal images to distinguish melanoma from dysplastic nevi. Proceedings of the SPIE Medical Imaging Conference; February 12–17, 2011; Lake Buena Vista, FL, USA.
58. Harris C, Stephens M. A combined corner and edge detector. Proceedings of the 4th Alvey Vision Conference; 1988.
59. Hartley R, Zisserman A. Multiple View Geometry in Computer Vision.
Cambridge: Cambridge University Press; 2003:32–33.
60. Grady L. Random walks for image segmentation. IEEE Trans Pattern Anal Mach Intell. 2006;28(11):1768.
61. Odobez JM, Bouthemy P. Robust multiresolution estimation of parametric motion models. J Vis Commun Image Represent. 1995;6(4):348–365.
62. Pawley JB. Handbook of Biological Confocal Microscopy. 2nd ed. New York: Plenum Press; 1995.
63. Webb RH. Confocal optical microscopy. Rep Prog Phys. 1996;59(3):427–471.
64. Rajadhyaksha M, Zavislan JM. Confocal reflectance microscopy of unstained tissue in vivo. Retin Lipid-Soluble Vitam Clin Pract. 1998;14(1):26–30.
65. Rajadhyaksha M, Grossman M, Esterowitz D, Webb RH, Anderson RR. In vivo confocal scanning laser microscopy of human skin: melanin provides a good contrast. J Invest Dermatol. 1995;104(6):946–952.
66. Busam KJ, Hester K, Charles C, et al. Detection of clinically amelanotic malignant melanoma and assessment of its margins by in vivo confocal scanning laser microscopy. Arch Dermatol. 2001;137(7):923–929.
67. Charles CA, Marghoob AA, Busam KJ, Clark-Loeser L, Halpern AC. Melanoma or pigmented basal cell carcinoma: a clinical-pathologic correlation with dermoscopy, in vivo confocal scanning laser microscopy, and routine histology. Skin Res Technol. 2002;8(4):282–287.
68. Marghoob AA, Charles C, Busam KJ, et al. In vivo confocal scanning laser microscopy of a series of congenital melanocytic nevi suggestive of having developed malignant melanoma. Arch Dermatol. 2005;141(11):1401–1412.
69. Scope A, Benvenuto-Andrae C, Agero AL, et al. In vivo reflectance confocal microscopy imaging of melanocytic skin lesions: consensus terminology glossary and illustrative images. J Am Acad Dermatol. 2007;57(4):644–658.
70. Aghassi D, Anderson RR, Gonzales S. Confocal laser microscopic imaging of actinic keratosis in vivo: a preliminary report. J Am Acad Dermatol. 2000;43(1):42–48.
71. Busam KJ, Hester K, Charles C, et al.
Detection of clinically amelanotic malignant melanoma and assessment of its margins by in vivo confocal scanning laser microscopy. Arch Dermatol. 2001;137(7):923–929.

----

Rev. Bras. Cir. Plást. 2011; 26(3): 394-401

Adipose tissue mature stem cells in skin healing: a controlled randomized study

Uso de células-tronco adultas de tecido adiposo na cicatrização da pele: estudo controlado, randomizado

Study conducted at the Plastic Surgery Service of Hospital São Lucas of Pontifícia Universidade Católica do Rio Grande do Sul, Porto Alegre, RS, Brazil. Submitted to SGP (Sistema de Gestão de Publicações/Manager Publications System) of RBCP (Revista Brasileira de Cirurgia Plástica/Brazilian Journal of Plastic Surgery). Received: May 28, 2011. Accepted: August 19, 2011.

1. Ph.D. in Surgery from Pontifícia Universidade Católica do Rio Grande do Sul (PUCRS), head of the Plastic Surgery Service of Hospital São Lucas da PUCRS, full member of the Brazilian Society of Plastic Surgery (SBCP), Porto Alegre, RS, Brazil.
2. Ph.D. in Surgery from PUCRS, full member of SBCP, preceptor of the Plastic Surgery Service of Hospital São Lucas of PUCRS, Porto Alegre, RS, Brazil.
3. Ph.D.
in Cell Immunology from the University of Sheffield, United Kingdom, coordinator of the Cell Therapy Center of the Institute of Biomedical Research of PUCRS, Porto Alegre, RS, Brazil.
4. Full professor, former president of the Brazilian Society of Reconstructive Microsurgery, former president of the Brazilian Society of Hand Surgery, Porto Alegre, RS, Brazil.

Pedro Djacir Escobar Martins1, Carlos Oscar Uebel2, Denise Cantarelli Machado3, Jefferson Braga da Silva4

ORIGINAL ARTICLE

ABSTRACT

Background: The differences between fetal and adult scars suggest the possibility of manipulating skin scarring outcomes. This study aimed to assess whether the use of adult stem cells from adipose tissue is beneficial to skin healing. Methods: This was a randomized controlled study for which 18 patients were selected based on inclusion and exclusion criteria. The adult stem cells used were autologous and were extracted from infraumbilical adipose tissue prior to abdominoplasty. These cells were implanted into the surgical wound dermis in the suprapubic region before skin synthesis. The results were assessed blindly, based on the Draaijers scale, by three physicians and by the patients themselves in a self-assessment. Photometric assessment by digital photography was also performed. Results: Among the 18 operated patients, considering the surgical result, 17 (94.4%) had excellent or good results and one (5.5%) had wound dehiscence, which was considered a bad result. Considering skin healing in the studied area, there was no statistically significant difference in the photometric evaluation; in both the self-assessment by the patients and the physicians' assessment, the results were significantly in favor of intervention with stem cells (P = 0.12 and P = 0.003, respectively).
Consideration of all assessments (physicians, patients, and photometric) found a statistically significant difference in favor of the implantation of adult stem cells from adipose tissue (P < 0.001). Conclusions: Skin healing results after implantation of adult stem cells derived from adipose tissue were satisfactory.

Keywords: Stem cells. Wound healing. Abdomen/surgery.

RESUMO

Introdução: Fatores que diferenciam a cicatrização fetal e a do adulto instigam a possibilidade de manipulação das soluções de continuidade da pele. Este estudo teve como objetivo avaliar se o uso de células-tronco adultas do tecido adiposo é benéfico à cicatrização da pele. Método: Estudo controlado, randomizado, para o qual foram selecionadas 18 pacientes, considerando-se critérios de inclusão e exclusão. As células-tronco adultas utilizadas eram autólogas, extraídas do tecido adiposo da região infraumbilical, precedendo a realização da abdominoplastia. Essas células, antes da síntese da pele, foram implantadas na derme da ferida operatória, na região suprapúbica. A avaliação dos resultados foi realizada com base na escala de Draaijers, por três avaliadores médicos cegados, e pelas próprias pacientes, por autoavaliação. Foi realizada, também, avaliação fotométrica por fotografia digital. Resultados: Dentre as 18 pacientes operadas, sob o ponto de vista cirúrgico, 17 (94,4%) apresentaram resultados excelentes ou bons e uma (5,5%) apresentou deiscência de sutura, considerado mau resultado. Quanto à cicatrização da pele na área pesquisada, à avaliação fotométrica, não houve diferença estatisticamente significante; à autoavaliação pelas pacientes, os resultados atingiram nível de significância a favor da intervenção com células-tronco (P = 0,12); e à avaliação pelos médicos, foi atingido nível de significância a favor da intervenção por células-tronco (P = 0,003).
Considerando-se todas as avaliações realizadas (médicos, pacientes e fotométrica), foi encontrada diferença estatisticamente significante favorável ao implante de células-tronco adultas do tecido adiposo (P < 0,001). Conclusões: Os resultados da cicatrização da pele, após implante de células-tronco adultas derivadas de tecido adiposo, foram satisfatórios.

Descritores: Células-tronco. Cicatrização. Abdome/cirurgia.

INTRODUCTION

Since the Egyptian age, surgeons have been concerned about wounds and their healing, as evidenced in the papyrus of Edwin S. Smith1. Closing the surgical wound is a basic condition for surgical success. It is essential for the physician to have knowledge of the healing process in order to handle the tissues correctly and obtain an optimal outcome. Healing is divided into inflammatory, proliferative, and maturation stages2,3. The inflammatory or reactive stage lasts about four days and begins at the moment the injury occurs. The proliferative or regenerative stage begins on the fourth day post-injury and lasts about ten days. The last stage, maturation, is the longest and may last from the eighth day to the sixth month or more. In this stage, the scar tension increases quickly between one week and six weeks post-injury and reaches its maturity plateau after about one year of tissue remodeling. Several factors such as infection, local tissue ischemia, diabetes mellitus, radiation, malnutrition, exogenous drugs, and deficiency of minerals and vitamins3 may interfere with the healing process and its progression in a very complex set of events. The scar may be considered adequate, inadequate, or proliferative. These results are determined by the balance between collagen synthesis and degradation. If this balance tilts in either direction, the result will not be satisfactory.
In chronic wounds, collagen degradation is greater than its synthesis, whereas in proliferative, hypertrophic, or keloid scars, the opposite occurs, i.e., collagen deposition exceeds its degradation3. Studies of human fetuses operated on in utero revealed that scars were minimal or unnoticeable after birth4. Lin et al.5 concluded that fetal fibroblasts remained true to their phenotype even when transplanted into adults. This fetal healing process takes place in the absence of inflammation, resulting in a non-apparent scar. According to Estes et al.6, fibroblasts in fetal wounds do not develop to an activated state (myofibroblasts) until a late stage of pregnancy. Bullard et al.7 demonstrated that dermal fibroblasts have significantly more interstitial collagenase in fetal wounds than in those of adults. There is evidence that less inflammation and reduced collagen accumulation occur in fetal healing compared to the adult process. These facts suggest the possibility of manipulating skin scarring in adults with the aim of limiting the intensity of the inflammatory process and thus producing a better result for the scar.

Plastic surgeons in particular have directed their attention to skin healing. In their surgeries, they seek to conceal the scars by placing them according to the force lines of the skin in areas where they will not be visible or will be only minimally noticeable. When the scars are located in constantly exposed areas, as in the face, they use or conduct concealing therapeutic and cosmetic measures to make the scars less noticeable3,8-11. Progress in studies of cellular and molecular biology may have a large impact on understanding the healing process and its clinical applications. Research using stem cells is advancing the understanding of how damaged cells are replaced by healthy cells in adult organisms12,13. This is an area of intense academic and applied research.
The use of stem cells to treat diseases, known as regenerative medicine13, has greatly advanced. Stem cells are fundamental not only to coordinate the formation of organs from the embryonic to the adult stage but also for their role in regeneration and tissue repair. Although several criteria for defining stem cells have been proposed, in short, they should be undifferentiated cells capable of proliferation, self-renewal, production of numerous functionally differentiated cells, and tissue regeneration after an injury14. In consideration of ethical and legal issues, researchers pursuing therapeutic applications have conducted their studies with stem cells, especially those derived from bone marrow stroma15-18. More recent studies have shown that this cell population can also be isolated from adipose tissue19-22 collected by means of liposuction23,24. Some authors prefer not to use the term stem cells, referring to this adipose tissue material as processed lipo-aspirate (PLA) cells or adipose-derived adult stem cells (ADAS)20,21,25,26. Clinical research using autologous stem cells extracted from adipose tissue is encouraged as the cells are easy to obtain. This study aims to assess the effects of these cells on human skin healing.

METHODS

Ethical Aspects

This study was approved on 11/24/2005, as Protocol No 05/02789, by the Ethics Committee of Hospital São Lucas of Pontifícia Universidade Católica do Rio Grande do Sul (PUCRS – Porto Alegre, RS). All patients who participated in this study signed an informed consent form.

Procedures

Cells obtained exclusively from autologous adipose tissue were used in patients in this study. Their use did not alter the surgical sequence or significantly increase the duration of the proposed procedure. The collection of adipose tissue was performed in a maximum period of 5 minutes prior to abdominoplasty. The implantation of adult stem cells from adipose tissue was of similar duration.
The separation of these cells, which took about as long as the abdominoplasty, was performed at the Cell Therapy Center of the Institute of Biomedical Research of PUCRS simultaneously with the surgery.

The inclusion criteria considered were patients in the Plastic Surgery Service of Hospital São Lucas of PUCRS, indications for abdominoplasty, white skin, female gender, age between 30 and 45 years, already having children, and having no stretch marks in the supraumbilical region. The exclusion criteria were smoking, history of keloids or hypertrophic scarring, diabetes mellitus, any skin or connective tissue disease, previous supraumbilical scar, prolonged use of corticosteroids, previous chemotherapy or radiotherapy, weight loss post-obesity, infection, hematoma, seroma or dehiscence during the abdominoplasty postoperative period, and patient withdrawal during the course of the study.

All patients who participated in this study were operated on by the same surgeon, and the same surgical technique, which consisted of liposuction in the infraumbilical region followed by abdominoplasty, was performed in all of them27,28. These two procedures are performed in the same surgery, facilitating the production of the scar as the object of this study and the acquisition of the adipose tissue from which adult stem cells were taken. Prior to abdominoplasty, 30 ml of adipose tissue was collected by liposuction from the infraumbilical region, where there is a major concentration of adult stem cells29. Liposuction was performed with a 50-ml disposable syringe and a cannula with a caliber of 4 mm and length of 25 cm. In order to avoid any change in the adipose tissue, this procedure was performed without infiltration at the site (dry procedure)24.
The adipose tissue was transported, in its own syringe and under sterile conditions, to the Cellular Therapy Center of the Institute of Biomedical Research of PUCRS in order to carry out the extraction of adult stem cells while the surgery was performed (Figure 1).

Surgical Technique

All patients were operated on under epidural anesthesia with puncture in the L3-L4 epidural space and injection of 150 mg of ropivacaine hydrochloride 0.75% and 100 mg of fentanyl citrate. In the transoperative period, the patient was sedated with midazolam 15 mg, intravenously, in divided doses. The same surgical sequence was followed in the abdominoplasty for all cases: prior resection, in a single block, of the skin flap and subcutaneous cell tissue from the infraumbilical region in the area from the umbilical scar to the pubic region located between the two anterosuperior iliac spines27,28 (Figure 2). Then, juxta-aponeurotic detachment of the supraumbilical dermal-adipose flap to the level of the ribs and xiphoid process was performed. Next, the musculoaponeurotic wall of the abdomen was repositioned by plication with discontinuous stitches of 2.0 monofilament nylon (Ethicon®). The umbilical scar was fixed with 4.0 monofilament nylon sutures (Ethicon®) in the musculoaponeurotic wall and sutured with the same thread to the skin of the supraumbilical dermal-adipose flap that had been pulled up to its new position at the pubic edge of the surgical incision. To complete the abdominoplasty, the closure of the upper and lower edges of the surgical wound was performed at all levels. This synthesis resulted in the abdominoplasty scar in which the research with adult stem cells from adipose tissue was conducted (Figure 3). For the skin closure, the same procedures were always followed, i.e., 4.0 monofilament nylon thread (Ethicon®) was used for the subdermal layer and 3.0 monofilament nylon thread (Ethicon®) for the subcutaneous cell tissue and the intradermal sutures.
In all patients, a 1/4 suction drain (Drenoplass®) was placed by inferior counter-incision in the pubic region. The purpose of this drain was to prevent fluid accumulation that could stretch the skin and change the tension of the suture lines in the studied region.

Figure 1 – Liposuction with syringe. In A, demarcation of the skin flap and subcutaneous cell tissue to be resected. In B, cannula for liposuction and syringe with adipose tissue.

Figure 2 – Prior resection: infraumbilical flap resected in a single block prior to supraumbilical detachment.

Adult Stem Cell Collection from Adipose Tissue

The extraction of adult stem cells from adipose tissue was performed at the Cell Therapy Center of the Institute of Biomedical Research of PUCRS as follows: 20 ml of adipose tissue was divided between two tubes and washed with 40 ml of Dulbecco's Phosphate-Buffered Saline (DPBS; Invitrogen Corp., Carlsbad, CA, USA) containing 2% (v/v) fetal bovine serum (FBS; Invitrogen Corp., Carlsbad, CA, USA) for red blood cell removal. The suspension was centrifuged at 450 × g for five minutes. The adipose tissue was transferred to a new tube, to which 0.015% (w/v) collagenase (Sigma Corp., St. Louis, MO, USA) diluted in DPBS in a total of 50 ml was added. The tube was placed in an orbital shaker and incubated at 37°C for 45 minutes until the tissue was completely dissociated. The collagenase was inactivated with Dulbecco's modified Eagle's medium (DMEM; Invitrogen Corp., Carlsbad, CA, USA) containing 10% (v/v) FBS, and the solution was divided between two tubes. The cells were centrifuged at 1,200 × g for 10 minutes, and the supernatant was discarded. The cells were resuspended in 10 ml of DPBS containing 10% (v/v) FBS and centrifuged again for washing. Then, the total number of cells was quantified using a hemocytometer.
The cells were resuspended in saline to a density of 5 × 108 cells per ml for infiltration of the scar.

Flow cytometry was performed with the following antibodies: CD73, CD105, and CD117. The samples were analyzed in a FACSCalibur flow cytometer (Becton Dickinson Immunocytometry Systems, San Jose, CA, USA). An aliquot of 100 µl of the suspension of adult stem cells from adipose tissue was used for characterization of the cell populations. Twenty microliters of each antibody was added, and the solution was incubated at room temperature for 30 minutes in the dark. The sample was centrifuged at 200 × g for 5 minutes, and the supernatant was discarded. The sample was washed with 2 ml of PBS (with 0.1% sodium azide and 1% FBS) by centrifugation at 200 × g for 5 minutes. The supernatant was discarded and the cells were resuspended in 500 µl of PBS.

Use of Adult Stem Cells from Adipose Tissue on the Scar

To carry out this research, the segment located in the suprapubic region at the site of the abdominoplasty incision was selected and marked 5 cm to each side of the midline (Figure 3). One side of each incision was randomly chosen for implantation of adult stem cells from adipose tissue, and the selected side was not known to the patients or the observers. Before the skin closure, adult stem cells from adipose tissue suspended in saline were implanted in the surgical wound dermis. Prior to beginning the portion of the study with the selected patients, the volume required to cover an area of 1 cm² of skin was calculated by injecting methylene blue into the dermis. It was found that each 0.5 ml injected covers 1 cm² of the skin (Figures 4 and 5). On the randomly chosen side, 5 ml of saline containing adult stem cells from adipose tissue at a density of 5 × 108 per ml was injected into both edges of the surgical wound18. In the contralateral side, which served as a control, an identical volume of saline was injected.
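The dose arithmetic implied by these figures can be sketched as follows. This is a minimal illustration: the helper names are ours, and only the cell density (5 × 10⁸ per ml), the injected volume (5 ml per side), and the methylene-blue calibration (0.5 ml per cm²) come from the text:

```python
def total_cells(density_per_ml: float, volume_ml: float) -> float:
    """Total number of implanted cells = density x injected volume."""
    return density_per_ml * volume_ml

def area_covered_cm2(volume_ml: float, ml_per_cm2: float = 0.5) -> float:
    """Skin area covered, using the study's methylene-blue calibration
    of 0.5 ml of injectate per square centimeter of dermis."""
    return volume_ml / ml_per_cm2

density = 5e8  # adult stem cells per ml of saline
volume = 5.0   # ml injected per randomized side
print(f"{total_cells(density, volume):.1e} cells")  # 2.5e+09 cells
print(f"{area_covered_cm2(volume):.0f} cm^2")       # 10 cm^2
```

So each treated side received on the order of 2.5 billion cells distributed over roughly 10 cm² of wound-edge dermis.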
It was thus possible to compare the healing process with and without implantation of adult stem cells from adipose tissue in the same patient.

Healing Assessment
Research in the field of healing is still in its initial stages. In 1997, Morris et al.30 described a study using rabbits' ears to compare the treatment of hypertrophic scars with triamcinolone or saline solution. Historically, human healing has been assessed by clinical studies. For this reason, a system for scar assessment in a common medical language is necessary. The Vancouver scale has gained great acceptance and is widely used for burns31-34. In 1998, Beausang et al.35 expanded this scale to make it more comprehensive for the assessment of linear scars after surgery or trauma. As these two scales did not include a self-assessment component, Draaijers et al.36 created a scale that relies on both the patient's and the observer's assessments. In addition to these scales, morphometric analysis by digital photography has been considered an objective method for the documentation and assessment of scars37. The scars in this study were assessed by the following methods:
1. Patient/observer scales (Draaijers et al.36) – consist of two numeric scales validated and tested against the Vancouver scale31-34. The observer's scale contains five assessment items: vascularization, pigmentation, elasticity, thickness, and relief. The patient's scale contains six assessment items: color, elasticity, thickness, relief, itching, and pain. Each assessment item receives a score ranging from 1 to 10. A score of 10 represents the worst scar and the worst imaginable feeling. The sum of the scores in the observer's scale ranges from 5 to 50, while the sum of the patient's scores ranges from 6 to 60.

Figure 3 – Suprapubic wound: region of use of adult stem cells from adipose tissue in both edges of the surgical wound, randomized to one side of the midline.
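The score ranges of the two scales above follow directly from their item counts; a minimal sketch that validates and sums a set of item scores (the helper function and the example rating are this sketch's assumptions, not part of the published scale):

```python
# Items of the patient/observer scar scales (Draaijers et al.),
# each scored 1-10, with 10 being the worst.
OBSERVER_ITEMS = ["vascularization", "pigmentation", "elasticity", "thickness", "relief"]
PATIENT_ITEMS  = ["color", "elasticity", "thickness", "relief", "itching", "pain"]

def scale_sum(scores: dict, items: list) -> int:
    """Check that every item is present and scored 1-10, then return the sum."""
    for item in items:
        s = scores[item]
        if not 1 <= s <= 10:
            raise ValueError(f"{item} score {s} outside 1-10")
    return sum(scores[item] for item in items)

# A hypothetical observer rating close to normal skin:
obs = {item: 2 for item in OBSERVER_ITEMS}
print(scale_sum(obs, OBSERVER_ITEMS))
```

With five items at 1–10 each, observer sums necessarily fall in 5–50, and patient sums (six items) in 6–60, matching the ranges quoted in the text.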
The smallest sums of scores, 5 and 6 respectively, reflect normal skin. In this study, four observers – three physicians and the operated patient herself – assessed the healing results at 1 month, 3 months, 6 months, and 12 months postoperatively. Two plastic surgeons and one dermatologist, all with more than ten years of experience and none from the medical staff of Hospital São Lucas, PUCRS, were included as observers.
2. Morphometric scale by digital photography and image analysis (Image Pro Plus, Media Cybernetics, United States)37 – photometric assessment performed by a physician of the Service of Plastic Surgery at Hospital São Lucas of PUCRS who did not know on which side the adult stem cells from adipose tissue had been implanted. The scar was assessed by optical density of image (ODI) and by the average length perpendicular to the scar at ten points, on both sides, in the areas of implantation of the adult stem cells from adipose tissue and in the areas of saline injection. The studied patients were photographed at all stages of assessment with the same camera (Sony® DSC-W7, 7.2 megapixels) at the same brightness and distance.

Statistical Analysis
All elements observed in the patients were quantified by pixel analysis of the photographs or by the patients' impressions and physicians' assessment scores. Descriptive average measures were obtained for each assessment time. Then, the areas under the curve for the points formed by the two sides of the scar to be compared were calculated. The curves were compared by Student's t test for paired samples. The proportions of favorable and unfavorable remarks on the intervention with stem cells were also compared using the binomial test. The adopted level of significance was alpha = 0.05. Data were analyzed by intention to treat using the Last Observation Carried Forward (LOCF) protocol, and processed and assessed with SPSS, version 15.0.
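The LOCF fill and the area-under-the-curve step described above can be sketched with the standard library alone; the follow-up scores below are invented placeholders, and `locf`/`auc` are illustrative names, not the authors' actual SPSS procedure:

```python
def locf(series):
    """Last Observation Carried Forward: replace missing (None) values
    with the most recent observed one."""
    filled, last = [], None
    for value in series:
        last = value if value is not None else last
        filled.append(last)
    return filled

def auc(times, scores):
    """Trapezoidal area under the score curve over the follow-up period."""
    total = 0.0
    for (t0, s0), (t1, s1) in zip(zip(times, scores), zip(times[1:], scores[1:])):
        total += (t1 - t0) * (s0 + s1) / 2.0
    return total

months = [1, 3, 6, 12]                       # assessment times (months)
stem_side    = locf([14, 12, None, None])    # patient lost after month 3
control_side = locf([16, 15, None, None])
print(auc(months, stem_side), auc(months, control_side))
```

In the study the two per-side areas would then be compared across patients with a paired t test; that final step is omitted here since it needs a statistics package.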
RESULTS
Eighteen patients were operated on, 17 (94.4%) of whom had excellent or good surgical results; in one patient (5.5%), the result was considered poor due to suture dehiscence in the suprapubic region. During the course of the study, another 5 (27.7%) patients were lost to follow-up, leaving 12 (66.6%) at the end of the study. However, the intention-to-treat protocol was used to include all 18 patients in the analysis. Using the criteria in the scale of Draaijers et al.36, it was possible to observe that the sides of the scars implanted with adult stem cells from adipose tissue showed better healing than those in which only saline solution was infiltrated (control) (Figures 6 and 7). When the photometric aspects were compared, no statistically significant difference was detected in the random measurement (P = 0.44) or total measurement (P = 0.66) analyses. To compare the patients' assessments, six parameters were considered: pain, itching, color, stiffness, thickness, and irregularity. In the score analysis, no statistical significance (P > 0.17) was found for any of these aspects. However, consideration of all of the assessment events throughout the observation period yielded 42 measuring points, of which 15 were favorable to the control and 27 to the stem cells, without reaching the significance level in favor of intervention by stem cells (P = 0.12). In the medical observers' assessments, five aspects were considered: vascularization, pigmentation, thickness, elasticity, and contraction. No statistically significant difference was found in any of these aspects (P > 0.37).

Figure 4 – Infiltration of methylene blue into the dermis to calculate the volume needed to cover 1 cm² of skin surface.
Figure 5 – Implantation of adult stem cells from adipose tissue into the dermis at 0.5 ml/cm² of skin surface.
However, consideration of all of the assessments distributed throughout the observation period yielded 35 measuring points, of which 8 were favorable to the control and 27 to the stem cells, reaching the significance level in favor of intervention by stem cells (P = 0.003) (Figure 8). Stratifying the assessments by patients and photometry produced no statistically significant difference, probably due to the reduced number of events assessed. However, combining all assessments (physicians, patients and photometric) yielded a statistically significant difference in favor of the implantation of adult stem cells from adipose tissue. In a total of 91 events, 65 were favorable to the implantation of adult stem cells from adipose tissue and 26 to the control (P < 0.001) (Table 1).

DISCUSSION
The abdominoplasty results are secondary to the focus of this research, which exclusively analyzes skin healing. However, they are important to prove that this clinical study did not cause any alterations that could compromise the postoperative healing process and outcomes in the participating patients. Evidence-based practices38 are used to ensure good post-surgical healing results. In addition to accurate surgical technique and careful positioning of scars in accordance with the force lines of the skin, avoidance of any tension in the suture lines is also important. Immobilization and compression of the scar are recommended during the postoperative period, including the maturation phase3.

Figure 6 – Implantation of adult stem cells from adipose tissue on the right side. Postoperative views at 1 month, 3 months, 6 months, and 1 year.
Figure 7 – Implantation of adult stem cells from adipose tissue on the left side. Postoperative views at 1 month, 3 months, 6 months, and 1 year.
Figure 8 – Physicians' assessment. Scatter plot of the differences of the average assessment scores of the scar (vascularization, pigmentation, thickness, contraction, elasticity), representing the distribution of assessment events throughout the observation period and showing favorable overall results for the implantation of adult stem cells from adipose tissue (P = 0.003).

Table 1 – Comparison of the assessment events during the observation period.

Assessment method   Assessment events   Favorable to stem cells   Favorable to control   P
Photometric         14                  11                        3                      0.106
Patients            42                  27                        15                     0.120
Physicians          35                  27                        8                      0.003
Total               91                  65                        26                     <0.001

Therapeutic measures such as the use of corticosteroids, botulinum toxin, vitamins A and E, silicone tapes, laser, and radiotherapy are used for the prevention or treatment of hypertrophic scars or keloids3,8-11. This clinical, prospective, and randomized study was conducted with the same objective: the improvement of scars. The implantation of adult stem cells from adipose tissue into the dermis of the abdominoplasty surgical wound demonstrated a beneficial effect on healing. The autologous cells used had no contraindications and did not cause side effects, as may occur with other approaches that use corticosteroids or radiotherapy. It was not possible to perform a comparative analysis with other similar clinical studies due to the scarcity of published research assessing the implantation of adult stem cells from adipose tissue into surgical incisions in human skin. The papers mentioned in this study performed such analyses in laboratory animals39,40, and their results, as in this research, also showed beneficial effects of cell therapy on skin healing.
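The binomial comparisons reported in Table 1 can be re-run with the standard library alone. This is a sketch assuming a plain two-sided exact binomial test against chance (0.5); the paper does not state which binomial variant SPSS applied, so the exact P values may differ from the published column, although the pattern of significance is reproduced:

```python
from math import comb

def binom_two_sided(k: int, n: int) -> float:
    """Exact two-sided binomial test against p = 0.5: by symmetry, twice the
    probability of the more extreme tail (capped at 1)."""
    tail = sum(comb(n, i) for i in range(max(k, n - k), n + 1)) / 2 ** n
    return min(1.0, 2 * tail)

# Rows of Table 1: (events favorable to stem cells, total assessment events).
rows = [("Photometric", 11, 14), ("Patients", 27, 42),
        ("Physicians", 27, 35), ("Total", 65, 91)]
for label, favorable, n in rows:
    p = binom_two_sided(favorable, n)
    print(label, round(p, 4), "significant" if p < 0.05 else "not significant")
```

As in the table, only the physicians' row and the pooled total cross the alpha = 0.05 threshold under this test.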
As this is an initial study, the results reported here can be considered promising when compared to other studies of longer duration, such as experiments that employ cell therapy in the regeneration of other tissues. Studies on diseases or trauma of organs such as the heart, liver, kidney and peripheral nerves18,41-43 have proven that these tissues can be regenerated.

CONCLUSION
The effect of implanting adult stem cells from adipose tissue on the skin healing of postoperative abdominoplasty wounds proved to be satisfactory.

REFERENCES
1. Porter R. The greatest benefit to mankind: a medical history of humanity. New York: W. W. Norton & Company; 1997.
2. Townsend Jr. CM, Beauchamp RD, Evers BM, Mattox KL. Sabiston textbook of surgery: the biological basis of modern surgical practice. 17th ed. Philadelphia: Elsevier; 2004.
3. Lorenz HP, Longaker MT. Wound healing: repair biology and wound and scar treatment. In: Mathes SJ, ed. Plastic surgery. Vol. 1. Philadelphia: Saunders Elsevier; 2006. p. 209-32.
4. Adzick NS, Longaker MT. Fetal wound healing. New York: Chapman & Hall; 1992.
5. Lin RY, Sullivan KM, Argenta P, Peter Lorenz H, Scott Adzick N. Scarless human fetal skin repair is intrinsic to the fetal fibroblast and occurs in the absence of an inflammatory response. Wound Repair Regen. 1994;2(4):297-305.
6. Estes JM, Vande Berg JS, Adzick NS, MacGillivray TE, Desmoulière A, Gabbiani G. Phenotypic and functional features of myofibroblasts in sheep fetal wounds. Differentiation. 1994;56(3):173-81.
7. Bullard KM, Cass DL, Banda MJ, Adzick NS. Transforming growth factor beta-1 decreases interstitial collagenase in healing human fetal skin. J Pediatr Surg. 1997;32(7):1023-7.
8. Xiao Z, Zhang F, Cui Z. Treatment of hypertrophic scars with intralesional botulinum toxin type A injections: a preliminary report. Aesthetic Plast Surg. 2009;33(3):409-12.
9. Horswell BB. Scar modification. Techniques for revision and camouflage. Atlas Oral Maxillofac Surg Clin North Am.
1998;6(2):55-72.
10. Viera MH, Amini S, Konda S, Berman B. Do postsurgical interventions optimize ultimate scar cosmesis? G Ital Dermatol Venereol. 2009;144(3):243-57.
11. Haedersdal M, Moreau KE, Beyer DM, Nymann P, Alsbjørn B. Fractional nonablative 1540 nm laser resurfacing for thermal burn scars: a randomized controlled trial. Lasers Surg Med. 2009;41(3):189-95.
12. Daley GQ, Goodell MA, Snyder EY. Realistic prospects for stem cell therapeutics. Hematology Am Soc Hematol Educ Program. 2003:398-418.
13. Fodor WL. Tissue engineering and cell based therapies, from the bench to the clinic: the potential to replace, repair and regenerate. Reprod Biol Endocrinol. 2003;1:102.
14. Loeffler M, Bratke T, Paulus U, Li YQ, Potten CS. Clonality and life cycles of intestinal crypts explained by a state dependent stochastic model of epithelial stem organization. J Theor Biol. 1997;186(1):41-54.
15. Pittenger MF, Mackay AM, Beck SC, Jaiswal RK, Douglas R, Mosca JD, et al. Multilineage potential of adult human mesenchymal stem cells. Science. 1999;284(5411):143-7.
16. Tuan RS, Boland G, Tuli R. Adult mesenchymal stem cells and cell-based tissue engineering. Arthritis Res Ther. 2003;5(1):32-45.
17. Tohill M, Terenghi G. Stem-cell plasticity and therapy for injuries of the peripheral nervous system. Biotechnol Appl Biochem. 2004;40(Pt 1):17-24.
18. Braga-Silva J, Gehlen D, Padoin AV, Machado DC, Garicochea B, Costa da Costa J. Can local supply of bone marrow mononuclear cells improve the outcome from late tubular repair of human median and ulnar nerves? J Hand Surg Eur Vol. 2008;33(4):488-93.
19. Martinez-Estrada OM, Muñoz-Santos Y, Julve J, Reina M, Vilaró S. Human adipose tissue as a source of Flk-1+ cells: new method of differentiation and expansion. Cardiovasc Res. 2005;65(2):328-33.
20. Zuk PA, Zhu M, Mizuno H, Huang J, Futrell JW, Katz AJ, et al. Multilineage cells from human adipose tissue: implications for cell-based therapies. Tissue Eng. 2001;7(2):211-28.
21.
De Ugarte DA, Morizono K, Elbarbary A, Alfonso Z, Zuk PA, Zhu M, et al. Comparison of multi-lineage cells from human adipose tissue and bone marrow. Cells Tissues Organs. 2003;174(3):101-9.
22. Safford KM, Hicok KC, Safford SD, Halvorsen YD, Wilkison WO, Gimble JM, et al. Neurogenic differentiation of murine and human adipose-derived stromal cells. Biochem Biophys Res Commun. 2002;294(2):371-9.
23. Illouz YG. A new method for localized lipodystrophies. Rev Chir Esthet. 1980;4:19.
24. Fournier P, Otteni FM. Lipodissection in body sculpturing: the dry procedure. Plast Reconstr Surg. 1983;72(5):598-609.
25. Fraser JK, Wulur I, Alfonso Z, Hedrick MH. Fat tissue: an underappreciated source of stem cells for biotechnology. Trends Biotechnol. 2006;24(4):150-4.
26. Lambert APF, Zandonai AF, Bonatto D, Machado DC, Henriques JAP. Differentiation of human adipose-derived adult stem cells into neuronal tissue: does it work? Differentiation. 2009;77(3):221-8.
27. Vasconez LO, De La Torre JI. Abdominoplasty. In: Mathes SJ, ed. Plastic surgery. Vol. 6. Philadelphia: Saunders Elsevier; 2006.
28. Pontes R. Abdominoplastia: ressecção em bloco e sua aplicação em lifting de coxa e torsoplastia. (Abdominoplasty: en bloc resection and its application in thigh lifting and torsoplasty.) Rio de Janeiro: Revinter; 2004.
29. Padoin AV, Braga-Silva J, Martins P, Rezende K, Rezende AR, Grechi B, et al. Sources of processed lipoaspirate cells: influence of donor site on cell concentration. Plast Reconstr Surg. 2008;122(2):614-8.
30. Morris DE, Wu L, Zhao LL, Bolton L, Roth SI, Ladin DA, et al. Acute and chronic animal models for excessive dermal scarring: quantitative studies. Plast Reconstr Surg. 1997;100(3):674-81.
31. Sullivan T, Smith J, Kermode J, McIver E, Courtemanche DJ. Rating the burn scar. J Burn Care Rehabil. 1990;11(3):256-60.
32. Baryza MJ, Baryza GA.
The Vancouver Scar Scale: an administration tool and its interrater reliability. J Burn Care Rehabil. 1995;16(5):535-8.
33. Nedelec B, Shankowsky HA, Tredget EE. Rating the resolving hypertrophic scar: comparison of the Vancouver Scar Scale and scar volume. J Burn Care Rehabil. 2000;21(3):205-12.
34. Mustoe TA, Cooter RD, Gold MH, Hobbs FD, Ramelet AA, Shakespeare PG, et al. International clinical recommendations on scar management. Plast Reconstr Surg. 2002;110(2):560-71.
35. Beausang E, Floyd H, Dunn KW, Orton CI, Ferguson MW. A new quantitative scale for clinical scar assessment. Plast Reconstr Surg. 1998;102(6):1954-61.
36. Draaijers LJ, Tempelman FR, Botman YA, Tuinebreijer WE, Middelkoop E, Kreis RW, et al. The patient and observer scar assessment scale: a reliable and feasible tool for scar evaluation. Plast Reconstr Surg. 2004;113(7):1960-5.
37. Davey RB, Sprod RT, Neild TO. Computerised colour: a technique for the assessment of burn scar hypertrophy. A preliminary report. Burns. 1999;25(3):207-13.
38. Atiyeh BS. Nonsurgical management of hypertrophic scars: evidence-based therapies, standard practices, and emerging methods. Aesthetic Plast Surg. 2007;31(5):468-92.
39. Stoff A, Rivera AA, Sanjib Banerjee N, Moore ST, Michael Numnum T, Espinosa-de-Los-Monteros A, et al. Promotion of incisional wound repair by human mesenchymal stem cell transplantation. Exp Dermatol. 2009;18(4):362-9.
40. Satoh H, Kishi K, Tanaka Y, Kubota Y, Nakajima T, Akasaka Y, et al. Transplanted mesenchymal stem cells are effective for skin regeneration in acute cutaneous wounds. Cell Transplant. 2004;13(4):405-12.
41. Mays RW, van't Hof W, Ting AE, Perry R, Deans R. Development of adult pluripotent stem cell therapies for ischemic injury and disease. Expert Opin Biol Ther. 2007;7(2):173-84.
42. Navarro-Alvarez N, Soto-Gutierrez A, Kobayashi N. Stem cell research and therapy for liver disease. Curr Stem Cell Res Ther. 2009;4(2):141-6.
43. Watorek E, Klinger M.
Stem cells in nephrology: present status and future. Arch Immunol Ther Exp (Warsz). 2006;54(1):45-50.

Correspondence to: Pedro Djacir Escobar Martins – Av. Engenheiro Alfredo Correa Daudt, 125 – ap. 301 – Boa Vista – Porto Alegre, RS, Brazil – CEP 90480-120 – E-mail: clinicapedromartins@terra.com.br

----

Uniting botanical science and art
Book Review. Volume 116, Number 5/6, May/June 2020. https://doi.org/10.17159/sajs.2020/7960. © 2020. The Author(s). Published under a Creative Commons Attribution Licence.
BOOK TITLE: The Amaryllidaceae of southern Africa
AUTHORS: Graham Duncan, Barbara Jeppe, Leigh Voigt
ISBN: 9781919766508 (hardcover, 709 pp)
PUBLISHER: Umdaus Press, Pretoria; ZAR1299
PUBLISHED: 2017
REVIEWER: Brian W.
van Wilgen
AFFILIATION: Centre for Invasion Biology, Department of Botany and Zoology, Stellenbosch University, Stellenbosch, South Africa
EMAIL: bvanwilgen@sun.ac.za
HOW TO CITE: Van Wilgen BW. Uniting botanical science and art. S Afr J Sci. 2020;116(5/6), Art. #7960, 1 page. https://doi.org/10.17159/sajs.2020/7960
ARTICLE INCLUDES: ☐ Peer review ☐ Supplementary material
PUBLISHED: 27 May 2020

The Amaryllidaceae is a large family of flowering plants, with over 800 species in more than 50 genera, distributed across warm temperate and tropical parts of the world. The largest proportion of species is in South America, but southern Africa is home to approximately 250 species in 18 genera and they are found in a wide range of habitats. The family includes many popular garden plants, such as daffodils, snowdrops and clivias, and vegetables, such as onions, chives and garlic. There are three subfamilies: Agapanthoideae (with the single endemic southern African genus Agapanthus), Allioideae (onions and chives) and Amaryllidoideae. This book is dedicated to the Amaryllidoideae, and thus does not include the eight species of Agapanthus, nor the approximately 20 species of the African genus Tulbaghia (wild garlic). The bulbs of wild amaryllids were collected by Dutch sailors at the Cape as early as 1603, but the family was only formally described in 1805 by the French naturalist Jean St Hilaire, who named it after Amaryllis, a beautiful maiden who, in Greek mythology, fell in love with the handsome shepherd Alteo, who had a passion for flowers. James G. Baker, Keeper of the Herbarium and Library of the Royal Gardens, Kew, made enormous contributions to the taxonomy of the family in the late 19th century, single-handedly writing the entire text of Volume 6 of the Flora Capensis (Haemodoraceae to Liliaceae) in 1896, as well as full descriptions of the family in the Flora of Tropical Africa in 1898.
During the 20th century, several publications dealing with the southern African Amaryllidaceae appeared, including reviews of Cyrtanthus in 1939, Nerine in 1967, Crinum in 1973 and Haemanthus in 1984. All are, of course, now outdated, and a modern review was necessary. This book brings together the scattered accounts of these species, and provides an up-to-date synthesis of the taxonomy, distinguishing features, distribution, ecology, conservation status and cultivation of 289 taxa (species, subspecies and varieties). The book is arranged in alphabetical order of genera, and there is an introduction to each genus that provides information on its history of discovery, ecology, distribution, and medicinal and poisonous properties. The extensive scientific text was prepared by Graham Duncan, curator of the indigenous bulb collection at the Kirstenbosch National Botanical Garden. Duncan has drawn on both his qualifications in botanical taxonomy and extensive experience as a professional horticulturalist to provide a thorough, comprehensive, and highly informative review on the state of our knowledge on these plants. This hefty book is not, however, only a dry treatise of a plant family – the story of how the book eventually came about is, itself, intriguing. The book owes its existence to Barbara Jeppe and Leigh Voigt, a mother-and-daughter team of artists who together spent 45 years collecting and illustrating individual species. The art of accurately illustrating plant specimens has been vital to botanical science for centuries. Before modern photography, and particularly recently digital photography, it was a necessary aspect of botany that took time, patience and great skill. In 1972, botanical artist Barbara Jeppe began to paint the various Nerine and Haemanthus species near her home at Lake Sibaya, initiating a collection of paintings of the amaryllids of the area. 
Later this collection was supplemented with species from as far afield as the Western Cape and the Richtersveld, and, over time, she conceived the idea of illustrating a complete work on the family. The process of accurate rendering was time-consuming, often requiring visits to remote sites to get access to fresh material, with multiple trips required to each site to depict the leaves and flowers, which do not appear simultaneously. When it became apparent to Barbara that she would not be able to finish the task in her lifetime, her daughter Leigh Voigt, also an accomplished botanical artist, promised to complete the work. Over the next 16 years, Leigh set about filling in the gaps, often having to fly to wherever a new species had been found in order to paint it in situ. Following the completion of the plates, Graham Duncan spent a further 2 years drafting the text. The final product, a sizeable book of over 700 pages, is illustrated with 248 full-page colour plates and a distribution map for each species. Not every taxon is illustrated with a colour plate, and one or two have more than one plate, but the coverage is close to comprehensive. In addition, the book has over 120 informative colour photographs of the plants flowering in their natural habitats. The originals for all of the painted plates were purchased by Louis Norval, and now form part of the Homestead Art Collection housed at the Norval Foundation in Cape Town. By combining traditional botanical art with outstanding photography, this book, published by Umdaus Press, sets a new standard for botanical publishing in southern Africa. Although not a book to be taken easily into the field as an identification guide, it will have wide appeal to professional botanists and conservationists, as well as those with an interest in growing the many amaryllid species. I also have no doubt that, in time, this publication will become one of the most collectable texts in the field of botanical Africana.
----

PRESERVING CITY COLOR PLAN, SURVEYING IRANIAN SUBMONTANE CITIES
Mehrnaz MOLAVI
Associate professor, PhD (Urbanism), Department of Urbanism, Faculty of Architecture and Art, University of Guilan, e-mail: mehrnaz.molavi@gmail.com

Abstract. Considering the color palette of the buildings of every city arouses a debate related to the identity and aesthetics of the urban environment. This debate has been followed by research on the color of cities, from the limited hues of the traditional city to the numerous colors of the modern city. The French researcher Lenclos, after dedicating many years to the subject of color, was the first to devise a method of surveying the colors of cities, which is used by anyone researching in this field. This article, after describing Lenclos' method of surveying the color plans of cities, presents the author's case study of the color palette of Lahijan. The author's method in this case study combines digital photography and Photoshop software with Lenclos' method. The results (shown in a bar chart) confirm that although the traditional color palette of the city is still dominant, it shows some differences from citizens' opinions obtained through interviews and questionnaires.

Key words: color identity, local color, Lenclos' approach.

1. Introduction
Streets are an important part of public open space in the city. In urban areas, streets constitute a significant part of the public open space and are seen as the most important symbols of the public realm (Mehta, 2007).
Social commentators and scholars suggest that people's image of a city is often that of its streets. "For many urbanites it is the streets that represent the outdoors" (Jacobs, 1993). In the streets it is the façade that is responsible for the visual impact of a building, as the interface between the viewer and the built space that has a purpose, a signification and a context (Opincariu, 2011). Facades consist of lines, surfaces, volumes and also colors. These colors include the inherent colors of materials and also colors added to facades to make them more beautiful or to create a coherent view of the street. Protecting and restoring the color identity and main character of cities known for their specific character is therefore expedient. The juxtaposition of these colors constitutes the color map of a given city, which is the main subject of this article.

Urbanism. Arhitectură. Construcţii • Vol. 6 • Nr. 2 • 2015

Here it should be mentioned that there is a difference between color mapping and the color plan of a city. Color mapping attempts to record the existing colors of buildings, which usually embody the identity and character of a city. A color plan, by contrast, is what urban planners and urban designers decide for the development areas of cities in order to prevent chaos in today's rapidly growing cities. Gou and Wang believe that the "urban color plan is the game between urban planner and architect", but they also believe that "Color plan is necessary in the present stage of development due to rapid boom of urban construction and economy development" (Gou, Wang, 2007). In fact the present color plans can be taken as necessary mostly because of their overall color effect, not as a controlling method for the government (Gou, 2009). Let's return to the color mapping of cities with established identities. The dominant colors constitute an important part of city identity. The distinction between the main colors of different cities depends on various factors.
These factors include the color of local materials, the mineral pigments of the region, the culture of the inhabitants, climate, etc. Recording the colors of cities is a rather new topic; research on it began in the 1960s, leading to the proposal of a methodology for recording the color maps of cities in the 1970s. The color map of a city includes the colors applied to walls, windows, doors and visible parts of roofs. Further, the colors of other elements, such as the dominant colors of the natural environment, including vegetation and the color of water and the sky, can be added to the map (Lenclos, 2005). Documenting the color maps of cities records a part of their cultural identity. The original color map of each city, along with its construction technology and dominant architectural forms, and even matters such as local costume and food, is considered part of the cultural heritage of each city and area; obviously, it is vital to record and maintain them. Recording and maintaining the color plan of each city, just like maintaining its mosques, public baths, castles and old houses, means preserving the historical heritage as a treasure to pass on to the next generations. On the other hand, technological developments and the abundance of new construction materials in a variety of colors have eliminated the limitations that in the past led to the use of a few homogeneous colors, which created a unique, pleasant collection of colors in an entire city or in each neighborhood. The development of a vast continuum of colors and materials and the ability to choose among them may be exciting at first sight, but later it yields disappointing results. The visual anarchy that came to dominate cities led to demands for a comprehensive plan for the colors of cities, as well as regulations to unify them. Recording the color maps of cities can be used in preparing a comprehensive plan for city colors. With the help of a comprehensive plan of city colors, it is possible to envisage diverse palettes for various neighborhoods.
This plan will be especially beneficial for the development areas of cities, in which lack of identity is one of the inevitable problems. The purpose of this study is to present a common methodology for preparing the color plans of cities. Then, the changes that the researcher has made to the methodology will be described using the case of Lahijan as an example. First, a brief explanation is offered of the first recorded color maps of cities in the 19th century. Then the reasons for the dramatic changes in the use of color, especially in the 20th century, and their results will be reviewed. In the following sections, the different stages of Lenclos' framework for recording the color maps of cities will be discussed. Then, the changes suggested by the researcher will be described. Finally, a combination of Lenclos' framework and digital photography for sampling colors will be applied to the case of Lahijan.
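The combination of Lenclos' sampling with digital photography lends itself to a simple computational sketch: quantize the pixels of a facade photograph into coarse color bins and count the dominant ones. The snippet below follows that idea only in spirit — the binning step, the stand-in pixel list and all names are this example's assumptions, not Lenclos' procedure or the author's Photoshop workflow:

```python
from collections import Counter

def quantize(rgb, step=64):
    """Snap an (r, g, b) value to a coarse bin, mimicking the reduction of a
    facade photograph to a small palette of representative hues."""
    return tuple((c // step) * step for c in rgb)

def dominant_colors(pixels, top=3, step=64):
    """Return the most frequent coarse colors with their pixel share."""
    counts = Counter(quantize(p, step) for p in pixels)
    total = sum(counts.values())
    return [(color, n / total) for color, n in counts.most_common(top)]

# Stand-in 'photograph': mostly ochre wall, some white trim, a little grey roof.
pixels = [(200, 160, 90)] * 70 + [(250, 250, 245)] * 20 + [(100, 100, 105)] * 10
for color, share in dominant_colors(pixels):
    print(color, round(share, 2))
```

Ordering the bins by pixel share maps naturally onto a bar chart of a city's palette, the form of presentation mentioned in the abstract.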
In the 1980s, interest in color and its extensive use in cities became widespread (Swirnoff, 2005). It should be acknowledged that the origin of the color identity of cities was precisely that limited range of local colors, which produced radical differences between the color identities of cities in different regions. Even cities located in the same region could have entirely different colors because of their altitude and geology (Lenclos, 2005). Now that a wide choice of color is available to all social classes, those limitations have been removed. It is predictable that sooner or later visual chaos will dominate cities, with no consequence but loss of identity. The study of color, color psychology and methods of using color has therefore emerged as a new field of research. Recognizing the color palette of cities, and trying to protect it both in the inevitable reconstruction of central business districts and in development areas, is what makes this research necessary.

3. Literature Review

3.1. Use of color from the past until today

The use of color in cities and their buildings goes back to the first civilizations. Mesopotamian cities were white: the adobe houses were colorful inside but painted white on the outside. Greek cities, by contrast, were not white. Jacques Ignace Hittorff found that Greek houses, temples and even sculptures were painted in bright colors; the whiteness of ancient Greek remains today is the result of weathering over time and of erosive agents such as wind and sun (Zybaczynski, 2013). Vitruvius, in the "Ten Books on Architecture", treated color as closely related to the finishing of facades and to decoration, representing "Venustas", "one of the appropriate principles that should govern the construction of all types of buildings" (Zybaczynski, 2013). The Renaissance and the Baroque eliminated color from the outside of architecture and focused on the inside.

Urbanism. Arhitectură. Construcţii • Vol. 6 • Nr. 2 • 2015

In the eighteenth and nineteenth centuries the colors of local building materials constituted the dominant colors of a town and contributed to its identity. In the industrial cities of the nineteenth century this color identity was covered with the dust and smoke of factories; fortunately, the number of such industrial cities was still limited. The beginning of the twentieth century can be considered a chromatic rupture, color being removed from both the outside and the inside of buildings. White replaced the chromatic and became the color associated with the modern movement in architecture; the imprint of modernism can still be seen in cities where this style was dominant. With postmodernism, color assumed new roles: exaggerated polychromies emphasized both volumes and ornament, with colors varying from intense to medium range, rarely pale. Color became a significant part of architectural composition and a space-determining tool, exploited both for its physical qualities and for its ability to draw attention and to highlight. The last 40-50 years have brought to the forefront a holistic approach to color, namely "the harmonizing of architecture with the surrounding landscape and with the inhabitants" (Lancaster, 1996). A building should not be seen in isolation, as a freestanding architectural object in a city, village or natural setting, but in context, as part of the environment to which it belongs and which it influences through its volume, its architectural language and its chromatics. As mentioned above, the industrial production of cheap chemical paints led to the expanded use of color in cities.
Awareness of the importance of color in architecture prompted a scientific approach to the subject among researchers, architects and colorists. A number of them, including Lenclos, Lancaster, Yoshida and Brino, began to study the color of the built environment, rural and urban, from which grew an approach to architecture known as "environmental architecture" (Caivano, 2006). The issue of color in architecture has been approached in many ways and on many levels: the relationship between color and humans in terms of psycho-physiological influences (Meerwein et al., 2007); the relationship between perceived color and inherent color (Fridell Anter, 2008); the interaction between architectural form and color (Caivano, 2006; Vosbeck, 2009); the relationships between architecture, color and the city (Minah, 2008); the relationship between color and geographical location (Lenclos, 2004); and even the establishment of methodologies for creating color harmony at the level of principle (Zybaczynski, 2013) and at the level of architecture and the city (Kobayashi, 1998), each study focusing on particular aspects of the relationship between color and architecture. Among these researchers, Giovanni Brino began a study of the relationship between color and the identity of the city (Brino, 2009). He found that the city of Turin in Italy had once had a color plan to harmonize the colors of the buildings on its more important streets and squares. This research, alongside Lenclos' work on the relationship between the color of a city and its geographical location, was followed by contemporary color plans for different cities of the world, proposed by Brino, Tom Porter and Byron Mikellides.

3.2. The first recorded color plan of cities

The first city color plan in the world was created in the nineteenth century by the council of builders in Turin (Linton, 1999).
The city had been rebuilt by Victor Amadeus II to display its glory, and its color plan was based on the idea that the main streets and squares of a city should follow a homogeneous color system tied to a unified architecture. The main Turin palette of hues imitated the noble building materials: marble, granite, terracotta and brick. (The cost of construction materials in any region depends on the local availability of their constituents or of mines; Turin possibly had limited access to clay soils suitable for making tile and brick, a scarcity hard to imagine in the Middle East.) Turin was an impoverished regional capital attempting to acquire status, using poor stucco facades dressed with mineral pigments as a simulacrum of a more expensive range of building materials (Brino, 2007). When the French researcher Lenclos began studying city colors, he was unaware of this color plan, because for a century an authoritarian bureaucracy had eroded and obliterated the sophisticated polychrome city concept under a monotone layer of Molasses Yellow, a color whose pigment was mined close to Turin, transported by train, and therefore naturally cheap. The yellow gradually spread through all the cities of the region until it became known as "Turin yellow". Fortunately, because the color palette had been precisely recorded in the nineteenth century in the documentation center of the Turin city council, the plan was rediscovered in the late twentieth century and revived by Giovanni Brino. Reviving and implementing it was difficult because the colors had been recorded only by their names, with no sample attached to each name. Through careful research in various texts, Brino managed to find an equivalent for each color and to execute the plan in Turin (Porter, 1997).

3.3. World experiences in recording color plans

The scientific method of recording city color plans was founded by Jean-Philippe Lenclos.
Many designers, color consultants and architects have used this method, including Byron Mikellides, who proposed a color plan for Savannah, Georgia, United States (Mikellides, 2005), and Tom Porter, who led the preparation of a color plan for Oslo with a group of MA students at Oslo University (Porter, 1997). More recently, the consulting engineers Roger Evans have been managing a project for New Hall, a new development zone in Essex, England, which will provide consistent palettes for the developing area, covering all the constituents of the facades and even the color and material of the ground surfaces (Linton, 1999). Unfortunately, despite extensive searching, I could not find examples of the stage-by-stage process of preparing a color plan in practice, or even of the final form of one (it would certainly be instructive to see a project whose contractor's international reputation attests to the reliability of its plans). Lenclos first wrote 'Colors of France', on the differences between the colors of the regions of France, in 1982. Then, using the codified method that resulted from his studies, he published 'Colors of Europe' in 1995. His research on the colors of cities worldwide led to his book 'Colors of the World: A Geography of Color', published in French in 1999 and in English in 2004.

In trying to record the color plan of Lahijan within the framework of Lenclos' research, however, I have made some changes of detail: the use of digital photography in place of physical sampling from building materials, facade surfaces, and the paint of windows and doors. The method of using digital photography and evaluating the photographs in Photoshop is explained below; first, however, how Lenclos arrived at his framework, and its various stages, is described.

4. Methodology of Lenclos

Lenclos' purpose is to identify the prominent features and details of architectural color in a selected area, both in general and in detail.
First, he chooses single houses or groups of houses as representative of the architectural and color qualities of their environment; the selection depends on knowing the dominant colors of the city under study. After buildings embodying the colors that carry the identity of the city have been chosen, their colors are analyzed in three stages. The first stage relies on objective evidence: he takes samples of materials directly from the building and its site. A small flaked layer of render (flaked forcibly with tools if necessary!), a chip of window paint, a stone or some soil from the site are collected as a series that together contains the colors of the area. This sampling of floors, walls, roofs, doors, shutters and all other details of the built environment is treated as the general combination of its colors. Lenclos usually mounts the samples on a dark board to display the colors of the built environment (Fig. 1). When a physical sample cannot be obtained, its color is recorded on a hand-painted card. In the second stage the collected data are synthesized. The chromatic information obtained from a site is assembled in the studio for a long and meticulous process of synthesis: all the collected samples are translated into gouache-painted color cards that faithfully reproduce the original colors, a kind of interpretation of colored materials into colored cards. This raises a problem, however: physical samples of materials are rarely of a single hue. A piece of brick or stone, or a coating that has weathered, shows many shades that can hardly be expressed as one color. In these circumstances one must either evoke the color sense of the material by combining different colors, or reproduce the dominant color of the sample as it appears from a distance (or as it appears through half-closed eyes, in a dimmed view).
Once a collection of colored cards is complete, they are classified into a series of panels that produce a color synthesis of both the site and its architectural elements. The third stage is systematic color conceptualization. Here the survey selection, as synthesized in the workshop, is formed into a collection of applied colors appropriate to the particular site: the sampled colors of the facades are classified into a palette together with the colors of the doors and windows (or shutters). Color palettes can be presented in various forms. For instance, to show the relations among the colors of a surface, one can tabulate equal areas of the colors present in a palette, arranged systematically by lightness, saturation and tonality; each such table can show the colors of walls, doors or windows.

Fig. 1. Samples of colors of the built environment (Photo: Lenclos, 2004)

Another version of the synthesis can focus on the dimensional relationships between colors: the colors picked from each facade are gathered into small squares that form an abstract of the building's facade. In this case three color charts are prepared: (1) the general colors of the facades and roofs of a settlement; (2) a chart of the predominant colors, and their proportional incidence, of doors, windows and shutters; and (3) a chart that superimposes the elements of the selective palette onto the general palette, combining all the facade colors in equal adjacent squares. This third chart can be regarded as the final synthesis of the color samples of a site, providing historical and geographical notation of that site at a specific time and place.
By applying such palettes to existing buildings or to proposed projects, both consistency and diversity can be achieved (Fig. 2) (Lenclos, 2004). Lenclos' method of studying city color plans is regarded as the foundation for any research on the topic, and for that reason I took it as the framework for research on Iranian cities such as Lahijan. As mentioned above, however, the method has been modified slightly to take advantage of modern technology, as explained below.

5. Author's method of recording the colors of Iranian cities, a case study – Lahijan

5.1. Using digital photography and adding the factor of quantity

When the writer decided to use Lenclos' method (devised in the 1970s) as the framework for a survey of colors in Iran, she faced a serious problem: cutting samples from walls, gables or windows was not only very difficult but illegal. An unknown researcher could not, therefore, collect samples in the manner of the famous Lenclos. Instead, the writer used digital photography to record the colors of the samples. Digital photography is easy to use and has many advantages: digital photographs can be edited in Photoshop, and Photoshop's tools can be put to work for the research itself.

Fig. 2. Selected photo of Lahijan (Photo: Author)

The facilities of digital photography also prompted the writer to add the quantity of each color as a subject of the research, a matter on which Lenclos' approach gave no clear guidance. To determine the quantity of each color, the main color areas must first be identified in each photograph. By "main color" the following is meant: each photograph contains many colors, many of which are merely values of a single color, produced by small changes in light (such as falling in sunlight or in shadow).
For example, a wall painted in a single color appears in its digital photograph as three or four close but distinct colors in different places, because of changes in light and the sensitivity of the software; likewise, green foliage has light and dark areas, and different plants show slight differences of color. In defining main colors, the wall and the green of the leaves of a given plant are each assigned a single color, despite their different tones. On this basis, the facade colors of Lahijan fall into three main groups: white, cream and gray-white. To reduce the number of tones, the number of colors in the photograph should be cut to roughly one third to one fifth of the real colors. A computer program is therefore needed that identifies close colors and treats them as one; such a program can identify the main color areas more easily and in less time. 'Color quantization is the process of providing a palette consisting of a reduced number of the colors of a photo... to find a quantized photo which is close to the original photo (though it has fewer colors)' (Sudha et al., 2003). After the number of colors in the photograph has been reduced, the area of each color must be calculated. This can be done in the widely available Photoshop CS5. In this approach, a network of perpendicular lines is superimposed on a selected photograph that serves as a suitable sample of the colors of the urban environment under study. On this meshed photograph the areas of the main colors are defined; then, using the picker tool, the squares belonging to each color group are designated and counted. In preparing the color plan of Lahijan, after the selection of a suitable photograph, the main colors were reduced from about 80 to 17; this was achieved by reducing the picture format to 75%.
A mesh of perpendicular intersecting lines forming equal squares was then superimposed on the selected photograph, and with the picker tool the squares belonging to each of the 17 colors were identified and counted. The quantity of each color is shown in a bar chart, which not only gives the percentage of each color but also makes it easy to compare their areas. The resulting color profile expresses the color plan of the environment shown in the selected photograph. This approach was applied in surveying and recording the colors of several Iranian cities, chosen for their locations in the different geographical climates of Iran: temperate and humid, dry and warm, and cold and mountainous. This article describes the survey and recording of the color plan of Lahijan.

5.2. The color plan of Lahijan

Lahijan is a city of Guilan province, with rich vegetation and plentiful rain; the rainfall erodes surfaces quite rapidly and changes their colors. To identify the prominent color features and details of the architecture, a group of buildings representative of the architectural and color qualities of their environment was chosen. To establish the dominant colors of the environment, photographs taken in different seasons were used as the criterion. In these photographs the dominant color of the facades is white, owing to their cladding material, a composite of white cement and rock powder. The white is not pure but mixed with hues of beige or gray, the result of the composition of the cladding or the age of the buildings. The roofing materials are usually red terracotta or the gray sheets of gable roofs; because the terracotta is of different ages, it appears in various tones of red.
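The quantize-overlay-and-count procedure described above (reduce the photograph to a small palette, superimpose a square grid, tally the squares belonging to each color, then chart the percentages) can also be automated. The sketch below is a minimal illustration of that idea and not the author's actual Photoshop workflow; it uses the Pillow imaging library, and the function name, palette size and grid cell size are the writer's assumptions, not values from the study.

```python
from collections import Counter
from PIL import Image  # Pillow imaging library


def color_plan(img, n_colors=17, cell=20):
    """Quantize a photo to n_colors and tally grid squares per dominant color.

    img may be a file path or an already-opened PIL Image. Returns a dict
    mapping each palette index to the percentage of grid squares it dominates.
    """
    if isinstance(img, str):
        img = Image.open(img)
    img = img.convert("RGB")
    # Reduce near-continuous tones to a small palette (median-cut quantization),
    # the step done in the study by merging close colors into "main colors".
    quant = img.quantize(colors=n_colors)
    px = quant.load()  # palette indices, since quantize() returns a "P" image
    counts = Counter()
    for top in range(0, quant.height, cell):
        for left in range(0, quant.width, cell):
            # Collect the palette indices inside this grid square...
            square = [px[x, y]
                      for y in range(top, min(top + cell, quant.height))
                      for x in range(left, min(left + cell, quant.width))]
            # ...and credit the square to its dominant color.
            counts[Counter(square).most_common(1)[0][0]] += 1
    total = sum(counts.values())
    return {idx: 100.0 * n / total for idx, n in counts.items()}
```

Plotting the returned percentages as a bar chart would reproduce the kind of summary the study presents, with the grid-square counting done programmatically rather than with the picker tool.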
The gray of the gable roofs shows no prominent variation of tone. Other roofing materials are slates painted according to the owner's taste, blue being the favorite. In choosing a sample photograph for studying the color palette of Lahijan, it should be noted that Lahijan, Rasht and Anzali are the three dense cities of Guilan province; smaller cities are less dense, with an open urban fabric suited to the temperate and humid climate. The density of the large cities, and the open spaces, vegetation, lakes and pools in the fabric of the small ones, were considered in choosing the photograph; in small cities the soft landscape must certainly appear in the photograph. In Lahijan, in any case, no photograph covers the built environment alone, because the vegetation has so strong a presence that such a photograph would not be realistic. Clearly, separating the main structures from the natural environment yields the color palette of the built environment, while including the soft landscape shows the combination of built and natural environment. In the photograph selected for this study, which focuses on about 25 buildings, the sky and the distant perspective are omitted (Fig. 3). After selection of the sample, a perpendicular grid of 5 mm squares was laid over the photograph and, as described above, the main color areas were identified; equivalent tones of a single color, or colors very close to one another, were treated as one color. After the reduction of about 80 colors to 17, the squares covering the various tones of green, from dark to pale, account for 37% of the bar chart. Small amounts of red and brown also appear in the city palette. As the bar chart shows (Fig. 4), the prominent colors of Lahijan's buildings are white, beige and pale gray, with gray for the shingle roofs; the next most prominent color is green.
Green is the color of the doors and window frames and also of the city's vegetation. Here vegetation does not mean the soft landscape of Sheytan Kooh (the hill bounding the city), which was deliberately omitted from the photograph; rather, the green of trees and plants is seen everywhere in the city and asserts itself as a prominent color. Red and brown, more than being the colors of doors and window frames, are the colors of terracotta and other roofing materials. These results show that despite Lahijan's development in recent years, the colors of the traditional palette are still dominant. This may be because traditional materials are still in common use in Lahijan, and new materials are not yet used widely enough to disrupt the color identity of the city. From these findings, criteria can be derived to prevent the disharmonious use of colors that would lead to visual chaos in the future. These criteria can readily be translated into a color plan to be proposed to the authorities of Lahijan, a plan that could both preserve the city's color palette and propose diversity for its different districts, especially the development areas.

6. Conclusion

Although the only city color plan recorded before the 20th century is that of Turin, the color of each city, both then and now, must be recognized as part of its identity. The differences between the colors of cities arose from the limitations their residents faced in local materials and pigments; the result was a marked difference between the colors of cities in different areas and a consistency within each city. Nowadays the advent of chemical paints and their variety has transformed the colors of cities, removing the limitations of the conventional inorganic colors. Proceeding without any framework, these changes threaten to overwhelm the color identity of cities and turn it into chaos. Visual confusion is the result of easy and cheap access to a vast number of colors, and it calls for a consistent plan for each city.
The production and implementation of such color plans can be used not only in the revival of depressed inner cores but also to help characterize their style; in this way a variety of colors can be used systematically while the color identity of cities is protected. Lenclos was the first to pay attention to the color of cities; in his innovative method he analyzed the existing colors of an urban environment by gathering physical samples of them. By adjusting Lenclos' method to modern technology, its efficiency can be raised: instead of cutting pieces from facades, flaking paint from doors and windows, gathering soils and local stones, and hand-painting gouache color cards to build a palette, the colors can be recorded with digital photography. Digital photography also makes it possible to determine the frequency of each color, so that preparing the color palette of a city, or of its development areas, becomes easier and more comprehensive.

Fig. 3. Selected photo of Lahijan (Photo: Author)

Fig. 4. Bar chart of colors of the environment (By: Author)

In this research the color identity of Lahijan was studied as a case. The results show that the colors most used in Lahijan are white, beige and pale gray for facades; gray for gable roofs; green for doors, window frames and the vegetation among the buildings; and red and brown for the terracotta of old roofs or the colored materials of new ones. This research should be repeated for other Iranian cities in climates distinctly different from Lahijan's; by finding the common points and the differences, the factors that influence the colors of cities can then be discovered.
Knowing the factors that shape the specific color combination of each city not only helps to establish the real identity of that city but also enables the researcher to predict future developments from past changes and to suggest an appropriate palette in its color plan. Determining these factors is especially significant in preparing color plans for development areas or new cities, because in these urban environments there is no past to serve as a criterion. Identifying the factors requires further research. It must be borne in mind, however, that the influential factors do not carry equal weight in shaping the color of a city: in each city the factors combine in a specific proportion to create its palette. In comparing the colors of different cities and the degree of influence of these factors, this must be considered and examined case by case.

REFERENCES

Brino G. (2009), Italian city color plans, in: Porter T., Mikellides B. (Editors), Color for architecture, Taylor & Francis, New York, pp. 30-35.
Caivano J. L. (2006), The research on environmental design: Brief history, current development, and possible future, in: Proceedings of the 10th Congress of the International Color Association, pp. 705-713.
Fridell Anter K. (2008), Forming spaces with color and light: Trends in architectural practice and Swedish color research, Journal of Color Design & Creativity 2(2):1-10.
Gou A. (2013), Method of urban color plan based on spatial configuration, Color Research and Application 38(1):65-72.
Gou A., Wang J. (2008), Research on the location characters of urban color plan in China, Color Research and Application 33(1):68-76.
Jacobs A. (1993), Great streets, MIT Press, Cambridge, MA.
Lenclos J. P., Lenclos D. (2004), Colors of the world, W. W. Norton & Company Inc., New York, pp. 40-45.
Lenclos J. P. (2005), The geography of color, in: Proceedings of the 10th Congress of the International Color Association, Granada, Spain, pp. 307-315.
Lancaster M. (1996), Colorscape, Academy Editions, London, pp. 84-87.
Linton H. (1999), Color in architecture, McGraw Hill, Hong Kong, pp. 151-157.
Meerwein G., Rodeck B., Mahnke F. (2007), Color communication in architectural space, Birkhauser Verlag AG, Basel, Boston, Berlin, pp. 19-23.
Mehta V. (2007), Lively streets. Determining environmental characteristics to support social behavior, Journal of Planning Education and Research 27(2):165-187.
Minah G. (2008), Color as idea: The conceptual basis for using color in architecture and urban design, Journal of Color Design & Creativity 3(2):1-9.
Mikellides B. (2009), Color, arousal, hue-heat and time estimation, in: Porter T., Mikellides B. (Editors), Color for architecture, Taylor & Francis, New York, pp. 128-134.
Opincariu D. (2011), Structure and building facades, the new concept of ornament, Acta Technica Napocensis 54(2):193-203.
Porter T. (1997), Environmental color mapping, Journal of Urban Design International 2(1):23-31.
Porter T. (2009), Perceptual color, inherited color, in: Proceedings of the 11th Congress of the International Color Association, Sydney, Australia, pp. 605-612.
Tomic D. V., Maric I. (2011), Color in the city: Principles of nature-climate characteristics, Facta Universitatis Architecture and Civil Engineering 9(2):315-323.
Sudha N., Srikanthan T., Mailachalam B. (2003), A VLSI architecture for 3-D self-organizing map based color quantization and its FPGA implementation, Journal of Systems Architecture 3(2):51-57.
Swirnoff L. (2005), Light, locale and the color of cities, in: Porter T., Mikellides B. (Editors), Color for architecture, Taylor & Francis, New York, pp. 26-29.
Vosbeck R. (2009), Color in architecture, Journal of Color Research and Application 9(2):38-42.
Zybaczynski V. (2013), The color of architecture, past and present, Urbanism. Arhitectură.
Construcţii 4(4):93-96.

Received: 2 March 2014 • Revised: 11 August 2014 • Accepted: 5 November 2014
Article distributed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License (CC BY-NC-ND)

Submission to Prometheus

Author's post-print of: Meyer, E.T. (2007). Technological change and the form of science research teams: dealing with the digitals. Prometheus 25(4), 345-361. Final published formatting available at journal: http://www.tandfonline.com/doi/abs/10.1080/08109020701689193

Technological change and the form of science research teams: dealing with the digitals

Author: Eric T. Meyer, Ph.D., Research Fellow, Oxford Internet Institute, University of Oxford, 1 St Giles, Oxford OX1 3JS, United Kingdom
e-mail: eric.meyer@oii.ox.ac.uk
Phone: +44 (0) 1865 287218

Abstract: During the late 1990s, photography moved from being a primarily analog medium to being an almost entirely digital medium. The development of digital cameras and of software for working with photographs has led to the wholesale computerization of photography in many different domains. This paper reports the findings of a study of the social and organizational changes experienced by marine mammal scientists who have moved from film-based to digital photography. This technical change might be viewed as a simple substitution of a digital camera for an analog one, hardly of significance to how scientists do what they do. However, a perspective anchored in social informatics leads to the expectation that such incremental technical changes can have significant outcomes, changing not only how scientists work but also the outcomes of their research. The present study finds that key consequences of this change concern the composition of the personnel working on the scientific research teams for marine biology projects and the ways in which these scientists allocate their time.
Keywords: social informatics, digital photography, technological change, marine biology

Introduction and background

Digital photography is an innovation in information technology that has been widely and rapidly adopted across a variety of domains since the late 1990s. To understand the role of digital photography as a technology used in scientific work, this study examined marine mammal researchers who use photo-identification as a tool for gathering, organizing and analyzing data about whales, dolphins, otters, seals, manatees, and other marine animals. Many marine mammal researchers have recently switched from film to digital photography. This seemingly minor change has contributed to a number of fundamental alterations in the ways they do their scientific work. The study examined digital photography as an entry point for understanding the role of new technologies entering scientific practice and regular use. Orlikowski calls this perspective the "practice lens" for studying technology:

Rather than trying to understand why and how a given technology is more or less likely to be appropriated in various circumstances, a practice lens focuses on knowledgeable human action and how its recurrent engagement with a given technology constitutes and reconstitutes particular emergent structures of using the technology (technologies-in-practice). 1

The project was conceived and designed as a social informatics research study. Social informatics in North America is a relatively new approach to the study of socio-technical systems, 2 the term first entering regular use 3 at an NSF workshop at Indiana University in 1997, where similarly minded researchers gathered to examine "the integration of computerization and networked information into social and organizational life, and the roles of information and communication technology (ICT) in social and organizational change".
4 Since that time, there have been several notable efforts aimed at explicating some of the central propositions in social informatics. 5 Three key themes are central to much of SI research: embeddedness, configuration, and duality. 6 Embeddedness is the notion that technologies are not isolated from their social and institutional contexts. Configuration captures the degree to which the uses of an ICT are not completely determined by its design; there is flexibility in how it is used, which in turn influences the consequences of using the ICT. Finally, duality means that "ICTs have both enabling and constraining effects". 7 All three of these key themes emerge in the present study. However, this article is focused primarily on the evidence of the embedded nature of digital photography: how the possibilities and limitations of photography influence interactions with wild animals, how researchers spend their time after a day collecting photographs, how they spend their time back at the lab, how work is divided, and what scientific results are available. Much of social informatics research is anchored in a recognition that technology does not independently cause changes in the social order; instead, it often has measurable consequences, both positive and negative, both intended and unintended, in social settings where it is put to use in ways shaped by the choices of users and other key actors.

Marine mammal photo-identification

There are an estimated two thousand practicing marine biologists who carry out research on a variety of species of whales, dolphins, seals, manatees, and sea otters. Their methods vary and include acoustics, genetics, and the technique that sits at the focal point of this research, photo-identification. Briefly, the general photo-identification process is that field biologists on small boats approach an animal or group of animals and take photographs that can be used for identifying individual animals.
The most important feature for identification varies by species. Humpback whales are identified by the coloration and patterns of marks on their flukes (tails), while blue whales are identified by the blotchy patterns of coloration on their backs. Bottlenose dolphins are identified by the nicks and notches in their dorsal fins, while manatees are identified by scars on their backs, received primarily from boat hits. Once these photographs are made in the field, they are brought back to the lab and undergo an extensive process of documentation, organization, and categorization. One of the most time-consuming aspects of the entire process is 'matching', the process of starting with a photograph of an unknown animal collected in the field and comparing it to all previously identified known animals to look for a match, thus allowing a positive identification of the new animal. These identifications are then used for a wide variety of scientific purposes.

Marine mammal science is a highly visible field due to the level of public interest in marine mammals. While the scientific publications are not necessarily consumed by the public, the research process and the activities of the scientists are frequently reported in the media and through environmental organizations and their publications. The public nature of this field has resulted both in increased funding and in increased scrutiny and regulation. 8 Both of these have in turn influenced the development of non-lethal and minimally invasive methods such as photo-identification. Because photo-identification meets the criteria both of being non-injurious to the animals and of being an excellent scientific tool for gathering the data needed to answer scientific questions, photography has become a mainstay in the field. For example, many of the participants in this study report spending between half and nearly all of their time working with photography in one way or another.
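At its core, the matching step described above is an exhaustive comparison of an unknown photograph against a catalogue of known individuals. A minimal sketch of that logic follows; the feature vectors, similarity function, and threshold are purely hypothetical illustrations (in practice, matching in these projects was done by eye or with specialized photo-ID software), but they convey why the task grows with catalogue size:

```python
# Illustrative sketch of catalogue matching (hypothetical scoring, not any
# project's actual method): compare an unknown animal's feature vector
# against every known individual and report the best match above a cutoff.

def best_match(unknown, catalogue, threshold=0.8):
    """unknown: feature vector; catalogue: dict of animal id -> feature vector."""
    def similarity(a, b):
        # Toy similarity: 1 / (1 + Euclidean distance). A real system would
        # compare fin notches, fluke patterns, or scar positions.
        dist = sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
        return 1.0 / (1.0 + dist)

    scored = [(animal_id, similarity(unknown, feats))
              for animal_id, feats in catalogue.items()]
    animal_id, score = max(scored, key=lambda pair: pair[1])
    # Below the cutoff, treat the animal as new rather than forcing a match.
    return animal_id if score >= threshold else None

catalogue = {"HW-001": [0.2, 0.9, 0.4], "HW-002": [0.7, 0.1, 0.5]}
print(best_match([0.21, 0.88, 0.41], catalogue))  # HW-001
```

Because every new photograph is compared against every known animal, the per-photograph cost of matching grows linearly with the catalogue, one reason the step is so time-consuming.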
As a result, this scientific field offers an excellent case study for understanding how digital photography might play a role in this and other research domains which rely heavily on photographic data.

Digital photography

Even though there has been extensive writing and research about the nature of photography, the artistic elements of photography and the social roles of photography, there have been relatively few attempts to understand photography from a socio-technical perspective. Most studies treat cameras as mere artefacts, put into use by their human users. This study, on the other hand, examines how a seemingly innocuous three-pound hunk of metal can have widespread consequences throughout a domain or a community of practice.

Research Questions

The main study from which these data are drawn had nine research questions. The data in this article were collected to answer the following two research questions:

1. Who are the relevant actors within the systems supporting photo-identification research?
2. Who becomes involved in the photo-id process for the first time when scientists adopt digital photography; which formerly involved actors and technologies are excluded; and how are peripheral actors affected?

There are at least three possible sets of expectations. From a strictly technical perspective, the substitution of one piece of equipment would be expected to have mainly incremental effects. The new equipment is merely a three-pound device which has digital sensors instead of film. One might expect, then, that any changes in scientific activity would be minor elaborations of previous patterns, such as improvements in efficiency. A second, more technologically utopian point of view would suggest that this innovation would free the scientists from the necessity of dealing with film and the accompanying issues of processing and storage, which in turn could free them up to do more science.
The efficiencies of digital photography over film photography would, from this perspective, allow scientists to reduce staff size, take more photographs, incur fewer expenses, and generally do 'better' science. The third scenario is the social informatics point of view, in which even incremental technological changes are understood frequently to result in unintended consequences. These consequences can be either positive or negative (as seen from the point of view of the actors), but are largely unexpected by the actors in the system. From this perspective, new technologies often shift patterns of work and can have implications throughout a domain. In this sense, technological change is likely to reconfigure how scientists do their work, as well as the outcomes of their work. 9 A classic example of this perspective is the work on the productivity paradox, which addressed the fact that although considerable sums were spent implementing technology programs that were expected to increase productivity, actual measurable productivity had stagnated throughout the 1970s and 1980s. 10 By the late 1990s, however, new research showed that productivity related to information technology investment was increasing, and also identified many of the other impacts of new information technologies beyond productivity alone. 11 This social informatics point of view informs the present study, leading to the hypothesis that the seemingly simple change in camera technology will have a number of unintended consequences throughout the domain of marine mammal science. Some changes may be viewed positively by the scientists, others may be viewed negatively, but the changes will extend into unexpected areas of their scientific practice.

Research Methods

Studying digital photography use by scientists could be approached in a number of possible ways, but this study employed a multi-site case study approach.
Marine mammal researchers operate within a bounded system of scientists, which makes them amenable to the case study approach. 12 Furthermore, the reason for studying this multi-site case 13 was to develop an instrumental case study that helps to pursue an external interest of refining theory, as opposed to developing an intrinsic case study designed primarily to understand the phenomenology of a particular case. 14 The theory that the main study was seeking to refine was Kling's STIN strategy for social informatics research. 15 The refinements made to this strategy are discussed elsewhere. 16

Sample

The research sites selected for this study were identified through a purposive method: examining recent research articles in marine mammalogy using photo-identification methods, talking with every available scientist presenting a paper or poster using photo-identification methods at the 2005 Society of Marine Mammalogy biennial meeting, and asking them to recommend others who might participate in this project. This sample was expanded through a 'snowball' sampling procedure. While this sort of purposive sample has limitations, in a small field such as the subject of this study it can be the most efficient way to gain access to "information rich" participants in the domain. 17 Marine mammal researchers are located in a variety of public, private and educational institutions around the world, and there is no specific organization 18 or special interest group dedicated to photo-identification methods. Thus, there was no practical way to establish a population from which to draw interview subjects randomly. In addition, since the author travelled to the laboratory locations, some practical limitations of logistics had to be taken into account. However, the researchers and research sites chosen were often referred to, unprompted, during interviews at other, unrelated sites, indicating that they are part of a shared network of scientists.
Many have been involved in this work for a considerable period of time. 19 This sample represents a significant component of ordinary scientific practice in the marine mammal photo-identification subfield.

Field sites

The sites chosen for this study demonstrate the variety in the types of organizations involved in marine research. Non-profit organizations ranged from very small institutes with just a few employees in an out-of-the-way location to large campuses with multiple buildings and hundreds of employees. Some of the facilities were closed to the public, while others had public outreach facilities such as museums, aquariums, and boat trips. The colleges and universities were on campuses familiar to anyone in higher education, and the facilities housing the scientists were typical university office and laboratory spaces. The government facilities tended to be large campuses with multiple large buildings housing hundreds of employees, but the animal research programs share these campuses with many other marine biology and oceanographic research programs. The study included non-profit organizations (three small, one mid-sized, and three large non-profits), higher education institutions (one each of a liberal arts college, a teaching university, and a research university), and government agencies (three total).

While visiting just one site could have produced a non-representative case study, by comparing and contrasting what occurs at the different research centres this study is able to reach some general conclusions about the field of marine mammal photo-id research. This is preferable to being able only to consider what happens at a single, possibly idiosyncratic, location. The multi-site method of creating a case study from a number of sub-cases provides a higher degree of certainty that this research is representative of this sub-field. During the site visits, participants were generally interviewed in their offices or laboratories.
In most cases, they were near their computers and could demonstrate examples of their work on their computer screens. They were also asked to show how they did their work, and at many sites it was possible to observe various stages of the photo-ID process, including data collection on the water and photo matching from both paper-based and computer-based catalogues. For this research project, the site visits were relatively brief, ranging from 1-3 days each, depending on the number of people being interviewed. While short visits of this nature are not ideal compared to extended interactions, the relatively large number of interviews done during multiple site visits mitigates some of the weaknesses of this data collection strategy.

The specific data collection methods included semi-structured interviews (41 interviews total, lasting a total of 54 hours and yielding 1132 pages of transcripts), analysis of photographs (405 images), and analysis of documents such as manuals, memos, reports, articles, newsletters, and websites (115 documents containing 1098 total pages). As Yin points out, "these and other types of documents are useful even though they are not always accurate and may not be lacking in bias …the most important use of documents is to corroborate and augment evidence from other sources". 20 In other words, documentation helps the analyst triangulate accounts of events using multiple sources of evidence.

Another type of evidence was collected through direct observation of scientists in their work settings. During a field visit, the case study researcher recorded observations of environmental conditions and social behaviours. As Yin points out, "if a case study is about a new technology, for instance, observations of the technology at work are invaluable aids for understanding the actual uses of the technology or potential problems being encountered".
21 While observing may seem like a natural and somewhat passive activity, a researcher must plan carefully to avoid being either marginalized or influencing the observation setting too greatly. Creswell 22 identifies a series of steps for observing social environments. These include identifying gatekeepers and key informants for access into the observational setting; recording notes in the field of both a descriptive and a reflective nature; and recording details about individual behaviours, the physical setting, events and activities, as well as one's own reactions to these details. 23 These procedures were followed by this study and documented in the case study database.

Using multiple methods with careful attention to the compilation of a case study database addresses some of the shortcomings that occur too frequently in qualitative research. The use of triangulation, or multiple methods, reflects an attempt to secure an in-depth understanding of the phenomenon in question, since objective reality may be difficult or impossible to capture. 24 Case studies generally use triangulation as a way to clarify meaning, both through verifying that an observation is replicable and, conversely, through identifying ways the phenomenon is seen differently by different actors. 25

Findings and Discussion

Some of the relevant actors in the marine mammal photo-identification system are obvious, others less so, and in this paper we will discuss only those most relevant to demonstrating the nature of the changes in the work processes that have developed to expedite photo-identification of marine mammals. Among the more obvious participants are the investigators, researchers, technicians, field personnel and support staff involved in gathering and using the scientific data to identify and track various marine mammals. Investigators are the primary leaders of the scientific projects, exercising autonomy in decision-making and acting as leaders in generating research and funding.
Investigators are generally senior scientists who oversee the operations of their studies, although they may delegate specific day-to-day decisions to trusted others. The involvement of an investigator in hands-on research depends on the study. The researchers, on the other hand, are less senior scientists and are nearly always heavily involved in the day-to-day science, but are less responsible for organizational issues than the investigators. Technicians are often less well-credentialed (often holding undergraduate degrees) and are in many cases younger participants in the scientific projects. For instance, the average age for technicians in this study was 28, compared with 34 for researchers and 56 for investigators. 26 Technicians are generally much less autonomous with regard to their work assignments.

While the fact that a research scientist is an actor in a system of scientific knowledge production is wholly unsurprising, the investigators are actually quite surprising in a number of ways. First, a naïve assumption held prior to doing this research was that most of the leaders of the scientific projects studied would be Ph.D.-holding academics. In fact, however, this was true only of the participants at universities and government agencies. At projects centred in colleges or universities (n=3), four Ph.D.-level scientists were part of the study, representing all of the principal investigators (PIs) and one-third of all participants at those sites. Likewise, at government agencies (n=3), Ph.D. scientists led four of the five projects, and made up one-half of the participants at the sites. At non-profit organizations, however, Ph.D. scientists led the projects at only two of the seven sites studied, representing only one-tenth of the participants. Indeed, several of the scientists had relatively little formal academic training, holding a bachelor's degree or a lesser credential.
The path of entry into many of the non-profit organizations is often somewhat informal, through such opportunities as apprenticeships and volunteer opportunities. While most of the technicians who participated in this research held Bachelor's degrees in a biology-related subject, getting hired to work with marine mammals in many cases involved a 'foot in the door' approach rather than an academic route. The following three examples illustrate this point: 27

Anna Hawes: I started here [at Pacific Whale] as an intern. I interned for two quarters, volunteered for two quarters, and now I'm a paid employee.

Amy Kirkwood: Came to Atlantic Dolphin in 2002, I believe. Started out as a volunteer, and then got hired. Began volunteering and then I was hired as (name)'s admin assistant. And with all my other experience I was eventually able to move over to photo ID.

Kathryn Stamps: I went from my internship right to my part time job with the state just to pay the bills basically, because I volunteered here for some time before they could pay me.

Many of the younger technicians at the non-profit organizations reported similar stories of creatively finding ways to gain leverage into an organization through internships, volunteering, and holding a variety of low-paid, temporary jobs either in or out of science to be able to pay their bills. Most, however, either had plans to get their Master's degree or were actively enrolled in a Master's program, as they learned by experience the truth of Alan Crane's observation above that "you have to have at minimum a Master's degree." Many of the students interviewed at the universities involved in this study had done work in the field before returning to get their Master's or Ph.D.
degrees:

Cynthia Conlin: I traveled a fair bit and then I got a job as a research assistant…studying nesting sea turtles and I did that for about seven months and then got the job in New Zealand basically starting as a volunteer research assistant and through that got to know Gerald [Lemoine, the investigator at her current project].

While solid practical experience was once a ticket into marine mammal science, particularly when it was a younger field, today that experience must, in most cases, be combined with formal academic credentials in order to advance in the field.

Other important actors related to the performance of the ordinary scientific activities in these organizations are the support personnel. Once the photographs are collected in the field, one of the chores upon returning to shore is to download the digital images from memory cards to hard-drive storage, which will later be transferred to computers and/or servers back at the lab. This is obviously a change from the period when the field was film-based, and represents both a new set of activities and a new set of actors, the information technology (IT) support personnel. Whether IT support is just another role played by a researcher or technician or whether there are staff dedicated to this role depends partly on the size of the study. If there are dedicated IT staff members, they are unlikely to be in the field at the time data is downloaded and backed up, but they will have helped set up procedures for dealing with technical aspects of storing and backing up data. Several of the larger projects had very detailed procedures manuals with specifications for each step of this process. Others appointed someone who seemed to have an affinity for data management to oversee the IT work:

John Maze: I studied Marine Science, concentration in Marine Biology… [and now] I basically manage the data collection, train people that are in the field, like the new grad students and research assistants.
I schedule all the field work, who the captains are going to be, who the observers are going to be. Just manage the computers, the space, intern acquisition and placements, just all that jazz.

The next step after collecting the images and associated log data in the field is to download the data onto computers, back everything up, and then start to process the images and record the data into a database. This step is one that has shifted from the film era to the digital era. With film, the images took far longer to be processed, sometimes not until the end of the field season altogether. Digital images are available immediately, which means that in most studies people start organizing them right away, even if they are in field locations distant from their labs.

Dr. Rita Price: So we get back and usually what we do is we have all these images. And what happens with the digital? You take more pictures. There's more data to go through. [laughs]…I'd think at least twice as much… [slight groan] because we don't delete things in the field. We don't do that…What we do is we try to go through…those digital files and we try to pick the best picture of an individual and put them in the best folders for that encounter. So when you come home, there are the five whales that were in the best photo of each one. Usually we don't get that done because we're just really tired.

RP cont.: But the other thing we do, years ago when we first started the digital stuff, I'd get an external hard drive and we'd back everything up on the external hard drive. Plus, we'd make CDs of everything. So, the problem was, you know, before all I had to do was make a label and put it around the film.
Now, I'm processing and batching and renaming and trying to find all these, okay now it's 07CA001 (for roll "1"), 'D' for digital, 00 for Orcinus Orca, 001 for frame 1; and it's just like, "Oh my god." And me being not raised up with computers and stuff…I mean it was a cool tool and everything like that; I loved it. But all of a sudden when we got back to the ship we're processing all this stuff and you're looking at a couple hours worth of work…. And all of a sudden my computer that I had initially was [working] all night long [on] the batch rename or converting to .tiffs or something; I can't remember which one it was. It took forever, you know? I'd wake up in the middle of the night and I'd see the thing flashing that it got stuck, and I'd have to…oh god! [laughing] So anyway. That's kind of the deal with the digitals.

One of the Atlantic Dolphin Research Institute technicians described how ADRI researchers deal with their data at the end of a day on the water:

Emma Hatcher: We come back in immediately because it takes several hours to download those four gigs of images usually. We'll drop the card on the hard drive, we'll go clean the boat, and we'll pull off all the data sheets and start entering the data sheets. Somebody will clean, somebody will enter, and usually it's enough of a timeframe that you get in that late at night in the dark, so we'll leave the card, we'll drop and leave the card then lock everything down. In the morning, we'll compare the card to what was dropped to make sure it's equal. We won't use that card the next day to make sure that it gets backed up, but overnight technically at 11 o'clock everything should be backed up.

Many of the projects also invested in either dedicated or shared information technology (IT) experts. Database design, network server administration, and computer support all require specialized skills.
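The end-of-day routine the interviewees describe has two mechanical steps: verifying that the images copied off the memory card match the originals ("compare the card to what was dropped"), and batch-renaming frames into a structured identifier. A minimal sketch of both follows; the exact naming pattern varies by project, and the `field_id` format below is a hypothetical reconstruction loosely modeled on the "07CA001 / 'D' for digital / species code / frame" scheme quoted above, not any project's actual convention:

```python
# Sketch of the end-of-day workflow: verify the backed-up copies against the
# card by checksum, then batch-rename frames into a structured field ID.
# The ID pattern and species code below are illustrative assumptions.

import hashlib
from pathlib import Path

def file_hash(path):
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

def verify_copy(card_dir, backup_dir):
    """Compare every image on the card against its backed-up copy."""
    card = sorted(Path(card_dir).glob("*.jpg"))
    return all(file_hash(p) == file_hash(Path(backup_dir) / p.name)
               for p in card) and len(card) > 0

def field_id(year, survey, roll, species_code, frame):
    # e.g. year 7, survey "CA", roll 1, hypothetical species code "OO"
    # (Orcinus orca), frame 1 -> "07CA001-D-OO-001"
    return f"{year:02d}{survey}{roll:03d}-D-{species_code}-{frame:03d}"

def batch_rename(backup_dir, year, survey, roll, species_code):
    """Rename verified images in frame order to structured field IDs."""
    for frame, p in enumerate(sorted(Path(backup_dir).glob("*.jpg")), start=1):
        p.rename(p.with_name(field_id(year, survey, roll, species_code, frame) + ".jpg"))

print(field_id(7, "CA", 1, "OO", 1))  # 07CA001-D-OO-001
```

Even this toy version makes Dr. Price's complaint concrete: hashing and renaming thousands of frames is exactly the kind of batch job that could run "all night long" on a mid-2000s field laptop.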
In some cases, computer experts were brought in, while in others existing staff with a talent for working with a program like Microsoft Access had part of their time devoted to designing databases. The following excerpt describes a complex database project that was initiated at the Atlantic Coast Marine Center (ACMC):

Brandon Lindell: I actually have been working with a programmer for quite a few years on and off. We used to have all of our data in dBASE and then I sort of envisioned a much more flexible and user-friendly environment, so we moved into Access. So, he helped us with that and we had a little matching program that folks from [a funding organization] helped set up and he helped sort of finish off. So, I've been working with him for years and he actually helped write some of the technical aspects of the grant, but unfortunately once we got the grant he was no longer available to work on it. He basically didn't have a whole lot of hope that it would work out and, you know he had to pay the bills and stuff. So, anyways, that was a fairly disturbing turn of events. And he had actually even started a little prototype of what we're moving towards. So then we had to put it out to bid and that was pretty difficult because this guy I've been working with, he really understood the database and he'd been working with it in a lot of different ways and understood how we worked because a lot of computer programmers really don't get it. They're used to people who are not as computer-adept and, you know they're used to financial stuff or healthcare systems primarily…. So, I basically sat down with all of the people I worked with and designed every single interface that we would ever need with our data…you know, every screen we would need and every field in our database and it was pretty detailed and I think both exciting to the people who have been on it and a little overwhelming because there was so much information.
ACMC provided the details of this project that was put out to bid; the prototype interface was laid out in Excel spreadsheets, using the cells as a grid to simulate a screen, and included information on required fields and the behaviour of each field. A total of 53 Excel documents were required to specify the design that the biologists needed from the computer programmers. When the project was awarded, "they said, 'Yup, we can have it done in six months.' And two years later, they had it done" (Brandon Lindell). Here is a similar experience at Whale Canada:

Paul Dawkins: We had a specially designed database for us on the Mac. We have all the data for that animal's sightings over the years, all the details, associated animals, any new scars, things of that nature, the depth, the atmospherics…we can get everything from that day. And the photographs are linked to that. We're working on a new thing right now because of all these new pictures coming in; it's a little scary because, you know, before we had contact sheets and all that. Things were more tangible. And so because there's so many different people working on this we have a thing we call [name] that we've created with some software developers. And we're hoping to have that in place this year so that we have a much more standardized way of putting the pictures in, after each day. So it's not just so much of a mess. You know, it can get quite messy on computers if each person's putting their files and stuff. We've had a little…it's been a little slower getting that up and going than we'd hoped because the software guys didn't quite click on what needed to be done. But, I think, we might be getting there.

The Atlantic Dolphin Research Institute (ADRI) also used expert help to design their databases and talked about the length of time required to build a system:

Interviewer: How about the database that they showed me? They tell me you designed that? Did you also do the programming part of it?
Faye Hampton: No, I worked with a programmer here… That has been ongoing. It took a year or more and we still tweak it.

Coming up with a database to track information about the photographs and about the other data collected in the field has been a major issue for most of the projects. Some projects took a different approach from ACMC or ADRI, and chose to have biologists working on the project pick up database skills:

Ellen Batton: Would you like to see our in-house database…? [This] is the first database that I designed and from there things have kind of grown from it… What users see typically when they open the database is ….I think it's about five different tables. First tier of data being effort, so you have one record per day, general information about that day. You know, who, which boat, where, how long were you out? Keeps a log of events throughout the day and a log of conditions throughout the day. So from here, if you enter this, you can have several options of where to go from here and one is…to add sightings to this particular survey, so it will automatically build a link to this day… You can also filter sightings, so you can view sightings from the survey…, the next here is sighting information and each sighting is given a unique number and this is all the… forms…for information: lat, longs, start, end times, species, number of animals, behavior, you know, various permit information. All pretty big.

Regardless of whether projects chose to use in-house staff or to bring in outside experts, a common theme throughout the sites was that the information systems required to track all of their digital information had grown far more complex than originally envisioned, and that maintaining the systems and upgrading their features had become a growing task for all the projects.
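The structure Ellen Batton describes, an "effort" table with one record per survey day and a "sightings" table linked back to that day, is a conventional two-tier relational design. The sketch below reproduces that shape in SQL (via Python's sqlite3, for portability); the field names are illustrative guesses from the interview, not the actual in-house schema, which was built in Microsoft Access:

```python
# Minimal sketch of the kind of in-house survey database described in the
# interview: one "effort" record per survey day, with sightings linked to
# it. Column names are illustrative assumptions, not the real schema.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE effort (
    effort_id   INTEGER PRIMARY KEY,
    survey_date TEXT NOT NULL,        -- one record per day
    boat        TEXT,
    observers   TEXT,
    hours_out   REAL
);
CREATE TABLE sightings (
    sighting_id INTEGER PRIMARY KEY,  -- unique number per sighting
    effort_id   INTEGER NOT NULL REFERENCES effort(effort_id),
    start_time  TEXT,
    lat         REAL,
    lon         REAL,
    species     TEXT,
    n_animals   INTEGER,
    behavior    TEXT
);
""")

conn.execute("INSERT INTO effort VALUES (1, '2006-07-14', 'R/V Example', 'JM, EH', 6.5)")
conn.execute("""INSERT INTO sightings
                VALUES (1, 1, '09:40', 27.33, -82.57, 'Tursiops truncatus', 5, 'feeding')""")

# Filtering sightings by survey day, as the described interface allows:
rows = conn.execute("""SELECT s.species, s.n_animals
                       FROM sightings s JOIN effort e USING (effort_id)
                       WHERE e.survey_date = '2006-07-14'""").fetchall()
print(rows)  # [('Tursiops truncatus', 5)]
```

The one-to-many link from effort day to sightings is what lets the interface "automatically build a link to this day" and filter sightings per survey; each additional data type (photographs, permits, conditions) adds another linked table, which is how these systems grew "far more complex than originally envisioned."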
The expertise to build and maintain these systems, then, is a resource that the marine mammal scientists have had to enlist, either by hiring experts or by developing those skills among their own scientists. These examples illustrate some of the more obvious actors in the system of marine mammal photo-identification. Investigators, researchers and technicians all fall within the overall category of scientists, while field support and IT support personnel may or may not also fall into that category. However, there are some less obvious actors in this socio-technical system.

Volunteer workers play several important roles on many of these scientific projects. As mentioned above, some students and recent graduates volunteer for research organizations with an eye toward eventually being hired into full-time staff positions. Another type of volunteer is also an important actor in this system: the eco-volunteer. Eco-volunteers sign up to participate in research during their vacations and provide both labour and participation fees. While the most well-known institution organizing eco-volunteer trips is Earthwatch, some of the projects recruited volunteers through other organizations or even recruited volunteers directly. Many of the projects encourage the participation of eco-volunteers on their field research trips and gain several benefits from this interaction. First, they have additional hands available on the boat to handle activities such as logging data and serving as spotters for animals in the water.

Cynthia Conlin: So generally, my setup was that I was driving, because it was just a little boat, like a five and a half meter boat. I would be driving the boat, I would have a photographer and I would have maybe two or three volunteers who would basically record data that I would call out. And I found that system worked really well for me to be able to always keep my eye on the dolphins at all times. Because often, you know, you can't do everything.
Depending on the project, the expectations for the eco-volunteers can vary: Paul Dawkins: We really try to minimize whale watching. We really make it clear to people that they’re coming with a research team. And, that we’re not going to cater to tourism. So if they have any doubts about coming out with us, don’t. [Laughs] The volunteer contribution is not limited to labour, however, as the following pair of comments by research investigators illustrates: Dr. Gerald Lemoine: Each volunteer needs to pay Earthwatch; I think it’s now $2100 or $2200 to be with us for two weeks. And they help, it’s variable, depending on their background and capabilities and interests. But some are of tremendous help, intellectually and physically. But they help financially, for sure, all of them. Dr. Gary Lewin: Earthwatch has been a sustaining source of support for us. It’s somewhat less so now than it used to be. We’re down to half as many teams now as we used to have. But, we’ve been working with them since 1982 and it’s been a source of support, as well. Who are these volunteers? There doesn’t appear to be a single type, as comments from two of the researchers illustrate: John Maze: Oh, all kinds. There are a few types [of volunteers]. There’s the high school student who’s interested in doing science type, there’s the high school student who got it as a gift type, there’s older people that just want a vacation, sort of alone and away from anything. And then there are single people and married people that just, it sounds terrible to say, but it’s not really, if they want a break from their jobs, from their kids, from their house, from their friends, just want to get away and do something for themselves. I get a lot of those people. I get people that are biologists, that want to come and study different biology, different sorts of things. Every once in awhile we get college undergraduates that are interested in the field, but we generally steer them towards internships. 
It’s a full length internship just because they can get more out of that. It’s more designed for them, but sometimes they can’t with their schedule, so it’s better for them to come [as volunteers]. Holly Kershaw: I’d say the most common would be middle aged single women, so maybe in their early 40’s or late 30’s. We get a lot of teachers because teachers can get grants through Earthwatch to come so they have their way paid to come on the trip. Then they have to create a lesson plan out of what they learned and bring it back to the classroom. So we usually have at least one teacher on every trip but it’s mostly women that come in that 40-year-old range I’d say. One downside to volunteer labour is that when dealing with complex systems, volunteers often do not participate long enough to be able to contribute: Amy Kirkwood: We have volunteers that have gone out on boats with us, and that have helped us with little things in the lab, but generally there’s such a high turnover with volunteers, that it would take them too long to teach them how to use all of this. Another issue is the time that the care and feeding of short-term volunteers demands of the researchers and technicians during their field season: Interviewer: What are some of the costs and benefits? Holly Kershaw: Well, the benefit of course is that you get money. Also you’re hopefully educating some of these people so they’ll go and spread the conservation message basically to others. The cost is that it does take away from the research sometimes because some of these people, a lot of them are there on vacation and things. We’re there trying to get our research done, but you have to spend more time training. You get more individuals in, more teams in every two weeks so you have to start over with training and spend more time with that. Then also we have a lot of bad weather days due to high winds and then you have to [entertain them], especially if we’re in a remote place. 
[In this remote location], you know, you can’t go to a movie or to the pub so you have to find activities to keep them happy and all that. So it’s more work than if you didn’t have them but we couldn’t be there unless we had them so... The decision about how much to rely on volunteer labour both on the water and back in the lab depends not only on the preferences of the project personnel, then, but also on the complexity of the tasks that need to be performed. At least one organization used volunteers more extensively during the film era than they do now that they have transitioned to digital photography. The Pacific Whale Project used to rely on volunteers to use their darkroom and print out black and white prints of the photographs that would be used for identification. Now that they have transitioned to digital photography, however, they have turned their darkroom into a room for other uses and no longer use as many volunteers. Instead, full-time staff handled most of the steps in organizing and processing the digital images. There are a number of other actors and non-human actants involved in the photo-identification system too numerous to discuss in detail. Some of these are philanthropists, professional societies, funding agencies, governmental organizations, photography systems, information systems, animals, and ecosystems.
Exclusion of Actors
What about excluded actors in this socio-technical system? Has the influence and participation of some types of actors been reduced with the change from film photography to digital photography? Some of the excluded actors are fairly obvious (film-processing laboratories, film manufacturers, companies to ship the film), and not really terribly interesting from our analytic perspective since they were primarily external participants in the system. 
For the purposes of this paper, we will focus on discussing the most frequently mentioned change in human participants at marine mammal labs: the shift from volunteer labour to paid staff positions throughout the photo-identification process. This shift has resulted in a linked pair of included and excluded actors: the paid staff members are newly included, and some, but not all, of the roles played by volunteers are newly excluded. There are costs and benefits of this change to the practice of marine mammal science. The logging of data on the boat is an important role volunteers play in many of the projects: Christine Showers: I guess the primary help [the eco-volunteers provided for] my research was to record data. So, we had data sheets that we recorded surface data, like environmental data or the different behaviors or numbers, so it’s good because typically, one researcher will need to watch the dolphins to call out the behavior accurately, so it’s helpful if you have a volunteer that’s actually recording that data while you watch the dolphins. I guess they also help in terms of maintaining equipment and cleaning equipment up at the end of the day and they actually helped quite a bit this last season in terms of analyzing data on the computer. However, a researcher at the South European Dolphin Center noted a trend toward changing habits in the field regarding the logging of data about photographic images. Part of the earlier quotation is reproduced here: Leah Tull: With the slides, I would call out every picture I took. I would say left dorsal, right dorsal, and if possible the animal and if I wouldn’t know the animal I would say A1 or C1 or whatever just to indicate it’s the first adult…. [A volunteer] would definitely hang on my lips and write [everything] down…. Unfortunately, now we don’t still do that. 
Several people commented that the involvement of eco-volunteers in the field was highly variable, and some believed that as the equipment got more complex, there were fewer tasks that they could trust volunteer outsiders to do accurately. The real shift, however, was noted at some of the sites that previously used local volunteers to do a number of the tasks related to maintaining the office, keeping track of data, processing images, and matching images. Recall Jacob Tipton’s quotation above, in which he noted that the move away from volunteer interns and toward permanent staff is more efficient, but also more costly. One of the researchers on the same project described it this way: Ellen Batton: I think that it’s very different. I mean, our interns come in and they’re handed stock photos and this is a humpback whale, that’s the catalog, go look for it. And it’s changed so much in the last few years, because when it was still in film, the other thing that interns did as they came in - they didn’t actually develop the film, but the negatives would need to be printed and it would be like here’s a list. One of the staff would review the list to be printed and matched and we’d show them how to use the darkroom and then they would spend days and days and days in the darkroom printing photos. So that was a little bit more of a hands-on approach, but this system has taken the place of the interns, basically, so now you can’t really have an intern. You will need to have somebody who’s got some skill married to the program and that’s a shift that we’re still dealing with that we just don’t have the - - we used to really need a lot of unskilled labor and now we struggle for finding things for unskilled labor to do. The costs of this change are obvious from the point of view of the volunteers and interns who want to work with an organization that now struggles to find things for them to do. 
Many interns go to non-profits to enhance their education in biology, and also to increase their ability to get hired later, either by the same organization or by another one. Recall the discussion above (see p. 6) that profiled the technicians who are actors in this system. Many of the current technicians occupying paid staff positions had gained entrée to the organization as unpaid interns or volunteers. If there are fewer opportunities for interested people to make meaningful contributions and demonstrate their commitment to marine biology, this narrows one opportunity for long-term participation in marine mammal science. It is somewhat ironic that the flip side of this trend is that some of the same organizations are hiring full-time staff (either permanently or on temporary contracts) to do the more complex and technical work that unskilled interns and volunteers used to do. Open questions remain as to whether those interns are finding their way into these paid positions without the requisite experience and, if so, how they are doing so. The benefits from this change accrued to two different parties. First, newly employed paid employees benefited financially, particularly if their other option would have been to either do the same work for free or work in another organization for which they may not have the same passionate interest. Second, the organization benefited from having a well-trained, stable workforce over which they have more control than they would have over unpaid workers. As Tipton noted above, the current system, which relied on paid staff, was more efficient, especially in terms of time, which helps advance the organization’s scientific agenda. The opposing cost from the organization’s point of view was economic – staff needed to be paid regularly, and that meant the organization would have to ensure a continuing flow of money or risk watching their scientific program grind to a halt. 
Conclusion
The results reported here are part of a much larger study examining the role of a specific technological change (the switch from film-based to digital photography) in enabling and influencing changes in organizational settings. This analysis suggests that marine mammal scientists have altered the makeup of their project teams in response to the demands of the new technology of digital photography. Most of the scientists with whom we spoke were somewhat surprised by the extent to which their staffing needs had changed, and in some cases there appeared to be some concern that securing sustainable funding to continue to support the additional staff members could be difficult in the future. The need to hire new staff or retrain existing staff to deal with increasingly complex databases, information systems, and computer networks is a new cost for most of the projects. In the past, when using film-based photography, most of the studies found their computer needs to be relatively modest since much of their data was stored not in computer format, but in binders containing photographic slides or prints. These increasingly computer-based systems for saving, storing, moving, documenting and analysing digital photographs have led to a reduction in the reliance on volunteer workers and an increase in paid scientific staff. Whereas volunteers represented either free labour or a source of project income, paid staff technicians are an ongoing operating expense. Also, it is not clear whether this increase in reliance on paid staff will affect the possibility of gaining entrée to the system for uncredentialed newcomers. A number of the younger staff members had begun their careers as volunteer staff (mostly while the organizations were still using film photography) and worked their way into permanent positions. If these opportunities become less available, it could affect the supply of new scientists in the field in the future. 
Even more importantly, many of the scientists have shifted their allocation of time. Many of the scientists and their staff members now spend considerable time at the end of a field day and during their time back at the lab dealing with the demands of computer-based photography. Instead of labelling a canister of film and mailing it to a processor at the end of a long day in the field, the scientists now spend time downloading, organizing, renaming, archiving, and maintaining their digital photographs. In the lab, they spend considerable amounts of time categorizing, organizing, identifying, and working with these digital photographs. While some of these jobs are similar to those done when they used film, many have intensified in terms of time demands, and others are new tasks entirely. The seemingly simple change of swapping a three-pound piece of analog camera equipment for one equipped with a digital sensor has clearly had consequences in this domain. While the many positive consequences reported elsewhere 28 are strong enough that no scientists in the study are considering switching back to film-based photography, the unintended consequences regarding the form of scientific research teams, how the scientific work gets done, and who does it will have continuing impacts on the practice of marine mammal science.
Notes and References
1. Wanda J. Orlikowski, 'Using Technology and Constituting Structures: A Practice Lens for Studying Technology in Organizations', Organization Science, 11, 4, 2000, p. 241. 2. The term social informatics has been in use longer in Europe, where the earliest departments of social informatics date back at least as far as the 1980s, and possibly earlier. See Vasja Vehovar, 'Social Informatics: An Emerging Discipline?', in Jacques Berleur, Markku I. Nurminen, and J. Impagliazzo (eds.), IFIP International Federation for Information Processing, Volume 223, Social Informatics: An Information Society for All? 
In Remembrance of Rob Kling, Springer, Boston, 2006. 3. Kling mentions that the idea of coming up with a better term for this type of research first arose at an NSF-funded workshop on digital libraries held at UCLA in 1996, and social informatics was one possible term offered as a replacement for the variety of terms in use “including ‘social analysis of computing,’ ‘social impacts of computing,’ ‘information systems research,’ and ‘behavioral information systems research’”. See Rob Kling, 'What Is Social Informatics and Why Does It Matter?', D-Lib Magazine, 5, 1, 1999, p. 13. 4. Rob Kling, Holly Crawford, Howard Rosenbaum et al., Learning from Social Informatics: Information and Communication Technologies in Human Contexts, http://www.slis.indiana.edu/SI/Arts/SI_report_Aug_14.doc, 2000 [accessed July 28 2004]. 5. Kling, 1999, op. cit.; Rob Kling, Howard Rosenbaum, and Steve Sawyer, Understanding and Communicating Social Informatics: A Framework for Studying and Teaching the Human Contexts of Information and Communication Technologies, Information Today, Inc., Medford, NJ, 2005; Steve Sawyer, 'Social Informatics: Overview, Principles and Opportunities', Bulletin of the American Society for Information Science and Technology, 31, 5, 2005; S. Sawyer and K. R. Eschenfelder, 'Social Informatics: Perspectives, Examples, and Trends', Annual Review Of Information Science And Technology, 36, 2002. 6. Kling, Rosenbaum, and Sawyer, op. cit. 7. Ibid., p. 54. 8. Eric T. Meyer, 'Socio-Technical Perspectives on Digital Photography: Scientific Digital Photography Use by Marine Mammal Researchers', PhD thesis, Indiana University, Bloomington, 2007. 9. William H. Dutton, 'The Internet and Social Transformation: Reconfiguring Access', in William H. Dutton, et al. (eds.), Transforming Enterprise, MIT Press, Cambridge, MA, 2005. 10. E. Brynjolfsson, 'The Productivity Paradox of Information Technology', Communications of the ACM, 36, 12, 1993. 11. E. Brynjolfsson and L. M. 
Hitt, 'Beyond the Productivity Paradox', Communications of the ACM, 41, 8, 1998; Kling, 1999, op. cit. 12. Robert Stake, 'Case Studies', in Norman K. Denzin and Yvonna S. Lincoln (eds.), Strategies of Qualitative Inquiry, Sage, Newbury Park, CA, 1998. 13. John W. Creswell, Qualitative Inquiry and Research Design: Choosing among Five Traditions, Sage, Thousand Oaks, CA, 1998; Eva Nadai and Christoph Maeder, 'Fuzzy Fields: Multi-Sited Ethnography in Sociological Research', Forum Qualitative Sozialforschung / Forum: Qualitative Social Research, 6, 3, 2005. 14. Stake, op. cit. 15. Rob Kling, Geoffrey McKim, and Adam King, 'A Bit More to It: Scholarly Communication Forums as Socio-Technical Interaction Networks', Journal of the American Society for Information Science and Technology, 54, 1, 2003; Eric T. Meyer, 'Socio-Technical Interaction Networks: A Discussion of the Strengths, Weaknesses and Future of Kling's STIN Model', in Jacques Berleur, Markku I. Nurminen, and J. Impagliazzo (eds.), IFIP International Federation for Information Processing, Volume 223, Social Informatics: An Information Society for All? In Remembrance of Rob Kling, Springer, Boston, 2006. 16. Meyer, 2007, op. cit. 17. M.Q. Patton, Qualitative Evaluation and Research Methods, Sage, Newbury Park, CA, 1990. Cited in Janice M. Morse, 'Designing Funded Qualitative Research', in Norman K. Denzin and Yvonna S. Lincoln (eds.), Strategies of Qualitative Inquiry, Sage, Thousand Oaks, CA, 1998. 18. While the Society for Marine Mammalogy (SMM) is the major professional affiliation for the scientists in this study, the Society itself also has many more members not engaged in photo-related work. Other common techniques include genetics research, acoustics, and invasive health assessments. With no way to reliably identify which SMM members used which technique, drawing a representative sample would have had its own inherent limitations. 19. 
Most of the sites included are shown in a table of photo-identification projects listed in a 1988 report on a symposium convened to bring photo-identification researchers together for the first and apparently only time. See Philip S. Hammond, Sally A. Mizroch, and Gregory P. Donovan, Individual Recognition of Cetaceans: Use of Photo-Identification and Other Techniques to Estimate Population Parameters (Report of the International Whaling Commission Special Issue 12), International Whaling Commission, Cambridge, MA, 1990; Mizroch, personal communication, Dec. 2005. 20. Robert K. Yin, Case Study Research: Design and Methods, Sage, Newbury Park, CA, 2003, p. 87. 21. Ibid., p. 92. 22. Creswell, op. cit. 23. Ibid., pp. 125-26. 24. Norman K. Denzin and Yvonna S. Lincoln, The Handbook of Qualitative Methods, Sage Publications, Thousand Oaks, CA, 2000, p. 2. 25. Stake, op. cit. 26. Technicians (n=13), mean age=27.5, range=20-35, s.d.=4.1; Researchers (n=14), mean age=33.7, range=27-45, s.d.=6.2; Investigators (n=14), mean age=55.6, range=50-62, s.d.=4.0. 27. All names of people and places in this article are pseudonyms. 28. Meyer, 2007, op. cit.
work_ppaq7q4s3rbilmybfflfjssn6u ---- untitled
rsfs.royalsocietypublishing.org Research
Cite this article: Stoddard MC, Miller AE, Eyster HN, Akkaynak D. 2018 I see your false colours: how artificial stimuli appear to different animal viewers. Interface Focus 9: 20180053. http://dx.doi.org/10.1098/rsfs.2018.0053
Accepted: 15 October 2018
One contribution of 11 to a theme issue ‘Living light: optics, ecology and design principles of natural photonic structures’. 
Subject Areas: environmental science, biomaterials, computational biology
Keywords: animal coloration, artificial stimuli, animal vision, sensory ecology, ultraviolet digital photography, spectrophotometry
Author for correspondence: Mary Caswell Stoddard, e-mail: mstoddard@princeton.edu
© 2018 The Author(s) Published by the Royal Society. All rights reserved.
Electronic supplementary material is available online at https://dx.doi.org/10.6084/m9.figshare.c.4274654.
I see your false colours: how artificial stimuli appear to different animal viewers
Mary Caswell Stoddard1, Audrey E. Miller1, Harold N. Eyster2 and Derya Akkaynak1
1Department of Ecology and Evolutionary Biology, Princeton University, Princeton, NJ 08544, USA; 2Institute for Resources, Environment and Sustainability, University of British Columbia, Vancouver, British Columbia, Canada V6T 1Z4
MCS, 0000-0001-8264-3170; DA, 0000-0002-6269-4795
The use of artificially coloured stimuli, especially to test hypotheses about sexual selection and anti-predator defence, has been common in behavioural ecology since the pioneering work of Tinbergen. To investigate the effects of colour on animal behaviour, many researchers use paints, markers and dyes to modify existing colours or to add colour to synthetic models. Because colour perception varies widely across species, it is critical to account for the signal receiver’s vision when performing colour manipulations. To explore this, we applied 26 typical coloration products to different types of avian feathers. Next, we measured the artificially coloured feathers using two complementary techniques—spectrophotometry and digital ultraviolet–visible photography—and modelled their appearance to mammalian dichromats (ferret, dog), trichromats (honeybee, human) and avian tetrachromats (hummingbird, blue tit). Overall, artificial colours can have dramatic and sometimes unexpected effects on the reflectance properties of feathers, often differing based on feather type. 
The degree to which an artificial colour differs from the original colour greatly depends on an animal’s visual system. ‘White’ paint to a human is not ‘white’ to a honeybee or blue tit. Based on our analysis, we offer practical guidelines for reducing the risk of introducing unintended effects when using artificial colours in behavioural experiments.
1. Introduction
In a classic study, biologists applied ultraviolet (UV)-absorbing sunblock on male blue tits Cyanistes caeruleus and discovered that this changed their attractiveness to females, who modified the sex ratio of their broods in response [1]. Experimental colour manipulations like this one have played a central role in behavioural ecology for decades. The tradition was popularized by Tinbergen and colleagues, who modified the appearance of gull eggs to illuminate the mechanisms of egg recognition and camouflage [2,3]. Biologists have continued to deploy artificially coloured stimuli in a wide range of studies to investigate the effects of colour on animal behaviour, typically using paints, markers and dyes to modify existing colours (on animals and plants) or to colour a synthetic model. This widespread and (seemingly) simple approach has yielded new insights into the role of colour in sexual and social signalling, mimicry, anti-predator defence and pollination behaviour across diverse taxa (table 1). The advantages and risks associated with using artificial stimuli have been recently highlighted in a pair of thought-provoking papers by Hauber et al. [32] and Lahti [33]. The discussion is focused on artificial egg stimuli, which are commonly—and increasingly—used to investigate egg rejection behaviour in hosts of avian brood parasites. In most egg rejection experiments, which exceed 10 000 in number [32], biologists have deposited a painted model egg (made of wood or plaster) or a painted-over natural egg in a host bird’s nest to gauge the host’s response: the egg will be accepted or rejected. 
An alternative approach is to use natural eggs in experiments. In this case, a host nest is ‘parasitized’ using a real parasitic or conspecific egg, and statistical methods are used to determine the effects of different aspects of the stimulus on behaviour [34].
Table 1. A representative list of publications using artificial colour treatments on natural and synthetic stimuli. Silhouette icons are from kisspng.com and covered by a personal use licence.
Hauber et al. [32] identify several merits of using artificial stimuli, which they define as any object made up of, or modified by, a material or pigment not directly extracted from nature. The main benefits include: (i) artificial stimuli can be standardized; (ii) correlated traits—like colour and pattern (e.g. speckling)—can be varied independently; and (iii) supernormal stimuli can push an animal’s sensory and cognitive limits, revealing ‘hidden’ behavioural plasticity (i.e. a host bird might never reject a natural parasite egg but is fully capable of rejecting an egg with a more extreme appearance). But using artificial stimuli can be perilous, requiring us to make assumptions about the sensory and cognitive experiences of the study animal. Lahti [33] dubs this risk the ‘umwelt gamble’. 
Do we understand an animal’s perceptual world, or umwelt, well enough to feel confident that an artificial stimulus is having the intended effect? Lahti [33] argues that we should proceed cautiously, mainly because: (i) artificial stimuli often elicit different behavioural responses from the natural stimuli for which they are substitutes; (ii) changing one aspect of a stimulus can induce other undesired changes (i.e. increasing the spot size on an egg with a Sharpie marker might also change the egg’s colour, texture or smell); (iii) artificial stimuli might tap into sensory biases or preferences in unexpected ways, or be so far outside the natural percept (of an egg, for example) that it is seen as a total oddity; and—ultimately—(iv) humans are often poor judges of which features are most salient to animals. Although the Hauber et al. [32] and Lahti [33] commentaries do not exclusively address artificial colour manipulations, it is clear that the stakes are probably highest when colour is involved: Lahti [33] concludes by imploring researchers to consider seriously the gamble we take ‘when we pick up that paintbrush or magic marker’ (p. 534). Human colour vision differs markedly from that of other animals. Birds, for example, are tetrachromatic and have four colour cones, one of which is UV sensitive, compared with three in trichromatic humans; they also possess oil droplets in the retina, which further modify the cone sensitivities [35]. A survey of the animal kingdom reveals that the number of colour cone types varies dramatically across taxonomic groups, ranging from the monochromats (pinnipeds, some whales and deep-sea fish) and dichromats (Eutherian mammals, some New World monkeys) to the trichromats (some primates, honeybees, many amphibians), tetrachromats (birds, and many turtles, lizards and fish) and beyond (butterflies, mantis shrimp) [36,37]. 
Because of this, artificially coloured stimuli—when used to test hypotheses about signalling and communication—may fail unless researchers carefully account for the colour perception of the intended signal receiver. Fortunately, many researchers are aware of this (table 1) and often use spectrophotometry and models of animal colour vision to estimate what an artificial colour might look like to the study animal. However, it is not always clear when and how to adopt these measures, and whether or not human vision can be a suitable proxy for animal colour perception remains a topic of discussion [38]. Here, we systematically analyse and compare the effects of different artificial colour treatments from the perspective of different animal viewers. Such a study, to our knowledge, has not been conducted. Our overall goal is to provide a set of practical guidelines for minimizing the ‘umwelt gamble’ when using artificial colours in behavioural experiments. To establish these guidelines, we ask the following: (i) In behavioural experiments, what materials are commonly used—and for what purposes? (ii) How do different artificial colours change the reflectance properties of the substrates to which they are applied? (iii) Do artificial colours have different effects on different substrates? (iv) Using models of animal colour vision, how might artificial colours appear to a range of animal viewers? (v) When combined with visual models, do two complementary techniques, spectrophotometry and digital UV-visible photography, yield similar estimates of animal colour perception? As a case study, we applied 26 different artificial colours to single avian feathers. 
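Models of animal colour vision of the kind mentioned above typically reduce a measured reflectance spectrum to relative cone (quantal) catches, summing the product of reflectance, illuminant and cone spectral sensitivity across wavelengths. The sketch below is purely illustrative: the coarse wavelength grid, flat illuminant and cone sensitivity curves are invented values for a hypothetical dichromat, not the measured sensitivities of any species in this study.

```python
# Relative quantal catch of cone class i for a surface with reflectance R(λ)
# under illuminant I(λ) and cone sensitivity S_i(λ):
#     Q_i ∝ Σ_λ R(λ) · I(λ) · S_i(λ) · Δλ
# All spectra below are made-up, coarse (50 nm step) illustrations.

WAVELENGTHS = [300, 350, 400, 450, 500, 550, 600, 650, 700]  # nm

def quantal_catch(reflectance, illuminant, sensitivity, step=50):
    """Riemann-sum approximation of the photon-catch integral."""
    return sum(r * i * s * step
               for r, i, s in zip(reflectance, illuminant, sensitivity))

def relative_catches(reflectance, illuminant, cones):
    """Normalize catches across cone classes so they sum to 1."""
    q = {name: quantal_catch(reflectance, illuminant, s)
         for name, s in cones.items()}
    total = sum(q.values())
    return {name: v / total for name, v in q.items()}

# Flat daylight-like illuminant and two schematic cone classes of a
# hypothetical dichromat: short- (S) and long-wavelength (L) sensitive.
illuminant = [1.0] * len(WAVELENGTHS)
cones = {
    "S": [0.1, 0.6, 1.0, 0.6, 0.2, 0.0, 0.0, 0.0, 0.0],
    "L": [0.0, 0.0, 0.0, 0.1, 0.5, 1.0, 0.8, 0.4, 0.1],
}

# A surface reflecting mostly long wavelengths ("red" to a human)
# stimulates the L cone far more than the S cone.
red_surface = [0.05, 0.05, 0.05, 0.1, 0.2, 0.6, 0.9, 0.9, 0.9]
catches = relative_catches(red_surface, illuminant, cones)
print(catches)
```

The same arithmetic, applied to a second viewer's sensitivity curves, is what lets the same paint yield very different relative catches for, say, a UV-sensitive tetrachromat than for a human trichromat.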
We measured untreated (control) and artificially coloured feathers using spectrophotometry and photography, and we modelled their appearance to different animal receivers, including dichromats, trichromats and tetrachromats. These measurements comprise a comprehensive dataset; we make all reflectance spectra available here to the research community as part of the electronic supplementary material.
2. Methods
2.1. Selecting and applying different treatments
We reviewed the literature to identify animal behaviour studies that have used artificially coloured stimuli. Our goal was not to produce an exhaustive list but rather a representative set of papers, capturing diversity in colour treatment products, animal taxa and functional hypotheses (e.g. about sexual selection, anti-predator defence). These studies are summarized in table 1. For simplicity, we restricted our search to studies using paints, markers, glue, dyes, sunscreens and a few natural products (e.g. gum Arabic, rutin). We purchased 26 commonly used products similar or identical to those we found in our literature search (table 1). We grouped these according to colour effect (as viewed by a human): clear, white, black and grey, UV-blocking and colour. We obtained commercially available duck (Anas platyrhynchos domesticus), turkey (Meleagris gallopavo domesticus), pheasant (Phasianus colchicus), guineafowl (Numida meleagris) and peacock (Pavo cristatus) feathers from a range of online vendors. The feathers were natural and untreated with chemicals or dyes with the exception of the turkey feathers, which were bleached white. We retained the turkey feathers in our study as a useful point of comparison with the unbleached white duck feather. 
Overall, the feathers exhibited a range of natural colour-producing mechanisms—unpigmented white (duck), melanin-based (pheasant and guineafowl) and iridescent structural colour from melanin arrays in feather barbules (peacock)—and provided different types of natural substrate on which to apply the treatments. For this study, we did not include feathers coloured by carotenoid pigments: future work could explore the effects of artificial colour treatments on carotenoid-based colours, which are common in birds and other taxa. For each of the 26 artificial colour treatments, we applied one coat of the product to each of the five feather types. Because the products differed considerably in thickness and viscosity, we cannot say that feathers in each treatment received the same volume of product. This is certainly something with which researchers should experiment when performing their own colour manipulations, as the amount of product applied could affect conclusions. One set of unmodified feathers served as the controls. For paints, we used a separate paintbrush for each treatment to avoid contamination.

2.2. Spectrophotometry

We used a USB4000 UV–VIS spectrophotometer with a PX-2 lamp (Ocean Optics, Dunedin, FL, USA) to obtain reflectance measurements for the control and treated feathers. Feathers were placed on a dark black velvet card and reflectance was measured normal (90°) to the feather using a bifurcated illumination/reflectance optical fibre. We obtained two measurements per feather for the duck, turkey, guineafowl and pheasant feathers. Measurements of guineafowl and pheasant feathers contained a mix of lightly and darkly pigmented regions. For the peacock feather, we obtained two measurements for each of the four distinct colour patches comprising the ocellus: the innermost 'purple-black' region, followed by the 'blue-green', 'bronze-gold' and outermost 'light green' regions (see [39] for definitions). For simplicity, we measured these iridescent peacock colours from one angle only (normal); future analyses could investigate effects at multiple angles. All reflectance data are available in the electronic supplementary material.

2.3. UV-visible photography

Digital photographs of control and artificially coloured feathers were taken using a modified Nikon D7000 camera converted to full-spectrum sensitivity and a Nikkor 105 mm lens. Visible-spectrum images were taken through a Baader UV/IR-Cut/L filter that transmits light from 420 to 680 nm, while UV images were taken through a Baader U-Filter that transmits light from 320 to 380 nm. Photographs were taken in raw format with ISO 400 and a fixed aperture of f/8. All images were taken in a dark room using an Iwasaki eyeColor arc bulb as the only light source. The bulb's UV filter was removed so that the lamp would emit light in the UV-visible range (300–700 nm). The light was diffused with a sheet of polytetrafluoroethylene (PTFE), which is a spectrally flat plastic. To ensure steady emission from the lamp, the light source was kept on for at least 10 minutes before photographs were taken. Feathers for each treatment were photographed from above on a white (not spectrally flat) background; a 40% Spectralon grey reflectance standard (Labsphere, North Sutton, NH, USA) and scale bar were included in each image.

2.4. Modelling animal colour perception

We used two parallel pipelines to calculate the relative stimulation of the different colour cone types (i.e. the relative photon or quantal catch) for six visual systems: two mammalian dichromats (ferret Mustela putorius, dog Canis familiaris), two trichromats (honeybee Apis mellifera, human Homo sapiens) and two avian tetrachromats (hummingbird Trochilidae spp., blue tit). Because the mechanisms for luminance (achromatic) perception differ considerably across these animal taxa (i.e.
double cones for birds, the sum of the medium- and longwave-sensitive cones for humans [40]), we did not model luminance in this analysis. We used the same animal photoreceptor sensitivities in both pipelines: ferret, dog, honeybee and human curves are from the Mica toolbox [41], and hummingbird curves are from [42]. In pipeline 1, we used Pavo's built-in blue tit curves. In pipeline 2, we used Mica's built-in blue tit curves. Original sources for these photoreceptor sensitivities are as follows: ferret [43,44], dog [45], honeybee [46], human [41], hummingbird [42] and blue tit [47]. For the ferret, Douglas & Jeffery [44] give the photoreceptor absorption and lens transmission spectra; for the dog [45], the overall spectral sensitivities are estimated from colour-matching experiments; for the honeybee [46], only the cone absorption spectra are given; for humans [41], absorptance curves are provided; for the blue tit [47] and hummingbird [42], visual pigment, ocular media and oil droplet spectra are given.

2.4.1. Pipeline 1: reflectance spectra

Reflectance spectra were processed in R [48] using the package Pavo [49]. First, we averaged the two replicate measurements per feather or feather patch (for peacock). We then calculated absolute and relative colour cone stimulation for each visual system (see details above), assuming von Kries adaptation to an ideal illuminant and background.
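The cone-stimulation step above can be sketched as follows. This is an illustrative re-implementation in Python, not the Pavo code used in the study; the Gaussian sensitivity curves and the 'orange' reflectance spectrum are invented placeholders, not real receptor or feather data.

```python
import numpy as np

# Sketch of pipeline 1's cone-catch calculation: quantum catch as the
# integral of reflectance x sensitivity x illuminant, followed by
# von Kries adaptation to an ideal (flat) illuminant and background.
wl = np.arange(300, 701)  # wavelengths in nm

def gaussian(peak, width):
    return np.exp(-0.5 * ((wl - peak) / width) ** 2)

# Hypothetical tetrachromat: uvs, sws, mws, lws sensitivity peaks (placeholders)
sens = np.stack([gaussian(p, 40) for p in (370, 450, 540, 600)])

illum = np.ones_like(wl, dtype=float)       # ideal (flat) illuminant
background = np.ones_like(wl, dtype=float)  # ideal achromatic background

refl = 0.2 + 0.6 * (wl > 550)  # toy 'orange' reflectance spectrum

# Absolute quantum catch per receptor
q = np.trapz(refl * sens * illum, wl, axis=1)

# von Kries adaptation: normalize each receptor by its catch for the background
q_adapted = q / np.trapz(background * sens * illum, wl, axis=1)

# Relative catches (sum to 1), comparable to the values reported in figure 2c
rel = q_adapted / q_adapted.sum()
print(rel.round(3))
```

For the toy orange spectrum, the longwave receptor receives the largest relative catch, as expected for a stimulus reflecting mostly above 550 nm.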
We also estimated just-noticeable differences (JNDs) between the untreated (control) and artificially coloured feathers using the following colour cone densities and Weber fractions (for the most abundant cone type): ferret (cone ratio 1 : 14, Weber fraction = 0.05), dog (cone ratio 1 : 9, Weber fraction = 0.27), honeybee (cone ratio 1 : 0.47 : 4.4, Weber fraction = 0.13), human (cone ratio 1 : 5.49 : 10.99, Weber fraction = 0.05), blue tit (cone ratio 1 : 2 : 2 : 4, Weber fraction = 0.1), hummingbird (cone ratio 1 : 1.9 : 2.2 : 2.1, Weber fraction = 0.05). To obtain this information, we consulted the following sources: [50–53], using parameters for peacock Pavo cristatus as estimates for hummingbird.

2.4.2. Pipeline 2: digital images

Images were processed using the Mica toolbox plugin in ImageJ [41]. The linear raw UV and visible images were manually aligned and converted to normalized 32-bit multispectral images. For each feather or feather patch (for peacock), two square regions of interest (ROIs) were selected; the estimated colour cone stimulation values for the two ROIs were subsequently averaged. We chose ROI sizes to best fit each feather/patch. In general, these corresponded to squares of these dimensions: 5 mm × 5 mm (duck), 1 cm × 1 cm (turkey), 4 mm × 4 mm (pheasant, guineafowl) and either 3 mm × 3 mm or 5 mm × 5 mm (peacock). Using these ROIs as inputs, cone catch values were estimated using cone-mapping models in the Mica toolbox [41]. A model for a particular animal viewer is generated as follows. First, the responses of the camera's sensors—to a large dataset of known natural spectra, under a specified illuminant—are simulated, using known sensor sensitivities for the camera. Next, an animal's colour cone stimulation responses—to the same natural spectra under a specified illuminant—are simulated, using known photoreceptor (cone) sensitivities.
Then a polynomial model is generated so that the animal's cone stimulation values can be predicted from the camera's stimulation values; the model is then applied to the images of interest (in our case, the feather ROIs). To generate a model for each of the six animal visual systems used in this study, we used the following inputs to Mica: camera sensitivities: Mica's default sensitivities for the Nikon D7000 and Nikkor 105 mm lens; photography illuminant: Mica's built-in irradiance spectrum of the eyeColor arc bulb; animal photoreceptor sensitivities: we used sensitivities from various sources for ferret, dog, honeybee, human, blue tit and hummingbird (see above); specified illuminant (for the final colour cone estimates): ideal, achromatic light. We also specified a polynomial term of 2 and an interaction term of 3. For each of the six visual models, we conducted batch image analysis on the ROIs for the control and artificially coloured feathers. This yielded estimates of an animal's relative cone stimulation responses to the different feathers, as follows: ferret and dog [sws, lws]; honeybee [uvs, sws, mws]; human [sws, mws, lws]; hummingbird [vs, sws, mws, lws]; and blue tit [uvs, sws, mws, lws], where uvs = UV-sensitive, vs = violet-sensitive, sws = shortwave-sensitive, mws = mediumwave-sensitive and lws = longwave-sensitive.

3. Results

3.1. In behavioural experiments, what materials are commonly used—and for what purposes?

Our non-exhaustive search of the literature, summarized in table 1, showed that artificially coloured stimuli have been used to test diverse hypotheses about the influence of colour on behaviour. Colour manipulation experiments have been popular in studies of sexual selection, social signalling, anti-predator defence (camouflage and aposematism) and mimicry, with additional work on sensory bias, foraging behaviour, parental care and pollination ecology.
Many experiments involve birds and butterflies, but other taxonomic groups—including spiders, moths, wasps, frogs and fish—are represented. The most common materials used to produce artificial colours appear to be enamel and acrylic paints, permanent markers and sunscreens, but creative alternatives (e.g. hair dye [4], a UV-reflective Fish Vision paint designed for fish lures [31]) exist. Artificial colour treatments are applied either to the integument (e.g. feathers, skin, scales, petals) of a live animal or plant, or to a fully synthetic model (e.g. plaster egg, plastic disc). Treatments are usually intended to function in one of three ways: as a control, to add or enhance a colour (additive), or to remove all or part of a colour (subtractive). As a control, usually a clear or white paint is used to determine whether there is an effect of some artificial treatment. Ideally, the control should not change the appearance of the trait being studied, so clear materials are often used. For additive treatments, typically colour is added to match or resemble natural variation, but sometimes creating a generic colour—or an exaggerated colour intended to be beyond natural, or supernormal—is the goal. For subtractive treatments, the intent is usually to mask a colour, often so that it 'disappears' by blending in with the rest of the animal. Sometimes the goal is to block only part of the spectrum; this is why sunscreens are often used when the objective is to reduce UV reflectance but leave the rest of the spectrum more or less unaltered.

3.2. How do different colour treatments change the reflectance properties of the substrates to which they are applied? Do colour treatments have different effects on different substrates?

For simplicity, we focus here on the effects of artificial colour on three types of feather: white duck feathers, brown pheasant feathers and the blue-green patch of the peacock feather (figure 1).
The white duck feather is unpigmented, the pheasant feather is pigmented with melanin and the blue-green patch of the peacock feather, which has been shown to influence mating success [39], is a structural colour produced by the arrangement of melanin rod nanostructures and keratin in the feather barbules [54]. Overall, artificial colours had very different effects on these three feather types. Our results are summarized in figure 1 and discussed below; reflectance spectra for the other feathers (turkey, guineafowl, additional peacock feather patches) can be found in the electronic supplementary material.

3.2.1. Untreated feathers (figure 1, red curves)

The white duck feather was characterized by low reflectance between 300 and 400 nm, a sharp peak at 426 nm and relatively flat reflectance from 500 to 700 nm. The brown pheasant feather had a relatively flat, dark (approx. 10% reflectance) spectrum. The blue-green peacock feather had a pronounced peak at 512 nm, with low reflectance in the UV (300–400 nm) and longwave (600–700 nm) portions of the spectrum.
[Figure 1 shows reflectance spectra (300–700 nm) for the white duck feather, the brown pheasant feather and the blue-green patch of the peacock feather under each treatment group: clear (no treatment, Elmer's glue, Krazy glue, Testors clear paint thinner); white (DecoArt and Liquitex white acrylic paints, Testors white enamel paint, Rust-oleum white paint, Mohawk white paint-marker); black markers (Marks-a-lot, Pilot, Berol, black/super black/king size/gray Sharpies); black paints (Liquitex and DecoArt black acrylics, DecoArt black glossy paint, Rust-oleum flat black latex paint, Testors black enamel paint); sunscreens (Ocean Potion 15, Safetec sunblock, No-ad 50 sunblock); and colours (Unipaint Oil orange paint-marker, orange Sharpie, yellow Sharpie).] Figure 1. Effects of selected artificial colour treatments presented here for the duck, pheasant and peacock feathers. Please note that the y-axis is not on the same scale in each plot.
Results for the remaining treatments and feather types can be found in the electronic supplementary material.

3.2.2. Clear treatments (figure 1, row 1)

On the white duck feather, the two clear glues had a minimal effect on reflectance, while the paint thinner reduced the overall brightness (absolute reflectance). On the brown pheasant feather, the glues also minimally affected reflectance; the paint thinner both reduced brightness and changed the shape of the reflectance curve. On the blue-green peacock feather, both glues increased brightness in parts of the spectrum (300–450 nm, 575–700 nm) and Krazy glue produced a slight upward shift in the wavelength of maximum reflectance (hereafter, the 'green peak'). Paint thinner, however, had a minimal effect on reflectance. Overall, while glue may be an effective control (i.e. minimally changing the reflectance properties of the untreated feather) for white and melanin-based feathers, paint thinner may be a better choice for structural colours.

3.2.3. White treatments (figure 1, row 2)

On the white duck feather, white markers and paints reduced reflectance in the UV region (300–400 nm) and produced bright, flat reflectance from 425 to 700 nm, with some variation in the overall brightness produced by different treatments. On the brown pheasant feather, white treatments reduced the UV reflectance slightly and increased reflectance elsewhere; the brightness of painted pheasant feathers was lower than that of painted duck feathers, because the underlying pheasant feather was so dark.
One white paint-marker (Mohawk) failed to produce a bright 'white' colour similar to the other treatments because it did not adhere well to the feather. On the blue-green peacock feather, white markers and paints had very different effects on the shape and intensity (brightness) of the reflectance spectrum. Even two similar acrylic paints produced very different spectra: DecoArt increased brightness and retained a small peak around 510 nm, while Liquitex produced a less bright spectrum with relatively flat reflectance above 400 nm. Overall, for white and pigmented feathers, white treatments appear to produce 'white' spectra with low UV reflectance and moderate-to-high flat reflectance elsewhere, though the effects on brightness vary by treatment. For structurally coloured feathers, white treatments do not always mask the underlying colour and affect the substrate in very different ways (see 'Unusual effects' below).

3.2.4. Black treatments (figure 1, rows 3 and 4)

On the white duck feather, black markers and paints produced a dark, flat reflectance spectrum from 300 to 700 nm. The acrylic paints (Liquitex and DecoArt) produced darker spectra than the latex paint (Rust-oleum). On the brown pheasant feather, the effects were similar. However, the Marks-a-lot marker had a minimal effect on the reflectance properties of the already-dark untreated feather. On the blue-green peacock feather, the black treatments completely failed to produce dark, flat reflectance spectra; instead, the green peak was retained and sometimes shifted, and the different treatments exerted various effects on brightness (see 'Unusual effects'). Overall, while black treatments might effectively produce black spectra when applied to duck and pheasant feathers, they are ineffective on structural peacock feathers.

3.2.5. Sunscreen treatments (figure 1, row 5)

On the white duck feather, sunscreens reduced but did not eliminate UV reflectance below 400 nm.
Perhaps surprisingly, sunscreens also affected reflectance above 400 nm, greatly reducing the intensity of the untreated feather's sharp peak around 420 nm. On the brown pheasant feather, sunscreens had only a minimal effect on the shape and brightness of the flat, dark reflectance spectrum. On the blue-green peacock feather, sunscreens did not change the UV reflectance but did shift the untreated feather's green peak from 512 nm to about 550 nm, probably due to glycerin—a common sunscreen ingredient (see 'Unusual effects'). Overall, while sunscreens appear to have minor effects on melanin-pigmented feathers, they can produce large changes to the reflectance properties of white and structurally coloured feathers, and these changes are not (as some researchers might expect) limited to the UV wavelengths.

3.2.6. Colour treatments (figure 1, row 6)

On the white duck feather, orange and yellow treatments changed the reflectance properties in expected ways, producing reflectance spectra typical of orange and yellow colours. An orange paint-marker (Unipaint Oil) produced a brighter orange than an orange Sharpie marker. On the brown pheasant feather, only the orange paint-marker (Unipaint Oil) coated the feather sufficiently well to produce an orange reflectance spectrum. On the blue-green peacock feather, the orange and yellow treatments produced unusual reflectance spectra (see 'Unusual effects'). Overall, while markers appear to produce orange and yellow reflectance spectra on white feathers, a paint-marker or paint is likely to be required to add colour effectively to melanin-pigmented feathers. In addition, orange and yellow treatments fail to produce typical orange and yellow spectra on structurally coloured feathers.

3.2.7. Unusual effects

Almost all colour treatments had unusual effects on the structurally coloured, blue-green peacock feather, compared with the effects on the white duck feather.
The primary reason for this is that materials interact with the feather structure—nanoscale melanin rods and keratin in the feather barbules—in complex and highly variable ways. For example, when applied to the green barbules of peacock feathers, glycerin induces an upward shift in the peak of maximum reflectance: we see this effect when sunscreens, of which glycerin is a typical ingredient, are applied to the blue-green patch (figure 1, row 5 and column 3). Glycerin fills the air holes of the barbules, changing the refractive index contrast (between the air and barbules) and causing a shift to longer wavelengths [54].

3.3. Using models of animal colour vision, how might artificial colours appear to a range of animal viewers?

Like any colour, the appearance of an artificial colour depends (in part) on its spectral properties and the colour cone sensitivities of the animal viewer. Estimating the relative photon catch values for six species representing three colour vision systems (dichromatic, trichromatic, tetrachromatic) showed that the effects of artificial colour treatments can be very different depending on the animal viewer. Here we highlight one example (figure 2) that illustrates this point; detailed results, including the relative photon catch values for all visual systems for all treatments, are provided in the electronic supplementary material. Imagine a scenario in which a biologist paints a white feather (or flower, or other substrate) orange or yellow, to determine how different signal receivers—a ferret, a bee, a human and a blue tit, for example—respond to the modified stimuli. Painting the feather white (as a control) and orange or yellow (as the test) will have very different visual impacts on the different signal receivers. When painted white, the reflectance spectrum of a (bleached) white turkey feather changed: its reflectance was reduced between 300 and 400 nm and increased between 425 and 700 nm (figure 2a).
Therefore, animals with UV sensitivity would detect substantial differences between the colour of the unpainted 'white' turkey feather and the painted 'white' turkey feather. This was evident when we calculated the relative colour cone stimulation values of the honeybee and blue tit, both of which have UV sensitivity (figure 2b). Compared with the unpainted feather, the white-painted feather showed lower relative stimulation of the UV cone type (for bee and blue tit) and increased stimulation of the other cones (figure 2c). Even for the ferret, which has some UV sensitivity, the painted feather resulted in different cone stimulation values. By contrast, the relative colour cone stimulation values for humans, who have broad sensitivity between 400 and 700 nm, barely changed: the unpainted and painted feathers would both appear to a human to be white (figure 2c), evenly stimulating the shortwave-, mediumwave- and longwave-sensitive cones. However, note that the painted feather would appear brighter due to its increased absolute reflectance. An estimate of the JNDs between the untreated (unpainted) and painted feathers (figure 2d, see 'DecoArt white acrylic paint') suggested that the two colours would be seen as very similar (and probably indistinguishable) by humans and ferrets but different (and probably distinguishable) by honeybees and blue tits.

[Figure 2 comprises four panels: (a) reflectance spectra of the bleached turkey feather with no treatment (1), DecoArt white acrylic paint (5), orange Sharpie (26) and yellow Sharpie (27); (b) normalized spectral sensitivities of the ferret (sws, lws), honeybee (uvs, sws, mws), human (sws, mws, lws) and blue tit (uvs, sws, mws, lws); (c) relative cone catches for treatments 1, 5, 26 and 27 under each visual model; (d) JNDs between the untreated feather and each of the 26 treatments, grouped by treatment category.] Figure 2. Modelling the appearance of artificial colour stimuli to different visual systems. In (a), the solid red curve shows the reflectance spectrum of the bleached turkey feather before any treatment was applied. Dashed lines denote white paint and coloured Sharpie treatments applied to the bleached turkey feather. We calculated the relative cone catches (c) corresponding to these stimuli from the perspective of a ferret (dichromat), honeybee (trichromat with UV sensitivity), human (trichromat) and a blue tit (tetrachromat), whose spectral sensitivities are given in (b). See the main text for details about the spectral sensitivity curves. Different treatments induced different cone catches in these visual systems (c). For example, to the honeybee and the blue tit, which are sensitive to UV light, the white treatment resulted in reduced UV-cone stimulation when compared with the unaltered feather (c, compare 1 versus 5). Comparisons of the just-noticeable differences (JNDs) calculated between the untreated turkey feather and the 26 treatments covered in our study are shown in (d). The red dashed line represents the discriminability threshold at JND = 1; to a given observer, values to the right of this line are predicted to be discriminable, and values to the left are considered indiscriminable. These results suggest that while two colours might—in some cases—be seen as very similar (and probably indistinguishable) by humans, other animals may perceive them as different and distinguishable, depending on the colour treatment. Silhouette icons are from phylopic.org and covered by a Creative Commons licence.

The take-home message is that 'white' to a human is not the same as 'white' to a honeybee or bird. In the hypothetical scenario described above, white paint might be an effective control for humans, but it would be a wildly inappropriate choice for many other animals. This revelation—that our human concept of 'white' does not always translate to animal viewers—has been discussed often in the literature, but we highlight it here because it is a classic example. A corollary is that 'yellow' (approximately 50% mws and 50% lws) or 'orange' (approximately 25% mws and 75% lws) to a human is not the same as 'yellow' or 'orange' to a honeybee or ferret, because these animals are less sensitive to longwave parts of the spectrum (figure 2b).
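The JND comparison underlying figure 2d can be illustrated with a minimal receptor-noise-limited model. The Python sketch below implements only the dichromat case, using the ferret parameters given in the Methods (cone ratio 1 : 14, Weber fraction 0.05 for the most abundant cone); the cone-catch values fed into it are invented for illustration, and the study itself used Pavo's implementation rather than this code.

```python
import math

# Minimal receptor-noise-limited (dichromat) sketch: chromatic distance in
# JND units between two stimuli, given von Kries-adapted quantum catches.

def dichromat_jnd(qa, qb, ratios=(1.0, 14.0), weber=0.05):
    """JND distance between stimuli a and b for a two-cone viewer.

    qa, qb: adapted quantum catches (sws, lws).
    ratios: relative cone abundances; the Weber fraction applies to the
    most abundant cone type, as in the parameters listed in the Methods.
    """
    # Receptor noise scales inversely with the square root of cone abundance
    e = [weber * math.sqrt(max(ratios) / n) for n in ratios]
    # Log receptor contrasts between the two stimuli
    df = [math.log(a / b) for a, b in zip(qa, qb)]
    return abs(df[0] - df[1]) / math.sqrt(e[0] ** 2 + e[1] ** 2)

untreated = (0.30, 0.70)       # hypothetical sws, lws catches
white_painted = (0.31, 0.69)   # small chromatic change
orange_painted = (0.10, 0.90)  # large chromatic change

print(dichromat_jnd(untreated, white_painted))   # below 1 JND: likely indistinguishable
print(dichromat_jnd(untreated, orange_painted))  # above 1 JND: likely distinguishable
```

Trichromat and tetrachromat versions follow the same logic with more receptor-pair terms; in practice one would use an existing implementation rather than hand-rolling the higher-dimensional formulas.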
For example, we found that yellow and orange Sharpies, which increased reflectance in the longwave parts of the spectrum (550–700 nm), resulted in larger colour differences (relative to the untreated feather) for humans and blue tits than for ferrets and honeybees (figure 2a–d). This example can be extended to illustrate how two hues that appear different to a human observer might not be distinguishable by another animal viewer. The yellow and orange Sharpie treatments shown in figure 2a are likely to be distinguishable (different) from the untreated turkey feather by human viewers, but to a honeybee the feather treated with orange Sharpie is likely to be indistinguishable from the untreated turkey feather (at least in terms of colour, discounting brightness) (figure 2d, 'orange Sharpie'). In the scenario described above: to the biologist, the white control treatment would appear similar to the untreated feather, while the yellow- and orange-manipulated feathers would appear different, as intended. However, from the perspective of the honeybee, the orange-treated feather would appear 'whiter' (more achromatic) than the 'white' treatment being used as a control (figure 2d, 'DecoArt white acrylic paint'). As mentioned above, two treatments of the same type/material (e.g. Sharpie marker), but of different colours (e.g. yellow and orange), can yield varying levels of discriminability depending on the viewer (see JND values in figure 2d). It is important to note that this can also be true if two treatments are different types/materials but the same colour. For example, unlike the orange Sharpie, the orange Unipaint Oil paint-marker (figure 2d) is distinguishable (from white) to both the human and the honeybee, not just the human. A final point is that here we use 'white' and 'orange' to convey the familiar human-assigned colour terms; whether and how non-human animals might categorize and label colours is well beyond the scope of this paper.

3.4. When combined with visual models, do two complementary techniques—spectrophotometry and digital UV-visible photography—yield similar estimates of animal colour perception?

As methods for quantifying animal colour, spectrophotometry and digital UV-visible photography have distinct advantages and disadvantages [55]. Briefly, a benefit of spectrophotometry is that it captures detailed reflectance data across the wavelengths of interest (in this study, from 300 to 700 nm). A limitation is that only single, small points on an object can be captured at a time. Digital photography with calibrated cameras [56] solves this problem because images capture colour and spatial information simultaneously. Consequently, large patches of colour can easily be quantified and analysed. However, even though digital photography—combined with visual models—can be used to estimate animal cone stimulation values [41], it is not possible with a standard digital camera to reproduce the full reflectance spectrum of a given colour. Here, we found that both spectrophotometry and digital photography, when combined with visual models, yielded similar photon catch estimates of standard, uniform colours on a Macbeth ColorChecker chart (X-Rite, Grand Rapids, MI, USA) (figure 3). We demonstrated this by comparing the relative cone stimulation values for each channel. For example, we correlated the [uvs, sws, mws, lws] values for blue tit (figure 3d) estimated using 'pipeline 1' (spectrophotometry) with those estimated using 'pipeline 2' (camera) (see Methods). These tight correlations weakened considerably when we used the actual feather data to conduct a similar analysis. The spread in the data (figure 3) probably arises from the fact that we did not measure precisely the same patch of feather using the two different methods: with photography, we quantified colour on a larger surface area of the feather, for example.
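A cross-check of this kind is straightforward to script. The Python sketch below fabricates paired per-patch estimates (the arrays are random stand-ins, not our data) and computes a per-receptor coefficient of determination, analogous to the R² values reported in figure 3.

```python
import numpy as np

# Per-receptor R^2 between relative cone catches estimated by two pipelines:
# spectrophotometry (pipeline 1) and calibrated photography (pipeline 2).
# One row per colour patch, one column per receptor (uvs, sws, mws, lws).

def r_squared(x, y):
    """Coefficient of determination of a least-squares fit of y on x."""
    slope, intercept = np.polyfit(x, y, 1)
    resid = y - (slope * x + intercept)
    return 1 - np.sum(resid ** 2) / np.sum((y - y.mean()) ** 2)

rng = np.random.default_rng(0)
spectro = rng.uniform(0.05, 0.6, size=(24, 4))        # pipeline 1 estimates (fabricated)
camera = spectro + rng.normal(0, 0.02, size=(24, 4))  # pipeline 2: agreement plus noise

for ch, name in enumerate(["uvs", "sws", "mws", "lws"]):
    print(name, round(r_squared(spectro[:, ch], camera[:, ch]), 2))
```

With real data, a low R² in one channel (e.g. uvs) flags exactly the kind of camera-sensitivity problem discussed below.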
In addition, the sensitivity of our camera to wavelengths lower than 350 nm is very low, which may explain why, for the feather data, the camera-based estimates differ substantially from the spectrophotometry-based estimates for the UV-sensitive receptors of honeybee and blue tit (figure 3b,d). This effect might be less apparent with the Macbeth chart colours because most of the colour squares reflect little UV light. We urge researchers using a spectrophotometer or a camera to conduct their own systematic tests to ensure that colour data are reproducible. For sound advice on this topic, see [57]. In addition, we conducted our analyses in the laboratory, under very controlled light conditions. In theory, both spectrophotometry and UV-visible photography are robust to moderate changes in lighting (e.g. in outdoor conditions, as long as the ambient light spectrum is fairly flat) if appropriate calibration standards are used, but it would be worthwhile to compare the two approaches in the field.

4. Discussion

The use of artificially coloured stimuli in animal behaviour experiments has a long history, and their value in modern behavioural ecology is well appreciated [32,33]. However, assuming that animals view artificially coloured stimuli in the ways we expect can be dangerous because animal colour perception varies widely across taxa. Here, we have explored ways in which biologists can reduce 'the umwelt gamble' [33] when undertaking their own colour manipulation experiments. Our advice boils down to five steps, which we discuss below.

4.1. Step 1: clarify your question

What is the goal of artificial colour manipulation? Is it to match a natural colour? To create an enhanced colour within the range of natural variation? To remove a colour?
[Figure 3: eight scatter plots of camera-estimated versus spectrophotometer-estimated cone catches (one panel per viewer for the Macbeth ColorChecker data and one for the feather data), with per-receptor R² values, for (a) ferret, (b) honeybee, (c) human and (d) blue tit.]

Figure 3. Correlations between cone catches obtained using two techniques: spectrophotometry and multi-spectral digital photography. We imaged a Macbeth ColorChecker (X-Rite, Inc.) and artificially coloured feathers using both techniques. The correlations for the solid patches of the Macbeth chart were near perfect, indicating that both methods can produce comparable data. The correlations were weaker, however, for the colours of real feathers. See text for a discussion of factors related to image acquisition and processing that can yield these differences between two techniques. Silhouette icons are from phylopic.org and covered by a Creative Commons licence.
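The per-channel R² values shown in figure 3 come from correlating the two pipelines' cone catch estimates for the same patches; a sketch with made-up values (not the study's data):

```python
import numpy as np

def r_squared(x, y):
    """Coefficient of determination for a simple linear fit of y on x."""
    r = np.corrcoef(x, y)[0, 1]
    return r ** 2

# Hypothetical lws cone catches for the same colour patches, estimated by
# pipeline 1 (spectrophotometer) and pipeline 2 (calibrated camera)
spec_lws = np.array([0.10, 0.25, 0.40, 0.55, 0.70, 0.85])
cam_lws = np.array([0.12, 0.24, 0.43, 0.52, 0.71, 0.83])

r2 = r_squared(spec_lws, cam_lws)  # high R2 means the pipelines agree
```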
rsfs.royalsocietypublishing.org Interface Focus 9: 20180053 (downloaded from https://royalsocietypublishing.org/ on 05 April 2021)

To produce a supernormal colour beyond the range of natural variation? To answer these questions, quantifying the natural colour (usually of the animal or plant of interest)—using a spectrophotometer or a calibrated digital camera—is likely to be an essential first step. What colour is the patch (or patches) of interest? Is it unpigmented, pigmented or structurally coloured? Is its reflectance spectrum simple and smooth, or is it more complex, with multiple peaks? In our analyses, most of the natural feather colours were simple, characterized by reflectance spectra that were relatively flat or with a single peak or plateau. However, some natural colours have multiple peaks (see the brown pheasant feather in figure 1b, for example), and it may be more challenging to modify or reproduce these colours. Second, who is the intended signal receiver? Is it a bird? A bee? Which species? This will determine the wavelengths over which you should quantify the colours (natural and artificial) of interest.

4.2. Step 2: test a range of products and materials, and be mindful of their effects on different substrates

Next, consider the material you will apply to the colour patch. Different materials, even materials in the same general colour class (e.g. white paints and markers), can have different effects on the same substrate (figure 1d–f), so it is wise to test out a variety of materials and to measure the resulting spectra (see Step 3). Some materials might not perform as expected: sunscreens, for example, reduce the UV reflectance but can also alter reflectance in other parts of the spectrum (figure 1, row 5). In addition, do not assume that a given marker or paint will have the same effect on all substrates.
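The spectrum-shape triage described in Step 1 (flat, single peak, or multi-peaked) can be automated; a rough sketch, where the prominence threshold and the example spectra are illustrative assumptions rather than the study's data:

```python
import numpy as np
from scipy.signal import find_peaks

def describe_spectrum(wavelengths, reflectance, prominence=0.05):
    """Rough shape description of a reflectance spectrum (Step 1).
    `prominence` (as a fraction of max reflectance) is an arbitrary cutoff."""
    if np.ptp(reflectance) < 0.1 * reflectance.max():
        return "flat"
    peaks, _ = find_peaks(reflectance, prominence=prominence * reflectance.max())
    if len(peaks) <= 1:
        return "single peak or plateau"
    return f"complex ({len(peaks)} peaks)"

wl = np.linspace(300, 700, 401)
white = np.full_like(wl, 0.9)                     # flat, like an unpigmented feather
blue = np.exp(-0.5 * ((wl - 450) / 40) ** 2)      # single peak
brown = (np.exp(-0.5 * ((wl - 400) / 25) ** 2)
         + np.exp(-0.5 * ((wl - 620) / 25) ** 2)) # two peaks, harder to reproduce
```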
We found that iridescent feathers, compared with white unpigmented feathers, are affected by colour manipulations in different ways. Perhaps this is why methods for carefully altering iridescent plumage colours, compared with white or pigment-based colours, remain elusive [39]. However, researchers successfully modified the iridescent blue colour of a butterfly wing using rutin (a plant pigment) mixed with ethanol [21]—so improved techniques might be on the horizon.

4.3. Step 3: measure the artificial colour (and usually the relevant natural, untreated colour) with a spectrophotometer or a calibrated digital camera

Many researchers have used spectrophotometry (table 1) to confirm that an artificially coloured stimulus has the desired spectral properties: that it matches the spectrum of a natural colour, blocks UV reflectance or blackens the colour altogether, for example. Once this is established, is it really necessary to perform visual modelling (step 4)? It depends, but usually—yes. If the goal is to match the spectrum of a natural colour, and you find an artificial colour that achieves this perfectly, then visual modelling will tell you what you expect: that the perceived colour difference between the natural and artificial colours will be negligible. But in reality, it is difficult to produce a perfect match, and visual modelling is almost always advisable to determine how different the artificial stimulus might appear relative to the natural or desired colour. This becomes even more vital when multiple signal receivers are involved because the same artificial colour (e.g. white paint) will look very different to a human than it will to a hummingbird. In lieu of spectrophotometry, images of artificially coloured stimuli can be captured with
a calibrated digital camera and then combined with visual models to estimate animal colour perception (step 4). Though this approach is currently less common (table 1), the growing affordability, portability and accessibility of UV-visible photography [41,56,58] suggests that this may soon change.

4.4. Step 4: estimate the appearance of the artificial and natural, untreated colours using visual models

Visual models [59–61] allow us to calculate relative cone stimulation and estimate the perceived difference between colours, for different animal colour vision systems. These models are powerful but have important limitations (see a recent review [62] and the accompanying commentaries), particularly when it comes to the perception of two very different (suprathreshold) colours [60]. However, using visual models to estimate the perception of artificially coloured stimuli gives us our best chance at reducing the ‘umwelt gamble’, because in doing so we try to account for the perceptual experience of the intended signal receiver. A critical point to emphasize is that there can be a great deal of variation in the visual systems of species belonging to the same taxonomic group. Consider fish, for example: some species are monochromatic or dichromatic, while others are trichromatic or tetrachromatic, and even fish living in the same microhabitat (for example, reef fish or cichlids) can exhibit highly variable cone spectral sensitivities [36,63,64]. In butterflies, some species possess many photoreceptors but express only a subset of these, depending on the ecological task at hand [36,64]. Thus, it is important to select a visual model that is appropriate for the species in question, not just for the broad taxonomic group.

4.5.
Step 5: choose a suitable control

In a colour manipulation experiment, an ideal control material will have the same properties as the artificial colour substance (the same smell, thickness, texture)—but not the same colour. The control can then be applied to one of the treatment groups: if the response to the control is similar to the response to the natural, unmodified stimulus, then any response in the experimental treatment (to an artificially coloured stimulus) is likely to be due to colour, rather than smell or texture. Finding a perfect control, however, is likely to be challenging: a clear glue or paint thinner is unlikely to have similar properties to an acrylic paint. In these cases, getting creative is the best bet. Sheldon and colleagues [1] mixed sunblock chemicals with fatty preen oil to test the effect of UV colour on attractiveness; they used the fatty preen oil alone as the control. Choosing a good control is key to Lahti’s [33] ‘artifact detection test’, which is some experimental proof that the artificial stimulus has been perceived in the way the researcher intends. Additional ‘artifact detection tests’ can be used to demonstrate that novel artificial stimuli are perceived as equally unfamiliar [33] (as in studies with PVC pipe, coloured plastic discs and model eggs in table 1) or that responses to artificial stimuli can predict responses to natural stimuli [32].

4.6. Putting it all together

For an excellent example of how these five steps can be put into action, see a recent study by Finkbeiner et al. [31], who investigated how yellow hindwing bars impact the mating success and survival of Heliconius erato butterflies. The team carefully produced four types of paper models—using a combination of UV-yellow paint, UV-blocking filters, natural pigment and yellow manila paper, plus clear neutral density filters as controls. The model colours were intended to match those of natural H.
erato or a closely related mimetic species in the genus Eueides. The team tested these assumptions using spectrophotometry and visual modelling appropriate to butterfly and avian vision. They then used the models in mate choice experiments with conspecifics and predation experiments with birds, concluding that the UV and yellow components of hindwings are important for mate choice in H. erato—and do not increase predation risk, relative to the ancestral yellow pigments used by Eueides species. In this paper, we focused on artificial colours produced by paints, markers, glues and sunscreens. However, many studies use inkjet printers, three-dimensional printers and computer monitors to produce and display artificially coloured stimuli. The general principles outlined above apply broadly to such studies, but reducing the ‘umwelt gamble’ when using these technologies—especially in the context of animations and virtual reality—may require additional considerations [65–69]. We also focused on studies aimed at testing the effect of colour on behaviour, rather than those in which artificial colours are used for some other purpose, such as marking individuals for long-term tracking and identification. This too, of course, can inadvertently affect behaviour, a fact famously demonstrated by Burley et al. [70] when they showed that male zebra finches Taeniopygia guttata prefer females wearing pink and black plastic leg bands but not blue or green. Therefore, researchers using artificial colour for tracking and identification can also profit from following the steps suggested above, which will reveal what marked individuals might look like to conspecifics and to predators.
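The Step 4 estimates of perceived colour difference are commonly computed with the receptor-noise-limited model of Vorobyev and Osorio. A minimal sketch for a dichromatic viewer (such as the ferret), where the quantum catches and Weber fractions are placeholders rather than measured values:

```python
import math

def delta_s_dichromat(qa, qb, weber):
    """Chromatic distance (in JND units) between stimuli a and b for a
    dichromat, under the receptor-noise-limited model (Vorobyev & Osorio 1998).
    qa, qb: (q1, q2) quantum catches; weber: (e1, e2) receptor noise."""
    df1, df2 = (math.log(a / b) for a, b in zip(qa, qb))  # receptor contrasts
    e1, e2 = weber
    return abs(df1 - df2) / math.sqrt(e1 ** 2 + e2 ** 2)

# Placeholder catches for a natural patch (qa) versus a painted one (qb);
# Weber fractions of 0.05 are illustrative, not measured ferret values.
natural = (0.40, 0.30)
painted = (0.42, 0.29)
ds = delta_s_dichromat(natural, painted, (0.05, 0.05))
# ds near 1 JND is around discrimination threshold; well above 1 predicts
# the viewer can see the painted patch as a different colour
```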
In a recent paper, Bergeron & Fuller [38] challenge the notion that human vision is always unsuitable for evaluating animal coloration, asking ‘how bad is it?’ We do not doubt that our own colour vision experience as humans can sometimes lead to helpful insights about animal colour, but it can also lead us astray. Here we have shown that relying on human vision alone to judge the effectiveness of an artificial colour treatment is sometimes a bad bet. Why not reduce the gamble? More than ever before, we have access to the devices, tools and information necessary [71–73] to quantify colours in a way that is relevant to animal vision.

Data accessibility. Additional data are provided in the electronic supplementary material.

Authors’ contributions. M.C.S., A.E.M., H.N.E. and D.A. conceived and planned the study, collected and analysed the data and produced figures and tables. M.C.S. wrote the manuscript, to which all authors contributed.

Competing interests. We declare we have no competing interests.

Funding. Funding to M.C.S. was provided by Princeton University and a Sloan Research Fellowship.

Acknowledgements. We thank the Editor and three anonymous reviewers for constructive suggestions. We are grateful to Ben Hogan for assistance in creating figures. We also thank members of the Stoddard Laboratory for discussion and feedback.

References

1. Sheldon BC, Andersson S, Griffith SC, Örnborg J, Sendecka J. 1999 Ultraviolet colour variation influences blue tit sex ratios. Nature 402, 874–877. (doi:10.1038/47239)
2. Tinbergen N. 1962 Egg shell removal by the black-headed gull (Larus r. ridibundus L.) II. The effects of experience on the response to colour. Behaviour 19, 74–116. (doi:10.1080/00063656209476020)
3. Tinbergen N. 1953 The herring gull’s world: a study of the social behaviour of birds.
London, UK: Collins.
4. Hill GE. 1991 Plumage coloration is a sexually selected indicator of male quality. Nature 350, 337–339. (doi:10.1038/350337a0)
5. Pryke SR, Andersson S, Lawes MJ, Piper SE. 2002 Carotenoid status signaling in captive and wild red-collared widowbirds: independent effects of badge size and color. Behav. Ecol. 13, 622–631. (doi:10.1093/beheco/13.5.622)
6. Fugle GN, Rothstein SI, Osenberg CW, McGinley MA. 1984 Signals of status in wintering white-crowned sparrows, Zonotrichia leucophrys gambelii. Anim. Behav. 32, 86–93. (doi:10.1016/S0003-3472(84)80327-9)
7. Jablonski PG. 1999 A rare predator exploits prey escape behavior: the role of tail-fanning and plumage contrast in foraging of the painted redstart (Myioborus pictus). Behav. Ecol. 10, 7–14. (doi:10.1093/beheco/10.1.7)
8. Wiebe KL, Slagsvold T. 2009 Mouth coloration in nestling birds: increasing detection or signalling quality? Anim. Behav. 78, 1413–1420. (doi:10.1016/j.anbehav.2009.09.013)
9. Cohen BS, MacKenzie ML, Maerz JC, Farrell CB, Castleberry SB. 2016 Color perception influences microhabitat selection of refugia and affects monitoring success for a cryptic anuran species. Physiol. Behav. 164, 54–57. (doi:10.1016/j.physbeh.2016.05.042)
10. Rodd FH, Hughes KA, Grether GF, Baril CT. 2002 A possible non-sexual origin of mate preference: are male guppies mimicking fruit? Proc. R. Soc. Lond. B 269, 475–481. (doi:10.1098/rspb.2001.1891)
11. Wourms MK, Wasserman FE. 1985 Butterfly wing markings are more advantageous during handling than during the initial strike of an avian predator. Evolution 39, 845–851. (doi:10.1111/j.1558-5646.1985.tb00426.x)
12. Shaak SG, Counterman BA. 2017 High warning colour polymorphism in Heliconius hybrid zone roosts. Ecol. Entomol. 42, 315–324. (doi:10.1111/een.12386)
13. Olofsson M, Løvlie H, Tibblin J, Jakobsson S, Wiklund C. 2012 Eyespot display in the peacock butterfly triggers antipredator behaviors in naïve adult fowl. Behav.
Ecol. 24, 305–310. (doi:10.1093/beheco/ars167)
14. Senar JC, Camerino M. 1998 Status signaling and the ability to recognize dominants: an experiment with siskins (Carduelis spinus). Proc. R. Soc. Lond. B 265, 1515–1520. (doi:10.1098/rspb.1998.0466)
15. Benson WW. 1972 Natural selection for Mullerian mimicry in Heliconius erato in Costa Rica. Science 176, 936–939. (doi:10.1126/science.176.4037.936)
16. Vallin A, Jakobsson S, Lind J, Wiklund C. 2005 Prey survival by predator intimidation: an experimental study of peacock butterfly defense against blue tits. Proc. R. Soc. B 272, 1203–1207. (doi:10.1098/rspb.2004.3034)
17. Ballentine B, Hill GE. 2003 Female mate choice in relation to structural plumage coloration in blue grosbeaks. Condor 105, 593–598. (doi:10.1650/7234)
18. Jeffords MR, Sternburg JG, Waldbauer GP. 1979 Batesian mimicry: field demonstration of the survival value of pipevine swallowtail and monarch patterns. Evolution 33, 275–286. (doi:10.1111/j.1558-5646.1979.tb04681.x)
19. Brandley N, Johnson M, Johnsen S. 2016 Aposematic signals in North American black widows are more conspicuous to predators than to prey. Behav. Ecol. 27, 1104–1112. (doi:10.1093/beheco/arw014)
20. Tibbetts EA, Dale J. 2004 A socially enforced signal of quality in a paper wasp. Nature 432, 218–222. (doi:10.1038/nature02949)
21. Kemp DJ. 2007 Female butterflies prefer males bearing bright iridescent ornamentation. Proc. R. Soc. B 274, 1043–1047. (doi:10.1098/rspb.2006.0043)
22. Moskát C, Székely T, Cuthill IC, Kisbenedek T. 2008 Hosts’ responses to parasitic eggs: which cues elicit hosts’ egg discrimination? Ethology 114, 186–194. (doi:10.1111/j.1439-0310.2007.01456.x)
23. Croston R, Hauber ME. 2014 Spectral tuning and perceptual differences do not explain the rejection of brood parasitic eggs by American robins (Turdus migratorius). Behav. Ecol. Sociobiol. 68, 351–362. (doi:10.1007/s00265-013-1649-8)
24. Siitari H, Honkavaara J, Huhta E, Viitala J.
2002 Ultraviolet reflection and female mate choice in the pied flycatcher, Ficedula hypoleuca. Anim. Behav. 63, 97–102. (doi:10.1006/anbe.2001.1870)
25. Waser NM, Price MV. 1985 The effect of nectar guides on pollinator preference: experimental studies with a montane herb. Oecologia 67, 121–126. (doi:10.1007/BF00378462)
26. Delhey K, Peters A, Johnsen A, Kempenaers B. 2007 Brood sex ratio and male UV ornamentation in blue tits (Cyanistes caeruleus): correlational evidence and an experimental test. Behav. Ecol. Sociobiol. 61, 853–862. (doi:10.1007/s00265-006-0314-x)
27. Davis AK, Cope N, Smith A, Solensky MJ. 2007 Wing color predicts future mating success in male monarch butterflies. Ann. Entomol. Soc. Am. 100, 339–344. (doi:10.1603/0013-8746(2007)100[339:WCPFMS]2.0.CO;2)
28. Bán M, Moskát C, Barta Z, Hauber ME. 2013 Simultaneous viewing of own and parasitic eggs is not required for egg rejection by a cuckoo host. Behav. Ecol. 24, 1014–1021. (doi:10.1093/beheco/art004)
29. Chai P. 1988 Wing coloration of free-flying neotropical butterflies as a signal learned by a specialized avian predator. Biotropica 20, 20–30. (doi:10.2307/2388422)
30. Hatle JD, Salazar BA. 2001 Aposematic coloration of gregarious insects can delay predation by an ambush predator. Environ. Entomol. 30, 51–54. (doi:10.1603/0046-225X-30.1.51)
31. Finkbeiner SD, Fishman DA, Osorio D, Briscoe AD. 2017 Ultraviolet and yellow reflectance but not fluorescence is important for visual discrimination of conspecifics by Heliconius erato. J. Exp. Biol. 220, 1267–1276. (doi:10.1242/jeb.153593)
32. Hauber ME, Tong L, Bán M, Croston R, Grim T, Waterhouse GIN, Shawkey MD, Barron AB, Moskát C. 2015 The value of artificial stimuli in behavioral research: making the case for egg rejection studies in avian brood parasitism. Ethology 121, 521–528. (doi:10.1111/eth.12359)
33. Lahti DC. 2015 The limits of artificial stimuli in behavioral research: the umwelt gamble. Ethology 121, 529–537.
(doi:10.1111/eth.12361)
34. Spottiswoode CN, Stevens M. 2010 Visual modeling shows that avian host parents use multiple visual cues in rejecting parasitic eggs. Proc. Natl Acad. Sci. USA 107, 8672–8676. (doi:10.1073/pnas.0910486107)
35. Hart NS, Vorobyev M. 2005 Modelling oil droplet absorption spectra and spectral sensitivities of bird cone photoreceptors. J. Comp. Physiol. A 191, 381–392. (doi:10.1007/s00359-004-0595-3)
36. Jacobs GH. 2018 Photopigments and the dimensionality of animal color vision. Neurosci. Biobehav. Rev. 86, 108–130. (doi:10.1016/j.neubiorev.2017.12.006)
37. Osorio D, Vorobyev M. 2008 A review of the evolution of animal colour vision and visual communication signals. Vision Res. 48, 2042–2051. (doi:10.1016/j.visres.2008.06.018)
38. Bergeron ZT, Fuller RC. 2018 Using human vision to detect variation in avian coloration: how bad is it? Am. Nat. 191, 269–276. (doi:10.1086/695282)
39. Dakin R, Montgomerie R. 2013 Eye for an eyespot: how iridescent plumage ocelli influence peacock mating success. Behav. Ecol. 24, 1048–1057.
40. Osorio D, Vorobyev M. 2005 Photoreceptor spectral sensitivities in terrestrial animals: adaptations for luminance and colour vision. Proc. R. Soc. B 272, 1745–1752. (doi:10.1098/rspb.2005.3156)
41. Troscianko J, Stevens M. 2015 Image calibration and analysis toolbox—a free software suite for objectively measuring reflectance, colour and pattern. Methods Ecol. Evol. 6, 1320–1331. (doi:10.1111/2041-210X.12439)
42. Ödeen A, Håstad O. 2010 Pollinating birds differ in spectral sensitivity. J. Comp. Physiol. A 196, 91–96. (doi:10.1007/s00359-009-0474-z)
43. Calderone JB, Jacobs GH. 2003 Spectral properties and retinal distribution of ferret cones. Vis. Neurosci. 20, 11–17.
(doi:10.1017/S0952523803201024)
44. Douglas RH, Jeffery G. 2014 The spectral transmission of ocular media suggests ultraviolet sensitivity is widespread among mammals. Proc. R. Soc. B 281, 20132995. (doi:10.1098/rspb.2013.2995)
45. Jacobs GH. 1993 The distribution and nature of color-vision among the mammals. Biol. Rev. Camb. Phil. Soc. 68, 413–471. (doi:10.1111/j.1469-185X.1993.tb00738.x)
46. Peitsch D, Fietz A, Hertel H, de Souza J, Ventura DF, Menzel R. 1992 The spectral input systems of hymenopteran insects and their receptor-based colour vision. J. Comp. Physiol. A 170, 23–40. (doi:10.1007/BF00190398)
47. Hart NS, Partridge JC, Cuthill IC, Bennett A. 2000 Visual pigments, oil droplets, ocular media and cone photoreceptor distribution in two species of passerine bird: the blue tit (Parus caeruleus L.) and the blackbird (Turdus merula L.). J. Comp. Physiol. A Neuroethol. Sens. Neural. Behav. Physiol. 186, 375–387. (doi:10.1007/s003590050437)
48. R Development Core Team. 2015 R: a language and environment for statistical computing. Vienna, Austria: R Foundation for Statistical Computing.
49. Maia R, Eliason CM, Bitton PP, Doucet SM, Shawkey MD. 2013 pavo: an R package for the analysis, visualization and organization of spectral data. Methods Ecol. Evol. 4, 906–913. (doi:10.1111/2041-210X.12069)
50.
Troscianko J, Wilson-Aggarwal J, Stevens M, Spottiswoode CN. 2016 Camouflage predicts survival in ground-nesting birds. Sci. Rep. 6, 19966. (doi:10.1038/srep19966)
51. Barry KL, White TE, Rathnayake DN, Fabricant SA, Herberstein ME. 2014 Sexual signals for the colour-blind: cryptic female mantids signal quality through brightness. Funct. Ecol. 29, 531–539. (doi:10.1111/1365-2435.12363)
52. Pretterer G, Bubna-Littitz H, Windischbauer G, Gabler C, Griebel U. 2004 Brightness discrimination in the dog. J. Vis. 4, 10–19. (doi:10.1167/4.3.10)
53. Mowat FM, Petersen-Jones SM, Williamson H, Williams DL, Luthert PJ, Ali RR, Bainbridge JW. 2008 Topographical characterization of cone photoreceptors and the area centralis of the canine retina. Mol. Vis. 14, 2518–2527.
54. Zi J, Yu X, Li Y, Hu X, Xu C, Wang X, Liu X, Fu R. 2003 Coloration strategies in peacock feathers. Proc. Natl Acad. Sci. USA 100, 12 576–12 578. (doi:10.1073/pnas.2133313100)
55. Burns KJ, McGraw KJ, Shultz AJ, Stoddard MC, Thomas DB. 2017 Advanced methods for studying pigments and coloration using avian specimens. In The extended specimen: emerging frontiers in collections-based ornithological research. Studies in avian biology (no. 50) (ed. MS Webster), pp. 23–55. Boca Raton, FL: CRC Press.
56. Stevens M, Párraga CA, Cuthill IC, Partridge JC, Troscianko TS. 2007 Using digital photography to study animal coloration. Biol. J. Linn. Soc. 90, 211–237. (doi:10.1111/j.1095-8312.2007.00725.x)
57. White TE, Dalrymple RL, Noble DWA, O’Hanlon JC, Zurek DB, Umbers KDL. 2015 Reproducible research in the study of biological coloration. Anim. Behav. 106, 51–57. (doi:10.1016/j.anbehav.2015.05.007)
58. Akkaynak D, Treibitz T, Xiao B, Gürkan UA, Allen JJ, Demirci U, Hanlon RT. 2014 Use of commercial off-the-shelf digital cameras for scientific data acquisition and scene-specific color calibration. J. Opt. Soc. Am. A 31, 312–321. (doi:10.1364/JOSAA.31.000312)
59. Kelber A, Vorobyev M, Osorio D.
2003 Animal colour vision—behavioural tests and physiological concepts. Biol. Rev. 78, 81–118. (doi:10.1017/S1464793102005985)
60. Kemp DJ, Herberstein ME, Fleishman LJ, Endler JA, Bennett ATD, Dyer AG, Hart NS, Marshall J, Whiting MJ. 2015 An integrative framework for the appraisal of coloration in nature. Am. Nat. 185, 705–724. (doi:10.1086/681021)
61. Renoult JP, Courtiol A, Schaefer HM. 2013 A novel framework to study colour signaling to multiple species. Funct. Ecol. 27, 718–729. (doi:10.1111/1365-2435.12086)
62. Olsson P, Lind O, Kelber A. 2017 Chromatic and achromatic vision: parameter choice and limitations for reliable model predictions. Behav. Ecol. 29, 273–282. (doi:10.1093/beheco/arx133)
63. Marshall J, Carleton KL, Cronin T. 2015 Colour vision in marine organisms. Curr. Opin. Neurobiol. 34, 86–94. (doi:10.1016/j.conb.2015.02.002)
64. Cronin TW, Johnsen S, Marshall NJ, Warrant EJ. 2014 Visual ecology. Princeton, NJ: Princeton University Press.
65. Cuthill IC, Hart NS, Partridge JC, Bennett A, Hunt S. 2000 Avian colour vision and avian video playback experiments. Acta Ethol. 3, 29–37. (doi:10.1007/s102110000027)
66. Woo KL, Rieucau G. 2011 From dummies to animations: a review of computer-animated stimuli used in animal behavior studies. Behav. Ecol. Sociobiol. 65, 1671–1685. (doi:10.1007/s00265-011-1226-y)
67. Powell DL, Rosenthal GG. 2017 What artifice can and cannot tell us about animal behavior. Curr. Zool. 63, 21–26. (doi:10.1093/cz/zow091)
68. Baldauf SA, Kullmann H, Bakker TCM. 2008 Technical restrictions of computer-manipulated visual stimuli and display units for studying animal behaviour. Ethology 114, 737–751. (doi:10.1111/j.1439-0310.2008.01520.x)
69. Chouinard-Thuly L et al. 2017 Technical and conceptual considerations for using animated stimuli in studies of animal behavior. Curr. Zool. 63, 5–19. (doi:10.1093/cz/zow104)
70. Burley N, Krantzberg G, Radman P.
1982 Influence of colour-banding on the conspecific preferences of zebra finches. Anim. Behav. 30, 444–455. (doi:10.1016/S0003-3472(82)80055-9)
71. Cuthill IC et al. 2017 The biology of color. Science 357, 6350. (doi:10.1126/science.aan0221)
72. Caro T, Stoddard MC, Stuart-Fox D. 2017 Animal coloration: production, perception, function and application. Phil. Trans. R. Soc. B 372, 20170047. (doi:10.1098/rstb.2017.0047)
73. Caro T, Stoddard MC, Stuart-Fox D. 2017 Animal coloration research: why it matters. Phil. Trans. R. Soc. B 372, 20160333. (doi:10.1098/rstb.2016.0333)
I see your false colours: how artificial stimuli appear to different animal viewers
----

doi:10.1016/j.jembe.2003.08.017 www.elsevier.com/locate/jembe

Journal of Experimental Marine Biology and Ecology 299 (2004) 185–199

Abundance estimation of rocky shore invertebrates at small spatial scale by high-resolution digital photography and digital image analysis

Daniel Pech a,*, Alfonso R. Condal b, Edwin Bourget c, Pedro-Luis Ardisson d

a Québec-Océan, Département de Biologie, Université Laval, Quebec, QC, Canada G1K 7P4
b Département des sciences géomatiques, Université Laval, Quebec, QC, Canada G1K 7P4
c Vice-rectorat à la recherche, Université de Sherbrooke, Sherbrooke, QC, Canada J1K 2R1
d Departamento de Recursos del Mar, CINVESTAV-IPN, Unidad Mérida, 97310 Mérida, Yucatán, México

Received 8 April 2003; received in revised form 23 June 2003; accepted 29 August 2003

Abstract

We have tested both the usefulness of high-resolution digital photography for data acquisition and digital image analysis, by non-supervised classification and high pass filter, for recognition and abundance estimation of benthic intertidal organisms. These digital tools were compared with the conventional visual scan and photo quadrat methods. The comparison was done using 40 quadrats (10 × 5 cm) randomly selected along a 5-m transect on the rocky shore of Pemaquid Point, Maine, USA. ANOVA for repeated measures was used to test differences among methods.
Monte Carlo simulation analysis was used to explore differences among methods over larger data sets (n = 100, 500, 1000 quadrats). Differences among methods were observed when 40 quadrats were used. A Tukey multiple comparison test showed that the abundance estimates from visual scan, photo quadrat and digital image analysis by high pass filter do not differ significantly among themselves, but do differ from the non-supervised classification results. Due to its accurate estimation, the high pass filter (Prewitt) method was chosen as the most reliable digital method for estimating species abundance. Monte Carlo simulation of the visual scan, photo quadrat and high pass filter results showed significant differences when the number of quadrats was larger. These results show that the combined use of digital photography and digital image analysis techniques for the acquisition and analysis of recorded data is a powerful approach for the study of intertidal benthic organisms. Results produced using these techniques were similar to those produced by conventional methods but were obtained in a much-reduced time.
© 2003 Elsevier B.V. All rights reserved.

Keywords: Benthos; Digital photography; Digital image analysis; Small spatial scale

* Corresponding author. Tel.: +1-418-656-2131x4511; fax: +1-418-653-2339. E-mail address: Daniel.Pech-Pool@giroq.ulaval.ca (D. Pech).

1. Introduction

In nature, benthic organisms can be considered to be distributed in hierarchically organized systems (Farmer and Adams, 1991). Traditionally, this structure is viewed as fully nested, with higher levels representing assemblies of lower-level units. Given this complexity, some researchers (e.g.
Horne and Schneider, 1995; Somerfield and Gage, 2000) have focused their efforts on developing new techniques and models that simplify the approaches used to understand the properties of the different compartments of the hierarchical structure. There are still, however, numerous limitations to the understanding of scale- and time-dependent patterns in ecosystems. One of these limitations is conceptual: patches are often defined in some convenient but not necessarily biologically relevant manner in relation to the organisms being studied (Langton et al., 1995). Another limitation is related to the sampling methods used for acquiring and estimating biological information (Andrew and Mapstone, 1987). Estimates of the accuracy and efficiency of the most common sampling methods used in benthic studies (e.g. visual scan, photo quadrats, video transects, point-contact, random dots) have been presented by different authors (e.g. Foster et al., 1991; Whorff and Griffing, 1992; Meese and Tomich, 1992; Miller and Ambrose, 2000). All of these authors have reached the conclusion that these methods contain potential errors for estimating community properties of species (e.g. abundance, species diversity): for example, the introduction of observer variability in the case of the visual method (Meese and Tomich, 1992), or the underestimation of variables such as abundance or species richness due to the misidentification of species in the case of the photo quadrat or video transect methods. In the last decade, new forms of multispectral sensors have been developed that have permitted the acquisition of digital imagery across the whole electromagnetic spectrum (King, 1995). This kind of technology has opened up new possibilities in the use of video and photographic cameras. Imagery acquired with these devices can be easily stored on standard computers at 3–4 s per frame with < 20 cm of ground resolution.
It is therefore now possible to acquire high-quality digital field data easily and at relatively low cost, allowing the build-up of extensive image databases to support qualitative and quantitative studies (Livingstone et al., 1999). Digital imagery and digital image analysis are now commonly used to obtain and interpret thematic maps in the physical sciences and geophysics (Christakos, 2000). One typical and well-documented example is the study of the spatial distribution of laminated sediments (Dartnell and Gardner, 1993; Glasbey and Horgan, 1995; Cooper, 1997) and soils (Petterson et al., 1993). The implementation of these tools to study the marine benthos has not been fully explored. The use of digital techniques to sample rocky shore benthos could help improve the results obtained by standard sampling methods such as visual scan or photo quadrats. Since images are usually obtained in multispectral mode (red, green and blue), different image-analysis procedures make it possible to eliminate the radiometric distortion caused by panoramic effects such as shadows and changes in rock surface coloration, yielding both more accurate species identification and better estimates of the ecological properties of the community (e.g. abundance). Until now, digital image analysis has been used to map benthic assemblages and habitats at spatial scales from m to km, applying principal component analysis (Pasqualini et al., 1997) and/or non-supervised and supervised classification methods (Sheppard et al., 1995; Cuevas-Jiménez et al., 2002). Digital image analysis was also used to study Antarctic benthic communities using landscape indices and principal component analysis (Teixidó et al., 2002). However, all images used in the studies cited above were acquired by analogue photography and digitized afterwards.
The use of high-resolution digital photography in the study of the marine benthos has not yet been reported. The aim of this paper is to examine the advantages offered by this technology for extracting biological information (abundance, species richness) at small spatial scales (cm to m). The results show that, due to its low cost, minimal operational time, accuracy and versatility, the proposed application could constitute an interesting alternative method for quantitative studies of the marine benthos.

2. Material and methods

Sampling was carried out on the rocky shore of Pemaquid Point, Maine, USA, in July 2002. Benthic intertidal samples were obtained using 40 quadrats of 10 × 5 cm, randomly selected along a 5-m transect. Abundance was recorded as species cover (%). The cover of benthic invertebrates was estimated first by a visual scan method using a grid (Dethier et al., 1993), secondly by photographing the quadrat area and determining cover in the laboratory using a grid (photo quadrat), and thirdly by photographing the quadrat area and determining cover by means of digital image analysis.

2.1. Visual scan (VS)

Visual scan sampling was done using a grid of 32 small squares (1.25 × 1.25 cm each) superimposed on the quadrat frame (5 × 10 cm). Each square filled by a species counted as 3.25% cover, a half-filled square counted as 1.6% cover, and an organism filling less than 1/4 of a square counted as 0.5%. This method eliminates the need for decision rules such as "any square more than half filled is counted as filled" (Dethier et al., 1993). The small size of the quadrats (5 × 10 cm) allowed the percent cover under each small square to be determined.

2.2. Photo quadrat (PQ)

Photographs were also taken of the same area after removal of the grid used for the visual scan. One photo per 10 × 15 cm quadrat was taken using a digital camera (Nikon Coolpix 990 at 3.1 megapixels). The camera was held directly over the target area
perpendicularly attached to the quadrat frame using a 30-cm-long rod, thus minimizing possible parallax errors. Photo analysis was performed following standard image-analysis procedures. Photographs were projected on the computer screen; then a computer grid, similar to the one used in the visual scan method (32 small squares of 1.25 × 1.25 cm each), was superimposed onto the photograph. The computer grid was created using the Sigmascan Pro 5 software (SPSS Science, 1999). Cover was estimated using the same procedural rules as in VS.

2.3. Digital image analysis (DIA)

A second series of photographs was obtained for the same quadrats, using the same digital camera. This time, the photographs were analysed following digital image analysis procedures. Photographs (1024 × 768 pixels) were transferred directly from the camera to the computer in JPG format. They were then converted into PIX format to be treated with the PCI software (PCI Geomatics, 2000). The PIX format is fully supported by GDB (generic database) libraries that allow different file types to be used interchangeably without loss of the radiometric resolution of the original image. Photographs in PIX format were then separated into their RGB (red, green, blue) components (images). The images were first displayed in RGB colour mode and radiometrically enhanced (linear method) to better display the information stored in their RGB components. Secondly, supervised classification, non-supervised classification, and high pass filter procedures were separately applied to all RGB images to determine the most accurate DIA method for cover estimation. Image classification methods automatically categorize all pixels in an image into information classes and replace the visual image with quantified information (thematic maps).
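The channel separation and linear radiometric enhancement described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the PCI software's actual procedure: the synthetic image, the percentile clipping, and the 0–255 output range are assumptions made for the example.

```python
import numpy as np

def linear_enhance(band, low_pct=2, high_pct=98):
    """Linearly stretch a band's gray values to the full 0-255 range.
    The percentile clipping is an assumption; PCI's exact method may differ."""
    lo, hi = np.percentile(band, [low_pct, high_pct])
    stretched = (band.astype(float) - lo) / max(hi - lo, 1e-9) * 255.0
    return np.clip(stretched, 0, 255).astype(np.uint8)

# A synthetic 768 x 1024 RGB array stands in for a real quadrat photograph.
rng = np.random.default_rng(0)
rgb = rng.integers(40, 200, size=(768, 1024, 3), dtype=np.uint8)

# Separate the photograph into its R, G, B component images, as done
# before classification, then enhance one band for display.
r, g, b = (rgb[..., i] for i in range(3))
r_enh = linear_enhance(r)
```

The stretch simply maps the central bulk of the gray-value histogram onto the displayable range, which is what a "linear method" radiometric enhancement amounts to.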
Supervised classification is based on the multispectral information contained in the image, labelling the pixels as members of a particular ground cover type or class (Richards, 1986). This procedure is made possible by some prior or acquired knowledge of specific sites in the scene that represent homogeneous examples of the known classes present in the scene. These areas are referred to as training sites. Multivariate statistical parameters are then calculated for each training site with the goal of estimating the spectral characteristics, or signatures, of the classes present in the scene. Once these signatures have been determined, every pixel in the scene is evaluated and assigned to the class for which it has the maximum likelihood of membership. This last step is done using appropriate classification algorithms. Non-supervised classification (NSCL), on the other hand, is a clustering procedure that partitions the image into a number of spectral classes totally unrelated to ground cover types (Glasbey and Horgan, 1995). It therefore does not start with a pre-determined set of classes as in supervised classification, but it requires an estimate of the number of groups present in the image. A high pass filter (the Prewitt filter, in this paper) belongs to a group of techniques applied directly to the image to increase its geometric detail (Richards, 1986). High pass filters emphasize abrupt changes in grey values between pixels, thus permitting the recognition of regions in the image where high-spatial-frequency information (edges) is present. The filter procedure segments the image into groups but does not automatically calculate the number of pixels inside each group. Filtered images were therefore reconverted to JPG format for cover estimation using the Sigmascan Pro 5 software.
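The Prewitt edge emphasis described above can be sketched as a pair of 3 × 3 gradient kernels whose combined magnitude highlights abrupt gray-value changes. This is a minimal NumPy sketch under stated assumptions (a synthetic test image, valid-mode filtering); it is not the PCI software's implementation.

```python
import numpy as np

# 3x3 Prewitt kernels for horizontal and vertical gray-value gradients.
PREWITT_X = np.array([[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]], dtype=float)
PREWITT_Y = PREWITT_X.T

def convolve3x3(img, kernel):
    """Valid-mode 3x3 filtering written out explicitly (no SciPy needed)."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for dy in range(3):
        for dx in range(3):
            out += kernel[dy, dx] * img[dy:dy + h - 2, dx:dx + w - 2]
    return out

def prewitt_edges(band):
    """Gradient magnitude: emphasizes abrupt changes in gray values (edges)."""
    gx = convolve3x3(band.astype(float), PREWITT_X)
    gy = convolve3x3(band.astype(float), PREWITT_Y)
    return np.hypot(gx, gy)

# A flat patch with one bright square: the filter responds only at the
# square's border, which is the edge information the study relies on.
band = np.zeros((20, 20))
band[5:15, 5:15] = 200.0
edges = prewitt_edges(band)
```

On this synthetic patch the response is zero over flat regions (inside and outside the square) and large along the square's boundary, which is exactly the "regions where high-spatial-frequency information is present" behaviour the text describes.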
Cover estimation was interactively determined by counting the number of pixels within the surfaces representing the organism under study. Percent cover for each species was determined by comparing the total number of pixels for that species to the total number of pixels in the image (pixels per species ÷ total pixels in the image × 100). An ANOVA for repeated measures was performed to test for differences between the cover estimates obtained from the different sampling methods used in this study. Assumptions of normality and homoscedasticity were tested, and data were transformed when necessary. A Tukey test for multiple comparisons was used when significant differences were observed. Finally, with the purpose of exploring the usefulness of the estimation methods over larger data sets (n = 100, 500, 1000 quadrats), a Monte Carlo simulation analysis, based on the distribution parameters (i.e. mean and SD) of the real calculated data, was used. Again, ANOVA for repeated measures was used to test for differences among methods when the number of sampling quadrats was increased.

3. Results

The visual census allowed the identification of three benthic species, whose presence was confirmed by the analysis of photographs and by digital image analysis. The species were, in decreasing order of abundance, Semibalanus balanoides, Littorina littorea, and Mytilus edulis. Percent cover estimation was in general a time-consuming task. The total operational time required in the field to estimate cover varied from 2 to 5 min per quadrat (Fig. 1). The use of a digital camera reduced this time to only a few seconds (approximately 30 s) per image, the rest of the time being spent in the laboratory performing DIA. In addition, visual inspection of the digital data showed that 15 out of 40 (37%) images contained objects whose boundaries were not well defined.
This lack of geometric definition limited proper species identification, leading to misclassification and therefore to inaccurate estimates of percent cover. Similar spectral signatures, lack of contrast between the substratum and the object, and the small size of some of the individuals were responsible for this situation. The main goal of our DIA treatment was precisely to find a way to minimize these problems. The digital camera used in this study offered the possibility of obtaining colour composite images (red, green, and blue components), so a multispectral analysis (e.g. supervised classification) based on these three components could be carried out. To determine the usefulness of such an approach, a correlation analysis was performed between the RGB components of the images. This analysis showed that the pixel distributions of the components were highly correlated (Table 1). Supervised classification methods use the RGB multispectral information to differentiate the organisms present in the data; the high correlations between the RGB components in our data imply a lack of sufficient spectral information to perform this task. Furthermore, as shown in Fig. 2A, supervised classification only detected two spectral groups, one of which corresponds to the zones of high concentration of encrusting algae. This is an inconvenience whose solution requires further studies beyond the scope of this work.

Fig. 1. Mean time used in each sampling and determination method. Means are shown ± 1 S.D. VS, visual scan; PQ, photo quadrat; HPF, high pass filter; NSCL, non-supervised classification.

The non-supervised classification (NSCL) algorithm, Isodata, was used to determine how many groups could be recognized in the RGB data. A previous ANOVA test showed that the results from NSCL, applied to the R, G and B components separately, do not differ significantly (F = 5.32; p > 0.05).
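The channel-correlation check described above can be sketched by flattening each component to a vector of pixel values and computing pairwise Pearson coefficients. The image here is synthetic (three nearly identical channels plus noise, mimicking the highly correlated channels the study reports); it is an illustration of the check, not the study's data.

```python
import numpy as np

def channel_correlations(rgb):
    """Pairwise Pearson correlations between the flattened R, G, B
    components, analogous to the per-image coefficients of Table 1."""
    flat = rgb.reshape(-1, 3).astype(float).T   # 3 x N matrix of pixel values
    return np.corrcoef(flat)                    # 3 x 3 correlation matrix

# Synthetic image whose channels share one underlying gray-value field plus
# small independent noise, so the channels are strongly correlated.
rng = np.random.default_rng(1)
base = rng.uniform(0, 255, size=(768, 1024))
rgb = np.stack([base + rng.normal(0, 5, base.shape) for _ in range(3)], axis=-1)
corr = channel_correlations(rgb)
```

Coefficients near 1, as in Table 1, mean the three bands carry nearly redundant information, which is why a multispectral (supervised) classification has little to work with.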
Given this condition and the high correlation shown by the RGB components (Table 1), the red band was arbitrarily chosen to pursue the analysis. Fig. 2B presents a red image segmented by Isodata into four groups (see scale in Fig. 2B). Isodata correctly identified the organisms under study but also illustrates several potential problems. For example, S. balanoides was a mixture of groups three and four, L. littorea included groups two and four, and these groups were also present on the bare rock substrate. In other words, organisms were subdivided into subgroups and there were also spectral overlaps; the high spatial resolution and the large dynamic range present in the data, together with the inability of the RGB bands to resolve the spectral characteristics of the groups, were responsible for this situation. In general, non-supervised classification does not improve the recognition of individuals that were not well defined in the original image.

Table 1
Mean values of the correlation coefficients between the pixel distributions of the red (R), green (G), and blue (B) components of the digital images

      R     G     B
R     1
G     0.99  1
B     0.97  0.99  1

The correlation coefficients were obtained by simple linear regression using all pixels (1024 × 764) contained in each of the 40 images.

Fig. 2. Example of the results obtained by (A) supervised classification and (B) non-supervised classification of the same image. Fig. 2A shows two spectral classes; the black one represents zones with a high concentration of encrusting algae. Four statistically different groups were found using non-supervised classification. The scale numbers on the right of Fig. 2B indicate the relation between the four groups and the corresponding gray values in the image.
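The Isodata-style segmentation of the red band into four spectral groups can be sketched with a plain k-means clustering of gray values. This is a simplified stand-in under stated assumptions: PCI's Isodata additionally merges and splits clusters, and the band here is a synthetic four-level image, not the study's photographs.

```python
import numpy as np

def kmeans_gray(band, k=4, iters=25):
    """Cluster a band's gray values into k spectral groups with plain
    k-means; a bare-bones stand-in for the Isodata algorithm."""
    values = band.astype(float).ravel()
    # Deterministic initialization: spread the k centers over the
    # gray-value distribution using evenly spaced quantiles.
    centers = np.quantile(values, (np.arange(k) + 0.5) / k)
    for _ in range(iters):
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        centers = np.array([values[labels == j].mean() if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return labels.reshape(band.shape), np.sort(centers)

# Synthetic red band with four well-separated gray levels plus sensor noise.
rng = np.random.default_rng(2)
levels = rng.choice([30.0, 90.0, 150.0, 220.0], size=(64, 64))
band = levels + rng.normal(0.0, 3.0, size=levels.shape)
labels, centers = kmeans_gray(band, k=4)
```

Because the clusters are defined only by gray value, two organisms with overlapping brightness end up in the same group; this is exactly the overlap problem the text attributes to Isodata.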
Therefore, classification criteria based only on the grey value of the pixel, such as Isodata, led to overlap between different groups and to inaccurate percent cover estimates (>50%).

Fig. 3. An original image and its high pass filtered version. (A) Digital red band before application of the Prewitt filter; (B) after application of the high pass filter. The lines (a, b, c) indicate segments analyzed for profile information. All segments are oriented in the same direction (left to right).

Fig. 4. Gray value profiles from the three sections shown in Fig. 3: (A) rock with noise (segment a in Fig. 3); (B) diffuse organism (segment b in Fig. 3); and (C) well-defined organism (segment c in Fig. 3).

Fig. 5. Comparison of percent cover estimates for (A) S. balanoides, (B) L. littorea, and (C) M. edulis obtained by the different estimation methods used in this work: VS, visual scan; PQ, photo quadrat; HPF, high pass filter; NSCL, non-supervised classification.

Next, the original image was filtered using the high pass filter (see Fig. 3). To make the comparison meaningful, the same enhancement look-up table was applied to both images. The filtered images emphasized the regions of the image where abrupt changes in grey values took place. Fig. 3B emphasizes the high-frequency information, offering greater contrast than the raw image; the discrimination between the image background and the benthic organisms is increased. To show this point more clearly, profiles, each 85 pixels long, were obtained in three different sections of the images (white lines in Fig. 3).
The first segment (a) corresponded to a region where no species were present, the second (b) to a region where species were not well defined, and the third (c) to a region where a well-defined species (S. balanoides) was present. All segments were oriented in the same direction (left to right). These profiles, presented in Fig. 4, clearly indicate that in a region lacking species (Fig. 4A) there is no clear gain from filtering the original image, because the profiles are basically superimposed on each other. However, where species are present (Fig. 4B and C), the filtered image offered a greater definition of such objects. For example, in Fig. 4C, the boundaries defining S. balanoides were well marked by a series of five maximum peaks situated in the middle of the graph. A similar pattern of five peaks, with similar numerical values, was also observed at the beginning of Fig. 4B, allowing us to infer, by analogy, that the ill-defined species in Fig. 4B was indeed of the same type (S. balanoides) as the one observed in Fig. 4C. We therefore have four methods for estimating percent cover: the visual scan (VS), the analogue photo quadrat (PQ), DIA using non-supervised classification (NSCL), and DIA using the high pass filter (HPF). A comparison of these methods is presented in Fig. 5. DIA using HPF and the VS and PQ methods offered comparable estimates; however, the NSCL method diverged for all three species examined. Statistically, the ANOVA for repeated measures showed that there were indeed significant differences between the estimation methods (Table 2). For example, for S. balanoides, the estimate by the NSCL method was higher than the other three in 80% of the cases (Fig. 5A). A similar situation was observed for L. littorea (Fig. 5B) and for M. edulis (Fig. 5C).
According to the Tukey multiple comparison test, the NSCL method exhibited significant differences in percent cover determination with respect to the other three methods.

Table 2
Results from ANOVA for repeated measures testing for differences between methods in determining percentage cover

Species              SS        MS       F       p     P(α)   Tukey (MCT)
S. balanoides       13165.44   4388.48  345.21  0.00  1.00   VS, PQ, HPF ≠ NSCL
  Error              1983.16     12.71
  Total             15148.6
L. littorea           474.33    144.91    4.07  0.01  0.81   VS, PQ, HPF ≠ NSCL
  Error              1992.28     33.58
  Total              2427.56
M. edulis             597.01    199.00    7.9   0.5   0.45   VS, PQ, HPF ≠ NSCL
  Error              6088.36    253.34
  Total              6677.38

VS = visual scan; PQ = photo quadrat; HPF = high pass filter; NSCL = non-supervised classification. p < 0.05; P = power test value; α = 0.05.

Table 3
Results from ANOVA for repeated measures using data from the Monte Carlo simulation analysis

Species        n      SS          MS        F      p      P(α)   Tukey (MCT)
S. balanoides  100      438.38    219.19    2.44  0.089  1
  Error              26 719.39     89.96
  Total              27 157.78
               500     1312.37    656.18    8.91  0.000  1      VS ≠ PQ, HPF
  Error             110 240.4      73.64
  Total             111 552.8
               1000    1694.17    847.07   11.57  0.000  1      VS ≠ PQ, HPF
  Error             219 511.3      73.24
  Total              22 120.51
L. littorea    100      707.86    353.93   10.25  0.000  0.98   VS ≠ PQ, HPF
  Error              10 256.5      34.53
  Total              10 964.36
               500     3509.46   1754.72   43.28  0.000  1.00   VS ≠ PQ, HPF
  Error              60 697.86     40.54
  Total              64 207.32
               1000    6897.44   3448.72   86.55  0.000  1.00   VS ≠ PQ, HPF
  Error             119 422.82     39.84
  Total             126 320.21
M. edulis      100     1059.01    529.5     2.75  0.065  0.54
  Error              57 167.63    192.48
  Total              58 226.64
               500     3068.18   1534.09    8.41  0.000  0.80   PQ ≠ HPF
  Error             273 216.53    182.5
  Total             276 284.72
               1000    2191.95   1095.97    5.97  0.000  0.96   PQ ≠ HPF
  Error             556 234.41    183.59
  Total             552 426.44

Simulated data were calculated based on the mean and SD of the real calculated percent cover for each of the three species. VS = visual scan; PQ = photo quadrat; HPF = high pass filter. p < 0.05; P = power test value; α = 0.05.
VS, PQ, and DIA using HPF did not present significant differences in their estimates. Therefore, the Monte Carlo simulation test was applied only to VS, PQ, and DIA using HPF. Significant differences between them in the percent cover estimates for each species were observed when the number of sampled quadrats was increased (Table 3). For example, the S. balanoides and L. littorea cover estimates from the VS method differed from those of the PQ and DIA methods. For M. edulis, the Tukey test detected a significant difference only between the PQ and DIA using HPF methods.

4. Discussion

Given the small size of the quadrats used in this study (5 × 10 cm), the estimation of the number of taxa by the three methods was carried out without difficulty. Our calculations of percent cover using 40 quadrats, excluding the results obtained by NSCL, show no significant differences between VS, PQ, and DIA using HPF. For these last three methods, the Monte Carlo simulations show significant differences when the number of quadrats is increased (>100 quadrats). This result is attributable to the fact that in both the VS and PQ methods there is always a large number of unidentified or misclassified objects present. There is also a potential for misidentification of species whenever photographs or images are used, because species with similar morphologies cannot be easily discriminated. Foster et al. (1991) have shown that photo quadrats consistently underestimate organism cover and the number of taxa present. Using DIA by HPF, the estimate of organism cover is a function of the number of pixels defining each object and, in turn, each object is defined by its boundaries in the filtered image. The combination of a high-resolution digital camera as a data acquisition device and DIA techniques for the treatment of such data presents several advantages and could constitute a powerful tool for the study of benthic communities.
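The Monte Carlo comparison used in the study can be sketched as follows: draw n simulated quadrat covers per method from a normal distribution parameterized by that method's observed mean and SD, then test for between-method differences. The per-method means and SDs below are invented placeholders (not the study's values), and a simple one-way F statistic stands in for the repeated-measures ANOVA.

```python
import numpy as np

def f_oneway(groups):
    """Minimal one-way ANOVA F statistic; a simplified stand-in for the
    repeated-measures ANOVA applied to the simulated data."""
    k = len(groups)
    n_total = sum(len(g) for g in groups)
    grand = np.concatenate(groups).mean()
    ss_between = sum(len(g) * (g.mean() - grand) ** 2 for g in groups)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n_total - k))

# Placeholder (mean, SD) of percent cover per method for one species --
# invented for illustration, NOT the paper's measured parameters.
params = {"VS": (42.0, 8.0), "PQ": (39.0, 8.5), "HPF": (40.0, 7.5)}

rng = np.random.default_rng(3)
f_stats = {}
for n in (100, 500, 1000):
    # Draw n simulated quadrat covers per method, clipped to 0-100%.
    sims = [np.clip(rng.normal(mu, sd, size=n), 0.0, 100.0)
            for mu, sd in params.values()]
    f_stats[n] = f_oneway(sims)
```

The design point this illustrates is the one Table 3 reports: as n grows, even small between-method differences in the simulated means become statistically detectable.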
The analysis of the biological information presented in this study shows that digital image analysis (DIA) by high pass filter (HPF) provided results similar to those obtained by the conventional methods (VS, PQ). However, it is not without potential problems; Table 4 lists some of the advantages and disadvantages of using these digital tools. Probably one of the most important advantages of DIA is that, using HPF, it is possible to increase the resolution at the edges in the image, thereby eliminating the noise caused by wet surfaces and shadows. Another advantage is that the information extracted from the images is objective and quantitative, reducing the variability produced by observer effects (Dethier et al., 1993). It is well known that time is a critical factor in conducting experiments and observations in the intertidal zone, due mainly to the duration of the tidal cycle. Acquiring data with a digital camera is fast and flexible. This technology allows the immediate revision of images, with deletion and re-recording when necessary, without having to wait for developed films, as is the case with analogue cameras. Furthermore, digital images are saved directly on memory cards or in computer memory, avoiding digitizing procedures. Thus, 100% of the original high-quality data is preserved, in a short time and at minimum cost. Another point in favour of the use of high-resolution cameras in benthic studies is that images obtained by such cameras (in our case, a Nikon Coolpix 990) with a resolution of at least 3.3 megapixels can be enlarged up to four times without losing detail. This property provides high-frequency information on the organisms under study.
Table 4
Advantages and disadvantages of using digital tools (high-resolution digital photography and digital image analysis)

Advantages                                               Disadvantages
Objective, quantitative                                  Estimates only the canopy cover
Speed of acquiring field data                            May require a long laboratory process
Cost-effective for repetitive sampling                   Expensive for one-time sampling
Camera easy to use                                       Data analysis requires specialized training
Compatible digital image formats                         Requires large memory space
Allows the exploration of alternative ways               Time consuming in the laboratory
  to extract information
Provides information in three spectral bands             Requires specialized software

Another advantage of HPF is that it is a technique that does not require additional input from the user (i.e. colour ranges). Filter techniques based on convolution calculations automatically associate changes in brightness with pixel position (Richards, 1986), permitting the delimitation of information classes (groups of pixels that form categories of interest). In our case, the high resolution available in our images provided quality information about the morphology of the species; these geometric forms are easily identified in a high pass filtered image. However, species do not necessarily present well-defined spectral signatures in RGB space. The series of steps needed for this process can be routinely implemented in any digital image analysis package as a series of programmable computer procedures. Using such a programmable procedure, the 40 raw images presented in this study were filtered in 15 min, reducing the time needed for the analysis by about 60%. However, no method is without potential errors or problems: HPF does not automatically determine the number of pixels associated with the information classes, so cover needs to be determined interactively by delimiting the edge boundaries of the organism.
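A programmable batch procedure of the kind described above could look like the sketch below. Everything here is hypothetical: the folder layout, the `.npy` storage format, the three synthetic stand-in images, and the toy local-mean high-pass step (which stands in for the Prewitt filtering, not PCI's implementation).

```python
import tempfile
from pathlib import Path
import numpy as np

def high_pass(band):
    """Toy high-pass step: subtract a 3x3 local mean to emphasize edges.
    A stand-in for the Prewitt filtering applied in the study."""
    padded = np.pad(band.astype(float), 1, mode="edge")
    h, w = band.shape
    local_mean = sum(padded[dy:dy + h, dx:dx + w]
                     for dy in range(3) for dx in range(3)) / 9.0
    return band - local_mean

# Hypothetical batch procedure over a folder of stored band images
# (three synthetic stand-ins here instead of the study's 40 photographs).
workdir = Path(tempfile.mkdtemp())
rng = np.random.default_rng(4)
for i in range(3):
    np.save(workdir / f"quadrat_{i}.npy", rng.uniform(0.0, 255.0, size=(64, 64)))

filtered = {path.stem: high_pass(np.load(path))
            for path in sorted(workdir.glob("quadrat_*.npy"))}
```

Wrapping the filter in a loop over stored files is what turns a per-image operation into the kind of routine, scripted procedure that processed all 40 images in minutes.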
Another problem is that HPF, like the photo quadrat method, offers only a two-dimensional view, permitting estimation of canopy cover alone (see also Meese and Tomich, 1992). Additional steps, such as the removal of algae, may be necessary to estimate under-canopy species cover. Given the high spatial resolution of our data, the high pass filter technique constitutes the most accurate method known to us so far. Although there may be conditions under which it is useful, automatic colour segmentation based on colour ranges cannot be used routinely for accurate species discrimination in images from benthic sites (Bernhardt and Griffing, 1999). The use of a high pass filter is only one of a series of possible DIA techniques. For example, other techniques suitable for determining organism cover could be colour segmentation or supervised and non-supervised classification; in our case, however, the enhancement of edge information in the filtered image permits a better global identification of invertebrate species and thus a more accurate estimation of percent cover. On the other hand, classification methods make it possible to detect areas with high concentrations of encrusting algae. In short, this study shows that the use of high-resolution photography together with high pass filtering techniques is a simpler and less expensive alternative to classical methods for obtaining and analyzing benthic samples at small spatial scales (cm). The use of a digital camera for acquiring data substantially reduces the temporal variability inevitably introduced by the sampling process. Digital image analysis using high pass filters increases the discrimination between the image background and the organisms, allowing a more accurate cover determination.
These digital tools could therefore substantially lower the cost of monitoring programs where a fast data acquisition rate is needed at any spatial scale, and improve the estimates of the ecological properties of the community.

Acknowledgements

We thank the Darling Marine Center (USA) for providing field facilities and C. Sence for assistance in the field. We also thank two anonymous referees for providing helpful improvements to the manuscript. This research was supported by NSERC and FCAR grants to EB. Financial support to D.P. was provided by MEQ (Québec, Canada) and CONACYT (Mexico) scholarships.

References

Andrew, N.L., Mapstone, B.D., 1987. Sampling and the description of spatial pattern in marine ecology. Oceanogr. Mar. Biol. Ann. Rev. 25, 39–90.
Bernhardt, S.P., Griffing, L.R., 1999. An evaluation of image analysis at benthic sites based on color segmentation. Bull. Mar. Sci. 69, 639–653.
Christakos, G., 2000. Spatiotemporal mapping in natural sciences. In: Christakos, G. (Ed.), Modern Spatiotemporal Geostatistics. Oxford Univ. Press, England, pp. 1–24.
Cooper, M.C., 1997. The use of digital image analysis in the study of laminated sediments. J. Paleolimnol. 19, 33–40.
Cuevas-Jiménez, A., Ardisson, P.-L., Condal, A.R., 2002. Mapping of shallow coral reefs by colour aerial photography. Int. J. Remote Sensing 23, 3697–3712.
Dartnell, P., Gardner, J.V., 1993. Digital imaging of sediment cores for archives and research. J. Sediment. Petrol. 63, 750–752.
Dethier, M.N., Graham, E.S., Cohen, S., Tear, L.M., 1993. Visual versus random-point percent cover estimation: 'objective' is not always better. Mar. Ecol. Prog. Ser. 96, 93–100.
Farmer, A.M., Adams, M.S., 1991. The nature of scale and the use of hierarchy theory in understanding the ecology of aquatic macrophytes. Aquat. Bot. 41, 1–3.
Foster, M.S., Harrold, C., Hardin, D.D., 1991. Point vs.
photo quadrat estimates of the cover of sessile marine organisms. J. Exp. Mar. Biol. Ecol. 146, 193–203.
Glasbey, A.C., Horgan, G.W., 1995. Image Analysis for the Biological Sciences. John Wiley & Sons, Canada.
Horne, J.K., Schneider, S.D., 1995. Spatial variance in ecology. Oikos 74, 18–26.
King, D.J., 1995. Airborne multi-spectral digital camera and video sensors: a critical review of systems designs and applications. Can. J. Remote Sensing 21, 245–273.
Langton, R.W., Auster, P.J., Schneider, D.C., 1995. A spatial and temporal perspective on research and management of groundfish in the northwest Atlantic. Rev. Fish Sci. 3, 201–229.
Livingstone, D., Raper, J., McCarthy, T., 1999. Integrating aerial videography and digital photography with terrain modelling: an application for coastal geomorphology. Geomorphology 29, 77–92.
Meese, R.J., Tomich, A., 1992. Dots on the rocks: a comparison of percent cover estimation methods. J. Exp. Mar. Biol. Ecol. 165, 59–73.
Miller, W., Ambrose, R., 2000. Sampling patchy distributions: comparison of sampling design in rocky intertidal habitats. Mar. Ecol. Prog. Ser. 196, 1–14.
Pasqualini, V., Pergent-Martini, C., Fernandez, C., Pergent, G., 1997. The use of airborne remote sensing for benthic cartography: advantages and reliability. Int. J. Remote Sensing 18, 1167–1177.
Petterson, G.I., Remberg, I., Geladi, P., Lindberg, A., Lindgren, F., 1993. Spatial uniformity of sediment accumulation in varved lake sediments in northern Sweden. J. Paleolimnol. 9, 195–208.
Richards, J.A., 1986. Remote Sensing Digital Image Analysis: An Introduction. Springer-Verlag, Berlin.
Sheppard, C.R.C., Matheson, K., Bythell, J.C., Murphy, J.C., Myers, P., Blake, B., 1995. Habitat mapping in the Caribbean for management and conservation: use and assessment of aerial photography. Aquat. Conserv. Mar. Freshw. Ecosyst. 5, 277–298.
Somerfield, P.J., Gage, J.D., 2000. Community structure of the benthos in Scottish sea-lochs: IV. Multivariate spatial pattern.
Mar. Biol. 136, 1133–1145.
Teixidó, N., Garrabou, J., Arntz, W.E., 2002. Spatial pattern quantification of Antarctic benthic communities using landscape indices. Mar. Ecol. Prog. Ser. 242, 1–14.
Whorff, J.S., Griffing, L., 1992. A video recording analysis system used to sample intertidal communities. J. Exp. Mar. Biol. Ecol. 160, 1–12.

Abundance estimation of rocky shore invertebrates at small spatial scale by high-resolution digital photography and digital image analysis
Introduction · Material and methods · Visual scan (VS) · Photo quadrat (PQ) · Digital image analysis (DIA) · Results · Discussion · Acknowledgements · References

work_psw4gb5d3bccrp5mowyfddyo3y ---- Virtual Mentor, American Medical Association Journal of Ethics, May 2010, Volume 12, Number 5: 412-417.
IMAGES OF HEALING AND LEARNING

The Genetic Basis of Body Shape: Lessons from Mirror Twins and High-Definition Digital Photography
David Teplica, MD, MFA

Introduction

American culture places great emphasis on body shape. There is a widely held presumption that "diet plus exercise = looking good." This premise gives rise to huge expenditures of time and resources in all-too-often frustrating attempts to "get in shape," but what is not considered is that shape and size may have entirely different biological underpinnings. Many individuals successfully lose weight and significantly reduce body size, only to remain unhappy with their residual shapes. To a certain degree size may be under one's control by the intentional modulation of caloric intake and burn, but there is little data to support the idea that even the most stringent efforts can effectively and permanently change body configuration or fat distribution. For example, dietary manipulation can affect overall size, but though it may temporarily shrink both waist and hips, it will not necessarily bring about the desired waist-to-hip ratio. Because many Americans believe that body shape can be controlled by behavior, societal judgment is often levied against patients who choose to manipulate their native body contours surgically. Absent evidence that shape is inborn, many continue to struggle for decades, only to fail to reach their goals. Since surgery requested later in life is more complex, and complication rates can be higher, this misconception has ethical implications. Anatomy and body shape are evaluated routinely by a host of imaging techniques. Medical imaging underwent major expansion in the late twentieth century with the introduction of CAT scan technology, magnetic resonance imaging, and other computer-based modalities.
During those same years, however, the advent of digital cameras resulted in a shift in clinical photography to a less scientific "point-and-shoot" mentality, which produced an explosion of case-related patient images that were often published with no consistent standardization of technique. The outcomes of plastic surgery intervention are often evaluated by looking at these less-than-ideal "before and after" snapshots. Such documentation fails to provide accurate and quantifiable data to support the notion that surgery has effected permanent change, or that the underlying condition could only be changed by surgery in the first place. Fortunately for our future understanding of this complex issue, standardized imaging technologies and software now exist to address long-unanswered questions about the inheritance of body shape and the quantification of surgical results. A new and unique monozygotic mirror-twin model incorporating standardized photographic techniques provides a tool for investigating questions of anatomic development and adult human form.

Anatomic Observations Using a Mirror-Twin Model

Facial skin features historically were thought to stem from a combination of genetic and environmental influences. In the past, to help determine the genetic origin of a facial skin feature, correspondence of surface findings was erroneously sought by comparing the same sides of two twins [1]. More recently, I have used highly standardized photographic techniques and skin surface analysis to address questions of inheritance of anatomic features [2]. With technical insight from Kalev Peekna, I developed a formal digital method to easily account for the phenomenon of mirroring in twins which, though previously ill-defined by science, has long been acknowledged among twins themselves.
Anatomic mirroring is the term used to describe the phenomenon in which a lesion or anatomic structure on one side of a monozygotic (MZ) twin is found in a similar location on the opposite side of the co-twin (e.g., a mole on twin A's right cheek can be paired to a mole on twin B's left cheek). Our technique was therefore developed to definitively and reproducibly diagnose mirroring and allow for its differentiation from simple same-side concordance in order to show the genetic contribution to facial shape [3]. Figure 1 (below) shows typical concordance of skin features in a pair of MZ twins who exhibit no anatomic mirroring. Correspondence in the skin surface findings in another set of twins can only be appreciated if opposite sides of the face are carefully examined (figure 2, next page).

Figure 1. Detailed analysis of the same sides of the faces of two concordant, non-mirrored MZ twins reveals striking similarities. These similarities include the same number and configuration of wrinkle creases on both the forehead and brow, nearly identical crow's feet wrinkle lines with similar branching patterns at the corners of the eyes, similar helical root creases, pre-tragal creases, identical oblique earlobe creases, and a series of skin lesions that appear to have migrated at different rates during early embryonic development, with each feature being more anterior in twin B. None of the findings present on the right sides of the twin faces are present on the left.

Figure 2. Both twin A (left) and twin B (right) exhibit a polygon of nevi only on opposite cheeks. It is likely that different rates of embryologic tissue transit account for the slight differences in the shapes of the polygonal arrangement in each twin, although both clusters remain within the boundaries of the anatomic region innervated by the second branch of the trigeminal nerve.
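The flip-and-compare logic behind distinguishing mirroring from same-side concordance can be illustrated with a toy sketch. This is hypothetical code, not the author's software, and it presumes rigidly standardized, registered photographs:

```python
def mirroring_score(twin_a, twin_b):
    """Classify two standardized, registered images (lists of pixel rows)
    as 'concordant' or 'mirrored'.

    Echoes the addition/subtraction idea in the text: compute the mean
    absolute residual of a direct subtraction (concordant hypothesis) and
    of a subtraction after horizontally flipping twin B (mirrored
    hypothesis); the smaller residual wins. A toy sketch only.
    """
    n = sum(len(row) for row in twin_a)
    # Concordant hypothesis: same-side features should cancel.
    same = sum(abs(a - b) for ra, rb in zip(twin_a, twin_b)
               for a, b in zip(ra, rb)) / n
    # Mirrored hypothesis: features cancel after a horizontal flip.
    mirror = sum(abs(a - b) for ra, rb in zip(twin_a, twin_b)
                 for a, b in zip(ra, rb[::-1])) / n
    label = "mirrored" if mirror < same else "concordant"
    return label, same, mirror
```

A small residual under one hypothesis and a large one under the other corresponds to the symmetrical versus asymmetrical "ghosting" described for the subtraction images.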
New digital addition and subtraction techniques used to analyze highly standardized images of twins can be employed to study facial shape for the presence of anatomic mirroring [3]. As in radiological techniques used for digital subtraction angiography, images of twin faces are overlapped and then digitally subtracted from each other to determine whether anatomic shape was concordant (present on the same side in both twins) or mirrored (present on the right side in one and left side in the other), as are the twins in figures 3 and 4 (next page). Analysis of 27 pairs of monozygotic twins showed that 64 percent of male pairs and 23 percent of female pairs exhibited the mirror phenomenon, and that there was no relationship between the mirror phenomenon and the timing of the first split of the egg in either gender [4]. When the appropriate side of the face was analyzed (i.e., either the same or opposite) in these same twins, nearly 100 percent of skin features were found to be present in both twins [5]. In light of these observations, all future studies of anatomic inheritance should control for or consider the mirror phenomenon. In addition, and perhaps more importantly, the above findings bring the role of environmental influence into question. It is illogical to think that random environmental influence could consistently affect only one side of one twin and only one side (for example, just the mirror-opposite side) of another twin in exactly the same way over their entire lifetimes—whether they were raised in the same or different environments. As a result, environmental influence can be eliminated as a variable if mirroring is analyzed and controlled in the twin study population.

Figure 3 (left). Representative pair of female mirror twins. Twin A and twin B have been digitally overlapped. Figure 4 (right).
When digitally subtracted from each other, the images from figure 3 show symmetrical "ghosting" consistent with anatomic mirroring of the pair's skin findings. (Digital subtraction of the images of concordant twins results in an asymmetrical "ghost," indicating that the inherent asymmetries of the face are concordant and not mirrored.) Standardized imaging and digital analysis have preliminarily confirmed the presence or absence of mirroring of body form in MZ twin torsos. Figure 5 illustrates the extreme alignment of anatomy when two concordant male MZ twin torsos are digitally added to each other, but the alignment is lost when the photograph of twin B is horizontally flipped. Digital subtraction has successfully identified concordance or mirroring in all pairs studied to date. It follows that the body shapes of the twin pairs must be inherently similar (concordant) or similar-but-mirrored, regardless of differences in size [5].

Figure 5. Left to right: The native state of twin A; the native state of twin B; the digital addition of twin A imposed on twin B, showing near-complete anatomic alignment of the torsos; and, finally, the digital addition of the native state of twin A added to the horizontally flipped image of twin B, showing a dramatic decrease in alignment consistent with a non-mirrored native state.

Measuring Postsurgery Results

The same standardized imaging techniques can also be used to accurately quantify postsurgery results, because photographic variance has been nearly eliminated. In figure 6, the postoperative result has been digitally subtracted from the preoperative baseline anatomic state, providing evidence of shape change which can actually be measured. The same methods could be used to track disease progression (e.g.
Cushing disease or HIV-related lipodystrophy), the effects of therapeutic interventions, or changes in body configuration due to aging.

Figure 6. Standardized digital subtraction analysis (preoperative minus postoperative views) of the surgically imposed shape changes following full-body circumferential reproportioning. This surgery was preceded by weight loss of more than 100 pounds, which had reduced the patient's size, but had not achieved the patient's desired shape.

Discussion

The above findings, developed using a MZ twin approach that controls for the "mirror twin" phenomenon, support the concept that body surface features and body shape are genetically predetermined. Diet and exercise appear to be able to temporarily alter size, but it seems that only surgery, disease, or trauma can permanently alter shape. This observation has direct implications for twins and non-twins alike who have concerns about skin or body features. Patients who request body contour surgery (the elective alteration of baseline anatomic form) are often counseled to make lifestyle changes to alter their weight (with the presumption that it will change their shape) before surgery is performed. In light of the findings presented above, patients should instead be counseled to adopt healthy diets and exercise routines that can be maintained throughout adulthood, regardless of the effect on weight preoperatively. Surgery should proceed once metabolic steady state is reached and body weight has stabilized, after several months, so the patient can enjoy an improved body configuration without struggling to maintain an unrealistic daily routine. Data on the genetic inheritance of undesired body shapes could help inform future ethical decisions regarding elective surgery. The broader implication of these photographic and anatomic findings is that the very structure of the "nature vs. nurture" debate as it pertains to body shape must be reconsidered.
It is clear that there may be limits to the effect of environment on anatomic shape.

References
1. Gedda L. Twins in History and Science. Springfield, IL: Charles C. Thomas; 1961.
2. Teplica D, Keith D. A study of the mirror symmetry phenomenon in the faces of 100 sets of monozygotic twins, using rigidly standardized photographic techniques and digital analysis. Paper presented at: 10th International Workshop on Multiple Pregnancy; September 5-7, 1996; Zakopane, Poland. Arch Perinat Med. 1996:1(2).
3. Teplica D, Peekna K. The mirror phenomenon in monozygotic twins. In: Blickstein I, Keith L. Multiple Pregnancy: Epidemiology, Gestation, and Perinatal Outcome. 2nd ed. London: Taylor and Francis; 2005: 277-288.
4. Teplica D, Derom C, Peekna K, Derom R. Embryological timing in mirror-image twinning [abstract]. Twin Res Hum Genetics. 2007;10 Suppl:54.
5. Teplica D. Iconography in twins: a modern photographic perspective. Paper presented at: The 1st World Congress of Twin Pregnancy: A Global Perspective; April 16-18, 2009; Venice, Italy.

David Teplica, MD, MFA, is a clinical associate in the plastic and reconstructive surgery section at the University of Chicago's Pritzker School of Medicine and an attending surgeon at Saint Joseph Hospital in Chicago. His photographic explorations have been widely reproduced, the images are exhibited throughout the United States and Europe, and prints are held in museum, corporate, and private collections. Surgically, his primary interest is in alteration of body form and facial shape through the carefully controlled addition or subtraction of adipocytes from the subcutaneous plane.

Disclosure
Support for this work was provided by an NIH Surgical Scientist Training Grant, the Center for Study of Multiple Birth, and the Eastman Kodak Company.

Acknowledgment
The author wishes to thank Dr.
Louis Keith, who provided editorial assistance and insight into the text. Dr. Thomas J. Krizek provided critical early advice and encouragement.

Related in VM: Are Cosmetic Surgeons Complicit in Promoting Suspect Norms of Beauty?, May 2010

The viewpoints expressed on this site are those of the authors and do not necessarily reflect the views and policies of the AMA. Copyright 2010 American Medical Association. All rights reserved. http://virtualmentor.ama-assn.org/2010/05/msoc1-1005.html

work_psxsirzwljgolnkafq45473dui ---- ArcheomaticA N° 2, June 2010 – DOCUMENTAZIONE

SPHERICAL PHOTOGRAMMETRY: A NEW TECHNIQUE FOR CLOSE-RANGE SURVEY
by Gabriele Fangi

Thanks to the high technical level of digital photography, spherical photogrammetry has expanded its capabilities. Resampling, image correlation and stitching make it possible to produce photomosaics, adding to the already long history of architectural photogrammetry.

The formation of so-called scene mosaics is a very particular case in the field of spherical photogrammetry. Resampling follows techniques proper to cartographic representations. From a single point, the horizon is covered by partially overlapping images. The photographs are then projected onto a virtual sphere, which is in turn mapped onto a plane according to the so-called azimuth–zenith, equirectangular, or latitude–longitude projection, also called the spherical panorama. The equations of the representation are very simple, x = r·θ and y = r·φ, where θ and φ are the direction angles to the object point, x and y the image coordinates, and r the radius of the sphere.

Church of S. Maria della Carità (AP) – One of the panoramas of the interior.
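The x = r·θ, y = r·φ mapping is easy to sketch in code. This is a minimal, hypothetical Python illustration (the function names and the pixels-per-radian convention are assumptions of this sketch, not from the article):

```python
import math

def sphere_to_panorama(theta, phi, r):
    """Map direction angles (theta = horizontal direction, phi = zenith
    angle, both in radians) to equirectangular image coordinates."""
    return r * theta, r * phi

def panorama_to_sphere(x, y, r):
    """Inverse mapping: recover the direction angles from image coordinates."""
    return x / r, y / r

# A panorama W pixels wide implies r = W / (2*pi); for the 30,000 x 15,000
# pixel panoramas mentioned in the article, r is about 4775 pixels.
```

In this sense the panorama really is a "field book" of directions: every pixel column is a horizontal direction and every pixel row a zenith angle, exactly as with a theodolite.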
Tecnologie per i Beni Culturali

In other words, the image coordinates of the spherical panorama are a record of the directions to the object point, scaled by a value equal to the radius of the sphere: all those that would be obtained with a theodolite. The main difference from the theodolite, apart obviously from the precision, is that the axis of the sphere cannot be set vertical with the same degree of precision; it is therefore necessary to estimate two rotation angles dα_x and dα_y about the two horizontal axes x and y. The equations linking the ground coordinates X, Y, Z of an arbitrary object point P and the sphere coordinates X', Y', Z' of its image point P', in a system centred at the centre of the sphere, with coordinates X0, Y0 and Z0, and parallel to the ground system, are, with d the distance from the centre of the sphere to P,

X' = r (X − X0)/d,   Y' = r (Y − Y0)/d,   Z' = r (Z − Z0)/d   [1]

The latitude–longitude projection (figure).

Dividing the first by the second eliminates the object distance d and, with simple manipulations, [1] becomes equations [2] for the two direction angles. If we set dα_x = dα_y = 0 in [2] we obtain the usual equations for the horizontal direction and the zenith angle,

θ = arctan((X − X0)/(Y − Y0)),   φ = arctan(√((X − X0)² + (Y − Y0)²)/(Z − Z0)),

having neglected the effects of earth curvature and refraction. Restitution is performed by intersecting two or more projective rays [2]. The panoramas are oriented as a theodolite station is oriented. Alternatively, the 'blind traverse' technique can be used (Fangi 1998), that is, the coplanarity condition is used to orient one panorama with respect to another. In this case the model coordinates of a series of points are estimated and a roto-translation into the reference system is performed. Adjacent models can be concatenated. Finally, a projective bundle block adjustment can be carried out. Where reduced precision is acceptable, ordinary surveying software can be used, and combined theodolite–panorama adjustments can be performed. The advantages of this technique are that one has available a kind of ideal (pseudo) camera:
• very high resolution (e.g.
30,000 x 15,000 pixels);
• very low cost;
• field of view up to 360°;
• an ideal field book in which all the possible angular directions from a point are recorded;
• extreme speed of execution;
• no distortion;
• the possibility of using ordinary surveying software;
• the possibility of performing combined theodolite–panorama adjustments.

SOME EXAMPLES

Examples of the restitution of cultural heritage obtained with spherical photogrammetry are by now very numerous: church interiors, monumental façades, squares. Three exemplary and in some ways contrasting cases are presented here, each with its own particularity:
1. the church of Santa Maria della Carità in Ascoli Piceno;
2. Plaza de Armas in Cuzco, Peru;
3. Ad Deir (the Monastery) in Petra, Jordan.

The first case refers to a survey carried out by the department's laboratory as contract work, with the objective of satisfying a client's requirements. The other cases were photographic campaigns made during tourist trips, and thus with no claim to completeness. In commissioned work, requirements of precision and completeness must be met, which makes the work rather demanding. When working as a tourist, by contrast, the objective is only to obtain a record, a documentation that can be neither complete nor exhaustive, but only as rich as possible in information, compatibly with the always very limited time available.

THE CHURCH OF SANTA MARIA DELLA CARITÀ IN ASCOLI PICENO

This is a Baroque church, built in the sixteenth century to designs by Cola d'Amatrice and Conte Conti, with remarkable and rich decorations.
L’interno è a navata unica coperta a botte e contraffortata da sette muri trasversali, fra i quali sono sistemate le cappelle, termina con una zona absidale [1] 8 ArcheomaticA N° 2 giugno 2010 dove si trova l’altare maggiore. Dopo gli eventi sismici di aprile 2009, si voleva accertare la condizione di stabilità dell’edifi cio. Il rilievo è stato eseguito con la tecnica della fotogrammetria sferica. Per la misura di alcune particolari- tà come la verticalità delle pareti e le sezioni della volta, si è proceduto per rilievo diretto con stazione totale refl ec- torless per punti isolati. Sono stati realizzati: • una rete di circa cinquanta punti di appoggio costituita da quattro stazioni, una collocata all’interno, una per la facciata, due per la copertura; • diciassette panorami all’interno della chiesa, quattro per la facciata, e dieci per la copertura. La precisione di restituzione è stata soddisfacente, dell’or- dine del centimetro, al di là delle aspettative. IL MONOPLOTTING Quando l’oggetto da restituire giace su un piano oppure su una superfi cie defi nibile matematicamente attraverso punti è possibile la restituzione da un solo panorama. Le rette pro- iettive vengono intersecate con la superfi cie. In questa ma- niera è possibile restituire forme complesse, non altrimenti restituibili in monoscopia. La restituzione del decoro interno della chiesa è avvenuto con il monoplotting. Individuato il piano medio, la restituzione è avvenuta tramite il panorama frontale. La procedura non è rigorosa, ma non si avevano alternative e comunque la committenza non era interessata tanto al decoro quanto alla strutture dell’edifi cio. PLAZA DE ARMAS – CUZCO, PERÚ Cuore dell’antica capitale Incas, Plaza de Armas a Cuzco espone una serie pregevolissima di monumenti barocchi spagnoli coloniali, tra i quali la Cattedrale, la chiesa della Compagnia del Gesù e l’ingresso monumentale dell’Universi- tà di Sant’Antonio Abate, la seconda università più antica del sud America. 
In November 2007, during a tourist visit, thirteen panoramas were made, ten arranged along the perimeter of the square and three in central positions. No measurement was taken on site.

Church of S. Maria della Carità (AP) – The wireframe model (G. Fangi).
Church of S. Maria della Carità – Transverse section (G. Mariotti, M. Villani, L. Baldassarri).
Church of S. Maria della Carità – Longitudinal section (G. Mariotti, M. Villani, L. Baldassarri).
Church of S. Maria della Carità (AP) – The drawing of the interior decoration was obtained by monoplotting (G. Mariotti, M. Villani, L. Baldassarri).
Church of S. Maria della Carità (AP) – One of the panoramas of the façade; the visible curvature is due to the spherical projection.

Subsequently, the model was built, oriented and scaled using the UTM coordinates supplied by high-resolution Google Earth, nothing better being available for orientation and scaling (Fangi 2008b).

AD DEIR – PETRA – JORDAN

This is the finest monument of Nabataean architecture. Built in the first century BC, probably in honour of king Obada I, the monument was carved out of the soft rock of the mountain. Its dimensions are imposing: its height reaches 45 metres. The façade, arranged on two storeys, is characterised by imposing columns and culminates at the top in a large circular urn above a cylindrical body placed in the central position, between two symmetrical broken triangular pediments. At the top, the frieze with triglyphs, circular metopes and capitals is in the typical Nabataean style, characterised by extreme stylisation and simplicity; in front of the Monastery opens a large forecourt, probably intended for ceremonies.
Plaza de Armas, Cuzco, Peru – Wireframe model; on the left the Cathedral, on the right the church of the Company of Jesus (restitution G. Fangi).
Church of the Company of Jesus – Render (C.M. Agostani).
Ad Deir, Petra – The left panorama.

THE SURVEY

In April 2008 three panoramas were made from frontal positions. For the scaling of the model, only one distance measurement could be taken, with a Disto distance meter. The phases of the restitution were:
1. orientation of the panoramas, formation of the model, and its scaling;
2. production of the wireframe model;
3. an approximate solid model;
4. photo-editing, or photo-modelling, in a CAD environment by projecting the oriented panoramas onto the rough model.

PHOTO-MODELLING

The wireframe restitution produced an unsatisfactory result because, owing to the lack of stereoscopy, it was impossible to restitute almost all of the monument, which consists of irregular surfaces; the surface of the temple is in fact considerably eroded by the action of wind and time on the soft rock. To overcome this limitation, photo-editing in a CAD environment was used. First, the wireframe model was turned into a solid model, albeit a very approximate one. The oriented panoramas, whose position and orientation were known, were then imported into the 3D Studio environment. Each panorama, so positioned, was projected onto the model, making it possible to edit the model, modifying its shape and adding details until it coincided with the projection of the panorama. Finally, the projected panorama "dressed" the final model, forming its texture. The procedure described was devised and implemented by Enzo D'Annibale (D'Annibale, Fangi 2009).
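The projection step at the heart of this photo-editing workflow, finding where a 3-D model point falls in an oriented panorama, can be sketched as follows. This is a hypothetical reconstruction assuming a vertical panorama axis and the x = r·θ, y = r·φ mapping described earlier, not the 3D Studio procedure used by the authors:

```python
import math

def project_to_panorama(point, station, width):
    """Project a 3-D model point into an equirectangular panorama of
    `width` pixels taken from `station` (the panorama centre).
    Returns image coordinates (x, y) with x = r*theta, y = r*phi."""
    dx = point[0] - station[0]
    dy = point[1] - station[1]
    dz = point[2] - station[2]
    theta = math.atan2(dx, dy) % (2 * math.pi)   # horizontal direction
    phi = math.atan2(math.hypot(dx, dy), dz)     # zenith angle from vertical
    r = width / (2 * math.pi)                    # sphere radius in pixels
    return r * theta, r * phi
```

Projecting every vertex of the rough model this way, and editing the geometry until it coincides with the projected image, is the essence of the photo-modelling loop; the same mapping then assigns panorama pixels to the surface as texture.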
CONCLUSIONS

Spherical photogrammetry opens a new chapter in the rather long history of architectural photogrammetry. The technique is characterised by speed of execution, completeness of documentation and very low cost, and it also allows the creation of interactive QuickTime movies, which are very valuable documentation tools.

ABSTRACT

Spherical Photogrammetry. Photogrammetry has expanded its horizons thanks to digital photography, with techniques such as resampling, image enhancement and image correlation; among the possibilities introduced, image stitching makes it possible to merge partial photos. The new possibilities offered by spherical photogrammetry open a new chapter in the history of architectural photogrammetry.

AUTHOR
GABRIELE FANGI – GABRIELEFANGI@GMAIL.COM

REFERENCES
Armetta V., Dominaci D., Fangi G. (2008), Applicazione di fotogrammetria panoramica per il rilievo della chiesa di San Cataldo a Palermo, 12a Conferenza Nazionale Asita, L'Aquila, 21-24 ottobre 2008, pp. 159-164.
D'Annibale E., Fangi G. (2009), Interactive modeling by projection of oriented spherical panorama, 3D-Arc'2009, 3D Virtual Reconstruction and Visualization of Complex Architectures, Trento, 25-29 febbraio 2009, ISPRS Archives, Vol. XXXVIII-5/W1, ISSN 1682-1777.
Fangi G. (2007a), Una nuova fotogrammetria architettonica con i panorami sferici multimmagine, Atti Convegno Sifet, Arezzo, 27-29 giugno 2007.
Fangi G. (2007b), The Multi-image spherical Panoramas as a tool for Architectural Survey, XXI International CIPA Symposium, Atene, 1-6 ottobre 2007, ISPRS International Archives, Vol. XXXVI-5/C53, ISSN 1682-1750; CIPA Archives, Vol. XXI-2007, ISSN 0256-1840, pp. 311-316.
Fangi G. (2007c), La Fotogrammetria sferica dei mosaici di scena per il rilievo architettonico, «SIFET», n. 3, pp. 23-42.
Fangi G., Clini P., Fiori F. (2008), Simple and quick digital technique for the safeguard of Cultural Heritage.
The Rustem Pasha mosque in Istanbul, DMACH 4 - 2008 - Digital Media and its Applications in Cultural Heritage, 5-6 novembre 2008, Amman, pp. 209-217.
Fangi G. (2008), El levantamiento fotogrametrico de Plaza de Armas en Cuzco por medio de los panoramas esfericos, XXX Convegno Internazionale di Americanistica, Perugia (Italia), 6-12 maggio 2008.
Fangi G. (2009), Further Developments of the Spherical Photogrammetry for Cultural Heritage, XXII CIPA Symposium, Kyoto, 11-15 ottobre 2009.
Fangi G., Schiavoni A. (2009), Un'esperienza di Mobile Mapping con la fotogrammetria sferica, Atti della 13a Conferenza Nazionale Asita, Bari, 1-4 dicembre 2009.
Mastroiacono M., Fangi G., Nardinocchi C., Sonessa A. (2008), Un'esperienza di rilievo fotogrammetrico basato su panorami sferici, 12a Conferenza Nazionale Asita, L'Aquila, 21-24 ottobre 2008, pp. 1451-1456.

Ad Deir – The rough model; in red, the lines of the wireframe model (E. D'Annibale).
Ad Deir – The final model, with the texture formed by the projection of the central panorama; occlusions are very numerous (E. D'Annibale).

work_psxsjya2czbirckmkeyx2bu64u ---- BMC Musculoskeletal Disorders – BioMed Central – Open Access – Study protocol
The Knee Clinical Assessment Study – CAS(K).
A prospective study of knee pain and knee osteoarthritis in the general population

George Peat*, Elaine Thomas, June Handy, Laurence Wood, Krysia Dziedzic, Helen Myers, Ross Wilkie, Rachel Duncan, Elaine Hay, Jonathan Hill and Peter Croft
Primary Care Sciences Research Centre, Keele University, Keele, North Staffordshire, United Kingdom ST5 5BG
Email: George Peat* - g.m.peat@cphc.keele.ac.uk; Elaine Thomas - e.thomas@cphc.keele.ac.uk; June Handy - j.e.handy@cphc.keele.ac.uk; Laurence Wood - l.r.j.wood@physio.keele.ac.uk; Krysia Dziedzic - k.s.dziedzic@cphc.keele.ac.uk; Helen Myers - h.l.myers@cphc.keele.ac.uk; Ross Wilkie - r.wilkie@cphc.keele.ac.uk; Rachel Duncan - r.c.duncan@cphc.keele.ac.uk; Elaine Hay - e.m.hay@cphc.keele.ac.uk; Jonathan Hill - j.hill@cphc.keele.ac.uk; Peter Croft - p.r.croft@cphc.keele.ac.uk
* Corresponding author

Abstract

Background: Knee pain affects an estimated 25% of the adult population aged 50 years and over. Osteoarthritis is the most common diagnosis made in older adults consulting with knee pain in primary care. However, the relationship between this diagnosis and both the current disease-based definition of osteoarthritis and the regional pain syndrome of knee pain and disability is unclear. Expert consensus, based on current evidence, views the disease and the syndrome as distinct entities, but the clinical usefulness of these two approaches to classifying knee pain in older adults has not been established. We plan to conduct a prospective, population-based, observational cohort study to investigate the relative merits of disease-based and regional pain syndrome-based approaches to the classification and prognosis of knee pain in older adults.

Methods: All patients aged 50 years and over registered with three general practices in North Staffordshire will be invited to take part in a two-stage postal survey.
Respondents to this survey phase who indicate that they have experienced knee pain within the previous 12 months will be invited to attend a research clinic for a detailed assessment. This will consist of clinical interview, physical examination, digital photography, plain x-rays, anthropometric measurement and a brief self-complete questionnaire. All consenting clinic attenders will be followed up by (i) general practice medical record review, and (ii) a repeat postal questionnaire at 18 months.

Published: 12 February 2004. BMC Musculoskeletal Disorders 2004, 5:4. Received: 16 December 2003; Accepted: 12 February 2004. This article is available from: http://www.biomedcentral.com/1471-2474/5/4. © 2004 Peat et al; licensee BioMed Central Ltd. This is an Open Access article: verbatim copying and redistribution of this article are permitted in all media for any purpose, provided this notice is preserved along with the article's original URL.

Background

The current consensus of expert opinion is that "it is important to separate conceptually the disease process of osteoarthritis and the syndrome of musculoskeletal pain and disability" [1]. Given that classification is "arguably one of the most central and generic of all our conceptual exercises" [2] and that "all our activities in public health, in epidemiology, and in clinical practice depend on the way we classify, recognize, and identify diseases" [3], this issue deserves attention. The disease of osteoarthritis (OA) is considered to be an active process involving the entire synovial joint, with both degenerative and repair processes. It has multiple
determinants that differentially affect incidence and progression [4], and may not comprise a single uniform disease but several (e.g. tibiofemoral vs patellofemoral; isolated knee vs generalised OA). It has been argued that "from a clinical perspective, the most compelling definition of (the) disease (of osteoarthritis) is one that combines the pathology of disease with pain that occurs with joint use" (termed "symptomatic knee OA") [5]. Symptomatic knee OA affects an estimated 10–12% of the adult population aged 55 years and over [6], with an annual rate of radiographic progression of approximately 3–4% [7,8]. However, there are grounds to doubt whether this definition is, in fact, the basis for using the diagnostic label of "osteoarthritis" in primary care, where it is one of the most common diagnoses made in older adults [9]. Current guidelines discourage the routine ordering of x-rays to confirm a diagnosis of OA [10], making it unlikely that the 'pathology' has been verified in many cases. Although plain x-rays may be ordered by primary care clinicians, the decision to do so appears to be determined less by clinical features at the point of presentation than by provider characteristics and other considerations such as patient expectations of a diagnosis or a predetermined management plan [11,12].
Knee osteoarthritis in practice may be better characterised as a "preference-sensitive" diagnosis (with respect to clinicians' and patients' preferences) that will include a combination of radiographically verified symptomatic knee OA and non-radiographic knee pain in older adults, both given the same diagnostic label [13]. The effect of using such a 'mixed' approach to OA classification in practice on the accuracy of prognoses or the effectiveness of management is unclear. However, it cannot be assumed that adopting a more rigorous approach to diagnosis, using only the disease-based definition of symptomatic knee OA, would be an improvement, although much of the current evidence for the effectiveness of interventions has been based on participants defined in this way.

One proposed alternative to disease-based classification is instead to view knee pain in older adults (with the possible exception of severe OA) as a regional pain syndrome [14], in which psychosocial rather than pathoanatomical features contribute to variance in symptom severity and disability [1,15]. This observation alone, however, does not provide direct evidence about exactly how such an approach might be implemented in practice, nor how useful this might be. The debate on the relative merits of disease-based and regional pain syndrome-based approaches to classifying and diagnosing knee pain in primary care is also found across a range of other musculoskeletal pains [e.g. 16-18].

The general aims of the cross-sectional component of this study are to investigate the relative usefulness of disease-based and regional pain syndrome-based approaches to classifying knee pain in older adults, and to develop simple assessment tools that are clinically practicable for the primary care setting.
We have taken as our starting point knee pain in older adults in the general population, to reflect the diagnostic challenge in primary care where the presentation of undifferentiated symptoms is common [19]. This starting point encompasses a larger proportion of older adults in the general population: approximately 25% of those aged 55 years or over experience knee pain that has lasted four weeks or longer at any given point in time [6]. Specifically, our study will consider the following questions:

• What is the association between symptomatic radiographic knee OA and chronic knee pain? Does this association differ between tibiofemoral joint (TFJ) and patellofemoral joint (PFJ) OA?
• Can simple clinical signs and symptoms accurately identify symptomatic radiographic knee OA in older adults with knee pain?
• How does the distribution of other clinical features (e.g. signs and symptoms indicative of periarticular pathology), coexisting hand OA, and non-clinical characteristics (e.g. psychosocial factors) compare between symptomatic radiographic knee OA and other knee pain?
• In what respects do consulters and non-consulters differ in their characteristics at baseline?
• How does a GP diagnosis of "knee osteoarthritis" relate to disease-based and regional pain syndrome-based classification?

It has been argued that deciding what to do about a problem is often of more interest to clinicians and patients than what to call it [20]. Accurate information on the likely future course will play an important role in the decision-making of both parties. Classifications of knee pain at a single point in time, whether on the basis of a disease or a regional pain syndrome approach, need to be qualified by descriptions of their subsequent course over time. Such prospective studies offer the potential of describing intra- and inter-individual changes, investigating mechanisms to explain these changes, and forecasting change or outcome [21,22].
The current study has been designed with attention to previously published requirements for reporting longitudinal studies in rheumatology [23,24].

The general aims of the longitudinal component of this study are to describe the clinical course and patterns of health care utilisation of knee pain in older adults in the general population, and to develop prognostic indicators of clinical course and predictors of consultation. Specifically, it will address the following questions:

• What proportion of this sample consult their general practitioner for knee pain within the follow-up period? Can this be predicted by clinical and/or non-clinical variables collected at baseline?
• How common is clinical deterioration (in terms of increasing pain/disability severity) in this sample? Can it be predicted?
• What is the relative contribution of disease-based, clinical, and regional pain syndrome-based variables as prognostic markers?

Methods

Design
A population-based prospective observational cohort study in four phases will be completed in a sample of patients, aged 50 years and over, registered with three local general practices (Figure 1). Ethical approval for all phases of the study has been obtained from the North Staffordshire Local Research Ethics Committee, with Phases 1 and 2 complete as at October 2003.

Phase 1: Baseline two-stage mailed survey
Phase 2: Baseline clinical assessment study of the knee (CAS(K))
Phase 3: 18-month prospective review of general practice medical records
Phase 4: Follow-up mailed survey at 18 months

Phase 1: Baseline two-stage mailed survey
The organisation and content of Phase 1 is the same as that described for a separate cohort undertaken in three other practices in North Staffordshire [25].
This consists of a mailed "Health Survey" questionnaire to all adults aged 50 years and over registered with the participating practices. Respondents who provide written consent to further contact and who report pain or problems in the hands, or pain in the hips, knees or feet, will be sent a second questionnaire (the "Regional Pain Survey" questionnaire). These two questionnaires include measures of sociodemographic characteristics, general health status, psychosocial and lifestyle variables, and pain and disability (general and site-specific). Non-responders to each questionnaire will be sent a reminder postcard at two weeks and a repeat questionnaire at four weeks.

Phase 2: Baseline clinical assessment study of the knee (CAS(K))
Respondents to the Regional Pain Survey questionnaire who report experiencing pain in or around the knee within the last 12 months, and who provide written consent to further contact, will then be sent a letter of invitation and a Patient Information Sheet outlining the CAS(K) and the details of reimbursement for their travel to the clinic. Participants will be asked to telephone the Research Centre if they are interested in taking part, to book an appointment. Non-responders to this initial invitation letter will be sent a reminder invitation approximately one week later. Participants who consent to telephone contact will be telephoned, with a reminder letter being posted if telephone contact is not established after attempts on three different days. Those willing to take part in the CAS(K) will be booked into the next convenient appointment and, if necessary, travel arrangements (taxi) made. Participants who do not attend the clinic for their specified appointment will be sent another letter asking them to re-contact the Research Centre and book another appointment if they still wish to participate.
Assessment clinics for the CAS(K) will be conducted twice-weekly in the Rheumatology and Physiotherapy out-patient departments of a local National Health Service Trust Hospital. A maximum of 19 appointments per week are scheduled. Each clinic is to be staffed by a Clinic Co-ordinator, a Clinic Support Worker, three trained Research Therapists acting as Assessors, and two Radiographers.

On arriving at clinic, participants will be issued with a file containing all assessment documentation marked with their unique study number. Prior to commencing the assessment, the procedures outlined in the Patient Information Sheet will be discussed with participants to give them the opportunity to ask questions. Written informed consent to take part in the CAS(K) study will be obtained from all participants. Appropriate clothing (shorts) for the assessment is to be provided. Participants will undertake the following standardised assessment: digital photography of the lower limbs and hands, clinical interview and examination of the knees and hands, plain radiography of both knees and both hands, simple anthropometric measurement, and a brief self-complete questionnaire. Each participant's visit is expected to last approximately 1 1/2 hours.

Digital photography of the lower limbs and hands
A total of four photographs will be taken of each participant by an Assessor using a digital camera (Olympus Camedia C-4040 ZOOM; resolution 2272 × 1704 pixels). In static standing, anterior and posterior views of the lower limbs (waist to feet inclusive) will be taken. The dorsal aspect of each hand and wrist is also to be photographed. Photographs will be taken according to pre-defined written protocols that include standardised positioning of participants.
To preserve anonymity, participants' faces will not be included in any of the photographs, their unique study number instead being placed in each frame. Digital photography will take approximately 10 minutes to complete for each participant.

[Figure 1: Flowchart of study procedures. Phase 1a: mailed Health Survey questionnaire to all adults aged 50 years and over registered with 3 practices in North Staffordshire; Phase 1b: mailed Regional Pain Survey questionnaire to respondents reporting hand/hip/knee/foot pain in the last 12 months; Phase 2: Clinical Assessment Study of the Knee (CAS(K)) for respondents reporting knee pain in the last 12 months; Phase 3: 18-month prospective follow-up of general practice medical records; Phase 4: mailed Follow-up Survey questionnaire. Exclusions at each stage: non-respondents, no consent to further contact, no relevant pain, declined or missed appointments, non-consent to CAS(K), and losses to follow-up. Data collection points are in bold.]

Plain radiography of the knees and hands
Radiographs of both knees and both hands are to be obtained for all subjects. Imaging of the tibiofemoral joint of the knee will be undertaken using a weight-bearing semiflexed (MTP) posteroanterior (PA) view according to a defined protocol [26]. The patellofemoral joint of the knee will be imaged with the lateral and skyline views, both in a supine position with the knee flexed to 45°. Dorsi-palmar views of the hand and wrist are to be performed. X-rays will take approximately 20 minutes to complete for each participant.
Clinical interview and physical examination of the knees and hands
Participants will be interviewed and examined by a trained Assessor blinded to the findings from radiography and digital photography. This procedure will comprise three components. Firstly, a standardised clinical interview for the knee problem will be conducted using an abbreviated version of the KNE-SCI [27], a structured, standardised interview developed to gather quantitative data on clinically relevant aspects of knee problems in older adults. Questions are directed principally at the participants' most problematic knee (the index knee) and cover aspects of the history of the knee problem, current knee symptoms, family history of joint problems, patient causal and diagnostic attribution, and selected current and previous treatment. Secondly, a brief, standardised screening examination of both hands will be conducted. This will include identifying deformity, nodes or bony enlargement, and swelling at target joints as specified by the American College of Rheumatology criteria for the classification of hand OA [28]. Participants will also be asked to complete a test of maximal gross grip strength using a Jamar dynamometer (Sammons Preston, Chicago, IL) and pinch grip strength using a B&L pinch gauge (B&L Engineering, Tustin, CA). Thirdly, a standardised physical examination of both knees will be conducted. This will include tests of swelling, patellofemoral joint compression, range of movement, bony enlargement, superficial point tenderness, joint laxity, maximal knee extensor and flexor strength, crepitus, and single-leg standing balance. Pre-defined protocols for all components of the interview and examination are to be used for standardisation between Assessors. Assessment findings will be recorded on a standard form that is to be checked for missing data immediately post-assessment by the Clinic Co-ordinator or Clinic Support Worker.
Discussion between Assessors and participants about diagnosis and/or appropriate management is to be discouraged; participants will be advised to discuss clinical queries with their General Practitioner. The interview and examination will take approximately 40 minutes to complete for each participant.

Simple anthropometric measurement
Weight (in kg) and height (in cm) of each participant are to be measured using digital scales (Seca Ltd., Birmingham, UK) and a wall-mounted measure (Holtain Ltd., Crymych, UK) respectively.

Brief self-complete questionnaire
During the clinic visit, participants will complete a brief self-complete questionnaire containing questions relating to their knee problem – days of pain, aching or stiffness in the previous month, days in pain in the previous 6 months [29], episode duration [30], symptom satisfaction (adapted from [31]), and the Chronic Pain Grade [32] – and any hand problems – days of pain, aching or stiffness in the previous month [28], hand dominance, previous hand injury, previous hand surgery. All questionnaires will be checked by the Clinic Co-ordinator or Clinic Support Worker following completion for any missing data. The questionnaire takes approximately five minutes to complete. Travelling and out-of-pocket expenses will be reimbursed after the assessment.

Post-clinic procedure
The digital image memory card and all completed clinical assessment documentation and questionnaires will be returned to the Research Centre. Digital images are to be downloaded from the memory card onto a computer. A clinical report on the x-ray films will be provided by a Consultant Radiologist at the NHS Trust Hospital. The films and report are to be forwarded to the Research Centre, where they will be screened by a Consultant Rheumatologist for any radiographic "red flags" or significant radiographic abnormality (see below).
Standardised coding of radiographic features on the x-ray films will be carried out by a Clinical Rheumatology Research Fellow who will be blinded to the radiologist's report and all assessment data. Knee films will be scored for individual radiographic features, including osteophytes, joint space width, sclerosis, subluxation and chondrocalcinosis. The Altman Atlas [33] and scoring system [34] are to be used for the PA and skyline views, and the Burnett Atlas [35] for the lateral view. Additionally, PA and skyline views will be assigned a Kellgren and Lawrence grade [36]. Consent forms, assessment documentation, x-ray films, and reports are to be placed in separate secure storage.

Communication with participants' general practice
Assessment findings will be communicated to participants and their General Practice only in specific circumstances that will be explained to participants at the start of the clinic:

Mandatory notification of clinical 'red flags': all participants will be routinely screened during the clinical assessment for signs and symptoms suggesting potentially serious pathology requiring urgent medical attention. These are: recent trauma to the knees or hands that may have resulted in significant tissue damage; recent sudden worsening of knee symptoms; and acutely hot, swollen, painful knees or hands [37]. In the event of such findings, participants will be informed that they require urgent attention, a standard fax will be immediately sent to the General Practice, and appropriate medical attention arranged the same day. A letter of confirmation will subsequently be sent to the participants' General Practice.
Mandatory notification of radiographic 'red flags': in the event of any radiographic red flags (including suspected malignancy, unresolved fracture, or infection) reported by the Consultant Radiologist, a standard fax will be sent with a copy of the x-ray report to the General Practice notifying them of this. This will subsequently be confirmed by letter.

Discretionary notification of significant radiographic abnormality: at the discretion of the Consultant Rheumatologist, the General Practice will be notified of significant radiographic abnormality (e.g. previous fracture, inflammatory arthropathy).

Availability of x-ray report on request: to prevent unnecessary duplication of x-rays, participants' General Practitioners can request the x-ray report if they feel it would be valuable for clinical management.

Quality assurance and quality control
Quality assurance and control are important for the integrity of longitudinal studies and the validity of their conclusions [38]. This is especially true of observer-dependent methods of data gathering. In the current study, the personal interview and physical examination, and the taking and scoring of x-rays, are subject to a number of quality assurance and control procedures. The clinical interview and physical examination have been pre-piloted. Inter- and intra-Assessor reliability of knee interview and examination variables have also been formally tested in three of the Assessors in a pilot study [[39], Wood et al., unpublished data]. All Assessors are required to conduct at least two clinical assessments prior to the commencement of data collection, and during the first month clinics with reduced numbers of participants are to be held to allow all study procedures to be tested and reviewed. All radiographers participating in the study also receive training prior to the commencement of the study. The Clinical Rheumatology Research Fellow will be trained in the methods for scoring the plain radiographs.
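Agreement in repeat-scoring exercises of the kind used to check this training is conventionally summarised with Cohen's kappa, which corrects raw agreement for agreement expected by chance. The sketch below is illustrative only; the grades are invented, not study data, and the protocol does not specify which agreement statistic will be used:

```python
from collections import Counter

def cohens_kappa(rater1, rater2):
    """Cohen's kappa for two raters assigning categorical grades
    (e.g. Kellgren and Lawrence grades 0-4) to the same films."""
    assert len(rater1) == len(rater2) and rater1
    n = len(rater1)
    # Observed proportion of exact agreement
    observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    counts1, counts2 = Counter(rater1), Counter(rater2)
    # Chance agreement: product of each rater's marginal proportions
    expected = sum(counts1[c] / n * counts2[c] / n
                   for c in set(rater1) | set(rater2))
    return (observed - expected) / (1 - expected)

# Hypothetical example: two scorings of the same ten films
first = [0, 0, 1, 2, 2, 3, 3, 4, 1, 0]
second = [0, 1, 1, 2, 2, 3, 4, 4, 1, 0]
print(round(cohens_kappa(first, second), 3))  # prints 0.75
```

For ordinal grades such as Kellgren and Lawrence, a weighted kappa that penalises large disagreements more than adjacent ones is often preferred; the unweighted version above is the simplest starting point.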
This single observer will score all films, and intra-observer variability is to be assessed using 50 sets of films scored eight weeks apart. Inter-observer variability will be assessed against a second observer with prior experience of grading knee x-rays, who will also grade 50 films. This inter-observer variability exercise is to be undertaken after a single consensus meeting.

A detailed Observer Manual, with protocols for obtaining written informed consent, digital photography, clinical interview and examination, administration of the brief self-complete questionnaire, anthropometric measurement, and plain radiography, will be provided to all members of the CAS(K) team for reference during the entire study period. During the data collection period, digital photographs for all participants will be reviewed, and participants with any missing or spoilt images are to be recalled to repeat the photographs. Quality control sessions will be arranged with each Assessor after every 100 patients recruited to the study in total. These sessions include observation of assessments in clinic by the Principal Investigator, structured observation of assessment in a healthy volunteer, or direct inter-Assessor comparisons on selected patients. The outcome of each quality control session will be fed back to the individual Assessor and to the group as a whole. Quality control sessions for plain radiography are also scheduled during the study to ensure consistency.

Phase 3: Prospective review of general practice medical records
All participants in Phase 1 who give permission for their GP records to be accessed will have their computerised medical records tagged by a member of the Centre's Health Informatics Specialist team. All consultations for the 12-month period prior to clinic attendance, and for the 18-month period following clinic attendance, will be identified.
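As a toy illustration of the record-review step just described, consultations can be windowed around each participant's clinic attendance date and linked by study identifier. This is not the study's actual informatics tooling; all identifiers, field names and dates below are hypothetical, and the 12-month/18-month windows are approximated in days:

```python
from datetime import date, timedelta

# Hypothetical lookup: study identifier -> clinic attendance date
clinic_dates = {"P001": date(2002, 6, 1)}

# Hypothetical consultation log: (study_id, consultation date, reason)
consultations = [
    ("P001", date(2001, 9, 10), "knee pain"),
    ("P001", date(2003, 3, 5), "knee pain"),
    ("P001", date(2005, 1, 1), "knee pain"),  # falls outside the window
]

def in_review_window(study_id, when):
    """True if the consultation falls within roughly 12 months before
    or 18 months after the participant's clinic attendance."""
    clinic = clinic_dates[study_id]
    return clinic - timedelta(days=365) <= when <= clinic + timedelta(days=548)

# Keep only consultations inside each participant's review window
linked = [(sid, when, reason) for sid, when, reason in consultations
          if in_review_window(sid, when)]
print(len(linked))  # the 2005 consultation is excluded, leaving 2
```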
The three practices participating in this research are fully computerised and undergo annual audits, completed by the Health Informatics team, to assess the quality and completeness of data entry at the practices. Data on consultations, medications, and referrals will be used to investigate patterns of primary and secondary health care utilisation among Phase 2 participants and to compare these to Phase 1 participants who did not attend the research clinic. All sensitive data (name, contact details) will be removed from the medical records data, and the consultation data will be linked to the survey and clinical assessment data by unique survey identifier.

Phase 4: Follow-up mailed survey at 18 months
A follow-up survey will be mailed to all Phase 2 participants 18 months after their baseline clinical assessment. The focus of follow-up will be on clinical (pain/disability severity) change and possible determinants of this. The proposed content of this survey is provided in Table 1. Primary outcome data will be sought from non-respondents by telephone. Participants who have moved practice during the follow-up period will be traced using the NHS tracing service, and their new general practitioner will be asked for permission to include them in the follow-up.

Sample size
The sample size for this study was determined by the estimated numbers of participants needed in Phase 2 to ensure sufficient power for both cross-sectional and longitudinal analyses. A target sample of 800 was set. We estimate that 90 participants (12.5%) will report clinically significant deterioration over the 18-month period. With this number of participants, we will have 80% power to detect a rate ratio of 1.8 or greater, with a minimum 64% exposure rate (e.g. presence of radiographic OA) in those who have deteriorated, at the 95% level of confidence.
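The power statement above, and the risk ratios with 95% confidence intervals planned for the analysis, can be sketched with standard normal approximations. The sketch below is illustrative only: the group sizes, risks and counts are back-calculated assumptions consistent with the stated figures (about 90 of roughly 720 attenders deteriorating, rate ratio 1.8, 64% exposure among those deteriorating), not numbers taken from the protocol:

```python
from math import sqrt, erf, log, exp

def norm_cdf(z):
    # Standard normal CDF via the error function
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def power_two_proportions(p1, p0, n1, n0, alpha_z=1.96):
    # Approximate power of a two-sided z-test comparing risks p1 vs p0
    pbar = (p1 * n1 + p0 * n0) / (n1 + n0)
    se_null = sqrt(pbar * (1 - pbar) * (1 / n1 + 1 / n0))
    se_alt = sqrt(p1 * (1 - p1) / n1 + p0 * (1 - p0) / n0)
    return norm_cdf((abs(p1 - p0) - alpha_z * se_null) / se_alt)

def risk_ratio_ci(a, n_exp, c, n_unexp, z=1.96):
    # Risk ratio with a 95% CI computed on the log scale
    rr = (a / n_exp) / (c / n_unexp)
    se = sqrt(1 / a - 1 / n_exp + 1 / c - 1 / n_unexp)
    return rr, exp(log(rr) - z * se), exp(log(rr) + z * se)

# Assumed reconstruction: ~50% exposure prevalence gives deterioration
# risks of roughly 16% (exposed) vs 9% (unexposed), a rate ratio of 1.8
power = power_two_proportions(0.161, 0.089, 358, 362)
print(round(power, 2))  # just over 0.8 under these assumptions

# Hypothetical 2x2 counts: 58/358 exposed vs 32/362 unexposed deteriorate
rr, lo, hi = risk_ratio_ci(58, 358, 32, 362)
```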
Statistical analysis
Linking data collected at the clinical assessment with that from the 18-month follow-up questionnaire, we will be able to determine prospectively the factors that are related to clinical deterioration, using risk ratios and associated 95% confidence intervals.

Conclusions
The Knee Clinical Assessment Study is a prospective, population-based, observational cohort study based in North Staffordshire that intends to investigate issues surrounding the classification and course of knee pain and knee osteoarthritis in community-dwelling adults aged 50 years and over. The findings of this study will contribute new evidence to the debate on the relative merits of adopting a disease-based and a regional pain syndrome-based approach to understanding and managing this complaint.

Table 1: Content of 18-month follow-up questionnaire (Concept – Measurement method – Details)
• Perceived change in knee pain since baseline – Transition index‡ [40] – Completely recovered / Much better / Better / No change / Worse / Much worse
• Knee injury since baseline – Single item (adapted from [41]) – Yes/No
• GP consultation for knee pain since baseline – Single item (adapted from [41]) – Yes/No
• Health care utilisation for knee pain – Since baseline: single item on services (adapted from [41]); in the last month:
single item on oral analgesia§ (adapted from [41]) and single item on non-pharmacological and other treatments§ – Yes/No to physiotherapy, hospital specialist, acupuncture, osteopath/chiropractor, drugs on prescription, knee operation, knee injection, other; Yes/No to ibuprofen, paracetamol, aspirin, coproxamol, cocodamol, dihydrocodeine, diclofenac, naproxen, tramadol, COX-II, other; Yes/No to dieting to lose weight, specific knee exercises, topical, glucosamine/chondroitin sulphate, other
• Coping strategies – Single-item CSQ [42] – 7 items rating diverting attention, reinterpreting pain sensation, coping self-statements, ignoring sensations, praying/hoping, catastrophising, and increasing behavioural activities on a 7-point scale
• Knee pain/disability – Pain days in last 6 months§; Chronic Pain Grade§ [32]; bothersomeness of knee pain [43]; WOMAC (LK3.0)§ [44]; symptom satisfaction§ (adapted from [31]) – No days, 1–30, 31–89, 90+ days; 7-item scale giving categorical grade I–IV; 5-point Likert scale; Pain (0–20)‡, Stiffness (0–8), Physical functioning (0–68) subscales; 5-point Likert scale
• Bodily pain – Self-completed manikin§ – Pain lasting a day or more in last 4 weeks;
scored as 44 mutually exclusive areas + LBP, neck pain, and hip pain [45]
• Hand – Hand pain in last 12 months§; bothersomeness of hand pain [43]; hand problems in last 12 months§; bothersomeness of hand problems [43]; side of hand pain§; duration in last 12 months§ (adapted from [41]) – Yes/No within last 12 months; 5-point Likert scale; Yes/No within last 12 months; 5-point Likert scale; right/left/both; <7 days / 1–4 weeks / >1 month–<3 months / 3+ months
• Hand physical features – Single item on nodes [46]; hand drawing§ [46] – Yes/No for nodes and knobbly swelling
• Perceived general health – MOS SF-12§‡ [47] – Physical and mental component scores
• Mobility – Mobility limitation; preclinical mobility adjustment [48] – Yes/No; Yes/No
• Anxiety and depression – HAD§ [49] – Anxiety and depression subscales
• Demographic characteristics – Date of birth, gender; current employment status§‡ – Employed, not working due to ill health, seeking employment, retired, housewife, other
• Narrative account – Single item – Open-ended question asking for an account of the course of knee pain
§ also gathered at baseline. ‡ Minimum data to be sought by telephone from non-respondents.

List of abbreviations used
A/P = Anteroposterior; CAS(K) = Clinical Assessment Study of the Knee; CPG = Chronic Pain Grade; CSQ = Coping Strategies Questionnaire; GP = General Practitioner; HAD = Hospital Anxiety & Depression scale; MOS SF-12 = Medical Outcomes Study Short Form-12; MTP = Metatarsophalangeal; OA = Osteoarthritis; P/A = Posteroanterior; PFJ = Patellofemoral joint; ROM = Range of motion; TFJ = Tibiofemoral joint; WB = Weight-bearing; WOMAC = Western Ontario & McMaster Universities Osteoarthritis Index

Competing interests
None declared.

Authors' contributions
All authors participated in the design of the study and drafting the manuscript.
Acknowledgements
This study is supported financially by a Programme Grant awarded by the Medical Research Council, UK (Grant Code: G9900220) and by Support for Science funding secured by North Staffordshire Primary Care Research Consortium for NHS service support costs. KD is supported by a grant from the Arthritis Research Campaign. The authors would like to thank the administrative and health informatics staff at Keele University's Primary Care Sciences Research Centre, staff of the participating general practices and Haywood Hospital, especially Dr Jackie Sakhlatvala, Carole Jackson and the Radiographers at the Department of Radiography.

References
1. Sharma L, Fries JF: Osteoarthritis and physical disability. In: Osteoarthritis: new insights. Part 1: The disease and its risk factors. Ann Intern Med 2000, 133:642-643.
2. Bailey KD: Typologies and taxonomies. An introduction to classification techniques. Quantitative Applications in the Social Sciences. Edited by: Lewis-Beck MS. Thousand Oaks: Sage Publications; 1994.
3. Feinstein AR: On homogeneity, taxonomy, and nosography. In Clinical Biostatistics. St Louis: The C.V. Mosby Company; 1977:353-368.
4. Doherty M: Risk factors for the progression of knee osteoarthritis. Lancet 2001, 358:775-776.
5. Felson DT, Lawrence RC, Dieppe PA, Hirsch R, Helmick CG, Jordan JM, Kington KS, Lane NE, Nevitt MC, Zhang Y, Sowers M, McAlindon T, Spector TD, Poole AR, Yanowski SZ, Ateshian G, Sharma L, Buckwalter JA, Brandt KD, Fries JF: Osteoarthritis: new insights. Part I: The disease and its risk factors. Ann Intern Med 2000, 133:635-646.
6. Peat G, McCarney R, Croft P: Knee pain and osteoarthritis in older adults: a review of community burden and current use of primary health care. Ann Rheum Dis 2001, 60:91-97.
7. Felson DT, Zhang Y, Hannan MT, Naimark A, Weissman BN, Aliabadi P, Levy P: The incidence and natural history of knee osteoarthritis in the elderly. The Framingham Osteoarthritis Study.
Arthritis Rheum 1995, 38:1500-1505.
8. Cooper C, Snow S, McAlindon TE, Kellingray S, Stuart B, Coggon D, Dieppe PA: Risk factors for the incidence and progression of radiographic knee osteoarthritis. Arthritis Rheum 2000, 43:995-1000.
9. McCormick A, Fleming D, Charlton J: Morbidity statistics from general practice. Fourth national study 1991–1992. Office of Population Censuses and Surveys. Series MB5 No.3. London: HMSO; 1995.
10. RCR Working Party: Making the Best Use of a Department of Clinical Radiology: Guidelines for Doctors. Fourth edition. London: The Royal College of Radiologists; 1998.
11. Bedson J, Jordan K, Croft P: How do GPs use x rays to manage chronic knee pain in the elderly? A case study. Ann Rheum Dis 2003, 62:450-454.
12. Morgan B, Mullick S, Harper WM, Finlay DB: An audit of knee radiographs performed for general practitioners. Br J Radiol 1997, 70:256-260.
13. Hopman-Rock M, de Bock GH, Bijlsma JW, Springer MP, Hofman A, Kraaimaat FW: The pattern of health care utilization of elderly people with arthritic pain of the hip or knee. Int J Qual Health Care 1997, 9:129-137.
14. McAlindon TE: The knee. Baillière's Best Pract Res Clin Rheumatol 1999, 13:329-344.
15. Hannan MT, Felson DT, Pincus T: Analysis of discordance between radiographic change and knee pain in osteoarthritis of the knee. J Rheumatol 2000, 27:1513-1517.
16. Van Eerd D, Beaton D, Cole D, Lucas J, Hogg-Johnson S, Bombardier C: Classification systems for upper-limb musculoskeletal disorders in workers: a review of the literature. J Clin Epidemiol 2003, 56:925-936.
17. Hadler NM: The semiotics of "upper limb musculoskeletal disorders in workers". J Clin Epidemiol 2003, 56:937-939.
18. Van Eerd D, Beaton D, Bombardier C, Cole D, Hogg-Johnson S: Classifying the forest or the trees? J Clin Epidemiol 2003, 56:940-942.
19. Knottnerus JA: Between iatrotropic stimulus and interiatric referral: the domain of primary care research. J Clin Epidemiol 2002, 55:1201-1206.
20.
Feinstein AR: Clinical epidemiology. The architecture of clinical research. 1st edition. Philadelphia: WB Saunders Company; 1985.
21. Bijleveld CCJ, van der Kamp LJT, Mooijaart A, van der Kloot WA, van der Leeden R, van der Burg E: Longitudinal data analysis. Designs, models and methods. Thousand Oaks: Sage Publications; 1998.
22. Menard S: Longitudinal research. Series: Quantitative Applications in the Social Sciences. 2nd edition. Thousand Oaks: Sage Publications; 2002.
23. Silman A, Symmons D: Reporting requirements for longitudinal studies in rheumatology. J Rheumatol 1999, 26:481-483.
24. Wolfe F, Lassere M, van der Heijde D et al.: Preliminary core set of domains and reporting requirements for longitudinal observational studies in rheumatology. J Rheumatol 1999, 26:484-489.
25. Thomas E, Wilkie R, Peat G, Hill S, Dziedzic KS, Croft PR: The North Staffordshire osteoarthritis project-NorStOP. Prospective 3-year study of natural history and health care utilisation in the general population in North Staffordshire (NorStOP). BMC Musculoskelet Disord 2004, 5(2).
26. Buckland-Wright C, Wolfe F, Ward RJ, Flowers N, Hayne C: Substantial superiority of semiflexed (MTP) views in knee osteoarthritis: A comparative radiographic study, without fluoroscopy, of standing extended, semiflexed (MTP) and Schuss views. J Rheumatol 1999, 26:2664-2674.
27. Peat G, Lawton H, Hay E, Greig J, Thomas E, for the KNE-SCI Study Group: Development of the KNE-SCI: a research tool for studying the primary care clinical epidemiology of knee problems in older adults. Rheumatology 2002, 41:1101-1108.
28. Altman R, Alarcon G, Appelrouth D, Bloch D, Borenstein D, Brandt K et al.: The American College of Rheumatology criteria for the classification and reporting of osteoarthritis of the hand. Arthritis Rheum 1990, 33:1601-1610.
29. Von Korff M, Jensen MP, Karoly P: Assessing global pain severity by self report in clinical and health services research. Spine 2000, 25:3140-3151.
30.
de Vet HCW, Heymans MW, Dunn KD, Pope DP, van der Beek AJ, Macfarlane GJ, Bouter LM, Croft PR: Episodes of low back pain. A proposal for uniform definitions to be used in research. Spine 2002, 27:2409-2416.
31. Cherkin DC, Deyo RA, Street JH, Barlow W: Predicting poor outcomes for back pain seen in primary care using patients' own criteria. Spine 1996, 21:2900-2907.
32. Von Korff M, Ormel J, Keefe FJ, Dworkin SF: Grading the severity of chronic pain. Pain 1992, 50:133-149.
33. Altman RD, Hochberg M, Murphy A, Wolfe F, Lesquesne M: Atlas of individual radiographic features in osteoarthritis. Osteoarthritis Cartilage 1995, 3:3-70.
34. Altman RD, Fries JF, Bloch DA, Carstens J, Cooke TD, Genant H et al.: Radiographic assessment of progression in osteoarthritis. Arthritis Rheum 1987, 30:1214-1225.
35. Burnett S, Hart D, Cooper C, Spector T: A radiographic atlas of osteoarthritis. London: Springer-Verlag; 1994.
36. Kellgren JH, Lawrence JS: Radiological assessment of osteoarthrosis. Ann Rheum Dis 1957, 16:494-501.
37. Klippel JH, Dieppe P, Ferri FF: Primary Care Rheumatology. London: Mosby; 1999.
38. Whitney CW, Lind B, Wahl PW: Quality assurance and quality control in longitudinal studies. Epidemiol Rev 1998, 20:71-80.
39. Peat G, Wood L, Wilkie R, Thomas E, for the KNE-SCI Study Group: How reliable is structured clinical history-taking in older adults with knee problems? Inter- and intra-observer variability of the KNE-SCI. J Clin Epidemiol 2003, 56:1030-1037.
40.
van der Windt DA, Koes BW, Deville W, Boeke AJ, de Jong BA, Bouter LM: Effectiveness of corticosteroid injections versus physiotherapy for treatment of painful stiff shoulder in primary care: randomised trial. BMJ 1998, 317:1292-1296.
41. Jinks C, Jordan K, Ong BN, Croft P: A brief screening tool for knee pain in primary care (KNEST). 1. Validity and reliability. Rheumatology 2001, 40:528-536.
42. Jensen MP, Keefe FJ, Lefebvre JC, Romano JM, Turner JA: One- and two-item measures of pain beliefs and coping strategies. Pain 2003, 104:453-469.
43. Dunn KM, Ong BN, Hooper H, Croft PR: Classification of low back pain in primary care: using "bothersomeness" to identify the most severe patients. [Oral presentation (F6, p.17)]. International Forum VI for Primary Care Research on Low Back Pain. Linköping 2003.
44. Bellamy N: WOMAC osteoarthritis index. A user's guide. London Health Services Centre, McMaster University: Ontario; 1996.
45. Lewis M, Jordan K, Lacey RJ, Jinks C, Sim J: Inter-rater reliability assessment of the scoring of the body manikin. Rheumatology 2003, 42(Suppl 1):52.
46. O'Reilly S, Johnson S, Doherty S, Muir K, Doherty M: Screening for hand osteoarthritis (OA) using a postal survey. Osteoarthritis Cartilage 1999, 7:461-465.
47. Ware J Jr, Kosinski M, Keller SD: A 12-item Short-Form Health Survey: construction of scales and preliminary tests of reliability and validity. Med Care 1996, 34:220-233.
48. Fried LP, Young Y, Rubin G, Bandeen-Roche K, WHAS II Collaborative Research Group: Self-reported preclinical disability identifies older women with early declines in performance and early disease. J Clin Epidemiol 2001, 54:899-901.
49. Zigmond AS, Snaith RP: The hospital anxiety and depression scale. Acta Psychiatr Scand 1983, 67:361-370.
Pre-publication history
The pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1471-2474/5/4/prepub
----

Primate Coloration: An Introduction to the Special Issue

James P. Higham

Received: 30 September 2009 / Accepted: 2 October 2009 / Published online: 24 October 2009
© Springer Science + Business Media, LLC 2009

Color plays a significant role in the ecology and behavior of many animal species, including in food detection and foraging, intraspecific signaling, predator detection, and camouflage. However, understanding the role of color in the lives of animals offers many challenges, as animal coloration sits at the boundary between chemistry (the chemical basis and properties of colors exhibited and encountered), physics (the composition and structure of light, and physical patterns of reflection and absorption), and numerous aspects of animal biology. As such, color and its perception are at the interface between an animal's behavioral and sensory ecology and the wider physical environment. Primates are the most colorful group of mammals, and many species display striking sexual dimorphism in pelage, facial, and anogenital skin coloration.
The range of colors exhibited by primates of diverse taxa suggests the importance of color in primate behavior and social signaling, but empirical studies of primate color were rare until relatively recently. Primate coloration research has lagged behind work conducted on other taxa such as fish, birds, insects, and reptiles in the number of studies of adaptive function, as well as in the sophistication of the methods used to overcome the inherent problems of color quantification, and in the appreciation of differences in the sensory systems with which animals detect and interpret color. The limited ability to conduct controlled experiments on primates, and the obstacles of studying colorful primates that often live in wet, dark forests (making the use of photographic equipment difficult), have provided further challenges to color research in this group.

Int J Primatol (2009) 30:749–751
DOI 10.1007/s10764-009-9381-y

J. P. Higham (*)
Institute for Mind and Biology, University of Chicago, Chicago, IL 60637, USA
e-mail: jhigham@uchicago.edu

On August 5, 2008, a symposium entitled "Primate Coloration: Measurement, Mechanisms and Function" took place as part of the 22nd Congress of the International Primatological Society in Edinburgh. This symposium brought together researchers who have studied various aspects of coloration in diverse species, including nonprimates, and sought to highlight the growing range of research on primate coloration, to shed light on the underlying mechanisms determining color expression, and to elucidate the functions of color signals in the lives of primates. This issue is a result of that symposium, and features papers by the majority of the speakers who participated on the day. I would like to take this opportunity to thank Russell Tuttle for first inviting this special issue, and Jo Setchell for being so enthusiastic and supportive of it after she became Editor-In-Chief.
Jo’s own contribution to this issue was invited, and was in preparation, before she was asked by Springer to take up her current role as EIC. Special thanks are due to the British Ecological Society, who provided generous sponsorship, and to Melissa Gerald, who originally co-proposed the symposium with me. Springer very kindly offered to produce color figures for free throughout the issue. Janet Slobodien, Myrene Grande, Theresa Kornak, and various editorial assistants, all helped enormously at various stages of the process, ensuring that the issue was produced quickly and to a high standard. I would also like to thank all of the referees who reviewed papers; the quality of the final manuscripts is testament to their hard work. The issue begins with a study by Melin et al., who show that there is variation in foraging strategies among white-faced capuchins (Cebus capuchinus) according to visual system type, with dichromatic individuals apparently using other senses such as olfaction to compensate for reduced visual discriminability when selecting ripeness of figs by color. A number of studies then focus on the importance of skin color variation in various Old World primate species. Dubuc et al. show that variation in rhesus macaque (Macaca mulatta) female facial color correlates with the timing of ovulation, such that facial color has the potential to signal this to others. Bergman et al. present data on gelada (Theropithecus gelada), showing that male chest patch color acts as a badge of status, with harem leader males being more colorful than others, and males with larger harems being more colorful than those with smaller harems. Marty et al. present data on one of the most colorful but least studied of catarrhines, the drill (Mandrillus leucophaeus). They show that color of male drills is positively related to rank, but that rank appears to be a more important factor than color in determining male-female association patterns and rates of sexual behaviors. 
Setchell et al. combine data from their long-term studies on mandrills (Mandrillus sphinx) with new data, to ask whether mandrill coloration acts as a signal of overall individual quality. They find no evidence from their study population that this is the case, despite the many previous exciting results from this population showing that color is linked to status. Finally, Stephen et al. show that facial color can be important in signaling health and condition in our own species (Homo sapiens), with observers changing the color of human faces to make them appear healthier. This study reminds us that investigating how color may signal underlying physiological condition is relevant for understanding ourselves as well as the nonhuman primates we study. Though the aforementioned manuscripts reveal the large recent increase in the number of studies addressing the function of primate skin color signals, this has unfortunately yet to be matched in the number investigating pelage color. As such, it is especially pleasing to be able to include studies on the underlying mechanisms and adaptive significance of fur coloration. Clough et al. provide one of the first studies to measure pelage color objectively in the wild. They show that, although variation in color of the rufous crown of male red-fronted lemurs (Eulemur fulvus rufus) does not correlate with variation in male reproductive success or rank, it is nonetheless related to interseason differences in androgen levels, and hence may provide some information about male condition. In a second study of pelage color, Kamilar presents interesting new comparative analyses testing the function of countershading in primate groups, and finds support for hypotheses linking the evolution of countershading to antipredation strategies.
Overall, the research manuscripts assembled collectively highlight the importance of color in the lives of many primate species, including our own, in both social and sexual signaling, camouflage, and in detecting and obtaining food. Numerous studies in this issue measure color objectively via digital photography. However, it is important to remember that the detection and interpretation of color signals is not objective; it is entirely subjective, according to receiver perception. In this regard, Stevens et al. present a detailed methods paper that includes techniques for the analysis of color with respect to specific visual systems, linking the study of primate color signals to the visual sciences. Given information about visual system properties, discrimination threshold modeling can be used to determine how different colors appear within the visual system(s) of the relevant receiver(s). Primatologists are well placed to use such methods given that the visual systems of a broad range of primate species have been studied in detail. The approach outlined in the contribution of Stevens et al. would seem particularly well suited to investigating the significance of color in species with polymorphic color vision, such as many New World monkeys and lemurs. Linking genetic approaches for the determination of visual system type, detailed field studies of behavior, and visual system modeling, should allow investigation of the colors exhibited by cryptic and conspicuous fruits, camouflaged predators, and the displays of conspecifics, to occur within the context of how different group members are likely to perceive the same color signals. This is likely to be key in understanding the selective pressures that maintain such polymorphisms in natural populations. 
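The discrimination-threshold modeling mentioned above is often built on a receptor-noise-limited model in the style of Vorobyev and Osorio, in which a color difference is expressed in just-noticeable-difference (JND) units given each receptor's quantum catch and noise. The sketch below illustrates that general idea only; it is not the specific method of Stevens et al., and all quantum-catch values and Weber fractions in the example are invented for illustration.

```python
import math

def jnd_dichromat(qa, qb, weber):
    """Chromatic distance (JND units) between stimuli a and b for a
    two-receptor viewer under a receptor-noise-limited model."""
    # Log quantum-catch contrasts per receptor (Fechner log coding).
    df1, df2 = (math.log(x / y) for x, y in zip(qa, qb))
    return abs(df1 - df2) / math.sqrt(weber[0] ** 2 + weber[1] ** 2)

def jnd_trichromat(qa, qb, weber):
    """Same distance for a three-receptor viewer."""
    df1, df2, df3 = (math.log(x / y) for x, y in zip(qa, qb))
    num = ((weber[0] * (df3 - df2)) ** 2
           + (weber[1] * (df3 - df1)) ** 2
           + (weber[2] * (df2 - df1)) ** 2)
    den = ((weber[0] * weber[1]) ** 2
           + (weber[0] * weber[2]) ** 2
           + (weber[1] * weber[2]) ** 2)
    return math.sqrt(num / den)

# Hypothetical quantum catches (S, M, L receptors) for a "ripe" vs an
# "unripe" fruit, and hypothetical Weber fractions of 0.05 per receptor.
ripe, unripe = (1.3, 1.0, 0.9), (1.0, 1.0, 1.0)
weber3 = (0.05, 0.05, 0.05)
ds_tri = jnd_trichromat(ripe, unripe, weber3)          # well above 1 JND
ds_di = jnd_dichromat((1.0, 0.9), (1.0, 1.0), (0.05, 0.05))
```

With these made-up numbers the trichromat's distance comfortably exceeds the 1-JND discrimination threshold, while a dichromat lacking the first receptor sees a much smaller difference, which is the kind of contrast such modeling is used to quantify in species with polymorphic color vision.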
The utility of such methods for the analysis of signals in species with visual systems similar to our own, i.e., red, green, blue trichromats, remains to be seen, though early indications are that even here this approach can be enlightening (Higham et al. unpubl data). The importance of all color signals encountered by animals, from a ripe fig in a tree, to the bright face or fur of a conspecific, is in the eye of the beholder.
----

Applied Medical Informatics, Original Research, Vol. 24, No.
1-2/2009, pp: 47-52

Risk Factors and Severity of Diabetic Retinopathy in Maramureş

Mihaela MOCIRAN 1,*, Cristian DRAGOŞ 2, Nicolae HÂNCU 1
1 “Iuliu Haţieganu” University of Medicine and Pharmacy Cluj-Napoca, 13 Emil Isac, 400023 Cluj-Napoca, Romania.
2 Babeş-Bolyai University of Cluj-Napoca, 1st Mihail Kogalniceanu, 400084 Cluj-Napoca, Romania.
E-mail(s): mihaela.zoicas@xnet.ro; cristian.dragos@econ.ubbcluj.ro; nhancu@umfcluj.ro
* Author to whom correspondence should be addressed; County Hospital “C. Opris” Baia Mare, 31 George Coşbuc Street, Baia Mare; Tel: 0724044581.

Abstract: Aims: We looked for the factors which can be adjusted in order to reduce the frequency and severity of diabetic retinopathy in Maramureş County, Romania. Methods: We conducted a cross-sectional study of 1269 persons with diabetes mellitus registered in the healthcare system in Maramureş. The signs of diabetic retinopathy were ascertained from retinal digital photographs. We analyzed different parameters: demographics (age, gender, smoking status, alcohol use, living area, socioeconomic status); physical measurements (body mass index BMI, abdominal circumference); laboratory measurements (glycemic control, lipid profile, the degree of proteinuria); diabetes characteristics (duration, type, other complications); and characteristics of comorbid diseases. Results: The total number of patients with retinopathy was 246. Of these, 94 (38.2%) had mild non-proliferative diabetic retinopathy (NPDR), 78 (31.7%) had moderate NPDR, 52 (21.1%) had severe NPDR and 22 (9.0%) had proliferative diabetic retinopathy (PDR). Among our patients 3.2% had maculopathy (0.6% mild, 0.6% moderate and 2.0% severe). The risk factors associated with severity of retinopathy were: diabetes duration (p=0.000), HbA1c (p=0.005), presence of nephropathy (p=0.004), and presence of polyneuropathy (p=0.002). The risk factor associated with severity of maculopathy was presence of nephropathy (p=0.000).
Conclusions: A positive correlation between diabetes duration, diabetes control and severity of retinopathy was found. Severity of retinopathy was higher in the presence of nephropathy and polyneuropathy. Severity of maculopathy was higher in the presence of nephropathy.

Keywords: Diabetes; Retinopathy; Severity; Risk factors.

Introduction

There is an enormous increase in the worldwide prevalence of diabetes. It is estimated that more than 250 million people around the world have diabetes, a number expected to rise to 380 million within 20 years [1]. An individual’s risk is defined by the presence of both genetic and environmental factors. Diabetic retinopathy is the most frequent complication of this burdensome “epidemic” disease. UKPDS [2], a landmark study in type 2 diabetes, and similarly DCCT [3], a remarkable study in type 1 diabetes, established the importance of blood glucose control and blood pressure control in reducing the risk of diabetic retinopathy. DCCT established that a 1% decrement in HbA1c was associated with an approximately 31% lower risk of retinopathy [3]. Similar findings came from the Steno-2 Study [4]. Lipid-lowering agents are widely used in this disease [5]. Since hard exudates and macular oedema are often associated with high lipid levels, lipid-lowering agents may be a therapeutic option for the treatment of this complication [6]. Any improvement in the management of this disease makes a major contribution to the reduction of microvascular complications. Therefore we looked for the factors which can be adjusted in order to reduce the frequency and severity of diabetic retinopathy. There are few data about our patients, the patients of Maramureş County in the northwest of Romania.

Material and Method

All individuals with diabetes registered in the county diabetes register in the region of Maramureş at the end of the year 2007 were included in the study.
We retrospectively analysed the hospital documents, individual records and any other medical notes. All individuals gave oral informed consent for the use of their data. The study followed the recommendations of the Declaration of Helsinki.

Baseline Variables

The variables age, gender, cigarette smoking (present or absent), alcohol consumption (never, occasionally, chronic), living area (urban or rural) and socioeconomic status (good, medium, poor) were obtained from the hospital records or any other medical notes. Diabetes data (type, duration, medications) and complications (peripheral neuropathy - presence or absence; autonomic neuropathy - presence or absence; diabetic nephropathy - presence or absence; peripheral vascular disease - presence or absence) were obtained from the hospital records. Concomitant disease data - presence or absence of hypertension and other cardiovascular diseases (heart attacks, cerebrovascular disease, rheumatic heart disease, congenital heart disease, heart failure), hyperlipidemia, and any other concomitant disease (steatohepatitis, hypothyroidism, hyperthyroidism, rheumatic disease etc.) - were also obtained from the hospital records.

Physical measurements: height without shoes (in m), weight in light clothing (in kg) and waist circumference (in cm). Body mass index (BMI) was calculated as weight (kg) / height² (m²).

Laboratory measurements: glycosylated hemoglobin (HbA1c) (%) (normal < 7%), total cholesterol (mg/dl) (normal < 180 mg/dl), HDL-cholesterol (mg/dl) (normal HDL > 40 mg/dl in men and > 50 mg/dl in women), and triglycerides (mg/dl) (normal < 150 mg/dl) from fasting blood samples measured in local laboratories. Total cholesterol was measured by an enzymatic colorimetric cholesterol esterase method, triglycerides were measured by an enzymatic colorimetric glycerophosphate oxidase method and HDL was measured by direct enzymatic determination (analyser ACCENT 300).
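The BMI calculation above is a one-line formula; a minimal sketch in Python (the function name is illustrative, not from the paper):

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: weight in kilograms divided by the square of height in metres."""
    return weight_kg / height_m ** 2

# Example: 80 kg at 1.75 m gives a BMI of about 26.1 kg/m^2.
```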
Kidney function was assessed by calculating the estimated glomerular filtration rate (GFR) using the Modification of Diet in Renal Disease (MDRD) study equation: GFR (mL/min/1.73 m²) = 175 × (serum creatinine)^(-1.154) × (age)^(-0.203) × (0.742 if female) × (1.212 if African American) [7]. We measured the degree of proteinuria and estimated diabetic nephropathy: microalbuminuria (urinary albumin excretion rate of 30-300 mg/24 h) for incipient nephropathy and macroalbuminuria (urinary albumin excretion of more than 300 mg/24 h). Proteinuria was measured by a colorimetric pyrogallol method (analyser ACCENT 300).

Retinal digital photography (auto fundus camera Nidek AFC-210, CE marked) was performed at the end of the examination after medical dilatation of the pupil. We used standard internal fixation with three positions, the center of the captured image being selected among the macula, the papilla and the midpoint between the macula and papilla. Each of the six digital photographs was graded by the greatest degree in any field for the macula and retina separately. Overall retinopathy and maculopathy levels were assessed based on the International Clinical Diabetic Retinopathy and Diabetic Macular Edema Disease Severity Scale [8], for each patient based on the grading score of the worse eye: none, non-proliferative diabetic retinopathy (NPDR) (mild, moderate, severe) and proliferative diabetic retinopathy (PDR). In addition, the presence and severity of maculopathy (none, mild, moderate or severe) was identified and recorded. In some unclear or severe situations, we referred the patients to an ophthalmologist for further evaluation and treatment.

Inclusion criteria: patients with diabetes mellitus registered in the Diabetes Center of Maramureş County.
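The MDRD study equation used above is a product of power terms; a minimal sketch in Python, assuming serum creatinine in mg/dl (the function and argument names are illustrative, not from the paper):

```python
def mdrd_egfr(creatinine_mg_dl: float, age_years: float,
              female: bool, african_american: bool = False) -> float:
    """Estimated GFR (mL/min/1.73 m^2) by the 4-variable MDRD study equation:
    175 x Scr^-1.154 x Age^-0.203 x 0.742 (if female) x 1.212 (if African American)."""
    egfr = 175.0 * creatinine_mg_dl ** -1.154 * age_years ** -0.203
    if female:
        egfr *= 0.742
    if african_american:
        egfr *= 1.212
    return egfr
```

For example, a 50-year-old non-African-American man with a serum creatinine of 1.0 mg/dl gets an eGFR of roughly 79 mL/min/1.73 m².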
We excluded patients with a history of laser treatment, patients with cataract or other diseases which may affect the quality of digital photography, patients with severe mental diseases, and patients whose medical records did not contain all the data described above.

Statistics

The ordered multinomial logit model was used in order to evaluate the probability of having diabetic retinopathy in relation to the risk factors: diabetes type, diabetes duration, HbA1c, cholesterol, triglycerides, HDL, nephropathy, polyneuropathy, peripheral vascular disease, hyperlipidemia, cardiovascular disease, hypertension, BMI, abdominal circumference, alcohol, smoking, age, gender, living area (medium), and socioeconomic conditions. For these estimations, we used the STATA software package version 9.1.

Results

From a total of 12917 registered patients with diabetes, only 1269 were available for analysis (the rest were excluded because we did not have all the data for analysis, as mentioned above). The sample was representative, the prevalence of retinopathy having a confidence interval of ±2.1% for a 0.05 p-value. The results of the logit model in the analysis of risk factors for diabetic retinopathy are presented in Table 1 and for maculopathy in Table 2.

Table 1. Risk factors for diabetic retinopathy types (ordered multinomial logit model)

Variable                       coef.       p-value
Diabetes type                  -0.033272   0.935
Diabetes duration               0.101247   0.000***
HbA1c                           0.117456   0.005***
Cholesterol                    -0.000688   0.724
Triglycerides                  -0.001066   0.135
HDL                             0.008627   0.113
Nephropathy                     0.829131   0.004***
Hyperlipidemia                 -0.033934   0.718
Cardiovascular disease         -0.397895   0.053*
Hypertension                    0.350344   0.065
BMI                            -0.058148   0.009***
Abdominal circumference         0.016371   0.077
Alcohol                        -0.026031   0.845
Smoking                        -0.019123   0.930
Age                            -0.000014   0.999
Gender                          0.110966   0.512
Medium                          0.291435   0.070*
Socioeconomic conditions        0.046626   0.693
Polyneuropathy                  0.644894   0.002***
Peripheral vascular disease    -0.784258   0.296
N=1269, Pseudo R² = 0.0692, LR χ²(20) = 141.80 (p<0.001)
*significant at 10%, **significant at 5%, ***significant at 1%

The total number of patients with retinopathy was 246. Of these, 94 (38.2%) had mild NPDR, 78 (31.7%) had moderate NPDR, 52 (21.1%) had severe NPDR and 22 (9.0%) had PDR. Among our patients 3.2% had maculopathy (0.6% mild, 0.6% moderate and 2.0% severe). The prevalence of retinopathy in relation to diabetes duration is presented in Table 3; the corresponding results for maculopathy are presented in Table 4. The prevalence distribution of retinopathy in relation to HbA1c is presented in Table 5.

Table 2. Risk factors for diabetic maculopathy types (ordered multinomial logit model)

Variable                       coef.        p-value
Diabetes type                   0.087442    0.912
Diabetes duration               0.061536    0.022**
HbA1c                           0.225284    0.011**
Cholesterol                     0.011612    0.008***
Triglycerides                  -0.004936    0.050***
HDL                             0.004647    0.709
Nephropathy                     1.956632    0.000***
Hyperlipidemia                 -0.423629    0.332
Cardiovascular diseases        -0.304823    0.506
Hypertension                    0.270929    0.541
BMI                            -0.058140    0.301
Abdominal circumference        -0.011453    0.592
Alcohol                         0.492044    0.097*
Smoking                        -0.086044    0.861
Age                             0.016946    0.420
Gender                         -0.183224    0.629
Medium                          0.388188    0.295
Socioeconomic conditions        0.191850    0.465
Polyneuropathy                  0.893494    0.029**
Peripheral vascular disease   -40.996200    0.999
N=1269, Pseudo R² = 0.1456, LR χ²(20) = 62.81 (p<0.001)
*significant at 10%, **significant at 5%, ***significant at 1%

Table 3. The prevalence of retinopathy in relation to diabetes duration (%)

Diabetes duration (years)   NPDR mild   NPDR moderate   NPDR severe   PDR   Total
below 5                     4.6         3.9             3.0           0.9   12.4
from 5 to 10                9.7         6.4             4.3           1.8   22.2
from 10 to 15               11.5        14.2            7.1           4.4   37.2
over 15                     22.2        30.6            11.1          6.3   70.2

Table 4. The prevalence of maculopathy in relation to diabetes duration (%)

Diabetes duration (years)   Mild   Moderate   Severe   Total
below 5                     0.65   0.79       1.57     3.01
from 5 to 10                0.61   0.61       1.82     3.04
from 10 to 15               0.88   0.00       2.65     3.53
over 15                     0.00   0.00       4.76     4.76

Table 5. The prevalence of retinopathy in relation to HbA1c (%)

HbA1c (%)     NPDR mild   NPDR moderate   NPDR severe   PDR   DR total
under 7       5.4         4.5             2.9           1.0   13.8
from 7 to 9   8.2         7.6             5.3           1.9   23.0
over 9        11.9        7.9             5.1           3.4   28.3

Discussion

In a population-based cohort followed after WESDR (the Wisconsin Epidemiologic Study of Diabetic Retinopathy) among US adults with 4-14 years' diabetes duration during the period 1990-2002, the authors also observed less severe retinopathy than expected, with a very low prevalence of moderate-severe nonproliferative retinopathy (10 percent) and only one person with treated or proliferative retinopathy by 14 years' duration. In contrast, at the baseline evaluation in WESDR, moderate-severe nonproliferative retinopathy was found in 35 percent of persons and proliferative retinopathy in 25 percent of persons at 13-14 years' duration [9]. Again, the more recent population-based studies in Sweden and Finland reported lower levels of advanced nonproliferative or preproliferative retinopathy, ranging from approximately 2 percent to 5 percent [10,11]. In these countries the improvement in treatment was the principal explanation for this decrease in prevalence and severity. In a comparative study between France, Italy, Spain, and the United Kingdom, the authors found that the proportion of diagnosed DR ranged from 10.3% (95% CI [6.7%-14.0%]) in Spain to 19.6% (95% CI [16.0%-23.1%]) in the United Kingdom, and consistently across countries mild NPDR was the most common severity level of diagnosed DR. PDR ranged from 19.7% (France) to 31.5% (UK). Diabetic macular edema was reported in approximately 10% of patients [12]. In our region the prevalence of moderate and severe DR was higher but the prevalence of PDR was lower. In our region the most prevalent form is also mild NPDR. It is worth noting that in our country there is no screening program for DR.
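The ordered multinomial logit estimates reported in Tables 1 and 2 map a linear predictor x'β to probabilities of the ordered severity categories through the logistic CDF and a set of estimated cutpoints. The tables report coefficients only, so the cutpoint values below are illustrative placeholders, not estimates from the paper; a minimal sketch:

```python
import math

def ordered_logit_probs(xb, cutpoints):
    """Probabilities of ordered categories (e.g. none, mild, moderate, severe
    NPDR, PDR) given the linear predictor xb = x'beta and ascending cutpoints
    kappa_1 < ... < kappa_{J-1}, using P(y <= j) = F(kappa_j - xb)."""
    logistic = lambda z: 1.0 / (1.0 + math.exp(-z))
    cdf = [logistic(k - xb) for k in cutpoints] + [1.0]  # cumulative probs, last is 1
    return [c - p for c, p in zip(cdf, [0.0] + cdf[:-1])]  # successive differences

# Illustrative use with placeholder cutpoints for five severity categories.
probs = ordered_logit_probs(xb=0.5, cutpoints=[-1.0, 0.5, 1.5, 2.5])
```

With ascending cutpoints the probabilities are non-negative and sum to one; raising xb (for instance, longer diabetes duration with its positive coefficient) shifts probability mass toward the more severe categories.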
There are many patients who did not ask for medical advice, did not follow their treatment or even refused to go to specialists in diabetes. There are few ophthalmologists trained in diabetic retinopathy and unfortunately they work only in university centres. Glycosylated hemoglobin was also strongly related to the risk of retinopathy, and the association was stronger at longer durations. For a 1 percent increase in mean glycosylated hemoglobin level, the hazard of retinopathy increased by 14% at 4 years, and by 44% and 42% at 7 and 9 years, respectively [9]. In the Chennai Urban Rural Epidemiology Study, ordinal logistic regression analysis of 1715 subjects showed that male gender (P = 0.041), duration of diabetes (P < 0.0001), glycated haemoglobin (HbA1c; P < 0.0001), macroalbuminuria (P = 0.0002) and insulin therapy (P = 0.0001) were significantly associated with severity of DR [13]. We also found that HbA1c is associated with severity of DR. A study done in Spain with 208 patients found that patients with long-standing diabetes have a high risk of developing diabetic maculopathy, but other closely related risk factors are hypertension, hyperglycemia, lipids, tobacco smoking and renal status [14]. In a small study with only 38 patients, the author noticed that patients with diabetic maculopathy had significantly higher values of total lipids, triglycerides and total cholesterol; there were no statistically significant differences for HDL [15]. Interestingly, in another study the authors found that the regression of hard exudates was most likely due to aggressive lipid lowering [16]. In our study we had a small number of patients with diabetic maculopathy and the relation between maculopathy and cholesterol and triglycerides is uncertain. We would need many more patients with maculopathy to obtain significant and interpretable data.

Conclusions

A positive correlation between diabetes duration, diabetes control and severity of retinopathy was found.
Severity of retinopathy was higher in the presence of nephropathy and polyneuropathy. Severity of maculopathy was higher in the presence of nephropathy.

References

1. International Diabetes Federation [Internet] [cited 2009 May 10]. Available from: URL: http://www.idf.org/
2. Stratton IM, Adler AI, Neil HAW, Matthews DR, Manley SE, Cull CA et al., on behalf of the UK Prospective Diabetes Study Group. Association of glycaemia with macrovascular and microvascular complications of type 2 diabetes (UKPDS 35): prospective observational study. Br Med J. 2000;321:405-411.
3. Diabetes Control and Complications Trial Research Group. The relationship of glycemic exposure (HbA1c) to the risk of development and progression of retinopathy in the diabetes control and complications trial. Diabetes. 1995;44(8):968-983.
4. Gaede P, Lund-Andersen H, Parving HH, Pedersen O. Effect of a multifactorial intervention on mortality in type 2 diabetes. N Engl J Med. 2008;358(6):580-591.
5. American Diabetes Association. Standards of Medical Care in Diabetes - 2009. Diabetes Care. 2009;32:S13-S61.
6. Chew EY, Klein ML, Ferris FL, Remaley NA, Murphy RP, Chantry K et al. Association of elevated serum lipid levels with retinal hard exudates in diabetic retinopathy. Early Treatment Diabetic Retinopathy Study (ETDRS report 22). Arch Ophthalmol. 1996;114:1079-1084.
7. MDRD GFR Calculator. http://mdrd.com/
8. Wilkinson CP, Ferris FL, Klein RE et al. Proposed international clinical diabetic retinopathy and diabetic macular edema disease severity scales. Ophthalmology. 2003;110:1677-1682.
9. LeCaire T, Palta M, Zhang H, Allen C, Klein R, D'Alessio D. Lower-than-expected prevalence and severity of retinopathy in an incident cohort followed during the first 4-14 years of type 1 diabetes. The Wisconsin Diabetes Registry Study. American Journal of Epidemiology. 2006;164(2):143-150.
10. Falck A, Käär M-L, Laatikainen L.
A prospective, longitudinal study examining the development of retinopathy in children with diabetes. Acta Paediatr. 1996;85:313-319.
11. Kernell A, Dedorsson I, Johansson B, et al. Prevalence of diabetic retinopathy in children and adolescents with IDDM. A population-based multicentre study. Diabetologia. 1997;40:307-310.
12. Rubino A, Rousculp MD, Davis K, Wang J, Girach A. Diagnosed diabetic retinopathy in France, Italy, Spain, and the United Kingdom. Prim Care Diabetes. 2007 Jun;1(2):75-80. Epub 2007 Apr 2.
13. Pradeepa R, Anitha B, Mohan V, Ganesan A, Rema M. Risk factors for diabetic retinopathy in a South Indian Type 2 diabetic population - the Chennai Urban Rural Epidemiology Study (CURES) Eye Study 4. Diabet Med 2008.
14. Asensio-Sánchez VM, Gómez-Ramírez V, Morales-Gómez I, Rodríguez-Vaca I. Clinically significant diabetic macular edema: systemic risk factors. Arch Soc Esp Oftalmol. 2008;83(3):173-176.
15. Golubovic-Arsovska M. Association of dyslipidaemia with macular oedema and hard exudates in diabetic maculopathy. Prilozi. 2007;28(2):149-160.
16. Cusick M, Chew EY, Chan CC, Kruth HS, Murphy RP, Ferris FL 3rd. Histopathology and regression of retinal hard exudates in diabetic retinopathy after reduction of elevated serum lipid levels. Ophthalmology. 2003;110(11):2126-2133.

© 2009 by the authors; licensee SRIMA, Cluj-Napoca, Romania.

work_pypqg62bxzccjeojqtkpqbc6nm ---- Microsoft Word - The Carte de Visite and Domestic Digital Photography.docx

The Version of Record of this manuscript has been published and is available in Photographies 9:3 (2016). http://www.tandfonline.com/10.1080/17540763.2016.1202309

The Carte de Visite and Domestic Digital Photography

Stephen Burstow

Introduction

Entombed in the forbidding leather covers of forgotten family albums, the carte de visite was largely ignored or derided for much of the twentieth century.
As the most popular form of photograph from the 1860s to the 1880s, the carte was portrayed by art historians as the ubiquitous product of shallow commercialism. Carte de visite portraits, in particular, with their stock studio settings and stereotyped poses, were seen as devoid of individual expression, while for others their social function was confined to the rising middle class’s visual display of status.1 From the beginning of this century, however, the carte de visite, along with other forms of nineteenth-century photography, has received sustained reappraisal by historians and cultural theorists who have conducted fine-grained explorations of the social practices around the production, exchange, and collection of cartes. The porosity of the public and private has been a particular focus of this research. Markers of status became disconcertingly democratised, with portrait cartes of royalty, statesmen, and clergy jostling for space in the print seller’s window with those of actresses and sportsmen.2 These mass-produced celebrity cartes were then placed in carte de visite albums alongside portraits of friends and relatives, together with cartes of sentimental scenes and exotic landscapes and peoples. This visual curation of a world extending from the domestic parlour to the

1 John Tagg, “A Democracy of the Image: Photographic Portraiture and Commodity Production,” in The Burden of Representation: Essays on Photographies and Histories (London: Macmillan, 1988). 2 John Plunkett, “Celebrity and Community: The Poetics of the Carte-De-Visite,” Journal of Victorian Culture 8, no. 1 (2003).
Egyptian pyramids has prompted several authors to make comparisons between cartes de visite and domestic digital photographs: in both carte albums and on social media platforms, images of family and friends are arranged contiguously with images gleaned from the wider world.3 These nascent comparisons have prompted the approach for this study of the carte de visite, one with an awareness of the qualities associated with domestic digital photography: ephemerality, proliferation, and the affordance of self-performance and social communication over and above memory. The affordance of memory is a particular focus as it is routinely claimed that while domestic digital photography and related online image sharing platforms serve the needs of immediate social communication, the domestic photographs of the pre-digital era were made and collected primarily for posterity.4 This article questions this portrayal by investigating countervailing practices: while twenty-first-century subjects are availing themselves of the expanding mnemonic functionality of social media platforms such as Facebook, nineteenth-century portrait sitters and album compilers are shown to have given great importance to the immediate social utility of the carte de visite. This article examines cartes de visite through their full temporal life: from production to dissemination and collection in albums and then finally to their fate beyond the first generation of their origin. Before proceeding, two orientations are required. Firstly, the relationship of memory to the carte de visite and domestic digital photography is placed in the context of a wider discourse on memory and photography. Secondly, as this article relies on carte de visite albums as primary sources, a brief summary of the related scholarship is useful to contextualise the focus of this survey of carte de visite albums in Australian collections.
3 Patrizia Di Bello, Women's Albums and Photography in Victorian England (Aldershot: Ashgate, 2007), 156-160; Risto Sarvas and David M. Frohlich, From Snapshots to Social Media: The Changing Picture of Domestic Photography (New York: Springer, 2011), 37–42. 4 From Snapshots to Social Media: The Changing Picture of Domestic Photography, 133.

Photography and Memory

On 4 February 2016, Facebook celebrated its twelfth birthday by placing short “Friends Day” videos on users’ Timelines. Accompanied by music and text (“Here are your friends/You’ve done a lot together/Your friends are pretty awesome”), the videos presented a sequence of photographs of users with their Facebook friends. Paula Young Lee, a columnist with the online magazine Salon.com commented: “As we sift through images of our curated experiences on Facebook, we do not relive our past so much as we recreate ourselves out of selected memories. Thanks to our willing collaboration, Facebook is creating an alternate reality we like better than the kind that wrinkles and ages.”5 Lee’s experience of having her authentic memory obstructed, even supplanted, by photographic images, echoes a critical tradition that can be traced from Charles Baudelaire’s nineteenth-century polemics against photography to texts by Siegfried Kracauer and Walter Benjamin written in the 1920s and 1930s.6 This understanding of memory and photography contrasts the memory images that attain their strength through subjective affections and repressions, with photographic images that are merely objective collections of externalities.
When Benjamin surveys the portraits in a nineteenth-century family album, the faithful recording of appearances reveals nothing but a series of “foolishly draped” figures posed in ridiculous studio sets.7 Kracauer, looking at a family portrait of his grandmother taken when she was a young woman, also finds the indexicality of photography of little assistance in establishing a mnemonic connection: “All right, so it’s grandmother; but in reality it’s any young girl in 1864…. Likeness has ceased to be of any help.”8

5 Paula Young Lee, “Thanks for Nothing, Facebook: #Friendsday Is Our Latest Reminder That Curated Selves Are Becoming ‘More Real Than Real,’” Salon.com, http://www.salon.com/2016/02/04/thanks_for_nothing_facebook_friendsday_is_our_latest_reminder_that _curated_selves_are_becoming_more_real_than_real/. 6 Charles Baudelaire, “Salon of 1859,” in Art in Theory, 1815–1900: An Anthology of Changing Ideas, ed. Charles Harrison, Paul Wood, and Jason Gaiger (Malden, MA: Blackwell, 1998); Siegfried Kracauer, “Photography,” in The Mass Ornament: Weimar Essays, ed. Thomas Y. Levin (Cambridge, MA: Harvard University Press, 1995); Walter Benjamin, “A Small History of Photography,” in One Way Street and Other Writings (London: NLB, 1979). 7 “A Small History of Photography,” 246. 8 Kracauer, “Photography,” 48.

Benjamin’s and Kracauer’s sense of alienation from these family portraits contrasts with the experience of the subjects and the immediate collectors of nineteenth-century studio portraits for whom the direct trace of a loved one’s presence held fetishistic power.
As art historian and journalist Lady Elizabeth Eastlake commented in 1857: What indeed are nine-tenths of those facial maps called photographic portraits, but accurate landmarks and measurements for loving eyes and memories to deck with beauty and animate with expression, in perfect certainty, that the ground-plan is founded upon fact?… Though the faces of our children may not be modelled and rounded with that truth and beauty which art attains, yet minor things—the very shoes of the one, the inseparable toy of the other—are given with a strength of identity which art does not even seek [emphasis in original].9 What is particularly significant about Eastlake’s text is that establishing the veracity of representation is only the first step in bringing the depicted loved one fully to mind: the viewer’s “loving eyes and memories” are required to transform the photographic portrait into a memory image. For all its indexicality, the portrait is only a map, an outline that must yet be adorned with beauty and expression. For the personal portrait to retain some of this mnemonic potency beyond the first generation of its production requires its continued integration into the rituals of family oral history. And as Martha Langford has shown through her study of Canadian family photograph albums, these once-treasured artefacts easily vanished from ongoing family narration.10 For Benjamin and Kracauer this was clearly the case: in two generations the family carte album had become a dusty relic filled with empty curiosities. This anti-mnemonic perspective has been challenged by recent scholarship that has investigated the social practices surrounding the emergence of the carte de visite in the 1860s. As Elizabeth Siegel observes, at this time “family 9 Elizabeth Eastlake, “‘Photography,’ Reprinted from Quarterly Review (London) 101 (April 1857),” in Photography: Essays & Images: Illustrated Readings in the History of Photography, ed.
Beaumont Newhall (London: Secker and Warburg, 1980), 94. 10 Martha Langford, Suspended Conversations: The Afterlife of Memory in Photographic Albums (Montreal: McGill-Queen's University Press, 2008), 199.

photograph albums were the height of novelty; their contents were emphatically of the present and acted as daily reminders for the living.”11 This attention to nineteenth-century social practices surrounding the production, exchange and collecting of cartes shifts the focus from the memory that sustains family narratives across centuries and generations to the recent memory required for everyday life—a zone of memory that is also pertinent to domestic digital photography. For most Facebook users, the “Friends Day” videos posted to their Timelines would have reached back no further than ten years, reflecting the much-observed temporal shallowness of online social media. Yet as Paula Young Lee’s commentary indicates, the curating of this memory is vigorously disputed. The intention of this article is to bring this twenty-first-century experience of memory as an unfolding, contested affordance of photography, to the consideration of the carte de visite.

Carte de Visite Albums

The carte de visite album, as a site of curation and display of individual sensibility, familial bonds, and wider cultural awareness, is crucial to this study. Although the carte album’s importance was noted by Beaumont Newhall and Elizabeth Anne McCauley, more recent historical scholarship has offered detailed descriptions and analyses of particular albums: Patricia di Bello examines albums of two upper-class English women while Lara Perry considers an album compiled by the English Bishop Samuel Wilberforce.
Martha Langford singles out a small number of albums from the collection of the Canadian McCord Museum for more detailed commentary and Elizabeth Siegel conducts a more general survey for her history of the nineteenth-century American photograph album.12

11 Elizabeth Siegel, Galleries of Friendship and Fame: A History of Nineteenth-Century American Photograph Albums (New Haven: Yale University Press, 2010), 146. 12 Beaumont Newhall, The History of Photography from 1839 to the Present (1949), 5th ed. (New York: The Museum of Modern Art, 1994), 49; Elizabeth Anne McCauley, A.A.E. Disdéri and the Carte De Visite Portrait Photograph (New Haven: Yale University Press, 2004), 46–48; Di Bello, Women's Albums and Photography in Victorian England; Lara Perry, “The Carte De Visite in the 1860s and the Serial Dynamic of Photographic Likeness,” Art History 36, no. 4 (2012); Langford, Suspended Conversations: The Afterlife of Memory in Photographic Albums; Siegel, Galleries of Friendship and Fame: A History of Nineteenth-Century American Photograph Albums.

This article employs a survey of forty carte de visite albums from the collections of three Australian public libraries.13 This sample size was chosen as being large enough to identify patterns that would extend and critique particular analyses made by di Bello and Siegel on the staging of carte portraits and practices of album organisation and annotation. From the above scholarship, this approach is most closely aligned to that of Siegel’s study of American albums, although in my study there is no attempt to argue for the distinctiveness of albums based on their association with the Australian colonies.

Production

How a subject performs a self has become an inescapable focus of the study of domestic digital photography.
As Nancy Van House observes: "making, showing, viewing and talking about images are not just how we represent ourselves, but contribute to the ways we enact ourselves, individually and collectively, and reproduce social formations and norms."14 The ease of image-making using camera-phones, together with the increasingly public online circulation of personal images, has accelerated the constant recalibrations and changes of register required of the subject in representing the self within these shifting social formations.

The performance of the self in carte de visite portraiture, by contrast, can appear determined and static, evidenced by a stable repertoire of poses, costumes, and studio settings. Elizabeth Anne McCauley, for example, in her pioneering study of French carte de visite portraiture, concludes that the carte de visite "represents an early step toward the simplification of complex personalities into immediately graspable and choreographed performers."15 Yet the substance of McCauley's book demonstrates that this portrait choreography was dynamic and contested territory. She pays particular attention to the complex relationship between photographic and painted portraiture, charting the crucial argument as to which medium was best able to portray the mind, soul or character of the sitter rather than mere appearance. This struggle for authenticity informed the related discussions about appropriate poses, costumes, and settings for carte portraits.

As the carte de visite industry grew, many middle-class patrons ventured into a portrait studio for the first time and experienced the social and psychological challenge of arriving at a pose. Consequently, much presentational advice was published for the benefit of both photographer and patron. The renewed popularity of physiognomy, with its promise of transparency between appearance and inner self, led commentators such as American physician and author Oliver Wendell Holmes to caution: "The anxiety which strives to smooth its forehead cannot get rid of the telltale furrow. The weakness which belongs to the infirm of purpose and vacuous of thought is hardly to be disguised, even though the moustache is allowed to hide the centre of expression" [emphasis in original].16

Pronouncements on the futility of dissembling when posing for carte portraits were supported by the Victorian concern with the education of feelings or sentiment. Even utilitarian philosopher John Stuart Mill was concerned that, especially in England, an obsession with commerce and industry could lead to an absence of "high feelings" and to conduct that was "always directed towards low and petty objects."17 The antidote, derived from the popular Romanticism of Jean-Jacques Rousseau and William Wordsworth, was the contemplation of nature and the study of art and literature.

13 Carte de visite albums were viewed from the collections of the National Library of Australia, the State Library of New South Wales and the State Library of Victoria. The forty albums surveyed represent all of the intact carte albums in these libraries' collections. In these albums carte de visite albumen prints predominate, but the albums also include tintypes, ambrotypes, and colour lithographs, as well as the larger cabinet card albumen prints. Albums from the late 1860s often contained apertures for both cartes de visite and cabinet cards. The carte de visite print was mounted on a card measuring sixty-five by 107 millimetres and the cabinet card measured 107 by 168 millimetres: Alan Davies, An Eye for Photography: The Camera in Australia (Carlton, VIC: The Miegunyah Press, 2004), 20–22.
14 Nancy A. Van House, "Personal Photography, Digital Technologies and the Uses of the Visual," Visual Studies 26, no. 2 (2011): 131.
As Patrizia di Bello comments in her study of Victorian women's photograph albums:

Romanticism and the cult of sensibility were crucial in establishing the private sphere as the setting for the true realisation of personal interiority, through activities and interactions based on authentic feelings rather than performances put on for fashionable considerations and in the interest of social ambition.18

The importance of this cultivation of "personal interiority" can be seen in a trope of portrait poses where the sitter is shown reading, doing embroidery, or contemplating an artwork or flower arrangement. Although the settings for most carte portraits make reference to the domestic parlour, these portraits of interiority create a heightened sense of intimacy for the viewer as the subject is observed engaging in a private activity, rather than presenting themself to the camera. The surveyed albums also present an even purer form of the introspective portrait: one that has no visible object of contemplation. The sitter gazes off into space, but not in order to present a noble profile to the camera—rather the attention is directed inwards with eyes downcast and the face averted from the camera. While the portrait of the Reverend W. Vetch (figure 18) was perhaps intended as a more spiritual variation on this theme, it shares with other introspective portraits the communication of depth of character and sentiment through inward focus.

Figure 18. Albumen carte de visite photograph. Portrait of Reverend W. Vetch mounted in album. Brearley and Bray families' collection. State Library of New South Wales, PXE 1525.

Despite the dangers of inauthenticity expressed by Oliver Wendell Holmes above, most commentators on the practice of posing for a carte portrait advised sitters to adopt a posing technique, as simply presenting oneself to the camera lens in a transparently neutral pose gave unacceptable results. An anonymous author, writing in a British weekly journal in 1862 about his own experience of carte portraiture, felt that the portrait resulting from simply "presenting a tolerably favourable view of my features and limbs to the fatal lens" was "so tame and unimposing a picture that I determined on the next occasion to throw more intellect into the thing."19 At the next portrait sitting the writer fixed his gaze on a curtain and "blasted it with the energy of my regard…. That look has, I am happy to say, been reproduced faithfully, and no one could see the portrait without giving its original credit for immense penetration, great energy and strength of character, and a keen and piercing wit."20

The range of commentary and advice available to the portrait subject indicates that carte portraiture was a contested practice where the sitter could exercise considerable agency. As portrayed by the author above, this agency was enhanced through the process of trial and error permitted by the technology of the carte de visite. When portrait subjects sat for their carte portrait, it was possible, using a four-lens studio camera and a moving negative plate holder, to expose four portrait images at a time. The negative plate could be moved to the other side and, with the sitter holding the same pose, another four images could be exposed. After development of the plate and production of the albumen print, eight near-identical portraits could be cut from the grid and pasted onto cards.

15 McCauley, A.A.E. Disdéri and the Carte De Visite Portrait Photograph, 224.
16 Oliver Wendell Holmes, "Doings of a Sunbeam," in Photography: Essays and Images, ed. Beaumont Newhall (London: Secker and Warburg, 1981), 70.
17 John Stuart Mill, Autobiography with an Introduction by C.V. Shields (New York: Liberal Arts Press, 1957), 38.
18 Di Bello, Women's Albums and Photography in Victorian England, 40.
This was the best approach if the sitter wanted multiple prints of a single pose. However, some uncut carte de visite prints have survived that show patrons pursuing other strategies: for example, a sheet of uncut portraits of Princess Gabrielle Bonaparte shows her opting for five different poses across the eight images.21 Cartes collected in albums also give insights into the conduct of studio portrait sessions. In several of the albums surveyed, different portraits of the same person, clearly the product of the same studio session, have been collected together as a series.22 Within a particular session, the set, the pose, and the framing can all vary. Carte portraiture permitted photographers and patrons an expanded field of experimentation with the knowledge that, because of the carte's negative/positive process, individual portraits could be selected later from a proof sheet for multiple printing.

19 "The Carte De Visite," Sydney Morning Herald, 27 October 1862, 8. Patrizia di Bello cites this article as being published in the British journal All the Year Round (1862): 165‒168. Elizabeth Siegel quotes the same article being reprinted in the American Union, Boston, on 16 August 1862, and it appeared in The Sydney Morning Herald on 27 October the same year. This circulation points to the international relevance of social commentary on the newly emergent carte portraiture.
20 Ibid.
21 Oliver Mathews, The Album of Carte-De-Visite and Cabinet Portrait Photographs 1854-1914 (London: Reedminster Publications, 1974), 9. The photographer exposed individual areas of the glass negative by uncapping one lens at a time.
22 An example is found in the carte de visite album, State Library of Victoria, PCLTAF 854 H85.70/72.
With the arrival of the carte de visite, the possibilities for self-performance that were available within a single studio session then opened out across time as patrons made return visits with diverse motivations for commissioning a new set of portraits: dissatisfaction with a previous portrait, a marriage, growing children, a change in career, or a new outfit. For Oliver Wendell Holmes, notwithstanding his earlier-quoted belief in physiognomy, encountering a series of portraits of an intimate enabled novel understandings of personal identity:

We have learned many curious facts from photographic portraits which we were slow to learn from faces. One is the great number of aspects belonging to each countenance with which we are familiar. Sometimes, in looking at a portrait, it seems to us that this is just the face we know, and that it is always thus. But again another view shows us a wholly different aspect, and yet as absolutely characteristic as the first; and a third and fourth convince us that our friend was not one, but many, in outward appearance, as in the mental and emotional shapes by which his inner nature made itself known to us.23

Holmes here intimates a strikingly modern understanding of the self. His sequential encounters with different "views" or photographic portraits of "our friend" facilitate a re-evaluation of the subject's mental and emotional constitution. While not destroying continuity of character, each of these portrait encounters requires a revision and a complication of the previous understanding. For Lara Perry this consideration of the series becomes crucial: "How the individual image gives meaning to the series, and how the series gives meaning to the individual image, is the dynamic of likeness that was exploited in the culture of the carte de visite."24 Emerging from this portrait series is an expanding inter-subjective consciousness that also challenges the poverty of the initial understanding based only on face-to-face encounters. It is significant that Holmes is describing his perceptions of a friend and not a distant public figure, the implication being that the mediation of social relationships through photographic images was already significant in the 1860s.

If the carte de visite represents the point at which photographic portraits began to signify "as part of a set or series rather than as unique objects," domestic digital photography seems to have reached the apogee of series production: who now ever takes a single photograph?25 Yet much of the heat that is generated in public commentary on contemporary private-gone-public photographs occurs when individual images are subjected to scrutiny, rather than being understood as moments from the image-making stream. This escape and appropriation of digital images from databases both private and public now appears so commonplace that it is useful to consider related effects that emerged with the mass production of the carte de visite.

Dissemination

Perhaps the most distinctive quality of digital domestic photography lies in its ease of circulation. These images can have unlimited parallel lives: shared face-to-face on a smartphone, posted to a photo blog, annotated on a social media platform, printed in a photo book, or perhaps even incorporated into a memorial audio-visual presentation at the subject's funeral.

23 Holmes, "Doings of a Sunbeam," 70.
24 Perry, "The Carte De Visite in the 1860s and the Serial Dynamic of Photographic Likeness," 731.
Each of these instances of the image offers different contexts for interpretation, and there is considerable public concern about image creators losing control both of the context and the image file itself.26 By comparison with the recent experience of personal photography from the snapshot era, the dissemination of cartes de visite can potentially offer more useful analogues for this experience of digital domestic photography. As Sarvas and Frohlich observe, while there was no technical barrier in the negative/positive technologies of the "Kodak Path" to the production of unlimited prints from any photographic negative, in practice reproduction of domestic photographs was highly circumscribed: for example, during the snapshot era, photo-lab customers might order a second set of prints for distribution to family and friends, while some families had multiple prints of portraits made to send with Christmas cards.27 Personal carte de visite portraits, by contrast, were usually purchased by the dozen, with the explicit intention of exchange, while most photographic studios held the negatives on file for reorders.

Prior to the carte, painted portraits, whether on canvas or as miniatures, were commissioned for specific recipients and exhibition settings. The portrait miniature, in particular, was part of a culture of gift exchange and letter writing amongst intimates that provided an agreed interpellation.28 The daguerreotype portrait followed this highly directed mode of distribution and display.

25 Ibid., 747.
26 Martin Hand, Ubiquitous Photography (Cambridge: Polity Press, 2012), 180–181.
However, as Lara Perry observes, the carte de visite "escaped these boundaries and found itself in multiple and possibly unpredictable interpretive contexts."29 Many carte portraits of celebrities, from royalty to statesmen and wealthy industrialists, were produced with the same aesthetic as the portraits of any middle-class patron, using similar studio sets, props, and styles of attire. Consequently, there was no formal way to distinguish between a portrait given by a bourgeois portrait subject to friends or family, and the celebrity portrait that could be purchased from a photographic studio or print seller.

Anxiety about potential confusion arising from the conflation of these portrait categories resulted in cautionary tales appearing in the photographic press. Elizabeth Siegel mentions a story published in the American Journal of Photography in 1863 that related how a portrait subject, having become newly famous through success on the stock market, is shocked to find his carte portrait being sold everywhere by print sellers.30 He discovers that the portrait given only to members of his family has been traded by his son to an unscrupulous travelling photographer who copies and markets it as a celebrity carte. In the nineteenth century, with the expansion of the urban middle class, social communications that would previously have remained within family networks were finding their way into the public domain. While facilitating this process, the carte portrait could easily be transformed from an image of private affection into one of public notoriety.

27 Sarvas and Frohlich, From Snapshots to Social Media: The Changing Picture of Domestic Photography, 95–96.
28 Marcia Pointon, "'Surrounded with Brilliants': Miniature Portraits in Eighteenth-Century England," Art Bulletin 83, no. 1 (2001).
29 Perry, "The Carte De Visite in the 1860s and the Serial Dynamic of Photographic Likeness," 739.
Lara Perry observes that although advertising in newspapers for potential marriage partners was still a somewhat controversial practice in the 1860s, these ads "often invited respondents to include a carte in their reply, or suggested an exchange of cartes de visite."31 Perry then lists a series of incidents, reported in the press, that were used to illustrate the potentially deleterious consequences of the practice. A young "Adonis" is found to have accumulated more than a hundred cartes of unsuspecting young women, while other advertisers were lured into meetings where they were publicly humiliated. Within just a few years, cartes de visite had proliferated globally and escaped the bounds of their initial use within elite social circles. Although we are now accustomed to the ease of digital file dissemination, the materiality of the carte de visite provided little barrier to this appropriation. As a highly mobile, standardised commodity, carte portraits easily slipped the leash of their intended functions to take on ambiguous and potentially disturbing new meanings.

Collecting

Our experience of the twenty-first-century photo album is of its dispersal. The centrally curated family album of the last century has lost its dominance as everyone amasses individual collections that are also shared and annotated online. In this way, the visual basis of memory is paradoxically becoming both more individual and more communal.

30 Siegel, Galleries of Friendship and Fame: A History of Nineteenth-Century American Photograph Albums, 56–57.
31 Perry, "The Carte De Visite in the 1860s and the Serial Dynamic of Photographic Likeness," 740.
This mnemonic potency has been downplayed in much of the analysis of domestic digital photography: "personal photographs may be becoming more public and transitory, less private and durable and more effective as objects of communication than of memory."32 By comparison, the carte de visite album, a substantial bound volume taking pride of place in the domestic parlour, appears to offer durability and longevity, its collection of carefully curated family photographs calling forth rituals of remembrance for generations to come. Yet an analysis of carte de visite albums reveals two contrasting paradigms of collecting and organisation, and neither supports the imperatives of dynastic memory.

Figure 19. Albumen carte de visite photographs. Portraits of HRH Duke of Edinburgh and O. W. Brierly mounted in album. Brearley and Bray families' collection. State Library of New South Wales, PXE 1525.

Figure 20. Cabinet card and cartes de visite mounted in opening spread of an album. Left page: albumen cabinet card photograph, reproduction of painting. Right page: two cartes de visite; colour lithograph religious image and albumen photograph, unidentified portrait of child. Ramsay family's collection. State Library of New South Wales, PXE 1165 box 2.

From the surveyed albums, two carte albums in the collection of the State Library of New South Wales provide immediate contrasts in both content and organisation. The first album for consideration comes from the Brearley and Bray families' collection of papers and photographs: an album with ninety-three images, displaying two cartes per page. The first four images are carte portraits of celebrities: on the first page (figure 19) are the Duke of Edinburgh (1844‒1900) and O. W. Brierly (1817‒1894, a marine painter who travelled widely in the Pacific), followed on the second page by a medical doctor and a clergyman. Each is identified in Gothic text written on the mount. The next two portrait subjects, both men, are identified in their own hands. The opening hierarchical taxonomy is clear, with royalty then leading to a survey of prominent professions. The remaining unidentified portraits, presumed to be either of family or friends, appear to be organised according to the age of the sitter, with the elderly, children, and young women grouped together across the double page spreads.

This album exemplifies a paradigm based on the expression of status and the assembly of a virtual family. In some albums, this is reflected in the selection of royal portraits followed by portraits of family and friends, while in other albums of this type the hierarchy begins with the portraits of the family patriarch and matriarch.33 These are the albums where the desire to assert familial bonds can be most strongly felt. As Susan Sontag observed, the demands of commerce, war, empire, and migration in the second half of the nineteenth century were dispersing the members of many middle-class families, and the carte album permitted their virtual reassembly.34

To what extent were the curators of such albums compiling these collections for posterity, for future generations? Elizabeth Siegel recounts a publishing project by a Dr. A. H. Pratt, who, concerned about the lack of family documentation following the chaos of the American Civil War, produced a carte album with many templates for the inclusion of personal, physical, and genealogical data.35 Some Australian commentators also predicted that the carte album would replace the family genealogy preserved in the front of the family bible.36 The survey of Australian carte de visite albums, however, does not support these visions of the carte album as a genealogical resource. Despite some albums including an index page, none have any entries: in only one of the surveyed albums is there a genealogical listing that could potentially be contemporaneous with the compilation of the album.37 And while the subjects of celebrity cartes are frequently identified by hand-written entries on the mounts, family members and friends are rarely identified contemporaneously. This lack of identification suggests that although compilers of these carte albums hoped to stitch together a virtual family through photographic representation, they did not see the carte album forming a genealogical database as envisioned by Dr. Pratt.

The second album, exemplifying a contrasting paradigm of collecting and organisation, comes from the Ramsay family collection and has thirty-seven images that include both cabinet cards and cartes de visite.

32 Van House, "Personal Photography, Digital Technologies and the Uses of the Visual," 125.
33 An album that begins with a royal portrait (Queen Victoria): National Library of Australia Pic Album 359 ID 1585489. An album that begins with the family patriarch and matriarch: "Page and Wood family album of cartes-de-visite," State Library of Victoria PCLTA 1378 H2011.14/1–42.
34 Susan Sontag, On Photography (Harmondsworth: Penguin, 1980), 8–9.
35 Siegel, Galleries of Friendship and Fame: A History of Nineteenth-Century American Photograph Albums, 104.
36 Davies, An Eye for Photography: The Camera in Australia, 22.
37 National Library of Australia Pic Album 349 ID3044249.
The opening spread (figure 20) has a cabinet card reproduction of a painting showing a dog standing on a small iceberg with its paw on a broad-brimmed hat, gazing forlornly to the heavens. On the facing page are two cartes de visite: a colour lithograph of a devotional image, and a child's portrait. Through the rest of the album, cartes of sentimental etchings of dogs are interspersed with family and celebrity portraits, including those of Charles Dickens and William Makepeace Thackeray. Status and hierarchy appear almost absent from this album as systematic forces of collection and organisation. Instead, there is a strong sense of the affective meaning of each carte, whether a sentimental scene gifted by a friend or a commissioned family portrait, determining its inclusion in the album with little thought for image sequence or juxtaposition.

Although Elizabeth Siegel, in her study of American carte albums, concludes that a "wide variety of arrangements" is the result of album organisation being an "intensely personal activity," there are antecedents for this ad hoc mode of album curation.38 An important precursor to the carte album was the sentiment album, which was owned by young women and circulated by the owner amongst her circle of friends and acquaintances.39 Handwritten contributions of epigraphs and verses were intended to convey appropriate feelings while revealing the fineness of the contributor's sensibility, or perhaps, in the case of a potential suitor, a desire to outdo a rival. Carte albums influenced by this social dynamic of the sentiment album depict an evolving social circle through the collection of fashionable and affective images. These heterogeneous collections resisted prescribed methods of organisation, affording the expression of sentiment and taste rather than the building of visual genealogies for future generations.

38 Siegel, Galleries of Friendship and Fame: A History of Nineteenth-Century American Photograph Albums, 10.
39 Andrea Kunard, "Traditions of Collecting and Remembering," Early Popular Visual Culture 4, no. 3 (2006).

Cartes in the Twenty-first Century

William Darrah, in his survey of the carte de visite, claims that "In England alone 300 to 400 million cartes were sold every year from 1861 to 1867."40 How have cartes survived into the twenty-first century, and in what context? Darrah's book, published in 1981, is an indication of cartes beginning to attract attention beyond a small group of collectors. Now, cartes have become part of the collecting mainstream: on eBay.com a search for "carte de visite" gave 37,236 results, with many of these auction lots containing multiple cartes.41 As befits the first form of photograph to be easily shipped around the world, cartes de visite are now part of a global online marketplace. The vast majority of surviving cartes have been separated from albums: an eBay.com search for "carte de visite album" gave 409 results, with only thirty-five albums advertised as containing some cartes and thirty-six albums described as empty.

Museums and libraries hold a comparatively small number of carte albums. In Australia, some of these albums have passed into public collections as part of the papers of politically prominent families, but many have been collected with little or no provenance, coming, for example, from the Public Trustee wishing to dispose of unclaimed deceased estates. Individual unidentified cartes have also been collected by public institutions, and thousands circulate through private collections. Although unidentified family snapshots are routinely curated into new public contexts as "found photography," nineteenth-century studio portraiture has been immune to this kind of appropriation, being far removed from the living memory required for nostalgic fetishism and devoid of the quotidian frisson of the amateur.
Consequently, art museums have searched for other themes through which to present this photography to a twenty-first-century public. A recent exhibition at the Art Gallery of New South Wales, Australia and the Photograph (2015), curated by Judy Annear, is the first major survey of Australian photography to seriously represent cartes de visite, both as individual images and as collected in albums. While historical scholarship is present in the curating of the exhibition, the contextualising of the carte de visite through a catalogue essay proposes that the subjects of these carte portraits should not be abandoned to the Victorian world of rigid convention. Martyn Jolly, in his catalogue essay on the photograph album in nineteenth-century Australia, instead emphasises the dynamic social currency of carte portraiture by quoting from the article, cited earlier, that was reprinted in the Sydney Morning Herald in 1862:42

you have the opportunity of distributing yourself among your friends, and letting them see you in your favorite attitude, and with your favorite expression. And then you get into those wonderful books which everyone possesses, and strangers see you there in good society, and ask who that very striking looking person is.43

The "good society" here could refer either to the portrait's context in the carte album or to the people gathered in the ambiguously private/public space of the domestic parlour. For Jolly, this nineteenth-century account of self-performance has unmistakable parallels with Facebook profile circulation.44 Our contemporary experience of domestic digital photography has opened up the possibility of a new rapport between us and these Victorian portrait subjects and album compilers.

40 William C. Darrah, Cartes De Visite in Nineteenth Century Photography (Gettysburg, PA: W. C. Darrah, 1981), 4.
41 eBay.com, accessed 7 August 2015.
They can now be portrayed as brave pioneers, revelling in the opportunity to articulate themselves and their social world through the photographic image.

Memory: the Carte de Visite and Domestic Digital Photography

Having traced the life of the carte de visite through production, dissemination, and collection, it is appropriate in conclusion to return to the question of how cartes functioned as resources for memory. This relationship of memory to the carte de visite is then employed to briefly interrogate the mnemonic functionality of domestic digital photography.

With the arrival of the carte de visite came many proposals for the production of familial photographic archives. As noted earlier, Dr. A. H. Pratt envisaged the creation of physical and genealogical records through the use of his annotated carte album. Recent scholarship has provided a significant corrective to this male dynastic imperative by fleshing out the social domains imbricated within carte portrait production, distribution, and collection: domains where women were crucial actors. Through this scholarship the photographic studio joins the domestic parlour as a site for the performance of an individual sensibility and the expression of rank and social standing, while the carte itself, as a slippery mass-produced commodity, escapes the confines of intended distribution and precipitates novel social encounters. The largely female curation of the carte album is proposed as a dynamic, creative act, "poised between a reading practice and a writing practice."45 This perspective emphasises the functions of social communication and self-expression over that of memory.

42 Martyn Jolly, "Delicious Moments: The Photograph Album in Nineteenth-Century Australia," in Australia and the Photograph, ed. Judy Annear (Sydney, NSW: Art Gallery of NSW, 2015).
43 "The Carte De Visite," 8.
44 Jolly, "Delicious Moments: The Photograph Album in Nineteenth-Century Australia," 235.
Elizabeth Siegel, in her attempt "to see what these [carte de visite albums] meant at the time of their first use," places the albums "within the context of sentiment rather than memory."46 This perspective offers a more nuanced understanding of the nineteenth-century commentary on memory and the carte de visite. Marcus Aurelius Root (1808‒1888), author and proprietor of one of North America's busiest photographic studios, argued for prescience with regard to family photographs, proposing that carte portraits of all family members should be taken as soon as possible, while the subjects were in good health. Yet the benefits of this documentation are clearly for the current generation: "In this competitious and selfish world of ours, whatever tends to vivify and strengthen the social feelings should be hailed as a benediction."47 This remembrance is tethered strongly in the present, with the carte portraits affording daily reminders of loved ones, whether present, absent, or recently deceased.

There is a further argument against the orientation of carte album collectors towards creating mnemonic resources for future generations: a lack of written identifications. As discussed earlier, in the surveyed Australian albums very few family members are identified. Some friends and celebrities may have been asked to add their autographs below their portraits, as this authenticated the personal nature of the gift, but clearly it was considered unnecessary to identify other friends and family members in writing.

45 Di Bello, Women's Albums and Photography in Victorian England, 25.
46 Siegel, Galleries of Friendship and Fame: A History of Nineteenth-Century American Photograph Albums, 146.
47 Marcus Aurelius Root, "The Camera and the Pencil (1864)," in Photography in Print: Writings from 1816 to the Present, ed. Vicki Goldberg (New York: Simon and Schuster, 1981), 149.
As both Laura Perry and Elizabeth Siegel have observed with regard to carte portraiture, there were many nineteenth-century commentators who claimed that prior knowledge of the subject was crucial in order to complement the inadequate impression supplied by the portrait.48 Oral album-based storytelling was not simply commentary but provided essential identification. As Siegel points out, studio carte portraits lack the “moment and setting that would become the hallmark of the snapshot,” and therefore required even more comprehensive contextualising: a knowledgeable first- generation narrator was assumed.49 What emerges from this consideration of the social functions of the carte de visite is a present temporality and a facilitation of memory through social communication and self- representation. This more integrated understanding of memory and the carte de visite provides a useful perspective to question the marginalising by some scholars of the mnemonic affordances of domestic digital photography.50 Several studies of domestic digital photography have shown a bias towards the immediate sharing of images over their organisation and purposeful archiving.51 Photo streams promote a sense of temporariness with any image being easily replaced by another. Online applications such as Snapchat and WhatsApp, by featuring a short lifespan for exchanged images, have also emphasised this quality of ephemerality. Yet as online storage of personal photographs has expanded exponentially, social media platforms and 48 Perry, “The Carte De Visite in the 1860s and the Serial Dynamic of Photographic Likeness,” 731-732; Siegel, Galleries of Friendship and Fame: A History of Nineteenth-Century American Photograph Albums, 145. 49 Galleries of Friendship and Fame: A History of Nineteenth-Century American Photograph Albums, 145. 50 Van House, “Personal Photography, Digital Technologies and the Uses of the Visual,” 125. 
51 Sarvas and Frohlich, From Snapshots to Social Media: The Changing Picture of Domestic Photography, 107–111. photo storage services have begun to understand the potential of these images to hold users’ attention. New applications and features have been developed to exploit this “digital past.” In 2011, Facebook launched its Timeline feature, which its press release announced as “a new kind of profile that lets you highlight the photos, posts and life events that help you tell your story.”52 In the same year the online application Timehop appeared, offering a way of aggregating personal photographs from various sources, based on the anniversaries of the photos being posted online. In 2014, when Timehop was attracting a million users, a technology journalist compared its mnemonic functionality to existing services: “it’s about resurfacing old memories that might otherwise be relegated to the depths of your Facebook Timeline or Instagram profile.”53 Following Timehop and other services such as Dropbox’s Flashback, Facebook launched a feature in 2015 called “On This Day” that places selections of image- and text-based posts in the user’s newsfeed, focused on the user’s Facebook relationships as well as the anniversaries of posts.54 As discussed earlier in relation to the “Friends Day” video project, Facebook is also exploring various ways of structuring these image posts, in this case into narrated slideshows. It is significant that these recent developments in social media facilitate particular mnemonic excursions rather than the building of life story databases such as Facebook’s Timeline. This instigation of memory through self-performance and social communication was anticipated by the carte de visite, where the photographic stimulus for recall and reminiscence was intimately connected with the strengthening of social bonds and the opportunity for self-representation.
A carte portrait of a loved one could always become incorporated into ritualised life storytelling. Yet, as has been demonstrated, many carte albums evince an ad hoc, contingent structure that arises from affective responses to particular images and occasions rather than overarching narratives. 52 Slater Tow, “Timeline: Now Available Worldwide,” news release, 15 December 2011, https://www.facebook.com/notes/facebook/timeline-now-available-worldwide/10150408488962131/. 53 Ellis Hamburger, “Throwback Thursday Is the Secret to Timehop's Runaway Success,” The Verge, 15 May 2014. 54 This feature uses introductory text such as “Stephen, you and x became friends on Facebook 7 years ago today. We thought you’d like to look back on some of the memories you’ve shared together.” “Introducing on This Day: A New Way to Look Back at Photos and Memories on Facebook,” news release, 24 March 2015, http://newsroom.fb.com/news/2015/03/introducing-on-this-day-a-new-way-to-look-back-at-photos-and-memories-on-facebook/. As noted earlier, the circulation of personal digital images through multiple personal collections has resulted in memory becoming simultaneously more individual and communal. Individual practices of image tagging, annotation, and organisation are channelled through the common affordances of social media platforms. The carte de visite also anticipates this phenomenon with the provision of personal and public images in a standardised format. Through the portrait cartes of intimates, together with cartes of celebrities, artworks, and scenes, an individual album compiler was able to express an idiosyncratic sentiment and taste while still representing their family and social circles within the acceptable bounds of rank and status. Twenty-first-century scholarship has effected a liberation of the carte de visite.
What was previously seen as an artefact imprisoned by stultifying Victorian conventions has been released into the world of modern visual culture. Any carte can now evoke a dynamic life story, whether purchased from the print seller’s window, passed hand-to-hand, or sent to a loved one on the far side of the globe. It is this scholarship’s insight into the way images gain meaning through movement that offers further mutual insights into the domain of domestic digital photography. Baudelaire, Charles. "Salon of 1859." In Art in Theory, 1815-1900: An Anthology of Changing Ideas, edited by Charles Harrison, Paul Wood and Jason Gaiger, 666-68. Malden, MA: Blackwell, 1998. Benjamin, Walter. "A Small History of Photography." Translated by Edmund Jephcott and Kingsley Shorter. In One Way Street and Other Writings, 240-57. London: NLB, 1979. "The Carte De Visite." Sydney Morning Herald, 27 Oct 1862, 8. Darrah, William C. Cartes De Visite in Nineteenth Century Photography. Gettysburg, PA: W.C. Darrah, 1981. Davies, Alan. An Eye for Photography: The Camera in Australia. Carlton, VIC: The Miegunyah Press, 2004. Di Bello, Patrizia. Women's Albums and Photography in Victorian England. Aldershot: Ashgate, 2007. Eastlake, Elizabeth. "'Photography', Reprinted from Quarterly Review (London) 101 (April 1857)." In Photography: Essays & Images: Illustrated Readings in the History of Photography, edited by Beaumont Newhall, 81-95. London: Secker & Warburg, 1980. Hamburger, Ellis. "Throwback Thursday Is the Secret to Timehop's Runaway Success." The Verge, 15 May 2014. Hand, Martin. Ubiquitous Photography. Cambridge: Polity Press, 2012. Holmes, Oliver Wendell. "Doings of a Sunbeam." In Photography: Essays & Images, edited by Beaumont Newhall, 63-77. London: Secker & Warburg, 1981. "Introducing on This Day: A New Way to Look Back at Photos and Memories on Facebook."
News release, 24 March 2015, http://newsroom.fb.com/news/2015/03/introducing-on-this-day-a-new-way-to-look-back-at-photos-and-memories-on-facebook/. Jolly, Martyn. "Delicious Moments: The Photograph Album in Nineteenth-Century Australia." In Australia and the Photograph, edited by Judy Annear, 234-35. Sydney, NSW: Art Gallery of NSW, 2015. Kracauer, Siegfried. "Photography." In The Mass Ornament: Weimar Essays, edited by Thomas Y. Levin, 47-64. Cambridge, MA: Harvard University Press, 1995. Kunard, Andrea. "Traditions of Collecting and Remembering." Early Popular Visual Culture 4, no. 3 (2006): 227-43. Langford, Martha. Suspended Conversations: The Afterlife of Memory in Photographic Albums. Montreal: McGill-Queen's University Press, 2008. Lee, Paula Young. "Thanks for Nothing, Facebook: #Friendsday Is Our Latest Reminder That Curated Selves Are Becoming “More Real Than Real”." Salon.com, http://www.salon.com/2016/02/04/thanks_for_nothing_facebook_friendsday_is_our_latest_reminder_that_curated_selves_are_becoming_more_real_than_real/. Mathews, Oliver. The Album of Carte-De-Visite and Cabinet Portrait Photographs 1854-1914. London: Reedminster Publications, 1974. McCauley, Elizabeth Anne. A.A.E. Disdéri and the Carte De Visite Portrait Photograph. New Haven: Yale University Press, 2004. Mill, John Stuart. Autobiography with an Introduction by C.V. Shields. New York: Liberal Arts Press, 1957. Newhall, Beaumont. The History of Photography from 1839 to the Present (1949). 5th ed. New York: The Museum of Modern Art, 1994. Perry, Lara. "The Carte De Visite in the 1860s and the Serial Dynamic of Photographic Likeness." Art History 36, no. 4 (2012): 728-49. Plunkett, John. "Celebrity and Community: The Poetics of the Carte-De-Visite." Journal of Victorian Culture 8, no. 1 (2003): 55-79. Pointon, Marcia. "'Surrounded with Brilliants': Miniature Portraits in Eighteenth-Century England." Art Bulletin 83, no. 1 (2001): 48-71.
Root, Marcus Aurelius. "The Camera and the Pencil (1864)." In Photography in Print: Writings from 1816 to the Present, edited by Vicki Goldberg, 148-51. New York: Simon and Schuster, 1981. Sarvas, Risto, and David M. Frohlich. From Snapshots to Social Media: The Changing Picture of Domestic Photography. New York: Springer, 2011. Siegel, Elizabeth. Galleries of Friendship and Fame: A History of Nineteenth-Century American Photograph Albums. New Haven: Yale University Press, 2010. ———. "Talking through the 'Fotygraft Album'." Chap. 13 in Phototextualities: Intersections of Photography and Narrative, edited by Alex Hughes and Andrea Noble. Albuquerque: University of New Mexico Press, 2003. Sontag, Susan. On Photography. Harmondsworth, England: Penguin, 1980. Tagg, John. "A Democracy of the Image: Photographic Portraiture and Commodity Production." Chap. 1 in The Burden of Representation: Essays on Photographies and Histories, 35-59. London: Macmillan, 1988. Tow, Slater. "Timeline: Now Available Worldwide." News release, 15 December 2011, https://www.facebook.com/notes/facebook/timeline-now-available-worldwide/10150408488962131. Van House, Nancy A. "Personal Photography, Digital Technologies and the Uses of the Visual." Visual Studies 26, no. 2 (2011): 125-34.
Journal of Coloproctology, Original Article
J Coloproctol (Rio J). 2017;37(3):187–192, www.jcol.org.br

A randomized trial study on the effect of amniotic membrane graft on wound healing process after anal fistulotomy

Ghahramani Leila a, Pirayeh Saeideh a, Khazraei Hajar a, Bagherpour Ali a, Hosseini Seyed Vahid a, Noorafshan Ali b, Safarpour Ali Reza c,∗, Mousavi Laleh a

a Shiraz University of Medical Sciences, Colorectal Research Center, Shiraz, Iran
b Shiraz University of Medical Sciences, Anatomy Department, Stereology Research Center, Shiraz, Iran
c Shiraz University of Medical Sciences, Gastroenterohepatology Research Center, Shiraz, Iran

Article history: Received 19 December 2016; Accepted 27 March 2017; Available online 15 May 2017

Keywords: Anal fistula; Human amniotic membrane; Wound healing; Post-operative complication

Abstract
Objective: Human amniotic membrane (HAM) has been used as a wound coverage for more than a century. The aim of this study is to evaluate the efficacy of amniotic membrane on wound healing and to reduce post-operative complications.
Study design: Randomized clinical trial study.
Place and duration of study: Surgery Department, Shahid Faghihi Hospital, Shiraz, in the period between Sep. 2014 and Nov. 2015.
Methodology: 73 patients with anal fistula were divided into two groups. The patients suffered from simple perianal fistula (low type) without any past medical history. Fistulotomy was performed for all of them, and in the interventional group HAM was applied as a biologic dressing. Wound healing improvement was evaluated post-operatively in the two groups.
Results: Of the 73 patients who participated in the study, 36 were in the control group and 37 in the intervention group. According to the analysis of images taken from the wound, the rate of wound healing was 67.39% in the intervention group and 54.51% in the control group (p < 0.001). Discharge, pain, itching and stool incontinence were lower in the intervention group.
Analysis of pathology samples taken from the wound showed no differences between the two groups.
Conclusion: HAM application could lead to improvement of wound healing and reduced post-operative complications. In conclusion, HAM may act as a biologic dressing in patients with anal fistula.
© 2017 Sociedade Brasileira de Coloproctologia. Published by Elsevier Editora Ltda. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).
∗ Corresponding author. E-mail: colorectal2@sums.ac.ir (S.A. Reza).
http://dx.doi.org/10.1016/j.jcol.2017.03.006
Introduction

Fistula-in-ano disease usually develops after anorectal infection. There are many treatment options for the management of anal fistulas with a minimum chance of incontinence and recurrence. Surgical management has to eliminate the septic foci and any associated epithelialized tract to avoid recurrence and preserve anal sphincter function. All of the options have different success rates. Fistulotomy divides the underlying sphincter tissue and is recommended for low fistulas, with reported success rates varying from 29% to 53%. Success rates with the plug have been comparable or inferior to the advancement flap (48–62%). The flap should consist of part of the internal sphincter and mucosa with a broad base of blood supply and should be sutured without tension. The success rate can be raised by removing the underlying infected anal gland and curetting the rest of the tract.1 Seton is a less invasive approach with minimal damage to the sphincter. However, the discomfort caused to the patient during the long time required for wound healing is the main disadvantage of this approach. Although a cutting seton can have a better (up to 99%) success rate, it can cause severe discomfort to the patient and also can have an 18–25% incidence of incontinence. A draining seton can have a 20–40% persistent fistula rate, but with a low incidence of incontinence.2 In 2006, ligation of the intersphincteric fistula tract (L.I.F.T.) was introduced by Rojanasakul for the first time as a total sphincter-saving procedure.3 The healing rate after 6–7 weeks usually ranges from 68% to 83%. Video-assisted anal fistula treatment (VAAFT) was described by Prof. Meinero; it is done with
the rigid endoscope; the tract is cauterized and curetted and the internal opening is stapled.4 A Cochrane review described no major difference between the various techniques as far as recurrence rates are concerned.5 Thus there is no single perfect method, and the physician has to choose the surgery depending on his/her experience, the type of fistula and the other local conditions. Many post-operative complications are caused by dysfunctional wound healing. Vascularity of the anal canal is important, but the main reasons are infection and poor scar recovery due to the local scar conditions and the humid dressing. Thus, complications like pain, itching, discharge and recurrence occur. Human amniotic membrane (HAM) is the inner layer of the fetal membranes; it has bio-compatibility, easy availability, elasticity and stability, and it has been used as an alternative biomaterial for research in many surgeries and wound-healing procedures. Amniotic membrane has been used in different organs: for example, many surgeons have evaluated the efficacy of HAM as a biologic dressing in burn wounds, in corneal epithelium reconstruction with transplantation of epithelial cells on a lyophilized amniotic membrane (LAM), or in gastrointestinal tract surgeries.6,7 Many studies assessing the efficacy of HAM as a biologic dressing in skin ulcers reported better outcomes in comparison to some other methods. Moreover, in a few studies, HAM has been evaluated in the GI tract of animal models, and the results showed an accelerated wound healing process.8 Uludag et al.
used a HAM patch in colon anastomosis in rats and reported that using HAM decreases dehiscence rate, intra-abdominal abscesses, anastomotic leakage, adhesion formation and intestinal obstruction.9

Fig. 1 – Flow diagram of the RCT: 80 patients assessed for eligibility (clinically diagnosed anal fistula, fistulotomy done); 7 excluded (left the study); randomization into Group A, HAM applied (n = 37), and Group B, without HAM (n = 36); follow-up of 28 days in both groups; analysis of itching, gas and stool incontinence, discharge and pain.

However, HAM has been put into practice for less than a decade, and more studies are needed for better evaluation; the probable long-term adverse effects of HAM should be evaluated in further studies. The aim of this study was to evaluate the HAM effect on wound healing acceleration after the fistulotomy procedure.

Methodology

The study was designed as a randomized clinical trial to evaluate the efficacy of HAM in healing of fistula-in-ano. 73 patients with a clinical diagnosis of fistula-in-ano were evaluated in Shahid Faghihi Hospital of Shiraz University of Medical Sciences between September 2014 and November 2015. All patients suffered from low type fistula-in-ano, confirmed by a colorectal surgeon with physical examination and anoscopy. The patients were randomly allocated into two groups: fistulotomy with marsupialization and HAM applied on the wound in group A, and fistulotomy with marsupialization in group B as the control group (the standard procedure for low type fistula). The inclusion criteria were as follows: clinical diagnosis of low type fistula (sphincter involvement <30%), age 18–65 years, and American Society of Anesthesiologists class I or II. The exclusion criteria included the following: 1) immunocompromised patients, such as those with T.B., AIDS or DM receiving steroid drugs >20 mg/day; 2) inflammatory bowel disease; 3) past medical history of previous anal surgeries; 4) history of gas or stool incontinence; 5) allergy to egg; 6) refusal to participate in this study; 7) BMI > 30; 8) fistula with abscess; 9) high type fistula (sphincter involvement >30%); 10) previous pelvic radiation; 11) perianal dermatitis. A written informed consent form was filled in by all the patients who participated in the study before surgery. Randomization was done by permuted block randomization; the total number of patients was 80, of whom 73 participants were allocated to the two groups (we lost 7 patients in follow-up). The patients in the two groups received a prophylactic dose of metronidazole just before anesthesia and two doses post-operatively at 8 and 16 h. All patients were operated in the prone position, after anoscopy and identification of the fistula tract and the internal and external orifices of the fistula. Fistulotomy was done in eligible participants. In group A, fistulotomy and curettage were performed, then HAM was applied on the wound of the fistula. HAM was fixed on the side of the wound with Monocryl 4/0 at four points, the same as marsupialization. Then, digital photography was taken from a 10 cm distance. Finally a surgical dressing was applied. In group B, after fistulotomy and curettage, marsupialization was done at four points of the side wall with Monocryl 4/0, with dressing the same as in group A. Digital photography was likewise taken from a 10 cm distance.
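The permuted-block randomization described above can be sketched as follows. This is a minimal illustration only, not the authors' actual procedure: the block size of 4, the 1:1 allocation ratio and the seed are assumptions made for the sketch.

```python
# Sketch of permuted-block randomization for a two-arm trial.
# Assumptions (not reported in the paper): block size 4, 1:1 ratio, fixed seed.
import random

def permuted_block_allocation(n_participants, block_size=4, seed=None):
    """Allocate participants to arms 'A' (HAM) and 'B' (control) in
    randomly shuffled blocks, keeping the two arms balanced."""
    assert block_size % 2 == 0, "block size must be even for 1:1 allocation"
    rng = random.Random(seed)
    allocation = []
    while len(allocation) < n_participants:
        # each block holds an equal number of A's and B's ...
        block = ["A"] * (block_size // 2) + ["B"] * (block_size // 2)
        rng.shuffle(block)  # ... in an independently random order
        allocation.extend(block)
    return allocation[:n_participants]

alloc = permuted_block_allocation(80, block_size=4, seed=1)
print(alloc.count("A"), alloc.count("B"))  # prints "40 40"
```

Because every complete block is balanced, the arm sizes can never drift apart by more than half a block, which is why the 80 assessed patients split evenly before the 7 dropouts.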
A normal diet was started after 1 day and the dressing was removed. The patients were discharged on the second day if they did not have any complications such as unpredictable pain, abnormal discharge or cellulitis. Both groups were operated on by a colorectal surgeon. Another colorectal surgeon, who was blinded to the allocation of the two groups, visited the patients on days 3, 7, 14, 21 and 28 post operation. The follow-up data form was completed with attention to the signs and symptoms of the patients (Fig. 1). Itching, gas and stool incontinence, discharge and pain scoring were determined using the VAS system. At the second visit (2 weeks post operation), a digital photograph was taken again at the same distance. Tissue biopsies were taken from 10 patients in both groups randomly. Therefore, the primary outcome in this study was wound healing acceleration by HAM, which was evaluated subjectively and objectively. The secondary outcome was infection and abscess formation. Question forms evaluated wound healing and infection subjectively, and the digital photographs (digital image: stereolith) and tissue biopsies helped us with the objective evaluation (10 participants in each group). This study was approved by the ethics committee of Shiraz University of Medical Sciences and was registered with the Iranian Clinical Trial Register (IRCT: 201310219936N6).

Table 1 – Demographic data from patients under simple surgery and HAM with surgery (percent).
Group             Male         Female       Mean age ± SD
Simple surgery    31 (86.1%)   5 (13.9%)    39.94 ± 10.77
Surgery with HAM  20 (54.1%)   17 (45.9%)   37.32 ± 10.27

Table 2 – Discharge, itching, pain, incontinence parameters.
Parameter            p-value    Odds ratio   95% CI lower   95% CI upper
Discharge            0.000      2.29         1.53           3.42
Itching              0.000      4.82         2.65           8.78
Pain                 <0.0001    1.61         1.34           1.93
Fecal incontinence   0.007      –            −0.72          −0.11

Statistics

Statistical analysis was performed with SPSS software (version 16) and also SAS (for categorical repeated measurements). In the descriptive analysis, quantitative variables were reported as mean ± SD, and qualitative variables were shown as frequency and percent. The qualitative variables were pain, discharge and itching. The quantitative variable was percent of scar recovery. They were measured at time points after surgery. Repeated measurement analysis (RMA) was done for evaluation of significant changes in the outcome variables. Qualitative RMA and quantitative RMA were performed with SPSS and SAS software, respectively. The generalized estimating equation (GEE) was the method used for the discharge assay. The two-sample t-test, χ2 test and Fisher's exact test were also used in appropriate comparisons. A p-value less than 0.05 was considered significant.

Results

80 patients were evaluated and 7 patients left the study; 36 of them had simple fistulotomy (5 female and 31 male) and 37 patients had fistulotomy with HAM graft (17 female and 20 male). In this study, the mean age of patients with simple fistulotomy was 39.9 years and the mean age of patients with HAM was 37.3 years, with no significant difference (Table 1). In this study, variables like sex, age, history of fissure before surgery, time and type of surgery and their effects on discharge were assayed (Table 2). Time and discharge had a significant association (p = 0.003), meaning that the passage of time decreased the chance of discharge (OR = 0.96). Also, surgery with HAM in comparison with simple fistulotomy decreased the chance of discharge by more than two times (OR = 2.29). Sex, age and fissure did not show a significant difference, indicating that the two groups were equal in sex and age. Itching and fissure before surgery, type of surgery and time had significant differences. GEE results for itching showed a significant difference over time (p = 0.004): with increasing time, the chance of freedom from itching increased (OR = 1.04).
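The odds ratios reported here come from the GEE models, but the basic arithmetic behind an odds ratio and its Wald 95% confidence interval can be illustrated from a plain 2×2 table. The cell counts below are hypothetical, since the paper does not report the underlying counts; only the formulas are the standard ones.

```python
# Odds ratio and Wald 95% CI from a 2x2 table.
# OR = (a*d)/(b*c); CI = exp(ln(OR) +/- z * SE), with SE = sqrt(1/a+1/b+1/c+1/d).
import math

def odds_ratio_wald_ci(a, b, c, d, z=1.96):
    """a, b = outcome present/absent in group 1; c, d = same in group 2."""
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of ln(OR)
    lo = math.exp(math.log(or_) - z * se_log_or)
    hi = math.exp(math.log(or_) + z * se_log_or)
    return or_, (lo, hi)

# Hypothetical counts for illustration: 20 vs 17 with/without the outcome in
# one group, 10 vs 27 in the other (these are NOT the trial's actual counts).
or_, (lo, hi) = odds_ratio_wald_ci(20, 17, 10, 27)
print(round(or_, 2), round(lo, 2), round(hi, 2))
```

An interval that excludes 1 (as in the Discharge and Itching rows of Table 2) corresponds to a statistically significant association on the odds-ratio scale.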
There was a significant difference between the two groups for itching (p < 0.05), and the chance of freedom from itching in group 1 was more than 4 times that in group zero (OR = 4.82). Fissure on clinical exam before surgery affected itching significantly (p < 0.05). The chance of freedom from itching in patients with fissure was lower than in patients without fissure (OR = 0.17). Sex and age did not show any effect on itching (p = 0.421 and p = 0.07, respectively). For analysis of the pain data, SAS software was used, and the GEE marginal modeling method showed that time was a significant factor (p < 0.05): with more time, the chance of freedom from pain increased (OR = 2.14). The two groups demonstrated a significant difference in pain (p < 0.05), and the chance of freedom from pain in group zero was less than in group 1 (OR = 0.47). So, surgery with HAM is suggested as the better surgery in comparison to the other. Sex and age did not show any effect on pain.

Percent of healing

According to the photographic data on the day of surgery and 14 days after it, the percent of scar recovery was obtained by digital image analysis. The mean ± SD of percent recovery in the group without HAM was 54.51 ± 4.86 and in the group that used the HAM graft it was 67.39 ± 4.69. The difference between the two groups was significant (p < 0.0001), meaning that use of HAM increased the rate of scar recovery.

Fecal incontinence parameter

Lack of fecal incontinence in the interventional group was significantly less than in the control group (p-value = 0.007). The Wexner score was used for incontinence evaluation.

Pathology

The Mann–Whitney test was used for comparison between the two groups, and there was no significant difference between them in the pathologic data (p-value = 0.76).

Discussion

Usually, 70.7% of fistulas were healed in at least 1 year of follow-up. Fistula-in-ano is a challenging condition to manage despite the technological advances, and there is no gold standard treatment algorithm for it. Low transsphincteric fistulas are treated by fistulotomy successfully, while complex fistulas are managed by advancement flap repair, cutting seton, partial fistulotomy, stem cell injection, fibrin or dermal collagen glue injection, plug, VAAFT, LIFT, and FiLaC, but evidence on the healing, recurrence, and safety of these options is not completely clarified. A study on anal fistula needs to define the kind of fistula (low, high, transsphincteric, intersphincteric) and the outcome measures (healing time, incontinence). Human amniotic membrane (HAM) has bio-compatibility, easy availability, elasticity and stability, which have encouraged researchers to consider it as a biologic dressing and an appropriate bio-prosthesis for more than 100 years. Many surgeons have examined the efficacy of HAM as a biologic dressing in their treatment methods, such as burn wound treatment or gastrointestinal tract surgeries, and desirable outcomes were reported.6 Amnion cells synthesize peptides of the innate immunity system, such as beta-defensins, elastase inhibitors, elafin, lactoferrin, or IL-1-RA; HAM has an antimicrobial effect due to these immune factors. Also, HAM synthesizes numerous growth factors such as epithelial growth factor (EGF), human growth factor (HGF), keratinocyte growth factor (KGF), basic fibroblast growth factor (bFGF), and tissue growth factors (TGF-alpha, TGF-beta-1, TGF-beta-2, and TGF-beta-3), and is expected to accelerate reepithelialization and wound healing by the activation of keratinocytes.10 Collagen type IV and laminin are the main compositions of the basement membrane and are pivotal for coherence between the dermal and epithelial layers. Our findings showed that repairing anal fistula with HAM results in a better outcome compared to simple repair. This is in concordance with the results of other studies, which reported the application of HAM in repairing recto-vaginal fistulas.6 We standardized histologic findings by using a modified scoring system and provide a quantitative comparative context. Although quantitative assessment of the anal fistula healing process is challenging, we believe it would help researchers towards more accurate comparison. Many surgical approaches have been used to decrease healing time, such as: fistulotomy, with 8.3% minor incontinence and an 8.3% recurrence rate;11 advancement flap, with 29% incontinence and 10% recurrence;12 the York Mason approach;13 seton, plug, fibrin glue14 or stem cell injection for complex (high or transsphincteric) anal fistulae. To our knowledge, this is the first study to evaluate the effect of HAM on wound healing post fistulotomy. Its main positive point seems to be the comparison of the HAM effect by quantitative and qualitative measurement.

Conclusion

Though the anal fistula is troublesome to the surgeons, it seems to be improved by using the HAM graft. Our results seem to demonstrate that this technique is both simple and effective and would result in better surgical and histological outcomes compared to simple repair. HAM increased the rate of recovery, and it is suggested that HAM could be used for further research on patients' treatment.

Conflicts of interest

The authors declare no conflicts of interest.

Acknowledgements

This article was extracted from the thesis of Dr. Pirayeh, no. 5180, and approved by the research vice-chancellor of Shiraz University of Medical Sciences. Hereby, the authors would like to thank this vice-chancellery for financially supporting the study.

References

1. Mushaya C, Bartlett L, Schulze B, Ho YH. Ligation of intersphincteric fistula tract compared with advancement flap for. Am J Surg. 2012;204:283–9.
2. Galis-Rozen E, Tulchinsky H, Rosen A, Eldar S, Rabau M, Stepanski A, et al.
Long-term outcome of loose seton for complex anal fistula: a two-centre study of patients with and without Crohn's disease. Colorectal Dis. 2010;12:358–62.
3. Rojanasakul A. LIFT procedure: a simplified technique for fistula-in-ano. Tech Coloproctol. 2009;13:237–40.
4. Meinero P, Mori L. Video-assisted anal fistula treatment (VAAFT): a novel sphincter-saving procedure for treating complex anal fistulas. Tech Coloproctol. 2011;15:417–22.
5. Jacob TJ, Perakath B, Keighley MR. Surgical intervention for anorectal fistula. Cochrane Database Syst Rev. 2010;12:CD006319.
6. Roshanravan R, Ghahramani L, Hosseinzadeh M, Mohammadipour M, Moslemi S, Rezaianzadeh A, et al. A new method to repair recto-vaginal fistula: use of human amniotic membrane in an animal model. Adv Biomed Res. 2014;3:114.
7. Ahn JI, Lee DH, Ryu YH, Jang IK, Yoon MY, Shin YH, et al. Reconstruction of rabbit corneal epithelium on lyophilized amniotic membrane using the tilting dynamic culture method. Artif Organs. 2007;31:711–21.
8. Ghahramani L, Jahromi AB, Dehghani MR, Ashraf MJ, Rahimikazerooni S, Rezaianzadeh A, et al. Evaluation of repair in duodenal perforation with human amniotic membrane: an animal model (dog). Adv Biomed Res. 2014;17:113.
9. Uludag M, Citgez B, Ozkaya O, Yetkin G, Ozcan O, Polat N, et al. Effects of amniotic membrane on the healing of primary colonic anastomoses in the cecal ligation and puncture model of secondary peritonitis in rats. Int J Colorectal Dis. 2009;24:559–67.
10. Loeffelbein DJ, Rohleder NH, Eddicks M, Baumann CM, Stoeckelhuber M, Wolff KD, et al. Evaluation of human amniotic membrane as a wound dressing for split-thickness skin-graft donor sites. BioMed Res Int. 2014:572183.
11. Pescatori M, Ayabaca SM, Cafaro D, Iannello A, Magrini S.
Marsupialization of fistulotomy and fistulectomy wounds
improves healing and decreases bleeding: a randomized controlled trial. Colorectal Dis. 2006;8:11–4.
12. Perez F, Arroyo A, Serrano P, Candela F, Perez MT, Calpena R. Prospective clinical and manometric study of fistulotomy with primary sphincter reconstruction in the management of recurrent complex fistula-in-ano. Int J Colorectal Dis. 2006;21:522–6.
13. Cadeddu F, Salis F, Lisi G, Ciangola I, Milito G. Complex anal fistula remains a challenge for the colorectal surgeon. Int J Colorectal Dis. 2015;30:595–603.
14. Malik AI, Nelson RL. Surgical management of anal fistulae: a systematic review. Colorectal Dis. 2008;10:420–30.
work_q5ki5t3q3bcr3pu2xq64shprja ----

Public Health Nutrition: 18(2), 361–371 doi:10.1017/S1368980014000305

Impact of a Smarter Lunchroom intervention on food selection and consumption among adolescents and young adults with intellectual and developmental disabilities in a residential school setting

Kristie L Hubbard1,*, Linda G Bandini2,3, Sara C Folta1, Brian Wansink4, Misha Eliasziw5 and Aviva Must5

1Friedman School of Nutrition Science and Policy, Tufts University, 150 Harrison Avenue, Boston, MA 02111, USA:
2Eunice Kennedy Shriver Center, University of Massachusetts Medical School, Waltham, MA, USA: 3Department of Health Sciences, Boston University, Boston, MA, USA: 4Dyson School of Applied Economics and Management, Cornell University, Ithaca, NY, USA: 5Department of Public Health and Community Medicine, Tufts University School of Medicine, Boston, MA, USA

Submitted 19 April 2013: Final revision received 18 November 2013: Accepted 4 February 2014: First published online 17 March 2014

Abstract
Objective: To assess whether a Smarter Lunchroom intervention based on behavioural economics and adapted for students with intellectual and developmental disabilities would increase the selection and consumption of fruits, vegetables and whole grains, and reduce the selection and consumption of refined grains.
Design: The 3-month intervention took place at a residential school between March and June 2012. The evaluation employed a quasi-experimental, pre–post design comparing five matched days of dietary data. Selection and plate waste of foods at lunch were assessed using digital photography. Consumption was estimated from plate waste.
Setting: Massachusetts, USA.
Subjects: Students (n 43) aged 11–22 years with intellectual and developmental disabilities attending a residential school.
Results: Daily selection of whole grains increased by a mean of 0.44 servings (baseline 1.62 servings, P = 0.005) and refined grains decreased by a mean of 0.33 servings (baseline 0.82 servings, P = 0.005). The daily consumption of fruits increased by a mean of 0.18 servings (baseline 0.39 servings, P = 0.008), whole grains increased by 0.38 servings (baseline 1.44 servings, P = 0.008) and refined grains decreased by a mean of 0.31 servings (baseline 0.68 servings, P = 0.004). Total kilojoules and total gram weight of food selected and consumed were unchanged. Fruit (P = 0.04) and vegetable (P = 0.03) plate waste decreased.
Conclusions: A Smarter Lunchroom intervention significantly increased whole grain selection and consumption, reduced refined grain selection and consumption, increased fruit consumption, and reduced fruit and vegetable plate waste. Nudge approaches may be effective for improving the food selection and consumption habits of adolescents and young adults with intellectual and developmental disabilities.

Keywords: Adolescence; Health promotion; Intellectual disability; Developmental disability

*Corresponding author: Email usherkristie@gmail.com
© The Authors 2014
Downloaded from https://www.cambridge.org/core. 06 Apr 2021 at 01:16:47, subject to the Cambridge Core terms of use.

Population-based data from the USA (1–3) and Australia (4) indicate that youth with intellectual and developmental disabilities (I/DD) are at an increased risk of obesity. A higher prevalence of obesity has been reported among non-representative samples of youth with spina bifida(5), cerebral palsy(6,7), Down's syndrome(8) and intellectual disability(9–12). Obesity among youth with I/DD may undermine their ability to live independently, limit future opportunities for employment, and may contribute to health disparities in adulthood(13). Youth with I/DD are more vulnerable to poor diet quality compared with typically developing children due to their complex medical, physical and behavioural challenges (i.e. medication use, cognitive impairments, eating problems)(14–16). Compared with typically developing peers, youth with I/DD, including children with autism spectrum disorder, consume fewer daily servings of fruits and vegetables (17,18), an outcome associated with lower family income (18). Schools represent ideal environments for public health interventions to improve population-level dietary patterns of children and adolescents(19).
Little is known about the extent to which youth with I/DD have been included in school-based efforts to improve dietary intake(20). Behavioural economics and principles of behavioural science guide recent efforts to 'steer students to better choices by making low or no-cost changes to the cafeteria environment', collectively termed the Smarter Lunchroom Movement. When redesigning lunchrooms to be smarter, how food is served and presented to students is modified, rather than emphasizing extreme changes to what foods are served(21). This approach preserves autonomous choice – a central tenet of health promotion for youth with I/DD(22). The six principles of Smarter Lunchroom design include efforts to: (i) manage portion sizes; (ii) make healthy choices more convenient; (iii) improve visibility of healthier foods; (iv) enhance taste expectations; (v) utilize suggestive selling (prompts); and (vi) use smart pricing and bundling strategies(23). Smarter Lunchroom interventions have improved fruit and vegetable selection and consumption among typically developing high-school students(24), but these strategies have not been tested specifically among youth with I/DD. Furthermore, no published research has addressed whether youth with I/DD in residential education settings can benefit from adaptations to evidence-based health promotion strategies that have proved successful among typically developing youth in regular education settings. The present study adapted these Smarter Lunchroom principles to meet the needs of students with I/DD enrolled in a residential school. Outcomes of interest, established a priori, aligned with new federal nutrition standards for school lunch(25), addressed dietary deficits common among youth, and included improvements in the selection and consumption of fruits, vegetables, whole grains and refined grains based on the number of servings.
The evaluation employed a pre–post quasi-experimental design in which five days of matched dietary data were compared between baseline and follow-up to assess changes at the individual level(26). We hypothesized that the intervention would increase students' selection and consumption of fruits, vegetables and whole grains, and decrease their selection and consumption of refined grains, over a 3-month period.

Methods

Setting
Of the 6.5 million students with disabilities served through the Individuals with Disabilities Education Act (IDEA) in the USA, 3.4 % are served in private specialized day and/or residential programmes (2008 data)(27). Under IDEA, the right to a free and public education in the 'least restrictive environment' provides that separate schooling in private programmes occurs only when the nature or severity of the disability is such that education in regular classes cannot be achieved satisfactorily. The intervention was implemented in Massachusetts at a private specialized residential school for students with I/DD between December 2011 and June 2012. The school served 120 students aged 9–22 years with I/DD and a range of secondary emotional, mental health and behavioural conditions, including autism spectrum disorder. Eighty-eight students lived at the school (i.e. residential) and thirty-two attended the day programme only. Eighty per cent of students' families were at or below the federal poverty level. Students aged 9–18 years were enrolled in the education programme and grouped into classrooms by age and functional ability; students aged 18–22 years were enrolled in the vocational programme to focus on job training and grouped according to job site. The student-to-teacher ratio was 3 to 1.

Recruitment
The study was conducted according to the guidelines established in the Declaration of Helsinki and all procedures involving human subjects were approved by the Tufts University Institutional Review Board.
At the school administrators' request, all students participated in the intervention to avoid disruptions in daily routines. The research aspect was limited to the pre–post evaluation of the selection and plate waste of foods at lunch using digital photography. The licensing policy of the school stipulated that students classified as wards of the state (n 20) were ineligible to participate in the research aspects of the intervention. Recruitment letters were sent to the families of the remaining eligible students (n 100). Written parental permission to participate in the research aspect (evaluation) was received for fifty-one students. Assent to participate in the evaluation was obtained from participants via classroom visits. Participants were told that pictures would be taken of their tray before and after they ate lunch to help us learn more about students' eating habits. Participants were aware that they could stop participating at any time and were free to decline having the food photographs taken of their lunch tray on each day of data collection.

Baseline conditions
Formative research was conducted between December 2011 and February 2012 and is described elsewhere(28). Baseline data were collected in February 2012, prior to any dining hall layout changes. The school participated in the School Breakfast and National School Lunch Programs, with breakfast and dinner provided in the residential housing units. The intervention focused on the lunch meal, served daily in the dining hall from 10.45 to 12.00 hours. School food service followed a seasonal three-week cycle menu. Table 1 displays week 1 of the baseline menu. The order of choices in the serving line at baseline was as follows: (i) peanut butter and jelly
sandwiches on white bread served with a corresponding side of pretzels; (ii) soup; (iii) main entrée option 1 with a corresponding side dish; (iv) main entrée option 2 with a corresponding side dish; (v) fresh fruit (apples, oranges, bananas offered daily); (vi) yoghurt; (vii) dessert or canned fruit; and (viii) milk (skimmed, 1 % and Lactaid – white milk only). The main entrée was provided by the head server to ensure standard portion sizes. The remaining items were pre-portioned in separate dishes by food-service staff in advance because vocational students participated in the lunch service. Prior to the intervention, the menu was communicated to students through words and Picture Communication Symbols™ (Dynavox Mayer-Johnson LLC, Pittsburgh, PA, USA) for foods. Picture Communication Symbols are visual representations of concepts and ideas that reinforce meaning. They are used as an alternative method of communication for youth with cognitive impairments or communication disorders(29). A placemat used as a tray liner depicted the lunch elements and included a picture highlighting dessert. The peanut butter and jelly sandwiches were available daily to accommodate students who had very limited food repertoires. Side dishes (i.e. pretzels and vegetable side dishes) were 'bundled' with the entrée. Students were permitted to refuse the side dish that was automatically plated with the entrée in accordance with National School Lunch Program rules for offer v. serve, but were not permitted to switch side dishes.

Table 1 Menu at baseline (week 1)

Entrée 1: Monday, Asian chicken salad; Tuesday, veggie burger on whole-wheat bun; Wednesday, steak and blue cheese salad with whole-wheat roll; Thursday, sausage gumbo over brown rice; Friday, tuna caprese salad with flatbread crackers. Bundled side dish: Mediterranean mix (tomatoes, cucumbers, feta cheese).
Entrée 2: Monday, grilled mozzarella cheese sandwich on whole-wheat bread; Tuesday, barbeque turkey tips with corn bread; Wednesday, popcorn chicken; Thursday, roasted turkey wrap with spinach and tomato; Friday, pizza on whole-grain crust. Bundled side dishes: Monday, baby carrots; Tuesday, baby spinach side salad with cucumbers and grape tomatoes; Wednesday, cucumbers and carrots with dip; Thursday, whole grain goldfish; Friday, garden greens side salad.
Soup: Monday, turkey wild rice cranberry soup; Tuesday, vegetable soup; Wednesday, Tuscan soup; Thursday, chicken noodle Florentine soup; Friday, roasted garlic rosemary chowder.
Dessert: Monday, chocolate pudding with whipped topping; Wednesday, yellow cake with chocolate frosting; Thursday, homemade trail-mix.
Canned fruit: Tuesday, canned peaches; Friday, fruit cocktail.
Whole fresh fruit (daily): apples, bananas, oranges.
Yoghurt (daily): 4 oz low-fat.
Milk (daily): skimmed, 1 %, Lactaid (white only).
Alternative entrée with bundled side dish (daily): peanut butter and jelly sandwich on white bread with side of pretzels.
Condiments (daily): saltine crackers, ketchup, mustard, mayonnaise, butter, margarine.
A fruit bowl containing apples, oranges and bananas was kept behind the counter. Dessert was served on the eye-level counter by a vocational student. Canned fruit was offered on Tuesday and Friday, when dessert was not offered. Students arrived at the dining hall by classroom, accompanied by the primary teacher and teaching assistants. Students had 30 min to choose and eat lunch. The lunch periods assigned to classrooms were staggered to avoid overcrowding. Teachers selected their own food from the serving line and ate lunch with their students to provide them with the support and supervision they required due to their cognitive, behavioural and physical challenges. No monetary transactions took place because student meals were included in yearly tuition.

Intervention planning
Adaptations to classic Smarter Lunchroom strategies were necessary due to physical and social factors within the lunchroom environment and the unique characteristics of the study population, including: cognitive disabilities (low literacy and comprehension, impairments in reasoning and decision making); sensory sensitivities (both auditory and oral); communication disorders; oral-motor impairments (all students are considered high risk for choking); and mobility limitations. Youth with I/DD, particularly those with autism spectrum disorder, may experience anxiety and exhibit disruptive behaviour in response to change and transition. Additionally, many students had communication challenges and language-based disabilities. Students were prepared for the impending changes through Social Stories™, videos, student lunch advisory committee activities and a 2 d pilot to practise data-collection procedures. Social Stories describe situations, relevant social cues and common responses in a specific format, on the premise that an improved understanding of the situation will lead to the desired behavioural response(30).
Dining hall layout changes
The intervention capitalized on environmental changes to enhance the students' experience of making choices in the serving line for all three weeks of the menu cycle (Fig. 1). The goal was to induce improvements in students' food choices through 'nudging' rather than menu changes. Communication of the menu choices was enhanced by supplementing the Picture Communication Symbols with real food photos. In our formative work, teachers described real food photos as the optimal visual aids because they were more accurate and descriptive than Picture Communication Symbols. For example, students were confused if the entrée-sized salad on the lunch menu was taco salad with multiple toppings but the Picture Communication Symbol featured a plain lettuce salad. The placemat was revised to present a non-directive (no foods pictured) instruction for food placement on the tray. Peanut butter and jelly sandwiches were moved to the back counter and made available only by request, to encourage students to at least consider the two main entrée options. Fruit was moved to the beginning of the serving line. Apples, bananas and oranges were separated into attractive and easy-to-reach baskets to improve accessibility. An easy-to-eat fruit option (e.g. apple sauce) was available by request daily near the fresh fruit. The healthiest entrée (i.e. the one meeting the greatest number of the dietary targets) was placed earlier in the line, followed by side dishes. A critical change was the unbundling of side dishes and entrées, made in response to formative research which indicated students were confused by the inability to change side dishes, and to our desire to support autonomous choice. Teachers were trained to support autonomous student choices in the serving area. Desserts were kept behind the counter rather than served at eye level.
Milk and yoghurt were not targeted for improvement because formative research suggested that almost all students selected dairy daily. The menu was altered in two instances. One menu change was to serve peanut butter and jelly sandwiches on wheat bread rather than white; a second change was to reduce the portion sizes of desserts to 75 % of their original size. The two menu changes were a result of our community-engaged formative research; teachers unanimously asked for these two changes during the planning stage. Activities to support the intervention included: (i) prompting by 'celebrity servers'; (ii) the creation of fruit- and vegetable-inspired artwork for the dining hall; (iii) classroom-based taste-testing activities; and (iv) logo-naming and branding activities. Fidelity to the layout changes was monitored on three non-consecutive days for the first four weeks of the intervention, followed by weekly observations in months 2 and 3. Specifically, vocational students who worked in the serving area required support to adjust to their new roles. We monitored the ability of students and staff to serve the food as delineated in the layout plan.

Measures
The digital photography of foods method(31,32) was used to measure food selection and plate waste at lunch for five consecutive days (Monday through Friday) at baseline in February 2012 and five consecutive days (Monday through Friday) at follow-up in June 2012, on the same week of the menu cycle to allow for direct comparison. Digital photography methodology has been validated in school cafeteria settings against weighed and visual estimation of portion sizes(32). Two camera stations were located near the exit of the serving area to capture selection and at the waste disposal station to capture plate waste.
Trays were lined with a paper placemat that contained a unique identification code to link selection and plate waste photos to the individual participants each day. Two angled (41°) and two aerial (40 cm (16 in)) photographs were taken of each tray to assess selection and plate waste, for a total of four photographs per participant per day. Portions of each available item were weighed in triplicate at baseline and follow-up to ensure no changes in serving sizes (with the exception of desserts) occurred. Standardized recipes and nutrient content of each available item were analysed by a registered dietitian (K.L.H.) using the Nutrition Data System for Research (NDSR; University of Minnesota, Minneapolis, MN, USA, 2011). NDSR was used to calculate servings of fruits, vegetables, whole grains and refined grains per each available item. NDSR food group servings were derived from the Nutrition Coordinating Center (NCC) Food Group Serving Count System, defined per the Dietary Guidelines for Americans 2005. All items, including side dishes, had the potential to contribute to the calculated servings. Each food was linked to macronutrient and micronutrient information from NDSR. Food selection and plate waste were estimated using a triple-screen computer set-up that simultaneously displayed photographs of the reference portion, food selection and plate waste. A trained research assistant coded selection as 'yes/no' for each available item followed by quantity, because for certain items, such as milk, participants were permitted to take more than one. Selection was verified by a registered dietitian when plate waste was coded. Photographs of weighed standard reference portions were captured for all available items. A registered dietitian estimated consumption by comparing the plate waste photograph with the standard reference photograph. Consumption was coded on a five-point scale (0 %, 25 %, 50 %, 75 %, 100 %).
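The conversion from coded plate waste to estimated intake is simple arithmetic; the sketch below illustrates it with made-up reference weights and waste codes, not the study's data:

```python
# Illustrative sketch of the plate-waste coding described above: consumption
# is coded on a five-point scale and converted to grams via the weighed
# reference portion. Item names, weights and codes are hypothetical.

FRACTION_CONSUMED = {0: 0.00, 1: 0.25, 2: 0.50, 3: 0.75, 4: 1.00}

def grams_consumed(reference_g, code):
    """Estimated grams eaten = reference portion weight x coded fraction."""
    return reference_g * FRACTION_CONSUMED[code]

# Hypothetical tray: (item, reference portion in grams, consumption code)
tray = [
    ("apple, edible portion", 138.0, 3),  # 75 % consumed
    ("whole-wheat roll", 50.0, 4),        # 100 % consumed
    ("baby carrots", 85.0, 1),            # 25 % consumed
]

total_g = sum(grams_consumed(g, code) for _, g, code in tray)
print(total_g)  # 174.75
```

The same fractions, applied to an item's servings rather than its grams, yield the food group serving estimates used in the analyses.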
Consumption estimates for fruits with cores and peels included the edible portion only. Gram weights were estimated from the plate waste photographs as follows: consumption estimates were entered and linked to the NDSR nutrient analyses based on the gram weights of the reference portion. Servings of fruits, vegetables, whole grains and refined grains of each available item selected, wasted and consumed were calculated from the standardized recipes in NDSR. In addition to servings, counts of all available items selected and consumed were generated.

Fig. 1 (colour online) Intervention elements. From left to right: easy-to-reach fruit baskets, Picture Communication Symbol™ for 'choose', menu board featuring food photographs (top); baby spinach side dish, non-directive placemat, fruit salad side dish, intervention logo (bottom). *The Picture Communication Symbols ©1981–2011 by Mayer-Johnson LLC. All Rights Reserved Worldwide. Used with permission.

Data analysis
Three different analyses of the data were conducted. First, for the primary analysis, mixed linear regression models were used to evaluate mean changes in servings of fruits, vegetables, whole grains and refined grains selected, wasted and consumed, with the individual participant as the unit of analysis. The models included two fixed within-participant factors that were crossed: visit (baseline v. follow-up) and day of the week. Random participant intercepts were used to induce the within-participant correlations. Day-to-day variability was assessed using a likelihood ratio test comparing the log-likelihood of full models that included the interaction terms with partial models with no interaction terms. Second, the percentage of selected foods that were wasted was examined.
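The mixed-model specification in the first analysis can be sketched with the open-source statsmodels library on synthetic data; the study itself used SAS, and the column names, effect sizes and random seed below are assumptions for illustration only:

```python
# Sketch (synthetic data) of the mixed linear model described above:
# crossed fixed factors for visit and weekday, random participant intercepts,
# and a likelihood ratio test on the visit-by-day interaction.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
days = ["Mon", "Tue", "Wed", "Thu", "Fri"]
rows = [{"pid": p, "visit": v, "day": d,
         "servings": rng.normal(1.6 if v == "baseline" else 2.0, 0.5)}
        for p in range(43) for v in ("baseline", "followup") for d in days]
df = pd.DataFrame(rows)

# Random intercept per participant induces the within-participant correlation.
full = smf.mixedlm("servings ~ C(visit) * C(day)", df, groups="pid").fit(reml=False)
partial = smf.mixedlm("servings ~ C(visit) + C(day)", df, groups="pid").fit(reml=False)

# Likelihood ratio statistic for day-to-day variability in the visit effect;
# compare to a chi-square distribution with 4 degrees of freedom.
lr = 2 * (full.llf - partial.llf)
print(round(lr, 2))
```

Maximum likelihood (reml=False) rather than REML is used here because the two models differ in their fixed effects, which is required for a valid likelihood ratio comparison.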
Overall plate waste was assessed by calculating the percentage of total kilojoules and total gram weight of foods selected that were wasted. The plate waste of fruits and vegetables was assessed by calculating the percentage of the servings selected that were wasted. The mean percentage of plate waste (for total kilojoules, total gram weight and total fruit and vegetable servings) was calculated for each participant and averaged across all participants. Third, Poisson regression was used to evaluate changes in the item count of foods selected and consumed. Counts were used to examine the relative contribution of changes in selection of foods targeted in the intervention (i.e. whole fruit, canned fruit, vegetable side dishes, soup side dishes, entrée-sized salads, desserts, and peanut butter and jelly sandwiches) to the changes in servings of fruits, vegetables, whole grains and refined grains selected (expressed as a rate: per 100 student-trays). We used the same approach to examine the relative contribution of changes in consumption. Rates of milk and yoghurt selection and consumption were examined for potential unintended shifts away from these foods. All statistical analyses were conducted using the SAS statistical software package version 9.2; P values less than 0.05 were considered statistically significant.

Results

Enrolment
Fifty-one participants were enrolled in the research study. For each participant, a complete data record would contain twenty observations, consisting of selection and plate waste photos on each of five days at both baseline and follow-up. Dietary data were excluded from six participants with completely missing baseline or follow-up data (due to hospitalizations), from one participant who had no matching pre–post intervention days and from one participant who followed a gluten-free diet sent from home. These exclusions yielded a final sample size of forty-three participants.
Of the 860 possible observations for the forty-three participants, 196 were missing (23 %), leaving a total of 664 observations (332 selection, 332 consumption) for the analyses. Reasons for missing data consisted of classroom field trips, illness, off-campus job locations and transient refusal to participate in data collection. Each day, one to three participants refused to participate in the pre or post photograph. The mean age of the participants in the analyses was 18.3 (SD 2.5) years (range 11–22 years); 51 % were female; 72 % were residential students; and 53 % were enrolled in the education programme.

Selection
Daily mean kilojoules and mean gram weight of foods and beverages selected did not change over the study period (Table 2). Significant benefits of the intervention were observed for daily selection of whole grain and refined grain servings (Fig. 2(a)). Daily selection of whole grains increased by a mean of 0.44 servings (from 1.62 to 2.06 servings) and refined grains decreased by a mean of 0.33 servings (from 0.82 to 0.49 servings). Daily selection of fruit and vegetable servings did not change. Significant variability in daily mean serving changes was observed for vegetable selection (likelihood ratio test, P < 0.001) but was not significant for the selection of fruit (P = 0.16), whole grains (P = 0.05) and refined grains (P = 0.07). Rates of selection of whole fruit, canned fruit, vegetable side dishes, soup side dishes, entrée-sized salads, desserts, and peanut butter and jelly sandwiches are shown in Table 3. The rate of canned fruit selection more than doubled. No significant changes were observed in rates of whole fruit selection. Raw vegetable side dishes and soup side dishes were grouped together to examine the changes in rates of selection for all vegetable side dishes. The rate of selection of all vegetable side dishes did not change significantly from baseline to follow-up.
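The "per 100 student-trays" rates compared above reduce to simple proportions; the sketch below uses illustrative counts, not the study's data:

```python
# Sketch of the "per 100 student-trays" rate summaries described above.
# The counts below are hypothetical and chosen only to show the arithmetic.

def rate_per_100(count, trays):
    """Selection rate expressed per 100 student-trays."""
    return 100.0 * count / trays

baseline = rate_per_100(25, 160)   # e.g. 25 soup selections on 160 trays
followup = rate_per_100(32, 160)   # e.g. 32 soup selections on 160 trays

relative_change = 100.0 * (followup - baseline) / baseline
print(baseline, followup, relative_change)  # 15.625 20.0 28.0
```

In the study itself these rate ratios were estimated with Poisson regression rather than computed directly, which also yields confidence intervals and P values for the changes.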
Total vegetable side dishes were divided into raw vegetable sides and soup sides to determine whether the form of the vegetable impacted the rate of selection. The rate of soup selection increased significantly by 28 %, while the rate of selection of raw vegetable sides decreased significantly by 46 %. The rate of dessert selection did not change.

Table 2 Daily mean kilojoules and mean gram weight of food selected and consumed, at baseline and follow-up, among students (n 43) aged 11–22 years with intellectual and developmental disabilities attending a residential school in Massachusetts, USA, March–June 2012

                      Baseline              Follow-up
                 Mean   95 % CI        Mean   95 % CI
Selection
  Kilojoules     3636   3381, 3895     3707   3448, 3962
  Gram weight     784    696, 873       791    702, 878
Consumption
  Kilojoules     3025   2757, 3288     3054   2787, 3322
  Gram weight     610    532, 689       637    558, 716

366 KL Hubbard et al. Downloaded from https://www.cambridge.org/core, 06 Apr 2021 at 01:16:47, subject to the Cambridge Core terms of use.

Consumption

Daily mean kilojoules and mean gram weight of foods and beverages consumed did not change over the study period (Table 2). Significant benefits of the intervention were observed for daily consumption of fruit, whole grain and refined grain servings (Fig. 2(b)). Daily consumption of fruits increased by a mean of 0.18 servings (from 0.39 to 0.57 servings), whole grains increased by a mean of 0.38 servings (from 1.44 to 1.83 servings) and refined grains decreased by a mean of 0.31 servings (from 0.68 to 0.37 servings). Daily vegetable servings consumed did not change. Significant variability in daily mean serving changes was observed for vegetable consumption (likelihood ratio test, P = 0.008), but not for fruit (P = 0.27), whole grain (P = 0.05) and refined grain (P = 0.28) consumption.

Plate waste

Participants at baseline wasted a mean of 17.5 % of the total kilojoules selected and a mean of 21.4 % of the total gram weight of foods and beverages selected.
Overall plate waste did not change significantly over the intervention period (17.6 % of the total kilojoules and 19.5 % of the total gram weight wasted post-intervention). The change in the percentage of total kilojoules wasted differed significantly across days (likelihood ratio P = 0.02), but the percentage of gram weight wasted did not (P = 0.15). Significant benefits of the intervention were observed for fruit and vegetable plate waste.

[Fig. 2 Mean change (with 95 % confidence interval represented by horizontal bar) in daily servings of fruits, vegetables, whole grains and refined grains (a) selected and (b) consumed, from baseline to follow-up, by day of the week and overall, among students (n 43) aged 11–22 years with intellectual and developmental disabilities attending a residential school in Massachusetts, USA, March–June 2012. Overall mean changes (95 % CI): selected – fruits 0.14 (–0.02, 0.29), vegetables 0.04 (–0.11, 0.20), whole grains 0.44 (0.14, 0.73), refined grains –0.33 (–0.56, –0.11); consumed – fruits 0.18 (0.05, 0.31), vegetables –0.12 (–0.31, 0.08), whole grains 0.38 (0.11, 0.65), refined grains –0.31 (–0.51, –0.10).]

Smarter Lunchroom for youth with disabilities 367
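The plate-waste measures defined in the Methods reduce to the percentage of what was selected that was left uneaten. A minimal sketch with illustrative tray values (not study data):

```python
def plate_waste_pct(selected: float, consumed: float) -> float:
    """Percentage of the amount selected (kJ, grams or servings) that was wasted."""
    if selected == 0:
        return 0.0
    return (selected - consumed) / selected * 100

# Illustrative tray, not study data: 3600 kJ selected, 2970 kJ consumed.
waste = plate_waste_pct(3600, 2970)  # 17.5 % of selected kilojoules wasted
```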
The mean percentage of fruit servings wasted from those selected decreased by 9.4 % (P = 0.04) and the mean percentage of vegetable servings wasted from those selected decreased by 9.0 % (P = 0.03; Fig. 3). The percentage of fruit and vegetable servings wasted from those selected did not differ across days (fruit P = 0.97, vegetables P = 0.05).

Table 3 Estimated differences in the rates of selection and consumption of menu items at baseline and follow-up among students (n 43) aged 11–22 years with intellectual and developmental disabilities attending a residential school in Massachusetts, USA, March–June 2012

Measure (time point)                             Trays*   Baseline rate   Follow-up rate   Rate ratio   95 % CI      P value
Canned fruit, selection                           332       21.69           31.32            2.37       1.11, 5.08    0.03
Canned fruit, consumption                         332       18.07           30.72            2.55       1.18, 5.54    0.02
Whole fruit, selection                            332       28.91           34.93            1.18       0.81, 1.71    0.39
Whole fruit, consumption                          332       25.90           30.72            1.20       0.80, 1.80    0.38
All vegetable side dishes, selection              332       97.59           92.77            1.00       0.85, 1.18    0.95
All vegetable side dishes, consumption            332       77.11           84.33            1.16       0.95, 1.41    0.14
Raw vegetable side dishes, selection              332       53.61           36.74            0.54       0.41, 0.70    <0.001
Raw vegetable side dishes, consumption            332       38.55           33.13            0.68       0.49, 0.95    0.02
Soup side dishes, selection                       332       43.98           56.02            1.28       1.02, 1.60    0.03
Soup side dishes, consumption                     332       38.55           51.20            1.37       1.06, 1.76    0.02
Entrée-sized salads, selection                    204        7.83            5.42            0.75       0.31, 1.82    0.53
Entrée-sized salads, consumption                  204        7.22            5.42            0.95       0.35, 2.55    0.92
Desserts, selection                               140       34.93           27.71            0.87       0.70, 1.08    0.20
Desserts, consumption                             140       34.93           25.90            0.81       0.65, 1.02    0.07
Peanut butter and jelly sandwiches, selection     332       13.25           15.66            1.16       0.53, 2.51    0.70
Peanut butter and jelly sandwiches, consumption   332       13.25           15.66            1.16       0.53, 2.51    0.70
Milk and yoghurt, selection                       332      149.39          140.96            0.94       0.81, 1.10    0.44
Milk and yoghurt, consumption                     332      138.00          133.73            0.96       0.82, 1.13    0.64

Rates are per 100 student-trays. *Entrée-sized salads and desserts were not offered daily, resulting in differences in sample size.

[Fig. 3 Mean change (with 95 % confidence interval represented by vertical bar) in percentage of fruit, vegetable, whole grain and refined grain servings wasted of those selected, from baseline to follow-up, among students (n 43) aged 11–22 years with intellectual and developmental disabilities attending a residential school in Massachusetts, USA, March–June 2012. Mean changes: fruits –9.4 %, vegetables –9.0 %, whole grains –4.4 %, refined grains +3.6 %.]

Discussion

To our knowledge, the present study is the first to investigate food-environment intervention approaches based on behavioural economics and principles of behavioural science in a population of students with I/DD. Our findings are consistent with studies employing behavioural economic approaches in lunchroom environments among typically developing students. Verbal prompts from food-service workers to encourage fruit selection resulted in significant improvements in selection and consumption of fruits at lunch among schoolchildren(33). Peeling and slicing oranges to improve the accessibility of fruit increased the percentage of children selecting and consuming oranges in an elementary-school cafeteria(34). When offered a choice between carrots or celery, instead of a requirement to take them, a greater proportion of junior high students consumed their vegetable(35). A Chef's Initiative intervention to improve the availability of healthy foods in Boston middle schools resulted in significant improvements in the proportion of students choosing whole grains and vegetables and the total amount of these foods consumed(36). The intervention resulted in shifts in the sources of kilojoules selected and consumed, with an overall improvement in diet composition, rather than a decrease in overall energy intake.
We observed no overall increase in plate waste, nor did the intervention cause unintended shifts away from selecting and consuming healthy foods. These results suggest that the intervention was effective for improving dietary intake, but may not directly affect positive energy balance or obesity. The cumulative impact of these relatively small changes at one eating occasion translates to an increase of 1.0 fruit serving, an increase of 2.2 whole grain servings and a decrease of 1.7 refined grain servings for one individual over a 5 d school week. The observed improvement in whole grain consumption could be achieved by substituting half a slice of whole grain bread for half a bag of pretzels (refined grain) daily.

The intervention resulted in a decrease in fruit and vegetable plate waste, supporting the hypothesis that students will consume a greater percentage of the fruit and vegetable side dishes when given the opportunity to make an autonomous choice. A reduction in fruit and vegetable plate waste could lead to significant cost savings for schools. The favourable impact was achieved through subtle 'nudge' mechanisms that preserve autonomous choice, were accepted by students, and carry a high potential for long-term sustainability due to the low implementation cost and potential for savings related to lower food waste.

Changes in the rates of particular menu items selected and consumed offer additional insights into the mechanisms by which changes in overall servings selected and consumed were achieved. Decreased rates of dessert selection and consumption, although non-significant, accounted for approximately 12 % of the decrease in daily mean refined grain servings selected. Changing peanut butter and jelly sandwiches from white to wheat bread accounted for 30 % of the increase in daily mean whole grain servings selected.
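The weekly extrapolation above is straightforward arithmetic: each daily mean change scaled by the five school days. A back-of-envelope sketch (daily values from the Results, fruit from consumption and grains from selection; the text's weekly figures round these products, so small differences are expected):

```python
# Daily mean changes in servings reported in the Results, scaled to a
# 5-day school week for a single individual.
daily_change = {"fruit": 0.18, "whole grains": 0.44, "refined grains": -0.33}
SCHOOL_DAYS = 5
weekly_change = {food: d * SCHOOL_DAYS for food, d in daily_change.items()}
# fruit: 0.9, whole grains: 2.2, refined grains: -1.65 servings per week
```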
Observed shifts in selection towards canned fruits and soup suggest that processed forms of fruits and vegetables may be preferred over raw forms by students with I/DD. Although we observed a significant increase in the percentage of fruit servings consumed and in rates of canned fruit selection and consumption, the magnitude of the behaviour change was not adequate to observe an overall increase in mean servings of fruit selected at the individual level. The power to detect these changes may have been limited by our small sample size. Changes in vegetable servings selected should be interpreted with caution because the vegetable side dishes were automatically placed on trays at baseline, making it difficult to isolate true selection of these items at this time point. The unbundling of raw vegetable side dishes caused a shift towards soup side dishes. The soups contained 0.5 to 1 serving of vegetables per 6-ounce portion – less than the vegetable servings provided by raw vegetable side dishes. This may explain the increase in the percentage of vegetable sides consumed from those selected, but no significant increase in mean vegetable servings consumed.

The student population in the present study was heterogeneous with respect to primary and secondary diagnoses, medication use, cognitive ability and severity of behavioural and emotional challenges. The licensing policy of the school, designed to protect this vulnerable population, limited the ability to obtain additional information about the students beyond age and sex. Although it may have been beneficial to attempt to evaluate these and other potential modifying factors, the small sample size did not support the investigation of differential effects by student-level characteristics, even had they been available.
Two important limitations of the study were its small sample size and the lack of a control school to help rule out the potential influence of secular trends or events that may have occurred outside the study. To the best of our knowledge, the school did not implement any other changes in campus environments outside the dining hall that could impact food selection and eating habits at the lunch meal. None the less, findings should be replicated in a larger population and with a comparison school, if possible.

Schools have been identified as the optimal venue to deliver nutrition interventions and policies for children and should support the inclusion of youth with I/DD. Interventions to improve dietary intake need to address barriers at the individual and environmental levels that are perceived or experienced by youth with I/DD and their caregivers(37). In our experience, the community-engaged research process facilitated a broad and rich discussion of health promotion opportunities for youth with I/DD and led to an intervention that incorporated values of foremost importance to the school community. No students were excluded based on their disability and, because the intervention did not rely on reasoning, those with significant cognitive impairments were not disadvantaged. Students readily adapted to layout changes, data collection procedures, and the switch of peanut butter and jelly sandwiches from white bread to wheat. There were no reports of behavioural problems.
Although the specific intervention elements may have limited generalizability, we believe the approach to the intervention design – which focused on the process of developing adaptations based on formative research and engaging the school community – is highly generalizable and makes an important contribution to the growing literature highlighting the need for the adaptation of evidence-based health promotion strategies(38). Evidence from interventions with adults with intellectual disability supports the involvement of caregivers in the research process as well as consideration of the context of the lived disability experience(39). The time required for the formative research and adaptation process was substantially greater compared with similar studies designed for typically developing students. A major impetus for a careful approach was to ensure student and teacher safety and to prevent unintentional cognitive or emotional stress.

Conclusion

A Smarter Lunchroom intervention, based on behavioural economics and adapted for students with I/DD, significantly increased whole grain selection and consumption, reduced refined grain selection and consumption, increased fruit consumption, and reduced fruit and vegetable plate waste. Results suggest that low-cost interventions that create environments in which the healthiest choice is the easiest choice hold great promise for improving the short-term food choices and dietary intake of this vulnerable population. Future studies are needed to evaluate whether dietary changes are maintained in the long term and if the effects are replicated in regular education settings.

Acknowledgements

Sources of funding: This study was funded by the Deborah Munroe Noonan Memorial Research Foundation. The Deborah Munroe Noonan Memorial Research Foundation had no role in the design, analysis or writing of this article. Conflict of interest: None. Ethics: This study was approved by the Tufts University Institutional Review Board.
Authors' contributions: All authors contributed intellectually to the research aims, drafting and revising of the article, and gave final approval of the version to be published. K.L.H., L.G.B., S.C.F. and A.M. conducted the research. K.L.H. analysed the data and wrote the manuscript. M.E. assisted with the statistical analyses and interpretation of results. K.L.H. and A.M. had primary responsibility for the final content. Acknowledgements: The authors would like to thank the students at the Cardinal Cushing School for participating in the study. They gratefully acknowledge the contributions of the school staff and the Project Advisory Board to the design and implementation of the intervention.

References
1. Bandini L, Curtin C, Hamad C et al. (2005) Prevalence of overweight in children with developmental disorders in the continuous National Health and Nutrition Examination Survey (NHANES) 1999–2002. J Pediatr 146, 738–743.
2. Chen AY, Kim SE, Houtrow AJ et al. (2010) Prevalence of obesity among children with chronic conditions. Obesity (Silver Spring) 18, 210–213.
3. Curtin C, Anderson SE, Must A et al. (2010) The prevalence of obesity in children with autism: a secondary data analysis using nationally representative data from the National Survey of Children's Health. BMC Pediatr 10, 11.
4. Emerson E & Robertson J (2010) Obesity in young children with intellectual disabilities or borderline intellectual functioning. Int J Pediatr Obes 5, 320–326.
5. Simeonsson RJ, McMillen JS & Huntington GS (2002) Secondary conditions in children with disabilities: spina bifida as a case example. Ment Retard Dev Disabil Res Rev 8, 198–205.
6. Bandini L, Schoeller DA, Fukagawa NK et al. (1990) Body composition and energy expenditure in adolescents with cerebral palsy or myelodysplasia. Pediatr Res 29, 70–77.
7. Hurvitz EA, Green LB, Hornyak JE et al.
(2008) Body mass index measures in children with cerebral palsy related to gross motor function classification: a clinic-based study. Am J Phys Med Rehabil 87, 395–403.
8. Luke A, Roizen NJ, Sutton M et al. (1994) Energy expenditure in children with Down syndrome: correcting metabolic rate for movement. J Pediatr 125, 829–838.
9. Bégarie J, Maïano C, Leconte P et al. (2013) The prevalence and determinants of overweight and obesity among French youths and adults with intellectual disabilities attending special education schools. Res Dev Disabil 34, 1417–1425.
10. Lin JD, Yen C, Li C et al. (2005) Patterns of obesity among children and adolescents with intellectual disabilities in Taiwan. J Appl Res Intellect Disabil 18, 123–129.
11. Stewart L, Van de Ven L, Katsarou V et al. (2009) High prevalence of obesity in ambulatory children and adolescents with intellectual disability. J Intellect Disabil Res 53, 882–886.
12. Takeuchi E (1994) Incidence of obesity among school children with mental retardation in Japan. Am J Ment Retard 99, 283–288.
13. Drum C, McClain MR, Horner-Johnson W et al. (2011) Health Disparities Chart Book on Disability and Racial and Ethnic Status in the United States. Concord, NH: University of New Hampshire, Institute on Disability.
14. Gibson JC, Temple VA, Anholt JP et al. (2011) Nutrition needs assessment of young Special Olympics participants. J Intellect Dev Disabil 36, 264–268.
15. Sharp WG, Berry RC, McCracken C et al. (2013) Feeding problems and nutrient intake in children with autism spectrum disorders: a meta-analysis and comprehensive review of the literature. J Autism Dev Disord 43, 2159–2173.
16. Twachtman-Reilly J, Amaral SC & Zebrowski PP (2008) Addressing feeding disorders in children on the autism spectrum in school-based settings: physiological and behavioral issues. Lang Speech Hear Serv Sch 39, 261–272.
17. Evans EW, Must A, Anderson SE et al.
(2012) Dietary patterns and body mass index in children with autism and typically developing children. Res Autism Spectr Disord 6, 399–405.
18. Yen C-F & Lin J-D (2010) Factors for healthy food or less-healthy food intake among Taiwanese adolescents with intellectual disabilities. Res Dev Disabil 31, 203–211.
19. Institute of Medicine (2004) Preventing Childhood Obesity: Health in Balance. Washington, DC: The National Academies Press.
20. Minihan P, Fitch S & Must A (2007) What does the epidemic of childhood obesity mean for children with special health care needs? J Law Med Ethics 35, 61–77.
21. Just DR & Wansink B (2009) Smarter lunchrooms: using behavioral economics to improve meal selection. Choices 24, issue 3; available at http://www.choicesmagazine.org/magazine/article.php?article=87
22. Rimmer JH (2002) Health promotion for individuals with disabilities. Dis Manage Health Outcomes 10, 337–343.
23. Cornell Center for Behavioral Economics in Child Nutrition Programs (2012) Six guiding principles to improving eating behaviors. http://smarterlunchrooms.org/sites/default/files/introduction_to_smarter_lunchrooms_6_principles_powerpoint.pdf (accessed April 2013).
24. Hanks AS, Just DR & Wansink B (2013) Smarter lunchrooms can address new school lunchroom guidelines and childhood obesity. J Pediatr 162, 867–869.
25. US Department of Agriculture, Food and Nutrition Service (2012) Nutrition Standards in the National School Lunch and School Breakfast Programs. 7 CFR Parts 210 and 220. Fed Reg 77, issue 17, 4088–4167.
26. Cook T & Campbell D (1979) Quasi-Experimentation: Design and Analysis Issues for Field Studies. Boston, MA: Houghton Mifflin Company.
27. Boyle CA, Boulet S, Schieve LA et al. (2011) Trends in the prevalence of developmental disabilities in US children, 1997–2008.
Pediatrics 127, 1034–1042.
28. Hubbard K, Bandini L, Folta SC et al. (2012) Adaptation of smarter lunchroom design to the specific needs of children with intellectual and developmental disabilities: formative research and community collaboration. Presented at the American Public Health Association 140th Annual Meeting and Exposition, San Francisco, CA, USA, 27–31 October 2012.
29. Mayer-Johnson LLC (2013) Picture Communication Symbols. http://www.mayer-johnson.com/category/symbols-and-photos (accessed September 2013).
30. The Gray Center for Social Learning and Understanding (2013) What are Social Stories™? http://www.thegraycenter.org/social-stories/what-are-social-stories (accessed June 2013).
31. Williamson DA, Allen HR, Martin PD et al. (2004) Digital photography: a new method for estimating food intake in cafeteria settings. Eat Weight Disord 9, 24–28.
32. Williamson DA, Allen HR, Martin PD et al. (2003) Comparison of digital photography to weighed and visual estimation of portion sizes. J Acad Nutr Diet 103, 1139–1145.
33. Schwartz MB (2007) The influence of a verbal prompt on school lunch fruit consumption: a pilot study. Int J Behav Nutr Phys Act 4, 6.
34. Swanson M, Branscum A & Nakayima PJ (2009) Promoting consumption of fruit in elementary school cafeterias. The effects of slicing apples and oranges. Appetite 53, 264–267.
35. Price JP & Just DR (2009) Getting kids to eat their veggies. Presented at the International Association of Agricultural Economists 27th Triennial Conference, Beijing, China, 16–22 August 2009.
36. Cohen JF, Smit LA, Parker E et al. (2012) Long-term impact of a chef on school lunch consumption: findings from a 2-year pilot study in Boston middle schools. J Acad Nutr Diet 112, 927–933.
37. Humphries K, Traci M & Seekins T (2004) A preliminary assessment of the nutrition and food-system environment of adults with intellectual disabilities living in supported arrangements in the community. Ecol Food Nutr 43, 517–532.
38.
Rimmer JH (2011) Promoting inclusive community-based obesity prevention programs for children and adolescents with disabilities: the why and how. Child Obes 7, 177–184.
39. Hamilton S, Hankey CA, Miller S et al. (2007) A review of weight loss interventions for adults with intellectual disabilities. Obes Rev 8, 339–345.

----

Section Dermatology

Efficacy of Oral Tranexamic Acid in the Treatment of Melasma: A Pilot Study

Shafia N Kakru1, Md Raihan2*, Mirza Aumir Beg3, Basit Kakroo4
1Lecturer; 2Associate Professor, Department of Dermatology, Venerology and Leprosy, Hamdard Institute of Medical Sciences and Research, Delhi, India. 3Assistant Professor, Department of Pedodontics, Suda Rustagi College of Dental Sciences & Research, Faridabad, Haryana. 4Medical student at the University of East Anglia, Norwich Medical School.

International Archives of BioMedical and Clinical Research | Oct – Dec 2017 | Vol 3 | Issue 4 | 93

Original Article

ABSTRACT

Background: Melasma, a common skin pigmentary disorder, poses a great challenge to clinicians due to unsatisfactory results and a high recurrence rate. Many treatment modalities have been tried by clinicians without significant improvement in the lesion. Methods: This cross-sectional study was done on 90 patients, both male and female, diagnosed with moderate to severe melasma. TA 250 mg (Tyrodin) twice daily for six months was prescribed along with topical sunscreen. Digital photography was performed at the first visit and at subsequent visits. The effects of treatment were evaluated by two dermatologists independently. Results were assessed clinically and photographically.
Result: 90 patients with moderate to severe melasma were enrolled in the study. The average age was 36 years. 44 patients (48.8 %) had good improvement, 25 patients (27.7 %) had excellent improvement, 17 patients (18.8 %) had fair improvement and 4 patients (4.4 %) had no improvement. Three patients complained of gastric upset. None of the patients had serious systemic side effects; only a few had oligomenorrhoea or palpitation. Patients' satisfaction was similarly noted. Conclusion: Oral administration of TA is an effective and safe treatment for melasma. Key Words: Melasma, oral tranexamic acid

DOI: 10.21276/iabcr.2017.3.4.23
Article History: Received 18.11.17; Accepted 25.11.17
*Address for Correspondence: Dr. Md Raihan, Associate Professor, Department of Dermatology, Venerology and Leprosy, Hamdard Institute of Medical Sciences and Research, Delhi, India.
Copyright: © the author(s) and publisher. IABCR is an official publication of Ibn Sina Academy of Medieval Medicine & Sciences, registered in 2001 under Indian Trusts Act, 1882. This is an open access article distributed under the terms of the Creative Commons Attribution Non-commercial License, which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.

INTRODUCTION

Melasma is an acquired hypermelanosis characterized by slowly enlarging tan to brown macules on the skin of the face and arms, usually on the cheeks, forehead, nose, and upper lip.[1] Melasma is a common acquired disorder of pigmentation known to occur in all skin types, all ethnic groups and both sexes, but it is relatively more common in darker skin types (Fitzpatrick skin types III and IV) and more common in women of child-bearing age than in men. The exact etiology of melasma is unknown, but genetic factors and exposure to UV irradiation are considered the main causes, in addition to endocrine factors (e.g. pregnancy, hormonal therapy and ovarian dysfunction), drugs (e.g.
phenytoin, phototoxic drugs), cosmetics, and vascular and systemic diseases like thyroid dysfunction and anemia of multifactorial origin. These factors cause an increase in the synthesis of melanosomes in melanocytes and their transfer to keratinocytes. Melasma lesions typically fade in winter and aggravate in summer. Chloasma (melasma related to pregnancy) usually diminishes within a few months of delivery, but melasma lesions due to oral contraceptives are usually persistent.[2,3,4]

How to cite this article: Kakru SN, Raihan M, Beg MA, Kakroo B. Efficacy of Oral Tranexamic Acid in the Treatment of Melasma: A Pilot Study. Int Arch BioMed Clin Res. 2017;3(4):93-96. Source of Support: Nil. Conflict of Interest: None.

There are three clinical patterns of melasma: malar (most common), centrofacial and mandibular.[5] On the basis of visible light, Wood's light and lesional histology, melasma has been classified as epidermal, which has increased melanin predominantly in the basal and suprabasal layers of the epidermis with pigment accentuation on Wood's lamp. The dermal type has perivascular melanin-laden macrophages in the superficial and deep dermis and does not accentuate with Wood's lamp. The mixed variety has elements of both and appears deep brown, with Wood's lamp accentuation of only the epidermal component.[4] Current treatments include sun avoidance, sunscreens, and topical use of depigmenting agents such as hydroquinone, azelaic acid and kojic acid,[6] alone or in combination with other topical therapies such as tretinoin,[7] topical corticosteroids, chemical peels and dermabrasion.
Also, different kinds of lasers that help remove selective pigment have become increasingly popular.[8] Despite the multiplicity of all these therapies, their efficacy and safety remain controversial.[9] In the treatment of melasma, the introduction of tranexamic acid (TA) is a relatively new concept. TA, a synthetic derivative of the amino acid lysine widely used as an antifibrinolytic agent, inhibits the plasminogen–keratinocyte interaction, which decreases tyrosinase activity, leading to decreased melanin synthesis by melanocytes.[10,11]

METHODS

This pilot study was conducted at the Outpatient Department of Dermatology, Hamdard Institute of Medical Sciences and Research, New Delhi. The study enrolled 90 patients aged 20 years and above of either gender with moderate to severe melasma fulfilling the inclusion criteria. The study was done between April 2017 and October 2017. Informed written consent was taken from all patients. A detailed history was taken from all patients regarding etiological factors (sun exposure, cosmetic use, oral contraceptive or phototoxic drug use, past pregnancies, menstrual history, and thyroid dysfunction), and family history was also taken into account. All patients were checked for bleeding time, clotting time and platelet count before the start of the study. Based on Wood's lamp examination, a diagnosis of melasma type (epidermal, dermal or mixed) was made. Inclusion criteria comprised patients with moderate to severe melasma and absence of any significant inflammatory signs. The exclusion criteria ruled out women who were pregnant or breastfeeding, and patients with a history of thrombosis, an abnormal bleeding profile, over-expectation from the treatment, any other treatment therapies within 6 months, refusal to allow photographs, or inability to follow up for a duration of six months. A proper record of the size and severity of each melasma lesion was made.
Photographs were taken at the beginning and at each visit under the same exposure conditions. All patients were asked to return to the OPD every 4 weeks for analysis of changes in the lesion. They were also instructed to apply broad-spectrum sunscreens. Tranexamic acid (Tyrodin) was prescribed orally at a dosage of 250 mg twice daily for a period of 6 months. No other medication, either topical or oral, was taken by the patients. Results were assessed by clinical improvement and by photographic assessment. The results were rated as excellent if improvement was >90 %, good if >60 %, fair if >30 % and no improvement if <30 %. Patients' satisfaction was also recorded at each visit.

RESULTS

In the beginning, 100 patients were enrolled in this study, but 10 patients dropped out due to failure to complete the study. Ninety patients were finally analysed. Amongst the ninety patients, 13 (14.4 %) were male and 77 (85.5 %) were female, with 60 patients (66.6 %) having moderate melasma and 30 patients (33.3 %) having severe melasma. The mean age of patients was 36 years; the youngest patient was 19 years old and the oldest 52 years. All patients were of Fitzpatrick skin type III or IV. The duration of melasma ranged from 4 years to 15 years. 46 patients (51.1 %) had a malar pattern, 25 (27.7 %) a centrofacial pattern, 18 (20 %) a mixed pattern and only 1 patient (1.1 %) a mandibular pattern; the malar pattern was the most common, followed by centrofacial and then mixed. On the basis of Wood's lamp examination, 55 patients (61.1 %) had epidermal, 20 patients (22.2 %) mixed and 15 patients (16.6 %) dermal melasma, the epidermal type being the most common. Baseline characteristics of the melasma patients are given in Table 1.
Table 1: Distribution of patients

Variables      N    Percentage %
Males          13   14.4
Females        77   85.5

Table 2: Age-wise distribution of patients

Age (years)    N    Percentage %
≤20            2    2.2
21-40          60   55.5
>41            28   36.6

Table 3: Distribution of patients on the basis of duration

Duration (years)   N    Percentage %
<1                 7    7.7
1-5                50   55.5
>5                 33   36.6

Table 4: Distribution of patients on the basis of Wood's lamp examination

Type           N    Percentage %
Epidermal      55   61.1
Mixed          20   22.2
Dermal         15   16.6

Table 5: Distribution of patients on the basis of pattern

Pattern        N    Percentage %
Malar          46   51.1
Centrofacial   25   27.7
Mixed          18   20
Mandibular     1    1.1

Int Arch BioMed Clin Res. Kakru SN, et al.: Efficacy of Oral Tranexamic Acid. International Archives of BioMedical and Clinical Research | Oct-Dec 2017 | Vol 3 | Issue 4 | 95

Analysis of the results demonstrated excellent improvement in 44 patients (48.8%), good improvement in 25 (27.7%), fair improvement in 17 (18.8%) and no improvement in 4 patients (4.4%). After the 6-month follow-up period on completion of treatment, 75 patients (83.3%) had no recurrence while 15 (16.6%) had recurrence. Our study showed that TA in low dose does not cause any serious side effects and is relatively safe: three patients (3.3%) reported gastric upset (nausea), four (4.4%) oligomenorrhoea and one (1.1%) palpitations. No serious side effect was reported in our study.

Fig 1: Before and after treatment with tranexamic acid.
Fig 2: Before and after treatment with tranexamic acid.
Fig 3: Before and after treatment with tranexamic acid.

DISCUSSION________________________

Melasma is a common pigmentary skin disorder of Asian and Latin American populations, predominantly affecting women; its causes are still unknown. Genetic predisposition, ultraviolet (UV) exposure, hormonal factors and some drugs (e.g. phenytoin) play a role in its pathogenesis.[12] Melasma is not only a cosmetic problem but can also cause psychological distress.
Hydroquinone has been considered the gold-standard treatment,[13] but it is neither satisfactory nor safe owing to severe adverse reactions including erythema, stinging, colloid milium, irritant and allergic contact dermatitis, nail discoloration and paradoxical post-inflammatory hypermelanosis.[14] Laser therapy has shown good effect, but it entails downtime and carries the side effect of post-inflammatory hyperpigmentation.[15] The most effective and safe treatment for melasma is yet to be established. TA has routinely been given by oral administration, and it can also be used by intradermal microinjection.[16] TA had previously been used as a haemostatic agent owing to its antifibrinolytic effect, and was first introduced for melasma by Nijor in 1979.[17] A study of the role of TA in melasma by Sufan and Hangyan showed that oral administration of TA is a very effective and safe therapy.[18] Dunn and Goa[13] studied the role of TA in melasma and showed that TA acts by inhibiting tyrosinase activity, which is vital to melanin synthesis in epidermal melanocytes. A study by Safoora and Riffat on the effect of TA in melasma showed that oral tranexamic acid is a safe and effective treatment.[20] Studies similar to ours, on the effect of TA in melasma in a Nepali population[20] and in a Chinese population,[21] likewise concluded that TA is an effective and safe treatment for melasma. The dose of TA given in melasma is much lower than the dosage given for its haemostatic effect, which makes fatal side effects such as thromboembolism, myocardial infarction and cerebrovascular accident rare. The results of our study are comparable with those of other studies of oral TA in melasma. We noted encouraging results with TA in melasma, and in our study we did not come across any serious side effects.

CONCLUSION_______________________

We conclude that TA is an effective and relatively safe therapy in melasma.
Our treatment was non-invasive, caused no irritation of the skin, carried no risk of post-inflammatory hyperpigmentation, required no downtime, and was affordable for the patients. Further research on long-term administration of TA and on reduction of the recurrence rate is also needed.

Acknowledgments: The authors thank all the patients who participated in this study and the dermatology staff at Hamdard Institute of Medical Sciences and Research, New Delhi.

REFERENCES_______________________

1. Johnston GA, Sviland L, Mclelland J. Melasma of the arms associated with hormone replacement therapy. Br J Dermatol 1998;139:932.
2. Grimes PE. Melasma. Etiologic and therapeutic considerations. Arch Dermatol 1995;131:1453-7.
3. Grimes PE. Management of hyperpigmentation in darker racial ethnic groups. Semin Cutan Med Surg 2009;28:77-85.
4. Gupta AK, Gover MD, Nouri K, Taylor S. Treatment of melasma: A review of clinical trials. J Am Acad Dermatol 2006;55:1048-65.
5. Sanchez NP, Pathak MA, Sato S, et al. Melasma: A clinical, light microscopic, ultrastructural, and immunofluorescence study. J Am Acad Dermatol 1981;4:698-710.
6. Prignano F, Ortonne J, Buggiani G, Lotti T. Therapeutical approaches in melasma. Dermatol Clin 2007;25:337-342.
7. Romero C, Aberdam E, Larnier C, Ortonne JP. Retinoic acid as modulator of UVB-induced melanocyte differentiation. J Cell Sci 1994;107:1095-1103.
8. Angsuwarangsee S, Polnikorn N. Combined ultrapulse CO2 laser and Q-switched alexandrite laser compared with Q-switched alexandrite laser alone for refractory melasma. Dermatol Surg 2003;29:59-64.
9. Prignano F, Ortonne J, Buggiani G, Lotti T. Therapeutical approaches in melasma. Dermatol Clin 2007;25:337-342.
10. Maeda K, Tomita Y.
Mechanism of the inhibitory effect of tranexamic acid on melanogenesis in cultured human melanocytes in the presence of keratinocyte-conditioned medium. J Health Sci 2007;53:389-96; Maeda K, Naganuma M. Topical trans-4-aminomethylcyclohexanecarboxylic acid prevents ultraviolet radiation-induced pigmentation. J Photochem Photobiol 1998;47:136-41.
11. Pawaskar MD, Parikh P, Markowski T, et al. Melasma and its impact on health-related quality of life in Hispanic women. J Dermatolog Treat 2007;18:5-9.
12. Dunn CJ, Goa KL. Tranexamic acid: a review of its use in surgery and other indications. Drugs 1999;57:1005-1032.
13. Prignano F, Ortonne J, Buggiani G, Lotti T. Therapeutical approaches in melasma. Dermatol Clin 2007;25:337-342.
14. Nouri K, Bowes L, Chartier T, Romagosa R, Spencer J. Combination treatment of melasma with pulsed CO2 laser followed by Q-switched alexandrite laser: a pilot study. Dermatol Surg 1999;25:494-497.
15. Lee JH, Park JG, Lim SH, et al. Localized intradermal microinjection of tranexamic acid for treatment of melasma in Asian patients: a preliminary clinical trial. Dermatol Surg 2006;32:626-631.
16. Nijor T. Treatment of melasma with tranexamic acid. Clin Res 1979;13:3129-31.
17. Wu S, Shi H, Wu H, et al. Treatment of melasma with oral administration of tranexamic acid. Aesthetic Plast Surg 2012;36:964-70.
18. Safoora A, Riffat N. Oral tranexamic acid in treatment of melasma in Pakistani population: a pilot study. Journal of Pakistan Association of Dermatologists 2014;24(3):198-203.
19. Karn D, K C S, Amatya A, et al. Oral tranexamic acid for the treatment of melasma. Kathmandu Univ Med J 2012;10:40-3.
20. Wu S, Shi H, Wu H, et al. Treatment of melasma with oral administration of tranexamic acid. Aesthetic Plast Surg 2012;36:964-70.
work_q6x2mnj5bzapvnooyyfczkxzk4 ----

Comparison of optometry vs digital photography screening for diabetic retinopathy in a single district

KL Tu1, P Palimar1, S Sen1, P Mathew1 and A Khaleeli2

Abstract
Purpose: To compare (a) the clinical effectiveness and (b) the cost effectiveness of the two models in screening for diabetic retinopathy.
Methods: (a) Retrospective analysis of the referral diagnoses of each screening model in their first respective years of operation, and an audit of screen-positive patients and a sample of screen negatives referred to the hospital eye service from both screening programmes. (b) Cost-effectiveness study.
Participants: (1) A total of 1643 patients screened in the community and in digital photography clinics; (2) 109 consecutive patients referred to the Diabetic Eye Clinic through the two existing models of diabetic retinopathy screening; (3) 55 screen-negative patients from the optometry model; (4) 68 screen-negative patients audited from the digital photography model.
Results: The compliance rate was 45% for optometry (O) vs 50% for the digital imaging system (I). Background retinopathy was recorded at screening in 22% (O) vs 17% (I) (P = 0.03) and maculopathy in 3.8% (O) vs 1.7% (I) (P = 0.02). Hospital referral rates were 3.8% (O) vs 4.2% (I). Sensitivity (75% for optometry, 80% for digital photography) and specificity (98% for both) were similar in the two models. The cost of screening each patient was £23.99 (O) vs £29.29 (I). The cost effectiveness was £832 (O) vs £853 (I) in the first year.
Conclusion: The imaging system was not always able to detect early retinopathy and maculopathy; it was equally specific in identifying sight-threatening disease. Cost effectiveness was poor in both models in their first operational year, largely as a result of poor compliance rates in the newly introduced screening programme. Cost effectiveness of the imaging model should further improve with falling costs of imaging systems.
Until then, it is essential to continue any existing well-coordinated optometry model.
Eye (2004) 18, 3-8. doi:10.1038/sj.eye.6700497

Keywords: diabetes; retinopathy; screening; optometry; digital photography

Received: 16 July 2002. Accepted in revised form: 20 December 2002.
1 Department of Ophthalmology, Warrington Hospital, Warrington, UK; 2 Department of Medicine, Halton Hospital, Warrington, UK.
Correspondence: P Palimar, Department of Ophthalmology, Warrington Hospital, Warrington WA5 5QG, UK. Tel: +44 1925 635 911; Fax: +44 1925 662 395; E-mail: palimar@tinyonline.co.uk
Eye (2004) 18, 3-8. © 2004 Nature Publishing Group. All rights reserved 0950-222X/04 $25.00. www.nature.com/eye
CLINICAL STUDY

Introduction
Diabetic retinopathy is the leading cause of blindness in working-age patients in the UK.1 Laser photocoagulation is effective if retinopathy is detected before irreversible changes take place.2 Improving metabolic control in patients with mild retinopathy slows the progression of retinopathy,3 indicating a case for the detection of early background retinopathy. In a survey of ophthalmologists in England and Wales carried out in 1999, there was no programme of diabetic retinopathy screening in 23%, and in only 64% were the results routinely sent to the general practitioner.4 In 1999, the UK National Screening Committee asked the British Diabetic Association (now Diabetes UK) to convene an advisory panel to produce a model for a cost-effective national screening programme. The panel's recommendations are now published on the National Screening Committee's website.5 Digital retinal photography is the preferred modality, and a national programme, to be rolled out over a period of 3-4 years, has been proposed. In the Warrington NHS Health Trust catchment area (total population 306 401, 1991 census), two models of screening for diabetic retinopathy have been operating in parallel for its two component areas, Warrington and Halton, for geographic and logistical reasons.
For the Warrington area, an optometric screening programme run by accredited optometrists has been in operation since November 1995, whereas digital photographic screening was introduced in November 1998 for the Halton area. This is a unique setting among the health trusts in the UK and affords us an opportunity to compare the two methods in terms of effectiveness and cost effectiveness in their respective first years of running.

Methods
Screening programme
(a) Optometric screening. This was carried out, using slit-lamp biomicroscopy, by optometrists in the Warrington area who have been accredited for screening following attendance and subsequent assessment at the Retinal Eye Clinic in Warrington Hospital. The accreditation is based on the British Diabetic Association model.6 Patients are referred directly to the hospital diabetic eye service according to a set protocol. Referable retinopathy (RR) is defined as: background retinopathy with macular involvement; background retinopathy without macular involvement if there are large circinate or plaque hard exudates within the major temporal vascular arcades; and background retinopathy with reduced visual acuity (<6/12 Snellen) not corrected by pinhole (suggestive of macular oedema). Preproliferative retinopathy involves: five or more cotton wool spots and/or the presence of venous abnormalities (eg tortuosity or beading), or intraretinal microvascular abnormalities and/or extensive intraretinal haemorrhages. Proliferative retinopathy involves: advanced diabetic eye disease (vitreous haemorrhage, fibrous tissue, recent retinal detachment and rubeosis iridis). The above findings are regarded as sight-threatening diabetic eye disease (STDR). Suspected glaucoma: intraocular pressure of more than 22 mmHg ± a cup:disc ratio of >0.6 or an asymmetry of >0.2. Any other lesion that the observer cannot interpret with reasonable certainty: incidental retinal vascular problems/naevus.
(b) Digital photography for Halton residents.
This was carried out by the Halton Eye Screening Project (HESP), based at Halton Hospital, using a fixed digital camera (Topcon nonmydriatic, model TRC-NW5S; Sony video head 3CCD DXC-950P with a resolution of 800 × 600). Following mydriasis with 1% tropicamide, four 45° field nonstereoscopic images are captured by a professional medical photographer. The fields include the three used by the Liverpool Diabetic Eye Study7 plus a foveo-centric fourth field, to facilitate a clear macular view (Figure 1). Viewing the images on a Texet monitor 1500 with a resolution of 1280 × 1024, an experienced ophthalmologist grades them within a week. Referral to the eye service is according to the same criteria as for the optometric screening referrals. Quality assurance for the imaging system is secured by using the services of a professional photographer and an experienced intermediate-grade ophthalmologist who reads the images, and through special audit clinics. These are run by the consultant ophthalmologist, who examines a systematic sample (10%) of screen-negative patients using slit-lamp biomicroscopy within 3 months of initial photography.
(c) The call-recall service for both screening modalities is administered by the Cheshire Health Agency based on the diabetes register. Patients who are currently attending the eye clinic are excluded from the call-recall system.
Hospital diabetic eye service (DES): an experienced ophthalmologist examines and grades the referrals from both systems, using slit-lamp biomicroscopy. Appropriate treatment and/or follow-up is then arranged.
The study groups consist of the following. Optometry: (a) 769 patients screened by optometrists in the first year; (b) 51 patients referred to the DES for STDR; (c) 55 patients referred for cataract/glaucoma with no STDR, in the first year.
Photography: (a) 874 patients screened by photography in its first year; (b) 68 screen-negative patients from the audit clinic; and (c) 58 patients referred to the DES following photography for STDR. Statistical analysis was carried out using InStat software on an Apple Macintosh computer. Two-sided P values, using Fisher's exact test, were calculated for the referral diagnoses of background diabetic retinopathy and maculopathy for each screening model, and for false-positive rates.

Figure 1: Four overlapping 45° fields for retinal photography of the right eye.

Results
A total of 6294 people (2.05% of the district population) were identified as having diabetes. Of these, 1312 (21%) have type 1 diabetes (IDDM) and 4982 (79%) have type 2 diabetes (NIDDM). The whole population is predominantly Caucasian, with only 1.1% being of other ethnic origin.

Optometry model
A total of 14 optometrists carried out between eight and 50 screenings each in the first year. A total of 1708 patients were invited to attend the accredited optometrists. In all, 769 patients attended, giving a compliance rate of 45%. There were 429 (55.7%) males and 340 (44.3%) females. Of the patients, 29 were less than 30 years old, 273 were between 30 and 60 years, and 467 were over 60 years (1:9:16). The mean age was 62.8 years. In total, 591 patients had no retinopathy (76.8%). The distribution of diabetic retinopathy was as follows: 168 (21.8%) mild to moderate background retinopathy; eight (1%) severe, including preproliferative, diabetic retinopathy; five (0.7%) proliferative retinopathy; and 29 (3.8%) patients with maculopathy (Table 1). Overall, 29 patients (3.8%) were referred directly to the diabetic eye clinic. In addition, 157 patients had a degree of cataract in either eye (20.4%) and glaucoma was diagnosed/suspected in 21 patients (2.7%). A total of 55 (7.1%) patients were referred for cataract and glaucoma through the general practitioner.
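The two-sided P values above come from Fisher's exact test. As an illustration (a minimal implementation of our own, not the authors' software), the background-retinopathy comparison can be recomputed from the screened counts, 168/769 for optometry vs 149/874 for imaging; the paper reports P = 0.03 for this comparison, and an exact recomputation from the published counts may differ slightly:

```python
from math import exp, lgamma

def log_comb(n, k):
    """log of the binomial coefficient C(n, k), via log-gamma."""
    return lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher's exact test for the 2x2 table [[a, b], [c, d]].

    Sums the probabilities of all tables with the same margins that are
    no more likely than the observed table (the usual two-sided rule).
    """
    n = a + b + c + d
    r1, c1 = a + b, a + c                 # first-row and first-column totals

    def prob(k):                          # hypergeometric P(top-left cell == k)
        return exp(log_comb(r1, k) + log_comb(n - r1, c1 - k) - log_comb(n, c1))

    p_obs = prob(a)
    lo, hi = max(0, c1 - (n - r1)), min(r1, c1)
    return sum(prob(k) for k in range(lo, hi + 1) if prob(k) <= p_obs * (1 + 1e-9))

# Background retinopathy: 168 of 769 (optometry) vs 149 of 874 (imaging)
p = fisher_exact_two_sided(168, 769 - 168, 149, 874 - 149)
print(f"two-sided P = {p:.3f}")  # significant at the 5% level
```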
Digital photography
Of the 1748 invitations sent out for retinal photography, 874 patients attended, giving a compliance rate of 50%. There were 456 (52.1%) males and 418 (47.9%) females. The age distribution was: 36 patients less than 30 years, 314 between 30 and 60 years, and 530 over 60 years (1:9:15), with a mean age of 61.2 years. In sum, 682 patients had no retinopathy (78.1%). The distribution of diabetic retinopathy was as follows: 149 (17.1%) mild to moderate background retinopathy; 15 (1.8%) severe background retinopathy, including preproliferative changes; two (0.2%) proliferative retinopathy; and 12 (1.7%) maculopathy (Table 1). In all, 37 (4.2%) patients were referred to the hospital diabetic eye service.
Although the two samples were not specifically age and sex matched, the male:female ratio and the age distribution of the two groups are similar. A significantly higher proportion of patients was diagnosed with very mild to moderate background retinopathy by the optometrists (P = 0.03). Diabetic maculopathy, too, was recorded in a significantly larger group by the optometrists (P = 0.02). There was no statistically significant difference in the overall rate of referrals by the two models. Sensitivity and specificity of the two systems could not be determined fully because of the lack of funding for examining the whole study cohort; a limited exercise has been carried out by extrapolating the findings at the DES and the audit clinic.

Hospital Diabetes Eye Service
Cases were classified as true positives if they satisfied the criteria for STDR when examined in the DES. In all, 51 consecutive patients referred by the optometrists were seen, and the findings of the ophthalmologist were compared with the referral record of the optometrist. Of these, 13 were deemed unjustified referrals (Table 2). The records of the 55 patients referred for cataract/glaucoma, with apparently no STDR, were compared with findings in the eye clinic.
One patient had severe background diabetic retinopathy. This was classified as a false negative, and the ratio of 1:54 was applied to the screen-negative patients as a whole. The sensitivity and specificity for STDR were 75 and 98%, respectively (Table 3). The 58 patients referred to the DES from the photography model were assessed; referral was deemed unnecessary in 10 cases (Table 2). To determine sensitivity and specificity, we included the 68 patients seen in the audit clinic. One patient had maculopathy (STDR), a false negative; this ratio of 1:67 was applied to the whole screened population. The sensitivity and specificity for STDR were 80 and 98%, respectively (Table 4). There was no statistical difference in the confirmed cases of maculopathy between the two screening modules. Comparison of other features/stages of diabetic retinopathy, too, did not show any statistically significant differences.

Table 1: Analysis of diagnoses/staging by screener/reader at each screening model

                        Optometry        Digital photography
Total invites           1708             1748
Total number screened   769 (45%)        874 (50%)
No retinopathy          591 (76.8%)      682 (78.1%)
Background DR           168 (21.8%)      149 (17.1%)   P = 0.03
Preproliferative DR     8 (1%)           15 (1.8%)
Maculopathy             29 (3.8%)        12 (1.7%)     P = 0.02
Proliferative DR        5 (0.7%)         2 (0.2%)
Cataract                157 (20.4%)      Not known
Glaucoma                21               Not known
Hospital referrals      29/769 (3.8%)    37/874 (4.2%)

DR, diabetic retinopathy.

Costing
The cost of setting up each system is shown in Table 5. The cost of screening per patient is arrived at by dividing the total cost by the number of patients actually screened in the first year. The cost effectiveness is calculated by dividing the total cost by the number of true positives (test positives × sensitivity).

Optometric screening
Cost per screened case = cost of call-recall service per patient + optometry fee per patient = £6550/769 + £15.48 = £23.99.
Cost effectiveness for optometry = total cost/true positives = £18 454/22 = £839.

Digital photography
Cost per actual patient screened = £25 599.30/874 = £29.29. Cost effectiveness for digital photography = £25 599/30 = £853.

Discussion
The prevalence of diabetes is 2.05%, comparing well with other reports.7-10 The overall prevalence of diabetic retinopathy in the two screened models varied from 21 to 24%. The prevalence in other studies from the UK has been reported as 30.3% in Exeter11 (EDRS), 40.4% in the Liverpool Diabetic Eye Study12 (LDES), 41% in insulin-requiring subjects,13 and 50% in non-insulin-requiring diabetics14 in the Melton Mowbray Study (MMS). The prevalence of maculopathy has been cited as 5-14%, and of severe preproliferative and proliferative retinopathy as 1.1-9%.11-14 Our equivalent figures are 3.8% for maculopathy and 0.8% for proliferative retinopathy. The ethnic mix in our population is similar to that of the other groups, so the lower figures for overall retinopathy and STDR must be explained by other factors. Firstly, we have excluded from the screening programmes all patients already attending the Diabetic Eye Clinic in the hospital; the three other studies all included in their samples patients already under the care of hospital diabetic eye specialists (up to 30% in the MMS14). Secondly, our programmes were established later than the others (Melton Mowbray, 1987; Exeter, 1992; Liverpool Diabetic Eye Study, 1992) and hence were probably influenced by changing clinical practice with regard to glycaemic control. This effect could also explain to some degree the difference between our own two models, the imaging model being established 3 years later. We have been unable to access the biochemical details of all our patients to substantiate this theory. The existence of good opportunistic screening prior to our systematic screening programmes may also have helped.
Finally, some authors have suggested that lower prevalence rates could be a result of poor compliance.12

Table 2: Unjustified referrals

Optometry                            Digital photography
Mild-moderate retinopathy: 8         Mild-moderate retinopathy: 6
Drusen: 4                            Drusen: 2
Resolved retinal vein occlusion: 1   Tamoxifen retinopathy: 1
                                     Macular scar: 1

Table 3: Detection of sight-threatening retinopathy by optometrists

                True positive    True negative
Test positive   38               13
Test negative   13               705

Sensitivity: 74.5% (95% CI: 60-86). Specificity: 98% (95% CI: 97-99).

Table 4: Detection of sight-threatening retinopathy by digital photography

                True positive    True negative
Test positive   48               10
Test negative   12               804

Sensitivity: 80% (95% CI: 68-89). Specificity: 98.7% (95% CI: 98-99).

Table 5: Costing of services

                               Optometry            Imaging
Total screened in first year   769                  874
Fees                           £15.48 per patient   Medical photographer £5330 per annum (pa);
                                                    staff grade £3960 pa; nurse £2500 pa;
                                                    secretary £1500 pa; lease of system £5759.30 pa
CHA (call/recall service)      £6550 pa             £6550 pa
Total operational cost         £18 454.12           £25 599.30

CHA, Cheshire Health Agency.

Compliance with both screening models in their first respective years of operation was equally poor. Compliance in the first year of other programmes has varied from 80 to 100%.8,11 On review, the DNA (did not attend) rate for the HESP had dropped to 15% by October 2000 (year 2).
This increase in compliance is in contradistinction to the Exeter study, where DNA rates actually worsened over successive cycles.11 Improved coverage, rather than minor differences in modes of screening, has been suggested as the most economical way forward.15 A model in which a mobile screening unit visits inner-city community clinics and performs mydriatic 35 mm colour photography has been well described;16 compliance in the fifth year of its operation was 80%.7 Digital photography equipment ideally should undergo as little transportation or movement as possible, to avoid damaging the delicate optics and computer equipment.17
Optometric screening appeared to detect significantly higher rates of early retinopathy as well as maculopathy; comparison of other features/stages of diabetic retinopathy did not show any differences. Although of little significance to the ophthalmologist, picking up early background retinopathy signals the need for tighter systemic control by the physician.3 The high proportion of unjustified referrals represents, in part, an initial 'play safe' attitude by the screener/reader. The sensitivity of 75% and specificity of 98% are similar to those of other optometry programmes.8,18 The use of slit-lamp biomicroscopy resulted in better sensitivity than the 65% reported using direct ophthalmoscopy.7 We agree with the NSC that there is a lack of a hard record for quality assurance and/or for monitoring progressive change. Also, the critical size of caseload below which practitioners may be unable to maintain their skills adequately is not known; in the first year, the number of patients screened per optometrist varied from eight to 50.
The sensitivity and specificity of digital photographic screening for diabetic retinopathy have been compared favourably with those of 35 mm colour photographic screening.19 The sensitivity of 80% and specificity of 98% in our system are similar to other reports.20-22 We are slightly concerned by the apparent failure of the digital system to pick up subtle macular changes, although there was no statistical difference in the confirmed cases of maculopathy between the two screening modules (ie they were equally specific). Better resolution and the use of stereo images and oral fluorescein angiography may increase the detection of subtle macular oedema.7,23 Digital photography has an advantage over optometric screening in that there is a hard record, making quality assurance easier. Digital images also offered us ease of acquisition, storage and transfer between screener/reader and treating ophthalmologist.
In our study, in the first respective years of operation, the cost per screened patient was £23.99 for optometry vs £29.29 for digital photography. The estimated cost of £23 per screened patient in the NSC model5 and the Liverpool Diabetic Eye Study24 is remarkably similar to our figure. We have based cost effectiveness, in the first year, on true positives and found the two models to be comparable (£839 for optometry vs £853 for digital). The NSC has arrived at a figure of £1370 for cost effectiveness; this, however, includes the cost of being seen by the ophthalmologist, laser treatment, follow-up, etc. If the costing is stopped at the stage of referral, the equivalent cost-effectiveness figure is £270. This is in line with the Liverpool Diabetic Eye Study, where the cost effectiveness of systematic 35 mm photographic screening in its fifth year was £209.24 The NSC has estimated cost effectiveness based on an 8.5% referral rate and a compliance of 85% in the first year. Our referral rate from both screening programmes was 4%.
We would explain this on the basis of previously efficient opportunistic screening and the systematic exclusion from the call-recall system of all diabetic patients attending any eye clinic (see above). Our revised cost effectiveness, based on 85% attendance, a referral rate of 8.5%, and the current sensitivity, would be as follows.

Optometry
Number of true positives = compliance × total invites × referral rate × sensitivity = 85% × 1708 × 8.5% × 75% = 92.
Total cost = fee per patient + call-recall = 1451 (85% of 1708) × £15.48 + £6550 = £28 961.
Cost effectiveness = total cost/true positives = £28 961/92 = £315.

Digital photography
Number of true positives = compliance × total invites × referral rate × sensitivity = 85% × 1748 × 8.5% × 80% = 95.
Cost effectiveness = total cost/true positives = £25 599/95 = £269.

Thus, cost effectiveness improves substantially in both systems, both by increased coverage and by increased referral rates. Poor compliance has a disproportionate bearing on the imaging programme, as the fixed costs (including the capital outlay for the lease of the camera) are greater. Conversely, the cost per screen as well as the cost per true-positive case (cost effectiveness) in the imaging model has the greater potential to come down as coverage improves.
In conclusion, an optometry system can pick up minimal background retinopathy, highlighting the need for tight systemic control and secondary prevention. The imaging system is not always able to detect subtle retinopathy but is equally specific in identifying sight-threatening disease. Cost effectiveness was poor in both models as a reflection of poor compliance rates in the first year. Cost effectiveness of the imaging model should further improve with falling costs of imaging systems. Until then, it is essential to continue any existing well-coordinated optometry model.
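The sensitivity, specificity and costing arithmetic reported in this paper can be reproduced with a short script. This is an illustrative sketch of our own: the counts and fees are taken from Tables 3-5 and the text, rounding true positives to whole patients is an assumption made to match the published figures, and only the optometry projection is recomputed here.

```python
# Reproduces the screening arithmetic from the Results and Discussion.
# Counts come from Tables 3-5; rounding true positives to whole patients
# is an assumption made to match the published figures.

def sens_spec(tp, fp, fn, tn):
    """Sensitivity and specificity from a 2x2 screening table."""
    return tp / (tp + fn), tn / (tn + fp)

# Table 3 (optometry) and Table 4 (digital photography)
sens_o, spec_o = sens_spec(tp=38, fp=13, fn=13, tn=705)  # ~74.5%, ~98%
sens_i, spec_i = sens_spec(tp=48, fp=10, fn=12, tn=804)  # ~80%, ~99%

# First-year cost per screened patient
cost_per_screen_opt = 6550 / 769 + 15.48  # call-recall share + fee, ~£23.99
cost_per_screen_img = 25599.30 / 874      # total cost / screened, ~£29.29

# First-year cost effectiveness = total cost / true positives,
# with true positives = hospital referrals x sensitivity
ce_opt = 18454.12 / round(29 * 0.75)      # £18,454 / 22, ~£839
ce_img = 25599.30 / round(37 * 0.80)      # £25,599 / 30, ~£853

# Revised optometry projection: 85% compliance, 8.5% referral rate
tp_proj = int(0.85 * 1708 * 0.085 * 0.75)  # 92 true positives
total_cost_proj = 1451 * 15.48 + 6550      # 85% of invites x fee + call-recall
ce_proj = total_cost_proj / tp_proj        # ~£315 per true positive

print(f"optometry: sens {sens_o:.1%}, spec {spec_o:.1%}, CE £{ce_opt:.0f}")
print(f"imaging:   sens {sens_i:.1%}, spec {spec_i:.1%}, CE £{ce_img:.0f}")
```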
Acknowledgements
This material was presented as a poster at the Oxford Eye Meeting, July 2002. We are indebted to Gaynor Shawcross, medical photographer, and to all the accredited optometrists. We hold no proprietary interest in the imaging system mentioned in the paper.

References
1. Evans J. Causes of Blindness and Partial Sight in England and Wales 1990-1991. HMSO: London, 1995.
2. Diabetic Retinopathy Screening Group. Photocoagulation treatment of proliferative diabetic retinopathy. Clinical application of diabetic retinopathy (DRS) findings. DRS Report No. 8. Ophthalmology 1991; 88: 583-600.
3. Diabetes Control and Complications Trial Research Group. The effect of intensive treatment of diabetes on the development and progression of long-term complications in insulin-dependent diabetes mellitus. N Engl J Med 1993; 329: 977-986.
4. Baker R, Grimshaw G, Thompson JR, Wilson A. Services for diabetic retinopathy in England and Wales: a survey of ophthalmologists. Pract Diabetes Int 1989; 16(2): 33-34.
5. http://www.diabetic-retinopathy.screening.nhs.uk/recommendations.html.
6. British Diabetic Association. Practical guidance: initiating a district optometry screening programme for diabetic eye disease. A British Diabetic Association Report. September 1997.
7. Harding SP, Broadbent DM, Neoh C, White MC, Vora J. Sensitivity and specificity of photography and direct ophthalmoscopy in screening for sight threatening eye disease: the Liverpool Diabetic Eye Study. BMJ 1995; 311: 1131-1135.
8. Prasad S, Kamath GG, Jones K, Clearkin LG, Phillips RP. Effectiveness of optometrist screening for diabetic retinopathy using slit-lamp biomicroscopy. Eye 2001; 15: 595-601.
9. Wells S, Bennett I, Holloway G, Harlow V. Area wide diabetes care: the Manchester experience with primary health care teams 1991-1997. Diabetic Med 1998; 15 (Suppl 3): S49-S53.
10. Siann T, Duncan EM, Sullivan F, Mathews D, Cromie DT. Area-wide diabetes care: the Lanarkshire experience with primary health care teams 1994-1997.
Diab Med 1998; 15 (Suppl 3): 54–57. 11 Ling R, Ramsewak V, Taylor D, Jacob J. Longitudinal study of a cohort of people with diabetes screened by the Exeter Diabetic Retinopathy Screening Programme. Eye 2002; 16: 140–145. 12 Broadbent DM, Scott JA, Vora JP, Harding SP. Prevalence of diabetic eye disease in an inner city population: the Liverpool Diabetic Eye Study. Eye 1999; 13: 160–165. 13 Sparrow JM, McLeod BK, Smith TD, Birch MK, Rosenthal AR. The prevalence of diabetic retinopathy and maculopathy and their risk factors in the non-insulin treated diabetic patients of an English town. Eye 1993; 7: 158–163. 14 Mcleod BK, Thompson JR, Rosenthal AR. The prevalence of retinopathy in the insulin-requiring diabetic patients of an English country town. Eye 1988; 2: 424–430. 15 Thompson JR, Grimshaw GM, Wilson AD, Baker R. Screening for diabetic retinopathy: a survey of health authorities during a period of transition. J Eval Clin Practice 1999; 5(1): 81–85. 16 Owens DR, Gibbins RL, Kohner E, Grimshaw GM, Greenwood R, Harding SP. Diabetic retinopathy screening, 2000 Diabetes UK. Diabetic Med 2000; 17: 493–494. 17 Hildred RB. Alternative imaging in ophthalmology. Part 3: Retinal photography and diabetes in primary care. Eye News 2001; 7(6): 12–20. 18 Burnett S, Hurwitz B, Davey C, Ray J, Chaturvedi N, Salmazmann J et al. The implementation of prompted retinal screening for diabetic eye disease by accredited optometrists in an inner-city district of North London: a quality of care study. Diabetic Med 1998; 15: 238–243. 19 George LD, Halliwell M, Hill R, Adlington SJ, Lusty J et al. A comparison of digital retinal images and 35 mm colour transparencies in detecting and grading diabetic retinopathy. Diabetic Med 1998; 15: 250–253. 20 Buxton MJ, Sculpher MJ, Ferguson BA, Humphreys JE, Altman JFB, Spiegelhalter DJ et al. Screening for treatable diabetic retinopathy: a comparison of different methods. Diabetic Med 1991; 8: 371–377. 21 Gibbins RL, Owens DR, Allen JC, Eastman L. 
work_qa7m6mlrbjd4hp6gj3aimxfdfu ---- Power in context: the Lismore landscape project

D.I. REDHOUSE, M. ANDERSON, T. COCKERELL, S. GILMOUR, R. HOUSLEY, C. MALONE & S. STODDART*

Modern studies of Iron Age landscapes in Scotland have concentrated on the outer islands (e.g. Parker Pearson & Sharples 1999; Harding 2000), and most recently on areas such as Caithness (Heald & Jackson 2001). Argyll (FIGURE 1) has recently been designated a 'black hole' in terms of current knowledge (Haselgrove et al. 2001: 25). Although this description underestimates the work of the Royal Commission survey of Argyll (RCAHMS 1975), further work is needed to allow comparative models of settlement organization. The Royal Commission recovered evidence for 14 sites - brochs, duns and forts - broadly defined as Iron Age.
The current project will build on these foundations by analysis of aerial photographs and systematic survey, followed by detailed topographical and geophysical survey of earthworks and selective excavation. The aim is to understand the changing landscape of the period 1000 BC-AD 1000, through a reconstruction of the precisely dated development of settlement against the pattern of land-use, leading to new models of economic and political organization. Work by one of us (TC) provided systematic aerial photographic cover of the island at 1:6000 on 3 May 2000 (e.g. FIGURE 2), which will be employed to identify sites and, after ground truthing by GPS, will provide an enhanced Digital Elevation Model. A desktop study (2001-2) (by DIR) drew on the Ordnance Survey (OS) digital mapping data provided through the JISC/EDINA Digimap scheme and the site database of the National Monuments Record of Scotland (RCAHMS). These data were analysed in ESRI ArcInfo 8.0.1 on Solaris 7 to ask some basic spatial questions of the known archaeological sites on the island: notably, most Iron Age sites were placed so as to have maximum visual control of the maritime approaches from the nearest mainland to the southeast (e.g. FIGURE 3). The first fieldwork (August 2002) concentrated on the systematic recording of the monument of Tirefour, the best-known Iron Age site on the island. A detailed topographic survey was implemented to investigate the site's geographical context and to detect outworks to the main defended area. The topographic survey (FIGURE 4) confirms details of the defensive outworks on the Tirefour ridge, highlights associated sub-rectangular buildings which do not appear on the 1875-1900 OS sheets, outlines a platform below the broch to the north, and records the process of decay of the monument since the RCAHMS survey of May 1968.
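The visibility analysis behind the viewsheds mentioned above can be illustrated with a minimal line-of-sight test on a gridded DEM. This is a generic sketch of what a GIS viewshed tool computes, not the project's ArcInfo workflow; the grid, the sampling density and the 5 m observer-height default (taken from the FIGURE 3 caption) are assumptions.

```python
import math

def line_of_sight(dem, obs, target, obs_height=5.0):
    """True if `target` cell is visible from `obs` cell on a square-grid DEM.

    dem        -- 2D list of elevations (rows x cols)
    obs/target -- (row, col) cell indices
    obs_height -- eye height above the ground at the observer
    """
    if obs == target:
        return True
    (r0, c0), (r1, c1) = obs, target
    eye = dem[r0][c0] + obs_height
    dist = math.hypot(r1 - r0, c1 - c0)
    steps = int(dist * 4) + 1           # oversample the ray between the cells
    max_angle = -math.inf               # steepest intermediate elevation angle
    for s in range(1, steps):
        t = s / steps
        rr = round(r0 + (r1 - r0) * t)  # nearest-neighbour sample on the ray
        cc = round(c0 + (c1 - c0) * t)
        if (rr, cc) in (obs, target):
            continue                    # skip samples falling on the endpoints
        max_angle = max(max_angle, (dem[rr][cc] - eye) / (t * dist))
    # The target is visible if nothing between it and the eye rises higher
    return (dem[r1][c1] - eye) / dist >= max_angle

# A flat 5x5 DEM: every cell is visible from the centre...
flat = [[0.0] * 5 for _ in range(5)]
# ...but a 100 m wall at (2, 3) hides the cell behind it:
wall = [row[:] for row in flat]
wall[2][3] = 100.0
```

A full viewshed simply applies `line_of_sight` from the observer to every cell; GIS packages refine this with interpolated terrain profiles and earth-curvature and refraction corrections.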
During the 2002 season, all known prehistoric sites on the island were visited and their condition compared with the record made in the RCAHMS survey. In particular, a comparative viewshed using digital photography was constructed (by MA) for a sample of sites (e.g. FIGURE 5), which will be compared with computer-generated viewsheds and displayed on the project website. A pilot study of the potential for environmental reconstruction in the nearest loch to Tirefour (by RH) comprised augering at the northeast end of Balnagowan (Baile a'Ghobhainn) loch, which recorded a peat and marl sequence in excess of 6 m. Towards the northeast margin of the deposit, a full 4-m sequence was recovered, with alternating layers of fen carr peat and marl over a silt of probable late glacial age. This sequence, in conjunction with the potentially high preservation of calcareous soils for excavated ecofacts, gives great hope for detailed environmental and economic reconstruction.

FIGURE 1. Map showing location of Tirefour in relationship to Balnagowan loch.

* Redhouse, Department of Archaeology, University of Cambridge, Downing Street, Cambridge CB2 3DZ, England. d.i.redhouse@arch.cam.ac.uk
Anderson, Corpus Christi College, Cambridge CB2 lm, England. maa35@hermes.cam.ac.uk
Cockerell, Committee for Aerial Photography, Free School Lane, Cambridge CB2 3W, England. tfc1000@cam.ac.uk
Gilmour, Royal Commission on the Ancient & Historical Monuments of Scotland, 16 Bernard Terrace, Edinburgh EH8 9NX, Scotland. Simongi@rcahms.gov.uk
Housley, Department of Archaeology, University of Glasgow, Glasgow G12 8QQ, Scotland. R.Housley@archaeology.arts.gla.ac.uk
Malone, Department of Prehistory & Early Europe, British Museum, London WC1B 3DG, England. cmalone@british-museum.ac.uk
Stoddart, Magdalene College, Cambridge CB3 0AG, England. ssl6@cam.ac.uk

ANTIQUITY 76 (2002): 945-6

FIGURE 4. Detailed topographic plan of Tirefour.

Initial work on the island of Lismore (2229 ha (c.
15.4 × 2.3 km)) shows the great potential for a detailed picture of an Iron Age landscape. Tirefour (FIGURE 6) was located within a fertile area of the island, and it is the understanding and development of this relationship which will be investigated by a battery of interdisciplinary techniques over the next two years, against a broader pattern of Iron Age sites throughout the whole island.

Acknowledgements. For finance: Historic Scotland, the Cook Trust, the University of Cambridge and Magdalene College, Cambridge. For other support: the Cambridge Committee for Aerial Photography (CUCAP), the JISC/EDINA Digimap scheme, the National Monuments Record of Scotland in the Royal Commission on the Ancient and Historical Monuments of Scotland, the Historical Society of Lismore, the White and Kayll families and many individuals from Lismore.

FIGURE 3. Viewshed from Tirefour generated in ESRI ArcInfo 8.0.1 on Solaris 7. DEM taken from OS PANORAMA data set. Observer height 5 m. (Crown copyright Ordnance Survey.)

References
HARDING, D.W. 2000. The Hebridean Iron Age: twenty years' research. Edinburgh: Department of Archaeology, Occasional Paper 20.
HASELGROVE, C., I. ARMIT, T. CHAMPION, J. CREIGHTON, A. GWILT, J.D. HILL, F. HUNTER & A. WOODWARD. 2001. Understanding the British Iron Age: an agenda for action. A report for the Iron Age Research Seminar and the Council of the Prehistoric Society. Salisbury: Trust for Wessex Archaeology.
HEALD, A. & A. JACKSON. 2001. Towards a new understanding of Iron Age Caithness, Proceedings of the Society of Antiquaries of Scotland 131: 129-47.
PARKER PEARSON, M. & N. SHARPLES. 1999. Between land and sea: excavations at Dun Vulan, South Uist. Sheffield: Sheffield Academic Press.
RCAHMS. 1975. Argyll: an inventory of the ancient monuments 2: Lorn. Edinburgh: RCAHMS.

FIGURE 5. Photographic viewshed from Tirefour.
work_qakeefukwzhijc424h3lvilzsa ---- Simple way of splinting the arm following vascular anastomosis in the axilla

Pravin H. P. Kumar, Vijay Dattatrya Kadam, Sushil Nahar, Rahul Krishnarao Patil, A. Koul. Indian Journal of Plastic Surgery 2017; 50: 116-117. DOI: 10.4103/ijps.IJPS_34_15.

1. Tatlidede S, Egemen O, Bas L. A useful tool for intraoperative photography: underwater camera case. Ann Plast Surg 2008; 60: 239-40.
2. Raigosa M, Benito-Ruiz J, Fontdevila J, Ballesteros JR. Waterproof camera case for intraoperative photographs. Aesthetic Plast Surg 2008; 32: 368-70.
3. Tsai J, Liao HT, Wang WK, Lam WL, Kuo LM, Chen RF, et al. A safe and efficient method for intra-operative digital photography using a waterproof case. J Plast Reconstr Aesthet Surg 2011; 64: e253-8.

References:
Raigosa M, Benito-Ruiz J, Fontdevila J, Ballesteros JR. Waterproof Camera Case for Intraoperative Photographs. Aesthetic Plastic Surgery 2007.
Buford GA, Trzeciak M. A novel method for lower-extremity immobilization after free-flap reconstruction of posterior heel defects. Plastic and Reconstructive Surgery 2003.
Parrett BM, Caterson S, Tobias A, Lee B. DIEP Flaps in Women with Abdominal Scars: Are Complication Rates Affected? Plastic and Reconstructive Surgery 2008.
Venkataramani H, Jain D, Sabapathy S. Comments: A useful modification of the plaster backslab to off-load pressure from reconstructions of the heel and elbow. Indian Journal of Plastic Surgery 2012.
work_qam3mknqq5gkflfnfrmotoi57i ---- Page not available.

work_qbk6nmchyzgkbdewy6j3gq4wyi ---- PRIMARY RESEARCH PAPER

The use of digital photography for the definition of coastal biotopes in Azores

N. V. Álvaro · F. F. M. M. Wallenstein · A. I. Neto · E. M. Nogueira · J. Ferreira · C. I. Santos · A. F.
Amaral

Received: 17 October 2006 / Revised: 9 May 2007 / Accepted: 7 June 2007 / Published online: 18 July 2007
© Springer Science+Business Media B.V. 2007

Abstract: Sampling benthic communities usually requires intensive field and lab work, which is generally performed by skilled staff. In algal-dominated communities, like those on the shores of the Azores, biotope characterization studies focused on the more conspicuous algae categories, thus reducing the skills required for species identification. The present study compares in situ quadrat quantifications done by a skilled reader with computer-based quantifications, accomplished by skilled and non-skilled readers, of digital photographic records of the same areas read in situ. The study was conducted inter- and subtidally at various shore heights/depths. Quantification of algal coverage, both in situ and computer based, used the point-to-point method with quadrats of 0.25 m × 0.25 m for the intertidal and 0.50 m × 0.50 m for the subtidal surveys, both subdivided into 36 intersection points. Significant differences were found between in situ readings and computer-based readings of photographic records conducted both by experienced and inexperienced readers. Biotopes identified using in situ data and image-based data differ both for the subtidal and the intertidal.

Keywords: Algae · Quantification · Littoral biotopes · Digital image

Introduction

During the last decade there has been great effort in finding time-effective and non-destructive research methods to reduce expert time and involvement in ecological field surveys (Turnbull & Davies, 2001). Photography and video are examples of such methods, and increases in digital image quality allow the collection of high-resolution images that can be immediately discarded when not meeting the desired results. Its application is wide, ranging from remote sensing of wide areas using satellite and plain photography to photo microscopy.
In many cases it is easier and cheaper to use good-quality images than to go into the field to record species and/or other environmental features. Digital images have been used widely in biological and ecological studies, e.g. as a tool to monitor animal behaviour in Lobsiger et al. (1986), Van Rooij & Videler (1996) and Ishii et al. (1998). Magorrian & Service (1998) and Pech et al. (2004) used digital imagery to assess and quantify benthic organisms. Norris et al. (1997) used video images to estimate seagrass bed coverage; more recently, Bullimore & Hiscock (2001) used photography to monitor sublittoral rock biotopes and Ducrotoy & Simpson (2001) to monitor algal bed coverage. The latter note the importance of using images in surveys that consider broad ecological functional groups of algae that structure benthic communities, and emphasize the use of photography in enabling unskilled fieldworkers' participation, thus restricting the involvement of experts to the data-processing stage.

Handling editor: K. Martens
Electronic supplementary material: The online version of this article (doi:10.1007/s10750-007-9064-7) contains supplementary material, which is available to authorized users.
N. V. Álvaro · F. F. M. M. Wallenstein (corresponding author) · A. I. Neto · E. M. Nogueira · J. Ferreira · C. I. Santos · A. F. Amaral, Secção de Biologia Marinha, Laboratório de Ficologia, CIRN, Departamento de Biologia, Universidade dos Açores, Apartado 1422, Ponta Delgada, São Miguel, Açores 9501-801, Portugal. e-mail: fmacedo@notes.uac.pt
Hydrobiologia (2008) 596:143-152. DOI 10.1007/s10750-007-9064-7

In the Azores there have been recent developments in biotope definition and spatial distribution (Wallenstein & Neto, 2006; Wallenstein et al., in press; Wallenstein et al., submitted), based on broad ecological categories that are easily recognizable by unskilled surveyors. This classification and the methodologies used were developed with coastal management purposes in mind, to assess biotope coverage by existing protected areas. These methodologies were designed to be implemented by surveyors with limited skills in algae taxonomy. The use of digital images is seen as a means to further reduce the involvement of skilled field surveyors in the collection of biotic data for biotope definition, and thus to increase its feasibility for official agencies' staff. With the purpose of defining biotopes based on broad ecological categories, the present study was developed on Graciosa Island (Azores) and aimed at verifying: (i) the possibility of using digital imagery and thus dismissing skilled field surveyors, and (ii) the need to involve skilled surveyors in image-based quantification of algae communities.

Materials and methods

Location selection

Rocky shore study sites around Graciosa Island were selected randomly by overlaying a 2 km × 2 km grid on a map of the island (Fig. 1). The grid intersections around the coastline created a pool of potential study sites. These were numbered 1 to 16 anticlockwise from Santa Cruz, and survey sites were selected using random numbers. As most intersections did not fall directly on the coastline, survey sites were located by a north, south, east or west landward projection from a selected numbered intersection (see Fig. 1). The total number of sites to be studied was defined a priori to assure a balanced sampling design for both inter- and subtidal zones, considering substratum type and shore height as structuring factors for the intertidal (Wallenstein & Neto, 2006; Wallenstein et al., submitted) and depth for the subtidal (Wallenstein et al., in press). A total of nine intertidal sites were surveyed, three for each substrate category (cobbles, boulders and bedrock), and at each site algae were quantified at three shore levels (L1-L3; see below in "Intertidal replication"). Subtidally, twelve sites were surveyed, three at each of four depth ranges (4-6 m, 12-14 m, 20-22 m and 28-30 m). The survey was conducted throughout the months of June and July 2006.

Fig. 1 Graciosa Island with superimposed 2 km × 2 km grid and indication of the landward projection of numbered intersections.

Field work

Subtidal: Quantitative data on sessile organisms such as algae, sponges, hydrozoans and bryozoans were gathered from nine replicate quadrats at each site: the first quadrat was placed next to the anchor of the boat at the mid-depth level of the desired depth range (i.e. 5 m, 13 m, 21 m or 29 m); subsequent quadrats were placed at a random distance and direction from the first one (Wallenstein et al., in press).

Intertidal: Surveys followed the methodology of Wallenstein & Neto (2006) and Wallenstein et al. (submitted). At each survey site three transects were laid down; the first was placed at a right angle to the shoreline at the most central zone of the shore extension to be surveyed; the following two, parallel to the first, were located randomly at a maximum distance of 9 m. Shore height was divided into three equidistant levels for collecting quantitative data, starting at the uppermost algae recorded on each transect and running down to low water level: level 1 (L1) at the lowest point of the intertidal immediately above low water level; level 3 (L3) where the first algae were recorded; and level 2 (L2) at half distance between L1 and L3. Algae, barnacles and limpets were quantified within three replicate quadrats at each shore level. The first replicate quadrat was placed on the transect line, and the subsequent ones placed at a random distance and direction from the first one.

In situ quantification and photographs

Subtidal replicate quadrats [minimum sampling area of 0.50 m × 0.50 m defined by Neto (1997)] and intertidal replicate quadrats [minimum sampling area of 0.25 m × 0.25 m defined by Neto (1997)] were quantified in situ using the point-to-point method (Hawkins & Jones, 1992) with 36 intersections (S data set) and subsequently photographed. Quantification consisted in recording the frequency of occurrence of each organism inside the quadrat (the number of point intersections coinciding with each organism; maximum of 36). Photographs were taken with a SONY V3 camera inside a watertight casing attached to a stainless steel structure (see Fig. 2). Intertidal images covered the complete 0.25 m × 0.25 m quadrat area, while subtidal 0.50 m × 0.50 m quadrats were covered by a set of four 0.25 m × 0.25 m images taken clockwise from the upper left corner (Fig. 3a).

Fig. 2 Waterproof casing attached to stainless steel structure.

Laboratory work

Image treatment and quantification of organisms

All images were adjusted for brightness and contrast with Adobe Photoshop 5. Subtidal 0.50 m × 0.50 m images resulted from the composition of the respective sets of four 0.25 m × 0.25 m partial images (see Fig. 3b). A 36-intersection grid was overlaid over the quantification area of each image (see Fig. 3c) and organisms were quantified at the computer using the final composition (see Fig. 3d), following the in situ procedures, by the in situ reader [SR, a skilled operative used as a control when comparing in situ quantifications (S) with computer-based quantifications], an inexperienced phycologist (IP) and an experienced phycologist (EP).

Fig. 3 Image collection and treatment process: (a) subtidal photographing sequence; (b) subtidal quadrat composition; (c) superimposition of quantifying grid over composed image; (d) final subtidal image.

Data treatment and analysis

In situ and image-based quantification frequency matrices were converted into percentage cover matrices by dividing the frequency of occurrence of each species/ecological category inside each quadrat by the maximum possible occurrence per quadrat (36). For each quadrat read in situ and at the computer by all readers, percentage differences between the in situ reading (S) and each computer-based reading (SR, IP and EP), and between the different computer-based readings, were calculated using the formula

Σ_i |x_{i,Ra} − x_{i,Rb}| / 72,  a ≠ b,

in which i indexes species/ecological categories; x_{i,R} is the number of intersections in a quadrat (≤36) attributed to category i by reader R; Ra and Rb are the two readers being compared (S, in situ reading; SR, computer reading by the in situ reader; EP, computer reading by the experienced phycologist; IP, computer reading by the inexperienced phycologist); and 72 is the maximum possible number of differing intersections between two readings of the same quadrat (36 × 2).

To test the hypothesis that there are no significant differences between image and in situ readings, data were analysed using analysis of variance (ANOVA). For subtidal data, two fixed and orthogonal factors were considered: (1) reader (3 levels: in situ reader, inexperienced phycologist and experienced phycologist) and (2) depth (4 levels: 29 m, 21 m, 13 m, 5 m). Three fixed and orthogonal factors were considered for intertidal data: (1) reader (3 levels: in situ reader, inexperienced phycologist and experienced phycologist); (2) substratum (3 levels: bedrock, boulders and cobbles); and (3) shore height (3 levels: L1, L2 and L3).
Biotope definition was achieved with PRIMER software (Clarke & Warwick, 2001) following the guidelines defined by Wallenstein et al. (in press) and were applied to the four quantification data sets (S, SR, IP and EP) to compare results and assess image quantification applicability for biotope definition. Results Subtidal Average percentage deviation between subtidal image and in situ readings ranged between 15% and 20% with high standard deviation values associated (Table 1). The highest-average deviation values were associated to both extreme depth levels (29 m and 5 m), while the lowest ones were associated to the 13 m depth level. When quantifying communities based on digital images data from IP presented higher deviations relative to that from S, while data from SR presented lower average differences globally and at all depth levels. ANOVA results (Table 2) revealed that the interaction between reader and depth is not significant and that differences between depths were significant as were differences between deviations from the three image readings relative to in situ readings. SNK test indicated that SR deviations were similar to those of EP, while both of them differed from IP. Biotopes defined using both S data and SR data turned out to be the same (Table 3a), differing from those resulting from EP (Table 3b) and those of IP (Table 3c). Both EP and IP added new ecological categories to the biotopes defined with S data, but did not omit any of the previously found by the in situ surveyor. Intertidal Average percentage deviation between intertidal image and in situ readings were generally higher than for the subtidal ranging between 18% and 80% with high-standard deviation values associated (Ta- ble 4). The lowest values were associated with cobble locations followed by boulders, and with data from bedrock presenting the highest average deviations. 
Higher average deviations were shown at the inter- mediate shore level (L2) than at lower (L1) and higher (L3) levels. When comparing readers IP presented higher deviations relative to S, while SR presented overall lower average differences at all shore levels and for all substrata. ANOVA revealed that all factors are significant as were all interactions between them (Table 5). The interaction between reader, shore height and substra- Table 1 Average percentage deviation (±1sd) between in situ subtidal quantitative data (S) and image derived data by the in situ reader (SR), by an inexperienced phycologist (IP), and an experienced phycologist (EP) at four subtidal depth levels SR IP EP 29 m 15.8 (±14.5) 21.9 (±19.2) 20.4 (±18.0) 21 m 13.6 (±13.1) 17.1 (±11.0) 16.7 (±14.7) 13 m 7.8 (±5.8) 14.2 (±9.5) 15.5 (±16.2) 5 m 16.2 (±13.2) 26.4 (±19.7) 18.3 (±19.1) Global 13.3 (±11.6) 19.9 (±14.8) 17.7 (±17.0) Table 2 Two factor ANOVA of deviations in subtidal image derived quantitative data versus in situ quantitative data Source Degrees of freedom Mean squares F ratio P F ratio versus Reader 2 624.3827 5.11 0.0066 Residual Depth 3 533.8765 4.37 0.0049 Residual Reader · depth 6 71.9877 0.59 0.7390 Residual Residual 312 122.1918 Total 323 SNK tests of reader SR = EP = IP No transformation of data: Cochran’s test = 0.1426 (not significant) Hydrobiologia (2008) 596:143–152 147 123 tum is the one that reflects all factors, as such SNK tests were chosen to focus on this interaction to check significant differences between readers at all combi- nations of substrate and shore height. Although in most cases SR was similar to EP, and both of them differed from IP, it was not possible to define a generalized pattern as for subtidal observations, (Table 5). Patterns observed were restricted to shore levels L3 and L5 at boulder and cobble locations: bedrock was the substrate that caused higher vari- ability in similarity/dissimilarity between readers. 
Intertidal biotopes defined using S data (Table 6a) and SR data (Table 6b) differed with an additional category for shore height L2 (Calcareous turf). Biotopes defined from EP data (Table 6c) omitted two categories compared with S data biotopes (calcareous turf and Laurencia type from shore height L2). IP data (Table 6d) lacked three categories (Green algae from L3, Laurencia type from shore height L2 and Cystoseira spp. from shore height L1) while recognising another one (Pterocladiella capill- acea on shore heights L2 and L3). When compared with S data biotopes, both EP and IP data lost ecological categories, while SR data added one. Discussion Subtidal Applicability of methods Differences between all image readers (SR, EP and IP) and in situ readings (S) ranged from values that are acceptable (average � standard deviation & 5%) to values that reached an unacceptable error (aver- age + standard deviation & 50%). From these values it would be difficult to decide whether or not to use Table 3 Subtidal biotopes obtained from in situ data and image derived data Shore level Species/ecological categories a) in situ (S) data biotopes = image derived (SR) data biotopes 5 m Dictyota spp. Zonaria tournefortii Asparagopsis spp. Stypocaulon type 13 m Dictyota spp. Zonaria tournefortii Asparagopsis spp. 21 m Dictyota spp. Zonaria tournefortii Asparagopsis spp. 29 m Dictyota spp. Zonaria tournefortii b) Image derived (EP) data biotopes 5 m Dictyota spp. Zonaria tournefortii Asparagopsis spp. Stypocaulon type Non calcareous turf 13 m Dictyota spp. Zonaria tournefortii Asparagopsis spp. 21 m Dictyota spp. Zonaria tournefortii Asparagopsis spp. 29 m Dictyota spp. Zonaria tournefortii Acrosorium venulosum Calcareous crust c) Image derived (IP) data biotopes 5 m Dictyota spp. Zonaria tournefortii Asparagopsis spp. Stypocaulon type Non calcareous turf Calcareous turf Green algae 13 m Dictyota spp. Zonaria tournefortii Asparagopsis spp. Non calcareous turf 21 m Dictyota spp. 
Zonaria tournefortii Asparagopsis spp. Non calcareous turf 29 m Dictyota spp. Zonaria tournefortii Non calcareous turf Calcareous turf 148 Hydrobiologia (2008) 596:143–152 123 direct field observations or image observations for quantifying benthic algae communities. The fact that the in situ reader image readings (SR) showed less deviation than all other readers compared to in situ quantitative data (S) could be an argument to use a skilled field surveyor. The lower deviation may reflect a previous knowledge of subtidal communities by the SR, and could have influenced the results. Since average deviations of the SR are close to those of the EP, confirmed by the ANOVA SNK tests, this suggests that having experienced phycologist computer image read- ings can replace the in situ reader. Both average differences and SNK tests indicated that IP data differs significantly from the other two, thus demonstrating that a IP would not be a satisfactory replacement for a SR or EP in order to gather reliable quantitative data from images. Biotopes defined by both the EP and the IP differed from those defined by SR and S. This reflects the danger of dismissing the SR, even if not differing significantly from the experienced phycologist EP. As a direct consequence of this the comparison of subtidal biotopes obtained using the methods of Wallenstein et al. (in press) in field studies on São Miguel, Santa Maria and Graciosa Islands, is only possible if the in situ surveyor is kept throughout the whole survey to assure that the results are compa- rable. If it is decided to read digital images instead of a skilled surveyor in situ, this should be implemented from the beginning of the survey and kept throughout the whole study to assure compa- rability of results. 
However, the exclusion of a skilled phycologist from an image based benthic community characterization survey is not recommended; Ducrotoy & Simpson (2001) likewise state the need for an expert in the data processing phase.

Table 4 Average percentage deviation (±1 sd) between in situ intertidal data (S) and image derived data by the in situ reader (SR), an inexperienced phycologist (IP) and an experienced phycologist (EP), at three shore height levels (L1, L2 and L3) for three substrate categories (bedrock, boulders and cobbles)

                  SR             IP             EP
Bedrock   L1      29.3 (±21.9)   48.5 (±27.6)   38.1 (±32.0)
          L2      47.4 (±31.7)   51.8 (±21.8)   70.1 (±29.8)
          L3      32.8 (±21.9)   69.8 (±23.9)   39.8 (±34.1)
          Global  36.5 (±25.2)   56.7 (±24.4)   49.3 (±31.9)
Boulders  L1      26.9 (±17.7)   41.8 (±24.8)   43.7 (±27.2)
          L2      30.8 (±18.6)   65.1 (±19.6)   41.6 (±21.8)
          L3      20.3 (±16.5)   81.6 (±16.9)   21.8 (±23.8)
          Global  26.0 (±17.6)   62.8 (±20.4)   35.7 (±24.3)
Cobbles   L1      21.3 (±17.8)   31.6 (±17.8)   21.8 (±23.0)
          L2      20.3 (±10.7)   42.3 (±24.3)   22.1 (±17.8)
          L3      18.0 (±23.8)   75.5 (±17.4)   13.9 (±19.7)
          Global  19.9 (±17.4)   49.8 (±19.8)   19.3 (±20.2)

Table 5 Three factor ANOVA of deviations in intertidal image quantification versus in situ quantification. No transformation of data; Cochran's test = 0.0812 (not significant)

Source                              df    Mean squares   F ratio   P        F ratio versus
Reader                              2     28,621.4993    100.52    0.0000   Residual
Substrate                           2     10,400.1043    36.53     0.0000   Residual
Shore height                        2     3410.2977      11.98     0.0000   Residual
Reader × substrate                  4     1935.2853      6.80      0.0000   Residual
Reader × shore height               4     7372.9540      25.89     0.0000   Residual
Substrate × shore height            4     984.7503       3.46      0.0082   Residual
Reader × substrate × shore height   8     817.9160       2.87      0.0038   Residual
Residual                            702   284.7389
Total                               728

SNK tests of reader × substrate × shore height:
           L1                          L2             L3
Bedrock    SR = EP; IP = EP; SR = IP   SR = IP = EP   SR = EP = IP
Boulders   SR = IP = EP                SR = EP = IP   SR = EP = IP
Cobbles    SR = IP = EP                SR = EP = IP   SR = EP = IP
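Table 4 summarizes, for each reader, the average percentage deviation (±1 sd) from the in situ data. A minimal sketch of how such a statistic can be computed; the per-quadrat metric (absolute difference in percentage cover) and the cover values are assumptions for illustration, not taken from the paper:

```python
# Sketch of the deviation statistic summarized in Table 4: per reader,
# the average (±1 sd) percentage deviation of image-derived cover
# estimates from the paired in situ estimates.
from statistics import mean, stdev

def percentage_deviation(in_situ, image):
    """Absolute per-quadrat deviation between paired cover estimates (%)."""
    return [abs(s, ) if False else abs(s - i) for s, i in zip(in_situ, image)]

# Hypothetical cover values (%) for one substrate / shore-height cell.
s_cover  = [40.0, 55.0, 10.0, 25.0]   # in situ (S)
sr_cover = [35.0, 60.0, 12.0, 20.0]   # image reading by in situ reader (SR)

dev = percentage_deviation(s_cover, sr_cover)
print(f"SR: {mean(dev):.1f} (±{stdev(dev):.1f})")
```

Deviations computed this way for every reader × substrate × shore height cell are the response variable entering the three factor ANOVA of Table 5.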
Technical problems

Image quality may be the cause of the deviations encountered between in situ readings (S) and image based readings (SR, EP and IP), namely in subtidal studies using 0.50 m × 0.50 m quantification quadrats that require fractionated photography in situ (sets of four 0.25 m × 0.25 m images) and subsequent computer based composition for image based quantification of species. This process may introduce a certain amount of error that is difficult to quantify (Fig. 3d), as also noted by Singh et al. (1998), who defend its use when large areas of interest are needed within a single picture frame. Additionally, poor image quality will result from overexposure (Annex 1a) and from increased water turbidity, which is particularly noticeable at shallow depths where there is much water movement. Possibly indicating a direct causal relation, it is at shallow depth levels that most differences occurred between in situ readings (S) and image based readings (SR, EP and IP). Such noise can easily be removed if taken into consideration when photographing in further studies that intend to use images for benthic community characterization. Furthermore, at deeper subtidal levels there is a 'canopy effect' whereby frondose algae camouflage the lower strata of bare rock, encrusting algae and the turf forming algae used for attachment. In situ readings (S) are subject to this effect to a greater extent than image based readings (SR, EP and IP) because the quadrat and its nylon mesh flatten the canopy of vegetation against the substrata beneath. Since photographs are taken without the quantification quadrat, frondose algae are recorded in their natural position, revealing a higher proportion of the lower attachment strata. Consequently, non calcareous turf, Acrosorium venulosum and calcareous crust, which are typical understrata components, appear in the images and reflect the greater accuracy of digital image data for biotope definition.
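The fractionated photography described above (four 0.25 m × 0.25 m frames later composed into a single 0.50 m × 0.50 m quadrat image) is, at its simplest, a 2 × 2 tiling of quadrant images. A minimal sketch in plain Python; real photomosaicking (cf. Singh et al., 1998) must also register overlapping, imperfectly aligned frames, which is exactly the error source the text mentions:

```python
# Compose one 0.50 m x 0.50 m quadrat image from four 0.25 m x 0.25 m
# quadrant photographs. Images are represented as lists of pixel rows;
# this sketch assumes the quadrants are already perfectly aligned.

def compose_quadrat(top_left, top_right, bottom_left, bottom_right):
    """Tile four equally sized quadrant images into a 2x2 mosaic."""
    top = [row_l + row_r for row_l, row_r in zip(top_left, top_right)]
    bottom = [row_l + row_r for row_l, row_r in zip(bottom_left, bottom_right)]
    return top + bottom

# Hypothetical 2x2-pixel grayscale quadrants standing in for photographs.
tl = [[1, 1], [1, 1]]; tr = [[2, 2], [2, 2]]
bl = [[3, 3], [3, 3]]; br = [[4, 4], [4, 4]]

mosaic = compose_quadrat(tl, tr, bl, br)
print(len(mosaic), "x", len(mosaic[0]), "pixels")
```

Any misregistration between the four frames propagates directly into the cover estimates read from the composite, which is why this composition error is hard to quantify after the fact.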
Intertidal

Applicability of methods

Average differences (as much as 90%) between computer based image data and in situ quantitative data at intertidal levels are more variable than at subtidal levels, increasing the level of uncertainty and thus reducing the applicability of digital images for such studies. Although also high, lower dissimilarities occur in data for the less complex communities that characterize unstable rocky substrata (cobbles). Dissimilarity between image and in situ data increases with community complexity and substratum stability (e.g. boulders and bedrock); regarding shore height, mid-shore levels show the greatest differences between image and in situ data.

Table 6 Intertidal biotopes obtained with in situ readings and image derived readings

a) S — in situ quantification biotopes
  L3: Green algae; Barnacles; First stratum; Non calcareous turf
  L2: Calcareous turf; Laurencia type; First stratum; Non calcareous turf
  L1: Calcareous turf; Cystoseira spp.; Erect calcareous; Non calcareous turf

b) SR — in situ reader image derived quantification biotopes
  L3: Green algae; Barnacles; First stratum; Non calcareous turf
  L2: Calcareous turf; Laurencia type; First stratum; Non calcareous turf; Erect calcareous
  L1: Calcareous turf; Cystoseira spp.; Non calcareous turf; Erect calcareous

c) EP — experienced phycologist image derived quantification biotopes
  L3: Green algae; Barnacles; First stratum; Non calcareous turf
  L2: First stratum; Non calcareous turf
  L1: Calcareous turf; Cystoseira spp.; Erect calcareous; Non calcareous turf

d) IP — inexperienced phycologist image derived quantification biotopes
  L3: Barnacles; First stratum; Non calcareous turf
  L2: Calcareous turf; Pterocladiella capillacea; Non calcareous turf
  L1: Calcareous turf; Pterocladiella capillacea; Erect calcareous; Non calcareous turf
Communities (biotopes) at this shore level are transitional between upper levels, where green algae are dominant but with low abundances, and lower levels, where turfs and some frondose algae co-dominate. As with the subtidal data, SR data present the lowest differences compared to S data, although not for all substrata and all shore levels. The SNK results are not generally applicable, although the majority of cases indicate that EP and SR data are more similar than EP data are to IP data. This might suggest that a skilled field surveyor could be replaced by an image collector, with the quantitative data obtained from the digital images by an experienced phycologist. However, since there is no generalized pattern between SR, EP and IP readings across substrata and shore levels, the use of digital images could lead to unreliable results for some substratum and/or shore height categories.

Technical problems

Intertidal communities are more difficult to characterize using digital images as they are mainly composed of ecological categories that are differentiated only with difficulty (e.g. calcareous and non calcareous turfs). Other aspects of image quality may also affect the derived data, such as: high water retention creating light reflections that make it difficult to diagnose species/ecological categories (see Annex 1b); light incidence [e.g. absent on cloudy days (see Annex 1c) versus a varying angle of incidence on sunny days causing differential shading (see Annex 1d)]; condensation inside the waterproof camera (see Annex 1e); and wave action causing splash and spray on the camera's lens (see Annex 1f). Image based studies are widely applied to benthic habitat characterization and are to some extent successful in characterizing large-scale macroinvertebrate associations that are easily identifiable at some distance (e.g. Barker et al., 1999; Collie et al., 2000; Kostylev et al., 2001; Lund-Hansen et al., 2004; Tkachenko, 2005).
However, few small scale studies exist that have proved successful in characterizing algae based communities, as intended by the present work and as used for monitoring purposes by Ducrotoy & Simpson (2001).

Final remarks

The methodology proposed in the present paper proved to be efficient for subtidal surveys if implemented throughout the whole survey, with the advantage of dispensing with the presence of the experienced phycologist in the field and lowering the cost of short and long term monitoring plans and general ecological surveys, as suggested by Pech et al. (2004). Contrary to the suggestion of Ducrotoy & Simpson (2001), however, this method proved to be inadequate for intertidal surveys, which is likely to be related to the highly variable and patchy nature of Azorean intertidal communities (Neto, 2000). Nevertheless, further surveys of intertidal communities should consider the inclusion of photographic surveys, and an effort should be made to eliminate the image collection problems that are avoidable if their causes are considered at the planning stage.

Acknowledgements The authors wish to thank Doctor José Azevedo for general discussions; Daniel Torrão, Dr. Ruben Couto and Dr. Marlene Terra for helping with field work; and Dr. Ian Tittley, MSc. Gustavo Martins and the anonymous referees for advice and peer review of the paper. This work was supported partially by the project POCTIMGS/54319/2002—Biotope—Classification, Mapping and Modelling of Azores Littoral Biotopes, and by CIRN-Centro de Investigação de Recursos Naturais, both from Fundação para a Ciência e Tecnologia (Portugal). The work performed in the present study complies with the current laws of Portugal.

References

Barker, B. A. J., I. Helmond, N. J. Bax, A. Williams, S. Davenport & V. A. Wadley, 1999. A vessel-towed camera platform for surveying seafloor habitats of the continental shelf. Continental Shelf Research 19: 1161–1170.

Bullimore, B. & K. Hiscock, 2001. Procedural Guideline No.
3–12 Quantitative surveillance of sublittoral rock biotopes and species using photographs. In Davies, J., J. Baxter, M. Bradley, D. Connor, J. Khan, E. Murray, W. Sanderson, C. Turnbull & M. Vincent (eds), Marine Monitoring Handbook. Joint Nature Conservation Committee.

Clarke, K. R. & R. M. Warwick, 2001. Change in marine communities: an approach to statistical analysis and interpretation (2nd edition). PRIMER-E, Plymouth.

Collie, J. S., G. A. Escanero & P. C. Valentine, 2000. Photographic evaluation of the impacts of bottom fishing on benthic epifauna. ICES Journal of Marine Sciences 57: 987–1001.

Ducrotoy, J. P. A. & S. D. Simpson, 2001. Developments in the application of photography to ecological monitoring, with reference to algal beds. Aquatic Conservation: Marine & Freshwater Ecosystems 11: 123–135.

Hawkins, S. J. & H. D. Jones, 1992. Marine field course guide. 1. Rocky shores. Marine Conservation Society. Immel Pub., London.

Ishii, K., H. Takahashi, K. Odal, T. Kojim, H. Soed, K. Takemoto, M. Hiwad & S. Sameshimal, 1998. 3-D analysis of behavioral responses of aquatic life using stereo video imagery. OCEANS '98 Conference Proceedings, 1(28): 267–271.

Kostylev, V. E., B. J. Todd, G. B. J. Fader, R. C. Courtney, G. D. M. Cameron & R. A. Pickrill, 2001. Benthic habitat mapping on the Scotian shelf based on multibeam bathymetry, surficial geology and sea floor photographs. Marine Ecology Progress Series 219: 121–137.

Lobsigerl, U., G. E. Hoar & B. D. Johnson, 1986. Time lapse underwater photography system with event sensor capability. OCEANS 18: 19–23.

Lund-Hansen, L. C., E. Larsen, K. T. Jensen, K. N. Mouritsen, C. Cristiansen, T. J. Andersen & G. Vølund, 2004. A new video and digital camera system for studies of the dynamics of microtopographic features on tidal flats. Marine Geosources and Geotechnology 22: 115–122.

Magorrian, B. H. & M. Service, 1998.
Analysis of underwater visual data to identify the impact of physical disturbance on horse mussel (Modiolus modiolus) beds. Marine Pollution Bulletin 36(5): 354–359.

Neto, A. I., 1997. Studies on algal communities of São Miguel, Azores. PhD Thesis. Universidade dos Açores, Ponta Delgada.

Neto, A. I., 2000. Ecology and dynamics of two intertidal algal communities on the littoral of the island of São Miguel (Azores). Hydrobiologia 432: 135–147.

Norris, J. G., S. Wyllie-Echeverria, T. Mumford, A. Bailey & T. Turner, 1997. Estimating basal area coverage of subtidal seagrass beds using underwater videography. Aquatic Botany 58(3–4): 269–287.

Pech, D., A. R. Condal, E. Bourget & P. Ardisson, 2004. Abundance estimation of rocky shore invertebrates at small spatial scale by high-resolution digital photography and digital image analysis. Journal of Experimental Marine Biology and Ecology 299: 185–199.

Singh, H., J. Howland, D. Yoerger & L. Whitcomb, 1998. Quantitative photomosaicking of underwater imagery. In Proceedings of the OCEANS Conference. IEEE, Nice.

Tkachenko, K. S., 2005. An evaluation of the analysis system of video transects used to sample subtidal epibiota. Journal of Marine Biology and Ecology 318: 1–9.

Turnbull, C. & J. Davies, 2001. Procedural guidelines. In Davies, J., J. Baxter, M. Bradley, D. Connor, J. Khan, E. Murray, W. Sanderson, C. Turnbull & M. Vincent (eds), Marine Monitoring Handbook. Joint Nature Conservation Committee.

Van Rooij, J. M. & J. J. Videler, 1996. A simple field method for stereo-photographic length measurement of free-swimming fish: merits and constraints. Journal of Experimental Marine Biology and Ecology 195: 237–249.

Wallenstein, F. F. M. M. & A. I. Neto, 2006. Intertidal rocky shore biotopes of the Azores: a quantitative approach. Helgoland Marine Research 60: 196–206.

Wallenstein, F. F. M. M., A. I. Neto, N. V. Álvaro & I. Tittley, in press.
Subtidal rocky shore communities of the Azores: developing a biotope survey method. Journal of Coastal Research (accepted 12 April 2006).

Wallenstein, F. F. M. M., A. I. Neto, N. V. Álvaro & C. I. Santos, submitted. Algae based biotopes of the Azores (Portugal): spatial and seasonal variation. Aquatic Ecology.

Winer, B. J., 1971. Statistical Principles in Experimental Design. McGraw-Hill, Tokyo.

[End of: The use of digital photography for the definition of coastal biotopes in Azores. Hydrobiologia (2008) 596:143–152.]
Research Article

A Qualitative Analysis of Dental Photography in Orthodontics: The Patient's Perspective

Muhsin Çifter

Department of Orthodontics, Faculty of Dentistry, Istanbul University, Turkey

Correspondence should be addressed to Muhsin Çifter; mcifter@istanbul.edu.tr

Received 2 March 2018; Accepted 17 July 2018; Published 30 July 2018

Academic Editor: Alberto Baldini

Copyright © 2018 Muhsin Çifter. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Purpose. Dental photography is an essential part of orthodontic treatment. It is used during all stages, and many components can affect the image quality. During the procedure, attendants and the patient must often work together to obtain high-quality images. These aspects likely influence the patient's experience, which is important in today's healthcare services. This study qualitatively investigated the effects of dental photography procedures on the patient experience. Methods. This research used a qualitative approach that included both observational and interview methods. Twenty patients (16-20 years old) underwent dental photography for the first time at the initial stage of orthodontic treatment. Results. The lack of detailed information regarding the procedure and the appearance of the intraoral mirrors and retractors were primary causes of patient stress prior to the procedure. During the procedure, the mirrors and retractors caused pain for most patients. The inefficient designs and lack of compatibility between the items used were the primary reasons for patient complaints. Conclusions.
Patients must be informed in advance and in detail about the procedure and the equipment to be used. Improved designs for the camera flash system and the intraoral equipment are needed to maximize both patient satisfaction and image quality.

1. Introduction

The effects of advances in digital recording technologies are visible in different sectors. An important example in dentistry is digital dental photography (DDP), which has become an essential part of orthodontic treatment [1]. DDP enables clinicians to record key stages of treatment. It also contributes to the orthodontic discipline in aspects including communication with patients, self-checking by specialists, treatment planning, provision of treatment records for clinical research, education, and marketing purposes, and increasing the patient's motivation and cooperation during the process [1–5]. Medicolegal concerns are another aspect, and DDP protects both patients' and dentists' rights in potentially difficult circumstances [6]. In the orthodontic discipline, at least four extraoral and five intraoral photographs are recommended. For proper treatment planning and documentation, the extraoral photographs should show the patient's correct appearance, especially the natural smile; the intraoral photographs should show the complete dentition and occlusion. During the DDP process, various errors may originate from the practitioner, the patient, and the poor design of the equipment used [7]. If the causes of the errors are defined, proper solutions can be implemented. Therefore, three aspects of the procedure should be investigated: the patient, the practitioner, and the equipment. The relevant literature on DDP primarily focuses on the technological aspects of the procedure and their possible influences on picture quality [1, 3, 5, 8]. No prior research has been identified regarding these components and their effects on the patient's experience.
However, patient perception of the procedure has not been addressed to date, despite its great importance in today's healthcare approach. In addition, new instrument designs should be patient-centred to increase comfort and efficiency. In this study, a qualitative approach was adopted that included both observational and interview methods to investigate the effects of DDP procedures on the patient's experience. Based on the results, a number of recommendations were made to increase the patient-centeredness of the conventional DDP procedure.

Hindawi BioMed Research International, Volume 2018, Article ID 5418592, 9 pages. https://doi.org/10.1155/2018/5418592

2. Materials and Methods

This research was approved by the ethics committee of the Faculty of Dentistry of Istanbul University in accordance with the Declaration of Helsinki and was supported by the Scientific Research Projects Unit of Mimar Sinan Fine Arts University. Informed and voluntary written consent was obtained from all the subjects. This study used a qualitative approach to understand patients' actual experiences during a regular dental photography procedure. The methodology used in this research consisted of two qualitative research methods commonly used in the social sciences: observation and interview. The observation method is primarily used for exploratory purposes and allows researchers to directly obtain data about actions, behaviours, and real-life experiences [9, 10]. This method is also useful for the detection of unusual aspects [10], because in certain cases there could be a difference between what people say and what they do or experience [9]. However, the observation method is recommended as a supportive or supplementary method due to its disadvantages, including the observer's influence in the field and the considerable time necessary for the procedure and analysis [9, 10].
In this study, the observation method was used in conjunction with face-to-face, semistructured interviews [9] to collect in-depth insights from the study participants. The interview is a particularly useful method for the collection of information about people's experiences, attitudes, feelings, or reasons for discontent [11]. Therefore, semistructured interviews were used as the primary data collection method in this study and were supported by observation of the patients during their dental photography procedures.

2.1. Selection of the Participants. This study included 20 participants (10 males and 10 females). Patients were excluded if they were outside the age range of 16-20 years, had disabilities, had temporomandibular joint problems, had cleft lip and/or palate anomalies, or had undergone a previous dental photography procedure. Given these criteria, purposive sampling was considered the appropriate sampling method for this research [9]. To ensure anonymity, all participants were given a unique code (beginning with "A" for male and "B" for female participants) during analysis. Their interview responses are presented using these codes.

2.2. Study Setting and Procedures. All observational and interview studies were performed in the Orthodontics Department of Istanbul University; the overall research methodology could be considered a contextual inquiry [12]. To ensure reliable results from all participants, the photography procedure was standardised as previously described [13, 14]. All photographs were taken by the same professional technician, who was in charge of this procedure for the department. A Canon EOS 60D digital single lens reflex (DSLR) camera with a 100-mm Macro Lens and a Canon MR-14EX Macro Ring-Lite (Canon, Tokyo, JP) was used for photography (Figure 1).
The process of dental photography was investigated in three stages: portrait and profile photographs; intraoral frontal and profile photographs; and intraoral buccal and occlusal photographs. A plain, coloured background was used in stage 1; spandex (Hager & Werken, Duisburg, DE) and wire type cheek retractors (Masel, Bristol, PA) were used in stage 2; and both retractors and dental mirrors (Ortho Technology, Florida, USA) were used in stage 3 (Figure 2). A bowl of hot water was used to warm the mirrors to avoid fogging, which is a commonly used method among clinicians. All of these items are likely to influence the patient experience at certain points during the procedure.

2.3. Data Analysis. Data analysis included the analysis of video recordings gathered from observations and data collected from semistructured interviews. Seventeen of 20 participants permitted video recording of their dental photography sessions. All 20 participants agreed to voice recording during the interviews. All data derived from the interviews were transcribed verbatim, and thematic analysis was performed using QSR NVivo 11 Qualitative Data Analysis Software (QSR International, Victoria, AU). Thematic analysis included the coding of raw data and categorization under potential themes to construct thematic networks and interpret new meanings [9, 11]. This method enabled the researchers to acquire an in-depth understanding of patients' feelings about the procedure and the products used. In vivo coding, which refers to the use of words and short phrases taken directly from participants' own expressions, was applied during analysis [15].

3. Results

3.1. Observations. In all, 10 male and 10 female participants were observed. However, 3 participants did not agree to a video recording of their sessions.
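Computationally, the thematic analysis described in the Data Analysis section reduces to tallying coded excerpts: the Results report, per theme, how many patients mentioned it and how many coding references it received. A minimal sketch with invented excerpts; the theme labels follow the Results section, but the data are illustrative only:

```python
# Tally coding references and distinct participants per theme, the two
# figures reported in the Results (e.g. "9 patients, 39 coding refs").
from collections import defaultdict

def tally(coded_excerpts):
    """coded_excerpts: iterable of (participant_id, theme) pairs.
    Returns {theme: (distinct participants, coding references)}."""
    refs = defaultdict(int)
    participants = defaultdict(set)
    for pid, theme in coded_excerpts:
        refs[theme] += 1
        participants[theme].add(pid)
    return {t: (len(participants[t]), refs[t]) for t in refs}

# Hypothetical coded excerpts (participant code, assigned theme).
excerpts = [("A3", "physical discomfort"), ("B8", "physical discomfort"),
            ("A3", "physical discomfort"), ("B2", "ideational discomfort"),
            ("A5", "prejudice about appearance")]

for theme, (n_patients, n_refs) in sorted(tally(excerpts).items()):
    print(f"{theme}: {n_patients} patients, {n_refs} coding references")
```

Software such as NVivo performs the same bookkeeping at scale, on top of the manual step of assigning each excerpt to a theme.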
During the analysis, commonly observed procedural problems that were likely to affect patient experience were recorded regardless of their frequency and were categorized; they are summarized in Table 1. Many procedural difficulties had a direct effect on patient satisfaction. Patient stress, difficulties in equipment positioning and control, nonadjustable standard equipment sizes, lack of communication, and saliva accumulation were frequently observed problems.

3.2. Interviews. The interviews consisted of questions in two primary parts: the patients' thoughts regarding (1) their overall dental photography experience and (2) their reflections on each stage of the procedure.

3.2.1. Overall Experience. Two important findings about the patients' overall experience were derived during the interviews. First, 40% (8 of 20) of patients were pleased that their orthodontic treatment had begun and were positively motivated. They stated that the procedure was for their well-being and aesthetic appearance; therefore, any difficulty was worthwhile.

Figure 1: Primary equipment used during the procedure. (A) Spandex and wire type cheek and lip retractors and occlusal-buccal mirrors; (B) DSLR camera with Ring-Lite.

Figure 2: Stages of digital dental photography. Each image includes the respective stage number.

- My feelings are positive because the procedure was for my well-being. My clinician wanted my teeth to be photographed, so I had to have it done (A8).

Second, 75% (15 of 20) of patients expressed that they received very little or no information regarding the procedure beforehand. Some patients also indicated that they were misinformed by other people who had undergone a similar procedure, and therefore they felt unnecessarily stressed. Three participants also mentioned that the actual procedure was easier than they expected.

- I spoke with several people who had undergone this treatment.
They told me that the procedure would make me sick, so I was initially apprehensive. But it was not as bad as the stories (B5).

- I did not receive any information on what the procedure would be like. I was expecting something similar to an X-ray scan (B4).

A number of patients also reported that they experienced intrinsic problems during the procedure due to feelings of nervousness or discomfort caused by the intraoral equipment. Physical pain was also reported to be an important influence on their overall experience. In addition, three patients reported that the technician's behaviour affected their experience.

- I didn't like that someone directs you to 'turn left' and 'turn right', and then puts weird things in your mouth. I didn't like it (B3).

3.2.2. Experience during Different Stages of the Procedure. The results suggest that, during each stage of the procedure, participants experienced diverse problems that also influenced their overall experience. With the exception of the first stage, it was found that the equipment used had the largest impact. While taking portrait and profile photographs, the patients reported that privacy was their primary concern. For example, two of the patients were asked to remove their head scarves during the procedure to obtain a complete facial view; both patients expressed their discomfort regarding this requirement. Three participants said that smiling on command was challenging. This finding was also evident during the observational study, and the patients indicated that nervousness hindered their ability to perform this simple emotional expression.

Table 1: Common problems that were likely to affect the patient's experience and the procedural efficiency.

- Several patients showed signs of stress while waiting for the photography procedure to start.
- Several patients seemed uneasy when they observed the equipment to be used, particularly the dental mirrors. Some patients verbally expressed their impressions accordingly.
- The patients were asked to smile in order to capture photographs of their natural, full-face, smiling expressions. However, many patients failed to provide this simple facial expression due to stress, which resulted in additional shootings and extended the procedure.
- The size of the standard equipment did not fit some patients comfortably.
- Due to the amount of equipment that required simultaneous use, patients were asked to help with positioning. In certain cases, patients could not understand what they were asked to do, which extended the procedure.
- Due to equipment positioning and communication problems, in some cases the technician was required to physically touch and manipulate the patient's head.
- Saliva accumulation was a frequent problem, and was observed in most patients.
- During intraoral photography, the clinician used hot water to warm the metal mirrors to prevent fogging due to patient breathing. He determined the safety of the mirror's temperature using his gloved hand.
- The occlusal mirror lodged between the retractors during intraoral occlusal photography of some patients. If that occurred, the patient's head position changed as the technician withdrew the mirror.
- It was observed that patients' intraoral soft tissues hurt during certain stages of the procedure; pain was identified by the patient's facial expression. One patient also verbally expressed this problem during his procedure.
- During the intraoral and extraoral photograph stages, it was observed that the flash bursts from a very close distance hurt some patients' eyes.
- The clinician required patients to adopt uncomfortable postures due to the range of equipment he was required to concurrently position and control. He was also required to hold the heavy camera with a ring flash attached with only one hand.

- When he (the technician) took my profile photograph, I felt like a criminal (B1).

- I don't like people taking my photograph, and I have avoided this for some time. For this reason, I thought it was a bit weird (A7).

During the extraoral close-up photography stage, the retractors were reported to be the primary source of discomfort. In all, 70% (14 of 20) of patients reported different problems (39 coding references), which could be divided into the following three categories.

Physical Discomfort due to Use of the Retractors on Themselves. Nine patients reported physical discomfort due to the retractors, including pain, irritation, and strain on the mouth tissue. This was the most frequently expressed problem category during intraoral frontal close-up photography.

- I think the retractor could be smaller. It is very big and strains your mouth. Maybe a smaller retractor would work better and would be more comfortable; with this one, you need to open and stretch your mouth too much, and it forces your jaw (A3).
- The retractors hurt my mouth a bit, but everything was alright (𝐵8).

Ideational Discomfort about Using the Retractors on Themselves. Six patients reported discomfort associated with the negative feeling they had while using the retractor. These feelings were expressed as nervousness, weirdness, and anxiety.

- I felt embarrassed while the retractor was in my mouth because I think my teeth look bad. I don't know, maybe it just looks weird, but it was not a good feeling (𝐵2).

Prejudice about the Retractors' Appearance. Four patients reported that they felt stressed when they had first seen the retractor, as they could not understand its function or thought it might hurt their mouth. One patient argued that the appearance of retractors could scare children and suggested designing them in a colourful way.

- If they use the same thing on children, it may scare them. Something colourful may be better (𝐴5).
- I thought the mirror would cut my cheeks, but it was not that disturbing (𝐵1).

Both retractors and dental mirrors were used during the stage of intraoral close-up buccal and occlusal photography. The results of the interviews suggested that dental mirrors were the primary source of discomfort during this stage. In total, 17 of 20 patients reported problems (30 coding references), which were categorized under the same themes as the retractors. However, their weights were different.

Prejudice about the Mirrors' Appearance. This was the most frequently mentioned category during intraoral photography; 7 patients reported 15 coding references. The primary feelings reported were anxiety, surprise, and fear when they first saw the dental mirrors. The patients reported that they could not understand the purpose of the equipment and believed they might hurt themselves during the procedure.

- I did not have concerns at the beginning. I was thinking it would be simple: I smile and the technician takes my photograph.
Then, when I saw the metal things (mirrors), I felt a bit stressed because I wondered if they would hurt. But the experience was not that bad (𝐴9).

Physical Discomfort regarding the Use of the Mirrors on Themselves. Six patients reported physical discomfort regarding the mirrors, such as pain and irritation of the intraoral soft tissues. However, more distinctly than with the retractors, patients also expressed that they felt nausea due to the mirrors inside their mouths, complained about the temperature of the mirrors, and had difficulty breathing through the nose while the mirrors were inside their mouths.

- I did not have any problem with the big mirror [occlusal mirror], but the one used on the sides put pressure on my gums (𝐵10).
- One of the mirrors was very hot. I would have appreciated if it had been cooler before putting it in my mouth (𝐴9).
- When there is something in my mouth, I feel sick because I cannot swallow, and I had that feeling (𝐴10).

Ideational Discomfort about Using the Mirrors on Themselves. Two patients reported discomfort associated with negative feelings while using the mirrors on themselves. These feelings were expressed as weirdness or embarrassment about their appearance while the mirrors were used inside their mouths.

- I felt weird about the mirrors. They did not hurt, but this was the first time I had experienced something like this (𝐵4).

In addition, seven patients complained about the pain caused by the buccal mirrors during the procedure. Pressure, pain, and strain on the gingival tissues and lips were reported by patients during the interviews. Three patients also reported that the occlusal mirror did not fit comfortably inside their mouth due to its size.

Finally, 45% (9 of 20) of the participants complained about the ring flash burst in their eyes. During the intraoral photography stage, ring flashes attached to the camera are widely used to capture clear macro images without shades.
However, the patients expressed their feelings about the flash burst with expressions such as "direct flash burst in the eyes", "very close distance for a flash burst", "lacrimation due to the light", "tiredness of eyes due to the light", "discomfort due to too much light", and "reflexive blink due to the light".

4. Discussion

Proper documentation is essential for orthodontic treatments, and DDP is a fundamental and widely preferred component of clinical documentation [1–6]. Until the advent of digital photography, the images taken by conventional cameras could only be viewed after photographic processing. Digital photography enables the captured images to be displayed instantly, which gives the clinician the opportunity to capture the ideal images without patient call-backs. Other essential advantages of digital photography are the ease of transmitting images, the opportunity for detailed editing, and the ease of archiving. In orthodontic and dentofacial treatments, the patient's intraoral and extraoral appearance may change dramatically, so high-quality photographs are mandatory for documenting these changes. For treatment planning and medico-legal purposes, the pretreatment malocclusion and the condition of the soft and hard tissues should also be visually documented in detail. Clinical photography is composed of three main aspects: the photographer, the patient, and the equipment. Existing research in this field has mostly focused on the methods used, obtaining high-quality images, and the technical difficulties encountered. However, the patient's perspective is a key aspect of improving clinical photography outcomes and of a patient-centred medical approach. In the present study, it was determined that the mirrors used for occlusal and buccal intraoral photography and the retractors were the most complained-about items used in the procedure and have an overall impact on the patient's experience.
In this study, 40% of the patients were motivated before the procedure, as this was the first step of their treatment and they were willing to start. However, 75% of the patients reported that they were misinformed about the procedure, which, according to the interviews, caused stress. In addition, signs of stress were observed before the procedure due to the appearance of the retractors and the mirrors. Another issue related to the stress and anxiety of the patients was observed during the smiling photographs. Full-face, naturally smiling photographs are essential for treatment planning, especially for patients who require improvement in gingival and incisor appearance for an aesthetic smile [8, 16]. Many patients failed to achieve a simple natural smile due to the stress induced by the procedure, privacy issues, and malocclusion-related embarrassment about their appearance. Based on these data, it can be inferred that patient stress can be considerably reduced if adequate information regarding the procedure and the instruments used is provided beforehand. In addition, visually appealing designs for the retractors and mirrors may help reduce patient anxiety.

Observations showed that the intraoral photography stage was the most challenging for both the technician and the patients. Most patient complaints were related to the retractors and the intraoral mirrors. Large and durable retractors are needed to photograph the desired intraoral field of view. However, the patients reported that the size and structure of the retractors primarily caused pain and a feeling of excessive strain on the soft tissues. In addition, the size and appearance of the equipment gave the patients the impression that it would not fit in their mouths easily, which was also a cause of anxiety. These problems were more prominent for the mirrors, which were the most complained-about items in the procedure.
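The proportions quoted in this study (70% for the retractors, 17 of 20 for the mirrors, 45% for the flash, 75% misinformed) are simple count ratios over the 20 interviewed patients. The tally can be sketched as follows; the counts come from the text, while the dictionary layout and the inferred 15/20 for "misinformed" are assumptions for illustration:

```python
# Complaint counts over the 20 interviewed patients, as reported in the text.
# The 15/20 for "misinformed" is inferred from the stated 75% (an assumption).
complaints = {
    "retractors (extraoral close-up stage)": 14,
    "dental mirrors (intraoral stage)": 17,
    "ring flash burst in the eyes": 9,
    "misinformed about the procedure": 15,
}
TOTAL_PATIENTS = 20

for item, affected in complaints.items():
    pct = 100 * affected / TOTAL_PATIENTS
    print(f"{item}: {affected}/{TOTAL_PATIENTS} ({pct:.0f}%)")
```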
Acquisition of the buccal and occlusal images of the distal aspect of the second molars was a challenge for both the technician and the patients. Due to the inadequate design of the buccal mirrors, soft tissue irritation and pain were the primary complaints. Feelings of nausea and difficulty breathing through the nose to minimise fog accumulation were other concerns raised by the patients regarding the occlusal mirror. The heat of the mirrors, intended to prevent fog accumulation, was another complaint of some patients.

During the intraoral buccal and occlusal photography stage, three different items are used simultaneously (camera, retractors, and mirrors). Several problems were encountered due to the incompatibility of these items; they affected the success of the procedure and the comfort of the patients. Furthermore, in this study, the technician was unassisted during the procedure. The technician needed to manage the orientation of the patient's head, the mirrors, and the retractors with one hand, while trying to capture the appropriate image using a heavy camera with the other. Patient assistance was inevitable because the occlusal mirror became stuck between the retractors, and the orientation of the patient's head changed as a result. In addition, the technician was required to adopt challenging postures, as he was attempting to acquire the best image while also trying to hold the mirrors (buccal and occlusal) in position without disturbing the patient. These problems occurred because the items in use were not designed to be used together; they were not adjustable and so were incompatible with each other. Problems of incompatibility and the need for assistance from patients can be eliminated by better design solutions. The items used should be compatible with each other and should reduce the need for patient assistance without increasing the technician's workload. Saliva accumulation was another important concern of intraoral photography.
Lack of aspiration of saliva without interrupting the field of view was a challenge for the technician and a disturbance for the patients. From these results, it can be clearly stated that practical and patient-friendly designs are necessary to improve both the technical procedure and patient comfort.

An issue that has not been previously mentioned in this field is the influence of the flash burst on the patient. In this study, 45% of the patients complained about the ring flash burst in their eyes from a close distance. The primary concerns were lacrimation, eye tiredness, and reflexive blinking due to the severe light. As the working distance (the distance between the subject and the camera) was as low as 20-30 cm for intraoral images, the flash burst inevitably affected the patients' eyes unfavourably. These complaints should be addressed by new designs, or special eyeglasses may be worn to prevent the undesirable effects of the bright light.

Figure 3 summarizes the results of this research in two dimensions, as the equipment/procedure-related and patient/procedure-centred aspects of DDP. It also highlights the problems observed and suggests opportunities to provide better patient experiences.

Figure 3: Primary problems and opportunities related to digital dental photography.
- Problems (patient-centred and procedure-centred): Most patients showed signs of nervousness during the observational study. Communication problems also affect the patient experience. Products used during different stages of the procedure cause pain, and every product has a stand-alone design that is incompatible with the other equipment; this extends the procedure time due to repetitive photography sessions and increases the need for help from peers or patients. Universal equipment sizes do not comfortably fit all patients. The functional insufficiency and variety of conventional equipment result in positioning problems in the patient's mouth and uncomfortable postures adopted by the technician during the procedure. The DSLR camera with a ring flash attached is heavy and is not appropriate for clinical use.
- Opportunities (equipment-related and procedure-related): Patient comfort is important during the procedure. The designs of the retractors and mirrors must be appealing for patients, and must fit or adapt to as many patients as possible; sharp edges on equipment should be avoided. Different apparatuses must be designed to be compatible and to facilitate positioning without needing to be held in place. Patients must be informed about the procedure and the equipment to be used before the procedure occurs, and patient stress must be reduced before beginning. The need for the technician to receive help from the patient or other people must be eliminated by providing better design solutions for conventional equipment, which would also decrease communication problems between patients and clinicians during the procedure.

The small sample size was a limitation of this research. However, this was a qualitative study, and substantial time was required to collect and analyse the observational and interview data. In addition, this study enabled the collection of important patient insights regarding the actual procedure and the identification of key factors that are likely to affect the patient's experience and overall motivation for treatment. Future research should include the development of a specific questionnaire to assess the impact of these factors on the patient's experience, which will enable data collection from a larger sample group and the acquisition of generalizable results.

5. Conclusions

Patient discomfort and the technician's need for additional assistance during this simple procedure are due to design- and usability-related shortcomings of the equipment used during the procedure.
These items can also cause pain and anxiety in patients due to inadequate design.

(1) Patient stress prior to DDP can be reduced by providing detailed information about the procedure and the items to be used.
(2) The retractors and the mirrors were the most painful and complained-about items. New designs are needed to improve patient comfort.
(3) The items used during the procedure should be more compatible with each other to reduce technician effort and maximize patient comfort.
(4) To protect the patient's eyes from the flash burst, special glasses can be used. New flash systems designed specifically for dental photography are needed to reduce patient complaints and increase patient comfort.

Data Availability

All the data regarding the results of this research were generated during the study.

Conflicts of Interest

The author declares that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

The author would like to thank Abdusselam Selami Cifter and Yener Altiparmakogullari of Mimar Sinan Fine Arts University, Department of Industrial Design, and Cigdem Yilmaz for their contributions and guiding comments. This research was supported by the Scientific Research Projects Unit of Mimar Sinan Fine Arts University [2015-24].

References

[1] J. Sandler and A. Murray, "Digital photography in orthodontics," Journal of Orthodontics, vol. 28, no. 3, pp. 197–201, 2001.
[2] A. S. Ogodescu, C. Sinescu, E. A. Ogodescu, M. Negrutiu, and E. Bratu, "Digital tools in the interdisciplinary orthodontic treatment of adult patients," International Journal of Biology and Biomedical Engineering, vol. 4, no. 4, pp. 97–105, 2010.
[3] R. Fahim and R. Thakur, "Digital dental photography: the guidelines for a practical approach," TMU Journal of Dentistry, vol. 1, no. 3, pp. 106–112, 2014.
[4] G. A. Morse, M. S. Haque, M. R. Sharland, and F. J.
Burke, "The use of clinical photography by UK general dental practitioners," British Dental Journal, vol. 208, no. 1, pp. 1–6, 2010.
[5] H. Yilmaz, F. Bilgic, and O. Akinci Sozer, "Recent photography trends in orthodontics," Turkish Journal of Orthodontics, vol. 28, no. 4, pp. 113–121, 2016.
[6] P. Wander, "Dental photography in record keeping and litigation," British Dental Journal, vol. 216, no. 4, pp. 207–208, 2014.
[7] J. Sandler, J. Dwyer, V. Kokich et al., "Quality of clinical photographs taken by orthodontists, professional photographers, and orthodontic auxiliaries," American Journal of Orthodontics and Dentofacial Orthopedics, vol. 135, no. 5, pp. 657–662, 2009.
[8] S. Kumar Shetty B, M. Kumar Y, and C. Sreekumar, "Digital photography in orthodontics," International Journal of Dental Research, vol. 5, no. 2, pp. 135–138, 2017.
[9] C. Robson, Real World Research: A Resource for Users of Social Research Methods in Applied Settings, Wiley, Chichester, West Sussex, 2011.
[10] J. W. Creswell, Research Design: Qualitative, Quantitative, and Mixed Methods Approaches, Sage Publications, Thousand Oaks, California, 2009.
[11] A. Yıldırım and H. Şimşek, Qualitative Research Methods in Social Sciences, Seckin Publications, Ankara, 2016.
[12] P. Jordan, An Introduction to Usability, Taylor and Francis, London, 2002.
[13] W. Bengel, Mastering Digital Dental Photography, Quintessence Publishing Co, Surrey, United Kingdom, 2006.
[14] V. Desai, D. Bumb, N. Marwah, and K. Toumba, "Digital dental photography: a contemporary revolution," International Journal of Clinical Pediatric Dentistry, pp. 193–196, 2013.
[15] M. B. Miles, A. M. Huberman, and J. Saldana, Qualitative Data Analysis: A Methods Sourcebook, Sage Publications, Thousand Oaks, California, 2014.
[16] E. A. McLaren and D. A. Terry, "Photography in dentistry," Journal of the California Dental Association, vol. 29, no. 10, pp. 735–742, 2001.
work_qexyrxds2vcyxlysm2gbefeqre ---- Diagnostic Pathology (BioMed Central), Open Access Commentary

The modern histopathologist: in the changing face of time

Biman Saikia*1, Kirti Gupta2 and Uma N Saikia2

Address: 1Department of Immunopathology, Postgraduate Institute of Medical Education and Research, Chandigarh, India and 2Department of Histopathology, Postgraduate Institute of Medical Education and Research, Chandigarh, India

Email: Biman Saikia* - bimansaikia@gmail.com; Kirti Gupta - kirtigupta10@yahoo.co.in; Uma N Saikia - umasaikia22@yahoo.co.in

* Corresponding author

Abstract

The molecular-age histopathologist of today practices pathology in a totally different scenario than the preceding generations did. Histopathologists stand, as of now, at the crossroads of a traditional 'visible' morphological science and an 'invisible' molecular science. As molecular diagnosis finds more and more applicability in histopathological diagnosis, it is time for policy makers to reframe the process of accreditation and re-accreditation of the modern histopathologist in the context of the rapid changes taking place in this science. Incorporation of such 'molecular' training, vis-à-vis information communication technology skills (viz. telemedicine and telepathology, digital imaging techniques and photography) and a sound knowledge of the economy that the fresh entrant would ultimately become a part of, would go a long way towards producing the modern histopathologist. This review attempts to look at some of these aspects of this rapidly advancing 'art of science.'

Introduction

In an era when we are talking of molecular classification of traditional histology and immunotherapy for cancers, the role of the histomorphologist seems to be exploring areas which one could never have imagined a few decades ago.
Published: 6 June 2008. Received: 29 April 2008; Accepted: 6 June 2008. Diagnostic Pathology 2008, 3:25. doi:10.1186/1746-1596-3-25. This article is available from: http://www.diagnosticpathology.org/content/3/1/25. © 2008 Saikia et al; licensee BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

With the rapidly advancing fields of biotechnology and molecular biology, a modern histopathologist is expected to be well versed not only in the traditional histopathological techniques but also to keep pace with the ever-expanding frontiers of science and technology. With molecular diagnosis threatening to overrule the histopathological diagnosis with every new discovery, it is time for the histopathologist to embrace and incorporate the recent advancements and practice pathology in its present modern context.

With the fast-changing scenario, it is obvious that the accreditation of the histopathologist needs to undergo a drastic change, and re-accreditation of those already in the field is a necessity. The current accreditation process prevalent across most countries is to train a student with varying degrees of rigour and, at the end of the training period, assess the candidate through an examination which gives the licence to practice pathology throughout life. In this scenario, it is entirely up to the pathologist's own efforts and interest to sustain the enthusiasm for keeping abreast of newer technologies and new diagnostic trends. Whereas primary accreditation looks at what the new practitioner can do, re-accreditation looks at what established practitioners actually do. The accreditation and re-accreditation procedures hence need to incorporate not only the traditional histopathological techniques but also the fields of biotechnology, telecommunication, information communication technology, professional photography and a bit of economics. To this list could be added a number of other entities, such as socio-psychological factors, hospital management, quality control and a host of others, which are beyond the scope of this discussion. The subject of accreditation and re-accreditation, however, has been a topic of constant review, in which authors have tried to devise lists of areas where a histopathologist should be able to demonstrate continuing fitness to practice; activities like recording educational activity, testing a pathologist's knowledge and interpretative skills, testing his diligence, and peer review/appraisal have been proposed as ways and tools to assess a practicing pathologist [1].

Discussion

There was a time when pathology was considered to be the mother of medical science. Histopathology, over the years, has witnessed its evolution from a mainly autopsy-based pathology to the current molecular histopathology. Recent advances in various fields of science and technology, their incorporation into histopathological practice, and the acceptance of some philosophical concepts, particularly the functional correlation of morphological studies, have changed the outlook of both histopathology and the histopathologist.
An elaborate review on the topic at the turn of the century by Mapstone and Quirke [2], on the pathologist in the context of the 21st century, reveals that the ideas and concepts envisioned then have, to a large extent, already found widespread acceptance in the pathology community, if not moved further. This is particularly true for the application of information technology, and to a lesser extent for advances in molecular biology. It has been some time now since the human genome was sequenced. The initial enthusiasm of reading the blueprint of human biology, however, seems nowhere in sight, and it is likely to be a number of decades before many of the expected benefits of the genomics revolution are actually realized to the extent initially expected. The human genome project seems to have opened the Pandora's genomic box, resulting in more questions than answers.

The first and most widespread application of molecular techniques in histopathology perhaps came in the form of immunohistochemistry (IHC), though IHC perhaps cannot be called a molecular technique in its true sense. Albert H. Coons and his colleagues were the first to use fluorescent dye-labeled antibodies to identify antigens in tissue sections. Subsequently, enzyme labels such as peroxidase (Nakane and Pierce, 1966), alkaline phosphatase (Mason and Sammons, 1978) and colloidal gold (Faulk and Taylor, 1971) were introduced. Within a short time, the histopathology literature was flooded with immunohistochemistry-based studies, expanding both the list of markers and the scope for confusion. A PubMed search on 31 March 2008 for "immunohistochemistry" yielded a total of 415,067 papers, with 12,426 reviews, reflecting the impact immunohistochemistry has had on the scientific community.
This can be said more confidently when we talk of the contribution of gene array technology, which allows expression measurements of thousands of genes in parallel, providing a powerful tool for pathologists seeking new markers for diagnosis. With such data accumulating rapidly, it won't be long before molecular signatures are assigned to each and every pathological lesion and a molecular diagnosis arrives even before the paraffin sections are ready, so that the histopathologist would already know what to look for in the sections.

Various studies have demonstrated that microarray-based gene expression profiling enables accurate tumor classification and can be a very helpful diagnostic tool for cancers with unknown primaries and histologically undifferentiated tumors [3,4]. Molecular profiling has also given way to histo-clinical classifications [5-7], since such a classification would be expected to corroborate more accurately with the functional behavior of a tumor. Molecular classification systems have been attempted extensively in organ systems like the breast and the kidney, and to a lesser extent in lung, thyroid, endometrial, ovarian, and testicular tumors and in sarcomas. Molecular classification systems in breast carcinoma and renal cell carcinoma have been shown to be of relevance not only in classification and diagnosis but also in assessing response to therapy and as predictors of survival [8,9]. Whereas genomic studies are establishing new molecular classifications, genetic alterations are also being identified and characterized, generating new targets for therapy and new tools to predict disease recurrence and response to therapy. This combined molecular approach is expected to have an impact on individually 'tailored' therapy for cancer patients.
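Classifying a tumour from its expression profile, as described above, can be illustrated with a minimal nearest-centroid sketch: each class is represented by its mean expression profile, and a new sample is assigned to the nearest class mean. The expression matrix, the two tumour "types" and the marker genes below are entirely synthetic; real microarray classifiers additionally involve normalization, feature selection and cross-validation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic expression matrix: rows = tumour samples, columns = genes.
# The two "tumour types" differ in the mean expression of the first 10 genes.
n_genes = 50
type_a = rng.normal(loc=0.0, scale=1.0, size=(10, n_genes))
type_b = rng.normal(loc=0.0, scale=1.0, size=(10, n_genes))
type_b[:, :10] += 3.0  # genes 0-9 are over-expressed in type B

# Nearest-centroid classifier: assign a new profile to the class whose
# mean expression profile is closest in Euclidean distance.
centroids = {"A": type_a.mean(axis=0), "B": type_b.mean(axis=0)}

def classify(profile):
    return min(centroids, key=lambda c: np.linalg.norm(profile - centroids[c]))

new_tumour = rng.normal(size=n_genes)
new_tumour[:10] += 3.0  # this profile carries the type-B signature
print(classify(new_tumour))
```

The same idea underlies several published expression-based classifiers; the design choice here (class means plus Euclidean distance) is simply the most compact version of it.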
However, with the concept of "tumor heterogeneity" being increasingly recognized, it has become imperative to define and analyze pure populations of cells separately within the neoplasm before assigning a molecular signature to a particular neoplasm. The advent of the laser capture microdissection (LCM) technique has brought about this reliable procurement of pure populations of cells from tissue sections under direct microscopic visualization, bridging a very significant technical gap between the histopathologist's microscope and the molecular biologist's work bench. This has now opened the doors to enhancing our understanding of the molecular mechanisms regulating cellular development and function, in both normal and diseased states.

Whereas techniques like immunohistochemistry are relatively simple and can be performed by the technical staff, more sophisticated techniques involving the handling of cell culture for LCM and other molecular methods require more intricate knowledge on the part of the pathologist for meaningful interpretation. It is thus important that a student is given hands-on experience in performing these techniques, in designing such experiments and, more importantly, in interpreting the data during the initial training.

The initial excitement surrounding the development of DNA microarray analysis and proteomics has also raised questions about the role of these techniques in actual clinical practice and patient management [10]. Though it is theoretically possible to build a comprehensive gene expression database for each of the organ systems/tumor types and use it as a clinical diagnostic and prognostic tool, microarray technology remains complex and time consuming and hence, to date, remains largely a research tool.
It is prudent, then, that the histopathologist takes on this molecular approach with all his background knowledge of morphology, rather than leaving tumor diagnosis entirely in the hands of the molecular biologist, in order to have a better and longer-lasting impact. Nature, however, has never been too kind to the scientist. The story does not end with the mere expression or non-expression of a gene and production of the corresponding mRNA. What matters in the end is the functionality of the end product, the 'functional' protein. The story only begins with the production of the mRNA, and the probability of that mRNA ultimately yielding a functional protein depends on the smooth coordination of an even more complicated array of biological processes such as splicing, post-translational modifications, DNA methylation, acetylation and a host of other epigenetic factors. What needs to be assayed is therefore the functional aspect of the gene, and the focus hence shifts from genomics to proteomics and, more recently, to glycomics. The latter, scientists believe, will have an effect as dramatic as genomics has had. A single protein can carry 10 or more different forms of sugars attached to it, giving the protein subtle differences in function that cannot be detected by current proteomic techniques; this leaves open the possibility of modulating the function of the protein by modulating the activity of the enzymes that assemble the sugars on the protein. With the advent of the "glyco-chip" it is now possible that histopathologists too will be hunting for the sugars on these proteins, and may well perform "immuno-glyco-histochemistry" to look at the functional aspect of the proteins they currently detect with immunohistochemistry. The scope is ever expanding and unlimited.
Information Communication Technology (ICT) and the histopathologist

Working in isolated environments where access to peers, education and information is limited is one of the highest risk factors for physicians' loss of medical competence [11]. This is no less true for the histopathologist. It is evident that, to keep pace with advances occurring at such an outstanding rate, dissemination of information becomes a major issue. It is not surprising, therefore, that medical science has found its place in the electronic media, or vice versa. It is also not surprising that most, if not all, prominent journals in the life sciences are available online, though with a price tag, and that the majority of them have options for online submission and peer review. Information Communication Technology has also paved the way for telemedicine, and a science which a few years back was a mere concept is rapidly gaining recognition as a well defined, accepted and effective application, already showing its impact on clinical practice and modern healthcare. While telemedicine is a wider concept, telepathology is the sub-discipline of telemedicine that deals with the capture, transmission and viewing of pathological and histological images via telecommunication channels such as the internet, dedicated satellite or telephone, as opposed to the conventional methods of microscopy. Offering a host of innovative benefits and applications, modern telepathology systems now help to deliver more accurate remote diagnoses and are tending to replace traditional consultancy methods using light microscopy. Telepathology has over the years undergone a drastic evolution, from static telepathology, which is a store-and-forward concept, to dynamic, real-time telepathology using fully motorized robotic systems. A concordance rate as high as 99-100% has been reported between telepathology and light microscopic diagnosis using such technologies [12,13].
What has, however, revolutionized telepathology applications, and to a great extent removed the most fundamental drawback of telepathology, namely the limitation of the image available for diagnosis, is the concept of the "virtual slide." Virtual slides are digitized images in which the entire slide is scanned at very high resolution, acquiring the entire image of the histopathology section at all magnifications available on the microscope. A software-driven motorized stage helps acquire all the fields of view and digitally stitch all the images into one single image, which can be viewed by multiple pathologists. The novel array microscope of the first ultra-rapid virtual slide processor, the DMetrix DX-40 digital slide scanner, is an example [14]. The optics consist of a stack of three 80-element 10 × 8 lenslet arrays. Uniquely shaped lenses in each of the lenslet arrays constitute a single "miniaturized microscope", making up a set of 80 microscopes. Scanning a glass slide with the array microscope produces seamless two-dimensional image data of the entire slide, i.e. a virtual slide. Image acquisition can be as rapid as 58 seconds for a 2.25 cm2 tissue section, and 40 slides can be scanned per hour. The only limitation is the extremely large file size of the images generated, sometimes exceeding 1.5 GB, and hence the limitation of the bandwidth available for transmitting these images. This is not unachievable, however, considering that investing in such a facility with a high bandwidth capacity is a one-time effort, the fruits of which can be reaped for a long time. So, with the advent of virtual microscopy, it may well be that an expert referral pathologist in the near future will interpret virtual images on LCD screens rather than glass slides.
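The bandwidth concern raised above is easy to quantify with back-of-envelope arithmetic. The 1.5 GB file size and the 40 slides/hour scan rate come from the text; the 100 Mbit/s link speed is an assumed figure for illustration only.

```python
# Rough virtual-slide transmission arithmetic, using the figures quoted above.
file_size_gb = 1.5            # whole-slide image size cited in the text
link_mbps = 100               # assumed link speed (megabits per second)

file_megabits = file_size_gb * 1000 * 8      # GB -> megabits (decimal units)
transfer_s = file_megabits / link_mbps       # time to send one slide
print(transfer_s)                            # 120.0 seconds per slide

# The scanner produces 40 slides per hour, i.e. one every 90 seconds,
# so a link must sustain ~1.5 GB per 90 s to keep pace with it.
scan_interval_s = 3600 / 40
required_mbps = file_megabits / scan_interval_s
print(scan_interval_s, round(required_mbps, 1))  # 90.0 s, ~133.3 Mbit/s
```

On these assumptions a 100 Mbit/s link cannot quite keep up with the scanner's output, which supports the text's point that bandwidth, rather than image acquisition, is the practical bottleneck.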
What needs to be emphasized is the incorporation of this concept right at the inception of pathology training, with hands-on experience rather than merely theoretical knowledge, for the simple reason that the factor which has so far hampered the growth and widespread application of telepathology is the mindset of the traditional pathologist, despite scientific evidence of the efficacy of the technique. A technology can improve only when it finds its place in routine practice, and it is only through trial and error that the desired perfection is achieved. The situation, however, is not that grim, and excellent examples of telepathology applications have been set even in a country like India, which is fast shedding its third-world image. With the opening of the School of Telemedicine at the Sanjay Gandhi Post Graduate Institute of Medical Sciences at Lucknow, the first of its kind in the country, telepathology promises a bright future in the times to come.

Histopathologist as a professional photographer

A morphologist's job is to play with images, be they live or captured. Photography has hence been an integral part of pathology practice since its inception. But chemical photography has now largely been replaced by digital photography, for obvious reasons. Digital cameras utilize charge-coupled device (CCD) or complementary metal oxide semiconductor (CMOS) image sensors to measure light energy, and circuitry to convert the measured information into a digital signal. Digital photography has the advantages of lower running cost, instant availability without any processing, easy archiving, and transmission through the electronic media. It allows for adjusting, enhancing and annotating images. Moreover, it allows for post-acquisition image optimization and modification to one's satisfaction with image quality, using software such as Adobe Photoshop. This, however, also carries a disadvantage and the fear of falsification of images.
But, as is true for every technological advance fraught with some side-effects, the advantages of digital photography in histopathology practice far outweigh its disadvantages. Patient care is enhanced by the transmission of digital images to other individuals for consultation and education, and by the inclusion of these images in patient care documents. In research laboratories, digital cameras are widely used to document experimental results and to obtain experimental data. Since pathology is a visual science, the inclusion of quality digital images in lectures, teaching handouts and electronic documents is as essential as delivering the lectures themselves. A few institutions have gone beyond the basic application of digital images, developing large electronic atlases, animated, audio-enhanced learning experiences, multidisciplinary internet conferences, and other innovative applications. So would it not be wise for a postgraduate student to take a short course on photography at the beginning of training as a pathologist?

Histopathologist and the Economy

The Indian healthcare sector, estimated at US$30 billion, has been growing at a frenetic pace in the past few years. Revenues from the healthcare sector account for 5.2 per cent of GDP, and the sector currently employs over 4 million people. Private spending accounts for almost 80 per cent of total healthcare expenditure. The burgeoning healthcare market and recent government initiatives [15], accompanied by socio-economic changes in the population, have made the Indian healthcare industry an attractive investment proposition. The private sector already accounts for about 70% of India's healthcare services market; private healthcare will continue to be the largest component in 2012 and is likely to double to US$35.7 billion [16].
The chances of a newly qualified pathologist ending up in the private sector and contributing to its growth are thus perhaps as high as, if not higher than, the chances of being in the public sector.

The Histopathology Market

The series of technologies developed over the years to address the challenges of molecular diagnostics spans eight major areas: amplification technologies (gene and signal), microarray technologies, blotting technologies, electrophoretic technologies, probe technologies, restriction fragment-length polymorphism (RFLP) analysis, RNA inhibition analysis, and single nucleotide polymorphism (SNP) analysis, with significant overlap among them. These developments have set the stage for excellent growth potential in the marketplace. Histopathology, an approximately 1 billion market worldwide, is growing at 10% per annum [17], despite the fact that molecular diagnostics' contribution to such statistics is only minuscule. Further into the century, histopathology, molecular biology and the IT sector combined will make the modern histopathologist's market enjoy a good share of the healthcare sector. Is the fresh entrant into pathology training aware of this growing aspect of healthcare?
Conclusion

So, are we looking at a histopathologist who is not only a good traditional morphologist but also an IT-savvy scientist, with drops of Leonardo da Vinci and Picasso in the blood, ready to take technology head-on, without the inhibitions the current generation of pathologists face, and who looks at the histopathology market with the much wider perspective of the current economy? The making of this modern histopathologist will require a drastic change in the accreditation procedure and a major initiative on the part of the policy makers and the teachers of pathology. The transition from a field of visual interpretation to a largely invisible molecular science will take some time to gain a firm footing, but it will eventually happen.

Competing interests

The authors declare that they have no competing interests.

Authors' contributions

BS and KG conceptualized, designed and drafted the manuscript. UNS did the critical revision for important intellectual content. All authors read and approved the final manuscript.

References

1. Furness PN: Accreditation and re-accreditation of the world's pathologists. In Recent Advances in Histopathology, Number 19. Edited by Lowe DG, Underwood JCE. Churchill Livingstone, United Kingdom; 2001:131-144.
2. Mapstone NP, Quirke P: The pathologist in the 21st century - man or machine? In Progress in Pathology, Volume 3. Edited by Kirkham N, Lemoine NR. Churchill Livingstone, United Kingdom; 1997:139-151.
3. Bloom G, Yang IV, Boulware D, Kwong KY, Coppola D, Eschrich S, Quackenbush J, Yeatman TJ: Multi-platform, multi-site, microarray-based human tumor classification. Am J Pathol 2004, 164:9-16.
4. Buckhaults P, Zhang Z, Chen YC, Wang TL, St Croix B, Saha S, Bardelli A, Morin PJ, Polyak K, Hruban RH, Velculescu VE, Shih IeM: Identifying tumor origin using a gene-expression-based classification map. Cancer Res 2003, 63:4144-4149.
5.
Shedden KA, Taylor JM, Giordano TJ, Kuick R, Misek DE, Rennert G, Schwartz DR, Gruber SB, Logsdon C, Simeone D, Kardia SL, Greenson JK, Cho KR, Beer DG, Fearon ER, Hanash S: Accurate molecular classification of human cancers based on gene expression using a simple classifier with a pathological tree-based framework. Am J Pathol 2003, 163:1985-1995.
6. Su AI, Welsh JB, Sapinoso LM, Kern SG, Dimitrov P, Lapp H, Schultz PG, Powell SM, Moskaluk CA, Frierson HF Jr, Hampton GM: Molecular classification of human carcinoma by use of gene expression signatures. Cancer Res 2001, 61:7388-7393.
7. Ma XJ, Patel R, Wang X, Salunga R, Murage J, Desai R, Tuggle JT, Wang W, Chu S, Stecker K, Raja R, Robin H, Moore M, Baunoch D, Sgroi D, Erlander M: Molecular classification of human cancers using a 92-gene real-time quantitative polymerase chain reaction assay. Arch Pathol Lab Med 2006, 130:465-473.
8. Lyman GH, Kuderer NM: Gene expression profile assays as predictors of recurrence-free survival in early-stage breast cancer: a meta-analysis. Clin Breast Cancer 2006, 7(5):372-9.
9. Jones J, Otu H, Spentzos D, Kolia S, Inan M, Beecken WD, Fellbaum C, Gu X, Joseph M, Pantuck AJ, Jonas D, Libermann TA: Gene signatures of progression and metastasis in renal cell cancer. Clin Cancer Res 2005, 11(16):5730-9.
10. Sotiriou C, Piccart MJ: Taking gene-expression profiling to the clinic: when will molecular signatures become relevant to patient care? Nat Rev Cancer 2007, 7(7):545-53.
11. Lewkonia R: Educational implications of practice isolation. Med Educ 2001, 35:528-9.
12. Dunn BE, Choi H, Almagro UA: Routine surgical telepathology in the Department of Veterans Affairs: experience-related improvements in pathologists' performance in 2200 cases. Telemed J 1999, 5:323-32.
13. Weiss-Carrington P, Blount M, Kipreos B: Telepathology between Richmond and Beckley Veterans Affairs Hospitals: report on the first 1000 cases. Telemed J 1999, 5:367-73.
14.
Weinstein RS, Descour MR, Liang C, Barker G, Scott KM, Richter L, Krupinski EA, Bhattacharyya AK, Davis JR, Graham AR, Rennels M, Russum WC, Goodall JF, Zhou P, Olszak AG, Williams BH, Wyant JC, Bartels PH: An array microscope for ultrarapid virtual slide processing and telepathology. Design, fabrication, and validation study. Hum Pathol 2004, 35(11):1303-14.
15. Jayaraman KS: India sets its sights on global health care market. Nat Med 2003, 9:377. doi:10.1038/nm0403-377a
16. [http://www.ibef.org/artdispview.aspx]
17. [http://www.roche.com/irp070626.pdf]
The role of AI classifiers in skin cancer images

Skin Res Technol. 2019;25:750-757. © 2019 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd

1 | INTRODUCTION

Malignant skin neoplasms, for example melanoma, basal cell carcinoma (BCC) and squamous cell carcinoma (SCC), have gained public attention over recent years. The increased interest in this matter, not only by the population in general but also by scientific researchers, can be attributed to the growing incidence rates and respective mortality tolls of malignant skin tumours.1 Despite the common association of alarming prevalence rates with unsafe lifestyle choices, such as excessive and unprotected UV exposure,2 the introduction of preventative measures, like frequent screening and biopsy of suspected lesions, can also account for the growth verified in these numbers.3 Moreover, the totality of deaths caused by malignant skin neoplasia has shown some stabilization, a consequence of early recognition strategies.1 To support this tendency and possibly improve the current rates, developments in the diagnosis field are required. The diagnosis of skin cancer is mostly based on the evaluation performed by a physician, the decision of surgical excision being made only when suspected malignity or an inconclusive analysis occurs.4 To supply information additional to that acquired by the naked eye, several imaging modality techniques have been used to assist in skin cancer diagnosis.
Dermoscopy images are a clinical reality, representing the number one aiding tool to evaluate melanocytic tumours,5 apart from other methods such as thermography- and spectroscopy-based techniques. Still, the understanding of the gathered information can be

Received: 29 March 2019 | Accepted: 28 April 2019
DOI: 10.1111/srt.12713
ORIGINAL ARTICLE
The role of AI classifiers in skin cancer images
Carolina Magalhaes | Joaquim Mendes | Ricardo Vardasca
INEGI-LAETA, Faculdade de Engenharia, Universidade do Porto, Porto, Portugal
Correspondence: Ricardo Vardasca, Faculdade de Engenharia, Universidade do Porto, Rua Dr. Roberto Frias S/N, 4200-465 Porto, Portugal. Email: rvardasca@fe.up.pt
Funding information: Fundação para a Ciência e a Tecnologia, Grant/Award Number: LAETA - UID/EMS/50022/2013

Abstract
Background: The use of different imaging modalities to assist in skin cancer diagnosis is a common practice in clinical scenarios. Different features representative of the lesion under evaluation can be retrieved from image analysis and processing. However, the integration and understanding of these additional parameters can be a challenging task for physicians, so artificial intelligence (AI) methods can be implemented to assist in this process. This bibliographic research was performed with the goal of assessing the current applications of AI algorithms as an assistive tool in skin cancer diagnosis, based on information retrieved from different imaging modalities.
Materials and methods: The bibliography databases ISI Web of Science, PubMed and Scopus were used for the literature search, with the combination of keywords: skin cancer, skin neoplasm, imaging and classification methods.
Results: The search resulted in 526 publications, which underwent a screening process, considering the established eligibility criteria. After screening, only 65 were qualified for revision.
Conclusion: Different imaging modalities have already been coupled with AI methods, particularly dermoscopy for melanoma recognition. Learners based on support vector machines seem to be the preferred option. Future work should focus on image analysis, processing stages and image fusion, assuring the best possible classification outcome.
KEYWORDS: algorithms, image processing and computer vision, machine learning, skin cancer

a challenging task for clinicians, and different diagnoses can be reached for the same skin neoplasia, owing to the dependence on practitioner experience.5 Thus, the introduction of artificial intelligence (AI) computational methods can be of value to supply a second opinion. The use of AI for decision support systems in medicine has been present for over 50 years. Machine learning algorithms represent a key branch of this area, especially when dealing with medical decisions, due to their ability to learn over time, as the supplied information increases, before drawing a conclusion.6 In the case of skin cancer diagnosis, a set of input variables can be retrieved from image analysis and processing. These parameters are fed to a classifier, for example a support vector machine (SVM), decision trees or a random forest, that delivers an output result, assigning each lesion to a given group.6 The classification performance can be evaluated by a set of selected measurements.7 High accuracy and efficiency rates are among the main advantages of applying classifiers in medical diagnosis, since the developed algorithms can capture and integrate information in ways, and in a fraction of the time, that the human brain cannot match.
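The features-to-classifier-to-metrics pipeline just described can be sketched with scikit-learn. The three "features" and the two-class structure below are synthetic stand-ins, not real lesion measurements; accuracy and sensitivity are two of the evaluation measurements reported throughout this review.

```python
# Minimal sketch of the classification stage: hand-crafted lesion
# features go in, a class label and standard metrics come out.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score, recall_score

rng = np.random.default_rng(0)
# 200 "lesions" x 3 toy features (e.g. asymmetry, border, colour scores)
benign = rng.normal(0.3, 0.1, size=(100, 3))
malignant = rng.normal(0.6, 0.1, size=(100, 3))
X = np.vstack([benign, malignant])
y = np.array([0] * 100 + [1] * 100)   # 1 = malignant

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0, stratify=y)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)
pred = clf.predict(X_te)
print("accuracy:", accuracy_score(y_te, pred))
print("sensitivity:", recall_score(y_te, pred))  # recall on the malignant class
```

Swapping `SVC` for a decision tree or random forest changes only the classifier line, which is precisely why the studies surveyed here could compare so many learners on the same feature sets.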
The successful implementation of such classifiers could reduce human error, providing early diagnosis and a consequent cost reduction in skin cancer treatments.8 The goal of this review was to evaluate the current state of the application of machine learning algorithms as an assistive tool in skin cancer diagnosis, based on information retrieved from different imaging modalities. The understanding of the parameters and strategies used to acquire the best results is of interest, in order to outline possible improvements for upcoming research.

2 | MATERIALS AND METHODS

2.1 | Search strategy

The presented bibliographic research was performed in the reference sources ISI Web of Science, PubMed and Scopus, with the following keyword combinations, respectively: TOPIC: (skin cancer) OR TOPIC (skin neoplasm) AND TOPIC: (imaging) AND (classification methods); ((skin cancer [Title/Abstract]) OR (skin neoplasm)) AND (imaging) AND (classification methods [Title/Abstract]); (TITLE-ABS-KEY ("skin cancer") OR TITLE-ABS-KEY ("skin neoplasm") AND TITLE-ABS-KEY (imaging) AND TITLE-ABS-KEY ("classification methods")). The selected fields of search, in each database, were used to assure uniformity in the results encountered, and the use of complex and unclear terms was avoided to guarantee a maximum number of results. Since the terms "skin cancer" and "skin neoplasm" are common interchangeable expressions, the Boolean operator OR was applied. No date restriction was applied. After the database search, duplicate removal was performed.

2.2 | Screening and eligibility of results

A title and abstract screening of the encountered articles was initially performed to consider only those that reported the use of classification methods in skin cancer images. The first eligibility criterion consisted in the removal of meeting abstracts and revision articles. Secondly, only publications written in English were kept, eliminating articles submitted in other languages.
Considering that this review is focused on the use of classifiers for skin cancer classification, articles that described the use of classification methods for other purposes were excluded, as well as research studies that classified skin neoplasms with different approaches, for example the ABCDE rule, making up the third and fourth selection rules, respectively. Lastly, publications that did not report values of accuracy, sensitivity or specificity of classification outcomes were also removed. The remaining publications were categorized based on the imaging modality used to characterize the skin neoplasm: dermatoscopy, microscopy, digital photography, spectroscopy, combination and others. A full-text review was conducted following the PRISMA rules for systematic reviews detailed in Refs 9,10. The revision process is described in Figure 1.

3 | RESULTS

A total of 526 publications were selected from the database search, with 251, 194 and 81 publications found in ISI Web of Science, PubMed and Scopus, respectively. Duplicate removal led to the exclusion of 183 articles. The remaining 343 publications were submitted to a title and abstract screening process that resulted in the elimination of 195 records, since artificial intelligence computational classification methods were not applied to skin cancer images for classification purposes. Subsequently, 83 articles were removed from the final results due to the eligibility criteria defined, with 19, 7, 31, 15 and 9 eliminated under the first, second, third, fourth and fifth criteria, respectively. The final set includes 65 publications eligible for revision, with 34, 6, 7, 12, 3 and 3 articles concerning the use of dermoscopy, microscopy, spectroscopy, digital photography, histopathology, combination of several and other imaging modalities to analyse the neoplasia, respectively.

FIGURE 1 PRISMA flow diagram. Adapted from Moher et al.10
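The selection flow reported above can be checked with simple bookkeeping; the counts below are taken directly from the text.

```python
# Record counts from the PRISMA-style selection described above.
identified = {"ISI Web of Science": 251, "PubMed": 194, "Scopus": 81}
total = sum(identified.values())          # 526 records identified
after_dedup = total - 183                 # duplicates removed
after_screening = after_dedup - 195       # title/abstract screening
eligible = after_screening - 83           # eligibility criteria applied
print(total, after_dedup, after_screening, eligible)  # 526 343 148 65
```

Each subtraction corresponds to one box of the PRISMA flow diagram (identification, deduplication, screening, eligibility), ending at the 65 publications reviewed.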
3.1 | Dermoscopy

The successful use of artificial intelligence methods for the assessment of skin lesions represented by computerized digital dermatoscopy images is one of the most documented subjects in this field. The majority of authors reporting on this topic demonstrate the usefulness of several classification algorithms in the evaluation of lesion malignancy, distinguishing benign from malignant melanocytic tumours. Grzesiak-Kopec et al11 presented two different strategies for this purpose: single classifiers (Naive Bayes, random forest and k-NN) and a meta-learning approach with bootstrap aggregating and a vote ensemble classifier. The outcomes of the metaheuristics exceeded those of the single classifiers, with bagging applied to random forest presenting the highest sensitivity value (0.851). Pairwise coupling (PWC) of SVM, k-NN and Gaussian maximum likelihood (ML) was applied by Rahman et al12 to different colour and texture features extracted from skin lesion images. The implementation of fusion by sum of the single classification results exceeded the performance of the lone machine learning algorithms, delivering an accuracy (AC) of 75.69%. Pennisi et al13 showed the ability of Naive Bayes, Adaptive Boosting (AdaBoost), k-NN and random trees machine learning methods in the detection of melanomas among benign lesions, segmented with Delaunay triangulation. The best results were encountered with AdaBoost, with sensitivity (SN) and specificity (SP) values of 0.935 and 0.871, respectively. With the same purpose, Ruiz et al14 used k-NN, ANN and Bayes learners in a collaborative method, improving the end results. A number of seven neighbours and seven hidden layers were the optimal parameters encountered for the k-NN and ANN algorithms, respectively.
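The classifier-combination strategies cited above follow a common pattern that can be sketched with scikit-learn's VotingClassifier. The learners echo those cited (k-NN with seven neighbours, random forest, Naive Bayes), but the two-class feature data below are synthetic stand-ins, not dermoscopy features.

```python
# Sketch of heterogeneous learners voting on each lesion (majority vote).
import numpy as np
from sklearn.ensemble import VotingClassifier, RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.3, 0.15, (80, 4)),   # "benign" cluster
               rng.normal(0.7, 0.15, (80, 4))])  # "malignant" cluster
y = np.array([0] * 80 + [1] * 80)

ensemble = VotingClassifier(
    estimators=[
        ("knn", KNeighborsClassifier(n_neighbors=7)),  # seven neighbours
        ("rf", RandomForestClassifier(random_state=1)),
        ("nb", GaussianNB()),
    ],
    voting="hard",   # majority vote; voting="soft" averages probabilities
).fit(X, y)

print(ensemble.predict([[0.3, 0.3, 0.3, 0.3], [0.7, 0.7, 0.7, 0.7]]))  # [0 1]
```

Hard voting corresponds to the majority-vote fusion used by several of the cited studies, while soft voting is closer to fusion by sum of classifier outputs.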
Apart from these, several other authors chose similar approaches, comparing single learners to select the classifier that performs best, depending on the provided data.15,16 The implementation of more complex approaches for the purpose of classifying pigmented skin lesions was performed by Masood et al17, with the development of a deep belief network, using labelled and unlabelled data, in parallel with a self-advised support vector machine learning algorithm responsible for improving the classification results. This strategy deals with the frequent problem of insufficient training data, delivering classification errors lower than other common classifiers. Schaefer et al18 presented another method to tackle this data imbalance, using an ensemble of various one-class classifiers based on support vector data description. The final classification results showed its superiority in comparison with other ensemble classifiers based on SVM, commonly used in this situation. The combination of multilayer perceptron (ANN), Naive Bayes, decision tree, k-NN and SVM classifiers documented by Castillejos-Fernández et al19 also exceeded the performance of the single classifiers in the task of malignancy classification. The relevance of the features selected for input is stressed, as the ensemble accuracy decreases as the number of features increases. This topic is also highlighted by Faal et al,20 who achieved better classification results with an ensemble of k-NN, SVM and linear discriminant analysis (LDA) algorithms using different feature inputs for the different classifiers, as opposed to the same shape, colour and texture components for all of them. The impact of input vectors was also tested by Rastgoo et al21 with the implementation of ensemble learning with random forest (RF) and a weighted combination constructed with RF, SVM and LDA, where the combination of several features achieved higher specificity (94%) than the use of a single characteristic.
The same conclusion was reached by the same authors,22 but applying single learners, that is, SVM, RF and gradient boosting. In both articles, RF outperformed the others. Fengying et al23 built a meta-ensemble model for the classification of melanocytic lesions, composed of three different ensembles based on NN, each fed with different inputs, with posterior combination of the outputs. Its overall sensitivity and accuracy exceeded RF, Gentle AdaBoost, SVM, k-NN, fuzzy NN and systems based on bag-of-features (BoF) models. Lastly, Abbas et al24 manipulated image features representative of lesion patterns to classify pigmented skin tumours, through the use of majority voting with support vector machines, achieving accuracy, sensitivity and specificity values of 93%, 94% and 84%, respectively.

Apart from ensemble models and meta-learning approaches, the implementation of single learners for melanocytic lesion classification is also fairly common. SVM is the favoured learning model, used in standard form by the majority of authors, who preferred to focus on elaborate feature extraction and selection methods.25-28 Specifically, Jaworek-Korjakowska et al29 used this type of learning algorithm to develop a computer-aided diagnosis (CAD) system for the detection of micro malignant melanoma that outdid other literature models, with SN of 90% and SP of 96%. La Torre et al30 tested the performance of support vector machine classifiers with different kernel functions, namely chi-square, Gaussian and generalized Gaussian. The latter showed remarkable results, detecting all cancerous lesions. Codella et al31 explored the application of SVM to whole and partitioned images, segmented using ensemble approaches, resulting in an area under the receiver operating characteristic (ROC) curve of 0.843. Support vector machines have also been used to deal with class imbalance situations in melanoma classification.
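The kernel functions compared in these SVM studies can be written as plain functions of two feature vectors; the SVM substitutes such a kernel for the inner product to obtain non-linear decision boundaries. The following sketch is illustrative only, with hypothetical feature vectors and parameter values, and is not taken from any of the reviewed papers.

```python
# Illustrative kernel functions of the kind compared in the SVM studies
# (Gaussian/RBF, polynomial, chi-square); gamma, degree and c are
# hypothetical hyperparameters.
import math

def gaussian_kernel(x, y, gamma=1.0):
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, y)))

def polynomial_kernel(x, y, degree=2, c=1.0):
    return (sum(a * b for a, b in zip(x, y)) + c) ** degree

def chi_square_kernel(x, y, gamma=1.0):
    # Assumes non-negative inputs, such as normalized histogram features.
    s = sum((a - b) ** 2 / (a + b) for a, b in zip(x, y) if a + b > 0)
    return math.exp(-gamma * s)

x, y = [0.2, 0.5, 0.3], [0.25, 0.45, 0.30]   # hypothetical lesion features
print(gaussian_kernel(x, y), polynomial_kernel(x, y), chi_square_kernel(x, y))
```

Each kernel measures similarity differently, which is why, as the reviewed papers stress, the appropriate choice depends on the type of features in the input vector.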
Celebi et al32 implemented random under-sampling and the synthetic minority oversampling technique (SMOTE) and concluded that SMOTE is the better approach, since under-sampling can eliminate valuable data and drastically reduce the number of samples, hampering the learning process. Since SVM can ignore input samples that are not linearly separable during the training step, Masood and Al-Jumaily33 opted for a self-advising SVM (SA-SVM) strategy to retrieve information from the misclassified data during this phase. SA-SVM presented the highest AC, SN and SP, followed, in decreasing order, by SVM with radial basis function, quadratic, polynomial, linear and multilayer perceptron kernels. The classification capacity of SVM with different kernel functions was also explored by Wahba et al,34 defending the importance of kernel selection according to the type of features included in the input vector. Since the supplied dataset was non-linearly separable, the quadratic polynomial kernel delivered the top results. The same kernel was found to be the most adequate by Yuan et al,35 to prevent an increase in error rate. Few authors go beyond a single-level distinction with SVM by introducing a second classification step. Suganya36 developed a model to first categorize melanocytic and non-melanocytic lesions, followed by the differentiation of melanoma from naevus and of BCC from seborrhoeic keratosis, respectively. Joseph and Panicker37 divided normal and abnormal skin lesions, further classifying the latter into atypical naevi or melanoma. Both achieved SN, SP and AC values close to, or above, 90%. Melanoma detection also proved feasible with the implementation of back-propagation neural network (NN) classifiers, although with results inferior to those obtained with SVM.
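The core idea of SMOTE, as opposed to under-sampling, can be sketched in a few lines: instead of discarding majority samples, synthetic minority samples are placed on the segment between a minority sample and one of its minority neighbours. The sketch below is a simplification (it picks a random minority partner rather than a true k-nearest neighbour) and all feature values are hypothetical.

```python
# Minimal SMOTE-style oversampling sketch (illustrative, not the reviewed
# implementation): each synthetic sample interpolates between two minority
# samples, so the training set grows instead of shrinking.
import random

def smote_like(minority, n_synthetic, seed=0):
    rng = random.Random(seed)
    synthetic = []
    for _ in range(n_synthetic):
        a = rng.choice(minority)
        b = rng.choice(minority)   # crude stand-in for a true k-NN pick
        t = rng.random()           # interpolation factor in [0, 1)
        synthetic.append([ai + t * (bi - ai) for ai, bi in zip(a, b)])
    return synthetic

melanoma_feats = [[0.9, 0.2], [0.8, 0.35], [0.95, 0.25]]  # hypothetical features
new_samples = smote_like(melanoma_feats, 4)
print(len(new_samples))   # 4 synthetic melanoma samples
```

Because each synthetic point lies between two existing minority points, it stays inside the region the minority class already occupies, which is what makes SMOTE safer than simply duplicating samples.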
Premaladha and Ravichandran achieved an accuracy of 87%, considering an error lower than e^-0.5, while Messadi et al38 established a maximum error of 0.1, achieving a correct classification rate (TCR) of 76.76%. The implementation of additional algorithms for the optimization of ANN performance in skin cancer classification is also found; common methods include the use of genetic algorithms (GA)39 and particle swarm optimization (PSO).40

Additionally, random forest (RF) learning methods have been applied to dermatoscopy images for both melanoma41 and BCC42,43 classification. Ferris et al41 constructed a model of 1000 decision trees with a threshold for malignant diagnosis of 0.4, with sensitivity higher than that of physicians and specificity lower, while Kharazmi et al42,43 explored the use of vascular features for automatic basal cell carcinoma detection with 100 trees. No reference is made to the reasons behind the choice of tree number.

When compared with the abovementioned classifiers, the implementation of k-NN outside ensemble models or comparative approaches is more uncommon. For melanoma recognition in dermatoscopy images, Ganster et al44 chose a 24-NN strategy, achieving a better overall performance when only two classes (benign and malignant) were considered, as opposed to three (benign, dysplastic and malignant). A neighbourhood of 24 was selected because it ensured the best results for the available data set.

3.2 | Microscopy

Machine learning algorithms have demonstrated their potential in the detection of melanoma tumours analysed through microscopy techniques. The use of decision trees appears to be the preferred choice for the classification phase, whether the image collection is performed by confocal laser-scanning microscopy (CLSM)45-47 or reflectance confocal microscopy (RCM).48 The reported sensitivity and specificity values of the CLSM papers exceed the 90% mark, reaching 97% and 96%,46,47 respectively.
The same classification and regression tree (CART) software is implemented by all authors, although the parameters selected are not mentioned.

The use of features extracted from fluorescence images as inputs for machine learning classifiers has also been approached. Odeh et al49 reported excellent results for the k-NN algorithm in the classification of benign and malignant skin lesions, and poorer outcomes in the differentiation of BCC and AK tumours. A Euclidean distance metric was used, and several values of k were tested (1, 3, 5, 7 and 9). The best results were achieved with k = 1; however, this can be misleading due to possible overfitting of the data. The authors accentuate the relevance of feature selection, testing the use of a genetic algorithm and a sequential scanning selection technique for this purpose. Odeh and Baareh50 further explored one of the authors' previous works and tested other options, namely ANNs with GA and an adaptive neuro-fuzzy inference system, for the same classification purposes. Nonetheless, k-NN with GA outperformed all the others.

3.3 | Spectroscopy

Artificial intelligence algorithms have achieved satisfactory results in the identification of the deadliest form of skin cancer in spectroscopy images. However, these are often worse than the ones attained with dermoscopy and microscopy methods. Li et al51 trained and tested the k-NN, ANN and Naïve Bayes classifiers, detailing the parameter choice. While default values were chosen for the application of the latter algorithm, ANN was used with back-propagation and the number of six hidden units was justified as the sum of the number of inputs and outputs divided by two. To balance noise and result robustness, three neighbours were selected for k-NN. SVM was the learner of choice for Liu et al,52 who verified a particular improvement in classification results when patient age was added to the input feature vector.
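The k-NN scheme with a Euclidean metric and a swept neighbour count, as used in several of the works above, can be sketched as follows. The training points and query below are hypothetical; the sweep over k mirrors the k = 1, 3, 5, 7, 9 comparison, and k = 1 memorizes the training set, which is why it can overstate performance.

```python
# Minimal k-NN sketch (hypothetical data): Euclidean distance, majority
# label among the k nearest training samples.
import math
from collections import Counter

def knn_predict(train, x, k):
    """train: list of (feature_vector, label) pairs; x: query feature vector."""
    by_dist = sorted(train, key=lambda fl: math.dist(fl[0], x))
    labels = [label for _, label in by_dist[:k]]
    return Counter(labels).most_common(1)[0][0]

train = [([0.10, 0.20], "benign"),    ([0.15, 0.25], "benign"),
         ([0.80, 0.90], "malignant"), ([0.85, 0.80], "malignant"),
         ([0.75, 0.95], "malignant")]

for k in (1, 3, 5):
    print(k, knn_predict(train, [0.7, 0.85], k))
```

On a data set this small every k agrees, but on real lesion features the choice of k trades noise sensitivity (small k) against over-smoothing (large k), which is the balance Li et al describe.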
Tomatis et al53,54 focused their research on the implementation of neural networks for the classification of multispectral images, achieving SN and SP values higher than 70% in both works. The use of electrical impedance spectra as input for melanoma detection was successfully tested by Mohr et al55 with the SVM classifier, achieving high accuracy. With the same goal, Aberg et al56 combined the results of four different learners (partial least squares discriminant analysis (PLS-DA), SVM, ANN and k-NN) to obtain a sensitivity value of 95%. Lastly, k-NN was used by Maciel et al57 to discriminate other skin disorders, for example psoriasis, from neoplastic lesions represented in spectral images. Increasing k had little to no influence on the excellent SN and SP results encountered.

3.4 | Digital photography

Classification strategies based on support vector machine learning are favoured when evaluating features extracted from macroscopic images. Similar to the previously mentioned imaging methods, discrimination of melanoma among other lesions is the main objective, and accuracy values from 79%58 to 97%59 have been achieved. Takruri et al60 constructed a multi-classifier to improve the accuracy of melanoma detection, based on three SVM algorithms with radial basis function (RBF) kernels. The top result of 88.9% AC was achieved when probability averaging fusion, instead of majority voting, was used to combine the classifiers' results. The RBF kernel was also selected by Oliveira et al,61 as well as the histogram intersection kernel, due to the non-linearity of the data, to increase algorithm efficiency. Likewise, Spyridonos et al62 made the same kernel selection, but for the detection of AK among healthy skin, attaining sensitivity and specificity values in the ranges of 63.7%-80.2% and 65.6%-82.3%, respectively.
As in other papers using different imaging techniques for skin lesion classification, some authors rather focus their work on the development of novel feature extraction and selection strategies to achieve the best classification results.63-66 Jafari et al67 chose this approach, extracting different sets of colour features that were used as inputs for an ANN classifier. Neural network performance in melanoma diagnosis based on extracted colour features was also evaluated by Przystalski.68 SVM learners with linear, polynomial, quadratic and sigmoid kernels were used for comparison, with the highest accuracy (97.44%) corresponding to the linear function. The implementation of nearest neighbour methods on macroscopic images was described only by Cavalcanti and Scharcanski.69 A k of 1 and the Euclidean distance metric were selected to distinguish benign from malignant lesions. In order to reduce the number of false negatives obtained by this approach, a set of Bayes classifiers is applied afterwards, leading to an increase in sensitivity from 94.92% to 96.37%.

3.5 | Histopathological images

The detection of non-melanoma skin cancers and pre-cancerous lesions has successfully been performed with the analysis of histopathological images. SVM is the go-to machine learning algorithm, showing sensitivity and specificity values higher than 90%70 and 80%,71 respectively. Its use in semi-advised learning models is also encountered, but for melanoma recognition. As in their work on dermoscopy images,17 Masood et al72 chose this strategy to address the issue of limited labelled data, using SVM to adjust the weight of each sample. The SA-SVM outperformed the standard SVM learner.

3.6 | Other imaging modalities

Lesion classification strategies based on information retrieved from uncommon imaging modalities have also been studied by some authors.
Parameters from terahertz pulse images of BCC lesions were retrieved and used for their distinction from healthy skin, using SVM algorithms.73 Kia et al74 classified healthy skin, melanoma, BCC and benign lesion sonograms with a multilayer perceptron (ANN), achieving an SN of 98% and an SP of 5%. Even though the authors emphasize the importance of high sensitivity, the low specificity obtained is not acceptable, since the high number of false positives would cause unnecessary patient stress. Hence, the proposed classifier needs improvement. Lastly, Ding et al75 used 3D texture features and 2D ABCD parameters for melanoma diagnosis with a support vector machine. A multilayer perceptron kernel was selected, reaching an accuracy of 87.8%.

3.7 | Statistical significance

Good classification results can have a greater impact when supported by an indicator of statistical significance, that is, the P-value. Typically, this probability is set to be less than 5% in order to achieve statistically significant results.76 When analysing the results of this bibliographic research, few authors chose to include this indicator in their work. Some of them selected a P-value smaller than 5%,53,73 while others preferred a lower threshold of 0.001 (Table 1).25,52 All papers used this probability during the feature selection stage for the construction of input vectors, in order to guarantee the best possible classification outcome. Thus, only features that resulted in a P-value lower than the set limit were considered for lesion classification.
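P-value-based feature screening of this kind can be sketched as follows. The reviewed papers do not publish their exact tests, so this sketch assumes a two-sample z-statistic with a normal-approximation two-sided P-value, and all feature values are hypothetical.

```python
# Illustrative P-value feature screening (hypothetical data and test choice):
# a feature is kept for the input vector only if its benign vs malignant
# group difference has P below the threshold (here 0.05).
import math

def two_sided_p(group_a, group_b):
    na, nb = len(group_a), len(group_b)
    ma, mb = sum(group_a) / na, sum(group_b) / nb
    va = sum((x - ma) ** 2 for x in group_a) / (na - 1)   # sample variances
    vb = sum((x - mb) ** 2 for x in group_b) / (nb - 1)
    z = (ma - mb) / math.sqrt(va / na + vb / nb)
    return math.erfc(abs(z) / math.sqrt(2))   # two-sided normal approximation

benign_asym     = [0.1, 0.2, 0.15, 0.12, 0.18]
malignant_asym  = [0.7, 0.8, 0.75, 0.85, 0.72]   # clearly separated feature
benign_noise    = [0.4, 0.6, 0.5, 0.55, 0.45]
malignant_noise = [0.5, 0.45, 0.55, 0.6, 0.4]    # uninformative feature

features = {"asymmetry": (benign_asym, malignant_asym),
            "noise": (benign_noise, malignant_noise)}
selected = [name for name, (a, b) in features.items()
            if two_sided_p(a, b) < 0.05]
print(selected)   # only the informative feature survives the screen
```

With such small groups a t-distribution would be the more rigorous choice; the point here is only the screening logic: features failing the significance threshold never reach the classifier.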
4 | DISCUSSION OF TRENDS AND FUTURE CHALLENGES

Most publications concerning the use of classifiers for skin cancer detection seem to highlight the importance of the feature extraction and selection stages to attain the best results.19-22,25,26,28,49,63-67 Thus, further research in this area is expected to focus on image analysis and processing rather than on new machine learning strategies. In fact, there is a gap in the description of the classification task, since some papers lack any reference to the parameters selected for the implementation of the algorithm of choice.11,27,37,42,71 The decision to use platforms that already include pre-written algorithms, such as WEKA, MATLAB and LIBSVM, could be the cause, since a few authors prefer the implementation of standard models instead of exploring other parameter options. Nonetheless, papers concerning the development of highly complex classification systems can also be found17,33,72 and their development should be pursued. The increase of available labelled data is of concern to improve the training task. However, the majority of studies rely on images available in databases13,49 or supplied by hospitals,21,44 being dependent on the number of given samples.

TABLE 1 Publications referring to the use of P-value

Authors           Imaging modality                            P-value   Application of P-value
Amelard et al25   Dermoscopy                                  <0.001    Selection of features for differentiation between benign and malignant lesions
Truong et al73    Terahertz pulse imaging                     <0.05     Selection of Debye parameters for distinction of normal skin and BCC lesions
Liu et al52       Reflectance imaging + 3D geometric info     <0.001    Selection of best combination of patient meta-data for melanoma detection
Tomatis et al53   Multispectral imaging                       <0.05     Selection of lesion descriptors for distinction of melanoma and non-melanoma lesions
Hence, strategies to address this issue are of interest for future work and have already been explored by some authors.17,72 The use of support vector machine (SVM) algorithms seems to be the prime choice for skin cancer classification, either in basic models29,30 or in more complex ones,18,60 suggesting that upcoming research will follow this tendency. The use of the P-value to attest to the significance of the acquired results is not a common practice; however, it should be considered in upcoming research.25,52,53,73 Lastly, new publications pertaining to classification models for skin cancer detection should focus on finding a good balance between specificity and sensitivity values, to avoid faulty diagnoses for healthy and ill patients, respectively, opening space for the fusion of information from different image modalities.

5 | CONCLUSION

Machine learning algorithms are a useful tool to assist in medical diagnosis, due to their ability to rapidly assimilate information. Their efficiency in melanoma detection, particularly in dermoscopy images, has been proved by extensive research in this area, using several classifiers. However, methodologies based on information retrieved from other imaging modalities, for example spectroscopy and sonograms, need further improvement before application in a clinical scenario. A note is made for future research, stressing the significance of the adopted image analysis and processing methods, due to the dependency of the classifier's performance on this task. In addition, further work exploring different classifier parameter options is also important to assure successful implementation, thus reducing human/operator error and the health costs associated with skin cancer diagnosis.
ACKNOWLEDGEMENTS

The authors gratefully acknowledge the funding of Project NORTE-01-0145-FEDER-000022 - SciTech - Science and Technology for Competitive and Sustainable Industries, co-financed by Programa Operacional Regional do Norte (NORTE2020), through Fundo Europeu de Desenvolvimento Regional (FEDER), and Project LAETA - UID/EMS/50022/2013.

ORCID

Carolina Magalhaes https://orcid.org/0000-0001-5602-718X
Joaquim Mendes https://orcid.org/0000-0003-4254-1879
Ricardo Vardasca https://orcid.org/0000-0003-4217-2882

REFERENCES

1. Reichrath J, Leiter U, Garbe C. Epidemiology of melanoma and nonmelanoma skin cancer: the role of sunlight. In: Reichrath J, ed. Sunlight, Vitamin D and Skin Cancer. New York, NY: Springer; 2014:89-103.
2. Narayanan DL, Saladi RN, Fox JL. Ultraviolet radiation and skin cancer. Int J Dermatol. 2010;49(9):978-986.
3. Apalla Z, Nashan D, Weller RB, Castellsagué X. Skin cancer: epidemiology, disease burden, pathophysiology, diagnosis, and therapeutic approaches. Dermatol Ther (Heidelb). 2017;7(1):5-19.
4. Luba MC, Bangs SA, Mohler AM, Stulberg DL. Common benign skin tumors. Am Fam Physician. 2003;67(4):729-738. http://www.aafp.org/afp/2003/0215/p729.html
5. Massone C, Di Stefani A, Soyer HP. Dermoscopy for skin cancer detection. Curr Opin Oncol. 2005;17(2):147-153.
6. Kononenko I. Machine learning for medical diagnosis: history, state of the art and perspective. Artif Intell Med. 2001;23(1):89-109.
7. Labatut V, Cherifi H. Accuracy measures for the comparison of classifiers. 5th Int Conf Inf Technol. 2011. https://doi.org/10.1.1.658.1777
8. Gui C, Chan V. Machine learning in medicine. Univ West Ont Med J. 2017;86(2):77-78.
9. Liberati A, Altman DG, Tetzlaff J, et al. The PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate health care interventions: explanation and elaboration. PLoS Medicine. 2009;6(7):e1000100.
10.
Moher D, Liberati A, Tetzlaff J, Altman D, The PRISMA Group. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. PLoS ONE. 2009;6(7):1-6.
11. Grzesiak-Kopeć K, Ogorzałek M, Nowak L. Computational classification of melanocytic skin lesions. Artif Intell Soft Computing. 2016;9693:169-178.
12. Rahman MM, Bhattacharya P. An integrated and interactive decision support system for automated melanoma recognition of dermoscopic images. Comput Med Imaging Graph. 2010;34(6):479-486.
13. Pennisi A, Bloisi DD, Nardi D, Giampetruzzi AR, Mondino C, Facchiano A. Skin lesion image segmentation using Delaunay Triangulation for melanoma detection. Comput Med Imaging Graph. 2016;52:89-103.
14. Ruiz D, Berenguer V, Soriano A, Sánchez B. A decision support system for the diagnosis of melanoma: a comparative approach. Expert Syst Appl. 2011;38(12):15217-15223.
15. Narasimhan K, Elamaran V. Wavelet-based energy features for diagnosis of melanoma from dermoscopic images. Int J Biomed Eng Technol. 2016;20(3):243.
16. Barata C, Ruela M, Francisco M, Mendonca T, Marques JS. Two systems for the detection of melanomas in dermoscopy images using texture and color features. IEEE Syst J. 2014;8(3):965-979.
17. Masood A, Al-Jumaily A, Anam K. Self-supervised learning model for skin cancer diagnosis. Int IEEE/EMBS Conf Neural Eng NER. 2015:22-24.
18. Schaefer G, Krawczyk B, Celebi ME, Iyatomi H, Hassanien AE. Melanoma classification based on ensemble classification of dermoscopy image features. In: Hassanien AE, Tolba MF, Taher Azar A, eds. Advanced Machine Learning Technologies and Applications. AMLTA 2014. Communications in Computer and Information Science. Cham: Springer; 2014:291-298. https://doi.org/10.1007/978-3-319-13461-1_28
19. Castillejos-Fernández H, López-Ortega O. An intelligent system for the diagnosis of skin cancer on digital images taken with dermoscopy. Acta Polytechnica Hungarica. 2017;14(3):169-185.
20. Faal M, Miran Baygi MH, Kabir E. Improving the diagnostic accuracy of dysplastic and melanoma lesions using the decision template combination method. Ski Res Technol. 2013;19(1):113-122.
21. Rastgoo M, Morel O, Marzani F, Garcia R. Ensemble approach for differentiation of malignant melanoma. Proc SPIE - Int Soc Opt Eng. 2015;9534. https://doi.org/10.1117/12.2182799
22. Rastgoo M, Garcia R, Morel O, Marzani F. Automatic differentiation of melanoma from dysplastic nevi. Comput Med Imaging Graph. 2015;43:44-52.
23. Xie F, Fan H, Li Y, Jiang Z, Meng R, Bovik A. Melanoma classification on dermoscopy images using a neural network ensemble model. IEEE Trans Med Imaging. 2017;36(3):849-858.
24. Abbas Q, Sadaf M, Akram A. Prediction of dermoscopy patterns for recognition of both melanocytic and non-melanocytic skin lesions. Computers. 2016;5(3):13.
25. Amelard R, Glaister J, Wong A, Clausi DA. High-level intuitive features (HLIFs) for intuitive skin lesion description. IEEE Trans Biomed Eng. 2015;62(3):820-831.
26. Almansour E, Jaffar MA. Classification of dermoscopic skin cancer images using color and hybrid texture features. IJCSNS Int J Comput Sci Netw Secur. 2016;16(4):135-139.
27. Adjed F, Faye I, Ababsa F, Gardezi SJ, Dass SC. Classification of skin cancer images using local binary pattern and SVM classifier. In: 4th International Conference on Fundamental and Applied Sciences (ICFAS 2016). Vol 1787. Kuala Lumpur, Malaysia: AIP Conference Proceedings; 2016. https://doi.org/10.1063/1.4968145
28.
Tan TY, Zhang L, Jiang M. An intelligent decision support system for skin cancer detection from dermoscopic images. In: 2016 12th International Conference on Natural Computation, Fuzzy Systems and Knowledge Discovery (ICNC-FSKD). IEEE; 2016:2194-2199. https://doi.org/10.1109/FSKD.2016.7603521
29. Jaworek-Korjakowska J. Computer-aided diagnosis of micro-malignant melanoma lesions applying support vector machines. Biomed Res Int. 2016;2016:1-8.
30. La Torre E, Caputo B, Tommasi T. Learning methods for melanoma recognition. Int J Imaging Syst Technol. 2010;20(4):316-322.
31. Codella N, Nguyen Q-B, Pankanti S, et al. Deep learning ensembles for melanoma recognition in dermoscopy images. IBM J Res Dev. 2017;61(4/5):5:1-5:15.
32. Celebi ME, Kingravi HA, Uddin B, et al. A methodological approach to the classification of dermoscopy images. Comput Med Imaging Graph. 2007;31(6):362-373.
33. Masood A, Al-Jumaily A. SA-SVM based automated diagnostic system for skin cancer. In: Wang Y, Jiang X, Zhang D, eds. SPIE Sixth International Conference on Graphic and Image Processing (ICGIP 2014). Vol 9443. Sydney, Australia: International Society for Optics and Photonics; 2015. https://doi.org/10.1117/12.2179094
34. Wahba MA, Ashour AS, Napoleon SA, Abd Elnaby MM, Guo Y. Combined empirical mode decomposition and texture features for skin lesion classification using quadratic support vector machine. Heal Inf Sci Syst. 2017;5(1):10.
35. Yuan X, Yang Z, Zouridakis G, Mullani N. SVM-based texture classification and application to early melanoma detection. In: 2006 International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE; 2006:4775-4778. https://doi.org/10.1109/IEMBS.2006.260056
36. Suganya R. An automated computer aided diagnosis of skin lesions detection and classification for dermoscopy images. In: 2016 International Conference on Recent Trends in Information Technology (ICRTIT). IEEE; 2016:1-5. https://doi.org/10.1109/ICRTIT.2016.7569538
37.
Joseph S, Panicker JR. Skin lesion analysis system for melanoma detection with an effective hair segmentation method. In: 2016 International Conference on Information Science (ICIS). IEEE; 2016:91-96. https://doi.org/10.1109/INFOSCI.2016.7845307
38. Messadi M, Bessaid A, Taleb-Ahmed A. New characterization methodology for skin tumors classification. J Mech Med Biol. 2010;10(03):467-477.
39. Aswin RB, Jaleel JA, Salim S. Hybrid genetic algorithm - artificial neural network classifier for skin cancer detection. In: 2014 International Conference on Control, Instrumentation, Communication and Computational Technologies (ICCICCT). IEEE; 2014:1304-1309. https://doi.org/10.1109/ICCICCT.2014.6993162
40. Cheng B, Joe Stanley R, Stoecker WV, et al. Analysis of clinical and dermoscopic features for basal cell carcinoma neural network classification. Ski Res Technol. 2013;19(1):217-222.
41. Ferris LK, Harkes JA, Gilbert B, et al. Computer-aided classification of melanocytic lesions using dermoscopic images. J Am Acad Dermatol. 2015;73(5):769-776.
42. Kharazmi P, Lui H, Wang ZJ, Lee TK. Automatic detection of basal cell carcinoma using vascular-extracted features from dermoscopy images. In: 2016 IEEE Canadian Conference on Electrical and Computer Engineering (CCECE). IEEE; 2016:1-4. https://doi.org/10.1109/CCECE.2016.7726666
43. Kharazmi P, AlJasser MI, Lui H, Wang ZJ, Lee TK. Automated detection and segmentation of vascular structures of skin lesions seen in dermoscopy, with an application to basal cell carcinoma classification. IEEE J Biomed Health Inform. 2017;21(6):1675-1684.
44. Ganster H, Pinz P, Rohrer R, Wildling E, Binder M, Kittler H. Automated melanoma recognition. IEEE Trans Med Imaging. 2001;20(3):233-239.
45. Gerger A, Koller S, Weger W, et al. Sensitivity and specificity of confocal laser-scanning microscopy for in vivo diagnosis of malignant skin tumors. Cancer. 2006;107(1):193-200.
46. Lorber A, Wiltgen M, Hofmann-Wellenhof R, et al.
Correlation of image analysis features and visual morphology in melanocytic skin tumours using in vivo confocal laser scanning microscopy. Ski Res Technol. 2009;15(2):237-241.
47. Gerger A, Wiltgen M, Langsenlehner U, et al. Diagnostic image analysis of malignant melanoma in in vivo confocal laser-scanning microscopy: a preliminary study. Ski Res Technol. 2008;14(3):359-363.
48. Koller S, Wiltgen M, Ahlgrimm-Siess V, et al. In vivo reflectance confocal microscopy: automated diagnostic image analysis of melanocytic skin tumours. J Eur Acad Dermatology Venereol. 2011;25(5):554-558.
49. Odeh SM, de Toro F, Rojas I, Saéz-Lara MJ. Evaluating fluorescence illumination techniques for skin lesion diagnosis. Appl Artif Intell. 2012;26(7):696-713.
50. Odeh SM, Baareh A. A comparison of classification methods as diagnostic system: a case study on skin lesions. Comput Methods Programs Biomed. 2016;137:311-319.
51. Li L, Zhang Q, Ding Y, Jiang H, Thiers BH, Wang JZ. Automatic diagnosis of melanoma using machine learning methods on a spectroscopic system. BMC Med Imaging. 2014;14(1):1-12.
52. Liu Z, Sun J, Smith M, Smith L, Warr R. Incorporating clinical metadata with digital image features for automated identification of cutaneous melanoma. Br J Dermatol. 2013;169(5):1034-1040.
53. Tomatis S, Carrara M, Bono A, et al. Automated melanoma detection with a novel multispectral imaging system: results of a prospective study. Phys Med Biol. 2005;50(8):1675-1687.
54. Tomatis S, Bono A, Bartoli C, et al. Automated melanoma detection: multispectral imaging and neural network approach for classification. Med Phys. 2003;30(2):212-221.
55. Mohr P, Birgersson U, Berking C, et al. Electrical impedance spectroscopy as a potential adjunct diagnostic tool for cutaneous melanoma. Ski Res Technol. 2013;19(2):75-83.
56. Åberg P, Birgersson U, Elsner P, Mohr P, Ollmar S. Electrical impedance spectroscopy and the diagnostic accuracy for malignant melanoma. Exp Dermatol.
2011;20(8):648-652.
57. Maciel VH, Correr WR, Kurachi C, Bagnato VS, da Silva SC. Fluorescence spectroscopy as a tool to in vivo discrimination of distinctive skin disorders. Photodiagnosis Photodyn Ther. 2017;19:45-50.
58. Jafari MH, Samavi S, Karimi N, Soroushmehr S, Ward K, Najarian K. Automatic detection of melanoma using broad extraction of features from digital images. In: 2016 38th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC). IEEE; 2016:1357-1360. https://doi.org/10.1109/EMBC.2016.7590959
59. Eslava J, Druzgalski C. Differential feature space in mean shift clustering for automated melanoma assessment. In: Jaffray DA, ed. IFMBE Proceedings. Vol 51. Cham: Springer International Publishing; 2015:1401-1404. https://doi.org/10.1007/978-3-319-19387-8_341
60. Takruri M, Rashad MW, Attia H. Multi-classifier decision fusion for enhancing melanoma recognition accuracy. Int Conf Electron Devices, Syst Appl. 2017:0-4.
61. Oliveira RB, Marranghello N, Pereira AS, Tavares J. A computational approach for detecting pigmented skin lesions in macroscopic images. Expert Syst Appl. 2016;61:53-63.
62. Spyridonos P, Gaitanis G, Likas A, Bassukas ID. Automatic discrimination of actinic keratoses from clinical photographs. Comput Biol Med. 2017;88:50-59.
63. Abbes W, Sellami D. High-level features for automatic skin lesions neural network based classification. Int Image Process Appl Syst Conf. 2016;1-7.
64.
Karami N, Esteki A. Automated diagnosis of melanoma based on nonlinear complexity features. In: Osman NAA, Abas WABW, Wahab AKA, Ting H, eds. 5th Kuala Lumpur International Conference on Biomedical Engineering 2011. Berlin, Heidelberg: Springer; 2011:270-274. https://doi.org/10.1007/978-3-642-21729-6_71
65. Sanchez I, Agaian S. Computer aided diagnosis of lesions extracted from large skin surfaces. In: 2012 IEEE International Conference on Systems, Man, and Cybernetics (SMC). IEEE; 2012:2879-2884. https://doi.org/10.1109/ICSMC.2012.6378186
66. Tabatabaie K, Esteki A. Independent component analysis as an effective tool for automated diagnosis of melanoma. In: 2008 Cairo International Biomedical Engineering Conference. IEEE; 2008:1-4. https://doi.org/10.1109/CIBEC.2008.4786081
67. Jafari MH, Samavi S, Soroushmehr S, Mohaghegh H, Karimi N, Najarian K. Set of descriptors for skin cancer diagnosis using non-dermoscopic color images. In: 2016 IEEE International Conference on Image Processing (ICIP). IEEE; 2016:2638-2642. https://doi.org/10.1109/ICIP.2016.7532837
68. Przystalski K. Decision support system for skin cancer diagnosis. Oper Res. 2010;406-413.
69. Cavalcanti PG, Scharcanski J. Automated prescreening of pigmented skin lesions using standard cameras. Comput Med Imaging Graph. 2011;35(6):481-491.
70. Noroozi N, Zakerolhosseini A. Computer assisted diagnosis of basal cell carcinoma using Z-transform features. J Vis Commun Image Represent. 2016;40:128-148.
71. Noroozi N, Zakerolhosseini A. Differential diagnosis of squamous cell carcinoma in situ using skin histopathological images. Comput Biol Med. 2016;70:23-39.
72. Masood A, Al-Jumaily A. Semi-advised learning model for skin cancer diagnosis based on histopathological images. In: 2016 38th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC). IEEE; 2016:631-634. https://doi.org/10.1109/EMBC.2016.7590781
73. Truong B, Tuan HD, Wallace VP, Fitzgerald AJ, Nguyen HT.
The potential of the double debye parameters to discriminate be‐ tween basal cell carcinoma and normal skin. IEEE Trans Terahertz Sci Technol. 2015;5(6):990–998. 74. Kia S, Setayeshi S, Shamsaei M, Kia M. Computer‐aided diag‐ nosis (CAD) of the skin disease based on an intelligent classifi‐ cation of sonogram using neural network. Neural Comput Appl. 2013;22(6):1049–1062. 75. Ding Y, John NW, Smith L, Sun J, Smith M. Combination of 3D skin surface texture features and 2D ABCD features for improved mela‐ noma diagnosis. Med Biol Eng Comput. 2015;53(10):961–974. 76. Teixeira PM. Sobre o significado da significância estatística. Acta Med Port. 2018;31(5):238. How to cite this article: Magalhaes C, Mendes J, Vardasca R. The role of AI classifiers in skin cancer images. Skin Res Technol. 2019;25:750–757. https ://doi.org/10.1111/srt.12713 https://doi.org/10.1109/EMBC.2016.7590959 https://doi.org/10.1007/978-3-319-19387-8_341 https://doi.org/10.1007/978-3-319-19387-8_341 https://doi.org/10.1007/978-3-642-21729-6_71 https://doi.org/10.1109/ICSMC.2012.6378186 https://doi.org/10.1109/CIBEC.2008.4786081 https://doi.org/10.1109/ICIP.2016.7532837 https://doi.org/10.1109/EMBC.2016.7590781 https://doi.org/10.1109/EMBC.2016.7590781 https://doi.org/10.1111/srt.12713 work_qhkigxqbozhtxftqr47m2jlime ---- ORIGINAL RESEARCH Novel Treatment of Onychomycosis using Over-the-Counter Mentholated Ointment: A Clinical Case Series Richard Derby MD, Patrick Rohal MD, Constance Jackson MD, Anthony Beutler MD, and Cara Olsen PhD, MPH Background: Current medication treatments for onychomycosis have less than full cure-rate efficacy and have the potential for adverse side effects. Vicks VapoRub (The Proctor & Gamble Company, Cincinnati, OH) has been advocated in the lay literature as an effective treatment for onychomycosis. This pilot study tested Vicks VapoRub as a safe, cost-effective alternative for treating toenail onychomycosis. 
Methods: Eighteen participants were recruited to use Vicks VapoRub as treatment for onychomycosis. Participants were followed at intervals of 4, 8, 12, 24, 36, and 48 weeks; digital photographs were obtained during initial and follow-up visits. Primary outcome measures were mycological cure at 48 weeks and clinical cure through subjective assessment of appearance and quantifiable change in the area of affected nail by digital photography analysis. Patient satisfaction was a secondary outcome, measured using a single-item questionnaire scored by a 5-point Likert scale. Results: Fifteen of the 18 participants (83%) showed a positive treatment effect; 5 (27.8%) had a mycological and clinical cure at 48 weeks; 10 (55.6%) had partial clearance, and 3 (16.7%) showed no change. All 18 participants rated their satisfaction with the nail appearance at the end of the study as “satisfied” (n = 9) or “very satisfied” (n = 9). Conclusions: Vicks VapoRub seems to have a positive clinical effect in the treatment of onychomycosis. (J Am Board Fam Med 2011;24:69–74.) Keywords: Mentholated Ointment, Onychomycosis, Treatment

Toenail onychomycosis is a common diagnosis for primary care physicians. The prevalence of onychomycosis in the North American adult population may range from 2% to 18%, with prevalence increasing to 20% and 30% for those older than 60 years and 70 years, respectively.1–5 Onychomycosis is commonly associated with tinea pedis.
Significant physical and psychological effects, such as pain and negative self-image, may occur in patients with onychomycosis.6 Dermatophytes Trichophyton rubrum and Trichophyton mentagrophytes are the predominant pathogens in onychomycosis; nondermatophytes (usually Candida) account for a smaller percentage (10% to 20%) of toenail onychomycosis.1,7 Presentation of infection may occur in various patterns: fungal invasion of distal or lateral margins of the nail (distolateral subungual onychomycosis); direct effect from above or on top of the nail with a powdery, white, patchy discoloration (superficial white onychomycosis); or infection beginning from the proximal location beneath the nail bed (proximal subungual onychomycosis).7

This article was externally peer reviewed. Submitted 26 May 2010; revised 20 August 2010; accepted 30 August 2010. From the US Air Force 375th Medical Group, Family Medicine Residency Program, Belleville, IL (RD); the US Air Force 779th Medical Group, Department of Family Medicine, Malcolm Grow Medical Center, Joint Base Andrews, MD (PR); the US Air Force 28th Medical Group, Department of Family Medicine, Ellsworth AFB, SD (CJ); and the Departments of Family Medicine (AB) and Preventive Medicine and Biometrics (CO), Uniformed Services University, Bethesda, MD. Funding: Funding was provided by the TRUE Research Foundation, San Antonio, TX. Conflict of interest: none declared. Disclaimer: The opinions and assertions contained herein are the private views of the authors and are not to be construed as official or as reflecting the views of the Uniformed Services University of the Health Sciences, the US Air Force, or the US Department of Defense. Corresponding author: Lt. Col. Richard Derby, 375th Medical Group, Family Medicine Residency Program, 180 S. Third Street, Suite 400, Belleville, IL 62220 (E-mail: Richard.derby-02@scott.af.mil). doi: 10.3122/jabfm.2011.01.100124. J Am Board Fam Med: first published as 10.3122/jabfm.2011.01.100124 on 5 January 2011. Downloaded from http://www.jabfm.org/ on 5 April 2021 by guest. Protected by copyright.

Current treatment agents for onychomycosis include both systemic and topical medications. A meta-analysis of systemic therapies showed mycological cure rates of 76% with the use of terbinafine, 63% with the use of itraconazole pulse dosing, 61% with the use of griseofulvin, and 48% with the use of fluconazole.8 Downsides to oral therapy include the potential for adverse side effects, most notably hepatotoxicity, and the significant cost of the medication course, which is typically of 3 months’ duration. Ciclopirox 8% is a topical lacquer solution that has been approved by the US Food and Drug Administration for treatment of onychomycosis, with reported mycological cure rates of 34% in meta-analysis studies of North American patients.9 Cure rates of ciclopirox 8% and other topical therapies that have not been approved by the US Food and Drug Administration (eg, amorolfine 5% and tioconazole 28%) are lower than those observed with systemic treatments, and the course of topical treatments ranges from 6 to 12 months.7 Vicks VapoRub (The Proctor & Gamble Company, Cincinnati, OH) has been popularized by lay medical Web sites as a home cure for onychomycosis.10 No published trials examining the effect of this compound on onychomycosis have been accomplished. However, the active and inactive ingredients in Vicks VapoRub (thymol, menthol, camphor, and oil of Eucalyptus) have shown efficacy against dermatophytes in vitro.11–14 The purpose of this pilot study was to test the efficacy of Vicks VapoRub as a safe, cost-effective alternative for treating toenail onychomycosis in an outpatient clinic setting.
Methods

The study protocol was approved by the institutional review board of the Malcolm Grow US Air Force Medical Center. Participants were recruited from an outpatient family medicine clinic that serves both active duty and civilian (dependent and retiree) populations. Information posters were placed in the clinic lobbies to advertise the study. Patients used contact details on the posters to arrange an appointment with study investigators. During the initial appointment, the study was explained and informed consent for participation was obtained. Demographic data (age, sex, military status) were obtained along with historic data (duration of dystrophic nail, prior treatment for onychomycosis, chronic medical diseases, medication use, and allergy history). Inclusion criteria were men and women older than 18 years of age with clinical onychomycosis that was evident on at least one great toenail. Exclusion criteria included any history of allergic sensitivity to Vicks VapoRub or its active ingredients (thymol, camphor, menthol, or oil of Eucalyptus); any use of oral antidermatophyte medication within the last year; any deformity of the affected nail that would preclude sampling for potassium hydroxide (KOH) and culture or prevent adequate photographic assessment of the nail; and a negative culture of fungal infection from the sampling taken during the initial visit. After consent was obtained, a digital photograph of the affected nail was taken, and then a nail wedge/clipping was collected for KOH microscopy and culture. The participant was then supplied with the study treatment (Vicks VapoRub) and instructed to apply a small amount of Vicks VapoRub with a cotton swab or finger to the affected nail at least once daily. If the culture of the nail sample was negative for fungal infection, volunteers were contacted and removed from the study.
Volunteers with positive cultures were contacted for follow-up assessments at 4, 8, 12, 24, 36, and 48 weeks. Repeat digital photographs, assessment for adverse reactions, treatment effect, patterns of VapoRub use, and the patient’s perceived tolerability of treatment were performed/assessed during each visit. The primary outcome measures for the study were mycological cure at 48 weeks, defined as negative KOH and culture of nail sample, and clinical cure (clearance of dystrophic nail). Clearance of dystrophic nail was assessed by gross appearance at the end of the study period as “complete,” “partial,” or “no change.” Clearance was quantified through serial digital photography of the affected nail. Photographic editing software (Photoshop CS3, Adobe Systems, Inc., San Jose, CA) was used to define the nail edges and the borders of the affected nail region so areas (in pixel units) of total nail and affected nail could be calculated. Using these areas, the ratio of affected nail area to total nail area was calculated for each photograph taken during the course of the study. A secondary outcome measured was patient satisfaction with the appearance of the affected nail at the end of the study period; this was assessed using a single-item questionnaire scored on a 5-point Likert scale (1 = very satisfied, 2 = satisfied, 3 = neither satisfied nor dissatisfied, 4 = dissatisfied, and 5 = very dissatisfied).

Statistics

Descriptive statistics were used to report outcome data.
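The area-ratio measure described above reduces to counting pixels in two regions. A minimal sketch of that arithmetic (the boolean masks stand in for the nail outlines drawn in Photoshop; the toy image and all names are invented for illustration):

```python
import numpy as np

def affected_ratio(nail_mask: np.ndarray, affected_mask: np.ndarray) -> float:
    """Ratio of affected nail area to total nail area, in pixel units."""
    total = int(nail_mask.sum())
    if total == 0:
        raise ValueError("empty nail mask")
    # Only count affected pixels that lie within the nail outline.
    return int((nail_mask & affected_mask).sum()) / total

# Toy 10x10 photograph: the nail occupies a 6x6 block and a 3x6
# dystrophic patch covers its upper half.
nail = np.zeros((10, 10), dtype=bool)
nail[2:8, 2:8] = True
dystrophic = np.zeros((10, 10), dtype=bool)
dystrophic[2:5, 2:8] = True
print(affected_ratio(nail, dystrophic))  # 0.5
```

Tracking this single ratio per visit is what allows the serial photographs to be compared quantitatively despite differences in framing, as long as the whole nail is visible in each shot.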
Paired t test and Fisher’s exact test were used to analyze significance of treatment effect from the initial to 48-week period and between infecting pathogen subtypes.

Results

Forty participants were recruited to the study over 10 months. Of these, 20 were removed because of a negative fungal culture result on the initial nail sampling. Two participants removed themselves from the study (1 at 12 weeks and the other at 24 weeks), citing an unwillingness to continue follow-up examinations/photographs. Thus, 18 of the 20 participants with culture-proven onychomycosis completed the 48-week study period. Five of the 18 participants who completed the study were seen at every follow-up period and had photographs taken at 4, 8, 12, 24, 36, and 48 weeks. The remaining 13 participants attended the majority of follow-up appointments (average missed appointments = 1.8). Most missed appointments (16 of 23) were in the 4- or 8-week follow-up period; 6 were in the 12-week period; and 1 was in the 36-week period. At each follow-up visit, queries about compliance and use patterns (1–2 times per week, 3–5 times per week, or daily) showed that the majority of participants (15 of 18) reported daily application of the Vicks VapoRub and the remaining participants (3 of 18) reported use 3 to 5 times per week. The outcome data are presented in Table 1. Overall, 15 of the 18 participants (83%) had a positive response to the Vicks VapoRub treatment for onychomycosis. Five participants (27.8%) had a mycological cure (negative nail culture) at 48 weeks; 4 of these 5 showed complete clinical and mycological cures (22.2%). However, one still had evidence of dystrophic nail. Ten participants (55.6%) showed evidence of partial clinical cure (decreasing area of dystrophic nail); 9 of these had positive nail cultures at 48 weeks and one had a negative nail culture.
The remaining 3 participants (16.7%) showed no significant clinical improvement through 48 weeks and had positive cultures at 48 weeks. Interestingly, all 18 participants rated their satisfaction with the appearance of the affected nail after the study course as either “very satisfied” (n = 9) or “satisfied” (n = 9). Photographs of the observed changes in the toenails of three participants are presented in Figure 1. The average ratio of affected to total nail area decreased from 63% at initial evaluation to 41% at 48 weeks (P < .001; paired t test). Outcomes were better for the 5 participants with positive cultures for either Candida parapsilosis or T. mentagrophytes. All 5 of these participants showed complete clinical cure compared with none of the 13 participants with other organism growth (P < .001; Fisher’s exact test). Four of these 5 participants also had negative cultures by the end of the study compared with one of the 13 remaining participants (P = .008; Fisher’s exact test). All 5 participants with positive cultures for either C. parapsilosis or T. mentagrophytes were also highly satisfied with treatment compared with only 4 of the 13 remaining participants (P = .029; Fisher’s exact test). Although these findings indicate a strong association between the organism and the success of treatment, they should be considered preliminary because they do not correspond to any preplanned hypothesis.

Discussion

We demonstrated in this pilot study that Vicks VapoRub provides a positive effect in the treatment of onychomycosis. This is the first clinical study in the literature to describe this finding.
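The reported statistics can be checked against the per-participant data in Table 1. A sketch with scipy (an illustrative re-analysis, not part of the original study; the 2x2 Fisher tables follow the group counts quoted in the text):

```python
from scipy import stats

# Percentage of nail affected at the initial visit and at 48 weeks,
# participants 1-18 (Table 1).
initial = [61, 98, 72, 35, 44, 70, 59, 39, 37, 54, 100, 20, 100, 89, 89, 63, 84, 16]
week48  = [45, 54, 51, 12, 27, 47, 47, 48,  4, 11,  95,  5,  80, 70, 79, 58,  7,  5]

t, p = stats.ttest_rel(initial, week48)
print(round(sum(initial) / 18), round(sum(week48) / 18), p < 0.001)  # 63 41 True

# Complete clinical cure: 5/5 in the C. parapsilosis / T. mentagrophytes
# group vs 0/13 in the remaining participants.
_, p_cure = stats.fisher_exact([[5, 0], [0, 13]])
# Negative 48-week culture: 4/5 vs 1/13.
_, p_myco = stats.fisher_exact([[4, 1], [1, 12]])
# "Very satisfied" rating: 5/5 vs 4/13.
_, p_sat = stats.fisher_exact([[5, 0], [4, 9]])
print(p_cure < 0.001, round(p_myco, 3), round(p_sat, 3))  # True 0.008 0.029
```

The recomputed means (63% initial, 41% at 48 weeks) and all four p values agree with those reported in the Results.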
To date, treatment for onychomycosis is accomplished primarily with oral (cure rates, 48% to 76%) and/or topical (34% cure rate with ciclopirox 8%) medications.8,9 The cost for a complete course of oral medication treatment for onychomycosis ranges from $780 to $900 (not including associated costs for laboratory monitoring); a course of treatment with ciclopirox 8% is approximately $200.15 A 1-year course of Vicks VapoRub, by comparison, can be expected to cost approximately $24 to $36 (the cost of 2 to 3 6-ounce jars). Key weaknesses of this study include the small sample size, the lack of a control group, and variability in the pathogens, as well as the initial degree of nail involvement between participants. These weaknesses underscore the fact that this study does not prove or disprove the clinical utility of this unorthodox treatment of onychomycosis. Nevertheless, the strength of this study is found in the simplicity of its design; we provided a relatively inexpensive and innocuous therapy with straightforward instructions for use and then measured for effect through objective culture and clinical appearance. Although inherently subjective, we believe the assessment of clinical appearance to be an important outcome measure.

Table 1. Participant Demographics, Culture Organism, Photographic Assessment of Percentage of Nail Affected Initially and at 12-Week Intervals, Final Clinical and Mycological Cure, and Satisfaction

Participant | Sex | Age (yr) | Organism | Initial Affected (%) | 12 Weeks (%) | 24 Weeks (%) | 36 Weeks (%) | 48 Weeks (%) | Clinical Cure | Mycological Cure | Satisfaction*
1 | Male | 68 | Fungal elements | 61 | 58 | 64 | – | 45 | Partial | No | 2
2 | Male | 76 | Cryptococcus lamentii | 98 | 82 | 46 | 48 | 54 | Partial | No | 1
3 | Male | 30 | Trichophyton rubrum | 72 | 57 | 92 | 47 | 51 | Partial | No | 2
4 | Male | 37 | T. rubrum | 35 | – | 24 | 18 | 12 | Partial | Yes | 2
5 | Female | 52 | Candida parapsilosis | 44 | 22 | 22 | 25 | 27 | Partial | No | 1
6 | Male | 49 | T. rubrum | 70 | 50 | 60 | 44 | 47 | Partial | No | 2
7 | Male | 47 | Fungal elements | 59 | – | 45 | 64 | 47 | Partial | No | 2
8 | Female | 31 | T. rubrum | 39 | – | 37 | 49 | 48 | No change | No | 2
9 | Female | 37 | C. parapsilosis | 37 | – | 17 | 11 | 4 | Complete | Yes | 1
10 | Female | 52 | C. parapsilosis | 54 | 50 | 37 | 14 | 11 | Complete | Yes | 1
11 | Female | 85 | Penicillium sp. | 100 | – | 91 | 93 | 95 | No change | No | 2
12 | Female | 45 | Trichophyton mentagrophytes | 20 | – | 15 | 9 | 5 | Complete | No | 1
13 | Female | 69 | Fusarium sp. | 100 | 87 | 92 | 91 | 80 | Partial | No | 2
14 | Male | 54 | T. rubrum | 89 | 90 | 72 | 79 | 70 | Partial | No | 1
15 | Male | 40 | T. rubrum | 89 | 84 | 93 | 83 | 79 | Partial | No | 1
16 | Male | 57 | Candida albicans | 63 | 58 | 53 | 51 | 58 | No change | No | 2
17 | Male | 65 | T. mentagrophytes | 84 | 71 | 41 | 20 | 7 | Complete | Yes | 1
18 | Female | 32 | T. mentagrophytes | 16 | 11 | 7 | 5 | 5 | Complete | Yes | 1

*1 = very satisfied; 2 = satisfied.
Nail fungal culture has good specificity and positive predictive value (≥94%), but the sensitivity is poor (30% to 50%).7 Photoshop CS3 software allowed for measurement of the area ratio of clinically affected (dystrophic) nail to the total nail. Although an unvalidated method of measuring the degree of infection in the nail, it was useful in providing a more objective means of estimating effect and change in clinical appearance. The participants’ positive satisfaction ratings with the treatment irrespective of the final clinical outcome show that there is potential benefit in providing this simple, innocuous treatment even with unproven or partial efficacy. Future studies may benefit from grouping by the pathogens isolated. In the current study we enrolled 3 participants with cultures positive for C. parapsilosis and 3 with cultures positive for T. mentagrophytes. These 6 participants accounted for all 5 of the complete clinical cures (one participant with the C. parapsilosis organism had a partial cure). In contrast, of the 6 participants with cultures positive for T. rubrum, 5 had a partial cure and 1 had no change. Focusing future treatment studies on specific organisms may provide more details about treatment efficacy. Additional areas for future studies may include combining Vicks VapoRub with other medical and physical modalities (filing/clipping) or comparing efficacy based on the degree of nail involvement (mild vs severe). Figure 1. Serial photographic assessment of clinical onychomycosis in selected participants.
Figure 1 panels: Participant 2 – initial, 24 weeks, 48 weeks (no mycological cure, partial clinical cure, “very satisfied”); Participant 17 – initial, 24 weeks, 48 weeks (mycological and clinical cure, “very satisfied”); Participant 9 – initial, 24 weeks, 48 weeks (mycological and clinical cure, “very satisfied”).

Conclusion

In this pilot study, Vicks VapoRub seemed to have some clinical effect in treating onychomycosis, particularly when C. parapsilosis and T. mentagrophytes were the infecting organisms. Regardless of clinical effect, participants were highly satisfied with the simple, innocuous treatment strategy of once-daily application of Vicks VapoRub to the affected nail. Vicks VapoRub may represent a significant addition to the clinical options for treating onychomycosis, not only because of its clinical effect but also because of the minimization of side effects and its lower cost compared with established therapies. Such a treatment might be a viable first-line option for a condition with limited morbidity apart from cosmetic effect.
Future studies are required for proof of efficacy and better delineation of treatment effects.

References

1. Gupta AK, Jain HC, Lynde CW, et al. Prevalence and epidemiology of onychomycosis in patients visiting physicians’ offices: a multicenter Canadian survey of 15,000 patients. J Am Acad Dermatol 2000;43:244–8.
2. Erbagci Z, Tuncel A, Zer Y, Balci I. A prospective epidemiologic survey on the prevalence of onychomycosis and dermatophytosis in male boarding school residents. Mycopathologia 2005;159:347.
3. Ghannoum MA, Hajjeh RA, Scher R, et al. A large-scale North American study of fungal isolates from nails: the frequency of onychomycosis, fungal distribution, and antifungal susceptibility patterns. J Am Acad Dermatol 2000;43:641–8.
4. Gupta AK, Jain HC, Lynde CW, Summerbell RC. Prevalence of unsuspected onychomycosis in patients visiting dermatologists’ offices in Ontario, Canada: a multicenter survey of 2001 patients. Int J Dermatol 1997;36:783–7.
5. Elewski BE, Charif MA. Prevalence of onychomycosis in patients attending a dermatology clinic in northeastern Ohio for other conditions. Arch Dermatol 1997;133:1172–3.
6. Drake LA, Patrick DL, Fleckman P, et al. The impact of onychomycosis on quality of life: development of an international onychomycosis questionnaire to measure patient quality of life. J Am Acad Dermatol 1999;41:189–96.
7. de Berker D. Fungal nail disease. N Engl J Med 2009;360:2108–16.
8. Gupta AK, Ryder JE, Johnson AM. Cumulative meta-analysis of systemic antifungal agents for the treatment of onychomycosis. Br J Dermatol 2004;150:537–44.
9. Gupta AK, Joseph WS. Ciclopirox 8% nail lacquer in the treatment of onychomycosis of the toenails in the United States. J Am Podiatr Med Assoc 2000;90:495–501.
10. Vicks VapoRub might help fight toenail fungus. Consum Rep 2006;71:49.
11. Pinto E, Pina-Vaz C, Salgueiro L, et al.
Antifungal activity of the essential oil of Thymus pulegioides on Candida, Aspergillus and dermatophyte species. J Med Microbiol 2006;55(Pt 10):1367–73.
12. Salgueiro LR, Pinto E, Goncalves MG, et al. Chemical composition and antifungal activity of the essential oil of Thymbra capitata. Planta Med 2004;70:572–5.
13. Pina-Vaz C, Goncalves Rodrigues A, Pinto E, et al. Antifungal activity of Thymus oils, and their essential compounds. J Eur Acad Dermatol Venereol 2004;18:73–8.
14. Ramsewak RS, Nair MG, Stommel M, Selanders L. In vitro antagonistic activity of monoterpenes and their mixtures against “toe nail fungus” pathogens. Phytother Res 2003;17:376–9.
15. Lexi-Comp Reader version 2.4060407. Drug pricing information. Hudson, OH: Lexi-Comp, Inc.; 2006.

work_qi5ii5l5gzhbrfije3grwjf37q ---- CLINICS 2009;64(9):829-30 EDITORIAL Hospital das Clínicas, Faculdade de Medicina da Universidade de São Paulo – São Paulo/SP, Brazil. mrsilva36@hcnet.usp.br IN THE SEPTEMBER 2009 ISSUE OF CLINICS Mauricio Rocha-e-Silva, Editor doi: 10.1590/S1807-59322009000900001

In this September 2009 issue of Clinics we highlight a study by Meneghini et al. on the effect of memantine, an N-methyl-D-aspartate (NMDA) glutamate receptor antagonist used to treat Alzheimer’s disease, on cardiomyocytes. They found that the reduction of nuclear size of rats exposed to a cold stress is effectively prevented by simultaneous treatment with memantine. We publish nine papers on Clinical Science and three other reports on non-clinical research. Bueno et al.
evaluated the reasons for resubmitting research projects to the Research Ethics Committee of a University Hospital in São Paulo, Brazil, and found that the main reasons for returning the projects to the researchers were the use of inadequate language and/or difficulty understanding the informed consent form, lack of information about the protocol in the informed consent form, as well as doubts regarding methodological and statistical issues of the protocol. Other reasons involved lack of accuracy or incomplete documentation, need of clarification, approval for participation of external entities, and lack of information on financial support. Bittencourt et al. evaluated HFE and non-HFE gene mutations in Brazilian patients with hereditary hemochromatosis and found that one-third of Brazilian subjects with the classical phenotype of HH do not carry HFE or other mutations that are currently associated with the disease in Caucasians. This observation suggests a role for other yet unknown mutations in the aforementioned genes or in other genes involved in iron homeostasis in the pathogenesis of hereditary hemochromatosis in Brazil. Kamulegeya et al. investigated the epidemiological characteristics of maxillofacial fractures and associated fractures in 132 patients seen in the Oral Surgery Unit of Mulago Hospital, Kampala, Uganda through a six-month prospective study, which included socio-demographic factors, type and etiology of injury, additional fractures, and post-surgery complications, and recommend that anticipated changes in maxillofacial trauma trends necessitate regular epidemiologic studies of facial fractures to allow for development and implementation of timely novel preventive measures. Brandão et al. followed 53 patients with medullary thyroid carcinoma, a neoplasia of intermediate prognosis and differentiation, which does not always respond predictably to known treatments.
They found that clinical and pathological aspects of surgically treated patients are predictors of disease progression. Specifically, even treated cervical lymph node metastases are significantly correlated with disease progression. Carvalho et al. evaluated crack cocaine use practices, risk behaviors associated with HIV infection among drug users, and their involvement with violence in 350 drug users attending drug abuse treatment clinics in São Paulo, Brazil. A high HIV prevalence and associated risky sexual behaviors were observed among crack cocaine users. The society and the authorities that deal with violence related to crack users and drug trafficking should be aware of these problems. Campos et al. endeavored to identify factors associated with increased levels of self-reported quality of life among HIV-infected patients after four months of antiretroviral therapy. Patients were recruited at two public health referral centers for AIDS in the city of Belo Horizonte, Brazil, for a prospective adherence study. The authors highlight the importance of modifiable factors such as psychiatric symptoms and treatment-related variables that may contribute to a better quality of life among patients initiating treatment. Considering that poor quality of life is related to non-adherence to antiretroviral therapy, careful clinical monitoring of these factors may contribute to ensuring the long-term effectiveness of antiretroviral regimens. Meyer et al. evaluated, by means of the Inflammatory Bowel Disease Questionnaire, the quality of life of 36 ulcerative colitis patients submitted to proctocolectomy with sphincter preservation using J-pouch reconstruction over ten years ago. They concluded that the possibility of sphincter preservation should always be taken into account, since patients remain clinically stable and have a high quality of life even after long periods. Miot et al.
estimated oculometric parameters of 15 cases of Graves’ ophthalmopathy in comparison to 12 healthy eyes using digital photography and digital image analysis. This comparative analysis suggests that eye proptosis is related to an asymmetric increase in lateral oculometric measures, and that standardized digital photographs can be used in clinical practice to objectively estimate oculometric parameters of Graves’ ophthalmopathy patients. Vaz et al. sought to identify the participation of the coagulation system in the differential diagnosis of pleural effusions through the laboratory profile of coagulation and fibrinolysis in 54 pleural fluids (15 transudates and 39 exudates). The coagulation system was found to play an important role in the development of pleural diseases. Coagulation tests show differences between transudates and exudates but not among exudate subgroups. Understanding the pathophysiological mechanisms of pleural disorders may help to define new diagnostic and therapeutic approaches. Pai et al. studied 98 pelvic halves of embalmed cadavers, in which the origin and course of the obturator artery were traced and noted. The data obtained in this study show that it is more common to find an abnormal obturator artery than was reported previously, and this observation has implications for pelvic surgeons and is of academic interest to anatomists. Surgeons dealing with direct, indirect, femoral, or obturator hernias need to be aware of these variations and their close proximity to the femoral ring. Cardoso et al. identified the scientific production of tenured faculty from the Universidade de São Paulo, Faculdade de Medicina performed from 2001 to 2006, and found that it is possible to analyze this by the number of papers published by full professors, taking into account not only their academic position and influence, but also the fact that publication is an opportunity to stimulate joint projects with other members of the same institution.
Zanoni et al. investigated mesenteric microcirculatory dysfunctions, the bacterial translocation phenomenon, and hemodynamic/metabolic disturbances in a rat model of intestinal obstruction and ischemia. They found that intestinal obstruction and ischemia in rats is a relevant model for the in vivo study of mesenteric microcirculatory dysfunction and the occurrence of bacterial translocation. This model parallels the events implicated in multiple organ dysfunction (MOD) and death. We also publish 3 case reports.

work_qmfr2vpmjrh6da57h42olu4ggy ---- Biological Journal of the Linnean Society, 2007, 90, 211–237. With 11 figures. © 2007 The Linnean Society of London. Original Article

Using digital photography to study animal coloration

MARTIN STEVENS1*, C. ALEJANDRO PÁRRAGA2, INNES C. CUTHILL1, JULIAN C. PARTRIDGE1 and TOM S. TROSCIANKO2. 1 School of Biological Sciences, University of Bristol, Woodland Road, Bristol BS8 1UG, UK; 2 Department of Experimental Psychology, University of Bristol, Woodland Road, Bristol BS8 1TN, UK. *Corresponding author. Current address: Department of Zoology, University of Cambridge, Downing Street, Cambridge CB2 3EJ, UK. E-mail: ms726@cam.ac.uk

Received 19 May 2005; accepted for publication 1 March 2006

In understanding how visual signals function, quantifying the components of those patterns is vital. With the ever-increasing power and availability of digital photography, many studies are utilizing this technique to study the content of animal colour signals.
Digital photography has many advantages over other techniques, such as spectrometry, for measuring chromatic information, particularly in terms of the speed of data acquisition and its relatively low cost. Not only do digital photographs provide a method of quantifying the chromatic and achromatic content of spatially complex markings, but they can also be incorporated into powerful models of animal vision. Unfortunately, many studies utilizing digital photography appear to be unaware of several crucial issues involved in the acquisition of images, notably the nonlinearity of many cameras' responses to light intensity, and biases in a camera's processing of the images towards particular wavebands. In the present study, we set out step-by-step guidelines for the use of digital photography to obtain accurate data, either independent of any particular visual system (such as reflection values), or for particular models of nonhuman visual processing (such as that of a passerine bird). These guidelines include how to: (1) linearize the camera's response to changes in light intensity; (2) equalize the different colour channels to obtain reflectance information; and (3) produce a mapping from camera colour space to that of another colour space (such as photon catches for the cone types of a specific animal species). © 2007 The Linnean Society of London, Biological Journal of the Linnean Society, 2007, 90, 211–237.

ADDITIONAL KEYWORDS: camera calibration – colour vision – colour measurement – digital cameras – imaging – radiance – reflection – signals.

INTRODUCTION

Investigations into the adaptive functions of animal coloration are widespread in behavioural and evolutionary biology. Probably because humans are 'visual animals' themselves, studies of colour dominate functional and evolutionary investigations of camouflage, aposematism, mimicry, and both sexual and social signalling.
However, with advances in our knowledge of how colour vision functions and varies across species, it becomes increasingly important to find means of quantifying the spatial and chromatic properties of visual signals as they are perceived by other animals or, at the very least, in a manner independent of human perception. This is nontrivial because colour is not a physical property, but rather a function of the nervous system of the animal perceiving the object (Newton, 1718: 'For the rays, to speak properly, are not coloured'; Endler, 1990; Bennett, Cuthill & Norris, 1994). One way to produce an objective measure of the properties of a colour signal is to measure surface reflectance using spectrophotometry, which provides precise information on the intensity distribution of wavelengths reflected (Endler, 1990; Zuk & Decruyenaere, 1994; Cuthill et al., 1999; Gerald et al., 2001; Endler & Mielke, 2005). Reflectance data can also be combined with information on the illuminant and the photoreceptor sensitivities of the receiver (and, if available, neural processing) to model the colours perceived by nonhuman animals (Kelber, Vorobyev & Osorio, 2003; Endler & Mielke, 2005). However, conventional spectrometers provide only point samples, and to characterize adequately the colour of a heterogeneous object requires multiple samples across an appropriately designed sampling array, such as multiple transects or prespecified regions (Cuthill et al., 1999; Endler & Mielke, 2005). This not only has a cost in terms of sampling time, but also the information about spatial relationships between colours then needs to be reconstructed from the geometry of the sampling array (Endler, 1984), and the spatial resolution is generally crude.
Spectrometry also usually requires a static subject, either because of the need to sample an array or because the measuring probe often needs to be close to or touching the colour patch, a particular problem in the field or with delicate museum specimens. Focusing optics can obviate the need for contact with the animal or plant and offer a degree of 'remote sensing' (Marshall et al., 2003; Sumner, Arrese & Partridge, 2005), but this approach is rare. An alternative to spectrometry is photography, which has a long history of use in studies of animal coloration (Thayer, 1896, 1909; Cott, 1940; Tinbergen, 1974; Pietrewicz & Kamil, 1979) but is becoming increasingly used because of the flexibility and apparent precision that digital imaging provides. Colour change in the common surgeonfish (Goda & Fujii, 1998), markings in a population of Mediterranean monk seals (Samaranch & Gonzalez, 2000), egg crypsis in blackbirds (Westmoreland & Kiltie, 1996), the role of ultraviolet (UV) reflective markings and sexual selection in guppies (Kodric-Brown & Johnson, 2002), and the functions of primate colour patterns (Gerald et al., 2001) comprise a few recent examples. Digital photography bears many advantages over spectrometry, particularly in the ability to utilize powerful and complex image processing algorithms to analyse entire spatial patterns, without the need to reconstruct topography from point samples. More obviously, photographing specimens is relatively quick, allowing rapid collection of large quantities of data, from unrestrained targets and with minimal equipment. Imaging programs can be used to obtain various forms of data, including colour patch size and distribution measures, diverse 'brightness' and colour metrics, or broadband reflection values (such as in the long-, medium-, and short wavebands). Video imaging can provide temporal information too.
Digital technology also has the potential for manipulating stimuli for use in experiments, with the most impressive examples being animations within video playback experiments (Künzler & Bakker, 1998; Rosenthal & Evans, 1998), although there are problems with these methods that need to be understood (D'Eath, 1998; Fleishman et al., 1998; Cuthill et al., 2000a; Fleishman & Endler, 2000). Digital photography is increasingly incorporated into many studies of animal coloration due to its perceived suitability for objectively quantifying colour and colour patterns. However, many studies appear to be unaware of the complex image processing algorithms incorporated into many digital cameras, and make a series of assumptions about the data acquired that are rarely met. The images recorded by a camera are dependent not only upon the characteristics of the object photographed, the ambient light, and its geometry, but also upon the characteristics of the camera (Barnard & Funt, 2002; Westland & Ripamonti, 2004). Therefore, the properties of colour images are device-dependent, and images of the same natural scene will vary when taken with different cameras because the spectral sensitivity of the sensors and firmware/software in different cameras varies (Hong, Lou & Rhodes, 2001; Yin & Cooperstock, 2004). Finally, the images are frequently modified in inappropriate ways (e.g. through 'lossy' image compression; for a glossary of some technical terms, see Appendix 1) and 'off-the-shelf' colour metrics applied without consideration of the assumptions behind them. At best, most current applications of digital photography to studies of animal coloration fail to utilize the full potential of the technology; more commonly, they yield data that are qualitative at best and uninterpretable at worst. The present study aims to provide an accessible guide to addressing these problems.
We assume the reader has two possible goals: (1) to reconstruct the reflectance spectrum of the object (maybe just in broad terms such as the relative amounts of long-, medium- and short-wave light; although we will also consider something more ambitious) or (2) to model the object's colour as perceived by a nonhuman animal. Because we are considering applications of the accessible and affordable technology of conventional digital colour cameras, we are primarily focused on the human-visible spectrum of c. 400–700 nm, but we also consider UV imaging and combining this information with that from a standard camera. Our examples come from an investigation of colour patterns on lepidopteran wings, and how these might be viewed by avian predators. This is a challenging problem (birds are potentially tetrachromatic and have a UV-sensitive cone type; Cuthill et al., 2000b), yet it is both tractable and informative, because much of the avian colour world overlaps with ours and birds are the focal organisms in many studies of animal coloration (whether their sexual signals, or the defensive coloration of their prey).

CONCEPTUAL BACKGROUND

The light coming from a point on an object, its radiance spectrum, is a continuous distribution of different intensities at different wavelengths. No animal eye, or camera, quantifies the entire radiance spectrum at a given point, but instead estimates the intensity of light in a (very) few broad wavebands. Humans and many other primates use just three samples, corresponding to the longwave (LW or 'red'), mediumwave (MW or 'green') and shortwave (SW or 'blue') cone types in the retina (Fig.
1A); bees and most other insects also use three samples, but in the UV, SW, and MW wavebands; birds and some reptiles, fish and butterflies use four samples (typically UV, SW, MW, and LW; Fig. 1B). A corollary of colour vision based on such few, broadband, spectral samples is that the colour appearance of an object can be matched, perfectly, by an appropriate mixture of narrow waveband lights ('primary colours') that differentially stimulate the photoreceptors. Three primary colours [e.g. red, green, and blue (RGB) in video display monitors] are required for colour matching by normally sighted humans. All that is required is that the mix of primary colours stimulates the photoreceptors in the same way as the radiance spectrum of the real object (without actually having to mimic the radiance spectrum per se). The additive mixing of three primaries is the basis of all video and cinematographic colour reproduction, and colour specification in terms of the amounts of these primaries, the so-called tristimulus values, lies at the base of most human colour science (Wyszecki & Stiles, 1982; Mollon, 1999; Westland & Ripamonti, 2004). That said, RGB values from a camera are not standardized tristimulus values and so, although they are easily obtained with packages such as Paintshop Pro (Corel Corporation; formerly Jasc Software) or Photoshop (Adobe Systems Inc.), simply knowing the RGB values for a point in a photograph is not sufficient to specify the colour of the corresponding point in the real object. An over-riding principle to consider when using digital cameras for scientific purposes is that most digital cameras are designed to produce images that look good, not to record reality. So, just as Kodachrome and Fujichrome produce differing colour tones in 'analogue' film-based cameras, each film type having its own advocates for preferred colour rendition, the same is true of digital cameras.
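The principle of additive colour matching described above amounts to solving a small linear system: find primary intensities whose combined cone stimulation equals that evoked by the object. A minimal numerical sketch follows; all sensitivities and responses are invented numbers for illustration, not measured values.

```python
import numpy as np

# M[i, j] = response of cone i (rows: LW, MW, SW) to one unit of
# primary j (columns: R, G, B display primaries). Invented numbers.
M = np.array([
    [0.80, 0.30, 0.05],   # LW cone
    [0.20, 0.75, 0.10],   # MW cone
    [0.02, 0.10, 0.90],   # SW cone
])

# Cone stimulation evoked by the real object (also invented).
q = np.array([0.55, 0.40, 0.25])

# Solve M @ w = q for the primary intensities w that match the object.
w = np.linalg.solve(M, q)

# The mixture stimulates the cones exactly as the object does, without
# reproducing the object's radiance spectrum itself.
assert np.allclose(M @ w, q)
```

A negative component in `w` would indicate a colour outside the gamut reproducible by these three primaries, which is precisely why the choice of primaries matters.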
The values of R, G and B that are output from a camera need not be linearly related to the light intensity in these three wavebands. In technical and high-specification cameras they are, and the sensors themselves (the Charge Coupled Devices; CCDs) generally have linear outputs. By contrast, most cameras designed for non-analytical use have nonlinear responses (Cardei, Funt & Barnard, 1999; Lauziére, Gingras & Ferrie, 1999; Cardei & Funt, 2000; Barnard & Funt, 2002; Martinez-Verdú, Pujol & Capilla, 2002; Westland & Ripamonti, 2004). This is a function of post-CCD processing to enhance image quality, given the likely cross-section of printers, monitors, and televisions that will be used to view the photographs (these devices themselves having diverse, designed-in, nonlinearities; Westland & Ripamonti, 2004). Most digital images will display well on most monitors because the two nonlinearities approximately cancel each other out. The first step in analysing digital images is therefore to linearize the RGB values. Even with RGB values that relate linearly to R, G, and B light intensity, there is no single standard for what constitutes 'red', 'green', and 'blue' wavebands; nor need there be, because different triplets of primary colours can (and, historically, have been) used in experiments to determine which ratios of primaries match a given human-perceptible colour (Mollon, 1999; Westland & Ripamonti, 2004). The spectral sensitivities of the sensors in a digital camera need not, and usually do not, match those of human visual pigments, as was the case with the Nikon 5700 Coolpix camera primarily used in this study (Fig. 1C). The RGB values in images from a given camera are specific to that camera. Indeed, the values are not necessarily even specific to a particular make and model, but rather specific to an individual camera, because of inherent variability in CCDs at the manufacturing stage (Fig. 2).
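The linearization step can be sketched as follows, assuming (as is often approximately true) that the camera's nonlinearity is well described by a power law. The calibration numbers below are hypothetical; in practice they come from photographing grey standards of known reflectance under fixed camera settings.

```python
import math

# Hypothetical calibration data: grey standards of known reflectance and
# the pixel values (0-255) a camera recorded for them in one channel.
reflectance = [0.05, 0.10, 0.20, 0.40, 0.80]
pixel = [66, 89, 121, 164, 223]

# Assume pixel/255 = reflectance**(1/gamma), i.e. on log-log axes a line
# through the origin: log(pixel/255) = (1/gamma) * log(reflectance).
# Estimate the slope by least squares without an intercept.
xs = [math.log(r) for r in reflectance]
ys = [math.log(p / 255.0) for p in pixel]
slope = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)
gamma = 1.0 / slope

def linearize(p):
    """Map a raw 0-255 pixel value to a linear intensity in [0, 1]."""
    return (p / 255.0) ** gamma
```

With these invented data the fitted gamma comes out near the familiar display value of roughly 2.2; a real calibration would be repeated per channel and per camera setting.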
One can, however, map the camera RGB values to a camera-independent, human colour space (and, under some circumstances, that of another animal) given the appropriate mapping information. Therefore, the mapping, through mathematical transformation, of the camera-specific RGB values to camera-independent RGB (or other tristimulus representation) is the second crucial step in obtaining useful data from a digital image. Furthermore, and often as part of the transformation step, it will usually be desirable to 'remove' variation due to the illuminating light. The camera measures R, G, and B radiance, which is the product of the reflectance of the object and the radiance spectrum illuminating the object (often approximated by the irradiance spectrum of the illuminant). The situation is rather more complex underwater, where the medium itself alters the radiance spectrum (Lythgoe, 1979) by wavelength-dependent attenuation. However, an object does not change colour (much) when viewed under a blue sky, grey cloud, or in forest shade, even though the radiance spectra coming from it change considerably. This phenomenon of 'colour constancy', whereby the visual system is largely able to discount changes in the illuminant and recover an object's reflectance spectrum, is still not fully understood (Hurlbert, 1999), but equivalent steps must be taken with digital images if it is object properties that are of interest rather than the radiance itself. Many digital cameras allow approximations of colour constancy (white-point balancing) at the point of image acquisition; for example by selecting illuminant conditions such as skylight, cloudy, and tungsten. However, these settings are an approximation and, in practice, their effects

Figure 1. A, normalized absorptance (equal areas under curves) of human cones.
Absorbance (N) data from Dartnall, Bowmaker & Mollon (1983) converted to absorptance (P) by the equation P = 1 − 10^(−NLS), where L is the length of the cone (20 µm; from Hendrickson & Drucker, 1992) and S is the specific absorbance, 0.015 µm^−1. B, normalized absorptance (equal areas under curves) of starling cones to different wavelengths of light. From Hart, Partridge & Cuthill (1998). C, normalized spectral sensitivity (equal areas under curves) of the sensors in the Nikon 5700 Coolpix camera used in the present study. SW, shortwave; MW, mediumwave; LW, longwave; UV, ultraviolet.

need to be eliminated because the effect of the illuminant itself needs to be 'removed'. Removing the effect of the light source characteristics can thus be coupled to eliminating any biases inherent in the camera's image processing (such as an over-representation of some wavelengths/bands to modify the appearance of the photograph; Cardei et al., 1999; Finlayson & Tian, 1999; Lauziére et al., 1999; Martinez-Verdú et al., 2002). This is essential if accurate data representing the inherent spectral reflection characteristics of an animal's colour are to be obtained.
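One common way to remove the illuminant and channel biases together is to normalize against a grey standard of known reflectance photographed in the same scene. A minimal sketch, with hypothetical numbers and values assumed already linearized:

```python
# The grey standard reflects a known, flat 20% in all wavebands.
GREY_REFLECTANCE = 0.20

# Linearized camera responses (0-1) to the grey standard under the
# session's illuminant: channel differences here reflect the lamp's
# spectrum and the camera's channel biases, not the standard itself.
# All numbers are hypothetical.
standard = {"R": 0.26, "G": 0.22, "B": 0.15}

# Linearized responses to the colour patch of interest.
patch = {"R": 0.39, "G": 0.11, "B": 0.06}

# Estimated broadband reflectance of the patch in each waveband:
# scale each channel so the standard comes out at its known reflectance.
reflect = {ch: GREY_REFLECTANCE * patch[ch] / standard[ch] for ch in patch}
```

Because the standard and the patch share the same illuminant, the illuminant (and any multiplicative channel bias) cancels in the ratio, leaving an estimate of the object's own reflection properties.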
Many studies have used cameras to investigate animal colour patterns, but most fail to test their digital cameras to determine if all of the above assumptions are met and/or if the analysis yields reliable data (Frischknecht, 1993; Villafuerte & Negro, 1998; Wedekind et al., 1998; Gerald et al., 2001; Kodric-Brown & Johnson, 2002; Bortolotti, Fernie & Smits, 2003; Cooper & Hosey, 2003); for a rare exception, see Losey (2003). We approach these problems in the sequence that a scientist would have to address them if interested in applying digital photography to a research question about biological coloration. This study focuses on obtaining data corresponding to inherent animal coloration, such as reflection data, and on obtaining data relevant to a given receiver's visual system. Either of these data types may be more suitable depending upon the research question. Reflection data do not assume specific environmental conditions or a particular visual system viewing the object, and so data can be compared across different specimens easily, even when measured in different places. The lack of assumptions about the receiver's visual system, such as photoreceptor types, distributions, abundances, sensitivities, opponency mechanisms, and so on, means the data 'stand alone' and can be analysed as an inherent property of the animal or an object propagating the signal. This is useful if a researcher simply wishes to know if, for example, individual 'a' has more longwave reflection than individual 'b'. Removing illumination information coincides with evidence that many animals possess colour constancy. Conversely, simply taking reflection into account could be misleading if what one really wants to know is how a signal is viewed by a receiver.
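The receiver-specific alternative just described can be made concrete as a photon-catch calculation: the catch of receptor class i is the object's reflectance R(λ) times the illuminant I(λ) times the receptor sensitivity S_i(λ), summed over wavelength. The coarse 50-nm spectra below are invented purely for illustration.

```python
# Invented coarse spectra sampled every 50 nm from 400 to 700 nm.
wavelengths = [400, 450, 500, 550, 600, 650, 700]         # nm
reflectance = [0.10, 0.15, 0.20, 0.35, 0.60, 0.70, 0.72]  # object R(lambda)
illuminant  = [0.80, 1.00, 1.10, 1.00, 0.90, 0.80, 0.70]  # I(lambda), arbitrary units
sens = {  # S_i(lambda) for two hypothetical receptor classes
    "LW": [0.00, 0.01, 0.05, 0.30, 0.80, 0.60, 0.20],
    "SW": [0.70, 0.90, 0.40, 0.05, 0.00, 0.00, 0.00],
}

def photon_catch(s):
    # Discrete approximation of the integral of R * I * S over wavelength.
    return sum(r * i * si for r, i, si in zip(reflectance, illuminant, s))

catches = {name: photon_catch(s) for name, s in sens.items()}
```

For this long-wave-reflecting patch the LW class is stimulated far more than the SW class; under an illuminant lacking longwave light, the same reflectance spectrum would yield a much smaller LW catch, which is exactly the receiver-dependence discussed in the text.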
For example, if an individual possesses a marking high in reflection of a specific waveband, but the environment lacks light in that part of the spectrum or the receiver is insensitive to that waveband, the region of high spectral reflection will be unimportant as a signal. Therefore, it is often necessary to include the ambient light characteristics and, if known, information concerning the receiver's visual system. However, calculated differences in photon catches of various photoreceptor types (for example) between the different conditions do not necessarily lead to differences in perception of the signal, if colour constancy mechanisms exist. Furthermore, if reflection information is obtained, this may be converted into a visual-system-specific measure, either by mapping techniques, as discussed here, or by calculations with illuminant spectra and cone sensitivities. Therefore, although the present study deals with both types of measurements, we focus more on the task of

Figure 2. A plot of spectral sensitivity of two Nikon 5700 cameras for the longwave (LW), mediumwave (MW), and shortwave (SW) channels. Even though the cameras are the same make and model, and were purchased simultaneously, there are some (albeit relatively small) differences in spectral sensitivity.

obtaining information about inherent properties of animal coloration. We assume that images are stored to a precision of 8 bits in each colour channel, such that intensity is on a scale of 0–255; such 'true colour' images (2^8 cubed, or > 16 million colours) are the current norm. Although some studies have used conventional (nondigital) cameras to study animal coloration, we would advise against doing so.
Although conventional film can be linearized, the corrections required from one batch of film to the next are likely to differ, even from the same manufacturer. Film processing techniques, such as scanning to digitize the images, are also likely to introduce considerable spatial and chromatic artefacts, which need to be removed/prevented before analysis.

CHOOSING A CAMERA

We have mentioned the nonlinear response of many digital cameras and, although we show (below) how linearization can be accomplished, nonlinearity is better avoided. Other than this, essential features to look for are (Table 1):

1. The ability to disable automatic 'white-point balancing'. This is a software feature built into most cameras to achieve a more natural colour balance under different lighting conditions. The brightest pixel in any image is set to 255 for R, G, and B (i.e. assumed to be white). Obviously, for technical applications where the object to be photographed has no white regions, this would produce data in which the RGB values are inappropriately weighted.

2. A high resolution. The resolution of a digital image is generally limited by the sensing array, rather than the modulation transfer function of the lens. Essentially, the number of pixels the array contains determines resolution, with higher resolution cameras able to resolve smaller colour patches, allowing more detail to be measured, or the same amount of relative detail to be measured from a greater distance from the subject. Also important is the Nyquist frequency (half the sampling frequency of the sensor array), which is the highest spatial frequency at which the camera can still accurately record image spatial detail; spatial patterning above this frequency results in aliasing, which could be a problem for patterns with a very high level of spatial detail (Efford, 2000).
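The sampling requirement implied by the Nyquist limit can be turned into a quick back-of-envelope check when planning photographs. A toy calculation, with illustrative numbers only:

```python
import math

def min_pixels_across(field_of_view_mm, smallest_detail_mm):
    """Minimum pixel count along one image dimension so that each pixel
    spans less than half of the smallest detail in the field of view."""
    pixel_size_limit = smallest_detail_mm / 2.0
    return math.ceil(field_of_view_mm / pixel_size_limit)

# A 100 mm wide field photographed so that 0.2 mm wing-pattern detail
# is resolved needs at least 1000 pixels across that dimension.
needed = min_pixels_across(100.0, 0.2)
```

The same logic run in reverse tells you how close you must work: with a fixed sensor width in pixels, the field of view must shrink until each pixel covers less than half the finest pattern element of interest.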
There is no set rule as to what the minimum number of pixels in an image should be; if it is possible to work in close proximity to the object, then even a 0.25-megapixel image may be sufficient. The problem is to avoid Nyquist limit problems: the pixels need to be less than half the size of the smallest detail in the image that you are interested in. Each pixel on a digital camera sensor contains a light-sensitive photodiode, measuring the intensity of light over a broadband spectrum. A colour filter array is positioned on top of the sensor to filter the red, green, and blue components of light, leaving each pixel sensitive to one waveband of light alone. Commonly, there is a mosaic of pixels, with twice as many green-sensitive ones as red or blue. The two missing colour values for each individual pixel are estimated based on the values of neighbouring pixels, via so-called demosaicing algorithms, including Bayer interpolation. It is not just the number of pixels a camera produces (its geometrical accuracy) that matters, but also the quality of each pixel. Some cameras are becoming available that have 'foveon sensors', with three photodetectors per pixel, and can thus create increased colour accuracy by avoiding artefacts resulting from interpolation algorithms. However, due to the power of the latest interpolation software, colour artefacts are usually minor, especially as the number of pixels increases, and foveon sensors may have relatively low light sensitivity. Higher quality sensors have a greater dynamic range, which can be passed on to the images, and some cameras are now being produced with two photodiodes per pixel: one highly sensitive to low light levels, the other less sensitive and used to estimate higher light levels without becoming saturated. A distinction should also be made between the

Table 1.
Desirable characteristics when purchasing a digital camera for research

Attribute | Relative importance
High resolution (e.g. minimum of 5 megapixels) | Medium (depends upon the complexity/size of the object photographed)
Manual white balance control | High
Macro lens | Medium
Ability to save TIFF/RAW file formats | High
Manual exposure control | High
Remote shutter release cable capability | Low
Ability to change metering method | Medium
Optical zoom | Medium

number of overall pixels and the number of effective pixels. A conventional 5-megapixel camera may actually output 2560 × 1920 pixel images (4,915,200 pixels) because some of the pixels in the camera are used for various measurements in image processing (e.g. dark current measurements).

3. The ability to store images as uncompressed TIFF (Tagged Image File Format) or RAW files. Some mid-range cameras allow storage as RAW files; others do not, but often allow images to be saved as TIFF files. This is something to determine before purchasing a camera. Other file types, in particular JPEGs (Joint Photographic Experts Group), are unsuitable because information is lost in the compression process. JPEG compression is of the 'lossy' type, which changes the data coming from the CCD array, and the lost information cannot be recovered. This is often undetectable to the human eye, but introduces both spatial and chromatic artefacts in the underlying image data, particularly if the level of compression is high (for two simple illustrations, see Figs 3, 4). JPEGs compress both the colour and spatial information, with the spatial information sorted into fine and coarse detail. Fine detail is discarded first because this is what we are

Figure 3. Four images of the hind left spot on the emperor moth Saturnia pavonia illustrating the effects of compression on image quality.
A, an uncompressed TIFF image of the original photograph. B, a JPEG image with minimal compression (10%). C, a JPEG image with intermediate compression (50%), which still appears to maintain the original structure of the image, but careful examination of the image's spatiochromatic content shows inconsistencies with the original TIFF file. D, a JPEG image with maximal compression (90%) showing severe spatial and chromatic disruption.

less sensitive to. For example, Gerald et al. (2001) used digital images to investigate the scrota of adult vervet monkeys Cercopithecus aethiops sabaeus. They saved the images as JPEG files but, because the level of compression of the files is not stated, it is impossible to assess the degree of error introduced. Camera manuals may state the level of compression used on different settings, and image software should also state the level of compression used when saving JPEG files. However, even if the level of compression is known, the introduction of artefacts will be unpredictable, and so JPEG files should be avoided. Lossy compression is different from some other types of compression, such as those involved with 'zipping' file types, where all the compressed information can be recovered. Uncompressed TIFF files are loss-less, but TIFF files can be compressed in either lossy or loss-less ways, and, like JPEGs, TIFFs can be modified in other ways before being saved if the necessary camera functions are not turned off (such as white-point balancing). For most cameras, a given pixel on a CCD array has only one sensor type (R, G, or B), and interpolation is required to estimate the two unknown colour values of a given pixel.
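The neighbour-based estimation just described can be illustrated with a minimal sketch of bilinear demosaicing on an RGGB Bayer mosaic, here only for the green channel and ignoring image borders; the raw counts are invented.

```python
def green_at(mosaic, y, x):
    """Green value at (y, x) of an RGGB Bayer mosaic.

    In an RGGB layout (row 0: R G R G ..., row 1: G B G B ...), the green
    photosites are exactly those where (y + x) is odd. Elsewhere the green
    value is estimated as the mean of the four green neighbours (a simple
    bilinear interpolation; real demosaicing algorithms are cleverer).
    """
    if (y + x) % 2 == 1:          # a green photosite: the value is direct
        return mosaic[y][x]
    neighbours = [mosaic[y - 1][x], mosaic[y + 1][x],
                  mosaic[y][x - 1], mosaic[y][x + 1]]
    return sum(neighbours) / 4.0

# 4 x 4 toy mosaic of raw sensor counts (invented numbers).
raw = [
    [10, 60, 12, 62],
    [58,  8, 60, 10],
    [14, 64, 16, 66],
    [62, 12, 64, 14],
]

g_interp = green_at(raw, 1, 1)    # estimated green at a blue photosite
g_direct = green_at(raw, 0, 1)    # measured green at a green photosite
```

Two thirds of each output pixel's colour information is therefore estimated rather than measured, which is why interpolation artefacts matter for quantitative colour work.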
Both JPEGs and TIFF files undergo interpolation at the stage of image capture by the camera's internal firmware, which cannot be turned off, and the method is usually opaque to the user. Some cameras have the capacity to store RAW images. RAW files are those that are the direct product of the CCD array and, unlike TIFFs or JPEGs, which are nearly always 8-bit, RAW files are usually 12- or 16-bit. This means they can display a wider variety of colours and are generally linear, because most CCDs are linear, and they undergo none of the processing potentially affecting other file types. The RAW files from the camera in our study occupy approximately half of the memory of an uncompressed TIFF file: even though the TIFF file only retains 8 bits of information, it occupies twice the storage space because it has three 8-bit colour channels, as opposed to one 12-bit RAW channel per CCD pixel. However, before being useable as an image, RAW files must also go through interpolation steps in the computer software into which the files are read. Thumbnails of unprocessed RAW files in RGB format can be read into some software, but these are relatively useless, being only 160 × 120 pixels in resolution, compared to 2560 × 1920 pixels for the processed images. The conversion to another file type can proceed with no modification, just as would be the case if taking photos directly as uncompressed TIFF images. One problem with RAW files is that they can differ between manufacturers and even between camera

Figure 4. Grey values measured when plotting a transect across a grey-scale step image with increasing values from left to right. Grey values start at 0 on the left of the series of steps and increase in steps of 25 to reach values of 250 on the right. Plotted on the graph are the values measured for images of the steps as an uncompressed TIFF file, and JPEGs with 'minimum' (10%), 'intermediate' (50%), and 'maximum' (90%) levels of compression.
Values of 30, 60, and 90 have been added to the JPEG files with minimum, intermediate, and maximum levels of compression to separate the lines vertically. Note that, as the level of compression increases, the data measured are more severely disrupted, particularly at the boundary between changes in intensity. In the case of complex patterns, the disruption to the image structure means that measurements at any point in the image will be error prone.

models, and so special software and/or 'plug-ins' may be needed, or the software provided by the manufacturer must be used, to convert the images to other file formats. Unfortunately, the interpolation process is rarely revealed by the manufacturer, and may introduce nonlinearities into the file. It is possible to write custom programmes to read RAW files into software packages, and this has the advantage that the user can then either use the RAW data directly or decide exactly what method should be used to interpolate the RAW file into a TIFF file. Once our RAW files had been processed by software supplied by the manufacturer, they had almost identical properties to the uncompressed TIFF files (the introduction of nonlinearities could be due to the software processing or a nonlinear CCD). Some imaging software should allow the RAW files to be processed into TIFFs without introducing nonlinearities. RAW files can also be converted into 16-bit TIFF files, which show higher accuracy than 8-bit TIFFs and may highlight extra detail. These 16-bit file types occupy approximately 30 Mb, so considerable storage space is needed to keep a large number of these files.
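The storage claim made earlier for the 2560 × 1920 sensor can be checked with simple arithmetic: an uncompressed 8-bit-per-channel TIFF stores three bytes per pixel, whereas a RAW file stores a single 12-bit (1.5-byte) value per photosite, ignoring headers and metadata.

```python
WIDTH, HEIGHT = 2560, 1920
pixels = WIDTH * HEIGHT              # 4,915,200 photosites

tiff_bytes = pixels * 3              # three 8-bit channels per pixel
raw_bytes = pixels * 12 // 8         # one 12-bit value per photosite

# The RAW file is exactly half the size of the uncompressed TIFF,
# consistent with the roughly-half figure reported in the text.
tiff_mb = tiff_bytes / 2**20         # about 14.1 MB
raw_mb = raw_bytes / 2**20           # about 7.0 MB
```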
However, relatively more unprocessed RAW files can be stored than TIFFs on a memory card. 4. The capacity for manual exposure control or, at the very least, aperture priority exposure. The calibration curve may vary with different aperture settings and focus distances so, to avoid the need for a large num- ber of separate calibration estimates, it is more con- venient to fix the aperture at which photographs are taken and work at constrained distances. If the aper- ture value is increased, more light from the edge of the lens is allowed through, and these rays usually do not converge on the same point as those rays coming through the centre of the lens (spherical aberration). This is especially true for colours near the edges of the human visible spectrum. By keeping the aperture con- stant and as small as possible (large F-numbers), this problem is unlikely to be significant. 5. The ability to take a remote shutter release cable (manual or electronic) to facilitate photography at long integration times (slow shutter speeds) when light levels are low. 6. Known metering characteristics. Many cameras have multiple options for light metering, such that the exposure is set dependent upon average intensity across the entire field imaged, only the intensity at the central spot, or one or more weighted intermedi- ates. Knowing which area of the field in view deter- mines exposure facilitates image composition. 7. Optical zoom can be useful, particularly if the level of enlargement can be fixed manually, so it can be repro- duced exactly, if needed, each time the camera is turned on. Digital zoom is of no value because it is merely equivalent to postimage-capture enlargement and so does not change the data content of the area of interest. 8. Good quality optics. One problem with lenses is chromatic aberration, in which light of different wave- lengths is brought to a focus in a different focal plane, thus blurring some colours in the image. 
This can be caused by the camera lens not focusing different wave- lengths of light onto the same plane (longitudinal chro- matic aberration), or by the lens magnifying different wavelengths differently (lateral chromatic aberration). Párraga, Troscianko & Tolhurst (2002) tested camera lenses of the type in our Nikon camera, by taking images in different parts of the spectrum through nar- rowband spectral filters and verified that the optimal focus settings did not vary significantly, meaning that the lenses did not suffer from this defect. Narrow bandpass filters selectively filter light of specific nar- row wavebands (e.g. from 400 to 410 nm). Using a set of these filters enables images to be obtained where the only light being captured is in a specific waveband. Other lenses may not be as good, especially if they have a bigger optical zoom range. Therefore, aside from the requirement to produce images free from problems such as spherical aberration, the most important issue is to minimize chromatic aberration. As with Párraga et al . (2002), a good test for this is to take images of a page of text under white light through narrowband red and blue filters without changing the focus (this requires manual focus). If there is no chromatic aber- ration, then both images should be equally sharp. A more formal test is to measure the Fourier spectrum of the two images; if there is a good correction for chromatic aberration the two spectra should be the same. Furthermore, Hong et al . (2001) noted that, in some camera lenses, light is not uniformly transmitted across its area, with the centre of the lens transmitting more light. This would result in the pixels in the centre of the image being over-represented in terms of inten- sity. This potential problem should be tested for. Losey (2003) also found that the edges of images were slightly darker. 
In some situations, a good macro lens is also highly desirable because this allows close up images of complex patterns to be obtained. Without a macro lens, it may not be possible to move the camera close enough to resolve complex patterns. Some cam- eras even come with a ‘super’ macro lens, such as the Fujifilm FinePix S7000, which allows photographs to be taken up to 1 cm from the object. 9. The capacity to take memory cards of high capacity. TIFF files are very large ( c . 15 Mb for an image 2560 by 1920 pixels), so that a 512 Mb card that can store over 200 medium-compression JPEGs will only store 34 TIFFs. IMAGE COLOUR VALUES The colour values to be calculated and used in any analysis are stored as RGB values in TIFF files auto- 220 M. STEVENS ET AL . © 2007 The Linnean Society of London, Biological Journal of the Linnean Society, 2007, 90 , 211–237 matically when a camera saves an image or when a file is converted into a TIFF image from its RAW file format and, if 8-bit, this is on a scale of 0–255. The camera or computer conversion software may have the option to save the image as either 8-bit or 16-bit, but 8-bit is currently more standard. The steps that follow to calculate values corresponding to, for exam- ple, reflection or photon catches are spelt out below. If adjusting an image with a standard or set of stan- dards to recover reflectance, then the standards should have a flat reflectance spectrum (i.e. R = G = B); therefore, the image values are adjusted so that R = G = B in the linearized picture. This will give an image in which the pixels have the correct rel- ative spectral reflectance. At this point, a crucial issue to emphasize is that many image software pro- grammes offer the option to convert values into other colour spaces, such as HSB (three images correspond- ing to hue, saturation, and brightness). Conversions such as HSB should be avoided and we strongly advise against this type of conversion. 
HSB is a human-vision-specific colour space, and even in terms of human vision, it is unlikely to be accurate; a more widely used and well tested colour space for humans is the Commission Internationale de l’Éclairage (CIE) Laboratory colour space, which may in some cases be appropriate. There are numerous pitfalls with using methodological techniques based on human vision to describe animal colours (Bennett et al ., 1994; Stevens & Cuthill, 2005). SOFTWARE One of the biggest advantages of using images to anal- yse coloration is the existence of a huge number of flexible and powerful software programmes, coupled with the option to write custom programmes in a variety of programming languages. Some of the pro- grammes available to deal with image processing are standard and quite affordable, such as Paintshop Pro or Photoshop, which can be used for a range of simple tasks. However, there are a range of other options available, including the popular freeware programmes such as the open-source image editor GIMP and the Java-based (Sun Microsystems, Inc.; Efford, 2000) imaging programme ‘Image J’ (Rasband, 1997–2006; Abràmoff, Magalhäes & Ram, 2004), with its huge variety of available ‘plugins’, written by various people for a range of tasks. Image J also permits custom pro- grammes written in the language Java to accompany it. For example, a plugin that we used, called ‘radial profile’, is ideal for analysing lepidopteran eyespots, and other circular features. This works by calculating the normalized intensities of concentric circles, start- ing at a central point, moving out along the radius. Figure 5 gives an example of this plug-in, as used to analyse an eyespot of the ringlet butterfly Aphantopus hyperantus . The programme MATLAB (The Mathworks Inc.) is also an extremely useful package for writing calibra- tions and designing sophisticated computational mod- els of vision. 
This is a relatively easy programming language to learn, is excellent for writing custom and powerful programmes, and, due to its matrix mani- pulation capabilities, is excellent for dealing with images (digital images are simply matrices of num- bers). MATLAB can also be bought with a range of ‘toolboxes’ that have numerous functions already writ- ten for various tasks, including image processing, sta- tistics, and wavelet transformations. MATLAB has available an Image Processing Toolbox with a range of useful functions (Hanselman & Littlefield, 2001; Hunt et al ., 2003; Gonzalez, Woods & Eddins, 2004; West- land & Ripamonti, 2004). HOW FREQUENTLY SHOULD CALIBRATIONS BE UNDERTAKEN? The frequency with which calibrations should be undertaken depends upon the specific calibration required. For example, determining the spectral sen- sitivity of a camera’s sensors need only be performed once because this should not change with time as long as the lens on the camera is not changed, in which case recalibration may be needed. Additionally, the calcu- lation of the camera’s response to changing light levels and the required linearization need only be performed once because this too does not change with time. How- ever, if calculating reflection, the calibration needs to be performed for each session/light setup because the light setup changes the ratio between the LW, MW, and SW sensors. CALIBRATING A DIGITAL CAMERA There are several steps that should be followed when wishing to obtain values of either reflection or data corresponding to an animal’s visual system. To obtain values of reflection: 1. Obtain images of a set of reflectance standards used to fit a calibration curve. 2. Determine a calibration curve for the camera’s response to changes in light intensity in terms of RGB values. 3. Derive a linearization equation, if needed, to lin- earize the response of the camera to changes in light intensity, based on the parameters deter- mined from step 2. 4. 
Determine the ratio between the camera’s response in the R, G, and B channels, with respect to the reflectance standards, and equalize the response of USING CAMERAS TO STUDY ANIMAL COLORATION 221 © 2007 The Linnean Society of London, Biological Journal of the Linnean Society, 2007, 90 , 211–237 Figure 5. Results from a radial profile analysis performed upon one eyespot of the ringlet butterfly Aphantopus hyper- antus , illustrating the high percentage reflectance values obtained for the centre of the spot and the ‘golden’ ring further from the centre, particularly in the red and green channels, and the lack of an eyespot in the ultraviolet (UV). 0 2 4 6 8 Distance from Spot Centre (Pixels) 10 12 14 16 0 5 10 15 20 25 % R e fl e c ti o n 30 R R G B UV G B UV 35 40 45 222 M. STEVENS ET AL . © 2007 The Linnean Society of London, Biological Journal of the Linnean Society, 2007, 90, 211–237 the different colour channels to remove the effects of the illuminating light and any biases inherent in the camera’s processing. If data corresponding to an animal’s visual system is required (such as relative photon catches): 1. Obtain photographs of reflectance standards through a set of narrow band-pass filters, at the same time as measuring the radiance with a spec- trophotometer. 2. Determine the linearity of the camera’s response to changing light levels and, if necessary, derive a linearization. Furthermore, using radiance data and the photographs through the band-pass filters, determine the spectral sensitivity of the camera’s different sensor types. 3. Using data on the spectral sensitivity of the cam- era’s sensors, and the sensitivity of the animal’s sensors to be modelled, produce a mapping based on the response to many different radiance spectra between the two different colour spaces. These different steps are discussed in detail below. 
LINEARIZATION If a set of grey reflectance standards is photographed and then the measured RGB values are plotted against the nominal reflectance value, the naïve expectation would be of a linear relationship (Lauziére et al., 1999). One might also expect the values obtained for each of the three colour channels to be the same for each standard because greys fall on the ach- romatic locus of R = G = B (Kelber et al., 2003). How- ever, as mentioned previously, many cameras do not fulfil such expectations, and they did not for the Nikon 5700 Coolpix camera that we used in our study (Fig. 6; see Appendix 2). A different nonlinear relationship between grey value and nominal reflection for each colour channel requires that the linearizing transfor- mation must be estimated separately for each chan- nel. Also, it means that an image of a single grey reflection standard is insufficient for camera calibra- tion; instead a full calibration experiment must be performed. We used a modification of the linearization protocols developed by Párraga (2003) and Westland & Ripam- onti (2004). The first step is to photograph a range of standard greyscales of known reflectance value. West- land & Ripamonti (2004) used the greyscale of the Macbeth ColorChecker chart (Macbeth, Munsell Color Laboratory). In the present study, because we required reflection standards suitable for UV photo- graphy (see below), we used a set of Spectralon diffuse reflectance standards (Labsphere Inc.). These stan- dards, made of a Teflon microfoam, reflect light of wavelengths between 300 nm and 800 nm (and beyond) approximately equally, and are one of the most highly Lambertian substances available over this spectral range. The standards had nominal per- centage reflection values of 2%, 5%, 10%, 20%, 40%, Figure 6. 
The relationship between the grey scale value measured for a set of seven Spectralon refl ectance standards from raw digital TIFF file images and the nominal reflection value, showing a curved relationship for the R, G, and B data. The ‘required’ line illustrates values that should be measured if the camera’s response was linear and the three channels equally stimulated. LW, longwave; SW, shortwave; MW, mediumwave. 0 10 20 30 40 % Reflection 50 60 70 80 0 50 100 150 LW MW SW Required 200 G re y V a lu e 250 USING CAMERAS TO STUDY ANIMAL COLORATION 223 © 2007 The Linnean Society of London, Biological Journal of the Linnean Society, 2007, 90, 211–237 50%, and 75%. If the object of the study, as in Westland & Ripamonti (2004: chapter 10) and the present study, is to recover reflectance data from the images, then the nature of the illuminant, as long as it is stable over time (Angelopoulou, 2000) and of adequate intensity in all wavebands, is irrelevant. We used a 150-W Xenon arc lamp (Light Support), which was allowed to warm up and stabilize for 1 h before the calibration exercise, and then tested for stability before and after the calibration. In Párraga (2003), the goal was to recover spectral radiance; thus, at the same time as photographing the standards, the radiance of each greyscale patch was measured using a spot-imaging telespectroradiometer (TopCon Model SR1, Calibrated by the National Physical Laboratory). After that, each sensor’s grey level output was plotted against a mea- sure of the total spectral radiance that stimulated it, at various shutter speeds. Because radiance, unlike reflectance, varies with the illuminant, Párraga (2003) repeated the calibration process under a variety of lighting conditions. If recovering radiance is the objec- tive, it is important to determine the calibration curve appropriate to the conditions under which the research photographs will be taken. 
Objects will give problems of metamerism if their reflectance spectra are ‘spiky’ or, at least, very uneven. It is therefore important to check the linearization calibration works for objects from the same class as those being measured. The next step is to determine the function relating the intensity values (0–255) for each of the RGB sen- sors to true reflection, or radiance, as appropriate, as measured spectrometrically. Many studies describe power functions of the same family as those relating intensity to voltage in cathode ray tube monitors; these are so-called gamma functions of the type: Output = constant × (inputγ). For this reason, the lin- earization process is sometimes referred to as ‘gamma correction’. The term gamma function means different things in film photography, digital photography, and algebraic mathematics, and so is a potentially confus- ing term that is best avoided. Because the response of the camera’s sensors is likely to be camera specific, we recommend determining the curve that best fits the data. Although many curves will no doubt fit the data very closely (e.g. a Modified Hoerl and Weibull model, amongst others, fitted our reflection data very well), it is preferable to choose a function that is the same for each of the camera’s three sensors; this makes produc- ing the calibrations much easier because the calibra- tion equation will be of the same form for each channel, with only the parameters varying. If there are several curves that all fit the data well, then choos- ing the simplest equation and with the lowest number of parameters makes calibration much easier. The equation of the curve to produce a calibration should be reversible, favouring a simpler model, because try- ing to revert a high-order polynomial, for example, can be very complicated. 
We found that the function below fitted our camera well: QS = a × bP (1) where Qs is the photon catch of a given sensor S (R, G, or B), P the value of the pixel of sensor S, and a and b are constants. Qs is the product of the measured radi- ance spectrum and the sensor’s spectral sensitivity, but it is rare for manufacturers to publish such data. Westland & Ripamonti (2004) mention that luminance is sometimes used as an approximation, on the assumption that for a grey standard the radiance in all three channels should be the same. However, this assumes a spectrally flat light source, which no light source ever really is. Therefore, the spectral sensitiv- ity needs to be measured directly, by measuring the camera RGB values when imaging greyscales illumi- nated through narrow band-pass filters. In this way, one can construct spectral sensitivity curves analo- gous to the spectral sensitivity curves of animal pho- toreceptors (Fig. 1). Párraga (2003) provides technical details on how to achieve this. In Párraga’s (2003) linearization exercise, the value of b in the equation above was found to be similar for all conditions tested (sunny, cloudy, and incandescent artificial light) and all sensors. Thus, the value of a defined each curve. Because the linearized values for R, G, and B represented radiance in the wavebands specific to each sensor, the photograph’s exposure was also taken into account in the calibration process (a longer exposure time representing lower radiance). 
Therefore, the following three equations were derived to linearize and scale each RGB value to radiance measures, where QS is the radiance measured by sen- sor S, b and the ai are the coefficients estimated by ordinary least-squares regression of log-transformed values, c is a value to account for inherent dark cur- rent (see below) in the camera, and t is the integration time the photograph was taken on [1/shutter speed]: QR = a1(bR − c1)/t (2) QG = a2(bG − c2)/t (3) QB = a3(bB − c3)/t (4) If the object of the research is to obtain reflection rather than radiance measures, then t can be ignored and functions such as eqn. 1 could be used, provided that t is constant and measurements of known reflec- tion standards are also made. Because the reflection values of greyscales are, by definition, equal in all wavebands, sensor spectral sensitivity does not in principle need to be known for linearization in relation to reflection, although in practice, one would want to know the spectral sensitivity curves for the camera’s 224 M. STEVENS ET AL. © 2007 The Linnean Society of London, Biological Journal of the Linnean Society, 2007, 90, 211–237 sensors for the data to be readily interpreted (in terms of the sensitivity corresponding to each sensor). In the case of either radiance or reflection calibration, one should check that it is valid to force the calibration curve through the origin. All digital imaging sensors have associated with them an inherent ‘dark current’ (due to thermal noise in the sensors) (Efford, 2000; Stokman, Gevers & Koenderink, 2000; Barnard & Funt, 2002; Martinez-Verdú et al., 2002), so that a set of images with the lens cap on may not produce mea- surements of zero. As with spectrometry, the dark cur- rent can be estimated by taking images at the same exposure settings as calibration photos, and using the pixel values as an offset for the curve. 
One should also check whether increasing the integration time, or temperature changes within the range at which the camera will be used for data collection, alters these background dark current values. Figure 7 provides an example of linearization per- formed on the RGB values from photographs of reflec- tance standards (Fig. 6). This shows that, generally, the linearization was successful. However, one should note that the values of the reflectance standards with low nominal reflection values are not accurate because these standards were partially underexposed (i.e. there are many pixels with values equal or close to the dark current values) and, for this specific set of images of standards, some standards are slightly closer to, or further away from, the light source. This means that the calibration line will not be perfectly straight. Because the relatively darker areas (low pixel values) of images are often inaccurate in the measurements they yield, these values may be nonlinear (Barnard & Funt, 2002). However, the measurement error is rela- tively small. RGB EQUALIZATION If the goal is to derive reflection data from the photographs, then greys should, by definition, have equal reflection in all three colour channels. So, if R ≠ G ≠ B in the calibration images, the next step is to equalize the three channels with respect to the images of the reflection standards, and then scale the values between 0–255. This, in theory, should be relatively simple: it is a matter of producing a ratio between the three channels and then scaling them, usually with respect to the green channel as a refer- ence point, before multiplying the entire image by 2.55 to set the values on a scale between 0 and 255. So, for our data: R′ = (RxR)2.55 (5) G′ = (GxG)2.55 (6) B′ = (BxB)2.55 (7) where xi is the scaling value for each channel, and R, G, and B are the linearized image values for each channel, respectively. 
The equalized values were then tested for accuracy using a different set of calibration images. Figure 8 shows the result. The three channels closely match the required calibration line. Note that there is no need for 255 to represent 100% reflection; indeed, to obtain maximum resolution in colour dis- Figure 7. The relationship between measured greyscale value and nominal reflection value for the seven reflectance standards, showing the linearization of the gamma curves. LW, longwave; SW, shortwave; MW, mediumwave. 0 10 20 30 40 % Reflection 50 60 70 80 0 50 100 150 LW MW SW Required 200 G re y V a lu e 250 USING CAMERAS TO STUDY ANIMAL COLORATION 225 © 2007 The Linnean Society of London, Biological Journal of the Linnean Society, 2007, 90, 211–237 crimination within and between images, if all images to be analysed are relatively dark then it would be advisable for the maximum pixel value within the dataset to be 255. An important issue is that of saturation. With regards to the above calibration results (Figs 6, 7, 8), we maintained an integration time of 1/30 s and a lens aperture of f/8.0. This resulted in images that were slightly under-exposed and guarded against the serious problem of saturation. Saturation (also known as ‘clipping’; Lauziére et al., 1999) occurs when the light levels arriving at the sensors reaches an upper limit, above which any more photons are not registered. This can be a serious problem because it prevents measurements of the true value that the pixels would have reached had saturation not occurred; a problem recognized in some studies (Hong et al., 2001). The effects of saturation are easy to find, with saturated pixels in the original image yielding values of approximately 255, with little or no stan- dard deviation. For example, images taken under similar circumstances, but with an integration time of 1/15 s produce results that at nominal reflection values of 75%, the red channel ceases to rise in pixel values. 
This is due to the effects of saturated pixels in the original image in the red channel, which causes the calibration to fail, since the linearization becomes ineffective and the equalization procedure results in the red channel grey values dropping away at higher reflection values (Fig. 9). These problems can be avoided by changing the exposure/integration time (t), or altering the intensity of the light source, because these determine the flux of light reaching the camera’s sensors (Hong et al., 2001). However, if the exposure is to be changed between images it is impor- tant to test that the response of the camera is the same at all exposure settings, otherwise a separate calibration will need to be performed for every change in exposure. Therefore, where possible, it is recom- mended that the aperture value, at least, is kept constant (Hong et al., 2001). It is often the case that the red channel of a digital camera is the first to saturate (as was the case with our camera, even when using a light source biased towards shorter wavelengths of light; Fig. 9), possibly because the sensors in some cameras may be biased to appeal to human perceptions, with increasing red channel values giving the perception of warmth. This may be particularly deleterious for studies investigat- ing the content of red signals (Frischknecht, 1993; Wedekind et al., 1998), which are widespread because of the abundance of carotenoid-based signals in many taxa (Grether, 2000; Pryke, Lawes & Andersson, 2001; Bourne, Breden and Allen, 2003; Blount, 2004; McGraw & Nogare, 2004; McGraw, Hill & Parker, 2005) and theories linking carotenoid signals to immune function (Koutsos et al., 2003; McGraw & Ardia, 2003; Navara & Hill, 2003; Grether et al., 2004; McGraw & Ardia, 2005). Some cameras are also biased in their representation of relatively short wave- lengths, to compensate for a lack of these wavelengths in indoor lights (Lauziére et al., 1999). Figure 8. 
The greyscale values measured for the set of reflectance standards following the process of RGB channel equalization and scaling, showing a close fit to the required values. LW, longwave; SW, shortwave; MW, mediumwave. 0 10 20 30 40 % Reflection 50 60 70 80 0 50 100 150 LW MW SW Required 200 G re y V a lu e 250 226 M. STEVENS ET AL. © 2007 The Linnean Society of London, Biological Journal of the Linnean Society, 2007, 90, 211–237 SELECTING/CONTROLLING LIGHT CONDITIONS To some extent, the importance of selecting standard- ized lighting conditions and distances depends upon the calibration required. Lighting conditions should be as stable, standardized, and consistent as possible for each photo if measurements of reflection are desired, especially if photographs of standards are taken only at the beginning and end of sessions. However, when photographing natural scenes and using measures of photon catch, for example, lighting conditions are likely to vary considerably. This may in fact be an important part of the study: to include information about the ambient light. Generally, it is often best to avoid flashguns because the output of these is difficult to measure and may be variable; however, a high-end flash with good light diffusers may be fine. If using a flash, putting a grey standard(s) of known reflectance into the part of the scene interested in should allow good recovery of reflectance, even if the illumination conditions vary in an uncontrolled manner, although these standards may need to be included in every image rather than just at the start/end of sessions. Therefore, using a flash may be acceptable if one is just interested in reflectance, but should be avoided if Figure 9. 
The greyscale values measured for the set of reflectance standards following the process of linearization (A) and then RGB channel equalization (B) and scaling, showing that the linearization does not produce a linear response when there are saturated pixels in the image, as is the case in the R channel in this example. Saturated pixels also result in a poor equalization result, indicated by a dropping off of the R channel at higher values. 0 10 20 30 40 % Reflection 50 60 70 80 0 50 100 150 Required A Red Green Blue200 G re y V a lu e 250 0 10 20 30 40 50 60 70 80 0 50 100 150 Required B Red Green Blue200 G re y V a lu e 250 USING CAMERAS TO STUDY ANIMAL COLORATION 227 © 2007 The Linnean Society of London, Biological Journal of the Linnean Society, 2007, 90, 211–237 you are interested in the behaviour of natural illumi- nation (e.g. shadows). MAPPING TO CAMERA-INDEPENDENT MEASURES Having used the coefficients obtained in the lineariza- tion procedure to linearize the RGB values in the images obtained, the next step is to transform them to camera-independent values. This is because the R, G, and B data, whether in radiance or reflectance units, are specific to the wavebands designated by the cam- era sensors’ spectral sensitivity curves (Fig. 1C). This may be sufficient for some research purposes; for example, if the sensitivities of the camera’s sensors broadly correspond to the bandwidths of interest. However, it will often be desirable, either because a specific visual system is being modelled (e.g. human, bird), or simply to facilitate comparison of the results across studies, to transform the camera-specific RGB values to camera-independent measures. In human studies, these are frequently one of the sets of three- coordinate representations devised by the CIE for colour specification and/or matching. 
Different three- variable representations have been devised to approx- imate colour-matching for images illuminating only the M-L-cone-rich central fovea, or wider areas of the retina; for presentation of images on video display units or printed paper; or representations that incor- porate the colour balance arising from a specific illu- minant, or are illumination independent (Wyszecki & Stiles, 1982; Mollon, 1999; Westland & Ripamonti, 2004). The advantage is that all these metrics are pre- cisely defined, the formulae downloadable from the CIE website, and the values in one coordinate system can be transformed to another. Westland & Ripamonti (2004) provide formulae and downloadable MATLAB (The Mathworks Inc.) code for such transformations. Another possible camera-independent transforma- tion is to map the linearized RGB values to the spectral sensitivities of the photoreceptors of either humans (Párraga et al., 2002) or nonhuman species. In the case of RGB radiance measures, this corre- sponds to calculating the photon catches of an animal’s photoreceptors, rather than the camera’s sen- sors, when viewing a particular scene. In the case of RGB reflectance measures, this can be thought of as a mapping to a species-specific estimate of reflectance in the wavebands to which the animal’s photoreceptors are sensitive. Both types of mapping are particularly relevant to studies involving nonhuman animals, where accurate psychophysical estimates of colour- matching, of the sort used to calculate human- perceived colour from camera data, are not usually available. For such mapping to be viable, it is not nec- essary that the species’ cone spectral sensitivities match those of the camera’s sensors particularly closely (e.g. this is not true for humans; compare Fig. 1A, C). However, for the transformation to pro- duce reliable data, the species’ overall spectral range has to fall within that of the camera, and the species has to have three or less photoreceptors. 
For example, one can map RGB data to the lower dimensional colour space of a dichromatic dog (with one short- and one medium/long-sensitive cone type; Jacobs, 1993), but a camera with sensitivities such as that shown in Fig. 1C can never capture the full colour world of a trichromatic bee (with UV, short-, and medium-wave photoreceptors; Chittka, 1992). Mapping RGB data to a bird’s colour space would appear to be invalid on two counts: birds have a broader spectral range than a conventional camera (often extending into the UV-A) and are potentially tetrachromatic (Cuthill et al., 2000b). However, if the scenes or objects of interest lack UV information, then a mapping from RGB to avian short-, medium-, and long-wave cone sensitivi- ties can be achieved. We present the method here, which can be used for any analogous trichromatic sys- tem (e.g. human) or, with simple modification, a lower- dimensional system of the type that is typical for most mammals (Jacobs, 1993). Subsequently, we consider how UV information from a separate imaging system can be combined with the RGB data to provide a com- plete representation of bird-perceived colour. The goal is to predict the quantal catches, Qi, of a set of i photoreceptors (where i ≤ 3), given a triplet of cam- era-sensor-estimated radiance values, QR, QG, and QB, derived from the calibration and linearization process described above. This amounts to solving a set of simultaneous regression equations, which are likely to be nonlinear. Mappings can be peformed for more than three photoreceptor classes, provided that the spectral sensitivities of all types are covered by the spectral range of one or more of the camera’s sensors. For example, a mapping could be produced to calculate images corresponding to the longwave, mediumwave, and shortwave cones of a bird’s visual system, plus a luminance image based on avian double cone sensitiv- ity. 
Once mapped images have been obtained, further calculations also allow the production of images corresponding to various opponency channels. Westland & Ripamonti (2004) summarize their, and other, research on the family of equations most likely to provide a good fit to data, and conclude that linear models (with interaction terms) of the following type perform well. For ease of interpretation, we use the notation R, G, and B to describe the camera pixel values rather than their calibrated and linearized equivalents, QR, QG, and QB.

Qi = bi1R + bi2G + bi3B + bi4RG + bi5RB + bi6GB + bi7RGB (8)

228 M. STEVENS ET AL. © 2007 The Linnean Society of London, Biological Journal of the Linnean Society, 2007, 90, 211–237

where bi are coefficients specific to receptor i, and the curve is forced through the origin (when the calibrated camera sensor value is zero, the animal's quantal catch is zero). In some cases, dependent on the camera and the nature of the visual system to which mapping is required, polynomials (i.e. including terms in R2, G2, and B2, or higher orders) may provide a significantly better fit (and did in our case); this should be investigated empirically. Cheung et al. (2004) note that even mapping functions of unconstrained form, obtained using neural networks applied to large datasets, do not significantly outperform polynomials. The data required to estimate the coefficients for the i photoreceptors can either be radiances directly measured using an imaging spectroradiometer (Párraga, 2003) or, more conveniently, radiances approximated as the product of reflectance spectra and the irradiance spectrum of the illuminant. Using eqn. 8, applied to a trichromat, 3 × 7 coefficients need to be estimated, so the number of radiance spectra must be considerably greater than this (> 100 in our experience as a minimum, but closer to 1000 is better).
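The coefficients of eqn. 8 can be estimated by ordinary least squares on a design matrix built from the camera terms. A sketch with synthetic data (the camera values and the 'true' coefficients here are invented purely for the demonstration):

```python
# Sketch: estimating the mapping coefficients of eqn. 8 by least squares,
# i.e. the multiple regression / matrix-algebra route described in the text.
# All input values are synthetic, for illustration only.
import numpy as np

rng = np.random.default_rng(0)
R, G, B = rng.random((3, 500))        # 500 synthetic calibrated pixel triplets

def design_matrix(R, G, B):
    """Terms of eqn. 8: R, G, B, RG, RB, GB, RGB (no intercept:
    the fit is forced through the origin, as the text requires)."""
    return np.column_stack([R, G, B, R * G, R * B, G * B, R * G * B])

X = design_matrix(R, G, B)
b_true = np.array([0.8, 0.3, 0.1, 0.05, 0.02, 0.01, 0.005])  # invented
Q = X @ b_true                        # noise-free synthetic cone catches

# Least-squares estimate of the 7 coefficients for this receptor
b_hat, *_ = np.linalg.lstsq(X, Q, rcond=None)
```

With noise-free synthetic data the estimate recovers the generating coefficients; with real radiance data, as the text notes, many hundreds of spectra are needed for a stable fit.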
Large numbers of radiance spectra can be obtained from internet databases (Parkkinen, Jaaskelainen & Kuittinen, 1988; Sumner & Mollon, 2000). The coefficients for each photoreceptor are then found by multiple regression (or, conveniently, if using MATLAB, by matrix algebra; Westland & Ripamonti, 2004). Although, in principle, one could derive a mapping function (i.e. set of coefficients) for all possible natural spectra, viewed under all possible illuminants, greater precision can be achieved by determining a situation-specific mapping function for the research question at hand. For example, if the goal is to use a camera to quantify the coloration of orange to red objects under blue skies, then a very precise mapping function could be estimated by using radiance data calculated only from the reflectance spectra of orange to red objects viewed under blue sky irradiance. If one is to derive the mapping functions by calculation (i.e. calculate quantal catch for camera and desired cone sensitivities, using reflectance and irradiance data), then the sensitivity of the camera's sensors is required. However, one could also derive the mapping empirically without ever measuring camera sensor sensitivities, by measuring the response of the camera's three channels to different (known) radiance spectra, and by determining the response of the cones of the required animal's visual system. To achieve accurate mapping, the camera's response would have to be measured for many hundreds of radiance spectra, which would be time-consuming, involving many stimuli.

UV IMAGING

In our own research, we wished to quantify lepidopteran wing patterns with respect to avian vision, so we also needed to measure the amount of reflection in the avian-visible UV waveband. At the same time as RGB photography, images of the reflectance standards and the lepidopterans were taken with a UV-sensitive video camera (see Appendix 2).
First, we tested whether the camera was linear with respect to both changes in the integration time and increases in the reflection value; being a high-specification technical camera, this was indeed the case. This meant that the only calibrations needed were to scale the images to between 0 and 255, which is not initially as easy as it sounds, because the calibrations have to account for the different gain settings and integration times. Figure 10 provides an example of the results for the UV calibration process. In most situations, it will be simpler to maintain the same gain values because this reduces the number of factors to consider in the calibration process. If images are obtained from more than one camera, there is an additional consideration that must be addressed: that of 'image registration'. Images derived from one RGB camera will all be the same angle and distance from the specimens, and so the objects photographed will be on an identical scale in each of the three channels, based on the interpolations implemented. This may not be the case if obtaining images from a second camera, as in our study, meaning that the specimens were a different size in the photographs and would not necessarily be easy to align with the RGB images. Furthermore, one camera may produce images with a lower resolution, and with less high-frequency information; different cameras will have different Nyquist frequencies, meaning that although aligning lower spatial frequency patterns may be relatively easy, information may be lost or poorly aligned at higher frequencies. One potential approach is to use Fourier filtering to remove the highest spatial frequency information from those images that contain it, down to the highest frequencies contained in the images from the other camera.
However, this may be undesirable if the high spatial frequency information is important, as it frequently will be with complex patterns, or where edge information between pattern components is critical. The task of aligning images is made easier if: (1) different cameras are set up as closely as possible, in particular with relation to the angle of photography, because this is the hardest factor to correct; and (2) rulers are included in at least a sample of the images, so they can be rescaled to ensure specimens occupy the same scale in different images. Including rulers in images allows for true distance measurements to be obtained and for spatial investigations to be undertaken. If images from one camera are larger than those from another, then it is the larger images that should be scaled down in size, because this avoids the artefactual data generated by interpolation if images are rescaled upwards. Once the objects in the photographs are of the same size, it may be a relatively trivial task to take measurements from the different images that directly correspond. However, if the images are still difficult to align then an automated computational approach can be used. A variety of these are available, and users should carefully consult the available manuals/information for the corresponding software to be sure of how the registration is completed, and to check what changes may occur to the image properties. However, in many cases, changes to the image structure will probably be small, especially at lower spatial frequencies, and have little influence on the results.
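The advice above, to scale the larger image down rather than interpolate the smaller one up, can be sketched by block averaging (one simple downscaling scheme; the 4 × 4 'image' is a toy example):

```python
# Sketch: downscaling the larger of two images by averaging non-overlapping
# k x k blocks, so specimens occupy the same pixel scale in both images.
# Downscaling discards detail but avoids the artefactual data that upward
# interpolation would invent. The toy array stands in for a real image.
import numpy as np

def downscale_by_factor(img, k):
    """Average non-overlapping k x k blocks (dims must be multiples of k)."""
    h, w = img.shape
    return img.reshape(h // k, k, w // k, k).mean(axis=(1, 3))

big = np.arange(16, dtype=float).reshape(4, 4)   # toy 4x4 'image'
small = downscale_by_factor(big, 2)              # matching 2x2 scale
```

For sub-pixel alignment itself, the automated registration tools discussed in the text remain the appropriate route.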
One such plug-in, for the freeware software Image J (Rasband, 1997–2006; Abràmoff et al., 2004), is 'TurboReg' (available via a link from the Image J website) (Thévenaz, Ruttimann & Unser, 1998), which comes with a variety of options to align sets of images.

HOW BEST TO USE COLOUR STANDARDS

A crucial step in calibrating a digital camera is to include colour standards in some or all of the photographs taken. Including a set of colour standards in each photo allows calibrations to be derived for each individual photo, which would be highly accurate. However, in most cases, this is impractical and unnecessary. For example, when the light source used is consistent, a set of reflectance standards used to fit a calibration curve need only be included in photos at the start and end of a session. Including these in each photo may leave little space for the objects of interest. By contrast, in many cases, such as when photographing natural scenes where the illuminating light may change and when wishing to calculate values such as photon catches, it may be important to include at least one grey standard in the corner of each photo. Possibly the best objects to include in a photo are Spectralon reflectance standards (Labsphere Inc.), which reflect a known amount of light equally at all wavelengths in the UV and human-visible spectrum. However, these are expensive and easily damaged; if a single standard is sufficient, a Kodak grey card (Eastman Kodak Company), which has an 18% reflectance and is relatively inexpensive, can be included instead.

SPATIAL MEASUREMENTS

Often, we do not wish to measure solely the 'colour' of a patch, but the area or shape of a region of interest. In principle, this sounds easy but has several complications. For example, the colour boundary of an area visible to humans may not be exactly the same as that of another animal.
Additionally, there may be colours that we cannot see (such as UV) that have different boundaries to those visible by a human (although most colour patches generally have the same boundary for different colour bands, such as UV, SW, MW, and LW). Another problem corresponds to the acuity of the animal in question. Regions of interest with complex boundaries may only be discernible by animals with a high enough spatial acuity. Furthermore, there is a specific problem with gradual boundaries, particularly relating to defining where the actual edge of the colour region is.

Figure 10. The effect of scaling the ultraviolet (UV) images obtained with the PCO Variocam camera and Nikon UV-transmitting lens, showing a close fit to the required values. [Plot of grey value against % reflection, for gains of 12 db and 24 db and the required values.]

There are several ways to address these issues, and one must remember that the image processing steps that facilitate patch size or shape measurement may interfere with the accurate measurement of patch colour per se (e.g. by enhancing contrast between patches). One method of determining the boundary of a colour patch is to produce an automated procedure to define a specific area of interest. This can be done by thresholding an 8-bit or colour image to a binary (black and white) image, where each individual pixel has a value of either one (white) or zero (black) (Fig. 11). This can be performed by writing a custom programme where the threshold level is defined specifically by the user, preferably based on an explicit assumption or data. Otherwise, most imaging software has automatic thresholding algorithms, although it is not always known what thresholding value will be used. A different method that can be used to define an area of interest is that of edge detection.
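Before turning to edge detection, the thresholding procedure just described, and the area measurement it enables, can be sketched as follows (the 8-bit pixel values and the threshold of 128 are toy values; a custom programme would choose the level from explicit assumptions or data):

```python
# Sketch: thresholding a grey-level image to a binary mask, then measuring
# patch area by counting foreground pixels, as described for defining a
# region of interest. The tiny 'image' and threshold are illustrative only.
import numpy as np

def threshold_binary(img, level):
    """Pixels at or above `level` become 1 (white); the rest become 0 (black)."""
    return (img >= level).astype(np.uint8)

img = np.array([[10, 200, 210],
                [20, 220,  30],
                [40,  50, 230]], dtype=np.uint8)   # toy 8-bit grey-level image

mask = threshold_binary(img, 128)
patch_area = int(mask.sum())                       # patch area in pixels
```

As Fig. 11 illustrates, the chosen level strongly affects which patches survive thresholding, so it should be set deliberately rather than left to an opaque automatic algorithm.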
Edge detection is where an algorithm is used to determine edges in an image, corresponding to sharp changes in intensity (either in luminance or in individual colour channels). These edges may, for example, be found at the boundary of a colour patch (Fig. 11). The useful thing about edge detection algorithms is that they can either be optimized without being linked to any specific visual system, or made to correspond to the way in which real visual systems work (Marr & Hildreth, 1980; Bruce, Green & Georgeson, 2003; Stevens & Cuthill, 2006). Once the boundary of a colour patch has been defined, it is simple to measure the area of the patch. Measuring the shape of an object is more difficult, although imaging software often comes with algorithms to measure attributes such as the relative circularity of an area and, occasionally, more advanced shape analysis algorithms.

DRAWBACKS TO USING DIGITAL IMAGES

The most notable drawback is that the information obtained is not wavelength specific (i.e. it is known what wavelengths contribute to each channel, but not the contribution of any specific wavelength to the RGB value of any one pixel). This drawback can be overcome by so-called multispectral imaging (or, if the number of wavebands is high, 'hyperspectral imaging'). This can involve rotating a set of filters in front of the lens, allowing the acquisition of successive images of different wavebands (Brelstaff et al., 1995; Lauziére et al., 1999; Angelopoulou, 2000; Stokman et al., 2000; Losey, 2003). This method may be particularly useful if detailed wavelength information is required, or if the visual system of the receiver that the signal is aimed at is poorly matched by the sensitivity of an RGB camera. We do not cover this technique here because, although it combines many of the advantages of spectrometry with photography, the technology is not practical for most behavioural and evolutionary biologists.
Hyperspectral cameras are often slow because they may have to take upwards of 20 images through the specified spectral range. The equipment, and controlling software, must be constructed de novo, and conventional photography's advantage of rapid, one-shot image acquisition is lost. The specimens must be stationary during the procedure because movement can cause problems with image registration. Also, as Losey (2003) acknowledges, images obtained sequentially in the field may be subject to short-term variations in environmental conditions, which introduce considerable noise. Averaging the values obtained from multiple frames of the same waveband may help to eliminate some of this effect (Losey, 2003).

PROBLEMS WITH USING THE AUTOMATIC CAMERA SETTINGS

Many studies of animal coloration utilizing cameras apparently use the camera with its automatic settings. There are numerous problems that can arise when using the 'auto' mode. The main problem is that the settings used by the camera are adjusted according to the scene being photographed and so may be inconsistent. In general, camera manufacturers are interested solely in selling cameras and therefore want to produce pictures that look aesthetically 'good' by enhancing some of the images' colours and contrasts; automatic modes are generally designed with this objective in mind. Given that an automatically set white balance changes between photos, it gives rise to different ratios between the LW, MW, and SW sensor responses. This need not always be an irretrievable flaw, but it would almost certainly need some highly complex calibration procedures to recover consistent data, such as calibrating every single combination of white balance, contrast enhancement, and aperture setting modes. Any low- to mid-range camera is likely to have some white balancing present, and most mid-range cameras will give the option to manually set the white balance.
If the camera does not allow this option and there is no indication of this in the manual, then changing the white-balance settings may not be possible. An additional problem with automatic settings is that calibration curves/settings could also change at different aperture settings; this may not always be the case but, when using the automatic mode, there is the additional complication that the aperture and exposure (integration) time may change significantly and simultaneously, leading to unnecessarily complicated calibrations if values of reflection, for example, are required.

Figure 11. Different images of a clouded yellow butterfly Colias croceus, modified to show regions of interest, such as the wing spots, identified by various techniques. A, the original 8-bit grey-level image (pixel values between 0 and 255). B, the image after an edge detection algorithm has been applied, clearly identifying a boundary around the two forewing spots, but not the hindwing spots. C, the original image after being thresholded to a binary (black/white) image with a threshold of 64. This clearly shows the forewing spots but does not produce spots where the hindwing spots were in the original image. D, the original image when converted to a binary image with a threshold of 128, now picking out both the forewing and hindwing spots (although with some 'noise' around the hindwing spots). E, the original image converted to a binary image with a threshold of 192, now not showing any clear wing spots. F, the original image when first converted to a pseudocolour image, where each pixel value falling within a given range is given a specific colour. The image is then reconverted to a grey-level image and now shows the hindwing spots with marginally sharper edges than in the original image.

The aperture selected by the
camera will also affect the quality of the image, particularly depth of field. Another potentially serious problem with using the auto mode is that the photograph will not optimize the dynamic range of the scene photographed, meaning that some parts of the scene may be underexposed or, far more seriously, saturated.

CONCLUSIONS

One of the earliest studies to outline how digital image analysis can be used to study colour patterns is that of Windig (1991), in an investigation of lepidopteran wing patterns. Windig (1991) used a video camera connected to a frame grabber to digitize the images for computer analysis, a method similar to that which we used to capture the UV-sensitive images. Windig (1991) stated that the method was expensive and the programmes highly complex but, today, flexible user-friendly software is available, with various freeware programmes downloadable from the internet, and the purchase of a digital camera and software is possible for a fraction of the cost of the setup used by Windig (1991). Windig (1991) argued that any image analysis procedure should meet three criteria. First, completeness: a trait should be quantified with respect to all characters, such as 'colour' and area. Our procedure meets this criterion because reflection, plus spatial measurements, are attainable. Second, the procedure needs to be repeatable. This was also the case with our approach, because the calibrations for a set of images of reflectance standards were still highly accurate for other images taken under the same conditions but at different times. Finally, the process should be fast relative to other available methods, as was the case in our study, with potentially hundreds of images taken in a day, quickly calibrated with a custom MATLAB programme and then analysed with the range of tools available in Image J.
Another advantage of capturing images with a digital camera is that a host of other noncolour analyses become possible. Detailed and complex measurements of traits can be undertaken rapidly, with measurements and calculations that would normally be painstakingly undertaken by hand performed almost instantaneously in imaging software, including measurements of distances, areas, and analysis of shapes, plus complex investigations such as Fourier analysis (Windig, 1991). This may be particularly useful if handling the specimens to take physical measurements is not possible. The use of digital technology in studying animal coloration is a potentially highly powerful method, avoiding some of the drawbacks of other techniques. In future years, advances in technology, software, and our understanding of how digital cameras work will add further advantages. It is already possible to extract data of a scene from behind a plane of glass (Levin & Weiss, 2004), which could become useful for studies of aquatic organisms (although most glass filters out UV wavelengths; Lauziére et al., 1999). Techniques are also being developed to remove shadows from images; shadows can make edge recognition more difficult (Finlayson, Hordley & Drew, 2002) and hinder tasks such as image registration. With the explosion in the market of digital photography products, and the relatively low cost of purchasing such items, there is the temptation to launch into using such techniques to study animal signals without prior investigation into the technicalities of such methods. This could produce misleading results. Therefore, although digital photography has the potential to transform studies of coloration, caution should be exercised and suitable calibrations developed before such investigations are undertaken.

KEY POINTS/SUMMARY

Below is a list of some of the main points to consider if using cameras to study animal coloration.
1. Images used in an analysis of colour should be either RAW or TIFF files and not JPEGs.
2. Grey reflectance standards should be included in images at the start of a photography session if the light source is constant, or in each image if the ambient light changes.
3. It is crucial not to allow images to become saturated or underexposed because this prevents accurate data being obtained.
4. Many cameras have a nonlinear response to changes in light intensity, which needs linearizing before usable data can be obtained.
5. To produce measurements of reflectance, the response of the R, G, and B colour channels needs to be equalized with respect to grey reflectance standards.
6. Measurements of cone photon catches corresponding to a specific visual system can be estimated by mapping techniques based upon sets of radiance spectra and camera/animal spectral sensitivity.
7. Digital images can be incorporated into powerful models of animal vision.
8. Do not convert image data to formats such as HSB, which are human-specific and inaccurate. Instead, use reflection data, calculations of photoreceptor photon catches or, if working on human-perceived colour, well-tested colour spaces such as CIE.
9. If using more than one camera, image registration may be a problem, especially if the different cameras have different resolutions. This problem can be minimized by setting up different cameras as close to one another as possible and ensuring that one camera does not capture significantly higher levels of spatial detail than the other.
10. Digital imaging is also a potentially highly accurate and powerful technology to study spatial patterns.

ACKNOWLEDGEMENTS

We are very grateful to D. J. Tolhurst and G. P. Lovell for help with much of the project. C.A.P. was supported by BBSRC grants S11501 to D. J.
Tolhurst and T.S.T., and S18903 to I.C.C., T.S.T., and J.C.P. M.S. was supported by a BBSRC studentship.

REFERENCES

Abràmoff MD, Magalhäes PJ, Ram SJ. 2004. Image processing with Image J. Biophotonics International 7: 36–43.
Angelopoulou E. 2000. Objective colour from multispectral imaging. International Conference on Computer Vision 1: 359–374.
Barnard K, Funt B. 2002. Camera characterisation for color research. Color Research and Application 27: 152–163.
Bennett ATD, Cuthill IC, Norris KJ. 1994. Sexual selection and the mismeasure of color. American Naturalist 144: 848–860.
Blount JD. 2004. Carotenoids and life-history evolution in animals. Archives of Biochemistry and Biophysics 430: 10–15.
Bortolotti GR, Fernie KJ, Smits JE. 2003. Carotenoid concentration and coloration of American Kestrels (Falco sparverius) disrupted by experimental exposure to PCBs. Functional Ecology 17: 651–657.
Bourne GR, Breden F, Allen TC. 2003. Females prefer carotenoid colored males as mates in the pentamorphic live-bearing fish, Poecilia parae. Naturwissenschaften 90: 402–405.
Brelstaff GJ, Párraga CA, Troscianko T, Carr D. 1995. Hyperspectral camera system: acquisition and analysis. In: Lurie JB, Pearson J, Zilioli E, eds. SPIE − human vision, visual processing and digital displays; geographic information systems; photogrammetry; and geological/geophysical remote sensing. Paris: Proceedings of the SPIE, 150–159.
Bruce V, Green PR, Georgeson MA. 2003. Visual perception, 4th edn. Hove: Psychology Press.
Cardei VC, Funt B. 2000. Color correcting uncalibrated digital images. Journal of Imaging Science and Technology 44: 288–294.
Cardei VC, Funt B, Barnard K. 1999. White point estimation for uncalibrated images. Proceedings of the IS and T/SID seventh color imaging conference: color science systems and applications, 97–100.
Cheung V, Westland S, Connah D, Ripamonti C. 2004.
A comparative study of the characterisation of colour cameras by means of neural networks and polynomial transforms. Coloration Technology 120: 19–25.
Chittka L. 1992. The colour hexagon: a chromaticity diagram based on photoreceptor excitations as a generalised representation of colour opponency. Journal of Comparative Physiology A 170: 533–543.
Cooper VJ, Hosey GR. 2003. Sexual dichromatism and female preference in Eulemur fulvus subspecies. International Journal of Primatology 24: 1177–1188.
Cott HB. 1940. Adaptive colouration in animals. London: Methuen Ltd.
Cuthill IC, Bennett ATD, Partridge JC, Maier EJ. 1999. Plumage reflectance and the objective assessment of avian sexual dichromatism. American Naturalist 153: 183–200.
Cuthill IC, Hart NS, Partridge JC, Bennett ATD, Hunt S, Church SC. 2000a. Avian colour vision and avian video playback experiments. Acta Ethologica 3: 29–37.
Cuthill IC, Partridge JC, Bennett ATD, Church SC, Hart NS, Hunt S. 2000b. Ultraviolet vision in birds. Advances in the Study of Behaviour 29: 159–214.
D'Eath RB. 1998. Can video images imitate real stimuli in animal behaviour experiments? Biological Reviews 73: 267–292.
Dartnall HJA, Bowmaker JK, Mollon JD. 1983. Human visual pigments: microspectrophotometric results from the eyes of seven persons. Proceedings of the Royal Society of London Series B, Biological Sciences 220: 115–130.
Efford N. 2000. Digital image processing: a practical introduction using JAVA. Harlow: Pearson Education Ltd.
Endler JA. 1984. Progressive background matching in moths, and a quantitative measure of crypsis. Biological Journal of the Linnean Society 22: 187–231.
Endler JA. 1990. On the measurement and classification of colour in studies of animal colour patterns. Biological Journal of the Linnean Society 41: 315–352.
Endler JA, Mielke PW Jr. 2005. Comparing color patterns as birds see them. Biological Journal of the Linnean Society 86: 405–431.
Finlayson GD, Hordley SD, Drew MS. 2002.
Removing shadows from images. European Conference on Computer Vision 4: 823–836.
Finlayson GD, Tian GY. 1999. Color normalisation for color object recognition. International Journal of Pattern Recognition and Artificial Intelligence 13: 1271–1285.
Fleishman LJ, Endler JA. 2000. Some comments on visual perception and the use of video playback in animal behavior studies. Acta Ethologica 3: 15–27.
Fleishman LJ, McClintock WJ, D'Eath RB, Brainard DH, Endler JA. 1998. Colour perception and the use of video playback experiments in animal behaviour. Animal Behaviour 56: 1035–1040.
Frischknecht M. 1993. The breeding colouration of male three-spined sticklebacks (Gasterosteus aculeatus) as an indicator of energy investment in vigour. Evolutionary Ecology 7: 439–450.
Gerald MS, Bernstein J, Hinkson R, Fosbury RAE. 2001. Formal method for objective assessment of primate color. American Journal of Primatology 53: 79–85.
Goda M, Fujii R. 1998. The blue coloration of the common surgeonfish, Paracanthurus hepatus − II. Color revelation and color changes. Zoological Science 15: 323–333.
Gonzalez RC, Woods RE, Eddins SL. 2004. Digital image processing using MATLAB. London: Pearson Education Ltd.
Grether GF. 2000. Carotenoid limitation and mate preference evolution: a test of the indicator hypothesis in guppies (Poecilia reticulata). Evolution 54: 1712–1724.
Grether GF, Kasahara S, Kolluru GR, Cooper EL. 2004. Sex-specific effects of carotenoid intake on the immunological response to allografts in guppies (Poecilia reticulata). Proceedings of the Royal Society of London Series B, Biological Sciences 271: 45–49.
Hanselman D, Littlefield B. 2001. Mastering MATLAB 6: a comprehensive tutorial and reference. Upper Saddle River, NJ: Pearson Education International.
Hart NS, Partridge JC, Cuthill IC. 1998.
Visual pigments, oil droplets and cone photoreceptor distribution in the European starling (Sturnus vulgaris). Journal of Experimental Biology 201: 1433–1446.
Hendrickson A, Drucker D. 1992. The development of parafoveal and mid-peripheral human retina. Behavioural Brain Research 49: 21–31.
Hong G, Lou RM, Rhodes PA. 2001. A study of digital camera colorimetric characterization based on polynomial modeling. Color Research and Application 26: 76–84.
Hunt BR, Lipsman RL, Rosenberg JM, Coombes HR, Osborn JE, Stuck GJ. 2003. A guide to MATLAB: for beginners and experienced users. Cambridge: Cambridge University Press.
Hurlbert A. 1999. Colour vision: is colour constancy real? Current Biology 9: R558–R561.
Jacobs GH. 1993. The distribution and nature of colour vision among the mammals. Biological Reviews 68: 413–471.
Kelber A, Vorobyev M, Osorio D. 2003. Animal colour vision − behavioural tests and physiological concepts. Biological Reviews 78: 81–118.
Kodric-Brown A, Johnson SC. 2002. Ultraviolet reflectance patterns of male guppies enhance their attractiveness to females. Animal Behaviour 63: 391–396.
Koutsos EA, Clifford AJ, Calvert CC, Klasing KC. 2003. Maternal carotenoid status modifies the incorporation of dietary carotenoids into immune tissues of growing chickens (Gallus gallus domesticus). Journal of Nutrition 133: 1132–1138.
Künzler R, Bakker TCM. 1998. Computer animations as a tool in the study of mating preferences. Behaviour 135: 1137–1159.
Lauziére YB, Gingras D, Ferrie FP. 1999. Color camera characterization with an application to detection under daylight. Trois-Rivières: Vision Interface, 280–287.
Levin A, Weiss Y. 2004. User assisted separation of reflections from a single image using a sparsity prior. European Conference on Computer Vision 1: 602–613.
Losey GS Jr. 2003. Crypsis and communication functions of UV-visible coloration in two coral reef damselfish, Dascyllus aruanus and D. reticulates. Animal Behaviour 66: 299–307.
Lythgoe JN.
1979. The ecology of vision. Oxford: Clarendon Press.
Marr D, Hildreth E. 1980. Theory of edge detection. Proceedings of the Royal Society of London Series B, Biological Sciences 207: 187–217.
Marshall NJ, Jennings K, McFarland WN, Loew ER, Losey GS Jr. 2003. Visual biology of Hawaiian coral reef fishes. II. Colors of Hawaiian coral reef fish. Copeia 3: 455–466.
Martinez-Verdú F, Pujol J, Capilla P. 2002. Calculation of the color matching functions of digital cameras from their complete spectral sensitivities. Journal of Imaging Science and Technology 46: 15–25.
McGraw KJ, Ardia DR. 2003. Carotenoids, immunocompetence, and the information content of sexual colors: an experimental test. American Naturalist 162: 704–712.
McGraw KJ, Ardia DR. 2005. Sex differences in carotenoid status and immune performance in zebra finches. Evolutionary Ecology Research 7: 251–262.
McGraw KJ, Hill GE, Parker RS. 2005. The physiological costs of being colourful: nutritional control of carotenoid utilization in the American goldfinch, Carduelis tristis. Animal Behaviour 69: 653–660.
McGraw KJ, Nogare MC. 2004. Carotenoid pigments and the selectivity of psittacofulvin-based coloration systems in parrots. Comparative Biochemistry and Physiology B 138: 229–233.
Mollon JD. 1999. Specifying, generating and measuring colours. In: Carpenter RHS, Robson JG, eds. Vision research: a practical guide to laboratory methods. Oxford: Oxford University Press, 106–128.
Navara KJ, Hill GE. 2003. Dietary carotenoid pigments and immune function in a songbird with extensive carotenoid-based plumage coloration. Behavioral Ecology 14: 909–916.
Newton I. 1718. Opticks, or a treatise of the reflections, refractions, inflections and colours of light, 2nd edn. London: Printed for W. and J. Innys.
Parkkinen J, Jaaskelainen T, Kuittinen M. 1988. Spectral representation of color images. IEEE 9th International Conference on Pattern Recognition, Rome, Italy 2: 933–935.
Párraga CA. 2003.
Is the human visual system optimised for encoding the statistical information of natural scenes? PhD Thesis, University of Bristol. Párraga CA, Troscianko T, Tolhurst DJ. 2002. Spatiochro- matic properties of natural images and human vision. Cur- rent Biology 12: 483–487. Pietrewicz AT, Kamil AC. 1979. Search image formation in the blue jay (Cyanocitta cristata). Science 204: 1332–1333. Pryke SR, Lawes MJ, Andersson S. 2001. Agonistic caro- tenoid signalling in male red-collared widowbirds: aggres- sion related to the colour signal of both the territory owner and model intruder. Animal Behaviour 62: 695–704. Rasband WS. 1997–2006. Image J. Bethesda, MD: National Institutes of Health. Available at http:/rsb.info.nih.gov/ij/. Rosenthal GG, Evans CS. 1998. Female preference for swords in Xiphophorus helleri reflects a bias for large appar- ent size. Proceedings of the National Academy of Sciences of the United States of America 95: 4431–4436. Samaranch R, Gonzalez LM. 2000. Changes in morphology with age in Mediterranean monk seals (Monachus mona- chus). Marine Mammal Science 16: 141–157. Stevens M, Cuthill IC. 2005. The unsuitability of html-based colour charts for estimating animal colours − a comment on Berggren & Merilä. Frontiers in Zoology 2: 14. USING CAMERAS TO STUDY ANIMAL COLORATION 235 © 2007 The Linnean Society of London, Biological Journal of the Linnean Society, 2007, 90, 211–237 Stevens M, Cuthill IC. 2006. Disruptive coloration, crypsis and edge detection in early visual processing. Proceedings of the Royal Society Series B, Biological Sciences 273: 2141– 2147. Stokman HMG, Gevers T, Koenderink JJ. 2000. Color measurement by imaging spectrometry. Computer Vision and Image Understanding 79: 236–249. Sumner P, Arrese CA, Partridge JC. 2005. The ecology of visual pigment tuning in an Australian marsupial: the honey possum Tarsipes rostratus. Journal of Experimental Biology 208: 1803–1815. Sumner P, Mollon JD. 2000. 
Catarrhine photopigments are optimised for detecting targets against a foliage background. Journal of Experimental Biology 203: 1963–1986. Thayer AH. 1896. The law which underlies protective colora- tion. Auk 13: 477–482. Thayer GH. 1909. Concealing-coloration in the animal king- dom: an exposition of the laws of disguise through color and pattern: being a summary of Abbott H. Thayer’s discoveries. New York, NY: The Macmillan Co. Thévenaz P, Ruttimann UE, Unser M. 1998. A pyramid approach to subpixel registration based on intensity. IEEE Transactions on Image Processing 7: 27–41. Tinbergen N. 1974. Curious naturalists, rev. edn. London: Penguin Education Books. Villafuerte R, Negro JJ. 1998. Digital imaging for colour measurement in ecological research. Ecology Letters 1: 151– 154. Wedekind C, Meyer P, Frischknecht M, Niggli UA, Pfander H. 1998. Different carotenoids and potential infor- mation content of red coloration of male three-spined stick- leback. Journal of Chemical Ecology 24: 787–801. Westland S, Ripamonti C. 2004. Computational colour science using MATLAB. Chichester: John Wiley & Sons Ltd. Westmoreland D, Kiltie RA. 1996. Egg crypsis and clutch survival in three species of blackbirds (Icteridae). Biological Journal of the Linnean Society 58: 159–172. Windig JJ. 1991. Quantification of Lepidoptera wing patterns using an image analyzer. Journal of Research on the Lepi- doptera 30: 82–94. Wyszecki G, Stiles WS. 1982. Color science: concepts and methods, quantitative data and formulae, 2nd edn. New York, NY: John Wiley. Yin J, Cooperstock JR. 2004. Color correction methods with applications for digital projection environments. Journal of the Winter School of Computer Graphics 12: 499–506. Zuk M, Decruyenaere JG. 1994. Measuring individual vari- ation in colour: a comparison of two techniques. Biological Journal of the Linnean Society 53: 165–173. 
APPENDIX 1
GLOSSARY OF TECHNICAL TERMS

Aliasing
When different continuous signals become indistinguishable as a result of digital sampling. Spatial aliasing is manifested as the jagged appearance of lines and shapes in an image.

Aperture
Aperture refers to the diaphragm opening inside a photographic lens. The size of the opening regulates the amount of light passing through onto the colour filter array. Aperture size is usually referred to in f-numbers. Aperture also affects the 'depth of field' of an image.

Bit depth
This relates to image quality. A bit is the smallest unit of data, such as 1 or 0. A 2-bit image can have 2^2 = 4 grey levels (black, low grey, high grey and white). An 8-bit image can have 2^8 = 256 grey levels, ranging from 0 to 255. Colour images are often referred to as 24-bit images because they can store up to 8 bits in each of the three colour channels and therefore allow for 256 × 256 × 256 = 16.7 million colours.

Charge-coupled device (CCD)
A small photoelectronic imaging device containing numerous individual light-sensitive picture elements (pixels). Each pixel stores an electronic charge created by the absorption of light, producing a varying amount of charge in response to the amount of light it receives. The accumulated charge passes through an analogue-to-digital converter, which produces a file of encoded digital information.

Chromatic aberration
This is caused by light rays of different wavelengths coming to focus at different distances from the lens, causing blurred images. Blue light focuses at the shortest distance and red at the greatest distance.

Colour filter array
Each pixel on a digital camera sensor contains a light-sensitive photodiode which measures the brightness of light. These are covered with a pattern of colour filters, a colour filter array, to filter out different wavebands of light.
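The colour filter array entry above, together with the demosaicing entry that follows, can be illustrated with a minimal sketch. This is a generic bilinear scheme for an RGGB Bayer pattern, not the algorithm of any particular camera; the function name and the 3 × 3 neighbourhood averaging are our own illustrative assumptions.

```python
import numpy as np

def bilinear_demosaic(raw):
    """Bilinear demosaicing sketch for an RGGB Bayer mosaic.

    raw: 2D array of sensor values; rows alternate R,G / G,B.
    Returns an H x W x 3 RGB image in which the missing channel
    values at each pixel are the mean of the known samples in the
    surrounding 3 x 3 neighbourhood (with wrap-around at borders).
    """
    h, w = raw.shape
    rgb = np.zeros((h, w, 3))
    # Boolean masks marking where each channel was actually sampled
    masks = np.zeros((h, w, 3), dtype=bool)
    masks[0::2, 0::2, 0] = True   # red: even rows, even columns
    masks[0::2, 1::2, 1] = True   # green on red rows
    masks[1::2, 0::2, 1] = True   # green on blue rows
    masks[1::2, 1::2, 2] = True   # blue: odd rows, odd columns
    for c in range(3):
        chan = np.where(masks[:, :, c], raw, 0.0)
        weight = masks[:, :, c].astype(float)
        value_sum = np.zeros((h, w))
        kernel_sum = np.zeros((h, w))
        # Accumulate the 3 x 3 neighbourhood of known samples
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                value_sum += np.roll(np.roll(chan, dy, 0), dx, 1)
                kernel_sum += np.roll(np.roll(weight, dy, 0), dx, 1)
        rgb[:, :, c] = value_sum / np.maximum(kernel_sum, 1)
    return rgb
```

A uniform sensor field should demosaic to a uniform RGB image, which gives a quick sanity check of the interpolation.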
Demosaicing algorithms
Most digital cameras sample an image with red, green, and blue sensors arranged in an array, with one type at each location. However, an image is required with an R, G, and B value at each pixel location. This is produced by interpolating the missing sensor values via so-called 'demosaicing' algorithms, which come in many types.

Exposure
The exposure is the amount of light received by the camera's sensors and is determined by the aperture and the integration time.

236 M. STEVENS ET AL. © 2007 The Linnean Society of London, Biological Journal of the Linnean Society, 2007, 90, 211–237

Foveon sensors
Foveon sensors capture colour by using three layers of photosensors at each location. This means that no interpolation is required to obtain values of R, G, and B at each pixel.

Image resolution
The resolution of a digital image is the number of pixels it contains. A 5-megapixel image is typically 2560 pixels wide and 1920 pixels high and has a resolution of 4,915,200 pixels.

JPEG
JPEG (Joint Photographic Experts Group) is very common due to its small size and widespread compatibility. JPEG is a lossy compression method, designed to save storage space. The JPEG algorithm divides the image into squares, which can be seen on badly compressed JPEGs. A discrete cosine transformation is then used to turn each square of data into a set of curves, throwing away the less significant part of the data. The image information is rearranged into colour and detail information, compressing colour more than detail because changes in detail are easier to detect. It also sorts detail information into fine and coarse detail, discarding fine detail first.

Lossy compression
A data compression technique in which some data is lost. Lossy compression attempts to eliminate redundant or unnecessary information and dramatically reduces the size of a file, by up to 90%. Lossy compression can generate artefacts such as false colours and blockiness.
JPEG is an image format that is based on lossy compression.

Lossless compression
Lossless compression is similar to 'zipping' a file: if a file is compressed and later extracted, the content will be identical. No information is lost in the process. TIFF images can be compressed in a lossless way.

Macro lens
A lens that provides continuous focusing from infinity to extreme close-ups.

Modulation transfer function
The modulation transfer function describes how much a piece of optical equipment, such as a lens, blurs the image of an object. Widely spaced features, such as broad black and white stripes, do not lose much contrast, because a little blurring only affects their edges, but fine stripes may appear to be a uniform grey after being blurred by the optical apparatus. The modulation transfer function is a measure of how much bright-to-dark contrast is lost, as a function of the width of the stripes, as the light goes through the optics.

Nyquist frequency
The Nyquist frequency is the highest spatial frequency at which the CCD can still correctly record image detail without aliasing.

RAW
A RAW file contains the original image information as it comes off the sensor, before internal camera processing. This data is typically 12 bits per pixel. The camera's internal image processing software or computer software can interpolate the raw data to produce images with three colour channels (such as a TIFF image). RAW data is not modified by algorithms such as sharpening. RAW formats differ between camera manufacturers, so specific software provided by the manufacturer, or self-written software, has to be used to read them.

Saturation
In the context of calibrating a digital camera, we use this term to denote when a sensor reaches an upper limit of light captured and can no longer respond to additional light. This is also called 'clipping', as the image value cannot go above 255 (in an 8-bit image) regardless of how much additional light reaches the sensor.
Saturation can also be used to refer to the apparent amount of hue in a colour, with saturated colours looking more vivid.

Sensor resolution
The number of effective non-interpolated pixels on a sensor. This is generally much lower than the image resolution because it is counted before interpolation has occurred.

TIFF
TIFF (Tagged Image File Format) is a very flexible file format. TIFFs can be uncompressed, lossless compressed, or lossy compressed. While JPEG images only support 8 bits per channel RGB images, TIFF also supports 16 bits per channel and multilayer CMYK images in PC and Macintosh formats.

White balance
Most digital cameras have an automatic white balance setting whereby the camera automatically samples the brightest part of the image to represent white. However, this automatic method is often inaccurate and is undesirable in many situations. Most digital cameras also allow white balance to be chosen manually.

APPENDIX 2
TECHNICAL DETAILS

In the present study, we used a Nikon Coolpix 5700 camera, with an effective pixel count of just under 5.0 megapixels. This does not have all of the desired features described in our paper (the intensity response is nonlinear and the zoom cannot be precisely fixed) and we offer no specific recommendation, but it is a good mid-priced product with high-quality optics and full control over metering and exposure. UV photography was with a PCO Variocam, fitted with a Nikon UV-Nikkor 105 mm lens, a Nikon FF52 UV pass filter and an Oriel 59875 'heat' filter (the CCD is sensitive to near-infrared). The camera was connected to a Toshiba Satellite 100 cs laptop and also to an Ikegami PM-931 REV.A monitor, which displayed the images that were to be saved via a PCO CRS MS-DOS based programme.
With the camera remote control, the gain and the integration time of the images could be adjusted, with the gain set to either 12 dB or 24 dB and the integration time between one and 128 video frames (1 frame = 40 ms). Images were transferred to a PC and all measurements were taken with the (free) imaging programme ImageJ (Rasband, 1997–2006; Abràmoff et al., 2004). Measurements of standards were taken by drawing a box over the area of interest and then using the histogram function to determine the mean grey-scale value and standard deviation for each channel. All other image and data manipulations, including the linearization and transformation between coordinate systems, were performed with MATLAB (The Mathworks Inc.), although other languages, such as Java (Sun Microsystems, Inc.; Efford, 2000), are also useful. MATLAB has rapidly become an industry standard in vision science, on account of its efficiency at matrix mathematics and manipulation (photographic data are large matrices). MATLAB and ImageJ benefit from the large number of plug-ins and toolboxes written by users for other users.

work_qpuytrmafvaqhhhwh76crzh6ey ----

THE ANALOG AND DIGITAL IMAGE IN PHOTOTHERAPY
Davide Susa

Photography is probably the most powerful artistic medium from an emotional point of view. It is a travel companion during a person's life (Valtorta R., 2016).

PHOTOTHERAPY, according to Judy Weiser, represents an "[...] articulated system of psychotherapy techniques based on the use of photography ... within their therapeutic activities [...]" (Weiser J., 2006).
In phototherapy:
✓ the patient, with the photographic image acting as a "stimulus", is activated through perceptions, emotions, and reactions (Emotional Bond) recorded and used by the psychologist;
✓ the psychologist-psychotherapist helps the patient in "conscious investigations" about himself (Cognitive Bond) with respect to the emotional content that the photographic image arouses in him.

RESEARCH IDEA
Check whether the digital image, as part of a phototherapy exercise, can be physiologically and emotionally "activated" like the analog image.

For the research, the exercise "The Space Station" from J. Weiser's Phototherapy (2013) was chosen, as:
✓ it can evoke strong physiological and emotional reactions;
✓ it allows one to explore the emotional bond that binds the individual to the images chosen by him.

The activity was divided into 2 distinct phases to explore:
✓ the feeling linked to the "loss of something", the photographic image;
✓ awareness of the image most dear to the person.

SELF REPORT
It was prepared for the detection of the participant's emotional bond to the photographic image, with two different indices:

Physiological Activation
• fast heart beat
• body and hand sweating
• accelerated breathing
• physical and motor activation

Non-Verbal Affective Assessment (N.V.A.A.)
with the graphs of the "Self-Assessment Manikin", developed by Bradley and Lang (1994), which analyzes three characteristics:
✓ emotion (from positive to negative)
✓ internal physiological activation (high to low)
✓ perceptions of domination/self-control (low to high)

SELF REPORT
The SELF-REPORT can be used as a tool for assessing the patient's emotional bond in a phototherapy exercise. In the sector literature there are no such tools.

PARTICIPANTS GROUP
The research was carried out thanks to the support of 17 participants, between 29 and 66 years old; 12 participants are women.
The group was divided as follows between analog (paper) and digital images:
❑ 9 participants carried out the exercise with paper images;
❑ 8 participants performed the exercise with digital images.

[Charts: gender of the participants in the exercise (Male / Female); "Are you fond of photography?" (17 replies); "What kind of images did you use in the exercise?" (17 replies, Digital / Analog).]

The survey revealed that the phototherapeutic exercise has a low impact on the "Physiological Activation" level, regardless of the type of images (analog or digital) used and the setting, whether in presence or in a digital environment. Globally, the N.V.A.A. was the same in both analog and digital mode. The activation phases, however, differ depending on the mode:
✓ Phase 2, when the participant remains with a single image, for the activity with analog photographs;
✓ Phase 1, when the first image is removed from the person, for the activity with digital images.

Self-Report Results
The two indices are Physiological Activation and the Non-Verbal Affective Assessment (N.V.A.A.), based on the "Self-Assessment Manikin" graphs developed by Bradley and Lang (1994).
Self-Report Results: Physiological Activation

Analog exercise:
✓ no physiological activations were detected during Phase 1;
✓ 3 parameters were detected during Phase 2 (accelerated heartbeat, accelerated breathing, physical and motor activation);
✓ 2 parameters were detected in the global evaluation (accelerated heart rate, accelerated breathing).

Digital exercise:
✓ no physiological activations were detected, either globally or in the individual phases of the exercise.
NOTE: although the detected MODE is "0", in 3 out of 8 subjects physiological activations of at least 3 parameters were found, both in Phase 1 and globally. Phase 2 did not register these exceptions to the value of the MODE.

Self-Report Results: Non-Verbal Affective Assessment (N.V.A.A.)

Analog exercise:
✓ no relevant elements of N.V.A.A. during Phase 1;
✓ Phase 2 highlighted strongly negative emotions and significant physiological activation, albeit with strong self-control;
✓ the global evaluation was medial, with a mode value of "3" on all parameters.

Digital exercise:
✓ Phase 1 showed slightly negative emotions and relevant physiological activation, with discreet self-control by the subjects;
✓ no relevant elements of N.V.A.A. during Phase 2, where the emotion turned from negative to positive;
✓ the global evaluation was medial, with a mode value of "3" on all parameters.

NEXT STEPS
✓ Insights on physiological and emotional activations with more complex and sophisticated tools (e.g. biofeedback, neurofeedback, etc.).
✓ Possibility to standardize the Self-Report for the analysis and evaluation of the perception of the emotional bond of a patient with respect to a photographic image and a phototherapy activity.
work_qqsgsvia5fbz5nmajyfff5ypbi ----

Research Article
Research on the Application of an Information System in Monitoring the Dynamic Deformation of a High-Rise Building

Mingzhi Chen,1 Guojian Zhang,2,3 Chengxin Yu,1 and Peng Xiao1
1College of Business, Shandong Jianzhu University, Jinan 250101, China
2College of Surveying and Geo-Informatics, Shandong Jianzhu University, Jinan 250101, China
3School of Environmental Science and Spatial Informatics, China University of Mining and Technology, Xuzhou 221116, China
Correspondence should be addressed to Mingzhi Chen; chenmingzhi1975@126.com, Guojian Zhang; g_j_zhang@cumt.edu.cn, and Chengxin Yu; ycx1108@126.com
Received 31 October 2019; Revised 17 April 2020; Accepted 25 June 2020; Published 30 July 2020
Academic Editor: Suzanne M. Shontz
Copyright © 2020 Mingzhi Chen et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

With the acceleration of urbanization, there are increasingly more high-rise buildings in cities. In turn, high-rise building collapse accidents occur frequently. The causes of the danger, in addition to extremely severe stress, are predominantly the long-term action of unstable factors, resulting in an unrecoverable internal structure. Therefore, it is advisable to monitor the dynamic deformation of buildings to prevent accidents. However, there is no particularly mature application system on the market to enable this monitoring, so our research group has carried out long-term research in this field. This paper introduces the design of a set of practical information systems by using information technology and the principle of close-range photogrammetry.
The system used the photographing scale transformation-time baseline parallax (PST-TBP) method to analyse image data collected with a digital camera, implement close-range photogrammetry, and input the image data into a computer. A building deformation diagram is obtained using our own software. The associated deformation curve can clearly reproduce the building deformation trend to monitor the building's health. We conducted many laboratory simulation experiments to verify the information system, and the verification results prove that this process is rigorous. To apply this information system to a real-life scenario as soon as possible, we further studied its application to high-rise buildings, improved the system by using data and experience obtained by monitoring the tallest local building, and achieved good results. Finally, combined with the development of current intelligent technology, directions for system improvement are explored.

1. Introduction

The process of urbanization is currently accelerating, city areas are expanding, and the population has increased rapidly. In addition, there are increasingly more high-rise buildings. However, the collapse of high-rise buildings is also frequent, and the hidden danger of these hazards has received increasingly more attention [1–3]. The causes of the danger, in addition to extremely severe stress, are predominantly the long-term action of unstable factors, resulting in an unrecoverable internal structure, but the associated microdeformation processes are difficult to monitor [4, 5]. If we can obtain real-time deformation data from these buildings, then these data can be superimposed and compared, and we can clearly analyse whether the building is in a state of health. If the deformation is excessive or unrecoverable in an unhealthy state, collapse can be prevented in advance, which is a very effective method to avoid accidents [6–9].
The study of building deformation monitoring to ensure safety began in the 1990s. In 2000, experiments by Luo et al. and others verified that GNSS (Global Navigation Satellite System) could be used to monitor the low-frequency dynamic characteristics of high-rise buildings [10]. In 2006, Lee and others used digital cameras to monitor displacement and compared the results with those of a laser vibrometer [11]. In 2011, Chen Weihuan and others used a digital camera to monitor the dynamic displacement of the Guangzhou new TV tower in real time; compared with the dynamic displacement data measured by GNSS, the results were basically consistent. In 2012, Kuang et al. used GNSS to monitor the dynamic response of high-rise buildings under typhoon loads and to monitor the dynamics of a high-rise building under construction in Hong Kong [12]. In 2018, a smart SHM system with 46 sensors at the Shanghai World Financial Center (SWFC) in China was validated for the analysis of the dynamic characteristics of tall buildings under earthquake excitation. At the same time, Zhou utilized tomography-based persistent scatterers interferometry to monitor the deformation performance of high-rise buildings, i.e., the SWFC and the JinMao Tower, in the Shanghai Lujiazui Zone. The above scholars have done a great deal of work in the field of high-rise building health monitoring; our scheme differs from theirs, and our monitoring of real-time instantaneous dynamic deformation is new research.

Hindawi, Mathematical Problems in Engineering, Volume 2020, Article ID 3714973, 9 pages, https://doi.org/10.1155/2020/3714973
After years of research, our research group designed a set of information systems that combine information technology with digital photogrammetry. Digital photography [13–15] can observe the structural deformation of buildings, and its combination with information technology makes this approach faster and more efficient than the alternatives. We then verified the scientific foundation of this method by simulating and building various structural models in the laboratory. Several basic experiments prove that the method is efficient and scientific. In addition, we collected and monitored instantaneous dynamic deformation images of buildings and drew the associated graphs. To further apply the systems in practice, we need to study their practical application in a wide variety of buildings; our aim is thus to study the application of deformation monitoring information systems in high-rise buildings. This paper briefly introduces the principle and workflow of one of the information systems and then describes a typical simulation experiment for scientific verification. It also focuses on how we monitored the first tall building in Jinan, China, obtained first-hand data and experience of field monitoring, and explains how to further improve the function of the information system while overcoming various problems in real time and incorporating recent advancements in intelligent technology. Finally, new improvements and development directions are put forward.

2. Construction of the Dynamic Deformation Monitoring Information System

2.1. Accuracy Assessment of the Test Camera. The distortion of the camera in the photographic system plays a major role in influencing its measurement accuracy [16–20]. Considering that the distortion error is linear near the central position of the images [21], we adopted a grid method [22] in the study to eliminate the distortion of the digital camera and improve the measurement accuracy.
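The grid method of [22] is not spelled out here. One common realization (an assumption on our part, not necessarily the authors' exact procedure) is to photograph a regular grid, measure where its intersections land in the image, and interpolate a per-point correction from the resulting displacement field:

```python
import numpy as np

def grid_correction(ideal, measured):
    """Return a function that corrects camera distortion by interpolating
    the displacement field (ideal - measured) sampled at grid intersections.

    ideal, measured: (N, 2) arrays of grid intersection coordinates.
    Inverse-distance weighting is used here as a simple stand-in for
    whatever interpolation a given grid-method implementation chooses.
    """
    ideal = np.asarray(ideal, float)
    measured = np.asarray(measured, float)
    disp = ideal - measured            # correction sampled at grid points

    def correct(p):
        d = np.linalg.norm(measured - p, axis=1)
        if d.min() < 1e-12:            # point lies exactly on a grid intersection
            return p + disp[d.argmin()]
        w = 1.0 / d**2
        return p + (w[:, None] * disp).sum(0) / w.sum()
    return correct
```

For a pure translation distortion, every grid point carries the same correction, so any image point is shifted back by that amount.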
Figure 1 illustrates that the distortion error of the deformation point on the object moves from Position A (xA, zA) to Position A' (xA', zA') in the camera's view, and that ΔXAA' and ΔZAA' are the corresponding horizontal and vertical displacements of this deformation point, respectively. After modifying the distortion of the test camera, we used the direct linear transformation (DLT) method [23] to assess its measurement accuracy. The DLT method is as follows:

$$x - \frac{L_1 X + L_2 Y + L_3 Z + L_4}{L_9 X + L_{10} Y + L_{11} Z + 1} = 0,\qquad z - \frac{L_5 X + L_6 Y + L_7 Z + L_8}{L_9 X + L_{10} Y + L_{11} Z + 1} = 0, \tag{1}$$

where $(x, z)$ are the image plane coordinates of deformation points without errors, $(X, Y, Z)$ are the space coordinates of the corresponding deformation points, and $L_i\ (i = 1, 2, 3, \ldots, 11)$ are functions of the exterior and interior parameters of the test camera.

The spatial coordinates of the reference points and deformation points were measured by a total station in the laboratory. The DLT method was used to calculate the spatial coordinates of deformation points De0 and De1 based on the coordinates in Tables 1 and 2. Their differences were obtained by comparing the actual coordinates of De0 and De1 with their calculated coordinates (Table 3). The maximal and minimal measurement errors were 2 mm and 0 mm, respectively, with an average measurement error of 1 mm.

2.2. Monitoring Principle. The monitoring principle of the dynamic deformation monitoring information system (DDMIS) is the time baseline parallax based on photographing scale transformation (PST-TBP) method. The PST-TBP method is detailed in [24–27] and Figure 2 shows its schematic diagram. Part of the PST-TBP method is as follows. In Figure 2(b), on the object plane, $\Delta x^{def}$ and $\Delta z^{def}$ of the corresponding deformation point are

$$\Delta x^{def} = \frac{S_A}{S_a}\,\Delta p_x^{def} = m \cdot \Delta p_x^{def},\qquad \Delta z^{def} = \frac{S_A}{S_a}\,\Delta p_z^{def} = m \cdot \Delta p_z^{def}, \tag{2}$$

Figure 1: Influence of distortion error.
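Equation (1) is linear in the eleven coefficients, so with six or more control points they can be recovered by least squares. The following sketch shows the standard linearization (the function names are ours; the paper's own implementation is not shown):

```python
import numpy as np

def dlt_coefficients(obj_pts, img_pts):
    """Solve the 11 DLT coefficients L1..L11 from >= 6 control points.

    obj_pts: (N, 3) object-space coordinates (X, Y, Z)
    img_pts: (N, 2) image-plane coordinates (x, z)
    Each point yields two linear equations obtained by multiplying
    equation (1) through by its denominator.
    """
    A, b = [], []
    for (X, Y, Z), (x, z) in zip(obj_pts, img_pts):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -x * X, -x * Y, -x * Z])
        b.append(x)
        A.append([0, 0, 0, 0, X, Y, Z, 1, -z * X, -z * Y, -z * Z])
        b.append(z)
    L, *_ = np.linalg.lstsq(np.array(A, float), np.array(b, float), rcond=None)
    return L

def project(L, X, Y, Z):
    """Apply equation (1): map an object point to image coordinates."""
    den = L[8] * X + L[9] * Y + L[10] * Z + 1.0
    x = (L[0] * X + L[1] * Y + L[2] * Z + L[3]) / den
    z = (L[4] * X + L[5] * Y + L[6] * Z + L[7]) / den
    return x, z
```

Once the coefficients are solved from the reference points, `project` can be evaluated at the deformation points and compared with total-station coordinates, as done for De0 and De1 in Table 3.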
where $m$ is the photographing scale on the reference plane, $\Delta x^{def}$ and $\Delta z^{def}$ are the horizontal and vertical deformations of the deformation point on the object plane, and $\Delta p_x^{def}$ and $\Delta p_z^{def}$ are the horizontal and vertical parallaxes of the corresponding image point on the image plane. Note that there exist systematic errors in $\Delta p_x^{def}$ and $\Delta p_z^{def}$.

Then, the corrected parallax of the corresponding deformation points is obtained:

$$\mathrm{cor}\Delta p_x^{def\prime} = \Delta p_x^{def} - \Delta p_x^{def0\prime},\qquad \mathrm{cor}\Delta p_z^{def\prime} = \Delta p_z^{def} - \Delta p_z^{def0\prime}, \tag{3}$$

where $(\mathrm{cor}\Delta p_x^{def\prime}, \mathrm{cor}\Delta p_z^{def\prime})$ are the corrected parallaxes of deformation points in the coordinate system after the barycentralization, and $(\Delta p_x^{def0\prime}, \Delta p_z^{def0\prime})$ are the systematic errors of the deformation point in the coordinate system after the barycentralization.

Then, we obtain the corrected displacements of deformation points based on the control plane:

$$\mathrm{cor}\Delta x^{def} = m \cdot \mathrm{cor}\Delta p_x^{def\prime},\qquad \mathrm{cor}\Delta z^{def} = m \cdot \mathrm{cor}\Delta p_z^{def\prime}, \tag{4}$$

Table 1: Spatial coordinates of reference points R0 to R7 (m).
Point   R0       R1       R2       R3       R4       R5       R6       R7
X      108.343  109.739  110.144  110.144  109.492  108.862  108.456  108.713
Y       95.770   95.804   95.957   95.957   97.022   97.199   96.194   96.487
Z       99.534   99.542   99.540   99.540   99.568   99.240   99.225   99.384

Table 2: Pixel coordinates of reference points (R0 to R7) and deformation points (De0 and De1) (pixel).
Point      R0    R1    R2    R3    R4    R5    R6    R7    De0   De1
Photo 1 X  205   428   581   958   1540  1864  782   1120  429   1541
Photo 1 Z  687   468   437   451   490   924   1057  796   700   718
Photo 2 X  144   1179  1486  1716  1918  1631  487   907   619   1543
Photo 2 Z  572   495   491   530   605   1101  948   799   643   826

Table 3: Accuracy assessment for deformation points De0 and De1.
Name    Actual coordinates (m)   Calculated coordinates (m)   Differences (mm)
De0-X   108.825                  108.826                      1
De0-Y    95.887                   95.888                      1
De0-Z    99.441                   99.440                      1
De1-X   109.067                  109.065                      2
De1-Y    96.935                   96.934                      1
De1-Z    99.394                   99.394                      0

Figure 2: Photographing scale transformation-time baseline parallax method [24–27].

where $(\mathrm{cor}\Delta x^{def}, \mathrm{cor}\Delta z^{def})$ are the corrected displacements of deformation points on the reference plane.

According to the photographing scale difference between the reference plane and the object plane, we obtain the actual deformation of the deformation points:

$$\Delta x_{pst\,i}^{def} = \Delta pst_c \cdot \mathrm{cor}\Delta x^{def},\qquad \Delta z_{pst\,i}^{def} = \Delta pst_c \cdot \mathrm{cor}\Delta z^{def}, \tag{5}$$

where $\Delta x_{pst\,i}^{def}$ and $\Delta z_{pst\,i}^{def}$ are the actual spatial deformations of the deformation point on the object plane, $i = 1, 2, 3, \ldots, n$, $\Delta pst_c$ is the coefficient of the photographing scale transformation, and $\Delta pst_c = \mathrm{Depth4}/\mathrm{Depth3}$.

2.3. Accuracy Assessment of DDMIS in the Laboratory. Before the test on the high building, the reliability of DDMIS had to be investigated in the laboratory. We used a Sony-350 camera in this study to monitor the instantaneous deformation of a steel frame when it was impacted by a dumbbell. Table 4 shows the parameters of the Sony-350 camera. Figure 3(b) shows the test site. Points labeled U0–U4 are deformation points. Points labeled C0–C9 are reference points, which formed the reference plane. The test process and impact loads are detailed in [28]. After processing the images, we obtained the measurement accuracy of DDMIS. Table 5 shows that the average absolute accuracy and relative accuracy were 0.28 mm and 1.1‰, respectively. This suggests that the DDMIS in this study can meet the accuracy requirements of high-rise building monitoring.
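Equations (3)–(5) chain three scalar operations per axis, which can be collected in one small routine (a sketch with illustrative variable names, not the system's actual code):

```python
def pst_tbp_deformation(parallax, systematic, m, pst_c):
    """Chain equations (3)-(5): raw pixel parallax of a deformation
    point -> actual deformation on the object plane.

    parallax:   (dpx, dpz) measured parallax of the point (pixels)
    systematic: (dpx0, dpz0) systematic error estimated from the
                reference points after barycentralization (pixels)
    m:          photographing scale on the reference plane (e.g. mm/pixel)
    pst_c:      scale-transformation coefficient Depth4 / Depth3
    Returns (dx, dz) in the same length unit as m.
    """
    cor_px = parallax[0] - systematic[0]   # equation (3), x axis
    cor_pz = parallax[1] - systematic[1]   # equation (3), z axis
    cor_x = m * cor_px                     # equation (4)
    cor_z = m * cor_pz
    return pst_c * cor_x, pst_c * cor_z    # equation (5)
```

For example, a corrected parallax of 2.5 pixels at a scale of 0.4 mm/pixel with a scale-transformation coefficient of 1.25 yields a horizontal deformation of 1.25 mm.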
3. Field Experiment

This information system achieved very good results in the simulation experiments on typical structures, but applying it in practice still requires overcoming many difficulties. For example, the structure of the building itself, the photography scene environment, the changing weather during image capture, and the people and vehicles around the building all have to be investigated to further complete the DDMIS. Thus, this paper tried to use DDMIS to monitor a high building.

3.1. Experiment Site Selection. We chose the main building of Hanyu Jingu, a tall building in Jinan, which is 339 meters high. It is also surrounded by other tall buildings, so it is impossible to set up cameras at close range to capture deformed images at the top of the tallest building.

After several days of peripheral research and coordination, the final image capture location was selected to be a nearby hill less than 200 meters above sea level; there is a manmade cement flat floor in front of the main building of Hanyu Jingu that is suitable for arranging cameras and reference signs.

3.2. Optimization and Design of the Field Schematic Diagram. Since the monitoring distance for this building is longer than that in the laboratory, this distance can fully meet the accuracy requirements for high-definition digital cameras at this stage. However, the arrangement of reference points and the selection of deformation points are difficult. To solve this problem, we improved the experimental principle. We use the image matching-time baseline parallax method to form a reference plane perpendicular to the photography direction by setting a stable reference point not far from the camera, as shown in Figure 4. In the data calculation, the follow-up image is matched with the reference picture, and then the parallax caused by external factors is eliminated.

3.3. Improvement of the Information System Workflow.
Before the data collection, we also improved the workflow of the information system, integrating the field investigation and the preplan setting into the operation process of the information system. The improved scheme is as follows:

(1) Site investigation: conduct an investigation of the building site, and then determine the basic information of the building structure, age, load, local terrain, and so on.

(2) Monitoring point selection: according to the tall building structure and structural mechanics, in addition to other principles, confirm the important and requested monitoring points.

(3) Camera arrangement and deviation correction: we arrange digital cameras in multiple directions and reconcile them before use in order to eliminate the effect of distortion difference, which solves the problem that the digital cameras have no internal or external coordinates.

(4) Data acquisition: we use several digital cameras to collect high-resolution, multiangle and omnidirectional building deformation images. We collect a large amount of photographic data to obtain a normal-distribution confidence interval.

(5) Computer software analysis: the advantage of digital cameras is that they can transmit digital signals; the photographs we take are transferred to the computer in the form of a digital signal. When analysing a large amount of data with our self-developed computer imaging analysis software, the main jobs are content processing and deformation value calculation.

(6) Content processing includes bitmap format conversion, reading reference point pixel position data, reading deformation point pixel position data, calculating the number of reference points, calculating the number of deformation points, and calculating the photograph scale coefficient.
(7) The deformation value is calculated as follows: coordinate centralization, centering the reference point pixel coordinates; calculating the correction coefficients of the reference point inspection; finding the centralized coordinates of the deformation points; finding the positive value of the parallax change of each deformation point; and finding the apparent difference after correction of the deformation point.

(8) An analysis report of the monitoring results is issued on site.

The entire solution process is completed in as little as 15 minutes. The integration of internal and external operations is realized, and the efficiency and accuracy of monitoring are greatly improved over previous efforts. In this paper, we use two cameras to obtain the deformation data. Then, we can verify the accuracy of DDMIS. According to the improved schematic diagram, we set up the reference points C0, C1, C2, and C3, as shown in Figures 4(a) and 4(b). Because the tall building is very high, we cannot place our own deformation-point marks on it, so we also improved the workflow of the information system. First, we take pictures of the tall building and then look for marks on the building itself. These mark locations satisfy the following conditions in the pictures: (1) they have an exact intersection; (2) the point position has a high computer resolution. As Figures 4(a) and 4(b) show, the points we are looking for are U0, U1, and U2.

4. Data Analysis

Through data processing, the measurement accuracy of Cameras 1 and 2 (Tables 6 and 7) and the deformation values of the deformation points (Tables 8 and 9) were obtained. In the test, the pixel displacements of the reference points were supposed to be zero in theory. However, their pixel displacements were not zero in DDMIS. As such, these values were deemed the measurement accuracy of DDMIS. Table 6 shows the average measurement accuracy of Camera 1.
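Steps (6)–(7) amount to removing the parallax observed at the stable reference points and then scaling to the object plane. The following sketch is an interpretation of that calculation, under the assumption (consistent with Section 3.2) that the external parallax is common to all points in a photo; all names are illustrative, since the paper's software is not public:

```python
def corrected_deformation(ref_disp, def_disp, pst_c):
    """ref_disp: (x, z) pixel displacements of the reference points
    (zero in theory; non-zero values reflect camera shake and other
    external parallax). def_disp: raw (x, z) pixel displacements of
    the deformation points. pst_c: photographing scale coefficient."""
    n = len(ref_disp)
    # Estimate the common parallax as the mean displacement of the
    # reference plane over all reference points.
    px = sum(x for x, _ in ref_disp) / n
    pz = sum(z for _, z in ref_disp) / n
    # Remove the parallax, then scale to the object plane (Eq. (5)).
    return [(pst_c * (x - px), pst_c * (z - pz)) for x, z in def_disp]
```

With this interpretation, a deformation point that moves exactly like the reference plane yields zero corrected deformation, which matches the role of the reference points as a stability datum.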
Table 7 shows the average measurement accuracy of Camera 2. Table 6 shows that the average displacements of C0–C3 were 1.17 pixels, 0.74 pixels, 0.59 pixels, and 0.28 pixels, respectively; Table 7 shows that the average displacements of C0–C3 were 1.16 pixels, 0.99 pixels, 0.34 pixels, and 0.54 pixels, respectively. The measurement accuracy of DDMIS reached the subpixel level in the field experiment. Thus, the deformation values of the deformation points in Tables 8 and 9 are reliable. In order to assess the health of the high-rise building on the test field, the deformation curves of the high building (Figures 4–8) were obtained in real time by DDMIS; they are useful for studying the dynamic properties of high-rise buildings.

Figure 3: Impact test on a steel frame [28].

Table 4: Parameters of the Sony-350 camera [28].
Type: Sony DSLR A350 (Sony-350); Sensor: CCD; Sensor scale: 23.5 × 15.7 mm; Focal length: 35 mm (27–375); Active pixels: 4592 × 3056 pixels.

Table 5: Measurement accuracy [28].
Line      Actual length (mm)   Measured length (mm)   Absolute accuracy (mm)   Relative accuracy (‰)
U2–U3     296.80               296.90                 0.10                     0.34
U3–U4     278.80               278.60                 0.20                     0.72
C2–C3     241.40               241.20                 0.20                     0.83
C6–C7     253.90               253.30                 0.60                     2.40
C7–C8     225.10               225.40                 0.30                     1.30
Average                                               0.28                     1.10

In Figure 6, deformation points U0, U1, and U2 are in elastic deformation in the X direction. The maximum deformation values of deformation points U0, U1, and U2 are 0.89 pixels (on Photo 5), 1.88 pixels (on Photo 10), and 2.17 pixels (on Photo 9), respectively. High-rise buildings are always swaying, and the sway becomes more violent with increasing height. Thus, in the X direction, the monitoring results are consistent with the vibration rule of high-rise buildings. In Figure 5, deformation points U0, U1, and U2 are in elastic deformation in the X direction.
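The accuracy figures in Table 5 follow from two simple definitions (an assumption on my part, but one consistent with every row shown): absolute accuracy is the difference between measured and actual length, and relative accuracy is that difference divided by the actual length, expressed in per mille:

```python
def accuracies(actual_mm: float, measured_mm: float):
    """Return (absolute accuracy in mm, relative accuracy in per mille),
    assuming: absolute = |measured - actual|, relative = absolute/actual."""
    absolute = abs(measured_mm - actual_mm)
    relative_permille = absolute / actual_mm * 1000.0
    return absolute, relative_permille
```

For line U2–U3, for example, this reproduces the tabulated 0.10 mm and 0.34‰.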
Figure 4: (a) Deformation curves of deformation points in the comprehensive direction from Camera 2. (b) High-building global deflection curves from Camera 1.

Figure 5: Deformation curves of deformation points in the X direction from Camera 2.

Table 6: Measurement accuracy of Camera 1 (pixel).
Point     C0            C1            C2            C3
X / Z     0.80 / 0.85   0.65 / 0.37   0.22 / 0.55   0.06 / 0.27
Average   1.17          0.74          0.59          0.28

Table 7: Measurement accuracy of Camera 2 (pixel).
Point     C0            C1            C2            C3
X / Z     0.63 / 0.98   0.21 / 0.97   0.15 / 0.30   0.49 / 0.22
Average   1.16          0.99          0.34          0.54

Table 8: Deformation data obtained with Camera 1.
Photo   DX0     DZ0     DX1     DZ1     DX2     DZ2
1       −0.89   −0.65   −1.33   −0.95   −1.67   −2.17
2        0.02    0.76   −0.27    0.62   −0.49   −0.49
3        0.02    0.26   −0.26    1.12   −0.49    0.00
4       −0.88    0.00   −1.32    1.00   −1.66    0.00
5        0.76   −0.65   −0.38   −0.95   −0.49   −0.17
6       −0.15    0.26   −1.44    0.12   −0.67    0.00
7        0.35   −0.74   −0.95   −0.88   −1.17   −1.00
8       −0.65    0.26   −0.95    0.12   −1.17    0.01
9       −0.39    0.26   −1.83    0.12   −2.17   −0.99
10       0.26    0.67   −1.88   −0.32   −1.00   −0.31

Table 9: Deformation data obtained with Camera 2.
Photo   DX0     DZ0     DX1     DZ1     DX2     DZ2
1       −0.38   −0.13   −0.38    0.48    0.62   −0.59
2       −0.86   −0.63   −0.72    0.22    0.39    0.12
3        0.62   −0.23    0.62   −1.06    1.63   −0.44
4       −0.23    0.63    0.78    1.68    0.78    0.18
5       −0.23   −0.50    0.78    1.28    1.79   −0.44
6        1.41    1.00    2.70    0.90    3.92    2.31
7        0.28    1.48    1.29    2.53    2.29    3.04
8        0.78    1.00   −0.23    0.90    1.79    1.30
9        1.79    1.00    1.79    0.90    2.79    0.29
10       0.92   −0.01    1.06    1.91    2.18   −0.72

The maximum deformation values of deformation points U0, U1, and U2 are 1.79 pixels (on Photo 6), 2.70 pixels (on Photo 6), and 3.92 pixels (on Photo 6), respectively.
Thus, in the X direction, the monitoring results are also consistent with the vibration rule of high-rise buildings. In Figure 7, deformation points U0, U1, and U2 are in elastic deformation in the comprehensive direction. The maximum deformation values of deformation points U0, U1, and U2 are 1.10 pixels (on Photo 1), 1.91 pixels (on Photo 10), and 2.74 pixels (on Photo 1), respectively. Thus, in the comprehensive direction, the monitoring results are also consistent with the vibration rule of high-rise buildings. In Figure 4(a), deformation points U0, U1, and U2 are in elastic deformation in the comprehensive direction. The maximum deformation values of deformation points U0, U1, and U2 are 2.05 pixels (on Photo 6), 2.84 pixels (on Photo 6), and 4.55 pixels (on Photo 6), respectively. Thus, in the comprehensive direction, the monitoring results are also consistent with the vibration rule of high-rise buildings. In Figure 4(b), the global deformation of the Hanyu Jin'gu high-rise building is elastic in the X direction. From Photo 1 to Photo 6, the shape of the high-rise building changed from a line to a left semiparabola and a parabola. From Photo 7 to Photo 10, the shape changed from a left semiparabola to a line and a parabola. This is very different from the vibration rule of a flexible cantilever beam. In Figure 8, the global deformation of the Hanyu Jin'gu high-rise building is elastic in the X direction. From Photo 1 to Photo 6, the shape changed from a line to a right semiparabola and a line. From Photo 7 to Photo 10, the shape changed from a line to a parabola and a right semiparabola. This is also very different from the vibration rule of a flexible cantilever beam. According to the monitoring results, we found that the global deformation of the Hanyu Jin'gu high-rise building is complex. Its dynamic properties are completely different from those of a low-rise building.
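The "comprehensive direction" values appear to be the Euclidean combination of the X and Z components, since this reproduces the per-point averages in Tables 6 and 7. A one-line sketch (the function name is my own):

```python
import math

def comprehensive(x: float, z: float) -> float:
    """Combine X and Z pixel displacements into a single magnitude,
    assumed to be the Euclidean norm sqrt(x**2 + z**2)."""
    return math.hypot(x, z)
```

For Camera 1's point C0 (0.80 and 0.85 pixels), this gives about 1.17 pixels, matching Table 6.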
This paper also provides technical support for mastering the dynamic characteristics of high-rise buildings and their real-time security early warning. Note that it is impossible to monitor the high-rise building with very high accuracy, since the measurement distance is as far as 700 m and digital camera pixels are limited. We used two cameras to monitor the Hanyu Jin'gu high-rise building to make the monitoring results reliable.

Figure 6: Deformation curves of deformation points in the X direction from Camera 1.
Figure 7: Deformation curves of deformation points in the comprehensive direction from Camera 1.
Figure 8: High-building global deflection curves from Camera 2.

5. Conclusions

This study uses monocular digital photography based on the photographing scale transformation-time baseline parallax (PST-TBP) method to monitor the instantaneous dynamic global deformation of a high-rise building in its natural state. Two digital cameras were used to monitor the high-rise building so that their results would support each other. Deformation curves of the high-rise building were depicted by the dynamic deformation monitoring information system (DDMIS) to study its dynamic properties. Through processing the image sequences of the high-rise building, the following conclusions are obtained:

(1) The measurement accuracy of DDMIS reached the subpixel level in the X and Z directions.
From Camera 1, the average displacements of C0–C3 in the X and Z directions were 0.80 and 0.85 pixels, 0.65 and 0.37 pixels, 0.22 and 0.55 pixels, and 0.06 and 0.27 pixels, respectively. From Camera 2, the average displacements of C0–C3 were 0.63 and 0.98 pixels, 0.21 and 0.97 pixels, 0.15 and 0.30 pixels, and 0.49 and 0.22 pixels, respectively.

(2) Deformation points on the high-rise building are in elastic deformation in the X and comprehensive directions. The high-rise building is always swaying, and the sway becomes more violent with increasing height. The maximum deformation values of deformation point U2 in the X and comprehensive directions are 3.92 and 4.55 pixels, respectively.

(3) The global deformation of the Hanyu Jin'gu high-rise building is complex, and its dynamic properties are completely different from those of a low-rise building. In its natural state, the shape of the Hanyu Jin'gu high-rise building changed back and forth from an oblique line to a semiparabola and a parabola.

In conclusion, the Hanyu Jin'gu high-rise building was in good health at the time of testing. This study proves that DDMIS can depict the deformation trend curves of high-rise buildings, which are useful for studying their dynamic properties. Moreover, the deformation trend curves can also be used to assess the health of a high-rise building and warn of possible danger. This information is essential for making decisions regarding the health of high-rise buildings. This experiment also provides valuable experience for the on-site analysis of more types of buildings, and it prompts the research group to further improve the safety deformation monitoring information system. With the advent of the era of computer intelligence, many intelligent devices have appeared.
For example, the clarity of digital cameras is improving, and it is now possible to connect to a wireless network to transmit the captured images to a computer system in real time. The realization of this technology is conducive to real-time monitoring of the health status of buildings. At present, the functions of various intelligent handheld devices are becoming increasingly mature. A high-end smartphone or palmtop tablet already has an image capture function comparable to that of a professional digital camera, and these smart devices can run image processing software at high speed. To use such intelligent devices in the future, we recommend incorporating our building safety information system into application software running on smart devices to simplify the operation of this system. This technology can be applied to the health monitoring of various buildings at any time or place and provides a powerful guarantee for human health and safety.

Data Availability
The data used to support the findings of this study are available from the corresponding author upon request.

Disclosure
Chengxin Yu, Mingzhi Chen, and Guojian Zhang are the first, second, and third corresponding authors, respectively.

Conflicts of Interest
The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments
The authors gratefully acknowledge the financial support from the Science and Technology Project of Shandong Province of China (Grant no. 2010GZX20125).

References
[1] T.-H. Yi, H.-N. Li, and M. Gu, "Recent research and applications of GPS-based monitoring technology for high-rise structures," Structural Control and Health Monitoring, vol. 20, no. 5, pp. 649–670, 2013.
[2] T.-H. Yi, H.-N. Li, and M. Gu, "Optimal sensor placement for health monitoring of high-rise structure based on genetic algorithm," Mathematical Problems in Engineering, vol. 2011, Article ID 395101, 12 pages, 2011.
[3] T.-H. Yi, H.-N. Li, and M.
Gu, "A new method for optimal selection of sensor location on a high-rise building using simplified finite element model," Structural Engineering and Mechanics, vol. 37, no. 6, pp. 671–684, 2011.
[4] Z. Lu, D. Wang, S. F. Masri, and X. Lu, "An experimental study of vibration control of wind-excited high-rise buildings using particle tuned mass dampers," Smart Structures and Systems, vol. 18, no. 1, pp. 93–115, 2016.
[5] M. Kuwabara, S. Yoshitomi, and I. Takewaki, "A new approach to system identification and damage detection of high-rise buildings," Structural Control and Health Monitoring, vol. 20, no. 5, pp. 703–727, 2013.
[6] J.-W. Park, J.-J. Lee, H.-J. Jung, and H. Myung, "Vision-based displacement measurement method for high-rise building structures using partitioning approach," NDT & E International, vol. 43, no. 7, pp. 642–647, 2010.
[7] M. H. Rafiei and H. Adeli, "A novel machine learning-based algorithm to detect damage in high-rise building structures," The Structural Design of Tall and Special Buildings, vol. 26, pp. 1–11, 2017.
[8] I. Venanzi, F. Ubertini, and A. L. Materazzi, "Optimal design of an array of active tuned mass dampers for wind-exposed high-rise buildings," Structural Control and Health Monitoring, vol. 20, no. 6, pp. 903–917, 2013.
[9] J.-H. Lee, H.-N. Ho, M. Shinozuka, and J.-J. Lee, "An advanced vision-based system for real-time displacement measurement of high-rise buildings," Smart Materials and Structures, vol. 21, no. 12, Article ID 125019, 2012.
[10] Z. C. Luo, Y. Q. Chen, and Y. X. Liu, "Application of GPS in the simulation study of dynamic characteristics of tall buildings," Journal of Wuhan Technical University of Surveying and Mapping, vol. 25, pp. 100–104, 2000.
[11] J. J. Lee and M. Shinozuka, "A vision-based system for remote sensing of bridge displacement," NDT & E International, vol. 39, no. 5, pp. 425–431, 2006.
[12] C. L. Kuang, J. S. Zhang, F. H. Zeng, and W. J.
Dai, "Using GPS technology to monitor dynamic response characteristics of high-rise building loading by typhoon," Journal of Geodesy and Geodynamics, vol. 32, pp. 139–143, 2012.
[13] T. Luhmann, S. Robson, S. Kyle, and I. Harley, Close Range Photogrammetry, Wiley, Hoboken, NJ, USA, 2007.
[14] R. Jiang, D. V. Jáuregui, and K. R. White, "Close-range photogrammetry applications in bridge measurement: literature review," Measurement, vol. 41, no. 8, pp. 823–834, 2008.
[15] H.-G. Maas, "Photogrammetric techniques for deformation measurements on reservoir walls," in Proceedings of the IAG Symposium on Geodesy for Geotechnical and Structural Engineering, pp. 319–324, Eisenstadt, Austria, 1998.
[16] J. Behmann, A.-K. Mahlein, S. Paulus, H. Kuhlmann, E.-C. Oerke, and L. Plümer, "Calibration of hyperspectral close-range pushbroom cameras for plant phenotyping," ISPRS Journal of Photogrammetry & Remote Sensing, vol. 106, pp. 172–182, 2015.
[17] B. Caprile and V. Torre, "Using vanishing points for camera calibration," International Journal of Computer Vision, vol. 4, no. 2, pp. 127–139, 1990.
[18] M. Gašparović and D. Gajski, "Two-step camera calibration method developed for micro UAV's," in Proceedings of the ISPRS—International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, vol. XLI-B1, pp. 829–833, 2016.
[19] T. Luhmann, C. Fraser, and H.-G. Maas, "Sensor modelling and camera calibration for close-range photogrammetry," ISPRS Journal of Photogrammetry and Remote Sensing, vol. 115, pp. 37–46, 2016.
[20] G. P. Mateo and J. Luka, "Gimbal influence on the stability of exterior orientation parameters of UAV acquired images," Sensors, vol. 17, no. 2, pp. 401–416, 2017.
[21] X. U. Fang, "The monitor of steel structure bend deformation based on digital photogrammetry," Editorial Board of Geomatics & Information Science of Wuhan University, vol. 3, pp. 256–260, 2001, in Chinese.
[22] J. I. Jeong, S. Y.
Moon, S. G. Choi, and D. H. Rho, "A study on the flexible camera calibration method using a grid type frame with different line widths," in Proceedings of the 41st SICE Annual Conference, SICE 2002, vol. 2, pp. 1319–1324, Osaka, Japan, August 2002.
[23] C. Mingzhi, Y. ChengXin, X. Na, Z. YongQian, and Y. WenShan, "Application study of digital analytical method on deformation monitor of high-rise goods shelf," in Proceedings of the 2008 IEEE International Conference on Automation and Logistics, pp. 2084–2088, Qingdao, China, September 2008.
[24] G. Zhang, G. Guo, C. Yu, and L. Li, "Monitoring dynamic global deflection of a bridge by monocular digital photography," Civil Engineering Journal, vol. 27, no. 2, pp. 168–182, 2018.
[25] G. Zhang, G. Guo, C. Yu, L. Li, S. Hu, and X. Wang, "Monitoring instantaneous dynamic displacements of masonry walls in seismic oscillation outdoors by monocular digital photography," Mathematical Problems in Engineering, vol. 2018, Article ID 4316087, 15 pages, 2018.
[26] G. Zhang, C. Yu, G. Guo et al., "Monitoring sluice health in vibration by monocular digital photography and a measurement robot," KSCE Journal of Civil Engineering, vol. 23, no. 6, pp. 2666–2678, 2019.
[27] G. Zhang, G. Guo, Y. N. Lv, and Y. Gong, "Study on the strata movement rule of the ultrathick and weak cementation overburden in deep mining by similar material simulation: a case study in China," Mathematical Problems in Engineering, vol. 2020, Article ID 7356740, 21 pages, 2020.
[28] G. Zhang, G. Guo, L. Li, and C. Yu, "Study on the dynamic properties of a suspended bridge using monocular digital photography to monitor the bridge dynamic deformation," Journal of Civil Structural Health Monitoring, vol. 8, no. 4, pp. 555–567, 2018.

European journal of American studies, Reviews 2011-2

Mick Gidley, Photography and the USA.
Theodora Tsimpouki

ISSN: 1991-9336. Publisher: European Association for American Studies.
Electronic reference: Theodora Tsimpouki, « Mick Gidley, Photography and the USA. », European journal of American studies [Online], Reviews 2011-2, document 5, Online since 02 September 2011. URL: http://journals.openedition.org/ejas/9168

REFERENCES
London: Reaktion Books, 2011. Pp. 184. ISBN 978 18189 770 1.

Mick Gidley's Photography and the USA is premised on the conviction that there is "a symbiotic connection" between the medium of photography and the American nation, especially in the aftermath of the Second World War. Given that Photography and the USA belongs to a series of books under the general title Exposures, designed to explore photography from thematic perspectives or in relation to significant nationalities (i.e. "photography and literature," "photography and cinema," "photography and science," but also "photography and Australia," "photography and Africa," "photography and Italy"), definitional problems underlying the two referents of the book's title need not be addressed here. There is no interrogation of photography as a discursive concept, nor is there an attempt to define the USA in terms other than spatial-geographical ones. As Gidley notes at the outset, "whether or not such a causal and one-to-one connection between nation and the medium is justified, the link itself exists" and, therefore, the approach adopted is "on photography and the USA" (7).
Nevertheless, as the distinguished literary and cultural studies scholar that he is, Gidley returns, often implicitly rather than through explicit statements and analyses, to questions of definition, and addresses the role of photography in the formation of American national identity as well as the historical value of photographs as visual records of the American past.

Organized around ideas of technology, history and the nature of the documentary, Photography and the USA may be read as a brief history of, or an introduction to, American photography from the nineteenth century to the present day, while at the same time it explores the role of the photographic image in the visual culture of the USA. As is most appropriate, the book begins with a cultural reading of American contributions to developments in photographic technology, primarily during the nineteenth century. One of them was the mass mechanization of the reproduction of photographs (1880), while yet another, a few years later, was George Eastman's production of the first roll film to equip his new, inexpensive Kodak camera. Twentieth-century contributions to the medium include the development of photographically illustrated magazines, which promoted the photo-essay or story and gave rise to major exponents of the new form, such as W. Eugene Smith, Margaret Bourke-White or Gordon Parks. With the advent of digital photography, it has become impossible to adequately describe the stupendous growth of the medium, its cultural significance and its plurality of practices, but Gidley takes up the challenge with impressive eloquence and erudition.

One such area of investigation involves the ways in which photography has intersected with some of the major themes of American history, including the Western frontier, immigration, race and racism, the rise of the city and the growth of consumerism.
As "documents of the American life" in particular historical instances, Gidley selects photographs that seem to have some authenticity not only as visual records but also as social history. Yet, knowing too well that singling out images for discussion from the sheer volume of archival imagery is in itself a critical judgment, he repeatedly emphasizes his own active role as the author of the book in the selection and interpretation of images. Thus, Gidley chooses to discuss some frontier photographs which have become iconic in their depiction of the West. Though many historians nowadays would not necessarily agree with Frederick Jackson Turner's claim that western expansion is the determinant in U.S. history, Gidley contends that Western iconography retains its potency, not only in keeping up with its orthodox, mythologizing status but also in inviting the tradition's subversion, as in the case of Lee Friedlander's famous view of Mt Rushmore or John Pfahl's "altered landscape" of Ansel Adams's Moonrise over Hernandez, New Mexico.

But photographs do not merely record historical events; they are sometimes integral to the meaning of the events, constitutive of their nature, to the extent that they might be termed "symbolic documents." Featuring many important images of Jacob Riis and Lewis Hine on immigrant and child life, of Frances B. Johnston on Indians and African-Americans, and of Edward S. Curtis on Native Americans, the chapter on "Documents" foregrounds the author's belief, echoing Alan Trachtenberg, in the visual power of the photograph to perform a social act. Without underestimating their artistic virtuosity, Gidley acknowledges the influence of the photographers fostered by the FSA project (Walker Evans and Dorothea Lange, in particular) in creating the image of the Depression in the U.S.
After its demise, the FSA became an inspirational force in other projects which undertook to document areas of American life (country courthouses, monuments and other architectural buildings), though the images produced, Gidley reminds us, cannot be separated from the representational tasks assigned to them by the institutions commissioning and circulating them.

The last chapter of the book, aptly entitled "Emblems," features photographs which have an enduring, symbolic significance: it includes presidential faces, the star-spangled banner, JFK, iconic landscapes, etc., all of which once again point to the potency of national visual symbols. Obviously, these photographs' cumulative effect does not depend solely on the subject matter but also on the mode of its visual representation. Here, Gidley's analysis revolves around two distinct, equally popular and simultaneous traditions of photography: the "straight" photography of Alfred Stieglitz or of "Group f/64," which aimed at representing "the thing in itself," and the "mixed modes" tradition, in which the photographic image was openly manipulated and was perceived as such. Though Gidley favors the straight tradition, which he considers dominant partly because it bears witness to abiding themes of American history, he is as insightful in his visual analysis of more experimental and abstract images.

Photography and the USA is an excellent introduction for students of American studies or the visual arts, as well as for the general reader fascinated by the role of the image in American history and culture.

AUTHOR
THEODORA TSIMPOUKI
National and Kapodistrian University of Athens
STUDY PROTOCOL
Open Access

The clinical assessment study of the foot (CASF): study protocol for a prospective observational study of foot pain and foot osteoarthritis in the general population

Edward Roddy1*, Helen Myers1, Martin J Thomas1, Michelle Marshall1, Deborah D'Cruz1, Hylton B Menz1,2, John Belcher1, Sara Muller1 and George Peat1

Abstract

Background: Symptomatic osteoarthritis (OA) affects approximately 10% of adults aged over 60 years. The foot joint complex is commonly affected by OA, yet there is relatively little research into OA of the foot compared with other frequently affected sites such as the knee and hand. Existing epidemiological studies of foot OA have focussed predominantly on the first metatarsophalangeal joint at the expense of other joints. This three-year prospective population-based observational cohort study will describe the prevalence of symptomatic radiographic foot OA, relate its occurrence to symptoms, examination findings and lifestyle factors, describe the natural history of foot OA, and examine how it presents to, and is diagnosed and managed in, primary care.

Methods: All adults aged 50 years and over registered with four general practices in North Staffordshire, UK, will be invited to participate in a postal Health Survey questionnaire. Respondents to the questionnaire who indicate that they have experienced foot pain in the preceding twelve months will be invited to attend a research clinic for a detailed clinical assessment.
This assessment will consist of: clinical interview; physical examination; digital photography of both feet and ankles; plain x-rays of both feet, ankles and hands; ultrasound examination of the plantar fascia; anthropometric measurement; and a further self-complete questionnaire. Follow-up will be undertaken in consenting participants by postal questionnaire at 18 months (clinic attenders only) and three years (clinic attenders and survey participants), and also by review of medical records.

Discussion: This three-year prospective epidemiological study will combine survey data, comprehensive clinical, x-ray and ultrasound assessment, and review of primary care records to identify radiographic phenotypes of foot OA in a population of community-dwelling older adults, and describe their impact on symptoms, function and clinical examination findings, and their presentation, diagnosis and management in primary care.

Background

Symptomatic osteoarthritis (OA) is common in the general population, affecting the daily lives of an estimated 10% of people aged over 60 years [1]. It has a major impact on the quality of later life (OA is one of the ten leading causes of disability-adjusted life years [2]), on health care systems and costs (e.g. an annual GP consultation rate of 250 per 10,000 persons aged 15 years and over [3]), and on economic productivity [4]. An ageing population and the rising prevalence of important causes of OA (e.g. obesity) ensure that it is an increasing challenge for the future [5].

The foot is the least studied joint complex affected by OA [6]. The prevalence of foot pain, problems and deformities (hallux valgus, arch deformities, hind-foot valgus) is high in community-dwelling older adults [7-12], and these contribute to locomotor disability [13-16], poor balance and risk of falling [17-19]. However, the contribution of foot OA within this is unclear. The first metatarsophalangeal joint (1st MTPJ) was
[* Correspondence: e.roddy@cphc.keele.ac.uk. 1 Arthritis Research UK Primary Care Centre, Primary Care Sciences, Keele University, Staffordshire, ST5 5BG, UK. Full list of author information is available at the end of the article. Roddy et al. Journal of Foot and Ankle Research 2011, 4:22. http://www.jfootankleres.com/content/4/1/22. © 2011 Roddy et al; licensee BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.]

The first metatarsophalangeal joint (1st MTPJ) was included in early descriptions of primary generalised OA [20], where it was shown to be relatively strongly associated with symptoms [21]. However, there are few examples internationally of epidemiological research that will extend our understanding of foot OA [6,22,23]. The recent publication of a validated atlas for scoring OA not only at the 1st MTPJ but also at the 1st and 2nd cuneo-metatarsal joints (CMJ), the navicular-1st cuneiform joint (NCJ) and the talo-navicular joint (TNJ) [24] now provides a basis for investigating patterns of radiographic foot OA, and their relation to impairment (e.g. pain and deformity), activity limitation and participation restriction.

The majority of ongoing formal healthcare for people with OA is provided in primary care. Peripheral joint pain is a common presentation to the primary care physician by older adults [25] and OA is one of the most frequently made diagnoses [26], yet there have been few systematic attempts to link defined clinical phenotypes with the diagnosis of OA in primary care [27].
Such research is needed to understand which phenotypes are seen by general practitioners, which are recognised as OA, and at what stage of development they are presented and recognised. Such research could form the basis for improved recognition, assessment and management of OA in primary care.

In addition to the questions of what phenotypes present to primary care and how they are managed, a crucially important issue is what effect primary care management has on outcome. Non-consultation for peripheral joint problems is common. Approximately 80% of those with musculoskeletal foot problems do not appear to consult their GP over prolonged periods of time (three years) [28]. Part of this is likely to be related to the belief, pervasive among both the public and practitioners, that "nothing can be done". Furthermore, despite randomised controlled trial evidence of the short-term efficacy of primary care treatment for some peripheral joint problems [29], there are few investigations of the long-term effect of primary care consultation or OA management on impairment, activities or participation.

In summary, there is a paucity of research evidence concerning the radiographic phenotypes of foot OA and their impact on symptoms, clinical features, activity limitation and participation restriction. Important questions concerning how clinical phenotypes relate to the diagnosis of OA in primary care, and how the outcome of foot pain and OA is influenced by primary care consultation, have been under-researched in relation to the foot but also at other joint sites. This prospective, observational, cohort study will combine unselected population sampling of older adults, self-reported survey data, comprehensive clinical and radiographic assessment, and linkage to computerised primary care records, to address these issues over a three-year period.
It is designed to complement earlier studies of knee pain and OA [30] and hand pain, problems and OA [31] and permit combining of data across all three cohorts as well as direct comparison between them.

The aims of the study are to:

(i) Describe the frequency and pattern of co-occurrence of radiographic features of symptomatic OA in the following foot joints: the 1st MTPJ, the 1st and 2nd CMJs, the NCJ and the TNJ.
(ii) Relate the occurrence of radiographic OA, described above, cross-sectionally to foot pain and disability, foot deformities, and soft tissue problems on physical examination. The associations between foot OA, foot pain, disability and footwear will also be examined.
(iii) Determine prospectively the factors that predict clinical deterioration, for example, radiographic OA, footwear characteristics, pain/OA at other sites, and psychosocial factors.
(iv) Identify which foot pain phenotypes present to primary care and are diagnosed in this setting.
(v) Describe the patterns of self-care and primary health care use for foot OA.
(vi) Model the effects of care on the outcome of severe foot pain.

Methods

Study design

The study is a three-year population-based prospective observational cohort study. Ethical approval for all phases of the study has been obtained from Coventry Research Ethics Committee (REC reference number: 10/H1210/5). Adults aged 50 years and over registered with four separate local general practices will be invited to participate in the study, irrespective of consultation (Figure 1).
Data collection will be in five phases:

Phase 1: Baseline postal Health Survey questionnaire
Phase 2: Baseline Clinical Assessment Study of the Foot (CASF)
Phase 3: Review of general practice medical records
Phase 4: Follow-up mailed survey at 18 months (Phase 2 participants only)
Phase 5: Follow-up mailed survey at 3 years (Phase 1 and Phase 2 participants)

Phase 1: Baseline postal Health Survey questionnaire

All adults aged 50 years and over registered with four local general practices (mailed population approximately 9000 adults) will be mailed a letter of invitation from their general practitioner, a Participant Information Sheet, a Health Survey questionnaire, and a pre-paid return envelope. The lead general practitioner (GP) at each practice will be invited to identify potentially vulnerable patients (e.g. dementia, severe or terminal illnesses) they feel should be excluded from the study. Practice lists will be screened prior to mailing to ensure that addresses are up to date and to exclude any recent deaths or departures from the practice list. Health Survey questionnaires will be mailed in batches (n = 500) to ensure regular recruitment to research clinics (Phase 2) and to limit the interval between questionnaire completion and clinic attendance. Pilot cognitive interviews have been undertaken with members of the Research Centre's Research User Group to test the Health Survey questionnaire's layout, readability, content, language and length.
The questionnaire will be divided into five main sections: (i) general health (including generic measures of physical function, psychosocial factors and lifestyle [32-36] (Additional File 1: Appendix 1)); (ii) specific health problems including musculoskeletal co-morbidity and pain [37,38]; (iii) questions concerning the presence [39], duration, location [14], severity [40], and impact [41,42] of foot pain; (iv) demographic and socioeconomic characteristics [43,44]; and (v) employment (Table 1). Non-responders to the questionnaire will be sent a reminder postcard after two weeks. Those who do not respond to the reminder postcard will be sent a repeat questionnaire and Participant Information Sheet with a further covering letter four weeks after the initial mailing. Questionnaires will ask for consent (i) to contact participants again by post and/or (ii) to review medical records. Responders will be given the option of providing their telephone number for further contact.

Phase 2: Baseline Clinical Assessment Study of the Foot (CASF)

Responders to the Health Survey questionnaire who report experiencing pain in or around the foot within the last twelve months and who provide written consent to further contact will be sent a letter of invitation to attend a research clinic. The letter of invitation will be accompanied by a Participant Information Sheet providing details of the study.
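The Phase 1 mailing rules described above (reminder postcard two weeks after the initial mailing, repeat questionnaire four weeks after it) amount to a simple date offset schedule. The following is an illustrative sketch only, not part of the study's actual mailing systems; the function name and dictionary keys are invented for the example.

```python
from datetime import date, timedelta

def reminder_schedule(initial_mailing: date) -> dict:
    """Return the follow-up dates implied by the Phase 1 protocol:
    a reminder postcard at two weeks and a repeat questionnaire
    (with Participant Information Sheet and covering letter) at
    four weeks after the initial mailing."""
    return {
        "initial_questionnaire": initial_mailing,
        "reminder_postcard": initial_mailing + timedelta(weeks=2),
        "repeat_questionnaire": initial_mailing + timedelta(weeks=4),
    }

# Example: a batch mailed on an arbitrary illustrative date.
schedule = reminder_schedule(date(2010, 6, 1))
```

A helper like this would be applied per batch of 500 questionnaires, since batches are mailed on a rolling basis to keep clinic recruitment steady.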
Figure 1. Flowchart of study procedures (data collection points in bold): all adults aged 50 years and over registered with 4 general practices in North Staffordshire → Phase 1: mailed Health Survey questionnaire (exclusions; non-respondents) → respondents reporting foot pain in the last 12 months invited to Phase 2: Clinical Assessment Study of the Foot ("clinic" population; respondents without foot pain remain in the "survey" population; attrition from non-response, declining or failing to attend an appointment, non-consent, and losses to follow-up) → Phase 4: mailed 18-month Follow-up Survey (losses to follow-up) → Phase 5: mailed 3-year Follow-up Survey; Phase 3: medical record review, including the 18 months prior to clinic attendance.

Table 1. Content of baseline postal Health Survey questionnaire (concept: measurement method; detail)

Section A: General health
- Perceived general health: MOS SF12 [33]; physical and mental component summary scores
- Physical function: MOS SF36 [32]; physical functioning sub-scale
- Anxiety and depression: Hospital Anxiety and Depression Scale [34]; anxiety and depression sub-scales
- Participation: Keele Assessment of Participation (KAP) [35,73]; 5 items assessing person-perceived, performance-based participation
- Support: emotional support, single question (yes, no, no need); physical support, single question (yes, no, no need)
- Life-style: smoking status (current, previous, never)
- Anthropometric characteristics: self-reported height; self-reported weight
- Footwear: toe-box breadth line drawings (Additional File 1: Appendix 1), type most frequently worn by decade; heel height line drawings (females only) (Additional File 1: Appendix 1), type most frequently worn by decade
- Physical activity: Short-Form International Physical Activity Questionnaire (IPAQ) [36]; frequency and duration of 4 activities performed during previous 7 days

Section B: Specific health problems
- Hallux valgus: self-completed line drawings [37]; 5 line drawings for each foot depicting increasing severity of hallux valgus
- Co-morbidities: falls, fractures, chest problems, heart problems, deafness, problem with eyesight, raised blood pressure, diabetes, stroke, cancer, liver disease, kidney disease, poor circulation, rheumatoid arthritis; yes, for any that apply
- Intermittent claudication: Edinburgh Claudication Questionnaire [38]; pain or discomfort in legs when walking, pain characteristics, pain location (leg manikin)
- Bodily pain: self-completed body manikin; "In the past 4 weeks, have you had pain that has lasted for one day or longer in any part of your body? If yes, shade pain location on manikin"
- Site-specific questions: "Have you had any problems with your hands or pain in your hands/hips/knees/feet in the last year?"

Section C: Foot pain
- Foot pain characteristics: side of pain (both, right, left); duration in past 12 months (< 7 days, 1-4 weeks, 1-3 months, 3+ months); foot injury ("Have you ever injured your foot badly enough to see a doctor about it?": no, right foot only, left foot only, both feet); foot pain, aching, stiffness in last month [39] (no days, few days, some days, most days, all days); location, self-completed foot manikin [14] ("In the past month, have you had any ache or pain that has lasted for one day or longer in your feet? If yes, shade pain location on foot manikin"); foot pain intensity in last month [40] (0-10 NRS with verbal anchors: no pain, pain as bad as can be)
- Complaint-specific functioning: Manchester Foot Pain and Disability Index [41]; 19 items across four constructs: pain, function, appearance, work/leisure
- Coping strategies for foot pain: foot-related fatigue, single item (none of the time, on some days, on most/every day(s)); single-item coping strategies questionnaire [42] (0-6 NRS with verbal anchors: never do that, always do that)
- Healthcare use: medication use in last month (for foot pain, for other pain); consultation in last 12 months for foot pain (general practitioner, physiotherapist, podiatrist, chiropodist (NHS and private))

Section D: Demographic/socioeconomic characteristics
- Demographic characteristics: date of birth; gender; marital status (married, separated, divorced, widowed, cohabiting, single)

Participants will be asked to telephone the Research Centre if they are interested in taking part in order to book an appointment. Non-responders to this initial invitation letter will be sent a reminder invitation approximately two weeks later. Those willing to take part in the study will be booked into the next convenient appointment and, if necessary, travel arrangements (taxi) made. Postal confirmation of the appointment will be made by letter and then by a reminder postcard shortly prior to the appointment. The postcard will be mailed in an envelope to maintain confidentiality about the nature of the appointment. Participants who do not attend the clinic for their specified appointment will be sent another letter asking them to re-contact the Research Centre and book another appointment if they still wish to participate.

Assessment clinics for the study will be conducted twice-weekly in a local NHS Trust community rheumatology hospital. A maximum of 12 appointments per week are scheduled.
Each clinic is to be staffed by a Clinic Co-ordinator, a Clinic Support Worker, two trained Health Professionals (podiatrist or physiotherapist) acting as Research Assessors, one trained Research Assessor (physiotherapist, radiographer or nurse) acting as an Ultrasonographer, and two Radiographers.

On arriving at clinic, participants will be issued with a file containing all assessment documentation marked with their unique study number. Prior to commencing the assessment, the procedures outlined in the Participant Information Sheet will be discussed with each participant. Participants will be given the opportunity to ask questions. Written informed consent to take part in the study will be obtained from all participants. Appropriate clothing (shorts) for the assessment will be provided.

Participants will undertake the following standardised assessment: digital photography of both feet and ankles; plain radiography of both feet, ankles and hands; ultrasound of the plantar fascia in both feet; clinical interview; physical assessment of the feet, lower limb and hands; simple anthropometric measurement; and a self-complete questionnaire (Table 2). Each participant's visit is expected to last approximately 2 hours.

Digital photography

Each participant will have three photographs taken by a Research Assessor using a digital camera (Canon Digital IXUS 75: resolution 7.1 megapixels, 3× zoom). Each foot will be imaged separately with the participant standing in a specially designed mirror-box that enables images of the dorsum, medial and lateral aspects of the foot to be captured in a single photograph. An additional posterior view photograph of both feet will be taken with the participant positioned in a self-selected relaxed bipedal stance on a gym step, using a separate camera (Canon PowerShot A480: resolution 10.0 megapixels, 3.3× zoom) mounted on a tripod to the height of the step.
The photograph will be taken at a distance of 40 cm and will capture the heels, ankles and lower limb. To preserve anonymity, participants' faces will not be included in any of the photographs: their unique study number will be placed in each frame. Permission to use anonymised digital images for educational purposes will be sought in the written consent form. Digital photography will take approximately 5 minutes to complete for each participant.

Table 1. Content of baseline postal Health Survey questionnaire (continued)
- Living arrangements: alone, not alone
Socioeconomic characteristics
- Current employment status: employed, not working due to ill-health, retired, unemployed/seeking work, housewife, other
- Current/recent job title: free text
- Current/recent job title of spouse: free text
- Adequacy of income [43]: find it a strain to get by from week to week; have to be careful with money; able to manage without much difficulty; quite comfortably off
- Higher education: yes/no (if yes, age finished full-time education)
- Ethnicity: White UK/European, Afro-Caribbean, Chinese, Asian, African, other
Section E: Work
- Work status: working full-time, part-time, or off work due to ill-health
- Work performance: 0-10 NRS with verbal anchors (worst performance, best performance)
- Work limitation due to a health problem or physical limitation: not affected, changed the way I do the job, reduced the number of hours, currently off work
- Job lock: would like to leave work but can't due to financial needs
MOS SF 12 = Medical Outcomes Study Short Form 12; MOS SF 36 = Medical Outcomes Study Short Form 36; NRS = numerical rating scale

Table 2. Content of clinical assessment: clinical interview, physical examination and self-complete questionnaire (concept: measurement method; detail)

Clinical interview
- Pre-assessment screening: screen for clinical "red flags" (recent significant foot or hand injury; recent sudden change in foot symptoms); screen for joint surgery (history of joint operations)
- Foot pain characteristics: side of pain; comparative severity of bilateral symptoms; duration (within 12 months, 1-5 years, 5-10 years, 10+ years, for each foot); preceding accident/injury (yes/no); foot pain/aching/discomfort in last month (yes/no)
- Foot pain quality: Short-form McGill Pain Questionnaire [47]; 15 sensory and affective descriptors
- Sleep disturbance: self-report; yes/no
- Sensory disturbance: self-reported tingling/numbness/pins and needles; yes/no (for each foot)
- Causal attribution: "What do you think has caused the problem with your foot/feet?"; recorded verbatim
- Diagnostic attribution: "What do you think is the matter with your foot/feet now?"; recorded verbatim
- Foot surgery: details of any foot surgery; nature of surgery; right/left; < 1 year, 1-< 5 years, 5-< 10 years, 10+ years ago
- Foot/ankle injury: details of foot/ankle injury (sprain, fracture, other); right/left; forefoot, mid-foot, heel, ankle; < 1 year, 1-< 5 years, 5-< 10 years, 10+ years ago
- Planned treatment: "Are you waiting for any appointments or treatments for this foot or ankle problem?"; yes/no (free text comments for yes)
- Importance of health problems: "What would you consider to be your two most important health problems at the moment?" [includes foot problem]; recorded verbatim

Physical examination
- Screen for clinical "red flags": acutely swollen, hot, painful feet or hands; yes/no (free text for comments)
- Observation: skin lesions (bunionette, hyperkeratotic lesions, ulcers; plantar and dorsal aspect); toe deformity, MTPJ and interphalangeal joint hyperextension (mallet toe, hammer toe, claw toe, retracted toe); present/absent (great toe); present/absent (lesser toes)
- Palpation: mid-foot bony exostosis (present/absent); plantar fascia tenderness (present/absent; insertion and mid-arch)
- Foot posture: Foot Posture Index [50], six-criterion scoring system; navicular height [49], millimetres; foot length [51], millimetres; arch index [48,49], weight-bearing footprint (the length of the footprint excluding toes is divided into equal thirds; arch index = area of the middle third divided by area of the entire footprint)
- Range of movement (foot/ankle): ankle dorsiflexion (with knee flexed and extended) [53], degrees; subtalar inversion [52], degrees; subtalar eversion [52], degrees; 1st MTP joint dorsiflexion [54], degrees

Table 2. Content of clinical assessment (continued)
- Knee valgus/varus deformity: intercondylar distance, centimetres; intermalleolar distance, centimetres
- Anthropometric measurements: height, metres; weight, kilograms
- Lower limb physical function: Short Physical Performance Battery (SPPB) [57]; standing balance test, timed repeated chair stand test, 4-metre gait speed test
- Hand osteoarthritis: deformity, enlargement, swelling, nodes [55]; observation and palpation: swelling (MCPJ), nodes (PIPJ and DIPJ), deformity and enlargement (1st CMCJ, PIPJ and DIPJ)
- Hand function: power grip strength (Jamar dynamometer) [56], pounds; pinch grip strength (B&L pinch gauge) [56], pounds

Self-complete questionnaire
Section A: Foot pain
- Foot pain chronicity: Chronic Pain Grade [59]; 6 questions (0-10 NRS) and 1 question (4 response options) giving grade I-IV
- Complaint-specific functioning: symptom satisfaction [64]; 5-point Likert scale (very dissatisfied to very satisfied)
Section B: Hand pain and problems
- Hand pain characteristics: hand pain in last 12 months (present/absent); side of pain; duration in past 12 months (< 7 days, 1-4 weeks, 1-3 months, 3+ months); hand pain, aching, stiffness in last month [55] (no days, few days, some days, most days, all days); hand pain intensity in last month [40] (0-10 NRS with verbal anchors: no pain, pain as bad as could be); location, self-completed hand manikin [60] ("In the past month, have you had any ache or pain that has lasted for one day or longer in your hand? If yes, shade location on hand manikin"); AUSCAN [65,66], pain and stiffness sub-scales
- Complaint-specific functioning: AUSCAN [65,66]; physical function sub-scale
- Hand dominance: self-report; right, left, both
- Healthcare use: GP consultation within last 12 months for hand problem
Section C: Hip pain
- Hip pain characteristics: side of pain (both, right, left); duration in past 12 months (< 7 days, 1-4 weeks, 1-3 months, 3+ months); hip pain, aching, stiffness in last month [61] (no days, few days, some days, most days, all days); WOMAC (hip) [62], pain and stiffness sub-scales
- Complaint-specific functioning: WOMAC (hip) [62]; physical function sub-scale
- Healthcare use: GP consultation within last 12 months for hip pain
Section D: Knee pain
- Knee pain characteristics: side of pain (both, right, left); duration in past 12 months (< 7 days, 1-4 weeks, 1-3 months, 3+ months); knee pain, aching, stiffness in last month [63] (no days, few days, some days, most days, all days); WOMAC (knee) [62], pain and stiffness sub-scales

Plain radiography of the feet and hands

Digital radiographs of both feet, ankles and hands will be obtained for all participants. Weight-bearing dorso-plantar and lateral views of each foot will be obtained according to a defined protocol [24] and stored on disc. The participant will stand in a relaxed position with the weight of the body distributed equally. A relaxed position will be achieved by asking the participant to walk on the spot for a few steps and then stand relaxed. For the dorso-plantar view the participant will stand with the plantar aspect of both feet on the detector. The x-ray tube will be angled 15° cranially with a vertical central ray centred at the base of the third metatarsal [24]. For lateral projections the participant will stand on a low platform with the detector positioned at the side of the participant's foot. The x-ray tube will be angled at 90° with a horizontal central ray centred on the base of the first metatarsal [24].
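The arch index used in the clinical examination (Table 2 [48,49]) is the contact area of the middle third of the toe-free footprint divided by the area of the entire footprint. As a minimal sketch of that arithmetic, assuming a hypothetical binary contact mask with rows running heel to forefoot and the toes already excluded (the function name and input format are invented for illustration, not taken from the study's software):

```python
import numpy as np

def arch_index(footprint: np.ndarray) -> float:
    """Arch index = contact area of the middle third of the
    (toe-free) footprint divided by the total contact area.
    `footprint` is a binary (0/1) mask; axis 0 runs heel to forefoot."""
    rows = footprint.shape[0]
    third = rows // 3
    total_area = footprint.sum()
    middle_area = footprint[third:2 * third].sum()
    return float(middle_area) / float(total_area)
```

A fully rectangular print gives an index of one third, while a footprint with no mid-foot contact (a high-arched foot) gives an index of zero; higher values indicate a flatter arch.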
Weight-bearing antero-posterior views of both ankle joints will also be obtained with the participant standing on the low platform. The detector will be positioned behind the participant. The x-ray tube will be angled 90° with a horizontal central ray centred midway between the malleoli [45]. Dorso-palmar views of both hands are to be performed. The palmar aspect of the hand will be placed on the detector with the fingers extended, separated slightly and spaced evenly [31]. A vertical central ray will be centred on the head of the third metacarpal [45]. Each foot, ankle and hand will be imaged separately and the film focus distance will be set at 110 cm for all projections. X-rays will take approximately 20 minutes to complete for each participant.

Ultrasound of the plantar fascia

The ultrasound examination will be performed using a variable frequency 8-13 MHz linear transducer with a Logiq-e ultrasound system (GE Healthcare). The participant will be positioned in a self-selected half-lying position, or a sitting position if the half-lying position cannot be assumed, on a couch with their feet hanging over the end of the couch and ankles dorsiflexed to 90 degrees. Real-time sagittal (longitudinal) imaging of the plantar aponeurosis will be performed with the focus adjusted to the depth of the fascia for each participant. Plantar fascia thickness will be measured at a standard reference point where the plantar fascia crosses the anterior aspect of the inferior border of the calcaneus on the longitudinal view, but at its thickest point in the transverse plane [46]. Three measurements will be taken and recorded on a paper proforma. The Research Assessor performing the ultrasound will be blind to the results of the clinical assessment. The scan will take approximately 10 minutes for each participant. Ultrasound images will be retained and digitally stored at the Research Centre for quality control purposes.
Consent will be sought in the clinic consent process for the use of anonymised images for educational purposes and in presentations.

Clinical interview and physical examination

Participants will be interviewed and examined by a trained Research Assessor who will be blind to the radiographic and sonographic findings. This procedure will comprise three components. Firstly, a standardised clinical interview will be conducted to gather quantitative data relating to foot pain and symptoms in older adults [47], causal and diagnostic attribution, previous injury or surgery, and planned treatment (Table 2). Secondly, a detailed, standardised examination of both feet will be conducted. This will include assessment of skin lesions; common deformities; foot posture, including static arch index [48,49], Foot Posture Index [50], foot length [51] and navicular height [49,51]; and range of movement of subtalar inversion and eversion [52], ankle dorsiflexion [53], and 1st MTPJ dorsiflexion [54] (Table 2). Thirdly, a brief standardised physical examination of both hands and both knees will be conducted (Table 2). This will include assessment of the presence of deformity, enlargement, swelling and nodes in both hands [55]; maximal power and pinch grip strength using a Jamar dynamometer and B&L pinch gauge respectively [56]; and the presence of varus and valgus deformities at both knees. Lower extremity physical performance will also be assessed [57].

Plantar pressures from both feet will be recorded during level barefoot walking using a pressure platform (RS Scan® International, Olen, Belgium). This system consists of a 12 mm thick floor mat (578 mm × 418 mm) incorporating 4096 resistive sensors sampling at a rate of 300 Hz.
Table 2. Content of clinical assessment (continued)
- Complaint-specific functioning: WOMAC (knee) [62]; physical function sub-scale
- Healthcare use: GP consultation within last 12 months for knee pain
AUSCAN = Australian Canadian Osteoarthritis Hand Index; CMCJ = carpometacarpal joint; DIPJ = distal interphalangeal joint; GP = General practice; MTPJ = metatarsophalangeal joint; MCPJ = metacarpophalangeal joint; NRS = numeric rating scale; PIPJ = proximal interphalangeal joint; WOMAC = Western Ontario and McMaster Universities Osteoarthritis Index

The two-step gait initiation protocol will be used, whereby the participant is positioned two step lengths from the front edge of the pressure platform and is instructed to walk in a normal manner, striking the sensor area with the second step [58]. The system will be calibrated at the beginning of each session and recalibrated for each participant's individual weight and shoe size prior to each assessment. The participant will complete several practice trials to allow them to familiarise themselves with the two-step approach and calculate their starting position. Three trials will be recorded for each foot. Maximum force (N), peak pressure (N/cm2) and contact time (ms) will be collected. Footprints obtained will be divided into masks corresponding to the major structural regions of the foot.

Pre-defined protocols for all components of the interview and assessment will be used for standardisation between Research Assessors. Assessment findings will be recorded on a standard form that is to be checked for missing data immediately post-assessment by the Clinic Co-ordinator or Clinic Support Worker. Discussion between Research Assessors and participants about diagnosis and/or appropriate management will be discouraged.
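The three trial-level plantar pressure outcomes named above (maximum force, peak pressure, contact time) can each be derived from the raw 300 Hz frame sequence. The sketch below is illustrative only: the RS Scan software performs this reduction itself, the frame layout is hypothetical, and the per-sensor area constant is an invented placeholder, not a platform specification.

```python
import numpy as np

SAMPLING_RATE_HZ = 300   # sampling rate stated for the platform
SENSOR_AREA_CM2 = 0.25   # hypothetical per-sensor area, for illustration only

def stance_metrics(frames: np.ndarray) -> dict:
    """Reduce one trial to the three outcomes collected in the study.
    `frames` is a hypothetical (time, rows, cols) array of pressures
    in N/cm^2 sampled at SAMPLING_RATE_HZ."""
    # Total vertical force per frame: pressure summed over sensors x area.
    force_per_frame = frames.sum(axis=(1, 2)) * SENSOR_AREA_CM2  # newtons
    contact_frames = force_per_frame > 0
    return {
        "maximum_force_N": float(force_per_frame.max()),
        "peak_pressure_N_per_cm2": float(frames.max()),
        "contact_time_ms": float(contact_frames.sum()) / SAMPLING_RATE_HZ * 1000.0,
    }
```

In practice these metrics would also be computed per anatomical mask (the "major structural regions of the foot" above) by restricting the sum to each region's sensors before reducing.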
Participants will be advised to discuss clinical queries with their General Practitioner. The interview and assessment will take approximately 40 minutes to complete for each participant.

Simple anthropometric measurements

Weight (in kg) and height (in cm) of each participant will be measured using calibrated digital scales (Seca Ltd., Birmingham, UK) and a wall-mounted measure (Seca Ltd., Birmingham, UK) respectively.

Self-complete clinic questionnaire

During the clinic visit, participants will complete a self-complete questionnaire. The questionnaire will be divided into four main sections: (A) foot pain; (B) hand pain and problems; (C) hip pain; and (D) knee pain. Questions will relate to pain [40,55,59-63], site-specific function [62,64-66], and GP consultation (Table 2). Section A will be completed by all clinic attenders. Sections B, C and D will be completed only by those who reported hand, hip or knee pain respectively in their Health Survey questionnaire. The Clinic Co-ordinator or Clinic Support Worker will guide participants as to which sections need to be completed and will check all questionnaires following completion for any missing data. The questionnaire will take approximately 30 minutes to complete.

Travelling and out-of-pocket expenses will be reimbursed after the assessment.

Post-clinic procedure

The digital cameras, study laptop and all completed clinical assessment documentation and questionnaires will be returned to the Research Centre. Digital images will be downloaded from the memory cards and laptop onto a secure server. A clinical report on the x-ray images will be provided by a Consultant Radiologist at the NHS Trust Hospital. The images and report will be forwarded to the Research Centre, where they will be screened by a Consultant Rheumatologist for any radiographic "red flags" or significant radiographic abnormality (see below).
Standardised coding of radiographic features on the foot and hand x-ray images will be carried out by the Research Radiographer (a trained observer with a background in diagnostic radiography). The Research Radiographer will be blinded to all assessment data and the radiologist's report. Foot images will be scored for individual radiographic features, including osteophytes and joint space width, at the 1st MTPJ, 1st and 2nd CMJs, NCJ and TNJ according to the Menz atlas and classification system [24]. With the exception of the TNJ, both dorso-plantar and lateral projections will be used to assess osteophytes and joint space width. For the grading of TNJ osteophytes, only the lateral projection will be used, as the dorsal aspect of the joint, where osteophytes most commonly develop, is not easily visualised on the dorso-plantar projection. Standardised coding of radiographic features using the Kellgren and Lawrence grading system will be completed for the ankle joints and sixteen joints in each hand and wrist [67]: the distal interphalangeal joints (DIP), the proximal interphalangeal joints (PIP), the interphalangeal joint of the thumb (IP), the metacarpophalangeal joints (MCP), the thumb carpometacarpal joint (CMC) and the trapezioscaphoid joint (TS).

Consent forms, assessment documentation, digital x-ray images and reports are to be placed in separate secure storage.

Communication with participants' general practice

Assessment findings will be communicated to participants and their General Practice only in specific circumstances that will be explained to participants at the start of the clinic:

Mandatory notification of clinical 'red flags'

All participants will be routinely screened during the clinical assessment for signs and symptoms suggesting potentially serious pathology requiring urgent medical attention (Table 2).
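As a quick consistency check on the "sixteen joints in each hand and wrist" graded with the Kellgren and Lawrence system above, the listed joint groups can be enumerated. The short labels below are invented shorthand for the example (digit 1 is the thumb), not codes from the atlas or the study.

```python
# Joint groups graded per hand/wrist, as listed in the protocol.
DIP = ["DIP2", "DIP3", "DIP4", "DIP5"]          # distal interphalangeal, fingers 2-5
PIP = ["PIP2", "PIP3", "PIP4", "PIP5"]          # proximal interphalangeal, fingers 2-5
IP = ["IP1"]                                    # interphalangeal joint of the thumb
MCP = ["MCP1", "MCP2", "MCP3", "MCP4", "MCP5"]  # metacarpophalangeal, all five digits
CMC = ["CMC1"]                                  # thumb carpometacarpal
TS = ["TS"]                                     # trapezioscaphoid

HAND_WRIST_JOINTS = DIP + PIP + IP + MCP + CMC + TS
assert len(HAND_WRIST_JOINTS) == 16  # matches the count stated in the protocol
```

Each of these joints receives a Kellgren and Lawrence grade (an ordinal 0-4 scale), so one hand/wrist yields sixteen graded observations per reading.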
These are: recent trauma to the feet or hands that may have resulted in significant tissue damage; recent sudden worsening of foot or hand symptoms; and acutely hot, swollen, painful feet or hands [68]. In the event of such findings, participants will be informed that they require urgent attention, a standard fax will be immediately sent to the General Practice, and appropriate medical attention arranged the same day. A letter of confirmation will be subsequently sent to the participants’ General Practice.

Mandatory notification of radiographic ‘red flags’
In the event of any radiographic red flags (including suspected malignancy, unresolved fracture, infection) reported by the Consultant Radiologist, a standard fax will be sent with a copy of the x-ray report to the General Practice notifying them of this. This will subsequently be confirmed by letter.
Roddy et al. Journal of Foot and Ankle Research 2011, 4:22 http://www.jfootankleres.com/content/4/1/22 Page 9 of 16

Discretionary notification of other significant radiographic abnormality
At the discretion of the Consultant Rheumatologist, the General Practice will be notified of other significant radiographic abnormality (e.g. previous fracture, inflammatory arthropathy).

Availability of x-ray report on request
To prevent unnecessary duplication of x-rays, participants’ GPs can request an x-ray report if they feel it would be valuable for clinical management.

Quality assurance and control
Quality assurance and control are important for the integrity of longitudinal studies and the validity of their conclusions [69]. This is especially true of observer-dependent methods of data-gathering. In the clinical assessment phase of the study, the clinical interview and physical assessment, ultrasound, digital images, plantar pressure and the taking and scoring of x-rays will be subject to a number of quality control procedures.
Inter- and intra-assessor reliability of foot interview and examination variables have been established, where possible, from the published literature [49-54,70]. Assessors will undergo training in consent procedures, clinical interview and physical assessment techniques. All Research Assessors will be required to conduct at least two clinical assessments prior to the commencement of data collection. During the first month, clinics with reduced numbers of participants will be held to allow all study procedures to be tested and reviewed. All radiographers participating in the study will also receive training prior to the commencement of the study.

Selected Research Assessors will receive ultrasound training on a formally assessed course, Focused Specialist Ultrasound Practice, run by the University of Derby (UK). This course covers the principles of ultrasound physics and imaging science. The Research Assessors will then receive specific clinical training from a Consultant Musculoskeletal Sonographer to assess plantar fascia thickness. In addition to meeting the course assessment requirements, clinical competence for the study will be assessed by the Consultant Sonographer following a period of supervision and mentorship.

The Research Radiographer will be trained in the methods for scoring the plain radiographs. This single observer will score all images, and intra-observer variability will be assessed using 60 sets of images scored eight weeks apart. Inter-observer variability will be assessed using a second observer with prior experience of grading foot x-rays for OA who will also grade 60 sets of images.
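Agreement on ordinal grades of this kind (e.g. Kellgren and Lawrence grades 0-4) is typically summarised with a weighted kappa statistic. The protocol does not prescribe a specific agreement statistic, so the following is an illustration only: a quadratic-weighted kappa computed for two observers' grades across 60 images.

```python
from collections import Counter

def weighted_kappa(grades_a, grades_b, categories=5):
    """Quadratic-weighted kappa for two raters assigning ordinal grades
    0..categories-1 (e.g. Kellgren and Lawrence 0-4) to the same images."""
    n = len(grades_a)
    obs = Counter(zip(grades_a, grades_b))          # observed pair counts
    marg_a, marg_b = Counter(grades_a), Counter(grades_b)  # marginal counts
    disagree_obs = disagree_exp = 0.0
    for i in range(categories):
        for j in range(categories):
            w = (i - j) ** 2 / (categories - 1) ** 2  # quadratic disagreement weight
            disagree_obs += w * obs[(i, j)] / n
            disagree_exp += w * (marg_a[i] / n) * (marg_b[j] / n)
    return 1.0 - disagree_obs / disagree_exp

# Hypothetical example: a second observer differs on 6 of 60 images by one grade
observer_1 = [0, 1, 2, 3, 4] * 12
observer_2 = observer_1[:]
for k in range(6):
    observer_2[k * 10] = min(4, observer_2[k * 10] + 1)
print(round(weighted_kappa(observer_1, observer_2), 3))
```

Values above roughly 0.8 are conventionally read as excellent agreement; the 60-image, eight-weeks-apart design described above would feed directly into such a calculation.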
A detailed Assessor Manual with protocols for obtaining written informed consent, digital photography, clinical interview and physical assessment, administration of the self-complete questionnaire, anthropometric measurement, plain radiography, and ultrasound will be provided to all members of the study team for reference during the entire study period. During the data collection period, digital photographs for all participants will be reviewed and participants with any missing or spoilt images will be recalled to repeat the photographs. Quality control sessions for consent procedures, clinical interview and physical assessment, radiography and ultrasound will be undertaken at regular intervals throughout the study. These sessions will include observation of assessments in clinic by the Principal Investigator, structured observation of assessments in a healthy volunteer, and direct inter-assessor comparisons on selected participants. Observation of radiography and ultrasound will be undertaken by the Research Radiographer and Consultant Musculoskeletal Sonographer respectively. The outcome of each quality control session will be fed back to the individual Research Assessor and the group as a whole.

Phase 3: Review of general practice medical records
All participants in Phase 1 who give permission for their GP records to be accessed will have their computerised medical records tagged by a member of the Research Centre’s Health Informatics Specialist team. All consultations for the 18-month period prior to clinic attendance, and for the three-year period following clinic attendance, will be identified. The four practices participating in this study are fully computerised and undergo annual audits completed by the Health Informatics team to assess the quality and completeness of the data entry at the practices [71]. These data will cover consultations, prescriptions, and referrals.
All relevant foot-related consultations will be identified using search techniques based on Read codes and free text entries, which have been previously developed and successfully applied by the Research Centre [28,72]. Participants with a relevant recorded consultation will be classified into those receiving an OA diagnosis recorded by their GP and those receiving non-specific symptom codes (e.g. arthralgia). In addition, all comorbid consultations will be identified and sub-grouped by Read code chapter.

Patterns of primary and secondary health care utilisation will be compared between Phase 2 participants and Phase 1 participants who did not attend the research clinic. All sensitive data (name, contact details) will be removed from the medical records data and the consultation data will be linked to the survey and clinical assessment data by unique survey identifier.

Phase 4: Follow-up mailed survey at 18 months (Phase 2 respondents only)
Follow-up surveys will be mailed to all Phase 2 participants 18 months after their baseline clinical assessment. The focus of follow-up will be clinical change (severity of pain and functional limitation) and possible determinants of this. The content of this survey is provided in Table 3. Non-responders to the questionnaire will be sent a reminder postcard after two weeks. Those who do not respond to the reminder postcard will be sent a repeat questionnaire and Participant Information Sheet with a further covering letter four weeks after the initial mailing. Primary outcome data will be sought from non-respondents by telephone interview or shortened postal questionnaire. We plan to trace participants who have moved practice during the follow-up period using the NHS tracing service.
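A consultation search of this kind could be sketched as below. The Read code stems and free-text keywords here are placeholders: the actual, validated code lists developed by the Research Centre [28,72] are not reproduced in the paper, so this shows only the shape of the classification, not its content.

```python
import re

# Placeholder code stems and keywords; the study's validated lists [28,72] are not public.
OA_CODE_STEMS = ("N05",)              # hypothetical osteoarthritis Read code stem
SYMPTOM_CODE_STEMS = ("N094", "1M1")  # hypothetical arthralgia / foot-pain stems
FOOT_TEXT = re.compile(r"\b(foot|feet|toe|hallux|heel|ankle)\b", re.IGNORECASE)

def classify_consultation(read_code, free_text=""):
    """Return 'OA diagnosis', 'non-specific symptom', or None (not foot-related)."""
    if read_code.startswith(OA_CODE_STEMS):
        return "OA diagnosis"
    if read_code.startswith(SYMPTOM_CODE_STEMS) or FOOT_TEXT.search(free_text):
        return "non-specific symptom"
    return None

# Illustrative consultation records (codes and text invented for the example)
consultations = [
    {"read_code": "N05z.", "free_text": "osteoarthritis of 1st MTPJ"},
    {"read_code": "1M10.", "free_text": "pain in left foot"},
    {"read_code": "H33..", "free_text": "asthma review"},
]
for c in consultations:
    print(c["read_code"], "->", classify_consultation(c["read_code"], c["free_text"]))
```

In practice such searches combine coded entries with free-text matching because GPs often record symptoms as text rather than as a specific Read code.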
Table 3. Content of 18-month postal follow-up Health Survey questionnaire (Phase 2 participants only)
- Foot pain characteristics: Change in foot pain over past 18 months (Completely recovered, much better, better, no change, worse, much worse); “Since your assessment 18 months ago, have you ever injured your foot badly enough to see a doctor about it?” (No/right only/left only/both); Foot pain, aching, stiffness in last month [39] (No days, few days, some days, most days, all days); Foot pain intensity in past month [40] (0-10 NRS with verbal anchors: no pain, pain as bad as could be)
- Foot pain chronicity: Chronic Pain Grade [59] (6 questions on 0-10 NRS and 1 question with 4 response options, giving grade I-IV)
- Complaint-specific functioning: Manchester Foot Pain and Disability Index [41] (19 items across four constructs: pain, function, appearance, work/leisure); Symptom satisfaction [64] (5-point Likert scale, Very dissatisfied to Very satisfied)
- Healthcare use: Use of services/treatments for foot pain in past 18 months (GP, physiotherapist, hospital specialist, acupuncture, podiatrist, chiropodist, drugs on prescription, foot injection, foot surgery, osteopath/chiropractor, other (specify)); Medication use in last month (for foot pain, for other pain)
- Coping strategies for foot pain: Single-item coping strategies questionnaire [42] (0-6 NRS with verbal anchors: never do that, always do that)
- Perceived general health: MOS SF 12 [33] (Physical and mental component summary scores)
- Physical function: MOS SF 36 [32] (Physical functioning sub-scale)
- Anxiety and depression: Hospital anxiety and depression scale [34] (Anxiety and depression sub-scales)
- Hallux valgus: Self-completed line drawings [37] (5 line-drawings for each foot depicting increasing severity of hallux valgus)
- Bodily pain: Self-completed body manikin (“In the past 4 weeks, have you had pain that has lasted for one day or longer in any part of your body? If yes, shade location of pain on manikin”)
- Regional pain: Site-specific questions (“Have you had any problems with your hands or pain in your hands/hips/knees in the last year?”)
- Demographic characteristics: Date of birth; Gender
- Socioeconomic characteristics: Current employment status (Employed, not working due to ill-health, retired, unemployed/seeking work, housewife, other)
MOS SF 12 = Medical Outcomes Study Short Form 12; MOS SF 36 = Medical Outcomes Study Short Form 36; NRS = numerical rating scale

Table 4. Content of 3-year postal follow-up Health Survey questionnaire (Phase 1 and Phase 2 participants)
Section A: General health
- Perceived general health: MOS SF 12 [33] (Physical and mental component summary scores)
- Physical function: MOS SF 36 [32] (Physical functioning sub-scale)
- Anxiety and depression: Hospital anxiety and depression scale [34] (Anxiety and depression sub-scales)
- Participation: Keele Assessment of Participation (KAP) [35,73] (5 items assessing person-perceived, performance-based participation)
- Physical activity: Short-Form International Physical Activity Questionnaire (IPAQ) [36] (Frequency and duration of 4 activities performed during previous 7 days)
Section B: Specific health problems
- Hallux valgus: Self-completed line drawings [37] (5 line-drawings for each foot depicting increasing severity of hallux valgus)
- Co-morbidities: Falls, fractures, chest problems, heart problems, deafness, problems with eyesight, raised blood pressure, diabetes, stroke, cancer, liver disease, kidney disease, poor circulation, rheumatoid arthritis (Yes for any that apply)
- Intermittent claudication: Edinburgh Claudication Questionnaire [38] (Pain or discomfort in legs when walking, pain characteristics, pain location (leg manikin))
- Bodily pain: Self-completed body manikin (“In the past 4 weeks, have you had pain that has lasted for one day or longer in any part of your body? If yes, shade location of pain on manikin”)
- Regional pain: Site-specific questions (“Have you had any problems with your hands or pain in your hands/hips/knees/feet in the last year?”)
Section C: Foot pain
- Foot pain characteristics: Side of pain (Both, right, left); Duration in past 12 months (< 7 days, 1-4 weeks, 1-3 months, 3+ months); “Have you ever injured your foot badly enough to see a doctor about it?” (No/right foot only/left foot only/both feet); Location: self-completed foot manikin [14] (“In the past month, have you had any ache or pain that has lasted for one day or longer in your feet? If yes, shade location of pain on foot manikin”); Foot pain, aching, stiffness in last month [39] (No days, few days, some days, most days, all days); Foot pain intensity in last month [40] (0-10 NRS with verbal anchors: no pain, pain as bad as could be)
- Complaint-specific functioning: Manchester Foot Pain and Disability Index [41] (19 items across four constructs: pain, function, appearance, work/leisure)
- Coping strategies for foot pain: Single-item coping strategies questionnaire [42] (0-6 NRS with verbal anchors: never do that, always do that)
- Healthcare use: Medication use in last month (for foot pain, for other pain); Consultation with general practitioner in last 12 months for foot pain
Section D: Demographic/socioeconomic characteristics
- Demographic characteristics: Date of birth; Gender; Marital status (Married, separated, divorced, widowed, cohabiting, single); Living arrangements (Alone, not alone)
- Anthropometric characteristics: Self-reported height; Self-reported weight
- Socioeconomic characteristics: Current employment status (Employed, not working due to ill-health, retired, unemployed/seeking work, housewife, other)

Phase 5: Follow-up mailed survey at 3 years (Phase 1 and Phase 2 respondents)
Follow-up surveys will be mailed to all Phase 1 and Phase 2 participants 3 years after their baseline Health Survey questionnaire. In addition to information about clinical change in Phase 2 participants, the survey will also include repeat measures of lifestyle [36,73], general health (including generic measures of physical function [32,33]), psychosocial factors [34], co-morbidity [37,38] and basic screening questions concerning the presence [39], duration, location [14], severity [40], and impact of foot pain [41,42] (Table 4). Non-responders to the questionnaire will be sent a reminder postcard after two weeks. Those who do not respond to the reminder postcard will be sent a repeat questionnaire and Participant Information Sheet with a further covering letter four weeks after the initial mailing. Primary outcome data will be sought from non-respondents by telephone interview (Phase 2 participants only) or shortened postal questionnaire.
We plan to trace participants who have moved practice during the follow-up period using the NHS tracing service.

Sample size
The sample size for this study was determined by the estimated number of participants needed in Phase 2 in order to ensure sufficient power for both cross-sectional and longitudinal analyses. The primary aim is to compare the proportion of participants with poor functional outcome across the radiographic (p2) and no radiographic OA (p1) groups at 3 years. Assuming p1 = 20% in the unexposed group, a sample size of 426 will have 80% power to detect a relative risk (p2/p1) of 1.62 using a 5% significance level. Allowing for a drop-out of 80 participants from baseline to three years will require an initial recruitment of 506 participants to Phase 2.

Statistical analysis
Patterns of symptomatic radiographic foot OA
The frequency and co-occurrence of radiographic features of symptomatic OA at the 1st MTPJ, the 1st and 2nd CMJs, the NCJ and the TNJ will be described using simple descriptive statistics.

Features associated with foot OA phenotypes
Linking data collected at the clinical assessment with those from the baseline Health Survey questionnaire, the occurrence of radiographic OA will be related cross-sectionally to foot pain and disability, foot deformities, soft-tissue problems, footwear characteristics and pain/OA at other sites, using odds ratios and associated 95% confidence intervals adjusted for age, gender and BMI. The effect of missing primary outcome data will be investigated using multiple imputation methods.

Outcome of foot OA at 3 years
Linking baseline data to the 18-month and three-year follow-up questionnaires, we will then be able to determine prospectively the factors that are related to clinical deterioration (for example, radiographic OA, footwear characteristics, pain/OA at other sites, psychosocial factors) using risk ratios and associated 95% confidence intervals.
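The sample-size figure quoted above can be checked against the standard normal-approximation formula for comparing two independent proportions. The sketch below assumes equal group sizes; the study's own calculation may have used a different allocation reflecting the expected prevalence of radiographic OA, so the result is indicative rather than a reproduction of the published figure of 426.

```python
from math import ceil, sqrt
from statistics import NormalDist

def n_per_group(p1, rr, power=0.80, alpha=0.05):
    """Per-group sample size to detect relative risk rr over baseline proportion p1,
    two-sided test of two independent proportions (normal approximation)."""
    p2 = p1 * rr
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # 1.96 for alpha = 0.05
    z_beta = NormalDist().inv_cdf(power)           # 0.84 for 80% power
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / (p2 - p1) ** 2)

# p1 = 20% poor outcome in the no-radiographic-OA group, target relative risk 1.62
n = n_per_group(0.20, 1.62)
print(n, "per group,", 2 * n, "in total")
```

Under equal allocation this formula gives a total in the same region as the 426 reported; adding the anticipated drop-out of 80 then motivates the recruitment target of 506.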
The presentation and diagnosis of OA in primary care
Participants with a recorded consultation for joint-related problems will be classified into those receiving an OA diagnosis recorded by their GP and those receiving a non-specific symptom code (e.g. arthralgia). The proportion of participants who (a) consult and (b) are diagnosed with OA will be described using simple descriptive statistics. Logistic regression will be used to identify which features, including OA phenotype, are strongly associated with consultation and foot OA diagnosis. The effect of missing primary outcome data will be investigated using multiple imputation methods.

Describing self-care and primary care
Annual consultation rates and cumulative consultation probabilities will be calculated over the three-year period. Using logistic regression and survival analysis techniques, we will investigate further how different phenotypes relate to subsequent patterns of primary care consultation (for joint pain, other morbidities, and specifically for OA) and referral to secondary care. Self-care reported by participants in the surveys at each 18-month time-point will be described.

Modelling the outcomes of care
We will model the effects of care on impairment, activity limitation and participation restriction.
We aim to use propensity scores (i.e. the propensity or likelihood of a person to seek healthcare given their characteristics) and random-effects repeated-measures multilevel models in order to take into account the effects of both observed and unobserved covariates on outcome at each follow-up time point.

Table 4. Content of 3-year postal follow-up Health Survey questionnaire (Phase 1 and Phase 2 participants) (Continued)
- Socioeconomic characteristics (continued): Current/recent job title (free text); Current/recent job title of spouse (free text); Adequacy of income [43] (Find it a strain to get by from week to week, have to be careful with money, able to manage without much difficulty, quite comfortably off)
Additional questions for Phase 2 participants only
- Foot pain characteristics: Change in foot pain over past 3 years (Completely recovered, much better, better, no change, worse, much worse); “Since your assessment 3 years ago, have you injured your foot badly enough to see a doctor about it?” (No/right foot only/left foot only/both feet)
- Foot pain severity: Chronic Pain Grade [59] (6 questions on 0-10 NRS and 1 question with 4 response options, giving grade I-IV); Symptom satisfaction [64] (5-point Likert scale, Very dissatisfied to Very satisfied)
- Healthcare use: Use of services/treatments for foot pain in past 3 years (GP, physiotherapist, hospital specialist, acupuncture, podiatrist, chiropodist, drugs on prescription, foot injection, foot surgery, osteopath/chiropractor, other (specify))
MOS SF 12 = Medical Outcomes Study Short Form 12; MOS SF 36 = Medical Outcomes Study Short Form 36; NRS = numerical rating scale
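The propensity-score idea can be illustrated with a toy example: model each participant's probability of consulting from baseline covariates, then compare outcomes between consulters and non-consulters with similar scores. The gradient-descent logistic fit and the two binary covariates below are purely illustrative; the study would estimate propensity scores with standard statistical software and a much richer covariate set.

```python
from math import exp

def sigmoid(z):
    return 1.0 / (1.0 + exp(-z))

def fit_logistic(X, y, lr=0.5, epochs=3000):
    """Fit P(consult = 1 | x) by batch gradient descent.
    Returns weights [intercept, w1, w2, ...]."""
    w = [0.0] * (len(X[0]) + 1)
    n = len(X)
    for _ in range(epochs):
        grad = [0.0] * len(w)
        for xi, yi in zip(X, y):
            err = sigmoid(w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))) - yi
            grad[0] += err / n
            for j, xj in enumerate(xi):
                grad[j + 1] += err * xj / n
        w = [wj - lr * g for wj, g in zip(w, grad)]
    return w

def propensity(w, xi):
    """Estimated probability of consulting for covariate vector xi."""
    return sigmoid(w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi)))

# Toy covariates: [high pain intensity (0/1), age over 65 (0/1)]; outcome: consulted GP
X = [[0, 0], [0, 1], [1, 0], [1, 1], [0, 0], [1, 1], [1, 0], [0, 1]]
y = [0, 0, 1, 1, 0, 1, 0, 1]
w = fit_logistic(X, y)
print("propensity, high pain + older:", round(propensity(w, [1, 1]), 2))
```

Comparing outcomes within strata of similar propensity (or adjusting for the score in a multilevel model, as the protocol proposes) reduces confounding by the measured characteristics that drive consultation.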
Using consultations and secondary care referrals from medical record review, together with socio-demographic, clinical, general health and phenotype characteristics, we will apply these methods to a series of analyses in which the separate effects of each of the components of consultation on subsequent outcomes at each follow-up time point will be modelled.

Discussion
Symptomatic foot OA is a common problem, yet is under-researched relative to other sites commonly affected by OA, such as the knee and hand [6]. In this three-year prospective epidemiological study, we will combine survey data, clinical, radiographic, and ultrasound assessment, and primary care consultation records to describe the frequency and co-occurrence of OA at frequently affected joints of the foot, and to relate their occurrence to symptoms, function, clinical examination findings and lifestyle factors such as footwear. We will also describe the natural history of clinical symptoms relating to foot OA and assess how these present to primary care and are subsequently diagnosed and treated.

This study will focus on OA of the foot yet, in reality, people with OA are commonly affected at multiple joint sites [23,74]: a quarter of patients awaiting knee and hip replacement surgery have generalised radiographic OA [75]. Pain and functional impairment have been shown to be greater as the number of painful joint sites increases [11,16,76]. This study has been specifically designed to complement previous clinical assessment studies of the knee [30] and hand [31], which will permit data to be combined across all three cohorts, allowing more detailed investigation of patterns of multiple joint involvement, and the comparative and additive effects of pain and OA on symptoms and outcome.
An obvious limitation of our study is that asymptomatic people will not be invited to attend for clinical assessment, so we will not be able to estimate the frequency of asymptomatic radiographic OA or clinical examination findings. However, symptoms are the presenting feature to primary care and, as with our previous clinical assessment studies [30,31], are the starting point in this study. This enables us to investigate the occurrence of foot osteoarthritis, the inter-relationship between clinical signs, symptoms, and radiographic disease within symptomatic individuals, and their clinical course over time.

In phase two of this study, every effort will be made to maintain the quality of the data obtained and minimise information bias in the data collected at the research clinics. Standardised interview questions and physical assessment protocols have been developed and are described in detail in an Assessor Manual which will be given to each Research Assessor. Research Assessors will undergo a period of training prior to the start of the study. Quality control will be reviewed at regular intervals throughout the course of the study to ensure continued adherence to the protocols.

Additional material
Additional file 1: Footwear questionnaire.

Acknowledgements
This work is supported by an Arthritis Research UK Programme Grant (18174) and service support through the West Midlands North CLRN. The study funders had no role in the study design; data collection, analysis, or interpretation; in the writing of the paper; or in the decision to submit the paper for publication. HBM is currently a National Health and Medical Research Council of Australia fellow (Clinical Career Development Award, ID: 433049).
The authors would like to thank the administrative, health informatics and research nurse teams at the Arthritis Research UK Primary Care Centre, particularly Alicia Bratt, Shirley Caldwell, Claire Calverley, Charlotte Clements, Kathryn Dwyer, Ian Thomas and Chan Vohora; staff of the participating general practices and Haywood Hospital, especially Dr Jackie Saklatvala, Carole Jackson and the Radiographers at the Department of Radiography; Alison Hall who led the training and mentoring of ultrasound assessors; Robert Bradshaw-Hilditch, Dr Catherine Colquhoun, Mr Rob Rees, Dr Michael Shadforth, Dr Simon Somerville, Julie Taylor and Professor Jim Woodburn who contributed to the development of the clinic assessment schedule; the team undertaking clinical assessments, Linda Hargreaves, Gillian Levey, Liz Mason, Jennifer Pearson, Julie Taylor, and Dr Laurence Wood; and Ian Steward and RSscan Lab Ltd for the loan of the Foot scan system. Author details 1Arthritis Research UK Primary Care Centre, Primary Care Sciences, Keele University, Staffordshire, ST5 5BG, UK. 2Musculoskeletal Research Centre, Faculty of Health Sciences, La Trobe University, Bundoora, Victoria 3086, Australia. Authors’ contributions All authors participated in the conception and design of the study, and drafting of the manuscript. All authors read and approved the final manuscript. Competing interests HBM is Editor-in-Chief of Journal of Foot and Ankle Research. It is journal policy that editors are removed from the peer review and editorial decision making processes for papers they have co-authored. All other authors declare that they have no competing interests. Received: 4 April 2011 Accepted: 5 September 2011 Published: 5 September 2011 References 1. World Health Organisation: The burden of musculoskeletal conditions at the start of the new millennium. Report of a WHO Scientific Group. WHO Technical Report Series No. 919 2003. 2. 
Mathers CD, Loncar D: Projections of global mortality and burden of disease from 2002 to 2030. PLoS Med 2006, 3(11):e442. 3. Jordan K, Clarke AM, Symmons DP, Fleming D, Porcheret M, Kadam UT, Croft P: Measuring disease prevalence: a comparison of musculoskeletal disease using four general practice consultation databases. Br J Gen Pract 2007, 57:7-14. 4. Woolf AD, Pfleger B: Burden of major musculoskeletal conditions. Bull World Health Organ 2003, 81:646-656. 5. European Bone and Joint Health Strategies Project: European Action Towards Better Musculoskeletal Health. 2004. 6. Trivedi B, Marshall M, Belcher J, Roddy E: A systematic review of radiographic definitions of foot osteoarthritis in population-based studies. Osteoarthritis Cartilage 2010, 18:1027-1035. 7. Benvenuti F, Ferrucci L, Guralnik JM, Gangemi S, Baroni A: Foot pain and disability in older persons: an epidemiologic survey. J Am Geriatr Soc 1995, 43:479-484. 8. Leveille SG, Guralnik JM, Ferrucci L, Hirsch R, Simonsick E, Hochberg MC: Foot pain and disability in older women. Am J Epidemiol 1998, 148:657-665. 9. Menz HB, Lord SR: The contribution of foot problems to mobility impairment and falls in community-dwelling older people. J Am Geriatr Soc 2001, 49:1651-1656. 10. Badlissi F, Dunn JE, Link CL, Keysor JJ, McKinlay JB, Felson DT: Foot musculoskeletal disorders, pain, and foot-related functional limitation in older persons. J Am Geriatr Soc 2005, 53:1029-1033. 11. 
Keenan AM, Tennant A, Fear J, Emery P, Conaghan PG: Impact of multiple joint problems on daily living tasks in people in the community over age fifty-five. Arthritis Rheum 2006, 55:757-764. 12. Roddy E, Zhang W, Doherty M: Prevalence and associations of hallux valgus in a primary care population. Arthritis Rheum 2008, 59:857-862. 13. Chen J, Devine A, Dick IM, Dhaliwal SS, Prince RL: Prevalence of lower extremity pain and its association with functionality and quality of life in elderly women in Australia. J Rheumatol 2003, 30:2689-2693. 14. Garrow AP, Silman AJ, Macfarlane GJ: The Cheshire Foot Pain and Disability Survey: a population survey assessing prevalence and associations. Pain 2004, 110:378-384. 15. Keysor JJ, Dunn JE, Link CL, Badlissi F, Felson DT: Are foot disorders associated with functional limitation and disability among community- dwelling older adults? J Aging Health 2005, 17:734-752. 16. Peat G, Thomas E, Wilkie R, Croft P: Multiple joint pain and lower extremity disability in middle and old age. Disabil Rehabil 2006, 28:1543-1549. 17. Tinetti ME, Speechley M, Ginter SF: Risk factors for falls among elderly persons living in the community. N Engl J Med 1988, 319:1701-1707. 18. Menz HB, Morris ME, Lord SR: Foot and ankle risk factors for falls in older people: a prospective study. J Gerontol A Biol Sci Med Sci 2006, 61:866-870. 19. Menz HB, Morris ME, Lord SR: Foot and ankle characteristics associated with impaired balance and functional ability in older people. J Gerontol A Biol Sci Med Sci 2005, 60:1546-1552. 20. Kellgren JH, Moore RA: Generalized osteoarthritis and Heberden’s nodes. Br Med J 1952, 1:181-187. 21. Lawrence JS, Bremner JM, Bier FA: Osteo-arthrosis. Prevalence in the population and relationship between symptoms and x-ray changes. Ann Rheum Dis 1966, 25:1-24. 22. Menz HB, Morris ME: Determinants of disabling foot pain in retirement village residents. J Am Podiatr Med Assoc 2005, 95:573-579. 23. 
Wilder FV, Barrett JP, Farina EJ: The association of radiographic foot osteoarthritis and radiographic osteoarthritis at other sites. Osteoarthritis Cartilage 2005, 13:211-215. 24. Menz HB, Munteanu SE, Landorf KB, Zammit GV, Cicuttini FM: Radiographic classification of osteoarthritis in commonly affected joints of the foot. Osteoarthritis Cartilage 2007, 15:1333-1338. 25. Mantyselka P, Kumpusalo E, Ahonen R, Kumpusalo A, Kauhanen J, Viinamaki H, Halonen P, Takala J: Pain as a reason to visit the doctor: a study in Finnish primary health care. Pain 2001, 89:175-180. 26. Jordan K, Jinks C, Croft P: A prospective study of the consulting behaviour of older people with knee pain. Br J Gen Pract 2006, 56:269-276. 27. Bierma-Zeinstra SM, Lipschart S, Njoo KH, Bernsen R, Verhaar J, Prins A, Bohnen AM: How do general practitioners manage hip problems in adults? Scand J Prim Health Care 2000, 18:159-164. 28. Menz HB, Jordan KP, Roddy E, Croft PR: Musculoskeletal foot problems in primary care: what influences older people to consult? Rheumatology (Oxford) 2010, 49:2109-2116. 29. Zammit GV, Menz HB, Munteanu SE, Landorf KB, Gilheany MF: Interventions for treating osteoarthritis of the big toe joint. Cochrane Database Syst Rev 2010, 9:CD007809. 30. Peat G, Thomas E, Handy J, Wood L, Dziedzic K, Myers H, Wilkie R, Duncan R, Hay E, Hill J, Croft P: The Knee Clinical Assessment Study - CAS (K). A prospective study of knee pain and knee osteoarthritis in the general population. BMC Musculoskelet Disord 2004, 5:4. 31. Myers H, Nicholls E, Handy J, Peat G, Thomas E, Duncan R, Wood L, Marshall M, Tyson C, Hay E, Dziedzic K: The Clinical Assessment Study of the Hand (CAS-HA): a prospective study of musculoskeletal hand problems in the general population. BMC Musculoskelet Disord 2007, 8:85. 32. Ware JE Jr, Sherbourne CD: The MOS 36-item short-form health survey (SF-36). I. Conceptual framework and item selection. Med Care 1992, 30:473-483. 33. 
Ware J Jr, Kosinski M, Keller SD: A 12-Item Short-Form Health Survey: construction of scales and preliminary tests of reliability and validity. Med Care 1996, 34:220-233. 34. Zigmond AS, Snaith RP: The hospital anxiety and depression scale. Acta Psychiatr Scand 1983, 67:361-370. 35. Wilkie R, Peat G, Thomas E, Hooper H, Croft PR: The Keele Assessment of Participation: a new instrument to measure participation restriction in population studies. Combined qualitative and quantitative examination of its psychometric properties. Qual Life Res 2005, 14:1889-1899. 36. Craig CL, Marshall AL, Sjostrom M, Bauman AE, Booth ML, Ainsworth BE, Pratt M, Ekelund U, Yngve A, Sallis JF, Oja P: International physical activity questionnaire: 12-country reliability and validity. Med Sci Sports Exerc 2003, 35:1381-1395. 37. Roddy E, Zhang W, Doherty M: Validation of a self-report instrument for assessment of hallux valgus. Osteoarthritis Cartilage 2007, 15:1008-1012. 38. Leng GC, Fowkes FG: The Edinburgh Claudication Questionnaire: an improved version of the WHO/Rose Questionnaire for use in epidemiological surveys. J Clin Epidemiol 1992, 45:1101-1109. 39. Dufour AB, Broe KE, Nguyen US, Gagnon DR, Hillstrom HJ, Walker AH, Kivell E, Hannan MT: Foot pain: is current or past shoewear a factor? Arthritis Rheum 2009, 61:1352-1358. 40. Williamson A, Hoggart B: Pain: a review of three commonly used pain rating scales. J Clin Nurs 2005, 14:798-804. 41. Garrow AP, Papageorgiou AC, Silman AJ, Thomas E, Jayson MI, Macfarlane GJ: Development and validation of a questionnaire to assess disabling foot pain. Pain 2000, 85:107-113. 42. Jensen MP, Keefe FJ, Lefebvre JC, Romano JM, Turner JA: One- and two-item measures of pain beliefs and coping strategies. Pain 2003, 104:453-469. 43. Thomas R: Income - commentary [http://qb.soc.surrey.ac.uk/topics/income/thomaswealth.htm]. 44. Department for Communities and Local Government: The English Indices of Deprivation. 
2007 [http://www.communities.gov.uk/documents/ communities/pdf/733520.pdf], Published: 28/03/2008. 45. Whitley AS, Sloanne C, Hoardley G, Moore AD, Alsop CW: Clark’s positioning in radiography London: Holder Arnold; 2005. 46. Gibbon WW, Long G: Ultrasound of the plantar aponeurosis (fascia). Skeletal Radiol 1999, 28:21-26. 47. Melzack R: The McGill Pain Questionnaire: major properties and scoring methods. Pain 1975, 1:277-299. 48. Cavanagh PR, Rodgers MM: The arch index: a useful measure from footprints. J Biomech 1987, 20:547-551. 49. Menz HB, Munteanu SE: Validity of 3 clinical techniques for the measurement of static foot posture in older people. J Orthop Sports Phys Ther 2005, 35:479-486. 50. Redmond AC, Crosbie J, Ouvrier RA: Development and validation of a novel rating system for scoring standing foot posture: the Foot Posture Index. Clin Biomech (Bristol, Avon) 2006, 21:89-98. 51. Menz HB, Tiedemann A, Kwan MM, Latt MD, Sherrington C, Lord SR: Reliability of clinical tests of foot and ankle characteristics in older people. J Am Podiatr Med Assoc 2003, 93:380-387. 52. Menadue C, Raymond J, Kilbreath SL, Refshauge KM, Adams R: Reliability of two goniometric methods of measuring active inversion and eversion range of motion at the ankle. BMC Musculoskelet Disord 2006, 7:60. 53. Bennell KL, Talbot RC, Wajswelner H, Techovanich W, Kelly DH, Hall AJ: Intra-rater and inter-rater reliability of a weight-bearing lunge measure of ankle dorsiflexion. Aust J Physiother 1998, 44:175-180. 54. Hopson MM, McPoil TG, Cornwall MW: Motion of the first metatarsophalangeal joint. Reliability and validity of four measurement techniques. J Am Podiatr Med Assoc 1995, 85:198-204. 55. Altman RD, Alarcón G, Appelrouth D, Bloch D, Borenstein D, Brandt K, Brown C, Cooke TD, Daniel W, Gray R: The American College of Roddy et al. 
Journal of Foot and Ankle Research 2011, 4:22 http://www.jfootankleres.com/content/4/1/22 Page 15 of 16 http://www.ncbi.nlm.nih.gov/pubmed/20472083?dopt=Abstract http://www.ncbi.nlm.nih.gov/pubmed/20472083?dopt=Abstract http://www.ncbi.nlm.nih.gov/pubmed/20472083?dopt=Abstract http://www.ncbi.nlm.nih.gov/pubmed/7730527?dopt=Abstract http://www.ncbi.nlm.nih.gov/pubmed/7730527?dopt=Abstract http://www.ncbi.nlm.nih.gov/pubmed/9778172?dopt=Abstract http://www.ncbi.nlm.nih.gov/pubmed/11843999?dopt=Abstract http://www.ncbi.nlm.nih.gov/pubmed/11843999?dopt=Abstract http://www.ncbi.nlm.nih.gov/pubmed/15935029?dopt=Abstract http://www.ncbi.nlm.nih.gov/pubmed/15935029?dopt=Abstract http://www.ncbi.nlm.nih.gov/pubmed/15935029?dopt=Abstract http://www.ncbi.nlm.nih.gov/pubmed/17013823?dopt=Abstract http://www.ncbi.nlm.nih.gov/pubmed/17013823?dopt=Abstract http://www.ncbi.nlm.nih.gov/pubmed/17013823?dopt=Abstract http://www.ncbi.nlm.nih.gov/pubmed/18512715?dopt=Abstract http://www.ncbi.nlm.nih.gov/pubmed/18512715?dopt=Abstract http://www.ncbi.nlm.nih.gov/pubmed/14719214?dopt=Abstract http://www.ncbi.nlm.nih.gov/pubmed/14719214?dopt=Abstract http://www.ncbi.nlm.nih.gov/pubmed/14719214?dopt=Abstract http://www.ncbi.nlm.nih.gov/pubmed/15275789?dopt=Abstract http://www.ncbi.nlm.nih.gov/pubmed/15275789?dopt=Abstract http://www.ncbi.nlm.nih.gov/pubmed/15275789?dopt=Abstract http://www.ncbi.nlm.nih.gov/pubmed/16377770?dopt=Abstract http://www.ncbi.nlm.nih.gov/pubmed/16377770?dopt=Abstract http://www.ncbi.nlm.nih.gov/pubmed/16377770?dopt=Abstract http://www.ncbi.nlm.nih.gov/pubmed/17178617?dopt=Abstract http://www.ncbi.nlm.nih.gov/pubmed/17178617?dopt=Abstract http://www.ncbi.nlm.nih.gov/pubmed/3205267?dopt=Abstract http://www.ncbi.nlm.nih.gov/pubmed/3205267?dopt=Abstract http://www.ncbi.nlm.nih.gov/pubmed/16912106?dopt=Abstract http://www.ncbi.nlm.nih.gov/pubmed/16912106?dopt=Abstract http://www.ncbi.nlm.nih.gov/pubmed/16424286?dopt=Abstract 
http://www.ncbi.nlm.nih.gov/pubmed/16424286?dopt=Abstract http://www.ncbi.nlm.nih.gov/pubmed/14896078?dopt=Abstract http://www.ncbi.nlm.nih.gov/pubmed/5905334?dopt=Abstract http://www.ncbi.nlm.nih.gov/pubmed/5905334?dopt=Abstract http://www.ncbi.nlm.nih.gov/pubmed/16291849?dopt=Abstract http://www.ncbi.nlm.nih.gov/pubmed/16291849?dopt=Abstract http://www.ncbi.nlm.nih.gov/pubmed/15727887?dopt=Abstract http://www.ncbi.nlm.nih.gov/pubmed/15727887?dopt=Abstract http://www.ncbi.nlm.nih.gov/pubmed/17625925?dopt=Abstract http://www.ncbi.nlm.nih.gov/pubmed/17625925?dopt=Abstract http://www.ncbi.nlm.nih.gov/pubmed/11166473?dopt=Abstract http://www.ncbi.nlm.nih.gov/pubmed/11166473?dopt=Abstract http://www.ncbi.nlm.nih.gov/pubmed/16611515?dopt=Abstract http://www.ncbi.nlm.nih.gov/pubmed/16611515?dopt=Abstract http://www.ncbi.nlm.nih.gov/pubmed/11097101?dopt=Abstract http://www.ncbi.nlm.nih.gov/pubmed/11097101?dopt=Abstract http://www.ncbi.nlm.nih.gov/pubmed/15028109?dopt=Abstract http://www.ncbi.nlm.nih.gov/pubmed/15028109?dopt=Abstract http://www.ncbi.nlm.nih.gov/pubmed/15028109?dopt=Abstract http://www.ncbi.nlm.nih.gov/pubmed/17760988?dopt=Abstract http://www.ncbi.nlm.nih.gov/pubmed/17760988?dopt=Abstract http://www.ncbi.nlm.nih.gov/pubmed/17760988?dopt=Abstract http://www.ncbi.nlm.nih.gov/pubmed/1593914?dopt=Abstract http://www.ncbi.nlm.nih.gov/pubmed/1593914?dopt=Abstract http://www.ncbi.nlm.nih.gov/pubmed/8628042?dopt=Abstract http://www.ncbi.nlm.nih.gov/pubmed/8628042?dopt=Abstract http://www.ncbi.nlm.nih.gov/pubmed/6880820?dopt=Abstract http://www.ncbi.nlm.nih.gov/pubmed/16155776?dopt=Abstract http://www.ncbi.nlm.nih.gov/pubmed/16155776?dopt=Abstract http://www.ncbi.nlm.nih.gov/pubmed/16155776?dopt=Abstract http://www.ncbi.nlm.nih.gov/pubmed/16155776?dopt=Abstract http://www.ncbi.nlm.nih.gov/pubmed/12900694?dopt=Abstract http://www.ncbi.nlm.nih.gov/pubmed/12900694?dopt=Abstract http://www.ncbi.nlm.nih.gov/pubmed/17387024?dopt=Abstract 
http://www.ncbi.nlm.nih.gov/pubmed/17387024?dopt=Abstract http://www.ncbi.nlm.nih.gov/pubmed/1474406?dopt=Abstract http://www.ncbi.nlm.nih.gov/pubmed/1474406?dopt=Abstract http://www.ncbi.nlm.nih.gov/pubmed/1474406?dopt=Abstract http://www.ncbi.nlm.nih.gov/pubmed/19790125?dopt=Abstract http://www.ncbi.nlm.nih.gov/pubmed/16000093?dopt=Abstract http://www.ncbi.nlm.nih.gov/pubmed/16000093?dopt=Abstract http://www.ncbi.nlm.nih.gov/pubmed/10692609?dopt=Abstract http://www.ncbi.nlm.nih.gov/pubmed/10692609?dopt=Abstract http://www.ncbi.nlm.nih.gov/pubmed/12927618?dopt=Abstract http://www.ncbi.nlm.nih.gov/pubmed/12927618?dopt=Abstract http://qb.soc.surrey.ac.uk/topics/income/thomaswealth.htm http://qb.soc.surrey.ac.uk/topics/income/thomaswealth.htm http://www.communities.gov.uk/documents/communities/pdf/733520.pdf http://www.communities.gov.uk/documents/communities/pdf/733520.pdf http://www.ncbi.nlm.nih.gov/pubmed/10068071?dopt=Abstract http://www.ncbi.nlm.nih.gov/pubmed/1235985?dopt=Abstract http://www.ncbi.nlm.nih.gov/pubmed/1235985?dopt=Abstract http://www.ncbi.nlm.nih.gov/pubmed/3611129?dopt=Abstract http://www.ncbi.nlm.nih.gov/pubmed/3611129?dopt=Abstract http://www.ncbi.nlm.nih.gov/pubmed/16187508?dopt=Abstract http://www.ncbi.nlm.nih.gov/pubmed/16187508?dopt=Abstract http://www.ncbi.nlm.nih.gov/pubmed/13130085?dopt=Abstract http://www.ncbi.nlm.nih.gov/pubmed/13130085?dopt=Abstract http://www.ncbi.nlm.nih.gov/pubmed/16872545?dopt=Abstract http://www.ncbi.nlm.nih.gov/pubmed/16872545?dopt=Abstract http://www.ncbi.nlm.nih.gov/pubmed/16872545?dopt=Abstract http://www.ncbi.nlm.nih.gov/pubmed/11676731?dopt=Abstract http://www.ncbi.nlm.nih.gov/pubmed/11676731?dopt=Abstract http://www.ncbi.nlm.nih.gov/pubmed/7738816?dopt=Abstract http://www.ncbi.nlm.nih.gov/pubmed/7738816?dopt=Abstract http://www.ncbi.nlm.nih.gov/pubmed/7738816?dopt=Abstract http://www.ncbi.nlm.nih.gov/pubmed/2242058?dopt=Abstract Rheumatology criteria for the classification and reporting of osteoarthritis 
of the hand. Arthritis Rheum 1990, 33:1601-1610. 56. Mathiowetz V, Weber K, Volland G, Kashman N: Reliability and validity of grip and pinch strength evaluations. J Hand Surg [Am] 1984, 9:222-226. 57. Guralnik JM, Simonsick EM, Ferrucci L, Glynn RJ, Berkman LF, Blazer DG, Scherr PA, Wallace RB: A short physical performance battery assessing lower extremity function: association with self-reported disability and prediction of mortality and nursing home admission. J Gerontol 1994, 49: M85-M94. 58. Bryant A, Singer K, Tinley P: Comparison of the reliability of plantar pressure measurements using the two-step and midgait methods of data collection. Foot Ankle Int 1999, 20:646-650. 59. Von Korff M, Ormel J, Keefe FJ, Dworkin SF: Grading the severity of chronic pain. Pain 1992, 50:133-149. 60. Ferry S, Pritchard T, Keenan J, Croft P, Silman AJ: Estimating the prevalence of delayed median nerve conduction in the general population. Br J Rheumatol 1998, 37:630-635. 61. Altman R, Alarcon G, Appelrouth D, Bloch D, Borenstein D, Brandt K, Brown C, Cooke TD, Daniel W, Feldman D: The American College of Rheumatology criteria for the classification and reporting of osteoarthritis of the hip. Arthritis Rheum 1991, 34:505-514. 62. Bellamy N: WOMAC Osteoarthritis Index. A User’s Guide London (Ontario): London Health Services Centre, McMaster University; 1996. 63. Altman R, Asch E, Bloch D, Bole G, Borenstein D, Brandt K, Christy W, Cooke TD, Greenwald R, Hochberg M: Development of criteria for the classification and reporting of osteoarthritis. Classification of osteoarthritis of the knee. Diagnostic and Therapeutic Criteria Committee of the American Rheumatism Association. Arthritis Rheum 1986, 29:1039-1049. 64. Cherkin DC, Deyo RA, Street JH, Barlow W: Predicting poor outcomes for back pain seen in primary care using patients’ own criteria. Spine (Phila Pa 1976) 1996, 21:2900-2907. 65. 
Bellamy N, Campbell J, Haraoui B, Buchbinder R, Hobby K, Roth JH, MacDermid JC: Dimensionality and clinical importance of pain and disability in hand osteoarthritis: Development of the Australian/ Canadian (AUSCAN) Osteoarthritis Hand Index. Osteoarthritis Cartilage 2002, 10:855-862. 66. Bellamy N, Campbell J, Haraoui B, Gerecz-Simon E, Buchbinder R, Hobby K, MacDermid JC: Clinimetric properties of the AUSCAN Osteoarthritis Hand Index: an evaluation of reliability, validity and responsiveness. Osteoarthritis Cartilage 2002, 10:863-869. 67. Lawrence JS: Osteo-arthrosis. In Rheumatism in populations. Edited by: Lawrence JS. London: William Heinemann Medical Books; 1977:98-155. 68. Klippel JH, Dieppe P, Ferri FF: Primary Care Rheumatology London: Mosby; 1999. 69. Whitney CW, Lind BK, Wahl PW: Quality assurance and quality control in longitudinal studies. Epidemiol Rev 1998, 20:71-80. 70. Wrobel JS, Armstrong DG: Reliability and validity of current physical examination techniques of the foot and ankle. J Am Podiatr Med Assoc 2008, 98:197-206. 71. Porcheret M, Hughes R, Evans D, Jordan K, Whitehurst T, Ogden H, Croft P: Data quality of general practice electronic health records: the impact of a program of assessments, feedback, and training. J Am Med Inform Assoc 2004, 11:78-86. 72. Menz HB, Jordan KP, Roddy E, Croft PR: Characteristics of primary care consultations for musculoskeletal foot and ankle problems in the UK. Rheumatology (Oxford) 2010, 49:1391-1398. 73. Wilkie R, Peat G, Thomas E, Hooper H, Croft PR: The Keele Assessment of Participation: a new instrument to measure participation restriction in population studies. Combined qualitative and quantitative examination of its psychometric properties. Qual Life Res 2005, 14:1889-1899. 74. Cooper C, Egger P, Coggon D, Hart DJ, Masud T, Cicuttini F, Doyle DV, Spector TD: Generalized osteoarthritis in women: pattern of joint involvement and approaches to definition for epidemiological studies. 
J Rheumatol 1996, 23:1938-1942. 75. Gunther KP, Sturmer T, Sauerland S, Zeissig I, Sun Y, Kessler S, Scharf HP, Brenner H, Puhl W: Prevalence of generalised osteoarthritis in patients with advanced hip and knee osteoarthritis: the Ulm Osteoarthritis Study. Ann Rheum Dis 1998, 57:717-723. 76. Croft P, Jordan K, Jinks C: “Pain elsewhere” and the impact of knee pain in older people. Arthritis Rheum 2005, 52:2350-2354. doi:10.1186/1757-1146-4-22 Cite this article as: Roddy et al.: The clinical assessment study of the foot (CASF): study protocol for a prospective observational study of foot pain and foot osteoarthritis in the general population. Journal of Foot and Ankle Research 2011 4:22. Submit your next manuscript to BioMed Central and take full advantage of: • Convenient online submission • Thorough peer review • No space constraints or color figure charges • Immediate publication on acceptance • Inclusion in PubMed, CAS, Scopus and Google Scholar • Research which is freely available for redistribution Submit your manuscript at www.biomedcentral.com/submit Roddy et al. 
ORIGINAL ARTICLE

Nasal continuous positive airway pressure from high flow cannula versus Infant Flow for preterm infants

DM Campbell 1,2, PS Shah 1, V Shah 1 and EN Kelly 1

1 Department of Paediatrics, Mt Sinai Hospital, University of Toronto, Toronto, ON, Canada and 2
Department of Paediatrics, St Michael's Hospital, University of Toronto, Toronto, ON, Canada

Objective: To compare the feasibility of continuous positive airway pressure (CPAP) support generated by high flow nasal cannula with conventional CPAP for prevention of reintubation among preterm infants with a birth weight of ≤1250 g.

Study Design: Preterm infants were randomized to CPAP generated via high flow cannula or the Infant Flow Nasal CPAP System (VIASYS, Conshohocken, PA, USA) at extubation. Primary outcome was incidence of reintubation within 7 days. Secondary outcomes included change in oxygen use and frequency of apnea and bradycardias postextubation.

Results: Forty neonates were randomized. Twelve of 20 infants randomized to high flow cannula CPAP were reintubated compared to three of 20 using Infant Flow (P = 0.003). The high flow cannula group had increased oxygen use and more apneas and bradycardias postextubation.

Conclusions: CPAP delivered by high flow nasal cannula failed to maintain extubation status among preterm infants ≤1250 g as effectively as Infant Flow CPAP.

Journal of Perinatology (2006) 26, 546–549. doi:10.1038/sj.jp.7211561; published online 13 July 2006

Keywords: infant; premature; extubation; randomized trial

Introduction

Continuous positive airway pressure (CPAP) facilitates extubation by preventing atelectasis, maximizing the functional residual capacity of the lung, and recruiting collapsed alveoli.1 In preterm infants CPAP decreases the incidence and severity of apneas, reduces the degree of asynchronous breathing, and prevents reintubation following initial extubation.2,3 One of the most common forms of CPAP currently used is the Infant Flow system (VIASYS) (IF-CPAP).
With this device, gas flowing through nasal prongs is facilitated during exhalation via a fluidic-flip design.4,5 IF-CPAP has been shown to decrease oxygen requirement, improve respiratory effort6–9 and prevent extubation failure in preterm infants when compared to traditional nasal prong systems and bubble CPAP.9,10 However, nasal prong CPAP devices have been shown to damage the nares of infants, causing discomfort and rarely, long-term disfigurement.11,12 Simple nasal cannula has recently been shown to generate positive end-expiratory pressure (PEEP) in preterm infants if air or oxygen is delivered at a high flow rate (i.e. 1–2 l/min).13,14 The amount of distending pressure generated by such cannula depends on the size of the cannula, flow rate and size of the infant.13 In a recent study, investigators were able to generate a positive distending pressure of 4.5 cm H2O via simple nasal cannula using a calculated flow rate based on infant's weight. This method of CPAP (high flow CPAP (HF-CPAP)) generated comparable pressure to that delivered by Hudson nasal prong CPAP set to deliver a pressure of 6 cm H2O.14 HF-CPAP was also shown to be equally effective compared to the Hudson prong nasal CPAP in the management of apnea of prematurity over a 6 h period.14 This type of HF-CPAP has the advantage of being easy to administer at a lower cost. The purpose of this study was to evaluate the feasibility of HF-CPAP in preventing extubation failure of preterm infants with birth weight ≤1250 g, compared to IF-CPAP.

Materials and methods

Eligibility

Intubated preterm infants with a birth weight ≤1250 g admitted to the Level III neonatal intensive care unit at Mt. Sinai Hospital, Toronto, Canada were eligible for inclusion in this study. Patients were excluded if there were signs and symptoms of upper airway obstruction, or congenital anomalies of the airway.

Randomization and allocation

This study was approved by the Mt Sinai Hospital Research Ethics Board.
Informed written consent was obtained from parents before the neonate's first extubation. In our unit, infants are usually extubated to nasal prong CPAP (IF-CPAP) at the time of first extubation. Time of extubation, ventilatory settings at extubation, and use of caffeine were determined by the medical team. Randomization cards were generated using random number tables and stratified in groups of 10. The randomization cards were sealed in sequentially marked opaque envelopes and opened immediately before extubation. The infants were randomly allocated to either HF-CPAP or IF-CPAP just before extubation.

Received 22 February 2006; revised 18 May 2006; accepted 7 June 2006; published online 13 July 2006. Correspondence: Dr DM Campbell, Department of Paediatrics, St Michael's Hospital, 30 Bond Street, Rm 15-014, Toronto, ON, Canada M5B 1W8. E-mail: campbelld@smh.toronto.on.ca

Procedure

The IF-CPAP circuit used was the Infant Flow system (VIASYS®, Conshohocken, PA, USA). Six mm long nasal prongs were available in three sizes (4, 4.5 and 5 mm). The largest prongs that fit easily into the nares were used for each infant. Flow of air/oxygen was adjusted in order to deliver between 5 and 6 cm H2O of pressure. The HF-CPAP system consisted of a gas source, air-oxygen blender and a nonheated bubble humidifier delivering air/oxygen via standard nasal cannula (Infant size, Salter Labs, Arvin, CA, USA). HF-CPAP air/oxygen flow through the cannula was adjusted using the following formula as reported by Sreenan et al.14:

Flow (l/min) = 0.92 + 0.68x (x = weight in kg)

Nasal cannula flow delivered at this rate in very low birth weight preterm infants has been estimated to deliver distending pressure of 4.5 cm H2O, comparable to that of Hudson nasal CPAP prongs set at 6 cm H2O.14 In the post-extubation period CPAP was not weaned.
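As an illustration, the weight-based flow rule of Sreenan et al.14 quoted above can be written as a small helper (a sketch only; the function name is ours, not the authors'):

```python
def hf_cpap_flow_lpm(weight_kg):
    """Nasal cannula flow (l/min) per Sreenan et al. (reference 14):
    flow = 0.92 + 0.68 * weight (kg)."""
    return 0.92 + 0.68 * weight_kg

# At a birth weight of 1.0 kg, close to the HF-CPAP group mean,
# the rule gives 1.6 l/min.
flow = hf_cpap_flow_lpm(1.0)
```

This matches the mean flow rate of 1.6 l/min (range 1.4 to 1.7 l/min) reported for the HF-CPAP group in the Results.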
Infants were maintained in a proper position to maintain a good seal. Oxygen saturation guidelines for all infants during the study period were identical. The decision to reintubate an infant was made by the team based on one or more of the following criteria: an uncompensated respiratory acidosis with a pH of <7.25, an oxygen requirement of >60% to maintain a transcutaneous saturation of >90%, severe apnea (defined as apnea and bradycardia requiring bag and mask ventilation or reintubation for recovery) or frequent apnea (defined as three or more apneas of any severity in one hour).

Outcome

The primary outcome was success of extubation, defined as remaining extubated for at least 7 days. Secondary outcomes included: change in the amount of oxygen between pre- and postextubation and frequency of apnea and bradycardia events. The definition of apnea was a pause in breathing for >20 s. The definition of bradycardia was heart rate less than 80 beats per minute for 10 s. Nurses' notes were used to record these parameters. Change in oxygen use post-extubation was calculated as the difference between the daily percent FiO2 post-extubation (over 7 days) and the percent FiO2 pre-extubation (up to 12 h pre-extubation). Daily percent FiO2 was estimated by calculating the mean from hourly percent FiO2 recorded in the chart. Nasal damage was qualitatively assessed by digital photography at 1, 7 and 30 days post-extubation using the following scale: (0) no appreciable damage, (1) mild (erythema of nares), (2) moderate (bleeding of nasal mucosa or nares), (3) severe (deformation of nares, perforation of nasal mucosa). Data regarding: number of days required to come off any form of CPAP, time to reach full feeds, rate of weight gain, frequency of chronic lung disease (need for supplemental oxygen at 36 weeks post-conceptional age), necrotizing enterocolitis, intraventricular hemorrhage, sepsis and retinopathy of prematurity were also collected.
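One reading of the oxygen-use calculation described above can be sketched as follows (illustrative only; the function names and the averaging of the seven daily means are our assumptions, not the authors' code):

```python
def mean_fio2(hourly_fio2):
    """Mean of hourly percent FiO2 readings, as charted."""
    return sum(hourly_fio2) / len(hourly_fio2)

def change_in_oxygen_use(pre_hourly, post_days_hourly):
    """Mean daily percent FiO2 over the 7 days post-extubation minus the
    percent FiO2 over up to 12 h pre-extubation."""
    daily_means = [mean_fio2(day) for day in post_days_hourly]
    return sum(daily_means) / len(daily_means) - mean_fio2(pre_hourly)

# Hypothetical infant: 21% FiO2 for the 12 h before extubation, then a
# steady 25% every hour for 7 days afterwards, giving a change of +4%.
delta = change_in_oxygen_use([21] * 12, [[25] * 24] * 7)
```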
Sample size and statistical analysis

A historical successful extubation rate for preterm infants ≤1250 g, observed over 12 months preceding the start of the study, was 70%. In order to demonstrate a difference of 15% in extubation rates between the two strategies, in excess of 400 infants would be required (α of 0.05 and a power of 0.80). A pilot study was therefore undertaken, with 20 patients randomized to each group. Student's t-tests were performed for continuous variables and χ2-tests for categorical data. Wilcoxon rank sum tests were used to compare data sets for groups of patients with unequal numbers. Fisher's least significant difference test was used for post hoc analysis.

Results

The baseline characteristics were similar for both groups of patients (Table 1). The majority of infants were extubated within 5 days of age in each group (n = 16, 80%). Most were extubated from synchronized intermittent mandatory ventilation (HF-CPAP n = 16, IF-CPAP n = 14) at similar ventilatory settings (Table 1). Other forms of ventilation used before extubation included: pressure support (HF-CPAP n = 3, IF-CPAP n = 2), assist control ventilation (HF-CPAP n = 0, IF-CPAP n = 2), or high frequency ventilation (HF-CPAP n = 1, IF-CPAP n = 2). Mean flow rate used in the HF-CPAP group was 1.6 l/min (range 1.4 to 1.7 l/min). Flow rates used to generate IF-CPAP were not recorded but generally involved flow rates from 6 to 8 l/min. All infants who received caffeine had it administered before extubation. Twelve of 20 infants randomized to HF-CPAP required reintubation within 7 days compared to three of 20 infants in the IF-CPAP group (relative risk 2.1, 95% confidence interval 1.3–3.0, P = 0.003). Of the 12 infants in the HF-CPAP group who failed extubation, seven were reintubated within 48 h. All infants were reintubated due to severe apnea or increased frequency of apneas. Secondary outcomes for all infants are reported in Table 2.
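As a rough check on the primary outcome, a Pearson χ2 test on the 2×2 table of reintubation by treatment group (our own sketch, not the authors' analysis script) reproduces a two-sided P value of about 0.003:

```python
from math import erfc, sqrt

def chi2_2x2(a, b, c, d):
    """Pearson chi-square (1 df, no continuity correction) for a 2x2 table
    laid out as [[a, b], [c, d]]; returns (statistic, two-sided P)."""
    n = a + b + c + d
    stat = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    # Survival function of a chi-square variable with 1 df.
    p = erfc(sqrt(stat / 2.0))
    return stat, p

# Rows: HF-CPAP, IF-CPAP; columns: reintubated, remained extubated.
stat, p = chi2_2x2(12, 8, 3, 17)
```

This gives a statistic of 8.64 and P ≈ 0.003, consistent with the value reported; with Yates' continuity correction the P value would be somewhat larger but still below 0.05.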
There were no statistically significant differences noted in the incidence of necrotizing enterocolitis, intraventricular hemorrhage, chronic lung disease, sepsis and retinopathy of prematurity between groups. Eight of 20 infants extubated to HF-CPAP remained extubated on this form of distending pressure for more than 7 days. This subgroup of infants also had higher oxygen use post-extubation compared to those who remained extubated on IF-CPAP (2.0±1.9 versus 0.4±1.6, P = 0.04). Differences in secondary outcomes were not observed in infants who remained extubated, including time to reach full feeds and weight gain at 7, 14 and 30 days of age. Nasal damage as assessed by digital photography of stages 1, 2 or 3 was not observed among any infants.

Discussion

Previous studies have demonstrated that high flow nasal cannula can deliver positive distending pressure equivalent to conventional forms of nasal CPAP.13,14 HF-CPAP has also been shown to be as effective as conventional CPAP in managing apnea of prematurity.14 As a result of this study the use of high flow nasal cannula as an 'equivalent' form of CPAP has increased. Anecdotal nursing reports also affirmed the early use of HF-CPAP, since it was felt to 'spare' the preterm infants from nasal damage, which was occasionally seen in infants managed with nasal prong CPAP. However, recent reports have demonstrated side effects of high air flow through simple nasal cannula including drying and bleeding of the nasal mucosa as well as airway obstruction.15,16 Our findings suggest that HF-CPAP probably should not be used as an equivalent form of CPAP in preterm infants. Compared to IF-CPAP, HF-CPAP was associated with an increase in the number of extubation failures, higher oxygen use, and more apneas and bradycardias.
This occurred despite the fact that there was a trend for more infants randomized to the HF-CPAP arm to be on caffeine before extubation. We were unable to detect any nasal damage from either form of CPAP, although this may have been due to 'study effect' in which respiratory therapists and nurses were more careful in monitoring the nares of the infants. Sreenan et al.14 reported that HF-CPAP was effective in the management of apneas and bradycardias over a 6 h period. The infants studied by Sreenan et al.14 were of higher gestational age compared to this study (30.3 weeks versus 27.5) and also had a higher number of apneas (1–2/6 h) and bradycardias (2–3/6 h). Most of their patients were already established on CPAP, and had therapeutic levels of theophylline. Our findings differ from their report. The reason for this difference may be the outcome studied.

Table 1 Baseline characteristics of study population

Variables                                              HF-CPAP (n = 20)   IF-CPAP (n = 20)   P-value
Male:female                                            11:9               10:10              0.75
Gestational age in weeks                               27.4±1.6           27.6±1.9           0.56
Birth weight in grams                                  1008±157           925±188            0.12
Age at extubation in hours (median and range)          39 (7.5–792)       24 (18–1224)       0.98
Mean airway pressure at time of extubation (cm H2O)    7.7±0.9            7.8±1.3            0.51
Administration of surfactant (n)                       19                 18                 1.00
Administration of antenatal steroids (n)               18                 15                 0.41
Administration of caffeine (n)                         14                 9                  0.11
% Oxygen use at the time of extubation                 21.4±1.1           23.5±5.5           0.13

Abbreviations: IF-CPAP, Infant Flow system (VIASYS) continuous positive airway pressure; HF-CPAP, high flow CPAP. Values expressed as mean±s.d. unless otherwise stated.
Table 2 Primary and secondary outcomes among all randomized infants

Variables                                  HF-CPAP (n = 20)   IF-CPAP (n = 20)   P-value
Remained extubated at 7 days (n)           8                  17                 0.003*
Change in oxygen use postextubation (%)    4.7±6.4            1.0±2.1            0.025*
Episodes of apnea and bradycardia/day      6±8                2±3                0.045*
Days to reach full feeds                   14±6               16±10              0.530
Weight gain postextubation in grams
  7 days                                   −28±82             −25±64             0.85
  14 days                                  102±125            102±98             0.89
  30 days                                  492±189            521±217            0.57

Abbreviations: IF-CPAP, Infant Flow system (VIASYS) continuous positive airway pressure; HF-CPAP, high flow CPAP. *P<0.05; values expressed as mean±s.d. unless otherwise stated.

Successful extubation at 7 days was felt to be a more clinically relevant outcome and a better overall reflection of CPAP efficacy. Since we were not monitoring pressure delivered by the HF-CPAP device it is possible that we were unable to generate sustained distending pressure for extended periods of time despite replicating Sreenan et al.'s technique. We performed this pilot study to assess the feasibility of this intervention in order to form a basis for a large randomized controlled trial. Perceived advantages of alternate forms of CPAP using simple nasal cannula include ease of administration, possible reduction in nasal damage and increased ability for parents to hold and bond with their infants. We failed to show any benefits with this method of CPAP, and demonstrated that it may be harmful. A post hoc power calculation using the observed sample sizes and failure rates, and a type 1 error rate of 5%, demonstrates that our study has a statistical power of 87%, indicating that the difference in reintubation rates was unlikely to be due to chance alone. Limitations of our study include an inability to mask the intervention, small sample size and not monitoring the actual distending pressure generated.
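The post hoc power figure can be approximated with the standard normal-approximation formula for comparing two proportions (a sketch under textbook assumptions; the authors do not state which method they used):

```python
from math import erf, sqrt

def normal_cdf(z):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def power_two_proportions(p1, p2, n, z_alpha=1.96):
    """Approximate power of a two-sided two-sample test of proportions
    with n subjects per group (normal approximation, pooled null SE)."""
    pbar = (p1 + p2) / 2.0
    se0 = sqrt(2.0 * pbar * (1.0 - pbar) / n)          # SE under H0
    se1 = sqrt(p1 * (1 - p1) / n + p2 * (1 - p2) / n)  # SE under H1
    return normal_cdf((abs(p1 - p2) - z_alpha * se0) / se1)

# Observed reintubation rates: 12/20 (HF-CPAP) versus 3/20 (IF-CPAP).
power = power_two_proportions(12 / 20, 3 / 20, 20)
```

This yields roughly 0.86 to 0.87, in line with the 87% reported above.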
In conclusion, CPAP delivered by high flow nasal cannula failed to maintain extubation status among preterm infants ≤1250 g. This form of continuous positive airway pressure is not an effective alternative to conventional nasal continuous positive airway pressure. Its use in other patients, such as older preterm infants, remains to be determined.

Acknowledgments

We would like to thank Ms Shafah Fallagh, PhD for her role as a statistical consultant on this paper. Financial support provided by Physician Services Incorporated Foundation Toronto, Canada (02-29R).

References

1. da Silva WJ, Abbasi S, Pereira G, Bhutani VK. Role of positive end-expiratory changes on functional residual capacity in surfactant treated preterm infants. Pediatr Pulmonol 1994; 18: 89–92.
2. Locke R, Greenspan JS, Schaffer TH, Rubenstein SD, Wolfson MR. Effect of nasal CPAP on thoracoabdominal motion in neonates with respiratory insufficiency. Pediatr Pulmonol 1991; 11: 259–264.
3. Davis PG, Henderson-Smart DJ. Nasal continuous positive airways pressure immediately after extubation for preventing morbidity in preterm infants. Cochrane Database Syst Rev 2003; 2: CD000143.
4. Moa G, Nilsson K, Zetterstrom H, Jonsson LO. A new device for administration of nasal continuous positive airways pressure in the newborn: an experimental study. Crit Care Med 1988; 16: 1238–1242.
5. Moa G, Nilsson K. Nasal continuous positive airways pressure: experiences with a new technical approach. Acta Pediatr 1993; 82: 210–211.
6. Klausner JF, Lee AY, Hutchison AA. Decreased imposed work with a new nasal continuous positive airway pressure device. Pediatr Pulmonol 1996; 22: 188–194.
7. Courtney SE, Pyon KH, Saslow JG, Arnold GK, Pandit PB, Habib RH. Lung recruitment and breathing pattern during variable versus continuous flow nasal continuous positive airway pressure in preterm infants: an evaluation of three devices. Pediatrics 2001; 107: 304–308.
8. Pandit PB, Courtney SE, Pyon KH, Saslow JG, Habib RH. Work of breathing during constant- and variable-flow nasal continuous positive airway pressure in preterm neonates. Pediatrics 2001; 108: 682–685.
9. Lipsten E, Aghai ZH, Pyon KH, Saslow JG, Nakhla T, Long J et al. Work of breathing during nasal continuous positive airway pressure in preterm infants: a comparison of bubble vs variable-flow devices. J Perinatol 2005; 25: 453–458.
10. Mazzella M, Bellini C, Calevo MG, Campone F, Massocco D, Mezzano P et al. A randomised control study comparing the Infant Flow driver with nasal continuous positive airway pressure in preterm infants. Arch Dis Child Fetal Neonatal Ed 2001; 85: F86–F90.
11. Robertson NJ, McCarthy LS, Hamilton PA, Moss AL. Nasal deformities resulting from flow driver continuous positive airway pressure. Arch Dis Child Fetal Neonatal Ed 1996; 75: F209–F212.
12. Loftus BC, Ahn J, Haddad Jr J. Neonatal nasal deformities secondary to nasal continuous positive airway pressure. Laryngoscope 1994; 104: 1019–1022.
13. Locke RG, Wolfson MR, Shaffer TH, Rubenstein SD, Greenspan JS. Inadvertent administration of positive end-distending pressure during nasal cannula flow. Pediatrics 1993; 91: 135–138.
14. Sreenan C, Lemke RP, Hudson-Mason A, Osiovich H. High-flow nasal cannulae in the management of apnea of prematurity: a comparison with conventional nasal continuous positive airway pressure. Pediatrics 2001; 107: 1081–1083.
15. Kopleman AE, Holbert D. Use of oxygen cannulas in extremely low birthweight infants is associated with mucosal trauma and bleeding, and possibly with coagulase-negative staphylococcal sepsis. J Perinatol 2003; 23(2): 94–97.
16. Kopelman AE. Airway obstruction in the extremely low birthweight infant treated with oxygen cannula. J Perinatol 2003; 23(2): 164–165.
Nasal continuous positive airway pressure from high flow cannula versus Infant Flow for preterm infants.

0365: MANAGEMENT OF LARYNGEAL CANCERS: GRAMPIAN EXPERIENCE
Therese Karlsson 3, Muhammad Shakeel 1, Peter Steele 1, Kim Wong Ah-See 1, Akhtar Hussain 1, David Hurman 2. 1 Department of Otolaryngology-Head and Neck Surgery, Aberdeen Royal Infirmary, Aberdeen, UK; 2 Department of Oncology, Aberdeen Royal Infirmary, Aberdeen, UK; 3 University of Aberdeen, Aberdeen, UK
Aims: To determine the efficacy of our management protocol for laryngeal cancer and compare it to the published literature.
Method: Retrospective study of a prospectively maintained departmental oncology database over 10 years (1998-2008). Data collected included demographics, clinical presentation, investigations, management, surveillance, loco-regional control and disease-free survival.
Results: A total of 225 patients were identified; 183 were male (82%) and 42 female (18%). The average age was 67 years. There were 81 (36%) patients with Stage I disease, 54 (24%) with Stage II, 30 (13%) with Stage III and 60 (27%) with Stage IV disease. Of the 225 patients, 130 (96%) of those with Stage I and II carcinomas were treated with radiotherapy (55Gy in 20 fractions). Patients with Stage III and IV carcinomas received combined treatment. Overall three-year survival for Stage I, II, III and IV was 91%, 65%, 63% and 45% respectively. Corresponding recurrence rates were 3%, 17%, 17% and 7%; 13 patients required a salvage total laryngectomy due to recurrent disease.
Conclusion: The vast majority of our laryngeal cancer population is male (82%) and smokers.
Primary radiotherapy provides comparable loco-regional control and survival for early stage disease (Stages I & II). Advanced stage disease is also equally well controlled with multimodal treatment.

0366: RATES OF RHINOPLASTY PERFORMED WITHIN THE NHS IN ENGLAND AND WALES: A 10-YEAR RETROSPECTIVE ANALYSIS
Luke Stroman, Robert McLeod, David Owens, Steven Backhouse. University of Cardiff, Wales, UK
Aim: To determine whether financial restraint and national health cutbacks have affected the number of rhinoplasty operations performed within the NHS in both England and Wales, looking at varying demographics.
Method: Retrospective study of the incidence of rhinoplasty in Wales and England from 1999 to 2009 using OPCS4 codes E025 and E026, using the electronic health databases of England (HesOnline) and Wales (PEDW). Extracted data were explored for total numbers, and variation with respect to age and gender for both nations.
Results: 20,222 and 1,376 rhinoplasties were undertaken over the 10-year study period in England and Wales respectively. A statistically significant gender bias was seen in uptake of rhinoplasty, with women more likely to undergo the surgery in both national cohorts (Wales, p<0.001 and England, p<0.001). Linear regression analysis suggests a statistically significant drop in numbers undergoing rhinoplasty in England (p<0.001) but not in Wales (p>0.05).
Conclusion: Rhinoplasty is a common operation in both England and Wales. Current economic constraints, combined with differences in funding and corporate ethos between the two sister NHS organisations, have led to a statistically significant reduction in numbers undergoing rhinoplasty in England but not in Wales.

0427: PATIENTS' PREFERENCES FOR HOW PRE-OPERATIVE PATIENT INFORMATION SHOULD BE DELIVERED
Jonathan Bird, Venkat Reddy, Warren Bennett, Stuart Burrows. Royal Devon and Exeter Hospital, Exeter, Devon, UK
Aim: To establish patients' preferences for preoperative patient information and their thoughts on the role of the internet.
Method: Adult patients undergoing elective ENT surgery were invited to take part in this survey on the day of surgery. Participants completed a questionnaire recording patient demographics, operation type, quality of the information leaflet they had received, access to the internet and whether they would be satisfied accessing pre-operative information online.
Results: Respondents consisted of 52 males and 48 females. 16% were satisfied to receive the information online only, 24% wanted a hard copy only and 60% wanted both. Younger patients were more likely to want online information, in stark contrast to elderly patients, who preferred a hard copy. Patients aged 50-80 years would be most satisfied with paper and internet information, as they were able to pass on the web link to friends and family who wanted to know more. 37% of people were using the internet to further research information on their condition/operation. However, these people wanted information on reliable online sources to use.
Conclusions: ENT surgeons should be alert to the appetite for online information and identify reliable links to share with patients.

0510: ENHANCING COMMUNICATION BETWEEN DOCTORS USING DIGITAL PHOTOGRAPHY. A PILOT STUDY AND SYSTEMATIC REVIEW
Hemanshoo Thakkar, Vikram Dhar, Tony Jacob. Lewisham Hospital NHS Trust, London, UK
Aim: The European Working Time Directive has resulted in the practice of non-resident on-calls for senior surgeons across most specialties. Consequently, the majority of communication in the out-of-hours setting takes place over the telephone, placing a greater emphasis on verbal communication. We hypothesised this could be improved with the use of digital images.
Method: A pilot study involving a junior doctor and senior ENT surgeons. Several clinical scenarios were discussed over the telephone, complemented by an image. The junior doctor was blinded to this.
A questionnaire was completed which assessed the surgeon's confidence in the diagnosis and management of the patient. A literature search was conducted using PubMed and the Cochrane Library, with the keywords "mobile phone", "photography", "communication" and "medico-legal".
Results & Conclusions: In all the discussed cases, the use of images either maintained or enhanced the degree of the surgeon's confidence. The use of mobile-phone photography as a means of communication is widespread; however, its medico-legal implications are often not considered. Our pilot study shows that such means of communication can enhance patient care. We feel that a secure means of data transfer, safeguarded by law, should be explored as a means of implementing this into routine practice.

0533: THE ENT EMERGENCY CLINIC AT THE ROYAL NATIONAL THROAT, NOSE AND EAR HOSPITAL, LONDON: COMPLETED AUDIT CYCLE
Ashwin Algudkar, Gemma Pilgrim. Royal National Throat, Nose and Ear Hospital, London, UK
Aims: Identify the type and number of patients seen in the ENT emergency clinic at the Royal National Throat, Nose and Ear Hospital, implement changes to improve the appropriateness of consultations and management and then close the audit. Also set up GP correspondence.
Method: First cycle data was collected retrospectively over 2 weeks. Information was captured on patient volume, referral source, consultation

STRATEGIES TO PROMOTE WEIGHT LOSS IN ADOLESCENTS WITH INTELLECTUAL AND DEVELOPMENTAL DISABILITIES

By Lauren Taylor Ptomey

Submitted to the graduate degree program in Dietetics and Nutrition and the Graduate Faculty of the University of Kansas in partial fulfillment of the requirements for the degree of Doctor of Philosophy.

Chairperson: Debra Sullivan
Jeannine Goetz
Joseph Donnelly
Susan Carlson
J. Leon Greene
Jianghua He
Cheryl Gibson

Date defended: June 27th 2013

The Dissertation Committee for Lauren Taylor Ptomey certifies that this is the approved version of the following dissertation:

STRATEGIES TO PROMOTE WEIGHT LOSS IN ADOLESCENTS WITH INTELLECTUAL AND DEVELOPMENTAL DISABILITIES

Chairperson: Debra Sullivan
Date approved: July 15th, 2013

ABSTRACT

Introduction: Adolescents with intellectual and developmental disabilities (IDD) are at an increased risk of obesity, with up to 55% considered overweight and 31% obese. However, there has been minimal research on weight management strategies for adolescents with IDD.
This series of studies aimed (1) to compare the effectiveness of two weight loss diets, an enhanced Stop Light Diet (eSLD) and a conventional diet (CD), for overweight and obese adolescents with IDD, (2) to determine the feasibility of using tablet computers as a weight loss tool in overweight and obese adolescents with IDD, (3) to determine whether the use of photo-assisted 3-day food records significantly improved the estimation of energy and macronutrient intake reported in proxy-assisted 3-day food records in adolescents with IDD, and (4) to evaluate the intervention components of the program by discovering parents' feelings and opinions on the intervention program for both diet groups.

Methods: A 2-month pilot intervention was conducted. All participants were randomized to the eSLD or CD and were given a tablet computer, which they used to track daily dietary intake and physical activity. Participants and a parent met weekly with a health educator via video chat on the tablet computer to receive diet and physical activity feedback and education. Participants completed a proxy-assisted 3-day food record and took pictures of all meals at baseline and the end of month 2 to determine dietary intake, and parents of participants were interviewed using a semi-structured interview at the end of the study.

Results: Twenty participants (45% female, 14.9 ± 2.2 yrs. old) were randomized and completed the intervention (10 eSLD, 10 CD). Participants in both diet groups were able to lose weight; the mean weight change in the eSLD group was 1.67 kg greater than that of the CD group, but the difference (eSLD: -3.89 ± 2.66 kg vs. CD: -2.22 ± 1.37 kg) was not statistically significant. Furthermore, participants in both groups increased their diet quality as measured by the HEI-2005.
Participants were able to use the tablet computer to track their dietary intake on 90.4% (range: 27.8%-100%) of possible days and their physical activity on 64.3% (range: 0%-100%) of possible days, and attended 80.0% of the video chat meetings. The use of photo-assisted food records significantly increased the estimates of energy intake by 16.7% (p=0.0006) at baseline and 10.6% (p=0.0305) at the end of month 2 compared to the use of proxy-assisted food records. Interviews identified that parents had a positive attitude towards the program, liked its convenience, appreciated the use of the tablet computer, and felt that the program taught beneficial strategies to continue to encourage healthy habits in the home.

Conclusion: A weight loss program for adolescents with IDD was successfully conducted, with overall acceptability from both adolescents and parents. Both the eSLD and the CD were identified as weight management strategies that could potentially lead to clinically significant weight loss in adolescents with IDD, and tablet computers were found to be a feasible tool and delivery system for weight loss in adolescents with IDD. The results also suggest that photo-assisted 3-day food records may provide better estimates of energy intake in adolescents with IDD compared to proxy-assisted 3-day food records. Finally, parents reported changing their behaviors to help their child successfully follow a weight loss intervention, but may need more education about the benefits of physical activity and ideas on how to increase the physical activity of adolescents with IDD.

ACKNOWLEDGMENTS

I thank Dr. Debra Sullivan (advisor), Dr. Jeannine Goetz, Dr. Joseph Donnelly, Dr. J. Leon Greene, Dr. Jianghua He, Dr. Cheryl Gibson, and Dr. Susan Carlson for serving on my dissertation committee and guiding me through my Ph.D. training. I thank Dr. Stephen Herrmann for training me to analyze accelerometry and "Lose it!" data. I thank Dr.
Jae Hoon Lee for his help and instruction using SAS. I thank the entire team at The University of Kansas Center for Physical Activity and Weight Management for providing weekly feedback and help. I thank Health Management Resources for providing all the food for this study. I thank all of the participants and their families who participated in this study. Finally, I thank my own family and my husband Ryan for all their love and support as I completed my doctoral degree.

TABLE OF CONTENTS

Chapter One: Introduction
  Definition Of IDD
  Obesity Rates And Health Complications
  Diet And Physical Activity
  Past Research
  Potential Weight Management Approaches For Adolescents With IDD
  Tablet Computers
  Conclusion
  Purpose Of Dissertation
Chapter Two: An Innovative Weight Loss Program For Adolescents With Intellectual And Developmental Disabilities
  Abstract
  Introduction
  Methods
  Results
  Discussion
Chapter Three: Digital Photography Provides Better Estimates Of Dietary Intake In Adolescents With Intellectual And Developmental Disabilities
  Abstract
  Introduction
  Methods
  Results
  Discussion
Chapter Four: A Weight Management Intervention For Adolescents With Intellectual And Developmental Disabilities: A Parent's Perspective
  Abstract
  Introduction
  Methods
  Results
  Discussion
Chapter Five: Discussion And Conclusion
  Summary Of Findings
  Clinical Significance
  Limitations
  Future Directions
  Conclusions
References
Appendix A: Consent Forms
  Legal Guardian Consent
  Assent Form
Appendix B: Questionnaires
  Health History Form
  IDD Adolescent iPad Survey
  eSLD Semi-Structured Interview Guide
  Conventional Diet Semi-Structured Interview Guide
Appendix C: Data Collection Form
Appendix D: Weekly Lessons
Appendix E: Enhanced Stop Light Diet Guides
Appendix F: Conventional Diet Forms
Appendix G: Application Screen Captures
Appendix H: Example Images From The Photo-Assisted Record
Appendix I: Results From Parent Interviews
  Conventional Diet Group
  eSLD Group

CHAPTER ONE: INTRODUCTION

DEFINITION OF IDD

In the United States approximately 1-3% of the population is diagnosed with an intellectual or developmental disability (IDD)1. IDD is defined by the American Association on Intellectual and Developmental Disabilities (AAIDD) as a disability originating before the age of 18, characterized by significant limitations in both intellectual functioning and in adaptive behavior2. Intellectual functioning refers to general mental capacity, such as learning, reasoning, and problem solving. This is typically measured as an Intelligence Quotient (IQ), with an IQ below 70 being indicative of intellectual limitations2. Adaptive behaviors are comprised of three skill sets: conceptual skills, social skills, and practical skills. Conceptual skills refer to language and literacy, the ability to comprehend time, manage money and other number concepts, and give self-direction. Social skills refer to interpersonal skills, social responsibility, self-esteem, social problem solving, and the ability to follow rules and to obey laws.
Practical skills refer to activities of daily living, occupational skills, travel skills, use of the telephone, use of money, and the ability to follow a schedule or routine2. IDD is diagnosed when an individual has an IQ below 70 and has limitations in two or more adaptive behaviors (listed above)3. Furthermore, IDD can be classified into mild, moderate, severe, and profound forms based on an individual's IQ: an IQ of 50-69 indicates mild IDD, 35-49 indicates moderate IDD, 20-34 indicates severe IDD, and below 20 indicates profound IDD4. There are a myriad of underlying causes of IDD, including genetic conditions, growth or nutrition deficiencies during pregnancy, prematurity, and autism1. However, genetic conditions appear to be the primary cause of IDD, contributing to 40-60% of severe cases5. Such conditions include Down syndrome (trisomy 21), DiGeorge syndrome, and fragile X syndrome5.

OBESITY RATES AND HEALTH COMPLICATIONS

The prevalence of obesity in both adults and adolescents with intellectual and developmental disabilities (IDD) is approximately 2-3 times greater than in the general population6-15. In terms of adolescents, Rimmer et al.15 report that 42% of adolescents with autism are overweight, with 25% considered obese, and 55% of those with Down syndrome are overweight, with 31% considered obese. This rate of obesity is a serious problem; studies show that obese adolescents are up to 4 times more likely than their non-obese peers to become obese adults and to develop chronic diseases, such as hypertension, type 2 diabetes, and metabolic syndrome16-19.

DIET AND PHYSICAL ACTIVITY

Poor nutritional intake is believed to contribute to the increased risk of obesity; however, nutritional intake and diet quality have only been examined in adults. Draheim et al.
20 report that 71.4-85.6% of community-dwelling adults with IDD consumed a high-fat diet (>30% of calories from dietary fat), and less than 7% consumed the recommended five servings of fruits and vegetables per day (0%-4.4% of men and 0%-6.4% of women)21. Adolfsson and colleagues20 show that adults with IDD have low intakes of fiber, vitamin A, thiamin, riboflavin, folic acid, iron, and selenium. Ptomey et al.22 examined data from overweight and obese adults with IDD who participated in a weight loss research study. Participants consumed diets low in fruits, whole grains, and healthy oils, meeting 25.2%, 23.0%, and 1.1% of the federal dietary guidelines, respectively. These individuals also consumed a diet high in sodium, with 87.3% consuming more than the recommended intake of sodium (0.7 grams per 1,000 kcals). Additionally, participants had a lower Healthy Eating Index (HEI) score than the general population (45.6 vs. 58.2, respectively)22.

Furthermore, adults with IDD are less active and have lower levels of cardiovascular fitness than the general public, which may contribute to the increased obesity risk and related health consequences in these individuals23-26. The activity level of adolescents with IDD appears to be similar to that of adults with IDD. Lin et al.26 report that only 29.9% of adolescents with IDD have regular physical activity habits, and only 8% met the U.S. Surgeon General's and American College of Sports Medicine's recommendation of at least 30 minutes of moderate-intensity physical activity on most days of the week. The most common physical activities among this population are walking, Special Olympics, and jogging.
PAST RESEARCH

Healthy People 2020, the National Institute on Disability and Rehabilitation Research, the Academy of Nutrition and Dietetics, the World Health Organization, and the Surgeon General's Report on Health and Wellness of People with Disabilities all recommend additional efforts to decrease the high prevalence of obesity amongst children, adolescents, and adults with IDD3,27-29. However, little research has been conducted to promote weight loss or the prevention of weight gain in adults or adolescents with IDD30-39. In 2008, Hamilton et al.40 conducted a systematic review and found 8 studies evaluating weight loss interventions for adults with IDD. This review identified no adequately powered, long-term studies that targeted changes in both energy intake and energy expenditure. The most recent interventions reported within the review included a behavioral approach focusing on teaching self-control techniques, such as reducing the rate of eating and increasing awareness of environmental cues related to eating39, a physical-activity-only intervention that required all participants to walk for a set number of minutes each day38, and a health promotion intervention for adults with IDD living in hospitals or long-term facilities33. The mean weight change elicited by these interventions was minimal (+0.7 kg to -3.4 kg). The follow-up weight change at 6 and 12 months following active intervention was also minimal (range -2 to +3 kg). Thus, both weight loss and weight maintenance were ~1.5 to 3% from baseline, which is considerably less than the long-term weight loss necessary to achieve health benefits (5-10%) as recommended by the NHLBI Guidelines41.

In 2011, Saunders et al.37 reported the results of a pilot study in 79 adults with mild to moderate IDD (age = 31.6 ± 9.6 yrs., BMI = 37.0 ± 9.6). All participants were placed on an enhanced Stop Light Diet (eSLD) that provided 1200 to 1300 kcals/day.
The eSLD was a modified version of the original Stop Light Diet (see the Potential Weight Management Approaches for Adolescents with IDD section) that incorporated pre-packaged meals (2 shakes and 2 prepackaged entrees/d), 5 servings of fruits and vegetables per day, and a physical activity component. The average weight loss for the 73 participants who completed the 6-month intervention was 6.1 kg (6.3%). Subsequently, 50 of 73 participants elected to continue treatment and were supported until the study ended one year after baseline. In that group, average weight loss from baseline to 12 months was 8.6 kg (9.1%). A six-month follow-up phase was conducted, and 29 of the 43 individuals who participated in the follow-up continued to lose weight, while 14 regained some weight, the highest being a gain of 5.8 kg. Of those 14 who regained weight, only 4 regained as much as they had originally lost. Overall, the average weight loss of the 43 participants in the follow-up group was 8.8 kg, with weight change ranging from +5.8 kg to -23.2 kg.

Only one published study exploring weight loss and maintenance in adolescents with IDD was identified. Hickson et al.42 reported findings from a non-controlled, single-group study of 17 youth with IDD, 14 ± 4 years of age. The intervention consisted of a 10-week school-based program with 16 physical activity and nutrition lessons. There was a trend toward reduction in BMI at 10 weeks; however, at follow-up (6 months), BMI exceeded baseline.

POTENTIAL WEIGHT MANAGEMENT APPROACHES FOR ADOLESCENTS WITH IDD

The conventional reduced energy diet (CD), a 500-1000 kcal/day deficit achieved through meal planning, is recommended by the Academy of Nutrition and Dietetics (AND)43 and the NHLBI Guidelines41, and results in weight loss of approximately 1 to 2 pounds per week. The CD includes reducing portion size, consuming a low-fat diet (less than 30% of calories from fat), and increasing fruit and vegetable intake. Consuming high-volume/low-calorie foods, such as fruits and vegetables, has been shown to be beneficial to weight loss
Consuming high-volume/low-calorie foods, such as fruits and vegetables, has been shown to benefit weight loss because these foods can help reduce over-consumption of high-energy-dense foods 44-47. However, implementation of the CD can be difficult for the general population, and it may be particularly problematic for individuals with IDD. The CD can require calorie counting, following a group format, and an ability to read and to comprehend educational materials, nutrition labels, etc. Given their limited cognitive function and need for individualized education, these requirements may present an insurmountable barrier for adolescents with IDD. The successful pilot study by Saunders et al. 37 introduced a simplified approach to energy restriction for adults with IDD using the Stop Light Diet in combination with portion-controlled meals and increased fruit and vegetable consumption. The original Stop Light Diet was developed by Epstein 48 for use in children and categorizes foods according to energy content: red (high energy), yellow (moderate energy), and green (low energy), corresponding to a traffic signal. Children were instructed that they could eat as many green foods as they wished, to limit yellow foods, and to avoid red foods. The original Stop Light Diet was given a Grade 1 (strong, consistent supporting evidence) for its effectiveness in weight management for children by the Academy of Nutrition and Dietetics Evidence Analysis Library 49. Portion-controlled meals (PCMs) designed for weight loss are prepackaged, pre-portioned food products that are low in calories, high in nutritional content, and intended to take the place of regular meals or snacks. Examples of PCMs include shakes, such as Slim Fast, and frozen or shelf-stable entrées, such as Lean Cuisine. PCMs are effective for weight loss because they are decision-free: there is no guesswork or measuring of portion size.
Furthermore, PCMs are convenient, and there is a sufficient variety of product options to satisfy even picky eaters. In adults in the general population, reduced-energy diets utilizing PCMs have consistently been shown to result in significantly greater weight loss than a conventional meal-plan diet 50-53. The Academy of Nutrition and Dietetics Evidence Analysis Library has also given PCMs a Grade 1 rating for their effectiveness in weight management 54. Although PCMs have not been used in adolescents with IDD, a recent report 55 showed significantly greater weight loss over 4 months using PCMs compared with a CD in 113 obese, typically developing adolescents.

TABLET COMPUTERS

The limited cognitive function of individuals with intellectual and developmental disabilities creates barriers to conducting weight management research in this population. One of these barriers is their limited ability to accurately remember their dietary intake and physical activity and to complete dietary recalls; for this reason, there are no validated methods of dietary assessment in this population 56,57. The use of tablet computers may help adolescents track their dietary intake and physical activity, and the use of photo-assisted food recalls may help to improve the accuracy of the 24-hour diet recall in individuals with IDD. The use of technology has become commonplace among adolescents in the general population. A report by the U.S. Department of Education 58 indicated that over 90% of teens (ages 13-17) use computers and that 25% of 5-year-olds can use the Internet. Perhaps surprising to those not familiar with adolescents with IDD, the use of technology is widespread in this population as well. Previous studies indicate computer technology has been successfully used for education and training of both children and adults with IDD 59-64.
Research has demonstrated that adolescents with mild to severe IDD can independently operate a tablet computer to watch movies, listen to music, and play games 65. A recent systematic review of 15 studies using tablet computers as a teaching tool for individuals with IDD found that the use of iPads in this population has positive effects and concluded that they are a viable teaching tool for individuals with IDD 66. Computer and iPad technology has been used to teach adolescents with IDD to use the spell-check function of a word processor 66, to identify and write numbers 67, to understand emotional expression and social skills 68,69, to acquire and prepare food 70, and to perform daily life skills by providing auditory and visual prompts 66. A photo-assisted food recall is a technique in which digital images are taken of all food and beverages consumed during the recall period. This method has been validated for assessing portion sizes 71 in subjects eating prepared 72,73 or home meals 74 and for assessing food intake in confined settings 75 for both children and adults. Previous studies have examined the feasibility of photo-assisted dietary assessments in adults with IDD. These studies found that digital photography appears to be a feasible, reliable, and valid method for assessing dietary quality in individuals with IDD and results in more food items being identified 76,77. The use of photo-assisted 24-hour food recall in adults with IDD resulted in significantly greater energy intake being reported per eating occasion than standard recalls 78. However, there are no published reports of studies examining the benefit or feasibility of using digital photography to enhance dietary assessments in adolescents with IDD.

CONCLUSION

IDD is a disability characterized by significant limitations in both intellectual function and adaptive behavior that currently affects 1-3% of the population.
Obesity rates in adolescents with IDD are 2-3 times greater than in their non-disabled peers. It has been suggested that poor diet quality and lack of physical activity may contribute to these increased rates. The high prevalence of obesity in adolescents with IDD is alarming; however, minimal research has been conducted to promote weight loss or prevent weight gain in this population. Studies have found that traditional weight loss methods may not be appropriate for individuals with IDD because of their decreased cognitive ability. One pilot study found that an innovative diet, the eSLD, may successfully promote weight loss and weight maintenance in adults with IDD, but this diet has not yet been implemented in adolescents. The use of tablet computers may reduce the research limitations encountered when working with individuals with IDD by allowing them to easily track their dietary intake and physical activity and by providing more accurate dietary assessments. However, the feasibility of using tablet computers to promote weight loss and to conduct dietary assessments in adolescents with IDD has not been studied.

PURPOSE OF DISSERTATION

Research to identify successful weight loss and weight management strategies in adolescents with IDD is needed. The goals of this project were to (1) compare the effectiveness of two weight loss diets, an enhanced Stop Light Diet (eSLD) and a conventional diet (CD), in overweight and obese adolescents with IDD; (2) determine the feasibility of using tablet computers as a weight loss tool in overweight and obese adolescents with IDD; (3) determine whether the use of photo-assisted 3-day food records significantly improved the amount of energy and macronutrient intake reported in proxy-assisted 3-day food records in adolescents with IDD; and (4) evaluate the intervention components of the program by eliciting parents' feelings and opinions about the intervention program for both diets.
CHAPTER TWO: AN INNOVATIVE WEIGHT LOSS PROGRAM FOR ADOLESCENTS WITH INTELLECTUAL AND DEVELOPMENTAL DISABILITIES

ABSTRACT

INTRODUCTION: Adolescents with intellectual and developmental disabilities (IDD) are at an increased risk of obesity, with up to 55% considered overweight and 31% obese. However, there has been minimal research on weight management strategies for adolescents with IDD. The purpose of this study was to compare the effectiveness of two weight loss diets, an enhanced Stop Light Diet (eSLD) and a conventional diet (CD), and to determine the feasibility of using tablet computers as a weight loss tool in overweight and obese adolescents (11-18 yrs.) with IDD. METHODS: A 2-month pilot intervention was conducted. All participants were randomized to the eSLD or CD and were given a tablet computer, which they used to track daily dietary intake and physical activity. Participants and parents met weekly with a health educator via video chat on the tablet computer to receive diet and physical activity feedback and education. RESULTS: 20 participants (45% female, 14.9 ± 2.2 yrs. old) were randomized and completed the intervention (10 eSLD, 10 CD). Participants on both diets lost weight, and there were no significant differences between the eSLD and CD (-3.89 ± 2.66 kg vs. -2.22 ± 1.37 kg). Participants were able to use the tablet computer to track their dietary intake on 90.4% (range: 27.8%-100%) of possible days, to track their physical activity on 64.3% (range: 0%-100%) of possible days, and to attend 80.0% of the video chat meetings. CONCLUSIONS: Both dietary interventions appear to promote weight loss in adolescents with IDD, and tablet computers appear to be a feasible tool for delivering a weight loss intervention in this population.

INTRODUCTION

Child and adolescent obesity rates in the U.S. have more than tripled in the last 30 years.
Approximately 31% of adolescents are overweight or obese (BMI ≥ 85th percentile), with 19% considered obese (BMI ≥ 95th percentile) 79. The prevalence of obesity in adolescents with intellectual and developmental disabilities (IDD) is approximately 2 times greater than in the general population 6-15,80,81. Rimmer et al. 15 report that 42% of adolescents with autism are overweight or obese, with 25% of those considered obese, and that 55% of adolescents with Down syndrome are overweight, with 31% considered obese. This rate of obesity is a serious problem, as studies show that obese adolescents are up to 4 times more likely than their healthy-weight peers to become obese adults and to develop chronic diseases, such as hypertension, type 2 diabetes, and metabolic syndrome 16-19. Healthy People 2020, the National Institute on Disability and Rehabilitation Research, the Academy of Nutrition and Dietetics, the World Health Organization, and the Surgeon General's Report on Health and Wellness of People with Disabilities all recommend additional efforts to decrease the high prevalence of obesity among children, adolescents, and adults with IDD 3,27-29. However, there are limited data on which to base effective weight management interventions in any age group with IDD 30-39. A conventional reduced energy diet (CD), with an energy deficit of 500-1000 kcal/day, is recommended for healthy individuals by the Academy of Nutrition and Dietetics 43 and the National Heart, Lung, and Blood Institute (NHLBI) Guidelines 41, and results in weight loss of approximately 1 to 2 pounds (lbs)/week. However, implementation of the CD can be difficult for the general population and may be particularly problematic for individuals with IDD, as it can require calorie counting and an ability to read and comprehend educational materials and nutrition labels.
Given their limited cognitive function and need for individualized instruction, these requirements may prevent adolescents with IDD from successfully following a CD. While the CD may not be the best strategy for adolescents with IDD, few weight loss techniques have been found to be successful in individuals with limited cognitive functioning. Hamilton et al. 40 conducted a systematic review and found eight studies evaluating weight loss interventions for adults with IDD; none were identified for adolescents. Interventions reported within the review included a behavioral approach focusing on teaching self-control techniques, such as reducing the rate of eating and increasing awareness of environmental events related to eating 39; a physical activity-only intervention that required all participants to walk for a set number of minutes each day 38; and a health promotion intervention for adults with IDD living in hospitals or long-term care facilities 33. The mean weight change elicited by these interventions was minimal, +0.7 kg to –3.4 kg (~1.5% to 3%), which is considerably less than the long-term weight loss necessary to achieve health benefits (5-10%) as recommended by the NHLBI Guidelines 41 or the minimum 3% weight loss suggested as clinically relevant 82. To date, there has been only one successful weight loss study in individuals with IDD. A pilot study by Saunders et al. 37 introduced a simplified approach to energy restriction for adults with IDD using an enhanced Stop Light Diet (eSLD). This diet provided 1200 to 1300 kcals/day and combined a modified version of the original Stop Light Diet with prepackaged meals (2 shakes and 2 prepackaged entrées), five servings of fruits and vegetables a day, and a physical activity component. In this study, 79 adults with mild to moderate IDD (age = 31.6 ± 9.6 yrs., BMI = 37.0 ± 9.6) were placed on the eSLD.
The average weight loss for the 73 participants who completed the 6-month intervention was 6.1 kg (6.3%). Subsequently, 50 of the 73 participants elected to continue treatment, and the average weight loss after 12 months was 8.6 kg (9.1%). A limitation of the study by Saunders et al. was that participants were seen only monthly, a consequence of the cost and time demands of an individual face-to-face delivery system for dietary education. Because participants were seen only monthly, they did not receive immediate feedback on their dietary choices or physical activity. This may be problematic, as individuals with IDD may not be able to remember what they ate, how much physical activity they completed, or any events or behaviors that affected diet and physical activity outcomes during that month; thus, they may not receive the dietary or physical activity education they need. While the eSLD appears to be a successful weight loss diet in adults with IDD, its use has not been reported in adolescents with IDD, and it is unknown whether the eSLD promotes significant weight loss in that population. Adolescents in the general population frequently use computer technology. A report by the U.S. Department of Education 58 indicated that over 90% of teens (ages 13-17) use computers and that 25% of 5-year-olds can use the Internet. Perhaps surprising to those not familiar with adolescents with IDD, the use of technology is widespread in this population as well. Previous studies indicate computer technology has been successfully used for education and training of both children and adults with IDD 59-64,66. Technology in the form of tablet computers may be an effective educational tool in weight management of adolescents with IDD. Technology-based interventions have shown great promise as a mechanism for promoting weight loss in the general population, as they are a cost-effective, scalable platform.
The most effective interventions have interactive tools, tailored feedback to participants, and some form of health instructor contact 83. The use of tablet computers and technology with adolescents with IDD could allow for instant feedback, serve as a visual aid, and allow for more education and health educator feedback, which may reduce some of the limitations in conducting a weight loss program in individuals with IDD. However, there are currently no published reports that have explored the feasibility of using tablet computers as a weight loss tool in adolescents with IDD. The purpose of this study was to compare the effectiveness of two weight loss diets, an enhanced Stop Light Diet (eSLD) and a conventional diet (CD), and to determine the feasibility of using tablet computers as a weight loss tool in overweight and obese adolescents with IDD.

METHODS

Participants and Enrollment. An 8-week pilot investigation for adolescents with mild to moderate IDD was conducted. To participate, individuals had to be 11-18 years of age with mild to moderate IDD as verified by a parent or legal guardian, of sufficient cognitive ability to understand directions, and able to communicate preferences (e.g., foods), wants (e.g., more to eat/drink), and needs (e.g., assistance with food preparation) through spoken language. Participants needed to be overweight or obese (BMI > 85th percentile on CDC growth charts) or to have a waist-to-height ratio greater than 0.5, as this indicates excess central adiposity in children and adolescents 84-87. All participants were required to live at home with a parent or guardian and to have access to wireless internet.
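The anthropometric inclusion screen just described (age range, BMI percentile, or waist-to-height ratio as the central-adiposity criterion) could be expressed as a small check. This is an illustrative sketch, not study code; the function name is hypothetical, and the BMI percentile is assumed to be looked up from the CDC growth charts separately:

```python
def is_eligible(age_yrs, bmi_percentile, waist_cm, height_cm):
    """Hypothetical sketch of the study's anthropometric inclusion screen.

    bmi_percentile is assumed to come from the CDC growth charts for the
    participant's age and sex; computing it is outside this sketch.
    """
    right_age = 11 <= age_yrs <= 18
    overweight_or_obese = bmi_percentile > 85          # overweight/obese cut-off
    central_adiposity = (waist_cm / height_cm) > 0.5   # waist-to-height ratio
    return right_age and (overweight_or_obese or central_adiposity)
```

Either anthropometric criterion alone suffices, so an adolescent in the eligible age range with a normal BMI percentile but a waist-to-height ratio above 0.5 would still qualify.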
Individuals were excluded from the study if they had insulin-dependent diabetes, had participated in a weight reduction program involving diet and physical activity in the past 6 months, were currently being treated for major depression or an eating disorder, were consuming a special diet (vegetarian, Atkins, etc.), had a diagnosis of Prader-Willi syndrome, were pregnant or planning to become pregnant within the next two months, or became pregnant during the study. Participants were recruited through local community programs, advertisements in the target area, and special education programs in local school districts. All parents or legal guardians signed a university-approved consent form (appendix A), and all participants gave oral assent (appendix A) to participate in the study. Once enrolled, all participants were randomized to either the enhanced Stop Light Diet (eSLD) or the conventional diet (CD).

Intervention Components

Overview. All participants were randomized to either the eSLD or CD, and participants were given a tablet computer (Apple iPad2®) that they used to track dietary intake and daily steps. At baseline, the participant and a parent attended a 90-minute at-home diet orientation session, and subsequently participated in weekly at-home 30-minute education sessions conducted over video chat (FaceTime) on the iPad. All participants completed outcome assessments at baseline and at the end of month 2. Theoretical model. The intervention was based on the behavioral principles of Social Cognitive Theory (SCT) 88, a frequently used framework for weight management in both children and adults 89. The intervention utilized goal setting, self-monitoring, stimulus control (prompts, scheduling, environmental cues), modeling, positive reinforcement, behavioral contracting, and self-regulatory techniques.
Family-based weight loss interventions using behavioral principles to modify both diet and PA have been shown to promote weight loss in typically developing adolescents 90-92. Parental involvement improves dietary control both at home and when eating out, can influence planning and scheduling of PA, and provides frequent opportunities for supportive interactions with the participant 93. One parent served as the primary family contact and agreed to partner with health educators in conveying the principles of the intervention (diet/physical activity) to the participant, as well as assisting their dependent with adherence to the study protocol. eSLD diet. The original Stop Light Diet (SLD), developed by Epstein 48 for use in children, categorizes foods according to energy content: red (high energy), yellow (moderate energy), and green (low energy) to correspond to a traffic stop light (appendix E). Children are encouraged to eat as many green foods as desired, to limit consumption of yellow foods, and to avoid red foods. The SLD is easy for children and individuals with IDD to understand, especially with added assistance from parents 37,48, and was given a Grade 1 (strong, consistent supporting evidence) for its effectiveness in weight management for children by the Academy of Nutrition and Dietetics Evidence Analysis Library 49. The SLD was enhanced (eSLD) with the addition of fruits and vegetables (≥ 5 servings/day) and high-volume, low-energy portion-controlled meals (PCMs) consisting of 2 entrées and 2 shakes a day. The eSLD provided 1200-1400 kcals/day, corresponding to a 500-700 calorie deficit. Non-caloric beverages were allowed ad libitum. All PCMs were provided to participants and were delivered to the participants' homes on a monthly basis. An example of a typical eSLD meal plan appears in appendix E.
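In code, the stop light categorization amounts to binning foods by energy content. A minimal sketch; the energy-density cut-offs below are illustrative assumptions for the example, not the SLD's published criteria:

```python
def stoplight_category(kcal_per_gram):
    """Bin a food by energy density in the spirit of the Stop Light Diet.
    The thresholds are illustrative placeholders, not the SLD's actual rules."""
    if kcal_per_gram < 1.0:   # low energy, e.g. most fruits and vegetables
        return "green"        # eat freely
    if kcal_per_gram < 3.0:   # moderate energy
        return "yellow"       # limit
    return "red"              # high energy: avoid
```

Under these placeholder cut-offs, an apple (~0.5 kcal/g) is green, cooked rice (~1.3 kcal/g) is yellow, and potato chips (~5 kcal/g) are red, mirroring the traffic-signal logic described above.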
PCMs designed for weight loss are any prepackaged, pre-portioned food product that is low in calories, high in nutritional content, and intended to take the place of regular meals or snacks. PCMs are an effective weight loss tool because they are decision-free: there is no guesswork or measuring of portion size. Furthermore, PCMs are convenient, and there is a sufficient variety of product options to satisfy even picky eaters. In adults in the general population, reduced energy diets utilizing PCMs have consistently been shown to result in significantly greater weight loss than a conventional meal-plan diet 50-53. The Academy of Nutrition and Dietetics Evidence Analysis Library has given PCMs a Grade 1 rating for their effectiveness in weight management 54. Although PCMs have not been used in adolescents with IDD, a recent report 55 showed significantly greater weight loss over 4 months using PCMs compared with a CD in 113 obese, typically developing adolescents. CD diet. Participants in the CD group were educated to consume a nutritionally balanced, high-volume, lower-fat (20-30% of energy) diet as recommended by the United States Department of Agriculture's (USDA) MyPlate approach 94. Participants' energy needs were estimated using the Mifflin-St Jeor equation 95 multiplied by 1.3 to account for daily physical activity. A deficit of 500-700 kcal/day was prescribed; however, prescriptions never recommended less than 1200 kcals/day, as a lower intake could inhibit weight loss and is not recommended for adolescent weight management 96. Consumption of five servings of fruits and vegetables per day was recommended. Participants were provided examples of meal plans consisting of suggested servings of grains, proteins, fruits and vegetables, dairy, and fats based on their energy needs (appendix F), and participants were counseled on appropriate portion sizes using three-dimensional food models.
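As a worked example of the CD energy prescription, a minimal sketch assuming the standard Mifflin-St Jeor equations (REE = 10·weight + 6.25·height − 5·age + 5 for males, − 161 for females), the 1.3 activity factor, and an illustrative 600 kcal deficit (the study prescribed 500-700 kcal; the function name and default are assumptions):

```python
def cd_energy_prescription(weight_kg, height_cm, age_yrs, male, deficit_kcal=600):
    """Sketch of the CD prescription: Mifflin-St Jeor REE x 1.3 activity
    factor, minus a 500-700 kcal/day deficit, never below 1200 kcal/day.
    deficit_kcal=600 is an illustrative midpoint, not a study constant."""
    ree = 10 * weight_kg + 6.25 * height_cm - 5 * age_yrs + (5 if male else -161)
    daily_needs = ree * 1.3                              # daily activity adjustment
    return max(round(daily_needs - deficit_kcal), 1200)  # 1200 kcal/day floor
```

For a 70 kg, 160 cm, 15-year-old male this gives round(1630 × 1.3 − 600) = 1519 kcal/day, while a smaller participant whose computed target falls below 1200 is held at the floor.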
Physical activity. All participants were instructed to engage in moderate-intensity physical activity, gradually accumulating a total of 60 minutes per day at least five days per week, as recommended by the American College of Sports Medicine (ACSM) 97. Because adolescents with IDD have difficulty estimating time spent doing physical activity, participants were asked to wear a pedometer and to log steps on the tablet computer daily so that the health educator could monitor their progress and provide weekly feedback. Tablet Computer. A tablet computer (iPad2, Apple, Cupertino, CA, USA) was given to each participant for the duration of the study. The application "Lose It!" was used to track all foods and beverages consumed and physical activity minutes, while the application "iStep Log" or "Fitbit" was used to track daily steps. Lose It! contains a food database with the nutrient content of many foods and allows an individual to easily track what they ate for all meals and snacks. Food and beverages were logged in Lose It! by entering the food name and then selecting the portion size, or by scanning the bar code of the food item using the tablet computer. iStep Log is a simple application that allowed a participant to easily track their daily steps: after logging into the application, he or she selected the day of the week and entered the steps reported by the pedometer. Due to software malfunctions in the iStep Log application, the final 7 participants used a Fitbit wireless activity tracker to monitor their daily steps. Fitbit activity trackers capture all activity, steps taken, stairs climbed, and intensity of activities and wirelessly sync all collected data to the tablet computer. The corresponding Fitbit application on the iPad provided a graphic display of accumulated steps as participants moved toward their step goal.
All information entered into the applications was stored on a secure online server that was accessible only to the health educators and study staff. Screenshots of the Lose It! and Fitbit applications can be found in appendix G. Weekly monitoring. Participants met with a health educator for 30 minutes once a week over video chat using the application FaceTime on the tablet computer. During these meetings, the health educator logged onto the participant's Lose It! and iStep Log/Fitbit accounts to assess how well the participant followed the diet and physical activity intervention and provided feedback to the participant. A brief lifestyle modification session covering topics such as fruit and vegetable consumption, physical activity, or portion size was provided during each video chat session (appendix C).

Data Collection and Assessment

Health History Questionnaire. Each participant's health history, as well as basic demographic information, was assessed using a health history questionnaire completed by a parent at the time of consent (appendix B). The same questionnaire also assessed the participant's diet and physical activity history, such as typical diet, frequency of eating away from home, and exercise habits. Anthropometrics. Weight was assessed with a calibrated digital scale accurate to ±0.1 kg (Model #PS6600, Befour, Saukville, WI). All participants were weighed clothed and without shoes. Weight was taken at baseline and at the end of months 1 and 2. Height was measured at baseline and at the end of month 2 using a portable stadiometer (Model #IP0955, Invicta Plastics Limited, Leicester, UK). BMI was calculated as weight (kg)/height (m)². Waist circumference was obtained at baseline and at the end of month 2 following the methods described by Lohman et al. 98
Accelerometry. To assess physical activity levels, all participants wore an ActiGraph Model GT3+ (ActiGraph LLC, Pensacola, FL) for 4 consecutive days (2 weekdays and 2 weekend days) at baseline and at the end of month 2. Accelerometer data (activity counts) were summarized in 1-minute epochs using the proprietary ActiGraph software prior to exporting the data to Microsoft Excel. The accelerometer outcome variables were the average accelerometer counts/min over the 4-day period and time spent sedentary and in light-, moderate-, and vigorous-intensity PA. To identify intensity levels, the cut-points used in the National Health and Nutrition Examination Survey, as described by Troiano et al. 99, were used. For the purposes of our analyses, daily valid data time was defined as 1440 minutes minus [non-wear time + spurious data time + malfunction time]. Daily wear time was equal to 1440 minus non-wear time. Non-wear time was defined as an interval of at least 20 consecutive minutes of activity counts of zero counts/min, with an allowance for 1-2 minutes of counts between 0 and 100. Spurious data time was defined as activity counts ≥ 16,000 counts/min, and malfunction time was defined as consecutive identical counts/min > zero (e.g., 32,767) for > 20 minutes. A valid monitoring day was defined as ≥ 10 hours of valid data. Energy/macronutrient intake. Dietary intake was assessed using 3-day photo-assisted diet records at baseline and at the end of month 2, a technique that has been demonstrated to improve dietary assessment in individuals with IDD 76,78. Participants (with the help of a parent) were asked to write down all food and beverages consumed over 3 days (2 weekdays, 1 weekend day) and to use the tablet computer to take pictures of all meals consumed at home during those days. A registered dietitian reviewed the food records with the participant and parent and collected additional details from the photographs.
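The accelerometer validity screening described above can be sketched as follows. This is a simplified illustration, not the study's actual processing code: the 1-2 minute interruption allowance for non-wear runs and the malfunction screen (long runs of identical non-zero counts) are omitted for brevity:

```python
def nonwear_flags(counts):
    """Flag minutes that belong to runs of >= 20 consecutive zero-count
    minutes (simplified: the study also allowed 1-2 interrupting minutes
    of counts between 0 and 100 within a run, omitted here)."""
    flags = [False] * len(counts)
    i = 0
    while i < len(counts):
        if counts[i] == 0:
            j = i
            while j < len(counts) and counts[j] == 0:
                j += 1
            if j - i >= 20:              # run long enough to count as non-wear
                for k in range(i, j):
                    flags[k] = True
            i = j
        else:
            i += 1
    return flags

def is_valid_day(counts):
    """A valid monitoring day requires >= 10 h (600 min) of valid data,
    where valid time excludes non-wear and spurious (>= 16,000 counts/min)
    minutes; the malfunction screen is omitted in this sketch."""
    nonwear = nonwear_flags(counts)
    valid = sum(1 for c, nw in zip(counts, nonwear) if not nw and c < 16000)
    return valid >= 600
```

For a full day of 1440 one-minute epochs, a 900-minute block of zeros leaves only 540 valid minutes and the day is discarded, while a 600-minute zero block still leaves 840 valid minutes and the day is retained.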
All dietary records were entered by a registered dietitian into the Nutrition Data System for Research (NDSR; version 2011). NDSR output files were generated to determine total energy intake, macronutrient composition, and diet quality. Diet Quality. The Healthy Eating Index-2005 (HEI-2005) was used to calculate diet quality from the 3-day photo-assisted diet records at baseline and at the end of month 2. The HEI-2005 100 is a measure of diet quality developed by the United States Department of Agriculture that assesses conformance to the 2005 Dietary Guidelines for Americans 101. The HEI-2005 assesses the intake of total fruit, whole fruit, total vegetables, dark green and orange vegetables, total grains, whole grains, milk, meat and beans, non-hydrogenated oils, saturated fat, sodium, and calories from solid fats, alcoholic beverages, and added sugars (SoFAAS) and provides a point value based on how well a person meets the dietary guidelines, expressed as a percent per 1000 kcals 100. The HEI-2005 was calculated using NDSR output and followed the method developed by Miller et al. 102. Point values for each category were summed to give the final HEI score, with 100 points as the maximum score. Diet quality was considered "good" for total scores greater than 80, "needs improvement" for scores ranging between 51-80, and "poor" for scores less than 51 100. Tablet Usage. To determine the feasibility of using tablet computers to track dietary intake and daily steps, participants' Lose It! and iStep Log/Fitbit usage was monitored. Lose It! and iStep Log/Fitbit data were reviewed by a registered dietitian to determine on how many days of the 2-month intervention participants were able to track at least one meal and enter their daily steps.
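The final HEI-2005 scoring step described above (summing component points and banding the total) reduces to a few lines. A sketch only; the function name and input format are assumptions, and computing the component scores themselves from NDSR output is not shown:

```python
def hei_quality(component_scores):
    """Sum HEI-2005 component points (maximum total 100) and apply the
    quality bands used in the study: > 80 good, 51-80 needs improvement,
    < 51 poor."""
    total = sum(component_scores)
    if total > 80:
        return total, "good"
    if total >= 51:
        return total, "needs improvement"
    return total, "poor"
```

Note that the boundary scores of exactly 51 and 80 both fall in the "needs improvement" band, matching the thresholds stated above.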
Tablet Questionnaire. At the end of months 1 and 2, the participant completed a questionnaire, with the help of their parent, assessing their comfort level using the tablet computer, tracking their food and steps, and using the video chat (appendix B). All questions had responses on a 5-point Likert scale ranging from "very easy" to "very hard". Semi-Structured Interview. At the end of month 2, a recorded semi-structured interview was conducted with the participant's main family caretaker to gather information on the overall ease and enjoyment of the program (appendix B). Interviews were transcribed verbatim, and thematic analysis was conducted (appendix I). Statistical Analysis. Sample demographics and all outcome measures were summarized using descriptive statistics. The Wilcoxon rank-sum test compared group medians at baseline and at the end of month 2, as well as changes from baseline to month 2. General linear modeling was used to examine group, time, and group-by-time interaction effects on each outcome. Simple linear regression was used to examine whether covariates, such as participants' age, gender, race, and level of IDD severity (mild or moderate), were associated with weight loss. Statistical significance was determined at the 0.05 alpha level, and all analyses were conducted using SAS (version 9.2, 2012, SAS Institute Inc.) 103.

RESULTS

Recruitment. Of the 21 participants who enrolled in the pilot study, 20 completed the study. The participant who did not complete the study dropped out after baseline testing due to family complications. Of the 20 participants who completed the study (age = 14.9 ± 2.2 yrs., 45% female), 10 were randomized to the eSLD and 10 to the CD. Table 1 provides demographic data for the participants who completed the study. The only significant difference between diet groups was age (p=0.04).
Demographic data of all participants and participants by diet group

Variable                             All Participants (n=20)   Participants on CD (n=10)   Participants on eSLD (n=10)
Age (yrs), M ± SD                    14.9 ± 2.2                13.9 ± 2.2                  15.9 ± 1.8
Gender
  Male                               11 (55%)                  5 (50%)                     6 (60%)
  Female                             9 (45%)                   4 (40%)                     5 (50%)
Race
  Asian                              1 (5%)                    1 (10%)                     0 (0%)
  Black                              4 (20%)                   0 (0%)                      4 (40%)
  White                              14 (70%)                  8 (80%)                     6 (60%)
  Mixed                              1 (5%)                    1 (10%)                     0 (0%)
Ethnicity
  Not Hispanic/Latino                20 (100%)                 10 (100%)                   10 (100%)
Level of IDD severity
  Mild                               12 (60%)                  8 (80%)                     4 (40%)
  Moderate                           8 (40%)                   2 (20%)                     6 (60%)
Secondary diagnosis
  Autism                             9 (45%)                   5 (50%)                     4 (40%)
  Down Syndrome                      8 (40%)                   3 (30%)                     5 (50%)
  Other                              3 (15%)                   2 (20%)                     1 (10%)
BMI percentile                       90.6% ± 8.1%              88.4% ± 9.0%                92.7% ± 6.0%
Waist-to-height ratio                0.5 ± 0.1                 0.5 ± 0.1                   0.5 ± 0.1

Anthropometrics. Participants in the eSLD had a mean weight loss of 3.89 ± 2.66 kg (p=0.002), compared to a mean loss of 2.22 ± 1.37 kg (p=0.002) for participants in the CD. This corresponds to losses of 4.6% and 3.3% of body weight, respectively. There were no significant differences in weight change between groups; Cohen’s d for the group difference in weight change was 0.8. Table 2 provides full weight loss results. Covariates that significantly affected the weight data were race (weight change in kg p=0.015; BMI change p=0.036) and level of IDD severity (weight change in kg p=0.005; % weight change p=0.015; BMI change p=0.007). Waist circumference decreased in both the eSLD (p=0.011) and the CD (p=0.012); there were no significant differences between groups. Table 2.
Change in body weight (kg) and BMI across the intervention of all participants and participants by diet group

                                All Participants (n=20)   CD (n=10)     eSLD (n=10)   p-value
Weight (kg)
  Baseline                      73.7 ± 28.3               65.1 ± 25.3   82.3 ± 29.8   0.093
  2 months                      70.6 ± 26.5               62.8 ± 24.1   78.4 ± 27.7   0.143
  Weight change (kg)            -3.1 ± 2.2                -2.2 ± 1.4    -3.9 ± 2.7    0.072
  % Weight change               -4.0 ± 1.8                -3.3 ± 1.2    -4.6 ± 2.1    0.123
BMI
  Baseline                      28.8 ± 6.5                26.9 ± 5.3    30.7 ± 7.3    0.248
  2 months                      27.5 ± 6.0                25.9 ± 5.1    29.2 ± 6.7    0.280
  BMI change                    -1.3 ± 0.7                -1.0 ± 0.4    -1.6 ± 0.9    0.105
Waist circumference (cm)
  Baseline                      85.2 ± 15.3               80.6 ± 12.6   89.8 ± 17.0   0.185
  2 months                      82.2 ± 13.8               77.4 ± 10.9   87.0 ± 15.3   0.125
  Waist circumference change    -3.0 ± 3.0                -3.2 ± 3.3    -2.8 ± 2.9    0.807

Accelerometry. Valid accelerometer data were collected from 16 subjects (7 eSLD, 9 CD) at baseline and 15 subjects (9 eSLD, 6 CD) at the end of month 2. These data revealed a significant decrease in sedentary activity time in both groups (p=0.028); however, there was no difference between groups. No significant differences in moderate or vigorous activity were detected. Energy/macronutrient intake. Three-day photo-assisted food records were collected from 20 participants (60 records at baseline and 60 at the end of month 2). Fifty-two records were deemed reliable and representative of a typical day at baseline, and 51 at the end of month 2. Dietary analysis revealed a significant 844.9 ± 641.0 kcal deficit between baseline and the end of month 2 in the eSLD (p=0.009) and a 674.9 ± 769.4 kcal deficit (p=0.027) in the CD. Participants in the eSLD had a significantly greater reduction of energy intake compared to participants in the CD group (p=0.048). Figure 1 provides a macronutrient comparison of the CD vs. the eSLD. Of the 10 participants in the eSLD, 100% consumed the low-calorie shakes, and 70% consumed the PCM entrées that were provided. Participants in the eSLD consumed an average of 1.1 ± 0.1 shakes and 1.0 ± 0.1 entrées per day.
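The effect size of 0.8 reported for the between-group difference in weight change can be reproduced from the published group means and standard deviations, assuming a pooled-standard-deviation Cohen's d. This is a sketch for checking the arithmetic, not the study's actual SAS code:

```python
import math

def cohens_d(mean1, sd1, n1, mean2, sd2, n2):
    """Cohen's d for two independent groups using the pooled standard deviation."""
    pooled_var = ((n1 - 1) * sd1 ** 2 + (n2 - 1) * sd2 ** 2) / (n1 + n2 - 2)
    return (mean1 - mean2) / math.sqrt(pooled_var)

# Reported mean weight losses: eSLD 3.89 +/- 2.66 kg, CD 2.22 +/- 1.37 kg, n = 10 per group.
d = cohens_d(3.89, 2.66, 10, 2.22, 1.37, 10)   # ~0.79, consistent with the reported 0.8
```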
* Significantly greater reduction of energy intake in eSLD compared to CD (p=0.048).

Figure 1. Changes in energy and macronutrient intake across the intervention for eSLD and CD participants. [Bar chart; y-axis: mean change in energy (kcals) and macronutrients (g).]

        Energy (kcal)   Carbohydrate (g)   Fat (g)   Protein (g)
CD      -674.9          -83.0              -31.8     -17.1
eSLD    -844.9          -60.3              -51.9     -32.8

Diet Quality. The HEI-2005 score increased in both diet groups; however, the increase was not found to be significant. Participants in the eSLD increased from a mean score of 55.9 ± 9.0 to 62.6 ± 6.3. Participants in the CD improved from a mean score of 58.7 ± 8.3 to 61.5 ± 5.6. There were no significant differences between groups. One significant improvement across the intervention in both groups was a decrease in calories from added sugars and fat (p=0.015). Table 3 provides full HEI-2005 results. Table 3. Healthy Eating Index-2005 scores for adolescents with IDD across a 2-month intervention.
Values are M ± SD; p values test group differences.

Component (maximum score)            All Participants   CD           eSLD         p
Total fruits (5)
  Baseline                           2.1 ± 1.4          2.2 ± 1.5    2.0 ± 1.4    0.751
  2 months                           2.5 ± 1.5          2.5 ± 1.5    2.5 ± 1.6    0.840
  Change                             0.4 ± 1.3          0.3 ± 1.1    0.5 ± 1.5    0.616
Whole fruits (5)
  Baseline                           2.6 ± 1.5          3.0 ± 1.6    2.1 ± 1.4    0.168
  2 months                           3.1 ± 1.6          3.4 ± 1.6    2.8 ± 1.7    0.380
  Change                             0.6 ± 1.5          0.3 ± 1.2    0.7 ± 1.9    0.644
Total vegetables (5)
  Baseline                           2.9 ± 1.4          2.6 ± 1.4    3.2 ± 1.4    0.321
  2 months                           3.0 ± 1.2          2.8 ± 0.9    3.3 ± 1.4    0.490
  Change                             0.2 ± 1.6          0.2 ± 1.0    0.2 ± 2.0    0.839
Dark green & yellow vegetables (5)
  Baseline                           1.4 ± 1.7          1.4 ± 1.7    1.3 ± 1.7    0.980
  2 months                           2.3 ± 1.8          1.4 ± 1.5    3.2 ± 1.7    0.029*
  Change                             1.0 ± 2.5          0.1 ± 1.3    1.8 ± 3.1    0.049*
Total grains (5)
  Baseline                           4.4 ± 0.7          4.5 ± 0.9    4.3 ± 0.5    0.070
  2 months                           4.4 ± 0.7          4.5 ± 0.6    4.3 ± 0.8    0.396
  Change                             -0.0 ± 0.6         0.0 ± 0.6    -0.0 ± 0.7   0.719
Whole grains (5)
  Baseline                           1.5 ± 1.3          1.7 ± 1.6    1.2 ± 0.7    0.512
  2 months                           1.8 ± 1.3          1.9 ± 1.5    1.7 ± 1.2    0.986
  Change                             0.3 ± 1.7          0.2 ± 1.9    0.5 ± 1.6    0.516
Milk (10)
  Baseline                           6.6 ± 2.8          7.0 ± 2.7    6.1 ± 3.0    0.358
  2 months                           6.2 ± 2.7          6.5 ± 2.7    6.0 ± 2.9    0.669
  Change                             -0.3 ± 3.6         -0.5 ± 2.2   -0.1 ± 4.7   0.956
Meat & beans (10)
  Baseline                           7.5 ± 2.4          6.2 ± 2.6    8.9 ± 1.0    0.007*
  2 months                           8.0 ± 1.7          7.7 ± 1.7    8.2 ± 1.7    0.538
  Change                             0.4 ± 1.9          1.5 ± 2.0    -0.7 ± 1.0   0.009*
Oil (10)
  Baseline                           6.2 ± 2.4          6.7 ± 3.4    5.7 ± 3.4    0.537
  2 months                           5.2 ± 3.0          5.5 ± 2.9    5.0 ± 3.2    0.752
  Change                             -1.0 ± 3.9         -1.3 ± 4.1   -0.8 ± 3.9   0.753
Saturated fat (10)
  Baseline                           5.9 ± 1.8          6.3 ± 2.0    5.4 ± 1.6    0.155
  2 months                           6.8 ± 1.2          6.6 ± 1.4    7.0 ± 0.9    0.781
  Change                             0.9 ± 2.0          0.2 ± 1.9    1.6 ± 1.9    0.137
Sodium (10)
  Baseline                           1.8 ± 1.9          2.2 ± 2.4    1.5 ± 1.3    0.836
  2 months                           1.7 ± 1.0          2.0 ± 0.9    1.3 ± 1.1    0.180
  Change                             -0.2 ± 1.9         -0.2 ± 2.1   -0.2 ± 1.7   0.753
SoFAAS (20)
  Baseline                           14.6 ± 3.9         15.0 ± 4.5   14.2 ± 3.4   0.446
  2 months                           17.1 ± 2.0         16.8 ± 2.4   17.3 ± 1.5   0.988
  Change                             2.52 ± 4.2         1.9 ± 5.1    3.1 ± 3.2    0.447
Total score (100)
  Baseline                           57.3 ± 8.5         58.7 ± 8.3   55.9 ± 9.0   0.971
  2 months                           62.0 ± 5.8         61.5 ± 5.6   62.6 ± 6.3   0.897
  Change                             4.71 ± 9.3         2.8 ± 7.3    6.6 ± 11.0   0.529

* Indicates a significant difference between groups using the Wilcoxon signed-rank test.

Tablet Usage. Participants in both groups entered their food intake into the tablet computer on a median of 90.4% of study days (range: 27.8%-100%) and entered their steps on 64.3% of study days (range: 0-100%). Constant malfunctioning of the iStep Log application led to lower rates of recording step counts. Thus, alternative solutions were explored, and a sample group of 7 participants was given a Fitbit wireless activity monitor. Participants using the Fitbit recorded their steps 72% of the time. Additionally, participants attended an average of 80% of weekly video chat meetings, and 84% had either weekly FaceTime or phone contact with their health educator. There were no significant differences in tracking data between diet groups or levels of IDD classification (mild vs. moderate). Tablet Questionnaire. Eighty-five percent of the adolescents reported that they enjoyed using the tablet computer, and 95% reported that the tablet computer was either “easy” or “very easy” to use. Seventy percent of participants reported that it was “easy” to enter their food into the Lose It! application, and 10% reported that it was “very easy”. Only 10% reported that it was “hard” to enter food into Lose It! Thirty-five percent reported that entering their steps into the iStep Log or Fitbit application was “very easy”, 35% reported it was “easy”, 10% reported it as “ok”, and 20% reported it as “hard”. All participants who reported entering steps as “hard” were adolescents who experienced software malfunctions with the iStep Log application. All participants who used the Fitbit reported it was “very easy” to use. Semi-Structured Interview. When asked about their involvement in helping to enter food into the Lose It!
application, 42% of parents reported that the participants entered everything on their own, 26% reported that they helped the participants enter the food, and 32% reported having to enter everything themselves.

DISCUSSION

The current study was a pilot investigation of an innovative diet for adolescents with IDD and the first known weight loss intervention in adolescents with IDD. The results indicate that adolescents with IDD are willing to follow a supervised diet program and can be successful in reducing their weight. All of the participants lost weight during the diet phase, regardless of diet randomization. Although the program was short in duration, the results are encouraging, as both diets exceeded the minimum 3% weight loss suggested as clinically relevant 82 while also producing improvements in overall diet quality. The use of the eSLD resulted in a slightly greater calorie deficit and weight loss than the CD, and while the difference was not significant, the small sample size and short duration of the program may be responsible for the lack of a significant difference between groups. Diet quality data revealed that, while not significant, participants in the eSLD had a greater increase in whole fruits, dark green and yellow vegetables, and whole grains and a greater decrease in saturated fat and calories from added fat and sugars compared to participants in the CD. These results suggest that, with a greater sample size and longer duration, the eSLD may result in a significantly greater weight loss and increase in diet quality than a CD when used in adolescents with IDD. Adolescents with IDD can follow an eSLD, and participants appeared to enjoy the use of PCMs. While only 70% consumed the provided entrées, the other 30% still used PCMs; in these cases, the families simply chose to purchase low-calorie frozen meals from the grocery store because the participant did not like the taste of the provided entrées.
Parents reported liking the use of PCMs, as this allowed the adolescent to have more control over food choices. The adolescent could choose their own meal and warm it up on their own. Furthermore, parents did not have to worry about determining whether it was a healthy choice or an appropriate portion size. Throughout the intervention, participants appeared to increase the amount of time spent doing physical activity. Although accelerometer data only showed a significant decrease in sedentary time with no significant increases in moderate or vigorous activity, self-reported step data from the iStep Log and Fitbit applications showed that participants in both groups increased their daily step count by an average of 3000 steps across the 2-month intervention. Thus, adolescents with IDD may be willing to follow a physical activity program but may need a longer intervention period to significantly increase their physical activity. Alternatively, a greater sample size may be needed to show a significant increase in adolescents’ time spent performing moderate or vigorous activity. The most common forms of physical activity reported were school-based sports and physical activity-based video games, such as the Wii® and Xbox Kinect®. This study also determined that it is feasible to use a tablet computer as a weight loss tracking tool and education delivery system in adolescents with IDD. Participants were able to track their food intake 83.4% of the time, which is a greater rate than in the general public 104-106. While the ability to track steps appeared to be lower than the ability to track dietary intake, two major issues in this study were that some participants lost their pedometer within 3 days of starting the program and that the iStep Log application frequently crashed on many participants’ tablet computers. Thus, Fitbit activity monitors were used for the final seven participants.
Future studies should consider using electronic pedometers that automatically import step data into the tablet computer and Lose It! application, such as the Fitbit used in the latter part of this study. While this study is promising, limitations include a small sample size, a short duration, and a level of IDD severity that was not stratified during randomization. Given that the current study was a self-funded pilot, the final sample size was only 20; thus, the power of the study was very small (a priori power could not reach 0.6 even for a large effect). If the sample size had been larger, the power of the study and the ability to detect significant differences between groups would have increased. A post-hoc power analysis shows that 54 subjects in total (27 in each group) would provide 80% power for the Wilcoxon rank sum test to detect such an effect at the 0.05 level (G*Power 3.1). Furthermore, the study was only 2 months in duration, whereas most weight loss programs are 6 months in duration. However, it can be hypothesized that weight loss would have continued in both groups and that a possible difference between diets could have been detected (assuming 50-100% of the rates of weight loss observed at the end of month 2, the predicted weight loss at month 6 is 5.0-6.7 kg (6.9-9.2% of baseline weight) for the eSLD and 2.9-3.8 kg (4.7-6.2% of baseline weight) for the CD). The lack of stratification by level of IDD severity resulted in the eSLD group having more participants with moderate IDD, while the CD group had mostly participants with mild IDD. Consequently, our results do not reflect the ability of an individual with moderate IDD to follow a conventional diet. One final limitation is that all participants used a tablet computer to help them follow the intervention; it could be suggested that the use of the tablet computer caused participants to be more successful in reducing their energy intake and losing weight than using the diets alone.
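The post-hoc sample-size figure quoted above can be approximated with a standard normal-approximation formula for a two-sample comparison, inflated by pi/3 to reflect the Wilcoxon rank sum test's asymptotic relative efficiency. This sketch assumes an effect size of d = 0.8 and will differ slightly from the exact G*Power output, which applies additional corrections:

```python
import math

# Two-sided z critical value for alpha = 0.05 and the z value for 80% power.
Z_ALPHA = 1.959964
Z_BETA = 0.841621

def n_per_group(d, z_alpha=Z_ALPHA, z_beta=Z_BETA):
    """Approximate per-group sample size for a Wilcoxon rank sum test at effect size d."""
    n_t = 2 * (z_alpha + z_beta) ** 2 / d ** 2   # two-sample t-test approximation
    return math.ceil(n_t * math.pi / 3)          # inflate by 1/ARE for the rank sum test

n = n_per_group(0.8)   # 26 per group, close to the 27 per group reported
```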
Therefore, it is unknown if the use of technology enhanced the effect of the diets. Overall, this study is the first known weight loss intervention in adolescents with IDD. It was found that both the CD and the eSLD may be effective weight loss strategies in adolescents with IDD. Furthermore, adolescents with IDD can use tablet computers to track their dietary intake and daily steps and to communicate with a health educator over video chat, which suggests that tablet computers are a feasible weight loss tool and delivery system for adolescents with IDD. Long-term studies in a large sample of adolescents with IDD are needed to determine if the eSLD is more effective, in terms of weight loss, than the CD and if the use of tablet computers in combination with the diets is more effective than the diets alone.

CHAPTER THREE: DIGITAL PHOTOGRAPHY PROVIDES BETTER ESTIMATES OF DIETARY INTAKE IN ADOLESCENTS WITH INTELLECTUAL AND DEVELOPMENTAL DISABILITIES

ABSTRACT

INTRODUCTION: Dietary assessment of adolescents with intellectual and developmental disabilities (IDD) is challenging due to the limited cognitive abilities of this population. The objective of this study was to determine if photo-assisted food records improve the assessment of energy and macronutrient intake in adolescents with IDD at two time periods. METHODS: Participants used a mobile device to take photos of all food and beverages consumed over a three-day period while also completing a proxy-assisted 3-day food record with the help of a parent. A registered dietitian reviewed the proxy-assisted 3-day food record with the participant and a parent, then reviewed the images with the participant and recorded the differences between the standard records and the photos. The standard records and the photo-assisted records were entered separately into Nutrition Data System for Research (NDSR) for dietary analysis.
RESULTS: Two hundred and twelve eating occasions (130 at baseline and 82 at the end of month 2) were entered from 20 participants (age = 14.9 ± 2.2 yrs., 45% female). Participants captured photos for 53.8 ± 31.7% of all eating occasions consumed at baseline and 47.2 ± 34.1% at the end of month 2. Photo-assisted records captured significantly higher estimates of energy intake per eating occasion than regular proxy-assisted records at baseline (p=0.0006) and the end of month 2 (p=0.03), as well as significantly greater grams of fat (p=0.011), carbohydrates (p=0.0033), and protein (p=0.0041) at baseline and significantly greater grams of carbohydrates (p=0.0074) at the end of month 2. CONCLUSION: The use of photo-assisted diet records appears to be a feasible method to obtain substantial additional details about dietary intake and consequently may improve the overall energy and macronutrient intake reported when using proxy-assisted diet records in adolescents with IDD.

INTRODUCTION

Due to the complexity of nutrition and the many health risks that accompany poor diet quality and excessive energy intake, it is essential to have valid methods to assess dietary intake. Dietary assessments that provide valid and reliable data are vital to increase the effectiveness of health interventions and policies at both the individual and population level. Adolescents with intellectual and developmental disabilities (IDD) are a population that may benefit from dietary monitoring, as the prevalence of obesity in adolescents with IDD is approximately 2-3 times greater than in healthy adolescents 6-15,80,81. The high prevalence of obesity is a serious problem, as research shows that obese adolescents are up to 4 times more likely than their healthy weight peers to become obese adults and to develop chronic diseases such as hypertension, type 2 diabetes, and metabolic syndrome 16-19.
While the need for accurate and valid dietary assessment is high for adolescents with IDD, there are inherent challenges in conducting dietary assessments in this population due to compromised cognitive functioning, poor memory, and a shortened attention span 57,107. Consequently, researchers have not yet validated a method for dietary intake assessment in individuals with IDD due to these significant barriers to collecting valid and reliable data 57. Dietary records are an assessment method in which respondents record all food and beverages consumed, with the corresponding amounts, over a period of days. Dietary records have the potential to provide accurate data for food consumed during the recording period. They allow respondents to record food and beverages as they are consumed, lessening the problem of omission and increasing the described details of foods. For these reasons, diet records are often regarded as one of the best dietary assessment methods in the general population 108. Proxy-assisted diet records are an assessment method in which a family member or staff member assists a participant in completing a diet record. Proxy-assisted records are commonly used in populations with limited reporting capabilities, such as young children and individuals with Alzheimer’s disease 109, and have been used in adults with IDD 20,37,110. While proxy-assisted diet records appear to be a logical technique for dietary assessments in adolescents with IDD, proxy-assisted food records have not been validated in adults with IDD. Furthermore, no dietary assessment technique has been found to be valid or accurate in adults or adolescents with IDD 20,37,110. A photo-assisted diet record is a newer technique in which digital images are taken of all food and beverages consumed during the record period. This method has been validated in the general population for assessing portion sizes 71, as well as energy and macronutrient intake 72-75,111-114.
Previous studies have determined the feasibility of using photo-assisted dietary assessments in adults with IDD. These studies have found that digital photography appears to be a feasible and reliable method for assessing dietary intake in adults with IDD 76,77. The use of photo-assisted 24-hour food recalls in adults with IDD resulted in a significantly greater energy intake being reported per eating occasion compared to standard recalls (p=0.002), as well as a greater intake of fat (p=0.006), protein (p=0.029), and carbohydrates (p=0.003) 78. The authors concluded that photo-assisted recalls have the potential to be a more accurate dietary assessment technique than 24-hour recalls alone. While previous studies in adults with IDD have determined that photo-assisted recalls improve the total energy and macronutrients reported compared to a 24-hour recall, no information is available regarding the use of digital photography in dietary assessment in adolescents with IDD, or whether it would improve the estimated energy and macronutrient intake reported compared to a proxy-assisted 3-day food record. Therefore, the aim of this study was to determine whether photo-assisted 3-day food records provide significantly different estimates of energy and macronutrient intake than proxy-assisted 3-day food records in adolescents with IDD.

METHODS

A cross-sectional study was conducted with 20 adolescents with IDD. Each participant completed two separate photo-assisted 3-day food records, 2 months apart. All participants had mild (IQ of 50-69) to moderate (IQ of 35-49) IDD and were enrolled in a healthy lifestyle intervention trial. All parents or legal guardians signed a university-approved consent form (appendix A), and all participants gave oral assent (appendix A) to participate in the study.
All participants were given a mobile device with a built-in camera at the beginning of the study that was subsequently used as part of the healthy lifestyle intervention. The mobile device was an Apple iPad 2 (Apple, Cupertino, CA, USA), Wi-Fi only model (241.2 x 185.7 x 8.8 mm; 601 g) with a 246-mm screen (diagonal dimension), 16 GB of storage, and iOS 5 115. The iPad 2 uses an LED backlit screen with a 1024 x 768 screen resolution at 132 pixels per inch and has a battery life of up to 10 hours. The iPad 2 rear-facing camera (1280 x 720 pixels, or 0.92 megapixels, with autofocus) was used for the photo-assisted records. At baseline, the participant and a parent were instructed on basic mobile device functions, including how to operate the camera application. The study personnel demonstrated how to operate the camera application and observed the participant independently take satisfactory images. Participants were instructed to take before and after images with the mobile device of all food and beverages consumed at home for a 3-day period (2 weekdays, 1 weekend day) at baseline and the end of month 2. Each participant was given a fiduciary marker (a 5 cm × 5 cm checked square) to be included in all images to serve as a reference measure to aid study staff in determining portion sizes. Participants also completed hard-copy proxy-assisted diet records, with the help of a parent, for the same 3 days. Calendar prompts were programmed into the mobile device to remind participants to comply with the photo/record protocol. Example images can be found in appendix H. During a home visit at baseline and the end of month 2, a registered dietitian reviewed the 3-day diet record with the participant and parent without the use of the images.
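The 5 cm × 5 cm fiduciary marker described above gives each image a reference object of known size, which supports a simple pixel-to-centimeter conversion when judging portion sizes. A hypothetical sketch (the function names and pixel values are illustrative, and it assumes the marker lies in roughly the same plane as the food):

```python
MARKER_SIDE_CM = 5.0   # known side length of the fiduciary marker

def cm_per_pixel(marker_side_px):
    """Scale factor derived from the marker's measured side length in pixels."""
    return MARKER_SIDE_CM / marker_side_px

def estimate_length_cm(object_px, marker_side_px):
    """Estimate a food item's real-world length from its length in pixels."""
    return object_px * cm_per_pixel(marker_side_px)

# e.g. a plate spanning 900 px in an image where the marker spans 180 px
# works out to roughly 25 cm across.
plate_diameter_cm = estimate_length_cm(900, 180)
```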
Portion guides (i.e., glasses, mugs, bowls, circles, thickness sticks, chip bags, drink bottles, a 12-inch ruler, measuring cups and spoons, a grid, wedges, geometric shapes, and diagrams of chicken pieces) 116 were used to help clarify portion size and provide better accuracy. After the initial review was completed, the registered dietitian reviewed the date- and time-stamped images. Each photographed eating occasion or food item was discussed with the participant and parent to identify additional details regarding the food type, portion size, and other characteristics (e.g., drinks, side dishes, ingredients, condiments, etc.). Food items, portion sizes, and specific details about any food items that differed from the record were recorded in different colored ink to distinguish the photo-assisted record from the standard record. Additional details, including the reasons for the differences (e.g., forgot food, inaccurate portion size), the number of meals captured by images, and the total number of meals consumed, were also recorded. All dietary records were entered into Nutrition Data System for Research (NDSR) software, version 2011 117, by registered dietitians. The proxy-assisted records and the photo-assisted records were entered as two separate records. Dietary analysis from NDSR was used to determine total intake of calories, fat, carbohydrate, and protein. Not all eating occasions were captured by digital photography because participants ate lunch and some snacks at school; therefore, only the eating occasions that were captured by photo were analyzed and compared to the standard proxy-assisted 3-day food record. The differences in energy (kcals) and macronutrient (grams of fat, carbohydrate, and protein) intake between the standard record and the photo-assisted record for each participant were assessed using a mixed model, which accounted for the correlation among meals reported by the same participant.
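The within-participant clustering the mixed model handles can be illustrated with a naive two-stage summary: average the photo-minus-standard differences within each participant first, then across participants, so that participants with many photographed occasions do not dominate. This is a simplified stand-in for, not a reimplementation of, the mixed model run in SAS; the data below are hypothetical:

```python
def participant_mean_diffs(occasions):
    """occasions maps participant id -> list of (standard_kcal, photo_kcal) pairs.
    Returns each participant's mean photo-minus-standard energy difference."""
    return {pid: sum(photo - std for std, photo in pairs) / len(pairs)
            for pid, pairs in occasions.items()}

def overall_mean_diff(occasions):
    """Average the per-participant means so each participant counts once."""
    means = participant_mean_diffs(occasions)
    return sum(means.values()) / len(means)

# Hypothetical example with unequal numbers of photographed occasions:
data = {
    "P01": [(400, 480), (350, 410)],   # per-participant mean difference: 70.0
    "P02": [(500, 540)],               # per-participant mean difference: 40.0
}
# overall_mean_diff(data) -> 55.0
```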
Models were adjusted for the participants’ age, gender, race, and level of IDD severity (mild vs. moderate IDD), thereby improving the accuracy of the energy and macronutrient estimates. Statistical significance was determined at the 0.05 alpha level, and all analyses were conducted using SAS (version 9.2, 2012, SAS Institute Inc.) 103.

RESULTS

Diet records were collected from 20 participants (age = 14.9 ± 2.2 yrs., 45% female). Table 1 shows demographic data. Participants captured images for 53.8 ± 31.7% of all eating occasions consumed at baseline and 47.2 ± 34.1% at the end of month 2. This resulted in the analysis of 212 eating occasions (130 at baseline and 82 at the end of month 2). There was an average of 2.6 ± 2.2 dietary differences (i.e., incorrect portion size, forgetting a food, etc.) per eating occasion between the standard and photo-assisted records at baseline and 2.5 ± 3.0 differences at the end of month 2. At baseline, the most common difference between the photo-assisted record and the standard record was incorrect portion size (37.4%), followed by forgetting a food eaten (32.1%), missing or incorrect details about food (28.2%), and reporting a food that was not actually consumed (2.3%). At the end of month 2, the most common difference was forgetting a food eaten (70.1%), followed by incorrect portion size (16.2%), incorrect or missing details about a food item (12.1%), and reporting a food not actually consumed (1.0%). Table 1.
Demographic data of all participants

Variable                             All Participants (n=20)
Age (yrs), M ± SD                    14.9 ± 2.2
Gender
  Male                               11 (55%)
  Female                             9 (45%)
Race
  Asian                              1 (5%)
  Black                              4 (20%)
  White                              14 (70%)
  Mixed                              1 (5%)
Ethnicity
  Not Hispanic/Latino                20 (100%)
Level of IDD severity
  Mild                               12 (60%)
  Moderate                           8 (40%)
Secondary diagnosis
  Autism                             9 (45%)
  Down Syndrome                      8 (40%)
  Other                              3 (15%)

After adjusting for age, gender, race, and level of IDD severity, photo-assisted records showed significantly higher estimates of total energy intake per eating occasion compared to standard records at baseline (standard = 429.3 ± 261.8 kcals vs. photo-assisted = 515.6 ± 330.8 kcals, p=0.001) and again at the end of month 2 (standard = 437.9 ± 229.6 kcals vs. photo-assisted = 489.7 ± 265.8 kcals, p=0.03). This resulted in a 16.7% increase in total energy reported per eating occasion at baseline and a 10.6% increase at the end of month 2. Photo-assisted records also showed significantly greater intakes of grams of fat (p=0.011), carbohydrates (p=0.003), and protein (p=0.004) at baseline and significantly greater intakes of grams of carbohydrates (p=0.007) at the end of month 2. Fat and protein intakes showed no significant differences between record methods at the end of month 2. Table 2.
Reported energy and macronutrient intake using standard records compared to photo-assisted records per eating occasion at baseline and the end of month 2

                      Standard record        Photo-assisted record   Difference
                      N     M ± SD           N     M ± SD            %        p
Energy (kcal)
  Baseline            130   429.4 ± 261.8    130   515.7 ± 330.9     16.7%    0.001*
  End of month 2      82    437.9 ± 229.7    82    503.0 ± 275.4     13.0%    0.022*
Carbohydrate (g)
  Baseline            130   57.1 ± 36.1      130   67.5 ± 46.7       15.4%    0.003*
  End of month 2      82    59.0 ± 27.8      82    69.1 ± 34.9       14.7%    0.008*
Fat (g)
  Baseline            130   16.0 ± 13.0      130   19.5 ± 17.5       17.9%    0.011*
  End of month 2      82    14.6 ± 12.8      82    16.9 ± 14.4       8.4%     0.150
Protein (g)
  Baseline            130   17.2 ± 12.8      130   21.0 ± 19.4       18.0%    0.004*
  End of month 2      82    19.6 ± 13.8      82    21.5 ± 14.3       9.4%     0.148

* Denotes significance at 0.05 alpha level using mixed modeling for repeated measures

The macronutrient composition was evaluated to compare the standard and photo-assisted records at both baseline and the end of month 2 to examine whether certain types of foods were captured differently between methods (e.g., high-carbohydrate foods, high-fat foods, etc.). Figure 1 illustrates the macronutrient composition as a percent of total energy intake at baseline and the end of month 2 for both record methods and shows that there was no statistical difference in the percent distribution of fat, protein, or carbohydrate at either time point (all p>0.05).

Figure 1. Macronutrient composition of standard and photo-assisted records of adolescents with IDD at baseline and the end of month 2. [Bar chart; y-axis: % of total energy intake; panels: baseline and end of month 2; bars: standard record vs. photo-assisted record.]

DISCUSSION

This study was designed to determine whether photo-assisted food records provide a significant difference in reported energy and macronutrient intake when assessing 3-day food records in adolescents with IDD. The use of photo-assisted food records provided significantly greater total energy intake estimates compared to proxy-assisted food records. These large differences in caloric and macronutrient intake suggest that standard proxy-assisted food records may underestimate dietary intake in this population. Similar to previous studies using digital photography in adults with IDD, participants were able to capture images of their meals 56,77,78. Participants were able to take photos of their food for 53.8 ± 31.7% of all eating occasions consumed at baseline and 47.2 ± 34.1% at the end of month 2. The rate of capture in this study is lower than in previous studies using digital photography in adults. However, the main reason for not capturing a meal on camera that was reported on the food record was consuming a meal or snack while at school, where the mobile devices were not allowed. In this study, 37% of all eating occasions occurred at school. A 16.7% increase in reported energy intake per eating occasion was identified at baseline, and a 13.0% increase was identified at the end of month 2. While these increases are significant, they are less than the 25.8% increase observed in adults with IDD using photo-assisted 24-hour recalls 78. Although one cannot directly compare adolescents to adults, it can be suggested that the use of food records vs. food recalls in individuals with IDD may provide a better estimate of dietary intake, and the use of a proxy (staff or parent) may also benefit the accuracy of dietary intake. There was a greater difference in energy intake reported between assessment methods at baseline compared to the end of month 2 (a 6.1% difference between baseline and month 2). All participants were enrolled in an 8-week healthy lifestyle intervention, which included dietary education.
The knowledge parents and participants received from this intervention may have made them more aware of the foods they were eating, which may explain the decrease in the energy difference between photo-assisted records and standard proxy records from baseline to the end of month 2. Participants and their parents may have become more knowledgeable about what they were eating and could report better details and more accurate portion sizes. They may also simply have become more comfortable with the recall process. Another observed change was that, at the end of month 2, only 16.2% and 12.1% of the differences between record methods were due to incorrect portion size and incorrect or missing details about food, respectively, whereas at baseline these figures were 37.5% and 28.3%, respectively. This supports the theory that participants and parents either became more aware of what they were eating or grew more comfortable reporting portion sizes and specific details about food. The macronutrient composition (percent of calories from fat, protein, and carbohydrate) did not differ between record methods at baseline or month 2, indicating that the significant increase in reported energy intake was responsible for the significant increases in reported macronutrient intake; foods high in a particular macronutrient, such as high fat foods, were not more likely to be underreported. While this study highlights the value of using digital photography in combination with proxy-assisted food records in adolescents with IDD, several limitations do exist. All participants were enrolled in an 8-week healthy lifestyle intervention, which included dietary education. The knowledge they received from this intervention may have caused them to become better at reporting food intake, making the difference between the two methods smaller at month 2.
However, the difference between record methods was still significant in terms of energy intake, which demonstrates that, even with proper portion size and nutrition education, digital photography may still provide added value to food records. Another limitation is that participants were not randomly selected, as they were enrolled in a healthy lifestyle intervention, which may limit the generalizability of these findings. However, the results were adjusted for age, gender, race, and level of IDD severity to help account for this limitation. Finally, while the photo-assisted record method did show significant increases in estimates of dietary intake, this method is not yet validated in adolescents with IDD, and future validation studies need to be conducted. The use of digital photography as a dietary assessment technique for adults with IDD has been published; however, this is the first known study to examine the use of digital photography for dietary assessment in adolescents with IDD. Although future research needs to validate dietary assessment methods in adolescents with IDD, the use of photo-assisted diet records appears to be a feasible method to obtain substantial additional detail about the dietary intake of adolescents with IDD. Individuals working with adolescents with IDD may be able to use photo-assisted records to obtain more accurate dietary intake data, which are lacking in this population. Accurate dietary intake data would help in designing appropriate diets for weight loss and general health promotion, which is necessary to reduce chronic disease risk in adolescents with IDD27,28,118.
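The per-occasion comparisons in this chapter were tested with mixed modeling for repeated measures (see the table footnote). As a concrete illustration of the underlying paired comparison, here is a simplified pure-Python sketch using a paired t-test on hypothetical per-occasion energy values; the study's actual mixed model also accommodates unbalanced and missing occasions, and the numbers below are illustrative, not study data:

```python
import math
from statistics import mean, stdev

def paired_t(standard, photo):
    """Paired t statistic for eating occasions recorded by both
    methods: mean within-pair difference divided by its standard
    error. A simplified analogue of the repeated-measures mixed
    model used in the study (which also handles missing pairs)."""
    diffs = [p - s for s, p in zip(standard, photo)]
    n = len(diffs)
    return mean(diffs) / (stdev(diffs) / math.sqrt(n))

# Hypothetical per-occasion energy values (kcal); illustrative only.
standard = [350, 420, 510, 280, 600, 450, 380, 500]
photo    = [400, 455, 580, 300, 690, 500, 410, 585]
t = paired_t(standard, photo)
print(round(t, 2))
```

A large positive t here corresponds to the pattern reported above: photo-assisted records consistently capturing more energy than the standard records for the same occasions.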
CHAPTER FOUR: A WEIGHT MANAGEMENT INTERVENTION FOR ADOLESCENTS WITH INTELLECTUAL AND DEVELOPMENTAL DISABILITIES: A PARENT'S PERSPECTIVE

ABSTRACT

INTRODUCTION: It has been suggested that successful weight management strategies for adolescents with intellectual and developmental disabilities (IDD) must not only influence diet and physical activity levels, but must also change parental attitudes and behaviors. Thus, the ability of a parent to accept the program and change his or her own attitudes and behaviors could dictate whether a participant is successful or not. The purpose of this study was to obtain parental feedback and evaluation for an 8-week diet and physical activity intervention for adolescents with IDD. METHODS: Semi-structured interviews were conducted with 18 parents (48.1 ± 5.6 years; 94% female) whose child had just finished a diet and physical activity program for adolescents (11-18 yrs) with IDD. Interviews were transcribed verbatim, and thematic analysis was conducted. RESULTS: The five major themes identified were acceptability of the diets, physical activity, use of the tablet computers, plans for healthy eating and physical activity in the future, and changes in parents' own behaviors. Parents reported a positive attitude towards the program. They liked the convenience of the program and the use of the tablet computer, and felt they had learned beneficial strategies to continue to encourage healthy habits in the home. CONCLUSION: Results from the interviews indicated that parents were able to change their behaviors to help their adolescent successfully follow a weight loss intervention. However, parents of adolescents with IDD may place a larger emphasis on dietary intake compared to physical activity, and they may need more education about the benefits of physical activity and ideas on how to increase physical activity in the home.

INTRODUCTION

Childhood obesity is a major concern in the United States.
Adolescent obesity rates in the United States (U.S.) have more than tripled in the last 30 years, and approximately 31% of adolescents are overweight or obese (BMI ≥ 85th percentile), with 19% considered obese (BMI ≥ 95th percentile)79. Adolescents with IDD have significantly higher rates of obesity compared to typically developing adolescents11-15,80. For example, Rimmer et al.15 report that 42% of adolescents with autism are overweight, with 25% being obese, and 55% of adolescents with Down syndrome are overweight, with 31% being obese. The high prevalence of obesity is a serious problem, as studies show that obese adolescents are up to 4 times more likely than their healthy weight peers to become obese adults and to develop chronic diseases such as hypertension, type 2 diabetes, and metabolic syndrome16-19. Healthy People 2020, the National Institute on Disability and Rehabilitation Research, the Academy of Nutrition and Dietetics, the World Health Organization, and the Surgeon General's Report on Health and Wellness of People with Disabilities all recommend additional efforts to decrease the high prevalence of obesity among children, adolescents, and adults with IDD3,27-29. However, little research has been conducted to promote weight loss or weight management in adolescents with IDD40. Parents play an important role in the lives of adolescents with IDD, especially in terms of weight control, as they perform the food shopping and meal planning and preparation, and they often have a role in the physical activity level of their child119. Researchers believe that successful obesity prevention/intervention strategies for adolescents and children with IDD must not only influence the diet and physical activity levels of the adolescents and children, but must also influence parental attitudes and behaviors120,121.
Thus, the ability of a parent to accept the program and change his or her attitudes and behaviors during a weight management intervention could dictate whether a participant is successful or not. Recently, a successful 2-month pilot intervention was conducted to determine the effectiveness of two weight loss diets, an enhanced stop light diet (eSLD) and a conventional diet (CD), and to determine the feasibility of using tablet computers as a weight loss tool in overweight and obese adolescents with IDD. Adolescents in both diet groups were successful at losing weight and appeared to enjoy the program. However, most adolescents were unable to provide detailed feedback or evaluation of the program. The purpose of this study was to obtain parental evaluation of the components of the diet and physical activity intervention in terms of ease of use and feasibility, and to identify what parents want from a weight loss program in order to determine strategies that could inform future weight management programs for adolescents with IDD.

METHODS

Participants and recruitment

Participants were parents of adolescents (ages 11-18) enrolled in a diet and physical activity intervention for adolescents with IDD. Details on the diet and physical activity intervention can be found in chapter 2, but briefly, it was a 2-month weight loss intervention in which participants were randomized to one of two diets: an enhanced stop light diet (eSLD) or a conventional diet (CD). All participants were given a tablet computer (Apple iPad 2) with which to track their dietary intake, using the application Lose It!, and their physical activity (steps), using the applications iStep Log or Fitbit. The eSLD used portion-controlled meals (PCMs), comprising two shakes and two entrées a day, and a color-coded stop light guide to help participants choose healthy food items. The CD taught portion control and food group servings according to the USDA MyPlate guide94. Only one parent was interviewed per household.
In households with more than one parent, the parent most involved with helping the adolescent follow the diet and physical activity program was selected for the interview. Parents signed a university-approved consent form (appendix A) when enrolling their dependent in the diet and physical activity intervention. Parents could opt out of the interviews even if their dependent participated in the intervention.

Data collection

After the participant's child completed the final assessments for the intervention, face-to-face semi-structured interviews were conducted using an interview guide (appendix B) developed by the researchers. The same interviewer conducted all interviews. The interviews began with parents filling out a demographic form recording their age, gender, race, education level, and employment status. They were then asked semi-structured, open-ended questions tailored to their child's intervention group (eSLD or CD) addressing their general perceptions of the program. Interviews were performed in the participant's home and were audio-recorded. All interviews lasted 20-40 minutes.

Data analysis

All interviews were transcribed verbatim, and the transcripts were given to three independent researchers for coding and analysis. A thematic analysis was undertaken applying the constant comparison method122. Data were initially organized according to the categories in the interview guide, and then major themes were formulated. The three researchers grouped data deductively into the major themes. Coded transcripts were triangulated by one researcher by cross-checking the codes for inter-coder agreement. Disputes among coders were resolved through consensus. Data were grouped into five major themes (appendix I). Representative verbatim comments were selected for presentation.

RESULTS

Out of the 20 adolescents who completed the pilot intervention, 18 parents were interviewed.
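The data analysis described above reports cross-checking codes for inter-coder agreement without naming a statistic. One common chance-corrected measure for categorical coding is Cohen's kappa; the sketch below is a minimal illustration on hypothetical theme codes (the excerpts and labels are invented for the example, not study data):

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa: agreement between two coders assigning
    categorical codes to the same excerpts, corrected for the
    agreement expected by chance."""
    assert len(coder_a) == len(coder_b)
    n = len(coder_a)
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    counts_a = Counter(coder_a)
    counts_b = Counter(coder_b)
    # Chance agreement from each coder's marginal code frequencies.
    expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical theme codes for ten interview excerpts.
a = ["diet", "diet", "activity", "tablet", "diet",
     "future", "parent", "diet", "tablet", "activity"]
b = ["diet", "diet", "activity", "tablet", "future",
     "future", "parent", "diet", "tablet", "diet"]
print(round(cohens_kappa(a, b), 2))
```

Kappa of 1 indicates perfect agreement and 0 indicates only chance-level agreement; disputed excerpts (pairs where the coders differ) would then be resolved through consensus, as the study describes.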
Only one parent chose not to be interviewed, due to privacy concerns, and another parent had two adolescents enrolled in the study. Parents were primarily female (94%), and all had a college degree. Sixty-six percent of parents worked full time, and 11% were not employed. Table 1 presents the full demographic data of the parents interviewed.

Table 1. Demographic data of all parents and parents segmented by child's diet group

                              All parents     Parents with         Parents with
                              (n=18)          children on eSLD     children on CD
                                              (n=9)                (n=9)
Age (yrs)*                    48.1 ± 5.6      49.1 ± 7.2           48.0 ± 4.8
Gender
  Male                        1 (6%)          0 (0%)               1 (11%)
  Female                      17 (94%)        9 (100%)             8 (89%)
Race
  Black                       4 (22%)         4 (44%)              0 (0%)
  White                       14 (78%)        5 (56%)              9 (100%)
  Other                       0 (0%)          0 (0%)               0 (0%)
Ethnicity
  Not Hispanic/Latino         18 (100%)       9 (100%)             9 (100%)
Highest Level of Education
  High school/GED             0 (0%)          0 (0%)               0 (0%)
  Associate's Degree          1 (6%)          0 (0%)               1 (11%)
  Bachelor's Degree           11 (61%)        5 (56%)              6 (67%)
  Graduate Degree             6 (33%)         4 (44%)              2 (22%)
Employment Status
  No job                      2 (11%)         0 (0%)               2 (22%)
  Part-Time                   4 (22%)         3 (33%)              1 (11%)
  Full Time                   12 (67%)        6 (67%)              6 (67%)

* M ± SD

Acceptability of Diets

Enhanced Stop Light Diet

The majority of parents stated that their child liked the PCMs and mentioned that the child found them to be "tasty", "easy", and "filling", while still allowing the child to choose their own meals. Parents had a positive attitude towards the meals and found them to be convenient, portion-controlled, simple to make, and nutritionally adequate. Most schools allowed the adolescents to prepare their PCMs in a microwave during lunchtime. Furthermore, parents were pleased that their child was able to prepare the meals without assistance.

"I liked them, they really helped, especially at lunch time. And it really helps me because you know how busy we are. It helps us not to have to worry, um, about what she's going to eat.
And like… because if any other given day she would just go in the refrigerator and get like whatever was there that she liked, in undisclosed amounts. And that was the hardest thing… it was like how much did you eat? Now it's just easy for her. She's like 'oh wow'… pop it in the microwave for a minute, and she's set. And a lot of times when I got home I'm like 'Did you eat yet?', and she's like 'Yeah, I already ate!' And it was just really nice to know."

Problems that caused families not to use PCMs were going out for dinner or having someone else take care of the adolescent; however, most parents felt their child could make healthy food choices using the stop light guide in place of the PCMs.

"Regardless of what setting he was in, we encouraged the healthy eating. The red foods, green foods, yellow foods was helpful because when he was with his grandmother she could say to him, 'No, that's a red food,' to manage that system."

Most parents thought the stop light guide was a good tool that complemented the PCMs; however, a few reported that they did not use the guide, as they felt they could figure out on their own what was a healthy food to feed their child. Most parents thought their child could understand the guide and could use it to pick out healthy food choices.

Conventional Diet

Parents whose child was in the CD group felt that their dependent benefited the most from education about proper portion sizes and that portion control made the diet successful. Many parents reported that they, too, gained knowledge about correct portion sizes and could then provide their adolescent correctly portioned choices at mealtimes or help them to portion out food.

"I really enjoyed him learning about portion size. I think portion size was the most important thing here because he eats big food, like grizzly bear sized portions, and he thought that was ok to do. Doing this program he really learned what a portion size was and saw that he needed to cut back.
I even learned what a real portion size was and that maybe I was feeding him too much."

The main struggle related to the conventional diet was trying to get the adolescents to eat the recommended five fruits and vegetables during the day. Some parents admitted their child had an aversion to fruits and vegetables; others just struggled to get enough in during the day.

"Also, trying to increase his fruits and vegetables was hard. Again he doesn't have a big variety of foods he likes to eat, but we've been trying to just keep the things on hand that he will eat, but he is just not a fruits and vegetables kind of guy."

Parental Suggestions for Both Diets

Suggestions for improvement in both diet groups included educating teachers as well as the parents and participants, increasing the length of the program, allowing parents to also have an iPad, providing money to buy more fruits and vegetables, and providing healthy cooking recipes. All parents reported that they would do the program again if given the chance.

Physical Activity

Most parents reported that their child's main form of exercise was playing on the Nintendo Wii system. Other common forms of exercise reported were school-related physical activity (gym class or sports teams) and Special Olympics sports. Some parents mentioned that they had a treadmill or elliptical machine at home and that their child would use it occasionally. Only two parents had a gym membership for their child.

"Mostly just the Wii. She does the sports at school, and then she's got the dance with her Special Olympics group, and she likes both, oh, and dance. She loves her dance classes."

Parents did not seem to place a large emphasis on exercise, and no parent did physical activity with their dependent. Many parents commented on why they didn't encourage more exercise or do exercise with their child.

"Anyway, no, he didn't do a lot of exercise. It's hard for me to get him to because I work evenings.
Even though I'm off on Saturday, we don't belong to the gym or anything; it's gotten too cold to go outside so we just didn't do much, and I think he is fine with that."

Use of the tablet computer

All parents liked using the tablet computer as a weight loss tool. They felt that the tablet made their child want to participate in the program and that it made healthy eating and exercise fun. Common words used to describe the tablet and the applications used on it were "easy", "visually appealing", and "portable".

"Well, I liked that it was so portable. We took it with us even when we went out to eat. A lot of times we were able to just enter things in right there. He uses it a lot, so it's just so handy. It's not like if we had a separate book or something separate we'd have to do it with. He is a kid with an iPad: he wants it with him all the time. I think it's just very … makes it just really acceptable. It fits his lifestyle."

Parents felt that the Lose It! application was a good learning tool for adolescents with IDD, especially because it was very visually appealing and easy to understand. They conveyed that it was easy to maneuver and that their child was able to use it without a lot of help, although most noted that they would help their child from time to time with entering things.

"I like that it was easy for him to use and that he was the one that was doing it, not me. It was easy and fun for him to do that."

The biggest problem encountered using the tablet and the applications was remembering to record foods every day in the Lose It! and iStep Log applications.

"Nothing was hard to do on the iPad, just remembering to do it was the thing he struggled with the most."

Plans for healthy eating and physical activity after the intervention

All parents intended to continue using the weight management strategies they had learned during the intervention.
The majority of parents were going to download the application Lose It! and have their child continue to track his or her dietary intake and physical activity. Parents indicated that they planned to continue trying to increase the amounts of fruits and vegetables they were serving and to monitor the portion sizes of the foods they prepared. Parents whose child was in the eSLD group mentioned that they were going to purchase low-calorie frozen entrées from the grocery store so that they could continue to have PCMs in the house that their child could use for lunch and dinner options.

"We will continue to use the app. We will continue to use the pre-portion meals. We'll continue to use basically all of the information that you shared with us. I feel good about this."

Changes in parents' own habits

Parents' own dietary habits changed because of their child's participation in this program. They became more aware of portion sizes and tried to serve the correct portion sizes to the entire family, tried to choose healthier restaurants when eating out, and tried to serve more fruits and vegetables at meal times and for snacks.

"We (the family) aren't going out to eat as much. I'm just really looking at portion sizes and trying to make sure I try to get more fruits and vegetables. I think everything that we are trying to get for him, I am doing, too. It's not like he was eating separate food from us. What he was doing, we were doing. By him making better choices and us making better choices for him, I am doing the same for myself and the rest of the family. I think it's definitely been beneficial for probably all of us, the whole family."

While dietary habits appeared to change, parents did not report changes in their own physical activity levels. Most said that they were already physically active and simply kept up their own routine, while others noted that they just didn't feel they had time for physical activity.
"I have taken more of a focus on accomplishing professionally rather than focusing everything on him, and, so yeah, I just don't have time. Single moms have a lot to do. We have to prioritize, and exercising is just not one of my priorities right now."

DISCUSSION

As there are currently no recommended weight management strategies for adolescents with IDD, and minimal research has been conducted on weight loss and weight management strategies for this population40, it is unknown which diet strategies are most appropriate for weight management in adolescents with IDD. Previous research shows that parents play a large role in weight management for adolescents with IDD119-121. Thus, this study was important in exploring what parents did and did not like about a diet and physical activity program, in order to determine which strategies would be successful and which aspects need to be addressed in weight management programs for adolescents with IDD. An equal number of parents from each diet group completed the interview process, and themes were analyzed for diet acceptability, tablet computer use, physical activity, future plans for healthy eating and physical activity after intervention completion, and parent behavior change. Overall, the program was highly accepted and supported by parents in both diet groups. All parents indicated they would do the program again if given the chance. The tablet computers were an acceptable tool for weight loss, as parents felt that their children could use them with minimal assistance and enjoyed using them. Parents appeared to respond well to the program because they felt it was easy for their child to understand, it was appealing to their child, and it decreased parent burden. The interviews suggest that parents of adolescents with IDD want a program that will allow their child to lose weight but will not require a large time commitment or burden on the parents' end.
Those parents who were interviewed understood and were pleased with the dietary components of both diet groups. Parents whose child was in the eSLD group enjoyed the use of PCMs, and parents whose child was in the CD group benefited from portion size education. The self-reported behaviors of parents in both diet groups changed as a result of the program: parents began to increase their intake of fruits and vegetables and to decrease the portion sizes they were eating. These dietary behavior changes are significant, as research has found that children and adolescents whose parents model healthy dietary behaviors have a decreased risk of obesity123-125. Parents appeared to emphasize the dietary component of the intervention over the physical activity component. While parents seemed very involved with helping their child select healthy food options and record their dietary intake, they relied on the school or Special Olympics to provide physical activity options. Even those who obtained gym memberships or had gym equipment in their home did not put much effort into getting their dependent to use the equipment. No parent recounted performing physical activity as a family, such as going on walks. Furthermore, while parents' dietary habits seemed to have improved because of this program, parents' own physical activity did not change, again suggesting that parents were not as involved in encouraging physical activity as they were in changing dietary habits. These results suggest that, when designing a weight management intervention, parents must be taught the importance of physical activity and shown ways to encourage their child, as well as the whole family, to engage in physical activity. While this study demonstrates parental acceptability of two different diet strategies (eSLD and CD) for weight loss in adolescents with IDD, there are some limitations to this study.
All parents were well educated, as all had a college degree and many had graduate degrees; this may limit the generalizability of our findings.

CONCLUSIONS

The current study explored parental evaluations of a weight loss program for adolescents with IDD. Overall, parents had a positive attitude towards the program regardless of which diet group their child followed. They enjoyed the convenience of the program, appreciated the use of the tablet computer, and felt that they had learned beneficial strategies to continue encouraging healthy habits in the home. Results from the interviews indicate that parents want a weight management program that is geared towards the adolescent and does not add extra stress and burden to themselves as parents. Furthermore, it appears that parents of adolescents with IDD may place a larger emphasis on dietary intake compared to physical activity, and may need more physical activity education and ways to increase the physical activity of the participant and the family. As parents of adolescents with IDD typically do the food shopping, perform the meal planning and preparation, and often play a role in providing outlets for physical activity, their views and opinions on weight management strategies are vital and should be taken into consideration when developing and conducting weight management programs and interventions.
CHAPTER FIVE: DISCUSSION AND CONCLUSION

SUMMARY OF FINDINGS

This series of studies aimed to (1) compare the effectiveness of two weight loss diets, an enhanced stop light diet (eSLD) and a conventional diet (CD), in overweight and obese adolescents with IDD, (2) determine the feasibility of using tablet computers as a weight loss tool in overweight and obese adolescents with IDD, (3) determine whether the use of photo-assisted 3-day food records significantly increased the amount of energy and macronutrient intake reported in proxy-assisted 3-day food records in adolescents with IDD, and (4) evaluate the intervention components of the program by assessing parents' feelings and opinions regarding the intervention programs. Overall, the results show that weight loss and weight management programs in adolescents with IDD can be successfully conducted, with overall acceptability from both adolescents and parents. The results demonstrate two weight management strategies that may potentially lead to clinically significant weight loss in adolescents with IDD. Participants' success in logging diet and physical activity data showed that tablet computers are a feasible tool and delivery system for weight loss in adolescents with IDD. Comparison of photo-assisted food records to proxy-assisted records demonstrated that photo-assisted 3-day food records may provide more accurate estimates of energy intake in adolescents with IDD than proxy-assisted 3-day food records alone. Finally, the interviews suggested that parents will change in order to help adolescents with IDD follow a diet and physical activity program, and parents appeared to approve of the approach used in this study.

AN INNOVATIVE WEIGHT LOSS PROGRAM FOR ADOLESCENTS WITH INTELLECTUAL AND DEVELOPMENTAL DISABILITIES

To our knowledge, this is the first weight loss intervention incorporating the use of technology in adolescents with IDD.
We found that both the CD and the eSLD may be effective weight loss strategies in adolescents with IDD. Participants in the eSLD had a mean weight loss of 3.89 ± 2.66 kg, compared to a mean loss of 2.22 ± 1.37 kg in the CD (p=0.0720), corresponding to losses of 4.6% and 3.3% of body weight, respectively. Decreased sedentary activity time and decreased caloric intake contributed to this weight loss. Both groups had significant decreases in sedentary activity time and caloric intake, with a significant caloric reduction of 844.9 ± 641.0 kcal/day (p=0.0024) in the eSLD and 674.9 ± 769.4 kcal/day (p=0.0301) in the CD. Furthermore, participants in both groups increased their diet quality as measured by the HEI-2005: the eSLD group increased by 6.6 ± 11.0 points, and the CD group by 2.8 ± 7.3 points. We found that adolescents with IDD were able to use the tablet computer to track their dietary intake on a median of 90.4% (range: 27.8%-100%) of total days, enter their steps on 64.3% (range: 0-100%) of total days, and speak with a health educator over video chat at 80% of the scheduled weekly meetings. Most adolescents in this study (95%) reported enjoying the use of the tablet computers, and 42% were able to complete tablet computer tasks without assistance. These results suggest that tablet computers are a feasible weight loss tool and delivery system for adolescents with IDD.

DIGITAL PHOTOGRAPHY PROVIDES BETTER ESTIMATES OF DIETARY INTAKE IN ADOLESCENTS WITH INTELLECTUAL AND DEVELOPMENTAL DISABILITIES

We examined the use of digital photography in assessing the dietary intake of adolescents with IDD. This study found that adolescents with IDD are able to take images of their meals, as participants captured 53.8 ± 31.7% of all eating occasions consumed at baseline and 47.2 ± 34.1% of all eating occasions at the end of month 2. These results are similar to those of studies done in adults with IDD77,78.
Furthermore, we found that the use of photo-assisted food records significantly improved estimates of energy intake, by 16.7% (p=0.0006) at baseline and 10.6% (p=0.0305) at the end of month 2, compared to the use of proxy-assisted food records alone. Our results suggest that the use of digital photography in combination with proxy-assisted food records may improve estimates of energy and macronutrient intake in adolescents with IDD.

A WEIGHT MANAGEMENT INTERVENTION FOR ADOLESCENTS WITH INTELLECTUAL AND DEVELOPMENTAL DISABILITIES: A PARENT'S PERSPECTIVE

We explored parental perceptions of the intervention by conducting semi-structured interviews with participants' parents. The five major themes identified were acceptability of the diets, physical activity, use of the tablet computers, plans for healthy eating and physical activity in the future, and changes in parents' own behaviors. Parents had a positive attitude towards the program: they liked its convenience, appreciated the use of the tablet computer, and felt that the program taught beneficial strategies to continue to encourage healthy habits in the home. The results from the interviews suggested that parents were able to help their adolescent successfully follow a weight loss intervention. The interviews also indicated that parents want a weight management program that is geared toward the adolescent and does not add extra stress and burden to themselves as parents. Furthermore, it was observed that parents may place a larger emphasis on dietary intake compared to physical activity and may need more education about the benefits of physical activity and ideas on how to increase the physical activity of adolescents with IDD.

CLINICAL SIGNIFICANCE

The rates of obesity and the risk of developing chronic diseases are high in adolescents with IDD15-19,126.
The results of these studies suggest that the eSLD and the CD are both feasible weight management strategies for adolescents with IDD and that tablet computers are a feasible delivery system for weight management in this population. Furthermore, the use of either the eSLD or the CD could potentially lead to significant weight loss, as well as increases in diet quality and decreases in sedentary activity time, in adolescents with IDD, all of which have the potential to decrease the risk of developing chronic diseases. The results of these studies also suggest that valid dietary assessment techniques need to be developed for use in adolescents with IDD. Digital photography in combination with proxy-assisted food records is feasible and may improve estimates of the dietary intake of adolescents with IDD. Individuals and medical professionals working with adolescents with IDD should be cognizant of the limitations of conducting dietary assessments in this population and should be aware that underreporting may occur.

LIMITATIONS

The present studies were self-funded pilot studies; thus, they were small in sample size and lacked power to identify significant differences between groups. The intervention period was only two months, while most weight loss interventions are typically 3-6 months in duration127. Furthermore, there was no follow-up period to determine whether participants were able to maintain the weight loss they achieved.

FUTURE DIRECTIONS

While we identified two diets that may be successful weight loss strategies in adolescents with IDD, long-term, adequately powered studies are needed to determine whether the eSLD is more effective in terms of weight loss in adolescents with IDD, as well as whether either diet results in continued weight management after the weight loss period concludes.
Furthermore, while the tablet computer appears to be a feasible weight loss tool for adolescents with IDD, future studies need to determine whether the use of technology as a delivery system for weight loss is more effective than the traditional dietary and PA tracking and face-to-face education used in the study by Saunders et al37. Finally, in order to accurately assess the dietary intake and diet quality of adolescents with IDD in general, valid dietary assessment methods are needed. While photo-assisted 3-day food records appear to be a feasible method of collecting more accurate dietary intake information in adolescents with IDD, they are not yet a validated method of dietary assessment in this population. Validation studies need to be conducted to determine whether photo-assisted recalls in conjunction with 3-day proxy-assisted food records can be used to provide valid dietary assessment data in adolescents with IDD. Common validation practices include comparison of the dietary assessment method against other similar methods128, comparison against direct observation of meal consumption129, and the use of doubly labeled water130-132. As no other validated dietary assessment method exists in this population, direct observation or the use of doubly labeled water would be the best techniques.

CONCLUSIONS

The results of the present studies suggest that weight loss interventions in adolescents with IDD are feasible and that both the eSLD and the CD may be successful strategies to promote weight loss in this population. Additionally, the use of technology in the form of tablet computers appears to be a feasible and appealing strategy for delivering weight loss interventions. Furthermore, technology may help to provide a more accurate dietary assessment method in adolescents with IDD via the use of digital photography.
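The validation practices discussed above, such as comparing reported intake against doubly labeled water (DLW), are often summarized with a Bland-Altman-style analysis of mean bias and limits of agreement. The sketch below shows that computation under hypothetical kcal/day values; it is an illustration of the technique, not an analysis of study data.

```python
# Sketch of a Bland-Altman-style agreement check, one common way to compare
# a dietary assessment method against doubly labeled water (DLW).
# All values are hypothetical kcal/day, not study data.
import statistics

reported = [1900, 2100, 1750, 2300, 2000]   # photo-assisted record estimates
dlw      = [2200, 2250, 2000, 2400, 2150]   # DLW-derived energy expenditure

diffs = [r - d for r, d in zip(reported, dlw)]
bias = statistics.mean(diffs)               # mean difference; < 0 suggests underreporting
sd = statistics.stdev(diffs)
loa = (bias - 1.96 * sd, bias + 1.96 * sd)  # 95% limits of agreement

print(f"bias: {bias:.0f} kcal/day")
print(f"limits of agreement: {loa[0]:.0f} to {loa[1]:.0f} kcal/day")
```

A consistently negative bias, as in this toy example, is how the underreporting mentioned above would show up in a validation study.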
Future research will benefit from the use of the strategies developed in these studies to discover ways to promote weight loss in adolescents with IDD and to help lower the increased risk of obesity and chronic diseases that affects this population.

REFERENCES

1. Kronkosky Charitable Foundation. Intellectual Disability Research Brief. 2011.
2. American Association on Intellectual and Developmental Disabilities. Definition of Intellectual Disabilities. 2012; http://www.aaidd.org/content_100.cfm?navID=21. Accessed April 16, 2012.
3. He FJ, MacGregor GA. Effect of longer-term modest salt reduction on blood pressure. Cochrane database of systematic reviews (Online). 2004(3):CD004937.
4. Sulkes SB. Intellectual Disability. Merck Home Health Handbook 2009; http://www.merckmanuals.com/home/childrens_health_issues/learning_and_developmental_disorders/intellectual_disability.html?qt=&sc=&alt=. Accessed April 16, 2012.
5. Ainsworth P, Baker P. Understanding Mental Retardation. University Press of Mississippi; 2004.
6. Rimmer JH, Braddock D, Marks B. Health characteristics and behaviors of adults with mental retardation residing in three living arrangements. Research in developmental disabilities. Nov-Dec 1995;16(6):489-499.
7. Rimmer JH, Wang E. Obesity prevalence among a group of Chicago residents with disabilities. Archives of physical medicine and rehabilitation. Jul 2005;86(7):1461-1464.
8. Harris N, Rosenberg A, Jangda S, O'Brien K, Gallagher ML. Prevalence of obesity in International Special Olympic athletes as determined by body mass index. J Am Diet Assoc. Feb 2003;103(2):235-237.
9. Yamaki K. Body weight status among adults with intellectual disability in the community. Ment Retard. Feb 2005;43(1):1-10.
10. Melville CA, Hamilton S, Hankey CR, Miller S, Boyle S. The prevalence and determinants of obesity in adults with intellectual disabilities. Obesity reviews : an official journal of the International Association for the Study of Obesity. May 2007;8(3):223-230.
11.
Curtin C, Anderson SE, Must A, Bandini L. The prevalence of obesity in children with autism: a secondary data analysis using nationally representative data from the National Survey of Children's Health. BMC pediatrics. 2010;10:11.
12. Chen AY, Kim SE, Houtrow AJ, Newacheck PW. Prevalence of obesity among children with chronic conditions. Obesity (Silver Spring, Md.). Jan 2010;18(1):210-213.
13. De S, Small J, Baur LA. Overweight and obesity among children with developmental disabilities. Journal of intellectual & developmental disability. Mar 2008;33(1):43-47.
14. Maiano C. Prevalence and risk factors of overweight and obesity among children and adolescents with intellectual disabilities. Obesity reviews : an official journal of the International Association for the Study of Obesity. Mar 2011;12(3):189-197.
15. Rimmer JH, Yamaki K, Lowry BM, Wang E, Vogel LC. Obesity and obesity-related secondary conditions in adolescents with intellectual/developmental disabilities. Journal of intellectual disability research : JIDR. Sep 2010;54(9):787-794.
16. Must A, Jacques PF, Dallal GE, Bajema CJ, Dietz WH. Long-term morbidity and mortality of overweight adolescents. A follow-up of the Harvard Growth Study of 1922 to 1935. The New England journal of medicine. Nov 5 1992;327(19):1350-1355.
17. Pi-Sunyer FX. Medical hazards of obesity. Annals of internal medicine. Oct 1 1993;119(7 Pt 2):655-660.
18. DiPietro L, Mossberg HO, Stunkard AJ. A 40-year history of overweight children in Stockholm: life-time overweight, morbidity, and mortality. Int J Obes. 1994;18:585-590.
19. Freedman DS, Khan LK, Serdula MK, Dietz WH, Srinivasan SR, Berenson GS.
The relation of childhood BMI to adult adiposity: the Bogalusa Heart Study. Pediatrics. Jan 2005;115(1):22-27.
20. Adolfsson P, Sydner YM, Fjellstrom C, Lewin B, Andersson A. Observed dietary intake in adults with intellectual disability living in the community. Food & nutrition research. 2008;52.
21. Draheim CC, Stanish HI, Williams DP, McCubbin JA. Dietary intake of adults with mental retardation who reside in community settings. American journal of mental retardation : AJMR. Sep 2007;112(5):392-400.
22. Ptomey L. Diet Quality of Adults with Intellectual and Developmental Disabilities as Measured by the Healthy Eating Index-2005. Unpublished. 2012.
23. Fernhall B, McCubbin JA, Pitetti KH, et al. Prediction of maximal heart rate in individuals with mental retardation. Med Sci Sports Exerc. Oct 2001;33(10):1655-1660.
24. Graham A, Reid G. Physical fitness of adults with an intellectual disability: a 13-year follow-up study. Research quarterly for exercise and sport. Jun 2000;71(2):152-161.
25. Stanish HI, Draheim CC. Walking habits of adults with mental retardation. Ment Retard. Dec 2005;43(6):421-427.
26. Lin JD, Lin PY, Lin LP, Chang YY, Wu SR, Wu JL. Physical activity and its determinants among adolescents with intellectual disabilities. Research in developmental disabilities. Jan-Feb 2010;31(1):263-269.
27. U.S. Department of Health and Human Services. Healthy People 2020. Washington, DC: Office of Disease Prevention and Health Promotion.
28. World Health Organization. Ageing and Intellectual Disabilities-Improving Longevity and Promoting Healthy Ageing: Summative Report. Geneva, Switzerland: World Health Organization; 2000.
29. Carmona RH, Giannini M, Bergmark B, Cabe J. The Surgeon General's Call to Action to Improve the Health and Wellness of Persons with Disabilities: historical review, rationale, and implications 5 years after publication. Disability and health journal. Oct 2010;3(4):229-232.
30. Chapman MJ, Craven MJ, Chadwick DD. Fighting fit?
An evaluation of health practitioner input to improve healthy living and reduce obesity for adults with learning disabilities. Journal of intellectual disabilities : JOID. Jun 2005;9(2):131-144.
31. Fox RA, Haniotes H, Rotatori A. A streamlined weight loss program for moderately retarded adults in a sheltered workshop setting. Applied research in mental retardation. 1984;5(1):69-79.
32. Fox RA, Rosenberg R, Rotatori AF. Parent involvement in a treatment program for obese retarded adults. Journal of behavior therapy and experimental psychiatry. Mar 1985;16(1):45-48.
33. Marshall D, McConkey R, Moore G. Obesity in people with intellectual disabilities: the impact of nurse-led health screenings and health promotion activities. Journal of advanced nursing. Jan 2003;41(2):147-153.
34. McCarran MS, Andrasik F. Behavioral weight-loss for multiply-handicapped adults: assessing caretaker involvement and measures of behavior change. Addictive behaviors. 1990;15(1):13-20.
35. Podgorski CA, Kessler K, Cacia B, Peterson DR, Henderson CM. Physical activity intervention for older adults with intellectual disability: report on a pilot project. Ment Retard. Aug 2004;42(4):272-283.
36. Rimmer JH, Heller T, Wang E, Valerio I. Improvements in physical fitness in adults with Down syndrome. American journal of mental retardation : AJMR. Mar 2004;109(2):165-174.
37. Saunders RR, Saunders MD, Donnelly JE, et al. Evaluation of an approach to weight loss in adults with intellectual or developmental disabilities. Intellectual and developmental disabilities. Apr 2011;49(2):103-112.
38. Fisher E. Behavioural weight reduction program for mentally retarded adult females. Percept Mot Skills. 1986;62:359-362.
39. Rotatori A, Switzky H, Fox R. Behavioral weight reduction procedures for obese mentally retarded individuals: a review. Ment Retard. 1981;19:157-161.
40. Hamilton S, Hankey CR, Miller S, Boyle S, Melville CA.
A review of weight loss interventions for adults with intellectual disabilities. Obesity reviews : an official journal of the International Association for the Study of Obesity. Jul 2007;8(4):339-345.
41. National Heart, Lung, and Blood Institute. Clinical guidelines on the identification, evaluation, and treatment of overweight and obesity in adults: The evidence report. National Institutes of Health; 1998.
42. Hinckson EA, Dickinson A, Water T, Sands M, Penman L. Physical activity, dietary habits and overall health in overweight and obese children and youth with intellectual disability or autism. Research in developmental disabilities. Feb 6 2013;34(4):1170-1178.
43. Cummings S, Parham ES, Strain GW. Position of the American Dietetic Association: weight management. J Am Diet Assoc. Aug 2002;102(8):1145-1155.
44. Ditschuneit HH, Flechtner-Mors M, Johnson TD, Adler G. Metabolic and weight-loss effects of a long-term dietary intervention in obese patients. Am J Clin Nutr. 1999;69:198-204.
45. Heber D, Ashley JM, Wang HJ, Elashoff RM. Clinical evaluation of a minimal intervention meal replacement regimen for weight reduction. J Am Coll Nutr. 1994;13(6):608-614.
46. Bell EA, Rolls BJ. Energy density of food affects energy intake across multiple levels of fat content in lean and obese women. Am J Clin Nutr. 2001;73:1010-1018.
47. Prentice AM. Manipulation of dietary fat and energy density and subsequent effects on substrate flux and food intake. Am J Clin Nutr. 1998;67:535S-541S.
48. Epstein L, Squires S. The Stoplight Diet for Children: An Eight-Week Program for Parents and Children. Boston: Little Brown & Co; 1988.
49. Academy of Nutrition and Dietetics. What is the evidence to support using the Traffic Light Diet to limiting calorie and food intake in children? 2005; http://www.adaevidencelibrary.com/conclusion.cfm?conclusion_statement_id=250052.
50. Ditschuneit HH, Flechtner-Mors M.
Efficacy of replacement of meals with diet shakes and nutrition bars in the treatment of obesity and maintenance of body weight. Int J Obes. 1996;20(S4):57.
51. Wing RR, Jeffery RW, Burton LR, Thorson C, Nissinoff KS, Baxter JE. Food provision vs structured meal plans in the behavioral treatment of obesity. Int J Obes. 1996;20(1):56-62.
52. Wing RR, Jeffery RW. Food provision as a strategy to promote weight loss. Obes Res. Nov 2001;9 Suppl 4:271S-275S.
53. Heymsfield SB, van Mierlo CA, van der Knaap HC, Heo M, Frier HI. Weight management using a meal replacement strategy: Meta and pooling analysis from six studies. Int J Obes Relat Metab Disord. May 2003;27(5):537-549.
54. Academy of Nutrition and Dietetics. How effective (in terms of client adherence and weight loss and maintenance) are meal replacements (liquid meals, meal bars, frozen prepackaged meals)? 2011; http://www.adaevidencelibrary.com/evidence.cfm?evidence_summary_id=250141.
55. Berkowitz RI, Wadden TA, Gehrman CA, et al. Meal replacements in the treatment of adolescent obesity: a randomized controlled trial. Obesity. Jun 2011;19(6):1193-1199.
56. Humphries K, Traci MA, Seekins T. Food on Film: Pilot Test of an Innovative Method for Recording Food Intake of Adults with Intellectual Disabilities Living in the Community. Journal of Applied Research in Intellectual Disabilities. 2008;21(2):168-173.
57. Humphries K, Traci MA, Seekins T. Nutrition and adults with intellectual or developmental disabilities: systematic literature review results. Intellectual and developmental disabilities. Jun 2009;47(3):163-185.
58. U.S. Department of Education. Computer and Internet Use by Children and Adolescents in 2001. NCES; 2003.
59. Boyd AW.
Adapting to the iPad, called education's 'equalizer'. USA Today. 2011.
60. Davies DK, Stock SE, Wehmeyer ML. Enhancing independent task performance for individuals with mental retardation through use of a handheld self-directed visual and audio prompting system. Educ Train Ment Ret. Jun 2002;37(2):209-218.
61. Spence-Cochran K, Pearl C. A Comparison of Hand-Held Computer and Staff Model Supports for High School Students with Autism and Intellectual Disabilities. Assistive Technology Outcomes and Benefits. 2009(Special Issue):26-42.
62. Stock SE, Davies DK, Wehmeyer ML, Lachapelle Y. Emerging new practices in technology to support independent community access for people with intellectual and cognitive disabilities. NeuroRehabilitation. 2011;28(3):261-269.
63. Van Laarhoven T. Using Video iPods to teach Life Skills to Individuals with Autism Spectrum Disorder: Background Research and Creation of Video-Based Materials. Assistive Technology Outcomes and Benefits. 2009(Special Issue):18-25.
64. Wehmeyer ML, Smith SJ, Palmer SB. Technology Use by Students with Intellectual Disabilities: An Overview. Journal of Special Education Technology. 2004;19(4):7-22.
65. Kagohara DM, Sigafoos J, Achmadi D, van der Meer L, O'Reilly MF, Lancioni GE. Teaching students with developmental disabilities to operate an iPod Touch® to listen to music. Res Dev Disabil. Nov-Dec 2011;32(6):2987-2992.
66. Kagohara DM, van der Meer L, Ramdoss S, et al. Using iPods® and iPads® in teaching programs for individuals with developmental disabilities: a systematic review. Res Dev Disabil. Jan 2013;34(1):147-156.
67. Jowett EL, Moore DW, Anderson A. Using an iPad-based video modelling package to teach numeracy skills to a child with an autism spectrum disorder. Dev Neurorehabil. 2012;15(4):304-312.
68. McHugh L, Bobarnac A, Reed P. Brief report: teaching situation-based emotions to children with autistic spectrum disorder. J Autism Dev Disord. Oct 2011;41(10):1423-1428.
69.
Ramdoss S, Machalicek W, Rispoli M, Mulloy A, Lang R, O'Reilly M. Computer-based interventions to improve social and emotional skills in individuals with autism spectrum disorders: a systematic review. Dev Neurorehabil. 2012;15(2):119-135.
70. Ayres K, Cihak D. Computer- and video-based instruction of food-preparation skills: acquisition, generalization, and maintenance. Intellect Dev Disabil. Jun 2010;48(3):195-208.
71. Williamson DA, Allen HR, Martin PD, Alfonso AJ, Gerald B, Hunt A. Comparison of digital photography to weighed and visual estimation of portion sizes. J Am Diet Assoc. Sep 2003;103(9):1139-1145.
72. Higgins JA, LaSalle AL, Zhaoxing P, et al. Validation of photographic food records in children: are pictures really worth a thousand words? European journal of clinical nutrition. Aug 2009;63(8):1025-1033.
73. Martin CK, Han H, Coulon SM, Allen HR, Champagne CM, Anton SD. A novel method to remotely measure food intake of free-living individuals in real time: the remote food photography method. The British journal of nutrition. Feb 2009;101(3):446-456.
74. Dahl Lassen A, Poulsen S, Ernst L, Kaae Andersen K, Biltoft-Jensen A, Tetens I. Evaluation of a digital method to assess evening meal intake in a free-living adult population. Food & nutrition research. 2010;54.
75. Martin CK, Newton RL, Jr., Anton SD, et al. Measurement of children's food intake with digital photography and the effects of second servings upon food intake. Eating behaviors. Apr 2007;8(2):148-156.
76. Humphries K, Traci MA, Seekins T. Food on Film: Pilot Test of an Innovative Method for Recording Food Intake of Adults with Intellectual Disabilities Living in the Community. Journal of Applied Research in Intellectual Disabilities. 2008;21(2):126-173.
77. Elinder LS, Brunosson A, Bergstrom H, Hagstromer M, Patterson E. Validation of personal digital photography to assess dietary quality among people with intellectual disabilities. Journal of intellectual disability research : JIDR.
Feb 2012;56(2):221-226.
78. Ptomey LT, Herrmann SD, Lee J, Donnelly JE, Sullivan DK. Photo assisted 24-hour dietary recalls in adults with intellectual and developmental disabilities. Paper presented at: Experimental Biology 2013; Boston, MA.
79. Ogden CL, Carroll MD, Curtin LR, Lamb MM, Flegal KM. Prevalence of high body mass index in US children and adolescents, 2007-2008. JAMA : the journal of the American Medical Association. Jan 20 2010;303(3):242-249.
80. Mikulovic J, Marcellini A, Compte R, et al. Prevalence of overweight in adolescents with intellectual deficiency. Differences in socio-educative context, physical activity and dietary habits. Appetite. Apr 2011;56(2):403-407.
81. Slevin E, Truesdale-Kennedy M, McConkey R, Livingstone B, Fleming P. Obesity and overweight in intellectual and non-intellectually disabled children. J Intellect Disabil Res. Sep 7 2012.
82. Donnelly JE, Blair SN, Jakicic JM, Manore MM, Rankin JW, Smith BK. American College of Sports Medicine Position Stand. Appropriate physical activity intervention strategies for weight loss and prevention of weight regain for adults. Med Sci Sports Exerc. Feb 2009;41(2):459-471.
83. Pellegrini CA, Duncan JM, Moller AC, et al. A smartphone-supported weight loss program: design of the ENGAGED randomized controlled trial. BMC public health. 2012;12:1041.
84. Mokha JS, Srinivasan SR, Dasmahapatra P, et al. Utility of waist-to-height ratio in assessing the status of central obesity and related cardiometabolic risk profile among normal weight and overweight/obese children: the Bogalusa Heart Study. BMC Pediatr. 2010;10:73.
85. Nambiar S, Hughes I, Davies PS. Developing waist-to-height ratio cut-offs to define overweight and obesity in children and adolescents. Public health nutrition. Oct 2010;13(10):1566-1574.
86. Nambiar S, Truby H, Abbott RA, Davies PS. Validating the waist-height ratio and developing centiles for use amongst children and adolescents. Acta Paediatr. Jan 2009;98(1):148-152.
87.
Garnett SP, Baur LA, Cowell CT. Waist-to-height ratio: a simple option for determining excess central adiposity in young people. Int J Obes (Lond). Jun 2008;32(6):1028-1030.
88. Bandura A. Social Foundations of Thought and Action: A Social Cognitive Theory. Englewood Cliffs, New Jersey: Prentice-Hall; 1986.
89. Baranowski T, Perry CL, Parcel GS. How individuals, environments, and health behavior interact: Social Cognitive Theory. In: Glanz K, Lewis FM, Rimmer BK, eds. Health Behavior and Health Education: Theory, Research, and Practice. 3rd ed. San Francisco, CA: Jossey-Bass Publishers; 2002:246-279.
90. Epstein LH. Family based behavioural intervention for obese children. Int J Obes. 1996;20 Suppl 1:S14-S21.
91. Epstein LH, Paluch RA, Roemmich JN, Beecher MD. Family-based obesity treatment, then and now: twenty-five years of pediatric obesity treatment. Health Psychol. Jul 2007;26(4):381-391.
92. Shrewsbury VA, Steinbeck KS, Torvaldsen S, Baur LA. The role of parents in pre-adolescent and adolescent overweight and obesity treatment: a systematic review of clinical recommendations. Obes Rev. Apr 27 2011.
93. Fleming RK, Stokes EA, Curtin C, et al. Behavioral health in developmental disabilities: A comprehensive program of nutrition, exercise, and weight reduction. Int J Behav Consult Ther. 2008;4(3):287-296.
94. U.S. Department of Agriculture. Choose MyPlate. 2013; http://www.choosemyplate.gov.
95. Mifflin MD, St Jeor ST, Hill LA, Scott BJ, Daugherty SA, Koh YO. A new predictive equation for resting energy expenditure in healthy individuals. Am J Clin Nutr. Feb 1990;51(2):241-247.
96. Spear BA, Barlow SE, Ervin C, et al. Recommendations for treatment of child and adolescent overweight and obesity. Pediatrics. Dec 2007;120 Suppl 4:S254-288.
97. Haskell WL, Lee IM, Pate RR, et al. Physical activity and public health: Updated recommendation for adults from the American College of Sports Medicine and the American Heart Association. Med Sci Sports Exerc.
Aug 2007;39(8):1423-1434.
98. Lohman TG, Roche AF, Martorell R. Anthropometric Standardization Reference Manual. Champaign, Ill: Human Kinetics Books; 1988.
99. Troiano RP, Berrigan D, Dodd KW, Mâsse LC, Tilert T, McDowell M. Physical activity in the United States measured by accelerometer. Medicine and science in sports and exercise. 2008;40(1):181.
100. Guenther PM, Reedy J, Krebs-Smith SM. Development of the Healthy Eating Index-2005. J Am Diet Assoc. Nov 2008;108(11):1896-1901.
101. U.S. Department of Health and Human Services and U.S. Department of Agriculture. Dietary Guidelines for Americans, 2005. 6th ed. Washington, DC: U.S. Government Printing Office; 2005.
102. Miller PE, Mitchell DC, Harala PL, Pettit JM, Smiciklas-Wright H, Hartman TJ. Development and evaluation of a method for calculating the Healthy Eating Index-2005 using the Nutrition Data System for Research. Public Health Nutr. Feb 2011;14(2):306-313.
103. SAS Institute. SAS/STAT 9.3 user's guide. Cary, NC: SAS Institute Inc; 2002-2010.
104. Tate DF, Jackvony EH, Wing RR. A randomized trial comparing human e-mail counseling, computer-automated tailored counseling, and no counseling in an Internet weight loss program. Archives of internal medicine. Aug 14-28 2006;166(15):1620-1625.
105. Turner-McGrievy G, Tate D. Tweets, Apps, and Pods: Results of the 6-month Mobile Pounds Off Digitally (Mobile POD) randomized weight-loss intervention among adults. Journal of medical Internet research. 2011;13(4):e120.
106. Turner-McGrievy GM, Beets MW, Moore JB, Kaczynski AT, Barr-Anderson DJ, Tate DF. Comparison of traditional versus mobile app self-monitoring of physical activity and dietary intake among overweight adults participating in an mHealth weight loss program. Journal of the American Medical Informatics Association : JAMIA. Feb 21 2013.
107. Braunschweig CL, Gomez S, Sheean P, Tomey KM, Rimmer J, Heller T.
Nutritional status and risk factors for chronic disease in urban-dwelling adults with Down syndrome. American journal of mental retardation : AJMR. Mar 2004;109(2):186-193.
108. Thompson FE, Byers T. Dietary assessment resource manual. The Journal of nutrition. Nov 1994;124(11 Suppl):2245S-2317S.
109. Emmett P. Workshop 2: The use of surrogate reporters in the assessment of dietary intake. European journal of clinical nutrition. Feb 2009;63 Suppl 1:S78-79.
110. Ptomey L, Goetz J, Lee J, Donnelly J, Sullivan D. Diet Quality of Overweight and Obese Adults with Intellectual and Developmental Disabilities as Measured by the Healthy Eating Index-2005. J Dev Phys Disabil. 2013:1-12.
111. Wang DH, Kogashiwa M, Ohta S, Kira S. Validity and reliability of a dietary assessment method: the application of a digital camera with a mobile phone card attachment. Journal of nutritional science and vitaminology. Dec 2002;48(6):498-504.
112. Swanson M. Digital photography as a tool to measure school cafeteria consumption. The Journal of school health. Aug 2008;78(8):432-437.
113. Kikunaga S, Tin T, Ishibashi G, Wang DH, Kira S. The application of a handheld personal digital assistant with camera and mobile phone card (Wellnavi) to the general population in a dietary survey. Journal of nutritional science and vitaminology. Apr 2007;53(2):109-116.
114. Lazarte C, Encinas ME, Alegre C, Granfeldt Y. Validation of digital photographs, as a tool in 24-h recall, for the improvement of dietary assessment among rural populations in developing countries. Nutrition journal. Aug 29 2012;11(1):61.
115. Apple iPad 2 Technical Specifications. 2012; http://www.apple.com/ipad/ipad-2/specs.html. Accessed 28 March 2013.
116. Wright JD, Borrud LG, McDowell MA, Wang CY, Radimer K, Johnson CL. Nutrition assessment in the National Health And Nutrition Examination Survey 1999-2002. J Am Diet Assoc. May 2007;107(5):822-829.
117. Schakel SF.
Maintaining a nutrient database in a changing marketplace: Keeping pace with changing food products-A research perspective. Kidlington, United Kingdom: Elsevier; 2001:98.
118. Academy of Nutrition and Dietetics. Providing nutrition services for infants, children, and adults with developmental disabilities and special health care needs. J Am Diet Assoc. Jan 2004;104(1):97-107.
119. George VA, Shacter SD, Johnson PM. BMI and attitudes and beliefs about physical activity and nutrition of parents of adolescents with intellectual disabilities. Journal of intellectual disability research : JIDR. Nov 2011;55(11):1054-1063.
120. Skouteris H, McCabe M, Swinburn B, Hill B. Healthy eating and obesity prevention for preschoolers: a randomised controlled trial. BMC Public Health. 2010;10:220.
121. McGillivray J, McVilly K, Skouteris H, Boganin C. Parental factors associated with obesity in children with disability: a systematic review. Obes Rev. Mar 25 2013.
122. Glaser BG. The constant comparative method of qualitative analysis. Social problems. 1965;12(4):436-445.
123. Tabacchi G, Giammanco S, La Guardia M, Giammanco M. A review of the literature and a new classification of the early determinants of childhood obesity: from pregnancy to the first years of life. Nutrition Research. 2007;27(10):587-604.
124. Harrison K, Bost KK, McBride BA, et al. Toward a Developmental Conceptualization of Contributors to Overweight and Obesity in Childhood: The Six-Cs Model. Child Development Perspectives. 2011;5(1):50-58.
125. Xiong N, Ji C, Li Y, He Z, Bo H, Zhao Y. The physical status of children with autism in China. Research in developmental disabilities. 2009;30(1):70-76.
126. Rimmer JH, Yamaki K, Davis BM, Wang E, Vogel LC. Obesity and overweight prevalence among adolescents with disabilities. Preventing chronic disease. Mar 2011;8(2):A41.
127. Anderson JW, Konz EC, Frederich RC, Wood CL.
Long-term weight-loss maintenance: a meta-analysis of US studies. The American journal of clinical nutrition. 2001;74(5):579-584.
128. Guest C. Design Concepts in Nutritional Epidemiology. Journal of Epidemiology and Community Health. Jun 1992;46(3):317.
129. Gibson R. Principles of Nutritional Assessment. 2nd ed. New York: Oxford University Press; 2005.
130. Hill RJ, Davies PS. The validity of self-reported energy intake as determined using the doubly labelled water technique. The British journal of nutrition. Apr 2001;85(4):415-430.
131. Subar AF, Kipnis V, Troiano RP, et al. Using intake biomarkers to evaluate the extent of dietary misreporting in a large sample of adults: the OPEN study. American journal of epidemiology. Jul 1 2003;158(1):1-13.
132. Trabulsi J, Schoeller DA. Evaluation of dietary assessment instruments against doubly labeled water, a biomarker of habitual energy intake. American journal of physiology. Endocrinology and metabolism. Nov 2001;281(5):E891-899.

APPENDIX A: CONSENT FORMS

Legal Guardian Consent
KU Diet and Physical Activity Program

INTRODUCTION

The University of Kansas supports the practice of protection for human subjects participating in research. The following information is provided for you to decide whether you wish to participate and allow your dependent to participate in the present study. You may refuse to sign this form, in which case your dependent will not be able to participate in this study. You should be aware that even if you agree to participate, you are free to withdraw at any time. If you do withdraw from this study, it will not affect your or your dependent's relationship with this unit, the services it may provide, or the University of Kansas.

PURPOSE OF THE STUDY

This study is designed to compare two different diets and how these diets influence body weight across 2 months in adolescents with mild to moderate Intellectual and Developmental Disabilities (IDD).

PROCEDURES

The procedures for this study are outlined below.
Inclusion/Exclusion Criteria

Individuals are eligible if they meet the following criteria:
• 11-18 years of age
• Diagnosis of an intellectual and developmental disability
• Ability to provide consent from their personal physician to participate in this study.

Individuals will be ineligible if they meet any of the following criteria:
• Insulin dependent
• Participation in a weight reduction program involving diet and PA in the past 6 months.
• Current treatment of eating disorders, consumption of special diets (vegetarian, Atkins, etc.), or diagnosis of Prader-Willi Syndrome.
• Currently pregnant, or planning on becoming pregnant during the study.

Prior to starting the study, you and your dependent will be asked to complete/obtain the following:

Dependent Physician Release
We will ask you to provide a physician release form to determine whether your dependent can participate in this study. This form will be provided to you and will include a description of the program. It will need to be reviewed and signed by your dependent's physician prior to starting the program. This form is only needed at baseline (before your dependent starts the program).

Health History Form
We will ask you to complete a Health History form for your dependent; it will be used to determine past and current health issues that may affect your dependent's eligibility status. This form will also inquire about basic demographic information about your dependent such as age, gender and ethnicity. This will occur at baseline only (prior to starting the study) and will take about 10 minutes.

INTERVENTION

Adherence to the protocol will involve one guardian serving as the primary family contact, who will partner with research assistants (RAs) in conveying the principles of the intervention (diet/PA) to your dependent. The guardian will be asked to help their dependent follow the study protocol.
Diet: All eligible participants who are overweight or obese will be assigned randomly to one of two diet groups. You or your dependent will not be able to pick the diet group assigned.
• Conventional Diet: focuses on portion control, is low in fat, and includes fruits, vegetables and low calorie snacks.
• Enhanced Stop Light Diet: follows a plan that concentrates on healthy food choices, uses low-fat prepackaged meals (provided by the study), and includes fruits, vegetables and low calorie snacks.

All eligible healthy weight participants will be assigned to either the conventional or enhanced Stop Light Diet, but both diets will be modified to focus on making healthy food choices, increasing fruit and vegetable intake, and promoting weight maintenance.

Physical Activity: All diet participants will be asked to perform physical activity (PA) that will progress slowly to 300 minutes of exercise per week.

Tablet Computer: A tablet computer will be provided for use during the 8-week study and will be used to assist with menu planning, self-monitoring, feedback, data reporting and video chat meetings. If your dependent breaks/destroys the tablet computer they will not be held accountable, and, if available, another tablet will be given to them to use for the remainder of the study. All tablets will be protected with a sturdy cover to help reduce the risk of damage. If the computer is lost, there is a tracking application built into the computer to help find it. If the tablet cannot be found, your dependent will still not be held accountable. Tablet computers must be returned to research staff after the 8-week study.

Educational Sessions:

Monthly in-person sessions: The study participant (your dependent) and a guardian (you) will attend home meeting sessions lasting about 90 minutes. These sessions will be conducted before the program starts (baseline) and at the end of months 1 and 2.
Weekly FaceTime sessions: Weekly 20-minute video chat meetings will be held between a study educator, your dependent, and you via a tablet computer provided to the participant during the study. The weekly video chat meeting will provide opportunities to answer questions and provide feedback to the participant.
ASSESSMENTS
The following assessments will occur at the residence of the participant or at a mutually agreed-upon location:
Dependent body weight, height and waist size: We will measure your dependent's height and weight. We will measure the waist with a measuring tape. Your dependent will be asked not to consume food or drink (water is allowed) for 12 hours prior to these measurements. We will ask your dependent to remove shoes and put on a t-shirt and shorts prior to these measurements. These assessments will occur at baseline (before your dependent starts the diet) and at the end of months 1 and 2. This will take about 5 minutes to complete each time.
Dependent Physical Activity Monitor: Your dependent will be asked to wear a small device that measures physical activity for 4 consecutive days (2 week days and 2 weekend days) on two separate occasions. The device is small and lightweight, about the size of a pager. Your dependent will be asked to wear the device on an elastic belt over the hip during all waking hours in the 4-day testing period. Your dependent does not need to wear the device when in bed or bathing. This 4-day period will occur at baseline and at the end of month 2. Your dependent will also wear an electronic pedometer (step counter) during waking hours for the 2-month study.
Dependent Diet Intake: We will ask your dependent to tell us the foods they ate during the past 24 hours on 3 different, consecutive days (2 week days and 1 weekend day) using photo-assisted food records.
Your dependent will be asked to take before and after photos (using the provided tablet computer camera) of all food and beverages consumed over the assigned three days, in addition to a parent/guardian-assisted hard copy diet record. This 3-day photo-assisted food record will occur at baseline (before the start of the program) and at the end of month 2. At the end of month 1 your dependent will be asked to tell us what foods they ate during the past 24 hours for only 1 day and take photos of those meals. Each assessment (diet intake) will take about 15 minutes to complete.
Parent Interview: At the end of the study (end of month 2), guardians will be asked to complete a semi-structured interview assessing the perceived helpfulness, acceptability, and ease/burden of the program.
Tablet Questionnaire: At the end of months 1 and 2 your dependent will be asked to complete a questionnaire assessing their comfort using the tablet computer to track their food and physical activity, using the video chat, and taking photos.
Tracking Records: We will ask you and your dependent to record what your dependent eats/drinks and their physical activity daily on the tablet computer across the 2-month study. The research staff will review the records at the end of each month.
RISKS
Dependent Exercise: Possible effects of exercise are muscle soreness, fatigue, nausea, and dizziness. On rare occasions there are unpredictable changes in blood pressure or heart rhythm, and heart attack.
BENEFITS
Your dependent's participation in this study may result in the reduction of health risk factors. Participation in exercise may improve your dependent's fitness, muscular strength and flexibility. Your dependent may lose body weight, body fat, or both. Your dependent may gain knowledge related to diet and physical activity.
COMPENSATION
Your dependent will receive either free prepackaged meals or a stipend for fruits and vegetables for 8 weeks, depending on the diet they are assigned.
Participants assigned to the Usual Care diet will receive $2.00 per day to purchase fruits and vegetables; this will be provided in the form of a gift card at baseline and at the start of month 1 and month 2. Investigators may ask for your dependent's social security number in order to comply with federal and state tax and accounting regulations. Your dependent will also have use of the tablet computer for 8 weeks before returning it at the end of the study, and he/she will earn free downloads of applications (approved by a parent) for compliance with the protocol.
PARTICIPANT CONFIDENTIALITY
Your dependent's name will not be associated in any publication or presentation with the information collected about your dependent or with the research findings from this study. Instead, the researcher(s) will use a study number rather than your dependent's name. Your dependent's identifiable information will not be shared unless required by law or you give written permission. Permission granted on this date to use and disclose your information remains in effect indefinitely. By signing this form you give permission for the use and disclosure of your dependent's information for purposes of this study at any time in the future.
AUTHORIZATION TO USE OR DISCLOSE (RELEASE) HEALTH INFORMATION THAT IDENTIFIES YOU FOR A RESEARCH STUDY
To perform this study, researchers will collect information about your dependent. The information collected about your dependent will be used by Dr. Joseph Donnelly, his research staff, KU's Center for Research, and officials at KU that oversee research, including committees and offices that review and monitor research studies. In addition, Dr. Donnelly and his team may share the information gathered in this study, including your dependent's information, with government officials who oversee research, if a regulatory review takes place.
Some persons or groups that receive your health information as described above may not be required to comply with the Health Insurance Portability and Accountability Act's privacy regulations, and your health information may lose this federal protection if those persons or groups disclose it. The researchers will not share information about your dependent with anyone not specified above unless required by law or unless you give written permission. Permission granted on this date to use and disclose your dependent's information remains in effect indefinitely. By signing this form you give permission for the use and disclosure of your dependent's information for purposes of this study at any time in the future.
SHARING OF INFORMATION
Additionally, you authorize your dependent's special education professionals, physical education teachers and school food service staff to attend meetings and request and receive information regarding your dependent's progress and participation in the diet research project. You understand that this Authorization can be revoked at any time, to the extent that the use or disclosure has not already occurred prior to your request for revocation.
INTERNET STATEMENT
The weekly video chat meetings will be conducted solely between the health educator, your dependent and you. It is possible, however, with internet communications, that through intent or accident someone other than the intended recipient may see your dependent's response.
INSTITUTIONAL DISCLAIMER STATEMENT
In the event of injury, the Kansas Tort Claims Act provides for compensation if it can be demonstrated that the injury was caused by the negligent or wrongful act or omission of a state employee acting within the scope of his/her employment.
REFUSAL TO SIGN CONSENT AND AUTHORIZATION
You are not required to sign this Consent and Authorization form, and you may refuse to do so without affecting your right to any services you are receiving or may receive from the University of Kansas, or to participate in any programs or events of the University of Kansas. However, if you refuse to sign, your dependent cannot participate in this study.
CANCELLING THIS CONSENT AND AUTHORIZATION
You may withdraw your consent to participate in this study at any time. You also have the right to cancel your permission to use and disclose further information collected about your dependent, in writing, at any time, by sending your written request to:
Joseph E. Donnelly, ED.D.
Professor, University of Kansas
1301 Sunnyside, Robinson Room 100
Lawrence, KS 66045
If you cancel permission to use your dependent's information, the researchers will stop collecting additional information about your dependent. However, the research team may use and disclose information that was gathered before they received your cancellation, as described above.
QUESTIONS ABOUT PARTICIPATION
Questions about procedures should be directed to the researcher(s) listed at the end of this consent form.
PARTICIPANT CERTIFICATION:
I have read this Consent and Authorization form or I have had it read to me. I have had the opportunity to ask, and I have received answers to, any questions I had regarding the study. I understand that if I have any additional questions about my dependent's rights as a research participant, I may call the Kansas Human Subjects Committee (785-864-7429), write the Human Subjects Committee Lawrence Campus (HSCL), University of Kansas, 2385 Irving Hill Road, Lawrence, Kansas 66045-7568, or email irb@ku.edu. I give permission for my dependent to take part in this study as a research subject. I further agree to the uses and disclosures of my dependent's information as described above.
By my signature I affirm that I have received a copy of this Consent and Authorization form.
_______________________________
Print Name of child participant
_______________________________
Print Name of Legal Guardian
_______________________________ ____________________
Signature of Legal Guardian / Date

ASSENT FORM
Weight Loss and Weight Maintenance Diet and Exercise Program
We want to find out if certain foods help you have a healthy body weight and if exercise will improve your health. There will be two different healthy food diet programs. If you decide to join the program you will be assigned to one of the two available programs. You will not be able to pick the program that you receive. One program uses prepackaged foods that you warm up. The other program uses foods that are not in prepackaged containers. Both programs include fruits, vegetables and healthy snacks, and exercise (like walking and swimming). You will use a tablet computer during the study. The tablet computer will be used to help plan and track your diet and exercise program. We will also use the tablet computer to have video meetings where we will talk about the diet and exercise program. The program will last for 2 months (8 weeks). You do not have to join the program and can ask questions at any time. Please talk about the program with your guardian/parent. Your guardian/parent will need to say it is ok for you to join the program.
If you decide to join the study, we will ask you to do the following measurements where you live or at a place you and your parent would like to meet.
1. We will ask you to provide a form from your doctor saying it is ok for you to be in this program.
2. We will measure how tall you are, how much you weigh and the size of your waist using a measuring tape. These tests will happen 3 times: at baseline (before the study starts) and at the end of months 1 and 2. It will take about 5 minutes each time we do this test.
3.
We will ask you to wear a small monitor around your waist for 4 days. This will track how much you move. These tests will happen 2 times: at baseline (before the study starts) and at the end of month 2. We will ask you to wear the monitor when you get up in the morning and take off the monitor when you go to bed. Do not wear the monitor when bathing.
4. We will ask you to take a picture of the food you eat, before and after you eat, for 3 days in a row. We will ask you to take pictures at baseline (before the study starts) and at the end of months 1 and 2. After you take pictures of your food, we will look at these pictures and ask you questions about the food in the pictures. This will take about 15 minutes each time we ask you about the food in the pictures.
5. We will ask you to track your exercise and what you eat on a daily basis on a tablet computer during the time you are in the study (2 months). We will review your records in person with you at the end of month 1 and the end of month 2. It will take about 15 minutes when we discuss your exercise and diet records.
6. We will ask you questions about how much you like or don't like using the tablet computer to track your exercise and what you eat, and how much you like or do not like doing the video chat and taking photos. We will ask you these questions at the end of month 1 and the end of month 2. These questions will take about 10 minutes each time.
7. We will ask you and your guardian to complete questions about your health so we understand if you can participate in the study.
BENEFITS
Will this program help you? We will learn if the way food is made and performing exercise will make you healthier. If you decide not to participate, you will not be in trouble.
Would you like to take part in the program?
APPENDIX B: QUESTIONNAIRES

IDD Adolescents Weight Loss Project Health History Form
Intake date: _____/_____/______ KU faculty/staff present: ____________________________
Participant name: _______________________________________________
Parent name: ____________________________________________________
Address: ______________________________________________________
Tel. number: _______________ Cell number: _______________ Email: _________________
DOB: _______________ Gender: Male / Female
Does individual meet the definition of someone with DD: Yes / No
Is the participant able to walk: Yes / No
Is the participant able to communicate foods eaten during the day: Yes / No
Level of Support: Mild (intermittent reminders, few activities with direct assistance) / Moderate (direct assistance, few activities with reminders or occasional support)
Secondary Diagnosis: Autism / Down syndrome / Other, not otherwise specified
Chronic health-related conditions (check all that apply; the first 3 make the participant ineligible):
Diabetics who use insulin
Pregnancy
Metabolic disorder such as Prader-Willi Syndrome
Hypertension
Type 2 diabetes
Asthma
Food allergies (e.g., lactose, gluten, nuts, etc.) _____________________________
Race of participant: American Indian/Alaska Native / Asian / Pacific Islander / Black or African American / White / Two or more races
Ethnicity: Not Hispanic or Latino / Hispanic or Latino
Living arrangement: How many people live in your house? (e.g., 1 = alone) 1 2 3 4 5 6
Living arrangement: How many siblings live in your house? 1 2 3 4 5 6
Do you eat breakfast out? No / Yes ______ x per week
Do you eat lunch out? No / Yes ______ x per week
Do you eat dinner out? No / Yes ______ x per week
Do you eat pre-prepared meals? (pizza, sub sandwiches, etc.)
Never / 1-2 x week / 3-4 x week / 5-6 x week / Daily
Describe:
A typical breakfast ______________________________________________
A typical lunch ______________________________________________
A typical dinner ______________________________________________
Do you have a blender? No / Yes
Do you have a weight scale? No / Yes
Do you have a food scale? No / Yes
How often do you exercise? Never / 1-2 x week / 3-4 x week / 5-6 x week / Daily
Type of exercise (check all that apply): Walking / Swimming / Biking or stationary bike / Weight or resistance training / Special Olympics / Sports (basketball, tennis, etc.) / Other __________________
Where do you exercise? Home / School / Recreation center / Other
How do you think you will benefit by losing weight? (check all that apply)
Ability to be more physically active (greater mobility)
Better health
Other ________________________________________________
What prevents you from losing weight? (check all that apply)
I don't want to (lack of personal effort)
Lack of social support
Cost for family (personal finances)
Lack of knowledge on how to
Problems acquiring healthy foods
Limited food selection
No accessible exercise alternatives
Food aversions
Other ________________________________________________
Physician diet and physical activity release signed
Consent to Participate signed

IDD Adolescent iPad Survey
Participant's name: _____________________________
Time Point: Month 1 / Month 2
Please circle the response that best represents how you feel about using the iPad.
1. How much did you like using the iPad? (circle one) Disliked / Neither disliked nor liked / Liked
2. Using the iPad was ____________? (circle one) Very Hard / Hard / OK / Easy / Very Easy
3. Writing down your food with the iPad was ___________? (circle one) Very Hard / Hard / OK / Easy / Very Easy
4. Writing down your exercise with the iPad was ______________? (circle one) Very Hard / Hard / OK / Easy / Very Easy
5. Writing down your steps with the iPad was ______________? (circle one) Very Hard / Hard / OK / Easy / Very Easy
6.
Taking pictures with the iPad was ______________? (circle one) Very Hard / Hard / OK / Easy / Very Easy
7. Did you need help taking pictures with the iPad? (circle one) Yes / No
a. If you circled yes, who helped you take pictures? Parent / Other, please specify _____________________
8. How much did you like playing games on the iPad? (circle one) Liked a lot / Liked / Unsure / Did not like / Really did not like
Other comments: __________________________________________________________________________________________________________________

Semi-Structured Interviews

eSLD SEMI-STRUCTURED INTERVIEW GUIDE (Topics, Main Question, Follow-up Question, Probes)
Prepackaged Meals
Main question: Tell me about your experience with the portion-controlled meals.
Follow-up: What did you most like about the meals? Tell me about times when you did not eat the meals.
Stoplight Diet
Main question: Tell me how easy it was to use the stoplight guide.
Follow-up: How well did your child understand the stoplight diet? What aspects of the guide would you change?
iPad
Main question: What did you like about the iPad? What things did you have trouble with?
Follow-up: Tell me about any problems you had using Lose It or iStep Log. What did you like about the FaceTime meetings? How easy or hard was it to take photos with the iPad?
Probes: Can you give me some examples? Was time an issue?
Future
Main question: Now that you have completed the study, how will you use this information in the future?
Follow-up: When you think about prepackaged meals, what is the likelihood you will eat them in the future? How would you feel about continuing to track your dietary intake? How would you feel about continuing to track your physical activity?
Suggestions
Main question: What do you feel we can improve? Would you do this program again?
Parent Involvement
Main question: How involved were you in helping your child follow their diet?
On a scale of 1-5, with 1 being never and 5 being always, did you do the following:
• Enter food into Lose It
• Enter physical activity into Lose It
• Enter steps into the iPad
Now we are going to switch and talk a little about you. Some parents have told us their own behaviors have changed as they helped their child with this program. How did your physical activity level change as a result of your participation in this program? How did your diet change as a result of this program?

CONVENTIONAL DIET SEMI-STRUCTURED INTERVIEW GUIDE (Topics, Main Question, Follow-up Question, Probes)
Diet
Main question: What were your favorite things about the diet?
Follow-up: What aspects of the diet were hard to follow? What aspects did you enjoy?
iPad
Main question: What did you like about the iPad? What things did you have trouble with?
Follow-up: Tell me about any problems you had using Lose It or iStep Log. What did you like about the FaceTime meetings? How easy or hard was it to take photos with the iPad?
Probes: Can you give me some examples?
Future
Main question: Now that you have completed the study, how will you use this information in the future?
Follow-up: How will you use the lessons you learned about portion sizes? How will you use the information about recommended servings of food groups?
Suggestions
Main question: What do you feel we can improve? Would you do this program again?
Parent Involvement
Main question: How involved were you in helping your child follow their diet?
On a scale of 1-5, with 1 being never and 5 being always, did you do the following:
• Enter food into Lose It
• Enter physical activity into Lose It
• Enter steps into the iPad
Now we are going to switch and talk a little about you. Some parents have told us their own behaviors have changed as they helped their child with this program. How did your physical activity level change as a result of your participation in this program? How did your diet change as a result of this program?
APPENDIX C: DATA COLLECTION FORM

DATA COLLECTION FORM
Participant Name: _______________ Date: ____/____/____
Testing Period (circle one): Base / 1 Month / 2 Month Tech: ___________
Age: _____ D.O.B.: ____/____/____
Weight (lbs): 1. ______ 2. _______ (2 weights within 0.1 lbs)
Height (inches): 1. ________ 2. _______ (2 heights to the nearest 1/8th inch (0.125 in))
BMI: ______________
Waist circumference: 1. _______________ 2. ______________ (2 measurements within 2 cm, to the nearest 0.1 cm)
Eighth-inch conversions: 1/8 = 0.125, 2/8 = 0.25, 3/8 = 0.375, 4/8 = 0.50, 5/8 = 0.625, 6/8 = 0.75, 7/8 = 0.875

APPENDIX D: WEEKLY LESSONS
[Weeks 1-8 consist of lesson slides. The recoverable Week 1 content illustrates how portion sizes have changed over the past 20 years: a burger was 333 calories 20 years ago versus 700 calories today, and a bagel was 3 inches in diameter at 140 calories versus 350 calories today.]

APPENDIX E: ENHANCED STOP LIGHT DIET GUIDES
How to make a shake!
1. Pour cold water into blender.
2. Add shake mix with flavorings and/or fruit.
3. Turn blender on to lowest speed.
4. Blend for about 10 seconds.
5. Gradually add ice cubes 1 at a time (replace blender cover after adding each ice cube).
6. Continue mixing on lowest speed for 1 ½ minutes until ice is thoroughly blended and shake is smooth.
Options:
• To make your shake extra filling, mix on high speed an additional 10 seconds.
• If you don't have a blender available, pour 8 ounces of very cold water into a tall glass. Add 1 packet of shake mix and mix thoroughly with spoon, fork, or whisk.
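The data collection form takes duplicate readings (weights agreeing within 0.1 lb, heights to the nearest 1/8 inch, waists within 2 cm) and records a BMI. A minimal sketch of that arithmetic, assuming the standard imperial BMI conversion (703 × weight in pounds divided by height in inches squared); the function names and the sample readings are illustrative, not from the study protocol:

```python
def average_duplicates(a, b, tolerance):
    """Average two repeated measurements, rejecting pairs outside the allowed tolerance."""
    if abs(a - b) > tolerance:
        raise ValueError(f"readings {a} and {b} differ by more than {tolerance}; remeasure")
    return (a + b) / 2

def bmi(weight_lb, height_in):
    """Body mass index from pounds and inches, using the standard 703 conversion factor."""
    return 703 * weight_lb / height_in ** 2

# Hypothetical duplicate readings as they would appear on the form
weight = average_duplicates(152.4, 152.5, tolerance=0.1)      # lbs, must agree within 0.1 lb
height = average_duplicates(64.125, 64.125, tolerance=0.125)  # inches, nearest 1/8 in (4/8 + 1/8 = 0.125 steps)
print(round(bmi(weight, height), 1))  # → 26.1
```

The tolerance check mirrors the form's instruction to take a second measurement and accept the pair only when they agree within the stated limit.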
APPENDIX F: CONVENTIONAL DIET FORMS

Fast Food Guide
McDonalds: Hamburger / Cheeseburger / Filet-O-Fish / Grilled Chicken Sandwich / Grilled BBQ Snack Wrap / Grilled Honey Mustard Snack Wrap / Grilled Ranch Snack Wrap / Apple Slices / Fruit and Yogurt Parfait / McChicken Mini Meal with Apples
Taco Bell: Fresco Chicken Burrito / Fresco Steak Burrito / Fresco Chicken Soft Taco / Fresco Soft Taco / Crunchy Taco / Chicken Gordita / Cheese Roll-Up
CiCi's Pizza: Cheese Pizza (1 slice) / Alfredo Pizza (1 slice) / Ham & Pineapple Pizza (1 slice) / Zesty Veggie Pizza (1 slice) / Ole' Pizza (1 slice) / Garlic Bread (1 piece)
Subway: 6" Ham (no cheese) / 6" Oven Roasted Chicken / 6" Turkey Breast (no cheese) / 6" Veggie Delite

APPENDIX G: APPLICATION SCREEN CAPTURES
Lose It! app for diet tracking. Four figures: F1 shows tracking of meals throughout the day; F2 shows food search; F3 shows food being entered; F4 shows the running daily calorie total and calorie budget.
Fitbit app for step tracking.

APPENDIX H: EXAMPLE IMAGES FROM THE PHOTO-ASSISTED RECORD

APPENDIX I: RESULTS FROM PARENT INTERVIEWS

CONVENTIONAL DIET GROUP
Diet
What were your favorite things about the program?
• Well, I think just eating what we normally eat; but just thinking about portion control…and also if we had something that maybe wasn't quite as healthy, the fact that she could exercise and, um, make healthier choices during the day, so …that you know… "Easy"…writing it down…using the iPad…was good.
• I think mine was just the beginni… still the first day when you were showing the portions and um (pause) … just showing us how much people should be eating and.. Yeah… and well watching her pay attention to things and…Not think it's just her mom that's sending her out to exercise
• Um, helping him learn portion control and forcing him to try new fruits and vegetables
• I think the best thing about the diet was just… it made me a little bit more accountable.
So I had to be a little bit more thoughtful.
• Um, I liked that he was paying more attention to the amounts that he was eating and made him more aware of how many calories and how much he was taking in; and then to see him decide either to go ahead and eat what he wanted to eat and choose to walk on the treadmill or to maybe go… "No, I don't need to eat that"… and… So that he wouldn't have to walk on the treadmill
• I liked how it had the kids, they had to stay within certain calories and they were to do it right and subtract it out and it was real easy so the Lose It app was real easy for them to use
• Nick really enjoyed playing on the ipad. I really enjoyed him learning about portion size. I think portion size was the most important thing here because he eats big food, like grizzle bear sized portions, and he thought that was ok to do. Doing this program he really learned what a portion size was and saw that he needed to cut back.
• I think it was pretty effective. I think it was … at least it was helpful for me to really … we try to focus on serving size and him scanning it and seeing that I think was … the visual part of it I think was really good. I think for him he probably could have had a few … a little bit less calories, but I don't know how that works nutritionally or whatever, but yeah, I think it was easy to use and that was pretty … it was kind of a learning experience. I thought it was pretty good. I think coming out once a month was good especially if it's long term. I think that would be good. The incentive, I think the money it's good for him. It's not his biggest incentive. It's difficult to find an incentive for him. He already has the iPad so that for him … but the other, at the same time we go to bookstores and stuff. He was very excited that he was able to … I don't see, he doesn't even remember.
• I loved everything. And I told everyone that I could possibly tell. It was so educational for both of us, and she - it just worked for her.
And obviously, you can tell by the weight loss. She's just made really good healthy choices. So anyway, yeah, she just did great. The school was very instrumental in documenting her food. When she didn't, just, I of course - I would pack her lunch, and we would make choices together. And then, the school would say, "Oh, she ate the entire thing". Or half the thing. So we were able to be real specific in cups and portion size. And I've even noticed she's branched out, which is shocking, to just getting a bowl of dry Cheerios for her morning. Because she has a real grazing issue. So. That's been good. You've been diversifying what you like to eat.
What aspects of the program were hard to follow?
• Nothing really.
• The exercise, it was the best and the worst part, Cause for a long time she did not want to do it still.
• Uh, trying… trying to limit his dairy product intake. Because his favorite beverage is milk, pretty much….. It actually has gotten easier….yeah. It was kind of a struggle the first half… Yeah, he used to be more fussy if we told him "No, he can't have any more milk", but now he's just like, "oh, ok…."
• I found it tedious….Um, and some things were very difficult…So sometimes I avoided…they were a couple times when I had planned a certain meal in my head and I thought "I don't want to sit there and list all those ingredients… I'll do something else", but other than that, I can't add anything more. Yes, I felt that it… sometimes it was difficult to think of other things that I could make that were low calorie… That were also easy to write down. And so I found that I was thinking about both things: both low calorie but also how easy is it going to be for me to write it all down and to keep track of it. And food now…and I don't buy a lot of the processed foods…like the sauces and that sort of thing, but it's not easy to get pure foods…And sometimes you know there are approximations.
It's not exactly what I had, and so sometimes things that I had I'm sure were higher calorie than what I actually gave her… and sometimes it's the other way around… so I'm like "Well, it kind of evens out…" (laughs) I think coming up with meal ideas that were low-calorie was hard, so I tried to keep it to something like a piece of meat and vegetables; and I'm sure there's more creative ways of doing it? Also it was hard because I am not sure what foods the school is giving her. And I am sure that they just let her have what she wants. And I give her a certain amount; and even at home I'll say "That's it." I don't leave the tomato sauce on the table. I give her a certain amount and tell her "that's it." When it's gone, it's gone. And she told me she got more. And it's all those little things that they probably aren't thinking about…But for Ali with her metabolism… it's a big deal…And one day they decided on a moment's notice…. because the class had done really well so they took them out to McDonald's and got them a breakfast burrito. She'd already had breakfast. She had packed lunch. And then she had a breakfast burrito. I mean something like that… that's it. That's the day gone. We're over now; and I can't withhold for her dinner. And so the teacher said to me: "Well, do you want me to keep her back then when the kids go out?" Well no, I don't want to do that. So I said "No, that's fine if you go out; but could you just make sure she has something low-calorie". And it wouldn't hurt for the other kids too. None of them are starving. Something yogurt. They don't need a breakfast burrito: 460 calories. And that's, I think, what's killed us too. And so there are days….
I found it a little bit frustrating because I'm trying to follow the diet; I'm trying to keep to that calorie limit and then she gets a breakfast burrito or she gets a couple chocolate chip cookies; and then I can't give her anything
• No, I thought it was really easy for him to follow since we weren't really asking him to eat anything that he didn't like to eat or didn't want to eat; or you know, we weren't trying to introduce new foods … just a different way of eating
• Making sure that they were eating their five grains and two fruits and two vegetables and trying to do that and the calories at the same time was tough. Yes and I don't know if it would be easier to just have to have way to do it one the lose it...you sent me a thing where we could mark off fruits and all that but it just got too much to do both at the same time. I think if they were able to hit I ate a fruit, I ate a bread or when I sit down for my meal, okay I have this and I have that. I don't know then if that would stay within their calories so I didn't know what the focus was.
• At first it was hard for him to remember to eat fruits and vegetables. It was just a change. I think having you send him daily reminders made that easier though. Entering things into the iPad was not hard per se but I knew Nick felt like it was a hassle to remember to do every day. He did it, but I know that sometimes he did not really want to.
• Just sometimes he just didn't really care to be honest with you. I had to just hold him to task. He would be like oh I don't care if I eat a gallon of ice cream. Just trying to have him really learn and understand that these choices we're making we're trying to get him to eat healthy and stuff. I don't think I could just let him loose and he would continue to follow it on his own and stuff like that. Like with a lot of things for him and maybe other kids like him, it takes a long time to really affect a change.
What he is he internalizes it or generalizes it, but I think it's been a great start. I think we plan to try to stick with at least looking at serving sizes and have you had this and limit the amounts. Also trying to increase his fruits and vegetables. Again he doesn't have a big variety of foods he likes to eat, but we've been trying to just keep the things on hand that he will eat.
What aspects did you enjoy?
• Someone forcing her to exercise and having her be able to track what she was eating on the ipad.
• I think helping him learn… you know, what kind of things he should eat and what food groups he should eat; and how much of them are better for him.. But, um….by having to actually look at how much he's getting, I think that helped him.
• It was just accountability of writing down everything that I had and seeing the calories in everything. Because that was one of the things that surprised me actually is how many calories are in some of the things. You know, even the cereals; and even though I don't get the sugary cereals. Even just a Mini-Wheats. So, um…and then I would give her eggs sometimes and that has a lot of calories in it
• It was pretty easy for him to think about whether he wanted to not eat or go to walk on the treadmill… and this diet better than one where I would have said you can only have lettuce and carrots and celery… and I know; that's why I thought this was a good diet for him to try… because it didn't ask him to eat vegetables. Well, I mean it kind of did. It made him want to think about it; but if he didn't like them….
• They liked the Lose It. They were happy with doing that because if they wanted a splurge day we'd go okay you at least have two or three fruits and vegetables and then you can have your cookie. It worked. The fit bit was awesome because they liked to see how many steps they took and trying to figure out what days got more steps and why they got more steps than other days was interesting. Yeah, so the fit bit.
They have A days and B days at school, so whatever days they had different schedules would have more steps than others. They didn't wear the Fitbit to PE, so PE wasn't a part of it. It was how many times do I have to go up and down and what classes, so that was interesting because it was significant, a couple thousand.
- He loved to have the iPad. He would take it everywhere with him; it was such a good motivator. I think he enjoyed losing weight, but really the iPad was the best part of the study for him.
- I liked how it created the bank of food, especially for somebody like Michael. He doesn't eat a huge variety of food. I think the fact that when he was in the green, and if it went too red, if he went over, he was very cognizant of that. I think the visual aspect of it I liked for him. Even if I went through it, I think it was good. Then we liked the Fitbit Michael kept. He would look and see where he was. He'd see if he had the flower on. I thought that was pretty good. Again, a lot of the visual cues for him I think helped him see what he was … just to see what progress he was making. I just think that the technology aspect of it makes it so much easier, and I think people will stick with it probably way longer than if they have to write it. Well, you saw my writing, whereas if I do it on the iPad, I'll put in like Brussels sprouts and I'll put olive oil, a fourth of a teaspoon or whatever is the smallest amount, because it's a mist sprayed over the whole tray, stuff like that. I just think it's so much easier to be accurate, I guess, on there.
- Ok, it was the iPad.

What type of physical activity was done?

- She liked to dance and would do her dance videos on the TV. And you know, she walks the dogs every morning and evening around the block, so that really helped.
- Well, it took a month to get her to do anything, and then she was just walking, but in the last week she has really started to run, so that is good.
- Just whatever is done in gym class; we just don't have time for more than that.
- Mostly just the Wii. She does the sports and then she's got the dance… and she likes both. Oh, and dance, she loves her dance classes.
- He would get on the treadmill; that is the only activity he really wanted to do. Well, other than the bowling he does at school.
- Wrestling was his main activity. He practiced three to four times a week and had meets on the weekends. However, now that that is over, we are trying to find some things for him to do. He does weight training at school, but I am hoping to try to get him to do some cardio work, like use a treadmill or something. He doesn't seem open to the idea as he thinks they are boring, but I am hoping to find a way.
- Well, in P.E. they do quite a variety, which is good. He did the elliptical a little bit. They've been doing badminton. They just kind of rotate, pickleball, things like that. I think they just get him moving. He does walk a lot at school. He does a lot of … he does like jogs or whatever, so he does walk a lot. Just trying to keep him moving is a challenge I think for us, and always will be, whether we're tracking it or not. In the summer he does swim more. We usually do lessons and then we try to go into the pool and stuff like that. He'll start up again. There is going to be Special Olympics. We'll have tennis that will start in March. I think there's also swimming. Unfortunately, they're at the same time and two nights in a row. I don't know if they're both going to fit in our schedule, but we'll definitely do one of them.
- You do dance, but what - when we went to the gym, what did you do? Went on the treadmill. We tried to go (to the gym) once a weekend, every Saturday. But she - every opportunity that she could go run and get the mail or, you know, walk the dog around the cul-de-sac, we tried to do.
I ordered a treadmill and an elliptical that should've been here by now, so she can go downstairs and put on her favorite movie. And we don't have to hike it over to the club and do all that. And that way, if she wants, you know, if she wants some computer time, I can say, "That's fine, but you need to have 15 minutes on the treadmill." So it's not here yet, but it's coming.

iPad

What did you like about the iPad?

- Oh, that it was just "touch"… "this and that"… and that it recorded previous meals eaten… that made it easy. Uh, that you could do the "scan the bar code"… that was very easy. It didn't work on every bar code, but that was okay. It worked on enough of them. Um, and then… that it built a list of foods and portion sizes that she ate a lot… so that you were just picking those off of that list. That made it real easy.
- Well, the games. And I thought it was kind of cool that I like… put down the food I ate and the exercise I did… and got to see how many calories she had gained… at the end of the day.
- Well, Sam loved all of it. I have an old, old original one which is really pokey slow now; but he liked… he liked the games on there. He really… actually, one of his favorite ones now is that Food Group one. I thought the apps were pretty user-friendly; and I like how the, uh… how the "Lose It" keeps your… keeps the data you put in it, so it's easier to find things again.
- Yeah. It was easy. It was straightforward. It was no problem. It was just tedious. But it would have been tedious using the iPad or writing it down.
- He would say the games. I liked that it was easy for him to use and it… he was the one that was doing it, and not me, and it was easy and fun for him to do that.
- It was a great teaching tool, absolutely, and just the ease of finding the app; and the app had all the information that they needed, and they could click on Fitbit and Lose It and it had all the information they needed right there.
- It made him want to do the program.
He loved the iPad, and right away it was his main reason for signing up, besides the fact we wanted him to. He would take it everywhere, and I think because he had it with him it helped him to remember to do everything he was supposed to do.
- Well, I liked that it was so portable. We took it with us even when we went out to eat. A lot of times we were able to just enter things in right there. He uses it a lot, so it's just so handy. It's not like if we had a separate book or something separate we'd have to do it with. He has … as a kid with an iPad, he wants it with him all the time. I think it's just very … makes it just really acceptable. It fits his lifestyle.
- I know you liked it, but was it - was it hard or was it easy? Easy. That made it - that was easy because you could just come home from school, and you would do - punch your little Lose It! button and search for foods and put it in there. Very user-friendly.

What things did you and your child have trouble with?

- Nothing
- No troubles
- Nothing
- Nothing
- Nothing, it was pretty easy. I guess we did have trouble with the Wi-Fi freezing that first night, but once Grant moved downstairs it was fine. So it was just a Wi-Fi thing. So I mean… I think as long as the participants have a good Wi-Fi signal, it would be ok…
- Other than breaking the screen on Mallory's iPad! (laughs). It was just a bad cover on it, and I was the one that knocked it off the table, not her, so it really was my fault.
- Nothing was hard to do on the iPad; just remembering to do it was the thing he struggled with the most.
- I don't think so. I really don't think so. The struggles that I have with Michael I would have whether or not he was on the diet. Just trying to get him to increase his activity level, constantly trying to find sports or anything that he likes. So far swimming is about the only thing he really likes to do. That doesn't always work out in the winter and everything.
Tell me about any problems you and your child had using Lose It or iStep Log.

- I think it was just a matter of remembering to put it in. Sometimes we would not put the information in the same day. So it was just a matter of remembering… or if we put it down on a piece of paper, remembering to put it in there. So that was more "us" than the application. No problem with the application. I think we, um… once we got the right password and everything, then we were good on that.
- iStep would never work, so we just wrote the steps down.
- None, it was simple.
- It was simple, just tedious.
- Nothing really. Just that one… that first night we tried to video chat and the Lose It was not connected, but you told us what to do and it worked fine after that.
- No, just the syncing with the Fitbit, since it didn't work for those 10 days before you came to fix it. Oh, and Ryan had a hard time trying to find some of his activities, like time on the Wii or... I'm trying to think what we had to... it was hard to log a bike ride. Is it a stationary bike or a bike ride outside? You can't really log your miles per hour, so those were a little bit harder. When we went to the trampoline place, how do you log that? I know at the beginning it was hard, at least for lunch time, trying to guess what foods in the Lose It. You helped us categorize: if I have a turkey sub, it more fits probably in the Subway because there are so many, and it would be helpful if the school would publish what it is. A lot of times with Mallory, we ate manicotti and you'd see, either I'd have her enter the recipe in or we'd try to range it, because one manicotti might be 800 or 200, so let's try to hit middle of the road.
- No real problems. Towards the end he had trouble syncing the Fitbit with his computer; it would have helped to sync with the iPad instead, but I know you all didn't get the newest version. He still wore it, but sometimes we were not sure if the data transferred to you.
- No, I don't think so.
… I don't know, it just … it would be nice if they were somehow connected so that you didn't have to enter the … you could see if they somehow were interfacing with each other, but I don't know if that's possible, if there is something else. You've probably thought of that. I think that would be good, because then he … not that it was a pain to put the exercise on there, but especially, for example, for P.E. at school. I don't really know how hard he is doing it or how … he can't … he says, "Oh, I did it for five minutes or whatever." I'm not sure the school … they write it down, but I just don't know how much they're going to exactly put down, to the minute, every activity he has. I know he's in there for 45 minutes and I don't really know what percentage is devoted to what. I think heaping them together would be great. Then you could really see; maybe it wouldn't show the activity, but you know at least, hey, he's done a lot of steps this day, or P.E. had a lot … I think that would be nice.
- The belt thing, because it frustrated me not to be able to get it going every day. Not the belt - the step thing (the Fitbit). It just stopped working, or I couldn't calibrate it correctly. I tried and tried and tried and tried, but - so maybe the information's in there. I don't know.
- She had trouble spelling it; it's not very user-friendly with the spelling, and we could not get the barcode scanner to work. Once she got good about, you know - you can't, like, you can put in Weight Watchers, and then it gives you 363 choices.

What did you like about the Facetime meetings?

- Oh, I thought those were good. Yeah, I think Paige really enjoyed those. It was convenient, very convenient… to do it that way.
- Well, we could not get to the Internet with my leg, so you know, we had to call you, but I think that was helpful.
- Emily was always excited to tell you about her progress. I think it would have been better if we could have used the Facetime so you could have seen her data, but it still worked out well.
- Yeah…. It helped him; they were not very long, so it was not inconvenient.
- I think it's useful.
- Well, you know, the first one I made him sit out here … And he… (starts to whisper) "It was really cute; he was fixing his hair" (laughs). He really seemed to like them. He says he learned to stop overeating and to drink his special juice… you know, the V8 we have him drink to get vegetables in.
- I think they looked forward to them. I liked that it helped me hold the kids accountable for what they were doing, so I could still hear you guys set your goals and then I'd try to reinforce the goals even though I wasn't on there. That's what I liked. It's highly convenient, I think, for everybody, because then we can do it first thing in the morning or late at night, or you're not tied to the schedule.
- I think they were good. Nick is old enough to be in charge of his own diet, so we didn't sit over his shoulder and make sure he was doing them; we figured if there was a problem you would let us know. Overall, I think they helped him remember to do the program, and knowing that you would be looking at what he was entering made him remember to eat the right things and to enter them into his iPad.
- I don't know how much he got out of that, to be honest with you. Michael, he doesn't know how to use the phone. He doesn't like talking to people about himself. That's one of his issues. That's a goal that he has on his IEP, just to talk about himself. For him, I don't know. I don't know if he would do that long term. He might get used to it. I don't really know. I try to give him privacy thinking that would get him to talk more, but it probably didn't. I'm not sure he really understood why he was doing that.
- Oh, I liked it.
If nothing else, because I know she didn't have a lot of questions, it kept her remembering that she was doing this program. Because your face became synonymous with this thing that we were attempting to do.

How easy or hard was it for your child to take photos with the iPad?

- Easy… she did that all on her own.
- Easy. She had a problem once where she accidentally deleted a photo, but taking them was easy.
- Easy, he really liked doing that.
- It was hard to remember to take them but easy to actually do.
- Very easy. It's a little harder because I kept making them keep the case on and you'd have to take a couple extra. I think reminding, because they might eat a little bit, so I'd have to get it back out and let's take a picture again. I think they remember if you do it more often. The hardest part was just remembering to take the pictures.
- It was easy, he always had it with him.
- I think it's pretty easy. In the beginning when you had us take them, he was doing it better than me. He knows how to do it.
- Easy.

Future

How will you use the lessons you learned about portion sizes?

- Well, again… portion control, and I think it is very helpful to write things down because you don't realize that you've maybe eaten out more than you thought you had… or that you haven't introduced any new vegetables in a while (laughs). So I think just the portion control and meal planning… yeah.
- I'm a lot more conscious of portions going on plates and getting daily exercise.
- Well, if I can get Dad to get on the program, we're going to keep trying to do the same thing. We're going to limit the pop we drink. Which, he's done pretty good. He knows that he should not drink regular pop very often. I think that has helped a lot; and then we need to try fruits and veggies. Are we still going to eat peaches in our lunch? You like those, don't ya? Yeah, 'cause I don't want him to be a little chunker like … like some people are.
Umm … with a lot of people, yeah… and unfortunately I think "more so"… I think the kids that are 'not'… don't have a weight problem. I think most of them, it's because their genetics are pretty … (laughs)… I mean, they have pretty good…. because most of the kids that go to his team group… um, they all have Down Syndrome, and most of them are a little bit overweight. Not; I mean, some of them 'not'; but some of them are "very" overweight… but most of them are a little overweight. And all the ones that come into my pharmacy…. I think everybody but "him" are. In fact, there's one little girl who's younger than Sam. She's 12, maybe 13 now… and she's on high cholesterol medication… and she's on a diabetes medication already. Yeah; and one of his best friends is pretty pudgy. But he eats salads! He drowns them in salad dressing.
- I think keeping track of it is… is really the only way of making sure that you're staying on that schedule. Because there were times when I thought I was doing really well, and then I went and put it in and thought 'oh no'. You know, we're close to going over… we've gone slightly over… so with that said… there are some of the meals that she has a lot. There are certain breakfasts and lunches that she likes and she'll ask for again and again, and I know exactly how much I can put in now. So those I wouldn't… I don't feel I would need to do anymore. Serving size was useful. I think, you know, if I do put some seasoning in…. I'll just put a tablespoon in, and normally it's actually for the three of us. So I'm putting it down on hers as one tablespoon because I don't know how much she's getting in hers… but really she's just getting a 1/3 of that tablespoon. So probably only about a teaspoon of margarine. But I'm careful about how much I'm putting in. I'm not just taking a dollop out and putting it in. I'm putting in a tablespoon.
- Well, I know he was telling me a little bit ago that he will try to eat less, and if he eats "more", especially when he's working at McDonald's, he's going to try to work out. I was hoping he would be interested in, um, like on my iPod that I might give him… he could have the Lose It! program, or have the program that I use, and continue to keep track of what he eats so he would know where he's at each day, but we are still talking about that. He says he is tired of tracking. Overall, with this information we learned, I think it gives us a reference point to remind him, or to say… "You know, well, you had a cheeseburger and fries at lunch; do you need to have a cheeseburger and fries again at supper, or could we maybe…" so it'll… so now that he has that visual and that experience, we can just reference it.
- We are going to... I'm going to set them up with the Lose It and we're going to keep doing it. We won't have an iPad, so we'll have to try to figure it out on the computer and do it that way, which will be different for them. I'm curious how easy it will be for them, because I think they'll have to both sign in at different times, so it will be interesting. Also, portion size was important. I think for them to see eating a cup of cereal as opposed to a bowl of cereal was different, and Ryan just used to use milk to wet his cereal, where now he's drinking his milk. I think we'll carry that forever.
- Just keep up with eating less food and eating more of the right types of foods. We are really going to be working hard to get him to do more aerobic training so he can get some exercise in since wrestling is over. If he keeps up the good work, we may get him an iPad and get that app that he has been using so he will keep doing it.
- Like I said, portion size was the big thing. I think he will use this information for the rest of his life. He now knows that he can't have 6 cups of milk at dinner time or have a whole pound of ground beef with his dinner.
I think he knows what a portion is and how to cut back, and my hope is that he will continue to do just that. This has made him much more aware of what's in everything and what are good and bad choices to make.
- I think, like I said, we're going to continue looking at serving sizes. We may just go ahead and continue with Lose It just so … because sometimes I might be surprised, oh, he's way under today and I'm surprised. Or wow, look … so I think we might just keep it up. Maybe not every single day, but I think I'll at least pick some days where, hey, let's go ahead and use it and see where we are, because I'd like for him to continue slimming down a little bit. I know he's going to go through puberty, but I don't think he's going to be real tall. I just want him to be healthy. I think this is a good tool for us to use to try to get him there. What we've been doing is, we've just been going by it, like okay, what's the portion size of something, like for dinner? It will be whatever. If it's three [tequidos] where he might have gotten sick, we just say here is three. He doesn't argue with it, and really we're going to … and I think just showing him, "Oh look, see, it says three here," I think that that will help him. Really, again, it's the visual for him. Then we'll just … like at that other time I just said, "Well, here, have this apple if you're still hungry." He hasn't really complained too much. Then as far as things go, that's where I think it probably makes the biggest difference, because if he does have ice cream or whatever, we're just going to say, reduce the portion and that's it. I think if there's a snack he wants, then he really wants it. For him I think a lot of it is his metabolism, to be honest with you. I just think that he, overall … like a lot of his snacks are beans, but if there was cake, for example, he would eat a whole cake, or he would eat a whole gallon of ice cream. We just don't have it all that often. I think at school I think I've helped them become aware.
That's probably been a learning curve for them. I think because it's been an ongoing issue we've had for a couple of years with them. I think they finally … it made them sit up and take notice. Like, oh, we've got this … it's something that somebody else is looking at. Not that they didn't listen to us, but I think it was just easy for them to just say, "Oh well, let's let them have it, let them have it." I think at special occasions, I think that's when he could get into trouble. Sometimes there's a lot of them, if you have a lot …. You think, oh, it's only a birthday. Well, when you have X number of birthdays in a family, an extended family, in a month. I think just special occasions we need to just really watch it and treat it like it's another day. You can have a little extra, but not … just don't use that stuff as an excuse, I guess. Going out to eat, we've actually gone out less, I think, because so many places we go are so high in calories and we realize, wow. We went to Five Guys last weekend, I think the first time, because it was just unbelievably … we just never noticed. We never bothered looking at the nutrition until we got the tab. I think trying to make better choices about going out is probably going to be a big thing, because that's probably where he consumes a lot of these excess calories: burgers, fries, pizza and all that. We were probably going out maybe two, three days a week. I'm sure we cut out at least one of the days. We probably go out maybe once or twice now. We usually go out on the weekend and maybe once during the week. It used to be sometimes like, well, it's just … so we just … we didn't even really discuss it, to be honest with you. It was just one of those things; I was like, oh, well, that place is like … so we say, well, let's just take what we could come up with at home. Usually, if we don't go out, unless he … or to get an ice cream sundae that we'd make at home, he would never go over his calories.
I don't know if it's just the way the [inaudible 00:12:24]. It's not conscious. It's just what we have is just not nearly … even if we have a frozen pizza, it's still not like those huge pizzas you get at a restaurant, kind of thing.
- We're gonna continue it, right? Pack your lunches, cut your noodles in half... Taking her lunch every single day. Finding the Smart Ones so she could have a hot lunch. And I'd nuke it in the morning and wrap it in foil so she'd have - and she was able to choose which one she wanted. That kind of thing. And then choose your drink and choose your... You usually chose a fruit.

How will you use the information about recommended servings of food groups?

- Yeah, we used a smaller bowl for treats and things like that. That seems to go good, right?
- We already got a variety, so that was not very hard and we didn't learn much from that. I mean, for breakfast we always have like dairy and cereal…. and for lunch we always have some kind of fruit or something like that with it … and then for dinner we usually have meat and a vegetable at least.
- Yeah, I mean, I pretty much know…. I know pretty well what a portion size is; but I think that helps "him". And we have certain plates he eats out of, which really helps, because it's hard to tell what a portion size is when you just have one big plate. He likes to eat out of those little ones that are like the picnic plates that are segmented… so pretty much a normal portion size fits in one of those little segments. He does tend to like to overeat on his meat choices.
He's like a "meatsy" guy… (laughs). So we have to watch that, but… When he can eat more, more fruits and different vegetables… that will give him a wider variety of things that he can eat or that he would like to eat; and it keeps him from eating… making other choices that may not be as good.
- We were already doing a lot of that, 'cause she's a big salad eater. So pretty much every day I think she's close, but I have been conscious about putting another piece of fruit in her lunch. But she's a pretty good eater. So I think we're… I think we're pretty close to hitting the five portions of fruit and vegetables.
- I think he will use it. I know he wants to go back to eating bigger stuff. You know, he was having a double quarter pounder every day and now he is having a regular hamburger, but he wants to go back to having more. But he says he wants to have less than he was, as he knows that was too much; more like a McDouble. He is trying to convince me that the patties are smaller and closer to the food model you showed us.
- Yeah, we absolutely want to try, because I don't think there was ever a day we hit the recommended, ever. We tried. I think Mallory struggles with fruit and Ryan struggles with vegetables. I don't think either of them struggles with grain. I have always tried to get them to eat a protein at every meal, and then Ryan overdoes it on the cheese, so trying to cut him back on dairy a little bit, so just trying to get that balance.
- He knows he has to eat less carbohydrates; he can't just snack on bread anymore. He is learning to eat more fruit and vegetables in place of his high-sugar snacks, and we are hoping he is going to just keep doing that.
- We're trying to … still, that's kind of a challenge. Still trying to get him to increase his fruits and vegetables, and probably, yeah … just more like the snacking of fruits and vegetables, I think, because sitting down in a meal … he just has, he just doesn't like to. "Well, I'm not going to eat an apple and a meal."
To him that's just weird or whatever. Sometimes I just try to sneak in and just give it: here, why don't you have this as a snack? I wouldn't say he has a huge, huge appetite, because his medication keeps his appetite in check. If it wasn't for that, oh my gosh, he would be huge. There's no doubt about that. Just trying to again keep those portions down and try to get it a little more balanced, which is probably one of the bigger challenges we've always had with him, is to get a variety of foods. He probably likes vegetables more than fruits. It's usually … he likes crunchy; he doesn't like bananas, for example, but again, when we have smoothies, he'll have everything in the smoothie. As the weather gets warmer, I'll make more of those. Oftentimes I'll make them just in the morning; he could have it instead of milk or with his milk or whatever. That's a good way to get the fruits and veggies in him: just throw carrots in something or whatever, because he's decided now he's not crazy about carrots, which he does like. It's just sometimes there's no rhyme or reason to his logic. He doesn't really have any. He just arbitrarily says, oh, I don't like that anymore. Anyway, probably the fruits and vegetables, increasing those, will be an ongoing issue.
- Do you remember how big a portion that she taught you? Right when we started doing this program, how big would - how big could your meat be? Do you remember? Your fist or a deck of cards? We're going to keep doing that. And eating more vegetables. We really did that. I had chopped-up vegetables for her, on a plate, for the family, all the time.

Suggestions

What do you feel we can improve?

- Uh… I don't know if 'you all' can improve anything. It just took us a little while to get in the mode of remembering. Um, and I know you had tools that had like reminders on the iPad and everything. Maybe if we would have used those more. Um, but… yeah, I think that was the thing.
Um, and I would say more times than not, "I" ended up entering the information on there. Paige, sometimes she would sit right there next to me… but about half the time, or at least half the time, she didn't want to do it.
- The only thing I can think of is… maybe check, like the… and we could have done it at home too… but maybe check like the waist and the weight more often… You know, not like all the time, but maybe more often… 'cause I wonder… you know, if we would have checked it, and I can't even remember when we did check it. Like, if we would have checked it after two weeks… you know, 'cause teenagers, you don't want to check it too often either or they're going to start worrying about it… but like every two weeks or something… or even three… then maybe she would have realized she… she wasn't getting somewhere, and maybe would have started the exercise that she's doing 'now' a little earlier. I don't think it would matter if you gave out scales, but just… just 'knowing' … even if, even if you… and I'm not just speaking… I'm just thinking of the big picture of other families too… who may not… do it… have… you know, because we do have a good scale, but some may not… But even if you would just say, 'Ok, I want a weight in three weeks'… you know, even if you just would have told us… after two or three weeks, 'we want a weight and we want a… you know… a waist size.' I mean, I don't know how you could do it because I know it has to be standard, but… That might be an extra visit for 'you' if you had to do it and people didn't have it, but… I just think, you know, after week 2 or 3 it… it would have been nice to have that also.
- Um, I don't know… my husband thought that his calorie limit was too restrictive, but I didn't think it was at all, because we didn't really have to cut back very much. Like, I don't think we really cut back at all… other than on some of the dairy, because he probably drank way too much milk anyway for a normal person; which he always has.
I mean, he wasn't a real huge… he doesn't snack. Like, he's not a kid that you have to bolt the pantry or the fridge, 'cause he just would never… If he's hungry, he tells you he's hungry, and that means he's starving. So he's not a big snacker; but um… I don't know… I really liked it. I mean, I thought it was easy… to follow… I don't really think it was any different. I mean, other than portion control and trying to drink new things and not drinking so much pop; I mean, I thought it was real easy to do.
- Get into the schools and… (makes a grumbling noise with her fist)… Rrrrrr. You know, I think they need to buy into it too. That would make the big difference; and I even offered to send in snacks… but then the other kids are having a cookie or a brownie… you know, they're doing cooking activities and making these things… and she's having celery. That's not fair either.
- No, it's great how it is.
- I think trying, and maybe it's a test group, of seeing if there's an app with kids and focusing only on getting all of these components in, and a group doing the calorie ones. Yeah, I think, which is the more beneficial way to do it? Should I focus on getting good food or not so many calories? I know you have that test group that eats food you provide, so that would be interesting to do. I think having that third group: which is the best way to do it? Boxed food, food groups, or calories?
- Nothing really, it was very well structured. However, there can always be improvements… um… um… I guess the only thing is that having the home visits and the Facetime at home was a problem for him; he is easily distracted when at home and it is hard to hold his attention. If we would have been out of the house he may have listened more. However, Nicky can drive himself, while I know a lot of these kids can't, so that probably would not work for everyone.
- I don't know. I think for Michael it's just a matter of making it long enough that these habits will become, these changes will become habit.
That’s probably the biggest thing: that it just needs to be continued. It needs to be ongoing. It needs to be probably a few months or so. For him to really … even for any of us I think, even for our family habits I think … I hope we plan to keep a lot of some of the changes we’ve made. Not going out to eat so much or trying to just increase our fruits and vegetables, to keep those going. I just think for Michael, just having something … just have to do it longer, probably even two months. I think it’s a study two months for a change. The way he eats and his activity level is probably not enough. I guess making it longer would be a suggestion. I don’t know, I guess if it’s a year and a half, 12 months what might be good, again if a lot of the kids are visual, is to maybe do a chart of the weight or something like that just as a … so that they could … he could see his progress maybe and see … there might be something. Yeah. I think that for a lot of kids that are visual that might be … over time they could see oh look at that. I started out weight this how because for him to go oh look how, for swimming doing it was good. I’m not sure that’s going to do it. Although he did say a couple of weeks ago, he goes, “I’m getting skinnier, aren’t I?” He noticed. I guess it’s probably in his belt and guessing since his pants I’ve had to tighten them a little bit. I don’t know if he was proud of it, but definitely he’s noticing that his body is changing, which I think is great. Of course we don’t want to say, “Oh you’re really overweight.” We don’t ever say that to him, but we just talk about if we swim, then we’re healthier, we get a little bit stronger, we grow taller and with strong muscles and stuff. He’s … I’m hoping that what he sees about this is becoming healthier. That’s our goal. Yeah. I think it’s been a pretty good experience and we’d love to. If you do end up continuing we’d like to be considered to be included.
I thought it was fantastic.
Would you do this program again?
Yes, I think so.
Yes, he didn’t realize it was any different for him besides not being able to have the pop and milk, so it was easy and we would do it again.
Yes, I think so.
Yes, I would. Definitely.
I would have him do it again.
In a heartbeat. I don’t think 60 days was long enough for them to make a habit so I want them to do it again.
Yes, it has helped him immensely.
Yeah, definitely.
Absolutely.

Parent Involvement

How did your physical activity level change as a result of your participation in this program?
I wouldn’t say that’s changed because Alli plays her Wii while I am doing the dishes so I don’t do it with her.
Um, I don’t think so because I was already tracking and exercising. Um, and so… it maybe made me more aware of what he was eating; and perhaps I didn’t buy certain things during this time so he wouldn’t have that option.
I think it leveled it out where I've been trying with MyFitnessPal and going to the gym and then they didn't want to do anything, so I think it helped get them a little bit more motivated so that we do it as a family and not I go do something and my husband goes and does something. I think maybe it's increased it a little bit because I'm doing it with them and I'm doing my own.
Not really, we are already pretty active so it was more of trying to get Nick to do things with us.
Like this in the winter, probably not. We are more … we get out more in the summer than we do in the winter, so that didn’t really increase that much.
I work out all the time anyway, but with - it forced me - some things I thought for sure they were less calories than what they really were. And that was - I'm usually a good one about reading it, but since Lent started, I'm doing 40 days of exercising, so that's what I added.

How did your diet change as a result of this program?
Yeah it changed, cause I’m eating pretty much what she’s eating. So what I pack for her lunch I pack for my lunch.
Um, you know we eat the same dinner. Um, our breakfast is different, but apart from that we eat pretty much the same thing so…portion sizes, yes.
That didn’t change as I am already pretty active. It was fun because his dad and I track our steps daily already, so with Grant doing it we could all compare.
I'd say no because I think it's maybe portion control where we're a lot more focused on how much does everybody's meat weigh and how many vegetables do we have and how many servings. I'll say yes in that respect, more portion.
No, we ate the same things. I think Nick just learned how much of the food to take and how to make better choices when we were not around to make meals for him.
Yeah, again I think we aren’t going out to eat as much. We’re just really looking at portion sizes and trying to make sure we try to get more fruits and vegetables. I think everything that we’re trying to get for Michael, it’s not like he was eating separate food from us. What he was doing, we were doing. By him making better choices and us making better choices for him we were doing the same. I think it’s definitely been beneficial for probably all of us, the whole family.
Well, I just - we just - overall, I tried to - anything she was gonna have, we all had it. That kind of thing. And just, you know, trying to be - when we went out to eat, just talking about, "Ok, if you're gonna have a pancake..." What'd you have when we went out to breakfast the other day, a chocolate chip pancake? Lots of vegetables. Lots of - very little red meat.

eSLD GROUP

Prepackaged Meals

Tell me about your experience with the pre-packaged meals?
Well I said, “we” liked them….it’s at least easy anyways….I mean he’ll pretty much eat anything, although he showed a preference for some (you know) over the others…. but he did…. I said (for me) it was easy for me too… (ya know)… not having to measure things (you know) and I don’t have time for that.
He liked…well like I said it was easy…he liked the ones that would kind of mimic what… (ya know)…what Noah or some of them… (ya know, the other family members)…. were eating. So like, we had Mexican food….um…like we had taco salad or something… I could…I could like cut up the beef enchilada or something and put it on lettuce…. and it would look a lot the same (ya know?)….so he liked things like that. He liked….he liked the Mushroom?…The Mushroom Rrrr….something? I can’t remember how you said that; but he liked that too. We got a lot of help this time from the grandparents…they were…they were on board with it, and wrote down what he ate and stuff for me and… but the most out of control I felt was when at school…because…. Just whatever they were serving…..and I know that he’s always encouraged to eat healthy but I could never keep track of what he had exactly.
She really liked the shakes, I think those were her favorite. It was a great way to get her fruit in for the day. She loved to have them for breakfast and would ask for them a lot. She thinks they tasted really good.
I liked them, they… they really helped, especially at lunch time…. And, it really helps me because you know how busy we are… it just helps; it helps Sidney and it helps us not to have to worry, um, about what she’s going to eat. And like… because if any other given day she would just go in the refrigerator and get like whatever was there… that she saw…. that she liked, or whatever, in undisclosed amounts. And that was the hardest thing… it was like how much did you eat? You know, not knowing that… because she would just… you know, she’s old enough to go into the refrigerator and get what she wants; so being able to say, you know… it’s just easy for her. She’s like “oh wow”… pop ‘em in the oven… I mean pop it in the microwave for a minute, and she’s set. And a lot of times when I got home; because her dad would be here when she gets out of school, she’d… I’m like: “Did she eat yet?” ….
“Yeah, she ate” (Dad)… She’s like “Yeah, I already ate!” And it was just really nice to know. She would warm them up at school. Her teacher would go to the teacher’s… it’s right, in the lunchroom area they have their own little microwave, so they’d microwave it for her. It only took a minute, which was really handy, because most frozen meals are what… like four or five minutes.
He just got tired of them. At first they were great. They were very convenient for me but he got tired of them. At first he liked the HMR ones, yeah, but he's not really crazy; after a week he would not eat them. Then I started cooking them and then I started cooking everything and you suggested buying frozen ones and I started doing that and that makes life a lot easier. He loves the frozen ones; they got pizza and spaghetti and lasagna, his kind of stuff. They were great because they looked healthy.
They weren't so tasty, but I liked the shakes except I didn't really like the vanilla one. We went to the store and got Smart Ones or either we made our own food.
For me, I think that they were tasty for Patrick because he likes to eat them. Now we have to be careful because he will go and get him another one and another one. The shakes didn’t work for us for breakfast. Had they had the breakfast entrees … I don’t know if that’s something that they’re working on. That probably would’ve helped too. He just ate lunch out because it was … I think it would’ve been more helpful had he had more than just dinner.
It was very easy. My only problem was that they weren’t that tasty. I doctored them up a little so that helped. A little pepper. Some of them I had to put hot sauce on. The chicken I put barbeque sauce on. Now the turkey with the beans, those beans were hard as rocks. I could’ve thrown them and knocked some window panes out with them. Other than that, most of them were good. I didn’t like that one. I wasn’t crazy about the lasagna. The rest of them were pretty good.
Breakfast was hard too because when I say that’s breakfast when I give him the shake, he gives me attitude. We started using the shakes for snacks only because he couldn’t use them for breakfast. He just could not understand that.
Now we purchased packaged meals. I think we were able to try a lot of … with the different meals to just expand the diet choice. We tried a lot of different things. We tried the Chinese ones. We tried the fish. We tried the pasta. We tried Alfredo, the macaroni. There wasn’t any of the meals that he didn’t like that we tried. We tried each variety. Shakes were good and in fact I was pleasantly surprised when I joined him in taking the shakes because I wanted to have a feel for what they tasted like. They were pretty good and he even asked to make sure that we made his shakes every night.
I think that worked out really well. Yeah, you liked them (towards participant), didn't you? She took them - more days than not - to school. Didn't eat them at home, but used them for 3 days a week when she eats - when she takes her lunch. So, more times than not, that's what she took, and that was fine. And then she added a piece of fruit or some carrots or something to that, and that worked out great. For breakfast, she would have that shake every single morning. Now she's out of the chocolate - and do you like the vanilla as well? (she says no) But maybe if you added a little chocolate syrup to it? You could make it chocolate. Yeah, then you could make it chocolate. And when we were at the Down syndrome clinic yesterday, they talked about her needing a multi-vitamin, so we got that, grind it, and then put that in with her daily shake.

What did you most like about the meals?
Just that it was easy and he could still choose. It’s something he can fix himself and something that’s easy to keep track of. It just helps so much with portion control.
They were simple and tasted good and we knew she was getting the right amount of food for that meal. They would also really help to fill her up.
She could make them by herself and that made her feel more independent too. Like she was more in control; and she knew that, you know… she started to notice that she was losing weight. So, it’s very convenient.
They had everything in them. Low cal and perfect. Then I could add vegetables, which I like to do, and I could just send them to school. Which was great because that was his biggest meal. So he would just eat those instead of buying his lunch. I'd rather him eat healthy than...and right now his age he needs to start making directed choices I guess, start to curb his wants into what he needs instead.
I liked the idea of the meals but just not the taste of the meals. They are really helpful to...I don't know if I'm supposed to but just to have it so you already know the calorie count and it has the nutrition that you want and stuff, but the taste of those was just not good, not something that you could do on a continuing basis. If you really are somebody who...
For me, we had more on weekends. I tried to stick to it more on weekends. During the week it was just for dinner. I like the convenience of it because then you can take it places. It was just ready.
He would have the meals for dinner. Then lunch was whatever the boys had while they were out on the program. But they were great especially when you’re really hungry. It’s fast. I like the fact that the calories were already on there. You’re counting your calories for the day. It was just there. The information was there.
That’s a good way for us to work on portion control. It’s also a good way to work on skill building because what we started working on doing is having Austin look at the directions and prepare the meals. Some of the prepackaged meals, you poke holes at the top. Some of them you had to lift the film.
The main thing for us is portion control. Added benefit is the independent preparing of various meals.
That it was portion controlled. And that it was so easy: she could do it herself. There was lots of success with that, and an easy way to. (asking about bringing them to school) Yeah, that wasn't a problem at all. There were a lot of kids who bring food from home and microwave.

Tell me about times when your child did not eat the meals?
Usually the only times he didn’t eat them is if we were out to eat (ya know where he ate….) or if he was at school …. like eating a school lunch…. and um …I said.. or I said there were a couple times where, ya know, we’d eat over at Grandma’s house or something like that…um, he spends the night a lot over at his m….or… my mom’s house & Gerald’s mom’s house…& if he was over for the whole day, they would bring one for lunch or for dinner, ya know? Because they kind of have, ya know, like traditions where they’d go and eat…but they would always make sure…like if they got pizza, they would get the healthy ones with the thin crust and not with all the meat & stuff on it.. so..
She has been good about getting out the measuring cups and trying to measure out her food or look at the side of the box then figure out what color food it was and if she could eat it. Um… It would either be; like sometimes like after the game or sometimes on the weekend, um, I would let her, you know, eat a normal meal… I would say; or close to normal… like with modifications. So like if we were eating Chinese food… instead of giving her the one with the… all of the sauce & all of the salty stuff or whatever; I’d get her like with light sauce, or like the clear sauce, or with the brothy sauce. And do her with more vegetables… so as opposed to getting her like Lo-Mein noodles, I’d get her (you know) like beef or broccoli… or chick…not beef and broccoli, but like chicken and broccoli.
And I would tell them, you know, give me extra broccoli or add some; can you add some extra carrots or like mixed vegetables? So when she did eat; she was eating really kind of mostly vegetables and then some meat in there; or I’d compensate because we’d have like brown rice… like instant brown rice; and I’d you know switch it out for the white rice… or the…cause it wasn’t too hard cause my husband eats healthy anyway… so that was…. You know, it was just trying to increase the… well you know, monitor the fat content…and the… you know, the carbs. Mm-hmm. Or I would sometimes what I would do is let her have like a little bit of treat with her meal; so she’d have like… she’d eat her, you know, whatever meal and then she’d get like… but see, it would kind of help me out because I would control that 250… 200-250 calories and then she’d have like… if she wanted nachos or something, I could easily… or she’d have like one taco as opposed to eating like, you know, 5 or 6 like we were doing; she’d eat like one taco and then have her meal. And then she’d eat like… for dessert, she still had her ice cream; but her ice cream would be like a fudge bar… but the fudge bar would be what like… 60 calories or a 100 calorie fudge bar… as opposed to like, you know, real ice cream… like 3 - 400 calorie regular ice cream.
I cooked. I always cook healthy. We have a lot of fish. We have a lot of chicken. Chicken and a vegetable or fish and a vegetable. We did that for a long time when he first started. That's what...I've got like tonight we're going to have...I got some collard greens and I like to put those with potatoes and I've got some ham but it's lean ham. I like a lot of vegetables.
We went to the store and got Smart Ones or either we made our own food.
He would have the meals for dinner. Then lunch was whatever the boys had while they were out on the program. Because with them being at school all day; it’s really not school, the 18-21 program.
It seems like they were doing something different every day in the community. There was no way we could monitor lunch. They got what they wanted. Terrence more or less ate what he wanted, but she wouldn’t let him super size it or anything. She tried to discourage him from ordering sodas and order water. So not only was she trying to help him health wise but even economically, because she was telling him “Oh Terrence, don’t spend $2.50 on a Coke. You can get water for free.”
Actually, for me, because the teacher sends home every day what he eats because she knew we were on this. She geared him towards eating more salads and stuff like that; salads and he didn’t eat dessert. Then, we tried to keep it where he didn’t eat too many calories for breakfast; a couple eggs and a couple strips of bacon. We tried to make sure he had a fruit every breakfast and a vegetable every dinner. Now if we would’ve gotten on that other program with the fruits and the vegetables, I think that would’ve helped too. But, even though we didn’t get on the program, I tried to buy more vegetables and more fruit so that he could just have two vegetables at lunch or dinner or whenever he’s at home.
It depends. Regardless of what setting Austin was in we encouraged the healthy eating. The red foods, green foods, yellow foods was helpful because when he was with his grandmother she could say to him, “Austin, that’s a red food,” to manage that system. We’ve been practicing healthy eating for a while so we try to do that at home and it works better if we keep extra stuff out of the house. Austin is a sneaky teenager so he’s learned how to get what he wants as opposed to asking for permission. When he’s with other family members, they may not practice those healthy eating habits as well. That’s a challenge especially when someone who picks him up for an after school therapy session. We try to continue to make sure that anybody that he’s with outside of my setting is also trying to [inaudible 00:03:28].
Then we had our Friday free nights. Friday free night was breakfast, which we went from all fried to steamed and blackened, and French fries, that’s all.
We would make sure that there was a lean protein and then a fruit and a vegetable. And, again, working on portion control. We would use some of our measuring cups to make sure that that was a half a cup or that was a cup. So that... Evenings, you know, probably the biggest downfall - if there is a downfall for her - it would be snacks. It's hard to... And her tendency to use, like, a 90 calorie brownie as a snack versus a, you know, fruit with peanut butter. Because she doesn't like peanut butter, you know? So that was a little - fruit with some cheese, you know? Just snack choices. Yeah. Yeah. Because all of the - every meal at home that she eats. For breakfast, she would have that shake every single morning.

Exercise

Can you tell me about the exercise your child did?
Caleb helps manage the football team so those practices, and he would lift weights with the other players.
She mainly played the Wii or we would go to the rec center on the weekends.
She has cheering a few times a week and has been playing the Wii. She is in shape. Let me tell you; because I tried to do that “Wii”. “Black-Eyed Peas”… one of the dances? Is one of… their dances are real like… you’re jumping up, you’re getting on the ground, you’re getting up… you’re hopping down… you’re kicking it. “I” was wore out; but “she”… she can do… she can literally do it for hours. I mean, hard dancing… like hard intense dancing… so I can tell that she has really good stamina; because before she’d be like “Whoa, after like ten or fifteen minutes… she’s done”. She’ll do the Wii… usually when she has something ‘really good.’ Like when she…if she…If I say “ok”… you know; she’s having one of those weak moments, or we’re somewhere when everyone’s eating deliciousness. She likes to dance. Dancing and music…that’s how she.
Yeah; she learned how to talk and everything pretty much. So dancing, music, and visual. I mean… I can’t…she can; if I don’t stop her, she’ll go for hours. So I have to tell her: You have to stop and drink water. Like you have to… so yeah, she doesn’t mind exercise.
She has actually started taking him to the Y in the evening. They did a water aerobics class Monday night. Tuesday night they did kickboxing and then I don't know what Friday is, but she's taking him for all that great stuff and then me and him, we go and he likes to play racquetball and basketball and kick the ball up against the wall with me and then we do some of the machines too. He's getting some activity.
At points in time she would want to exercise but sometimes she just really didn't want to. When she did she'd go on Just Dance or took a walk or go up and down the stairs 10 times. If we have the TV on I try and get her to move around while she watched.
He walks when he’s at my house. Usually when they go out in the community they walk. He has misplaced the thing (pedometer). He misplaced it just in the last few days. I haven’t seen it in about three or four days. Okay. Anyway, no … It’s hard for me because I work evenings. Even though I’m off on Saturday, we don’t belong to the gym or anything; it’s gotten too cold to go outside. We didn’t do much. We didn’t do much exercise though.
Patrick does the treadmill now. We do it every day. I do get Terrance when Terrance is over there. He walks for a little bit on it. We have Wiis but don’t use them much. The Wii, they see that as playing. It’s just something. I think when you’re having to get dressed and go somewhere then it’s built in your schedule. I think probably having some kind of log for people to actually sign in that they’re exercising. I found that you can put it in the iPad. I started doing some of that.
I think making it mandatory where you log what you do for exercise that day would make you even more aware you need to go out and move.
Exercise and physical activity generally tends to be a lot of the same thing: walking, walking around a track, as well as the treadmill. Basketball, swimming; we’ve got weights and so we do probably five minutes of weights. He has a punching bag downstairs where he’ll do that. We’ve got a weight ball, medicine ball, sit ups, pushups. We try to do a little bit of that each day. We don’t always get that done. They’re really short sessions. He has P.E. at school. He will move. When the weather is nice he goes … oh, and he hits his tennis ball around. That’s one of his favorite things to do. When the weather is nice, he’ll do that plus shoot baskets either outside or at my mother’s. There is a goal house down the street that’s accessible to him.
Umm, yoga, zumba, and water aerobics. That's it. And what do you do (towards participant)? Well, tell her what you do on the Wii. The days that she doesn't go to work at the gym, she would do 30 minutes of the Wii. And she was pretty consistent about that. So what kind of things did you do on the Wii - on the Wii Fit (towards participant)? Yoga, and zumba…the Wii zumba. Yeah. If it was the weekend. What does - what do you do at Lifetime Fitness? Elliptical, bike, and the treadmill. And then some weights. And sometimes some sit-ups. Yeah, and then you Oh, she goes to the gym 3 days a week. Or 4 days because we do aqua on Saturdays. So 4 days a week for an hour. Her and I do our water class. We do our aqua aerobics class once or twice a week.

Stoplight Diet

Tell me how easy it was to use the stoplight guide?
We didn’t really use it.
We really liked it. It was pretty simple to do.
It was pretty easy. I think I kind of looked over the Stoplight Diet the first couple of days and then got a good idea of what was red, green, and yellow. Uh, so now she’s kind of in that mode where she’s monitoring herself automatically. She knows it… she’ll automatically say “this is healthy, this is not”. She’ll eat vegetables; she’ll be like “no”… like pizza; she knows that’s got a lot of fat or, you know, things like that. So it kind of all worked together. So now she knows when she goes to lunch, she’ll take her meal and she’s going to get fruits and vegetables at school. She can have whatever fruits and vegetables they have; but not an entrée…
No. I knew what was healthy and since I made the meals I didn’t think he needed to learn that.
It was really easy to use. It was very helpful. If I didn't really know what she couldn't eat or anything we’d look at it.
I only looked at it a few times, but I started limiting some of the bacon and the sausage and stuff like that. We just focused on the main thing which was his waffles, because he has OCD. He eats the same things over and over. I did eliminate the meats and only once in a while.
We didn’t. We kept the turkey sausage and the turkey bacon. Terrance, most of the time he has a hot breakfast; I eliminated stuff like pastries and pancakes and waffles. We went more with bacon and eggs. I eliminated potatoes. He’s a big potato fan.
I think it was easy because it was something that I shared with my mother. One example: we were at HyVee’s and he wanted doughnuts and she immediately said to him, “Austin, what kind of food is that?” He responded it was a red food. Being able to talk through that process was much easier.

How well did your child understand the stoplight diet?
I think she understood it “pretty well”. I think at first she thought the “red” light meant “red-colored” foods. That they weren’t…you know….or “green-colored” foods. But then I think you understood the…right?
(toward Emily) That “Stop” meant “Don’t eat….” (“yeah” from Emily). But I think cause…I’m the one that prepares most of her meals, we got away from that pretty quick; but I think that’s what she thought at the very beginning…But then she caught on.
Sidney got the idea of what was red, green, and yellow… because we would reiterate; and then at school they would reiterate… you know, fruits and vegetables. It made it easier actually when she went to school that her teachers were kind of monitoring her too.
It was easy. I think for Lauren, she's a pretty concrete thinker so if it was in the yellow light she would feel like she couldn't have any of it. Sometimes we'd have to talk it out; you can have a little bit. There's not enough stuff in the green light to...so it was learning how much of the yellow was OK to eat.
I didn’t go over it with him much because his attention span just doesn’t work like that.
I think he got it OK.
He does. Yeah. He knows that the red foods, you don’t want to eat very many of them. Green foods you can eat all you want and the yellows sometimes. Yeah, it’s an easy system to manage.
It says unlimited fruits and vegetables, and that's what we've been talking about. That those are good snacks. And even yesterday, she was - we were at the hospital most of the day, and then she came with me to work for a little bit. And she had a small piece of cheese pizza and then a huge, heaping helping of broccoli. And some peaches, so that was… Yeah, she's making good choices.

What aspects of the guide would you change?
Um, I don’t kn….I think it was pretty self-explanatory and I liked the pictures - where she could look at just the pictures of the foods and tell which were good and ok to eat. I don’t know what I would change on it.
Change some of the yellow foods to green. Maybe to rule out that middle ground or try to make it more clear.
I don’t think so. Nothing comes to mind right off hand.
Having the different foods listed and having the different categories was extremely helpful.

iPad

What did you like about the iPad?
Lose It was fun when I had time to do it. It was very fun… I could see where that could really help us…. just let me plan ahead when we got out…(you know)…because it had all the restaurants and things on it…because we had fun one night just looking up stuff that was on there…like as a family…even Noah was like “look up to see how much a blizzard is?” at Dairy Queen, or something (you know)…and we’d be like “oh, guess what it is?” We were just guessing, you know what I mean? And seeing who got closest…it was very fun.
It was easy…um, Emily was motivated by it. Um, the games and the pictures… Actually, she was motivated by all of it. She sent messages to your sister at school. We liked….. I liked the visual on there….particularly on the program; where you can see…where you put the food in & the exercise in. You could see it go down. I think it was a simple visual that she could get.
Um, I liked it.
Yeah, he loved it. It was wonderful for him to take to school, to take pictures. That was a wonderful start. It got everybody on board at the school because it was being recorded. That was really cool so it was a great start.
We liked it a lot. She loved the games. It really is easy to use and neat how you can see how it's tracking, and I like the fact that you can see oh my gosh, I'm already three quarters to my limit and it's only half way through the day. I got to pull back a little bit. So yeah, I think it's good for kids like Participant.
I really loved the iPads. If there is any way you can give poor families one. Now it’s Christmas. This would be really a good thing because the boys really love the iPads and the mommies love them too.
They were very convenient. She had to walk me through talking on them because, of course, I couldn’t have figured it out on my own.
I liked the fact that we were giving our teenagers a tool that they could self-manage and they could use. It is …. a lot of times what happens with kids with special needs, and Austin has autism, is there’s not that expectation that they can use technology to organize themselves the same way we do. The fact that you guys included the technology so that …. And you selected an app that had really nice visuals, was easy to enter the information and also allowed the kids to take pictures so that they can have more ownership for their eating habits, was a really nice thing. For Austin I especially liked the FaceTime. I thought that that was a real good way to work on instructions for someone who doesn’t really talk on the phone and to develop that attending and talking about the lesson. At the beginning, you probably thought oh my gosh, but after we could see the improvement every time and Austin doesn’t talk on the phone a lot, so that was really a nice option to have as well. Yeah. The thing about autism is that because Austin’s language and communication skills are his biggest area of need, he’s often prejudged as not being able to do things. Austin has a lot of good skills. We just … sometimes I think this might have been a kid that could have been non-verbal.
I think it was very easy. And once I did it a few times with her, then she was able to do it on her own. And I would check it most nights. There were a couple times that we were out of town in the midst of this, but I would check it... You know, sometimes she'd put in something as lunch that was really dinner - or you know, we'd make some adjustments? But she did a really good job about that.

What things did you have trouble with?
I think if we had… like I said…if we had our Wi-Fi already, it would have been perfect.
So it was just our fault because we didn’t have the Wi-Fi yet, so we could not use a lot of the features like the facetime and scanning the bar codes on foods…: Because like I said…we couldn’t…we couldn’t track (you know) his miles (or whatever) like I wanted to on there…. like his speedometer steps and all that stuff….but he did not play with it very much but that’s just him (you know)…. he never has really been into that kind of stuff…like he would rather listen to music or play his drums….or things like that---he’s never really been into like any kind of games on the computer.  I don’t think there were any that we had trouble with. It was more ‘operator’ … trying to get familiar with it  It was ... It gets hard when I get really busy and tired. Because it’s like homework, and I noticed in the past couple weeks. It’s been like “Oh my gosh, I gotta get this in” …. But I was just like “Oh gosh... where…Sidney, is it charged up? You know, it became like really work because I wanted her to be more independent with it; but she’s not. She’s more independent with her choices, but like going to the….she’ll go play on it and things like that; but she goes into her own world and likes to… you know  No. I'm not a technology person. I just don't like it. It took too long. I think it would've worked better for me to stay consistent to just write them down.  No not really.  At first I was clueless. I’m not a gadget person. Once I got to doing it every day, I figured it out  He didn’t have any trouble because young kids use them in school  We really didn’t have any trouble with it. We didn’t have any trouble with it. Tell me about any problems you had using Lose It or iStep Log  Just without internet we could not log his miles on the iPad so I had to write them down on paper.  It was fine if she could scan her foods, that was really easy. But portion size was hard I think also for these kids to still conceptualize “sizes”…or “portions”…is hard. 
I don’t know if…you know, you had icons of the portion sizes. Cause Emily put that she had 3 cups of pears; when she only had 3 slices of a pear. Or like one day you wrote down that you did like 3 hours of volleyball; instead of 30 minutes  Sidney not understanding what a ½ cup is…. an 1/8 cup… or a full; you know? Things like that she wasn’t very good at; but she could read and point at the pictures and type in what she had. She was pretty good at that. The exercise she struggled with a little bit. She was making up stuff. Like boxing for five hours… stuff like that. I was like “that’s…. you didn’t box for five hours, honey.” Stuff like that  It took time and I don't have it. There's so much; going to school and I'm always looking for an activity for him to get into and taking him to it or picking him up from it. I have enough to do.  Nothing  Entering the stuff was okay. It’s just that I think you have to really be a routine-oriented person. There were days when we forget. Then somebody took it over and it didn’t work out. Every day, yes. That was the hard part. I wish we could just do maybe a couple of days where we have to, like on weekends when we’re really on the program versus every day.  I think it was more of a hassle. Then again when I’m home on weekends, because I have him with me … Again, if he was in my sight every day it would’ve made more sense, because then I have to be mindful of what he’s eating because then that helps me think of what is going on throughout the day seeing that visual.  Right. One day I came home. We were all eating. The battery was just gone. It was like four or five and I couldn’t enter anything.  No. The only thing that would have been helpful for the Lose It app would have been to be able to print the documents, but managing that and going back and forth … it’s really pretty straightforward and easy to use, good app.  No. You know, the barcode is a little sensitive, so it would work or it wouldn't work sometimes. 
But for the most part it was fine. She loved (the fitbit). That was a source of trauma when we lost that. Yeah, because she really liked the FitBit. What did you like about the Facetime meetings?  I think they would have been helpful if we could have got them to work.  I think that it was really easy. She felt like pretty big stuff having someone call her every week, it really kept her on track.  It actually for me… was stressful. Because, I guess, my schedule is so busy. You know what I mean? My schedule is so crazy all day long so coming home and having to remember something is like… I mean, just trying to remember to help her with her homework and just getting her stuff ready for her. We get up at 4:50 in the morning so I would… I would be tired by 8, 8:30. I’m like…you know, just helping her with her homework; we get finished with that. She has chores. We have clothes to wash to get her ready for school and every day she needs something different: Cheerleading outfit, workout clothes for her gym class…. Um, different clothes for her volunteering when she goes to Harvester’s. A different outfit for… I mean, she literally would be going with bags of things of stuff, and then... Right! So that's why…I’m just like…”Ah, was I supposed to?” You know… but for the most part, I think the Face Time helped keep me on track a little bit because… you know, day to day stuff everything goes…”Ok, if it’s not done right now…!” So, you know, when you ‘did’ check in… it was good. I would remember: “Oh gosh, it’s Sunday… I need to call Lauren!” I didn’t get to talk to her this week! So that kind of did help; I think I’d be all the way in wherever zone… just crazy people zone, I guess…  Yeah that would remotivate me to start writing them down again or putting them in, yeah. They were good but it was hard to remember to do them. 
I know it puts more work on you but I'm guessing if you just had their phone number and sent them a text reminder like an hour or so before or that morning or something or the day before. We try and put it into our calendar so we don't always do it. I know you would send messages to the iPad but I think sending them to the cell phone would have been better, even to help remind her to enter her food and wear the fitbit.  I liked that. Oh yes I did. I did. It helped to keep us on track and it was not very inconvenient to our schedule. I think because I’m not … I just don’t … I have trouble remembering. I think again, I think other families probably wouldn’t have any problems meeting that schedule because you made it so easy, so convenient. You’re just calling once in a while on an iPad. That was the problem. It wasn’t your scheduling the meetings. It was me having to remember. That was a “me” thing.  It was helpful to have questions answered especially in those first weeks. It was just hard that Terrance didn’t want to do it, so it was more us talking than him and you.  I think it was a good way to interact, to go through the lessons and to do it in a way where it didn’t require us to be some place at a certain time because our schedules are busy even in the evening, or you to be some place at a certain time. It was an excellent way to get that lesson, for you to share information, provide that input. He looked forward to the calls. It was … convenience was probably another one of the big things. It was not a burden at all. There were a couple of times when we were running late and it was easy for us to adjust our times because we were going to use face time to handle our lessons as opposed to physically being some place.  Yeah, that worked out, huh (towards participant)? Yes. Yes (I think it helped her out). Yeah, that worked out (for her to be told what she could improve on). No (it was not a burden). 
I mean, I think, other than the normal scheduling glitches -- I think it was fine. How easy or hard was it to take photos with the iPad?  Yeah, easy for me…my husband didn’t know how to do it (laughs). He never really wanted to take pictures but Brooke would help him out, she was really good at that and she is only 5. She was so funny, they had switched the screen one time & it was that blue screen…that blue crazy screen! I said “Brooke, come in here & fix this for me….and boy, she just fixed it. I was amazed….  It was great, she loved to take pictures. She would sometimes flip the camera around and take a picture of herself instead of the food by accident but other than that there were no issues. It was definitely easier than using a digital camera. Emily still can’t figure out how to use one of those.  That was pretty easy and Sidney really liked it.  I don't think it was hard at all for him. Yeah he likes that kind of stuff.  It's easy but it was hard to remember. It's hard because when you eat a food and then you realize you have to take a picture and you already ate it  Yes. He’s got lots and in fact sometimes he took pictures of food that I didn’t know that he took. Yeah. Some of the pictures were blurred. That was another skill that we could work on. We could work on making sure that that picture included all of the items and that it was a good picture, so yeah. A lot of skill building with the way you guys designed that. Easy to take the pictures. The thing that he … because this was a new skill for him, he needed to just know that not only do you want to push the button, but you actually want to make sure that you have a clear picture. That is a separate skill where we could look at it, pull it back, have him take a look, see that it was blurred and retake it. Future Now that you have completed the study how will you use this information in the future? 
 Well, first of all…I’ve learned he doesn’t need to eat as much as he’s been requesting before (you know?). And that he will eat different things. He tried some fruits that he had not tried before; and (I said) it got easier for us because we were kind of on the program saying “no, you’re done…(you know?)…because we all can benefit from that…telling ourselves “Ok, you’re done, you know?” Because you sit here at the table and he eats so fast…that (you know) he always thinks he wants more & more. This helped us realize how much he really needed to be full. And once he ate, I mean he didn’t ask for more the entire night…so it’s not like he was telling us he was hungry all night (you know?). So we definitely learned (you know) some things he could eat, and that he needed less amount.  I think one thing that we both learned a lot about was our serving sizes and we are going to keep up with that in the future.  Portion size… counts for a lot. I mean, just knowing that… you know, you can actually shrink the size of your stomach a little bit with your intake so that you don’t even crave as much food, but it takes… it takes a few weeks to get there; but I think portion size. If nothing else.. portion size and just knowing … and I’ve always kind of known this in the back of my head anyway; but just seeing when like using the iPad app. Seeing like how much vegetable you can really, truly eat… and how much it amounts to nothing? And how you can exercise and go from thinking that you’re about to meet your 1200 calorie quota to next thing you know… you’ve thrown in some exercise, and next thing you know you’re down to 600. You’re like “How in the world are you going to even make it to a thousand calories?” You know what I mean? I’m like… “Wow, this is pretty cool!” so I think that helped me to learn like …. exercise is super important. 
Portion size is important; but vegetables … like you were saying are “really important.” You know, I really can’t… when I see her eat anything…I’m like “Uh” sometimes it’s like so hard to get vegetables in when you’re a busy family. Like if I don’t go grocery shopping and my husband eats up all the vegetables too; so um, I think that just really solidifies for me like: “you can have…you know; you can eat whatever you want as long as it’s like…”this much meat” or “lean meat.” Of course I knew that because my husband eats healthy… But I mean just really seeing it; how it’s affecting her. How simple it… you can make it even for people with special needs. I don’t think there’s really an excuse for anybody anymore. If they just learn “that”; and they can stick with it for a few weeks, I think that anyone… you know “anybody” can follow. I… I give no excuses. I always say “If Sidney can do it; I don’t want to hear it”. You know what I mean? I don’t want to hear any excuses from any people.  Keep doing the meals for lunch time and send him over to Tammy. I want one over at Tammy's too because portion control is big over there and what they eat is fatty. She does have diabetes but she hasn't learned yet that she has to watch what she eats. She's exercising like crazy but she's not eating properly.  I think we have a much better idea of how to eat and how many calories are in foods and how to track a little bit better and the importance of exercise  We’re definitely going to make it a lifestyle change, because I know I’m not going to go back to trying to do fast food as other meals. It’s easy but at the same time it’s so unhealthy. I think just understanding that and taking the steps that we’ve taken to eliminate certain things. Because OCD runs in the family, I have to make sure that we’re not having bad habits. Start fostering the good and then they become the habits that we want to keep; definitely more salads, less juices, less pop, more water. 
 More vegetables, more fruit. Exercising.  I think we will continue to use the app. We will continue to use the pre-portion meals. We’ll continue to use basically all of the information that you shared with us. It was a nice package that you guys put together, really nice package.  We're gonna keep on with the portion control and keep on with the LoseIt! We're gonna keep tracking, aren't we, Caroline? When you think about prepackaged meals what is the likelihood you will eat them in the future?  Oh I bet we’ll use them quite a bit. It’s something he can fix himself and something that’s easy to keep track of. It just helps so much with portion control. Because you look at it and on the plate you think…”gosh, that’s not very much” but then you eat one…I mean it fills you up  I had people asking me… like her teachers and stuff? They’re asking me like: Where did you get those? Like how much did they cost? Like Sidney’s losing like a ton of weight… and she’s doing so good with it… and you know I think everybody wants it to be easy… So I know they have like all those TV things that are saying like “prepackaged meals” and all of that. It really works. I think the hard thing will be just trying to monitor the salt intake for most of the frozen ones. Um…so what I’m thinking is… my game plan for now that we’re done with this…I have to be realistic. You know, there’s going to be some times when ideally I want to make Sidney’s lunch every day, but that’s not gonna…that’s not realistic (laughs). So just getting her five meals a week at least so that she’ll have ‘em for school? And then letting her have shakes in the morning. Um, what I was going to ask you… is I usually do frozen fruit in her shakes? But I’m thinking of doing like… you know, giving her the “soy” as opposed to the fat milk, or either… maybe even skim milk; and um, I was thinking about the… “Breakfast?…” What do they call those things? Those instant breakfast things. 
 I have to, at least for now  Probably the Smart Ones, I think those are great. They taste pretty good and it's that fact that the calories and the nutrition are already packed in and it makes it really easy to use them  I will use prepackaged, because like I said they’re very convenient  I really haven’t shopped around to see which ones … To learn about the calories. Of course, I’m going to be looking for taste and less calories  He’ll eat them. We were already using prepackaged meals for breakfast because they are convenient. We have the portion control. It was enough too that I knew he had enough to eat. Doing so more on the weekends when we’re at home and we’re doing a lunch, I think that we will probably increase that, because again another way to get variety and portion control and manage his calorie intake. He’ll eat them.  You know, I think that that's probably a good idea for the days that she eats out at school to get the Weight Watchers or the Lean Cuisine meals. I think that's an easy way to make sure it's portion controlled. And it's convenient for her to take that along with some, you know, fresh fruit or fresh vegetables from home. How would you feel about continuing to track your dietary intake?  Yeah I think we would do it…because I noticed just the difference eight pounds looks on him…you know? I mean really it does. His clothes fit better…  I think it’s really helpful. Yeah, to write it. I think for anyone it is. To write down what you’re eating; everything you’re eating. In between, and snacks…  I think as long as she is eating those meals we will be fine and won’t need to keep tracking.  I'll keep track of what he eats in a day. I won't write it down but I'll monitor it because I can. It's got Tammy on board, who is paying attention to what he's eating and how much and it's really helped.  Yeah we may download that Lose It! app.  Yes, on weekends. It’s more convenient for me on weekends.  Yes. 
It would be better if it was the person doing it versus us trying to figure out what happened throughout the day and all that stuff, I just don’t know if he could have done it. I guess honestly I never even had him try.  I want to do that. I’m actually looking … this is a really nice system. Not having to write things on a log. Writing the information on a log is a burden. I’ve been able to scan the foods, so we scanned a lot of things. I’m going to miss the fact that we don’t have that history, those things and the my food selection, but it’s a pretty nice system to be able to maintain.  We are going to keep doing that. Yeah. I'd love to keep one of your sheets - a blank sheet that we could copy? So she could also be responsible for tracking. You know, that way, and we'll also put it on my iPad. How would you feel about continuing to track your physical activity?  He has football practice and that keeps him accountable, so we may have to figure something else out for next semester.  We made a sheet we can write down what she is going on what day to keep track, so yeah we are going to keep doing that.  mm-hmm, like this weekend she didn’t have as much physical activity. I tried to get her out to walk as much as possible, but it was cold and she didn’t want to do it… so that was challenging  You know what, there is a program at the school that asks for input and I have been writing it down but they just leave them in his backpack so what good does that do. Just knowing that he's getting it and actually I think he's starting to...this was his first week to actually go with Tammy. It's his first week back in school and all that kind of stuff but she wants to lose weight so he's her workout partner so it's awesome. It really is.  Yeah. I think we have a goal of just getting some activity in every day  I will just make sure that he does it, not sure if I need to write it down.  Yep. 
In fact Austin got to the point where he would do something and when we were riding home in the car, he would talk about having to add the physical activity that he did. That worked really well, including housework. One time he was doing his weekly chores and he said he needed to add the housework to that.  We will keep doing that too. Suggestions What do you feel we can improve?  I would just make sure everybody has all the technology. Because that’s the only thing that was frustrating to me…because I was like “Ugghhh…” it was so hard and trying to get somewhere that had Wi-Fi. And just to make sure all the families have that.  Um, the only part that I was noticing is that I was doing most of the food input. I don’t know if there’s a way to make it easier for the kids…um, you know, more visual. I…I visualize like the board-maker programs where the things that have just like the little icons….or where she could look at it and almost point to the foods she had … um; that would be a huge amount of foods in there.  It’s nothing you can change, but I just didn’t like having to enter everything onto the iPad, it was time consuming. However, I can see it being helpful for others.  I think older moms maybe don't want to do the technology part. Maybe the young moms are into it but I'm not and maybe other older mothers are not right now. For him, technology was a great motivator for him to be able to have a new device and take pictures with it but I didn't like writing it down. If I could just write not type, write it down. Maybe if he could do it on his own, but he can’t. Old fashioned. Would've been better for me.  The only thing that I think you could improve on is the packaged meals, just making them taste better.  Give an iPad to keep. I think the program would’ve been just way better at the end if we were able to keep the iPads, because special needs kids they need that for communication. 
All the apps they can download, especially … I’m not trying to single certain families out but like low-income families. They probably should find a way to give them some kind of incentive like that. I also suggest that you guys, because they’re prepackaged meals, that you monitor proper blood pressure. I don’t know if you guys are able to do that  I would say the money for the fruits and vegetables. Then like she said, the gym membership, if they made it affordable where they could, while they were doing the program or whatever; even after the program is over, if they gave a big enough discount where they could join a gym. Make it a family thing, because when you take your child. He won’t do anything by himself. If you can get the family moving where they’re participating in a family gym membership somewhere  Let’s see. I think if you guys are still working with our population, I don’t know … because summer time is probably better or maybe a longer time period. I think that probably a longer time period might be better because our kids, young adults, they’re not little kids anymore. Sometimes it takes a while to get all of those skills that we want them to have. As a parent we’re looking at a lot of separate skill building as well as weight loss. For example for Austin we were looking at independently doing everything as a part of that process. Being able to have ownership and management of his weight loss and giving him the tools to be able to do it. Utilizing technology, the camera, the face time, all of those type of things. Probably a longer time might be helpful just to be able to look at that progress, going through various seasons when there’s various things about one. Then also looking at how well it’s being maintained. Austin, I’d like to get 100 pounds off of him. We have a lot of weight that we need to get rid of on the healthy choices and eating habits. That’s probably the biggest thing that comes to mind. 
Austin doesn’t have a reason to want to do it for himself. Some kids are concerned about what others think. Austin doesn’t have that concept of being concerned about peer pressure or anything like that. He has no incentive for losing weight other than we’ve talked about the fact that when he loses 50 pounds we’re going to … where are we going to go? To the beach, but that’s not strong enough to be able to say no matter what I want to get rid of these pounds. Being able to work towards that goal has probably the most meaning for Austin right now. If we were looking at a situation where there is some type … not enough money to pay for going toward the beach, because he doesn’t have that strong concept of I need this amount of money. For him to be able to look at making a contribution to that would be a good thing. Then we could say okay Austin, you’ve lost these pounds, you’ve earned this much money. In fact that’s one of things that I’ve thought about being able to do is provide … as we get more involved in this process, be able to provide small amount for pounds lost to just … so that he can see getting closer to that goal.  Hmm. I really do think this was done nicely. I mean, I liked all the components. I liked the FaceTime. The iPad, that was a nice incentive to, you know, if you continue tracking then you can play the games on there. I thought for this particular age group that was probably really good. You know, I thought the FitBit was a great idea. Um, so yeah, that was all. We wish we could keep going. Would you do this program again?  Yeah!  Mm-Hmm. Yeah. I think it was very worthwhile.  Yes, I love it!  Oh yeah. Perfect time. Definitely we needed. I didn't know I could be so involved I guess and have a hold on what he ate like I can and I needed to know that. He was this way pretty much because it's cheaper to have him eat at school. It was easier to have him eat at school but I didn't like him being overweight. 
It just really helped me be aware of what I could do. It will make a whole big difference in his life because now he'll be able to do the activities that when he was too heavy he couldn't breathe and that kind of stuff.  Yeah we’d do it again. Overall we liked it, it was just kind of hard. There was nothing really hard but it was just remembering to do it. That is what was hard. But having her finally lose some weight really made the hassle worth it.  Mm-hmm (affirmative), I would. I want people to embrace it because I’m glad that somebody’s now aware that special needs children, their waistline is way too big. For one, they take lots of medicine. That’s affecting their health and weight. I’m glad somebody’s realizing that and wanting to do something about it. I give it A+.  Yes, I would do anything for his health. I think it was a good program for the same reasons, because once they get to be an adult they’re free to eat whatever they want. This kind of gave them some guidelines, some limits. Then they were full.  I would and my schedule is so crazy that there’s not a whole lot of extra things that I’d take on. Weight reduction and eating healthy is one of the things that is very important to us. The tools that you’ve shared have been very helpful in continuing that program. I like it, I really did and I told several people, family and friends about it. I bragged on you guys. If it had been something that would have been burdensome for us or if it had been something that we wouldn’t have been able to do for whatever reason, I would have had to bow out because we just don’t have that kind of flexibility. The fact that you were … you did a great job too. You worked with our schedule. You provided information. I was able to ask questions about different things. Yeah, I would do it again. I love the program. I love that you guys put … added the technology part of it. 
I think providing the kids with the iPad and feeling comfortable that that was a part of the program was huge. I think it was just a well designed, usable study. Even though we didn’t have the maximum weight loss for a variety of reasons, it’s still a wonderful thing for us to be able to continue on with. I really liked it. I liked it a lot.  Yeah. Definitely. This - the food. You made it so easy. And, um, that was - the iPad and everything else. It was great. Parent Involvement How did your physical activity level change as a result of your participation in this program?  Um, maybe a little bit? But…not much  Well, just going to her games! (laughs). I had to become a little bit more active! I can’t say…um…I guess… I guess I’ve become a little more active. But I’m still not active as I should be… cause I’m so active at work. I guess when I get home I’m just… I’m just tired. But I will…well, I guess I would say to some degree “yeah” cause I notice like when we go to the mall. Like I will purposely walk the mall…. like way more than I probably should. Um covering…like normally I’m like “uh, let’s just stop..” but now I’m like you know…I’m all…in the back of my head I’m always thinking about calories burning… not for myself but for her.. yeah. Cause I’m like ‘I know she’s going to want this good sandwich…I’m going to walk around’.. (laughs)  Maybe I wouldn't have joined the gym, probably not because...I have taken more of a focus on accomplishing professionally rather than focusing everything on Spencer and so yeah. We joined the gym and so that's good because I feel better too. We go to the gym on weekends. We haven't gone...when he wasn't going to school we went during the week but now that school is back in session, Tammy has got him going and...Single moms have a lot to do.  Not really because I tend to exercise. We have a dog so I walk the dog every day or I go to an exercise class a couple times a week. For me my exercise didn't change. 
I just encouraged her to come along  The other thing that’s changed with us we’ve increased in our activities. Now Eugene has always been physical, because he exercises more than us. That’s why he’s skinnier than us. I see now we want to go walking with him because he walks two times a day, three times a day. I would just sit at home. Now when I see him going walking, I want to go walking with him I would say activity’s increased with that, with me. I’m more mindful of what I’m putting in my mouth. Then right before I got on the program, I started Weight Watchers too. I think it is working.  No. Maybe had it been springtime, because I like to walk we would have gotten more.  Not a whole lot for me. The things that we did at home that involved the weights and some of the movements associated with that, I might have done a little bit of that with him. Part of my goal was to say to him, “Austin, it’s time to do your exercise,” and him do it without my involvement. I don’t swim. I don’t play basketball. I don’t do any of those activities.  Well, I already work out 6 times a week, so it was probably the same How did your diet change as a result of this program?  Uh, it did….I think portion size. It made me more aware of portion sizes. I found myself wanting to set a better example for Emily. So I was getting me and Emily more vegetables. And kind of being aware of … that I was getting enough of my servings a day also. So those would probably be the two biggest things.  Um…No…cause I mean we…my husband eats healthy so it fell right in line with what…we still have our healthy days and then we have our days when we just kind of fall off… or when we eat just what we want. Um…so on those days, I noticed I had to be more conscientious of making sure, you know, I did have an “alternative”. More vegetables. So I had to make more vegetables for… because usually it’s just like my husband is just like “well, get me tons of this” or “tons of that”. 
So now it’s just like “ok”. It was just a little bit more of the same  Probably not. My dad was a gardener and he always had half an acre of garden and so I was raised on all kinds of vegetables and I just keep eating this way.  My diet did change some. Some days I was bad with her but I did think more about what we were eating and what the content was. Yeah we tried to not go out to eat quite so much, tried not to do so much fast food. We tried to put more protein, chicken and things like that in the diet and definitely more fruits and vegetables. Every time she said can I have a snack I'd say how about some fruit. I'm a big fruit person anyway so we just incorporated that in more and just putting vegetables and fruit out before we even start dinner so that we could eat those first and then fill up a little bit.  Of course, I’m making vegetables for the whole family. We ate more fruits and vegetables, drank more water.  For me, we did the same. We increased fruits and vegetables. Increased water intake. Less visits, because we are an eat-out family. We totally changed that. If you didn’t have us in the program right now we would be at a restaurant eating. That’s what we do for pastime. That’s changed a lot. Finally, we had gotten Gene on board to understand that we need to have other things to celebrate family, because he sees it as bonding time. We’re doing other things now. Going to church is bonding time or visiting a museum; doing something other than eating.  Because we’ve both been working on healthy eating and we have to have things in the house that are going to meet that requirement for both of us, I actually ended up losing some weight. I probably paid a lot more attention, but I did not try any of those prepackaged meals. I did have shakes with him.  Probably not my diet, but my portion sizes did. 
work_r7e5mlwd2ndnhm76vmjf4zyjby ---- Deep Photo: Model-Based Photograph Enhancement and Viewing

Johannes Kopf (University of Konstanz), Boris Neubert (University of Konstanz), Billy Chen (Microsoft), Michael Cohen (Microsoft Research), Daniel Cohen-Or (Tel Aviv University), Oliver Deussen (University of Konstanz), Matt Uyttendaele (Microsoft Research), Dani Lischinski (The Hebrew University)

Figure 1: Some of the applications of the Deep Photo system. (Panels: Original, Dehazed, Relighted, Annotated.)

Abstract

In this paper, we introduce a novel system for browsing, enhancing, and manipulating casual outdoor photographs by combining them with already existing georeferenced digital terrain and urban models. A simple interactive registration process is used to align a photograph with such a model. Once the photograph and the model have been registered, an abundance of information, such as depth, texture, and GIS data, becomes immediately available to our system. This information, in turn, enables a variety of operations, ranging from dehazing and relighting the photograph, to novel view synthesis, and overlaying with geographic information. We describe the implementation of a number of these applications and discuss possible extensions. Our results show that augmenting photographs with already available 3D models of the world supports a wide variety of new ways for us to experience and interact with our everyday snapshots.

Keywords: image-based modeling, image-based rendering, image completion, dehazing, relighting, photo browsing

1 Introduction

Despite the increasing ubiquity of digital photography, the metaphors we use to browse and interact with our photographs have not changed much. With few exceptions, we still treat them as 2D entities, whether they are displayed on a computer monitor or printed as a hard copy. It is well understood that augmenting a photograph with depth can open the way for a variety of new exciting manipulations.
However, inferring the depth information from a single image that was captured with an ordinary camera is still a long-standing unsolved problem in computer vision. Luckily, we are witnessing a great increase in the number and the accuracy of geometric models of the world, including terrain and buildings. By registering photographs to these models, depth becomes available at each pixel. The Deep Photo system described in this paper consists of a number of applications afforded by these newfound depth values, as well as the many other types of information that are typically associated with such models.

Deep Photo is motivated by several recent trends now reaching critical mass. The first trend is that of geo-tagged photos. Many photo sharing web sites now enable users to manually add location information to photos. Some digital cameras, such as the RICOH Caplio 500SE and the Nokia N95, feature a built-in GPS, allowing automatic location tagging. Also, a number of manufacturers offer small GPS units that allow photos to be easily geo-tagged by software that synchronizes the GPS log with the photos. In addition, location tags can be enhanced by digital compasses that are able to measure the orientation (tilt and heading) of the camera. It is expected that, in the future, more cameras will have such functionality, and that most photographs will be geo-tagged.

The second trend is the widespread availability of accurate digital terrain models, as well as detailed urban models. Thanks to commercial projects, such as Google Earth and Microsoft's Virtual Earth, both the quantity and the quality of such models is rapidly increasing. In the public domain, NASA provides detailed satellite imagery (e.g., Landsat [NASA 2008a]) and elevation models (e.g., Shuttle Radar Topography Mission [NASA 2008b]). Also, a number of cities around the world are creating detailed 3D models of their cityscape (e.g., Berlin 3D).
The combination of geo-tagging and the availability of fairly accurate 3D models allows many photographs to be precisely geo-registered. We envision that in the near future automatic geo-registration will be available as an online service. Thus, although we briefly describe the simple interactive geo-registration technique that we currently employ, the emphasis of this paper is on the applications that it enables, including:

• dehazing (or adding haze to) images,
• approximating changes in lighting,
• novel view synthesis,
• expanding the field of view,
• adding new objects into the image,
• integration of GIS data into the photo browser.

Our goal in this work has been to enable these applications for single outdoor images, taken in a casual manner without requiring any special equipment or any particular setup. Thus, our system is applicable to a large body of existing outdoor photographs, so long as we know the rough location where each photograph was taken. We chose New York City and Yosemite National Park as two of the many locations around the world, for which detailed textured models are already available¹. We demonstrate our approach by combining a number of photographs (obtained from flickr™) with these models.

It should be noted that while the models that we use are fairly detailed, they are still a far cry from the degree of accuracy and the level of detail one would need in order to use these models directly to render photographic images. Thus, one of our challenges in this work has been to understand how to best leverage the 3D information afforded by the use of these models, while at the same time preserving the photographic qualities of the original image.

In addition to exploring the applications listed above, this paper also makes a number of specific technical contributions. The two main ones are a new data-driven stable dehazing procedure, and a new model-guided layered depth image completion technique for novel view synthesis.
Before continuing, we should note some of the limitations of Deep Photo in its current form. The examples we show are of outdoor scenes. We count on the available models to describe the distant static geometry of the scene, but we cannot expect to have access to the geometry of nearby (and possibly dynamic) foreground objects, such as people, cars, trees, etc. In our current implementation such foreground objects are matted out before combining the rest of the photograph with a model, and may be composited back onto the photograph at a later stage. So, for some images, the user must spend some time on interactive matting, and the fidelity of some of our manipulations in the foreground may be reduced. That said, we expect the kinds of applications we demonstrate will scale to include any improvements in automatic computer vision algorithms and depth acquisition technologies.

¹ For Yosemite, we use elevation data from the Shuttle Radar Topography Mission [NASA 2008b] with Landsat imagery [NASA 2008a]. Such data is available for the entire Earth. Models similar to that of NYC are currently available for dozens of cities.

2 Related Work

Our system touches upon quite a few distinct topics in computer vision and computer graphics; thus, a comprehensive review of all related work is not feasible due to space constraints. Below, we attempt to provide some representative references, and discuss in detail only the ones most closely related to our goals and techniques.

Image-based modeling. In recent years, much work has been done on image-based modeling techniques, which create high quality 3D models from photographs. One example is the pioneering Façade system [Debevec et al. 1996], designed for interactive modeling of buildings from collections of photographs. Other systems use panoramic mosaics [Shum et al. 1998], combine images with range data [Stamos and Allen 2000], or merge ground and aerial views [Früh and Zakhor 2003], to name a few.
Any of these approaches may be used to create the kinds of textured 3D models that we use in our system; however, in this work we are not concerned with the creation of such models, but rather with the ways in which their combination with a single photograph may be useful for the casual digital photographer. One might say that rather than attempting to automatically or manually reconstruct the model from a single photo, we exploit the availability of digital terrain and urban models, effectively replacing the difficult 3D reconstruction/modeling process by a much simpler registration process.

Recent research has shown that various challenging tasks, such as image completion and insertion of objects into photographs [Hays and Efros 2007; Lalonde et al. 2007] can greatly benefit from the availability of the enormous amounts of photographs that had already been captured. The philosophy behind our work is somewhat similar: we attempt to leverage the large amount of textured geometric models that have already been created. But unlike image databases, which consist mostly of unrelated items, the geometric models we use are all anchored to the world that surrounds us.

Dehazing. Weather and other atmospheric phenomena, such as haze, greatly reduce the visibility of distant regions in images of outdoor scenes. Removing the effect of haze, or dehazing, is a challenging problem, because the degree of this effect at each pixel depends on the depth of the corresponding scene point. Some haze removal techniques make use of multiple images; e.g., images taken under different weather conditions [Narasimhan and Nayar 2003a], or with different polarizer orientations [Schechner et al. 2003]. Since we are interested in dehazing single images, taken without any special equipment, such methods are not suitable for our needs. There are several works that attempt to remove the effects of haze, fog, etc., from a single image using some form of depth information.
For example, Oakley and Satherley [1998] dehaze aerial imagery using estimated terrain models. However, their method involves estimating a large number of parameters, and the quality of the reported results is unlikely to satisfy today's digital photography enthusiasts. Narasimhan and Nayar [2003b] dehaze single images based on a rough depth approximation provided by the user, or derived from satellite orthophotos. The very latest dehazing methods [Fattal 2008; Tan 2008] are able to dehaze single images by making various assumptions about the colors in the scene.

Our work differs from these previous single image dehazing methods in that it leverages the availability of more accurate 3D models, and uses a novel data-driven dehazing procedure. As a result, our method is capable of effective, stable high-quality contrast restoration even of extremely distant regions.

Novel view synthesis. It has been long recognized that adding depth information to photographs provides the means to alter the viewpoint. The classic "Tour Into the Picture" system [Horry et al. 1997] demonstrates that fitting a simple mesh to the scene is sometimes enough to enable a compelling 3D navigation experience. Subsequent papers, Kang [1998], Criminisi et al. [2000], Oh et al. [2001], Zhang et al. [2002], extend this by providing more sophisticated, user-guided 3D modelling techniques. More recently Hoiem et al. [2005] use machine learning techniques in order to construct a simple "pop-up" 3D model, completely automatically from a single photograph. In these systems, despite the simplicity of the models, the 3D experience can be quite compelling.

In this work, we use already available 3D models in order to add depth to photographs. We present a new model-guided image completion technique that enables us to expand the field of view and to perform high-quality novel view synthesis.

Relighting.
A number of sophisticated relighting systems have been proposed by various researchers over the years (e.g., [Yu and Malik 1998; Yu et al. 1999; Loscos et al. 2000; Debevec et al. 2000]). Typically, such systems make use of a highly accurate geometric model, and/or a collection of photographs, often taken under different lighting conditions. Given this input they are often able to predict the appearance of a scene under novel lighting conditions with a very high degree of accuracy and realism. Another alternative is to use a time-lapse video sequence [Sunkavalli et al. 2007].

In our case, we assume the availability of a geometric model, but have just one photograph to work with. Furthermore, although the model might be detailed, it is typically quite far from a perfect match to the photograph. For example, a tree casting a shadow on a nearby building will typically be absent from our model. Thus, we cannot hope to correctly recover the reflectance at each pixel of the photograph, which is necessary in order to perform physically accurate relighting. Therefore, in this work we propose a very simple relighting approximation, which is nevertheless able to produce fairly compelling results.

Photo browsing. Also related is the "Photo Tourism" system [Snavely et al. 2006], which enables browsing and exploring large collections of photographs of a certain location using a 3D interface. But, the browsing experience that we provide is very different. Moreover, in contrast to "Photo Tourism", our system requires only a single geo-tagged photograph, making it applicable even to locations without many available photos. The "Photo Tourism" system also demonstrates the transfer of annotations from one registered photograph to another. In Deep Photo, photographs are registered to a model of the world, making it possible to tap into a much richer source of information.

Working with geo-referenced images.
Once a photo is registered to geo-referenced data such as maps and 3D models, a plethora of information becomes available. For example, Cho [Cho 2007] notes that absolute geo-locations can be assigned to individual pixels and that GIS annotations, such as building and street names, may be projected onto the image plane. Deep Photo supports similar labeling, as well as several additional visualizations, but in contrast to Cho's system, it does so dynamically, in the context of an interactive photo browsing application. Furthermore, as discussed earlier, it also enables a variety of other applications.

In addition to enhancing photos, location is also useful in organizing and visualizing photo collections. The system developed by Toyama et al. [2003] enables a user to browse large collections of geo-referenced photos on a 2D map. The map serves as both a visualization device, as well as a way to specify spatial queries, i.e., all photos within a region. In contrast, Deep Photo focuses on enhancing and browsing of a single photograph; the two systems are actually complementary, one focusing on organizing large photo collections, and the other on enhancing and viewing single photographs.

3 Registration and Matting

We assume that the photograph has been captured by a simple pinhole camera, whose parameters consist of position, pose, and focal length (seven parameters in total). To register such a photograph to a 3D geometric model of the scene, it suffices to specify four or more corresponding pairs of points [Gruen and Huang 2001]. Assuming that the rough position from which the photograph was taken is available (either from a geotag, or provided by the user), we are able to render the model from roughly the correct position, let the user specify sufficiently many correspondences, and recover the parameters by solving a nonlinear system of equations [Nister and Stewenius 2007].
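The recovery of the seven camera parameters can be illustrated as a small nonlinear least-squares problem. The sketch below is not the authors' implementation (which follows [Nister and Stewenius 2007]); it assumes a simple Euler-angle rotation parameterization and refines a rough initial guess (e.g., from the geotag) with Gauss-Newton iterations and a numeric Jacobian:

```python
import numpy as np

def rotation(yaw, pitch, roll):
    """World-to-camera rotation from three Euler angles (an assumed parameterization)."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    return Rz @ Ry @ Rx

def project(params, X):
    """Project Nx3 world points with a 7-parameter pinhole camera:
    position (3), orientation (3), focal length (1)."""
    C, angles, f = params[:3], params[3:6], params[6]
    Xc = (X - C) @ rotation(*angles).T      # points in camera coordinates
    return f * Xc[:, :2] / Xc[:, 2:3]       # perspective division

def register(params0, X, uv, iters=50, damping=1e-6):
    """Gauss-Newton refinement so that the model points X project onto the
    user-specified image points uv (four or more correspondences)."""
    p = params0.astype(float).copy()
    for _ in range(iters):
        r = (project(p, X) - uv).ravel()    # reprojection residuals
        J = np.empty((r.size, 7))
        for k in range(7):                  # forward-difference Jacobian
            dp = np.zeros(7)
            dp[k] = 1e-6 * max(1.0, abs(p[k]))
            J[:, k] = ((project(p + dp, X) - uv).ravel() - r) / dp[k]
        H = J.T @ J + damping * np.eye(7)
        p -= np.linalg.solve(H, J.T @ r)
    return p
```

With four or more well-spread correspondences and a reasonable initial position, this kind of refinement converges to sub-pixel reprojection error; a production system would use a robust minimal solver plus bundle adjustment instead.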
The details and user interface of our registration system are described in a technical report [Chen et al. 2008].

For images that depict foreground objects not contained in the model, we ask the user to matte out the foreground. For the applications demonstrated in this paper the matte does not have to be too accurate, so long as it is conservative (i.e., all the foreground pixels are contained). We created mattes with the Soft Scissors system [Wang et al. 2007]. The process took about 1-2 minutes per photo. For every result produced using a matte we show the matte next to the input photograph.

4 Image Enhancement

Many of the typical images we take are of a spectacular, often well known, landscape or cityscape. Unfortunately in many cases the lighting conditions or the weather are not optimal when the photographs are taken, and the results may be dull or hazy. Having a sufficiently accurate match between a photograph and a geometric model offers new possibilities for enhancing such photographs. We are able to easily remove haze and unwanted color shifts and to experiment with alternative lighting conditions.

4.1 Dehazing

Atmospheric phenomena, such as haze and fog can reduce the visibility of distant regions in images of outdoor scenes. Due to atmospheric absorption and scattering, only part of the light reflected from distant objects reaches the camera. Furthermore, this light is mixed with airlight (scattered ambient light between the object and camera). Thus, distant objects in the scene typically appear considerably lighter and featureless, compared to nearby ones.

If the depth at each image pixel is known, in theory it should be easy to remove the effects of haze by fitting an analytical model (e.g., [McCartney 1976; Nayar and Narasimhan 1999]):

Ih = Io f(z) + A (1 − f(z)).   (1)

Here Ih is the observed hazy intensity at a pixel, Io is the original intensity reflected towards the camera from the corresponding scene point, A is the airlight, and f(z) = exp(−βz) is the attenuation in intensity as a function of distance due to outscattering. Thus, after estimating the parameters A and β the original intensity may be recovered by inverting the model:

Io = A + (Ih − A) / f(z).   (2)

Figure 2: Dehazing. Note the artifacts in the model texture, and the significant deviation of the estimated haze curves from exponential shape. (Panels: input, model textures, estimated haze curves f(z) plotted as intensity versus depth, final dehazed result.)

Figure 3: More dehazing examples. (Panels: input and dehazed pairs.)

As pointed out by Narasimhan and Nayar [2003a], this model assumes single-scattering and a homogeneous atmosphere. Thus, it is more suitable for short ranges of distance and might fail to correctly approximate the attenuation of scene points that are more than a few kilometers away. Furthermore, since the exponential attenuation goes quickly down to zero, noise might be severely amplified in the distant areas. Both of these artifacts may be observed in the "inversion result" of Figure 4. While reducing the degree of dehazing [Schechner et al. 2003] and regularization [Schechner and Averbuch 2007; Kaftory et al. 2007] may be used to alleviate these problems, our approach is to estimate stable values for the haze curve f(z) directly from the relationship between the colors in the photograph and those of the model textures. More specifically, we compute a curve f(z) and an airlight A, such that eq. (2) would map averages of colors in the photograph to the corresponding averages of (color-corrected) model texture colors. Note that although our f(z) has the same physical interpretation as in the previous approaches, due to our estimation process it is not subject to the constraints of a physically-based model.
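The analytic model of eqs. (1)-(2) is simple to express in code. The following is an illustrative sketch (not part of the original system), assuming per-pixel depth in meters and image intensities normalized to [0, 1]:

```python
import numpy as np

def add_haze(I_o, z, A=1.0, beta=1e-4):
    """Eq. (1): attenuate the scene radiance and mix in airlight.
    I_o: clear image (H x W or H x W x 3); z: per-pixel depth in meters."""
    f = np.exp(-beta * z)
    if I_o.ndim == 3:
        f = f[..., None]                    # broadcast over color channels
    return I_o * f + A * (1.0 - f)

def remove_haze(I_h, z, A=1.0, beta=1e-4):
    """Eq. (2): invert the haze model given airlight A and optical depth beta."""
    f = np.exp(-beta * z)
    if I_h.ndim == 3:
        f = f[..., None]
    return A + (I_h - A) / f
```

The roundtrip is exact, but note how the factor 1/f(z) blows up as z grows; this is precisely the noise amplification at large distances that motivates the data-driven estimation of f(z) described next.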
Since we estimate a single curve to represent the possibly spatially varying haze it can also contain non-monotonicities. All of the parameters are estimated completely automatically.

For robustness, we operate on averages of colors over depth ranges. For each value of z, we compute the average model texture color Îm(z) for all pixels whose depth is in [z − δ, z + δ], as well as the average hazy image color Îh(z) for the same pixels. In our implementation, the depth interval parameter δ is set to 500 meters, for all images we experimented with. The averaging makes our approach less sensitive to model texture artifacts, such as registration and stitching errors, bad pixels, or contained shadows and clouds.

Before explaining the details of our method, we would like to point out that the model textures typically have a global color bias. For example, Landsat uses seven sensors whose spectral responses differ from the typical RGB camera sensors. Thus, the colors in the resulting textures are only an approximation to ones that would have been captured by a camera (see Figure 2). We correct this color bias by measuring the ratio between the photo and the texture colors in the foreground (in each channel), and using these ratios to correct the colors of the entire texture. More precisely, we compute a global multiplicative correction vector C as

C = (Fh / lum(Fh)) / (Fm / lum(Fm)),   (3)

where Fh is the average of Îh(z) with z < zF, and Fm is a similarly computed average of the model texture. lum(c) denotes the luminance of a color c. We set zF to 1600 meters for all our images.

Figure 4: Comparison with other dehazing methods. The second row shows full-resolution zooms of the region indicated with a red rectangle in the input photo. (Panels: input, Fattal's result, inversion result, our result.) See the supplementary materials for more comparison images.

Now we are ready to explain how to compute the haze curve f(z).
Ignoring for the moment the physical interpretation of A and f(z), note that eq. (2) simply stretches the intensities of the image around A, using the scale coefficient f(z)⁻¹. Our goal is to find A and f(z) that would map the hazy photo colors Îh(z) to the color-corrected texture colors C Îm(z). Substituting Îh(z) for Ih, and C Îm(z) for Io, in eq. (2) we get

f(z) = (Îh(z) − A) / (C Îm(z) − A).   (4)

Different choices of A will result in different scaling curves f(z). We set A = 1 since this guarantees f(z) ≥ 0. Using A > 1 would result in larger values of f(z), and hence less contrast in the dehazed image, and using A < 1 might be prone to instabilities. Figure 2 shows the f(z) curve estimated as described above.

The recovered haze curve f(z) allows us to effectively restore the contrasts in the photo. However, the colors in the background might undergo a color shift. We compensate for this by adjusting A, while keeping f(z) fixed, such that after the change the dehazing preserves the colors of the photo in the background.

To adjust A, we first compute the average background color Bh of the photo as the average of Îh(z) with z > zB, and a similarly computed average of the model texture Bm. We set zB to 5000 m for all our images. The color of the background is preserved if the ratio

R = (A + (Bh − A) · f⁻¹) / Bh,   where f = (Bh − 1) / (Bm − 1),   (5)

has the same value for every color channel. Thus, we rewrite eq. (5) to obtain A as

A = Bh (R − f⁻¹) / (1 − f⁻¹),   (6)

and set R = max(Bm,red / Bh,red, Bm,green / Bh,green, Bm,blue / Bh,blue). This particular choice of R results in the maximum A that guarantees A ≤ 1. Finally, we use eq. (2) with the recovered f(z) and the adjusted A to dehaze the photograph.

Figures 2 and 3 show various images dehazed with our method. Figure 4 compares our method with other approaches. In this comparison we focused on methods that are applicable in our context of working with a single image only.
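The estimation above can be sketched as follows. The sketch is illustrative rather than the authors' implementation: it uses non-overlapping depth bins instead of the sliding [z − δ, z + δ] windows, a channel-mean proxy for lum(·), and assumes photo, texture, and depth have already been restricted to registered, non-matted pixels:

```python
import numpy as np

def estimate_haze_curve(photo, texture, depth, delta=500.0, zF=1600.0, zB=5000.0):
    """Estimate the haze curve f(z) and airlight A from a hazy photo and the
    registered model texture.  photo, texture: N x 3 pixel arrays in [0, 1];
    depth: N distances in meters.  Returns (bin_edges, f, A, C)."""
    lum = lambda c: c.mean(-1, keepdims=True)          # simple luminance proxy

    # eq. (3): global color correction of the texture, from foreground pixels
    Fh = photo[depth < zF].mean(0)
    Fm = texture[depth < zF].mean(0)
    C = (Fh / lum(Fh)) / (Fm / lum(Fm))
    tex = texture * C

    # eq. (4) with A = 1: per-depth-bin color averages give haze curve samples
    edges = np.arange(0.0, depth.max() + 2 * delta, 2 * delta)
    bins = np.clip(np.digitize(depth, edges) - 1, 0, len(edges) - 2)
    f = np.ones((len(edges) - 1, 3))
    for b in range(len(edges) - 1):
        m = bins == b
        if m.any():
            f[b] = (photo[m].mean(0) - 1.0) / (tex[m].mean(0) - 1.0)

    # eqs. (5)-(6): adjust A so the background colors are preserved
    back = depth > zB
    Bh, Bm = photo[back].mean(0), tex[back].mean(0)
    fb = (Bh - 1.0) / (Bm - 1.0)
    R = np.max(Bm / Bh)
    A = Bh * (R - 1.0 / fb) / (1.0 - 1.0 / fb)
    return edges, f, A, C

def dehaze(photo, depth, edges, f, A):
    """Eq. (2) with the estimated per-bin haze curve."""
    bins = np.clip(np.digitize(depth, edges) - 1, 0, len(f) - 1)
    return A + (photo - A) / f[bins]
```

Because each f(z) sample is a ratio of bin averages, single bad texture pixels or stitching seams have little effect, which is the stability argument made in the text.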
Fattal's method [2008] dehazes the image nicely up to a certain distance (particularly considering that this method does not require any input in addition to the image itself), but it is unable to effectively dehaze the more distant parts, closer to the horizon. The "Inversion Result" was obtained via eq. (2) with an exponential haze curve. This is how dehazing was performed in a number of papers, e.g., [Schechner et al. 2003; Narasimhan and Nayar 2003a; Narasimhan and Nayar 2003b]. Here, we use our accurate depth map instead of using multiple images or user-provided depth approximations. The airlight color was set to the sky color near the horizon, and the optical depth β was adjusted manually. The result suffers from amplified noise in the distance, and breaks down next to the horizon. In contrast, our result manages to remove more haze than the two other approaches, while preserving the natural colors of the input photo.

Note that in practice one might not want to remove the haze completely as we have done, because haze sometimes provides perceptually significant depth cues. Also, dehazing typically amplifies some noise in regions where little or no visible detail remain in the original image. Still, almost every image benefits from some degree of dehazing.

Having obtained a model for the haze in the photograph we can insert new objects into the scene in a more seamless fashion by applying the model to these objects as well (in accordance with the depth they are supposed to be at). This is done simply by inverting eq. (2):

Ih = A + (Io − A) f(z).   (7)

This is demonstrated in the companion video.

4.2 Relighting

One cannot underestimate the importance of the role that lighting plays in the creation of an interesting photograph. In particular, in landscape photography, the vast majority of breathtaking photographs are taken during the "golden hour", after sunrise, or before sunset [Reichmann 2001].
Unfortunately most of our outdoor snapshots are taken under rather boring lighting. With Deep Photo it is possible to modify the lighting of a photograph, approximating what the scene might look like at another time of day.

Figure 5: Relighting results produced with our system. (Panels: input photographs and relighted versions.)

Figure 6: A comparison between the original photo, its relighted version, and a rendering of the underlying model under the same illumination. (Panels: original, relighted, lit model.)

As explained earlier, our goal is to work on single images, augmented with a detailed, yet not completely accurate geometric model of the scene. This setup does not allow us to correctly recover the reflectance at each pixel. Thus, we use the following simple workflow, which only approximates the appearance of lighting changes in the scene. We begin by dehazing the image, as described in the previous section, and modulate the colors using a lightmap computed for the novel lighting. The original sky is replaced by a new one simulating the desired time of day (we use Vue 6 Infinite [E-on Software 2008] to synthesize the new sky). Finally, we add haze back in using Eq. (7), after multiplying the haze curves f(z) by a global color mood transfer coefficient.

The global color mood transfer coefficient LG is computed for each color channel. Two sky domes are computed, one corresponding to the actual (known or estimated) time of day the photograph was taken, and the other corresponding to the desired sun position. Let Iref and Inew be the average colors of the two sky domes. The color mood transfer coefficients are then given by LG = Inew / Iref.

The lightmap may be computed in a variety of ways. Our current implementation offers the user a set of controls for various aspects of the lighting, including atmosphere parameters, diffuse and ambient colors, etc.
We then compute the lightmap with a simple local shading model and scale it by the color mood coefficient:

L = LG · LS · (LA + LD · (n · l)),   (8)

where LS ∈ [Ishadow, 1] is the shadow coefficient that indicates the amount of light attenuation due to shadows, LA is the ambient coefficient, LD is the diffuse coefficient, n the point normal, and l the direction to the sun. The final result is obtained simply by multiplying the image by L.

Note that we do not attempt to remove the existing illumination before applying the new one. However, we found even this basic procedure yields convincing changes in the lighting (see Figure 5, and the dynamic relighting sequences in the video). Figure 6 demonstrates that relighting a geo-registered photo generates a completely different (and more realistic) effect than simply rendering the underlying geometric model under the desired lighting.

5 Novel View Synthesis

One of the compelling features of Deep Photo is the ability to modify the viewpoint from which the original photograph was taken. Bringing the static photo to life in this manner significantly enhances the photo browsing experience, as shown in the companion video.

Assuming that the photograph has been registered with a sufficiently accurate geometric model of the scene, the challenge in changing the viewpoint is reduced to completing the missing texture in areas that are either occluded, or are simply outside the original view frustum. We use image completion [Efros and Leung 1999; Drori et al. 2003] to fill the missing areas with texture from other parts of the photograph. Our image completion process is similar to texture-by-numbers [Hertzmann et al. 2001], where instead of a hand-painted label map we use a guidance map derived from the textures of the 3D model. In rural areas these are typically aerial images of the terrain, while in urban models these are the texture maps of the buildings.
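Returning to the relighting step, eq. (8) amounts to a per-pixel Lambertian shading term scaled by the color mood coefficient. A minimal sketch follows (illustrative only; the clamping of back-facing normals is our assumption, and the parameter values in the comments are placeholders, not the paper's):

```python
import numpy as np

def lightmap(normals, sun_dir, L_G, L_S, L_A=0.3, L_D=0.7):
    """Eq. (8): per-pixel lightmap L = LG * LS * (LA + LD * (n . l)).
    normals: H x W x 3 unit normals; sun_dir: unit vector towards the sun;
    L_G: per-channel color mood coefficients (3,); L_S: H x W shadow map
    with values in [I_shadow, 1]."""
    ndotl = np.clip(normals @ sun_dir, 0.0, None)   # clamp back-facing normals
    shade = L_S * (L_A + L_D * ndotl)               # scalar shading, H x W
    return shade[..., None] * L_G                   # colored lightmap, H x W x 3

# The relighted (dehazed) image is then simply: relit = dehazed * lightmap(...)
```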
The texture is synthesized over a cylindrical layered depth image (LDI) [Shade et al. 1998], centered around the original camera position. The LDI image stores, for each pixel, the depths and normals of scene points intersected by the corresponding ray from the viewpoint. We use this data structure, since it is able to represent both the visible and the occluded parts of the scene (in our examples we used a LDI with four depth layers per pixel). The colors of the frontmost layer in each pixel are taken from the original photograph provided that they are inside the original view frustum, while the remaining colors are synthesized by our guided texture transfer.

We begin the texture transfer process by computing the guiding value for all of the layers at each pixel. The guiding value is a vector (U, V, D), where U and V are the chrominance values of the corresponding point in the model texture, and D is the distance to the corresponding scene point from the location of the camera. In our experiments, we tried various other features, including terrain normal, slope, height, and combinations thereof. We achieved the best results, however, with the relatively simple feature vector above. Including the distance D in the feature vector biases the synthesis towards generating textures at the correct scale. D is normalized so that distances from 0 to 5000 meters map to [0, 1]. We only include chrominance information in the feature vector (and not luminance) to alleviate problems associated with existing transient features such as shading and shadows in the model textures.

Texture synthesis is carried out in a multi-resolution manner. The first (coarsest) level is synthesized by growing the texture outwards from the known regions. For each unknown pixel we examine a square neighborhood around it, and exhaustively search for the best matching neighborhood from the known region (using the L2 norm).
Since our neighborhoods contain missing pixels we cannot apply PCA compression and other speed-up structures in a straightforward way. However, the first level is sufficiently coarse and its synthesis is rather fast. To synthesize each next level we upsample the result of the previous level and perform a small number of k-coherence synthesis passes [Ashikhmin 2001] to refine the result. Here we use a 5×5 look-ahead region and k = 4. The total synthesis time is about 5 minutes per image. The total texture size is typically on the order of 4800×1600 pixels, times four layers.

It should be noted that when working with LDIs the concept of a pixel's neighborhood must be adjusted to account for the existence of multiple depth layers at each pixel. We define the neighborhood in the following way: On each depth layer, a pixel has up to 8 pixels surrounding it. If the neighboring pixel has multiple depth layers, the pixel on the layer with the closest depth value is assigned as the immediate neighbor.

To render images from novel viewpoints, we use a shader to project the LDI image onto the geometric model by computing the distance of the model to the camera and using the pixel color from the depth layer closest to this distance. Significant changes in the viewpoint eventually cause texture distortions if one keeps using the texture from the photograph. To alleviate this problem, we blend the photograph's texture into the model's texture as the new virtual camera gets farther away from the original viewpoint. We found this to significantly improve the 3D viewing experience, even for drastic view changes, such as going to bird's eye view.

Figure 7: Extending the field of view. The red rectangle indicates the boundaries of the original photograph. The companion video demonstrates changing the viewpoint.
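The LDI neighborhood rule above (among a neighboring pixel's layers, the one closest in depth acts as the immediate neighbor; the same rule resolves which layer to sample at render time) can be written as a small helper. The list-of-layer-depths representation is assumed here for illustration:

```python
import numpy as np

def closest_layer(neighbor_depths, d):
    """Given the depth values of all layers stored at a neighboring LDI pixel,
    return the index of the layer whose depth is closest to d.  During
    synthesis this layer is treated as the immediate neighbor; during
    rendering the same rule picks the layer to sample for a model point at
    distance d."""
    depths = np.asarray(neighbor_depths, dtype=float)
    return int(np.argmin(np.abs(depths - d)))
```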
Thus, the texture color T at each terrain point x is given by

T(x) = g(x) T_photo(x) + (1 − g(x)) T_model(x),  (9)

where the blending factor g(x) is determined with respect to the current view, according to the following principles: (i) pixels in the original photograph which correspond to surfaces facing the camera are considered more reliable than those on oblique surfaces; and (ii) pixels in the original photograph are also preferred whenever the corresponding scene point is viewed from the same direction in the current view as it was in the original one. Specifically, let n(x) denote the surface normal, C_0 the original camera position from which the photograph was taken, and C_new the current camera position. Next, let v_0 = (C_0 − x)/‖C_0 − x‖ denote the normalized vector from the scene point to the original camera position, and similarly v_new = (C_new − x)/‖C_new − x‖. Then

g(x) = max(n(x)·v_0, v_new·v_0).  (10)

In other words, g is defined as the greater of the cosine of the angle between the normal and the original view direction, and the cosine of the angle between the two view directions.

Finally, we also apply re-hazing on-the-fly. First, we remove haze from the texture completely as described in Section 4.1. Then, we add haze back in, this time using the distances from the current camera position. The results may be seen in Figure 7 and in the video.

6 Information Visualization

Having registered a photograph with a model that has GIS data associated with it makes it possible to display various kinds of information about the scene while browsing the photograph. We have implemented a simple application that demonstrates several types of information visualization. In this application, the photograph is shown side by side with a top view of the model, referred to as the map view. The view frustum corresponding to the photograph is displayed in the map view, and is updated dynamically whenever the view is changed (as described in Section 5).
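Equations (9) and (10) can be sketched directly in code. This is a per-point illustration under stated assumptions: the function names are hypothetical, `normal` is assumed to be a unit vector, and clamping of g to [0, 1] (the dot products in Eq. (10) can be negative for back-facing geometry) is not addressed in the text, so it is omitted here.

```python
import numpy as np

def blend_weight(normal, x, c0, c_new):
    """Blending factor g(x) from Eq. (10).

    normal : unit surface normal n(x) at the terrain point
    x      : 3D position of the terrain point
    c0     : original camera position C_0
    c_new  : current (novel) camera position C_new
    """
    v0 = (c0 - x) / np.linalg.norm(c0 - x)          # direction to original camera
    v_new = (c_new - x) / np.linalg.norm(c_new - x)  # direction to current camera
    # Greater of: cosine between the normal and the original view direction,
    # and cosine between the two view directions.
    return max(np.dot(normal, v0), np.dot(v_new, v0))

def blend_texture(g, t_photo, t_model):
    """Texture color from Eq. (9): T = g * T_photo + (1 - g) * T_model."""
    return g * t_photo + (1.0 - g) * t_model
```

When the current camera coincides with the original one, v_new·v_0 = 1, so g = 1 and the photograph's texture is used unchanged, matching the intent of principle (ii).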
Moving the cursor in either of the two views highlights the corresponding location in the other view. In the map view, the user is able to switch between a street map, an orthographic photo, a combination thereof, etc. In addition to text labels, it is also possible to superimpose graphical map elements, such as roads, directly onto the photo view. These abilities are demonstrated in Figures 1 and 8 and in the companion video.

Figure 8: Different information visualization modes in our system. (a-b) Coupled map and photo views. As the user moves the mouse over one of the views, the corresponding location is shown in the other view as well. The profile of a horizontal scanline in the map view (a) is shown superimposed over the terrain in the photo view (b). Since the location of the mouse cursor is occluded by a mountain in the photo, its location in the photo view is indicated using semi-transparent arrows. (c) Names of landmarks are automatically superimposed on the photo. (d-e) Coupled photo and map views with superimposed street network. The streets under the mouse cursor are highlighted in both views.

There are various databases with geo-tagged media available on the web. We are able to highlight these locations in both views (photo and map). Of particular interest are geo-tagged Wikipedia articles about various landmarks. We display a small Wikipedia icon at such locations, which opens a browser window with the corresponding article when clicked. This is also demonstrated in the companion video.

Another useful visualization feature of our system is the ability to highlight the object under the mouse in the photo view. This can be useful, for example, when viewing night-time photographs: in an urban scene shot at night, the building under the cursor may be shown using daylight textures from the underlying model.

7 Discussion and Conclusions

We presented Deep Photo, a novel system for editing and browsing outdoor photographs.
It leverages the high-quality 3D models of the earth that are now becoming widely available. We have demonstrated that once a simple geo-registration of a photo is performed, the models can be used for many interesting photo manipulations, ranging from de- and rehazing and relighting to integrating GIS information. The applications we show are varied.

Haze removal is a challenging problem due to the fact that haze is a function of depth. We have shown that now that depth is available in a geo-registered photograph, excellent "haze editing" can be achieved. Similarly, having an underlying geometric model makes it possible to generate convincing relighted photographs, and to dynamically change the view. Finally, we demonstrate that the enormous wealth of information available online can now be used to annotate and help browse photographs.

Within our framework we used models obtained from Virtual Earth. The manual registration is done within a minute, and matting out the foreground is also an easy task using state-of-the-art techniques such as Soft Scissors [Wang et al. 2007]. All other operations such as dehazing and relighting run at interactive speeds; however, computing very detailed shadow maps for the relighting can be time consuming.

Figure 9: Failure cases: some of the described applications produce artifacts for badly registered (left) and/or insufficiently accurate models (right). In this case the dehazing application generated halos around misaligned depth edges because it used wrong depth values there. The same artifacts can be observed by zooming into the full images in Figures 2 and 3.

As can be expected, there are always some differences and misalignments between the photograph and the model. They may arise due to insufficiently accurate models, and also due to the fact that the photographs were not captured with an ideal pinhole camera.
Although they can lead to some artifacts (see Figure 9), we found that in many cases these differences are less problematic than one might fear. However, automatically resolving such differences is certainly a challenging and interesting topic for future work.

We believe that the applications presented here represent just a small fraction of possible geo-photo editing operations. Many of the existing digital photography products could be greatly enhanced with the use of geo information. Operations could encompass noise reduction and image sharpening with 3D model priors, post-capture refocusing, object recovery in under- or over-exposed areas, as well as illumination transfer between photographs.

GIS databases contain a wealth of information, of which we have used just a small amount. Water, grass, pavement, building materials, etc., can all potentially be automatically labeled and used to improve photo tone adjustment. Labels can be transferred automatically from one image to others. Again, having a single consistent 3D model for our photographs provides much more than just a depth value per pixel.

In this paper we mostly dealt with single images. Most of the applications that we demonstrated become even stronger when combining multiple input photos. A particularly interesting direction might be to combine Deep Photo with the Photo Tourism system. Once a Photo Tour is geo-registered, the coarse 3D information generated by Photo Tourism could be used to enhance online 3D data and vice versa. The information visualization and novel view synthesis applications we demonstrate here could be combined with the Photo Tourism viewer. This idea of fusing multiple images could even be extended to video that could be registered to the models.
Acknowledgements

This research was supported in part by grants from the following funding agencies: the Lion foundation, the GIF foundation, the Israel Science Foundation, and by DFG Graduiertenkolleg/1042 "Explorative Analysis and Visualization of Large Information Spaces" at University of Konstanz, Germany.

References

ASHIKHMIN, M. 2001. Synthesizing natural textures. Proceedings of the 2001 Symposium on Interactive 3D Graphics (I3D), 217–226.
CHEN, B., RAMOS, G., OFEK, E., COHEN, M., DRUCKER, S., AND NISTER, D. 2008. Interactive techniques for registering images to digital terrain and building models. Microsoft Research Technical Report MSR-TR-2008-115.
CHO, P. L. 2007. 3D organization of 2D urban imagery. Proceedings of the 36th Applied Imagery Pattern Recognition Workshop, 3–8.
CRIMINISI, A., REID, I. D., AND ZISSERMAN, A. 2000. Single view metrology. International Journal of Computer Vision 40, 2, 123–148.
DEBEVEC, P. E., TAYLOR, C. J., AND MALIK, J. 1996. Modeling and rendering architecture from photographs: A hybrid geometry- and image-based approach. Proceedings of SIGGRAPH '96, 11–20.
DEBEVEC, P., HAWKINS, T., TCHOU, C., DUIKER, H.-P., SAROKIN, W., AND SAGAR, M. 2000. Acquiring the reflectance field of a human face. Proceedings of SIGGRAPH 2000, 145–156.
DRORI, I., COHEN-OR, D., AND YESHURUN, H. 2003. Fragment-based image completion. ACM Transactions on Graphics (Proceedings of SIGGRAPH 2003) 22, 3, 303–312.
E-ON SOFTWARE, 2008. Vue 6 Infinite. http://www.e-onsoftware.com/products/vue/vue_6_infinite.
EFROS, A. A., AND LEUNG, T. K. 1999. Texture synthesis by non-parametric sampling. Proceedings of IEEE International Conference on Computer Vision (ICCV) '99 2, 1033–1038.
FATTAL, R. 2008. Single image dehazing. ACM Transactions on Graphics (Proceedings of SIGGRAPH 2008) 27, 3, 73.
FRÜH, C., AND ZAKHOR, A. 2003. Constructing 3D city models by merging aerial and ground views. IEEE Computer Graphics and Applications 23, 6, 52–61.
GRUEN, A., AND HUANG, T. S. 2001. Calibration and Orientation of Cameras in Computer Vision. Springer-Verlag, Secaucus, NJ, USA.
HAYS, J., AND EFROS, A. A. 2007. Scene completion using millions of photographs. ACM Transactions on Graphics (Proceedings of SIGGRAPH 2007) 26, 3, 4.
HERTZMANN, A., JACOBS, C. E., OLIVER, N., CURLESS, B., AND SALESIN, D. H. 2001. Image analogies. Proceedings of SIGGRAPH 2001, 327–340.
HOIEM, D., EFROS, A. A., AND HEBERT, M. 2005. Automatic photo pop-up. ACM Transactions on Graphics (Proceedings of SIGGRAPH 2005) 24, 3, 577–584.
HORRY, Y., ANJYO, K.-I., AND ARAI, K. 1997. Tour into the picture: using a spidery mesh interface to make animation from a single image. Proceedings of SIGGRAPH '97, 225–232.
KAFTORY, R., SCHECHNER, Y. Y., AND ZEEVI, Y. Y. 2007. Variational distance-dependent image restoration. Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2007, 1–8.
KANG, S. B. 1998. Depth painting for image-based rendering applications. Tech. rep., Compaq Cambridge Research Lab.
LALONDE, J.-F., HOIEM, D., EFROS, A. A., ROTHER, C., WINN, J., AND CRIMINISI, A. 2007. Photo clip art. ACM Transactions on Graphics (Proceedings of SIGGRAPH 2007) 26, 3, 3.
LOSCOS, C., DRETTAKIS, G., AND ROBERT, L. 2000. Interactive virtual relighting of real scenes. IEEE Transactions on Visualization and Computer Graphics 6, 4, 289–305.
MCCARTNEY, E. J. 1976. Optics of the Atmosphere: Scattering by Molecules and Particles. John Wiley and Sons, New York, NY, USA.
NARASIMHAN, S. G., AND NAYAR, S. K. 2003. Contrast restoration of weather degraded images. IEEE Transactions on Pattern Analysis and Machine Intelligence 25, 6, 713–724.
NARASIMHAN, S. G., AND NAYAR, S. K. 2003. Interactive (de)weathering of an image using physical models. IEEE Workshop on Color and Photometric Methods in Computer Vision.
NASA, 2008. The Landsat program. http://landsat.gsfc.nasa.gov/.
NASA, 2008. Shuttle radar topography mission. http://www2.jpl.nasa.gov/srtm/.
NAYAR, S. K., AND NARASIMHAN, S. G. 1999. Vision in bad weather. Proceedings of IEEE International Conference on Computer Vision (ICCV) '99, 820–827.
NISTER, D., AND STEWENIUS, H. 2007. A minimal solution to the generalised 3-point pose problem. Journal of Mathematical Imaging and Vision 27, 1, 67–79.
OAKLEY, J. P., AND SATHERLEY, B. L. 1998. Improving image quality in poor visibility conditions using a physical model for contrast degradation. IEEE Transactions on Image Processing 7, 2, 167–179.
OH, B. M., CHEN, M., DORSEY, J., AND DURAND, F. 2001. Image-based modeling and photo editing. Proceedings of ACM SIGGRAPH 2001, 433–442.
REICHMANN, M., 2001. The art of photography. http://www.luminous-landscape.com/essays/theartof.shtml.
SCHECHNER, Y. Y., AND AVERBUCH, Y. 2007. Regularized image recovery in scattering media. IEEE Transactions on Pattern Analysis and Machine Intelligence 29, 9, 1655–1660.
SCHECHNER, Y. Y., NARASIMHAN, S. G., AND NAYAR, S. K. 2003. Polarization-based vision through haze. Applied Optics 42, 3, 511–525.
SHADE, J., GORTLER, S., HE, L.-W., AND SZELISKI, R. 1998. Layered depth images. Proceedings of SIGGRAPH '98, 231–242.
SHUM, H.-Y., HAN, M., AND SZELISKI, R. 1998. Interactive construction of 3-D models from panoramic mosaics. Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 1998, 427–433.
SNAVELY, N., SEITZ, S. M., AND SZELISKI, R. 2006. Photo tourism: exploring photo collections in 3D. ACM Transactions on Graphics (Proceedings of SIGGRAPH 2006) 25, 3, 835–846.
STAMOS, I., AND ALLEN, P. K. 2000. 3-D model construction using range and image data. Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 1998, 531–536.
SUNKAVALLI, K., MATUSIK, W., PFISTER, H., AND RUSINKIEWICZ, S. 2007. Factored time-lapse video. ACM Transactions on Graphics (Proceedings of SIGGRAPH 2007) 26, 3, 101.
TAN, R. T. 2008. Visibility in bad weather from a single image.
Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2008, to appear.
TOYAMA, K., LOGAN, R., AND ROSEWAY, A. 2003. Geographic location tags on digital images. Proceedings of the 11th ACM International Conference on Multimedia, 156–166.
WANG, J., AGRAWALA, M., AND COHEN, M. F. 2007. Soft scissors: an interactive tool for realtime high quality matting. ACM Transactions on Graphics (Proceedings of SIGGRAPH 2007) 26, 3.
YU, Y., AND MALIK, J. 1998. Recovering photometric properties of architectural scenes from photographs. Proceedings of SIGGRAPH '98, 207–217.
YU, Y., DEBEVEC, P., MALIK, J., AND HAWKINS, T. 1999. Inverse global illumination: recovering reflectance models of real scenes from photographs. Proceedings of SIGGRAPH '99, 215–224.
ZHANG, L., DUGAS-PHOCION, G., SAMSON, J.-S., AND SEITZ, S. M. 2002. Single-view modelling of free-form scenes. The Journal of Visualization and Computer Animation 13, 4, 225–235.
work_rfrzyoh6cjcjvgk2wjr3riqqpm ---- Evolving dimensions in medical case reporting

EDITORIAL Open Access

Evolving dimensions in medical case reporting

Aristotle D Protopapas1* and Thanos Athanasiou2

Abstract

Medical case reports (MCRs) have been undervalued in the literature to date. It seems that while case series emphasize what is probable, case reports describe what is possible and what can go wrong. MCRs transfer medical knowledge and act as educational tools. We outline evolving aspects of the MCR in current practice.

The full translational potential of medical case reports (MCRs) is not always considered by authors, periodicals or readers, as MCRs are often perceived as a low-budget form of publication for fledgling medical writers. The acceptance rate for MCRs and their priority for publication are lower than those for other manuscripts in traditional journals.
It is important to emphasize that prospective, retrospective and observational randomized controlled trials are always constructed on the basis of data obtained from individual patients whose cases are the units that create the cohort, allowing the investigator to define end points and make inferences by calculating effect sizes. It is safe to say that all classes of evidence (Classes I through III) are constructed using the accumulated units of observation comprising individual cases. Although MCRs are limited by the fact that they cannot be generalized beyond the context of the individual patient or patients described [1] and thus are not suitable for inference, they offer a high degree of opportunity to transfer medical knowledge and act as educational tools, and in a very direct way.

In this editorial, we attempt to outline the evolving dimensions of MCRs in four particular areas of medical education: (1) reporting of adverse events (AEs), (2) new diseases or exceptional environments, (3) medical innovation and (4) appropriate use of media in terms of ethics, standardization and creativity (Figure 1).

MCRs of adverse events: errors of omission or errors of commission?

An AE is an unwanted event that occurs in the course of treatment, especially in a clinical trial. MCRs may be the first warning of catastrophic AEs [2,3]. The Journal of Medical Case Reports encourages constructive MCRs of AEs, especially the unreported side effects or adverse interactions involving medications or unexpected events in the course of treating a patient [4]. An isolated AE can be important in forming part of the evidence in the healthcare sciences [5]. Some methodologies of randomized controlled trials lead to the exclusion of such isolated AEs from their data sets, which renders an isolated MCR of an AE even more valuable.
It is especially crucial to highlight translational AEs, where in vitro or animal experiments have not been reproduced in humans, with resultant ramifications for patient safety [6]. The evidence in national and international AE databases should be consulted, and each AE should be logged appropriately [7]. Overall, an AE should be reported in an exacting, scientific way. A root cause analysis should be included, and a survey of evidence-based recommendations should conclude the report. Below we present a brief algorithm from the generic template of JMCR [4] that summarizes the above-mentioned contentions:

1. Introduction: The Introduction should state the indications of the intervention, a brief overview of existing evidence and current evidence-based recommendations, preferably tabulated and classified [8].
2. Case presentation: Are case presentations an error of omission or errors of commission? Adherence to recommendations and good practice are essential.
3. Discussion: The Discussion section should translate findings to clinical best practices and patient safety. In cases in which a case report describes an AE that occurred within the setting of a randomized controlled trial, this should be clearly stated and details of the randomized controlled trial, such as its registration and

* Correspondence: aristotelis.protopapas02@imperial.ac.uk
1 28 Old Brompton Road, London, SW7 3SS, UK. Full list of author information is available at the end of the article

Protopapas and Athanasiou Journal of Medical Case Reports 2011, 5:164
http://www.jmedicalcasereports.com/content/5/1/164

JOURNAL OF MEDICAL CASE REPORTS

© 2011 Protopapas and Athanasiou; licensee BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
relevant reference of the protocol, should be included in the MCR.
4. Conclusion: The Conclusion section should comprise a root cause analysis.

MCRs of new diseases, rare conditions or exceptional environments

Case reports remain an important source of evidence for rare conditions or exceptional treatment environments. Large trials are not possible in such cases, and MCRs offer important treatment information. Typical examples include the description of a new or previously undescribed genetic condition with an atypical inheritance pattern or the management circumstances of individual patients in the context of geographic or physiological extremes (including high altitude and major disasters). In the absence of larger data sets, these individual cases offer valuable information for healthcare practitioners in treating any similar patients when they occur.

MCRs of innovation

We feel that case reports describing innovative techniques advance healthcare and biotechnology by translating, validating and finally returning data from individual patients for further development and refinement of technologies with a low cost-impact ratio. The major issue of biomedical innovation is patient safety, and this should be reflected in reporting a novel intervention. The relevant institutional, national and international guidelines should provide benchmarks for innovations and should be adopted in drafting the MCR manuscript.

Appropriate use of media for MCRs: ethics, standardization and creativity

The expanding role of multi-media in driving home a message from a case presentation and attracting readership cannot be overemphasized. JMCR aspires to describe ethical, high-quality imaging modalities in MCRs.
Ethics

JMCR's mandatory policy on consent to publish [9] applies especially to the explicit consent of the reported patients to have their images, X-rays and histological films published. It is also important to keep in mind that the author guidelines of JMCR state that authors must preserve the anonymity of the patient [4]. It is expected that all photographs of humans and reproductions of medical imaging (for example, computed tomographic (CT) scan slices) are stripped of any identifying information. The free and open access to JMCR articles renders these precautions even more important, as the general public has access to every picture!

Figure 1: Evolving dimensions increasing the educational value of MCRs: adverse events; new diseases or exceptional environments; medical innovation; multimedia.

Standardization

A small, selective study [10] found a lack of standardization and relevance presented in published radiological images. It is important that the legend of each image be accurate and directly relevant to the MCR.

Creativity

Media, being a direct non-verbal message per se, should display creativity in our current Information Age. Digital photography is currently accessible to most healthcare organizations around the world, facilitating the capture and transmission (that is, uploading) of visual data. The use of medical photography expertise is a sound investment, yet conventional medical illustrators need to evolve and diversify from sketching to digitized multi-media (that is, platform non-specific and viewable using free or widely available tools [4,11]). The time has come for streaming media to replace traditional forms of media. An example is making whole CT sequences (axial or three-dimensional), as opposed to still, selected slices, available in the MCR [11].
Summary

In this era of digitization, case reporting necessitates a shift of gravitas to patient safety, the application of improved multi-media and an overall increase in educational potential. In this brief editorial, we have attempted to guide prospective authors of MCRs in optimizing their creative writing from ethical and practical points of view, especially with regard to patient safety. This guidance is meant to complement JMCR's general instructions to authors [4].

Author details
1 28 Old Brompton Road, London, SW7 3SS, UK. 2 Division of Surgery and Cancer, Imperial College London, QEQM Wing, St Mary's Hospital, Praed Street, London, W2 1NY, UK.

Authors' contributions
ADP and TA were equal contributors in writing the manuscript. We both read and approved the final manuscript.

Competing interests
The authors declare that they have no competing interests.

Received: 28 October 2010 Accepted: 27 April 2011 Published: 27 April 2011

References
1. Doherty M: What value case reports? Ann Rheum Dis 1994, 53:1-2.
2. Kidd M, Hubbard C: Introducing Journal of Medical Case Reports. J Med Case Rep 2007, 1:1.
3. Joki T, Vaananen I: [Thalidomide and embryopathies: report of 2 cases] [in Finnish]. Duodecim 1962, 78:822-827.
4. Instructions for JMCR authors. [http://jmedicalcasereports.com/info/instructions/].
5. Jenicek M: Clinical Case Reporting in Evidence-Based Medicine. 2nd edition. London: Arnold; 2001.
6. Protopapas AD: Anastomotic devices for coronary bypass: lethal complications have been previously reported! Eur J Cardiothorac Surg 2004, 25:145.
7. Manufacturer and User Facility Device Experience Database (MAUDE). [http://www.fda.gov/cdrh/maude.html].
8. Guirguis-Blake J, Calonge N, Miller T, Siu A, Teutsch S, Whitlock E: Current processes of the U.S. Preventive Services Task Force: refining evidence-based recommendation development. Ann Intern Med 147:117-122.
9. Kidd M, Hrynaszkiewicz I: Journal of Medical Case Reports' policy on consent for publication.
J Med Case Rep 2010, 4:173.
10. Siontis GC, Patsopoulos NA, Vlahos AP, Ioannidis JP: Selection and presentation of imaging figures in the medical literature. PLoS One 2010, 5:e10888.
11. TeraRecon iNtuition 3D Movie. [http://www.youtube.com/watch?v=L_ziz60cffo&feature=player_embedded].

doi:10.1186/1752-1947-5-164
Cite this article as: Protopapas and Athanasiou: Evolving dimensions in medical case reporting. Journal of Medical Case Reports 2011, 5:164.
I documenti PDF creati possono essere aperti con Acrobat e Adobe Reader 5.0 e versioni successive.) /JPN /KOR /NLD (Gebruik deze instellingen om Adobe PDF-documenten te maken waarmee zakelijke documenten betrouwbaar kunnen worden weergegeven en afgedrukt. De gemaakte PDF-documenten kunnen worden geopend met Acrobat en Adobe Reader 5.0 en hoger.) /NOR /PTB /SUO /SVE /ENU (Use these settings to create Adobe PDF documents suitable for reliable viewing and printing of business documents. Created PDF documents can be opened with Acrobat and Adobe Reader 5.0 and later.) >> >> setdistillerparams << /HWResolution [2400 2400] /PageSize [612.000 792.000] >> setpagedevice work_rgvfjvabnrd2bnzrhrzwopsng4 ---- © 2018 Ochi et al. This work is published and licensed by Dove Medical Press Limited. The full terms of this license are available at https://www.dovepress.com/terms. php and incorporate the Creative Commons Attribution – Non Commercial (unported, v3.0) License (http://creativecommons.org/licenses/by-nc/3.0/). By accessing the work you hereby accept the Terms. Non-commercial uses of the work are permitted without any further permission from Dove Medical Press Limited, provided the work is properly attributed. For permission for commercial use of this work, please see paragraphs 4.2 and 5 of our Terms (https://www.dovepress.com/terms.php). 
Clinical, Cosmetic and Investigational Dermatology 2018:11 37–39. Research Letter. Open Access Full Text Article. http://dx.doi.org/10.2147/CCID.S136730
The Effect of Oral Clindamycin and Rifampicin Combination Therapy in Patients with Hidradenitis Suppurativa in Singapore
Harumi Ochi, Lixian Chris Tan, Hazel H Oon
Department of Dermatology, National Skin Centre, Singapore
Abstract: Hidradenitis suppurativa (HS) is a chronic inflammatory disease of follicular occlusion characterized by abscesses, draining sinuses, and scarring. The efficacy and tolerability of combination treatment with oral clindamycin and rifampicin have previously been assessed in 4 studies including groups of Caucasian patients. Overall results are promising, with reported improvement rates between 71.4% and 85.7%. In this study, we propose that combination therapy is safe and efficacious in the treatment of HS, not only among Caucasians, but also in a group of Asian patients in Singapore.
Keywords: hidradenitis suppurativa, combination therapy, clindamycin, rifampicin
Introduction
Hidradenitis suppurativa (HS) is a chronic inflammatory disease of follicular occlusion. Although HS is not primarily an infectious disease, Staphylococcus aureus and Staphylococcus epidermidis are the pathogens most frequently isolated as secondary colonizers.1 In this study, we propose that combination therapy with oral clindamycin and rifampicin is efficacious in the treatment of HS in a group of Asian patients in Singapore.
Methodology
This retrospective study assessed the efficacy of a 10-week course of oral clindamycin 300 mg twice daily and oral rifampicin 300 mg twice daily in the treatment of HS.
Patients who received this combination therapy between 1 December 2012 and 31 July 2013 in a tertiary dermatological center in Singapore were included. This study was approved as an audit by the Head of the Acne Clinic of the National Skin Centre (NSC), Singapore. As it was performed retrospectively, permission to access the medical records of the patients was granted by the Director of NSC. Patient consent was waived by the Head of the Acne Clinic as data were de-identified and retrospective. (Correspondence: Harumi Ochi, Department of Dermatology, National Skin Centre, 1 Mandalay Road, 308205, Singapore. Tel +65 6253 4455. Fax +65 6253 3225. Email ochi.harumi@mohh.com.sg)
Results
Eleven patients (9 males) had a mean age of 24.5±8.8 years. There were 6 Chinese (54.5%), 4 Malays (36.3%) and 1 Indian (9.1%). Five were smokers (45.5%), 6 were obese (54.5%) and 1 had a family history of HS (9.1%). The duration of HS prior to commencement of oral clindamycin and rifampicin ranged from 2 to 20 years. Eight patients (72.7%) had received previous treatments, including retinoids and antibiotics, with limited effect and persistent disease. At the end of 10 weeks of treatment, 7 of the 11 patients (63.6%) reported clinical improvement.
Four patients had digital photography documenting response before and after treatment, and 2 blinded assessors evaluated the improvement using the HS Physician Global Assessment (PGA) score. Three patients achieved clear, minimal or mild scoring from all sites after completion of therapy, and 2 patients reported a 2-grade improvement relative to baseline from at least 1 site. There was only 1 patient (9.1%) who reported side effects of nausea and vomiting and 1 patient (9.1%) who defaulted follow-up (Table 1).
Discussion
The efficacy and tolerability of this combination treatment had previously been assessed in 4 studies. Overall results are promising, with reported improvement rates between 71.4% and 85.7%.1–4 Statistically significant improvements in all quality-of-life dimensions of the Skindex-France questionnaire were also described in 1 study.2 It is hypothesized that both the antibacterial and anti-inflammatory properties of clindamycin and rifampicin are responsible for the beneficial effects in treating HS. Clindamycin is a lincosamide antibiotic that is active against Gram-positive cocci and anaerobic bacteria. It mediates inflammation by suppressing complement-derived chemotaxis of polymorphonuclear leukocytes. Rifampicin is a lipid-soluble, broad-spectrum antibiotic highly effective against S. aureus. Additionally, it modifies cell-mediated hypersensitivity by suppressing antigen-induced transformation of sensitized lymphocytes. Rapid emergence of bacterial resistance may result with rifampicin monotherapy.5 Hence, combination therapy is synergistic, with reduced resistance rates and increased anti-inflammatory properties.
Table 1. Demographics of patients, previous treatments, response and side effects of combination therapy
Case | Age (years) | Gender | Duration of disease (years) | Affected area(s) | Prior therapy | Physician clinical assessment | Pretreatment PGA score | Posttreatment PGA score | Reported side effects
1 | 18 | Male | 2 | Axilla, neck | Doxycycline, topical clindamycin | Improved | Nil | Nil | Nil
2 | 18 | Male | 4 | Perineal | Doxycycline, erythromycin, isotretinoin, minocycline | Improved | Nil | Nil | Nil
3 | 19 | Male | 9 | Perineal | Bactrim, cephalexin, doxycycline, erythromycin, isotretinoin, minocycline | Improved | 2.75 | 1.50 | Nil
4 | 20 | Male | 6 | Perineal, axilla | Augmentin, topical clindamycin | Nonresponder | Nil | Nil | Nil
5 | 21 | Male | 13 | Perineal, axilla | Doxycycline, topical clindamycin | Improved | 2.67 | 1.00 | Nil
6 | 21 | Male | 3 | Perineal, axilla, neck | Nil | Improved | 1.75 | 2.00 | Nil
7 | 21 | Male | 3 | Perineal, back | Defaulted | Defaulted | Nil | Nil | Nil
8 | 22 | Male | 5 | Perineal | Isotretinoin, minocycline, topical clindamycin | Improved | Nil | Nil | Nil
9 | 48 | Male | 20 | Perineal, axilla | Augmentin, acitretin, ciprofloxacin, clindamycin, ceftriaxone, isotretinoin, infliximab | Nonresponder | 3.13 | 3.00 | Nil
10 | 27 | Female | 7 | Perineal, axilla | Doxycycline, isotretinoin | Nonresponder | Nil | Nil | Nausea, vomiting
11 | 35 | Female | 2 | Perineal, axilla | Nil | Improved | Nil | Nil | Nil
Abbreviation: PGA, hidradenitis suppurativa Physician Global Assessment.
Although
a longer duration of treatment appears warranted in chronic diseases like HS, no large differences in outcome between patients treated for 10 weeks or more and those treated for a shorter period have been reported.4 Other studies have similarly described good tolerability, with low rates of side effects between 13.0% and 38.2% (Table 2). Gastrointestinal complaints were most commonly reported, but there were no cases of clindamycin-associated Clostridium difficile colitis.1–4 In a recent systematic review of HS treatment, only the combination clindamycin–rifampicin regimen, infliximab, Nd:YAG laser and surgical excision were considered effective treatments. However, some of these modalities have limitations. Infliximab has resulted in adverse events including severe allergic reactions, multifocal motor neuropathy and drug-induced lupus reactions. Recurrence rates of up to 42.8% after surgical excision have also been described.6
Conclusion
Oral clindamycin and oral rifampicin combination therapy is safe and efficacious in the treatment of HS, not only in Caucasian patients but also in a group of Asian patients in Singapore.
Table 2. Summarized data of the available studies on rifampicin–clindamycin in HS
Reference | Number of patients | Treatment modalities | Assessment of the severity of HS | Number of patients with improvement | Number of patients with side effects
Bettoli et al1 | 23 | Rifampicin 600 mg and clindamycin 600 mg for 10 weeks | Sartorius; number of exacerbations | 17/20 (85%) | 3 (13%)
Gener et al2 | 116 | Rifampicin 600 mg and clindamycin 600 mg for 10 weeks | Sartorius; Hurley; Skindex-France questionnaire; HS Patient Global Assessment | 60/70 (86%) | 10 (14%)
Mendonça and Griffiths3 | 14 | Rifampicin 600 mg and clindamycin 600 mg for 10 weeks | No specific score | 10/14 (71%) | 4 (29%)
van der Zee et al4 | 34 | Rifampicin and clindamycin, different dosages and duration | Hurley; Investigator total assessment | 28/34 (82%) | 13 (38%)
Present study | 11 | Rifampicin 600 mg and clindamycin 600 mg for 10 weeks | HS Physician Global Assessment | 7/11 (63.6%) | 1 (9.1%)
Abbreviation: HS, hidradenitis suppurativa.
Acknowledgment
The authors thank Dr Heng Yee Kiat for assisting with the PGA scoring.
Disclosure
Dr Hazel H Oon has received research grants from Pfizer and Novartis and acted as a speaker for Novartis, Galderma, and AbbVie. The authors report no other conflicts of interest in this work.
References
1. Bettoli V, Zauli S, Borghi A, et al. Oral clindamycin and rifampicin in the treatment of hidradenitis suppurativa-acne inversa: a prospective study on 23 patients. J Eur Acad Dermatol Venereol. 2014;28(1):125–126.
2. Gener G, Canoui-Poitrine F, Revuz JE, et al. Combination therapy with clindamycin and rifampicin for hidradenitis suppurativa: a series of 116 consecutive patients. Dermatology. 2009;219(2):148–154.
3. Mendonça CO, Griffiths CE. Clindamycin and rifampicin combination therapy for hidradenitis suppurativa. Br J Dermatol. 2006;154(5):977–978.
4. van der Zee HH, Boer J, Prens EP, Jemec GB. The effect of combined treatment with oral clindamycin and oral rifampicin in patients with hidradenitis suppurativa. Dermatology.
2009;219(2):143–147.
5. Van Vlem B, Vanholder R, De Paepe P, Vogelaers D, Ringoir S. Immunomodulating effects of antibiotics: literature review. Infection. 1996;24(4):275–291.
6. Rambhatla PV, Lim HW, Hamzavi I. A systematic review of treatments for hidradenitis suppurativa. Arch Dermatol. 2012;148(4):439–446.
work_rhmgtwbiajdvhkv5jdzakwvzdq ---- Archived at the Flinders Academic Commons: http://dspace.flinders.edu.au/dspace/ This is the author's final corrected draft of this article. It has undergone peer review. The original publication is available at www.springerlink.com (http://www.springerlink.com/content/pt54r510t8870104/). Citation: A.J. Roberts-Thomson, N. Massey-Westropp, M.D. Smith, M.J. Ahern, J. Highton and P.J. Roberts-Thomson. "The use of the hand anatomic index to assess deformity and impaired function in systemic sclerosis", Rheumatology International 26(5): 439-444, 2006 Mar.
THE USE OF THE HAND ANATOMIC INDEX TO ASSESS DEFORMITY AND IMPAIRED FUNCTION IN SYSTEMIC SCLEROSIS
A.J. Roberts-Thomson1, N. Massy-Westropp2, M.D. Smith3, M.J. Ahern3, J. Highton4, P.J. Roberts-Thomson3.
1. Occupational Therapy student, University of South Australia. 2. Occupational Therapist, Department of Orthopaedics, Repatriation General Hospital, Daw Park, South Australia. 3. Rheumatology Unit, Repatriation General Hospital, Daw Park, South Australia. 4. Department of Medicine, University of Otago Medical School, Dunedin, New Zealand.
Correspondence to: P.J. Roberts-Thomson, Rheumatology Unit, Repatriation General Hospital, Daws Road, Daw Park, South Australia 5041.
Phone: (618) 8204 4585, Fax: (618) 8204 4158. Email: peter.roberts-thomson@flinders.edu.au
Running Title: Hand Anatomic Index in Scleroderma
ABSTRACT
Objective: To determine the "hand anatomic index" (HAI, a quantitative measure of hand deformity) in systemic sclerosis (scleroderma) and to compare it with other measures of hand deformity and functional impairment.
Methods: The HAI (open hand span minus closed hand span, divided by lateral height of hand) was determined in 30 patients with scleroderma and compared with hand deformity (as assessed by two independent rheumatologists) and with the modified Health Assessment Questionnaire (mHAQ), hand strength and prehensile gripability data.
Results: The HAI was confirmed as a reliable measure which clearly distinguished patients with increasing hand deformity and separated patients with diffuse scleroderma (n=12) from limited scleroderma (n=18), p=0.005. The HAI correlated significantly with measures of global functional impairment (as measured by the mHAQ) r=-0.46, p=0.01, hand strength r=0.51, p=0.0001 and prehensile gripability r=-0.37, p=0.05, but not with disease duration r=-0.16, p=NS, nor age at disease onset r=0.20, p=NS. It was estimated that the HAI accounts for ~25% of the total global disability (as measured by the HAQ).
Conclusion: Measurement of the HAI in scleroderma provides a reliable and objective measure reflecting variable degrees of hand deformity and functional impairment and might provide a valid clinical outcome measure in patients with this disabling disorder.
Key words: Scleroderma, Hand Anatomic Index, Deformity, Functional Impairment.
A simple quantitative measurement of hand deformity in scleroderma would be a valuable clinical parameter, particularly in the initial assessment of such patients and their subsequent follow up. Furthermore, such an assessment of hand deformity might also closely relate to functional impairment, a clinical measure of much importance.
Recently, Highton and colleagues1 derived a hand anatomic index (HAI) from the measurements of certain hand dimensions and showed that it can clearly distinguish patients with rheumatoid arthritis (RA) from healthy controls. Additionally, these investigators showed in their rheumatoid patients a significant correlation between this HAI and measures of disease duration, disease activity and functional impairment (as assessed by the modified Health Assessment Questionnaire, HAQ). They provided strong evidence, therefore, in support of their hypothesis that quantitative measurements of visible, structural abnormalities of the hands will detect the presence of RA and reflect its severity. We have therefore derived this same HAI in 30 patients with scleroderma and compared it with a number of validated functional measurements. Our results support the notion that the determination of the HAI might provide a simple and clinically relevant measure of impairment of hand function in this enigmatic and frequently disabling disorder.
METHODS
Patients
Thirty patients with scleroderma were selected arbitrarily (i.e. without prior knowledge of their disease subtype or severity) from the South Australian Scleroderma Register2 according to their geographic accessibility (i.e. closeness to the research unit) and their willingness to participate in the study. Twelve of these patients had diffuse scleroderma and 18 limited scleroderma (with the disease being subtyped according to the criteria discussed by LeRoy and colleagues3). The demographics, disease subtype, disease characteristics and serology of this patient cohort are listed in Table 1.
Hand Anatomic Index
The HAI was determined by the method of Highton1 with the modification that callipers and a ruler were used to determine the three hand dimensions, rather than the computerised video imaging analytical program as initially described.
The HAI is defined as the measurement of maximum hand spread (a) minus the measurement of closed hand span (b), divided by the measurement of maximum lateral hand height (c), i.e. HAI = (a-b)/c, where a = distance of maximum hand spread measured from middle tip of thumb to middle tip of fifth digit, b = distance of closed hand with thumb lying extended against index finger from middle tip of thumb to middle tip of fifth digit, and c = distance of maximum hand height (from lateral view of open hand, generally from the apex of the most prominent MCP joint to the base). The HAI was determined for both dominant and non-dominant hands, with the HAI (m) being the mean of the two values.
Functional Assessments
Health Assessment Questionnaire (HAQ). Patients were requested to complete a modified HAQ.4
Hand Grip Strength. Hand grip strength was measured using a calibrated Jamar dynamometer according to the standardised method of Fess.5
Prehensile Hand Gripability. Prehensile hand gripability was measured according to the method of Dellhag.6 In brief, the gripability test was scored according to the combined time to complete 3 separate tasks, viz 1) placing a flexigrip stocking over the non-dominant hand, 2) positioning a paper clip on an envelope and 3) pouring water from a plastic jug into a cup.
Digital Photography. Digital photographs of the hands in 3 views (antero-posterior view of hand in extended and closed position, and lateral view of open hand) were obtained for each patient. Hand deformity was assessed by having 2 rheumatologists, in a blinded but consensual fashion, grade these images as normal, mildly, moderately or grossly deformed.
Ethics. The project had the approval of the Flinders Clinical Research Ethics Committee, with all patients giving written consent to participate in the study.
Statistical Analysis. In general, non-parametric statistical tests (Stata version 8 software) were used, with a p value <0.05 being considered significant.
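The index defined above is simple arithmetic; as a worked illustration (not part of the original study, and with all measurement values invented for demonstration), the calculation can be sketched in a few lines of Python:

```python
def hand_anatomic_index(open_span, closed_span, lateral_height):
    """HAI = (open hand span - closed hand span) / lateral hand height.

    All three measurements must be in the same unit (e.g. mm), so the
    resulting index is dimensionless.
    """
    if lateral_height <= 0:
        raise ValueError("lateral hand height must be positive")
    return (open_span - closed_span) / lateral_height


def mean_hai(dominant, non_dominant):
    """HAI (m): the mean of the dominant- and non-dominant-hand indices.

    Each argument is an (open_span, closed_span, lateral_height) tuple.
    """
    return (hand_anatomic_index(*dominant)
            + hand_anatomic_index(*non_dominant)) / 2


# Hypothetical measurements in mm: open span, closed span, lateral height.
print(hand_anatomic_index(200.0, 110.0, 45.0))  # 2.0
print(round(mean_hai((200.0, 110.0, 45.0), (190.0, 112.0, 45.0)), 2))  # 1.87
```

Since all three distances share a unit, the index is scale-free, which is what allows comparison across hands of different sizes.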
Test-retest reliability was determined by intraclass correlation coefficient. Comparison between groups was performed using the Mann-Whitney test, while correlation between two variables was assessed using Spearman rank-order correlation coefficients (r).
RESULTS
To confirm reliability of the HAI we measured the 3 specific hand dimensions on two occasions in 5 patients. The calculated test-retest reliability correlation coefficient was 0.994. The HAI (m) was then correlated with hand deformity as assessed in a blinded fashion by 2 independent senior rheumatologists viewing 3 hand digital photographs from each patient. A progressive decline in the mean HAI was observed between hands graded as normal and those with gross deformity (r=-0.77, p<0.0001, Figure 1). Patients were then divided into those with limited and diffuse cutaneous scleroderma and the HAI (m) plotted (Figure 2). Patients with diffuse scleroderma had significantly lower values than those with limited scleroderma (p=0.005, Mann-Whitney). The HAI (m) was then correlated with HAQ score, hand strength, hand gripability, age at onset, and disease duration; the results are tabulated (Table II) and selected scatter plots are shown in Figures 3, 4 and 5. Of particular interest was the significant correlation observed between the HAI (m) and other measures of hand function and global disability (as measured by the HAQ). It was established by regression analysis that the HAI (m) accounts for ~25% of the total HAQ variance.
DISCUSSION
We describe a simple and reliable measure for quantifying hand anatomical deformity in scleroderma, have noted that it can discriminate significantly between patients with diffuse and limited cutaneous scleroderma, and have then correlated this HAI with other measures of hand functional impairment and global impairment.
We conclude that the HAI appears to have good construct validity in scleroderma, but further studies will be required to test its sensitivity to change over time. The HAI was recently described by one of us1 (JH) and was shown to provide a reliable measure reflecting the progression of anatomical abnormality of the hand caused by rheumatoid arthritis. As such, it was suggested that it might provide a useful outcome measure in rheumatoid arthritis.1 Hand deformity with disability is a prominent feature in scleroderma and we have therefore determined the HAI in 30 patients with this disease. In order to further simplify the measurement of the HAI we have modified the original description by removing the need for computerised video image analysis: we have merely measured the necessary hand dimensions using a calliper and ruler. We have verified that this is a reliable measure by replicating our measurements in the same patients, thus confirming its reproducibility. Further, we have shown that the HAI will clearly distinguish between patients with diffuse and limited scleroderma, the former having greater anatomical deformity, as would be expected from the greater degree of flexural contractures and joint immobility seen in this more severe variant. In addition, the HAI also correlated significantly with other measures of hand dysfunction and with the HAQ measure of global functional impairment. This is not unexpected, as the hand is especially important in performing the activities of daily living as assessed in the HAQ. Indeed, we established that the HAI accounts for approximately 25% of the total variance of the HAQ score. The positive correlations of the HAI with hand function and global disability, together with the lack of correlation of the HAI with age at disease onset and with disease duration, would be consistent with the HAI having good construct validity in scleroderma.
It is of interest that the severity of the hand deformity in our diffuse scleroderma patients (as assessed by the HAI) is of a similar order of magnitude to that found in longstanding rheumatoid arthritis. Highton et al1 quote a HAI for healthy control subjects (of similar gender ratio and age range to our own patients) of 3.76 ± 0.6 (SD) versus 1.51 ± 0.9 in their rheumatoid arthritis patients. This latter figure is very similar to that observed for our diffuse scleroderma patients. Furthermore, we have recently determined in a study involving 35 patients with diffuse scleroderma that the global functional impairment of these patients is likewise very similar to that of patients with rheumatoid arthritis (mean mHAQ (95% CI) for diffuse scleroderma = 0.82 (0.55-1.08) as compared with 1.03 (0.88-1.21) for 170 rheumatoid arthritis patients, unpublished observations). There are a number of publications assessing hand function in RA, OA and scleroderma,7-11 both of a quantitative nature (i.e. range of motion, grip strength, dexterity) and of a qualitative nature (i.e. pain, self-efficacy), and many have been validated. Recently the Cochin hand function scale (CHFS, an 18-item questionnaire) has been shown to have favourable construct validity in scleroderma,7 with the CHFS total score explaining 75% of the HAQ global score variance (as assessed by ANOVA). We are now proceeding to compare the HAI with the CHFS. It would seem most efficient to have one qualitative assessment (eg CHFS) and one quantitative assessment (eg HAI) to comprehensively assess hand function in scleroderma, for the purpose of sequential monitoring. However, for both these measures further research will be necessary to assess their sensitivity to change over time. In conclusion, measurement of the HAI in patients with scleroderma reliably reflects hand deformity/functional impairment and might provide a valuable outcome measure in this disabling rheumatic disorder.
ACKNOWLEDGEMENTS
We thank Mrs C.
Thomas for typing the manuscript, Mr P Hakendorf for providing expert statistical assistance, and the scleroderma patients for their willing cooperation.
TABLE 1. CHARACTERISTICS OF PATIENTS WITH SCLERODERMA
Subtype | No | Gender (Male:Female) | Mean Age, years (range) | Mean Disease Duration, years (range) | Serology (antibody)
Diffuse | 12 | 4:8 | 50.3 (39-68) | 10.8 (2-18) | Scl-70 = 3
Limited | 18 | 2:16 | 64.5 (36-84) | 23.6 (4-64) | Centromere = 7
TABLE II. CORRELATION OF HAND ANATOMIC INDEX WITH OTHER HAND FUNCTIONAL INDICES AND DISEASE CHARACTERISTICS
Hand Anatomic Index (av) | HAQ: r=-0.46, p=0.01 | Hand strength (av): r=0.51, p=0.0001 | Hand gripability: r=-0.37, p=0.05 | Age at onset: r=0.20, p=NS | Disease duration: r=-0.16, p=NS
Correlation coefficients (Spearman); p = p value; NS = not significant.
REFERENCES
1. Highton J, Solomon C, Gardiner DM and Doyle TCA. Video image analysis of hands: development of an 'anatomic index' as a potential outcome measure in rheumatoid arthritis. British Journal of Rheumatology 1996;35:1274-1280.
2. Roberts-Thomson PJ, Jones M, Hakendorf P, Kencana Dharmapatni AASS, Walker JG, MacFarlane JG, Smith MD and Ahern MJ. Scleroderma in South Australia: epidemiological observations of possible pathogenic significance. Internal Medicine Journal 2001;31:220-229.
3. LeRoy EC, Black C, Fleischmajer R. Editorial. Scleroderma: classification, subsets and pathogenesis. J Rheumatol 1988;15:202-5.
4. Pincus T, Callahan LF, Brooks RH, Fuchs HA, Olsen NJ, Kaye JJ. Self-report questionnaire scores in rheumatoid arthritis compared with traditional physical, radiographic, and laboratory measures. Ann Intern Med 1989;110:259-66.
5. Fess EE. Hand rehabilitation. In: H.L. Hopkins and H.D. Smith (eds), Willard and Spackman's Occupational Therapy (8th edn), pp 674-690. Philadelphia, J.B. Lippincott, 1993.
6. Dellhag B, Bjelle A. A grip ability test for use in rheumatology practice. J Rheumatol 1995;22:1559-65.
7. Rannou F, Poiraudeau S, Guillevin L, Revel M, Fermanian J, Mouthon L.
Construct validity of the Cochin Hand Function Scale in systemic sclerosis. Arthritis Rheum 2004:1054.
8. Brower LM, Poole JL. Reliability and validity of the Duruöz Hand Index in persons with systemic sclerosis (scleroderma). Arthritis & Rheumatism 2004;51(5):805-809.
9. Wolfe F, Michaud K, Gefellar O and Chi HK. Predicting mortality in patients with rheumatoid arthritis. Arthritis & Rheumatism 2003;48(6):1530-1542.
10. Smyth AZ, MacGregor AJ, Mukerjee D, Brough GM, Black CM and Denton CP. A cross sectional comparison of three self reported functional indices in scleroderma. Rheumatology 2003;42:732-738.
11. Hawley DJ and Wolfe F. Sensitivity to change of the Health Assessment Questionnaire (HAQ) and other clinical and health status measures in rheumatoid arthritis: results of short term clinical trials and observational studies versus long-term observational studies. Arthritis Care and Research 1992;5(3):130-136.
LEGEND TO FIGURES
Figure 1. Hand deformity as graded by 2 senior rheumatologists into 4 grades (abscissa), plotted against the HAI (ordinate). Each point represents a patient and the horizontal bars represent the means for the 4 categories (n=29 patients only).
Figure 2. Box and whisker plot of HAI (m) for patients with diffuse cutaneous scleroderma (n=12) and limited cutaneous scleroderma (n=18), p=0.005. The box demonstrates the mean and the 25th and 75th percentiles, while the whiskers indicate the 5th and 95th percentiles.
Figure 3. Correlation between HAI (m) and HAQ. Spearman r=-0.46, p=0.01.
Figure 4. Correlation between HAI (m) and hand strength. Spearman r=0.51, p=0.0001.
Figure 5. Correlation between HAI (m) and hand prehensile gripability. Spearman r=-0.37, p=0.05.
Conflict of Interest Statement
No conflict of interest has been declared by the authors.
Key Messages: The hand anatomic index (HAI) (open hand span minus closed hand span, divided by lateral height of hand) is a simple, reliable and quantitative measure of hand deformity. The HAI determined in 30 patients with scleroderma clearly distinguished patients with increasing hand deformity and correlated significantly with measures of hand strength (p=0.0001), prehensile gripability (p=0.05) and global functional impairment (as measured by the Health Assessment Questionnaire, p=0.01). Measurement of the HAI in scleroderma (and in rheumatoid arthritis) is a reliable and objective measure of hand deformity and functional impairment and might provide a useful clinical outcome measure. Its sensitivity to change over time needs to be determined.
work_rjbvkelpnvcrjhn7a3c6o7xp7m ---- The Open Dentistry Journal, 2018, Bentham Open. Content list available at: www.benthamopen.com/TODENTJ/ DOI: 10.2174/1874210601812010059
EDITORIAL: Digital Dentistry: The Revolution has Begun
The digital revolution is changing the dental profession. Intraoral, desktop and face scanners, Cone Beam Computed Tomography (CBCT), software for Computer Assisted Design/Computer Assisted Manufacturing (CAD/CAM) and guided surgery, new aesthetic materials, milling machines and 3D printers are radically transforming the dental profession. The modern digital workflow consists of four phases that are closely dependent on each other: the acquisition of information, processing that information into a project, producing the necessary devices, and the clinical application on the patient. These phases integrate into the traditional workflow based on anamnesis, clinical examination, 2D radiology, treatment plan formulation and execution of the therapies, but all from a 3D perspective, leading us to the virtual patient. We can now, through an intraoral scan of one or more prosthetic preparations with powerful intraoral scanners, acquire 3D information for the realization of a prosthetic Computer Assisted Design (CAD) project; within the modeling software we can design our restoration, which is then milled in a highly aesthetic material (ceramic, lithium disilicate, zirconia) and applied on the patient. The same goes for implants, without having to take conventional physical impressions with impression trays, which our patients have never appreciated. At the same time, information related to teeth and gingiva, received from an intraoral scan, can be superimposed on the bone-related information acquired via low-dose-radiation Cone Beam Computed Tomography (CBCT).
It is therefore possible to plan the optimal positioning of implants with software and to guide the surgery. Planning data are transferred to a surgical template that can be physically fabricated in various ways and in different materials. This guide helps the surgeon position the implants correctly, without the need to raise a flap. However, fixed prosthesis and implant surgery are not the only disciplines affected by the digital revolution; aesthetic dentistry, orthodontics, and regenerative and maxillofacial surgery have also experienced this change. In aesthetic dentistry, the so-called digital smile design techniques, which can design and realize the patient's smile through 2D (digital photography) and 3D (intraoral and face scanning, CAD software and milling) tools, are very successful. Similarly, the application of digital techniques opens up new horizons in orthodontics, owing to greater possibilities for diagnosis and planning (through the “safe bone” set-up), but above all to the possibility of clinically implementing and employing a whole range of customized devices such as aligners. Regenerative bone surgery can apply personalized synthetic bone grafts to a patient's defect, because they are drawn in 3D from CBCT data. The benefits for the clinician are many, such as greater simplicity of surgery and reduction of surgical time. In the future, these bone grafts will be printed in 3D and loaded with growth factors or the patient's stem cells to accelerate and enhance healing processes. This is the perspective of Bone Tissue Engineering. Lastly, in maxillofacial surgery, it is now possible to design and realize, by means of rapid prototyping techniques such as laser sintering and laser melting, personalized metal devices and custom-made implants, even in titanium. All these changes represent a true revolution for our profession, comparable to the introduction of dental implants over 30 years ago.
The digital revolution opens up interesting scenarios and possibilities, but it also represents a challenge for the dentist and his/her team. In fact, it is necessary to learn about new devices, software and machines and to understand how to integrate them efficiently into the workflow. In this special issue, entirely dedicated to the world of digital dentistry, we have gathered several scientific and clinical papers that deal with digital topics. We hope you will find them interesting and useful. Francesco Mangano (Guest Editor), Department of Surgical and Morphological Science, Dental School, University of Varese, Varese, Italy. E-mail: francescoguidomangano@gmail.com The Open Dentistry Journal, 2018, 12 (Suppl-1, M1): 59-60. © 2018 Francesco Mangano. This is an open access article distributed under the terms of the Creative Commons Attribution 4.0 International Public License (CC-BY 4.0), a copy of which is available at: https://creativecommons.org/licenses/by/4.0/legalcode. This license permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. work_rktoryzkqfhdvoa63b4wiabr4m ---- Abstract: Effect of Local Flaps Used for the Reconstruction of Nasal Tip Tumors on the Function of Nasal Valves
DOI: 10.1097/01.GOX.0000526170.37929.51 Nuh Evin, O. Akdag, M. Karameşe and Z. Tosun, Plastic and Reconstructive Surgery Global Open, vol. 5, 2017. METHODS: 60 patients who had non-melanocytic skin cancer on the nasal tip only were included in this study. No patient had a previous history of nasal surgery, allergic rhinitis, concha hypertrophy or other breathing problems. Six patients were treated with a forehead flap, ten patients with a nasolabial island flap, twenty patients with an inferiorly based bilobed flap, and 24 patients with a superiorly based bilobed flap. Function of the internal and external nasal valves was evaluated by…
work_ro4jk2tefbfw5mc4ruffd7vofm ---- Military Technical College, Kobry Elkobbah, Cairo, Egypt, April 3-5, 2018, 9th International Conference on Mathematics and Engineering Physics (ICMEP-9). Dilatometry of Refractory Metals and Alloys Using Multi-Wavelength Laser Shadowgraphy of Filament Samples H. S. Ayoub 1, Ashraf El-Sherif 2, H. H. Hassan 2, S. A. Khairy 3 1,3 Department of Physics, Faculty of Science, Cairo University, Egypt 2 Laser Photonics Research Center, Engineering Physics Department, Military Technical College, Cairo, Egypt Abstract: In this work, we discuss a new technique for measuring the linear thermal expansion of refractory metals and alloys over their entire temperature range, from room temperature to near melting. The technique is based on generating multi-wavelength laser shadowgraphs of filament wire samples under gradual heating and measuring the dimensional changes from the shadowgraphs using a digital camera. The samples are clamped at both ends and Joule-heated by direct current under vacuum.
The measurements are non-contact and accurate, enabling low-cost dilatometric measurements that help in the future synthesis and testing of new grades of refractory materials, used as plasma-facing materials in nuclear fusion reactors, or of special superalloys for high-temperature applications where thermal structural stability is required. Keywords: Elevated temperatures, Filament dilatometry, Linear expansion, Low cost multi-wavelength laser shadowgraphy, Refractory materials. Background Refractory metals (niobium, molybdenum, tantalum, tungsten and rhenium) can be defined as those metals exhibiting melting points greater than 2273 K. Their alloys are vital materials to virtually every major industry and to many branches of applied science, including aerospace, automotive, nuclear technology, lighting, metal processing, mining, electronics and prosthetics. They share some distinctive properties: a high melting point, high hardness at room temperature, relatively high density, and stability against creep deformation up to very high temperatures. Their thermal expansion is one of their most important thermophysical characteristics [1]. The development of several applications in modern science and technology places a heavy demand on accurate knowledge of their thermal expansivity, in order to prevent the generation of harmful internal stress when a structural part is heated and kept at constant length, and therefore to achieve optimum designs that guarantee safety. For example, nanoscience is producing new refractory materials with unusual microstructure and mechanical behavior that require efficient dilatometric investigation [2].
The thermal expansion of refractory materials is measured by special dilatometers [3, 4], capable of performing elevated-temperature extensometry and thermometry simultaneously (as shown in figure 1). Figure 1. Block diagram of a modern dilatometer. The basic components of a modern dilatometer include six parts: the sample, the sample cage or carrier, the heat source or sink, the thermometry, the extensometry, and the environment (these components are discussed briefly in table 1). According to manufacturer brochures [5], modern dilatometers often have a common functional diagram in which a computer controls the whole measuring process and data acquisition, allowing measurements that include thermal expansion, CTE, annealing studies, determination of phase transitions and the glass transition, softening points, kinetics studies, construction of phase diagrams and sintering studies, including the determination of sintering temperature, sintering step and rate-controlled sintering. Investigation of processing parameters, as reflected by dimensional changes of the material, can be studied in great detail through exact duplication of the thermal cycles and rates used in the actual process. Table 1. Basic components of a modern dilatometer:
- The Sample: powder, sheets, rods (most used), wires.
- The Sample Cage or Carrier: free end, single fixed end or fixed-fixed end; vertical or horizontal type.
- The Heat Source or Sink: electric heater coil, gas flame, microwave, ultrasound, electron beam, infrared or laser beam, and Joule heating by electric current or electromagnetic induction; cooling by water, inert gas flow, or liquid nitrogen or helium for cryogenic-temperature dilatometry.
- The Thermometry: contacting techniques (thermocouples, thermistors, electric resistivity of the sample, ultrasound propagation and metrology thermometers); non-contacting techniques (pyrometers, bolometers, reference spectrometry and thermal camera detectors).
- The Extensometry: surface techniques (metrological, mechanical, electric, optical and laser methods); subsurface techniques (acoustic, X-ray, Γ-ray, electron beam and neutron beam methods).
- The Environment: vacuum; inert gas.
Dilatometry techniques are numerous; the selection of a suitable technique depends upon the nature of the sample, the temperature range of the measurement and the required resolution. To evaluate the different dilatometry techniques, table 2 summarizes the resolution of each one [6]. Table 2. Resolution of different dilatometry techniques:
- Push rod: 10^-5 m
- Strain gauge: 10^-7 m
- LVDT: 10^-8 m
- Capacitive technique: 10^-10 m
- Optical comparators: 10^-6 m
- Laser (non-interferometric): 10^-6 to 10^-8 m
- Laser interferometers: 10^-7 to 10^-9 m
- Modulation calorimetry: 10^-7 m
- Ultrasound: 10^-5 m
- Microwave: 10^-12 m
- Electron diffraction: 10^-14 m
- Neutron diffraction: 10^-14 m
- X-ray diffraction: 10^-14 m
- Γ-ray attenuation: 10^-14 m
Obviously, the most sensitive techniques are those based on capacitance and high-energy particle diffraction; unfortunately, the capacitance technique is used for low-temperature measurements, and high-energy particle techniques may cause transmutation of the sample in some cases. However, non-interferometric laser techniques (including shadowgraphy) are suitable for high-temperature measurements with reasonable resolution. The other techniques are either unsuitable for this temperature range or not sensitive enough.
There are many problems associated with elevated-temperature dilatometry which can affect the accuracy of the measurement; these are summarized in table 3. Table 3. Problems associated with elevated-temperature dilatometry:
Problems related to the sample:
- Thermal expansion at high temperatures is associated with physical instability of the sample, especially when using push-rod techniques.
- Some dilatometers require special treatment of the sample material (i.e. optically flat, polished surface, special roughness, knife-edge marks, ...).
- Some dilatometers require samples of standard dimensions, which may need the preparation of a massive quantity of test material.
Problems related to the dilatometer:
- Most measurements are made under vacuum or in an inert gas atmosphere to avoid oxidization of the incandescent sample.
- Metallurgical applications often involve sophisticated temperature controls capable of applying precise temperature-time profiles for heating and quenching the sample.
- Dilatometry requires precise contact thermometry based on thermocouples that probe the surface of the sample under test and may not be able to sense the real temperature of the sample's core.
- Some dilatometers use special non-contact pyrometers that measure the apparent radiant temperature of the sample.
- Powerful heaters and special furnaces are needed for measurements at elevated temperatures (up to 2700 °C).
We decided to use shadowgraphy to avoid most of the previously cited problems, in order to ease the measurements, reduce the mass of the sample and decrease the heating time.
Theory To achieve our goal, it was necessary to make the sample as long as possible, so as to obtain a noticeable expansion during testing and therefore be able to exclude any sophisticated technique, such as interferometry or high-resolution photography, from the experiment, which uses only a modest digital camera to detect the expansion. On the other hand, it is very hard to keep a long sample of relatively small diameter straight at incandescence temperatures. To overcome this technical problem, it is necessary to change the symmetry of the sample from a short cylindrical bar to a uniformly coiled wire (assuming that the sample material is ductile) and to change the fixation method from a single fixed end to fixed-fixed ends; this also reduces the containing volume for the sample (as seen in figure 2). Figure 2. Comparison between traditional and proposed sample layout: (a) traditional rod sample in a fixed-end push-rod fused-silica cage with thermocouple; (b) proposed filament wire sample in a fixed-fixed end mount. The major differences between the traditional and proposed methods are featured in table 4, revealing the advantages of the filament sample layout in the ease of the measurement and the reduction of sample mass. Table 4. Basic features of the traditional and proposed methods (traditional vs proposed):
- Sample shape: cylinder vs helical coil
- Diameter: 1-12 mm vs 0.001-0.4 mm
- Length: 7-25 mm vs 7-50 mm
- Heat source: furnace vs Joule heating
- Temperature sensor: thermocouple vs sample resistivity
- Temperature range: up to 2400 K vs up to 3400 K
- Clamping method: fixed end vs fixed-fixed end
Theoretically, the expansion of the filament sample, made of a wire of length l, can be measured as a function of the wire diameter d or as a function of the filament (coil) diameter D.
In our case it is easier to measure ΔD than Δd; hence the linear expansion of the sample can be calculated as:
Δl/l = ΔD/D (1)
The temperature T of the filament sample can be obtained from the change of its resistance R by using the formalism of exponents [7], from the equation:
T = T_o (R/R_o)^x (2)
where the value of x depends on the resistivity of the material and the subscript o denotes the room-temperature value. Experimental setup We decided to test our technique on a standard automotive lamp sample, type P21, previously used in many works to measure the linear expansion of non-sag tungsten. The reason behind this choice is the fact that the geometry of the lamp filament exactly matches our theoretical needs. We therefore designed a standard shadowgraphy setup (as shown in figure 3) to generate shadowgraphs of the lamp filament during its incandescence. Figure 3. Block diagram of the filament dilatometer. Assuming x = 0.83 [8], the temperature of the incandescent tungsten coil (filament) is calculated from the equation:
T = T_o (R/R_o)^0.83 (3)
In the shadowgraphy setup [9], the object under test is placed near the focal plane of a convex throw lens of focal length f and then backlighted by a laser beam, so that the shadowgraph of the object is projected at throw distance D_shadow with a magnification factor M given by the equation:
M = D_shadow / f (4)
If the object undergoes a small in-plane position shift due to vibration, this shift will be magnified at the shadowgraph plane by a factor of M. This technique can detect shifts as small as δ = 1×10^-7 m (assuming that M = 2000×), knowing that the theoretical limit is of the order of 1×10^-8 m.
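The measurement chain just described can be sketched numerically. The sketch below assumes the working relations Δl/l = ΔD/D, T = T0·(R/R0)^x (the exponent rule, with x = 0.83 for tungsten) and M = D_shadow/f; all resistances and dimensions are invented illustration values, not measurements from the paper:

```python
def temperature_from_resistance(R, R0, T0=293.0, x=0.83):
    """Exponent-rule estimate of filament temperature, T = T0*(R/R0)**x.

    R0 is the cold resistance at room temperature T0; x = 0.83 is the
    tungsten exponent assumed in the text.
    """
    return T0 * (R / R0) ** x

def magnification(D_shadow, f):
    """Shadowgraph magnification for an object near the focal plane."""
    return D_shadow / f

def linear_strain(delta_D_image, D0_image):
    """Relative expansion dl/l = dD/D, computed from image-plane lengths.

    The magnification M cancels because both lengths scale by the same M.
    """
    return delta_D_image / D0_image

# Invented example values:
T = temperature_from_resistance(R=6.0, R0=0.5)   # hot/cold ratio of 12 -> about 2300 K
M = magnification(D_shadow=2.0, f=0.001)         # 2000x, as quoted in the text
strain = linear_strain(delta_D_image=0.4e-3, D0_image=40e-3)  # 1% linear expansion
print(T, M, strain)
```

Because the strain is a ratio of two image-plane lengths, the absolute calibration of M drops out of equation (1); M only sets how small a diameter change the camera can resolve.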
The expansivity of the filament material can be approximated by the equation:
α ≈ ΔD / (D_o ΔT) (5)
The setup must operate in a dark room; noting that at incandescent temperatures the self-illumination of the sample may present a problem, a proper sample housing is needed to mask scattered light. Figure 4 shows the test setup during operation. Figure 4. Dark-room operation of the filament dilatometer. The test procedure is summarized in figure 5, where d is the diameter of the filament wire, I and V are the current and voltage across the filament, W is the nominal wattage, Imax is the maximum current, and Iop and Vop are the nominal operating current and voltage for the filament. Figure 5. Flow chart of filament dilatometry. Results and discussion We performed sample shadowgraphy using three different laser wavelengths, as seen in figures 6a, 6b and 6c. It was noticed that the most comfortable wavelengths for the human eye were the red and the green. Figure 6. Filament shadowgraph images at (a) 436 nm, (b) 536 nm and (c) 640 nm. By applying digital zoom to each shadowgraphic picture (figure 7) and then applying a grayscale filter (figure 8), we noticed that the best contrast for digital photography of the shadowgraph was given by the blue laser (figure 8a). Figure 7. Digital zoom on filament edge shadowgraph images at (a) 436 nm, (b) 536 nm and (c) 640 nm. Figure 8. Grayscale digital zoom filament edge shadowgraph images at (a) 436 nm, (b) 536 nm and (c) 640 nm. We measured the expansivity of the tungsten filament as a function of its temperature up to the melting point and burnout. We obtained very good agreement with the data of reference [10], as shown in figure 9.
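The expansivity estimate is a finite-difference computation; the sketch below assumes the working form α ≈ ΔD/(D0·ΔT), and the coil diameters and temperatures are invented numbers, not the paper's tungsten data:

```python
def mean_expansivity(D0, T0, D, T):
    """Mean linear expansivity alpha = (D - D0) / (D0 * (T - T0)), in 1/K.

    D0 and D are the coil diameters at temperatures T0 and T; since
    dl/l = dD/D for the coiled filament, the diameter change stands in
    for the length change of the wire.
    """
    return (D - D0) / (D0 * (T - T0))

# Invented example: coil grows from 1.000 mm at 293 K to 1.010 mm at 2293 K
alpha = mean_expansivity(D0=1.000e-3, T0=293.0, D=1.010e-3, T=2293.0)
print(f"{alpha:.2e} 1/K")  # 5.00e-06 1/K
```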
Figure 9. Measured expansivity of tungsten compared to the values of ref. [10]. Best results were obtained when using filaments with the maximum ratio between the coil diameter and the core wire cross-section. Conclusion A one-watt blue (405 nm) commercial multimode semiconductor laser has proven to be a suitable backlighting source for generating shadowgraphs of incandescent objects, while green or red lasers (523 nm or 605 nm) were not as effective. The expansion of the coil diameter is the key parameter that represents the linear expansion of the sample. The measured tungsten expansivity values are in good agreement with previously published data. This low-cost method successfully attained a resolution of 5×10^-8 m. Acknowledgment The authors are very grateful to the members of the Physics Department, Faculty of Science, Cairo University, for their support with measurement instruments, encouragement and helpful suggestions. References 1. Tietz, T. E. and Wilson, J. W., Behavior and Properties of Refractory Metals, Stanford University Press, Stanford, CA, p. 1-28, ISBN 978-0-8047-0162-4 (1965). 2. Wirtz, O. M., Thermal Shock Behavior of Different Tungsten Grades under Varying Conditions, Forschungszentrum Jülich, ISBN 978-3-89336-842-6 (2013). 3. Taylor, R. E. et al., Thermal Expansion of Solids, Materials Park, OH: ASM Int., ISBN 0-87170-623-7 (1998). 4. Bernard Yates, Thermal Expansion, Plenum Press, New York (1972). 5. PHYWE Co., Dilatometer with clock gauge, instruction brochure (2014). 6. E. G. Wolff, Measurement Techniques for Low Expansion Materials, 9th National SAMPE Technical Conference, Atlanta, GA, 9, 57-72 (1977). 7. Agrawal, D.C. and Menon, V.
J., Illuminating physics with gas-filled lamps: Exponent-rule, Latin American J. of Phys. Educ., 3, 33-36 (2009). 8. H.H. Hassan, S.A. Khairy and H.S. Ayoub, A Simple Laboratory Experiment for Measuring the Expansivity of Tungsten at Elevated Temperatures, Nature and Science, 13(11), 146-151 (2015). 9. Subramaniamy, S., White, D. R., Scholl, D. J. and Weber, W. H., In situ optical measurement of liquid drop surface tension in gas metal arc welding, J. Phys. D: Appl. Phys., 31, 1963-1967 (1998). 10. White, G. K. and Minges, M. L., Thermophysical properties of some key solids: An update, International Journal of Thermophysics, 18, 5, 1269-1327 (1997). work_rpaphdktazgm5f63jdx5672x6u ---- Special Section on Cultural Heritage, Computers & Graphics 35 (2011) v-vi. Editorial. Over the last two decades, there have been many high-profile success stories where cutting-edge computer graphics (CG) technology was used in collaboration with cultural heritage (CH) professionals to unlock the secrets of humanity's legacy. Well-known examples include the empirical 3D acquisition of Michelangelo's David, the laser-scan-based 3D mapping of the tombs found in the Valley of the Kings, and the decipherment of the instructions for and operation of the 2nd-century BCE Antikythera Mechanism, arguably the first computer in humankind's history. This special edition of Computers and Graphics contains new work that continues this tradition. Jenny and Hurni [1] propose a series of geometrical transformations that allow assessing the planimetric and geodetic accuracy of old maps before using the data for geo-historical studies. Laycock et al. [2] present techniques to aid in the semiautomatic extraction of building footprints from digital images of archive maps and sketches by aligning the old maps to modern vector data.
Ducke et al. [3] propose an open-source pipeline for deriving a full 3D model from a series of overlapping images and illustrate it with data from the archaeological site at Weymouth. Osorio et al. [4] discuss novel digitising techniques for highly specular reflective materials based on a multi-spectral approach and describe new virtual installations in the context of the Gold Museum in Bogota. Abel et al. [5] present a study of whether computed tomography is cost-effective and can be used to capture and document the fine surface topology of flaked stone tools used by early humans. Li et al. [6] propose a skull completion framework based on symmetry and surface matching which will benefit subsequent archaeological and anthropological processing and analysis. Scheiblauer and Wimmer [7] present an out-of-core interactive editing system for point clouds from a laser scanner. Finally, Blake and Ladeira [8] suggest a method for preparing users for a foreign cultural experience in Virtual Reality storytelling and illustrate it with San folklore. More remarkable examples are sure to follow as leading CH institutions and professionals continue exploring advanced computer graphics-based tools and methods in collaboration with CG specialists. CG research on its own has pointed the way to new application development opportunities in the cultural heritage community. These collaborations and advanced CG research underscore the potential value of integrating robust digital imaging into ongoing CH working practice. Regardless of these substantial potential advantages, little adoption of CG tools and methods within the CH community has taken place over the last twenty years. This slow rate of adoption is due to many factors. Many of the tools were and remain expensive and difficult for CH practitioners to use without extensive retraining, and they require the expertise of digital specialists from outside of the existing
cultural heritage working cultures. The digital representations themselves often lacked transparency due to the absence of the digital equivalent of the traditional scientific 'lab notebook' documenting the means and circumstances of their generation. The hard truth is that most CG imaging of CH materials has focused exclusively on generating a high-quality digital representation of the subject without providing a means for scientific evaluation of the representation's quality. This led to a distrust of their scientific reliability and infrequent use of these CG representations in CH scholarship. Further, the lack of process-history transparency, dependence on closely held proprietary file formats and software, and an absence of established and funded long-term data preservation strategies have led to concerns about digital data safety. Over the last twenty years, these concerns were confirmed by the increasing number of large-scale losses of digitally captured CH data, known today as the dawning of the Digital Dark Age. When paper records can last centuries, why adopt digital records that are useless after only ten years? Only recently has the adoption rate of robust digital imaging tools by cultural heritage professionals begun to increase. This is partly due to CG-driven work to remove the existing barriers to widespread adoption of digital technologies by adapting new robust imaging applications explicitly for use by cultural heritage professionals, designed to be compatible with existing CH skill sets and working cultures. These applications, as well as related tool research and development road maps, include methods of enhancing the scientific reliability of digital representations through the recording of scientific process history (digital 'lab notebooks'), and open-source architectures that enhance long-term digital preservation strategies. Work in this volume examines the advantages of exclusive use of such open-source workflows. Some of these computer graphics tools enable public access and interactive engagement both within collections and sites as well as over the Internet. Through the London and Seville Charters, the Virtual Reality (VR) community, made up of CG and CH professionals working together, has taken the lead in ensuring that transparency and the means for scientific qualitative evaluation are present in their work products. Museums have a growing interest in using VR representations to increase public access to their collections, both within the museum and over the Internet. Several contributions to this journal explore VR modalities to enrich the public experience of interaction with collections. These initiatives, along with the adoption by an increasing number of museums, libraries, archives, archaeologists, epigraphists, numismatists, and many others in the cultural heritage community of digital photography-based imaging tools and practices they can use by themselves, are creating an environment conducive to future widespread digital imaging adoption by CH professionals. References [1] Bernhard Jenny, Lorenz Hurni. Studying cartographic heritage: analysis and visualization of geometric distortions. Computers and Graphics 2011;35(2):402-11. doi:10.1016/j.cag.2011.01.005. [2] Stephen David Laycock, Philip G. Brown, Robert G. Laycock, Andy M. Day. Aligning archive maps and extracting footprints for analysis of historic urban environments. Computers and Graphics 2011;35(2):242-9. doi:10.1016/j.cag.2011.01.002. [3] Benjamin Ducke, David Score, Joseph Reeves. Multiview 3D reconstruction of the archaeological site at Weymouth from image series. Computers and Graphics 2011;35(2):375-82. doi:10.1016/j.cag.2011.01.006. [4] Maria F.
Osorio, Pablo Figueroa, Flavio Prieto, Pierre Boulanger, Eduardo Londoño. A novel approach at documenting artifacts at the Gold Museum in Bogota. Computers and Graphics 2011;35(4):894-903. doi:10.1016/j.cag.2011.01.014. [5] Richard L. Abel, Simon A. Parfitt, Nick M. Ashton, Simon G. Lewis, Beccy Scott, Chris B. Stringer. Digital preservation and dissemination of ancient lithic technology with modern micro-CT. Computers and Graphics 2011;35(4):878-84. doi:10.1016/j.cag.2011.03.001. [6] Xin Li, Zhao Yin, Li Wei, Shenghua Wan, Wei Yu, Maoqing Li. Symmetry and template guided completion of damaged skulls. Computers and Graphics 2011;35(4):885-93. doi:10.1016/j.cag.2011.01.015. [7] Claus Scheiblauer, Michael Wimmer. Out-of-core selection and editing of huge point clouds. Computers and Graphics 2011;35(2):342-51. doi:10.1016/j.cag.2011.01.004. [8] Edwin Blake, Ilda Ladeira. Cultural reinterpretation and resonance: the San and hip-hop. Computers and Graphics 2011;35(2):383-91. doi:10.1016/j.cag.2011.01.003. Alan Chalmers is Professor of Visualisation in the International Digital Laboratory, WMG, at the University of Warwick. He has published over 190 papers in journals and international conferences on realistic computer graphics, parallel processing, multi-sensory perception and virtual archaeology. He is Honorary President of Afrigraph and a former Vice President of ACM SIGGRAPH. In addition, he is Founder and Innovation Director of the spinout company goHDR, which is developing software to facilitate the widespread adoption of high dynamic range (HDR) imaging technology. His research is working towards achieving Real Virtuality: high-fidelity, multi-sensory virtual environments, including developing 'digital taste'. Mark Mudge is President and co-founder of Cultural Heritage Imaging (CHI), a public charity and California non-profit corporation, incorporated in 2002. Mark has a BA in Philosophy from New College of Florida (1979).
He has worked as a professional bronze sculptor and has been involved in photography and 3D imaging for over 20 years. He is a co-inventor, with Tom Malzbender, of the computational photography technique Highlight Reflectance Transformation Imaging. He has published twelve articles and book chapters related to imaging cultural heritage materials and serves on several international committees, including The International Council of Museums' (ICOM) Documentation Committee (CIDOC).

Luis Paulo Santos is an Auxiliar Professor at the Department of Informatics, Universidade do Minho, Portugal. His research interests are in interactive high-fidelity rendering and parallel processing. He received his Ph.D. in 2001 from Universidade do Minho in Scheduling under Conditions of Uncertainty. Luis Paulo has published several papers on both computer graphics and parallel processing in international conferences and journals. He has been a member of several international program committees, acted as program co-chair of the 2007 EGPGV symposium and the EuroPar 2005 conference, and organized EGPGV 2006, VAST 2008 and VS-Games 2010 in Braga, Portugal. He manages several nationally funded graphics R&D projects and participates in several European projects with both academia and industry. He has been a member of the Direction Board of the Portuguese chapter of Eurographics since 2008.

Guest Editors

Alan Chalmers
Visualisation in the International Digital Laboratory, WMG, University of Warwick, UK
E-mail address: Alan.Chalmers@warwick.ac.uk

Mark Mudge
Cultural Heritage Imaging (CHI), California, USA

Luis Paulo Santos*
Department of Informatics, Universidade do Minho, Portugal
E-mail address: psantos@di.uminho.pt

Received 31 March 2011

* Corresponding author.
Special Section on Cultural Heritage
Colorimetric and Longitudinal Analysis of Leukocoria in Recreational Photographs of Children with Retinoblastoma

Alireza Abdolvahabi1, Brandon W. Taylor1, Rebecca L. Holden1, Elizabeth V. Shaw1, Alex Kentsis2, Carlos Rodriguez-Galindo3, Shizuo Mukai4,5, Bryan F. Shaw1*

1 Department of Chemistry and Biochemistry, Baylor University, Waco, Texas, United States of America, 2 Department of Pediatrics, Memorial Sloan-Kettering Cancer Center, New York, New York, United States of America, 3 Department of Pediatric Oncology, Dana-Farber Cancer Institute, Boston, Massachusetts, United States of America, 4 Department of Ophthalmology, Harvard Medical School, Boston, Massachusetts, United States of America, 5 Retina Service, Massachusetts Eye and Ear Infirmary, Boston, Massachusetts, United States of America

Abstract

Retinoblastoma is the most common primary intraocular tumor in children.
The first sign that is often reported by parents is the appearance of recurrent leukocoria (i.e., "white eye") in recreational photographs. A quantitative definition or scale of leukocoria – as it appears during recreational photography – has not been established, and the amount of clinical information contained in a leukocoric image (collected by a parent) remains unknown. Moreover, the hypothesis that photographic leukocoria can be a sign of early-stage retinoblastoma has not been tested for even a single patient. This study used commercially available software (Adobe Photoshop®) and standard color space conversion algorithms (operable in Microsoft Excel®) to quantify leukocoria in actual "baby pictures" of 9 children with retinoblastoma that were collected by parents during recreational activities (i.e., in nonclinical settings). One particular patient with bilateral retinoblastoma ("Patient Zero") was photographed >7,000 times by his parents (who are authors of this study) over three years: from birth, through diagnosis, treatment, and remission. This large set of photographs allowed us to determine the longitudinal and lateral frequency of leukocoria throughout the patient's life. This study establishes: (i) that leukocoria can emerge at a low frequency in early-stage retinoblastoma and increase in frequency during disease progression, but decrease upon disease regression; (ii) that Hue, Saturation and Value (i.e., HSV color space) are suitable metrics for quantifying the intensity of retinoblastoma-linked leukocoria; (iii) that different sets of intraocular retinoblastoma tumors can produce distinct leukocoric reflections; and (iv) that the Saturation–Value plane of HSV color space represents a convenient scale for quantifying and classifying pupillary reflections as they appear during recreational photography.

Citation: Abdolvahabi A, Taylor BW, Holden RL, Shaw EV, Kentsis A, et al.
(2013) Colorimetric and Longitudinal Analysis of Leukocoria in Recreational Photographs of Children with Retinoblastoma. PLoS ONE 8(10): e76677. doi:10.1371/journal.pone.0076677

Editor: Sanjoy Bhattacharya, Bascom Palmer Eye Institute, University of Miami School of Medicine, United States of America

Received July 10, 2013; Accepted September 2, 2013; Published October 30, 2013

Copyright: © 2013 Abdolvahabi et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Funding: This work was supported by a laboratory start-up fund provided to Bryan F. Shaw from the College of Arts and Sciences at Baylor University. This work was also supported by the Mukai Fund, at Massachusetts Eye and Ear Infirmary. The funders had no role in study design, data collection and analysis, decision to publish or preparation of the manuscript.

Competing Interests: The authors have declared that no competing interests exist.

* E-mail: Bryan_Shaw@baylor.edu

These authors contributed equally to this work.

Introduction

Retinoblastoma (Rb) is an aggressive cancer that forms rapidly in the developing retina of children, typically before the age of five years [1–3]. Epidemiology estimates that ~7000–8000 children develop Rb throughout the world each year, and ~3000–4000 children die annually [4]. The median age of diagnosis in the U.S. is ~24 months for unilateral disease and ~9–12 months for bilateral disease [3,5–7]. Survival rates are high in developed countries (e.g., ~95% survival in the U.S. [7–9]) but drop in resource-limited settings (e.g., 48% in India [10]; 46% in Namibia [11,12]). Lower survival rates are attributed to delayed diagnosis and the development of extra-ocular and metastatic disease [13].
Survivors typically experience moderate to severe vision loss; however, early diagnosis can increase the rate of vision preservation and survival [14–18]. Diagnosing Rb continues to be a major challenge. The incidence of this cancer in the U.S. is sufficiently high (i.e., 1:16,000–18,000 births [7]) that pediatricians are advised to screen for Rb by performing the "red reflex" test with an ophthalmoscope [19,20]. In spite of pediatric screening, one of the most effective methods for detecting Rb appears to be amateur photography: the diagnosis of a large proportion of Rb cases in the U.S. (e.g., ~80% in one study [21]) appears to be initiated by a parent's concern over recurrent leukocoria in photographs of their child [22] (see Figures 1B and 2D for examples of Rb-linked leukocoria). Although other rare eye conditions can also result in recurrent leukocoria [23] (e.g., Coats' disease [24], pediatric cataract [25], chorioretinitis [26], and persistent fetal vasculature [27]), the most common cause of persistent leukocoria in children under the age of 5 years old is historically considered to be Rb [28–30].

PLOS ONE | www.plosone.org 1 October 2013 | Volume 8 | Issue 10 | e76677

Leukocoria (from Greek meaning "white pupil"; colloquially referred to as "white eye" or "cat eye") has historically been associated with advanced Rb and low rates of ocular salvage [21]; however, the age of emergence and longitudinal frequency of leukocoria has never been determined for even a single patient. Thus, the correlation between the emergence and frequency of leukocoria (detected by parents during recreational photography) and disease onset, progression and remission remains unknown.
We suspect that photographic leukocoria might emerge earlier in disease progression than tacitly assumed, but that it initially occurs at a low frequency because of the small size or eccentric position of the tumor(s), and becomes progressively more frequent (and thus easily noticed by parents) as tumors increase in size and number. Despite the effective – albeit, anecdotal – use of digital photography by parents to detect Rb-linked leukocoria [21], there have been no efforts to develop tools that might increase the effectiveness of digital photography in screening Rb (e.g., software that is embedded in a camera or computing device that can detect leukocoria). The recreational photographs of Rb patients with leukocoria have never been analyzed with basic tools in computer graphics that can quantify the colorimetric properties of the leukocoric reflection, e.g., the Hue (color), Saturation (color concentration) and Value (brightness). Thus, a quantitative definition and scale of leukocoria do not exist, and the correlation between the clinical severity of Rb (i.e., size, position, and number of tumors) and the colorimetric properties of its leukocoric reflection remains undetermined. We hypothesize that photographs of children with Rb – of the type that parents collect – do contain more clinically relevant information than a simple binary detection of "white eye". This information (if present and readily quantifiable) might be useful to a pediatric clinician or ophthalmologist, and could be instantly transmitted out of environments with limited resources, where most deaths occur. We presume that amateur photography has been overlooked as a quantitative tool for screening Rb because amateur recreational photography involves untrained (or unsuspecting) users who are operating dozens of different devices in diverse settings (i.e., at multiple angles, focal apertures, light intensities, etc.).
Nevertheless, despite the optically diverse nature of recreational photography, parents have inarguably proven that this practice of photography is as effective at detecting Rb as pediatric examinations (if not more effective, because parents photograph their children more often than they are examined by a clinician, and/or possibly because flash photography involves a rapid flash pulse, t < 500 ms, that will not necessarily contract the pupil and impede the reflection of light off peripheral tumors, Figure 1). In this study, we analyzed >7000 recreational photographs of nine Rb patients and 19 control children (who were photographed alongside patients, i.e., were "playmates"). We show that the intensity of a leukocoric reflection can be quantified in HSV color space; we also show that the lateral and longitudinal frequency of leukocoria can correlate with the clinical severity of Rb and its progression and remission. The results suggest that leukocoria can emerge in the earliest stages of Rb (e.g., at 12 days old in one patient), but occurs initially at low frequency and is, presumably, easily overlooked by a parent. Finally, we propose a quantitative scale by which leukocoria intensity can be graded. With regard to this study, it must be remembered that the recreational photography of an infant and toddler by his parents is not by nature optically random. For example, a parent will typically photograph an infant or toddler at a finite range of focal lengths and will favor certain positions and angles of the child over other angles (i.e., top-down pictures are collected more often than bottom-up). As we show in this paper, thousands of recreational photographs collected over several years can have similar exposure times, focal apertures, and, depending upon the camera, a consistent flash pulse and aspect ratio.
Methods

Collection of Donated Photographs of Retinoblastoma Patients and Healthy Control Subjects

Photographs of nine children with Rb (2 girls, 7 boys) were donated by their parents. The parents of eight of the children only donated images that they had judged to be leukocoric; these sets of photographs were small (i.e., <10 images per child) and were not longitudinal in nature. The parents of a ninth child (a male, referred to as "Patient Zero"), who are the corresponding authors of this study, donated their entire library of photographs, which consisted of an unsorted set of 9493 digital photographs in JPEG (Joint Photographic Experts Group) format. Out of this library, 7377 photographs contained the patient's face and thus were used in analysis. The photographs of Patient Zero also contained images of 19 different children (approximately age-matched) who functioned as embedded controls. The metadata tags included in the EXIF (Exchangeable Image File) data of each leukocoric JPEG file from each patient were analyzed in order to determine: (i) the date that each picture was collected (i.e., the age of the patient), (ii) whether a flash was used, (iii) whether "Red Eye Reduction" was in effect, (iv) the make and model of the camera, and (v) photographic parameters such as exposure time and focal aperture. A total of fourteen different digital cameras were used to collect the photographs in this study. Two cameras were used contiguously to collect images of Patient Zero: (i) a Canon PowerShot SD750® (Canon USA, Lake Success, NY) from age 0–16 months, and (ii) a Nikon D3000® (Nikon Inc., Melville, NY) from age 16–36 months. Both cameras were equipped with a xenon flash tube and contained "Red Eye Reduction" and "Red Eye Removal" technologies.
Nine digital cameras that were used to photograph the remaining eight patients were: Apple iPhone 4® (Apple Inc., Cupertino, CA) for Patient 1, n = 9 leukocoric photographs; Panasonic DMC-FS3® (Panasonic Inc., Secaucus, NJ), Canon PowerShot SD300®, and Canon EOS Digital Rebel XSi® for Patient 3, n = 4; Nikon D60® and Blackberry 8330® (Blackberry, Ontario, Canada) for Patient 4, n = 9; Blackberry 8330® for Patient 5, n = 9; Nikon D60® for Patient 6, n = 3; Canon PowerShot A80® and Canon PowerShot A2000 IS® for Patient 7, n = 3; and Panasonic DMC-LZ2® for Patient 8, n = 7. The digital camera used to photograph Patient 2 could not be determined. The following three digital camera phones were used to generate 72 photographs of a healthy adult that exhibited "pseudo-leukocoria": Samsung SGH-I997; Apple iPhone 4; and Droid Razr® (Motorola). Because anecdotal evidence suggests that "pseudo-leukocoria" occurs more frequently in low-light conditions, we collected "pseudo-leukocoric" images under low-light conditions (i.e., a dimly lit room) characterized by a light intensity of 0.0259 ± 0.0074 μE/m²/s, as measured by a digital light meter (LX1010B, Dr. Meter). A flash was emitted during the collection of each "pseudo-leukocoric" image.
Colorimetric Analysis of Pupillary Reflexes

The average HSV color space parameters (Hue, Saturation, and Value) of each pupil were determined in the following manner: (i) each pupil was cropped in its entirety and the total pixel count was determined using Adobe Photoshop® (Adobe, San Jose, CA; CS5 Extended, version 12.0.4664); (ii) the number of pixels with a given intensity in three color channels (red, green, and blue; RGB) was then determined for each pupil; (iii) the average RGB coordinates of each cropped pupil were calculated using Microsoft Excel® (Microsoft Inc., Redmond, WA); (iv) these RGB coordinates were then transformed to the HSV cylindrical coordinate system using the standard RGB–HSV conversion algorithm introduced by Smith [31] (which is operable in Microsoft Excel®). Photographs that contain pupils comprised of 10 pixels or fewer were not analyzed because of their low resolution. We chose to express and quantify leukocoria in HSV color space (instead of RGB) because HSV specifies the color in terms that are more intuitive (to us) and thus much easier to interpret and communicate. For instance, the HSV system defines a basic color of the visible range of the electromagnetic spectrum (Hue), the concentration of that color, i.e., from pink to red (Saturation), and, thirdly, the brightness of the color (Value). The RGB system, on the other hand, partitions a single color into color channels designated "Red", "Green", and "Blue" that correspond to the additive color components which make up the single color. While both HSV and RGB can specify a color to the same precision, RGB requires knowledge of additive colors in order to understand how changes in one channel affect the overall appearance of color. Thus, in our opinion, this feature makes RGB more complicated and less intuitive compared to HSV color space [31].
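The per-pupil averaging and RGB-to-HSV conversion described above can be sketched in a few lines of Python; the standard library's colorsys module implements the same hexcone conversion introduced by Smith [31]. This is a minimal illustration, not the authors' code: the function names are ours, and the pixel-exclusion threshold follows the 10-pixel cutoff stated in the text.

```python
import colorsys

def average_rgb(pixels):
    """Mean R, G, B of a cropped pupil; `pixels` is a list of (R, G, B) tuples, 0-255."""
    n = len(pixels)
    return tuple(sum(p[i] for p in pixels) / n for i in range(3))

def pupil_hsv(pixels):
    """Average RGB of a cropped pupil converted to HSV.

    Returns (Hue in degrees, Saturation 0-1, Value 0-1), or None for
    pupils of 10 pixels or fewer, which were excluded as too low-resolution.
    """
    if len(pixels) <= 10:
        return None
    r, g, b = average_rgb(pixels)
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    return h * 360.0, s, v
```

For a uniformly light-gray (leukocoric-looking) pupil this yields a low Saturation and high Value, whereas a dark red-reflex pupil yields a low Value.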
Moreover, consumers (parents) are often self-educated in HSV color space through the use of electronic image displays (i.e., computer screens, flat screen televisions, etc.). In the case of Patient Zero, we did not crop and quantify the HSV of every pupil in each of the 7377 facial photographs. Instead, we manually inspected each photograph for pupils that were suspicious for leukocoria (i.e., pupils that were not obviously black or dark red in appearance). The entire process of cropping pupils and quantifying HSV parameters for each pupil in this subset was then performed in duplicate by separate researchers.

Clinical Description of Patient Zero

Patient Zero was diagnosed with bilateral Rb by an ophthalmologist at 123 days of age. The only presenting sign was leukocoria, which the parents had reported noticing for three weeks prior to diagnosis. A diagnosis of Group B disease by the International Classification of Retinoblastoma [5] was made in both eyes, based on examination of the dilated eyes and fundus photography. The position and size of tumors were, however, significantly different in each eye. The tumors in the left eye were generally smaller, and although they were posterior, with one near the optic nerve and one in the macula (but outside of the fovea), none involved the center of the macula. The right eye contained two tumors: the larger tumor (diameter = 15 mm) was centrally located and involved the entirety of the macula; the smaller tumor (diameter = 1.5 mm) was more peripheral, located at 4 o'clock. The left eye contained three posterior tumors, as described above, with diameters of 6 mm, 1.5 mm, and 0.4 mm. Over a period of 5 months after diagnosis, Patient Zero received five different types of treatment.
In chronological order, the treatments were: (i) systemic vincristine and carboplatin (age: 132–196 days), (ii) focal cryotherapy to the right and left eye (age: 200 and 207 days), (iii) focal laser photoablation to the right eye (age: 207 days) and left eye (age: 207, 220, 264 days), (iv) enucleation of the right eye (age: 220 days, after progression to Group D with vitreous seeding), and (v) proton beam radiation to the left eye (age: 222–258 days). Systemic chemotherapy along with cryotherapy and laser consolidation slowed the growth of existing tumors, but failed to reduce their size and did not prevent the appearance of new tumors; treatment with proton beam radiotherapy, however, resulted in an excellent response.

Ethics Statement

This study was determined to be exempt from review by an Institutional Review Board at Baylor University. The parents of our study participants have given written informed consent, as outlined in the PLOS consent form, to publication of their children's photographs.

Figure 1. Leukocoria in Children with Retinoblastoma. A) The reflection of visible light by an intraocular Rb tumor can cause the pupil to appear white (leukocoric) during photography; an increase in the size of a tumor will generally increase the number of photographic angles that will produce leukocoria during recreational photography. B) An example of a leukocoric picture from a set of 7377 pictures of a patient (Patient Zero) with bilateral Rb. Images of Patient Zero were donated by his parents. doi:10.1371/journal.pone.0076677.g001
Results and Discussion

We reiterate that the images in this study are photographically diverse (i.e., pictures were collected at multiple photographic angles, poses, settings, and lighting conditions) and thus accurately reflect typical recreational photographs of infants and toddlers in typical recreational activities (i.e., crawling, eating, crying, etc.). This photographic diversity is by no means a limitation or liability to this study – or to the utility of photography in detecting Rb – but rather increases the probability that light will sample the tumor surface and be reflected back towards the camera lens, regardless – to some degree – of tumor position or size (Figure 1A). Moreover, the parents of each child did not anticipate, during photography, that a photograph might be used for a scientific study. The photographs thus represent an authentic set of "family pictures" of the sort that might initiate a diagnosis of Rb, and in the case of Patient Zero, did in fact initiate diagnosis (Figure 1B). The large number and longitudinal nature of available photographs of Patient Zero allowed us to determine the longitudinal frequency of leukocoria as a function of age, and whether the colorimetric properties of leukocoria were statistically different in the right versus left eye. It should be noted that the smaller sets of photographs of the other eight patients are not longitudinal in nature, or large enough (in our opinion) to justify a statistically significant comparison between leukocoria intensity and clinical severity, but they are useful for surveying the possible range of Rb-linked leukocoria in HSV color space.

Longitudinal Frequency of Leukocoria in "Patient Zero": From Birth through Diagnosis and Remission

The longitudinal frequency of photographs of Patient Zero is shown in Figure 2A. The parents collected photographs consistently over a period of three years.
We manually analyzed this entire set of photographs and found that 237 out of 7377 pictures contained at least one leukocoric pupil; leukocoria was detected in 120 left pupils and 146 right pupils. Approximately 80% of the leukocoric pictures were taken with a Canon PowerShot SD750 (shown in Figure 2B). A pupil was classified as leukocoric if it exhibited an abnormal reflection with a Value ≥ 0.50 and a Saturation ≤ 0.60 (in HSV color space). Approximately 10% of pupils that were categorized as leukocoric exhibited an average pixel Value ≤ 0.5 or Saturation ≥ 0.60, but were nonetheless classified as leukocoric because only a portion of the pupil exhibited abnormal Saturation or Value. In contrast, many non-leukocoric pupils (from Patient Zero and control subjects) contained a specular reflection of the cornea (which is not indicative of disease) that caused the average pixel brightness to be > 0.5. This type of specular reflection is common in flash photography and appears as a white dot in the pupil, iris or sclera. We did not attempt to subtract specular reflections from images of any patient or control subjects because our goal is to determine how effective digital photography – as practiced by amateurs during recreation – can be at quantifying leukocoria. Examples of leukocoria from the donated set of photographs of Patient Zero are shown in Figures 1B and 2D. In addition, approximately 300 cropped images of pupils from Patient Zero and healthy control subjects are grouped according to gross shade and arranged into spirals (Figure 3). Each spiral contains: (i) cropped pupils of Patient Zero that exhibited leukocoria (denoted "Lk+/Rb+"); (ii) non-leukocoric pupils from the patient (which appear black or red, denoted "Lk−/Rb+"); and (iii) cropped pupils from healthy subjects that appeared red or black (denoted "Lk−/Rb−" in Figure 3).
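The whole-pupil classification rule described above (an abnormal reflection with Value ≥ 0.50 and Saturation ≤ 0.60) can be expressed directly; a minimal sketch, with a function name of our own choosing, assuming per-pupil average Saturation and Value on a 0-1 scale:

```python
def is_leukocoric(saturation, value):
    """Whole-pupil average test for leukocoria used in this study:
    an abnormal reflection with Value >= 0.50 and Saturation <= 0.60.

    Note: per the text, ~10% of leukocoric pupils fail this average-level
    test and were instead flagged from an abnormal *portion* of the pupil.
    """
    return value >= 0.50 and saturation <= 0.60
```

A bright, desaturated (white/gray) pupil is flagged; a strongly saturated red reflex or a dark pupil is not.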
Leukocoric pupillary reflections were not detected in healthy control subjects (however, leukocoria can occur rarely in children who do not have any known eye disease, presumably during off-axis photography and reflection of the optic nerve [32,33]). The gross appearance of leukocoric pupillary reflections in Patient Zero was often white or gray (Figure 3A), but leukocoria also appeared with yellow Hues (Figure 3B), pink Hues (Figure 3C) and orange Hues (Figure 3D). The photographic reflection of Rb tumors might, therefore, be more accurately described by a general term such as "photocoria" (Greek: light pupil), instead of leukocoria, because the abnormal reflections do not necessarily appear white [34]. We attribute the differences in the gross appearance of photocoric pupils to different angles of photography, which result in variable mixtures of light reflected from the healthy regions of the retina and optic nerve, and light reflected by the surface of a tumor.

Figure 2. A Collection of ~7,000 Digital Photographs of a Single Patient with Retinoblastoma. A) Longitudinal frequency of photography of "Patient Zero" by parents over a three-year period (i.e., from birth to 3 years old; 7377 photographs). B) The majority of leukocoric pictures (~80%) were collected with this compact 7.1-megapixel Canon PowerShot SD750 camera. C) Digital picture of Patient Zero (i.e., child on left, exhibiting leukocoria in left eye) and a healthy playmate (i.e., child on right, exhibiting a red reflex in both eyes). D) Example of a digital picture of Patient Zero; the right eye exhibited leukocoria, and the left eye exhibited a red reflex. Photographs in C & D were taken with the Canon PowerShot SD750. Permission to include images of the healthy control child was granted by both parents. doi:10.1371/journal.pone.0076677.g002
A timeline of the diagnosis, treatment, and remission of Patient Zero is described in Figure 4, and compared with the daily and monthly frequency of leukocoria. Leukocoria first occurred at 12 days old (Figure 4A–C) – several months before the parents first noticed leukocoria – but only occurred in ~5% of facial pictures taken during the first month of life (Figures 4B, 4D). Leukocoria increased in frequency during disease progression (reaching as high as 100% of pictures per day and 25% of pictures taken per month, Figures 4B, 4D), even in spite of systemic chemotherapy, laser photoablation therapy, and cryotherapy. The increase in frequency, despite chemotherapy, is consistent with clinical observations that systemic chemotherapy did not significantly reduce tumor size, or prevent the formation of new small tumors (which were immediately and successfully treated with cryotherapy or laser photoablation therapy). The treatment of the patient's left eye with proton beam radiation and laser photoablation (which resulted in long-term tumor regression) decreased the frequency of leukocoria to ~2% per month (Figure 4D). Leukocoria frequency remained ~2% per month throughout the period of remission. The lateral distribution of leukocoria in Patient Zero is shown in Figure 4E. The right eye accounted for 60–85% of all detected leukocoria (until it was enucleated at 9 months of age). We attribute this higher frequency to the greater total surface area of tumors in the right eye and their central location, which might increase the probability (during recreational photography at multiple angles) that light will reflect off the surface and into the camera lens (Figure 1A). The total surface area of tumors in the right eye was calculated to be ~4-fold greater than the surface area of tumors in the left eye. The lateral ratio of the total surface area of tumors in the right and left eye was approximated using the measured height and diameter of tumors from fundus photography performed at age 129 days and 199 days; a semi-spherical geometry was assumed when calculating the surface area of each tumor, as previously described [35]. The correlation between the frequency of leukocoria and the progression and remission of disease, and also the greater frequency in the more severely affected eye, suggests that the leukocoria observed in these images is clinically relevant, and that leukocoria frequency can be a clinically relevant parameter.

Figure 3. Examples of Cropped Leukocoric and Non-Leukocoric Pupils from a Set of 7377 Pictures of Patient Zero (and Control Children Who Were Photographed Alongside the Patient). Each spiral contains: (i) cropped leukocoric pictures from Patient Zero (denoted Lk+/Rb+), (ii) non-leukocoric pupils from Patient Zero (Lk−/Rb+), and (iii) non-leukocoric pupils from healthy control subjects (Lk−/Rb−). A) Cropped leukocoric pupils that exhibit a gray scale (classic leukocoria); cropped leukocoric pupils with non-black-and-white appearance are also shown: B) yellow, i.e., "xanthocoria"; C) pink, i.e., "rhodocoria"; D) orange, i.e., "cirrocoria". Many pupils in A–D contain specular reflections of the cornea that appear as a white dot and are not indicative of disease. doi:10.1371/journal.pone.0076677.g003

Right Leukocoric Pupils of Patient Zero Exhibited Lower Saturation and Value than Left Pupils

The Saturation and Value of the right and left leukocoric pupils from Patient Zero are plotted in Figure 5 (as a per-pixel average, red circles). The average Hue versus Value of each leukocoric pupil is also plotted on a polar coordinate plane (Figure 6, red circles).
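Under the semi-spherical (spherical-cap) geometry assumed for the tumor surface-area comparison above, a cap's lateral surface area follows from its measured base diameter and height: with base radius a = d/2, the area is π(a² + h²). A sketch of that geometry (the function name is ours, and any input dimensions are illustrative, not the paper's fundus measurements):

```python
import math

def cap_area(base_diameter, height):
    """Lateral surface area of a spherical cap (the 'semi-spherical'
    tumor model): pi * (a**2 + h**2), with a = base_diameter / 2.
    Algebraically equal to 2*pi*R*h for the sphere of radius
    R = (a**2 + h**2) / (2*h) passing through the cap.
    """
    a = base_diameter / 2.0
    return math.pi * (a * a + height * height)
```

As a sanity check, a cap whose height equals its base radius is a hemisphere, with lateral area 2πr².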
Because the HSV quantities are expressed as a per-pixel average (the average number of pixels analyzed was 308.11), they are independent of image resolution. We also calculated the mean Hue, Saturation, and Value of right and left leukocoric pupils over the entire three-year period of photography (Table 1). These colorimetric (and statistical) analyses of pupils demonstrate that the Saturation and Value, but not the Hue, of right leukocoric pupils are different from those of left leukocoric pupils in Patient Zero. For example, the three-year aggregate mean Saturation of right leukocoric (RL) pupils (S_RL = 0.234) is 46% lower than that of left leukocoric (LL) pupils (S_LL = 0.436; p<0.0001*). The aggregate mean Value of the right leukocoric pupils (V_RL = 0.677) is 17% lower than that of left pupils (V_LL = 0.818; p<0.0001*). The right and left pupils did not show differences in Hue: the three-year aggregate mean Hue of right leukocoric pupils (H_RL = 21.1°, i.e., yellow) was nearly identical to that of left leukocoric pupils (H_LL = 21.0°). The derivation of the p-values and the statistical significance of differences in the HSV quantities of right and left eyes are discussed below. The ability to detect variations in the average colorimetric properties of leukocoric reflections from different ocular sets of Rb tumors with a pocket-sized digital camera (Figure 2B), during recreational photography, is remarkable. We hypothesize that the lower Saturation of leukocoric reflections from the right eye, compared to the left, is caused by the greater degree of retinal eclipsing by the larger surface area of the tumors in the right eye compared to the left eye.
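The per-pixel HSV averaging described above can be sketched with the standard library's colorsys module. The 2x2 "crop" below is an illustrative stand-in for a cropped pupil; note that Hue is averaged arithmetically here, which is only safe away from the 0°/360° wrap-around (the paper treats Hue as a directional, circular quantity).

```python
import colorsys

def mean_hsv(pixels):
    """Per-pixel average (hue_degrees, saturation, value) of an RGB crop.

    pixels: iterable of (r, g, b) tuples with channels in 0..255.
    """
    n = 0
    sum_h = sum_s = sum_v = 0.0
    for r, g, b in pixels:
        h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
        sum_h += h * 360.0  # express hue as an angle in degrees
        sum_s += s
        sum_v += v
        n += 1
    return sum_h / n, sum_s / n, sum_v / n

# A hypothetical 2x2 crop of a yellowish leukocoric pupil (illustrative):
crop = [(230, 215, 120), (240, 225, 130), (235, 220, 125), (228, 210, 118)]
h, s, v = mean_hsv(crop)  # yellowish hue, moderate saturation, high value
```

Because the result is an average over pixels rather than a pixel count, it is independent of crop resolution, as the text notes.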
Right and Left Pupils are Colorimetrically Identical in Healthy Children

It is possible that the bilateral differences in Saturation and Value of leukocoria in Patient Zero resulted from a photographic clustering artifact, i.e., that images were collected at a constant angle, lighting, pose, or setting, which may lead to a measurable amount of clinically irrelevant leukocoria (i.e., pseudo-leukocoria [32]). To begin to rule out this possibility, and to establish quantitative and colorimetric definitions of healthy pupillary reflexes, we measured the Hue, Saturation, and Value of right and left pupils from 19 healthy children (with no known eye diseases; 305 pupils in total; 166 left pupils; 139 right pupils; mean age = 39.5 months, median age = 20 months, as adjusted to their frequency of appearance alongside Patient Zero in photographs).

Figure 4. Comparison of Frequency of Leukocoria with Age of Patient Zero and Timeline of his Treatment. A) Number of leukocoric pictures plotted as a function of age. Inset shows expansion of age 0–135 days. B) Daily frequency of leukocoric pictures from a set of 7377 facial pictures plotted as a function of age. Inset shows expansion of age 0–135 days. C) First leukocoric pictures of patient at 12, 35, and 78 days old. D) Comparison of monthly frequency of leukocoria with treatment of patient. E) Lateral distribution of leukocoria in 7377 photographs of Patient Zero. After the first month of life, the right eye accounted for the majority of leukocoric pupils that were observed until the right eye was enucleated. doi:10.1371/journal.pone.0076677.g004
The images of these children represent a convenient set of internal controls from which we could determine the average HSV of healthy pupillary reflections and further ascertain whether the two cameras used produced high levels of clinically irrelevant "pseudo-leukocoria" (caused, for example, by reflection off the optic nerve). For instance, each control child was – by virtue of being photographed alongside Patient Zero – also photographed with the same camera as Patient Zero, under the same lighting conditions, exposure time, flash pulse duration, and aperture (Figure 2C). The HSV quantities were determined for pupils from each healthy child in the same manner as for leukocoric pictures, regardless of the gross appearance of the healthy child's pupil. The colorimetric properties of control pupils should be identical between the right and left eyes of these children, so long as no photographic clustering artifact (pseudo-leukocoria) is present in these data. Plots of the Saturation and Value of right and left control pupils are shown in Figure 5 (blue squares). Polar plots of Hue (angular) and Value (radial) of right and left control pupils are shown in Figure 6 (blue squares). The aggregate mean Hue, Saturation, and Value for all 139 right and 166 left control pupils photographed over the three-year period are listed in Table 1. These quantities represent a reasonable starting point for establishing standard colorimetric properties of pupillary reflexes of healthy children (at the age of Rb susceptibility) during digital photography. The mean Hue, Saturation, and Value for all right control pupils were nearly identical to those of the left control pupils. For example, the mean Hue of right control (RC) pupils (H_RC = 350.2°) differed by only 6.4° from that of left control (LC) pupils (H_LC = 343.8°; p>0.05); the bilateral Saturation differed by only 2% (p = 0.9845) and the Value by 6% (p = 0.4508).
The similarities in the HSV of right and left control pupils suggest that: (i) the leukocoria detected in this study is observed only in a patient with Rb and is thus clinically relevant, (ii) the cropped pupils from 19 different control subjects have similar colorimetric properties (Table 1), and, most importantly, (iii) any differences detected in the HSV quantities of right and left leukocoric pupils from Patient Zero are not caused by photographic clustering artifacts but are instead caused by clinical differences in each eye.

Figure 5. Quantification of Saturation and Value of Right and Left Leukocoric Pupils of Patient Zero and 19 Healthy Control Children. A) Digital image showing bilateral leukocoria in Patient Zero taken at the age of 199 days. B) Illustration of cylindrical HSV (Hue, Saturation, Value) color space. C) Plot of average Saturation and Value of cropped leukocoric and control pupils from right eyes of Patient Zero (red circles) and 19 control subjects (blue squares). D) Plot of average Saturation and Value of cropped leukocoric and control pupils from left eyes of patient (red circles) and 19 control subjects (blue squares). E) Saturation and Value from right and left leukocoric and control pupils (a combination of plots C and D). Images of cropped pupils are matched to enlarged data points in order to illustrate the range of Saturation and Value of leukocoric and control pupils. doi:10.1371/journal.pone.0076677.g005

Table 1. Mean HSV Quantities of Leukocoria in "Patient Zero" and Control Pupils Over 3 Years.

Quantity       | Leukocoric, left (n = 120) | Leukocoric, right (n = 146) | Control, left (n = 166) | Control, right (n = 139)
Hue (a,d)      | 21.0° (0.033°)             | 21.1° (0.215°)              | 343.8° (1.060°)         | 350.2° (0.877°)
Saturation (b) | 0.436 (0.159)              | 0.234 (0.166)               | 0.317 (0.194)           | 0.322 (0.216)
Value (c)      | 0.818 (0.168)              | 0.677 (0.166)               | 0.280 (0.210)           | 0.297 (0.216)

a For Hue of R and L non-leukocoric controls, p>0.05; for Hue of R and L leukocoric pupils, p<0.05*; p-values for Hue were calculated with the Wheeler-Watson test. b For Saturation of R and L controls, p = 0.9845; for Saturation of right and left leukocoric pupils, p<0.0001*. c For Value of R and L controls, p = 0.4508; for Value of R and L leukocoric pupils, p<0.0001*. d Error values in parentheses are standard deviations, except for those of Hue, which are circular standard deviations (CSD). CSD is a circular statistical analogue of standard deviation that measures the spread of the data points about the average center. doi:10.1371/journal.pone.0076677.t001

Statistical Significance of Bilateral and Longitudinal Differences in Hue, Saturation, and Value of Cropped Pupils from Patient Zero and Healthy Control Subjects

In order to determine whether the Saturation and Value of each set of right and left cropped pupils (from Patient Zero and healthy control subjects) were normally distributed, we performed a Shapiro-Wilk test. We did not perform a similar statistical analysis on photographs of other patients with Rb because of the small number of photographs (i.e., <10) of each child. The results of the Shapiro-Wilk test demonstrated that both the Saturation and Value of right and left pupils were characterized by a non-normal distribution (p<0.005*). The absence of a normal distribution demonstrates that a non-parametric statistical test (e.g., the Van der Waerden test) is most appropriate for comparing the statistical similarity of the Saturation or Value of cropped pupils from each eye of the patient and control subjects.
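The non-parametric machinery named above can be sketched with only the Python standard library: a two-sample Van der Waerden (normal-scores) test for Saturation or Value, and the circular mean and circular standard deviation used to summarize Hue. The Saturation samples below are synthetic stand-ins, not the paper's data (Shapiro-Wilk itself is available as scipy.stats.shapiro and is not re-implemented here).

```python
import cmath
import math
from statistics import NormalDist

def van_der_waerden(sample1, sample2):
    """Two-sample Van der Waerden (normal-scores) test; assumes no ties.

    Returns (z, two-sided p-value). Each observation's rank r in the pooled
    sample is mapped to the normal score Phi^-1(r / (n + 1)); the statistic
    is the sum of sample-1 scores, standardized by its sampling variance.
    """
    nd = NormalDist()
    combined = sorted(list(sample1) + list(sample2))
    n, n1 = len(combined), len(sample1)
    score = {x: nd.inv_cdf((i + 1) / (n + 1)) for i, x in enumerate(combined)}
    t = sum(score[x] for x in sample1)
    var = n1 * (n - n1) / (n * (n - 1)) * sum(v * v for v in score.values())
    z = t / math.sqrt(var)
    return z, 2.0 * (1.0 - nd.cdf(abs(z)))

def circular_mean_csd(angles_deg):
    """Circular mean and circular standard deviation, both in degrees."""
    z = sum(cmath.exp(1j * math.radians(a)) for a in angles_deg) / len(angles_deg)
    mean_deg = math.degrees(cmath.phase(z)) % 360.0
    csd_deg = math.degrees(math.sqrt(-2.0 * math.log(abs(z))))  # sqrt(-2 ln R)
    return mean_deg, csd_deg

right_sat = [0.21, 0.25, 0.19, 0.27, 0.23, 0.22, 0.26, 0.20]  # synthetic
left_sat = [0.41, 0.45, 0.39, 0.47, 0.43, 0.42, 0.46, 0.40]   # synthetic
z, p = van_der_waerden(right_sat, left_sat)  # well-separated samples: small p
hue_mean, hue_csd = circular_mean_csd([355.0, 5.0, 350.0, 10.0, 0.0])
```

The circular mean correctly handles hues clustered around the 0°/360° wrap-around, which is why a naive arithmetic mean of Hue would mislead; the Wheeler-Watson comparison of Hue distributions is a separate directional test not shown here.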
We therefore used the Van der Waerden test to determine p-values for Saturation and Value between right and left pupils, and to determine whether the Saturation of right leukocoric pupils is associated with the same mathematical distribution as the Saturation of left leukocoric pupils. The results demonstrate that the differences in Saturation and Value of right and left pupils from Patient Zero are statistically significant (Table 1). In order to determine whether the differences in the Hue of right and left pupils were statistically significant, we used the Wheeler-Watson test. Because the Hue of cropped pupils is expressed as a directional (circular) statistic, the Shapiro-Wilk test for normality and the Van der Waerden test – which were designed for use on non-directional data – are not applicable. The Wheeler-Watson test is a non-parametric test designed to determine statistical similarity between the distributions of different sets of directional data, and is thus appropriate for comparing the Hue of right and left eyes. The Hue of right and left pupils was not significantly different (Table 1).

The Average Hue of Leukocoria in Patient Zero is Yellow

The three-year mean Hue of right and left leukocoric pupils of Patient Zero was yellow, in comparison to right and left pupils from control subjects, which exhibited a red Hue (Figure 6). We hypothesize – but cannot prove – that the yellow Hue associated with this patient's leukocoria resulted from the chemical composition and/or surface properties of the Rb tumor. While this hypothesis is bold, it is by no means capricious. For example, the diverse chemical composition of the tapetum lucidum (e.g., guanine, collagen, or riboflavin) among nocturnal animals is thought to cause the variably colored eye-shines (i.e., retinal reflexes) that are commonly observed among these animals (ranging from blue in bovine to yellow-green in canine) [36].
The tapetum lucidum is a reflective layer of retinal tissue (not present in humans) that functions as a biologic reflector system to enhance visual sensitivity in low-light conditions [36]. Previous analyses of Rb tumors from both fundus photography and pathological analyses of surgical specimens from enucleated eyes show that Rb tumors can be white, "off-white", tan, or yellow in appearance [37]. Rb tumors (or regions of tumors) that are yellow have been associated with hemorrhage, macular yellow pigment, calcification, and necrosis [37]; however, it is possible that the yellow color we detect arises from the lipid composition of the plasma membrane of tumor cells. The lipid constituents of Rb cells have not been determined exactly and categorically, and the lipid content of cultured Rb cells can vary among different Rb−/Rb− cell lines [38]. Retinoblastoma tumors have been reported to possess increased levels of unsaturated fatty acids [39–42], as well as a higher content of cholesterol than healthy cells in the retina [43]. Intraocular cholesterosis (abnormal deposition of cholesterol) has also been reported in children with Rb after systemic chemotherapy, cryotherapy, and laser photoablation [44]. No clinical deposition of cholesterol (e.g., hard exudation) was observed in the eyes of Patient Zero. Nevertheless, the degree to which cholesterosis occurred in Patient Zero at any time throughout his lifetime is unknown, and thus we can only speculate on the cause(s) of the yellow Hue.

Figure 6. Quantification of Hue and Value of Right and Left Leukocoric Pupils of Patient Zero and 19 Healthy Control Children. A) Depiction of Hue as an angular quantity. B) Polar plots of average Hue, per pixel (angular dimension), and average Value, per pixel (radial dimension), for the right eye of the patient that exhibited leukocoria (red circles), and right eyes from 19 healthy children (blue squares).
C) Polar plots of average Hue, per pixel (angular dimension), and average Value, per pixel (radial dimension), for the left eye of the patient that exhibited leukocoria (red circles), and left eyes from 19 healthy children (blue squares). D) Combination of data points from plots B and C. doi:10.1371/journal.pone.0076677.g006

Saturation and Value of Right Pupils from Patient Zero Remain Different From Left Pupils throughout Three-Year Period of Treatment

Because the right and left eyes of Patient Zero received different types of treatment (e.g., the right eye was not treated with proton beam radiation therapy, but was instead enucleated), it is possible that the colorimetric differences between right and left leukocoria are not caused by differences in tumor surface area or position, but are instead caused by changes in the surface properties of the tumors (e.g., calcification) or retina that resulted from radiation or photoablation therapy. The calcification of the large tumor in the left eye, and the laser photoablation of the two small tumors (at 6 o'clock and 9 o'clock), can be seen in clinical images of the left retina that were obtained with fundus photography (Figure 7). In order to test the hypothesis that the bilateral colorimetric differences are caused by treatment, we compared the longitudinal changes in the HSV properties of right and left leukocoric pupils over the three-year period of photography. First, we divided images into three longitudinal groups based upon the time of photography: (i) before treatment began (Period 1, age: 0–131 days), (ii) after chemotherapy, laser photoablation, and cryotherapy (Period 2, age: 132–221 days), and (iii) after proton beam therapy and final treatment with laser photoablation therapy (Period 3, age: 259–945 days).
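The grouping into treatment periods and the per-period linear fits shown in Figure 8 can be sketched as below. The (age, Saturation) observations are synthetic stand-ins for the per-pupil values; only the period boundaries come from the text.

```python
# Bucket each observation by treatment period, then fit a least-squares
# line per period (as in the dashed fits of Figure 8).
PERIODS = [(0, 131), (132, 221), (259, 945)]  # age ranges in days, from the text

def period_of(age_days):
    """Return 1, 2, or 3 for the matching treatment period, else None."""
    for i, (lo, hi) in enumerate(PERIODS, start=1):
        if lo <= age_days <= hi:
            return i
    return None

def linear_fit(xs, ys):
    """Ordinary least-squares (slope, intercept)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# Synthetic (age in days, per-pixel Saturation) observations:
obs = [(40, 0.41), (90, 0.40), (150, 0.42), (200, 0.39), (300, 0.46), (800, 0.45)]
by_period = {}
for age, sat in obs:
    by_period.setdefault(period_of(age), []).append((age, sat))
fits = {p: linear_fit(*zip(*pts)) for p, pts in by_period.items() if len(pts) > 1}
```

A near-zero slope within each period corresponds to the longitudinal stability the authors report; the gap between periods (days 222–258) simply receives no fit.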
In order to examine the variation of the HSV of each right and left leukocoric reflection, from day to day and throughout all three time periods, we plotted the HSV of each leukocoric pupil as a function of the patient's age (Figure 8). A linear fit was applied to the HSV data points for each treatment period in each eye (dashed lines in Figure 8). The mean Saturation of left leukocoric pupils varied only from 0.400 to 0.459 (i.e., 13% variation) throughout all three periods of treatment (Table 2). The mean Saturation of the right leukocoric pupil in Periods 1 and 2 was 0.265 and 0.226, respectively (i.e., 15% variation; p = 0.1606); the right eye was enucleated before the beginning of Period 3. Throughout all three periods of treatment, the mean Value of left leukocoric pupils varied only from 0.822 to 0.835 (i.e., 2% variation); the mean Value of the right leukocoric pupil in Periods 1 and 2 was 0.641 and 0.696, respectively (i.e., 8% variation, p = 0.1407). The persistent difference in Saturation and Value between right and left leukocoric pupils throughout the entire three-year period suggests that the colorimetric differences between right and left leukocoric pupils are not caused by treatment, but are instead the result of differences in the surface area and/or the position of tumors in the right versus the left eye. In conclusion, the colorimetric properties of the right and left eyes of Patient Zero were different from each other before and after radiation therapy, and were generally stable over the three-year period of photography. The administration of proton beam therapy to the left eye cannot entirely explain the differences in the Saturation or Value of left and right leukocoric pupils. We also do not believe that the greater Value observed in the left eye arose from the exposure of sclera that resulted from photoablation of the two small tumors at 6 o'clock and 9 o'clock (see fundus photographs in Figure 7).
For example, the Value (or Saturation) of leukocoric pupils did not change significantly as a result of laser photoablation therapy and exposure of the sclera (possibly because the bare sclera and tumor reflect similarly during flash photography). We conclude that the longitudinal stability of the colorimetric properties over the three-year period of photography is due to the stabilization of growth of the predominant tumor in each eye, which was quickly accomplished for this patient by his early diagnosis at age 4 months.

The Colorimetric Differences between Right and Left Leukocoric Pupils of Patient Zero Are Not Artifacts of Photography

Determining the exact exposure time and focal aperture ("f-number") for each leukocoric image is necessary to determine whether the colorimetric differences that we detect between right and left leukocoric pupils of Patient Zero are the result of a photographic artifact, or are clinically relevant. For example, many of the leukocoric photographs of the patient did not contain bilateral leukocoria, which means that many of the right and left leukocoric pupils were contained in different photographs. It is thus possible that the colorimetric differences between right and left leukocoric reflections were caused – at least in part – by differences in the optical settings of the camera during the collection of each image (e.g., exposure time, focal aperture, flash mode, etc.). The photographic settings for each photograph are embedded as EXIF data in each JPEG file, and can be viewed when the JPEG file is analyzed in software programs such as Picasa® (Google Inc., Mountain View, CA). The average time of exposure and average focal aperture were calculated for right and left leukocoric pupils, and were found to be statistically similar. The colorimetric differences between the right and left eye are therefore clinically relevant. For example, the average time of exposure (t_exp) of the 120 photographs with left leukocoric pupils was t_exp = 16.9±4.1 msec, versus t_exp = 18.6±4.1 msec for the 146 photographs with a right leukocoric pupil. Likewise, the average focal apertures (f) were similar: f = 3.7±1.0 for photographs with left leukocoric pupils; f = 3.9±1.4 for photographs with right leukocoric pupils. The EXIF data also documented that a flash pulse (from the xenon flash tube) was emitted during the collection of every leukocoric picture.

Figure 7. Longitudinal Set of Clinical Images of the Left Retina of Patient Zero Collected with Fundus Photography and Age-Matched Leukocoria in Left Pupil. The left retina contains three tumors: one large tumor at 12 o'clock, and two smaller tumors at 6 o'clock and 9 o'clock (the two smaller tumors were treated with laser photoablation therapy, which resulted in tumor eradication and exposure of the sclera). The radiation symbol denotes the point in time when proton beam radiation therapy was administered to the left eye (age of patient is listed in days). doi:10.1371/journal.pone.0076677.g007

Figure 8. Longitudinal Plot of HSV Quantities of Leukocoria in Patient Zero. In order to project quantities of Hue in a Cartesian coordinate, we converted quantities of Hue to a linear scale. A) Plot of average HSV (per pixel) for leukocoric pupils from the right eye. A linear fit of data points was made for two time periods: before and after systemic chemotherapy. B) Plot of average HSV (per pixel) for leukocoric pupils from the left eye throughout the three-year period of photography. A linear fit of data points was made for two time periods: before the administration of proton beam radiation therapy, and after the completion of therapy (treatment timeline is listed in the right panel). doi:10.1371/journal.pone.0076677.g008

Table 2. HSV Properties of Leukocoric Pupils in "Patient Zero" During Three Periods of Treatment.

Treatment Period                                      | Left Hue (e)   | Left Sat.     | Left Value    | Right Hue      | Right Sat.    | Right Value
Period 1: age 0–131 days (n = 19 left, 37 right) (a,b)   | 16.5° (0.013°) | 0.411 (0.145) | 0.835 (0.194) | 10.7° (0.156°) | 0.265 (0.171) | 0.641 (0.145)
Period 2: age 132–221 days (n = 21 left, 109 right) (a,b,c) | 13.9° (0.048°) | 0.400 (0.195) | 0.822 (0.170) | 25.5° (0.244°) | 0.226 (0.166) | 0.696 (0.170)
Period 3: age 259–945 days (n = 77 left) (c)             | 23.8° (0.024°) | 0.459 (0.144) | 0.823 (0.157) | n/a (d)        | n/a (d)       | n/a (d)

a For the right eye, a comparison of Periods 1 and 2 yielded p-values of p>0.05 for Hue, p = 0.1606 for Saturation, and p = 0.1407 for Value. b For the left eye, a comparison of Periods 1 and 2 yielded p-values of p>0.05 for Hue, p = 0.7370 for Saturation, and p = 0.7622 for Value. c For the left eye, a comparison of treatment Periods 2 and 3 yielded p-values of p<0.05* for Hue, p = 0.1484 for Saturation, and p = 0.9930 for Value. d Treatment Period 3 was post-enucleation of the right eye. e For Hue, comparisons were made using the Wheeler-Watson test. *Entries with an asterisk indicate that the given colorimetric property is statistically different between the two sets being compared at the 0.05 significance level. Error values in parentheses are standard deviations, except for those of Hue, which are circular standard deviations (CSD); CSD is a circular statistical analogue of standard deviation that measures the spread of the data points about the average center. doi:10.1371/journal.pone.0076677.t002
To ensure that the cropping of leukocoric pupils in this study is reproducible, we had two different researchers (AA and BT) crop the entire set of photographs and quantify the pupils in HSV color space (Table 3). The mean colorimetric properties of both eyes for Patient Zero and all controls are statistically similar (p>0.05) regardless of which researcher performed the analyses (denoted as "Trial 1" and "Trial 2" in Table 3). This similarity illustrates that colorimetric variations in leukocoric and non-leukocoric pupils are not artifacts caused by variations in the practice of pupil cropping.

Leukocoria Can Occur in Spite of "Red Eye Reduction" Technology

There is growing concern among clinicians that Red Eye Reduction and Red Eye Removal technologies will inhibit the ability of a digital camera to detect leukocoria [45], and possibly cause significant delays in the diagnosis of Rb. We point out that the two cameras used to collect photographs of Patient Zero were equipped with optional "Red Eye Removal" and "Red Eye Reduction" technologies (which are two entirely different types of technology); however, these technologies were not generally used by parents in this study. For example, the parents of Patient Zero utilized the Red Eye Reduction flash mode in only ~5% of the leukocoric pictures they collected (as determined by analysis of the EXIF data for each JPEG image), and the parents did not edit any of the images with Red Eye Removal software. Red Eye Reduction technology employs a flash with a series of two light pulses: the first pulse is intended to contract the pupil (immediately prior to the collection of the image), and the second pulse provides lighting during exposure. In contrast, Red Eye Removal technology is a software feature that edits a photograph (i.e., removes red eye) after it is collected.
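The EXIF tally described above can be sketched as follows. The EXIF "Flash" tag packs bit flags (bit 0 = flash fired; bit 6 = red-eye reduction mode), so the fraction of flash pictures shot with Red Eye Reduction can be counted from raw tag values. The list of tag values below is synthetic; in practice they would be read from each JPEG's EXIF block (e.g., with Pillow).

```python
# EXIF Flash tag bit flags (per the EXIF specification):
FIRED_BIT = 0x01     # bit 0: flash fired
RED_EYE_BIT = 0x40   # bit 6: red-eye reduction mode

def red_eye_reduction_fraction(flash_values):
    """Fraction of flash-fired pictures that used red-eye reduction mode."""
    fired = [v for v in flash_values if v & FIRED_BIT]
    if not fired:
        return 0.0
    return sum(1 for v in fired if v & RED_EYE_BIT) / len(fired)

# Synthetic tag values: 20 flash-fired pictures, 1 in red-eye reduction mode.
flash_tags = [0x01] * 19 + [0x41]
fraction = red_eye_reduction_fraction(flash_tags)
```

The same per-image EXIF records also carry exposure time and f-number, which is how the averages in the preceding section would be computed.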
We conclude that the Red Eye Reduction feature did not entirely inhibit leukocoria in Patient Zero, possibly because the centrally located tumors blocked light from bombarding the retina and inhibited the contraction of the pupil during the first flash pulse.

Occurrence of Clinically Irrelevant "Pseudo-Leukocoria" During Flash Photography

In contrast to the possibility that Red Eye Reduction technology might inhibit the occurrence of leukocoria, there is also evidence suggesting that new models of digital cameras (such as the camera embedded within the Apple iPhone®) are causing clinically irrelevant leukocoria to occur in healthy children and adults (who do not have any known eye disease) at a higher rate than previous models of digital cameras. The cause(s) of this alarming increase in pseudo-leukocoria – alarming because it might cause parents to begin to overlook leukocoria that is clinically relevant – is not known. We hypothesize that this increase in pseudo-leukocoria is caused by: (i) errors in post-processing of the image after collection (i.e., "Red Eye Removal"), (ii) the type of "flash" or light source (i.e., a light-emitting diode (LED) in newer cameras such as the iPhone® vs. a xenon flash tube in older cameras), and/or (iii) the proximity of the light source to the lens (a general rule of thumb in photography is that retinal reflections are minimized by moving the flash source away from the camera lens, which is of course impossible in compact cameras). Nevertheless, the leukocoric reflections that we detect in this study are clinically relevant; that is, they are not likely to be caused by reflection off the optic nerve [32] or by artifacts of advanced camera technologies.
This conclusion is based on: (i) the absence of leukocoria in 305 images of pupils from 19 healthy control subjects, and the absence of leukocoria in adults who were photographed alongside each patient (data not shown), (ii) the correlation between the longitudinal frequency of leukocoria in Patient Zero and the progression/remission of disease, and (iii) the bilateral correlation between the lateral frequency and intensity of leukocoria and the clinical severity of each eye (Figures 4–5). In order to determine whether pseudo-leukocoria can be easily distinguished (colorimetrically) from Rb-linked leukocoria, we collected and analyzed 72 pseudo-leukocoric images (of a healthy adult male) using three different digital camera phones equipped with a flash or light source (Figures 9 and 10; see Methods for more details on the types of cameras used). We determined that pseudo-leukocoria occurred only under low-light conditions, i.e., at 0.0259±0.0074 µE/m2/s (according to measurement with a digital light meter). This low-light setting was established by simply turning off overhead cool fluorescent lights in a windowed office. We found that "pseudo-leukocoria" did not occur during flash photography in the same room when the lights were turned on, i.e., at 10.4±0.0074 µE/m2/s, in the presence of a higher light intensity. The average colorimetric quantities of pseudo-leukocoric reflections (denoted "PL" in Figure 10) from all three cameras were grouped in similar color space (i.e., Hue: 15°–30°; Saturation: 0.3–0.4; and Value: 0.9–0.7), and were similar to the HSV properties of some Rb patients with 3° leukocoria (Figure 10). The similarity between Rb-linked leukocoria and pseudo-leukocoria suggests that any leukocoria detection software engineered to alert users of digital cameras to the presence of leukocoria will need to discriminate between pseudo-leukocoria and Rb-linked leukocoria based (in part) on the higher rate at which leukocoria is likely to occur during the photography of a child with Rb compared to a healthy subject.

Table 3. Mean Colorimetric Properties Calculated From Images Containing Leukocoric and Healthy Control Pupils and Cropped by Two Different Researchers (Trials).

Eye           | Hue (a) (Trial 1) (c) | Hue (Trial 2)  | Sat. (b) (Trial 1) | Sat. (Trial 2) | Sat. % dev (d) | Value (b) (Trial 1) | Value (Trial 2) | Value % dev
R (Lk (e))    | 21.1° (0.215°)        | 17.7° (0.189°) | 0.234 (0.166)      | 0.266 (0.188)  | 12.03          | 0.677 (0.1658)      | 0.665 (0.166)   | 1.77
L (Lk)        | 21.0° (0.033°)        | 20.7° (0.034°) | 0.436 (0.159)      | 0.438 (0.156)  | 0.46           | 0.818 (0.168)       | 0.808 (0.174)   | 1.22
R&L (Lk)      | 21.1° (0.035°)        | 19.0° (0.109°) | 0.325 (0.191)      | 0.344 (0.194)  | 5.52           | 0.741 (0.181)       | 0.729 (0.184)   | 1.62
R&L (Control) | 347° (0.978°)         | 348° (0.647°)  | 0.319 (0.204)      | 0.318 (0.203)  | 0.31           | 0.288 (0.212)       | 0.267 (0.196)   | 7.29

a For each entry of Hue, the number in parentheses is the circular standard deviation (CSD) of the crops in that data set. CSD is a circular statistical analogue of standard deviation that measures the spread of the data points about the average center. b For each entry of Value and Saturation, the number in parentheses is the standard deviation of the crops in that data set. c Trial 1 was cropped by Brandon W. Taylor and Trial 2 by Alireza Abdolvahabi. d Percent deviations (% dev) are calculated as |X1 − X2| / X_greater × 100. e Lk: leukocoric. doi:10.1371/journal.pone.0076677.t003
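The discrimination idea described above can be sketched as a toy screening rule: a pupil whose average HSV falls in the pseudo-leukocoria region reported for all three cameras (Hue 15°–30°, Saturation 0.3–0.4, Value 0.7–0.9) is ambiguous on color alone, so the rule also weighs how often leukocoria recurs across a child's pictures. The 10% frequency threshold and the doubling rule for ambiguous colors are illustrative assumptions, not the paper's; the paper proposes only the general principle.

```python
def in_pseudo_leukocoria_region(h, s, v):
    """HSV region reported for pseudo-leukocoria (all three camera phones)."""
    return 15.0 <= h <= 30.0 and 0.3 <= s <= 0.4 and 0.7 <= v <= 0.9

def flag_for_referral(detections, total_pictures, min_frequency=0.10):
    """detections: list of (h, s, v) averages for pupils flagged as leukocoric.

    Color alone cannot separate pseudo- from Rb-linked leukocoria, so a
    recurring detection across many pictures is treated as the stronger
    signal; ambiguous colors require a higher recurrence (assumed rule).
    """
    if total_pictures == 0:
        return False
    ambiguous = all(in_pseudo_leukocoria_region(*d) for d in detections)
    frequency = len(detections) / total_pictures
    threshold = 2 * min_frequency if ambiguous else min_frequency
    return frequency >= threshold

# A gray, low-saturation reflection recurring in 30 of 100 pictures:
flagged = flag_for_referral([(21.0, 0.23, 0.68)] * 30, total_pictures=100)
```

A rare detection that sits squarely in the pseudo-leukocoria color region is not flagged, mirroring the text's point that occurrence rate, not color, carries the discriminating information.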
Colorimetric Analysis of Leukocoric Photographs from Eight Additional Patients with Rb

As a first step in determining the range of HSV coordinates that will generally describe Rb-linked leukocoria in recreational photography, we plotted the Hue, Value, and Saturation of leukocoric pupils for all nine patients in this study (Figure 10). As shown in Figure 10, the average Value and Saturation of leukocoric pupils from each patient are Value > 0.3 and Saturation < 0.6. The leukocoria in all but one of the patients was characterized by a red-yellow Hue. In order to begin to establish a quantitative scale of leukocoria and a means of interpreting the colorimetric properties of pupillary reflexes in digital photographs, we sectioned the Saturation-Value plane of HSV color space into five regions (Figure 10): (i) 1° leukocoria, (ii) 2° leukocoria, (iii) 3° leukocoria, (iv) "black" eye, and (v) "red" eye. The exact boundaries of the regions that we propose in this scale of leukocoria are by no means definitive, and will likely change (slightly) as we analyze more photographs from more patients. In this particular scale, first-degree leukocoria represents the most intense level of leukocoria, i.e., the brightest, least colored leukocoria. We therefore present this scale as a tentative first step in developing methods for quantitatively interpreting leukocoria.

Improving the Timing of Diagnosis of Retinoblastoma in Developing Nations with Digital Photography

Over the past decade, the timing of diagnosis of Rb in underdeveloped countries has been improved by campaigns that increase public awareness of Rb and facilitate referral of children who might have Rb [13,46].
The growing prevalence of digital cameras in developing nations (e.g., in India, which is predicted to have a compounded annual growth rate (CAGR) of ~27% in its camera market over the next five years, outpacing the market growth in the USA) [47–49], and their growing use in telemedicine [50,51], suggest that digital photography can play an increasingly large role in these types of ongoing Rb campaigns. We find it reasonable to predict that access to digital photographic devices will continue to increase for many families in developing nations [49] at a faster rate than access to pediatric clinicians who can screen for Rb with conventional methods. The photography of children by parents in resource-limited settings might, therefore, represent a rapid, economical, and effective method for decreasing the age of diagnosis of Rb in these environments.

Can the Digital Camera Help Preserve Vision of Rb Patients in Highly Developed Nations?

Diagnostic challenges also continue to exist in highly developed nations, despite the high rate of survival. Improving the timing of diagnosis in developed nations will not lead to enormous increases in the (already high) survival rate; however, removing delays in diagnosis can result in greater degrees of vision preservation [14–18]. Recent reports have described great deficiencies in the ability of pediatricians to detect Rb, possibly due to inconsistencies in the administration of the "red reflex" test, or simply because a child typically receives only ~12 examinations by a pediatrician during the first two years of life [52]. Moreover, regular examinations of a child's vision – which might also detect Rb – typically begin after the age of 3 years, which is outside the typical age of diagnosis [52].
In contrast to this limited number of pediatric examinations, a child who lives in a highly developed nation will be photographed hundreds or thousands of times by parents, guardians, relatives, or acquaintances during the first two years of life. We point out that although many individuals in developed nations own digital cameras, a significant portion do not, and their increasing access to digital photography over time might lead to improvements in the use of digital photography to detect Rb. For example, it is estimated that ~10% of individuals in the USA do not have access to a digital camera (either as a standalone camera or a camera phone) [53,54]. The use of digital cameras to detect intraocular abnormalities might represent the most economical and rapid method for improving Rb diagnosis in developed nations. The potential of digital photography cannot be overestimated, in our opinion, because the detection limits of digital photography have not even been established, and might be much higher than currently appreciated. For example, digital photography appears to be able to detect early stage Rb that presents with a "gray" pupil (Figure 3) – which might not seem abnormal to a parent or clinician – prior to the presentation of classic "white" (or yellow) leukocoria. Utilizing the full potential of digital photography in screening for Rb will likely require the development of computer software that can alert the photographer (or viewer of the image) to the presence of an abnormal pupillary reflection that might or might not be obvious to the naked eye.

Figure 9. Examples of "Pseudo-Leukocoria" (Bilateral or Unilateral) in a Healthy Adult without any Known Eye Disease. Upper panel: photograph collected with a Motorola Droid Razr®; "pseudo-leukocoria" can be seen in the right eye (unilateral). Middle panel: photograph collected with a Samsung SGH-I997®; "pseudo-leukocoria" is observable in the left eye (unilateral). Lower panel: photograph collected with an Apple iPhone 4®; "pseudo-leukocoria" can be partially seen in both eyes (bilateral). As described in the text, all photographs were taken in the same low-light conditions, i.e., intensity of 0.0259±0.0074 µE/m²/s. doi:10.1371/journal.pone.0076677.g009

Photographic Detection of Retinoblastoma PLOS ONE | www.plosone.org 12 October 2013 | Volume 8 | Issue 10 | e76677

Conclusion

The primary clinical result of this study suggests that a leukocoric photograph of a child with Rb can provide more information than a binary readout of leukocoria. We have shown that the quantities of Saturation and Value of a leukocoric pupillary reflection (in HSV color space) might be a crude metric for approximating the degree of leukocoria, which might be – for tumors in certain positions – a convenient expression of the total reflective surface area of intraocular Rb tumors. This study also shows that "low frequency" leukocoria can be overlooked more easily by parents than "high frequency" leukocoria. For example, the parents of Patient Zero did not notice leukocoria until it appeared in >60% of pictures per day, and >10% of pictures per month; in fact, approximately 3 months elapsed from the time leukocoria emerged to the time it had increased to sufficient frequency that the parents began to notice it. Increasing public awareness about leukocoria can accelerate diagnosis by preventing parents from overlooking sporadic leukocoria during the early stages of Rb. For example, although "Patient Zero" was diagnosed 5–8 months earlier than the average age of diagnosis for bilateral Rb [16,55], it is reasonable to predict that an earlier diagnosis at 12 or 35 days old – when the leukocoria first emerged – would have improved the patient's outcome.
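The per-day frequency analysis described above (the fraction of a day's photographs that show leukocoria) can be sketched as follows. The data structure and the function name are hypothetical illustrations, and the >60% threshold is simply the level at which Patient Zero's parents reportedly began to notice leukocoria.

```python
from collections import defaultdict
from datetime import date

def daily_leukocoria_frequency(photos):
    """photos: iterable of (date, is_leukocoric) pairs for one child.
    Returns {date: fraction of that day's photos showing leukocoria}."""
    counts = defaultdict(lambda: [0, 0])  # date -> [leukocoric, total]
    for day, flag in photos:
        counts[day][0] += int(flag)
        counts[day][1] += 1
    return {day: lk / total for day, (lk, total) in counts.items()}

# Illustrative library: 5 photos on one day, 4 of them leukocoric
library = [(date(2009, 3, 1), True)] * 4 + [(date(2009, 3, 1), False)]
freq = daily_leukocoria_frequency(library)

# Flag days exceeding the ~60% level at which parents began to notice
flagged = [d for d, f in freq.items() if f > 0.6]
```

The same aggregation keyed by month (rather than day) would yield the per-month frequency the authors report.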
This study examined photographs of only 9 patients with Rb (and only a single patient in longitudinal and bilateral detail); however, we suspect that the primary clinical finding of this study – that distinct ocular sets of Rb tumors produce distinct colorimetric patterns during amateur photography – will be found to be generally applicable, to some degree, to leukocoric photographs of other children with Rb. This zeroth-order approximation is based on the assumptions that: (i) the possible position of Rb tumors, in both time and space, is quite narrow (i.e., the surface area of a child's retina is ~11 cm², and Rb tumors typically form before the age of 5 years), and (ii) the photography of children by parents occurs at multiple angles, which will increase the probability that light will bombard tumors in either central or peripheral positions. We believe that the optical diversity of recreational photography – the collection of images at different angles and lighting conditions – by no means lowers its utility in Rb detection, but actually improves its applicability by increasing the probability that leukocoria will eventually be observed, regardless of the tumor position. Analyzing additional libraries of photographs – similar in size to the library of Patient Zero – from more Rb patients will be necessary to determine whether the colorimetric properties of leukocoria have general clinical relevance, and to fully interpret the clinical implications of the HSV quantities of a leukocoric image. The software that we used to analyze photographs (Adobe Photoshop®) is readily available, and the algorithm for converting RGB to HSV color space is operable in Microsoft Excel®. Researchers or clinicians without expertise in computer science should, therefore, be able to carry out the colorimetric analyses that we describe on photographs of other patients.
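For readers who prefer a scripting environment to a spreadsheet, the same RGB-to-HSV conversion (the hexcone transform of Smith [31]) is available in Python's standard library; the pixel value below is an arbitrary example of the kind of bright, weakly saturated color one might sample from a leukocoric pupil.

```python
import colorsys

# An arbitrary example pixel: bright, weakly saturated yellow-white.
r, g, b = 240, 230, 200                       # 8-bit RGB
h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)

print(round(h * 360))   # Hue in degrees        -> 45
print(round(s, 3))      # Saturation in [0, 1]  -> 0.167
print(round(v, 3))      # Value in [0, 1]       -> 0.941
```

Note that `colorsys.rgb_to_hsv` expects components scaled to [0, 1] and returns Hue as a fraction of the full circle, hence the divisions by 255 and the multiplication by 360.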
Collecting a database of HSV coordinates of leukocoria from other patients will help establish a quantitative definition and scale of leukocoria, which might prove useful for quickly approximating the clinical severity of Rb when a parent reports leukocoria. The digital photography of children in recreational settings is by no means as useful as high-resolution clinical methods for examining and imaging the retina (e.g., ophthalmoscopy and fundus photography). This technical disparity notwithstanding, the high frequency of photography of children by parents throughout the entire five-year period of Rb susceptibility, combined with the growing prevalence of digital photography, is resulting in the accumulation of enormous, longitudinal sets of images that represent crude retinal scans. The colorimetric analysis of these types of large photographic libraries – over 7,000 images for the single patient in this study – might be, as an aggregate, useful for screening or assessing Rb. We envision that the creation of computer software that can automatically detect and quantify leukocoria – within thousands of images from a parent's library of "baby pictures" or during photography or web-based social networking – will facilitate the automated and instantaneous screening of leukocoria in children throughout the entire period of their Rb susceptibility.

Figure 10. Saturation-Value Scale for Quantifying Leukocoria in Photographs of Children with Retinoblastoma. A) Sectioning the Saturation-Value plane of HSV color space into a useful scale for classifying pupillary reflexes in recreational photographs. In this proposed scale, leukocoria is divided into differing degrees of brightness and color concentration (1° being the brightest, least colored; 3° the least bright and most colored); areas that likely represent a typical "red" or "black" pupillary reflex are indicated. Each data point labeled "Rb" refers to the average H, S, or V of all leukocoric images of one of nine patients; the superscript of each label refers to the patient number (beginning with zero); subscript text refers to the right or left pupil. "PL" refers to Pseudo-Leukocoria from images of a healthy individual that were collected with one of three different camera phones; the subscript refers to the camera that was used to photograph the individual (see text). "NL" refers to Non-Leukocoric controls (average of right and left pupils) from healthy children (i.e., data contained in Figure 5 and Table 1). The value "n" below each Rb, NL, and PL point refers to the number of pictures from which each average was calculated. B) Plot showing the average Hue of cropped pupils from panel A. doi:10.1371/journal.pone.0076677.g010

Acknowledgments

The authors would like to thank the families who donated digital photographs for this study. The authors also acknowledge Dr. Erich Baker and Dr. Greg Hamerly for helpful discussions.

Author Contributions

Conceived and designed the experiments: BFS. Performed the experiments: EVS AK SM BFS. Analyzed the data: AA BWT RLH BFS. Wrote the paper: AA BWT AK CRG SM BFS.

References

1. Dimaras H, Kimani K, Dimba EA, Gronsdahl P, White A, et al. (2012) Retinoblastoma. Lancet 379: 1436–1446.
2. Zhang J, Benavente CA, McEvoy J, Flores-Otero J, Ding L, et al. (2012) A novel retinoblastoma therapy from genomic and epigenetic analyses. Nature 481: 329–334.
3. Houston SK, Murray TG, Wolfe SQ, Fernandes CE (2011) Current update on retinoblastoma. Int Ophthalmol Clin 51: 77–91.
4. Kivela T (2009) The epidemiological challenge of the most frequent eye cancer: retinoblastoma, an issue of birth and death. Br J Ophthalmol 93: 1129–1131.
5. Kiss S, Leiderman YI, Mukai S (2008) Diagnosis, classification, and treatment of retinoblastoma.
Int Ophthalmol Clin 48: 135–147.
6. Shrestha A, Adhikari RC, Saiju R (2010) Retinoblastoma in a 37 years old man in Nepal: a case report. Kathmandu Univ Med J 8: 247–250.
7. Broaddus E, Topham A, Singh AD (2009) Incidence of retinoblastoma in the USA: 1975–2004. Br J Ophthalmol 93: 21–23.
8. Broaddus E, Topham A, Singh AD (2009) Survival with retinoblastoma in the USA: 1975–2004. Br J Ophthalmol 93: 24–27.
9. Seregard S, Lundell G, Svedberg H, Kivela T (2004) Incidence of retinoblastoma from 1958 to 1998 in Northern Europe: advantages of birth cohort analysis. Ophthalmology 111: 1228–1232.
10. Swaminathan R, Rama R, Shanta V (2008) Childhood cancers in Chennai, India, 1990–2001: incidence and survival. Int J Cancer 122: 2607–2611.
11. Wessels G, Hesseling PB (1996) Outcome of children treated for cancer in the Republic of Namibia. Med Pediatr Oncol 27: 160–164.
12. Bowman RJ, Mafwiri M, Luthert P, Luande J, Wood M (2008) Outcome of retinoblastoma in east Africa. Pediatr Blood Cancer 50: 160–162.
13. Leander C, Fu LC, Pena A, Howard SC, Rodriguez-Galindo C, et al. (2007) Impact of an education program on late diagnosis of retinoblastoma in Honduras. Pediatr Blood Cancer 49: 817–819.
14. Wilson MW, Qaddoumi I, Billups C, Haik BG, Rodriguez-Galindo C (2011) A clinicopathological correlation of 67 eyes primarily enucleated for advanced intraocular retinoblastoma. Br J Ophthalmol 95: 553–558.
15. Canturk S, Qaddoumi I, Khetan V, Ma Z, Furmanchuk A, et al. (2010) Survival of retinoblastoma in less-developed countries: impact of socioeconomic and health-related indicators. Br J Ophthalmol 94: 1432–1436.
16. Rodriguez-Galindo C, Wilson MW, Chantada G, Fu L, Qaddoumi I, et al. (2008) Retinoblastoma: one world, one vision. Pediatrics 122: 763–770.
17. Epelman S (2012) Preserving vision in retinoblastoma through early detection and intervention. Curr Oncol Rep 14: 213–219.
18.
Narang S, Mashayekhi A, Rudich D, Shields CL (2012) Predictors of long-term visual outcome after chemoreduction for management of intraocular retinoblastoma. Clin Experiment Ophthalmol 40: 736–742.
19. Ventura G, Cozzi G (2012) Red reflex examination for retinoblastoma. Lancet 380: 803–804.
20. Li J, Coats DK, Fung D, Smith EO, Paysse E (2010) The detection of simulated retinoblastoma by using red-reflex testing. Pediatrics 126: 202–207.
21. Abramson DH, Beaverson K, Sangani P, Vora RA, Lee TC, et al. (2003) Screening for retinoblastoma: presenting signs as prognosticators of patient and ocular survival. Pediatrics 112: 1248–1255.
22. Maki JL, Marr BP, Abramson DH (2009) Diagnosis of retinoblastoma: how good are referring physicians? Ophthalmic Genet 30: 199–205.
23. Haider S, Qureshi W, Ali A (2008) Leukocoria in children. J Pediatr Ophthalmol Strabismus 45: 179–180.
24. Rubin MP, Mukai S (2008) Coats' disease. Int Ophthalmol Clin 48: 149–158.
25. Lim Z, Rubab S, Chan YH, Levin AV (2010) Pediatric cataract: the Toronto experience – etiology. Am J Ophthalmol 149: 887–892.
26. Shoji K, Ito N, Ito Y, Inoue N, Adachi S, et al. (2010) Is a 6-week course of ganciclovir therapy effective for chorioretinitis in infants with congenital cytomegalovirus infection? J Pediatr 157: 331–333.
27. Kumar A, Jethani J, Shetty S, Vijayalakshmi P (2010) Bilateral persistent fetal vasculature: a study of 11 cases. J AAPOS 14: 345–348.
28. Haik BG, Saint Louis L, Smith ME, Ellsworth RM, Abramson DH, et al. (1985) Magnetic resonance imaging in the evaluation of leukocoria. Ophthalmology 92: 1143–1152.
29. Smirniotopoulos JG, Bargallo N, Mafee MF (1994) Differential diagnosis of leukokoria: radiologic-pathologic correlation. Radiographics 14: 1059–1079.
30. Phan IT, Stout T (2010) Retinoblastoma presenting as strabismus and leukocoria. J Pediatr 157: 858.
31. Smith AR (1978) Color gamut transform pairs. SIGGRAPH Comput Graph 12: 12–19.
32.
Marshall J, Gole GA (2003) Unilateral leukocoria in off axis flash photographs of normal eyes. Am J Ophthalmol 135: 709–711.
33. Russell HC, Agarwal PK, Somner JE, Bowman RJ, Dutton GN (2010) Off-axis digital flash photography: a common cause of artefact leukocoria in children. J Pediatr Ophthalmol Strabismus 1: 1–3.
34. Cha SB, Shields C, Shields JA (1993) The parents of a 9-month-old black girl discovered a yellow pupil in her right eye. Retinoblastoma. J Ophthalmic Nurs Technol 12: 182–189.
35. Sussman DA, Escalona-Benz E, Benz MS, Hayden BC, Feuer W, et al. (2003) Comparison of retinoblastoma reduction for chemotherapy vs external beam radiotherapy. Arch Ophthalmol 121: 979–984.
36. Ollivier FJ, Samuelson DA, Brooks DE, Lewis PA, Kallberg ME, et al. (2004) Comparative morphology of the tapetum lucidum (among selected species). Vet Ophthalmol 7: 11–22.
37. Rodriguez-Galindo C, Wilson MW (2010) Retinoblastoma. Springer. 156 p.
38. Yorek MA, Figard PH, Kaduce TL, Spector AA (1985) A comparison of lipid metabolism in two human retinoblastoma cell lines. Invest Ophthalmol Vis Sci 26: 1148–1154.
39. Hyman BT, Spector AA (1981) Accumulation of N-3 polyunsaturated fatty acids cultured human Y79 retinoblastoma cells. J Neurochem 37: 60–69.
40. Goustard-Langelier B, Alessandri JM, Raguenez G, Durand G, Courtois Y (2000) Phospholipid incorporation and metabolic conversion of n-3 polyunsaturated fatty acids in the Y79 retinoblastoma cell line. J Neurosci Res 60: 678–685.
41. Yorek MA, Bohnker RR, Dudley DT, Spector AA (1984) Comparative utilization of n-3 polyunsaturated fatty acids by cultured human Y-79 retinoblastoma cells. Biochim Biophys Acta 795: 277–285.
42. Hyman BT, Spector AA (1982) Choline uptake in cultured human Y79 retinoblastoma cells: effect of polyunsaturated fatty acid compositional modifications. J Neurochem 38: 650–656.
43. Makky A, Michel JP, Ballut S, Kasselouri A, Maillard P, et al.
(2010) Effect of cholesterol and sugar on the penetration of glycodendrimeric phenylporphyrins into biomimetic models of retinoblastoma cells membranes. Langmuir 26: 11145–11156.
44. Gombos DS, Howes E, O'Brien JM (2000) Cholesterosis following chemoreduction for advanced retinoblastoma. Arch Ophthalmol 118: 440–441.
45. Murphy D, Bishop H, Edgar A (2012) Leukocoria and retinoblastoma: pitfalls of the digital age? Lancet 379: 2465.
46. Luna-Fineman S, Barnoya M, Bonilla M, Fu L, Baez F, et al. (2012) Retinoblastoma in Central America: report from the Central American Association of Pediatric Hematology Oncology (AHOPCA). Pediatr Blood Cancer 58: 545–550.
47. Rao S (2006) Explosive growth continues for India's mobile-phone market. Microwaves & RF 45: 22.
48. Molineux A, Cheverst K (2012) A survey of mobile vision recognition applications. In: Speech, Image, and Language Processing for Human Computer Interaction: Multi-Modal Advancements. IGI Global: Allahabad. Ch. 14, 292–308.
49. (2012) India camera market to surpass USD 3 billion (Rs. 1675 crore) by 2017, says TechSci Research. Press Release Newswire.
50. Wang S, Zhao X, Khimji I, Akbas R, Qiu W, et al. (2011) Integration of cell phone imaging with microchip ELISA to detect ovarian cancer HE4 biomarker in urine at the point-of-care. Lab Chip 11: 3411–3418.
51. Zhu H, Yaglidere O, Su TW, Tseng D, Ozcan A (2011) Cost-effective and compact wide-field fluorescent imaging on a cell-phone. Lab Chip 11: 315–322.
52. Rodriguez-Galindo C (2011) The basics of retinoblastoma: back to school. Pediatr Blood Cancer 57: 1093–1094.
53. Smith A (2013) Smartphone ownership – 2013 update. Pew Research Center: Washington, D.C.
54. Tillmann K, Ely C (2012) Digital camera market overview. Consumer Electronics Association (CEA).
55. Poulaki V, Mukai S (2009) Retinoblastoma: genetics and pathology. Int Ophthalmol Clin 49: 155–164.
work_rprrdhdfnrdizksz6ig7hz37xe ---- Digital dental photography | Semantic Scholar

DOI: 10.4103/ijdr.IJDR_396_17 | Corpus ID: 52050695

@article{Kalpana2018DigitalDP,
  title={Digital dental photography},
  author={D. Kalpana and S. Rao and J. Joseph and Sampath Raju Kumara Kurapati},
  journal={Indian Journal of Dental Research},
  year={2018},
  volume={29},
  pages={507 - 512}
}

D. Kalpana, S. Rao, J. Joseph, Sampath Raju Kumara Kurapati. Published 2018, Indian Journal of Dental Research.

Abstract: Photography has always been an integral part of dentistry. The journey goes back to the time when film photography was used only for documentation and referral purposes, and it has now evolved into digital photography. Its application in dental practice is simple, fast, and extremely useful in documenting procedures of work, educating patients, and pursuing clinical investigations, thus providing many benefits to dentists and patients. The article describes the added benefits of digital…

work_rpvg23zul5fqpgf3zmlaefmffm ---- Anaesthetic eye drops for children in casualty departments across south east England

It is a common practice to use topical anaesthetic drops to provide temporary relief and aid in the examination of the eyes when strong blepharospasm precludes thorough examination. Ophthalmology departments usually have several types of these – for example, amethocaine, oxybuprocaine (benoxinate), and proxymetacaine. The duration and degree of discomfort caused by amethocaine is significantly higher than that of proxymetacaine,1 2 whilst the difference in discomfort between amethocaine and oxybuprocaine is minimal.2 When dealing with children, therefore, it is recommended to use proxymetacaine drops.1 It was my experience that Accident & Emergency (A&E) departments tend to have less choice of these drops. This survey was done to find out the availability of different anaesthetic drops, and the preference for paediatric use given a choice of the above three. Questionnaires were sent to 40 A&E departments across south east England. Thirty-nine replied, of which one department did not see any eye casualties. None of the 38 departments had proxymetacaine. Twenty units had amethocaine alone and 10 units had oxybuprocaine alone. For paediatric use, these units were happy to use whatever drops were available within the unit. Eight units stocked amethocaine and oxybuprocaine; four of these were happy to use either of them on children and four used only oxybuprocaine.
One of the latter preferred proxymetacaine but had to contend with oxybuprocaine due to cost issues. Children are apprehensive about the instillation of any eye drops. Hence, it is desirable to use the least discomforting drops, such as proxymetacaine. For eye casualties, in the majority of District General Hospitals, A&E departments are the first port of call. Hence, A&E units need to be aware of the benefit of proxymetacaine and stock it for paediatric use.

M R Vishwanath, Department of Ophthalmology, Queen Elizabeth Queen Mother Hospital, Margate, Kent; m.vishwanath@virgin.net

doi: 10.1136/emj.2003.010645

References
1 Shafi T, Koay P. Randomised prospective masked study comparing patient comfort following the instillation of topical proxymetacaine and amethocaine. Br J Ophthalmol 1998;82(11):1285–7.
2 Lawrenson JG, Edgar DF, Tanna GK, et al. Comparison of the tolerability and efficacy of unit-dose, preservative-free topical ocular anaesthetics. Ophthalmic Physiol Opt 1998;18(5):393–400.

Training in anaesthesia is also an issue for nurses

We read with interest the excellent review by Graham.1 An important related issue is the training of the assistant to the emergency physician. We wished to ascertain whether use of an emergency nurse as an anaesthetic assistant is common practice. We conducted a short telephone survey of the 12 Scottish emergency departments with attendances of more than 50 000 patients per year. We interviewed the duty middle grade doctor about usual practice in that department. In three departments, emergency physicians will routinely perform rapid sequence intubation (RSI), the assistant being an emergency nurse in each case. In nine departments an anaesthetist will usually be involved or emergency physicians will only occasionally perform RSI. An emergency nurse will assist in seven of these departments. The Royal College of Anaesthetists2 have stated that anaesthesia should not proceed without a skilled, dedicated assistant.
This also applies in the emergency department, where standards should be comparable to those in theatre.3 The training of nurses as anaesthetic assistants is variable and is the subject of a Scottish Executive report.4 This consists of at least a supernumerary in-house program of 1 to 4 months. Continued professional development and at least 50% of working time devoted to anaesthetic assistance follow this.4 The Faculty of Emergency Nursing has recognised that anaesthetic assistance is a specific competency. We think that this represents an important progression. The curriculum is, however, still in its infancy and is not currently a requirement for emergency nurses (personal communication with L McBride, Royal College of Nursing). Their assessment of competence in anaesthetic assistance is portfolio based and not set against specified national standards (as has been suggested4). We are aware of one-day courses to familiarise nurses with anaesthesia (personal communication with J McGowan, Southern General Hospital). These are an important introduction, but are clearly incomparable to formal training schemes. While Graham has previously demonstrated the safety of emergency physician anaesthesia,5 we suggest that when anaesthesia does prove difficult, a skilled assistant is of paramount importance. Our small survey suggests that the use of emergency nurses as anaesthetic assistants is common practice. If, perhaps appropriately, RSI is to be increasingly performed by emergency physicians,5 then the training of the assistant must be concomitant with that of the doctor. Continued care of the anaesthetised patient is also a training issue1 and applies to nurses as well. Standards of anaesthetic care need to be independent of its location and provider.
R Price, Department of Anaesthesia, Western Infirmary, Glasgow: Gartnavel Hospital, Glasgow, UK
A Inglis, Department of Anaesthesia, Southern General Hospital, Glasgow, UK

Correspondence to: R Price, Department of Anaesthesia, 30 Shelley Court, Gartnavel Hospital, Glasgow, G12 0YN; rjp@doctors.org.uk

doi: 10.1136/emj.2004.016154

References
1 Graham CA. Advanced airway management in the emergency department: what are the training and skills maintenance needs for UK emergency physicians? Emerg Med J 2004;21:14–19.
2 Guidelines for the provision of anaesthetic services. Royal College of Anaesthetists, London, 1999. http://www.rcoa.ac.uk/docs/glines.pdf.
3 The Role of the Anaesthetist in the Emergency Service. Association of Anaesthetists of Great Britain and Ireland, London, 1991. http://www.aagbi.org/pdf/emergenc.pdf.
4 Anaesthetic Assistance. A strategy for training, recruitment and retention and the promulgation of safe practice. Scottish Medical and Scientific Advisory Committee. http://www.scotland.gov.uk/library5/health/aast.pdf.
5 Graham CA, Beard D, Oglesby AJ, et al. Rapid sequence intubation in Scottish urban emergency departments. Emerg Med J 2003;20:3–5.

Ultrasound Guidance for Central Venous Catheter Insertion

We read Dunning's BET report with great interest.1 As Dunning himself acknowledges, most of the available literature concerns the insertion of central venous catheters (CVCs) by anaesthetists (and also electively). However, we have found that this data does not necessarily apply to the critically ill emergency setting, as illustrated by the study looking at emergency medicine physicians,2 where ultrasound did not reduce the complication rate. The literature does not distinguish between potentially life-threatening complications and those with unwanted side effects. An extra attempt or prolonged time for insertion, whilst unpleasant, has a minimal impact on the patient's eventual outcome.
However, a pneumothorax could prove fatal to a patient with impending cardio-respiratory failure. Some techniques – for example, high internal jugular vein – have much lower rates of pneumothorax. Furthermore, some techni- ques use an arterial pulsation as a landmark. Such techniques can minimise the adverse effect of an arterial puncture as pressure can be applied directly to the artery. We also share Dunning’s doubt in the National Institute for Clinical Excellence (NICE) guidance’s claim that the cost-per- use of an ultrasound could be as low as £10.3 NICE’s economic analysis model assumed that the device is used 15 times a week. This would mean sharing the device with another department, clearly unsatisfactory for most emergency situations. The cost per complica- tion prevented would be even greater. (£500 in Miller’s study, assuming 2 fewer complica- tions per 100 insertions). Finally, the NICE guidance is that ‘‘appro- priate training to achieve competence’’ is LETTERS PostScript. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Accepted for publication 27 May 2004 Accepted for publication 27 May 2004 608 Emerg Med J 2005;22:608–610 www.emjonline.com o n A p ril 5 , 2 0 2 1 b y g u e st. P ro te cte d b y co p yrig h t. h ttp ://e m j.b m j.co m / E m e rg M e d J: first p u b lish e d a s 1 0 .1 1 3 6 /e m j.2 0 0 3 .0 1 3 3 9 1 o n 2 6 Ju ly 2 0 0 5 . D o w n lo a d e d fro m http://emj.bmj.com/ undertaken. We are sure that the authors would concur that the clinical scenario given would not be the appropriate occasion to ‘‘have a go’’ with a new device for the first time. In conclusion, we believe that far more important than ultrasound-guided CVC insertion, is the correct choice of insertion site to avoid those significant risks, which the critically-ill patient would not tolerate. 
M Chikungwa, M Lim. Correspondence to: M Chikungwa; mchikungwa@msn.com doi: 10.1136/emj.2004.015156

References
1 Dunning J, Williamson J. Ultrasonic guidance and the complications of central line placement in the emergency department. Emerg Med J 2003;20:551–552.
2 Miller AH, Roth BA, Mills TJ, et al. Ultrasound guidance versus the landmark technique for the placement of central venous catheters in the emergency department. Acad Emerg Med 2002;9:800–5.
3 National Institute for Clinical Excellence. Guidance on the use of ultrasound locating devices for placing central venous catheters. Technology appraisal guidance no 49, 2002. http://www.org.uk/cat.asp?c=36752 (accessed 24 Dec 2003).

Patients' attitudes toward medical photography in the emergency department

Advances in digital technology have made the use of digital images increasingly common for the purposes of medical education.1 The high turnover of patients in the emergency department, many of whom have striking visual signs, makes this an ideal location for digital photography. These images may eventually be used for the purposes of medical education in presentations, and in book or journal format.2 3 As a consequence, patients' images may be seen by the general public on the internet, as many journals now have open access internet sites. From an ethical and legal standpoint it is vital that patients give informed consent for the use of images in medical photography, and are aware that such images may be published on the world wide web.4 The aim of this pilot study was to investigate patients' attitudes toward medical photography as a guide to consent and usage of digital photography within the emergency department. A patient survey questionnaire was designed to establish whether patients would consent to their image being taken, which part(s) of their body they would consent to being photographed, and whether they would allow these images to be published in a medical book, journal, and/or on the internet.
All patients attending the minors section of an inner city emergency department between 1 January 2004 and 30 April 2004 were eligible for the study. Patients were included if aged over 18 and having a Glasgow coma score of 15. Patients were excluded if in moderate or untreated pain, needing urgent treatment, or unable to read or understand the questionnaire. All patients were informed that the questionnaire was anonymous and would not affect their treatment. Data were collected by emergency department Senior House Officers and Emergency Nurse Practitioners. 100 patients completed the questionnaire. The results are summarised below:

Q1 Would you consent to a photograph being taken in the Emergency Department of you/part of your body for the purposes of medical education? Yes 84%, No 16%. 21% replied Yes to all forms of consent, 16% replied No to all forms of consent, while 63% replied Yes with reservations for particular forms of consent.

Q2 Would you consent to the following body part(s) being photographed (head, chest, abdomen, limbs and/or genitalia)? The majority of patients consented to all body areas being photographed except for genitalia (41% Yes, 59% No), citing invasion of privacy and embarrassment.

Q3 Would you consent to your photo being published in a medical journal, book or internet site? The majority of patients gave consent for publication of images in a medical journal (71%) or book (70%), but were more likely to refuse consent for use of images on internet medical sites (47% Yes, 53% No or unsure).

In determining the attitudes of patients presenting to an inner city London emergency department regarding the usage of photography, we found that the majority of patients were amenable to having their images used for the purposes of medical education. The exceptions to this were the picturing of genitalia and the usage of any images on internet medical sites/journals.
The findings of this pilot study are limited to data collection in a single emergency department in central London. A particular flaw of this survey is the lack of correlation between age, sex, ethnicity, and consent for photography. Further study is ongoing to investigate this. There have been no studies published about patients' opinions regarding medical photography to date. The importance of obtaining consent for publication of patient images and concealment of identifying features has been stressed previously.5 This questionnaire study emphasises the need to investigate patients' beliefs and concerns prior to consent.

A Cheung, M Al-Ausi, I Hathorn, J Hyam, P Jaye, Emergency Department, St Thomas' Hospital, UK. Correspondence to: Peter Jaye, Consultant in Emergency Medicine, St Thomas' Hospital, Lambeth Palace Road, London SE1 7RH, UK; peter.jaye@gstt.nhs.uk doi: 10.1136/emj.2004.019893

References
1 Mah ET, Thomsen NO. Digital photography and computerisation in orthopaedics. J Bone Joint Surg Br 2004;86(1):1–4.
2 Clegg GR, Roebuck S, Steedman DJ. A new system for digital image acquisition, storage and presentation in an accident and emergency department. Emerg Med J 2001;18(4):255–8.
3 Chan L, Reilly KM. Integration of digital imaging into emergency medicine education. Acad Emerg Med 2002;9(1):93–5.
4 Hood CA, Hope T, Dove P. Videos, photographs, and patient consent. BMJ 1998;316:1009–11.
5 Smith R. Publishing information about patients. BMJ 1995;311:1240–1.

Unnecessary tetanus boosters in the ED

It is recommended that five doses of tetanus toxoid provide lifelong immunity and that 10-yearly doses are not required beyond this.1 National immunisation against tetanus began in 1961, providing five doses (three in infancy, one preschool and one on leaving school).2 Coverage is high, with uptake over 90% since 1990.2 Therefore, the majority of the population under the age of 40 are fully immunised against tetanus.
Td (tetanus toxoid/low dose diphtheria) vaccine is often administered in the Emergency Department (ED) following a wound or burn, based upon the patient's recollection of their immunisation history. Many patients and staff may believe that doses should still be given every 10 years. During summer 2004, an audit of tetanus immunisation was carried out at our department. The records of 103 patients who had received Td in the ED were scrutinised and a questionnaire was sent to each patient's GP requesting information about the patient's tetanus immunisation history before the dose given in the ED. Information was received for 99 patients (96% response). In 34/99, primary care records showed the patient was fully immunised before the dose given in the ED. One patient had received eight doses before the ED dose and two patients had been immunised less than 1 year before the ED dose. In 35/99, records suggested that the patient was not fully immunised; however, in this group few records were held before the early 1990s and it is possible some may have had five previous doses. In 30/99 there were no tetanus immunisation records. In 80/99 no features suggesting the wound was tetanus prone were recorded. These findings suggest that some doses of Td are unnecessary. Patients' recollections of their immunisation history may be unreliable. We have recommended that during working hours, the patient's general practice should be contacted to check immunisation records. Out of hours, if the patient is under the age of 40 and the wound is not tetanus prone (as defined in DoH guidance1), the general practice should be contacted as soon as possible and the immunisation history checked before administering Td. However, we would like to emphasise that wound management is paramount, and that where tetanus is a risk in a patient who is not fully immunised, a tetanus booster will not provide effective protection against tetanus.
In these instances, tetanus immunoglobulin (TIG) also needs to be considered (and is essential for tetanus prone wounds). In the elderly and other high-risk groups, for example intravenous drug abusers, the need for a primary course of immunisation against tetanus should be considered, not just a single dose, and follow up with the general practice is therefore needed. The poor state of many primary care immunisation records is a concern, and this may argue in favour of centralised immunisation records or a patient electronic record to protect patients against unnecessary immunisations as well as tetanus.

Accepted for publication 25 February 2004
Accepted for publication 12 October 2004

T Burton, S Crane, Accident and Emergency Department, York Hospital, Wigginton Road, York, UK. Correspondence to: Dr T Burton, 16 Tom Lane, Fulwood, Sheffield, S10 3PB; tomandjackie@doctors.org.uk doi: 10.1136/emj.2004.021121

References
1 CMO. Replacement of single antigen tetanus vaccine (T) by combined tetanus/low dose diphtheria vaccine for adults and adolescents (Td) and advice for tetanus immunisation following injuries. In: Update on immunisation issues PL/CMO/2002/4. London: Department of Health, August 2002.
2 http://www.hpa.org.uk/infections/topics_az/tetanus.

Accepted for publication 20 October 2004

BOOK REVIEWS

and Technical Aspects be comprehensive with regards to disaster planning? Can it provide me with what I need to know? I was confused by the end of my involvement with this book, or perhaps overwhelmed by the enormity of the importance of non-medical requirements, such as engineering and technical expertise, in planning for and managing environmental catastrophes. Who is this book for?
I am still not sure. The everyday emergency physician? I think not. It serves a purpose in being educational about what is required, in a generic sort of way, when planning for disasters. Would I have turned to it last year during the SARS outbreak? No. When I feared a bio-terrorism threat? No. When I watched helplessly the victims of the latest Iranian earthquake? No. To have done so would have been to participate in some form of voyeurism on other people's misery. Better to embrace the needs of those victims of environmental disasters in some tangible way than rush to the book shelf to brush up on some aspect of care which is so remote for the majority of us in emergency medicine.

J Ryan, St Vincent's Hospital, Ireland; j.ryan@st-vincents.ie

Neurological Emergencies: A Symptom Orientated Approach
Edited by G L Henry, A Jagoda, N E Little, et al. McGraw-Hill Education, 2003, £43.99, pp 346. ISBN 0071402926

The authors set out with very laudable intentions. They wanted to get the ''maximum value out of both professional time and expensive testing modalities''. I therefore picked up this book with great expectations: the prospect of learning a better and more memorable way of dealing with neurological cases in the emergency department. The chapter headings (14 in number) seemed to identify the key points I needed to know, and the size of the book (346 pages) indicated that it was possible to read. Unfortunately things did not start well. The initial chapter on basic neuroanatomy mainly used diagrams from other books. The end result was areas of confusion where the text did not entirely marry up with the diagrams. The second chapter, dealing with evaluating the neurological complaint, was better and had some useful tips. However, the format provided a clue as to how the rest of the book was to take shape: mainly text and lists.
The content of this book was reasonably up to date, and if you like learning neurology by reading text and memorising lists then this is the book for you. Personally I would not buy it. I felt it was a rehash of a standard neurology textbook and missed a golden opportunity of being a comprehensible text on emergency neurology, written by emergency practitioners for emergency practitioners.

P Driscoll, Hope Hospital, Salford, UK; peter.driscoll@srht.nhs.uk

Emergency medicine procedures
E Reichmann, R Simon. New York: McGraw-Hill, 2003, £120, pp 1563. ISBN 0-07-136032-8

This book has 173 chapters, allowing each chapter to be devoted to a single procedure, which, coupled with a clear table of contents, makes finding a particular procedure easy. This will be appreciated mostly by the emergency doctor on duty needing a rapid ''refresher'' for infrequently performed skills. ''A picture is worth a thousand words'' was never so true as when attempting to describe invasive procedures. The strength of this book lies in the clarity of its illustrations, which number over 1700 in total. The text is concise but comprehensive. Anatomy, pathophysiology, indications and contraindications, equipment needed, technicalities, and complications are discussed in a standardised fashion for each chapter. The authors, predominantly US emergency physicians, mostly succeed in refraining from quoting the ''best method'' and provide balanced views of alternative techniques. This is well illustrated by the shoulder reduction chapter, which pictorially demonstrates 12 different ways of reducing an anterior dislocation. In fact, the only notable absentee is the locally preferred Spaso technique. The book covers every procedure that one would consider in the emergency department and many that one would not. Fish hook removal, zipper injury, contact lens removal, and emergency thoracotomy are all explained with equal clarity.
The sections on soft tissue procedures, arthrocentesis, and regional analgesia are superb. In fact, by the end of the book, I was confident that I could reduce any paraphimosis, deliver a baby, and repair a cardiac wound. However, I still had nagging doubts about my ability to aspirate a subdural haematoma in an infant, repair the great vessels, or remove a rectal foreign body. Reading the preface again, I was relieved. The main authors acknowledge that some procedures are for ''surgeons'' only and are included solely to improve the understanding by ''emergentologists'' of procedures that may present with late complications. These chapters are unnecessary, while others would be better placed in a pre-hospital text. Thankfully, they are relatively few in number, with the vast majority of the book being directly relevant to emergency practice in the UK. Weighing approximately 4 kg, this is undoubtedly a reference text. The price (£120) will deter some individuals, but it should be considered an essential reference book for SHOs, middle grades, and consultants alike. Any emergency department would benefit from owning a copy.

J Lee

Environmental Health in Emergencies and Disasters: A Practical Guide
Edited by B Wisner, J Adams. World Health Organization, 2003, £40.00, pp 252. ISBN 9241545410

I have the greatest admiration for doctors who dedicate themselves to disaster preparedness and intervention. For most doctors there will, thank god, rarely be any personal involvement in environmental emergencies and disasters. For the others who are involved, the application of this branch of medicine must be some form of ''virtual'' game of medicine, lacking in visible, tangible gains for the majority of their efforts. Reading this World Health Organization publication has, however, changed my perception of the importance of emergency planners, administrators, and environmental technical staff.
I am an emergency physician, blinkered by measuring the response of interventions in real time: is the peak flow better after the nebuliser? Is the pain less with intravenous morphine? But if truth be known, it is the involvement of public health doctors and emergency planners that makes the biggest impact in saving lives worldwide. This book served to demonstrate to me my ignorance on matters of disaster responsiveness. But can 252 pages of General Aspects

Technical note: A simple approach for efficient collection of field reference data for calibrating remote sensing mapping of northern wetlands

Magnus Gålfalk1, Martin Karlson1, Patrick Crill2, Philippe Bousquet3, David Bastviken1
1Department of Thematic Studies – Environmental Change, Linköping University, 581 83 Linköping, Sweden.
2Department of Geological Sciences, Stockholm University, 106 91 Stockholm, Sweden.
3Laboratoire des Sciences du Climat et de l'Environnement (LSCE), Gif sur Yvette, France.
Correspondence to: Magnus Gålfalk (magnus.galfalk@liu.se)

Abstract. The calibration and validation of remote sensing land cover products is highly dependent on accurate field reference data, which are costly and practically challenging to collect. We describe an optical method for collection of field reference data that is a fast, cost-efficient, and robust alternative to field surveys and UAV imaging.
A lightweight, waterproof, remote-controlled RGB camera (GoPro) was used to take wide-angle images from 3.1–4.5 m height using an extendable monopod, as well as representative near-ground (< 1 m) images to identify spectral and structural features that correspond to various land covers under the prevailing lighting conditions. A semi-automatic classification was made based on six surface types (graminoids, water, shrubs, dry moss, wet moss, and rock). The method enables collection of detailed field reference data, which is critical in many remote sensing applications, such as satellite-based wetland mapping. The method uses common, inexpensive equipment, does not require special skills or education, and is facilitated by a step-by-step manual that is included in the supplementary information. Over time a global ground cover database can be built that is relevant for ground truthing of wetland studies from satellites such as Sentinel 1 and 2 (10 m pixel size).

1 Introduction

Accurate and timely land cover data are important for economic, political, and environmental assessments, and for societal and landscape planning and management. The capacity for generating land cover data products from remote sensing is developing rapidly. There has been an exponential increase in launches of new satellites with improved sensor capabilities, including shorter revisit time, larger area coverage, and increased spatial resolution (Belward & Skøien 2015). Similarly, the development of land cover products is increasingly supported by the progress in computing capacities and machine learning approaches. However, at the same time it is clear that knowledge of the Earth's land cover is still poorly constrained. For example, a comparison between multiple state-of-the-art land cover products for West Siberia revealed disturbing uncertainties (Frey and Smith 2007).
Estimated wetland areas ranged from 2–26% of the total area, and the correspondence with in situ observations for wetlands was only 2–56%. For lakes, all products revealed similar area cover (2–3%), but the agreement with field observations was as low as 0–5%. Hence, in spite of the progress in technical capabilities and data analysis, there are apparently fundamental factors that still need consideration to obtain accurate land cover information. The West Siberia example is not unique. Current estimates of the global wetland area range from 8.6 to 26.9 x 10^6 km2, with great inconsistencies between different data products (Melton et al. 2013). The uncertainty in wetland distribution has multiple consequences, including being a major bottleneck for constraining assessments of global methane (CH4) emissions, which was the motivation for this area comparison. Wetlands and lakes are the largest natural CH4 sources (Saunois et al. 2016), and available evidence suggests that these emissions can be highly climate sensitive, particularly at northern latitudes, which are predicted to experience the highest temperature increases and melting permafrost, both contributing to higher CH4 fluxes (Yvon-Durocher et al. 2014; Schuur et al. 2009). CH4 fluxes from plant functional groups in northern wetlands can differ by orders of magnitude. Small wet areas dominated by emergent graminoid plants account for by far the highest fluxes per m2, while the more widespread areas covered by e.g. Sphagnum mosses have much lower CH4 emissions per m2 (e.g. Bäckstrand et al. 2010). The fluxes associated with the heterogeneous and patchy (i.e. mixed) land cover in northern wetlands are well understood on the local plot scale, whereas the large-scale extrapolations are very uncertain.
The two main reasons for this uncertainty are that the total wetland extent is unknown and that present map products do not distinguish between the different wetland habitats which control fluxes and flux regulation. As a consequence, the whole source attribution in the global CH4 budget remains highly uncertain (Kirschke et al. 2013; Saunois et al. 2016). To resolve this, improved land cover products relevant for CH4 fluxes and their regulation are needed. The detailed characterization of wetland features or habitats requires the use of high resolution satellite data and sub-pixel classification that quantifies percent, or fractional, land cover. A fundamental bottleneck for the development of fractional land cover products is the quantity and quality of the ground truth, or reference, data used for calibration and validation (Foody 2013; Foody et al. 2016). While the concept ''ground truth'' suggests a perfectly represented reality, 100% accurate reference data do not exist. In fact, reference data can often be any data available at higher resolution than the data product, including other satellite imagery and airborne surveys, in addition to field observations. In turn, field observations can range from rapid landscape assessments to detailed vegetation mapping in inventory plots, where the latter yields high resolution and high-quality data but is very expensive to generate in terms of time and manpower (Olofsson et al. 2014; Frey & Smith 2007). Ground-based reference data for fractional land cover mapping can be acquired using traditional methods, such as visual estimation, point frame assessment or digital photography (Chen et al. 2010). These methods can be applied using a transect approach to increase the area coverage in order to match the spatial resolutions of different satellite sensors (Mougin et al. 2014).
The application of digital photography and image analysis software has shown promise for enabling rapid and objective measurements of fractional land cover that can be repeated over time for comparative analysis (Booth et al. 2006a). While several geometrical corrections and photometric setups are used, nadir (downward facing) and hemispherical view photography are most common, and the selected setup depends on the height structure of the vegetation (Chen et al. 2010). However, most previous research has focused on distinguishing between major general categories, such as vegetation and non-vegetation (Laliberte et al. 2007; Zhou & Liu 2015), and is typically not used to characterize more subtle patterns within major land cover classes. Many applications in the literature have been in rangeland, while wetland classification is lacking. Furthermore, images have mainly been close-up images taken from a nadir view perspective (Booth et al. 2006a; Chen et al. 2010; Zhou & Liu 2015), thereby limiting the spatial extent to well below the pixel size of satellite systems suitable for regional scale mapping. From a methano-centric viewpoint, accurate reference data at high enough resolution, able to separate wetland (and upland) habitats with differing flux levels and regulation, are needed to facilitate progress with available satellite sensors. The resolution should preferably be better than 1 m2, given how the high emitting graminoid areas are scattered on the wettest spots where emergent plants can grow. Given this need, we propose a quick and simple type of field assessment adapted for the 10 x 10 m pixels of the Sentinel 1 and 2 satellites.
Our method uses true color images of the ground, followed by image analysis to distinguish fractional cover of key land cover types relevant for CH4 fluxes from northern wetlands, where we focus on a few classes that differ in their CH4 emissions. We provide a simple manual allowing anyone to take the photos needed in a few minutes per field plot. Land cover classification can then be made using the Red-Green-Blue (RGB) field images (sometimes also converting them to the Intensity-Hue-Saturation (IHS) color space) with software such as CAN-EYE (Weiss & Baret 2010), VegMeasure (Johnson et al. 2003), SamplePoint (Booth et al. 2006b), or eCognition (Trimble commercial software). With this simple approach it would be quick and easy for the community to share such images online and to generate a global reference database that can be used for land cover classification relevant to wetland CH4 fluxes, or other purposes depending on the land cover classes used. We use our own routines written in Matlab due to the large field of view used in the method, in order to correct for the geometrical perspective when calculating areas (to speed up the development of a global land cover reference database, we can do the classification on request if all necessary parameters and images are available as given in our manual).

2 Field work

The camera setup is illustrated in Fig. 1, with lines showing the spatial extent of a field plot. Our equipment included a lightweight RGB camera (GoPro 4 Hero Silver; other types of cameras with remote control and a suitably wide field of view would also work) mounted on an extendable monopod that allows imaging from a height of 3.1–4.5 m. The camera had a resolution of 4000 x 3000 pixels with a wide field of view (FOV) of 122.6 x 94.4 deg.
and was remotely controlled over Bluetooth using a mobile phone application that allows a live preview, making it possible to always include the horizon close to the upper edge in each image (needed for later image processing; see below). The camera had a waterproof casing and could therefore be used in rainy conditions, making the method robust to variable weather. Measurements were made for about 200 field plots in northern Sweden during 6–8 September 2016.

Figure 1: A remotely controlled wide-field camera mounted on a long monopod captures the scene in one shot, from above the horizon down to nadir. After using the horizon image position to correct for the camera angle, a 10 x 10 m area close to the camera is used for classification.

For each field plot, the following was recorded:
• One image taken at > 3.1 m height (see illustration in Fig. 1) which includes the horizon coordinate close to the top of the image.
• 3–4 close-up images of common surface cover in the plot (e.g. typical vegetation).
• GPS position of the camera location (reference point).
• Notes of the image direction relative to the reference point.

The long monopod was made from two ordinary extendable monopods taped together, with a GoPro camera mount at the end. The geographic coordinate of the camera position was registered using a handheld Garmin Oregon 550 GPS with a horizontal accuracy of approximately 3 m. The positional accuracy of the images can be improved by using a differential GPS and by registering the cardinal direction of the FOV. The camera battery typically lasted a few hours on a full charge, and was recharged when not in use, e.g. when moving between field sites.
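The per-plot checklist above maps naturally onto a small record structure if the images are to be shared in a common database. A minimal Python sketch; all field names and example values here are our own suggestions, not from the paper's manual:

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class FieldPlot:
    """One reference plot, mirroring the per-plot checklist.
    Field names and example values are illustrative only."""
    plot_id: str
    gps_position: Tuple[float, float]      # camera location (lat, lon), ~3 m accuracy
    camera_height_m: float                 # 3.1-4.5 m in this study
    view_direction: str                    # noted relative to the reference point
    wide_image: str                        # the single >3.1 m shot including the horizon
    closeup_images: List[str] = field(default_factory=list)  # 3-4 typical surfaces

# Hypothetical plot record
plot = FieldPlot(
    plot_id="plot-001",
    gps_position=(68.35, 19.05),
    camera_height_m=3.1,
    view_direction="NE",
    wide_image="plot-001/wide.jpg",
    closeup_images=["plot-001/close-1.jpg", "plot-001/close-2.jpg"],
)
```

A shared schema of this kind would make the envisioned global reference database straightforward to query by satellite footprint.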
3 Image processing and models

As the camera has a very wide FOV, the raw images have strong lens distortion (Fig. 2). This can be corrected for most camera models (e.g. the GoPro series) using either commercial software, such as Adobe Lightroom or Photoshop (which we used), or distortion models in programming languages (e.g. Matlab).

Figure 2: Correction of lens distortion. (A) Raw wide-field camera image. (B) After correction.

Using a distortion-corrected calibration image, we developed a model of the ground geometry by projecting and fitting a 10 x 10 m grid on a parking lot with measured distances marked using chalk (Fig. 3). The geometric model uses the camera FOV, camera height, and the vertical coordinate of the horizon (to obtain the camera angle). We find excellent agreement between the modeled and measured grids (fits are within a few centimeters) for both camera heights of 3.1 and 4.5 m. The vertical angle α from nadir to a point in the grid at ground distance Y along the center line is given by α = arctan(Y/h), where h is the camera height. For distance points in our calibration image (Fig. 3), using 0.2 m steps in the range 0–1 m and 1 m steps from 1 to 10 m, we calculate the nadir angles α(Y) and measure the corresponding vertical image coordinates y_calib(Y).

Figure 3: Calibration of projected geometry using an image corrected for lens distortion.
Model geometry is shown as white numbers and a white grid, while green and red numbers are written on the ground using chalk (red lines at 2 and 4 m left of the center line were strengthened for clarity). The camera height in this calibration measurement is 3.1 m.

In principle, for any distortion-corrected image there is a simple relationship y_img(α) = (α(Y) − α_0)/PFOV, where y_img is the image vertical pixel coordinate for a certain distance Y, PFOV the pixel field of view (deg pixel^-1), and α_0 the nadir angle of the bottom image edge. In practice, however, the correction for lens distortion is not perfect, so we have fitted a polynomial in the calibration image to obtain y_calib(α) from the known α and measured y_calib. Using this function we can then obtain the y_img coordinate in any subsequent field image using

y_img = y_calib(α + PFOV_hor · (y_img^hor − y_calib^hor))     (1)

where y_img^hor and y_calib^hor are the vertical image coordinates of the horizon in the field and calibration images, respectively. As the PFOV varies by a small amount across the image due to small deviations in the lens distortion correction, we have used PFOV_hor, which is the pixel field of view at the horizon coordinate. In short, the shift in horizon position between the field and calibration images is used to compensate for the camera having different tilts in different images. In order to obtain correct ground geometry it is therefore important to always include the horizon in all images.
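The calibration fit and the tilt compensation of Eq. (1) are straightforward to reproduce. The authors' scripts are in Matlab; the sketch below is a Python/NumPy illustration that assumes rows counted from the bottom image edge, angles in degrees, and invented numbers for the toy calibration:

```python
import numpy as np

def fit_calibration(Y_m, y_rows, h_calib_m, deg=3):
    """Fit a polynomial y_calib(alpha) from chalk-grid ground distances
    Y_m (m) and their measured rows y_rows in the distortion-corrected
    calibration image. alpha = arctan(Y/h) is the nadir angle."""
    alpha = np.degrees(np.arctan2(np.asarray(Y_m, float), h_calib_m))
    return np.poly1d(np.polyfit(alpha, np.asarray(y_rows, float), deg))

def y_field(alpha_deg, y_calib, pfov_hor, y_hor_field, y_hor_calib):
    """Eq. (1): row of a ground point in a *field* image. The horizon-row
    difference times the per-pixel FOV at the horizon (deg/pixel) turns
    the tilt difference between the two images into an angular offset."""
    return y_calib(alpha_deg + pfov_hor * (y_hor_field - y_hor_calib))

# Toy example: rows exactly linear in alpha at 25 px/deg (a real lens
# needs the polynomial), camera height 3.1 m, made-up horizon rows.
Y = np.array([0.2, 0.4, 0.6, 0.8, 1.0, 2, 3, 4, 5, 6, 7, 8, 9, 10])
rows = 25.0 * np.degrees(np.arctan2(Y, 3.1))
y_calib = fit_calibration(Y, rows, 3.1, deg=2)

alpha10 = np.degrees(np.arctan(10.0 / 3.1))     # nadir angle of the 10 m mark
same_tilt = y_field(alpha10, y_calib, 0.04, 2500.0, 2500.0)
tilted = y_field(alpha10, y_calib, 0.04, 2550.0, 2500.0)  # horizon 50 px higher
```

With identical horizon rows the field image reduces to the calibration mapping; a 50-pixel horizon shift at 0.04 deg/pixel moves every ground point by the row equivalent of 2 degrees, which is exactly the compensation Eq. (1) encodes.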
The horizontal ground scale dx (pixels m⁻¹) varies linearly with y_img, making it possible to calculate the horizontal image coordinate x_img using

x_img = x_c + X · dx = x_c + X · (y_img^hor − y_img) · (dx_0 / y_calib^hor) · (h_calib / h_img)   (2)

where dx_0 is the horizontal ground scale at the bottom edge of the calibration image, x_c is the center-line coordinate (half the horizontal image size), X is the horizontal ground distance, and h_calib and h_img are the camera heights in the calibration and field image, respectively.

Thus, using Eqs. (1) and (2) we can calculate the image coordinates (x_img, y_img) in a field image from any ground coordinates (X, Y). A model grid is shown in Fig. 3 together with the calibration image, illustrating their agreement.

Figure 4: One of our field plots. (A) Image corrected for lens distortion, with a projected 10 x 10 m grid overlaid. (B) Image after recalculation to overhead projection (10 x 10 m).

For each field image, after correction for lens distortion, our Matlab script asks for the y-coordinate of the horizon (which is selected using a mouse). This is used to calculate the camera tilt and to over-plot a distance grid projected on the ground (Fig. 4A). Using Eqs. (1) and (2) we then recalculate the image to an overhead projection of the nearest 10 x 10 m area (Fig. 4B). This is done by interpolation: an (x_img, y_img) coordinate is obtained for each (X, Y) coordinate, and the brightness in each color channel (R, G, B) is calculated using sub-pixel interpolation. The resulting image is reminiscent of an overhead image, with equal scales on both axes. There is, however, a small difference: due to the line of sight, the geometry does not provide information about the ground behind high vegetation (such as high grass) in the same way as an image taken from overhead.
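The recalculation to an overhead projection is, in essence, an inverse mapping with sub-pixel interpolation: for every overhead grid cell (X, Y), Eqs. (1) and (2) give an image coordinate, and the three colour channels are sampled there. A minimal Python/NumPy sketch (the bilinear sampler, the grid resolution, and the ground_to_image callback are illustrative assumptions; the original implementation is a Matlab script):

```python
import numpy as np

def overhead_projection(img, ground_to_image, size_m=10.0, px_per_m=50):
    """Resample a field image onto a regular overhead (X, Y) grid.

    img: H x W x 3 float array (a distortion-corrected field image).
    ground_to_image: maps ground coordinates (X, Y) in metres to image
    coordinates (x_img, y_img), standing in for Eqs. (1) and (2).
    """
    n = int(size_m * px_per_m)
    out = np.zeros((n, n, 3), dtype=float)
    H, W = img.shape[:2]
    for j in range(n):           # Y: distance from the camera
        for i in range(n):       # X: left-right ground coordinate
            X = (i / px_per_m) - size_m / 2.0
            Y = j / px_per_m
            x, y = ground_to_image(X, Y)
            if 0 <= x < W - 1 and 0 <= y < H - 1:
                # Bilinear (sub-pixel) interpolation per colour channel.
                x0, y0 = int(x), int(y)
                fx, fy = x - x0, y - y0
                out[j, i] = ((1 - fx) * (1 - fy) * img[y0, x0]
                             + fx * (1 - fy) * img[y0, x0 + 1]
                             + (1 - fx) * fy * img[y0 + 1, x0]
                             + fx * fy * img[y0 + 1, x0 + 1])
    return out
```

Ground points that fall outside the field image (e.g., hidden behind tall vegetation or above the horizon) are simply left black here; a production version would flag them explicitly.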
4 Image classification

After a field plot has been geometrically rectified, so that the spatial resolution is the same over the surface area used for classification, the script distinguishes land cover types by color, brightness, and spatial variability. Aided by the close-up images of typical surface types also taken at each field plot (Fig. 5), which provide further verification, a script is applied to each overhead-projected calibration field (Fig. 4B) that classifies the field plot into land cover types. This is a semi-automatic method that can account for illumination differences between images. In addition, it facilitates identification, as there can, for instance, be different vegetation types with similar color, and rock surfaces with a similar appearance to water or vegetation. After an initial automatic classification, the script has an interface that allows manual movement of areas between classes.

For calculations of surface color we filter the overhead-projected field images using a running 3 x 3 pixel mean filter, providing more reliable statistics. Spatial variation in brightness, used as a measure of surface roughness, is calculated using a running 3 x 3 pixel standard deviation filter. Denoting the brightness in the red, green, and blue color channels R, G, and B, respectively, we can for instance find areas with green grass using the green filter index 2G/(R + B), where a value above 1 indicates green vegetation. In the same way, areas with water (if the close-up images show blue water due to clear sky) can be found using a blue filter index 2B/(R + G). If the close-up images show dark or gray water (cloudy weather), it can be distinguished from rock and white vegetation using either a total brightness index (R + G + B)/3 or an index that is sensitive to surface roughness, involving σ(R), σ(G), or σ(B), where σ denotes the 3 x 3 pixel standard deviation centered on each pixel for a given color channel.
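The filter indices above, together with the per-class surface area (m²) and coverage (%) reported for each plot, can be sketched as follows. This is a hedged NumPy illustration: the strict thresholds, the class priority order, and all names are our assumptions, not the script's actual parameters:

```python
import numpy as np

def classify_plot(rgb):
    """Label each pixel of an overhead-projected plot image using the
    colour indices from the text: green index 2G/(R + B) > 1 indicates
    green vegetation, blue index 2B/(R + G) > 1 indicates (blue) water;
    everything else is left as 'other' for manual reassignment."""
    R = rgb[..., 0].astype(float)
    G = rgb[..., 1].astype(float)
    B = rgb[..., 2].astype(float)
    eps = 1e-9  # guard against division by zero on black pixels
    green_idx = 2 * G / (R + B + eps)
    blue_idx = 2 * B / (R + G + eps)
    labels = np.full(rgb.shape[:2], "other", dtype=object)
    labels[blue_idx > 1] = "water"
    labels[green_idx > 1] = "graminoids"  # green index takes priority here
    return labels

def coverage(labels, plot_area_m2=100.0):
    """Per-class surface area (m2) and coverage (%) for a 10 x 10 m plot."""
    n = labels.size
    out = {}
    for cls in np.unique(labels):
        frac = np.count_nonzero(labels == cls) / n
        out[cls] = (frac * plot_area_m2, frac * 100.0)
    return out
```

The running 3 x 3 mean and standard deviation filters mentioned in the text would be applied to R, G, and B before computing the indices (e.g., with scipy.ndimage); they are omitted here to keep the sketch short.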
Figure 5: Close-up images in one of our 10 x 10 m field plots (Fig. 4).

In this study we used six land cover types of relevance for CH4 regulation: graminoids, water, shrubs, dry moss, wet moss, and rock. Examples of classified images are shown in Fig. 6. In a test study, we were able to classify about 200 field plots in northern Sweden in a three-day campaign despite rainy and windy conditions. For each field plot, surface area (m2) and coverage (%) were calculated for each class. An additional field plot and classification example can be found in supplementary information S2.

Figure 6: Classification of a field plot image (Fig. 4B) into the six main surface components. All panels have an area of 10 x 10 m. (A) Graminoids. (B) Water. (C) Shrubs. (D) Dry moss. (E) Wet moss. (F) Rock.

5 Conclusions

This study describes a quick method to document ground surface cover and process the data to make it suitable as ground truth for remote sensing. The method requires a minimum of equipment of a kind frequently used by researchers and by people with a general interest in outdoor activities, and image recording can be done easily, in a few minutes per plot, without requiring specific skills or education.
Hence, if the method becomes widespread and a fraction of those who visit northern wetlands (or other environments without dense tall vegetation, where the method is suitable) contribute images and related information, there is potential for the rapid development of a global database of images and processed results with detailed land cover for individual satellite pixels. In turn, this could become a valuable resource for remote sensing ground truthing. To facilitate this development, supplementary information S1 includes a complete manual, and the authors will assist with early-stage image processing and initiate database development.

Acknowledgements

This study was funded by a grant from the Swedish Research Council VR to David Bastviken (ref. no. VR 2012-48). We would also like to acknowledge the collaboration with the IZOMET project (ref. no. VR 2014-6584) and IZOMET partner Marielle Saunois (Laboratoire des Sciences du Climat et de l'Environnement (LSCE), Gif sur Yvette, France).

© 2009 Schaaf et al, publisher and licensee Dove Medical Press Ltd. This is an Open Access article which permits unrestricted noncommercial use, provided the original work is properly cited.
Clinical, Cosmetic and Investigational Dentistry 2009:1 39–45

REVIEW

Evolution of photography in maxillofacial surgery: from analog to 3D photography – an overview

Heidrun Schaaf, Christoph Yves Malik, Hans-Peter Howaldt, Philipp Streckbein
Department of Maxillo-Facial Surgery, University Hospital Giessen and Marburg GmbH, Giessen, Germany

Correspondence: Heidrun Schaaf, University Hospital Giessen and Marburg GmbH, Department of Maxillo-Facial Surgery, Klinikstrasse 29, 35385 Giessen, Germany; Tel +49 641/99 46271; Fax +49 641/99 46279; email heidrun.schaaf@uniklinikum-giessen.de, heidrun.schaaf@gmx.net

Abstract: In maxillofacial surgery, digital photographic documentation plays a crucial role in clinical routine. This paper gives an overview of the evolution from analog to digital in photography and highlights the integration of digital photography into daily medical routine. The digital workflow is described, and we show that image quality is improved by systematic use of photographic equipment and post-processing of digital photographs. One of the advantages of digital photography is the possibility of immediate reappraisal of the photographs for alignment, brightness, positioning, and other photographic settings, which aids in avoiding errors and allows the instant repetition of photographs if necessary. Options for avoiding common mistakes in clinical photography are also described, and recommendations are made for post-processing of pictures, data storage, and data management systems. The new field of 3D digital photography is described in the context of cranial measurements.
Keywords: digital, photography, documentation, dental, 3D imaging

Introduction

As in most technical and medical fields, impressive developments have occurred in recent years in the technological aspects of digital photography and the possibilities of digital documentation. Digital medical photography allows a professional view of novel clinical cases in cranio-maxillofacial surgery. Visualization can be more effective than a verbal description and can aid in making appropriate decisions for treatment. One of the advantages of digital photography is the possibility of reviewing the picture immediately to judge technical aspects such as sharpness, illumination, color, and patient positioning. The immediate availability of digital images enables the treating physician to monitor a selected aspect in successive or serial shots in the presence of the patient. Fewer appointments with patients may be necessary, as review of the accomplished or planned procedures is possible without waiting for photographs to be processed. Due to the development of powerful data storage tools and software, clinical patient records can be supplemented with informative photographs, and these photographs can be integrated into digital patient files. These improvements, along with technical innovations in photography, have set the stage for high-quality results in maxillofacial surgery. In the literature, clinical photography is discussed from different viewpoints, such as those of plastic and reconstructive surgery, dermatology, dentistry, and orthodontics.1–7 Although human life unfolds in a 3-dimensional (3D) setting, most observations and data are captured only in 2 dimensions, and information about the third dimension is left to our judgment.
Especially in the medical field, where surgery can change the appearance of a face, 3D assessment is becoming more and more essential. This new method will prove its value not only for planning dental or surgical procedures, but also for predicting the outcome. Several approaches have been investigated to open the third dimension to the medical world, starting with computerized tomography (CT),8–10 ultrasonography,11–13 stereolithography,14,15 and laser scanners.16,17 A detailed review of 3D craniofacial reconstruction imaging should describe the modern imaging techniques most commonly used in medicine and dentistry. Analysis of the whole craniofacial complex, virtual and real simulation of orthognathic surgery, and laser scanning with the use of stereolithographic biomodeling have been discussed.18

The aim of this article is to describe step by step the recent developments in medical photography, address solutions for data storage, and highlight the benefits as well as some of the technical and human pitfalls of this technology in the medical profession.

History of digital photography

In August 1981, the digital camera revolution began when the Sony Corporation released the first commercial electronic handheld camera without film (the Sony Mavica). This was designed as a point-and-shoot camera, which used a charge-coupled device (CCD) sensor to record still images to Mavipak diskettes at the equivalent of 0.3 megapixel (MP) resolution.
Because the pictures were viewed on a TV screen and could not be processed on a computer, the Mavica was not considered a true digital camera. In 1988, Fuji unveiled the DS-1P as the first true digital camera, which recorded images as computer files on a removable static random-access memory (SRAM) card.19 The first commercially available digital camera was sold in 1990 as the DYCAM Model 1 or Logitech FotoMan, with a resolution of 376 × 240 pixels at 256 grayscale levels, for a manufacturer's suggested retail price (MSRP) of US$995.20 The next rung on the evolutionary ladder of digital photography was the Kodak DCS-100, shown publicly at Photokina in 1990 and marketed in 1991 for an MSRP of US$25,000. It was the first digital single-lens reflex camera (DSLR), consisting of a modified Nikon F3 SLR body and a 1.3 MP digital back.21 Although various companies such as Canon, Nikon, Fujifilm, Sigma, Kodak, Pentax, Olympus, Panasonic, Samsung, and Minolta released DSLR cameras intended for professional photographers and early adopters, DSLR cameras could not compete with film-based SLR cameras due to their lack of speed and image resolution. DSLR cameras began to compete with SLR cameras in 1999, when Nikon introduced the Nikon D1, which employed autofocus lenses such as those in current use. In subsequent years, image resolution increased and prices decreased, until the Canon EOS Digital Rebel made DSLR technology available to amateur photographers with a quality comparable to that of film cameras.

Digital workflow in clinical routine

With the further development of CCD resolution, the question was often raised of when, or if, digital technology would exceed film technology in image quality. This issue has not yet been resolved and depends on numerous parameters. In summary, a resolution of 12 to 16 MP is equivalent to that of ISO 100 color film, but this comparison can only be made when high-quality lenses are used.
For image resolution exceeding 10 MP, the quality of the lenses and image compression seem to be the limiting factors for image quality.22–24 For practical and clinical applications, more detailed image resolution does not yield further advantages, and thus the evolution of the DSLR technique in clinical photography has apparently reached its end. Considering digital imaging as a tool for routine work in dentistry and oral and maxillofacial surgery, acquired image data must be linked to patient data, maintained, and stored long term. The amount and quality of image data determine the dimensions of the required image storage system. The best image quality is supplied by unprocessed RAW image data, which is not recommended in clinical photography due to the degree of post-processing needed and the large file sizes generated. The standardized JPG image format with variable compression, used at a resolution of 6 to 8 MP with low compression, fulfills the requirements of clinical photography and is manageable even for large numbers of images. In the digital workflow, the sharpness, white balance, brightness, and orientation of images should be verified before they are stored in the database. Images should not be post-processed for these parameters, but should primarily be exposed correctly, due to the time-consuming nature of post-processing and the possibility of falsifying the document. Thus, the ability to immediately control the quality of the picture is a valuable advantage of the digital era. The requirements for storage of patient images are complex.
A patient image database should have a hierarchical structure for user administration; support keywording, indexing, and savable queries; have a programmable interface for linking image data to a clinical information system (CIS); and be fast, scalable, and intuitive to use. Some of the CISs that are currently commercially available support structured data systems with the ability to link an image to a patient file. For more advanced storage and administrative functions, professional digital asset management systems (eg, Canto® Cumulus) must be integrated into the CIS via a programmable interface. A good compromise for a low-priced image database is to use software such as Adobe Photoshop® Lightroom or ACDSee Pro, which can be used separately from the CIS with few limitations of convenience and function. As the importance of photography in routine work increases, long-term storage, reliability, and availability become an issue. Although image data can be stored on digital media such as DVDs and Blu-ray® discs, the durability of the image data is threatened by the possibility of hardware failure (due to wear, electrical surge, flood, or fire), accidental deletion, theft, and malicious software.
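As a minimal illustration of the database requirements described above (hierarchical patient-image structure, keywording, indexing, and savable queries), a keyword-indexed image store can be prototyped with a few SQL tables. This is a hedged sketch using Python's sqlite3 module; the schema, field names, and sample data are our assumptions and do not correspond to any particular CIS or asset management product:

```python
import sqlite3

# Minimal schema: patients, images, and a keyword index linking them.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE patient (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE image (
    id INTEGER PRIMARY KEY,
    patient_id INTEGER REFERENCES patient(id),
    path TEXT, taken TEXT, view TEXT
);
CREATE TABLE keyword (image_id INTEGER REFERENCES image(id), tag TEXT);
CREATE INDEX idx_keyword_tag ON keyword(tag);
""")

# Hypothetical sample records.
conn.execute("INSERT INTO patient VALUES (1, 'Example Patient')")
conn.execute("INSERT INTO image VALUES (1, 1, 'img/0001.jpg', '2009-01-15', 'lateral')")
conn.executemany("INSERT INTO keyword VALUES (?, ?)",
                 [(1, "orthognathic"), (1, "preoperative")])

# A 'savable query': all lateral preoperative images for one patient.
rows = conn.execute("""
    SELECT image.path FROM image
    JOIN keyword ON keyword.image_id = image.id
    WHERE image.patient_id = ? AND image.view = 'lateral'
      AND keyword.tag = 'preoperative'
""", (1,)).fetchall()
```

A real deployment would of course sit behind the CIS's programmable interface and add the user administration, access control, and backup measures discussed in the text.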
To guarantee permanent availability and safe long-term storage of image data, a multistage strategy must be followed, including daily automated backup to a physically separate device, firewalls, a virus scanner, an uninterruptible power supply (UPS), surge protection, access control, and a documented emergency and disaster recovery plan.

Standardization of facial medical photography

A meaningfully defined standard picture set is necessary and can be adapted to the concerns of the respective users. A full-face front view and oblique, submental oblique, and lateral views have been described as a useful basic picture set. Intraoral documentation includes upper and lower occlusal, buccal left and right, and frontal views.2,25 Additional picture sets can be obtained for orthognathic surgery, skull deformities, synostotic or positional plagiocephaly, facial palsy, aesthetic surgery, and dental implantology. In dental implantology, the frontal region of the upper jaw is particularly important aesthetically, and additional close-ups showing neighboring structures are essential. The attention of the surgeon should not focus on the tooth or implant alone, since an implant usually also has effects on the lip and cheek contours of the patient at various ages. A preoperative assessment with the aid of photographs should therefore be included in the planning. Standardization is indispensable to produce pre- and postoperative photographs that are comparable. One of the fundamental parameters should be the patient's position, with the head at the same level as the camera. For each picture, the patient's position and distance from the camera should remain the same, and rotation or tilting of the head must be avoided. The image should be aligned horizontally and vertically to the middle axis of the occlusion plane. For facial pictures, the Frankfort Horizontal Plane should be parallel to the floor and aligned vertically to the occlusion plane.
The deformity can be exaggerated or masked if the patient is wrongly positioned, and this is especially likely to happen with orthognathic patients, as shown in Figure 1. The photograph should be adjusted so that the mid-sagittal plane of the patient is orientated perpendicular to the optical axis. Interfering cosmetics and jewelry should be removed, as should blood or saliva in intraoral views.

Figure 1 Lateral view of an orthognathic patient with Angle Class 2. The pictures show markedly different profiles. a) Correct position of the patient; b) tracings of photographs a, c, and d; c) the head is bent backward and the Frankfort Horizontal Plane is not parallel to the ground, and the deformity is therefore underestimated; d) the head is bent forward and the deformity is exaggerated.

3D photography

The brain achieves 3D perception by interpreting the difference in depth between the 2 pictures seen with the right and left eye. Recently, 3D imaging has been adopted as an innovation in digital photography. The establishment of the next dimension in photography lies in the use of more than one camera at a time. The easiest way to achieve a 3D image is to take 2 pictures of the same object by moving the camera to one side without changing the level. These 2 pictures can then be viewed with 2 eyes using the cross-eye method, looking at the left picture with the left eye and at the right picture with the right eye. The photograph appears 3D when the images are fused. This method can be learned with patience.
More professional ways of producing real 3D pictures require additional camera viewpoints, and several camera systems with this capacity have been introduced. In 2008, a 3D digital imaging system, the Fuji Finepix Real 3D, was announced, with dual lenses that capture images simultaneously.

Application of 3D digital photography in the medical field

For medical purposes, other systems with more than two cameras have been investigated, for example the 3D capture systems by Genex® and 3dMD® (Figure 2). The 3dMD® cranial system, for example, works with five camera viewpoints to obtain a full 360° picture of the head (Figures 3 and 4). These systems have been analyzed with regard to the anthropometric precision and accuracy of digital 3D photogrammetry of the face, and can be combined or compared with direct anthropometry using statistical methods.26 Furthermore, these 3D applications are useful in the description of cranial and facial soft tissues. A meaningful example of their use in medical treatment is the identification of common features in children with craniofacial deformities. The capacity for 3D visualization supports the ability to distinguish synostotic from non-synostotic plagiocephaly. The addition of this feature adds significant information in the diagnosis and treatment of these children. The use of 3D photography is of interest in all fields dealing with the treatment of obvious changes in the appearance of facial morphology, both for evaluating changes and for predicting surgical results.
Applications of 3D imaging for the assessment of facial changes have been described in orthodontics as well as in the related discipline of orthognathic surgery.27–31 Other authors have described applications in patients with cleft lip and palate32–35 or with craniofacial malformations, to aid in recognizing the key components of particular syndromes.36 New technologies are being implemented in 3D photogrammetry for collecting phenotypic measurements of the face.37 Photogrammetry is more than simply making measurements using stereoscopic photographs: it can capture 3D images with the ability to estimate coordinates of points, linear or surface distances, and volumetric measurements. The more sophisticated computerized stereophotogrammetry, C3D, has been introduced as a useful technique for 3D recording of monochrome and color stereo images32,38–40 in the field of maxillofacial surgical planning. As previously mentioned, standardization is an essential requirement in clinical and scientific photography, and this has been demonstrated in the field of 3D photography as well. More information is gained with the added dimension, but the number of possible mistakes increases accordingly.

Discussion

The changeover from analog to digital photography in medicine has occurred gradually and without major difficulties, and the advantages of digital photography technologies in the dental and maxillofacial field have been clearly outlined; however, the availability of these digital technologies represents both an opportunity and a challenge. The physician is expected to provide sufficient image processing and to ensure the high quality of images. Meaningful archiving and secure storage can be achieved using a professional keyword-indexed asset management system. Such a system provides easy access for presentations and lectures, as well as for forensic purposes. The capability for digital post-processing, however, has the disadvantage of enabling falsification of images.
Many published papers define a basic picture set in 2 dimensions for different uses, including dentistry, orthodontics, and maxillofacial and plastic surgery.2,3,6,25,41 Furthermore, supplemental picture sets for special circumstances have been described, which are useful in the field of maxillofacial surgery.25 Beyond the function of documentation, attempts have been made to use photography as a means of identifying landmarks and measuring distances on two-dimensional photographs. Measurements of photographs have been carried out by various specialists, for example, for computerized eyelid measurement analysis in ophthalmology.42 Other attempts to characterize facial morphology in orthodontics using standardized photographs have been examined and compared to cephalometric measurements.43,44 Photographic methods have also been used to identify landmarks or digitally optimize appliances such as head bands.45–47 Nevertheless, reducing the picture set to a minimum will increase acceptance and feasibility. Knowledge of common mistakes can prevent pitfalls and help in achieving professional skills in digital photography.48,49 Manipulation of the patient's head position49 or changes in illumination50 can make a difference in the apparent surgical outcome.

Figure 2 The 3dMD® cranial system uses 5 camera viewpoints to generate a 360° image of the head.
The advantages of digital photography, such as time savings, lower costs, speed of storage, and reduced storage space with easier access to the photographs, have been described in the literature.2,51 The use of 3D photography supports clinical diagnosis and treatment in various fields. In medical genetics, it has demonstrated high levels of sensitivity and specificity in discriminating between controls and individuals diagnosed with Noonan syndrome, and it has potential for use in training physicians.36 The precision and error of 3D phenotypic measures from 3dMD photogrammetric images have also been described in the field of clinical dysmorphology in medical genetics. Here the precision is reported as highly repeatable, with an error for placement of landmarks in the sub-millimeter range.37

The development of CT has revolutionized diagnosis and treatment in medicine. The field of orthognathic surgery in particular benefits greatly from three-dimensional analysis.52 The combination of CT-based 3D data sets with 3D photographs could add significant information for tissue landmarks requiring information on the hairline or eyelids. It has been shown that the registration of 3D photographs with CT images can provide an accurate match between the 2 surfaces.53 Recently this group was able to confirm the accuracy of matching 3D photographs with skin surfaces from cone-beam CTs, with an error within ±1.5 mm.54 Using 3D stereophotogrammetry for soft tissue analysis, 2 observers showed high reliability coefficients, 0.97 for intraobserver and 0.94 for interobserver reliability, in 20 patients.55

Figure 3 Five camera viewpoints of the head of a patient with deformational plagiocephaly. Camera views: a) half profile front right, b) half profile front left, c) half profile back left, d) half profile back right, e) from above.

Figure 4 2D illustration of the composed 3D image of the patient's head, which was generated from the 5 views in Figure 3.

However, it has been reported that the accuracy of 3D facial imaging in orthodontics using the Genex camera system showed substantial image distortion when images of sharp angles (90°) were captured. This system, the Genex Rainbow 3D Camera Model, is a technology with 2 cameras. The accuracy was greater the less the z-coordinate was incorporated in the image. This limitation was to be expected, given the camera configuration. Because the lenses were located somewhat close to each other, resulting in a limited field of view, it was difficult to get an accurate z-coordinate measurement.31 In the medical literature, several 3D imaging systems in photography have been introduced. Besides commercially offered systems like 3dMD and Genex, other custom-made 3D systems and software developments have been presented.38–40 Validations of the systems have been published independently.28,32,37,56 The only comparison of measurement data from different 3D photogrammetric systems was performed by Weinberg et al,26 and showed that both systems are sufficiently concordant (relative to one another), accurate (relative to direct anthropometry), and precise to meet the needs of most clinical and basic research designs.

Conclusion

The evolution of photography has resulted in easy-to-use and affordable digital photography for the practitioner.
In the specialty of dentistry, medical photography has become a high-quality tool for health care professionals, using a defined standard picture set for documentation in a standardized, reproducible set-up. The newest innovation in photography, incorporating the third dimension, offers detailed studies of the facial surface and soft tissue morphology. The advantages of digital photography include improved capabilities for diagnostics, planning of surgery and treatment, follow-up, and interdisciplinary communication between physicians and other specialists.
Disclosures
The authors report no conflicts of interest.
References
1. Bengel W. Standardization in dental photography. Int Dent J. 1985;35(3):210–217. 2. Ettorre G, Weber M, Schaaf H, Lowry JC, Mommaerts MY, Howaldt HP. Standards for digital photography in cranio-maxillo-facial surgery – Part I: Basic views and guidelines. J Craniomaxillofac Surg. 2006;34(2):65–73. 3. Galdino GM, DaSilva D, Gunter JP. Digital photography for rhinoplasty. Plast Reconstr Surg. 2002;109(4):1421–1434. 4. Galdino GM, Vogel JE, Vander Kolk CA. Standardizing digital photography: it's not all in the eye of the beholder. Plast Reconstr Surg. 2001;108(5):1334–1344. 5. Jemec BI, Jemec GB. Suggestions for standardized clinical photography in plastic surgery. J Audiov Media Med. 1981;4(3):99–102. 6. Sandler J, Murray A. Digital photography in orthodontics. J Orthod. 2001;28(3):197–201. 7. Sandler J, Murray A. Clinical photographs – the gold standard. J Orthod. 2002;29(2):158–161. 8. Alder ME, Deahl ST, Matteson SR. Clinical usefulness of two-dimensional reformatted and three-dimensionally rendered computerized tomographic images: literature review and a survey of surgeons' opinions. J Oral Maxillofac Surg. 1995;53(4):375–386. 9. Guerrero ME, Jacobs R, Loubele M, Schutyser F, Suetens P, van Steenberghe D. State-of-the-art on cone beam CT imaging for preoperative planning of implant placement. Clin Oral Investig. 2006;10(1):1–7. 10.
Xia J, Samman N, Yeung RW, et al. Computer-assisted three-dimensional surgical planning and simulation: 3D soft tissue planning and prediction. Int J Oral Maxillofac Surg. 2000;29(4):250–258. 11. Hell B. 3D sonography. Int J Oral Maxillofac Surg. 1995;24(1 Pt 2):84–89. 12. Landes CA, Goral WA, Sader R, Mack MG. Three-dimensional versus two-dimensional sonography of the temporomandibular joint in comparison to MRI. Eur J Radiol. 2007;61(2):235–244. 13. Roelfsema NM, Hop WC, Wladimiroff JW. Three-dimensional sonographic determination of normal fetal mandibular and maxillary size during the second half of pregnancy. Ultrasound Obstet Gynecol. 2006;28(7):950–957. 14. Bill JS, Reuther JF, Dittmann W, et al. Stereolithography in oral and maxillofacial operation planning. Int J Oral Maxillofac Surg. 1995;24(1 Pt 2):98–103. 15. Santler G, Karcher H, Ruda C. Indications and limitations of three-dimensional models in cranio-maxillofacial surgery. J Craniomaxillofac Surg. 1998;26(1):11–16. 16. Nakamura N, Suzuki A, Takahashi H, Honda Y, Sasaguri M, Ohishi M. A longitudinal study on influence of primary facial deformities on maxillofacial growth in patients with cleft lip and palate. Cleft Palate Craniofac J. 2005;42(6):633–640. 17. Noguchi N, Tsuji M, Shigematsu M, Goto M. An orthognathic simulation system integrating teeth, jaw and face data using 3D cephalometry. Int J Oral Maxillofac Surg. 2007;36(7):640–645. 18. Papadopoulos MA, Christou PK, Christou PK, et al. Three-dimensional craniofacial reconstruction imaging. Oral Surg Oral Med Oral Pathol Oral Radiol Endod. 2002;93(4):382–393. 19. Larish LL. Understanding Electronic Photography. New York: McGraw-Hill Education; 1990:44. 20. Popular Photography. New York: HFM U.S.; 1991:111. 21. Popular Photography. New York: HFM U.S.; 1991:56. 22. Clark RN. Film versus Digital Summary. www.clarkvision.com/imagedetail/film.vs.digital.summary1.html; 2005. Accessed Nov 23, 2008. 23. Lenhard K.
Optik für die Digitale Fotografie. Bad Kreuznach; www.schneiderkreuznach.com/knowhow/digfoto.htm. Accessed Nov 23, 2008. 24. Rockwell K. The Megapixel Myth. La Jolla, California. www.kenrockwell.com/tech/mpmyth.htm; 2006. Accessed Nov 23, 2008. 25. Schaaf H, Streckbein P, Ettorre G, Lowry JC, Mommaerts MY, Howaldt HP. Standards for digital photography in cranio-maxillo-facial surgery – Part II: Additional picture sets and avoiding common mistakes. J Craniomaxillofac Surg. 2006;34(7):444–455. 26. Weinberg SM, Naidoo S, Govier DP, Martin RA, Kane AA, Marazita ML. Anthropometric precision and accuracy of digital three-dimensional photogrammetry: comparing the Genex and 3dMD imaging systems with one another and with direct anthropometry. J Craniofac Surg. 2006;17(3):477–483. 27. Hajeer MY, Ayoub AF, Millett DT. Three-dimensional assessment of facial soft-tissue asymmetry before and after orthognathic surgery. Br J Oral Maxillofac Surg. 2004;42(5):396–404.
28. Hajeer MY, Mao Z, Millett DT, Ayoub AF, Siebert JP. A new three-dimensional method of assessing facial volumetric changes after orthognathic treatment. Cleft Palate Craniofac J. 2005;42(2):113–120. 29. Hajeer MY, Millett DT, Ayoub AF, Siebert JP. Applications of 3D imaging in orthodontics: part II. J Orthod. 2004;31(2):154–162. 30. Hajeer MY, Millett DT, Ayoub AF, Siebert JP. Applications of 3D imaging in orthodontics: part I. J Orthod. 2004;31(1):62–70. 31. Lee JY, Han Q, Trotman CA. Three-dimensional facial imaging: accuracy and considerations for clinical applications in orthodontics. Angle Orthod. 2004;74(5):587–593. 32. Ayoub A, Garrahy A, Hood C, et al. Validation of a vision-based, three-dimensional facial imaging system. Cleft Palate Craniofac J. 2003;40(5):523–529. 33. Hood CA, Bock M, Hosey MT, Bowman A, Ayoub AF. Facial asymmetry – 3D assessment of infants with cleft lip and palate. Int J Paediatr Dent. 2003;13(6):404–410. 34. Hood CA, Hosey MT, Bock M, White J, Ray A, Ayoub AF. Facial characterization of infants with cleft lip and palate using a three-dimensional capture technique. Cleft Palate Craniofac J. 2004;41(1):27–35. 35. Schwenzer-Zimmerer K, Chaitidis D, Berg-Boerner I, et al. Quantitative 3D soft tissue analysis of symmetry prior to and after unilateral cleft lip repair compared with non-cleft persons (performed in Cambodia). J Craniomaxillofac Surg. 2008;36(8):431–438. 36. Hammond P, Hutton TJ, Allanson JE, et al. 3D analysis of facial morphology. Am J Med Genet A. 2004;126(4):339–348. 37.
Aldridge K, Boyadjiev SA, Capone GT, DeLeon VB, Richtsmeier JT. Precision and error of three-dimensional phenotypic measures acquired from 3dMD photogrammetric images. Am J Med Genet A. 2005;138(3):247–253. 38. Ayoub AF, Siebert P, Moos KF, Wray D, Urquhart C, Niblett TB. A vision-based three-dimensional capture system for maxillofacial assessment and surgical planning. Br J Oral Maxillofac Surg. 1998;36(5):353–357. 39. Ayoub AF, Wray D, Moos KF, et al. Three-dimensional modeling for modern diagnosis and planning in maxillofacial surgery. Int J Adult Orthodon Orthognath Surg. 1996;11(3):225–233. 40. Bourne CO, Kerr WJ, Ayoub AF. Development of a three-dimensional imaging system for analysis of facial change. Clin Orthod Res. 2001;4(2):105–111. 41. Jones M, Cadier M. Implementation of standardized medical photography for cleft lip and palate audit. J Audiov Media Med. 2004;27(4):154–160. 42. Coombes AG, Sethi CS, Kirkpatrick WN, Waterhouse N, Kelly MH, Joshi N. A standardized digital photography system with computerized eyelid measurement analysis. Plast Reconstr Surg. 2007;120(3):647–656. 43. Ferrario VF, Sforza C, Miani A, Tartaglia G. Craniofacial morphometry by photographic evaluations. Am J Orthod Dentofacial Orthop. 1993;103(4):327–337. 44. Zhang X, Hans MG, Graham G, Kirchner HL, Redline S. Correlations between cephalometric and facial photographic measurements of craniofacial form. Am J Orthod Dentofacial Orthop. 2007;131(1):67–71. 45. Hutchison BL, Hutchison LA, Thompson JM, Mitchell EA. Plagiocephaly and brachycephaly in the first two years of life: a prospective cohort study. Pediatrics. 2004;114(4):970–980. 46. Hutchison BL, Hutchison LA, Thompson JM, Mitchell EA. Quantification of plagiocephaly and brachycephaly in infants using a digital photographic technique. Cleft Palate Craniofac J. 2005;42(5):539–547. 47. Zonenshayn M, Kronberg E, Souweidane MM.
Cranial index of symmetry: an objective semiautomated measure of plagiocephaly. Technical note. J Neurosurg. 2004;100(5 Suppl Pediatrics):537–540. 48. Nayler J, Geddes N, Gomez-Castro C. Managing digital clinical photographs. J Audiov Media Med. 2001;24(4):166–171. 49. Niamtu J. Image is everything: pearls and pitfalls of digital photography and PowerPoint presentations for the cosmetic surgeon. Dermatol Surg. 2004;30(1):81–91. 50. Ikeda I, Urushihara K, Ono T. A pitfall in clinical photography: the appearance of skin lesions depends upon the illumination device. Arch Dermatol Res. 2003;294(10–11):438–443. 51. Trune DR, Berg DM, DeGagne JM. Computerized digital photography in auditory research: a comparison of publication-quality digital printers with traditional darkroom methods. Hear Res. 1995;86(1–2):163–170. 52. Swennen GR, Schutyser F, Hausamen JE. Three-Dimensional Cephalometry: A Color Atlas and Manual. 1st ed. Berlin: Springer; 2005. 53. De Groeve P, Schutyser F, Cleynen-Breugel J, Suetens P. Registration of 3D photographs with spiral CT images for soft tissue simulation in maxillofacial surgery. Med Image Comput Comput Assist Interv. 2001;2208:991–996. 54. Maal TJ, Plooij JM, Rangel FA, Mollemans W, Schutyser FA, Berge SJ. The accuracy of matching three-dimensional photographs with skin surfaces derived from cone-beam computed tomography. Int J Oral Maxillofac Surg. 2008;37(7):641–646. 55. Plooij JM, Swennen GR, Rangel FA, et al. Evaluation of reproducibility and reliability of 3D soft tissue analysis using 3D stereophotogrammetry. Int J Oral Maxillofac Surg. 2009;38(3):267–273. 56. Weinberg SM, Scott NM, Neiswanger K, Brandon CA, Marazita ML. Digital three-dimensional photogrammetry: evaluation of anthropometric precision and accuracy using a Genex 3D camera system. Cleft Palate Craniofac J. 2004;41(5):507–518.
work_rsnnoslrljewpe4opccnydd4ma ---- Microsoft PowerPoint - Balloon_Analysis_ABSPoster_9
One Balloon at Four Rotations
• Also of interest is the balloon azimuthal symmetry. This can be determined by viewing a balloon from several angles around the shaft axis.
• Figure 12 shows results from a single balloon photographed at 90° increments. Maximum variation is on the order of 0.5 mm, with an average σ = 2.2% of the local radius, or about 0.70 mm.
Figure 10. Shape graph for 5 balloons, inflated once
Multiple Inflations of One Balloon
• A single balloon was inflated and deflated to the same volume 5 times to check consistency. Results are shown as separate data series in the shape graph of Figure 9.
• Consistency is excellent. The lower series in light green shows the standard deviation at each angular point (scale on right). The average standard deviation was 1.1% of the radius, or 0.36 mm.
TECHNIQUE FOR QUANTIFYING BALLOON SHAPE AND RESULTS OF A CONSISTENCY EVALUATION
Steve Axelrod, Tai Ngo, Tom Rusch; Office of Advanced Technology and Science, Xoft, Inc., Sunnyvale, CA
ABSTRACT
Poster presented at the Annual American Brachytherapy Society Meeting, May 31 – June 2, 2009, Toronto, Canada
INTRODUCTION
• Since brachytherapy dose depends strongly on geometry (distance from source to tissue), manufacturing tolerances are both stringent and challenging. For example, the central lumen is required to be centered to within approximately 1 mm to minimize asymmetric dose delivery.
• While parameters such as lumen centering, length and diameter are relatively straightforward to measure, the detailed shape of the balloon is less easy to define and determine. Yet it is obvious that having a reproducible shape, either from one balloon to the next or when re-inflating a given balloon, is highly desirable.
• A new technique has been developed which enables complete characterization of a projection of the balloon shape to arbitrary precision.
  - It uses digital photography and a unique image processing technique to obtain spatial coordinates of the balloon edges.
  - Typically the outer edge of a slice through the longitudinal center would be characterized in r-θ space, in 1-degree steps, with a precision of better than ±0.1 mm.
METHODS
Image Acquisition
• The balloon under test is inflated with water to a fixed volume and inserted into a test setup that locates it reproducibly with respect to a mounted digital camera (Figure 3). It is enclosed in a box to control the lighting, with an appropriate dark backdrop. The camera (EOS Rebel XT, EF-S 18-55 mm lens) is manually focused at an f-stop of 3.5, with a typical exposure time of 2 seconds, in b/w mode, with the zoom set at the extreme out position. Lighting intensity and direction are both important for obtaining images that yield reproducible results.
CONCLUSION
• Presenting a predictably uniform shape is important dosimetrically in any brachytherapy treatment -- especially so in uses such as IORT, where CT scan data is not available and the nominal balloon shape must be assumed in planning the treatment.
• The technique described provided detailed quantitative evaluation of the shape of Xoft Axxent® balloons used in APBI.
• It will be used to qualify a new manufacturing process at the current vendor and to qualify first articles produced by a new vendor.
BACKGROUND
Funding provided by
• Purpose.
There are many situations in which one would like to measure the shape of balloons used in accelerated partial breast irradiation (APBI), e.g., for comparing actual shape to design specification, for determining variability, for evaluating consistency over time, and for qualifying new vendors. Such balloons come in several sizes and shapes, including approximately spherical and ellipsoidal ones. Traditional quantitative inspections tend to be limited to diameter and length. Largely due to the complexity of shape, visually driven inspections, even those using images acquired by photographs or inspection devices, are typically limited to being essentially qualitative; when quantitative, they are so in only a limited sense, for example determining the diameter at one or several locations. Yet there is a great deal more information available in principle. Thus a means of acquiring image data and processing it to allow full characterization of balloon shapes is highly desirable.
• Materials and Methods. A digital camera and the balloon under test are arranged in a dedicated fixture at a fixed distance and orientation. Lighting of the balloon is set so that the image stands out clearly from the background, allowing image processing software (NIH ImageJ) to easily identify the balloon circumference and create an image data file with only the edge as non-zero elements. Further processing in a custom LabVIEW program identifies the set of edge points, determines their locations in the image in r-θ coordinates, interpolates the angular values at increments of one degree, and creates a table and plot of radial distance as a function of angle, both relative to the image center. This data completely specifies the balloon shape and allows direct numerical comparison of different balloons, calculation of average shapes and standard deviations as a function of angle, etc.
• Results.
Variation in balloon shape was measured for a single balloon inflated/deflated 5 times, for 5 different balloons inflated to the same volume, and for 4 rotations of the same balloon. Overall shape reproducibility in terms of standard deviation averages about 1%, or about 0.4 mm, in the former two cases, and approximately 2% in the latter case.
• Conclusions. The technique described provides detailed quantitative evaluation of the shape of Xoft Axxent® balloons used in APBI. It will be used to qualify a new manufacturing process at the current vendor and to qualify first articles produced by a new vendor. Presenting a predictably uniform shape is important dosimetrically in any brachytherapy treatment, but especially so in uses such as intra-operative radiation therapy (IORT), where CT scan data is not readily available and the nominal balloon shape must be assumed in planning the treatment.
• Over the past two years, the Axxent® Electronic Brachytherapy (eBx) System has been used to deliver accelerated partial breast irradiation (APBI) using an inflated balloon placed into the patient's resection cavity one to two weeks post-lumpectomy.
• The balloon applicator is a sterile, disposable, single-use device that functions as a guide for the x-ray source. An integral drain was built into the balloon applicator for seroma management, which potentially improves dosimetry and reduces the infection rate. The radiolucent balloon wall improves visibility on planar film and in CT images. See Figure 1.
• Water-filled balloon applicators for the Axxent® eBx System have been FDA-cleared for breast APBI in five different sizes: 3-4 cm, 4-5 cm and 5-6 cm spherical shapes, plus 5x7 cm and 6x7 cm ellipsoidal shapes.
• When used with the Axxent® X-ray Source operating at 50 kVp and 0.30 mA beam current, dose rates at 1 cm into tissue surrounding the balloon range from approximately 0.2 Gy per minute for the largest balloon to 1 Gy per minute for the smallest.
• When the prescription surface is 0.5 cm from the balloon, the dose rates increase by about 50%, to 0.3 Gy per minute for the largest balloon and 1.5 Gy per minute for the smallest.
• Xoft has submitted an application to the U.S. FDA to allow use of these balloons in locations other than the breast, for example in intra-operative radiation therapy (IORT).
RESULTS
Figure 7. Shape graph of elliptical balloon
Figure 8. Shape graph for a single balloon inflated and deflated 5 times.
Multiple Balloons Inflated Once
• Consistency between balloons in manufacturing is critical to ensure proper dosimetry in clinical use.
  - Xoft's balloons are routinely and individually checked for leakage, shaft symmetry and gross size.
  - Applying the shape test described here to a set of balloons provides more detailed information on the manufacturing process and greater confidence in what is used for patient treatment.
• Five balloons were randomly selected and inflated to the same volume, then mounted, photographed and processed as described.
• Figure 10 shows the shape graphs for all 5, superimposed. The lower green trace is the standard deviation at each angle; it averages 1.3%, or 0.40 mm.
• The average shape coordinates are plotted in Figure 11 as Y vs. X to show the shape in a more traditional manner.
The Axxent® eBx System
• The Axxent® controller (Figure 2) provides power to the X-ray source and allows the X-ray source to be translated. The translation or pullback movement of the X-ray source within the balloon is designed to provide a predictable dose of radiation in the tissue surrounding the balloon.
• The Axxent® system can be used as an alternative to HDR sources such as Ir-192. Unique attributes of the Axxent® eBx System include:
  - No radioisotope regulatory, handling and safety issues.
  - Lower shielding requirements, which brings HDR treatment "out of the bunker."
  - High dose rate and unique dose sculpting capabilities.
  - Staff can be in the room with the patient during treatment.
• The Axxent® eBx System is a self-contained unit that can be wheeled from one procedure room to the next.
Figure 2. Axxent® System Controller
Figure 1. (A) The 3-4 cm, 4-5 cm, 5-6 cm spherical balloon applicators; (B) Axxent® Controller arm showing the inflated balloon applicator
Figure 3. Layout of the apparatus used to acquire the photographic images of the balloons. Camera and balloons are mounted in fixed locations within an enclosure to ensure reproducibility of image size and illumination.
Analysis
• A custom LabVIEW program further processes the thresholded image. A specially written routine identifies sets of contiguous non-zero pixels and stores them in linear arrays which record their positions in the 2-D image array; in this case there is only one such set per image. The program then plots a "shape graph" of each identified point's distance from the image center as a function of its angle about the center. If the balloon were a perfect circle, the plot would be a horizontal line. The shape graph of a perfect ellipse with a long-axis to short-axis ratio of 1.20 is shown in Figure 6.
• Figure 7 shows the shape graph of an actual balloon. The vertical scale is radius in cm, and the horizontal scale is angle in degrees. To reduce noise from the edge detection and thresholding processes, the raw data is smoothed by averaging 5 adjacent points. To facilitate comparisons between balloons, the data is then resampled to provide a radius value at each integer degree.
• Figure 8 shows the data after this resampling. With this arrangement of the data, one can compare multiple runs, e.g., by calculating averages and standard deviations at each integer degree.
Figure 4. Cropped image of balloon
Figure 5. Thresholded image of balloon outline.
Figure 8. Shape graph (averaged and resampled)
(Plot: Individual Radii for All Five Balloons)
Figure 12.
Shape graph of 1 balloon at 4 rotations
(Plot legends: Run 1 to Run 5 with Sigma; 0°, 90°, 180°, 270° with Sigma; Balloon 1 to Balloon 5 with Sigma. Axes: Angle, -200° to 200°; Radius, 2.5 to 3.5 cm.)
Figure 11. Plot of average shape coordinates (Average Balloon Shape: Y coordinate vs. X coordinate, cm)
SUMMARY
• A new technique has been developed to quantify the shape of brachytherapy balloons. The technique uses digital photography and both standard and custom-developed image processing algorithms. It allows rigorous comparisons of re-inflation consistency, balloon-to-balloon consistency, and rotational symmetry. Results of these tests on Xoft balloons are summarized in Table 1.

Table 1
Test                 Avg radius, cm   Avg σ   Avg σ, mm
Rotation             3.17             2.2%    0.70
Re-inflation         3.18             1.1%    0.36
Multiple balloons    3.18             1.3%    0.40

• The technique can be used to sample-test manufactured product, qualify new OEM vendors, and compare the shapes of balloons from different producers. It can also be applied to items other than balloons for quantitative shape characterization.
• Images are downloaded to a computer, where they are opened in NIH ImageJ, a freely available software package. ImageJ functions are used to center and crop to a standard size, eliminate the balloon shaft, find the balloon edges, and finally "threshold" the image. Edges are identified by sharp changes in intensity. In image thresholding, all pixel values below a certain number are set to zero, while all above that number are set to a fixed value. Both of these functions are standard features available in ImageJ. Figure 4 shows the cropped image, while Figure 5 shows it after edges are identified and thresholded.
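The processing chain described above (thresholded edge points, a radius-vs-angle "shape graph" resampled at integer degrees, then per-degree statistics across runs) can be sketched in a few lines of NumPy. This is an illustrative reconstruction, not the poster's actual LabVIEW code: the function name, the synthetic circular test edge and the ±1% stand-in "runs" are all assumptions made for the example.

```python
import numpy as np

def shape_graph(edge_ys, edge_xs, cy, cx, px_per_cm):
    """Radius-vs-angle 'shape graph' of a closed edge, resampled at each
    integer degree (mirroring the poster's LabVIEW processing step)."""
    dy, dx = edge_ys - cy, edge_xs - cx
    r = np.hypot(dy, dx) / px_per_cm              # radius in cm
    theta = np.degrees(np.arctan2(dy, dx))        # angle in degrees, -180..180
    order = np.argsort(theta)
    theta, r = theta[order], r[order]
    # 5-point circular moving average to reduce edge-detection noise
    padded = np.concatenate([r[-2:], r, r[:2]])
    r_smooth = np.convolve(padded, np.ones(5) / 5.0, mode="valid")
    degrees = np.arange(-180, 180)                # one radius per integer degree
    return degrees, np.interp(degrees, theta, r_smooth)

# Synthetic test edge: a circle of radius 3.2 cm imaged at 100 px/cm
t = np.linspace(-np.pi, np.pi, 2000, endpoint=False)
ys = 350.0 + 320.0 * np.sin(t)
xs = 350.0 + 320.0 * np.cos(t)
deg, prof = shape_graph(ys, xs, 350.0, 350.0, 100.0)

# Multi-run comparison as in the poster: per-degree mean radius and
# standard deviation, the latter expressed as % of the mean (cf. Table 1)
runs = np.vstack([prof, 1.01 * prof, 0.99 * prof])   # stand-ins for repeat runs
mean_r = runs.mean(axis=0)
sigma_pct = 100.0 * runs.std(axis=0) / mean_r
```

For a real photograph, `edge_ys, edge_xs` would be the non-zero pixel coordinates of the thresholded outline (e.g. from `np.nonzero`), and the per-degree standard-deviation trace corresponds to the lower "Sigma" series in Figures 9, 10 and 12.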
The images here are 700 pixels on a side. A photograph of a ruler was taken in the same position in the apparatus to establish the number of pixels per cm.
METHODS
Figure 6. Shape graph of a perfect ellipse
Figure 9. Shape graph for a single balloon inflated and deflated 5 times.
(Both shape graphs plot Radius in cm, 2.5 to 3.5, against Angle, -200° to 200°.)
work_rsv64dedqbg65avc33knaum4z4 ---- Microsoft Word - SNB_D_18_01867_revised
Yusufu, D., & Mills, A. (2018). Spectrophotometric and Digital Colour Colourimetric (DCC) analysis of Colour-based Indicators. Sensors and Actuators B: Chemical, 273, 1187–1194. https://doi.org/10.1016/j.snb.2018.06.131
Published in: Sensors and Actuators B: Chemical. Document Version: Peer reviewed version (Queen's University Belfast Research Portal).
Copyright 2018 Elsevier. This manuscript is distributed under a Creative Commons Attribution-NonCommercial-NoDerivs licence (https://creativecommons.org/licenses/by-nc-nd/4.0/), which permits distribution and reproduction for non-commercial purposes, provided the author and source are cited.
Spectrophotometric and Digital Colour Colourimetric (DCC) analysis of Colour-based Indicators
Dilidaer Yusufu and Andrew Mills*
School of Chemistry and Chemical Engineering, Queen's University Belfast, BT9 5AG
e-mail: andrew.mills@qub.ac.uk
Key words: digital photography; spectrophotometry; RGB; colour analysis; colour indicators
Abstract
Seven simulated absorption spectra, spanning the visible spectrum, are used to probe the degree of linear correlation that exists between real absorbance, Ao, at λmax, and three well-established colour-based parameters based on the standard Red, Green and Blue scale, sRGB, namely: (i) apparent absorbance, A(sRGB); (ii) apparent fraction of absorbed light, 1-T(sRGB), where T is the apparent transmittance; and (iii) colour difference, ΔE. In all cases the colour-based parameter A(sRGB) correlates best, and linearly, with Ao. This predicted correlation is tested using three different, actual colour-based indicators, using UV/Vis absorption spectroscopy to monitor the change in actual absorbance of each of the indicators, and digital photography to monitor simultaneously the change in the values of sRGB, and so A(sRGB). The three different indicators used were: a CO2 indicator, a photocatalytic activity indicator and an oxygen indicator.
In all three cases the apparent absorbance parameter, A(sR), derived from sRGB analysis of the digital images, is proportional to the real absorbance, as measured using UV/Vis spectrophotometry, and able to yield the same key analytical information. The increasing use of sRGB analysis of digital photographic images, i.e. digital colour colourimetry, DCC, is discussed briefly.
Introduction
There is a great deal of interest in optical sensors and probes, which are able to respond reversibly, or irreversibly, to specific analytes [1]. In many cases the optical effect is a colour change measured using UV/Vis absorption spectroscopy, where a change in the measured absorbance at a specific wavelength, A, is usually related simply to a change in the concentration of the analyte under test. However, even when used in a very basic form, UV/Vis spectroscopy is not an inexpensive technique, nor is it particularly portable, both of which add to the overall cost of analysis when using optical sensors.
Interestingly, there is a recent and growing interest in the use of digital photography, coupled with digital image analysis, to probe colour-changing reactions that are so often studied using UV/Vis spectroscopy. For example, Knutson et al. have used digital photography to study the kinetics of the reaction between crystal violet and hydroxide ions [2]. Wang et al. have used a mobile phone's digital camera to probe the colour change associated with a microchip enzyme-linked immunosorbent assay, ELISA, when exposed to different levels of the ovarian HE4 biomarker in urine [3]. Capitan-Vallvey et al. have used colour digital photography for one-shot, multi-ion (potassium, magnesium and hardness) detection [4] and Ozcan et al. have used mobile phone microscopy for high resolution pathology imaging [5].
Finally, and most recently, digital photography has been used to monitor an epidermal UV colorimetric dosimeter and is now the basis of L'Oreal's 'My UV Patch' dosimeter, which monitors UV dose with a mobile phone digital camera coupled to an App [6]. Interestingly, in several of the different examples cited above, the method used to relate the digital photographic data to the analyte concentration was different. The above approach to quantitative analysis via digital image analysis has been termed 'digital camera colourimetry', which we shall refer to henceforth as DCC, given its clear link with conventional colourimetry and its use of simple (non-spectrophotometric) devices, such as the filter photometer [7, 8].

Colour has been precisely defined internationally and standards have been developed to ensure accurate colour representation [9, 10]. Thus, the International Colour Consortium, ICC, has defined a device-independent, standard Red (R), Green (G) and Blue (B) colour space, sRGB, where any colour in that colour space is defined by the values of R, G and B, each of which ranges from 0 to 255 (i.e. 8 bit format), or 0 to 1 (sometimes referred to as fractional format) [11]. Whatever the format, these values are often referred to as 'intensities' and this terminology will be adopted here, since, as we shall see, they are used in DCC to generate apparent absorbance and transmittance values, i.e. A(sRGB) and T(sRGB), respectively, vide infra. Many image analysis mobile phone Apps (ColorMeter [12], Color Card [13] and Color Mate [14]) and open-source digital image processing programs, such as ImageJ [15], Adobe Photoshop [16] and Image Colour Picker [17], are able to analyse readily any part of a digital image and return the non-linear sRGB values associated with the colour.
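The apparent absorbance and transmittance values mentioned above are simple functions of the 8 bit intensity returned by such tools for the colour component of interest. A minimal Python sketch, assuming the component intensity `sR` has been read from an image and the unattenuated reference is `sR0 = 255`:

```python
import math

def apparent_transmittance(sR, sR0=255):
    """Apparent transmittance of a colour component, T(sR) = sR/sR0."""
    return sR / sR0

def apparent_absorbance(sR, sR0=255):
    """Apparent absorbance, A(sR) = log10(sR0/sR)."""
    return math.log10(sR0 / sR)

# e.g. an indicator patch whose red channel has dropped from 255 to 77:
T = apparent_transmittance(77)   # ~0.30
A = apparent_absorbance(77)      # ~0.52
print(round(T, 2), round(A, 2))
```

The channel value 77 here is purely illustrative; in practice it would come from an image-analysis App or from ImageJ, as described above.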
As noted earlier, in many of the examples of DCC cited above, the individual methodologies used to relate the sRGB colour data to the analyte concentration were different. For example, Knutson et al. [2], in their study of the kinetics of the reaction between crystal violet and hydroxide ions, showed that the DCC absorbance parameter for the green component, i.e. A(sG), was related directly to the concentration of the dye, crystal violet, CV, where:

A(sG) = log(sGo/sGCV)                (1)

and sGo and sGCV are the digital image 'intensity' values for the sG component derived from the photograph of the reaction solution in the absence and presence of the CV, respectively; note: sGo is usually taken to be equal to 255 (or 1). In contrast, in their use of DCC to quantify the level of HE4 in urine, Wang et al. [3] showed that the DCC transmittance for the red component of the digital image, i.e. T(sR), was related directly to the concentration of HE4, via the expression:

1 − T(sR) = K[HE4]                (2)

where T(sR) = sRHE4/sRo, and sRo and sRHE4 are the intensity values for the sR components of the digital image of the ELISA indicator in the absence and presence of the HE4, respectively. The parameter 1 − T(sR) will be referred to here as the apparent fraction of light absorbed by the system. Since the standard red component is not absorbed in the absence of HE4, once again it is assumed that the value of sRo = 255 (or 1 if in fractional format). Finally, Araki et al.
[6], in their use of DCC to determine the UV dose received by the indicator, employed the sRGB data derived from photographic images of the indicator to calculate the values of L, a and b in the Lab colour scale, and then to determine the change in colour between a UV-exposed and a non-exposed indicator, i.e. ΔE, where:

ΔE = [(Lo − Lx)² + (ao − ax)² + (bo − bx)²]^(1/2)                (3)

where the subscripts 'o' and 'x' refer to the L, a and b values derived from the images of the non-exposed and UV-exposed indicator, respectively, and the values of Lo, ao and bo were assumed to be 100, 0 and 0, respectively.

This paper seeks to identify which of the above three digital image-based parameters, i.e. A(sRGB), 1−T(sRGB) and ΔE, provides the best linear correlation with real absorbance values, as measured using UV/Vis spectrophotometry, at a series of wavelengths that span the visible solar spectrum. Once identified, this best DCC analysis method is combined with digital image recording and sRGB calculation, using a mobile phone and an App, respectively, to probe the responses of three different, known optical sensor systems, to see if it is able to generate the same analytical information as that gleaned from a UV/Vis spectrophotometric analysis of the same systems.

Experimental
Materials
Unless stated otherwise, all chemicals were purchased from Sigma Aldrich and used as received. The Activ™ self-cleaning glass was a gift from Pilkington Glass. All aqueous solutions were made up with double-distilled, deionised water. All gases were purchased from BOC.

The phenol red (PR) CO2-sensitive plastic polymer film was prepared as described previously [18], using PR instead of 8-hydroxypyrene-1,3,6-trisulfonic acid trisodium salt (HPTS) as the pH-sensitive dye.
Thus, 0.2 g of PR was fully dissolved in a mixture of 3.1 ml of 40% tetrabutylammonium hydroxide, TBAH, aqueous solution and 100 ml of ethanol, after which 2 g of hydrophilic silica were added and the dispersion stirred for 2.5 h. The final purple powdered form of the PR-TBAH-SiO2 pigment was then obtained via spray-drying the dispersion using a Buchi B-290 spray dryer. The PR-TBAH-SiO2 pigment was then mixed with low-density polyethylene (LDPE), 5 wt%, and extruded as a 50 µm thick, 10 cm wide film using a Rondol Microlab 10 mm twin-screw extruder (barrel L/D 25/1). The final purple-coloured plastic film changed to a yellow colour upon exposure to 100% CO2, returning to its original purple colour in the absence of CO2, i.e. the colour-changing process was reversible.

The preparation of the resazurin, Rz, ink used in this work is described elsewhere [19], but, briefly, 1.33 mg of Rz and 133 mg of glycerol were added to 1 mL of a 1.5 wt% aqueous solution of hydroxyethyl cellulose (HEC) (MW = 250k). The ink was stirred for at least 5 h, to ensure thorough mixing and dissolution of the dye, before use. The Activ self-cleaning glass samples were coated with the Rz ink by securing the glass sample to an impression bed (i.e. a clipboard) and drawing down a ca. 2.5 cm line of the ink placed 3 mm from the top edge of the glass sample. A wire-wound rod (a 'K-bar' No. 3 [20]) was used to create the draw-down film, the final dry thickness of which was ca. 2.1 µm. The Rz-coated Activ sample was then irradiated with UV light (2 mW cm−2, using a 352 nm broad-band UVA black light (BL) lamp, 2 × 15 W) and the photocatalysed reduction of the Rz monitored periodically, both spectrophotometrically and photographically, until no further colour change was observed in the Rz ink, or after a period of 12 min, whichever was the shorter period of time.
The preparation of the oxygen-sensitive thionin acetate, Th, ink used in this work was otherwise identical to that of the MB ink described elsewhere [21], and briefly comprised: 20 mg of thionin acetate, 100 mg of P25 titania and 1 g of glycerol dissolved or dispersed in 10 cm3 of a 5 wt% solution of hydroxyethyl cellulose (HEC, average Mw: 123k) in double-distilled, deionised water. This ink was sonicated for at least 30 minutes, to ensure thorough dissolution of the soluble components, and subsequently spin-coated onto a borosilicate cover slip (2.4 mm in diameter and 0.145 mm thick) at a rotation speed of 2000 rpm, to produce a dried Th ink film of thickness ~2.1 µm. The ink film was then 'activated' to its colourless, oxygen-sensitive form by exposing it to UVA irradiation (typical irradiance 2 mW cm−2, using a 352 nm broad-band UVA BL lamp, 2 × 15 W) for, typically, 20 s. The oxygen-driven recovery of the leuco-thionin to the blue-coloured thionin was monitored as a function of time, t, by UV/Vis spectrophotometry and digital photography.

Methods
All UV/Vis spectra were recorded using a Cary 60 UV/Vis spectrophotometer. All digital images were recorded using the digital camera (12-megapixel) on an iPhone 7+. All digital images were analysed using either the ColorMeter App [12] or the free ImageJ (v 1.51d) software [15], both of which yielded the same non-linear values, sR', sG' and sB', which were subsequently processed as described in the text.
Simulated absorption spectra and determination of sRGB and Lab values
A very simple simulation of the UV/Vis absorption spectrum of an absorbing species, D, which has a wavelength of maximum absorption, λmax, can be generated using the following expression:

Aλ = Ao·exp[−{(λ − λmax)/(0.6·FWHM)}²]                (4)

where Aλ and Ao are the absorbances due to D at λ and λmax, respectively, and FWHM is the full width at half maximum of the absorption band, set here at 50 nm. Note that, according to Beer's law, Ao is proportional to the concentration of D, [D], and can be varied as required. Thus, figure 1 illustrates the simulated absorption spectra for a series of simulated dyes, D, with λmax values set at those associated with the rainbow colours of the solar spectrum, i.e. red, orange, yellow, green, cyan, blue and violet, with, in all cases, Ao set at 1.

Figure 1: Absorption spectra of seven simulated dyes, each absorbing at a λmax value associated with one of the primary rainbow colours: red, orange, yellow, green, cyan, blue and violet. Each spectrum was calculated using eqn (4) with Ao = 1.

Knowledge of the absorption spectrum of any light-absorbing species allows its linear sRGB values to be calculated and its colour defined, via the three tristimulus values, X, Y and Z, which can be calculated using the following formulae [10]:

X = Σλ T(λ)·S(λ)·x̄(λ) / Σλ S(λ)·ȳ(λ)                (5)

Y = Σλ T(λ)·S(λ)·ȳ(λ) / Σλ S(λ)·ȳ(λ)                (6)

Z = Σλ T(λ)·S(λ)·z̄(λ) / Σλ S(λ)·ȳ(λ)                (7)

where T(λ) is the transmittance (i.e.
10−A) of the simulated dye, S(λ) is the spectral power of a standard illuminant (here assumed to be daylight and usually referred to as D65), and x̄(λ), ȳ(λ) and z̄(λ) are the two-degree standard colour matching functions (CMFs), which are a consequence of the three different cone responses of the eye. Tables of S(λ) for D65 and of x̄(λ), ȳ(λ) and z̄(λ) are readily available from the literature [22], and a typical set used here is illustrated in figure 2 and also contained in the 'S1.xls' spreadsheet in the supplementary information.

Figure 2: Plots of the known colour matching functions, i.e. x̄(λ), ȳ(λ) and z̄(λ), for a 2° standard observer, and of S(λ), the relative spectral power of a standard (D65, i.e. daylight) illuminant, as a function of wavelength, λ [22].

Since the value of Aλ at any wavelength, λ, for any of the simulated dyes illustrated in figure 1 can be calculated using eqn (4), given a knowledge of λmax, it follows that each simulated dye absorption spectrum depicted in figure 1 can be defined by a set of tristimulus values, X, Y and Z. These, in turn, can be converted to sR, sG and sB values using the following matrix transformation equation [10]:

[sR]   [ 3.2410  −1.5374  −0.4986] [X]
[sG] = [−0.9692   1.8760   0.0416] [Y]                (8)
[sB]   [ 0.0556  −0.2040   1.0570] [Z]

The above process generates linear sR, sG and sB fractional values in the range 0−1; however, it is more usual to convert them to the 24 bit sRGB code, [sR sG sB], where [0 0 0] is black and [255 255 255] is white. Usually this means multiplying each fractional value by 255 and rounding to the nearest whole number, with the caveat that if any fractional value is < 0, then a value of zero is returned, and if any is > 1, a value of 255 is returned [10].
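The matrix transformation of eqn (8), together with the clamping and rounding step just described, can be sketched in a few lines of Python. The example input is the yellow-absorbing dye's [X, Y, Z] from Table 1; note that the green channel comes out as 129 rather than the tabulated 130, simply because the tabulated tristimulus values are rounded to two decimal places:

```python
def xyz_to_srgb8(X, Y, Z):
    """Convert fractional tristimulus values to 8-bit linear sRGB via eqn (8),
    clamping any out-of-gamut channel to 0 or 255 as described in the text."""
    M = [( 3.2410, -1.5374, -0.4986),
         (-0.9692,  1.8760,  0.0416),
         ( 0.0556, -0.2040,  1.0570)]
    out = []
    for r1, r2, r3 in M:
        v = r1 * X + r2 * Y + r3 * Z      # linear fractional value
        v = min(max(v, 0.0), 1.0)         # clamp to [0, 1]
        out.append(round(v * 255))        # scale to 8 bit and round
    return tuple(out)

# Yellow-absorbing dye from Table 1, [X, Y, Z] = [0.53, 0.52, 1.1]:
print(xyz_to_srgb8(0.53, 0.52, 1.1))      # (94, 129, 255)
```

Here the blue channel exceeds 1 before clamping, which is why it is returned saturated at 255.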
Alternatively, the tristimulus values, X, Y and Z, can be converted to Lab values using the following equations [6]:

L* = 116·f(Y/Yn) − 16                (9)

a* = 500·[f(X/Xn) − f(Y/Yn)]                (10)

b* = 200·[f(Y/Yn) − f(Z/Zn)]                (11)

where

f(t) = t^(1/3) if t > (6/29)³; otherwise f(t) = (1/3)·(29/6)²·t + 4/29                (12)

and where, for any simulated dye, Xn, Yn and Zn are the values of X, Y and Z calculated using eqns (5−7), respectively, but with T(λ) set always to 1. Thus, in this work, it is found that Xn = 0.95047, Yn = 1.00000 and Zn = 1.08883 (i.e. 95.047, 100.000 and 108.883 on the usual 0−100 scale) using the D65 illuminant.

Table 1 lists the (i) X, Y and Z, (ii) sR, sG and sB and (iii) Lab and ΔE values calculated for each of the absorption spectra illustrated in figure 1, using equations (5−7), (8), (9−12) and (3), respectively. In table 1, the colour border used round each set of data, i.e. (i)−(iii), is how each simulated dye with a maximum absorbance, Ao, of 1 would appear, i.e. as the complementary colour to that which it absorbs.

An example spreadsheet, highlighting the calculations involved in the determination of the (i)−(iii) values reported in Table 1 for the yellow-absorbing, i.e. λmax = 575 nm, simulated dye, is given in the 'S2.xls' spreadsheet in the supplementary information. Note: in all ΔE calculations, all colour changes are with reference to white light, for which the values of Lo, ao and bo are 100, 0 and 0, respectively.
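Eqns (9)−(12) and the ΔE expression of eqn (3) translate directly into code; a short sketch, assuming fractional tristimulus values and the D65 white point quoted above:

```python
def _f(t):
    """Piecewise function of eqn (12)."""
    return t ** (1 / 3) if t > (6 / 29) ** 3 else (t / 3) * (29 / 6) ** 2 + 4 / 29

def xyz_to_lab(X, Y, Z, Xn=0.95047, Yn=1.00000, Zn=1.08883):
    """Eqns (9)-(11): convert tristimulus values to (L*, a*, b*), D65 white."""
    fx, fy, fz = _f(X / Xn), _f(Y / Yn), _f(Z / Zn)
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)

def delta_E(lab1, lab2=(100.0, 0.0, 0.0)):
    """Eqn (3): colour difference; by default relative to the white point."""
    return sum((p - q) ** 2 for p, q in zip(lab1, lab2)) ** 0.5

# The D65 white point itself maps to Lab = (100, 0, 0), so delta_E = 0:
print(delta_E(xyz_to_lab(0.95047, 1.00000, 1.08883)))   # 0.0
```

Because only the ratios X/Xn, Y/Yn and Z/Zn enter f, it makes no difference whether the fractional or the 0−100 scale is used, provided it is used consistently.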
Table 1: Spectral and calculated sRGB, Lab and ΔE characteristics of seven simulated dyes spanning the solar spectrum

Colour:     Violet            Blue              Cyan              Green           Yellow           Orange           Red
λmax (nm):  410               470               495               530             575              590              650
[X,Y,Z]:    [0.88,0.99,0.74]  [0.83,0.90,0.37]  [0.87,0.77,0.69]  [0.80,0.54,1]   [0.53,0.52,1.1]  [0.49,0.60,1.1]  [0.76,0.92,1.1]
sRGB:       [244,255,160]     [255,229,65]      [255,158,159]     [255,71,255]    [94,130,255]     [31,177,255]     [128,255,255]
Lab:        [97,−12,24]       [95,−5,54]        [97,29,12]        [93,65,−33]     [79,9,−39]       [77,−21,−31]     [92,−22,−6]
ΔE*:        27                54                31                73              45               44               24
PCC**:      blue              blue              green             green           red              red              red

*: Reference white point is RGB (255, 255, 255), or Lab (100, 0, 0), for which ΔE = 0.
**: Primary colour component.

Correlation studies
The previous section outlines how it is possible to use the associated UV/Vis absorption spectrum to calculate the sR, sG and sB, Lab and ΔE values for any of the simulated dye solutions illustrated in figure 1. For all the simulated dyes illustrated in figure 1, the maximum absorbance at λmax, i.e. Ao, was set at 1. However, the same process can also be used to calculate the sR, sG and sB, Lab and ΔE values for any of the simulated dyes at any value of Ao, i.e. at any dye concentration. Thus, in this work, Ao was varied from 0 to 1, in steps of 0.05, for each of the simulated dyes, to simulate a range of different dye concentrations, and in each case values for sR, sG and sB, Lab and ΔE were calculated using the same procedure as described in the previous section. Spectra for simulated dyes with Ao > 1 were not studied since, as we shall see, such absorbances are too high for DCC to produce any likely correlation with any measured UV/Vis absorbance. In the case of the sR, sG and sB data generated for each simulated dye, usually one of the colours, i.e. R, G or B, varies to a greater extent than the other two over the Ao range 0−1.
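Picking out the most-varying channel can be automated: given the three channel curves recorded as Ao is swept, the PCC is simply the channel with the largest overall range. A minimal sketch, with illustrative (made-up) channel values rather than the computed curves of figure 3:

```python
def primary_colour_component(sR, sG, sB):
    """Identify the PCC: the channel whose value changes most over an Ao sweep.
    sR, sG and sB are sequences of channel values recorded as Ao is varied."""
    ranges = {name: max(vals) - min(vals)
              for name, vals in (("red", sR), ("green", sG), ("blue", sB))}
    return max(ranges, key=ranges.get)

# For a yellow-absorbing dye the red channel collapses as Ao rises, the green
# channel falls only slightly and the blue channel stays saturated:
print(primary_colour_component(
    sR=[255, 180, 120, 77],
    sG=[255, 240, 225, 210],
    sB=[255, 255, 255, 255]))   # red
```

The same function applied to the other six dyes would return the PCC entries listed in Table 1.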
For example, for the yellow-absorbing dye illustrated in figure 1, with λmax = 575 nm, a brief inspection of the variation of the calculated values of sR, sG and sB as a function of Ao, illustrated in figure 3, reveals that the red component is the most affected by the change of Ao from 0 to 1. Throughout this work, for any light-absorbing system, we shall refer to this component, i.e. the colour component which changes most markedly with a change in Ao, as the 'primary colour component', or 'PCC' for short. Note: as a rough guide, the PCC is the complementary colour to that of the analytical system under test. Thus, the PCC is red, green or blue for a coloured system that appears approximately cyan, magenta or yellow, respectively [23].

Figure 3: Plots of the sR (red line), sG (green line) and sB (blue line) values determined for the yellow-absorbing simulated dye as a function of absorbance at λmax, i.e. Ao, from which it is clear that the red component varies most significantly as the value of Ao is increased from 0 to 1, in steps of 0.05, whilst, in contrast, the blue component remains unchanged at all values of Ao.

As a consequence, since red is the PCC for the yellow-absorbing dye (see figure 3), only the red component data were used to calculate values for A(sR) and 1−T(sR) for the yellow-absorbing simulated dye, using the appropriate versions of equations (1) and (2), respectively, as a function of Ao. In addition, the sR, sG and sB data in figure 3 for the yellow-absorbing simulated dye were used to calculate values for Lab, and so ΔE, also as a function of Ao, which was varied over the range 0−1. Plots of these three different data sets, i.e. A(sR), 1−T(sR) and ΔE vs Ao, for the yellow simulated dye are illustrated in figure 4, from which it appears that the absorbance-like parameter based on the sR values, i.e.
A(sR), provides the best linear correlation with Ao of the three plots, an observation supported by the near-unity value of the square of the correlation coefficient for the line of best fit (i.e. r² = 0.9941). It is also clear that all three parameters show increasing deviation from linearity as the value of Ao is increased, and that even A(sR) shows clear signs of deviating significantly at Ao values > 1. This was found to be true for all the simulated dyes tested, and so the correlation study was limited to a variation of Ao from 0 to 1, as noted earlier.

Figure 4: Plots of A(sR) (black dots), 1−T(sR) (red dots) and ΔE (blue dots) as a function of Ao for the yellow-absorbing simulated dye (see figure 1), calculated using the sR, sG and sB values illustrated in figure 3. The line of best fit (broken line) for the plot of A(sR) vs. Ao yields a correlation coefficient, r², of 0.9941.

The same procedure was applied to all the simulated dyes illustrated in figure 1; in each case the PCC was identified (see Table 1) and the data associated with the PCC used to generate plots of A(sR, sG or sB), 1−T(sR, sG or sB) and ΔE as a function of Ao, over the Ao range 0−1. The square of the correlation coefficient was determined for each of these correlation plots, and the results of this work are plotted as a function of λmax in figure 5 for the different simulated dyes. A brief inspection of this plot of correlation coefficient data reveals that, as for the yellow-absorbing dye in figure 4, the absorbance-like parameter, A(sR, sG or sB), correlates linearly much more closely with Ao than the other two functions, 1−T(sR, sG or sB) and ΔE.

Figure 5: Plots of the squares of the correlation coefficients, r², as a function of λmax for the seven simulated dyes listed in table 1.
The individual r² values themselves were derived from plots of the sRGB PCC data for each of the seven dyes, in the form of A(sR, sG or sB) (black line), 1−T(sR, sG or sB) (red line) or ΔE (blue line) vs. Ao.

These results imply that values of the absorbance-like parameter, i.e. A(sR, sG or sB), determined from digital photographic data, i.e. sR, sG or sB values, can be used, instead of absorbance values determined using UV/Vis spectrophotometry, as a measure of the concentration of the light-absorbing species. This is a useful observation given that, unlike UV/Vis spectroscopy, DCC is inexpensive and very portable and, once in place, simple to effect. In order to test the above hypothesis, derived from studies of simulated dyes, three well-established colour-based sensors were studied using both the conventional, and costly, UV/Vis spectrophotometric technique and the very inexpensive, increasingly popular, simple method of DCC, using a mobile phone digital camera and an RGB-measuring App.

Application of DCC to established optical sensor systems
(1) CO2-sensitive indicators
A large number of CO2-sensitive, colour-based indicators have been developed and commercialised for various applications, including confirming correct tracheal intubation [24] and environmental monitoring in air and water [25−28]. In most cases the colour-changing process is due to the following equilibrium reaction:

D⁻ + H2O + CO2 ⇌ DH + HCO3⁻                (13)

where D⁻ and DH are the deprotonated and protonated forms, respectively, of a pH-sensitive dye, such as phenol red, PR. It follows that, in the presence of an excess of bicarbonate, the concentrations of D⁻ and DH are related to the ambient level of CO2, i.e.
via an expression of the form:

[DH]/[D⁻] = K·%CO2                (14)

where [DH] and [D⁻] are the concentrations of the protonated and deprotonated forms of the pH dye, respectively, and K is the proportionality constant which provides a measure of the sensitivity of the sensor towards CO2. If, as is usually the case, the absorbance of the indicator is measured at a wavelength where only D⁻ absorbs, usually λmax for D⁻, i.e. A(D⁻), then, according to eqn (14) and Beer's law, A(D⁻) will decrease as %CO2 is increased. It also follows that a plot of 1/A(D⁻) vs %CO2 will yield a straight line with a gradient related to K [29, 30]. However, a much simpler, although more approximate, method for assessing the value of K is to measure the %CO2 necessary to create equal concentrations of DH and D⁻, i.e. %CO2(S = ½), since it is equal to the reciprocal of K. Note: at %CO2(S = ½), A(D⁻) = A(D⁻)o/2, where A(D⁻)o is the initial absorbance of the indicator [31].

In this work the optical responses of a typical PR-based, CO2-sensitive plastic film sensor towards different levels of CO2 were measured using both a UV/Vis spectrophotometer and a mobile phone digital camera, and the results of this work are illustrated in figure 6 [32].

Figure 6: UV/Vis absorption spectra and digital photographic images recorded for a PR-based, CO2-sensitive plastic film as a function of increasing levels of %CO2 in the gas phase.

The UV/Vis absorbance data in figure 6 for the PR film, at the λmax of its deprotonated form (582 nm), were then plotted as a function of %CO2, as illustrated in figure 7, from which values for %CO2(S = ½), and so K, were gleaned for the PR CO2 indicator film.
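Combining eqn (14) with Beer's law, for a fixed total dye concentration, gives A(D⁻) = A(D⁻)o/(1 + K·%CO2), from which the half-absorbance point %CO2(S = ½) = 1/K follows directly. A short sketch, using the K value of 0.37 %⁻¹ quoted later for this film:

```python
def indicator_absorbance(pct_co2, A0=1.0, K=0.37):
    """A(D-) as a function of %CO2, from eqn (14) plus Beer's law with a
    fixed total dye concentration: A = A0 / (1 + K * %CO2)."""
    return A0 / (1 + K * pct_co2)

# At %CO2 = 1/K the absorbance falls to half its initial value,
# i.e. %CO2(S = 1/2) = 1/K:
half_point = 1 / 0.37                              # ~2.7 %CO2
print(round(indicator_absorbance(half_point), 3))  # 0.5
```

The simple hyperbolic form above is what generates the solid fitted line shown in figure 7.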
Similarly, the ColorMeter App on the mobile phone was used to analyse each of the digital photographs taken of the film as a function of %CO2, as illustrated in figure 6. The sRGB values extracted using the App are gamma-corrected, non-linear parameters, i.e. sR', sG' and sB' values. In this, and all the other indicators described in this section, the PCC was red, and the conversion of the App-measured, non-linear sR' values associated with the digital photographs illustrated in figure 6 to their linear counterparts, sR, used the following function:

If sR' ≤ 0.0405, then sR = sR'/12.92; otherwise sR = {(sR' + 0.055)/1.055}^2.4                (15)

where sR' and sR are in their fractional (i.e. 0−1) forms, rather than their more usual 8 bit (0−255) forms. Once the sR' values were converted to their linear sR values, using eqn (15), the value of A(sR) associated with each of the indicator photographs illustrated in figure 6 was determined and a plot constructed of A(sR) vs %CO2, which is illustrated in figure 7. As an example, sRGB analysis (using the App) of the image of the CO2 indicator at 2% CO2 reveals a value for sR' of 0.59 (or 150 in 8 bit form), which translates to a value of 0.30 (or 77 in 8 bit form) for sR, calculated using eqn (15), and a value of 0.52 for A(sR), based on eqn (1), assuming sRo = 255 (or 1).

A quick comparison of the two plots in figure 7, namely absorbance, A582 (at the λmax of PR, i.e. 582 nm, measured spectrophotometrically and taken from the spectral changes in figure 6), vs %CO2 and A(sR) (calculated as in eqn (1) using sR values derived from the digital images in figure 6) vs %CO2, reveals that they are near identical in shape, implying, as predicted in the first section, that in practice the spectrophotometric absorbance, A582, and the apparent absorbance parameter, A(sR), based on digital image analysis, are linearly correlated in this system.
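The worked example above can be reproduced directly from eqn (15) and eqn (1); note that, computed without intermediate rounding, sR' = 0.59 gives sR ≈ 0.307 and A(sR) ≈ 0.51, the quoted 0.30 and 0.52 reflecting rounding of the reported values:

```python
import math

def linearise(s_prime, threshold=0.0405):
    """Eqn (15): convert a gamma-corrected, fractional channel value s' to its
    linear counterpart s."""
    if s_prime <= threshold:
        return s_prime / 12.92
    return ((s_prime + 0.055) / 1.055) ** 2.4

# Worked example from the text: sR' = 0.59 (150 in 8 bit form) at 2% CO2
sR = linearise(0.59)          # ~0.307, i.e. ~0.30 (77-78 in 8 bit form)
A_sR = math.log10(1.0 / sR)   # eqn (1) with sRo = 1 -> ~0.51
print(round(sR, 3), round(A_sR, 3))
```

The linearisation step matters: computing A directly from the non-linear sR' value would give log10(1/0.59) ≈ 0.23, far from the true apparent absorbance.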
As would be expected, given the linear correlation between A582 and the DCC parameter A(sR), both plots of A582 and A(sR) vs %CO2 yielded the same value, 2.7%, for %CO2(S = ½), as illustrated in figure 7. Thus, the solid line in figure 7 is based on eqn (14), assuming a value of K = 0.37 %⁻¹, which predicts the value %CO2(S = ½) = 2.7%.

Figure 7: Plots of (i) absorbance (at the λmax of PR; measured spectrophotometrically and taken from the spectral changes in figure 6) vs %CO2 (black data points) and (ii) A(sR) (calculated as in eqn (1) using sR values derived from the digital images in figure 6) vs %CO2 (red data points). The broken line identifies the values of A582 and A(sR) when the indicator is exposed to 100% CO2, so that all the PR is in its protonated form. The solid line was generated using eqn (14), with K = 0.37 %⁻¹, which yields a value of %CO2(S = ½) of 2.7%.
As a consequence a number of international  standard testing methods have been developed for this purpose, however most are slow  (usually take hours) and are laboratory based, and so not conducive to testing samples in  situ [38].  Recently a new colourimetric method of assessing photocatalytic activity has been  reported [39, 40] based the photocatalysed reduction of a blue dye, resazurin, Rz, to a pink  coloured product, resorufin, Rf, via the following photocatalytic reaction:                                                       TiO2                         glycerol +  Rz    glyceraldehyde +  Rf                                         (17)                                                        UV  The rate of photo‐induced change in colour of the Rz ink film is proportional to the activity  of the underlying photocatalytic material under test.  The test is easy to use, fast, i.e. usually  < 10 min, and can be used in the in situ assessment of photocatalytic materials [41, 42].  In  addition, it is now the basis of a recent ISO (DIS 21066) [43], in which the time taken for the  20    Rz to lose 90% of  its colour, ttb(90),  is measured, and taken as an inversely proportional  measure of the activity of the photocatalyst.    Here the Rz ink was used to probe the activity of a piece of Activ self‐cleaning glass using  both a UV/Vis spectrophotometer and a mobile phone digital camera to monitor the change  in colour of the Rz ink film (blue to pink) as a function of UV irradiation time and the results  of this work are illustrated in figure 8.      Figure 8: UV/Vis absorption spectra and digital photographic images recorded for a Rz ink  on a piece of Activ glass as a function of UV irradiation time.       The  parameter ttb(90) can  be  determined  for the  Activ glass  by  plotting  the  change  in  absorbance due to the Rz at its max = 609 nm, i.e. 
ΔA609, as a function of irradiation time, as illustrated in figure 9, where ΔAt=0 is the overall change in A609 as the indicator turns from blue to pink due to reaction (17), and ttb(90) is the UV irradiation time taken for ΔA = 0.9·ΔAt=0. Using the same data manipulation process as described in the previous section, the photographic images in figure 8 of the Rz ink film were used to generate the plot of A(sR) vs UV irradiation time, also illustrated in figure 9. As before, a quick comparison of the two plots in figure 9, namely absorbance, A609 (at the λmax of Rz, i.e. 609 nm; measured spectrophotometrically and taken from the spectral changes in figure 8), and A(sR) (calculated as in eqn (1) using sR values derived from the digital images in figure 8), vs UV irradiation time reveals that the spectrophotometric absorbance and A(sR) data are linearly correlated, as the data points lie on a common line. As a consequence, the two data sets generate the same value for ttb(90), namely ca. 8.6 min.

Figure 9: Plots of (i) absorbance (at the λmax of Rz; measured spectrophotometrically and taken from the spectral changes in figure 8) vs UV irradiation time (black data points) and (ii) A(sR) (calculated as in eqn (1) using sR values derived from the digital images in figure 8) vs UV irradiation time (red data points). Both data sets identify the same value for ttb(90), namely 8.6 min.

(3) UV-activated, O2-sensitive indicators
Probably the most commonly analysed chemical species is O2, which is not surprising given its key role in many biochemical and chemical processes. In terms of indicators, the detection of oxygen is dominated, both commercially and in the academic literature, by luminescence quenching [44].
Examples of commercial O2 indicators based on this technology include: OxyDot from OxySense [45], the Spot SP series from PreSens [46], OpTech O2 from Mocon [47] and RedEye oxygen sensor patches from Ocean Optics [48]. In contrast, there are few colour-based O2 indicators [49], although this group has recently developed a UV-activated, colour-based oxygen indicator that utilises an ink containing a semiconductor photocatalyst (usually TiO2), a highly coloured redox dye, usually a thiazine dye such as methylene blue, MB, or thionine, Th, and a sacrificial electron donor, usually glycerol. Upon an initial illumination with UV, the ink is 'activated', i.e. rendered O2-sensitive, via the following photocatalytic process:

glycerol + MB → glyceraldehyde + LMB   [TiO2, UV]                (18)

since the reduced form of MB, i.e. leuco-methylene blue, LMB, reacts readily with O2, via:

LMB + O2 → MB + H2O                (19)

Thus, upon exposure to a short, intense burst of UV illumination, the initially blue-coloured ink is bleached to LMB and stays bleached until and unless O2 is present, whereupon it regains its original blue colour at a rate, or with a half-life, t(50), that depends upon the ambient level of O2 [50]. This indicator technology has been used commercially to identify O2 ingress in packaged food [51].
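Rate metrics such as t(50), or the ttb(90) of the previous section, can be extracted from either the spectrophotometric or the DCC data set by linear interpolation of the monitored curve. A small sketch, with illustrative made-up readings rather than the measured data of this work:

```python
def time_to_fraction(times, values, fraction):
    """Return the time at which the monitored signal reaches `fraction` of its
    overall change, by linear interpolation between the bracketing samples."""
    target = values[0] + fraction * (values[-1] - values[0])
    for (t0, v0), (t1, v1) in zip(zip(times, values),
                                  zip(times[1:], values[1:])):
        if (v0 - target) * (v1 - target) <= 0:   # target bracketed here
            return t0 + (target - v0) * (t1 - t0) / (v1 - v0)
    raise ValueError("target fraction never reached")

# Illustrative recovery of A(sR) with 'dark' time (min) for an O2 indicator;
# t(50) is the time at which A reaches half its overall change (0.20 here):
t = [0, 2, 4, 6, 8]
A = [0.0, 0.15, 0.28, 0.36, 0.40]
print(time_to_fraction(t, A, 0.5))   # ~2.77 min for these made-up data
```

The same call with `fraction=0.9` on a bleaching curve would return a ttb(90)-style threshold time.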
In this study, a typical TiO2/thionine/glycerol ink was used to coat a glass cover slip and, after exposure to a burst of UV light (2 mW cm−2 for 20 s from 2 × 15 W, 352 nm BL lamps), the recovery of its original colour in air (21% O2) was monitored both spectrophotometrically and photographically as a function of 'dark' time (i.e. time after the initial exposure to UV radiation), and the results of this work are illustrated in figure 10.

Figure 10: Recovery in air of a photocatalytically-reduced TiO2/thionine/glycerol ink film, monitored both spectrophotometrically and using a digital mobile phone camera. The change in the absorbance spectrum of the ink film, recorded every 2 min, is illustrated here.

The parameter t(50) in air can be determined for the UV-activated TiO2/thionine/glycerol ink film by plotting the change in absorbance due to the Th in the ink at its λmax = 609 nm, i.e. ΔA609, as a function of 'dark' time, as illustrated in figure 11, where ΔAt=0 is the overall change in A609 as the indicator turns from colourless to blue due to reaction (19), and t(50) is the time taken for ΔA = 0.5·ΔAt=0. Using the same data manipulation process as described in the previous section, the photographic images in figure 10 of the TiO2/thionine/glycerol ink film were used to generate the plot of A(sR) vs 'dark' time, also illustrated in figure 11. As before, a quick comparison of the two plots in figure 11, namely absorbance, A609 (at λmax for Th, i.e. 609 nm; measured spectrophotometrically and taken from the spectral changes in figure 10), and A(sR) (calculated as in eqn (1) using sR values derived from the digital images in figure 10), vs 'dark' time reveals that the spectrophotometric absorbance and A(sR) data are linearly correlated, as the data points lie on a common line. As a consequence, the two data sets generate the same value for t(50), namely 4.6 min.
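The statement that the two data sets "lie on a common line" can be quantified with an ordinary least-squares fit and a correlation coefficient. An illustrative sketch with invented paired values (not the paper's data):

```python
import numpy as np

# Hypothetical paired measurements of the same ink film at the same
# 'dark' times: spectrophotometric absorbance at 609 nm, and the
# apparent absorbance A(sR) derived from phone-camera images.
A609 = np.array([0.05, 0.18, 0.33, 0.47, 0.61, 0.74, 0.88])
A_sR = np.array([0.06, 0.19, 0.34, 0.46, 0.62, 0.75, 0.87])

m, c = np.polyfit(A609, A_sR, 1)   # least-squares line A(sR) = m*A609 + c
r = np.corrcoef(A609, A_sR)[0, 1]  # Pearson correlation coefficient
print(f"slope = {m:.3f}, intercept = {c:.3f}, r = {r:.4f}")
```

A slope near unity with r ≈ 1 means the two measures report the same kinetics, so any characteristic time (ttb(90) or t(50)) extracted from either data set will agree, as found here.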
Other work shows that t(50) is inversely proportional to %O2 [47].

Figure 11: Plots of: (i) absorbance (at λmax for Th; measured spectrophotometrically and taken from the spectral changes in figure 10) vs 'dark' time (black data points) and (ii) A(sR) (calculated as in eqn (1) using sR values derived from the digital images in figure 10) vs 'dark' time (red data points). Both data sets identify the same value for t(50), namely 4.6 min.

Conclusions

The absorption spectrum of a simple simulated dye, with an absorbance, Ao, at λmax, is described by eqn (4). This equation allows the creation of seven absorption spectra, with Ao = 1, that span the visible spectrum. These spectra can be converted into sRGB values, see Table 1, which in turn can be used to calculate values for (i) the apparent absorbance, A(sRGB), (ii) the apparent fraction of absorbed light, 1−T(sRGB), where T is the apparent transmittance, and (iii) the colour difference, ΔE. The latter are popular parameters in digital colour colourimetry, DCC. For each dye, one of the sRGB colour parameters, the principal colour component, PCC, varies more than the other two as Ao is varied from 0 to 1 (see Table 1), and it is this colour that is used to probe the degree of correlation between the actual absorbance (Ao) and the three DCC parameters: A(sRGB), 1−T(sRGB) and ΔE. For all seven simulated dyes, over the absorbance range 0–1, the apparent absorbance, A(sRGB), correlates better with Ao than the other two DCC parameters. Three real indicators, namely a CO2 indicator, a photocatalytic activity indicator and an oxygen indicator, were used to test this prediction, and in each case the PCC was red. In all three cases the apparent absorbance parameter, A(sR), derived from sRGB analysis of the digital images, was found to be proportional to the real absorbance, as measured using UV/Vis spectrophotometry, and yielded the same key analytical information, i.e.
pCO2(S=1/2) = 2.7%, ttb(90) = 8.6 min and t(50) = 4.6 min. In all this work, digital image information capture required only a mobile phone camera and a colour measuring App, and so is much cheaper and easier to use than most UV/Vis spectrophotometers. Although never as good as UV/Vis spectrophotometry, the widespread use of digital cameras and Apps makes it increasingly likely that the use of DCC in quantitative and, especially, semi-quantitative analysis of colour-based indicators will become more commonplace. This work suggests that in most cases the digital data should be plotted in the form of apparent absorbance, A(sRGB), rather than fractional light absorbed, 1−T(sRGB), or colour difference, ΔE. The fact that a digital camera can photograph many indicators simultaneously suggests that, unlike UV/Vis spectrophotometry, it is also ideally suited for multi-analyte analysis using an array of colourimetric indicators.

References

[1] X.-d. Wang, O.S. Wolfbeis, Fiber-optic chemical sensors and biosensors (2013–2015), Analytical Chemistry, 88 (2016) 203-227.
[2] T.R. Knutson, C.M. Knutson, A.R. Mozzetti, A.R. Campos, C.L. Haynes, R.L. Penn, A fresh look at the crystal violet lab with handheld camera colorimetry, Journal of Chemical Education, 92 (2015) 1692-1695.
[3] S. Wang, X. Zhao, I. Khimji, R. Akbas, W. Qiu, D. Edwards, D.W. Cramer, B. Ye, U. Demirci, Integration of cell phone imaging with microchip ELISA to detect ovarian cancer HE4 biomarker in urine at the point-of-care, Lab on a Chip, 11 (2011) 3411-3418.
[4] A. Lapresta-Fernández, L.F. Capitán-Vallvey, Multi-ion detection by one-shot optical sensors using a colour digital photographic camera, Analyst, 136 (2011) 3917-3926.
[5] Y. Zhang, Y. Wu, Y. Zhang, A.
Ozcan, Fusion of lens-free microscopy and mobile-phone microscopy images for high-color-accuracy and high-resolution pathology imaging, in: Optics and Biophotonics in Low-Resource Settings III, International Society for Optics and Photonics, 2017, p. 100550P.
[6] H. Araki, J. Kim, S. Zhang, A. Banks, K.E. Crawford, X. Sheng, P. Gutruf, Y. Shi, R.M. Pielak, J.A. Rogers, Materials and device designs for an epidermal UV colorimetric dosimeter with near field communication capabilities, Advanced Functional Materials, 27 (2017) 1604465-1604474.
[7] G.W. Ewing, Instrumental Methods of Chemical Analysis, 4th ed., McGraw-Hill Inc., Tokyo, Japan, 1975.
[8] Single-beam photometer/filter, http://www.medicalexpo.com/prod/robert-riele/product-69866-675162.html (Accessed March 2018).
[9] M.S. Tooms, Colour Reproduction in Electronic Imaging Systems: Photography, Television, Cinematography, John Wiley & Sons, New Delhi, India, 2016.
[10] D.L. Williams, T.J. Flaherty, C.L. Jupe, S.A. Coleman, K.A. Marquez, J.J. Stanton, Beyond λmax: transforming visible spectra into 24-bit color values, Journal of Chemical Education, 84 (2007) 1873-1877.
[11] M. Stokes, M. Anderson, S. Chandrasekar, R. Motta, A Standard Default Color Space for the Internet: sRGB, Version 1.10, http://www.color.org/sRGB.xalter (Accessed March 2018).
[12] ColorMeter RGB Hex Color Picker and Colorimeter by White Marten, https://itunes.apple.com/us/app/colormeter-rgb-hex-color-picker-and-colorimeter/id713258885?mt=8 (Accessed March 2018).
[13] Color Card and RGB Color Meter by NStart MITech, https://itunes.apple.com/us/app/color-card-and-rgb-color-meter/id1297107041?mt=8 (Accessed March 2018).
[14] Color Mate - Convert and Analyze Colors by David Williames, https://itunes.apple.com/us/app/color-mate-convert-and-analyze-colors/id896088941?mt=8 (Accessed March 2018).
[15] ImageJ, https://imagej.nih.gov/ij/ (Accessed March 2018).
[16] Adobe Photoshop, https://www.adobe.com/uk/products/photoshop/free-trial-download.html (Accessed March 2018).
[17] Image Colour Picker, https://imagecolorpicker.com/ (Accessed March 2018).
[18] A. Mills, D. Yusufu, Highly CO2 sensitive extruded fluorescent plastic indicator film based on HPTS, Analyst, 141 (2016) 999-1008.
[19] A. Mills, N. Wells, J. MacKenzie, G. MacDonald, Kinetics of reduction of a resazurin-based photocatalytic activity ink, Catalysis Today, 281 (2017) 14-20.
[20] K Hand Coater, http://rkprint.com/?page_id=10 (Accessed March 2018).
[21] A. Mills, D. Hazafy, Nanocrystalline SnO2-based, UVB-activated, colourimetric oxygen indicator, Sensors and Actuators B: Chemical, 136 (2009) 344-349.
[22] ASTM E308-01, Standard Practice for Computing the Colors of Objects by Using the CIE System, ASTM International, West Conshohocken, PA, 2001.
[23] Complementary colours, after-images, retinal fatigue, colour mixing and contrast sensitivity, http://www.animations.physics.unsw.edu.au/jw/light/complementary-colours.htm (Accessed March 2018).
[24] Nellcor™ Adult/Pediatric Colorimetric CO2 Detector, http://www.medtronic.com/covidien/en-us/products/intubation/nellcor-adult-pediatric-colorimetric-co2-detector.html (Accessed March 2018).
[25] A. Mills, K. Eaton, Optical sensors for carbon dioxide: an overview of sensing strategies past and present, Quimica Analitica, 19 (2000) 75-86.
[26] PreSens CO2 sensors, https://www.presens.de/ (Accessed March 2018).
[27] Ocean Optics, https://oceanoptics.com/ (Accessed March 2018).
[28] Mocon, http://www.mocon.com/ (Accessed March 2018).
[29] A. Mills, Q. Chang, N. McMurray, Equilibrium studies on colorimetric plastic film sensors for carbon dioxide, Analytical Chemistry, 64 (1992) 1383-1389.
[30] A. Mills, Q.
Chang, Tuning colourimetric and fluorimetric gas sensors for carbon dioxide, Analytica Chimica Acta, 285 (1994) 113-123.
[31] A. Mills, Optical sensors for carbon dioxide and their applications, in: M.-I. Baraton (Ed.), Sensors for Environment, Health and Security, Springer, UK, 2009, pp. 347-350.
[32] A. Mills, G.A. Skinner, P. Grosshans, Intelligent pigments and plastics for CO2 detection, Journal of Materials Chemistry, 20 (2010) 5008-5010.
[33] Activ™ glass from Pilkington, https://www.pilkington.com/en-gb/uk/products/product-categories/self-cleaning/pilkington-activ-range (Accessed March 2018).
[34] Climisan paint from STO, http://www.sto.co.uk/en/home/home.html (Accessed March 2018).
[35] TX Active from Italcementi, https://asknature.org/idea/tx-active-cement/#.WquuDX9pxhE (Accessed March 2018).
[36] Hydrotect from TOTO, http://gb.toto.com/technology/technology-single-view/Technology/show/HYDROTECT/ (Accessed March 2018).
[37] A. Mills, C. O'Rourke, K. Moore, Powder semiconductor photocatalysis in aqueous solution: An overview of kinetics-based reaction mechanisms, Journal of Photochemistry and Photobiology A: Chemistry, 310 (2015) 66-105.
[38] A. Mills, C. Hill, P.K. Robertson, Overview of the current ISO tests for photocatalytic materials, Journal of Photochemistry and Photobiology A: Chemistry, 237 (2012) 7-23.
[39] A. Mills, J. Wang, S.-K. Lee, M. Simonsen, An intelligence ink for photocatalytic films, Chemical Communications, (2005) 2721-2723.
[40] A. Mills, N. Wells, Reductive photocatalysis and smart inks, Chemical Society Reviews, 44 (2015) 2849-2864.
[41] A. Mills, J. Hepburn, D. Hazafy, C. O'Rourke, N. Wells, J. Krysa, M. Baudys, M. Zlamal, H. Bartkova, C.E. Hill, Photocatalytic activity indicator inks for probing a wide range of surfaces, Journal of Photochemistry and Photobiology A: Chemistry, 290 (2014) 63-71.
[42] A. Mills, N.
Wells, Indoor and outdoor monitoring of photocatalytic activity using a mobile phone app. and a photocatalytic activity indicator ink (paii), Journal of Photochemistry and Photobiology A: Chemistry, 298 (2015) 64-67.
[43] ISO/PRF 21066, https://www.iso.org/standard/69815.html (Accessed March 2018).
[44] X.-d. Wang, O.S. Wolfbeis, Optical methods for sensing and imaging oxygen: materials, spectroscopies and applications, Chemical Society Reviews, 43 (2014) 3666-3761.
[45] Oxydot from OxySense, http://www.oxysense.com/ (Accessed March 2018).
[46] Spot SP from PreSens, https://www.presens.de/products/o2/sensors.html (Accessed March 2018).
[47] OpTech® O₂ from Mocon, http://www.mocon.com/instruments/optech-o2-model-p.html (Accessed March 2018).
[48] RedEye oxygen sensor patches from Ocean Optics, https://oceanoptics.com/product/redeye-oxygen-sensing-patches/ (Accessed March 2018).
[49] S.-K. Lee, A. Mills, A. Lepre, An intelligence ink for oxygen, Chemical Communications, (2004) 1912-1913.
[50] S.-K. Lee, M. Sheridan, A. Mills, Novel UV-activated colorimetric oxygen indicator, Chemistry of Materials, 17 (2005) 2744-27.
[51] UPM Shelf Life Guard Keeping an Eye on Packaged Foods, http://www.upm.com/About-us/Newsroom/Releases/Pages/UPM-Shelf-Life-Guard-Keeping-an-Eye-on-Packaged-Foods-001-to-10-helmi-2011-19-14.aspx (Accessed March 2018).

Comelli et al.
Herit Sci (2016) 4:21, DOI 10.1186/s40494-016-0090-5

RESEARCH ARTICLE

Dual wavelength excitation for the time-resolved photoluminescence imaging of painted ancient Egyptian objects

Daniela Comelli1†, Valentina Capogrosso1,2, Christian Orsenigo3 and Austin Nevin2*†

Abstract

Background: The scientific imaging of works of art is crucial for the assessment of the presence and distribution of pigments and other materials on surfaces. It is known that some ancient pigments are luminescent: these include pink red-lakes and the blue and purple pigments Egyptian Blue (CaCuSi4O10), Han blue (BaCuSi4O10) and Han purple (BaCuSi2O6). Indeed, the unique near-infrared luminescence emission of Egyptian blue allows the imaging of its distribution on surfaces.

Results: We focus on the imaging of the time-resolved photoluminescence of ancient Egyptian objects in the Burri Collection from the Civic Museum of Crema and of the Cremasco (Italy). Time-resolved photoluminescence images have been acquired using excitation at 355 nm for detecting the ns emission of red lakes and binding media; by employing 532 nm excitation, Egyptian blue is probed and the spatial distribution of its long-lived microsecond emission is imaged. For the first time we provide data on the photoluminescence lifetime of Egyptian blue directly from objects. Moreover, we demonstrate that the use of a pulsed laser emitting at two different wavelengths increases the effectiveness of the lifetime imaging technique for mapping the presence of emissions from pigments on painted surfaces. Laser-induced luminescence spectra from different areas of the objects and traditional digital imaging, using LED-based lamps, long pass filters and a commercial photographic camera, complement the results from photoluminescence lifetime imaging. We demonstrate the versatility of a new instrumental setup, capable of recording decay emission kinetics with lifetimes from nanoseconds to microseconds.
Conclusions: While the combined wavelength approach for the imaging of emissions from different materials has been demonstrated for the study of ancient Egyptian pigments (both organic and inorganic), the method could be extended to the analysis of modern pigments and paintings.

Keywords: Photoluminescence imaging, Egyptian blue, Lifetime, Laser induced fluorescence, Pigments, Ancient Egyptian materials, Paintings

© 2016 The Author(s). This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.

Background

The imaging of paintings and painted objects relies on the interaction between radiation and matter and the detection of reflected or emitted photons. Technical examination and analysis of works of art typically relies on the integration of non-invasive imaging and suitable spectroscopic analysis, often followed by taking and analysing samples. In this work we report an imaging study of Egyptian objects with a novel portable setup for time-resolved photoluminescence (TRPL) imaging and spectroscopy, with a focus on the analysis of Egyptian Blue and lake pigments on artefacts.
Photoluminescence (PL) of pigments and paint has received significant attention due to its practical use in the assessment of the condition of works of art: for example, aged organic varnishes may appear more luminescent than freshly applied paint, and some pigments, including semiconductor materials, emit characteristic PL signals [1].

(*Correspondence: austinnevin@gmail.com. †Daniela Comelli and Austin Nevin contributed equally to this work. 2 Istituto di Fotonica e Nanotecnologie-Consiglio Nazionale delle Ricerche (IFN-CNR), Piazza Leonardo da Vinci 32, 20133 Milan, Italy. Full list of author information is available at the end of the article.)

Among ancient Egyptian pigments, Egyptian blue and madder-based red lakes are luminescent and are routinely identified with the aid of microscopic and spectroscopic analysis [2]. While the fluorescence observed in madder lakes is due to the anthraquinone molecules purpurin and alizarin, the optical emission of Egyptian blue has been ascribed to Cu2+ in a solid calcium-silica matrix [3–5]. Recent research on the mineral pigment has reported that synthetic cuprorivaite has a radiative emission with a maximum peaked at 910 nm, a lifetime of 107 μs and a quantum yield of 10% [6]. The imaging of PL from painted surfaces is often carried out to detect the colour of the emission, for which various approaches have been suggested.
Whereas ultraviolet (UV) light sources based on mercury lamps require filtering of spurious radiation emitted in the visible to render them useful for imaging and photography, innovative uses of Xenon flashes and digital photography have demonstrated the peculiar and noteworthy infrared emissions from Egyptian blue on paintings and painted objects from the British Museum and, more recently, on Fayoum Portraits [7–9]. Many other examples of digital imaging of Egyptian blue from objects have been documented.

In addition to digital photographic imaging, PL spectroscopic imaging techniques have found multiple applications for the analysis of paintings, including those carried out with a scanning Lidar approach [10, 11] and with light emitting diodes (LEDs) [12]. With these methods, the detection of PL can be carried out with imaging spectrometers or suitably filtered imaging detectors [13, 14]. Another approach to the imaging of the PL from works of art relies on the detection of the emission lifetime, which can be achieved by combining pulsed laser excitation with a time-gated imaging detector, known as fluorescence lifetime imaging or, more generally, TRPL imaging [15]. Applications of TRPL imaging for the detection of organic materials on stone sculptures [16], wall paintings [17] and semiconductor pigments on paper [18] have been published, and demonstrate how TRPL imaging can map the chemical composition of surfaces based on significant differences in emission lifetime. In previous applications a ns Q-switched Nd:YAG laser emitting at 355 nm was employed, as this wavelength can excite emissions from binding media, organic polymers and the aforementioned fluorescent pigments. In this work we have modified our TRPL imaging set-up by adding excitation at 532 nm from the same Nd:YAG laser to allow the excitation of Egyptian blue.
The combined wavelength approach has been applied to the imaging of painted objects from the Carla Maria Burri Collection in the Civic Museum of Crema and the Cremasco (see sample description). Complementary laser-induced PL spectroscopy from selected points on the objects has been carried out, and digital photography using LED excitation complements the images acquired with TRPL imaging.

Results and discussion

Preliminary PL spectroscopy analysis of a model sample painted with the commercially available Egyptian blue pigment reveals a characteristic emission spectrum with a maximum emission at 920 ± 6 nm and a monoexponential emission lifetime of 138 ± 4 µs (95% confidence) (Fig. 1), in good agreement with the spectral data reported by others for pigments from the same supplier [7, 19]. The emission lifetime measured in our commercial pigment is in agreement with Borozov et al. [19] but differs with respect to that reported for the synthetic mineral calculated using single photon counting and 637 nm excitation (τ = 107 μs [6]). It is possible that the discrepancy in the PL lifetime is due to grinding of the pigment rather than to differences related to the methods used for the estimation of the PL decay [19]. PL imaging of the ancient Egyptian cartonnage (Fig. 2a) reveals the presence of different luminescent painted areas, effectively probed with the two lamp-based excitations: following UV excitation (Fig. 2b), an orange emission on red-pink areas and an intense bluish emission in white areas are observed. Conversely, excitation in the green and inspection of the emission in the near-infrared reveal the use of an infrared-luminescent pigment in most, but not all, of the areas painted with dark colours (Fig. 2c). Some of the areas painted dark are black and do not contain luminescent pigments.

Results of PL spectroscopy of a dark painted area following excitation at 532 nm (point 3 in Fig.
2a), is clear evidence of the use of the Egyptian blue pigment, as the detected emission spectrum closely resembles that of the Egyptian blue model sample (Fig. 3). TRPL imaging of details of the cartonnage painted with some of the dark colours reveals a monoexponential microsecond decay kinetic with a mean lifetime in the analysed area of 119 µs (7.7 µs interquartile range) and a negative skewness of −3.04, a reflection of a distribution of lifetime values that tend to be shorter than the mean (Fig. 4b).

The UV-induced optical emission of the cartonnage is associated with a nanosecond decay kinetic, typically ascribed to emission from organic molecules: following analysis of the first ten nanoseconds of the emission decay profile, red-pink painted areas, white painted areas (as the face and the decorative heat) and the yellow painted background have effective lifetimes close to 3.4, 3.8 and 3.1 ns, respectively (Fig. 4). Although the reconstructed lifetime values differ by hundreds of picoseconds, the lifetime map allows the rapid discrimination between the different painted areas, suggesting the presence of different organic pigments and binders.

Fig. 1 PL analysis of the Egyptian blue painted model sample. (top) PL emission spectrum and (bottom) PL emission decay (PL emission counts vs. time/µs) following 532 nm pulsed excitation of the model sample painted with the commercial Egyptian blue pigment. A linear fit of the detected emission decay kinetic (black line) on the basis of a monoexponential model gives τ = 137.9 µs (95% confidence bounds 133.8–142.0 µs), with a goodness of fit of R² = 0.999. The sample exhibits an emission spectrum peaked at 920 nm and a monoexponential decay kinetic with a lifetime of 138 μs.
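The monoexponential lifetimes quoted above (e.g. τ = 138 µs for the Egyptian blue model sample) can be reproduced with a simple log-linear least-squares estimate. A minimal sketch with synthetic, noise-free data (not the measured decays; real data would need background subtraction and a weighted or nonlinear fit):

```python
import numpy as np

def fit_lifetime(t, counts):
    """Effective lifetime tau for a mono-exponential decay
    I(t) = I0 * exp(-t / tau), estimated by a linear least-squares
    fit to log(counts); assumes positive, background-subtracted data."""
    slope, _ = np.polyfit(t, np.log(counts), 1)
    return -1.0 / slope

# Synthetic microsecond decay similar to the Egyptian blue emission.
t = np.linspace(0.0, 500.0, 26)           # gate delays / us
counts = 1.0e5 * np.exp(-t / 120.0)       # ideal, noise-free decay
print(round(fit_lifetime(t, counts), 1))  # 120.0
```

Applying such a fit pixel by pixel to the stack of gated images yields the false-colour lifetime maps shown in Figs. 4 and 5.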
Specifically, the emission spectrum recorded on the red-pink painted areas (point 1 in Fig. 2a), peaked at 615 nm, resembles the spectral features of a red lake pigment, likely madder-based [4, 20] (Fig. 3). The broad spectrum recorded in the white painted area, with an emission peak at 590 nm (Fig. 3), suggests the presence of a complex organic material, and is ascribed to emissions from binding media [1].

TRPL imaging of the stone mask, following visible excitation, suggests the presence of traces of the Egyptian blue pigment used to decorate the areas of the hair, beard and irises of the eyes in Fig. 5. Blue used to paint the irises in polychromy has also been reported by Verri [7]. The clear identification of Egyptian Blue has been achieved through PL spectroscopy on a point on the hair of the mask: an emission spectrum comparable to that recorded on the commercial sample of Egyptian blue was recorded (data not shown). In terms of lifetime analysis, these painted areas show a microsecond decay kinetic close to that found from details on the cartonnage painted with Egyptian blue: here, we have detected a mean lifetime of 121.3 µs (6.6 µs interquartile range) and a negative skew of −2.79 (Fig. 5).

Conclusions

The dual wavelength-excitation PL lifetime imaging approach reported here for the first time has been demonstrated to be valuable for probing the optical emission

Fig. 2 Digital imaging of the cartonnage from the Burri collection. (Top) Visible image; (middle) image of the PL emitted in the visible following UV excitation, highlighting the presence of a fluorescent red pigment. Three points analysed with PL spectroscopy are indicated. (Bottom) image of the NIR PL following excitation with green light, demonstrating the presence of a NIR-emitting pigment, most probably Egyptian Blue, in most of the areas painted in dark colours.

Fig.
3 Laser-induced PL spectroscopy of different analysis points of the cartonage: a red-pink painted area (point 1 in Fig. 2) (λmax 615 nm), flesh tones (point 2 in Fig. 2) (λmax 590 nm) following excitation at 355 nm. The PL emission spectrum of an area painted with dark colours (point 3 in Fig. 2) (λmax 920 nm) resembles the spectral features of Egyptian blue Page 5 of 8Comelli et al. Herit Sci (2016) 4:21 of different materials, of organic and inorganic nature, for the study of Ancient Egyptian artefacts. The same approach could be extended to the analysis of modern pigments and paintings, allowing an in-depth investiga- tion of the emission properties of organic and inorganic artist materials, including Cd-based pigments, in a com- bined way [13, 21]. Whereas the PL properties of the naturally occurring cuprorivaite mineral have been widely investigated in terms of both spectrum and decay kinetics [3, 6] and its brilliant emission has been exploited for the rapid detec- tion of the presence of the pigment in artworks from the Ancient Egypt [7], the near-infrared emission in Egyptian blue-painted ancient objects is not completely under- stood [19]. The intense and long emission recorded with PL imaging which is on the order of 120 μs is nonetheless diagnostic for the presence of Egyptian blue. Differences in lifetime in objects with respect to that reported and detected from that of the pure mineral and the synthetic commercial pigment are noted. The reasons behind life- time differences are beyond the scope of this work and require more refined analysis. Research in the future should address different degrees of crystallinity of the pigment in ancient objects following ancient synthesis Fig. 4 Luminescence lifetime imaging of the cartonage. (Left panel, top) False colour map of the effective PL lifetime reconstructed on details of the Cartonage following 355 nm pulsed laser excitation. The map is superimposed over the UV-induced fluorescence image of the artwork. 
(Right panel, top) The distribution of lifetime values in the cartonnage in pixels (counts) vs. lifetime; the lifetime histogram shows a tri-modal behaviour, suggesting the presence of three different fluorescent materials with effective lifetime values of 3.1, 3.4 and 3.8 ns. (Left panel, bottom) False colour map of the PL lifetime reconstructed on details of the cartonnage following 532 nm pulsed laser excitation. The map is superimposed over the visible-induced NIR-PL image of the artwork, and shows the use of Egyptian blue for different details of the cartonnage painted in dark colours. (Right panel, bottom) The distribution of lifetime values in the cartonnage in pixels (counts) vs. lifetime, showing a unimodal behaviour with a mean lifetime of 119.3 μs (7.7 μs interquartile range).

processes, grinding or sintering of the pigments, and determine if chemical interactions between the pigment and the surrounding matrix in objects affect the emission lifetimes. Further analyses with complementary methods sensitive to impurities which could account for differences in lifetime would be required to better refine these hypotheses.

Methods

Description of the collection and objects

The antiquities collection of the late Carla Maria Burri (1935–2009), Director of the Italian Cultural Institute in Egypt from 1993 to 1999, was donated to the Museo Civico di Crema e del Cremasco in winter 2010/2011 [22]. In 2013, Christian Orsenigo was charged with the study of the artefacts, under the supervision of Dr. Francesco Muscolino of the Soprintendenza Archeologia della Lombardia. The collection consists of about eighty objects and covers a vast time span, ranging from the most ancient item—a faience tile once belonging to the decoration of the walls of the galleries underneath Djoser's Step Pyramid at Saqqara (ca. 2630–2611 B.C.E.)—excluding some flints currently under study—to Islamic glass dating to the 11th C.
(Orsenigo 2016, forthcoming). The collection includes objects belonging to different typologies, such as funerary statuettes (shabtis) from the late New Kingdom, masks and parts of sarcophagi, bronzes and amulets, as well as Hellenistic and Roman terracotta figurines and lamps. In this work we examine a cartonnage and a mask.

The cartonnage

The polychrome cartonnage fragment probably comes from the back terminal of a mask, depicting a human-headed ba-bird, crowned with a solar disc and holding two maat-feathers at each wing. Unfortunately, nothing is known about the provenance of this object, as is the case for a very similar fragment kept at the Petrie Museum, UCL (accession number UC45900), which is a particularly good comparandum [23]. The material is linen covered with plaster and then painted. Cartonnages such as this can be dated from the Ptolemaic to the Roman Periods [24].

The mask

Even if preserved only in its upper front part, the object is likely a miniature painted terracotta comic mask of an actor playing a slave character in the New Comedy. It shows a broad nose, brows flying up to the sides and a wrinkled forehead. It can be dated to the Ptolemaic Period. See references for comparisons with similar objects [25, 26].

Fig. 5 Luminescence lifetime imaging of a mask from the Burri collection. (Left panel) False colour map of the PL lifetime reconstructed on details of the stone mask following 532 nm pulsed laser excitation. The map, superimposed on the visible-induced NIR-PL image, outlines the use of the Egyptian blue pigment for decorating the mask's hair and eyes. (Right panel) Related distribution of lifetime values in the mask, showing a unimodal behaviour with a mean lifetime of 121.3 µs (6.6 µs interquartile range).

Reference sample

A painted model sample of the Egyptian blue pigment (Kremer Pigmente, GmbH) was prepared and analysed.
The pigment, dispersed in Plextol, was applied as a painted layer on a glass substrate.

Time-resolved photoluminescence imaging

The TRPL imaging device is described in detail elsewhere [16] and is summarized below; a schematic diagram of the setup can be found in the same reference. The device comprises a ns laser excitation source combined with a time-gated intensified camera (C9546-03, Hamamatsu Photonics, Hamamatsu City, Japan), capable of high-speed gating to capture images of transient phenomena. A custom-built trigger unit and a precision delay generator (DG535, Stanford Research System, Sunnyvale, CA, USA) complete the system, which has a net temporal jitter close to 0.5 ns.

The Q-switched laser source (FTSS 355-50, Crylas GmbH, Berlin, Germany), based on the third harmonic of a diode-pumped Nd:YAG crystal (λ = 355 nm, pulse energy = 70 μJ, pulse duration = 1.0 ns, repetition frequency = 100 Hz), has been modified to provide the second harmonic of the same source emission (λ = 532 nm, pulse energy = 60 μJ, pulse duration = 1.0 ns). By using both wavelengths we can probe the PL emission of materials with different absorption spectra. The two emission lines of the laser source are collinear; hence it is easy to switch between them during measurements. To spectrally clean the laser emission, proper bandpass filters (FL355-10 or FL532-10, Thorlabs GmbH, Germany) are employed at the exit of the laser source.

The laser beam, coupled into a silica optical fibre, is magnified with suitable silica optics to illuminate a circular area of about 25 cm diameter on the surface of the object, with a typical fluence per pulse below 140 nJ cm−2. This very low power density does not lead to detectable changes in the intensity of emission due to photooxidation in samples following typical measurements.
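The illumination figures quoted above can be cross-checked with a few lines of arithmetic. This is only a sanity check on the stated numbers, assuming the 25 cm spot is a uniform disc:

```python
import math

# Fluence check for the 532 nm line: a 60 uJ pulse spread over a
# circular spot of ~25 cm diameter (values taken from the text).
pulse_energy_nj = 60e3                       # 60 uJ expressed in nJ
spot_area_cm2 = math.pi * (25.0 / 2) ** 2    # ~490.9 cm^2
fluence_nj_per_cm2 = pulse_energy_nj / spot_area_cm2
print(round(fluence_nj_per_cm2, 1))  # ~122.2 nJ/cm^2
```

The result, about 122 nJ cm−2, is indeed consistent with the quoted "below 140 nJ cm−2".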
The kinetics of the emission is detected by the gated intensified camera, which is based on a GaAs photocathode with spectral sensitivity from 380 to 900 nm. The gate width of the camera is adjustable from 3 ns to continuous mode, depending on the kinetic properties of the surface under investigation. In this work a gate width of 5 ns was employed to detect the nanosecond kinetics of the emission from organic materials. Long-lived decay kinetics ascribed to emission from areas painted in Egyptian blue were effectively sampled by increasing the gate width to 100 µs. A proper optical high-pass filter was placed in front of the camera lens to remove excitation light: the B + W UV/IR Cut 486 M MRC filter (Schneider Optics), with high transmission from 380 to 720 nm, was employed for lifetime analysis following 355 nm excitation, whereas the Kodak Wratten 23A filter, transmitting light beyond 550 nm, was employed for measurements at 532 nm excitation.

As shown in previous research, the short temporal jitter of the system is key to the estimation of ns lifetimes, which may be used to differentiate organic binding media and pigments [16]. TRPL imaging is achieved through reconstruction of the effective lifetime map based on a simple mono-exponential decay model [14].

Laser-induced photoluminescence spectroscopy

A compact spectrometer and the same dual-wavelength laser source employed in the TRPL imaging device were used to detect emission spectra from selected points on the objects. The compact spectrometer (TM-CCD C10083CA-2100, Hamamatsu Photonics) mounts a back-thinned CCD image sensor and a transmission-type grating, recording spectra between 320 and 1100 nm with a spectral resolution of 6 nm. Through fibre optics, both the laser and the spectrometer are remotely connected to an optical probe working in the 45–0° configuration mode.
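The effective-lifetime reconstruction used for TRPL imaging assumes a mono-exponential decay per pixel [14]. As an illustration of the idea (not the authors' exact fitting procedure), a two-gate rapid-lifetime-determination estimate from two time-gated images can be sketched as follows; array and variable names are invented:

```python
import numpy as np

def effective_lifetime_map(img_t1, img_t2, t1, t2, eps=1e-12):
    """Per-pixel effective lifetime from a mono-exponential decay model.

    Assumes I(t) = A * exp(-t / tau), sampled by two time-gated images
    acquired at delays t1 and t2 (t2 > t1), so that
    tau = (t2 - t1) / ln(I(t1) / I(t2)).
    """
    ratio = np.clip(img_t1 / (img_t2 + eps), 1.0 + 1e-9, None)
    return (t2 - t1) / np.log(ratio)

# Synthetic check: a uniform 119.3 us lifetime should be recovered.
tau_true = 119.3          # microseconds
t1, t2 = 0.0, 100.0       # gate delays, microseconds
img1 = 1000.0 * np.exp(-t1 / tau_true) * np.ones((4, 4))
img2 = 1000.0 * np.exp(-t2 / tau_true) * np.ones((4, 4))
tau_map = effective_lifetime_map(img1, img2, t1, t2)
print(np.round(tau_map.mean(), 1))  # ~119.3
```

In practice an instrument of this kind acquires a sequence of gated images and fits the decay; the two-gate formula above is the minimal closed-form case of the same model.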
Proper transmission high-pass filters (FEL420 or FEL550, Thorlabs GmbH, Germany), chosen according to the employed laser wavelength, are mounted on the probe, which allows excitation and collection of photons from a spot of approximately 1 mm diameter on the surface at a distance of 35 mm. Spectra are reported following background subtraction (mainly related to the sensor read and dark noise) and correction for the spectral efficiency of the device.

Digital imaging

As proposed in past research [7, 8], a commercial Nikon D7100 digital camera was employed for recording UV-induced digital images of the PL emission of objects. Excitation was provided by a xenon-based flash equipped with a UV bandpass filter (DUG11, Schott AG), whereas a transmission filter blocking light in the UV and NIR spectral ranges (B + W UV/IR Cut 486 M MRC filter, Schneider Optics) was mounted in front of a 50-mm focal length camera lens [7, 8]. Similarly, infrared digital photography of the emission of Egyptian blue painted objects was performed using a commercial digital camera without the infrared blocking filter (conversion supplied by Advanced Camera Services, UK), with an infrared transmission filter (R72, HOYA) placed in front of the camera lens [8] for detection of the infrared emission only. Excitation of the PL emission was achieved using a 15 W green LED-based lamp (4000 lx at 1 m; FLAT PAR CAN RGB 10 IR WH, Cameo, Germany).

Authors' contributions

Analysis was carried out by DC, AN, VC and SM. Historical context and information were provided by CO. All authors read and approved the final manuscript.

Author details

1 Dipartimento di Fisica, Politecnico di Milano, Piazza Leonardo da Vinci 32, 20133 Milan, Italy.
2 Istituto di Fotonica e Nanotecnologie, Consiglio Nazionale delle Ricerche (IFN-CNR), Piazza Leonardo da Vinci 32, 20133 Milan, Italy. 3 Dipartimento di Studi letterari, filologici e linguistici, Biblioteca e Archivi di Egittologia, Università degli Studi di Milano, via Festa del Perdono 3, 20122 Milan, Italy.

Acknowledgements

Research was partially funded through the Bilateral Project between Italy and Egypt coordinated by Daniela Comelli (Politecnico di Milano, Italy) and Abdelrazek Elnaggar (University of Fayoum, Egypt) (Progetti di Grande Rilevanza, Protocollo Esecutivo EGITTO, PGR 00101). The authors thank Daniela Gallo Carrabba and Fiorenzo Gnesi (Associazione Carla Maria Burri), Francesca Moruzzi and Simone Riboldi (Museo Civico di Crema e del Cremasco), Francesco Muscolino (Soprintendenza Archeologia della Lombardia) and Patrizia Piacentini (Università degli Studi di Milano, Chair of Egyptology).

Competing interests

The authors declare that they have no competing interests.

Received: 14 December 2015. Accepted: 30 May 2016.

References
1. Nevin A, Spoto G, Anglos A. Laser spectroscopies for elemental and molecular analysis in art and archaeology. Appl Phys A. 2012;106:339–61.
2. Scott DA, Warlander S, Mazurek J, Quirke S. Examination of some pigments, grounds and media from Egyptian cartonnage fragments in the Petrie Museum, University College London. J Archaeol Sci. 2009;36(3):923–32.
3. Pozza G, Ajò D, Chiari G, De Zuane F, Favaro M. Photoluminescence of the inorganic pigments Egyptian blue, Han blue and Han purple. J Cult Herit. 2000;1(4):393–8.
4. Clementi C, Doherty B, Gentili PL, Miliani C, Romani A, Brunetti BG, Sgamellotti A. Vibrational and electronic properties of painting lakes. Appl Phys A. 2008;92(1):25–33.
5. Claro A, Melo MJ, Schäfer S, de Melo JSS, Pina F, van den Berg KJ, Burnstock A. The use of microspectrofluorimetry for the characterization of lake pigments. Talanta. 2008;74(4):922–9.
6.
Accorsi G, Verri G, Bolognesi M, Armaroli N, Clementi C, Miliani C, Romani A. The exceptional near-infrared luminescence properties of cuprorivaite (Egyptian blue). Chem Commun. 2009;23:3392–4.
7. Verri G. The spatially resolved characterisation of Egyptian blue, Han blue and Han purple by photo-induced luminescence digital imaging. Anal Bioanal Chem. 2008;394(4):1011–21.
8. Verri G, Saunders D. Xenon flash for reflectance and luminescence (multispectral) imaging in cultural heritage applications. Br Mus Tech Bull. 2014;8:87–92.
9. Ganio M, Salvant J, Williams J, Lee L, Cossairt O, Walton M. Investigating the use of Egyptian blue in Roman Egyptian portraits and panels from Tebtunis, Egypt. Appl Phys A. 2015;121(3):813–21.
10. Spizzichino V, Angelini F, Caneve L, Colao F, Corrias R, Ruggiero L. In situ study of modern synthetic materials and pigments in contemporary paintings by laser-induced fluorescence scanning. Stud Conserv. 2015;60(S1):78–84.
11. Raimondi V, Lognoli D, Palombi L. A fluorescence LIDAR combining spectral, lifetime and imaging capabilities for the remote sensing of cultural heritage assets. Proc SPIE Int Soc Opt Eng. 2014. doi:10.1117/12.2067388.
12. Daveri A, Vagnin M, Nucera F, Azzarelli M, Romani A, Clementi C. Visible-induced luminescence imaging: a user-friendly method based on a system of interchangeable and tunable LED light sources. Microchem J. 2014. doi:10.1016/j.microc.2015.11.019 (in press).
13. Thoury M, Delaney JK, De La Rie ER, Palmer M, Morales K, Krueger J. Near-infrared luminescence of cadmium pigments: in situ identification and mapping in paintings. Appl Spectrosc. 2011;65(8):939–51.
14. Comelli D, Valentini G, Nevin A, Farina A, Toniolo L, Cubeddu R. A portable UV-fluorescence multispectral imaging system for the analysis of painted surfaces. Rev Sci Instrum. 2008;79(8):086112.
15. Nevin A, Cesaratto A, Bellei S, D'Andrea C, Toniolo L, Valentini G, Comelli D.
Time-resolved photoluminescence spectroscopy and imaging: new approaches to the analysis of cultural heritage and its degradation. Sensors. 2014;14(4):6338–55.
16. Comelli D, D'Andrea C, Valentini G, Cubeddu R, Colombo C, Toniolo L. Fluorescence lifetime imaging and spectroscopy as tools for nondestructive analysis of works of art. Appl Opt. 2004;43:2175–83.
17. Comelli D, Nevin A, Valentini G, Osticioli I, Castellucci EM, Toniolo L, Gulotta D, Cubeddu R. Insights into Masolino's wall paintings in Castiglione Olona: advanced reflectance and fluorescence imaging analysis. J Cult Herit. 2011;12(1):11–8.
18. Comelli D, Nevin A, Brambilla A, Osticioli I, Valentini G, Toniolo L, Fratelli M, Cubeddu R. On the discovery of an unusual luminescent pigment in Van Gogh's painting "Les Bretonnes et le pardon de Pont Aven". Appl Phys A. 2012;106(1):25–34.
19. Borisov SM, Würth C, Resch-Genger U, Klimant I. New life of ancient pigments: application in high-performance optical sensing materials. Anal Chem. 2013;85:9371–7.
20. Grazia C, Clementi C, Miliani C, Romani A. Photophysical properties of alizarin and purpurin Al(III) complexes in solution and in solid state. Photochem Photobiol Sci. 2011;10:1249–55.
21. Cesaratto A, D'Andrea C, Nevin A, Valentini G, Tassone F, Alberti R, Frizzi T, Comelli D. Analysis of cadmium-based pigments with time-resolved photoluminescence. Anal Methods. 2014;6:130–8.
22. Gallo Carrabba D. Carla Maria Burri: l'Egitto mi ha aperto le sue braccia. Crema: Gruppo Antropologico Cremasco; 2012.
23. Petrie Museum. http://petriecat.museums.ucl.ac.uk/. Accessed 1 Dec 2015.
24. D'Auria S, Lacovara P, Roehrig CH. Mummies and magic: the funerary arts of ancient Egypt. Boston: Museum of Fine Arts; 1988.
25. Webster TBL, Green JR, Seeberg A. Monuments illustrating new comedy. London: Institute of Classical Studies, University of London School of Advanced Study; 1995.
26. British Museum collection database.
http://www.britishmuseum.org/research/collection_online. Accessed 1 Dec 2015.

Title: Dual wavelength excitation for the time-resolved photoluminescence imaging of painted ancient Egyptian objects

----

sources of measurement error for manual tracings, and in general, DIA provides a technology that can at least partially overcome these problems. Finally, it is important to note that DIA provides information beyond the quantitative evaluation of traditional manual tracings. Because DIA involves digital photography, it provides an image and thus a basis for objective evaluation of other end points, such as infection, lesion thickness, granulation status, surrounding edema, and lesion progression over time, if used at sequential time points. Overall, additional studies are needed to further assess the uses of DIA in the quantitative evaluation of other cutaneous lesions and beyond.

Correspondence: Dr Chen, Department of Dermatology, Emory University School of Medicine, 101 Woodruff Cir, Atlanta, GA 30322 (schen2@emory.edu).
Financial Disclosure: None reported.
Funding/Support: This project was supported in part by an unrestricted educational grant from Otsuka Pharmaceuticals; National Institutes of Health (NIH) grant NHLBI R01-47345 and the Veterans Administration Merit Review Board (Dr Sumpio); and Mentored Patient-Oriented Career Development Award K23AR02185-01A1 from the National Institute on Arthritis and Musculoskeletal and Skin Disease, NIH, and the American Skin Association David Martin Carter Research Scholar Award (Dr Chen).

1. Sumpio BE, Chen SC, Moran E, et al. Adjuvant pharmacological therapy in the management of ischemic foot ulcers: results of the HEALing of Ischemic Foot Ulcers With Cilostazol Trial (HEAL-IT). Int J Angiol. 2006;15(2):76-82.

Wound Assessment by 3-Dimensional Laser Scanning

Recent advances in our understanding of the biology of cutaneous tissue repair have influenced current therapeutic strategies for chronic wound management and will continue to do so.1 Effective and accurate monitoring of skin lesions should measure, in an objective, precise, and reproducible way, the complete status and evolution of the wound.2 The main goal of current research projects is to design an easy-to-use technological system that can monitor the qualitative and quantitative evolution of a skin lesion.

This level of monitoring can be achieved by using 3-dimensional scanners, in particular systems based on active optical approaches.3 There are 2 different areas of potential application for such devices: in medical treatment (to improve the efficacy of therapeutic regimens)4 and in pharmacologic scientific research (to assess the quality and effectiveness of new chemicals or clinical procedures).5

Methods. We prospectively examined 15 patients with venous leg ulcers.
The patients who underwent sequential imaging of chronic wounds for this study all attended the leg ulcer clinic of the Wound Healing Research Unit at the University of Pisa, Pisa, Italy. Our sequential imaging system is equipped with a Vivid 900 laser scanner (Minolta, Osaka, Japan), which is used for digitizing, or scanning, the wound shape. To calculate the "external" surface and volume of a wound, it is necessary to assess its original shape so that the missing volume can be determined virtually. At the time of patient presentation, information on the shape of the skin before the wound occurred is missing, and the technique for virtual reconstruction of the original wound surface must be as easy and user-friendly as possible. The system, relying on an analysis of the shape of the surface immediately outside the wound perimeter, creates an interpolating virtual surface that is continuously connected to the existing surface outside the wound and to that covering it.

The parameters we studied were the mean wound area (measured in square centimeters) and mean volume (cubic centimeters). To assess interrater reproducibility, scans were evaluated by 2 independent investigators. For assessment of intrarater reproducibility, a single investigator performed 2 consecutive measurements 5 minutes apart. Immediately after the first wound assessment by the first observer, a second observer, blinded to the findings of the first analysis, measured the same wound. The means and standard deviations of duplicate determinations for each wound were used for analysis. The reproducibility of measurements was evaluated by means of an intraclass correlation coefficient (ICC) and its 95% confidence interval (CI).

Results. The measured total areas and volumes for independent raters and for subsequent measures of 1 rater are reported in Table 1. No statistically significant differences were found between scans evaluated by the 2 investigators for wound area and volume. The relative errors and intraclass correlation coefficients are reported in Table 2. The ICC values were excellent for both intrarater and interrater reproducibility, with very low relative error values. The mean ± SD time for a full scan acquisition of wound area and volume was 3.6 ± 1.4 minutes.

Comment. The laser scanner system used in this study enables users to accurately acquire 3-dimensional digital models of various types of skin wounds. Since the final users will be physicians and not computer experts, a user-friendly system is believed to be a fundamental parameter for its success.

The accuracy of scanning systems has improved in the past few years, and prices have also decreased, making these devices affordable for a wider community of potential users.6 The integration into a single system of capabilities that capture both the shape and the surface reflection characteristics makes 3-dimensional scanning an invaluable resource in all those applications where it is necessary to sample both surface attributes.

Correspondence: Dr Romanelli, Wound Healing Research Unit, Department of Dermatology, University of Pisa, Via Roma 67, 56126 Pisa, Italy (m.romanelli@med.unipi.it).
Financial Disclosure: None reported.

1. Schultz G, Mozingo D, Romanelli M, Claxton K. Wound healing and time: new concepts and scientific applications. Wound Repair Regen. 2005;13(4)(suppl):S1-S11.
2. Romanelli M, Gaggio G, Coluccia M, Rizzello F, Piaggesi A. Technological advances in wound bed measurements. Wounds. 2002;14:58-66.
3. Chen F, Brown GM, Song M. Overview of three-dimensional shape measurement using optical methods. Opt Eng. 2000;39:10-14.
4. Kantor J, Margolis DJ. A multicentre study of percentage change in venous leg ulcer area as a prognostic index of healing at 24 weeks. Br J Dermatol. 2000;142(5):960-964.
5. Moore K, McCallion R, Searle RJ, Stacey MC, Harding KG. Prediction and monitoring the therapeutic response of chronic dermal wounds. Int Wound J. 2006;3(2):89-96.
6. Mani R. Science of measurements in wound healing. Wound Repair Regen. 1999;7(5):330-334.

Figure 2. Procedure for using the Image Pro Express (Media Cybernetics, Silver Spring, Maryland) digital image analysis (DIA) software: (1) position the target lesion toward the camera at a set distance and toward a light source for standardization; (2) place a ruler for calibration (the scale shown is in millimeters); (3) take a digital photograph; (4) download the image into a computer running the DIA software; (5) trace the target, as demonstrated in this image; and (6) query the DIA software to perform diameter and area calculations.

Zakiya M. Pressley, MD; Jovonne K. Foster, MS; Paul Kolm, PhD; Liping Zhao, MS; Felicia Warren, BA; William Weintraub, MD; Bauer E. Sumpio, MD, PhD; Suephy C. Chen, MD, MS

(REPRINTED) ARCH DERMATOL/VOL 143 (NO. 10), OCT 2007 WWW.ARCHDERMATOL.COM ©2007 American Medical Association. All rights reserved.
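The reproducibility analysis described in this letter reduces to an intraclass correlation computed on duplicate measurements per wound. A minimal sketch, assuming a one-way random-effects ICC(1,1); the letter does not state which ICC form the authors used, and the data below are invented:

```python
import numpy as np

def icc_oneway(ratings):
    """One-way random-effects ICC(1,1) for n subjects x k repeated ratings.

    ICC = (MSB - MSW) / (MSB + (k - 1) * MSW), where MSB and MSW are the
    between- and within-subject mean squares. Illustrative only.
    """
    ratings = np.asarray(ratings, dtype=float)
    n, k = ratings.shape
    subject_means = ratings.mean(axis=1)
    grand_mean = ratings.mean()
    msb = k * np.sum((subject_means - grand_mean) ** 2) / (n - 1)
    msw = np.sum((ratings - subject_means[:, None]) ** 2) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

# Two nearly identical repeated area measurements per wound -> ICC near 1,
# mirroring the "excellent" intrarater values reported in Table 2.
areas = np.array([[52.1, 52.4], [38.0, 37.8], [61.5, 61.2],
                  [45.3, 45.9], [70.2, 70.0]])
print(round(icc_oneway(areas), 3))  # close to 1.0
```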
The Diagnostic Yield of Histopathologic Sampling Techniques in PAN-Associated Cutaneous Ulcers

Polyarteritis nodosa (PAN), a medium-sized vessel (MSV) vasculitis, may result in cutaneous ulcers.1 There is no specific serologic abnormality associated with PAN; therefore, the mainstay of diagnosis consists of histologic evidence of MSV vasculitis in the context of pertinent clinical findings.2 Several factors may contribute to the potentially low diagnostic yield of tissue biopsy specimens from MSV-vasculitic ulcers. The present study evaluates the role of tissue sampling in the histologic evaluation of PAN-associated cutaneous ulcers.

Methods. Retrospective analysis of de-identified archival biopsy specimens taken from skin ulcers and sural nerves of 29 patients with histologically proven PAN-associated MSV vasculitis. Patients met the classification

Table 1. Total areas and volumes of the wounds measured by the 2 independent raters and by the 2 measurements made by the single rater. All data are mean ± SD.

  Wound parameter   Rater 1, measurement 1   Rater 1, measurement 2   Rater 2
  Area, cm2         52.36 ± 8.5              51.26 ± 3.6              53.36 ± 8.4
  Volume, cm3       18.3 ± 2.6               18.6 ± 3.7               19.4 ± 4.6

Table 2. Percentage relative error in the measurements of total areas and volumes, and intraclass correlation coefficient (ICC) of the different scans, between 2 independent raters and within a single rater. Relative errors are mean ± SD percentages.

  Wound parameter   Intrarater relative error, %   Intrarater ICC   Interrater relative error, %   Interrater ICC
  Area, cm2         1.06 ± 0.66                    0.9976           0.54 ± 0.39                    0.9936
  Volume, cm3       1.96 ± 1.33                    0.9832           1.44 ± 0.91                    0.9714
Figure 1. Evaluation of the role of sampling technique and site of polyarteritis nodosa (PAN)-associated cutaneous ulcer in the yield of the histopathologic diagnosis. The flowchart reads as follows:
- Total cases of PAN-associated cutaneous ulcers: 29
- Diagnosis on first skin biopsy:
  - Positive: 17 (60%); specimens contained subcutis and peripheral and nearby central areas of the ulcer
  - Negative: 12 (40%), of which 2 (17%) lacked subcutis and nearby central areas of the ulcer, and 10 (83%) contained subcutis but lacked nearby central areas of the ulcer
- Diagnosis on second skin biopsy (of the 12 initially negative):
  - Positive: 9 (75%); specimens contained subcutis and peripheral and nearby central areas of the ulcer
  - Negative: 3 (25%), of which 1 (33%) contained subcutis, peripheral, and nearby central areas of the ulcer (diagnosis was confirmed with sural nerve biopsy), and 2 (66%) contained subcutis but lacked nearby central areas of the ulcer

Marco Romanelli, MD, PhD; Valentina Dini, MD; Tommaso Bianchi, MD; Paolo Romanelli, MD

----

The use of digital photos to assess visual cover for wildlife in rangelands

Cameron N. Carlyle a, Lauchlan H. Fraser b,*, Cindy M. Haddow c, Becky A. Bings d, William Harrower a

Journal of Environmental Management 91 (2010) 1366–1370

a Dept. of Botany, University of British Columbia, Vancouver, BC V6T 1X1, Canada
b Dept.
of Natural Resource Sciences, Thompson Rivers University, 900 McGill Road, PO Box 3010, Kamloops, BC V2C 5N3, Canada
c Range Specialist, Ecosystems Branch, Environmental Stewardship Division, Ministry of Environment, PO Box 9338, Stn Prov Govt, Victoria, BC V8W 9M1, Canada
d Ecosystem Biologist, Cariboo Region, Ministry of Environment, 400-640 Borland St., Williams Lake, BC V2G 4T1, Canada

Article history: Received 26 August 2009; received in revised form 3 February 2010; accepted 10 February 2010

Keywords: Bunchgrass; Digital image analysis; Grazing; Litter; Range management; Robel pole; Wildlife management

* Corresponding author. Tel.: +1 250 377 6135. E-mail address: lfraser@tru.ca (L.H. Fraser).

doi:10.1016/j.jenvman.2010.02.018

Abstract: Grassland vegetation can provide visual cover for terrestrial vertebrates. The most commonly used method to assess visual cover is the Robel pole. We test the use of digital photography as a more accurate and repeatable method. We assessed the digital photography method on four forage grassland species (Pseudoroegneria spicata, Festuca campestris, Poa pratensis, Achnatherum richardsonii). Digital photos of 2-dimensional cutout silhouettes of three bird species (sharp-tailed grouse, western meadowlark and savannah sparrow) were used to model the impact of clipping (i.e., grazing) on visual cover. In addition, photos of artificial voles were used to model the effect of litter on cover available to small mammals. Nine sites were sampled and data were analyzed by the dominant grass species in each study plot. Regression analysis showed that digital photos (r2 = 0.62) were a better predictor than the Robel pole (r2 = 0.26) for the assessment of cover. Clipping heights showed that clipping to less than 15 cm left the silhouettes 50% exposed. Digital photo analysis revealed that visual cover was affected by the type of grass species, with F. campestris > P. pratensis > A.
richardsonii > P. spicata. Biomass and litter were both positively related to cover for small mammals.

1. Introduction

Amount and type of vegetative cover can affect ground-nesting bird success (Kirsch, 1974; Duebbert and Lokemoen, 1976; Martin and Roper, 1988; Hernández et al., 2003). Grazing alters the structure of grasslands through the removal of plant biomass, thus potentially increasing the susceptibility of ground-nesting birds, as well as small grassland mammals, to predation. The resulting loss of suitable habitat and increased predation risk due to grazing may be a contributing factor in the decline of grassland bird populations in western rangelands (Ammon and Stacey, 1997; Brennan and Kuvlesky, 2005). To manage sustainable range use for cattle and wildlife, it is necessary to understand the effects of grazing on parameters of wildlife habitat such as vegetative cover.

The Robel pole was devised as a method to non-destructively estimate grassland productivity; it has since been adapted for the measurement of habitat characteristics in relation to concealment in ground cover for small mammals and particularly birds (Robel et al., 1970). Increased grazing pressure reduces the visual cover reading obtained by the Robel pole (Reese et al., 2001; West and Messmer, 2006). There is
A problem is that the Robel pole is limited by a potential observer bias and possible subjectivity in measurement (Gotfryd and Hansell, 1985; Block et al., 1987; Ganguli et al., 2000; Collins and Becker, 2001; Limb et al., 2007). If the Robel pole method contains excessive measurement error, it is difficult to make sound conclusions on the assessment of wildlife habitat. This observation led us to the consideration of digital photography as a potentially more representative means of assessing the vegetative cover provided by standing vegetation. If photo imaging is a more accurate tool than the Robel pole to assess vegetative cover, photo imaging can be used to perform better tests of the effects of grazing and vegetation composition on wildlife habitat parameters. Structural heterogeneity found in grasslands supports diverse and stable wildlife populations (McGarigal and McComb, 1995). Rapid and accurate quantification of vegetation structure is essential to assessing wildlife habitat, especially avian habitat (McGarigal and McComb, 1995; Sutter and mailto:lfraser@tru.ca www.sciencedirect.com http://www.elsevier.com/locate/jenvman Table 1 Mean values of biomass (g), litter (g), stem height (cm), and Robel pole readings (cm) in 0.25 m2 plots dominated by one of four species: bluebunch wheatgrass, rough fescue, Kentucky bluegrass, and spreading needlegrass. Numbers in parentheses are �standard error. Dominant grass species Mean plot biomass (g) Mean plot litter (g) Mean stem height (cm) Mean Robel pole (cm) Bluebunch wheatgrass 41.32 (3.3) 15.30 (2.3) 36.35 (1.1) 7.46 (1.0) Rough fescue 50.58 (3.1) 48.65 (5.6) 42.10 (1.2) 10.36 (0.8) Kentucky bluegrass 38.96 (3.4) 29.41 (5.4) 20.00 (0.7) 5.52 (0.5) Spreading needlegrass 41.92 (3.2) 53.31 (6.7) 25.67 (2.5) 5.00 (1.9) C.N. Carlyle et al. / Journal of Environmental Management 91 (2010) 1366e1370 1367 Brigham, 1998). 
Photo techniques have been used for other applications such as canopy gap analysis and biomass estimation, including novel applications such as non-destructive biomass sampling of shrubs (Boyd and Svejcar, 2005; Limb et al., 2007). Here, we expand the photo analysis tool to apply to vegetative cover for birds and small mammals in rangelands. We tested for differences in the vegetative cover provided by four dominant grass species of the grasslands of the southern interior of British Columbia, Canada, that represent an important forage for cattle: bluebunch wheatgrass (Pseudoroegneria spicata [Pursh] A. Löve), rough fescue (Festuca campestris Rydb.), Kentucky bluegrass (Poa pratensis L.), and spreading needlegrass (Achnatherum richardsonii [Link] Barkworth) (Tisdale, 1947). We tested vegetative cover for three ground-nesting birds found in the southern interior grasslands of British Columbia, Canada: sharp-tailed grouse (Tympanuchus phasianellus), western meadowlark (Sturnella neglecta) and savannah sparrow (Passerculus sandwichensis) (Fraser et al., 1999). We also tested vegetative cover for voles. Voles, particularly the montane vole (Microtus montanus), construct runs and nests in the litter layer of grasslands and are important prey for endangered burrowing owls and badgers (Todd et al., 2003).

The objectives of this study were to: 1) examine the accuracy of digital photography compared with the Robel pole in assessing vegetative cover; 2) model the effect of controlled clipping of grassland vegetation on the cover provided to three ground-nesting birds; 3) examine the variation in cover across four important forage grass species; and 4) measure the effect of litter removal on the cover provided to small mammals.

2. Methods

2.1. Study sites

The study area was located in the southern interior grasslands of British Columbia, Canada.
The soils in the study area are Orthic Dark Brown Chernozems, mean annual temperature is 4 °C, and mean annual precipitation is 250–450 mm, depending on location. Nine sites were selected, six in Lac du Bois Provincial Park, British Columbia (UTM10 E0680738 N5626223) and three near Williams Lake, British Columbia (UTM10 E0541492 N5759574), during the summer of 2005. Sites were located in grazed and ungrazed pastures and were chosen to represent a range of native grassland types. Locations ranged from 712 to 1000 m a.s.l., biomass at the sites ranged from 109 g/m2 to 204 g/m2, and litter amounts ranged from 41.7 g/m2 to 269.9 g/m2 (Carlyle, unpublished data).

2.2. Stem height, Robel pole and digital photos of bird species silhouettes

At each of the nine sites, a 50 m transect was established, and twenty-five 0.5 × 0.5 m plots were examined every 2 m along the transect. Stem height, excluding inflorescences, of the dominant grass species in each plot was measured before clipping. Visual obscurity was measured with a Robel pole (Robel et al., 1970) and with a digital photo (see Limb et al., 2007), both taken from 4 m away and 1 m above ground level. Photos were taken with a Konica Minolta Dimage Z2 (4 megapixels, resolution 2272 × 1704 pixels) within a 3-h period at mid-day (11:00–14:00) in July. Vegetation was successively clipped to 25, 20, 15, 10, and 5 cm from ground level. At each clipping level, visibility measurements were made with the Robel pole and with digital photos. The Robel pole is a 2.5 cm diameter pole divided into 2.5 cm alternating black and white bands; the lowest visible band is recorded. Digital photos were taken of life-size cutouts of three grassland birds: savannah sparrow, western meadowlark and sharp-tailed grouse. We selected these three birds because they are dependent on grasslands that provide vegetative cover and litter (Fraser et al., 1999).
The cutouts were painted fluorescent orange (see Photo analysis below for explanation) and placed standing vertically in the grassland.

2.3. Digital photos of artificial small mammals

Fluorescent orange dowels (cylindrical pieces of wood, 10 cm long × 2.5 cm diameter) were used as artificial voles to measure the cover provided by litter. Four dowels were placed, one within each corner of a 0.5 × 0.5 m quadrat. If litter was present, we assumed that voles would likely use the litter as cover; therefore, the dowels were positioned under the litter. A digital photo, with a nadir view, was taken from 1 m above the ground, first with the dowels placed beneath any litter that was present, and then with the litter removed. Evidence of small mammal activity (tunnels, feces, and nests) in the plot was recorded (present or absent). The litter and the above-ground live plant biomass were collected, dried for 48 h at 65 °C in a forced-air drying oven, and weighed.

2.4. Photo analysis

Each digital photo was analyzed using the colour select tool of GIMP, an open-source software package, which counts the number of pixels of a specified colour in a photo (Kimball and Mattis, 2006). The cutouts and dowels were painted fluorescent orange so that they contrasted with the surrounding vegetation. For each photo, we used the colour select tool to identify the fluorescent orange on the cutouts or dowels. Different lighting and shading within a single photo, and between photos, created a range of hues of the fluorescent orange. The threshold setting of the colour select tool determined the range of hues associated with the fluorescent orange. We determined that a setting of 40 selected most of the visible parts of the cutouts while minimizing the amount of unintentional selection.
When sections of the photo other than the cutouts or dowels were unintentionally selected, due to the similarity of hues (e.g., orange-brown grasses in senescence), these sections of the photo were coloured white to remove them from the count. The cutouts and dowels were all photographed in their entirety (i.e., with no vegetation) to obtain the maximum number of pixels, which served as the reference photo. The number of pixels in each field photo was then divided by that of the reference photo to obtain the percentage of pixels visible in each photo.

2.5. Data analysis

Linear regression was used to model the relationships amongst the digital photos, Robel pole, cutting levels (25, 20, 15, 10, and 5 cm from ground level), and undisturbed stem height.

Table 2
Linear regression analyses for digital photo (percent of cutout visible), Robel pole, height of clipping (25, 20, 15, 10, and 5 cm from ground level), and undisturbed stem height (measured in unclipped plots). All P < 0.001.

Regression model | r2 | Degrees of freedom
Digital photo vs. height of clipping | 0.62 | 3774
Robel pole vs. height of clipping | 0.26 | 3774
Digital photo vs. undisturbed stem height | 0.16 | 567
Robel pole vs. undisturbed stem height | 0.00 | 567
Digital photo vs. Robel pole | 0.14 | 3778

Table 3
Mean percent visibility of three bird cutouts (sharp-tailed grouse, western meadowlark, savannah sparrow) for each of the four dominant grass species using digital imagery software. Numbers in parentheses are ±standard error.

Dominant grass | Grouse | Meadowlark | Sparrow
Bluebunch wheatgrass | 25.4 (2.9) | 27.3 (2.8) | 22.5 (2.6)
Rough fescue | 7.5 (1.8) | 5.2 (1.4) | 3.1 (1.0)
Kentucky bluegrass | 13.1 (1.8) | 12.8 (1.9) | 6.4 (1.1)
Spreading needlegrass | 13.5 (2.5) | 10.3 (1.9) | 4.46 (0.8)

C.N. Carlyle et al. / Journal of Environmental Management 91 (2010) 1366–1370
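The colour-select and percent-visibility steps of Section 2.4 can be approximated in code. The sketch below (Python with NumPy is an assumption; the authors worked interactively in GIMP, whose select-by-colour threshold semantics differ in detail) counts pixels whose per-channel deviation from a target fluorescent orange falls within a threshold of 40, then expresses the field count as a percentage of the unobstructed reference photo. The target RGB value is illustrative, not taken from the paper.

```python
import numpy as np

FLUORESCENT_ORANGE = np.array([255, 96, 0])  # illustrative RGB target

def orange_pixels(img, target=FLUORESCENT_ORANGE, threshold=40):
    """Count pixels whose largest per-channel deviation from the target
    colour is within the threshold (cf. the GIMP setting of 40)."""
    dev = np.abs(img.astype(int) - target).max(axis=-1)
    return int((dev <= threshold).sum())

def percent_visible(field_img, reference_img):
    """Visible pixels in the field photo as a percentage of the
    unobstructed reference photo of the same cutout or dowel."""
    return 100.0 * orange_pixels(field_img) / orange_pixels(reference_img)

# Synthetic example: a 10 x 10 px reference cutout, 40% unobscured in the field.
reference = np.zeros((10, 10, 3), dtype=np.uint8)
reference[...] = FLUORESCENT_ORANGE
field = np.zeros((10, 10, 3), dtype=np.uint8)   # vegetation rendered as black
field[:4, :] = FLUORESCENT_ORANGE               # 40 of 100 pixels visible
print(percent_visible(field, reference))        # 40.0
```

In practice the arrays would come from loading the field and reference photographs; the white-out step the authors used for misselected senescent grasses corresponds to simply excluding those regions before counting.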
Linear regression was also used to model the relationship between cutting height and visibility in the photo for each species of cutout, combined across dominant grass species and separately for each dominant grass species. Grass species with few occurrences, such as stiff needlegrass (Achnatherum occidentale Thurb.), were included in the cross-species analysis but not analyzed individually. The coefficients from these regressions were then used to calculate the estimated clipping height required to produce a given percent visibility of the cutouts. A two-way analysis of variance (ANOVA) was completed to test for differences between the mean visibility of the cutouts for different dominant grass species. The visibility of the dowels was regressed against both biomass and litter weight. All data were log10(x + 1) transformed to meet the assumption of normally distributed data for the parametric tests. The coefficients were then used to model the amount of biomass and litter required to provide cover for the dowels (i.e., small mammals). Only the Lac du Bois data were used to model and compare dominant grass species as visual cover. All analyses were done using the R statistical package (R Development Core Team, 2006).

3. Results

The mean biomass of the plots, when separated by dominant grass species, ranged from 38.96 g ± 3.4 SE to 50.58 g ± 3.1 SE, litter from 15.3 g ± 2.3 SE to 53.31 g ± 6.7 SE, stem height from 20.0 cm ± 0.7 SE to 42.10 cm ± 1.2 SE, and Robel pole readings from 5.0 cm ± 1.9 SE to 10.36 cm ± 0.8 SE (Table 1). The highest correlation was between the digital photos and the height of clipping (Table 2). The Robel pole measurements and the stem height of the dominant plant species before clipping had low correlations with each other, with the height of clipping, and with the values obtained from the digital photos.
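The clipping-height estimation described above inverts each fitted line, visibility = intercept + slope × height, to give height = (visibility − intercept)/slope. A minimal sketch (in Python; an assumption, since the paper's analyses were run in R) using the bluebunch wheatgrass/sharp-tailed grouse coefficients, reassembled from the split columns of Table 4:

```python
def stubble_height(target_visibility, intercept, slope):
    """Invert visibility = intercept + slope * height to find the
    clipping (stubble) height giving a target fractional visibility."""
    return (target_visibility - intercept) / slope

# Bluebunch wheatgrass / sharp-tailed grouse coefficients (Table 4).
a, b = 0.942305, -0.022188
for v in (0.0, 0.10, 0.25, 0.50, 0.75, 0.90):
    print(f"{v:.0%} visible at {stubble_height(v, a, b):.1f} cm stubble")
# Reproduces the tabulated 42.5, 38.0, 31.2, 19.9, 8.7 and 1.9 cm.
```

As the paper notes, for some species/visibility combinations this inversion returns negative heights, an artifact of clipping never going below 5 cm and of cutouts often sitting below the hummock tops.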
Linear regression models estimated that, across all dominant grass species, 50% visibility of the cutouts occurred at 14.3 cm, 13.6 cm and 9.8 cm for the grouse (intercept = 0.89, slope = -0.03), meadowlark (intercept = 0.90, slope = -0.03) and sparrow (intercept = 0.79, slope = -0.03), respectively. Three two-factor ANOVAs, testing the effects of clipping and dominant grass species on the visibility of each cutout type separately, showed that clipping reduced the cover for the cutouts (grouse: F = 4.99, P < 0.001; meadowlark: F = 6.1, P < 0.001; sparrow: F = 6.60, P < 0.001). The dominant grass species also altered the visibility of the cutouts (grouse: F = 1.12, P < 0.001; meadowlark: F = 1.31, P < 0.001; sparrow: F = 1.09, P < 0.001). None of the interactions were significant. Of the four grass species, bluebunch wheatgrass provided the least cover to the cutouts while rough fescue provided the greatest cover (Table 3). The clipping height at which cutouts were 50% visible ranged from 10.9 to 19.9 cm for the grouse, 10.5–19.4 cm for the meadowlark and 6.8–14.1 cm for the sparrow (Table 4).

Table 4
Linear regression models to predict the stubble height (cm) required for visual cover of three bird species in four species of forage grasses. The right-hand columns give the stubble height (cm) required for the visual obstruction of birds at six percent-visibility values.

Grass species | Degrees of freedom | r2 | F-value | Intercept | Slope | 0% | 10% | 25% | 50% | 75% | 90%
Grouse
Bluebunch wheatgrass | 156 | 0.42 | 111 | 0.942305 | -0.022188 | 42.5 | 38.0 | 31.2 | 19.9 | 8.7 | 1.9
Rough fescue | 206 | 0.73 | 544 | 0.787143 | -0.026438 | 29.8 | 26.0 | 20.3 | 10.9 | 1.4 | -4.3
Kentucky bluegrass | 213 | 0.6 | 320.3 | 0.788156 | -0.022702 | 34.7 | 30.3 | 23.7 | 12.7 | 1.7 | -4.9
Spreading needlegrass | 83 | 0.69 | 181.8 | 0.786405 | -0.022164 | 35.5 | 31.0 | 24.2 | 12.9 | 1.6 | -5.1
Meadowlark
Bluebunch wheatgrass | 157 | 0.41 | 107.3 | 0.97143 | -0.02424 | 40.1 | 36.0 | 29.8 | 19.4 | 9.1 | 2.9
Rough fescue | 204 | 0.71 | 487.6 | 0.800733 | -0.028759 | 27.8 | 24.4 | 19.1 | 10.5 | 1.8 | -3.5
Kentucky bluegrass | 212 | 0.59 | 310.2 | 0.823635 | -0.025687 | 32.1 | 28.2 | 22.3 | 12.6 | 2.9 | -3.0
Spreading needlegrass | 82 | 0.7 | 192.2 | 0.82166 | -0.02427 | 33.9 | 29.7 | 23.6 | 13.3 | 3.0 | -3.2
Sparrow
Bluebunch wheatgrass | 154 | 0.41 | 105.3 | 0.856815 | -0.025387 | 33.8 | 29.8 | 23.9 | 14.1 | 4.2 | -1.7
Rough fescue | 191 | 0.61 | 302.7 | 0.693038 | -0.028384 | 24.4 | 20.9 | 15.6 | 6.8 | -2.0 | -7.3
Kentucky bluegrass | 205 | 0.55 | 249 | 0.717835 | -0.026952 | 26.6 | 22.9 | 17.4 | 8.1 | -1.2 | -6.8
Spreading needlegrass | 81 | 0.66 | 158.3 | 0.723923 | -0.026441 | 27.4 | 23.6 | 17.9 | 8.5 | -1.0 | -6.7

The visibility of the dowels was significantly influenced by the amount of live biomass and litter (Fig. 1). A linear model estimated that, with all sites pooled, 25 g per 0.25 m2 of standing biomass was needed to provide approximately 50% cover after litter was removed (intercept = 0.37, slope = -0.14). Similarly, 5.3 g of litter was required to obscure 50% of the dowels when vegetation was present (intercept = 0.30, slope = -0.15).

Fig. 1. Relationship of the visibility of the dowels (i.e., artificial voles) with vegetative biomass (g) and litter (g) from 0.25 m2 plots. All axes are log transformed, log10(x + 1). A, Litter was removed from the plot for the analysis of biomass (r2 = 0.15, P < 0.001, df = 107, F-statistic = 19.38). B, Biomass and litter were present in the plot for the analysis of litter (r2 = 0.61, P < 0.001, df = 107, F-statistic = 166).

4. Discussion

4.1. Evaluation of digital photography to assess vegetative cover

Our results show that the Robel pole, a common tool to assess visual cover for wildlife, was less reliable than digital photos for the measurement of visual cover (see also Limb et al., 2007). The accuracy of the Robel pole is limited by (1) the small diameter of the
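Because both axes of the dowel models were log10(x + 1) transformed before fitting, predicted visibility is recovered as vis = 10^(intercept + slope · log10(mass + 1)) − 1. A sketch of this back-transformation (Python; reconstructed here from the rounded coefficients reported above, so the outputs only approximately reproduce the 50% figures in the text):

```python
import math

def dowel_visibility(mass_g, intercept, slope):
    """Predicted fractional visibility of the dowels for a given mass (g)
    per 0.25 m2 plot; both axes were log10(x + 1) transformed before fitting."""
    return 10 ** (intercept + slope * math.log10(mass_g + 1)) - 1

# Standing biomass with litter removed (intercept 0.37, slope -0.14):
print(dowel_visibility(25.0, 0.37, -0.14))   # ~0.49, i.e. roughly 50% visible
# Litter with vegetation present (intercept 0.30, slope -0.15):
print(dowel_visibility(5.3, 0.30, -0.15))    # ~0.51
```

The steeper effective response of the litter model (50% obscurity at about 5 g of litter versus 25 g of standing biomass) mirrors the paper's conclusion that litter is the stronger determinant of cover for small mammals.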
pole (2.5 cm); (2) the discrete 2.5 cm measurement units along the length of the pole; and (3) the subjectivity of observer error. Digital photography is also discrete because it is limited to the number of pixels in the digital image, but the measurement is essentially continuous due to the large number of pixels in each photo. However, digital photography is often more labour intensive than the Robel pole; more field work is required, plus photo processing on a computer. One way to reduce the labour is to limit the number of photos or the number of different cutouts. We used silhouettes of three different grassland birds, but a single coloured board could be used instead.

4.2. Effect of clipping on vegetative cover

As expected, clipping reduced the amount of vegetative cover for all four of the forage grasses considered. However, the effect of clipping varied by grass species, and the difference generally seemed to be a result of the form and structure of the grass species. For example, rough fescue provided the most cover; it is a 40–90 cm tall, densely tufted bunchgrass, which can form large, 30–60 cm diameter tussocks in relatively productive sites (Hitchcock and Cronquist, 1973). Much of the cover provided by this species was probably due to these pronounced tussocks. Alternatively, bluebunch wheatgrass, which provided the least cover, is a relatively sparse bunchgrass with thin leaves and does not tend to form the large tussocks associated with rough fescue (Hitchcock and Cronquist, 1973). Spreading needlegrass supported moderate visible cover. Although tall, with culms up to 100 cm, needlegrass produces relatively small tussocks (Hitchcock and Cronquist, 1973). Kentucky bluegrass, a sod-forming grass, has been widely distributed due to its grazing-tolerant characteristics. The cover provided by bluegrass was comparable to that of needlegrass.
Since rough fescue provides the greatest vegetative cover for grassland birds, it might be expected that a greater number of bird nests would be found in rough fescue patches. If so, range managers should consider grazing practices that support rough fescue, or similar tussock-forming grasses. Complete visibility could only be achieved if the cutout was sitting above the surrounding ground. Furthermore, clipping did not go below 5 cm, so 100% visibility was rarely possible unless the cutout was sitting on high ground. Generally, cutouts were sitting on lower ground because grass hummocks had a tendency to raise the ground between the cutout and camera. As a result, the regression indicates negative clipping values to produce high visibility. This may cause an underestimation of the clipping height necessary to provide a given amount of cover.

4.3. Cover for small mammals

The amount of litter and live vegetation directly affected the vegetative cover given to the artificial small mammals. Our results suggest that litter availability was the stronger determinant of visual cover for small mammals. Since grazing reduces litter quantity, grazing likely reduces visual cover for small mammals. Small-scale site selection based on vegetation characteristics has been observed in Microtus species (Bias and Morrison, 2006; Luque-Larena and López, 2007). Therefore, even a patchy distribution of litter should provide adequate cover for small mammals. Small mammals are an essential part of the grassland food web, and range managers should consider grazing practices that provide litter. Live biomass also provided visual cover, but live biomass alone explained less of the variation in small mammal (dowel) visibility.

4.4. Management implications

Previous research supports the concept that dense, residual cover can increase the success of ground-nesting birds, often through reduced predator efficiency (Kirsch, 1974; Duebbert and Lokemoen, 1976; Martin and Roper, 1988; Hernández et al., 2003).
Our data can provide guidelines for a minimum level of cover for grassland birds and small mammals based on stubble (clipping or grazing) height, biomass and litter. The results indicate that stubble heights of 15 cm or less would leave all three of the bird cutout species more than 50% exposed. How a visibility of 50% for a cutout translates to an actual animal is unknown. Obviously, other factors will affect cover, including animal behaviour and camouflage, but given that cover in general is important for nest success (as evidenced by the studies cited above), being able to accurately quantify cover classes is very important. Results from the litter study indicate that, to manage rangelands to support small mammals, litter levels and standing biomass must be high. Rough fescue provided the most cover, and estimates showed that its ability to provide cover was the most resistant to clipping. However, this does not imply that grazing resulting in a lower stubble height in rough fescue is less detrimental, because it does not take into account other benefits provided by the grass or incorporate the long-term effects of disturbance. The tradeoffs between wildlife and range requirements need to be considered.

5. Conclusions

Vegetation in grassland ecosystems provides visual cover for birds and small mammals. Our comparative test of the Robel pole and digital photography methods to assess visual cover demonstrated that digital photography was a more accurate and repeatable method for the four forage grassland species (P. spicata, F. campestris, P. pratensis, A. richardsonii). Clipping to less than 15 cm left the silhouettes at least 50% exposed. Biomass and litter were both positively related to cover for small mammals.
Future work is needed to manipulate the amount of vegetation, litter, and cover in a manner consistent with rangeland use, and to monitor the population dynamics and behavioural response of wildlife. The requirements are likely to vary depending on, for example, the species being considered, grassland type, pasture size, predation risk, the type of predator and nest site availability.

Acknowledgements

The authors thank British Columbia (BC) Parks for access to Lac Du Bois Provincial Park. Brandy Ludwig and Amber Greenall assisted with field work and photo analysis. This project was supported by the BC Ministry of Environment, an Industrial NSERC through partnership with the Grasslands Conservation Council of B.C. to C.N.C., and an NSERC DG to L.H.F.

References

Ammon, E.M., Stacey, P.B., 1997. Avian nest success in relation to past grazing regimes in a montane riparian system. Condor 99, 7–13.
Bias, M.A., Morrison, M.L., 2006. Habitat selection of the salt marsh harvest mouse and sympatric rodent species. J. Wild. Manage. 70, 732–742.
Block, W.M., With, K.A., Morrison, M.L., 1987. On measuring bird habitat: influence of observer variability and sample size. Condor 89, 241–251.
Boyd, C.S., Svejcar, T.J., 2005. A visual obstruction technique for photo monitoring of willow clumps. Range. Ecol. Manage. 58, 434–438.
Brennan, L.A., Kuvlesky Jr., W.P., 2005. North American grassland birds: an unfolding conservation crisis? J. Wild. Manage. 69, 1–13.
Collins, W.B., Becker, E.F., 2001. Estimation of horizontal cover. J. Range Manage. 54, 67–70.
Duebbert, H.F., Lokemoen, J.T., 1976. Duck nesting in fields of undisturbed grass-legume cover. J. Wild. Manage. 40, 39–49.
Fondell, T.F., Ball, I.J., 2004. Density and success of bird nests relative to grazing on western Montana grasslands. Biol. Cons. 117, 203–213.
Fraser, D.F., Harper, W.L., Cannings, S.G., Cooper, J.M., 1999. Rare Birds of British Columbia. Wildl. Branch and Resour. Inv. Branch, B.C. Minist.
Environ., Lands and Parks, Victoria, BC, 244 pp.
Ganguli, A.C., Vermeire, L.T., Mitchell, R.B., Wallace, M.C., 2000. Comparison of four nondestructive techniques for estimating standing crop in shortgrass plains. Agron. J. 92, 1211–1215.
Gotfryd, A., Hansell, R., 1985. The impact of observer bias on multivariate analyses of vegetation structure. Oikos 45, 223–234.
Hernández, F., Henke, S.E., Silvy, N.J., Rollins, D., 2003. The use of prickly pear cactus as nesting cover by Northern Bobwhites. J. Wild. Manage. 67, 417–423.
Hitchcock, C.L., Cronquist, A., 1973. Flora of the Pacific Northwest. University of Washington Press, Seattle, WA.
Kimball, S., Mattis, P., 2006. The Gimp 2.2. Available at: http://www.gimp.org/ (accessed 14.07.09).
Kirsch, L.M., 1974. Habitat management consideration for prairie-chickens. Wild. Soc. Bull. 2, 124–129.
Limb, R.F., Hickman, K.R., Engle, D.M., Norland, J.E., Fuhlendorf, S.D., 2007. Digital photography: reduced investigator variation in visual obstruction measurements for Southern tallgrass prairie. Range. Ecol. Manage. 60, 548–552.
Luque-Larena, J.J., López, P., 2007. Microhabitat use by wild-ranging Cabrera voles Microtus cabrerae as revealed by live trapping. Eur. J. Wild. Res. 53, 221–225.
Martin, T.E., Roper, J.J., 1988. Nest predation and nest-site selection of a western population of the hermit thrush. Condor 90, 51–57.
McGarigal, K., McComb, W.C., 1995. Relationships between landscape structure and breeding birds in the Oregon coast range. Ecol. Monogr. 65, 235–260.
Moynahan, B.J., Lindberg, M.S., Thomas, J.W., 2006. Factors contributing to process variance in annual survival of female greater sage-grouse in Montana. Ecol. Appl. 16, 1529–1538.
Pitman, J.C., Hagen, C.A., Robel, R.J., Loughin, T.M., Applegate, R.D., 2005. Location and success of Lesser Prairie-chicken nests in relation to vegetation and human disturbance. J. Wild. Manage. 69, 1259–1269.
R Development Core Team, 2006. R 2.4.0 – A Language and Environment.
Reese, P.E., Volensky, J.D., Schacht, W.H., 2001. Cover for wildlife after summer grazing on Sandhills rangeland. J. Range Manage. 54, 126–131.
Renfrew, R.B., Ribic, C.A., Nack, J.L., 2005. Edge avoidance by nesting grassland birds: a futile strategy in a fragmented landscape. Auk 122, 618–636.
Robel, R.J., Briggs, J.N., Dayton, A.D., Hulbert, L.C., 1970. Relationships between visual obstruction measurements and weight of grassland vegetation. J. Range Manage. 23, 295–297.
Sutter, G.C., Brigham, R.M., 1998. Avifaunal and habitat changes resulting from conversion of native prairie to crested wheat grass: patterns at songbird community and species levels. Can. J. Zool. 76, 869–875.
Tisdale, E.W., 1947. The grasslands of the southern interior of British Columbia. Ecology 28, 346–382.
Todd, L.D., Poulin, R.G., Wellicome, T.I., Brigham, R.M., 2003. Post-fledging survival of burrowing owls in Saskatchewan. J. Wild. Manage. 67, 512–519.
Warren, K.A., Anderson, J.T., 2005. Grassland songbird nest-site selection and response to mowing in West Virginia. Wild. Soc. Bull. 33, 285–292.
West, B.C., Messmer, T.A., 2006. Effects of livestock grazing on duck nesting habitat in Utah. Range. Ecol. Manage. 59, 208–211.
Winter, M., Johnson, D.H., Shaffer, J.A., 2005. Variability in vegetation effects on density and nesting success of grassland birds. J. Wild. Manage. 69, 185–197.
The use of digital photos to assess visual cover for wildlife in rangelands

work_rvyzozpndffbzaadtluamnjruu ----

Hindawi Publishing Corporation, Nursing Research and Practice, Volume 2012, Article ID 307258, 8 pages. doi:10.1155/2012/307258

Research Article: The Effectiveness and Clinical Usability of a Handheld Information Appliance

Patricia A. Abbott, Health Systems & Outcomes Department, Johns Hopkins University School of Nursing, Baltimore, MD, USA. Correspondence should be addressed to Patricia A. Abbott, pabbott2@jhu.edu. Received 5 October 2011; Accepted 9 January 2012. Academic Editor: Marita G. Titler.

Copyright © 2012 Patricia A. Abbott. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Clinical environments are complex, stressful, and safety critical, heightening the demand for technological solutions that will help clinicians manage health information efficiently and safely. The industry has responded by creating numerous, increasingly compact and powerful health IT devices that fit in a pocket, hook to a belt, attach to eyeglasses, or wheel around on a cart. Untethering a provider from a physical "place" with compact, mobile technology while delivering the right information at the right time and at the right location is generally welcomed in clinical environments. These developments, however, must be looked at ecumenically.
The cognitive load of clinicians who are occupied with managing or operating several different devices during a patient encounter is increased, and we know from decades of research that cognitive overload frequently leads to error. "Technology crowding," enhanced by the plethora of mobile health IT, can actually become an additional millstone for busy clinicians. This study was designed to gain a deeper understanding of clinicians' interactions with a mobile clinical computing appliance (Motion Computing C5) designed to consolidate numerous technological functions into an all-in-one device. Features of usability and comparisons to current methods of documentation and task performance were undertaken, and results are described.

1. Introduction

Physicians and nurses are highly mobile workers who operate in complex, stressful, and safety-critical environments. Frequent interruptions, rapidly changing patient status, complex clinical presentations, and information from multiple streams all combine to increase the cognitive load of practitioners and create the potential for medical error. These challenges have created a demand for technological solutions that will help clinicians manage information and make optimal decisions in this demanding work environment. The plethora and diversity of highly portable, increasingly compact, and powerful information and communication technology (ICT) devices on the market are evidence of an industry response to this growing demand. Untethering a provider from a physical "place" with mobile technology and delivering the right information at the right time and at the right location are expectations for effective and safe clinical practice. These technological solutions can, however, contribute to the problem. Clinicians are confronted with numerous different devices to complete a series of related, yet separate actions.
It is not uncommon to see practitioners with a mix of communication devices, barcode readers, and computers on wheels, some worn around the neck, hooked to belt loops, and stuffed in pockets, while others are pushed up and down hallways. This is in addition to stethoscopes, otoscopes, and other clinical devices traditionally carried by a provider. This problem of device overload or "technology crowding" is now becoming an additional clinical millstone. Indeed, recent studies are pointing to marked productivity losses in environments where high technology dependence and technology overload intersect [1]. Orchestrating numerous devices with a variety of functions (some of which overlap) increases clutter and cognitive load, distracting the user's attention away from the tasks at hand. Losing focus in the clinical environment contributes to increased opportunity for medical error [2, 3]. In recognition of the problem of technology crowding, a shift from numerous independent single-function devices to consolidated mobile information appliances (such as iPads, multifunction smart phones, and portable clinical tablet PCs) is occurring. While this shift is appropriate and welcomed by most, it is dangerous to consider device consolidation a panacea for the information management challenges raised earlier. As with any new technology, it is important to fully understand how the technology is utilized in the real-world environment, the degree of usability that it possesses, the impact it may have on users, and its effect on workflows. This is of great importance, particularly in safety-critical environments where prediction of sequelae is difficult and electronic propagation of error can be immediate and far reaching. Studies that compare how health IT is actually used, versus how the device was designed to be used, are necessary.
There are numerous instances in the literature of a misalignment between design and actual real-world use of health IT. Han et al. [4] demonstrated unexpected increases in mortality in a pediatric ICU after the implementation of a commercially available computerized provider order entry (CPOE) system, while Koppel et al. [5] uncovered 22 types of medical error risks facilitated by CPOE. Ash et al. [6] specifically focused on the unintended consequences of health IT, describing how and why errors occur when health IT is implemented without investigations of how patient care systems are actually used in the real-world clinical environment. Vicente [7] makes the important point that the biggest threats to both safety and effectiveness arise from situations that are "unfamiliar to workers and that have not been anticipated by designers" (page 22). Studies and experience show that busy clinicians will not tolerate technology, software, or processes that impose workflow barriers or that introduce additional difficulty into already complex task performance. Workarounds, a common response to suboptimal technology, are a frequent result of problems with technology design. Workarounds can result in use of a system in ways not anticipated by the designer, echoing the point made by Vicente [7]. When workarounds occur, built-in safety features are often circumvented, and a cascade of negative downstream effects can follow [8]. For example, Koppel et al. [9] cite observations of nurses who carry extra copies of barcoded patient wristbands to avoid multiple trips to the drug carts. In effect, this workaround disabled device safety alert features, which resulted in wrong patient-wrong drug errors.
Workarounds and unanticipated uses of technology are becoming increasingly dangerous in healthcare environments. In this era of healthcare reform, accountability, and reimbursement for "meaningful use" of health information technology, the impetus for comparing design intention with actual use is strong. Improved design and reduction of negative unintended consequences are the goals of health information technology usability and impact studies.

2. Study Goals and Questions

With these factors in mind, we undertook a study to gain a deeper understanding of clinicians' interactions with a mobile clinical computing appliance designed to consolidate numerous technological functions. Features of usability and comparisons to current methods of documentation and task performance while using a portable PC (mobile clinical computing appliance) were of particular interest. The following specific questions were the foci of the study.

(1) What specific themes define the usability challenges that clinicians encounter when using a mobile device to assist them in completing typical clinical tasks?

(2) How usable is the C5, viewed as an important instance of a class of devices that are increasingly used by clinicians in patient care settings?

While this study focuses on one device, and the results are not generalizable beyond the specific device tested, the usability themes that emerged from pursuit of question 1 and the methods employed in this study can be applied to a wide range of devices and can help guide the way usability of such devices is assessed in the future. The approach employed in this study is intended to be of particular applicability to multifunction devices such as the C5.

3. Methods

3.1. Device

We studied a newly introduced "all-in-one" mobile hand-held PC, the "Mobile Clinical Assistant" (MCA C5), that was specifically developed to address the challenges of technology crowding and device overload in busy healthcare environments.
The C5 mobile PC incorporates wireless technology, a Windows operating system, a 10.4 inch color display screen, a barcode scanner, a digital camera, an RFID reader, and a biometric fingerprint reader. The device weighs 3.3 pounds and also has built-in loudspeakers, a microphone, a handle, and a tethered writing stylus. The C5 has a water-resistant, sealed case to allow disinfection using equipment-grade liquids (such as Viraguard) between patient encounters. The device is "ruggedized" to withstand a drop from 5 feet onto concrete. The C5 can access and display clinical information from external servers; no personal health information is persistently stored on the device itself. Finally, the device contains an accelerometer, which enables the screen display to rotate based on device orientation, and an antitheft system, which can be set to alarm, shut down, and delete all content in temporary storage if the device is moved outside the work environment where its use is authorized.

3.2. Subjects

Study subjects were a convenience sample of experienced clinical nurses, recruited via word-of-mouth and by advertisement on several nursing listservs.

3.3. Setting

Data were collected in a simulated clinical environment as these subjects completed a series of tasks designed to reveal the strengths and weaknesses of the C5's design. We conducted both phases of this study within a large university school of nursing 30-bed patient care simulation laboratory, and specifically in a small side classroom that is structured to represent a 3-bed intensive care unit. Within this room, there are 2 full-size Laerdal "SimMan" clinical mannequin simulators and one infant "SimBaby" in a bassinet.

3.4. Tasks
With simulated patient data provided by an electronic health record system (Eclipsys Sunrise Clinical Manager, SCM Version 4.5), subjects performed tasks related to barcode medication administration, digital photography of a stage 4 pressure ulcer for wound documentation, and an assessment of a newborn with documentation. Each of these tasks was chosen as representative of actions that a nurse might undertake in the course of a normal clinical workday. For the C5 digital camera testing/wound assessment, a partial-body mannequin with a variety of skin ailments was used. This partial mannequin is designed to illustrate a variety of skin conditions for use by educators. For example, a very life-like stage 4 sacral deep pressure ulcer with exposed bone, tissue tunneling, wound edges, exposed muscle, and exudate is present, as are sutures, rashes, stage 1 and 2 pressure ulcers, bruises, and nevi. The stage 4 sacral pressure ulcer was used for a portion of the digital photography component of the study. The subjects also used a full-size SimMan mannequin to approximate camera use with a "live" patient who required turning and positioning to obtain a picture of the sacral pressure ulcer. The barcode scanning component of the study was implemented via proprietary forms software and barcodes constructed specifically for this study. Barcoded badges, medications, and patient ID bands were created and used in the testing of the C5 barcode scanner. ID bands were attached to mannequins, and contrived "staff badges" with a barcode on the back were created and worn by subjects. "SimBaby" was used for the assessment procedure using the C5. All studies were completed in the same room under similar light conditions (mid-day).

3.5. Study Design

Following IRB review and approval, the study was conducted in two separate phases using two different subject samples.
Phase 1 tested the procedure and the tooling prior to enrolling and studying the primary participants. Two experts were used for Phase 1. In Phase 1, user and environmental analyses were conducted to profile the characteristics of system users and the environment in which they interact. Heuristic evaluations and cognitive walkthroughs, a type of usability inspection in which evaluators interact with the system and examine the device for usability issues, were also performed in Phase 1. This trial phase enabled the formal study procedures to be fine-tuned and the data collection procedures to be refined. The results from the first part of the study will not be covered in detail in this paper. Phase 2 of the study was conducted with 15 subjects to generate data illuminating the usability of the C5. Data were generated through ethnographic observations, surveys, and interviews of users during and after the performance of a series of three tasks (documenting, photographing, and barcode scanning) while using the C5. The focus of this paper is on Phase 2. In Phase 2, subjects completed in random order three simulated tasks using the C5 device: wound documentation using digital photography, barcode scanning with medication administration, and completion of a standard admission assessment on a newborn infant. Each participant completed the questionnaire after finishing all three tasks. Trained observers documented field observations, and subjects were asked to “think aloud” as they worked through the scenarios. 3.6. Data Collection Methods and Instruments. As each subject completed the three tasks, the PI took notes, inquired, encouraged thinking aloud, answered questions, and probed/interviewed about specific actions. The field notes from the observations were included in the data analysis. The “think-aloud” protocols generated by participants were recorded directly by the C5 device and saved.
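The per-subject randomization of the three tasks described above can be sketched as follows. This is an illustrative sketch, not the study's actual assignment procedure; the task labels are paraphrased and the seed is arbitrary:

```python
import random

# Task labels paraphrased from the study; the seed is a placeholder.
TASKS = [
    "wound documentation with digital photography",
    "barcode scanning with medication administration",
    "newborn admission assessment",
]

def assign_task_orders(n_subjects, seed=None):
    """Give each subject an independently shuffled order of the three tasks."""
    rng = random.Random(seed)
    orders = []
    for _ in range(n_subjects):
        order = TASKS[:]          # copy, so each subject gets a fresh shuffle
        rng.shuffle(order)
        orders.append(order)
    return orders

orders = assign_task_orders(15, seed=42)
assert all(sorted(o) == sorted(TASKS) for o in orders)   # every subject performs all three tasks
```

Randomizing the order per subject, rather than fixing one sequence, limits order effects such as learning or fatigue biasing any single task.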
The questionnaire used in this study was adapted from the QUIS (Questionnaire for User Interaction Satisfaction). QUIS is a long-standing, reliable, and valid usability checklist (http://lap.umd.edu/quis/). The QUIS was modified based on focus group input, adding specific items unique to the characteristics of the C5, and then content validity was determined by an expert panel in Phase 1. The resulting questionnaire comprised 7 sections: demographics (11 items, including years in practice and computing experience); overall user reaction (5 items); physical characteristics of the device (13 items); device reliability (1 item); simulated device management activities (2 items); other topics (6 items); user opinions (6 items). Items used Likert-type response scales (e.g., Easy-Hard) or checklists (Yes-No). Each of the 7 sections also included an area for free text comments comparing the C5 with standard methods of similar task completion/documentation in clinical practice. The entire questionnaire took approximately 15 minutes to complete. 3.7. Study Procedure. Following consent, each subject's experience began with orientation to the C5. Subjects were taught how to use the C5 camera and the C5 barcode scanner, and how to document in Eclipsys SCM. Each subject was also oriented to the device: how to adjust the views based on arm positioning, how to use the writing stylus, how to insert and remove the device from a docking station, and how to change the battery and conduct the disinfecting procedure. Subjects were also instructed on the think-aloud data collection procedure and asked to practice and demonstrate it prior to the start of the study to assure understanding and comfort. The consenting and orientation took, on average, approximately 1 hour per subject. Subjects were allowed to question, practice, and repeat as many times as they felt necessary to reach a level of comfort with the device and the procedure prior to starting the study.
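Likert-type items of the kind described above lend themselves to simple tallies, such as the counts of subjects rating an item at or above a difficulty threshold reported later in the Results. A minimal sketch, using invented response data rather than the study's actual responses, and assuming a hypothetical 5-point Easy-Hard scale:

```python
from collections import Counter

def summarize_item(ratings, hard_threshold=4):
    """Tally one Likert item and count responses at or above a 'difficult' threshold.

    Assumes a hypothetical 5-point scale (1 = very easy ... 5 = very hard).
    """
    counts = Counter(ratings)
    n_hard = sum(v for k, v in counts.items() if k >= hard_threshold)
    return {"n": len(ratings), "counts": dict(counts), "n_hard": n_hard}

# Invented responses for illustration only (not the study's raw data):
tip_tapping = [5, 4, 4, 5, 4, 4, 5, 4, 2, 2, 1, 3, 2, 3, 1]
summary = summarize_item(tip_tapping)
assert summary["n"] == 15 and summary["n_hard"] == 8   # e.g., "8 of 15 rated it somewhat to very difficult"
```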
Subjects personally determined how to hold the device and were encouraged to change positioning as necessary during the study. At that point, the study was begun, and the audio recorder (built in to the C5) was turned on. These audio files were later transcribed and analyzed. Following the completion of the study, the recorder was turned off, and subjects were given the questionnaire to complete. 3.8. Data Analysis and Usability Theme Identification. The PI, the research assistant, and two informatics experts assembled to code, analyze, and interpret the observational data and the subject voice recording (think-aloud) transcripts. To create the coding scheme for the transcripts, we employed an approach similar to that of Kushniruk et al. [10]. By reading three randomly chosen transcripts, all members of the team created individual lists of subject-expressed usability categories. Using a consensus process, the team then arrived at a single consolidated list of usability categories, which were then used to classify and tag expressed comments in the audio files from all 15 subjects. Each of the 15 transcripts was independently coded by two members of the team using the previously derived usability categories. Usability issues that arose and were not represented in the original coding scheme were flagged for later consideration. Coding disagreements were settled by a third independent team member. The occurrence of each coded utterance was marked with a timing point so that, during analysis, the PI could return to that exact time marker on the audio file to listen and record any specific comments. The results from the coding of the transcripts were then matched to the 7 sections of the questionnaire and (along with observations from field notes) were used to complete the dataset for analysis. The following example illustrates how the three data streams (questionnaire, observations, and coded transcripts) were consolidated.
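The adjudicated dual-coding step described above, with each utterance time-stamped for replay, could be represented schematically as follows; the category names, example data, and adjudicator function are hypothetical, not the study's actual coding scheme:

```python
from dataclasses import dataclass

@dataclass
class Utterance:
    time_s: float   # offset into the audio file, kept so the analyst can replay it
    text: str

def adjudicate(utterance, code_a, code_b, third_coder):
    """Two independent codes per utterance; a third team member settles disagreements."""
    if code_a == code_b:
        return code_a
    return third_coder(utterance, code_a, code_b)

# Hypothetical categories and adjudicator:
u = Utterance(132.0, "I can't get the shutter button to press")
keep_first = lambda utt, a, b: a      # stand-in for the third coder's judgment
assert adjudicate(u, "camera", "camera", keep_first) == "camera"
assert adjudicate(u, "camera", "input_ease", keep_first) == "camera"
```

Keeping the time offset on each coded utterance is what allows the analyst to jump back to the exact moment in the audio, as the section describes.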
One question on the survey asked “How easy is it to use the camera during the process of documenting with the C5?” The subject's rating from the questionnaire was then supplemented with any instances from the subject's coded transcript of expressed difficulty with the camera. The PI's field notes were examined, and any observations that highlighted user difficulty with the camera were noted and added to the dataset. For example, observed difficulties with the camera included subjects struggling to depress the shutter button, with occasional accidental machine shutdown caused by hitting the on/off button located adjacent to the shutter button. The clustering of these three data streams created a deeper, multidimensional dataset of usability issues. 4. Results 4.1. Demographics. Of the 15 RN subjects, 2 were male and 13 were female. Twelve of the subjects identified themselves as White not Latino, 1 identified as Asian not Latino, and 2 identified themselves as White Latino. All subjects were RNs; three were prepared at the baccalaureate level, ten had a master's degree, one had a PhD, and one had obtained postdoctoral training. Most of the subjects in the study were between 41 and 55 years of age. The average number of years of RN licensure in this sample was 21. The degree of comfort with the use of computers in the clinical setting for patient care purposes was high, with all but two participants ranking themselves as “very comfortable.” Two ranked themselves as “somewhat comfortable.” The majority of the users estimated that they used computers in their clinical practice upwards of 50% of the time. 4.2. Usability Themes. The data from the questionnaire, observations, and audio recordings clustered into 5 themes.
Several themes (1 and 3) included subthemes: (1) input ease (with subthemes of TIP tool, barcode reader, and camera); (2) portability; (3) security/safety (with bacterial transmission included as a key aspect of safety); (4) efficiency gains; (5) general ease/intuitiveness. 4.3. Usability of the C5 4.3.1. Theme 1: Input Ease. The theme of “input ease” is a compilation of specific items in the consolidated data set that relate to the ease with which data can be input into the C5. The input ease theme broke out naturally into subthemes based on the three different input modalities: TIP tool, barcode reader, and camera. The TIP tool was usable in two ways: by tapping and clicking with pulldown menus and an onscreen keyboard, or by using the stylus like a pen with handwriting recognition. The TIP tool is not specific to the C5; it is a Microsoft feature, yet many of the subjects had no experience with its use. It is included here due to its relative negative impact on usability comparisons. TIP Tool. Use of the TIP tool stylus-based input met with mixed results. Eight of the 15 subjects rated the TIP tool “tapping” input as somewhat to very difficult, and the field notes and coded comments revealed marked instances of difficulty and frustration. Subjects were observed to repeatedly tap the screen with increasing vigor while expressing negative perceptions. In contrast, the TIP tool handwriting recognition was rated positively by 13 of the 15 subjects, with many expressing surprise at its level of accuracy. However, only 1 of the 15 subjects mastered the proper method of editing the handwriting, spawning creative yet inefficient workarounds. Frustration with the editing function was high, but the ability to handwrite on the screen was a highly rated feature amongst most of the subjects. Camera. Eighty percent of the subjects rated the digital camera built in to the C5 as a very positive feature.
The participants voiced support for digital photography as a part of the patient record and believed that the impact of the camera on workflow and patient care was overwhelmingly positive. Recorded comments relayed comparisons with current methods of photography in clinical settings, which revealed very inefficient processes of requesting a camera, locating it, assuring that the batteries were operational, and the like. Several subjects stated that they would enjoy using such a camera when working with patients in chronic wound management settings, to show the status of wounds that a patient could not easily visualize (such as sacral pressure ulcers) or to better document the nature of wounds for a patient record. While supportive of the camera as a concept, 11 of the 15 participants found the C5 camera difficult to use. Problems included the location of the shutter button adjacent to the on/off switch, the positioning of the stylus tether directly in front of the lens, the low resolution (2.0 megapixels), which resulted in lower quality photos, and poor flash strength. In addition, subjects did not respond favorably to the process of focusing, which required that the entire C5 be moved in and out (similar to an iPad) instead of being able to autofocus or zoom with a focus button on the device itself. Barcode Scanner. Usability of the barcode scanner was rated highly, with only 2 of the subjects rating the scanner “somewhat difficult” to use in the survey. The observational and coded transcript data, however, provide additional dimensionality to the use of the barcode scanner and opportunities for improvement. In analysis of the remarks, the subjects were overwhelmingly positive about barcode scanning and were pleased that the C5 contained this feature. However, subjects voiced concern about having to move the entire device to scan something, and about the limited range of the scanner (6–8 inches maximum).
For example, the testing scenario included scanning an IV bag that was already hanging from a pole. One subject reached over the mannequin to scan a barcoded IV bag and dropped the device on the mannequin's head. Several expressed concerns about the ease of scanning a patient's wristband and having to position the entire C5 device to do so. Six subjects verbalized the value of barcoding and viewed it as an important safety feature. Others commented that it was good to have an “all in one” device because they were “already loaded with things to carry” and were not in favor of a documentation device and a separate barcode scanning device. Three subjects who were familiar with barcode scanning also commented that a barcode scanner located away from where scanning occurs “does not help me to improve safety or make my job easier” (paraphrased). 4.3.2. Theme 2: Portability. The portability theme included the benefit of being “untethered” from a fixed workstation in addition to perceptions of transportability/handling of the device. The portability of the device was rated from “valuable” to “very valuable” by 11 of the 15 participants on the survey. The transcripts and observation data supported the survey results, with many verbalized comparisons of current practice with fixed workstations and the inefficiency of computers on wheels and/or fixed stations. At the start of the study, every subject was encouraged to hold and readjust the C5 as needed and to use the built-in handle as he/she saw fit. Observational and transcript files reveal significant amounts of shifting and repositioning of the device that decreased over time. The autorotation of the screen was cited by several participants as a necessary and positive feature. Five of the 15 participants asked for an accompanying “strap” of some sort so that they could have two free hands at times. Three other participants said that a strap would alleviate some of the concerns they had about the device weight.
Twelve of the 15 subjects carried the device like a lunchbox in between task stations in the lab. Most of the subjects were observed to use the device like a clipboard or a medication tray. While the majority (60%) of the participants rated the device's weight (3.3 lbs) on the survey as “neutral”, all other ratings were skewed towards intolerable. The observational and transcript data highlighted concerns over weight, yet at the same time illustrated the resourcefulness of the nurse subjects in adjusting. Eight subjects specifically commented on the weight as being a problem, yet 5 of the 8 simply determined a way to deal with it (e.g., pulling up a bedside table, putting it on the edge of the bassinette, balancing it on a side rail or bedside table, or propping it on their knee). This also spawned the request for a strap or somewhere to hang the device when hands were needed for something else. 4.3.3. Theme 3: Security and Safety. The theme of “security/safety” is a compilation of specific items in the consolidated data set that relate to perceptions of the security and safety aspects of the C5 device. The ability to disinfect the C5 was included in this construct as a patient safety dimension. Participants rated the ability to disinfect the C5 as a “very important” feature (N = 13) and as making an important contribution to ease of use and efficiency. Regarding theft and data security, six of 15 leaned more towards “very worried,” while 7 were on the opposite end of “not very worried.” The survey results also revealed that most of the subjects were not concerned about the security of patient data on the C5, with thirteen of the 15 subjects having “little to no concerns.” In the transcripts, two subjects voiced concerns that patient data “lives” on the C5 even after it was explained that the C5 is just a conduit to the server.
These two subjects were adamant, fearing that if the device was stolen someone could access a copy of patient data residing inside the C5. Six of the subjects expressed concern that the C5 would be appealing to thieves and that clinicians would be held responsible if the device were stolen. 4.3.4. Theme 4: Efficiency Gains. The theme of “efficiency gains” is a compilation of variables from the consolidated data set that relate to the potential contributions the C5 device may make to efficiency and usefulness. The process of wipe disinfecting the device clustered with this construct due to comments about time savings and/or additional steps that may facilitate efficiency in workflow. The overall usefulness of the device was rated highly positive on the survey, with 13 subjects indicating that the C5 would help improve their practice. The transcripts and observational data support the survey data. Comments included “No more running back and forth, forgetting and missing details. I have the machine where I need it and when I need it” and “In the morning, we have so many services on the floor, everyone is looking up their labs, and all the computers are taken up and nurses cannot get to their POE orders because they cannot get to the computer. This will allow them to have their own POE orders in their hands, and not have to worry about fighting a resident for a computer system first thing in the morning.” Similarly, 13 of the 15 subjects on the survey believed that the C5 will improve their efficiency and effectiveness. The transcript and observational data support the survey data. Comments included “The disadvantage (of) coming out to the station is that you always get interrupted and then you (find that you) forgot to document, whatever. So the faster you can document, related to the actual care, is better.
So I think the closer to care is good” and “not walking back and forth to the nurse's station saves me time and steps. I do not have enough energy or the memory to waste anymore.” 4.3.5. Theme 5: General Ease/Intuitiveness. The theme of “general ease/intuitiveness” is derived from the variables that relate to the overall ease of using the device and the ability to “figure out” how to do something with the C5 relying on intuition and experience. On the survey question of “overall impression of the C5 device,” the majority of the participants rated the C5 device highly. Eleven subjects rated the C5 as a “4” (approaching “wonderful”), and 4 ranked it a “5” (wonderful). On the survey scale that assessed frustration versus satisfaction, 8 of the subjects found the device frustrating to use, ranking it as neutral or worse. Similarly, 7 of the 15 rated the device as somewhat difficult to use. However, ten of the fifteen ranked the device as stimulating or very stimulating (in contrast to boring or dull) to use. Most of the subjects (9) rated the C5 as “intuitive and easy to use.” The observation data shed additional light on these seemingly contradictory survey findings. Those who had an observed higher level of computer experience appeared to be more “at ease” with the device and used its features much more easily. This observation may illustrate differences between self-rated levels of computing experience (which were high by survey) and actual ability. For example, even though the majority of survey results pointed towards a high level of comfort and computing literacy, only subjects who were already familiar with the TIP tool were observed to use it readily without issue. Those subjects who were very familiar with the Eclipsys SCM 4.5 software showed observably higher levels of comfort.
Subjects with a greater degree of computing experience were able to open and close applications more easily, use the barcode scanner, increase the sizes of windows to enhance visibility, and readjust the view (portrait/landscape) to adapt to needs. Others struggled with certain aspects of the device, and their frustration was apparent to the observers. Examples of comments from the transcripts were “Do something with the string, it is driving me crazy”; “I can do this quicker with a pen and paper, the handwriting recognition is not working for me”; “How do you minimize something... actually, what does minimize mean?” 5. Discussion On the whole, the study participants perceived the C5 as highly useful, believed that the device would contribute to efficiency gains in practice, and considered device portability to be very important in supporting clinical workflow. The subjects' comparisons of the C5 with standard and current personal practice revealed significant frustration with the redundancy of current methods of documentation, device overload, and the imperative of employing workarounds when inefficient processes impede timely completion of tasks in busy environments. The ability to quickly disinfect the device and move on to the next patient was clearly important to the nurses who were the subjects in the study, particularly in consideration of an increased focus on prevention of hospital-acquired infections. Compared with current methods for documentation and performance of the tasks the C5 supports, the subjects valued the ability to untether from the nurse's station and to access and enter data instantaneously at the point of need. In addition, the value of having a personalized portable computing device and not having to compete for a workstation, particularly during shift change or rounds, was a virtue of the C5 raised by subjects.
Barcode-based functions are increasing in popularity, and the subjects expressed a strong desire not to be loaded down with another device or to have to pull a computer on wheels with an attached barcode scanner into the room. Smaller, more portable, and all-in-one appeared to be the most desirable mechanism for this study population. The untethering potential of the C5 may have implications beyond ubiquitous access to data. Empowered by a portable multifunction device, clinicians began to imagine novel ways the technology could be used to help them in their daily work. Several of the subjects who specialize in ostomy and wound care began to generate ideas about exchanging wound pictures across the team to measure healing responses, taking a picture of a sacral ulcer to show a patient the impact of a certain treatment or the benefits of an action the patient and/or family has taken, or taking a picture of a patient as part of the formal medical record so that proper patient identification at the bedside is enhanced. Digital photography incorporated as part of wound care assessments was viewed by several of the participants as a more accurate method of documentation than the current practice of narrative description. Even in light of the overall positive reaction to the concept of an all-in-one portable computing device, distinct usability issues emerged from the study. Some of the identified usability issues were potentially serious and could have negative consequences, from user frustration and possible technology abandonment to patient harm. The study revealed many aspects of the device that could be improved with design modification and also perhaps through enhanced training and increased computer literacy in clinical user groups [11]. The aspects of the device most in need of attention, in the view of study subjects, were centered on “form factor,” or physical device form.
The areas of improvement in regard to the form factor included:
(1) the location of on/off switches next to other important feature buttons. Frustration was high when, after arranging the patient and the device to take a picture, the off switch was accidentally pressed instead of the shutter and the machine shut down. It took considerable time to restart and reauthenticate, reposition the patient, and refocus, generating negative subject reactions;
(2) the location of the stylus tether, which results in its hanging over the camera lens. After taking a sometimes difficult-to-obtain picture, users were quite frustrated with the appearance of the tether;
(3) the weight of the device without some way to offload it easily to reduce weight stress and/or free up hands. As the study procedure time progressed, subjects began to voice concerns about the weight and what 8 or more hours of use would invoke;
(4) the camera structure, with no autofocus or ability to adjust the lens without moving the device, and the low megapixel count of the camera. The manner of focusing (similar to that of an iPad) was not positively received, and the low resolution thwarted some of the benefit of wound documentation, where edges and color resolution are very important aspects;
(5) the need for detachable/retractable components to better support workflow, such as the camera and the barcode scanner on a tether to support higher maneuverability around a patient. Subjects suggested that a camera lens or the barcode reader be put in the stylus (or similar) so that they could stretch it to the patient instead of having to move the entire device to the patient.
Other areas of improvement were noted that are not related to the physical form factor, falling instead on aspects related to the subjects themselves.
Approximately half of the subjects had concerns about the security of patient data on a portable device, a view that persisted after discussions of how client-server technology eliminates persistent data storage on the C5. The subjects' belief about data persistence was difficult to change. An additional aspect was the observed difference between self-reported computer comfort/literacy and the observed levels of the same. Even though the demographics in the survey illustrated that all but 2 of the subjects felt “very comfortable” with computing technology and that over 50% said that they routinely use computing technology in the workplace, there were observable differences in comfort and agility of use of the device. Nurses who were observed to be more comfortable with computing technology had lower levels of frustration and more easily configured the device to fit their style. Several subjects struggled with basic computing manipulations such as minimization, working with pull-down menus, and moving between landscape and portrait orientations. The findings point to a need to enhance the general computing competencies of all clinicians, who are expected to be able to work with increasingly complex health IT. An additional potentially valuable outcome of this study, as a specific example of health IT usability, is the five themes that emerged from the multimethod approach. With the expectation that more devices of this type will come on the market with similar design characteristics, a structure for quickly assessing the general dimensions of usability may be a useful tool. Further study and validation are needed, however, particularly in naturalistic settings where additional external influences will further impact use patterns and potential workarounds. The primary limitation of the study is its focus on a single device with multiple features encapsulated in a specific form factor.
As such, the results speak to the usability of this single device in toto. While many of the findings may carry forth to support general usability principles (e.g., the suboptimal placement of the on/off button adjacent to the shutter button), this study was not able to measure the contributions of individual features to overall measures of usability. Finally, the generalizability of the usability themes that emerged from this work must necessarily be the subject of further research. These themes may prove to be limited to multifunction devices such as the C5, or they may generalize more widely. Further research that focuses upon consolidated devices such as the C5 and their impact on usability is warranted. In general, the study resulted in overall positive findings regarding the utility and usability of a portable information appliance, particularly in comparison to current methods used by the participants in similar clinical situations. The usability constraints that arose were primarily related to the physical form factor, issues that can be mitigated with further design modification. The need for mobile and highly usable devices to support the effectiveness of busy clinicians is high, and further studies of the alignment between design intention and real-world use are imperative.
Acknowledgments
The assistance of Dr. Charles Friedman (University of Michigan) in editing this paper is acknowledged, as is the assistance of Dr. Laura Taylor, Rosemary Mortimer, and Rana Chedid (Johns Hopkins University School of Nursing).
References
[1] P. Karr-Wisniewski and Y. Lu, “When more is too much: operationalizing technology overload and exploring its impact on knowledge worker productivity,” Computers in Human Behavior, vol. 26, no. 5, pp. 1061–1072, 2010.
[2] T. K. Bucknall, “Medical error and decision making: learning from the past and present in intensive care,” Australian Critical Care, vol. 23, no. 3, pp. 150–156, 2010.
[3] S. E. McDowell, H. S. Ferner, and R. E.
Ferner, “The pathophysiology of medication errors: how and where they arise,” British Journal of Clinical Pharmacology, vol. 67, no. 6, pp. 605–613, 2009.
[4] Y. Y. Han, J. A. Carcillo, S. T. Venkataraman et al., “Unexpected increased mortality after implementation of a commercially sold computerized physician order entry system,” Pediatrics, vol. 116, no. 6, pp. 1506–1512, 2005.
[5] R. Koppel, J. P. Metlay, A. Cohen et al., “Role of computerized physician order entry systems in facilitating medication errors,” Journal of the American Medical Association, vol. 293, no. 10, pp. 1197–1203, 2005.
[6] J. S. Ash, M. Berg, and E. Coiera, “Some unintended consequences of information technology in health care: the nature of patient care information system-related errors,” Journal of the American Medical Informatics Association, vol. 11, no. 2, pp. 104–112, 2004.
[7] K. Vicente, Cognitive Work Analysis: Toward Safe, Productive, and Healthy Computer-Based Work, Lawrence Erlbaum, Mahwah, NJ, USA, 2002.
[8] J. DiConsiglio, “Creative ‘work-arounds’ defeat bar-coding safeguard for meds. Study finds technology often doesn't meet the needs of nurses,” Materials Management in Health Care, vol. 17, no. 9, pp. 26–29, 2008.
[9] R. Koppel, T. Wetterneck, J. L. Telles, and B. T. Karsh, “Workarounds to barcode medication administration systems: their occurrences, causes, and threats to patient safety,” Journal of the American Medical Informatics Association, vol. 15, no. 4, pp. 408–423, 2008.
[10] A. W. Kushniruk, M. M. Triola, E. M. Borycki, B. Stein, and J. L. Kannry, “Technology induced error and usability: the relationship between usability problems and prescription errors when using a handheld application,” International Journal of Medical Informatics, vol. 74, no. 7-8, pp. 519–526, 2005.
[11] P. A. Abbott and A.
Coenen, “Globalization and advances in information and communication technologies: the impact on nursing and health,” Nursing Outlook, vol. 56, no. 5, pp. 238–246, 2008.
POLE PHOTOGRAMMETRY WITH AN ACTION CAMERA FOR FAST AND ACCURATE SURFACE MAPPING

J. A. Gonçalves a,b,*, O. F. Moutinho a, A. C. Rodrigues a

a University of Porto, Science Faculty, Rua Campo Alegre, 4169-007, Porto, Portugal - jagoncal@fc.up.pt, up200704231@fc.up.pt, up200805757@fc.up.pt
b CIIMAR - Interdisciplinary Centre of Marine and Environmental Research, Porto, Portugal

Inter Commission WG I/Va

KEY WORDS: High resolution, Orientation, Point Cloud, Accuracy, Change Detection

ABSTRACT: High resolution and high accuracy terrain mapping can provide height change detection for studies of erosion, subsidence or land slip.
A UAV flying at a low altitude above the ground, with a compact camera, acquires images with a resolution appropriate for such change detection. However, there may be situations where a different approach is needed, either because higher resolution is required or because operating a drone is not possible. Pole photogrammetry, where a camera is mounted on a pole pointing at the ground, is an alternative. This paper describes a very simple system of this kind, created for topographic change detection and based on an action camera. These cameras offer high quality and very flexible image capture. Although their radial distortion is normally high, it can be treated in an auto-calibration process. The system is composed of a light aluminium pole, 4 meters long, with a 12 megapixel GoPro camera. The average ground sampling distance at the image centre is 2.3 mm. The user moves along a path, taking successive photos with a time lapse of 0.5 or 1 second, adjusting the walking speed in order to have an appropriate overlap, with enough redundancy for 3D coordinate extraction. Marked ground control points are surveyed with GNSS for precise georeferencing of the DSM and orthoimage created by structure from motion processing software. An average vertical accuracy of 1 cm could be achieved, which is enough for many applications, for example soil erosion. A GNSS survey in RTK mode with permanent stations is now very fast (5 seconds per point), which, together with the image collection, results in very fast field work. If improved accuracy is needed, it can be achieved by surveying the control points with a total station, since the image resolution is about 1/4 cm, although the field work time then increases.

* Corresponding author.

1. INTRODUCTION

High resolution and high accuracy terrain mapping can provide data for the detection of height changes due to several causes, such as erosion or mass movements (James and Robson, 2012).
LiDAR systems can provide important data but they are expensive and not accessible to many users, especially for small areas. Automatic methods based on Structure from Motion (SfM) can provide dense and rigorous point clouds from images alone, with obvious advantages in terms of cost (James and Robson, 2012, Raugust and Olsen, 2013). Several studies of applications have been published in geomorphology (Fonstad et al., 2013, Westoby et al., 2012, Johnson et al., 2014). Very high resolution mapping from drones is now common and can be done with very good positional accuracy. A drone flying 20 meters above the ground, with a compact camera, acquires images with a ground sampling distance (GSD) of around 1 cm. Provided that ground control exists with that positional accuracy, current processing software, based on SfM algorithms, allows for the extraction of very accurate and detailed surface models. Studies of landslide analysis have been carried out, for example, by Niethammer et al. (2011). However, there may be situations where a different approach is needed, either because higher resolution is required or because the operation of a drone is not possible, due to obstacles or legal restrictions. Pole photogrammetry, where a camera is mounted on a pole pointing at the ground, is an alternative. Action cameras are now very popular. Although they are normally presented as video cameras, they provide very good quality discrete images, normally with resolutions of 12 megapixels, as is the case of the GoPro Hero 4. Images can be acquired in time lapse mode, at rates of one or more images per second, for thousands of continuous images (Digital Photography Review, 2014). Another interesting aspect of action cameras is the large field of view, which can cover large areas when compared to other compact cameras. Action cameras are normally associated with large distortion, making their photogrammetric use difficult. However, as Baletti et al.
(2014) show, they can be calibrated using a standard distortion model and yield good photogrammetric results. This paper describes a very simple surveying system based on an action camera mounted on a pole, intended for applications such as erosion assessment. An analysis of the accuracy that can be achieved under several different conditions is presented. The system takes advantage of the availability of permanent GNSS networks, which allow for real time differential corrections (real time kinematics, RTK) over the internet. Ground control points can be surveyed with centimetric accuracy in very short times of a few seconds. This requires only one receiver, equipped with GSM data communication for internet access in the field. Structure from motion processing was done with Photoscan (Agisoft, 2016), in order to obtain a digital surface model (DSM) and an ortho-mosaic.

The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Volume XLI-B1, 2016. XXIII ISPRS Congress, 12-19 July 2016, Prague, Czech Republic. This contribution has been peer-reviewed. doi:10.5194/isprsarchives-XLI-B1-571-2016

2. DESCRIPTION OF THE METHODOLOGY

The methodology implemented is very simple, low cost and fast to execute. Details of the image acquisition system, ground control point collection and data processing are given below.

2.1 Pole and camera

The system is composed of an extensible rod, with a maximum length of 4 meters, and a small tripod attached at one end, where the camera is fixed (Figure 1). The camera inclination can be adjusted on the tripod. The rod, made of aluminium, is very light and can be held vertically by the operator, so that the camera is at least 5 meters above the ground. It can also be kept slightly tilted, so that the camera axis becomes closer to the vertical direction (Figure 1 b).

(a) (b) Figure 1. GoPro camera fixed in a small tripod at the end of the rod (a); the rod held tilted by an operator (b).
The camera is a GoPro Hero 4, which can be controlled via Wi-Fi. An application on a smartphone displays the images and allows image collection to be started and stopped. The camera has a resolution of 12 megapixels (4000 x 3000) in discrete image capture. It has a continuous shooting mode (time lapse mode) with a nominal maximum rate of 2 images per second; in fact, the actual average rate was 3 images every 2 seconds. When moving at a normal walking speed the overlap provides large redundancy for multi-stereo viewing. The nominal focal distance of the GoPro camera is 3 mm and the pixel size is 1.73 µm (Digital Photography Review, 2014). The area covered by a vertical image taken at 4 meters height is 6.9 m by 9.2 m, assuming no radial distortion, which corresponds to a ground sampling distance (GSD) of 2.3 mm. A separation of 1 meter between consecutive photos leads to an overlap of 85% in the direction of the smaller image side. The system is mainly intended for corridor mapping, with a single strip. For areas wider than 9.2 meters, overlapping strips can be made, with the operator taking the appropriate care to move along parallel lines.

2.2 Ground control

Precise image orientation requires ground control points, with an appropriate distribution throughout the area. Although natural points, such as paintings on the pavement, may exist, it is preferable to use marked points. In the present work, points were marked with chalk. In cases where that is not possible, such as on bare soil or sand, a set of signals printed on a rigid material was prepared. In any case it is convenient that they are numbered, in order to facilitate point identification. Figure 2 shows both types of points.

(a) (b) Figure 2. Images of the types of points used: (a) point marked with chalk and (b) signals.

The easiest way to survey the points is by dual frequency GNSS, in RTK mode (real time kinematics).
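The coverage figures quoted in section 2.1 (2.3 mm GSD at 4 m height, a 6.9 m by 9.2 m footprint and 85% overlap for a 1 m photo base) follow directly from the camera constants. A minimal sketch, assuming a distortion-free pinhole model and the nominal focal length, pixel size and sensor resolution given in the text:

```python
def gsd(height_m, focal_mm=3.0, pixel_um=1.73):
    """Ground sampling distance (metres/pixel) of a nadir image, no distortion."""
    return height_m * (pixel_um * 1e-6) / (focal_mm * 1e-3)

def footprint(height_m, px=(4000, 3000)):
    """Ground footprint (width, height in metres) of one image."""
    g = gsd(height_m)
    return px[0] * g, px[1] * g

def forward_overlap(height_m, base_m, px_along=3000):
    """Overlap fraction along the walking direction for a photo base of base_m,
    taken along the smaller (3000 px) image side, as in the text."""
    side_m = px_along * gsd(height_m)
    return 1.0 - base_m / side_m

g = gsd(4.0)                    # about 2.3 mm
w, h = footprint(4.0)           # about 9.2 m by 6.9 m
ov = forward_overlap(4.0, 1.0)  # about 0.85
```

These three functions reproduce the numbers in the text and make it easy to re-plan the survey for a different pole height or walking speed.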
With the availability of permanent station networks providing differential corrections in real time over the internet, very high accuracy can be achieved in very short observation times. In the case of Portugal the network used was RENEP (National Network of Permanent Stations), operated by the Portuguese national mapping agency (DGT, 2016). In the present work, a Trimble R6 receiver was used. Typically this type of receiver achieves an accuracy of 1 or 2 cm in 5 to 10 seconds of observation per point. GCP collection becomes very fast in this way. Positional accuracy at centimetre level may be appropriate for many situations but, since the GSD is 2.3 mm, a better accuracy of the GCPs would improve the digital surface model to be extracted. That requires a longer observation time per point and post-processing. Another alternative is to survey the points with a total station. In either case the field work time increases significantly. In order to verify the vertical accuracy of the DSM to be extracted, it is advisable to survey some independent altimetric check points, which do not need to be marked. This is important to certify that there are no deformations in the model. The time taken to survey these extra points is small if GNSS-RTK is used.

2.3 Structure from motion processing

The processing methodology designated SfM is essentially a method of automatic triangulation of images not necessarily acquired in a regular manner, as in conventional photogrammetry. A free bundle adjustment is done with a large number of conjugate points obtained by algorithms similar to SIFT (Lowe, 2004). This initial relative orientation process, normally called “image alignment”, is followed by an absolute orientation with ground control points. Since non-metric cameras are normally used, it is usual to include an auto-calibration in the image orientation.
The following step is the generation of a dense point cloud, by multi-view stereo matching, which leads to a digital surface model (DSM) and an ortho-rectified image mosaic. This processing is appropriate for the images acquired by the GoPro camera, which include large radial distortion and obviously do not come from a metric camera. The software used was Agisoft Photoscan (Agisoft, 2016), which does all the SfM processing and exports a DSM and an ortho-rectified mosaic in standard GIS formats. It incorporates a functionality for image rectification when radial distortion parameters are known, which was used.

2.4 Pre-calibration

The GoPro camera has large radial distortion that needs to be determined in the auto-calibration process. In some cases of processing original GoPro images the image alignment process did not converge to an appropriate solution, originating a sparse point cloud with obvious deformations. It was therefore decided to do some form of prior camera calibration. From some successful initial test surveys a set of calibration parameters (focal distance, principal point position and three coefficients of radial distortion) was taken as an approximation of the camera distortion. The focal distance was reduced from the nominal value (3 mm) by 2.5%. The principal point was displaced by 44 pixels in the x direction and 14 pixels in the y direction. Figure 3 shows an original image and the corresponding undistorted image. A point at the corner of the rectified image is displaced by 650 pixels from its position in the original image.

Figure 3. Original image and the corresponding undistorted image obtained with the approximate camera calibration.
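The rectification step uses a standard polynomial (Brown-type) radial model of the kind Photoscan supports. The sketch below inverts such a model by fixed-point iteration; the focal length in pixels (about 1690, i.e. the nominal 3 mm reduced by 2.5% divided by the 1.73 µm pixel) and the principal point (image centre shifted by the 44 and 14 pixel offsets mentioned in the text) are illustrative derivations, and the distortion coefficients are placeholders, not the authors' calibrated values:

```python
import numpy as np

def undistort_points(xy_px, f_px, c_px, k=(0.0, 0.0, 0.0), iters=10):
    """Invert the Brown radial model x_d = x_u * (1 + k1 r^2 + k2 r^4 + k3 r^6)
    by fixed-point iteration. xy_px: Nx2 distorted pixel coordinates."""
    xy = (np.asarray(xy_px, float) - c_px) / f_px   # normalised distorted coords
    und = xy.copy()
    for _ in range(iters):
        r2 = (und ** 2).sum(axis=1, keepdims=True)
        factor = 1.0 + k[0] * r2 + k[1] * r2 ** 2 + k[2] * r2 ** 3
        und = xy / factor                            # refine undistorted estimate
    return und * f_px + c_px

# with zero distortion coefficients the points are returned unchanged
pts = undistort_points([[2100.0, 1600.0]], f_px=1690.0,
                       c_px=np.array([2044.0, 1514.0]))
```

The same loop, applied pixel by pixel (or via a remapping grid), produces the undistorted images that were fed to the orientation step.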
In further surveys, images were rectified beforehand, and the orientation process was run on the undistorted images. Since the initial rectification is only an approximation, the bundle adjustment is done with auto-calibration for a refinement of the interior orientation.

3. TESTS CARRIED OUT

Tests were carried out at the Astronomical Observatory of the University of Porto. Paths with a total length of 240 meters were surveyed. The average speed of the operator was 1 m/s, which led to an average separation of 0.7 meters between consecutive photos. The paths were covered twice, in opposite directions, leading to a total of 680 photos. A total of 29 points were marked and surveyed with GNSS-RTK, with an average of 5 seconds per point. The estimated accuracy provided by the real time processing was, on average, 1.0 cm in planimetry and 1.5 cm in altimetry. Of these points, 14 were used as GCPs and 15 as independent check points of the orientation process (XYZ check points). Some unmarked points (a total of 77) were also surveyed, to be used as check points of the DSM heights. Figure 4 shows the surveyed area, with the location of the three types of points. The maximum length of the surveyed area is 110 m. The points were also surveyed with a total station, all from the same station; the expected accuracy is 3 mm. In the case of the height check points, only 23 were surveyed with the total station.

Figure 4. Test area and location of control and check points.

4. IMAGE ORIENTATION RESULTS

Undistorted images were input and went through the initial step of image alignment. This is followed by the identification of GCP and ICP locations, which is a manual procedure in Photoscan. It took some time because all points were found in more than 20 images. The bundle adjustment was then made, with the choice of further refinement of the interior orientation parameters.
The parameters considered were the focal distance, the principal point position, three polynomial coefficients of radial distortion and two tangential parameters.

4.1 Orientation with GNSS points

The process was initially done with the GNSS points. Residuals are provided for the GCPs and the ICPs. Table 1 shows the statistics of the residuals: minimum, maximum, average and root mean square. Since the orientation process is a least squares adjustment, the average of the GCP residuals is zero. As would be expected, residuals in the ICPs are slightly larger than for the GCPs. The obtained values are of the order of the GNSS accuracy, but better in the height component.

            14 GCPs              15 ICPs
         E     N     H        E     N     H
Min    -1.8  -2.2  -2.1     -2.0  -4.8  -2.4
Max     2.4   2.4   1.8      4.2   3.4   1.1
Mean    0     0     0        0.8  -0.2  -0.9
RMSE    1.1   1.4   1.0      2.0   2.4   1.3

Table 1. Statistics of the residuals found on the GNSS points, in centimetres.

These errors have a contribution from the photogrammetric process but certainly a larger contribution from the GCP and ICP accuracy.

4.2 Orientation with total station points

The process was repeated, now replacing the coordinates with the ones obtained by the total station survey. Residuals decreased, both for the GCPs and the ICPs, as shown in Table 2. In the case of the height, which again performed better, the improvement was approximately by a factor of 2. Although there was an improvement, the errors are still larger than the image resolution (0.23 cm). Even in this case it is not difficult for the surveying procedures (mark definition, verticality of the prism, measurement of instrument height, etc.) to introduce errors of several millimetres.
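The statistics reported in Tables 1 and 2 (minimum, maximum, mean and RMSE per coordinate component) can be reproduced with a small helper; the residual list below is illustrative, not the paper's raw data:

```python
import math

def residual_stats(residuals):
    """Min, max, mean and RMSE of a list of residuals (same units as input)."""
    n = len(residuals)
    mean = sum(residuals) / n
    rmse = math.sqrt(sum(r * r for r in residuals) / n)
    return {"min": min(residuals), "max": max(residuals),
            "mean": mean, "rmse": rmse}

# illustrative check-point residuals in centimetres
stats = residual_stats([-1.8, 0.4, 2.4, -0.6, -0.4])
```

Applied per component (E, N, H) to the GCP and ICP residual vectors exported by the orientation software, this yields one table row per statistic.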
In any case, this accuracy is enough for most of the intended applications.

            14 GCPs              15 ICPs
         E     N     H        E     N     H
Min    -1.8  -1.2  -0.5     -1.7  -2.1  -1.5
Max     1.5   2.0   0.4      2.5   3.3   0.2
Mean    0     0     0        0.0   0.6  -0.5
RMSE    1.0   0.9   0.3      1.4   1.6   0.7

Table 2. Statistics of the residuals found on the total station points, in centimetres.

5. DSM ANALYSIS

The following step was the generation of a dense point cloud, which was done at one fourth of the resolution, i.e., one pixel matched in every 4 by 4 image pixels. This leads to an approximate density of 1 point per square centimetre. This operation took approximately 5 hours for the 680 images. The DSM was created in the form of a 1 cm grid, which is adequate to describe very small details of the ground surface. Some visual inspection was done, together with an altimetric residual analysis.

5.1 Visual analysis

Figure 5 shows the DSM in colours, with hill shading. The height range in the area was 4.3 meters.

Figure 5. Coloured DSM, with hill shading effect.

Visual inspection of the DSM shows very fine detail on the pavement. No discontinuities were found. Vertical objects, such as some pillars that exist in the area, are shown with acceptable detail. An orthoimage mosaic was also built. Figure 6 shows (a) a smaller area, with the DSM and contours at a 10 cm interval, and (b) the orthoimage of the same area, also with the contours.

(a) (b) Figure 6. Coloured DSM (a) and orthoimage (b) of a smaller area, with 10 cm vertical spacing contours.

5.2 Residual analysis

A quantitative assessment of the DSM heights was made with the altimetric check points. Residuals in the check points are calculated as the difference between the measured height and the height interpolated from the DSM grid, using bilinear interpolation. Table 3 contains the statistics of the residuals in the check points, obtained with both surveying methods.

                GNSS    Total station
No. of points    77          23
Min            -2.6        -0.53
Max             3.8         0.44
Mean            0.5        -0.05
RMSE            1.1         0.32

Table 3.
Statistics of the height errors found (cm) in the altimetric check points (GNSS and total station).

There was a clear improvement for the DSM obtained from images oriented with the total station points. Since the residuals became of millimetric order, they are shown with one more digit. Results are now closer to the image resolution: an RMS of 3.2 mm, slightly more than the image GSD (2.3 mm).

6. CONCLUSIONS

The system described is of very low cost and allows fast field data collection when control points are surveyed with GNSS. The vertical accuracy of the extracted surface model had an RMSE of around 1 cm. Although much more time is taken for the total station survey, accuracy can then be improved to a few millimetres, close to the image resolution. It must be noted, however, that all control and check points were located in flat areas. Some accuracy loss may be expected if the surface is rougher and control points have to be located in places with some slope. The system is being tested in new situations of actual erosion assessment, in order to be more thoroughly validated. A limitation of the method is related to the increase of the area covered, and consequently of the number of images: processing times tend to become extremely large. To avoid this, some optimization can be done by increasing the time interval between images, reducing their number without compromising accuracy. This will be done in future work.

ACKNOWLEDGEMENTS

The GNSS National Network of Permanent Stations (RENEP) of “Direcção Geral do Território” was used for differential corrections.

REFERENCES

Agisoft, 2016. Agisoft documentation: http://www.agisoft.com/pdf/photoscan-pro_1_2_en.pdf (accessed March 2016).

Digital Photography Review, 2014.
Available online: www.dpreview.com (accessed March 2016).

Lowe, D., 2004. Distinctive image features from scale-invariant keypoints. International Journal of Computer Vision, Vol. 60, No. 2, pp. 91-110.

Fonstad, M.A., Dietrich, J.T., Courville, B.C., Jensen, J.L., Carbonneau, P.E., 2013. Topographic structure from motion: a new development in photogrammetric measurement. Earth Surface Processes and Landforms, Vol. 38, pp. 421-430.

James, M.R., Robson, S., 2012. Straightforward reconstruction of 3D surfaces and topography with a camera: accuracy and geoscience application. Journal of Geophysical Research, Vol. 117, F03017.

Johnson, K., Nissen, E., Saripalli, S., Arrowsmith, J.R., McGarey, P., Scharer, K., Williams, P., Blisniuk, K., 2014. Rapid mapping of ultrafine fault zone topography with structure from motion. Geosphere, Vol. 10, No. 5, pp. 969-986.

Niethammer, U., Rothmund, S., Schwaderer, U., Zeman, J., Joswig, M., 2011. Open source image-processing tools for low-cost UAV-based landslide investigations. Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci., 38, pp. 1-6.

Raugust, J.D., Olsen, M.J., 2013. Emerging technology: structure from motion. LiDAR Magazine, Vol. 3, No. 6, 5 p.

Westoby, M.J., Brasington, J., Glasser, N.F., Hambrey, M.J., Reynolds, J.M., 2012. Structure from motion photogrammetry: a low-cost, effective tool for geoscience applications. Geomorphology, Vol. 179, pp. 300-314.
Zurich Open Repository and Archive, University of Zurich
Main Library, Strickhofstrasse 39, CH-8057 Zurich
www.zora.uzh.ch
Year: 2019

Accuracy of an automated three-dimensional technique for the computation of femoral angles in dogs

Longo, Federico; Savio, Gianpaolo; Contiero, Barbara; Meneghello, Roberto; Concheri, Gianmaria; Franchini, Federico; Isola, Maurizio

Abstract: Aims: The purpose of the study was to evaluate the accuracy of a three-dimensional (3D) automated technique (computer-aided design (aCAD)) for the measurement of three canine femoral angles: anatomical lateral distal femoral angle (aLDFA), femoral neck angle (FNA) and femoral torsion angle. Methods: Twenty-eight femurs equally divided into two groups (normal and abnormal) were obtained from 14 dogs of different conformations (dolichomorphic and chondrodystrophic). CT scans and 3D scanner acquisitions were used to create stereolithographic (STL) files, which were run in a CAD platform. Two blinded observers separately performed the measurements using the STL obtained from CT scans (CT aCAD) and from the 3D scanner (3D aCAD), which was considered the gold standard method. Correlation coefficients were used to investigate the strength of the relationship between the two measurements. Results: Accuracy of the aCAD computation was good, being always above the threshold of R2 greater than 80 per cent for all three angles assessed in both groups. aLDFA and FNA were the most accurate angles (accuracy >90 per cent). Conclusions: The proposed 3D aCAD protocol can be considered a reliable technique to assess femoral angle measurements in the canine femur. The developed algorithm automatically calculates the femoral angles in 3D, thus considering the subjective intrinsic femur morphology.
The main benefit relies on a fast user-independent computation, which avoids user-related measurement variability. The accuracy of 3D details may be helpful for patellar luxation and femoral bone deformity correction, as well as for the design of patient-specific, custom-made hip prosthesis implants.

DOI: https://doi.org/10.1136/vr.105326

Posted at the Zurich Open Repository and Archive, University of Zurich. ZORA URL: https://doi.org/10.5167/uzh-172071. Journal Article, Accepted Version.

Originally published at: Longo, Federico; Savio, Gianpaolo; Contiero, Barbara; Meneghello, Roberto; Concheri, Gianmaria; Franchini, Federico; Isola, Maurizio (2019). Accuracy of an automated three-dimensional technique for the computation of femoral angles in dogs. Veterinary Record, 185(14):443. DOI: https://doi.org/10.1136/vr.105326

Research article

Accuracy of an automated three-dimensional technique for the computation of femoral angles in dogs

F. Longo a,d,*, G. Savio b, B. Contiero a, R. Meneghello c, G. Concheri b, F. Franchini b, M. Isola a

a Department of Animal Medicine, Production and Health, University of Veterinary Medicine, Padova, Italy
b Laboratory of Design Tools and Methods in Industrial Engineering, Department of Civil, Architectural and Environmental Engineering, University of Engineering, Padova, Italy
c Department of Management and Engineering, University of Padova, Vicenza, Italy
d Clinic for Small Animal Surgery, Vetsuisse Faculty, University of Zurich, Zurich, Switzerland

* Corresponding author. Tel: +39 049 8272608. E-mail address: flongo@vetclinics.uzh.ch (F. Longo).

Abstract

The purpose of the study was to evaluate the accuracy of a three-dimensional (3D) automated technique (aCAD) for the measurement of three canine femoral angles: anatomical lateral distal femoral angle (aLDFA); femoral neck angle (FNA); and femoral torsion angle (FTA).
Twenty-eight femurs equally divided into two groups (normal and abnormal) were obtained from 14 dogs of different conformations (dolichomorphic and chondrodystrophic). Computed tomographic (CT) scans and 3D scanner acquisitions were used to create stereolithographic (STL) files, which were run in a computer-aided design (CAD) platform. Two blinded observers separately performed the measurements using the STL obtained from CT scans (CT aCAD) and from the 3D scanner (3D aCAD), which was considered the gold standard method. Correlation coefficients were used to investigate the strength of the relationship between the two measurements. The accuracy of the aCAD computation was good, being always above the threshold of R2 > 80% for all three angles assessed in both groups. aLDFA and FNA were the most accurate angles (accuracy > 90%). The proposed 3D aCAD protocol can be considered a reliable technique to assess femoral angle measurements in the canine femur. The developed algorithm automatically calculates the femoral angles in 3D, thus considering the subjective intrinsic femur morphology. The main benefit relies on a fast user-independent computation, which avoids user-related measurement variability. The accuracy of 3D details may be helpful for patellar luxation and femoral bone deformity correction, as well as for the design of patient-specific, custom-made hip prosthesis implants.

Keywords: Accuracy; Dog; Femur; Computed tomography; Three-dimensional reconstructions; 3D scanner

Introduction

The state of the art for the measurement of angles in the canine femur has traditionally been limited to multiple orthogonal radiographs (RX),1-3 which were gradually overtaken by computed tomography (CT) scans 4,5 and magnetic resonance imaging (MRI) evaluations.
6,7 These latter two diagnostic techniques show satisfactory aptitude in terms of bone and image manipulation, avoiding the positioning issues that frequently characterize radiographic evaluation.4,8 However, CT and MRI lack a real three-dimensional (3D) measurement of angles, since almost all the values proposed in the literature were achieved with two-dimensional (2D) imaging.6,9,10 Recently a 3D Python-based algorithm run in a computer-aided design (CAD) software (Rhinoceros version 5, Robert McNeel & Associates) was presented as a novel methodology for the computation of femoral angles in the canine femur.11,12 The femoral angles computed, differently from those obtained using other diagnostic techniques,1-10 were measured in a real 3D fashion. The main benefit relies on automated measurements, which are independent of the points selected by the operator, and of bone orientation and conformation as well. As a result, operator-related measurement variability is decreased, as the manual manipulation of the bone model and the identification of target anatomical landmarks are not required. The repeatability and reproducibility of the proposed protocol were assessed and compared with manual measurements made on radiographs and CT reconstructions, finding that the 3D protocol was the most repeatable and reproducible method.12 This conclusion was also supported by the automated design of the 3D protocol, which restricts the potential user-related errors only to the operations required for the creation of the mesh model and, therefore, remarkably decreases the computational time.11 However, the accuracy of the 3D measurements, described as the difference of a measured value from a true value, was not assessed and needed to be investigated.
Therefore, the purpose of this study was to determine the accuracy of our aCAD protocol for the computation of three femoral angles in dogs: anatomical lateral distal femoral angle (aLDFA); femoral neck angle (FNA); and femoral torsion angle (FTA). Polygonal mesh models were created from 3D reconstructions of CT images and femoral angles were computed with the developed protocol. The values obtained were compared to the measurements performed with the same aCAD protocol but executed on polygonal mesh models generated by 3D scans, which, due to their high-resolution 3D nature, were assumed as the gold standard technique for this study. The second objective of this study was to assess the efficacy of the aCAD protocol for the measurement of femoral angles in either normal or abnormal femurs.

Materials & Methods

Fourteen canine paired pelvic limbs were collected. The dogs had been euthanised for reasons unrelated to this project, and a signed informed consent was requested before proceeding with imaging acquisition and femur disarticulation. The study was conducted in a double-blind fashion by two observers (an orthopaedic surgeon and an engineer). One experienced radiologist acquired all radiographic and CT images. He also anonymised all CT scans using a legend and separately packed every femur sample to prevent any conditioning of the observers. Specimens were first radiographed with digital radiographic equipment (Kodak Point of Care CR-360 System, Carestream Health). Standard ventro-dorsal and latero-lateral views were performed.

CT scans

CT scans were then acquired with a four multidetector-row CT scanner (Toshiba Asteion S4, Toshiba Medical Systems Europe). Dogs were positioned in supine recumbency with the legs adducted, extended and tied above the stifles.
An amperage of 150 mA, an exposure time of 0.725 s and a voltage of 120 kV were set. A slice thickness of 1 mm (reconstruction interval 0.8 mm) was applied. CT images were reconstructed with a high-resolution filter for bone with the following bone window: window length 1000 Hounsfield units (HU); window width 4000 HU. A 3D volume reconstruction was done using DICOM-processing software (OsiriX version 2.7, Pixmeo SARL). The first observer isolated every anonymized femur in OsiriX by cropping the tibia and pelvis, avoiding unintentional modification of the profiles of the femoral head and condyles. Once the femur model was separated, it was segmented using the procedure described by Longo et al.12 Briefly, using the region of interest (ROI) and 2D/3D region-growing software functions, the observer found the mean femur density values, which are usually greater than 300 HU, and then set up the segmentation parameters in a dedicated tool window. As a result, a bitmap (a newly generated imaging series) was created and reconstructed in 3D through the surface-rendering function. Finally, a 3D stereolithography (STL)13 file was saved and imported into the Rhinoceros platform.11,12

3D scans

STL files were generated from 3D scans to obtain reference models against which the femoral angles measured on CT could be compared. Femurs were disarticulated at the coxo-femoral and femoro-tibial joints, dissected free of soft tissues except the patella and fabellae, and stored in plastic bags at -20 °C. A 3D scanner (Cronos 3D Dual, Open Technologies) was used for the femoral analysis. The second observer positioned every anonymised femur on a circular rotating platform. The scanning of the femur was performed using a camera-based triangulation technique characterized by a predetermined convergence angle and a fringe projector.
The platform was automatically rotated through a predetermined angle sequence, obtaining at least 5 to 10 acquisitions. A 3D geometrical bone model was generated by superimposing and aligning the multiple views of the model obtained in each sequence, by means of engineering software (Optical RevEng, Open Technologies). Cleaning, filtering and hole-closing phases were used to delete model inaccuracies such as noise and local spikes. As a result, a high-resolution mesh model of the bone was obtained and saved as an STL file. The accuracy of the 3D scanner is ±30 µm.14 Similar results were obtained by the internal verification procedure based on ISO 10360-8:2013 at the Laboratory of Design Tools and Methods in Industrial Engineering. Considering that the 3D scanner accuracy is higher by more than an order of magnitude than the CT axial resolution (0.8 mm), it is possible to assume the 3D scan models as the reference.

Automated CAD measurements from CT reconstructions (CT aCAD) and the 3D scanner (3D aCAD)

Both observers imported each CT (Fig. 1) or 3D (Fig. 2) STL file into the CAD software, where the aCAD protocol was used to measure the femoral angles. The aCAD computation was performed following the same procedure steps described by Savio et al.11 In brief, the vertices inside the femoral medullary canal (internal mesh) were selected and deleted. This operation is needed to improve the quality of axis drawing and angle measurement, as the presence of internal vertices may interfere with the automatic computation. Then, the femoral analysis was initiated by clicking on the femoral head. To compute the femoral angles, the developed algorithm first identifies points, planes and axes in the femur mesh.
The algorithm performed all the measurements in a few minutes through four automatic phases: 1) femur alignment; 2) proximal femoral long axis computation; 3) analysis of the proximal femoral epiphysis; 4) analysis of the distal femoral epiphysis. During the two final phases, the vertices representing the femoral head and condyles were fitted with superimposed spheres (Figs. 1 and 2).11,12 Finally, the aLDFA, FNA and FTA angles were displayed on the screen and recorded by the observer.

Groups

Considering radiographic, CT and gross visual evaluation, the specimens were examined for evidence of osteoarthritis (OA) and differences in breed conformation (dolichomorphic vs chondrodystrophic). The femurs were divided into two groups. Group 1 was assigned as normal, adopting the following inclusion criteria: femurs were obtained from dolichomorphic breeds with no evidence of OA. The second group was more heterogeneous and included femurs either affected by OA regardless of conformation or taken from chondrodystrophic breeds (Fig. 3). The radiologist radiographically evaluated the degree of OA and converted the OA score to a numeric scale (0 = none; 1 = mild; 2 = moderate; 3 = severe).15,16

Statistical analysis

The statistical analyses were performed using commercially available software (SAS 9.4, SAS Institute Inc., Cary, NC, USA). The normality hypothesis was assessed by the Shapiro-Wilk test. A linear regression analysis was applied, considering the gold standard method (3D aCAD) as the independent variable and CT aCAD as the dependent variable. The adjusted R2 was used to quantify the strength of the relationship between the angles measured with the CT aCAD (observer 1) and 3D aCAD (observer 2) techniques. Adjusted R2 values > 80% were considered acceptable. The hypotheses of the linear model on the residuals were graphically assessed.
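The comparison workflow used here (normality testing, regression of the tested technique on the gold standard with an adjusted R2 acceptance criterion, and a paired t-test) can be sketched in a few lines of Python with NumPy/SciPy. This is an illustrative sketch, not the SAS code used by the authors; the function name and the threshold constant are our own.

```python
import numpy as np
from scipy import stats

ACCEPTANCE_THRESHOLD = 0.80  # adjusted R2 > 80% considered acceptable

def compare_techniques(gold, tested):
    """Compare paired angle measurements (tested technique vs gold standard).

    gold   : angles from the reference technique (3D aCAD), independent variable
    tested : angles from the technique under evaluation (CT aCAD), dependent variable
    """
    gold = np.asarray(gold, dtype=float)
    tested = np.asarray(tested, dtype=float)

    # Normality of each sample (Shapiro-Wilk W statistic).
    w_gold, _ = stats.shapiro(gold)
    w_tested, _ = stats.shapiro(tested)

    # Linear regression with the gold standard as the independent variable.
    res = stats.linregress(gold, tested)
    n, k = len(gold), 1  # k = number of predictors
    r2 = res.rvalue ** 2
    adj_r2 = 1.0 - (1.0 - r2) * (n - 1) / (n - k - 1)

    # Paired Student t-test on the measurement pairs.
    t_stat, p_value = stats.ttest_rel(tested, gold)

    return {
        "shapiro_w": (w_gold, w_tested),
        "slope": res.slope,
        "intercept": res.intercept,
        "adj_r2": adj_r2,
        "t": t_stat,
        "p": p_value,
        "acceptable": adj_r2 > ACCEPTANCE_THRESHOLD,
    }
```

The same structure applies to each of the three angles (aLDFA, FNA, FTA) and to each group separately.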
Descriptive statistics (means, standard deviations, medians and interquartile ranges) were calculated for each angle (aLDFA, FNA and FTA) and for both imaging techniques. A paired Student t-test was performed to compare the data recorded with CT aCAD and the gold standard. Statistical significance was set at P < 0.05.

Results

Twenty-eight femurs, divided into two groups (1 = normal, 2 = abnormal) of 14 femurs each, were used for this study. The specimens were obtained from dogs of different breeds and conformations: 3 mixed-breed dogs, 2 Dachshunds, 2 French Bulldogs and 1 each of Pug, German Shepherd, Labrador Retriever, Bernese Mountain Dog, Segugio Italiano, Amstaff and Great Dane. Ten dogs were intact males, 3 were spayed females and 1 was an intact female. The overall mean body weight was 19.5 kg (range 4-44 kg), whereas the group mean body weights were: group 1, 16.1 kg (range 13-28 kg); group 2, 19.3 kg (range 4-44 kg). The overall mean age was 9.5 years (range 2-15 years). The mean age of group 1 was 4.7 years (range 2-8 years), while group 2 had a mean age of 12.5 years (range 9-15 years).

Group 1 included 14 dolichomorphic femurs with no evidence of radiographic OA. Within the 14 femurs of group 2 there were: 4 chondrodystrophic femurs not affected by OA, 6 chondrodystrophic femurs affected by OA (mean OA score: 1) and 4 dolichomorphic femurs affected by OA (mean OA score: 2). All data for the 3 angles and for both CT aCAD and 3D aCAD measurements were normally distributed (Shapiro-Wilk test > 0.9). The values of the angles recorded were well aligned along the regression lines in almost all the samples, except for some femurs included in group 2 (Fig. 4). The adjusted R2 values of the CT aCAD and 3D aCAD measurements were always above the acceptance criterion of 80%, regardless of the angle measured and the group considered. Overall, the coefficients calculated for all 28 femurs were: aLDFA > 95%; FNA > 95%; and FTA > 86% (Fig. 4). Specifically, within group 1 the coefficients were: aLDFA > 93%; FNA > 93%; and FTA > 98%, while within group 2 they were: aLDFA > 97%; FNA > 94%; and FTA > 82% (Fig. 4). Technique-related means, medians and interquartile ranges for the 3 angles are displayed in Table 1. The t-test showed no statistically significant difference (P < 0.05) in the mean difference of the paired measurements for any angle assessed, except for the FTA measurement in group 2 (Table 2).

Discussion

This study investigated the accuracy of a novel automated 3D technique (aCAD) for the computation of canine femoral angles. We used the correlation coefficients to assess the strength of the relationship between the angle measurements performed by the observers in Rhinoceros, starting from STL files created either from CT scans (CT aCAD) or from the 3D scanner (3D aCAD). In terms of the accuracy investigation, the aCAD methodology was satisfactory for all three angles assessed (> 82%). This suggests that the CT aCAD measurements were comparable to the 3D aCAD measurements, which represented our reference standard method of assessment. The practical consequence is that the developed 3D protocol is not only repeatable and reproducible12 but may also be considered accurate enough. However, a validation of the 3D scanner on bone measurements needs to be performed to corroborate this assumption.
The accuracy of a test is a description of how close a measured value is to an assumed true value, which means that a "true" value must be both identifiable and measurable, thus providing an unequivocal gold standard against which new tests may be assessed.4,16 In this study, we assumed 3D scanner measurements of the femoral anatomic specimens to be the gold standard method for two main reasons. First, the 3D scanner allows the creation of detailed and precise geometrical bone models17 that closely reproduce the original femoral morphology. We applied white spray to the femoral specimens and waited at least 24 hours before image acquisition with the scanner. The aim was to improve the visualization of the femoral cortices by decreasing the radio-transparency of the bone, and thus to improve the quality of the femoral captures. Second, the 3D scanner allows the user to work with truly 3D files, which cannot be obtained from other reported two-dimensional techniques.9,18 It may be argued that we could have either measured the femoral angles on digital photographs of the femur specimens or calculated them directly on the bones. However, since the quantification of an established "true" value for such a variable measurement (an angle) depends on arbitrary anatomic landmarks, in the authors' opinion a comparison of a 3D technique (aCAD) with a 2D gold standard method (digital photography) would not be feasible. The reason is attributable to the structural differences of the methodologies tested. A direct measurement of femoral angles on the femur specimens could have represented an alternative gold standard. However, we believe that such a method would not represent an accurate methodology either, because precise anatomic reference lines would need to be drawn, increasing the risk of operator measurement errors.
Overall, the aLDFA and FNA were the most accurate angles, since their correlation coefficients were always above the 90% threshold, regardless of the group considered. FTA measurements were still satisfactory but showed lower accuracy. These results partially confirm the data we previously presented.12 Specifically, the aLDFA represents the most repeatable, reproducible and accurate angle to measure. The FNA, which was the least repeatable and reproducible angle to quantify with three different diagnostic techniques (radiography, CT and aCAD computation), here exhibited comparable values between CT aCAD and 3D aCAD. The measurements recorded for the FTA deviated the most from the true values, but were still within the established threshold of acceptance (> 80%) in both normal and abnormal femurs. The computational ability of the developed protocol in femurs of different dimensions and conformations (dolichomorphic and chondrodystrophic breeds), as well as in femurs affected or unaffected by OA, represented a key point of our project. Previously, the described 3D protocol had been applied merely to normal femurs, free of orthopaedic disease.11,12 The femoral angles measured by the observers are commonly quantified in the preoperative planning of patellar luxation,5,19 which is frequently caused by femoral deformities.20,21 These skeletal malformations cause imbalanced joint loading and, when they are either severe or diagnosed late (chronic), they may lead to OA, which deforms the articular profiles.22-24 In this study, 10 out of 28 femurs were affected by OA, of which one (femur 19) had a severely arthritic femoral head (OA score: 3) (Fig. 5) and two (femurs 25 and 26) had altered condylar profiles (OA score: 2).
The massive remodeling of the articular profiles, above all of the femoral head, represents both a challenge for the computational analysis and a plausible explanation for the less-than-perfect accuracy detected for the FTA. The algorithm needs to correctly identify and fit the original sphere of the femoral head and condyles. During the pilot development phase, the algorithm was set up to exclude from the analysis all vertices belonging to components external to the femoral head fitting, such as osteophytes, which could potentially alter the computational analysis.11 The FTA correlation coefficient obtained for the computation of abnormal femurs (R2 = 82%) means that the algorithm can also effectively analyse deformed femoral heads, but not as accurately as for the FNA and aLDFA computation (≥ 92%). Considering the satisfactory FTA accuracy in the normal group (R2 FTA > 98%), we attribute the lower FTA accuracy in abnormal femurs mainly to the difficulty of analysing severely altered femoral head profiles. However, the accuracy obtained was still greater than the 80% threshold (R2 = 82%). The descriptive statistics displayed in Table 1 show that the values measured for FNA and FTA fall within the ranges described in the literature: FNA (125°-138°)3,25 and FTA (12-40°).2,25 The FNA and especially the FTA reference ranges are wide.2,3,25 In the authors' opinion this is concerning and needs to be clarified, as femoral torsion is frequently detected in cases of patellar luxation and often needs to be corrected. The wide accepted clinical tolerance for FTA suggests that there is a variable, depending either on the femur morphology or on the observer's ability, which influences the angle measurements. Explanations may rely on the identification of target points such as the centres of the femoral head and neck, which can be challenging for the observer, especially in cases of severe OA.
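As an illustration of the sphere-fitting step (not the authors' implementation), a femoral head can be approximated by a linear least-squares sphere fit, with a simple residual-based rejection pass standing in for the exclusion of osteophyte vertices described above. Function names and the tolerance parameter are illustrative assumptions.

```python
import numpy as np

def fit_sphere(points):
    """Algebraic least-squares sphere fit: solves x^2+y^2+z^2 = 2ax+2by+2cz+d."""
    pts = np.asarray(points, dtype=float)
    A = np.c_[2.0 * pts, np.ones(len(pts))]
    b = (pts ** 2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    centre = sol[:3]
    radius = np.sqrt(sol[3] + centre @ centre)
    return centre, radius

def fit_sphere_robust(points, tol=0.1, iters=3):
    """Refit after discarding vertices far from the current sphere surface
    (a crude stand-in for excluding osteophytes from the head fitting)."""
    pts = np.asarray(points, dtype=float)
    for _ in range(iters):
        centre, radius = fit_sphere(pts)
        resid = np.abs(np.linalg.norm(pts - centre, axis=1) - radius)
        keep = resid < tol * radius
        if keep.all():
            break
        pts = pts[keep]
    return centre, radius
```

In practice a published pipeline would use a more principled outlier model (e.g. RANSAC); the sketch only shows why vertices lying off the spherical surface, such as osteophytes, must be excluded before the fit.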
Our FTA means range from 20-22° (Table 1), which agrees with our previous results11,12 and with the literature ranges.2,25 However, a 27° reference value for femoral torsional deformity is sometimes assumed,20 and the obtained FTA mean therefore implies that our 3D technique identifies a more retroverted position of the femoral head. Whether this result may have a clinical impact could not be answered by this study and therefore needs to be investigated further. The aLDFA mean values, in agreement with those already found by the authors,11,12 are slightly lower than the reported range (aLDFA 94-98°).3,25 We attribute this result mainly to the morphologic heterogeneity of the femurs computed. We analyzed a range of femurs of different dimensions (small to large dogs) and conformations (dolichomorphic and chondrodystrophic), while the data reported in the literature were obtained mainly from large dolichomorphic dogs.16,25 It is plausible to expect that chondrodystrophic dogs, as well as small breeds, may be characterized by different values of frontal and torsional femoral alignment. Furthermore, the t-test analysis exhibited no significant difference for each pair of values assessed. In almost all the cases evaluated, the CT aCAD measurements tended to underestimate the femoral angle values compared to the gold standard, but this tendency was statistically significant only for the femoral torsion evaluation in the group of abnormal femurs (Table 2).

Conclusions

We have shown that the automatic measurements obtained from CT-derived data are comparable with high-resolution 3D scanner-derived data, suggesting that the tested automated CAD technique is an accurate methodology for measuring femoral angles in both normal and abnormal canine femurs. However, it has not yet been established what the gold standard for 3D measurements should be.
Therefore, further studies could be undertaken to compare anatomical versus 3D scanner measurements of bones. The presented methodology could represent a reliable diagnostic method to adopt when a femoral deformity is suspected, with the automated and 3D nature of its assessments and the rapidity of its computational analysis as its main substantial benefits. Moreover, the precision of patellar luxation planning may increase, owing to the user-independent structure of the measurements. Finally, the ability to correctly identify anatomic landmarks such as the original curvature of the femoral head, the external and internal profiles of the femoral neck, and potentially the original morphology of the acetabulum, even in cases of severe degenerative joint disease, may in the future also extend its usefulness to arthroplasty purposes. However, further evaluations need to be done with a greater number of samples to improve the quality and precision of the femur computation in severely arthritic femoral heads.

Conflict of interest statement

None of the authors of this paper have a financial or personal relationship with other people or organisations that could inappropriately influence or bias the content of the paper.

References

1. Bardet JF, Rudy RL, Hohn RB. Measurement of femoral torsion in dogs using a biplanar method. Vet Surg 1983;12:1-6.

2. Montavon PM, Hohn RB, Olmstead ML, Rudy RL. Inclination and anteversion angles of the femoral head and neck in the dog: evaluation of a standard method of measurement. Vet Surg 1985;14:272-282.

3. Tomlinson J, Fox D, Cook JL, et al. Measurement of femoral angles in four dog breeds. Vet Surg 2007;36:593-598.

4. Oxley B, Gemmill TJ, Pink J, et al.
Precision of a novel computed tomographic method for quantification of femoral varus in dogs and an assessment of the effect of femoral malpositioning. Vet Surg 2013;42:751-758.

5. Barnes DM, Anderson AA, Frost, et al. Repeatability and reproducibility of measurements of femoral and tibial alignment using computed tomography multiplanar reconstructions. Vet Surg 2015;44:85-93.

6. Kaiser S, Cornely D, Golder W, et al. The correlation of canine patellar luxation and the anteversion angle as measured using magnetic resonance images. Vet Radiol Ultrasound 2001;42:113-118.

7. Ginja MMD, Ferreira AJA, Jesus SS, et al. Comparison of clinical, radiographic, computed tomographic, and magnetic resonance imaging methods for early prediction of canine hip laxity and dysplasia. Vet Radiol Ultrasound 2009;50:135-143.

8. Jackson GM, Wendelburg KL. Evaluation of the effect of distal femoral elevation on radiographic measurement of the anatomic lateral distal femoral angle. Vet Surg 2012;41:994-1001.

9. Dudley RM, Kowaleski MP, Drost WT, et al. Radiographic and computed tomographic determination of femoral varus and torsion in the dog. Vet Radiol Ultrasound 2006;47:546-552.

10. Yasukawa S, Edamura K, Tanegashima K, et al. Evaluation of bone deformities of the femur, tibia, and patella in Toy Poodles with medial patellar luxation using computed tomography. Vet Comp Orthop Traumatol 2016;29:29-38.

11. Savio G, Baroni T, Concheri G, et al. Computation of femoral canine morphometric parameters in three-dimensional geometrical models. Vet Surg 2016;45:987-995.

12. Longo F, Nicetto T, Banzato T, et al. Automated computation of femoral angles in dogs from three-dimensional computed tomography reconstructions: Comparison with manual techniques. Vet J 2018;232:6-12.

13.
Botsch M, Kobbelt L, Pauly M, et al. Mesh data structures. In: Botsch M, ed. Polygon Mesh Processing, 1st Edn. MA, USA: A.K. Peters, 2010, pp. 21-28.

14. https://www.growshapes.com/store/p137/Open-Technologies-3D-Scanner/Cronos-3D-Dual-3MP.html. Last accessed 12/04/2019.

15. Lopez MJ, Lewis BP, Swaab ME, et al. Relationships among measurements obtained by use of computed tomography and radiography and scores of cartilage microdamage in hip joints with moderate to severe joint laxity of adult dogs. Am J Vet Res 2008;69:362-370.

16. D'Amico LL, Xie L, Abell LK, et al. Relationships of hip joint volume ratios with degrees of joint laxity and degenerative disease from youth to maturity in a canine population predisposed to hip joint osteoarthritis. Am J Vet Res 2011;72:376-383.

17. Palmer RH, Ikuta CL, Cadmus JM. Comparison of femoral angulation measurement between radiographs and anatomic specimens across a broad range of varus conformations. Vet Surg 2011;40:1023-1028.

18. Fahrni S, Campana L, Dominguez A, et al. CT-scan vs. 3D surface scanning of a skull: first considerations regarding reproducibility issues. Forensic Sciences Research 2017;2:99-99.

19. Swiderski JF, Radecki SV, Park RD, et al. Comparison of radiographic and anatomic femoral varus angle measurements in normal dogs. Vet Surg 2008;37:43-48.

20. Gibbons SE, Macias C, Tonzing MA, et al. Patellar luxation in 70 large breed dogs. J Small Anim Pract 2006;47:3-9.

21. Brower BE, Kowaleski MP, Peruski AM, et al. Distal femoral lateral closing wedge osteotomy as a component of comprehensive treatment of medial patellar luxation and distal femoral varus in dogs. Vet Comp Orthop Traumatol 2017;20-27.

22. Roch SP, Gemmill TJ.
Treatment of medial patellar luxation by femoral closing wedge ostectomy using a distal femoral plate in four dogs. J Small Anim Pract 2008;49:152-158.

23. Dobbe J, du Pré KJ, Kloen P, et al. Computer-assisted and patient-specific 3-D planning and evaluation of a single-cut rotational osteotomy for complex long-bone deformity. Medical & Biological Engineering & Computing 2011;49:1363-1370.

24. Milner SA, Davis TR, Muir KR, et al. Long-term outcome after tibial shaft fracture: is malunion important? Journal of Bone and Joint Surgery, American Volume 2002;84A:971-980.

25. Petazzoni M. Radiographic measurements of the femur. In: Petazzoni M, Jaeger GH, eds. Atlas of Clinical Goniometry and Radiographic Measurements of the Canine Pelvic Limb. 2nd Edn. Milano, Italy: Merial, 2008, pp. 34-54.

Table 1. Descriptive statistics measured with both the computed tomography (CT aCAD: tested protocol) and 3D scanner (3D aCAD: gold standard) techniques for each angle.

Technique   Statistic     aLDFA          FNA             FTA
CT aCAD     Mean ± SD     92.51 ± 5.4    125.32 ± 10.2   21.96 ± 7.1
            Median        92.7           127.96          21.58
            IQR           7.7            8.28            8.8
3D aCAD     Mean ± SD     92.55 ± 5.3    124.26 ± 10.8   20.87 ± 6.4
            Median        92.2           126.8           20.2
            IQR           6.95           11.85           6.25

Table 2. Mean difference and P-value of the paired t-test calculated for each angle.

                    aLDFA               FNA                 FTA
                    Normal   Abnormal   Normal   Abnormal   Normal   Abnormal
Mean difference     -0.14°   0.22°      -0.24    -0.24      -0.41    -1.77
± SD                ±1.16    ±0.79      ±0.82    ±3.82      ±0.79    ±3.21
P-value             0.65     0.3        0.29     0.08       0.07     0.05

Figure legends

Fig. 1. 3D computation performed on a stereolithographic file obtained from a computed tomography reconstruction (CT aCAD) of a 2-year-old French Bulldog.
After the 3D computation, the femoral axes appear in the bone model (A). The green line is the femoral head and neck axis (FHNA), the blue lines represent the mechanical axis (MA) and the hip joint orientation line (HJOL), the red line is the proximal femoral long axis (PFLA) and the gold line is the transcondylar axis (TCA). (B) Cranial and caudal aspects of the proximal femoral epiphysis. Notice the fitting of the femoral head and the section of the femoral neck (light blue). (C) Medial-lateral and caudal-cranial views of the femoral condyles. Note the sphere fitting of both condyles (light blue spheres) as well as the green vertices that represent the contact area of the TCA.

Fig. 2. 3D computation performed on a stereolithographic file obtained from a 3D-scanner acquisition (3D aCAD) of a 4-year-old Bernese Mountain Dog (A). The green line is the femoral head and neck axis (FHNA), the blue lines represent the mechanical axis (MA) and the hip joint orientation line (HJOL), the red line is the proximal femoral long axis (PFLA) and the gold line is the transcondylar axis (TCA). (B) Cranial and caudal aspects of the proximal femoral epiphysis. Notice the presence of red vertices outside the femoral head fitting, which represent parts of the acetabulum excluded from the computation. (C) Medio-lateral and caudal-cranial aspects of the distal femoral epiphysis. The TCA, PFLA and MA are visible.

Fig. 3. Cranio-caudal views of four abnormal femurs after importation into Rhinoceros. (A) Right femur of a 12-year-old German Shepherd severely affected by osteoarthritis (OA) of the femoral head. (B) Right femur of a 10-year-old Pug with severe degeneration of the femoral head and neck. (C and D) Left chondrodystrophic femurs affected by mild (C) and severe (D) OA of the distal femoral epiphysis. The dogs were an 8-year-old French Bulldog and a 13-year-old Dachshund.

Fig. 4.
Graphical representation of the regression analysis. Line (A): regression lines for all the femurs assessed, for each angle. The R2 values are > 80% for all three angles. Line (B): regression analysis of group 1 (normal femurs). The R2 values are > 93%, with the FTA measurement being the most accurate angle. Line (C): graphical representation of the regression for group 2 (abnormal femurs). The aLDFA was the most accurate angle (R2 > 93%), while the FTA was the most challenging to measure (R2 > 82%).

Fig. 5. (A) Digital cranio-caudal photograph of the femur specimen of a 12-year-old German Shepherd. (B and C) Cranial and caudal views of the femoral head and neck. The green line is the femoral head and neck axis (FHNA), the blue lines represent the mechanical axis (MA) and the hip joint orientation line (HJOL), and the red line is the proximal femoral long axis (PFLA). Observe that the osteophytes fall outside the green sphere and are not considered for the fitting of the femoral head. (D) Caudal view of the femoral condyles: the MA and the transcondylar axis (gold line) are drawn. (E) Femoral cranio-caudal view after the 3D computation.
work_s34ia3o5jrd4rlfb2ziflrfrbu ----

New estimates of leaf angle distribution from terrestrial LiDAR: Comparison with measured and modelled estimates from nine broadleaf tree species

Matheus Boni Vicari a,⁎,1, Jan Pisek b,1, Mathias Disney a,c

a Department of Geography, University College London, Pearson Building, Gower Street, London, WC1E 6BT, London, UK
b Tartu Observatory, University of Tartu, Observatooriumi 1, Tõravere, 61602, Tartumaa, Estonia
c NERC National Centre for Earth Observation (NCEO), UK

Agricultural and Forest Meteorology 264 (2019) 322-333. https://doi.org/10.1016/j.agrformet.2018.10.021
Received 25 May 2018; received in revised form 10 September 2018; accepted 27 October 2018.
⁎ Corresponding author: matheus.vicari.15@ucl.ac.uk (M.B. Vicari). 1 Joint first authorship.

A R T I C L E  I N F O

Keywords: Leaf angle distribution (LAD); Terrestrial LiDAR scanning (TLS); Digital photography; 3D Monte Carlo ray tracing (MCRT); Radiative transfer

A B S T R A C T

Leaf angle distribution (LAD) is an important property which influences the spectral reflectance and radiation transmission properties of vegetation canopies, and hence interception, absorption and photosynthesis. It is a fundamental parameter of radiative transfer models of vegetation at all scales. Yet, the difficulty of measuring LAD makes it also one of the most poorly characterized parameters: it is typically either assumed to be random, or assumed to follow one of a very small number of parametric 'archetype' functions. Terrestrial LiDAR scanning (TLS) is increasingly being used to measure canopy structure, but LAD estimation from TLS has been limited thus far. We introduce a fast and simple method for extraction of LAD information from terrestrial LiDAR scanning (TLS) point clouds. Here, it is shown that LAD information can be obtained by simply accumulating all valid planes fitted to points in a leaf point cloud.
As points alone do not have any normal vector, subsets of points around each point are used to calculate the normal vectors. Importantly, for the first time we demonstrate the effect of distance on reliable LAD information retrieval with TLS data. We test, validate, and compare the TLS-based method with the established leveled digital photography (LDP) approach. We do this using data both from real trees covering the full range of existing leaf angle distribution types, and from 3D Monte Carlo ray tracing. Crucially, this latter approach allows us to simulate both images and TLS point clouds from the same trees, for which the LAD is known explicitly a priori. This avoids manual assessment of LAD, which, being a difficult and potentially error-prone process, is an additional source of error in assessing the accuracy of LAD extraction methods from TLS or photography. We show that, compared to the LDP measurement technique, TLS is not limited by leaf curvature and, depending on the distance of the TLS from the target, is potentially capable of retrieving leaf angle information from more complex, non-flat leaf surfaces. We demonstrate a possible limitation of TLS measurement techniques for the retrieval of LAD information from more distant canopies, or from taller trees (h > 20 m).

1. Introduction

The leaf angle distribution (LAD) is a key property of vegetation canopies, and is therefore vital for models used to represent and understand the plant canopy processes of photosynthesis, evapotranspiration, radiative transfer (RT), and hence spectral reflectance and absorptance (Warren Wilson, 1959; Lemeur and Blad, 1974; Ross, 1981; Myneni et al., 1989; Asner, 1998; Stuckens et al., 2009). Yet, despite the strong sensitivity of many of these models to variability in LAD, the difficulty in measuring LAD often causes it to be one of the most poorly constrained parameters in structural models of canopy radiative transfer (see e.g. Ollinger, 2011).
As a result, LAD is very often assumed to be random, or spherical, in order to simplify the modelling of RT in vegetation, without clear justification or quantification of the resulting uncertainty. Improving methods for measuring LAD is thus essential for advancing ecological understanding of its role in the biophysical interaction of sunlight with the forest canopy, and of how we can better represent it within canopy models.

Various methods and instruments have been proposed over the years for in situ measurement of leaf inclination angles (e.g., Lang, 1973; Smith and Berry, 1979; Kucharik et al., 1998; Falster and Westoby, 2003; Hosoi and Omasa, 2007; Müller-Linow et al., 2015). However, their wide-spread use has generally been hampered by difficulties in applying them to tall (and closed) canopies, their unsatisfactory measurement reproducibility, or high costs. Ryu et al. (2010) proposed a robust, simple method to identify leaf angles from leveled digital photography (LDP) and to provide reliable LAD information for broadleaf trees.
The LDP method was also found to be comparable to manual clinometer measurements (Pisek et al., 2011). McNeil et al. (2016) successfully implemented the LDP method on images collected with digital cameras mounted onboard unmanned aerial vehicles (UAVs). While this method has been shown to be more efficient than the earlier approaches mentioned above (Zou et al., 2014), the need for manual, non-automated identification of suitable leaves for LAD determination, and the regulatory and piloting challenges with UAVs, still pose drawbacks that hinder more effective and widespread use of the LDP method.

With mm-level ranging accuracy and a fine resolution allowing the capture of very detailed 3D structural information of a canopy, terrestrial LiDAR scanning (TLS) technology might be able to overcome the shortcomings of the conventional means of measuring the LAD profile of a canopy. So far, TLS has been shown to be able to provide more detailed information about tree properties such as diameter at breast height (DBH) (Bauwens et al., 2016), height (Király and Brolly, 2007; Prasad et al., 2016; Palace et al., 2016), aboveground biomass (Calders et al., 2013, 2015; Tanago et al., 2018; Disney et al., 2018) and overall structure (Côté et al., 2012; Raumonen et al., 2013; Hackenberg et al., 2014a,b; Malhi et al., 2018). There have already been several attempts to measure leaf orientation with TLS data. Jin et al. (2015) estimated leaf angles by averaging the normal vectors of clustered leaves and obtained a correlation coefficient of 0.96 in a validation using a single camphorwood tree. Zheng and Moskal (2012) also proposed the use of normal vectors to estimate leaf angle, but calculated them from subsets of points. This latter method was able to predict leaf inclination with R² values of 0.73 and 0.573 for big leaf maple and sugar maple, respectively.
Bailey and Mahaffee (2017) provide another example of an approach using normal vectors, but estimated for surfaces triangulated from the point cloud. Zhao et al. (2015) presented a physical-statistical approach using Maximum Likelihood Estimation to infer leaf properties from TLS data. Although all these methods showed potential, their increased algorithmic complexity and possible sources of uncertainty must be noted: dependence on clustering or triangulation algorithms in preliminary steps (Jin et al., 2015; Bailey and Mahaffee, 2017); dependence on model and parameter selection (Zhao et al., 2015); or sensitivity to noisy data (Bailey and Mahaffee, 2017). Also, only a very limited set of trees, usually of small size, and simple simulations were used to validate these methods. Effects caused by a larger laser footprint, i.e. with taller trees (> 40 m) or longer scanning distances, have also not been tested and have the potential to limit accuracy.

In this work, we introduce a method for the retrieval of LAD information from TLS point clouds that is based on simple assumptions and is fast (i.e. processing times under 2 min per tree on a consumer-grade processor). Importantly, for the first time we demonstrate the effect of TLS distance on the retrieval of accurate LAD information from TLS data. We test, validate, and compare the TLS-based method with the LDP approach of Ryu et al. (2010) using real trees covering the full range of existing leaf angle distribution types. Crucially, we also address one of the key issues that can hinder validating LAD measurements: the difficulty of obtaining accurately known LAD information to compare with values derived from LDP, TLS, etc. Here, we augment the comparison of methods using real trees with very detailed 3D model trees used to simulate LDP and TLS measurements, but with LAD known a priori. This allows us to compare LAD estimates against 'true' values without having to worry about the uncertainty of the true values.

2. Materials and methods

2.1.
Study sites and tree species

We measured leaf inclination angles on four broadleaf tree species at the Royal Botanic Gardens, Kew, UK (51.478 °N, 0.295 °W). We also used model representations of another five tree species used to reconstruct the actual canopy of the Järvselja birch stand in Estonia (58.277 °N, 27.341 °E) (Kuusk et al., 2013) in the fourth phase of the RAdiative transfer Model Intercomparison (RAMI-IV) exercise (Widlowski et al., 2015). The leaf inclination angle measurements at the Royal Botanic Gardens, Kew were made on individual trees with separate tree crowns on 17 October 2017. The sampled tree species included Japanese hop hornbeam (Ostrya japonica), date plum (Diospyros lotus), maidenhair tree (Ginkgo biloba), and Wollemi pine (Wollemia nobilis).

2.2. Terrestrial laser scanner data: simulated and measured

2.2.1. Simulated terrestrial laser scanner point clouds

Synthetic TLS clouds were generated for five 3D scenes, each comprising a single tree model at the origin of an infinite plane. These stands were generated as part of the fourth phase of the Radiative Transfer Model Intercomparison (RAMI) exercise [2], to provide realistic scenes for canopy model benchmarking (Widlowski et al., 2015). Here, we used the RAMI-IV representations of the Järvselja birch stand (summer) [3]. The full RAMI 1 ha scene contains 1029 trees, comprising 18 individual variant trees of 7 species, including: Norway maple (Acer platanoides; RAMI-IV representation code ACPL), common alder (Alnus glutinosa; ALGL3), silver birch (Betula pendula; BEPE2), common ash (Fraxinus excelsior; FREX), and small-leaved lime (Tilia cordata; TICO2). Here, we use a subset of 5 of these trees to simulate TLS point clouds, using the librat Monte Carlo ray tracing (MCRT) library (Fig. 1) (Lewis, 1999; Disney et al., 2006, 2011).
The librat model is one of the two models that provide the 'most credible' full 3D RT model solutions in the RAMI exercise, and these form the basis of the RAMI Online Model Checker (ROMC), a community tool for benchmarking more approximate RT models (Widlowski et al., 2008). Librat has been used to simulate canopy properties and LiDAR point clouds for a number of applications (Disney et al., 2009, 2010; Hackenberg et al., 2014a,b; Woodgate et al., 2016). The leaves of each of the RAMI-IV trees are oriented explicitly for each tree, and so their individual angles are known exactly a priori.

Librat TLS simulations were performed at 120 different locations around each tree, set in a regular grid with 10 m spacing (Fig. 2). As each simulated cloud carries material information, only leaf points were kept in further steps for the LAD estimates. The simulated scan angle resolution was 0.04°, resulting in point clouds with point density and resolution similar to the measured point clouds. In order to avoid introducing uncertainties from partial returns, the beam divergence was made infinitely small. Although this is not realistic, the fact that our field data are also filtered to reduce the number of points resulting from partial hits makes this assumption less important for the validation of our method. For each tree, we used combinations of 4 point clouds located 90° apart at the same 10 m distance intervals (Fig. 2 and Fig. 3A).

2.2.2. Measured TLS data

Real tree LiDAR data were obtained at Kew using a Riegl VZ-400 laser scanner (RIEGL Laser Measurement Systems GmbH, Horn, Austria). This scanner has a range of close to 700 m, a wavelength of 1550 nm and a beam divergence of 0.35 mrad; in this study an angular resolution of 0.04° was used. The scanner was supported by a tripod at 1.5 m above the ground and placed in 4 different locations, approximately 90° apart and around 5 m from each tree.
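For intuition, the chosen angular resolution can be translated into an expected sample spacing on a target. The short sketch below is our own illustration, not part of the authors' processing chain; it simply applies the small-angle geometry d·tan(Δθ) at the scanning distances used in this study.

```python
import math

def sample_spacing_mm(distance_m, angular_step_deg=0.04):
    """Approximate spacing between adjacent beam centers on a flat,
    scanner-facing target at a given range (small-angle geometry)."""
    return 1000.0 * distance_m * math.tan(math.radians(angular_step_deg))

# at the ~5 m field scanning distance, and at the simulated 10-50 m range
for d in (5, 10, 50):
    print(f"{d:>2} m -> {sample_spacing_mm(d):.1f} mm between samples")
```

At roughly 5 m this gives about 3.5 mm between samples, i.e. many returns per leaf, which is what makes per-point plane fitting feasible at close range.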
A set of 5 cylindrical reflectors supported by garden poles was placed around each tree to assist the co-registration of the 4 separate point clouds.

Footnotes: [2] http://rami-benchmark.jrc.ec.europa.eu/HTML/ ; [3] http://rami-benchmark.jrc.ec.europa.eu/HTML/RAMI-IV/EXPERIMENTS4/ACTUAL_CANOPIES/JARVSELJA_SUMMER_BIRCHSTAND/JARVSELJA_SUMMER_BIRCHSTAND.php

Scans were co-registered and filtered by pulse shape deviation, to remove "noisy" data, i.e. partial laser hits (Pfennigbauer and Ullrich, 2010), using RiSCAN Pro (RIEGL Laser Measurement Systems GmbH, Horn, Austria). Trees were manually extracted from the registered point clouds in CloudCompare (Fig. 3B). Materials were separated using the TLSeparation Python package (Vicari, 2017) and only leaf points were kept.

2.2.3. Leaf inclination angle retrieval from TLS data

Our algorithm (Fig. 4) assumes that the LAD can be obtained by simply accumulating all valid planes fitted to points in a leaf point cloud. As points alone do not have normal vectors, subsets of points around each point are used to calculate them. The LAD algorithm starts by performing a nearest-neighbors search around each point in the leaf cloud, using a fixed number of neighbors (kNN) for each point neighborhood. kNN values were tested in a sensitivity analysis (Section 2.2.4) and a value of 10 points was selected for our point clouds.
The value of this parameter is a compromise: the neighborhood should be small enough to reduce the occurrence of angles calculated from more than a single leaf, yet contain enough points to minimize the impact of noise in the data. We opted for a fixed kNN value in our method in order to speed up processing while remaining robust (see Section 2.2.4). Even though a fixed kNN setting might generate neighborhoods of variable physical size, the pre-processing steps and the further filtering of neighborhoods (see below) help to mitigate any negative impact of this parameter on the angle estimates.

Next, Singular Value Decomposition (SVD) is used to perform a surface regression (plane fitting) on each subset of points (Fig. 5). Through SVD, a set of eigenvalues and eigenvectors is calculated for each point. In a 3D space, the eigenvector associated with the smallest of the three eigenvalues is equivalent to the normal vector of the fitted plane (Mandel, 1982; Klasing, 2009). As normal vectors calculated using SVD are susceptible to outliers, a filtering step ensures that only normal vectors from points that are close enough to a plane are kept: the smallest eigenvalue is a direct expression of how close the set of points is to a plane. To standardize all eigenvalues into a common range (0 to 1), eigenvalue ratios were calculated for each point by dividing each eigenvalue by the sum of all three eigenvalues.

Fig. 1. Example of simulated RAMI-IV tree representations; A – Acer platanoides (ACPL); B – Betula pendula (BEPE2).

Fig. 2. Locations of the simulated scans selected to validate our TLS method.

Fig. 3. Examples of a simulated point cloud, Norway maple (Acer platanoides; ACPL) – A; and a scanned point cloud, Wollemi pine (Wollemia nobilis) – B.

Fig. 4. LAD retrieval algorithm from TLS data. SVD refers to Singular Value Decomposition.
The sensitivity analysis was also used to select this parameter's value; therefore, only points with a third eigenvalue ratio lower than 0.1 were kept (Fig. 6). This filtering step ensures that points in intersecting regions, or with a strong presence of noise, are not used in the LAD estimation. Filtered vectors $\vec{N}$ pointing downwards, i.e. with a negative z-axis component, were inverted by multiplying them by -1:

$\vec{N}' = \begin{cases} \vec{N}, & N_z \geq 0 \\ \vec{N} \cdot (-1), & N_z < 0 \end{cases}$    (1)

The inclination angle $\alpha_i$ for each point is obtained by calculating the angle between the corrected normal vector $\vec{N}'_i$ and a zenith vector $\vec{z}$ defined as (0, 0, 1); a final multiplication by $180/\pi$ converts the value to degrees:

$\alpha_i = \cos^{-1}\left( \frac{\vec{z} \cdot \vec{N}'_i}{\lVert \vec{z} \rVert \, \lVert \vec{N}'_i \rVert} \right) \cdot \frac{180}{\pi}$    (2)

The algorithm produces one inclination angle value for each valid plane in the leaf point cloud (Fig. 7). This means that every leaf will produce multiple inclination angles, one for each valid plane patch. To generate the final LAD, the point-wise angles are aggregated (Fig. 8D). In order to better understand how robust our TLS method is to different input parameters, we performed a sensitivity analysis using the simulated point clouds. Also, as an example of a possible application of our TLS method, Fig. 8E shows azimuth angles for different height slices along the tree height.

2.2.4. Sensitivity analysis of LAD estimates from simulated data

The sensitivity analysis was performed using the same simulated point clouds as in the LAD comparison. A set of values for kNN (5, 10, 15, 20, 25, 50, 100) and for the eigenvalue ratio threshold (0.05, 0.1, 0.15, 0.2, 0.25, 0.3, 0.5) were combined iteratively and used to process every tree point cloud from the different scanning distances, generating 1225 estimates in total. The resulting distributions were binned in intervals of 5° and compared against their respective reference LADs using the Mean Absolute Error (MAE).
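To make the processing chain of Sections 2.2.3-2.2.4 concrete, a minimal sketch is given below. This is our own reimplementation for illustration, not the authors' TLSeparation code, and the function names are ours. It assumes a NumPy array of leaf-only points and follows the same steps: fixed-kNN neighborhoods, SVD plane fitting, eigenvalue-ratio filtering, normal correction (Eq. (1)), angle to the zenith (Eq. (2)), and aggregation into 5° bins.

```python
import numpy as np
from scipy.spatial import cKDTree

def leaf_inclination_angles(points, knn=10, ratio_threshold=0.1):
    """Point-wise leaf inclination angles (degrees) from an (n, 3) array
    of leaf-only TLS points."""
    # 1. fixed-size neighborhood around every point (the point itself included)
    _, idx = cKDTree(points).query(points, k=knn)
    nbrs = points[idx]                                  # (n, knn, 3)
    centered = nbrs - nbrs.mean(axis=1, keepdims=True)

    # 2. SVD plane fit: the right-singular vector paired with the smallest
    #    singular value is the normal of each neighborhood's best-fit plane
    _, s, vt = np.linalg.svd(centered, full_matrices=False)
    normals = vt[:, -1, :]                              # (n, 3), unit length
    eigvals = s ** 2                                    # sorted descending

    # 3. keep near-planar neighborhoods only: smallest eigenvalue ratio < threshold
    ratio = eigvals[:, -1] / eigvals.sum(axis=1)
    normals = normals[ratio < ratio_threshold]

    # 4. Eq. (1): flip normals pointing downwards (negative z component)
    normals[normals[:, 2] < 0] *= -1.0

    # 5. Eq. (2): angle between each normal and the zenith vector (0, 0, 1);
    #    normals are unit vectors, so the dot product is just the z component
    return np.degrees(np.arccos(np.clip(normals[:, 2], -1.0, 1.0)))

def lad(angles_deg, bin_width=5.0):
    """Aggregate point-wise angles into a normalized LAD with 5-degree bins."""
    counts, edges = np.histogram(
        angles_deg, bins=np.arange(0.0, 90.0 + bin_width, bin_width))
    return counts / counts.sum(), edges
```

As a sanity check, points sampled from a plane tilted 30° from the horizontal yield point-wise angles of approximately 30° throughout, and the resulting LAD concentrates in the corresponding bins.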
We also investigated the impact of scanning distance on the TLS LAD estimates by calculating the difference between each tree's LAD at scanning distances of 10 and 50 m. The difference was calculated, using MAE, for all simulated point clouds except those from tree model FREX. Point clouds from this tree were not considered in the distance analysis because its leaf model employs 11 leaves with different areas, and so could not be assessed in the same way as the other trees.

A validation test was also developed to evaluate the TLS LAD accuracy for different leaf curvatures. We used single leaves from the same leaf models as the four RAMI-IV trees used in our simulated dataset. A set of point clouds was simulated for each leaf, varying the point density and leaf curvature, as well as the kNN values used to estimate the leaf angles. Point-wise angles were estimated for each leaf point cloud and aggregated into LADs. Each LAD was quantitatively compared (MAE) to the expected LAD of its respective simulated curved leaf. In this validation, leaf curvature was defined in terms of the fraction of leaf area that covers a sphere with known radius. For a flat leaf, the radius is infinite and so the curvature is taken as 0%. A completely curved leaf, i.e. 100%, covers a sphere such that its extremities touch each other.

2.3. Simulated digital photography images

Simulated LDP images of the RAMI-IV trees were also generated using the librat MCRT model (Fig. 1). Images were simulated at an equivalent resolution of 1 MP, i.e. an image size of 1024 × 1024 pixels, from a distance of 1.5 m from the trees in each case. Images were simulated from 4 locations around each tree for every 1 m of live crown, and with 1 ray per pixel. This means that the total number of images varied from 24 (TICO2) to 48 (ALGL3). Librat image simulations have been used for various modelling applications, particularly for comparison with other

Fig. 5.
Example of a set of eigenvectors calculated for a random set of points that represent a plane. Sub-indices of e represent the eigenvectors relative to the eigenvalues, from largest (0) to smallest (2). In this example, e2 represents the normal vector of a plane fitted to all points shown.

Fig. 6. Example of normal vector filtering by the eigenvalue ratio threshold. Vectors relative to points with a third eigenvalue ratio higher than 0.1 were removed. To define the local neighborhoods and calculate the eigenvalues, 10 points around each point shown in this figure were selected (kNN = 10).

canopy measurement properties such as DHP estimates of gap fraction and LAI, as well as with field-measured properties (Disney et al., 2011; Woodgate et al., 2016; Origo et al., 2017).

2.4. Leveled digital photography (LDP) measurements

We used a leveled digital camera approach (Ryu et al., 2010) to measure leaf inclination angles and validate the results obtained with TLS data (Section 2.2.3). Besides the simulated images of the RAMI-IV tree representations described in Section 2.2.1, we took a series of leveled digital images of the tree crowns of four tree species at Kew during calm conditions (to prevent wind effects on leaves; Tadrist et al., 2014), along their full vertical profile. We compared two approaches to obtaining leveled digital photography: a Nikon CoolPix 4500 digital camera (4 MP), leveled and tripod-mounted, and images taken with a hand-balanced (i.e. leveled by the observer's judgment) Sony Xperia Z5 Compact phone equipped with a 23 MP 1/2.3-inch multi-aspect BSI CMOS sensor paired with an F2.0 lens. Neither lens was evaluated for distortions.
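Once a suitable leaf is identified in a leveled image, reading off its inclination reduces to simple image trigonometry: the angle between the leaf's side profile and the image horizontal. The sketch below is our own illustration of that geometry; the actual measurements in this study were made interactively in ImageJ.

```python
import math

def leaf_angle_from_image(p_base, p_tip):
    """Inclination (degrees) of a leaf seen edge-on in a leveled photograph,
    from two pixel coordinates along the leaf's side profile.
    Pixel convention: x grows to the right, y grows downward."""
    dx = p_tip[0] - p_base[0]
    dy = p_base[1] - p_tip[1]          # flip y so that positive means upward
    return abs(math.degrees(math.atan2(dy, dx)))

# a leaf profile rising 30 px over 52 px horizontally -> roughly 30 degrees
print(leaf_angle_from_image((100, 200), (152, 170)))
```

A perfectly horizontal leaf profile gives 0° and a vertical one 90°, matching the 0-90° inclination range used throughout the paper.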
Next, both the simulated images of the RAMI-IV tree representations and the photos obtained at Kew were visually inspected for the presence of leaves with their surfaces oriented approximately perpendicular to the viewing direction of the digital camera (Fig. 9). Inclination angles of suitable leaves were measured using the public domain image processing software ImageJ (http://imagej.nih.gov/ij/). It has been suggested that hundreds of leaves should be measured to obtain an accurate representation of the leaf inclination angles (Kucharik et al., 1998). However, a more recent study suggests that around 75 leaves may be sufficient to obtain a representative leaf inclination angle distribution at the single-crown level (Pisek et al., 2013). In the current work, approximately 100 leaves were measured whenever possible. Tables 1 and 2 provide the exact number of leaves collected for each species in this study.

Fig. 7. Normal vectors calculated for a single leaf from the field-scanned Diospyros lotus point cloud (A) and from two neighboring leaves from the TICO2 simulated point cloud (B). The respective leaf angle distributions calculated from the point-wise angles are shown under each set of points.

Fig. 8. Diospyros lotus TLS data and results. (a) Extracted point cloud. (b) Separated leaf points. (c) Examples of normal vectors (1000 vectors randomly sampled for visualization purposes only, out of around 350 thousand). (d) Point-wise leaf angle. (e) Azimuth leaf angle distribution over height slices.

2.5. Leaf inclination angle distribution and G-function

We estimated the leaf inclination angle distribution assuming a uniform distribution of leaf azimuth angles and leaf inclination angles independent of leaf size.
The measured leaf inclination angles were fitted with the two-parameter Beta distribution (Goel and Strebel, 1984), which was shown by Wang et al. (2007) to be the best suited for describing the probability density of $\theta_L$:

$f(t) = \frac{1}{B(\mu, \nu)} (1 - t)^{\mu - 1} t^{\nu - 1}$    (3)

where $t = 2\theta_L/\pi$. The Beta function $B(\mu, \nu)$ is defined as:

$B(\mu, \nu) = \int_0^1 x^{\mu - 1} (1 - x)^{\nu - 1} \, dx = \frac{\Gamma(\mu)\Gamma(\nu)}{\Gamma(\mu + \nu)}$    (4)

where $\Gamma$ is the Gamma function and $\mu$ and $\nu$ are two parameters calculated as:

$\mu = (1 - \bar{t}) \left( \frac{\sigma_0^2}{\sigma_t^2} - 1 \right)$    (5)

$\nu = \bar{t} \left( \frac{\sigma_0^2}{\sigma_t^2} - 1 \right)$    (6)

where $\sigma_0^2$ is the maximum standard deviation with expected mean $\bar{t}$ ($\sigma_0^2 = \bar{t}(1 - \bar{t})$) and $\sigma_t^2$ is the variance of $t$ (Wang et al., 2007).

Following Goel (1988), leaf inclination angle distributions can be described using six parametric 'archetype' functions based on empirical evidence of the natural variation of leaf normal distributions: spherical, uniform, planophile, plagiophile, erectophile and extremophile. For spherical canopies, the relative frequency of leaf inclination angles is the same as the relative frequency of the inclinations of the surface elements of a sphere; for uniform canopies, the proportion of leaf inclination angles is the same at any angle; planophile canopies are characterized by a predominance of horizontally oriented leaves; plagiophile canopies are dominated by inclined leaves; erectophile canopies are dominated by vertically oriented leaves; and extremophile canopies by high frequencies of both horizontally and vertically oriented leaves (Lemeur and Blad, 1974). As these classical distributions are widely used and easier to interpret than the parameter values of the Beta distribution, we classified all estimated leaf inclination angle distributions by finding the closest archetype distribution.
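The moment-based fit of Eqs. (5)-(6) and the closest-archetype classification can be sketched as follows. This is our own illustration using SciPy; the closed-form archetype densities, e.g. f(θ) = (2/π)(1 + cos 2θ) for planophile and f(θ) = sin θ for spherical, are the standard de Wit forms and are an assumption of this sketch, as the text does not list them explicitly.

```python
import numpy as np
from scipy.stats import beta as beta_dist

def fit_beta(angles_deg):
    """Moment-based estimates of the Beta parameters (mu, nu), Eqs. (5)-(6),
    with t = 2*theta_L/pi and sigma_0^2 = tbar * (1 - tbar)."""
    t = np.radians(np.asarray(angles_deg)) * 2.0 / np.pi
    tbar, var_t = t.mean(), t.var()
    k = tbar * (1.0 - tbar) / var_t - 1.0      # sigma_0^2 / sigma_t^2 - 1
    return (1.0 - tbar) * k, tbar * k          # mu, nu

# standard de Wit archetype densities over theta in [0, pi/2] (our assumption)
ARCHETYPES = {
    "planophile":   lambda th: (2.0 / np.pi) * (1.0 + np.cos(2.0 * th)),
    "erectophile":  lambda th: (2.0 / np.pi) * (1.0 - np.cos(2.0 * th)),
    "plagiophile":  lambda th: (2.0 / np.pi) * (1.0 - np.cos(4.0 * th)),
    "extremophile": lambda th: (2.0 / np.pi) * (1.0 + np.cos(4.0 * th)),
    "uniform":      lambda th: np.full_like(th, 2.0 / np.pi),
    "spherical":    np.sin,
}

def closest_archetype(mu, nu, n=2000):
    """Classify a fitted Beta LAD via the inclination index chi of Eq. (7),
    approximated with a midpoint Riemann sum (this avoids the interval
    endpoints, where the Beta density may diverge for mu or nu below 1)."""
    step = (np.pi / 2.0) / n
    th = np.arange(n) * step + step / 2.0
    # density of theta_L implied by the Beta density of t; note that SciPy's
    # pdf(x; a, b) ~ x**(a-1) * (1-x)**(b-1), hence a = nu and b = mu here
    f_l = beta_dist.pdf(th * 2.0 / np.pi, nu, mu) * 2.0 / np.pi
    chi = {name: np.abs(f_l - f(th)).sum() * step
           for name, f in ARCHETYPES.items()}
    return min(chi, key=chi.get)
```

For example, a large sample drawn from a spherical distribution yields μ ≈ 1.1 and ν ≈ 1.9 and is classified as spherical, consistent with the observation of Wang et al. (2007) that the Beta distribution approximates the classical archetypes well.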
For each measured distribution, the deviation from the distributions suggested by de Wit ($f_{deWit}$) was quantified using a modified version of the inclination index provided by Ross (1975):

$\chi = \int_0^{\pi/2} \left| f_L(\theta_L) - f_{deWit}(\theta_L) \right| \, d\theta_L$    (7)

3. Results and discussion

3.1. LAD retrieval from TLS and LDP: simulated tree representations

It should be noted that the simulated leaves had no curvature, which removed one source of uncertainty for estimating the leaf angles, for the LDP measurement technique in particular. Overall, the agreement between the TLS- and LDP-based measurement techniques using the 3D tree simulations yielded R² values between 0.45 (ALGL3; Fig. 10B) and 0.8 (ACPL; Fig. 10A) for the actual measurements. Importantly, the simulated trees covered the full range of possible leaf angle probability density functions (PDFs), and both approaches agreed on the assigned de Wit (1965) type, except for the ACPL tree representation (LDP – spherical; TLS – erectophile) (Table 1). Even in this case, the difference in the mean values between the two approaches was less than 6°, which is within the previously identified uncertainty of the LDP measurement technique (Raabe et al., 2015). The TLS-based PDFs agreed to within 84% with the prescribed leaf element frequencies of the RAMI-IV trees (Fig. 10). The agreement between the prescribed and retrieved PDFs was linked to the foliage density. The ALGL3 and BEPE2 RAMI-IV trees had very dense foliage, which can obscure and make it more

Fig. 9. A schematic diagram of the protocol used to measure leaf inclination angle from leveled digital photography (illustrated on Diospyros lotus).

Table 1. Statistical moments (number of observations, mean, standard deviation, and the two parameters μ, ν) of the leaf angle measurements, and the LAD function type (after de Wit, 1965), from the LDP and TLS approaches for the simulated RAMI-IV tree representations.
PL – planophile, PG – plagiophile, U – uniform, S – spherical, E – erectophile.

                LDP                                  TLS
Tree     n     Mean   S.D.   μ     ν     Type   n        Mean   S.D.   μ     ν     Type
ACPL     279   55.71  24.80  0.80  1.30  S      177342   61.42  21.11  0.93  2.01  E
ALGL3    183   48.66  23.55  1.21  1.42  U      164372   42.98  21.32  1.80  1.65  U
BEPE2    224   40.26  18.24  2.77  2.25  PG     73685    42.54  19.18  2.37  2.12  PG
FREX     152   31.21  21.49  1.94  1.03  PL     252275   33.62  23.18  1.58  0.94  PL
TICO2    116   40.83  17.84  2.90  2.41  PG     228446   46.67  18.90  2.24  2.42  PG

Table 2. Optimal parameter values for the TLS LAD estimates, detected by the sensitivity analysis using simulated data. MAE stands for Mean Absolute Error.

Distance (m)   kNN   Eigenvalue ratio threshold   MAE mean   MAE std
10             10    0.15                         0.013      0.008
20             10    0.10                         0.011      0.007
30             5     0.10                         0.012      0.008
40             5     0.10                         0.015      0.009
50             10    0.15                         0.018      0.012

challenging to sample leaves located deeper within the tree crowns. The dense foliage of these trees, and the way individual leaves were represented in the simulated images, with sometimes indistinct outlines (see Fig. 1B for an example), posed additional challenges to identifying suitable leaves and to sampling the whole crown space evenly with the LDP measurement approach. This was mainly due to the decreased contrast between the potential target and the background (Fig. 1B). Still, both the TLS and LDP measurement techniques were able to provide PDFs that correctly approximated even the bi-modal PDF prescribed for the BEPE2 tree simulation (Fig. 10C). Fig. 10 also illustrates the agreement between the PDFs and the fitted Beta distributions. This is similar to the results of Wang et al. (2007), who evaluated the two-parameter Beta distribution as a more consistent and robust predictor than other approaches. Fig. 11 demonstrates the effect of TLS distance on the LAD information retrieval.
There is a tendency towards the retrieval of more vertical PDFs by TLS with increasing distance of the sensor from the target. If the LAD of a given target can be described as spherical or erectophile, our simulation results indicate that TLS can provide good quality information even at a distance of 50 m (Fig. 11A). However, the results get progressively worse with distance if the 'true' LAD is more horizontally oriented (Fig. 11B–E). For the planophile case (FREX; Fig. 11E), the TLS-derived LAD shifts to a different type (uniform) already at a distance of 20 m. We found that leaf size/area plays an important role in this change of LAD across scanning distances. In fact, there is a linear relationship (R² = 0.99) between the individual leaf areas, provided in the reference data for each tree model (Widlowski et al., 2015), and the variation of each tree's LAD over distance. In this case the MAE is inversely proportional to leaf area, which means that LAD estimates from small leaves degrade more strongly with scanning distance. Variations in scanning distance also affect the capability of a LiDAR

Fig. 10. Frequency and fitted Beta distributions of leaf angle for the simulated tree representations of the RAMI-IV Järvselja birch stand. Differences in the distributions between the LDP (black), TLS (blue) and RAMI-IV (red) representations, as tested by a Chi-square two-sample test, were non-significant at p < 0.05 for all measured species.

Fig. 11. Effect of varying TLS-target distance on the LAD retrieval from TLS for the selected RAMI-IV tree representations.

Fig. 12. Box whisker plot showing the Mean Absolute Error (MAE) of the TLS LAD estimation method over a range of kNN and eigenvalue ratio threshold values.
Results were aggregated from all simulated point clouds used in the validation of our TLS method. The box dimensions show the quartiles from 25% to 75% of MAE, the center line represents the median MAE and the whiskers show the minimum and maximum MAE.

scanner to detect and resolve leaves of different inclination angles. For a hypothetical leaf 10 m above the ground, the laser beam will have an inclination angle of 45° with the scanner at 10 m from the tree, and of 11.3° at 50 m from the tree. This means that leaves with a low inclination angle, i.e. horizontal leaves, will present a considerably smaller projected area to the laser sensor. Also, as the scan angular resolution was the same for all simulations, the point density (number of points per unit area) scaled down with the area of the scanned hemisphere. In the case of our simulated point clouds, the point density was reduced by an average of 86% (± 8%) when changing the scanning distance from 10 m to 50 m. The combination of these factors means that, with increasing distance from the tree, leaves with lower inclination angles are less likely to be intersected by a laser beam, which helps to explain why there is a shift towards erectophile LADs at longer scanning distances. The lower individual leaf area of ALGL3 and BEPE2 is a contributing factor that makes this effect even more pronounced for these trees.

3.2. Sensitivity analysis of LAD retrieval from TLS

We used the simulated point clouds to perform a sensitivity analysis of our TLS method over a range of kNN and eigenvalue ratio threshold values. The results of the sensitivity tests (Fig. 12) show that our TLS method is able to estimate LAD with an average MAE of 0.018 (standard deviation of 0.012). Fig. 12 also helps in understanding how changes in each parameter impact the accuracy of the LAD estimates. The results suggest that lower values for kNN, e.g.
5 or 10, and filtering thresholds above 0.1, are optimal. However, we note that the assessment of the impact of threshold values (Fig. 12b) might have been limited by the lack of noise and overlapping leaves in our simulated data. A set of optimal parameter values was also generated for each scanning distance (Table 2), and a comparison of the LAD estimates using these parameters against the RAMI-IV reference data is shown in Fig. 13.

The curvature analysis showed that the TLS method is able to predict LAD from curved leaves with an MAE lower than 0.11 in all cases (Fig. 14). The curvature validation suggests that our TLS method can accurately predict LAD from curved leaves, and that point density is a major constraining factor in the accuracy of the LAD estimates. For comparison, the point density of a single scan using an angular step resolution of 0.04° is around 20,000 points/m² at 10 m and 1000 points/m² at 50 m. Fig. 8 shows that even for a hypothetical completely curved leaf point cloud, with a point density equivalent to a 50 m single scan, the MAE of a TLS LAD is still lower than 0.11. In tests with higher point densities the MAE drops below 0.04, and for a completely flat leaf the MAE is lower than 0.001 overall. However, we note that these were hypothetical point clouds with no noise, occlusion or overlapping leaves. In actual point clouds the MAE is expected to be higher, especially for leaves scanned from longer distances (e.g. > 30 m). The impact of point density also suggests that the use of multiple scans should improve the accuracy of LAD estimates.

3.3. LAD retrieval from TLS and LDP: real trees

Next, we compared the LDP and TLS measurement techniques using real trees. Similarly to the model tree representations described above, the pool of sampled real trees at Kew contained a wide range of LAD types, from broadly plagiophile (Diospyros lotus) to erectophile (Ostrya japonica).
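As a back-of-envelope check on the scanning-geometry figures quoted in Section 3.2 (the 45° vs. 11.3° beam inclinations and the approximate single-scan point densities), the sketch below reproduces them from first principles. This is our own illustration, under the idealised assumptions implied by the text: a scanner at ground level and one return per angular step on a flat, scanner-facing surface.

```python
import math

ANGULAR_STEP_DEG = 0.04   # scan resolution used throughout the study

def beam_inclination_deg(leaf_height_m, scan_distance_m):
    """Inclination of the beam reaching a leaf at a given height,
    for a scanner at ground level (idealised geometry)."""
    return math.degrees(math.atan(leaf_height_m / scan_distance_m))

def point_density_per_m2(scan_distance_m, step_deg=ANGULAR_STEP_DEG):
    """Idealised single-scan point density on a scanner-facing surface:
    one return per square patch of side d * delta_theta."""
    side = scan_distance_m * math.radians(step_deg)
    return 1.0 / side ** 2

print(beam_inclination_deg(10, 10))   # 45.0
print(beam_inclination_deg(10, 50))   # ~11.3
print(point_density_per_m2(10))       # ~20,500 pts/m2 (text quotes ~20,000)
print(point_density_per_m2(50))       # ~820 pts/m2 (text quotes ~1000)
```

Note that this idealised 1/d² scaling implies a 96% density reduction from 10 m to 50 m; the 86% (± 8%) measured on the simulated clouds is lower because leaf orientation and occlusion within the crown modify the idealised flat-target picture.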
Very similar results (fitted Beta distributions) can be obtained using different LDP measurement techniques (images taken with a digital camera mounted on a tripod or with a hand-balanced smartphone; Fig. 15). Compared to the tripod-based camera approach, smartphone-based acquisition is up to ten times faster and more flexible, and it is encouraging that these advantages are not offset by reduced accuracy, provided a large enough sample pool is collected (Table 3). The agreement between the TLS and LDP measurement techniques was lower than that obtained from the simulated tree representations. Compared to the simulated cases, the sampled trees at Kew possessed leaves with various curvatures, and the effect of leaf curvature was clearly captured in the agreement between the TLS and LDP measurement techniques for a given species. The agreement between the fitted Beta distributions for the TLS and LDP approaches for Wollemia nobilis (Fig. 15) was 97%. Wollemia nobilis possesses relatively short leaves with no transverse curvature. The sampled Diospyros lotus tree had leaves with various degrees of curvature, and the bent leaves of Ginkgo biloba were the most challenging for the LDP measurement technique, which relies on identifying rather flat leaves oriented approximately perpendicular to the viewing direction of the digital camera/lens. This challenge was subsequently reflected in the greater difference between both the retrieved PDFs and the fitted Beta distributions for Diospyros lotus and Ginkgo biloba, compared to Wollemia nobilis (Fig. 15). However, it is notable that the LDP and TLS measurement techniques agreed in the assigned de Wit type in all cases bar one (Table 3). The only exception was Ginkgo biloba, where LDP assigned a spherical and TLS an erectophile LAD type. This is similar to the ACPL case from the simulated trees described above in Section 3.1.
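The two-parameter Beta distribution fits reported in Table 3 can be reproduced from the first two moments of the measured angles. A moment-matching sketch is shown below (after Goel and Strebel, 1984; whether the authors fitted by moments or by maximum likelihood is not stated, so this is one standard approach, not necessarily theirs), using the Wollemia nobilis TLS moments from Table 3 as input.

```python
def beta_parameters(mean_deg, sd_deg, max_angle=90.0):
    """Moment-matched parameters (mu, nu) of the two-parameter Beta
    distribution of leaf inclination angle (Goel and Strebel, 1984).

    t = angle / 90 is assumed Beta-distributed with density
    f(t) proportional to (1 - t)**(mu - 1) * t**(nu - 1).
    """
    t_mean = mean_deg / max_angle
    t_var = (sd_deg / max_angle) ** 2
    # Common factor: sigma_0^2 / sigma_t^2 - 1, with sigma_0^2 = t(1 - t)
    k = t_mean * (1.0 - t_mean) / t_var - 1.0
    return (1.0 - t_mean) * k, t_mean * k

# Wollemia nobilis, TLS column of Table 3: mean 41.25 deg, S.D. 21.82 deg
mu, nu = beta_parameters(41.25, 21.82)   # close to the tabulated 1.75, 1.48
```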
It should be noted that the resulting leaf projection or G function (Ross, 1981), which describes the projection of unit foliage area on the plane perpendicular to the view direction (Myneni et al., 1989; Ross, 1981), is rather similar between spherical and erectophile canopies (Ryu et al., 2010; Pisek et al., 2013).

Fig. 13. 1:1 comparison of simulated-tree TLS LAD estimates, grouped by tree. Solid lines represent linear regressions fitted to each tree's LAD comparison scatter points. Shaded areas represent a 95% confidence interval for each regression.

Fig. 14. Box whisker plot showing the Mean Absolute Error (MAE) of the TLS LAD estimation method for different leaf curvatures and point densities. Results were aggregated for all hypothetical leaves used in the curvature tests. The box dimensions show the quartiles from 25% to 75% of MAE, the center line represents the median MAE and the whiskers show the minimum and maximum MAE.

3.4. Implications for the future potential of TLS to retrieve LAD information

Our results are very encouraging with respect to using our proposed TLS measurement technique for the retrieval of LAD information. At the same time, we also discuss the potential limits of the method. The validation of previously proposed methods to retrieve LAD information from TLS data (e.g. Zheng and Moskal, 2012; Jin et al., 2015; Bailey and Mahaffee, 2017) was done with smaller, isolated trees or shrubs, with the TLS positioned close to the targets. Here, for the first time, we demonstrate the possible limitations of TLS measurement techniques for the retrieval of LAD information for more distant canopies, or for taller trees (h > 20 m). Compared to the LDP measurement technique, TLS is not limited by leaf curvature and, depending on the distance, might even be capable of retrieving leaf angle information from more complex leaf surfaces.
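The similarity of G for spherical and erectophile canopies noted above can be checked numerically. The sketch below integrates the standard projection kernel (e.g. Ross, 1981; Wang et al., 2007) against any inclination density g; for a spherical LAD it recovers the well-known constant G = 0.5 at every view angle, while an erectophile LAD gives G below 0.5 near nadir and above 0.5 near the horizon.

```python
import math

def projection_kernel(theta, tl):
    """Phi(theta, tl): projection of unit leaf area inclined at tl onto the
    plane perpendicular to a view direction at zenith theta (both radians,
    in (0, pi/2)). Standard kernel after Ross (1981)."""
    cotcot = (math.cos(theta) / math.sin(theta)) * (math.cos(tl) / math.sin(tl))
    if cotcot >= 1.0:
        return math.cos(theta) * math.cos(tl)
    x = math.acos(cotcot)
    return math.cos(theta) * math.cos(tl) * (1.0 + (2.0 / math.pi) * (math.tan(x) - x))

def G(view_zenith_deg, g, n=4000):
    """G(theta) = integral of Phi(theta, tl) * g(tl) over tl in (0, pi/2),
    by the midpoint rule (midpoints avoid the singular endpoints)."""
    theta = math.radians(view_zenith_deg)
    dtl = (math.pi / 2.0) / n
    return sum(projection_kernel(theta, (i + 0.5) * dtl) * g((i + 0.5) * dtl)
               for i in range(n)) * dtl

spherical = math.sin                                  # g(tl) = sin(tl)
erectophile = lambda tl: 2.0 * (1.0 - math.cos(2.0 * tl)) / math.pi
```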
Consequently, TLS might provide more accurate information about LAD than LDP in these cases, albeit the assigned de Wit type may well still be the same for LDP and TLS (Table 1). TLS might also be applicable in cases where LDP is not an option (severely bent or twisted leaves). In this way, TLS measurements might allow us to greatly extend the pool of plant species for which LAD information can be retrieved, particularly for taller trees, which is crucial for correct RT modelling in vegetation canopies (Govind et al., 2013). It remains to be seen whether TLS can also provide good quality information about needleleaf canopies. Our TLS method can also provide further information about leaf angles, such as 3D partitioning of the leaf points, which could give further insight into how leaf angle changes with height or direction (Fig. 8E). An example of inclination and azimuth angle partitioning across crown height is shown in Fig. 16, which shows not only how leaf angles vary between partitions but also how much each partition contributes to the total amount of leaf material. We note that azimuth angle estimates are not within the scope of this paper; this information is provided only as an example of possible outcomes of our TLS leaf angle method. One final issue to consider is that TLS instruments are currently, in general, much more expensive than cameras, and their operating characteristics are also more variable (e.g. time-of-flight vs. phase shift, beam divergence, angular resolution, etc.). Consequently, point clouds from different sensors and acquisitions will have varying potential for LAD retrieval. These aspects will need to be explored further in order to allow a better understanding of the resulting uncertainties and a direct comparison between different studies.

4. Conclusions

In this study we introduce a fast and simple method for detection of LAD information from terrestrial LiDAR scanning (TLS) point clouds.
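The distance limits discussed above follow largely from how beam spacing grows with range. A minimal sketch of the point-density scaling quoted in Section 3.2 is given below; it deliberately ignores incidence angle, beam footprint and occlusion, which reduce real-world density further, so the numbers are order-of-magnitude approximations of the quoted 20,000 and 1,000 points/m².

```python
import math

def point_density(distance_m, angular_step_deg=0.04):
    """Approximate TLS point density (points per m^2) on a surface facing
    the scanner: adjacent beams are separated by roughly d * delta_theta
    at range d, so density falls off as 1 / d^2."""
    spacing = distance_m * math.radians(angular_step_deg)
    return 1.0 / spacing ** 2

# With a 0.04 deg angular step: ~2e4 points/m^2 at 10 m, ~8e2 at 50 m;
# moving from 10 m to 50 m always costs a factor (10/50)^2 = 0.04.
```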
Our findings provide support for the following conclusions:

1) LAD information can be obtained by simply accumulating all valid planes fitted to points in a leaf point cloud.

2) Compared to the LDP measurement technique, TLS is not limited by leaf curvature and, depending on the distance of the scanner (i.e. the footprint of the TLS at the target), might even be capable of retrieving leaf angle information from more complex leaf surfaces.

3) TLS measurement techniques might be limited for the retrieval of accurate LAD information for more distant canopies or for taller trees (h > 20 m). We recommend keeping the scanning distance within 20 m of the target canopy and using multiple (> 2) scans from different positions around the trees to improve leaf detection.

Fig. 15. Frequency (top) and fitted Beta distributions (bottom) of leaf angle for the four tree species at Kew, measured from tripod-stabilized leveled digital photography (LDPt – green line), hand-balanced digital photography (LDPh – pink line) and terrestrial laser scanner (TLS – blue line). (For interpretation of the references to colour in this figure legend, the reader is referred to the web version of this article.)

Table 3. Statistical moments (number of observations n, mean, standard deviation, and the two Beta parameters μ, ν) of the leaf angle measurements and the LAD function type (T), after De Wit (1965), from the tripod-stabilized leveled digital photography (LDPt), hand-balanced digital photography (LDPh) and terrestrial laser scanner (TLS) approaches at the Royal Botanic Gardens, Kew. PG – plagiophile, U – uniform, S – spherical, E – erectophile. ID is the tree identifier: DL – Diospyros lotus, GB – Ginkgo biloba, OJ – Ostrya japonica, WN – Wollemia nobilis.

LDPt LDPh TLS ID n Mean S.D. μ ν T n Mean S.D. μ ν T n Mean S.D.
μ ν T
DL 100 39.59 11.00 8.68 6.82 PG 120 41.67 9.52 11.39 9.82 PG 212595 50.85 14.64 3.6 4.68 PG
GB 78 57.67 18.90 1.52 2.70 S 80 56.28 18.78 1.64 2.74 S 277167 65.92 16.95 1.21 3.31 E
OJ 90 66.37 11.63 2.78 7.81 E 120 68.09 10.92 2.80 8.70 E 165189 63.68 17.79 1.26 3.04 E
WN 80 46.71 22.05 1.52 1.64 U 123 44.44 23.06 1.42 1.39 U 205368 41.25 21.82 1.75 1.48 U

Acknowledgments

JP was supported by the Estonian Research Council grant PUT1355 and Mobilitas Pluss grant MOBERC-11. MBV was supported by Science Without Borders grant 233849/2014-9 from the National Council of Technological and Scientific Development – Brazil. MD acknowledges the support of the NERC National Centre for Earth Observation for the provision of TLS equipment. MD was also supported in part by NERC Standard Grants NE/N00373X/1 and NE/P011780/1, and by the European Union's Horizon 2020 research and innovation programme under grant agreement No 640176 for the EU H2020 BACI project. We gratefully acknowledge the invaluable help of staff at Kew Gardens, in particular Amanda Cooper, Tony Kirkham and Justin Moat. We thank Dr. Youngryel Ryu and another anonymous reviewer for constructive comments.

References

Asner, G.P., 1998. Biophysical and biochemical sources of variability in canopy reflectance. Remote Sens. Environ. 64, 234–253.
Bailey, B.N., Mahaffee, W.F., 2017. Rapid measurement of the three-dimensional distribution of leaf orientation and the leaf angle probability density function using terrestrial LiDAR scanning. Remote Sens. Environ. 193, 63–76.
Bauwens, S., Bartholomeus, H., Calders, K., Lejeune, P., 2016. Forest inventory with terrestrial LiDAR: comparison of static and hand-held mobile laser scanning. Forests 7, 127.
Calders, K., Lewis, P., Disney, M., Verbesselt, J., Herold, M., 2013. Investigating assumptions of crown archetypes for modelling LiDAR returns. Remote Sens. Environ. 134, 39–49.
Calders, K., Newnham, G., Burt, A., Murphy, S., Raumonen, P., Herold, M., Culvenor, D., Avitabile, V., Disney, M., Armston, J., Kaasalainen, M., 2015. Nondestructive estimates of above-ground biomass using terrestrial laser scanning. Methods Ecol. Evol. 6, 198–208.
Côté, J.-F., Fournier, R.A., Frazer, G.W., Niemann, K.O., 2012. A fine-scale architectural model of trees to enhance LiDAR-derived measurements of forest canopy structure. Agric. For. Meteorol. 166, 72–85.
De Wit, C.T., 1965. Photosynthesis of Leaf Canopies. Agricultural Research Report No. 663, Wageningen. http://library.wur.nl/WebQuery/wurpubs/413358.
Disney, M., Lewis, P., Saich, P., 2006. 3D modelling of forest canopy structure for remote sensing simulations in the optical and microwave domains. Remote Sens. Environ. 100, 114–132.
Disney, M.I., Lewis, P., Bouvet, M., Prieto-Blanco, A., Hancock, S., 2009. Quantifying surface reflectivity for spaceborne lidar via two independent methods. IEEE Trans. Geosci. Remote Sens. 47 (10), 3262–3271. https://doi.org/10.1109/TGRS.2009.2019268.
Disney, M.I., Kalogirou, V., Lewis, P.E., Prieto-Blanco, A., Hancock, S., Pfeifer, M., 2010. Simulating the impact of discrete-return lidar system and survey characteristics over young conifer and broadleaf forests. Remote Sens. Environ. 114, 1546–1560. https://doi.org/10.1016/j.rse.2010.02.009.
Disney, M.I., Lewis, P., Gomez-Dans, J., Roy, D., Wooster, M., Lajas, D., 2011. 3D radiative transfer modelling of fire impacts on a two-layer savanna system. Remote Sens. Environ. 115, 1866–1881. https://doi.org/10.1016/j.rse.2011.03.010.
Disney, M.I., Boni Vicari, M., Calders, K., Burt, A., Lewis, S., Raumonen, P., Wilkes, P., 2018. Weighing trees with lasers: advances, challenges and opportunities. R. Soc. Interface Focus, special issue on the Royal Society meeting 'The terrestrial laser scanning revolution in forest ecology'. https://doi.org/10.1098/rsfs.2017.0048.
Falster, D.S., Westoby, M., 2003.
Leaf size and angle vary widely across species: what consequences for light interception? New Phytol. 158, 509–525.
Goel, N.S., Strebel, D.E., 1984. Simple beta distribution representation of leaf orientation in vegetation canopies. Agron. J. 76, 800–802.
Goel, N.S., 1988. Models of vegetation canopy reflectance and their use in estimation of biophysical parameters from reflectance data. Remote Sens. Rev. 4, 1–213.
Govind, A., Guyon, D., Roujean, J.-L., Yauschew-Raguenes, N., Kumari, J., Pisek, P., Wigneron, J.-P., 2013. Effects of canopy architectural parameterizations on the modeling of radiative transfer mechanism. Ecol. Modell. 251, 114–126.
Hackenberg, J., Morhart, C., Sheppard, J., Spiecker, H., Disney, M., 2014a. Highly accurate tree models derived from terrestrial laser scan data: a method description. Forests 5, 1069–1105.
Hackenberg, J., Morhart, C., Sheppard, J., Spiecker, H., Disney, M.I., 2014b. Highly accurate tree models derived from terrestrial laser scan data: a method description. Forests (Special Issue: LiDAR and Other Remote Sensing Applications in Mapping and Monitoring of Forests Structure and Biomass) 5, 1069–1105. https://doi.org/10.3390/f5051069.
Hosoi, F., Omasa, K., 2007. Factors contributing to accuracy in the estimation of the woody canopy leaf area density profile using 3D portable lidar imaging. J. Exp. Bot. 58 (12), 3463–3473.
Jin, S., Tamura, M., Susaki, J., 2015. A new approach to retrieve leaf normal distribution using terrestrial laser scanners. J. For. Res. 27, 631–638.
Király, G., Brolly, G., 2007. Tree height estimation methods for terrestrial laser scanning in a forest reserve. In: Proceedings of the ISPRS Workshop on Laser Scanning 2007 and SilviLaser 2007, Espoo, Finland, 12–14 September 2007, pp. 211–215.
Klasing, K., 2009. Surface-based Segmentation of 3D Range Data. Technical Report TR-LSE-2009-10-1. Institute of Automatic Control Engineering, Technische Universität München, Germany.
Kucharik, C., Norman, J.M., Gower, S.T., 1998. Measurements of leaf orientation, light distribution and sunlit leaf area in a boreal aspen forest. Agric. For. Meteorol. 91, 127–148.
Kuusk, A., Lang, M., Kuusk, J., 2013. Database of optical and structural data for the validation of forest radiative transfer models. In: Kokhanovsky, A. (Ed.), Light Scattering Reviews, vol. 7. Springer, Berlin, Heidelberg, pp. 109–148.
Lang, A.R.G., 1973. Leaf orientation of a cotton plant. Agric. For. Meteorol. 11, 37–51.
Lemeur, R., Blad, B.L., 1974. A critical review of light models for estimating the shortwave radiation regime of plant canopies. Agric. For. Meteorol. 14, 255–286.
Lewis, P., 1999. Three-dimensional plant modelling for remote sensing simulation studies using the Botanical Plant Modelling System. Agronomie 19, 185–210.
Malhi, Y., Bentley, L.P., Jackson, T., Lau, A., Shenkin, A., Herold, M., Calders, K., Bartholomeus, H., Disney, M.I., 2018. Understanding the ecology of tree structure and tree communities through terrestrial laser scanning. R. Soc. Interface Focus, special issue on the Royal Society meeting 'The terrestrial laser scanning revolution in forest ecology'. https://doi.org/10.1098/rsfs.2017.0052.
Mandel, J., 1982. Use of the singular value decomposition in regression analysis. Am. Stat. 36 (1), 15–24. https://doi.org/10.2307/2684086.
McNeil, B.E., Pisek, J., Lepisk, H., Flamenco, E.A., 2016. Measuring leaf angle distribution in broadleaf canopies using UAVs. Agric. For. Meteorol. 218–219, 204–208. https://doi.org/10.1016/j.agrformet.2015.12.058.
Müller-Linow, M., Pinto-Espinosa, F., Scharr, H., Rascher, U., 2015. The leaf angle distribution of natural plant populations: assessing the canopy with a novel software tool. Plant Methods 11, 1–16.
Myneni, R.B., Ross, J., Asrar, G., 1989. A review on the theory of photon transport in leaf canopies. Agric. For. Meteorol. 45, 1–153.
Ollinger, S.V., 2011.
Sources of variability in canopy reflectance and the convergent properties of plants. New Phytol. 189, 375–394.
Origo, N., Calders, K., Nightingale, J., Disney, M.I., 2017. Influence of levelling technique on the retrieval of canopy structural parameters from digital hemispherical photography. Agric. For. Meteorol. 237–238, 143–149. https://doi.org/10.1016/j.agrformet.2017.02.004.

Fig. 16. Zenith (A) and azimuth (B) leaf angle distributions over 3 m vertical slices along the crown height, estimated from the ALGL3 point clouds at 10 m scanning distance.
Palace, M., Sullivan, F.B., Ducey, M., Herrick, C., 2016. Estimating tropical forest structure using a terrestrial lidar. PLoS One 11 (4), e0154115. https://doi.org/10.1371/journal.pone.0154115.
Pfennigbauer, M., Ullrich, A., 2010. Improving quality of laser scanning data acquisition through calibrated amplitude and pulse deviation measurement. In: Turner, M.D., Kamerman, G.W. (Eds.), Proc. SPIE 7684, Laser Radar Technology and Applications XV, 76841F.
Pisek, J., Ryu, Y., Alikas, K., 2011. Estimating leaf inclination and G-function from leveled digital camera photography in broadleaf canopies. Trees 25, 919–924.
Pisek, J., Sonnentag, O., Richardson, A.D., Mõttus, M., 2013. Is the spherical leaf inclination angle distribution a valid assumption for temperate and boreal broadleaf tree species? Agric. For. Meteorol. 169, 186–194.
Prasad, O.P., Hussin, Y.A., Weir, M., Karna, J.C., Yogendra, K., 2016. Derivation of forest inventory parameters for carbon estimation using terrestrial LIDAR. Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci. XLI-B8, 677–684.
Raabe, K., Pisek, J., Sonnentag, O., Annuk, K., 2015.
Variations of leaf inclination angle distribution with height over the growing season and light exposure for eight broadleaf tree species. Agric. For. Meteorol. 214–215, 2–11.
Raumonen, P., Kaasalainen, M., Akerblom, M., Kaasalainen, S., Kaartinen, H., Vastaranta, M., Holopainen, M., Disney, M., Lewis, P., 2013. Fast automatic precision tree models from terrestrial laser scanner data. Remote Sens. (Basel) 5, 491–520.
Ross, J., 1975. Radiative transfer in plant communities. In: Monteith, J.L. (Ed.), Vegetation and the Atmosphere, vol. 1. Academic Press, London, UK, pp. 13–55.
Ross, J., 1981. The Radiation Regime and Architecture of Plant Stands. Junk Publishers, The Hague, 391 pp.
Ryu, Y., Sonnentag, O., Nilson, T., Vargas, R., Kobayashi, H., Wenk, R., Baldocchi, D.D., 2010. How to quantify tree leaf area index in a heterogeneous savanna ecosystem: a multi-instrument and multi-model approach. Agric. For. Meteorol. 150, 63–76.
Smith, J.A., Berry, J.K., 1979. Optical diffraction analysis for estimating foliage angle distribution in grassland canopies. Aust. J. Bot. 27, 123–133.
Stuckens, J., Somers, B., Delalieux, S., Verstraeten, W.W., Coppin, P., 2009. The impact of common assumptions on canopy radiative transfer simulations: a case study in citrus orchards. J. Quant. Spectrosc. Radiat. Transf. 110, 1–21. https://doi.org/10.1016/j.jqsrt.2008.09.001.
Tadrist, L., Saudreau, M., de Langre, E., 2014. Wind and gravity mechanical effects on leaf inclination angles. J. Theor. Biol. 341, 9–16.
Tanago, G.J., Lau, A., Bartholomeus, H., Herold, M., Avitabile, V., Raumonen, P., Martius, C., Goodman, R.C., Disney, M., Manuri, S., Burt, A., Calders, K., 2018. Estimation of above-ground biomass of large tropical trees with terrestrial LiDAR. Methods Ecol. Evol. 9, 223–234. https://doi.org/10.1111/2041-210X.12904.
Vicari, M., 2017. TLSeparation. https://doi.org/10.5281/zenodo.1147706.
Wang, W.M., Li, Z.L., Su, H.B., 2007.
Comparison of leaf angle distribution functions: effects on extinction coefficient and fraction of sunlit foliage. Agric. For. Meteorol. 143, 106–122.
Warren Wilson, J., 1959. Analysis of the spatial distribution of foliage by two-dimensional point quadrats. New Phytol. 58, 92–99.
Widlowski, J.-L., Robustelli, M., Disney, M.I., Gastellu-Etchegorry, J.-P., Lavergne, T., Lewis, P., North, P.R.J., Pinty, B., Thompson, R., Verstraete, M.M., 2008. The RAMI Online Model Checker (ROMC): a web-based benchmarking facility for canopy reflectance models. Remote Sens. Environ. 112 (3), 1144–1150. https://doi.org/10.1016/j.rse.2007.07.016.
Widlowski, J.-L., Mio, C., Disney, M., Adams, J., Andredakis, I., Atzberger, C., Brennan, J., Busetto, L., Chelle, M., Ceccherini, G., Colombo, R., Côté, J.-F., Eenmäe, A., Essery, R., Gastellu-Etchegorry, J.-P., Gobron, N., Grau, E., Haverd, V., Homolová, L., Huang, H., Hunt, L., Kobayashi, H., Koetz, B., Kuusk, A., Kuusk, J., Lang, M., Lewis, P.E., Lovell, J., Malenovský, Z., Meroni, M., Morsdorf, F., Möttus, M., Ni-Meister, W., Pinty, B., Rautiainen, M., Schlerf, M., Somers, B., Stuckens, J., Verstraete, M.M., Yang, W., Zhao, F., Zenone, T., 2015. The fourth phase of the radiative transfer model intercomparison (RAMI) exercise: actual canopy scenarios and conformity testing. Remote Sens. Environ. 169, 418–437.
Woodgate, W., Armston, J., Disney, M.I., Suarez, L., Jones, S.D., Hill, J., Wilkes, P., Soto-Berelov, M., 2016. Quantifying the impact of woody material on leaf area index estimation from hemispherical photography using 3D canopy simulations. Agric. For. Meteorol. 226–227, 1–12. https://doi.org/10.1016/j.agrformet.2016.05.009.
Zheng, G., Moskal, L.M., 2012. Leaf orientation retrieval from terrestrial laser scanning (TLS) data. IEEE Trans. Geosci. Remote Sens. 50, 3970–3979.
Zhao, K., et al., 2015.
Terrestrial lidar remote sensing of forests: maximum likelihood estimates of canopy profile, leaf area index, and leaf angle distribution. Agric. For. Meteorol. 209–210, 100–113.
Zou, X., Mõttus, M., Tammeorg, P., Torres, C.L., Takala, T., Pisek, J., Mäkelä, P., Stoddard, F.L., Pellikka, P., 2014. Photographic measurement of leaf angles in field crops. Agric. For. Meteorol. 184, 137–146.
New estimates of leaf angle distribution from terrestrial LiDAR: comparison with measured and modelled estimates from nine broadleaf tree species

work_s4duubfiofc5hhwwkbp2jjkok4 ---- Inicio - MEDITEKNIA

(+34) 928 232 278 · Avda.
Alcalde José Ramírez Bethencourt, 20 – Las Palmas de Gran Canaria · info@mediteknia.com

Opening hours: Monday and Tuesday 09:00–13:30 / 16:00–20:00; Wednesday 10:00–20:00; Thursday 09:00–13:30 / 16:00–20:00; Friday 09:00–18:00.

Welcome to Clínica Mediteknia

MEDITEKNIA is a clinic specialising in dermatology, aesthetic dermatology and treatments for hair loss, directed by Dr. Jiménez Acosta and located in Las Palmas de Gran Canaria. Quality of patient care comes first: our main objective is to offer the highest standard of healthcare in dermatology, aesthetic dermatology and hair-related problems. Every patient is unique and different, and every condition and treatment is personalised.
At Mediteknia we treat every patient with absolute confidentiality and respect. OUR MEDICAL TEAM. DR. FRANCISCO JIMÉNEZ ACOSTA, MEDICAL DIRECTOR: - Degree in Medicine, University of Navarra, 1983. - Doctorate in the Department of Morphology, Autonomous University of Madrid. - Specialist (MIR) in Dermatology at Hospital Universitario La Paz, Madrid (1984-1987). - Fellow in Dermatopathology at the University of Miami (Florida, 1988-89). - Fellow in Mohs Surgery, Department of Dermatology, Duke University (North Carolina, 1992-94). - Fellowship in Hair Transplant Surgery, Hot Springs, Arkansas (1994-1995). - Dermatologist, Dermatology Service, Clínica La Zarzuela, Madrid (1997-1998). - Director of the Dermatology Unit, Clínica San Roque, Las Palmas de Gran Canaria (1998-2003). - Since 2003 he has worked exclusively at his own dermatology clinic in the Gran Canarian capital. Medical societies: Member of the Spanish Academy of Dermatology and Venereology (AEDV); Official College of Physicians of Las Palmas; American Academy of Dermatology (AAD); Board of Directors of the International Society of Hair Transplant Surgeons (ISHRS); European Hair Research Society (EHRS). DRA. ZAIDA HERNÁNDEZ, DERMATOLOGIST: Born in Las Palmas de Gran Canaria. Degree in Medicine from the University of Las Palmas de Gran Canaria and MIR specialist in Dermatology at the Hospital Insular de Gran Canaria. During her residency she completed further training in Dermatopathology at Wake Forest University, North Carolina, USA. Since early 2014 she has been part of the Mediteknia medical team, combining this work with the Dermatology Service of the Hospital Insular.
Alongside the treatment of skin diseases and skin surgery, Dr. Zaida Hernández's areas of interest include the removal of facial blemishes, psoriasis, intense pulsed light (IPL), dermocosmetics and facial rejuvenation, among others. DRA. ESMERALDA LÓPEZ, DERMATOLOGIST: Born in Las Palmas de Gran Canaria. Degree in Medicine from the University of Las Palmas de Gran Canaria. MIR specialist in Medical-Surgical Dermatology and Venereology at the Hospital Universitario Insular de Gran Canaria, with a special mention of excellence. She combines her work on the Mediteknia medical team with public practice in the Dermatology Service of the Hospital Insular. DRA. SARA STANKOVA, AESTHETIC MEDICINE: - Degree in Medicine, Charles University, Prague, 1993. - MIR specialist in Stomatology, 1998. - Master's in Aesthetic Medicine, University of Córdoba, 2010. - University specialist in Sports Nutrition, University of Cádiz, 2011. - University expert in Natural Therapies, University of Cádiz, 2012. - Postgraduate in Orthomolecular Medicine, CFIS, 2015. - Master's in Precision and Anti-ageing Medicine, CFIS (in progress). - Postgraduate in medical ozone therapy, CFIS (in progress). - Physician trainer for protein-based diets and micronutrition, YSONUT. Medical societies: SEME (Spanish Society of Aesthetic Medicine); ACAME (Canarian Association of Aesthetic Medicine); SENMO (Spanish Society of Nutrition and Orthomolecular Medicine); AEMI (Spanish Association of Micro-immunotherapy). FEATURED SERVICES. BOTOX: Reduces or eliminates expression lines and wrinkles; rejuvenates your skin and achieves a lifting effect without the need for surgery. BOTULINUM TOXIN: Especially indicated for localised areas (frown lines, horizontal forehead lines, crow's feet, mouth corners or brow lift). HAIR TRANSPLANT: A technique perfected over more than 25 years of experience.
Specialists in hair transplant surgery with completely natural, pain-free aesthetic results. DERMATOLOGY: We look after your skin; Mediteknia has three dermatologists on its medical team who treat any skin disease. HYDRAFACIAL: A flash treatment that not only improves the skin's appearance but also helps restore its youth and health; it works on all skin types, gives immediate results and requires no downtime or discomfort. MOHS SURGERY: Mohs surgery can be used to remove any basal cell carcinoma and/or squamous cell carcinoma; to date it is the technique that offers the highest cure rate and the lowest chance of the tumour reappearing. HYALURONIC ACID: A natural, rejuvenated appearance; it captures and retains water, adding volume to the treated area, making it possible to reshape contours and fill deep wrinkles. It also stimulates the production of collagen and elastin, restoring the skin's elasticity.
WHY CHOOSE US: 20 years of experience in dermatology and hair transplantation; aesthetic, dermatological and hair treatments tailored to each case; a team of 3 dermatologists, 1 aesthetic-medicine doctor and 9 healthcare professionals. SPECIAL OFFERS. FROM €280 - LIFTING EFFECT, Botulinum toxin (reduces or eliminates wrinkles and expression lines): €280 / one zone, €380 / two zones, €480 / three zones (IGIC tax included). MESOTHERAPY, revitalised skin (a concentrate of hyaluronic acid and a vitamin cocktail): €200 / session, €450 / 3 sessions, promotion €420 / 3 sessions (IGIC included). PLATELET-RICH PLASMA, younger-looking skin and décolletage: €300 / session, €750 / 3 sessions, promotion €690 / 3 sessions (IGIC included). HYDRAFACIAL, flash treatment: €160 / session, promotion €120 / session, €320 / 3 sessions (IGIC included). OUR BLOG. 22 Mar - Spring is here, prepare your skin: spring stirs not only the blood but also our skin. With the arrival of this season, many people with sensitive skin begin to have allergic reactions caused by direct contact with certain substances in the environment. 25 Feb - "The pandemic has raised demand for anti-wrinkle treatments for the forehead and eyes." So says Dr.
Sara Stankova, a specialist in Aesthetic Medicine who has recently joined the Mediteknia team to offer patients new treatments in facial and body aesthetics, anti-ageing medicine and nutrition. 19 Jan - Hydrafacial, an ally for your skin in winter: winter is here and with it a whole list of effects this season leaves on our skin. Heating, sudden temperature changes, cold and wind are silent aggressions that damage the skin's natural protective barrier. To prevent this, hydration is essential, which is why the Hydrafacial treatment is recommended. THE LATEST TECHNOLOGY AT THE SERVICE OF PATIENTS: we are committed to professionalism, technology, scientific rigour and responsibility towards the patient. What our patients say - Patient testimonials: "Hydrafacial treatment. It takes about half an hour, very comfortable in the treatment room. It consists of several steps culminating in a final hydration and massage. The skin is left very clean and luminous, and that radiant look lasts three or four days. The improvement in skin texture and tone is also noticeable. I love it." "With the treatment I had, the appearance of my skin improved greatly, looking clearer and more luminous. I had no side effects, and the care I have received at the clinic has always been excellent in all the treatments I have had at Mediteknia." "Tired of countless creams that promised to reduce and eliminate blemishes and delivered no results, I came to Mediteknia, where the care I have received from the start has been wonderful and, something I value highly, PERSONALISED. After three IPL sessions, although not all the blemishes have disappeared completely, they have faded considerably and some have even gone. I am therefore very happy with the results." MEDICAL SOCIETIES. Avda.
Alcalde José Ramírez Bethencourt, 20, Las Palmas de Gran Canaria. info@mediteknia.com. Tel. 928 232 278. Copyright © 2020 Clínica Mediteknia. Website by INDEXA SALUD. work_s4rzdmxh5jh7bc4iuk2py777ca ---- OTT-113724-cutaneous-basal-cell-carcinoma-arising-within-a-keloid-scar- © 2016 Goder et al. This work is published and licensed by Dove Medical Press Limited. The full terms of this license are available at https://www.dovepress.com/terms.php and incorporate the Creative Commons Attribution - Non Commercial (unported, v3.0) License (http://creativecommons.org/licenses/by-nc/3.0/). By accessing the work you hereby accept the Terms. Non-commercial uses of the work are permitted without any further permission from Dove Medical Press Limited, provided the work is properly attributed. For permission for commercial use of this work, please see paragraphs 4.2 and 5 of our Terms (https://www.dovepress.com/terms.php).
OncoTargets and Therapy 2016:9 4793–4796. Case Report. Open Access Full Text Article. http://dx.doi.org/10.2147/OTT.S113724 Cutaneous basal cell carcinoma arising within a keloid scar: a case report. Maya Goder,1,* Rachel Kornhaber,2,* Daniele Bordoni,3 Eyal Winkler,1 Josef Haik,1 Ariel Tessone1. 1Department of Plastic and Reconstructive Surgery, Sheba Medical Center, Tel Hashomer, Israel; 2School of Health Sciences, Faculty of Health, University of Tasmania, Sydney, NSW, Australia; 3Department of Senology, Ospedale Santa Maria della Misericordia Urbino, Urbino, Italy. *These authors contributed equally to this work. Abstract: Basal cell carcinomas (BCCs) are one of the most frequent cutaneous malignancies. The majority of BCCs are reported to occur on the auricular helix and periauricular region due to ultraviolet light exposure. Despite the frequency of BCCs, those that develop within scar tissue are rare, and the phenomenon of keloid BCCs has rarely been reported in the literature. Keloid collagen within BCCs is associated with morphoeiform characteristics, ulceration, or necrosis. Extensive keloid collagen is often seen in BCCs of the ear region, a site prone to keloid scarring. This article presents a rare case of a secondary tumor (BCC) which arose on top of a primary tumor (keloid scar) on the right auricle region in a healthy 23-year-old female after an ear piercing 2 years prior. To our knowledge, the tumor described in this case, in contrast to keloidal BCCs, has never been reported in the literature.
Keywords: basal cell carcinoma, BCC, keloid scar, auricle, methylprednisolone acetate. Introduction. Basal cell carcinomas (BCCs) develop within the basal cell layer of the epidermis.1 They are one of the most frequent skin malignancies2,3 accounting for 75%–80% of all skin cancers, with 70%–80% of all BCCs occurring in the head and neck region.1 Furthermore, they are the most common type of malignancy of the ear.4 The majority of BCCs are reported to occur on the auricular helix and periauricular region due to exposure to ultraviolet light4 and may infiltrate the cartilage.4 Reports of an etiological relationship between prior trauma/scar tissue and the development of a BCC have been reported within the literature.3 To date, only five case reports of keloidal BCCs have been discussed in the literature.5–9 Keloid collagen within BCCs is reported to be associated with morphoeiform characteristics, ulceration, or necrosis, with extensive keloid collagen seen in BCCs of the ear region, a site prone to keloid scarring.10 However, presented here, to our knowledge, is the first reported case of a secondary tumor (BCC) which arose on top of a primary tumor (keloid scar) on the right auricle region in a healthy 23-year-old female after an ear piercing 2 years prior. Case report. A 23-year-old female presented with ongoing management of a keloid scar on her right auricle with no relevant medical history. At 21 years of age, she had her ear pierced and developed a local infection that was treated initially with topical antibiotics. Subsequently, a keloid scar began to form (Figure 1). At first presentation, the lesion was observed to be a 1 cm red keloid scar on the mid-helix of the right auricle. Initially, 40 mg of methylprednisolone acetate was injected locally and she was given a prescription for a pressure earring.
Mechanical pressure is often used as a means to prevent or treat keloid scars.11 Correspondence: Maya Goder, Department of Plastic and Reconstructive Surgery, Sheba Medical Center, Tel Hashomer, Emek Haela St 1, 52621, Ramat Gan, Israel. Tel +972 52 888 1555. Email maya.b.h@gmail.com. At 1 month follow-up, during which time she had been wearing the pressure earring consistently, a slight improvement was observed with a mild reduction in the overall size of the keloid scar. Another injection of 40 mg methylprednisolone acetate was given locally (Figure 2). Four months after commencing the initial treatment, the patient presented again, and a 15% improvement was observed with a reduction in the size of the keloid scar tissue to a diameter of 11 mm. Again, the patient received another local injection of methylprednisolone acetate, but at a reduced dose of 20 mg. At 5 months follow-up, the base of the keloid was unchanged and remained at 11 mm. However, the overall volume of the keloid had a flattened appearance as a result of the pressure earring. Again, the patient was administered another injection (locally) at the original dose of 40 mg methylprednisolone acetate.
A further 10 weeks on, the base of the keloid scar measured 10 mm and its height was reduced, with the total thickness of the right helix of the involved area measuring 8 mm as opposed to 7 mm in the opposite ear. Of significance, it was noted that part of the normal tissue in the involved area was replaced by the keloid over 1 mm thickness. The dose of 40 mg methylprednisolone acetate was once again injected locally, in conjunction with the pressure earring (Figure 3). A further 2 months on, the base of the keloid remained unchanged at 10 mm; however, the total thickness of the helix reduced to 7 mm. The posterior region of the helix remained a small flat patch, while the anterior of the helix presented with only minimal keloid content. At this time, another injection of 40 mg methylprednisolone acetate was administered. After completing a year's treatment with methylprednisolone acetate and 3 years after the original piercing, the keloid scar became active again, appearing red and telangiectasic. A final local injection of 60 mg methylprednisolone acetate was administered, resulting in no improvement with a further rapid growth in size (Figure 4). It was decided to surgically excise the keloid lesion using wide margins, and subsequently, the growth was sent for histological examination. Histological examination determined that the lesion was consistent with keloid and had tiny foci of BCC arising from within. Figure 1 Keloid scar at first presentation. Figure 2 Follow-up after 1 month of wearing the pressure earring. Figure 3 Keloid scar 5 months since the initial presentation with a flattened appearance.
The patient provided informed written consent for the described procedures and the use of digital photography for the purposes of treatment, teaching and use in academic publications. Discussion. The first known description of keloid was found in the Smith Papyrus derived from ancient Egypt circa 3000 BC.12 The term keloid derives from the Greek word cheloid: chele (χηλή), meaning a crab's claw, and the suffix -oid, meaning like.4 Keloids are benign dermal fibroproliferative tumors13 that develop after the dermis experiences local trauma such as surgery, burns, laceration, tattoos, and infections.14 However, abnormal scarring remains poorly understood and is a consequence of surgical and traumatic wounds.15 Keloids are reported to be more frequent in certain ethnic groups and have an incidence of 15%–20% in the black population.16 Furthermore, these unique scars have been reported in patients with hereditary connective tissue disorders such as Ehlers–Danlos syndrome, in which keloids manifest as one of the clinical indicators.17 Keloid scars may develop anywhere; however, the ear is a common site for keloid formation, and the scar usually occurs after trauma or ear piercing,18 although there is limited data available for the treatment of helical rim keloids.19 The ear is reported to be a region with a propensity for the development of keloidal BCCs and is a site that is prone to the development of keloids in certain individuals.10,18 This may be related to the frequent piercing of earlobes and the trauma and infection that can often ensue.
Requena et al5 described a keloidal BCC for its striking and distinctive features, and from a clinical pathological basis, as a variant of a BCC deemed to be rare.10 However, this description has been refuted by others10 with a study identifying that keloid BCCs are not as rare as originally stated and therefore do not characterize a distinctive clinicopathological variant.10 Jones et al10 state that keloid BCCs are found in different histological kinds of BCCs with varying appearances. They found 1.6% of all BCCs had keloidal collagen in the stroma.10 Misago,9 who reported a case of a keloidal BCC, also found the stroma characteristically demonstrates the prominent keloidal, thickened collagen bundles and well-circumscribed keloidal collagen bundles that proliferated in a nodular form. Subsequently, it has been suggested that keloidal stromal reaction is due to local inflammatory changes secondary to necrosis or ulceration.10 Furthermore, it has been suggested there may be a correlation between keloid BCCs and the ear as a site for the development of keloidal stroma.10,20 However, the tumor described in this case, in contrast to a keloidal BCC, is rare and to our knowledge has never been reported in the literature. It is unique in that the common pathological process of a BCC developing necrosis and ulceration, which in turn cause inflammation and keloid scar formation, is reversed. In this case, we presented a keloid scar that had been dormant for 2 years, which improved under conservative treatment, and then underwent malignant transformation to a BCC. Since keloid scars can be considered a tumor, we in fact present a secondary tumor (BCC) which arises on top of a primary tumor (keloid scar). Furthermore, He et al13 and Meade et al21 demonstrate another example of unusual keloid behavior of eruptive keloids associated with cancer and the clinical importance of giving long-term dynamic consideration when following a keloid patient.
Figure 4 Rapid regrowth of keloid scar. Conclusion. Our case study highlights the development of a BCC arising within a keloid scar to the auricle region after an ear piercing 2 years prior. Keloidal characteristics often occur on the ear; however, there remain no reports of such a case within the literature. Given the sparse amount of evidence available, it is important for clinicians to understand and identify key keloidal features in BCCs and to reinforce the association of morphoeiform patterns of growth, ulceration, and necrosis as described by Jones et al.10 Therefore, the authors stress the importance of considering early biopsy in any rapidly growing or changing keloidal scar. Acknowledgment. The authors would like to thank the patient for her cooperation. Disclosure. The authors report no conflicts of interest in this work. References 1. Chung S. Basal cell carcinoma. Arch Plast Surg. 2012;39(2):166–170. 2. Rao J, Deora H.
Surgical excision with forehead flap as single modality treatment for basal cell cancer of central face: single institutional experience of 50 cases. J Skin Cancer. 2014;2014:1–5. 3. Lim K-R, Cho K-H, Hwang S-M, Jung Y-H, Kim Song J. Basal cell carcinoma presenting as a hypertrophic scar. Arch Plast Surg. 2013;40(3):289–291. 4. Sand M, Sand D, Brors D, Altmeyer P, Mann B, Bechara FG. Cutaneous lesions of the external ear. Head Face Med. 2008;4(2):1–13. 5. Requena L, Martin L, Farina MC, Pique E, Escalonilla P. Keloidal basal cell carcinoma. A new clinicopathological variant of basal cell carcinoma. Br J Dermatol. 1996;134(5):953–957. 6. Balestri R, Misciali C, Zampatti C, Odorici G, Balestri JA. Keloidal basal cell carcinoma: should it be considered a distinct entity? J Dtsch Dermatol Ges. 2013;11(12):1196–1198. 7. Nagashima K, Demitsu T, Nakamura T, et al. Keloidal basal cell carcinoma possibly developed from classical nodulo-ulcerative type of basal cell carcinoma: report of a case. J Dermatol. 2015;42(4):427–429. 8. Misago N, Ogusu Y, Narisawa Y. Keloidal basal cell carcinoma after radiation therapy. Eur J Dermatol. 2004;14(3):182–185. 9. Misago N. Keloidal basal cell carcinoma. Am J Dermatopathol. 2008;30(1):87. 10. Jones M, Bresch M, Alvarez D, Böer A. Keloidal basal cell carcinoma: not a distinctive clinicopathological entity. Br J Dermatol. 2009;160(1):127–131. 11. Tanaydin V, Beugels J, Piatkowski A, et al. Efficacy of custom-made pressure clips for ear keloid treatment after surgical excision. J Plast Reconstr Aesthet Surg. 2016;69(1):115–121. 12. Wilkins RH. Neurosurgical classic. XVII. J Neurosurg. 1964;21:240–244. 13. He Y, Merin MR, Sharon VR, Maverakis E. Eruptive keloids associated with breast cancer: a paraneoplastic phenomenon? Acta Derm Venereol. 2011;91(4):480–481. 14. De Sousa RF, Chakravarty B, Sharma A, Parwaz MA, Malik A. Efficacy of triple therapy in auricular keloids. J Cutan Aesthet Surg. 2014;7(2):98–102. 15.
Del Toro D, Dedhia R, Tollefson TT. Advances in scar management: prevention and management of hypertrophic scars and keloids. Curr Opin Otolaryngol Head Neck Surg. 2016; Epub 2016 May 7. 16. Viera MH, Vivas AC, Berman B. Update on keloid management: clinical and basic science advances. Adv Wound Care. 2012;1(5):200–206. 17. Halim AS, Emami A, Salahshourifar I, Kannan TP. Keloid scarring: understanding the genetic basis, advances, and prospects. Arch Plast Surg. 2012;39(3):184–189. 18. Shin JY, Lee JW, Roh SG, Lee NH, Yang KM. A comparison of the effectiveness of triamcinolone and radiation therapy for ear keloids after surgical excision: a systematic review and meta-analysis. Plast Reconstr Surg. 2016;137(6):1718–1725. 19. Park TH, Rah DK. Successful eradication of helical rim keloids with surgical excision followed by pressure therapy using a combination of magnets and silicone gel sheeting. Int Wound J. 2015. Epub 2015 November 23. 20. Slemp AE, Kirschner RE. Keloids and scars: a review of keloids and scars, their pathogenesis, risk factors, and management. Curr Opin Pediatr. 2006;18(4):396–402. 21. Meade C, Smith S, Makhzoumi Z. Eruptive keloids associated with aromatase inhibitor therapy. JAAD Case Rep. 2015;1(3):112–113. work_s6lq3nkaoncebfcqqg7nx3iuxe ---- THE STATE OF SPORT PHOTOJOURNALISM Concepts, practice and challenges Richard Haynes, Adrian Hadland, and Paul Lambert Based on a global survey of photojournalism and case studies of recent transformations in the use of photography in sport, this paper critically analyses current professional practices of sport photojournalists focusing on the contemporary challenges faced by this industry.
Rhetoric proclaiming the death of the photographer in the age of video technology and self-mass communication of digital photographs has presented a major challenge to the survival of photographers and photography as a professional practice in news media. In the specific field of sport photojournalism, photographers have faced added challenges of accreditation to sport with the selective access to sporting venues or events through commercial licensing of 'preferred media partners' and increasing management of 'image rights' and anti-piracy measures. This has occurred at a time when sport images and the digital distribution of sporting images are greater than ever. The data for this article are taken from a World Press Photo Foundation-University of Stirling longitudinal project on photojournalism and represent the views and experiences of over 700 photographers who are engaged in sports photojournalism. KEYWORDS amateur; digital; photography; photojournalism; sport. Accepted for publication in Digital Journalism, published by Taylor and Francis. Introduction. "To save money, instead of hiring real sports photographers, sports/news publications are telling their sportswriters to shoot games with their cellphones and use these still pictures to illustrate their sports columns. This puts freelance sports photographers out of work, and results in lousy pictures" – Sports Photojournalist, World Press Photo survey, 2016. London. Thursday 9 August, 2012. Jamaican sprinter Usain Bolt wins the 200 meters Olympic final, making him the first man to successfully defend both Olympic sprint titles. "Half man, half superhero," gushed the admiring Daily Telegraph of London. Within minutes of his success, Bolt borrows a camera from one of the photographers crowding around him and mimics their moves, down on one knee, turning the camera on the cameras. Then, with a Jamaican flag wrapped around his shoulders, Bolt assumes the frozen archer position that has become his signature.
In mid-pose, he turns to his right and smiles. The photograph was used across the entire front page not just of the Telegraph the next morning, which accompanied the image with the words 'The Greatest', but all around the world. The use of powerfully evocative images in print, online and on television has not just tracked Bolt's stellar career, a partnership he acknowledged with his little camera show in the Olympic Stadium in London, it has been a fundamental component of the transformation of sport in the contemporary era. But, just as sports and sports journalism have evolved and changed, so too has the process by which sport images are captured, transmitted and published. Sports photojournalists must grapple with a range of challenges that collectively threaten the very sustainability and future of their art, ironically at a moment when the image has never been as ubiquitous or as important as an expression of human creativity or as a medium for demonstrating athletic prowess. With billions of images uploaded onto the internet every day, the field has become crowded. The massification of image production has also coincided with media company cutbacks as legacy businesses struggle to adjust to the disruption of the digital era. If any sector has suffered from the commonly described 'crisis' of journalism (Reinardy, 2011), photojournalists have been among them. In newspapers, photographers and photographic departments are frequently the first to go, at times suffering disproportionate cutbacks, especially in America (Mortensen, 2014; Anderson, 2013). The findings of this research reflect the impact of this crisis on the work patterns and prospects of those making a living illustrating news storytelling with images. There have been shifts, too, within the photographic industry, marked notably by the concentration of photographic agencies, many of which have been bought up by big corporate players such as Getty Images.
In the specific field of sport photojournalism, photographers have faced added challenges of accreditation to sport with the selective access to sporting venues or events through commercial licensing of 'preferred media partners' (Edwards 2015; Greenslade 2010, 2013) and increasing management of 'image rights' and anti-piracy measures (Haynes 2007). This has occurred at a time when sport images and the digital distribution of sporting images proliferate. This confluence of developments provokes a series of questions concerning the current state and future of sports photography: How are sports photographers coping in the digital era? What are the key challenges they face? Is there a future for professional sports photography? Surprisingly, very little research has been conducted on photographers generally and there is almost nothing on sports photographers in particular. This article aims to begin the process of addressing this absence. It does so by considering recent data from more than 700 professional photographers involved in sport to varying degrees. This study focuses on the changing practices of sport photojournalism based on an international survey of photographers. It is therefore based on the experiences, perceptions, professional values and practices of sport photojournalists, which have been relatively ignored in the field of sport and communications more broadly. The results provide fascinating insights into the world of sports photography and into the challenges, opportunities and threats faced by sports photographers on a daily basis. Sport Photojournalism: Literature Review. Beyond work commissioned by World Press Photo (see, especially, Campbell 2013) only a handful of research studies have been published on photojournalists as a group (Mortensen, 2014; Caple, 2013; Papadopolous & Pantti, 2011; Pantti & Bakker, 2009; Taylor, 2000). Often this research included other groups, such as amateur or citizen photographers (see Allan 2015).
Mäenpää (2014) has produced one of the few papers dedicated specifically to photojournalists. Mäenpää’s work is based on Finnish data obtained from 20 interviews and an online survey of 200 people; while the focus is on photojournalists’ values, the respondents include a wide range of people associated with the photography industry, such as graphic designers and art directors. Vauclare and Debeauvais (2015) have also published a French-language study of French photographers commissioned by the Ministry of Culture and Communication (see Sutton, 2015 for an overview). Sport photojournalism is an established part of the sport-media complex, which has been identified as the dominant mode of elite sport and increasingly characterised as heavily commodified, mediatized and global in a digitally networked communications environment (Hutchins and Rowe, 2012). In spite of the broadening of the media-sport complex, both in terms of networked media sport and globalisation, the cultures of sport continue to reflect some deep-seated prejudices in terms of sexism, racism and homophobia (Messner and Cooky, 2010). As well as the very important issue of media representation of women, ethnic minorities and the LGBT community in sport, the issue of equality in media sport and communications is of genuine concern in the context of this study. For example, the paucity of women in sport media professions and the ‘gendered’ roles they tend to occupy continue to present a major challenge to the industry in terms of equality and diversity (Franks and O’Neill, 2015). Recent studies on the changing nature of photojournalism, both in terms of the professional role of photographers in news outlets and transformations in visual storytelling, have also emphasised the impact of processes of digitisation and networked communications on the profession (Yaschur, 2011; Zavoina and Reichert, 2000; Campbell, 2013).
As has been recognised more broadly in the media and communications industries, digitisation has transformed 'media work' (Deuze, 2007) and convergent, networked communications have transformed the power relations between producers, distributors and consumers of media (Castells, 2013). Most notable in the case of photojournalists has been a shift from the standard production of the printed still image to more diversified roles in digital visual storytelling: producing videos and slideshows for online news content, which sit alongside the more established editorial shots of events, people and places. One of the central questions such developments pose is: what are the continuities and discontinuities between print photojournalism, long established in the 20th century, and the user-created, software-enabled, networked photography which has blossomed in the early 21st century? A common observation would suggest there are significant discontinuities in terms of the scale of popular photography and image distribution, but it is less clear whether the actual form of photography has been transformed quite as radically. As Manovich (2016) has identified, this raises a further question concerning variability: has the recent explosion of digital photography led to more diversity in image capture and distribution? Or, conversely, has it led to more repetition of themes, uniformity and social mimicry? For example, while 'the selfie' may be a new visual form, it is also a uniformly mimicked and creatively constrained visual culture. Newsrooms and photographers, of course, have changed their routines to accommodate such changes, and many news outlets now join the online conversation on Twitter or Instagram to provide instantaneous visual feeds from breaking news stories. Photojournalism has always adapted to technological advances, whether the move to colour printing in the 1970s and 1980s or the shift to digital in the 1990s.
For example, research in the late 1990s by Russial and Wanta (1998) revealed that sport photojournalism, especially during the coverage of major events like the Olympic Games or the Super Bowl, was at the forefront of adaptations to digital photography, even where cost and variable quality issues constrained the use of such technologies in everyday news-gathering routines. They also noted how recruitment patterns changed in terms of skillsets, with the ability to process photographs in a darkroom usurped by computer literacy and an ability to edit images in specialist software packages such as Photoshop. In the context of sports media, the expansion of digital content across news services (print, broadcast, online, mobile), and in particular the rise of networked social media, means that sport photography has mushroomed in both volume and contributors, with much of it supplied by amateurs sharing their images online through applications such as Instagram, Pinterest, Facebook, Twitter and Snapchat, among many others. For professional photojournalists, the broadening of visual culture around sport may be viewed as a direct challenge to their previously privileged position as providers of visual storytelling around sport. The multi-variegated access to images from sport, where a sport star’s 'selfie' can become a more socially valued mode of engagement than a professional photograph, can potentially undermine the economic value of professional sport photography as the main visual record of an event or sport star. A challenge also comes from the user-generated images of fans, whose access to networked mobile photography brings new perspectives on the sporting event which can be instantaneously produced, distributed and shared during live action to various online communities.
The speed of distribution is key to the cultural capital of fan-made digital photography from sport and, in terms of accessibility to a visual record of an event, usurps the professional distribution of images that have to be processed via editorial controls and syndication. In short, fan-made media, in this case sporting images distributed in networked social media, do not face the same professionalised, routinised and conventionalised practices of sport photojournalists. Yet they may quite rapidly become distributed and shared as the visual images telling the story of a sport event or happening, rendering industrially produced media images either redundant or lacking relevance to the fan experience. In addition to the competition brought by fan-made media, the expansion of digital images from sport online has brought with it new constraints and forms of sport industry regulation. As the volume of sporting images has increased, so too have attempts to tighten the regulation of accredited access to sporting venues, as well as mechanisms to increase control over the licensing rights of sporting images in the online environment. This is mainly due to an increased awareness on the part of governing bodies of sport, individual venues and professional sports clubs of the value of visual culture in sport (Hutchins and Rowe, 2012). The ownership and control of intellectual property rights to sport, in particular the regulation of the rights of access to either broadcast or distribute commercial images from sport, have become incredibly lucrative aspects of contemporary sport economics (Haynes, 2007). For example, there has been a marked increase in preferred media partnerships between sports organisations and media outlets that are prepared to collaborate on exclusivity contracts regarding access and the syndication of images to competing news organisations.
In the Premier League (football) in England, this has included exclusive media partnerships between Newcastle United FC and the media outlets Sky Sports and the national Daily Mirror newspaper (Edwards, 2015), as well as internally-run photography syndication by Southampton FC (Greenslade, 2010), which effectively monopolises the commercialisation of images from these stadia. In other cases, such as Nottingham Forest FC, football clubs in England have taken access rights away from particular news outlets because sports journalists have upset the owners of such clubs with critical news stories (Greenslade, 2013). For news agencies and fans, such developments represent a closure of access to organisations that may be privately owned but are more broadly viewed as essential parts of the local community and cultural traditions of the area. In particular, from within our general survey of international photojournalists from the World Press Photo competition, we were interested in gathering some basic data on the range of sports photographers covered and their degree of specialization. We wanted to know about the demographic breakdown of sport photojournalism in relation to the wider survey, including gender balance and regional distribution, as well as how these factors related to the coverage of particular sports. Through a survey of photographers around the world, we were keen to analyse the extent to which the processes identified above are actually having an influence on the practices of those working in a sporting context. To what extent have sport photographers adopted the broader visual culture of digital media and introduced video capture into their work? How many photographers identify access to sporting venues and events as a problem or a challenge in their daily work in sport? Are developments in networked social media felt by photographers to be affecting the value of their work in sport?
Our analysis of sport photojournalists around the world is, therefore, a first attempt to map some of the key patterns of work in the contemporary networked digital media environment, and the extent to which issues related to access and social media are viewed as having an impact on the practices of sport photojournalists.

Methods

Our focus on sport photojournalists draws on data from a broader questionnaire survey of photography practice and routines in wider news contexts. The broader study investigates the attitudes and values of photographers on a range of issues relevant to contemporary developments in their profession, covering for instance employment arrangements, professional practices, and opinions about future developments in the field (see Hadland et al., 2015). Many of these issues are relevant to the study of sports photojournalism and are factors in the contemporary responsibilities and work roles of professionals in the industry, many of whom are struggling to survive the impacts of declining newspaper circulations and the proliferation of images online (Mortensen, 2014; Anderson, 2013). We gained access to a sample of more than 700 sports photographers through collaboration with the World Press Photo Foundation (WPPh), host of a leading international photography competition. In early 2016, 5,775 photographers from more than 100 countries and territories sent in their work to be judged across a variety of categories, including best sports photograph (single image and story). After submitting their work, the entrants were contacted by email and invited to follow a link to a voluntary online survey. More than a third of the entrants, 1,991 photographers, responded and worked their way through almost 70 questions about subjects ranging from equipment and pay to ethics[iii]. A similar survey was undertaken in early 2015 (see Hadland et al.
2015), but the 2016 study (due to be published by the end of 2016) has a special focus on photographers working in sport. Photographers responding to the survey were commonly engaged in multiple forms of photography at the same time. In the 2016 survey, 713 of the 1,991 respondents (36%) indicated that sports photography was one of the forms of photography they undertook (usually in combination with other forms). Of these respondents, 362 (or 18% of the full sample) reported that sports photography was the major focus of their news work. The survey also collected information on the income (if any) generated by respondents through their photography, as well as data on other features of their employment status. By comparing responses across several related questions, we could also identify 284 respondents whose answers indicated that they generated an important element of their income through sports photography[iv]. Most of our results below are presented for each of these three groups: the wider category of 713 who engage in some level of sports photography; the 362 respondents who report that sports photography is one of their main activities; and the smaller group of 284 for whom sports photography is an important element of their income. Results are presented below in the form of summary statistics that give a profile applicable to the respective groups of sports photographers. This is contrasted, when relevant, with the corresponding figure for all survey respondents, and with indicators of the ‘statistical significance’ of the difference between the relevant group of sports photographers and others. In broad terms, a ‘significant’ difference between the group and others suggests that the pattern of difference in the profiles, as revealed in our sample of data, is strong enough that it is very plausible that the difference holds genuinely in the underlying population of all photographers and photojournalists.
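The group-versus-others comparisons of this kind are conventionally checked with a two-proportion z-test. The sketch below, using only the Python standard library, illustrates the idea; the counts are hypothetical reconstructions from rounded percentages of the sort reported in the tables (e.g. roughly 60% of 713 sports photographers employed versus roughly 34% of the remaining 1,278 respondents), not the raw survey data.

```python
from math import sqrt, erf

def two_proportion_ztest(x1, n1, x2, n2):
    """Two-sided z-test for the difference between two sample proportions."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)              # pooled proportion under H0
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical counts: employed (rather than self-employed) respondents,
# sports photographers versus all other respondents
z, p = two_proportion_ztest(425, 713, 439, 1278)
print(f"z = {z:.2f}, p = {p:.4f}")  # |z| > 1.96 marks significance at the 95% level
```

A difference is flagged with an asterisk in the tables when the corresponding test statistic exceeds the 95% threshold, i.e. when the two-sided p-value falls below 0.05.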
Results and Analysis

Table 1 presents a number of summary statistics about sports photography among the survey respondents. Based on our data, the average sports photojournalist is male, well-educated (usually to degree level) and in his late 30s or early 40s. He is more likely to be employed by a media company or agency than self-employed, though the proportion is roughly 60:40. This is higher than for the photojournalists in this study as a whole, most of whom are self-employed.

Table 1: Aspects of sports photojournalism from the 2016 WPP survey

|                                                                     | All responses (N = 1991) | Any sports photography (N = 713) | Mostly sports photography (N = 362) | Sports photography important part of income (N = 284) |
|---------------------------------------------------------------------|------|-------|-------|-------|
| Background characteristics                                          |      |       |       |       |
| Percent female                                                      | 15.5 | 8.7*  | 6.6*  | 4.9*  |
| Percent aged 30-50                                                  | 67.5 | 69.3  | 67.1  | 66.2  |
| Percent employed (cf. self-employed)                                | 43.4 | 59.6* | 58.6* | 61.6* |
| % saying access to sports venues is… (N = 921)                      |      |       |       |       |
| ‘difficult’ or ‘very difficult’                                     | 20.6 | 18.5* | 19.7  | 18.7  |
| ‘difficult’ or ‘very difficult’ (women)                             | 28.2 | 16.7* | 18.2  | 7.7   |
| ‘diff.’ or ‘very diff.’ (self-employed)                             | 23.5 | 20.0* | 18.8  | 20.4  |
| ‘difficult’ or ‘very difficult’ (for photographers covering football) | 19.0 | 18.4 | 19.3 | 18.5  |
| …becoming a little/much harder to access grounds in last few years  |      | 34.8  | 39.0  | 40.5  |
| Range of sports covered                                             |      |       |       |       |
| % who ‘cover a wide range of sports’                                |      | 42.9  | 56.9  | 59.9  |
| % who cover ‘a small number’                                        |      | 29.7  | 28.5  | 30.3  |
| % who only shoot one or two sports                                  |      | 19.4  | 12.7  | 9.9   |
| % employed rather than self-employed (of those covering football)   |      | 34.0  | 36.8  | 34.4  |
| Risks related to livelihood from sports photography (% mentioning…) |      |       |       |       |
| …access/rights restrictions                                         |      | 47.4  | 50.8  | 50.7  |
| …amateur/citizen photography                                        |      | 35.5  | 39.0  | 41.6  |
| …social media                                                       |      | 15.4  | 18.5  | 17.3  |
| …copyright issues                                                   |      | 28.3  | 30.9  | 30.6  |
| …cost of equipment                                                  |      | 37.0  | 40.9  | 39.8  |

Notes: Analysis of 1,991 respondents to the 2016 WPP questionnaire. * in the first two panels indicates that the category proportion is significantly different from the value for all other respondents at the 95% statistical significance level.
In our sample, there are proportionally fewer sports photographers in Africa and Europe compared to the Americas. For example, of the 220 respondents from either North America or Australasia in this year’s study, 53 (24%) said they mainly worked in sports photography. This compares to 18% across the sample as a whole, and reminds us that the patterns revealed for respondents engaged in sports photography are for a somewhat different national distribution from that of photographers as a whole. The gender gap, already so evident in photojournalism as a whole, is even more pronounced in sports photojournalism, the data suggest. Across our whole sample, over two years, the gender spread is roughly 85% male and 15% female. Among the sports photojournalists, only 6.6% are female and 92.4% are male, and the gender disparity increases when we isolate those photographers for whom sports photography is an important part of their livelihood. Many of the sports photographers covered a wide range of sports, particularly those who gained relatively more of their income from sport. For all those undertaking sports photography, 43% report covering a wide range of sports, 30% a few sports, and 19% report that they shoot only one or two sports. Respondents were also asked to indicate which from a list of popular sports they were engaged in covering. Figure 1 summarises the responses received.

Figure 1: Percent of respondents shooting images in the following named sports

We can see that football/soccer is by far the most covered sport, with around 70-80% of the sports photojournalists surveyed saying that it is one of the sports they most often work on. There is a considerable gap before other sports feature among those that respondents most often cover – for example, basketball, athletics and tennis are mentioned by around a third of relevant respondents.
In terms of the link between country and football, sports photographers in Asia, Africa and Central and South America were more likely to cover football than any other sport. By contrast, North American sports photographers were more likely to be covering golf, basketball or ice hockey than photographers from other regions. Sports photographers focusing on football were relatively less likely to be self-employed than other sports photographers. As a group, just under 20% of the respondents said they found it difficult (or very difficult) to gain entrance to stadiums or clubs. A third of all the sports respondents said they believed this had become more difficult in recent years. It is possible that difficulties of access are more pronounced for women photographers than for men. Of all respondents who answered a question on access to sports venues, a higher proportion of women (28%) described access as ‘difficult’ or ‘very difficult’, compared to 20% of their male counterparts. However, this pattern does not hold for those respondents for whom sports photography is a relatively more important part of their professional lives – in these groups, more men than women report difficulties of access. Self-employment status, and data on which specific sports were covered, were not strongly linked to reported difficulties of access. Additionally, many respondents (in all categories) reported that in their view it had become harder to access sports venues over the last few years (34% of all responses among people engaged in sports photography). Though access may not be an especially prominent issue in the mindset of respondents, it is clearly influencing a number of photographers, and many believe it has become increasingly problematic in the recent past. Respondents were also asked to identify whether they considered a number of factors thought to be relevant to the practice of sports photography as potential threats to their livelihoods (Table 1, panel 4).
Around 50% of relevant respondents listed access to venues as a risk to their livelihood as sports photographers. The cost of equipment and the threat of amateur photographers were each cited by around 40%, while copyright infringements concerned around 30% and social media around 15%. Concerns were generally higher in those groups where photography was a more important part of their income, and it was noticeable that North American and Australasian sports photographers (the regions with higher levels of sports photography amongst the sample) were considerably more concerned about the threat of amateurism, with 55% expressing that this was a serious danger. In an open section of the questionnaire, where respondents were invited to give more detail on their livelihoods as sports photojournalists, some respondents expressed their concerns about the use of ‘hobbyist’ photographers working on low or no commissions and often producing poor-quality images. A similar sentiment is expressed in the quote at the beginning of the introduction to this article, which was also taken from the open section of the questionnaire. The rising cost of equipment was cited by many of the respondents as a real threat to the sustainability of their work as sports photographers. Ordinarily, the sports photographer needs much larger, more expensive lenses than other types of photographer in order to capture the high-speed, close-up action shots that are the hallmark of their business. Male photographers (20%) appeared to be slightly more worried than female photographers (13%) about the cost of equipment, and African and Central/South American photographers were considerably more concerned about this than their counterparts in Europe, Asia and North America.
Table 2: Aspects of photography as experienced by sports photojournalists in the 2016 WPP survey

|                                                            | All responses (N = 1991) | Any sports photography (N = 713) | Mostly sports photography (N = 362) | Sports photography important part of income (N = 284) |
|------------------------------------------------------------|-------|--------|--------|--------|
| Digital practices and social media                         |       |        |        |        |
| % Using film camera                                        | 17.7  | 9.1*   | 8.0*   | 6.3*   |
| % Often/always enhance digital images (e.g. contrast, hue) | 43.5  | 34.6*  | 30.4*  | 32.0*  |
| % Images used without permission                           | 75.3  | 77.6   | 76.0   | 76.0   |
| % Often/always use social media                            | 51.0  | 48.3   | 47.2   | 46.1   |
| Financial situation at present…                            |       |        |        |        |
| …is difficult/very difficult (%)                           | 35.6  | 28.1*  | 32.3   | 28.5*  |
| …is worse than last year (%)                               | 31.0  | 31.8   | 34.5   | 32.4   |
| …income less or a lot less now compared to 5 years ago (%) | 38.4  | 35.8   | 40.2   | 37.9   |
| Imputed mean annual income from photography (USD)          | 25600 | 30500* | 30500* | 35800* |
| Indicators of satisfaction with photography (% happy with, or % stating that they have control over…) | | | | |
| Current mix of assignments                                 | 62.1  | 63.5   | 63.0   | 63.0   |
| Control over what work I do                                | 30.2  | 40.3*  | 39.8*  | 43.3*  |
| Work rarely edited without permit                          | 79.1  | 75.3*  | 74.3*  | 75.0   |
| Not too overwhelmed by tech. change                        | 73.5  | 72.9   | 74.0   | 76.1   |
| Feel photography is valued                                 | 50.5  | 51.6   | 51.4   | 51.4   |
| Positive on future of photography                          | 51.6  | 50.4   | 48.3   | 48.2   |

Notes: Analysis of 1,991 respondents to the 2016 WPP questionnaire. * indicates that the category proportion is significantly different from the value for all other respondents at the 95% statistical significance level.

Table 2 summarises some other ways in which sports photographers were similar to or different from the other photographers who participated in the WPP survey in 2016. On many issues the pattern is one of similarity, for instance with few significant differences between sports and other photographers in levels of social media use and in experiences of the use of images without permission (panel 1); in attitudes to financial circumstances (panel 2); and in general satisfaction with the profession (panel 3).
There are, however, a few exceptions: sports photographers, particularly those for whom sports photography is a relatively more important part of their livelihood, are less likely to use a film camera than other photographers (panel 1), and when working with digital images they are also less likely to report that they often enhance images, for example by adjusting contrast or colour (panel 1). In terms of their financial situation, sports photographers are somewhat more positive about their circumstances than other photographers, and it is certainly the case that, among survey respondents, those who were engaged in sports photography reported on average higher annual incomes from photography than those who were not (panel 2). Lastly, panel 3 suggests that sports photographers generally felt they had more control over their daily work than did other photographers, although in contrast they were slightly more likely to report that their work could be edited without their permission. This latter pattern may suggest that sports photographers work in an environment in which they feel relatively autonomous and are generally satisfied with their circumstances, but are nevertheless aware that the products of their work may be controlled by others, such as editors.
Table 3: Logistic regression model results for influences on mentioning ‘Amateur/citizen photographers’ and ‘Access restrictions’ as ‘an important risk to your livelihood as a sports photojournalist’

|                                                      | Amateur threat: Model 1a | Amateur threat: Model 1b | Access threat: Model 2a | Access threat: Model 2b |
|------------------------------------------------------|----------|----------|----------|----------|
| Gender: Male (reference)                             | 0        | 0        | 0        | 0        |
| Gender: Female                                       | ns       | ns       | ns       | ns       |
| Age: 29 years old or younger                         | -        | --       | ns       | ns       |
| Age: 30-49 years (reference)                         | 0        | 0        | 0        | 0        |
| Age: 50 years old or older                           | ns       | ns       | ns       | ns       |
| Continent: North America or Australasia              | +++      | +++      | ns       | +        |
| Continent: all other continents (reference)          | 0        | 0        | 0        | 0        |
| Sports covered: mentions football                    |          | +++      |          | +++      |
| Sports covered: any other mentions (reference)       |          | 0        |          | 0        |
| Positive financial situation (Likert score, 1 = Very difficult, 5 = Very good) | | -- | | ns |
| Pseudo-R²                                            | 0.024    | 0.040    | 0.003    | 0.012    |
| N                                                    | 713      | 713      | 713      | 713      |

Notes: Analysis restricted to the 713 cases who reported some level of sports photographic activity. ns = no significant effect. + / ++ / +++ and - / -- / --- indicate significant effects in a positive and negative direction respectively, at the 90%, 95% and 99% significance thresholds. Reference category coefficients are zero by design.

Regression models allow us to assess the relative influence of nominated explanatory factors upon an outcome of interest, in a manner that controls for other relevant differences between cases. In Table 3, results from four regression models are shown. Models 1a and 1b summarize relative influences upon the probability of a respondent stating that amateur/citizen photography is an important risk to their livelihood in sports photography, and Models 2a and 2b summarize similarly selected influences upon the probability of stating that access restrictions are an important risk.
Overall, the models show that the highlighted factors are in combination only a small influence upon the chances of mentioning these risks (evident in the small pseudo-R² values, which indicate the proportion of variation in the outcome that can be attributed to the various explanatory variables). The table also shows in summary form whether or not the highlighted measures had a distinctive and significant impact upon the outcome. Only a few variables are seen to have a discernible influence across the models (net of the effects of other variables). From models 1a and 1b, we see that photographers based in North America or Australasia were more likely to mention amateur photography as a threat, but gender was not influential net of other measures, and age was associated only with a slight pattern (whereby younger respondents were more likely to consider amateur photography a threat). In Model 1b, we see that photographers who mentioned shooting images of football were relatively more likely to see amateur photography as a threat, but having a more positive attitude to their current financial situation was associated with a lower chance of seeing amateur photography as a threat (net of other factors). From models 2a and 2b, we see that gender, age and continent have no important influences (net of each other) upon the chances of regarding access restrictions as an important threat (model 2a); however, when we also take account of field of photography and financial attitudes, we see a slight pattern whereby those who shoot football are relatively more likely to mention the threat of access restrictions and, in the context of the other variables included in the model, a slight influence of continent whereby those based in North America and Australasia are slightly more likely to mention access as an issue (net of other factors included in the models).
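The pseudo-R² statistic used here (McFadden's, defined as 1 minus the ratio of the fitted model's log-likelihood to the intercept-only model's) can be illustrated with a minimal sketch. For a logistic regression with a single binary predictor, the maximum-likelihood fitted probabilities equal the observed rates in each predictor group, so the statistic needs only the standard library. The counts below are hypothetical, chosen merely to mimic the small magnitudes in Table 3; they are not the survey data.

```python
from math import log

def mcfadden_pseudo_r2(x_by_group, n_by_group):
    """McFadden pseudo-R2 = 1 - LL(model) / LL(null), for a logistic
    regression with one binary predictor, where the fitted probabilities
    are simply the observed per-group outcome rates."""
    def loglik(successes, trials, p):
        return successes * log(p) + (trials - successes) * log(1 - p)

    # Log-likelihood of the fitted model (group-specific rates)
    ll_model = sum(loglik(x, n, x / n) for x, n in zip(x_by_group, n_by_group))
    # Log-likelihood of the intercept-only (null) model (overall rate)
    p_bar = sum(x_by_group) / sum(n_by_group)
    ll_null = sum(loglik(x, n, p_bar) for x, n in zip(x_by_group, n_by_group))
    return 1 - ll_model / ll_null

# Hypothetical counts: respondents mentioning amateur photography as a threat,
# split into North America/Australasia versus all other regions
r2 = mcfadden_pseudo_r2(x_by_group=[110, 160], n_by_group=[200, 513])
print(f"pseudo-R2 = {r2:.3f}")  # small, as in Table 3
```

Values near zero, as in Table 3, indicate that the explanatory variables account for only a small share of the variation in which respondents mention a given risk.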
Conclusion

At the outset of this paper we identified three key questions concerning the current practices and outlook for photojournalists specializing in sport: How are sports photographers coping in the digital era? What are the challenges they face? Is there a future for professional sports photography? Certainly, the results and analysis give us an initial impression of the answers. In terms of how sports photojournalists are coping in the digital era, in spite of the restructuring of the industry, the ‘crisis’ so frequently alluded to, the loss of formal employed positions and the technical challenges, photojournalists who mainly cover sport are doing relatively well. Their income is generally higher than that of photographers as a whole and they are somewhat more positive about their circumstances than are other photojournalists. Sports photographers are more likely to be formally employed by media companies and only a relatively modest proportion are finding access to stadia and sport clubs difficult. Many photographers cover a wide range of sports, particularly those who gain relatively more of their income from sport. The sports photographers who participated in this study generally felt they had more control over their daily work than did other photographers, and the general feeling was that sports photographers tended to work in an environment in which they felt relatively autonomous and were generally satisfied with their circumstances. There were some national and regional variations in terms of general conditions of work with, for instance, sports photographers in Asia, Africa and Central and South America more likely to cover football than any other sport.
The key challenges identified in the study were the persistent gender gap in sports photojournalism, pervasive copyright infringements, the anticipation of increasingly difficult access to sports stadia and athletes, the rising cost of equipment and the threat of both social media and amateur photographers. There were gender-based and regional variations in how keenly these challenges were felt. The question of the future sustainability of sports photojournalism is less clear from the data. Certainly there are challenges, and the manner in which these are resolved or allowed to develop will have an impact on the profession going forward, as well as on the representation of sport. The gradual disappearance of women sports photographers will not help correct the sexism, racism, homophobia and lack of diversity already evident in the cultures of sport. The capturing of iconic sports moments and their inspirational impact may be undermined by a lack of quality images and inadequate equipment. A new narcissistic aesthetic of subject-generated content may be looming. These diverse factors may encourage or discourage young people from entering the profession, or force sports photojournalists in their prime to leave. In highlighting the threat of amateur photographers, free images, copyright theft and dwindling levels of full employment, the data support the notion that sports photojournalists have been in the frontline of the crisis spawned by the advent of online news and images. They have, however, been slightly less affected than news photojournalists as a whole. Certainly, it is important to track the work patterns, trends and vulnerabilities of those who make a living populating the world, online and offline, with quality images. This study is a reminder of the importance of gathering data from the people who have made such a profound contribution to the commercialisation and sustainability of sport itself.
Given the scarcity of studies on photojournalists, and on sports photojournalism in particular, there is evidently a need for more in-depth research in this area.

REFERENCES

Allan, Stuart. 2015. ‘Introduction’, Journalism Practice, 9:4, 455-464. DOI: 10.1080/17512786.2015.1030131

Anderson, Monica. 2013. “At newspapers, photographers feel the brunt of job cuts.” http://www.pewresearch.org/fact-tank/2013/11/11/at-newspapers-photographers-feel-the-brunt-of-job-cuts/

Blaikie, Norman. 2003. Analyzing Quantitative Data: From Description to Explanation. London: Sage.

Campbell, David. 2013. Visual Storytelling in the Age of Post-Industrialist Journalism. Research project published under the auspices of the World Press Photo Academy. https://www.david-campbell.org/multimedia/world-press-photo-multimedia-research/

Caple, Helen. 2013. Photojournalism: A Social-Semiotic Approach. Basingstoke: Palgrave Macmillan.

Castells, Manuel. 2013. Communication Power (2nd ed.). Oxford: Oxford University Press.

Deuze, Mark. 2007. Media Work. Cambridge, MA: Polity.

Edwards, Luke. 2015. ‘Newcastle owner Mike Ashley’s “preferred media partners” strategy threatens objective and independent press’. The Telegraph, 31 July 2015. Accessed at: http://www.telegraph.co.uk/sport/football/teams/newcastle-united/11777127/Mike-Ashleys-preferred-media-partners-strategy-at-Newcastle-threatens-an-objective-and-independent-press.html

Franks, Suzanne and O’Neill, Deirdre. 2015. ‘Women reporting sport: Still a man’s game?’ Journalism, 17(4), 474-492.

Greenslade, Roy. 2010. ‘Proclaiming press freedom as football clubs ban journalists may be own goal’. The Guardian, 9 August 2010. Available at: https://www.theguardian.com/media/greenslade/2010/aug/09/southampton-fc-press-freedom

Greenslade, Roy. 2013. ‘MPs condemn football clubs for banning journalists’. The Guardian, 10 December 2013.
Richard Haynes, Division of Communications, Media & Culture, University of Stirling, UK. E-mail: r.b.haynes@stir.ac.uk Adrian Hadland (author to whom correspondence should be addressed), Division of Communications, Media & Culture, University of Stirling, UK. E-mail: adrian.hadland@stir.ac.uk Paul Lambert, Division of Sociology, Social Policy and Criminology, University of Stirling, UK. E-mail: Paul.lambert@stir.ac.uk NOTES i Photojournalists describe themselves in many different ways, from visual storytellers to sports photojournalists. In this paper, photographers covering mainly sport and sports photojournalists are interchangeable concepts. Variations in the proportion of time spent taking pictures of sport, or the extent to which income is determined by sports work, are outlined as relevant. ii Getty Images has a history of aggressive purchasing in the sector, including sports photography agency Allsport in 1998, the Image Bank in 1999, Mediavast in 2007 and Jupiterimages in 2009. 
iii A full copy of the survey questionnaire is available at: http://rms.stir.ac.uk/converis-stirling/person/18027. iv This number was calculated by identifying those who mainly worked in photography and who also stated either (i) that they mostly undertake sports photography and that photography generated the bulk of their income (266 respondents), or (ii) in response to a separate independent question, that sports was the type of photography that generated most of their income (148 respondents). work_sbdgihzejvfyroz4q72hpkh4qu ---- Case Report A Gel Formulation Containing a New Recombinant Form of Manganese Superoxide Dismutase: A Clinical Experience Based on Compassionate Use-Safety of a Case Report Lucia Grumetto,1 Antonio Del Prete,2 Giovanni Ortosecco,1 Antonella Borrelli,3 Salvatore Del Prete,2 and Aldo Mancini4 1Department of Pharmacy, University of Naples Federico II, 80131 Naples, Italy 2Department of Neurosciences and Reproductive and Dentistry Sciences, University of Naples Federico II, 80131 Naples, Italy 3Molecular Biology and Viral Oncology Unit, Department of Experimental Oncology, National Institute of Cancer, IRCCS Foundation, Naples, Italy 4Leadhexa Inc., QB3-UCSF, San Francisco, CA, USA Correspondence should be addressed to Lucia Grumetto; grumetto@unina.it Received 9 May 2016; Accepted 13 July 2016 Academic Editor: Guy Kleinmann Copyright © 2016 Lucia Grumetto et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. Background. 
We report a case of bilateral posterior subcapsular cataracts (PSCs) in a 24-year-old man with a history of allergic conjunctivitis, caused by long-term therapy with glucocorticoids. Case Presentation. The patient showed a visual acuity of 9/10 in both eyes. He had followed a therapy with ketotifen and bilastine for four years. During the last six months before our evaluation, he was treated with chloramphenicol and betamethasone, interrupted after the onset of cataracts and increased intraocular pressure. We treated him with an ophthalmic gel preparation containing a new recombinant form of manganese superoxide dismutase (rMnSOD) at a concentration of 12.5 µg/mL, only in the right eye, while the left eye was treated with the standard protocol of Bendazac-lysine g 0.5. Conclusion. This case report shows the protective effects of rMnSOD against PSC disease, probably due to the capacity of rMnSOD to counter free radical species. 1. Background The occurrence of cataracts is one of the leading causes of visual impairment in the elderly [1]. The disease can be classified as cortical, nuclear, or posterior subcapsular according to the location of the opacity within the lens [2]. Cataract is a complex disease with an etiology not completely understood [3], related to environmental components, including UV light, sun exposure, vitamin C deficiency [4], and some drugs. Indeed, glucocorticoids may cause steroid-induced posterior subcapsular cataracts (PSCs), being capable of inducing changes in the transcription of genes in lens epithelial cells [5]. Oxidative stress has long been recognized as an important mediator of pathophysiology in lens epithelial cells (LECs) and also plays an essential role in the pathogenesis of cataract [6]. 
There is a plethora of works demonstrating the importance of maintaining a proper intake of antioxidants and indicating how genetic polymorphisms associated with genes for glutathione, an endogenous reducing molecule, may influence the development of cataracts [7]. Furthermore, some studies have been oriented to the investigation of biomarkers of oxidative stress in the cells of the lens, such as higher levels of telomerase activity, and several substances have been tested as scavengers against Reacting Oxygen Species (ROS) and lipid peroxidation, both mechanisms of cellular deterioration [8]. Recent studies have reported the association between ROS-induced DNA damage of LECs and the development of cataract, indicating that oxygen free radical generators such as hydrogen peroxide can accelerate the biomolecular mechanisms that underlie the development of congenital cataract due to specific mutations [9]. The effects of topical administration of glucocorticoids on rabbit lenses are well described in animals [10]. Hindawi Publishing Corporation, Case Reports in Ophthalmological Medicine, Volume 2016, Article ID 7240209, 4 pages, http://dx.doi.org/10.1155/2016/7240209. Figure 1: Lens of right eye (a) and left eye (b) before the administration; right eye (c) after treatment with the rMnSOD gel formulation and left eye (d) after treatment with Bendazac-lysine g 0.5 eye drops. The mechanism underlying steroid-induced damage could be due to a conformational change of lens crystallins which results in an unmasking of –SH groups, with a consequent increased susceptibility to oxidation; the hypothesis could be that the protective effect exerted by some substances occurs by counteracting this oxidation. 
In our previous work [11], we demonstrated the protective effect of rMnSOD against UV rays, powerful free radical generators, on the epithelium of both the conjunctiva and the cornea of rabbit eyes. In this case report, we describe a clinical experience based on compassionate use-safety of the gel formulation containing the rMnSOD protein in a young patient with PSCs caused by long-term therapy with glucocorticoids, monitoring his clinical condition over time. 2. Case Presentation 2.1. Patient and Operative Details. An Italian 24-year-old man with a history of allergic conjunctivitis was admitted to our department with a diagnosis of PSCs in both eyes made through slit-lamp examination, caused by a glucocorticoid therapy lasting six months, one drop twice daily in both eyes. At the moment of our observation, the patient had a visual acuity of 9/10 in both eyes, assessed with a decimal system eye test card (Sbisà, Firenze). The patient had been treated with ketotifen 0.05% eye drops, one drop in both eyes three times a day, ketotifen 0.05% ophthalmic gel, one drop in both eyes in the evening, and bilastine 20 mg tablets per os once a day in the evening. He had undergone this therapy for 4 years. Furthermore, only for the last six months, he took eye drops consisting of chloramphenicol 0.5% and betamethasone 0.2%, one drop in both eyes twice a day. The patient showed elevated intraocular pressure (IOP), about 23-24 mmHg, measured with a Nidek NT 2000 auto noncontact tonometer. In order to discontinue glucocorticoid therapy, the patient was treated with local specific immunotherapy; he was allergic to grass and olive tree; therefore, he was treated with a sublingual vaccine for systemic therapy and, for topical therapy, with eye drops containing allergens of grass and olive oil as reported in the literature [12]; in this way he was able to suspend glucocorticoid therapy. 
Figures 1(a) and 1(b) show the right and left eyes of the patient, evaluated with the slit lamp at the moment of his admittance. In accordance with the Declaration of Helsinki (1964), it was possible to treat him on a compassionate use basis. The ophthalmic gel preparation contained rMnSOD at a concentration of 12.5 µg/mL, one drop three times a day in the right eye, while the left eye was treated with Bendazac-lysine g 0.5 eye drops, one drop three times a day, for a period of five months. The experiment was conducted with the human subject's understanding and consent. The ophthalmic gel preparation with rMnSOD was obtained as described in the literature [11] at a concentration of 12.5 µg/mL, prepared as a single dosage form in a sterile room under a laminar flow hood. The developed formulations were primarily evaluated for clarity by visual observation against a black and white background in a well-lit cabinet, for drug content by UV spectrophotometry at 280 nm (Shimadzu UV-visible spectrophotometer, Japan), and for pH (Crison Instruments digital pH meter, Spain). 3. Results and Discussion The follow-up was carried out with a Digital Photo Set Righton RS-1000, a full-fledged zoom photo slit lamp producing 6-megapixel high-resolution digital images, to monitor the slightest symptoms using the programmed flash illumination. An overall zoom ratio covering 7.5x to 32.3x, with a standard 12.5x eyepiece and a magnification range from 7.5x to 32.3x, enables all details to be observed without vignetting. The optional 10x eyepiece was used for low-magnification observation. With digital photography, a photo frame eyepiece can be attached to the unit to show the available shooting area, while the image can be displayed on the computer monitor. 
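The drug-content check by UV absorbance at 280 nm described above maps absorbance to concentration through the Beer-Lambert law, A = epsilon * c * l. The sketch below shows that conversion; the extinction coefficient and the absorbance reading are placeholder values chosen for illustration, not measured properties of rMnSOD.

```python
def concentration_from_a280(a280, epsilon_ml_per_mg_cm, path_cm=1.0):
    """Beer-Lambert law: A = epsilon * c * l, so c = A / (epsilon * l).

    a280                 -- absorbance at 280 nm (dimensionless)
    epsilon_ml_per_mg_cm -- extinction coefficient in mL/(mg*cm) (placeholder value)
    path_cm              -- cuvette path length in cm
    Returns the concentration in mg/mL.
    """
    if epsilon_ml_per_mg_cm <= 0 or path_cm <= 0:
        raise ValueError("extinction coefficient and path length must be positive")
    return a280 / (epsilon_ml_per_mg_cm * path_cm)

# Hypothetical reading: A280 = 0.025 with epsilon = 2.0 mL/(mg*cm) in a 1 cm cell
c_mg_per_ml = concentration_from_a280(0.025, 2.0)  # 0.0125 mg/mL, i.e., 12.5 ug/mL
```

With a 1 cm cuvette this reduces to c = A / epsilon, so a routine absorbance reading can be checked against the nominal 12.5 µg/mL strength of the preparation.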
After 5 months of therapy with the rMnSOD gel formulation, the visual acuity of the right eye remained unchanged (Figure 1(c)) at 9/10, while the visual acuity of the left eye (Figure 1(d)) worsened to 6/10. The patient did not show any toxicity during the whole therapy. rMnSOD is a recombinant protein easily administrable in vitro and in vivo, and it is very active against free radicals. The protein is very stable in solution and is able to enter cells. By contrast, the wild-type MnSOD is not administrable in vitro or in vivo. The rMnSOD enters cells through the leader peptide, which is able to recognize the estrogen receptor on the cells [13, 14]. Many studies [11–15] have demonstrated its protective action against oxidative damage in organs and tissues of animals. The protective effects of the rMnSOD gel formulation were reported in our recent study on rabbit eyes [11]. In this case report, we want to highlight that rMnSOD protects from the degenerative process of PSC, compared to the current, internationally allowed therapy with bendaline [15]. Our findings suggest that the rMnSOD gel formulation might also be used to protect eyes in diseases such as PSC from oxidative damage. The patient will continue to be followed up in our clinic. 4. Conclusion According to current knowledge about the therapeutic target role of the redox balance, this case report suggests an important action of rMnSOD which, delivered as an ophthalmic gel, can achieve good therapeutic efficiency without side effects. As previously demonstrated in our ophthalmologic study on rabbit eyes [11], rMnSOD is able to reduce oxidative stress, thus preventing the worsening of PSC disease. Based on the evidence of the protective effects of rMnSOD against PSC disease, probably due to its capacity to counter free radical species, our findings suggest that the rMnSOD gel formulation could be considered as an additional treatment for PSC. 
Further study will be performed before taking action. Abbreviations IOP: Intraocular pressure LECs: Lens epithelial cells LOCSII or LOCSIII: Lens Opacification Classification System PSCs: Posterior subcapsular cataracts ROS: Reacting Oxygen Species rMnSOD: Recombinant manganese superoxide dismutase SOD: Superoxide dismutase. Consent The paper reports the results of an experimental investigation on a human subject and includes a statement that the study was performed with informed consent and in accordance with the Declaration of Helsinki (1964). Written informed consent for publication of clinical details and/or clinical images was obtained from the patient. A copy of the consent form is available for review. Competing Interests Dr. Aldo Mancini is the founder of Laedhexa Biotechnologies Inc. The other authors declare that they have no conflicts of interest. Authors’ Contributions Antonio Del Prete followed the patient and proposed the subject. Lucia Grumetto provided the gel formulation and wrote the paper. Aldo Mancini and Antonella Borrelli provided rMnSOD. Giovanni Ortosecco checked the paper. Salvatore Del Prete provided the images. All authors read and approved the final paper. Acknowledgments The authors are grateful to Lega Italiana di Napoli per la Lotta Contro i Tumori (LILT). References [1] B. Thylefors, “A global initiative for the elimination of avoidable blindness,” The American Journal of Ophthalmology, vol. 125, no. 1, pp. 90–93, 1998. [2] B. E. K. Klein, R. Klein, and K. L. P. Linton, “Prevalence of age-related lens opacities in a population: the Beaver Dam Eye Study,” Ophthalmology, vol. 99, no. 4, pp. 546–552, 1992. [3] A. Shiels and J. F. Hejtmancik, “Genetic origins of cataract,” Archives of Ophthalmology, vol. 125, no. 2, pp. 165–173, 2007. [4] J. Zuercher, J. Neidhardt, I. 
Magyar et al., “Alterations of the 5′-untranslated region of SLC16A12 lead to age-related cataract,” Investigative Ophthalmology & Visual Science, vol. 51, no. 7, pp. 3354–3361, 2010. [5] E. R. James, “The etiology of steroid cataract,” Journal of Ocular Pharmacology and Therapeutics, vol. 23, pp. 403–420, 2007. [6] O. Ates, H. H. Alp, I. Kocer, O. Baykal, and I. A. Salman, “Oxidative DNA damage in patients with cataract,” Acta Ophthalmologica, vol. 88, no. 8, pp. 891–895, 2010. [7] S. K. Çelik, N. Aras, Ö. Yildirim et al., “Glutathione S-transferase GSTM1 null genotype may be associated with susceptibility to age-related cataract,” Advances in Clinical and Experimental Medicine, vol. 24, no. 1, pp. 113–119, 2015. [8] M. A. Babizhayev and Y. E. Yegorov, “Biomarkers of oxidative stress and cataract. Novel drug delivery therapeutic strategies targeting telomere reduction and the expression of telomerase activity in the lens epithelial cells with N-acetylcarnosine lubricant eye drops: anti-cataract which helps to prevent and treat cataracts in the eyes of dogs and other animals,” Current Drug Delivery, vol. 11, no. 1, pp. 24–61, 2014. [9] K. Khoshaman, R. Yousefi, A. Mohammad Tamaddon, L. Saso, and A. A. Moosavi-Movahedi, “The impact of hydrogen peroxide on structure, stability and functional properties of human R12C mutant αA-crystallin: the imperative insights into pathomechanism of the associated congenital cataract incidence,” Free Radical Biology and Medicine, vol. 89, pp. 819–830, 2015. [10] C. Costagliola, G. Iuliano, M. Menzione, G. Apponi-Battini, and G. Auricchio, “Effect of topical glucocorticoid administration on the protein and nonprotein sulfhydryl groups of the rabbit lens,” Ophthalmic Research, vol. 19, no. 6, pp. 351–356, 1987. [11] L. Grumetto, A. Del Prete, G. Ortosecco et al., “Study on the protective effect of a new manganese superoxide dismutase on the microvilli of rabbit eyes exposed to UV radiation,” BioMed Research International, vol. 
2015, Article ID 973197, 9 pages, 2015. [12] A. Del Prete, C. Loffredo, A. Carderopoli, O. Caparello, R. Verde, and A. Sebastiani, “Local specific immunotherapy in allergic conjunctivitis,” Acta Ophthalmologica, vol. 72, no. 5, pp. 631–634, 1994. [13] A. Mancini, A. Borrelli, A. Schiattarella et al., “Tumor suppressive activity of a variant isoform of manganese superoxide dismutase released by a human liposarcoma cell line,” International Journal of Cancer, vol. 119, no. 4, pp. 932–943, 2006. [14] A. Borrelli, A. Schiattarella, R. Mancini et al., “A new hexapeptide from the leader peptide of rMnSOD enters cells through the oestrogen receptor to deliver therapeutic molecules,” Scientific Reports, vol. 6, article 18691, 2016. [15] J. A. Balfour and S. P. Clissold, “Bendazac lysine. A review of its pharmacological properties and therapeutic potential in the management of cataracts,” Drugs, vol. 39, no. 4, pp. 575–596, 1990. work_sck7yvc6szechj3guz3bb7xuyy ---- [PDF] Surgical pearl: The use of Polaroid photography for mapping Mohs micrographic surgery sections. | Semantic Scholar DOI: 10.1016/J.JAAD.2004.08.031 Corpus ID: 8342733. Surgical pearl: The use of Polaroid photography for mapping Mohs micrographic surgery sections. @article{Jih2005SurgicalPT, title={Surgical pearl: The use of Polaroid photography for mapping Mohs micrographic surgery sections.}, author={M. Jih and L. Goldberg and P. Friedman and A. Kimyai-Asadi}, journal={Journal of the American Academy of Dermatology}, year={2005}, volume={52 3 Pt 1}, pages={511-3}} M. Jih, L. Goldberg, +1 author A. Kimyai-Asadi. Published 2005, Journal of the American Academy of Dermatology. Accurate tissue orientation and precise mapping of tissue specimens are fundamental to the technique of Mohs micrographic surgery. 
The ideal method for tissue mapping would allow for precise replication of both the surgical defect and the excised tissue specimens in a reliable, rapid and reproducible manner. The most commonly utilized mapping technique is a hand-drawn sketch of the tissue specimens, which can be used to map and orient the tissue. The main drawback to hand-drawn maps is the… Citations (7): Mohs Mapping Fidelity: Optimizing Orientation, Accuracy, and Tissue Identification in Mohs Surgery. J. Li, S. Silapunt, M. Migden, J. L. McGinness, T. H. Nguyen. Dermatologic Surgery, 2018. A Survey of Mohs Tissue Tracking Practices. Jessica B. Dietert, D. Macfarlane. Dermatologic Surgery, 2019. Use of digital photographic maps for Mohs micrographic surgery. B. Moriarty, E. Seaton, F. Deroide. Dermatologic Surgery, 2014. Histological Mohs Maps Improve the Accuracy of Dermatology Residents' Interpretations of Mohs Slides: A Pilot Study. Jeffrey B. Tiger, Z. Ganesh, J. Mouzakis, S. Iwamoto. Dermatologic Surgery, 
2017. Primary cutaneous mucinous carcinoma: Report of two cases treated with Mohs’ micrographic surgery. R. Cecchi, V. Rapicano. The Australasian Journal of Dermatology, 2006. [The Muffin technique--an alternative to Mohs’ micrographic surgery]. M. Moehrle, H. Breuninger. Journal der Deutschen Dermatologischen Gesellschaft (JDDG), 2006. Melanoma of the Nose: Prognostic Factors, Three-Dimensional Histology, and Surgical Strategies. V. Jahn, H. Breuninger, C. Garbe, M. Maassen, M. Moehrle. The Laryngoscope, 2006. References (showing 1-3 of 3): Surgical pearl: digital imaging for mapping Mohs surgical specimens. C. Papa, M. Ramsey. Journal of the American Academy of Dermatology, 2000. Mohs Tissue Mapping and Processing: A Survey Study. S. Silapunt, S. R. Peterson, J. Alcalay, L. Goldberg. Dermatologic Surgery, 2003. Digital computerized mapping in Mohs micrographic surgery. J. Alcalay. Dermatologic Surgery, 2000. 
work_septivkparfmzg2isgi3wddoeu ---- Monitoring pigment-driven vegetation changes in a low-Arctic tundra ecosystem using digital cameras 
In the following study, we examine the capability of “near-field” remote sensing technologies, in this case digital, true-color cameras to produce surrogate in situ spectral data to characterize changes in vegetation driven by seasonal pigment dynamics. Simple linear regression was used to investigate relation- ships between common pigment-driven spectral indices calculated from field-based spectrometry and red, green, and blue (RGB) indices from corresponding digital photographs in three dominant vegetation com- munities across three major seasons at Toolik Lake, North Slope, Alaska. We chose the strongest and most consistent RGB index across all communities to represent each spectral index. Next, linear regressions were used to relate RGB indices and extracted leaf-level pigment content with a simple additive error propaga- tion of the root mean square error. Results indicate that the green-based RGB indices had the strongest rela- tionship with chlorophyll a and total chlorophyll, while a red-based RGB index showed moderate relationships with the chlorophyll to carotenoid ratio. The results suggest that vegetation color contributes strongly to the response of pigment-driven spectral indices and RGB data can act as a surrogate to track seasonal vegetation change associated with pigment development and degradation. Overall, we find that low-cost, easy-to-use digital cameras can monitor vegetation status and changes related to seasonal foliar condition and photosynthetic activity in three dominant, low-Arctic vegetation communities. Key words: hyperspectral; low-Arctic; red, green, and blue indices; true-color digital photography; vegetation pigments. Received 20 January 2018; accepted 24 January 2018. Corresponding Editor: Debra P. C. Peters. Copyright: © 2018 Beamish et al. 
This is an open access article under the terms of the Creative Commons Attribution License, which permits use, distribution and reproduction in any medium, provided the original work is properly cited. � E-mail: abeamish@awi.de INTRODUCTION Changes to the functioning of Arctic ecosys- tems, such as shifts in photosynthetic activity, net primary productivity, and species composition influence global climate change and the resulting feedbacks (Zhang et al. 2007, Bhatt et al. 2010, Parmentier and Christensen 2013). Climatic changes have been accompanied by broad-scale shifts in Arctic vegetation community composi- tion and species distribution, as well as fine-scale shifts in individual plant reproduction and ❖ www.esajournals.org 1 February 2018 ❖ Volume 9(2) ❖ Article e02123 info:doi/10.1002/ecs2.2123 http://creativecommons.org/licenses/by/3.0/ phenology (Walker et al. 2006, Bhatt et al. 2010, Elmendorf et al. 2012a, Bjorkman et al. 2015, Pre- v�ey et al. 2017). Changes in vegetation phenol- ogy can impact overall ecosystem functioning as a result of mismatched species–climate interac- tions. This in turn can impact species photosyn- thetic activity and growth through increased vulnerability to events such as frost, soil satura- tion, or disruption to species chilling require- ments (Inouye and McGuire 1991, Yu et al. 2010, Cook and Wolkovich 2012, Høye et al. 2013, Wheeler et al. 2015). In Arctic ecosystems, snow pack conditions and timing of snowmelt, rather than temperature, are primary drivers of the onset of vegetation phe- nology (Billings and Bliss 1959, Bjorkman et al. 2015). Changing snowmelt dynamics due to changes in winter precipitation and spring tem- peratures have been shown to directly influence tundra vegetation phenology throughout the growing season (Bjorkman et al. 2015). 
Thus, the onset of the growing season, or the leaf-out stage, acts as a benchmark of phenology, and identification and characterization of this stage are key to understanding the overall and subsequent phenology and fitness of tundra vegetation (Iler et al. 2013, Wheeler et al. 2015). The timing of maximum expansion and elongation of leaves and stems, when vegetation is at, or near, peak photosynthetic activity, or peak greenness, is another important benchmark of phenological phase. Peak greenness marks the climax of the growing season, the timing and magnitude of which can indicate ecosystem functioning, and acts as a benchmark for long-term monitoring of tundra productivity (Bhatt et al. 2010, 2013). A final benchmark of phenological phase is the end of the growing season, or senescence. In combination with leaf-out, senescence dictates growing season length and has implications for overall seasonal productivity and carbon assimilation (Park et al. 2016). Characterizing key biophysical properties of Arctic vegetation associated with these three benchmark phenological phases is important for accurate monitoring and quantification of local and regional changes in Arctic vegetation, and in turn of Arctic and global changes to energy and carbon cycling and the associated feedbacks. The presence of, and seasonal changes in, vegetation pigment content can be used to infer biophysical properties, including photosynthetic activity and foliar condition, at key phenological phases of Arctic vegetation.

The major photosynthetic pigment groups of chlorophylls and carotenoids absorb strongly in the visible spectrum, creating unique spectral signatures (Curran 1989, Gitelson and Merzlyak 1998, Gitelson et al. 2002, Coops et al. 2003). Chlorophyll pigments are the dominant factor controlling the amount of light absorbed by a plant and therefore photosynthetic potential, which ultimately dictates primary productivity.
Carotenoids (carotenes and xanthophylls) are responsible for absorbing incident radiation and providing energy to the photosynthetic process and are often used to provide information on the physiological status of vegetation (Young and Britton 1990, Bartley and Scolnik 1995). The third major pigment group, the anthocyanins (water-soluble flavonoids), has a less concise function in vegetation, providing photoprotection (Steyn et al. 2002, Close and Beadle 2003) and drought and freezing protection (Chalker-Scott 1999); anthocyanins have also been shown to play a role in recovery from foliar damage (Gould et al. 2002). Photosynthetic pigments influence a wide range of plant functioning from photosynthetic efficiency to protective functions (Demmig-Adams and Adams 1996) and can be measured destructively through laboratory analysis or non-destructively using high spectral resolution remote sensing (Gitelson et al. 1996, 2001, 2002, 2006).

The development and degradation of chlorophyll pigments as a result of vegetation emergence, stress, or senescence causes a distinct shift in spectral signatures in the visible spectrum toward shorter wavelengths, due to a narrowing of the major chlorophyll absorption feature (550–750 nm) and an overall reduction in spectral absorption (Ustin and Curtiss 1990). Carotenoids (yellow to orange) and anthocyanins (blue to red) have less straightforward seasonal, and therefore spectral, shifts. The spectral absorption by carotenoid pigments can result in mixed or masked spectral absorption with chlorophyll signals due to overlapping absorption features in the visible spectrum (400–700 nm). Carotenoids and anthocyanins often have relatively higher concentrations in senesced and young leaves, when chlorophyll concentrations are relatively low (Tieszen 1972, Sims and Gamon 2002, Stylinski et al. 2002). However, in general, the predictable
and distinct spectral features created by changes in vegetation pigment content allow inferences of ecological parameters such as photosynthetic activity, nutrient concentration, and biomass (Mutanga and Prins 2004, Mutanga and Skidmore 2004, Asner and Martin 2008, Ustin et al. 2009).

Of particular interest for in situ and laboratory spectroscopy of vegetation and phenology are the spectral characterization of the xanthophyll cycle (carotenoids) and the ratio between the photosynthetic pigments of carotenoids and chlorophyll, as they relate to radiation use efficiency and in turn to radiative transfer models (Blackburn 2007, Garbulsky et al. 2011, Gamon et al. 2016). Radiative transfer models are used to simulate leaf and canopy spectral reflectance and transmittance, and have been the basis for the inverse determination of biophysical and biochemical parameters of vegetation based on spectral reflectance (Féret et al. 2011). Even at lower spectral resolution, the importance of pigments and pigment dynamics related to the xanthophyll cycle can be seen in available products for recent satellite missions; for example, the European Space Agency's (ESA) Sentinel-2 provides many pigment-driven vegetation indices (see https://www.sentinel-hub.com/develop/documentation/eo_products/Sentinel2EOproducts).

While highly valuable, in situ reflectance spectroscopy, like optical remote sensing, is challenging for remote Arctic field sites. Remote locations, challenges associated with field-based monitoring, and the atmospheric and illumination conditions of optical satellite imagery of these areas limit available optical data, hindering phenological studies.
To complement the use of airborne and satellite imagery to measure vegetation pigments and classify biophysical properties in Arctic ecosystems, the use of true-color digital photography and red, green, and blue (RGB) indices to infer photosynthetic activity and foliar condition is a promising area of study and should be examined (Anderson et al. 2016, Beamish et al. 2016). Research has shown that Arctic vegetation changes are complex, with strong site- and species-specific responses through space and time (Walker et al. 2006, Bhatt et al. 2010, Elmendorf et al. 2012b, Bjorkman et al. 2015, Prévey et al. 2017). True-color digital photography represents both a simple and cost-effective way to increase data volume both spatially and temporally to capture site- and species-specific responses. Data from consumer-grade digital cameras have extremely high spatial resolution and high data collection capabilities and are less weather dependent than airborne and optical satellite remote sensing, as varying illumination can be easily corrected (Richardson et al. 2007, Nijland et al. 2014). Large-scale, repeatable, true-color digital photography networks in temperate ecosystems, such as the PhenoCam network (http://phenocam.sr.unh.edu/webcam/), highlight the existing ecological applications of this method. Additional work by Richardson et al. (2007) and Coops et al. (2012) demonstrates the link between ground-based true-color camera systems and satellite-scale (Landsat and MODIS) observations. In addition to phenological parameters, well-established RGB indices derived from consumer-grade digital cameras have been shown to accurately identify vegetation cover in Arctic tundra and temperate forests (Richardson et al. 2007, Ide and Oguma 2010, Nijland et al. 2014, Beamish et al. 2016) and can be closely related to gross primary production in grasslands, temperate forests, Arctic tundra, and wetlands (Ahrends et al. 2009, Migliavacca et al.
2011, Westergaard-Nielsen et al. 2013, Anderson et al. 2016), as well as containing a sensitivity to changes in plant photosynthetic pigments (Ide and Oguma 2013).

In this study, we examine the capability of in situ, true-color digital photography to act as a surrogate for in situ spectral data in assessing pigment-driven vegetation changes associated with three key seasons representing early, peak, and late season, in three dominant, low-Arctic tundra vegetation communities. To do this, we asked the following research questions: (1) What are the relationships between RGB indices and in situ, pigment-driven spectral indices? (2) How do these relationships change with community type and season? (3) To what extent do the indices represent actual chlorophyll and carotenoid content? We conclude by proposing the RGB indices best suited as proxies for in situ spectral data for monitoring seasonal vegetation change and the related pigment dynamics in a low-Arctic tundra.

METHODS

Study site

Data were collected at the Toolik Field Station (68.6257° N, 149.6143° W) on the North Slope of the Brooks Range, in north central Alaska, in the growing season of 2016. The Toolik area is representative of the southern Arctic Foothills, a physiographic province of the North Slope (Walker et al. 1989). Vegetation is a combination of moist tussock tundra, wet sedge meadows, and dry upland heaths.
Data were acquired in the Toolik Vegetation Grid, a 1 × 1 km long-term monitoring site established by the National Science Foundation (NSF) as part of the Department of Energy's R4D (Response, Resistance, Resilience, and Recovery to Disturbance in Arctic Ecosystems) project (Fig. 1). Vegetation monitoring plots (1 × 1 m) are positioned in close proximity to equally spaced site markers delineating the intersection of the Universal Transverse Mercator (UTM) coordinates at each point within the grid. A subset of the grid was sampled for the purpose of this study, representing three distinct and dominant vegetation communities as defined by Bratsch et al. (2016) (Fig. 1). The communities include moist acidic tundra (MAT), moist non-acidic tundra (MNT), and moss tundra (MT). A detailed description and the representativeness (% cover of the Toolik Vegetation Grid) of the three vegetation communities sampled within the long-term Toolik Vegetation Grid are shown in Table 1.

Digital photographs

In order to capture the three major seasons of early, peak, and late, we acquired true-color digital photographs on three days in the 2016 growing season (June 16, day of year [DOY] 165; July 11, DOY 192; August 17, DOY 229). Images were taken at nadir, approximately 1 m off the ground, with a white 1 × 1 m frame for registration of off-nadir images (Fig. 1). According to the phenology record of the Toolik Lake Environmental Data Centre (https://toolik.alaska.edu/edc/, 2017), which monitors 15 dominant species at the Toolik Field Station, the first image acquisition date occurred two weeks after the average date of first leaf-out (May 29, DOY 149), accurately capturing the peak of the early leaf-out phase. The second acquisition took place one week after the average date of last flower petal drop (July 3, DOY 185) and within one week of recorded full green-up (end of June).
The final acquisition took place three weeks after the average first day of fall color change (July 27, DOY 209) and within the range of recorded peak fall colors (mid-August).

Fig. 1. Toolik Vegetation Grid located in the Toolik Research Area on the North Slope of the Brooks Range in northern Alaska, and late season example plots of the three vegetation communities monitored in the study. MAT, moist acidic tussock tundra; MNT, moist non-acidic tundra; MT, moss tundra.

Images were acquired with a consumer-grade digital camera (Panasonic DM3 LMX, Osaka, Japan) in raw format, between 10:00 h and 14:00 h, under uniform cloud cover to reduce the influence of shadow and illumination differences. The digital camera collects color values by means of the Bayer matrix (Bayer 1976), with individual pixels coded with red, green, and blue values between 0 and 255. RGB values of each pixel in each image were extracted using ENVI+IDL (Version 4.8; Harris Geospatial, Boulder, Colorado, USA). To reduce the impact of non-nadir acquisitions, photo registration was undertaken using the white 1 × 1 m frame. Table 2 shows the normalized RGB channels and the RGB indices calculated from the normalized red, green, and blue values of the digital photographs, hereafter referred to as RGB indices. The digital camera indices 2G-RB and nG have been used extensively to monitor vegetation phenology and biomass (Richardson et al. 2007, Ide and Oguma 2013, Beamish et al. 2016). Additionally, we defined new RGB indices to examine their representativeness of selected pigment-driven spectral indices (Table 3). We chose these RGB indices because they represent as closely as possible the mathematical formulas of the selected pigment-driven indices presented in the Field-based spectral data section.
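To make the index definitions concrete, the sketch below evaluates the normalized channels and the RGB indices of Tables 2 and 3 from the mean red, green, and blue values of a plot image. This is our illustration, not the authors' code (the paper's extraction used ENVI+IDL); the function name and the example channel values are hypothetical.

```python
# Sketch: normalized RGB channels and the camera indices of Tables 2 and 3.
# Inputs are mean 8-bit channel values for one plot image (illustrative values).

def rgb_indices(r, g, b):
    """Return normalized channels and RGB indices for mean channel values."""
    total = r + g + b
    nR, nG, nB = r / total, g / total, b / total
    return {
        "nG": nG, "nR": nR, "nB": nB,          # normalized channels (Table 2)
        "GR": nG / nR, "GB": nG / nB,
        "2G-RB": 2 * nG - (nR + nB),           # Richardson et al. (2007)
        "nBG": (nB - nG) / (nB + nG),          # pseudo pigment-driven (Table 3)
        "nRG": (nR - nG) / (nR + nG),
        "RGr": (nR - nG) / nR,
        "nG-1": 1.0 / nG,
        "2R-GB": 2 * nR - (nG + nB),
    }

idx = rgb_indices(90.0, 120.0, 60.0)   # hypothetical mean plot values
```

Because every index is built from the normalized channels, the same function applies whether the inputs are single-pixel values or plot-level means.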
Table 1. Description of the three vegetation communities monitored in this study.

Moist acidic tussock tundra (6.1% cover): Occurs on soils with pH < 5.0–5.5 and is dominated by dwarf erect shrubs such as Betula nana and Salix pulchra, graminoid species (Eriophorum vaginatum), and acidophilous mosses. (1) Betula nana–Eriophorum vaginatum: dwarf-shrub, sedge, moss tundra (shrubby tussock tundra dominated by dwarf birch, Betula nana). Mesic to subhygric, acidic, moderate snow. Lower slopes and water-track margins; mostly on Itkillik I glacial surfaces. (2) Salix pulchra–Carex bigelowii: dwarf-shrub, sedge, moss tundra (shrubby tussock tundra dominated by diamond-leaf willow, Salix pulchra). Subhygric, moderate snow, lower slopes with solifluction.

Moist non-acidic tundra (5.8% cover): Dominated by mosses, graminoids (Carex bigelowii), and prostrate dwarf shrubs (Dryas integrifolia). Carex bigelowii–Dryas integrifolia, typical subtype; Tomentypnum nitens–Carex bigelowii, Salix glauca subtype: nontussock sedge, dwarf-shrub, moss tundra (moist non-acidic tundra). Mesic to subhygric, non-acidic (pH > 5.5), shallow to moderate snow. Solifluction areas and somewhat unstable slopes. Some south-facing slopes have scattered glaucous willow (Salix glauca).

Mossy tussock tundra (54.2% cover): A moist acidic tussock tundra-type community dominated by sedges (E. vaginatum) and abundant Sphagnum spp. Eriophorum vaginatum–Sphagnum; Carex bigelowii–Sphagnum: tussock sedge, dwarf-shrub, moss tundra (tussock tundra, moist acidic tundra). Mesic to subhygric, acidic, shallow to moderate snow, stable. This unit is the zonal vegetation on fine-grained substrates with ice-rich permafrost. Some areas on steeper slopes with solifluction are dominated by Bigelow sedge (Carex bigelowii).

Table 2. Calculated red, green, and blue (RGB) indices from RGB channels of digital photographs.

  Index   Formula              Source
  nG      G/(R+B+G)
  nR      R/(R+B+G)
  nB      B/(R+B+G)
  GR      nG/nR
  GB      nG/nB
  2G-RB   2 × nG − (nR + nB)   Richardson et al. (2007)

Table 3. Pseudo pigment-driven red, green, and blue (RGB) indices calculated from RGB channels of digital photographs.

  Index   Formula
  nBG     (nB − nG)/(nB + nG)
  nRG     (nR − nG)/(nR + nG)
  RGr     (nR − nG)/nR
  nG−1    1/nG
  2R-GB   2 × nR − (nG + nB)

Field-based spectral data

Within one week of the digital photographs, field-based spectral measurements of each vegetation plot were acquired using a GER 1500 field spectroradiometer (Spectra Vista Corporation, New York, New York, USA). Spectral data were acquired on June 14 (DOY 165), July 8 (DOY 189), and August 23 (DOY 235) in 2016, with a spectral range of 350–1050 nm, 512 bands, a spectral resolution of 3 nm, a spectral sampling of 1.5 nm, and an 8° field of view. Data were acquired under clear weather conditions between 10:00 and 14:00 local time, corresponding to the highest solar zenith angle. In each plot, radiance data were acquired at nadir approximately 1 m off the ground, resulting in an approximately 15 cm diameter ground instantaneous field of view. To reduce noise and characterize the spectral variability in each plot, an average of nine point measurements of upwelling radiance (L_up) in each of the 1 × 1 m plots was used. Downwelling radiance (L_down) was measured as the reflection from a white Spectralon© plate. Surface reflectance (R) was processed as:

R = (L_up / L_down) × 100    (1)

Reflectance spectra (0–100%) were preprocessed with a Savitzky-Golay smoothing filter (n = 11), and to remove sensor noise at the spectral limits of the radiometer, data were subset to 400–985 nm.

Using the spectral reflectance data, we calculated common pigment-driven vegetation indices, focusing on vegetation color and on indices produced as remote sensing products by the ESA and the National Aeronautics and Space Administration (NASA); a brief description of each is provided below (Table 4).
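The reflectance processing chain can be sketched as follows: Eq. 1 converts upwelling and panel radiance to percent reflectance, a Savitzky-Golay filter (window n = 11) smooths the spectrum, and two of the pigment-driven indices (PRI and PSRI, as defined in Table 4) are then read off the smoothed bands. The radiance values below are synthetic, and the filter's polynomial order (2) is an assumption, as the paper states only the window size.

```python
# Sketch of the field-spectrometry processing chain (Eq. 1 + smoothing + indices).
# All spectra here are synthetic illustrations, not measured data.
import numpy as np
from scipy.signal import savgol_filter

wl = np.arange(400.0, 985.0, 1.5)                 # nm; post-subset spectral grid
l_up = 40.0 + 10.0 * np.sin(wl / 50.0)            # synthetic upwelling radiance
l_down = np.full_like(wl, 100.0)                  # synthetic Spectralon panel radiance

refl = l_up / l_down * 100.0                      # Eq. 1: surface reflectance (%)
refl = savgol_filter(refl, window_length=11, polyorder=2)  # polyorder assumed

def band(spectrum, wavelengths, target_nm):
    """Reflectance at the band closest to the target wavelength."""
    return spectrum[np.argmin(np.abs(wavelengths - target_nm))]

# Two Table 4 indices computed from the smoothed spectrum:
pri = (band(refl, wl, 533) - band(refl, wl, 569)) / \
      (band(refl, wl, 533) + band(refl, wl, 569))
psri = (band(refl, wl, 678) - band(refl, wl, 498)) / band(refl, wl, 748)
```

The nearest-band lookup mirrors how narrowband indices are evaluated on a sampled spectrum; with 1.5 nm sampling, the nearest band is always within 0.75 nm of the nominal wavelength.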
The indices calculated from the field-based GER data are hereafter referred to as pigment-driven spectral indices. The Photochemical Reflectance Index (PRI) is an indicator of photosynthetic radiation use efficiency and was first proposed by Gamon et al. (1992). The Plant Senescence Reflectance Index (PSRI) is sensitive to senescence-induced reflectance changes arising from changes in chlorophyll and carotenoid content and was proposed by Merzlyak et al. (1999). The Pigment Specific Simple Ratios (PSSRa and PSSRb) aim to model chlorophyll a and b, respectively (Blackburn 1998, 1999, Sims and Gamon 2002). The Chlorophyll Carotenoid Index (CCI) was developed by Gamon et al. (2016) to track the phenology of photosynthetic activity of evergreen species. The carotenoid reflectance indices (CRI1 and CRI2) use the reciprocal reflectance at 508 and 548 nm and at 508 and 698 nm, respectively, in order to remove the effect of chlorophyll (Gitelson et al. 2002). The anthocyanin reflectance indices (ARI1 and ARI2), developed by Gitelson et al. (2001, 2006), are designed to reduce the influence of chlorophyll absorption and isolate the anthocyanin absorption by taking the difference between the reciprocals of the green (550 nm) and red-edge (700 nm) reflectance. ARI2 includes a multiplication by the near-infrared (NIR) reflectance to reduce the influence of leaf thickness and density.

Vegetation pigment concentration

To estimate how accurately plot-level indices represent actual leaf-level pigment content, leaves and stems (n = 213) of the dominant vascular species in a subset of the sampled plots were collected at early, peak, and late season for chlorophyll and carotenoid analysis. Samples were placed in porous tea bags and preserved in a silica gel desiccant in an opaque container for up to 3 months until pigment extraction (Esteban et al. 2009). Each sample was homogenized by grinding with a mortar and pestle.
Table 4. Pigment-driven spectral indices.

  Index                               Short  Formula                               Source
  Photochemical Reflectance Index     PRI    (ρ533 − ρ569)/(ρ533 + ρ569)           Gamon et al. (1992)
  Plant Senescence Reflectance Index  PSRI   (ρ678 − ρ498)/ρ748                    Merzlyak et al. (1999)
  Pigment Specific Simple Ratio       PSSR   PSSRa = ρ800/ρ671; PSSRb = ρ800/ρ652  Blackburn (1998, 1999); Sims and Gamon (2002)
  Chlorophyll Carotenoid Index        CCI    (ρ531 − ρ645)/(ρ531 + ρ645)           Gamon et al. (2016)
  Carotenoid Reflectance Index 1      CRI1   (1/ρ508) − (1/ρ548)                   Gitelson et al. (2002)
  Carotenoid Reflectance Index 2      CRI2   (1/ρ508) − (1/ρ698)                   Gitelson et al. (2002)
  Anthocyanin Reflectance Index 1     ARI1   (1/ρ550) − (1/ρ700)                   Gitelson et al. (2001)
  Anthocyanin Reflectance Index 2     ARI2   ρ800 × [(1/ρ550) − (1/ρ700)]          Gitelson et al. (2001)

Approximately 1.00 mg (±0.05 mg) of homogenized sample was placed into a vial with 2 mL of dimethylformamide (DMF). Vials were then wrapped in aluminum foil to eliminate any degradation of pigments due to UV light and stored in a fridge (4°C) for 24 h. Samples were measured into a cuvette prior to spectrophotometric analysis. Bulk pigment concentrations were then estimated using a spectrophotometer measuring absorption at 646.8, 663.8, and 480 nm (Porra et al. 1989). Absorbance (A) values at specific wavelengths were transformed into µg/mg concentrations of chlorophyll a (Chl_a), chlorophyll b (Chl_b), total chlorophyll (Chl_a+b), and carotenoids (Car) using the following equations:

Chl_a = 12.00 × A_663.8 − 3.11 × A_646.8    (2)

Chl_b = 20.78 × A_646.8 − 4.88 × A_663.8    (3)

Chl_a+b = 17.67 × A_646.8 + 7.12 × A_663.8    (4)

Car = (1000 × A_480 − 1.12 × Chl_a − 34.07 × Chl_b)/245    (5)

The chlorophyll a to chlorophyll b ratio (Chl_a:b) and the chlorophyll to carotenoid ratio (Chl:Car) were calculated by dividing Eq. 2 by Eq. 3 and Eq. 4 by Eq. 5, respectively. Pigment concentration was calculated as the average concentration of the dominant species in each plot.
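Eqs. 2–5 can be sketched as a small conversion routine. The absorbance values below are made up for illustration, and note one caveat: Eq. 5 is garbled in the source, so the carotenoid line here assumes the standard DMF-extract formula (Wellburn 1994 form) that the visible fragments appear to reference; Eqs. 2–4 are internally consistent as written (Eq. 4 equals the sum of Eqs. 2 and 3 coefficient by coefficient).

```python
# Sketch of Eqs. 2-5: bulk pigment concentrations from absorbance readings
# at 663.8, 646.8, and 480 nm. Absorbance inputs are illustrative only.

def pigments(a664, a647, a480):
    """Chl a, Chl b, total Chl, and carotenoids from absorbance values."""
    chl_a = 12.00 * a664 - 3.11 * a647        # Eq. 2
    chl_b = 20.78 * a647 - 4.88 * a664        # Eq. 3
    chl_ab = 17.67 * a647 + 7.12 * a664       # Eq. 4 (= Eq. 2 + Eq. 3)
    # Eq. 5: ASSUMED reconstruction of the garbled carotenoid equation
    # (standard DMF coefficients; only A480 and the /245 survive in the source)
    car = (1000 * a480 - 1.12 * chl_a - 34.07 * chl_b) / 245
    return chl_a, chl_b, chl_ab, car

chl_a, chl_b, chl_ab, car = pigments(a664=0.50, a647=0.25, a480=0.30)
chla_to_b = chl_a / chl_b     # Chl_a:b  (Eq. 2 / Eq. 3)
chl_to_car = chl_ab / car     # Chl:Car  (Eq. 4 / Eq. 5)
```

The consistency check Chl_a+b = Chl_a + Chl_b is a useful sanity test when transcribing such coefficient sets.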
Data analysis

We examined the potential of digital camera RGB data as a proxy for the identified pigment-driven vegetation indices using existing and new RGB indices. Simple linear regression was performed between the RGB indices defined in Tables 2 and 3 and the hyperspectral PRI, PSRI, PSSR, CCI, CRI, and ARI indices defined in Table 4 (Appendix S1: Table S1). The RGB and spectral indices from significant RGB/spectral linear regressions were then chosen for linear regression with Chl_a, Chl_b, Chl_a:b, and Car:Chl for each vegetation community to explore how well the chosen indices represent actual leaf-level pigment content.

Because we assume RGB indices can be used in place of pigment-driven spectral indices, we wanted to include the error associated with spectral indices as a proxy for leaf-level pigments to more accurately estimate uncertainties. To do this, we used a simple additive error propagation of the root mean square error of the selected RGB/pigment and spectral/pigment regressions using the following equations:

RMSE = √[ Σ_{t=1}^{n} (X_obs,t − X_model,t)² / n ]    (6)

RMSE_prop = √[ (RMSE_{z~y})² + (RMSE_{x~y})² ]    (7)

where RMSE is the root mean square error, X_obs,t and X_model,t are the actual and predicted values of linear regression t, respectively, and RMSE_prop is the propagated RMSE of the RGB~pigment regressions, combining the RMSE of the RGB~pigment regressions (RMSE_{z~y}) and the RMSE of the spectral~pigment regressions (RMSE_{x~y}; Appendix S1: Table S2). All analyses were performed in R (R Development Core Team 2007), and an alpha of 0.05 was used.
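The two error measures above can be sketched directly (the paper's analyses were run in R; this Python illustration uses fabricated example values):

```python
# Sketch of Eqs. 6-7: regression RMSE and the simple additive (quadrature)
# propagation of two RMSE values. Example arrays are illustrative.
import numpy as np

def rmse(observed, modeled):
    """Eq. 6: root mean square error between observed and fitted values."""
    observed, modeled = np.asarray(observed), np.asarray(modeled)
    return float(np.sqrt(np.mean((observed - modeled) ** 2)))

def rmse_prop(rmse_rgb_pigment, rmse_spectral_pigment):
    """Eq. 7: propagated RMSE combining the two regression errors."""
    return float(np.sqrt(rmse_rgb_pigment ** 2 + rmse_spectral_pigment ** 2))

e_rgb = rmse([1.0, 2.0, 3.0], [1.1, 1.9, 3.2])   # hypothetical RGB~pigment fit
e_total = rmse_prop(e_rgb, 0.10)                  # combined with spectral~pigment RMSE
```

Because the two error terms add in quadrature, the propagated value is always at least as large as the larger of the two inputs, which is the conservative behavior intended by the authors' additive propagation.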
RESULTS

RGB indices as a surrogate for pigment-driven spectral indices

The strength of the relationships between the RGB indices and the pigment-driven spectral indices was variable between vegetation communities (Appendix S1). The most consistent and strongest regressions across the three vegetation communities were selected and are presented in Table 5 and Fig. 2. The strongest three regressions were observed between RGr and ARI2 in MT (R² = 0.77, P < 0.01, RMSE = 0.04), followed by RG and PSRI in MAT (R² = 0.75, P < 0.01, RMSE = 0.05) and MNT (R² = 0.75, P < 0.01, RMSE = 0.04; Fig. 2). PSRI had the strongest relationships across the three communities, while the carotenoid reflectance indices CRI1 and CRI2 and the anthocyanin index ARI1 had the weakest relationships. In general, MT had the strongest and most consistent regressions across all indices, followed by MAT and finally MNT. We chose the following six RGB indices, 2G-RB, 2R-GB, nG, nG−1, RG, and RGr, representing the pigment-driven indices PSSRb, PRI, PSSRa, CCI, PSRI, and ARI2, respectively, for linear regression with leaf-level pigment content.

RGB indices as a surrogate for leaf-level pigment content

The relationships between the selected RGB indices and pigment content suggest high variability in both fit and uncertainty across the three vegetation communities (Table 6). In general, the indices
2R-GB, representing PRI, also had a moderate relationship with Chl: Car in MT (R2 = 0.53, P < 0.01, RMSE = 0.21) but poor performance in the other two communities. The indices performed worst when predicting Chlb with R 2 ≤ 0.27 in all cases. DISCUSSION The relationships between selected RGB indices and pigment-driven spectral indices sup- port the capability of digital cameras to act as a surrogate for in situ spectral data and for the study of seasonal vegetation changes associated with pigment dynamics in dominant low-Arctic vegetation communities. Evidence for the capa- bility of digital cameras as an ecological monitor- ing tool of vegetation phenology, biomass, and productivity and the relationships of these parameters to spectrally derived vegetation indices exists in a variety of ecosystems (Ahrends et al. 2009, Coops et al. 2010, Ide and Oguma 2010, Migliavacca et al. 2011, Westergaard-Nielsen et al. 2013, Nijland et al. 2014, Anderson et al. 2016, Beamish et al. 2016). Our study adds to this knowledge base by expanding to pigment-driven vegetation indices chosen for their indication of photosynthetic efficiency, a parameter of high interest in vegetation remote sensing. In addi- tion, we showcase the utility of a ground-based camera system in an ecosystem characterized by challenging acquisition conditions for spectral data. There are some technical considerations when using these types of digital cameras that need to be considered such as consumer-grade digital cameras typically have limited range charge cou- ple device/complementary metal oxide semicon- ductor sensors and employ automatic exposure adjustments with changing environmental brightness and local illumination. This in turn affects the comparability of images collected under different conditions; however, this can be overcome through the use of band ratios which are insensitive to brightness. 
Local illumination differences caused by direct sunlight can still introduce error; however, the Arctic is subject to unique illumination conditions because of the low solar zenith angle, long periods of daylight, and frequent cloud cover leading to frequent diffuse illumination conditions. Previous research has suggested that collection of data under uniform cloud cover (diffuse conditions) is ideal (Ide and Oguma 2010). All images in this study were collected under uniform cloud cover by choice; however, when considering a framework of high-frequency (daily or hourly) data collection, these phenomena should be taken into consideration. Additionally, this study does not take into account the impact of the atmospheric scattering common in the shorter (blue) wavelengths of air- and space-borne platforms. This phenomenon would presumably reduce the correlation in the blue channel, and thus its use should be minimized when inferring relationships between air- and space-borne platforms.

Table 5. The most consistent, significant linear regressions between red, green, and blue (RGB) indices and pigment-driven spectral indices in the three communities. Each community block lists R², P-value, RMSE.

  Spectral  RGB    MAT                  MNT                  MT
  PRI       2R-GB  0.46  <0.01  0.04    0.29  <0.01  0.04    0.53  <0.01  0.03
  PSRI      RG     0.78  <0.01  0.05    0.75  <0.01  0.04    0.70  <0.01  0.05
  PSSRa     nG     0.72  <0.01  0.01    0.60  <0.01  0.01    0.71  <0.01  0.01
  PSSRb     2G-RB  0.64  <0.01  0.04    0.53  <0.01  0.03    0.51  <0.01  0.03
  CCI       nG−1   0.56  <0.01  0.04    0.58  <0.01  0.02    0.73  <0.01  0.02
  CRI1      GB     0.40  <0.01  0.18    0.16  0.02   0.14    0.34  <0.01  0.14
  CRI2      RB     0.09  <0.01  0.05    −0.01 0.43   0.04    0.17  0.03   0.02
  ARI1      RG     0.30  <0.01  0.10    0.17  0.01   0.07    0.42  <0.01  0.07
  ARI2      RGr    0.63  <0.01  0.06    0.43  <0.01  0.04    0.77  <0.01  0.04

  Note: Bold values in the original represent moderate or stronger (R² > 0.40) significant linear regressions.
However, the use of the blue channel in ground-based remote sensing, as in this study, is less of a concern, as atmospheric scattering is minimal at this near-sensing scale. Another consideration is that digital cameras contain what is known as a Bayer color filter array, which combines one blue, one red, and two green sensors into an image pixel, making green the most sensitive channel in the camera, to mirror the sensitivity of the human eye (Bayer 1976).

Fig. 2. Simple linear regression between the six best RGB indices and pigment-driven spectral indices across three seasons in each community type. MAT, moist acidic tussock tundra; MNT, moist non-acidic tundra; MT, moss tundra; RMSE, root mean square error.

From the standpoint of the human eye and digital cameras, all of the best performing RGB indices except one provide a measure of vegetation greenness, and the weaker relationships seen with the carotenoid and anthocyanin (non-green pigment) indices are indicative of the camera's overall green bias or green sensitivity. The best performing green band-based RGB indices of 2G-RB, nG, nG−1, RG, and RGr had the strongest relationships (R² > 0.43) with the pigment-driven spectral indices of PSSRb, PSSRa, CCI, PSRI, and ARI2, respectively, across all three vegetation communities. The moderate to strong relationships observed in all three communities suggest these RGB indices can be used to monitor seasonal vegetation changes associated with pigment-driven color changes, mostly related to the amount of green or chlorophyll pigments, in dominant low-Arctic vegetation communities (Fig. 2).

Table 6. Simple linear regression between the selected red, green, and blue (RGB) indices (with the spectral index each represents in parentheses) and pigment content, with the propagated root mean square error (RMSE-P).
(a)
Pigment    Veg    2G-RB (PSSRb)                 2R-GB (PRI)                   nG (PSSRa)
(µg/mg)           R2     P      RMSE   RMSE-P   R2     P      RMSE   RMSE-P   R2     P      RMSE   RMSE-P
Chla       MAT    0.38   0.00   0.31   0.48     0.17   0.02   0.35   0.52     0.39   0.00   0.30   0.42
           MNT    0.59   0.01   0.12   0.22     0.06   0.26   0.18   0.25     0.61   0.01   0.12   0.22
           MT     0.41   0.01   0.30   0.48     0.23   0.06   0.34   0.50     0.40   0.01   0.30   0.47
Chlb       MAT    −0.02  0.45   0.12   0.18     −0.01  0.44   0.12   0.19     −0.01  0.43   0.12   0.18
           MNT    0.27   0.09   0.07   0.11     −0.14  0.84   0.09   0.11     0.26   0.09   0.07   0.11
           MT     0.08   0.18   0.20   0.29     0.06   0.20   0.20   0.28     0.08   0.18   0.20   0.28
Chla+b     MAT    0.30   0.00   0.20   0.31     0.15   0.03   0.22   0.33     0.31   0.00   0.20   0.28
           MNT    0.53   0.02   0.08   0.15     −0.02  0.39   0.12   0.16     0.54   0.01   0.08   0.15
           MT     0.24   0.05   0.28   0.44     0.16   0.10   0.30   0.43     0.24   0.05   0.28   0.42
Chla:b     MAT    0.50   0.00   0.32   0.53     0.15   0.03   0.41   0.61     0.50   0.00   0.32   0.51
           MNT    −0.02  0.38   0.28   0.41     0.33   0.06   0.23   0.37     0.01   0.34   0.28   0.39
           MT     0.12   0.13   0.32   0.36     0.04   0.25   0.33   0.41     0.12   0.13   0.32   0.39
Chl:Car    MAT    0.19   0.01   0.18   0.25     0.12   0.04   0.19   0.26     0.20   0.01   0.18   0.26
           MNT    0.01   0.34   0.21   0.30     −0.14  0.93   0.22   0.31     0.01   0.34   0.21   0.31
           MT     0.23   0.06   0.19   0.27     0.53   0.00   0.15   0.21     0.24   0.05   0.19   0.29

(b)
Pigment    Veg    nG⁻¹ (CCI)                    RG (PSRI)                     RGr (ARI2)
(µg/mg)           R2     P      RMSE   RMSE-P   R2     P      RMSE   RMSE-P   R2     P      RMSE   RMSE-P
Chla       MAT    0.37   0.00   0.31   0.49     0.35   0.00   0.31   0.46     0.38   0.00   0.31   0.45
           MNT    0.60   0.01   0.12   0.22     0.56   0.01   0.13   0.21     0.55   0.01   0.13   0.15
           MT     0.39   0.01   0.31   0.47     0.35   0.02   0.32   0.47     0.36   0.02   0.31   0.47
Chlb       MAT    −0.02  0.49   0.12   0.19     −0.02  0.44   0.12   0.19     0.00   0.36   0.12   0.19
           MNT    0.26   0.09   0.07   0.11     0.04   0.29   0.08   0.11     0.03   0.30   0.08   0.11
           MT     0.08   0.19   0.20   0.27     0.08   0.18   0.20   0.28     0.08   0.18   0.20   0.28
Chla+b     MAT    0.30   0.00   0.20   0.32     0.28   0.00   0.21   0.31     0.31   0.00   0.20   0.30
           MNT    0.53   0.02   0.08   0.15     0.43   0.03   0.09   0.14     0.43   0.03   0.09   0.11
           MT     0.24   0.05   0.28   0.41     0.23   0.06   0.28   0.42     0.23   0.06   0.28   0.41
Chla:b     MAT    0.51   0.00   0.31   0.54     0.43   0.00   0.34   0.52     0.43   0.00   0.34   0.51
           MNT    0.00   0.35   0.28   0.39     0.22   0.12   0.25   0.37     0.21   0.12   0.25   0.35
           MT     0.12   0.13   0.32   0.39     0.10   0.16   0.32   0.39     0.11   0.15   0.32   0.40
Chl:Car    MAT    0.19   0.01   0.18   0.27     0.20   0.01   0.18   0.26     0.23   0.01   0.18   0.24
           MNT    0.01   0.34   0.21   0.30     −0.05  0.46   0.21   0.31     −0.06  0.47   0.22   0.28
           MT     0.24   0.05   0.19   0.29     0.36   0.02   0.17   0.27     0.35   0.02   0.17   0.23

Notes: Veg, vegetation community; Pig, pigment content; P, P-value of the regression. Bold values represent moderate or stronger (R2 > 0.40) significant linear regressions. MAT, moist acidic tussock tundra; MNT, moist non-acidic tundra; MT, moss tundra; RMSE, root mean square error. See Tables 1–3 for definitions of both spectral and RGB indices.

(Fig. 2). Though we found an overall weakness in RGB indices to accurately predict leaf-level pigment content, a number of significant weak to moderate relationships between Chla and Chla+b with the green RGB indices suggest they do capture seasonal changes in chlorophyll content. Moderate to weak relationships between pigment content and pigment-driven spectral indices have also been reported, even with data collected concurrently at the leaf level, due to variations in species-specific plant structure and developmental stage (Sims and Gamon 2002). The pigment content presented in this study is an approximation using the mean of dominant vascular species and does not take into account the influence of moss or standing litter, both of which influence greenness, especially at early and late season when vascular vegetation is not fully expanded. This combination of roughly estimated pigment content, a greenness signal that is not composed solely of vascular species, especially at early and late season, and species-specific characteristics could explain the weak correlations and the differences between communities. However, it should be noted that all indices demonstrated a significant moderate correlation with pigment content in at least one community.
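The quantities behind Table 6 can be sketched in a few lines of code. The sketch below (Python; a minimal illustration, not the authors' code) computes camera-band indices from mean R, G, B digital numbers and the simple-linear-regression statistics reported per index and community. The index formulas are assumed standard forms, since the paper's defining Tables 1–3 are not reproduced in this excerpt, and the function names are hypothetical.

```python
import math

def rgb_indices(r, g, b):
    """Candidate RGB indices from mean camera digital numbers.

    Assumed, commonly used formulations (excess green, normalized
    chromatic coordinates, band ratios); the paper's own definitions
    are given in its Tables 1-3, which this excerpt omits.
    """
    ng = g / (r + g + b)  # normalized green chromatic coordinate
    return {
        "2G-RB": 2 * g - r - b,  # excess-green-style index
        "nG": ng,
        "nG^-1": 1.0 / ng,       # inverse normalized green (assumed)
        "RG": r / g,             # red:green band ratio (assumed)
    }

def regression_stats(xs, ys):
    """Slope, intercept, R2, and RMSE for a simple linear regression
    of ys on xs (the R2 and RMSE columns of Table 6).  The tabulated
    P-value comes from a t-test on the slope, omitted here for brevity.
    """
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((y - (intercept + slope * x)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return slope, intercept, 1.0 - ss_res / ss_tot, math.sqrt(ss_res / n)
```

Regressing, for example, the nG values of a set of plots against their PSSRa values would reproduce one cell block of the table.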
The five pigment-driven spectral indices used in this study target different pigment groups; however, all are relevant for the monitoring of foliar condition and photosynthetic activity, suggesting the cameras can also indirectly infer changes to non-green pigment groups through an absence of, or changes in, the greenness or chlorophyll pigments. Though the green RGB indices were generally the most consistent and had the strongest relationships, we also found that PRI is well represented by 2R-GB and nR in MT and MAT (Table 5; Appendix S1). Since the pigment-driven spectral index PRI is a prominently used index for estimating photosynthetic light use efficiency (Gamon et al. 1992, Peñuelas et al. 1995, Gamon et al. 1997), the use of digital camera nR to monitor this parameter is a particularly interesting result.

Current operational satellite missions provide an excellent opportunity for global monitoring of foliar condition with relatively high spatial resolution. Here we focused on exploring the spectral information in the visible wavelength region related to tundra vegetation color, driven by pigment dynamics. The broadband spectral settings of major operational satellite missions are at first consideration not optimal for capturing the detailed spectral reflectance of vegetation as represented by narrowband spectral vegetation indices. However, we show that RGB color values from consumer-grade digital cameras, measuring even broader-band spectral information, show correlations with reputed pigment-driven spectral indices such as the CCI, PSSRa, and ARI2 indices. Our results suggest vegetation color contributes strongly to the response of these hyperspectral indices.
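The narrowband PRI and its broadband camera counterpart discussed above can be written out explicitly. A minimal sketch: the PRI formula follows Gamon et al. (1992), while nR is assumed here to be the standard normalized red chromatic coordinate (the paper's exact definition is in its tables, not this excerpt).

```python
def pri(r531, r570):
    """Photochemical Reflectance Index (Gamon et al. 1992):
    (R531 - R570) / (R531 + R570), from narrowband reflectance at
    531 nm and a 570 nm reference band."""
    return (r531 - r570) / (r531 + r570)

def n_red(r, g, b):
    """Normalized red chromatic coordinate, nR = R / (R + G + B),
    from camera digital numbers (assumed formulation)."""
    return r / (r + g + b)
```

In this framing, tracking nR through the season is the camera-side analogue of tracking PRI with a field spectrometer.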
The use of narrowband spectral indices related to vegetation color and pigment dynamics to monitor vegetation status and condition is already occurring with the Earth Observation System products from the Sentinel-2 multispectral satellite, and will become more common in the future with upcoming hyperspectral satellite missions such as the Environmental Mapping and Analysis Program (EnMAP) planned for 2020.

CONCLUSIONS

Results of this study support the utility of digital cameras to act as a surrogate for in situ spectral data to monitor pigment dynamics as a result of seasonal changes in a low-Arctic ecosystem. The RGB indices using the green band perform best as proxies for pigment-driven spectral indices. We highlight nG as a proxy for PSSRa in particular because of moderate to strong relationships with both spectral and pigment data, suggesting this RGB index can track changes in chlorophyll a content. We also suggest 2G-RB as a proxy for PSSRb, nG⁻¹ for CCI, RG for PSRI, and RGr as a proxy for ARI2. Though the accuracy of pigment prediction for these indices is not as strong, there is evidence that RGB indices do track seasonal changes. This method represents a promising gap-filling tool and complementary data source for optical remote sensing of vegetation in logistically and climatically challenging Arctic ecosystems. The implementation of low-cost time-lapse systems or nadir point measurements by an observer with consumer-grade digital cameras is highly feasible and proven in Arctic tundra ecosystems, and this study increases the possible applications of this method.
ACKNOWLEDGMENTS

This research was supported by the EnMAP science preparatory program funded under the DLR Space Administration with resources from the German Federal Ministry of Economic Affairs and Energy (support code: DLR/BMWi 50 EE 1348), in partnership with the Alfred Wegener Institute in Potsdam. Funding has also been provided through an NSERC Doctoral postgraduate scholarship awarded to AB. SC acknowledges support from the European Union's Horizon 2020 Research and Innovation Programme (Grant No: 689443) via the iCUPE project (Integrative and Comprehensive Understanding on Polar Environments). The authors would like to thank Toolik Research Station and Skip Walker of the Alaska Geobotany Center at the University of Alaska, Fairbanks for logistical support. We would also like to thank Marcel Buchhorn and the HySpex Lab at the University of Alaska, Fairbanks for calibration of the spectrometer, and MB and SW for providing GIS data of the Toolik Area. Finally, we would like to thank Robert Guy from the Faculty of Forestry at the University of British Columbia for laboratory support.

LITERATURE CITED

Ahrends, H., S. Etzold, W. L. Kutsch, R. Stöckli, R. Brügger, F. Jeanneret, H. Wanner, N. Buchmann, and W. Eugster. 2009. Tree phenology and carbon dioxide fluxes: use of digital photography for process-based interpretation at the ecosystem scale. Climate Research 39:261–274.
Anderson, H. B., L. Nilsen, H. Tømmervik, S. Karlsen, S. Nagai, and E. J. Cooper. 2016. Using ordinary digital cameras in place of near-infrared sensors to derive vegetation indices for phenology studies of high Arctic vegetation. Remote Sensing 8:847.
Asner, G. P., and R. E. Martin. 2008. Spectral and chemical analysis of tropical forests: scaling from leaf to canopy levels. Remote Sensing of Environment 112:3958–3970.
Bartley, G., and P. Scolnik. 1995. Plant carotenoids: pigments for photoprotection, visual attraction, and human health. Plant Cell 7:1027–1038.
Bayer, B.
1976. Color imaging array. U.S. Patent 3,971,065.
Beamish, A., W. Nijland, M. Edwards, N. Coops, and G. Henry. 2016. Phenology and vegetation change measurements from true colour digital photography in high Arctic tundra. Arctic Science 2:33–49.
Bhatt, U., D. Walker, and M. Raynolds. 2010. Circumpolar Arctic tundra vegetation change is linked to sea ice decline. Earth Interactions 14:1–20.
Bhatt, U. S., D. A. Walker, M. K. Raynolds, P. A. Bieniek, H. E. Epstein, J. C. Comiso, J. E. Pinzon, C. J. Tucker, and I. V. Polyakov. 2013. Recent declines in warming and vegetation greening trends over pan-Arctic tundra. Remote Sensing 5:4229–4254.
Billings, W., and L. Bliss. 1959. An alpine snowbank environment and its effects on vegetation, plant development, and productivity. Ecology 40:388–397.
Bjorkman, A., S. Elmendorf, A. Beamish, M. Vellend, and G. Henry. 2015. Contrasting effects of warming and increased snowfall on Arctic tundra plant phenology over the past two decades. Global Change Biology 21:4651–4661.
Blackburn, G. 1998. Spectral indices for estimating photosynthetic pigment concentrations: a test using senescent tree leaves. International Journal of Remote Sensing 19:657–675.
Blackburn, G. 1999. Relationships between spectral reflectance and pigment concentrations in stacks of deciduous broadleaves. Remote Sensing of Environment 70:224–237.
Blackburn, G. A. 2007. Wavelet decomposition of hyperspectral data: a novel approach to quantifying pigment concentrations in vegetation. International Journal of Remote Sensing 28:2831–2855.
Bratsch, S. N., H. E. Epstein, M. Buchhorn, and D. A. Walker. 2016. Differentiating among four Arctic tundra plant communities at Ivotuk, Alaska using field spectroscopy. Remote Sensing 8:51.
Chalker-Scott, L. 1999. Environmental significance of anthocyanins in plant stress responses. Photochemistry and Photobiology 70:1–9.
Close, D., and C. Beadle. 2003. The ecophysiology of foliar anthocyanin.
Botanical Review 69:149–161.
Cook, B., and E. Wolkovich. 2012. Divergent responses to spring and winter warming drive community level flowering trends. Proceedings of the National Academy of Sciences 109:9000–9005.
Coops, N., T. Hilker, C. Bater, M. Wulder, S. Nielsen, G. McDermid, and G. Stenhouse. 2012. Linking ground-based to satellite-derived phenological metrics in support of habitat assessment. Remote Sensing Letters 3:191–200.
Coops, N., T. Hilker, F. Hall, C. Nichol, and G. Drolet. 2010. Estimation of light-use efficiency of terrestrial ecosystems from space: a status report. BioScience 63:788–797.
Coops, N., C. Stone, D. Culvenor, L. Chisholm, and R. Merton. 2003. Chlorophyll content in eucalypt vegetation at the leaf and canopy scales as derived from high resolution spectral data. Tree Physiology 23:23–31.
Curran, P. J. 1989. Remote sensing of foliar chemistry. Remote Sensing of Environment 30:271–278.
Demmig-Adams, B., and W. W. Adams. 1996. The role of xanthophyll cycle carotenoids in the protection of photosynthesis. Trends in Plant Science 1:21–26.
Elmendorf, S. C., et al. 2012a. Plot-scale evidence of tundra vegetation change and links to recent summer warming. Nature Climate Change 2:453–457.
Elmendorf, S., et al. 2012b. Global assessment of experimental climate warming on tundra vegetation: heterogeneity over space and time. Ecology Letters 15:164–175.
Esteban, R., et al. 2009. Alternative methods for sampling and preservation of photosynthetic pigments and tocopherols in plant material from remote locations. Photosynthesis Research 101:77–88.
Féret, J.-B., C. François, A. Gitelson, G. Asner, K. Barry, C. Panigada, A. Richardson, and S. Jacquemoud. 2011. Optimizing spectral indices and chemometric analysis of leaf chemical properties using radiative transfer modeling. Remote Sensing of Environment 115:2742–2750.
Gamon, J., F. Huemmrich, C. Wong, I.
Ensminger, S. Garrity, D. Hollinger, A. Noormets, and J. Peñuelas. 2016. A remotely sensed pigment index reveals photosynthetic phenology in evergreen conifers. Proceedings of the National Academy of Sciences 113:13087–13092.
Gamon, J. A., L. Serrano, and J. S. Surfus. 1997. The photochemical reflectance index: an optical indicator of photosynthetic radiation use efficiency across species, functional types, and nutrient levels. Oecologia 112:492–501.
Gamon, J., J. Peñuelas, and C. Field. 1992. A narrow-waveband spectral index that tracks diurnal changes in photosynthetic efficiency. Remote Sensing of Environment 41:35–44.
Garbulsky, M., J. Peñuelas, J. Gamon, Y. Inoue, and I. Filella. 2011. The photochemical reflectance index (PRI) and the remote sensing of leaf, canopy and ecosystem radiation use efficiencies: a review and meta-analysis. Remote Sensing of Environment 115:281–297.
Gitelson, A. A., G. P. Keydan, and M. N. Merzlyak. 2006. Three-band model for noninvasive estimation of chlorophyll, carotenoids, and anthocyanin contents in higher plant leaves. Geophysical Research Letters 33:L11402.
Gitelson, A., and M. Merzlyak. 1998. Remote sensing of chlorophyll concentration in higher plant leaves. Advances in Space Research 22:689–692.
Gitelson, A. A., M. N. Merzlyak, and O. B. Chivkunova. 2001. Optical properties and nondestructive estimation of anthocyanin content in plant leaves. Photochemistry and Photobiology 74:38–45.
Gitelson, A. A., M. N. Merzlyak, and H. K. Lichtenthaler. 1996. Detection of red edge position and chlorophyll content by reflectance measurements near 700 nm. Journal of Plant Physiology 148:501–508.
Gitelson, A. A., Y. Zur, O. B. Chivkunova, and M. N. Merzlyak. 2002. Assessing carotenoid content in plant leaves with reflectance spectroscopy. Photochemistry and Photobiology 75:272–281.
Gould, K. S., J. McKelvie, and K. R. Markham. 2002. Do anthocyanins function as antioxidants in leaves?
Imaging of H2O2 in red and green leaves after mechanical injury. Plant, Cell and Environment 25:1261–1269.
Høye, T. T., E. Post, N. M. Schmidt, K. Trøjelsgaard, and M. C. Forchhammer. 2013. Shorter flowering seasons and declining abundance of flower visitors in a warmer Arctic. Nature Climate Change 3:759–763.
Ide, R., and H. Oguma. 2010. Use of digital cameras for phenological observations. Ecological Informatics 5:339–347.
Ide, R., and H. Oguma. 2013. A cost-effective monitoring method using digital time-lapse cameras for detecting temporal and spatial variations of snowmelt and vegetation phenology in alpine ecosystems. Ecological Informatics 16:23–34.
Iler, A. M., T. T. Høye, D. W. Inouye, and N. M. Schmidt. 2013. Nonlinear flowering responses to climate: Are species approaching their limits of phenological change? Philosophical Transactions of the Royal Society B 368:20120489.
Inouye, D. W., and D. A. McGuire. 1991. Effects of snowpack on timing and abundance of flowering in Delphinium nelsonii (Ranunculaceae): implications for climate change. American Journal of Botany 78:997–1001.
Merzlyak, M., A. Gitelson, O. Chivkunova, and V. Rakitin. 1999. Non-destructive optical detection of pigment changes during leaf senescence and fruit ripening. Physiologia Plantarum 106:135–141.
Migliavacca, M., M. Galvagno, E. Cremonese, M. Rossini, M. Meroni, O. Sonnentag, S. Cogliati, G. Manca, F. Diotri, and L. Busetto. 2011. Using digital repeat photography and eddy covariance data to model grassland phenology and photosynthetic CO2 uptake. Agricultural and Forest Meteorology 151:1325–1337.
Mutanga, O., A. Skidmore, and H. H. Prins. 2004. Predicting in situ pasture quality in the Kruger National Park, South Africa, using continuum-removed absorption features. Remote Sensing of Environment 89:393–408.
Mutanga, O., and A. Skidmore. 2004.
Hyperspectral band depth analysis for a better estimation of grass biomass (Cenchrus ciliaris) measured under controlled laboratory conditions. International Journal of Applied Earth Observation and Geoinformation 5:87–96.
Nijland, W., R. de Jong, S. M. de Jong, and M. Wulder. 2014. Monitoring plant condition and phenology using infrared sensitive consumer grade digital cameras. Agricultural and Forest Meteorology 184:98–106.
Park, T., S. Ganguly, H. Tømmervik, E. S. Euskirchen, K.-A. Høgda, S. Karlsen, V. Brovkin, R. R. Nemani, and R. B. Myneni. 2016. Changes in growing season duration and productivity of northern vegetation inferred from long-term remote sensing data. Environmental Research Letters 11:084001.
Parmentier, F.-J., and T. Christensen. 2013. Arctic: speed of methane release. Nature 500:529.
Peñuelas, J., I. Filella, and J. A. Gamon. 1995. Assessment of photosynthetic radiation-use efficiency with spectral reflectance. New Phytologist 131:291–296.
Porra, R. J., W. A. Thompson, and P. E. Kriedemann. 1989. Determination of accurate extinction coefficients and simultaneous equations for assaying chlorophylls a and b extracted with four different solvents: verification of the concentration of chlorophyll standards by atomic absorption spectroscopy. Biochimica et Biophysica Acta (BBA): Bioenergetics 975:384–394.
Prevéy, J., et al. 2017. Greater temperature sensitivity of plant phenology at colder sites: implications for convergence across northern latitudes. Global Change Biology 23:2660–2671.
R Development Core Team. 2007. R: a language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria.
Richardson, A., J. Jenkins, B. Braswell, D. Hollinger, S. Ollinger, and M.-L. Smith. 2007. Use of digital webcam images to track spring green-up in a deciduous broadleaf forest. Oecologia 152:323–334.
Sims, D. A., and J. A.
Gamon. 2002. Relationships between leaf pigment content and spectral reflectance across a wide range of species, leaf structures and developmental stages. Remote Sensing of Environment 81:337–354.
Steyn, W. J., S. J. E. Wand, D. M. Holcroft, and G. Jacobs. 2002. Anthocyanins in vegetative tissues: a proposed unified function in photoprotection. New Phytologist 155:349–361.
Stylinski, C., J. Gamon, and W. Oechel. 2002. Seasonal patterns of reflectance indices, carotenoid pigments and photosynthesis of evergreen chaparral species. Oecologia 131:366–374.
Tieszen, L. L. 1972. The seasonal course of aboveground production and chlorophyll distribution in a wet arctic tundra at Barrow, Alaska. Arctic and Alpine Research 4:307–324.
Ustin, S. L., and B. Curtiss. 1990. Spectral characteristics of ozone-treated conifers. Environmental and Experimental Botany 30:293–308.
Ustin, S., A. Gitelson, and S. Jacquemoud. 2009. Retrieval of foliar information about plant pigment systems from high resolution spectroscopy. Remote Sensing of Environment 113:S67–S77.
Walker, M. D., D. A. Walker, K. R. Everett, and C. Segelquist. 1989. Wetland soils and vegetation, arctic foothills, Alaska. U.S. Fish and Wildlife Service Biological Report 89, Washington, D.C., USA.
Walker, M., et al. 2006. Plant community responses to experimental warming across the tundra biome. Proceedings of the National Academy of Sciences USA 103:1342–1346.
Westergaard-Nielsen, A., M. Lund, B. Hansen, and M. Tamstorf. 2013. Camera derived vegetation greenness index as proxy for gross primary production in a low Arctic wetland area. ISPRS Journal of Photogrammetry and Remote Sensing 86:89–99.
Wheeler, H., T. Høye, N. Schmidt, J.-C. Svenning, and M. Forchhammer. 2015. Phenological mismatch with abiotic conditions—implications for flowering in Arctic plants. Ecology 96:775–787.
Young, A., and G. Britton. 1990. Carotenoids and stress. Pages 87–112 in R. G. Alscher and J. R. Cummings, editors.
Stress Responses in Plants: Adaptation and Acclimation Mechanisms. Wiley-Liss, New York, New York, USA.
Yu, H., E. Luedeling, and J. Xu. 2010. Winter and spring warming result in delayed spring phenology on the Tibetan Plateau. Proceedings of the National Academy of Sciences 107:22151–22156.
Zhang, X., D. Tarpley, and J. T. Sullivan. 2007. Diverse responses of vegetation phenology to a warming climate. Geophysical Research Letters 34:L19405.

SUPPORTING INFORMATION

Additional Supporting Information may be found online at: http://onlinelibrary.wiley.com/doi/10.1002/ecs2.2123/full

----

right eye. Each time, multiple subconjunctival lashes in the temporal aspect were epilated. In October 2004, the patient underwent conjunctivoplasty with en bloc removal of the lashes and cautery to the underlying sclera. So far, there has been successful resolution of symptoms.

Comment

Ectopic cilia are rarely encountered. Some reports describe congenital lash tufts in the temporal aspects of upper eyelids.4,5 Eyelashes have also been reported emerging from the iris de novo and following trauma.6 The aetiology of the former remains uncertain; posteriorly located dermoids or teratomas have been postulated.
In the latter case, displacement of lash follicles is felt to be causative.6 The few reports of subconjunctival cilia mainly concern single lashes1,7,8 and include granuloma formation secondary to conjunctival embedding of a cilium7 and dermolipomas allowing lash ingress to the conjunctiva.1 To the authors' knowledge, there is only one other published report of ectopic cilia in the setting of previous intraocular surgery, that patient having had retinal surgery, pterygium removal, and cataract extraction.3 In our case, given that the ectopic cilia were first observed 1 year following surgery, it is likely that displacement occurred perioperatively. The mechanism may have been a cumulative effect of the fall suffered by the patient preoperatively, and surgery. Following traumatic displacement of lash tissue to the conjunctiva, the peritomy folds gave recess for any dislodged follicles. The yearly recurrence contrasts with trichitic lash regrowth. This unusual case illustrates the need for such patients to be reviewed for recurrence necessitating surgical intervention.

Acknowledgements

There are no proprietary interests and this work has never been published or presented elsewhere before.

References
1 Gutteridge IF. Case report. Curious cilia cases. Clin Exp Optom 2002; 85(5): 306–308.
2 Belfort B, Bruce Ostler H. Cilia incarnata. Br J Ophthalmol 1976; 60: 594–596.
3 Hunts JH, Patrinely JR. Conjunctival cilia entrapment: an unrecognized cause of ocular irritation. Ophthal Plast Reconstr Surg 1997; 13(4): 289–292.
4 Dalgleish R. Case notes. Ectopic cilia. Br J Ophthalmol 1966; 50: 592–594.
5 Owen RA. Ectopic cilia. Br J Ophthalmol 1968; 52: 280.
6 Mackintosh GIS, Grayson MC. Case reports. Atraumatic iris cilia. Br J Ophthalmol 1990; 74: 748–749.
7 Kiesel RD. Conjunctival granuloma due to an imbedded cilium. Am J Ophthalmol 1961; 51: 706–708.
8 Mathur SP. Cilia emerging through the conjunctiva. Int Surg 1968; 50(1): 14–16.
Figure 1 Subconjunctival cilia near temporal limbus of right eye.

S George and G Silvestri
Ophthalmology Department, Eye and Ear Clinic, Royal Victoria Hospital, Grosvenor Road, Belfast BT12 6BA, UK

Correspondence: S George, Tel: +44 2890240503; Fax: +44 2890330744. E-mail: sonja_AC@yahoo.com

Eye (2006) 20, 617–618. doi:10.1038/sj.eye.6701940; published online 27 May 2005

Correspondence 618 Eye

Sir,
Occurrence and reactivation of cytomegalovirus retinitis in systemic lupus erythematosus with normal CD4 counts

Cytomegalovirus retinitis is the most common opportunistic ocular infection in patients with acquired immune deficiency syndrome (AIDS), accounting for up to 30–40% of all ocular manifestations.1 CMV retinitis is also known to occur in patients with rheumatic disease, post-organ transplant and leukaemia on immunosuppressive therapy. A strong risk factor for the development of CMV retinitis is a CD4+ T-lymphocyte count of <50 cells/µl.2 With counts greater than 100 cells/µl, reactivation or occurrence of this disease is unusual.2,3 We report two cases of CMV retinitis in patients with systemic lupus erythematosus (SLE) undergoing immunosuppressive therapy despite normalised CD4+ T-lymphocyte counts.

Case report 1

A 47-year-old Chinese lady with a 12-year history of SLE developed active diffuse lupus nephritis and was started on mycophenolate mofetil (CellCept, Roche, NJ, USA) 500 mg b.i.d., with a dose increase to 750 mg b.i.d. after 3 months. She complained of deterioration in vision of her right eye 1 month later. Visual acuity was 20/40 in the affected eye and 20/20 in the fellow eye. The anterior segment was normal. No relative afferent pupillary defect (RAPD) was elicited. Fundal exam of the affected eye showed retinal necrosis with flame haemorrhages over the central macula region, consistent with clinical zone 1 CMV retinitis (Figure 1a).
Blood investigations revealed a low absolute CD4+ T-lymphocyte count of 53 cells/µl (normal 280–1430 cells/µl) with a CD4:CD8 ratio of 0.33 (normal 0.50–2.50). HIV status was negative. Mycophenolate was withdrawn and replaced with low-dose prednisolone and hydroxychloroquine. Intravitreal ganciclovir therapy was started: 2 mg/0.04 ml twice weekly for the first month (induction), followed by 2 mg/0.04 ml weekly for the second month (high-dose maintenance), and thereafter 1 mg/0.02 ml weekly intermediate-dose maintenance therapy. The CMV retinitis regressed and visual acuity improved to 20/30. Intravitreal treatment was discontinued when her CD4 count increased to 394 cells/µl after 6 months. Despite a normal CD4 cell count, reactivation of CMV retinitis occurred 1 month later at the border of the previous CMV retinitis, involving the superior edge of the macula (Figure 1b), and reinduction with intravitreal ganciclovir was started. Progression was halted, but her vision dropped to 20/120.

Case report 2

A 32-year-old Malay woman with a 4-year history of SLE and secondary antiphospholipid syndrome (APS) had been treated with oral prednisolone and hydroxychloroquine since 1998. She was, however, noncompliant with medication and follow-ups, resulting in multiple episodes of relapse. She developed neuropsychiatric lupus in September 2002 that required treatment with intravenous pulse methylprednisolone and cyclophosphamide, followed by long-term high-dose oral prednisolone (60 mg o.d.). After 3 months, she presented with acute blurring of vision in the right eye over 3 days, with a visual acuity of hand movements and an RAPD. Anterior segment examination and intraocular pressures were normal, with no evidence of rubeosis iridis. Fundal exam revealed extensive retinal haemorrhages with thrombosed and attenuated retinal arteries and veins, consistent with vaso-occlusive disease.
The nasal retina also had granular retinal infiltrates and necrosis, with adjacent retinal haemorrhage advancing centrally, consistent with the clinical diagnosis of zone 2 CMV retinitis (Figure 2). No vitritis was seen. The left eye was normal with a visual acuity of 20/20. Investigations showed a CD4 count of 586 cells/µl. HIV serology was negative. CMV serology was positive for IgG, but nonspecific for IgM. She was restarted on anticoagulation. Oral prednisolone was reduced to 18 mg o.d., and hydroxychloroquine 300 mg o.d. Prior to institution of local anti-CMV treatment, fundal examination showed mild resolution of retinal infiltrates associated with overlying vitritis and mild anterior uveitis.

Figure 1 (a) CMV retinitis occurring at the macular area. (b) Reactivation of CMV retinitis along previous borders extending towards the superior edge of the fovea. Arrows point to sites of recurrence.

A decision was made to withhold local anti-CMV treatment, as there was clinical evidence of immune recovery uveitis documented on serial fundal examinations and 9-view digital photography. Over the next 3 months, there was spontaneous resolution of the CMV retinitis, and the uveitis responded to topical prednisolone acetate 1%. However, visual acuity remained at hand movements only.

Discussion

We have illustrated two SLE patients who developed CMV retinitis while on heavy immunosuppressive therapy. There have been few previous cases reported in the literature.4–6 CMV retinitis has also been described in three cases associated with Wegener's granulomatosis,7 rheumatoid arthritis8 and Behçet's disease.9 All were under chronic immunosuppression with cytotoxic drugs such as azathioprine, high-dose steroids, cyclophosphamide and adenine arabinoside. Unlike AIDS patients, these patients typically do not manifest any clinical symptoms of systemic CMV involvement despite the ocular manifestations.
In our first patient, CMV retinitis was possibly triggered by impaired T-cell function, with decreased counts caused by mycophenolate mofetil. This drug is well known to have a potent cytostatic effect on lymphocytes, which initially resulted in a significant decrease in her CD4 count.10 Interestingly, reactivation of CMV retinitis still occurred despite recovery of CD4 counts 6 months after withdrawal of mycophenolate. The occurrence of CMV retinitis associated with a high CD4 count was also seen in our second SLE patient, who was on chronic high-dose steroid therapy for treatment of her complications and multiple flare-ups. These cases suggest that occurrence and reactivation can still happen despite a high absolute CD4 count, possibly due to disruption of the functional integrity of the immune system, with depression of leucocytic and complement function by chronic cytotoxic drugs, and possibly due to an underlying dysfunction in the immune system from the autoimmune disease itself.11 This is unlike AIDS, where the CD4 count is commonly used as an indicator of immune status that predicts the risk of opportunistic infections such as CMV retinitis. In SLE patients complicated by CMV retinitis, the absolute CD4+ T-lymphocyte count may not be a good marker for cessation of specific anti-CMV therapy, and they may need a longer duration of treatment until stabilisation of their underlying systemic condition to prevent reactivation. However, with gradual tapering of immunosuppressants and recovery of their immune system, we observe that it is possible to mount an immune response, as demonstrated by the immune-recovery uveitis associated with resolution of the CMV retinitis in our second patient. It was also noted from the literature review that all the SLE patients who developed CMV retinitis, including our first patient, had lupus nephritis requiring chronic immunotherapy.
It is likely that these patients may have undergone more intensive or prolonged immunosuppressive therapy that further impaired their immune function, thereby rendering them more susceptible to the condition. Margo and Arango also suggested that SLE patients with secondary APS develop CMV retinitis in ischaemic retina due to widespread microvascular disease aggravated by antiphospholipid antibodies.4 The presence of APS and the severity of lupus nephritis may be indirect risk factors for CMV retinitis in SLE patients. However, larger epidemiological studies will be required to confirm these postulations.

Conclusion

Our two case reports illustrate that CMV retinitis can occur in SLE patients who are severely immunosuppressed due to chronic high-dose immunosuppressants or medications with potent cytostatic effects such as mycophenolate. Treatment options may include immunotherapy dose reduction or substitution, possibly combined with specific anti-CMV treatment. The presence of lupus nephritis and secondary APS may be risk factors for developing CMV retinitis, as it is often associated with intensive immunosuppression. Owing to the impaired immune system, the absolute CD4+ T-lymphocyte count may not be a reliable indicator of immune status and function in an SLE patient. As such, we recommend that heavily immunosuppressed SLE patients who develop sudden visual symptoms should be evaluated early for CMV retinitis. For patients who have developed CMV retinitis, monitoring of treatment protocols should be based on the patient's clinical systemic condition, with less reliance on blood results and CD4 counts.

Figure 2 Retinal infiltrates and necrosis consistent with zone 2 CMV retinitis. Extensive haemorrhage with severe vaso-occlusive disease also noted.

Acknowledgements

Financial or proprietary interests: None.

References
1 Au Eong KG, Beatty S, Charles SJ. Cytomegalovirus retinitis in patients with acquired immune deficiency syndrome.
Postgrad Med J 1999; 75: 585–590.
2 Bowen EF, Wilson P, Atkins M, Madge S, Griffiths PD, Johnson MA et al. Natural history of untreated cytomegalovirus retinitis. Lancet 1995; 346: 1671–1673.
3 Kuppermann BD, Petty JG, Richman DD, Mathews WC, Fullerton SC, Rickman LS et al. Correlation between CD4+ counts and prevalence of cytomegalovirus retinitis and human immunodeficiency virus-related noninfectious retinal vasculopathy in patients with acquired immunodeficiency syndrome. Am J Ophthalmol 1993; 115: 575–582.
4 Margo CE, Arango JL. Cytomegalovirus retinitis and the lupus anticoagulant syndrome. Retina 1998; 18: 568–570.
5 Schlingemann RO, Wertheim-van Dillen P, Kijlstra A, Bos PJ, Meenken C, Feron EJ. Bilateral cytomegalovirus retinitis in a patient with systemic lupus erythematosus. Br J Ophthalmol 1996; 80: 1109–1110.
6 Kaji Y, Fujino Y. Use of intravitreal ganciclovir for cytomegalovirus retinitis in a patient with systemic lupus erythematosus. Nippon Ganka Gakkai Zasshi 1997; 101: 525–531.
7 Wrinkler A, Finan MJ, Pressly T, Roberts R. Cytomegalovirus retinitis in rheumatic disease: a case report. Arthritis Rheum 1987; 30: 106–108.
8 Scott WJ, Giangiacomo J, Hodges KE. Accelerated cytomegalovirus retinitis secondary to immunosuppressive therapy. Arch Ophthalmol 1986; 104: 1117–1118, 1124.
9 Berger BB, Weinberg RS, Tessler HH, Wyhinny GJ, Vygantas CM. Bilateral cytomegalovirus panuveitis after high-dose corticosteroid therapy. Am J Ophthalmol 1979; 88: 1020–1025.
10 Allison AC, Eugui EM. Mycophenolate mofetil and its mechanisms of action. Immunopharmacology 2000; 47: 85–118.
11 Kulshrestha MK, Goble RR, Murray PI. Cytomegalovirus retinitis associated with long-term oral corticosteroid use. Br J Ophthalmol 1996; 80: 849–850.
J-J Lee1, SCB Teoh1, JLL Chua1, MCH Tien2 and T-H Lim1
1The Eye Institute, Tan Tock Seng Hospital, 11 Jalan Tan Tock Seng, Singapore 308433, Republic of Singapore
2Imperial College School of Medicine, University of London, UK
Correspondence: J-J Lee, Tel: +65 6357 7726; Fax: +65 6357 7718. E-mail: Jong_Jian_Lee@ttsh.com.sg
Eye (2006) 20, 618–621. doi:10.1038/sj.eye.6701941; published online 27 May 2005
Sir, Silicone oil migration causing increasing proptosis 13 years after retinal surgery
Silicone oil is necessary for endo-tamponade in selected cases of retinal detachment. Oil granuloma is a recognised condition in which there is a granulomatous response to mineral oils in the body tissues. In the past, this has been a well-documented complication of breast augmentation surgery. We report a case of silicone oil leaking into the periorbital and retro-orbital tissues, causing increasing proptosis and red eye many years after retinal surgery. Case report A 54-year-old Caucasian male presented with a 6-month history of a red, 'unsightly', more prominent right eye. He had suffered trauma to this eye 40 years previously and developed cataract and retinal detachment 13 years ago. He underwent a lensectomy and vitrectomy with silicone oil insertion at that time. The eye had been blind since this surgery, and he was lost to follow-up until this recent complication. He was a non-smoker, had no other medical history, and was on no medication. On examination, the eye was divergent and proptosed, with a large subconjunctival gelatinous mass medially and an opaque, vascularised cornea (Figure 1). There was no regional lymphadenopathy. A CT scan
Figure 1: Clinical photograph showing gelatinous subconjunctival mass medially in the right eye.
work_sgwv75lrnbfghp47brgcxhukgq ----
The Influence of Age, Duration of Diabetes, Cataract, and Pupil Size on Image Quality in Digital Photographic Retinal Screening
PETER HENRY SCANLON, MRCP1; CHRIS FOY, MSC2; RAMAN MALHOTRA, FRCOPHTH3; STEPHEN J. ALDINGTON, DMS4
OBJECTIVE — To evaluate the effect of age, duration of diabetes, cataract, and pupil size on the image quality in digital photographic screening.
RESEARCH DESIGN AND METHODS — Randomized groups of 3,650 patients had one-field, nonmydriatic, 45° digital retinal imaging photography before mydriatic two-field photography. A total of 1,549 patients were then examined by an experienced ophthalmologist. Outcome measures were ungradable image rates, age, duration of diabetes, detection of referable diabetic retinopathy, presence of early or obvious central cataract, pupil diameter, and iris color.
RESULTS — The ungradable image rate was 19.7% (95% CI 18.4–21.0) for nonmydriatic photography and 3.7% (3.1–4.3) for mydriatic photography. The odds of having one eye ungradable increased by 2.6% (1.6–3.7) for each extra year since diagnosis for nonmydriatic and by 4.1% (2.7–5.7) for mydriatic photography, irrespective of age, and by 5.8% (5.0–6.7) for nonmydriatic and 8.4% (6.5–10.4) for mydriatic photography for every extra year of age, irrespective of years since diagnosis. Obvious central cataract was present in 57% of ungradable mydriatic photographs, early cataract in 21%, no cataract in 9%, and 13% had other pathologies. The pupil diameter in the ungradable eyes showed a significant trend (P < 0.001) across the three groups (obvious cataract 4.434, early cataract 3.379, and no cataract 2.750).
CONCLUSIONS — The strongest predictor of ungradable image rates, for both nonmydriatic and mydriatic digital photography, is the age of the person with diabetes. The most common cause of ungradable images was obvious central cataract. Diabetes Care 28:2448–2453, 2005
The use of nonmydriatic photography has been reported from the U.S. (1–4), Japan (5), Australia (6,7), France (8), and the U.K. (9–13). Reported ungradable image rates for nonmydriatic photography vary between 4%, reported by Leese et al. (10), and 34%, reported by Higgs et al. (13). In the U.K., national screening programs for detection of sight-threatening diabetic retinopathy are being implemented in England (14), Scotland (15), Wales, and Northern Ireland. England and Wales are using two-field 45° mydriatic digital photography as their preferred method. Scotland is using a three-stage screening procedure, in which the first stage is one-field nonmydriatic digital photography, with mydriatic photography used for failures of nonmydriatic photography and slit-lamp biomicroscopy for failures of both photographic methods. Northern Ireland is performing nonmydriatic photography in those aged <50 years and mydriatic photography in those aged ≥50 years. The Gloucestershire Diabetic Eye Study (9) was designed to formally evaluate the community-based nonmydriatic and mydriatic digital photographic screening program that was introduced in October 1998. The current study was designed to evaluate the effect of age, duration of diabetes, cataract, and pupil size on the image quality in nonmydriatic and mydriatic digital photographic screening.
RESEARCH DESIGN AND METHODS — For the comparison of mydriatic and nonmydriatic photography in those patients with gradable images, the Gloucestershire Diabetic Eye Study (9) was designed to detect a difference of 2% in the detection of referable diabetic retinopathy between the methods (9% for mydriatic and 7% for nonmydriatic photography). To detect this difference with 80% power and a 5% significance level, 3,650 patients had to be examined, allowing for an estimated ungradable image rate of 15% with nonmydriatic photography. Eighty groups of 50 patients from within individual general practices were randomly selected for inclusion as potential study patients. This number allowed for lower rates of screening uptake within some of the study practices. The patient's history (including diabetes type) was taken and signed consent obtained. Patients classified as type 1 had commenced insulin within 4 months of diagnosis, while patients classified as type 2 were not requiring insulin or commenced insulin after 4 months of diagnosis. Visual acuity was measured using retroilluminated LogMAR charts modified from those used in the Early Treatment Diabetic Retinopathy Study (16). One 45° nonmydriatic digital photograph was taken of each eye using a Topcon NRW5S camera with Sony 950 video camera centered on the macula, repeated once only if necessary. After mydriasis with Tropicamide 1%, two 45° photographs, macular and nasal, were taken of each eye according to the EURODIAB protocol (17). Direct ophthalmoscopy was performed, the results of which were recorded.
The screener was at liberty to take additional retinal or anterior segment views if he considered this to be appropriate and was specifically requested to take an anterior segment view of an eye with a poor-quality image.
From the 1Department of Ophthalmology, Cheltenham General Hospital, Cheltenham, U.K.; the 2R&D Support Unit, Gloucester Hospitals National Health Service Trust, Gloucester, U.K.; the 3Oxford Eye Hospital, Oxford, U.K.; and the 4Retinopathy Grading Centre, Imperial College, London, U.K. Address correspondence and reprint requests to Dr. Peter Scanlon, Gloucestershire Eye Unit, Cheltenham General Hospital, Sandford Road, Cheltenham, GL53 7AN, U.K. E-mail: peter.scanlon@glos.nhs.uk. Received for publication 9 February 2005 and accepted in revised form 23 June 2005. © 2005 by the American Diabetes Association. Pathophysiology/Complications, Original Article. DIABETES CARE, VOLUME 28, NUMBER 10, OCTOBER 2005.
Grading
Patients for the reference standard examination (n = 1,549) using 78D lens slit-lamp biomicroscopy and direct ophthalmoscopy were recruited from those attending for photographic screening on days when an experienced ophthalmologist (P.H.S.) was able to attend. A separate study was performed to validate the ophthalmologist's reference standard against seven-field stereophotography (18). A specialist registrar in ophthalmology (R.M.)
interpreted the images from the study patients who received the reference standard examination (n = 1,549). P.H.S. interpreted the images of all patients who did not receive his reference standard examination (n = 2,062). Graders had a history sheet, including the patient's age, diabetes and ophthalmological history, visual acuity, screener's ophthalmoscopy findings, and reasons for extra views. Nonmydriatic and mydriatic images were graded using Orion software (Cwmbran, U.K.), with the times of grading separated by at least 1 month to prevent bias from a memory effect. It was not possible to mask the grader between methods because one image of each eye was captured without mydriasis and two images with mydriasis. For grading, 19-inch Sony Trinitron monitors were used with a screen resolution of 1,024 × 768 and 32-bit color (although we recognize that the camera system was limited to 24 bit). The Topcon fundus camera with Sony digital camera produced an image of resolution 768 × 568 pixels. Image grading and the reference standard examination used the Gloucestershire adaptation of the European Working Party guidelines (19) for referable diabetic retinopathy (previously used in the Gloucestershire Diabetic Eye Study [9] and validated against seven-field stereophotography in a separate study [18]), as shown in Table 1. Referable retinopathy was classified as grades three to six on this form. The International Classification (20) was not used because the current study was undertaken before it was introduced and, even if it had been available, referral to an ophthalmologist in the U.K. occurs at a level between level 3 and level 4 of the International Classification. The ungradable image rate was classified as the number of patients with an ungradable image in one or both eyes, unless referable diabetic retinopathy was detected in either eye.
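The patient-level counting rule just described can be sketched as a small predicate. This is an illustration only; the function and field names are ours, not from the study database.

```python
# Sketch of the patient-level "ungradable" rule described above:
# a patient counts toward the ungradable image rate if either eye
# has an ungradable image, unless referable retinopathy was
# detected in either eye. Names are illustrative assumptions.

def counts_as_ungradable(eye_ungradable: tuple[bool, bool],
                         referable: tuple[bool, bool]) -> bool:
    """True if the patient counts toward the ungradable image rate."""
    return any(eye_ungradable) and not any(referable)

# One ungradable eye but referable disease in the fellow eye
# does not count toward the ungradable image rate:
assert counts_as_ungradable((True, False), (False, True)) is False
```

This matches the later results, where patients with one ungradable eye were excluded from the rate when referable retinopathy was found in the other eye.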
Image quality was judged with reference to each eye on the macular view, and an eye was considered ungradable when the large vessels of the temporal arcades were blurred or more than one-third of the picture was blurred, unless referable retinopathy was detected in the remainder. The nasal view was regarded as providing supplementary information and was not used for image quality assessments.
Reexamination of photographs
P.H.S. reexamined all the anterior segment photographs from eyes with ungradable images and any control eyes (i.e., if an anterior segment photograph had been taken of the patient's other eye) to determine whether cataract was present, using the following classifications: 1) obvious central cataract: impaired central red reflex with obvious cataract almost certainly contributing to poor image quality; 2) early cataract: some impairment of central red reflex with cataract, which may or may not contribute to poor image quality; and 3) no cataract: good central red reflex and either no cataract or early peripheral lens changes not considered to contribute to poor image quality. The horizontal pupil diameter of all the pupils in the central axis was measured on the 19-inch monitor on which the anterior segment images were displayed. The anterior segment images had been collected using a standardized methodology, so as to maintain near equivalence in image magnification between patients. Any other pathology that might have contributed to impaired quality of retinal images was recorded. Iris color of the ungradable eye was classified as blue, green (including blue with brown flecks or green), light brown, or dark brown.
Statistical methods
Data were entered into a customized database in the Medical Data Index (Patient Administration System) at Cheltenham General Hospital and downloaded into SPSS version 10 (SPSS, Chicago, IL) for data analysis as required. Percentages and 95% CIs were calculated. Multiple logistic regression was used to assess the impact of more than one predictive factor on the odds of poor image quality. Pupil diameters for ungradable eyes and the opposite gradable eyes (where anterior segment photographs of both eyes were available) were compared for the ungradable eyes with no cataract, early cataract, and obvious central cataract.
Table 1 — Grading form (DD, disc diameter; VA, visual acuity)
Description | Grade right eye | Grade left eye | Outcome
No diabetic retinopathy | 0 | 0 | 12/12
Minimal nonproliferative diabetic retinopathy | 1 | 1 | 12/12
Mild nonproliferative diabetic retinopathy | 2 | 2 | 12/12
Maculopathy | 3 | 3 | Refer
Hemorrhage ≤1 DD from foveal center | 3a | 3a | Routine
Exudates ≤1 DD from foveal center | 3b | 3b | Soon
Groups of exudates (including circinate and plaque) within the temporal arcades >1 DD from foveal center | 3c | 3c | Soon
Reduced VA not corrected by a pinhole, likely to be caused by a diabetic macular problem and/or suspected clinically significant macular edema | 3d | 3d | Soon
Moderate to severe nonproliferative diabetic retinopathy | 4 | 4 | Refer
Multiple cotton wool spots (>5) | 4a | 4a | Soon
and/or multiple hemorrhages | 4a | 4a | Soon
and/or intraretinal microvascular abnormalities | 4b | 4b | Soon
and/or venous irregularities (beading, reduplication, or loops) | 4b | 4b | Soon
Proliferative diabetic retinopathy | 5 | 5 | Refer
New vessels on the disc, new vessels elsewhere, preretinal hemorrhage, and/or fibrous tissue | | | Urgent
Advanced diabetic retinopathy | 6 | 6 | Refer
Vitreous hemorrhage, traction/traction detachment, and/or rubeosis iridis | | | Immediate
To identify any trends, the diameters in ungradable and opposite eyes, and the difference between them, were compared between cataract groups using one-way ANOVA with a linear contrast. Age and duration of diabetes were compared between the no-cataract and the obvious central cataract groups using Mann-Whitney U tests.
RESULTS
Acceptance rate of screening invitation, nonattendance rate at screening appointment, and identification of the study population
Of 11,909 people with diabetes in the county, 74% responded to the screening invitation and attended. Of those who responded to the screening invitation and booked an appointment, the attendance rate was 95%. The high response and attendance rates enabled the target population of 3,650 patients from within 80 groups of 50 patients to be identified and examined. Images of 39 patients from one practice were excluded from the study because the patient images were accidentally captured in JPEG format instead of TIFF format. Ungradable image rates were calculated for all remaining 3,611 patients in the study. Seven grading forms were absent from the nonmydriatic group, all of which were from the subgroup of 1,549 patients who had the reference standard examination.
Ungradable image rate and age
The ungradable image rate for nonmydriatic photography was 19.7% (95% CI 18.4–21.0) and for mydriatic photography was 3.7% (3.1–4.3). A total of 15 patients in the nonmydriatic group and 8 patients in the mydriatic group who were found to have an ungradable image in one eye were not included in the ungradable image rate because referable retinopathy was detected in the other eye (Fig. 1).
Detection of referable retinopathy in different age ranges
From the reference standard examination of 1,549 patients, 180 patients were found to have referable diabetic retinopathy.
The grading form for one of these patients (from the nonmydriatic group) was missing, making the maximum possible detection in that group 179. Levels of detection of referable diabetic retinopathy were 82.8% for mydriatic photography (149 of 180) and 57.5% for nonmydriatic photography (103 of 179). Analyzing the nonmydriatic figures in 10-year age-groups, the younger age-groups had better image quality results and better identification of referable diabetic retinopathy (Fig. 2).
Type of diabetes, sex of study patients, and duration of diabetes
Of 3,611 study patients, 16.5% had type 1 diabetes, 81.6% had type 2 diabetes, and 1.9% had unknown diabetes status. Participants were 55% male and 45% female. Duration of diabetes was 0–4 years in 41.7%, 5–9 years in 26.2%, 10–14 years in 13.7%, 15–19 years in 7.6%, 20+ years in 10.8%, and unknown in 0.2%. The 1,549 reference standard subgroup patients had very similar characteristics.
Ungradable image rate versus age and duration of diabetes
Because an association was found between ungradable image rate and both age and duration of diabetes, and also between age and duration of diabetes, a logistic regression analysis was undertaken to see if the associations were independent of each other.
Figure 1—Unassessable image patients for mydriatic and nonmydriatic photographic screening.
For nonmydriatic photography, the odds of having one eye ungradable increased by 2.6% (95% CI 1.6–3.7) for each extra year since diagnosis, irrespective of age, and by 5.8% (5.0–6.7) for every extra year of age, irrespective of years since diagnosis. For mydriatic photography, the odds of having one eye ungradable increased by 4.1% (2.7–5.7) for each extra year since diagnosis, irrespective of age, and by 8.4% (6.5–10.4) for each extra year of age, irrespective of years since diagnosis.
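These per-year percentages are multiplicative odds increases, so their effect compounds across years. A minimal sketch, reusing only the reported point estimates (the helper function is ours, not part of the published model):

```python
# Compound the reported per-year increase in the odds of having
# one eye ungradable over a span of years. The per-year values
# (5.8% nonmydriatic, 8.4% mydriatic, per extra year of age) are
# point estimates quoted in the text; the function itself is an
# illustrative helper, not the study's fitted regression.

def compound_odds(per_year_pct: float, years: float) -> float:
    """Odds multiplier after `years`, given a per-year % increase."""
    return (1 + per_year_pct / 100) ** years

# Ten extra years of age roughly multiplies the odds by:
nonmydriatic = compound_odds(5.8, 10)   # about 1.76
mydriatic = compound_odds(8.4, 10)      # about 2.24
```

This compounding is why the oldest age-groups, rather than those with the longest duration of diabetes alone, show the highest failure rates.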
The analysis showed that both age and years since diagnosis contributed to the odds of having an ungradable image in one eye.
Influence of cataract and other pathology
Of the 169 ungradable eyes from 133 patients, 8 eyes had no anterior segment image. Of the 161 eyes with an anterior segment image, 92 eyes (57%) had obvious central cataract, 34 eyes (21%) had early cataract, and 15 eyes (9%) had no cataract. The study of other pathology showed that 10 eyes (6%) had a corneal scar, 9 eyes (6%) had asteroid hyalosis, and 1 eye (1%) had a history of hemorrhage, glaucoma, and blindness (not from diabetic retinopathy).
Influence of pupil diameter
There were 97 patients in whom one eye was not assessable. In 12 cases, a nondiabetic, noncataract pathological reason was detected that would explain why imaging was unsuccessful (e.g., corneal scarring), and in 5 cases no anterior segment image was taken of the ungradable eye. In the remaining 80 cases, no obvious other pathology was detected that could explain poor image quality, suggesting a relationship with pupil size. To test this hypothesis, we examined the pupil diameter in those 54 cases in which an anterior segment view was available of both the ungradable eye and the gradable fellow eye. The following comparisons were made between the two eyes. In eight eyes with no cataract seen in the ungradable eye, the mean pupil diameter was 2.7 cm in the ungradable eye and 3.6 cm in the gradable control eye (difference: 0.9 cm). In 14 eyes with early cataract seen, the mean pupil diameter was 3.4 cm in the ungradable eye and 3.9 cm in the gradable control eye (difference: 0.5 cm). In 32 eyes with obvious cataract seen, the mean pupil diameter was 4.4 cm in the ungradable eye and 4.3 cm in the gradable control eye (difference: −0.1 cm).
The pupil diameter in the ungradable eye and the difference in pupil diameters between the two eyes both showed significant trends (P < 0.001 and P = 0.008, respectively) across the three groups. However, the pupil diameter in the gradable eye did not show a significant trend (P = 0.072). The eight people in the no-cataract group with poor pupillary dilation (mean 2.7 cm) had a mean age of 72.7 years and a mean duration of diabetes of 20.4 years. The 32 people with obvious central cataract and good pupillary dilation (mean 4.4 cm) had a mean age of 78.5 years and a mean duration of diabetes of 8.7 years. The Mann-Whitney U test showed no significant difference in age between these two groups but a significant difference in duration of diabetes (P = 0.003).
Iris color in ungradable eyes
Of the 124 patients in whom anterior views enabled color determination, there were 68 blue (55%), 24 green (19%), 21 light brown (17%), and 11 dark brown (9%) eyes. The iris color is in keeping with Gloucestershire's predominantly white Caucasian population, the main ethnic minority groups being Indian/British Indian (0.7%) and Black/Black British (0.8%).
CONCLUSIONS — Several possible factors might influence image quality in retinal photography. Age is suggested in the following studies. Higgs et al. (13) reported ungradable nonmydriatic images in 13% of those aged <50 years, 39% of those aged 50–70 years, and 54% of those aged >70 years. Buxton et al. (21) reported that the ungradable image rate varied between 2% in the Exeter physician group and 9% in the Oxford general practitioner group. The difference between these two groups was principally related to age, duration of diabetes, and type of diabetes. Some studies (3,8) have reported nonmydriatic ungradable image rates <12%, but the average age of the study population was <55 years. Duration of diabetes is suggested as a factor by Cahill et al.
(22), who in 2001 reported that pupillary autonomic denervation increases with increasing duration of diabetes mellitus. Ethnicity is suggested by Klein et al. (23). Flash intensity is suggested by Taylor et al. (24), who reported less patient discomfort with the lower flash power (10 W vs. 300 W) of the digital system. In nonmydriatic photography, there is a faster pupil recovery time with lower flash intensities, which may improve image quality in the fellow eye. Age, duration of diabetes, and ethnicity were not reported in some studies (7,11,25), while others (1,6) have reported these variables but have not reported an association. The study by Lin et al. (4) excluded 197 patients (48.5%) for unusable seven-field reference standard photos and a further 12 patients (2.96%)
Figure 2—Referable retinopathy by age compared to the reference standard examination.
because of unusable ophthalmoscopy records, which made it difficult to interpret the ungradable image rate of 8.1%. Shiba et al. (5) excluded the >70 years age-group and remarkably attempted nine overlapping nonmydriatic 45° fields (5), whereas others have attempted only five fields (8), three fields (2,3), and the majority only one nonmydriatic field (1,6,9,10,13,21). Patient numbers varied from 40 eyes in the study by Lim et al. (2) to 3,611 patients in the current study. The current study has suggested that, after excluding a small number of patients with other pathology, the causes of ungradable images in mydriatic photography are obvious central cataract (57%), a combination of early cataract and a small pupil (21%), and a small pupil alone (9%). There was a dip to 75% in the 30–39 age-group (two patients missed) and 62.5% in the 40–49 age-group (six patients missed) in detection of referable retinopathy using mydriatic photography. If ungradable images were test positive (i.e., referable), six patients in total would have been missed in the 30–49 age-group. On retrospective examination of the mydriatic images, the pathology was visible in five of these six (two having received extensive laser treatment and being graded as stable treated diabetic retinopathy). There was only one person whose retinopathy visible within the two 45° fields was mild nonproliferative diabetic retinopathy (i.e., not referable), whereas small new vessels elsewhere were visible in the peripheral retina only on reference standard examination. This is the only patient in this age-group who should have been a definite false negative for the test. While a 20% failure rate for nonmydriatic photography might be acceptable because patients could be reexamined by other means, there is a difference in detection of referable retinopathy between the two methods, as shown in Fig. 2. The Health Technology Board for Scotland used data from the current study in their report (15) and concluded that similar sensitivities and specificities could be achieved by dilating those patients with ungradable images. However, this relies on the ability of the screener to accurately determine an ungradable image at the time of screening and, in the Scottish system, on the assumption that the grading of one field will detect referable retinopathy with the same degree of accuracy as the grading of two fields (citing evidence from Olson et al.'s study [26]). There have been differing views on the number of fields required for screening, with Bresnick et al. (27) supporting Olson et al.'s view that one field may be sufficient. However, studies by Moss et al. (28), Shiba et al. (5), and von Wendt et al. (29) have suggested that higher numbers of fields give greater accuracy in detection of retinopathy levels.
Data from the current study indicate that there would potentially be very many occasions on which nonmydriatic imaging in patients aged ≥50 years would result in ungradable images. In the ≥80 years age-group, the failure rate is reduced from 41.6 to 16.9% by dilation with G Tropicamide 1%. It is possible that the failure rate of 16.9% following dilation with G Tropicamide 1% could be further reduced by the addition of G Phenylephrine 2.5% for this specific group. Routinely dilating the ≥50 years age-group with G Tropicamide 1% at the outset could potentially reduce the failure rate by >80%. If screening programs are going to consider nonmydriatic photography to detect sight-threatening diabetic retinopathy, the findings of the current study largely support the use of this method for the group <50 years of age, who are at lowest risk of ungradable images; and yet this group contains a number of young regular nonattendees, who some authors suggest are at greatest risk of blindness (e.g., MacCuish et al. [30] and Jones [31]).
Acknowledgments — This study was funded by the Project Grant South West R&D Directorate. P.H.S. is submitting this work for an MD thesis to University College London. The study was designed by P.H.S. with the support of C.F., and the article was written by P.H.S. with the help of S.J.A. P.H.S. performed all the clinical examinations, and P.H.S. and R.M. graded all the images. C.F. undertook the data analysis. All coauthors commented on the drafts and helped to interpret the findings. P.H.S. is the guarantor for this publication.
References
1. Pugh JA, Jacobson JM, Van Heuven WA, Watters JA, Tuley MR, Lairson DR, Lorimor RJ, Kapadia AS, Velez R: Screening for diabetic retinopathy: the wide-angle retinal camera. Diabetes Care 16:889–895, 1993
2. Lim JI, LaBree L, Nichols T, Cardenas I: A comparison of digital nonmydriatic fundus imaging with standard 35-millimeter slides for diabetic retinopathy.
Ophthalmology 107:866–870, 2000
3. Bursell SE, Cavallerano JD, Cavallerano AA, Clermont AC, Birkmire-Peters D, Aiello LP, Aiello LM, Joslin Vision Network Research Team: Stereo nonmydriatic digital-video color retinal imaging compared with Early Treatment Diabetic Retinopathy Study seven standard field 35-mm stereo color photos for determining level of diabetic retinopathy. Ophthalmology 108:572–585, 2001
4. Lin DY, Blumenkranz MS, Brothers RJ, Grosvenor DM: The sensitivity and specificity of single-field nonmydriatic monochromatic digital fundus photography with remote image interpretation for diabetic retinopathy screening: a comparison with ophthalmoscopy and standardized mydriatic color photography. Am J Ophthalmol 134:204–213, 2002
5. Shiba T, Yamamoto T, Seki U, Utsugi N, Fujita K, Sato Y, Terada H, Sekihara H, Hagura R: Screening and follow-up of diabetic retinopathy using a new mosaic 9-field fundus photography system. Diabetes Res Clin Pract 55:49–59, 2002
6. Harper CA, Livingston PM, Wood C, Jin C, Lee SJ, Keeffe JE, McCarty CA, Taylor HR: Screening for diabetic retinopathy using a non-mydriatic retinal camera in rural Victoria. Aust N Z J Ophthalmol 26:117–121, 1998
7. Yogesan K, Constable IJ, Barry CJ, Eikelboom RH, McAllister IL, Tay-Kearney ML: Telemedicine screening of diabetic retinopathy using a hand-held fundus camera. Telemed J 6:219–223, 2000
8. Massin P, Erginay A, Ben Mehidi A, Vicaut E, Quentel G, Victor Z, Marre M, Guillausseau PJ, Gaudric A: Evaluation of a new non-mydriatic digital camera for detection of diabetic retinopathy. Diabet Med 20:635–641, 2003
9. Scanlon PH, Malhotra R, Thomas G, Foy C, Kirkpatrick JN, Lewis-Barned N, Harney B, Aldington SJ: The effectiveness of screening for diabetic retinopathy by digital imaging photography and technician ophthalmoscopy. Diabet Med 20:467–474, 2003
10.
Leese GP, Ahmed S, Newton RW, Jung RT, Ellingford A, Baines P, Roxburgh S, Coleiro J: Use of mobile screening unit for diabetic retinopathy in rural and urban areas. BMJ 306:187–189, 1993
11. Jones D, Dolben J, Owens DR, Vora JP, Young S, Creagh FM: Non-mydriatic Polaroid photography in screening for diabetic retinopathy: evaluation in a clinical setting. Br Med J (Clin Res Ed) 296:1029–1030, 1988
12. Murgatroyd H, Ellingford A, Cox A, Binnie M, Ellis JD, MacEwen CJ, Leese GP: Effect of mydriasis and different field strategies on digital image screening of diabetic eye disease. Br J Ophthalmol 88:920–924, 2004
13. Higgs ER, Harney BA, Kelleher A, Reckless JP: Detection of diabetic retinopathy in the community using a non-mydriatic camera. Diabet Med 8:551–555, 1991
14. Gillow JT, Gray JA: The National Screening Committee review of diabetic retinopathy screening. Eye 15:1–2, 2001
15. Facey K, Cummins E, Macpherson K, Morris A, Reay L, Slattery J: Organisation of Services for Diabetic Retinopathy Screening. Glasgow, Scotland, Health Technology Board for Scotland, 2002, p. 1–224
16. Ferris FL 3rd, Kassoff A, Bresnick GH, Bailey I: New visual acuity charts for clinical research. Am J Ophthalmol 94:91–96, 1982
17. Aldington SJ, Kohner EM, Meuer S, Klein R, Sjolie AK: Methodology for retinal photography and assessment of diabetic retinopathy: the EURODIAB IDDM complications study. Diabetologia 38:437–444, 1995
18. Scanlon PH, Malhotra R, Greenwood RH, Aldington SJ, Foy C, Flatman M, Downes S: Comparison of two reference standards in validating two field mydriatic digital photography as a method of screening for diabetic retinopathy. Br J Ophthalmol 87:1258–1263, 2003
19. Retinopathy Working Party: A protocol for screening for diabetic retinopathy in Europe. Diabet Med 8:263–267, 1991
20.
Original Article · Originalarbeit

Onkologie 2008;31:362–365
Published online: June 24, 2008
DOI 10.1159/000137713

Schlüsselwörter (Key Words)
Electric fields · Tumor-treating fields (TTFields) · Tumors · Metastases · Pilot study

Zusammenfassung (Summary)
Background: Electric fields of low intensity and carefully tuned intermediate frequency (tumor-treating fields, TTFields), transmitted through insulated electrodes, can selectively inhibit the growth of tumor cells. Patients and Methods: This open, prospective pilot study was designed to evaluate the safety, tolerability, and efficacy of TTFields treatment, administered by the NovoTTF-100A™ device, in patients with locally advanced and/or metastatic solid tumors. All 6 patients had been heavily pre-treated; no standard therapy remained available to them. TTFields were administered continuously for at least 14 days. Results: No treatment-related serious adverse events were documented. A partial remission of a skin lesion of a breast carcinoma was observed; 3 patients showed disease stabilization under therapy, and 1 patient showed disease progression.
Conclusion: Despite the small number of patients in the present study, the absence of toxicity and the indications of good efficacy are promising and warrant larger clinical trials of this new therapeutic option.

Key Words
Electric fields · Tumor-treating fields (TTFields) · Tumors · Metastasis · Pilot study

Summary
Background: The transmission of electric fields using insulated electrodes has demonstrated that very low-intensity, properly tuned, intermediate-frequency electric fields, termed tumor-treating fields (TTFields), selectively stunt tumor cell growth, accompanied by a decrease in tumor angiogenesis. Patients and Methods: This open, prospective pilot study was designed to evaluate the safety, tolerability, and efficacy profile of TTFields treatment in patients with locally advanced and/or metastatic solid tumors using the NovoTTF-100A™ device. All 6 patients were heavily pre-treated with several lines of therapy; no additional standard treatment option was available to them. TTFields treatment using continuous NovoTTF-100A lasted a minimum of 14 days and was very well tolerated. Results: No related serious adverse events occurred. Outcomes showed 1 partial response of a treated skin metastasis from a primary breast cancer, 3 cases where tumor growth was arrested during treatment, and 1 case of disease progression. One mesothelioma patient experienced lesion regression near TTFields with simultaneous tumor stability or progression in distal areas. Conclusion: Although the number of patients in this study is small, the lack of therapy toxicity and the efficacy observed in data gathered to date indicate the potential of TTFields as a new treatment modality for solid tumors, definitely warranting further investigation.

Dr. med.
Marc Salzberg
c/o University Hospital, Department for Medical Oncology
Petersgraben 4, 4031 Basel, Switzerland
E-mail: msalzberg@pharmabrains.ch

A Pilot Study with Very Low-Intensity, Intermediate-Frequency Electric Fields in Patients with Locally Advanced and/or Metastatic Solid Tumors

Marc Salzberg (a), Eilon Kirson (b), Yoram Palti (b, c), Christoph Rochlitz (a)
(a) University Hospital Basel, Switzerland; (b) Novocure Ltd., (c) Technion, Haifa, Israel

© 2008 S. Karger GmbH, Freiburg

Introduction

In the laboratory setting and in clinical practice, alternating electric fields show a wide range of effects on living tissues. At very low frequencies (below 1 kHz), alternating electric fields stimulate excitable tissues, such as nerve, muscle, and heart, through membrane depolarization [1]. The transmission of such fields by radiation is insignificant, and, therefore, they are usually applied directly by contact electrodes, though some applications have also used insulated electrodes. At very high frequencies (above many MHz), a completely different biological effect is observed: tissue heating becomes dominant, primarily due to dielectric losses [2]. This phenomenon serves as the basis for some commonly used medical treatment modalities, including diathermy and radio-frequency tumor ablation, which can be applied through insulated electrodes [3]. It was recently demonstrated that very low-intensity, properly tuned, intermediate-frequency electric fields, termed tumor-treating fields (TTFields), selectively stunt the growth of tumor cells [4].
This inhibitory effect was demonstrated in numerous proliferating cell types, while non-proliferating cells and tissues were unaffected. Interestingly, Nordenström's [5] 1989 observation that different cell types show specific intensity and frequency dependencies when intraneoplastic anodic and cathodic fields were used, is again confirmed with TTFields inhibition. At the cellular level, the TTFields effect was shown to be due to arrest of proliferation and selective destruction of dividing cells. The damage caused by the fields to the replicating cells was dependent on the orientation of the mitotic spindle in relation to the field vectors, indicating that this effect is non-thermal. Indeed, temperature measurements made within culture dishes during treatment and on the skin above treated tumors in vivo showed no significant elevation in temperature compared to control cultures/mice. At the subcellular level, it was found that TTFields disrupt the normal polymerization-depolymerization process of microtubules during mitosis, similar to what has been seen in cells treated with agents that interfere directly or indirectly with microtubule polymerization (e.g. paclitaxel or docetaxel) [6–10]. Animal studies have confirmed the described inhibition of tumor growth following less than 1 week of TTFields treatment [4]. The growth inhibition was accompanied by a decrease in angiogenesis within the tumor, due to inhibition of endothelial cell proliferation, while no treatment-related side effects were observed. Our group is the first to treat patients worldwide with this new therapeutic modality. This open, prospective pilot study was designed to evaluate the safety and tolerability profile of TTFields treatment and the tumor response in patients with locally advanced and/or metastatic solid tumors.
A medical device specifically designed to apply intermittent electric fields through insulated electrodes was built into adhesive strips which were then fixed onto the patient's skin. The NovoTTF-100A™ instrument was developed by Novocure Ltd., Haifa, Israel.

Patients and Methods

Prior to study commencement, the trial protocol was approved by the local Ethics Committee, and concurrence with the required standards of the Declaration of Helsinki was ensured. Patients with histologically proven, locally advanced or metastatic malignant tumors were recruited. Major selection criteria were: age ≥ 18 years, at least 1 measurable lesion, tumor location accessible to field application through externally placed electrodes, ECOG performance ≤ 2, no additional standard therapy available, and no concomitant anti-tumor therapy. Six patients, with a median age of 66 years (range 24–76) and suffering from various cancers, were recruited, and provided written informed consent. All patients were previously treated with several lines of therapy, and no additional standard treatment option was available to them. Four of the patients suffered from skin lesions, 1 had a glioblastoma multiforme (GBM), and 1 had metastases from a mesothelioma in the retroperitoneal cavity (table 1). Therapy was initiated in the outpatient clinic of the Basel University Hospital under medical supervision for the first 6 hours of treatment. Thereafter, patients were released to continue treatment on an ambulatory basis. Safety and tolerability parameters were determined. The NovoTTF-100A device used in this trial (depicted in figs. 1 and 2) is a portable, battery-operated device that produces TTFields. These TTFields are applied to the patient by means of surface electrodes that are electrically insulated, thereby ensuring that resistively coupled electric currents are not delivered to the patient.
The electrodes are placed on the patient's shaved skin over a layer of adhesive hydrogel, and held in place with hypoallergenic adhesive strips. The gel beneath the electrodes must be replaced every 3–4 days, and the skin re-shaved, in order to maintain optimal coupling between the electrodes and the skin. All treatment parameters are pre-set, so there are no electrical output adjustments available to, or required by, the patient. Patients received continuous TTFields treatment at 100–200 kHz at a field intensity of 0.7 V/cm root mean square (RMS). TTFields were applied to 2 pairs of insulated electrode arrays in an alternating fashion at 1 s per pair. Each electrode had a surface area of 4.5–13.5 cm2. The 2 pairs were arranged normal to each other so as to generate sequentially 2 perpendicular fields in the tumor positioned in between them [4, 11]. The first 2 patients recruited received 2 weeks of continuous TTFields therapy. From patient 3 onwards, all patients received at least 4 weeks of continuous treatment. Patients were allowed to disconnect from the device for up to 30 min, twice a day.

Table 1. Demographics

Patient #  Date of initial diagnosis  Primary tumor                  Location of treated lesion
1          22/05/1998                 invasive ductal breast cancer  right chest wall, axillary skin lesions
2          08/11/2002                 malignant melanoma             left thigh, skin lesions
3          31/03/2003                 pleural mesothelioma           regional spread to retroperitoneum
4          23/10/2000                 adenocarcinoma of the breast   left chest wall, skin lesion
5          05/09/2002                 glioblastoma multiforme        left hemisphere of brain
6          18/02/2003                 invasive ductal breast cancer  left chest wall, skin lesion
Results

Safety
The total exposure time of the 6 patients to TTFields treatment was 128 full days. Individual patients were exposed to NovoTTF-100A treatment for 13–46 days. The TTFields treatment was generally well tolerated, and compliance was > 80%. Time without treatment was due to battery changes, electrode gel replacement, and time taken by the patient for personal needs (e.g. bathing). The patients learned rapidly to manage normal daily life with the NovoTTF-100A. The only improvement most patients suggested was that the device should become lighter and less noisy. Adverse events were mild for all patients. The only adverse event related to treatment was a grade 1 skin irritation, with reddening of the skin, in 3 out of 6 patients. These lesions occurred beneath the electrodes, and were reversible. Treatment of the skin lesions included the repositioning of the electrodes and topical application of steroid-containing ointments. No related abnormal laboratory values or serious adverse events were recorded.

Efficacy
All patients suffered from progressive disease prior to entering the study, and all were intensively pre-treated. Tumor size was assessed by digital photography in the 4 patients with skin tumors as the measurable lesion, and in the other 2 patients by computed tomography (CT) scans. One partial response of a treated skin metastasis of a primary breast cancer was observed (fig. 3). In 3 patients, an arrest of tumor growth during treatment was seen (fig. 4), and 1 patient experienced progressive disease. In the mesothelioma patient (patient 3), some tumor regression was seen in the area of the tumor which was exposed to TTFields, while the other portions of the tumor were stable or progressive. Patient 5, with rapidly growing GBM resistant to temozolomide and carmustine, likewise did not respond to the 4 weeks of treatment with TTFields.
On the basis of the treatment data subsequently obtained on GBM patients, we can possibly attribute this failure to the treatment duration being too short [4].

Fig. 1. Picture of the NovoTTF portable device.
Fig. 2. Treatment setting with NovoTTF for malignant glioblastoma; the picture shows the attached electrodes.
Fig. 3. Patient with a 51% reduction in tumor size after 4 weeks of TTFields treatment (partial response). [Chart: treated tumor area (mm2) versus days from treatment initiation, Basel patient S04, baseline and after 4 weeks of NovoTTF treatment.]
Fig. 4. Patient with flattening of the tumor after treatment and the appearance of healthy-looking granulation tissue at the tumor margins; 20% reduction in tumor size after 6 weeks of TTFields treatment (stable disease).

Discussion

TTFields are a new cancer treatment modality that has shown a favorable tolerability and efficacy profile in preclinical studies. We report the results of the first study with TTFields in humans, and confirm, in the clinical setting, the feasibility of the TTFields treatment with the NovoTTF device. Other subsequently conducted studies confirm these findings [4]. Furthermore, patients experienced very low toxicity as a consequence of this treatment, which can be explained in light of the known passive electric properties of normal tissues within the body and the effects of electric fields applied via insulated electrodes. More specifically, 2 types of toxicities may be expected in an electric field-based treatment modality.
First, the fields could interfere with the normal function of excitable tissues within the body causing, in extreme cases, cardiac arrhythmias and seizures. However, this is not truly a concern with TTFields since, as frequencies increase above 1 kHz, excitation by alternating sinusoidal electric fields decreases dramatically due to the parallel resistor-capacitor nature of the cell membrane, which has a time constant of about 1 ms. Secondly, the anti-mitotic effect of TTFields might be expected to damage the replication of rapidly dividing healthy cells within the body (bone marrow, small intestine mucosa). The lack of damage to intestinal mucosa in animals undergoing TTFields treatment is probably a reflection of the fact that the small intestine mucosal cells have a slower replication cycle than neoplastic cells, and that the fraction of the field that affects the mucosal areas where the cells replicate is small due to bypassing lower resistance pathways. Bone marrow is almost completely naturally protected from TTFields due to its high electric resistance from both the surrounding bone and bone marrow itself, relative to other tissues in the body. TTFields therapy was very well tolerated and safe. The 4 patients with skin lesions showed transient yet convincing inhibition in the growth rate of the treated lesions. One of these patients had a partial response to treatment. Although the number of patients in this study is small, the lack of toxicity of this therapy and the promise of efficacy seen in the data gathered to date indicate the potential of TTFields as a new treatment modality for solid tumors, definitely warranting further investigation in larger clinical trials.

References

1 Polk C: Therapeutic applications of low-frequency sinusoidal and pulsed electric and magnetic fields; in Bronzino JD (ed): The Biomedical Engineering Handbook. Boca Raton, FL, CRC Press, Inc., 1995, pp. 1404–16.
2 Elson E: Biologic effects of radiofrequency and microwave fields: in vivo and in vitro experimental results; in Bronzino JD (ed): The Biomedical Engineering Handbook. Boca Raton, FL, CRC Press, Inc., 1995, pp. 1417–23.
3 Chou CK: Radiofrequency hyperthermia in cancer therapy; in Bronzino JD (ed): The Biomedical Engineering Handbook. Boca Raton, FL, CRC Press, Inc., 1995, pp. 1424–30.
4 Kirson E, Dbaly V, Tovarys F, Vymazal J, Soustiel J, Itzhaki A, Mordechovich D, Steinberg-Shapira S, Gurvich Z, Schneidermann R, Wassermann Y, Salzberg M, Ryffel B, Goldsher D, Dekel E, Palti Y: Alternating electric fields arrest cell proliferation in animal tumor models and human brain tumors. Proc Natl Acad Sci U S A 2007;104:10152–57.
5 Nordenström BE: Electrochemical treatment of cancer. I: Variable response to anodic and cathodic fields. Am J Clin Oncol 1989;12:530–6.
6 Jordan MA, Thrower D, Wilson L: Effects of vinblastine, podophyllotoxin and nocodazole on mitotic spindles: implications for the role of microtubule dynamics in mitosis. J Cell Sci 1992;102:401–16.
7 Rowinsky EK, Donehower RC: Paclitaxel (taxol). N Engl J Med 1995;332:1004–14.
8 Kline-Smith SL, Walczak CE: The microtubule-destabilizing kinesin XKCM1 regulates microtubule dynamic instability in cells. Mol Biol Cell 2002;13:2718–31.
9 Kapoor TM, Mayer TU, Coughlin ML, Mitchison TJ: Probing spindle assembly mechanisms with monastrol, a small molecule inhibitor of the mitotic kinesin, Eg5. J Cell Biol 2000;150:975–88.
10 Maiato H, Sampaio P, Lemos CL, Findlay J, Carmena M, Earnshaw WC, Sunkel CE: MAST/Orbit has a role in microtubule-kinetochore attachment and is essential for chromosome alignment and maintenance of spindle bipolarity. J Cell Biol 2002;157:749–60.
11 Kirson E, Gurvich Z, Schneiderman R, Dekel E, Itzhaki A, Wasserman Y, Schatzberger R, Palti Y: Disruption of cancer cell replication by alternating electric fields. Cancer Res 2004;64:3288–95.
© W. S. Maney & Son Ltd 2009
DOI 10.1179/175355309X457295

Postcards from the Edge of Time: Archaeology, Photography, Archaeological Ethnography (A Photo-Essay)

Yannis Hamilakis, Aris Anagnostopoulos (University of Southampton, UK); Fotis Ifantidis (University of Thessaloniki, Greece)

In this photo-essay we present and discuss an experiment with digital photography as part of our archaeological ethnography within the Kalaureia Research Programme, on the island of Poros, Greece. We contextualize this attempt by reviewing, briefly but critically, the collateral development of photography and modernist archaeology, and the links between photography and anthropology, especially with regard to the field of visual anthropology. Our contention is that at the core of the uses of photographs made by both disciplines is the assumption that photographs are faithful, disembodied representations of reality. We instead discuss photographs, including digital photographs, as material artefacts that work by evocation rather than representation, and as material memories of the things they have witnessed; as such they are multi-sensorially experienced. While in archaeology photographs are seen as either official records or informal snapshots, we offer instead a third kind of photographic production, which occupies the space between artwork and ethnographic commentary or intervention. It is our contention that it is within the emerging field of archaeological ethnography that such interventions acquire their full poignancy and potential, and are protected from the risk of colonial objectification.
keywords Photography, Archaeology, Archaeological ethnography, Social anthropology, Senses, Materiality, Kalaureia, Greece

Public Archaeology: Archaeological Ethnographies, Vol. 8 No. 2–3, 2009, 283–309

Introduction

In one of Aris's visits to a neighbour of the sanctuary of Poseidon, a ship mechanic by trade, the latter pulled out a hefty tome on horses. It was an 'Encyclopaedia of Horses' published by DK publishers in Britain. Aris thought he meant to demonstrate his passion for horses, which was already known to him. 'No,' he insisted, 'I bought this book in one of my trips abroad, because I love horses. But it kept a surprise for me in store. Look at this.' He turned to a page with a picture of an old man on a horse. The caption to the photograph said something about the 'Pindos horse', which apparently was figured here, but not much else. 'This is my father', the neighbour insisted. It turned out that, ages ago, a gentleman had arrived at the farmstead kept by his father near the sanctuary of Poseidon, and taken some pictures of him riding the horse. That same person had written the book. Aris asked whether the neighbour's father had received anything for this. 'He did not understand, he was an illiterate man', the son told him. What about himself, Aris insisted, but he waved the question away and changed the subject. It was probably too late for all this, Aris thought back then, too late to press for claims on memory as property. To discover a picture of your long-dead father inside a book on horses in some European capital is surely to marvel at the unexpected trajectories photographs can take. It is also to feel a sense of awe at your own inability to control photographic representations, once they have taken off. This is a photo-essay, an experimental attempt to combine archaeological ethnography with the use of creative, digital photography.
Our experiment took place within the Kalaureia research programme (www.kalaureia.org), centred around the excavation of the ancient sanctuary of Poseidon, on the island of Poros, Greece. The authors are all team members of this project, engaged in a collaborative production of an archaeological ethnography: a critical and dialogic space which enables the understanding of 'local', unofficial, contemporary discourses and practices to do with this archaeological site, as it is currently being constituted by various official and alternative archaeologies (for a discussion on the theoretical underpinnings of this project, see Hamilakis and Anagnostopoulos, this volume). The incident narrated above is only one of several examples which alerted us to the fact that our own photographic voracity in this project is by no means innocent. We did not enter a pristine backwater to photograph 'ways of life' or 'archaeological processes'. Instead, we entered a field where 'locals' are familiar with the power of the image, and the circulation of visual material in local, national, and global arenas. They are alert to the multiple regimes of value created by the circulation and exchange of images, and have developed multiple ways of interacting with, influencing, breaking and exploiting it. With these thoughts in mind, in this essay we will start by critically reviewing in turn the links between archaeology, socio-cultural anthropology (the two parent disciplines of archaeological ethnography) and photography. We will then outline briefly our ideas on how digital, creative photography can be deployed as part of archaeological ethnographic projects, before we describe the use of photography as part of our project. Finally, we will present, with commentary when needed, some examples of this photographic work.

Archaeology and photography as collateral devices of modernity

Odd that no one has thought of the disturbance (to civilization) which this new action causes.
Roland Barthes (1981: 12)

Equally odd, we may add, that there is so little discussion on the collateral development of the photographic and the archaeological. Yet, as recent studies have shown (e.g. Shanks, 1997; Hamilakis, 2001, 2008, in press; Bohrer, 2005; Lyons et al., 2005; Downing, 2006; Hauser, 2007), there is much to be gained by studying the links between photography and archaeology as devices of Western modernity that came into existence more or less at the same time, and partook of the same ontological and epistemological principles. Barthes (1981) was one of the first to note the importance of the fact that the same century had invented history and photography. When in 1839, the scientist and politician François Arago (1786–1853) was urging the delegates in the French Chamber of Deputies to buy Daguerre's invention, one of his arguments was the archaeological applications of the new technique, while the other key personality credited with the invention of photography, Fox Talbot, had an active archaeological interest and is considered as one of those who deciphered the cuneiform script. Within a few months from its invention, photography was being used extensively in capturing images of antiquities, in bringing 'home' traces of the material past, especially at a time when the emerging nation states were putting restrictions on the export and movement of antiquities. If the fundamental event of modernity is the reframing and capturing of the world as picture, as suggested by Heidegger (1977), then both photography and modernist archaeology partook of this process of visualization and exhibition. Both shared the epistemological certainties of Western modernity, be it the principle of visual evidential truth ('seeing is believing'), the desire to narrate things 'as they really were', or objectivism.
Both archaeology and photography objectified, in both senses of the word: archaeology produced, through the selective recovery, reconstitution and restoration of the fragmented material traces of the past, objects for primarily visual inspection. Photography materialized and captured a moment, and produced photographic objects to be gazed at. But they both also partook of the modernist inquiry on the individual and national self as other, as something external that can be materialized in objects and things, gazed at, dissected and analysed (Downing, 2006). They also both attempted to freeze time: photography by capturing and freezing the fleeting moment (see Berger and Mohr, 1982: 86), and archaeology by arresting the social life of things, buildings and objects, and attempting to reconstitute them into an idealized, original state. Photography also facilitated a fundamental illusion of the modernist, especially national, imagination: the re-collection, the bringing together of things (in the form of their photographic representations), and the creation and reconstitution of the whole, of the corpus, of a national or archaeological totality (see Hamilakis, 2007). Modernist archaeology and photography partook of a novel, Western conception of the body and of the sensuous self, one that was grounded on Cartesian dualism, and on the prioritization of an autonomous and disembodied sense of vision (see Crary, 1992). But they also reinforced further that conception, be it through objects exhibited in a museum behind glass cases or photographs to be gazed at. They thus both promoted a certain way of seeing that was largely disembodied and desensitized. Yet, despite these dominant developments, Western modernity, scarcely a monolithic entity, harboured diverse scopic regimes, and other vernacular modernities came into existence, both within and outside the European core (see Pinney, 2001; Pinney and Peterson, 2003; Lydon, 2005).
Modernist archaeological cultures were also expressed in diverse ways, but were also constrained at the same time by the elite character of the enterprise. More importantly, both archaeology and photography produced material artefacts which, by virtue of their materiality, invited a fully-embodied, multi-sensorial and kinaesthetic encounter (see Wright, 2004), resulting in an as-yet-unresolved tension. It was the tactile properties of photography especially that encouraged Walter Benjamin (2008 [1935–1936]) to celebrate photography as the new mimetic technology that could enrich the human sensorium, acting as a prosthetic sensory device (Buck-Morss, 1992; Taussig, 1993). In the areas known as the classical lands, photography was active from its invention. The first daguerreotypes of the Athenian Acropolis were produced in 1839, the same year that the process had become officially known. Within a few years, a large number of commercial photographers produced photographic reproductions of the most famous classical monuments, guided mostly by classical authors or biblical references. These started circulating widely as individual photos or photographic albums, producing a new visual economy of classical antiquity (see Sekula, 1981; Poole, 1997). A photographic canon was established from early on with regard to the monuments to be photographed, but also the specific angle chosen, the framing, and so on (see Szegedy-Maszak, 2001). This photographic canon contributed to a new way of seeing classical antiquity, one based on an autonomous and disembodied gaze, emphasizing classical monuments in splendid isolation, devoid of other material traces and of contemporary human presence (Hamilakis, 2001).
Archaeologists and photographers in the 19th century worked in tandem: the first were producing staged themes, selected, cleansed and reconstituted classical edifices out of the material traces of the past; and photographers were framing these themes (in an equally selective manner) and they were reproducing them widely. They both thus contributed to a new simulacrum economy of classical antiquity. Rather than losing their magical aura, their ‘unique apparition of a distance, however near [they] may be’, as Benjamin would have wanted it (2008: 23), classical antiquities with their endless photographic reproductions, gained further in auratic and thus distancing value, and their already high esteem within the Western elite visual economy was strengthened even more, as they were now the originals of a myriad of reproduced images (see Hamilakis, 2001).

Through photography, classical monuments, in their visual-cum-tactile photographic renderings, reached many more people than before. This photographic corpus had an inherent potential, through its evocation of materiality and tactility, by showing buildings and objects, and through the materiality and tactility of the photographic object itself, to be appreciated in a fully embodied and multi-sensorial way. This potential, however, in order to be fulfilled, required a counter-modernist embodiment of the self, one at odds with the dominant Western one. It may be the case that in certain contexts, that potential was indeed fulfilled, but overall, things turned out otherwise. As Taussig put it,

history has not taken the turn Benjamin thought that mimetic machines might encourage it to take. The irony that this failure is due in good part to the very power of mimetic machinery to control the future by unleashing imageric power on a scale previously only dreamed of, would not have been lost on him, had he lived longer (1993: 26).
POSTCARDS FROM THE EDGE OF TIME: A PHOTO-ESSAY

Photography and anthropology

The development of the new field of visual anthropology over the past few decades was a complex process that incorporated both a critique and an affirmation. The critique was aimed towards previous methods and assumptions surrounding visual representations. Modern anthropology defined itself through a violent distancing from 19th-century ‘armchair’ versions of anthropology. Earlier anthropologists had contented themselves with second-hand information from missionaries and travellers, or at the very least with information that was brought to their ‘veranda’ by willing locals. Their concern was mostly with typological distinctions between ‘tribes’, languages, or racial ‘types’, and the assorted artefacts that documented the rise and extinction of distinct cultural traits. Photography was widely used to visually document indigenous tribes, in an effort that largely resembled typological representations in archaeology (for a critical review of racial hints in ethnographic photography, see Poole, 2005).

After World War I, fieldwork methods were transformed, and the physical presence of the researcher amidst the people studied gradually became the sine qua non of ethnography. ‘Being there’ became the central claim to anthropological knowledge, and a complex visual metaphor evolved. Ethnographic narrative was a first-hand account of an impartial observer; the eye of the ethnographer replaced the photographic lens, thus privileging vision over other senses in imparting and consuming ethnographic experience (Pink, 2006: 8). Simultaneously, photography gradually became suspect for it undermined the authority of the ethnographic eye: it was too facile an indication of ‘being there’, associated with the amateurism of tourists and the superficial gaze of journalists (see Pinney, 1992; Grimshaw and Ravetz, 2005: 5).
Although founding figures of fieldwork anthropology, such as Malinowski or Evans-Pritchard, took many photographs, only some of which featured in their works, they edited these very carefully and altogether avoided discussing the conditions of their production (Wolbert, 2001; Poole, 2005: 166; Pink, 2006: 7). The suspicion towards visual testimonies developed over the years into what at least one commentator described as anthropology’s ‘iconophobia’ (Taylor, 1996). Even after visual anthropology emerged as a subfield in the 1960s, anthropologists by and large have deemed visual evidence as ‘insufficient’, unless accompanied by the textual testimony of the ethnographer (Stoller, 1992; Loizos, 1993; MacDougall, 1997). In fact, for this critique, ‘textuality itself, and textuality alone, is the condition of possibility of a legitimate (“discursive, intellectual”) visual anthropology’ (Taylor, 1996: 66, discussing Bloch, 1988).

It was only natural then that the renewed interest in imagework coincided for anthropology with the ‘crisis of representation’ of the 1980s and 1990s (Pink, 2006: 12–13). Textual and narrative techniques that produced an objective effect in ethnography were put to the test and found wanting (for a summary, see James et al., 1997). As Pink claims, critical reflection on ‘power relations and truth claims in the wider anthropological project [. . .] inspired new forms of representing anthropologists’ own and other people’s experiences’ (Pink, 2006: 13). Besides raising the subject of reflexivity, which has always been crucial in visual ethnography, this critique also brought to the fore the subjectivity of the ethnographer as an instrument of ethnographic understanding.

The present moment is one in which anthropology is still trying to overcome its ‘logocentrism’ (Grimshaw and Ravetz, 2005: 6) and devise other modes of representation that convey more fully the ethnographic experience.
Within the discipline, however, there is still resistance to accepting an independent life for images, and demands that these be clothed in words in order to enhance their descriptive depth. The image is still deemed too ‘shallow’, despite Taylor’s convincing argument to the contrary (1996). The issue arose in practical terms for us when constructing this essay: should we leave the evocative presence of photographs to speak for itself, or should we dress it in our own words? And if so, what would the content of these words be? Should it provide a backdrop for the reading of the pictures, should it complement it with ethnographic information, or should it accompany it with a comment that expresses our own feelings towards it? We concluded, albeit tentatively and instinctively, that at the source of this conundrum is a tacit fundamental assumption: that words and images are used in ethnography as representations of ethnographic truth. We feel that the only way out of this impasse is to claim a new life for both images and words, a life of evocation rather than representation, in order to create fleeting instances of meaning between reader/viewer and writer/photographer/ethnographer. In our photo-essay, we put forward a modest proposal to treat visual and textual cues as of the same order, as material artefacts embedded in histories of archaeology and traversed by archaeologies of visual representations.

Beyond representation: photographs as evocative material artefacts

Given this heritage, and the associated debates, what is to be done with photography in contemporary archaeology, beyond its usual role as documentation? How can we counter the traditions of the autonomous gaze, and of disembodied encounters partaken by both modernist archaeology and early photography? How can we benefit from the experience of visual anthropology and the debates that it provokes?
More pertinently, is there a place for an active role of photography within archaeological ethnography? Luckily, in bringing about such a role we can build not only on the growing body of critical work on the collateral development of early photography and modernist archaeology, part of which we discussed above, but also on experimental ventures in contemporary photography, and, of course, on new technological innovations, the most important being digital photography, with its various possibilities of enhancement and artistic modification. In tandem with this critical and experimental work in photography, the critique of the ontological basis and of the bodily configurations of modernist archaeology allows for a deployment of photography in archaeology on a completely different basis. Finally, the still fluid and experimental nature of the field of archaeological ethnography, the contours of which we trace with this volume, offers possibilities for collaborative work between photographers and archaeological ethnographers.

As Bateman has noted (2005), in any excavation there are normally two types of photographic production: the official, normally tightly controlled documentary photographic record (both the on-site photography, and the finds photos in the lab or the museum afterwards); and the unofficial snapshots produced mostly by students and by visitors to the site. We advocate and offer here a third kind of photographic production: photography that is between artwork and visual ethnographic commentary. While a similar kind of photography has been attempted in other projects (e.g. Bateman 2005), we propose here its use as part of collaborative archaeological ethnography. It is our contention that the creative use of digital photography can be of immense value to the emerging field of archaeological ethnography.
Given our conception of archaeological ethnography as sensuous, fully embodied scholarship (see Hamilakis and Anagnostopoulos, this volume), we treat a photograph not as visual representation but, to paraphrase Laura Marks, as ‘material artefact of the object it has witnessed’ (2000: 22; see also Edwards and Hart, 2004). Digital photographs are no less artefactual and material than analogue photographs: they too are experienced materially, be it on screen or in their printed versions on paper (Sassoon, 2004: 197). Digital photography, with its possibilities of retouching and reworking, has helped dispel and undermine further the myth that photographs re-present, that they faithfully reproduce reality. They are rather material artefacts, artistic objects, contemporary interventions, commentaries upon other artefacts and objects, and upon other interventions, in our case of archaeological and ethnographic nature. In other words, they are memories, that is, reworked renderings of the things they have witnessed. They do not represent, but rather recall. They do not show, but rather evoke. As such, they are material mnemonics, and as all memory, they are reworkings of the past, not a faithful reproduction of it (see Hamilakis and Labanyi, 2008). Like all mnemonic recollections, they can be comforting and consoling, as well as uncomfortable, unsettling, and disturbing. Photographs can also lead to unexpected associations; they can unearth, bring to the surface, but also throw into sharp focus things that were always there but were not seen, nor felt and experienced.
For example, in archaeological projects, the kind of photography that we advocate here can frame, focus on and bring to the surface the hidden or overlooked materialities: the remnants and traces of periods and lives not officially valorized as worthy of archaeological documentation, or the remnants of the continuous biography of a site, as it is being transformed by archaeological and non-archaeological agents (see the theotheracropolis.com photo-blog, for another example). We suggest that within a sensuous archaeological, multi-temporal ethnography, photography can be framed as, but also experienced through, haptic visuality (see Marks, 2000), or rather through fully embodied, performative and multi-sensory visuality. Photographs can be touched with the hands as well as the eyes, and they can evoke texture, smells, tastes, and sounds, be it through the depiction of their theme, or the angle chosen, or the manipulation and reworking of the image. The same techniques can also help evoke and recall different times and temporalities, diverse human and material biographies. Our thesis here resonates with what Chris Pinney has called ‘corpothetics’, as opposed to aesthetics, which he defines as ‘the sensory embrace of images, the bodily engagement that most people (except Kantians and modernists) have with artworks’ (2001: 158).

Recent calls for the visualization of archaeology (e.g. Cochrane and Russell, 2007), well-meant, important and pertinent critiques that advocate the opening up of the discipline to new forms of expression, often ignore the historical, ontological and epistemological links between archaeology and visual devices such as photography, oblivious thus to the problematic baggage that this historical link entails. Moreover, they seem to assume that creative artistic practice on its own, without critical historical interrogation and ethnographic contextualization, has in itself the power to transform archaeology.
We contend instead that it is within the framework of collaborative archaeological ethnography that such use of photography attains its full potential (see Castañeda 2000–2001). This is not to deny the importance and power of the medium of photography itself, nor to suggest that it is in need of external validation. Within the context of archaeological ethnography, photography becomes another form of ethnography and the photographer becomes an ethnographer: she/he turns our attention to certain fleeting moments, to specific overlooked objects and artefacts which are exposed and lit from certain revealing angles, and to momentary situations that deserve scrutiny, interrogation, dialogue, and critique. The ‘freezing of time’ thus becomes in this case a revelatory moment. But this photo-ethnographic work will need the other forms of ethnographic work, such as the in-depth and long-term participant observation, and the multiple ethnographic conversations, in order to acquire its full power and poignancy. Archaeological ethnography opens up the space for such dialogue, allows diverse local voices to enter into conversation with the photographer, and challenges their stated or implicit assumptions. Photography can operate as the performative and multi-sensorial commentary on some of the issues these conversations have brought up, and it can expose others that would require further ethnographic exploration. Moreover, archaeological ethnography can constantly alert us to the danger of reproducing a colonial photographic regime, of objectifying, in other words, people and things alike, by invading, capturing and appropriating their realities.
In providing a historical and social context, ethnography can also counter the de-aestheticization or anaestheticization of photography (Buck-Morss, 1992), that is, its divorce from the human sensorium and its elevation into an abstract, timeless, ‘aesthetic’ value, which, in association with archaeological monuments, often acquires the connotations of high ‘taste’ (see Bourdieu, 1986). Ethnography also brings to the fore the political, so often masked but in reality inseparable from the aesthetic, as they are both about what can (that is, what is allowed to) be seen and experienced, and what not (Ranciere, 2006). Finally, ethnography allows local people to ‘talk back’, comment on the photographs, select or reject certain photographic interventions, or produce, display and circulate their own.

We have also found Castañeda’s notion of photography as ethnographic installation (this volume) of much interest: the idea that photographic interventions, both the photographic process itself but also the exhibition and circulation of photographs, can provide an arena for further ethnographic encounters, can produce unexpected reactions, trigger memories, and evoke personal and object biographies that would otherwise have remained untold (see Hoskins 1998). Moreover, the return of the photographic production (both our own but also other, archival and historical ones) to various local communities, beyond the opportunities it offers for further dialogue, constitutes a fundamental ethical act of sharing knowledge, images, material artefacts.

As Mitchell has observed (2005), echoing Jay’s work (1993), much of the critique on visuality in recent years has been characterized by iconophobic suspicion and anxiety (see the example of anthropology, above), unintentionally perhaps revealing the power of images to evoke and elicit reactions, indeed demand such reactions from humans.
While we would still advocate the need to historicize and critique the scopic regimes of modernity, in its various configurations, we would concur with Mitchell and others that, rather than resorting to iconophobia, we should treat images as sensuous material artefacts that have the capacity to produce and enact relationships, arrange and rearrange the material social field. In this project, we attempt to move beyond critique in order to demonstrate some of this power.

Photography as part of archaeological ethnography in Kalaureia

In the Kalaureia archaeological ethnography project we used photography right from the start. The two of us who worked as the main ethnographers (Aris and Yannis) routinely took many ethnographic photographs, but it was with the addition to our team of Fotis Ifantidis, an archaeologist and a photo-blogger, that photography became an important part of our project. After a short exploratory visit in May 2007, Fotis joined the team for three weeks in May and June 2008. He thus formed part of the ethnographic team, and he took a large number of photographs of the site, of the visitors, of the workmen and the archaeologists, of the town and its people, of the surrounding landscape and seascape. Fotis’s work became the topic of discussion, debate and critique within the broader archaeological group, including the workmen. His photographic production was put into circulation immediately (another advantage of digital technology), and was thus subjected to feedback, and to instant critique (see Bateman 2005), operating in other words as an ethnographic installation from the very beginning. In the summer of 2008, we set up a photo-blog (kalaureiainthepresent.org) and we hope to produce a separate-volume photo-essay (in English and in Greek) which will merge ethnographic accounts and photography.
When we circulated the idea of doing a series of portraits of the workmen as a way of honouring and valorizing their contribution to the archaeological process, the reaction was mixed, both from the archaeologists and from the workmen. One of the workmen, Mr M, responded to our request to take his portrait by saying, half-seriously: ‘if you want to honour someone you dedicate a statue to him’. He also asked if there would be any financial benefits to them from this work. M’s initial reaction to our idea constituted not only an eloquent and witty way of articulating his resistance to photographic objectification, but it brought to the fore the political economy of archaeological practice, and labour relationships on site. The workers filmed each other with their mobile phones and then showed the videos around, for laughs. They downloaded saucy films and played them loud. The fact that we wanted to photograph them, however, was suspicious, since they felt they would not be able to control the trajectory of the picture which, based on their experience, could be used to mar their public profile. Most workmen (including Mr M) were, however, convinced, especially since they understood that they would maintain part of the control of the photographic process. They had a series of photos taken in various poses and at various times, and they themselves selected the one that was to be circulated further (see Berger and Mohr, 1982: 26). At the end of the excavation season we produced a series of large-scale paper versions of these portraits and offered them to the workmen, during the feast held to mark the end of the season, a gesture that resulted in further reactions and comments, mostly positive, and in any case of much ethnographic interest.
In August 2008 we exhibited some of these photos at an open-air photographic exhibition organized by the local community at Galatas, on the Peloponnesian coast, opposite Poros but only a five-minute boat trip away. We engaged in a dialogue with the viewers, a venture, however, which was less successful than we hoped, mainly because of a lack of the appropriate context for such viewing. After securing their permission, we included some of the photographic portraits of the excavation workers, who all live in Galatas and the villages nearby.

Some months later, in November 2008, Aris went to the dig to talk to the workers who were clearing ground for the next excavation season. Upon seeing him, Mr M started telling him that he (Aris) was in big trouble since the father of one of the workers was looking for him. He wanted, Mr M said, to complain about our use of the picture of his son at the exhibition, and he claimed he was going to bring the case all the way to the European Court. The other workers joined in, in what turned out to be a premeditated practical joke. Caught unawares, Aris was trying to figure out how much of this was true and how much they were making up. He contented himself with laughing self-consciously, and mumbling something to the effect that they had sought permission to display these photos from everyone portrayed. Mr M would not have any of that. He warned Aris that the ‘old-man’s money piled up would surely overshadow the tallest skyscraper’. He had the money to litigate us to death, it was implied. The joke went on for a while, despite Aris’s protestations, increasing his sense of unease.

In the era of the internet, of blogging, and of omnipresent mobile phone cameras, it would be naive and patronizing to believe that local people are immune to or ignorant of the universal circulation of images and their value connotations, expressed in economic terms.
This plain fact, which we have to negotiate constantly, transforms any visual form of expression we attempt. When we use photographs as material artefacts in order to evoke responses and ethnographic situations, we have to answer to both those portrayed — or their relatives — and those who question the very act of photography as some sort of appropriation. When we use photographs as evocative evidence of ethnographic involvement, we cannot divest them of their contestable meaning, and the remembrance of contestation during and after their production and their circulation. So we cannot claim that these photos ‘represent’ something, but instead we must deal with them as material artefacts caught in a web of power and signification. The textual component of this essay is not meant to act as the scholarly validation of photography; it rather provides some clues that situate those images historically and ethnographically as contested things, and lay bare the processes that led to their production.

In this photo-essay, we present a small sample of the photographic work carried out as part of the Kalaureia project. It is hoped that this artwork (the combined effect of images and words, words seen as both images and as signs) can convey the sense of ‘being in the ethnographic field’, of being attentive to its evocative materialities and temporalities, of coming into direct contact with the texture and tactility of the place, but also its multiple and intermingling layering. The insights gained through this evocative experience are of a different order from those gained by a conventional essay, generating as they do affect and emotion in a much more poignant manner.
Thus they lead to an alternative production and experiencing of the archaeological site, a site where ancient buildings are temporarily decentred, and olive trees and early 20th-century ceramics acquire relevance and import, as much as ancient classical finds.

All photographs are by Fotis Ifantidis, unless otherwise stated. The text is by the two remaining authors but incorporates feedback and commentary by Fotis. The arrangement of the photos takes the viewer on a tour, starting from the temple, walking around the site, and ending at the town of Poros.

Acknowledgements

We are grateful to Vasko Démou and to an anonymous reviewer for constructive comments and advice on an earlier version.

References

Barthes, R 1981 [1980] Camera lucida: reflections on photography (trans. by R Howard). Hill and Wang, New York.
Bateman, J 2005 Wearing Juninho’s shirt: record and negotiation in excavation photographs. In: Smiles, S and S Moser (eds) Envisioning the past: archaeology and the image. Blackwell, Oxford, 192–203.
Benjamin, W 2008 [1935–1936] The work of art in the age of its technological reproducibility and other writings on media. The Belknap Press of Harvard University Press, Cambridge, MA.
Berger, J and J Mohr 1982 Another way of telling. Writers and Readers, London.
Bloch, M 1988 Interview with G Houtman. Anthropology Today 4(1) 18–21.
Bohrer, F N 2005 Photography and archaeology: the image as object. In: Smiles, S and S Moser (eds) Envisioning the past: archaeology and the image. Blackwell, Oxford, 180–191.
Bourdieu, P 1986 Distinction: a social critique of the judgement of taste (trans. by R Nice). Routledge, London.
Buck-Morss, S 1992 Aesthetics and anaesthetics: Walter Benjamin’s artwork essay reconsidered. October 62 3–41.
Castañeda, Q E 2000–2001 Approaching ruins: a photo-ethnographic essay on the busy intersections of Chichén Itzá. Visual Anthropology Review 16(2) 43–70.
Cochrane, A and Russell, I 2007 Visualising archaeology: a manifesto. Cambridge Archaeological Journal 17(1) 3–19.
Crary, J 1992 Techniques of the observer: vision and modernity in the nineteenth century. MIT Press, Cambridge, MA.
Downing, E 2006 After images: photography, archaeology and psychoanalysis and the tradition of Bildung. Wayne State University Press, Detroit.
Edwards, E and J Hart 2004 Introduction: photographs as objects. In: Edwards, E and J Hart (eds) Photographs, objects, histories. Routledge, London, 1–15.
Grimshaw, A and A Ravetz 2005 Introduction. In: Grimshaw, A and A Ravetz (eds) Visualising anthropology. Intellect Books, Bristol, 1–16.
Hamilakis, Y 2001 Monumental visions: Bonfils, classical antiquity, and nineteenth-century Athenian society. History of Photography 25(1) 5–12, 23–43.
Hamilakis, Y 2007 The nation and its ruins: antiquity, archaeology, and national imagination in Greece. Oxford University Press, Oxford.
Hamilakis, Y 2008 Monumentalising place: archaeologists, photographers, and the Athenian Acropolis from the eighteenth century to the present. In: Rainbird, P (ed.) Monuments in the landscape: papers in honour of Andrew Fleming. Tempus, Stroud, 190–198.
Hamilakis, Y in press Trasformare in monumento: archeologi, fotografi e l’Acropoli ateniese dal Settecento a oggi. In: Barbanera, M (ed.) Relitti riletti. B. Borringhieri, Torino.
Hamilakis, Y and J Labanyi 2008 Introduction: time, materiality and the work of memory. History and Memory 20(2) 5–17.
Hauser, K 2007 Shadow sites: photography, archaeology, and the British landscape 1927–1955. Oxford University Press, Oxford.
Heidegger, M 1977 The question concerning technology and other essays. Harper Torchbooks, New York.
Hoskins, J 1998 Biographical objects: how things tell the stories of people’s lives. Routledge, London.
James, A, J Hockey and A Dawson 1997 Introduction: the road from Santa Fe. In: James, A, J Hockey and A Dawson (eds) After writing culture.
Routledge, London, 1–15.
Jay, M 1993 Downcast eyes: the denigration of vision in twentieth-century French thought. University of California Press, Berkeley and Los Angeles, CA.
Loizos, P 1993 Innovation in ethnographic film: from innocence to self-consciousness, 1955–1985. University of Chicago Press, Chicago, IL.
Lydon, J 2005 Eye contact: photographing indigenous Australians. Duke University Press, Durham, NC and London.
Lyons, C, J K Papadopoulos, L S Stewart and A Szegedy-Maszak 2005 Archaeology and photography: early views of ancient Mediterranean sites. The Getty Museum, Los Angeles.
MacDougall, D 1997 The visual in anthropology. In: Banks, M and H Morphy (eds) Rethinking visual anthropology. Yale University Press, London, 276–295.
Marks, L 2000 The skin of the film: intercultural cinema, embodiment, and the senses. Duke University Press, Durham, NC and London.
Mitchell, W J T 2005 What do pictures want? The lives and loves of images. The University of Chicago Press, Chicago and London.
Pink, S 2006 The future of visual anthropology: engaging with the senses. Routledge, London.
Pinney, C 1992 The parallel histories of anthropology and photography. In: Edwards, E (ed.) Anthropology and photography. Yale University Press, New Haven, CT, 74–95.
Pinney, C 2001 Piercing the skin of the idol. In: Pinney, C and N Thomas (eds) Beyond aesthetics: art and the technologies of enchantment. Berg, Oxford, 157–179.
Pinney, C and N Peterson (eds) 2003 Photography’s other histories. Duke University Press, Durham, NC.
Poole, D 1997 Vision, race, and modernity: a visual economy of the Andean image world. Princeton University Press, Princeton, NJ.
Poole, D 2005 An excess of description: ethnography, race and visual technologies. Annual Review of Anthropology 34 159–179.
Ranciere, J 2006 The politics of aesthetics (trans. by G Rockhill). Continuum, London.
Sassoon, J 2004 Photographic materiality in the age of digital reproduction.
In: Edwards, E and J Hart (eds) Photographs, objects, histories. Routledge, London, 197–213.
Sekula, A 1981 The traffic in photographs. Art Journal 41(1) 15–25.
Shanks, M 1997 Photography and archaeology. In: Molyneaux, B L (ed.) The cultural life of images: visual representation in archaeology. Routledge, London, 73–107.
Stoller, P 1992 The cinematic griot: the ethnography of Jean Rouch. University of Chicago Press, Chicago, IL.
Szegedy-Maszak, A 2001 Felix Bonfils and the traveller’s trail through Athens. History of Photography 25(1) 13–22 and 23–43.
Taussig, M 1993 Mimesis and alterity: a particular history of the senses. Routledge, New York and London.
Taylor, L 1996 Iconophobia: how anthropology lost it at the movies. Transition 69 64–88.
Wolbert, B 2001 The anthropologist as photographer: the visual construction of ethnographic authority. Visual Anthropology 13 321–343.
Wright, C 2004 Material and memory: photography in the western Solomon Islands. Journal of Material Culture 9(1) 73–85.

Notes on Contributors

Yannis Hamilakis is Reader in Archaeology at the University of Southampton, and has published extensively on the politics of the past, on archaeology and national imagination, on sensuous, embodied archaeology, on social memory, and on Aegean prehistory. He also studies the links between archaeology and photography from the 19th century to the present. He coordinates the archaeological ethnography project of the Kalaureia Research Programme. Address: Archaeology, School of Humanities, University of Southampton, Southampton SO17 1BJ, UK. Email: y.hamilakis@soton.ac.uk

Aris Anagnostopoulos is Post-Doctoral Research Fellow in Archaeology at the University of Southampton and works as ethnographer for the Kalaureia Research Programme. He has trained in social anthropology and urban history and holds a PhD in social anthropology from the University of Kent, Canterbury.
His dissertation studied the creation of public space in early 20th-century Crete. He has worked as documentary researcher and scriptwriter. Address: Archaeology, School of Humanities, University of Southampton, Southampton SO17 1BJ, UK. Email: a.anagnostopoulos@soton.ac.uk

Fotis Ifantidis is an archaeologist and photo-blogger, and a PhD candidate at the University of Thessaloniki. Since 2006 he has maintained a photo-blog on the excavations at the Neolithic site of Dispilio in northern Greece, experimenting with the visual interplay between art and archaeologies (visualizing-neolithic.blogspot.com). He is also the main contributor to The Other Acropolis photo-blog (theotheracropolis.com). Address: 94 Theagenous Charisi Street, 54453 Thessaloniki, Greece. Email: fotisif@hotmail.com

work_sixgnkkn6jb5pmj7hkfw6bkqri ---- The Profession of IT: Computing's Paradigm

Calhoun: The NPS Institutional Archive, Faculty and Researcher Publications, 2009-12. Denning, Peter J. Computing's Paradigm (with Peter Freeman) (December 2009). Trying to categorize computing as engineering, science, or math is fruitless; we have our own paradigm. http://hdl.handle.net/10945/35483
There seems to be agreement that computing exemplifies engineering and science, and that neither engineering nor science fully characterizes computing. What then does characterize computing? In this column, we will discuss computing's unique paradigm and offer it as a way to leave the debilitating debate behind.

The word "paradigm" for our purposes means a belief system and its associated practices, defining how a field sees the world and approaches the solutions of problems. This is the sense that Thomas Kuhn used in his famous book, The Structure of Scientific Revolutions. Paradigms can contain sub-paradigms: thus, engineering divides into electrical, mechanical, chemical, and civil; science divides into physical, life, and social sciences, which further divide into separate fields of science.

Roots of the Debate

Whether computing is engineering or science is a debate as old as the field itself. Some founders thought the new field a branch of science, others engineering. Because of the sheer challenge of building reliable computers, networks, and complex software, the engineering view dominated for four decades. In the mid-1980s, the science view began to assert itself again with the computational science movement, which claimed computation as a new sub-paradigm of science and stimulated more experimental research in computing.

Along the way, there were three waves of attempts to provide a unified view. The first wave was by Alan Perlis,9 Allen Newell,8 and Herb Simon,11 who argued that computing was unique among all sciences and engineering in its study of information processes. Simon went so far as to call computing a science of the artificial. The second wave started in the late 1960s. It focused on programming, seen as the art of designing information processes. Edsger Dijkstra and Donald Knuth took strong stands favoring programming as the unifying theme.
In recent times, this view has foundered because the field has expanded and the public understanding of "programmer" has become so narrow (a coder).

The third wave was the NSF-sponsored Computer Science and Engineering Research Study (COSERS), led by Bruce Arden in the mid-1970s. It defined computing as automation of information processes in engineering, science, and business. It produced a wonderful report that explained many exotic aspects of computing to the layperson.1 However, it did not succeed in reconciling the engineering and science views of computing.

Peaceful Coexistence

In the mid-1980s, the ACM Education Board was concerned about the lack of a common definition of the field. The Board charged a task force to investigate; its response was a report, Computing as a Discipline.4 The central argument of the report was that the computing field was a unique combination of the traditional paradigms of math, science, and engineering (see Table 1). Although all three had made substantial contributions to the field, no single one told the whole story. Programming, a practice that crossed all three paradigms, was essential but did not fully portray the depth and richness of the field.

The report in effect argued for the peaceful coexistence of the engineering, science, and math paradigms. It found a strong core of knowledge that supports all three paradigms. It called on everyone to accept the three and not try to make one of them more important than the others.

[The Profession of IT: Computing's Paradigm. Trying to categorize computing as engineering, science, or math is fruitless; we have our own paradigm. DOI:10.1145/1610252.1610265. Peter J. Denning and Peter A. Freeman.]
Around 1997, many of us began to think the popular label IT (information technology) would reconcile these three parts under a single umbrella unique to computing.3,7 Time has proved us wrong. IT now connotes technological infrastructure and its financial and commercial applications, but not the core technical aspects of computing.

A Computing Paradigm

There is something unsatisfying about thinking of computing as a "blend of three sub-paradigms." What new paradigm does the blend produce? Recent thinking about this question has produced new insights that, taken together, reveal a computing paradigm. A hallmark of this thinking has been to shift attention from computing machines to information processes, including natural information processes such as DNA transcription.2,6 The great-principles framework interprets computing through the seven dimensions of computation, communication, coordination, recollection, automation, evaluation, and design (see http://greatprinciples.org). The relationships framework interprets computing as a dynamic field of many "implementation" and "influencing" interactions.10 There is now a strong argument that computing is a fourth great domain of science alongside the physical, life, and social sciences.5

These newer frameworks all recognize that the computing field has expanded dramatically in the past decade. Computing is no longer just about algorithms, data structures, numerical methods, programming languages, operating systems, networks, databases, graphics, artificial intelligence, and software engineering, as it was prior to 1989. It now also includes exciting new subjects including the Internet, Web science, mobile computing, cyberspace protection, user interface design, and information visualization.
The resulting commercial applications have spawned new research challenges in social networking, endlessly evolving computation, music, video, digital photography, vision, massive multiplayer online games, user-generated content, and much more.

The newer frameworks also recognize the growing use of the scientific (experimental) method to understand computations. Heuristic algorithms, distributed data, fused data, digital forensics, distributed networks, social networks, and automated robotic systems, to name a few, are often too complex for mathematical analysis but yield to the scientific method. These scientific approaches reveal that discovery is as important as construction or design. Discovery and design are closely linked: the behavior of many large designed systems (such as the Web) is discovered by observation; we design simulations to imitate discovered information processes. Moreover, computing has developed search tools that are helping make scientific discoveries in many fields.

The newer frameworks also recognize natural information processes in many fields including sensing and cognition in living beings, thought processes, social interactions, economics, DNA transcription, immune systems, and quantum systems. Computing concepts enable new discoveries and understandings of these natural processes.

Table 1. Sub-paradigms embedded in computing.
(Rows of Table 1; each stage is listed for the math, science, and engineering sub-paradigms.)

Initiation. Math: characterize objects of study (definition). Science: observe a possible recurrence or pattern of phenomena (hypothesis). Engineering: create statements about desired system actions and responses (requirements).
Conceptualization. Math: hypothesize possible relationships among objects (theorem). Science: construct a model that explains the observation and enables predictions (model). Engineering: create formal statements of system functions and interactions (specifications).
Realization. Math: deduce which relationships are true (proof). Science: perform experiments and collect data (validate). Engineering: design and implement prototypes (design).
Evaluation. Math: interpret results. Science: interpret results. Engineering: test the prototypes.
Action. Math: act on results (apply). Science: act on results (predict). Engineering: act on results (build).

Table 2. The computing paradigm.

Initiation: determine whether the system to be built (or observed) can be represented by information processes, either finite (terminating) or infinite (continuing, interactive).
Conceptualization: design (or discover) a computational model (for example, an algorithm or a set of computational agents) that generates the system's behaviors.
Realization: implement the designed processes in a medium capable of executing their instructions; design simulations and models of discovered processes; observe behaviors of information processes.
Evaluation: test the implementation for logical correctness, consistency with hypotheses, performance constraints, and meeting original goals; evolve the realization as needed.
Action: put the results to action in the world; monitor for continued evaluation.

Self-reference also makes for very powerful limitations on computing, such as the noncomputability of halting problems. Self-reference is common in natural information processes; the cell, for example, contains its own blueprint. The interpretation "computational thinking"12 embeds nicely into this paradigm.
The paradigm describes not only a way of thinking, but a system of practice.

Conclusion

The distinctions discussed here offer a distinctive and coherent higher-level description of what we do, permitting us to better understand and improve our work and better interact with people in other fields. The engineering-science debates present a confusing picture that adversely affects policies on innovation, science, and technology, the flow of funds into various fields for education and research, the public perception of computing, and the choices young people make about careers.

We are well aware that the computing paradigm statement needs to be discussed widely. We offer this as an opening statement in a very important and much needed discussion.

References
1. Arden, B.W. What Can Be Automated: Computer Science and Engineering Research Study (COSERS). MIT Press, 1983.
2. Denning, P. Computing is a natural science. Commun. ACM 50, 7 (July 2007), 15–18.
3. Denning, P. Who are we? Commun. ACM 44, 2 (Feb. 2001), 15–19.
4. Denning, P. et al. Computing as a discipline. Commun. ACM 32, 1 (Jan. 1989), 9–23.
5. Denning, P. and Rosenbloom, P.S. Computing: The fourth great domain of science. Commun. ACM 52, 9 (Sept. 2009), 27–29.
6. Freeman, P. Public talk "IT Trends: Impact, Expansion, Opportunity," 4th frame; www.cc.gatech.edu/staff/f/freeman/Thessaloniki
7. Freeman, P. and Aspray, W. The Supply of Information Technology Workers in the United States. Computing Research Association, 1999.
8. Newell, A., Perlis, A.J., and Simon, H.A. Computer science, letter in Science 157, 3795 (Sept. 1967), 1373–1374.
9. Perlis, A.J. The computer in the university. In Computers and the World of the Future, M. Greenberger, ed. MIT Press, 1962, 180–219.
10. Rosenbloom, P.S. A new framework for computer science and engineering. IEEE Computer (Nov. 2004), 31–36.
11. Simon, H. The Sciences of the Artificial. MIT Press (1st ed. 1969, 3rd ed. 1996).
12. Wing, J. Computational thinking. Commun. ACM 49, 3 (Mar. 2006), 33–35.

Peter J. Denning (pjd@nps.edu) is the director of the Cebrowski Institute for Information Innovation and Superiority at the Naval Postgraduate School in Monterey, CA, and is a past president of ACM. Peter A. Freeman (peter.freeman@mindspring.com) is emeritus Founding Dean and Professor at Georgia Tech and former Assistant Director of NSF for CISE. Copyright held by author.

The central focus of the computing paradigm can be summarized as information processes: natural or constructed processes that transform information. They can be discrete or continuous.

Computing represents information processes as "expressions that do work." An expression is a description of the steps of a process in the form of an (often large) accumulation of instructions. Expressions can be artifacts, such as programs designed and created by people, or descriptions of natural occurrences, such as DNA and DNA transcription in biology. Expressions are not only representational, they are generative: they create actions when interpreted (executed) by appropriate machines.

Since expressions are not directly constrained by natural laws, we have evolved various methods that enable us to have confidence that the behaviors generated do useful work and do not create unwanted side effects. Some of these methods rely on formal mathematics to prove that the actions generated by an expression meet specifications. Many more rely on experiments to validate hypotheses about the behavior of actions and discover the limits of their reliable operation.

Table 2 summarizes the computing paradigm with this focus. While it contains echoes of engineering, science, and mathematics, it is distinctively different because of its central focus on information processes.5 It allows engineering and science to be present together without having to choose.

There is an interesting distinction between computational expressions and the normal language of engineering, science, and mathematics.
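The claim above that expressions are generative, that they create actions when interpreted, can be made concrete with a minimal sketch (ours, not from the column). The same text is both a static description, like a blueprint, and, once handed to an interpreter, a running process; the recursive definition also shows the self-reference the column discusses:

```python
# Illustrative sketch (not from the column): a computational expression is
# both a description and, when interpreted, a generator of behavior.
source = """
def factorial(n):
    # The definition refers to itself: a simple case of self-reference.
    return 1 if n == 0 else n * factorial(n - 1)
"""

namespace = {}
exec(source, namespace)            # interpretation turns the description into behavior
print(namespace["factorial"](5))   # the expression now "does work": prints 120
```

A blueprint of a bridge, by contrast, describes but cannot be "run"; the distinction the authors draw is exactly this executability.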
Engineers, scientists, and mathematicians endeavor to position themselves as outside observers of the objects or systems they build or study. Outside observers are purely representational. Thus, traditional blueprints, scientific models, and mathematical models are not executable. (However, when combined with computational systems, they give automatic fabricators, simulators of models, and mathematical software libraries.) Computational expressions are not constrained to be outside the systems they represent. The possibility of self-reference makes for very powerful computational schemes based on recursive designs and executions, and also for very powerful limitations on computing.

work_sjobhdntiffkjha5ouqk3i7ds4 ----

Public Health Nutrition: 16(6), 1066–1072. doi:10.1017/S1368980012004612

Increasing children's lunchtime consumption of fruit and vegetables: an evaluation of the Food Dudes programme

Dominic Upton*, Penney Upton and Charlotte Taylor. Institute of Health and Society, University of Worcester, Henwick Grove, Worcester WR2 6AJ, UK

Submitted 1 February 2012; final revision received 29 June 2012; accepted 24 August 2012; first published online 16 October 2012.

Abstract

Objectives: Although previous research has shown that the Food Dudes programme increases children's fruit and vegetable consumption at school, there has been limited evaluation of the extent to which changes are maintained in the long term.
Furthermore, despite knowledge that the nutritional content of home-supplied meals is lower than that of school-supplied meals, little consideration has been given to the programme's impact on meals provided from home. The present study therefore assessed the long-term effectiveness of the Food Dudes programme for both school- and home-supplied lunches.

Design: Two cohorts of children participated, one receiving the Food Dudes intervention and a matched control group who did not receive any intervention. Consumption of fruit and vegetables was assessed pre-intervention, then at 3 and 12 months post-intervention. Consumption was measured across five consecutive days in each school using weighed intake (school-provided meals) and digital photography (home-provided meals).

Setting: Fifteen primary schools, eight intervention (n 1282) and seven control schools (n 1151; Table 1 lists eight intervention and seven control schools).

Subjects: Participants were children aged 4–11 years.

Results: A significant increase in the consumption of fruit and vegetables was found at 3 months for children in the intervention schools, but only for those eating school-supplied lunches. However, increases were not maintained at 12 months.

Conclusions: The Food Dudes programme has a limited effect in producing even short-term changes in children's fruit and vegetable consumption at lunchtime. Further development work is required to ensure the short- and long-term effectiveness of interventions promoting fruit and vegetable consumption in children, such as the Food Dudes programme.

Keywords: Child; Fruit; Health behaviour; Vegetables

The health-related benefits of eating a diet rich in fruit and vegetables are well documented. Evidence suggests that increased fruit and vegetable consumption significantly reduces the risk of CVD and stroke(1–4) and offers protective effects against some forms of adult cancer(5,6).
Despite the positive health outcomes associated with consuming fruit and vegetables, and recommendations that children over 2 years of age should consume five portions of fruit and vegetables daily, most children in the UK fail to meet recommended levels of intake(7). Since evidence from longitudinal studies suggests that food preferences established in childhood and adolescence are likely to persist into adulthood(8–10), it is clear that interventions to increase children's consumption of fruit and vegetables would be beneficial.

As children spend a large proportion of their time in school, the school environment is a logical setting for targeting nutrition behaviours. Interventions to promote fruit and vegetable consumption in the school environment are varied in their approach and effectiveness. However, three strategies that have been shown to have a reliable effect on children's fruit and vegetable consumption are taste exposure, peer modelling and rewards(11). One evidence-based intervention which incorporates these three core principles is the Food Dudes(12). This programme is aimed at primary-school children and is designed to increase consumption of fruit and vegetables both at school and at home. The programme also aims to help children develop a liking for fruit and vegetables, reduce their snack consumption, think of themselves as healthy eaters and establish a whole-school healthy eating culture(13).

Research has suggested that the Food Dudes programme is effective in producing increases in children's fruit and vegetable consumption at school(14–19) and at home(14,15). Evidence also suggests that the programme encourages an increased liking for fruit and vegetables(14). However, only one evaluation study(16) has investigated the impact of the intervention beyond a 6-month follow-up; thus the

*Corresponding author: Email p.upton@worc.ac.uk. © The Authors 2012.
effectiveness of the programme in facilitating long-term behaviour change is unclear. Furthermore, UK studies of lunchtime consumption have focused mainly upon school-supplied meals, neglecting those supplied from home. It is known that the nutritional content of packed lunches is far lower than that of school-supplied meals(20), containing only half the recommended amount of fruit and vegetables(21). It is therefore important that the effectiveness of the Food Dudes programme in increasing fruit and vegetable consumption for all children, including those eating home-supplied lunches, is established.

The aims of the present study were therefore twofold: first, to investigate the effectiveness of the Food Dudes programme in increasing primary-school children's fruit and vegetable consumption for both home- and school-supplied meals; and second, to establish the extent to which the programme is able to influence long-term maintenance (12 months post-intervention) of any behaviour changes which were observed.

Experimental methods

Design

A between-group analysis was conducted of two cohorts of children participating in the study: one receiving the Food Dudes intervention and a matched control group who did not receive the intervention. The impact of the Food Dudes programme on fruit and vegetable consumption was assessed at baseline (prior to the intervention), at 3-month follow-up (post-intervention) and at 12-month follow-up.

Participants

The programme was evaluated in fifteen primary schools in the West Midlands, predominantly in areas of high deprivation. Participants were 2433 children aged 4–11 years: 1282 in the intervention schools (690 boys and 592 girls) and 1151 in the control schools (596 boys and 555 girls). Power calculations, using G*Power, were computed to determine the necessary sample size.
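The paper does not report the parameters of its power calculation. As a rough illustration of how such a calculation works, the usual normal-approximation formula for a two-sided, two-group comparison, n per group = 2((z(1-α/2) + z(1-β)) / d)², can be sketched as follows; the effect sizes are assumed for illustration and are not taken from the study:

```python
import math

def n_per_group(effect_size, z_alpha=1.96, z_power=0.8416):
    """Normal-approximation sample size per group for a two-sided,
    two-sample comparison. Default z values correspond to alpha = 0.05
    (two-sided) and power = 0.80. Illustrative only; not the authors'
    actual G*Power settings, which the paper does not report."""
    return math.ceil(2 * ((z_alpha + z_power) / effect_size) ** 2)

print(n_per_group(0.2))  # small effect (d = 0.2): 393 per group
print(n_per_group(0.5))  # medium effect (d = 0.5): 63 per group
```

Dedicated software such as G*Power refines this with exact t distributions and repeated-measures corrections, so its answers differ slightly from the normal approximation above.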
Intervention schools were selected by the local health authority and control schools matched as far as possible in terms of school size, proportion of children entitled to free school meals and proportion of children from ethnic minorities. Characteristics of the study sample are shown in Table 1.

Food Dudes intervention

The Food Dudes programme consists of an initial 16 d intervention phase during which children watch a series of DVD episodes of the Food Dudes' adventures. The Food Dudes are four super-heroes who gain special powers by eating their favourite fruit and vegetables, which help them maintain the life force in their quest to defeat General Junk and the Junk Punks. The Dudes encourage children to 'keep the life force strong' by eating fruit and vegetables every day. Class teachers also read letters to the children from the Food Dudes to reinforce the DVD messages. During the first four days of the intervention, children are given rewards for tasting both the target fruit and vegetables, and then for consuming both foods for the remaining 12 d. Following the intervention, a maintenance phase is implemented during which fruit and vegetable consumption is encouraged, but with less intensity than during the intervention phase (a full description of the rationale behind the intervention and details of the Food Dudes programme is given elsewhere(14)).

Procedure

The same procedure was employed in both the intervention and control schools at each study phase, and measures were recorded across five consecutive days in each school.
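The phased reward contingency described above (days 1–4 reward tasting, days 5–16 reward consumption, followed by a lower-intensity maintenance phase) can be sketched as a simple rule. The function name and phase labels are ours, for illustration, not official Food Dudes terminology:

```python
def reward_criterion(day):
    """Return the behaviour rewarded on a given day of the programme,
    following the 16-day schedule described in the text (labels are
    illustrative, not official Food Dudes terminology)."""
    if 1 <= day <= 4:
        return "taste"        # days 1-4: rewards for tasting the target foods
    if 5 <= day <= 16:
        return "consume"      # days 5-16: rewards for consuming both foods
    return "maintenance"      # afterwards: lower-intensity encouragement

print(reward_criterion(2), reward_criterion(10), reward_criterion(30))
# prints: taste consume maintenance
```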
As the study employed an ecological design, no changes were implemented to school practices which could impact upon the everyday experience and choices of children; i.e. school lunchtime menus remained as prescribed by the local education authority. However, food standards developed by the School Food Trust(22) require that at least one portion of fruit and one portion of vegetables or salad must be provided per pupil per day, thus ensuring consistency in fruit and vegetable provision both between menus and schools across the UK. In line with guidelines developed by the Health Promotion Agency(23), a child's portion of fruit or vegetables was defined as 40 g. Control schools remained under baseline conditions during the 16 d intervention phase.

Table 1 Demographic characteristics of the study sample: primary-school children aged 4–11 years from fifteen schools, West Midlands, UK

Group         School   n    Boys  Girls  IMD rank  IMD (%)  FSM (%)  Ethnic minorities (%)
Intervention  1       125    64    61     1 768      5·44*    40·7     22
              2        61    34    27     1 217      3·75*    39·0     27
              3       149    82    67     7 242     22·30     13·2     10
              4       167    98    69     3 639     11·20     30·5     82
              5        49    34    15     1 768      5·44*    57·9     14
              6       296   148   148     2 822      8·69*    25·9     18
              7       265   162   103    20 609     63·45      7·8     74
              8       209    88   121    20 609     63·45      8·7     71
Control       9       125    57    68     2 528      7·78*    36·6     25
              10      188    94    94     3 432     10·57     28·0     15
              11      104    48    56     8 199     25·24     35·8     10
              12      284   158   126    26 581     81·83      2·8     10
              13      222   128    94     9 748     30·01     35·5     80
              14      135    67    68     6 195     19·07      7·8     51
              15       95    46    49    14 977     46·11     14·5     10

IMD, Index of Multiple Deprivation (1 = most deprived, 32 482 = least deprived); FSM, free school meals. *Schools within 10 % of most deprived areas.
Lunchtime consumption

School-provided lunches

Consumption at lunchtime for children having school-provided meals was assessed using the weighed intake method, the 'gold standard' method for measuring dietary intake(24). Prior to lunchtime, each child was given a label with his/her identification number, name and class. Due to the time frame of the lunchtime service and the number of participants in the study, mean portion size was used to provide an accurate measure of dietary intake: average portions of all fruit and vegetables on the school menu were taken and five weights of each food recorded to obtain a mean weight. At the beginning of the lunchtime period, children's food choices were recorded on a spreadsheet and, once the children had finished their lunch, the weight of any food waste for each child was recorded. The weighing area was located next to the rubbish bin and the return of trays was monitored by the research team to ensure that children did not throw away any uneaten food. Salter digital scales were used, accurate to 1 g. The amount of fruit and vegetables consumed was calculated by subtracting the leftover weight from the average portion weight recorded. In cases where a negative value was obtained, it was assumed that the child did not consume that particular food item and a value of zero was reported.

Home-provided lunches

At the start of the day, lunchboxes were labelled with the child's identification number, name and class, and a digital photograph was taken of the lunchbox contents after morning break. Following lunchtime, lunchboxes were collected and a photograph taken of any leftovers. Lunchtime staff instructed children to leave any uneaten food or packaging in their lunchboxes at the end of lunchtime. All rubbish bins were located away from tables to ensure that the children did not throw any food items away, which also enabled close monitoring of food disposal by the research team.
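The weighed-intake arithmetic described above (mean served portion weight minus weighed leftover, with negative results treated as zero, and 40 g defined as one child portion) can be sketched as follows; the function names are ours, for illustration:

```python
def grams_consumed(mean_portion_g, leftover_g):
    """Consumption estimate: mean served portion minus weighed waste.
    Negative results (waste heavier than the mean portion) are treated
    as zero consumption, as in the study protocol."""
    return max(0.0, mean_portion_g - leftover_g)

def portions(grams, portion_size_g=40.0):
    """Convert grams to portions, using the 40 g child portion."""
    return grams / portion_size_g

eaten = grams_consumed(40.0, 12.0)   # 28.0 g consumed
print(eaten, portions(eaten))        # 28.0 0.7
print(grams_consumed(40.0, 55.0))    # 0.0 (negative value clamped to zero)
```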
The number of portions of fruit and vegetables consumed was visually estimated on a five-point Likert scale (0, 1/4, 1/2, 3/4, 1) using previously validated guidelines(25). Inter-rater reliability analysis was performed using correlation to determine consistency among raters. Agreement was calculated for 25 % (n 80) of the study sample at baseline and was found to be excellent (r(78) = 0·98, P < 0·01).

Ethical approval

The study was conducted according to the guidelines laid down in the Declaration of Helsinki and all procedures involving human subjects were approved by the University of Worcester Institute of Health and Society Ethics Committee. Informed consent was obtained from the head teacher at each school, acting in loco parentis, supplemented by parental 'opt-out' consent whereby children are included in the study unless their parents withdraw them(26).

Data analysis

Mean values were computed for each child to provide an indication of average daily consumption of fruit and vegetables for children who (i) consumed school-supplied lunches and (ii) consumed home-supplied lunches. In cases where children consumed both school- and home-supplied lunches during the same study phase or across study phases, children were classified according to the predominant mode of supply (school or home), with the criterion that children consumed exclusively school- or home-supplied lunches on a minimum of 3 d during each phase. Data were analysed using the statistical software package IBM SPSS Statistics 19·0 and differences in consumption were tested using repeated-measures ANOVA. Paired t tests determined the source of any variance, and effect sizes, using Cohen's d, were calculated to measure the practical significance of any changes in fruit and vegetable consumption. An α level of 0·05 was used in all statistical analyses.
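The paper ran its paired t tests and Cohen's d in SPSS. A self-contained sketch of the same arithmetic (using the paired-differences form of d, mean difference divided by the SD of the differences, which is one of several conventions) is:

```python
import math
from statistics import mean, stdev

def paired_t_and_d(before, after):
    """Paired t statistic and Cohen's d computed from the differences
    (d = mean difference / SD of differences; other d conventions exist)."""
    diffs = [a - b for a, b in zip(after, before)]
    sd = stdev(diffs)                     # sample SD, n-1 denominator
    t = mean(diffs) / (sd / math.sqrt(len(diffs)))
    d = mean(diffs) / sd
    return t, d

# Synthetic illustration, not study data:
t, d = paired_t_and_d([10, 12, 9, 11], [11, 14, 12, 15])
print(round(t, 2), round(d, 2))  # 3.87 1.94
```

The P value would then come from the t distribution with n − 1 degrees of freedom, which SPSS reports automatically.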
Results

Description of the study sample

A total of 2433 children participated at baseline, 1696 at 3-month follow-up (30 % attrition from baseline) and 1470 at 12-month follow-up (13 % attrition from the second time point). Two intervention schools completed only the baseline phase, for reasons unconnected with the study. The analyses presented are for children from whom data were available on at least three consecutive days and at each time point in the study. A multivariate ANCOVA was undertaken to establish the potential impact of age, sex, ethnicity and Index of Multiple Deprivation on children's fruit and vegetable consumption. The analysis determined that differences were not significant for age (F(2, 33) = 1·05, P > 0·05), sex (F(2, 33) = 5·99, P > 0·05), ethnicity (F(2, 33) = 2·17, P > 0·05) or Index of Multiple Deprivation (F(2, 33) = 1·75, P > 0·05).

Lunchtime consumption

School-provided meals

Figure 1 displays lunchtime consumption of fruit and vegetables in the intervention and control schools. Analysis of fruit and vegetable consumption identified a significant main effect of study phase (F(2, 519) = 14·26, P < 0·01, ηp² = 0·02) and school setting (F(1, 519) = 45·83, P < 0·001, ηp² = 0·09). However, there was no significant interaction between study phase and school setting (F(2, 519) = 1·20, P > 0·05, ηp² = 0·005). Paired-samples t tests demonstrated that fruit and vegetable consumption in the intervention schools was statistically higher at 3-month follow-up than at baseline, a difference of small practical significance (t = −2·54, P < 0·05, d = 0·26, 95 % CI −5·39, 6·10), but not in the control schools (t = −0·97, P > 0·05, d = 0·07, 95 % CI −4·46, 4·01).
A statistically significant decrease was evident in the intervention and control schools at 12-month follow-up, but it was of greater practical significance for the control group (t = 1·40, P < 0·05, d = −0·14, 95 % CI −5·46, 5·71 and t = 2·63, P < 0·05, d = −0·21, 95 % CI −3·57, 3·73, respectively).

Home-provided lunches

Mean portions of fruit and vegetables consumed are shown in Fig. 2. Results for lunchtime fruit and vegetable consumption showed a significant main effect of study phase (F(2, 343) = 3·52, P < 0·05, ηp² = 0·01) but not school setting (F(1, 343) = 1·52, P > 0·05, ηp² = 0·004). The interaction between study phase and school setting was also non-significant (F(2, 343) = 1·65, P > 0·05, ηp² = 0·005), suggesting that changes in consumption over time were not due to school setting (intervention or control). No short-term changes in fruit and vegetable consumption were found in the intervention schools; decreases evident at 12-month follow-up were not statistically or practically significant (t = 1·37, P > 0·05, d = −0·16, 95 % CI −0·30, 0·01). In the control schools, fruit and vegetable consumption was statistically higher at 3-month follow-up compared with baseline, although of small practical significance (t = −2·55, P < 0·05, d = 0·26, 95 % CI −0·12, 0·38), but not at 12-month follow-up (t = −0·48, P > 0·05, d = 0·05, 95 % CI −0·08, 0·16; see Table 2).

Discussion

The present study demonstrated that the Food Dudes programme has a limited effect in producing even short-term increases in children's consumption of fruit and vegetables at lunchtime. Although significant increases were found at 3-month follow-up in the intervention but not in the control group for school-provided lunches, the non-significant interaction effect suggests any changes were not the result of the intervention.
Likewise, no short-term increases were found in the intervention schools for children who consumed home-provided lunches, although significant increases at 3-month follow-up were observed in the control schools. This indicates that children who did not receive the intervention still increased their fruit and vegetable consumption in the short term. Once again this may be explained by the non-significant interaction effect observed for children consuming home-supplied lunches, which suggests that changes in consumption between study phases did not reflect a programme effect.

[Fig. 1 (chart not reproduced; extracted values: 62 g, 54 g, 49 g and 40 g, 42 g, 33 g). Caption: Portions of fruit and vegetables consumed at lunchtime (amount in grams in parentheses) in the intervention and control schools (school-provided meals) at baseline, 3-month follow-up and 12-month follow-up; primary-school children aged 4–11 years, Food Dudes programme, West Midlands, UK. Values are means with 95 % confidence intervals represented by vertical bars.]

[Fig. 2 (chart not reproduced; extracted values: 30 g, 25 g, 29 g, 36 g, 30 g). Caption: Portions of fruit and vegetables consumed at lunchtime (amount in grams in parentheses) in the intervention and control schools (home-provided meals) at baseline, 3-month follow-up and 12-month follow-up; primary-school children aged 4–11 years, Food Dudes programme, West Midlands, UK. Values are means with 95 % confidence intervals represented by vertical bars.]

Previous research has
found the programme to be effective in increasing children's lunchtime consumption of fruit and vegetables(14,15,17); however, this has focused almost exclusively upon school-supplied meals and not those supplied from home. While one study(16) found the intervention to be effective in increasing the consumption of home-supplied fruit and vegetables, the sample size was small (forty-nine children in the intervention and fifty-three in the control group(27)) and thus may have limited power to detect a significant effect. The findings of that study have yet to be replicated and there remains a lack of evidence for the effectiveness of the programme in increasing fruit and vegetable consumption, particularly for home-provided meals. In contrast to school-provided meals, which are required to conform to food- and nutrition-based standards(22), there is arguably greater potential for variation in the provision of fruit and vegetables for meals provided from home(28). Consequently, it may be more difficult for the programme to change the eating behaviours of children consuming home-supplied lunches. The present findings offer limited support for the role of repeated tasting, peer modelling and rewards alone in producing short- or long-term increases in fruit and vegetable consumption. The development and manifestation of eating behaviours is embedded within a system of influences including intrapersonal (food preferences(29,30)), social (family eating habits(31)) and cultural factors(32), along with aspects of the physical environment such as availability and accessibility(28,33). Consequently, children's fruit and vegetable consumption is likely to be the result of an interaction between various levels of these ecological systems(34). Availability is an important factor in determining consumption of fruit and vegetables(35) at school, for both those meals prepared in school and those brought from home.
If children are not provided with fruit and vegetables then this will inevitably impact upon their levels of consumption. Indeed, research(28) has found that home availability of fruit and vegetables was associated with increased levels of consumption and suggested that this could be easily manipulated in order to increase children's fruit and vegetable intake. Furthermore, it is important that schools work with parents and children to increase awareness of what constitutes a healthy lunch(21), and educating parents about the nutritional content of home-provided lunches is therefore essential(36). Collectively, this may enhance the effectiveness of the programme in increasing consumption of fruit and vegetables for children who consume home-provided lunches. Availability of fruit and vegetables is also likely to impact upon consumption of school-provided meals. In each of the schools that participated in the study, it was observed that older children (aged 7–11 years) typically enter the dining hall towards the end of lunchtime service, when fruit and vegetables may not always still be available. Caterers should take this factor into account when planning menus and ensure that sufficient portions of fruit and vegetables are available for each child. School policies around healthy eating are also likely to mediate consumption. Recent research(37) identified that schools can effectively impact upon children's eating behaviour by increasing availability of fruit and vegetables; however, the availability of unhealthy foods offered in competition with healthier options undermines this effect. Habit has also been highlighted as a strong predictor of fruit and vegetable consumption in children(33). In order to facilitate long-term behaviour change, it may be argued that healthy eating behaviours, such as fruit and vegetable consumption, need to become habitual, i.e. behaviour determined by automaticity and executed without awareness(38,39).
Further development of the Food Dudes programme could focus on encouraging habitual intake and take account of the ecological factors that mediate fruit and vegetable consumption. Indeed, the programme is currently being developed further to support the long-term maintenance of consumption. Comparison between the present findings and those from previous Food Dudes evaluation studies is difficult due to differences in the definition of portion size, particularly regarding lunchtime consumption. For example, a child's portion of fruit and of vegetables has been defined as 80 g and 60 g, respectively, which are likely to be larger than appropriate for children of primary school age(14). Variations in study design also present difficulties. First, previous evaluation studies typically assess the impact of the programme during the 16 d intervention phase(14,16,17); therefore it is likely that increases in consumption will be more pronounced while the intervention procedures are still in place. Second, existing studies provide an evaluation based upon experimental design rather than an ecological approach as reported here. To maximise the effectiveness of interventions, assessment of intake should be conducted in a way that is ecologically valid, an important consideration within the context of public health. The stringent control evident in the literature(14,15), while necessary to guide intervention development, is not conducive to the eating context of the school setting.
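The portion-size comparability problem raised above is partly arithmetic: a reported portion count is just grams consumed divided by an assumed portion size. A minimal sketch (only the 80 g fruit and 60 g vegetable definitions come from the text; the 48 g intake and the 40 g 'child-sized' portion are hypothetical illustrations):

```python
# The same measured intake maps to different "portions consumed" depending
# on the portion-size definition applied, which hampers cross-study comparison.

def portions(grams: float, portion_size_g: float) -> float:
    """Convert consumed grams into a number of portions."""
    return grams / portion_size_g

intake_g = 48.0  # hypothetical mean lunchtime fruit intake in grams
adult_definition = portions(intake_g, 80)  # 80 g per portion (from the text)
child_definition = portions(intake_g, 40)  # 40 g: hypothetical child-sized portion
print(adult_definition, child_definition)
```

Under the larger definition the same intake looks like barely half a portion; under a child-sized definition it exceeds one portion, so portion counts from studies using different definitions are not directly comparable.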
The social context of the eating environment can have a large impact on children's behaviour and, given the limited attention capacities of children, tightly controlled exposure may result in increased attention on the target stimuli and increased consumption(40). This may account for the differences in the findings between the present study and previous evaluations of the programme.

Table 2 Short- and long-term changes in mean portions of fruit and vegetables consumed (grams in parentheses); primary-school children aged 4–11 years, Food Dudes programme, West Midlands, UK

               School provided             Home provided
               FU1          FU2            FU1           FU2
Intervention   0.21 (8 g)*  -0.12 (5 g)*   [illegible]   -0.13 (5 g)
Control        0.06 (2 g)   -0.16 (7 g)*   0.19 (7 g)*   0.04 (1 g)

FU1 = 3-month follow-up - baseline; FU2 = 12-month follow-up - baseline.
*Significant at P < 0.05.

A particular strength of the present study is the use of validated measures of dietary intake. As noted by Klepp et al.(41), evaluations of such interventions should be based upon robust measures of dietary intake. Many evaluations of interventions designed to increase children's fruit and vegetable consumption rely on self-report measures, which are clearly limited by the ability of respondents (in this case children) to accurately recall and record consumption. In contrast, the present study used weighed intake of foods, the 'gold standard' assessment tool, to measure consumption of school-provided meals. It was not practical to employ this method for home-supplied lunches, so these were assessed using digital photography, which offers a pragmatic and reliable tool for assessing consumption in the school setting(42).
This method is particularly effective for studies that require rapid acquisition of data and minimal disruption to the eating environment, such as the study reported here(43).

Conclusions

The present results offer limited support for the effectiveness of the Food Dudes intervention in increasing the fruit and vegetable consumption of primary-school children. Clearly, further development work is required to ensure both the short- and long-term effectiveness of interventions promoting fruit and vegetable consumption in children such as the Food Dudes programme(44). The Food Dudes Forever phase of the programme currently underway is one approach that may enhance the short- and long-term effects of the programme on children's eating habits.

Acknowledgements

Sources of funding: This research was funded by the Department of Health West Midlands and Wolverhampton City Primary Care Trust. Conflict of interest: No conflict of interest. Authors' contributions: All authors were involved in the conception and design of the study. D.U. and P.U. assisted with data analysis and revised the paper. C.T. collected the data, performed the analysis and drafted the paper. All authors critically reviewed the final manuscript submitted for publication. Acknowledgements: The authors would like to thank the school staff, children and their families who participated in this project and the research team who assisted with the collection and analysis of data.

References

1. Gillman M, Cupples L, Gagnon D et al. (1995) Protective effect of fruits and vegetables on development of stroke in men. JAMA 273, 1113–1117.
2. Key TA & Thorogood M (1996) Dietary habits and mortality in 11000 vegetarians and health conscious people. BMJ 313, 775–779.
3. Lock K, Pomerleau J, Causer L et al. (2005) The global burden of disease attributable to low consumption of fruit and vegetables: implications for the global strategy on diet. Bull World Health Organ 83, 100–108.
4.
Maynard M, Gunnell D, Emmett P et al. (2003) Fruit, vegetables, and antioxidants in childhood and risk of adult cancer: the Boyd Orr cohort. J Epidemiol Community Health 57, 218–225.
5. Steinmetz KA & Potter JD (1996) Vegetables, fruit, and cancer prevention: a review. J Am Diet Assoc 96, 1027–1039.
6. Willett WC & Trichopoulos D (1996) Nutrition and cancer: a summary of the evidence. Cancer Causes Control 7, 178–180.
7. Department of Health (2000) The National School Fruit Scheme. London: Department of Health.
8. Kelder SH, Perry CL, Klepp KI et al. (1994) Longitudinal tracking of adolescent smoking, physical activity, and food choice behaviors. Am J Public Health 84, 1121–1126.
9. Lytle L, Seifert S, Greenstein J et al. (2000) How do children's eating patterns and food choices change over time? Results from a cohort study. Am J Health Promot 14, 222–228.
10. Mikkilä V, Räsänen L, Raitakari OT et al. (2004) Longitudinal changes in diet from childhood into adulthood with respect to risk of cardiovascular diseases: The Cardiovascular Risk in Young Finns Study. Eur J Clin Nutr 58, 1038–1045.
11. Lowe CF, Dowey AJ & Horne PJ (1998) Changing what children eat. In The Nation's Diet: The Social Science of Food Choice, pp. 57–80 [A Murcott, editor]. London: Longman.
12. Horne PJ, Lowe CF, Fleming PFJ et al. (1995) An effective procedure for changing food preferences in 5–7 year-old children. Proc Nutr Soc 54, 441–452.
13. Food Dudes (2009) What is Food Dudes? http://www.fooddudes.co.uk/en/ (accessed June 2012).
14. Lowe CF, Horne PJ, Tapper KK et al. (2004) Effects of a peer modelling and rewards-based intervention to increase fruit and vegetable consumption in children. Eur J Clin Nutr 58, 510–522.
15. Horne P, Tapper K, Lowe C et al. (2004) Increasing children's fruit and vegetable consumption: a peer-modelling and rewards-based intervention. Eur J Clin Nutr 58, 1649–1660.
16. Horne PJ, Hardman CA, Lowe CF et al.
(2009) Increasing parental provision and children's consumption of lunchbox fruit and vegetables in Ireland: the Food Dudes intervention. Eur J Clin Nutr 63, 613–618.
17. Horne PJ, Greenhalgh J, Erjavec M et al. (2011) Increasing pre-school children's consumption of fruit and vegetables. A modelling and rewards intervention. Appetite 56, 375–385.
18. Presti G, Zaffanella M, Milani L et al. (2009) Increasing fruit and vegetable consumption in young children: the Food Dudes Italian trial short-term results. Psychol Health 24, 326.
19. Tapper K, Lowe CF, Horne PJ et al. (2002) An intervention to increase children's consumption of fruit and vegetables. Proc Br Psychol Soc 10, 102.
20. Rees G, Richards C & Gregory J (2008) Food and nutrient intakes of primary school children: a comparison of school meals and packed lunches. J Hum Nutr Diet 21, 420–427.
21. Rogers IS, Ness AR, Hebditch KK et al. (2007) Quality of food eaten in English primary schools: school dinners vs packed lunches. Eur J Clin Nutr 61, 856–864.
22. School Food Trust (2008) A Guide to Introducing the Government's Food-Based and Nutrient-Based Standards for School Lunches, pp. 2.1–2.4. London: School Food Trust; available at http://www.schoolfoodtrust.org.uk/the-standards/the-nutrient-based-standards/guides-and-reports/guide-to-the-nutrient-based-standards
23. Health Promotion Agency (2009) Nutritional Standards for School Lunches: a guide for implementation. http://www.healthpromotionagency.org.uk/Resources/nutrition/pdfs/food_in_school_09/Nutritional_Standard-1EEBDB.pdf (accessed July 2011).
24. Wrieden W, Peace H, Armstrong J et al. (2003) A Short Review of Dietary Assessment Methods used in National and Scottish Research Studies.
http://www.food.gov.uk/multimedia/pdfs/scotdietassessmethods.pdf (accessed July 2011).
25. Dresler-Hawke E, Whitehead D & Coad J (2009) What are New Zealand children eating at school? A content analysis of 'consumed versus unconsumed' food groups in a lunchbox survey. Health Educ J 68, 3–13.
26. Severson H & Biglan A (1989) Rationale for the use of passive consent in smoking prevention research: politics, policy and pragmatics. Prev Med 18, 267–279.
27. Food Dudes (2009) Research and Evaluation: Dublin lunchbox measures. http://www.fooddudes.ie/html/research.html (accessed June 2012).
28. Koui E & Jago R (2008) Associations between self-reported fruit and vegetable consumption and home availability of fruit and vegetables among Greek primary-school children. Public Health Nutr 11, 1142–1148.
29. Bere E & Klepp K (2005) Changes in accessibility and preferences predict children's future fruit and vegetable intake. Int J Behav Nutr Phys Act 2, 15.
30. Cullen KW, Baranowski T, Owens E et al. (2003) Availability, accessibility and preferences for fruit, 100% fruit juice and vegetables influence children's dietary behavior. Health Educ Behav 30, 615–626.
31. Gross SM, Pollock ED & Braun B (2010) Family influence: key to fruit and vegetable consumption among fourth and fifth grade students. J Nutr Educ Behav 42, 235–241.
32. Robinson T (2008) Applying the socio-ecological model to improving fruit and vegetable intake among low-income African Americans. J Community Health 33, 395–406.
33. Reinaerts E, de Nooijer J, Candel M et al. (2007) Explaining school children's fruit and vegetable consumption: the contributions of availability, accessibility, exposure, parental consumption and habit in addition to psychosocial factors. Appetite 48, 248–258.
34. McLeroy K, Bibeau D, Steckler A et al. (1988) An ecological perspective on health promotion programs. Health Educ Q 15, 351–377.
35.
Blanchette L & Brug J (2005) Determinants of fruit and vegetable consumption among 6–12-year-old children and effective interventions to increase consumption. J Hum Nutr Diet 18, 431–443.
36. Evans C, Greenwood D, Thomas J et al. (2010) A cross-sectional survey of children's packed lunches in the UK: food- and nutrient-based results. J Epidemiol Community Health 64, 977–983.
37. Bevans KB, Sanchez B, Teneralli R et al. (2011) Children's eating behavior: the importance of nutrition standards for foods in schools. J Sch Health 81, 424–429.
38. Brug J, de Vet E, de Nooijer J et al. (2006) Predicting fruit consumption: cognitions, intention, and habits. J Nutr Educ Behav 38, 73–81.
39. van't Riet J, Sijtsema SJ, Dagevos H et al. (2011) The importance of habits in eating behaviour. An overview and recommendations for future research. Appetite 57, 585–596.
40. Olsen A, Ritz C, Kraaij LW et al. (2012) Children's liking and intake of vegetables: a school-based intervention study. Food Qual Prefer 23, 90–98.
41. Klepp K, Pérez-Rodrigo C, De Bourdeaudhuij I et al. (2005) Promoting fruit and vegetable consumption among European schoolchildren: rationale, conceptualization and design of the Pro Children Project. Ann Nutr Metab 49, 212–220.
42. Swanson M (2008) Digital photography as a tool to measure school cafeteria consumption. J Sch Health 78, 432–437.
43. Williamson DA, Allen H, Martin P et al. (2003) Comparison of digital photography to weighed and visual estimation of portion sizes. J Am Diet Assoc 103, 1139–1145.
44. Knai C, Pomerleau J, Lock K et al. (2006) Getting children to eat more fruit and vegetables: a systematic review. Prev Med 42, 85–95.

work_skl4hb4hmvh45pqh4zt3humrqe ----
MONITORING SEABIRDS AND MARINE MAMMALS BY GEOREFERENCED AERIAL PHOTOGRAPHY

G. Kemper a,*, A. Weidauer b, T. Coppack b,c

a GGS - Büro für Geotechnik, Geoinformatik und Service, Speyer, Germany - kemper@ggs-speyer.de
b Institute of Applied Ecology (IfAÖ GmbH), Rostock, Germany - weidauer@ifaoe.de
c Current affiliation: APEM Ltd, Manchester, United Kingdom - t.coppack@apemltd.co.uk

KEY WORDS: Biological Monitoring, Digital Surveys, Environmental Impact Assessment, Marine Wildlife, Offshore Wind

ABSTRACT:

The assessment of anthropogenic impacts on the marine environment is challenged by the accessibility, accuracy and validity of biogeographical information. Offshore wind farm projects require large-scale ecological surveys before, during and after construction, in order to assess potential effects on the distribution and abundance of protected species. The robustness of site-specific population estimates depends largely on the extent and design of spatial coverage and the accuracy of the applied census technique. Standard environmental assessment studies in Germany have so far included aerial visual surveys to evaluate potential impacts of offshore wind farms on seabirds and marine mammals. However, low flight altitudes, necessary for the visual classification of species, disturb sensitive bird species and also hold significant safety risks for the observers. Thus, aerial surveys based on high-resolution digital imagery, which can be carried out at higher (safer) flight altitudes (beyond the rotor-swept zone of the wind turbines), have become a mandatory requirement, technically solving the problem of distance-related observation bias. A purpose-assembled imagery system including medium-format cameras in conjunction with a dedicated geo-positioning platform delivers series of orthogonal digital images that meet the current technical requirements of authorities for surveying marine wildlife at a comparatively low cost.
At a flight altitude of 425 m, with a focal length of 110 mm, implemented forward motion compensation (FMC) and exposure times ranging between 1/1600 and 1/1000 s, the twin-camera system generates high-quality 16-bit RGB images with a ground sampling distance (GSD) of 2 cm and an image footprint of 155 x 410 m. The image files are readily transferrable to a GIS environment for further editing, taking overlapping image areas and areas affected by glare into account. The imagery can be routinely screened by the human eye guided by purpose-programmed software to distinguish biological from non-biological signals. Each detected seabird or marine mammal signal is identified to species level or assigned to a species group and automatically saved into a geo-database for subsequent quality assurance, geo-statistical analyses and data export to third-party users. The relative size of a detected object can be accurately measured, which provides key information for species identification. During the development and testing of this system until 2015, more than 40 surveys have produced around 500,000 digital aerial images, of which some were taken in specially protected areas (SPA) of the Baltic Sea and thus include a wide range of relevant species. Here, we present the technical principles of this comparatively new survey approach and discuss the key methodological challenges related to optimizing survey design and workflow in view of the pending regulatory requirements for effective environmental impact assessments.

* Corresponding author

1. INTRODUCTION

1.1 General Introduction

The marine environment is subject to a variety of anthropogenic influences, including global climate change. The environmental changes caused by climate change and the actions taken by humans to mitigate and to adapt to these changes are currently leading to an increased exploitation of marine resources with presumed (although largely unquantified) effects on marine life.
The development of the offshore wind industry, for example, in combination with shipping and fisheries potentially leads to a reduction in undisturbed wintering and resting areas for seabirds (Mendel & Garthe, 2010; Dierschke et al., 2012). Furthermore, the increasing demand for marine gravels and sands intended for coastal protection is in potential conflict with the habitat requirements of some benthivorous duck species, which feed on molluscs and other invertebrates on the seabed (Müncheberg et al., 2012). In order to determine the state of the marine environment in a rapidly changing world, as well as to assess the conservation status of its inhabitants, an increasingly accurate information base on the distribution and abundance of marine species is required. This entails effective monitoring schemes that provide meaningful data and detailed vulnerability assessment maps to inform policy decisions and to guide spatial planning procedures. An objective evaluation of the environmental consequences of human activities for marine organisms has often been hampered by the lack of precise data in the affected areas. Here, we review recent developments to improve data acquisition by using high-resolution georeferenced digital photography to map seabirds and marine mammals.

The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Volume XLI-B8, 2016 XXIII ISPRS Congress, 12–19 July 2016, Prague, Czech Republic. This contribution has been peer-reviewed. doi:10.5194/isprsarchives-XLI-B8-689-2016

1.2 Limitations of visual survey methods

The assessment of the effects of offshore wind farms, for example, on seabirds and marine mammals is typically based on large-scale surveys that take place on a monthly or seasonal basis before, during and after construction of the wind farm.
In the past, birds and mammals were usually counted by observers from ships (Garthe et al., 2002) or low-flying aircraft (Diederichs et al., 2002; Thomsen et al., 2004) along linear transects, thereby recording the frequency of observed objects and estimating the distance of each object from the point of observation. While ship-based counts are associated with considerable costs for logistics, airborne observations are comparatively time- and cost-effective; however, they pose methodological problems. Due to the relatively high airspeed of the deployed aircraft (about 100 kn; ~50 m/s), flying at around 250 ft (~80 m) to ensure that bird species can be identified by the human eye, observer-based aerial surveys can provide only rough population estimates, especially for bird species that aggregate in large numbers. This is the case, for example, in the Baltic Sea, where large groups of sea ducks occur. Low-flying aircraft also scare off sensitive bird species to a non-negligible extent, such that quantitative estimates are additionally biased. Furthermore, the detection probability decreases with the distance between the bird and the observer. The resulting quantitative estimates and the quality of species identification can also vary widely between individual observers. Overall, these sources of variance that affect quantitative population estimates can only be compensated by synthetic correction factors. However, such corrections do not enhance the quality of the raw data per se and limit the comparability of data sets collected under different sampling configurations and conditions. Assessing the environmental impacts of an offshore wind farm on birds essentially requires an unbiased detection of the distribution of individuals relative to the turbines' coordinates. If sensitive bird species are systematically disturbed by the detection method itself (low-flying aircraft, ship), cause-effect relationships between wind turbines and birds cannot be reliably addressed.
Raw data resulting from observer-based surveys cannot be quality-assured retrospectively and used to secure evidence. Finally, there is a great safety risk for the human observer flying at low altitude past operational wind turbines.

1.3 Aerial digital survey methods

For the reasons mentioned above, survey methods based on aerial digital imaging, by means of aircraft flying at significantly higher altitudes, are an effective alternative and are increasingly replacing traditional observer-based survey techniques. The current version of the standard investigation concept (StUK4) of the German Federal Maritime and Hydrographic Agency (BSH) in fact recommends the use of aerial digital imaging methods for assessing the impact of offshore wind turbines on seabirds and marine mammals, which has stimulated further refinement of this method. In the early 1950s, the first attempts were made to quantify large aggregations of birds by conventional aerial photography. It was none other than the famous zoologist Bernhard Grzimek who demonstrated, together with his son, that large flocks of flamingos in Africa could only be quantified exactly on aerial photographs (Grzimek & Grzimek, 1960). For the large-scale quantification of pelagic marine species, however, it was for a long time not feasible to use (analogue) aerial photography because of the sheer number of required images and the costs involved in image archiving. With the advent of digital camera technology and the availability of large digital archives, these problems are now manageable and the costs have become moderate. In addition, digital images provide the possibility of automated data analysis via image-processing applications. Meanwhile, the development of digital aerial photography and the corresponding storage and computing capacities are so advanced that digital aerial imaging can be used in a standardized and cost-effective way.
Digital survey techniques have been increasingly used since about 2007 in environmental impact studies for the offshore wind industry, most notably in Denmark and the United Kingdom (Thaxter & Burton, 2009). Both videographic and photographic techniques are deployed. Videography benefits from a higher frame rate, though at the cost of a smaller frame (footprint) size per camera unit. As a consequence, multiple parallel video streams are recorded to achieve sufficient areal coverage. High-resolution digital photography, on the other hand, compromises frame rate in favour of a larger aspect ratio per camera unit. Currently, the use of high-resolution medium-format cameras with up to 80 megapixels is possible (Coppack et al. 2015). This enables photographic flights at altitudes over 1300 ft (~400 m) with a ground sampling distance (GSD) of 2 cm, depending on the specifications of the lens. At these altitudes, which are well above the rotor-swept zone of the wind turbines, displacement effects on sensitive bird species through the presence of the aircraft are significantly reduced. Furthermore, bird distributions and individual distances to man-made structures (wind turbines, vessels) can be accurately measured and stored digitally for further GIS analyses. This is a major advantage over observer-based protocols that involve voice recordings of quantitative estimates.

2. DEVELOPMENT OF A METHOD BASED ON GEOREFERENCED PHOTOGRAPHY

2.1 Methodological principles

Comparisons of simultaneous observer-based aerial surveys with digital aerial surveys generally show that relevant bird and marine mammal species can be detected and classified to species level in digital images (Thaxter & Burton, 2009; Kulemeyer et al. 2011; Taylor et al. 2014; Dittmann et al. 2014).
Although digital still images of birds and marine mammals initially showed significant motion blur, due to long exposure times relative to the required airspeed and the lack of forward motion compensation (FMC), an evaluation of the quantitative outcomes of conventional and digital surveys was possible. Results from a pilot study carried out by Kulemeyer et al. (2011) suggested that the numbers of three sea duck species had been significantly underestimated by the observer-based method, i.e., frequencies of individuals were 15% (Common Eider), 69% (Long-tailed Duck), and 98% (Common Scoter) lower than frequencies determined by the aerial digital method (Kulemeyer et al., 2011). Such differences may be partly explained by the time lag between surveys and differences in coverage between photographic and visual methods. However, comparative results strongly suggest that photographic methods provide more accurate population estimates than conventional observer-based aerial surveys, which are more susceptible to species-specific variation in flush behaviour and observation bias. To calibrate both methods accurately, it would be necessary to repeat such parallel flights under a range of different weather conditions.

2.2 Camera technology

Airborne image acquisition for biological monitoring is presently based on commercially available photogrammetric components. The requirements to be met by a purpose-assembled camera system include a flight altitude of at least 1300 ft (taking human flight safety, the average cloud base and the escape distance of birds into account), a GSD of 2-3 cm, a covered strip width of about 400 m, and the option of a 30% to 40% image overlap to compensate for loss of effective coverage due to glint and glare (reflections from the sea surface) and vignetting (Figure 1; cf. Groom et al., 2013).

Figure 1. Example of a series of georeferenced, overlapping aerial images shown in QGIS

A typical camera system to monitor marine wildlife may consist of two or more medium-format cameras (e.g. PhaseOne IXA180) mounted onto a gyroscopically stabilized platform (Figure 2).

Figure 2. (a) Equipping a twin-engine aircraft for aerial digital surveys of marine wildlife, (b) View of a crosswise-mounted tandem camera (PhaseOne IXA180) within a gyroscopically stabilized platform (GGS Speyer), (c) Exterior view of the camera system through the hatch of the aircraft

The gyro-static camera platform is connected to a computer system that triggers shutter release, stores the incoming image files and, at the same time, operates the flight management system (AeroTopoL) and logs geographic position and altitude from the GNSS-INS sensors (Kemper, 2012). The sensor of each camera unit (PhaseOne IXA180) comprises 10,320 x 7,752 cells at a resolution of 5.2 micron. Equipped with 110 mm lenses, this camera setup covers a strip width of 407 m from an altitude of 1400 ft (423 m) with 2 cm GSD. The minimal reading interval is 1.5 s and enables a theoretical image overlap of 48% at an airspeed of 100 kn (~50 m/s) and an image length in the direction of flight of 155 m. Realistically, frame rates of 1.8-2.0 s are feasible and an image overlap of 30% is sufficient under normal weather conditions. In addition, the camera system has implemented FMC.
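The imaging geometry quoted above can be cross-checked from the stated parameters; a minimal sketch (values taken from the text; single-camera footprint only, since the ~407 m strip results from the two crosswise-mounted cameras). With these rounded inputs the simple forward-overlap formula gives roughly 50%, close to the ~48% theoretical overlap quoted:

```python
# Cross-check of the per-camera imaging geometry described above.
pixel_pitch = 5.2e-6          # m (5.2 micron sensor cells)
focal_length = 0.110          # m (110 mm lens)
altitude = 423.0              # m (~1400 ft)
sensor_px = (10_320, 7_752)   # cross-track x along-track pixels
airspeed = 51.4               # m/s (~100 kn)
frame_interval = 1.5          # s (minimum reading interval)

gsd = altitude * pixel_pitch / focal_length   # ground sampling distance (m/px)
cross_track = sensor_px[0] * gsd              # per-camera swath width (m)
along_track = sensor_px[1] * gsd              # image length in flight direction (m)
overlap = 1 - (airspeed * frame_interval) / along_track

print(f"GSD: {gsd * 100:.1f} cm")                               # ~2.0 cm
print(f"footprint: {cross_track:.0f} x {along_track:.0f} m")    # ~206 x 155 m
print(f"forward overlap: {overlap:.0%}")
```

Two such cameras side by side cover about 2 x 206 m, consistent with the ~407 m strip width stated in the text once a small cross-track overlap between the two swaths is allowed for.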
Trials have shown that details of birds and marine mammals are depicted best with sensitivity set to ISO 100, an aperture of 3.2 and an exposure of <1/1,000 s. In general, modular imaging systems based on medium-format camera units are smaller and lighter and can be installed in small aircraft, including unmanned aerial vehicles (UAVs). A current disadvantage of the applied camera unit is the unavailability of a near-infrared (NIR) band, which could be advantageous to compensate glare at sensor level. This option currently has to be solved by including a third, dedicated NIR camera.

2.3 Data storage and processing

The monitoring guideline of the German Federal Maritime and Hydrographic Agency (BSH) currently recommends study areas of at least 2000 km², of which normally 10% are to be sampled within one day. The volume of data (raw image files) arising from such surveys may reach around 5,000-8,000 frames per flight and camera. Each camera can store about 500-800 GB on two separate solid-state disks during the flight. The image transfer rate lies at 40-70 MB/s via a USB 3.0 interface. Each raw image is unpacked and converted into an orthorectified, block-oriented 16-bit RGB TIFF of 500-800 MB. The image files are augmented with acquisition-related metadata. After this georeferencing, the overlapping image areas are marked and edited, and series of analysable images are compiled. From this point on, visual screening of the image files with commercially available or open-source geographic information systems is possible. The sheer extent of the image database (a single flight takes up to 9 TB) and the need to incorporate image-processing algorithms into the workflow call for the development and use of purpose-programmed software applications.

2.4 Image analysis for object recognition

There are basically two (not mutually exclusive) approaches to analysing and evaluating the resultant extensive image material: (1) automated image pre-processing (e.g.
by eCognition; cf. Groom et al., 2013), including information from a visually screened subsample of images; (2) visual pre-processing of the complete image material with the help of purpose-programmed GIS-based screening applications (e.g. by combining QGIS and OpenCV; cf. Coppack et al., 2015; Figure 3). Both approaches involve image pre-processing, in which images without positive biological signals are sorted out and the geo-positions of all potential biological signals are collected in a database for subsequent identification and quality assurance. The visual pre-selection of entire sets of images is presently the more robust approach, yet requires considerable manpower. Visual screening is facilitated by purpose-programmed software applications in which each image is divided into equal segments (50 - 80 segments per image, depending on effective footprint size) that can be tabbed through in a logical sequence (Figure 3). All objects that could potentially be a bird or a marine mammal (and any other conspicuous object) are marked manually on the screen and automatically stored in a database, together with an image identification number, the flight transect number, the camera identification number, the object class and the geo-position of the object. The object class is roughly defined as: (1) swimming bird; (2) flying bird; (3) marine mammal; (4) conspicuous unknown object; (5) glare/glint; (6) wave/spray. For quality assurance, a subset of the images is screened twice. All objects selected during the visual screening process are then classified by an experienced analyst, who rejects objects that are not birds or marine mammals.

The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Volume XLI-B8, 2016. XXIII ISPRS Congress, 12-19 July 2016, Prague, Czech Republic. This contribution has been peer-reviewed. doi:10.5194/isprsarchives-XLI-B8-689-2016

Figure 3.
Screenshot of a software interface for the systematic visual screening of geo-referenced aerial photographs (example image shows a harbour porpoise)

2.5 Species identification (ID) and quality assurance

This subsequent step includes the identification of objects down to species or species-group level and the quantification of identified birds or marine mammals. Ideally, two experts classify each object independently, using a specially developed identification tool that is linked with the geo-database to facilitate the retrieval of individual objects and accompanying metadata (Figure 4).

Figure 4. Screenshot of a software interface used for species identification (example image shows a juvenile gannet)

This ID tool also enables the measurement and storage of morphometric parameters, which provide important supplementary information for identifying bird or mammal species (and which can be used to calculate the flight heights of birds, if relevant). If there are discrepancies in species identification between the two experts (as indicated automatically following a database query), a third expert can be consulted to review the case and issue a final decision. Typical features for the identification of bird species in aerial images have been described by Dittmann et al. (2014). All birds and marine mammals that are identified to genus or species level receive an accuracy attribute. In the case of birds, individuals are further categorized (where possible) according to (1) sex, (2) age class (adult, immature, juvenile, K1-K4), and (3) behavioural traits (swimming, diving, flying, direction of flight). For marine mammals, additional information may be noted, e.g. the presence of calves. After the ID process, the census information is transferred into standard data tables and used for the calculation of population densities at grid-cell or survey-area level, and for geo-statistical analyses related to habitat use (e.g.
Skov et al., 2016) and potential displacement effects. Randomized samples of the photographic material containing classified georeferenced objects can be reciprocally quality-controlled by external, independent reviewers. The geo-spatial positioning of individuals allows precise measurement of their distances to anthropogenic structures (e.g. pipelines, offshore wind turbines) and allows species-specific distributions to be related to functional ecological parameters (e.g. water depth), habitat features, and associated food resources.

Figure 5. The distributions of three sea duck species, based on gapless aerial photos taken on 12 March 2014 in the German Baltic Sea (Bay of Wismar), in relation to different water depth classes. Maps are based on data by Steffen (2014).

The examples given in Figure 5 show the specific distributional patterns of three species of sea ducks in the Baltic Sea derived from aerial images covering almost 100% of the area (Steffen, 2014). The morphological characteristics of these species as seen in aerial images (cf. Figure 5) are described by Dittmann et al. (2014) as follows:

Common Eider, Somateria mollissima. ♂ (nuptial plumage): white back, scapulars and thigh patches form the typical "fish-tail" shape; black tail, rump and flanks merge with the dark background of the water surface, so the black does not appear very clearly. ♀: body is light brown on the light-facing side, with the sides of the head contrasting in brighter beige.

Long-tailed Duck, Clangula hyemalis. ♂ (nuptial plumage): white plumage parts on head, nape and rump, and the pale grey back plumage, outshine darker plumage parts that partly merge with the background, giving the impression of a figure-of-eight shape. ♀: white plumage parts on head and rump, separated by the dark back plumage, appear as two distinct spots.
Common Scoter, Melanitta nigra. ♂ (nuptial plumage): completely black; the yellow patch on the beak is mostly invisible; the feet are dark (as opposed to those of the Velvet Scoter, Melanitta fusca), but not always visible. ♀: body completely dark brown, with the bright sides of the head contrasting strongly.

Databases containing this species-specific information, coupled with individual biometric measurements, will be useful for training automated object identification algorithms in order to accelerate the pre-screening process, which currently limits the overall workflow.

3. CONCLUSIONS AND OUTLOOK

The use of high-resolution aerial imagery to map marine wildlife shows several advantages over conventional observer-based methods. Site-specific frequencies of individuals can be determined without correcting for distance-related observation bias (Buckland et al., 2001), and the resulting population estimates remain verifiable at raw-data level. The method can also be applied to complement land-based waterbird counts, e.g. in protected areas, and to advance national or international monitoring schemes. However, there are still a number of methodological challenges, both in image acquisition and image analysis, that need to be overcome in the future (Buckland et al., 2012; Taylor et al., 2014; Coppack et al., 2015). From a logistical and economic perspective, it is reasonable to attempt to capture the full range of seabird and marine mammal species with the same sensor technique, under the same sampling regime, and within the shortest possible time window.
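As a concrete illustration of the density calculation that such imagery makes possible, the sketch below extrapolates a count to a whole study area. The bird count is invented; the 2000 km² study area and 10% daily coverage are the BSH guideline figures quoted in section 2.3.

```python
# Illustrative density and population estimate from partial-coverage imagery.
# The bird count is an invented example; area and coverage follow the BSH
# guideline figures quoted in the text (2000 km² study area, 10% sampled).
counted_birds = 1240
survey_area_km2 = 2000.0
coverage = 0.10                                   # fraction of the area photographed

sampled_area_km2 = survey_area_km2 * coverage     # 200 km² actually imaged
density = counted_birds / sampled_area_km2        # birds per km² in the imaged strips
estimated_population = density * survey_area_km2  # extrapolation to the full area

print(f"Density: {density:.1f} birds/km^2")
print(f"Estimated population: {estimated_population:.0f} birds")
```

Because the imagery is gapless within the photographed strips, no distance-based detection correction (Buckland et al., 2001) is needed for this step; only the extrapolation from sampled to total area introduces sampling uncertainty.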
However, while white seabirds, gulls for example, are clearly visible against the dark background of the sea surface (in areas that are free of glare, glint, and spray), marine mammals often merge with their environment and may become visible only when they emerge from the sea surface and reflect the sunlight. These varying signal-to-noise relationships between biota require a high quantization depth of the imaging channels and filters, imposing equivalent requirements on the image-processing software. Up to now, seabirds and marine mammals have usually been identified manually in aerial digital images. With the increasingly regular use of aerial digital methods, image pre-processing (object recognition) should be automated in order to accelerate and standardise the entire workflow. Automated object recognition based on deep machine learning is the subject of current research. Moreover, there is a need to further optimise sampling design and effort in view of the associated survey costs. In principle, the following three parameters would need to be addressed: (1) the required minimum number of seasonal surveys to distinguish phenological and stochastic fluctuations from population changes (cf. Maclean et al., 2013); (2) the required size of a survey area to characterize habitat clines and their associated populations; (3) the minimum effective coverage of a survey area, in conjunction with the optimum sampling design, to obtain statistically robust results. These parameters are being continually discussed in several national and international research projects in order to define the minimum regulatory requirements for effective environmental impact assessments. As mentioned above, glare is a critical factor that limits effective coverage, potentially producing many false positives. Glare effects show up in the RGB bands but not in the NIR. Current trials have therefore included a supplementary camera in the NIR band to obtain more information on glare detection and compensation.
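The idea just described can be sketched as a simple per-pixel rule: a pixel that is bright in the RGB bands but dark in the NIR band is a glare candidate, whereas a white bird reflects in both. The pixel values and thresholds below are invented for illustration and are not taken from the trials described in the text.

```python
# Illustrative NIR-based glare mask (values and thresholds are invented).
import numpy as np

rng = np.random.default_rng(0)
rgb = rng.uniform(0.0, 0.3, size=(4, 4, 3))  # mostly dark sea surface
nir = rng.uniform(0.0, 0.15, size=(4, 4))

rgb[1, 2] = 0.95   # glare pixel: bright in RGB ...
nir[1, 2] = 0.05   # ... but dark in NIR

rgb[2, 3] = 0.90   # white bird: bright in RGB ...
nir[2, 3] = 0.80   # ... and reflective in NIR too

brightness = rgb.mean(axis=2)
glare_mask = (brightness > 0.6) & (nir < 0.2)  # bright in RGB, dark in NIR

print(glare_mask[1, 2], glare_mask[2, 3])  # True False
```

In practice such a mask would feed into the glare compensation step described in the text, flagging sun-glint regions before object detection is attempted.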
This could facilitate the automated separation of signals from background noise in order to detect objects of interest with higher precision and consistency. Besides this improvement, better image-to-image correlation as well as better image rectification will be required. To achieve this with the setup of three cameras (two RGB and one NIR), further camera calibrations using photogrammetric software are intended. We propose an accurate calibration of the cameras (calibration of focal length, PPS/PPA, radial distortion) and of the relative offset angles (relative boresight angles). A significant improvement could be achieved by using precise GNSS-INS to obtain direct georeferencing values for the projection centres and the rotation angles omega, phi, and kappa.

REFERENCES

Buckland, S. T., Anderson, D. R., Burnham, K. P., Laake, J. L., Borchers, D. L. & Thomas, L., 2001. Introduction to Distance Sampling: Estimating abundance of biological populations. 4th Edition, Oxford University Press Inc., New York, USA.

Buckland, S. T., Burt, M. L., Rexstad, E. A., Mellor, M., Williams, A. E. & Woodward, R., 2012. Aerial surveys of seabirds: the advent of digital methods. Journal of Applied Ecology, 49, pp. 960-967.

Coppack, T., Weidauer, A. & Kemper, G., 2015. Erfassung von Seevogel- und Meeressäugerbeständen mittels georeferenzierter Digitalfotografie. AGIT, Journal für Angewandte Geoinformatik, 1, pp. 358-367.

Diederichs, A., Nehls, G. & Petersen, I. K., 2002. Flugzeugzählungen zur großflächigen Erfassung von Seevögeln und marinen Säugern als Grundlage für Umweltverträglichkeitsstudien im Offshorebereich. Seevögel, 23, pp. 28-46.

Dierschke, V., Exo, K. M., Mendel, B. & Garthe, S., 2012. Gefährdung von Sterntaucher Gavia stellata und Prachttaucher G. arctica in Brut-, Zug- und Überwinterungsgebieten - eine Übersicht mit Schwerpunkt auf den deutschen Meeresgebieten. Vogelwelt, 133, pp. 163-194.
Dittmann, T., Fürst, R., Gebhardt-Jesse, U., Grenzdörffer, G., Kilian, M., Löffler, T., Mader, S., Schleicher, K., Schulz, A., Steffen, U., Weidauer, A. & Coppack, T., 2014. Vogelbestimmung aus der Vogelperspektive. Vogelwarte, 52, pp. 335-336.

Garthe, S., Hüppop, O. & Weichler, T., 2002. Anleitung zur Erfassung von Seevögeln auf See von Schiffen. Seevögel, 23, pp. 47-55.

Groom, G., Stjernholm, M., Nielsen, R. D., Fleetwood, A. & Petersen, I. K., 2013. Remote sensing image data and automated analysis to describe marine bird distributions and abundances. Ecological Informatics, 14, pp. 2-8.

Grzimek, M. & Grzimek, B., 1960. Flamingoes censused in east Africa by aerial photography. Journal of Wildlife Management, 24, pp. 215-217.

Kemper, G., 2012. New Airborne Sensors and Platforms for Solving Specific Tasks in Remote Sensing. ISPRS, International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Vol. XXXIX-B5, pp. 351-356.

Kulemeyer, C., Schulz, A., Weidauer, A., Röhrbein, V., Schleicher, K., Foy, T., Grenzdörffer, G. & Coppack, T., 2011. Georeferenzierte Digitalfotografie zur objektiven und reproduzierbaren Quantifizierung von Rastvögeln auf See. Vogelwarte, 49, pp. 105-110.

Maclean, I., Rehfisch, M. M., Skov, H. & Thaxter, C. B., 2013. Evaluating the statistical power of detecting changes in the abundance of seabirds at sea. Ibis, 155, pp. 113-126.

Mendel, B. & Garthe, S., 2010. Kumulative Auswirkungen von Offshore-Windkraftnutzung und Schiffsverkehr am Beispiel der Seetaucher in der Deutschen Bucht. Coastline Report, 15, pp. 31-44.

Müncheberg, R., Gosselck, F., Coppack, T. & Weidauer, A., 2012.
Klimawandel an der Ostsee: Interessenskonflikte zwischen Natur- und Küstenschutz bei der Gewinnung mariner Sande. In: Klimaanpassung als Herausforderung für die Regional- und Stadtplanung: Erfahrungen und Erkenntnisse aus der deutschen Anpassungsforschung und -praxis. Mahammadzadeh, M., Chrischilles, E. (Eds.), Institut der Deutschen Wirtschaft, Cologne, Germany.

Skov, H., Heinänen, S., Thaxter, C. B., Williams, A. E., Lohier, S. & Banks, A. N., 2016. Real-time species distribution models for conservation and management of natural resources in marine environments. Marine Ecology Progress Series, 542, pp. 221-234.

Steffen, U., 2014. Entwicklung alternativer Sampling Designs bei der luftbildgestützten Seevogelzählung unter Berücksichtigung einer GIS-basierten Modellierung am Beispiel von Meeresenten. Master Thesis, Rostock University, Rostock, Germany.

Taylor, J. K. D., Kenney, R. D., Leroi, D. J. & Kraus, S. D., 2014. Automated vertical photography for detecting pelagic species in multitaxon aerial surveys. Marine Technology Society Journal, 48, pp. 36-48.

Thaxter, C. B. & Burton, N. H. K., 2009. High definition imagery for surveying seabirds and marine mammals: a review of recent trials and development of protocols. British Trust for Ornithology Report commissioned by COWRIE Ltd.

Thomsen, F., Laczny, M. & Piper, W., 2004. Methodik zur Erfassung von Schweinswalen (Phocoena phocoena) und anderen marinen Säugern mittels Flugtransekt-Zählungen. Seevögel, 25, pp. 3-12.

Revised Xxxx 2016
Assistive Technology for Disabled Visual Artists: Exploring the Impact of Digital Technologies on Artistic Practice

Dr Chris Creed
School of Computing and Digital Technology, Birmingham City University, Millennium Point, Birmingham, B4 7XG
chris.creed@bcu.ac.uk

[This is an Accepted Manuscript of an article published by Taylor & Francis Group in Disability and Society on 11/05/2018, available online: http://www.tandfonline.com/10.1080/09687599.2018.1469400]

Disabled artists with physical impairments can experience significant barriers in producing creative work. Digital technologies offer alternative opportunities to support artistic practice, but there has been a lack of research investigating the impact of assistive digital tools in this context. This paper explores the current practice of physically impaired visual artists and their experiences around the use of digital technologies. An online survey was conducted with professional disabled artists and was followed up with face-to-face interviews with ten invited artists. The findings illustrate the issues disabled artists experience in their practice and highlight how they commonly use mainstream digital technologies as part of their work. However, there is little awareness of novel forms of technology (e.g. eye gaze tracking) that present new creative opportunities. The importance of digital tools for supporting wider practice (i.e. administrative and business tasks) was also highlighted as a key area where further work is required.

Keywords: assistive technology; disability art; digital tools; accessibility

[Manuscript Length: 6524 words]

Introduction

Professional disabled visual artists can experience significant barriers and difficulties in the production of their creative work.
For instance, physically impaired artists can find it difficult to use common artistic tools such as brushes, pencils, and large canvases, thus requiring them to adapt their practice to help facilitate the creative process. Traditional assistive tools such as mouth sticks, head wands, and custom-designed grips (for holding brushes) can make creative work somewhat accessible, but research has shown they also present significant interaction issues and can lead to further health complications (Perera et al., 2007). Digital technologies present alternative approaches for disabled artists as assistive tools, but no studies to date have explored in detail how professional artists (across a range of career stages and art forms) incorporate such technologies into their practice. These technologies include mainstream artistic software (e.g. Photoshop, Illustrator), mobile applications (e.g. Sketchbook Pro, Manga Studio, and ProCreate), alternative mouse and keyboard designs, trackballs, and switches. Moreover, there are many novel and innovative technologies such as eye gaze tracking (van der Kamp and Sundstedt, 2011; Heikkilä, 2013; Creed, 2016), mid-air gesturing and motion tracking (Creed, 2014; Diment and Hobbs, 2014), and voice recognition (Harada et al., 2007) that have dropped in price significantly over the past couple of years. These technologies offer alternative assistive methods for disabled artists, thus potentially helping to ensure they are not excluded or marginalised (Watling, 2011), yet it remains unclear whether artists are aware of these technologies and the extent to which they are utilising them as part of their practice.
Whilst views around digital technologies for disabled people have been explored in the context of perceptions, experiences, and wider issues (Harris, 2010; Borg et al., 2011; MacDonald and Clayton, 2013), their role has not been investigated in the context of professional disabled artistic practice. A deeper understanding of the current practice of physically impaired visual artists, the typical barriers they experience, and their existing usage of digital technologies can help to inform software developers and engineers in producing new tools that better support creative work. To address the lack of work in this area, this paper details research conducted to explore in more detail the current practice of professional physically impaired artists, with a particular emphasis on the role of digital technologies. An online survey was conducted with professional visual artists to investigate and better understand the creative process and experiences of disabled artists (n = 18). Ten artists – ranging in career stage and artistic medium – were then invited for face-to-face interviews to discuss their practice in further detail. This research was undertaken in collaboration with a disability-led visual arts organisation (DASH – Disability Arts Shropshire), which supports and commissions disabled visual artists. DASH supported the project in terms of assisting with the recruitment of artists, as well as informing the design of the survey and interview questions to ensure key areas around artistic practice were addressed. Core themes emerging from this work are highlighted to explore how an artist’s impairment can influence practice and the extent to which digital technologies are currently being utilised.

Research Questions

There were three key research questions that were investigated in this study:

1. How do physically impaired visual artists currently work?
In particular, which art forms are they working in, what is their typical creative process, and which tools do they use to support their practice?

2. Which barriers do physically impaired artists currently experience around their work? What levels of personal assistance are required, and how does this influence creative practice?

3. Are physically impaired artists currently utilising digital technologies and assistive tools in their practice? If so, what are their experiences in using these tools? If not, are there any specific reasons why they are not utilising them to support creative work?

Related Work

Previous research studies have highlighted some of the issues disabled people experience around digital technologies (e.g. cost of devices, lack of support, negative attitudes to technology, technology abandonment - Phillips and Zhao, 1993; Riemer-Reiss and Wacker, 2000; Harris, 2010; Borg et al., 2011; MacDonald and Clayton, 2013). Many researchers have also questioned the impact of digital tools in supporting disabled people more widely (e.g. Goggin and Newell, 2003); however, there has been little work to date exploring the current practice of professional disabled visual artists and their use of assistive and digital technologies. Boeltzig (2009) investigated the working methods and practice of “younger” artists (aged 18-25) across a range of different impairments (visual, physical, and cognitive) and highlighted the adaptations some artists had to make around their practice. However, this work was primarily with “non-professional” artists, so it is unclear whether the themes and approaches highlighted also apply to more established artists. Perera et al. (2007) conducted a workshop with people who had a range of impairments to examine the methods they used to produce creative work as a hobby.
They found that people used traditional assistive tools such as mouth sticks, head wands, and custom designed grips to help them work, but highlighted how slow, tedious, and tiring the production of creative work can be for people with physical impairments. Moreover, whilst these tools can help facilitate creative activities (to a certain extent), they can also lead to further physical issues such as chronic neck strain and damage to teeth. It is important to note that this work was also not conducted with professional disabled artists - as such, it is unclear whether professional artists are using the same types of assistive tools and whether they experience similar issues around their practice. Other work has explored the potential of do-it-yourself (DIY) tools to support creative work. For instance, Coleman and Cramer (2015) highlight how different types of assistive devices (digital and non-digital adaptations of traditional tools) can be used to support disabled children who want to do creative work. Hurst et al. (2011) discuss the development of non-digital assistive tools to support people who cannot use their hands to produce creative outputs. Other studies have focused more on digital technologies for supporting disabled people - for instance, Diment and Hobbs (2014) developed the Kinect Virtual Art Program (KVAP) which used the Microsoft Kinect sensor to track the gestures of severely disabled children. These gestures were then used as input for producing creative work within a therapeutic context. Harada et al. (2007) explored the use of voice recognition as an input for creating graphical work and developed a system designed predominantly for people with motor impairments to allow them to create free-form drawings. Studies have also looked at eye gaze tracking technology for creative work - the first study in this area was reported by Gips et al. 
(1996), who presented an application called "Eye Painting" that mimicked finger painting with a user's eyes to create basic coloured line drawings. Hornof and Cavender (2005) developed EyeDraw - an application for disabled children and young adults that allowed them to draw and manipulate basic shapes through gaze tracking. Similarly, Heikkilä (2013) developed a graphics application (EyeSketch) that allowed users to produce graphics and manipulate shapes via their eyes. van der Kamp and Sundstedt (2011) explored the combination of voice recognition (for menu selections) and eye gaze for drawing pictures in their application. Finally, Kwan and Betke (2011) developed eye-operated image editing software (Camera Canvas) for people with physical impairments. These studies all provide examples of how digital technologies can potentially support people creating graphical and artistic work. However, none of the studies focused on professional disabled artists - it is therefore unclear how professional artists currently work and how they are utilising digital technologies to support their practice.

Methodology

The research had two stages – the first was an online survey examining how professional physically impaired artists currently work, the types of barriers they experience, and the extent to which they use digital tools. To take part in the survey, participants had to be practicing professional artists (i.e. they had exhibited their work) with some form of physical impairment. Moreover, their primary art form needed to be in the visual arts, with a particular emphasis on painting, drawing, and digital art. The survey consisted of sixteen questions – the initial questions focused on collecting basic demographic information about the artists (e.g. age, gender, etc.), identifying the nature of artists' physical impairments, and exploring their current creative process.
In particular, questions focused on how often artists typically work, the medium(s) they work in (e.g. paint, pencil, oils, etc.), and how their impairments influence practice. Artists were also asked about the types of software they currently use to create their work (if applicable). The next set of questions focused on whether artists used assistive tools in their practice and examined whether these tools were “traditional” (e.g. head wands and mouth sticks) or “digital” (e.g. eye tracking technology, speech input, custom keyboards, etc.). If artists stated that they did not use assistive technologies, they were asked to explain whether there were any particular reasons for not using them. Additional questions focused on how artists currently use the assistive tools they highlighted and any limitations they experience when using them. The survey concluded with a question around the extent to which artists required support from personal assistants. The survey was completed by 25 artists, although seven did not appear to have a physical impairment or did not provide sufficient detail in their responses to establish whether they met the full criteria for the study. These respondents were therefore not included in the final analysis. Artists reported working across a diverse range of disciplines such as painting, illustration, printmaking, clay and cardboard sculpting, eye gaze art, and digital photography (Table 1). The artists were aged between 20 and 74 and were at a variety of career stages, including emerging, mid-career, and established. Artists also reported a wide range of different physical conditions, including multiple sclerosis, motor neurone disease, generalized dystonia, muscular dystrophy, cerebral palsy, arthrogryposis, quadriplegia, and multiple joint arthritis. The second stage involved investigating more deeply some of the subtleties and nuances around the creative practice of visual artists working across different media.
In-depth interviews (n = 10) were therefore conducted with invited artists (from the survey) using a qualitative semi-structured interview method. A quota sampling approach was adopted to ensure that artists were invited across a range of career stages (emerging, mid-career, and established) and art forms - as well as those using different types of assistive technologies. The interviews focused on each artist’s processes for creating their work, their views and experiences around the use of digital technologies in their practice, and their confidence around using technology to support their work (an issue that has been highlighted in the literature as negatively influencing technology adoption amongst disabled people - e.g. Harris (2010) and MacDonald and Clayton (2013)). Artists were also asked to bring along some of their work to help describe their creative process. Personal assistants who accompanied the artists were also encouraged to share their views or any relevant insights. All interviews were video recorded for later analysis. Analysis of the survey and interview data utilised an ethnographic approach (Glaser and Strauss, 1967) in which the responses were coded and analysed to identify initial themes and concepts. These concepts were then iteratively analysed and refined through repeatedly exploring the relationships between different themes. This aspect of the study was also conducted in collaboration with DASH, and the key themes and interpretations were shaped and confirmed through their feedback and input.

Findings

Impairment(s) Influencing Artistic Practice

Artists described the variety of ways in which impairments influenced their working methods. One strong theme to emerge was around constraints in the types of art forms that were feasible for artists.
For instance, artists commented that they worked in a particular way or used certain tools because they were more accessible to them than others (or were the only method available): “Eyegaze art is the only way that I can express myself creatively ... I would like to study art again, particularly life drawing, but it's difficult to get to classes because of my condition” [Artist 4] “My impairments affect my creative methods profoundly. Basically the options I was left with at age 14 (and the decreasing number of them since) have curtailed most methods of expressing genuine creativity be it musical or visual” [Artist 16] Several artists highlighted difficulties in their current practice around the traditional artistic tools and materials they use - for example, difficulties around squeezing paint out of tubes, finding palettes that are suitable for water colours, and how the use of screen beds and etching can be physical and tiring activities. Others have had to switch from traditional methods to digital tools as physical impairments have increasingly influenced their practice. In particular, artists commented on how they adapted their methods to work on smaller scales: “... as a painter I have adjusted my methods to work on smaller scale, scanning images, using digital photography and Photoshop merged with traditional techniques. sometimes printing and painting back onto print...” [Artist 5] “I use everyday materials, such as cardboard to create 3D sculptures. This medium allows me to make large scale work in smaller sections and sit down if needed.” [Artist 15] Some of the artists highlighted how physical issues such as stiffness in hands, pain, motor impairments, and weakness influence their ability to create work: “I work by sitting on a mat on the floor in a W seated position. this gives me a stable base which enables me to have an increased use of my arms. 
I am aware that sitting in this way is bad for my hips and causes increased muscle tone but there is no other way ... I need paint to be squeezed onto a flat pallet and water in a stable bowl to avoid spillage ... I work flat so I can lean on my wrist to limit involuntary movements” [Artist 9] “Stiffness and issues with my hands/fingers which can affect my work ... My hands often feel very stiff which affect my psychomotor/drawing skills but I have developed a simple exercise for warming up (may be placebo, I don't care) and it eases the more drawing I do” [Artist 10] Moreover, artists emphasized how they can typically only work for short periods - generally around 2-3 hours at a time. This point was closely related to a common theme around experiencing fatigue, which 11 artists in the online survey explicitly highlighted as influencing their practice: “Joint problems etc also limit very much my ability to paint landscapes/on location - all my work is studio based ... generally take photos in 2-3 hour sessions - longer not possible due to above issues and fatigue, partially caused by concentration needed for conversation” [Artist 14] “I paint and draw standing up for short periods at a time … I use digital drawing mostly using touch technologies for access and sometimes Wacom pen and pad on a low set desk sitting in my wheelchair for very short periods due to muscle fatigue” [Artist 8] Overall, the findings highlighted how the artists involved in the study have shaped and adapted their working methods and patterns to better support their individual creative practice. This, in turn, leads to highly individualised methods of working that typically require the use of specific tools and resources that enable artists to work independently (often with the support of personal assistants).
The Role of Personal Assistants

Another key theme to emerge was focused around personal assistance – 11 artists in the online survey commented on the need for help from others to enable them to work. The level of assistance required depended on the specific needs of each artist - for example, some were completely reliant on others: “I need help with everything ... I cannot create any artwork without somebody to help me ... I have to have everything placed exactly right to enable me to draw / paint ... it helps if the person is patient and has an understanding of the nature and process of art making” [Artist 6] Others required less support, but still needed help around setting up tools and for people to be available to assist during a creative session: “I need a PA [personal assistant] to assist me to adjust height of canvasses and easels, to set up and assemble my space as I need it. To reach high and low equipment. Squeeze paint and mediums out for painting” [Artist 8] “I do need help in the studio but so does Tracy Emen ... In the studio I employ help when I need it lifting and carrying moving canvasses” [Artist 18] “… setting up work/tools ... I need paint to be squeezed onto a flat pallet and water in a stable bowl to avoid spillage” [Artist 9] Some artists did not need any help specifically around their creative work, but instead required support around transporting work and materials (either around a studio or to an exhibition). It is clear, therefore, that personal support from others plays a crucial role in supporting the creative practice of physically impaired visual artists, with the specific needs highly dependent on individual requirements. As highlighted by numerous authors (e.g. Brisenden, 1986; Davis, 1990; Morris, 1993; French, 1993), this type of personal assistance is essential in enabling disabled people to live and work independently.
However, whilst it is clear that this support enables physically impaired artists to work, two artists also highlighted the “constraints” this places around their practice: “I need someone to place the paper on a board and put a pen / pencil / brush in my hand ... the main limitation I have, is that I do not always have somebody to help me when I want to work” [Artist 6] “I am a sculptor and a painter however I only sculpt if I have a support worker to lift and carry ...” [Artist 11] It is often argued that digital technologies can address these issues and reduce reliance on others, thus helping to support “independence”. However, self-sufficiency should not necessarily be the end goal with regards to using digital technologies in supporting artists - as French (1993) emphasises “… independence [for disabled people] can give rise to inefficiency, stress and isolation, as well as wasting precious time”. It is also important to note that digital technologies and tools (designed and customised to the specific needs of a disabled artist) are unlikely to completely remove the need for personal support. For example, if an effective and efficient eye gaze tracking solution was developed to support the practice of disabled artists, it is likely that someone with more severe physical impairments would still need support in setting up the eye tracking sensor and running through the eye calibration process (i.e. an interactive software-driven process that allows the system to accurately track an individual’s eyes). Moreover, the use of digital technologies to support artistic practice can present issues around personal assistants needing significant technical experience to help troubleshoot or fix basic (or more complex) technical issues if the tools, sensors, or devices are not working. This, in turn, could make the creative process more problematic and frustrating for disabled artists if technical issues cannot be quickly resolved.
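The article does not describe how the calibration step works internally, and commercial trackers use vendor-specific procedures. As a rough illustration of the underlying idea only, the mapping from raw gaze readings to screen coordinates can be approximated per axis with a least-squares linear fit over a handful of known calibration targets (all names and values below are illustrative, not drawn from the study):

```python
def fit_axis(raw, target):
    """Ordinary least-squares fit of target ≈ a*raw + b for one screen axis,
    from paired calibration samples (raw tracker reading -> known target)."""
    n = len(raw)
    mean_r = sum(raw) / n
    mean_t = sum(target) / n
    a = (sum((r - mean_r) * (t - mean_t) for r, t in zip(raw, target))
         / sum((r - mean_r) ** 2 for r in raw))
    b = mean_t - a * mean_r
    return a, b

# Five-point horizontal calibration: the user fixates five known on-screen
# targets while raw gaze readings are recorded (values are made up).
raw_x = [0.12, 0.35, 0.52, 0.70, 0.88]
target_x = [100, 500, 800, 1100, 1420]
a, b = fit_axis(raw_x, target_x)
screen_x = a * 0.50 + b  # map a new raw reading to screen pixels
```

Real systems fit both axes (often with non-linear terms and per-eye corrections), which is one reason the process is interactive and may require assistance to set up.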
Wide Usage of Mainstream Artistic Software

14 artists from the online survey reported using mainstream digital technologies and software in their practice. In terms of software, Photoshop was by far the most commonly used (9 artists) - other software used included Illustrator, InDesign, After Effects, Final Cut Pro, GIMP, Flash, Lightroom, Paint, ArtRage, and Revelation Natural Art. Four artists stated that they use mobile applications including Sketchbook Pro, Manga Studio, Ink, ProCreate, Pen and Ink, Graphite, Snapseed, Infinite Paint, and PicsPro. Artists highlighted several advantages they found when using mainstream digital technologies and software - in particular, they commented on how the use of a Wacom tablet, for example, can help support dexterity as well as making it feasible to remove unintentional mistakes. Artists also commented on how a digital approach offered more flexibility and freedom in their practice. “Although my style is very similar to traditional pen & ink work I now almost always work digitally after developing rough sketches. This is simply because it makes allowances for sudden stiffness or twitches (which also happen...) which can easily be altered in a digital format ... I have truly embraced the dork side and enjoy all forms of technological development which can be used to enhance visual communication” [Artist 10] “I prefer to draw digitally rather than paint ... always looking to technology to improve my techniques and practice.
Particularly interested in touch and freedom of movement where I don't have to hold equipment and something that can enable me to make a big gesture with very little movement, considering my limited scale of movement” [Artist 8] However, some artists also highlighted the issues they experience when using mainstream digital tools - these included issues around lacking control when using a stylus with artistic applications, difficulty around operating multi-touch tablets (typically due to motor impairments), the use of trackballs as a mouse replacement leading to repetitive strain injuries, the different feel of using a digital approach compared with traditional tools (which provide tangible feedback and can incorporate more "randomness" into work), and issues around the use of digital equipment that may be too big or heavy to use: “The digital responses are less intuitive and lack the gestural impact that one material interacting with another has. less accidental marks and no translucent colour layering” [Artist 5] “I have tried drawing apps and a stylus but lack the control ... when using ipad to draw I cant stable my wrist” [Artist 9] "I scan my drawings (with assistance) and work on them in photoshop. I am able to use photoshop on my own, using a mouse, but I find the keyboard very difficult to use” [Artist 6] These issues highlight the important role of the input devices that disabled artists use to control and manipulate mainstream artistic software. The variety of tools used often provides methods of interaction that are significantly different from a traditional mouse and keyboard. This can lead to issues as the emphasis in designing mainstream creative applications (e.g. Photoshop) leans heavily towards a more traditional mouse and keyboard interaction.
This mode of interaction therefore encourages interface design elements such as small icons which take up minimal screen space and are easy to select with a mouse (for non-disabled users), as well as keyboard shortcuts that help to speed up common activities and actions (e.g. copy and paste). However, to take one example, the use of a technology such as eye gaze tracking presents a completely different form of interaction in comparison to a mouse and keyboard (Jacob and Karn, 2003). Multiple research studies have highlighted how users (disabled and non-disabled) find it difficult to select small targets via eye gaze due to a range of technical and physiological constraints (Bates and Istance, 2002; Ashmore et al., 2005; Findlater et al., 2010). Therefore, attempting to use this form of interaction (or others) – in conjunction with interfaces that have been designed primarily for non-disabled users operating a mouse and keyboard – is likely to be problematic. This particular example was supported by Artist 4, who utilises eye gaze tracking technology to support her creative practice. Whilst this technology assists her in accessing and using creative software, she also highlighted how this approach “… takes a long time which is frustrating, because I compare how quickly I would work if I could still use my hands”. This was emphasised in a demo she provided of how she uses eye gaze tracking with the Revelation Natural Art package – an application she uses because it has larger icons and buttons than those commonly used in other creative packages (likely due to the fact that children are a key target audience for this software). Whilst the use of eye gaze tracking to control this application was somewhat accessible in enabling her to produce creative work, it was also clear that simple actions such as selecting a “pen” tool often had to be repeated on multiple occasions (thus potentially leading to frustrating user experiences).
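The difficulty of selecting small icons by gaze follows directly from the geometry of tracker accuracy: an error of around one degree of visual angle translates into tens of pixels on a typical display. A minimal sketch of this conversion (the function name and the example tracker figures are illustrative assumptions, not values from the study):

```python
import math

def min_target_px(accuracy_deg: float, viewing_cm: float, px_per_cm: float) -> float:
    """Smallest comfortable target size (in pixels) for a gaze tracker with a
    practical accuracy of +/- accuracy_deg of visual angle, at a given viewing
    distance. The error circle on screen spans 2 * d * tan(accuracy)."""
    radius_cm = viewing_cm * math.tan(math.radians(accuracy_deg))
    return 2 * radius_cm * px_per_cm

# A consumer tracker with ~1 degree practical accuracy, viewed at 60 cm
# on a ~96 DPI display (96 / 2.54 ≈ 37.8 px/cm):
print(round(min_target_px(1.0, 60.0, 96 / 2.54)))  # ≈ 79 px
```

A ~79-pixel minimum target is several times larger than the toolbar icons in typical desktop creative software, which helps explain why applications with large, child-oriented buttons were more usable by gaze.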
To draw on work by Akrich (1992), these findings highlight how the “scripts” used by designers of mainstream creative software still do not appear to prioritise or emphasise inclusive design. Moreover, consideration has not been given to the plethora of ways in which disabled people may want to use and access the software, and as such they may have to endure a sub-standard interaction experience (or be completely excluded). Simply bolting digital technologies (such as eye gaze tracking, speech recognition, or motion tracking) onto existing interfaces is unlikely to provide an optimal experience and means that disabled artists will likely have to find workarounds to try and make software accessible. If designers can create “scripts” or “scenarios” that are more inclusive in nature – combined with iterative design and development undertaken in close collaboration with disabled people and other stakeholders (in real world longitudinal scenarios) – the potential for creating more effective and efficient methods may be increased.

Specialised Assistive Technology Not Widely Used

10 artists from the online survey reported not currently using any digital assistive tools (e.g. eye gaze tracking, speech recognition software, etc.) to support their working practice. In particular, none of the artists reported using traditional assistive tools such as head wands, mouth sticks, or custom-designed grips for creative work. Those who are using assistive technology are making use of trackballs, eye tracking technology, elbow crutches, and wheelchair accessories (e.g. for holding cameras). In terms of software, artists reported using speech-to-text applications (e.g. Dragon NaturallySpeaking) and accessibility features such as screen magnifiers and virtual keyboards.
Whilst these tools can provide benefits to artists, they also present other issues - for instance, one artist using a wheelchair camera holder experiences significant issues in attaching it to her chair and then attaching the camera to the arm (Artist 3). The arm is also not long enough and thus the artist has to bend forward to see through the viewfinder (which can lead to lower back issues). In terms of trackballs, Artist 7 commented that they enable finer control of the mouse (when coupled with software that slows down the cursor), but that after repeated use it has resulted in a repetitive strain injury. Other artists have experienced issues with accessibility software - for example, Artist 18 commented on difficulties experienced with speech-to-text software in that it did not always "... catch many of the little words..." the artist used. There also appeared to be a lack of awareness around the variety of digital technologies that could be used for assistive purposes (as suggested by the lack of discussion in artist responses). Several artists openly expressed this view: “I am sure there are many other digital what-sits which might help me i.e. voice activation but I have not got to grips with this technology ... the main limitation I have, is that I don't know enough about what is possible / available” [Artist 6] “I dare say there's stuff that could help. I just don't know where to start” [Artist 16] “I am not aware of any tools, but would be open to exploring them” [Artist 15] "My take on 'digital art' is to draw picture in Word, using the 'insert' option, for shapes and colours" [Artist 12] The reasons for this lack of awareness are unclear, although could be related to the novelty of some of the technologies. For instance, eye gaze tracking is not a commonly used technology and may not be familiar to artists who have not explicitly researched or explored this method of interaction. 
Moreover, as highlighted in previous work (Harris, 2010; Macdonald and Clayton, 2013), cost could also be a significant issue, with the perception that innovative technologies are likely to be extortionately expensive (which is not necessarily always the case).

Importance of Wider Practice

Artist 18 highlighted an essential point that has received no attention from researchers to date: the burden of administrative tasks. The artist’s personal assistant summarised this effectively (the artist has cerebral palsy): “… a simple task, such as … a seven-line email can take 40 to 60 minutes to complete... extrapolate this for the number of emails she receives - her gallery, art institutions, suppliers, media, students - plus ordering of supplies, arranging print-making, planning work for a show and catalogue, and the time and effort involved are considerable and exhausting. It is unusual ... to be able to get into the studio before midday most days.” [Artist 18] Whilst this point was not explicitly raised by other artists, it will likely apply more widely for artists who may have issues in using traditional input tools for computers (i.e. a mouse and keyboard). It is therefore not sufficient for digital tools to only make the artistic and creative process more accessible - they also need to support all of the wider tasks involved in an artist's practice. This is a neglected research area where further work is required.

Conclusion

This study focused on three core research questions to explore the creative practice of professional physically impaired visual artists and their use of digital technologies. These questions placed a particular emphasis on the way in which artists currently work (e.g. which types of art forms, typical creative process, etc.), the barriers and issues they currently experience around their practice, and the extent to which they are currently utilising digital technologies and assistive tools to support their practice.
The findings highlighted that artists are working professionally across a variety of different art forms including painting, illustration, printmaking, sculpting, eye gaze art, and digital photography. In terms of artistic process, methods of working are clearly dependent upon the individual artist and their unique requirements – although several key themes emerged. For instance, some artists worked with particular art forms as these were the only ones that were feasible or accessible to them. Several artists have had to adapt their practice over time as their impairments have developed with artists highlighting, for example, how they have switched to working on smaller scales and then scaling up their work. Artists also discussed ways in which they attempt to reduce the impact of physical impairments influencing their process through ergonomically adapting the way in which they work (sometimes in ways that will likely lead to further physical issues – as in the case of Artist 9 working on the floor in a W seated position). In terms of issues that artists experience around their practice, fatigue was highlighted as a common barrier to working thus resulting in many of the artists only being able to work for short periods at a time. Other issues focused around being able to move and transport tools, adjusting the height of easels, squeezing paint out of tubes, and a range of other challenges (dependent on the specific experience of individual artists). As such, personal assistance was emphasised by many artists as being a crucial element of their practice in enabling them to work and to have control over their creative workflow. However, whilst assistants are essential in supporting the work of disabled artists, a couple of artists also expressed some frustration that they could only work when assistants were available.
It is often argued that digital technologies could help to provide disabled people with more self-sufficiency and independence, although this was only a desire that Artist 9 explicitly commented on (“I believe that I could be completely independent with the right equipment/software”). Similar to previous work, this suggests that self-sufficiency and complete independence is not necessarily a specific goal for the majority of artists involved in this project. The findings around the artists’ use of digital technologies and assistive tools in their practice were surprising. In particular, it was found that mainstream digital technologies and software are widely used and that these tools are supporting disabled artists to produce creative work professionally. This was a surprising finding as these tools would often be considered inaccessible to disabled people due to the design of the interfaces being developed for people using a traditional mouse and keyboard. Moreover, this finding contradicts previous work in the field (e.g. Harris, 2010) which has highlighted how disabled people are not commonly using mainstream technologies and software. This change in trend is likely due – in some part – to the wider choice of potential applications available (e.g. on mobile platforms) and improvements made to the accessibility of a range of devices (e.g. accessibility features available on Windows, the Mac operating system, iOS, and Android). Whilst these accessibility features can make mainstream software somewhat accessible, they can also lead to interaction issues as artists are having to “bolt” different input methods onto applications that were not necessarily designed to support those methods of interaction (e.g. using eye gaze tracking with Photoshop). In contrast, only a small minority of the artists reported using more novel technologies such as eye gaze tracking and speech recognition to support their practice.
As noted in other research (Harris, 2010; Macdonald and Clayton, 2013), this seems to be due to a lack of awareness and access to these products – with cost often being a significant barrier. However, whilst the prices of these technologies have dropped significantly (e.g. to around £100), there still seems to be limited awareness of these tools and their potential to support practice. There also remain relatively few tools available that are designed specifically to support disabled people in creative work. As highlighted, this results in artists having to adapt themselves to use mainstream technologies as opposed to tailoring these tools to support their specific needs and requirements. Therefore, whilst professional creative work is clearly possible using novel digital technologies (as exemplified by Artist 4 using eye gaze tracking), it can be a time-consuming, tedious, and frustrating process that requires significant effort and persistence to produce work. The research conducted in this study highlights both how digital technologies can potentially support creative practice for disabled artists and also introduce further issues and complications. To address some of these issues, future work in this area needs designers, developers, and user experience specialists – in collaboration with disabled artists and other stakeholders – to create digital tools that better support people wanting to produce creative work (using a variety of input methods). Whilst alternative approaches for producing creative work are made available through the increased accessibility of innovative technologies, the risk is that these technologies are simply “bolted” onto mainstream tools, applications, and software. This will likely result in usability issues as these mainstream applications have not been designed for novel methods of interaction such as eye gaze tracking, mid-air gesturing, speech recognition, or switch-based interaction.
It is crucial, therefore, that a collaborative approach is adopted to increase the likelihood that future tools are designed to address the specific needs and requirements of disabled artists. Moreover, it is essential that future research not only focuses on creative work, but also all the activities associated with the wider practice of disabled artists (e.g. the business side of their practice). This is an area that has received no attention from researchers to date, but is essential to ensure that artists can manage all aspects of their practice. In summary, this research provides a deeper insight into the working practice of professional physically impaired visual artists and demonstrates the variety of ways in which artists are producing their work (including barriers, use of digital technologies, etc.). Novel technologies present alternative approaches for disabled and non-disabled artists to produce creative work, although it was clear from this study that disabled artists are not aware of these possibilities. Furthermore, it is not yet clear whether technologies such as eye gaze tracking and speech recognition can genuinely enhance the practice of disabled artists more widely or whether they simply introduce more complexities and challenges. This therefore represents an interesting area for future research to help better determine the long-term impact of these digital and assistive technologies to support the creative practice of disabled artists.

Table 1: An overview of the artists who completed the survey (including self-identification of physical impairments) – artists in bold are those invited for interviews.

Artist | Gender | Age | Physical Impairment(s) | Art Form(s)
A1 | F | 58 | Severe pain in hands, shoulders, base of spine and hips | Fused glass art
A2 | F | 26 | Generalised dystonia - causing abnormal posture and spasms/pain; affects mouth, neck, right hand and right foot | Photography
A3 | F | 55 | Multiple sclerosis - decreased grip in hands and fatigue | Photography
A4 | F | 49 | Motor neurone disease - unable to use hands | Digital art via eye gaze tracking
A5 | M | 49 | Mobility, chronic pain, and fatigue | Painting with mixed media and collage
A6 | F | 61 | Muscular dystrophy - all muscles are very weak; experiences pain and fatigue | Drawing / painting
A7 | F | 42 | Mobility impaired with limited dexterity | Visual art and sound
A8 | F | 49 | Mobility impaired in all limbs and joints creating restricted movement; no muscle mass creates physical and mental fatigue over short periods of movement | Painting, drawing, and digital art via mobile applications
A9 | F | 20 | Severe cerebral palsy quadriplegia; mixture of spastic, athetoid and ataxic/involuntary movements | Pencil, acrylic, paint, clay and glazes
A10 | M | 49 | Multiple sclerosis - resulting in limited mobility and fatigue; neuropathic pain affecting arms and legs; stiffness and issues with hands/fingers | Digital illustrator
A11 | F | 39 | Hip amputee | Sculptor / painter
A12 | F | 74 | Parkinson’s, arthritis, and knee replacement | Wax
A13 | M | 60 | Two fingers and a thumb on each hand; some degree of arthritis and pain, along with mobility problems due to arthritis in ankle and knee joints | Drawing and collage
A14 | F | 41 | Ulcerative colitis with associated enteropathic arthropathy affecting mainly hands and feet; bilateral frozen shoulders (mainly released) and bilateral carpal tunnel syndrome | Drawing / painting (graphite for initial drawings, oil paint for paintings)
A15 | M | 41 | Right leg above-knee amputee | Cardboard sculptor
A16 | F | 63 | Mobility, pain, and fatigue | Digital designer
A17 | F | 22 | Pain and fatigue | Wood, paper, plastics, and ceramics (mixed media)
A18 | F | 60 | Cerebral palsy | Painter

References
Akrich, M. 1992. “The De-Scription of Technical Objects.” In Shaping Technology/Building Society, edited by W. Bijker and J. Law.
Ashmore, M., A.T. Duchowski, and G. Shoemaker. 2005. “Efficient eye pointing with a fisheye lens.” Paper presented at Proceedings of Graphics Interface, 203-210.
Bates, R. and H.
Istance. 2002. “Zooming Interfaces!: Enhancing the Performance of Eye Controlled Pointing Devices.” Paper presented at the Fifth International ACM Conference on Assistive Technologies, 119-126.
Boeltzig, H., J. Sullivan Sulewski, and R. Hasnain. 2009. “Career development among young disabled artists.” Disability & Society 24 (6): 753-769.
Brisenden, S. 1986. “Independent living and the medical model of disability.” Disability, Handicap & Society 1 (2): 173-178.
Borg, J., S. Larsson, and P. Östergren. 2011. “The right to assistive technology: For whom, for what, and by whom?” Disability & Society 26 (2): 151-167.
Coleman, M. B., and E. S. Cramer. 2015. “Creating Meaningful Art Experiences With Assistive Technology for Students With Physical, Visual, Severe, and Multiple Disabilities.” Art Education 68: 6-13.
Creed, C., R. Beale, and P. Dower. 2014. “Digital Tools For Physically Impaired Visual Artists.” Poster presented at the 16th International ACM SIGACCESS Conference on Computers & Accessibility, New York, 253-254.
Creed, C. 2016. “Eye Gaze Interaction for Supporting Creative Work with Disabled Artists.” Poster presented at the British Human-Computer Interaction Conference, Bournemouth.
Davis, K. 1990. “A Social Barriers Model of Disability: Theory into Practice (the emergence of the ‘Seven Needs’).” Derbyshire Coalition of Disabled People (http://www.leeds.ac.uk/disability-archiveuk/index.html).
Diment, L., and D. Hobbs. 2014. “A Gesture-Based Virtual Art Program for Children with Severe Motor Impairments - Development and Pilot Study.” Journal of Assistive, Rehabilitative & Therapeutic Technologies 2 (1).
Findlater, L., A. Jansen, K. Shinohara, M. Dixon, P. Kamb, J. Rakita, and J.O. Wobbrock. 2010. “Enhanced area cursors: reducing fine pointing demands for people with motor impairments.” Paper presented at the 23rd Annual ACM Symposium on User Interface Software and Technology, 153-162.
French, S. 1993.
“What’s So Great About Independence.” In Disabling Barriers - Enabling Environments, 44-48.
Gips, J., and P. Olivieri. 1996. “EagleEyes: An Eye Control System for Persons with Disabilities.” Paper presented at the Eleventh International Conference on Technology and Persons with Disabilities, 1-15.
Glaser, B., and A. Strauss. 1967. The Discovery of Grounded Theory. London: Weidenfield & Nicolson.
Goggin, G., and C. Newell. 2003. Digital Disability: The Social Construction of Disability in New Media. Rowman & Littlefield.
Harada, S., J. Wobbrock, and J. A. Landay. 2007. “VoiceDraw: A Hands-Free Voice-Driven Drawing Application for People with Motor Impairments.” Paper presented at the 9th International ACM SIGACCESS Conference on Computers and Accessibility, Arizona, 27-34.
Harris, J. 2010. “The use, role and application of advanced technology in the lives of disabled people in the UK.” Disability & Society 25 (4): 427-439.
Heikkilä, H. 2013. “EyeSketch: A Drawing Application for Gaze Control.” Paper presented at the Conference on Eye Tracking South Africa (ETSA), South Africa, 71-74.
Hornof, A. J., and A. Cavender. 2005. “EyeDraw: Enabling Children with Severe Motor Impairments to Draw With Their Eyes.” Paper presented at the SIGCHI Conference on Human Factors in Computing Systems, Oregon, 161-170.
Hurst, A., and J. Tobias. 2011. “Empowering Individuals with Do-It-Yourself Assistive Technology.” Paper presented at the 13th International ACM SIGACCESS Conference on Computers and Accessibility, Dundee, 11-18.
Jacob, R. J., and K. S. Karn. 2003. “Eye tracking in human-computer interaction and usability research: Ready to deliver the promises.” In The Mind's Eye, 573-605.
van der Kamp, J., and V. Sundstedt. 2011. “Gaze and Voice Controlled Drawing.” Paper presented at the 1st Conference on Novel Gaze-Controlled Applications, Karlskrona.
Kwan, C., and M. Betke. 2011.
“Camera Canvas: Image Editing Software for People with Disabilities.” In Universal Access in Human-Computer Interaction: Applications and Services, edited by C. Stephanidis, 146-154. Berlin: Springer.
Macdonald, S. J., and J. Clayton. 2013. “Back to the Future, Disability and the Digital Divide.” Disability & Society 28 (5): 702-718.
Morris, J. 1993. Independent Lives? Community Care and Disabled People. London: Macmillan.
Perera, D. P., J. R. T. Eales, and K. Blashki. 2007. “The Drive to Create: An Investigation of Tools to Support Disabled Artists.” Paper presented at the 6th ACM SIGCHI Conference on Creativity & Cognition, Washington, 147-152.
Phillips, B., and H. Zhao. 1993. “Predictors of assistive technology abandonment.” Assistive Technology 5 (1): 36-45.
Riemer-Reiss, M. L., and R. R. Wacker. 2000. “Factors associated with assistive technology discontinuance among individuals with disabilities.” Journal of Rehabilitation 66 (3): 44-50.
Watling, S. 2011. “Digital exclusion: coming out from behind closed doors.” Disability & Society 26 (4): 491-495.

doi:10.1016/j.msea.2007.01.147
Materials Science and Engineering A 464 (2007) 202–209

Instrumented anvil-on-rod tests for constitutive model validation and determination of strain-rate sensitivity of ultrafine-grained copper

M. Martin a, A. Mishra b, M.A. Meyers b, N.N. Thadhani a,*
a School of Materials Science and Engineering, Georgia Institute of Technology, Atlanta, GA 30332, United States
b Department of Mechanical and Aerospace Engineering, University of California, San Diego, CA 92093, United States

Received 8 December 2006; received in revised form 28 January 2007; accepted 31 January 2007

Abstract

Anvil-on-rod impact tests were performed on as-received (cold-rolled) OFHC copper rods and copper processed by 2- or 8-passes of equal channel angular pressing (ECAP).
The average grain size ranged from ∼30 μm for the as-received sample to ∼440 nm for the 8-pass sample. The dynamic deformation states of the samples were captured by high-speed digital photography and velocity interferometry was used to record the sample back (free) surface velocity. Computer simulations utilizing AUTODYN-2D hydrocode with the Johnson–Cook constitutive model were used to generate free surface velocity traces and transient deformation profiles for comparison with the experimental data. The comparison of experimental results with AUTODYN simulations provided a means for extracting the strain-rate sensitivity of copper as a function of grain size. Strain-rate sensitivity was found to increase as grain size decreased.
© 2007 Elsevier B.V. All rights reserved.

Keywords: Strain rate sensitivity; Grain size effects; Dynamic deformation; Ultrafine-grained (UFG) copper

1. Introduction

Nanocrystalline and ultrafine-grained (UFG) metals have unique mechanical properties (e.g., strength, hardness, and fatigue resistance) that render them good candidates for various structural applications [1–6]. Recent results indicate that strain-rate sensitivity in UFG metals is enhanced in comparison with conventional polycrystalline metals having micro-scale grains [7–12]. The strain-rate sensitivity of UFG copper has been studied by Gray et al. [13] by performing quasistatic compression tests and split Hopkinson pressure bar experiments on ECAP-processed specimens. This study revealed that the strain-rate sensitivity of UFG Cu is significantly higher than that of typical annealed, polycrystalline Cu, and its yield strength is above that extrapolated from the Hall-Petch relation. The work described in this paper is an extension of what has been previously done to determine the strain-rate sensitivity enhancement in UFG, ECAP-processed Cu at strain rates on the order of 10^3 to 10^5 s^-1 using dynamic reverse Taylor [14] anvil-on-rod impact tests.

* Corresponding author.
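For reference, the Hall-Petch relation invoked above and the usual definition of strain-rate sensitivity can be written in their standard textbook forms (these expressions are not reproduced from the paper itself):

```latex
\sigma_y = \sigma_0 + k_y\, d^{-1/2},
\qquad
m = \left.\frac{\partial \ln \sigma}{\partial \ln \dot{\varepsilon}}\right|_{\varepsilon,\,T}
```

Here $\sigma_0$ and $k_y$ are material constants, $d$ is the grain size, and $m$ quantifies how strongly the flow stress $\sigma$ responds to changes in strain rate $\dot{\varepsilon}$ at fixed strain and temperature.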
Tel.: +1 404 894 2651. E-mail address: naresh.thadhani@mse.gatech.edu (N.N. Thadhani).

0921-5093/$ – see front matter © 2007 Elsevier B.V. All rights reserved. doi:10.1016/j.msea.2007.01.147

The rod-on-rigid-anvil impact experiment developed by Taylor [14] in 1948 has become a standard method for investigating the high strain rate (∼10^3 to 10^5 s−1) response of materials. In Taylor's impact experiment, a cylindrical specimen is accelerated to impact a rigid anvil and deformation propagates through the cylinder as a wave. After impact, the specimen is recovered and the changes in its dimensions are used to infer its dynamic flow strength [14,15]. This test has become a common tool for investigating the constitutive response of materials by attempting to reproduce the final deformed shape of the specimen with a constitutive model [15–19]. However, simply matching the final shape of the specimen does not necessarily provide a robust validation of the constitutive model since the deformation path is not considered [20].

Constitutive models based on empirical relationships (i.e., Johnson–Cook [16]) as well as physically based relationships (i.e., Zerilli–Armstrong [17]) have been commonly used in the past for comparison with experimental results. It is not the intent of this paper to choose one model or type of model over another, but simply to explain the validation method which was used in this study.

In recent years, the Taylor impact test has been performed in its reverse configuration, with the rigid anvil impacting a stationary rod-shaped sample, allowing for simultaneous velocity interferometry of the free (back) surface velocity [21,22] and high-speed photography of the impact and specimen deformation throughout the entire deformation event.
The implementation of multiple time-resolved diagnostics which monitor the entire deformation event allows for development of constitutive models and more robust validation, as described in detail by Eakins and Thadhani [20].

In the present work, a reverse Taylor anvil-on-rod impact test instrumented with high-speed digital photography and VISAR [23] velocity interferometry was used to investigate the dynamic deformation response of copper of nano- to micro-meter scale grain size. This method has also been applied to other materials including bulk metallic glass matrix composites [24].

Although it is useful to validate the extent to which a constitutive equation predicts the dynamic deformation response of a material by comparing simulations and experimental data, the method also enables determination of the constants which provide the best fit to the experimental data. In this study on Cu, a well-characterized material, the Johnson–Cook equation [16], for which all relevant constants except for the strain-rate sensitivity were previously known [16] or determined experimentally, was used to extract the effect of grain size on strain rate sensitivity by examining three Cu specimens which had been processed using 0, 2, or 8 ECAP passes.

2. Experimental procedure

2.1. Materials

Commercially obtained OFHC, cold-rolled Cu was processed using a horizontal split ECAP die with an interior channel angle of 102° and exterior angle of 20° [25]. The processing route utilized was BC, in which the sample is rotated by 90° in the same direction between consecutive passes [26,27]. Fig. 1 shows micrographs and grain size distributions of the as-received, 2-pass and 8-pass ECAP Cu.
Fig. 1. (a) Optical micrograph of initial Cu with a grain size of 30 µm, and TEM micrographs of (b) 2-pass ECAP-processed Cu with a grain size of ∼890 nm, and (c) 8-pass ECAP-processed Cu with a grain size of ∼440 nm. The grain size distributions of the 2- and 8-pass ECAP samples are below the corresponding micrographs. The initial as-received Cu illustrates an extensive deformation cell substructure typical of cold-rolled rods, and the ECAP Cu shows clear evidence of refined grain size.

These figures show the reduction in grain size that occurred with increasing ECAP passes. The as-received Cu had an average grain size, measured using the line intercept method on optical micrographs, of ∼30 µm. After two ECAP passes the grain size had been reduced to an average size of ∼890 nm, with even further reduction to ∼440 nm after eight ECAP passes.

The Cu samples were machined into cylindrical specimens of 4 mm diameter and 4 mm length for static testing, and rods of 9.4 mm diameter and 40.13 mm length for dynamic testing. The rods were lapped on both ends with 45 µm diamond suspension to insure parallel surfaces.

2.2. Static compression testing

Compression tests were performed on the as-received (cold-rolled), 2-pass and 8-pass Cu samples. The tests were performed at strain rates of 5 × 10−3 and 1 s−1 using a Satec compression testing machine. These data were used to compare the strengths of the three differently processed specimens at various strain rates.

Table 1
Experiment details including the number of ECAP passes the specimen had undergone during processing and the impact velocity

Material (no. of ECAP passes) | Impact velocity (m/s) | Average strain rate (s−1)
As-received (cold-rolled) | 88 | 1093
2 | 123 | 1528
8 | 125 | 1557
The average strain rate was estimated using the impact velocity and change in length of the specimen according to the method described by Meyers [29].

The data collected at 1 s−1 were also used to determine the values of constants needed for modeling with the Johnson–Cook [16] equation, as described later.

2.3. Reverse Taylor impact tests

Instrumented reverse Taylor anvil-on-rod impact tests [20,24,28] were conducted to permit correlation of simulated deformation profiles with transient deformation states recorded during the experiment. A schematic of the reverse Taylor anvil-on-rod impact test setup is shown in Fig. 2. Complete experimental details are described in Ref. [20]. The projectile consisted of an 80 mm diameter 2024 Al sabot with a maraging steel rigid anvil plate (∼6.2 mm thickness) secured to the front surface. The rod-shaped samples were mounted onto a target ring and aligned with a laser beam to ensure parallel impact. An IMACON-200 high-speed digital camera, used to capture images of the deformation of the rods during impact, was triggered using crush pins. The free surface velocity of the back surface of the rod was captured in each experiment by a VISAR [23] probe, which was positioned behind the sample, as seen in Fig. 2. The details of each experiment, including the number of ECAP passes the specimen had undergone, the impact velocity, and average strain rate are given in Table 1. The average strain rate was defined based on the impact velocity and change in length of the specimen [29]. Higher strain rates (on the order of 600 s−1 for the 125 m/s impact of the 8-pass sample) were experienced by the specimens during the early stages of impact, and the strain rate subsequently decreased as deformation continued.

Fig. 2. Schematic of the reverse Taylor anvil-on-rod impact test setup.
2.4. AUTODYN-2D modeling

AUTODYN simulations of the anvil-on-rod impact experiments were performed to validate the constitutive response of the ECAP Cu specimens using the Johnson–Cook constitutive equation [16]:

σ = [A + Bε^n][1 + C ln ε̇*][1 − T*^m]   (1)

The unknown strain rate sensitivity constant, C, was generated by fitting the simulated free surface velocity trace to that determined experimentally using VISAR. The models were further validated by comparing simulated transient deformation profiles with the images captured during deformation. Simulations were run in 2D as an axisymmetric problem, and gauges were placed on the specimen's rear surface to track the free surface velocity; the model setup can be seen in Fig. 3.

For the Johnson–Cook equation, the hardening constant, B, and hardening exponent, n, were obtained from stress–strain data measured at ε̇ = 1 s−1. The yield strength coefficient, A, was left at 90 MPa, the value determined by Johnson and Cook [16], since this value represents the yield stress in undeformed copper. The thermal softening exponent, m, was also left as determined by Johnson and Cook. The strain rate constant, C, the only remaining variable, was then empirically obtained by fitting the simulated free surface velocity trace with that obtained experimentally, as described by Eakins and Thadhani [20].

Fig. 3. Axisymmetric problem setup and mesh in AUTODYN-2D showing the projectile (partial), flyer plate and specimen. The gauge on the back (free) surface of the specimen tracks the free surface velocity.

Fig. 4. True stress–strain plots for Cu samples in as-received state and after two and eight ECAP passes (ε̇ = 5 × 10−3 and 1 s−1).

Table 2
Quasistatic 0.2% offset flow stress values for ECAP specimens tested in compression at two different (quasi-static) strain rates

No. of ECAP passes | Flow stress at ε̇ = 5 × 10−3 s−1 (MPa) | Flow stress at ε̇ = 1 s−1 (MPa)
0 | 301 | 316
2 | 371 | 388
8 | 374 | 421
Table 3
Final axial and areal strain values measured from recovered specimens

No. of ECAP passes | Impact velocity (m/s) | Axial strain, ε = ln(Lf/L0) | Areal strain, ε = 1 − (A0/A)
0 | 88 | 0.137 | 0.377
2 | 123 | 0.177 | 0.549
8 | 125 | 0.169 | 0.518

Fig. 5. Eight-pass Cu specimen recovered after reverse Taylor anvil-on-rod impact tests at 125 m/s.

3. Results and discussion

3.1. Static compression testing

Static stress–strain curves were obtained for the as-received, 2- and 8-pass ECAP Cu specimens. Fig. 4 shows the true stress–strain response for as-received, 2- and 8-pass ECAP Cu at strain rates of 5 × 10−3 and 1 s−1. The effects of significant deformation during ECAP processing and the accompanying reduction in grain size on the strength of these copper specimens were evident in the static test data. It is important to note that even the as-received (0-pass) Cu sample had an extensive deformation substructure due to cold-rolling, consistent with the almost zero work hardening observed in the stress–strain curves shown in Fig. 4. Table 2 lists the values of the flow strengths obtained from quasistatic experiments. For both static strain rates, these results show an increase in flow strength with increasing ECAP passes.

3.2. Dynamic testing

Reverse Taylor anvil-on-rod impact experiments were performed on the as-received Cu specimen at 88 m/s, the 2-pass ECAP Cu specimen at 123 m/s, and the 8-pass ECAP Cu specimen at 125 m/s. A representative image of the recovered 8-pass specimen (impacted at 125 m/s) is shown in Fig. 5, with indications of its initial and final dimensions. Table 3 lists the final axial and areal strains measured from the recovered samples.

Fig. 6. Four of 16 high-speed digital images from (a) impact of a 2-pass ECAP Cu specimen at 123 m/s and (b) impact of an 8-pass ECAP Cu specimen at 125 m/s. These images show the projectile + flyer plate assembly accelerating from the left to impact the stationary rod-shaped specimen, which deforms by mushrooming of the impact face and decrease in specimen length. Comparison of the 2- and 8-pass specimens shows that the 2-pass specimen is deforming more than the 8-pass specimen.
Comparison of the recovered ECAP specimens showed that the 8-pass specimen exhibited less deformation than the 2-pass specimen during dynamic testing at a similar impact velocity, due to greater strain hardening achieved during additional ECAP passes.

Fig. 6 shows 4 of 16 images captured during reverse Taylor anvil-on-rod testing of the (a) 2- and (b) 8-pass ECAP Cu specimens. These images show the projectile + flyer plate assembly accelerating from the left to impact the stationary rod-shaped specimen, which is in the center of the image. It can be seen in these images that the impact face of the specimens is expanding, or mushrooming, and the length is decreasing. Comparison of the 2- and 8-pass specimens shows that the 2-pass specimen is deforming more than the 8-pass specimen due to the additional strain hardening endured by the 8-pass specimen during six more ECAP passes.

3.3. AUTODYN-2D modeling

AUTODYN simulations were performed using the Johnson–Cook constitutive model with hardening constants obtained from stress–strain data measured at a strain rate of 1 s−1 and an empirically determined strain-rate sensitivity parameter that was modified such that the simulated free surface velocity trace matched the experimentally measured velocity trace. Fig. 7 shows the comparisons between the experimentally measured free surface velocity traces and those obtained from the AUTODYN simulation using the Johnson–Cook model for each specimen.
It can be seen that the simulations capture details of the slope and peak of the free surface velocity and result in a very close fit to the experimental data. The differences between the simulation and experiment in the first step size appear to be more obvious in the case of the 8-pass ECAP sample than the others. We attribute this to the effects of the ultrafine grains and their associated boundaries, the details of which are not captured in the simulations. The increase in the number of grains with decreasing grain size can result in more dispersion of the elastic waves, and consequently lower amplitude reverberation, which was observed in the experimental traces, as compared to the simulated traces.

The final values of the Johnson–Cook constants used for each case resulting in the best match to the experimental data are summarized in Table 4. The Johnson–Cook equation with the modified constants was then used to generate deformation profiles for comparison with deformation profiles of the final recovered samples, as well as the transient profiles recorded with the high-speed digital camera. The dimensions of each of the recovered specimens and those from the final state of the simulation are reported in Table 5. The simulations show final deformation geometries that differ from experimental values by <7.38% in length and <2.01% in impact face radius for each sample. The larger error in the final length is due to the physical measurements performed on the recovered specimens, which had slightly non-parallel ends and a non-round impact face due to secondary impact in the catch tank.

The experimental and simulated length and impact face radius at transient times are compared in Tables 6 and 7 for the 2-pass and 8-pass samples, respectively. These results demonstrate a

Fig. 7. Simulated vs. experimental free surface velocity trace for each ECAP Cu specimen. Simulation is based on modified Johnson–Cook parameters from stress–strain data and empirical fit to the experimental free surface velocity trace. (a) Initial Cu, 88 m/s; (b) 2-pass Cu, 123 m/s; (c) 8-pass Cu, 125 m/s.
close match (<2.7% difference for each time) of these dimensions between simulations and experiments for each of the transient times, indicating that the model provides an accurate prediction of the deformation, and the constants obtained can be used to further investigate the deformation response. Comparisons of the simulated and experimental transient profiles of the 2- and 8-pass ECAP Cu specimens at selected times during impact are shown in Fig. 8.

Table 4
Modified Johnson–Cook parameters used in AUTODYN simulations

No. of ECAP passes | Yield stress, A (MPa) | Hardening constant, B (MPa) | Hardening exponent, n | Strain rate constant, C | Thermal softening exponent, m
0 | 90 | 340 | 0.0334 | 0.009 | 1.09
2 | 90 | 390 | 0 | 0.011 | 1.09
8 | 90 | 423 | 0 | 0.017 | 1.09

A and m were left as determined by Johnson and Cook [16]; B and n were determined using σ = Bε^n in the plastic range of the stress–strain data obtained at ε̇ = 1 s−1; C was then determined by empirically fitting the simulations to the experimental data.

Table 5
Comparison of final dimensions from simulations and experiments

No. of ECAP passes | Lf (mm) | %Difference | Rf at impact face (mm) | %Difference
0 | Expt: 35.00, Sim: 37.53 | 7.23 | Expt: 5.97, Sim: 6.09 | 2.01
2 | Expt: 33.63, Sim: 36.17 | 7.03 | Expt: 6.96, Sim: 7.03 | 1.01
8 | Expt: 33.88, Sim: 36.38 | 7.38 | Expt: 6.73, Sim: 6.86 | 1.93

Experimental values were measured from recovered specimens, which had undergone secondary impact in the catch tank, resulting in non-parallel ends or imperfectly round cross sections, and subsequently a small amount of error associated with measurement of these values, so average values are reported.
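As a concrete illustration of how Eq. (1) is evaluated with the Table 4 constants, the following sketch computes a Johnson–Cook flow stress in Python. The loading conditions (strain, strain rate, temperature) and the reference and melting temperatures are illustrative assumptions, not values taken from this paper, so the result is not expected to reproduce Table 8 exactly.

```python
import math

def johnson_cook_stress(A, B, n, C, m, strain, strain_rate, T,
                        T_ref=298.0, T_melt=1356.0, ref_rate=1.0):
    """Flow stress (MPa) from the Johnson-Cook model, Eq. (1):
    sigma = [A + B*eps^n] * [1 + C*ln(eps_dot*)] * [1 - T*^m],
    where eps_dot* = strain_rate / ref_rate and
    T* = (T - T_ref) / (T_melt - T_ref).
    T_ref and T_melt here are assumed values for Cu, not from the paper."""
    T_star = (T - T_ref) / (T_melt - T_ref)
    return (A + B * strain**n) \
        * (1.0 + C * math.log(strain_rate / ref_rate)) \
        * (1.0 - T_star**m)

# Constants for the 8-pass ECAP Cu from Table 4 (A and B in MPa).
params_8pass = dict(A=90.0, B=423.0, n=0.0, C=0.017, m=1.09)  # n = 0: no further hardening

# Illustrative (assumed) conditions: 10% strain, 1.5e3 1/s, room temperature.
sigma = johnson_cook_stress(strain=0.10, strain_rate=1.5e3, T=298.0, **params_8pass)
print(f"Flow stress: {sigma:.0f} MPa")
```

Note how the three bracketed factors separate strain hardening, rate sensitivity, and thermal softening, which is why C could be fitted independently once A, B, n and m were fixed.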
Table 6
Transient dimensions of the 2-pass ECAP specimen during deformation obtained from high-speed digital images during the experiment and from simulations at corresponding times

Time (µs) | L (Expt) (mm) | L (Sim) (mm) | %Difference in length | R (Expt) (mm) | R (Sim) (mm) | %Difference in radius
12.49 | 38.67 ± 0.09 | 38.69 | 0.06 | 5.93 ± 0.08 | 5.75 | 1.55
21.93 | 37.80 ± 0.11 | 37.82 | 0.04 | 6.28 ± 0.18 | 6.29 | 0.06
31.37 | 36.94 ± 0.17 | 37.10 | 0.43 | 6.59 ± 0.14 | 6.65 | 0.43
40.81 | 36.37 ± 0.10 | 36.53 | 0.43 | 6.91 ± 0.23 | 6.91 | 0.00
50.26 | 36.16 ± 0.10 | 36.19 | 0.07 | 6.83 ± 0.43 | 7.02 | 1.41

Table 7
Transient dimensions of the 8-pass ECAP specimen during deformation obtained from high-speed digital images during the experiment and from simulations at corresponding times

Time (µs) | L (Expt) (mm) | L (Sim) (mm) | %Difference in length | R (Expt) (mm) | R (Sim) (mm) | %Difference in radius
14.15 | 38.28 ± 0.06 | 38.53 | 0.66 | 6.06 ± 0.14 | 5.81 | 2.03
24.14 | 37.51 ± 0.20 | 37.63 | 0.31 | 6.40 ± 0.17 | 6.33 | 0.52
33.82 | 36.43 ± 0.06 | 36.99 | 1.55 | 6.72 ± 0.27 | 6.64 | 0.57
43.65 | 36.44 ± 0.28 | 36.50 | 0.15 | 6.66 ± 0.22 | 6.82 | 1.19
53.49 | 36.44 ± 0.03 | 36.31 | 0.37 | 6.52 ± 0.19 | 6.87 | 2.70

A close correlation, within the pixel uncertainty associated with the image resolution, between the simulated and experimental deformation states is observed. The correlation provides further validation that the Johnson–Cook model is accurately describing the deformation of the as-received (cold-rolled) and ECAP Cu, so the significance of each constant can be further evaluated.
Values of the dynamic flow stress were calculated using the Johnson–Cook equation, and are listed in Table 8. The variation of flow strength as a function of strain rate for each specimen is plotted in Fig. 9. It can be seen from this plot that the 8-pass ECAP sample is consistently stronger than the 2-pass and as-received Cu samples. The plot of flow strengths also indicates the effect of strain rate on the flow strength, with a ∼42% increase in strength of the as-received Cu versus a ∼53% increase for the 8-pass ECAP Cu over strain rates spanning seven orders of magnitude.

Table 8
Dynamic flow stress values calculated using the Johnson–Cook equation

No. of ECAP passes | Dynamic flow stress, Johnson–Cook (Eq. (1)) (MPa)
0 | 427
2 | 513
8 | 571

Fig. 8. Radius as a function of distance from impact end plots comparing Johnson–Cook simulations with the experimental transient deformation profiles at various times after impact of the (a) 2- and (b) 8-pass ECAP Cu samples impacted at 123 and 125 m/s, respectively. (a) shows a marker indicating the maximum pixel uncertainty corresponding to the image resolution.

Table 4 lists a strain rate constant of 0.009 for the as-received Cu, 0.011 for 2-pass ECAP Cu, and 0.017 for the 8-pass ECAP Cu; these are the strain rate constants that provided the best fit of the simulation to the experimental free surface velocity trace and transient profiles. These values show an increase in strain-rate sensitivity of the ECAP samples, which is due to the effect of ultrafine grain size, consistent with that observed in other studies [9,13]. The strain rate sensitivity of the as-received Cu is comparable with that for microcrystalline fcc materials including Cu as determined by Conrad [30], who found a strain rate sensitivity of 0.004. The 2-pass ECAP specimen shows only a slight increase in strain rate sensitivity compared to the initial sample, possibly due to incomplete formation of sub-grain structure. In contrast, the 8-pass ECAP sample, with a ∼440 nm grain size,

Fig. 9. Flow strengths of the ECAP Cu specimens at different strain rates ranging over seven orders of magnitude.
exhibited nearly twice as much strain-rate sensitivity as the as-received Cu. These results are consistent with the mechanism explained by Goeken et al. [8], who have observed an enhanced strain rate sensitivity corresponding to a decrease in activation volume, V = √3·kT/(σ·m), which has been attributed to a change in the rate-controlling mechanism from forest dislocations to other behaviors in the ultrafine grain size domain.

4. Conclusions

The dynamic deformation of as-received (cold-rolled) and subsequently ECAP-processed Cu tested using reverse Taylor anvil-on-rod impact experiments was captured by high-speed digital photography and compared with AUTODYN-2D simulations using the Johnson–Cook constitutive equation with constants obtained from stress–strain data and by fitting to an experimentally measured free surface velocity trace. The constitutive equation provided a good fit to the final shape of each of the impacted specimens, as well as the transient deformation shapes. The constitutive equation also enabled the evaluation of the strain-rate sensitivity parameter, which was determined to be 0.009 for the as-received Cu, 0.011 for 2-pass ECAP Cu, and 0.017 for 8-pass ECAP Cu. These results show an increase in strain-rate sensitivity with decreasing grain size, consistent with previous observations [9,13].

Acknowledgements

Research at Georgia Tech was supported by ARO Grant no. E-48148-MS-000-05123-1 (Dr. Mullins, program monitor), a Boeing Graduate Fellowship, and a NASA Jenkins Fellowship. Research at UCSD was supported by the National Science Foundation under Grant CMS-0210173 (NIRT).

References

[1] C. Suryanarayana, Int. Mater. Rev. 40 (1995) 41–64.
[2] R.Z. Valiev, N.A. Krasilnikov, N.K. Tsenev, Mater. Sci. Eng. A 137 (1991) 35–40.
[3] R.Z. Valiev, A.V. Korznikov, R.R. Mulyukov, Mater. Sci. Eng. A 168 (1993) 141–148.
[4] R.Z. Valiev, E.V. Kozlov, Y.F. Ivanov, J. Lian, A.A. Nazariv, B. Baudelet, Acta Metall.
Mater. 42 (1994) 2467–2475.
[5] V.Y. Gertsman, R. Birringer, R.Z. Valiev, H. Gleiter, Scripta Mater. 30 (1994) 229–234.
[6] J.R. Weertman, Mater. Sci. Eng. A 166 (1993) 161–167.
[7] L. Lu, S.X. Li, K. Lu, Scripta Mater. 45 (2001) 1163–1169.
[8] J. May, H.W. Hoppel, M. Goeken, in: Z. Horita, T.G. Langdon (Eds.), Proceedings of the 3rd International Conference on Nanomaterials by Severe Plastic Deformation (NanoSPD 3), Japan, 2005.
[9] Q. Wei, S. Cheng, K.T. Ramesh, E. Ma, Mater. Sci. Eng. A 381 (2004) 71.
[10] R. Schwaiger, B. Moser, M. Dao, N. Chollacoop, S. Suresh, Acta Mater. 51 (2003) 5159–5172.
[11] F. Dalla Torre, H. Van Swygenhoven, M. Victoria, Acta Mater. 50 (2002) 3957–3970.
[12] Y.M. Wang, E. Ma, Mater. Sci. Eng. A 375–377 (2004) 46–52.
[13] G.T. Gray, T.C. Lowe, C.M. Cady, R.Z. Valiev, I.V. Aleksandrov, Nanostruct. Mater. 9 (1997) 477.
[14] G. Taylor, Proc. R. Soc. London A 194 (1948) 289–299.
[15] M.L. Wilkins, M.W. Guinan, J. Appl. Phys. 44 (1973) 1200–1206.
[16] G.R. Johnson, W.H. Cook, Proceedings of the 7th International Symposium on Ballistics, The Hague, The Netherlands, 1983, p. 541.
[17] F.J. Zerilli, R.W. Armstrong, J. Appl. Phys. 61 (1987).
[18] J.B. Hawkyard, Int. J. Mech. Sci. 11 (1969) 313–333.
[19] W.H. Gust, J. Appl. Phys. 53 (1982) 3566–3575.
[20] D. Eakins, N.N. Thadhani, J. Appl. Phys. 100 (2006) 073503.
[21] I. Rohr, H. Nahme, K. Thoma, Int. J. Impact Eng. 31 (2005) 401–433.
[22] I. Rohr, H. Nahme, K. Thoma, J. Phys. IV 110 (2003) 513–518.
[23] L.M. Barker, R.E. Hollenbach, J. Appl. Phys. 43 (1972) 4669–4675.
[24] M. Martin, N.N. Thadhani, L. Kecskes, R. Dowding, Scripta Mater. 55 (2006) 1019–1022.
[25] A. Mishra, V. Richard, F. Gregori, R.J. Asaro, M.A. Meyers, Mater. Sci. Eng. A (2005) 410–411.
[26] V.M. Segal, Mater. Sci. Eng. A 197 (1995) 157–164.
[27] S. Ferrasse, V.M. Segal, K.E. Hartwig, R.E.
Goforth, Metall. Mater. Trans. 28A (1997) 1047–1057.
[28] M. Martin, S. Hanagud, N.N. Thadhani, Mater. Sci. Eng. A 443 (2007) 209–218.
[29] M.A. Meyers, Dynamic Behavior of Materials, Wiley-Interscience, New York, 1994, p. 668.
[30] H. Conrad, in: V.F. Zackey (Ed.), High Strength Materials, 1965.

ELEVATION MODELS FOR GEOSCIENCE

"Monitoring coastal change using terrestrial LiDAR"

P. Hobbs*, A. Gibson, L. Jones, C. Poulton, G. Jenkins, S. Pearson, K. Freeborough
British Geological Survey
Kingsley Dunham Centre
Keyworth
Nottingham NG12 5GG
United Kingdom
prnh@bgs.ac.uk

ABSTRACT

The paper describes recent applications by the British Geological Survey (BGS) of the technique of mobile terrestrial LiDAR surveying to monitor various geomorphological changes on English coasts and estuaries. These include cliff recession, landslides and flood defences, usually sited at remote locations undergoing dynamic processes with no fixed reference points. Advantages, disadvantages and some practical problems are discussed. The role of GPS in laser scanning is described.

INTRODUCTION

The use and application of terrestrial-based Light Detection And Ranging (LiDAR), using a method popularly known as laser scanning, has greatly increased over the past five years. The perception of the technique has changed from that of a novel but complex surveying tool to a relatively simple, almost routine method for precision measurement.
The method was first widely used within the quarry industry, where the results of repeat surveys were used to manage and plan material extraction. The technique has subsequently found architectural, civil engineering and industrial applications and, more recently, has been adopted by the computer games industry to capture street scenery. Within geoscience, terrestrial LiDAR has been applied to the monitoring of volcanoes (Hunter et al., 2003), earthquake and mining subsidence, quarrying, buildings, heritage and conservation, forensics (Paul & Iwan, 2001; Hiatt, 2002), landslides (Rowlands et al., 2003) and coastal erosion (Hobbs et al., 2002; Miller et al., 2006; Poulton et al., 2006). The method has developed in parallel with airborne LiDAR and, to some extent, terrestrial photogrammetry (Adams et al., 2003) and other airborne/spaceborne techniques (Balson et al., 1998; Webster & Dias, 2006). This paper describes the different techniques and applications to which the British Geological Survey (BGS) has put terrestrial LiDAR over the past six years, and the successes and difficulties that have been encountered over that time.

TERRESTRIAL LiDAR – APPLICATIONS AT BGS

Since 2000, the BGS has used various terrestrial LiDAR and GPS systems in combination to measure, record and monitor a variety of geological exposures and geomorphological subjects, initially in collaboration with 3DLaserMapping Ltd. Most of the work has centred on the monitoring of active landslides on eroding coastlines, where the target surface is visible from a number of locations and is generally free of vegetation. Good reflections are returned from natural rock and soil materials at these sites, with rare exceptions where water seepage dramatically reduces the reflectance of dark mudrocks.
CORE Metadata, citation and similar papers at core.ac.uk Provided by NERC Open Research Archive https://core.ac.uk/display/52810?utm_source=pdf&utm_medium=banner&utm_campaign=pdf-decoration-v1 2 The platform for the scanner is usually a tripod (Figure 1). This provides the versatility and mobility essential when scanning in a dynamic environment, where any kind of permanent installation is ruled out. The instrument can either be positioned over a known point or a differential global positioning system (dGPS) antenna substituted for the scanner to obtain the position, provided that the height is accurately measured, and the discrepancy between antenna and scanner heights is accounted for. Care must be taken to ensure tripod stability, particularly in sand. Most cliffs can be laser-scanned from the beach or rock platform using this method, but with certain caveats described later. Figure 1 Laser scanner mounted on tripod – mobile set-up Perhaps surprisingly, the method is also suited to low-lying features, normally only considered for airborne LiDAR, with the proviso that elevated vantage points are necessary. These may be provided by a vehicle roof in a temporary configuration (Figure 2) or, increasingly commonly, a dedicated vehicle mounting. Most vehicle mounting arrangements suffers in strong winds, although jacks were used at the four corners of the vehicle to eliminate suspension movement and provide stability. A modular gantry (Figure 3) or hydraulic platform may also be used. However, there are important issues of stability particularly where the instrument cannot be remotely operated. In the case of critical monitoring, e.g. for a large extremely hazardous landslide or volcano, a permanent solid monument is preferred, and if possible the instrument mounted on a long-term basis. 
The latter situation minimises errors due to instability and setting up associated with temporary tripod-mounting, but of course ties up the instrument for long periods and may expose it to damage. In a coastal situation the installation of a solid monument is usually not feasible. The set-up shown in Figure 4, whilst providing a solid platform, can only be temporary as the tide covers the block, as evidenced by the mussels attached to it.

Figure 2 Laser scanning from vehicle roof – mobile set-up

Figure 3 Laser scanning from a 6 m high modular gantry – temporary set-up

In most cases it is not possible to erect permanent monuments on the coast or estuary from which to laser-scan. Where monitoring is required using temporary mobile platforms (e.g. tripods), laser scans must be oriented to a fixed grid reference system. In areas of coastal erosion lacking fixed reference points, the current solution is geodetic-quality dGPS.

The raw output data from a scan consist of vertical and horizontal angles, distances, and reflective intensities, plus calibrated digital images where available. The angles and distances are subsequently 'oriented' into xyz grid co-ordinate positions (local or global) on the computer using dGPS (or other) location information. This format allows 'oriented' scans taken from other positions to be combined, as well as roving GPS data sets, to form a single model. Recently, the additional feature of calibrated digital photography has enhanced the method, allowing both the raw data and the final 3D model to be coloured accurately, the outcome resembling a 3D photo. This is very useful for the geoscientist who wishes to visualise, record, and measure terrain, structures, volumes, and processes.
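The 'orientation' step described above — converting raw horizontal and vertical angles and ranges into grid co-ordinates tied to a dGPS-derived scanner position — can be sketched as follows. This is a simplified illustration (a single rotation about the vertical axis, hypothetical co-ordinates and function names), not the algorithm of any particular scanner package; a full registration would also handle the antenna-to-scanner height offset and instrument tilt.

```python
import math

def polar_to_xyz(h_angle_deg, v_angle_deg, range_m):
    """Convert one raw scanner observation (horizontal angle, vertical angle
    measured up from the horizontal, slope range) to scanner-centred x, y, z."""
    h = math.radians(h_angle_deg)
    v = math.radians(v_angle_deg)
    horiz = range_m * math.cos(v)      # horizontal distance to the return
    return (horiz * math.sin(h),       # x, before orientation
            horiz * math.cos(h),       # y, before orientation
            range_m * math.sin(v))     # z (up)

def orient_point(p, scanner_e, scanner_n, scanner_h, azimuth_deg):
    """Place a scanner-centred point into the grid: rotate the scan about the
    vertical axis by the instrument azimuth (clockwise from grid north), then
    translate to the dGPS-derived scanner position (easting, northing, height)."""
    x, y, z = p
    a = math.radians(azimuth_deg)
    e = scanner_e + x * math.cos(a) + y * math.sin(a)
    n = scanner_n - x * math.sin(a) + y * math.cos(a)
    return (e, n, scanner_h + z)

# Example: a return 50 m away, 10 degrees above horizontal, with an assumed
# (hypothetical) grid position for the scanner and a 30 degree azimuth.
p = polar_to_xyz(45.0, 10.0, 50.0)
print(orient_point(p, 634200.0, 290500.0, 4.2, 30.0))
```

Scans taken from different set-ups are transformed into the same grid in this way, which is what allows them to be merged into a single model.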
Figure 4 Laser scanner mounted to rock-bolt on WW2 concrete block – temporary set-up

COASTAL RECESSION

Coastal recession is of worldwide concern, particularly in the light of current global climate change predictions and the associated sea level changes and increased storm occurrence (Lee & Clark, 2002; Clayton, 1989). Monitoring of recession is considered a key factor in successful coastal management and hazard mitigation (Hall et al., 2002). The coastal environment is one in which high-precision surveying can be made difficult by the dynamic nature of the environment. Typically, away from the built environment there are no lasting reference points with which to ‘fix’ each survey. Each element of the eroding coastline, i.e. cliff, platform, and beach, such as those in many parts of eastern and southern England, is in an almost continuous state of flux. Tides, unstable slopes, and the routine destruction of any fixed reference points therefore create an immediate problem for the surveyor using terrestrial LiDAR in these environments. Hence the need to accurately fix scans based on mobile or temporary set-ups such as those shown in Figures 1 and 2. The use of dGPS to locate laser scans can itself be compromised close to high cliffs, particularly where the satellite configuration is unfavourable. Methods other than terrestrial LiDAR have been used to monitor unstable cliffs. These may be subdivided into direct and other remote techniques. Examples of the former include instrumented rock-bolts and cable tensiometers; examples of the latter include Time Domain Reflectometry (Pasuto et al., 2000) and Digital Image Processing (Allersma, 2000). As with other sciences, sophisticated computer models are increasingly being used to characterise and predict coastal cliff recession (Walkden & Hall, 2005), particularly where the element of climate change is introduced.
These require quantitative input data, such as those obtainable by terrestrial and aerial LiDAR and by other remote techniques. Direct slope stability monitoring methods tend to be targeted at specific features where movement is expected. They therefore provide only local information and, crucially, may miss unforeseen movements or events. Quantitative information about mass behaviour usually requires a remote method.

SLOPE DYNAMICS PROJECT

The British Geological Survey has been carrying out coastal monitoring in England using terrestrial LiDAR since 1999 (Hobbs et al., 2002). The Slope Dynamics Project has 12 coastal locations where ‘soft’ rock cliffs are subject to marine erosion and/or landslide activity. These sites are scanned annually or bi-annually (depending on the rate of change) to assess the influence of geology, geomorphology, and geotechnics on the process of cliff recession. Recently, active inland landslides have been included (Rowlands et al., 2003), though these tend to be more unusual and less dynamic than their coastal counterparts. As part of the project, platforms and beaches are included in the scans so that the relationships between wave attack and cliff degradation can be examined. The role of landslides in cliff recession is a topic of some interest in Britain, particularly along the east and south coasts of England, where the rocks making up the cliffs and platforms are comparatively weak and susceptible to erosion and instability, and marine attack is powerful. Huge quantities of sediment liberated from the cliffs are moved along the coast or offshore and re-deposited by the sea. Modelling this action in response to time and environmental conditions is key to understanding the likely effects of climate change. Such models require quantitative information about cycles of sediment supply and the relationship with platform erosion and beach thickness in order to calibrate their predictions.
Terrestrial LiDAR is one way of doing this, albeit on a local scale.

Figure 5 Coastal mobile terrestrial LiDAR method

The method used by the project in the coastal environment involves setting up a baseline on the foreshore, parallel to the cliff, with a tripod at each end (Figure 5). The tripods are alternately occupied by a laser scanner and a dGPS antenna. The scanner fixes the position of the other tripod using either a single shot or an automated micro-scan to a reflective target in place of the dGPS antenna, and then scans the cliff and platform. This may be repeated along the foreshore, or in some cases on the cliff top, in order to get the fullest possible coverage. Where large-scale rotational landslides and large embayments are present, this task may be difficult, particularly where access to the cliff is impossible or unsafe. Coverage of ‘shadow’ areas may be improved by infilling with a roving GPS where access is possible. The scan data consist of xyz position and reflective intensity, with the possible addition of a digital image mosaic provided by a built-in calibrated camera. The final output can be in the form of a ‘point cloud’ (Figure 6), a 2D intensity plot (Figure 7), a 3D point-cloud coloured from the photo-mosaic (Figure 8), or a 3D triangulated ‘solid’ surface model (Figure 9) over which the photo-mosaic has been draped (Figure 10). The coloured point-cloud output (Figure 8) is visually effective where the density of points is high, but the solid model allows greater manipulation and the calculation of areas, volumes, and cross-sections. False-colour models can be utilised to show height (Figure 11) and range (Figure 12). The various uses of these models by the geoscientist are summarised in Table 1.
Table 1 Uses for each model type: the five model types (2D intensity, 3D point-cloud, 3D colour point-cloud, 3D solid, and false-colour) are cross-tabulated against lithology recognition, geomorphology, structural geology, volumes and areas, and cross-sections.

The ‘Scan A’ example shown in Figures 6 to 13 is a 20 m high cliff formed in matrix-dominant Late Devensian tills (Withernsea Till and Skipsea Till Members of the Holderness Formation), from part of the 50 km long Holderness coast of East Yorkshire. Historical erosion rates are between 1 and 2 m per year (Balson et al., 1998). Landslides on this coastline typically consist of single rotational failures and smaller toppling failures. The rotational features tend to develop en echelon, a factor possibly related to sub-vertical joint patterns in the tills. It is clear from the figures depicting Scan A that each model contains gaps or ‘shadow’ areas, which represent areas that the laser was unable to ‘see’. For instance, in Figure 13 a boulder close to the scanner has cast a long laser ‘shadow’ across the beach, thus preventing any points being captured behind it. This can be rectified to some extent by carrying out multiple scans from several positions, each having a different viewpoint on the same object. Then, with the application of the dGPS (or other) 3D model orientation method, these ‘shadows’ can be significantly reduced or eliminated. Of course, this adds considerably to the amount of post-processing required. New dGPS systems are reducing the amount of post-processing by improved real-time processing, for example by mobile telephone communication with a GPS network. However, this may introduce a fresh problem associated with mobile network coverage in remote locations.

PROBLEMS TO CONSIDER

One problem with the triangulated ‘solid’ surface is that uneven coverage of points in the original point cloud results in either gaps in the model (Figure 9) or oversized polygons (Figure 14), depending on the threshold parameters selected.
This is particularly the case where the cliff is receding unevenly from crest to toe, i.e. it has a ‘stepped’ profile, or where landslides are of a rotational type, featuring back-tilted slip masses, and hence are in the laser ‘shadow’ when scanned from the beach. As the laser scanner sweeps the subject from its fixed position it has the attribute of a shotgun; that is, nearby features are densely covered with points compared with distant features. In the case of a largely planar subject such as a building, this may not be a problem. However, for natural features such as cliffs, the result may be wide variation in the surface detail captured, and hence in the integrity of the 3D solid model. In the case of large coastal landslide complexes, the range of the instrument becomes an issue. Many high-speed laser scanners, having a maximum range of typically less than 500 m, might struggle with such features, particularly where access to the cliff to carry out multiple scans is impossible. A common problem encountered during the project has been the inadequacy of PC/software combinations to deal with the millions of points produced by modern scanners, notwithstanding that the scanners used were not classed as ‘high-speed’. This is particularly the case where ‘solid’ models are required. The repeated scanning of the same cliff enables changes in elevation to be displayed and quantified, provided that a solid grid-oriented model for each epoch has been produced.
Figure 6 Part of Scan A showing raw point cloud

Figure 7 Full Scan A – 2D intensity plot

Figure 8 Full Scan A – 3D coloured point-cloud

Figure 9 Full Scan A – 3D triangulated ‘solid’ surface model

Figure 10 Full Scan A – 3D triangulated surface model with digital colour photo overlay

Figure 11 False-colour 2D height model for Scan A (red=low, blue=high)

Figure 12 False-colour 2D range model for Scan A (red=near, blue=far)

Figure 13 Side view of part of Scan A – 3D raw point-cloud (red arrow: boulder casting shadow)

Figure 14 Scan B: triangulated surface model

Figure 15 Scan B: elevation ‘change’ model for part of Scan B (refer to Figure 14) (red=height increase, blue=height decrease)

CHANGE MODELS

The ‘Scan B’ example shown in Figures 14 and 15 is of a cliff up to 50 m high on the North Yorkshire coast, consisting of complex superficial deposits of till and other interlayered glacial deposits overlying folded and weak Speeton Clay Formation mudrocks. The ‘change’ model in Figure 15 shows the elevation difference between two solid 3D models, derived from scans taken one year apart. The resulting annual vertical displacement is coloured proportionately, so that red is maximum ground level rise and blue is maximum ground level fall, though the two are not of equal magnitude. In terms of slope morphology, the change model shows us that a debris flow at the toe of the cliff has accumulated material, the backscarp has lost material, and changes have occurred to the beach levels. Information in the area of the oversized polygons to the rear of the debris flow is probably unreliable. Again, the density and reliability of data are important factors when interpreting these change models (Miller et al., 2006). When considering change models of coastal cliffs it is important to include the foreshore (platform and beach) as part of the same model as the cliff itself.
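An elevation ‘change’ model of the kind described can be approximated by gridding each epoch’s point cloud and differencing the cell means. This is a simplified sketch under assumed conditions (both epochs already oriented to the same grid, a square cell size in metres), not the software used by the project; the function names are hypothetical.

```python
from collections import defaultdict

def grid_mean_z(points, cell=1.0):
    """Average the z values of a point cloud onto a regular x-y grid.
    `points` is an iterable of (x, y, z) tuples in grid co-ordinates;
    returns a dict mapping (column, row) to the mean elevation."""
    sums, counts = defaultdict(float), defaultdict(int)
    for x, y, z in points:
        key = (int(x // cell), int(y // cell))
        sums[key] += z
        counts[key] += 1
    return {k: sums[k] / counts[k] for k in sums}

def change_model(epoch_a, epoch_b, cell=1.0):
    """Per-cell elevation difference (epoch_b minus epoch_a), so
    positive values indicate ground level rise. Cells covered in only
    one epoch are omitted, mirroring the unreliable 'shadow' and
    oversized-polygon areas of the solid models."""
    a = grid_mean_z(epoch_a, cell)
    b = grid_mean_z(epoch_b, cell)
    return {k: b[k] - a[k] for k in a.keys() & b.keys()}
```

Colouring the resulting per-cell values (red for the largest rises, blue for the largest falls) then gives a display equivalent in spirit to Figure 15.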
This is for two reasons. Firstly, the beach is a transient feature which may consist of both transported material and debris derived locally from the landslide; as such, support of the cliff toe and restriction of seepage are also transient features affecting the cliff itself. Secondly, a deep-seated landslide may have its slip surface extending below the level of the foreshore, and hence the foreshore becomes directly involved in the movement. In the long term, the erosion of the platform itself must be considered as part of the model (Walkden & Hall, 2005). The methodology for producing ‘change’ models is very much dependent on the software packages available to the user, and can be achieved in a variety of ways and with variable effectiveness depending on the geometry of the subject. These usually involve more than one package, and possibly as many as four. Issues relating to the robustness of such models have been addressed by Miller et al. (2008).

CONCLUSIONS

In order to correctly interpret the terrestrial LiDAR change models described, a combination of models should be considered. These could include the elevation and range models (Figures 11 & 12), which can have their own change models derived so that the vertical and horizontal components of movement can be resolved. In its simplest form, a rock fall from a vertical cliff face represents a loss of material from the cliff face and an accretion of debris on the beach. However, a similar fall from the crest of an inclined cliff will result in accretion at mid-cliff. This might appear from an elevation change model alone as if caused by an uplift of strata, as for example at the toe of a rotational landslide, rather than a deposition of fallen debris. Subtle precursor processes such as the opening of joints may produce apparent ‘accretion’ of the cliff face prior to failure and ultimate recession. Such small movements may or may not be resolved by the scan, depending on the method and equipment used.
The basic 2D intensity model (Figure 7) is useful in distinguishing textures. This has a greater applicability to man-made structures and materials (e.g. concrete, metal, brick), but can still be useful for distinguishing the lithologies of strongly contrasting geomaterials. Laser scan models enhanced by calibrated digital photography (Figures 8 & 10) are a useful resource for the geoscientist, as the textures and colours greatly enhance the interpretation of lithology, structure, and geomorphology. This is of course further enhanced by the 3D capability, whereby the true geometry of coastal landslide and erosion features can be appreciated. Solid 3D models allow volumes and areas to be calculated either in relation to a planar datum or to a previous model. Thus, displaced volumes may be calculated and displaced masses estimated.

REFERENCES

Adams, J.C., Smith, M.J. and Bingley, R.M. (2003) Development and integration of terrestrial cliff-face mapping techniques with regional coastal monitoring. In: Proc. Ann. Conf. Remote Sensing & Photogrammetry Society, RSPSoc., pp8.

Allersma, H.G.B. (2000) Measuring of slope surface displacement by using digital image processing. In: Landslides in Research, Theory, and Practice. Eds: Bromhead, E., Dixon, N., Ibsen, M-L. Thomas Telford, London. Vol. 1, pp37-44.

Balson, P.S., Tragheim, D., Newsham, R. (1998) Determination and prediction of sediment yields from recession of the Holderness coast, Eastern England. Proceedings of the 33rd MAFF Conference of River and Coastal Engineers; London, Ministry of Agriculture, Fisheries and Food 1998, pp4.5.1-4.6.2.

Hall, J.W., Meadowcroft, I.C., Lee, E.M., and Van Gelder, P.H.A.J.M. (2002) Stochastic simulation of episodic soft coastal cliff recession. Coastal Engineering, 46(3), pp159-174.

Hobbs, P.R.N., Humphreys, B., Rees, J.G., Tragheim, D.G., Jones, L.D., Gibson, A., Rowlands, K., Hunter, G., and Airey, R. (2002) Monitoring the role of landslides in ‘soft cliff’ coastal recession.
In: Instability: Planning and Management. Eds: R.G. McInnes & J. Jakeways. Thomas Telford, London. pp589-600.

Hiatt, M.E. (2002) Sensor Integration Aids Mapping at Ground Zero. Photogrammetric Engineering and Remote Sensing, 68, pp877-879.

Hunter, G., Pinkerton, H., Airey, R. & Calvari, S. (2003) The application of a long-range laser scanner for monitoring volcanic activity on Mount Etna. Journal of Volcanology and Geothermal Research, 123, pp203-210.

Miller, P.E., Mills, J.P., Edwards, S.J., Bryan, P., Marsh, S. and Hobbs, P. (2006) Integrated remote monitoring for coastal geohazards and heritage sites. Proceedings of the ASPRS 2006 Annual Conference, May 1-6 2006, Reno, Nevada. On CD-ROM: 11 pages.

Miller, P.E., Mills, J.P., Edwards, S.J., Bryan, P.G., Marsh, S.H., Hobbs, P. and Mitchell, H. (2008) A robust surface matching technique for coastal geohazard monitoring. ISPRS Journal of Photogrammetry and Remote Sensing. Accepted for publication 18/2/08.

Pasuto, A., Silvano, S., and Berlasso, G. (2000) Application of time-domain reflectometry (TDR) technique in monitoring the Pramollo Pass landslide (Province of Udine, Italy). In: Landslides in Research, Theory, and Practice. Eds: Bromhead, E., Dixon, N., Ibsen, M-L. Thomas Telford, London. Vol. 3, pp1189-1194.

Paul, F. & Iwan, P. (2001) Data Collection at Major Incident Scenes using Three Dimensional Laser Scanning Techniques. In: The Institute of Traffic Accident Investigators: 5th International Conference held at York. York.

Poulton, C.V.L., Lee, J.R., Hobbs, P.R.N., Jones, L., & Hall, M. (2006) Preliminary investigation into monitoring coastal erosion using terrestrial laser scanning: case study at Happisburgh, Norfolk. Bull. Geol. Soc. Norfolk, 56, pp45-64.

Rowlands, K., Jones, L. & Whitworth, M. (2003) Photographic Feature: Landslide laser scanning: a new look at an old problem. Quarterly Journal of Engineering Geology, 36, pp155-158.

Webster, T.L. and Dias, G.
(2006) An automated GIS procedure for comparing GPS and proximal LIDAR elevations. Computers & Geoscience, 32, pp713-726.

Walkden, M.J., and Hall, J.W. (2005) A predictive mesoscale model of the erosion and profile development of soft rock shores. Coastal Engineering, 52, pp535-563.

work_suxzaofxk5c5pc4lkndibn67z4 ----

Article received: 23 December 2013
Revised version: 10 January 2014
Accepted version: 5 February 2014
UDC: 778:004

Jelena Matić
independent art and media theorist
j.matic05@gmail.com

ART+MEDIA | Journal of Art and Media Studies, No. 5, April 2014

Two Considerations of Digital Photography: Martha Rosler and Fred Ritchin

Abstract: The advent of photography was one of the most important discoveries of the society and culture of the 19th century. At the end of the 20th and the beginning of the 21st century, that role belongs to digital technology. One of the consequences of the transformation of analogue into digital photography is the emergence of new debates and theories about the nature of the medium. While one group of contemporary theorists holds that this technical-technological change means nothing other than the death of photography, other authors claim the opposite. In this paper I attempt to point to two views of and considerations on photography, those of Martha Rosler and Fred Ritchin.

Keywords: photography, theory, digital, manipulation, documentarity, objectivity

In the catalogue text for the exhibition “Pictures”, held at Artists Space in New York in 1977, Douglas Crimp made the following observation: “Our experience is dominated by pictures, pictures in newspapers and magazines, on television and in the cinema... While it once seemed that pictures had a function in interpreting reality, it now seems that they have usurped it.”1 According to Victor Burgin, photography balances between the moving and the still image.
„Fotografija deli statičnost slike sa slikarstvom, kameru sa filmom, pretenduje da bude smeštena između ova dva medija, ali se nalazi na potpuno drugačijem putu od njih.“2 Drugim rečima, film za prezentaciju zahteva bioskop, a klasični mediji (slika, skulptura, grafika) galeriju ili muzej. Foto- grafija takvih zahteva nema jer može biti prezentovana svuda i na bilo koji način. Upravo ta njena specifična priroda obezbedila joj je mogućnost da se, za razliku od bilo kog drugog oblika vizuelnog izražavanja ili fenomena, može analizirati i razmatrati sa više različitih teorijskih stanovišta. Teze Daglasa Krimpa i Viktora Burgina, kao i teorija Valtera Benjamina (Walter Benjamin)3 najviše se potvrđuju danas. U kulturi trećeg milenijuma digitalni fotoaparati, mobilni telefoni ili Ajpodovi (IPod) i Ajpadovi (IPad), novi tipovi kompjutera i skenera i druge tehnološke igračke 1 Douglas Crimp, “Pictures”, October, 1978, vol. 8, 55–58, cit. u: Mary Warner Marien, Photography – A Cultural History, London, Laurence King Publishing, 2002, 424. 2 Victor Burgin, “Looking at Photographs”, u: Victor Burgin (ed.), Thinking Photography, London, Macmillan Edu- cation, 1982, 142–143. 3 Cf. Valter Benjamin, „Umetničko delo u razdoblju njegove tehničke reproduktivnosti“, u: Valter Benjamin, O fotografiji i umetnosti, Beograd, Kulturni centar Beograd, 2006, 98–130. ART+MEDIA | Časopis za studije umetnosti i medija / Journal of Art and Media Studies Broj 5, april 2014 60 postale su nezaobilazni detalji svakidašnjice. Naš sistem funkcionisanja i percepcije sveta nije više isti. Sve do pojave digitalne tehnologije termin novi mediji podrazumevao je fotografiju, film i video, a sada internet, digitalnu fotografiju, veb dizajn. Ovi novi mediji proširili su naš rečnik novom terminologijom, dok su institucije poput arhiva, galerija, biblioteka, muzeja, dobili svoje naslednike u vidu internet arhiva, galerija i muzeja. 
A, ukoliko se bolje sagleda istorija fotografije, čitav efekat koji je proizvela pojava ovih novih medija u našem društvu i kulturi umnogome je sličan efektu koji je fotografija imala pre skoro sto osamdeset godina. Opet smo fascinirani nečim novim, bržim lakšim i jednostavnijim. Prema mišljenju Larsa Kila Bertelsena (Lars Kiel Bertel- sen): „Digitalizacija je opisana kao tehnološka revolucija istog kalibra kao što je otkriće fotografije na početku 19. veka (…).“4 Pozivajući se na teorije fotografije Andrea Bazena (André Bazin), Rolana Barta (Roland Bar- thes) i Susane Sontag (San Sontag), Vilijam Mičel (William J. Mitchell) zastupa tezu o kraju karte- zijanskoj sna, jer su sa pojavama digitalne tehnologije i slike dramatično izmenjena pravila igre.5 Kao jedan od glavnih razloga Mičel je naveo da se sa novom tehnologijom, slike preuzete iz razli- čitih izvora, putem kompjutera i fotošopa, mogu lakše, brže i jednostavnije montirati, lako pre- zentovati, a takve manipulacije se teško mogu otkriti. „Razlike između uzročnih procesa kamere i namernih procesa autora ne mogu se više smatrati tako poverljivim i kategoričkim.“6 Za razliku od klasične, digitalna fotografija je daleko komplikovanija jer ne postoji jedinstveni negativ. Bez- granični broj displejeva i printova se može stvoriti iz samo jedne kopije. Pri tom, originalni fajl može za veoma kratko vreme biti uništen, a mnogi njegovi naslednici nastavljaju da žive. Konač- no, sa digitalnom tehnologijom „napušteno je zlatno pravilo – slike se ne mogu više uzimati kao garancije vizuelne istine, niti kao označitelj sa stabilnim značenjem i vrednošću (…).“7 Nasuprot Mičelu, Marta Rosler (Martha Rosler) se nije fokusirala samo na razliku između di- gitalne i manuelne fotografije, već i na fotografsku istinu. Ovom problematikom, na primerima dokumentarne fotografije i fotožurnalizma, Roslerova je teorijski i praktično počela da se bavi to- kom sedamdestih godina prošlog veka. 
U projektu „Prodavnica u dva neadekvatna opisna sistema“ (The Bowery in Two Inadeguate Descriptive Systems, 1974–1975) kroz crno-bele fotografije izloga njujorških bakalnica i određenim tekstom, Roslerova je kritikovala prezentaciju u dokumentarnoj fotografiji između dva svetska rata; konkretno, fotografije dece radnika Luisa Hajna (Lewis Hine) i skitnica, beskućnika i sirotinjskih kvartova Jakoba Rijsa (Jacob Ris). „Kao kontrast čistoj senzacio- nalnosti koja proizilazi iz novinarskog interesovanja, za radničku klasu, imigrante i sirotinjski život, težnja za poboljšanjem kod Rijsa, Luisa Hajna i ostalih uključenih u propagiranje društvenog rada, bila je, kroz prezentaciju slika i drugih oblika diskursa, usmerena ka ispravljanju nepravdi. Oni nisu opažali ove nepravde kao fundamentalne za društveni sistem koji ih je tolerisao – pretpostavka da su one bile samo tolerisane, a ne izazvane, predstavlja osnovnu zabludu društvenog rada.“8 Kada je u pitanju digitalna fotografija, kritička razmatranja o mogućnostima fotografske manipulacije po Roslerovoj „zvone posmrtno zvono ‘istini’“9. Manipulacija fotografskom slikom prisutna je još u 4 Lars Kiel Bertelsen, “It’s Only a Paper Moon...”, u: Lars Kiel Bertelsen, Rune Gade, Mett Sandbye (eds.), Symbolic Imprints: Essays on Photography and Visual Culture, Aarhus University Press, 1999, 90. 5 William J. Mitchell, Reconfigured Eye: Visual Truth in The Post-Photography era, www.stanford.edu/class/hi- story34q/readings/Mitchell/MitchellIntention. Sajtu pristupila 5.03.2014. 6 Ibid. 7 Ibid. 8 Martha Rosler, “In, Around and Afterthoughts (On Documentary Photography)”, u: Decoys and Disruptions: Selected Writings 1975–2001, Cambridge MA–London–New York, The MIT Press–International Center of Photo- graphy, 2004, 177. 9 Martha Rosler, “Image Simulation, Computer Manipulation”, u: Decoys and Disruptions: Selected Writings 1975– 2001, Cambridge MA–London–New York, The MIT Press–International Center of Photography, 2004, 262. 
61 ISTRAŽIVANJA / RESEARCH | ART+MEDIA radovima Oskara Rejlandera (Oscar Rejlander).10 Takođe, manipulisanje kroz fotomontažu ili pre- klapanje negativa, javlja se i u praksi njegovog kolege Henrija Piča Robinsona (Henry Peach Robin- son). Možemo otići dalje i kao primer navesti dramatične pejzaže Gistava L Greja (Gustave Le Gray) koje su nastale preklapanjem dva negativa (npr., snimak neba sa oblacima i more) i „Autoportret kao utopljenik“ (Self portrait as a Drowned Man, 1839) Ipolita Bajara (Hippolyte Bayard), koja se smatra najranijim primerom manipulacije. U skladu sa ovim, i retuširane ili naknadno kolorisane fotografije se takođe mogu posmatrati kao manipulacije. S druge strane, Marta Rosler ističe da u istoriji fotografije nailazimo na primere gde manipulacija kolažno-montažnim tehnikama nije uvek u službi prikrivanja istine. „Ukoliko hoćemo da se pozovemo na optimističnije i pozitivnije primere manipulisanja slikama, moramo da izaberemo slike u kojima je manipulacija očigledna i to ne samo kao forma umetničke refleksije, već i kao forma koja šire ukazuje na kvalitet istine na fotografijama i ističe iluzionističke efekte na površini (ili čak definiciji) ‘stvarnosti’.“11 Za ovu autorku, dobar primer predstavljaju političke fotomontaže berlinskih Dadaista. Isto tako, manipulacija se ne mora povezivati sa pomenutim tehnikama. Identifikacija fotogra- fije sa objektivnošću, kako tvrdi Roslerova, „zapravo je moderna ideja“12. U nizu primera Rosle- rova navodi pojedine fotografije građanskih ratova u Americi (Aleksandra Gardnera /Alexander Gardner/) ili Španiji (Roberta Kape /Robert Capa/), koje jesu manipulacije, a da pri tom ni na pozitivu ili negativu nije vršena nikakva naknadna intervencija. U slučaju Gardnerove fotografije palog vojnika u rovu nije snimak zatečenog, već nameštenog stanja. 
Pored ovih, Roslerova će kao primer navesti i uređivačku politiku i praksu novina USA Today, koja je svojim fotografijama, između ostalog, savetovala čitaocima koji film da koriste u određenim situacijama, što je takođe neka vrsta manipulacije.13 Njen zaključak je da značenja fotografija, bez obzira da li su one nastale manuelnim ili digitalnim putem, ne zavise od tehnologije koja se koristi u njihovoj produkciji već od ideologije i tradicije.14 Sa druge strane, teorija Freda Ričina (Fred Ritchin) balansira između utopijsko-humanistič- kog stava bliskog Mičelu i anti utopijskog stava Marte Rosler. U skladu sa ovom tvrdnjom su nekoliko Ričinovih ključnih eseja i knjiga: „U sopstvenoj slici: dolazeća revolucija u fotografiji“ (In Our Own Image: The Coming Revolution in Photography, 1990), „Kritična slika“ (The Criti- cal Image, 1990) i „Posle fotografije“ (After Photography, 2009) u kojima se pitanjima digitalne fotografije autor bavi isključivo na primerima fotožurnalizma. Primera radi, Ričin ukazuje da digitalna tehnologija gotovo u potpunosti isključuje i fotografa i subjekta jer su oni zamenjivi.15 Ovakve tvrdnje su bliske ne samo Mičelovim stavovima već i nekim najranijim tezama o fotogra- fiji, kao što je ona da otkriće fotografije ne znači ništa drugo do smrt slikarstva, te da fotografija nije umetnost jer nastaje posredstvom fotoaparata. Fotografija je postala umetnost, slikarstvo nije umrlo. Drugim rečima, pojava digitalne fotografije ne znači nestanak fotografa i subjekta. Kao i Marta Rosler, Fred Ričin ne smatra da je digitalna fotografija radikalno nova jer je dosta toga preuzela od tradicionalne, analogne fotografije.16 Sa druge strane, iako kao Roslerova misli da se fotografijom manje-više manipuliše od njenog nastanka, Ričin smatra da sve te prethodne manipulacije nisu prelazile granicu morala.17 Nova digitalna tehnologija, prema Ričinovom mi- 10 Ibid. 263. 11 Ibid. 279. 12 Ibid. 264. 13 Ibid. 264–275. 14 Ibid. 15 Cf. 
Fred Ritchin, After Photography, New York, W.W. Northon & Company Inc, 2009, 27. 16 Cf. Ibid. 20. 17 Cf. Fred Ritchin, In Our Own Image: The Coming Revolution in Photography, New York, Aperture, 1990, 29. ART+MEDIA | Časopis za studije umetnosti i medija / Journal of Art and Media Studies Broj 5, april 2014 62 šljenju, omogućava, pre svega, urednicima bilo kojih novina ili časopisa da imaju apsolutnu vizu- elnu kontrolu nad fotografskom slikom i da na jedan mnogo suptilniji način promene značenje fotografije, značenje koje je njen autor imao na umu. Kada je u pitanju fotožurnalizam, vrednost fotografija ovog tipa, kako Ričin tvrdi, treba da se zasniva na savesti i reputaciji samog fotografa.18 Prema Viktoru Burginu, svaka slika je označitelj. Njeno značenje je različito kada je artikuli- sano sa drugim složenim diskurzivnim formacijama u određenom trenutku, društvu, kulturi.19 Iako je nastala, a zatim i tretirana kao objektivan mediji, fotografija nije, niti treba da bude objek- tivna. Drugim rečima, nije važno da li je fotografija nastala digitalnim ili manuelnim, odnosno, analognim putem. Jer, tehnologija će se konstantno menjati i usavršavati i ona nema veze sa objektivnošću, značenjem i percepcijom fotografije. U eri simulacija, kada je postalo jasno da je realnost odavno i bespovratno izgubljena, analiza, tumačenje i shvatanje fotografije, bez obzira da li je dokumentarna, reklamna ili umetnička, krije se u posmatraču i u interdisciplinarnosti, kao bitnom i odlučujućem činiocu teorije i kritike. Literatura: – Bertelsen, Lars Kiel, “It’s Only a Paper Moon...”, u: Bertelsen, Lars Kiel, Rune Gade, Mett Sandbye (eds.), Symbolic Imprints: Essays on Photography and Visual Culture, Aarhus Uni- versity Press, 1999, 88–107. – Burgin, Victor, “Looking at photographs”, u: Burgin, Victor (ed.), Thinking photography, London, Macmillan Education, 1982, 142–152. 
– Burgin, Victor, The End of Art Theory, Criticism and Postmodernity, New York, Humanite Press International, 1987. – Marien, Warner Mary, Photography – A Cultural History, London, Laurence King Publis- hing, 2002. – Mitchell, William, Reconfigured Eye: Visual Truth in The Post – Photography era, www. stanford.edu/class/history34q/readings/Mitchell/MitchellIntention. – Ritchin, Fred, “The Critical Image”, u: Squires, C., Photojournalism in the Age of Compu- ters, Seatle, Bay Press, 1990, 28–37. – Ritchin, Fred, In Our Own Image: The Coming Revolution in Photography, New York, Aperture, 1990. – Ritchin, Fred, After Photography, New York, W.W.Northon & Company Inc, 2009. – Rosler, Martha, “In, Around and Afterthoughts (On Documentary Photography)”, u: De- coys and Disruptions: Selected Writings 1975–2001, Cambridge MA–London–New York, The MIT Press–International Center of Photography, 2004, 152–206. – Rosler, Martha, “Image Simulation, Computer Manipulation”, u: Decoys and Disruptions: Selected Writings 1975–2001, Cambridge MA–London–New York, The MIT Press–Inter- national Center of Photography, 2004, 259–317. 18 Cf. Fred Ritchin, “The Critical Image”, u: C. Squires, Photojournalism in the Age of Computers, Seatle, Bay Press, 1990, 29. 19 Victor Burgin, The End of Art Theory, Criticism and Postmodernity, New York, Humanite Press International, 1987, 100. 63 ISTRAŽIVANJA / RESEARCH | ART+MEDIA Two Considerations of Digital Photography: Martha Rosler and Fred Ritchin Summary: The advent of photography was one of the most important discoveries in the culture and the society of 19th century. At the end of the 20th and begining of the 21st century this role belong to digital technology. One of the consequences this transformation of the analog to digital photography are new discussion and theories about the nature of the media. 
While one group of contemporary theorists argues that this technical-technological change means nothing else but the death of photography as we know it, the other claims the opposite. In this paper, I will try to pinpoint two views of digital photography: those of Martha Rosler and Fred Ritchin. Keywords: photography, theory, digital, manipulation, documentarity, objectivity

----

Color Variation among Habitat Types in the Spiny Softshell Turtles (Trionychidae: Apalone) of Cuatrociénegas, Coahuila, Mexico

SUZANNE E. MCGAUGH
Department of Ecology, Evolution, and Organismal Biology, Iowa State University, Ames, Iowa 50011, USA; E-mail: smcgaugh@iastate.edu

ABSTRACT.—Ground coloration is highly variable in many reptile species. In turtles, ground color may correspond well to the background coloration of the environment and can change over time to match new surroundings in the laboratory. Variable carapace and plastron coloration across three habitat types was investigated in the Black Softshell Turtle, Apalone spinifera atra, by measuring individual components of the RGB (Red, Green, Blue) color system. In general, A. s. atra carapaces were darker in turtles from lagoons than in turtles from playa lakes. Red and green values were significantly different among all pairs of habitat types, but blue values differed only between the playa lakes and lagoons. Mean color components (RG only) for each population were significantly correlated with corresponding values for the bottom substrate, indicating a positive association of carapace and habitat substrate color components. In contrast, plastron ground color RGB channels showed no significant differences between habitat types and no significant correlations with substrate RGB. These results suggest that dorsal background matching in A. s. atra may be responsible for some of the variation in this key taxonomic trait.
The color of an organism is an important component of many aspects of an organism's biology and is often used as a taxonomic character (Endler, 1990; Brodie and Janzen, 1995; Darst and Cummings, 2006). Yet, an animal's coloration may be plastic and can depend on many different biotic and abiotic factors (Endler, 1990; Bennett et al., 1994). For example, overall habitat irradiance is a common stimulus for physiological color change in reptiles, presumably as a means to become more cryptic (Norris and Lowe, 1964; Rosenblum, 2005; Rowe et al., 2006a). This physiological response occurs rapidly and uses hormonal signals to expand or contract melanophores in the dermal layer of skin (Bartley, 1971). Interestingly, laboratory tests indicate that the shells and skin of turtles, too, can change color to more closely match dark or light backgrounds, although this change occurs over weeks or months (Woolley, 1957; Bartley, 1971; Rowe et al., 2006b). Cuatrociénegas in Coahuila, Mexico, offers a unique ecosystem in which to observe natural variation in shell color of turtles across three very different aquatic habitat types (Winokur, 1968). The region's endemic and endangered Softshell Turtle, Apalone spinifera atra (atra meaning black or dark; Webb and Legler, 1960; Lovich et al., 1990; Fritz and Havaš, 2006), is taxonomically defined by its dark pigmentation, and the validity of A. s. atra as a full species has been challenged (Smith and Smith, 1979). Hatchlings of A. s. atra cannot be differentiated from a lighter conspecific, Apalone spinifera emoryi (Winokur, 1968), but adults show marked differences in coloration across habitats (this study), which could be a result of genetically based ontogenetic pigmentation variation among habitats or phenotypic plasticity in response to substrate color variation.
Apalone spinifera atra was originally described as an isolated species (Webb and Legler, 1960), but canal building in the late 1800s was thought to have opened the basin hydrologically, resulting in opportunities for hybridization between A. s. atra and A. s. emoryi (Smith and Smith, 1979; Ernst and Barbour, 1992). A recent genetic evaluation found no differentiation between morphologically identified A. s. atra individuals and morphologically identified A. s. emoryi individuals within and outside the basin (McGaugh and Janzen, in press). However, substantial coloration differences exist among habitats, and this phenotypic variation was left unexplained by the genetic study. Examination of the color variation of the softshell turtles across the basin, in relation to background coloration of habitats (Woolley, 1957; Bartley, 1971; Rowe et al., 2006b), may provide an important morphological perspective on A. s. atra's refuted species delimitation (McGaugh and Janzen, in press). For simplicity, and because haplotypes in the genetic study showed no morphological (i.e., light and dark turtles shared mitochondrial and nuclear DNA haplotypes) or geographic grouping, the name A. s. atra is used to refer to all Apalone in this study, even though A. s. emoryi morphs are present in the basin (Smith and Smith, 1979; McGaugh and Janzen, in press). In this study, I evaluated the hypothesis that background matching could be responsible for the observed color variation across habitats in A. s. atra. To evaluate this hypothesis, coloration of the animals' carapace and plastron and locality substrate was measured. Background matching was expected to be probable if carapace, but not plastron, coloration was correlated to locality substrate coloration.

Journal of Herpetology, Vol. 42, No. 2, pp. 347–353, 2008. Copyright 2008 Society for the Study of Amphibians and Reptiles.
Color was measured with digital photographs using the RGB system (red, green, blue; Stevens et al., 2007), the components of which correspond to broad bands of longwave (red), mediumwave (green), and shortwave (blue) light (Stevens et al., 2007). Possible influences of sex, size, and their interactions were also evaluated. MATERIALS AND METHODS Study Site and Field Methodology.—Three main habitat types, playa lakes (barrial lakes), lagoons, and a river, were investigated within El Área de Protección de Flora y Fauna Cuatrociénegas, Coahuila, Mexico. This area of high endemism contains diverse habitats, including hundreds of small water bodies nestled within the Chihuahuan desert (Minckley, 1969; Meyer, 1973). Playa lakes are large, shallow lakes (<1 m deep) with sparse vegetation, large daily temperature fluctuations, and relatively high mineral content (Minckley, 1969). Lagoons are typically deeper lakes (∼1–10 m) with abundant vegetation, including waterlilies (Nymphaea), muskgrass (Chara), pondweeds (Potamogeton), and cattails (Typha), and relatively constant temperatures (Minckley, 1969). Rivers are flowing channels with steep banks that are up to 2.5 m deep and have vegetation such as Nymphaea, Chara, Potamogeton, Typha, bladderworts (Utricularia), and sedge (Eleocharis) where the current is slow (Minckley, 1969). Most aquatic habitats in Cuatrociénegas have clear water with a visible bottom. Six localities were examined for A. s. atra, and localities with the same habitat types (e.g., lagoons) were combined in statistical analysis except for the Pearson's correlation analysis (described below). All habitats used in this study are stable bodies of water of unknown geologic age. No aboveground aquatic connections are known between the localities used in this study.
Turtle dispersal between sites may not be frequent because sites are separated by distances (5.53–31.58 km) greater than other species of softshells typically travel terrestrially (Galois et al., 2002). Sampling for shell coloration included all five drainages of the basin (Evans, 2005) and spanned 30 days of trapping (4 June to 5 July 2004). Turtles were captured in lobster or hoop traps baited with sardines. Each A. s. atra was tattooed with a unique pattern on the plastron. Plastron length was measured with dial calipers to the nearest millimeter, and sex was determined by tail length (Webb and Legler, 1960), with males identified by a much longer tail than females relative to their body size. All A. s. atra had plastron lengths over 73 mm. Hatchling and juvenile turtles were not included in the analysis. Sixty total individuals were sampled (Lagoon 1: N = 16; Lagoon 2: N = 6; Lagoon 3: N = 13; River: N = 10; Playa lake 1: N = 10; Playa lake 2: N = 5). Digital Photography and Analysis.—Digital photographs were taken with a Canon PowerShot G5 that was positioned directly above the turtle. Color component measurements from digital systems have been shown to be positively correlated with values from spectrometry, and digital photography has many advantages over spectrometry for data acquisition in the field (Rowe et al., 2006a; Stevens et al., 2007). A level on the tripod ensured that the camera lens was parallel to the ground. The lens-to-animal distance measure was not taken. Photos were taken out of direct sunlight and with the flash on and white balance off. Each photo was taken with the highest resolution possible for the camera (2,592 × 1,944 pixels) and converted to JPEG at the highest quality compression level available on the camera. Digital quantification of color was performed using Jasc Paint Shop Pro 9 (Corel, Eden Prairie, MN) by viewing the untransformed values of R, G, and B in the diagnostic histogram.
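The swath-based channel quantification just described can be sketched programmatically. This is only an illustrative stand-in for the Jasc Paint Shop Pro histogram workflow; the `mean_rgb` function and the tiny example swath are hypothetical.

```python
# Illustrative sketch of per-channel quantification over a pixel swath.
# A swath is represented as a list of (R, G, B) tuples; the study's actual
# workflow read these values from the Paint Shop Pro histogram instead.

def mean_rgb(swath):
    """Return the mean of each channel (R, G, B) over a list of pixels."""
    n = len(swath)
    r = sum(p[0] for p in swath) / n
    g = sum(p[1] for p in swath) / n
    b = sum(p[2] for p in swath) / n
    return r, g, b

# A made-up 4-pixel swath from a dark carapace region
swath = [(60, 70, 80), (62, 72, 78), (58, 68, 82), (60, 70, 80)]
print(mean_rgb(swath))  # (60.0, 70.0, 80.0)
```

In practice the swath would hold at least 100 pixels, as specified in the methods above.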
The HSL (Hue, Saturation, Lightness) system was not used in the analysis because this system may be inaccurate (Stevens et al., 2007). In Jasc Paint Shop Pro 9, the histogram displays a distribution graph of the separate channels of color in an image and allows the user to analyze the distribution. A swath of at least 100 square pixels was selected from the right middle portion of the carapace and the plastron of A. s. atra. All photos were analyzed at a resolution of 70.866 pixels per cm. Therefore, each A. s. atra sample was ≥14 mm². Some A. s. atra had a blotched pattern on their carapaces, and these were included in the analysis. Otherwise, their pigmentation was generally uniform. Care was taken during analysis only to sample areas with a clear view of the carapace (i.e., flash glares, algae growth, mineral deposits, or bite marks on the animal were excluded from the swath). Intraindividual color variability was not measured because of these various obstructions. A grey color standard (paint swatch EE2054C from Lowe's Home Improvement Warehouse) was placed in each photo. Digital quantification of the paint swatch was achieved by the same method described for the turtles. Associations between turtle RGB-values and paint swatch RGB-values were strong (r > 0.33 and P < 0.002 for RB-values of carapace and RGB-values of plastron, but the correlation coefficient of paint swatch G-values and carapace G-values was not significant [r > 0.23, P < 0.08]); therefore, substantial light variation across photographs occurred. Thus, standardizing the RGB values from the turtles by the paint swatch was necessary, and RGB-values from the paint swatch were used as a covariate in the statistical analyses. Some of the inconsistencies of variable ambient field conditions and camera biases can be accounted for by providing a common color swatch in each picture (J. A. Endler, pers. comm.; M. Stevens, pers. comm.; but for a detailed review, see Stevens et al., 2007).
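The idea of a per-photo grey reference can be illustrated with a simple ratio standardization. Note that this is not what the study did (the swatch values entered the ANCOVA as a covariate instead); the `standardize` function and the nominal grey value below are assumptions for illustration only.

```python
# Hypothetical ratio standardization against a grey swatch photographed in
# the same frame. If the swatch appears brighter than its nominal grey value,
# the frame was over-lit and the turtle's channels are scaled down accordingly.

def standardize(turtle_rgb, swatch_rgb, swatch_nominal=(128, 128, 128)):
    """Rescale each turtle channel by the swatch's deviation from nominal."""
    return tuple(
        t * (nom / s)
        for t, s, nom in zip(turtle_rgb, swatch_rgb, swatch_nominal)
    )

# Swatch photographed at 160 instead of a nominal 128: scale factor 0.8
print(standardize((60, 70, 80), (160, 160, 160)))  # (48.0, 56.0, 64.0)
```

The covariate approach the authors actually used lets the model estimate, rather than impose, how swatch brightness relates to the measured turtle values.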
However, this technique does not remove inconsistencies associated with nonuniform brightness across the photograph (J. A. Endler, pers. comm.; Stevens et al., 2007). To reduce the effect of these inconsistencies on the overall analysis, the animal was consistently placed so that a landscape photo was taken with the posterior of the animal on the left side of the photograph and the snout on the right side. Finally, because only one grey color standard was used, no linearization (also called gamma correction) could be achieved; consequently, dark objects may be estimated as lighter than they are, and light objects may be estimated as darker than they actually are (see Stevens et al., 2007: fig. 6). Fortunately, any such biases render the comparisons in this study conservative. Past studies that have evaluated the relationship between carapace color and habitat type have qualitatively described the habitats as "dark" or "light" bottomed (e.g., Rowe et al., 2006a). In this study, lagoons are dark bottomed, playa lakes are light bottomed, and the river was intermediate. To provide a more quantitative measure of this habitat descriptor, one wet substrate sample of approximately 200 ml in volume was taken from the bottom of each site and was photographed and measured for RGB-values in the same manner as the turtles. Substrate color at each site appeared relatively uniform, although this assumption was not explicitly tested. Vegetation was not included. However, lagoons were the only aquatic habitats with substantial submerged vegetation, and bare substrate makes up large portions of the lagoon bottom (Webb and Legler, 1960). Statistical Analysis.—To test for habitat structuring in color components, data were transformed for normality (verified through Shapiro-Wilk tests). All carapace components were log transformed, and all paint swatch components from the carapace photographs were raised to the three-quarters power to achieve normality.
Plastron R and paint swatch R from the plastron photographs were raised to the three-quarters power to achieve normality, and the other plastron components and paint swatch components from plastron photographs were normal (all verified through Shapiro-Wilk tests). Each response variable (R, G, or B from plastron or carapace) was analyzed with analysis of covariance (ANCOVA), using the respective paint swatch component as the covariate and plastron length, sex, habitat type, and interaction terms between all combinations of these factors as fixed factors. All factors and interaction terms that were not significant were removed from the model, and the ANCOVA was rerun until only significant factors remained. The only significant interaction that was detected was between habitat type, sex, and plastron length for the blue carapace component (F2,48 = 6.17, P < 0.0038). To incorporate this interaction term in the model for the blue carapace color channel, all nonsignificant factors and lower-order interaction terms were left in this model. Sex was not a significant factor for plastron or carapace RGB-values (P > 0.09 for all ANCOVAs) and, thus, was removed from all models, except the blue carapace component where it was previously explained to be important for a significant interaction term. Plastron length was not a significant factor for carapace color components (P > 0.22, F1,48 < 0.151 for all ANCOVAs) but did show a significant, or nearly significant, positive association with plastron color components (R: F1,54 = 2.82, P < 0.10; G: F1,54 = 13.86, P < 0.0005; B: F1,54 = 10.36, P < 0.00022). Therefore, for carapace color components, the model consisted only of the color standard as a covariate and habitat type as a factor. However, for the blue color component as mentioned above and for plastron color components, the model consisted only of the color standard as a covariate and plastron length and habitat type as factors.
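The normality transformations feeding these models can be written out directly. The channel values below are invented, and the logarithm base (10 here) is an assumption, as the paper does not state which base was used (the Table 1 LS means of roughly 1.8 to 2.2 are at least consistent with base 10 for 0-255 channel values).

```python
# Sketch of the two transformations described above: log for the carapace
# channels, three-quarters power for the colour-standard channels.
from math import log10

carapace_R = [69.2, 78.0, 187.2]   # hypothetical raw channel means (0-255)
swatch_R = [120.0, 130.0, 125.0]   # hypothetical colour-standard means

log_carapace = [log10(v) for v in carapace_R]   # log transform
power_swatch = [v ** 0.75 for v in swatch_R]    # three-quarters power

print([round(v, 3) for v in log_carapace])
print([round(v, 2) for v in power_swatch])
```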
This statistical analysis resulted in some of the models being inconsistent with others. Several additional analyses were done to ensure that results were not an artifact of the model. The blue color component and plastron components were analyzed using the most basic model for the other two carapace components (e.g., color standard as covariate and habitat type only), and overall results did not change. The alternative strategy, including all nonsignificant factors in the analysis of carapace red and green components, resulted in important differences across pairwise habitat types being missed because the model was unnecessarily overparameterized with nonsignificant terms. Least-squares (LS) means t-tests, a method used to assess the significance of differences among the best linear-unbiased estimates of the habitat means for the model design, were used to determine which habitat types were significantly different for certain color components. Among-site substrate samples were compared using LS means t-tests. Finally, Pearson's product-moment correlations were used to assess the relationship between LS mean for carapace and substrate sample RGB-values and plastron and substrate sample RGB-values for each locality. All statistics were performed in R 2.4.0 (R Development Core Team, 2006) and JMP 6.0.2 (SAS Institute Inc., Cary, NC, 2006). RESULTS Carapace and Plastron Color Variation.—For A. s. atra, there were significant differences among habitat types for all carapace color components (R: F2,59 = 22.35, P < 0.001; G: F2,59 = 32.60, P < 0.001; B: F2,58 = 7.37, P < 0.002). Red and green color channels increased significantly from lagoons to the river to the playa lakes (lagoons to river: R: t57 = 2.91, P < 0.003; G: t57 = 2.37, P < 0.011; lagoons to playa lakes: R: t57 = 6.57, P < 0.001; G: t57 = 8.07, P < 0.001; playa lakes to river: R: t57 = 2.50, P < 0.015; G: t57 = 4.02, P < 0.002).
The B color channel was significantly greater, or nearly so, in playa lakes than in the lagoons and the river (lagoons to river: t48 = 0.694, P < 0.49; lagoons to playa lakes: t48 = 3.84, P < 0.001; playa lakes to river: t48 = 1.63, P = 0.054). Consistently higher RGB values suggest that turtles from playa lakes are overall lighter than those from lagoons. No significant habitat structuring of plastron color components was observed (F2,54 < 1.90, P > 0.16 in all cases). Overall, results of the RGB analyses suggest that carapace color components differ significantly among most habitat types, whereas plastron color components do not. Bottom Substrate Color Variation and Relationship between Substrate and Shell Colors.—Substrate samples showed the same trend as the turtle RGB-values and increased from lagoons to the river to playa lakes. Playa lake R and G substrate values were significantly higher than in lagoons (playa lakes vs. lagoons: R: t3 = 4.24, P < 0.012; G: t3 = 3.67, P < 0.017; playa lakes vs. river: R: t3 = 2.82, P < 0.066; G: t3 = 2.39, P < 0.096). No significant differences existed in the blue color channel (F2,5 = 4.785, P < 0.12). Correlations of turtle carapace RG-values and substrate RG-values were strongly positive and significant (R: r = 0.81, P < 0.026; G: r = 0.74, P < 0.045). Associations for turtle carapace B and substrate B-values were positive but not significant (B: r = 0.39, P < 0.22). All associations of plastron RGB components and substrate RGB components were negative, but no significant correlations were detected (R: r = −0.35, P < 0.50; G: r = −0.51, P < 0.15; B: r = −0.69, P < 0.065).
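The Pearson product-moment correlations reported here can be computed from first principles. The per-locality values below are invented for illustration and are not the study's data:

```python
# Minimal Pearson correlation between per-locality substrate values and
# carapace LS means. All input values are hypothetical.
from math import sqrt

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient of two sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

substrate_R = [69.0, 78.0, 187.0]  # hypothetical lagoon, river, playa values
carapace_R = [1.94, 2.04, 2.15]    # hypothetical LS means on the log scale
print(round(pearson_r(substrate_R, carapace_R), 2))
```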
DISCUSSION Two conclusions can be drawn from the statistical analyses performed in this study: (1) carapace and substrate sample R and G color components differ among habitat type comparisons and are significantly, positively correlated to each other; and (2) plastron ground color components do not differ significantly across habitat types and do not significantly correlate with substrate sample color components. The color components varied across habitats in different ways. In particular, the short wavelength reflectance (B) of the turtles showed a weaker trend of habitat structuring than the long (R) and medium (G) wavelength light (Table 1). Short wavelengths of light are often absorbed by dissolved organic matter in the water and are not available for illumination of underwater objects (Markager and Vincent, 2000). Alternatively, blue is often conspicuous in water <2 m deep (Maan et al., 2006); hence, it may be constrained to maintain crypsis. Therefore, ambient lighting conditions underwater could eliminate this color channel's relative importance to potential cryptic coloration or constrain the plasticity of this color channel to match the surroundings. Although more work remains to be done on the photic environment experienced by these turtles, either of these explanations may be consistent with an adaptive explanation for carapace color variation. My results support a hypothesis of dorsal background matching for A. s. atra. Although not all comparisons were significant, the RGB color components of the sampled localities generally revealed an increase in carapace ground color darkness from playa lakes to the river to lagoons (Figs. 1, 2; Table 1), and the correlations of R and G with substrate values were positive and significant. In contrast, plastron ground color did not show any significant habitat type structuring in RGB-values or significant correlation with substrate RGB-values.
If these observations of general carapace darkening were the result of a passive staining process from abiotic forces instead of background matching, plastron and carapace color should have changed in the same direction (Rowe et al., 2006a). Further, background matching has been noted for a wide variety of reptiles (e.g., reviewed by Norris and Lowe, 1964; Cooper and Greenberg, 1992; Rowe et al., 2006b), and the pattern observed here (i.e., dorsal, but not ventral, matching) is consistent with expectations from a cryptic coloration mechanism. It is unknown, and would be interesting to investigate, whether the observed phenotypic variability is a result of short-term phenotypic plasticity such as that seen in laboratory settings (Bartley, 1971; Ernst et al., 1994) or is a result of a fixed local adaptation (Rosenblum et al., 2004). The habitat structuring of carapace RGB-values and plastron pigmentation in A. s. atra is important in a taxonomic context. Apalone spinifera atra is predominantly defined by dark dorsal coloration (Webb and Legler, 1960; Smith and Smith, 1979; Lovich et al., 1990). Winokur (1968) reported that darker, A. s. atra–like animals were mainly in lagoons and that lighter, A. s. emoryi–like specimens resided in rivers (no playa lakes were examined in his analysis). He then hypothesized that ecological preferences (i.e., lagoons for the darker A. s. atra, rivers for the lighter A. s. emoryi) potentially were involved (Winokur, 1968). Here, I suggest that background color matching, whether transient and plastic or fixed and locally adaptive, explains much of the phenotypic variation among Apalone in the Cuatrociénegas basin. This hypothesis is supported by the correlation of carapace, but not plastron, color components to substrate samples and by experimental evidence from an earlier study that showed that dorsal background matching occurs in the laboratory in Apalone (Bartley, 1971; Ernst et al., 1994).
The DNA evidence of the companion study (McGaugh and Janzen, in press) supports the decision of Fritz and Havaš (2006) to demote A. s. atra from species to subspecies rank but leaves unexplained the morphological diversity seen in this species across the basin. The preliminary morphological data presented here suggest that background matching may explain the phenotypic diversity seen across habitats in the basin. However, the mechanism by which this substantial morphological variation is achieved remains enigmatic.

TABLE 1. Statistics on RGB-values of carapaces for Apalone spinifera atra in Cuatro Ciénegas, Mexico. Carapace coloration across three habitat types was analyzed by ANCOVA and LS means t-tests. Significant comparisons are denoted by an asterisk. Raw data were log transformed (carapace) and taken to the three-fourths power (color standard for carapace photographs) for all A. s. atra. Data from soil samples and their color standards were not transformed. Abbreviations are R = red (long wavelength), G = green (medium wavelength), and B = blue (short wavelength).

Habitat     | Channel | Soil LS mean | Apalone LS mean | vs. Playa Lake (P) | vs. River (P)
Lagoon      | R       | 69.24        | 1.936           | <0.001*            | <0.003*
Lagoon      | G       | 68.34        | 1.884           | <0.001*            | <0.011*
Lagoon      | B       | 64.68        | 1.781           | <0.001*            | 0.490
Playa Lake  | R       | 187.22       | 2.149           | –                  | <0.015*
Playa Lake  | G       | 172.56       | 2.109           | –                  | <0.002*
Playa Lake  | B       | 156.72       | 1.939           | –                  | 0.054
River       | R       | 78.00        | 2.043           | <0.015*            | –
River       | G       | 80.00        | 1.960           | <0.002*            | –
River       | B       | 81.00        | 1.773           | 0.054              | –

FIG. 1. Boxplots of carapace RGB-values for Apalone spinifera atra in different habitats in Cuatrociénegas, Coahuila, Mexico. The box indicates quartiles, and the median is indicated by the heavy line within the box. The lines illustrate points falling within 1.5 times the box size. Outliers, which were more extreme than the lines, are not shown.

Additional work,
such as a reciprocal transplant experiment, is needed to determine whether the pigmentation variation is plastic in the field, as is seen in the lab (Bartley, 1971; Ernst et al., 1994), or could potentially be a result of fixed genetic differences between populations. Acknowledgments.—K. Lundquist helped digitize the images. R. Jeppesen, N. Hernandez, E. Bonnell, C. McKinney, J. Howeth, J. Siegrist, and D. Hendrickson provided valuable field assistance. Special thanks to J. Endler, M. Stevens, and D. Adams for help with the evaluation of color parameters; F. Janzen, J. Tucker, and the Janzen Lab for useful comments on the manuscript; and R. Rapp for valuable discussion. Support was provided by Chelonian Research Foundation's Linnaeus Fund Research Grants, Iowa State University's Professional Advancement Grant in nondissertation research, and the Gaige Fund of the American Society of Ichthyologists and Herpetologists. Work was done under permit DAN00739. Field methods were approved by the Committee on Animal Care from Iowa State University (Protocol 5-03-5442-J). SEM was supported by a Graduate Research Fellowship from the National Science Foundation and by National Science Foundation IBN-0212935 to F. Janzen. LITERATURE CITED BARTLEY, J. A. 1971. A histological and hormonal analysis of physiological and morphological chromatophore responses in the soft-shelled turtle Trionyx sp. Journal of Zoology 163:125–144. BENNETT, A. T. G., I. C. CUTHILL, AND K. J. NORRIS. 1994. Sexual selection and mismeasure of color. American Naturalist 144:848–860. BRODIE, E. D., III, AND F. J. JANZEN. 1995. Experimental studies of coral snake mimicry: generalized avoidance of banded snake patterns by free-ranging avian predators. Functional Ecology 9:186–190. COOPER, W. E., AND N. GREENBERG. 1992. Reptilian coloration and behavior. In C. Gans and D. Crews (eds.), Biology of the Reptilia. Volume 18, pp. 298–422.
Smithsonian Institution Press, Washington, DC. DARST, C. R., AND M. E. CUMMINGS. 2006. Predator learning favours mimicry of a less-toxic model in poison frogs. Nature 440:208–211. ENDLER, J. A. 1990. On the measurement and classification of color in studies of animal colour patterns. Biological Journal of the Linnean Society 41:315–352. ERNST, C. H., AND R. W. BARBOUR. 1992. Turtles of the World. Smithsonian Institution Press, Washington, DC. ERNST, C. H., J. E. LOVICH, AND R. W. BARBOUR. 1994. Turtles of the United States and Canada. Smithsonian Institution Press, Washington, DC. EVANS, S. B. 2005. Using Chemical Data to Define Flow Systems in Cuatro Ciénegas, Coahuila, Mexico. Unpubl. master's thesis, University of Texas, Austin. FRITZ, U., AND P. HAVAŠ. 2006. Checklist of chelonians of the world. Vertebrate Zoology 57:149–368.

FIG. 2. Pictorial illustration of Apalone spinifera atra (above) from the three habitat types described in this study. Samples are representative of the habitat type and are not outliers.

GALOIS, P., M. LÉVEILLÉ, L. BOUTHILLIER, C. DAIGLE, AND S. PARREN. 2002. Movement patterns, activity, and home range of the Eastern Spiny Softshell Turtle (Apalone spinifera) in Northern Lake Champlain, Québec, Vermont. Journal of Herpetology 36:402–411. LOVICH, J. E., W. R. GARSTKA, AND C. J. MCCOY. 1990. The development and significance of melanism in the Slider Turtle. In J. W. Gibbons (ed.), Life History and Ecology of the Slider Turtle, pp. 233–254. Smithsonian Institution Press, Washington, DC. MAAN, M. E., K. D. HOFKER, J. J. M. VAN ALPHEN, AND O. SEEHAUSEN. 2006. Sensory drive in cichlid speciation. American Naturalist 167:947–954. MARKAGER, S., AND W. F. VINCENT. 2000. Spectral light attenuation and the absorption of UV and blue light in natural waters. Limnology and Oceanography 45:642–650. MCGAUGH, S. E., AND F. J. JANZEN. In press.
The status of Apalone atra populations in Cuatrociénegas, Coahuila, Mexico: preliminary data. Chelonian Conservation and Biology. MEYER, E. R. 1973. Late-Quaternary paleoecology of the Cuatro Ciénegas basin, Coahuila, México. Ecology 54:983–995. MINCKLEY, W. L. 1969. Environments of the Bolsón of Cuatro Ciénegas, Coahuila, Mexico, with special reference to the aquatic biota. Texas Western Press, El Paso. NORRIS, K. S., AND C. H. LOWE. 1964. An analysis of background color matching in amphibians and reptiles. Ecology 45:565–580. R DEVELOPMENT CORE TEAM. 2006. R: A Language and Environment for Statistical Computing [Internet]. R Foundation for Statistical Computing, Vienna, Austria. Available from: http://www.R-project.org. Accessed 10 March 2006. ROSENBLUM, E. B. 2005. The role of phenotypic plasticity in color variation of Tularosa Basin lizards. Copeia 2005:586–596. ROSENBLUM, E. B., H. E. HOEKSTRA, AND M. W. NACHMAN. 2004. Adaptive reptile color variation and the evolution of the MC1R gene. Evolution 58:1794–1808. ROWE, J. W., D. L. CLARK, AND M. PORTER. 2006a. Shell color variation of Midland Painted Turtles (Chrysemys picta marginata) living in habitats with variable substrate colors. Herpetological Review 37:293–298. ROWE, J. W., D. L. CLARK, C. RYAN, AND J. K. TUCKER. 2006b. Effect of substrate color on pigmentation in Midland Painted Turtles (Chrysemys picta marginata) and Red-Eared Slider Turtles (Trachemys scripta elegans). Journal of Herpetology 40:358–364. SMITH, H. M., AND R. B. SMITH. 1979. Guide to Mexican Turtles. Bibliographic Addendum III, Synopsis of the Herpetofauna of Mexico, Volume 6. John Johnson, North Bennington, VT. STEVENS, M., C. A. PÁRRAGA, I. C. CUTHILL, J. C. PARTRIDGE, AND T. S. TROSCIANKO. 2007. Using digital photography to study animal coloration. Biological Journal of the Linnean Society 90:211–237. WEBB, R. G., AND J. M. LEGLER. 1960.
A new soft-shell turtle (genus Trionyx) from Coahuila, Mexico. University of Kansas Science Bulletin 40:21–30. WINOKUR, R. M. 1968. The Morphology and Relationships of the Soft-Shelled Turtles of the Cuatrociénegas Basin, Coahuila, Mexico. Unpubl. master's thesis. Arizona State University, Tempe. WOOLLEY, P. 1957. Colour change in a Chelonian. Nature 179:1255–1256. Accepted: 9 December 2007.

----

University of Birmingham
Assessing the Raspberry Pi as a low-cost alternative for acquisition of near infrared hemispherical digital imagery
Kirby, Jennifer; Chapman, Lee; Chapman, Victoria
DOI: 10.1016/j.agrformet.2018.05.004
License: Creative Commons: Attribution-NonCommercial-NoDerivs (CC BY-NC-ND)
Document Version: Peer reviewed version
Citation for published version (Harvard): Kirby, J, Chapman, L & Chapman, V 2018, 'Assessing the Raspberry Pi as a low-cost alternative for acquisition of near infrared hemispherical digital imagery', Agricultural and Forest Meteorology, vol. 259, pp. 232-239. https://doi.org/10.1016/j.agrformet.2018.05.004
Publisher Rights Statement: Published in Agricultural and Forest Meteorology on 15/05/2018. DOI: https://doi.org/10.1016/j.agrformet.2018.05.004
• Users may use extracts from the document in line with the concept of 'fair dealing' under the Copyright, Designs and Patents Act 1988.
• Users may not further distribute the material nor use it for the purposes of commercial gain.
Where a licence is displayed above, please note the terms and conditions of the licence govern your use of this document. When citing, please reference the published version.

Take down policy
While the University of Birmingham exercises care and attention in making items available there are rare occasions when an item has been uploaded in error or has been deemed to be commercially or otherwise sensitive. If you believe that this is the case for this document, please contact UBIRA@lists.bham.ac.uk providing details and we will remove access to the work immediately and investigate.

Download date: 06. Apr. 2021
Title: Assessing the Raspberry Pi as a low-cost alternative for acquisition of near infrared hemispherical digital imagery

Authors & Affiliations:
Jennifer Kirby, University of Birmingham, College of Geography, Earth and Environmental Science, Edgbaston, Birmingham, B15 2TT, JXK067@bham.ac.uk
Professor Lee Chapman (Principal Correspondence), University of Birmingham, College of Geography, Earth and Environmental Science, Edgbaston, Birmingham, B15 2TT, l.chapmam@bham.ac.uk
Dr Victoria Chapman, Met Office Surface Transport Team, Birmingham Centre for Railway Research and Education, Gisbert Kapp Building, University of Birmingham, Edgbaston, Birmingham, B15 2TT, victoria.chapman@metoffice.gov.uk

Declarations of interest: none

Abstract
Hemispherical imagery is used in many different sub-fields of climatology to calculate local radiation budgets via sky-view factor analysis. For example, in forested environments, hemispherical imagery can be used to assess the leaf canopy (i.e. leaf area / gap fraction) as well as the radiation below the canopy structure. The Nikon Coolpix camera equipped with an FC-E8 fisheye lens has become a standard device for hemispherical imagery analysis; however, as the camera is no longer manufactured, a new approach needs to be investigated, not least to take advantage of the rapid development in digital photography over the last decade. This paper conducts a comparison between a Nikon Coolpix camera and a cheaper alternative, the Raspberry Pi NoIR camera, to assess its suitability as a viable alternative for future research. The results are promising, with low levels of distortion comparable to the Nikon.
Resultant sky-view factor analyses also yield promising results, but challenges remain in overcoming small differences in the field of view as well as the present availability of bespoke fittings.

Key words: Hemispherical fisheye, Near infra-red, Raspberry Pi, Sensors

1. Introduction
Hemispherical imagery is commonly used to assist in the assessment of radiation budgets. Examples of use include below tree canopies, in urban areas or within riverine environments (Hall et al., 2017; Liu et al., 2015; Chapman, 2007; Chapman et al., 2007; Bréda, 2003; Ringold et al., 2003; Watson and Johnson, 1987). Imagery is usually obtained using a camera equipped with a fisheye lens (Figure 1a), which allows the camera to take an approximately 180˚ hemispherical image (Liu et al., 2015; Chianucci et al., 2015). These images are then processed to analyse the amount of visible sky shown in the image (known as the sky-view factor). This can then be used in forestry research to quantify the health of a tree and to compare differences between tree canopies (Schwalbe et al., 2009; Leblanc et al., 2005; Jonckheere et al., 2004).

Figure 1 (a) FC-E8 fisheye lens attached to a Coolpix camera. Source: Reproduced with permission from Chapman et al. (2007), copyright © 2007 IEEE; (b) First2Savv 185˚ fisheye camera attached to a Samsung Galaxy S5 Neo; (c) Perspex dome used to measure distortion.

The use of fisheye imagery for this application can be dated back to the early work of Anderson (1964), but it was the advent of digital photography which saw the approach become widely adopted. Following a number of scoping studies, which successfully compared results obtained from film cameras to the new generation of digital cameras (Englund et al., 2000; Frazer et al., 2001; Hale and Edwards, 2002), the new technology quickly became adopted by the scientific community.
However, following the successful transition to mass digital photography, studies for the past two decades have become very reliant on the early digital cameras produced by Nikon (Table 1), such as the Coolpix 950 or 4500 (Chianucci et al., 2016; Lang et al., 2010; Chapman, 2007; Zhang et al., 2005; Baret and Agroparc, 2004; Ishida, 2004). Indeed, whilst research into hemispherical imagery has also been conducted using alternative cameras and equipment (Table 2), the Nikon Coolpix range equipped with the FC-E8 fisheye lens undoubtedly remains the most popular choice in research to date.

Seasonal changes in canopy structure
- Liu et al., 2015: Used a Nikon Coolpix 4500 camera at sunset/sunrise to capture hemispherical images of tree canopies in order to investigate seasonal changes of tree canopies.

Comparing the Nikon Coolpix to film cameras and leaf canopy analysers
- Homolová et al., 2007: Used a Nikon Coolpix 8700 to compare canopy analysers to hemispherical imagery.
- Garrigues et al., 2008: Compared a Nikon Coolpix 990 with the LAI-2000 and AccuPAR.
- Frazer et al., 2001: Compared a Nikon 950 to a film camera and highlighted the potential for blurred edges and colour distortion of a Coolpix camera, but noted it can be used in calculating canopy gap measurements.
- Englund et al., 2000: Compared a digital Nikon 950 and a film camera to find that low resolution images from the Nikon 950 were an adequate comparison to film cameras.
- Grimmond et al., 2001: Compared a Nikon 950 Coolpix to a plant canopy analyser and found that the Nikon was an effective and easy approach to canopy analysis.

Gap function analysis and estimation of tree canopies
- Hu et al., 2009: Used a Nikon 950 Coolpix camera to take hemispherical images to calculate gap size and shape within a tree canopy.
- Zhang et al., 2005: Researched the effect of exposure on calculating the leaf area index and gap function analysis using a Nikon Coolpix 4500.
- Lang et al., 2010: Calculated the gap function of canopies using a Nikon Coolpix 4500 and compared it to the Canon EOS 5D camera.
- Chianucci et al., 2016: Used a Nikon 4500 to compare gap functions in forested canopies.
- Danson et al., 2007: A Nikon 4500 was used as a comparison to terrestrial laser scanning.

Adaptation or calibration of Nikon cameras
- Chapman, 2007: Adapted a Nikon 4500 camera to take near infra-red imagery in order to better estimate sky-view factors and the woody bark index of tree canopies.
- Baret and Agroparc, 2004: Used a Nikon 4500 in order to determine the optical centre of an image using a fisheye lens.
- Ishida, 2004: Created threshold software for colour images from a Nikon 950 camera.

Table 1 List of sample studies that use Nikon Coolpix cameras.

- Kelley and Krueger, 2005 (HemiView 2.1 digital image system): Used a 20-megapixel SLR CMOS camera as part of the HemiView software (Delta Devices, 2017) to record canopy structure in riparian environments.
- Duveiller and Defourny, 2010 (Canon PowerShot A590): Used a Canon PowerShot A590 camera to assess batch processing of hemispherical images.
- Rich, 1990 (Canon T90, Minolta X700, Nikon FM2, Olympus OM4T): Comprehensive instructions on how to take hemispherical photography, with a list of cameras suitable for research.
- Urquhart et al., 2014 (Allied Vision GE-2040C): Uses sky-view factors from a high dynamic range camera to calculate short-term solar power forecasting.
- Wagner and Hagemeier, 2006 (Canon AE-1): Used a Canon camera to estimate leaf inclination angles on tree canopies.

Table 2 Studies using alternative cameras for hemispherical photography.

The Nikon Coolpix range of cameras remains a key tool in forest climatology (Table 1 and Table 2). Unfortunately, the Coolpix range is no longer readily available (Nikon, 2016), with digital camera technology advancing considerably in the interim, making models such as the Coolpix 4500 camera appear large and bulky, with a relatively poor battery life and low image resolution (3.14 megapixels). However, even today, the FC-E8 fisheye lens remains one of the least distorted on the market (Holmer et al., 2001) and as such, the camera series remains very popular with researchers as a tried and tested means to collect hemispherical imagery (Chapman, 2007). A significant further advantage of the Coolpix range of cameras was the ability to easily convert the camera to take near infra-red (NIR) imagery. Adapting a camera in this way significantly enhances its functionality in the forest environment: because vegetation is highly reflective in NIR, it becomes easier to distinguish from woody elements and other features in the imagery, which can then be used to assess the health and density of tree canopies (Chen et al., 1996; Turner et al., 1999).

Overall, the Nikon Coolpix camera has reached the point where it is informally viewed as a standard device for this purpose, but with dwindling numbers now available for purchase on internet auction sites, there is a need to investigate new and more sustainable means to collect data in the long term. Whilst new digital cameras are available on the market, the approach explored in this paper is to investigate whether a low-cost alternative can be developed using readily available off-the-shelf components.

2. Methods
2.1 Adapting a Raspberry Pi
The Raspberry Pi is a range of small computers designed to minimise the cost of computing and thus make it, and computer programming more generally, accessible to a wide audience.
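Such a computer can be scripted to capture imagery on a schedule. The sketch below is illustrative only (the function names and intervals are invented, not the authors' code): the scheduling logic is written with the actual camera call injected as a callback, so it can be exercised off-device; on a Pi the callback would wrap the camera library's capture routine.

```python
import time

def timelapse(capture, interval_s, n_frames, sleep=time.sleep):
    """Call capture(frame_index) n_frames times, interval_s seconds apart.

    The capture callable is injected so this scheduling logic can be
    tested without camera hardware; on a Raspberry Pi it would wrap the
    camera module's capture call.
    """
    for i in range(n_frames):
        capture(i)
        if i < n_frames - 1:
            sleep(interval_s)

# Off-device check: record which frames would have been captured,
# replacing the real sleep with a no-op.
frames = []
timelapse(frames.append, interval_s=60, n_frames=3, sleep=lambda s: None)
print(frames)  # -> [0, 1, 2]
```

Separating the schedule from the hardware call in this way also makes it easy to swap in network upload or logging alongside each capture.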
After a prolific launch, it now has a worldwide following of developers focussed on producing generic code and peripherals for use in a range of applications. As an example, the computer can now be readily fitted with a Raspberry Pi camera and subsequently programmed to take images at set time intervals.

At the time of writing, the most popular Pi-compatible camera available on the market is the Pi camera, which comprises a Sony IMX219 8-megapixel sensor. This is available either as a standard device or as a Pi NoIR camera, where the infra-red blocking filter (needed by modern digital cameras due to their inherent capability to see beyond the visible spectrum: Chapman, 2007) has been removed (Raspberry Pi, 2016). As outlined in the previous section, NIR capability improves the utility of the approach for use in forested environments.

2.2 Comparison of fisheye lenses
Unfortunately, a fisheye lens specifically designed for the Pi NoIR camera is not presently available. However, due to the recent proliferation of smartphone photography, there is now a wide range of fisheye lenses available for smartphones which have the potential to be used. The key consideration here, as per Holmer et al. (2001), is to select a lens with minimal distortion to reduce error in later image analyses. This can be achieved by testing the equiangularity of the lens by calculating any distortions in the radial distance. As
117 118 (a) (b) (c) (d) (e) Figure 2 (a) Visual comparison of Nikon Coolpix camera, (b) smart phone camera 119 with attached 185˚ fisheye lens, (c) smart phone camera with attached fisheye lens 120 198˚, (d) smart phone camera with attached fisheye lens 180˚ and (e) smart phone 121 camera with attached fisheye lens 235 ˚ 122 123 A range of available fisheye lenses were tested for distortions (Table 3). In this initial 124 test, the fisheye lenses were clipped onto a Samsung Galaxy S5 Neo (Figure 1 b) 125 and placed under a large Perspex calibration dome marked at equal points along the 126 sides using a compass (Figure 1 c). A plumb bob was then used to position the 127 device directly below the centre of the dome before a series of images collected 128 (Figure 2). Measurement distortions were then calculated using Image-J software 129 (Figure 3). 130 131 Product Field of view Cost (At time of writing) Yarrashop fisheye lens 180 £7.99 First2Savv JTSJ-185-A01 fisheye lens 185 £8.99 AUKEY fisheye lens 198 £11.99 MEMTEQ universal fisheye lens 235 £10.99 Table 3 Mobile fisheye lenses specification. 132 133 134 Figure 3 Comparison of radial distortion between different mobile fisheye lenses and 135 Nikon Coolpix 4500 camera FC-E8 lens. 136 137 0.0 10.0 20.0 30.0 40.0 50.0 60.0 0.0 10.0 20.0 30.0 40.0 50.0 60.0 D IS T O R T E D D IS T A N C E ( C M ) MEAN RADIAL DISTANCE FROM DOME CENTRE (CM) Equiangular Dome 180˚ fisheye lens 185˚ fisheye lens 198˚ fisheye lens Nikon Coolpix FC-E8 lens 235˚ fisheye lens The results show that the 185˚ fisheye lens (Figure 2b) is most comparable with the 138 Nikon Coolpix FC-E8 lens (Figure 2a). It has a similar field of view (FOV) and 139 despite a slight reduction in image clarity at high radial distances, the 185˚ lens has 140 the lowest level of distortion (Figure 3). 
However, comparisons between the Nikon FC-E8 lens and the other mobile fisheye lenses are not as favourable, and all display clear distortions and/or significant reductions in FOV. For example, the 180˚ lens (Figure 2d) captures the lowest FOV of the compared fisheye lenses (Figure 3). The 198˚ fisheye lens (Figure 2c) has excellent clarity at high radial distances, but has a lower FOV than reported and high levels of distortion (Figure 3). Conversely, the 235˚ fisheye lens (Figure 2e) has a high FOV, but also high levels of distortion, especially at high radial distances (Figure 3). Based on these analyses, the 185˚ fisheye lens was chosen for further investigation.

2.3 Adapting a Pi NoIR camera to take hemispherical images
In order to use the 185˚ fisheye lens with the Pi NoIR camera, a series of small adaptations are required. Whilst these adaptations could be achieved using 3D printing technology, in this study they were achieved using parts scavenged from the First2Savv 185˚ fisheye lens (Figure 4a) and tubing from a Waveshare Raspberry Pi Camera Module Kit (Figure 4b). The camera component of the Waveshare kit was removed, using a saw and drill, to leave a hollow tube. The tubing (Figure 4b) was then tied and secured to the base of the Raspberry Pi NoIR camera using thin wire (Figure 4c). The camera was then attached to the Raspberry Pi board using the connector port (Figure 4d).

Figure 4 (a) 185˚ fisheye lens attached to base; (b) base component of Raspberry Pi fisheye module; (c) fisheye module attached to Raspberry Pi NoIR camera; (d) camera module attached to a Raspberry Pi computer.

3. Comparison of the Nikon camera and the Raspberry Pi NoIR camera
3.1 General specifications
Table 4 shows the specification comparison of the Pi NoIR camera versions 1 and 2, the Nikon Coolpix 4500 and the Nikon Coolpix 900 cameras. As demonstrated in the previous section, the reported FOV can vary between individual cameras (Grimmond et al., 2001) and therefore the FOV has been estimated in this study using a mechanical clinometer. The adapted Pi camera FOV (164˚) is less than the Nikon Coolpix FOV (176˚), which is hypothesised to be a consequence of the added tubing (Figure 4b) causing some distortion and loss of image at ground level.

Specification | Nikon 900 | Nikon 4500 | Pi NoIR V1 | Pi NoIR V2
Pixel range | 1.2 megapixels | 3.14 megapixels | 5 megapixels | 8 megapixels
Optical zoom | 3x optical zoom lens | 4x optical zoom lens | N/A | N/A
Field of view | 183˚ FC-E8 lens (176˚ using a mechanical clinometer) | 183˚ FC-E8 lens (176˚ using a mechanical clinometer) | 185˚ mobile fisheye lens (164˚ using a mechanical clinometer) | 185˚ mobile fisheye lens (164˚ using a mechanical clinometer)
Dimensions | 143 x 76.5 x 36.5 mm (5.6 x 3.0 x 1.4 in.) | 130 x 73 x 50 mm (5.1 x 2.9 x 2.0 in.) | 25 x 24 x 1 mm | 25 x 24 x 1 mm
Cost | £100* | £200* | £25 | £25
* Approximate second-hand price.
Table 4 Comparison of Coolpix cameras to Raspberry Pi cameras.

3.2 Distortion analysis
As hemispherical imagery is mostly used in the analysis of tree canopies, the loss of information at ground level (i.e. at high radial distances) is less of a concern. It is at these extremities of the image where distortions are also more common, and minimising them is indeed one of the main attractions of the Nikon Coolpix range of cameras (Holmer et al., 2001). Whilst an equiangular lens is not an essential requirement of a camera system for this application, it does ensure fewer corrections are required and minimises error in subsequent analysis.
The distortions of the adapted fisheye lens are again tested by using the Perspex calibration dome (Figure 5).

Figure 5 (a) Nikon Coolpix camera in a Perspex dome and (b) Raspberry Pi NoIR camera with fisheye attached under the Perspex dome.

The FOV of the adapted Pi camera is demonstrated to be less than that of the Nikon camera; however, there is a greater level of distortion when using the Nikon Coolpix camera (Figure 6). This difference is likely due to the size of the equipment, with the Nikon Coolpix camera being larger than the Pi camera lens (145 mm compared to 25 mm). With respect to equiangularity, there is a strong correlation between the radial distance distortions of the Nikon Coolpix FC-E8 lens camera and the Raspberry Pi NoIR adapted fisheye camera, significant at the 99.9% confidence level (Figure 6).

Figure 6 Radial distortion of a Nikon Coolpix FC-E8 lens camera and a Raspberry Pi camera with attachable fisheye lens (axes: distorted distance (cm) against mean radial distance from dome centre (cm); series: Nikon Coolpix FC-E8 lens, 185˚ fisheye lens attached to Raspberry Pi NoIR camera, and equiangular dome).

3.3 Sky-view factor analysis
To further demonstrate the inter-device comparability, images were captured with both devices for sky-view factor analysis (Figure 7). The images were then analysed using the 'Sky-View Calculator' software (Göteborg Urban Climate Group, 2018) developed by Lindberg and Holmer (2010), using a process where the image was converted to binary (Figure 7) and divided into concentric annuli before the number of white pixels (sky) in each annulus was calculated and summed (Holmer et al.
2001; Johnson and Watson, 1984; Steyn, 1980). Analyses were performed on the original imagery as well as on images cropped to have the same FOV. Table 5 shows that when the FOV is uncorrected, the Pi overestimates the sky-view factor, but when this is corrected, the output is very similar and is significant at the 99.9% level.

Figure 7 Visual variations in sky-view factors (a)-(h) when comparing a Nikon Coolpix FC-E8 lens with a 185˚ Raspberry Pi NoIR camera (panels: Nikon Coolpix FC-E8 lens; Nikon Coolpix FC-E8 lens SVF; Raspberry Pi camera; Raspberry Pi camera SVF; Raspberry Pi camera after threshold analysis; Raspberry Pi camera leaves-only SVF).

Image | Nikon Coolpix (non-adjusted FOV) | Nikon Coolpix (adjusted FOV) | Raspberry Pi | Leaf-view factor (Pi, contribution of leaves)
(a) | 0.25 | 0.17 | 0.17 | 0.55
(b) | 0.24 | 0.26 | 0.29 | 0.68
(c) | 0.40 | 0.42 | 0.44 | 0.45
(d) | 0.40 | 0.45 | 0.45 | 0.45
(e) | 0.30 | 0.34 | 0.35 | 0.26
(f) | 0.40 | 0.45 | 0.47 | 0.33
(g) | 0.37 | 0.40 | 0.44 | 0.42
(h) | 0.48 | 0.33 | 0.34 | 0.53
Table 5 Sky-view factors of the Nikon Coolpix camera (non-adjusted and adjusted FOV), the Raspberry Pi NoIR camera, and the Raspberry Pi leaves-only images.

3.4 Near infrared capabilities
In addition to hardware availability, an advantage of using a Raspberry Pi NoIR camera over a Nikon 4500 camera is the in-built near infra-red (NIR) technology. Although it is also possible to convert the Nikon Coolpix camera to take NIR images (Chapman, 2007), this involves substantial effort which risks damaging the camera. The capability of the Pi NoIR was confirmed in this study.
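For reference, the binary-annulus procedure of Section 3.3 can be sketched from first principles. The annulus weights below follow from integrating the cosine-weighted solid angle over each zenith band under an assumed equiangular lens; this is an illustrative reimplementation of a Steyn (1980)-style summation, not the Sky-View Calculator's actual code.

```python
import math

def sky_view_factor(binary_img, n_annuli=36):
    """Annulus-based SVF for a centred, equiangular fisheye image.

    binary_img: 2D list of 0/1, where 1 = sky pixel.
    Zenith angle is taken as proportional to radial distance from the
    image centre (the equiangular assumption tested earlier).
    """
    h, w = len(binary_img), len(binary_img[0])
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    rmax = min(cy, cx)
    sky, tot = [0] * n_annuli, [0] * n_annuli
    for y in range(h):
        for x in range(w):
            r = math.hypot(y - cy, x - cx)
            if r > rmax:
                continue  # outside the image circle
            i = min(int(r / rmax * n_annuli), n_annuli - 1)
            tot[i] += 1
            sky[i] += binary_img[y][x]
    svf = 0.0
    for i in range(n_annuli):
        if tot[i] == 0:
            continue
        t0 = (math.pi / 2) * i / n_annuli        # inner zenith angle
        t1 = (math.pi / 2) * (i + 1) / n_annuli  # outer zenith angle
        # Cosine-weighted solid-angle fraction of this zenith band
        weight = math.sin(t1) ** 2 - math.sin(t0) ** 2
        svf += weight * sky[i] / tot[i]
    return svf

# Sanity check: an unobstructed sky should give an SVF of ~1
clear = [[1] * 101 for _ in range(101)]
print(round(sky_view_factor(clear), 3))  # -> 1.0
```

Cropping both devices' images to a common FOV before running such a routine corresponds to the FOV correction applied in Table 5.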
A simple threshold analysis proved sufficient to remove all other aspects of the image except for vegetation (Figure 7; Table 5). The differences in sky-view factor can then be calculated; from these, leaf-view calculations were made and are presented in Table 5, indicating an approximation of leaf cover in the image and further highlighting the utility of the camera in forestry applications.

4. Conclusions
The Nikon Coolpix camera range has provided a reliable 'standard' solution for obtaining hemispherical fisheye imagery for many years. However, whilst still fit for purpose, an alternative is needed to ensure a sustainable means of data collection moving forward. This paper has shown that comparable results can be provided with a low-cost image collection system using readily available components.

The Pi NoIR camera provides an off-the-shelf NIR solution, making it well suited for use in forested environments and removing the need for further adaptation (i.e. removal of blocking filters and addition of cold mirrors: Chapman, 2007). However, fisheye lenses for it are not yet readily available, and hence there is presently a need to carry out alternative adaptations such as those outlined in this paper, or to use simple 3D printing technology. The most positive result from this study is the direct comparability of the imagery (and the subsequent results from sky-view factor analyses) obtained from the two techniques. Both systems have similarly low levels of distortion, but there are minor differences in relation to the FOV.
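Returning to the threshold analysis of Section 3.4: the bookkeeping behind a leaf-view estimate can be mimicked with two grey-level cut-offs, the sky being brightest and NIR-reflective foliage brighter than woody material. The thresholds and the tiny image below are hypothetical, purely to illustrate the idea, not the values used in the study.

```python
def classify_nir(img, sky_t=200, veg_t=120):
    """Label each pixel of a greyscale NIR image: 2 = sky, 1 = vegetation, 0 = other.

    sky_t and veg_t are hypothetical grey levels; in practice they would
    be tuned per image, as in the paper's threshold analysis.
    """
    return [[2 if v >= sky_t else 1 if v >= veg_t else 0 for v in row]
            for row in img]

def fractions(labels):
    """Fraction of pixels in each class (a crude stand-in for view factors)."""
    flat = [v for row in labels for v in row]
    n = len(flat)
    return {"sky": flat.count(2) / n,
            "leaf": flat.count(1) / n,
            "other": flat.count(0) / n}

# A 3x3 hypothetical NIR patch: bright sky, mid-bright foliage, dark wood
nir = [[250, 250, 150],
       [150,  90,  90],
       [250, 150,  90]]
print({k: round(v, 2) for k, v in fractions(classify_nir(nir)).items()})
# -> {'sky': 0.33, 'leaf': 0.33, 'other': 0.33}
```

Differencing the sky-only and sky-plus-leaf classifications in this way gives a leaf contribution analogous to the leaf-view column of Table 5.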
Further research is needed to adapt the Raspberry Pi to make the sensor usable in the field; this includes waterproofing the technology and testing the equipment at various temperature ranges. A limitation of this study is that the technology was not tested for interference from electronic or radio waves.

A further advantage of the Raspberry Pi approach is the computing capability of the device, which means it has internal logging capabilities and (once waterproofed) could be left in the field in time-lapse mode for long periods at a time, even relaying imagery over the internet in real time if communications are available. Overall, moving forward, there are many advantages to using the Raspberry Pi; however, the key conclusion is that a fit-for-purpose and dynamic solution for the collection of hemispherical imagery can be readily produced at a low cost.

Acknowledgements
Funding for this research was provided by the Rail Safety and Standards Board (RSSB) and the Engineering and Physical Sciences Research Council (EPSRC).

References
1. Anderson, M.C. 1964: Studies of woodland light climate. Journal of Ecology, 52, 27-41.
2. Baret, F. and Agroparc, S. 2004: A simple method to calibrate hemispherical photographs. INRA-CSE, France (http://147.100.66.194/can_eye/hemis_calib3. pdf).
3. Bréda, N.J. 2003: Ground-based measurements of leaf area index: a review of methods, instruments and current controversies. Journal of Experimental Botany, 54(392), 2403-2417.
4. Chapman, L. 2008: An introduction to upside-down remote sensing. Progress in Physical Geography, 32(5), 529-542.
5. Chapman, L. 2007: Potential applications of near infra-red hemispherical imagery in forest environments. Journal of Agricultural and Forest Meteorology, 143(1), 151-156.
6. Chapman, L., Thornes, J.E., Muller, J.P., and McMuldroch, S.
2007: Potential applications of thermal fisheye imagery in urban environments. IEEE Geoscience and Remote Sensing Letters, 4(1), 56-59.
7. Chen, J.M. 1996: Optically-based methods for measuring seasonal variation of leaf area index in boreal conifer stands. Journal of Agricultural and Forest Meteorology, 80(2-4), 135-163.
8. Duveiller, G., and Defourny, P. 2010: Batch processing of hemispherical photography using object-based image analysis to derive canopy biophysical variables. Proceedings of GEOBIA, 1682-1777.
9. Chianucci, F., Leonardo, D., Donatella, G., Daniele, B., Vanni, N., Cinzia, L., Andrea, R. and Piermaria, C. 2016: Estimation of canopy attributes in beech forests using true colour digital images from a small fixed-wing UAV. International Journal of Applied Earth Observation and Geoinformation, 47, 60-68.
10. Chianucci, F., Macfarlane, C., Pisek, J., Cutini, A., and Casa, R. 2015: Estimation of foliage clumping from the LAI-2000 Plant Canopy Analyzer: effect of view caps. Trees, 29(2), 355-366.
11. Danson, F.M., Hetherington, D., Morsdorf, F., Koetz, B., and Allgower, B. 2007: Forest canopy gap fraction from terrestrial laser scanning. IEEE Geoscience and Remote Sensing Letters, 4(1), 157-160.
12. Englund, S.R., O'Brien, J.J., and Clark, D.B. 2000: Evaluation of digital and film hemispherical photography and spherical densiometry for measuring forest light environments. Canadian Journal of Forest Research, 30(12), 1999-2005.
13. Delta 2017: HemiView Forest Canopy Image Analysis System, https://www.delta-t.co.uk/product/hemiview/, accessed 30/01/2018.
14. Frazer, G.W., Fournier, R.A., Trofymow, J.A., and Hall, R.J. 2001: A comparison of digital and film fisheye photography for analysis of forest canopy structure and gap light transmission. Journal of Agricultural and Forest Meteorology, 109(4), 249-263.
15. Garrigues, S., Shabanov, N.
V., Swanson, K., Morisette, J.T., Baret, F., and Myneni, R.B. 2008: Intercomparison and sensitivity analysis of Leaf Area Index retrievals from LAI-2000, AccuPAR, and digital hemispherical photography over croplands. Journal of Agricultural and Forest Meteorology, 148(8), 1193-1209.
16. Grimmond, C.S.B., Potter, S.K., Zutter, H.N., and Souch, C. 2001: Rapid methods to estimate sky-view factors applied to urban areas. International Journal of Climatology, 21(7), 903-913.
17. Göteborg Urban Climate Group, 2018: Sky-view Calculator, University of Gothenburg, https://gvc.gu.se/english/research/climate/urban-climate/software/download, accessed 30/01/2018.
18. Hale, S.E., and Edwards, C. 2002: Comparison of film and digital hemispherical photography across a wide range of canopy densities. Journal of Agricultural and Forest Meteorology, 112(1), 51-56.
19. Hall, R.J., Fournier, R.A., and Rich, P. 2017: Introduction. In Hemispherical Photography in Forest Science: Theory, Methods, Applications (pp. 1-13). Springer Netherlands.
20. Holmer, B., Postgård, U. and Eriksson, M., 2001: Sky view factors in forest canopies calculated with IDRISI. Theoretical and Applied Climatology, 68(1), 33-40.
21. Homolová, L., Malenovský, Z., Hanuš, J., Tomášková, I., Dvořáková, M., and Pokorný, R. 2007: Comparison of different ground techniques to map leaf area index of Norway spruce forest canopy. In Proceedings of the International Society for Photogrammetry and Remote Sensing (ISPRS).
22. Hu, L., Gong, Z., Li, J., and Zhu, J. 2009: Estimation of canopy gap size and gap shape using a hemispherical photograph. Trees, 23(5), 1101-1108.
23. Ishida, M.
2004: Automatic thresholding for digital hemispherical photography. Canadian Journal of Forest Research, 34(11), 2208-2216.
24. Johnson, G.T., and Watson, I.D. 1984: The determination of view-factors in urban canyons. Journal of Climate and Applied Meteorology, 23(2), 329-335.
25. Jonckheere, I., Fleck, S., Nackaerts, K., Muys, B., Coppin, P., Weiss, M., and Baret, F. 2004: Review of methods for in situ leaf area index determination: Part I. Theories, sensors and hemispherical photography. Journal of Agricultural and Forest Meteorology, 121(1), 19-35.
26. Kelley, C.E. and Krueger, W.C., 2005: Canopy cover and shade determinations in riparian zones. Journal of the American Water Resources Association, 41(1), 37-46.
27. Lang, M., Kuusk, A., Mõttus, M., Rautiainen, M., and Nilson, T. 2010: Canopy gap fraction estimation from digital hemispherical images using sky radiance models and a linear conversion method. Journal of Agricultural and Forest Meteorology, 150(1), 20-29.
28. Leblanc, S.G., Chen, J.M., Fernandes, R., Deering, D.W., and Conley, A. 2005: Methodology comparison for canopy structure parameters extraction from digital hemispherical photography in boreal forests. Journal of Agricultural and Forest Meteorology, 129(3), 187-207.
29. Lindberg, F., and Holmer, B. 2010: Sky View Factor Calculator, Göteborg Urban Climate Group, Department of Earth Sciences, University of Gothenburg.
30. Liu, Z., Wang, C., Chen, J.M., Wang, X., and Jin, G. 2015: Empirical models for tracing seasonal changes in leaf area index in deciduous broadleaf forests by digital hemispherical photography. Journal of Forest Ecology and Management, 351, 67-77.
31. Nikon, 2017: Coolpix 4500 product archive, http://imaging.nikon.com/lineup/CoolPix/others/4500/, accessed 30/01/2018.
32.
Raspberry Pi 2016: https://www.RaspberryPi.org/products/Pi-noir-camera/, 364 Accessed 2017 365 33. Rich, P. M. 1990: Characterizing plant canopies with hemispherical 366 photographs. Remote sensing reviews, 5(1), 13-29. 367 34. Ringold, P. L., Sickle, J., Rasar, K., and Schacher, J. 2003: Use of 368 hemispheric imagery for estimating stream solar exposure. Journal of the 369 American Water Resources Association, 39(6), 1373-1384. 370 35. Schwalbe, E., Maas, H. G., Kenter, M., and Wagner, S. 2009: Hemispheric 371 image modeling and analysis techniques for solar radiation determination in 372 forest ecosystems. Journal of Photogrammetric Engineering and Remote 373 Sensing, 75(4), 375-384. 374 http://imaging.nikon.com/lineup/CoolPix/others/4500/ https://www.raspberrypi.org/products/pi-noir-camera/ 36. Steyn, D.G., 1980: The calculation of view factors from fisheye‐lens 375 photographs: Research note. 376 37. Turner, D. P., Cohen, W. B., Kennedy, R. E., Fassnacht, K. S., and Briggs, 377 J. M. 1999: Relationships between leaf area index and Landsat TM spectral 378 vegetation indices across three temperate zone sites. Remote sensing of 379 environment, 70(1), 52-68. 380 38. Urquhart, B., Kurtz, B., Dahlin, E., Ghonima, M., Shields, J. E., and 381 Kleissl, J. 2014: Development of a sky imaging system for short-term solar 382 power forecasting. Atmospheric Measurement Techniques Discussions, 7, 383 4859-4907. 384 39. Wagner, S., and Hagemeier, M. 2006: Method of segmentation affects leaf 385 inclination angle estimation in hemispherical photography. Journal of 386 Agricultural and Forest Meteorology, 139(1), 12-24. 387 40. Watson, I. D., and Johnson, G. T. 1987: Graphical estimation of sky view‐388 factors in urban environments. Journal of Climatology, 7(2), 193-197. 389 41. Zhang, Y., Chen, J.M. and Miller, J.R., 2005: Determining digital 390 hemispherical photograph exposure for leaf area index estimation. Agricultural 391 and Forest Meteorology, 133(1-4), pp.166-181. 
CLINICAL TRIAL DESIGN AND OUTCOME MEASURES (L NALDI, SECTION EDITOR)

Skin Cancer Prevention: Recent Evidence from Randomized Controlled Trials

Adèle C. Green & Catherine A. Harwood & John Lear & Charlotte Proby & Sudipta Sinnya & H. Peter Soyer

Published online: 5 July 2012
© Springer Science+Business Media, LLC 2012
Curr Derm Rep (2012) 1:123–130. DOI 10.1007/s13671-012-0015-9

Abstract  Despite the billions of health care dollars spent each year on treating skin cancer, there is a dearth of randomized controlled trials (RCTs) that have evaluated skin cancer prevention. RCTs published in the last 3 years that have directly assessed skin cancer prevention as their primary aim suggest that regular use of sunscreen is cost effective, but prolonged use of topical therapies such as tretinoin and 5-fluorouracil may not be. Sirolimus-based immunosuppression for secondary skin cancer prevention in long-term renal transplant recipients appears effective, but benefits may be offset by the adverse effects. Many RCTs using pre-invasive actinic keratoses (AKs) as endpoints are too small and/or too short to provide evidence on skin cancer prevention. Another stumbling block is the difficulty in reproducibly diagnosing and counting AKs in response to preventive agents. Longer term and better surveillance methods are urgently required to improve the quality of evidence from future RCTs.

Keywords  Skin cancer . Prevention . Randomized controlled trials . RCTs . Clinical trials . Outcome measures . Interventions . Sunscreen . Topical treatment . Basal cell carcinoma . BCC . Squamous cell carcinoma . SCC . Actinic keratosis . AK . 5-fluorouracil . 5-FU . Digital photography . Dermoscopy

A. C. Green (*): Cancer and Population Studies Group, Queensland Institute of Medical Research, 300 Herston Rd., Brisbane, Queensland 4006, Australia; University of Manchester, Manchester Academic Health Sciences Centre, Oxford Rd., Manchester M13 9PT, UK; Queensland Institute of Medical Research, PO Royal Brisbane Hospital, Brisbane, Queensland 4029, Australia; e-mail: adele.green@qimr.edu.au
C. A. Harwood: Centre for Cutaneous Research, Blizard Institute, Barts and the London School of Medicine and Dentistry, Queen Mary University of London, 4 Newark St., London E1 2AT, UK; e-mail: caharwood@doctors.org.uk
J. Lear: Departments of Dermatology, Salford Royal NHS Foundation Trust and Manchester Royal Infirmary and University of Manchester, Stott Ln., Salford M6 8HD, UK; e-mail: john.lear@cmft.nhs.uk
C. Proby: Division of Cancer Research, Medical Research Institute, University of Dundee, Ninewells Hospital & Medical School, Dundee DD1 9SY, Scotland, UK
S. Sinnya, H. P. Soyer: Dermatology Research Centre, The University of Queensland, School of Medicine, Princess Alexandra Hospital, Brisbane, QLD 4102, Australia; e-mail: s.sinnya@uq.edu.au; p.soyer@uq.edu.au

Introduction

Basal cell carcinoma (BCC) and squamous cell carcinoma (SCC) are by far the commonest cancers in white-skinned populations. It is estimated that in the United States alone in 2006, a total of 2,152,500 people were treated for around 3,507,700 such cancers [1]. Despite their relatively low mortality rates, such high incidence rates mean that the associated health care costs impose a substantial financial burden on health care systems [2]. Cutaneous melanoma, the other major type of skin cancer, affected around 10,342 people in Australia in 2007 [3] and some 68,700 people in the United States in 2009, with around 8,600 deaths [4]. Collective management of these skin cancers drives an expenditure that exceeds the total amount spent on other major cancers combined [5–7]. In the United States, skin cancer treatments cost an estimated $2 billion each year [8••].
Beyond these are further billions (around $1.2 billion in the United States [9•]) expended treating the related and highly prevalent skin lesions, actinic keratoses (AKs). AKs affect 5 % to 25 % of people in the United Kingdom and United States and up to 60 % in Australia [10, 11], and are one of the strongest risk factors for skin cancer. Exposure to solar UV radiation is the major environmental cause of skin cancer, and, recently, especially in temperate climates, exposure to artificial UV light such as sunbeds has contributed to carcinogenic exposure of the skin [12]. Theoretically, it should be straightforward to control a large proportion of skin cancers and their massive costs by implementing measures to decrease susceptible people's level of UV exposure. Adoption of sun-avoidance behaviors by fair-skinned people is an obvious and fundamental preventive measure. The U.S. Preventive Services Task Force has systematically reviewed all counseling measures aimed at preventing skin cancer [13•]. It found few rigorous counseling trials and concluded that, of all counseling strategies assessed, only those relevant to primary care settings increased skin-protective behaviors. Other diverse approaches to achieving skin-protective behavior changes have ranged from shade provision interventions in schools [14] to appearance-focused interventions based on decision-theoretical models of health behavior in young female indoor tanners [15]. A more direct interventional approach is to focus on skin cancer as the primary outcome, and trials that have taken this direct approach are the focus of this paper.

Scope of This Review

Randomized controlled trials (RCTs) published in the last 3 years that followed up 100 or more patients and were conducted with the explicit primary aim of evaluating the effectiveness of preventive measures in decreasing the incidence of BCC, SCC, or melanoma have been reviewed.
Intervention studies that aimed to prevent AKs in the long term (defined as at least 1 year) or, based on the premise that a proportion of AKs are premalignant SCC precursors, aimed to diminish AK burden as a long-term outcome and thus a means of achieving SCC prevention (although this was mostly implicit) have also been mentioned. Systematic reviews of relevant trials, including RCTs with outcomes assessed at less than 1 year given the high level of review evidence, have also been summarized. We begin by briefly considering findings from prevention trials that were published prior to 2009 and then review the most recent substantive published evidence. We conclude by discussing the controversies regarding the natural history of AKs and the difficulties of assessing the prevalence of AKs in large-scale clinical settings.

Pre-2009 Skin Cancer Prevention Trials and Systematic Reviews

RCTs published before April 2007 and conducted among people at high risk of developing keratinocyte cancer have been reviewed by the Cochrane Collaboration [16]. The authors noted that, because only 10 such studies had been conducted, data were very limited, and they summarized the results as follows. One trial that had assessed topical T4N5 liposome lotion containing DNA repair enzymes in patients with xeroderma pigmentosum, the rare inherited disorder [17], showed that annual rates of new AKs and BCCs were reduced, whereas annual rates of SCCs and melanomas were non-significantly raised in the intervention compared with placebo group. No significant adverse effects were seen. Six trials evaluated systemic retinoid therapies to prevent skin cancers in renal transplant recipients: half assessed acitretin and half other oral retinoids. Of the three that assessed acitretin, only one provided standard measures of outcome.
It showed no difference in time to first skin cancer when 30 mg per day acitretin was compared to placebo over 6 months, but there was a reduction in the number of new skin cancers in the first 12 months after intervention. All three acitretin trials reported frequent mucocutaneous side effects, and when adverse event data (evaluable in two of the trials) were pooled, an increased risk of having an adverse event (unspecified) with acitretin therapy could not be excluded (RR, 1.80; 95 % CI, 0.70–4.61) [16]. Among people with past keratinocyte cancer, results regarding new BCCs after oral retinoid or isotretinoin treatment were inconclusive, but risk of new SCC was raised in one trial and of adverse events in another [16]. In two trials of antioxidant supplements among people with a history of skin cancer, one found that selenium supplements significantly elevated the risk of a new SCC, whereas the other gave inconclusive evidence regarding beta-carotene supplements.
The authors concluded that imiquimod 5 % cream was effective in the treatment of AK and thus may potentially prevent the development of SCC [18]. Finally, a single community-based skin cancer prevention trial was conducted in Australia [19]. The Nambour Skin Cancer Prevention Trial evaluated daily sunscreen use and beta-carotene supplementation in the prevention of BCC and SCC. The trial showed that, in comparison with people randomized to using sunscreen at their discretion, if at all, people randomized to daily use of a broad-spectrum SPF15+ sunscreen had no reduction in BCC but a 40 % reduction in SCC tumors at the conclusion of the trial [20], a reduction maintained 8 years later [21]. The rate of acquisition of AKs was also reduced in the daily sunscreen group [22], as was the time to subsequent BCCs after the first BCC [23] in the trial period. There was no effect of beta-carotene supplementation on development of actinic tumors, malignant or benign [20, 22].

Recent Skin Cancer Prevention Trials and Systematic Reviews

There continues to be a paucity of evidence from RCTs that have directly set out to assess skin cancer prevention, and most that are recently published or ongoing are secondary rather than primary prevention. Findings from three RCTs with at least 100 participants have been published recently: one, a pragmatic trial in an Australian community and two, secondary skin cancer prevention trials in renal transplant recipients and U.S. veterans, respectively (Table 1). There have been two systematic reviews published in the last 3 years of 5-fluorouracil (5-FU) and photodynamic therapy for treatment of AKs (Table 1).

Community-Based Trial

Results of the long-term follow-up of participants of the aforementioned Nambour Skin Cancer Prevention Trial were recently published in relation to sunscreen allocation and prevention of new primary melanoma [24••].
It was shown that regular use of sunscreen by people in the high sun exposure setting of Queensland reduced the development of melanoma compared with discretionary sunscreen use. Investigation of the lifetime health costs and benefits of sunscreen promotion in the primary prevention of skin cancers, including melanoma, showed that routine sunscreen use by white populations residing in sunny settings is likely to be a highly cost-effective investment for governments and consumers over the long term [8••].

Trial in Renal Transplant Patients

The CONVERT open-label, randomized, multicenter trial found that keratinocyte skin cancer rates were reduced in long-term renal transplant recipients at 2 years after their conversion to sirolimus-based immunosuppression compared with continued calcineurin inhibitor immunotherapy [25••] (Table 1).

Trial in Patients with Multiple Past Basal Cell Carcinomas and Squamous Cell Carcinomas

The Veterans Affairs Topical Tretinoin Chemoprevention Trial found no difference in times to first BCC or SCC and no difference in AK counts between patients treated with topical 0.1 % tretinoin compared with matching placebo over 1.5 to 5.5 years [26••].

Systematic Reviews of Treatment of Multiple Prevalent Actinic Keratoses

A systematic review of 5-FU in the treatment of prevalent AKs found a greater reduction in the mean number of AKs with both 5 % 5-FU and 0.5 % 5-FU compared with placebo, but not compared with laser resurfacing [27•]. However, two thirds of patients required retreatment after 1 year, and up to half of those treated were unable to complete treatment because of adverse effects. The systematic review of photodynamic therapy in the treatment of prevalent AKs found photodynamic therapy to be superior to placebo, but there was insufficient evidence regarding its comparison with other treatments [28•] (Table 1).

Ongoing Skin Cancer Prevention Trials

The U.S.
National Institutes of Health Clinical Trials registry [29] currently lists 66 RCTs of AK treatment. Of those that have recently updated information available, only two appear to have started in the last 5 years, with 100 or more patients followed up for at least 12 months. In a phase II trial, the efficacy of afamelanotide (formerly CUV1647), a chemical analogue of alpha-melanocyte stimulating hormone, in reducing AKs and SCCs in around 200 organ transplant patients is being investigated (NCT00829192). Another RCT is studying the long-term effects of treatment of skin areas with 5 to 10 AKs with imiquimod 5 % cream versus diclofenac 3 % gel with respect to the risk of progression to in situ and invasive SCC at 3 years in around 250 immunocompetent patients (NCT01453179). There are at least two other ongoing randomized controlled trials of skin cancer prevention, both in immunosuppressed patients (N>100), of which the authors are aware and/or directly involved. A multicenter-sponsored (Spirig Pharma Ltd., Switzerland) RCT examining the role of sunscreen in skin cancer prevention is currently in progress in Turkey and 11 countries across Europe. In this study, 300 organ transplant recipients at risk of skin cancer are being randomized to discretionary use of sunscreen in a galenically improved, dosable version of the liposomal sunscreen reported in a smaller study [30] of organ transplant recipients in Germany. This current 2-year study will examine reduction of AK and development of new skin cancers (as well as viral warts) as primary outcome measures (C. Ulrich, personal communication). The aims of another scheduled trial, the O3A Trial, are to evaluate omega-3 polyunsaturated fatty acid supplementation (4 g per day) versus placebo (double blind) in the prevention of BCC and SCC in 340 organ transplant patients in a 2-year intervention period.
It is being conducted by a multicenter team, including some of the authors (A. C. Green, C. A. Harwood, J. Lear, and H. P. Soyer) in Australia and the United Kingdom.

Table 1  Randomized controlled trials and systematic reviews of skin cancer prevention/treatment of AKs published since 2009

Nambour Skin Cancer Trial (Green et al. [24••])
- Participants: 1621 randomly selected community members; 44 % male
- Intervention and control groups: daily application of sunscreen to head and arms vs discretionary sunscreen use; beta-carotene 30 mg vs placebo, from 1992 to 1996
- Outcomes: reduced new primary skin melanoma in the daily sunscreen group from 1993 to 2006 (HR, 0.50; 95 % CI, 0.24–1.02); no effect of beta-carotene
- Comments: melanomas histologically reviewed by two pathologists blinded to treatment allocation

CONVERT Trial (Alberu et al. [25••])
- Participants: 830 renal transplant patients; ages 25–75 y; 69 % male
- Intervention and control groups: conversion to SRL-based vs continued CNI immunotherapy for 2 y
- Outcomes: lower keratinocyte skin cancer rates after SRL-based, CNI-free immunotherapy (1.2 vs 4.3, P<0.001)
- Comments: adverse events and withdrawal rates higher with SRL-based immunotherapy at 2 y (26 % vs 20 %, P<0.07)

VATTC Trial (Weinstock et al. [26••])
- Participants: 1131 U.S. veterans with 2+ BCC or SCC in the past 5 y; 59 % aged >70 y
- Intervention and control groups: topical 0.1 % tretinoin vs matching placebo for 1.5–5.5 y
- Outcomes: no difference in times to first BCC (P=0.3) or first SCC (P=0.4) between groups; no difference in AK counts
- Comments: worse symptoms in the intervention group at 1 y

Systematic review of 5-FU and AKs (Askew et al. [27•])
- Participants: 13 studies involving 865 patients with multiple prevalent AKs
- Intervention and control groups: topical 5-FU (5 % or 0.5 %) vs a variety of treatments* for 4 w–12 m
- Outcomes: average reduction in mean number of AKs: 80 % (range, 59 %–100 %) with 5 % 5-FU and 86 % with 0.5 % 5-FU, vs 95 % with laser resurfacing and 28.0 % with placebo
- Comments: 66 % required retreatment after 1 y; up to 50 % unable to complete treatment because of adverse effects

Systematic review of photodynamic therapy and AKs (Fayter [28•])
- Participants: 28 studies involving 2611 patients with multiple prevalent AKs
- Intervention and control groups: PDT vs cryotherapy; placebo PDT (cream); cryotherapy; 5-FU; imiquimod; different PDT parameters, for various durations
- Outcomes: MAL-PDT superior to placebo (pooled odds of clearance at 3 m: OR, 8.05; 95 % CI, 5.50–11.79)
- Comments: insufficient evidence regarding PDT compared with other treatments

*CO2 laser or 30 % trichloroacetic acid peel; cryotherapy; imiquimod 5 % cream; diclofenac sodium 3 % gel; PDT; 5-FU augmented with tretinoin; placebo; 5-aminolevulinic acid PDT, activated with either blue or pulsed laser light

Abbreviations: 5-FU 5-fluorouracil; AK actinic keratosis; AKs actinic keratoses; BCC basal cell carcinoma; CNI calcineurin inhibitor; HR hazard ratio; MAL methyl aminolevulinate; PDT photodynamic therapy; SCC squamous cell carcinoma; SRL sirolimus

Controversies Regarding Actinic Keratoses

Since the majority of skin cancer prevention trials have been aimed at secondary prevention of SCC, many RCTs have focused on the treatment of AKs as the trial endpoint, since AKs are presumed to be surrogate biomarkers of SCC, as noted previously. This is a difficult area in many ways given the controversies surrounding the malignant potential of AKs and the assumption, but lack of conclusive evidence, that clearance of AK prevents SCC. In addition, even if the AK intervention may be assumed to prevent SCC, it is not known whether complete clearance is required and/or whether persistent clearance or maintenance of clearance over an unknown period is required to achieve this. There are also controversies at the clinical level; that is, it is not known what type of AKs are most closely associated with SCC.
For example, are small, barely palpable ones less of a risk than large, inflamed hyperkeratotic ones? The challenges around clinical diagnosis and quantification have not been addressed often in published studies but have important implications for interpreting study validity in terms of intervention agents' long-term efficacy in treating and/or preventing AK, as well as their putative role in SCC prevention.

Definition

AKs are scaly skin lesions of variable size that are pink or red, often asymptomatic, and arising on the face, bald scalp, backs of hands and forearms, or other sites where cumulative UV light-induced changes have occurred. An early AK is flat, pink, and mildly scaly and is often appreciated more readily by palpation than visualization because of its distinctive roughened, sandpaper-like quality. As lesions progress, they become infiltrated, and the scaling may become increasingly more prominent, culminating in markedly hyperkeratotic AK and cutaneous horns. Adjacent AKs may merge into one another, thereby producing a field of abnormal skin. Such “field change” or “field cancerization” indicates severe photodamage (Fig. 1) and sites at which SCC preferentially develop. A proposed clinical grading of “mild,” “moderate,” and “severe” aims to reflect the progressive infiltration and hyperkeratosis of AK. Histopathologically, there is partial-thickness dysplasia and loss of stratification of the epidermis, such that histological grading systems have also been devised to reflect the extent of epidermal dysplasia: the so-called keratinocyte intraepithelial neoplasia grades I, II, and III [31, 32].

Natural History and Rate of Malignant Transformation

The natural history of AK is poorly understood but is believed to involve turnover of prevalent individual lesions (regression and recurrence) as well as development of new lesions [10].
Although some AKs clearly evolve into SCCs, the majority do not, and the risk of progression to invasive malignancy is unknown [33] but much debated. A systematic review of 15 studies showed estimated progression rates of between 0.025 % and 20 % per year, per lesion [34]. A more recent study from the United States that prospectively followed 169 patients with 7784 AKs found that the risk of progression for a specific AK was 2.57 % at 4 years, and that 65 % of all primary SCCs arose directly in lesions previously diagnosed as AKs [35]. Cited rates of transformation into SCC are almost certainly overestimates, since even the most conservative have been based on successive clinical counts of AK of subjects' skin at widely varying intervals, using counting techniques of uncertain reliability and validity [30]. Indeed, there remains a lack of agreement among histopathologists regarding the diagnosis of SCC and AK [36••] because of the continuous clinicopathologic spectrum from benign atypical keratotic lesions to invasive malignant lesions [2].

Fig. 1  Multiple AKs merge with widespread field changes in severely photodamaged skin

Monitoring Actinic Keratoses

Evaluating the efficacy of AK therapies requires quantification of AK burden before and after treatment. For the purposes of clinical studies, distinguishing features for AK have not been well established. In immunosuppressed organ transplant recipients, the clinical picture is further complicated by the increased prevalence of verrucokeratotic lesions, most of them viral warts, which are often present in areas of field cancerization and are clinically (or sometimes histologically) indistinguishable from AK [37]. Many studies have validated clinical diagnosis of AK by biopsying lesions, but how target lesions were selected has not been stated often.
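To compare the per-lesion figures above on a common footing, a constant annual progression probability p implies a cumulative risk of 1 − (1 − p)^n after n years, assuming independence between years. A minimal sketch; the 0.65 % annual rate is an illustrative value chosen to reproduce the reported 2.57 % 4-year risk, not a figure from the cited studies:

```python
def cumulative_risk(annual_rate: float, years: int) -> float:
    """Cumulative per-lesion progression risk from a constant annual
    probability, assuming independence between years."""
    return 1.0 - (1.0 - annual_rate) ** years

# An illustrative annual per-lesion rate of ~0.65% compounds to roughly
# the 2.57% 4-year risk reported in the prospective U.S. study, and sits
# well within the 0.025%-20% per-year range of the systematic review.
risk_4y = cumulative_risk(0.0065, 4)
print(f"{risk_4y:.2%}")  # prints 2.57%
```

This simple compounding model also illustrates why per-lesion annual rates look small while multi-year cumulative risks, and per-patient risks across many lesions, are appreciably larger.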
Diagnosis of discrete, clinically typical AK is relatively easy, but in patients with multiple and atypical lesions, particularly where lesions merge to give a field of skin abnormality (Fig. 1), diagnosis and counting of AKs is far from straightforward and may be almost impossible [36••]. Previous studies of treatment and natural history of AK have usually quantified lesions by counting alone, yet counting of AK as a technique for evaluating therapeutic efficacy may be unreliable, even when performed by experienced dermatologists, since interobserver reproducibility is poor [26••] and may inadequately account for spontaneous regression of AK [30]. Joint discussion of discrepancies by observers may enhance the reliability of these counts, although substantial variation remains [38]. The importance of developing and validating techniques to enhance the reliability of AK assessment has been highlighted, but the problem remains unresolved [30, 38].

Role of Digital Photography and Dermoscopy for Imaging of Actinic Keratoses

Standard photographic evaluation, blinded, has been suggested as perhaps the most reliable technique, but this is also untested [30]. More recently, digital photography of regions of the body such as the face, scalp, and dorsa of hands and forearms has been advocated for early detection and surveillance of keratinocyte skin cancer. Advantages of imaging AKs with digital photographs over counting AKs, especially on severely photodamaged skin (Fig. 1), are the relative ease and speed with which an established imaging protocol can be carried out. Digital photography also allows clinicians to detect smaller or nonspecific lesions, which may otherwise be missed or passed over, especially in a busy outpatient clinic. Digital images can be stored for successive comparisons during repeated follow-up, as in clinical trials, using similar protocols to those now established for naevus surveillance [39].
On the other hand, the role of dermoscopy in assessing large numbers of AKs over time is not established. To date, most of the dermoscopic literature on AK has focused on differentiating pigmented AKs from lentigo maligna [40], although recently a progression model of facial AK has been proposed based on a large series of dermoscopic images of AKs developing into invasive SCCs [41]. However, the added value of dermoscopic examination of individual AKs among some thousands of AKs being followed during large skin cancer prevention trials seems quite low, and even lower for confocal microscopy. Ultimately, it is hoped that high-definition regional photography in combination with available image-analysis tools will enhance clinical examination to achieve accurate and reproducible counts and assessments of AKs.

Conclusion

The number of recently published skin cancer prevention trials is small in inverse proportion to the need for them to better control the vast health costs associated with skin cancer treatment. Sunscreen emerges as a cost-effective preventive agent for people living in sunny places. There is inconsistent evidence regarding the long-term efficacy of topical treatments such as retinoids and 5-FU, and AK clearance in the short term is often associated with side effects. Several skin cancer prevention trials evaluating a range of possible preventive agents in immunosuppressed patients are currently underway, but the lack of validity and reproducibility of standard clinical and histopathological assessment of AK as a trial endpoint is an ongoing limitation. There is an urgent need to address these methodological challenges of diagnosis, quantification, and surveillance of AK burden, as well as a need for future adequately powered, well-designed, long-term RCTs for strengthened evidence regarding skin cancer prevention.

Disclosure  A. C. Green: grants from National Health and Medical Research Council and L'Oreal; C. A. Harwood:; J.
Lear: payment for giving lectures from Galderma, Leo, Almirall and payment for development of educational presentations from Galderma; C. Proby: none; S. Sinnya: none; H. P. Soyer: consulting fees and stock options from MoleMap Australia.

References

Papers of particular interest, published recently, have been highlighted as: • Of importance; •• Of major importance

1. Rogers HW, Weinstock MA, Harris AR, Hinckley MR, Feldman SR, Fleischer AB, Coldiron BM. Incidence estimate of nonmelanoma skin cancer in the United States, 2006. Arch Dermatol. 2010;146(3):283–7.
2. Soyer HP, Rigel DS, Wurm EMT. Dermatology. In: Bolognia JL, Jorizzo JL, Schaffer JV (editors). Vol 1. Actinic keratosis, basal cell carcinoma and squamous cell carcinoma. 3rd edn. Mosby/Elsevier, 2012.
3. AIHW. Cancer survival and prevalence in Australia: cancers diagnosed from 1982 to 2004. Canberra; 2008.
4. Jemal A, Saraiya M, Patel P, Cherala SS, Barnholtz-Sloan J, Kim J, et al. Recent trends in cutaneous melanoma incidence and death rates in the United States, 1992–2006. J Am Acad Dermatol. 2011;65(5):S17–S25.e13.
5. Housman TS, Feldman SR, Williford PM, Fleischer Jr AB, Goldman ND, Acostamadiedo JM, Chen GJ. Skin cancer is among the most costly of all cancers to treat for the Medicare population. J Am Acad Dermatol. 2003;48(3):425–9.
6. Staples MP, Elwood M, Burton RC, Williams JL, Marks R, Giles GG. Non-melanoma skin cancer in Australia: the 2002 national survey and trends since 1985. Med J Aust. 2006;184(1):6–10.
7. Weinstock MA. The struggle for primary prevention of skin cancer. American Journal of Preventive Medicine. 2008;34(2):171–2.
8. •• Hirst NG, Gordon LG, Scuffham PA, Green AC. Lifetime cost-effectiveness of skin cancer prevention through promotion of daily sunscreen use. Value Health. 2012;15(2):261–8.
This paper provides a newly published analysis of the cost-effectiveness of sunscreen application for skin cancer prevention based on RCT evidence.
9. • Neidecker MV, Davis-Ajami ML, Balkrishnan R, Feldman SR. Pharmacoeconomic considerations in treating actinic keratosis. Pharmacoeconomics. 2009;27(6):451–64. This paper illustrates the economic importance of preventing rather than treating AKs.
10. Frost C, Williams G, Green A. High incidence and regression rates of solar keratoses in a Queensland community. J Invest Dermatol. 2000;115(2):273–7.
11. Holmes C, Foley P, Freeman M, Chong AH. Solar keratosis: epidemiology, pathogenesis, presentation and treatment. Australas J Dermatol. 2007;48(2):67–76.
12. El Ghissassi F, Baan R, et al. A review of human carcinogens–part D: radiation. Lancet Oncol. 2009;10:751–2.
13. • Lin JS, Eder M, Weinmann S. Behavioral counseling to prevent skin cancer: a systematic review for the U.S. Preventive Services Task Force. Annals of Internal Medicine. 2011;154(3):190–201. This systematic review of studies of behavioral counseling to prevent skin cancer complements the present review, which does not cover behavioral counseling RCTs, since none have skin cancer as primary endpoint.
14. Dobbinson SJ, White V, Wakefield MA, Jamsen KM, White V, Livingston PM, English DR, Simpson JA. Adolescents' use of purpose built shade in secondary schools: cluster randomised controlled trial. BMJ. 2009;338.
15. Hillhouse J, Turrisi R, Stapleton J, Robinson J. A randomized controlled trial of an appearance-focused intervention to prevent skin cancer. Cancer. 2008;113(11):3257–66.
16. Bath-Hextall F, Leonardi-Bee J, Somchand N, Webster A, Delitt J, Perkins W. Interventions for preventing non-melanoma skin cancers in high-risk groups. Cochrane Database of Systematic Reviews (Online) (4). 2007.
17. Yarosh D, Klein J, O'Connor A, Hawk J, Rafal E, Wolf P.
Effect of topically applied T4 endonuclease V in liposomes on skin cancer in xeroderma pigmentosum: a randomised study. Lancet. 2001;357 (9260):926–9. 18. Hadley G, Derry S, Moore RA. Imiquimod for actinic keratosis: systematic review and meta-analysis. J Invest Dermatol. 2006;126 (6):1251–5. 19. Green A, Battistutta D, Hart V, Leslie D, Marks G, Williams G, Gaffney P, Parsons P, Hirst L, Frost C, et al. The Nambour skin cancer and actinic eye disease prevention trial: design and baseline characteristics of participants. Control Clin Trials. 1994;15 (6):512–22. 20. Green A, Williams G, Nèale R, Hart V, Leslie D, Parsons P, Marks GC, Gaffney P, Battistutta D, Frost C, et al. Daily sunscreen application and betacarotene supplementation in prevention of basal-cell and squamous-cell carcinomas of the skin: a randomised controlled trial. Lancet. 1999;354(9180):723–9. 21. van der Pols JC, Williams GM, Pandeya N, Logan V, Green AC. Prolonged prevention of squamous cell carcinoma of the skin by regular sunscreen use. Cancer Epidemiology Biomarkers & Pre- vention. 2006;15(12):2546–8. 22. Darlington S, Williams G, Neale R, Frost C, Green A. A randomized controlled trial to assess sunscreen application and beta carotene supplementation in the prevention of solar keratoses. Arch Dermatol. 2003;139(4):451–5. 23. Pandeya N, Purdie DM, Green A, Williams G. Repeated occurrence of basal cell carcinoma of the skin and multifailure survival analysis: follow-up data from the nambour skin cancer prevention trial. Am J Epidemiol. 2005;161(8):748–54. 24. •• Green AC, Williams GM, Logan V, Strutton GM. Reduced melanoma after regular sunscreen use: randomized trial follow- up. Journal Clinical Oncology. 2011;29(3):257–63. This report is the first RCT evidence that sunscreen application can prevent melanoma. 25. •• Alberu J, Pascoe MD, Campistol JM, Schena FP, del Carmen Rial M, Polinsky M, et al. 
Lower malignancy rates in renal allo- graft recipients converted to sirolimus-based, calcineurin inhibitor- free immunotherapy: 24-month results from the CONVERT Trial. Transplantation 2011;92:303–10. This report provides RCT evi- dence that sirolimus-based immunosuppression can prevent skin cancer in renal transplant recipients who are at high risk of skin cancer. 26. •• Weinstock MA, Bingham SF, DiGiovanna JJ, Rizzo AE, Marcolivio K, Hall R, Eilers D, Naylor M, Kirsner R, Kalivas J et al. Tretinoin and the prevention of keratinocyte carcinoma (basal and squamous cell carcinoma of the skin): a veterans affairs randomized chemoprevention trial. J Invest Dermatol. 2012. This paper provides the results of a well-conducted trial of tretinoin use for skin cancer prevention. 27. • Askew DA, Mickan SM, Soyer HP, Wilkinson D. Effectiveness of 5-fluorouracil treatment for actinic keratosis – a systematic review of randomized controlled trials. International Journal Der- matology. 2009;48(5):453–63. This systematic review of 5-FU and AK treatment includes both small, short-term studies and large, longer term RCTs. 28. • Fayter D, Corbett M, Heirs M, Fox D, Eastwood A. A systematic review of photodynamic therapy in the treatment of pre-cancerous skin conditions, Barrett’s oesophagus and cancers of the biliary tract, brain, head and neck, lung,oesophagus and skin. 2010. Health technology assessment (Winchester, England) 14(37): 1–288. This quality systematic review of photodynamic therapy includes treatment of AKs. 29. ClinicalTrials.gov, US National Instit Health [cited 2012 April]; Available from: [www.clinicaltrials.gov]. 30. Epstein E. Quantifying actinic keratosis: assessing the evidence. Am J Clin Dermatol. 2004;5(3):141–4. 31. Roewert-Huber J, Stockfleth E, Kerl H. Pathology and pathobiol- ogy of actinic (solar) keratosis – an update. Br J Dermatol. 2007;157:18–20. 32. Ramos-Ceballos FI, Ounpraseuth ST, Horn TD. 
Diagnostic con- cordance among dermatopathologists using a three-tiered keratino- cytic intraepithelial neoplasia grading scheme. J Cutan Pathol. 2008;35(4):386–91. 33. Ko CJ. Actinic keratosis: facts and controversies. Clin Dermatol. 2010;28(3):249–53. 34. Quaedvlieg PJ, Tirsi E, et al. Actinic keratosis: how to differentiate the good from the bad ones? Eur J Dermatol. 2006;16:335–9. 35. Criscione VD WM, et al. Actinic keratoses: natural history and risk of malignant transformation in the Veterans Affairs Topical Tretinoin Chemoprevention Trial. Cancer 2009;115:2523–30. Curr Derm Rep (2012) 1:123–130 129 http://www.clinicaltrials.gov 36. •• Heal CF, Weedon D, Raasch BA, Hill BT, Buettner PG: Agree- ment between histological diagnosis of skin lesions by histopathol- ogists and a dermato-histopathologist. International Journal of Dermatology 2009;48(12):1366–9. This paper, which discusses many of the difficult issues relating to AK assessment, is important for its discussion of histopathological as well as clinical aspects. 37. Euvrard S, Boissonnat P, Roussoulières A, Kanitakis J, Decullier E, Claudy A, Sebbag L. Effect of everolimus on skin cancers in calcineurin inhihitor-treated heart transplant recipients. Transpl Int. 2010;23(8):855–7. 38. Weinstock MA, Bingham SF, Cole GW, et al. Reliability of count- ing actinic keratoses before and after brief consensus discussion: the VA topical tretinoin chemoprevention (VATTC) trial. Arch Dermatol. 2001;137:1055–8. 39. Wurm E. SH: Scanning for melanoma. Australian Prescriber. 2010;33:150–5. 40. Akay BN, Kocyigit P, Heper AO, Erdem C. Dermatoscopy of flat pigmented facial lesions: diagnostic challenge between pigmented actinic keratosis and lentigo maligna. Br J Dermatol. 2010;163 (6):1212–7. 41. Zalaudek I, Whiteman D, Rosendahl C, Menzies SW, Green AC, Hersey P, Argenziano G. Update on melanoma and non- melanoma skin cancer. Expert Rev Anticancer Ther. 2011;11 (12):1829–32. 
Curr Derm Rep (2012) 1:123–130. Skin Cancer Prevention: Recent Evidence from Randomized Controlled Trials. Contents: Abstract; Introduction; Scope of This Review; Pre-2009 Skin Cancer Prevention Trials and Systematic Reviews; Recent Skin Cancer Prevention Trials and Systematic Reviews; Community-Based Trial; Trial in Renal Transplant Patients; Trial in Patients with Multiple Past Basal Cell Carcinomas and Squamous Cell Carcinomas; Systematic Reviews of Treatment of Multiple Prevalent Actinic Keratoses; Ongoing Skin Cancer Prevention Trials; Controversies Regarding Actinic Keratoses; Definition; Natural History and Rate of Malignant Transformation; Monitoring Actinic Keratoses; Role of Digital Photography and Dermoscopy for Imaging of Actinic Keratoses; Conclusion; References.

work_syvoxhw7gvekjkj54zrriypywi ---- untitled Recovering real-world scene: high-quality image inpainting using multi-exposed references. Z.J. Zhu, Z.G. Li, S. Rahardja and P. Fränti

A novel method of high-quality image inpainting for recovering an original scene from degraded images using reference images of different exposures is proposed. It consists of a new inter-pixel relationship function and its refinement to synthesise missing pixels from existing spatially co-related pixels, and a dual patching to minimise the noise caused by dynamic range loss. Experiments on the method have been conducted and the results demonstrate the reliability of the proposed method.

Introduction: Images of the same scene can be captured with different exposures and combined with computing power to synthesise an image that overcomes the limitations of conventional cameras. However, useful data can be lost owing to camera shake, especially when capturing with a hand-held device, which generates noticeable artefacts in the synthesised image.
In other words, unlike traditional image inpainting [1, 2] and image completion [3], which generate only photorealistic patches, degraded image inpainting in digital photography requires the true luminance values of the real-world scene. Therefore, the challenge of patching is to find useful relations between the missing pixels and the remaining pixels. The camera-response-function (CRF)-based fixing method [4, 5] uses only inter-image relationships. Unfortunately, the patched pixel is then just a luminance shift of the reference pixel, and cannot represent the pixel value at the correct exposure of the degraded image. Motivated by these observations, we propose a new method using a refined inter-pixel relationship function (IRF) with both inter-image and intra-image correlations to recover the real scene luminance reliably, and a dual patching to reduce the dynamic range loss, which further enhances the inpainting accuracy.

Fig. 1 Three differently exposed images, with a degraded image due to camera shake. The pixel value of B can be copied from A, as their co-locations A′ and B′ have the same intensity in the reference image. Details inside the rectangular box are displayed in Fig. 3

Fig. 2 IRF curves: a initial values; b after smoothing with a median filter; c after extending empty values at the ends. The x-axis is the intensity of the reference image; the y-axis is the calculated co-location intensity in the degraded image

Inter-pixel relationship function: As shown in Fig. 1, the pixels A′ and B′ have the same intensity in the reference image. According to photography reciprocity, when the exposure time changes, the pixel values of A′ and B′ change correspondingly. Intuitively, the missing
value of B can be copied from A in the degraded image. However, during the image capturing process, sensor noise, sampling noise and compression noise are commonly generated. Thus, it is more accurate to find all the pixels with the same intensity as B′ in the reference image (\hat{Z}) and to average their co-location values in the degraded image (Z). We define the IRF as

\psi_c(\hat{Z}_c(x, y)) = \frac{\sum_{(\tilde{x}, \tilde{y}) \in V(\hat{Z}_c(x, y))} Z_c(\tilde{x}, \tilde{y})}{|V(\hat{Z}_c(x, y))|}    (1)

where c is the colour channel, and |V(\hat{Z}_c(x, y))| is the cardinality of the set of spatially co-related pixels V(\hat{Z}_c(x, y)), which is defined as

V(\hat{Z}_c(x, y)) = \{ (\tilde{x}, \tilde{y}) \mid \hat{Z}_c(\tilde{x}, \tilde{y}) = \hat{Z}_c(x, y) \}    (2)

The IRF has three useful characteristics inherited from the physical camera response:
Char1: the IRF is a monotonically increasing function;
Char2: the pixels located at the left end (dark pixels) and the right end (bright pixels) are highly compressed owing to the dynamic range limit;
Char3: when choosing different reference images, a shorter exposure time leads to a smaller slope at the left end and a bigger slope at the right end.

An 'empty value' problem arises in the raw IRF when the reference image does not span the whole dynamic range, as shown in Fig. 2. Using Char1, the refinement has two steps. First, a median filter is applied, starting from the middle of the valid values and moving towards the left and right ends separately. The median filter corrects monotonicity errors and recovers the empty values between the valid values. The second step extends the two ends of the curve using the neighbourhood slope. The refined IRF is defined as

\Psi_c(z) = \mathrm{extend}(\mathrm{median}(\psi_c(z))), \quad z \in [0, 255]    (3)

Fig.
3 Degraded area and patches: a original degraded image; b fixed image using exemplar-based inpainting; c fixed image using CRF; d fixed image using our method; e HDR image synthesised with the degraded image; f HDR image synthesised with the patched image

Dual patching: To increase the accuracy, the reference image is selected to have the smallest exposure difference from the degraded image. However, the dynamic range loss caused by Char2 is still inevitable. If the reference image has a shorter exposure time than the degraded image, then, as can be seen from Char3, the dark pixels in the reference image are mapped from a big dynamic range to a small one. In other words, it is a compressing process mapping multiple values to one, which in turn makes the IRF in this area reliable. On the other hand, a highly compressed bright pixel in the reference image is mapped into multiple values in the degraded image, which causes inaccuracy owing to the dynamic range loss. Thus, multiple reference images, with longer and shorter exposure times, respectively, can be adopted to recover the lost dynamic range and enhance the patching accuracy. The missing pixel intensity is calculated by

P_c = \frac{\dot{\Psi}_c(\dot{Z})\, W(\max(\dot{Z}_R, \dot{Z}_G, \dot{Z}_B)) + \ddot{\Psi}_c(\ddot{Z})\, W(\max(\ddot{Z}_R, \ddot{Z}_G, \ddot{Z}_B))}{W(\max(\dot{Z}_R, \dot{Z}_G, \dot{Z}_B)) + W(\max(\ddot{Z}_R, \ddot{Z}_G, \ddot{Z}_B))}    (4)

where \dot{Z} and \ddot{Z} are the intensities of the two reference images at the same co-location, c is the colour channel (c = R, G, B) and W is the weighting function defined as

W(z) = \begin{cases} \log(z + 1), & EV(\text{ref}) > EV(\text{degraded}) \\ \log(256 - z), & EV(\text{ref}) < EV(\text{degraded}) \end{cases}, \quad z \in [0, 255]    (5)

Result: As shown in Fig. 3, the degraded area destroys the integrity of the original image composition. Clearly, the exemplar-based inpainting algorithm [2] works well on simple texture, such as the table top. However, obvious errors can be seen for complex content, like the baby's face and the title of the book, because of the shortage of reference material.
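The IRF and dual-patching computations described above can be sketched in a few lines of NumPy (an illustrative reduction we added, not the authors' implementation: the refinement below uses linear interpolation plus a cumulative-maximum pass as a simplified stand-in for the outward median filtering of Eq. (3), and all names are ours):

```python
import numpy as np

def irf(ref, deg, missing):
    """Raw IRF of Eq. (1): for each intensity z of the reference image,
    average the co-located surviving pixels of the degraded image.
    Returns a 256-entry lookup table with NaN where no pixel has value z."""
    table = np.full(256, np.nan)
    valid = ~missing                      # pixels not destroyed by the shake
    for z in range(256):
        sel = (ref == z) & valid
        if sel.any():
            table[z] = deg[sel].mean()
    return table

def refine(table):
    """Refinement in the spirit of Eq. (3): fill empty bins, enforce
    monotonicity (Char1), then extend both ends with the neighbouring slope."""
    idx = np.where(~np.isnan(table))[0]
    lo, hi = idx[0], idx[-1]
    filled = np.interp(np.arange(256), idx, table[idx])
    filled[lo:hi + 1] = np.maximum.accumulate(filled[lo:hi + 1])
    slope_l = filled[lo + 1] - filled[lo]
    slope_r = filled[hi] - filled[hi - 1]
    filled[:lo] = filled[lo] - slope_l * (lo - np.arange(lo))
    filled[hi:] = filled[hi] + slope_r * (np.arange(hi, 256) - hi)
    return np.clip(filled, 0.0, 255.0)

def dual_patch(z_long, z_short, table_long, table_short):
    """Blend in the spirit of Eqs. (4)-(5), one channel: trust the
    longer-exposure reference for dark pixels (weight log(z+1)) and the
    shorter-exposure reference for bright pixels (weight log(256-z))."""
    w_long = np.log(z_long + 1.0)
    w_short = np.log(256.0 - z_short)
    return (table_long[z_long] * w_long +
            table_short[z_short] * w_short) / (w_long + w_short)
```

Feeding each colour channel through `irf`/`refine` once per reference image, then combining the two lookups per missing pixel with `dual_patch`, mirrors the pipeline described in the text.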
The CRF method [4] recovers the content by a luminance shift from the reference pixels, and an obvious artefact can be seen at the border. Our algorithm restores the original scene effectively. Table 1 shows the peak signal-to-noise ratio (PSNR) of each method.

Table 1: PSNR comparison

Method                         PSNR (dB)
Exemplar-based inpainting [2]  11.75
CRF method [4]                 20.54
Our method                     33.66

In addition, we have tested our algorithm in high dynamic range (HDR) image synthesis [6]. The border artefact generated by the degraded image in Fig. 3e is completely removed after patching using our algorithm, as shown in Fig. 3f.

Conclusions: This Letter describes a new image inpainting method to patch a degraded image in an exposure set. As it uses all the relations of the valid pixels with the refined IRF and reconstructs the missing area by dual patching, it demonstrates better quality than other algorithms. Experimental results with HDR image synthesis further verify the efficiency of the proposed method.

© The Institution of Engineering and Technology 2009. 23 September 2009. doi: 10.1049/el.2009.2686

Z.J. Zhu, Z.G. Li and S. Rahardja (Department of Signal Processing, Institute for Infocomm Research, A*STAR, 1 Fusionopolis Way, 21-01 Connexis, 138632, Singapore) E-mail: zhuzj@i2r.a-star.edu.sg
P. Fränti (Department of Computer Science, University of Joensuu, PO Box 111, Joensuu 80101, Finland)

References
1 Hung, J.C., Hwang, C.H., Liao, Y.C., Tang, N.C., and Chen, T.J.: 'Exemplar-based image inpainting based on structure construction', J. Software, 2008, 3, (8), pp. 57 – 64
2 Criminisi, A., Pérez, P., and Toyama, K.: 'Region filling and object removal by exemplar-based image inpainting', IEEE Trans. Image Process., 2004, 13, (9), pp. 1200 – 1212
3 Hays, J., and Efros, A.A.: 'Scene completion using millions of photographs'. SIGGRAPH, San Diego, CA, USA, August 2007, article No.
4
4 Grosch, T.: 'Fast and robust high dynamic range image generation with camera and object movement'. Proc. of Vision, Modeling and Visualization, Aachen, Germany, 2006, pp. 277 – 284
5 Mann, S.: 'Comparametric equations with practical applications in quantigraphic image processing', IEEE Trans. Image Process., 2000, 9, (8), pp. 1389 – 1406
6 Debevec, P., and Malik, J.: 'Recovering high dynamic range radiance maps from photographs'. SIGGRAPH, NJ, USA, 1997, pp. 130 – 135

ELECTRONICS LETTERS 3rd December 2009 Vol. 45 No. 25

work_t33almrgtjeoperzvbva3d7sta ---- Enk et al. BMC Research Notes 2010, 3:115 http://www.biomedcentral.com/1756-0500/3/115 Open Access Short Report

A salting out and resin procedure for extracting Schistosoma mansoni DNA from human urine samples. Martin J Enk*1, Guilherme Oliveira e Silva2 and Nilton B Rodrigues2,3

Abstract. Background: In this paper a simple and cheap salting-out and resin (InstaGene matrix® resin - BioRad) DNA extraction method from urine for PCR assays is introduced. The DNA of the fluke Schistosoma mansoni was chosen as the target, since schistosomiasis lacks a suitable diagnostic tool that is sensitive enough to detect low worm burdens. It is well known that the PCR technique provides high sensitivity and specificity in detecting parasite DNA. It is therefore of paramount importance to take advantage of its excellent performance by providing a simple-to-handle and reliable DNA extraction procedure, which permits the diagnosis of the disease in easily obtainable urine samples. Findings: A description of the extraction procedure is given. This extraction procedure was tested for reproducibility and efficiency in artificially contaminated human urine samples. The reproducibility reached 100%, with positive results in 5 assay repetitions of 5 tested samples, each containing 20 ng DNA/5 ml.
The efficiency of the extraction procedure was also evaluated in a serial dilution of the original 20 ng DNA/5 ml sample. Detectable DNA was extracted when it was present at a concentration of 1.28 pg DNA/mL, revealing the high efficiency of this procedure. Conclusions: This methodology represents a promising tool for schistosomiasis diagnosis utilizing a bio-molecular technique in urine samples, which is now ready to be tested under field conditions and may be applicable to the diagnosis of other parasitic diseases.

Introduction: Schistosomiasis caused by Schistosoma mansoni is a major public health problem in countries of Latin America, the Caribbean and Africa [1,2]. Routinely, the diagnosis of this disease is based on the detection of parasite eggs in stool. This approach is relatively inexpensive and easy to perform, and provides basic information on prevalence and infection intensity. However, a well-known limitation of these coproscopic methods is their lack of sensitivity, especially in low-endemic areas and among individual infections with a low parasite load [3-5]. To overcome this shortcoming, multiple sampling and the examination of a larger amount of faeces are necessary, which increases costs considerably, making these techniques too cumbersome for accurate diagnosis under such conditions. Besides this intrinsic limitation of coproscopic stool examinations, the positive effect of successful control programs and the rising numbers of infected travelers and migrants urgently require more sensitive methods for diagnosing infection with Schistosoma mansoni [6-8]. As an alternative, serological tests for antibody detection can be applied for the diagnosis of schistosomiasis [9,10]. Unfortunately, serological methods seem to have low sensitivity, cross-reactions with other helminth infections, and poor performance in distinguishing between active and past infections, which is particularly important for endemic areas.
Furthermore, these techniques require collection of blood, an invasive procedure which presents another disadvantage for their application on a large scale [11]. PCR-based diagnostic techniques have shown high sensitivity and specificity, and rely on the detection of S. mansoni DNA in feces, serum [12-14] and, recently, in plasma [15] and urine [16]. The use of urine as a source of DNA in PCR detection of parasites has already been reported for Borrelia burgdorferi [17], Wuchereria bancrofti [18], Mycobacterium tuberculosis [19,20] and Schistosoma sp. [16]. In all cases the extraction method relies on the use of organic solvents, such as phenol and chloroform, or commercial kits that make the process hazardous and/or expensive to use when there are a large number of samples. Here we present a simple salting-out and resin DNA extraction method for PCR.

* Correspondence: marenk@cpqrr.fiocruz.br. 1 Laboratório de Esquistossomose - Centro de Pesquisas René Rachou (CPqRR) - Fundação Oswaldo Cruz (FIOCRUZ), Av. Augusto de Lima 1715, Belo Horizonte, Minas Gerais, 30190-002, Brazil. Full list of author information is available at the end of the article. © 2010 Enk et al; licensee BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Methods: Fifty milliliters of fresh urine from a non-infected individual were treated with EDTA to a final concentration of 40 mM [21].
To assess the reproducibility of the extraction method, 30 milliliters of this sample were artificially contaminated with 120 ng of adult S. mansoni DNA and divided into 6 aliquots of 5 ml each, containing 20 ng of S. mansoni DNA per aliquot, equivalent to 4 ng DNA/mL. Five of these aliquots, forming the 1st set of samples, were processed directly. To test the method's efficiency, the sixth aliquot was serially diluted in five consecutive 1:5 dilutions, each in 4 mL of the remaining 20 mL of non-contaminated urine, forming a 2nd set of 6 samples. The DNA concentrations of these samples were 4 ng/mL, 800 pg/mL, 160 pg/mL, 32 pg/mL, 6.4 pg/mL and 1.28 pg/mL. All 11 aliquots, prepared as described above, were heated at 100°C in a water bath for 10 min. After that, 5 M NaCl, in a volume of 1/10 of the sample volume, was added to each tube. The tubes were shaken vigorously for 15 sec, placed on ice for 1 hr and centrifuged for 10 min at 4,000 rpm. The supernatant was transferred to another tube, shaken vigorously for 15 sec and centrifuged for 10 min at 4,000 rpm. Again the supernatant was transferred to another clean tube, and two times the sample volume of absolute ethanol was added. The DNA was then precipitated at -20°C for at least 2 hr. The DNA strand was removed with a pipette, transferred to a 0.5 mL microcentrifuge tube and washed in 200 μL of 70% ethanol. The tubes were centrifuged again for 20 min at 14,000 rpm. The pellets were dried and resuspended in 100 μL of DNase-free water and 100 μL of InstaGene matrix® resin (BioRad). Samples were incubated at 56°C for 30 min and at 100°C for 8 min, vortexed at high speed for 10 sec and centrifuged at 14,000 rpm for 3 min. The supernatant was transferred to a new tube and used as template in PCR reactions. PCR was carried out using a pair of primers directed to a 121 bp repetitive fragment, designed by Pontes et al. 2002 [12], with GoTaq DNA polymerase and STR 10× buffer (Promega).
Into each reaction tube were added 1 μL of 10× buffer, 0.1 μg/μL of 1× BSA, 0.8 U of Taq DNA polymerase, 0.5 pmol each of the forward and reverse primers, 4 μL of DNA template and enough water to a final volume of 10 μL. Positive and negative controls were performed using 1 ng of S. mansoni DNA as template for the positive, and 1 μL of non-contaminated urine as template for the negative. A total of 40 cycles of amplification were conducted, using a 30 sec denaturing step at 95°C, 30 sec annealing at 65°C and 30 sec extension at 72°C. PCR assays were conducted 5 times for each of the 11 DNA samples. Three μL of amplified products were visualized by electrophoresis in an 8% polyacrylamide gel (PAGE) followed by silver staining [22] and recorded by digital photography. The study objectives were explained to the participant and informed written consent was obtained in compliance with the guidelines of the Helsinki Declaration on research carried out on humans. The study protocol was reviewed and approved by the Ethical Review Board of the René Rachou Research Center/Fiocruz (No. 03/2008) as well as by the Brazilian Ethical Review Board (CONEP - No 784/2008).

Results and Discussion: Positive results were observed for all 5 samples from the 1st set (Figure 1), as well as for all 6 samples from the 2nd set (Figure 2), in all of the 5 assay repetitions. The results from the 1st sample set show the high reproducibility of the DNA extraction method, and those from the 2nd confirm the test's efficiency by detecting 1.28 pg DNA/mL urine, an approximately 3,000 times smaller quantity of DNA than in the first dilution (sample set 2 results). Furthermore, these data confirm the reproducibility and sensitivity of the PCR itself, since parasite DNA was detected in all 5 assay repetitions.
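As a sanity check on the numbers above, the dilution series and the detection limit can be reproduced with a few lines of arithmetic (an illustrative sketch we added; the ~580 fg genome figure is the one the paper quotes from reference [23]):

```python
# Five consecutive 1:5 dilutions of the 4 ng/mL (= 4000 pg/mL) aliquot.
series_pg_per_ml = [4000.0 / 5 ** i for i in range(6)]
# Expected: 4000, 800, 160, 32, 6.4 and 1.28 pg/mL, as reported.

# Fold difference between the first and the last dilution
# (the text's "approximately 3,000 times" is 5**5 = 3125).
fold = series_pg_per_ml[0] / series_pg_per_ml[-1]

# Genome equivalents per mL at the detection limit, taking ~580 fg
# (0.58 pg) of DNA per S. mansoni genome: roughly two parasite cells.
genome_equivalents = series_pg_per_ml[-1] / 0.58
```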
Figure 1 An 8% polyacrylamide gel, representative of 5 assays, showing the reproducibility test of the extraction method. L = 100 bp ladder; 1-5 = 1st sample set, 4 ng/mL DNA; - = negative control; + = S. mansoni DNA positive control.

This simple and inexpensive extraction method, utilizing easily accessible urine samples as the DNA source, in combination with the high sensitivity of PCR, constitutes a promising diagnostic tool to overcome the difficulties of
After adaptation, the procedure described here could, also be applied for the detection of trans-renal and/or cell-free DNA of other parasites as well as for viral, bacte- rial and fungal infections in future studies. Competing interests The authors declare that they have no competing interests. Authors' contributions MJE, and NBR designed the study protocol; MJE, GOS and NBR supervised and carried out the laboratory work and data collection; MJE, GOS and NBR carried out the analysis and interpretation of these data. MJE, GOS and NBR drafted the manuscript. All authors read and approved the final manuscript. Acknowledgements We thank Dr. Paulo Marcos Zech Coelho and Dr. Rodrigo Corrêa Oliveira for their valuable support and scientific advice and Dr. John Kusel for scientific and language revision. This work obtained financial support from the Fundação de Amparo a Pesquisa do Estado de Minas Gerais (Fapemig), Brazil, Conselho National de Pesquisa (CNPq), and Sistema Único de Saúde (SUS). Author Details 1Laboratório de Esquistossomose - Centro de Pesquisas René Rachou (CPqRR) - Fundação Oswaldo Cruz (FIOCRUZ), Av. Augusto de Lima 1715, Belo Horizonte, Minas Gerais, 30190-002, Brazil, 2Laboratório de Imunologia Celular e Molecular - Centro de Pesquisas René Rachou (CPqRR) - Fundação Oswaldo Cruz (FIOCRUZ), Av. Augusto de Lima 1715, Belo Horizonte, Minas Gerais, 30190-002, Brazil and 3Laboratório de Pesquisas Clínicas - Escola de Farmácia, Universidade Federal de Ouro Preto (UFOP), Campus Morro de Cruzeiro, Ouro Preto, Minas Gerais, 35400-00, Brazil References 1. Hotez PJ, Bottazzi ME, Franco-Paredes C, Ault SK, Periago MR: The neglected tropical diseases of latin america and the Caribbean: a review of disease burden and distribution and a roadmap for control and elimination. PLoS Negl Trop Dis 2008, 2:e300. 2. King CH, Dangerfield-Cha M: The unacknowledged impact of chronic schistosomiasis. Chronic Illn 2008, 4:65-79. 3. 
Utzinger J, Booth M, N'Goran EK, Muller I, Tanner M, Lengeler C: Relative contribution of day-to-day and intra-specimen variation in faecal egg counts of Schistosoma mansoni before and after treatment with praziquantel. Parasitology 2001, 122:537-544. 4. Kongs A, Marks G, Verle P, Van der SP: The unreliability of the Kato-Katz technique limits its usefulness for evaluating S. mansoni infections. Trop Med Int Health 2001, 6:163-169. 5. de Vlas SJ, Gryseels B: Underestimation of Schistosoma mansoni prevalences. Parasitol Today 1992, 8:274-277. 6. Bergquist R, Johansen MV, Utzinger J: Diagnostic dilemmas in helminthology: what tools to use and when? Trends Parasitol 2009, 25:151-156. 7. Rabello AL, Enk MJ: 8 Progress towards the detection of schistosomiasis. In Report of the Scientific Working Group meeting on Schistosomiasis: 14-16 november 2005; Geneva, Switzerland WHO World Health Organization on behalf of the Special Programme for Research and Training in Tropical Diseases; 2006:67-72. 8. Corachan M: Schistosomiasis and international travel. Clin Infect Dis 2002, 35:446-450. 9. Doenhoff MJ, Chiodini PL, Hamilton JV: Specific and sensitive diagnosis of schistosome infection: can it be done with antibodies? Trends Parasitol 2004, 20:35-39. 10. Feldmeier H, Buttner DW: Immunodiagnosis of Schistosomiasis haematobium and schistosomiasis mansoni in man. Application of crude extracts from adult worms and cercariae in the IHA and the ELISA. Zentralbl Bakteriol Mikrobiol Hyg A 1983, 255:413-421. 11. Gryseels B, Polman K, Clerinx J, Kestens L: Human schistosomiasis. Lancet 2006, 368:1106-1118. 12. Pontes LA, Dias-Neto E, Rabello A: Detection by polymerase chain reaction of Schistosoma mansoni DNA in human serum and feces. Am J Trop Med Hyg 2002, 66:157-162. 13. Pontes LA, Oliveira MC, Katz N, Dias-Neto E, Rabello A: Comparison of a polymerase chain reaction and the Kato-Katz technique for diagnosing infection with Schistosoma mansoni. Am J Trop Med Hyg 2003, 68:652-656. 14. 
ten Hove RJ, Verweij JJ, Vereecken K, Polman K, Dieye L, Van Lieshout L: Multiplex real-time PCR for the detection and quantification of Schistosoma mansoni and S. haematobium infection in stool samples collected in northern Senegal. Trans R Soc Trop Med Hyg 2008, 102:179-185.
15. Wichmann D, Panning M, Quack T, Kramme S, Burchard GD, Grevelding C, Drosten C: Diagnosing schistosomiasis by detection of cell-free parasite DNA in human plasma. PLoS Negl Trop Dis 2009, 3:e422.

Received: 28 December 2009 Accepted: 26 April 2010 Published: 26 April 2010

Figure 2 An 8% polyacrylamide gel, representative of 5 assays, showing the efficiency test of the extraction method. L = 100 bp ladder; 1-6 = 2nd sample set, 4 ng/mL, 800 pg/mL, 160 pg/mL, 32 pg/mL, 6.4 pg/mL and 1.28 pg/mL of DNA, respectively; - = negative control; + = S. mansoni DNA positive control.
16. Sandoval N, Siles-Lucas M, Perez-Arellano JL, Carranza C, Puente S, Lopez-Aban J, Muro A: A new PCR-based approach for the specific amplification of DNA from different Schistosoma species applicable to human urine samples. Parasitology 2006, 133:581–587.
17.
Lebech AM, Hansen K: Detection of Borrelia burgdorferi DNA in urine samples and cerebrospinal fluid samples from patients with early and late Lyme neuroborreliosis by polymerase chain reaction. J Clin Microbiol 1992, 30:1646-1653. 18. Lucena WA, Dhalia R, Abath FG, Nicolas L, Regis LN, Furtado AF: Diagnosis of Wuchereria bancrofti infection by the polymerase chain reaction using urine and day blood samples from amicrofilaraemic patients. Trans R Soc Trop Med Hyg 1998, 92:290-293. 19. Cannas A, Goletti D, Girardi E, Chiacchio T, Calvo L, Cuzzi G, Piacentini M, Melkonyan H, Umansky SR, Lauria FN, et al.: Mycobacterium tuberculosis DNA detection in soluble fraction of urine from pulmonary tuberculosis patients. Int J Tuberc Lung Dis 2008, 12:146-151. 20. Green C, Huggett JF, Talbot E, Mwaba P, Reither K, Zumla AI: Rapid diagnosis of tuberculosis through the detection of mycobacterial DNA in urine by nucleic acid amplification methods. Lancet Infect Dis 2009, 9:505-511. 21. Milde A, Haas-Rochholz H, Kaatsch HJ: Improved DNA typing of human urine by adding EDTA. Int J Legal Med 1999, 112:209-210. 22. Sanguinetti CJ, Dias-Neto E, Simpson AJG: Rapid silver staining and recovery of PCR products separated on polyacrylamide gels. BioTechniques 1994, 17:915-918. 23. Gomes AL, Melo FL, Werkhauser RP, Abath FG: Development of a real time polymerase chain reaction for quantitation of Schistosoma mansoni DNA. Mem Inst Oswaldo Cruz 2006, 101(Suppl 1):133-136. 24. Kato-Hayashi N, Kirinoki M, Iwamura Y, Kanazawa T, Kitikoon V, Matsuda H, Chigusa Y: Identification and differentiation of human schistosomes by polymerase chain reaction. Exp Parasitol 2010, 124:325-329. 25. Akinwale OP, Laurent T, Mertens P, Leclipteux T, Rollinson D, Kane R, Emery A, Ajayi MB, Akande DO, Fesobi TW: Detection of schistosomes polymerase chain reaction amplified DNA by oligochromatographic dipstick. Mol Biochem Parasitol 2008, 160:167-170. 
doi: 10.1186/1756-0500-3-115. Cite this article as: Enk et al., A salting out and resin procedure for extracting Schistosoma mansoni DNA from human urine samples. BMC Research Notes 2010, 3:115.

Telecardiology for chest pain management in Rural Bengal – A private venture model: A pilot project
Tapan Sinha *, Mrinal Kanti Das

Introduction: The rural countryside does not get the benefit of proper assessment in ACS because of: 1. The absence of qualified and trained medical practitioners in remote places. 2. Suboptimal assessment by inadequately trained rural practitioners (RPs). 3. Non-availability of a 24-hour ECG facility. Telecardiology (TC), where available, has low penetration and marginal impact. Much discussion has been devoted to fast-tracking the transfer of modern therapy at the CCU, including interventions and CABG.
These remain utopian even in urban localities, not to speak of inaccessible, poverty-stricken rural areas. Objectives: To ascertain the feasibility of low-cost TC at individual initiative, supported by NGOs working in remote areas, for extending consultation and treatment to those who can afford or approach only very basic medical supportive care. Methods: 1. Site selection: a remote island (267.5 sq km) in South Bengal close to the Sundarban Tiger Reserve, inhabited by about 480,000 people, of whom about 100,000 are above 60 years of age. The nearest road to Kolkata, 75 km away, is accessible only by five ferry services available from 5 am to 4 pm, across rivers more than two km wide in places. 2. RP training: classroom trainings were organized, and leaflets in the vernacular were prepared and distributed in accordance with AHA patient information leaflets. 3. One nodal centre was made operational from March 2015 after training three local volunteers (not involved in RP) for 2 weeks on: a. single-channel digital recording with a BPL 1608T, blood pressure measurement with a Diamond 02277, pulse oxygen estimation with an Ishnee IN111A, and CBS estimation with an Easy Touch ET301; b. digital photography and transmission by email over a 2G network to either of us in Kolkata with a Micromax Canvas P666 Android tab. 4. Patient selection by filling a form: chest pain at rest and exercise, chest discomfort, radiation of pain, shortness of breath, palpitation, sudden sweating, loss of consciousness, hypertension, diabetes, age > 60 yrs. c. A mobile phone alert call to either of us after successful transmission. d. Review of the ECG, BP, R-CBS and oxygen saturation information sent by email as a single attachment, as early as possible, on an Apple Air tab on a 3G network. e. Call back for further history/information and advice. Results: 22 patients were assessed from March to end May 2015, all referred by RPs who had undergone training. Two (2) cases of IHD were noted by ECG. Both were known cases of inadequately controlled hypertension and diabetes.
No case of ACS was received. Two (2) cases of known COAD were received, who were also hypertensive and had infection. Three (3) new cases of hypertension were detected. All other cases were normal for the parameters examined. Medical advice was given, further investigations were suggested, and follow-up was maintained by the RPs. Advice for COAD was given after consultation with colleagues. The average time taken to call back after receiving the alert call was 25 ± 5 min. All calls were received between 11 am and 7 pm. Two cases needed to be shuffled between us for convenience. Statistical analysis was avoided as the number of cases was too small. Observations: 1. Public awareness needs to be increased. 2. Resistance from RPs for pecuniary reasons may be overcome by involving them. 3. It is feasible to successfully operate the system at a nominal cost of about Rs. 70/- per case, excluding the primary investment in training, instrumentation and service connection. 4. The medical and social impacts remain to be assessed. 5. Multiple nodal centres and the involvement of all health care providers should usher in the long-cherished 'health for all'.

A study of etiology, clinical features, ECG and echocardiographic findings in patients with cardiac tamponade
Vijayalakshmi Nuthakki *, O.A. Naidu
9-227-85 Laxminagar Colony, Kothapet, Hyderabad, India

Cardiac tamponade is defined as hemodynamically significant cardiac compression caused by pericardial fluid. 40 cases of cardiac tamponade were evaluated for the clinical features of Beck's triad, ECG evidence of electrical alternans, etiology, and biochemical analysis of the pericardial fluid. Cardiac tamponade in these patients was confirmed by echocardiographic evidence apart from clinical and ECG evidence. We had 6 cases of hypothyroidism causing tamponade. Malignancy was the most common etiology, followed by tuberculosis.
Only 14 patients had hypotension, which points to the fact that echo showed signs of cardiac tamponade prior to clinical evidence of hypotension and aids in better management of patients. Electrical alternans was present in 39 patients. We found that cases with subacute tamponade did not have hypotension but had echo evidence of tamponade. 37 patients had exudate and 3 patients had transudate effusion. We wanted to present this case series to highlight the fact that electrical alternans has high sensitivity in diagnosing tamponade, that hypothyroidism as an etiology of tamponade is not that rare, and that patients with echocardiographic evidence of tamponade may not always be in hypotension.

Role of trace elements in health & disease
G. Subrahmanyam 1,*, Ramalingam 2, Rammohan 3, Kantha 4, Indira 4
1 Department of Cardiology, Narayana Medical Institutions, Chinta Reddy Palem, Nellore 524 002, India
2 Department of Biochemistry, Narayana Medical Institutions, Chinta Reddy Palem, Nellore 524 002, India
3 Department of Pharmacology, Narayana Medical Institutions, Chinta Reddy Palem, Nellore 524 002, India
4 College of Nursing, Narayana Medical Institutions, Chinta Reddy Palem, Nellore 524 002, India

A cross-sectional, one-time, community-based survey was conducted in an urban area of Nellore. 68 subjects between 20 and 60 years of age were selected using convenience sampling techniques. Detailed demographic data were collected, and investigations such as lipid profile, blood sugar, ECG, periscope study (to assess vascular stiffness), and trace element analysis of copper and zinc were done, apart from a detailed clinical examination. ECG as per the Minnesota code criteria was carried out for the diagnosis of coronary artery disease. The diagnostic criteria for hypertension, BMI and hyperlipidemia as per the Asian Indian hypertension guidelines 2012, and the ICMR guidelines for diabetes mellitus, were followed.
Hypertension was present in 39.7%, overweight in 14.7%, and obesity in 27.9%; ECG abnormalities comprised evidence of ischemic heart disease in 8.38%, RBB in 3.38%, and sinus tachycardia in 3.38%. 24% had raised blood sugar, and cholesterol was raised in 25% of subjects. The periscope study was done in 48 samples. Vascular stiffness indices such as pulse wave velocity and augmentation index were increased in 75% of subjects. Trace element analysis for zinc and copper was done for 49 samples. An imbalance of trace elements, either increased or decreased levels, was present in 37 subjects. Vascular stiffness and alteration

Indian Heart Journal 67 (2015) S121–S133, page S131
Palaeontologia Electronica, palaeo-electronica.org

Picking up the pieces: the digital reconstruction of a destroyed holotype from its serial section drawings

Julien Benoit and Sandra C. Jasinoski

ABSTRACT

Serial grinding, a popular but destructive technique in the early twentieth century, allows for the detailed tomographic study of vertebrate fossils. Specimen BP/1/1821 (formerly BP/1/346) is a procynosuchid cynodont (Therapsida) that underwent serial grinding by A.S. Brink in 1961, resulting in a detailed and insightful study of its skull anatomy. However, BP/1/1821 was also designated by Brink as the holotype, and only specimen, of a new species of cynodont: 'Scalopocynodon gracilis'. This species has subsequently been recognised as a junior synonym of Procynosuchus delaharpeae, but the destruction of a holotype remains an irreversible loss. Brink built an enlarged wax model from the serial sections, but it is degrading rapidly. In this article, we explain how we retrieved Brink's original drawings of the sections, and how we were able to build a new digital model of this specimen using a scanner, virtual stack alignment, and 3D imaging. Comparison with previously published drawings demonstrates the accuracy of this digital model. With a 3D printer, we then re-created a more accurate replication of BP/1/1821 with resin.
This life-sized replica now helps to complete the collections of the Evolutionary Studies Institute (University of the Witwatersrand, Johannesburg, South Africa) and replaces the long lost original specimen. The possibility to re-use these old data for palaeontological research is also addressed.

Julien Benoit. Evolutionary Studies Institute (ESI); School of Geosciences, University of the Witwatersrand, PO Wits, 2050, Johannesburg, South Africa; School of Anatomical Sciences, University of the Witwatersrand, 7 York Road, Parktown, 2193, Johannesburg, South Africa. julien.benoit@wits.ac.za
Sandra C. Jasinoski. Evolutionary Studies Institute (ESI); School of Geosciences, University of the Witwatersrand, PO Wits, 2050, Johannesburg, South Africa. sandra_jas@hotmail.com

Keywords: Cynodont; Serial Grinding Tomography; Therapsida; Digitization; Curation; Skull
Submission: 12 February 2016. Acceptance: 26 July 2016.

Benoit, Julien, and Jasinoski, Sandra C. 2016. Picking up the pieces: the digital reconstruction of a destroyed holotype from its serial section drawings. Palaeontologia Electronica 19.3.3T: 1-16. palaeo-electronica.org/content/2016/1478-reconstructing-scalopocynodon

Copyright: © September 2016 Society of Vertebrate Paleontology. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. creativecommons.org/licenses/by/4.0/

INTRODUCTION

For each palaeontological animal species named, the International Code of Zoological Nomenclature (ICZN) stipulates that a holotype must be defined in order to serve as a reference for future works (ICZN, 1999).
As a consequence, the designation of a type specimen is, by far, the most critical step when creating a new taxon, and the destruction of a type specimen constitutes an irreplaceable loss to science. Under South Africa's heritage laws, this also represents a failure to protect part of the National Estate. Unfortunately, this is what happened to the holotype of the South African procynosuchid cynodont 'Scalopocynodon gracilis' (this taxon name will thereafter be enclosed in quotation marks to reflect the fact that it is now considered a junior synonym of Procynosuchus delaharpeae [Hopson and Kitching, 1972]), which was destroyed during a serial grinding tomographic (SGT) study by Brink (1961), the author of the taxon. Although the SGT study allowed him to exquisitely describe the skull in great detail, this unfortunately led to the definitive loss of this unique piece of South African fossil heritage. Given the extraordinary wealth of fossils coming from the time-expansive Karoo sedimentary succession of South Africa (Rubidge and Sidor, 2001), South African therapsids were among the first and the most extensively studied species using this technique, even until recent times (e.g., Sollas and Sollas, 1914; Olson, 1937, 1944; Kermack, 1970; Fourie, 1993; Maier and van den Heever, 2002; Sigurdsen, 2006), because some of them are known from dozens or even hundreds of specimens. However, before it was synonymized with P. delaharpeae, 'Scalopocynodon gracilis' was represented by only one specimen (previously numbered BP/1/346, now BP/1/1821). The technique SGT was invented by Sollas (1904) and was intended to revolutionize the way in which palaeontologists study their fossils (Simpson, 1933; Sutton, 2008). During SGT, a special device is used to grind the specimen at thin and regular intervals.
At every step, a tomogram (i.e., drawing made either from a photographic plate or a projected image) or a photograph of the section is made so that the corpus of tomograms obtained illustrates the internal anatomy of the specimen in sequential slices (Sutton, 2008; see Appendix 1). Using this technique, it became possible to describe in great detail the internal anatomy of fossils. At that time, the only alternative was to wait for discoveries of naturally preserved internal structures, such as an endocast of braincase and bony labyrinth (e.g., Case, 1914; Dart, 1925; Cox, 1962; Jerison, 1973; Quiroga, 1984; Court, 1992; Kielan-Jaworowska et al., 2004), isolated bony elements of the chondrocranium (e.g., Benoit et al., 2013a, 2013b), or to undergo a "dissection" of the structure of interest (Dechaseaux, 1974; Gould, 1989). Serial grinding tomography was used in the twentieth century and led to a substantial improvement of knowledge about the deep internal morphological structure of extinct vertebrates (e.g., Sollas and Sollas, 1914; Stensiö, 1927; Jarvik, 1942, 1954). The technique also allows for the 3D reconstruction of the fossil using wax, cardboard, or polystyrene sheets that are cut into the shape of each fossil slice and stacked sequentially (Sutton, 2008; Cunningham et al., 2014). This kind of tomography is the methodological predecessor of the modern-day computer-assisted tomography (CT scan).
Curiously, although alternative methods were invented to preserve the slices of the specimens on plates, and though non-destructive X-ray radiography was performed on fossils as early as the end of the 19th century (Branco, 1906), and CT scans as early as the end of the 1970s (Conroy and Vannier, 1984), SGT enjoyed great popularity and was applied in vertebrate palaeontology for a very long time with noticeable success (e.g., Sollas and Sollas, 1914; Stensiö, 1927; Jarvik, 1942, 1954; Olson, 1937, 1944; Fourie, 1993; Maier and van den Heever, 2002; Sigurdsen, 2006; and reviewed in Sutton, 2008; Cunningham et al., 2014; Laaß and Schillinger, 2015). Nevertheless the destructive aspect of this approach has generally been implemented in well-sampled vertebrate taxa, where large numbers of specimens limited the impact of the destruction (e.g., the South African dicynodonts [Sollas and Sollas, 1914; Fourie, 1993], some Devonian fishes [Stensiö, 1927; Jarvik, 1942, 1954], the well-sampled artiodactyls Merycoidodon, Poebrotherium, and Leptomeryx [Whitmore, 1953] and squalodontid whales from North America [Luo and Eastman, 1995], and multituberculate mammals from Mongolia [Kielan-Jaworowska et al., 1986]). Researchers and curators remained reluctant to carry out SGT most of the time because the irreversible damage to fossils is contrary to the principles of heritage conservation. Moreover, the specimens selected for the technique tend to be well preserved since researchers want the best for their study in order to extract maximum information by destroying a minimum of fossils. The technique can be very time-consuming, sometimes taking years to carry out (Jarvik, 1942, 1954; Sutton, 2008; Cunningham et al., 2014).
Other methods were invented to inspect the interior of fossils without completely destroying them, such as midline sawing in order to prepare the interior of a skull, or tungsten microtomy which enable preservation of at least part of the original material, but they were expensive, time-consuming, and their use was not as widespread (Sutton, 2008; Cunningham et al., 2014). Only non-destructive CT imaging has replaced SGT in vertebrate palaeontology, but not prior to the 1990s. One of the most dramatic losses as a result of an SGT study was the destruction of the holotype specimen of 'Scalopocynodon gracilis'. Brink intended to use serial grinding tomography (SGT) in order to study BP/1/346, the only known skull of a new genus and species he named in the same article (Brink, 1961). However, it appears that the destruction of a holotype was not intentional: Brink at first believed the specimen represented a common therocephalian (Scaloposaurus) and did not realize that it was a new species of cynodont until the secondary palate became evident in the sections (Brink, 1961, p. 119). At that point of realization, Brink (1961, p. 119) decided "to reconsider the implications of this misinterpretation ... It was also considered that the specimen might prove to be the type of new species of Procynosuchid, and careful thought was given to the implications of destroying a type. At this stage it was decided to reconstruct in wax the anterior half of the skull, as far as it was then sectioned. The resultant model, although exhibiting some peculiar features, suggested that the specimen very likely represented a juvenile stage of an existing species of Leavachia, Procynosuchus, or Galecranium, and it was decided to proceed with the sectioning." The specimen BP/1/346 is thus to be considered a lost holotype and 'Scalopocynodon gracilis' could be considered a nomen dubium.
However, because Brink thoroughly described the morphology of the specimen and reconstructed a wax model (still available at the Evolutionary Studies Institute [ESI, University of the Witwatersrand, Johannesburg, South Africa]), Hopson and Kitching (1972) were able to synonymize 'Scalopocynodon gracilis' with Procynosuchus delaharpeae. The recognition of 'Scalopocynodon gracilis' as a junior synonym of Procynosuchus delaharpeae, a well-documented species, minimized the consequences of the loss of BP/1/346. However, as BP/1/346 will remain a holotype in perpetuity, its loss remains unfortunate, especially since a delicate wax model will not last as long as the original fossil. Here we present a new, digital reconstruction of BP/1/346 based on the original hand-drawn tomograms by Brink (1961). This 3D virtual model is compared with the wax model and the description of the original specimen published by Brink (1961). This comparison revealed that the wax model is deformed, inaccurate in places, and has already been extensively restored. The purpose here is neither to resurrect the taxon 'Scalopocynodon gracilis', nor is it to redescribe BP/1/346, since complete and comprehensive descriptions of this specimen as well as Procynosuchus delaharpeae have already been published (Brink, 1961; Kemp, 1979). Instead, the goal here is to compare the original description by Brink (1961), the wax model, and the digital model of BP/1/346 in order to discuss the accuracy and reliability of this new model. Finally, we introduce the new 3D printed model of BP/1/346 based on the digital model, which now enriches the Karoo holotype collection of the ESI with a restored representation of the long lost holotype.

MATERIAL AND METHODS

Brink (1961) clearly detailed how he reconstructed the wax copy of specimen BP/1/346.
Drawings of the lateral, dorsal, and ventral views accompanied the description, as well as a selection of 14 coronal sections and the reconstruction of a sagittal section (Brink, 1961, figures 33-35). Since 1961, the wax model was kept in the Karoo holotype room at the ESI under the catalogue number BP/1/1821. The drawings received the same number, but they were labelled "Therocephalian" (Figure 1) and were not associated with the wax model. The drawings of the serial sections and the ventral and dorsal views of BP/1/346 were recovered in a plastic bag catalogued BP/1/1821. Drawings of the lateral view and sagittal section were not recovered. The cover of the serial sections stated that the drawings were those of a therocephalian skull possibly belonging to Aneugomphius (now synonymized with Theriognathus [Sigurdsen et al., 2012], but note that during the early stages of serial grinding Brink believed that it was a Scaloposaurus [Brink, 1961, p. 119]) (Figure 1), which could explain why those drawings were not stored along with the wax model. Despite this, we are confident that the drawings are those made by Brink in 1961 of BP/1/346 because: i) they are identical to Brink's (1961) published figures and to the wax model (Figure 2), ii) the cover is labelled with the specimen number 346 (Figure 1), and iii) it is signed with Brink's name and initials (Figure 1). Today, BP/1/346 now refers to a dicynodont specimen, the holotype of Brachyuraniscus merwevillensis (transferred to Pristerodon by King and Rubidge, 1993), which was formerly numbered BP/1/218 and the skull of which is now ironically lost (King and Rubidge, 1993). Consequently, in order to avoid confusion, and since the original BP/1/346 cynodont specimen no longer exists, all the material formerly referred to 'Scalopocynodon gracilis' is hereby designated the number BP/1/1821. All 116 serial tomograms drawn by Brink (1961) are present.
They were drawn with the help of an episcope (which projected an image of the specimen onto a sheet of mounted paper) with a four times magnification (Brink, 1961; Appendix 1). Though the paper is becoming yellowish, the pencil tracings are still clearly visible (Appendix 1). As stated in his article, Brink (1961) found a way to grind the entire specimen (usually the last centimeters are shredded during the grinding process); hence the 116 slices represent the entire specimen. The drawings of the serial sections were scanned with a standard office scanner. The pictures were then converted into 8-bit grayscale images and their contrast was improved using Fiji 64bit (Schindelin et al., 2012). The slices were aligned and stacked into a single RAW file using the "Align slices in stack" function of the Template Matching plugin under Fiji 64bit. Brink (1961) drew the serial sections with a four times magnification in order to reconstruct the wax specimen four times larger than the original. Here, the images were rescaled to natural size using the measurements given by Brink (1961). The digital reconstruction of BP/1/1821 was obtained using threshold and manual segmentation in Avizo 8 software (VSG). The digital model (Figure 3, Appendix 2) is a direct reconstruction of the slice drawings. No further efforts to cosmetically improve the digital model, such as resampling the slices or surface smoothing, were performed. Features that were crossed out on the drawings were not considered during segmentation (e.g., section 14, see Appendix 1). The specimen was printed in 3D using a Zprinter 450 (Figure 4). This 3D printed model represents the first accurate, life-sized reconstruction of BP/1/1821 in 55 years.

DESCRIPTION

The digitization of the serial section drawings of BP/1/1821 illustrates how much information was lost during the SGT process. The slice edges are prominent (Figure 3), and it is clearly apparent that the slice intervals were too thick.
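The scanning, alignment, stacking, and rescaling workflow just described can be sketched in code. The Python fragment below is a minimal illustration only, not the Fiji and Avizo toolchain the authors actually used: the FFT-based cross-correlation is our own stand-in for Fiji's "Align slices in stack" template matching, and the array inputs stand in for the scanned tomogram images.

```python
import numpy as np

def normalize_contrast(img):
    # Stretch intensities to the full 8-bit range, analogous to the
    # contrast enhancement applied to the scanned drawings in Fiji.
    img = img.astype(float)
    lo, hi = img.min(), img.max()
    if hi == lo:
        return np.zeros(img.shape, dtype=np.uint8)
    return ((img - lo) / (hi - lo) * 255).astype(np.uint8)

def align_by_cross_correlation(ref, mov):
    # Estimate the integer (dy, dx) translation registering `mov` onto
    # `ref` via FFT-based cross-correlation, then apply it with np.roll.
    corr = np.fft.ifft2(np.fft.fft2(ref) * np.conj(np.fft.fft2(mov))).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    if dy > ref.shape[0] // 2:  # wrap large positive lags to negative shifts
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return np.roll(np.roll(mov, dy, axis=0), dx, axis=1), (dy, dx)

def stack_slices(slices):
    # Register every section to its predecessor and stack the result
    # into a (z, y, x) volume, one z-layer per ground section.
    aligned = [slices[0]]
    for s in slices[1:]:
        registered, _ = align_by_cross_correlation(aligned[-1], s)
        aligned.append(registered)
    return np.stack(aligned, axis=0)
```

After stacking, rescaling the 4x-enlarged drawings back to natural size is a uniform in-plane scale factor of 0.25, while the z-spacing of the volume is simply the 0.5 mm grinding interval.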
Brink (1961) admitted that slicing the specimen every 0.5 mm was not enough for such a small skull (skull length: 60 mm) and a lot of information was lost. The jagged aspect of the digital specimen may also partly reflect the subjectivity of Brink's original hand-drawn tomograms. In effect, the apparent lack of alignment of some of the slices on what should be smooth surfaces (see Appendix 1) implies some degree of inaccuracy in the original drawings. Nonetheless, Brink clearly drew the sutures of most identifiable cranial bones, which can thus be digitally extracted and isolated (Figure 5).

Having access to the complete series of sections of the skull is quite useful. One can determine if some of the sutures remained patent or became obliterated, which might be an important indicator of ontogenetic age in basal cynodonts (see Jasinoski et al., 2015). In addition, access to the complete set of sections (Appendix 1) allows for re-interpretation of the identification of bones and sutures and can highlight discrepancies with the published sections (Brink, 1961, figure 35) and/or the reconstructed skull (Brink, 1961, figures 33-34). This comparison showed that a few of the sutures in Brink's published sections are not illustrated in his original drawings. For example, the parietal-parietal suture anterior and posterior to the pineal foramen is absent from his original drawings (Appendix 1), but it is present on his published sections (Brink, 1961, figure 35, sections 84, 100). Therefore, it remains equivocal whether this midline suture on the skull roof was partially obliterated. On the contrary, the nasal-frontal suture is absent (see Figure 5.1) from both the published and original drawings of the sections, but this might be due to the longitudinal orientation of the interdigitations that is difficult to interpret in coronal section, or the thicker sections did not intersect the region of the suture, or weathering prevented detection of the suture (see Brink, 1961, p. 123). Brink estimated the position of this suture on his reconstruction of the skull (Brink, 1961, figure 33A).

FIGURE 1. The label on the first page of the serial section drawings. It is written "Specimen N°1821 (FN). MN° 346. Complete Aneumogomphius? skull therocephalian. Sections ½ mm. Magnification x4." The document is signed "A.S. Brink". Abbreviations: FN, Field Number; MN°, Museum Number.

FIGURE 2. Comparison of the different reconstructions of BP/1/1821 in dorsal (left) and ventral (right) views. 1, the wax model; and 2, the original illustrations by Brink, found with the serial section drawings.

As for bone identifications, Brink's (1961, figure 35) section 102 through the labelled supraoccipital bone ('so') might actually be through the parietal or the interparietal. Additionally, the articular ('art') on section 106 (Brink, 1961, figure 35) appears to be the quadratojugal (Appendix 1; Figure 5.2).

Apart from the size difference, the digital model and the wax model (Figure 6) look quite different from one another even though the latter model was apparently reconstructed using the same serial section drawings (Appendix 1). The digital reconstruction illustrates how the wax model of BP/1/1821 would have originally looked if it was based solely on Brink's interpretations of the serial sections; however, it is apparent that some sort of smoothing was applied during or soon after the reconstruction of the wax model. The slices of wax used by Brink (1961) were 1 mm thick instead of the 2 mm that was required to reconstruct the specimen with a four times magnification. To compensate, every fifth slice was duplicated (see Brink, 1961, p. 122), which artificially increased the number of sections, giving some room for this smoothing.

FIGURE 3. The digital model in (1) right lateral, (2) left lateral and (3) ventral view in stereopairs. Scale bar equals 10 mm. See also the supplementary video in Appendix 2.

FIGURE 4. The slices of specimen BP/1/1821 finally put back together again, as seen in the hands of the CT scan facility technician K. Jakata (ESI). This 3D printed model is the first time in 55 years that BP/1/1821 has been accurately reconstructed at life-size.

Other inconsistencies, due to cosmetic improvements, are visible between both models. The morphology of the teeth is barely visible on the digital model (Figures 3, 6), whereas they are well reconstructed on the wax specimen (Figure 6). For instance, the rostral-most incisors were destroyed during the grinding of the first slice, and only their roots are visible on the serial sections (Appendix 1). This is not reflected by the wax model (Figure 6). The wax model also bears two canines on each side while only one canine was actually preserved on the left side (Figure 6; Brink, 1961). Brink (1961) also reconstructed two canines on both sides of the skull in his figures 33B and 34. Perhaps this discrepancy exists because the caudal-most of the two canines actually has an eroded root and was in the process of being replaced by the anterior one (Appendix 1, sections 20-28; Brink, 1961, p. 125). Post-canine teeth were also sculpted on the wax specimen. They are all conical, whereas the last four should be tricuspid (Brink, 1961). Hence, the close examination of the wax model of BP/1/1821 and the re-appraisal of its anatomy in comparison with the digital model and the original description show that some of the information that was lost during the serial grinding process was reconstructed on the wax model, especially the tooth morphology.
However, the main differences between the models are due to deterioration of the wax model (Figures 6, 7). The pterygoid wings should project ventrally, as depicted in the digital reconstruction (Figure 6) and the original description (Brink, 1961), but on the wax model they are nearly horizontal and oriented posteriorly (Figure 6). The slices of wax are also nearly horizontal on the pterygoid processes (Figure 7.3), which shows that this feature is due to deformation of the wax while the skull was lying on its ventral side for a long period of time. The canines are deformed in the same way (Figure 7.1): they are deflected laterally and rostrally (Figures 6, 7.1), which also indicates that the specimen was stored on a flat surface for a prolonged period. Incidentally, the rostrum is bent dorsally, as it was pushed upward by the canines (Figure 6). The occipital plate and the area of the frontals at the level of the orbits are also collapsing (Figure 6). In addition, the whole wax model is laterally deformed compared to the digital model (Figure 6). Finally, some parts of the wax model are broken: the right zygoma (Figure 7.3-4), the right pterygoid process (Figure 7.3), part of the vomer (Figure 7.2), and the left mandible (Figure 7.5). Also, the reflected laminae of the angular are more pronounced on the digital model (Figures 3.1-2, 5.2) than on the wax model (Figure 7.7), presumably because they were broken on the latter. However, no pieces of these broken parts were found in the specimen's box. Eight nails and metallic sticks had been added to scaffold the wax: one in each zygoma (Figure 7.3), two inside each orbit (Figure 7.4), and one to hold each of the delicate quadrate processes of the pterygoid (Figure 7.3).

FIGURE 5. Segmentation of the skull bones of BP/1/1821, (1) in the cranium and (2) in the lower jaw. Scale bar equals 10 mm.
They were inserted inside the wax perhaps to prevent the model from collapsing in on itself. Six additional nails and sticks are also present in the jaws, two between the mandibles (Figure 7.5) and two inside each ascending ramus (Figure 7.6). We are confident that these nails and sticks were inserted inside the wax after the model had been reconstructed because: i) distinct traces of melting that have erased the limits between wax slices prove that the insertion of the nails was not synchronous with the construction of the model (Figure 7.6); and ii) Brink (1961, p. 121) stated that such supports were absent from the original reconstruction: "Although it was necessary to use supports in the process of reconstruction, the final product is entirely free from such supports. Such supports that were used were lengths or plates cut from the wax. On completion of the model all supports could be removed and the product is so stable that it is virtually like handling a modern mammalian skull." The presence of this nail scaffolding suggests that the wax was restored at some point; however, it was impossible for us to determine exactly when.

FIGURE 6. Comparison between the digital model (top) and the wax model (bottom), in rostral (left) and lateral (right) views. Arrows indicate the direction of deformation of the wax model. Scale bar equals 10 mm.

FIGURE 7. The state of preservation of the wax model of BP/1/1821. 1, the right side of the snout showing the deformation of the canine; 2, the palate showing the broken vomer; 3, the basicranium showing multiple cracks, nails and sticks around the pterygoid; 4, view of the right orbit showing multiple cracks, nails and sticks; 5, dorsal view of the mandible showing multiple cracks, nails and sticks; 6, detail of the inner side of the left ascending ramus of the mandible showing a stick embedded in the wax; and 7, lateral view of the mandible showing the incomplete reflected lamina of the angular. The arrows point to the nails; dotted lines demarcate the cracks. Not to scale.

DISCUSSION

Significance for Research

The number of X-ray scanning and other non-destructive computer-assisted tomographic (CT) studies of fossil vertebrates has greatly increased in the past 20 years, and the study of internal structures (e.g., palaeoneurology, bone histology) in vertebrate palaeontology has entered a new Golden Age (see Sutton, 2008; Witmer et al., 2008; and Cunningham et al., 2014 for reviews). In this context, data such as those available from old SGT sections are still valuable, as they contain important information about the internal structures of fossils that can supplement observations based on CT scans. Figure 5 illustrates how much valuable data can still be extracted from this kind of serial section, which is particularly significant given the number of published SGT sections available in the literature. For example, the following cranial studies of South African therapsids have published SGT sections and drawings or photographs of the sections that might still be available in collections: dicynodonts (Sollas and Sollas, 1914; Keyser, 1975; Fourie, 1993), dinocephalians (Boonstra, 1968), gorgonopsians (Olson, 1938b), therocephalians (Olson, 1938a; Maier and van den Heever, 2002; Sigurdsen, 2006), and cynodonts (Broom, 1938; Rigney, 1938; Fourie, 1974).
As we found discrepancies between some of Brink's labels in his published sections (Brink, 1961, figure 35) and our interpretation of his original drawings, it is important to have access to the full series of hand-drawn tomograms so that reinterpretations can be implemented. In addition, the physical slices of some therapsid skulls, made using a microtome or grinding and peel techniques, are also housed in fossil collections in South Africa, e.g., the Lystrosaurus skulls sectioned by Cluver (1971) at the Iziko South African Museum, the snout of a Diademodon sectioned by Grine (1978) at the ESI, and the skull of a Glanosuchus sectioned by Maier and van den Heever (2002) at the Department of Zoology of the University of Stellenbosch. Other sectioned fossil therapsids housed elsewhere in the world include, for example, the 'Anomodont A' sectioned by Olson (Olson, 1937, 1944; Angielczyk et al., 2016) and a young Galesaurus sectioned by Rigney (1938) at the Field Museum in Chicago. These fossil sections can be photographed and digitized to produce new 3D models of the once-complete specimen and can be used or re-analyzed for research with no additional cost, time, or space requirements. Indeed, using a technique similar to the one described here, an intriguing anomodont skull serially sectioned by Olson in 1937 was partially reconstructed in order to identify it and perform a revision of Brachyprosopus broomi (Angielczyk et al., 2016). The data produced by SGT studies should be valued as much as possible, since specimens were sacrificed for them. As exemplified by Jasinoski et al. (2015) and Benoit et al. (2015), data from SGT studies like those of Fourie (1974) are still relevant and often cited.
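The photograph-and-digitize workflow just described essentially amounts to stacking aligned section images into a voxel volume with a known inter-section spacing. The following is a minimal sketch in NumPy, not the authors' actual pipeline: the binary masks, image size, and in-plane pixel size below are invented for illustration; only the 0.5 mm grinding interval comes from the paper (Appendix 1).

```python
import numpy as np

# Hypothetical stand-ins for binary masks traced from digitized serial
# section drawings (1 = bone, 0 = background). In practice each mask
# would be segmented from an aligned scan of one drawing.
sections = [np.zeros((50, 50), dtype=np.uint8) for _ in range(4)]
for i, mask in enumerate(sections):
    mask[20 - i:30 + i, 20 - i:30 + i] = 1  # a crude, widening "snout"

# Stack the aligned masks into a voxel volume: axis 0 runs along the
# grinding direction, one slice per physical section.
volume = np.stack(sections, axis=0)

# Physical calibration: 0.5 mm between sections (as labelled by Brink);
# the 0.5 mm in-plane pixel size is an assumption chosen to give
# isotropic voxels.
section_spacing_mm = 0.5
pixel_size_mm = 0.5
voxel_mm3 = section_spacing_mm * pixel_size_mm * pixel_size_mm

bone_volume_mm3 = volume.sum() * voxel_mm3
print(volume.shape)        # (4, 50, 50): sections x rows x columns
print(bone_volume_mm3)
```

From such a volume, a surface mesh for visualisation or 3D printing could then be extracted, for example with a marching-cubes implementation such as scikit-image's `measure.marching_cubes`.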
There is also a flourishing literature in invertebrate palaeontology illustrating how SGT data are useful in various fields, such as the study of soft-tissue structures (e.g., Sutton et al., 2005, 2006; Siveter et al., 2003, 2013), systematic and phylogenetic analyses (e.g., Sutton et al., 2002, 2005; Siveter et al., 2004, 2007a; Briggs et al., 2004, 2012), palaeoenvironmental reconstruction (e.g., Lukeneder and Weber, 2014), and the inference of behaviour in extinct species (e.g., Siveter et al., 2007b). The serial grinding, digital photography, and computerized restoration of the beautifully preserved Silurian fauna of the Herefordshire deposit have allowed the virtual dissection of several key invertebrate specimens in exquisite detail, which brought new insight into the early radiation and palaeobiology of arthropods, brachiopods, and molluscs (Sutton et al., 2001, 2005, 2006; Briggs et al., 2004, 2012; Siveter et al., 2003, 2007a, b, 2013). Computer-assisted tomography is still a relatively expensive technique in terms of specialized software and hardware, and of the time and technical skill required to process the data (Sutton, 2008; Cunningham et al., 2014). Moreover, there are features that some CT scanning techniques cannot detect, such as phase contrast (i.e., contrast between the matrix infillings of two different internal structures), which is invisible to conventional X-ray sources and can only be detected by synchrotron beam or serial grinding tomography (Tafforeau et al., 2006; Sutton, 2008; Cunningham et al., 2014). Large and dense specimens often cannot be scanned at the appropriate resolution and appear opaque on scans, except when using neutron tomography (which can make the specimen radioactive) or SGT (Maier and van den Heever, 2002; Schwarz et al., 2005; Tafforeau et al., 2006; Sutton, 2008; Cunningham et al., 2014; Laaß and Schillinger, 2015).
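The opacity and contrast limitations above follow from Beer-Lambert attenuation, I = I0 * exp(-mu * x): a thick, dense specimen transmits almost nothing, and two materials with nearly identical attenuation coefficients produce almost no contrast regardless of dose. A numerical illustration, using made-up coefficients rather than measured values:

```python
import math

def transmitted(i0, mu_per_mm, thickness_mm):
    """Beer-Lambert law: intensity surviving a homogeneous material."""
    return i0 * math.exp(-mu_per_mm * thickness_mm)

mu_matrix = 0.50    # host rock, mm^-1 (illustrative value)
mu_similar = 0.501  # fossil barely different from the matrix
mu_distinct = 0.80  # fossil clearly denser than the matrix
x = 20.0            # beam path length through the specimen, mm

# A dense 20 mm block at mu = 0.5 transmits exp(-10) of the beam,
# i.e., the specimen is effectively opaque on the radiograph.
print(f"transmitted fraction: {transmitted(1.0, mu_matrix, x):.2e}")

# Relative contrast between fossil and matrix reduces to
# 1 - exp(-(mu_fossil - mu_matrix) * x).
for mu_f in (mu_similar, mu_distinct):
    contrast = 1.0 - transmitted(1.0, mu_f - mu_matrix, x)
    print(f"mu_fossil = {mu_f}: relative contrast {contrast:.3f}")
```

The exact coefficients depend on material and beam energy; the point is only that contrast scales with the product of the attenuation difference and the path length, so near-identical infills remain invisible however the dose is tuned.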
Lastly, specimens sometimes simply have insufficient contrast on CT scans, whereas SGT offers direct access to the internal structure (Maier and van den Heever, 2002; see Cunningham et al., 2014). This is what happened with the invertebrate fossils from the Herefordshire deposit, which are preserved in calcite that is very similar to the micritic host rock (Sutton et al., 2001; Sutton, 2008; Cunningham et al., 2014). The fossils are difficult to prepare by the usual mechanical and chemical means, and X-ray techniques cannot discriminate between the matrix and the fossils (Sutton et al., 2001; Sutton, 2008; Cunningham et al., 2014). In this particular case, SGT accompanied by digital photography of the sections and digital restoration proved to be the most effective way to access the morphology of the fossils (Sutton et al., 2001, 2005, 2006; Briggs et al., 2004, 2012; Siveter et al., 2003, 2007a, 2007b, 2013; Sutton, 2008).

Significance for Curation and Archiving

Fifty-five years ago, Brink (1961) created a wax model of BP/1/1821 without the need for support structures, but the presence of metal sticks and nails indicates that over time it has collapsed on itself. Further evidence of this process of degradation is visible today in the dramatic deformation of the wax model in comparison to its digital counterpart based on Brink's tomograms (Figure 6). Wax is very sensitive to temperature variations, and the wax model has no internal sediment filling, which makes it very susceptible to damage during physical handling. The digital reconstruction based on Brink's drawings of the raw serial sections shows that the wax model deteriorated relatively quickly and, apart from the description (Brink, 1961), it is all that testifies to the morphology of the original specimen.
What remains of the wax model of BP/1/1821 could be preserved through digitization using surface scanning, but the internal anatomy of the specimen would not be recorded. The use of X-ray scanning is excluded because the low absorbance of the wax compared to the metallic sticks would not give enough contrast and would create artifacts on the radiographs; moreover, the heat produced by the machine could further damage the wax. As such, digitization of Brink's drawings in order to reconstruct the specimen in silico seems to be the most effective and easiest way to represent his interpretations of the original fossil. This is certainly the most important step for curation of the drawings, as they too are deteriorating (the paper is yellowing and the pencil marks are fading). With the re-discovery of these drawings, it became possible to reconstruct a digital model that revealed how the wax model was affected by the vicissitudes of time (Figure 6). The recreation of a model of BP/1/1821 in silico highlights three new opportunities for future conservation efforts. First, as a substitute for the wax model, a digital model poses fewer preservation problems, since the quality of the digitized drawings and reconstruction will not decline with time or handling (although questions of data standardization, computing power, ethics, costs, and the implementation of sustainable, long-term digital archiving remain important issues for digital palaeontological data; see Gilbert and Carlson, 2011; Mallison, 2011; and Cunningham et al., 2014 for a complete discussion). Second, a new model was 3D-printed in resin and could be printed again at any time using any 3D printer. In contrast to the wax model, this model is the same size as the original (although this can easily be adjusted during printing), and its morphology accurately reflects that illustrated in Brink's drawings of the serial sections (Figure 4).
Lastly, a 3D-printed model can serve as a guide for the future restoration of the wax model. Nevertheless, one should keep in mind that Brink's drawings are subjective, and thus our new digital reconstruction, like the original wax model, is based on an interpretative medium and will never replace the lost specimen.

CONCLUDING REMARKS

Here we provide an easy and affordable technique that allowed us to reconstruct BP/1/1821, the 55-year-old holotype of the formerly recognised species 'Scalopocynodon gracilis', from its serial section drawings. This study illustrates the value of old original drawings and photographs of SGT sections and peels, which should not be neglected. Conservation of these data through digitization (e.g., scanning, digital photography) should be a priority, as they can provide important new perspectives on long-destroyed specimens. In the case of BP/1/1821, this technique has provided a tool to assess the state of preservation of the old wax model. Archiving a digital copy of these drawings is therefore as important as curating original fossil specimens. If photographic plates (e.g., Sollas and Sollas, 1914), physical sections (e.g., Cluver, 1971), peels (e.g., Angielczyk et al., 2016), or drawings (e.g., Brink, 1961) are housed in collections, then these items should be appropriately accessioned and linked to the record of the source fossil. Attempts to digitize them should be undertaken, and original notes from the studied specimen should also be located and subsequently preserved. In addition, SGT data in vertebrate palaeontology have recently been overlooked because of the advent of non-destructive digital tomography (Cunningham et al., 2014). However, data already acquired could still prove useful for modern comparative work and reveal internal anatomical detail of a given fossil species at no additional cost. What happened to specimen BP/1/1821 is not an isolated case. According to Thackeray et al.
(1998), the holotype skull of Lystrosaurus murrayi was sectioned by Huxley in 1859. At that time, scanning techniques did not exist, but more recently the holotype of Stygimoloch spinifer was sectioned in order to study cranial ontogeny in pachycephalosaurids (Horner and Goodwin, 2009). Among invertebrates, many holotypes from the Herefordshire locality underwent serial grinding because anatomical details were only accessible through this method (e.g., Sutton et al., 2002, 2005; Briggs et al., 2004, 2012; Siveter et al., 2007a); however, even in this particular case, non-destructive tomographic techniques (e.g., synchrotron and CT scanning) might produce valuable results (Sutton, 2008; Siveter et al., 2014). Serial grinding and sectioning can still be useful techniques to study fossil anatomy and evolutionary processes (e.g., Maier and van den Heever, 2002; Sigurdsen, 2006), but we recommend performing X-ray, neutron, or surface scanning of specimens before sectioning in order to retain a digital copy of the original (just as Maier and van den Heever [2002] did for their specimen of Glanosuchus). Destruction of a specimen, particularly a holotype, must remain a last resort, as it threatens the very premise of the taxonomic endeavour and, given the rapidly changing technological landscape, prevents future analysis.

ACKNOWLEDGMENTS

The authors would like to thank Kudawashe Jakata for assisting with printing of the specimen. We also thank B. Zipfel, curator of the palaeontological collections of the Evolutionary Studies Institute (University of the Witwatersrand, Johannesburg), the assistant curator S. Jirah, and B.S. Rubidge for their help and access to the collections of the ESI. The authors also thank the two anonymous reviewers for their help in improving the manuscript.
This research was conducted with financial support from PAST and its scatterlings projects; the NRF; and the DST/NRF Centre of Excellence in Palaeosciences.

REFERENCES

Angielczyk, K.D., Rubidge, B.S., Day, M.O., and Lin, F. 2016. A reevaluation of Brachyprosopus broomi and Chelydontops altidentalis, dicynodonts (Therapsida, Anomodontia) from the middle Permian Tapinocephalus Assemblage Zone of the Karoo Basin, South Africa. Journal of Vertebrate Paleontology, 36(2):e1078342.

Benoit, J., Abdala, F., Van den Brandt, M.J., Manger, P.R., and Rubidge, B.S. 2015. Physiological implications of the abnormal absence of the parietal foramen in a Late Permian cynodont (Therapsida). The Science of Nature (Naturwissenschaften), 102:69.

Benoit, J., Ben Haj Ali, M., Adnet, S., El Mabrouk, E., Hayet, K., Marivaux, L., Merzeraud, G., Merigeaud, S., Vianey-Liaud, M., and Tabuce, R. 2013a. Cranial remain from Tunisia provides new clues for the origin and evolution of Sirenia (Mammalia, Afrotheria) in Africa. PLoS ONE, 8:e54307.

Benoit, J., Merigeaud, S., and Tabuce, R. 2013b. Homoplasy in the ear region of Tethytheria and the systematic position of Embrithopoda (Mammalia, Afrotheria). Geobios, 46:357-370.

Boonstra, L.D. 1968. The braincase, basicranial axis and median septum in the Dinocephalia. Annals of the South African Museum, 50:195-273.

Branco, W. 1906. Die Anwendung der Röntgenstrahlen in der Paläontologie. Abhandlungen der Königlich Preussischen Akademie der Wissenschaften, Verlag der Königlichen Akademie der Wissenschaften. (In German)

Briggs, D.E.G., Sutton, M.D., Siveter, D.J., and Siveter, D.J. 2004. A new phyllocarid (Crustacea: Malacostraca) from the Silurian Fossil-Lagerstätte of Herefordshire, UK. Proceedings of the Royal Society B: Biological Sciences, 271:131-138.

Briggs, D.E.G., Siveter, D.J., Siveter, D.J., Sutton, M.D., Garwood, R.J., and Legg, D.A. 2012. A Silurian horseshoe crab illuminates the evolution of arthropod limbs.
Proceedings of the National Academy of Sciences, USA, 109:15702-15705.

Brink, A.S. 1961. A new type of primitive cynodont. Palaeontologia Africana, 7:119-154.

Broom, R. 1938. On the structure of the skull of the cynodont, Thrinaxodon liorhinus, Seeley. Annals of the Transvaal Museum, 19:263-269.

Case, E.C. 1914. On the structure of the inner ear in two primitive reptiles. Biological Bulletin, 27:213-216.

Cluver, M.A. 1971. The cranial morphology of the dicynodont genus Lystrosaurus. Annals of the South African Museum, 56:155-274.

Conroy, G.C. and Vannier, M.W. 1984. Noninvasive three-dimensional computer imaging of matrix-filled fossil skulls by high-resolution computed tomography. Science, 226:1236-1239.

Court, N. 1992. Cochlea anatomy of Numidotherium koholense: auditory acuity in the oldest known proboscidean. Lethaia, 25:211-215.

Cox, C.B. 1962. A natural cast of the inner ear of a dicynodont. American Museum Novitates, 2116:1-6.

Cunningham, J.A., Rahman, I.A., Lautenschlager, S., Rayfield, E.J., and Donoghue, P.D.J. 2014. A virtual world of paleontology. Trends in Ecology and Evolution, 29:347-357.

Dart, R.A. 1925. Australopithecus africanus: the man-ape of South Africa. Nature, 115:195-199.

Dechaseaux, C. 1974. Artiodactyles primitifs des Phosphorites du Quercy. Annales de Paléontologie, 60:59-100. (In French)

Fourie, S. 1974. The cranial morphology of Thrinaxodon liorhinus Seeley. Annals of the South African Museum, 65:337-400.

Fourie, H. 1993. A detailed description of the internal structure of the skull of Emydops (Therapsida: Dicynodontia). Palaeontologia Africana, 30:103-111.

Gilbert, H. and Carlson, J. 2011. Data models and global data integration in paleoanthropology: a plea for specimen-based data collection and management, p. 111-121. In Macchiarelli, R. and Weniger, G.-C. (eds.), Pleistocene Databases: Acquisition, Storing, Sharing. Neanderthal Museum, Mettmann.

Gould, S.J. 1989. Wonderful Life.
The Burgess Shale and the Nature of History. W.W. Norton and Company, New York.

Grine, F.E. 1978. Notes on a specimen of Diademodon previously referred to Cyclogomphodon. Palaeontologia Africana, 21:167-174.

Hopson, J.A. and Kitching, J.W. 1972. A revised classification of cynodonts (Reptilia; Therapsida). Palaeontologia Africana, 14:71-85.

Horner, J.R. and Goodwin, M.B. 2009. Extreme cranial ontogeny in the Upper Cretaceous dinosaur Pachycephalosaurus. PLoS ONE, 4(10):e7626.

ICZN 1999. International Code of Zoological Nomenclature, 4th Edition. International Trust for Zoological Nomenclature, London.

Jarvik, E. 1942. On the structure of the snout of crossopterygians and lower gnathostomes in general. Zoologiska Bidrag från Uppsala, 21:235-675.

Jarvik, E. 1954. On the visceral skeleton in Eusthenopteron with a discussion of the parasphenoid and palatoquadrate in fishes. Kungliga Svenska Vetenskapsakademiens Handlingar, 5:1-104.

Jasinoski, S.C., Abdala, F., and Fernandez, V. 2015. Ontogeny of the Early Triassic cynodont Thrinaxodon liorhinus (Therapsida): cranial morphology. Anatomical Record, 298:1440-1464.

Jerison, H.J. 1973. Evolution of the Brain and Intelligence. Academic Press, New York.

Kemp, T.S. 1979. The primitive cynodont Procynosuchus: functional anatomy of skull and relationships. Philosophical Transactions of the Royal Society of London, Series B, 285:73-122.

Kermack, D.M. 1970. True serial-sectioning of fossil material. Biological Journal of the Linnean Society, 2:47-53.

Keyser, A.W. 1975. A re-evaluation of the cranial morphology and systematics of some tuskless Anomodontia. Memoirs of the Geological Survey of South Africa, 67:1-110.

Kielan-Jaworowska, Z., Cifelli, R.L., and Luo, Z.-X. 2004. Mammals from the Age of Dinosaurs: Origins, Evolution, and Structure. Columbia University Press, New York.

Kielan-Jaworowska, Z., Presley, R., and Poplin, C. 1986.
The cranial vascular system in taeniolabidoid multituberculate mammals. Philosophical Transactions of the Royal Society B, Biological Sciences, 313:525-602.

King, G.M. and Rubidge, B.S. 1993. A taxonomic revision of small dicynodonts with postcanine teeth. Zoological Journal of the Linnean Society, 107:131-154.

Laaß, M. and Schillinger, B. 2015. Reconstructing the auditory apparatus of therapsids by means of neutron tomography. Physics Procedia, 69:628-635.

Lukeneder, A. and Weber, G.W. 2014. Computed reconstruction of spatial ammonoid-shell orientation captured from digitized grinding and landmark data. Computers & Geosciences, 64:104-114.

Luo, Z.-X. and Eastman, E.R. 1995. Petrosal and inner ear of a squalodontoid whale: implications for evolution of hearing in Odontocetes. Journal of Vertebrate Paleontology, 15:431-442.

Maier, W. and van den Heever, J. 2002. Middle ear structures in the Permian Glanosuchus sp. (Therocephalia, Therapsida), based on thin sections. Fossil Record, 5:309-318.

Mallison, H. 2011. Digitizing methods for paleontology: applications, benefits and limitations, p. 7-44. In Elewa, A.M.T. (ed.), Computational Paleontology. Springer-Verlag, Berlin, Heidelberg.

Olson, E.C. 1937. The skull structure of a new anomodont. Journal of Geology, 45:851-858.

Olson, E.C. 1938a. Notes on the brain case of a therocephalian. Journal of Morphology, 63:75-86.

Olson, E.C. 1938b. The occipital, otic, basicranial and pterygoid regions of the Gorgonopsia. Journal of Morphology, 62:141-175.

Olson, E.C. 1944. Origin of mammals based upon cranial morphology of the therapsid suborders. Geological Society of America, Special Papers, 55:1-136.

Quiroga, J.C. 1984. The endocranial cast of the advanced mammal-like reptile Therioherpeton cargnini (Therapsida-Cynodontia) from the Middle Triassic of Brazil. Journal für Hirnforschung, 25:285-290.

Rigney, H.W. 1938. The morphology of the skull of a young Galesaurus planiceps and related forms.
Journal of Morphology, 63:491-529.

Rubidge, B.S. and Sidor, C.A. 2001. Evolutionary patterns among Permo-Triassic therapsids. Annual Review of Ecology and Systematics, 32:449-480.

Schindelin, J., Arganda-Carreras, I., Frise, E., Kaynig, V., Longair, M., Pietzsch, T., Preibisch, S., Rueden, C., Saalfeld, S., Schmid, B., Tinevez, J.-Y., White, D.J., Hartenstein, V., Eliceiri, K., Tomancak, P., and Cardona, A. 2012. Fiji: an open-source platform for biological-image analysis. Nature Methods, 9:676-682.

Schwarz, D., Vontobel, P., Lehmann, E.H., Meyer, C.A., and Bongartz, G. 2005. Neutron tomography of internal structures of vertebrate remains: a comparison with X-ray computed tomography. Palaeontologia Electronica, 8.2.30A:1-11. palaeo-electronica.org/2005_2/neutron/issue2_05.htm

Sigurdsen, T. 2006. New features of the snout and orbit of a therocephalian therapsid from South Africa. Acta Palaeontologica Polonica, 51:63-75.

Sigurdsen, T., Huttenlocker, A., Modesto, S.P., Rowe, T.B., and Damiani, R. 2012. Reassessment of the morphology and paleobiology of the therocephalian Tetracynodon darti (Therapsida), and the phylogenetic relationships of Baurioidea. Journal of Vertebrate Paleontology, 32:1113-1134.

Simpson, G.G. 1933. A simplified serial sectioning technique for the study of fossils. American Museum Novitates, 634:1-6.

Siveter, D.J., Briggs, D.E.G., Siveter, D.J., Sutton, M.D., and Joomun, S.C. 2013. A Silurian myodocope with preserved soft-parts: cautioning the interpretation of the shell-based ostracod record. Proceedings of the Royal Society B, Biological Sciences, 280:20122664.

Siveter, D.J., Siveter, D.J., Sutton, M.D., and Briggs, D.E.G. 2007b. Brood care in a Silurian ostracod. Proceedings of the Royal Society B, Biological Sciences, 274:465-469.

Siveter, D.J., Sutton, M.D., Briggs, D.E.G., and Siveter, D.J. 2003. An ostracode crustacean with soft parts from the Lower Silurian.
Science, 302:1749-1751.

Siveter, D.J., Sutton, M.D., Briggs, D.E.G., and Siveter, D.J. 2004. A Silurian sea spider. Nature, 431:978-980.

Siveter, D.J., Sutton, M.D., Briggs, D.E.G., and Siveter, D.J. 2007a. A new probable stem lineage crustacean with three-dimensionally preserved soft parts from the Herefordshire (Silurian) Lagerstätte, UK. Proceedings of the Royal Society B, Biological Sciences, 274:2099-2107.

Siveter, D.J., Tanaka, T., Farrell, U.C., Martin, M.J., Siveter, D.J., and Briggs, D.E.G. 2014. Exceptionally preserved 450-million-year-old Ordovician ostracods with brood care. Current Biology, 24:801-806.

Sollas, W.J. 1904. A method for the investigation of fossils by serial sections. Philosophical Transactions of the Royal Society of London, Series B, 196:257-263.

Sollas, I.B.J. and Sollas, W.J. 1914. A study of the skull of a Dicynodon by means of serial sections. Philosophical Transactions of the Royal Society of London, Series B, 204:201-225.

Stensiö, E.A. 1927. The Downtonian and Devonian vertebrates of Spitsbergen. Part I. Family Cephalaspidae. Skrifter Svalbard Nordishavet, 12:1-391.

Sutton, M.D. 2008. Tomographic techniques for the study of exceptionally preserved fossils. Proceedings of the Royal Society B, Biological Sciences, 275:1587-1593.

Sutton, M.D., Briggs, D.E.G., Siveter, D.J., and Siveter, D.J. 2001. Methodologies for the visualization and reconstruction of three-dimensional fossils from the Silurian Herefordshire Lagerstätte. Palaeontologia Electronica, 4.1.1:1-17. palaeo-electronica.org/2001_1/s2/issue1_01

Sutton, M.D., Briggs, D.E.G., Siveter, D.J., and Siveter, D.J. 2005. Silurian brachiopods with soft-tissue preservation. Nature, 436:1013-1015.

Sutton, M.D., Briggs, D.E.G., Siveter, D.J., and Siveter, D.J. 2006. Fossilized soft tissues in a Silurian platyceratid gastropod. Proceedings of the Royal Society B, Biological Sciences, 273:1039-1044.

Sutton, M.D., Briggs, D.E.G., Siveter, D.J., Siveter, D.J., and Orr, P.J.
2002. The arthropod Offacolus kingi (Chelicerata) from the Silurian of Herefordshire, England: computer based morphological reconstructions and phylogenetic affinities. Proceedings of the Royal Society B, Biological Sciences, 269:1195-1203.

Tafforeau, P., Boistel, R., Boller, E., Bravin, A., Brunet, M., Chaimanee, Y., Cloetens, P., Feist, M., Hoszowska, J., Jaeger, J.-J., Kay, R.F., Lazzari, V., Marivaux, L., Nel, A., Nemoz, C., Thibault, X., Vignaud, P., and Zabler, S. 2006. Applications of X-ray synchrotron microtomography for non-destructive 3D studies of paleontological specimens. Applied Physics A, 83:195-202.

Thackeray, J.F., Durand, J.F., and Meyer, L. 1998. Morphometric analysis of South African dicynodonts attributed to Lystrosaurus murrayi (Huxley, 1859) and L. declivis (Owen, 1860): probabilities of conspecificity. Annals of the Transvaal Museum, 36:413-420.

Whitmore, F.C. 1953. Cranial morphology of some Oligocene Artiodactyla. Geological Survey Professional Paper, 243:1-159.

Witmer, L.M., Ridgely, R.C., Dufeau, D.L., and Semones, M.C. 2008. Using CT to peer into the past: 3D visualization of the brain and ear regions of birds, crocodiles, and nonavian dinosaurs, p. 67-88. In Endo, H. and Frey, R. (eds.), Anatomical Imaging: Towards a New Morphology. Springer Verlag, Tokyo.

APPENDIX 1.

Movie of the aligned serial section drawings of BP/1/1821 made by A.S. Brink. The 116 sections represent the entire specimen. Note that Brink numbered the sections from 1 to 118 because he missed number 83 and duplicated 110; however, Brink also labelled every section with the corresponding 0.5 mm interval, so we are confident that no section is missing. Because of this mislabelling, the section numbers in the movie are shifted between 83 and 110 with respect to Brink's numbered sections (see Brink, 1961, figure 35).
See: palaeo-electronica.org/content/2016/1478-reconstructing-scalopocynodon

APPENDIX 2.

Movie of the virtual 3D model of BP/1/1821 that was reconstructed from the drawings made by A.S. Brink (1961). See: palaeo-electronica.org/content/2016/1478-reconstructing-scalopocynodon

Picking up the pieces: the digital reconstruction of a destroyed holotype from its serial section drawings. Julien Benoit and Sandra C. Jasinoski.
“Strategies of Imitation: An Insight”

AUTHORS: Enrico Valdani, Alessandro Arbore
ARTICLE INFO: Enrico Valdani and Alessandro Arbore (2007). Strategies of Imitation: An Insight. Problems and Perspectives in Management, 5(3-1)
RELEASED ON: Friday, 05 October 2007
JOURNAL: “Problems and Perspectives in Management”
FOUNDER: LLC “Consulting Publishing Company “Business Perspectives”

Problems and Perspectives in Management / Volume 5, Issue 3, 2007 (continued)
© Enrico Valdani, Alessandro Arbore, 2007.

Strategies of Imitation: An Insight
Enrico Valdani*, Alessandro Arbore**

Abstract
The success of an innovative firm stimulates other organizations to follow suit in a competitive game of imitation. The aim of this article is to focus on the circumstances and underlying reasons favoring imitative strategies, while arranging the literature and empirical evidence on the issue. It is intended as a systematization of different contributions on this topic taken from different perspectives. We are convinced that such a comprehensive insight can be very useful to innovative companies as well.

Key words: imitation strategies; later entrance; follower.
JEL Classification: M30.

The Strategies of Imitation: Players and Main Types
The success of an innovative firm stimulates other organizations to follow suit in a competitive game of imitation. The aim of this article is to provide a comprehensive insight on the circumstances and underlying reasons of imitation strategies by rearranging the literature and empirical evidence on this issue.
Starting from the players who may conduct such a practice, the following must be distinguished:
a) new-comers, that is, companies previously outside the industry.
Because of some disruptive innovation (Christensen, 1997) -- by which a first mover changes the rules of the game -- they realize that they now have the right resources to enter the competitive arena and begin the chase. Think of Hewlett-Packard in the camera industry: by imitating the disruptive innovation of digital photography introduced by Sony in 1981, they entered this market in 1998 and began taking market share from the previous contenders (with a market share of 7% in 2006 they overtook Olympus, for example);
b) incumbents, feeling threatened by the innovation in their market and deciding to imitate it, either immediately or after trying to counter it to defend their original conducts. Think of Nikon, again in the camera industry: they reacted quite late to the digital revolution, introducing their first Coolpix in 1997, that is, six years later than Kodak. Not surprisingly, when the innovation originates from another incumbent, the imitative reaction is generally quicker: think of the Nike Air and the Adidas Megabounce, for example;
c) enterprises in the retail system, often belonging to large-scale retail channels, which increasingly use their own tangible or intangible resources to imitate successful brand names. Think of Carrefour, for example: they market about 2000 products under different private labels: Carrefour, Carrefour Quality Line, Firstline, Frenchtouch and Bang.
Referring to the innovative standard chosen by the imitator, we will also identify three different types of conduct: (1) parasite imitation; (2) incompatible or redundant imitation; and (3) induced imitation.
The imitation game we call parasite imitation takes place when the imitator follows the innovator’s lead by reproducing a similar, successful standard (a so-called “dominant design”, as in Utterback, 1996).
This is facilitated when there are few legal or awareness barriers to protect the innovation, or when the barriers are weak or difficult to defend, thus enabling quick imitation.
* Bocconi University, Italy.
** Bocconi University, Italy.
In the parasite imitation game the imitator may in any case provide the market with a valid offer that might even be considered better because of improvements to bundling or more competitive prices, even if the product is an imitation of the innovation. Think, for example, of the SUVs (Sport Utility Vehicles) that followed each other in the automotive industry.
The game of incompatible or redundant imitation takes place when the imitator answers with an innovation of its own, one that is technologically incompatible with that of the innovator. Their solution is still able to satisfy the same needs and provide similar benefits. In the presence of network externalities (Katz & Shapiro, 1985, 1994) or, more generally, of positive feedbacks, the technologies of the innovator and imitator fight each other to become the standard taken on by the market (a well-known war for the standard; e.g., Shapiro & Varian, 1999). The game of incompatible or redundant imitation may end with a truce, a duopoly, or with the defeat of one of the fighters, which is often the case. An incompatible imitation of the Sony PlayStation was the Microsoft Xbox. Not only are the applications, the accessories, and the games incompatible: in the latest versions, even their high-definition video players are facing a war for the standard: a Blu-Ray player for the PS3, and an HD-DVD player (optional) for the Xbox 360.
The game of induced imitation takes place in contexts where the innovative enterprise facilitates and accelerates the game of imitation, when they realize that it is the best, most effective and perhaps least expensive way to establish their standard.
The history of video and electronic markets is full of well-known wars for standards, like the classic Betamax vs VHS. In most cases the companies that won the war followed a strategy that gave incentives for the imitation game. They guaranteed user licenses and technological support to any rival who requested it. In those cases the companies in question did not fight the other technology; they were interested in winning demand, persuading potential buyers and users that it was useful and cost effective to abandon old technologies and move on to new ones able to offer higher value.
To conclude this first section, a further, radical distinction between imitation games should be pointed out: the imitation outlined in this paper in any case remains within the legal limits of lawful competition. Of course, imitation with unlawful intentions also exists, which we generally call counterfeiting. Counterfeiting is out of the scope of this work (see, among others, Hopkins, Kontnik, & Turnage, 2003).

The Object and Entity of Imitation
Imitation may be extended to products and services generated by the innovator, as well as to its technologies, procedures, processes, organizational models and market strategies. Also, it can be the imitation of either an incremental or a radical innovation, following the common dichotomy (Abernathy, 1978). Starting with the (lawful) imitation of products and services, this activity may be more or less original. In this sense, we have:
clones: legal copies of the original product, but sold under the brand name of the imitator. In some cases the clone stands out because its quality is higher than that of the original product or its price is much lower.
It can be the headphones or the AC charger for a Nokia phone, as well as the cartridge for a Canon printer;
marginal imitations: it is possible to imitate an innovation by modifying marginal elements, developing a different design, reconfiguring the product, using new alternative materials or using different manufacturing processes. From this point of view, the English coffee-shop chain “Costa” represents a marginal imitation of the innovator “Starbucks”;
incremental imitation (also known as innovative imitation or technological leapfrogging): in this case the imitator enters a developing market with a significant technological contribution, thereby innovating and overtaking the pioneer innovator. A remarkable example here is the incremental imitation of Microsoft Excel over the pioneering Lotus 1-2-3;
creative imitation: this defines the most innovative copy of the pioneer product. In this case the imitator makes some changes to the original concept, with the aim of creating new applications for the pioneer product to meet the needs of new customer segments or to enter new markets or new sectors. So, for example, a three-wheeled working vehicle like the “Ape” by Piaggio may become a unique taxi for tourists in its Indian imitation, named Tuc Tuc (a licensed imitation, by the way).
As said, imitation concerns not only products, but also strategies, organizational models and processes that bring market success to the innovator. For instance, activities related to competitive intelligence and benchmarking are undertaken to assess the market drive capacity of rivals or excellent enterprises from other industries in order to copy them.
From this point of view, even if Japanese companies are often accused of taking part in imitation warfare, it should be acknowledged that their international market success actually stimulated many European and American enterprises to study the skills and capacity behind their results in order to re-engineer their own procedures and critical processes. It is without doubt easier to imitate a product than a process or a procedure. The latter are intangible resources, less obvious, the fruit of constant investments in corporate culture, climate and organizational mechanisms, which make them stand out and thus more difficult to copy. Similarly, some scholars point out how the complexity and causal ambiguity of a successful strategy – “what’s behind it” – act as protection against being imitated by competitors (Lippman & Rumelt, 1982; Rivkin, 2000; Szulanski, 1996; Ounjian & Carne, 1987).

The Strategic Rationale: Behavioral Drivers
There are several reasons for enterprises to play the imitation game. It is useful to distinguish here between incumbents and other players, namely new comers and large-scale retailers.
New comers recognize new opportunities for diversification in the innovation of the first mover: the innovation provides these enterprises with the opportunity to overcome old barriers to entering the industry, often giving them an edge over previous incumbents. As a vintage example, think of when the ski or tennis racket industries still required skills in processing wood: innovations in materials, introduced by a first mover, naturally opened the door for new comers with skills and resources from the world of plastics. The same is true of how the quantum leap of digital photography opened the door for imitators from the world of electronics.
Large-scale retailers also increasingly exploit opportunities offered by up-stream integration through parasite imitation strategies for successful products and brand names. According to ACNielsen, in 2005 the so-called private-label phenomenon reached nearly a fifth of the market share of consumer products in North America, with an average annual growth rate of 7%. The rate in the frozen foodstuffs segment, for example, reached 30%, while for cosmetics it is only 2% but with growth rates of over 20% (ACNielsen, 2006). The motivation of large-scale retailers is clear: capitalize on resources such as contact with end users, and the loyalty customers have for the retailer, to exploit integration opportunities with particularly low risk, both because of the flexibility in production investments and because only goods, brands or formulas with a proven profit potential are imitated.
Motivations differ, instead, when speaking about imitation strategies adopted by incumbents. The following cases can be distinguished:
a) Reaction to the element of surprise, when success is clear. Many enterprises are taken by surprise when a smaller, entrepreneurial innovator launches an innovative product or service. This happens when the opportunity and market potential of the new product, when first launched, are not recognized or are underestimated. This is what happened in the US with many successful e-tailers, like Amazon. In these cases, the reaction of the traditional players is put off until sales and demand clearly skyrocket. Even then the enterprise does not always react, chalking up the new product’s success to a passing fad, or fearing to jeopardize or cannibalize sales of its current products (this was the problem of Barnes & Noble before reacting to Amazon’s success). In these cases, the imitator only reacts when the changes in the market show a clear risk of suffering a loss in market share or losing dominance.
b) Strategic choice to wait until success is clear. In this case the enterprise makes a conscious choice to wait patiently for the innovation’s market development. This is the usual choice of a company that, with a significant stock of resources and skills, prefers to leave the cost of market development to the innovating company. This was the strategy adopted by TIM and Vodafone for the Italian third-generation mobile market, where the first mover was the new comer Hutchison 3G. Firms that choose this way strategically plan their entrance for the first error, or for when sales of the new product begin to take off, so they can exploit their speed in reacting and ability to imitate. It is clear that the critical element of this choice is time. These enterprises, even if they avoid the typical risks entailed in being the first mover, are still open to a different risk, i.e. waiting too long: their imitation may come too late, when margins, economies of experience or the number of early imitators have already seized most of the opportunities opened up by the innovator. Ideally, from this standpoint, the desired logic should be to study the probability of an innovation’s success by time t and the potential profits obtainable from an imitation strategy in that scenario and at that moment (Levitt, 2006). So imitation may become a systematic choice. In any case it should be accompanied by fine-tuning superior skills in exploiting imitation-speed economies, both in terms of technical and production development and in terms of process flexibility and, more generally, in terms of time-to-market. An excellent example of this is the clothing company Zara.
A systematic imitation, on the contrary, runs the risk of being less effective if the innovations have many set-up issues, considerable need for capital, or products that are hard or impossible to copy quickly.
c) Strategic choice of imitation even when success is not yet certain. In this interesting and actually very common case, there may be several reasons behind such a choice (Lieberman & Asaba, 2006):
c-1) Imitation based on implicit information. Where there is high uncertainty, some enterprises may observe the actions of the first movers and, especially if the latter are well-known players, decide to imitate them regardless of the private information in their own possession. Naturally, the assumption of the imitators is that the first movers have better information. For this reason, once a critical mass of imitating enterprises is reached, there may be what some economists refer to as “information cascades” (Bikhchandani, Hirshleifer & Welch, 1992). A clear example is how many companies rushed into the e-business market until the bubble burst at the beginning of the 2000s. Where there are information cascades, since the imitation process is essentially based on the belief that the first movers are going in the right direction, an industry is exposed to high risks, and when that direction is proven to be wrong, society as a whole may suffer considerable costs.
A similar phenomenon, again under environmental uncertainty, is mimetic isomorphism, studied by several organizational sociologists (see, in particular, the Institutional Theory put forth by DiMaggio & Powell, 1983). In this case it is the organizational model that is imitated. The strategy, again, would enable savings in the costs of searching for the best solution as an answer to existing uncertainty. However, the process often becomes more ritual than rational.
Indeed, even here the imitated structure may not be the best, though this is much slower and more difficult to assess than assessing the imitation of a product, for instance.
c-2) Legitimization or status. As above, certain enterprises – or certain managers – follow the behavior of others, first of all to seek legitimacy from institutions and their public, i.e. to attach their status to that of other, clearly well-established operators (Institutional Theory). It is the same as emulation and aspiration in social consumption mechanisms. In this sense the imitation becomes a signal, aimed at avoiding a negative reputation on the market (the economic theory of herd behavior). When situations remain uncertain, some studies have shown how the first imitators are guided by more rational motivations, as above, whereas later imitators have more symbolic motivations, such as the ones just outlined (e.g., Fligstein, 1985, 1991).
c-3) Preemptive defense of the status quo and reducing rivalry levels. Another reason for an incumbent to decide to follow an innovator even before knowing the outcome of their moves is, using an analogy from football, to mark them very closely. If an innovator decides, for instance, to explore a new market segment or a new geographical area, the imitator may assess whether to do the same immediately so that, regardless of the action’s success, the relative positions remain the same. The aim of this line of action, undertaken to defend competitive equality preemptively, is naturally to reduce risk for the enterprise. We can observe a similar behavior within the car industry: Toyota is currently the leader in marketing hybrid technologies and, even if the success of this trajectory is still uncertain, its direct competitors are starting to invest in the same direction.
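The information-cascade mechanism in case c-1 (Bikhchandani, Hirshleifer & Welch, 1992) can be illustrated with a deliberately simple simulation. The decision rule below — counting every observed adoption as if it were a private signal — is a naive sketch of herding, not the full Bayesian model from the paper, and all parameters are hypothetical.

```python
import random

def simulate_cascade(n_agents=30, q=0.7, truth=1, seed=3):
    """Sequential adoption with private signals (a naive Bikhchandani-style sketch).
    Each agent receives a private binary signal equal to `truth` with probability q,
    observes all earlier agents' public choices, and adopts (1) or rejects (0) by
    a simple majority count; ties are broken by the agent's own signal."""
    random.seed(seed)
    choices = []
    for _ in range(n_agents):
        signal = truth if random.random() < q else 1 - truth
        adopts = sum(choices)                    # earlier agents who adopted
        rejects = len(choices) - adopts          # earlier agents who rejected
        votes_for = adopts + (signal == 1)
        votes_against = rejects + (signal == 0)
        if votes_for > votes_against:
            choices.append(1)
        elif votes_against > votes_for:
            choices.append(0)
        else:
            choices.append(signal)               # tie: follow own signal
        # Once one side leads by 2 or more, a single private signal can no
        # longer flip the majority: an information cascade has locked in.
    return choices

print(simulate_cascade())
```

Under this rule the lead between adopters and rejecters performs a random walk until it reaches ±2, after which every later agent ignores their own signal and follows the herd — including, with some probability, a herd that locked onto the wrong choice, which is exactly the social cost the text describes.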
In some cases, especially in concentrated markets, mutual imitation may become a form of tacit tolerance: “divergent strategies reduce the ability of the oligopolists to coordinate their actions tacitly (…) reducing average industry profitability” (Porter, 1979, p. 217, quoted in Lieberman & Asaba, 2006). The more recent idea of “mutual forbearance” (Bernheim & Whinston, 1990) comes from this: certain enterprises would imitate each other’s presence in different markets so as to have more points of contact, which would facilitate collusion since it increases the scope for counter measures (at present, there is not much empirical evidence on the effectiveness of this strategy).
In all the cases of imitation examined above, it may happen that the imitators are the ones who gain advantage from the initial efforts of the innovative enterprise, in terms of finance and market domination. For example, many innovations in the domestic electronics market (HiFi, VCR, etc.) were developed in the laboratories of Philips; however, their competitors, induced imitators, were the ones able to take extensive advantage of the new technologies. The reasons for this are shown below. They provide the core motivation for the imitation game, in addition to the specific considerations outlined above.
First of all, the enterprises that justify the imitation game back their arguments with the explorer metaphor, arguing that the pioneer, like the explorer, must sustain the high risks and uncertainties related to research in unknown and, in many cases, hostile territories. The explorer may acquire public recognition for their discoveries, but the benefits of their research may actually be reaped by those who are able to take advantage of the discoveries. Indeed, the imitator may:
learn from the mistakes of the innovator and not spend resources on developing products without market potential, instead drawing on the experience of others, i.e.
their products and services, which better meet the needs and benefits expressed by customers;
avoid or reduce the financial efforts that are instead sustained by the first mover during the initial phases of research and development and in systems engineering;
focus attention and resources on the development of the technological process instead of the technology of the product or service, thereby improving both quality and production efficiency;
avoid the trap of inertia innovators may fall into, where they are less inclined to make improvements and incremental moves to the innovation;
avoid the costs of customer education and awareness of new products, which are instead sustained by the innovator;
take advantage of experience gained in other markets. The ease with which an imitator enters a new market also depends on the experience and knowledge gained in the manufacture and sales of products related or near to the innovation. These experiences, technologies, marketing capabilities and reputation increase the ability and speed of reaction to the initiative of the first mover;
The imitation game is also justified when acknowledging that the idea of the innovation can be assimilated into a process of incremental improvements rather than a radical technological discon- tinuation, fruit of an inventor’s creativity, who dreams up the product or service and then develops it on his own from both a technological and commercial standpoint. Even if that may happen, it should be noted that it is rather rare. Products resulting from radical technological breakthroughs are not dropped on the market. They are usually the result of a long incremental process, the fruit of constant technological and production improvements that follow up until the launch of the first product offered by the pioneer who invented it. This is because in the beginning pioneer products have technical imperfections and are still primitive in their features and design. They can not guar- antee performance levels that satisfy the expectations of target customers. These defects allow the imitator (later entrant) to enter at a later stage with products and services that can compensate and satisfy market needs with innovations and improvements. Enabling Factors: What Favors Imitation The speed in which newcomers or incumbents can copy the innovation generated and spread by the first mover depends on several factors: absence of legislation to protect manufacturing secrets or patents for the innovation; encouragement from customers for other manufacturers to become secondary or third tier sources for their procurement; suppliers that may provide and spread raw materials and critical technologies for the manufacture of the new product or service; difficulty in imitating the production process; spreading and gaining knowledge of the innovation; inability (or will, for induced imitation) of the first mover to build entrance barriers against potential rivals. 
In addition, as mentioned, environmental uncertainty is also a factor that may favor imitation, as seen in mimetic isomorphism and information cascades. The last condition that may favor particular imitation strategies is a concentrated or static competitive context: as shown above, the reason lies in the preemptive defense of the status quo that a systematic and mutual imitation can guarantee under these circumstances.
The benefits gained by the innovator, in any case, are not substantial if there are no solid and strong barriers to stop rival imitation. From this point of view, the effectiveness of mechanisms to protect innovation benefits is the most troubling aspect for the innovative enterprise. Empirical studies have shown that legal protection based on patents or licenses is less effective than often assumed. In the opinion of enterprises belonging to 12 industries, these forms of protection are suitable to defend innovations from imitation in the following cases (Teece, 1987):
65%, pharmaceuticals industry;
30%, chemicals industry;
10-20%, oil and steel industries;
less than 10%, in the industries of industrial machinery, textiles, automotive, tires, office supplies, etc.
Other studies have shown that 60% of the innovations and patents registered are imitated within a span of four years, and that the development costs of the imitator are less than 35% of those sustained by the innovator (Mansfield, Schwartz, & Wagner, 1981). Further studies show that for innovations not protected by law, imitation time is reduced to less than a year (Jacobson, 1992).

Successful Strategies for Product Imitation
Imitation can be pursued through several kinds of market conduct necessary for the follower to stimulate and persuade customers to run the risk of abandoning the products or services of the innovator (Schnaars, 1994).
Successful strategies can be reduced to the following:
1) Exercise market power, which enables competing with a product and the same position as that of the innovator.
2) Reposition the innovator’s product. The product is essentially the same, but it is positioned based on one of the following: lower price and/or quality; higher quality; new applications.
3) Lateral entrance, i.e. competing with a similar product, but in different markets.
The options introduced are outlined below:
1) The use of market power as the primary conduct of the imitator occurs when the follower decides to enter the market created by the innovator, breaking down the barriers set up for protection, using all the critical mass of their resources and market drive. A perfect example is the reaction of Microsoft to Netscape Navigator: they used all their market power to diffuse their browser, Explorer.
2.1) Repositioning can first of all take place with a move based on lower price and/or quality, following three different conducts:
same quality, lower price: the imitator offers the market a replica of the innovator’s product but at a more competitive price. This is the strategy of Lexus, which imitates Mercedes and BMW, providing similar quality but at relatively lower prices;
downgrading: instead of imitating the innovator, the imitator downgrades features in order to offer certain large market segments a version of the innovator’s product at a more accessible price. It is the strategy of Funai, for example, offering LCDs and other electronics at discounted prices, sometimes under the Emerson, Sylvania, and Symphonic brands. Funai recently became the supplier of TVs manufactured under Walmart’s house brand, Durabrand.
2.2) For repositioning with higher quality, the goal of the imitator is to be considered the second best.
To this end, their strategy is not to clone the innovator’s products, nor to compete based on price, but to arouse the interest of customers through incremental im- provements to the pioneer’s product. This is usually the case of second generation: the ef- forts of the imitator are geared towards searching for ways to strengthen features or per- formance levels of the product or service, then launching on the market a second genera- tion of products known for their improvements made over previous versions. Google, for example, was a second generation search engine, organizing its results by peer ranking. Providing better performances, they overcame the first generation products, like Yahoo; 2.3) Another way to reposition an imitation is through product reconceptualization: the imitator exploits the innovation of the product, but changes its intended use and applica- tion. This is possible by redefining the structural features or performance of the product being imitated. As anticipated, this is the case of the Indian Tuc Tuc, as a reconceptuali- zation of the Italian Ape Piaggio; 3) Lateral entrance, as mentioned, is the strategy followed by the challenging imitator that attempts to satisfy the same needs, but in markets still untapped by the innovator. “Lino’s Coffee”, for instance, is a new chain following the strategy of “Starbucks”, but in Italy, that is the only western market untapped by the American giant. Conclusion The moves made by the imitator, as seen, are made with a wide range of options that require mar- ket drive and resources that help the follower in eroding and breaking down the competitive ad- vantage acquired by the innovator. The first movers, according to their market position, can pre- Problems and Perspectives in Management / Volume 5, Issue 3, 2007 (continued) 205 vent or defend their market dominance by building or raising preemptive barriers. 
Nonetheless, the article has reviewed a number of real cases, circumstances, and strategies that may still favor the imitator, which may eventually come to lead the competition.
"Strategies of Imitation: An Insight"

Near-infrared digital photography to estimate snow correlation length for microwave emission modeling
Ally Mounirou Toure (1,*), Kalifa Goïta (1), Alain Royer (1), Christian Mätzler (2), and Martin Schneebeli (3)
(1) CARTEL, Département de Géomatique Appliquée, Université de Sherbrooke, 2500 Blvd Université, Sherbrooke, Québec J1K 2R1, Canada
(2) Institute of Applied Physics (IAP), University of Bern, Sidlerstrasse 5, 3012 Bern, Switzerland
(3) WSL Institute for Snow and Avalanche Research (SLF), Flüelastrasse 11, 7260 Davos-Dorf, Switzerland
* Corresponding author: ally.toure@usherbrooke.ca
Received 9 May 2008; revised 3 November 2008; accepted 22 October 2008; posted 14 November 2008 (Doc. ID 95774); published 12 December 2008
The study is based on experimental work conducted in alpine snow. We made microwave radiometric and near-infrared reflectance measurements of snow slabs under different experimental conditions. We used an empirical relation to link the near-infrared reflectance of snow to the specific surface area (SSA), and converted the SSA into the correlation length. From the measurements of snow radiances at 21 and 35 GHz, we derived the microwave scattering coefficient by inverting two coupled radiative transfer models (the sandwich and six-flux models). The correlation lengths found are in the same range as those determined in the literature using cold laboratory work. The technique shows great potential for the determination of the snow correlation length under field conditions. © 2008 Optical Society of America
OCIS codes: 280.0280, 280.4991, 120.5630, 120.5700.
0003-6935/08/366723-11$15.00/0 © 2008 Optical Society of America. 20 December 2008 / Vol. 47, No. 36 / APPLIED OPTICS. Source: https://doi.org/10.7892/boris.37624
1. Introduction
There are various radiative transfer models designed to simulate snow emission in the microwave range, e.g., the Helsinki University of Technology snow emission model (HUT, [1]), the microwave emission model of layered snowpacks (MEMLS, [2]), the dense medium radiative transfer model (DMRT, [3]), and the strong fluctuation theory (SFT, [4,5]). HUT and DMRT use the snow grain size as a structure parameter; MEMLS and SFT, on the other hand, use the correlation length to quantify the snow microwave scattering. For practical purposes, snow hydrologists have defined snow grain size as the greatest diameter of the prevailing grains in the snow layer [6,7]. However, it has been shown that using the maximum extent of characteristic grains can lead to a strong overestimation of the effective grain size applicable in scattering models [8]. This optical grain size corresponds to the diameter of noncontacting spheres with the same surface area and the same ice volume as the snowpack under consideration, and thus with the same specific surface area (SSA) [9]. For example, the effective size of long needles is quite small, because the value is close to the diameter of the needles; for disks of large diameter, the optical grain size could also be small, depending on their thickness [10]. Stellar snow crystals can have a maximum extent of 1 cm, but their thickness is only 20 to 40 μm. The optical grain size and the correlation length are the appropriate parameters to describe electromagnetic wave scattering [8]. The correlation length can be understood as a measure of the average
distance beyond which variations of the dielectric constant in one region of space become uncorrelated with those in another region [11]. At distances greater than the correlation length, the values can be considered random. The correlation length pc is proportional to the optical grain size Do, and both are inversely proportional to the SSA [12]. Also, in the computation of the microwave scattering of a granular medium such as snow, the physically meaningful structure parameters are pc and Do [8,10,13,14]. One way to determine these quantities is through the measurement of SSA. Until recently, there was no easy way to determine the SSA in practice under field conditions. Because of that, there have been attempts to establish empirical relationships between the correlation length and the snow grain size; these approaches are approximations based on observations made at one site, and the resulting empirical formulas cannot be used in all circumstances. The concept of using the scattering of light to determine SSA, and thus Do, was first suggested by Giddings and Lachapelle [15]; Warren and Wiscombe [16] provided the first confirmation by using Mie theory to describe the optical properties of snow in the solar spectrum. The dependence of snow reflectance on grain size was used in optical remote sensing to map grain size in the surface snow layer [17,18]. The fact that SSA can be directly converted to optical grain size was shown by Mitchell [19]. The traditional method of determining SSA involves cold-laboratory work, which is time consuming and laborious [20,21]. Therefore, an important model-input parameter has so far been unavailable in the field. Advances in digital camera technology have made it possible to measure snow reflectance using a simple digital camera fitted with an appropriate filter. An exponential relationship was found in [22] between the SSA and the reflectance of snow in the near-infrared (NIR) spectrum.
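The exponential SSA-reflectance relation of [22], given with its fitted constants later in Section 2 as Eq. (2), can be sketched numerically as follows. This is a minimal illustration, not code from the paper; the reflectance values in the usage lines are assumptions chosen only to mirror the snow types shown later in Fig. 3.

```python
import math

def ssa_from_nir(nir_percent, A=0.017, b=12.222):
    """Exponential SSA-reflectance relation of [22] (Eq. (2) of the paper):
    SSA = A * exp(nir / b), with SSA in mm^-1 and nir (NIR reflectance) in %.
    Default A and b are the fitted constants quoted with Eq. (2)."""
    return A * math.exp(nir_percent / b)

# Illustrative (assumed) reflectances: a bright, fine-grained new-snow
# surface yields a much larger SSA than a darker refrozen layer.
ssa_new = ssa_from_nir(95.0)  # high NIR reflectance -> SSA near 40 mm^-1
ssa_old = ssa_from_nir(75.0)  # low NIR reflectance  -> SSA near 8 mm^-1
```

Note that because the relation is exponential, the calibration of the pixel intensities (Eq. (1)) matters most at high reflectance, which is consistent with the error growing with SSA.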
Matzl and Schneebeli used this relationship to determine the SSA of a snow profile with an uncertainty of less than 15%. A contact illumination probe coupled to a field spectroradiometer was used by Painter et al. [23] to determine the optical grain size under field conditions with greater precision than the NIR technique. However, the NIR technique provides much higher spatial resolution imagery of SSA (a 1 mm thick layer can be clearly identified) than the spectrometer measurements (2 cm spatial resolution). This characteristic makes the NIR method interesting because it can also be used to accurately determine the snow stratigraphy and to locate ice lenses, hard crusts, and thin layers in the profile [22]. All these parameters greatly influence the snow microwave emission. Furthermore, NIR photography is simpler and more convenient for field work: the equipment is basic, only a digital camera and a Spectralon panel for calibration are needed, and no source of artificial light is required. In this study, we considered the NIR technique for the determination of the snow SSA and its correlation length. To check whether the NIR reflectance can be used to determine the correlation length for use in snow microwave emission modeling, we carried out experimental work at the snow station of the Swiss Federal Institute for Snow and Avalanche Research at Weissfluhjoch, Davos. We performed two types of experiment: 1. The first was to measure the snow slab brightness temperature at 21 and 35 GHz under different experimental conditions, and from that to retrieve the absorption and scattering coefficients through the inversion of the six-flux radiative transfer equations [24]. 2. The second was to determine the SSA of the snow samples using NIR digital photography, and from that information to compute the correlation length of the samples.
2. Experiments
A.
Geographical Location of the Site
The site of the measurements is Weissfluhjoch, Davos, Switzerland, in the Dorftälli, a locally flat area at 2540 m above sea level. This is the location of the snow station of the Swiss Federal Institute for Snow and Avalanche Research, where regular weather and snow parameter measurements are made. The measurements were conducted from 12 to 22 April 2006, i.e., just before the spring melt began.
B. Material
We used two portable, linearly polarized radiometers operating at 21 and 35 GHz (Table 1), built at the Institute of Applied Physics, University of Bern. A blackbody box kept at ambient temperature was used to calibrate the radiometers. An aluminum plate served as a perfect reflector, a 5 cm thick blackbody plate served as a perfect absorber of microwave radiation, and a 3 cm thick styrofoam plate was used to thermally insulate and help support the slab. The output voltage of the radiometers was recorded by a data logger; the voltage is linearly related to the brightness temperature (Tb). The digital camera used for the NIR photography was a Sony Cybershot DSC-P200. The charge-coupled device (CCD) used in the camera is sensitive to NIR light; a filter was placed over the CCD to block the visible light. The wavelength of the detected light is between 840 and 940 nm. The photos were taken under diffuse light with the camera placed on a tripod approximately 1 m from the sample. When there was direct radiation, a white curtain was used to create diffuse light.

Table 1. Properties of the Portable Dicke Radiometers
Property | Radiometer 1 | Radiometer 2
Center frequency f [GHz] | 21 | 35
Bandwidth Δf [GHz] | 0.8 | 0.8
Integration time t [s] | 1-8 | 1-8
Sensitivity [K] | 0.1 | 0.1
3 dB beam width φ | 9° | 9°
Horn antenna | Pyramidal | Pyramidal

C. Tb Measurement Procedures
The principal setup of the experiment is shown in Fig. 1.
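Since the recorded voltage is linearly related to Tb, each radiometer can be calibrated from two reference measurements, the ambient-temperature blackbody box (hot reference) and the sky (cold reference), both of which are measured in this experiment. The sketch below is an illustration of such a two-point calibration; the function names and all numeric values are assumptions, not quantities from the paper.

```python
def linear_calibration(v_hot, tb_hot, v_cold, tb_cold):
    """Two-point (hot/cold) calibration of a Dicke radiometer whose output
    voltage V is linear in brightness temperature: Tb = offset + gain * V.
    Hot reference: blackbody box at ambient temperature.
    Cold reference: sky brightness temperature."""
    gain = (tb_hot - tb_cold) / (v_hot - v_cold)
    offset = tb_hot - gain * v_hot
    return offset, gain

def voltage_to_tb(v, offset, gain):
    """Convert a recorded voltage to a brightness temperature in kelvin."""
    return offset + gain * v

# Illustrative (assumed) reference readings:
offset, gain = linear_calibration(v_hot=2.8, tb_hot=275.0,
                                  v_cold=0.4, tb_cold=20.0)
```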
The radiometers were mounted on a frame that allows easy variation of the incidence angle and change of polarization. The snow sample is placed on the styrofoam plate on a metal table; between the snow and the styrofoam, an absorber or a metal plate was inserted. The samples were placed near the radiometer antennas to ensure that all the measured radiation came from the sample surface and volume. A box at the rear contains the device for adjusting the voltage of the power supply, as well as a car battery as a spare power supply in case of failure of the central power generator of the snow station. The output voltages of the radiometers were recorded by a datalogger [polycorder (PC)].
a. Removal of the slab. The preparation of the snow samples was one of the most delicate tasks of the measurement procedure. The method requires homogeneous samples of dry snow so that their microwave properties can be studied as a function of their structure. To measure the brightness temperature (Tb) of the snow, we first had to mechanically remove 70 × 60 cm blocks of snow with thicknesses varying between 9 and 20 cm. Snow was excavated at a previously undisturbed site to ensure that the layers were as homogeneous as possible. The snowpack was examined to find layers of sufficient thickness. Once a layer was chosen, a sample was prepared by cutting the layer with metal plates, taking great care to keep the slab undisturbed. The sample was then placed on a 3 cm thick styrofoam plate to help support it. Tests done by Wiesmann et al. [24] showed that the transmissivity of the styrofoam is better than 0.99 in all conditions for the frequency range used here. The styrofoam had the advantage of thermally insulating the samples from the various underlying bases during handling and measurements. In the case of new snow, the handling was very difficult, even on the styrofoam plate, because those samples were likely to break apart or to show fissures, requiring the preparation of new samples.
The surface of the samples was then smoothed with a straight metal edge, a process that had to be performed very accurately. The slabs were then placed on a platform in front of the radiometers for measurement (Fig. 1). Thirty-three samples were used for the experimental measurements. The radiometric measurements were taken immediately after each sample was extracted. After the radiometric measurements were finished, the snow temperature was determined at different points near the center and the edges of the samples; the maximum temperature difference did not exceed 2 K. The snow density was also measured and ranged from 69 to 400 kg/m³.
b. Measurements. The brightness temperatures of the snow slabs were measured in two different situations to determine the reflectivity and the transmissivity of the samples: (a) the snow slab on the aluminum plate at a 45° angle of incidence, at both vertical and horizontal polarization; (b) the snow slab on the absorber at a 45° angle of incidence, at both horizontal and vertical polarization. For calibration purposes, the sky brightness temperature and that of the blackbody box at ambient temperature were also measured. The depth at which each sample was taken was also recorded.
D. Near-Infrared Photography
Before taking the NIR photos, the side of the slab was carefully smoothed with a blade. The calibration targets were then placed on the sample (Fig. 2). The targets were manufactured from Spectralon standards with NIR reflectances of 50% and 99%. The distance between the camera and the profile wall was approximately 1 m. To compensate for tilting or turning of the camera with respect to the surface of the slab, a geometric correction of the digital image was needed. The correction required at least three targets to be inserted on the profile with a known distance from each other and known geometry (Fig. 2).
The nearest neighbor method was used to transform the target coordinates in the digital image congruently with their original geometry and distances [22]. The NIR reflectance (nir) of the snow was calibrated with regard to the pixel intensity of the targets:

nir = a + b·i,  (1)

where i is the intensity of each pixel, and a and b are determined by a linear regression on the pixel intensities of the gray-scale standards.
Fig. 1. (Color online) Experimental setup: the radiometer is mounted on a frame directed at the snow sample (9 to 20 cm thick), which is placed on a 3 cm thick styrofoam plate on a metal table. Between the snow and the styrofoam, an absorber or a metal plate was inserted. The polycorder (PC) and the battery box are also shown.
E. Correlation of Reflectance and SSA
It was found by Matzl and Schneebeli [22] that the correlation between the NIR reflectance of snow samples, calculated from the calibrated digital image, and their SSA is about 90%. The spatial resolution of the imagery is very high: in images covering between 0.5 and 1 m², even layers of 1 mm thickness can be documented and measured (Fig. 3). The error increases with increasing SSA, from 3% for SSA below 5 mm⁻¹ to about 15% for SSA values above 25 mm⁻¹:

SSA = A·e^(nir/b),  (2)

where nir is in %, A = 0.017 ± 0.009 mm⁻¹, and b = 12.222 ± 0.842 [22].
3. Radiometric Properties of the Snow Slab
The aim of the radiometric property studies is to understand how the various Tb are related to the emissivity (e), reflectivity (r), and transmissivity (t) of a snow slab, and eventually to the snow scattering and absorption coefficients. To connect the measured Tb to the snow internal scattering and the reflections at the interfaces, we used the simple sandwich model based on radiative transfer, proposed and developed by Wiesmann et al. [24]. We made two types of measurement:
a. Brightness temperature of snow on metal plate (Tbmet).
Figure 4(a) presents the situation of a snow slab of thickness d on a metal plate. The upwelling brightness temperature Tbmet is composed of the snow radiation emitted at the physical snow temperature Tsnow, of the sky radiation Tbsky that is reflected from the slab (total reflectivity rmet), and of Tbsky that is transmitted through the snow slab and reflected by the metal plate. The upwelling brightness temperature of the snow on the metal plate is

Tbmet = (1 − rmet)·Tsnow + rmet·Tbsky,  (3)

where rmet is the total reflectivity of the snow on the metal plate, which comprises the internal reflectivity of the slab and that of the interface, and can be expressed as follows:

rmet = ri + (1 − ri)²·Rmet,  (4)

where Rmet is a function of the internal transmissivity (t) and reflectivity (r) and of the interface reflectivity (ri):

Rmet = [r + t²/(1 − r)] / [1 − r·ri − ri·t²/(1 − r)].  (5)

Fig. 2. (Color online) Preparation of the snow slab for the NIR photography. Four gray-white targets are placed on a smoothed snow side.
Fig. 3. (Color online) NIR photo of a 13 cm thick inhomogeneous slab lying on blackbody and styrofoam slabs: the top layer is newly fallen snow with high NIR reflectance (SSA = 38.5 mm⁻¹), the middle layer is refrozen snow with very low NIR reflectance (SSA = 7.7 mm⁻¹), and the bottom layer consists of rounded snow (SSA = 17.5 mm⁻¹).
b. Brightness temperature of snow on absorber (Tbabs). Similar to the case of the snow on the metal plate, the upwelling brightness temperature of the snow on the absorber Tbabs is [Fig.
4(b)]

Tbabs = (1 − rabs)·Tsnow + rabs·Tbsky,  (6)

where it is assumed that the snow and absorber temperatures are the same, and rabs is the total reflectivity of the snow on the absorber, including the internal reflectivity of the slab (r) and that of the air/snow interface (ri):

rabs = ri + (1 − ri)²·Rabs,  (7)

where Rabs, similarly to Rmet, is expressed as follows:

Rabs = [r + ri·t²/(1 − r·ri)] / [1 − r·ri − (ri·t)²/(1 − r·ri)].  (8)

The internal reflectivity (r) and transmissivity (t) can be computed from the measured Tbmet and Tbabs if ri (air/snow interface) is known. If the snow interface is considered smooth, ri can be computed using the Fresnel reflectivity formula at the given incidence angle, polarization, and dielectric slab permittivity ε. To obtain the values of r and t, we just need to solve the system of Eqs. (4), (5), (7), and (8). To link the slab internal reflectivity r and transmissivity t of our samples on the one hand to the internal scattering and absorption coefficients on the other, we used the six-flux model proposed by Wiesmann et al. [24]. The six-flux model is a simplified radiative transfer model that accounts for all the radiation propagating in the snow slab; it reduces the radiation at a given polarization to six flux streams along and opposed to the three principal axes of the slab. The horizontal fluxes represent the trapped radiation whose internal incidence angle θ is greater than the critical angle for total reflection θc:

θ > θc = arcsin √(1/ε),  (9)

where ε is the relative permittivity of the slab. The vertical fluxes represent the radiation that does not undergo total reflection; this is the radiation that interacts with the space above the snowpack.
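The forward direction of the sandwich model, Eqs. (3)-(8), can be sketched as below: given assumed internal properties (r, t) and interface reflectivity ri, the functions return the total reflectivities rmet and rabs and the corresponding upwelling Tb. The paper inverts this system (solving for r and t from the measured Tbmet and Tbabs); all numeric values in the usage lines are illustrative assumptions, not measurements from the paper.

```python
def r_total_metal(r, t, ri):
    # Eqs. (4)-(5): total reflectivity of the slab lying on a metal plate.
    R_met = (r + t**2 / (1 - r)) / (1 - r * ri - ri * t**2 / (1 - r))
    return ri + (1 - ri)**2 * R_met

def r_total_absorber(r, t, ri):
    # Eqs. (7)-(8): total reflectivity of the slab lying on a blackbody.
    R_abs = ((r + ri * t**2 / (1 - r * ri))
             / (1 - r * ri - (ri * t)**2 / (1 - r * ri)))
    return ri + (1 - ri)**2 * R_abs

def tb_up(r_tot, t_snow, tb_sky):
    # Eqs. (3)/(6): upwelling brightness temperature of the slab.
    return (1 - r_tot) * t_snow + r_tot * tb_sky

# Illustrative slab (assumed r, t, ri): the metal base reflects the
# transmitted radiation back, so rmet exceeds rabs and the metal-backed
# slab appears radiometrically "colder" under a cold sky.
rmet = r_total_metal(r=0.2, t=0.5, ri=0.05)
rabs = r_total_absorber(r=0.2, t=0.5, ri=0.05)
```

A useful sanity check on the algebra: for an opaque slab (t = 0) the base cannot be seen, and the two total reflectivities coincide.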
In the case of plane-parallel and isotropic slabs, the six-flux model reduces to the well-known two-flux model, and the two-flux absorption coefficient γa′ and scattering coefficient γb′ can be written in terms of the six-flux parameters:

γa′ = γa·[1 + 4γc/(γa + 2γc)],  (10)
γb′ = γb + 4γc²/(γa + 2γc),  (11)

where γa is the six-flux absorption coefficient, γb is the six-flux backscattering coefficient, and γc is the corner scattering coefficient (the coefficient for coupling between the vertical and the horizontal fluxes). Wiesmann et al. [24] also established that, for a snow layer of thickness d, the internal reflectivity (r) and transmissivity (t) can be expressed as

r = r0·(1 − t0²)/(1 − r0²·t0²),  (12)
t = t0·(1 − r0²)/(1 − r0²·t0²),  (13)

where the one-way transmissivity t0 of the slab is

t0 = exp(−γ·d/cos θ),  (14)

and the reflectivity r0 of a slab of infinite thickness is

r0 = γb′/(γa′ + γb′ + γ),  (15)

a function of the two-flux absorption and scattering coefficients (γa′ and γb′) and of the damping coefficient γ, which can itself be expressed in terms of γa′ and γb′:

γ = [γa′·(γa′ + 2γb′)]^(1/2).  (16)

Fig. 4. (Color online) Principles of snow sample measurements: (a) brightness temperature Tbmet of snow on a metal plate and (b) brightness temperature Tbabs of snow on an absorber. Corresponding values of the blackbody radiation Tbbb were also measured.
For isotropic scattering, the total scattering coefficient γs is given by the sum

γs = 2γb + 4γc,  (17)

and the ratio 2γc/γb can be expressed as

2γc/γb = x/(1 − x),  (18)

where

x = [(ε − 1)/ε]^(1/2).  (19)

Equations (10)-(19) give the full picture of how the internal reflectivity r and transmissivity t are linked to the six-flux scattering coefficients γb, γc, and γs. We solved the system of Eqs. (10)-(19) to derive the scattering coefficients.
4. Results and Discussion
The scatterplot in Fig.
5 shows a trend for SSA to increase with decreasing density. The plot shows characteristic SSA-density clusters for newly fallen snow (density ρ < 200 kg/m³, shown as triangles), rounded grains (200 < ρ < 300 kg/m³, shown as filled circles), and compacted rounded-grain snow (ρ > 300 kg/m³, shown as diamonds). The same clusters were also found by Matzl and Schneebeli [22], and it is assumed that this loose relationship is due to the fact that metamorphism and sintering often have an impact on both SSA and density at the same time. In fact, Kerbrat et al. [21] have shown that SSA is not correlated with snow density. The loose relation that seems to suggest a correlation between SSA and density is a side effect: on average, when the snowpack increases in density, its SSA decreases, but that is not always the case. Rosenfeld and Grody [25] also demonstrated that from midseason, an internal restructuring of the snow occurs that can dramatically affect SSA even though the total mass of individual particles, as well as the total density of the snowpack, remains constant. The representation of the scattering coefficient at 21 and 35 GHz at vertical polarization versus SSA [Figs. 6(a) and 6(b)] shows an exponential decrease of the scattering coefficient with the snow specific surface area. The correlation coefficients between the scattering coefficient at 21 and 35 GHz at vertical polarization and the SSA are −0.57 and −0.64, respectively. New snow with high SSA has a low scattering coefficient, and settled snow with rounded grains has low SSA and a high scattering coefficient. Samples 15 and 19, taken at the bottom of the snowpack and made up of coarse grains, have very low SSA and very high scattering coefficients. The same trend can be seen at horizontal polarization [Figs. 6(c) and 6(d)]. The correlation coefficients between the scattering coefficient at 21 and 35 GHz at horizontal polarization and the SSA are −0.64 and −0.77, respectively.
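An exponential decrease such as the one described above can be quantified by a least-squares fit of γs = α·exp(−β·SSA) in log space. The following is only an illustrative sketch: the coefficient names α and β and the synthetic sample data are assumptions, not values or fits from the paper.

```python
import math

def fit_exponential(ssa, gamma_s):
    """Fit gamma_s = alpha * exp(-beta * ssa) by linear least squares
    on log(gamma_s) versus ssa."""
    ys = [math.log(g) for g in gamma_s]
    n = len(ssa)
    mx = sum(ssa) / n
    my = sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(ssa, ys))
             / sum((x - mx) ** 2 for x in ssa))
    alpha = math.exp(my - slope * mx)
    beta = -slope
    return alpha, beta

# Synthetic (assumed) data following an exact exponential decrease,
# SSA in mm^-1 and scattering coefficient in m^-1:
ssa = [5.0, 10.0, 15.0, 20.0, 25.0]
gam = [30.0 * math.exp(-0.1 * s) for s in ssa]
alpha, beta = fit_exponential(ssa, gam)  # recovers alpha ~ 30, beta ~ 0.1
```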
Figure 7 shows the scattering coefficient at 21 and 35 GHz at vertical polarization [Figs. 7(a) and 7(b)] and at horizontal polarization [Figs. 7(c) and 7(d)] versus snow density. As expected, there is no correlation between the scattering coefficients and the snow density: denser samples are not always characterized by larger scattering coefficients. These results show that the scattering is mainly dependent on SSA. Foster et al. [26] also demonstrated that the SSA is so dominant in snow microwave scattering that the cumulative contribution of other structural features is overwhelmed. According to Mätzler [8] and Debye et al. [12], the relation between the correlation length (pc) and the SSA is

pc = 4·(1 − v)/SSA,  (20)

where v = ρ/ρice is the volume fraction of the ice (ρ is the density of the snow and ρice = 917 kg/m³ is the density of ice). We used this formula to convert the SSA derived from the NIR photography into the correlation length. The plots of pc versus the scattering coefficient at 21 and 35 GHz at vertical polarization [Figs. 8(a) and 8(b)] show an increasing scattering coefficient with pc. Coarse snow grains have large correlation lengths, which translate into high scattering coefficients. The same trend can also be seen for the scattering coefficients at 21 and 35 GHz at horizontal polarization [Figs. 8(c) and 8(d)]. Similar results were found by Wiesmann et al. [24] on samples collected during the winters of 1994/1995 and 1995/1996, using the same methodology to extract the scattering coefficient and cold laboratory measurements to determine pc.
Fig. 5. (Color online) Scatterplot of SSA versus snow mean density. Three different types of snow are distinguished: snow with density ρ < 200 kg/m³ (shown as triangles), snow with density 200 < ρ < 300 kg/m³ (shown as filled circles), and snow with density ρ > 300 kg/m³ (shown as diamonds).
The plots of
correlation length versus scattering coefficient at vertical and horizontal polarization for the samples with densities above 200 kg/m³ are fitted with a power law:

γsp = d2p·(pc)^c3p,  (21)

shown by the solid lines. The parameters d2p and c3p (p denotes vertical or horizontal polarization) and their equivalents found by Wiesmann et al. [24] (d2wie, c3wie) are provided in Table 2. The pc we found are in the same range as those of Wiesmann et al. [24], but our scattering coefficients are generally higher than theirs, and the power law fits of [24] are steeper than those of our data (Fig. 8). A first possible reason for these discrepancies, especially at 21 GHz and horizontal polarization, could be mechanical damage to the 21 GHz radiometer antenna: the antenna tended to shake slightly when the polarization was changed from vertical to horizontal. The discrepancies could also arise from a difference between our experimental setup and that of [24], who used a metal frame of the same size as the samples to hold the snow slabs together. Using the frame reduces the edge effects, because the frame acts as a mirror for radiation hitting the edge and thus simulates a larger sample. More importantly, most of our samples were made up of very hard, densified snow, and the values of pc are mostly concentrated between 0.1 and 0.2 mm. The 2005/2006 winter season was exceptional: the snow depth was about 2 m, while the average snow depth usually recorded in the area is 1.5 m [27,28]. This means that the metamorphism yielded different types of snow grain. For snow density exceeding approximately 350 kg/m³, which is the case for most of our samples,
Fig. 6.
(Color online) Representation of the scattering coefficient at (a) 21 GHz vertical polarization, (b) 35 GHz vertical polarization, (c) 21 GHz horizontal polarization, and (d) 35 GHz horizontal polarization versus the SSA. Samples with density ρ < 200 kg/m³ are shown as triangles; samples with density ρ > 200 kg/m³ are shown as filled circles. The numbers to the right of the symbols indicate the sample numbers. The solid curve is the exponential fit of the dense samples.
pore spaces are very small (due to compaction) and the kinetic growth process is limited. "Hard" depth hoar develops under these conditions [29,30]. Hard depth hoar is composed of sharp, angular crystals, but they are relatively smaller and stronger (owing to a higher degree of bonding) than classic depth hoar, which is made up of larger, faceted, cup-shaped grains with thinner walls (smaller pc). These larger hollow crystals tend to grow in low-density snow with large pore spaces [29]. Hard depth hoar therefore tends to have a higher pc than the classic form. The near-infrared reflectance depends on the optical grain size, which is inversely related to SSA but independent of snow density. As already mentioned at the beginning of this section, SSA and snow density are not correlated. Snow is a random granular medium in which grains cannot be considered independent scatterers [10]. The snow emission model MEMLS is based on this principle; this is why it uses the correlation length, a parameter that takes into account both the SSA and the snow density [8,10,12]. The correlation lengths estimated with the NIR technique, together with the measured snow parameters (density, temperature, and sample thickness), were used as input parameters in MEMLS to simulate the brightness temperatures at 21 and 35 GHz.
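The conversion that produces those MEMLS inputs, Eq. (20), takes the NIR-derived SSA and the measured density. A minimal sketch follows; the sample values in the usage line are illustrative assumptions, not measurements from the paper.

```python
def correlation_length_mm(ssa_per_mm, density_kg_m3, rho_ice=917.0):
    """Eq. (20): pc = 4 * (1 - v) / SSA, with v = rho / rho_ice the ice
    volume fraction. SSA is in mm^-1, so pc is returned in mm."""
    v = density_kg_m3 / rho_ice
    return 4.0 * (1.0 - v) / ssa_per_mm

# Illustrative settled-snow sample: SSA = 17.5 mm^-1, density = 300 kg/m^3.
pc = correlation_length_mm(17.5, 300.0)  # ~0.15 mm, a settled-snow value
```

Note that for a given density, a higher SSA (finer grains) yields a smaller correlation length, consistent with new snow scattering less than depth hoar.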
Figure 9 shows the comparison between the measured brightness temperature of a snow sample placed on a metal plate and the snow emission model (MEMLS) predictions. Figures 9(a) and 9(b) are the results for 21 GHz at vertical and horizontal polarization; Figs. 9(c) and 9(d) are the results for 35 GHz at vertical and horizontal polarization. The correlation coefficient ranges from 0.55 at 21 GHz to 0.75 at 35 GHz. There can be several reasons for the scatter, especially at 21 GHz, including the fact that volume scattering is stronger at 35 GHz than at 21 GHz. Furthermore, a systematic error was added to the measurements at 21 GHz because, during the measurements, the 21 GHz radiometer was less stable than the 35 GHz instrument.
Fig. 7. (Color online) Representation of the scattering coefficient at (a) 21 GHz vertical polarization, (b) 35 GHz vertical polarization, (c) 21 GHz horizontal polarization, and (d) 35 GHz horizontal polarization versus snow density. The numbers to the right of the symbols indicate the sample numbers.
5. Conclusion
We used the relations established by Debye et al. [12] and by Matzl and Schneebeli [22], respectively, to link the infrared reflectance of snow to the SSA, and then the SSA to the correlation length, knowing the density of the snow. From the measurements of brightness temperatures at 21 and 35 GHz, we derived the scattering coefficient of the samples by inverting the sandwich and six-flux radiative transfer models. We showed that the scattering coefficient increases with the correlation length, and that the relationship can be described by a power law. The pc we found are in the same range as those determined by Wiesmann et al. [24]: new snow has a pc range from 0.06 to 0.08 mm, settled densified snow is between 0.08 and 0.2 mm, and the depth hoar range is
Fig. 8.
(Color online) Double logarithmic representation of the scattering coefficients at (a) 21 GHz vertical polarization, (b) 35 GHz vertical polarization, (c) 21 GHz horizontal polarization, and (d) 35 GHz horizontal polarization versus the correlation length. Samples with a density ρ < 200 kg⋅m−3 are shown as triangles; samples with a density ρ > 200 kg⋅m−3 are shown as filled circles. The numbers on the right of the symbols indicate the sample's number. The solid lines are the power law fit of the dense samples; the broken lines represent the power law fit found by Wiesmann et al. [24].

Table 2. Fit Parameters for the Six-Flux Scattering Coefficient Versus the Correlation Length, and the Correlation Coefficients (a)

Frequency (GHz)  Polarization  d2 [m−1]  c3     R     d2,wie [m−1]  c3,wie  Rwie
21               Vertical      2.967     0.975  0.83  68.3          2.95    0.58
21               Horizontal    2.514     0.342  0.44  –             –       –
35               Vertical      3.514     0.843  0.73  260.4         3.13    0.90
35               Horizontal    5.781     0.876  0.73  –             –       –

(a) (d2, c3, R) compared to those determined by [24] (d2,wie, c3,wie, Rwie).

between 0.2 and 0.4 mm. This confirms the theoretical relationship [Eq. (20)] established by Debye et al. [12] that connects the snow correlation length to its specific surface. The technique shows great potential for the determination of the snow correlation length under field conditions, even though it still needs some improvements (i.e., elements such as uniform illumination of the snow-pit wall). Improving the determination of the correlation length is critical for solving the many-to-one relationship between the snow water equivalent and the measured brightness temperature through the data assimilation scheme.

This work was supported by the Natural Sciences and Engineering Research Council of Canada (NSERC) and by Environment Canada (CRYSYS Program).
We are grateful to the Institute of Applied Physics (IAP), University of Bern, for the invaluable logistical support they provided us with for the field work, and to the Centre de Calcul Scientifique (CCS) of Université de Sherbrooke, especially to HuiZhong Lu, who helped us optimize the inversion program.

Fig. 9. Comparison of measured Tb of snow placed on a metal plate with snow emission model (MEMLS) predictions: (a) and (b) represent the results at 21 GHz vertical and horizontal polarization; (c) and (d) represent the results at 35 GHz vertical and horizontal polarization.

References

1. J. T. Pulliainen, J. Grandell, and M. T. Hallikainen, "HUT snow emission model and its applicability to snow water equivalent retrieval," IEEE Trans. Geosci. Remote Sens. 37, 1378–1390 (1999).
2. A. Wiesmann and C. Mätzler, "Microwave emission model of layered snowpacks," Remote Sens. Environ. 70, 307–316 (1999).
3. L. Tsang, C.-T. Chen, A. T. C. Chang, J. Guo, and K.-H. Ding, "Dense media radiative transfer theory based on quasi-crystalline approximation with application to passive microwave remote sensing of snow," Radio Sci. 35, 731–749 (2000).
4. Y.-Q. Jin, Electromagnetic Scattering Modelling for Quantitative Remote Sensing (World Scientific, 1993).
5. L. Tsang and J. A. Kong, "Application of strong fluctuation random medium theory to scattering from a vegetation-like half space," IEEE Trans. Geosci. Remote Sens. GE-19, 62–69 (1981).
6. S. Colbeck, E. Akitaya, R. Armstrong, H. Gubler, J. Lafeuille, K. Lied, D. McClung, and E. Morris, International Classification for Seasonal Snow on the Ground (University of Colorado, 1990).
7. R. L. Armstrong, A. Chang, A. Rango, and E. Josberger, "Snow depths and grain-size relationships with relevance for passive microwave studies," Ann. Glaciol. 17, 171–176 (1993).
8. C. Mätzler, "Relation between grain size and correlation length of snow," J. Glaciol.
48, 461–466 (2002).
9. T. C. Grenfell and S. G. Warren, "Representation of a nonspherical ice particle by a collection of independent spheres for scattering and absorption of radiation," J. Geophys. Res. 104, 31697–31709 (1999).
10. C. Mätzler, "Autocorrelation function of granular media with free arrangement of spheres, spherical shells or ellipsoids," J. Appl. Phys. 81, 1509–1517 (1997).
11. R. Parwani, "Correlation function," http://staff.science.nus.edu.sg/~parwani/c1/node2.html.
12. P. Debye, H. R. Anderson, and H. Brumberger, "Scattering by an inhomogeneous solid II. The correlation function and its applications," J. Appl. Phys. 28, 679–683 (1957).
13. A. Stogryn, "Correlation functions for random granular media in strong fluctuation theory," IEEE Trans. Geosci. Remote Sens. GE-22, 150–154 (1984).
14. H. Lim, M. E. Veysoglu, S. H. Yueh, R. T. Shin, and J. A. Kong, "Random medium model approach to scattering from a random collection of discrete scatterers," J. Electromagn. Waves Appl. 8, 801–817 (1994).
15. J. Giddings and E. Lachapelle, "Diffusion theory applied to radiant energy distribution and albedo of snow," J. Geophys. Res. 66, 181–189 (1961).
16. S. G. Warren and W. J. Wiscombe, "A model for the spectral albedo of snow. I: Pure snow," J. Atmos. Sci. 37, 2734–2745 (1980).
17. J. Dozier, S. R. Schneider, J. McGinnis, and F. Davis, "Effect of grain size and snowpack water equivalent on visible and near-infrared," Water Resour. Res. 17, 1213–1221 (1981).
18. A. Nolin and J. Dozier, "A hyperspectral method for remotely sensing the grain size of snow," Remote Sens. Environ. 74, 207–216 (2000).
19. D. L. Mitchell, "Effective diameter in radiative transfer: general definition, application, and limitation," J. Atmos. Sci. 59, 2330–2346 (2002).
20. L. Legagneux, A. Cabanes, and F. Dominé, "Measurement of the specific surface area of 176 snow samples using methane adsorption at 77 K," J. Geophys. Res. 107, 4335 (2002).
21. M. Kerbrat, B. Pinzer, T.
Huthwelker, H. W. Gäggeler, M. Ammann, and M. Schneebeli, "Measuring the specific surface area of snow with x-ray tomography and gas adsorption: comparison and implications for surface smoothness," Atmos. Chem. Phys. 8, 1261–1275 (2008).
22. M. Matzl and M. Schneebeli, "Measuring specific surface area of snow by near-infrared photography," J. Glaciol. 52, 558–564 (2006).
23. T. H. Painter, N. P. Molotch, M. Cassidy, M. Flanner, and K. Steffen, "Contact spectroscopy for determination of stratigraphy of snow optical grain size," J. Glaciol. 53, 121–127 (2007).
24. A. Wiesmann, C. Mätzler, and T. Wiese, "Radiometric and structural measurements of snow samples," Radio Sci. 33, 273–289 (1998).
25. S. Rosenfeld and N. C. Grody, "Metamorphic signature of snow revealed in SSM/I measurements," IEEE Trans. Geosci. Remote Sens. 38, 53–63 (2000).
26. J. L. Foster, D. K. Hall, A. T. C. Chang, A. Rango, W. Wergin, and E. Erbe, "Effects of snow crystal shape on the scattering of passive microwave radiation," IEEE Trans. Geosci. Remote Sens. 37, 1165–1168 (1999).
27. T. Weise, "Radiometric and structural measurements of snow," PhD thesis (Institute of Applied Physics, University of Bern, 1996).
28. M. Laternser and M. Schneebeli, "Long-term snow climate trends of the Swiss Alps (1931–99)," Int. J. Climatol. 23, 733–750 (2003).
29. E. Akitaya, "Studies on depth hoar," Contrib. Inst. Low Temp. Sci. Hokkaido Univ. Ser. A 26, 1–67 (1974).
30. D. Marbouty, "An experimental study of temperature-gradient metamorphism," J. Glaciol. 26, 303–312 (1980).
pharmaceutics
Article

Chitosan Gel to Treat Pressure Ulcers: A Clinical Pilot Study

Virginia Campani 1, Eliana Pagnozzi 2, Ilaria Mataro 2, Laura Mayol 1, Alessandra Perna 3, Floriana D'Urso 4, Antonietta Carillo 4, Maria Cammarota 4, Maria Chiara Maiuri 5 and Giuseppe De Rosa 1,*

1 Department of Pharmacy, Università degli Studi di Napoli Federico II, Via D. Montesano 49, 80131 Naples, Italy; virginia.campani@unina.it (V.C.); laumayol@unina.it (L.M.)
2 Department of Plastic and Reconstructive Surgery and Burn Unit, Hospital "A. Cardarelli", Via A. Cardarelli 9, 80131 Naples, Italy; elianapagnozzi@virgilio.it (E.P.); ilariamataro@gmail.com (I.M.)
3 First Division of Nephrology, Department of Cardio-thoracic and Respiratory Sciences, Second University of Naples, School of Medicine, via Pansini 5, Ed. 17, 80131 Naples, Italy; alessandra.perna@unicampania.it
4 U.O.S.C. Farmacia, U.O.S.S. Galenica Clinica e Preparazione Farmaci Antiblastici, Hospital "A. Cardarelli", Via A. Cardarelli 9, 80131 Naples, Italy; florianad-urso@hotmail.it (F.D.); antoniettacarillo1985@gmail.com (A.C.); maresas@tin.it (M.C.)
5 U.M.R.S. 1138, Centre de Recherche des Cordeliers, 15, rue de l'Ecole de Médecine, 75006 Paris, France; chiara.maiuri@crc.jussieu.fr
* Correspondence: gderosa@unina.it; Tel.: +39-081-678-666

Received: 7 December 2017; Accepted: 14 January 2018; Published: 17 January 2018

Abstract: Chitosan is a biopolymer with promising properties in wound healing. Chronic wounds represent a significant burden to both the patient and the medical system. Among chronic wounds, pressure ulcers are one of the most common types of complex wound. The efficacy and the tolerability of a chitosan gel formulation, prepared in the hospital pharmacy, in the treatment of pressure ulcers of moderate severity were evaluated.
The endpoint of this phase II study was the reduction of the area of the lesion by at least 20% after four weeks of treatment. Thus, 20 adult volunteers with pressure ulcers within predetermined parameters were involved in a 30-day study. Dressing change was performed twice a week at the outpatient clinic for chronic wound management. In 90% of the patients involved in the study, the treatment was effective, with a reduction of the area of the lesion and wound healing progress. The study demonstrated the efficacy of the gel formulation for the treatment of pressure ulcers, also providing a strong reduction of patient management costs.

Keywords: chitosan; gel; pressure ulcers; chronic wounds; wound healing; clinical study

1. Introduction

Pressure ulcers are localized areas of injury to the skin, and they mainly affect patients who require bed rest. They are caused by external forces, such as pressure or shear, or a combination of both, and often occur over bony prominences [1]. Wound resolution is often impaired by bacterial proliferation and the production of exudates that cause maceration of healthy skin layers [2–4]. Moreover, many factors, such as smoking, obesity, old age, and malnutrition, can promote the development of chronic skin damage and impair healing processes [3]. Wound care represents a heavy cost on the total health care budget [5]. Pressure ulcers have been shown to increase the length of hospital stay and the associated hospital costs. Costs are mainly dominated by health professional time and, for more severe ulcers, by the incidence of complications, including hospital admission and length of stay [6]. Advanced wound dressings have prohibitive costs for public health systems. The economic impact of wound care represents a serious bottleneck for correct wound management, especially in public hospitals.
Pharmaceutics 2018, 10, 15; doi:10.3390/pharmaceutics10010015

Chitosan (CHI) is a natural polysaccharide composed of units of glucosamine linked by a 1–4 glycosidic bond to N-acetylglucosamine units [7]. Due to its biodegradability, biocompatibility, and safety, CHI has attracted considerable interest for biological applications. The presence of a positive charge at physiological pH makes CHI adhesive, ensuring a longer permanence at the application site [8,9]. Furthermore, the antiseptic activity of CHI has also been demonstrated [10]. Finally, its abundance in nature and its low cost of production make this polymer commercially interesting and suitable for large-scale production [11]. Many studies have demonstrated the effect of CHI on wound healing, due to its antimicrobial activity and to its ability to promote hemostasis and angiogenesis [12–16]. Moreover, CHI positive charges attract growth factors that enhance cell growth and proliferation [17]. In particular, severe infiltrations of polymorphonuclear cells and a thick scab have been reported when treating skin wounds with CHI-based dressings in dogs [18]. Recently, our research group reported an experimental protocol to prepare a CHI gel suitable for a hospital pharmacy [19]. These CHI-based gels demonstrated the ability to promote wound healing in vitro and in vivo in an animal model of pressure ulcer [19]. Here, we report a pilot clinical study on 20 patients with pressure ulcers treated with the CHI gel prepared in the hospital. In this study, the efficacy and the tolerability of the treatment were evaluated.
The aim of the study was to provide a proof of concept to support further studies on this device, prepared from a low-cost biomaterial and directly in the hospital, to reduce the management cost of hospitalized patients affected by pressure ulcers.

2. Materials and Methods

Chitosan from crab shells, highly viscous (>400 mPa·s, 1% in acetic acid at 20 °C), was purchased from Farmalabor (Canosa di Puglia, Italy); acetic acid was obtained from Carlo Erba (Milano, Italy); sterile water was purchased from B. Braun (Milan, Italy); regenerated cellulose 0.22 µm membranes were obtained from Corning (Wiesbaden, Germany); and the immediate sterile packaging was kindly offered by Alfamed (Naples, Italy).

2.1. Gel Preparation

CHI gels were prepared at the Unità di Manipolazione di Chemioterapici Antiblastici (U.M.A.C.A.) center situated in the Azienda di rilievo nazionale, A.O.R.N. Antonio Cardarelli (Naples, Italy). Gels were prepared as previously described by Mayol et al. [19], with some modifications. Briefly, CHI powder was sterilized in an autoclave at 121 °C for 20 min at 2 atm. Sterilization was checked by microbiological tests carried out on CHI samples at the Laboratorio Chimico Merceologico (Naples, Italy). Sample preparation was carried out under a laminar flow hood and directly in the immediate sterile packaging. The acetic acid aqueous solution was filtered on 0.22 µm membrane filters before use. Then, 0.1 M acetic acid solution was slowly added under continuous stirring to 2% CHI powder until a clear solution was obtained. To evaporate the organic solvent, gels were sealed with 0.22 µm filter caps and then placed in an oven for 48 h at 37 °C under vacuum (Vuototest, Mazzali, Monza, Italy); finally, the filters were removed and the samples were sealed with hermetic caps. Each formulation was prepared in a 30 mL sterile container (kindly provided by Alfamed s.r.l., Naples, Italy), intended for a single administration, and stored at 4 °C (Figure 1).
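As a quick sanity check on the recipe above, the quantities for one batch can be computed. The helper below is illustrative only; in particular, the assumption that "2% CHI" means weight/volume (grams of chitosan per 100 mL of 0.1 M acetic acid) is ours and is not spelled out in the protocol:

```python
# Sketch: batch quantities for the CHI gel described above.
# Assumption (ours): "2% CHI" = 2 g chitosan per 100 mL of 0.1 M acetic
# acid solution (w/v); the protocol does not state the basis explicitly.

def chi_gel_batch(volume_ml: float, chi_percent_wv: float = 2.0):
    """Return (chitosan in grams, acetic acid solution in mL) for one batch."""
    chi_g = chi_percent_wv / 100.0 * volume_ml
    return chi_g, volume_ml

# One 30 mL single-administration container:
chi_g, solvent_ml = chi_gel_batch(30.0)
# -> 0.6 g chitosan in 30 mL of 0.1 M acetic acid
```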
Pharmaceutics 2018, 10, 15 3 of 8

Figure 1. Chitosan gel formulation used in the study.

2.2. Patients Eligibility

Volunteer patients of both sexes, aged between 40 and 80 years, with good nutritional conditions, a life expectancy of at least six months, the ability to sign informed consent, and affected by pressure ulcers of moderate severity (class II EPUAP/NPUAP 2014) were enrolled. The study excluded subjects with: age less than 40 years or older than 80 years, a state of malnutrition, predisposition to bleeding or treatment with anticoagulants, infections (including HIV positivity), infected injuries, or unavailability to follow the procedures envisaged by the study. A total of 20 adult volunteers with skin ulcers were involved in this 30-day study. Only patients susceptible to outpatient treatment were recruited, and hospitalization was not envisaged at any stage of the study. The protocol for the clinical study (identification code: CHITODERM) was examined and approved by the ethics committee (No. 558 of 06/24/2016) of "Cardarelli-Santobono", responsible for the experimentation and biomedical research activities carried out at the A.O.R.N. Antonio Cardarelli and A.O.R.N. Santobono-Pausilipon.

2.3. Patients Retirement

Patients had the opportunity to withdraw from the clinical trial at any time and with no obligation to motivate the interruption. Moreover, treatment discontinuation was provided for in the case of adverse events such as erythema, itching, and pain. In this case, the motivations were attached to the medical record of the patient, and no patient replacement was expected. For these patients a follow-up of 30 days duration was planned.

2.4. Study Design and Treatment

Pressure ulcers were treated according to the EPUAP/NPUAP guidelines. Firstly, the lesion was cleaned with povidone iodine solution and finally washed with physiological solution.
The gel preparation was applied on the decubitus ulcer, covering the total area of the lesion. Once filled with the CHI gel, the skin lesion was covered with a secondary dressing. Dressing change was performed twice a week at the outpatient clinic for chronic wound treatment. The study lasted 30 days.

2.5. Safety and Efficacy Assessment

The endpoint of the phase II study was the reduction (expressed as a percentage) of the area of the lesion by at least 20% after four weeks of treatment. The secondary endpoint of the study was to establish the tolerability of the gel preparation in the treatment of pressure ulcers. The occurrence of adverse events such as erythema, itching, and pain was evaluated. Patients assessed their degree of overall satisfaction with the treatment using a 100 mm long horizontal line visual analog scale (VAS). Patients were treated twice a week. Before each application and at the end of the treatment, the area of the skin lesion was measured. Moreover, any influence of concomitant therapies and the general status of the patient was evaluated. The area of the lesion was evaluated by digital photography, applying a ruler beside the lesion.
Digital images were analyzed using the open-source software ImageJ (Java 1.8.0_112) to calculate the area of the wound. At the 14- and 30-day visits, the patient's satisfaction level was assessed by VAS score, ranging from not satisfied (score 0) to fully satisfied (score 100) with the treatment outcomes.

2.6. Statistical Analyses

All of the skin lesion areas were analyzed. The data obtained for each lesion area change (assessed by image analysis), presented as area (cm²) and as percentage (%) of reduction of the area of the lesion, were statistically analyzed. A one-sample Student's t-test was utilized, and the results were analyzed with the statistics software GraphPad Prism version 6.0a for Macintosh (GraphPad Software, San Diego, CA, USA).

3. Results and Discussion

The aim of this trial was to test the efficacy of the gel in accelerating the healing of pressure ulcers. CHI is a biocompatible, biodegradable, and low-cost natural polymer proposed for several biopharmaceutical applications, among them wound healing [13–16]. Although the clinical use of CHI as a wound healing agent has been slow to take off, its ability to promote tissue regeneration is well known [13–16]. Different effects able to promote the healing of injured tissues, among them inhibition of microbial growth and enhanced hemostasis, have been ascribed to CHI. In particular, CHI has been found to be involved in the rapid mobilization of platelets and red blood cells to the injured site, and also in vasoconstriction and in the activation of the clotting factors responsible for blood coagulation [4,12,13].
Thus, CHI accelerates the granulation phase in wound healing and stimulates macrophage activity. Finally, N-acetyl-D-glucosamine, which is responsible for fibroblast proliferation, increases collagen and HA synthesis in the wound cavity, and it also allows oxygen permeability at the wound site [12,13]. At the moment, few CHI-based wound dressings (i.e., ChitoFlex®, ChitoGauze®, ChitoSAM™, Celox™ Rapid Gauze) are available on the market, and they are proposed for their hemostatic properties rather than for tissue regeneration. This pilot clinical study is aimed at supporting the use of a CHI-based device to increase the wound healing rate, independently of the bleeding of the wound.

Here, to carry out the study, the preparation protocol previously developed [19] was reproduced in a sterile manufacturing area at the U.M.A.C.A. center of the A.O.R.N. Antonio Cardarelli. Thus, in the first phase of the work, three batches prepared in the hospital were characterized in terms of viscoelastic behavior, showing no significant differences from the gels previously prepared [19]. Gels were prepared and stored in hermetically sealed containers, to avoid subsequent microbiological contamination. Microbiological tests carried out at the Laboratorio Chimico Merceologico of Naples confirmed that all three pilot batches prepared in this study were sterile and suitable to be administered on damaged skin (data not shown).

The clinical protocol was designed according to the 2014 EPUAP/NPUAP guidelines. In particular, following the cleansing of the wound, CHI gel was spread on the skin lesions, then covered with a secondary dressing. The primary endpoint to investigate the efficacy of the treatment was the reduction of the area of the lesion by at least 20% after 30 days of treatment. Indeed, a reduction of the area of the lesion of about 20% observed after the first weeks of treatment can be considered a predictive healing factor [20].
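The per-patient outcome measure described above (the percentage reduction of the lesion area between baseline and day 30) is simple arithmetic; the sketch below shows it in Python, with function names of our choosing (the study used ImageJ for the underlying area measurement):

```python
# Sketch: percentage reduction of the lesion area, the study's primary
# outcome measure. Function names are ours, for illustration only.

def percent_reduction(area_before: float, area_after: float) -> float:
    """Percentage reduction of the lesion area (positive = improvement)."""
    if area_before <= 0:
        raise ValueError("baseline area must be positive")
    return 100.0 * (area_before - area_after) / area_before

def met_primary_endpoint(area_before: float, area_after: float,
                         threshold: float = 20.0) -> bool:
    """True if the reduction reaches the >= 20% primary endpoint."""
    return percent_reduction(area_before, area_after) >= threshold

# Patient 1 from Table 1: 30,238 -> 13,457, about a 55% reduction
r = percent_reduction(30238, 13457)
```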
In Table 1, the percentage reduction of the area of the lesion after 30 days of treatment is reported for each patient. As shown, a significant reduction of the area of the lesion (higher than 20%) was observed in most patients (about 90%), with complete wound healing in 20% of the cases after four weeks of treatment with CHI gel. Moreover, in 18 patients the treatment was effective, showing a significant reduction of the area of the lesion and wound healing progression. Furthermore, in 50% of the patients involved in the clinical study, the reduction of the area of the lesion was higher than 50% (patients 1, 2, 5, 6, 7, 9, 10, 14, 16, 17), with complete wound healing of the ulcers in some cases (patients 2, 5, 9, 16). The results obtained from the t-test demonstrated that the reduction of the area of the lesions after 30 days of treatment was statistically significant (t value of 4.16, two-tailed p value of 0.0002).

Table 1. Area of the lesion before and after treatment with the chitosan (CHI) formulation and the percentage reduction of the lesion for each patient.

Patient  Area of the Lesion (before Treatment)  Area of the Lesion (after Treatment)  Reduction of the Area of the Lesion (%)
1        30,238   13,457   55
2        1245     125      90
3        12,580   10,742   15
4        7271     5421     25
5        7356     1031     86
6        7205     3379     53
7        8479     3881     54
8        17,492   9090     48
9        2500     670      73
10       10,832   4329     60
11       2352     1564     34
12       1929     1527     21
13       3687     2901     21
14       2263     146      94
15       33,403   26,336   21
16       33,403   26,336   97
17       1346     460      66
18       14,407   9910     31
19       32,366   20,927   35
20       14,699   13,994   5

In Figure 2, representative images of the wounds in three patients before the treatment (panels A1, B1, C1) and after 30 days of treatment (panels A2, B2, C2) are shown. Interestingly, no patient reported adverse effects of mild, moderate, or serious severity after the administration of the gel preparation. Finally, in no case was discontinuation of the treatment required.
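The one-sample t-test reported above can be re-run on the per-patient percentages in Table 1 using only the standard library. This is an illustrative recomputation against the 20% threshold; because Table 1 reports rounded percentages, it need not reproduce the published t = 4.16 exactly:

```python
import math

# Percentage reductions from Table 1 (patients 1-20, as printed).
reductions = [55, 90, 15, 25, 86, 53, 54, 48, 73, 60,
              34, 21, 21, 94, 21, 97, 66, 31, 35, 5]

def one_sample_t(data, mu0):
    """One-sample Student's t statistic against the null mean mu0."""
    n = len(data)
    mean = sum(data) / n
    var = sum((x - mean) ** 2 for x in data) / (n - 1)  # sample variance
    se = math.sqrt(var / n)                             # standard error
    return (mean - mu0) / se

t = one_sample_t(reductions, 20.0)  # test against the 20% endpoint
# mean reduction is 49.2%; t comes out around 4.6 with 19 degrees of
# freedom, comfortably significant at the p < 0.001 level
```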
Figure 2. Images of pressure ulcers before (A1, B1, C1) and after 30 days of treatment with chitosan gel (A2, B2, C2).

Furthermore, we evaluated the overall patient satisfaction level on a visual analogue scale (VAS) ranging from not satisfied (score 0) to fully satisfied (score 100).
The VAS scale was used due to its advantages in the evaluation of satisfaction outcomes; indeed, the VAS is reported to be a very powerful research tool: reliable, sensitive, and easy to use [21]. As expected, the patients who obtained a significant reduction of the area of the lesion gave higher VAS scores than those patients who did not obtain a significant result after 30 days of treatment (Table 2).

Table 2. Patients' satisfaction level assessed by visual analogue scale (VAS) score at the 14-day and 30-day visits.

Patient  VAS Score (14 Days)  VAS Score (30 Days)
1        41    75
2        63    94
3        19    25
4        34    36
5        74    91
6        45    82
7        56    68
8        44    70
9        55    88
10       70    86
11       42    59
12       28    43
13       31    59
14       75    100
15       15    40
16       66    95
17       63    84
18       27    49
19       36    45
20       15    21

These results in patients confirm our previous findings in an animal model of pressure ulcers [19], where a significant reduction of the area of the lesion after 3 and 10 days of treatment with CHI gel was found. Moreover, a healing process is normally not expected in chronic wounds, such as pressure ulcers, within 12 weeks from the beginning of the treatment [22]. On the contrary, in this study, significant wound healing progression was observed in the majority of the patients (90%) treated with the CHI gel for four weeks. Interestingly, in 50% of the patients the reduction of the area was greater than 50% compared to the area before the treatment; finally, 20% of the patients resulted completely healed. These encouraging results confirmed the efficacy of the CHI gel in wound healing, suggesting that this formulation could provide real advantages in terms of efficacy and cost of the treatment. A clinical trial on a larger group of patients (phase III) could provide further information on the use of CHI gels in patients with pressure ulcers and other kinds of ulcers. Moreover, in this case, only patients with ulcers at stage II were enrolled.
Other studies should also be organized to investigate the effect of CHI gels on ulcers of higher severity.

4. Conclusions

In conclusion, this pilot study on a restricted group of subjects affected by pressure ulcers demonstrated the tolerability and the efficacy of the CHI-based gel formulation in promoting wound healing. Although on a limited number of volunteers, 90% of the treated patients responded to treatment, with 20% of the patients completely healed. Furthermore, in this study the gel was prepared directly in the hospital, in the sterile area of the hospital pharmacy. This approach could represent an alternative to marketed dressings of innovative biomaterials, which are generally quite expensive and not always available in public hospitals. On the contrary, the CHI gel preparation is very easy to perform, and the materials used in the preparation have negligible costs. Once the efficacy of this device has been demonstrated on a larger number of patients (phase III), these findings could represent the basis of new protocols combining an increased healing rate with cost savings for public health systems.

Acknowledgments: The study was financed by the Italian Minister of Health (Progetto Giovani Ricercatori, bando 2007).

Author Contributions: Virginia Campani: design and characterization of the formulations, analysis of the experimental results, preparation of the manuscript. Eliana Pagnozzi: coordinator of the clinical trial. Ilaria Mataro: analysis and interpretation of clinical results. Laura Mayol: characterization of the formulations. Alessandra Perna: statistical analysis. Floriana D'Urso: preparation of the clinical protocol and dossier for the ethical committee. Antonietta Carillo: preparation of the formulation in the hospital pharmacy. Maria Cammarota: coordination of the U.M.A.C.A. center in which the formulation was prepared. Maria Chiara Maiuri: design of the clinical study.
Giuseppe De Rosa: coordination of the study.

Conflicts of Interest: The authors declare no conflict of interest.

References
1. Westby, M.J.; Dumville, J.C.; Soares, M.O.; Stubbs, N.; Norman, G. Dressings and topical agents for treating pressure ulcers. Cochrane Database Syst. Rev. 2017. [CrossRef] [PubMed]
2. Mutsaers, S.E.; Bishop, J.E.; McGrouther, G.; Laurent, G.J. Mechanism of tissue repair: From wound healing to fibrosis. Int. J. Biochem. Cell Biol. 1997, 29, 5–17. [CrossRef]
3. Pereira, R.F.; Barrias, C.C.; Granja, P.L.; Bartolo, P.J. Advanced biofabrication strategies for skin regeneration and repair. Nanomedicine 2013, 8, 603–621. [CrossRef] [PubMed]
4. Agrawal, P.; Soni, S.; Mittal, G.; Bhatnagar, A. Role of polymeric biomaterials as wound healing agents. Int. J. Low Extrem. Wounds 2014, 13, 180–190. [CrossRef] [PubMed]
5. Lindholm, C.; Searle, R. Wound management for the 21st century: Combining effectiveness and efficiency. Int. Wound J. 2016, 13, 5–15. [CrossRef] [PubMed]
6. Dealey, C.; Posnett, J.; Walker, A. The cost of pressure ulcers in the United Kingdom. J. Wound Care 2012, 6, 261–264. [CrossRef] [PubMed]
7. Tomihata, K.; Ikada, Y. In vitro and in vivo degradation of films of chitin and its deacetylated derivatives. Biomaterials 1997, 18, 567–575. [CrossRef]
8. He, P.; Davis, S.S.; Illum, L. In vitro evaluation of the mucoadhesive properties of chitosan microspheres. Int. J. Pharm. 1998, 166, 75–88.
9. Calvo, P.; Remunan-Lopez, C.; Vila-Jato, J.L.; Alonso, M.J. Novel hydrophilic chitosan-polyethylene oxide nanoparticles as protein carriers. J. Appl. Polym. Sci. 1997, 63, 125–132. [CrossRef]
10. Burkatovskaya, M.; Castano, A.P.; Demidova-Rice, T.N.; Tegos, G.P.; Hamblin, M.R. Effect of chitosan acetate bandage on wound healing in infected and noninfected wounds in mice. Wound Repair Regen. 2008, 3, 425–431. [CrossRef] [PubMed]
11. Khor, E.; Lim, L.Y.
Implantable applications of chitin and chitosan. Biomaterials 2003, 24, 2339–2349. [CrossRef]
12. Muzzarelli, R.; Tarsi, R.; Filippini, O.; Giovanetti, E.; Biagini, G.; Varaldo, P.E. Antimicrobial properties of N-carboxybutyl chitosan. Antimicrob. Agents Chemother. 1990, 34, 2019–2023. [CrossRef] [PubMed]
13. Ueno, H.; Mori, T.; Fujinaga, T. Topical formulations and wound healing applications of chitosan. Adv. Drug Deliv. Rev. 2001, 52, 105–115. [CrossRef]
14. Wang, W.; Lin, S.; Xiao, Y.; Huang, Y.; Tan, Y.; Cai, L.; Li, X. Acceleration of diabetic wound healing with chitosan-crosslinked collagen sponge containing recombinant human acidic fibroblast growth factor in healing-impaired STZ diabetic rats. Life Sci. 2008, 82, 190–204. [CrossRef] [PubMed]
15. Boateng, J.S.; Matthews, K.H.; Stevens, H.N.; Eccleston, G.M. Wound healing dressings and drug delivery systems: A review. J. Pharm. Sci. 2008, 97, 2892–2923. [CrossRef] [PubMed]
16. Charernsriwilaiwat, N.; Rojanarata, T.; Ngawhirunpat, T.; Opanasopit, P. Electrospun chitosan/polyvinyl alcohol nanofibre mats for wound healing. Int. Wound J. 2014, 11, 215–222.
[CrossRef] [PubMed]
17. Lee, D.W.; Lim, H.; Chong, H.N.; Shim, W.S. Advances in chitosan material and its hybrid derivatives: A review. Open Biomater. J. 2009, 1, 10–20. [CrossRef]
18. Ueno, H.; Yamada, H.; Tanaka, I.; Kaba, N.; Matsuura, M.; Okumura, M.; Kadosawa, T.; Fujinaga, T. Accelerating effects of chitosan for healing at early phase of experimental open wound in dogs. Biomaterials 1999, 20, 1407–1414. [CrossRef]
19. Mayol, L.; De Stefano, D.; Campani, V.; De Falco, F.; Ferrari, E.; Cencetti, C.; Matricardi, P.L.; Maiuri, R.; Carnuccio, A.; Gallo, M.C.; et al. Design and characterization of a chitosan physical gel promoting wound healing in mice. J. Mater. Sci. Mater. Med. 2014, 25, 1483–1493. [CrossRef] [PubMed]
20. Flanagan, M. Improving accuracy of wound measurement in clinical practice.
Ostomy Wound Manag. 2003, 49, 28–40.
21. Singer, A.J.; Church, A.L.; Forrestal, K.; Werblud, M.; Valentine, S.M.; Hollander, J.E. Comparison of patient satisfaction and practitioner satisfaction with wound appearance after traumatic wound repair. Acad. Emerg. Med. 1997, 4, 133–137. [CrossRef]
22. Boateng, J.; Catanzano, O. Advanced Therapeutic Dressings for Effective Wound Healing—A Review. J. Pharm. Sci. 2015, 104, 3653–3680. [CrossRef] [PubMed]

© 2018 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

work_td4isadxuvgkrd6krgppmsouui ----
5847_200801PPAD_Griffin_v2.qxd

USING DIGITAL PHOTOGRAPHY TO VISUALIZE, PLAN, AND PREPARE A COMPLEX PORCELAIN VENEER CASE
Jack D. Griffin, Jr, DMD, FAGD*
Pract Proced Aesthet Dent 2008;20(1):A-G

Visualization and a pre-operative plan are critical to efficient and thorough case preparation. Congenitally missing teeth, coupled with improper tooth positioning, can compromise the aesthetic rehabilitation outcome. Utilizing pre-treatment digital photography as an outline for tooth reduction and laser tissue re-contouring may help to create a symmetric and pleasing smile, even under less ideal conditions.
Learning Objectives: This article discusses the use of digital photography as a case-planning tool, the gingival treatment protocol to correct the emergence profile and tissue discrepancies, and the preparations needed to gain acceptable tooth proportions with missing and misshapen teeth. Upon reading this article, the reader should:
• Recognize digital photography as a tool to plan, prepare, and evaluate the cosmetic case.
• Understand preparation guidelines to correct missing and misshapen teeth for conservative porcelain veneers using images as a guideline.
• Understand basic luting protocol for cementation of ten veneers with a photographic follow-up.

Key Words: digital photography, porcelain veneers, prosthodontics

* Private practice, Eureka, Missouri.
Jack D. Griffin, Jr, DMD, FAGD, 18 Hilltop Village Center Drive, Eureka, MO 63025
Tel: 636-938-4141 • E-mail: esmilecenter@aol.com

Bonded porcelain veneers have been used successfully for over 20 years to create seemingly ideal smiles despite natural and iatrogenic limitations.1 Aesthetic dentistry allows clinicians to correct smile deficiencies when teeth are congenitally missing, misaligned, or improperly placed.2,3 Despite the ability to make dramatic aesthetic improvements, dentists must temper expectations in light of previous dental treatment, congenital limitations, or patient experience. Visualization is the key for success in any case. Photographic evaluation, along with clinical examination, laboratory mock-up, and practitioner experience, form the basis for tissue preparation and smile restoration. These are all important when selecting preparation techniques and materials for an optimal aesthetic outcome. Photography can be a useful adjunct to case planning.
When used in conjunction with laboratory waxups, digital previews, and direct mock-ups, predictable tooth preparation can be achieved (Table). Anatomical and functional challenges must be incorporated into a thorough treatment plan with consideration for soft tissue position, preparation design, and idealized tooth positioning for excellent long-term results.4,5

Treatment of Congenitally Missing Lateral Incisors
When lateral incisors are congenitally missing, many factors must be considered to allow the clinician to decide if the incisors should be brought into a lateral position or if the spaces should be widened to facilitate implant placement or restoration using a fixed partial denture (FPD).6 The patient's overall aesthetic desires, existing occlusion, and practitioner limitations are some of the many factors that must be addressed when finalizing treatment goals.7 Transforming canines into lateral incisors can be an aesthetic challenge because the hard tissue architecture of the canine eminence, the darker natural color, the increase in buccolingual and mesiodistal dimensions, and soft tissue shapes are often inconsistent with laterals. For these, and other reasons, it is more common to create spaces for aesthetic bridge or implant placement.8

Case Presentation
A 25-year-old female patient presented for cosmetic consultation with only 2 weeks remaining for her preexisting orthodontic treatment. The patient desired tooth whitening, improved proportions, and removal of the excess tissue display evident during smile.
The initial treatment plan included moving tooth #6(13) distally to its normal location, followed by composite buildup of tooth #10(22) to hold a more normal size and position in the arch, and movement of teeth #9(21) and #10 to the patient's left to correct the midline and cant. This treatment would then be followed by placement of an implant-retained or FPD restoration for tooth #7(12), with veneers on the remaining incisors to correct proportions.

Table. Applying Photography During Aesthetic Restoration Using Porcelain Veneers
Photography can be used for a variety of purposes during aesthetic rehabilitation using porcelain veneers, including:
• Preoperative marketing, documentation, and evaluation;
• Formulation of a tissue-reduction blueprint and guide;
• Maintenance of a reference during tooth preparation;
• Laboratory communication to transfer treatment goals and information on the teeth's preoperative appearance and preparation design;
• Postoperative technique and material evaluation, marketing, and documentation.

Figure 1. A lateral photograph was captured to allow evaluation of the tooth proportions and smile heights. Tooth #6(13) had been moved mesially to fill the space where tooth #7(12) was missing.

Pre-treatment Photography, Analysis, and Preparation Guide
Once orthodontic therapy was completed, a full series of images was taken with a digital SLR camera, a 105-mm macro lens, and a ring flash (ie, Nikon D70s, Nikkor 105mm macro lens, Nikon SB29s Speedlight) in "A" aperture priority mode with varying aperture settings (f/stop). Prior to the restorative appointment, the images were loaded on a computer, analyzed, and a written preparation plan was created. A full-facial image was captured with the patient in a natural smile to allow the clinician to clearly identify the incorrect midline and canted central incisors.
Notes were made on the images to move the midline 2 mm to the patient's left; the need for cant correction was also observed. Lateral smile views were subsequently captured to clearly communicate the number of teeth that were evidenced during natural smile. This photographic evaluation allowed the clinician to observe the darker, wider, and more prominent position of the canine as compared to a typical lateral incisor. The patient's excessive gingival display was also evident in the premolar region, and redundant lip tissues were all evident from the right perspective (Figure 1). The left side showed a narrow tooth #10 with a mesial space (Figure 2). Intraoral images were taken in full occlusion and showing both arches slightly opened (f/32). The aforementioned smile deficiencies were marked, along with the areas that required tissue reduction to correct them. The desired midline and cant were noted on the photos, as well as the proposed distal of the laterals (in this case, in blue). Ideal tooth proportions and a symmetrical smile design with central dominance and a height-to-width ratio of about 75% were used as a basis for smile enhancements.9 The central incisors were measured and approximately 70% of their width was estimated for the lateral incisors, as measured from the proposed midline. The photographs were marked in blue to function as a reference during tooth preparation and to provide the technician room to correct the existing deficiencies. The distal aspect of the desired lateral was also marked in blue on magnified images using similar measurements at f/45 according to the aforementioned protocol (Figure 3). Frenum reduction, emergence profile changes, and gingival crown lengthening were marked with green. On the left side, the mesial aspects of teeth #9, #11(23), and #12(24) required reduction to compensate for the midline shift in this direction (Figure 4).
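The width target described above reduces to simple arithmetic: the planned lateral incisor width is about 70% of the measured central incisor width. The sketch below illustrates this rule; the 8.5 mm central width is a made-up example value, not a measurement from this case:

```python
def lateral_width_target(central_width_mm, ratio=0.70):
    """Estimate a target lateral incisor width as a fraction of the
    measured central incisor width (70% in the planning step above)."""
    return central_width_mm * ratio

# Hypothetical central incisor measurement (illustrative only).
central = 8.5  # mm
print(f"target lateral width: {lateral_width_target(central):.2f} mm")  # 5.95 mm
```

The same number is what the locked calipers encode physically: once set to 70% of the central width, they are carried from tooth to tooth as a fixed reference.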
The extent of tissue removal to correct the heavy frenum and redundant lip was evaluated, and a plan was made for their removal (Figure 5).

Soft Tissue Reduction With Photographic Guidance
At the preparation appointment, printed versions of the marked images were placed on the countertop and the digital images were displayed on the operatory monitor for easy reference. These images were reviewed prior to tooth preparation so visualization of the case could be made and referred to throughout preparation. A direct composite mock-up was fabricated prior to anesthesia delivery so that incisal edge position, phonetics, and aesthetics could be evaluated. Photographs were taken to capture the indicated changes, and an impression was made for laboratory consultation.10 A diode laser (ie, Odyssey, Ivoclar Vivadent, Amherst, NY) was applied at a low power setting to ensure predictable control and to eliminate unexpected tissue healing.11 The frenectomy was completed and care was taken to remain approximately 4 mm from the dry-wet line during removal of the redundant lip tissue. Crown lengthening and emergence profile corrections were completed using the treatment planning photos as a guideline on the right side (Figure 6). On the left side, crown lengthening was performed on the premolars and minor amounts of soft tissue were removed from the lateral to broaden the emergence profile and to match the proposed width increase (Figure 7).

Figure 2. Redundant tissue was evident on the maxillary lip and excessive gingival display was evident. The small size of the lateral incisor was also noted.

Tooth Preparation
Using the photographs as guides, minimal tooth reduction (between 0.3 mm and 0.5 mm) was performed with a finishing diamond.
The teeth were reduced in an ideal form and the incisal edges were reduced approximately 0.5 mm and beveled towards the facial aspect to facilitate development of incisal characterization and a definitive stop when seating the veneers (Figure 8).12 The margins were slightly subgingival with a subtle rounded chamfer. Intraoral measurements were combined with the marked photos to guide interproximal reduction. Calipers were used to measure the central incisors, reduced to approximately 70% of that measurement, and then locked at that position. These devices were then used to measure the mesial reduction of tooth #11 until sufficient reduction was performed for the lateral incisor (Figure 9). The locked calipers were then moved to tooth #6 where the tooth was prepared in excess of 1 mm to accommodate ceramic thickness.13

Figure 3. The markings provided a framework in which tooth reduction and gingival modifications would be performed. Print and computerized versions of these images were viewed in the operatory during treatment.

Figure 4. The required tissue modifications were marked on the photograph of the lateral aspect to create natural-looking emergence profiles. The blue lines represented the desired tooth positions with the necessary midline shift.

Figure 5. The excessive frenum and lip tissues were marked for recontouring.

Figure 6. A diode laser was used at a low setting (1.5 watts) to remove the excess tissues. The diagnostic photographs were used as the blueprint to measure the necessary tissue changes while maintaining biologic principles.

The preparations were photographed and evaluated. Slight preparation enhancements were then made. The facial aspect of the canine was leveled considerably to ensure reduced thickness on the lateral incisor (Figure 10).
A retraction paste (ie, ExpaSyl, Kerr, Orange, CA) was injected along the gingival margins, allowed to sit for five minutes, and then rinsed (Figure 11). Two polyvinylsiloxane impressions were fabricated with a bite registration using midline- and occlusal-plane guide sticks. The provisional restorations were subsequently constructed.

Laboratory Communication
The impressions, models, and bite registration were sent to the laboratory with the photos obtained prior to and during treatment. Pre- and post-gingival recontouring, bite registration, and preoperative photographs were used to ensure accurate communication to the laboratory.14 The images included shade tabs adjacent to the preparations to demonstrate the underlying tooth color that required coverage. Hand-stacked feldspathic porcelain (ie, Noritake, Henry Schein, Melville, NY) with moderate translucency was chosen as the restorative material because of its ability to be fabricated with only 0.3 mm to 0.4 mm of thickness while maintaining strength, beauty, and fit.15,16 These veneers were selected because of their excellent fit and the minimal amount of tooth preparation required, particularly because of the patient's young age.17

Figure 7. The gingival tissues were recontoured by following the marked photographs; modifications were limited by the patient's biologic width. Tissue charring was cleaned with hydrogen peroxide on a microbrush.

Figure 8. A minimally invasive preparation design was created using a finishing diamond. Margins were taken just below the modified tissue heights.

Figure 9. Calipers were set and locked at a 70% width of the central incisors, and mesial reduction of the mesial aspect of tooth #11(23) facilitated development of sufficient space for the lateral incisor.

Figure 10. Occlusal view demonstrates the required narrowing of the transformed canine on the distal and facial aspects.
The canine adjacent to the small lateral incisor was reduced on the mesial aspect to provide the lateral space required and to shift the midline.

Restoration Delivery
At two weeks, the patient reported only minor discomfort following the initial procedure and slight sensitivity to cold during the provisionalization phase. The temporary restorations were removed with hemostats and scalers, without the need for anesthesia. A retraction cord was placed in several places to control tissue leakage. The restorations were tried in, removed, cleaned with 38% phosphoric acid, silanated, a bonding agent was applied, and the material was air thinned. The teeth were isolated with retractors (ie, See-More, Discus, Culver City, CA), and the central incisors etched for 15 seconds with 38% phosphoric acid. Several coats of a dentin bonding agent (ie, Cabrio, Discus, Culver City, CA) were applied and then air thinned. The central veneers were placed first with a self-cure luting cement (ie, Insure Yellow Red Light, Cosmedent, Chicago, IL), cured, and cleaned up. The remaining teeth were luted into place, working posteriorly from the centrals (Figure 12). The tissues were healing well with slight redness and inflammation after five days (Figure 13). Residual cement between #9 and #10 and in various areas was removed. At three weeks post-cementation, the tissues had healed very well and final incisal shaping and lingual polishing were done with rubber polishers (Figures 14 and 15). A full series of images was repeated with the same poses and camera settings used in the preoperative images. A final full series of images was repeated at 18 months for case evaluation and documentation. There were improved tooth proportions, good gingival health, and a more pleasing overall smile. On the left side, the spaces were closed and redundant lip tissue was more pleasing with natural tooth proportions (Figure 16).
The tissues at this time had accepted treatment very well with very little inflammation. The full-facial smile showed better tooth symmetry with a correctly placed midline and cant of the central incisors. It is much easier for the patient to give a full, natural smile after completion of the enhancements, and with patient consent, the pre- and post-operative images can serve as a great marketing tool.

Figure 11. A hemostatic agent and retraction putty were placed and rinsed after 5 minutes, followed by impression capture using a polyvinylsiloxane material.

Figure 12. The definitive lateral and canine veneers were tried in, cleaned, and cemented prior to removal of excess cement and following placement of the centrals.

Figure 13. Adjustments were made five days post-cementation. Healing was progressing normally and the patient experienced no postoperative discomfort.

Conclusion
Despite the limitations of a missing tooth and questionable orthodontic position, the result was cosmetically acceptable, and these smile enhancements should last for many years to come (Figure 16). Not all of the compromises in tissue heights were able to be corrected, but most of the treatment goals were met. The key with this case was visualization, preparation, and communication with the patient and lab regarding expectations, which were all made possible with a thorough plan that featured digital photography evaluation.

Acknowledgment
The author expresses his gratitude to Mr. Adrium Jurim, Jurim Dental Studio, Great Neck, NY, for the fabrication of the restorations depicted herein. The author declares no financial interest in any of the products cited herein.

References
1. Swift EJ Jr, Friedman MJ. Critical appraisal: Porcelain veneer outcomes, part II. J Esthet Rest Dent 2006;18(2):110-113.
2. Tipton, PA.
Aesthetic tooth alignment using etched porcelain restorations. Pract Proced Aesthet Dent 2001;13(7):551-555.
3. Dumfahrt H. Porcelain laminate veneers. A retrospective evaluation after 1 to 10 years of service: Part 1 – Clinical procedure. Int J Prosthodont 1999;12(6):505-513.
4. Peumans M, Van Meerbeek B, Lambrechts P, Vanherle G. Porcelain veneers: A review of the literature. J Dent 2000;28(3):163-177.
5. Bassett JL. Replacement of missing mandibular lateral incisors with a single pontic all-ceramic prosthesis: A case report. Pract Periodont Aesthet Dent 1997;9(4):455-461.
6. Trushkowsky RD. Replacement of congenitally missing lateral incisors with ceramic resin-bonded fixed partial dentures. J Prosthet Dent 1995;73(1):12-16.
7. Kinzer GA, Kokich VO Jr. Managing congenitally missing lateral incisors. Part II: Tooth-supported restorations. J Esthet Rest Dent 2005;17(2):76-84.
8. Kinzer GA, Kokich VO Jr. Managing congenitally missing lateral incisors. Part I: Canine substitution. J Esthet Rest Dent 2005;17(1):5-10.
9. Blitz N, Steel C, Wilhite C. Principles of proportion and central dominance. Diagnosis and treatment evaluation in cosmetic dentistry. A guide to Accreditation Criteria. J Cosmet Dent 2000:16-17.
10. Mizrahi B. Visualization before finalization: A predictable procedure for porcelain laminate veneers. Pract Proced Aesthet Dent 2005;17(8):513-518.
11. Kokich VG, Nappen DL, Shapiro PA. Gingival contour and clinical crown length: Their effect on the esthetic appearance of maxillary anterior teeth. Am J Orthod 1984;86(2):89-94.
12. Gurel G. Predictable, precise, and repeatable tooth preparation for porcelain laminate veneers. Pract Proced Aesthet Dent 2003;15(1):17-24.
13. Sim C, Ibbetson R. Comparison of fit of porcelain veneers fabricated using different techniques. Int J Prosthodont 1993;6(1):36-42.
14. Griffin JD. How to build a great relationship with the laboratory technician: Simplified and effective laboratory communications.
Contemp Esthet 2006;10(7):26-34.
15. Meijering AC, Roeters FJ, Mulder J, Creugers NH. Patient's satisfaction with different types of veneer restorations. J Dent 1997;25(6):493-497.
16. Hornbrook D. Invisible beauty: The ultimate in esthetic restorations. Contemp Esthet 2005;9(12):14-23.
17. Sim C, Ibbetson R. Comparison of fit of porcelain veneers fabricated using different techniques. Int J Prosthodont 1993;6(1):36-42.

Figure 14. Postoperative lateral view three weeks following cementation. Note the appearance of the healed gingival tissues and improved aesthetics despite the apical gingival margin location of the original canine.

Figure 15. The central, lateral, and canine proportions were acceptable and the gingival recontouring was tolerated well, despite slight inflammation three weeks postoperatively.

Figure 16. Analysis of the preoperative photographs allowed the clinician to guide the required tooth reduction so that proper tooth proportions were created along with soft tissue enhancements.

1. Which of the following is not a critical pre-preparation component for complex veneer case planning?
a. Evaluation of photographs.
b. Thorough clinical examination.
c. Direct mock up or laboratory waxup.
d. Even and smooth tooth reduction.
2. Photography can be used during porcelain veneer cases for the following reasons EXCEPT:
a. Case documentation.
b. A soft and hard tissue reduction blueprint.
c. To plan gingival reduction to avoid biologic width violation.
d. As a preoperative communication tool with the laboratory.
3. Which of the following are factors that must be weighed before reaching a final treatment goal:
a. Patient expectations.
b. Existing occlusion.
c. Practitioner skill and limitations.
d. All of the above.
4. The purpose for the direct composite mock up is all of the following EXCEPT to:
a. Check phonetics.
b. Verify aesthetics.
c. Analyze the restoration's opacity.
d.
Check the incisal edge position.
5. The incisal edges were reduced:
a. To provide additional strength to the porcelain.
b. To correct cant in smile and move midline.
c. To allow the ceramist more opportunity for incisal character and translucency and provide seating stops.
d. To aid in soft tissue healing and plaque control.
6. A full series of photographic images was captured before tooth preparation and numbing. They were then:
a. Loaded on a computer, analyzed, and used to form a written treatment sequence plan.
b. Sent to the web site company to download for marketing.
c. Printed and used for in-office promotion.
d. Evaluated for decay and periodontal disease.
7. Transforming canines into lateral incisors can be challenging for the following reasons EXCEPT:
a. The hard tissue contours of the canine eminence.
b. A lighter tooth color that must be darkened to mimic an incisor.
c. An increased mesio-distal dimension that must be narrowed.
d. Soft tissue emergence profiles that are inconsistent with lateral incisors.
8. At what point in the procedure were photographic images taken?
a. Preoperatively.
b. Following tooth preparation.
c. Post-cementation.
d. All of the above.
9. Photography was used during this case for the following reasons:
a. Visualization, temporization, permission.
b. Visualization, preparation, communication.
c. Preparation, customization, meditation.
d. Preparation, cementation, finalization.
10. Laboratory communication included all of the following EXCEPT:
a. Bite registration.
b. Models.
c. Photographs.
d. Periodontal measurements.

To submit your CE Exercise answers, please use the answer sheet found within the CE Editorial Section of this issue and complete as follows: 1) Identify the article; 2) Place an X in the appropriate box for each question of each exercise; 3) Clip the answer sheet from the page and mail it to the CE Department at Montage Media Corporation.
For further instructions, please refer to the CE Editorial Section. The 10 multiple-choice questions for this Continuing Education (CE) exercise are based on the article "Using digital photography to visualize, plan, and prepare a complex porcelain veneer case," by Jack D. Griffin, Jr, DMD, FAGD. This article is on Pages 000-000.

work_thsjvarpfrcpjexxt62deufymq ----

Mont'e Scan: Effective shape and color digitization of cluttered 3D artworks
FABIO BETTIO, ALBERTO JASPE VILLANUEVA, EMILIO MERELLA, FABIO MARTON, and ENRICO GOBBETTI, CRS4 Visual Computing, Italy
RUGGERO PINTUS, CRS4 Visual Computing, Italy and Yale University, USA

We propose an approach for improving the digitization of shape and color of 3D artworks in a cluttered environment using 3D laser scanning and flash photography. In order to separate clutter from acquired material, semi-automated methods are employed to generate masks used to segment the range maps and the color photographs. This approach allows the removal of unwanted 3D and color data prior to the integration of acquired data in a 3D model. Sharp shadows generated by flash acquisition are easily handled by this masking process, and color deviations introduced by the flash light are corrected at the color blending step by taking into account the geometry of the object. The approach has been evaluated on a large-scale acquisition campaign of the Mont'e Prama complex. This site contains an extraordinary collection of stone fragments from the Nuragic era, which depict small models of prehistoric nuraghe (cone-shaped stone towers), as well as larger-than-life archers, warriors, and boxers. The acquisition campaign has covered 37 statues mounted on metallic supports. Color and shape were acquired at a resolution of 0.25 mm, which resulted in over 6200 range maps (about 1.3G valid samples) and 3817 photographs.
Categories and Subject Descriptors: I.3.3 [Computer Graphics] Picture and Image Generation; I.3.7 [Computer Graphics] Three-Dimensional Graphics and Realism General Terms: Cultural Heritage Additional Key Words and Phrases: 3D scanning, shape acquisition, color acquisition, 3D visualization ACM Reference Format: Fabio Bettio, Alberto Jaspe Villanueva, Emilio Merella, Fabio Marton, Enrico Gobbetti, Ruggero Pintus. 2014. Mont’e Scan: Effective shape and color digitization of cluttered 3D artworks ACM J. Comput. Cult. Herit. 8, 1, Article 4 (August 2014), 22 pages. DOI:http://dx.doi.org/10.1145/2644823 1. INTRODUCTION The increasing performance and proliferation of digital photography and 3D scanning devices is making it possible to acquire, at reasonable costs, very dense and accurate sampling of both geometric and optical surface properties of real objects. A wide variety of cultural heritage applications stand to benefit particularly from this technological evolution. In fact, this technological progress is leading to the This research is partially supported by the Region of Sardinia, EU FP7 grant 290277 (DIVA), and Soprintendenza per i Beni Archeologici per le Province di Cagliari ed Oristano. Author’s address: F. Bettio, R. Pintus, A. Jaspe, E. Merella, F. Marton and E. Gobbetti; CRS4, POLARIS Ed. 1, 09010 Pula (CA), Italy; email: {fabio,ruggero,ajaspe,emerella,marton,gobbetti}@crs4.it www: http://www.crs4.it/vic/ http://graphics.cs. yale.edu/ Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies show this notice on the first page or initial screen of a display along with the full citation. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. 
possibility to construct accurate colored digital replicas not only of single objects but at a large scale. Accurate reconstructions built from objective measurements have many applications, ranging from virtual restoration to visual communication.

Fig. 1. Reassembled Nuragic statue with supports and its virtual reconstruction. The black support structure holds the fragments in the correct position with a minimal contact surface, avoiding pins and holes in the original material. A 360-degree view is possible, but color and shape capture is difficult because of clutter, occlusions, and shadows. The rightmost image depicts our 3D reconstruction. Photo courtesy of ArcheoCAOR.

The digitization approach most widely used today is a combination of laser scanning and digital photography. Using computational techniques, digital object surfaces are reconstructed from the laser-scan-generated range maps, while the apparent color values sampled in digital photos are transferred by registering the photos with respect to the 3D model and mapping them onto the 3D surface using the recovered inverse projections. Since early demonstrations of the complete modeling pipeline (e.g., [Bernardini and Rushmeier 2002; Levoy et al. 2000]), most of its components have reached sufficient maturity for adoption in a variety of application domains.
This approach is particularly well suited to cultural heritage digitization, since scanning and photographic acquisition campaigns can be performed quickly and easily, without the need to move objects to specialized acquisition labs. The most costly and time-consuming part of 3D reconstruction is thus moved to post-processing, which can be performed off-site. In recent years, research has therefore focused on improving and automating the post-processing steps, leading, for instance, to (semi-)automated scalable solutions for range-map alignment [Pingi et al. 2005], surface reconstruction from point clouds [Kazhdan et al. 2006; Manson et al. 2008; Cuccuru et al. 2009; Calakli and Taubin 2011], photo registration [Pintus et al. 2011c; Corsini et al. 2012], and color mapping [Callieri et al. 2008; Pintus et al. 2011b; 2011a]. Even though passive image-based methods have recently emerged as a viable (and low-cost) 3D reconstruction technology [Remondino 2011], the standard pipeline based on laser scanning or other active sensors remains a widely used general-purpose approach, mainly because of its higher reliability in a wider variety of settings (e.g., featureless surfaces) [Remondino 2011; Koutsoudis et al. 2014].

In this paper, we tackle the difficult problem of effectively adapting the 3D scanning pipeline to the on-site acquisition of the color and shape of 3D artworks in a cluttered environment. This case arises, for instance, when scanning restored and reassembled ancient statues in which (heavy) stone fragments are held in place by a custom exostructure (see Fig. 1 for an example). Digitizing statues without removing the supports allows one to perform scanning directly on location and without moving the fragments, therefore enabling a completely contactless approach.
On the other hand, the presence of the supporting structure typically generates shadow-related color artifacts, holes due to occlusion effects, and extra geometry that must be removed. With the standard 3D scanning pipeline, these issues lead to laborious and mostly manual post-processing steps, including geometry cleaning and careful pixel masking (see Sec. 2 for an overview of related work). Motivated by these issues, in this work we present a practical approach for improving the digitization of the shape and color of 3D artworks in a cluttered environment. While our methods are generally applicable, the work was spurred by our involvement in the Digital Mont’e Prama project (see Sec. 3), which included the fine-scale acquisition of 37 large statues and therefore required robust and scalable methods. The main contributions presented in this article are the following:

—an easy-to-apply acquisition protocol based on laser scanning and flash photography;
—a simple and practical semi-automatic method for clutter removal and photo masking;
—a scalable implementation of the entire masking, editing, infilling, color-correction, and color-blending pipeline, which works fully out-of-core without limits on model size or photo count;
—the evaluation of the method and tools in a large-scale real-world application involving a massive acquisition campaign covering 37 statues mounted on metallic supports, acquired at 0.25 mm resolution, resulting in over 6200 range scans (approximately 1.3G valid samples) and 3817 10-Mpixel photographs.

This work is an invited extended version of our Digital Heritage 2013 contribution [Bettio et al. 2013].
In addition to supplying a more thorough exposition, we provide significant new material, including an analysis of requirements, the description of an improved pipeline that performs color infilling and takes the position and orientation of surfaces into account during color mapping, and the presentation of results on the complete Mont’e Prama dataset.

2. RELATED WORK

Our system extends and combines state-of-the-art results in a number of technological areas. In the following we only discuss the approaches most closely related to our novel contributions, which focus on effective clutter removal. For more detailed information on the entire 3D scanning pipeline, we refer the reader to the recent survey by Callieri et al. [2011]. Our approach assumes that color images are reliably registered to the 3D models. Obtaining this registration is an orthogonal problem for which a variety of solutions exists (e.g., [Borgeat et al. 2009; Pintus et al. 2011c; Corsini et al. 2012; Pintus and Gobbetti 2014]). The results in this work have been obtained with the approach of Pintus et al. [2014].

Color and geometry masking. Editing and cleaning the acquired 3D model is often the most time-consuming reconstruction task [Callieri et al. 2011]. While some techniques exist for semi-automatic clutter removal in 3D scans, they are typically limited to well-defined situations (e.g., walls vs. furniture for interior scanning [Adan and Huber 2011], or walls vs. organic models for exterior scanning [Lafarge and Mallet 2012]). Manual editing is also typically employed in approaches that work on images and range maps. For instance, Farouk et al. [2003] embedded a simple image editor into the scanning GUI. We also follow the approach of working on 2D representations, but concentrate our efforts on reducing human intervention. Interactive 2D segmentation is a well-known research topic with several
state-of-the-art solutions that typically involve the classification and/or editing of color image datasets (see the well-established surveys [Zhang et al. 2008; McGuinness and O’Connor 2010]). In general, the aim of these techniques is to cope efficiently with the foreground/background extraction problem with the least possible user input. The simplest tool available is the Magic Wand in Adobe Photoshop 7 [Adobe Systems Inc. 2002]: the user selects a point and the software automatically computes a connected set of pixels that belong to the same region. Unfortunately, an acceptable segmentation is rarely achieved automatically, since choosing the correct color or intensity tolerance value is a difficult or even impossible task. Many classic methods, such as intelligent scissors [Mortensen and Barrett 1999], active contours [Kass et al. 1988], and Bayes matting [Chuang et al. 2001], require a considerable degree of user input to achieve satisfactory results. More accurate approaches solve the semi-automatic image segmentation problem using Graph Cuts [Boykov and Jolly 2001]: the user marks a small set of background and/or foreground pixels as seeds, and the algorithm propagates that information to the remaining image regions. Among the many extensions of the Graph Cuts methodology [Zeng et al. 2008; Xu et al. 2007], the GrabCut technique [Rother et al. 2004] combines a very simple manual interaction with color modeling and an extra layer of (local) minimization on top of Graph Cuts; it requires little effort from the user but proves to be very robust across different segmentation scenarios. In this work we propose an adaptation of the GrabCut approach to the problems of editing point-cloud geometries and pre-processing images for texture blending.
In our approach, we perform minimal user-assisted training on a small set of acquired range maps and images in order to automatically remove clutter from images and point clouds.

Color acquisition and blending. Most cultural heritage applications require the association of material properties with geometric reconstructions of the sampled artifact. While many methods exist for sampling Bidirectional Reflectance Distribution Functions (BRDFs) [Debevec et al. 2000; Lensch et al. 2003] in sophisticated environments with controlled lighting, typical cultural heritage applications impose fast on-site acquisition and the use of low-cost, easy-to-use procedures and technologies. Color photography is the most common approach. Since removing lighting artifacts requires knowledge of the lighting environment, one approach is to employ specific techniques that use probes [Debevec 1998; Corsini et al. 2008]. However, these techniques are hard to use in practice in typical museum settings with local lights. Dellepiane et al. [2009; 2010] proposed, instead, to use the light from camera flashes. They introduce Flash Lighting Space Sampling (FLiSS), a correction space where a correction matrix is associated with each point in the camera’s field of view. Nevertheless, this method requires a laborious calibration step. Given that medium- to high-end single-lens reflex (SLR) cameras support fairly uniform flash illumination and RAW data acquisition modes that produce images in which each pixel value is proportional to incoming radiance [Kim et al. 2012], we take the simpler approach of using a constant color-balance correction for the entire set of photographs and applying a per-pixel intensity correction based on geometric principles. This approach effectively reduces calibration work. While our previously published results [Bettio et al. 2013] only employed a distance-based correction, in this work we employ a more complete correction that also takes surface orientation into account.
The method is similar to the one originally used by Levoy et al. [2000], without the need for special fiber-optic illuminators or per-pixel color-response calibration. Under our flash illumination, taken from relatively far from the statues (2.5 m), the flash can be approximated as a point source, and its energy deposition on the statue is negligible compared to typical ambient lighting. In addition, while previous color blending pipelines worked on large triangulated surfaces [Levoy et al. 2000; Callieri et al. 2008] or single-resolution point clouds [Pintus et al. 2011a], we blend images directly on multiresolution structures, leading to increased scalability. The pipeline presented in our original work [Bettio et al. 2013] is also combined here with inpainting and infilling methods for constructing seamless models. Our implementation is based on combining screened Poisson reconstruction [Kazhdan and Hoppe 2013] with an anisotropic color diffusion process [Wu et al. 2008] implemented in a multigrid framework.

Scalable editing and processing. Massive point-cloud processing and interactive point-cloud editing are required to produce high-quality 3D models from an input set of registered photos and merged geometries. In this work, we represent geometric datasets as a forest of out-of-core octrees of point samples [Wand et al. 2007; Wimmer and Scheiblauer 2006; Scheiblauer and Wimmer 2011] and employ the same structure for all operations, including color blending and editing. While previous works split and refine nodes based on strict per-node sample budgets, our approach is based on local density estimates, which allows us to keep the structures more balanced.
3. CONTEXT AND METHOD OVERVIEW

The design of our method, which is of general use, has taken into account requirements gathered from domain experts in the context of a large-scale project. Section 3.1 briefly illustrates the design process and the derived requirements, while Sec. 3.2 provides a general overview of our approach, justifying the design decisions in relation to the requirements.

Fig. 2. Mont’e Prama statues on display at the CRCBC exhibition hall. Scanning was performed on-site.

3.1 Requirements

While our methods are of general use, our work is motivated by the Digital Mont’e Prama project, a collaborative effort between CRS4 and the Soprintendenza per i Beni Archeologici per le Province di Cagliari ed Oristano (ArcheoCAOR, the government department responsible for the archaeological heritage of the provinces of Cagliari and Oristano), which aims to digitally document, archive, and present to the public the large and unique collection of prehistoric statues from the Mont’e Prama complex, including larger-than-life human figures and small models of prehistoric nuraghe (cone-shaped stone towers). The project covers aspects ranging from 3D digitization to visual exploration (see Balsa et al. [2014] and Marton et al. [2014] for more details on the visual exploration aspects).

The Mont’e Prama Collection. The Mont’e Prama complex is a large set of sandstone sculptures created by the Nuragic civilization in Western Sardinia. More than 5000 sculpture fragments were recovered during four excavation campaigns carried out between 1975 and 1979. According to the most recent estimates, the stone fragments came from a total of 44 statues depicting archers, boxers, warriors, and
models of prehistoric nuraghe. These can be traced to an as-yet undetermined period between the tenth and the seventh century BC. Restoration, carried out at the Centro di Restauro e Conservazione dei Beni Culturali (CRCBC) of Li Punti (Sassari, Italy), resulted in the partial reassembly of 25 human figures with heights varying between 2 and 2.5 meters, and of 13 approximately one-meter-sized nuraghe models. Following modern restoration criteria, reassembly was performed in a non-invasive way (no drilling or bolt insertions into the sculptures). Fragments that definitely join have been glued together using a water-soluble epoxy resin, and all the gaps on the resin-filled surface were covered with lime-mortar stucco. Custom external supports have been designed to sustain all the parts of a statue, ensuring the stability of all components without mechanical attachments, while minimizing contact with the statue and maximizing visibility (see Fig. 1).

Fig. 3. Pipeline. We improve the digitization of 3D artworks in a cluttered environment using 3D laser scanning and flash photography. Semi-automated methods are employed to generate masks that segment the 2D range maps and the color photographs, removing unwanted 3D and color data prior to 3D integration. Sharp shadows generated by flash acquisition are handled by the masking process, and color deviations introduced by the flash light are corrected at color-blending time by taking object geometry into account. A final seamless model is created by combining Poisson reconstruction with anisotropic color diffusion. User-guided phases are highlighted in yellow.
All supports allow a 360-degree view of the statue. The project covers 37 of the 38 reassembled statues; one nuraghe model was excluded since it is small and still subject to further reassembly work (see Fig. 2 for the setup of the exhibition in the CRCBC hall and Fig. 9 for the final digitally reconstructed models).

Requirement Analysis. In order to design our acquisition and 3D reconstruction technique, we embarked on a participatory design process involving domain experts, with the goal of collecting the detailed requirements of the application domain; the experts included two archaeologists from ArcheoCAOR and two restoration experts from CRCBC. Additional requirements stem from our analysis of related work (see Sec. 2) and our own past experience developing capture and reconstruction systems for architecture and cultural heritage [Cuccuru et al. 2009; Pintus et al. 2011a; Mura et al. 2013]. In the following we describe the main requirements used to guide the development process, briefly summarizing how they were derived:

R1. On-site scanning in a limited time frame. Scanning of the entire set of statues must be performed while the statues are on display in the CRCBC exhibition hall. All the scanning has to be performed during a time frame of at most 8 hours/day (early morning and late evening) and within a period of no more than two months.

R2. No contact and no fragment motion. The statues are made of relatively soft sandstone and are very sensitive to contact and erosion. Therefore, the acquisitions must be performed with the statues mounted on their supports and without any contact. The only motion possible is the sliding of the base of each fully reassembled statue (i.e., 2D rotation and translation on the exhibition floor).
Fragments cannot be moved relative to one another.

R3. Macro- and micro-structural shape capture and reconstruction. Like almost any decorated cultural heritage artifact, the statues present information at multiple scales (global shape and carvings). Even the finest material micro-structure carries valuable information (e.g., on the artistic decorations as well as on the carving process). For instance, almost all the fragments of the Mont’e Prama statues have millimeter-sized carvings on various parts, and thus require sub-millimetric model precision.

R4. High-resolution color capture and reconstruction. While no result of a human coloring process is currently visible on the statues, color provides valuable information and is part of the aura of the objects. For instance, the fragments acquired in this project are often made of stones with different grains. The sandstone itself has a spatially varying texture, with color inserts and characteristic patterns due to organic and fossil inclusions as well as limestone deposits. In addition, traces of fire are visible on multiple fragments, leading to a characteristic dark-brown coloring. Moreover, the surface finish is variable and subtly modifies the stone color, while not appreciably modifying the reflectance, which remains extremely diffuse. In short, it is important to capture and reproduce color at a resolution comparable to that of the geometry capture.

R5. Seamless virtual models for public presentation. While the acquisition process should lead to the creation of models accurate enough for archival, study, and further restoration activities, one of the main motivations for the project is to use them for visual communication to scholars as well as to the public at large, both in museum settings and on the web. In order to preserve the aura of the original artifacts, the 3D models should be high-resolution and seamless. The exostructures should not be present in these virtual models.
Since the exostructure has contact points with the artwork, holes are unavoidable. However, for public presentation the model should be watertight, as visible holes would be confusing and unattractive. Since the contact areas are small and placed in areas lacking detail, smoothly infilling and inpainting these areas is considered acceptable for public display.

R6. Low-labor approach. To be applicable to the Mont’e Prama project, and to ensure wide adoption in other cultural heritage campaigns, the proposed method should be flexible and require a reasonably low amount of labor, reducing the time and cost needed to produce the models.

3.2 Approach

Requirements R1-R6 were taken as guidelines for our research, which resulted in the definition of a semi-automatic color and shape 3D scanning pipeline for the acquisition and reconstruction of cluttered objects. The pipeline is an enhancement of the well-established pipeline based on laser scanning and digital photography. This makes the method general-purpose enough to be used in different settings by operators trained in the classic robust approach based on 3D laser scanning. Figure 3 outlines the approach, which consists of a short on-site phase and a subsequent, mostly automatic, off-site phase (R1 and R6). The only on-site operations are the acquisition of geometry and color, which are performed in a contactless manner by only sliding and rotating the statue while it is mounted on its support (R2). The geometry acquisition is performed with a triangulation laser scanner (R3), which produces range and reflectance maps that are incrementally and coarsely aligned during scanning in order to monitor 3D surface coverage.
Color, on the other hand, is acquired in a dark environment by taking a collection of photographs with an uncalibrated camera, using the camera flash as the only source of light (R1, R4, and R6). A Macbeth color checker, visible in at least one of the photographs, is used for post-process color calibration. Analogously to the geometry acquisition step, coverage is (optionally) checked on-site by coarsely aligning the photographs using a Structure-from-Motion (SfM) pipeline. The remainder of the work can be performed off-site using semi-automatic (R6) geometry and color pipelines that communicate only at the final merging step. In order to remove geometric clutter (R5), the user manually segments a very small subset of the input range maps, producing a training dataset that is sufficient for the algorithm to automatically mask unwanted geometry. This step exploits the reflectance channel of the laser scanner. As commonly done in cultural heritage pipelines, the automatic masking can in principle be revised by visually inspecting and optionally manually improving the segmentation, using the same tools employed for creating the example masks. Note that this step, in contrast to previous work [Farouk et al. 2003], is entirely optional (see Sec. 8 for an evaluation of the manual labor required). In order to create a clean 3D model, the masks are applied to all range maps, which are then finely registered with a global registration method and optionally edited manually for the finishing touch. The geometry reconstruction is then performed using Poisson reconstruction [Kazhdan et al. 2006], which takes care of infilling the small holes that appear in unscanned areas (i.e., where supports touch the surface of the scanned object) using a smoothness prior on the indicator function. The effect is similar to volumetric diffusion [Davis et al. 2002]. The color pipeline follows a similar work pattern, beginning from the photographs in RAW format.
After the training performed by the user on a small subset of images, the algorithm automatically masks all the input photos, removing clutter (R5). The user then optionally performs a visual check and a manual refinement; the masked photos, already coarsely aligned among themselves with SfM, are then aligned with the geometry using the method by Pintus et al. [2014]. The photos are finally mapped to the surface by color-blended projection [Callieri et al. 2008; Pintus et al. 2011a]. During the blending step, colors are calibrated using data extracted from the color checker, and the differences in illumination caused by the flash are corrected using geometric information (R5). Finally, an anisotropic diffusion process [Wu et al. 2008] is employed to perform a conservative inpainting of the areas left without color due to occlusions (R5). It should be noted that the infilling and inpainting approaches employed in this work are minimalist (R5): we do not aim at reconstructing high-frequency details in large areas. Instead, we just smoothly extend color and geometry from the neighborhood of holes, avoiding confusing and unattractive holes in public display applications. Details on semi-automatic geometry and color masking, as well as on scalable data consolidation and color mapping, are provided in the following sections. The corresponding phases are highlighted in yellow in Fig. 3.

4. DATA ACQUISITION

While geometry acquisition is performed using the standard triangulation laser scanning approach, color acquisition is performed using an uncalibrated flash camera.
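The color-checker-based calibration used during blending amounts to fitting per-channel gains against a neutral reference patch. The following is a minimal sketch under that interpretation; the function names and the green-channel reference are our own assumptions, not the authors' implementation:

```python
import numpy as np

def gains_from_gray_patch(patch_rgb):
    """Per-channel gains that equalize the mean R, G, B response of a
    (nominally neutral) gray-patch region cropped from the color checker.
    `patch_rgb` is an (H, W, 3) float array of radiance-linear values."""
    means = patch_rgb.reshape(-1, 3).mean(axis=0)
    # Scale each channel to match the green channel's mean (an arbitrary
    # but common reference choice -- our assumption, not the paper's).
    return means[1] / means

def apply_gains(image_rgb, gains):
    # Broadcasting multiplies each pixel's 3 channels by the 3 gains.
    return image_rgb * gains

# Toy example: a flat gray patch photographed with a blue-ish cast.
patch = np.ones((4, 4, 3)) * np.array([0.5, 0.4, 0.6])
g = gains_from_gray_patch(patch)
balanced = apply_gains(patch, g)
```

Because the same flash and camera settings are used for all photographs, a single set of gains can be applied to the whole photo set, matching the paper's constant color-balance correction.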
In our context, flash illumination is a viable way to image the objects, as it provides sharp shadows together with information on the image-specific direction of the illumination. Since the geometry of the image is known at color-mapping time, we can correct each projected pixel according to the position of the surface onto which it projects, with respect to the camera and the flash light, thus obtaining a reasonable approximation of the surface albedo (see Sec. 7). In addition, cluttering material (e.g., the supporting exostructure) generates sharp shadows that can be easily identified both by the masking process and by taking geometric occlusion into account in the color mapping process (see Sec. 5). In contrast to previous work [Dellepiane et al. 2009; 2010; Levoy et al. 2000], we handle images directly in RAW format, which allows us to correct images without prior camera calibration. We experimentally measured that on a medium/high-end camera, such as the Nikon D200 employed in our work, acquisition in RAW format produces images with pixel values proportional to the incoming radiance at medium illumination levels (as also verified elsewhere [Friedrich et al. 2010; Kim et al. 2012]), and that the flash emits a fairly uniform light within a reasonable working space.

Fig. 4. Flash illumination. Using RAW camera data, distance-based scaling provides a reasonable correction. Balance between the color channels can then be ensured using color-checker-based calibration.

Sensor near-linearity has been verified by taking images of a checkerboard in a dark room at t=1/250 s, f/11.0+0.0, ISO 400. As shown in the graph in Fig. 4, the values (measured on the white checkerboard squares) are proportional to 1/d^2, where d is the distance from the flash light. Distance-based scaling can thus be exploited at color-mapping time to provide a reasonable correction, while balance between the color channels can be ensured using color-checker-based calibration (see Sec. 7).
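The measured 1/d^2 falloff translates into a simple per-pixel correction at color-mapping time. A minimal sketch follows, assuming radiance-linear RAW values and a known per-pixel flash distance; the function name, the reference distance, and the clamped cosine term (standing in for the surface-orientation correction added in this work) are our assumptions:

```python
import numpy as np

def flash_correction(raw, distance, cos_incidence=1.0, d_ref=2.0):
    """Undo inverse-square flash falloff (and optionally foreshortening)
    for radiance-linear pixel values.

    A surface point at `distance` meters receives ~1/distance**2 of the
    irradiance at the reference distance `d_ref` (2 m, the distance used
    for the flash-uniformity test in the paper), so we multiply by
    (distance / d_ref)**2. Dividing by the clamped cosine of the light
    incidence angle sketches the orientation-aware part of the correction.
    """
    d = np.asarray(distance, dtype=float)
    falloff = (d / d_ref) ** 2
    return np.asarray(raw, dtype=float) * falloff / np.clip(cos_incidence, 1e-3, None)

# A patch seen at 4 m appears 4x darker than the same material at 2 m;
# the correction maps both observations to the same albedo estimate.
near = flash_correction(0.8, 2.0)
far = flash_correction(0.2, 4.0)
```

Since every photograph records its own flash position, this correction can be applied independently per image before the blending step.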
The characteristics of the flash illumination have been verified by taking a photograph of a white diffuse material at 2 m using the same 50 mm lens used for photographing the statues. After correcting for lighting angle and distance, the illumination varies by a maximum of only 7.6% within the view frustum. While more accurate results may be obtainable with calibration techniques, even the most accurate ones performed off-site [Levoy et al. 2000; Dellepiane et al. 2009; 2010; Friedrich et al. 2010] do not perfectly match local shading and illumination settings since, in particular, indirect illumination is not taken into account and the photographed materials are (obviously) not perfect Lambertian scatterers. We thus consider this uncalibrated approach to be suitable for practical use. It should be noted that, whenever needed, these alternative techniques can easily be applied during post-processing using the same captured data. Indeed, the availability of RAW images in the captured database grants the ability to perform a variety of post-process enhancements [Kim et al. 2012].

5. SEMI-AUTOMATIC GEOMETRY AND COLOR MASKING

Our masking process aims to separate the foreground geometry (the object to be modeled) from the cluttering data (in particular, occluding objects), under the assumption of different appearances, as captured in the reflectance and color signals. Starting from a manual segmentation of a small set of examples (Sec. 5.1), we train a histogram-based classifier of the materials (Sec. 5.2), which is then refined by finding an optimal labeling of pixels using graph cuts (Sec. 5.3). A final (optional) user-assisted revision can then be performed using the same tool used for manual segmentation.

Fig. 5.
Automatic masking. Geometry (top) and color (bottom) results for a single image. From left to right: acquired reflectance/color image; user-generated ground-truth mask; mask generated by histogram-based classification; final automatically generated mask; difference from ground truth; magnified region of the difference image. In the difference image, black and white pixels are perfect matches, yellow pixels are false positives, green pixels are false negatives on this image, and red pixels are real false negatives considering the entire dataset.

5.1 Manual segmentation

To perform the initial training, the user is provided with a custom segmentation tool with the same interface for range maps and color images. The tool allows the user to visually browse the images/scans in the acquisition database, visually select a small subset (typically less than 5%), and draw a segmentation in the form of a binary mask, using white for foreground and black for background. The mask layer is rendered on top of the image layer, and the user can vary the transparency of the mask to evaluate the masking results. In addition to standard draw/erase brushes, our tool supports interactive GrabCut segmentation [Rother et al. 2004], in which the user selects a bounding box of the foreground object to initialize the segmentation method.

5.2 Histogram-based classification

The user-selected small subset of manually masked images and range maps is used to learn a rough statistical distribution of the pixel values that characterize foreground objects. For artifacts made of fairly uniform materials (e.g., stone sculptures), 3-4 range maps and 4-5 images are typically sufficient.
Mont’e Scan: Effective shape and color digitization of cluttered 3D artworks • 4:11

For the range maps we build a 1D histogram of reflectance values, quantized to 32 levels, by accumulating all pixels that were marked as foreground in the user-defined mask. For the color images, on the other hand, we use a 2D histogram based on hue and saturation, both quantized to 32 levels. Ignoring the value component makes the classification more robust to shading variation due to flash illumination and variable surface orientation. The process is repeated for all manually masked images, accumulating histogram values before a final normalization step. The histogram computed on the training set can be used for a rough classification of range-map/image pixels based on reflectance/color information. This classification is simply obtained by back-projecting each image pixel to the corresponding bin location and interpreting the normalized histogram value as a foreground probability. It is worth noting that whether the histogram is computed from foreground or clutter data is not important; as long as the rest of the pipeline is consistent, the only constraint is the aforementioned assumption that the two appearances are reasonably well separable.

5.3 Graph cut segmentation

As illustrated in Fig. 5, third column, the histogram-based classification is very noisy but roughly succeeds in identifying the foreground pixels, which are generally marked with high probabilities. This justifies our use of histograms for the rough classification step rather than the more complex statistical representations typically used in soft segmentation [Ruzon and Tomasi 2000; Chuang et al. 2001]. Segmentation is improved by using the rough histogram-based classification as the starting point for an iterated graph cut process. We initially separate all pixels into two regions: probably foreground for those with normalized histogram value larger than 0.5, and probably background for the others.
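The histogram training, back-projection, and thresholding steps just described can be sketched as follows. This is a simplified illustration of the technique, not the paper's code; function names are ours, and we assume hue and saturation have already been quantized to 32 levels:

```python
import numpy as np

BINS = 32  # hue and saturation are each quantized to 32 levels

def train_histogram(hs_images, fg_masks):
    """Accumulate a 2D hue/saturation histogram over the pixels marked as
    foreground in the user-drawn masks, then normalize it to [0, 1]."""
    hist = np.zeros((BINS, BINS))
    for hs, mask in zip(hs_images, fg_masks):
        # unbuffered accumulation of one count per foreground pixel
        np.add.at(hist, (hs[..., 0][mask], hs[..., 1][mask]), 1.0)
    return hist / hist.max()

def classify(hs, hist, threshold=0.5):
    """Back-project each pixel to its histogram bin; the normalized bin
    value is read as a foreground probability, thresholded into the
    'probably foreground' / 'probably background' regions."""
    prob = hist[hs[..., 0], hs[..., 1]]
    return prob, prob > threshold
```

For range maps the same scheme degenerates to a 1D histogram over the quantized reflectance channel.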
We then iteratively apply the GrabCut [Rother et al. 2004] segmentation algorithm, using a Gaussian Mixture Model with 5 components per region and estimating the segmentation using min-cut. As illustrated in Fig. 5, column 4, the process produces tight and well-regularized segmentation masks.

5.4 Morphological post-pass

Since masks must be conservative, especially at silhouette boundaries where a small misalignment is likely to occur, we found it useful to post-process the masks using morphological filters. After denoising the mask with a small median filter (5x5 in this work), we perform an erosion of the mask using an octagonal kernel (4x4 in this case). This step eliminates small isolated spots and keeps the mask away from the silhouettes; the removal does not create problems given the large overlap of the images that cover the model.

6. DATA CONSOLIDATION AND EDITING

The final result of the automatic masking step is a mask image associated to each range map and color image. These masks are used for pre-filtering the geometry and color information before further processing. The remaining processing steps, optionally including color mapping (see Sec. 7), are performed using an editable out-of-core structure based on a forest of octrees. The top level is a scene structure that hierarchically groups models and associates to each group a rigid-body transformation positioning it with respect to its parent. The leaves of the scene structure are out-of-core octrees of points. Similarly to the system of Wand et al. [2007], the data points are stored unmodified in the leaf nodes of the octrees, while inner nodes provide grid-quantization-based multiresolution representations. In contrast to previous work, we dynamically decide whether to subdivide or merge nodes based on a local dynamic density estimation.
In particular, at each insertion/deletion operation in a leaf node, we count how many grid cells are occupied in the quantization grid that would be associated to the node upon subdivision. If this number is larger than four times the number of points, we consider the node dense enough for splitting. The structure provides efficient rendering and allows for handling very large data sets using out-of-core storage, while supporting efficient online insertion, deletion, and modification of point samples.

Fig. 6. Interactive inspection and editing. Left: all captured range maps imported into the out-of-core scene structure without applying masks. Right: automatic masking removes most – if not all – clutter. Editing can thus be limited to fixing fine details.

After completing the masking process, we import all the range maps into a scene structure, using a separate octree for each one (see Fig. 6). The initial transformation of each range map is the coarse alignment transformation computed during the scanning operation, while normals and local sampling space are computed by local differences. The hierarchical scene structure is exploited for grouping scans (e.g., separating low-res from hi-res scans and labeling semantically relevant parts). Alignment and editing operations are applied to the structure using an interactive editor. Alignment is performed by applying to (a subset of) the range scans a global registration method based on the scalable approach of Pulli [1999], using GPU-accelerated KNN to speed up local pairwise ICP [Cayton 2012]. After satisfactory global alignment, multiple octrees can optionally be merged together. Interactive editing is performed on the structure using a select-and-apply paradigm that supports undo history.
At each application of a modifier (e.g., point deletion), modified samples are moved to a temporary out-of-core structure (a memory-mapped array) prior to modification. By associating the array with the undo list, we are able to perform reversible editing. The final colored point clouds can then be further elaborated to produce seamless surface models. A number of state-of-the-art approaches exist for producing consolidated models represented as colored triangle meshes, the most common surface representation [Cuccuru et al. 2009; Manson et al. 2008; Kazhdan et al. 2006]. We have adopted the recent screened Poisson surface reconstruction approach [Kazhdan and Hoppe 2013], which produces high-quality watertight reconstructions by incorporating input points as interpolating constraints, while reasonably infilling missing areas based on smoothness priors. Because the Poisson approach does not handle colored surfaces, we incorporate color in a post-processing phase (see Sec. 7.1).

7. COLOR CORRECTION, MAPPING, AND INPAINTING

The color attribute is obtained by first projecting masked photos onto the 3D model reference frame and then performing seamless texture blending of those images onto the surface [Pintus et al. 2011c; Pintus et al. 2011a]. In contrast to previous work, we blend and map images directly to the out-of-core structure and perform color correction starting from the captured RAW images during the mapping operation.

7.1 Streaming color mapping

Our streaming photo blending implementation closely follows our previous work [Pintus et al. 2011c; Pintus et al. 2011a], which we have extended to work on the multiresolution point cloud structure. We associate a blending weight to each point, initialized to zero. We then perform photo blending, adding one image at a time.
For each image, we start by rendering the point cloud from the camera point of view, using a rendering method that performs a screen-space surface reconstruction and adapting the point cloud resolution to 1 projected point/pixel. We then estimate a per-pixel blending weight with screen-space GPU operations that take as input the depth buffer as well as the stencil masks (see Pintus et al. [2011a] for details). In a second pass over the point cloud, we update the point colors and weights contained in the visible samples of the leaves of the multiresolution structure. Once all images have been processed, we consolidate the structure, recomputing bottom-up the colors and weights of inner node samples using averaging operations. As a result, the colored models are available in our out-of-core multiresolution point cloud structure for further editing. In order to apply this same process to triangulated surfaces, such as those coming out of the Poisson reconstruction, we import the surface vertices into our octree, perform the mapping, and then map the color back to the triangulated surface. In this manner we can use the spatial partitioning structure for view-frustum and occlusion culling during mapping operations.

Fig. 7. Color correction and relighting. Left: original image under flash illumination; note the sharp shadows and uneven intensity. Center-left: projected color with distance-based correction and no synthetic illumination; notice that the flash highlight has been removed, but a darker shade remains on the slanted surface. Center-right: projected color with distance-based and orientation-based correction and no synthetic illumination; note the even distribution and good approximation of surface albedo. Right: synthetically illuminated model based on recovered albedo using a different lighting setup.

7.2 Flash color correction

Color correction happens at color blending time, during color mapping operations.
At this phase of the processing, the color mapping algorithm knows the color stored in the corresponding pixel of the RAW image (the apparent color $C_{raw}$), the camera parameters (camera intrinsics and extrinsics as well as flash position), and the geometric information of the current sample (position and normal stored in the corresponding pixel of the frame buffers used to compute blending weights). As we verified, the RAW data acquisition produces images where each pixel value is proportional to incoming radiance, and the flash light is fairly uniform (see Sec. 4). Therefore, we apply a simple color correction method based on first principles, similar to the original approach of Levoy et al. [2000], but without per-pixel calibration of flash illumination and camera response. The results presented in this paper assume that the imaged surface is a Lambertian scatterer, so that the measured color, for a sufficiently distant illumination, can be approximated for each color channel $i$ by

  $C^{(i)}_{raw} \approx w^{(i)}_{balance} \frac{I_{flash}}{d^2} C^{(i)}_{surface} (n \cdot l)$    (1)

where $w^{(i)}_{balance}$ is the channel's scale factor used to achieve color balance, $I_{flash}$ is the flash intensity, $C^{(i)}_{surface}$ is the diffuse reflectance (albedo) of the colored surface sample, $n$ is the surface sample normal, $l$ is the flash light direction, and $d$ is the distance of the surface sample from the flash light. As in standard settings, the color balance factors are recovered by taking a single image of a calibration target (Macbeth charts in our case) using the same settings used for taking the photographs of the artifacts.
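At mapping time, the correction amounts to inverting this flash model. As a concrete sketch (function and parameter names are ours, not the paper's; the 0.1 offset and smoothed normal are the ones the paper uses to stabilize the inversion at grazing angles), the per-sample correction could look like:

```python
import numpy as np

EPS = 0.1  # dot-product offset limiting over-correction at grazing angles

def corrected_color(c_raw, w_balance, d, d0, n_smooth, l):
    """Invert the Lambertian flash model to map an apparent color to a
    distance- and orientation-corrected color at a user-provided
    reference distance d0."""
    ndotl = max(float(np.dot(n_smooth, l)), 0.0)  # clamp back-facing samples
    return (d * d * c_raw) / (w_balance * d0 * d0 * (EPS + (1.0 - EPS) * ndotl))
```

A sample photographed at the reference distance, facing the flash, with unit balance factor is left unchanged, as expected from the model.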
Thus, to compute the color of the surface at color mapping time, we consider a user-provided desired object distance $d_0$ and use equation (1) to find

  $C^{(i)}_{mapped} = \frac{d^2}{w^{(i)}_{balance} d_0^2 (\epsilon + (1-\epsilon)\,\tilde{n} \cdot l)} C^{(i)}_{raw}$    (2)

where $\epsilon$ is a small non-null value (0.1 in our case) and $\tilde{n}$ is the smoothed normal (obtained in screen space with a 5x5 averaging filter). Normal smoothing and dot-product offsetting are introduced to reduce the effect of possible over-corrections in the presence of a small misalignment – particularly at grazing angles. It should be noted that, since the Lambertian model does not take into account the roughness of the surface, under flash illumination it tends to over-shadow at grazing angles. As noted by Oren and Nayar [1994], this effect is due to the fact that while the brightness of a Lambertian surface is independent of the viewing direction, the brightness of a rough surface increases as the viewing direction approaches the light source direction. The small angular weight correction thus also contributes to reducing the boosting of colors near silhouettes. Figure 7 shows how a single flash image introduces sharp shadows and uneven intensity based on distance and angle of incidence. Shadows are removed by the color masking process described in Sec. 5 as well as by shadow mapping during color projection. Distance-based correction removes flash highlights but still produces darker shades on slanted surfaces. Combining distance-based and orientation-based correction, on the other hand, produces a reasonable approximation of the surface albedo, thereby enabling a seamless combination of multiple images without illumination-dependent coloring. The resulting colored model can thus be used for synthetic relighting.

7.3 Inpainting

Points of contact between supports and statues generate small holes in the geometry, as well as missing colors due to occlusions and shadows (seen in white in Fig. 8 left).
In order to produce final colored watertight models – useful, e.g., for public presentations – it is important to smoothly reconstruct these missing areas. We took the conservative approach of using only smoothness priors to perform geometry infilling and color inpainting, rather than applying more invasive reconstruction methods based on, for instance, non-local cloning. This conservative approach has the advantage of not introducing spurious details, while repairing the surface enough to avoid distracting surface and color artifacts during virtual exploration. Geometry infilling is simply achieved by applying a Poisson surface reconstruction method [Kazhdan and Hoppe 2013] to reasonably infill missing areas based on smoothness priors (see Fig. 8 center). Color inpainting, on the other hand, uses an anisotropic color diffusion process [Wu et al. 2008] implemented in a multigrid framework. We employ a meshless approach that can be applied either to the vertices of the triangle mesh produced by Poisson reconstruction, or directly to a point cloud constructed from it. We assume that each color sample stores the accumulated color and weight coming from color blending. We first extract all points with a null weight, which are those requiring infilling. We then extract a neighbor graph for this point cloud (by edge connectivity when operating on a triangle mesh, or by a k-nearest-neighbor search, with k=8, when working on point clouds), growing the graph by one layer in order to include colored points in the neighborhood of holes. We then produce a hierarchy of simplified graphs using a sequence of coarsening operations on the neighbor graph, so that each level has only one quarter of the samples of the finer one.
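The hole-filling idea can be illustrated in a deliberately simplified, single-level form: isotropic averaging over the neighbor graph, without the anisotropic weighting and multigrid hierarchy the paper actually uses, and with naming of our own invention:

```python
def diffuse_colors(colors, weights, neighbors, iters=100):
    """Iteratively fill colors of zero-weight (hole) samples by averaging
    their graph neighbors; samples with non-zero blending weight are
    never touched, acting as boundary conditions for the diffusion."""
    colors = list(colors)
    holes = [i for i, w in enumerate(weights) if w == 0.0]
    for _ in range(iters):
        for i in holes:
            nbrs = neighbors[i]
            if nbrs:
                colors[i] = sum(colors[j] for j in nbrs) / len(nbrs)
    return colors
```

The multigrid V-cycle solver described next plays the role of accelerating exactly this kind of relaxation on large graphs, where plain Gauss-Seidel sweeps would converge too slowly.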
We stop the simplification when the number of nodes is small enough (fewer than 1000 in this paper) or no more simplification edges exist. The graph is used to quickly compute the anisotropic diffusion using a multigrid solver based on V-cycle iterations. Boundary conditions are computed using the samples with non-zero weight that are included in the hierarchy. The anisotropic diffusion equations are then successively transferred to coarser grids by simple averaging and used in a coarse-to-fine error-correction scheme. Once the coarsest grid is reached, the problem is solved through Gauss-Seidel iterations, and the coarse-grid estimates of the residual error are propagated down to the original grid and used to refine the solution. The cycle is repeated a few times until convergence (results in this paper use 10 V-cycle iterations). As illustrated in Fig. 8 right, color diffusion combined with watertight surface reconstruction successfully masks the color and geometry artifacts due to occlusions and shadows. It is important to note that the original colors and geometry are preserved in the database and that these extra colors can easily be removed from presentations when desired.

Fig. 8. Geometry infilling and inpainting. Left: the points of contact between the support and the statue generate small holes in the geometry as well as missing colors due to occlusions (in white). Middle: Poisson reconstruction smoothly infills holes. Right: color is diffused anisotropically for conservative inpainting.

8. IMPLEMENTATION AND RESULTS

We implemented the methods described in this paper in a C++ software library and system running on Linux. The out-of-core octree structure is implemented on top of Berkeley DB 4.8.3, while OpenMP is used to parallelize blending operations. The automatic masking subsystem is implemented on top of OpenCV 2.4.3. RAW color images from the camera are handled using the dcraw 9.10 library.
The SfM software used for image-to-image alignment is Bundler 0.4.1 [Snavely et al. 2008]. All tests were run on a PC with an 8-core Intel Core i7-3820 CPU (3.60GHz), 64GB RAM, and an NVIDIA GTX680 graphics board.

Fig. 9. Reconstructed statues of the Mont’e Prama complex. Colored reconstructions of the 37 reassembled statues.

8.1 Acquisition

The scanning campaign covered 37 statues, which were scanned and photographed directly in the museum. Fig. 9 summarizes the reconstruction results. The geometry of all the statues was acquired at a resolution of 0.25mm using a Minolta Vivid 9i in tele mode, resulting in over 6200 640x480 range scans. The number of scans includes a few (wide) coarse scans fully covering each statue, which were acquired to help with global scan registration. The scanning campaign produced over 1.3G valid position samples. Color was acquired with a Nikon D200 camera mounting a 50mm lens. All photos were taken with a flash in a dark room, with a shutter speed of 1/250s, aperture f/11.0+0.0, and ISO sensitivity 400. A total of 3817 10Mpixel photographs were produced. The on-site scanning campaign required 620 hours to complete for a team of two people, one camera, and one scanner. In practice, on-site time was reduced by parallelizing acquisition, with two scanning teams working on two statues at a time. The acquisition time includes scanning sessions, flash photography sessions (in the dark room), and coarse alignment of scans using our point cloud editor.
Photo alignment using the SfM pipeline was performed after each flash acquisition session, in parallel to the scanning session, in order to verify whether sufficient coverage had been reached. The average bundle adjustment time was 2 hours/statue.

8.2 Automatic geometric masking

The quality and efficiency of our automatic geometric masking process was extensively evaluated on a selected dataset, which was also manually segmented to create a ground-truth result. The digital acquisition of the selected statue, named Guerriero3 and depicted in Fig. 1, consists of 226 range maps (54 of which contain clutter data).

                            Samples          (%)
  Model points              51.4M
  Clutter points            790K
  False positives           240 (486)        0.03 (0.06)
  False negatives           35757 (11219)    4.53 (1.42)
  True false negatives      5639 (2746)      0.68 (0.35)

Fig. 10. Evaluation of automatic geometric masking. Results of manual segmentation of a single statue (Guerriero3) compared with the results produced by automatic masking. We report the number of range map samples labeled as model (“Model points”) and clutter (“Clutter points”) in the ground-truth dataset, the samples erroneously labeled as statue (“False positives”) or clutter (“False negatives”) by the automatic method, as well as the number of false-negative points that really lead to missing data in the combined dataset (“True false negatives”). Values in parentheses compare the manually refined dataset, instead of the purely automatic one, with the ground truth. Percentages are computed with respect to the number of clutter points.

Each ground-truth mask was created manually from the reflectance channel of the acquired range map using our interactive mask editor. An experienced user took about 330 minutes to complete the manual segmentation process for the entire statue. For the sake of completeness, we also measured the time required to remove clutter data from the 3D dataset by direct 3D point cloud editing, as done in typical scanning pipelines.
Using our out-of-core point cloud editor, this operation was completed by an experienced user in about 300 minutes, which is similar to the time required for the manual 2D segmentation approach. By taking into account the relative complexity of the other statues, we can estimate a total time of about 130-150 man-hours for the manual cleaning of the entire collection of statues. The automatic segmentation process was started by manually segmenting 5 reflectance images using the same editor used for manual segmentation. This training set was used as input for the automatic classifier. The entire process took 9 minutes for the creation of the training set and 6 minutes for the automatic computation of the masks on an 8-core processor. The automatically generated masks were then manually verified and retouched using our system. This additional step, which is optional, took about 30 minutes. Applying the automatic process to the entire statue collection took only 5 hours excluding manual cleaning, and a total of 13.5 hours including the manual post-process cleanup: a more than ten-fold speed-up with respect to the manual approaches. The efficiency of the automatic masking method can be seen from the results presented in Fig. 10, which shows the comparison between the automatically segmented masks (with and without post-process manual cleaning) and the ground-truth dataset. More than 95.0% of the clutter samples are correctly labeled. False-positive samples represent extra points which can easily be identified and removed from the automated masks via 2D editing; they are only about 0.03% of the total clutter in the ground-truth dataset.
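The error figures reported in Fig. 10 can be computed per mask by straightforward comparison with the ground truth; a minimal sketch (our own naming), with True marking statue pixels and percentages taken relative to the clutter count, as in the table:

```python
import numpy as np

def mask_errors(auto_fg, truth_fg):
    """False positives (clutter kept as statue) and false negatives
    (statue masked out as clutter) of an automatic mask vs. ground
    truth, with percentages relative to ground-truth clutter points."""
    fp = int(np.count_nonzero(auto_fg & ~truth_fg))
    fn = int(np.count_nonzero(~auto_fg & truth_fg))
    clutter = int(np.count_nonzero(~truth_fg))
    return fp, fn, 100.0 * fp / clutter, 100.0 * fn / clutter
```

True false negatives would additionally require the 1mm radius search across overlapping scans described below, which is omitted here.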
False-negative points represent statue samples that have been erroneously masked; they are about 4.5% (1.4% in the cleaned-up dataset) of the total clutter in the ground-truth dataset. Since overlapping range maps typically acquire the geometry of the same model region from multiple points of view, a false-negative sample is not a problem if its value is correctly classified in at least one mask covering the same area. Taking this fact into account, we verified that the points completely missed by the acquisition (true false negatives) are only 0.68% of the total imaged clutter surface. This check was performed by searching in overlapping scans for samples within a radius of 1mm from each missing sample. We can therefore conclude that only a small portion of the surface is missed by the system. Further, Fig. 5 illustrates the position of the missing points; from the images it is easy to see that the points in question are often very sparse or represent small boundary areas of the model. Thus, their overall effect on dataset quality is quite limited.

8.3 Automatic color masking

The quality and efficiency of the color masking process was evaluated in a manner analogous to the geometry masking procedure. The selected statue – “Guerriero3”, depicted in Fig. 1 – was imaged by 68 photographs (33 of which contain clutter data). Manually masking the images took 181 minutes, while the automated process required 9 minutes to generate the training set and 15 minutes to automatically compute the masks on 8 CPU cores, plus a final 30 minutes for the optional manual post-process cleanup. The speed-up provided by our automated procedure is, again, substantial. The semi-automatic masking process for the entire set of statues required a total of only 41 hours (17 hours without the post-process cleaning).
By taking into account the relative complexity of the other statues, we can estimate a total time of about 145 man-hours for the manual cleaning of the entire collection of statues.

                            Samples          (%)
  Model points              220.5M
  Clutter points            12.1M
  False positives           381K (334K)      3.16 (2.77)
  False negatives           263K (253K)      2.18 (2.09)
  True false negatives      8642 (7725)      0.07 (0.06)

Fig. 11. Evaluation of automatic color masking. Results of manual segmentation of a single statue (Guerriero3) compared with automatic masking results. We report the number of colored samples labeled as model (“Model points”) and clutter (“Clutter points”) in the ground-truth dataset, the samples erroneously labeled as statue (“False positives”) or clutter (“False negatives”) by the automatic method, as well as the number of false-negative points that really lead to missing data in the combined dataset (“True false negatives”). Values in parentheses compare the manually refined dataset, instead of the purely automatic one, with the ground truth. Percentages are computed with respect to the number of clutter points.

As illustrated in the table in Fig. 11, the color masking procedure achieves results similar to those obtained by geometry masking. Again, about 95.0% of the samples are labeled correctly. In this case, false-positive samples are points where clutter color could potentially leak onto geometry areas. These represent about 3% of the clutter area – i.e., below 0.2% of the model area. False-negative points, instead, are statue samples that do not receive color from a given image because they have been erroneously masked; they are about 2.2% (2.1% in the cleaned-up dataset) of the total clutter in the ground-truth dataset, but reduce to negligible amounts when considering overlapping photographs. This is because of the
large overlap between photos and the concentration of false negatives in thin boundary areas covered from other angles. Sampling redundancy, required for alignment purposes, is thus also very beneficial to the automatic masking process.

8.4 Consolidation and coloring

The generated geometry and color masks were used to create digital 3D models of the 37 statues (see Fig. 9). After cleaning, all models were imported into our system based on forests of octrees, which was used for all the 3D editing and color blending. We use lossless compression when storing our hierarchical database, achieving an average cost of about 38B/sample for per-sample positions, normals, radii, colors, and blending weights (including database overhead). Disk footprints for our multiresolution editable representation are thus similar to storing single-resolution uncompressed data. We compared the performance of our system to the state-of-the-art streaming color blender [Pintus et al. 2011a]. Our pipeline required a total of 23 minutes for blending the Guerriero3 statue; as already mentioned, the pipeline works directly on the editable representation of the model and includes color correction for flash illumination. The streaming color blender, on the other hand, required 2.5 minutes for pre-computing the Morton-ordered sample stream and the culling hierarchy, and 26 minutes for color blending. Therefore, the increased flexibility of our system introduces neither additional processing time nor additional temporary storage – all while supporting fast turnaround times during iterative editing sessions. Flash color correction proved adequate in our evaluation. It produces visually appealing results without unwanted color variation and/or visible seams between acquisitions (see Fig. 7 for an example).
It is important to note that, while no traces of painting are currently visible on the statues, including the natural color considerably adds to the realism of the reconstruction, as demonstrated in Fig. 12.

Fig. 12. Effect of color mapping. From left to right: original photograph (boxer 16); virtual reconstruction without color; virtual reconstruction with color.

9. CONCLUSION

We have presented an approach for improving the digitization of the shape and color of 3D artworks in a cluttered environment using 3D laser scanning and flash photography. The method was evaluated in the context of a real-world large-scale digital reconstruction project concerning 37 statues mounted on supports. It proved capable of notably reducing both on-site acquisition times and off-site editing times, while producing good-quality results. Our method is of general applicability and is able to handle the difficult case of on-site acquisition in a cluttered environment, as exemplified by the problem of acquiring models of statue fragments held in place by exostructures. The technique is based on the standard combination of laser scanning and flash digital photography. One of its main advantages is that it does not need complex illumination setups (just a dark room during color acquisition), thus reducing the time required for color acquisition. Further, it drastically reduces post-processing time compared to current procedures, thanks to semi-automatic masking of color and geometry and scalable color-corrected color blending.
The produced colored 3D models are the starting point for a large number of applications based on visual communication, including passive approaches (e.g., still images, video, computer animations) and interactive ones (e.g., multimedia books, web distribution, interactive navigation). In its current implementation, the method assumes that the statue material is easily separable from the unwanted support structure by analyzing reflectance and color, and thus cannot be successfully applied when the statue and the supporting structure are visually indistinguishable. However, this differentiation is enforced in modern restoration practices. The results presented in this paper rest on the assumption that the statue material is fairly diffuse and homogeneous – a common case for ancient artifacts made of stone. This is not an intrinsic limitation of the method, since it could also be applied, simply by reversing the masks, when the supporting material is diffuse and homogeneous. Regions of contact between the exostructure and the imaged object obviously cannot be recovered, since they are invisible to the imaging devices. This problem is common to all use-cases involving static supports. Since these parts are small and generally uninteresting, infilling techniques are typically used; this issue is, however, orthogonal to this discussion. In the pipeline presented in this work, infilling is performed in a conservative way based on smoothness priors. An interesting avenue of future work would be to efficiently combine the method with more elaborate geometry and color synthesis techniques. Our work aimed at improving the effectiveness of the standard 3D scanning pipeline in the case of cluttered 3D models. This pipeline, combining passive and active acquisition methods, is currently among the most commonly applied in the cultural heritage domain, due to its good reliability in a large variety of settings.
An important avenue of future work is to evaluate whether alternative image-based approaches based on digital photography can effectively be adapted to the same difficult settings, e.g., by incorporating clutter analysis and flash light modeling in dense reconstruction pipelines.
Acknowledgments. The authors are grateful to Marco Minoja, Elena Romoli, and Alessandro Usai of ArcheoCAOR for supporting this project and to Daniela Rovina, Alba Canu, Luisanna Usai and all the personnel of CRCBC for invaluable help during the scanning campaign. We also thank Roberto Combet, Alex Tinti, Marcos Balsa, and Antonio Zorcolo of CRS4 Visual Computing for their contribution to the scanning and post-processing tasks, and our in-house native English speaker Luca Pireddu for his help in revising the manuscript.
Mont'e Scan: Effective shape and color digitization of cluttered 3D artworks • ACM Journal on Computing and Cultural Heritage, Vol. 8, No. 1, Article 4, Publication date: August 2014.
work_tkf3lpfl2bfjhjroxbrc4xisey ----
Comparison of field- and satellite-based vegetation cover estimation methods
Dongwook W. Ko1*, Dasom Kim1, Amartuvshin Narantsetseg2 and Sinkyu Kang3
Journal of Ecology and Environment (2017) 41:5, DOI 10.1186/s41610-016-0022-z. RESEARCH, Open Access.
Abstract
Background: Monitoring terrestrial vegetation cover condition is important to evaluate its current condition and to identify potential vulnerabilities. Due to its simplicity and low cost, the point intercept method has been widely used in evaluating grassland surfaces and quantifying cover conditions. The field-based digital photography method is gaining popularity for cover estimation, as it can reduce field time and enable additional analysis in the future. However, the caveats and uncertainty among field-based vegetation cover estimation methods are not well known, especially across a wide range of cover conditions. We compared cover estimates from point intercept and digital photography methods with varying sampling intensities (25, 49, and 100 points within an image), across 61 transects in typical steppe, forest steppe, and desert steppe in central Mongolia. We classified three photosynthetic groups of cover important to grassland ecosystem functioning: photosynthetic vegetation, non-photosynthetic vegetation, and bare soil. We also acquired the normalized difference vegetation index from satellite imagery for comparison with the field-based cover.
Results: Photosynthetic vegetation estimates by the point intercept method were correlated with the normalized difference vegetation index, with improvement when non-photosynthetic vegetation was combined.
For the digital photography method, photosynthetic and non-photosynthetic vegetation estimates showed no correlation with the normalized difference vegetation index, but the combination of both showed a moderate and significant correlation, which increased slightly with greater sampling intensity.
Conclusions: Results imply that varying greenness plays an important role in classification confusion. We suggest adopting measures to reduce observer bias and to better distinguish greenness levels, in combination with multispectral indices, to improve estimates of dry matter.
Keywords: Point intercept, Digital photography, Land cover estimate, NDVI, Photosynthetic vegetation, Greenness
* Correspondence: dwko@kookmin.ac.kr. 1 Department of Forest Environmental System, Kookmin University, Seoul, Republic of Korea. Full list of author information is available at the end of the article. © The Author(s). 2017 Open Access: this article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).
Background
Defined as "land covered with herbaceous plants with less than 10% trees and shrub cover" (White et al. 2000), grassland is an important ecosystem which dominates much of the global terrestrial surface. Grasslands provide a wide range of ecological services across the world, including critical resources for nomadic livelihoods, biodiversity, carbon storage, water and nutrient cycling, and soil erosion protection (Mosier et al. 1991, White et al. 2000).
Unfortunately, grassland has been subject to large-scale degradation, causing serious ecological and social problems in various geographical regions. In northeast Asia, for example, serious social and ecological damages are caused by intensive yellow dust phenomena, which frequently originate from the degraded grassland in Mongolia (Phadnis and Carmichael 2000, In and Park 2002). In Mongolia, overgrazing and drought conditions have played a significant role in grassland degradation (McCarthy 2001). Such conditions make the grassland ecosystem more vulnerable to dzud, which occurs in a severe winter following a suboptimal growing season, resulting in high levels of livestock mortality and social instability (Fernández-Giménez et al. 2015).
Considering the importance of grassland ecosystems to a wide range of human societies and ecological systems, accurate monitoring of vegetation status in grasslands is important. In representing grassland condition, quantifying land cover type is one of the most widely used indicators, achieved by classifying and proportioning the land cover into several categories such as vegetation, bare ground, and water (Meyer and Turner 1994). One of the most popular field methods to quantify land cover type is the point interception method, due to its simplicity, unbiasedness, and low cost (Canfield 1941, Ramsey 1979, Sutherland 2006). Point interception is an extremely efficient method, but it is known to underestimate cover types with uneven or patchy distribution (Buckland et al. 2007).
Another alternative is the quadrat method, which estimates percent ground cover within a quadrat divided into several cover classes, or cover abundance scores, such as the Braun-Blanquet method (Daubenmire 1959). However, the estimated land cover is often known to depend upon the method utilized, because of sensitivity to plant size, growth form, and crown density (Floyd and Anderson 1987). Analyzing digital images acquired in the field can be advantageous, since the production of permanent images enables the researcher to reanalyze the data later on with more advanced methods and software (Boyd and Svejcar 2005). This method can be particularly helpful since it can drastically reduce time spent in the field and control surveyor bias (Booth et al. 2005). A study based on turf-grass-dominated sites indicated that, compared with line intersect, digital photography analysis was able to generate accurate results in much less time (Richardson et al. 2001). Another study which compared digital photo analysis and the point intercept method also suggested that the results of the two methods were similar when a sufficient number of plots were combined (Booth et al. 2005). Remote sensing technology is highly useful for systematic and long-term vegetation cover monitoring (Iverson et al. 1989, Gemmell 1999, Turner et al. 2007, Yim et al. 2010). Through the use of various vegetation indices (e.g., the normalized difference vegetation index (NDVI)), it can represent vegetation condition by analyzing the spectral characteristics of the grassland (Cui et al. 2011). However, the method still requires ground truth data to validate its results. Moreover, studies have shown that establishing consistent guidelines and understanding the properties of field survey methods is critical in improving the integration of remotely sensed and field-based data (Reinke et al. 2006).
Sampling density appropriate for the spatial scale of interest and the spatial heterogeneity of the target vegetation type are some of the important aspects to consider in deciding which survey method to use. Meanwhile, greenness (the photosynthetically active component) is not the only important factor in grassland ecosystems. Due to the wide range of seasonal conditions of temperature, precipitation, wind, fire, grazing, and human management of rangeland, the amount of dry matter on grasslands can provide valuable information (Bradley and Mustard 2005, Guerschman et al. 2009). For example, carbon and nutrient cycling, surface reflection, soil erosion, land degradation, and phenology assessment can all benefit from information on non-photosynthetic biomass, which makes dry matter estimation a very important factor in evaluating the condition of grassland ecosystems (Byambakhuu et al. 2010, Stoner et al. 2016). In light of the information above, what should researchers expect from a variety of field methods and survey target components in grasslands under various conditions? To answer this question, we compared the performance of three different grassland cover estimation approaches: field point intercept, digital photography analysis with varying sampling densities, and NDVI acquired from a remote sensing platform (MODIS). We compared results for three ecosystem functional components in grassland: photosynthetic vegetation, non-photosynthetic vegetation, and bare soil. We also explored the effect of various grassland conditions by comparing results across typical steppe, desert steppe, and forest steppe in central Mongolia.
Methods
Study area
In this study, sites are distributed across the central part of Mongolia, between the latitudes of 48° 48′ 0.72″ and 45° 41′ 3.12″ N and the longitudes of 96° 50′ 56.88″ and 105° 50′ 45.06″ E, within Tov, Arkhangai, Zavkhan, Bayankhongor, and Ovorkhangai aimag (Fig. 1, Table 1).
The central Mongolian steppe zone can be roughly grouped into typical steppe, forest steppe, desertified steppe, and desert steppe based on botanico-geographical groups (Karamysheva and Khramtsov 1995). In this study, based on the location and environment of the sites, three different steppe types were covered by the site locations: forest steppe, typical steppe, and desert steppe. The central Mongolian steppe is characterized by high elevation (1043–1350 m) and sandy loam with abundant gravel, with a higher number of livestock compared to the eastern Mongolian steppe (Hirobe and Kondo 2012). It covers a range of vegetation characteristics and environmental conditions, dominated by various grasses and sedges (Stipa spp. and Achnatherum spp.), pea-shrubs (Caragana spp.), and sages (Artemisia spp.).
Fig. 1 Map of Mongolia and the location of field survey sites
Field survey
The field data collection survey was conducted from July 16 to 25, 2013. A total of 61 transects across 28 sites were surveyed (Table 1). All sites were chosen so that none was close to a major road, and all were at least several hundred meters away from minor paths. Each site included one to three line transects. Two methods for field survey were adopted: (1) point intercept and (2) digital photography. For both field surveys, one to three 30-m parallel line transects were established, at least 30 m away from each other. Five survey crew members carried out the survey; they were ecologists and botanists with at least graduate-level training. For the point intercept cover estimate, cover type was recorded at points at 1-m intervals (30 points per transect whenever terrain allows). For the digital photography estimate, digital photos of the surface were taken at 3-m intervals (Samsung ES95, 16.1 MP), for a total of ten photographs per transect. Photos were taken at 1.2 m height, with the photograph plane parallel to the ground surface.
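The sampling arithmetic of one transect can be sketched as follows (positions in meters along a hypothetical 30-m tape; this is only an illustration of the layout described above, not survey code from the study):

```python
# One hypothetical 30-m transect: point-intercept readings every 1 m
# and photo stations every 3 m, as described in the text.
intercept_points = list(range(1, 31))    # 1, 2, ..., 30 -> 30 readings
photo_stations = list(range(3, 31, 3))   # 3, 6, ..., 30 -> 10 photographs
print(len(intercept_points), len(photo_stations))  # -> 30 10
```

Across the 61 transects this yields the reported totals of roughly 30 × 61 intercept points and 10 × 61 photographs, minus points lost to terrain.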
Later, digital images were cropped to cover approximately 2 m² (164 × 123 cm) of the surface area, with a final resolution of 2304 × 1728 pixels. A total of 1529 points and 549 digital photographs were collected.
Table 1 Site description and location of plots
Site | Aimag | Number of transects | Latitude | Longitude | Eco-region
TO01 | Tov | 1 | 47° 52' 40" | 105° 50' 45" | Steppe
AR00 | Arkhangai | 2 | 47° 15' 43" | 103° 19' 46" | Forest steppe
OV01 | Arkhangai | 2 | 47° 15' 47" | 103° 33' 47" | Steppe
OV02A | Ovorkhangai | 3 | 46° 15' 17" | 102° 47' 40" | Steppe
OV02B | Ovorkhangai | 3 | 46° 15' 00" | 102° 47' 00" | Steppe
OV03A | Ovorkhangai | 3 | 46° 14' 15" | 102° 49' 06" | Steppe
OV04 | Ovorkhangai | 2 | 45° 48' 05" | 101° 53' 40" | Desert steppe
OV05 | Ovorkhangai | 2 | 45° 41' 42" | 101° 40' 05" | Desert steppe
OV06 | Ovorkhangai | 3 | 45° 41' 03" | 101° 36' 37" | Desert steppe
BA01 | Bayankhongor | 1 | 46° 13' 40" | 100° 36' 37" | Steppe
BA03A | Bayankhongor | 3 | 46° 43' 28" | 99° 49' 21" | Steppe
BA03B | Bayankhongor | 2 | 46° 43' 00" | 99° 49' 00" | Steppe
ZA03 | Zavkhan | 2 | 47° 10' 26" | 97° 16' 59" | Steppe
ZA04 | Zavkhan | 2 | 47° 19' 37" | 96° 58' 20" | Steppe
ZA05 | Zavkhan | 1 | 47° 22' 38" | 96° 55' 37" | Forest steppe
UL01 | Zavkhan | 2 | 47° 43' 03" | 96° 50' 56" | Forest steppe
TE01 | Zavkhan | 3 | 48° 48' 00" | 97° 30' 32" | Forest steppe
AR01A | Arkhangai | 3 | 47° 56' 25" | 100° 38' 29" | Forest steppe
AR01B | Arkhangai | 3 | 47° 56' 00" | 100° 38' 00" | Forest steppe
AR02 | Arkhangai | 2 | 47° 37' 50" | 101° 04' 36" | Forest steppe
AR03A | Arkhangai | 3 | 47° 33' 36" | 101° 00' 10" | Forest steppe
AR03B | Arkhangai | 3 | 47° 33' 00" | 101° 00' 00" | Forest steppe
AR04A | Arkhangai | 1 | 47° 27' 58" | 101° 28' 53" | Forest steppe
AR04B | Arkhangai | 2 | 47° 27' 00" | 101° 29' 00" | Forest steppe
AR06 | Arkhangai | 2 | 47° 29' 27" | 102° 09' 05" | Steppe
AR07 | Arkhangai | 2 | 47° 32' 12" | 102° 14' 27" | Steppe
AR08A | Arkhangai | 3 | 47° 49' 50" | 102° 55' 37" | Steppe
AR08B | Arkhangai | 3 | 47° 49' 00" | 102° 55' 00" | Steppe
For the point intercept cover estimate, six major categories were initially used in the field to record plant functional types as cover types at each point: grass, forb, shrub, litter, bare soil, and rock. For the final analysis, cover types were reclassified into functional groups based on photosynthetic properties, considering their importance in representing a wide range of grassland conditions (sensu Guerschman et al. 2009). For comparing cover estimates among field-based methods, photosynthetic vegetation (PV) cover was estimated by combining grass, forb, and half of shrub, and non-photosynthetic vegetation (NPV) was estimated by combining litter and half of shrub. Shrub cover was equally assigned to PV and NPV, considering that it is a mix of a photosynthetic leaf part and a non-photosynthetic woody part. Bare soil and rock were combined into bare soil (BS). Potential photosynthetic vegetation (PPV) was also calculated as the sum of PV and NPV. Since the remainder of PPV is BS, we only estimated and analyzed the cover of PV, NPV, and PPV based on the total point frequency assigned to the corresponding cover types. For the digital photography cover estimate, photographs were analyzed using the "SamplePoint" software, which assists in classifying individual pixels within a photograph (Booth et al. 2006). We generated regularly distributed crosshairs over each photograph to classify the overlapping single pixels (Fig. 2). To consider the effects of sampling intensity in the digital photograph method, we used a variety of sampling intensities, generating 25, 49, and 100 regularly spaced crosshairs in each photograph (5 × 5, 7 × 7, and 10 × 10 sampling points, hereafter referred to as SP25, SP49, and SP100, respectively).
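The reclassification described above can be summarized in a short sketch (the function name and the example tally are ours, for illustration only): grass and forb count toward PV, litter toward NPV, shrub is split equally between PV and NPV, and bare soil and rock merge into BS.

```python
# Sketch of the PV/NPV/BS/PPV reclassification described in the text.
# Input: raw point tallies for one transect; output: percent cover.
def reclassify(counts):
    total = sum(counts.values())
    pv = counts["grass"] + counts["forb"] + 0.5 * counts["shrub"]   # photosynthetic
    npv = counts["litter"] + 0.5 * counts["shrub"]                  # non-photosynthetic
    bs = counts["bare_soil"] + counts["rock"]                       # bare soil
    return {name: round(100.0 * x / total, 1)
            for name, x in [("PV", pv), ("NPV", npv), ("BS", bs), ("PPV", pv + npv)]}

# 30 intercept points on one hypothetical transect
tally = {"grass": 10, "forb": 2, "shrub": 2, "litter": 6, "bare_soil": 8, "rock": 2}
print(reclassify(tally))  # -> {'PV': 43.3, 'NPV': 23.3, 'BS': 33.3, 'PPV': 66.7}
```

Note that PPV and BS are complementary by construction, which is why the paper reports only PV, NPV, and PPV.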
Each pixel under a crosshair was classified into six cover types following the same classification scheme used for the field point intercept method, and then reclassified into PV, NPV, and BS. PPV was also calculated as the sum of PV and NPV. To ensure consistency in classification, we adopted the following measures: (1) all classifications were made on the same computer and monitor to maintain the visual characteristics of the images, and (2) before the actual classification, observers spent 2 h together training on the same images.
Fig. 2 Example of crosshairs generated by SamplePoint software for estimating cover by field-based digital photography method
Satellite imagery and NDVI estimates
In arid or semi-arid regions, the normalized difference vegetation index (NDVI) is often used for the estimation of green vegetation cover (Pickup et al. 1993, Chen et al. 2006) and productivity (Chen et al. 2004, Wang et al. 2004). NDVI is based on the spectral properties generated by the photosynthetic process: it compares the ratio between visible red light, which is strongly absorbed, and near-infrared light, which is strongly reflected by green vegetation. A variety of satellite platforms provide the spectral information needed to calculate NDVI. In this study, we used MODIS (moderate resolution imaging spectroradiometer) products for their reliability of image acquisition in Mongolia, especially considering the non-optimal sky conditions that are frequent during the growing season (Jang et al. 2010). Specifically, the 16-day composite products (MOD13Q1, 250 m resolution) based on MODIS Level-2G (daily) surface reflectance data, with acquisition dates of July 12–27, 2013, were acquired for the NDVI value. NDVI from the corresponding pixel(s) for the study sites was extracted from the images and then compared with the point intercept and digital photography cover estimates.
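The index itself is simple. MOD13Q1 ships NDVI precomputed, but for illustration it can be derived from red and near-infrared reflectance (the reflectance values below are made up):

```python
import numpy as np

# NDVI = (NIR - Red) / (NIR + Red): green vegetation absorbs red light and
# reflects near-infrared, pushing the index toward 1; bare soil stays low.
def ndvi(red, nir):
    red = np.asarray(red, dtype=float)
    nir = np.asarray(nir, dtype=float)
    return (nir - red) / (nir + red)

# Hypothetical reflectances: a sparse desert-steppe pixel vs. a greener
# forest-steppe pixel.
print(np.round(ndvi([0.12, 0.04], [0.30, 0.45]), 2))  # -> [0.43 0.84]
```

The study's observed range of 0.14–0.50 sits toward the lower half of this scale, consistent with semi-arid steppe.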
For comparing NDVI and the field-based surveys, the field cover estimates of PV, NPV, and PPV were used to evaluate how different surface components, especially dry matter, influenced the fit against the spectral properties of NDVI (Asner 1998, Booth et al. 2006).
Statistical analysis
Cover estimates from the point intercept and digital photography methods with varying intensity (SP25, SP49, and SP100) were compared by calculating summary statistics (mean, median, minimum, maximum, range, standard deviation (SD), and coefficient of variation (CV)) and conducting correlation analysis. Analysis of variance and Tukey's HSD tests were also conducted to analyze differences between the cover estimates of each method. For further insight, field estimates were also grouped by steppe type for comparison. NDVI estimates were compared among steppe types by calculating summary statistics, analysis of variance, and Tukey's HSD test. For comparison of field-based estimates against MODIS-NDVI, correlation analysis was conducted. All statistical analyses were conducted with R (version 3.2.1).
Results and discussion
Comparison of field-based cover estimates: point intercept and digital photography
Both mean and median estimates of PV, NPV, and BS did not differ among field-based methods, even when steppe types were considered (α = 0.01, Fig. 3, Table 2). Considering PV estimates, differences were only noticeable in desert steppe, but not significant (α = 0.01, Table 2, Fig. 3a). The median PV cover estimate in desert steppe was 17.8, 29.6, 26.1, and 23.9% for point intercept, SP25, SP49, and SP100, respectively, and the range was smaller in the point intercept method (15.0%) compared to the digital photography methods (32.2–33.3%) (Table 2). This difference likely resulted from the sparse vegetation conditions in desert steppe, which can penalize cover estimates based on smaller sampling density (Milberg et al. 2008). This is a common issue, particularly for estimating the abundance of rare components in environments with a high level of spatial variability (Bergstedt et al. 2009, Burg et al. 2015). Overall, the digital photography method seems to retain its consistency regardless of sampling density across most statistics in PV cover estimates.
Fig. 3 Box plots of a–c PV cover estimates in each steppe type based on field-based methods, and d–f PV, NPV, and bare soil cover estimates of all sites combined. Thick black line within the rectangle indicates median, rectangles are the interquartile range (IQR), and whiskers indicate 1.5 times IQR
Table 2 Summary statistics of PV, NPV, and BS cover estimates by steppe type and field-based method
Cover type | Steppe type | Method | Mean | Median | Min | Max | Range | SD | CV
PV | Typical steppe | PI | 35.0 | 35.6 | 11.1 | 63.3 | 52.2 | 15.4 | 0.440
PV | Typical steppe | SP25 | 34.4 | 34.6 | 9.5 | 58.0 | 48.5 | 13.5 | 0.392
PV | Typical steppe | SP49 | 33.9 | 31.2 | 7.3 | 58.6 | 51.3 | 13.6 | 0.401
PV | Typical steppe | SP100 | 33.4 | 30.6 | 7.7 | 57.3 | 49.6 | 13.7 | 0.410
PV | Forest steppe | PI | 40.3 | 43.3 | 15.6 | 73.3 | 57.8 | 18.1 | 0.449
PV | Forest steppe | SP25 | 39.9 | 41.6 | 21.6 | 56.8 | 35.2 | 10.1 | 0.253
PV | Forest steppe | SP49 | 38.5 | 40.0 | 18.4 | 57.6 | 39.2 | 11.6 | 0.301
PV | Forest steppe | SP100 | 37.5 | 37.7 | 18.1 | 62.9 | 44.8 | 12.9 | 0.344
PV | Desert steppe | PI | 22.0 | 17.8 | 16.7 | 31.7 | 15.0 | 8.4 | 0.382
PV | Desert steppe | SP25 | 36.6 | 29.6 | 23.5 | 56.8 | 33.3 | 17.7 | 0.484
PV | Desert steppe | SP49 | 34.7 | 26.1 | 22.5 | 55.5 | 33.0 | 18.1 | 0.522
PV | Desert steppe | SP100 | 34.0 | 23.9 | 22.9 | 55.1 | 32.2 | 18.3 | 0.538
NPV | Typical steppe | PI | 28.5 | 23.4 | 10.6 | 80.0 | 69.4 | 18.8 | 0.660
NPV | Typical steppe | SP25 | 23.9 | 21.0 | 2.3 | 66.8 | 64.5 | 18.2 | 0.759
NPV | Typical steppe | SP49 | 23.7 | 22.8 | 1.7 | 60.8 | 59.1 | 17.1 | 0.723
NPV | Typical steppe | SP100 | 24.5 | 23.1 | 2.0 | 60.9 | 58.9 | 17.2 | 0.705
NPV | Forest steppe | PI | 28.2 | 30.0 | 0.0 | 70.0 | 70.0 | 23.5 | 0.833
NPV | Forest steppe | SP25 | 21.9 | 20.3 | 7.2 | 39.2 | 32.0 | 11.4 | 0.521
NPV | Forest steppe | SP49 | 23.1 | 19.9 | 5.1 | 44.7 | 39.6 | 14.0 | 0.607
NPV | Forest steppe | SP100 | 24.5 | 24.6 | 4.4 | 45.4 | 41.0 | 13.8 | 0.563
NPV | Desert steppe | PI | 12.0 | 14.4 | 6.7 | 15.0 | 8.3 | 4.6 | 0.385
NPV | Desert steppe | SP25 | 5.2 | 4.4 | 4.0 | 7.2 | 3.2 | 1.7 | 0.335
NPV | Desert steppe | SP49 | 5.4 | 4.5 | 2.2 | 9.6 | 7.4 | 3.8 | 0.698
NPV | Desert steppe | SP100 | 6.6 | 3.5 | 2.3 | 14.0 | 11.7 | 6.4 | 0.976
BS | Typical steppe | PI | 36.5 | 34.4 | 3.3 | 60.0 | 56.7 | 16.0 | 0.439
BS | Typical steppe | SP25 | 41.7 | 38.6 | 11.2 | 69.3 | 58.1 | 17.6 | 0.421
BS | Typical steppe | SP49 | 42.3 | 38.9 | 13.5 | 70.0 | 56.5 | 16.7 | 0.394
BS | Typical steppe | SP100 | 42.1 | 39.0 | 14.0 | 69.8 | 55.8 | 16.3 | 0.388
BS | Forest steppe | PI | 31.5 | 27.7 | 8.9 | 80.0 | 71.1 | 19.9 | 0.632
BS | Forest steppe | SP25 | 38.2 | 39.2 | 21.5 | 59.6 | 38.1 | 11.6 | 0.303
BS | Forest steppe | SP49 | 38.4 | 37.3 | 22.1 | 56.7 | 34.6 | 11.0 | 0.286
BS | Forest steppe | SP100 | 38.0 | 35.1 | 23.1 | 54.6 | 31.5 | 10.6 | 0.280
BS | Desert steppe | PI | 65.9 | 67.8 | 53.3 | 76.6 | 23.3 | 11.8 | 0.179
BS | Desert steppe | SP25 | 58.2 | 63.2 | 38.8 | 72.5 | 33.7 | 17.4 | 0.299
BS | Desert steppe | SP49 | 59.9 | 64.3 | 42.3 | 73.0 | 30.7 | 15.8 | 0.264
BS | Desert steppe | SP100 | 59.4 | 62.1 | 42.6 | 73.6 | 31.0 | 15.7 | 0.264
Fig. 4 Scatterplot matrix and correlation coefficients of estimated covers between cover types (a–c PV, NPV, and PPV) and field-based methods. Lower left panels show scatterplots, and upper right panels show correlation coefficients and significance levels (*p = 0.01; **p < 0.001; ***p = 0)
Mean NPV cover estimates did not differ significantly among field-based methods (α = 0.01, Table 2, Fig. 3b). While most NPV statistics were similar to each other in typical steppe and forest steppe, the wider range of the point intercept method in forest-steppe sites with extremely patchy or heterogeneous patterns of cover conditions is noticeable. For example, site UL01 had very
In classifying BS, most estimates showed high agreement, and mean cover estimates were not significantly different from each other (α = 0.01, Table 2, Fig. 3c). The point intercept method appears to have had a relatively high level of confusion in forest-steppe sites, as suggested by its large range (71%), which occurred in sites with very low or very high levels of NPV and finely textured vegetation (UL01 and AR01A).

When all sites were combined and compared, mean cover estimates of PV, NPV, and BS did not show significant differences among methods (Fig. 3d–f). Ranges from the point intercept method were relatively larger than those from the digital photography methods, suggesting that digital photography methods may ensure better consistency in cover estimates. A number of studies have pointed out the issue of over- or underestimation of grassland cover from field methods (Dethier et al. 1993, Fensholt et al. 2004). Dethier et al. (1993) noted that the point intercept method tends to overestimate cover compared to photo analysis, being inherently constrained by the smaller number of sample points in the field. However, this did not consistently apply to PV or NPV estimates in our study in any of the steppe types. Moreover, PV estimates from the point intercept method in desert-steppe sites were slightly higher than estimates from the digital photography methods (Fig. 3).

Correlation analysis of PV, NPV, and PPV cover estimates showed that the digital photography methods were highly and significantly correlated with each other (Fig. 4a–c). However, PV estimates from the point intercept method were not correlated with estimates from any of the digital photography methods (Fig. 4a). Interestingly, NPV estimates from the point intercept method were significantly, although moderately, correlated with estimates from all digital photography methods (Fig. 4b).
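The pairwise comparisons behind Fig. 4 are ordinary Pearson correlations between per-site cover estimates from each pair of methods. A minimal sketch with invented numbers (the study's actual per-site data sit behind Table 2 and are not reproduced here):

```python
import numpy as np
from itertools import combinations

# Hypothetical per-site PV cover estimates (%) for the four methods.
pv = {
    "PI":    np.array([35.0, 12.0, 48.0, 22.5, 61.0, 30.0]),
    "SP25":  np.array([33.0, 15.5, 44.0, 25.0, 58.0, 28.5]),
    "SP49":  np.array([32.0, 14.0, 45.5, 24.0, 59.0, 27.0]),
    "SP100": np.array([31.5, 14.5, 45.0, 23.5, 58.5, 27.5]),
}

def pearson_r(x, y):
    # Pearson correlation coefficient between two estimate series.
    return float(np.corrcoef(x, y)[0, 1])

# One coefficient per method pair, as in the upper triangle of Fig. 4.
pairs = {(a, b): pearson_r(pv[a], pv[b]) for a, b in combinations(pv, 2)}
```

With real data, the interesting outcome reported above is which of these coefficients are significant, not merely their magnitude.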
These results suggest that, compared to NPV classification, there is a higher level of disagreement in PV classification between the point intercept and digital photography methods. This mismatch was dramatically reduced when the correlation of PPV among field methods is considered, since estimates of PPV cover from all field methods showed a significantly high level of correlation (Fig. 4c). Such confusion in grassland classification may be due to the leaf angle distribution of dominant grass and sedge species, accumulating dry matter, or the overwhelming bare soil in the background (Beck et al. 1990, Guerschman et al. 2009).

Comparison of field-based cover estimates and NDVI

NDVI estimates ranged from 0.14 to 0.50, with most sites showing NDVI values between 0.2 and 0.4 (Table 3). Forest- and typical-steppe NDVI estimates were significantly higher than the desert-steppe NDVI (p < 0.01, Table 3). Correlation analysis results indicate that PV estimates from the point intercept method had a moderate and significant correlation with NDVI (Fig. 4a). However, none of the PV estimates from the digital photography methods were correlated with NDVI, and none of the field-based NPV estimates were correlated with NDVI (Fig. 4b). When cover estimates of PPV (the combination of PV and NPV) were considered, all methods had moderate and significant correlations with NDVI (p < 0.01, Fig. 4c). In addition, for the point intercept method, the correlation of PPV estimates with NDVI slightly improved compared to that of the PV estimates alone (Fig. 4a, c). These results suggest that part of the NDVI-related spectral signal can be traced back to the PPV component identified in the field-based methods. The correlations between field methods and NDVI offer some important insights.
The general consensus among observers was that classifications made with the point intercept method in the field were subject to greater confusion because of the various field conditions experienced, such as time of day, weather, livestock trampling and droppings, and observer condition. However, the results show that the detailed classification of the digital photography method may have underestimated the PV component by limiting it to distinctively green vegetation while excluding partly green vegetation, and its NPV class included vegetation with a wide range of greenness, from slightly desiccated to very dry plant material. In contrast, it is possible that the point intercept method included widely varying green components, which is likely the reason why its PV estimate alone had a significant and moderately high correlation with NDVI.

Table 3 Summary statistics and comparisons of NDVI for each steppe type (α = 0.01)

Steppe type     Mean      Median  Min   Max   Range  SD    CV
Typical steppe  0.33 (a)  0.30    0.24  0.50  0.26   0.08  0.24
Forest steppe   0.35 (a)  0.34    0.30  0.44  0.14   0.05  0.14
Desert steppe   0.15 (b)  0.15    0.14  0.16  0.02   0.01  0.07

Theoretically, NDVI is intended to represent the photosynthetically active component in the image by exploiting the differential absorption of the red and near-infrared spectrum by vegetation (Beck et al. 1990). However, grassland ecosystems are subject to widely varying conditions of greenness as influenced by phenology and inter- and intra-annual variability (Bradley and Mustard 2005). Therefore, identifying the varying condition and abundance of green and dry matter is very important for evaluating the functional condition of grasslands, which often cannot be decisively classified as green or dry. For this reason, researchers suggest the use of the cellulose absorption index (CAI) in addition to vegetation indices (e.g., NDVI) to improve and enrich how surface vegetation components are represented (Guerschman et al. 2009).
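NDVI itself is the standard normalized band ratio, NDVI = (NIR − Red)/(NIR + Red): strong red absorption by green canopies pushes the index toward 1, while bare soil reflects the two bands more equally. A small sketch with illustrative (not measured) reflectance values:

```python
import numpy as np

def ndvi(nir, red, eps=1e-12):
    """Normalized difference vegetation index from near-infrared and red
    reflectance; bounded in [-1, 1]. eps guards against division by zero."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + eps)

# Dense green vegetation: high NIR, low red -> high NDVI.
green = ndvi(0.50, 0.08)
# Bare soil: the two bands are much closer -> NDVI near zero.
soil = ndvi(0.30, 0.25)
```

The same function applies pixel-wise to whole arrays, which is how a 250 m MODIS composite value ends up integrating every surface component, green or dry, inside the pixel.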
Our study confirms that the point intercept method in the field alone can be prone to errors, since PV and NPV could not be easily distinguished and varying levels of greenness were likely classified as PV. Therefore, evaluating how accurately any of the field methods can distinguish dry matter is a subject for further study, utilizing multispectral images for comparison with the estimated CAI and, ideally, a field method that can differentiate more detailed grades of greenness.

Moreover, the scale and resolution of the data can be important factors when characterizing surface properties (Turner et al. 1989). Although we carefully considered homogeneity of surface properties as a site selection criterion, a single pixel of MODIS imagery (250 m resolution) covered a spatial extent significantly larger than the field survey samples, so greater heterogeneity was represented in the NDVI than we expected. This probably contributed to the weaker fit with the field-based cover estimates (Moody and Woodcock 1995). Time scale can also be important, considering the short growing season and the rapidly shifting phenology of Mongolian grasslands (Boone et al. 2005). Since the 16-day composite NDVI was selected for this study, such a time frame may include time-driven phenological differences that may have influenced the results.

Another important factor that may have confounded the results is observer bias. Two types of observer bias are acknowledged: (1) bias caused by less-trained observers, which can amplify within-observer variation, and (2) bias caused by inter-observer differences (Dethier et al. 1993, Bergstedt et al. 2009). For the field-based point intercept method in particular, we noted that mis-recognition could be a significant factor, even with training sessions before each survey. This is especially true for spatially heterogeneous surfaces or for sites with very sparse green vegetation against an extensive bare soil background during mid-day.
Ultimately, bias control is another factor that a researcher must carefully weigh among the trade-offs between survey methods, in terms of time and labor cost, scale dependency, and vegetation characteristics. We suggest several measures to control observer bias: field training and consistency checks among observers before the actual survey begins and, for digital photography methods, ensuring the quality and consistency of color and texture representation across visual devices (identical graphics cards and monitors, with matched color specifications and resolution), co-training sessions for observers to ensure classification consistency, and diversified greenness levels for classification.

In our study, we suspect that the mismatch between the point intercept and digital photography methods is a result of how the varying range of greenness was treated, as the PPV of both methods showed a meaningful relationship with NDVI. Aside from this issue, it is notable that greater sampling density in the digital photography methods gave PPV estimates a slightly better fit against NDVI (Fig. 4c). The fits of SP100 and SP49 were slightly better than that of SP25, and there was no difference between SP100 and SP49. Considering the trade-offs between effort and performance, we propose that SP49 has a potential advantage among all the methods, provided the confusion over greenness is alleviated, because of the additional virtues of the digital photography method: quicker application in the field compared to the point intercept method, the possibility of re-analysis in the future when the need arises, and control over observer bias (Floyd and Anderson 1987, Boyd and Svejcar 2005, Booth et al. 2006). Our study suggests the potential of the digital photography method for estimating vegetation cover.
The digital photography method showed potential for application to other terrestrial ecosystems, such as forests or wetlands, and for evaluating rapid changes due to disturbances such as drought, grazing, or fire. This is especially relevant with the advent of unmanned aerial vehicle (UAV)-based digital photography for surveying and monitoring terrestrial ecosystems, which can further reduce field cost and time (Rango et al. 2009, Cunliffe et al. 2016). While there are numerous sophisticated methods to analyze the massive amounts of images acquired from UAVs (Hervouet et al. 2011, Bollard-Breen et al. 2015), simple analyses of UAV-acquired images that share common ground with traditional field survey methods, such as the point intercept method, prove useful in reconstructing and evaluating past and future landscape change.

Conclusions

Our study showed that when PV and NPV were combined, estimates from the point intercept method (simultaneous field data collection and classification) and the digital photography method (photos taken in the field and classified later in the lab) showed moderate agreement with the satellite-derived NDVI (R2 = 0.43 to 0.48, p < 0.01). The point intercept method was more inclusive of a wider range of greenness than the digital photography method. Greater sampling density in the digital photography method slightly increased its agreement with NDVI; therefore, considering the effort required, we suggest that 49 points over 2 m2 is sufficient. Our study confirms the merits of the point intercept method, namely its simplicity and low cost, but also suggests the potential of the digital photography method because of the possibility of future re-analysis. We suggest that incorporating a more explicit classification scheme to differentiate greenness may improve cover estimation results for both the point intercept and digital photography methods.
Abbreviations
BS: Bare soil; CAI: Cellulose absorption index; MODIS: Moderate resolution imaging spectroradiometer; NDVI: Normalized difference vegetation index; NPV: Non-photosynthetic vegetation; PPV: Potential photosynthetic vegetation; PV: Photosynthetic vegetation; UAV: Unmanned aerial vehicle

Acknowledgements
This work was supported by research grants from the Korea Forest Service (S121414L090110) and the National Research Foundation of Korea (NRF-201100009423). The authors wish to thank Dowon Lee, Reverend Sungil, Jaebum Kim, and Wanhyuk Park for their assistance in the field survey and laboratory work. The field survey was completed under the permission of the Mongolian Academy of Sciences.

Funding
This work was supported by research grants from the Korea Forest Service (S121414L090110) and the National Research Foundation of Korea (NRF-201100009423). The fund from the Korea Forest Service supported the remote sensing data analysis and manuscript preparation. The fund from the National Research Foundation of Korea supported the field data collection and initial analysis.

Availability of data and materials
The data that support the findings of this study are available from the corresponding author (DWK) upon reasonable request. The data are not publicly available because they contain sensitive surface information about the study area.

Authors' contributions
DWK conceived of the study, led its design and coordination, drafted the manuscript, collected and analyzed data, and prepared the results. DK participated in data collection, classification, and remote sensing analysis. AN designed and guided the field survey and analysis methods. SK took part in the analysis and in drafting the discussion. All authors read and approved the final manuscript.

Competing interests
The authors declare that they have no competing interests.

Consent for publication
Not applicable.

Ethics approval and consent to participate
Not applicable.
Author details
1 Department of Forest Environmental System, Kookmin University, Seoul, Republic of Korea. 2 Institute of General and Experimental Biology, Mongolian Academy of Sciences, Ulaanbaatar, Mongolia. 3 Department of Environmental Science, Kangwon National University, Kangwon, Republic of Korea.

Received: 7 October 2016 Accepted: 6 December 2016

References
Asner, G. P. (1998). Biophysical and biochemical sources of variability in canopy reflectance. Remote Sensing of Environment, 64, 234–253.
Beck, L. R., Hutchinson, C. F., & Zauderer, J. (1990). A comparison of greenness measures in two semi-arid grasslands. Climatic Change, 17, 287–303.
Bergstedt, J., Westerberg, L., & Milberg, P. (2009). In the eye of the beholder: bias and stochastic variation in cover estimates. Plant Ecology, 204, 271–283.
Bollard-Breen, B., Brooks, J. D., Jones, M. R. L., Robertson, J., Betschart, S., Kung, O., Craig Cary, S., Lee, C. K., & Pointing, S. B. (2015). Application of an unmanned aerial vehicle in spatial mapping of terrestrial biology and human disturbance in the McMurdo Dry Valleys, East Antarctica. Polar Biology, 38, 573–578.
Boone, R. B., BurnSilver, S. B., Thornton, P. K., Worden, J. S., & Galvin, K. A. (2005). Quantifying declines in livestock due to land subdivision. Rangeland Ecology & Management, 58, 523–532.
Booth, D. T., Cox, S. E., Fifield, C., Phillips, M., & Williamson, N. (2005). Image analysis compared with other methods for measuring ground cover. Arid Land Research and Management, 19, 91–100.
Booth, D. T., Cox, S. E., & Berryman, R. D. (2006). Point sampling digital imagery with "SamplePoint". Environmental Monitoring and Assessment, 123, 97–108.
Boyd, C. S., & Svejcar, T. J. (2005). A visual obstruction technique for photo monitoring of willow clumps. Rangeland Ecology & Management, 58, 434–438.
Bradley, B. A., & Mustard, J. F. (2005).
Identifying land cover variability distinct from land cover change: cheatgrass in the Great Basin. Remote Sensing of Environment, 94, 204–213.
Buckland, S. T., Borchers, D. L., Johnston, A., Henrys, P. A., & Marques, T. A. (2007). Line transect methods for plant surveys. Biometrics, 63, 989–998.
Burg, S., Rixen, C., Stöckli, V., & Wipf, S. (2015). Observation bias and its causes in botanical surveys on high-alpine summits. Journal of Vegetation Science, 26, 191–200.
Byambakhuu, I., Sugita, M., & Matsushima, D. (2010). Spectral unmixing model to assess land cover fractions in Mongolian steppe regions. Remote Sensing of Environment, 114, 2361–2372.
Canfield, R. H. (1941). Application of the line interception method in sampling range vegetation. Journal of Forestry, 39, 388–394.
Chen, Z. M., Babiker, I. S., Chen, Z. X., Komaki, K., Mohamed, M. A. A., & Kato, K. (2004). Estimation of interannual variation in productivity of global vegetation using NDVI data. International Journal of Remote Sensing, 25, 3139–3159.
Chen, X.-L., Zhao, H.-M., Li, P.-X., & Yin, Z.-Y. (2006). Remote sensing image-based analysis of the relationship between urban heat island and land use/cover changes. Remote Sensing of Environment, 104, 133–146.
Cui, G., Lee, W.-K., Kwak, D.-A., Choi, S., Park, T., & Lee, J. (2011). Desertification monitoring by LANDSAT TM satellite imagery. Forest Science and Technology, 7, 110–116.
Cunliffe, A. M., Brazier, R. E., & Anderson, K. (2016). Ultra-fine grain landscape-scale quantification of dryland vegetation structure with drone-acquired structure-from-motion photogrammetry. Remote Sensing of Environment, 183, 129–143.
Daubenmire, R. (1959). A canopy-coverage method of vegetational analysis. Northwest Science, 33, 43–64.
Dethier, M. N., Graham, E. S., Cohen, S., & Tear, L. M. (1993). Visual versus random-point percent cover estimations: "objective" is not always better. Marine Ecology Progress Series, 96, 93–100.
Fensholt, R., Sandholt, I., & Rasmussen, M. S. (2004). Evaluation of MODIS LAI, fAPAR and the relation between fAPAR and NDVI in a semi-arid environment using in situ measurements. Remote Sensing of Environment, 91, 490–507.
Fernández-Giménez, M. E., Batkhishig, B., Batbuyan, B., & Ulambayar, T. (2015). Lessons from the dzud: community-based rangeland management increases the adaptive capacity of Mongolian herders to winter disasters. World Development, 68, 48–65.
Floyd, D. A., & Anderson, J. E. (1987). A comparison of three methods for estimating plant cover. Journal of Ecology, 75, 221–228.
Gemmell, F. (1999). Estimating conifer forest cover with Thematic Mapper data using reflectance model inversion and two spectral indices in a site with variable background characteristics. Remote Sensing of Environment, 69, 105–121.
Guerschman, J. P., Hill, M. J., Renzullo, L. J., Barrett, D. J., Marks, A. S., & Botha, E. J. (2009). Estimating fractional cover of photosynthetic vegetation, non-photosynthetic vegetation and bare soil in the Australian tropical savanna region upscaling the EO-1 Hyperion and MODIS sensors. Remote Sensing of Environment, 113, 928–945.
Hervouet, A., Dunford, R., Piégay, H., Belletti, B., & Trémélo, M.-L. (2011). Analysis of post-flood recruitment patterns in braided-channel rivers at multiple scales based on an image series collected by unmanned aerial vehicles, ultra-light aerial vehicles, and satellites. GIScience & Remote Sensing, 48, 50–73.
Hirobe, M., & Kondo, J. (2012). Effects of climate and grazing on surface soil in grassland. In N. Yamamura, N. Fujita, & A. Maekawa (Eds.), The Mongolian Ecosystem Network: Environmental Issues Under Climate and Social Changes (pp. 105–114). Japan: Springer.
In, H.-J., & Park, S.-U. (2002). A simulation of long-range transport of Yellow Sand observed in April 1998 in Korea. Atmospheric Environment, 36, 4173–4187.
Iverson, L. R., Cook, E. A., & Graham, R. L. (1989).
A technique for extrapolating and validating forest cover across large regions: calibrating AVHRR data with TM data. International Journal of Remote Sensing, 10, 1805–1812.
Jang, K., Kang, S., Kim, J., Lee, C. B., Kim, T., Kim, J., Hirata, R., & Saigusa, N. (2010). Mapping evapotranspiration using MODIS and MM5 four-dimensional data assimilation. Remote Sensing of Environment, 114, 657–673.
Karamysheva, Z. V., & Khramtsov, V. N. (1995). The steppes of Mongolia. Braun-Blanquetia, 17, 5–79.
McCarthy, J. J. (2001). Climate change 2001: impacts, adaptation, and vulnerability: contribution of Working Group II to the Third Assessment Report of the Intergovernmental Panel on Climate Change. Cambridge: Cambridge University Press.
Meyer, W. B., & Turner II, B. L. (1994). Changes in land use and land cover: a global perspective. Cambridge: Cambridge University Press.
Milberg, P., Bergstedt, J., Fridman, J., Odell, G., & Westerberg, L. (2008). Observer bias and random variation in vegetation monitoring data. Journal of Vegetation Science, 19, 633–644.
Moody, A., & Woodcock, C. E. (1995). The influence of scale and the spatial characteristics of landscapes on land-cover mapping using remote sensing. Landscape Ecology, 10, 363–379.
Mosier, A., Schimel, D., Valentine, D., Bronson, K., & Parton, W. (1991). Methane and nitrous oxide fluxes in native, fertilized and cultivated grasslands. Nature, 350, 330–332.
Phadnis, M. J., & Carmichael, G. R. (2000). Numerical investigation of the influence of mineral dust on the tropospheric chemistry of East Asia. Journal of Atmospheric Chemistry, 36, 285–323.
Pickup, G., Chewings, V. H., & Nelson, D. J. (1993). Estimating changes in vegetation cover over time in arid rangelands using Landsat MSS data. Remote Sensing of Environment, 43, 243–263.
Ramsey, F. L. (1979). Parametric models for line transect surveys. Biometrika, 66, 505–512.
Rango, A., Laliberte, A., Herrick, J. E., Winters, C., Havstad, K., Steele, C., & Browning, D. (2009).
Unmanned aerial vehicle-based remote sensing for rangeland assessment, monitoring, and management. Journal of Applied Remote Sensing, 3, 33542.
Reinke, K., & Jones, S. (2006). Integrating vegetation field surveys with remotely sensed data. Ecological Management and Restoration, 7, S18–S23.
Richardson, M. D., Karcher, D. E., & Purcell, L. C. (2001). Quantifying turfgrass cover using digital image analysis. Crop Science, 41, 1884–1888.
Stoner, D. C., Sexton, J. O., Nagol, J., Bernales, H. H., & Edwards, T. C. (2016). Ungulate reproductive parameters track satellite observations of plant phenology across latitude and climatological regimes. PloS One, 11, e0148780.
Sutherland, W. J. (2006). Ecological census techniques: a handbook. Cambridge: Cambridge University Press.
Turner, M. G., Dale, V. H., & Gardner, R. H. (1989). Predicting across scales: theory development and testing. Landscape Ecology, 3, 245–252.
Turner, B. L., Lambin, E. F., & Reenberg, A. (2007). The emergence of land change science for global environmental change and sustainability. Proceedings of the National Academy of Sciences, 104, 20666–20671.
Wang, J., Rich, P. M., Price, K. P., & Kettle, W. D. (2004). Relations between NDVI and tree productivity in the central Great Plains. International Journal of Remote Sensing, 25, 3127–3138.
White, R. P., Murray, S., Rohweder, M., Prince, S. D., & Thompson, K. M. (2000). Grassland ecosystems. Washington DC: World Resources Institute.
Yim, J., Kleinn, C., Cho, H., & Shin, M. (2010). Integration of digital satellite data and forest inventory data for forest cover mapping in Korea. Forest Science and Technology, 6, 87–96.
work_tkzof5otyfb7lkvacv6fd22pz4 ---- My New Toy

It started out as a garden project. After many years, we still did not have a good idea of what the sun patterns were during the day. Our lot contains a few large oak trees and quite a few loblolly pines that provide a good deal of shade for the house, but they also make the search for the site of the perfect tomato garden very difficult. So the researcher in me was able to convince the gadgeteer in me to get a digital camera. Although it was supposedly only intended to record sun patterns, I researched the camera on the Internet, looking at everything from submegapixel toys to the monster Nikons. I settled on a Kodak DC4800 because the price after a rebate was better than the competition and because it is possible to select one of three f-stops (2.8, 5.6, and 8) with a knob atop the camera. Now, please understand, as much as I love optics, I am an amateur photographer, who does not salivate at the sight of a Canon Rebel or a Nikon. At heart, I am a graphic designer, who wants to capture images any way he can. When the camera arrived I immediately went about recording 11 different views of our yard (it's over an acre) each Saturday at 11 a.m., 2 p.m., and 5 p.m. Because the camera came with a 16 MB Compact Flash card and because I tended to take some shots of the current blooms, the record of each foray had to be off-loaded to the computer before the next set of shots because of insufficient memory.
As the weeks progressed, this limitation began to irritate me, so I wandered back onto the Net to see what was available. In the end I found a 128 MB Flash card at a reasonable price. It was a revelation. Having 128 MB memory in the camera is essentially equivalent to being given an endless supply of film. The camera can take more than 250 frames before you have to download the images to the computer. All of a sudden the camera was no longer a horticultural tool, but my graphic toy. At this point, I began to approach any shot like a fashion photographer, shooting the same scene at the three f-stops and several different angles. Although I now had an abundance of images, they were quickly sorted through and some gems were plucked from my graphic scavenging. At this time, two events were put on our family calendar. My eldest daughter and youngest son were getting married within two weeks of each other. And I had this long roll of electronic film. The first wedding took place in Atlanta on the Memorial Day weekend and I was ready for it. Altogether, I took 250 frames during the rehearsal, the wedding, the reception, and the post-reception gathering back at the house. Then there was the question: what to do with all this imagery? For some time I have had a page on a Macintosh web site and posted some of our travel photos there for others in the family. Although I have access to Web sites that would permit me to load my own web pages and the software with which to create them, I have neither the time nor the inclination to do so. The Mac site provides a number of generic layouts, including one of a photo album. You just load the images onto the site and provide captions for the pictures. So, on the Sunday after the wedding and all day Memorial Day I cropped, rebalanced, brightened, and changed contrast on 50 images with Photoshop. (One drawback to the Kodak DC4800 is that its flash has no preflash feature, so I had a fair number of "red eyes" to fix.)
The results can be seen at http://homepage.mac.com/donoshea/. The second wedding was celebrated two weeks later in Pittsburgh. By now, I was a digital photography veteran. With the help of a new Titanium Powerbook I managed to take and off-load over 399 frames. Because the trip back to Atlanta included a little decompression time in the form of a drive along the Blue Ridge Parkway, I wasn't able to sort and crop the frames until later in the week. So it took a little longer to get the pictures on the Web. This time I managed to reduce the number of images to about 100. The results can also be seen on the same Web site. My current project is to print copies of the images on the Web site for an album for each of the couples. Because the resolution for the Web pages is only 72 dpi, while high-quality prints require 1200 dpi, all of the cropping and contrast changes that I did for the Web projects must be done again. Is it worth it? Well, members of our families who could not be at the weddings have been able to see pictures of the events. When we had a reception in Atlanta for my son and his new wife a week after the wedding, I assembled a slide show that we showed on my wife's iMac during the festivities.
Editorial 1430 Optical Engineering, Vol. 40 No. 8, August 2001
Finally, I have been able to rescue pictures that I took with the digital camera that would have been unusable had it been a film camera and I were dependent on the technology of strangers. I certainly am enjoying the challenge and the opportunities that my new toy has presented me. There are some who bemoan the arrival of all the gadgets in this brave new digital world. But I'm not one of them. Although I have resisted the call of the pager and cell phone and I still haven't a DVD player, I am not a technosnob.
For the present, I can lead my life without these devices. But my recent excursions into digital photography are another matter. They have helped my wife and me to maintain contact with our extended family at a very special time. They have also enabled us to connect with the newly added members of our family. And most of this is related through these images, their capture, and their distribution to this wonderful field in which we work . . . and play.

Donald C. O'Shea
Editor
http://europepmc.org/abstract/MED/ work_tmhpimnq2zemtgrw7nrvxvoice ---- Measurement of Fracture Parameters for a Mixed-Mode Crack Driven by Stress Waves using Image Correlation Technique and High-Speed Digital Photography Measurement of Fracture Parameters for a Mixed-Mode Crack Driven by Stress Waves using Image Correlation Technique and High-Speed Digital Photography M. S. Kirugulige* and H. V. Tippur † *The Goodyear Tire and Rubber Co., 142 Goodyear Blvd, Akron, OH 44305, USA † Department of Mechanical Engineering, Auburn University, Auburn, AL 36849, USA ABSTRACT: Measurement of fracture parameters for a rapidly growing crack in syntactic foam sheets using image correlation technique and high-speed photography is presented. The perfor- mance of a rotating mirror-type multi-channel high-speed digital camera to measure transient deformations is assessed by conducting benchmark tests on image intensity variability, rigid trans- lation and rigid rotation. Edge-cracked foam samples are subjected to eccentric impact loading relative to the initial crack plane to produce mixed-mode loading conditions in a three-point bend configuration. High-speed photography is used to record decorated random speckles in the vicinity of the crack tip at a rate of 200 000 frames per second. Two sets of images are recorded, the first set before impact and the second after impact. Using image correlation methodology, crack-tip displacement field histories and dominant strains from the time of impact up to complete fracture are mapped. Over-deterministic least-squares analyses of crack-tip radial and tangential displace- ments are used to obtain mixed-mode fracture parameters. The measurements are compared with complementary finite element results. The fracture parameters determined from radial displace- ments seem more robust even when fewer number of higher order terms in the crack-tip asymptotic expansion are used. 
KEY WORDS: DIC, dynamic crack growth, high-speed imaging, mixed-mode SIF, optical metrology, stress wave loading

Introduction

The mixed-mode dynamic fracture behaviour of syntactic foams is examined in this study. Syntactic foams are lightweight structural materials manufactured by dispersing prefabricated hollow microballoons in a matrix. These materials also display superior thermal, dielectric, fire-resistant and hygroscopic properties, and sometimes radar or sonar transparency. They can also be tailored to suit a particular application by selecting microballoons made of glass, carbon or polymer to be used with different matrix materials (metal, polymer or ceramic). Although syntactic foams were initially developed for deep-sea applications, in recent years they have found a variety of applications: buoyancy modules for boat hulls, parts of helicopters and airplanes, structural components of antenna assemblies, thermal insulators in the oil and gas industries and core materials in impact-resistant sandwich structures, to name a few. The ability to absorb impact energy and vibrations also makes them ideal candidates for packaging applications and protective enclosures. Although more suitable for compression-dominated applications, structural components made of syntactic foams often undergo combined loading, including shock, resulting in tensile or shear failures. Hence, studying the mixed-mode fracture response under stress wave-dominant conditions is critical for this material. Choices of experimental techniques for measuring real-time surface deformations/stresses in a dynamic failure event such as fracture initiation and propagation are somewhat limited. Dynamic photoelasticity [1–3], coherent gradient sensing [4–7] and moiré interferometry [8] have emerged over the years as full-field techniques suitable for investigating highly transient events such as crack initiation and propagation in solids.
Interferometric techniques, however, involve elaborate surface preparation [transferring gratings in the case of moiré interferometry, preparing a specularly reflective surface in the case of coherent gradient sensing (CGS), birefringent coatings in reflection photoelasticity, etc.]. For cellular materials (syntactic foams, polymer and metal foams, cellulosic materials, etc.) such surface preparations are challenging and in some cases may not be feasible. In those instances, digital image correlation can be a useful tool because of its relative simplicity: it requires only that the surface be decorated with random speckles using alternate mists of black and white paint. Recent advances in image processing and ubiquitous computational capabilities have made it possible to apply this technique to a variety of engineering applications, including mixed-mode crack growth studies under static loading conditions [9, 10]. With the advent of high-speed digital cameras in recent years, recording rates as high as several million frames per second at relatively high spatial resolution have become possible. This has made image correlation techniques feasible for estimating surface displacements and strains to assess fracture/damage parameters. In this study, the digital image correlation technique is extended for the first time to mixed-mode dynamic fracture studies for estimating stress intensity factors (SIF) of stationary and propagating cracks in edge-cracked syntactic foam sheets subjected to stress wave loading. A rotating mirror-type high-speed digital camera is used to record random speckles in the vicinity of the crack tip. The entire crack-tip deformation and dominant strain history from the time of impact to complete fracture is mapped.

© 2008 The Authors. Journal compilation © 2008 Blackwell Publishing Ltd | Strain (2009) 45, 108–122
Over-deterministic least-squares analyses of crack-tip displacement fields are performed to obtain dynamic SIF histories for both pre- and post-crack-initiation periods. The SIF histories obtained from the image correlation method are compared with those obtained from finite element computations.

The Approach

In this study, random speckle patterns on the specimen surface were monitored during a dynamic mixed-mode fracture event. Two sets of these patterns, the first set before and the second set after deformation, were acquired, digitised and stored. Then a sub-image in an undeformed image was chosen and its location in the corresponding deformed image was sought (see Figure 1). Once the location was identified, the local displacements were quantified. A three-step approach implemented in a MATLAB™ [11] environment was used to estimate in-plane surface displacements and strains. Only a brief description of the steps is provided here; details can be found in Refs [12, 13]. In the first step, a 2D cross-correlation coefficient was computed to obtain initial estimates of full-field planar displacements. The peak of the correlation function was detected to sub-pixel accuracy (1/16th of a pixel) using bicubic interpolation. This process was repeated over the entire image to obtain full-field in-plane displacements. Further details about this method can be found in Ref. [14]. In the second step, an iterative technique based on nonlinear least-squares minimisation was implemented to estimate displacements and their gradients, using the displacements obtained in the first step as initial guesses. The Newton–Raphson method, with a line search and the BFGS (Broyden–Fletcher–Goldfarb–Shanno) algorithm to update an inverse Hessian matrix, was employed. Such an approach was first demonstrated by Sutton et al. [15] to measure displacements from speckle images.
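The first step above can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' MATLAB code: the image, subset and search-window sizes are synthetic, and a 3-point parabolic refinement stands in for the bicubic interpolation (to 1/16 pixel) used in the paper.

```python
import numpy as np

def ncc_surface(ref, search):
    """Normalized cross-correlation of a reference subset against every
    integer shift inside a larger search window (brute force)."""
    rh, rw = ref.shape
    sh, sw = search.shape
    out = np.empty((sh - rh + 1, sw - rw + 1))
    rz = ref - ref.mean()
    rn = np.sqrt((rz**2).sum())
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            win = search[i:i + rh, j:j + rw]
            wz = win - win.mean()
            denom = rn * np.sqrt((wz**2).sum())
            out[i, j] = (rz * wz).sum() / denom if denom > 0 else 0.0
    return out

def subpixel_peak(c):
    """Integer peak location plus a parabolic sub-pixel refinement
    (the paper uses bicubic interpolation to 1/16 pixel instead)."""
    i, j = np.unravel_index(np.argmax(c), c.shape)
    di = dj = 0.0
    if 0 < i < c.shape[0] - 1:
        denom = c[i - 1, j] - 2 * c[i, j] + c[i + 1, j]
        if denom != 0:
            di = 0.5 * (c[i - 1, j] - c[i + 1, j]) / denom
    if 0 < j < c.shape[1] - 1:
        denom = c[i, j - 1] - 2 * c[i, j] + c[i, j + 1]
        if denom != 0:
            dj = 0.5 * (c[i, j - 1] - c[i, j + 1]) / denom
    return i + di, j + dj

# Synthetic check: locate a 26 x 26 speckle subset (the subset size used in
# the experiments) inside a search window where it sits at offset (3, 1).
rng = np.random.default_rng(0)
img = rng.random((64, 64))
ref = img[20:46, 20:46]
search = img[17:52, 19:56]
pi, pj = subpixel_peak(ncc_surface(ref, search))
print(round(pi), round(pj))   # -> 3 1
```

In the real analysis this search is repeated for every subset of the undeformed image, producing the full-field displacement estimates that seed the second, nonlinear least-squares step.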
The displacement gradients obtained during the second step represent average values for each subset and tend to be noisy. Therefore, in the third step, smoothing was necessary to obtain a continuous displacement field (u, v) and then extract strain values. It should be noted that crack-opening displacements are discontinuous across the crack. Generic smoothing methods smooth displacements across the crack faces, and hence the interpretation of deformations near the crack tip tends to be inaccurate. Therefore, a smoothing method that allows discontinuity of crack-opening displacements across the crack faces was introduced: a regularised restoration filter [16] with a second-order fit was employed for this purpose. Details are again omitted here for brevity and can be found in Ref. [12].

Figure 1: Undeformed and deformed sub-images chosen from images before and after deformation, respectively

Experimental Setup

A schematic of the experimental setup used in this study is shown in Figure 2. The setup included an Instron-Dynatup 9250-HV (Instron, Norwood, MA, USA) drop tower for impact loading the specimen and a Cordin 550 ultrahigh-speed digital camera (Cordin Scientific Imaging, Salt Lake City, UT, USA) for recording images in real time. The drop tower had an instrumented tup for recording the impact force history and a pair of anvils for recording support reaction histories separately. The setup also had a delay/pulse generator to produce a trigger pulse when the impactor tup contacted the specimen. As all images were recorded during the dynamic event, lasting over a hundred microseconds, the setup used two high-energy flash lamps, triggered by the camera, to illuminate the specimen.
The setup also utilised two computers, one to record the tup force and anvil reaction histories (5 MHz acquisition rate) and the other to record the images.

Camera Performance Evaluation

High-speed digital recording devices are broadly classified into two types based on the sensors they use: Complementary Metal Oxide Semiconductor (CMOS) and Charge-Coupled Device (CCD) sensor-based digital cameras. The former can record images at moderate rates (typically less than 10 000 frames per second at full resolution), whereas the latter can reach rates of 100 million frames per second. These ultrahigh-speed CCD cameras contain multiple sensors triggered electronically to achieve higher framing rates. They can be further classified into two types based on how the individual CCD sensors receive light from the objective lens. The first type uses stationary optical elements (beam splitters, lenses, etc.) with photomultiplier tubes (PMT) to amplify the light signal. In the second, a high-speed rotating mirror distributes light to the individual sensors by sweeping the image over them. It should be noted that both of the above types of CCD camera introduce geometric distortions/misalignments between successive images. The accuracy and repeatability of displacements measured from them using the image correlation approach outlined earlier are directly affected by these optical misalignments. In the intensified CCD-type camera with PMT devices, image distortions arise for two reasons: (i) the different optical paths in the image formation process and (ii) the random noise introduced when photons are multiplied by the image intensifiers and fibre-optic bundles. In the rotating mirror-type camera system, the distortions are limited to the different optical paths of the individual CCD sensors. In view of this, and despite relatively lower recording rates compared with the former type, good accuracy and repeatability are possible.
Details about distortions in high-speed cameras and their corrections as related to image correlation are discussed in Ref. [17]. The high-speed camera system adopted in this study uses a combination of CCD-based imaging technology and a high-speed rotating mirror optical system. It is capable of imaging rates of up to 2 million frames per second at a resolution of 1000 × 1000 pixels per image. It has 32 independent CCD image sensors positioned radially around a rotating mirror which sweeps light over these sensors (Figure 3).

Figure 2: Schematic of the dynamic experimental setup

Each sensor is illuminated by a separate optical relay. Thus, small misalignments and variations in light intensity and optical focus between images are unavoidable. Hence, meaningful results pertaining to small deformations cannot be obtained by correlating images from two successive/different CCD sensors. However, the above artefacts are negligible, if not entirely absent, between two images captured by the same CCD sensor at two different time instants. This makes the digital image correlation method viable for quantifying surface deformations, and the following approach was accordingly adopted in this study. Prior to impact-loading the specimen, a set of 32 images of the surface speckles was recorded at the desired framing rate (200 000 frames per second in this study). While keeping all the camera settings (CCD gain, flash lamp duration, framing rate, trigger delay, etc.) the same, the next set of images, this time triggered by the impact event, was captured. Thereby each image in the deformed set had a corresponding image in the undeformed set. That is, if an image in the deformed set was recorded, for example, by sensor no. 10, then the image recorded by the same sensor (no.
10) in the undeformed set was chosen for performing the image correlation operations. By adopting this method, the optical path was maintained the same for the two images under consideration, and the only remaining source of error was CCD noise [in the range of 4 to 6 grey levels in an 8-bit (256 grey levels) intensity image, discussed next]. It is worth noting that, to obtain meaningful results, it is essential that extraneous camera movements do not occur while recording a set of images or during the time interval between the two sets of images. This was achieved rather easily by triggering the camera electronically and anchoring the camera mechanically. In view of the above-mentioned distortions/misalignments, it was important to assess the camera's ability to measure transient deformations in a dynamic test. Therefore, a few benchmark tests – image intensity variability, translation and rotation tests – were first conducted.

Image intensity variability test

In this study, 8-bit (0 to 255 levels) grey-scale images were captured and analysed to estimate CCD noise levels. The noise in an acquired image depended on the CCD gain, which can be pre-set on a 0–1000 scale. For all the experiments reported in this paper, the gain was set in the range of 500 to 550; values above 700 were avoided because they saturated a few pixels in the acquired images. For evaluation purposes, two sets of 32 images were acquired at framing rates of 200 000 and 50 000 frames per second in total darkness (with the lens covered to prevent light transmission into the camera cavity). All the images in these two sets had pixel grey-scale values in the range 0 to 8. Hence, the lower 3 bits of an 8-bit image represent CCD noise, and the intensity represented by the remaining 5 bits can be faithfully measured.
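The dark-frame bookkeeping above can be illustrated with a short sketch. The frames here are simulated stand-ins (smaller than the real 1000 × 1000 sensors, with uniformly random 0–7 grey levels), used only to show how the noise floor translates into "noisy" versus "usable" bits; they are not the authors' data.

```python
import numpy as np

# Simulated dark frames: 32 sensors, grey levels 0-7 (the paper reports 0-8).
rng = np.random.default_rng(1)
dark = rng.integers(0, 8, size=(32, 256, 256))

mean_per_sensor = dark.mean(axis=(1, 2))   # per-sensor noise mean (cf. Figure 4)
std_per_sensor = dark.std(axis=(1, 2))     # per-sensor noise spread (cf. Figure 4)

# Bits drowned by noise vs. bits of faithfully measurable intensity.
noisy_bits = int(np.ceil(np.log2(dark.max() + 1)))
usable_bits = 8 - noisy_bits
print(noisy_bits, usable_bits)   # -> 3 5
```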
Figure 4 shows the means and standard deviations of the intensity values of all pixels (1 million pixels in a 1000 × 1000 pixel image) of the images captured in darkness. It can be seen from this figure that all the images have mean intensity values in the range 6 to 8 with a very narrow spread (standard deviation in the range 2 to 4). As mentioned earlier, in the current study transient deformations were estimated by performing image correlation between two images acquired from the same CCD sensor, one before and another after impact. Therefore, it is important to know the intensity variations between two images taken by the same CCD sensor at different instants of time. To this end, five sets of 32 images of a stationary sample, decorated with a random speckle pattern, were acquired at 200 000 frames per second. The grey-scale values at a few randomly chosen pixels were stored (the same set of pixels was chosen from all the images), and the intensity value at a particular pixel from all five images acquired by the same CCD sensor was examined. This is listed in Table 1 for all 32 CCD sensors. As expected, a significant difference in the intensity value at a pixel exists between images acquired by different CCD sensors. More importantly, however, only a very small variation exists in the grey-scale value at that pixel for images acquired by the same CCD sensor. The standard deviations are in the range of 2 to 6 grey levels for most sensors (this is in the same range as the mean values observed for the images recorded in total darkness; see Figure 4).

Figure 3: Optical schematic of the Cordin-550 camera: M1, M2, M3, M4, M5 are mirrors; R1 and R2 are relay lenses; r1, r2, …, r32 are relay lenses for the CCDs; c1, c2, …, c32 are CCD sensors
Figure 4: Mean and standard deviations of intensity values of images recorded in total darkness. Images were recorded at 50 000 frames per second in experiment 1 and at 200 000 frames per second in experiment 2

Table 1: Grey-scale values at a particular pixel in five repeated sets of images of a speckle pattern acquired at 200 000 frames per second

Camera no.  Set 1  Set 2  Set 3  Set 4  Set 5   Mean    SD
 0          121    116    123    120    122     120.4   2.70
 1          106    102    104    108    102     104.4   2.61
 2          119    120    125    116    120     120     3.24
 3           93     95    111    109    102     102     8.06
 4          122    125    126    116    120     121.8   4.02
 5           97    102    106    105    105     103     3.67
 6          106     97    108    112    103     105.2   5.63
 7           79     80     74     82     74      77.8   3.63
 8           84     81     84     89     84      84.4   2.88
 9          123    118    128    129    129     125.4   4.83
10          111    105    110    114    116     111.2   4.21
11          118    112    111    110    117     113.6   3.65
12           82     88     76     82     82      82     4.24
13          117    115    115    114    117     115.6   1.34
14           88     98     93     87     93      91.8   4.44
15           94     96     93     92     91      93.2   1.92
16           77     73     73     77     72      74.4   2.41
17           63     60     59     65     63      62     2.45
18           97     93     92     93     98      94.6   2.70
19           76     66     71     72     72      71.4   3.58
20           69     70     60     71     67      67.4   4.39
21           87     86     93     95     85      89.2   4.49
22          114    109    113    115    110     112.2   2.59
23           82     80     79     82     76      79.8   2.49
24           92     89     93     96     84      90.8   4.55
25          124    119    120    130    122     123     4.36
26          122    120    126    117    123     121.6   3.36
27           78     83     86     79     78      80.8   3.56
28           76     74     73     77     70      74     2.74
29           95     93     92     91     96      93.4   2.07
30           73     71     70     71     69      70.8   1.48
31           95     91     95     96     88      93     3.39

Note the consistency in the grey-scale values for each camera no. (each row). Between different camera numbers, however, grey-scale variations are anticipated.

This demonstrates that between an undeformed and a deformed image recorded during an actual experiment, there would be no light intensity variation apart from random CCD noise. This is a subtle point but an important aspect of the high-speed camera system used here, which makes it possible to perform image correlation between two
images acquired from the same CCD sensor and estimate displacements. In Figure 5, a typical histogram of an image (camera no. 09) is shown to demonstrate the quality of the speckles at the CCD gain settings used during the experiments. As discussed earlier in this section, the lighting and CCD gains were adjusted so that the intensity levels were in the mid-range of grey scales between 0 and 255. The approximately normal distribution of intensity in the histogram shows the quality of the random speckles in these images. (The smaller peak in the histogram near zero grey level is due to the crack, where the pixels are relatively dark.)

Translation test

In these experiments, a specimen decorated with a random black/white (b/w) speckle pattern was mounted on a 3D translation stage. A series of known displacements was imposed in the X- and Y-directions separately and images captured. The means and standard deviations of the displacement fields were computed and compared with the applied values. Moreover, an out-of-plane (Z-direction) displacement of 30 μm¹ was applied to the sample and a set of images captured. Specific details about this test, as well as a discussion of the results, are reported in Ref. [12] and are omitted here for brevity. The results obtained experimentally match the imposed values very well. It is instructive to study the in-plane strain fields estimated from the measured displacements in these translation tests. To this end, the displacement field was smoothed by the restoration method [12] and strains were obtained by numerical differentiation. The means and standard deviations of the εxx and εyy strains thus obtained are presented in Table 2 for the two tests, corresponding to X- and Y-translations of 60 ± 2 and 300 ± 2 μm.
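Why a pure translation yields non-zero strain readings can be seen in a small sketch: numerical differentiation amplifies the noise in the measured displacements. All numbers below are illustrative (a 32 × 32 grid, ±1 μm displacement noise, a grid spacing of one 26-pixel subset at 31 μm per pixel), not the authors' data; without the smoothing step the scatter is even larger than the ~300 με reported in Table 2.

```python
import numpy as np

rng = np.random.default_rng(2)
spacing_um = 26 * 31.0   # illustrative grid pitch: 26 px at 31 um per pixel

# A 60 um rigid X-translation measured with +/-1 um random error.
u = np.full((32, 32), 60.0) + rng.normal(0.0, 1.0, (32, 32))

# Strain by numerical differentiation (du/dx); ideally zero everywhere.
exx = np.gradient(u, spacing_um, axis=1)

print(abs(exx.mean()) < 50e-6)   # mean strain is near zero...
print(exx.std() > 100e-6)        # ...but the point-wise scatter is large
```

Smoothing the displacement field before differentiating, as done in the paper with the restoration filter, is what brings this scatter down to the levels reported in Table 2.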
The applied displacement being a rigid translation, ideally zero strains are expected in all the images. However, numerical differentiation of noisy displacement data produces non-zero strains. The mean values of the strains thus obtained are in the range of approximately 0 to ±300 με in both experiments, and the standard deviations of the strains are in the range 0 to 300 με for the various individual cameras. Interestingly, the means and standard deviations remain unaffected by the amount of imposed translation. The implication for an actual experiment is that a relatively large rigid-body motion can be accommodated without sacrificing accuracy in the measured displacements and strains.

Rotation test

The objectives of the rotation test were: (a) to estimate the accuracy with which a pure rotation can be measured using this camera system; (b) to compare the performance of the different individual cameras when used to measure the same applied rotation; and (c) to examine whether an applied rigid rotation produces any spurious strains. In the rotation test, a specimen decorated with a random b/w speckle pattern was mounted on a rotation stage. Two sets of 32 images were recorded at 200 000 frames per second, one set before and another after imposing a rotation of 0.32 ± 0.02°. The full-field displacements between these two sets of images were computed. The sub-image size used in the analysis was 30 × 30 pixels, so that displacements were available after computation on a 32 × 32 (= 1024) grid of points. These displacements were smoothed by the restoration method explained in Ref. [12]. The cross derivatives ∂u/∂Y and ∂v/∂X were computed by numerical differentiation of the displacement components, and the rotation ω_{XY} was then computed as

ω_{XY} = (1/2)(∂u/∂Y − ∂v/∂X).   (1)

Figure 6 shows a plot of full-field ω_{XY} from a pair of images.
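Equation (1) can be checked on a synthetic displacement field. The sketch below (illustrative grid and units, not the experimental data) imposes a small rigid rotation equal to the applied 0.32° ≈ 0.0056 rad and recovers it everywhere from the displacement gradients:

```python
import numpy as np

alpha = 0.0056   # applied rotation, rad (0.32 deg)

# Small-angle rigid-rotation displacement field on an illustrative grid.
y, x = np.mgrid[0:32, 0:32] * 1.0
u = -alpha * y   # u(x, y) for a rotation by alpha about the origin
v = alpha * x

# Equation (1): omega_XY = (1/2) (du/dY - dv/dX)
du_dy = np.gradient(u, axis=0)
dv_dx = np.gradient(v, axis=1)
omega_xy = 0.5 * (du_dy - dv_dx)

print(np.allclose(omega_xy, -alpha))   # -> True: |omega| = 0.0056 rad everywhere
```

With this sign convention the recovered field is −α, consistent with the negative values of ω_{XY} plotted in Figure 6; on noisy measured data the same calculation shows the edge effects discussed next.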
The estimated values are close to the applied value of rotation everywhere in the image except near the boundaries, particularly the corners. The deviations at the boundaries are expected, because errors in the derivatives of displacements (strains and rotations) are magnified near the boundaries by the so-called edge effects. The mean and standard deviations of ω_{XY} were then computed for each image (while computing these quantities for the 32 × 32 matrix, three rows and three columns of data points were excluded near the edges of the image because of the errors at these points). Figure 7 shows the means and standard deviations of the rotations and strains from this test. It can be seen from Figure 7A,C that an applied rotation of 0.0056 ± 0.00035 rad is measured by all individual cameras within an error band of ≈0.0005 rad (≈10% error).

Figure 5: Histogram depicting pixel-level intensities (grey levels) of a decorated black/white speckle image acquired by a 1000 × 1000 CCD sensor. The vertical axis denotes the number of pixels

¹This is typically the amount of out-of-plane displacement that occurs in the vicinity of a crack tip in an experiment conducted in this work. For example, in Ref. [14] one can see roughly 7–9 fringes near the crack tip over a distance of ≈10 mm. Since these fringes represent surface slopes and the resolution of the CGS set-up is ≈0.015°/fringe, one can estimate the out-of-plane displacement around the crack tip to be ≈23 μm.

Table 2: Mean and standard deviations of in-plane strain fields estimated from measured displacements in translation tests
A rigid rotation imposed on the sample should not produce any strains; consequently, zero strains are expected from this test. The mean values of the strain fields obtained are within 120 με and the standard deviations are up to 300 με.

            Xtrans = Ytrans = 60 ± 2 μm          Xtrans = Ytrans = 300 ± 2 μm
Camera      εxx (με)        εyy (με)             εxx (με)        εyy (με)
no.         Mean    SD      Mean    SD           Mean    SD      Mean    SD
 0          -21     203     260     217            1     209       2     229
 1          -65      89     191     259          -35     241      15     245
 2         -200      93     125     200            1     249      79     254
 3         -210     140      30     209           14     341     106     231
 4         -119      70     154     185            2     250     -10     241
 5          -84     103      54     102          -52     168      11     336
 6          -68     147      68     125         -179     159      52     254
 7         -130     186      27     134          -66     196      85     278
 8            0      95     100     157          -20     211      79     196
 9          -44     118      72     154           11     220     117     206
10          -28     112     107     159           -1     215      45     237
11           -2     121      52     204           -2     219      84     214
12          -15     128     125     203          -42     201      -5     215
13           -8     141      37     285          -14     193      79     196
14           11     135      73     211          -51     236       4     202
15           11      85      15     315          -17     217     102      59
16           -7     103     169     308          -76     247      51      92
17          -15     113       4     319           -8     239      17     169
18           56     165      60     259            7     222      88     244
19           -8     126     192     215           -3     242     -33     243
20           32     117      89     242          -70     215      22     249
21           15     105     189     148          -17     288      55     235
22           28     150     196     167          -40     246     -34     248
23            5     162     176     142          -25     211      62     254
24         -171     108      42     217           15     224     137     312
25         -153     100      42     167           43     231      -5     287
26          -94     133      26     218          -59     230       4     188
27         -161     124     -24     197          -19     219      23     321
28         -134      92     162     127           11     237      93     249
29         -175      84     133     211          -16     203     -69     167
30          -93     149     -17     192          -16     195      72     220
31          -10     124     255     214           -3     201      54     222

Figure 6: Estimated in-plane rotation ω_{XY} from a pair of the images recorded by camera no. 1

Mixed-Mode Dynamic Fracture Experiment

Sample preparation

Edge-cracked syntactic foam samples were prepared for conducting the mixed-mode dynamic fracture experiments.
These samples were processed by mixing 25% (by volume) of hollow microballoons into a low-viscosity epoxy matrix. A room-temperature-curing epoxy [a bisphenol-A resin and an amine-based hardener, supplied by Buehler Corp. as 'Epo-Thin' (Buehler, Lake Bluff, IL, USA), mixed in the ratio 100:38] was used for sample preparation. The microballoons used in this study were commercially available hollow glass spheres (supplied by 3M Corp.) of mean diameter ≈60 μm and wall thickness ≈600 nm. The elastic modulus and Poisson's ratio of the cured material (measured ultrasonically) were 3.02 ± 0.1 GPa and 0.34 ± 0.01, respectively [18]. Before casting the epoxy resin-hardener mixture, a sharp razor blade was inserted into the mould; after the sample was cured and removed from the mould, an edge 'crack' was left behind in the specimen [19]. Finally, the specimen was machined into a beam of height 50 mm with a crack of 10 mm length (a/W = 0.2), as shown in Figure 8A. Subsequently, a random speckle pattern was created on the specimen surface by spraying it with black and white paints.

Experimental procedure

As the fracture event to be captured is stress wave-dominated, the total recording duration is relatively short, and hence the high-speed camera was synchronised with the impact. The sequence of events in a typical experiment was as follows. The specimen was initially rested on the two instrumented supports/anvils. The camera was focused on a 31 × 31 mm² region of the sample in the vicinity of the crack tip (see Figure 8A). A set of 32 pictures of the stationary sample was recorded at 200 000 frames per second and stored.
Then an impactor was launched at a velocity of 4.0 m s⁻¹ towards the sample. As soon as the impactor contacted an adhesively backed copper tape affixed to the top of the specimen, a trigger signal was generated by a pulse/delay generator and fed into the camera. The camera then sent a separate trigger signal to a pair of high-intensity flash lamps situated symmetrically with respect to the optical axis of the camera. A time delay was pre-set in the camera to capture images 85 μs after the initial impact/contact. This delay provided sufficient time for the high-intensity flash lamps to ramp up to their full intensity levels and provide constant illumination during the recording period. As the measurable deformations around the crack tip during the first 85 μs were relatively small, there was no significant loss of information because of this delay. A total of 32 images was recorded at 5 μs intervals between images, for a total duration of 160 μs. Once the experiment was complete, the recorded images were stored in the computer.

Figure 7: Results from rotation test (applied rotation = 0.0056 ± 0.00035 rad). (A) Mean and (C) standard deviation of the rotation field estimated from image correlation; (B) mean and (D) standard deviation of the estimated in-plane strains (ideally these strains should be zero)
Just before the impact occurred, the velocity of the tup was recorded by the Instron-Dynatup drop-tower system (Instron, Norwood, MA, USA). Tup force and support reaction histories were also recorded; these are shown in Figure 8B. In this plot, multiple contacts between the tup and specimen can be inferred from the multiple peaks. Crack initiation in this experiment occurred at about 175 μs, and the crack traversed the specimen width in the next 60–70 μs. Therefore, only the first peak of the impact force history is of relevance here. As the left support was closer to the impact point than the right, the impact force record starts earlier for the former. Moreover, it should be noted that the anvils register a noticeable force only after 220 μs, by which time the crack has propagated through half the sample width. Thus, the reactions from the two anvils play no role in the fracture of the sample up to this point. Accordingly, the sample was modelled as a free-free beam in the finite element simulations.

Finite element simulations

Elasto-dynamic finite element simulations of the current problem were carried out up to crack initiation under plane stress conditions. The finite element mesh used is shown in Figure 8C, along with the force boundary conditions at the impact point. Experimentally determined material properties (elastic modulus = 3.1 GPa, Poisson's ratio = 0.34 and mass density = 870 kg m⁻³) were used as inputs for the analysis. The numerical model was loaded using the force history recorded by the instrumented tup.
[Before applying the force boundary conditions, the tup force history was interpolated and smoothed for two reasons: (i) the time step of the tup force history measurement was larger than the one used in the simulations and (ii) the force history recorded by the tup contained experimental noise. Therefore, smoothed cubic splines were fitted to the data before applying it to the model.] The implicit time integration scheme of the Newmark β method (integration parameters β = 0.25 and γ = 0.5, with 0.5% damping) was adopted. The details of the finite element analysis can be found elsewhere [19]. The simulation results were used to obtain instantaneous SIFs up to crack initiation. The mode I and mode II SIFs were calculated by regression analyses of crack-opening and crack-sliding displacements, respectively.

Figure 8: (A) Specimen configuration for the mixed-mode dynamic fracture experiment; (B) impactor force and support reaction histories recorded by the Instron-Dynatup 9250 HV drop tower and (C) finite element mesh used for the elasto-dynamic simulations

Results

From each experiment 64 images were available, 32 from the undeformed set and 32 from the deformed set, each with a resolution of 1000 × 1000 pixels. Figure 9 shows four selected speckle images from the deformed set. The time instant at which each image was recorded after impact is indicated below it, and the instantaneous crack tip is denoted by an arrow. The crack length history is plotted in Figure 10. The crack initiates at about 175 μs; upon initiation, it rapidly accelerates and subsequently reaches a nearly steady velocity of ≈270 m s⁻¹. The magnification used in this experiment was such that one pixel corresponded to 31 μm on the specimen surface.
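The Newmark scheme with β = 0.25 and γ = 0.5 is the unconditionally stable average-acceleration variant. As a sketch of its mechanics (reduced to a single undamped degree of freedom, m·a + k·x = 0, rather than the paper's full elasto-dynamic model), one implicit step solves for the new displacement from an effective stiffness, then updates acceleration and velocity:

```python
import numpy as np

def newmark(m, k, x0, v0, dt, nsteps, beta=0.25, gamma=0.5):
    """Implicit Newmark-beta integration of m*a + k*x = 0 (no damping)."""
    x, v = x0, v0
    a = -k * x / m                       # initial acceleration from equilibrium
    keff = m / (beta * dt**2) + k        # effective stiffness, constant here
    for _ in range(nsteps):
        # Solve equilibrium at t_{n+1} for x_{n+1}.
        rhs = m * (x / (beta * dt**2) + v / (beta * dt) + (1/(2*beta) - 1) * a)
        xn = rhs / keff
        # Recover a_{n+1} from the displacement update formula, then v_{n+1}.
        an = (xn - x - dt * v) / (beta * dt**2) - (1/(2*beta) - 1) * a
        vn = v + dt * ((1 - gamma) * a + gamma * an)
        x, v, a = xn, vn, an
    return x

# Oscillator with natural period T = 1; after one full period it should
# return (almost exactly) to its initial displacement.
m, k = 1.0, (2 * np.pi)**2
x_end = newmark(m, k, x0=1.0, v0=0.0, dt=0.001, nsteps=1000)
print(round(x_end, 3))   # -> 1.0
```

The average-acceleration scheme conserves amplitude and introduces only a tiny period elongation, which is why it is a common default for structural dynamics; the paper's simulations add 0.5% damping on top of this.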
A sub-image size of 26 × 26 pixels was chosen for the image correlation. The in-plane displacements were estimated for all 32 image pairs and were resolved to an accuracy of 2–6% of a pixel (or 0.6–1.8 μm).

Figure 9: Acquired speckle images of a 31 × 31 mm² region near a mixed-mode crack at various time instants (t = 170, 190, 215 and 240 μs). The current crack tip location is shown by an arrow.

Figure 10: Crack growth history in the syntactic foam sample under mixed-mode dynamic loading; the slope indicates a crack speed c ≈ 270 m s⁻¹.

The crack-opening displacement, v, and sliding displacement, u, for two sample images (one before crack initiation and the other after) are shown in Figure 11. Figure 11A,C shows the v- and u-displacements at 150 μs and Figure 11B,D shows the same displacement components, respectively, at 220 μs after impact. A significant amount of rigid-body displacement can be seen in the u-field in the direction of impact (Figure 11C,D).

Extraction of SIFs

Both crack-opening and sliding displacement fields were used to extract dynamic SIFs in the current study.
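The displacement estimation underlying these numbers is a sub-image matching problem. The sketch below is an illustrative stand-in for the image correlation step, not the authors' software: it locates a 26 × 26 pixel sub-image of the reference image in the deformed image by normalized cross-correlation, to integer-pixel accuracy. Production DIC codes add sub-pixel interpolation to reach the 2–6% of a pixel resolution quoted above.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two equally sized patches."""
    a = a - a.mean()
    b = b - b.mean()
    d = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / d) if d > 0 else 0.0

def match_subimage(ref, deformed, top, left, size, search=5):
    """Best integer shift (dy, dx) of the size x size sub-image at (top, left)."""
    tmpl = ref[top:top + size, left:left + size]
    best, best_shift = -2.0, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if (y < 0 or x < 0
                    or y + size > deformed.shape[0]
                    or x + size > deformed.shape[1]):
                continue
            score = ncc(tmpl, deformed[y:y + size, x:x + size])
            if score > best:
                best, best_shift = score, (dy, dx)
    return best_shift
```

Repeating this match over a grid of sub-images yields the full-field u- and v-displacement maps shown in Figure 11.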
The asymptotic expressions for a dynamically loaded stationary crack are given by [20]:

$$u_x=\sum_{n=1}^{N}\frac{(K_I)_n}{2\mu}\,\frac{r^{n/2}}{\sqrt{2\pi}}\left\{\kappa\cos\frac{n\theta}{2}-\frac{n}{2}\cos\left(\frac{n}{2}-2\right)\theta+\left[\frac{n}{2}+(-1)^n\right]\cos\frac{n\theta}{2}\right\}+\sum_{n=1}^{N}\frac{(K_{II})_n}{2\mu}\,\frac{r^{n/2}}{\sqrt{2\pi}}\left\{\kappa\sin\frac{n\theta}{2}-\frac{n}{2}\sin\left(\frac{n}{2}-2\right)\theta+\left[\frac{n}{2}-(-1)^n\right]\sin\frac{n\theta}{2}\right\},\tag{2}$$

$$u_y=\sum_{n=1}^{N}\frac{(K_I)_n}{2\mu}\,\frac{r^{n/2}}{\sqrt{2\pi}}\left\{\kappa\sin\frac{n\theta}{2}+\frac{n}{2}\sin\left(\frac{n}{2}-2\right)\theta-\left[\frac{n}{2}+(-1)^n\right]\sin\frac{n\theta}{2}\right\}+\sum_{n=1}^{N}\frac{(K_{II})_n}{2\mu}\,\frac{r^{n/2}}{\sqrt{2\pi}}\left\{-\kappa\cos\frac{n\theta}{2}-\frac{n}{2}\cos\left(\frac{n}{2}-2\right)\theta+\left[\frac{n}{2}-(-1)^n\right]\cos\frac{n\theta}{2}\right\}.\tag{3}$$

In the above equations, $u_x$ (≡ u) and $u_y$ (≡ v) are the crack-sliding and crack-opening displacements, $(r,\theta)$ are crack-tip polar coordinates, and $\kappa=(3-\nu)/(1+\nu)$ for plane stress, where $\mu$ and $\nu$ are the shear modulus and Poisson's ratio, respectively. The coefficients $(K_I)_n$ and $(K_{II})_n$ of the leading terms ($n=1$) are the mode I and mode II dynamic SIFs, respectively. Equations (2) and (3) implicitly assume that inertial effects enter the coefficients while retaining the functional form of the quasi-static crack-tip equations. However, once the crack has initiated, the asymptotic expressions for sliding and opening displacements for a steadily propagating crack (assuming transient effects are negligible) are used [21]:

Figure 11: Crack-opening and sliding displacements (in μm) for pre- and post-crack-initiation time instants: (A) v-displacement (u_Y) and (C) u-displacement (u_X) before crack initiation (at t = 150 μs); (B) v-displacement (u_Y) and (D) u-displacement (u_X) after crack initiation (t = 220 μs). Crack initiation time ≈175 μs. (A large rigid-body displacement can be seen in (C) and (D); u_X and u_Y denote displacements relative to the reference grid.)
$$u_x=\sum_{n=1}^{N}\frac{(K_I)_n B_I(c)}{2\mu}\sqrt{\frac{2}{\pi}}\,(n+1)\left\{r_1^{n/2}\cos\frac{n\theta_1}{2}-h(n)\,r_2^{n/2}\cos\frac{n\theta_2}{2}\right\}+\sum_{n=1}^{N}\frac{(K_{II})_n B_{II}(c)}{2\mu}\sqrt{\frac{2}{\pi}}\,(n+1)\left\{r_1^{n/2}\sin\frac{n\theta_1}{2}-h(-n)\,r_2^{n/2}\sin\frac{n\theta_2}{2}\right\},\tag{4}$$

$$u_y=\sum_{n=1}^{N}\frac{(K_I)_n B_I(c)}{2\mu}\sqrt{\frac{2}{\pi}}\,(n+1)\left\{-\beta_1 r_1^{n/2}\sin\frac{n\theta_1}{2}+\frac{h(n)}{\beta_2}\,r_2^{n/2}\sin\frac{n\theta_2}{2}\right\}+\sum_{n=1}^{N}\frac{(K_{II})_n B_{II}(c)}{2\mu}\sqrt{\frac{2}{\pi}}\,(n+1)\left\{\beta_1 r_1^{n/2}\cos\frac{n\theta_1}{2}+\frac{h(-n)}{\beta_2}\,r_2^{n/2}\cos\frac{n\theta_2}{2}\right\},\tag{5}$$

where

$$r_m=\sqrt{X^2+\beta_m^2 Y^2},\qquad \theta_m=\tan^{-1}\!\left(\frac{\beta_m Y}{X}\right),\quad m=1,2,$$
$$\beta_1=\sqrt{1-\left(\frac{c}{C_L}\right)^2},\qquad \beta_2=\sqrt{1-\left(\frac{c}{C_S}\right)^2},$$
$$C_L=\sqrt{\frac{(\kappa+1)\mu}{(\kappa-1)\rho}},\qquad C_S=\sqrt{\frac{\mu}{\rho}},\qquad \kappa=\frac{3-\nu}{1+\nu}\ \text{for plane stress},$$
$$h(n)=\begin{cases}\dfrac{2\beta_1\beta_2}{1+\beta_2^2} & \text{for odd } n,\\[2mm] \dfrac{1+\beta_2^2}{2} & \text{for even } n,\end{cases}\qquad h(-n)=h(n+1),$$
$$B_I(c)=\frac{1+\beta_2^2}{D},\qquad B_{II}(c)=\frac{2\beta_2}{D},\qquad D=4\beta_1\beta_2-(1+\beta_2^2)^2.\tag{6}$$

Here $(x,y)$ and $(r,\theta)$ are the Cartesian and polar coordinates instantaneously aligned with the current crack tip, respectively (see Figure 11B), $c$ is the crack speed, $C_L$ and $C_S$ are the dilatational and shear wave speeds in the material, and $\mu$ and $\nu$ are the shear modulus and Poisson's ratio, respectively. Again, the coefficients $(K_I)_n$ and $(K_{II})_n$ of the leading terms are the mode I and mode II dynamic SIFs, respectively.

For a mode I problem, $u_y$ (≡ v) is the dominant in-plane displacement and is generally used for extracting the mode I SIF history. However, in a mixed-mode problem, both $u_x$ and $u_y$ displacements are present. The crack-opening displacements ($u_y$) can be viewed as carrying mode I-rich information, whereas the crack-sliding displacements ($u_x$) carry mode II-rich information. Thus, $u_y$ can be used to extract $K_I$ and $u_x$ to extract $K_{II}$ accurately.
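The regression analysis of crack-opening displacements mentioned earlier reduces, for the leading (n = 1) mode I term of the stationary-crack field, to a one-parameter linear least-squares fit. The sketch below is illustrative only: synthetic displacement data with an assumed K_I value and the material constants quoted in the text (E = 3.1 GPa, ν = 0.34); it is not the authors' extraction code.

```python
import math
import random

E, nu = 3.1e9, 0.34
mu = E / (2 * (1 + nu))            # shear modulus
kappa = (3 - nu) / (1 + nu)        # plane stress

def uy_leading(r, th, KI):
    """Leading-term (n = 1) crack-opening displacement u_y(r, theta)."""
    f = (kappa * math.sin(th / 2) + 0.5 * math.sin(th / 2)
         - 0.5 * math.sin(1.5 * th))
    return KI / (2 * mu) * math.sqrt(r / (2 * math.pi)) * f

KI_true = 1.2e6                    # Pa*sqrt(m); assumed value for the demo
random.seed(1)
# ~100-120 data points around the crack tip, as described in the paper
pts = [(random.uniform(2e-3, 12e-3), random.uniform(0.3, 2.8))
       for _ in range(120)]
v = [uy_leading(r, th, KI_true) * (1 + random.gauss(0, 0.02))
     for r, th in pts]             # 2% synthetic measurement noise

# One-parameter least squares: v_i = KI * g_i  =>  KI = sum(g v) / sum(g g)
g = [uy_leading(r, th, 1.0) for r, th in pts]
KI_fit = sum(gi * vi for gi, vi in zip(g, v)) / sum(gi * gi for gi in g)
```

In the actual procedure more terms of the series (and, for a propagating crack, Equations (4) and (5)) enter the fit, turning it into a small over-determined linear system solved in the same least-squares sense.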
Alternatively, either radial (u_r) or tangential (u_θ) displacements (computed by transforming the u_x and u_y data) can be used to extract both K_I and K_II more accurately, instead of using u_x and u_y individually. This has been vividly demonstrated by Yoneyama et al. [22] in a recent article. Accordingly, in this study both radial and tangential displacement components u_r and u_θ were used to extract the K_I and K_II histories. For extracting SIFs from displacement data, the current crack-tip location was identified and the Cartesian and polar coordinate systems (x–y and r–θ) were established. A set of data points (usually 100 to 120) was collected in the region around the crack tip in the domain 0.3

Cauterisation versus fibrin glue for conjunctival autografting in primary pterygium surgery (CAGE CUP): study protocol of a randomised controlled trial. Mladen Lešin, Martina Paradžik, Josipa Marin Lovrić, I. Olujić, Žana Ljubić, Ana Vučinović, K. Bućan, L. Puljak. BMJ Open, vol. 8, 2018. DOI: 10.1136/bmjopen-2017-020714.

Introduction: Pterygium is a non-cancerous growth of the conjunctival tissue over the cornea that may lead to visual impairment in advanced stages, restriction of ocular motility, chronic inflammation and cosmetic concerns. Surgical removal is the treatment of choice, but recurrence of pterygium is a frequent problem.
It has been previously shown that fibrin glue may result in less recurrence and may take less time than sutures for fixing the conjunctival graft in place during pterygium surgery…

Associated interventional clinical trial: Cauterization Versus Fibrin Glue for Conjunctival Autografting in Primary Pterygium Surgery; condition: pterygium of conjunctiva and cornea; University of Split, School of Medicine; May 2018 – December 2021.
GUEST EDITORIAL

Special Section on Internet Imaging

Giordano Beretta, Hewlett-Packard Company, 1501 Page Mill Road, 4U-6, Palo Alto, CA 94304 USA. E-mail: beretta@hpl.hp.com
Raimondo Schettini, ITIM CNR, Via Ampere 56, I-20131 Milano, Italy. E-mail: centaura@itim.mi.cnr.it

Internet imaging differs from other forms of electronic imaging in that it employs an internet (network of networks) as a transmission vehicle. However, the internet is only one component (albeit a major one) in the total imaging system. The total system comprises client applications internetworked with server applications, as well as offline authoring tools.
The internet is an evolving communication system. Its functionality, reliability, scaling properties, and performance limits are largely unknown. The transmission of images over the internet pushes the engineering envelope more than most applications. Consequently, the issues we are interested in exploring pertain to all aspects of the total system, not just images or imaging algorithms.

This emphasis on systems is what sets internet imaging apart from other electronic imaging fields. For a local imaging application, even when it is split between a client and a server linked by an Ethernet, a system can be designed by stringing algorithms in a pipeline. If performance is an issue, it is easy to identify the weak link and replace it with a better performing component.

On the internet, the servers are unknown, the clients are unknown, and the network is unknown. The system is not easily predictable, and the result is that the most common problem today is scalability. To be successful one has to follow a top-down design strategy, where the first step is a detailed analysis of the problems to be solved. When a solution is invented, algorithms are selected to produce a balanced system, instead of choosing algorithms of best absolute performance as is done in bottom-up approaches.

The paper on the Visible Human by Figuiredo and Hersch is a good example illustrating these fundamentals. Today, storing a 49-Gbyte 3-dimensional volume is not hard, and a RAID disk array can deliver fast access times. However, storage space and seek time are not the limiting factors for the extraction of ruled surfaces from large 3-dimensional medical images. The problem is one of load balancing, which requires detailed performance measurements for scalability. Eventually, a specialized parallel file striping system must be designed and optimized.
Implementing and maintaining a system that must grow as more data becomes available and as surgeons require new staging techniques for tumors is practical only in a centralized solution served on the internet.

After e-mail, the most popular application on the internet is the World Wide Web, which is a hypertext system and as such is useful only when it can easily be navigated through a visual interface [1], and search results are presented in a context [2], as is illustrated for example by the KartOO search engine. Navigation requires structure [3], and although techniques such as ontologies have been known for years, the particularities of decoupling and splicing ontologies are not yet sufficiently understood [4].

In a recent paper, the Jörgensens have described the challenges of developing an image indexing vocabulary [5], and yet we know that taxonomies are not sufficiently powerful for efficiently finding related information through navigation [6]. Progress in bioinformatics has given us new computational tools that will allow the development of new collaborative structuring methodologies based on ontologies.

Another example of how wrong things can go when the fundamentals of internet imaging are not understood is content-based image retrieval (CBIR) systems. Today they are part of all the major search engines on the internet, and anyone who has tried to use them for real work has experienced how useless they are. Although over the years a number of CBIR algorithms have been proposed, none has stood out as being particularly robust, despite the fact that each claims to perform best on some benchmark.
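To make concrete what such a benchmark has to score, here is a minimal sketch of the kind of low-level matching a typical CBIR engine of this period performed: coarse color histograms compared by histogram intersection. It is illustrative only and not tied to any particular engine discussed here.

```python
# Images are represented here simply as lists of (r, g, b) pixel tuples.

def histogram(pixels, bins=4):
    """Coarse RGB histogram, normalised to sum to 1."""
    h = [0.0] * (bins ** 3)
    step = 256 // bins
    for r, g, b in pixels:
        h[(r // step) * bins * bins + (g // step) * bins + (b // step)] += 1
    n = float(len(pixels))
    return [c / n for c in h]

def intersection(h1, h2):
    """Histogram intersection: 1.0 means identical colour distributions."""
    return sum(min(a, b) for a, b in zip(h1, h2))

def rank(query, collection, bins=4):
    """Return collection indices ordered from most to least similar."""
    hq = histogram(query, bins)
    scored = [(intersection(hq, histogram(img, bins)), i)
              for i, img in enumerate(collection)]
    return [i for _, i in sorted(scored, reverse=True)]
```

A benchmark then reduces to comparing such rankings against human-annotated ground truth, which is exactly the corpus-annotation effort discussed next.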
Unfortunately there is no universally accepted benchmark for CBIR, and the lack of a metric is probably one of the main causes of the poor quality of today's algorithms: without a performance metric it is impossible to diagnose the shortcomings of a particular algorithm and identify the critical control points [7].

An international effort is underway to create a benchmark for CBIR [8], similar to what was done in the past in the TREC effort for text retrieval. This requires an extensive collaboration to annotate a sufficiently large image corpus, which establishes the ground truth against which performance can be measured. A tool has recently been developed for this purpose [9].

Journal of Electronic Imaging / October 2002 / Vol. 11(4) / 421

One particularly nasty problem on the internet is that a preponderance of the available images is not normalized towards a standard rendering intent, as is done in conventional stock image collections. In fact, the subtleties of the various references for color encoding in the stages of a distributed workflow are only recently being described and standardized [10].

A correct output-referred color encoding cannot be determined manually in the case of a large image corpus, as is typically encountered in internet imaging. Contrary to silver halide photography, where contemporary films can largely compensate for illumination deviating from the intended illuminant, this is not the case in digital photography. This problem has led to the proposal of a number of automatic white balancing algorithms that compensate for these discrepancies by estimating the illuminant and applying a color appearance transformation. To benchmark these algorithms it is necessary to develop a ground truth for combinations of illuminations and assumed illuminants.
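As a concrete stand-in for the white balancing algorithms mentioned above, here is a sketch of one of the simplest illuminant-estimation schemes, the grey-world assumption: the scene average is taken to be achromatic, and each channel is rescaled so the channel means become equal. It is illustrative only.

```python
def grey_world(pixels):
    """Grey-world white balance.

    pixels: list of (r, g, b) float tuples; returns a corrected copy in
    which the three channel means are equal.
    """
    n = float(len(pixels))
    means = [sum(p[c] for p in pixels) / n for c in range(3)]
    grey = sum(means) / 3.0
    gains = [grey / m if m > 0 else 1.0 for m in means]
    return [tuple(p[c] * gains[c] for c in range(3)) for p in pixels]
```

Benchmarking such an algorithm means comparing the estimated illuminant (the per-channel gains) against the measured scene illuminant in a ground-truth database of the kind described next.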
Tominaga’s paper on a ‘‘Natural image database and its use for scene illumninant estimation’’ describes how such a database is cre- ated and how it is used in practice. Digitalization, compression, and ar- chiving of visual information has be- come popular, inexpensive and straightforward. Yet, the retrieval of this information on the World Wide Web— being highly distributed and minimally indexed—is far from being effective and efficient. A hot research topic is the definition of feasible strategies to mini- mize the semantic gap between the low-level features that can be automati- cally extracted from the visual contents of an image and the human interpreta- tion of such contents. Two different ap- proaches to this problem are described in the last two papers. Lienhart and Hartmann present novel and effective algorithms for clas- sifying images on the web. This type of algorithms will be a key element in the next generation of search engines, which will have to classify the web page media contents automatically. Ex- periments and results are reported and discussed about distinguishing photo- like images from graphical images, ac- tual photos from only photo-like, but ar- tificial images and presentation slides/ scientific posters from comics. The paper ‘‘Multimodal search in collections of images and text’’ by San- tini introduces the intriguing issue of how to infer meaning of an image from both its pictorial content and its context. The author describes a data model and a query algebra for databases of im- ages immersed in the World Wide Web. The author’s model provides a semantic structure that, taking into ac- count the connection with the text and pages containing them, enriches the in- formation that can be recovered from the images themselves. References 1. F.J. Verbeek et al., ‘‘Visualization of com- plex data sets over Internet: 2D and 3D vi- sualization of the 3D digital atlas of zebra fish development,’’ Proc. SPIE 4672, 20–29 ~January 2002!. 2. G. 
Ciocca et al., ‘‘A multimedia search en- gine with relevance feedback,’’ Proc. SPIE 4672, 243–251 ~January 2002!. 3. G. Beretta, ‘‘WWW1 Structure5 Knowl- edge, ’ ’ Technical Report HPL-96-99, HP Laboratories, Palo Alto, June 1996, http:// www.hpl.hp.com/techreports/96/HPL-96- 99.html 4. J. Tillinghast et al., ‘‘Structure and Naviga- tion for Electronic Publishing,’’ Proc. SPIE 3300, 38 – 45 ~January 1998!. 5. C. Jörgensen et al., ‘‘Testing a vocabulary for image indexing and ground truthing,’’ Proc. SPIE 4672, 207–215 ~January 2002!. 6. D.J. Watts et al., ‘‘Identity and Search in Social Networks,’’ Science 296, 1302–1305 ~May 2002!. 7. N.J. Gunther et al., ‘‘A Benchmark for Im- age Retrieval using Distributed Systems over the Internet: BIRDS-I,’’ Proc. SPIE 4311, 252–267 ~January 2001!. 8. http://www.benchathlon.net/ 9. T. Pfund et al., ‘‘A Dynamic Multimedia Annotation Tool,’’ Proc. SPIE 4672, 216 – 224 ~January 2002!. 10. S. Süsstrunk, ‘‘Color Encodings for Image Databases,’’ Proc. SPIE 4672, 174 –180 ~January 2002!. Giordano Beretta is with the Imaging Systems Labora- tory at Hewlett- Packard. He has been instrumental in bootstrapping the internet imag- ing community: in collaboration with Robert Buckley he has developed a course on ‘‘Color Imaging on the Internet,’’ which they have taught at several IS&T and SPIE conferences; with Raimondo Schettini he has started a series of Internet Imaging conferences at the IS&T/ SPIE Electronic Imaging Symposium; and he has nursed the Benchathlon effort through its first two years. He is a Fellow of the IS&T and the SPIE. Raimondo Schet- tini is an asso- ciate professor at DISCO, University of Milano Bicocca. He has been asso- ciated with Italian National Research Council (CNR) since 1987. In 1994 he moved to the Institute of Multimedia Information Tech- nologies, where he is currently in charge of the Imaging and Vision Lab. 
He has been team leader in several research projects and has published more than 130 refereed papers on image processing, analysis and reproduction, and image content-based indexing and retrieval. He is a member of CIE TC 8/3. He was General Co-Chairman of the 1st Workshop on Image and Video Content-based Retrieval (1998), general co-chair of the EI Internet Imaging Conferences (2000-2002), and general co-chair of the First European Conference on Color in Graphics, Imaging and Vision (CGIV'2002).

Journal of Electronic Imaging / October 2002 / Vol. 11(4) / 422

Viewpoint

LesionMap: A Method and Tool for the Semantic Annotation of Dermatological Lesions for Documentation and Machine Learning

Bell Raj Eapen, MD, MSc (1); Norm Archer, PhD (1); Kamran Sartipi, PhD (2)
(1) Information Systems, McMaster University, Hamilton, ON, Canada
(2) Department of Computer Science, East Carolina University, Greenville, NC, United States

Corresponding Author: Bell Raj Eapen, MD, MSc, Information Systems, McMaster University, DSB - 211, 1280 Main Street West, Hamilton, ON, L8S 4L8, Canada. Phone: 1 905 525 9140 ext 26392. Email: eapenbp@mcmaster.ca

Abstract

Diagnosis and follow-up of patients in dermatology rely on visual cues. Documentation of skin lesions in dermatology is time-consuming and inaccurate. Digital photography is resource-intensive, difficult to standardize, and has privacy concerns. We propose a simple method—LesionMap—and an electronic health software tool—LesionMapper—for semantically annotating dermatological lesions on a body wireframe. We discuss how the type, distribution, and progression of lesions can be represented in a standardized way.
The tool is an open-source JavaScript package that can be integrated into web-based electronic medical records. We believe that LesionMapper will facilitate documentation in dermatology that can be used for machine learning in a privacy-preserving manner.

(JMIR Dermatol 2020;3(1):e18149) doi: 10.2196/18149

KEYWORDS: LesionMap; LesionMapper; digital imaging; machine learning; dermatology

Introduction

Documenting the origin, distribution, and nature of dermatological lesions in textual form is inefficient and imprecise. Dermatologists often document images of the patient or draw the lesions on a body wireframe for later reference. Digital photography for clinical documentation is time-consuming and resource-intensive to capture, organize, and maintain [1]. Additionally, there is a growing privacy-related concern over the use of these images [2]. Capturing a detailed account of dermatological lesions in a privacy-preserving way is becoming increasingly important in the era of machine learning and artificial intelligence (AI). Documentation in electronic medical records (EMRs) requires a simple and efficient tool that fits into the clinical workflow. There is a growing need for a standardized methodology and an annotation schema to facilitate the capture of rich data related to dermatological conditions for machine learning. For this, we propose a simple method—LesionMap (LM)—and an electronic health (eHealth) software tool—LesionMapper (LMR)—that fits into the clinical workflow.

The sharing of clinical images between dermatologists for learning purposes is common, and most images are published with the consent of the patient [2]. However, social media platforms are increasingly used for the easy sharing of such resources, with the associated implications for privacy [3]. Machine learning and AI applications need access to a large volume of data to build machine learning models for clinical decision support.
Emerging techniques in machine learning and AI such as convolutional neural networks (CNNs) and transfer learning [4] have several applications in dermatology [5]. Interestingly, some computer-vision methods can be applied to machine-generated images in addition to digital images [6]. In this paper, we describe common skin lesions, the semantic annotation methodology (LM), and a software tool (LMR) that can be used for semantic annotations. The tool is designed as an extensible software library (JavaScript) that can be incorporated into web-based EMRs. We briefly describe two such integrations with open-source EMRs—OpenMRS and OSCAR EMR.

JMIR Dermatol 2020 | vol. 3 | iss. 1 | e18149 | http://derma.jmir.org/2020/1/e18149/ (page number not for citation purposes)

Classification of Skin Lesions

Dermatologists use numerous descriptive terms to identify and describe skin lesions [7]. Flat skin lesions that are small are called macules, and when they exceed 1 cm in size, they are called patches. An elevated dome-shaped lesion is called a nodule, whereas a flat elevated lesion is called a plaque. Small fluid-filled lesions are called vesicles, and if they exceed 1 cm, they are called bullae. If vesicles are filled with pus instead of clear fluid, they are called pustules. Scales refer to a thickened outer layer of skin, while a crust is liquid debris. An ulcer is an irregularly shaped, deep loss of skin; if it is superficial, it is called an erosion. Atrophy is a thinning of the skin, and a fissure is a linear cleft. Necrosis is dead skin tissue, and a scar is the replacement of lost skin by connective tissue. Localized hemorrhage into the skin is called purpura, or petechiae when the hemorrhagic lesions are small.
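The size-dependent terminology above is mechanical enough to capture in a small lookup. The function below is a hypothetical illustration (it is not part of LM or LMR); the labels and the 1 cm threshold follow the text.

```python
def lesion_term(morphology, size_cm=None, contents="clear"):
    """Map a described lesion to its dermatological term (illustrative only).

    morphology: "flat", "fluid-filled", "elevated-dome" or "flat-elevated";
    size_cm: lesion diameter, needed where the 1 cm threshold applies;
    contents: "clear" or "pus", relevant to fluid-filled lesions.
    """
    if morphology == "flat":
        return "macule" if size_cm < 1.0 else "patch"
    if morphology == "fluid-filled":
        if contents == "pus":
            return "pustule"
        return "vesicle" if size_cm < 1.0 else "bulla"
    if morphology == "elevated-dome":
        return "nodule"
    if morphology == "flat-elevated":
        return "plaque"
    raise ValueError("unknown morphology: " + morphology)
```

Rules of this kind are also what make an iconographic annotation, rather than free text, attractive: the vocabulary is small and discrete.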
The color of the lesion can provide diagnostic cues, along with the shape, arrangement, and distribution. Discoid and annular are terms used to describe the shape. The distribution can be grouped, discrete, linear, serpiginous, reticular, generalized, symmetrical, or photodistributed. The size, location, and severity are also important. Although this is not an exhaustive list of dermatological descriptions, the most common descriptions are included here. Discrepancies in the terminology of dermatological lesions exist in the literature [8]. LM does not attempt to formalize the ontology, but proposes a pragmatic standard using the iconographic method.

Iconographic Representations of Skin Lesions

Most descriptive terms used in dermatology can be represented by iconographic images representative of the lesion or feature. The use of iconography in clinical documentation has been demonstrated in the context of pain [9]. The type of lesion can be easily represented by icons due to their visual similarity. The list of icons can be supplemented with custom icons representing descriptive characteristics, such as the site of onset. LMR provides a set of icons for representing visual and nonvisual characteristics of common skin lesions and additional icons for descriptive characteristics (see Figure 1A and B).
Additional information pertaining to the lesion can be encoded using the following characteristics: • The size of the icon can be used to indicate the average size of the lesion in conditions where lesion size points toward a diagnosis or a particular subtype of the primary diagnosis. For example, the size of the plaques can be a differentiating feature for small-plaque and large-plaque parapsoriasis. The original size of the icon, when placed on the LM, can be used for comparison (see the large nodule in Figure 1C). • The position of icon placement indicates the distribution of the lesions. The front and back of the body are depicted in the LM. The lateral view is not included to simplify the interface. To represent lateral distribution, the icons can be JMIR Dermatol 2020 | vol. 3 | iss. 1 | e18149 | p. 3http://derma.jmir.org/2020/1/e18149/ (page number not for citation purposes) Eapen et alJMIR DERMATOLOGY XSL•FO RenderX http://www.w3.org/Style/XSL http://www.renderx.com/ placed in the corresponding edge of the wireframe with an overlap of 50% (see the ulcer on the legs in Figure 1D). • Multiple icons of the same type can be used to represent discrete lesions, and a single large icon can be used to represent confluent distribution (see discrete plaques in Figure 1E). • The orientation can be used to indicate a pathognomonic distribution, such as the Christmas tree pattern in pityriasis rosea (see Figure 1F). • The opacity of the lesion can be used to indicate the severity of the presentation. For example, it can be used to represent the degree of depigmentation in a vitiligo patch or the severity of contact dermatitis (see Figure 1G). Mapping lesions consistently and accurately requires a tool that supports the various functions described above. In addition, from a design perspective, the tool should have the capability to integrate with other health information systems and EMRs. LesionMapper LMR is a prototype implementation of the LM method described above. 
We adopted the design science principles of Hevner et al [10] for information systems to design LMR. We searched the literature for similar approaches and available tools to address the problem of lesional documentation. Based on the success of similar approaches (Pain-QuILT for annotating pain [9]), we chose iconography as the method and standardized it based on our domain expertise in dermatology. Thereafter, we distinguished some of the easily identifiable characteristics of icons that can be programmatically controlled, such as size, orientation, and transparency. Subsequently, we converged on a popular framework (VueJS JavaScript framework [11]) for implementation. We designed the artifact adopting a modular pattern—as a JavaScript package shared as open source (see the GitHub repository [12])—that can be incorporated into web-based EMRs. LMR provides buttons to add various icons to the canvas. These icons can be independently moved and resized. The opacity and orientation can also be independently modified. The LM can be exported as an image or as a JavaScript Object Notation (JSON) string. LMR supports freehand drawing in the canvas to represent features that are not represented by icons though machine interpretation of the freehand drawing is challenging. Next, we describe the integration of LMR into two open-source EMRs—OpenMRS and OSCAR. Integration With Electronic Medical Records The modular design helps in the integration of LMR into web-based EMRs. The prototype is created using the VueJS JavaScript framework following the Universal Module Definition (UMD) pattern [13] that can be imported by different module loaders into other browser-based applications. The icons are converted into Base64 strings and included in the JavaScript files. Open Medical Records System (OpenMRS) is an open-source, Java-based EMR for developing countries with a modular and extensible architecture [14]. 
OpenMRS supports the Open Web Apps (OWA) specification, which makes it possible to design external applications that extend the core functions. An OWA communicates with OpenMRS using REST APIs (representational state transfer application programming interfaces, a software architectural style used for creating web services) and is embedded in the same server instance. OpenMRS has a custom concept dictionary that helps map data points to a uniform terminology. Nontextual data such as images are stored as “complex concepts” outside the relational database. LMR fits easily into the OWA design pattern, and the exported LM images can be stored as complex observations in the patient record. We have a prototype integration that can be used as an example [15].

OSCAR EMR is a web-based EMR system initially developed for primary care practitioners in Canada. OSCAR EMR has a complex data model, and additional data points are supported by an electronic form (eForm) module that stores data as key-value pairs [16]. eForms do not support images or other nontextual data. The ability of LMR to save LMs as a JSON string makes the integration of the LMR module into eForms possible.

Machine Learning Applications

Dermatological diseases have diverse presentations, with skin type and skin color adding to this variation. Some of these diseases involve hair, nails, and mucous membranes in addition to the skin. Traditional computer-vision algorithms such as convolutional neural networks (CNNs) and other variants of neural networks have limited application when there are many decision alternatives [17]. Hence, AI algorithms have had limited application in dermatology except in problems associated with classification (eg, the presence or absence of cancer) [18]. Such algorithms can classify only a given lesion rather than the patient as a whole (ie, a lesion is cancerous vs the patient has cancer).
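Because OSCAR eForms store only key-value pairs, the exported JSON string can itself be stored under a single key. A minimal sketch of this flattening follows; the key name is an assumption for illustration, not an actual OSCAR field name.

```javascript
// Sketch: storing a lesion-map JSON string as an eForm key-value pair.
// The key name "lesion_map_json" is illustrative; real eForm field names
// are defined per form in OSCAR.
function toEformFields(lesionMapJson) {
  return [{ key: "lesion_map_json", value: lesionMapJson }];
}

function fromEformFields(fields) {
  const f = fields.find((p) => p.key === "lesion_map_json");
  return f ? JSON.parse(f.value) : null;
}
```

The same string-valued approach avoids the eForm module's lack of support for images or other nontextual data.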
Although a few CNN-based image search algorithms have proven useful, AI algorithms for diagnostic decision making in clinical dermatology lag far behind areas such as radiology [17]. Text analytics and natural language processing (NLP) can be more useful than image analytics when the decision alternatives are numerous, as in dermatology. Multimodal approaches, in which an image is combined with metadata, have shown promise [19]. LMs—especially their JSON representation—resemble text more than images, and machine learning models can be built from them accordingly. Relevant metadata such as the position and distribution of the lesions, which are difficult to capture in text and hard to decipher precisely with NLP, are implicitly captured in LMs. The icons represent ontological concepts from dermatology and can map to any standard terminology system [7]. We posit that LMs are semantically rich enough to be used for machine learning applications. Machine learning models built from LMs are likely to be more “explainable” than traditional black-box algorithms [20]. The implicit metadata captured by LMs can supplement regular digital images, leading to better machine learning models.

Advantages and Limitations

LMs may save time for busy practitioners while capturing the type, distribution, and characteristics of the lesion; these data can be used to assess clinical progress. The LM exported as JSON resembles a markup language amenable to data mining and machine learning methods [21]. LMs are portable and can be easily and safely exchanged without privacy concerns. LMR can export LMs as images, and these images can be used as a proxy for patient images in some computer vision–based applications.
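To illustrate why the JSON representation is amenable to machine learning, the following sketch derives tabular features (concept counts, coarse body regions, peak severity) from an icon list. The region boundaries and feature names are illustrative assumptions, not part of the LM method.

```javascript
// Sketch: deriving tabular features from a lesion map for a downstream
// machine learning model. Region thresholds and feature names are assumed.
function lesionMapFeatures(icons, bodyHeight = 100) {
  const features = {};
  for (const icon of icons) {
    // bag-of-concepts count per icon type
    features[`count_${icon.type}`] = (features[`count_${icon.type}`] ?? 0) + 1;
    // coarse vertical body region captures distribution implicitly
    const region =
      icon.y < bodyHeight / 3 ? "upper" :
      icon.y < (2 * bodyHeight) / 3 ? "mid" : "lower";
    features[`region_${region}`] = (features[`region_${region}`] ?? 0) + 1;
    // opacity encodes severity, so keep the maximum seen
    features.max_opacity = Math.max(features.max_opacity ?? 0, icon.opacity ?? 1);
  }
  return features;
}
```

Features of this kind are directly inspectable, which is one reason models built from LMs may be more explainable than pixel-based ones.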
Computer vision has been successfully applied to identify metabolic defects from gene expression maps [22]. MNIST (the Modified National Institute of Standards and Technology database), a dataset widely used in machine learning, consists of images of handwritten digits [23]. It is widely accepted that machine learning can reinforce some health care disparities in dermatology [24]. Skin color is a significant source of background noise that needs to be accounted for in any machine learning model. It is possible that some existing models are biased toward the particular skin types that predominate in their training data sets; such models tend to be less sensitive when making predictions on other skin types [25]. LMs are not affected by such bias.

LMs, however, do not capture all the features, both explainable and unexplainable, captured by a digital image. Hence, LMs are not useful in scenarios where accurate and sensitive extraction of features from an image is important for prediction. For example, LMs are not appropriate for skin cancer classification [5] or mole mapping [26]. LMs do not support annotating dermatopathology images [27]; they are also not applicable to dermatoscopic images that rely on pixel-level analysis [28]. Examination findings such as fluctuation, consistency, and tenderness are not represented by icons at present, to keep the interface simple; more icons can be added if the user community requires them. LMR and the LM method have not been clinically tested. The integration of LMR into existing EMRs may be difficult. Despite its anticipated ease of use, the actual impact of LMR on physician workflow, if any, needs to be investigated further.

Discussion

The skin is the largest organ in the human body, and as such, skin conditions are commonly encountered in any health care practice. Although dermatology is a specialization within clinical medicine, 50% of skin conditions are assessed and documented by nondermatologists [29].
There is no universal standard for pictographic documentation of the type, distribution, progression, and severity of lesions in dermatology, as there is in dentistry [30] and ophthalmology [31]. The LM standardizes visual representation using icons that can be extended to accommodate different use cases in clinical and cosmetic dermatology. The simplicity of the mapping rules facilitates use by nondermatologists in the skincare industry; LMs are also semantically rich enough to capture most relevant information about a skin condition with minimal effort.

Image analytics in dermatology is not as popular as it is in visually oriented medical specialties such as radiology and pathology, with the exception of skin cancer diagnostics. This is because of the privacy concerns associated with dermatological images and the difficulties in standardizing image capture. The LM is not a replacement for a digital image of the lesion. However, some diagnostic aspects that are difficult to capture in images, such as distribution and progression, can be useful for machine learning applications, especially when combined with the textual representation of a patient’s history. Such multimodal approaches mimic the clinical workflow more closely than CNN-based algorithms [32]. New computer-vision algorithms are proving capable of learning from computer-generated images [22]. We believe that LMs can similarly be used with computer-vision methods. Finally, we urge the open-source community to help us improve LMR, and potential users to report issues on the repository [12] so that we can fix them. We will work on a 3D wireframe for better accuracy, and we welcome other feature requests from the user community.

Conflicts of Interest

None declared.

References

1. Aspres N, Egerton IB, Lim AC, Shumack SP. Imaging the skin. Australas J Dermatol 2003 Feb;44(1):19-27. [doi: 10.1046/j.1440-0960.2003.00632.x]
2. Kunde L, McMeniman E, Parker M.
Clinical photography in dermatology: ethical and medico-legal considerations in the age of digital and smartphone technology. Australas J Dermatol 2013 May 29;54(3):192-197. [doi: 10.1111/ajd.12063]
3. Ventola CL. Social media and health care professionals: benefits, risks, and best practices. P & T 2014 Jul;39(7):491-520 [FREE Full text] [Medline: 25083128]
4. Shin H, Roth HR, Gao M, Lu L, Xu Z, Nogues I, et al. Deep convolutional neural networks for computer-aided detection: CNN architectures, dataset characteristics and transfer learning. IEEE Trans Med Imaging 2016 May;35(5):1285-1298. [doi: 10.1109/tmi.2016.2528162]
5. Esteva A, Kuprel B, Novoa RA, Ko J, Swetter SM, Blau HM, et al. Dermatologist-level classification of skin cancer with deep neural networks. Nature 2017 Jan 25;542(7639):115-118. [doi: 10.1038/nature21056]
6. Levine MD, Nazif AM. Dynamic measurement of computer generated image segmentations. IEEE Trans Pattern Anal Mach Intell 1985 Mar;PAMI-7(2):155-164. [doi: 10.1109/tpami.1985.4767640]
7. Eapen BR. ONTODerm--a domain ontology for dermatology. Dermatol Online J 2008 Jun 15;14(6):16. [Medline: 18713597]
8. Cardili RN, Roselino AM. Elementary lesions in dermatological semiology: literature review. An Bras Dermatol 2016 Oct;91(5):629-633. [doi: 10.1590/abd1806-4841.20164931]
9. Lalloo C, Kumbhare D, Stinson JN, Henry JL. Pain-QuILT: clinical feasibility of a web-based visual pain assessment tool in adults with chronic pain.
J Med Internet Res 2014 May 12;16(5):e127 [FREE Full text] [doi: 10.2196/jmir.3292] [Medline: 24819478]
10. Hevner AR, March ST, Park J, Ram S. Design science in information systems research. MIS Quarterly 2004;28(1):75. [doi: 10.2307/25148625]
11. Street M, Passaglia A, Halliday P. Complete Vue.js 2 Web Development: Practical Guide to Building End-to-end Web Development Solutions with Vue.js 2. Birmingham: Packt Publishing; 2018.
12. Eapen BR. LesionMapper - Vue component for standardized mapping of lesions in dermatology. GitHub. 2020. URL: https://github.com/dermatologist/lesion-mapper [accessed 2020-01-16]
13. Rich H. Dismantling the barriers to entry. Queue 2015;13(5):37 [FREE Full text] [doi: 10.1145/2742580.2742813]
14. Mamlin BW, Biondich PG, Wolfe BA, Fraser H, Jazayeri D, Allen C, et al. Cooking up an open source EMR for developing countries: OpenMRS, a recipe for successful collaboration. In: AMIA Annual Symposium Proceedings; 2006; Washington. p. 529.
15. Eapen BR. Dermatology LesionMapper. OpenMRS. 2017. URL: https://addons.openmrs.org/show/org.openmrs.module.dermatology-lesionmapper [accessed 2020-01-18]
16. Safadi H, Chan D, Dawes M, Roper M, Faraj S. Open-source health information technology: a case study of electronic medical records. Health Policy and Technology 2015 Mar;4(1):14-28. [doi: 10.1016/j.hlpt.2014.10.011]
17. Li C, Shen C, Xue K, Shen X, Jing Y, Wang Z, et al. Artificial intelligence in dermatology. Chinese Medical Journal 2019;132(17):2017-2020. [doi: 10.1097/cm9.0000000000000372]
18. Sidey-Gibbons JAM, Sidey-Gibbons CJ. Machine learning in medicine: a practical introduction. BMC Med Res Methodol 2019 Mar 19;19(1). [doi: 10.1186/s12874-019-0681-4]
19. Wang H, Wang Y, Liang C, Li Y. Assessment of deep learning using nonimaging information and sequential medical records to develop a prediction model for nonmelanoma skin cancer.
JAMA Dermatol 2019 Nov 01;155(11):1277. [doi: 10.1001/jamadermatol.2019.2335]
20. Holzinger A, Biemann C, Pattichis CS, Kell DB. What do we need to build explainable AI systems for the medical domain? arXiv 2017 [FREE Full text]
21. Guazzelli A, Lin W, Jena T. PMML in Action: Unleashing the Power of Open Standards for Data Mining and Predictive Analytics. California: CreateSpace Independent Publishing Platform; 2020.
22. Zhang J, Naik HS, Assefa T, Sarkar S, Reddy RVC, Singh A, et al. Computer vision and machine learning for robust phenotyping in genome-wide studies. Sci Rep 2017 Mar 8;7(1). [doi: 10.1038/srep44048]
23. Deng L. The MNIST database of handwritten digit images for machine learning research [Best of the Web]. IEEE Signal Process Mag 2012 Nov;29(6):141-142. [doi: 10.1109/msp.2012.2211477]
24. Adamson AS, Smith A. Machine learning and health care disparities in dermatology. JAMA Dermatol 2018 Nov 01;154(11):1247. [doi: 10.1001/jamadermatol.2018.2348]
25. Kamulegeya LH, Okello M, Bwanika JM, Musinguzi D, Lubega W, Rusoke D, et al. Using artificial intelligence on dermatology conditions in Uganda: a case for diversity in training data sets for machine learning. bioRxiv 2019 [FREE Full text] [doi: 10.1101/826057]
26. Berk-Krauss J, Polsky D, Stein JA. Mole mapping for management of pigmented skin lesions. Dermatologic Clinics 2017 Oct;35(4):439-445. [doi: 10.1016/j.det.2017.06.004]
27. Toro P, Corredor G, Romero E, Arias V. A visualization, navigation, and annotation system for dermatopathology training. In: 14th International Symposium on Medical Information Processing and Analysis (SPIE vol 10975); 2018; Mazatlán, Mexico. [doi: 10.1117/12.2511638]
28. Ferreira B, Barata C, Marques JS. What is the role of annotations in the detection of dermoscopic structures? In: Iberian Conference on Pattern Recognition and Image Analysis.
2019 Presented at: 9th Iberian Conference, IbPRIA 2019; 2019; Madrid, Spain. p. 3-11. [doi: 10.1007/978-3-030-31321-0_1]
29. Ahiarah A, Fox C, Servoss T. Brief intervention to improve diagnosis and treatment knowledge of skin disorders by family medicine residents. Fam Med 2007;39(10):720-723 [FREE Full text] [Medline: 17987414]
30. Wu M, Koenig L, Lynch J, Wirtz T. Spatially-oriented EMR for dental surgery. In: AMIA Annual Symposium Proceedings; 2006; Washington. p. 1147.
31. Chiang MF, Boland MV, Brewer A, Epley KD, Horton MB, Lim MC, American Academy of Ophthalmology Medical Information Technology Committee. Special requirements for electronic health record systems in ophthalmology. Ophthalmology 2011 Aug 1;118(8):1681-1687. [doi: 10.1016/j.ophtha.2011.04.015] [Medline: 21680023]
32. Baltrusaitis T, Ahuja C, Morency L. Multimodal machine learning: a survey and taxonomy. IEEE Trans Pattern Anal Mach Intell 2019 Feb 1;41(2):423-443. [doi: 10.1109/tpami.2018.2798607]

Abbreviations
AI: artificial intelligence
API: application programming interface
CNN: convolutional neural network
eForm: electronic form
eHealth: electronic health
EMR: electronic medical record
JSON: JavaScript Object Notation
LM: LesionMap
LMR: LesionMapper
MNIST: Modified National Institute of Standards and Technology database
NLP: natural language processing
OWA: Open Web Apps
REST: representational state transfer
UMD: Universal Module Definition

Edited by G Eysenbach; submitted 06.02.20; peer-reviewed by F Kaliyadan, A Kt; comments to author 15.02.20; revised version received 26.02.20; accepted 27.02.20; published 20.04.20

Please cite as:
Eapen BR, Archer N, Sartipi K
LesionMap: A Method and Tool for the Semantic Annotation of Dermatological Lesions for Documentation and Machine Learning
JMIR Dermatol 2020;3(1):e18149
URL: http://derma.jmir.org/2020/1/e18149/
doi: 10.2196/18149
PMID:

©Bell Raj R Eapen, Norm Archer, Kamran Sartipi.
Originally published in JMIR Dermatology (http://derma.jmir.org), 20.04.2020. This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR Dermatology, is properly cited. The complete bibliographic information, a link to the original publication on http://derma.jmir.org, as well as this copyright and license information must be included.

Reports

Ecological Insights from Long-term Research Plots in Tropical and Temperate Forests

Organized Oral Session 40 was co-organized by Amy Wolf, Stuart Davies, and Richard Condit, and held on 6 August 2009 during the 94th ESA Annual Meeting in Albuquerque, New Mexico.

An International Network of Forest Research Sites

This session was devoted to findings from an international network of long-term forest dynamics plots, providing some of the first comparisons between the tropical and temperate study areas. Whereas most ecologists recognize the importance of long-term research (Callahan 1984, Likens 1989, Magnuson 1990, Hobbie et al. 2003, and others), large-scale projects like this one are rare.
The global network of forest dynamics plots has expanded since its origin in the early 1980s, yet research at the plots has remained integrated due to a combination of individual scientists’ efforts and institutional commitments from the Center for Tropical Forest Science (CTFS, ‹www.ctfs.si.edu›) of the Smithsonian Tropical Research Institute and the Arnold Arboretum of Harvard University (Hubbell and Foster 1983, Condit 1995). Today, 26 large plots have been established in tropical or subtropical forests of Central and South America, Africa, Asia, and Oceania, along with 8 recently added temperate forest plots. A standard protocol (Condit 1998) is followed at all sites, including the marking, mapping, and measuring of all trees and shrubs with stem diameters ≥1 cm.

October 2009 519

An underlying objective of the ESA session was to illustrate how an understanding of forest dynamics can contribute to long-term strategies of sustainable forest management in a changing global environment. Stuart Davies, Director of the Center for Tropical Forest Science, opened by describing the current plot network and the pivotal role of CTFS in supporting research at sites in 20 countries. Since the establishment of the first plot at Barro Colorado Island in Panama in 1980, researchers have applied the standard field methods in plots covering >1200 ha (12 km2). Field teams have measured ~3.5 million living trees belonging to at least 7900 species. Not only has this ambitious effort led to important ecological insights, but the network continues to build expertise and research capacity in the countries where research is being conducted. New sites continue to be added, and not a single forest dynamics plot has discontinued the monitoring protocol.

Davies summarized some of the important published findings from forest dynamics plots. In the tropics, tree species diversity decreases with length of the dry season and insularity.
Dispersal limitation, niche differences, and density dependence have all been demonstrated at forest dynamics plots, even though neutral dynamics can successfully predict species relative abundance distributions. In fact, most species (49–74%) in all of the forest dynamics plots exhibit some degree of habitat specificity. These attributes are especially important predictors of long-term vegetation change. Floristic changes on Barro Colorado Island, for example, have shown a consistent trend toward increased drought tolerance and increased average wood density. The baseline data established at these and other sites will continue to provide vital information about the effects of climate change and environmental perturbations.

Stephen Hubbell, co-founder of the first forest dynamics plot with Robin Foster and others, described how different scales of reference reveal different types of ecological processes. Density- and frequency-dependent interactions are strong at local spatial scales, but they become weak at intermediate scales. Hubbell introduced a new theory, the Enemy Susceptibility Hypothesis, which might explain an unexpected pattern of negative spatial autocorrelation of species’ population growth rates (r) at large (>700 m) spatial scales. This pattern is especially strong among rare species. He proposed that pathogens play an important role in keeping rare species rare by preventing them from becoming locally common anywhere. A video simulation demonstrated the complex spatial dynamics of pathogen–host interactions, illustrating the important geographic effect of pathogens on host communities.

Jerry Franklin provided an historical perspective on long-term forest research plots and their critical role in testing predictions of ecological models and theories.
He and co-authors have been monitoring permanent plots in North America’s Pacific Northwest for >50 years, but Franklin asserted that neither regional networks like these nor the current system of forest dynamics plots is yet adequate to address major issues of ecological theory or environmental change. A sustained international system of large, permanent forest research plots will pay important dividends in understanding forest ecology and human impacts on the environment. Institutional and government support are necessary to sustain a network of permanent research sites, but commitments by individual scientists and a culture of effective mentoring will be equally important ingredients of success.

520 Bulletin of the Ecological Society of America

Robert Howe, Amy Wolf, and Richard Condit described results from one of the first forest dynamics plots in the temperate zone (Wisconsin, USA), where species diversity is more than an order of magnitude lower than at equivalent-sized forests in the humid tropics. Despite the simpler community structure, patterns of species abundance are not unlike those in tropical forests, and many of the same processes (geologic history, local disturbance, habitat specialization, interspecific competition, and dispersal constraints) influence species distributions. As in tropical forests, most species in temperate forest plots are rare, maintained by dispersal from the surrounding metacommunity. The pattern of relative species abundances in the Wisconsin forest dynamics plot closely follows the log-series distribution, as predicted for tropical forests by Hubbell’s zero-sum neutral theory. This result is obtained despite the clear existence of species interactions and niche differentiation.
Keping Ma, Director of the Institute of Botany at the Chinese Academy of Sciences, and coauthors described the newly established network of four large forest dynamics plots in China, ranging from a high-latitude temperate forest in Jilin Province in the northeast to species-rich subtropical forests in southern Yunnan Province. Research at these sites has resulted in 16 peer-reviewed publications during the past three years, including studies of spatial tree distribution patterns, species–area relationships, conservation genetics, dispersal dynamics, and habitat associations. Major findings include evidence of density dependence in 83% of subtropical tree species after the effects of habitat heterogeneity have been removed; a trend of decreased aggregation of trees with increased scale of reference; and the implication of multiple factors (dispersal and habitat heterogeneity) in species–area relationships. Ma also described the Beijing Living Herbarium Project, which uses digital photography to document plant species occurrences at long-term research sites across China.

Sean Thomas, Michael Drescher, and Rajit Patankar outlined the scientific rationale for large-scale forest plots. These plots create a research platform that efficiently addresses many questions within a spatial scale that is highly relevant to important ecological processes. Some problems or questions can be addressed at a regional scale (e.g., parameterization of individual-based forest simulation models, analysis of diversity–productivity patterns), while others are made possible by the existence of a global network of plots (e.g., are natural enemy effects weaker in temperate than tropical forests?).
Thomas and coauthors described findings from their recently established temperate forest plot in Ontario, Canada, including the documentation of canopy thinning in old/large trees, the strong effects of topography and stem density on species richness, and the ecological importance of maple spindle gall mites in forests dominated by sugar maple (Acer saccharum).

Permanent forest dynamics plots have become an important resource for global studies of carbon dynamics, including a research program led by Helene Muller-Landau and numerous Center for Tropical Forest Science (CTFS) colleagues. Forests contain ~38% of terrestrial carbon pools and account for 48% of terrestrial net primary production, so an understanding of carbon dynamics in forests is critical for understanding global carbon cycles. Muller-Landau’s presentation illustrated that forest carbon pools are dynamic, varying spatially and temporally in response to climate, species composition, and other factors, including human impacts. Research at forest dynamics plots provides important information about carbon stocks, carbon dynamics, and mechanisms that underlie variation in carbon pools and fluxes. On Barro Colorado Island in Panama, for example, soil contributes 60% of measured forest carbon stocks, trees 36%, woody debris 3%, and lianas <1%. Initial results of the carbon studies by Muller-Landau and others have revealed extensive variation in tree carbon stocks within and among tropical forests. Species composition, which is likely to be modified in response to human-induced climate change, plays a significant role in explaining this variation. The forest dynamics plots provide a detailed baseline for understanding long-term changes in forest carbon storage, information that will be critical for understanding the impacts of global change.
Richard Condit described simulated tree communities, starting with real spatial configurations of trees in the Korup (Africa) and Barro Colorado Island (Panama) forest dynamics plots. The neutral (“voter”) model of Hubbell leads to conspicuously different species distribution patterns depending on the mean dispersal distances of species. Short dispersal distances (10 m) lead to highly aggregated distributions, whereas longer dispersal distances (>200 m) lead to more random distributions in the absence of niche segregation or species’ habitat preferences. If niche segregation among species is incorporated into the models, similar results are obtained, except that distributions are predictably aggregated around favored microhabitats. Condit compared simulations representing different underlying community dynamics (different dispersal distances and different degrees of niche differentiation) with actual patterns observed in tropical forest dynamics plots. Invariably, observed patterns matched simulations with high degrees of species input through dispersal. These findings strongly suggest that species assemblages in forest dynamics plots (15–50 ha) are influenced significantly by species input from the surrounding metacommunity. A high proportion of species in these assemblages might be maintained by persistent immigration, not by niche differentiation among coexisting species. Condit’s approach provides insights into the processes that structure tree communities, and additionally suggests the scale at which dispersal occurs in these tree communities. In most cases, dispersal distances of at least 100–200 m are prevalent.

One of the first and perhaps most dynamic permanent plots is the Mudumalai Forest Dynamics Plot in southern India, described by R. Sukumar, H.S. Suresh, and H.S. Dattaraja.
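The neutral "voter" dynamic that Condit's simulations build on can be sketched in miniature: each death is replaced by the offspring of a neighbor drawn from within a fixed dispersal distance. The one-dimensional toy version below uses arbitrary parameters and is only an illustration of the mechanism, not Condit's actual simulation code.

```javascript
// Toy neutral ("voter") model on a 1-D strip of sites. At each step one
// random individual dies and is replaced by a copy of a neighbour chosen
// within `dispersal` cells. Shorter dispersal preserves spatial aggregation.
function voterStep(community, dispersal, rng = Math.random) {
  const next = community.slice();
  const i = Math.floor(rng() * community.length);            // random death
  const offset = Math.floor(rng() * (2 * dispersal + 1)) - dispersal;
  const j = Math.min(community.length - 1, Math.max(0, i + offset));
  next[i] = community[j];                                    // neighbour's offspring
  return next;
}

function run(community, dispersal, steps, rng) {
  for (let t = 0; t < steps; t++) community = voterStep(community, dispersal, rng);
  return community;
}
```

With no immigration, such a model only loses species over time, which is why persistent input from the surrounding metacommunity matters in the full simulations.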
This dry tropical forest experiences periodic fires, annual droughts, and disturbance by elephants, resulting in dramatic changes in the abundance of grasses and small-stemmed woody plants. Despite high disturbance rates, the composition of canopy trees has been remarkably stable, with mortality rates below or equal to those of canopy trees in tropical moist forests. During >20 years of annual forest monitoring, Sukumar and colleagues have found no evidence that current frequencies of fire and large-mammal herbivory (including one of the highest elephant densities in the world) will convert the Mudumalai forest to savanna or grassland. Fire and drought impose short-term changes in the vegetation, particularly understory grasses and tree recruits, but carbon stocks have remained quite uniform and overall community properties have not shifted dramatically over the long term. Studies from Mudumalai clearly illustrate the value of long-term forest monitoring and the complexity of ecological responses to environmental change.

Research in this growing network of permanent forest plots has provided not only baseline data for regional and global ecological comparisons but also a fertile platform for generating new ideas about the structure and dynamics of forest communities. In addition to this Oral Session, more than 60 other papers and posters were presented at the ESA Meeting in Albuquerque by researchers associated with the forest dynamics plot network. The capacity audience at this organized oral session suggests that interest in coordinated international research on forests is high. Such interest is likely to grow as more sites are added to the network and as the data accumulating from systematic, long-term monitoring continues to address critical issues for the future of the world’s forests.

Literature cited

Callahan, J. T. 1984. Long-term ecological research. BioScience 34:363–367.
Condit, R. 1995. Research in large, long-term tropical forest plots. Trends in Ecology and Evolution 10:18–22.
Condit, R. 1998. Tropical forest census plots: methods and results from Barro Colorado Island. Springer, Berlin, Germany.
Hobbie, J. E., S. R. Carpenter, N. B. Grimm, J. R. Gosz, and T. R. Seastedt. 2003. The U.S. long term ecological research program. BioScience 53:21–32.
Hubbell, S. P. 2001. The unified neutral theory of biodiversity and biogeography. Princeton University Press, Princeton, New Jersey, USA.
Hubbell, S. P., and R. B. Foster. 1983. Diversity of canopy trees in a Neotropical forest and implications for the conservation of tropical trees. Pages 25–41 in S. J. Sutton, T. C. Whitmore, and A. C. Chadwick, editors. Tropical rainforest: ecology and management. Blackwell, Oxford, UK.
Likens, G. E., editor. 1989. Long-term studies in ecology: approaches and alternatives. Springer-Verlag, New York, New York, USA.
Magnuson, J. J. 1990. Long-term ecological research and the invisible present. BioScience 40:495–501.

Amy Wolf, Department of Natural and Applied Sciences, University of Wisconsin-Green Bay
Stuart Davies, Arnold Arboretum, Harvard University
Richard Condit, Center for Tropical Forest Science, Smithsonian Tropical Research Institute

Reports, October 2009

The use of clinical photography by UK general dental practitioners
G. A. Morse,1 M. S. Haque,2 M. R. Sharland3 and F. J. T. Burke4
VERIFIABLE CPD PAPER

Although not mandatory, the importance of photography as a clinical, administrative and marketing tool suggests that its use should be widespread in the UK. Developments in digital photographic technology have facilitated the integration of photography into clinical dentistry, as many practitioners have access to a computer. Computers facilitate practice administration and reduce dependency on paper records.
Computerisation, coupled with software, enables the manipulation and storage of digital images and transfer of data over the Internet. The most recent study on the use of photography in clinical dentistry in the UK was carried out in 2002,13 so it was considered that a further study was timely. The aims and objectives of this study were therefore:
• To determine the proportion and demographics of UK dental practitioners who use clinical photography
• To ascertain its uses in dental practices
• To determine the reasons why non-users do not make use of clinical photography
• To determine the use of computers in clinical photography by UK-based practitioners.
This mirrored the aims of the 2002 study.13 In this previous study it was found

INTRODUCTION
Clinical dental photography has a number of uses, which include:1–12
• Documentation
• Medico-legal purposes
• Clinical assessment
• Patient education
• Case planning
• Patient motivation and co-operation
• Marketing
• Facilitation of communication between operator and technician
• Staff training
• Demonstration and education purposes for the operator and between clinicians
• Diagnosis
• Restoration performance
• Personal interest and satisfaction
• Recording possessions and for insurance purposes.

Aim The aim of this study was to assess by means of a postal questionnaire the numbers of general dental practitioners (GDPs) who used clinical photography and for what application.
Method The questionnaire was distributed to 1,000 randomly selected dentists in the UK with an explanatory letter and reply paid envelope. The data collected were computerised and analysed statistically.
Results Five hundred and sixty-two replies were received. Of the respondents, 48% used clinical photography, with 59% using a digital camera, 34% a 35 mm camera and 19% a video camera.
Principal uses of clinical photography were treatment planning (84%), patient instruction/motivation (75%), medico-legal reasons (71%) and communication with the laboratory (64%).
Conclusion Clinical photography was used by 48% of general dental practitioner respondents.

that clinical photography was used by 36% of general dental practitioner respondents. Males, specialist and private practitioners, and practitioners from the Midlands were more likely to be users of clinical photography. Principal reasons for use were patient instruction, interest and medico-legal reasons. Only 8% had attended a course on dental photography.13 In the 2002 study 62% of respondents were using an intra-oral 35 mm camera and 32% a digital camera. Of the 64% of respondents who did not use clinical photography, 54% considered that they would use it at some time in the future.

In (35 mm) chemical photography, the film sheet serves as the light-sensitive sensor, storage and reproduction medium, so there is little scope for manipulation. In digital photography each of these three entities uses different media, facilitating manipulation and unparalleled flexibility in editing, copying and disseminating images. High-quality yet instantaneous image production makes digital cameras superior to the relatively inexpensive Polaroid instant cameras, with long-term economy possible by reusing storage media such as memory cards.
Digital photography is environmentally greener by eliminating toxic dyes and processing chemicals, and the film for instant cameras is expensive.14 Good quality clinical pictures are of course possible with 35 mm cameras using appropriate macro-lenses.

1*General Dental Practitioner, Hilltop Dental Surgery, 1007 Bristol Road South, Northfield, Birmingham, B31 2QT; 2Medical Statistician, Primary Care Clinical Sciences, University of Birmingham, Edgbaston, Birmingham, B15 2TT; 3Head of Multimedia Services, 4Professor of Primary Dental Care, University of Birmingham, St Chads Queensway, Birmingham B4 6NN
*Correspondence to: Greg A. Morse. Email: gregmorse@tiscali.co.uk
Online article number E1. Refereed Paper - accepted 9 October 2009. DOI: 10.1038/sj.bdj.2010.2. ©British Dental Journal 2010; 208: E1
IN BRIEF
• Investigates the prevalence and demography of clinical photography use by general dental practitioners (GDPs).
• Identifies the most popular uses of clinical photography.
• Looks at the reasons cited for use and non-use of clinical photography by GDPs.
BRITISH DENTAL JOURNAL © 2010 Macmillan Publishers Limited. All rights reserved.

Disadvantages of digital photography include the initial capital outlay and constantly changing technology, with an initial steep learning curve that may appear daunting. Clinical photography is, however, no more daunting than many clinical procedures.15 Digital technology appears to be here to stay, along with digital broadcasting, digital consumer goods and digital radiography.14 The digital single lens reflex (DSLR) camera with a high quality lens is considered the ideal choice for dental photography, as it is capable of taking portraits as well as close up or macro images of the dentition, by through-the-lens viewing and metering, precise focusing and accurate framing.16 Intra-oral or fibre optic cameras are a useful tool for the demonstration of patients’ oral problems on a monitor.
However, the quality may be insufficient for permanent documentation or for archiving.15 The clinical practice of dentistry is a very visual and hands-on discipline. Clinical video photography can be used as an effective marketing and educational tool. In one study, undergraduates taught using real-time video produced more accurately tapered preparations. This ability was retained over one year.17 Video photography may not only facilitate compliance with CPD requirements, but also familiarise clinicians with technical procedures. Videotaping a dental examination following a traumatic injury can optimise documentation. Standardisation of photographic shots is helpful with documentation and presentation.18

Transfer of digital images onto a computer and then over the Internet dramatically optimises their usefulness clinically, educationally, medico-legally and from a marketing perspective. The dynamic interplay between dental photography, computerisation and the Internet is demonstrated by websites such as www.locateAdoc.com, where clinical before and after shots are posted to promote dentists and procedures. YouTube hosted a two and a half minute video demonstration on standard dental photographic views by Dr Thomas Hedge (www.tomhedge.com), posted in February 2007, also advertising private courses and DVDs on dental photography. Another good example, which is also an interactive website, is www.thedigitaldentist-site.org.uk. This gives advice on technical procedures as well as recommending suitable equipment, including photographic equipment. The availability of privately run courses and associated DVDs was not researched at the time of the postal questionnaire.

METHOD
A questionnaire was designed using many of the questions from the 2002 study.13 These were repeated with the permission of the authors so that comparisons could be made.
Questions were devised to quantify the use and attitude of general dental practitioners (GDPs) towards clinical photography, and to relate their usage to the type of practice, age and gender of the practitioner. These questions had originally been piloted among 10 GDPs.13 The questionnaire, accompanied by a prepaid envelope and a letter explaining the project, was sent by post to 1,000 randomly selected UK dentists using data from the Medlist Database (www.medlist.co.uk), a company specialising in providing mailing lists for all areas of medicine, biology, science and technology. Random selection was achieved as follows: the larger list of dentists was first de-duplicated on the postcode field and then reduced to 1,000 random records. This was achieved by placing the postal codes into a column in an Excel spreadsheet and copying the random number formula into each cell of the adjacent column. The columns were linked and the random number field sorted to produce a random ordering of the postcodes, the first 1,000 of which were then selected in the database.

In an attempt to improve the response to the second mailing, a request letter was personally addressed to the recipient, explaining again the relevance of the research and the importance of receiving an adequate number of responses in order to make the study useful. As the study was designed to be anonymous, the second mailing had to be sent again to the entire sample of 1,000 dentists and was printed on pink paper to identify it from the first mailing, in which the questionnaires were green. The data were then entered onto an SPSS (SPSS Inc.) database for analysis, sorted and cross-checked against the relevant questionnaires to identify and correct erroneous data entries. Frequency analysis using descriptive statistics and then cross-tabulation in SPSS were used to identify demographic associations.
Confidence level calculation was completed using the formula recommended19 to assess the reliability of the data. The statistical significance of associations was tested using Chi square tests, with appropriate follow-up multiple comparisons to evaluate late response bias. Sample size calculation was performed: to detect a prevalence of 36% (the percentage of respondents who used photography in the previous study)13 with 5% precision and 95% confidence, 354 responders were calculated to be required. In addition, 15 postgraduate deaneries (Table 1) were contacted by phone to confirm the availability or otherwise of courses on dental photography in their region. The information received was collated manually. The availability of privately run courses in the UK was not researched.

RESULTS
As not all respondents answered every question, total numbers responding to each question varied.

General and demographic data
Replies were received from 562 dentists, a response rate of 56.2%. In the first mailing 38.0% (n = 382) responded, with a further 18.0% (n = 180) after the second mailing. Male respondents amounted to 71.4% (n = 399). On the question of using photography, 536 valid responses were received. Of these, 384 were from male responders and the rest, 152, were from female responders. Of the responders, 212 male and 57 female dentists used clinical photography. Regarding years since graduation, 3.8% (n = 22) had graduated between 0 and 5 years previously, 12.7% (n = 71) had graduated between 6 and 10 years previously, 25% (n = 140) between 11 and 20 years ago and 55.8% (n = 314) more than 21 years ago. Single-handed practitioners amounted to 31.4% (n = 190) of the respondents. Regarding practice location, 45.3% (n = 252) of the respondents’ practices were in a city or town centre
location, 38.4% (n = 213) were in a suburban location and 16.2% (n = 90) in a rural location. Regarding practice type, 40.4% (n = 224) of the respondents’ practices were ‘mainly National Health Service (NHS)’, 34.1% (n = 189) were ‘mainly private’ and 25.5% (n = 141) were ‘mixed NHS and private’. Specialist practices amounted to 11%

Table 1 Availability of courses in clinical photography
Deanery | Address | Website | Course/Title
Eastern | Postgraduate Dental Office, Eastern Deanery, Block 3 Ida Darwin Suite, Fulbourn Hospital, Cambridge CB1 5EE | www.easterndeanery.org | ‘Smile please! The art and science of dental photography’
Kent, Surrey & Sussex | The KSS Postgraduate Deanery, 7 Bermondsey Street, London SE1 2DD | www.dental.kssdeanery.org | ‘Photography in dental practice’; ‘Clinical photography’
London | London Deanery, 20 Guildford Street, London WC1N 1DZ | www.londondeanery.ac.uk | ‘Clinical photos, jaw recording in good practice clinical record keeping’
Mersey | Department of Postgraduate Dental Education, First Floor, Hamilton House, 24 Pall Mall, Liverpool L3 6AL | www.merseydeanery.nhs.uk | None
Northern | Postgraduate Institute for Medicine and Dentistry, 10-12 Framlington Place, Newcastle-upon-Tyne NE2 4AB | www.pimd.co.uk | ‘Dental photography to produce a case report’
Northern Ireland | Northern Ireland Medical and Dental Training Agency, 5 Annadale Avenue, Belfast, Northern Ireland BT17 3JH | www.nimdta.gov.uk | None
North Western | North Western Deanery, 4th Floor, Barlow House, Minshull Street, Manchester M1 3DZ | www.nwpgmd.nhs.uk | None
Oxford | The Department of Postgraduate Medical and Dental Education, The Triangle, Roosevelt Drive, Headington, Oxford OX3 7XP | www.oxdent.ac.uk | None
Scotland | NHS Education for Scotland, 2nd Floor, Hanover Buildings, 66 Rose Street, Edinburgh EH2 2NN | www.nes.scot.nhs.uk/dentistry | 2 courses for dentists; 2 courses for DCPs
South West | Dental Postgraduate Department, Bristol Dental Hospital, The Chapter House, Lower Main Street, Bristol BS1 2LY | www.swdentalpg.net | None
South
Yorkshire & East Midlands | Regional Postgraduate Dental Office, Don Valley House, Savile Street East, Sheffield S4 7UQ | www.pgde-trent.co.uk | None
Wales | Dental Postgraduate Department, Grove Mews, 1 Coronation Road, Birchgrove CF14 4QY | www.dentalpostgradwales.ac.uk | ‘Digital photography’
Wessex | NHS Wessex Deanery, Highcroft, Romsey Road, Winchester SO22 5DH | www.wessex.org.uk/dental | None
West Midlands | The Postgraduate Office, School of Dentistry, St Chad’s Queensway, Birmingham B4 6NN | www.postgrad-dentistry.bham.ac.uk | ‘Techniques of improving your image, Part I and Part II’ e-course and forum
Yorkshire | The Department for NHS Postgraduate Medicine and Dental Education, Willow Terrace Road, University of Leeds, Leeds LS2 9TJ | www.yorkshiredeanery.com | None

(n = 61), with a majority of these [59% (n = 36)] being orthodontic specialists. Regarding attendance at postgraduate courses or meetings during the year preceding the survey, 1.8% (n = 10) stated that they had attended no courses, 11.4% (n = 63) had attended one to two courses, 20.5% (n = 114) had attended three to four courses and 66.3% (n = 368) had attended five or more. Of the respondents who had attended courses, 15.9% (n = 86) had attended a course on dental photography. Listed in Table 1 are deanery courses in May 2007.

Clinical photography in some form was used by 48% (n = 270) of respondents. Of these, 34.4% (n = 93) used a 35 mm camera, with 59.3% (n = 160) using digital photography. A further 18.9% (n = 51) used a video camera. The most stated perceived advantages for the different photographic systems were reasonable cost, ease of use and the good quality of results achieved. The most frequently stated disadvantages of camera systems were that the camera system was bulky, time consuming, complicated, produced poor quality images and was expensive.
Uses of dental photography
The uses for which respondents used their photographic equipment are shown in Table 2, with treatment planning, followed by patient instruction and motivation, being the most frequently quoted uses. When asked to rate the usefulness of clinical photography for a variety of purposes, the respondents provided replies as indicated in Table 3, with medico-legal purposes and patient instruction/motivation considered the most useful. Regarding the frequency of use of clinical dental photography, 54% of users of photography (n = 168) photographed one to five cases per week, 14.1% (n = 46) photographed six to ten cases per week and 15.4% (n = 48) photographed more than ten cases. Photographs were stored most frequently on a computer [48.36% (n = 118)], 29.5% (n = 72) were stored in patient records and 3.7% (n = 9) in photographic files. Digital images were stored on a computer by 78.7% (n = 212) of respondents. With regard to future usage, 91.9% (n = 248) of respondents who used photography considered that they would use clinical photography more in the future. Clinical effectiveness was thought to be enhanced by 85.2% (n = 242) of respondents who used photography.

Associations between use of clinical photography and demographic factors
There was no evidence of a significant association between use of clinical photography and practice location (p = 0.263), whether the practice was single-handed or group (p = 0.144), or whether respondents replied to the first or the second mailing (p = 0.218). On the question of using photography, 536 valid responses were received. Three hundred and eighty-four of these were from male responders and the rest, 152, were from female responders.
Table 2 Uses of clinical photography in practice cited by respondents
Use | Number | Overall % | % of photography users
Patient instruction/motivation | 202 | 35.9 | 75.1
Treatment planning | 227 | 40.4 | 84.0
Communication with laboratory | 173 | 30.8 | 64.0
Medico-legal reasons | 194 | 34.5 | 71.8
Restoration performance | 163 | 18.3 | 60.3
Teaching | 91 | 16.2 | 33.7
Marketing | 134 | 23.8 | 49.6
Interest | 176 | 31.3 | 65.1
Other purposes | 44 | 7.8 | 16.2

Table 3 Usefulness of clinical photography indicated by respondents
Use | % very useful overall | % very useful of photography users
Medico-legal purposes | 27 | 56.2
Patient instruction/motivation | 26.5 | 55.1
Teaching | 17.6 | 36.6
Interest | 19.9 | 41.4
Treatment planning | 22.8 | 47.4
Liaison with laboratory | 19.6 | 40.7
Recording restoration performance | 10.9 | 22.5
Marketing | 21.4 | 44.4
Other reasons | 3.7 | 7

Table 4 Reasons given for not using clinical photography
Reason | % overall | % of those not using photography
No perceived need/demand | 27.6 | 58.1
High capital cost | 14.4 | 30.3
Too time consuming | 19.8 | 41.4
Don’t know what’s involved | 10.3 | 21.6
Have no interest in photographs | 6.0 | 12.6
Poor fees for photographs in NHS | 21.0 | 44.0
Infection control risk | 3.2 | 6.7
Other | 7.5 | 15.6

that they would commence taking clinical photographs within two years, and 31.8% (n = 50) within five years.

DISCUSSION
The method used in this research was a postal questionnaire, a research method used frequently in dentistry.20 That almost all of the questions were answered by 100% of responders suggests that the questionnaire was easily understood. The response rate of 56% was lower than the 76% rate of the previous (similar) study.13 There may be several reasons for this. It is possible that research performed by postal questionnaires is perceived as time-consuming and intrusive, particularly with the increasing amount of paperwork expected of most dentists and dental specialists.
Additionally, over the past three years there have been major changes in UK dentistry, with many GDPs pre-occupied with units of dental activity (UDAs) and working with the new NHS contract at the time when the questionnaire was distributed. Some of the results mirrored those found previously in 2002.13 An example of this was that users of clinical photography, as demonstrated in Table 2, considered dental photography useful for a number of reasons, with treatment planning, patient instruction and medico-legal reasons most popular. Reasons for not using photography were similar to those given in the 2002 study,13 with a significant number of dentists seeing no demand or need. Lack of time, high cost and poor remuneration were seen as further obstacles to using photography, as in the 2002 study13 (Table 4). Despite this, it was encouraging to note that 56% considered that they would commence using clinical photography at some time in the future, with 68% of these within the next two years. This last finding also mirrored the trend in the 2002 study,13 in which 51% of non-users considered that they would commence clinical photography at some point in the future. In this respect, since an increased number of respondents stated that they were using clinical photography, it may be considered that some of the earlier respondents actually did start using clinical photography in the time since the previous survey. As expected, the use of digital photography was predominant, being used by 59% of clinical photographers, although 34% of clinical photographers still used a 35 mm camera. This is in contrast to the 2002 study, in which a majority of respondents used 35 mm cameras with only 20% using a digital camera. The growth in digital over five years is therefore dramatic. Compared to the 5% in the previous 2002 study,13 16.1% of respondents had been on a course on dental photography in the previous year.
This is still a low percentage and, considering that 48% of respondents used clinical photography, it suggests that there is a need and probably a demand for a greater number of postgraduate courses than currently available in most deaneries (Table 1). It was suggested by Sarll and Holloway21 that ‘failure to make changes which might improve practice profitability as well as those promoting their own and their staff and patients’ welfare appeared to relate to lack of information’. It could, however, be suggested that lack of time to properly consider information also results in a failure to make changes. This paper, and the previous study,13 have shown that photography is more often used by men. Whether this is reflected in other areas of technology can only be surmised. To some extent, this may explain why use of clinical photography was more likely if you were a male practitioner. A significant proportion of the dental profession is female. The percentage of female dentists in the Medlist UK dental database was 36.3% (8,440 dentists).22

CONCLUSIONS
• Forty-eight percent of respondents used some form of clinical photography
• Private and specialist practitioners were more likely than practitioners in mixed NHS/private practice to use clinical photography. NHS practitioners were the least likely to use clinical photography (p <0.001)
• Male practitioners were more likely than female practitioners to use clinical photography (p <0.001)
• Medico-legal uses, patient instruction and motivation were cited as the most useful benefits of photography
• No perceived need, lack of time and poor fees under the NHS were the reasons given most frequently for not

Among the 384 male responders, 212 (55.2%, 95% CI 50.2%-60.1%) indicated that they use photography. Among the 152 female responders, 57 (37.5%, 95% CI 30.2%-45.5%) indicated that they use photography.
As the two confidence intervals do not overlap, we may conclude that male dentists are more likely to use photography compared to female dentists. There was some evidence of an association between use of clinical photography and years since graduation, with those graduated most recently (0-5 years) and those graduated longest (>20 years) being less likely to use clinical photography than those who had graduated between 6 and 20 years ago (p <0.001). Principals and vocational dental practitioners (VDPs) were significantly more likely to use clinical photography than respondents who were associates or who held ‘other’ positions in the practice (p = 0.003). There was a significant association between use of clinical photography and type of practice (NHS or private), with a greater proportion of respondents in private practice being users of clinical photography than respondents from mixed NHS/private or solely NHS practices (p <0.001). The percentage of respondents in mixed practices who used clinical photography was significantly greater than the percentage using clinical photography in NHS practices (p <0.001). The use of clinical photography in specialist practices was significantly greater than in general practices (p <0.001). Individuals who had attended courses on clinical photography were more likely to use clinical photography (p <0.001). Those already using clinical photography were prepared to spend more (p <0.001) on a new system.

Non-users of dental photography
Reasons for not undertaking clinical photography are listed in Table 4. The main reason stated was no perceived need or demand, by 58.1% (n = 155). The other main reasons were poor fees by 44.0% (n = 118), too time consuming by 41.4% (n = 111) and high capital cost by 30.3% (n = 81).
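The paper does not state which interval method produced the gender comparison above; the quoted figures (e.g. 50.2%-60.1% for 212/384 male users) are consistent with Wilson score intervals, so treating the method as Wilson is an assumption. A minimal sketch:

```python
import math

def wilson_ci(successes, n, z=1.96):
    """Wilson score interval for a binomial proportion (z = 1.96 for 95%)."""
    p = successes / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return centre - half, centre + half

male_lo, male_hi = wilson_ci(212, 384)      # male responders using photography
female_lo, female_hi = wilson_ci(57, 152)   # female responders using photography
# Non-overlap (female_hi < male_lo) supports the conclusion that male
# dentists were more likely to use photography.
```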
Of the respondents who did not use clinical photography (n = 278), 56.1% (n = 156) felt that they would commence using clinical photography at some time in the future, with 68.2% (n = 107) estimating

• using clinical photography
• Digital photography was used by 59.2%, 35 mm by 34.4% and a video system by 18.8% of users
• 16.1% had been on a photography course in the last year
• Considering the number of dentists using photography and intending to in the near future, there is a relative paucity of courses available nationally run by the deaneries.

Acknowledgements
Thanks are due to the practitioners who responded. Thanks are also due to Mrs Lynda Malthouse at the University of Birmingham School of Dentistry for printing the questionnaires and Ms Tracey Carter at Hilltop Dental Surgery, together with Peter, Felicity and Eloise Morse, for collating the data.

1. Benjamin S, Aguisre A, Drinnan A. Digital photography enables better soft tissue screening diagnosis and case acceptance. Dent Today 2002; 21(11): 116–121.
2. Wander P. Photography 1: uses in general dental practice. Dent Update 1983; 11: 297–304.
3. Vargas M A. Use of photographs for communicating with the laboratory in indirect posterior restorations. J Prosthodont 2002; 11: 208–210.
4. Dalin J B. Digital photography and imaging can enhance practice in several ways. J Indiana Dent Assoc 2002-2003; 81: 24–26.
5. Erten H, Uctasli M B, Akarslan Z Z, Uzun O, Semiz M. Restorative treatment decision making with unaided visual examination, intraoral camera and operating microscope. Oper Dent 2006; 31: 55–59.
6. Goldstein M B. Digital photography in your general dental practice. The why’s, how’s, and wherefore’s. Dent Today 2003; 22(4): 98–101.
7. Freedman G. Intraoral cameras: patient education and motivation. Dent Today 2003; 22(4): 144–151.
8. Amet E, Milana J.
Restoring soft and hard dental tissues using a removable implant prosthesis with digital imaging for optimum dental esthetics: a clinical report. Int J Periodontics Restorative Dent 2003; 23: 269–275.
9. Burns J S. Digital imaging. How candid use of a camera can promote your practice. Dent Today 2003; 22(6): 56–59.
10. Sandler J, Murray A. Digital photography in orthodontics. J Orthod 2001; 28: 197–201.
11. Christensen G J. Important clinical uses for digital photography. J Am Dent Assoc 2005; 136: 77–79.
12. Samaras C. Intraoral cameras: the value is clear. Compend Contin Educ Dent 2005; 26(6A Suppl): 456–458.
13. Sharland M, Burke F J T, McHugh S, Walmsley A D. Use of dental photography by UK dental practitioners. Dent Update 2004; 31: 199–202.
14. Ahmad I. Digital dental photography. Part 3: principles of digital photography. Br Dent J 2009; 206: 517–523.
15. Ahmad I. Digital dental photography. Part 1: an overview. Br Dent J 2009; 206: 403–407.
16. Ahmad I. Digital dental photography. Part 4: choosing a camera. Br Dent J 2009; 206: 575–581.
17. Robinson P B, Lee J W. The use of real time video magnification for the pre-clinical teaching of crown preparations. Br Dent J 2001; 190: 506–510.
18. Bengel W. Standardisation in dental photography. Int Dent J 1985; 35: 210–217.
19. Altman D G, Machin D, Bryant T N, Gardner M J (eds). Statistics with confidence. 2nd ed. p 46. London: BMJ Books, 2000.
20. Tan R T, Burke F J T. Response rates to questionnaires mailed to dentists. A review of 77 publications. Int Dent J 1997; 47: 349–354.
21. Sarll D, Holloway P J. Factors influencing innovation in general dental practice. Br Dent J 1982; 153: 264–266.
22. Nash P of Medlist. Personal communication, December 2007.
The development of one-stop wide-awake Dupuytren’s fasciectomy service: a retrospective review
QMK Bismil1 • MSK Bismil2 • Annamma Bismil3 • Julia Neathey4 • Judith Gadd4 • Sue Roberts5 • Jennifer Brewster6
1 Consultant Orthopaedic Surgeon, The Wellington Hospital, London NW8 9LE, UK
2 Consultant Orthopaedic Surgeon, ExpertOrthopaedics.Com Ltd, Boston, Lincolnshire PE21 9DP, UK
3 Nursing Sister, ExpertOrthopaedics.Com Ltd, Boston, Lincolnshire PE21 9DP, UK
4 Theatre Sister, Parkside Surgery, Boston, Lincolnshire PE21 6PF, UK
5 Healthcare Support Worker, Parkside Surgery, Boston, Lincolnshire PE21 6PF, UK
6 Theatre Nurse, Parkside Surgery, Boston, Lincolnshire PE21 6PF, UK
Correspondence to: QMK Bismil. Email: enquiries@expertorthopaedics.com

Summary
Objectives: To detail the transition to a totally one-stop wide-awake (OSWA) Dupuytren’s contracture surgical service.
Design: Retrospective review of the Dupuytren’s component of the last 1000 OSWA cases.
Setting: The UK’s first totally one-stop wide-awake orthopaedic service.
Participants: 270 patients with Dupuytren’s contracture out of the last 1000 OSWA cases.
Main outcome measures: Surgical outcomes, patient satisfaction, cost-effectiveness and efficiency.
Results: The OSWA Dupuytren’s model is safe, efficient and effective, with a low complication rate, extremely high patient satisfaction, and cost savings to the NHS of £2500 per case treated. The service saved the NHS approximately £675,000 for the 270 cases presented.
Conclusions: A totally one-stop wide-awake Dupuytren’s contracture service is a practicable and feasible alternative to the conventional treatment pathway, with benefits in terms of efficiency and cost-effectiveness.

DECLARATIONS
Competing interests: None declared
Funding: No authors have received funding or sponsorship for this paper or the associated audit
Ethical approval: This is a retrospective audit and review of service, hence no ethical approval has been sought
Guarantor: QMK
Contributorship: All authors contributed directly to the development of the OSWA service, the data collection for and authorship of the paper; hence all listed authors were contributors
J R Soc Med Sh Rep 2012;3:48. DOI 10.1258/shorts.2012.012050

Introduction
Dupuytren’s contracture is a surgical problem traditionally managed in the hospital setting with surgery under general anaesthetic through selective fasciectomy. Two North American studies1,2 have shown selective Dupuytren’s fasciectomy under local anaesthesia is a safe alternative to the traditional approach; but thus far no centre has incorporated wide-awake Dupuytren’s surgery into a one-stop (same day) service. Recently, our one-stop wide-awake (OSWA) hand surgery service has been recognized as the first high volume provider of one-stop wide-awake hand surgery,3 and this paper details the Dupuytren’s component of the service. Traditionally, surgery was performed under general anaesthesia using a tourniquet. Heuston’s
A lower-risk alternative to the standard management pathway might enable earlier intervention (e.g. before significant proximal interphalangeal (PIP) contracture has developed). Historically, surgery under general anaesthesia was performed on an inpatient, or increasingly on a daycase, basis with full anaesthetic and recovery support. An alternative is the brachial plexus block or Bier’s block. The traditional patient pathway would involve: a GP consultation and referral; hospital outpatient visit; waiting list; admission for surgery with standard anaesthetic and surgical management; and outpatient follow-up appointments.4–6 In this paper we outline the development of a totally one-stop wide-awake Dupuytren’s service to provide a stimulus for debate and also to present a fourth option in addition to multistop surgery under general anaesthetic (GA) and multistop outpatient treatment with fasciotomy, either through bacterial enzyme injection or using a needle. The American papers on wide-awake fasciectomy1,2 have demonstrated that wide-awake Dupuytren’s surgery is possible, safe and effective as an adjunct to surgery under GA in selected patients. Here we demonstrate that this technique can be incorporated into a totally one-stop wide-awake Dupuytren’s service, without any triage or case selection, with potential benefits in terms of efficiency, patient satisfaction and cost-effectiveness; plus avoiding some of the traditional surgical risks such as general and regional anaesthesia and tourniquets, and potentially enabling removal of the pathological tissue at an earlier stage before irreversible contracture has set in. Methods In the 1000-case analysis recently published in this journal,3 270 were Dupuytren’s cases. Only complete audit cycles with medium-term follow-up data were included in the original group, and indeed in the Dupuytren’s subgroup. In this paper we analyze these cases in more detail. We treat NHS patients through a county-wide surgical scheme.
In Lincolnshire, because our service has proved so successful, Dupuytren’s contracture is not routinely funded for treatment under general anaesthesia or in hospital. Once a referral is received from a GP or intermediate care, the patient is contacted and asked to choose their own appointment slot. There is no waiting list. Before surgery, the patient is sent an information sheet and asked to complete a simple health questionnaire and a validated quick disabilities of the arm, shoulder and hand (DASH) score, which they give to the Consultant during their appointment. There is full web support such that all patient information and data entry can be done by the patient online. On the day of surgery, the patient is assessed by the surgeon for surgery, informed consent is obtained and surgery is performed: all within a 30–45 minute management slot. Documentation is streamlined yet comprehensive, and again there is full web support. There are no preoperative or postoperative consultations. Patients are always at liberty to defer, postpone or not proceed with surgery. Tubiana stage is assessed and documented (Table 1). This is the standardized method of grading the total degree of contracture at the metacarpophalangeal (MCP) and PIP joints: stage 1 = 0–45 degrees; stage 2 = 45–90 degrees; stage 3 = 90–135 degrees; stage 4 = 135–180 degrees.7 Fasciectomy is performed using lignocaine with adrenaline in the palm and plain lignocaine for the digits. No tourniquet is used. No diathermy is used. Straight incisions with or without Z-plasty are employed. Meticulous dissection is essential. Table 1 Tubiana stages (percentage of cases): stage 1: 25%; stage 2: 14%; stage 3: 26%; stage 4: 35%. Acknowledgements None.
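The Tubiana staging thresholds quoted above can be captured in a short helper. This is an illustrative sketch only: the function name is ours, and the assignment of exact boundary angles (45, 90, 135 degrees) to the lower stage is our assumption, not stated in the paper.

```python
def tubiana_stage(total_deformity_deg: float) -> int:
    """Map a total flexion deformity (MCP + PIP, in degrees) to the
    Tubiana stage quoted in the text: stage 1 = 0-45, stage 2 = 45-90,
    stage 3 = 90-135, stage 4 = 135-180 degrees.
    Boundary angles are assigned to the lower stage here (an assumption)."""
    if not 0 <= total_deformity_deg <= 180:
        raise ValueError("total deformity must be between 0 and 180 degrees")
    if total_deformity_deg <= 45:
        return 1
    if total_deformity_deg <= 90:
        return 2
    if total_deformity_deg <= 135:
        return 3
    return 4
```

For example, a combined MCP + PIP contracture of 100 degrees would be recorded as stage 3 under this scheme.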
The relation of the Dupuytren’s tissue to the neurovascular bundle is key: the pretendinous band is volar and central; the spiral band dorsal and peripheral; the lateral digital sheet dorsal and peripheral; Grayson’s ligament volar. For proximal interphalangeal (PIP) contracture we selectively use volar joint capsulotomy, release of the checkrein ligaments, and release of the accessory collateral ligaments. The flexor tendon sheath is opened in cases of a persistent deformity limited dynamically by the flexor mechanism. Regarding the technique with no tourniquet, it is indeed more challenging, but careful dissection and a meticulous technique of fasciectomy will minimize bleeding. In our experience, point bleeding from the skin edges or subcutaneous fat will always stop after the first few minutes of surgery and with gentle but sustained pressure. Our injection technique (Figure 1) has been devised and adapted to minimize pain and needle penetrations. It involves a maximum of three needle penetrations using a 25 gauge needle in the palm with infiltration of lignocaine with adrenaline in the subcutaneous plane (blanching must be seen), and one penetration through the anaesthetized skin at the base of the digit and into the volar compartment of the finger (no blanching, but flexion of the digit will be seen with correct needle positioning) with a 21 gauge needle. Because of the deep penetration of the digital injection, we prefer not to use adrenaline for this injection; and once again would state that, in Figure 1 Injection technique. (a) 25 gauge needle in palm directed perpendicular to skin: 2–3 penetrations, lidocaine 2% with adrenaline 1:200 000; blanching of skin should be seen to confirm correct subcutaneous plane. (b) 21 gauge needle directed through anaesthetized skin into volar compartment of finger: plain lignocaine 2%. (c) Finger flexion confirms correct volar compartment injection
our experience, digital bleeding is not a limiting factor: excellent visualization is achieved (Figure 2). The skin is closed with interrupted and continuous silk stitches, which are removed after 10 days. Melolin dressings and a boxing-glove pressure dressing (Figure 3) with the finger splinted straight are used for 1 week. This is easily fashioned with a roll of wool secured with a 4 inch crepe bandage. A sling is provided for the first 3 days. Early active, gentle movement is encouraged and passive stretching is commenced once the wound is healed. We do not splint postoperatively and do not refer to physiotherapy. Most patients can return to work after 2–3 weeks. For staged operations (multiple digits, bilateral), subsequent stages can be performed any time after two weeks. The timeline for the streamlined and efficient OSWA Dupuytren’s pathway is detailed in Table 2, and the streamlining of the surgical equipment is underlined in Figure 4. For the purposes of this audit the patients were reassessed at least once by the surgical team post-surgery and also via postal, telephonic and online questionnaires. Correction of deformity was assessed through digital photography immediately and one week after surgery, using an Android tablet and the Smart Tools (Copyright 2010–2012 Smart Tools Co) application; this was expressed as a percentage. Patient satisfaction and subjective correction of deformity were assessed by postal, telephonic and online questionnaires which the patients were asked to complete at their convenience. If the patient has a straight finger after surgery, as measured with a true lateral photograph of the finger, this is a 100% correction; otherwise the percentage correction is calculated as: [(180 − residual angular deformity) / 180] × 100. We do not use total active range of motion for two reasons.
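The percentage-correction formula above is simple enough to express directly. A minimal sketch, with function and variable names of our own choosing:

```python
def percentage_correction(residual_deformity_deg: float) -> float:
    """Percentage correction of deformity as defined in the text:
    [(180 - residual angular deformity) / 180] * 100.
    A straight finger (0 degrees residual deformity) scores 100%."""
    if not 0 <= residual_deformity_deg <= 180:
        raise ValueError("residual deformity must be between 0 and 180 degrees")
    return (180 - residual_deformity_deg) / 180 * 100
```

So a finger left with a 36-degree residual deformity would be recorded as an 80% correction, and a fully straight finger as 100%.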
Firstly, this is not a study of recurrence following fasciectomy, which has been well-investigated.4–6 Secondly, in our one-stop service we have a total of 30–45 minutes to assess and treat the patient, and do not have the luxury of multiple outpatient appointments. Using a goniometer to measure range of motion pre- or post-operatively is not possible in the context of our service; on the other hand, a digital photograph adds no time or obstacles to the one-stop pathway. We suggest that in the modern age digital photography and proprietary calibrated applications can be utilized in this fashion to screen for acceptable early correction of deformity; especially if allied with the patient’s Figure 2 One-stop wide-awake fasciectomy Figure 3 Boxing glove dressing Table 2 Dupuytren’s one-stop management timeline Timeline Process/Practical points Referral received • Admin staff contact patient • Patient chooses appointment slot • Admin staff post out preoperative health screening and advice. 30 minutes prior to one-stop slot • Patient arrives in waiting area, reads preoperative material One-stop slot: 0–10 minutes CONSULTATION • Surgeon calls patient • Assesses patient • Fills in preoperative and consent documentation • Marks skin incision • Risks and benefits and pros and cons of treatment options discussed • Informed consent obtained. • Simultaneously, theatre nurse and healthcare support worker clean theatre and prepare equipment. 10–12 minutes • Healthcare support worker gowns and covers patient in dressing cubicle (surgical gown, shoe coverings, surgical hat) and brings into theatre; positions supine on operating table. 12–15 minutes • Surgical ‘time-out’ as per World Health Organization (WHO) criteria.
• Surgeon administers local anaesthesia • 5–10 ml 2% lignocaine with (palm) or without adrenaline (fingers) ◦ 10 ml syringe ◦ 21 gauge needle ◦ 25 gauge needle • Prepping and draping. 15–40 minutes PROCEDURE • Straight incisions • Z-plasty as necessary • Standard equipment: • Scalpel • 2 × 15 blades • 2 × single skin hooks • 2 × double skin hooks • Allis forceps • Fine-toothed forceps • 1 pack small swabs • 1 × 3-0 silk stitch (palm) • 1 × 4-0 silk stitch (digits) 40–45 minutes • Melolin dressing. • 2 inch crepe for fingers, otherwise 4 inch crepe. • Boxing-glove type pressure dressing with affected digit splinted in extension (Figure 3): ◦ Use roll of wool in palm to apply gentle pressure on wound and avoid postoperative haematoma or healing in flexion. • Postoperative advice including sheet plus full online support. Postoperative advice • Elevation 3 days • Boxing-glove dressing for 7 days • Wound check within 1 week • Early active gentle mobilization of digits • Suture removal after 10 days • Active grip strengthening/stretching of digit in extension once wound healed. perception of correction, which is surely the key, along with their satisfaction. The North American papers have demonstrated that the technique we use is an effective and good option on the balance of risks and benefits; with a complication profile (including recurrence) similar to treatment under GA but without the risks of GA or tourniquets.1,2 All Dupuytren’s patients are clinically reassessed after one week for the purposes of the audit, and staff document outcome on three criteria: surgical site, functional assessment and percentage correction of deformity.
Thenceforth we rely on the patients reporting their own outcome via our website and via postal and telephonic questionnaires, which include pre- and post-operative validated quick-DASH scores. Outcome is assessed on the basis of five outcome criteria (Table 3). For an excellent outcome all available criteria are satisfied. For a good outcome all but one available criteria are satisfied. Results • 270 consecutive cases treated. • 26% females, 74% males. • Mean age 64.7 ◦ Range 44 to 84 • 57% right hands, 43% left hands. • Atypical presentation: ◦ Primary complaints of pain, neurological symptoms, triggering of digit, extensive skin involvement and puckering or nodular disease with functional disability. ◦ Up to 20% of cases ◦ See case report section • Over 5% revision cases ◦ Previous surgery under general anaesthesia ◦ Recurrent deformity ◦ No significant difference in outcome between revision and primary cases. • Mean correction of deformity as analyzed by digital photography: ◦ 95% • Range 30–100% ◦ There was no significant difference between immediate postoperative and 1-week postoperative corrections on calibrated digital photography. • Z-plasty and flexor mechanism assessment and optimization routinely performed for stage 3 and 4 cases. • Infection rate <0.5% • Patient satisfaction rate with service over 99%. • Over 99% good-excellent clinical results on the basis of satisfying all or all but one available outcome criteria. • Unsightly/problematic scar less than 5%; delayed wound healing 10%. • Finger affected ◦ 41% ring ◦ 37% little ◦ 15% middle ◦ 4% thumb ◦ 3% index • Isolated PIP contracture ◦ 20% of cases. • Pain control ◦ 40% of patients took no simple analgesia after surgery. ◦ 60% took simple analgesia. Figure 4 One-stop wide-awake Dupuytren’s tray • Patient perspective: ◦ 87% report subjective correction of their deformity.
• There were no intraoperative complications (such as nerve, vascular or tendon injury), no cases of finger amputation and no re-operations for early recurrence. • There were no cases of complex regional pain syndrome. • No cases unsuitable for local anaesthetic surgery or requiring onward referral. • No patient deferred/postponed surgery or did not attend. • One-stop wide-awake service: ◦ One-stop wide-awake care achieved for all cases ◦ Saves the NHS approximately £2500 per case, with a cost saving of £675,000 for the cases presented. Case reports A to E – examples of atypical presentations and revision cases Case A: Removal of pathological Dupuytren’s fascia A 40-year-old gentleman presented with painful stage 1 Dupuytren’s with no palpable cord; in the patient’s words: ‘I’ve been working for myself this last two years as a tree-surgeon and landscape gardener, so I have Table 3 Outcome criteria Five outcome criteria* Criterion Good outcome I Patient satisfaction survey – postal or online Patient satisfaction achieved II Postoperative audit assessments by clinical team – surgical site monitoring and deformity correction A. Acceptable correction of deformity: • 100% stage 1–2 • ≥90% stage 3–4 unless late presentation of fixed PIP contracture or criterion V achieved (see below) • Fixed PIP contracture: ≥50% correction achieved or criterion V achieved (see below) B. Surgical site monitoring – patient followed to good outcome: • Acceptable healing achieved. C. Functional assessment • Acceptable outcome achieved. III Complication reporting – OSWA staff and patient: • Surgical documentation • Online secure complication log** • Via telephone helpline or through email** No intraoperative complication IV Validated quick-DASH score Improvement of hand function post-surgery V Range of motion Clinically important difference (CID) achieved.
8 * Every attempt possible is made to collect all five outcome criteria for an individual patient, but this is not always possible: for an excellent outcome all available outcome criteria must be satisfied, and for a good outcome all but one. ** OSWA processes mean that all perceived problems are immediately assessed, verified and managed through open access clinical consultation always been very hard on my hands. My hands are my business, without them I cannot make money. I have never had any problems with my hands until 6 months ago when I bought a large industrial pressure washer… I have used smaller pressure washers on people’s paths, patios and driveways. I was getting more work around spring time where people wanted their properties to look nice for the summer. I had a brand new pair of padded gloves to reduce the vibration in my hands. After using my pressure washer all day I found that I could not feel my thumbs, kind of pins and needles. That didn’t feel right, so I gave it a rest for a few days. I had at least 2 weeks work on, so I spread the work out over a month and got the work finished… [I] put up with the pins and needles because I didn’t want to let people down, knowing that when I had finished, I would not do so much all at once with the pressure washer. ‘To my surprise I started to feel sharp painful lumps on my hands and they were extremely painful when using equipment such as chainsaws. The lumps just came up from nowhere over a few months which has lead me to Mr Bismil and his team, who have removed them now and all the pains have gone.’ At operation, the patient was found to have multifocal nodular Dupuytren’s disease compressing the adjacent digital nerves. Fasciectomy was performed with immediate resolution of the neurological symptoms as documented by the patient himself above.
Case B: Locked triggering digit secondary to Dupuytren’s A 65-year-old gentleman with a fixed trigger finger secondary to Dupuytren’s with no palpable cord, treated with fasciectomy. Case C: Revision case referred to OSWA – previous surgery under GA A 40-year-old gentleman who had his right ring finger Dupuytren’s operated upon twice, 15 years ago, under general anaesthesia, but was left with a recurrent deformity after surgery; his finger had not been straight after either of the original operations. At OSWA surgery we were able to correct the deformity through removal of the pathological fascia and scar tissue that had been left. The ring finger was 100% corrected and the little finger has also improved greatly on account of the optimization of the ring finger and the flexor mechanism. A further OSWA treatment for the little finger is planned later this year. ‘I was pessimistic when I was referred. I had accepted I was going to lose the finger. I came requesting amputation but was reassured by the surgeon and had my OSWA treatment to straighten the finger.’ Case D: Revision case originally treated under GA, fully corrected at OSWA A 70-year-old gentleman with recurrent Dupuytren’s following surgery under GA 10 years ago: fully corrected at OSWA surgery. Case E: Extensive skin Dupuytren’s disease treated at OSWA A 59-year-old male with little finger Dupuytren’s with extensive skin involvement and puckering, pre- and 1 week post-OSWA fasciectomy. Discussion Dupuytren’s contracture is a fibroproliferative disorder.
Aetiology is multifactorial, with: genetic predisposition (autosomal dominant with incomplete penetrance, Scandinavian, European or Irish lineage); and superimposed environmental (microvessel ischaemia) and biochemical/cellular factors (altered collagen profile, platelet-derived growth factor, fibroblast growth factor, myofibroblast proliferation).4–6 Surgical fasciectomy with removal of the Dupuytren’s tissue remains the gold standard treatment for Dupuytren’s contracture. Most surgeons remove the pathological fascia per se (selective fasciectomy) rather than all of the fascia including normal fascia (total fasciectomy).4–6 Another option in primary disease configurations with a palpable cord is to divide the cord (fasciotomy) through bacterial enzyme (collagenase) injection8 or needle fasciotomy.9 These options have recently been popularized on account of the perceived lower risks, with no general anaesthesia and no tourniquet required, and lower costs versus standard surgery. In our experience both of these techniques have limitations. The effects of bacterial protein injection have not yet been ascertained in the long term, or compared in a level-one or long-term study to surgical fasciectomy. Early complications of injection treatment include reactions to the collagenase and damage to adjacent structures; and there is a risk of tendon rupture with an injection that is directed into the tendon, which sits just under the diseased fascia. Moreover, neither technique is all-encompassing, and each can only be used for certain configurations and stages of disease. Most surgeons fasciotomise the cord before removing it, and in our experience this does not result in an acceptable correction of deformity in most cases. Furthermore, isolated PIP contracture is fairly common and forms 20% of our practice; often in these cases there is no palpable cord and fasciotomy would not be logical.
A single-injection fasciectomy treatment for Dupuytren’s is an appealing prospect for patients but, at present, this treatment is not available. The current bacterial enzyme pathway would involve multiple outpatient stops; usually involving multiple visits to the clinic, multiple injections with manipulation of the finger and a prolonged period of splinting. Both injections and fasciotomy, by definition, leave the majority of the pathological tissue behind. Whilst we agree that conventional surgery is relatively expensive, OSWA surgery is not; hence the potential advantages of the fasciotomy treatments (injection or needle) in terms of cost saving, no GA and no tourniquet are not relevant versus OSWA surgery. Once the costings for the bacterial enzyme injection pathway include more than one injection and multiple outpatient appointments, OSWA surgery becomes significantly more cost-effective. Because we are fortunate to see and treat such large volumes of Dupuytren’s and hand surgery patients, we see many atypical presentations of Dupuytren’s (up to 20% of cases); with patients presenting with pain, neurological symptoms, triggering and nodular disease or skin puckering with functional disability. Hence, a one-stop service based around fasciotomy (either through bacterial protein injection or needle fasciotomy) does not seem to be an option: firstly because neither technique can be used to treat all Dupuytren’s cases (i.e. revisions from failed previous treatment, and atypical presentations); and secondly because of the multistop pathway for these treatments. Of the cases presented above or the 20% atypical presentations, none would have been suitable for fasciotomy.
Traditionally, operation under general anaesthesia was advised if the metacarpophalangeal (MCP) or proximal interphalangeal (PIP) joint deformity exceeded 30 degrees;4–6 and indeed this corresponds with Hueston’s table top test (the patient can no longer achieve a flat hand in contact with the table). However, complete correction of a PIP joint flexion deformity is difficult to achieve and maintain. Hence, operative intervention has generally been recommended when any PIP joint contracture is noted. With traditional surgery for Dupuytren’s with PIP contracture, at 2-year follow-up only 44% reported improvement in PIP joint extension.10 Historically, Dupuytren’s was considered a complex operation requiring general anaesthesia under tourniquet and hospital admission. Traditionally, surgery was therefore often reserved for resistant cases. Previous authors have demonstrated the safety and efficacy of Dupuytren’s surgery under local anaesthesia.1,2 However, worldwide the uptake and evolution to total wide-awake Dupuytren’s surgery has been slow. Certainly, the technique under local anaesthesia without tourniquet is more demanding. A large experience of Dupuytren’s surgery and local anaesthetic surgery is essential; and the surgeon has to become comfortable with performing surgery without the use of a tourniquet. Our Dupuytren’s service has been established for 10 years, and the rate and complexity of referrals is increasing; this year we anticipate approximately 400 referrals (Figure 5). The results presented here confirm that a total one-stop wide-awake Dupuytren’s service is possible, safe and effective. If Dupuytren’s contracture is treated through a total wide-awake approach, the risks of surgical intervention are re-balanced and there is a possibility of treating the condition at an earlier stage before severe contracture develops.
Certainly, all our procedures are performed on the basis of a favourable balance of benefits versus risks of surgical intervention. Furthermore, our experience and analysis of the early (stage I and II) versus late (stage III and IV) presentations demonstrates that more excellent outcomes are achieved in early cases. Moreover, our experience confirms that PIP contracture and stage IV contractures are difficult to fully correct. Recurrence of a contracture can only occur if the finger is left to bend for some time in the first place, hence overall we would advocate early intervention for symptomatic Dupuytren’s. The one-stop service is efficient from a patient and service perspective, and is cost-effective. We are able to operate the service at approximately 25–30% of national tariff costs, i.e. a saving of Figure 5 Service figures. (a) Latest monthly figures. (b) 2009 case mix: approximately 500 cases per year, Dupuytren’s less than 1% of cases. (c) Last 1000 cases (2011). (d) 2012 projections: approximately 1200 cases up to seventy-five per cent. With the present and impending changes in healthcare provision, the delivery and cost-effectiveness of elective surgical services must be optimized. This paper demonstrates that a total wide-awake approach to Dupuytren’s contracture is safe, effective and practicable. We suggest that a transition to a total one-stop wide-awake approach to Dupuytren’s contracture would secure the cost-effective, efficient delivery of low-risk Dupuytren’s contracture management going forward.
In addition to the importance of the OSWA Dupuytren’s service in the streamlined and cost-effective delivery of Dupuytren’s care, our transition to a total OSWA Dupuytren’s service demonstrates that surgery previously thought to be suitable only for in-hospital care can now be managed through a one-stop wide-awake approach. Dupuytren’s was the most complex and last condition to be incorporated into our service. We suggest that future OSWA services based on our model (whether orthopaedic or other speciality) can build upon our experience of our service as a whole3 and with regard to incorporating complex conditions such as Dupuytren’s. We would suggest that initially the surgeon needs to have a good experience of the traditional management of the conditions (GA, tourniquets, regional anaesthesia, multistep pathway); then they can establish an OSWA-type service for minor conditions; and, with all of the necessary management, process and clinical issues dealt with, they can transition to more complex procedures (in a hand surgery practice this would include Dupuytren’s and ulnar nerve entrapment at the elbow and wrist). This is demonstrated by considering our 2009, 2011 and projected 2012 figures. For Dupuytren’s we were aware of the North American progress with wide-awake fasciectomy and built upon this. The senior authors (MSKB and QMKB) have been treating Dupuytren’s contracture purely under local anaesthesia for the last decade and have not had to reject/refer a single case for treatment under general anaesthesia. The success of our service is such that in our region, under the Lincolnshire Primary Care Surgical Scheme, Dupuytren’s contracture is not funded for treatment under general anaesthesia or in hospital unless exceptional circumstances can be demonstrated. The cost saving to the NHS per case through our service is approximately £2500.
With 12,000 Dupuytren’s operations performed in the UK each year,11 there is the potential of a thirty-million-pound cost-saving to the NHS going forward. References 1 Nelson R, Higgins A, Conrad J, Bell M, Lalonde D. The wide-awake approach to Dupuytren’s disease: fasciectomy under local anesthetic with epinephrine. Hand (N Y) 2009 Nov 10 [Epub ahead of print] 2 Denkler K. Dupuytren’s fasciectomies in 60 consecutive digits using lidocaine with epinephrine and no tourniquet. Plast Reconstr Surg 2005;115:802–10 3 Bismil MSK, Bismil QMK, Harding D, Harris P, Lamyman E, Sansby L. Transition to total one-stop wide-awake hand surgery service-audit: a retrospective review. J R Soc Med Sh Rep April 2012;3:23. doi: 10.1258/shorts.2012.012019 4 Benson LS, Williams CS, Kahle M. Dupuytren’s contracture. J Am Acad Orthop Surg 1998;6:24–35 5 Black EM, Blazar PE. Dupuytren disease: an evolving understanding of an age-old disease. J Am Acad Orthop Surg 2011;19:746–57 6 Brenner P, Rayan GM, Millesi H, Gratzer J, Schimek H, Twisselmann B. Dupuytren’s Disease: A Concept of Surgical Treatment [Kindle Edition]. Springer; 1st edition (15 April 1994) ASIN: B000PY4F5C 7 Tubiana R, Michon J, Thomine JM. Scheme for the assessment of deformities in Dupuytren’s disease. Surg Clin North Am 1968;48:979–84 8 Witthaut J, Bushmakin AG, Gerber RA, Cappelleri JC, Le Graverand-Gastineau MP. Determining clinically important changes in range of motion in patients with Dupuytren’s contracture: secondary analysis of the randomized, double-blind, placebo-controlled CORD I study. Clin Drug Investig 2011;31:791–8. doi: 10.2165/11592940-000000000-00000 9 Van Demark RE Jr, Van Demark RE 3rd. Needle aponeurotomy: a treatment alternative for Dupuytren’s disease. S D Med 2011;64:411–3, 415, passim 10 Rives K, Gelberman R, Smith B, Carney K. Severe contractures of the proximal interphalangeal joint in Dupuytren’s disease: results of a prospective trial of operative correction and dynamic extension splinting.
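The cost-saving figures quoted in the paper are simple products of the per-case saving and the case counts; a quick sanity check (the variable names are ours):

```python
# Sanity-check the cost-saving arithmetic quoted in the paper.
SAVING_PER_CASE_GBP = 2_500      # approximate NHS saving per OSWA case
AUDITED_CASES = 270              # Dupuytren's cases in this series
UK_OPERATIONS_PER_YEAR = 12_000  # UK Dupuytren's operations per year (ref. 11)

series_saving = SAVING_PER_CASE_GBP * AUDITED_CASES
national_saving = SAVING_PER_CASE_GBP * UK_OPERATIONS_PER_YEAR

print(f"Series saving:   £{series_saving:,}")    # £675,000
print(f"National saving: £{national_saving:,}")  # £30,000,000
```

Both products agree with the figures stated in the Summary (£675,000 for 270 cases) and the Discussion (a thirty-million-pound potential national saving).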
J Hand Surg Am 1992;17:1153–9 11 See http://www.nhs.uk/conditions/Dupuytren’s-contracture/Pages/Introduction.aspx (last accessed 29 May 2012) © 2012 Royal Society of Medicine Press This is an open-access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by-nc/2.0/), which permits non-commercial use, distribution and reproduction in any medium, provided the original work is properly cited.
TECHNICAL ADVANCE Open Access

Automatic colorimetric calibration of human wounds

Sven Van Poucke1*†, Yves Vander Haeghen2†, Kris Vissers3, Theo Meert4, Philippe Jorens5

Abstract

Background: Digital photography has recently come to be considered an acceptable tool in many clinical domains, e.g. wound care. Although ever higher resolutions are available, reproducibility is still poor and visual comparison of images remains difficult. This is even more the case for measurements performed on such images (colour, area, etc.). The problem is often neglected, and images are freely compared and exchanged without further thought.

Methods: The first experiment checked whether camera settings or lighting conditions could negatively affect the quality of colorimetric calibration. Digital images including a calibration chart were exposed to a variety of conditions. The precision and accuracy of colours after calibration were quantitatively assessed with a probability distribution of perceptual colour differences (dE*ab). The second experiment was designed to assess the impact of the automatic calibration procedure (i.e. chart detection) on real-world measurements. 40 different images of real wounds were acquired and a region of interest was selected in each image. Three rotated versions of each image were automatically calibrated and colour differences were calculated.

Results: In the first experiment, colour differences between the image measurements and true spectrophotometric measurements revealed median dE*ab values of 6.40 for the proper patches of calibrated normal images and 17.75 for uncalibrated images, demonstrating an important improvement in accuracy after calibration.
The reproducibility, visualized by the probability distribution of the dE*ab errors between two measurements of the patches of the images, has a median of 3.43 dE*ab for all calibrated images versus 23.26 dE*ab for all uncalibrated images. If we restrict ourselves to the proper patches of normal calibrated images, the median is only 2.58 dE*ab. Wilcoxon rank-sum testing (significance threshold p < 0.05) between uncalibrated normal images and calibrated normal images with proper squares yielded a p-value of effectively zero, demonstrating a highly significant improvement in reproducibility. In the second experiment, the reproducibility of the chart detection during automatic calibration is presented using a probability distribution of dE*ab errors between two measurements of the same ROI.

Conclusion: The investigators propose an automatic colour calibration algorithm that ensures reproducible colour content of digital images. Evidence is provided that images taken with commercially available digital cameras can be calibrated independently of any camera settings and illumination features.

Background

Chronic wounds are a major health problem, not only because of their incidence, but also because of their time- and resource-consuming management. This study was undertaken to investigate the possible use of colorimetric imaging in the assessment of human wound repair. The design of the current study is based on the system requirements for colorimetric diagnostic tools published previously [1,2]. Digital photography is considered an acceptable and affordable tool in many clinical disciplines such as wound care and dermatology [3-12], forensics [13,14], pathology [15], traumatology, and orthodontics [16,17]. Although the technical features of most digital cameras are impressive, they are unable to produce reproducible and accurate images with regard to spectrophotometry [18-22].
Taking two pictures of a wound with the same camera and settings, immediately after one another, normally results in two slightly different images. These differences are exacerbated when the lighting, the camera or its settings differ. Therefore, reproducibility is poor.

* Correspondence: svenvanpoucke@woundontology.com
† Contributed equally
1 Department of Anaesthesia, Critical Care, Emergency Care, Genk, Belgium

Van Poucke et al. BMC Medical Imaging 2010, 10:7 http://www.biomedcentral.com/1471-2342/10/7
© 2010 Van Poucke et al; licensee BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

This may be less important when photographs are taken for documentation purposes, but when digital photography becomes part of medical evaluation or is used to perform measurements, it becomes critically important [5,6,18,23-29]. In our view, the quality of medical photography is principally defined by its reproducibility and accuracy [21]. Without reproducibility and accuracy of images, any attempt to measure colour or geometric properties is of little use [27]. A simple, practical and validated algorithm to solve this problem is necessary (Figure 1).

Almost all colours can be reconstructed using a combination of three base colours: red, green and blue (RGB) [30]. Together, these three base colours define a 3-dimensional colour space that can be used to describe colours.

The accurate handling of colour characteristics of digital images is a non-trivial task because RGB signals generated by digital cameras are 'device-dependent', i.e. different cameras produce different RGB signals for the same scene.
In addition, these signals change over time, as they depend on the camera settings, some of which may be scene dependent, such as the shutter speed and aperture diameter. In other words, each camera defines a custom device-dependent RGB colour space for each picture taken. As a consequence, the term RGB (as in RGB-image) is ill-defined and meaningless for anything other than trivial purposes. As measurements of colours and colour differences in this paper are based on the standard colorimetric observer defined by the CIE (Commission Internationale de l'Eclairage), the international standardizing body in the field of colour science, such measurements cannot be made on RGB images unless the relationship between the varying camera RGB colour spaces and the colorimetric colour spaces (colour spaces based on said human observer) is determined. However, there is a standard RGB colour space (sRGB) that is fixed (device-independent) and has a known relationship with the CIE colorimetric colour spaces. Furthermore, sRGB should display more or less realistically on most modern display devices without extra manipulation or calibration (look for an 'sRGB' or '6500K' setting) [31]. One disadvantage of sRGB is that it cannot represent all the colours detected by the human eye. We believe that finding the relationship between the varying and unknown camera RGB and the sRGB colour space will eliminate most of the variability introduced by the camera and lighting conditions. The transformation between the input RGB colour space and the sRGB colour space was achieved via a colour target-based calibration using a 'reference chart', namely the MacBeth Colour Checker Chart Mini (MBCCC; GretagMacBeth AG, Regensdorf, Switzerland).
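The 'known relationship' between sRGB and the CIE colorimetric spaces mentioned above can be made concrete. Below is a minimal sketch (not the authors' code) that converts an sRGB triplet to CIE L*a*b* under the D65 white point, using the standard sRGB linearization and matrix, and computes the Euclidean dE*ab between two L*a*b* values:

```python
import math

def srgb_to_lab(r, g, b):
    """Convert an sRGB triplet (components in 0..1) to CIE L*a*b* (D65 white)."""
    # Undo the sRGB gamma encoding to obtain linear RGB.
    def linearize(c):
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
    rl, gl, bl = linearize(r), linearize(g), linearize(b)

    # Linear RGB -> CIE XYZ (standard sRGB matrix, D65 reference white).
    x = 0.4124 * rl + 0.3576 * gl + 0.1805 * bl
    y = 0.2126 * rl + 0.7152 * gl + 0.0722 * bl
    z = 0.0193 * rl + 0.1192 * gl + 0.9505 * bl

    # XYZ -> L*a*b* relative to the D65 white point.
    xn, yn, zn = 0.95047, 1.00000, 1.08883
    def f(t):
        return t ** (1 / 3) if t > (6 / 29) ** 3 else t / (3 * (6 / 29) ** 2) + 4 / 29
    fx, fy, fz = f(x / xn), f(y / yn), f(z / zn)
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)

def delta_e_ab(lab1, lab2):
    """Perceptual colour difference: Euclidean distance in CIELAB (dE*ab)."""
    return math.dist(lab1, lab2)
```

As a sanity check, the sRGB white point itself, srgb_to_lab(1.0, 1.0, 1.0), comes out very close to (100, 0, 0).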
This chart provides a checkerboard array of 24 scientifically prepared coloured squares or patches in a wide range of colours with known colorimetric properties under a CIE D65, noon daylight illuminant (6504 K). Many of these squares represent natural objects of special interest, such as human skin, foliage and blue sky. These squares are not only the same colour as their counterparts, but also reflect light the same way in all parts of the visible spectrum. Different calibration algorithms defining the relationship between the input RGB colour space of the camera and the sRGB colour space have been published, using various methods such as 3D look-up tables and neural networks. The algorithm in this study is based on three 1D look-up tables and polynomial modelling, as previously published by Vander Haeghen et al. [32] (Figure 2). This differs slightly from, e.g., the general methods used in the well-known ICC profiles (http://www.color.org/index.xalter).

Figure 1 Chronic Wound and Reference Chart. Chronic wound and reference chart after (left) and before (right) calibration.

In ICC profiles the relationship of an unknown colour space to the so-called 'profile connection space' (PCS, usually CIE XYZ) is computed and stored. Output is then generated by going from this PCS to the desired output colour space, which in our case would be sRGB. This means two colour space transformations are required (RGB to PCS to sRGB), while our algorithm needs only one. Although inherently more flexible, ICC profiling seems overkill for our intended application (straight camera RGB to sRGB transformation, without the need to determine and store or embed a device profile). However, it must be said that the advent of, e.g.,
LittleCMS (http://www.littlecms.com/), a free colour management system that focuses on the determination and immediate application of profiles on images, may change this view in the future, and such a system could be a viable alternative to the current colour space transformation algorithm in our system.

Methods

The research was carried out in accordance with the Helsinki Declaration; the methods used were subject to ethical committee approval (B32220083450, Commissie voor Medische Ethiek, Faculteit Geneeskunde, Leuven, Belgium). Patients received a detailed written and verbal explanation, and patient authorization was required before inclusion and analysis of the images.

Figure 2 RGB to sRGB Transformation Scheme.

Experiment 1

The purpose of the first experiment was to investigate whether camera settings or lighting conditions negatively affect the quality of the colorimetric calibration [33]. Chronic wounds are assessed in different locations and environments. Therefore, we assessed the calibration algorithm under extreme lighting conditions and with inappropriate camera settings.

Image Acquisition

Digital images of the MBCCC on a grey-coloured background, in a Colour Assessment Cabinet CAC 120-5 (VeriVide, Leicester, UK), were taken using two digital cameras: the Nikon D200 SLR (10.2 effective megapixels) with a 60 mm AF Micro Nikkor lens, and the Canon EOS 10D (6.3 million effective pixels) with a 50 mm Canon EF lens. All images were processed in high-quality JPEG mode, i.e. after the camera had applied its processing (demosaicing, colour correction curves, matrixing, etc.) to the images (Table 1).

Calibration Procedure

During the calibration procedure uniform illumination is assumed, as is a reference chart as part of the image of interest.
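The target-based calibration idea can be illustrated in miniature. The sketch below is far simpler than the paper's three 1D look-up tables plus polynomial model: it fits only a per-channel gain and offset from known chart patch values, and all numbers are hypothetical:

```python
def fit_channel(camera_vals, reference_vals):
    """Least-squares fit of reference ~= gain * camera + offset for one channel."""
    n = len(camera_vals)
    mx = sum(camera_vals) / n
    my = sum(reference_vals) / n
    var = sum((x - mx) ** 2 for x in camera_vals)
    cov = sum((x - mx) * (y - my) for x, y in zip(camera_vals, reference_vals))
    gain = cov / var
    offset = my - gain * mx
    return gain, offset

def calibrate_pixel(pixel, fits):
    """Apply per-channel (gain, offset) fits to one RGB pixel, clamped to 0..1."""
    return tuple(min(1.0, max(0.0, g * c + o)) for c, (g, o) in zip(pixel, fits))

# Hypothetical example: the camera darkens and offsets every channel identically.
reference = [0.0, 0.2, 0.4, 0.6, 0.8, 1.0]   # known patch values (one channel)
camera = [0.5 * v + 0.1 for v in reference]  # what the camera recorded
fits = [fit_channel(camera, reference)] * 3  # same fit reused for R, G and B
```

In this toy example the simulated camera response is cam = 0.5·ref + 0.1, so the fit recovers gain 2 and offset −0.2, and applying it restores the reference values.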
The calibration provides a means of transforming the acquired images (defined in an unknown colour space, normally some camera RGB) to a standard, well-defined colour space, i.e. sRGB [34]. sRGB has a known relationship to the CIE L*a*b* colorimetric space, allowing computation of perceptual colour differences. The CIE L*a*b* colorimetric space, or CIELAB space with coordinates L*, a* and b*, refers to the colour-opponent space; L* refers to luminance, while a* and b* refer to the colour-opponent dimensions [34-36].

The detection of the MBCCC in the digital image can be done manually or automatically. The algorithm behind MBCCC detection is based on the initial detection of all the bright areas in an image (areas with pixel values close to 255), followed by a shape analysis. Shapes that are not rectangular, or that are either too small or too large compared with the image dimensions (in pixels; the real dimensions are not yet known), are discarded. The remaining areas are candidates for the MBCCC white patch. For each of the white patch candidates, the corresponding MBCCC black patch is searched for, taking into account the typical layout of the colour chart and the dimensions of the white patch candidate. If this succeeds, the patches are checked for saturation (average pixel value > 255 − δ or < δ, with δ a small number, e.g. 3) in each of the colour channels individually. If the number of saturated patches is acceptable (typically fewer than 6 out of 24 patches), calibration proceeds and its quality is assessed. Quality assessment consists of examining various conditions relating to the colour differences between the known spectrophotometric and the computed sRGB values, in accepted and rejected patches. If any of these tests fail, the algorithm rejects the calibration and continues the search.

Analysis

In this experiment precision is defined as a measure of the proximity of consecutive colour measurements on an image of the same subject.
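The precision measure defined here — colour differences between all possible pairs of repeated measurements — can be sketched as follows; the L*a*b* triplets are hypothetical repeated measurements of a single chart patch, and the differences are Euclidean distances in CIELAB:

```python
import math
from itertools import combinations
from statistics import median

def pairwise_de_ab(measurements):
    """dE*ab (Euclidean distance in CIELAB) between all pairs of measurements."""
    return [math.dist(m1, m2) for m1, m2 in combinations(measurements, 2)]

# Hypothetical repeated L*a*b* measurements of the same patch.
patch = [(52.0, 10.0, 8.0), (52.0, 13.0, 12.0), (52.0, 10.0, 8.0)]
errors = pairwise_de_ab(patch)
# The median of these errors summarizes reproducibility: roughly one unit is a
# 'just noticeable' colour difference, above five units is 'clearly noticeable'.
```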
This is also known as reproducibility. The precision of the MBCCC chart detection, together with the calibration process, was evaluated by computing the perceptual colour differences between all possible pairs of measurements of each colour square of the MBCCC chart. These perceptual colour differences are expressed in CIE units, and are computed using the Euclidean metric in the CIE L*a*b* colour space. Theoretically, one unit is the 'just noticeable colour difference' and anything above five units is 'clearly noticeable'.

The accuracy of a procedure is a measure of how close its results are to the 'real' values, i.e. those obtained using the 'standard' procedure or measurement device, which for colour measurements would be a spectrophotometer. Consequently, the accuracy of the chart detection and colour calibration can be assessed by computing the perceptual colour differences between the measurements of the colour squares of the MBCCC chart and the spectrophotometric values of these squares. For this assessment the calibration was performed using half the colour patches of the MBCCC chart, while the other half were used to evaluate accuracy. Accuracy is likely to be higher when the whole chart is used for calibration purposes. Precision and accuracy each result in a probability distribution of dE*ab errors. Tukey's five-number summary (the minimum, the lower quartile, the median, the upper quartile and the maximum) of the dE*ab colour differences of each patch was also calculated and visualized using a box plot. Wilcoxon rank-sum statistics were used to test the calibration; this test compares the locations of two populations to determine whether one population is shifted with respect to the other. A sum-of-ranks comparison, which works by ranking the combined data sets and summing the ranks for each dE*ab sample, was used to compare the sum of the ranks with significance values based on the decision alpha (p < 0.05).

Table 1 Parameter Settings
Camera type: Canon EOS 10D; Nikon D200
Scene lighting (cabinet): D65; TL84; A
Camera sensitivity: 100 ISO; 400 ISO
Camera exposure: -1EV; 0EV; +1EV
Camera white balance: Auto; Manual; A; D65
Lighting and camera settings: the table records all the parameters that were varied during image acquisition. Note that these include some combinations that are inappropriate, such as setting the camera white balance to D65 with an A scene illuminant to produce off-colour images; this was done in order to challenge the calibration algorithm. Illuminant D65: 'Artificial Daylight' fluorescent lamps conforming to Standard Illuminant D6500 (6500 K). Illuminant TL84: Philips Triphosphor fluorescent lamps, often chosen as a 'Point of Sale' illuminant (5200 K). Illuminant A: 'Filament (domestic) lighting' (3000 K). In digital photography, ISO measures the sensitivity of the image sensor. Exposure is measured in lux seconds, and can be computed from exposure value (EV) and scene luminance over a specified area. White balancing is a feature that compensates for different lighting conditions by adjusting the colour balance based on the difference between a white object in the image and a reference white.

Experiment 2

The second experiment was designed to quantify the impact of the automatic calibration procedure, i.e. the chart detection, on real-world measurements. This may be of importance in a clinical setting, where automatic calibration of large batches of images in a single run is required. To examine this, 40 different images of real wounds were acquired, and a region of interest (ROI) was selected within each image. Three rotated versions (at 90°, 180° and 270°) of each image were created and automatically calibrated (Figure 3).
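The sum-of-ranks comparison described in the Analysis section can be sketched as a normal-approximation Wilcoxon rank-sum test (no tie or continuity correction; not the authors' exact procedure), applied here to hypothetical dE*ab samples:

```python
import math

def rank_sum_p(sample_a, sample_b):
    """Two-sided Wilcoxon rank-sum p-value via the normal approximation."""
    pooled = [(v, 0) for v in sample_a] + [(v, 1) for v in sample_b]
    pooled.sort(key=lambda t: t[0])
    # Assign ranks, giving tied values the average of their rank positions.
    ranks = [0.0] * len(pooled)
    i = 0
    while i < len(pooled):
        j = i
        while j < len(pooled) and pooled[j][0] == pooled[i][0]:
            j += 1
        avg = (i + 1 + j) / 2  # average of 1-based ranks i+1 .. j
        for k in range(i, j):
            ranks[k] = avg
        i = j
    # Sum the ranks belonging to sample_a and compare to the null distribution.
    w = sum(r for r, (_, src) in zip(ranks, pooled) if src == 0)
    n1, n2 = len(sample_a), len(sample_b)
    mean = n1 * (n1 + n2 + 1) / 2
    sd = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (w - mean) / sd
    return math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value

# Hypothetical dE*ab errors: uncalibrated vs calibrated images.
uncalibrated = [14.1, 17.8, 23.3, 19.5, 16.0]
calibrated = [2.1, 3.4, 2.8, 1.9, 2.6]
p = rank_sum_p(uncalibrated, calibrated)  # clearly separated samples -> small p
```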
Comparisons between the colour measurements of the ROIs of the rotated versions of each image highlighted the errors introduced by the automatic chart detection component of the calibration procedure.

Image Acquisition

Digital images (n = 40) of the chronic wounds were taken using a Sony Cybershot DSC-F828 digital camera (8.0 million effective pixels) with a Carl Zeiss 28-200 mm equivalent lens, using fully automatic settings at different indoor locations, as is usually the case in daily clinical practice.

Calibration Procedure and Analysis

The calibration procedure was carried out in accordance with that of experiment 1. The dE*ab colour differences between the average colour of the ROI of the four rotated versions of each image were computed and visualized using a probability distribution graph.

Results

Experiment 1

Figures 4, 5, 6, 7, 8 and 9 show examples of realistic sample images under different illuminants, together with the corresponding calibrated images, taken with different cameras. The images contained many saturated patches (see the 'x's on the patches) that were not used for the calibration, resulting in a lower quality calibration.

Figure 3 Chronic Wound Images with Reference Chart and a Region of Interest.
Figure 4 Example with Nikon D200: MBCCC under illuminant A (3000 K). Camera: exposure bias -1, automatic white balance.
Figure 5 Example with Nikon D200: MBCCC under illuminant A (3000 K). Camera: exposure bias -1, calibrated image.
Figure 6 Example with Canon 10D: MBCCC under illuminant A (3000 K). Camera: exposure bias -1, manual white balance.
Figure 7 Example with Canon 10D: MBCCC under illuminant A (3000 K). Camera: exposure bias -1, calibrated image.
Figure 8 Example with Nikon D200: MBCCC under illuminant D65 (6500 K). Camera: exposure bias -1, manual white balance.
Figure 9 Example with Nikon D200: MBCCC under illuminant D65 (6500 K). Camera: exposure bias -1, calibrated image.

The accuracy and reproducibility of the colour calibration using different cameras, camera settings and illumination conditions are presented using a probability distribution of dE*ab errors of all the MBCCC patches (Figure 10). A distinction is made between the full set of images and the 'normal' images, which were acquired with proper camera settings: correct manual or automatic white balance and no exposure bias. Indeed, the full set contains several images that were strongly over- or underexposed, or had a mismatched white balance. These images demonstrated the effectiveness of the calibration method, but are not representative of day-to-day photography. Moreover, the term 'proper patch' was used to indicate patches that were not saturated during acquisition, i.e. excluding those patches with pixel values too close to 255 or 0. The true pixel values of saturated patches could not be recovered, and their calibration was unfeasible.

The accuracy and reproducibility results for the set of proper patches of normal images are representative of colours in properly photographed images, which are different from the colours of the patches that were disregarded during calibration due to saturation (marked by an 'x' on the calibrated image) (Figure 11). Saturation in normal images or skin imaging is rare, but when it does occur it normally manifests itself as an overexposure of the white, deep red, yellow and orange MBCCC patches. If this problem is frequent with a particular camera, it can be remedied by slightly underexposing images by, for example, half an f-stop (exposure bias).
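The saturation rule from the calibration procedure (a channel mean within δ of 0 or 255 marks a patch as saturated, and calibration proceeds only when typically fewer than 6 of the 24 patches are saturated) can be sketched as follows, with hypothetical patch means:

```python
def is_saturated(channel_means, delta=3):
    """A patch is saturated if any channel mean is within delta of 0 or 255."""
    return any(m > 255 - delta or m < delta for m in channel_means)

def can_calibrate(patches, delta=3, max_saturated=5):
    """Proceed only if at most max_saturated of the patches are saturated."""
    saturated = sum(is_saturated(p, delta) for p in patches)
    return saturated <= max_saturated

# Hypothetical patch means: one overexposed white patch among well-exposed ones.
patches = [(254.2, 253.8, 253.1)] + [(128.0, 100.0, 90.0)] * 23
```

With these numbers only the first patch is flagged, so calibration would proceed; six or more saturated patches would cause the chart candidate to be rejected.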
Tukey's five-number summary (the minimum, the lower quartile, the median, the upper quartile and the maximum) of the dE*ab colour differences of each proper patch of the normal images was calculated and visualized using a box plot (Figure 12). Outliers are marked with a red 'x'. To evaluate accuracy, the chart patches were split into two groups of 12 patches and only the second group was used for calibration, resulting in a lower quality calibration than if all 24 patches had been used. The first group of 12 patches was used to check the accuracy.

Colour differences between the measurements and real spectrophotometric measurements revealed median dE*ab values of 6.40 for proper patches of calibrated normal images and 17.75 for uncalibrated images, respectively, demonstrating an important improvement in accuracy after calibration (Figure 10). The result for the patches used in the calibration was also included; they had a median of 1.59 dE*ab. Figure 12 presents the accuracy box plot for the proper patches of the normal images. As mentioned above, only patches that had not been used in computing the calibration could be used to check accuracy; therefore only 12 patches are shown in this figure.

Figure 10 Accuracy of Colour Calibration. Probability distribution of dE*ab errors between the patches of the images and spectrophotometric measurements. Based on 39 images (Nikon D200) and 15 images (Canon 10D) under different illuminants and settings. Median dE*ab is 6.40 for the proper patches of calibrated normal images, 17.75 for uncalibrated images.
Figure 11 Reproducibility of Colour Calibration. Probability distribution of dE*ab errors, based on 39 images taken with a Nikon D200 and 15 images with a Canon 10D under different illuminants and settings. Median of 3.43 dE*ab for all calibrated images, 23.26 dE*ab for all uncalibrated images; median of 2.83 dE*ab for all 'normal' calibrated images and 14.25 dE*ab for all 'normal' uncalibrated images. If we restrict ourselves to the proper patches of normal calibrated images the median is only 2.58 dE*ab.
Figure 12 Accuracy: box plot for the proper patches of the normal images.

As Figure 11 demonstrates, the reproducibility, visualized by the probability distribution of the dE*ab errors between two measurements of the patches of the images, had a median of 3.43 dE*ab for all calibrated images, 23.26 dE*ab for all uncalibrated images, a median of 2.83 dE*ab for all 'normal' calibrated images, and 14.25 dE*ab for all 'normal' uncalibrated images. Restricting the calculation to the proper patches of normal calibrated images, the median was 2.58 dE*ab. Wilcoxon rank-sum testing between uncalibrated normal images and calibrated normal images with proper squares yielded a p-value of effectively zero (significance threshold p < 0.05), demonstrating a highly significant improvement in reproducibility.

Examining dE*ab errors for each MBCCC patch individually revealed that the greatest errors were found in the red, orange-yellow, orange and yellow patches. The cyan patch was excluded from examination, as it cannot be represented accurately in the sRGB colour space (Figure 13).

Experiment 2

The reproducibility of the chart detection during automatic calibration is presented using a probability distribution of dE*ab errors between two measurements of the same ROI. Ideally this should be as close to zero as possible, and comparable to the measurements of the same ROI depicted in the presented figures. The rotated versions of an image should all be equal.
Any deviation from this would indicate variability in the chart detection, leading to a slightly different calibration and thus different measurements (Figure 14).

Figure 13 Reproducibility: box plot for the proper patches of the normal images.
Figure 14 Probability distribution of dE*ab errors with region of interest calibration.

Discussion

The research presented here provides evidence that images taken with commercially available digital cameras can be calibrated independently of camera settings and illumination features, provided that illumination in the field of view is uniform and a calibration chart is used. This may be particularly useful during chronic wound assessment, as this is often performed in different locations and under variable lighting conditions. The proposed calibration transforms the acquired images from an unknown colour space (usually RGB) to a standard, well-defined colour space (sRGB) that allows images to be displayed properly and has a known relationship to the CIE colorimetric colour spaces. First, we challenged the calibration procedure with a large collection of images containing both 'normal' images with proper camera settings and images that were purposely over- or underexposed and/or had white balance mismatches. The reproducibility and accuracy of the calibration procedure were presented and demonstrate marked improvements. The calibration procedure works very well on the images with improper camera settings, as evidenced by the minimal differences between the error distributions of the complete set of images and the set with only the 'normal' images. An innovative feature demonstrated during our research is the automatic detection and calibration of the MacBeth Colour Checker Chart Mini (MBCCC) in the digital image.
Secondly, we tested the effect of this MBCCC chart detection on subsequent real-world colour measurements. Figure 14 demonstrates the probability distribution of errors between two colour measurements of the same region of interest that can be attributed to variations in the chart detection process. The majority of these errors were below 1 dE*ab, demonstrating that the chart detection is robust.

This experiment is part of the research presented by the Woundontology Consortium (http://www.woundontology.com), a semi-open, international, virtual community of practice devoted to advancing the field of research in non-invasive wound assessment by image analysis, ontology, semantic interpretation and knowledge extraction. The interests of this consortium relate to the establishment of a community-driven, semantic content analysis platform for digital wound imaging, with special focus on wound bed surface area and colour measurements in clinical settings. Current research by the Woundontology Consortium addresses our concerns about the interpretation of clinical wound images without any calibration or reference procedure; we are therefore investigating techniques to promote standardization. The platform used by this Consortium is based on Wiki technology, a collaborative environment to develop a 'woundontology' using the Collaborative Ontology Development Service (CODS) and an image server. Research on wound bed texture analysis is performed with the computer program MaZda. This application has been under development since 1998 to satisfy the needs of the participants of the COST B11 European project 'Quantitative Analysis of Magnetic Resonance Image Texture' (1998-2002). Additionally, wound bed texture parameter data mining is performed using RapidMiner, one of the leading open-source data mining solutions.
Recently, results on 'The red-yellow-black (R-Y-B) system: a colorimetric analysis of convex hulls in the CIELAB color space' were presented at the EWMA 2009 conference in Helsinki, Finland.

Conclusions

To our knowledge, the proposed technology is the first demonstration of a fundamental and, in our opinion, essential tool for enabling intra-individual (in different phases of wound healing) and inter-individual (for features and properties) comparisons of digital images in human wound healing. By implementing this step in the assessment, we believe that scientific standards for research in this domain will be improved [37].

Acknowledgements
Part of this work was performed at the Liebaert Company (a manufacturer of technical textiles in Deinze, Belgium). We thank the members of the Colour Assessment Cabinet team, especially Albert Van Poucke, for the fruitful discussions. We would also like to thank the reviewers who helped to improve this paper with their suggestions.

Author details
1 Department of Anaesthesia, Critical Care, Emergency Care, Genk, Belgium. 2 Department of Dermatology, University Ghent, Ghent, Belgium. 3 Department of Anesthesiology, Pain and Palliative Medicine, The Radboud University Nijmegen Medical Centre, Nijmegen, Netherlands. 4 CNS, Pain and Neurology, Janssen Research Foundation, Beerse, Belgium. 5 Critical Care Department, Antwerp University Hospital, University of Antwerp, Antwerp, Belgium.

Authors' contributions
SVP designed the method and drafted the manuscript. SVP and YV generated the data and gave recommendations for their evaluation. PJ, TM and KV provided feedback and directions on the results. All authors read and approved the final manuscript.

Competing interests
The authors declare that they have no competing interests.

Received: 12 March 2009 Accepted: 18 March 2010 Published: 18 March 2010

References
1. Vander Haeghen Y, Naeyaert JM: Consistent cutaneous imaging with commercial digital cameras. Arch Dermatol 2006, 142(1):42-46.
2.
Van Geel N, Haeghen Vander Y, Ongenae K, Naeyaert JM: A new digital image analysis system useful for surface assessment of vitiligo lesions in transplantation studies. Eur J Dermatol 2004, 14(3):150-155. Van Poucke et al. BMC Medical Imaging 2010, 10:7 http://www.biomedcentral.com/1471-2342/10/7 Page 10 of 11 http://www.woundontology.com http://www.woundontology.com http://www.ncbi.nlm.nih.gov/pubmed/16415385?dopt=Abstract http://www.ncbi.nlm.nih.gov/pubmed/16415385?dopt=Abstract http://www.ncbi.nlm.nih.gov/pubmed/15246939?dopt=Abstract http://www.ncbi.nlm.nih.gov/pubmed/15246939?dopt=Abstract http://www.ncbi.nlm.nih.gov/pubmed/15246939?dopt=Abstract 3. Kanthraj GR: Classification and design of teledermatology practice: What dermatoses? Which technology to apply? Journal of the European Academy of Dermatology and Venereology 2009, 23(8):865-875. 4. Aspres N, Egerton IB, Lim AC, Shumack SP: Imaging the skin. Australas J Dermatol 2003, 44(1):19-27. 5. Bhatia AC: The clinical image: archiving clinical processes and an entire specialty. Arch Dermatol 2006, 142(1):96-98. 6. Hess CT: The art of skin and wound care documentation. Advances in Skin & Wound Care 2005, 18:43-53. 7. Bon FX, Briand E, Guichard S, Couturaud B, Revol M, J Servant JM, Dubertret L: Quantitative and kinetic evolution of wound healing through image analysis. IEEE Trans Med Imaging 2000, 19(7):767-772. 8. Jury CS, Lucke TW: The clinical photography of herbert brown: a perspective on early 20th century dermatology. Clin Exp Dermatol 2001, 26(5):449-454. 9. Phillips K: Incorporating digital photography into your wound-care practice. Wound Care Canada 2006, 16-18. 10. Oduncu H, Hoppe A, Clark M, Williams RJ, Harding KG: Analysis of skin wound images using digital colour image processing: a preliminary communication. Int J Low Extrem Wounds 2004, 3(3):151-156. 11. Levy JL, Trelles MA, Levy A, Besson R: Photography in dermatology: comparison between slides and digital imaging. 
J Cosmet Dermatol 2003, 2:131-134. 12. Tucker WFG, Lewis FM: Digital imaging: a diagnostic screening tool? Int J Dermatol 2005, 44(6):479-481. 13. Wagner JH, Miskelly GM: Background correction in forensic photography. II. Photography of blood under conditions of non-uniform illumination or variable substrate color: practical aspects and limitations. J Forensic Sci 2003, 48(3):604-613. 14. Wagner JH, Miskelly GM: Background correction in forensic photography. I. Photography of blood under conditions of non-uniform illumination or variable substrate color: theoretical aspects and proof of concept. J Forensic Sci 2003, 48(3):593-603. 15. Riley RS, Ben-Ezra JM, Massey D, Slyter RL, Romagnoli G: Digital photography: a primer for pathologists. J Clin Lab Anal 2004, 18:91-128. 16. Palioto DB, Sato S, Ritman G, Mota LF, Caffesse RG: Computer assisted image analysis methods for evaluation of periodontal wound healing. Braz Dent J 2001, 12:167-172. 17. Heydecke G, Schnitzer S, Türp JC: The colour of human gingiva and mucosa: visual measurement and description of distribution. Clin Oral Invest 2005, 9:257-265. 18. Scheinfeld N: Photographic images, digital imaging, dermatology, and the law. Arch Dermatol 2004, 140(4):473-476. 19. Gopalakrishnan D: Colour analysis of the human airway wall. Master’s thesis, University of Iowa 2003. 20. Lotto RB, Purves D: The empirical basis of colour perception. Consciousness and Cognition 2002, 11:609-629. 21. Prasad S, Roy B: Digital photography in medicine. J Postgrad Med 2003, 49(4):332-336. 22. Maglogiannis I, Kosmopoulos DI: A system for the acquisition of reproducible digital skin lesion images. Technol Health Care 2003, 11(6):425-441. 23. Haeghen Vander Y: Development of a dermatological workstation with calibrated acquisition and management of colour images for the follow-up of patients with an increased risk of skin cancer. Ph.D. thesis, University Ghent 2001. 24.
Gilmore S: Modelling skin disease: lessons from the worlds of mathematics, physics and computer science. Australas J Dermatol 2005, 46(2):61-69. 25. Goldberg DJ: Digital photography, confidentiality, and teledermatology. Arch Dermatol 2004, 140(4):477-478. 26. Macaire L, Postaire JG: Colour image segmentation by analysis of subset connectedness and colour homogeneity properties. Computer Vision and Image Understanding 2006, 102:105-116. 27. Streiner DL: Precision and accuracy: two terms that are neither. Journal of Clinical Epidemiology 2006, 59:327-330. 28. Feit J, Ulman V, Kempf W, Jedlickova H: Acquiring images with very high resolution using a composing method. Cesk Patol 2004, 40(2):78-82. 29. Byrne A, Hilbert DR: Colour realism and colour science. Behavioral and Brain Sciences 2006, 26:3-64. 30. Harkness N: The colour wheels of art, perception, science and physiology. Optics and Laser Technology 2006, 38:219-229. 31. Multimedia systems and equipment - Colour measurement and management - Part 2-1: Colour management - Default RGB colour space - sRGB. 1999, IEC 61966-2-1 Ed. 1.0 Bilingual. 32. Haeghen Vander Y, Naeyaert JM, Lemahieu I, Philips W: An imaging system with calibrated colour image acquisition for use in dermatology. IEEE Trans Med Imaging 2000, 19(7):722-730. 33. Ikeda I, Urushihara K, Ono T: A pitfall in clinical photography: the appearance of skin lesions depends upon the illumination device. Arch Dermatol Res 2003, 294:438-443. 34. Leon K, Mery D, Pedreschi F, Leon J: Colour measurement in L*a*b* units from RGB digital images. Food Research International 2006, 39:1084-1091. 35. Danilova MV, Mollon JD: The comparison of spatially separated colours. Vision Res 2006, 46(6-7):823-836. 36. Johnson GM: A top down description of S-CIELAB and CIEDE2000. Col Res Appl 2003, 28:425-435. 37. Bellomo R, Bagshaw SM: Evidence-based medicine: classifying the evidence from clinical trials - the need to consider other dimensions. Crit Care 2006, 10(5):232.
Pre-publication history The pre-publication history for this paper can be accessed here: [http://www.biomedcentral.com/1471-2342/10/7/prepub] doi:10.1186/1471-2342-10-7 Cite this article as: Van Poucke et al.: Automatic colorimetric calibration of human wounds. BMC Medical Imaging 2010, 10:7.
work_tuz4w54lefdrhi7xxs3mrwgaca ---- OPH_2005_219_SINDEX.indd Ophthalmologica Subject Index Vol.
219, 2005 Chromovitrectomy 251 Clinical trial, age-related macular degeneration 154 Coats’ disease 401 Coenzyme Q10 154 Conjunctival malignancy 177 Contact lenses 72 Continuous circular capsulorhexis 338 Cornea 324 Corneal flap 276 Cryptococcosis 101 Cyclooxygenase 2 inhibitor 243 Dacryocystorhinostomy, long-term patient satisfaction 97 Decentration, intraocular lens 26 Deep sclerectomy 281 Diabetes 1, 394 Diabetic cataracts 309 – macular edema 86, 206 – maculopathy 16 – retinopathy 107, 292 – – screening 292 Diffuse macular edema 394 Digital subtraction angiogram 136 Dissociated optic nerve fiber layer appearance 206 Driving abilities 191 Drusen 154 Dry eye 276 Elderly patients, pseudouveitis/uveitis 263 Electrolytes 142 Emmetropization 226 Encephalitis 119 Endoillumination 338 Epiphora 136 Episcleral venous pressure 357 Evisceration 177 Exodeviation 237 Exophthalmos 181 Eyelid 57 Eye stabilization 401 Fatty acids (n–3) 154 Fibrous dysplasia 181 Fixation stability 16 Fluorescein angiography 86, 303, 350, 366 Foveal exudate 366 Fuchs’ heterochromic uveitis 21 Fucose 324 Acetyl-L-carnitine 154 Achromatic perimetry 202 A constant, intraocular lenses 390 Acute retinal necrosis 272 Adjustable intraocular lens 362 Adrenomedullin 107 Age-related macular degeneration 154, 214, 303 Agreement, intraocular pressure measurements 36 Amniotic membrane 297 Ankle-brachial index 334 Anterior segment surgery 129 Antibiotics, neonatal chlamydial conjunctivitis 232 Antioxidants 49, 309 Aponeurosis advancement 129 Apoptosis 1 Aqueous flare 21 – humor outflow 357 Argon laser 267 Arterial stiffness 334 Arteriosclerosis 334 Arteriovenous crossing 386 Astrocytic hamartoma 350 Autofluorescence, astrocytic hamartoma 350 Basal cell carcinoma 57 Blepharoplasty 185 Blepharoptosis 129 Blindness 345 Blood-aqueous barrier 21 Blood flow, normal-tension glaucoma 317 Blue-on-yellow perimetry 202 Branch retinal vein obstruction 386 – – – occlusion 267, 334 Bupivacaine 167
Cataract prevention/reversal 309 – surgery 167, 362, 390 CD44 receptors 287 Central retinal vein occlusion 267 – serous chorioretinopathy 202 Cerebral-cortex-derived neural stem cells 171 Cerebrospinal fluid 101 Chlamydia pneumoniae 232 – trachomatis 232 Chorioretinal venous anastomosis 267 Choroid 287 Choroidal neovascular membranes 214 © 2005 S. Karger AG, Basel Accessible online at: www.karger.com/oph Functional magnetic resonance imaging, lateral geniculate nucleus function 11 Glaucoma 1, 26, 357, 373, 413 Growth hormone deficiency 226 Healthy menstruating women, visual field changes 30 Hepatitis B virus 93 Herpes zoster ophthalmicus 272 High altitude, macula changes 404 Hinge position, dry eye 276 Homelessness, eye disease risk 345 Homonymous hemianopia 11 Hyaluronan 287 – synthetase 287 Hypertension 366 Indocyanine green angiography 303, 350 – –, macular hole treatment 251 Influenza 119 Informed consent, blepharoptosis 129 Internal limiting membrane 251 – – – removal 206 Intracameral anesthesia 167 Intraocular lenses 72, 362, 390 – methotrexate 54 – pressure 36, 317, 413 Intravitreal cortisone 394 – triamcinolone 413 Lacrimal draining system 136 – pathway obstruction 232 – stenosis 136 Large initial overcorrection, exotropia surgery 237 Laser in situ keratomileusis 276 – photocoagulation 401 Lateral geniculate nucleus 11 Lens power calculation formulas 390 Leptin 107 Lidocaine gel 167 Light sensitivity 16 Limbal stem cell transplantation 297 α-Lipoic acid 49 Long-term vitreous replacement 147 Macula dysfunction 404 Macular hole 251 Matrix metalloprotease-9 upregulation 324 Microperimetry, diabetic maculopathy 16
Mitochondria 154 Mitomycin C 281 Mouse model, cataract formation inhibition 309 Multifocal electroretinography 404 Multiple myeloma 43 Nasolacrimal duct obstruction 142 Neonatal conjunctivitis 232 Non-mydriatic digital photography, diabetic retinopathy screening 292 Normal-tension glaucoma 317 Oblique vision 115 Occult choroidal neovascularization 303 Ocular burns 297 Open-angle glaucoma 1 Optical coherence tomography 80, 86, 379, 404 Optic nerve atrophy 345 – – head photography 80 Orbital compression 181 – invasion, conjunctival squamous cell carcinoma 177 Oxidative stress 309 Palinopsia 115 Paracentral scotoma 191 Pars plana vitrectomy 206 Pediatric benign lid lesions 112 – cataract 72 Peduncular hallucinosis 115 Perfluorohexyloctane (F6H8) 147 Perimetry simulation 123 Perioperative posterior ischemic optic neuropathy 185 Perisim 2000 123 Phacoemulsification 21, 338 Phacotrabeculectomy 26 Photoreceptors 171 Poverty, eye disease risk 345 Pregnancy, intraocular pressure measurements 36 Proliferative vitreoretinopathy 107 Pseudouveitis 263 Psoralen-UVA therapy, oxidative damage 49 Pulse wave velocity, brachial ankle 334 Punch trabeculectomy 281 Pyruvate 309 Reconstructive surgery, eyelid malignancies 57 Refraction 226 Refractive lens surgery 362 Retinal arterial macroaneurysms 366 – choroidal anastomosis 303 – nerve fiber layer thickness 80, 379 – stem cells 171 – thickness 379 – vascular anomalous complex 303 – vein occlusion 243 – vessels 386 Retinitis 119 Retrograde degeneration, lateral geniculate nucleus 11 Rhegmatogenous retinal detachment 222 Rofecoxib 243 Short-wavelength perimetry, healthy menstruating women 30 Silicone oil 147 Slit-lamp biomicroscopy, diabetic retinopathy 292 Squamous cell carcinoma 177 Standard achromatic perimetry, healthy menstruating women 30 STAT3 protein 214 Subepidermal calcified nodules 112 Subretinal fluid 222 – strand 222 Tear proteins 142 Tenascin 214 Tendency-oriented perimetry program 123
Three-dimensional dacryocystography 136 – rotational angiography 136 Topical anesthesia, cataract surgery 167 Transforming growth factors 171, 222 Trypan blue 338 Tuberous sclerosis 350 Ultraviolet A radiation, oxidative damage 49 Uveitis 54, 263 Varicella-zoster virus 272 Vioxx 243 Viral infection, retinitis 119 Visual acuity 86, 154 – cortex 11 – field 154, 373 – – defects, central/paracentral 191 – hallucinations 115 – loss 185 – pseudohallucinations 115 Vitreous 93 – hemorrhage 338 Wound healing 324 work_twluzboaxrfrfgo5jdnlte32vm ---- doi:10.1016/S1743-9191(06)60021-6 22 The Journal of Surgery • Volume 2 • Issue 1 • 2004 Documenting family history in colorectal cancer patients - a retrospective audit T. Satheshkumar1, AP Saklani2, J S Nagbhushan3, RJ Delicata4 1 Department of Surgery, James Cook University Hospital, Middlesbrough, TS4 3BW 2 Department of Surgery, Hereford County Hospital, Hereford. HR1 2ER 3 Department of Surgery, Crosshouse Hospital, Kilmarnock. KA2 0BE 4 Department of Surgery, Nevill Hall Hospital, Abergavenny, NP7 7EG Correspondence to: Mr T. Satheshkumar, 6 Carmel walk, New College way, Swindon SN3 2GT. United Kingdom. Abstract History elicitation is vital in the diagnosis and management of clinical cases. Failure to elicit a complete history can make us liable for negligence. This retrospective audit, done in a DGH, investigates the family history documentation of CRC (colorectal cancer) patients. Introduction Colorectal cancer is the third most common type of cancer in the UK (http://www.statistics.gov.uk); around 34,000 new cases are diagnosed per annum1. The prognosis for patients is highly dependent on the stage of disease. There are a number of staging systems in use, of which the Dukes system is the most widely used. The 5-year survival for Dukes A is 80%-90%, Dukes B 60%-70%, Dukes C 20%-30% and Dukes D 5%-10%2.
Colorectal cancer is common and its incidence is closely related to patient age. After age, the second most common risk factor is a family history of colon cancer. In fact, it is one of the most heritable cancers, with 20-25% of colorectal cancers (CRC) occurring in patients with a family history of the disease or with an early age of onset3,4. Both of these types of presentation suggest a genetic predisposition5. Recognition of family cancer syndromes through history allows the primary care provider an opportunity to offer healthcare advice to an entire family6. Relatives of patients with sporadic colon cancers have a two- to nine-fold increased risk of developing large bowel cancer compared to the normal population7. This risk is highest for patients younger than forty-five and not significant for people sixty years or older8. There is no cost-effective national screening protocol for colorectal cancer9. Therefore, eliciting a good history and establishing a surveillance protocol for relatives of high-risk patients is highly desirable. Failure to elicit this history in the face of advances in genetic knowledge, and failure to identify familial CRC, has provoked claims of negligence against healthcare providers10. Objective The aim of this study was to audit the documentation of any family history of cancer in the medical records of patients with CRC in one hospital. The audit was restricted to patients less than 60 years of age. Material and methods This audit was done in a District General Hospital (Nevill Hall Hospital, Gwent) by the Department of General Surgery. The notes of patients aged under sixty years who were newly diagnosed with colorectal cancer over a three-year period (1997-2000) were examined. Family history documentation of CRC in GP referral letters, pre-clerking notes, or elsewhere was recorded. Note was also made of enquiries into family history of other cancers, such as breast, ovary or endometrium.
The degree of completion of the record was also noted, particularly regarding the age at onset, the relationship of relatives with a positive cancer history, and whether or not a cancer family tree was present. Results In the three-year period (1997-2000), 50 patients below the age of sixty years were newly diagnosed with CRC. Their median age was 52 years, with a range of 25 to 59 years. A total of 41 GP referral letters could be traced, of which only 5 (12%) referred to family history relevant to CRC. In the medical records completed by pre-registration house officers (PRHOs), only 18/50 (36%) had a record of relevant family history. Overall, only 27/50 (54%) patient notes made any reference to a family history. These figures indicate that in 46% of cases staff did not make any mention of family history in the case notes. Negative family history for other cancers was mentioned in only 14 cases (28%). A family history of polyps was recorded in only one patient. Table 1. Recording of family history in relation to type of admission In the 27 case notes containing reference to a family history of cancer, several deficiencies were noted. The age at diagnosis of any familial cancer was mentioned in only 10/27 cases, and a formal cancer family tree was drawn up in only 2 of 27 case notes. The degree of the relationship of the affected family member to the patient fared slightly better (22/27). The type of admission (elective or emergency) did not correlate with any recording of family history in case notes (P=NS, chi-square test). Discussion There is good evidence linking early detection of CRC with improved survival rates11. Various modalities of screening, such as mass faecal occult blood testing and flexible sigmoidoscopy, are currently under study12. However, no cost-effective national screening protocol for CRC has been approved9.
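The non-significant chi-square result quoted above can be checked directly from the Table 1 counts (family history recorded in 23 of 37 elective and 4 of 13 emergency admissions). The sketch below is an illustrative re-analysis, not the authors' own calculation; it computes the uncorrected Pearson statistic for a 2x2 table:

```python
# Illustrative re-analysis of Table 1. Rows: family history recorded
# yes/no; columns: elective/emergency admission.
# This is a generic Pearson chi-square helper, not the paper's own script.

def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic (no continuity correction)
    for the 2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    row1, row2 = a + b, c + d
    col1, col2 = a + c, b + d
    expected = [row1 * col1 / n, row1 * col2 / n,
                row2 * col1 / n, row2 * col2 / n]
    observed = [a, b, c, d]
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Yes: 23 elective, 4 emergency; No: 14 elective, 9 emergency.
stat = chi_square_2x2(23, 4, 14, 9)
print(round(stat, 2))  # 3.82
```

With these counts the statistic is about 3.82, just below the 3.84 critical value for p = 0.05 at one degree of freedom, consistent with the reported P=NS (a Yates-corrected or exact test would give a larger p-value still).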
An accurate record of family history in patients with CRC, used in conjunction with established criteria for screening, such as the Amsterdam criteria, helps to identify high-risk families13. Furthermore, a regular update of family history in young patients who present with CRC may help to identify the tumour spectrum suggestive of a family cancer syndrome14. Our study may be biased as it is retrospective and negative histories may not have been recorded. The type of admission showed no difference in the incidence of a family history record, an observation that suggests that junior doctors disregard or are not aware of the importance of family history. We suggest that family records may be improved with the use of protocol forms that would help to ensure the inclusion of family history data. Failure to record this data may possibly contribute to a late detection of cancer, and it is not inconceivable that in the future this might constitute grounds for a claim of negligence. Conclusion This study identifies a lack of awareness and incompleteness in recording detailed family histories of CRC patients at both primary and secondary levels. This should be regarded as an important omission from the medical records.

Table 1:
Family History Recorded   Elective Admission   Emergency Admission   Total
Yes                       23                   4                    27
No                        14                   9                    23
Total                     37                   13                   50

Recommendations A diagnosis of colorectal cancer should be accompanied with a completed ‘colorectal database’ record, including family history. This should ensure a complete medical record and a long-term referral document. References 1. Hayne D, Brown RS, McCormack M, Quinn MJ, Payne HA and Babb P. Current trends in colorectal cancer: site, incidence, mortality and survival in England and Wales. Clin Oncol (R Coll Radiol) 2001; 13: 448-52. 2. Deans GT, Patterson CC, Parks TG, Spence RA, Heatley M, Moorehead RJ and Rowlands BJ.
Colorectal carcinoma: importance of clinical and pathological factors in survival. Ann R Coll Surg Engl 1994; 76: 59-64. 3. Stephenson BM, Finan PJ, Gascoyne J, Garbett F, Murday VA and Bishop DT. Frequency of familial colorectal cancer. Br J Surg 1991; 78: 1162-6. 4. St John DJ, McDermott FT, Hopper JL, Debney EA, Johnson WR and Hughes ES. Cancer risk in relatives of patients with common colorectal cancer. Ann Intern Med 1993; 15: 785-90. 5. Terdiman JP, Conrad PG, Sleisenger MH. Genetic testing in hereditary colorectal cancer: indications and procedures. Am J Gastroenterol 1999; 94: 2344-56. 6. Houlston RS, Murday V, Harocopos C, Williams CB and Slack J. Screening and genetic counselling for relatives of patients with colorectal cancer in a family cancer clinic. BMJ 1990; 301: 366-8. 7. Church J, Lowry A and Simmang C. Practice parameters for the identification and testing of patients at risk for dominantly inherited colorectal cancer - supporting documentation. Diseases of the Colon & Rectum 2001; 44: 1404-1412. 8. Fuchs CS and Giovannucci E. A prospective study of family history and the risk of colorectal cancer. N Engl J Med 1994; 331: 1669. 9. Ferrante JM. Colorectal cancer screening. Med Clin North Am 1996; 80: 27-43. 10. Lynch HT, Paulson J, Severin M, Lynch J and Lynch P. Failure to diagnose hereditary colorectal cancer and its medicolegal implications: a hereditary nonpolyposis colorectal cancer case. Dis Colon Rectum 1999; 42: 31-5. 11. Towler B, Irwig L, Glasziou P, Kewenter J, Weller D and Silagy C. A systematic review of the effects of screening for colorectal cancer using the faecal occult blood test, Hemoccult. BMJ 1998; 317: 559-65. 12. Atkin WS, Edwards R, Wardle J, Northover JM, Sutton S, Hart AR, Williams CB and Cuzick J. Design of a multicentre randomised trial to evaluate flexible sigmoidoscopy in colorectal cancer screening. J Med Screen 2001; 8: 137-44. 13. Vasen HF, Watson P, Mecklin JP and Lynch HT.
New clinical criteria for hereditary nonpolyposis colorectal cancer (HNPCC, Lynch syndrome) proposed by the International Collaborative Group on HNPCC. Gastroenterology 1999; 116: 1453-6. 14. De Leon MP, Benatti P, Pedroni M, Viel A, Genuardi M, Percesepe A and Roncucci L. Problems in the identification of hereditary nonpolyposis colorectal cancer in two families with late development of full-blown clinical spectrum. Am J Gastroenterol 2000; 95: 2110-5. Rapid corrosion of scalpel blades after exposure to local anaesthetics: clinical relevance of this interaction Kayvan Shokrollahi1, M. J. Eccles2, J. R. W. Hardy3 and J. Webb3 1 Department of Burns & Plastic Surgery, Morriston Hospital, Swansea 2 Department of Anaesthesia, Cheltenham General Hospital, Cheltenham 3 Department of Orthopaedic Surgery, Avon Orthopaedic Centre, Southmead Hospital, Bristol Correspondence to: Mr Kayvan Shokrollahi, Department of Burns & Plastic Surgery, Morriston Hospital, Swansea SA6 6NL. United Kingdom. Abstract Background: Infiltration of local anaesthetic into an area before incising with a scalpel is common surgical practice. After a chance observation that a carbon steel scalpel rusted within minutes of contact with local anaesthetic, the corrosive effects of normal saline and local anaesthetic solutions on carbon and stainless steel surgical blades were investigated. Methods: After a series of preliminary studies with approximately fifty scalpels, we used a semi-quantitative technique using digital photography to demonstrate the corrosive effect of local anaesthetic on twelve carbon steel scalpel blades. These blades were exposed to saline, lignocaine and bupivacaine, and the surface changes were recorded and compared. A stainless steel blade was also photographed for comparison. Results: All blades were found to rust in all three solutions, but there were considerable differences in the rate of progression and the surface area of the blade affected.
Corrosive effects occurred rapidly on the carbon steel blades when exposed to all solutions, the process beginning within minutes of immersion. The overall effect was most marked with blades partially immersed in local anaesthetic. The stainless steel blades were much more resistant to rusting, but had started to corrode by twelve hours, and were substantially rusty after 24 hours. Total immersion in solution produced minimal effects, and thus rapid corrosion requires an air-liquid interface. Conclusions: This paper demonstrates the surprisingly rapid speed of corrosion of the standard carbon steel scalpel blade when exposed to solution, especially in the presence of an air-liquid interface. This phenomenon has not been previously described and has a number of implications. In the developing world, scalpels may be re-used, and in such circumstances avoidance of contact with local anaesthetic may increase the life of the blades. In addition, excess tissue damage from the poor performance of a rusted blade may occur, and the rust may tattoo the skin. Furthermore, there is evidence to suggest that iron oxides may have carcinogenic and cytotoxic properties. Carbon steel blades are often preferred as they can be manufactured sharper and cheaper; however, we would recommend either their replacement after contact with local anaesthetic, or the use of stainless steel blades in particular circumstances. Introduction The chance observation that a carbon steel scalpel blade left in a pool of bupivacaine rusted rapidly led us to further investigate the speed of this reaction and whether it was likely to be significant over the time-course of most surgical procedures. We exposed a variety of carbon steel and stainless steel scalpel blades to solutions of normal saline, lignocaine and bupivacaine, and recorded the surface changes by photography to give a semi-quantitative analysis.
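The abstract describes the photographic scoring only as "semi-quantitative". Purely as an illustration of how such scoring could be made numeric (the function name and the red-dominance threshold are assumptions, not the authors' method), one could count the fraction of blade pixels whose colour is dominated by the reddish-brown tones of rust:

```python
# Illustrative only: score a photographed blade by the fraction of
# "rust-coloured" pixels. A pixel is treated as rust if its red channel
# clearly dominates green and blue (threshold chosen arbitrarily here).

def rust_fraction(pixels, margin=40):
    """pixels: iterable of (r, g, b) tuples covering the blade region.
    Returns the fraction classified as rust-coloured."""
    pixels = list(pixels)
    rusty = sum(1 for r, g, b in pixels if r - max(g, b) > margin)
    return rusty / len(pixels)

# Toy example: three shiny steel-grey pixels and one reddish-brown pixel.
blade = [(180, 180, 185), (170, 172, 175), (190, 188, 190), (150, 70, 40)]
print(rust_fraction(blade))  # 0.25
```

Applied to successive photographs of the same blade, an increasing rust fraction would track the progression of corrosion over time.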
Materials and Methods Pilot studies To avoid excessive photography, preliminary experiments were undertaken on a variety of scalpel blades; these showed consistent rusting of all carbon steel blades exposed to local anaesthetic. In one such experiment (Figure 1), three separate carbon steel blades of four different types (number 10, 11, 15 and 20 blades, Swann-Morton, Sheffield, UK; twelve in total) were exposed to a drop of lignocaine for 3 minutes and showed significant rusting during this time course. Figure 1: A preliminary experiment on 3 sets of 4 different scalpel blades (number 10, 11, 15 and 20 blades, 24 altogether), demonstrating uniform corrosion of all blades subjected to a drop of lignocaine. Photographs were taken every minute for 10 minutes. work_txiqxaikibdfvjgboupfdu6tda ---- Heritage Architecture J. Feinberg et al., Int. J. of Herit. Archit., Vol. 1, No. 1 (2017) 17–26 © 2017 WIT Press, www.witpress.com ISSN: 2058-8321 (paper format), ISSN: 2058-833X (online), http://www.witpress.com/journals DOI: 10.2495/HA-V1-N1-17-26 SYNERGISM IN CONDITIONS EVALUATION TECHNOLOGIES: THE EXAMPLE OF THE SAN JUAN FORTIFICATION WALLS J. FEINBERG1, D. WOODHAM2 & C. CITTO2 1The Collaborative, Inc., USA. 2Atkinson-Noland & Associates, USA. ABSTRACT The original construction of the San Juan fortification walls in Puerto Rico dates from the mid-16th century. The fortifications were constructed with the single purpose of defending the City of San Juan and its harbour from attack, principally by sea. Over a four-century period of construction and reconstruction, the fortification walls evolved from one construction typology into at least ten identifiable types. The walls investigated in this study are 750 m long and 15 m high, and include two bastions and the San Juan Gate.
The challenging task of evaluating such complex structures required the synergism between historic research and modern diagnostic techniques to develop a deep understanding of the history, materials and structural behaviour of the fortification walls. In addition to historic research, 21st-century technologies selected to evaluate the walls included laser scans, digital photo-documenta- tion, wall coring, remote visual inspection of the core interior, microwave radar scans, thermography, characterization of stone and mortar types and strength, and finite element modelling. Keywords: finite element modelling, historic, masonry, non-destructive, radar, wall. 1 INTRODUCTION Logical thinking can create theories for causes of deterioration in historic building materials and constructions. It can also create pathways to research. The ‘why’ of deterioration often has a complex answer as deterioration is the result of multiple complex forces, decisions and practice – the who, the what, the when and the how. The United States National Park Service, as the stewards of the UNESCO World Heritage Site of Old San Juan, Puerto Rico, wanted the answers to why a significant portion of the fortifications were deteriorating and what interventions might be recommended to achieve stabilization. The walls have a remarkable historical background and were constructed by the Spanish Crown to protect the large harbour and beyond from foreign invasion. Occupying a point of land on the north side of the harbour, the walls and two fortresses surrounded the city until 1897 when the wall was partially opened due to overcrowding within its confines. Not long after Ponce de León established the San Juan settlement, the harbour was recognized as a key stopping point for Spanish treasure galleons on their journey from South America back to Spain. 
Spain was not the only European power to recognize the importance of the harbour at San Juan, as invasion attempts by the English and the Dutch bear witness. Because the primary fortifications at the harbour mouth were deemed to be too strong for invasive land- ings, attacks were largely repulsed by the local troops and people manning outlying defensive outposts. Typical of the Caribbean climate, disease also played its part in repulsing invasions as the invading troops became ill. Only once in the earliest years were the invaders success- ful, burning the then small settlement. This paper is part of the proceedings of the 3rd International Conference on Defence Sites: Heritage and Future (Defence Heritage 2016) www.witconferences.com http://www.witconferences.com 18 J. Feinberg et al., Int. J. of Herit. Archit., Vol. 1, No. 1 (2017) As the political alliances in Europe changed over time, Spain’s awareness and concern for the safety of the harbour from its enemies also rose and fell. Major improvements to the fortifications occurred under the direction of its Irish-born military engineers primarily during the last quarter of the 18th century, with significant improvements to the fortresses and fortification walls. Typical of coastal fortifications, their period of usefulness faded in direct proportion to improvements in ship-based armaments with their increased power and range. In addition, the advent of steam-powered warships allowed the positioning of the ships with little regard to currents, tides and wind direction. By the time of the US invasion at the end of the 19th century, the number of armaments had been significantly reduced to less than 10% of its original. The National Park Service maintains the walls and fortresses for the enjoyment of people from all over the world, and this project has the underlying intent to continue their preservation by understanding the complex forces of deterioration and providing appropriate interventions. 
2 RESOURCE DESCRIPTION The fortifications are part of the walled city of Old San Juan, Puerto Rico. There are multiple scarps and two bastions, constructed from calcareous sandstone from 1538 until the early 1800s, with significant repair and replacement campaigns in the 20th century, in part a response to collapses. The length of the walls investigated in this study is 800 m, their height ranges from 5.5 m to 14 m, and the wall thickness at the base is 3.0–3.7 m (Fig. 1). (Figure 1: View of San Agustin Bastion from the sea.) The walls investigated in this study are the south and west portions of the fortifications and are part of the overall system anchored by the Castillo San Felipe del Morro at the seaward end of the peninsula and Castillo de San Cristobal at the opposite end, at the isthmus to the mainland. The fortresses were designed to protect the harbour entrance and the land approach to the city. The improvements made to the walls from circa 1770 to circa 1803 changed the plan-view configuration from a series of short walls following the cliff face to long straight walls that assisted line-of-sight firing. Unfortunately, the change meant the walls were no longer fully founded on bedrock; some sections were now founded on sand. The base of the walls also became more fully exposed to wave action. Puerto Rico waters are not always gentle, as the island is subjected to major hurricanes; the land is also not always calm, with the occurrence of major earthquakes. 3 HISTORIC RESEARCH PROCESS Major research had been carried out in the 1980s on the history of the fortifications [1], with the most effort centred on the history of the two fortresses. This previous research was based on material found in various Spanish archives and in the US National Archives.
Our research divided the history into four periods: the early Spanish period of initial construction, from the early 1500s until 1770; the period from 1770 until 1898, the period of Spanish reconstruction and improvements; the period from 1898 to circa 1960, during which the US Army made significant alterations in terms of additional buildings but only reactive responses to fortification wall condition issues; and the National Park Service period, which continues to date. The focus of the Park Service has included removal of the buildings constructed after the Period of Significance and the interpretation and preservation of the resources. In 1983 the park was recognized as a World Heritage Site [2]. Statement of Significance: The main elements of the massive fortification of San Juan are La Fortaleza, the three forts of San Felipe del Morro, San Cristóbal and San Juan de la Cruz (El Cañuelo), and a large portion of the City Wall, built between the 16th and 19th centuries to protect the city and the Bay of San Juan. They are characteristic examples of the historic methods of construction used in military architecture over this period, which adapted European designs and techniques to the special conditions of the Caribbean port cities. La Fortaleza (founded in the early 16th century and considerably remodeled in later centuries) reflects developments in military architecture during its service over the centuries as a fortress, an arsenal, a prison, and residence of the Governor-General and today the Governor of Puerto Rico. Criterion (vi): La Fortaleza and San Juan National Historic Site outstandingly illustrate the adaptation to the Caribbean context of European developments in military architecture from the 16th to 20th centuries. They represent the continuity of more than four centuries of architectural, engineering, military, and political history.
Our research focused on the specific history of our section of walls and on the general history of the wider aspects of wall construction, stone materials, mortar components, construction labour and the military engineers who designed the fortifications in each of the various periods. One important and continuous factor, familiar to people in the field, became apparent: regardless of period, there was a critical imbalance between the funds requested by site administrators for repairs and the money made available. Repairable situations too often ended in collapsed wall sections; of course, the cost of reconstruction far exceeded the funds originally requested for repair. The lack of proper funding also affected the initial construction. Typically the walls were faced with well-executed ashlar stone masonry, some 18–21 in. (0.5 m) in depth, laid over very rough masonry referred to as mamposteria. This technique, used in Puerto Rico and in the Spanish-controlled Philippine territory, placed stones and mortar into forms. The result was a weak masonry. The mortar appears to have had more clay than lime and sand, as field investigation revealed little to no mortar in the 9-foot-deep cores of the wall, leading to the belief that the mortar, what little there was, had washed away. The choice of masonry construction techniques was likely influenced by the lack of skilled masons and by a workforce too small in number, racked by disease, and poorly motivated given that wages could go unpaid for one or more years. Particularly valuable in the historic research were the historic plans, which in particular indicated the evolution in the alignment of the walls, progressing from following the cliff faces to straight scarps (Fig. 2).
The condition analyses and construction plans of the 20th century clearly showed the evolution in conditions, ending in the collapse of multiple wall sections (Fig. 3). The resulting reconstructions were without much historic integrity, with several sections faced with stone veneer over an underlying new concrete wall. The research revealed multiple wall typologies – three basic types, each with various iterations – along with other significant findings. The field evaluation yielded still more information and confirmed findings from the historic research. (Figure 2: Section of a 1678 map of San Juan showing a version of Santa Elena Bastion. Figure 3: Historic photos documenting past collapses of wall sections of Casa Blanca Scarp circa 1922, at left, and of El Polvorín Scarp circa 1922, at right.) 4 FIELD SURVEY AND DOCUMENTATION Three techniques were used: laser scanning, high-resolution photography and infrared thermography. The laser scans were successfully manipulated to test whether topographic analysis of the wall faces would reveal surface anomalies, larger areas of out-of-plane anomalies and movement adjacent to cracks. This test was in addition to the laser scan used for computer-aided design drawings (Fig. 4). The thermographic images were taken at the same time as the high-resolution digital photography, both from a boat. The photography covered large sections of wall as well as detailed subsections of the larger scale images. Cooler areas seen in the thermographic images could be compared with photographs of the same area to help determine whether the temperature difference was attributable to through-wall water seepage or to apparent vegetative growth (Fig. 5). These images were taken after an extended period of above-average rainfall.
Correlation of the locations of areas of seepage with areas later revealed by ground-penetrating radar (GPR) as having above-average voids in the masonry at depth confirmed the water source to be largely groundwater. 5 FIELD RESEARCH The field research techniques were selected to provide comprehensive information about the condition of the walls and the likely causes of deterioration. In addition to laser scanning, infrared thermographic imaging and high-resolution digital photography, the techniques included GPR, with significant coverage of the historic wall sections, and the drilling of eight horizontal cores from the wall face to an average depth of 9 feet; the cores assisted in the calibration of the GPR readings and provided stone for strength testing and some mortar for laboratory analysis. The core holes were viewed with a borescope with colour video recording, clearly indicating the subsurface conditions, including the significant lack of mortar and the prevalence of voids (Fig. 6). (Figure 4: Wall survey for San Agustin Bastion. Figure 5: Thermal (left) and visible light (right) images of Santa Elena Bastion; the blue-green areas in the thermal image show dampness in the masonry. Figure 6: Comparison between a segment of wall section retrieved by deep coring at a depth of 0.55–0.85 m (left) and a videoscopic image of the interior of the core hole (right).) Sampling of mortar at the surface was taken from the oldest sections of the walls. Samples were also taken of biological growth for characterization in the laboratory; of particular concern were the blue-green algae. Samples of woody plants, ferns and similar vegetative growth were also taken.
Visual review of the wall conditions occurred multiple times during the fieldwork; from the initial day's visual investigation, each new group of information was reviewed and the walls viewed again with new insight. The reviews included documentation of stone masonry problems such as cracks and of material condition problems: lost material, eroded material and eight other issue areas, including efflorescence. Particular attention was given to the drainage systems and natural surface flows above and behind the walls. Was the observed seepage percolating down from behind the parapets and merlons, or was the water coming from between the sandstone strata that we knew to be interbedded with sands and clays? We asked ourselves this question and recognized that the answer would most likely come from the next phase of field research and monitoring. 6 LABORATORY ANALYSIS In addition to the analysis of the biological growth and the strength testing of the core stone material, the mortars were tested by the Middendorf method [3] to assess the presence of all components, including Portland cement. From historic research we knew that the Spanish engineers used Portland cement in the late 1890s. The GPR data were evaluated and the relative amount of voids determined for the walls; one area that had been scanned on a 60 cm (2 ft) grid was evaluated and displayed as a three-dimensional model (Fig. 7). The GPR, along with the cores and the historic research, provided the data used to develop the wall sections. 7 MODELLING Using computer modelling [4], representative wall sections of the original wall construction were assessed as to how they would stand up to earthquakes. The fortification walls are located in a seismically active region, and several major earthquakes have been recorded in the past.
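As context for the hazard level used in the seismic analysis that follows – a 2% probability of exceedance in 50 years – the equivalent return period can be derived under the standard assumption of independent annual exceedances. The short calculation below is a generic illustration of that textbook identity, not part of the authors' finite element study:

```python
def return_period(p_exceed: float, years: float) -> float:
    """Return period T (years) for an event with probability p_exceed of
    occurring at least once in the given window, assuming independent
    annual exceedances: p_exceed = 1 - (1 - annual)^years."""
    annual = 1.0 - (1.0 - p_exceed) ** (1.0 / years)  # annual exceedance probability
    return 1.0 / annual

# 2% probability of exceedance in 50 years
print(round(return_period(0.02, 50)))  # → 2475
```

The exact figure is about 2,475 years, which engineering practice commonly rounds to the 2,500-year return period quoted in the paper.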
The earthquake intensity used in the analysis has a 2% probability of being exceeded in 50 years or, in other terms, a return period of approximately 2,500 years [5]. (Figure 7: Three-dimensional rendering based on GPR data showing the most likely extent and distribution of internal voids. Estimated wall thickness at this location is 2.0 m (6.5 ft). Large voids at the back of the wall, extending to the soil, are visible. The yellow frame in the image at lower right indicates the approximate location of the back of the wall.) The complexity of the fortification walls required an advanced nonlinear analysis to determine whether the expected seismic forces would damage the walls. A response spectrum was generated considering the seismicity of the region. The spectrum was used in the computer model to generate the equivalent static lateral forces likely to be experienced by the walls during an earthquake. Lateral earth pressures were also included in the model under the assumption that the walls retain soil. In this analysis, the fortification walls were modelled so that cracks form in the structure once the material strength is exceeded. Material properties used in the finite element analysis were adjusted to account for the reduced strengths due to the saturated condition of the walls: the calcareous sandstone that makes up the expected majority of the stone in the wall was found to be very porous, and the strength of the porous sandstone decreases as water content increases. The seismic capacity of the walls was evaluated based on a static pushover analysis, in which the lateral reaction at the base of the wall (base shear) is plotted against the lateral displacement measured at the top of the wall. In this approach, the seismic forces are applied to the structure incrementally, and the force and displacement values are recorded at each step.
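The incremental bookkeeping just described can be sketched in a few lines. The resistance law and all numbers below are invented for illustration – they stand in for the authors' nonlinear finite element model and are not their results:

```python
# Toy pushover bookkeeping: apply the seismic demand in increments,
# record (displacement, base-shear fraction) pairs, and stop once the
# (invented) resistance of the wall is exhausted.

def pushover(demand_kN: float, k_elastic: float, yield_shear_kN: float,
             steps: int = 100):
    """Return the capacity curve as (displacement, shear/demand) pairs.
    k_elastic is an assumed elastic stiffness (kN per mm of drift)."""
    curve = []
    for i in range(1, steps + 1):
        shear = demand_kN * i / steps      # base shear applied this step
        if shear > yield_shear_kN:         # resistance exhausted: stop
            break
        disp = shear / k_elastic           # displacement at top of wall (mm)
        curve.append((disp, shear / demand_kN))
    return curve

curve = pushover(demand_kN=1000.0, k_elastic=50.0, yield_shear_kN=650.0)
capacity_fraction = max(frac for _, frac in curve)
print(f"capacity = {capacity_fraction:.0%} of demand")  # prints "capacity = 65% of demand"
```

A wall whose curve tops out below 100% of the demand, as in this invented case, would be deemed deficient in the check described next.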
If the base shear is plotted as a percentage of the total seismic force (the seismic demand), a capacity curve is produced. The capacity curve summarizes the amount of seismic force the wall is able to carry: if the wall cannot resist 100% of the seismic demand, the wall is deemed deficient. Results from the seismic evaluation indicate that the original walls do not have the capacity to carry the required seismic demand, suggesting that the walls could suffer substantial damage under a strong earthquake with a very long return period. 8 FIRST-PHASE RESULTS From a process standpoint, the use of the selected diagnostic techniques coupled with the historic research was quite successful. For example, the weakness of the walls was established by the diagnostic techniques, while the history – the quality of the labour, the scarcity of materials for higher-quality mortar, the lack of adequate funding both for timely completion of construction and for maintenance, and the military engineers' desire for straight walls without adequate regard for bearing conditions – provided an understanding of why the walls are weak. The use of mamposteria as a construction technique was a logical response to the lack of qualified masons and of lime for competent mortars. The pathways for research came initially from past experience with similar fortifications and the typical problems associated with shoreline walls. These pathways were expanded by the historic research, with choices made as to the direction and intensity of effort. This evolution is typical for any investigator; for example, back-of-wall drainage is a concern for any retaining wall. The extent of the problem was revealed by the mapping of vegetation by digital imaging and by thermographic imaging; the pattern of seepage locations and their relative amounts indicated the problem's extent and intensity.
The historic research revealed the lack of maintenance by the US Army of the extensive drainage system constructed by the Spanish military, the substitution of the earlier system by a less effective one, and inadvertent changes by the National Park Service that altered drainage patterns by removing roads that had interrupted and directed flows to structured drains. The source of subsurface water was evaluated based on historic research into multiple 20th-century projects for which geologic borings had been completed, indicating the interbedded layers of sandstone, clay and sand. A contractor in 1938 had rejected the possibility of founding a reconstructed wall portion on the weak stone, as shown by information found in his field reports and in his requests for compensation for the added cost of reinforcement. Again, this path of research was based on logic; confirmation came in the laboratory from stone samples taken from the interior of the wall, the stone having been originally sourced from the same relatively weak bedrock. As intended, many of the diagnostic techniques could be calibrated by other techniques, and many provided a different, confirming perspective on the condition issues (Table 1). Further, many of the techniques confirmed expectations raised by the historic research and provided logical pathways for further historic research, yielding more answers and more confirmation of the why of deterioration. As expected, this first phase informed the decision on what additional research was required. 9 FURTHER STUDY AND PHASE TWO SCOPE Further study was required to determine several underlying conditions likely to contribute to long-term problems. From the historic research, we knew that many caverns had formed at the base of several wall sections during the period 1925–1951.
As one of the recommended wall strengthening techniques to be evaluated was the injection of voids with specially formulated grout, there was an expressed concern that we needed to avoid pumping grout into the ocean via a cavern, and none of us wanted a wall section to be founded on a cavern. The tasks of this phase therefore included detailed GPR evaluation of the base of those wall sections with higher-than-average percentages of voids, using the previously tested 60 cm by 60 cm grid pattern, from which three-dimensional images of the areas of greatest concern could be created. To develop information about drainage at depth, groundwater levels, and interconnection between the ocean and the areas under the walls and bastions, a series of seven vertical bores was completed and equipped with piezometers. Stone and bedrock samples were taken during the boring operations to provide knowledge about strata thicknesses and interbedding material in general, and to assess the competency of the bedrock for the installation of rock anchors to assist in earthquake resistance of the walls. Five crack monitors were installed that, with the data from the piezometers and the weather station, would allow us to see how the wall reacted when the ground was saturated with moisture and back-of-wall pressures were greatest, and how the wall reacted to diurnal temperature change. The database was further enhanced by the import of tide information. The monitoring system was installed in the summer of 2015, and not all conditions have yet been encountered to provide all of the expected data. Table 1: Synergism matrix for employed techniques (symmetric; '+' marks indicate the strength of synergy against the column techniques Core, Video scope, Laser survey, Infrared thermal, Radar, Biology and Finite element analysis). Video scope: ++; Infrared thermal: +, ++, +; Radar: +++, ++, +, +; Biology: +, +++, +; Finite element analysis: ++, ++, ++, ++; History research: +++, ++, +, +++, +, +.
For example, the area has experienced a significant drought. 10 INTERPRETATION OF THE DATA, PARTIAL RESULTS FROM PHASE TWO The bedrock was of quite low strength and is unlikely to be usable for the placement of rock anchors. The crack monitors indicate diurnal movement, indicating the need to maintain some cracks rather than have them filled. Much of the mortar in the wall faces was found to have a very high Portland cement content – not original, but placed in the 20th century – and so it cannot serve to absorb movement. The piezometers, as a benefit of the drought, clearly show a relationship to the tide levels. If it were a direct relationship, the response would be in time with the tides, indicating direct connectivity. Instead there is a two- to two-and-a-half-hour delay, indicating possible charging of the bedrock and interbedded sand and clay strata by tidewater pushing up the overlying freshwater. 11 CONCLUSIONS The historic research conducted as part of this project informed the field investigations, the laboratory analysis and our recommendations for further actions. Conversely, the investigative techniques employed could be used to confirm documented previous construction and interventions. As intended, many of the diagnostic techniques could be calibrated by other techniques, which allowed extrapolation of the non-destructive techniques to additional areas of investigation. REFERENCES [1] Bearss, Edwin C., Historic Structure Report – Historic Data Section. San Juan Fortifications, 1898–1958, San Juan National Historic Site, Puerto Rico. U.S. Department of the Interior/National Park Service, February 1984. Document no. D2185 (an example of a source document for historic research). [2] UNESCO World Heritage Center, World Heritage List, La Fortaleza and San Juan National Historic Site in Puerto Rico, Statement of Significance, available at http://whc.unesco.org/pg.cfm?cid=31&id_site=266 (accessed 15 January 2016). [3] B.
Middendorf et al., Chemical Characterization of Historic Mortars: State-of-the-Art Report of RILEM Technical Committee 167-COM, Characterization of Old Mortars with Respect to their Repair. RILEM Publications SARL, 2004. [4] Midas FEA 2016, Nonlinear and Detail FE Analysis System for Civil Structures, v1.1. Midas Information Technology Co. Ltd. [5] American Society of Civil Engineers (ASCE), Seismic Rehabilitation of Existing Buildings (ASCE/SEI 41-06), Reston, Virginia, 2007.
work_ub6cvnf2irbhfdmrqhh7w67hle ---- PII: S0741-5214(98)70011-4 Telemedicine in vascular surgery: Feasibility of digital imaging for remote management of wounds. Douglas J. Wirthlin, MD, Syam Buradagunta, BA, Roger A. Edwards, ScD, David C. Brewster, MD, Richard P. Cambria, MD, Jonathan P. Gertler, MD, Glenn M. LaMuraglia, MD, Diane E. Jordan, BS, Joseph C. Kvedar, MD, and William M. Abbott, MD, Boston, Mass.
Purpose: Telemedicine coupled with digital photography could potentially improve the quality of outpatient wound care and decrease medical cost by allowing home care nurses to electronically transmit images of patients' wounds to treating surgeons. To determine the feasibility of this technology, we compared bedside wound examination by onsite surgeons with viewing of digital images of wounds by remote surgeons. Methods: Over 6 weeks, 38 wounds in 24 inpatients were photographed with a Kodak DC50 digital camera (resolution 756 × 504 pixels/in²). Agreements regarding wound description (edema, erythema, cellulitis, necrosis, gangrene, ischemia, and granulation) and wound management (presence of healing problems, need for emergent evaluation, need for antibiotics, and need for hospitalization) were calculated among onsite surgeons and between onsite and remote surgeons. Sensitivity and specificity of remote wound diagnosis compared with bedside examination were calculated. Potential correlates of agreement (level of surgical training, certainty of diagnosis, and wound type) were evaluated by multivariate analysis. Results: Agreement between onsite and remote surgeons (66% to 95% for wound description and 64% to 95% for wound management) matched agreement among onsite surgeons (64% to 85% for wound description and 63% to 91% for wound management). Moreover, when onsite agreement was low (i.e., 64% for erythema), agreement between onsite and remote surgeons was similarly low (i.e., 66% for erythema). Sensitivity of remote diagnosis ranged from 78% (gangrene) to 98% (presence of wound healing problem), whereas specificity ranged from 27% (erythema) to 100% (ischemia).
Agreement was influenced by wound type (p < 0.01) but not by certainty of diagnosis (p > 0.01) or level of surgical training (p > 0.01). Conclusions: Wound evaluation on the basis of viewing digital images is comparable with standard wound examination and renders similar diagnoses and treatment in the majority of cases. Digital imaging for remote wound management is feasible and holds significant promise for improving outpatient vascular wound care. (J Vasc Surg 1998;27:1089-1100.) From the Department of Surgery (Drs. Wirthlin, Brewster, Cambria, Gertler, LaMuraglia, and Abbott) and the Partners Telemedicine Center (S. Buradagunta, D. E. Jordan, and Drs. Edwards and Kvedar), Massachusetts General Hospital; and Harvard Medical School. Presented at the Twenty-fourth Annual Meeting of the New England Society for Vascular Surgery, Bolton Landing, N.Y., Sep. 18-19, 1997. Reprint requests: Douglas J. Wirthlin, MD, Department of Surgery/Division of Vascular Surgery, Massachusetts General Hospital, WAC 458, Boston, MA 02114. Copyright © 1998 by The Society for Vascular Surgery and International Society for Cardiovascular Surgery, North American Chapter. Telemedicine is an evolving field that combines telecommunications and information technologies to provide remote medical care, and it ranges from activities as simple as telephone consultation to technology as complex as telesurgery.1 Electronic transmission of clinical images for remote consultation is one component of telemedicine that has been successfully implemented and tested in the fields of dermatology, pathology, and radiology.
2-8 Moreover, Kvedar et al.9 observed that as many as 83% of dermatologic diagnoses could accurately be made by viewing still digital images. Accordingly, we hypothesized that still digital images could precisely represent vascular surgery wounds and that wound evaluation and management based on viewing digital images would closely match evaluation and management based on bedside examination. Validation of this hypothesis would suggest that digital imaging coupled with telemedicine could be implemented to enhance outpatient wound care. For example, using current technology, visiting nurses could photograph patients' wounds and transmit the digital images over a short period of time to the treating surgeon, thus allowing surgeons to manage wounds remotely. Remote wound management, in turn, has the potential to decrease the frequency of office visits, prevent unnecessary "urgent" wound evaluations, and shorten hospital stay for patients with wound complications. This potential application of telemedicine is particularly appealing because of the frequency and chronic nature of wound complications in vascular surgery, the large consumption of resources to treat these wounds, and the shift of health care from the hospital to the outpatient setting. Johnson et al.10 observed that 40% to 50% of patients who undergo lower extremity bypass procedures have a nonhealing ulcer that may require several months to heal despite a functioning bypass graft. Also, of those patients who undergo lower extremity bypass procedures, 20% to 30% will have a wound complication that will lengthen hospitalization and necessitate prolonged wound care.
11-13 Moreover, as suggested by Calligaro and others who have evaluated the impact of care pathways in vascular surgery, there is increasing pressure to shorten hospitalization and decrease the cost of care.14,15 Thus the purpose of this study was to determine the accuracy of wound evaluation and management based on viewing digital images, as an initial step in testing the feasibility of implementing digital imaging for remote wound management. PATIENTS AND METHODS The feasibility of digital imaging for remote wound diagnosis was evaluated by comparing wound evaluation and management based on viewing digital images of wounds with evaluation and management based on examining wounds at the bedside. Surgeons who examined wounds directly were labeled onsite surgeons, and surgeons who viewed digital images were labeled remote surgeons. The onsite surgeons' wound evaluation and management constituted the "gold standard" in this study. Fig. 1 shows an overview of the study design. The experimental protocol was approved by the Institutional Review Board of Massachusetts General Hospital (Accession #9607618), and all subjects gave informed consent.
Twenty-four inpatients (six female and 18 male) with 38 separate wounds were photographed. The wounds were categorized as follows: postoperative incision (n = 16), amputation site (n = 3), necrotic/gangrenous toes (n = 11), and nonhealing ulcer (n = 8). The number of wounds per extremity and the wound category were assigned on the basis of the onsite surgeons' examination and chart review. For postoperative incisions, any area of the incision with a wound complication that might independently influence wound evaluation was considered a separate wound. A brief history (age, sex, reason and date of admission, medical and surgical history, vascular physical examination, and treatments rendered from admission to the time of imaging) on each patient was obtained by chart review.

Digital photography and image display. During morning rounds, a nonphysician without any formal photography training (S. B.) photographed all wounds using a digital camera (Kodak DC50, Eastman Kodak Co., Rochester, N.Y.; resolution 736 x 504 pixels/in.2, single CCD chip, 24-bit color). Table I describes our imaging protocol. Seventeen of the 38 wounds were photographed on more than one day, rendering 45 image sets for comparison and 183 total images. Images were stored as highest-quality JPEG16 files (Joint Photographic Experts Group, a compression algorithm for digital images) and were converted to a Microsoft PowerPoint slide presentation to be viewed on a computer monitor (Mitsubishi 91TXM at the maximum attainable resolution). Fig. 2 shows examples of wound images.

All images were graded by a nonsurgeon, nonmedical photographer (E. R. M.) using a rating system developed to grade the image quality of standard 35 mm photographs.9 Images were assigned a photographic quality rating (0 [unreadable] to 5 [perfect quality]). Using this grading system, a score of 4 is the highest quality rating attainable with our digital camera because the resolution, color range, image memory, and lighting are inferior to standard 35 mm photographs. However, there are a number of "higher-end" digital cameras with photographic features and characteristics more similar to standard 35 mm cameras that would have scored consistently higher than the camera used in this study.

JOURNAL OF VASCULAR SURGERY, Volume 27, Number 6, Wirthlin et al.

Fig. 1. Study design. Two to four surgeons examine wounds at the bedside at the time of digital imaging; two to four surgeons evaluate wounds by viewing digital images of the wounds at a later time; both groups complete the wound questionnaire. Using a standard questionnaire and bedside wound examination as the "gold standard," the feasibility of digital imaging for remote wound management was measured by concordance between bedside and remote surgeons.

Wound evaluation. Wounds were evaluated by two to four onsite surgeons and two to four remote surgeons. Table II shows the number of evaluations completed and the level of surgical training of the evaluators (five vascular attendings, five vascular fellows, and six surgical residents). Onsite surgeons examined each wound at the bedside near the time of digital imaging and had no restrictions in method of examination, for example, palpation, olfaction, viewing from multiple angles, and time of evaluation. The setting (lighting, patient position, and background) during onsite examination matched the setting during digital photography.
In most cases, onsite surgeons were involved in the care of the patient, and occasionally, when an onsite surgeon was not the treating surgeon, a brief history was given before examining the wound.

Remote surgeons evaluated wounds by viewing images on a computer monitor some time after digital imaging. Before viewing wound images, remote surgeons were given a brief medical history of each patient. There was no time limit for viewing images, and remote surgeons were allowed to scroll back and forth among different views of the wounds. No image manipulations such as magnification, color enhancement, or contrast enhancement were used.

Table I. Imaging protocol

Camera (Kodak DC50) settings: auto focus, auto flash, auto exposure
Lighting: window shade closed, overhead examination light on
Position of patient: supine
Presentation of wound: blue pad under extremity; transparent ruler and patient identifier number placed adjacent to wound
Photographs taken (image set):
View of entire extremity (camera 40 inches from wound)
View of thigh (camera 20 inches from wound)
View of leg and foot (camera 20 inches from wound)
Close-up views* (camera 18 inches from wound with lens set on telephoto)

*Close-up views were taken of specific wounds (gangrenous/necrotic toes or nonhealing ulcers, and areas along a postoperative incision with a wound complication). The composition and number of close-up views taken per extremity were determined by an onsite surgeon.

Table II. Wound evaluations

              Onsite   Remote
Attending        7       55
Fellow          76       65
Resident*       16        0
Total            99     120

*Only two of six residents had completed less than 4 years of training.
After evaluating wounds, both onsite and remote surgeons completed a standard wound questionnaire, which addressed questions regarding wound description and wound management. Surgeons answered either "yes," "no," "present," or "not present" to each question and recorded a level of certainty for each response (1 [not certain] to 10 [absolutely certain]; Table III).

Statistical methods. All data were collected, stored in Microsoft Excel files, and imported into SAS (SAS Institute Inc., Cary, N.C.) for statistical analysis. Descriptive analysis was performed for the following endpoints: image quality (mean and range), certainty of response (mean and range), and agreement among and between surgeon groups (mean percent agreement and kappa values for each wound descriptor and management decision). In addition, prevalence of agreements and disagreements along with two variations of kappa values, kappa(nor) and kappa(max),17 were calculated (see Appendix). In an attempt to control for surgeon variation, agreement between remote and onsite surgeons was calculated in three different subsets of cases based on the level of agreement among onsite surgeons: (1) all cases; (2) only those cases in which onsite surgeon agreement was greater than 67%; and (3) only those cases in which onsite agreement was 100%. Ultimately, agreement was measured in five different categories: category I, agreement among onsite surgeons; category II, agreement among remote surgeons; category III, agreement between remote and onsite surgeons for all cases; category IV, agreement between remote and onsite surgeons only in those cases in which onsite surgeon agreement was greater than 67%; and category V, agreement between remote and onsite surgeons only in those cases in which onsite surgeon agreement was 100%.

Table III. Wound questionnaire*

Wound descriptors (present / not present):
Gangrene
Necrosis
Erythema
Cellulitis/infection
Ischemia†
Granulation tissue

Wound management decisions (yes / no):
Wound healing problem present
Need for examination within 24 hr by MD
Need for hospitalization
Need for antibiotics
Need for debridement

*The following additional wound descriptors and management decisions were asked but not used for analysis because of either infrequent occurrence, limited clinical significance, vagueness of surgeon response, or poorly phrased question: (1) ecchymosis; (2) exposed bypass graft; (3) nonhealing wounds; (4) exposed tendon/bone; (5) edema; (6) drainage; (7) bedrest; (8) dressing changes; and (9) leg elevation.
†This descriptor was added to the questionnaire 4 weeks into the study.
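The agreement endpoints defined under Statistical methods (percent agreement and the kappa statistic for paired "present"/"not present" responses) can be sketched as follows. This is a minimal illustration with invented ratings, not the study's data or its SAS code:

```python
from collections import Counter

def percent_agreement(a, b):
    """Proportion of cases in which two raters gave the same response."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def cohens_kappa(a, b):
    """Chance-corrected agreement (kappa) for two raters' categorical ratings."""
    n = len(a)
    po = percent_agreement(a, b)  # observed agreement
    ca, cb = Counter(a), Counter(b)
    # expected agreement if the raters responded independently
    # at their observed marginal rates
    pe = sum((ca[k] / n) * (cb[k] / n) for k in set(a) | set(b))
    return (po - pe) / (1 - pe)

# hypothetical "present"/"not present" calls for 10 wounds
onsite = ["present"] * 7 + ["not present"] * 3
remote = ["present"] * 6 + ["not present"] * 3 + ["present"]

print(percent_agreement(onsite, remote))       # 0.8
print(round(cohens_kappa(onsite, remote), 2))  # 0.52
```

Kappa discounts the agreement expected by chance from the raters' marginal response rates, which is why a pair with 80% raw agreement scores only about 0.52 here.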
To determine factors that influenced concordance, logistic regressions with dependent variables (certainty of response, agreement between onsite and remote surgeons, and disagreement between onsite and remote surgeons) and explanatory variables (onsite vs remote surgeon, level of surgical training, and wound type) were performed. Data sets for multivariate analysis were generated using the SAS-based random number generator to randomly eliminate remote and onsite evaluations in excess of two per wound. A p value of 0.01 was considered significant rather than using a p value of 0.05 with Bonferroni correction for multiple comparisons.

Sensitivity and specificity of remote wound evaluation were calculated for wound diagnoses, including presence of a wound healing problem, necrosis, ischemia, erythema, granulation, gangrene, and cellulitis/infection. Results were derived from two subsets of cases: greater than 67% agreement among onsite surgeons (category IV) and 100% agreement among onsite surgeons (category V). Sensitivity was calculated as the number of "present" responses by remote surgeons divided by the total number of opportunities for remote surgeons to respond "present" when onsite surgeons agreed (as defined in categories IV and V) to "present."

Fig. 2. Example of images representing each wound category. A, Postoperative incision; B, amputation; C, gangrenous/necrotic toes; D, nonhealing ulcer. Wounds were photographed from vascular surgery inpatients with a Kodak DC50 digital camera during morning rounds.
Specificity was calculated as the number of "not present" responses by remote surgeons divided by the total number of opportunities for remote surgeons to respond "not present" when onsite surgeons agreed (as defined in categories IV and V) to "not present."

RESULTS

There were no wound complications associated with the imaging process, and imaging took approximately 5 to 7 minutes per patient. Remote surgeons spent an average of 3 to 5 minutes viewing images of each patient. The mean quality index for all image sets was 3.3, with a range of 2.7 to 3.8, and none of the images were considered unusable.

For wound descriptors, the average certainty ranged from 9.5 to 9.9 among onsite surgeons and 8.2 to 9.8 among remote surgeons. Certainty was lowest for detection of ischemia and cellulitis among both onsite and remote surgeons. For wound management decisions, the average certainty ranged from 9.6 to 9.8 among onsite surgeons and 9.4 to 9.7 among remote surgeons. Certainty was lowest for use of antibiotics and need for hospitalization among both onsite and remote surgeons. Attending surgeons, compared with other evaluators, recorded lower certainty (certainty less than 10) when diagnosing erythema (odds ratio [OR], 0.06; p = 0.007). Otherwise, certainty was not influenced by remote versus onsite location, level of training, or wound type.

Table IV shows the average percent agreement and kappa values regarding wound descriptors, and Table V shows the average percent agreement and kappa values regarding wound management decisions. Prevalence data, kappa, kappa(nor), and kappa(max) for category V are shown in Table VI. The specificity and sensitivity of remote wound diagnosis are shown in Table VII.

The level of surgical training was not associated (p > 0.01) with agreement or disagreement between remote and onsite surgeons for both wound description and management.
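The sensitivity and specificity calculations described under Statistical methods reduce to simple ratios over the remote surgeons' responses within the onsite-consensus subsets. A small sketch with hypothetical tallies (not the study's counts):

```python
def sensitivity(true_present, false_not_present):
    """Fraction of remote "present" calls among cases whose onsite
    consensus (category IV or V) was "present"."""
    return true_present / (true_present + false_not_present)

def specificity(true_not_present, false_present):
    """Fraction of remote "not present" calls among cases whose onsite
    consensus (category IV or V) was "not present"."""
    return true_not_present / (true_not_present + false_present)

# hypothetical tallies for one wound descriptor
print(sensitivity(45, 5))   # 0.9
print(specificity(30, 10))  # 0.75
```

Because the denominator is restricted to cases where the onsite surgeons already agreed, these figures describe how well remote viewing reproduces a stable bedside consensus rather than accuracy against an independent truth.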
Wound type (nonhealing ulcer compared with postoperative incision) positively influenced concordance between remote and onsite surgeons when diagnosing gangrene (OR, 4.1; p = 0.0001), granulation (OR, 17.4; p = 0.0001), and determining the need for debridement (OR, 10.0; p = 0.0001). However, nonhealing ulcer was associated with decreased agreement between remote and onsite surgeons regarding need for antibiotics (OR, 0.4; p = 0.009) and need for hospitalization (OR, 0.2; p = 0.0003) and increased disagreement between remote and onsite surgeons regarding cellulitis/infection (OR, 3.1; p = 0.006) and need for emergent examination (OR, 8.6; p = 0.0001).

Table IV. Average percent agreement regarding wound descriptors

Columns: category I, agreement among onsite surgeons (n = 45); category II, agreement among remote surgeons (n = 45); category III,* agreement between onsite vs remote surgeons (n = 45); category IV,† agreement between onsite vs remote surgeons; category V,‡ agreement between onsite vs remote surgeons.

Necrosis: I 80% (κ = 0.53); II 90% (κ = 0.33); III 83% (κ = 0.60); IV 91% (n = 37, κ = 0.78); V 95% (n = 34, κ = 0.90)
Granulation tissue: I 81% (κ = 0.34); II 92% (κ = 0.44); III 80% (κ = 0.44); IV 91% (n = 33, κ = 0.64); V 93% (n = 31, κ = 0.45)
Ischemia§: I 80% (κ = 0.70); II 83% (κ = 0.50); III 78% (κ = 0.70); IV 91% (n = 8, κ = 0.00); V 91% (n = 7, κ = 0.00)
Gangrene: I 85% (κ = 0.77); II 93% (κ = 0.64); III 74% (κ = 0.55); IV 77% (n = 39, κ = 0.53); V 81% (n = 37, κ = 0.62)
Cellulitis/infection: I 67% (κ = -0.04); II 81% (κ = 0.08); III 62% (κ = 0.24); IV 65% (n = 33, κ = 0.08); V 69% (n = 27, κ = 0.00)
Erythema: I 64% (κ = 0.22); II 86% (κ = 0.28); III 60% (κ = 0.12); IV 65% (n = 30, κ = 0.02); V 66% (n = 28, κ = 0.02)

*Comparison made in all cases.
†Comparison made only in subset of cases when onsite surgeons agreed greater than 67%.
‡Comparison made only in subset of cases when onsite surgeons agreed 100%.
§Ischemia data available for 10 patients only.
κ = kappa value.

Table V. Average percent agreement regarding wound management decisions

Columns: category I, agreement among onsite surgeons (n = 45); category II, agreement among remote surgeons (n = 45); category III,* agreement between onsite vs remote surgeons (n = 45); category IV,† agreement between onsite vs remote surgeons; category V,‡ agreement between onsite vs remote surgeons.

Wound healing problem: I 95% (κ = 0.75); II 95% (κ = 0.31); III 87% (κ = 0.43); IV 92% (n = 39, κ = 0.48); V 91% (n = 39, κ = 0.48)
Emergent examination: I 64% (κ = 0.17); II 80% (κ = 0.41); III 74% (κ = 0.59); IV 89% (n = 29, κ = 0.41); V 86% (n = 26, κ = 0.44)
Need for hospitalization: I 85% (κ = 0.04); II 85% (κ = 0.15); III 79% (κ = 0.26); IV 87% (n = 42, κ = 0.67); V 84% (n = 39, κ = 0.80)
Antibiotics: I 68% (κ = 0.28); II 80% (κ = 0.05); III 63% (κ = 0.39); IV 71% (n = 38, κ = 0.39); V 69% (n = 30, κ = 0.40)
Debridement: I 74% (κ = 0.50); II 78% (κ = 0.18); III 60% (κ = 0.30); IV 66% (n = 38, κ = 0.30); V 63% (n = 35, κ = 0.23)

*Comparison made in all cases.
†Comparison made only in subset of cases when onsite surgeons agreed greater than 67%.
‡Comparison made only in subset of cases when onsite surgeons agreed 100%.
κ = kappa value.
DISCUSSION

Telemedicine has been in development for more than 30 years and has experienced rapid growth in the 1990s. Proponents of telemedicine envision several valuable applications: providing specialty care to underserved areas, increasing the efficiency of existing medical resources, expanding a hospital's service area, and attracting international health care dollars to the United States. Moreover, Perednia and Allen18 predict that by 2000 many physicians will be directly or indirectly involved in clinical telemedicine. Telemedicine is already an integral tool in most radiology practices, and there is ongoing development and clinical investigation of telemedicine in almost all medical fields.

This investigation is the first to critically evaluate the feasibility of using digital images for remote wound management. The main objective of this study was to establish the "proof of concept" of digital imaging before implementing this technology in vascular surgery home care. Implementation of telemedicine requires validation of quality of care, ease of use, cost-effectiveness, and acceptance by both patients and physicians. Our observations from this preliminary investigation suggest that digital photography in conjunction with telemedicine can provide quality remote wound care using a relatively simple and cost-effective protocol.

Table VI. Kappa and prevalence data for wound management decisions and descriptors in category V*

                            Kappa   Kappa(nor)†   Kappa(max)†   A true(+)   B false(-)   C false(+)   D true(-)   n‡
Wound management decisions
Wound healing problem       0.48    0.79          0.79          55          2            5            4           33
Emergent examination        0.44    0.44          0.48          20          4            10           16          25
Need for hospitalization    0.80    0.83          0.83          13          3            1            29          23
Antibiotics                 0.40    0.40          0.45          19          7            11           23          30
Debridement                 0.23    0.70          0.70          2           1            9            54          33
Wound descriptors
Necrosis                    0.90    0.91          0.91          43          1            2            20          33
Granulation tissue          0.45    0.67          0.68          6           2            8            44          30
Ischemia§                   0.00    0.67          0.68          5           1            0            0           3
Gangrene                    0.62    0.62          0.64          25          7            7            35          37
Cellulitis/infection        0.00    0.22          0.32          0           0            21           33          27
Erythema                    0.02    0.19          0.30          28          6            16           4           27

*Comparison made only in subset of cases when onsite surgeons agreed 100%.
†Po = proportion of agreements as defined by Lantz et al.17; kappa(nor) = 2Po - 1. See Appendix for explanation of kappa(nor) and kappa(max).
‡n = number of cases.
§Ischemia data available for 10 patients only.

Table VII. Sensitivity and specificity for wound descriptors

                         Agreement category IV*      Agreement category V†
Descriptor               Sensitivity  Specificity    Sensitivity  Specificity
Wound healing problems   98%          53%            98%          53%
Necrosis                 98%          82%            98%          87%
Ischemia                 88%          100%           88%          100%
Erythema                 87%          26%            89%          27%
Granulation              77%          97%            82%          96%
Gangrene                 75%          82%            78%          85%
Cellulitis/infection     71%          65%            NA           66%

*Calculations performed on subset of cases in which onsite surgeons agreed greater than 67%.
†Calculations performed on subset of cases in which onsite surgeons agreed 100%.

The potential quality of remote wound care was determined using bedside examination as the frame of reference and evaluating concordance between remote and onsite surgeons.
Concordance of response based on viewing various imaging media versus more conventional evaluation (hard-copy radiographs, glass slides, or patient examination) has been used to validate new technology in teleradiology, telepathology, and teledermatology.9,19-21

In our study, agreement between remote and onsite surgeons for both wound diagnosis and management was high (63% to 95%) and was comparable with concordance observed in teledermatology (83%).9 Using onsite evaluation as a reference, remote surgeons were able to make equivalent diagnoses in 66% to 95% of image sets and recommend comparable management in 66% to 92% of cases. We also observed that the level of disagreement between remote and onsite surgeons equaled the disagreement among onsite surgeons, suggesting that the imaging media did not independently impact agreement. Moreover, wound healing problems and specific wound conditions are readily detected by digital images, as demonstrated by the sensitivity of remote wound diagnosis (71% to 98%). These observations suggest that in general remote wound management via digital imaging will render care comparable with conventional evaluation.

Kappa values generated trends similar to percent agreement; that is, the range of kappa values among onsite surgeons (-0.04 to 0.75) was comparable with that of remote versus onsite surgeons (0.00 to 0.90), and when kappa values were low among onsite surgeons, values were similarly low between remote and onsite surgeons. In only one parameter (need for debridement) was the kappa value in category V less than 0.4, whereas the corresponding kappa value in category I was greater than 0.4 (a kappa value less than 0.4 denotes "marginal reproducibility").23 Our data also depicted the base rate problem in kappa statistics described by Lantz et al.17 and Spitznagel et al.,22 in which kappa values may be skewed by prevalence. For example, in several instances (i.e., presence of wound healing problem, granulation tissue, and ischemia) high percent agreement was associated with a low kappa value because of unevenly distributed prevalence. Correcting for uneven prevalence using kappa(nor) increased the concordance between onsite and remote surgeons (see Appendix). Thus when there was reproducible agreement among onsite surgeons, digital imaging succeeded in that agreement between remote and onsite surgeons was high in these cases.

Certainty of remote wound evaluation and management was no different than that of onsite surgeons, and contrary to the experience in teledermatology the certainty of diagnosis varied little and did not influence agreement.9 However, the certainty level was universally high, which may represent a response construct among surgeons (surgeons are more likely to respond "10" [completely certain]). Nonetheless, viewing digital images did not appear to influence surgeons' certainty of wound diagnosis and management.

Using our imaging protocol, remote wound care may be limited by decreased ability to accurately represent erythema. Agreement between remote and onsite surgeons was lowest for erythema (66%), cellulitis (69%), and management decisions that routinely follow the diagnosis of cellulitis, such as antibiotics (66%) and urgent examination (71%). Several factors may explain this observation, including variable sensitivity of surgeon examination, learning curve of remote diagnosis, and purely technological factors related to digital imaging. Of these, variation of surgeon examination appeared to have the greatest impact, as shown by percent agreement and kappa values (poor onsite agreement appeared to be associated with decreased agreement between remote and onsite surgeons).
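The base rate problem discussed above is easy to reproduce numerically: when nearly every wound has the finding, two raters can agree 90% of the time yet produce a near-zero (here slightly negative) kappa, while the prevalence-corrected kappa(nor) = 2Po - 1 reported in Table VI remains high. A minimal sketch with invented ratings, not the study's data:

```python
from collections import Counter

def kappa(a, b):
    """Cohen's kappa for two raters' categorical responses."""
    n = len(a)
    po = sum(x == y for x, y in zip(a, b)) / n
    ca, cb = Counter(a), Counter(b)
    pe = sum((ca[k] / n) * (cb[k] / n) for k in set(a) | set(b))
    return (po - pe) / (1 - pe)

# 20 hypothetical cases with a highly skewed base rate: the finding is
# "present" in 19 of 20 onsite evaluations
onsite = ["present"] * 19 + ["not present"]
remote = ["present"] * 18 + ["not present", "present"]

po = sum(x == y for x, y in zip(onsite, remote)) / len(onsite)
print(f"observed agreement Po = {po:.2f}")         # 0.90
print(f"kappa = {kappa(onsite, remote):.2f}")      # -0.05
print(f"kappa(nor) = 2Po - 1 = {2 * po - 1:.2f}")  # 0.80
```

The expected chance agreement is driven almost entirely by the dominant "present" category, so kappa collapses despite high raw agreement; kappa(nor) discards the marginal rates and depends only on Po.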
Variation among physicians has been documented in other fields24-31 and has been cited as a limitation of evaluating wound infections after vascular surgery.11 Thus, without a criterion standard for wound description or management, we used bedside examination as the "gold standard" and observed considerable disagreement among onsite surgeons. For example, onsite surgeons disagreed in approximately one third of cases regarding the presence of erythema or cellulitis and regarding important management decisions such as need for urgent evaluation and hospitalization. Also, we observed similar variation among remote surgeons. Thus physician variability, as in other investigations, limited our ability to evaluate this technology and may have adversely impacted concordance regarding erythema and cellulitis. In an attempt to control for surgeon variation, we stratified cases according to level of onsite agreement and observed increasing concordance between remote and onsite surgeons (cellulitis, 62% to 65% to 69%). Unfortunately, because of small patient numbers we were unable to simultaneously control for remote and onsite surgeon variability. Finally, physician variation observed in this study and other investigations of wound complications speaks to the complexity of managing vascular wounds and begs for evaluation/technology that can more objectively evaluate erythema and cellulitis.

There is likely a learning curve related to remote wound management that may have impacted concordance of diagnosing erythema and cellulitis. For example, multivariate analysis showed that remote surgeons were less certain of diagnosing cellulitis compared with making other diagnoses. Multivariate analysis also showed that disagreement regarding erythema and cellulitis was more likely for nonhealing ulcers compared with other wound types.
We observed that remote surgeons appeared more hesitant to render diagnoses and treatments during their initial remote evaluations compared with later evaluations. Moreover, remote surgeons "overdiagnosed" and "overtreated" wounds compared with onsite surgeons; that is, remote surgeons more often diagnosed erythema and more often prescribed antibiotics. This is depicted in the high number of false positive results when remote surgeons disagreed with onsite surgeons (16 of 22 for erythema and 21 of 21 for cellulitis; Table VI). Accordingly, the sensitivity of remotely diagnosing erythema was high (87%), whereas the specificity was low (26%). Thus, at present, erythema is readily diagnosed and may at times be "overdiagnosed" because of extra caution on the part of remote surgeons. However, with continued experience we would expect the accuracy of detecting and treating erythema/cellulitis to improve.

Technological factors may have also had an impact on the accuracy of diagnosing erythema and cellulitis. Numerous factors influence image quality, including the photographer, lighting, resolution, and computer monitor. We purposely chose a digital camera priced at less than $1000 to test a protocol that would be feasible from a cost perspective when implemented in nursing home care. The resolution of the camera used in this study (756 x 506 pixels/in.2) has been shown to be adequate compared with higher resolutions for clinical diagnosis in teledermatology.32 The single color chip, however, appears to be very sensitive to lighting and contains an infrared filter that depicts ultraviolet light as red color. This may have overrepresented red tones in images of vascular wounds.
Subsequently, we have observed that lighting (incandescent versus fluorescent) alters the representation of erythema in patients with lower extremity cellulitis (unpublished data) and have since used polarizing filters to correct this characteristic of single-chip digital cameras. Despite this potential flaw, the image quality is excellent and provides enough visual information to appropriately manage the majority of wounds. Moreover, there are several cameras with higher resolution and multiple color sensors that may be needed to optimize remote wound management. Finally, this technology omits important components of physical examination, such as palpation and olfaction, that may allow for more accurate diagnosis. Nonetheless, digital imaging for remote wound diagnosis is intended as an adjunct to standard physical examination performed by the local care provider (home care nurse or physician).

Our observations suggest that health care providers of varied levels of training and background can successfully implement this technology with relatively low costs in terms of both time and equipment. All images were taken by an individual with no medical or photographic background after minimal training (6 to 8 hours). The photographic quality was universally good, and all images were considered clinically useful. Moreover, the imaging process was quick (5 to 7 minutes per patient) and did not complicate patient care. Since this project, vascular home care nurses have been trained in a short period of time (about 2 weeks) to photograph wounds and transmit images from patients' homes to the attending surgeon (Fig. 3).
Although we did not specifically evaluate cost, several observations suggest that this telemedicine application will be cost-effective. First, the digital camera is relatively inexpensive and decreased in price during the study period ($800 to $500). We expect that the cost of this technology will continue to decrease. Second, the equipment to view images is simple and relatively inexpensive (standard desktop computer, monitor, and software), and currently attending surgeons can access images of patients' wounds from their office computers or any other computer connected to the Massachusetts General Hospital network. The greatest cost of implementing this telemedicine application has been the laptop computers ($3500) used by home care nurses to transmit images from patients' homes. This cost could be easily offset by fewer office visits, emergency room evaluations, and shortened hospital stay. However, the ultimate cost-effectiveness will need to be proven in a prospective trial. Finally, our experience suggests that physician and patient acceptance of this technology is high, and currently nearly all eligible outpatients consent to digital imaging of their wounds.

Fig. 3. Wound images transmitted from a patient's home to the attending vascular surgeon during a home care visit. Images were taken by home care nurses and transmitted with a laptop computer and standard phone line. A, Time zero. B, One month later.

However, a prospective trial of remote wound management in outpatients, evaluating the operational feasibility, quality of care, and cost-effectiveness, is needed to further validate this technology.
Also, further research of remote wound care must be coupled with accurate assessment of cost and quality of outpatient care, which at present, despite much data on inpatient care, are unknown.33 Moreover, recent reports suggest that quality of home care may affect medical costs and outcomes,33-35 which corroborates our experience with specialized vascular home care. Implementation of remote wound surveillance along with other applications in nursing home care could improve the quality of outpatient care and provide specialized care to remote locations. Also, we did not investigate the potential of image manipulation such as simultaneous display of wounds at varying time intervals, wound area measurement, and outlining of specific shades of redness. These techniques could allow for more objective wound evaluation. Finally, issues related to confidentiality, licensure, liability, and reimbursement, which have not been entirely resolved in other fields of telemedicine, must be considered in conjunction with development of this technology.18

Certainly, technology in digital imaging and electronic transfer of data will advance independent of health care trends, and continued research in telemedicine should capitalize on this progress. Technologic advancements that would complement digital imaging for remote wound management include: (1) development of an electronic patient record that incorporates digital images, radiographs, noninvasive laboratory data, and the hospital chart; and (2) development of equipment to collect and transmit vascular noninvasive data for remote graft surveillance.

CONCLUSION

This study suggests that digital imaging for remote wound management is feasible on the basis of high concordance between remote and onsite surgeons regarding wound evaluation and management.
This application of telemedicine has the potential to improve the quality of current outpatient wound care while decreasing the cost by allowing home care nurses to photograph wounds and transmit images over any distance in a short period of time. This hypothesis needs to be verified in a prospective clinical trial assessing clinical outcomes and medical costs. The available technology in image media and electronic data transfer is advanced; however, the medical application of this technology is in its infancy. Telemedicine holds significant promise in vascular surgery but will require careful, well-designed research for development and validation of its clinical usefulness.

We thank the Eastman Kodak Co. for supplying equipment (Kodak DC50 camera), and we gratefully acknowledge the assistance of the nonauthor physicians who evaluated wounds for this study, specifically Jeffrey Slaiby, MD, Peter Purcell, MD, Joseph Giglia, MD, and a number of General Surgery Residents at Massachusetts General Hospital. In addition, we acknowledge the assistance of Virginia Capasso, RN, Debbie Burke, RN, and the other nurses on Bigelow 14 (Massachusetts General Hospital); Eric R. Menn, Kimberly D. Galbraith, Linda A. Mottle, and other Telemedicine Center personnel; and Philip A. Amato, PhD, who supplied the Kodak DC50 for our use.

REFERENCES

1. Bowersox JC, Shah A, Jensen J, Hill H, Cordts PR, Green PS. Vascular applications of telepresence surgery: initial feasibility studies in swine. J Vasc Surg 1996;23:281-7.
2. Grigsby J, Kaehny MM, Sandberg EJ, Schlenker RE, Shaughnessy PW. Effects and effectiveness of telemedicine. Health Care Financing Rev 1995;17:27-34.
3. Perednia DA, Allen A. Telemedicine technology and clinical applications. JAMA 1995;273:483-8.
4. Hassol A, Gaumer G, Grigsby J, Mintzer CL, Puskin DS, Brunswick M.
Rural telemedicine: a national snapshot. Telemed J 1996;2:43-8.
5. Federman D, Hogan D, Taylor JR, Caralis P, Kirsner RS. A comparison of diagnosis, evaluation, and treatment of patients with dermatologic disorders. J Am Acad Dermatol 1995;32:726-9.
6. Herman PG, Gerson DE, Hessel SJ, Mayer BS, Warnick M, Blesser B, et al. Disagreements in chest roentgen interpretation. Chest 1975;63:278-82.
7. Perednia DA, Brown NA. Teledermatology: one application of telemedicine. Bull Med Libr Assoc 1995;83(1):42-7.
8. Menn ER, Kvedar JC. Teledermatology in a changing health care environment. Telemedicine 1995;1:303-8.
9. Kvedar JC, Edwards RA, Menn ER, Mofid M, Gonzalez E, Dover J, et al. The substitution of digital images for dermatologic physical examination. Arch Dermatol 1997;133:161-7.
10. Johnson JA, Cogbill TH, Strutt PJ, Gundersen AL. Wound complications after infrainguinal bypass. Arch Surg 1988;123:859-62.
11. Reifsnyder T, Bandyk D, Seabrook G, Kinney E, Towne J. Wound complications of the in situ saphenous vein bypass technique. J Vasc Surg 1992;15:843-50.
12. Donaldson MC, Mannick JA, Whittemore AD. Femoral-distal bypass with in situ greater saphenous vein. Ann Surg 1991;213:457-65.
13. Szilagyi DE, Smith RF, Elliott JP, Vrandecic MP. Infection in arterial reconstruction with synthetic grafts. Ann Surg 1972;176:321-33.
14. Calligaro KD, Dougherty MJ, Raviola CA, Musser DJ, DeLaurentis DA. Impact of clinical pathways on hospital costs and early outcome after major vascular surgery. J Vasc Surg 1995;22:649-57.
15. Patterson RB, Whitley D, Porter K. Critical pathways and cost-effective practice. Semin Vasc Surg 1997;10(2):113-8.
16. PC Webopaedia (online). Available from: www.pcwebopedia.com/JPEG.htm (1997, Sep. 2).
17. Lantz CA, Nebenzahl E. Behavior and interpretation of the kappa statistic: resolution of the two paradoxes. J Clin Epidemiol 1996;49:431-4.
18. Perednia DA, Allen A. Telemedicine technology and clinical applications.
JAMA 1995;273:483-8.

JOURNAL OF VASCULAR SURGERY, Volume 27, Number 6. Wirthlin et al. 1099

19. Zelickson BD, Homan L. Teledermatology in the nursing home. Arch Dermatol 1997;133:171-4.
20. Halliday BE, Bhattacharyya AK, Graham AR, Davis JR, Leavitt SA, Nagle RB, et al. Diagnostic accuracy of an international static-imaging telepathology consultation service. Hum Pathol 1997;28(1):17-21.
21. Scott WW, Rosenbaum JE, Ackerman SJ, Reichle RL, Magid D, Weller JC, et al. Subtle orthopedic fractures: teleradiology workstation versus film interpretation. Radiology 1993;187:811-5.
22. Spitznagel EL, Helzer JE. A proposed solution to the base rate problem in the kappa statistic. Arch Gen Psychiatry 1985;42:725-8.
23. Rosner B. Hypothesis testing: categorical data. In: Kugushev A, editor. Fundamentals of biostatistics. Belmont, Calif.: Wadsworth Publishing; 1995. p. 426.
24. Garland LH. Studies on the accuracy of diagnostic procedures. AJR Am J Roentgenol 1959;82:25-38.
25. Hillman BJ, Hessel SJ, Swensson RC, Herman PG. Improving diagnostic accuracy: a comparison of interactive and delphi consultations. Invest Radiol 1977;12:112-5.
26. Diagnostic decision process in suspected pulmonary embolism: report of the Herlev Hospital study group. Lancet 1979;1:1336-8.
27. Bader JD, Shugars DA. Variation in dentists' clinical decisions. J Public Health Dent 1995;55:181-8.
28. Davis PB, Gee RL, Millar J. Accounting for medical variation: the case of prescribing activity in a New Zealand general practice sample. Soc Sci Med 1994;39:367-74.
29. Records NL, Tomblin JB. Clinical decision making: describing the decision rules of practicing speech-language pathologists. J Speech Lang Hear Res 1994;37:144-56.
30. Leunens G, Menten J, Weltens C, Verstraete J, van der Schueren E. Quality assessment of medical decision making in radiation oncology: variability in target volume delineation for brain tumours. Radiother Oncol 1993;29:169-75.
31. Lidegaard O, Bottcher LM, Weber T.
Description, evaluation and clinical decision making according to various fetal heart rate patterns: inter-observer and regional variability. Acta Obstet Gynecol Scand 1992;71:48-53.
32. Bittorf A, Fartasch M, Schuler G, Diepgen TL. Resolution requirements for digital images in dermatology. J Am Acad Dermatol 1997;37:195-8.
33. Dahlberg NL. A perinatal center antepartal homecare program. J Obstet Gynecol Neonatal Nurs 1988;17:30-4.
34. Donleavy J. Responsive restructuring: part I. Acute care nursing provider home visits. New Definition 1993;8:1-3.
35. Heaman M, Thompson L, Helewa M. Patient satisfaction with an antepartal home care program. J Obstet Gynecol Neonatal Nurs 1994;23:707-13.

Submitted Sep. 22, 1997; accepted Feb. 11, 1998.

APPENDIX

As shown by Lantz et al.17 and Spitznagel et al.,22 kappa values can vary for a given level of agreement depending on the prevalence of agreements (true positives and true negatives) and disagreements (false positives and false negatives). Kappa(nor), a parameter derived from the proportion of agreements (Po), has been suggested as a solution for the variability observed in kappa values, where Po = (a + d)/N and Kappa(nor) = 2Po - 1. Kappa(nor) corrects for asymmetry of prevalence and is equal to kappa only when there is symmetry of both agreements and disagreements. Kappa(max) is the maximum kappa value for a given prevalence of agreement and disagreement.
Kappa(max) balances agreement in the two agreement categories and maximally skews disagreement in the disagreement categories using the equation Kappa(max) = Po^2 / [(1 - Po)^2 + 1]. See references 17 and 22 for further discussion.

DISCUSSION

Dr. James Estes (Boston, Mass.). I applaud your interest in looking at the technologic front here in terms of combining computers and medicine, and I enjoyed your talk. I have one question from a practical standpoint: where do you see this technology being applied, specifically, in terms of justifying the costs of the equipment for acquiring and transmitting digital information?

Dr. Douglas J. Wirthlin. I think the potential for this technology is enormous. We embarked on this project planning to implement digital imaging in outpatient management of vascular wounds to decrease cost of care while maintaining quality of care. The cost of equipment is minimal compared with the potential cost savings. However, this needs to be proven in a prospective trial.

Dr. Carl E. Bredenberg (Portland, Me.). Our experience thus far at Maine Medical Center with these types of techniques has been limited largely to shared educational conferences, particularly in vascular surgery, with Dave Pilcher and Michael Ritchie at the University of Vermont, and it's remarkable, I think, how lively and alive this is when you are using not these still cameras but television. I'm still not sure, and this is in part perhaps in answer to the previous question, whether this is the good news or the bad news from an institution's point of view.
For a rural network, for example, I could argue that to be able to visualize wounds of a patient up in Caribou, to visualize those down in Portland, and make decisions about whether or not to get the patient to Portland is clearly the good news. Now if, however, it is a patient in South Portland and this is being viewed on Fruit Street at the Massachusetts General Hospital, then I'm less persuaded that this is truly good news. This is not entirely speculation. For example, at Mercy Hospital in Portland the radiology department stationery carries the logo of the Massachusetts General Hospital radiology department along with their own, so the implications of this technology are indeed far-reaching in many ways.

Dr. Wirthlin. In regards to your first comment, you mentioned that the video imaging was very useful. We use still digital images because the infrastructure requirements for transmitting video messages are much greater and more expensive. Also, the resolution of a still digital image is better than that of a video image. In regards to whether this is good news or bad news, I think from a patient standpoint it is good news as long as the technology is developed and implemented properly. How this technology may be used at major medical centers is unclear at this time, and really this technology is in its infancy. There is much work that needs to be done before this can be implemented, for example, a clinical trial proving that this is safe and cost-effective.

Dr. David B. Pilcher (Colchester, Vt.).
This is a very disturbing paper, because it seems to me that you are saying that we surgeons who talk to patients, who interact with patients when we are visualizing their wounds, can now be radiologists and just look at the still pictures. I don't think your conclusion that telemedicine has a value is really valid. Your conclusion is that looking at still pictures is of value. That's not really telemedicine. That's a lot cheaper than telemedicine, as I think you just said; transmitting digital pictures doesn't require full telemedicine, nor does it allow the full potential, so I don't think you are really looking at telemedicine.

Dr. Wirthlin. The definition of telemedicine is very broad. Telemedicine can be as simple as a phone consultation or a 911 phone call, or as complex as telepresence surgery. This project was developed just to test the feasibility of digital imaging. In other words, how accurately does a digital image represent a wound? This is just the first step in the development of this technology, and you're right, we did not test the telemedicine application of digital imaging yet. That is the next project, in which home care nurses will transmit the images to the treating surgeons. The process of transmitting the digital images makes it a telemedicine application. You're right, viewing a digital image is not equal to a standard physical examination, but it is comparable. Also, in terms of the impact this technology may have on the human element or the patient-physician relationship, my impression during the study was that patients are very enthusiastic about this technology and would gladly pass up several office visits.

Dr. Thomas F. O'Donnell (Boston, Mass.).
I would like to make just a short comment. Dr. Bredenberg, at least at New England Medical Center we're not setting up telemedicine units in "distant" Maine but rather in less-remote places like Argentina, the United Arab Emirates, and Saudi Arabia. (laughter) I have one question for the authors about cellulitis and erythema. Obviously visualization is one of the four components of a physical examination, but telemedicine does not let you touch the wound and sense the skin temperature or degree of induration. You don't get this important aspect of an examination from a picture, so the examining physician is unable to determine whether the skin is warm, which indicates cellulitis. Do you want to comment on that?

Dr. Wirthlin. Viewing a digital image will not be as good as the standard physical examination for detecting cellulitis and erythema. Also, we observed that digital images actually overrepresent red tones and that further development of the technology is needed to improve the accuracy of detecting erythema and cellulitis.

Dr. Jens Jorgensen (South Portland, Me.). Maine is a big state, and there are some corners of our state that are further from Portland than New York City is. About once a week I will see a patient who has driven 4 to 6 hours to come in for an appointment. Therefore if I get a call in the postoperative period from the patient or their physician that they are a little bit concerned about the appearance of the wound or the foot, it is oftentimes with a great deal of reticence that I say, "Why don't you swing by the office and I'll take a look." So I think this technology has a great deal of application in geographically disparate states such as Maine.
I thought it was a great presentation, and I would like to thank you for bringing this material to this forum.

Dr. Joel A. Berman (Springfield, Mass.). I think that the issue here isn't so much whether this technology can supplant the physical examination on the part of the physician, but rather its application in comparing the evaluation of the visiting nurse with the evaluation of the surgeon on the basis of the images that you obtain. Have you given any consideration to comparing the accuracy of the description that the visiting nurse gives you with the evaluation of the physician on the basis of your images?

Dr. Wirthlin. We plan to evaluate that issue in the next phase of the study.

----

Nature exquisiteness based digital photography arts project for creativity enhancement among low achievers students (PROSFDak)

Siti Nuur Adha Mohd Sanif*, Zaharah Hussin, Fatiha Senom, Saedah Siraj & Abu Talib Putih
Department of Educational Foundations & Humanities, Faculty of Education, University of Malaya, 50603 Kuala Lumpur, Malaysia

Procedia - Social and Behavioral Sciences 103 (2013) 675-684. 13th International Educational Technology Conference. doi: 10.1016/j.sbspro.2013.10.387. © 2013 The Authors. Published by Elsevier Ltd. Open access under CC BY-NC-ND license. Selection and peer-review under responsibility of The Association of Science, Education and Technology-TASET, Sakarya Universitesi, Turkey.

Abstract

This study was conducted to examine the effectiveness of project-based learning in digital photography for creativity enhancement among a group of low achievers students in a secondary vocational school.
Drawing on the Isman Instructional Design Model as the module design process, this study employed a quasi-experimental method. A group of 40 low achiever students was selected as the sample for a single case study design, conducted over 16 R & D sessions within 3 weeks. The researchers modified the PROSFDak project module using the Curriculum Plan: Picturing Peace: Creative Digital Photography Project by ArtsBridge. The implications of the study for the Ministry of Education (MOE), teachers and students are also discussed.

Keywords: digital photography, creativity, project-based learning, Isman instructional design model, quasi-experimental

1. Introduction

Nature exquisiteness based digital photography arts project (PROSFDak) is project-based learning designed aptly to suit the needs of vocational secondary school students who have difficulties in academic learning. Using digital cameras, students are able to produce works of photography that express the value and beauty of the environment. This art project is potentially powerful in attracting and motivating students who have less focus in conventional teaching and learning activities in schools.

* Corresponding author. Tel.: +6-012-692-5345; fax: +603-7956 5506. E-mail address: nuuradha@um.edu.my
The purpose of PROSFDak is to facilitate students in developing skills and competencies in information and technology, such as basic skills and digital-age skills, in order to equip them to face a better future (Chan, 2010). In addition, this project-based learning assists students to understand and apply knowledge in their own field. Through these skills, students are able to manage their learning independently and in more effective ways. Furthermore, PROSFDak is able to build students' awareness of environmental conservation by encouraging aesthetic appreciation of the beauty of nature.

Photography is the process of producing images by the action of light using a recording tool recognized as a camera. As a technique, photography was first introduced in 1839 with the invention of the daguerreotype by Louis-Jacques-Mande Daguerre (Mary, 2010). The term photography comes from two Greek words, phos, meaning light, and graphe, meaning drawing, which together convey 'painting with light' (John Ingledew, 2005). To understand the root of photography is thus to understand painting with light. Today, growing technology has changed film photography into digital photography, beginning with the Sony Mavica in 1981 and Kodak's camera in 1991. Nowadays, a digital camera uses electronics to record light rather than using light-sensitive paper exposed to light (Mary, 2010). Before photography existed, the medium of communication was imagination and existing experience. Photographic documentation should be easy and fun, and should make a story easily understood.
The professional photographer Joseph Meehan (2008) says the content of photography may tell a story, state a fact, convey feelings and control atmosphere, with an expressive power greater than the dramatic results of today's digital imaging technology. What we understand is that photography, in its capacity to deliver a message, has come to equal and even rival delivery systems that use text. Digital photography is one of the elements of visual culture, which is based on visual media such as images, sculpture and the art of dance. Since the 19th century, photography has been a medium to capture images as memory and proof of existence (Sontag, 1977). Photography communicates through images; various information and meanings may be found in a picture. Mary (2010) argues that a photographer uses the medium to inspire or to elicit information for record storage, journalism, and scientific documentation.

Photography has the potential to assist students' learning, particularly in spurring their interest to learn. This is because, unlike conventional learning tools, the images or subjects are presented directly through slides, film and other visual tools via photography (Mitchell & Weber, 1999). Photography also teaches students ethics and experience through pictures, such as pictures depicting remorse in battle, cruelty, fear and human civilization. As the saying we often hear goes, "a picture tells a thousand stories." In 2000, the Malaysian Ministry of Education released a circular letter on the promotion of photography activities among school children. The purpose of promoting photography activities among school pupils is to make room for pupils in school to develop their potential in a holistic and integrated manner, producing individuals who are intellectually, spiritually, emotionally and physically balanced.
Guided by the circular letter, the researchers use photography as a subject of learning to encourage students' creative work and their appreciation of art education. PROSFDak enhances students' understanding and achievement in learning basic digital photography techniques, elements of design, and the discussion and understanding of pictures through the digital photography module guides. Therefore, the present study investigates the impact of PROSFDak on the understanding and achievement of low achiever students in arts and creativity. Through this digital photography arts project, students can communicate through pictures as a form of non-verbal communication. With non-verbal communication, the meaning intended by the messenger can be easily understood by the recipients (Hashim, Mohammed Isaac and David, 2009). In this project, the students go through several processes, namely understanding, interest, desire, individual sensitivity, communication skills and ethical responsibility. The project will also attract students to understand and take an interest in Visual Arts Education, as exciting learning frameworks are highlighted to engage students in arts education sessions. This is aligned with the learning of Visual Communication in the KBSM Visual Arts syllabus (2006): Graphic Design, Posters, Music, Logo and Mascot, Calligraphy, Typography, Packaging, Environmental Graphics, Illustration, Computer Graphics, and Multimedia. Arts education encourages students to explore various mediums, use their imagination, take intellectual risks, form visual intelligence, engage in self-instructional projects, explore the symbolic function in art, hold dialogue with others about the creative process and the results of their work, and build self-assessment skills (Cromwell, 2000).
Other benefits of arts education are to appreciate nature, appreciate the grace of God, sharpen the mind, shape good emotions, and build multi-sensory skills (Ghazie Ahmed Hashim Osman Ibrahim, 2007). Therefore, nature exquisiteness is chosen as the theme for this digital photography project. Aminudin (2004) argues that students who are active in extra-curricular activities are more likely to have good academic achievement. Thus, participation in PROSFDak could encourage low academic achieving students to be active in extra-curricular activities. Activities participated in by the students will not have a negative impact on their academic achievement. Co-curricular activities aim to diversify the knowledge and experience needed for students' intellectual, talent and physical development, and also to develop student leadership, aesthetic values, self-esteem and positive social values (the National Education Policy, Ministry of Education, 2012). Arts education will produce students who are independent, develop students' talents, enable them to express their views, practise good values in society and realize the career opportunities in the arts field.

2. The Aim of Research

The aim of this research is to enhance creativity among a group of low achievers students and to determine students' creativity and interest in the field of digital photography. The study also aims to investigate the impact of PROSFDak on students' academic achievement in technical skills through the examination of test scores, literacy and oral language skills. Addressing a gap in the literature, photographic technique offers potential benefits to art education, specifically in social aspects, to ensure effectiveness in students' development and learning in school life (Albertson & Davidson, 2007). This study will use some aspects of photography for students to learn as process and practice approaches in a teaching studio environment, together with cultural and historical context for understanding art and photography.
This study also shapes its design process by employing the Isman instructional design model for teaching and learning. To achieve this aim, the researchers set four research objectives. The first objective is to investigate to what extent project-based learning in digital photography is effective in enhancing creativity among a group of low achievers students. The second objective is to examine possible differences between boys and girls in the creativity of the artworks produced in the nature exquisiteness based digital photography arts project. The third objective is to find out to what extent the nature exquisiteness based digital photography arts project is effective in cultivating students' interest in art education. The fourth objective is to investigate possible significant differences in students' ability to create images using creative imagination under the pre- and post-activities design.

3. Significance of the Study

Guided by the goal of art education, the findings of the study will nurture and shape the younger generation's understanding of culture and their high aesthetic values, helping them to be imaginative, critical, creative, innovative and inventive. These also contribute to the development of self, community and nation, meeting the government's intention to provide a clearer educational career path to students. To achieve this, the government proposes to rebrand vocational secondary schools as vocational colleges. This transformation also involves changes in the technical and vocational curriculum, learning, certification, trainers, and infrastructure. The results of the study can also be used by educators to improve technical and vocational education in developing pupils' creativity through the Vocational and Technical Transformation (VOCTEC) of technical and vocational schools in Malaysia.

4. Scope and Limitations
In this study, a group of 40 students in a vocational secondary school in the state of Johor was randomly selected. The study was conducted over 16 sessions of teaching and learning activities across 3 weeks, allowing 8 measurements, with 2 sessions per measurement, to complete the entire lesson plan.

5. Instruments

The researchers used two instruments in this study: questionnaires and rubric assessments. The questionnaires were used to identify students' achievement and information related to the art of digital photography and the effects of PROSFDak, and to establish the reliability of the items. The instruments were administered to 40 form four technical students. The second instrument is a rubric assessment form used for the pre-activity and post-activity. The rubric for measuring students' creativity is based on basic photographic technique and the composition principles of digital photography. The rubric is used by the researchers as a process to determine, obtain, and provide useful information for making judgments about further action (Siti Hayati, 2011). Rubric assessment, used as a system or process, covers the activities of gathering information about the strategies and teaching and learning activities, allowing the researchers to analyze and decide accordingly to plan activities more effectively. To determine the content validity of the pre-activity and post-activity rubric, three experts in the field of art and photography education validated the instrument that was adapted and built. The teaching and learning of this single treatment group was done through both indoor and outdoor classes using the PROSFDak project module, which was modified from the Curriculum Plan: Picturing Peace: Creative Digital Photography Project by ArtsBridge (2005).
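The rubric itself is not reproduced in the paper, so the snippet below is only a minimal sketch of how such a scoring form might be tallied. The four criterion names and the 1-5 scale are assumptions for illustration, drawn loosely from the two areas the rubric is said to cover (basic photographic technique and composition principles); a student's creativity score is the total across criteria, and the pre/post gain is the difference of totals:

```python
# Hypothetical rubric criteria (assumed, not taken from the paper's actual form).
CRITERIA = [
    "focus_and_exposure",      # basic technique (assumed criterion)
    "rule_of_thirds",          # composition (assumed criterion)
    "framing_and_viewpoint",   # composition (assumed criterion)
    "use_of_light",            # basic technique (assumed criterion)
]

def rubric_total(scores: dict) -> int:
    """Sum one student's per-criterion scores (assumed 1-5 each)."""
    for name in CRITERIA:
        if not 1 <= scores[name] <= 5:
            raise ValueError(f"score for {name} out of range")
    return sum(scores[name] for name in CRITERIA)

def gain(pre: dict, post: dict) -> int:
    """Creativity gain = post-activity total minus pre-activity total."""
    return rubric_total(post) - rubric_total(pre)

pre  = {"focus_and_exposure": 2, "rule_of_thirds": 1,
        "framing_and_viewpoint": 2, "use_of_light": 2}
post = {"focus_and_exposure": 4, "rule_of_thirds": 3,
        "framing_and_viewpoint": 4, "use_of_light": 4}
print(gain(pre, post))  # prints 8; a positive gain indicates improvement
```

Totals of this kind, collected for all 40 students before and after the treatment, are what the pre/post comparison in the findings section operates on.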
The researchers used methods of discussion, practical and technical skills, and problem solving, as well as appropriate instructional media. The researchers made full use of media and technology equipment such as digital cameras, flash lights, computers, Photoshop editing software, and lighting and studio equipment.

6. Theoretical Framework: Isman Model

This study employs the Isman Model to formulate the PROSFDak module in teaching and learning activities to enhance creativity among low achieving students. The major goal of the Isman model is to plan, develop, implement, evaluate, and fully organize learning activities to ensure the effectiveness of the PROSFDak module in improving students' performance (Isman, 2011). The model is designed to store information in long-term memory and respond to environmental conditions, motivating students through active experience and content. The Isman model was also implemented in a study by Norlidah, Saedah, Mohd Khairul Azman & Zaharah (2013) on the effectiveness of Facebook based learning in enhancing creativity among Islamic Studies students in the Malaysian secondary educational setting. The findings of that study suggest that there were significant differences between the treatment group and the control group, implying that Facebook based learning enhanced students' creativity level in writing, problem solving and producing a missionary motto. The Isman model was also used in a study by Norlidah & Saedah (2012) on designing and developing a Physics module based on learning style and appropriate technology in the secondary educational setting in Malaysia. The researchers conducted the module evaluation among 120 participants, involving 30 participants of each learning style (visual/verbal, active/reflective). The results suggested that the module is effective for visual, active and reflective learners but not for verbal learners.
The researchers also compared the module's effectiveness according to gender and found that the module is effective for female learners but not for male verbal learners. The findings indicate that the Isman model was implemented successfully in designing and developing a Physics module based on learning style and appropriate technology in the secondary educational setting in Malaysia. Hence, in the present study, the researchers employ the Isman model in the nature exquisiteness based digital photography arts project for creativity enhancement among low achievers students (PROSFDak) and test the effectiveness of the module. The Isman instructional design model is described as a five-step systematic planning process: input, process, output, feedback and learning, as shown in Figure 1. The researchers aim to test the effectiveness of the Isman model in PROSFDak as shown in Table 1.

Table 1. The use of the Isman model to design and develop the nature exquisiteness based digital photography arts project for creativity enhancement among low achievers students (PROSFDak).

Step 1, Input. Work log: identify needs; identify contents; identify goals-objectives; identify teaching method; identify evaluation materials; identify instructional media. Description: designing the module based on the nature exquisiteness based digital photography arts project for creativity enhancement among low achievers students (PROSFDak).

Step 2, Process. Work log: redesigning of rubric assessments. Description: using validity checks and an expert panel to redesign the PROSFDak module for creativity enhancement among low achievers students.
Step 3 (Output). Work log: rubric post-activity; analyze results. Description: measuring and analyzing findings of the PROSFDak module, which is based on the Picturing Peace model by ArtsBridge; lessons from the PROSFDak module for creativity enhancement among low-achieving students and their teachers.
Step 4 (Feedback). Work log: examine. Description: examine the rubric finding analysis: 1) pre-activities rubric; 2) implementation of the real R & D; 3) post-activities rubric; 4) measurement.
Step 5 (Learning). Work log: learning and teaching process; R & D; teaching the PROSFDak modules. Pre- and post-activities were conducted to test the effectiveness of the PROSFDak learning modules in enhancing creativity among low-achieving students.
7. Findings and Discussion The effectiveness of the nature-exquisiteness-based digital photography arts project for creativity enhancement among low-achieving students (PROSFDak) was analyzed based on early findings from the pre-activities and post-activities of the PROSFDak learning modules. The post-activities results show an average creativity-level score of about 40 (M = 39.1; see Table 2). After the treatment with the PROSFDak module lesson, data from the post-activities were analyzed by comparing mean achievement scores between the pre-activities and post-activities. An independent-samples t-test was performed to test whether there was any enhancement in creativity level after treatment. The results show a significant enhancement in creativity level after treatment. Next, the researchers compared creativity in artwork production by gender, as well as the effectiveness of the digital photography arts project on students' interest in art education. The results suggest that there was no difference between the genders in creativity enhancement in producing artworks. The effectiveness of PROSFDak module-based learning in enhancing creativity among low-achieving students was also analyzed across creativity levels in the post-activities.
A t-test was performed to determine whether there were significant differences in the ability to create images using creative imagination before and after learning with the PROSFDak modules. Findings from the experiment conducted among 40 participants suggest that PROSFDak module-based learning enhanced the creativity level of low-achieving students. Tables 2 to 5 show the results of the t-test comparisons of pre- and post-activities across creativity levels using the PROSFDak learning modules. Findings from the experiment conducted among the 40 participants in the single group support the effectiveness of the nature-exquisiteness-based digital photography arts project for creativity enhancement among low-achieving students (PROSFDak).
Table 2. t-test comparison of pre-activities and post-activities achievement in creativity using the PROSFDak module lesson
Group            n    Mean   SD    t       Sig.
Pre-activities   40   19.3   4.5   -14.97  .05
Post-activities  40   39.1   7.2
Table 2 shows that the pre-activities mean achievement score (n = 40) is 19.3 (SD = 4.5), while the post-activities mean (n = 40) is 39.1 (SD = 7.2). The difference in mean score between pre-activities and post-activities is 19.8. This indicates that the PROSFDak module lesson is able to increase creativity achievement among low-achieving students: the post-activities score is significantly higher, t(78) = -14.97, p < .05. The null hypothesis is therefore rejected: there is a significant difference in creativity achievement using the PROSFDak module lesson.
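The comparison reported in Table 2 can be checked from the summary statistics alone. The sketch below (plain Python, standard library only) computes a pooled-variance independent-samples t from n, mean, and SD; it reproduces df = 78 and a t of about -14.75, close to the reported -14.97, with the small gap presumably due to rounding in the paper.

```python
import math

def independent_t(n1, m1, sd1, n2, m2, sd2):
    """Pooled-variance independent-samples t-test from summary statistics."""
    df = n1 + n2 - 2
    pooled_var = ((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / df
    se = math.sqrt(pooled_var * (1 / n1 + 1 / n2))
    return (m1 - m2) / se, df

# Table 2: pre-activities vs. post-activities
t, df = independent_t(40, 19.3, 4.5, 40, 39.1, 7.2)
print(round(t, 2), df)  # about -14.75 with df = 78
```

Since the same 40 students took both activities, a paired t-test would arguably fit the design better; the formula above follows the independent-samples test that the paper reports.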
Table 3. t-test comparison of post-activities achievement in creativity of artwork production by gender
Group    n    Mean   SD    t      Sig.
Male     19   41.2   7.4   1.78   .05
Female   21   37.2   6.7
Table 3 shows that the post-activities mean score for creativity in artwork production is 41.2 (SD = 7.4) for males (n = 19) and 37.2 (SD = 6.7) for females (n = 21). The difference in mean score between males and females is 4.0. This difference is not statistically significant, t(38) = 1.78, p > .05, so the null hypothesis is retained: there is no significant difference between genders in creativity achievement in artwork production, and the PROSFDak module lesson increases artwork creativity among low-achieving students of both genders.
Table 4. t-test comparison of post-activities achievement with interest in art education
Group                      n    Mean   SD    t      Sig.
Post-activities            40   39.1   7.2   30.4   .05
Interest in art education  40   4.4    0.6
Table 4 shows that the post-activities mean achievement score (n = 40) is 39.1 (SD = 7.2), while the mean interest-in-art-education rating (n = 40) is 4.4 (SD = 0.6). The difference in mean score between the post-activities and interest in art education is 34.7. This indicates that the PROSFDak module lesson is able to increase interest in art education among low-achieving students: the score is significantly higher, t(78) = 30.4, p < .05. The null hypothesis is rejected: there is a significant difference between the post-activities and interest in art education.
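The gender comparison in Table 3 can be read the same way: recomputing t from the reported summary statistics and comparing it against the standard two-tailed critical value for df = 38 at alpha = .05 (about 2.024, a common table value) shows why the null hypothesis is retained. The code below is a sketch using only the figures reported in Table 3.

```python
import math

def independent_t(n1, m1, sd1, n2, m2, sd2):
    """Pooled-variance independent-samples t from summary statistics."""
    df = n1 + n2 - 2
    pooled_var = ((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / df
    se = math.sqrt(pooled_var * (1 / n1 + 1 / n2))
    return (m1 - m2) / se, df

# Table 3: male vs. female post-activity artwork scores
t, df = independent_t(19, 41.2, 7.4, 21, 37.2, 6.7)
T_CRIT = 2.024  # two-tailed critical t for df = 38 at alpha = .05 (table value)
print(round(t, 2), df, abs(t) < T_CRIT)  # t below the critical value: not significant
```

Because |t| stays below the critical value, the observed 4-point gender gap is consistent with chance at the .05 level, which matches the paper's conclusion that the null hypothesis is retained.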
Table 5. t-test comparison of pre-activities and post-activities achievement in the ability to create creatively using PROSFDak module learning
Group            n    Mean   SD    t       Sig.
Pre-activities   40   19.3   4.5   -14.97  .05
Post-activities  40   39.1   7.2
Table 5 shows that the pre-activities mean achievement score (n = 40) is 19.3 (SD = 4.5), while the post-activities mean (n = 40) is 39.1 (SD = 7.2). The difference in mean score between pre-activities and post-activities is 19.8. This indicates that the PROSFDak module lesson is able to increase students' ability to create with creative imagination: the post-activities score is significantly higher, t(78) = -14.97, p < .05. The null hypothesis is rejected: there is a significant difference in the ability to create with creative imagination using PROSFDak module learning.
8. Implications and Conclusions This paper has examined the nature-exquisiteness-based digital photography arts project for creativity enhancement among low-achieving students (PROSFDak) by employing the Isman model. The effectiveness of the modules was tested, and the module was found to be effective for low-achieving students. In addition, three null hypotheses were rejected because there were significant differences in achievement between the pre-activities and post-activities using PROSFDak module learning. The outcome of this study will hopefully enhance the process of teaching and learning in art education, particularly in vocational secondary schools, by promoting the use of photography lessons to enhance creativity among students and to open career opportunities in the arts field. Acknowledgements Funding of this research work was generously supported by grants from the University of Malaya, Malaysia. References Abdul Rahim Hamdan, Ahmad Johari Sihes, Jamaluddin Ramli and Musa Ismail. 2003.
Tahap Minat, Pengetahuan Dan Kemahiran, Latihan Guru Dan Beban Tugas Guru Program Pemulihan Khas Sekolah Kebangsaan Daerah Pontian, Johor. Fakulti Pendidikan, Universiti Teknologi Malaysia, Skudai, Johor Bahru. Adelia Carstens. (2011). Generic Versus Discipline-specific Writing Interventions: Report on A Quasi-Experiment. Southern African Linguistics and Applied Language Studies. http://dx.doi.org/10.2989/16073614.2011.633363. Ainon Mohd. & Abdullah Hassan. (1999). Menyelesaikan Masalah Secara Kreatif. Kuala Lumpur: Utusan Publications & Distributors Sdn. Bhd. ISBN 967-61-0961-4. Albertson C. and Davidson M. (2007, August 29). Drawing with Light and Clay: Teaching and Learning in the Art Studios as Pathways to Engagement. International Journal of Education & the Arts. Vol. 8, Num. 9. Aminudin Bin Abdul Rahman. (2004). Penglibatan Pelajar Secara Aktif Dalam Aktiviti Kokurikulum Dan Kesannya Keatas Pencapaian Akademik: Satu Tinjauan Di Sekolah Menengah Kebangsaan Pekan Nanas, Pontian. Fakulti Pendidikan, Universiti Teknologi Malaysia, Skudai, Johor Bahru. Anil Kumar De and Arnab Kumar De. (2004). Environmental Education. New Delhi: New Age International (P) Ltd., Publishers. Ariella Azoulay. (2010). What is a Photograph? What is a Photography? Philosophy of Photography. Vol. 1, num. 1, 9-13. Barry Oreck. (2006). Artistic Choices: A Study of Teachers Who Use the Arts in the Classroom. International Journal of Education and The Arts, vol. 7, num. 8. http://ijea.asu.edu. Ben Owen. (2006). Digital Photography Step by Step. London: Arcturus Publishing Limited. Bianca Power and Christopher Klopper. (2011). The Classroom Practice of Creative Arts Education in NSW Primary Schools: A Descriptive Account. International Journal of Education and The Arts, vol. 12, num. 11. http://www.ijea.org/. Cynthia Way. (2006). Focus on Photography: A Curriculum Guide.
New York: International Center of Photography. David Moursund (2003). Project-Based Learning Using Information Technology. Second Edition. Downloaded from http://www.questia.com/read/119764999/project-based-learning-using-information-technology. Eleni Kalligeros. (2009). Project-Based Learning And Technology Integration In a Constructivist Classroom: A Handbook for Fifth-Grade Teachers. Masters of Arts Degree in Digital Media and Learning. University of San Francisco. Eldon D. Enger and Bradley F. Smith. (2000). Environmental Science: A Study of Interrelationships. Seventh Edition. United States of America: McGraw-Hill Companies, Inc. Elliot W. Eisner. (2002). The Educational Imagination: On the Design and Evaluation of School Programs. Third Edition. New Jersey, Merrill Prentice Hall. Enid Zimmerman. (2010). Creativity and Art Education: A Personal Journey in Four Acts. Lowenfeld Lecture. Fethe, C. B. (2011). Hand And Eye: The Role Of Craft In R. G. Collingwood's Aesthetic Theory. Downloaded from bjaesthetics.oxfordjournals.org. Fethe, C. B. (2011). Craft And Art: A Phenomenological Distinction. Downloaded from bjaesthetics.oxfordjournals.org. George W. Hartmann. (1935). Gestalt Psychology; A Survey of Facts and Principles. New York: Greenwood Press, Publishers. Gordon F. Vars. (2002). Educational Connoisseurship, Criticism, and the Assessment of Integrative Studies. Issues In Integrative Studies. No. 20, Pp. 65-76. Hashim Fauzy Yaacob and Ishak Mad Shah. (2009). Efikasi Diri Dan Hubungannya Dengan Pencapaian Akademik Pelajar Ipta. Jurnal Pembangunan Pelajar. Chapter 4, pp. 47-71. Ian Jeffrey. (1981). Photography: A Concise History. New York and Toronto: Oxford University Press. Ismail Yuksel. (2010). How to Conduct a Qualitative Program Evaluation in the Light of Eisner's Educational Connoisseurship and Criticism Model. Turkish Online Journal of Qualitative Inquiry, October 2010, 1(2). Isman, Aytekin. (2005).
The Implementation Results Of New Instructional Design Model: Isman Model. TOJET: The Turkish Online Journal of Educational Technology. October 2005. Vol. 4, Issue 4, Article 7. Isman, Aytekin, Abanmy, Fahad AbdulAziz, Hussein, Hisham Barakat and Al Saadany, Mohammed Abdurrahman. (2012). Effectiveness Of Instructional Design Model (Isman - 2011) In Developing The Planning Teaching Skills Of Teachers College Students' At King Saud University. TOJET: The Turkish Online Journal of Educational Technology – January 2012, volume 11 Issue 1. Jamalludin Harun & Zaidatun Tasir. (2003). Multimedia Dalam Pendidikan. Pahang: PTS Publications & Distributors Sdn. Bhd. Jeff Wignall. (2005). The Joy of Digital Photography. New York: Lark Books, Sterling Publishing Co., Inc. Jerry Uelsmann. (1973). The Criticism Of Photography As Art. University of Florida Press, Gainesville. John Ingledew. (2005). The Creative Photographer, A Complete Guide to Photography. New York: Harry N. Abrams, Incorporated. John D. Woods. (1988). Curriculum Evaluation Models: Practical Applications for Teachers. Australian Journal of Teacher Education. Volume 13, Issue 1. http://ro.ecu.edu.au/ajte/vol13/iss1/1 John Larmer and John R. Mergendoller. (2010). 8 Essentials for Project-Based Learning. Educational Leadership, Ascd. Vol. 68 No. 1. www.ascd.org. John W. Creswell. (2008). Educational Research: Planning, Conducting, and Evaluating Quantitative and Qualitative Research. Third Edition. United States of America: Pearson Prentice Hall. Jonathan H Robbins. (2006). Connoisseurship, Assessments Of Performance And Questions Of Reliability. Paper presented at the 32nd Annual IAEA Conference, Singapore. Karen Heid. (2008). Creativity and Imagination: Tools for Teaching Artistic Inquiry. Art Education. International Journal of Education and the Art. Kassim Bin Abbas. (2009). Media Dalam Pendidikan: Merancang dan Menggunakan Media Dalam Pengajaran dan Pembelajaran. 
Malaysia: Universiti Pendidikan Sultan Idris, Perak. Kindler, A.M. (2008). Art, Creativity, Art Education and Civil Society. International Journal of Education and The Arts, vol. 9 (Interlude 2). http://www.ijea.org/v9i2/. Khairudin Mohd Thani @ Jidin. (2012). Minda: Teori Gestalt dalam Fotografi: Continuation/kesinambungan. pp. 48-51. FOTOGRAFIKA. Issue 05. Kuala Lumpur: PCP Publications. Larry B. Christensen. (2001). Experimental Methodology. Eighth Edition. Boston: Allyn and Bacon. A Pearson Education Company. Lee, Kyung Hwa. (2005). The Relationship Between Creative Thinking Ability and Creative Personality of Preschoolers. International Education Journal, 6(2), 194-199. http://iej.cjb.net. Leslie Stroebel and Richard Zakia. (1993). The Focal Encyclopedia of Photography. 3rd Edition. Boston, London: Focal Press. Lucy Marsh. Creative Photography Techniques. Retrieved from http://www.ehow.com/facts_7355658_creative-photography-techniques.html#ixzz1jpEOHuys. Linda, A. Jackson, Edward, A. Witt, Alexander Ivan Games, Hiram, E. Fitzgerald, Alexander Von Eye, and Yong Zhao. (2011). Information Technology Use and Creativity: Finding from the Children and Technology Project. Computer in Human Behavior. Elsevier Ltd. www.elsevier.com/locate/comphumbeh. Mary Warner Marien. (2010). Photography: A Cultural History. 3rd Edition. London: Laurence King Published Ltd. Meehan, Joseph R. (2008). Kodak, The Art of Digital Photography: Mood, Ambience & Dramatic Effects. New York: Lark Books, Sterling Publishing Co., Inc. Melody Milbrand and Lanny Milbrandt. (2011). Creativity: What Are We Talking About? Art Education. 64, 1; pg. 8. ProQuest Education Journals. Michelle Tillander. (2011). Creativity, Technology, Art, and Pedagogical Practices. Art Education; Jan 2011; 64, 1; ProQuest Education Journals, pg. 40. Misri Hj Bohari. (2009). Pembelajaran Berasaskan Projek.
Downloaded from http://misrihjbohari.blogspot.com/2009/07/pengenalan-pembelajaran-berasaskan.html. Mohammad Ali. (2010). Metodologi Dan Aplikasi Riset Pendidikan. Indonesia: Pustaka Cendekia Utama. Mohd Aris Bin Othman. (2007). Keberkesanan Kaedah Pengajaran Berbantukan Komputer Di Kalangan Pelajar Pencapaian Akademik Rendah Bagi Mata Pelajaran Geografi Tingkatan 4 Di Negeri Sembilan. Universiti Sains Malaysia, Pulau Pinang. Mohd. Majid Konting. (2005). Kaedah Penyelidikan Pendidikan. Kuala Lumpur: Dewan Bahasa dan Pustaka. Mohd. Najib Abdul Ghafar. (1999). Penyelidikan Pendidikan. Johor Bahru: Universiti Teknologi Malaysia, Skudai. Moursund, D.G. (1999). Project-based learning using information technology. http://darkwing.uoregon.edu/~moursund/Books/PBL1999/ Muhamad Syawalwahed Bin Mohamad Kamal & Nik Mohd Asri Bin Zahari. (2011). Pembelajaran Berasaskan Projek. Downloaded from http://www.scribd.com/doc/47534917/Pembelajaran-Berasaskan-Projek-Present. Munira Shuib and Azman Azwan Azmawati. (2001). Pemikiran Kreatif. Malaysia: Prentice Hall, Pearson Malaysia Sdn. Norlidah Alias, Saedah Siraj, Mohd Khairul Azman Md Daud and Zaharah Hussin. (2013). Effectiveness of Facebook Based Learning to Enhance Creativity Among Islamic Studies Students By Employing Isman Instructional Design Model. TOJET: The Turkish Online Journal of Educational Technology. January 2013. Vol. 12, Issue 1. Paul Comon. (2007). Kodak, The Art of Digital Photography. Digital Photo Design: how to compose winning pictures. New York: Lark Books, Sterling Publishing Co., Inc. Paul Duncum. (2001). Theoretical Foundations for an Art Education of Global Culture and Principles for Classroom Practice. International Journal of Education and The Arts, vol. 2, num. 3. http://www.ijea.org/v2n3/index.html. Peter Cope. (2008). The Digital Photographer's Guide to Exposure. United Kingdom: A David & Charles Book. Photography Tips. (2011). Creative Photography I — 12 Tips To Get Creative For Creative Photography.
http://www.advancedphotography.net/creative-photography-12-tips-creative-creative-photography/ Richard Gardner, Kelly Powell and Tracey Widmann. (2009). Rubrics. Appalachian State University. Robert J. Beck. (2009). The Cultivation of Students' Metaphoric Imagination of Peace in a Creative Photography Program. International Journal of Education and The Arts, vol. 10, num. 18. http://www.ijea.org/ Robert J. Beck. (2005). Curriculum Plan, Picturing Peace: A Creative Digital Photography Project for Elementary and Middle School Students. Developed by ArtsBridge America and Lawrence University. www.artsbridgeamerica.org. Robiah Sidin, Juriah Long, Khalid Abdullah, and Puteh Mohamed. (2001). Pembudayaan Sains dan Teknologi: Kesan Pendidikan dan Latihan di Kalangan Belia di Malaysia. Jurnal Pendidikan 27, 35-45. Sawyer, R.K. (2006). Explaining Creativity, The Science of Human Innovation. United States of America: Oxford University Press, Inc. Siti Fatimah Mohd Yassin, Baharuddin Aris and Abdul Hafidz Omar. (2006). Strategi Pembelajaran Projek Pembangunan Produk Multimedia Kreatif Secara Kolaboratif. Jurnal Pendidikan Universiti Teknologi Malaysia. October 2006, pp. 24-35. Sharkawi Che Din. (2012). Kembara: Elemen Warna dalam Seni Fotografi. pp. 52-55. FOTOGRAFIKA. Issue 05. Kuala Lumpur: PCP Publications. Sow Lee Sun. (2007). Penghasilan Modul Pembelajaran Berasaskan Teori Beban Kognitif Untuk Subjek Teknologi Maklumat Dan Komunikasi. Fakulti Pendidikan, Universiti Teknologi Malaysia. Stephanie Bell. (2010). Project-Based Learning for the 21st Century: Skills for the Future. Routledge, Taylor & Francis Group. The Clearing House, 83: 39–43. Surat Pekeliling Ikhtisas. Penggalakan Aktiviti Fotografi Di Kalangan Murid Sekolah. (2000). Bil. 18/2000. KP (BS) 8591/Jld.XVI (18). http://www.moe.gov.my/?id=162&act=pekeliling&cat=1&info=2000&pid=-1&ikhtisas=Kokurikulum&page=1 Terry Barrett. (2005). Criticizing Photographs: An Introduction to Understanding Images. Fourth Edition.
New York: McGraw-Hill. Tracie Costantino. (2011). Researching Creative Learning: A Review Essay. International Journal of Education & the Arts. Vol. 12, Review 7. http://www.ijea.org/. Unit Perancang Ekonomi. (2010). Rancangan Malaysia Kesepuluh 2011-2015. Jabatan Perdana Menteri Putrajaya. Downloaded from W. Dwaine Greer. (1997). Art as a Basic, The Reformation in Art Education. United States of America: Phi Delta Kappa Educational Foundation. Ward Mitchell Cates. (1990). Panduan Amali Untuk Penyelidikan Pendidikan. Translator: Syaharom Abdullah. Kuala Lumpur: Dewan Bahasa dan Pustaka.
Beyond Tradition and Modernity: Digital Shadow Theater. Ugur Güdükbay, Fatih Erol, Nezih Erdogan. Leonardo, Volume 33, Number 4, August 2000, pp. 264-265 (Article). Published by The MIT Press. https://muse.jhu.edu/article/618057/summary
Artists' Statements
messages (on an average of 2–3 a month) than I do public comments. Thank you for sharing. As a "melting pot caucasian American" I envy your sense of heritage and desire to share it with your children, I wish I had such a treasure to share with mine. I don't know what is right or wrong, but sometimes, I think as "Americans," the end to discrimination will only occur when we are all mixed into beautiful shades of tan. —Glass Houses [6] New houseguests are always welcome, as are return visitors. Glass Houses can be accessed via the California Museum of Photography at and at the Long Beach Museum of the Arts. References and Notes 1. Glass Houses (1997), web site created at the University of California, Riverside. Here I quote an excerpt that appears on the screen, which can be accessed from the family room. 2. I use the term "modern" before "Chicana" to focus on the ideologies of the changing Chicana feminism of the 1990s. 3.
This quote is from an excerpt that appears on the screen, which can be accessed from the upstairs bedroom. 4. This quote is from an excerpt that appears on the screen, which can be accessed from the dressing room. 5. This quote is from an excerpt that appears on the screen, which can be accessed from the front entrance. 6. This quote is taken from a personal E-mail message I received from a houseguest on 12 March 2000. Bibliography Anzaldua, Gloria. Borderlands (San Francisco, CA: Aunt Lute Books, 1987). Anzaldua, Gloria. Making Face, Making Soul (San Francisco, CA: Aunt Lute Books, 1990). Castillo-Speed, Lillian. Latina: Women Voices from the Borderlands (New York: Simon & Schuster, 1995). Cisneros, Sandra. House on Mango Street (Houston, TX: Arte Publico Press, 1994). Lippard, Lucy R. Mixed Blessings: New Art in a Multicultural America (New York: Pantheon Books, 1990). Lippard, Lucy R. Get the Message? A Decade of Art for Social Change (New York: E.P. Dutton, 1984). Lovejoy, Margot. Postmodern Currents: Art and Artists in the Age of Electronic Media (Ann Arbor, MI: UMI Research Press, 1989). Meyer, Pedro. Truths & Fictions: A Journey from Documentary to Digital Photography (New York: Aperture, 1995). Spender, Dale. Nattering on the Net: Women, Power and Cyberspace (North Melbourne, Australia: Spinifex Press, 1995). BEYOND TRADITION AND MODERNITY: DIGITAL SHADOW THEATER Ugur Güdükbay, Department of Computer Engineering, Bilkent University, 06533 Bilkent Ankara, Turkey. E-mail: . Fatih Erol, Department of Computer Engineering, Bilkent University, 06533 Bilkent Ankara, Turkey. E-mail: . Nezih Erdogan, Department of Graphics Design, Bilkent University, 06533 Bilkent Ankara, Turkey. E-mail: . Received 13 December 1999. Accepted for publication by Roger F. Malina. The first performances of Karagöz (Karagheus), the traditional Turkish Shadow Theater, date back to the 16th century [1,2].
It was one of the most popular forms of entertainment until the late 1950s. Legend has it that Karagöz and Hacivat were two masons whose unending conversations were so entertaining that they slowed down the construction of a mosque, to such an extent that the Sultan decreed their execution. It was a Sufi leader who invented the shadowplay, Karagöz, to console the Sultan, who deeply regretted what he had done. Thus, the story also shows an example of how art functions as a consolation for loss. The mode of representation in Karagöz is in contrast with traditional narrative forms of the West. The western narrative presents itself as real and hence is illusory. Karagöz, however, is non-illusory and self-reflexive in the sense that it quite often makes references to its fictitious nature, stressing the fact that what the spectators are viewing is not real but imaginary. We designed a software program that would digitally animate Karagöz characters. One of our aims was to show how traditional forms can be adapted to contemporary media; also we wanted to demonstrate how Karagöz can perhaps force the new media to develop new capabilities of artistic expression. The software, Karagöz, uses hierarchical modeling [3] to animate two-dimensional characters containing body parts and joints between these parts. Once the parts are defined, they are aggregated into more complex objects. The different characters of Karagöz have different body parts and joints, and therefore have different hierarchical structures. While drawing the characters during animation, the system applies the required transformations using the model parameters. For example, when a transformation is applied to the hip, the two legs connected to it are also affected; these may have other transformations applied to them as well.
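The parent-child propagation of transformations described above can be sketched as a minimal 2D scene graph: each part rotates about its own joint, and a part's world transform is its parent's transform composed with its own. The part names, pivots, and angles below are illustrative only, not taken from the actual Karagöz software; the last lines also sketch the kind of linear keyframe interpolation used for playback.

```python
import math

def apply(m, pt):
    """Apply a 2x3 affine matrix to a 2D point."""
    x, y = pt
    return (m[0][0]*x + m[0][1]*y + m[0][2],
            m[1][0]*x + m[1][1]*y + m[1][2])

def compose(a, b):
    """Affine composition a @ b (both 2x3 row-major matrices)."""
    return [[a[0][0]*b[0][0] + a[0][1]*b[1][0],
             a[0][0]*b[0][1] + a[0][1]*b[1][1],
             a[0][0]*b[0][2] + a[0][1]*b[1][2] + a[0][2]],
            [a[1][0]*b[0][0] + a[1][1]*b[1][0],
             a[1][0]*b[0][1] + a[1][1]*b[1][1],
             a[1][0]*b[0][2] + a[1][1]*b[1][2] + a[1][2]]]

class Part:
    """A body part that rotates about its joint; children inherit the motion."""
    def __init__(self, name, joint, parent=None):
        self.name, self.joint, self.parent = name, joint, parent
        self.angle = 0.0  # local rotation in radians

    def local_transform(self):
        # rotation about the joint point p: x' = R(x - p) + p
        c, s = math.cos(self.angle), math.sin(self.angle)
        px, py = self.joint
        return [[c, -s, px - c*px + s*py],
                [s,  c, py - s*px - c*py]]

    def world_transform(self):
        if self.parent is None:
            return self.local_transform()
        return compose(self.parent.world_transform(), self.local_transform())

# A two-level hierarchy: rotating the hip carries the leg along with it.
hip = Part("hip", joint=(0.0, 0.0))
leg = Part("leg", joint=(1.0, 0.0), parent=hip)

hip.angle = math.pi / 2  # rotate the hip; the leg's joint moves with it
foot = apply(leg.world_transform(), (1.5, 0.0))
print(foot)  # approximately (0.0, 1.5): the leg followed the hip

# Playback: linearly interpolate joint angles between two keyframes.
def lerp_frames(k0, k1, u):
    return {name: (1 - u) * k0[name] + u * k1[name] for name in k0}

key0 = {"hip": 0.0, "leg": 0.0}
key1 = {"hip": math.pi / 2, "leg": math.pi / 4}
mid = lerp_frames(key0, key1, 0.5)  # halfway between the keyframes
```

Deeper chains (torso, arm, hand, and so on) work the same way: each part only stores its local rotation, and the recursion in `world_transform` accumulates the ancestors' motion, which is exactly the behavior the authors describe for the hip and legs.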
Fig. 1. Jacalyn Lopez Garcia, Glass Houses screen image, , 1997. Using Adobe Photoshop I created this image as a means of examining the relationship between my mother's experience as a Mexican immigrant and the role this played in shaping my identity.
Texture mapping [4] is the technique used for rendering the characters since different body parts are modeled as simple two-dimensional polygon meshes and have a predefined texture
Heckbert, “Survey of Texture Mapping,” IEEE Computer Graphics and Applications 6, No. 11, 56–67 (1986). TORN TOUCH: INTERACTIVE INSTALLATION Joan Truckenbrod, Dept. Art and Tech- nology, The School of the Art Institute of Chicago, 112 South Michigan Ave., Chicago IL 60603, U.S.A. E-mail: . Received 20 January 2000. Accepted for publication by Roger F. Malina. As the technology of cyberspace races towards the future, humanity is begin- ning to raise a cry for the “hand” in this virtual ecology. When we link to cyber- space, we wish for a depth and physical- ity of experience that cyberspace is not able to offer. The “reach out and touch” of tele- phone mythology has become the ban- ner of the World Wide Web. E-mail and the Internet provide a long distance touch with an immediacy, simultaneity and multiplicity of connection. But the behavior and feel of this linking is me- diated through a flat screen. The sur- face of interaction is a projected world. In this monodimensionality the visual dominates over the other perceptual senses. Other sensory experiences like the electricity of touch, the memories embedded in smell and physical sensa- tions of tension are banished. McLuhan viewed the printing press as an invention that segmented sensory experiences, preventing synesthetic feeling in which there is a synthesis of hearing, seeing, tasting and touching. The Internet is an extension of the printing press, with the exception that the Internet is rhizomatic instead of lin- ear. When an individual perceptual sense becomes embedded or internal- ized in a technology, it becomes sepa- rated from the other senses. This por- tion of one’s self closes, as if it were locked in steel. Prior to such separa- tion, there is complete interplay among the senses. Virtual experience “over- throws the sensorial and organic archi- tecture of the human body by disem- bodying and reformatting its sensorium in powerful, computer generated, digi- tized spaces” [1]. 
Cyberspace disen- gages from the physical, causing sen- sory experience to be reduced to a monomedium of digital coding. In the interactive installation Torn Touch, exhibited in the Illinois Art Gal- lery in Chicago during ISEA 1997, the sense of touch connects the physical and the virtual realms of experience. The viewer is engaged with a sense of entanglement by the visceral character of cloth caught on a rusty barbed wire fence. From ancient times, the weaving of cloth has had important social and economic dimensions. I feel that the craft of spinning fiber and weaving cloth is a metaphor for the construc- tion of social, political, and commercial Internet weavings; it continues to com- municate social standing and political power. An integral component in ritu- als, cloth is embedded with spirituality. Fig. 2. Ugur Güdükbay et al., the animation system user interface. The parameters are ad- justed by moving the sliders in the animation editor. The effect of modifying the param- eters of a character is displayed. work_udy7wpdhezgpji7opln3mu5i4e ---- wp-p1m-38.ebi.ac.uk Params is empty 404 sys_1000 exception wp-p1m-38.ebi.ac.uk no 218552033 Params is empty 218552033 exception Params is empty 2021/04/06-02:16:31 if (typeof jQuery === "undefined") document.write('[script type="text/javascript" src="/corehtml/pmc/jig/1.14.8/js/jig.min.js"][/script]'.replace(/\[/g,String.fromCharCode(60)).replace(/\]/g,String.fromCharCode(62))); // // // window.name="mainwindow"; .pmc-wm {background:transparent repeat-y top left;background-image:url(/corehtml/pmc/pmcgifs/wm-nobrand.png);background-size: auto, contain} .print-view{display:block} Page not available Reason: The web page address (URL) that you used may be incorrect. Message ID: 218552033 (wp-p1m-38.ebi.ac.uk) Time: 2021/04/06 02:16:31 If you need further help, please send an email to PMC. Include the information from the box above in your message. 
Otherwise, click on one of the following links to continue using PMC: Search the complete PMC archive. Browse the contents of a specific journal in PMC. Find a specific article by its citation (journal, date, volume, first page, author or article title). http://europepmc.org/abstract/MED/ work_uerd6gfrivgljnws66mb6ub2mq ---- Concepts and contexts in engineering and technology education: an international and interdisciplinary Delphi study Ammeret Rossouw • Michael Hacker • Marc J. de Vries � The Author(s) 2010. This article is published with open access at Springerlink.com Abstract Inspired by a similar study by Osborne et al. we have conducted a Delphi study among experts to identify key concepts to be taught in engineering and technology edu- cation and relevant and meaningful contexts through which these concepts can be taught and learnt. By submitting the outcomes of the Delphi study to a panel of experts in a two- day meeting we were able to add structure to the Delphi results. Thus we reached a concise list of concepts and contexts that can be used to develop curricula for education about engineering and technology as a contribution to technological literacy goals in education. Keywords Delphi study � Engineering and technology education � Concept-context � Expert panel � Engineering and technology education curriculum development � Technological literacy � ETE � Teaching concepts in contexts Introduction One of the main issues in the development of engineering and technology education is the search for a sound conceptual basis for the curriculum. This search has become relevant as the nature of technology education has changed: it has gradually evolved from focusing on skills to focusing on technological literacy. This literacy implies that pupils and students have developed a realistic image of engineering and technology. What is a realistic image of engineering and technology? 
The answer is derived from several sources; among them are the academic disciplines that study the nature of engineering and technology, such as the philosophy of technology, the history and sociology of technology, and design methodology (see De Vries 2005 for an extensive description of the insights that these disciplines have brought forward for technology education). A different approach is to ask experts for their opinions on this matter, and that is the route we have taken to find broad concepts that offer a basis for developing engineering and technology education. We need to be explicit about what we mean by engineering and technology. Technology, the broader of the two disciplines, encompasses the way humans develop, realize, and use (and evaluate) all sorts of artifacts, systems, and processes to improve the quality of life. Technological literacy is what people need to live in, and control, the technological environment that surrounds us. This literacy comprises practical knowledge, reasoning skills, and attitudes. Engineering is more limited. It encompasses the professions that are concerned with the development and realization of such artifacts, systems, and processes. Engineering and technology education has long been delivered in two ways: through general education and through vocational education. In general education, the focus historically has been on practical (craft) skills. However, this emphasis has changed in most countries: traditional school subjects have been replaced by what is generally called "technology education." The main purpose of technology education is developing technological literacy, but in some cases a vocational element remains.

A. Rossouw (corresponding author) • M. J. de Vries, Delft University of Technology, Delft, The Netherlands (e-mail: ammeret@rossouw.eu); M. Hacker, Hofstra University, Hempstead, USA. Int J Technol Des Educ, DOI 10.1007/s10798-010-9129-1
In vocational education the focus has been on preparing for a career in commerce or in technical areas. This kind of teaching has focused on specific knowledge and skills. The latest development is that engineering has been accorded a more substantial place in general (technology) education. This shift is combined with the integration of science and math and leads to what is known as science, technology, engineering, and mathematics (STEM) education. Our use of the term engineering and technology education (ETE) relates to these contemporary developments and characterizes ETE as important and valuable for all students. Traditionally, curricula for engineering and technology education are structured according to either engineering disciplines (e.g., mechanical engineering, electrical engineering, construction engineering) or application fields (e.g., transportation, communication). These structures do not offer much insight into the nature of engineering and technology. We believe a better approach for developing insights is to search for basic concepts that are broadly applicable in engineering and technology and cut through different engineering domains and application fields. An example of such a concept is the systems concept. In the 1970s, the Man-Made World project (David et al. 1971) focused on developing a curriculum based on such concepts. Since then, little work has been done in this area, although useful work has been done on identifying usable concepts. The various efforts to develop a sound conceptual basis for teaching engineering and technology have led to the development of important insights and ideas. A major accomplishment was the development of the Standards for Technological Literacy in the USA (International Technology Education Association 2000). In these standards there are many concepts related to engineering and technology.
Although eminently useful as focal points for learning, standards typically define what students should know and be able to do in specific content or programmatic areas. In some cases, competencies defined by standards are quite broad; in other cases, the competencies are atomistic. To enhance standards-driven curricula by helping learners understand relationships among technological domains, this study has identified a set of overarching, unifying concepts that cut across domains and thus give insight into the holistic nature of engineering and technology. These broad, unifying concepts can be used to develop curriculum and learning experiences in engineering and technology education. Some opportunities exist to make this study different from previous ones. We will mention three of the study's components in particular: (1) We have consulted experts from a variety of disciplines concerned with basic concepts related to engineering and technology. The disciplines are

• technology education (as a component of general education at the secondary level, as well as technology teacher education and educational research);
• engineering education (at the tertiary level) and engineering organizations;
• philosophy and history of technology;
• design methodology;
• science and technology communication.

This last discipline is concerned with communicating about science, engineering, and technology, and it too is faced with the need to work with clear and broadly applicable concepts related to engineering and technology. We realized that the purpose of this study was to give directions for secondary school education, and for that reason we selected a majority of technology educators, as they are in the best position to judge what fits with the nature of this level of education.
At the same time we strived for a good number of other experts so that their opinion would have sufficient weight in the statistics to disturb a consensus that would otherwise only be based on the technology educators' preferences. (2) We have consulted experts from a variety of countries. The Standards for Technological Literacy project was primarily an effort involving experts in the US. (3) We have asked not only for concepts but also for contexts in which the concepts can be taught. This should be seen against the background of recent developments in educational research. Such research has led to the insight that concepts are not learned easily in a top-down approach (i.e., learning the concepts at a general, abstract level first and then applying them to different contexts). Even an approach in which concepts are first learned in a specific context and then transferred to a different context has proved unfruitful (Pilot and Bulte 2006). The most recent insights reveal that concepts should be learned in a variety of contexts so that generic insights can grow gradually (Westra et al. 2007). This growth leads to the ability to apply the concepts in new contexts. In this approach, it is important to identify the concepts that should be learned as well as the contexts that are suitable for learning those concepts. In summary, this article describes a study that has identified basic and broad themes in engineering and technology, as well as the contexts that are suitable for learning about those themes. We have asked an international group of experts (albeit with a bias towards the USA) in a variety of disciplines for their input. What we have looked for are overarching concepts and themes that are both basic and broad: they must be transferable over a wide range of engineering and technological fields of study, and subsume and synthesize a body of related sub-concepts.
The contexts should be broad enough to provide an understanding of the impact of engineering and technology on society, culture, and the economy, but narrow enough to relate to pupils' and students' own experiences.

Research methodology

One way to ascertain the opinion of a group of experts is to conduct a Delphi study (Brown 1968; Reeves and Jauch 1978). The reputation of Delphi studies has changed over time. There was a time when Delphi studies were used frequently. However, a growing awareness of the limitations of the Delphi method led to a decline in the method's popularity, evidenced by the fact that fewer Delphi studies were accepted for publication in scholarly journals. Although the number of Delphi studies is still not high, the method has once again been accepted as a serious research design. A Delphi study was conducted by Jonathan Osborne, Sue Collins, Mary Ratcliffe, Robin Millar, and Rick Duschl, a group of well-respected science education researchers, and published in a high-quality academic journal, the Journal of Research in Science Teaching, in 2003 (Osborne et al. 2003). This study was relevant not only because it justified our choice of the Delphi method, but also because it had a goal that was very similar to our own: to establish a list of basic and broad concepts related to science for use in the development of science education curricula. Our research design, similar to the one Osborne et al. used, is typical for Delphi studies. A group of experts were invited by e-mail to participate in the study. In a first round, the 32 experts who agreed to participate were asked to generate concepts (in Osborne et al.'s case for science and in our case for engineering and technology) and rate each one for importance. The number of experts involved is well over the 20–25 usually involved in a Delphi study (Osborne et al. had 23).
In our research we have adapted this first round: in addition to asking the experts to generate their own concepts, we provided them with a draft list of concepts to rate on a 1–5 Likert scale. We did this because we wanted to clarify the level of generality we were looking for. In other words, by suggesting such concepts as "systems" and "optimization," we wanted to prevent experts from suggesting concepts that were substantially less transferable. As a source of inspiration for such concepts we used, among others, David et al. (1971), Childress and Rhodes (2008), and Dearing and Daugherty (2004). Another adaptation is that we added draft definitions to the concepts in the draft list. We asked the experts to comment on these and to indicate whether or not they found the defined concepts suitable. The idea here was not to get exact definitions for the concepts, but to make sure everyone was thinking of the same concept when considering its suitability. The following rounds were more standard. In the second and third rounds the experts were presented with both the new concept entries and our draft list of broad concepts with their amended definitions. We decided not to give back the average scores in the second round, but only in the third round, because we wanted participants to score the full lists before the scores were made public. The experts were asked to give scores of importance again, based on their own opinion, and in round three also based on the total average score of the whole group. No more concepts could be added. We emphasized that our aim was not to reach exact definitions of the concepts. Instead, we hoped to convey the essence of each concept, so that the experts would not need to respond again to the definitions but only rate the concepts. Also, we asked the experts to be sparing with high scores so that only the most important concepts would stand out.
We pointed out that aiming for a short list was also the reason why we did not include every concept that the experts had suggested in the first round. We did something similar for the contexts part, but allowed for more variety in the levels of generality here. In the second and third rounds we therefore included suggestions for contexts at different levels of generality, thereby leaving it to the experts to indicate whether they favored high-level generality contexts or lower-level contexts. In the second round we also mentioned more criteria for ranking the contexts. Usually this second round does not lead to sufficient consensus, so a third round is needed. The third round is also needed to check for stability in the answers, for both concepts and contexts. To stimulate consensus in the third round, the experts are asked to account for deviating substantially from the average score. In case this still fails to result in consensus, one can search for subgroups in which consensus can be established (in our case this could, for instance, be the engineering education experts). To make this possible, we have asked the experts to provide some personal background data (age, gender, nationality, educational background, and professional area). The study was conducted during May and July of 2009. In order to stay within the available time, the experts were asked to return their responses in about a week. Several experts were on summer leave, so we have not been able to include all responses for every round. For each round, we waited until at least 30 of the 32 experts sent their responses. This research method, aimed at establishing a consensus of experts' opinions, has both strengths and weaknesses compared to a panel meeting.
The main strength is that one can use statistical means to establish whether or not a consensus exists, and this lends a certain objectivity to the study (even though the choice of the criteria and their critical values remains a matter of preference). The main weakness is that one depends totally on opinions rather than facts. This makes the quality of the study dependent on the choice of experts for the Delphi panel. An advantage of a Delphi study over a panel meeting is that no single expert can dominate the consensus. The disadvantage is that it is not possible to discuss the results of the rounds with the experts in a direct, interactive way. In our case we have combined the Delphi study and the panel meeting. Thus we hoped to combine the advantages of both.

First phase: the Delphi study

Data collection

In the first round, the 32 experts (see Table 1 for the division of experts over the disciplines of philosophy/history of technology, engineering education, and technology education) responded to a preliminary list of possible concepts and contexts by scoring those on a 1–5 scale and adding other concepts and contexts. Besides that, they made general comments regarding the set-up of the study and the questionnaire. We will discuss those remarks in the next section, together with the outcomes of the third round. In the second round, each concept or context that had been proposed by more than one expert was included in the new list of concepts or contexts. To avoid ending up with very long lists of concepts and contexts, we often combined concepts and contexts that were phrased differently but had the same meaning. In some cases we could subsume a suggested concept or context under an already listed concept or context when it was clear that it was of a lower level of generality than one that was already in the list.
By adding these sub-concepts and sub-contexts we could at the same time give some more substance to the concepts and contexts. A complete account of our decisions in this transition from the first to the second round would require a substantial amount of space, so we refer to the full text of the report that was written for the panel meeting (published on the Internet under the title "CCETE Project. Concepts and contexts in engineering and technology education"). We left it to the experts in the second round to agree or disagree with the decisions we made, using their scores. In the second round the experts could only score the concepts and contexts. This made the transition from the second to the third round easier, as we only needed to calculate the statistics for the scores and present those to the experts in the third round. In the third round, we again asked the experts to score, and we also asked them to give a reason for deviating more than 1 point from the average score in case they had a strong individual opinion on a specific concept or context. The aim of this was twofold: we wanted to stimulate experts to search for consensus where possible, and at the same time we wanted to allow for deviating points of view, but with motivation, so that we could have a discussion about how to appreciate the deviating opinion.

Table 1  Number of experts
  Philosophy/history and communication of technology    5
  Engineering educator                                   7
  Technology (teacher) educator                         20
  Total respondents                                     32

Outcomes after the third round

The results of the third round can be found in Tables 2 and 3.
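The round-three deviation rule (ask for a written justification when an expert's score departs from the group average by more than 1 point) lends itself to a simple check. The sketch below is illustrative only: the concept names, averages, and scores are hypothetical, not data from this study.

```python
def flag_deviations(expert_scores, group_means, threshold=1.0):
    """Return the items on which an expert's new score deviates from the
    group's average by more than `threshold` points -- the cases in which
    the Delphi protocol asks the expert to justify the deviation."""
    return {
        item: (score, group_means[item])
        for item, score in expert_scores.items()
        if abs(score - group_means[item]) > threshold
    }

# Hypothetical group averages and one expert's round-3 scores.
means = {"system": 4.7, "modularity": 2.9, "heuristics": 3.0}
scores = {"system": 5, "modularity": 5, "heuristics": 3}
print(flag_deviations(scores, means))  # only "modularity" needs a justification
```

Running the check over all experts yields exactly the list of score/average pairs for which motivations were collected before the panel discussion.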
Table 2  Statistics for the list of concepts

Rank  Potential unifying concept/theme  Mean  Mode  SD    % rating 4 or 5  Stability
 1    Design (as a verb)                4.83   5    0.38  100.0             1.2
 2    System                            4.67   5    0.48  100.0             6.1
 3    Modeling                          4.50   5    0.57   96.7             1.9
 4    Social interaction                4.26   4    0.64   90.0             0.5
 5    Optimization                      4.00   4    0.74   80.0             2.1
 6    Innovation                        3.85   4    0.68   70.0             1.5
 7    Specifications                    3.85   4    0.71   73.3             3.9
 8    Design (as a noun)                3.83   4    0.59   70.0             2.0
 9    Sustainability                    3.83   4    0.59   73.3             1.4
10    Trade-offs                        3.82   4    0.82   70.0             2.6
11    Energy                            3.79   4    0.62   76.7             2.2
12    Materials                         3.78   4    0.74   73.3             4.0
13    Resource                          3.72   4    0.67   73.3             0.6
14    Technology assessment             3.76   4    0.69   63.3             2.0
15    Invention                         3.70   4    0.73   63.3             4.6
16    Risk and failure                  3.64   4    0.85   60.0             1.4
17    Information                       3.59   4    0.74   60.0             0.4
18    Function                          3.54   4    0.75   56.7             1.6
19    Structure                         3.43   4    0.67   53.3             0.0
20    Product lifecycle                 3.55   3    0.75   50.0             5.6
21    Measuring                         3.32   3    0.77   36.7             2.5
22    Standards                         3.31   3    0.69   36.7             2.3
23    Application of science            3.28   3    0.86   40.0             1.8
24    Efficiency                        3.23   3    0.72   33.3             1.6
25    Heuristics                        3.04   3    0.76   30.0             4.2
26    Quality assurance                 2.97   3    0.61   16.7             3.8
27    Modularity                        2.87   3    0.90   13.3             3.4
28    Working principle                 2.82   3    0.89   16.7             4.5
29    Algorithms                        2.80   3    0.85   13.3             2.5
30    Complexity                        2.72   3    0.75   13.3             2.8
31    Intellectual property             2.66   3    0.67   10.0             1.6
32    Tolerance                         2.49   3    0.64    3.3            11.0
33    Practical reasoning               2.49   2    0.69    6.7            14.0
34    Technological trajectory          2.38   2    0.59    0.0             6.0

The literature does not provide unambiguous cut-off points for consensus (indicated by the SD) and stability (calculated as 100 times "new score minus old score" divided by "new score"). As a measure for consensus, for several authors a standard deviation below 1 already signifies consensus. Osborne et al. used the stricter criterion that 66% of respondents should rate the item 4 or above. Osborne et al. (2003) also used 33% as the maximum for stability (higher percentages mean: no stability).
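The cut-offs just described are straightforward to operationalize. The sketch below is illustrative: the ten ratings are hypothetical, not the study's raw data, but the formulas match those described above (stability as 100 × |new mean − old mean| / new mean; consensus as SD < 1 or at least 66% of experts rating 4 or 5; stability threshold 33%).

```python
from statistics import mean, stdev

def round_statistics(ratings, previous_mean):
    """Summarize one Delphi item for a round: mean, sample SD,
    percentage of experts rating it 4 or 5, and round-to-round
    stability computed as 100 * |new mean - old mean| / new mean."""
    m = mean(ratings)
    sd = stdev(ratings)
    pct_45 = 100 * sum(r >= 4 for r in ratings) / len(ratings)
    stability = 100 * abs(m - previous_mean) / m
    return {
        "mean": round(m, 2),
        "sd": round(sd, 2),
        "pct_4_or_5": round(pct_45, 1),
        "stability": round(stability, 1),
        # Lenient consensus criterion: SD below 1.
        "consensus_sd": sd < 1.0,
        # Stricter Osborne et al. criterion: >= 66% rating 4 or 5.
        "consensus_osborne": pct_45 >= 66.0,
        # Stable when the shift between rounds stays under 33%.
        "stable": stability <= 33.0,
    }

# Illustrative round-3 ratings for one concept from ten experts.
print(round_statistics([5, 5, 4, 5, 4, 5, 5, 4, 5, 5], previous_mean=4.6))
```

Applying this to each row reproduces the Mean, SD, "% rating 4 or 5", and Stability columns of Tables 2 and 3.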
That requirement is easily met in our scores, both for concepts and contexts.

Table 3  Statistics for the list of contexts

Rank  Possible context                            Mean  Mode  SD    % rating 4 or 5  Stability
 1    Energy in society                           4.37   4    0.72   93.3             3.7
 2    Biotechnology                               4.27   4    0.69   93.3             5.0
 3    Sustainable technology                      4.23   4    0.63   90.0             4.7
 4    Transportation (using vehicles, traveling)  4.14   4    0.62   86.7             0.1
 5    Medical technologies                        4.10   4    0.92   86.7             1.1
 6    Food                                        3.94   4    0.58   80.0             3.8
 7    Industrial production                       3.85   4    0.74   66.7             6.4
 8    Water resource management                   3.84   4    0.79   73.3             3.9
 9    Construction                                3.74   4    0.72   66.7             0.4
10    2-way communication                         3.68   4    0.80   70.0             4.0
11    Global warming                              3.62   4    0.97   55.2             3.8
12    Domestic technologies                       3.60   4    0.79   58.6             5.4
13    Safety/security                             3.52   3    0.65   46.7             1.9
14    Nanotechnology                              3.48   4    0.91   50.0             1.0
15    Scientific research and exploration         3.31   3    0.90   36.7             2.8
16    Security/big brother                        3.04   3    1.00   33.3             1.1
17    Sports and recreation                       3.01   3    0.86   30.0             0.9
18    1-way communication                         2.97   3    0.82   20.0             2.3
19    Virtual reality                             2.96   3    0.88   20.0             4.7
20    Imagining the future                        2.90   3    0.88   26.7             4.5
21    Do-it-yourself                              2.70   3    0.92   16.7             8.0
22    Politics and technology                     2.66   3    0.93   16.7             5.0
23    Rescue                                      2.62   3    0.69    0.0             9.0
24    Packaging                                   2.58   3    0.87   10.3             6.7
25    Toys                                        2.56   3    0.86   13.3            13.0
26    Robotization of society                     2.55   2    0.82   10.3            10.8
27    Technology for peace                        2.50   3    0.90    6.7            15.9
28    Music                                       2.50   2    0.90   10.0            17.4
29    Entertainment                               2.46   3    0.90    6.7            18.9
30    Education                                   2.44   2    0.66    3.3             2.0
31    Personal care                               2.42   2    0.74    6.7             9.1
32    Digital photography                         2.38   3    0.74    3.3             8.8
33    Art and technology                          2.36   3    0.88    6.7             2.1
34    Crime scene investigation                   2.29   3    0.76    0.0            18.9
35    Religions & technology                      1.59   1    0.72    3.3            14.6

The results of the Delphi study have shown that a number of concepts stand out as possible foundations for an engineering and technology education curriculum.
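The Osborne-style criterion that separates the top-ranked items can be sketched as a simple filter. The rows below are a few entries transcribed from Table 3 (context name and "% rating 4 or 5"); the helper name is our own.

```python
# A few (context, % of experts rating it 4 or 5) rows from Table 3.
table3 = [
    ("Energy in society", 93.3),
    ("Biotechnology", 93.3),
    ("Sustainable technology", 90.0),
    ("Transportation", 86.7),
    ("Medical technologies", 86.7),
    ("Food", 80.0),
    ("Global warming", 55.2),
    ("Nanotechnology", 50.0),
]

def top_items(rows, cutoff=66.0):
    """Keep items meeting Osborne et al.'s consensus criterion:
    at least `cutoff` percent of experts rating the item 4 or 5."""
    return [name for name, pct in rows if pct >= cutoff]

print(top_items(table3))
```

Run over the full tables, the same filter selects the "top concepts" and "top contexts" discussed in the following paragraphs.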
The concepts ‘‘design (as a verb),’’ ‘‘system,’’ ‘‘modeling,’’ ‘‘social interaction,’’ and ‘‘optimization’’ were given the highest average score by the Delphi experts. Of these, ‘‘optimization’’ gave rise to somewhat more disagreement among the experts than did the other concepts in this top-five list. ‘‘Second-best’’ concepts were ‘‘innovation,’’ ‘‘specifications,’’ ‘‘design (as a noun),’’ ‘‘sustainability,’’ ‘‘energy,’’ ‘‘materials,’’ ‘‘resource,’’ ‘‘trade-offs,’’ ‘‘technology assess- ment,’’ and ‘‘invention.’’ Of these ten, ‘‘trade-offs,’’, ‘‘technology assessment,’’ and ‘‘invention’’ had somewhat less consensus than the other seven. The concept ‘‘function’’ made an important change from round two to round three. In round three it receives a significantly lower percentage of 4 and 5 ratings. It dropped from 67% in round two to 58% in round three, thus not making the criterion for top concept in the final round. This is combined with a lower standard deviation in round three. As our criterion is a flexible one, we considered including this concept in the final list of ‘‘most important concepts’’. All this is followed by a whole list of concepts that get low average scores and high standard deviations. These concepts are apparently at least problematic. At the bottom of the list we find the concepts ‘‘technological trajectory,’’ ‘‘practical reasoning,’’ ‘‘toler- ance,’’ ‘‘intellectual property,’’ ‘‘complexity,’’ ‘‘algorithms,’’ ‘‘working principle,’’ ‘‘modularity,’’ and ‘‘quality assurance.’’ For two of these, namely ‘‘working principle’’ and ‘‘modularity,’’ there was substantial lack of consensus. Several experts took the trouble to deviate from the round two average score and account for the deviation. In their opinion these concepts were more important than suggested by the average score. Less disagree- ment existed about rejecting ‘‘practical reasoning,’’ ‘‘complexity,’’ and ‘‘algorithms.’’ But here too we find experts defending a higher score. 
One of the experts suggested that the concept "practical reasoning" is very important but could be a sub-concept of "design (as a verb)." Similarly, another expert suggested that "complexity" could belong with "systems." A third expert suggested that "modularity" could also be put under "systems." These suggestions seem worth considering. The remaining concepts with low scores (below 3) were rejected by agreement. Compared to the fairly good agreement on the concepts, it is striking that the contexts gave rise to more disagreement. Standard deviations were generally higher here (0.80 on average) compared to the concepts (0.70 on average). But let us start with what was clearly agreed upon. The contexts "energy in society," "biotechnology," "sustainable technology," "transportation," and "medical technologies" stand out as useful. Of these, "medical technologies" gave rise to the most disagreement. One expert explained that he scored it lower because these technologies seem to draw students to medical schools rather than engineering programs. The context "nanotechnology" scores a mode of four but gives rise to much disagreement. Proponents state that, as an emergent technology with big consequences for our future, it is a very important context. Others view it as rather inaccessible and doubt its suitability for teaching technology to young learners. One expert also remarks that this context is much more about science than about technology.
Next are ‘‘food,’’ ‘‘industrial production,’’ ‘‘water resource management,’’ ‘‘construction,’’ ‘‘two- way communication,’’ ‘‘global warming,’’ and ‘‘domestic technologies.’’ Of these, ‘‘global warming’’ gives rise to more disagreement; some experts suggest it is close to, and should be integrated with, ‘‘sustainable technologies.’’ Rejected with agreement were the contexts ‘‘religions and technology,’’ ‘‘crime scene investigation,’’ ‘‘art and technology,’’ ‘‘digital photography,’’ ‘‘personal care,’’ and ‘‘education.’’ The experts found most of these too A. Rossouw et al. 123 narrow and suggested that they could be subsumed under one of the other contexts. The exception was ‘‘religions and technology,’’ which seems difficult to turn into practical material. Also, it may put the teacher in a difficult position as it can be a loaded subject. For the contexts ‘‘entertainment,’’ ‘‘digital photography,’’ ‘‘art and technology’’ and ‘‘religion and technology,’’ there are nevertheless one or two enthusiastic supporters with arguments for their position. One interesting defense for the last two is that boundary crossing provides rich contexts. Several experts argued to include ‘‘digital photography’’ under ‘‘communication’’ or as a form of ‘‘art and technology’’ under ‘‘entertainment.’’ Entertainment is defended as most relevant as it is so much part of the social, emotional, and physical well-being of students (influencing them both positively and negatively). The remaining contexts got only average scores (2.5–3.5) and often a lack of agreement. Highest standard deviations amongst these were found for ‘‘scientific research and exploration,’’ ‘‘politics and technology,’’ ‘‘security/big brother,’’ ‘‘music,’’ and ‘‘technol- ogy for peace.’’ Several experts commented that they ranked ‘‘do-it-yourself’’ higher than the round two average because this context was the route to engineering for many students. 
It is striking that the traditional domains of application in the US remain popular, as we see "transportation," "communication," "production," and "construction" all in the list of highly scoring contexts. One of the experts expressed concern about this and wondered if there is a need to take a step forward. Biotechnology was already fairly popular in US technology education curricula, and it features strongly here. Perhaps the most interesting outcome is that some new contexts stand out: "energy in society," "sustainable technology" (with overwhelming support), and "global warming" (with less agreement). These seem to be related to an awareness of the global importance of these contexts. This is reflected by what one of the experts wrote, who could see "making the world a better place" as the umbrella context. Several experts suggested combining the three contexts, and they see global warming as a sub-context or discussion item within these or other contexts. Apart from this, most remarks highly favor the subject. "Food" is highly supported by 80% of respondents, for different reasons: it touches upon current societal problems on a world scale, is heavily influenced by technology, and is a basic human need (so familiar to all of us). Though it gets a lower average rating and a higher standard deviation, "water resource management" is highly praised in the comments. Different experts appreciate this context and used similar arguments as those used for food: it is increasingly becoming an issue of high societal relevance, is heavily influenced by technology, and is a basic human need. A single expert commented that it is too narrow. This is probably the reason for the lower ratings. At the same time, another expert is taken in by the broadness of it: "from potable water and desalinization to river and flooding control to reservoir building to bottled water to…" "Medical technology" is another definite winner. "Domestic technologies" gets 4+
from little over half of the respondents (with remarks such as "especially automation" and "part of students' lives"), but with much less strong agreement. Another expert observed that the more practical contexts of a lower abstraction level did not survive, in spite of the fact that these are strongly promoted by current educational research. Traditions seem to be strong among the experts.

General issues raised

Some remarks made by the experts give rise to more general considerations. In the first place, there is the issue of the level of abstraction, both in the concepts and in the contexts. Several experts remarked that the Delphi study would result in a list of separate concepts, while actually the list should be structured. Some concepts are at a higher level of abstraction and generality than others. Also, there are numerous connections between the concepts that remain hidden in the list. This is clearly one of the limitations of this Delphi study, and it probably could not be avoided, given the limitations of the Delphi method. One could argue that bringing structure to the list is a necessary next step in the process of developing a curriculum. One way of doing this can be to draw a concept map that contains all of the concepts identified by the experts as important. The map should also feature the sub-concepts. The experts saw some of the sub-concepts as important but ranked them low because of their low level in the hierarchy of abstraction. The same problem arises in the list of contexts. Several contexts were seen as very important by the experts but were ranked low because of their specificity. Another issue for debate is what to do with the recent insight in educational research that says that contexts should be practices in which students can be involved. That idea clearly was not a priority in the experts' considerations.
Is this a matter of traditionalism, or a lack of awareness of the latest educational research studies? Or did the experts consciously reject this new idea, preferring instead the more traditional, broader contexts? Several experts mentioned that the broad and general contexts should be read as umbrella terms that need further concretization and operationalization. In defense of the broader contexts, several experts remarked that in their view engineering and technology education should involve students in the wider global challenges, and this opinion seems to be a valid consideration. A third issue that several experts noted was whether or not the concepts should be specific to engineering and technology. Sometimes experts remarked that they rejected a concept because it was not specific to engineering and technology. How should we value that consideration? Would it lead to the immediate rejection of one of the highest scoring concepts, "systems," because it emerged not in engineering but in biology, and is used in many disciplines other than engineering and technology? Why then was this criterion of uniqueness (for engineering and technology) not applied more consistently? Were relevant concepts lost not because they were less important but because they were less specific to engineering and technology? How do we value that? These are questions that were considered in the later expert panel meeting. A fourth issue concerns the term "engineering and technology." Some experts suggested that engineering and technology are different and cannot be taken together in one expression that suggests they are almost the same. This raises the question: would separating the two have resulted in two different lists of concepts? If so, how would that be valued? To what extent is this remark related to the perceived difference between general and vocational education, which may suggest that engineering is for the latter and technology for the former?
A related remark is that according to some experts the list of concepts was too focused on the design process and did not do justice to the social aspects of technology. Does that suggest a vocational bias in the list? In the list of preferred concepts this fortunately does not seem to be a problem, because several concepts in this list are directly related to the social aspects of technology. The argument, though, is that all technologies exist only within human praxis. Therefore this relationship needs to be embedded in each concept and not treated as something distinct to be considered separately. Looking at the list, we see that the concepts "design (as a verb)," "social interaction," and "technology assessment" seem to have the human–technology relationship embedded most clearly. Should all concepts clearly reflect the human–technology relationship? These questions, too, were discussed at the expert panel meeting following the Delphi study. A fifth issue is the relationship between concepts and contexts. Some experts remarked that they had difficulty separating the two. The choice of contexts, according to them, cannot be independent from the preferences for certain concepts. Contexts are not infinitely flexible. Some are more suitable for learning certain concepts than others. The setup of the Delphi rounds did not take that into account. Here too we see a necessary next step in the process leading towards curriculum development. A sixth issue concerns the different approaches taken by experts in evaluating the contexts, and the request for an overarching "umbrella context." The proposals for new contexts and the comments on existing ones reveal that respondents take very different approaches in suggesting and evaluating contexts.
Analyzing the comments, the proposed contexts and the general remarks on the context part, we find roughly nine approaches, each with a different view on what the main criteria for suitable contexts should be. In random order, they state that the contexts should:

1. Be truly relevant to students' lives
2. Exemplify enduring human concerns, being fundamental to human nature and relevant in a variety of cultures and societies
3. Be situated around societal issues/problems
4. Encompass the Human-Made World
5. Be big examples, like the development of the paper clip, as described by Petroski
6. Be local (culturally, geographically)
7. Cover the technological domains
8. Use the "Designed World Standards" in "Standards for Technological Literacy"
9. Best fit three considerations: (a) fit to the concepts; (b) familiarity to the learner; (c) the ability of the instructor or curriculum designer to provide more and less complex versions of the contexts that help make salient the critical features and relationships

One respondent noted that before trying to find a set of contexts, we need an overarching "supercontext" or "umbrella context" to work within. "Purpose" could be such an umbrella context. Almost all the approaches listed above could be viewed as overarching umbrella contexts and used to frame the search for a set of suitable contexts. Only the approach that the contexts must be local might be hard to use for this purpose. Is there a best approach, or can the approaches be combined? Two types of contexts received high averages in the final round: the traditional contexts, and contexts that fit the suitability criteria of a combination of approaches. An example of such a context is the high-scoring "food." This context is suitable from each of the viewpoints of the first three approaches. In the expert panel meeting we made an effort to find a combination of the different approaches.
Finally, several respondents, from the Philosophy, History and Communication as well as from the Technology Education groups, posed fervent appeals for a more central place for normative aspects in technology education. Issues relating to ethics, sustainability, and the relationship between humans and technology should be factored into (all) the concepts. Contexts should be used as a discussion arena for these normative issues. In round one a concept was suggested that we did not include in the new list but that is related to this: "unintended or unanticipated consequences," the idea that all technologies have consequences that are not anticipated by the designers. These consequences may be positive or negative. It seems this is a notion that is still lacking. One of the other respondents added: "Many (most?) of the great challenges on our planet in the 21st century (global warming, world hunger, pandemics, nuclear warfare, etc.) have resulted from technological endeavor and/or will be addressed by technological 'fixes.' … I think it's far more important that kids understand that all technologies have unintended consequences and that we MUST assess them in that light (as well as for intended consequences)." On the other hand, a respondent warns: "Teachers shouldn't put themselves in the position of seeming to push a political agenda, so sensitive topics must be handled carefully. Some of the contexts could be very tricky to handle (like religion, politics, and technology for peace)." Also, one respondent remarked that some of the concepts are too value-laden ("robotization of society" and "security/big brother is watching you") and should be more neutral. The question here seems to be how neutral and "technical" technology education should or can be. Can or should we teach the nature of engineering and technology without involving normative aspects?
In the panel meeting this issue was discussed, and a proposal was made that does justice to the appeal to give values a visible place in the lists of concepts and contexts.

Second phase: panel meeting

Set-up

The outcomes of the Delphi study were used as the input for a meeting of a panel of experts, with the purpose of turning these outcomes into a framework for curriculum development. The panel consisted of five participants of the Delphi study (all from technology education) plus two other experts who had not been involved in the Delphi study (one from technology education and one from engineering education). Thus we created the possibility of a fresh look at the outcomes, while at the same time allowing input from those who were involved in the Delphi study, so that the panel meeting would not fail to do full justice to the Delphi study. Two of the researchers were also present (one of whom has a background both in technology education and in the philosophy and history of technology). The process was as follows: first, the group reflected on the contexts that came out of the Delphi study; second, it reflected on the overarching concepts. Both of the lists were found to lack structure and hierarchy, which is understandable given the methodology of the Delphi study. An analysis was made of the nature of the consecutively ranked concepts and contexts to provide the necessary structure for use as a curriculum framework.

Outcomes of the panel meeting

The contexts that ranked high on the list that resulted from the Delphi study appeared to consist of two sub-groups. In the first place, the panel recognized the contexts that traditionally had been used in the US as curriculum organizers: construction, production, transportation, communication, and biotechnology. The remaining contexts all seemed to reflect major global concerns. Some examples of these are energy, food, water, and medical technologies.
This impression was confirmed by the motivations given by some of the experts in the Delphi study, one of whom phrased this as "making the world a better place." In the discussion, the panel realized that both the traditional contexts and the global-concern contexts were related to basic human needs that are addressed by engineering and technology. Thus, the panel developed a single list of contexts that reflected engineering and technological endeavors under the "umbrella contexts" of addressing personal, societal, and global concerns. This list includes the following: food, shelter (our translation of the context that was originally called 'Construction'), water, energy, mobility (originally called 'Transportation'), production, health (the former 'Medical technologies' context), security, and communication. The list is presented in Table 4. This list both does justice to the outcomes of the Delphi study (it covers the top nine of the contexts list) and has a logic to support it (they are all basic human concerns). Another consideration was that the list would need to do justice to the fact that some of these contexts were put forward by the Delphi experts from a 'global concern' point of view, while other contexts originally were put forward because they allowed for deriving more concrete practices that would appeal to learners because of their daily-life character. So the panel decided to add the recommendation that, when developing a curriculum, the contexts should be elaborated in two directions: in a 'personal concern' (or 'daily life practice') direction and in a 'global concern' direction. The next step was to reflect on the concept list. It was evident that this list contained concepts of different levels of abstraction. Therefore it was decided to identify those concepts with the highest level of abstraction and to put the remaining concepts under these 'main' concepts as much as possible.
Starting from the top of the list, the panel identified the following five concepts as the most abstract: design (as a verb), systems, modeling, resources, and values. The last-mentioned concept did not feature as such on the list but was introduced by the panel as a heading for several value-related concepts in the list. This introduction also answered the concern of a number of Delphi experts to make the normative dimension of technology and engineering visible in the list of concepts. The remaining concepts could then be put under these five main headings as sub-concepts, but here the panel realized that there is no sound basis for drawing the line between those concepts that were included as sub-concepts and those that were not. In fact, all concepts on the list (except for two) were given at least a score of '3', which indicates that even though not all were considered equally important, nearly all concepts were seen as fairly relevant. It would therefore be a missed opportunity for innovation of the curriculum to leave out concepts based on an arbitrary decision about where to draw the line.

Table 4 Contexts

Shelter ('construction')
Artefacts for practical purposes ('production'/'manufacturing')
Mobility ('transportation')
Communication
Health ('biomedical technologies')
Food
Water
Energy
Safety

The panel decided to include the concepts that ranked high on the Delphi list. They serve as examples of sub-concepts that have a certain status of priority because they were ranked highly by the Delphi experts. Thus the panel ended with the list of concepts and sub-concepts that is presented in Table 5. Finally, the panel decided to put forward a number of remarks concerning a possible next step, namely developing these lists into an ontology-based curriculum.
In the first place, the panel noted that there are two possibilities for structuring the curriculum: according to the concepts, or according to the contexts. The first option will result in modules like 'Systems', 'Resources' or 'Values' and can be used to teach and learn the concepts in the way suggested by current ideas on concept-context learning (learning a concept in a series of different contexts, which gradually leads to insight at a more abstract level, and thereby also to transferability to new contexts). This can be called a 'systematic' or 'disciplinary' approach. It would be similar to the "Man-Made World" course book that we had used as a source of inspiration for identifying possible concepts. The second option results in modules like 'Water', 'Energy', or 'Mobility' and can be used to show the versatile nature of the concepts, because these will all feature in the module. This can be called a 'thematic' approach and currently seems to be the most popular internationally. Both options are justifiable, and a curriculum could even contain a combination of modules based on both options. A second remark is that before developing the curriculum it is necessary to investigate the specific meaning concepts acquire when they are applied to the different contexts. This is necessary because current theory on situated cognition (Hennessy 1993) claims that concepts are indeed 'colored' by contexts and are therefore context-dependent to a certain degree. A third remark was that it would be useful for curriculum developers to have case studies or vignettes that illustrate how the framework of concept and context lists can be developed into a curriculum. A fourth remark was that the framework does not yet reflect the need to develop both qualitative and quantitative activities when developing the curriculum.
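The two structuring options can be made concrete with a small sketch: the same concept-by-context grid yields either 'disciplinary' modules (one per concept) or 'thematic' modules (one per context). The concept and context names below come from the panel's lists, but which concepts are paired with which contexts is purely an illustrative assumption.

```python
# Sketch of the two curriculum-structuring options: the pairing of
# concepts with contexts is hypothetical, not the panel's assignment.
concepts = ["systems", "modelling", "resources"]
contexts = ["water", "energy", "mobility"]

# One teaching unit per (concept, context) pair.
grid = {(c, x): f"{c} in the {x} context" for c in concepts for x in contexts}

# Option 1: 'disciplinary' modules -- a concept walked through a series
# of contexts, supporting gradual abstraction and transfer.
by_concept = {c: [grid[c, x] for x in contexts] for c in concepts}

# Option 2: 'thematic' modules -- one context showing all concepts at work.
by_context = {x: [grid[c, x] for c in concepts] for x in contexts}
```

A mixed curriculum, as the panel suggests, would simply draw some modules from `by_concept` and others from `by_context`.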
Table 5 Concept list

Designing ('design as a verb'): Optimising; Trade-offs; Specifications; Invention; Product lifecycle
Systems: Artefacts ('design as a noun'); Structure; Function
Modelling
Resources: Materials; Energy; Information
Values: Sustainability; Innovation; Risk/failure; Social interaction; Technology assessment

Current technology education curricula are often biased towards the qualitative (only conceptual, without calculations), but the engineering dimension requires serious attention to the quantitative as well. After the session, the panel felt that justice had been done to the outcomes of the Delphi study, while at the same time the result was now more systematic and structured. The group decided to start developing project proposals in which the framework is elaborated and transformed into a curriculum proposal.

Conclusions and recommendations

Our Delphi study and expert panel meeting have resulted in a list of concepts and a list of contexts that can be used for developing engineering and technology education curricula. The approach of teaching concepts in contexts is a middle road between total disbelief in the possibility of teaching and learning abstract concepts that can be applied in different contexts, and a naive belief in the possibility of teaching such concepts directly at a high level of abstraction and only applying them later in various contexts. In the concept-context approach even the term 'transfer' is used with some hesitation, because teaching a concept in one context and then immediately transferring that concept to another context is problematic. The idea is that by teaching concepts in a variety of contexts the learner will gradually start to recognize the more generic nature of the concepts and be able to apply them in new contexts. This recognition has to be supported carefully, as the concepts take different 'appearances' in different contexts.
Although at a high level of abstraction both a building and a bike are artifacts, it is by no means evident to novices in the field that this is the case. Evidently an 'artifact' in the context of shelter has a different 'appearance' than in the context of transportation. Yet there is value in seeing the connection between the two: it helps the learner gain a better understanding of the nature of both the building and the bike. Similar examples can be provided for the other concepts and contexts in our results. We recommend that curriculum developers consider the use of our lists of concepts and contexts to rethink engineering and technology education frameworks and curricula. These lists may help, for instance, to bring more conceptual coherence to the now very extensive list of Standards for Technological Literacy that was developed and is now implemented in the USA. The current New Zealand curriculum is another example of an approach in which concepts and contexts play a vital role, and perhaps for a next revision of this curriculum, too, the outcomes of our study can be useful.

Open Access This article is distributed under the terms of the Creative Commons Attribution Noncommercial License, which permits any noncommercial use, distribution, and reproduction in any medium, provided the original author(s) and source are credited.

References

Brown, B. B. (1968). Delphi process: A methodology used for the elicitation of opinions of experts. Document No. P-3925. Santa Monica, CA: The RAND Corporation.
Childress, V. W., & Rhodes, C. (2008). Engineering outcomes for grades 9–12. The Technology Teacher, 67(7), 5–12.
David, E. E., Jr., Piel, E. J., & Truxall, J. G. (1971). The man-made world. New York: McGraw-Hill Book Company.
de Vries, M. J. (2005). Teaching about technology: An introduction to the philosophy of technology for non-philosophers. Dordrecht: Springer.
Dearing, B.
M., & Daugherty, M. K. (2004). Delivering engineering content in technology education. The Technology Teacher, 64(3), 8–11.
Hennessy, S. (1993). Situated cognition and cognitive apprenticeship: Implications for classroom learning. Studies in Science Education, 22, 1–41.
International Technology Education Association. (2000). Standards for technological literacy: Content for the study of technology. Reston, VA: ITEA/Technology for All Americans Project.
Osborne, J., Ratcliffe, M., Collins, S., Millar, R., & Duschl, R. (2003). What ideas about science should be taught in school science? A Delphi study of the 'expert' community. Journal of Research in Science Teaching, 40(7), 692–720.
Pilot, A., & Bulte, A. M. W. (2006). The use of contexts as a challenge for the chemistry curriculum: Its successes and the need for further development and understanding. International Journal of Science Education, 28(9), 1087–1112.
Reeves, G., & Jauch, L. R. (1978). Curriculum development through Delphi. Research in Higher Education, 8(2), 157–168.
Westra, R., Boersma, K., Waarlo, A. J., & Savelsbergh, E. (2007). Learning and teaching about ecosystems: Systems thinking and modelling in an authentic practice. In R. Pintó & D. Couso (Eds.), Contributions from science education research (pp. 361–374). Dordrecht: Springer.
Concepts and contexts in engineering and technology education: an international and interdisciplinary Delphi study
ACCURACY VERIFICATION OF GPS-INS METHOD IN INDONESIA

A. K. Mulyana a, A.
Rizaldy b, K. Uesugi c,*

a The National Coordinating Agency for Surveys and Mapping, Cibinong 16911 – mulyana@bakosurtanal.go.id
b The National Coordinating Agency for Surveys and Mapping, Cibinong 16911 – aldino.rizaldy@bakosurtanal.go.id
c Pasco Corporation, NSDI Project Office, Menara Jamsostek 4th Floor, Jl. Jend. Gatot Subroto No. 38, Jakarta – uesugi@pasco-prj-nsdi.com

Commission I, WG I/2

KEY WORDS: GPS/INS, IMU, Adjustment, Image, Triangulation, Comparison

ABSTRACT: Pasco Corporation (Japan) has been implementing a project in Indonesia for Sumatra Island named Data Acquisition and Production on the National Geo-Spatial Data Infrastructure (NSDI) Development. Digital aerial images at 25 cm GSD for 1:10,000 scale mapping have been taken as part of the project. The owner of the project, The National Coordinating Agency for Surveys and Mapping (Bakosurtanal), planned to apply the conventional aerial triangulation method at the initial stage. Pasco recommended a direct geo-referencing methodology using GPS-IMU measurements and carried out a verification work in a city area. Measurements of tie points were made manually using the KLT/ATLAS software and adjusted with the BINGO software. Aerial triangulation accuracy verifications were done using one height control point in the block center, one GCP in the center, and four GCPs at the corners plus one in the center. The results are, respectively: RMS X,Y = 0.410 m, RMS Z = 0.394 m (one height control point); RMS X,Y = 0.430 m, RMS Z = 0.392 m (one GCP); and RMS X,Y = 0.356 m, RMS Z = 0.395 m (5 GCPs). Five GCPs per block have been preferred in official applications for safety reasons. Comparisons of direct geo-referencing results with geodetic check points and with aerial triangulation block adjustments have been made. The details of the work are given in this study.

* Corresponding author.
International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Volume XXXIX-B1, 2012. XXII ISPRS Congress, 25 August – 01 September 2012, Melbourne, Australia.

1. INTRODUCTION

This work compiles the accuracy verification of position and attitude in digital photography integrated with GPS/IMU for the project conducted on Sumatra Island, Indonesia. In the project planning phase it was at first planned to conduct the traditional aerial triangulation method. However, the project was contracted to PASCO, a Japanese company with ten years of experience in digital aerial photography and GPS/IMU (Inertial Measurement Unit) operation. Additionally, PASCO has joined the working group for creating a new national aerial survey standard set up by the Geospatial Information Authority of Japan (GSI). In this verification, PASCO examined whether a 1:10,000 mapping methodology using GPS/IMU could be applied in Indonesia, and by how much the number of GCPs (Ground Control Points) needed for aerial triangulation (AT) could be reduced. PASCO carried out the aerial photography of four cities, Medan, Pekanbaru, Padang and Jambi, for the NSDI project; the verification of direct geo-referencing, however, was done only in the city of Medan, with digital images at 25 cm GSD (Ground Sampling Distance).

2. SYSTEM AND METHODOLOGY

The AEROcontrol GPS/IMU system from IGI mbH was used in this project. Its features and performance, taken from the product catalogue, are as follows.

I. Features
- Determines position and attitude (exterior orientation parameters) with high accuracy at the moment the camera is exposed.
- A GPS reference station must be on the ground during the flight mission.
- The system is a combination of GPS and IMU.

II. Performance
- Accuracy of position: 0.05 m
- Accuracy of roll, pitch: 0.004 deg
- Accuracy of heading: 0.01 deg
- GPS: 12-channel L1/L2 DGPS

Figure 1.
CCNS4 & AEROcontrol

It is necessary to determine the IMU misalignment before taking aerial photography integrated with GPS/IMU. Boresight calibration was therefore carried out in the vicinity of Medan airport. The conditions of the boresight calibration and the flight plan are shown below.

Figure 2. Flight plan of the boresight calibration

Boresight calibration is the computation of the misalignment between the aerial camera and the IMU. It is conducted to determine the correction values by comparing the exterior orientation parameters from aerial triangulation with those from direct geo-referencing. It usually involves the following conditions:
- about 10 shots per strip, in each of 4 strips;
- the next strip can be flown in the reverse flight direction;
- 5 or more GCPs, set up in a good geometry.

Figure 3. Work flow for the boresight calibration

GCPs were installed around the center and the four corners of the block and surveyed by the static GPS method. Aerial photography integrated with GPS/IMU for the boresight calibration was operated in the following manner:
- The UltraCam-X digital aerial camera, manufactured by Vexcel Imaging GmbH (a Microsoft company), was used.
- Photography was carried out with simultaneous observation at one-second epochs at the ground GPS reference station.
- The exposure conditions are shown below.

Table 1. Boresight calibration flight
Focal length: 100.5 mm
Photo scale: 1/15,920
Image resolution: 11.5 cm
Altitude: 1600 m
Number of flight lines: 4
Overlap: 65% / 35%

Aerial triangulation was carried out in the traditional way (we observed photo coordinates of tie points manually using KLT/ATLAS, and computed the block with the BINGO bundle block adjustment program).
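The Table 1 numbers are internally consistent with the usual relations photo scale ≈ focal length / flying height and GSD ≈ pixel pitch × flying height / focal length. A quick check; note that the 7.2 µm panchromatic pixel pitch of the UltraCam-X is our assumption and is not stated in the paper:

```python
# Consistency check of the Table 1 flight parameters.
f = 0.1005        # focal length in metres (100.5 mm, Table 1)
H = 1600.0        # flying height above ground in metres (Table 1)
pixel = 7.2e-6    # ASSUMED UltraCam-X pan pixel pitch in metres

scale_denominator = H / f            # photo scale 1 : scale_denominator
gsd = pixel * scale_denominator      # ground sampling distance in metres

print(round(scale_denominator), round(gsd * 100, 1))
```

Under that pixel-pitch assumption this reproduces the 1/15,920 photo scale and the roughly 11.5 cm image resolution quoted in Table 1.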
Table 2. Accuracy of GCPs and tie points
GCP residuals:       RMS XY 0.06 m, Z 0.10 m;  MAX XY 0.08 m, Z -0.17 m
Tie point residuals: RMS X 0.6 µm, Y 1.0 µm, XY 0.6 µm;  MAX X 2.9 µm, Y -3.6 µm, XY 3.6 µm

The attitude of each photo center at exposure time was calculated by IMU analysis using the AeroOffice software. The GPS/INS position difference is within 0.10 m and the INS/KF accelerometer bias is within -10,000 µg, which are at an acceptable level. By comparing the exterior orientation parameters calculated by AT with those obtained from GPS/IMU, the misalignment between the IMU and the camera was calculated. These correction values are used in the analysis of GPS/IMU data until the camera and the IMU are dismounted from each other; when they are removed, the boresight calibration is performed again.

3. APPLICATION

Aerial photography for 1:10,000 scale topographic mapping was planned with digital images at 25 cm GSD for the Medan city area. It is a very flat area, with elevations ranging from 0 m to 100 m. For the kinematic GPS solution, reference stations were used in accordance with the GSI (Geospatial Information Authority of Japan) standard: the ground GPS reference station should be installed no more than 70 km from the target area. The GPS/IMU observations were recorded at one-second intervals during flight and were used to derive the camera positions (X0, Y0, Z0) by kinematic GPS processing between the ground GPS reference station and the on-board GPS. Typically, an IMU accumulates error during constant-velocity, straight flight; the longest flight line in Medan was around 40 km, which did not create any problem with this cumulative error. Since data acquisition was done at 25 cm ground resolution in this project, the flight altitude was planned at 3,100 m above mean sea level. With GPS/IMU, this has no adverse impact on the accuracy of directly determining the position and attitude of the camera.
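The misalignment computation described above for the boresight step (comparing the exterior orientation from AT with that from GPS/IMU) can be sketched for the heading component alone. The 10.0° and 10.3° attitudes below are hypothetical values for a single exposure; real processing (e.g. in AeroOffice) handles all three angles and averages over many exposures.

```python
from math import atan2, cos, degrees, radians, sin

def rot_z(a):
    """2-D heading rotation matrix for angle a (radians), as a tuple of rows."""
    return ((cos(a), -sin(a)), (sin(a), cos(a)))

def mat_mul(A, B):
    """Product of two 2x2 matrices."""
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

def transpose(A):
    return tuple(tuple(A[j][i] for j in range(2)) for i in range(2))

# Hypothetical attitudes for one exposure (heading only):
R_cam_at = rot_z(radians(10.0))   # camera heading from the AT bundle adjustment
R_imu = rot_z(radians(10.3))      # heading reported by the GPS/IMU solution

# Boresight matrix: the fixed rotation between IMU and camera frames,
# modelled here as R_cam = R_imu @ R_boresight.
R_bore = mat_mul(transpose(R_imu), R_cam_at)

# Heading component of the misalignment in degrees; here it is -0.3.
heading_offset = degrees(atan2(R_bore[1][0], R_bore[0][0]))
```

Estimated this way per exposure and averaged over the four calibration strips, the offset is then applied as a correction until the camera and IMU are dismounted.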
22 GCPs were established for verification measurements. Figure 4 shows the project area and the GCP distribution (blue circles are GPS reference stations).

Figure 4. GCP and flight plan

The UltraCam-X (Vexcel Imaging GmbH) digital frame camera, with a focal length of 100.500 mm and a panchromatic image size of 9,420 × 14,430 pixels, was used in this application. After calculation of the exterior orientation parameters, the verification (check) points were measured with a stereo plotter (Summit Evolution) and compared with the GPS survey coordinates.

Figure 5. Workflow for GPS/IMU processing

4. RESULTS AND CONCLUSION

The following measurements and analyses were implemented to verify the accuracy of direct geo-referencing by GPS/IMU.
• Stereo models were created from the exterior orientation parameters obtained by direct geo-referencing. 22 GCPs were measured in the stereo models and compared (as check points, CPs) with the geodetic coordinates. The results are in Table 3.

Table 3. Residuals of check points (m)
  GCP name     X       Y       Z      XY
  MD01      -0.280  -0.001  -0.360  0.280
  MD02       0.282   0.015  -0.480  0.282
  MD03       0.189   0.001  -0.720  0.189
  MD04       0.370   0.099  -0.360  0.383
  MD05       0.186   0.006   0.000  0.186
  MD06       0.432   0.174  -0.773  0.466
  MD07       0.065  -0.191  -0.816  0.202
  MD08       0.263   0.009  -0.571  0.263
  MD09      -0.066  -0.325  -0.163  0.332
  MD10      -0.128   0.199  -0.163  0.237
  MD12       0.000   0.000   0.538  0.000
  MD13      -0.121  -0.062  -1.075  0.136
  MD14      -0.181   0.059   0.614  0.190
  MD15       0.060  -0.002  -0.845  0.060
  MD16       0.062  -0.119  -0.384  0.134
  MD17       0.063  -0.240  -0.077  0.248
  ARP        0.685  -0.230   0.154  0.723
  N1.1139   -0.105   0.041  -1.392  0.113
  N1.1140   -0.061  -0.242   1.075  0.250
  N1.1141    0.059  -0.177  -0.461  0.187
  STDEV      0.232   0.145   0.594  0.155
  RMS        0.244   0.149   0.658  0.286

• Aerial triangulation measurements and block adjustments with three different control point configurations (one height control point in the block center; one full control point in the center; 4 GCPs at the corners and 1 in the center) were implemented. The results are in Table 4.

Table 4. Comparison of direct geo-referencing with AT (m)
  GCP config.                 Stat    ∆X      ∆Y      ∆Z      ∆XY
  1 VE in the block center    MIN   -0.002   0.001  -0.074   0.070
  (HO 20 / VE 19 CPs)         MAX    0.656  -0.632  -0.744   0.695
                              STDEV  0.287   0.262   0.404   0.191
                              RMS    0.299   0.281   0.394   0.410
  1 GCP in the block center   MIN    0.004   0.024   0.120   0.082
  (HO 19 / VE 19 CPs)         MAX    0.676  -0.636   0.840   0.691
                              STDEV  0.297   0.285   0.383   0.216
                              RMS    0.305   0.304   0.392   0.430
  5 GCPs                      MIN    0.000   0.000   0.000   0.000
  (HO 15 / VE 15 CPs)         MAX    0.645   0.408   0.816   0.679
                              STDEV  0.244   0.188   0.389   0.176
                              RMS    0.266   0.237   0.395   0.356

The number of GCPs does not affect the vertical accuracy of the block. Better horizontal accuracy can be obtained by using 4 or 5 GCPs in the block, even though one height control point is enough to achieve the accuracy required for 1:10,000 scale mapping. However, PASCO recommends using 5 GCPs for safety. On the other hand, comparison of Tables 3 and 4 shows that direct geo-referencing is highly reliable at this scale. Aerial triangulation provides better vertical accuracy than direct geo-referencing, but the horizontal accuracies in both comparisons are at almost the same level. The accuracy expectations of Bakosurtanal (Indonesia) for 1:10,000 scale topographic mapping are HO = ±2 m and VE = ±1.5 m. Mapping in forest areas such as Kalimantan and Sumatra may be harder than in any other area: establishment and surveying of GCPs can be difficult, so direct geo-referencing can be applied in such areas without hesitation.
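The STDEV and RMS rows in Tables 3 and 4 are standard accuracy statistics over the check-point residuals; a minimal sketch with invented sample residuals (not rows from the tables):

```python
# Sketch of the accuracy statistics reported in Tables 3 and 4: RMS and
# standard deviation of check-point residuals. The three sample residuals
# are illustrative values, not data from the paper.
import math

def rms(values):
    """Root mean square of the residuals."""
    return math.sqrt(sum(v * v for v in values) / len(values))

def stdev(values):
    """Population standard deviation of the residuals."""
    mean = sum(values) / len(values)
    return math.sqrt(sum((v - mean) ** 2 for v in values) / len(values))

dx = [0.3, -0.1, 0.2]  # hypothetical X residuals in metres
print(f"RMS={rms(dx):.3f} m, STDEV={stdev(dx):.3f} m")
```

RMS measures total error (including any bias), while STDEV measures only the scatter about the mean, which is why the two rows differ most in the Z column of Table 3.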
Even though the vertical accuracy is within the specified range, aerial triangulation measurement and integrated geo-referencing (AT with GPS/IMU data) with only one ground control point (a vertical control point) can be applied to increase the vertical accuracy.

Outpatient Follow-up and Secondary Prevention for Melanoma Patients

Cancers 2010, 2, 1178-1197; doi:10.3390/cancers2021178; ISSN 2072-6694; www.mdpi.com/journal/cancers

Review

Ryan G. Gamble 1,†, Daniel Jensen 1,†, Andrea L. Suarez 1, Anne H. Hanson 2, Lauren McLaughlin 3, Jodi Duke 1,4 and Robert P. Dellavalle 1,5,6,*

1 Department of Dermatology, University of Colorado Denver, Aurora, CO, USA; E-Mails: ryan.gamble@ucdenver.edu (R.G.G.); james.jensen@ucdenver.edu (J.D.J.)
2 Kansas City University of Medicine and Biosciences, Kansas City, MO, USA; E-Mail: ahhanson@kcumb.edu
3 Rocky Vista University College of Osteopathic Medicine, Parker, CO, USA; E-Mail: laurenmcla@gmail.com
4 School of Pharmacy, University of Colorado, Aurora, CO, USA; E-Mail: jodi.duke@ucdenver.edu
5 Dermatology Service, Denver Veterans Affairs Medical Center, Denver, CO, USA
6 Epidemiology Department, Colorado School of Public Health, Aurora, CO, USA
† These authors contributed equally to this work.
* Author to whom correspondence should be addressed; E-Mail: robert.dellavalle@ucdenver.edu; Tel.: +1 303-399-8020 x 2475.

Received: 23 April 2010; in revised form: 2 June 2010 / Accepted: 3 June 2010 / Published: 7 June 2010

Abstract: Health care providers and their patients jointly participate in melanoma prevention, surveillance, diagnosis, and treatment.
This paper reviews screening and follow-up strategies for patients who have been diagnosed with melanoma, based on currently available evidence, and focuses on methods to assess disease recurrence and second primary occurrence. Secondary prevention, including the roles of behavioral modification and chemoprevention, is also reviewed. The role of follow-up dermatologist consultation, with focused physical examinations complemented by dermatoscopy, reflectance confocal microscopy, and/or full-body mapping, is discussed. Furthermore, we address the inclusion of routine imaging and laboratory assessment as components of follow-up and monitoring of advanced stage melanoma. The role of physicians in addressing the psychosocial stresses associated with a diagnosis of melanoma is reviewed.

Keywords: melanoma; melanoma follow-up; patient management; secondary prevention; melanoma surveillance; melanoma chemoprevention

1. Introduction

Cutaneous melanoma is one of the most common forms of skin cancer and accounts for the greatest number of skin cancer-related deaths in the United States [1]. Dermatologists and primary care physicians are instrumental in screening and treating at-risk patients with suspicious lesions and in initiating multidisciplinary follow-up care for melanoma patients following histopathologic confirmation of the disease. Fortunately, most melanoma patients have thin lesions and are cured with primary lesion excision. Improvements in early diagnosis and more frequent diagnosis of lower-risk patients (i.e., those with <1 mm of tumor thickness) have led to increased survival. As a consequence, dermatologists and other physicians are increasingly faced with decisions regarding the long-term care and surveillance of cutaneous melanoma patients.

2. Secondary Prevention of Melanoma

2.1.
Behavioral Methods of Prevention Numerous studies have been conducted on sun-avoidant behaviors and sun protection practices in patients who have already been diagnosed with a cancerous skin lesion. Behavior changes are particularly important for patients diagnosed with melanoma, as these patients are at an increased risk for developing subsequent primary melanomas compared to the general public [2]. Novak et al. found that the majority of patients diagnosed with a cancerous skin lesion reported an increase in sunscreen use, wore sun protective hats more often, and avoided the direct sun during the midday hours [3]. Women were particularly likely to increase their sun protection behavior. Overall, 87% of melanoma patients reported an increase in sun awareness. This increase in sun awareness has been attributed to frequent consultation with a physician following diagnosis and treatment [2]. However, other studies have been unable to report similarly successful behavior change following melanoma diagnosis. One study found that even though patients with a previous diagnosis of melanoma were more likely to perform skin self-examinations and recognize the importance of prompt treatment of suspicious skin lesions, they were not necessarily more knowledgeable about other associated symptoms or more likely to protect themselves from sun exposure [4]. A similar study also found that only 23% of previously diagnosed patients practiced regular sun protection [5]. Sunscreen was always used by 57% of melanoma patients compared to 28% to 32% use by the general public. However, the rates of sun exposure in cancer survivors did not differ from those in the general public. These mixed outcomes illustrate the limited value that individual physician-directed patient education may have in altering patients’ sun protection efforts, and that the utilization of several effective programs may be needed in order to achieve behavioral change. 
While it is important that dermatologists stress the importance of sun protection to melanoma patients, primary care providers can serve a key role by providing advice and referring patients with risk factors [2]. Clinical recommendations after melanoma diagnosis currently include multiple follow-up visits which involve patient education and self-examination of the skin and lymph nodes. A more directed patient education program may also increase the likelihood of patients' compliance with sun protection. There is evidence that melanoma recurrence and second primary diagnoses are usually found by the patient; however, skin self-examinations may require the help of family members in order to see parts of the body that are less easily observed by oneself [5]. This provides an optimal opportunity for physicians to include family members in melanoma prevention efforts, an important intervention because of the increased risk for melanoma in first-degree relatives of patients with melanoma. Unfortunately, this opportunity is generally overlooked. One study found that only half of first-degree relatives have their skin examined by a physician following the diagnosis of melanoma in a family member [6].

2.2. Melanoma Chemoprevention

Ideally, melanoma chemopreventive agents taken as prophylaxis would reduce melanoma recurrence, incidence, and mortality [7]. While evidence is insufficient to recommend their routine use, candidate agents reviewed here target cholesterol biosynthesis, inflammatory and immune mediators, antioxidants, and cell signaling pathways implicated in cellular transformation. Evidence for nutritional chemoprevention via diet, micronutrients, and dietary supplements is also reviewed.

2.2.1. Statins, Fibrates, and Apomine

Statins and fibrates are anti-lipidemics which lower cholesterol by different mechanisms, and both drugs display antitumor activity in a variety of experimental cancer models [8,9].
While early trials reported significantly fewer melanomas in patients taking fibrates [10–12], meta-analyses do not reveal these reductions to be of statistical significance [13,14]. Statins have antiproliferative, proapoptotic, anti-invasive, and radiosensitizing effects [15]. However, they may alternatively promote cancer cell growth through their angiogenic effects [16,17]. Apomine is a novel antineoplastic agent that inhibits the mevalonate/isoprenoid pathway of cholesterol synthesis, inhibits cancer cell growth, and induces tumor cell line apoptosis [18–21]. Researchers recently developed a topical formulation of apomine and demonstrated a statistically significant decrease in tumor incidence in mouse melanoma models [22].

2.2.2. Anti-inflammatory Agents

Overexpression of cyclooxygenase-2 (COX-2) and increased prostaglandin biosynthesis are defining features of several malignancies and correlate with carcinogenesis [23–27]. The SKICAP-AK trial revealed that NSAID use of short duration was more protective against non-melanoma skin cancers than longer duration of use [28]. A few epidemiological studies have examined the association of NSAIDs with melanoma risk and have found conflicting results [29–32]. A recent case-controlled study examining the association between statin and NSAID use and melanoma demonstrated that control subjects were more likely than melanoma subjects to have reported NSAID or aspirin use for 5 years [33]. While NSAIDs have yet to demonstrate sufficient evidence to be recommended for melanoma chemoprevention, their potential role in melanoma may be directed towards adjuvant treatment of metastases rather than prevention [34].

2.2.3. Anti-oxidants

Induction of reactive oxygen species (ROS) in the skin by ultraviolet (UV) radiation is damaging to intracellular organelles, depletes the critical antioxidant glutathione (GSH), and ultimately promotes oncogenic mutations via oxidative DNA damage [35–38].
N-acetylcysteine (NAC), an orally bioavailable antioxidant, replenishes the pool of available GSH [39]. Topical formulations have the potential to decrease UV-mediated GSH depletion and ROS formation [40], and recent human studies support the utility of NAC in pre-UV exposure prophylaxis [41]. 2.2.4. Anti-proliferatives Perillyl alcohol (POH) is a naturally occurring chemical that can slow tumor cell growth by suppressing transcription factor-mediated cell proliferation and transformation [42]. A recent phase IIa study examined reversal of actinic damage following POH vs. placebo in patients with sun-damaged skin, and showed that histopathologic score was reduced with low dose POH and that abnormal nuclei were significantly reduced with high dose POH. These compelling results warrant larger, well-controlled studies of POH as a chemopreventive agent as well as efforts to improve dermal penetration and bioavailability of POH-based therapeutics. 2.2.5. Diet, Micronutrients and Nutritional Supplements Diet, micronutrients, and other nutritional supplements may also play a role in melanoma chemoprevention [43]. Vitamins C [44], D [45–48], and E [49–55] each have varying degrees of evidence supporting their use as chemopreventive agents. The same is true with other dietary supplements such as green tea polyphenols [56–61], selenium [62–65], curcumin [66], and lycopene [67–69]. While there are many in vitro and animal studies that indicate a possible benefit in melanoma prevention, human studies are generally lacking and do not suggest a clear clinical recommendation that physicians should pass on to their patients. 3. Diagnostic Follow-up of the Melanoma Patient 3.1. Dermatoscopy Dermatoscopy, also referred to as epiluminescence microscopy or dermoscopy, is currently the most effective clinical modality for diagnosing and screening for melanoma. 
Essentially skin surface microscopy, this technique allows inspection of skin lesions without obstruction from skin surface reflections. An invaluable tool for monitoring clinically atypical nevi and identifying new primary lesions in melanoma patients, dermatoscopy also increases melanoma diagnostic sensitivity from 60% by naked-eye exam to 90% in experienced hands [70]. Randomized trials have shown up to a 42% reduction in biopsy referral with dermatoscopy compared to control groups [71]. When clinicians are adequately trained in its use, the application of dermatoscopy as a diagnostic tool reduces patient harm and distress and helps eliminate the extraneous cost associated with benign lesion excision. When following patients with metastatic melanoma of unknown origin, dermatoscopy may identify key features, including linear-irregular vasculature, scar-like depigmentation, remnants of pigmentation, and pink coloration of the background, assisting the clinician in identifying regressing primary lesions [72]. Furthermore, winding and polymorphic atypical vessels, pigmentary halos, and peripheral grey spots are highly suggestive of cutaneous melanoma metastasis and warrant prompt work-up when examining a patient with previous melanoma [73]. Visualization of these features using dermatoscopy may allow the clinician to more accurately narrow the field of possible lesions responsible for a confirmed metastasis with unknown primary lesion, although in most cases no primary melanoma can be identified [74–76]. Patients with a prior diagnosis of melanoma are at higher risk for subsequent melanoma, suggesting the need for a lower threshold to proceed to biopsy of suspicious melanocytic nevi. However, even in high risk patients, such as those with atypical moles or a history of melanoma, lesions that have evolved between successive dermatoscopic examinations are most likely to be dysplastic nevi [77].
In one study, 196 high risk patients with melanocytic nevi were followed for an average of 25 months with dermatoscopy, resulting in a ratio of thirty-three lesions excised to two melanomas identified [78]. In another study, 297 high-risk patients were followed for a median period of 22 months, and there was a ratio of 64 dysplastic nevi to one melanoma biopsied due to change on repeat dermatoscopy [77]. Additional biopsies revealed 4 melanomas that arose in skin not previously photographed. The fact that many melanomas arise in previously normal skin limits the sensitivity of dermatoscopic monitoring in high risk populations.

3.2. Reflectance Confocal Microscopy

Reflectance confocal microscopy (RCM) allows for non-invasive evaluation of the tissue underlying dermatoscopic structures with cellular-level resolution [70]. Tissue can be viewed in thin horizontal sections from the stratum corneum through the epidermal layers into the superficial dermis. This technique works optimally for light-colored lesions (65% improved specificity), while dermatoscopy seems to be ideal for evaluation of darker lesions [79]. RCM complements dermatoscopy and allows evaluation of "grey zone" lesions and light-colored or amelanotic lesions not easily evaluated by dermatoscopy (specificity of 39% vs. 84%, respectively) [70]. Overall, RCM has superior specificity (68% vs. 32%) and similar sensitivity (91% vs. 95%) to dermatoscopy in diagnosing melanoma [79]. RCM is especially effective in confirming all non-biopsied control lesions as benign, thus preventing unnecessary biopsies and subsequent patient anxiety [79]. RCM is also a helpful method for monitoring the response of in situ residual disease to topical therapies [80]. A limitation of RCM is the significant false-positive rate of identifying Spitz nevi as melanoma. In one study, 56% of Spitz nevi were misclassified as melanoma using RCM [81].
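The stand-alone sensitivity and specificity figures above behave predictably when the two examinations are combined in parallel (a lesion is flagged if either test is positive). A minimal sketch, under the simplifying assumption that the two tests err independently (which real tests do not, so this is illustrative only):

```python
# Illustrative check: parallel combination of two diagnostic tests (flag a
# lesion if EITHER test is positive) raises sensitivity and lowers
# specificity. Inputs are the paper's stand-alone figures for dermatoscopy
# (sens 0.95, spec 0.32) and RCM (sens 0.91, spec 0.68); the independence
# assumption is a simplification, not the study's actual method.

def parallel_combination(sens1, spec1, sens2, spec2):
    sens = 1 - (1 - sens1) * (1 - sens2)  # missed only if both tests miss
    spec = spec1 * spec2                  # negative only if both are negative
    return sens, spec

sens, spec = parallel_combination(0.95, 0.32, 0.91, 0.68)
print(f"combined sensitivity ~{sens:.3f}, specificity ~{spec:.3f}")
```

This yields roughly 99.5% sensitivity and 22% specificity, close to but not identical with the combined figures the study reports (98% and 23% [79]), since the two tests' errors are correlated in practice.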
Since there is also uncertainty in the histological differentiation of Spitz nevi from melanoma, a cautious approach should be taken for young patients with multiple Spitz nevi [82]. A false-negative rate of 12% with dermatoscopy compared with a false-negative rate of 9% with RCM shows that the solitary use of any one technique is not ideal. However, the high cost of obtaining the required equipment and the amount of time needed to image each lesion limit the availability and use of this modality [83]. Overall, with the two modalities combined, sensitivity increases to 98% while specificity decreases to 23% [79]. For following up high-risk melanoma patients it is advantageous to use both dermatoscopy and RCM, when available, optimizing the chances of identifying a recurrence or second primary lesion.

3.3. Full Body Mole Mapping

Full body mole mapping is a cost-effective and necessary part of routine clinic follow-up for melanoma patients. It is useful for post-excisional monitoring, following suspicious melanocytic lesions, and assessing the treatment response of various dermatologic conditions such as eczema and psoriasis. Since new or changing pigmented lesions and a rapid rate of growth are sensitive predictors for melanoma, full body mole mapping can be a powerful tool in the clinical follow-up of melanoma patients [84]. With improving resolution and clarity of close-up images, as well as the ability to archive increasing amounts of data within medical record systems, serial imaging of pigmented lesions is likely to become a component of standard clinic follow-up care. The technology is already broadly used, with larger whole-body photos subdivided and linked to detailed close-up images. Some form of whole body imaging is utilized by 63% of American dermatology residency programs in the management of patients at risk for developing melanoma [85].
Sequential imaging increases the likelihood that thin, curable melanoma lesions are identified early in their course, thus reducing morbidity and mortality [86]. In a 2009 study comparing a self- or physician-referred patient cohort with a serial imaging cohort using an automated, low-cost full body scan device, the Breslow depth of melanomas in the non-imaged groups was statistically greater than the Breslow depth of melanomas from the serial scanning cohort [86]. Another report shows an improvement in diagnostic accuracy of 13.5% and a significant increase in sensitivity using a similar, though non-automated, method [87]. Whole body imaging may be especially useful for tumors with intermediate to high risk of recurrence (lesions > 1 mm depth), serving as a baseline reference for ongoing evaluation [88]. Several studies have shown that approximately one third of prospectively diagnosed melanomas were recognized as a result of change from baseline photographs [89]. Baseline whole body digital photography has also been shown to improve patient self-skin monitoring in patients at high risk for melanoma, increasing self skin examination by over 51% when patients were given books or storage disks with copies of baseline images for comparison [89–91]. Whole body imaging has shown that new or changing nevi in patients over the age of 50 are more likely to be melanomas: 30% compared to <1% of new or changing lesions in younger patients [92]. This is consistent with the natural history of benign nevi, which rarely form and often regress after age 50. In one study, the mean number of biopsies performed on patients tracked with total body digital photography vs. those who were not was similar, though traditional risk factors still had the strongest influence on the decision to biopsy [93]. Full body mole mapping is an area of opportunity for improved standardization of the management of suspicious melanocytic lesions.
One study proposed a set of fifteen poses, chosen for patient comfort and technical feasibility, and the authors suggested that a standard template and quality standards for the images would be advantageous for image comparison in the dermatologic community, as would the development of archival, commercially available software [94]. Serial digital imaging allows for the precise mapping of cutaneous lesions in melanoma patients over time, giving dermatologists another valuable tool in screening for recurrence in melanoma patients.

Table 1. Summary of modalities for routine follow-up of melanoma patients.

Imaging (chest X-ray, abdominal sonogram, CT, PET):
  - Low sensitivity, high false positive rates unless Stage III or higher [97]
  - 50% of follow-up costs [128]
Labs (liver chemistries, LDH):
  - Useful in Stage IV disease [95]
Lymph node sonography:
  - Higher sensitivity than palpation [118,119]
  - May confer survival benefit [120]
  - Reveals 13% of relapses [97]
  - Comprises 24% of follow-up costs [128]
Routine history and full body exam:
  - Detects >50% of recurrences
  - Non-invasive
  - Associated with thinner 2nd primary melanomas [129]
Physical exam and lymph node ultrasound combined:
  - Detects the majority of patients with macroscopic lymph node metastasis
  - Only 10% of histologically tumor-positive sentinel nodes are macroscopically detectable [130]
Patient self-monitoring:
  - Detects up to three-quarters of reported recurrences [131–134]
  - With proper education and/or images, could decrease frequency of follow-up visits [133]
Sequential body-mapping utilizing digital photography:
  - Increases detection of thin lesions [86]
  - Decreased mean Breslow depth at diagnosis [86]
  - Increased diagnostic accuracy of 13.5%; increased sensitivity and specificity depending on dermatologist experience with the modality [87]

3.4.
Follow-up Schedules for Melanoma Patients

Initial tumor parameters of the melanoma lesion determine the frequency of follow-up, with recommendations for follow-up intervals taking into account early detection of a second primary melanoma or of recurrence. In patients with a diagnosis of melanoma, the lifetime risk of developing a second primary melanoma may be as high as 10% [95], and current National Comprehensive Cancer Network (NCCN) guidelines recommend at least annual skin examinations for life for every melanoma patient [96]. More frequent follow-up examinations are required in patients with more advanced disease, as recurrences are detected in 1.5% of patients with stage I, 18.0% with stage II, and 68.6% with stage III disease [97]. For Stage IA–IIA, NCCN guidelines recommend follow-up visits every 3 to 12 months for five years; for Stage IIB–IV, follow-up visits every 3–6 months for the first two years and every 3 to 12 months for the next three years are recommended. A personal history of prior melanoma or dysplastic nevi, the presence of multiple clinically atypical nevi, a family history of melanoma, and high patient anxiety may prompt the more frequent end of the recommended follow-up schedules. The risk of disease recurrence is highest in the first five years after initial diagnosis. In a retrospective study of 373 patients, the median time interval between initial visit and diagnosis of recurrence was 22 months for stage I, 13.2 months for stage II, and 10.6 months for stage III, with a range of 2.3 to 53.8 months for stage III lesions [95].

3.5. Serum Markers

Multiple studies have demonstrated that history and physical exam, with tests directed at symptoms and signs, detect more recurrences than any routine study in patients with Stage I–III melanoma [98–100].
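The stage-dependent NCCN follow-up intervals described in Section 3.4 can be encoded as a simple lookup; a minimal sketch following the stage groupings and intervals given in the text (an illustration, not clinical software):

```python
# Sketch of the NCCN follow-up intervals described in Section 3.4, encoded
# as a lookup. Stage groupings and intervals follow the text; this is an
# illustration only, not a clinical decision tool.

def follow_up_interval_months(stage_group, years_since_diagnosis):
    """Return the (min, max) recommended months between visits."""
    if stage_group == "IA-IIA":
        # Every 3-12 months for five years, then at least annual skin exam.
        return (3, 12) if years_since_diagnosis < 5 else (12, 12)
    if stage_group == "IIB-IV":
        if years_since_diagnosis < 2:
            return (3, 6)    # every 3-6 months for the first two years
        if years_since_diagnosis < 5:
            return (3, 12)   # every 3-12 months for the next three years
        return (12, 12)      # at least annual skin exam for life
    raise ValueError("unknown stage group")

print(follow_up_interval_months("IIB-IV", 1))  # prints "(3, 6)"
```

Factors such as dysplastic nevi, family history, or patient anxiety would push a clinician toward the shorter end of each returned range, per the text.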
While use of surveillance blood tests, such as complete blood count (CBC) and lactate dehydrogenase (LDH) to monitor for disease recurrence in asymptomatic patients with Stage I–III melanoma is common in clinical practice, the clinical utility of these tests has never been well established. New NCCN guidelines recommend against performing any routine surveillance blood tests for these patients [96]. Evidence for these recommendations comes from several studies. In a retrospective analysis, Weiss et al. analyzed follow-up data on a group of 261 patients with melanomas greater than 1.7 mm in thickness, some of whom had nodal spread. In the 145 patients who developed recurrences, the recurrence was detected by symptoms in 68%, by physical exam in 26% and never solely by CBC or blood chemistry panel [100]. Mooney et al. analyzed data from 1,004 patients with stage I and II melanoma. There were 174 recurrences, with information on the method of first detection available for 154 of these. In this group, recurrence was detected by symptoms in 17%, by physical exam in 72% and never by CBC or liver function tests [99]. In a large prospective analysis of melanoma follow-up strategies of 2,008 patients with Stages I–IV melanoma, history and physical examination detected 47% of recurrences, chest radiographs detected 5.5% of recurrences and an elevated LDH was the first signal of metastasis in only three (0.1%) of these patients [98]. Recently, Leiter et al. examined the cost-effectiveness of various follow-up strategies in a subset of 1,969 patients with Stages I–III melanoma from the above study [97]. Overall, blood tests (complete blood count, LDH and alkaline phosphatase) had an average cost per detection of recurrence of $46,909 (this and all costs reported subsequently are based on Medicare reimbursement rates). For chest radiography, the cost per detection was $5,433 and for physical exam it was $733. 
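The cost-effectiveness comparison above reduces to a simple ratio: the total cost of performing a surveillance test across the cohort divided by the number of recurrences that test was first to detect. A sketch with hypothetical figures, chosen only to illustrate the arithmetic:

```python
# Sketch of the cost-per-detection metric used in the Leiter et al. analysis:
# total surveillance cost divided by recurrences first detected by that test.
# The input figures below are hypothetical, for illustration only.

def cost_per_detection(total_cost_usd, recurrences_detected):
    """Cost of one detected recurrence; infinite if the test never detects."""
    if recurrences_detected == 0:
        return float("inf")
    return total_cost_usd / recurrences_detected

print(f"${cost_per_detection(93_818, 2):,.0f} per recurrence detected")
```

The metric makes the ranking in the study intuitive: a cheap test that frequently makes the first detection (physical exam) dominates an expensive test that rarely does (routine blood panels).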
Due to the cost ineffectiveness of the blood tests and chest radiographs, these authors also recommended a follow-up strategy that does not include routine blood tests and that includes chest radiographs only for patients with stage III melanoma. While routine blood testing is not cost effective for Stage I–III melanoma, newer serum markers for melanoma have been proposed, including melanoma inhibitory activity (MIA), tyrosinase, 5-S-cysteinyldopa, and serum protein S-100B [101]. Of these, MIA and S-100B have been the best studied [102–104]. In clinically disease-free Stage II and III patients, S-100B and MIA were found to have a sensitivity of 29% and 22%, and a diagnostic accuracy of 84% and 86%, respectively, for detecting new metastasis [105]. Some evidence indicates these markers may be more useful in high risk patients [106,107]. Since S-100B is best at detecting distant metastasis, elevated levels may warrant further investigation with a modality such as FDG-PET/CT, discussed below. Further study is needed to assess the utility of S-100B and MIA as well as the frequency with which these tests should be performed in clinically disease-free melanoma patients.

3.6. Imaging

Melanoma commonly metastasizes to the regional lymph nodes, lung, and brain, but any organ can be affected [109]. In the follow-up care of melanoma patients, imaging such as ultrasound, CT, PET, and PET/CT should mostly be used to investigate suspicious findings on a history and physical. A low index of suspicion should guide the decision to order symptom-directed imaging. NCCN guidelines recommend no surveillance imaging for Stage IA–IIA patients, and considering periodic surveillance chest radiograph, CT and/or PET/CT, and annual surveillance brain MRI in Stage IIB–IV patients with no clinical evidence of disease [96].
With all imaging modalities, the risk of increased patient anxiety and undergoing unnecessary, costly and potentially harmful biopsy procedures must be weighed against the possibility of detecting recurrent disease before it is clinically evident and when surgical resection or chemotherapeutic treatment is more likely to be effective. Multiple studies have shown that chest radiographs detect few, if any asymptomatic recurrences in Stage I–IIA patients [98–100,109,110]. The flexibility of the recommendations for Stage IIB–IV reflects a lack of quality evidence delineating the best surveillance tests in Stage IIB–IV patients. Chest radiographs are inexpensive and involve little radiation exposure. However, in a study of 994 melanoma patients, early detection of disease recurrence by chest radiograph was not associated with increased survival [111]. While chest CT is more sensitive, this benefit must be weighed against the false-positive rate and radiation exposure, which causes an increased lifetime risk of fatal cancer of 1 in 2,000 [108]. In a study involving 347 asymptomatic stage III melanoma patients, 4.2% of CT scans were true positive and 8.4% were false positive [112]. Though expensive, PET imaging has been shown to be superior to CT for metastatic detection [113] and PET/CT has been shown to be superior to PET or CT alone for melanoma staging [114]. The metabolically active brain produces high background activity on PET imaging, and MRI is the preferred imaging modality for the brain [108]. Although not a component of NCCN or American Academy of Dermatology guidelines, some European guidelines recommend routine lymph node sonography as an adjunct to physical exam in assessing the regional lymph node basin, to which melanoma often spreads first [96,115–117]. In a meta-analysis of 12 studies and 6,642 patients, Bafounta et al. 
showed that lymph node sonography had significantly higher discriminatory power (OR 1,755) than did palpation (OR 21; p = 0.0001) [118]. Machet et al. followed a cohort of 373 patients with Stage I and II melanoma and reported palpation and ultrasound to have a sensitivity of 71% and 93%, and a specificity of 99.6% and 97.8%, respectively [119]. The authors noted that earlier detection of metastases occurred in only 7.2% of patients, while unnecessary anxiety, unnecessary biopsy or false reassurance occurred in 5.9% of patients. In their large prospective trial of 1,969 patients with Stage I–III melanoma, Leiter et al. found that lymph node sonography performed annually in Stage I, semiannually in Stage II and four times yearly in Stage III melanoma patients detected 13% of recurrences, more than all other methods except physical exam [97]. Most recurrences detected were in patients with Stage II or III disease, a group for whom the cost per recurrence was $9,361. While there was an apparent survival benefit to early detection of metastasis by lymph node sonography, the design of this study could not exclude lead-time or length-time bias, which may have accounted for this result [120]. Further studies should assess the impact of routine lymph node sonography on survival via randomized trials. Recent studies have investigated the use of PET/CT to detect recurrences in high-risk melanoma patients with elevated S-100B levels [105,107]. In a retrospective analysis of 47 patients with elevated S-100B, Strobel et al. found that PET/CT had a sensitivity for detecting metastasis of 97%, a specificity of 100% and an accuracy of 98% [106]. More recently, in a retrospective analysis of 46 melanoma patients with elevated S-100B levels, Aukema et al. found that PET/CT had a sensitivity of 100%, specificity of 83% and accuracy of 91% [107].
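Sensitivity and specificity alone do not tell a clinician how likely a positive scan is to reflect true metastasis; that depends on the prevalence of disease in the tested cohort, via Bayes' rule. A small sketch of the positive-predictive-value calculation, using the sensitivity and specificity reported by Aukema et al. together with an assumed 60% metastasis prevalence (a hypothetical figure for a selected S-100B-elevated cohort, not a value from the cited studies):

```python
def positive_predictive_value(sensitivity, specificity, prevalence):
    """P(disease | positive test), computed via Bayes' rule."""
    true_pos = sensitivity * prevalence            # diseased and test-positive
    false_pos = (1 - specificity) * (1 - prevalence)  # healthy but test-positive
    return true_pos / (true_pos + false_pos)

# Aukema et al. reported sensitivity 100% and specificity 83%;
# the 60% prevalence below is an illustrative assumption only.
ppv = positive_predictive_value(1.00, 0.83, 0.60)
print(f"PPV = {ppv:.2%}")
```

In a lower-prevalence population the same test would yield many more false positives per true positive, which is one reason such directed imaging is reserved for high-risk patients.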
Each of these studies involved a small number of patients, and they used different cutoff values for S-100B: Strobel et al. used 0.20 µg/L whereas Aukema et al. used 0.10 µg/L. These results warrant further investigation via multicenter trials to determine the utility of directed PET/CT in high-risk melanoma patients with elevated S-100B.

4. Psychosocial Issues and the Melanoma Patient

Although many melanoma patients are diagnosed early and subsequently cured, numerous psychosocial issues may affect a patient who has been newly diagnosed or is being treated for melanoma. In a 2008 report, the Institute of Medicine (IOM) made a sobering observation when it noted that “some of the most basic psychological and social issues affecting cancer patients aren’t being adequately addressed” by physicians [121]. Unfortunately, patients often report that their health care providers fail to acknowledge their struggles, most commonly difficulties with depression and finances. Providers have a responsibility to foster effective communication with their patients, allowing them to evaluate both the physiological and psychological needs of the patient. Risk factors for poor psychosocial outcomes in the context of a cancer diagnosis include female gender [122], young age [123], and being unmarried [124]. Other studies have found that cancer patients (including those with melanoma) who have strong social support have enhanced quality of life and better disease outcomes [125]. Providers can help melanoma patients and survivors by offering psychosocial and psycho-educational interventions [126]. It is important for physicians to address both the short-term and long-term psychological needs of their patients in order to minimize both physiological and psychological morbidity [127].

5. Conclusion

Close clinical follow-up is the foundation of care for melanoma patients.
Secondary prevention of melanoma through sun protection counseling, patient education, and perhaps the use of chemopreventive agents is an important aspect of follow-up for the melanoma patient. A thorough history and physical exam detects the most recurrences at the lowest cost, and diagnostic accuracy is bolstered when dermatoscopy, RCM and full-body mole mapping are employed. While surveillance imaging and blood tests using newer serum markers may be considered for patients with higher-stage melanoma, most blood tests and imaging should be performed only on the basis of abnormalities detected by the history and physical. Finally, clinicians must remember to maintain open communication with patients and offer support in order to effectively address both the patient’s medical and psychosocial struggles.

References

1. Garbe, C.; Leiter, U. Melanoma epidemiology and trends. Clin. Dermatol. 2009, 27, 3–9. 2. Freiman, A.; Yu, J.; Loutif, A.; Wang, B. Impact of melanoma diagnosis on sun-awareness and protection: efficacy of education campaigns in high-risk population. J. Cutan. Med. Surg. 2004, 8, 303–309. 3. Novak, C.B.; Young, D.; Lipa, J.; Neligan, P. Evaluation of sun protection behaviour in patients following excision of skin lesion. Can. J. Plast. Surg. 2007, 15, 38–40. 4. Jackson, A.; Wilkinson, C.; Hood, K.; Pill, R. Does experience predict knowledge and behaviour with respect to cutaneous melanoma, moles, and sun exposure? Possible outcome measures. Behav. Med. 2000, 26, 74. 5. Mujumdar, U.; Hay, J.; Monroe, Y.; Hummer, A.; Begg, C.; Wilcox, H.; Oliveria, S.; Berwick, M. Sun protection and skin self-examination in melanoma survivors. Psychooncology 2009, 18, 1106–1115. 6. Manne, S.; Fasanella, N.; Connors, J.; Floyd, B.; Wang, H.; Lessin, S. Sun protection and skin surveillance practices among relatives of patients with malignant melanoma: prevalence and predictors. Prev. Med. 2004, 39, 36–47. 7. Dellavalle, R.P.; Nicholas, M.K.; Schilling, L.M.
Melanoma chemoprevention: a role for statins or fibrates? Am. J. Ther. 2003, 10, 203–210. 8. Shellman, Y.G.; Ribble, D.; Miller, L.; Gendall, J.; Vanbuskirk, K.; Kelly, D.; Norris, D.A.; Dellavalle, R.P. Lovastatin-induced apoptosis in human melanoma cell lines. Melanoma Res. 2005, 15, 83–89. 9. Grabacka, M.; Placha, W.; Plonka, P.M.; Pajak, S.; Urbanska, K.; Laidler, P.; Slominski, A. Inhibition of melanoma metastases by fenofibrate. Arch. Dermatol. Res. 2004, 296, 54–58. 10. Robins, S.J.; Collins, D.; Wittes, J.T.; Papademetriou, V.; Deedwania, P.C.; Schaefer, E.J.; McNamara, J.R.; Kashyap, M.L.; Hershman, J.M.; Wexler, L.F.; Rubins, H.B.; VA-HIT Study Group. Relation of gemfibrozil treatment and lipid levels with major coronary events: VA-HIT: a randomized controlled trial. JAMA 2001, 285, 1585–1591. 11. Rubins, H.B.; Robins, S.J.; Iwane, M.K.; Boden, W.E.; Elam, M.B.; Fye, C.L.; Gordon, D.J.; Schaefer, E.J.; Schectman, G.; Wittes, J.T. Rationale and design of the Department of Veterans Affairs High-Density Lipoprotein Cholesterol Intervention Trial (HIT) for secondary prevention of coronary artery disease in men with low high-density lipoprotein cholesterol and desirable low-density lipoprotein cholesterol. Am. J. Cardiol. 1993, 71, 45–52. 12. Rubins, H.B.; Robins, S.J.; Collins, D.; Fye, C.L.; Anderson, J.W.; Elam, M.B.; Faas, F.H.; Linares, E.; Schaefer, E.J.; Schectman, G.; Wilt, T.J.; Wittes, J. Gemfibrozil for the secondary prevention of coronary heart disease in men with low levels of high-density lipoprotein cholesterol. Veterans Affairs High-Density Lipoprotein Cholesterol Intervention Trial Study Group. N. Engl. J. Med. 1999, 341, 410–418. 13. Dellavalle, R.P.; Drake, A.; Graber, M.; Heilig, L.F.; Hester, E.J.; Johnson, K.R.; McNealy, K.; Schilling, L. Statins and fibrates for preventing melanoma. Cochrane Database Syst. Rev. 2005, 4, CD003697. 14.
Freeman, S.R.; Drake, A.L.; Heilig, L.F.; Graber, M.; McNealy, K.; Schilling, L.M.; Dellavalle, R.P. Statins, fibrates, and melanoma risk: a systematic review and meta-analysis. J. Natl. Cancer Inst. 2006, 98, 1538–1546. 15. Thibault, A.; Samid, D.; Tompkins, A.C.; Figg, W.D.; Cooper, M.R.; Hohl, R.J.; Trepel, J.; Liang, B.; Patronas, N.; Venzon, D.J.; Reed, E.; Myers, C.E. Phase I study of lovastatin, an inhibitor of the mevalonate pathway, in patients with cancer. Clin. Cancer Res. 1996, 2, 483–491. 16. Simons, M. Molecular multitasking: statins lead to more arteries, less plaque. Nat. Med. 2000, 6, 965–966. 17. Bonovas, S.; Filioussi, K.; Tsavaris, N.; Sitaras, N.M. Statins and cancer risk: a literature-based meta-analysis and meta-regression analysis of 35 randomized controlled trials. J. Clin. Oncol. 2006, 24, 4808–4817. 18. Flach, J.; Antoni, I.; Villemin, P.; Bentzen, C.L.; Niesor, E.J. The mevalonate/isoprenoid pathway inhibitor apomine (SR-45023A) is antiproliferative and induces apoptosis similar to farnesol. Biochem. Biophys. Res. Commun. 2000, 270, 240–246. 19. Roitelman, J.; Masson, D.; Avner, R.; Ammon-Zufferey, C.; Perez, A.; Guyon-Gellin, Y.; Bentzen, C.L.; Niesor, E.J. Apomine, a novel hypocholesterolemic agent, accelerates degradation of 3-hydroxy-3-methylglutaryl-coenzyme A reductase and stimulates low density lipoprotein receptor activity. J. Biol. Chem. 2004, 279, 6465–6473. 20. Lewis, K.D.; Thompson, J.A.; Weber, J.S.; Robinson, W.A.; O'Day, S.; Lutzky, J.; Legha, S.S.; Floret, S.; Ruvuna, F.; Gonzalez, R. A phase II open-label trial of apomine (SR-45023A) in patients with refractory melanoma. Invest. New Drugs 2006, 24, 89–94. 21. Pourpak, A.; Dorr, R.T.; Meyers, R.O.; Powell, M.B.; Stratton, S.P. Cytotoxic activity of Apomine is due to a novel membrane-mediated cytolytic mechanism independent of apoptosis in the A375 human melanoma cell line. Invest. New Drugs 2007, 25, 107–114. 22. Kuehl, P.J.; Stratton, S.P.; Powell, M.B.; Myrdal, P.B.
Preformulation, formulation, and in vivo efficacy of topically applied apomine. Int. J. Pharm. 2009, 382, 104–110. 23. Raju, R.; Cruz-Correa, M. Chemoprevention of colorectal cancer. Dis. Colon Rectum. 2006, 49, 113–124. 24. Pelzmann, M.; Thurnher, D.; Gedlicka, C.; Martinek, H.; Knerer, B. Nimesulide and indomethacin induce apoptosis in head and neck cancer cells. J. Oral. Pathol. Med. 2004, 33, 607–613. 25. Chan, A.T.; Giovannucci, E.L.; Schernhammer, E.S.; Colditz, G.A.; Hunter, D.J.; Willett, W.C.; Fuchs, C.S. A prospective study of aspirin use and the risk for colorectal adenoma. Ann. Intern. Med. 2004, 140, 157–166. 26. Shen, J.; Wanibuchi, H.; Salim, E.I.; Wei, M.; Yamachika, T.; Fukushima, S. Inhibition of azoxymethane-induced colon carcinogenesis in rats due to JTE-522, a selective cyclooxygenase-2 inhibitor. Asian Pac. J. Cancer Prev. 2004, 5, 253–258. 27. Mazhar, D.; Ang, R.; Waxman, J. COX inhibitors and breast cancer. Br. J. Cancer 2006, 94, 346–350. 28. Clouser, M.C.; Roe, D.J.; Foote, J.A.; Harris, R.B. Effect of non-steroidal anti-inflammatory drugs on non-melanoma skin cancer incidence in the SKICAP-AK trial. Pharmacoepidemiol. Drug Saf. 2009, 18, 276–283. 29. Jacobs, E.J.; Thun, M.J.; Bain, E.B.; Rodriguez, C.; Henley, S.J.; Calle, E.E. A large cohort study of long-term daily use of adult-strength aspirin and cancer incidence. J. Natl. Cancer Inst. 2007, 99, 608–615. 30. Harris, R.E.; Beebe-Donk, J.; Namboodiri, K.K. Inverse association of non-steroidal antiinflammatory drugs and malignant melanoma among women. Oncol. Rep. 2001, 8, 655–657. 31. Ramirez, C.C.; Ma, F.; Federman, D.G.; Kirsner, R.S. Use of cyclooxygenase inhibitors and risk of melanoma in high-risk patients. Dermatol. Surg. 2005, 31, 748–752. 32. Ming, M.E.; Shin, D.B.; Brauer, J.A.; Troxel, A.B. Statins, non-steroidal anti-inflammatory drugs (NSAIDs) and calcium channel blockers are prescribed less frequently for patients who later develop melanoma.
Oral presentation at the Society for Investigative Dermatology Annual Meeting in Philadelphia, PA, USA, May 3, 2006. J. Invest. Dermatol. 2006, 126 (1), 49. 33. Curiel, C.; Gomez, M.L.; Atkins, M.B.; Nijsten, T.; Stern, R.S. Association between use of non-steroidal anti-inflammatory drugs (NSAIDs) and statins and the risk of cutaneous melanoma (CM): A case-control study. J. Clin. Oncol. 2007, 25, 8500. 34. Goulet, A.C.; Einspahr, J.G.; Alberts, D.S.; Beas, A.; Burk, C.; Bhattacharyya, A.; Bangert, J.; Harmon, J.M.; Fujiwara, H.; Koki, A.; Nelson, M.A. Analysis of cyclooxygenase 2 (COX-2) expression during malignant melanoma progression. Cancer Biol. Ther. 2003, 2, 713–718. 35. Herrling, T.; Jung, K.; Fuchs, J. Measurements of UV-generated free radicals/reactive oxygen species (ROS) in skin. Spectrochim. Acta A Mol. Biomol. Spectrosc. 2006, 63, 840–845. 36. Farmer, P.J.; Gidanian, S.; Shahandeh, B.; Di Bilio, A.J.; Tohidian, N.; Meyskens, F.L., Jr. Melanin as a target for melanoma chemotherapy: pro-oxidant effect of oxygen and metals on melanoma viability. Pigment Cell Res. 2003, 16, 273–279. 37. Bruner, S.D.; Norman, D.P.; Verdine, G.L. Structural basis for recognition and repair of the endogenous mutagen 8-oxoguanine in DNA. Nature 2000, 403, 859–866. 38. Meyskens, F.L., Jr.; Farmer, P.; Fruehauf, J.P. Redox regulation in human melanocytes and melanoma. Pigment Cell Res. 2001, 14, 148–154. 39. Maxwell, S.R. Prospects for the use of antioxidant therapies. Drugs 1995, 49, 345–361. 40. Kang, S.; Chung, J.H.; Lee, J.H.; Fisher, G.J.; Wan, Y.S.; Duell, E.A.; Voorhees, J.J. Topical N-acetyl cysteine and genistein prevent ultraviolet-light-induced signaling that leads to photoaging in human skin in vivo. J. Invest. Dermatol. 2003, 120, 835–841. 41. Goodson, A.G.; Cotter, M.A.; Cassidy, P.; Wade, M.; Florell, S.R.; Liu, T.; Boucher, K.M.; Grossman, D.
Use of oral N-acetylcysteine for protection of melanocytic nevi against UV-induced oxidative stress: towards a novel paradigm for melanoma chemoprevention. Clin. Cancer Res. 2009, 15, 7434–7440. 42. Barthelman, M.; Chen, W.; Gensler, H.L.; Huang, C.; Dong, Z.; Bowden, G.T. Inhibitory effects of perillyl alcohol on UVB-induced murine skin cancer and AP-1 transactivation. Cancer Res. 1998, 58, 711–716. 43. Jensen, J.D.; Wing, G.J.; Dellavalle, R.P. The Role of Diet and Nutrition in Melanoma Prevention. Clin. Dermatol. 2010, in press. 44. Lin, S.Y.; Lai, W.W.; Chou, C.C.; Kuo, H.M.; Li, T.M.; Chung, J.G.; Yang, J.H. Sodium ascorbate inhibits growth via the induction of cell cycle arrest and apoptosis in human malignant melanoma A375.S2 cells. Melanoma Res. 2006, 16, 509–519. 45. Randerson-Moor, J.A.; Taylor, J.C.; Elliott, F.; Chang, Y.M.; Beswick, S.; Kukalizch, K.; Affleck, P.; Leake, S.; Haynes, S.; Karpavicius, B.; Marsden, J.; Gerry, E.; Bale, L.; Bertram, C.; Field, H.; Barth, J.H.; Silva, I.D.; Swerdlow, A.; Kanetsky, P.A.; Barrett, J.H.; Bishop, D.T.; Bishop, J.A. Vitamin D receptor gene polymorphisms, serum 25-hydroxyvitamin D levels, and melanoma: UK case-control comparisons and a meta-analysis of published VDR data. Eur. J. Cancer 2009, 45, 3271–3281. 46. Newton-Bishop, J.A.; Beswick, S.; Randerson-Moor, J.; Chang, Y.M.; Affleck, P.; Elliott, F.; Chan, M.; Leake, S.; Karpavicius, B.; Haynes, S.; Kukalizch, K.; Whitaker, L.; Jackson, S.; Gerry, E.; Nolan, C.; Bertram, C.; Marsden, J.; Elder, D.E.; Barrett, J.H.; Bishop, D.T. Serum 25-hydroxyvitamin D3 levels are associated with Breslow thickness at presentation and survival from melanoma. J. Clin. Oncol. 2009, 27, 5439–5444. 47. Weinstock, M.A.; Stampfer, M.J.; Lew, R.A.; Willett, W.C.; Sober, A.J. Case-control study of melanoma and dietary vitamin D: implications for advocacy of sun protection and sunscreen use. J. Invest. Dermatol. 1992, 98, 809–811. 48.
Asgari, M.M.; Maruti, S.S.; Kushi, L.H.; White, E. A cohort study of vitamin D intake and melanoma risk. J. Invest. Dermatol. 2009, 129, 1675–1680. 49. Malafa, M.P.; Fokum, F.D.; Mowlavi, A.; Abusief, M.; King, M. Vitamin E inhibits melanoma growth in mice. Surgery 2002, 131, 85–91. 50. Malafa, M.P.; Fokum, F.D.; Smith, L.; Louis, A. Inhibition of angiogenesis and promotion of melanoma dormancy by vitamin E succinate. Ann. Surg. Oncol. 2002, 9, 1023–1032. 51. Kogure, K.; Manabe, S.; Hama, S.; Tokumura, A.; Fukuzawa, K. Potentiation of anti-cancer effect by intravenous administration of vesiculated alpha-tocopheryl hemisuccinate on mouse melanoma in vivo. Cancer Lett. 2003, 192, 19–24. 52. McArdle, F.; Rhodes, L.E.; Parslew, R.A.; Close, G.L.; Jack, C.I.; Friedmann, P.S.; Jackson, M.J. Effects of oral vitamin E and beta-carotene supplementation on ultraviolet radiation-induced oxidative stress in human skin. Am. J. Clin. Nutr. 2004, 80, 1270–1275. 53. Kirkpatrick, C.S.; White, E.; Lee, J.A. Case-control study of malignant melanoma in Washington State, II: diet, alcohol, and obesity. Am. J. Epidemiol. 1994, 139, 869–880. 54. Lonn, E.; Bosch, J.; Yusuf, S.; Sheridan, P.; Pogue, J.; Arnold, J.M.; Ross, C.; Arnold, A.; Sleight, P.; Probstfield, J.; Dagenais, G.R.; HOPE and HOPE-TOO Trial Investigators. Effects of long-term vitamin E supplementation on cardiovascular events and cancer: a randomized controlled trial. JAMA 2005, 293, 1338–1347. 55. Stryker, W.S.; Stampfer, M.J.; Stein, E.A.; Kaplan, L.; Louis, T.A.; Sober, A.; Willett, W.C. Diet, plasma levels of beta-carotene and alpha-tocopherol, and risk of malignant melanoma. Am. J. Epidemiol. 1990, 131, 597–611. 56. Hsu, S. Green tea and the skin. J. Am. Acad. Dermatol. 2005, 52, 1049–1059. 57. Nihal, M.; Ahmad, N.; Mukhtar, H.; Wood, G.S. Anti-proliferative and proapoptotic effects of epigallocatechin-3-gallate on human melanoma: possible implications for the chemoprevention of melanoma. Int. J.
Cancer 2005, 114, 513–521. 58. Katiyar, S.K.; Elmets, C.A. Green tea polyphenolic antioxidants and skin photoprotection. Int. J. Oncol. 2001, 18, 1307–1313. 59. Wang, Z.Y.; Huang, M.T.; Ho, C.T.; Chang, R.; Ma, W.; Ferraro, T.; Reuhl, K.R.; Yang, C.S.; Conney, A.H. Inhibitory effect of green tea on the growth of established skin papillomas in mice. Cancer Res. 1992, 52, 6657–6665. 60. Katiyar, S.K. Skin photoprotection by green tea: antioxidant and immunomodulatory effects. Curr. Drug Targets Immune Endocr. Metab. Disord. 2003, 3, 234–242. 61. Zheng, W.; Doyle, T.J.; Kushi, L.H.; Sellers, T.A.; Hong, C.P.; Folsom, A.R. Tea consumption and cancer incidence in a prospective cohort study of postmenopausal women. Am. J. Epidemiol. 1996, 144, 175–182. 62. Burke, K.E.; Combs, G.F., Jr.; Gross, E.G.; Bhuyan, K.C.; Abu-Libdeh, H. The effects of topical and oral L-selenomethionine on pigmentation and skin cancer induced by ultraviolet irradiation. Nutr. Cancer 1992, 17, 123–137. 63. Overvad, K.; Thorling, E.B.; Bjerring, P.; Ebbesen, P. Selenium inhibits UV-light-induced skin carcinogenesis in hairless mice. Cancer Lett. 1985, 27, 163–170. 64. Vinceti, M.; Rothman, K.J.; Bergomi, M.; Borciani, N.; Serra, L.; Vivoli, G. Excess melanoma incidence in a cohort exposed to high levels of environmental selenium. Cancer Epidemiol. Biomarkers Prev. 1998, 7, 853–856. 65. Asgari, M.M.; Maruti, S.S.; Kushi, L.H.; White, E. Antioxidant supplementation and risk of incident melanomas: results of a large prospective cohort study. Arch. Dermatol. 2009, 145, 879–882. 66. Bill, M.A.; Bakan, C.; Benson, D.M., Jr.; Fuchs, J.; Young, G.; Lesinski, G.B. Curcumin induces proapoptotic effects against human melanoma cells and modulates the cellular response to immunotherapeutic cytokines. Mol. Cancer Ther. 2009, 8, 2726–2735. 67. Comstock, G.W.; Helzlsouer, K.J.; Bush, T.L. Prediagnostic serum levels of carotenoids and vitamin E as related to subsequent cancer in Washington County, Maryland. Am. J.
Clin. Nutr. 1991, 53, 260S–264S. 68. Breslow, R.A.; Alberg, A.J.; Helzlsouer, K.J.; Bush, T.L.; Norkus, E.P.; Morris, J.S.; Spate, V.E.; Comstock, G.W. Serological precursors of cancer: malignant melanoma, basal and squamous cell skin cancer, and prediagnostic levels of retinol, beta-carotene, lycopene, alpha-tocopherol, and selenium. Cancer Epidemiol. Biomarkers Prev. 1995, 4, 837–842. 69. Millen, A.E.; Tucker, M.A.; Hartge, P.; Halpern, A.; Elder, D.E.; Guerry, D., 4th; Holly, E.A.; Sagebiel, R.W.; Potischman, N. Diet and melanoma in a case-control study. Cancer Epidemiol. Biomarkers Prev. 2004, 13, 1042–1051. 70. Nathansohn, N.; Orenstein, A.; Trau, H.; Liran, A.; Schachter, J. Pigmented Lesions Clinic for Early Detection of Melanoma: preliminary results. Isr. Med. Assoc. J. 2007, 9, 708–712. 71. Carli, P.; de Giorgi, V.; Chiarugi, A.; Nardini, P.; Weinstock, M.A.; Crocetti, E.; Stante, M.; Giannotti, B. Addition of dermoscopy to conventional naked-eye examination in melanoma screening: a randomized study. J. Am. Acad. Dermatol. 2004, 50, 683–689. 72. Bories, N.; Dalle, S.; Debarbieux, S.; Balme, B.; Ronger-Savlé, S.; Thomas, L. Dermoscopy of fully regressive cutaneous melanoma. Br. J. Dermatol. 2008, 158, 1224–1229. 73. Bono, R.; Giampetruzzi, A.R.; Concolino, F.; Puddu, P.; Scoppola, A.; Sera, F.; Marchetti, P. Dermoscopic patterns of cutaneous melanoma metastases. Melanoma Res. 2004, 14, 367–373. 74. Anbari, K.K.; Schuchter, L.M.; Bucky, L.P.; Mick, R.; Synnestvedt, M.; Guerry, D.; Hamilton, R.; Halpern, A.C. Melanoma of unknown primary site: presentation, treatment, and prognosis--a single institution study. University of Pennsylvania Pigmented Lesion Study Group. Cancer 1997, 79, 1816–1821. 75. High, W.A.; Stewart, D.; Wilbers, C.R.; Cockerell, C.J.; Hoang, M.P.; Fitzpatrick, J.E.
Completely regressed primary cutaneous malignant melanoma with nodal and/or visceral metastases: a report of 5 cases and assessment of the literature and diagnostic criteria. J. Am. Acad. Dermatol. 2005, 53, 89–100. 76. Savoia, P.; Fava, P.; Osella-Abate, S.; Nardò, T.; Comessatti, A.; Quaglino, P.; Bernengo, M.G. Melanoma of unknown primary site: a 33-year experience at the Turin Melanoma Centre. Melanoma Res. 2010, 20, 227–232. 77. Fuller, S.R.; Bowen, G.M.; Tanner, B.; Florell, S.R.; Grossman, D. Digital dermoscopic monitoring of atypical nevi in patients at risk for melanoma. Dermatol. Surg. 2007, 33, 1198–1206. 78. Bauer, J.; Blum, A.; Strohhäcker, U.; Garbe, C. Surveillance of patients at high risk for cutaneous malignant melanoma using digital dermoscopy. Br. J. Dermatol. 2005, 152, 87–92. 79. Guitera, P.; Pellacani, G.; Longo, C.; Seidenari, S.; Avramidis, M.; Menzies, S.W. In Vivo Reflectance Confocal Microscopy Enhances Secondary Evaluation of Melanocytic Lesions. J. Invest. Dermatol. 2009, 129, 131–138. 80. Ahlgrimm-Siess, V.; Hofmann-Wellenhof, R.; Cao, T.; Oliviero, M.; Scope, A.; Rabinovitz, H.S. Reflectance confocal microscopy in the daily practice. Semin. Cutan. Med. Surg. 2009, 28, 180–189. 81. Pellacani, G.; Cesinaro, A.M.; Seidenari, S. Reflectance-mode confocal microscopy of pigmented skin lesions--improvement in melanoma diagnostic specificity. J. Am. Acad. Dermatol. 2005, 53, 979–985. 82. Gelbard, S.N.; Tripp, J.M.; Marghoob, A.A.; Kopf, A.W.; Koenig, K.L.; Kim, J.Y.; Bart, R.S. Management of Spitz nevi: A survey of dermatologists in the United States. J. Am. Acad. Dermatol. 2002, 47, 224–230. 83. Longo, C.; Bassoli, S.; Farnetani, F.; Pupelli, G.; Seidenari, S.; Pellacani, G. Reflectance Confocal Microscopy for Melanoma and Melanocytic Lesion Assessment. Expert Rev. Dermatol. 2008, 3, 735–745. 84. Rhodes, A.R. Intervention strategy to prevent lethal cutaneous melanoma: use of dermatologic photography to aid surveillance of high-risk persons.
J. Am. Acad. Dermatol. 1998, 39, 262–267. 85. Nehal, K.S.; Oliveria, S.A.; Marghoob, A.A.; Christos, P.J.; Dusza, S.; Tromberg, J.S.; Halpern, A.C. Use of and beliefs about baseline photography in the management of patients with pigmented lesions: a survey of dermatology residency programmes in the United States. Melanoma Res. 2002, 12, 1617. 86. Drugge, R.; Nguyen, C.; Drugge, E.; Gliga, L.; Broderick, P.; McClain, S.; Brown, C. Melanoma screening with serial whole body photographic change detection using Melanoscan® technology. Dermatol. Online J. 2009, 15, 1. 87. Kittler, H.; Binder, M. Risks and benefits of sequential imaging of melanocytic skin lesions in patients with multiple atypical nevi. Arch. Dermatol. 2001, 137, 1590–1595. 88. Garbe, C. Cutaneous melanoma: baseline and ongoing laboratory evaluation. Dermatol. Ther. 2005, 18, 413–421. 89. Nathansohn, N.; Orenstein, A.; Trau, H.; Liran, A.; Schachter, J. Pigmented Lesions Clinic for Early Detection of Melanoma: preliminary results. Isr. Med. Assoc. J. 2007, 9, 708–712. 90. Oliveria, S.A.; Dusza, S.W.; Phelan, D.L.; Ostroff, J.S.; Berwick, M.; Halpern, A.C. Patient adherence to skin self-examination: effect of nurse intervention with photographs. Am. J. Prev. Med. 2004, 26, 152–155. 91. Oliveria, S.A.; Chau, D.; Christos, P.J.; Charles, C.A.; Mushlin, A.I.; Halpern, A.C. Diagnostic accuracy of patients in performing skin self-examination and the impact of photography. Arch. Dermatol. 2004, 140, 57–62. 92. Yeatman, J.M.; Dowling, J.P. Incidence of New and Changed Nevi and Melanomas Detected Using Baseline Images and Dermoscopy in Patients at High Risk for Melanoma. Arch. Dermatol. 2005, 141, 998–1006. 93. Risser, J.; Pressley, Z.; Veledar, E.; Washington, C.; Chen, S.C. The impact of total body photography on biopsy rate in patients from a pigmented lesion clinic. J. Am. Acad. Dermatol. 2007, 57, 428–434. 94. Halpern, A.C.; Marghoob, A.A.; Bialoglow, T.W.; Witmer, W.; Slue, W.
Standardized positioning of patients (poses) for whole body cutaneous photography. J. Am. Acad. Dermatol. 2003, 49, 593–598. 95. Christianson, D.; Anderson, D.M. Close Monitoring and lifetime follow-up is optimal for patients with a history of melanoma. Semin. Oncol. 2003, 30, 369–374. 96. NCCN Melanoma Panel. Melanoma. NCCN Clinical Practice Guidelines in Oncology, Version 2.2010; National Comprehensive Cancer Network: Fort Washington, PA, USA, 2010. 97. Leiter, U.; Marghoob, A.A.; Lasithiotakis, K.; Eigentler, T.K.; Meier, F.; Meisner, C. Costs of the detection of metastases and follow-up examinations in cutaneous melanoma. Melanoma Res. 2009, 19, 50–57. 98. Garbe, C.; Paul, A.; Kohler-Späth, H.; Ellwanger, U.; Stroebel, W.; Schwarz, M.; Schlagenhauff, B.; Meier, F.; Schittek, B.; Blaheta, H.J.; Blum, A.; Rassner, G. Prospective Evaluation of a Follow-Up Schedule in Cutaneous Melanoma Patients: Recommendations for an Effective Follow-Up Strategy. J. Clin. Oncol. 2003, 21, 520–529. 99. Mooney, M.M.; Kulas, M.; McKinley, B.; Michalek, A.M.; Kraybill, W.G. Impact on survival by method of recurrence detection in stage I and II cutaneous melanoma. Ann. Surg. Oncol. 1998, 5, 54–63. 100. Weiss, M.; Loprinzi, C.L.; Creagan, E.T.; Dalton, R.J.; Novotny, P.; O'Fallon, J.R. Utility of follow-up tests for detecting recurrent disease in patients with malignant melanomas. JAMA 1995, 274, 1703–1705. 101. Bánfalvi, T.; Edesné, M.B.; Gergye, M.; Udvarhelyi, N.; Orosz, Z.; Gilde, K.; Kremmer, T.; Ottó, S.; Tímár, J. Laboratory markers of melanoma progression. Magy. Onkol. 2003, 47, 89–104. 102. Djukanovic, D.; Hofmann, U.; Sucker, A.; Rittgen, W.; Schadendorf, D. Comparison of S100 protein and MIA protein as serum marker for malignant melanoma. Anticancer Res. 2000, 20, 2203–2207. 103. Martenson, E.D.; Hansson, L.O.; Nilsson, B.; von Schoultz, E.; Mansson Brahme, E.; Ringborg, U.; Hansson, J.
Serum S-100b protein as a prognostic marker in malignant cutaneous melanoma. J. Clin. Oncol. 2001, 19, 824–831. 104. Hauschild, A.; Engel, G.; Brenner, W.; Glaser, R.; Monig, H.; Henze, E.; Christophers, E. S100B protein detection in serum is a significant prognostic factor in metastatic melanoma. Oncology 1999, 56, 338–344. 105. Garbe, C.; Leiter, U.; Ellwanger, U.; Blaheta, H.J.; Meier, F.; Rassner, G.; Schittek, B. Diagnostic value and prognostic significance of protein S-100beta, melanoma-inhibitory activity, and tyrosinase/MART-1 reverse transcription-polymerase chain reaction in the follow-up of high-risk melanoma patients. Cancer 2003, 97, 1737–1745. 106. Strobel, K.; Skalsky, J.; Kalff, V.; Baumann, K.; Seifert, B.; Joller-Jemelka, H.; Dummer, R.; Steinert, H.C. Tumour assessment in advanced melanoma: value of FDG-PET/CT in patients with elevated serum S-100B. Eur. J. Nucl. Med. Mol. Imaging 2007, 34, 1366–1375. 107. Aukema, T.S.; Olmos, R.A.V.; Korse, C.M.; Kroon, B.B.R.; Wouters, M.W.J.M.; Vogel, W.V.; Bonfrer, J.M.G.; Nieweg, O.E. Utility of FDG PET/CT and Brain MRI in Melanoma Patients with Increased Serum S-100B Level During Follow-up. Ann. Surg. Oncol. 2010, 17, 1657–1661. 108. Dancey, A.L.; Mahon, B.S.; Rayatt, S.S. A review of diagnostic imaging in melanoma. J. Plast. Reconstr. Aesthet. Surg. 2008, 61, 1275–1283. 109. Hengge, U.R.; Wallerand, A.; Stutzki, A.; Kockel, N. Cost-effectiveness of reduced follow-up in malignant melanoma. J. Dtsch. Dermatol. Ges. 2007, 5, 898–907. 110. Meyers, M.O.; Yeh, J.J.; Frank, J.; Long, P.; Deal, A.M.; Amos, K.D.; Ollila, D.W. Method of detection of initial recurrence of stage II/III cutaneous melanoma: analysis of the utility of follow-up staging. Ann. Surg. Oncol. 2009, 16, 941–947. 111. Tsao, H.; Feldman, M.; Fullerton, J.E.; Sober, A.J.; Rosenthal, D.; Goggins, W. Early Detection of Asymptomatic Pulmonary Melanoma Metastases by Routine Chest Radiographs Is Not Associated With Improved Survival. Arch. Dermatol.
2004, 140, 67–70. 112. Kuvshinoff, B.W.; Kurtz, C.; Coit, D.G. Computed tomography in evaluation of patients with stage III melanoma. Ann. Surg. Oncol. 1997, 4, 252–258. 113. Swetter, S.M.; Carroll, L.A.; Johnson, D.L.; Segall, G.M. Positron emission tomography is superior to computed tomography for metastatic detection in melanoma patients. Ann. Surg. Oncol. 2002, 9, 646–653. 114. Reinhardt, M.J.; Joe, A.Y.; Jaeger, U.; Huber, A.; Matthies, A.; Bucerius, J.; Roedel, R.; Strunk, H.; Bieber, T.; Biersack, H.J.; Tüting, T. Diagnostic performance of whole body dual modality 18F-FDG PET/CT imaging for N- and M-staging of malignant melanoma: experience with 250 consecutive patients. J. Clin. Oncol. 2006, 24, 1178–1187. 115. Sober, A.J.; Chuang, T.Y.; Duvic, M.; Farmer, E.R.; Grichnik, J.M.; Halpern, A.C.; Ho, V.; Holloway, V.; Hood, A.F.; Johnson, T.M.; Lowery, B.J. Guidelines/Outcomes Committee. Guidelines of care for primary cutaneous melanoma. J. Am. Acad. Dermatol. 2001, 45, 579–586. 116. Garbe, C.; Hauschild, A.; Volkenandt, M.; Schadendorf, D.; Stolz, W.; Reinhold, U.; Kortmann, R.D.; Kettelhack, C.; Frerich, B.; Keilholz, U.; Dummer, R.; Sebastian, G.; Tilgen, W.; Schuler, G.; Mackensen, A.; Kaufmann, R. Evidence and interdisciplinary consensus-based German guidelines: diagnosis and surveillance of melanoma. Melanoma Res. 2007, 17, 393–399. 117. Négrier, S.; Saiag, P.; Guillot, B.; Verola, O.; Avril, M.F.; Bailly, C.; Cupissol, D.; Dalac, S.; Danino, A.; Dreno, B.; Grob, J.J.; Leccia, M.T.; Renaud-Vilmer, C.; Bosquet, L. National Federation of Cancer Campaign Centers, French Dermatology Society. Guidelines for clinical practice: Standards, Options and Recommendations 2005 for the management of adult patients exhibiting an M0 cutaneous melanoma, full report. National Federation of Cancer Campaign Centers. French Dermatology Society. Update of the 1995 Consensus Conference and the 1998 Standards, Options, and Recommendations. Ann. Dermatol.
Venereol. 2005, 132, 10S3–10S85. 118. Bafounta, M.; Beauchet, A.; Chagnon, S.; Saiag, P. Ultrasonography or palpation for detection of melanoma nodal invasion: a meta-analysis. Lancet Oncol. 2004, 5, 673–680. 119. Machet, L.; Nemeth-Normand, F.; Giraudeau, B.; Perrinaud, A.; Tiguemounine, J.; Ayoub, J.; Alison, D.; Vaillant, L.; Lorette, G. Is ultrasound lymph node examination superior to clinical examination in melanoma follow-up? A monocentre cohort study of 373 patients. Br. J. Dermatol. 2005, 152, 66–70. 120. Autier, P.; Coebergh, J.W.; Boniol, M.; Dore, J.F.; de Vries, E.; Eggermont, A.M. Management of melanoma patients: benefit of intense follow-up schedule is not demonstrated. J. Clin. Oncol. 2003, 21, 3707. 121. American Cancer Society News and Views. Institute of Medicine’s 10-Point Plan for More Comprehensive Cancer Care. CA. Cancer J. Clin. 2008, 58, 67–68. 122. Missiha, S.B.; Solish, N.; From, L. Characterizing anxiety in melanoma patients. J. Cutan. Med. Surg. 2003, 7, 443–448. 123. Bergenmar, M.; Nilsson, B.; Hansson, J.; Brandberg, Y. Anxiety and depressive symptoms measured by the Hospital Anxiety and Depression Scale as predictors of time to recurrence in localized cutaneous melanoma. Acta Oncol. 2004, 43, 161–168. 124. Hamama-Raz, Y.; Solomon, Z.; Schachter, J.; Azizi, E. Objective and subjective stressors and the psychological adjustment of melanoma survivors. Psychooncology 2007, 16, 287–294. 125. Kasparian, N.A.; McLoone, J.K.; Butow, P.N. Psychological responses and coping strategies among patients with malignant melanoma: a systematic review of the literature. Arch. Dermatol. 2009, 145, 1415–1427. 126. Australian Cancer Network. Clinical Practice Guidelines for the Management of Melanoma in Australia and New Zealand; National Health and Medical Research Council: Canberra, Australia, 2008. 127. Beutel, M.E.; Blettner, M.; Fischbeck, S.; Loquay, C.; Werner, A.; Marian, H. Psycho-oncological aspects of malignant melanoma.
A systematic review from 1990–2008. Hautarzt 2009, 60, 727–733. 128. Hofmann, U.; Szedlak, M.; Rittgen, W.; Jung, E.G.; Schadendorf, D. Primary staging and follow- up in melanoma patients--monocenter evaluation of methods, costs and patient survival. Br. J. Cancer 2002, 87, 151–157. 129. Aitken, J.F.; Elwood, M.; Baade, P.D.; Youl, P.; English, D. Clinical whole-body skin examination reduces the incidence of thick melanomas. Int. J. Cancer 2010, 126, 450–458. 130. Hafner, J.; Schmid, M.H.; Kempf, W.; Burg, G.; Künzi, W.; Meuli-Simmen, C.; Neff, P.; Meyer, V.; Mihic, D.; Garzoli, E.; Jungius, K.P.; Seifert, B.; Dummer, R.; Steinert, H. Baseline staging in cutaneous malignant melanoma. Br. J. Dermatol. 2004, 150, 677–686. 131. Poo-Hwu, W.J.; Ariyan, S.; Lamb, L.; Papac, R.; Zelterman, D.; Hu, G.L.; Brown, J.; Fischer, D.; Bolognia, J.; Buzaid, A.C. Follow-up recommendations for patients with American Joint Committee on Cancer Stages I-III malignant melanoma. Cancer 1999, 86, 2252–2258. 132. Meyers, M.O.; Yeh, J.J.; Frank, J.; Long, P.; Deal, A.M.; Amos, K.D.; Ollila, D.W. Method of detection of initial recurrence of stage II/III cutaneous melanoma: analysis of the utility of follow- up staging. Ann. Surg. Oncol. 2009, 16, 941–947. 133. Francken, A.B.; Shaw, H.M.; Accortt, N.A.; Soong, S.J.; Hoekstra, H.J.; Thompson, J.F. Detection of first relapse in cutaneous melanoma patients: implications for the formulation of evidence-based follow-up guidelines. Ann. Surg. Oncol. 2007, 14, 1924–1933. 134. Kantor, J.; Kantor, D.E. Most melanomas detected by dermatologists are from full body skin examination, not patient complaint. Arch. Dermatol. 2009, 145, 873–876. © 2010 by the authors; licensee MDPI, Basel, Switzerland. This article is an Open Access article distributed under the terms and conditions of the Creative Commons Attribution license (http://creativecommons.org/licenses/by/3.0/). 
work_uk6twpns3re3dgpt32fhpwwpoe ----

Imager Evaluation of Diabetic Retinopathy at the Time of Imaging in a Telemedicine Program

Citation: Cavallerano, Jerry D., Paolo S. Silva, Ann M. Tolson, Taniya Francis, Dorothy Tolls, Bina Patel, Sharon Eagan, Lloyd M. Aiello, and Lloyd P. Aiello. 2012. Imager evaluation of diabetic retinopathy at the time of imaging in a telemedicine program. Diabetes Care 35(3): 482-484.
Published version: doi:10.2337/dc11-1317
Citable link: http://nrs.harvard.edu/urn-3:HUL.InstRepos:10611819

Imager Evaluation of Diabetic Retinopathy at the Time of Imaging in a Telemedicine Program
JERRY D. CAVALLERANO, OD, PHD1,2 PAOLO S. SILVA, MD1,2 ANN M. TOLSON, BA1 TANIYA FRANCIS, BA1 DOROTHY TOLLS, OD1 BINA PATEL, OD1,3 SHARON EAGAN, OD1 LLOYD M. AIELLO, MD1,2 LLOYD P.
AIELLO, MD, PHD1,2

OBJECTIVE: To evaluate the ability of certified retinal imagers to identify presence versus absence of sight-threatening diabetic retinopathy (stDR) (moderate nonproliferative diabetic retinopathy or worse, or diabetic macular edema) at the time of retinal imaging in a telemedicine program.

RESEARCH DESIGN AND METHODS: Diabetic patients in a primary care setting or specialty diabetes clinic received Joslin Vision Network protocol retinal imaging as part of their care. Trained nonphysician imagers graded the presence versus absence of stDR at the time of imaging. These gradings were compared with masked gradings of certified readers.

RESULTS: Of 158 patients (316 eyes) imaged, all cases of stDR (42 eyes [13%]) were identified by the imagers at the time of imaging. Six eyes with mild nonproliferative diabetic retinopathy were graded by the imagers to have stDR (sensitivity 1.00, 95% CI 0.90–1.00; specificity 0.97, 0.94–0.99).

CONCLUSIONS: Appropriately trained imagers can accurately identify stDR at the time of imaging. Diabetes Care 35:482–484, 2012

The American Telemedicine Association Telehealth Practice Recommendations for Diabetic Retinopathy identifies four categories of telemedicine care for diabetic retinopathy (1). Category 1 programs identify patients with no or minimal diabetic retinopathy (Early Treatment Diabetic Retinopathy Study [ETDRS] level 20 or below) versus those with diabetic retinopathy more severe than ETDRS level 20. Category 2 programs accurately determine whether sight-threatening diabetic retinopathy (stDR), as evidenced by any level of diabetic macular edema (DME), severe or worse levels of nonproliferative diabetic retinopathy (NPDR) (ETDRS level 53 or worse), or proliferative diabetic retinopathy (ETDRS level 61 or worse), is present or not present. Category 3 programs accurately identify ETDRS-defined levels of diabetic retinopathy and DME to determine appropriate follow-up and treatment.
Category 4 programs can replace ETDRS 7-standard field 35-mm stereoscopic color fundus photographs in any clinical or research program.

The Joslin Vision Network (JVN) is a validated category 3 program (2–5). Imagers undergo an intensive 3-day program that includes fundus camera operation and imaging software navigation; structured courses on diabetes, ocular anatomy, diabetic retinopathy, and common ocular disorders; and a guided review demonstrating retinal images of nondiseased and diseased eyes. As part of the certification, imagers learn to recognize lesions of diabetic retinopathy, including hemorrhages, microaneurysms, venous caliber abnormalities, intraretinal microvascular abnormalities, retinal neovascularization, cotton wool spots, hard exudates, and laser scars. Salient retinal abnormalities not related to diabetes are also demonstrated, including choroidal nevi, retinal emboli, and large or asymmetrical optic cup-to-disc ratios. After the 3-day program, imagers serve a probationary period with senior imager supervision and ongoing quality improvement and assurance.

This prospective study assessed the ability of two certified imagers to conduct American Telemedicine Association Category 2 (presence vs. absence of stDR) grading at the time of retinal imaging.

RESEARCH DESIGN AND METHODS: Patients with diagnosed diabetes had nonmydriatic JVN imaging as part of their routine physical examinations in a primary care setting (HealthCare Associates, Beth-Israel Deaconess Medical Center) or a specialty diabetes clinic (Adult Diabetes, Joslin Diabetes Center). At the time of imaging, certified imagers (A.M.T., 71 patients [45%]; T.F., 87 patients [55%]) identified patients with potential stDR, defined for this program as ETDRS levels of 43 or worse (6) or DME, and ungradable images. Imagers were not able to manipulate the color, brightness, contrast, or other features of the images and could not view images stereoscopically.
To grade retinal thickening without stereoscopic viewing, imagers relied on identifying hard exudates or microaneurysms within 3,000 microns from the center of the macula as surrogate markers for DME. The two certified imagers were Bachelor of Arts college graduates with no prior health care experience in evaluating retinal images and had not provided direct patient care before working as retinal imagers. Certified readers graded images according to the previously described JVN protocol (2,3) in a central reading center with calibrated monitors and stereoscopic viewing capability. All readers in the JVN program are Massachusetts-licensed optometrists.

From the 1Beetham Eye Institute, Joslin Diabetes Center, Boston, Massachusetts; the 2Department of Ophthalmology, Harvard Medical School, Boston, Massachusetts; and the 3New England College of Optometry, Boston, Massachusetts.
Corresponding author: Paolo S. Silva, paoloantonio.silva@joslin.havard.edu. Received 13 July 2011 and accepted 21 November 2011. DOI: 10.2337/dc11-1317
© 2012 by the American Diabetes Association. Readers may use this article as long as the work is properly cited, the use is educational and not for profit, and the work is not altered. See http://creativecommons.org/licenses/by-nc-nd/3.0/ for details.
482 DIABETES CARE, VOLUME 35, MARCH 2012 care.diabetesjournals.org | Clinical Care/Education/Nutrition/Psychosocial Research | BRIEF REPORT

All readers were masked to the grading performed by the imagers. All findings were recorded on a specifically designed template.

RESULTS: A total of 158 consecutive patients were imaged.
Mean age was 56.5 years (range 22–86), 54% were female, and mean diabetes duration was 7.0 years (range 0.1–42). A total of 316 eyes were evaluated: 195 (61.7%) had no diabetic retinopathy, 62 (19.6%) had mild NPDR, 24 (7.6%) had moderate NPDR, 3 (1%) had severe or very severe NPDR, 2 (0.6%) had proliferative diabetic retinopathy, and 30 (9.5%) were ungradable for diabetic retinopathy. DME was absent in 266 (84.2%) eyes, present in 13 (4.1%), and 37 (11.7%) were ungradable for DME.

Of the 316 eyes assessed, imagers identified 48 (15%) eyes with potential stDR at the time of imaging. Subsequent grading by certified readers classified 6 (12.5%) of these eyes as mild NPDR. The imagers accurately identified all cases of stDR as graded by the readers. Although limited by the moderate sample size and the use of only two independent imagers, the agreement for determining stDR between imagers and readers was 0.95 ± 0.02. The sensitivity and specificity in identifying stDR at the time of imaging by a certified imager were 1.00 (95% CI 0.90–1.00) and 0.97 (95% CI 0.94–0.99), respectively (positive predictive value 0.88 [95% CI 0.74–0.95]; negative predictive value 1.00 [0.98–1.00]). There was complete agreement between imagers and readers regarding ungradable eyes (37 [12%]). Table 1 presents a cross-tabulation of imager and reader evaluations for the presence of stDR and ungradable images.

CONCLUSIONS: Film or digital retinal imaging is a sensitive method to identify the presence and level of diabetic retinopathy (7–10). Despite efforts to automate retinal image evaluation (11–13), currently no system can perform such analyses in real time, and present methods of retinal imaging require trained imagers to acquire retinal images.
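The operating characteristics quoted above can be recomputed directly from the eye counts reported in Table 1. The following Python sketch does so; the helper names are ours, not code from the study, and confidence intervals are omitted.

```python
# Reproduce the reported sensitivity/specificity/PPV/NPV and simple kappa
# from the Table 1 counts (imager stDR call vs. masked reader grading).

def diagnostic_stats(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV, and NPV from 2x2 counts."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

def cohens_kappa(table):
    """Simple (unweighted) kappa for a square agreement table."""
    n = sum(sum(row) for row in table)
    po = sum(table[i][i] for i in range(len(table))) / n   # observed agreement
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    pe = sum(r * c for r, c in zip(row_totals, col_totals)) / n ** 2
    return (po - pe) / (1 - pe)

# Gradable eyes: readers graded 42 eyes as stDR (all flagged by imagers) and
# 237 as no stDR, of which imagers over-called 6 as stDR.
stats = diagnostic_stats(tp=42, fp=6, fn=0, tn=231)
# Full 3x3 agreement table (rows: imager; cols: reader):
# no stDR / stDR / cannot grade.
kappa = cohens_kappa([[231, 0, 0], [6, 42, 0], [0, 0, 37]])

print(stats)             # sensitivity 1.00, specificity ~0.97, PPV ~0.88, NPV 1.00
print(round(kappa, 2))   # ~0.95
```

The point estimates match those reported in the text (1.00, 0.97, 0.88, 1.00, and kappa 0.95).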
This study shows that appropriately educated and certified imagers following a clearly defined imaging and grading protocol can accurately evaluate retinal images with a high degree of sensitivity and specificity for the presence of stDR and inadequate image quality at the time of imaging. The ability to identify ungradable images and detect potential stDR facilitates reacquisition of retinal images during a single imaging encounter and allows prompt referral to appropriate eye care. Although this study involved a moderate number of eyes (n = 316), 42 (13%) eyes with stDR and 37 (12%) eyes with ungradable images were identified, representing all cases that would have required further ophthalmic evaluation and care. Additional studies with a variety of imagers and patient populations will be required to determine whether similar results can be obtained across diverse health care scenarios. However, the fact that the two certified imagers involved in this study had no prior health care experience in evaluating retinal images suggests that similar results are possible. In this study, retinal imagers had received a validated standardized method of certification and training, which is an important consideration when extrapolating these results to other retinal imaging programs.

Acknowledgments: No potential conflicts of interest relevant to this article were reported. J.D.C. and P.S.S. researched data and wrote the manuscript. A.M.T., T.F., D.T., B.P., and S.E. researched data and reviewed and edited the manuscript. L.M.A. and L.P.A. reviewed and edited the manuscript and contributed to discussion. J.D.C. and P.S.S. are the guarantors of this work and, as such, had full access to all the data in the study and take responsibility for the integrity of the data and the accuracy of the data analysis.

References
1. Li HK, Horton M, Bursell SE, Cavallerano J, et al. Telehealth practice recommendations for diabetic retinopathy, second edition.
Telemed J E Health 2011;17:814–837
2. Ahmed J, Ward TP, Bursell SE, Aiello LM, Cavallerano JD, Vigersky RA. The sensitivity and specificity of nonmydriatic digital stereoscopic retinal imaging in detecting diabetic retinopathy. Diabetes Care 2006;29:2205–2209
3. Bursell SE, Cavallerano JD, Cavallerano AA, et al.; Joslin Vision Network Research Team. Stereo nonmydriatic digital-video color retinal imaging compared with Early Treatment Diabetic Retinopathy Study seven standard field 35-mm stereo color photos for determining level of diabetic retinopathy. Ophthalmology 2001;108:572–585
4. Cavallerano JD, Aiello LP, Cavallerano AA, et al.; Joslin Vision Network Clinical Team. Nonmydriatic digital imaging alternative for annual retinal examination in persons with previously documented no or mild diabetic retinopathy. Am J Ophthalmol 2005;140:667–673
5. Chow SP, Aiello LM, Cavallerano JD, et al. Comparison of nonmydriatic digital retinal imaging versus dilated ophthalmic examination for nondiabetic eye disease in persons with diabetes. Ophthalmology 2006;113:833–840
6. Early Treatment Diabetic Retinopathy Study Research Group. Grading diabetic retinopathy from stereoscopic color fundus photographs: an extension of the modified Airlie House classification. ETDRS report number 10. Ophthalmology 1991;98(Suppl.):786–806
7. Klein R, Klein BE, Neider MW, Hubbard LD, Meuer SM, Brothers RJ. Diabetic retinopathy as detected using ophthalmoscopy, a nonmydriatic camera and a standard fundus camera. Ophthalmology 1985;92:485–491
8. Li HK, Danis RP, Hubbard LD, Florez-Arango JF, Esquivel A, Krupinski EA.
Comparability of digital photography with the ETDRS film protocol for evaluation of diabetic retinopathy severity. Invest Ophthalmol Vis Sci 2011;52:4717–4725

Table 1. Cross-tabulation of grading for the presence of stDR by JVN imagers and JVN readers

                          Grading by JVN reader
Grading by JVN imager   No stDR     stDR      Cannot grade   n (%)
  No stDR               231         0         0              231 (73)
  stDR                  6           42        0              48 (15)
  Cannot grade          0           0         37             37 (12)
  n (%)                 237 (75)    42 (13)   37 (12)        Total 316 (100)

Numbers represent number of eyes. Exact matches between JVN imagers and JVN readers are in bold. Simple kappa statistic: 0.95 ± 0.02 (95% CI 0.92–0.99). Agreement for absence of stDR: 97% (0.94–0.99). Agreement for presence of stDR: 88% (0.74–0.95). Agreement for cannot grade: 100% (0.88–1).

9. Hubbard LD, Sun W, Cleary PA, et al.; Diabetes Control and Complications Trial/Epidemiology of Diabetes Interventions and Complications Study Research Group. Comparison of digital and film grading of diabetic retinopathy severity in the Diabetes Control and Complications Trial/Epidemiology of Diabetes Interventions and Complications Study. Arch Ophthalmol 2011;129:718–726
10. Gangaputra S, Almukhtar T, Glassman AR, et al. Comparison of film and digital fundus photographs in eyes of individuals with diabetes mellitus. Invest Ophthalmol Vis Sci 2011;52:6198–6173
11. Chaum E, Karnowski TP, Govindasamy VP, Abdelrahman M, Tobin KW. Automated diagnosis of retinopathy by content-based image retrieval. Retina 2008;28:1463–1477
12. Abràmoff MD, Reinhardt JM, Russell SR, et al. Automated early detection of diabetic retinopathy. Ophthalmology 2010;117:1147–1154
13. Fleming AD, Philip S, Goatman KA, Prescott GJ, Sharp PF, Olson JA. The evidence for automated grading in diabetic retinopathy screening.
Curr Diabetes Rev 2011;7:246–252

work_utjtl2t45nfdzdib4flrjby45i ----

RESEARCH ARTICLE

A method for measuring human body composition using digital images

Olivia Affuso 1,2*, Ligaj Pradhan 3, Chengcui Zhang 3, Song Gao 3, Howard W. Wiener 1, Barbara Gower 2,4, Steven B. Heymsfield 5, David B.
Allison 6*

1 Department of Epidemiology, School of Public Health, University of Alabama at Birmingham, Birmingham, AL, United States; 2 Nutrition Obesity Research Center, University of Alabama at Birmingham, Birmingham, AL, United States; 3 Department of Computer and Information Sciences, College of Arts and Sciences, University of Alabama at Birmingham, Birmingham, AL, United States; 4 Department of Nutrition Science, School of Health Professions, University of Alabama at Birmingham, Birmingham, AL, United States; 5 Pennington Biomedical Research Center, Louisiana State University System, Baton Rouge, LA, United States; 6 Department of Epidemiology and Biostatistics, School of Public Health, Indiana University-Bloomington, Bloomington, IN, United States
* Oaffuso@uab.edu (OA); Allison@iu.edu (DBA)

Abstract

Background/Objectives
Body mass index (BMI) is a proxy for obesity that is commonly used in spite of its limitation in estimating body fatness. Trained observers with repeated exposure to different body types can estimate body fat (BF) of individuals, compared to criterion methods, with reasonable accuracy. The purpose of this study was to develop and validate a computer algorithm to provide a valid estimate of %BF using digital photographs.

Subjects/Methods
Our sample included 97 children and 226 adults (age in years: 11.3±3.3; 38.1±11.6, respectively). Measured height and weight were used (BMI in kg/m2: 20.4±4.4; 28.7±6.6 for children and adults, respectively). Dual x-ray absorptiometry (DXA) was the criterion method. Body volume (BVPHOTO) and body shape (BSPHOTO) were derived from two digital images. Final support vector regression (SVR) models were trained using age, sex, race, and BMI for %BFNOPHOTO, plus BVPHOTO and BSPHOTO for %BFPHOTO. Separate validation models were used to evaluate the learning algorithm in children and adults.
The differences in correlations between %BFDXA, %BFNOPHOTO and %BFPHOTO were tested using the Fisher's Z-score transformation.

Results
Mean BFDXA and BFPHOTO were 27.0%±9.2 vs. 26.7%±7.4 in children and 32.9%±10.4 vs. 32.8%±9.3 in adults. SVR models produced %BFPHOTO values strongly correlated with %BFDXA. Our final model produced correlations of rDP = 0.80 and rDP = 0.87 in children and adults, respectively, for %BFPHOTO vs. %BFDXA. The correlation between %BFNOPHOTO and %BFDXA was moderate, yet statistically significant, in both children (rDB = 0.70; p<0.0001) and adults (rDB = 0.86; p<0.0001). However, the correlations for rDP were statistically higher than rDB (%BFDXA vs. %BFNOPHOTO) in both children and adults (children: Z = 5.95, p<0.001; adults: Z = 3.27, p<0.0001).

Conclusions
Our photographic method produced valid estimates of BF in both children and adults. Further research is needed to create norms for subgroups by sex, race/ethnicity, and mobility status.

OPEN ACCESS
Citation: Affuso O, Pradhan L, Zhang C, Gao S, Wiener HW, Gower B, et al. (2018) A method for measuring human body composition using digital images. PLoS ONE 13(11): e0206430. https://doi.org/10.1371/journal.pone.0206430
Editor: Rebecca A. Krukowski, University of Tennessee Health Science Center, UNITED STATES
Received: September 12, 2017. Accepted: October 12, 2018. Published: November 5, 2018
Copyright: © 2018 Affuso et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Data Availability Statement: Access to a de-identified dataset required to replicate our results is available via OPEN ICPSR (www.openicpsr.org) – Project ID: OPENICPSR-101343 (http://doi.org/10.3886/E101343V1).
Funding: This research was supported by the National Heart, Lung, and Blood Institute and the National Institute of Diabetes and Digestive and Kidney Diseases of the National Institutes of Health under Award Numbers R01HL107916 and P30DK056336. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health or any other organization.

Introduction
Assessment of body composition, particularly fat and fat-free mass, is vital to understanding many health-related conditions including cachexia induced by HIV, cancer, and other diseases; multiple sclerosis; wasting in neurological disorders such as Parkinson's, Alzheimer's, and muscular dystrophy; sarcopenia; obesity; eating disorders; proper growth in children; and response to exercise[1–7]. Nevertheless, challenges remain in the determination of these aspects of body composition in research[8].
Obesity, characterized by an excess of body fat (BF), and sarcopenia, defined as diminution of primarily skeletal muscle, remain significant public health problems [9, 10]. Both obesity and sarcopenia can be assessed using highly accurate techniques such as dual-energy x-ray absorptiometry (DXA) or magnetic resonance imaging (MRI), but these are not widely used in large-scale epidemiologic studies or non-clinical settings due, in part, to the cost and size of the equipment used for these methods. Furthermore, field methods such as multiple skinfold measurements depend heavily upon repeated training of research staff to obtain accurate and reliable assessments [11]. Therefore, body mass index (BMI; kg/m2) is a commonly used alternative, but is limited in that it is an assessment of body weight relative to height and not of body composition per se. It is well-documented that obesity is often misclassified when BMI is used as a proxy for body fatness compared to measurement techniques via imaging [12, 13]. In children, age- and sex-specific BMI percentiles have been found to underestimate the prevalence of excess adiposity when compared to DXA, particularly among whites and Mexican American youth [12]. Similarly in adults, BMI misclassified obesity status differentially by race/ethnicity and age [13]. Therefore, there is significant need for a simple, portable, and relatively inexpensive but accurate measurement of body composition that performs well across age, sex, and racial/ethnic groups.

The use of digital photography may be a viable alternative to BMI for the assessment of human body composition in field research. This method has the potential to overcome limitations associated with BMI, particularly the misclassification of obesity among individuals who have relatively high lean mass (i.e. body builders) or low lean mass (i.e. the elderly).
Competing interests: No honoraria, grants, or other forms of payment were given to anyone to produce this manuscript. However, Dr. Allison has received grants, honoraria, donations, and consulting fees from numerous food, beverage, pharmaceutical companies, and other commercial, government, and nonprofit entities with interests in obesity. None of Dr. Allison's sponsors were involved in the design, data collection, analysis, interpretation, or decision to publish the results presented in this manuscript. The remaining authors of this manuscript have no competing interests to disclose. This does not alter our adherence to PLOS ONE policies on sharing data and materials.

Our approach to using digital images to assess human body composition is predicated on evidence from studies done as early as the 1930s using either visual estimation [11, 14–16] or photographic assessment of body volume, from which body composition could be determined [17–19]. Previous studies employing visual estimation have found that both trained and untrained observers can provide moderately accurate estimates of percentage BF by visually inspecting an individual directly or from photography, with correlations between observations and criterion measures, such as underwater weighing (UWW), ranging between r = 0.56 and 0.83. This evidence suggests that visual estimation of body composition can be valid, but may be limited by familiarity of the observer with the study population, subjectivity of the observer, and reproducibility of the study results. Prior to the advent of digital photography, researchers used
manual photographic methods to provide valid and reliable estimates of body volume as a means to overcome the substantial expense and participant burden associated with UWW. However, these early attempts still required significant labor to process the photographs and manually calculate body volume. The automation of the visualization process by using computerized digital image analysis could overcome several of the issues associated with visual assessment of body volume and composition. Therefore, the purpose of this study was to develop an easy, portable, quick, and comparatively inexpensive, but valid, computerized image analysis method for use in large-scale and/or remote studies to estimate fat and fat-free mass.

Methods

Study sample
Participants were 323 children and adults aged 6–80 years, representing a broad range of shapes and sizes, recruited from the metropolitan Birmingham, AL area (2012–2014) via flyers, newspapers, newsletters, an online research referral service, word of mouth, visits to local recreational centers, churches, and community events. Inclusion criteria were: 1) without diseases known to affect body composition; 2) able to stand for three photographs; 3) no contraindication for a DXA body composition scan; 4) not missing more than one finger or toe, to reduce error in volume determination; and 5) not pregnant. Participants were instructed to refrain from caffeine and large meals prior to the single study visit. Informed consent was obtained from the adults/parents, and children provided their assent for study participation. The study protocol was approved by the Institutional Review Board of The University of Alabama at Birmingham. All participants received $20 for completing the study.

Measures
All measures were assessed by trained research staff with participants dressed in close-fitting but non-compressing LYCRA shorts and tank/sports bra (females only) without shoes.
Height (to the nearest 0.1 cm) and weight (to the nearest 0.1 kg) were measured using a physician's balance beam scale with stadiometer (HealthOMeter Model 402LB, McCook, IL). BMI (kg/m2) was calculated from measured height and weight. Obesity status was categorized in children as BMI ≥ 95th percentile and in adults as BMI ≥ 30, according to standard definitions by expert panels [20, 21]. Body composition was measured using DXA (GE Lunar iDXA, Madison, WI) with the pediatric software employed when appropriate (encore 2011 version 13.6). Quality assurance tests were performed daily per the manufacturer instructions. Obesity via DXA was defined as body fat percent ≥ 25% in males, ≥ 30% in girls, and ≥ 35% in women [22, 23]. Three photographs (front, back, and side profiles) were taken with a digital camera (Canon PowerShot Model SX50; Canon USA Inc., Melville, NY) with participants standing against a photography green screen.

Body volume and body shape from two-dimensional image processing
Body volume (BVPHOTO) and body shape (BSPHOTO) were estimated from photographs of each participant using two photographs (back and side profiles). Although we had a third photograph of the front profile, the use of the additional photograph did not change our estimation of body volume or shape. Briefly, the methods used to determine BV and BS are described below; the detailed methods are described in our technical report elsewhere [24].
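The obesity classifications described under Measures amount to a small decision rule. A minimal Python sketch follows; the helper names are ours, and the age-/sex-specific BMI percentile for a child is assumed to be supplied from an external growth reference (e.g., CDC growth charts) rather than computed here.

```python
# Hedged sketch of the study's stated obesity cutoffs (not the authors' code):
# children: BMI >= 95th percentile; adults: BMI >= 30;
# DXA: %BF >= 25 in males, >= 30 in girls, >= 35 in women.

def bmi(weight_kg, height_cm):
    """BMI in kg/m^2 from measured weight and height."""
    h = height_cm / 100.0
    return weight_kg / (h * h)

def obese_by_bmi(bmi_value, is_child, bmi_percentile=None):
    """BMI-based obesity status; children need a precomputed percentile."""
    if is_child:
        return bmi_percentile >= 95
    return bmi_value >= 30

def obese_by_dxa(percent_fat, sex, is_child):
    """DXA-based obesity status from percent body fat."""
    if sex == "male":
        return percent_fat >= 25
    return percent_fat >= (30 if is_child else 35)

print(round(bmi(90.0, 175.0), 1))                    # -> 29.4
print(obese_by_bmi(29.4, is_child=False))            # -> False
print(obese_by_dxa(36.0, "female", is_child=False))  # -> True
```

Note how the same adult can fall on different sides of the BMI and DXA cutoffs, which is exactly the misclassification problem motivating the study.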
The body posi- tion for the back profile photograph required participants to stand with their arms and legs separated, while for the side profile photograph the arms were required to be close to the body with the legs together. Also for the side profile, the right leg and foot were covered in a green cloth to isolate the side of the body. The distance of the camera from participants (91 in.) and the camera/lighting settings were standardized to reduce variation in the photographs among participants. A four-step procedure was then used to separate the body components of each participant and was as follows: 1) The body was separated from the green screen by setting a color intensity threshold that facilitated the differentiation between the background and the black clothing and skin tone in the photograph; 2) The body mask from the back and side pro- files were rotated until symmetrical in the vertical plane; 3) The height from the body mask was then normalized to represent participants’ actual height; 4) Finally, the separation points, called key points (see Fig 1B), used to determine the separation line of each body component such as the arms, legs and the trunk, were detected for each participant from the back profile, which allowed better separation due to natural folds of the skin. The local dimension features (e.g. length and width) of each body component such as arms, legs, and trunk, were used to construct ellipse-like slices along the main orientation of each component. Long and short axes were calculated for each ellipse-like slice depending upon the length and width of each body component. The area of each slice was taken to be the number of pixels within that slice. A 3D body model was then constructed by accumulating ellipse-like slices and the BVPHOTO was derived by summing the area of all slices. Fig 1. A. 3D-body model from 2D images. B. Key points to separate body segments. 
https://doi.org/10.1371/journal.pone.0206430.g001

Body shape features, which delineate fat distribution in the trunk, were captured by extracting the front curve (FC) and the side curve (SC) using the left upper key point and the lower key point, as demonstrated in Fig 2. The width of the body was taken at each of 12 equally spaced points along the vertical contour of the front of the body, providing 12 numerical values representing the FC of each person. Likewise, the side curve of the body was measured by 12 additional equally spaced lines representing the extracted side contour. The side contour lines were measured to the central bodyline (CBL, formed by a vertical line drawn from the top of the head through the lower key point) and gave the twelve numerical values representing each person’s SC. Details of the methods used to extract the shape features are published elsewhere [24]. After extracting FC and SC, a cluster evaluation function in MatLab (‘evalclusters’) with K-means clustering and the ‘Calinski-Harabasz’ criterion was used to compute the optimal number of clusters ‘k’ for both the FC and SC features from the training dataset [25]. This cluster evaluation step examined cluster sizes from 2 to 10 and computed the optimal cluster size for the FC and SC feature sets. FC and SC were then represented as k-element vectors consisting of 0’s and 1’s. A value of one at the n-th element indicates that the body shape is in the n-th cluster. The encoded vectors for FC and SC are represented as BSPHOTO in our model and used as categorical features to train the prediction models.
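The cluster-count selection and 0/1 encoding just described can be sketched in Python (the study used MatLab's 'evalclusters'; the compact K-means and Calinski-Harabasz implementations below are illustrative stand-ins, not the original code):

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    # Plain Lloyd's algorithm, seeded with k random data points as centroids.
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)].astype(float)
    for _ in range(iters):
        labels = ((X[:, None, :] - centroids[None]) ** 2).sum(-1).argmin(1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(0)
    return labels, centroids

def calinski_harabasz(X, labels):
    # Between-cluster over within-cluster dispersion, scaled by (n-k)/(k-1).
    n, grand = len(X), X.mean(0)
    groups = [X[labels == j] for j in np.unique(labels)]
    k = len(groups)
    ssb = sum(len(g) * ((g.mean(0) - grand) ** 2).sum() for g in groups)
    ssw = sum(((g - g.mean(0)) ** 2).sum() for g in groups)
    return (ssb / (k - 1)) / (ssw / (n - k))

def encode_shape_clusters(X, k_range=range(2, 11)):
    # Pick k (2..10, as in the paper) with the highest Calinski-Harabasz
    # score, then one-hot encode cluster membership as k-element 0/1 vectors.
    fits = {k: kmeans(X, k) for k in k_range}
    best_k = max(fits, key=lambda k: calinski_harabasz(X, fits[k][0]))
    labels, centroids = fits[best_k]
    return best_k, centroids, np.eye(best_k)[labels]
```

Applied to the 12-element FC and SC vectors separately, `np.eye(best_k)[labels]` yields exactly the "value of one at the n-th element" encoding used as BSPHOTO.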
After determining the optimal number of clusters and their corresponding cluster centroids using only the training dataset, FC and SC in the testing dataset were later assigned to the clusters whose centroids were closest in terms of Euclidean distance.

Building the prediction model

BVPHOTO and BSPHOTO of each participant, along with the covariates age, race, sex, and BMI, were used as input features, as shown in the first step of Table 1, to train a support vector regression (SVR) for the estimation of %BFPHOTO. The ground truth collection, the type of regression model used and the parameter optimization for the regression model are described in steps two, three and four of Table 1. Separate 3-fold cross-validation models were used to evaluate the performance of the learning algorithm for the prediction of body fat in children and adults [26, 27]. Both datasets were then shuffled randomly and divided into three equal subsets. The 3-fold cross-validations were then conducted by taking two of the subsets as the training dataset and the remaining subset as the testing dataset, without any overlap. In addition, we repeated the process three times, choosing separate subsets for testing each time. Hence, we tested each data point in the dataset once after completing one round of the 3-fold cross-validation process, as described in step 5 of Table 1.

Fig 2. Body shape features derived from front and side curves. https://doi.org/10.1371/journal.pone.0206430.g002

Statistical analysis

Descriptive statistics (means, SD, ranges, and/or frequencies) were computed for the participant characteristics stratified by adults and children.
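The nearest-centroid assignment of test-set shape vectors described above can be sketched as follows (an illustrative Python version; the function and variable names are ours, not the authors' MatLab code):

```python
import numpy as np

def assign_to_clusters(features, centroids):
    """Assign each test-set FC/SC vector to the training cluster whose centroid
    is closest in Euclidean distance; return one-hot (BSPHOTO-style) codes."""
    dists = np.linalg.norm(features[:, None, :] - centroids[None, :, :], axis=2)
    return np.eye(len(centroids))[dists.argmin(axis=1)]
```

Because the centroids come only from the training folds, this step keeps the test folds untouched during cluster discovery, which is what makes the cross-validation estimate honest.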
Pearson’s correlations between the DXA criterion measure of body fat and the predicted values from a simplified model (BVPHOTO + covariates) and the full model (BVPHOTO + BSPHOTO + covariates) were calculated. The correlations are denoted as follows: rDP denotes the correlation between %BF by DXA and %BF by PHOTO, and rDB denotes the correlation between %BF by DXA and %BF by BMI plus covariates. We used the method of Meng et al. [28] to determine whether our method (i.e. %BFPHOTO) was significantly better correlated to the criterion method of %BFDXA than was %BFNOPHOTO [29]. We also created Bland-Altman plots of the difference in estimation of body fat by each method to examine the error in our estimation across the range of %BFDXA and %BFPHOTO measurements [30]. SVR models were constructed using LIBSVM [31] in MatLab Version R2014b (Mathworks, Inc., Natick, MA). Correlations and the test for statistical difference were computed using SAS Version 9.3 (SAS, Inc., Cary, NC). The alpha level of significance was set at p < 0.05 (2-tailed).

Results

Participant characteristics stratified by children (6–18 years) and adults (≥ 19 years) are presented in Table 2. Our sample included 97 children and 226 adults (mean ± SD of age in years: 11.3 ± 3.3 and 38.1 ± 11.6, respectively). Mean BMI was 20.4 ± 4.4 kg/m² in children and 28.7 ± 6.6 kg/m² in adults, and mean %BFDXA was 27.1 ± 9.2% and 32.7 ± 10.4%, respectively. Among the children,

Table 1. Procedures used in the support vector regression for the photographic estimation of body fat.

1. Feature set generation: Back and side profile images of 323 children and adults ranging in age from 6 to 80 years were collected, along with their age, sex, race and BMI information. The pictures were used to compute BVPHOTO, FC and SC. FC and SC were separately used to cluster the participants into k groups, with k computed using a cluster evaluation function in MatLab (‘evalclusters’) with K-means clustering and the ‘Calinski-Harabasz’ criterion.
These clusters represent different body shapes and are represented as BSPHOTO in our study. Hence, our feature set for each participant is: age, sex, race, BMI, BVPHOTO, BSPHOTO.

2. Ground truth: We also measured the %BF of each participant using a DXA machine (%BFDXA). %BFDXA is our ground-truth %BF for training and testing the prediction model.

3. Regression model: We train a support vector regression (SVR) model to predict %BFDXA using our feature set. We use the ‘nu-SVR’ type SVR with a radial-basis kernel function from LIBSVM [31].

4. Parameter optimization: ‘nu-SVR’ requires several parameters [31]. The values of -s and -t were selected to be 4 and 2, respectively. A parameter selection procedure was performed using our training dataset to select the best parameter set for -g (gamma), -c (cost) and -n (nu). Combinations of g, c and n within certain ranges were tested during training, and the combination that produced the highest prediction accuracy in the training dataset was chosen for our regression model. The values of -g and -c were taken as 2^X, where X is an integer ranging from -10 to 10. Similarly, values of n were taken from 0.1 to 1 at an interval of 0.1.

5. Training and testing: The participants were randomly divided into 3 subsets and 3-fold cross-validations were performed by taking two of the subsets as the training dataset and the remaining subset as the testing dataset. FC and SC curves from the training dataset were used to compute the k clusters. Each test dataset was assigned to the clusters discovered during training by computing the nearest cluster centroid to compute their BSPHOTO. This process was repeated 3 times, choosing separate subsets for testing each time.
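Steps 4 and 5 above amount to a grid search over (g, c, n) on the training set plus a rotating 3-fold split. A minimal, model-agnostic Python sketch of that harness follows (the study called LIBSVM's nu-SVR; `fit_fn`/`train_fn` below are illustrative placeholders for that call):

```python
import itertools
import numpy as np

def grid_search(X, y, fit_fn, gammas, costs, nus):
    """Step 4: try every (g, c, n) combination and keep the one with the best
    training-set accuracy (here: lowest mean squared error)."""
    best, best_err = None, np.inf
    for g, c, n in itertools.product(gammas, costs, nus):
        model = fit_fn(X, y, g, c, n)  # e.g. a LIBSVM nu-SVR fit
        err = np.mean((model(X) - y) ** 2)
        if err < best_err:
            best, best_err = (g, c, n), err
    return best

def three_fold_cv(X, y, train_fn, seed=0):
    """Step 5: shuffle, split into 3 folds, train on 2 and predict the held-out
    fold, rotating so every point is predicted exactly once."""
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(len(X)), 3)
    preds = np.empty(len(y), dtype=float)
    for i, test in enumerate(folds):
        train = np.concatenate([f for j, f in enumerate(folds) if j != i])
        model = train_fn(X[train], y[train])
        preds[test] = model(X[test])
    return preds
```

In the paper, `gammas` and `costs` span 2^-10 to 2^10 and `nus` runs from 0.1 to 1.0 in steps of 0.1, giving 4,410 candidate combinations per training fold.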
https://doi.org/10.1371/journal.pone.0206430.t001

47.4% were female and 60.8% were African American (AA), while among the adults, 46.5% were female and 46.5% were AA. Also, 14.4% of children and 33.6% of adults were classified as obese using BMI (age- and sex-specific BMI ≥ 95th percentile for children and ≥ 30 kg/m² for adults), while 40.2% of children and 66.4% of adults were over the threshold for body fatness via DXA (≥ 30% for girls and ≥ 25% for boys; ≥ 35% for women and ≥ 25% for men). Correlations between %BFDXA and %BFPHOTO from our learning algorithms are presented for adults and children in Figs 3 and 4, respectively. Both SVR models produced %BFPHOTO values that were strongly correlated with %BFDXA. %BFPHOTO from Model 1, which included demographic variables plus BMI and BVPHOTO, produced correlations of rDP = 0.72 and rDP = 0.86 in children and adults, respectively. However, %BFPHOTO from Model 2, which included the variables from Model 1 plus BSPHOTO, produced rDP = 0.81 in children and rDP = 0.88 in adults. The correlations among %BFDXA, %BFNOPHOTO, and %BFPHOTO are reported in Table 3. The correlation between %BFNOPHOTO and %BFDXA was moderate, yet statistically significant, in both children (rDB = 0.70; p < 0.0001) and adults (rDB = 0.86; p < 0.0001). rDP was significantly larger than rDB in both children and adults (children: Z = 5.95; p < 0.0001; adults: Z = 3.27; p < 0.001). Bland-Altman plots revealed greater differences between %BFPHOTO and %BFDXA in the tails of the distributions in both adults and children (Fig 3A and 3B). In adults, the mean absolute and relative differences between the two methods were -0.06 (95% CI: -4.97, 4.85) and 0.005 (95% CI: -0.18, 0.19), respectively.
In children, the mean absolute and relative differences were -0.19 (95% CI: -5.43, 5.04) and 0.004 (95% CI: -0.20, 0.21), respectively.

Discussion

Innovations in both digital photography and image analysis have allowed the automation of visual body composition assessment. Our study results showed strong correlations between predicted body fat percentage derived from 2D digital photographs and body fatness from DXA in a sample of children and adults. Our image analysis algorithm provided statistically better correlations with DXA measurements than a model containing only BMI and demographic information. The average absolute error between our method and DXA was small (~4.1%), with BV and BS contributing significantly to the overall prediction of percent body fat. Our estimates of body volume from the photographs were highly correlated with body volume from air displacement plethysmography, suggesting that our photographic method is valid for use in the prediction of body composition (r = 0.98) [32].

Table 2. Sample characteristics.
                                   Adults (n = 226)   Children (n = 97)
Age (mean, SD), years              38.1 (11.1)        11.3 (3.3)
Female (n, %)                      105 (46.5)         46 (47.4)
African American (n, %)            105 (46.5)         59 (60.8)
Height (mean, SD), cm              169.2 (8.4)        148.5 (15.4)
Weight (mean, SD), kg              82.2 (20.1)        46.3 (16.2)
BMI (mean, SD), kg/m²              28.7 (6.6)         20.4 (4.4)
BMI percentile (mean, SD)          -                  64.8 (27.6)
DXA body fat (mean, SD), %         32.9 (10.4)        27.0 (9.2)
Photographic volume (MP)           25.4 (6.3)         14.1 (4.9)
MP = megapixel
https://doi.org/10.1371/journal.pone.0206430.t002
To further enhance our estimation of percent body fat, we incorporated visual elements of body shape, which were also shown to correspond to levels of adiposity from DXA (i.e. thin body shapes corresponded to low body fatness). The clustering of body shape and percent body fat suggests that body shape provides important visual information for improving the accuracy of the image analysis algorithm and may have implications for the assessment of health outcomes beyond overall adiposity. The findings followed a similar pattern among children and adults, though the correlation between our method and DXA was higher in adults. Moreover, our results are consistent with other studies that use multiple cameras (e.g. 8–16 cameras) to estimate BF in humans [33, 34]. The simplicity of our method overcomes several issues associated with other field measures of body composition, such as portability, ease of data collection, and expense. Relative to a DXA machine (~$30,000 USD), the setup cost for our photographic method is ~$300, which includes the cost of a scale/stadiometer, chroma-key green background, tank top and shorts, and digital camera. Our preliminary software program has been developed in MatLab and requires the end-user to enter demographic information along with the selection of a front and side photograph previously saved on the computer. Furthermore, our method requires little human input during the image capture and processing procedures, thereby reducing biases inherent in other field methods, such as visual estimation and skinfold measurement, which require training and re-training [11, 14–16, 35]. Lastly, the fact that no specialized equipment is required beyond a simple digital camera and a green backdrop makes our portable method more convenient than other methods.

Implications

Computerized image analysis of digital photographs can be used as a valid method for estimating body fatness in humans.
Simple digital photographs processed with our algorithm could be used in place of BMI alone in both clinical practice and public health research. This is of particular interest in research, as evidence suggests that associations between obesity assessed by BMI and outcomes such as mortality may differ (e.g. linear vs. U-shaped) when a more robust method of body composition is used. Our novel method also addresses the need for a portable, yet valid, method for measuring body composition in large epidemiological studies as well as in investigations conducted in remote locations.

Strengths & limitations

The strengths of this study include the use of digital photographs to capture unbiased information about the body for use in the estimation of volume and shape, DXA as the criterion measure of body composition, and a diverse sample of black and white youth and adult males and females. However, several limitations of this work should also be noted, including the small sample within each stratum of race, sex and age, as well as the inclusion of only able-bodied individuals who could stand for the photographs. Although attempts were made to include a broad range of body sizes at this stage in the development of this method, there were limited numbers of participants enrolled in the tails of the distribution for this analysis. Therefore, the results are not necessarily generalizable to very lean or very obese individuals, all race-ethnic groups, or persons unable to stand for the photographs.

Fig 3. Correlations between predicted body fat from the photographs and dual-energy x-ray absorptiometry (DXA) (A), and Bland-Altman plots of the absolute (B) and relative (C) differences between the two methods (horizontal lines represent the 95% confidence intervals) among adults.

Future studies should examine the
https://doi.org/10.1371/journal.pone.0206430.g003

performance of our photographic method in a larger sample to allow stratification by age, sex, and race/ethnicity.

Conclusion

This research shows that a computer algorithm can be developed to provide a valid estimate of body fatness from 2D photographs taken with a regular digital camera.

Acknowledgments

We wish to thank our study participants, without whom this research would not have been possible.

Author Contributions

Conceptualization: Olivia Affuso, Steven B. Heymsfield, David B. Allison.
Data curation: Olivia Affuso, Ligaj Pradhan, Chengcui Zhang, Song Gao, Howard W. Wiener.
Formal analysis: Olivia Affuso, Howard W. Wiener, David B. Allison.
Funding acquisition: Olivia Affuso, David B. Allison.
Investigation: Olivia Affuso.
Methodology: Olivia Affuso, Ligaj Pradhan, Chengcui Zhang, Song Gao, Barbara Gower, Steven B. Heymsfield, David B. Allison.
Project administration: Olivia Affuso.
Resources: Barbara Gower.
Supervision: Olivia Affuso, Chengcui Zhang.
Writing – original draft: Olivia Affuso, Ligaj Pradhan, Howard W. Wiener.
Writing – review & editing: Olivia Affuso, Barbara Gower, Steven B. Heymsfield, David B. Allison.

Fig 4. Correlations between predicted body fat from the photographs and dual-energy x-ray absorptiometry (DXA) (A), and Bland-Altman plots of the absolute (B) and relative (C) differences between the two methods (horizontal lines represent the 95% confidence intervals) among children. https://doi.org/10.1371/journal.pone.0206430.g004

Table 3. Correlations and concordance between DXA and estimated body fat.
                                            Adults (n = 226)          Children (n = 97)
                                            DXA %BF   Lin's CC        DXA %BF   Lin's CC
Non-photographic % body fat                 0.86 a    0.85            0.70 a    0.66
Photographic % body fat (volume)            0.86      0.85            0.72      0.67
Photographic % body fat (volume and shape)  0.88 b    0.87            0.81 b    0.79

DXA = dual-energy x-ray absorptiometry. a,b Differences in correlation coefficients between the non-photographic model and the photographic model containing volume and shape: adults, Z = 3.27, P < 0.001; children, Z = 5.95, P < 0.0001. Lin's concordance coefficients are comparisons of the model estimates of % body fat with DXA % body fat.
https://doi.org/10.1371/journal.pone.0206430.t003

References

1. Vaisman N, Corey M, Rossi MF, Goldberg E, Pencharz P. Changes in body composition during refeeding of patients with anorexia nervosa. J Pediatr. 1988; 113(5):925–9. PMID: 3183854.
2. Johnson DK, Wilkins CH, Morris JC. Accelerated weight loss may precede diagnosis in Alzheimer disease. Arch Neurol. 2006; 63(9):1312–7. https://doi.org/10.1001/archneur.63.9.1312 PMID: 16966511.
3. Stewart R, Masaki K, Xue QL, Peila R, Petrovitch H, White LR, et al. A 32-year prospective study of change in body weight and incident dementia: the Honolulu-Asia Aging Study. Arch Neurol. 2005; 62(1):55–60. https://doi.org/10.1001/archneur.62.1.55 PMID: 15642850.
4. Morley JE, Thomas DR, Wilson MM. Cachexia: pathophysiology and clinical relevance. Am J Clin Nutr. 2006; 83(4):735–43. https://doi.org/10.1093/ajcn/83.4.735 PMID: 16600922.
5. Hunter GR, McCarthy JP, Bamman MM. Effects of resistance training on older adults. Sports Med. 2004; 34(5):329–48. https://doi.org/10.2165/00007256-200434050-00005 PMID: 15107011.
6.
Allison DB, Mentore JL, Heo M, Chandler LP, Cappelleri JC, Infante MC, et al. Antipsychotic-induced weight gain: a comprehensive research synthesis. Am J Psychiatry. 1999; 156(11):1686–96. https://doi.org/10.1176/ajp.156.11.1686 PMID: 10553730.
7. Knox TA, Zafonte-Sanders M, Fields-Gardner C, Moen K, Johansen D, Paton N. Assessment of nutritional status, body composition, and human immunodeficiency virus-associated morphologic changes. Clin Infect Dis. 2003; 36(Suppl 2):S63–8. https://doi.org/10.1086/367560 PMID: 12652373.
8. Segal KR, Dunaif A, Gutin B, Albu J, Nyman A, Pi-Sunyer FX. Body composition, not body weight, is related to cardiovascular disease risk factors and sex hormone levels in men. J Clin Invest. 1987; 80(4):1050–5. https://doi.org/10.1172/JCI113159 PMID: 3654969; PubMed Central PMCID: PMC442345.
9. Ogden CL, Carroll MD, Kit BK, Flegal KM. Prevalence of obesity in the United States, 2009–2010. NCHS Data Brief. 2012;(82):1–8. PMID: 22617494.
10. Fielding RA, Vellas B, Evans WJ, Bhasin S, Morley JE, Newman AB, et al. Sarcopenia: an undiagnosed condition in older adults. Current consensus definition: prevalence, etiology, and consequences. International Working Group on Sarcopenia. J Am Med Dir Assoc. 2011; 12(4):249–56. https://doi.org/10.1016/j.jamda.2011.01.003 PMID: 21527165; PubMed Central PMCID: PMC3377163.
11. Vogel JA, Kirkpatrick JW, Fitzgerald PI, Hodgdon JA, Harman EA. Derivation of anthropometry based body fat equations for the Army's weight control program (No. USARIEM-T-17/88). Army Research Institute of Environmental Medicine, Natick, MA. 1988.
12. Affuso F, Bray MS, Fernandez JR, Casazza K. Standard obesity cut points based on BMI percentiles do not equally correspond to body fat percentage across racial/ethnic groups in a nationally representative sample of children and adolescents. Intl J Body Compos Res. 2010; 8(4):117–22.
13. Baumgartner RN, Heymsfield SB, Roche AF.
Human body composition and the epidemiology of chronic disease. Obes Res. 1995; 3(1):73–95. PMID: 7712363.
14. Blanchard JM, Ward GM, Kryzywicki HJ, Canham JE. A visual appraisal method for estimating body composition in humans. Presidio of San Francisco, Report No. 81; 1979.
15. Eckerson JM, Housh TJ, Johnson GO. The validity of visual estimations of percent body fat in lean males. Med Sci Sports Exerc. 1992; 24(5):615–8. PMID: 1569858.
16. Sterner TG, Burke EJ. Body fat assessment: a comparison of visual estimation and skinfold techniques. Physician Sportsmed. 1986; 14:101–7.
17. Weinbach AP. Contour maps, center of gravity, moment of inertia, and surface area of the human body. Human Biol. 1938; 10:356–71.
18. Pierson WR. Monophotogrammetric determination of body volume. Ergonomics. 1961; 4(3):213–8.
19. Geoghegan B. The determination of body measurements, surface area and body volume by photography. Am J Phys Anthropol. 1953; 11(1):97–119. PMID: 13040507.
20. Cole TJ, Bellizzi MC, Flegal KM, Dietz WH. Establishing a standard definition for child overweight and obesity worldwide: international survey. BMJ. 2000; 320(7244):1240–3. PMID: 10797032; PubMed Central PMCID: PMC27365.
21. NHLBI Expert Panel: Obesity Education Initiative. Clinical guidelines on the identification, evaluation, and treatment of overweight and obesity in adults. 1998.
22. Williams DP, Going SB, Lohman TG, Harsha DW, Srinivasan SR, Webber LS, et al. Body fatness and risk for elevated blood pressure, total cholesterol, and serum lipoprotein ratios in children and adolescents. Am J Public Health. 1992; 82(3):358–63. PMID: 1536350; PubMed Central PMCID: PMC1694353.
23. Gallagher D, Heymsfield SB, Heo M, Jebb SA, Murgatroyd PR, Sakamoto Y. Healthy percentage body fat ranges: an approach for developing guidelines based on body mass index. Am J Clin Nutr. 2000; 72(3):694–701. https://doi.org/10.1093/ajcn/72.3.694 PMID: 10966886.
24. Pradhan L, Song G, Zhang C, Gower BA, Heymsfield SB, Allison DB, et al., editors. Feature extraction from 2D images for body composition analysis. IEEE International Symposium on Multimedia; 2015:45–52. Miami, FL: IEEE. https://doi.org/10.1109/ISM.2015.117 URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7442294&isnumber=7442255
25. Maulik U, Bandyopadhyay S. Performance evaluation of some clustering algorithms and validity indices. IEEE Transactions on Pattern Analysis and Machine Intelligence. 2002; 24(12):1650–4.
26. Ivanescu AE, Li P, George B, Brown AW, Keith SW, Raju D, et al.
The importance of prediction model validation and assessment in obesity and nutrition research. Int J Obes (Lond). 2015. https://doi.org/10.1038/ijo.2015.214 PMID: 26449421.
27. Ambroise C, McLachlan GJ. Selection bias in gene extraction on the basis of microarray gene-expression data. Proc Natl Acad Sci U S A. 2002; 99(10):6562–6. https://doi.org/10.1073/pnas.102102699 PMID: 11983868; PubMed Central PMCID: PMC124442.
28. Meng XL, Rosenthal R, Rubin DR. Comparing correlated correlation coefficients. Psych Bulletin. 1992; 111(1):172–5.
29. Lin L, Hedayat AS, Wu W. Statistical tools for measuring agreement. Springer Science and Business Media; 2012.
30. Bland JM, Altman DG. Comparing methods of measurement: why plotting difference against standard method is misleading. Lancet. 1995; 346(8982):1085–7. PMID: 7564793.
31. Chang C-C, Lin C-J. LIBSVM: a library for support vector machines. ACM Trans Intell Syst Technol. 2011; 2(3):1–27. https://doi.org/10.1145/1961189.1961199
32. Affuso O, Zhang C, Chen W-B, Song G, Keeting K, Lewis DW, et al. Novel image analysis method for assessing body composition in humans: a pilot study. Proceedings of the 31st Annual Scientific Meeting of the Obesity Society, Atlanta, GA. 2013.
33. Connell L, Ulrich P, Brannon E, Alexander M, Presley A. Body shape assessment scale: instrument development for analyzing female figures. Clothing and Textile Res J. 2006; 24:80–95.
34. Xu B, Yu W, Yao M, Pepper MR, Freeland-Graves JH. Three-dimensional surface imaging system for assessing human obesity. Opt Eng. 2009; 48(10):nihpa156427. https://doi.org/10.1117/1.3250191 PMID: 19966948; PubMed Central PMCID: PMC2788969.
35. Pollock ML, Jackson AS. Research progress in validation of clinical methods of assessing body composition. Med Sci Sports Exerc. 1984; 16(6):606–15. PMID: 6392815.
African Invertebrates Vol. 52 (1) Pages 231–232 Pietermaritzburg June, 2011

BOOK REVIEW

'DIGITAL PHOTOGRAPHY FOR SCIENCE: CLOSE-UP PHOTOGRAPHY, MACROPHOTOGRAPHY AND PHOTOMACROGRAPHY.' Enrico Savazzi. Raleigh, NC, USA: Lulu, 2011. 698 pp., 23×15 cm. Paperback, ISBN 978-0-557-92537-7. Hardcover, ISBN 978-0-557-91133-2. Available from www.lulu.com.

[…]tography has long been a topic of numerous publications, very few, if any, have dealt […] has done so with an all-encompassing manual that will prove invaluable for advanced amateurs as well as professionals. […] quite technical and go into great detail about the functioning of digital camera systems, touching upon numerous topics including storage and battery options and camera types; the physics of camera optics are also covered and might prove quite challenging for those whose mathematics is a bit rusty. The section on specimen mounts is quite useful as it can save readers a lot of time with regard to subsequent image processing, especially in avoiding having to photoshop out mounts and pins.
One of the more interesting chapters is "Photographing the invisible and the barely visible", which deals with numerous techniques and special equipment for capturing that which the naked eye cannot see. The section on High Dynamic Range techniques is particularly insightful as it […]sible in terms of colour and detail. This chapter is especially useful for anyone wanting to take photos of specimens that cannot be achieved by […] also covered thoroughly in later chapters. […]plaining all the need-to-know basics from how the stacking software works in theory to the commercially and freely available software options. I found the section on auto[…]cess works but also how to build your own stacking equipment, something that can be of great use to scientists who cannot afford a Leica or Nikon system. […] and how to set up a proper photography room. The chapter on preparing images for publication is especially valuable as nothing causes a bigger headache to a journal editor than incorrect images and the inability to follow instructions to authors. The book is published via print-on-demand and as such it does have a paperback-novel feel to it in terms of paper and print quality. It has numerous photographs to illustrate techniques, instruments and photographic results, which are unfortunately all in black and white. However, all these photos have a note stating that the high-quality colour versions are available online on the author's website. While this is a creative way of saving on printing costs, one cannot help but wonder if the URL will still be […] the back of the book, which would still have lowered printing costs while at the same time ensuring the availability of the images in the future. […]phy in their research output, and also to anyone who uses any form of computer ge[…] library.

B. S.
Muller

Project Gallery

Delphi4Delphi: first results of the digital archaeology initiative for ancient Delphi, Greece

Ioannis Liritzis1,∗, George Pavlidis2, Spyros Vosynakis3, Anestis Koutsoudis2, Pantelis Volonakis1, Nikos Petrochilos4, Matthew D. Howland5, Brady Liss5 & Thomas E. Levy5

Digital media and learning initiatives for virtual collaborative environments are contributing to the definition of new (sub-)disciplines in archaeological and heritage sciences. New nomenclature and terminology is emerging, such as cyber archaeology, cyber archaeometry, virtual worlds and augmented and immersive realities; all of them are related to museums and cultural heritage—tangible, intangible or natural (Forte 2010; Liritzis et al. 2015). The research project 'Digital Enterprise for Learning Practice of Heritage Initiative FOR Delphi' (Delphi4Delphi) aims to address many of these topics. In particular, it focuses on the educational, research and social implications of digital heritage. The ongoing work presented here highlights the first large-scale interdisciplinary cyber-archaeology project to make use of structure from motion (SfM) and CAVEcam measurements of heritage monuments and artefacts in Greece on any significant scale (Levy 2015). Delphi was the most prestigious and authoritative oracle in the ancient world. Its reputation centred on the political decisions taken after consultation of the Oracle, especially during the period of colonisation of the Archaic period (c. eighth to sixth centuries BC), when cities sought her consent and guidance (Bommelaer 2015). The main goal of the Delphi4Delphi project is to capture detailed 3D images of the major archaeological monuments at Delphi and artefacts in the Delphi Archaeological Museum
The main goal of the Delphi4Delphi project is to capture detailed 3D images of the major archaeological monuments at Delphi and artefacts in the Delphi Archaeological Museum 1 Department of Mediterranean Studies, Laboratory of Archaeometry, University of the Aegean, Rhodes 85100, Greece (Email: liritzis@rhodes.aegean.gr; p.volonakis@aegean.gr) 2 ATHENA—Research and Innovation Centre in Information, Communication and Knowledge Technologies, Panepistimioupoli Kimmerion, P.O. Box 159, 67100 Xanthi, Greece (Email: gpavlid@ipet.gr; akoutsou@ipet.gr) 3 Department of Product and Systems Design Engineering, University of the Aegean, Konstantinoupoleos 2, Hermoupolis, Syros 84100, Greece (Email: spyrosv@aegean.gr) 4 Ephoria of Antiquities in Phokis, Museum of Delphi, 33054 Delphi, Greece (Email: npetrochilos@hotmail.com) 5 Department of Anthropology and Center for Cyber-Archaeology and Sustainability—Qualcomm Institute, University of California, San Diego, Social Sciences Building Room 210 9500, Gilman Drive La Jolla, CA 92093-0532, USA (Email: mdhowlan@ucsd.edu; bliss@ucsd.edu; tlevy@ucsd.edu) ∗ Author for correspondence (Email: liritzis@rhodes.aegean.gr) C© Antiquity Publications Ltd, 2016 ANTIQUITY 90 354, e4 (2016): 1–6 doi:10.15184/aqy.2016.187 1 mailto:liritzis@rhodes.aegean.gr mailto:p.volonakis@aegean.gr mailto:gpavlid@ipet.gr mailto:akoutsou@ipet.gr mailto:spyrosv@aegean.gr mailto:npetrochilos@hotmail.com mailto:mdhowlan@ucsd.edu mailto:bliss@ucsd.edu mailto:tlevy@ucsd.edu mailto:liritzis@rhodes.aegean.gr http://dx.doi.org/10.15184/aqy.2016.187 Ioannis Liritzis et al. Figure 1. Structure from motion photography of the Omphalos, captured and cast into a 3D model. in order to contribute to the 3D reconstruction of the sanctuary in support of research, conservation and tourism. Two types of digital-photography-based recording systems were used in the 2015 season. The first method was SfM. 
This is a technique of spectral documentation (usually typical optical imaging) that refers to the process of building 3D structures from 2D image sequences. This constitutes recovering a scene's depth (the dimension that is not captured in a single typical 2D photograph) and recording it in a 3D data format (Pavlidis et al. 2007; Doneus et al. 2011; Koutsoudis et al. 2013) (Figures 1 & 2). The second system involved the use of the 3D CAVEcam stereo photography system, which produces 3D-surrounding images of buildings and heritage sites. CAVEcam cameras capture 360° × 180° panoramic views (Figures 3–6). Both methods were used in a quick, non-invasive and affordable manner (DeFanti et al. 2009; Smith et al. 2013). The monuments and artefacts recorded in 2015 as part of the Delphi4Delphi project include: the Temple of Apollo, the theatre, the gymnasium, the Sanctuary of Athena Pronaea (and the Tholos), the bronze charioteer and the marble sphinx, as well as decorative architectural elements of monuments such as those from the Treasury of the Siphnians. The dataset generated during our first season totalled around 300GB. It will allow a detailed analysis of the ancient sanctuary and its material culture, including integration with archaeoastronomical studies of the celestial phenomena prevailing at the time the Oracle delivered her advice (Liritzis & Castro 2013). The next stage of the research will include the use of balloons and drones to capture additional images of the site.

Figure 2. 3D geometry and textured models with structure from motion from the Delphi site and archaeological museum: a) the head of the sphinx; b) the dense point cloud of part of the theatre (about 120 million points measured).

Figure 3. Stitched CAVEcam imagery from one camera of the Temple of Apollo.

Figure 4.
Stitched CAVEcam imagery from one camera of the interior of the Tholos.

Figure 5. Stitched CAVEcam imagery from one camera of the Sphinx of Naxos.

Figure 6. CAVEcam of Roman complex in the Qualcomm Institute, UCSD.

Acknowledgements
We thank the Ministry of Culture and Sports, Greece, for providing the necessary permit, and A. Psalti for facilitating our work at Delphi.

References
BOMMELAER, J.-F. 2015. Guide de Delphes, le site. Paris: Athènes.
DEFANTI, T.A., G. DAWE, D.J. SANDIN, J.P. SCHULZE, P. OTTO, J. GIRADO, F. KUESTER, L. SMARR & R. RAO. 2009. The StarCAVE, a third-generation CAVE and virtual reality OptIPortal. Future Generation Computer Systems 25: 169–78.
DONEUS, M., G. VERHOEVEN, M. FERA, CH. BRIESE, M. KUCERA & W. NEUBAUER. 2011. From deposit to point cloud—a study of low-cost computer vision approaches for the straightforward documentation of archaeological excavations, in Proceedings of the 23rd International CIPA Symposium, Prague, Czech Republic. Available at: http://publik.tuwien.ac.at/files/PubDat_206899.pdf (accessed 1 August 2016).
FORTE, M. 2010. Cyber archaeology (British Archaeological Reports international series 2177). Oxford: Archaeopress.
KOUTSOUDIS, A., B. VIDMAR & F. ARNAOUTOGLOU. 2013. Performance evaluation of a multi-image 3D reconstruction software on a low-feature artefact. Journal of Archaeological Science 40: 4450–56.
LEVY, T.E. 2015. The past forward. Biblical Archaeology Review (special issue): 81–87.
LIRITZIS, I. & B. CASTRO. 2013. Delphi and cosmovision: Apollo's absence at the land of the Hyperboreans and the time for consulting the oracle. Journal of Astronomical History and Heritage 16: 184–206.
LIRITZIS, I., F.M. AL-OTAIBI, P. VOLONAKIS & A. DRIVALIARI. 2015. Digital technologies and trends in cultural heritage. Mediterranean Archaeology and Archaeometry 15: 313–32.
PAVLIDIS, G., A. KOUTSOUDIS, F.
ARNAOUTOGLOU, V. TSIOUKAS & C. CHAMZAS. 2007. Methods for 3D digitization of cultural heritage. Elsevier Journal of Cultural Heritage 8: 93–98. SMITH, N., S. CUTCHIN, R. KOOIMA, R. AINSWORTH, D. SANDIN, J. SCHULZE, A. PRUDHOMME, F. KUESTER, T. LEVY & T. DEFANTI. 2013. Cultural heritage omni-stereo panoramas for immersive cultural analytics—from the Nile to the Hijaz, in Proceedings of the 8th International Symposium on Image and Signal Processing and Analysis, Trieste, Italy: 552–57. http://dx.doi.org/10.1109/ISPA.2013.6703802 C© Antiquity Publications Ltd, 2016 6 Acknowledgements References References work_uv5rvzwquvayphoyfca4tqwuq4 ---- Cryolipolysis for Reduction of Excess Adipose Tissue C A F c U t o r t v V r a a m * † ‡ D D A 2 ryolipolysis for Reduction of Excess Adipose Tissue ndrew A. Nelson, MD,* Daniel Wasserman, MD,† and Mathew M. Avram, MD, JD‡ Controlled cold exposure has long been reported to be a cause of panniculitis in cases such as popsicle panniculitis. Cryolipolysis is a new technology that uses cold exposure, or energy extraction, to result in localized panniculitis and modulation of fat. Presently, the Zeltiq cryolipolysis device is FDA cleared for skin cooling, as well as various other indications, but not for lipolysis. There is, however, a pending premarket notification for noninvasive fat layer reduction. Initial animal and human studies have demonstrated significant reductions in the superficial fat layer thickness, ranging from 20% to 80%, following a single cryolipolysis treatment. The decrease in fat thickness occurs gradually over the first 3 months following treatment, and is most pronounced in patients with limited, discrete fat bulges. Erythema of the skin, bruising, and temporary numbness at the treatment site are commonly observed following treatment with the device, though these effects largely resolve in approximately 1 week. 
To date, there have been no reports of scarring, ulceration, or alterations in blood lipid or liver function profiles. Cryolipolysis is a new, noninvasive treatment option that may be of benefit in the treatment of excess adipose tissue. Semin Cutan Med Surg 28:244-249 © 2009 Elsevier Inc. All rights reserved. KEYWORDS cryolipolysis, Zeltiq, non-invasive, body contouring, fat removal, cold panniculitis e c r n q a t k o c a k a e p a n P T s f at treatment and removal is a worldwide, billion-dollar cosmetic industry, and liposuction remains the most ommon surgical cosmetic procedure performed in the nited States.1 Although liposuction is an effective therapeu- ic option for the removal of fat and can be safely performed n an outpatient basis, it remains an invasive procedure. In ecent years, there has been a dramatic trend toward effec- ive, noninvasive procedures. Unfortunately, current nonin- asive fat treatments such as Endermologie (LPG Systems, alence, France), radiofrequency treatment, and lasers have esulted in only modest clinical improvements in the appear- nce of fat and cellulite.2-8 There is, therefore, a great demand nd need for an effective, selective, and noninvasive treat- ent option for excess adipose tissue. Division of Dermatology, UCLA Medical Center, Los Angeles, CA. Total Skin and Beauty Dermatology Center, Birmingham, AL. Dermatology Laser and Cosmetic Center, Wellman Center of Photomedi- cine, Massachusetts General Hospital, Boston, MA. r. Avram has received research grants from Candela Corporation and is a stock holder in Zeltiq Aesthetics, Inc. r. Nelson and Dr. Wasserman have no conflicts to disclose. ddress reprint requests to Mathew M. Avram, MD, JD, Dermatology Laser and Cosmetic Center, Wellman Center of Photomedicine, Massachusetts General Hospital, 50 Staniford St, Suite 250, Boston, MA 02114. E-mail: nmavram@partners.org 44 1085-5629/09/$-see front matter © 2009 Elsevier Inc. All rights reserved. 
doi:10.1016/j.sder.2009.11.004 Cryolipolysis is based on clinical observations that cold xposure, under the proper circumstances, can result in lo- alized panniculitis; this panniculitis ultimately results in the eduction and clearance of adipose tissue. Cold-induced pan- iculitis was initially described in infants, where it is fre- uently known as popsicle panniculitis.9,10 However, it has lso been observed in adult patients; for example, panniculi- is occurring after horseback riding in cold environments is nown as equestrian panniculitis.11 Exogenous application f cold, particularly with aggressive cryosurgery, is known to ause epidermal damage as well as damage to the underlying dipose tissue. Cryolipolysis attempts to use controlled fat cooling, also nown as energy extraction, to cause localized panniculitis nd fat reduction. By controlling and modulating the cold xposure, it could be possible to selectively damage the adi- ocytes, while avoiding damage to the overlying epidermis nd dermis. This would result in an effective, localized, and oninvasive treatment for excess adipose tissue. roposed Pathogenesis he exact pathogenesis by which cold results in adipose tis- ue removal is unknown. Case reports of infants suffering rom popsicle panniculitis and adults with equestrian pan- iculitis demonstrate that a perivascular inflammatory infil- mailto:mavram@partners.org t p i c l u r s l m o d p t o t c e b i c w p a c c i w a e a s b f T t u d o r f m t u t a s a i t d t t n C M c t a l t d a t i t o a P e v c p T c “ p b i c m d i l t t l f a d m 3 w b m s m n 3 t i w b f a a Cryolipolysis for reduction of excess adipose tissue 245 rate consisting of histiocytes and lymphocytes develops ap- roximately 24 hours after cold exposure. The inflammatory nfiltrate results in a lobular panniculitis. The inflammatory ells cause a rupture of the adipocytes, aggregation of the ipids, and the formation of small cystic spaces. 
This pannic- litis slowly resolves over the next several weeks, ultimately esulting in modulation of the fat without any persistent tis- ue damage or scarring. A similar mechanism of action has been proposed for cryo- ipolysis. In animal models, cold exposure results in inflam- ation, damage to the fat cells, and ultimately phagocytosis f the adipocytes.12 Immediately after the treatment, no fat amage is observed and the adipocytes are intact. Initial adi- ocyte damage is noted histologically at day 2, and increases hroughout the next month. It is believed that adipocyte ap- ptosis stimulates the initial inflammatory infiltrate, though he exact mechanism is not fully characterized; pig adipo- ytes in culture undergo apoptosis and necrosis following xposure to cool temperatures.13 At day 2 after treatment, pig iopsy samples demonstrate localized subcutaneous mixed nflammation, consisting of neutrophils and mononuclear ells, and the adipocytes remain unchanged. Over the next eek, the infiltrate becomes denser and an intense lobular anniculitis develops. The inflammation appears to peak at pproximately 14 days following treatment when the adipo- ytes are surrounded by histiocytes, neutrophils, lympho- ytes, and other mononuclear cells. During 14-30 days, the nflammatory infiltrate becomes more monocytic, consistent ith a phagocytic process. Macrophages begin to envelop nd digest the apoptotic adipocytes, thereby facilitating their limination from the body. As this process occurs, the aver- ge size of the adipocytes decrease, a wider range of adipocyte izes are observed, and the fibrous septae of the fat layer ecome widened. The actual elimination of the adipocytes rom the body occurs slowly over at least the next 90 days. he exact mechanism and pathway by which the phagocy- osed adipocytes are eliminated from the body are not fully nderstood at present. 
Ultimately, the lobules of fat cells ecrease in size, and the fibrous septae constitute a majority f the volume of the subcutaneous layer. Clinically, this cor- esponds to a decrease in the thickness of the subcutaneous at layer.12,14 These initial animal studies have helped to shape the likely echanism of cryolipolysis. However, it should be stressed hat the exact mechanism has not been fully elucidated. It is nclear why adipocytes are more sensitive to cold tempera- ure than other cell lines. It is also not fully established why dipocyte apoptosis occurs and how this leads to the ob- erved inflammatory infiltrate. Finally, once the adipocytes re phagocytosed and mobilized, the full mechanism of elim- nation is not well characterized. The adipocytes are thought o be mobilized via the lymphatic system, but it remains to be etermined how they are then eliminated or redistributed hroughout the body in response to cryolipolysis. As the echnology continues to be developed, future studies will eed to further investigate these issues. s linical Animal Studies anstein et al12 performed the initial exploratory studies of ryolipolysis in Yucatan pigs. In their article, they described he results of 3 different studies: an initial exploratory study, dosimetry study, and a study of treatment effect on serum ipid levels. The initial exploratory study used a cold copper applica- or, chilled by circulating antifreeze solution. The cooling evice was maintained at a constant temperature of �7°C, nd was applied to the Yucatan pig for times ranging from 5 o 21 minutes. The highest degree of clinical effect was noted n a treatment area on the buttock; 3.5 months after the single reatment, 80% of the superficial fat layer was removed (40% f total fat layer). Following the demonstration of efficacy with the copper pplicator, a prototype clinical device (Zeltiq Aesthetics Inc., leasanton, CA) was developed, which contained a thermo- lectric cooling element. 
This device allowed for the use of ariable, present plate temperatures during treatments; the old temperature was maintained at a constant level via tem- erature sensors imbedded within the treatment plates. reatments were performed with this device in either a “flat onfiguration” with a flat panel cooling the skin or in a folded configuration” in which the excess tissue was inched between 2 cooling panels, allowing for cooling on oth sides of the tissue. The tissue was exposed to cold rang- ng from 20°C to �7°C for 10 minutes. All sites treated with old exposure less than �1°C developed perivascular inflam- ation, panniculitis, and ultimately fat layer reduction. Fat amage was significantly greater at lower temperatures, and ncreased over time. In Manstein’s lipid study, no significant changes in the ipid profiles of the animals were noted immediately or at any ime point through 3 months post treatment. There was a emporary decrease in serum triglycerides immediately fol- owing the cold exposures, though this was attributed to asting before and during general anesthesia. A follow-up animal study was performed by Zelickson et l.14 In this study, 4 pigs were treated with the cryolipolysis evice. Three animals underwent a single cryolipolysis treat- ent, while the fourth pig underwent 7 treatments (90, 60, 0, 14, 7, and 3 days, as well as 30 min before euthanasia) ith the cryolipolysis device. About 25%-30% of the total ody surface of each animal was treated. Ultrasound assess- ents demonstrated a 33% reduction in the thickness of the uperficial fat layer following cryolipolysis. Pathologic speci- ens revealed an approximate reduction of 50% in the thick- ess of the superficial fat layer. Erythema lasting approximately 0 minutes developed in treatment areas. The skin became cool, hough not frozen, after treatment. There was no edema, bruis- ng, purpura, or scarring observed in the trial. 
Lipid panels ere performed for each animal at multiple time points; the aseline profile was after a 12 hour fast before treatment, with ollow-up lipid profiles performed 1 day, 1 week, and 1, 2, nd 3 months after treatment. There were no significant vari- tions in the lipid profiles of the animals throughout the tudy. w a e n c i t t H F ( d c t T e c t t i c 6 i t c t t m t Z o c i t e t c t l c h w s a m o p a o 2 t a b t a g p T g T t 246 A.A. Nelson, D. Wasserman, and M.M. Avram In the above animal studies, the cryolipolysis treatments ere well tolerated by the animals. Erythema of the treated reas was common. In the initial animal studies by Manstein t al,12 whitening, hardening, and freezing of the skin was oted in 30% of the treated areas. Superficial epidermal ne- rosis was observed in some of the frozen areas, with result- ng transient hypopigmentation following re-epithelializa- ion. However, no scarring or ulceration was noted in any of he animal studies. uman Clinical Studies ollowing the promising animal studies, the Zeltiq System Zeltiq Aesthetics Inc, Pleasanton, CA) was developed. This evice consists of a control console, with a treatment appli- ator attached by a cable. A thermal coupling gel is placed on he area to be treated, and the applicator is then applied. issue is drawn into the cup-shaped applicator with a mod- rate vacuum to optimally positioning the tissue between 2 ooling panels; this allows for more efficient cooling of the issue. A cooling intensity factor (CIF) is then selected by he treating clinician. The CIF is an index value represent- ng the rate of heat flux into or out of tissue opposite the ooling device. Treatment with the cold exposure for up to 0 minutes then begins. The energy extraction rate, or cool- ng, is controlled by sensors that monitor the heat flux out of he treated areas and is modulated by thermoelectric cooling ells. 
Following completion of the treatment, the system au- omatically stops the cold exposure and the clinician releases he vacuum. Depending on the surface area to be treated, ultiple applications may be necessary to effectively expose he entire area to cryolipolysis. It is important to note that the eltiq device is presently FDA cleared for skin cooling and Figure 1 A representative example of clinical improve reduction of the flanks (ie, love handles). The patient’s le control. The top pictures show the baseline, while the bo after treatment. The patient’s weight on the baseline an obtained and used with permission of Flor Mayoral, MD.15 ther various indications. However, the Zeltiq device is not urrently FDA cleared for lipolysis, although there is a pend- ng premarket notification for noninvasive fat layer reduc- ion. A multicenter, prospective, nonrandomized clinical study valuating the use of cryolipolysis for fat layer reduction of he flanks (ie, love handles) and back (ie, back fat pads) was onducted at 12 sites.15 Patients underwent cryolipolysis reatment to 1 area, while a symmetric, contralateral area was eft untreated to serve as a control for observing clinical effi- acy. An interim subgroup analysis of all patients in the “love andle” group, 32 patients, was performed. Clinical efficacy as determined at 4 months post-treatment using visual as- essment with digital photography, physician assessment, nd subject satisfaction. Most patients had a clinical improve- ent with a visible contour change, as assessed by physician bservation and digital photography (Fig. 1). A subset of 10 atients underwent pre- and post-treatment ultrasound im- ging. Of these 10 patients, all had a decrease in the thickness f their fat layer, with an average reduction in thickness of 2.4%. Importantly, regardless of the assessment protocol, he best cosmetic results were achieved in those patients with modest, discrete fat bulge. 
The treatment was well tolerated y patients with no adverse events related to the device or reatment. Kaminer et al,16 demonstrated that cryolipolysis results in visible cosmetic improvement in the flank/love handle re- ion. A blinded comparison of preprocedure and 6 month ostprocedure photographs was performed on 50 subjects. hree physicians specializing in dermatology, cosmetic sur- ery, or plastic surgery performed the photographic review. he physicians were able to accurately differentiate between he pre- and post-photographs in 89% of the cases. When the ollowing 1 treatment with cryolipolysis for fat layer was treated, while the right side served as an untreated ictures demonstrate the clinical improvement 4 months nth follow-up day remained unchanged. This figure is ment f ft side ttom p d 4 mo e t c T c s o i e i c m c d p l a f i t c p i b a f a S A w I t o c e a a c r s t C fl i u l s i d s r f h n t Cryolipolysis for reduction of excess adipose tissue 247 valuation was limited to those subjects who maintained heir original weight, �5 lb after the procedure, the physi- ians were able to accurately differentiate 92% of the cases. his study demonstrates that the improvement following ryolipolysis treatment is clinically apparent on visual in- pection, further documenting the usefulness of this technol- gy. A feasibility study of using cryolipolysis to reduce abdom- nal fat is currently ongoing.17 A total of 42 subjects were nrolled in this study. Symmetric abdominal fat bulges, typ- cally to the left and right of the umbilicus, were treated with ryolipolysis. An interim analysis of the subjects’ self-assess- ents indicated that 79% (31 of 39) subjects reported clini- al improvement within the first 2-4 months after the proce- ure (Fig. 2). Further clinical end points, including blinded hysician assessments and ultrasound measurements of fat ayer thickness, have not been reported as of yet. 
The interim nalysis appears to support that cryolipolysis may be effective or noninvasive contouring of abdominal fat and body sculpt- ng. Cryolipolysis seems to be an effective treatment option for he reduction of excess adipose tissue, as shown in these linical studies. It is important to note that the clinical im- rovements were most pronounced in patients with local- zed, discrete fat bulges. The technology does not appear to e as effective in patients with significant skin laxity or who re obese. Thus, cryolipolysis is an effective treatment option or the reduction of fat, particularly when the proper patients re selected for treatment. Figure 2 A representative example of clinical improve reduction of the abdomen. The patient underwent a s application to the right and 1 application to the left side surface area. The top pictures show the baseline, while months after treatment. The patient’s weight at the 4 m day. This figure is obtained and used with permission of Ivan afety Profile s with any new technology, it is important to establish hether the device results in any significant adverse events. n the previous clinical studies, the device has been well olerated by the subjects. Patients typically develop erythema f the treatment area, lasting up to a few hours following ryolipolysis. As the device uses a vacuum to increase clinical fficacy, patients may also develop bruising of the treatment rea, which may last approximately 1 week. The treated skin lso becomes cold and firm following cryolipolysis. In all linical studies to date, no ulceration or scarring has been eported. Cryolipolysis has been reported by human subjects to re- ult in a temporary dulling of sensation and numbness in reated areas. To better characterize this phenomenon, oleman et al18 performed cryolipolysis on 10 subjects with ank fat bulges. Following cryolipolysis, a 20.4% reduction n the thickness of the fat layer was observed, as assessed by ltrasound. 
Thus, the patients had achieved an effective cryo- ipolysis treatment. These subjects underwent neurologic as- essment by a board-certified neurologist during the study, ncluding light touch evaluated with a soft tissue, two-point iscrimination, temperature sensitivity (cold temperature ense), and pain sensitivity (assessed with a pinprick). Neu- ologic assessments were performed at baseline and weekly ollowing treatment. One subject underwent skin biopsy for istologic analysis of nerve-fibers. Patients reported numb- ess in 24 of the 25 treated sites (96% of treated sites), hough by 1 week following treatment the numbness had ollowing 1 treatment with cryolipolysis for fat layer reatment with cryolipolysis, though 2 applications (1 required to treat the entire abdomen due to the larger ttom pictures demonstrate the clinical improvement 4 llow-up day had increased by 3.5 lb from the baseline ment f ingle t ) were the bo onth fo Rosales-Berber, MD. l p a t w t w l s m b d t i p d t l l Z i n f 4 h o b w o l s b n m d i f 9 a t i l d c o p t f i b d o I i c s t w D C c i a s a t a p a o m t s o o h t f e a i p e u o t s l i h w p s t m o f s a f i s c f a t t i 248 A.A. Nelson, D. Wasserman, and M.M. Avram argely resolved. Transient reductions in sensation were re- orted in 6 of 9 patients (67%), most commonly manifested s reductions in pain sensitivity. However, reductions in light ouch, 2 point discrimination, and temperature sensitivity ere also reported by a minority of patients. These reduc- ions in neurologic sensation lasted between 1 and 6 weeks, ith a mean duration of 3.6 weeks. All reductions in neuro- ogic sensation had resolved by 2 months after the cryolipoly- is treatment. No changes were noted in the nerve biopsy 3 onths after the cryolipolysis compared with the baseline iopsy. 
These results indicate that cryolipolysis results in a ecrease in sensation of treated areas, but this altered sensa- ion is transient and appears to resolve without any further ntervention. As previously discussed, the exact mechanism of cryoli- olysis is not well understood. It is possible that as the fat is estroyed and phagocytosed, the fat could be released into he blood. Many of the clinical studies have therefore ana- yzed the patient’s lipid profiles and liver function tests fol- owing cryolipolysis. In the animal studies by Manstein and elickson, no significant changes in the lipid profiles follow- ng cryolipolysis were observed. In all human studies to date, o clinically significant alterations in lipid profiles or liver unction tests have been observed. Klein et al,19 reported on 0 patients with bilateral fat bulges on their flanks (ie, love andles) treated with cryolipolysis. The patients were treated n 1 or 2 sites on each flank, depending on the size of the fat ulge, to a maximum of 4 treatment applications. Patients ere treated at a CIF of 42 for 30 minutes. Lipid values were btained, including triglycerides; total cholesterol; and very- ow-density lipoprotein, low-density lipoprotein, and high-den- ity lipoprotein (HDL) cholesterol. Additionally, liver-related lood tests, including: aspartate aminotransferase, alanine ami- otransferase, alkaline phosphatase, total bilirubin, and albu- in were obtained before treatment. Follow-up values were etermined 1, 4, 8, and 12 weeks after treatment. Triglycer- de values were noted to increase slightly in the 12 weeks ollowing cryolipolysis, from a mean of 82.1 to a mean of 3.2; this increase was not statistically significant (P � 0.22) nd the mean value remained well below the upper limit of he reference range. There was, however, a statistically signif- cant decrease in HDL cholesterol in the first few weeks fol- owing cryolipolysis (P � 0.0296), though the HDL values id return to baseline by 12 weeks. 
No statistically significant hanges from baseline for any of the liver function tests were bserved following cryolipolysis. These initial safety reports support that the Zeltiq cryoli- olysis device results in a significant reduction in fat layer hickness with no significant adverse events. During the in- ormed consent process before cryolipolysis treatment, it is mportant to emphasize known risks including erythema, ruising, and temporary altered sensation. To date, there oes not appear to be significant risk of altered lipid profiles r liver function tests associated with cryolipolysis treatment. t remains to be determined whether patients with rare, cold- nduced dermatologic conditions, such as cryoglobulinemia, old urticaria, or paroxysmal cold hemoglobinuria, can be afely treated with cryolipolysis. Patients with a known his- n ory of cold-induced disease should probably not be treated ith the cryolipolysis device until further data are available. iscussion ryolipolysis is a novel procedure, which uses controlled old exposure, known as energy extraction, to produce non- nvasive, effective, and selective damage to adipocytes. In nimal and human clinical studies, cryolipolysis has been hown to result in significant improvement in the clinical ppearance of fat. Additionally, reductions in the thickness of he subcutaneous fat layer of up to 50% can occur following single cryolipolysis treatment. Clinical studies have shown otential efficacy in the treatment of excess back fat, flank fat, nd abdominal fat; the potential efficacy of cryolipolysis in ther treatment areas and for the treatment of cellulite re- ains to be determined. In these initial studies, cryolipolysis reatments have been well tolerated by patients with tran- ient, mild adverse events such as erythema and bruising ccurring in treated patients. No cases of ulceration, scarring, r significant changes in lipid profiles and liver function tests ave been reported following cryolipolysis. 
Cryolipolysis herefore appears to be a safe and effective treatment option or reduction of excess adipose tissue. The exact mechanism of cryolipolysis remains to be fully lucidated. It has been shown that cold exposure results in poptosis of the adipocytes, followed by an inflammatory nfiltrate. Ultimately, the inflammatory infiltrate results in hagocytosis and mobilization of the treated adipocytes. The xact mechanism and pathway for this fat elimination are nclear. No significant alterations in blood lipid profiles, ther than transient decreases in HDL values, or liver func- ion tests have been observed following cryolipolysis. Further tudies to determine the exact mechanism of action for cryo- ipolysis remains an active area of research. Although cryolipolysis is a promising new technology, it is mportant to bear in mind a few potential limitations. In the uman clinical studies, results were most visible in patients ith discrete, localized fat bulges. Cryolipolysis does not ap- ear to be as effective in obese patients or patients with excess kin laxity. It is unclear whether the device itself is less effec- ive in these patients, or whether the potential improve- ent associated with cryolipolysis treatment is harder to bserve in these patients. Additionally, the improvement ollowing cryolipolysis is not immediate, but rather occurs lowly over the course of 2-3 months. Finally, the currently vailable data seem to support that cryolipolysis is most ef- ective for localized, discrete fat bulges. Thus, patients seek- ng large scale fat removal, which can be achieved with lipo- uction, may not achieve their desired outcomes with ryolipolysis. It is therefore important for physicians to care- ully select potential cryolipolysis treatment patients, as well s educate them regarding their expected outcomes and po- ential limitations. Cryolipolysis is a new, selective, effective, and noninvasive reatment option for excess adipose tissue. 
While the device s currently FDA cleared only for skin cooling, a premarket otification application for lipolysis is pending. The device is p n a c e i m R 1 1 1 1 1 1 1 1 1 1 Cryolipolysis for reduction of excess adipose tissue 249 articularly appealing given that it is noninvasive, requires ot much or no downtime for patients following treatment, nd does not require local or regional anesthesia. Ongoing linical studies will help to determine the full potential and fficacy of this device. Cryolipolysis appears to be a promis- ng new technology for safe, effective, and noninvasive treat- ent of fat. eferences 1. American Society for Aesthetic Plastic Surgery Annual Statistics. Available at: http://www.cosmeticplasticsurgerystatistics.com/statistics. html#2007-NEWS. Accessed November 15, 2009 2. Gulec AT: Treatment of cellulite with LPG endermologie. Int J Derma- tol 48:265-270, 2009 3. Moreno-Moraga J, Valero-Altés T, Riquelme AM, et al: Body contouring by non-invasive transdermal focused ultrasound. Lasers Surg Med 39: 315-323, 2007 4. Goldberg DJ, Fazeli A, Berlin AL: Clinical, laboratory, and MRI analysis of cellulite treatment with a unipolar radiofrequency device. Dermatol Surg 34:204-209, 2008 5. Trelles MA, van der Lugt C, Mordon S, et al: Histological findings in adipocytes when cellulite is treated with a variable-emission radiofre- quency system. Lasers Med Sci (in press) 6. Nootheti PK, Magpantay A, Yosowitz G, et al: A single center, random- ized, comparative, prospective clinical study to determine the efficacy of the VelaSmooth system versus the Triactive system for the treatment of cellulite. Lasers Surg Med 38:908-912, 2006 7. Lach E: Reduction of subcutaneous fat and improvement in cellulite appearance by dual-wavelength, low-level laser energy combined with vacuum and massage. J Cosmet Laser Ther 10:202-209, 2008 8. Sadick N, Magro C: A study evaluating the safety and efficacy of the VelaSmooth system in the treatment of cellulite. 
J Cosmet Laser Ther 9:15-20, 2007
9. Rotman H: Cold panniculitis in children. Arch Dermatol 94:720-721, 1966
10. Duncan WC, Freeman RG, Heaton CL: Cold panniculitis. Arch Dermatol 94:722-724, 1966
11. Beacham BE, Cooper PH, Buchanan CS, et al: Equestrian cold panniculitis in women. Arch Dermatol 116:1025-1027, 1980
12. Manstein D, Laubach H, Watanabe K, et al: Selective cryolysis: A novel method of non-invasive fat removal. Lasers Surg Med 40:595-604, 2009
13. Preciado JA, Allison JW: The effect of cold exposure on adipocytes: Examining a novel method for the noninvasive removal of fat. Cryobiology 57:327, 2008
14. Zelickson B, Egbert BM, Preciado J, et al: Cryolipolysis for noninvasive fat cell destruction: Initial results from a pig model. Dermatol Surg 35:1462-1470, 2009
15. Dover J, Burns J, Coleman S, et al: A prospective clinical study of noninvasive cryolipolysis for subcutaneous fat layer reduction—Interim report of available subject data. Presented at the Annual Meeting of the American Society for Laser Medicine and Surgery, April 2009, National Harbor, Maryland
16. Kaminer M, Weiss R, Newman J, et al: Visible cosmetic improvement with cryolipolysis: Photographic evidence. Presented at the Annual Meeting of the American Society for Dermatologic Surgery, 2009, Phoenix, AZ
17. Rosales-Berber IA, Diliz-Perez E: Controlled cooling of subcutaneous fat for body reshaping. Presented at the 15th World Congress of the International Confederation for Plastic, Reconstructive and Aesthetic Surgery, 2009, New Delhi, India
18. Coleman SR, Sachdeva K, Egbert BM, et al: Clinical efficacy of noninvasive cryolipolysis and its effects on peripheral nerves. Aesthet Plast Surg 33:482-488, 2009
19. Klein K, Zelickson B, Riopelle J, et al: Non-invasive cryolipolysis for subcutaneous fat reduction does not affect serum lipid levels or liver function tests.
Lasers Surg Med (in press)

work_uvyr4pfbrnfjxi2qvh5ju355ni ---- stock_00617.indd

ABSTRACT

We present quantitative analyses of recent large rock falls in Yosemite Valley, California, using integrated high-resolution imaging techniques. Rock falls commonly occur from the glacially sculpted granitic walls of Yosemite Valley, modifying this iconic landscape but also posing significant potential hazards and risks. Two large rock falls occurred from the cliff beneath Glacier Point in eastern Yosemite Valley on 7 and 8 October 2008, causing minor injuries and damaging structures in a developed area. We used a combination of gigapixel photography, airborne laser scanning (ALS) data, and ground-based terrestrial laser scanning (TLS) data to characterize the rock-fall detachment surface and adjacent cliff area, quantify the rock-fall volume, evaluate the geologic structure that contributed to failure, and assess the likely failure mode. We merged the ALS and TLS data to resolve the complex, vertical to overhanging topography of the Glacier Point area in three dimensions, and integrated these data with gigapixel photographs to fully image the cliff face in high resolution. Three-dimensional analysis of repeat TLS data reveals that the cumulative failure consisted of a near-planar rock slab with a maximum length of 69.0 m, a mean thickness of 2.1 m, a detachment surface area of 2750 m2, and a volume of 5663 ± 36 m3. Failure occurred along a surface-parallel, vertically oriented sheeting joint in a clear example of granitic exfoliation. Stress concentration at crack tips likely propagated fractures through the partially attached slab, leading to failure.
Our results demonstrate the utility of high-resolution imaging techniques for quantifying far-range (>1 km) rock falls occurring from the largely inaccessible, vertical rock faces of Yosemite Valley, and for providing highly accurate and precise data needed for rock-fall hazard assessment.

INTRODUCTION

Yosemite Valley is a ~1-km-deep, glacially carved canyon in the Sierra Nevada mountains of California that hosts some of the largest granitic rock faces in the world (Fig. 1). These steep walls are sculpted over time by rock falls that typically occur as exfoliation-type failures along surface-parallel sheeting joints (Matthes, 1930; Huber, 1987). Thick (≥100 m) talus accumulations flanking the cliffs record substantial rock-fall activity since the last glacier retreated from Yosemite Valley ca. 17 ka (Wieczorek and Jäger, 1996; Wieczorek et al., 1999). Over 700 rock falls and other slope movements have been documented in Yosemite National Park since A.D. 1857, and some of these events have resulted in several fatalities, numerous injuries, and damage to infrastructure (Wieczorek and Snyder, 2004). Recognition that rock falls pose a significant natural hazard and risk to the 3–4 million annual visitors has prompted detailed documentation and investigation of rock-fall triggering mechanisms, causative factors, and runout dynamics to better assess geological hazard and risk in Yosemite Valley (Wieczorek and Jäger, 1996; Wieczorek and Snyder, 1999, 2004; Wieczorek et al., 1998, 1999, 2000, 2008; Guzzetti et al., 2003; Stock and Uhrhammer, 2010).
Quantifying rock-fall events is a critical component of hazard analysis because (1) particle shapes, volumes, source area locations, and cliff surface geometry influence rock-fall trajectories (e.g., Okura et al., 2000; Guzzetti et al., 2003; Wieczorek et al., 2008), (2) computer programs that simulate rock-fall runout utilize these data (e.g., Jones et al., 2000; Agliardi and Crosta, 2003; Guzzetti et al., 2002, 2003; Lan et al., 2010), and (3) accurate and precise rock-fall volumes are needed to develop reliable probabilistic hazard assessments (Dussauge-Peisser et al., 2002; Dussauge et al., 2003; Guzzetti et al., 2003; Hantz et al., 2003; Malamud et al., 2004; Brunetti et al., 2009). However, in Yosemite Valley such quantification often is difficult due to the sheer scale of the rock faces, the relative inaccessibility of these faces, and the hazards associated with fieldwork in active rock-fall areas. Rock-fall volumes have traditionally been estimated using the product of detachment surface areas and an assumed failure depth or thickness; this value can be compared to volume estimates of fresh talus beneath the source area. Both techniques result in large volumetric uncertainties, typically on the order of ±20% and occasionally much larger (Wieczorek and Snyder, 2004). In Yosemite, these techniques are further limited by the inaccessibility of the vertical to overhanging ~1-km-tall cliffs. Volume estimates from fresh talus may be imprecise because rock masses often fragment on impact, partially disintegrating into dust that can drift far from the impact area (Wieczorek et al., 2000), and because rock falls can mobilize talus from previous events (Wieczorek and Snyder, 2004). The resulting uncertainties propagate to additional uncertainties in the accuracy of hazard assessment.

For permission to copy, contact editing@geosociety.org © 2011 Geological Society of America
Geosphere; April 2011; v. 7; no. 2; p. 573–581; doi: 10.1130/GES00617.1; 8 figures; 1 table; 1 supplemental file; 1 animation.

High-resolution three-dimensional imaging and analysis of rock falls in Yosemite Valley, California

Greg M. Stock1,*, Gerald W. Bawden2, Jimmy K. Green3, Eric Hanson4, Greg Downing4, Brian D. Collins5, Sandra Bond2, and Michael Leslar6
1 National Park Service, Yosemite National Park, 5083 Foresta Road, Box 700, El Portal, California 95318, USA
2 U.S. Geological Survey, 3020 State University Drive East, Modoc Hall Suite 4004, Sacramento, California 95819, USA
3 Optech International, Inc., 7225 Stennis Airport Drive, Suite 400, Kiln, Mississippi 39556, USA
4 xRez Studio, 12818 Dewey Street, Los Angeles, California 90066, USA
5 U.S. Geological Survey, 345 Middlefield Road, MS973, Menlo Park, California 94025, USA
6 Optech International, 300 Interchange Way, Vaughan, Ontario L4K 5Z8, Canada, and Department of Earth and Space Science Engineering, York University, 4700 Keele Street, North York, Ontario M3J 1P3, Canada
* E-mail: greg_stock@nps.gov
Downloaded from geosphere.gsapubs.org on April 1, 2011

New imaging tools offer opportunities for characterizing large rock faces and reducing uncertainties in their measurement and analysis. High-resolution (gigapixel) digital photography and airborne and ground-based terrestrial laser scanning (light detection and ranging [LiDAR]) are emerging remote sensing techniques that enable precise location, measurement, monitoring, and modeling of mass movement events (e.g., McKean and Roering, 2004; Derron et al., 2005; Lim et al., 2005; Lato et al., 2009; Sturzenegger and Stead, 2009; Abellán et al., 2010; Lan et al., 2010).
Terrestrial laser scanning is a particularly valuable tool for quantifying rock falls, identifying failure mechanisms, assessing slope stability, and monitoring vertical cliff faces, especially when repeat laser scans permit change detection (e.g., Rosser et al., 2005; Abellán et al., 2006, 2009; Jaboyedoff et al., 2007; Collins and Sitar, 2008; Rabatel et al., 2008; Oppikofer et al., 2008, 2009; Arnesto et al., 2009; Dunning et al., 2009). Importantly, these techniques provide noninvasive methods for rock-fall analysis in areas that are technically challenging or otherwise hazardous to access.

We collected airborne laser scanning (ALS) data, ground-based terrestrial laser scanning (TLS) data, and gigapixel photographs for the cliffs beneath Glacier Point in eastern Yosemite Valley (Fig. 1A) in September 2006, October 2007, and May 2008, respectively. Subsequently, two large rock falls occurred within the imaged area, the first at 13:30 Pacific Standard Time (PST) on 7 October 2008, and a second, larger rock fall from the same area at 05:55 PST on 8 October 2008 (Fig. 1B). Both rock falls occurred adjacent to an area of previous instability that had failed in August and September of 2001 (Wieczorek and Snyder, 2004). On both occasions in October 2008, the rock masses free fell ~220 m, impacted a prominent east-dipping joint-controlled ledge, and fragmented into numerous boulders and smaller debris. Much of this debris traveled down a talus slope into Curry Village (Fig. 1B), causing minor injuries and damaging or destroying 25 buildings. In terms of structural damage, this was the most destructive rock fall in the history of Yosemite National Park. In the ten days following the second rock fall, we repeated the TLS and high-resolution photography of Glacier Point.
The resulting pre– and post–rock-fall data, integrated to maximize data resolution, allow us to image the failed rock mass in three dimensions, quantify the rock-fall volume, evaluate the geologic structure that contributed to instability, and assess the likely failure mode.

METHODS

Gigapixel Photography

Gigapixel photography is a digital mosaic approach used to achieve image resolution that far surpasses that of conventional digital photography, i.e., creation of individual images consisting of ≥1 billion pixels, 100 times the resolution of a standard 10-MP digital camera (e.g., Frenkel, 2010). Gigapixel photography uses a robotic control device that divides a field of view into several hundred positions that are shot with a telephoto lens and then stitched together to create a single high-resolution digital image. We obtained baseline gigapixel photography for Yosemite Valley by simultaneously photographing the rock faces from 20 locations along the valley rim (http://www.xrez.com/case-studies/national-parks/yosemite-extreme-panoramic-imaging-project/). At each location we collected ~400–700 high-resolution overlapping digital photographs using a motion-controlled camera tripod (GigaPan™ unit) and a Canon G9 camera with a 2× extender (effective 300 mm focal length). We then aligned the overlapping photographs, stitched them together using PTGui™ software, and rendered the stitched images to create 20 individual gigapixel panoramic images of Yosemite Valley. Images can be viewed online at http://gigapan.org/gigapans/most_recent/?q=xrez and at http://www.xrez.com/yose_proj/Yose_result.html. As part of this process, we captured a 3.7-gigabyte panoramic photograph (63,232 × 20,224 pixel resolution) of the cliffs beneath Glacier Point on 28 May 2008 from a position on the opposite valley wall (Figs. 1 and 2). Two days after the 8 October 2008 rock fall, we repeated the high-resolution photography of Glacier Point, creating a detailed panoramic image of the rock-fall source area and surrounding cliff before and after the rock falls occurred (Fig. 2).

Figure 1. The 8 October 2008 rock fall from Glacier Point. (A) Shaded relief image derived from airborne laser scanning data showing the location of Glacier Point in eastern Yosemite Valley. Blue box delineates imaging study area. Yellow arrow shows the photo perspective in (B) from the northeast face of Half Dome, and black arrow shows the photo perspective in Figure 2A. (B) Photograph of the 8 October 2008 rock fall. Yellow box (150 × 200 m) encloses the rock-fall detachment area, white circle marks the location of the ground-based terrestrial laser scanner (TLS) in Stoneman Meadow. Dust cloud results from fragmentation of rock-fall debris and marks the approximate extent of talus deposition. Cliff height is 980 m. (Photo: Brad Benter)

Airborne Laser Scanning

In September of 2006, the National Center for Airborne Laser Mapping (NCALM), in collaboration with the National Park Service, collected ALS data for Yosemite Valley and vicinity (Fig. 1A), an area of ~43 km2. ALS data were collected with an Optech 1233 ALTM scanner mounted in a turbocharged twin engine Cessna 337. Above–ground-level flying heights varied from less than 100 m to over 2 km, with an average range of 1050 m. The Glacier Point area (blue box in Fig. 1) consists of ~9.1 million data points, corresponding to a horizontal plane point spacing of ~60 cm.
The ALS point-cloud data and digital elevation models (DEMs) derived from them resolve the lower angle topography and upward-facing surfaces in the Glacier Point area in high resolution. However, because ALS is a downward scanning system it cannot fully resolve topographic overhangs where there may be three or more surface measurements for a given x-y coordinate: the uppermost surface, the overhang surface, and the cliff face underneath the overhang. Typically, DEMs created from ALS point-cloud data only use the uppermost position, and are therefore unable to image and characterize vertical to overhanging surfaces, such as the 2008 rock-fall detachment surface. In order to fully resolve the complex topography of the Glacier Point area, we expanded the spatial resolution by also collecting ground-based TLS data.

Ground-Based Terrestrial Laser Scanning

We collected TLS data for the cliffs below Glacier Point from a position on the northern edge of Stoneman Meadow, 1.23 km line-of-sight distance from the rock-fall detachment surface (Fig. 1). We used an Optech ILRIS-3DER extended range scanner to collect pre–rock-fall TLS data on 11 October 2007, and post–rock-fall TLS data on 18 October 2008. The scan area of ~612,000 m2 covered nearly the entire vertical extent of the cliff (728 m; Fig. 3). Glacier Point is a challenging TLS target due to high incident scanning angles from the valley floor, dark rock surface staining, and smooth granitic surface properties that can reflect the laser signal away from the scanner, all of which tend to yield low signal return. Furthermore, laser attenuation reduced the signal from surfaces >1 km from the scanner. The TLS scans consisted of ~8 million data points, corresponding to a vertical plane point spacing at the rock-fall detachment area of ~50 cm. Because of the short time interval between the 7 and 8 October rock falls (16.5 h), we were unable to repeat the TLS surveys until after the second event; thus, the rock-fall volume and other metrics reported here are cumulative for both events.

Figure 2. High-resolution digital photography of the rock-fall detachment area. (A) Gigapixel panoramic image of Glacier Point. Cliff height is 980 m. Yellow box (150 × 200 m) encloses the rock-fall detachment area shown in (B) and (C). (B) Zoomed-in view of the rock-fall detachment area in May 2007. 2008 rock-fall detachment areas shown by white dashed lines; 2001 rock-fall detachment surfaces shown by white dotted lines. (C) Same view showing rock-fall detachment area after 7 and 8 October 2008 rock falls. Light-colored “scar” results from removal of water-stained and lichen-covered rock.

Figure 3. Ground-based terrestrial laser scanning of the rock-fall detachment area. Photograph (A) and terrestrial laser scanning (TLS) point-cloud image (B) of the scanned area of Glacier Point. Yellow boxes (150 × 200 m) enclose the rock-fall detachment area, and white dashed lines mark the rock-fall detachment area. Dark areas in the TLS point-cloud data are due to variations in reflective properties of the cliff face and to laser attenuation beginning at distances >1 km. Cliff height is 980 m.

Integrating Imaging Techniques

To aid our three-dimensional characterization of the rock-fall detachment surface and adjacent cliff area, we integrated the gigapixel photographs with the LiDAR data using 3D point-cloud meshing and 3D animation software. As described above, ALS point-cloud data do not accurately resolve vertical to overhanging cliffs because they cannot capture surfaces with multiple z values for one x-y position.
In contrast, TLS data accurately resolve vertical and overhanging surfaces, but our TLS point-cloud data have large gaps due to shielding of lower angle surfaces from the scanning position on the valley floor, and also due to laser attenuation beginning at distances >1 km (Fig. 3). To compensate for these data gaps, we merged the ALS and TLS point clouds into a single point cloud using a best-fit alignment algorithm described below (Fig. 4). By merging the ALS and TLS point clouds, we are able to resolve the complex topographic surfaces of the Glacier Point area in three dimensions.

Once the ALS and TLS data were merged, we projected the gigapixel photograph of Glacier Point onto an interpolated surface created from the point clouds. To do this, we imported the merged point cloud into VRMesh™ 5.0 and interpolated surfaces from the point cloud as described below (Fig. 5A). We then imported the surface model in Maya™ 3D animation software as an exported object file from VRMesh and properly scaled and positioned it. We chose a smoothing angle to create shading along the surface, then applied texture coordinate mapping onto the surface. Finally, we projected a texture map derived from the gigapixel panoramic photograph onto the surface, yielding a three-dimensional form of the photographic imagery that reveals the morphology, structure, and texture of the cliff below Glacier Point in high resolution (Fig. 5B).

Volumetric Analysis

Repeat TLS scans before and after the rock falls occurred allow us to precisely calculate the volume of the failed rock mass. To do so, we aligned the 2007 and 2008 TLS point-cloud data using InnovMetric PolyWorks™ software (InnovMetric, 2010), first using manual point-pair matching and then using the surface-to-surface iterative closest point algorithm IMAlign.
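The point-pair matching and iterative-closest-point routines in PolyWorks are proprietary, but the least-squares rigid transform at the core of both steps can be sketched with the Kabsch algorithm. This is an illustrative sketch, not the paper's actual workflow; the function name and toy data are invented for the example.

```python
import numpy as np

def rigid_align(source, target):
    """Least-squares rigid transform (R, t) mapping paired source points
    onto target points (Kabsch algorithm); both arrays are (N, 3)."""
    src_c = source.mean(axis=0)
    tgt_c = target.mean(axis=0)
    H = (source - src_c).T @ (target - tgt_c)      # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))         # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = tgt_c - R @ src_c
    return R, t

# Toy check: recover a known 5-degree rotation and small translation,
# mimicking one alignment step between two scan epochs.
rng = np.random.default_rng(0)
pts = rng.random((100, 3)) * 100.0
th = np.radians(5.0)
R_true = np.array([[np.cos(th), -np.sin(th), 0.0],
                   [np.sin(th),  np.cos(th), 0.0],
                   [0.0,         0.0,        1.0]])
t_true = np.array([0.10, -0.20, 0.05])
moved = pts @ R_true.T + t_true
R, t = rigid_align(pts, moved)
print(np.allclose(R, R_true), np.allclose(t, t_true))
```

In a full ICP loop this solve alternates with a nearest-neighbor search that re-pairs the two clouds before each new transform.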
This routine creates a best-fit surface through the point-cloud data for each time period and then uses a least-square inversion approach to minimize the misfit between the two epochs. We aligned on common cliff-face data points outside of the rock-fall detachment area.

As described above, variations in reflective properties of the cliff face and laser attenuation produced data gaps in the TLS point clouds (Fig. 3B). To fill these gaps, we generated surface models using three different interpolation methods: kriging, triangular irregular network (TIN), and inverse distance to power (IDP). Kriging and IDP use a weighted distribution of neighboring points to project the cliff surface across data gaps. For both methods, we used a linear search of 10 m and 0.25 m spot spacing; for kriging we used zero power and for IDP we used a power of 2, both calculated with Surfer™ Version 8 software (Golden Software, 2010). We also created a TIN model, which connects triangular surfaces between adjacent points. All three approaches yield reasonable rock surface models because there are very few data points associated with vegetation on the cliff face; however, kriging and IDP more effectively model data gaps by incorporating neighboring points. To take advantage of the interpolation routines and still account for the overhanging character of the rock-fall detachment area, we rotated the xyz coordinate system from x into the cliff, y along the cliff face, and z up, to a coordinate system where x-y are in the cliff face and z is perpendicular to the cliff face. Once these surface models were created, we calculated the volume change at the rock-fall source area between the 2007 and 2008 models using Applied Imagery Quick Terrain Modeler™ (Applied Imagery, 2010) and visualized and assessed the data misfit with LiDARViewer (Kreylos et al., 2008).
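Surfer's gridders and Quick Terrain Modeler's volume tool are proprietary, but the two operations described above, inverse-distance-to-a-power gridding and volume change by differencing two gridded surfaces, can be sketched as below. This is a minimal illustration under the stated search radius (10 m) and power (2); the function names, toy grid, and toy "slab" are invented for the example.

```python
import numpy as np

def idw_grid(xy, z, grid_x, grid_y, power=2.0, radius=10.0):
    """Inverse-distance-to-a-power interpolation of scattered points (xy, z)
    onto a regular grid, using only neighbors within `radius` of each node
    (cf. the 10 m search and power of 2 used with Surfer in the text)."""
    gx, gy = np.meshgrid(grid_x, grid_y)
    out = np.full(gx.shape, np.nan)
    for i in range(gx.shape[0]):
        for j in range(gx.shape[1]):
            d = np.hypot(xy[:, 0] - gx[i, j], xy[:, 1] - gy[i, j])
            near = d < radius
            if not near.any():
                continue                     # data gap too wide to fill
            dn = d[near]
            if dn.min() < 1e-12:             # node coincides with a data point
                out[i, j] = z[near][np.argmin(dn)]
            else:
                w = 1.0 / dn ** power
                out[i, j] = np.sum(w * z[near]) / np.sum(w)
    return out

def volume_change(surf_before, surf_after, cell_size):
    """Volume difference (m^3) between two gridded surfaces; negative
    values indicate material loss, as at a rock-fall detachment area."""
    return np.nansum(surf_after - surf_before) * cell_size ** 2

# Toy example: a flat cliff face (in cliff-parallel coordinates) loses a
# 2.1-m-thick slab over a 5 m x 5 m patch of a 0.25 m grid.
x = y = np.arange(0.0, 10.0, 0.25)
xx, yy = np.meshgrid(x, y)
before = np.zeros_like(xx)
after = before.copy()
after[(xx < 5.0) & (yy < 5.0)] -= 2.1
print(volume_change(before, after, 0.25))   # ≈ -2.1 * 5 * 5 = -52.5
```

The coordinate rotation described in the text matters here: gridding is done with z perpendicular to the cliff face, so the overhanging wall becomes a single-valued surface that a 2-D interpolator can handle.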
Although the alignment precision for the two TLS data sets is on the order of centimeters, there is additional uncertainty in the volumetric change associated with the interpolation methods. We assessed this uncertainty by calculating volume changes for five circular areas (55.4 m diameter, the diameter of a circle with the same approximate area as the rock-fall detachment surface) on the cliff adjacent to, but outside of, the rock-fall detachment area (Fig. 6A). The high-resolution photographs confirm that these areas did not experience rock falls, and thus did not change volume, between the 2007 and 2008 scans. Determining volume changes for these areas provides a measure of volumetric uncertainty associated with each interpolation method (Table 1).

Figure 4. Merged airborne laser scanning (ALS; white) and terrestrial laser scanning (TLS; blue) point-cloud data for the Glacier Point area. ALS data resolve low-angle topography and upward-facing ledge surfaces, whereas TLS data resolve vertical cliff faces and downward-facing roof surfaces; merged ALS and TLS data therefore provide full coverage of the complex topographic surface of the cliffs in the Glacier Point area, including the 2008 rock-fall source area and adjacent cliff face. Yellow box (150 × 200 m) encloses the October 2008 rock-fall detachment area, and white dashed line marks the rock-fall detachment area. Prominent ledges dipping down and to the left are part of the predominantly east-dipping J2 joint set. Cliff height is 980 m.

RESULTS AND DISCUSSION

Volumetric Analysis

Comparison of the 2007 and 2008 TLS-based surface models (Fig. 6) reveals that the rock-fall detachment surface is 69.0 m along its longest axis (A–A′; Figs. 6B and 7A), and has a total surface area of 2750 m2. The failed rock mass, which was approximately lens-shaped, had a mean thickness of 2.1 m and a maximum thickness of 7.1 m near the upper, eastern corner (Figs. 6B and 7A); these thickness calculations are confirmed by the actual measured thicknesses of fresh boulders on the talus slope (Fig. 7B). Cross sections through the slab reveal that it was of relatively uniform thickness across most of its width, with the detachment surface remarkably parallel to the pre-failure cliff surface (Fig. 7A, Supplemental File 1, Animation 1). Kriging and TIN interpolation methods both yield rock-fall volumes of 5667 m3, although the kriging method has greater uncertainty (±36 m3 versus ±27 m3, respectively; Table 1). The IDP method yields a slightly smaller volume of 5658 ± 44 m3. Our best estimate of the cumulative rock-fall volume is 5663 ± 36 m3 (the error-weighted mean and uncertainty of the three interpolation methods). Photographs of the source area taken immediately after the 7 October rock fall suggest that ~20% of the total volume (~1133 m3) is attributable to this first event and the remainder (~4530 m3) attributable to the subsequent 8 October rock fall (Fig. 2B). Notably, observation-based estimates of the cumulative volume made immediately after the events underestimated the actual volume by roughly a factor of two, primarily because the relatively thin (≤1.0-m) overhangs at the top of the detachment area proved to be a poor indicator of the mean slab thickness (4.1 m). This highlights the difficulty of attaining accurate rock-fall volumes without quantitative topographic data.

Our results also highlight the uncertainty associated with estimating rock-fall volumes from fresh talus deposits. Talus resulting from the 7 and 8 October 2008 rock falls is spread over an area of ~118,000 m2 on the talus slope beneath Glacier Point. The largest boulder on the talus slope accurately records the original slab thickness (Fig.
7B), but represents just 2.5% of the total rock-fall volume. Furthermore, some fresh-appearing talus was actually older debris on the talus slope that was remobilized by the event. Estimating the cumulative rock-fall volume from fresh talus deposits alone would likely have also led to a substantial underestimation of the actual volume. These discrepancies illustrate the importance of repeat high-resolution topographic data for accurately determining rock-fall volumes for individual events.

Figure 5. Gigapixel photograph of Glacier Point area projected onto merged ALS and terrestrial laser scanning (TLS) data. (A) Interpolated surface model produced from merged ALS and TLS point-cloud data. (B) Gigapixel photograph projected onto the interpolated surface model. Yellow boxes (150 × 200 m) enclose the October 2008 rock-fall detachment area, and white dashed lines mark the rock-fall detachment area. Prominent ledges dipping down and to the left are part of the predominantly east-dipping J2 joint set.

1 Supplemental File 1. PDF file of thickness measurements of the slab that failed in the October 2008 rock falls. Thickness measurements were made along a series of cross sections across the failed slab, created by comparing the pre- and post-rock fall interpolated surfaces. Gray lines represent pre-rock fall cliff surface, and green lines represent post-rock fall cliff (detachment) surface. Cross section line A–A′ is shown in Fig. 6B; up is to the right. All thickness measurements are in meters. If you are viewing the PDF of this paper or reading it offline, please visit http://dx.doi.org/10.1130/GES00617.S1 or the full-text article on www.gsapubs.org to view Supplemental File 1.

Structural Analysis

High-resolution, three-dimensional imaging helps to evaluate the geologic structure that contributed to failure. The October 2008 rock falls were clear examples of granitic exfoliation along a sheeting joint (Matthes, 1930; Huber, 1987). Images of the detachment surface and adjacent cliff reveal that the dominant structural feature controlling detachment was a vertically oriented, near-planar sheeting joint, which is part of the surface-parallel J1 joint set (Wieczorek and Snyder, 1999; Wieczorek et al., 2008). Sheeting joints are common in Yosemite Valley, and they often form detachment surfaces for rock falls (Matthes, 1930; Huber, 1987; Wieczorek and Snyder, 2004). As determined by plane-fitting to the TLS data, the primary J1 joint-controlled detachment surface of the October 2008 rock falls is oriented 027°/89° (dip direction/dip angle), identical to the orientation of the cliff face prior to failure (Fig. 7A, Supplemental File 1 [see footnote 1], Animation 1). In fact, the detachment surface very closely mirrors the pre-failure cliff surface, not only in its overall orientation but also in the location and magnitude of surface convexities (Fig. 7A, Animation 1). This tends to support the suggestion that cliff surface morphology, in particular the degree of curvature, may strongly control the development of sheeting joints (Martel, 2006) and thus influence rock-fall susceptibility.

The failed slab was further bounded on its upper and lower edges by several predominantly east-dipping joints, part of a pervasive joint set, termed J2, which is prominent throughout the Glacier Point area (Wieczorek and Snyder, 1999; Wieczorek et al., 2008; Fig. 5B). The dominant J2 joint exposed directly above the detachment surface (Fig. 5B) has a dip direction/dip angle of 094°/30°. The detachment surface was further bounded on its upper western edge by a series of subvertical fractures (Fig. 2B). The lower western edge of the failed slab consisted of an overhang that resulted from earlier rock falls occurring on 14 and 25 September 2001 (Wieczorek and Snyder, 2004) (Fig. 2B).
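The plane-fitting used to obtain orientations such as 027°/89° was done in the authors' TLS software, but one standard approach can be sketched as follows: fit a plane to the cliff-face points by singular value decomposition and convert the plane normal to dip direction and dip angle, assuming an east-north-up coordinate frame. The function name and synthetic points are illustrative.

```python
import numpy as np

def dip_from_points(points):
    """Fit a plane to (N, 3) points (x = east, y = north, z = up) by SVD
    and return (dip direction, dip angle) in degrees."""
    centered = points - points.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    n = vt[-1]                    # normal = direction of least variance
    if n[2] < 0.0:                # orient the normal upward
        n = -n
    dip = np.degrees(np.arccos(np.clip(n[2], -1.0, 1.0)))
    # dip direction: azimuth of the horizontal projection of the normal
    dip_dir = np.degrees(np.arctan2(n[0], n[1])) % 360.0
    return dip_dir, dip

# Toy check: scatter points on a plane dipping 89 deg toward azimuth 027 deg,
# the orientation reported for the J1 detachment surface.
dd, dp = np.radians(27.0), np.radians(89.0)
normal = np.array([np.sin(dd) * np.sin(dp), np.cos(dd) * np.sin(dp), np.cos(dp)])
u = np.cross(normal, [0.0, 0.0, 1.0]); u /= np.linalg.norm(u)
v = np.cross(normal, u)
ab = np.random.default_rng(1).random((200, 2)) * 50.0
pts = ab[:, :1] * u + ab[:, 1:] * v
print(dip_from_points(pts))   # ≈ (27.0, 89.0)
```

On real TLS data the smallest singular value also gives the root-mean-square roughness of the joint surface about the best-fit plane, a useful by-product of the same decomposition.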
These bounding features provided structural weaknesses that likely contributed to instability. A prominent light-colored dike extending across the upper portion of the detachment surface (Fig. 8C) may have contributed to the greater thickness of the failed slab in the upper northeast portion of the detachment area (Figs. 6B and 7A).

Slope Stability Analysis

The high-resolution data provided by gigapixel photography, ALS, and TLS help to clarify the likely failure mode of the October 2008 rock falls. Typically, rock falls from steep cliffs fail by one of two modes, shear (sliding along the cliff surface) or tension (rotation away from the cliff surface, also known as toppling; Goodman, 1989). These can be analyzed using various methods (limit equilibrium, constitutive modeling, fracture mechanics, etc.). Here we explore the use of limit equilibrium, where the likelihood of one or another failure mode is determined by the relationship between driving forces (the geometry and volume of the rock mass and external forces such as cleft pressures) and resisting forces (the shear and tensile strengths of the rock material) (e.g., Norrish and Wyllie, 1996; Wyllie and Mah, 2004). Our use of the limit equilibrium method provides a first-order failure analysis of rock slab detachment, as is commonly performed in conventional soil and rock slope stability analysis. Using these methods, we compared the driving forces and moments of the failed slab, based on the measured volume and detachment surface geometry, to the resisting forces and moments of the slab (Figs. 8A and 8B). We assumed typical values for granite rock unit weight (26.5 kN/m3, Goodman, 1989) and low values for strength (friction angle = 31°, Jaeger et al., 2007; cohesion = 25,000 kPa, assumed value; overall shear strength = 13,445 kPa, West, 1995; tensile strength = 6688 kPa, West, 1995); these values were verified by comparison with site-specific values for Sierra Nevada granodiorite (Krank and Watters, 1983). Further, we assume that there is only minimal variation between strength parameters for granite and the light-colored dike that runs through the detachment surface (Fig. 8C), and thus used uniform parameters for both rock types.

Figure 6. Terrestrial laser scanning (TLS) difference map of the rock-fall detachment area. (A) Comparison of 2007 and 2008 interpolated surface models reveals volume change associated with the 7 and 8 October 2008 rock falls. Yellow box encloses the rock-fall detachment area shown in (B). Numbered white circles mark locations where volume changes were calculated in areas that did not experience rock falls (see Table 1). (B) Difference map showing slope thickness change at the detachment area (i.e., surface area and thickness of the failed slab). Gray and dark-blue colors represent areas of no volume change between surface models. Dip direction/dip angle of J1 detachment surface (027°/89°) measured by plane-fitting to the TLS data. Cross-section A–A′ is shown in Figure 7A.

TABLE 1. CALCULATED VOLUMETRIC CHANGES FOR THE ROCK-FALL DETACHMENT AND ADJACENT CLIFF AREAS USING DIFFERENT INTERPOLATED SURFACE MODELS

                     Kriging volume (m3)   TIN volume (m3)   IDP volume (m3)
Detachment area      –5667                 –5667             –5658
Area 1               –103                  –104              –103
Area 2               +37                   +67               +6
Area 3               –24                   –26               –20
Area 4               –192                  –171              –194
Area 5               +100                  +99               +92
Mean (Areas 1–5)     –36                   –27               –44

Note: TIN—triangular irregular network; IDP—inverse distance to power; areas shown in Fig. 6A.
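Whichever interpolator is used to grid the point clouds (kriging, TIN, or inverse distance to power, as compared in Table 1), the volume calculation itself reduces to differencing two gridded surface models and summing cell by cell. A minimal sketch of that differencing step, assuming regular grids; the function name and toy grid below are illustrative, not from the paper:

```python
def volume_change(z_before, z_after, cell_size):
    """Net volume change (m^3) between two gridded surface models
    sampled on the same regular grid (lists of rows of elevations, m).
    A negative result means material was lost, as in a rock fall."""
    cell_area = cell_size * cell_size
    total = 0.0
    for row_b, row_a in zip(z_before, z_after):
        for zb, za in zip(row_b, row_a):
            total += (za - zb) * cell_area  # thickness change x cell area
    return total

# Toy example: a 10 m x 10 m patch loses a uniform 4-m-thick slab
before = [[100.0] * 10 for _ in range(10)]
after = [[96.0] * 10 for _ in range(10)]
print(volume_change(before, after, cell_size=1.0))  # -400.0
```

The small spread among the three columns of Table 1 arises because each interpolator fills the gaps between laser returns differently before this differencing step is applied.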
Analysis of high-resolution imagery indicates that freshly broken surfaces were distributed over seven areas on the detachment surface, encompassing ~26% of the total surface area; the remaining areas display slight staining or weathering, suggesting earlier detachment of these areas (Fig. 8C). Based on these observations, we assume that prior to failure the majority of the slab was detached from the main cliff surface, and we calculated strength contributions only for those portions of the slab that were previously attached at these freshly broken surfaces. We analyzed shear failure based on both an inclined plane formulation dipping 89° using Mohr-Coulomb shear strength parameters, and a static vertical analysis using an overall shear strength parameter (Fig. 8A; West, 1995) to calculate the factor of safety for sliding (i.e., the ratio of shear strength to rock slab gravitational stress). We analyzed tensile failure based on a moment analysis of the slab acting at a point 2.0 m from the detachment surface and rotating about a pivot point located at the bottom of the lowest attached surface (Figs. 8B and 8C), held in place by internal tensile strength acting along the fresh areas of broken rock. We calculated the acting location of tensile forces through general mechanical analysis of a composite tensile strength centroid, resulting in an acting tensile strength vector positioned 31.1 m above the bottom of the lowest attachment surface (Fig. 8C). This provided a factor of safety for tensile failure (i.e., the ratio of tensile strength to rock slab outward rotational moment). We also considered a third potential failure mode, that of lateral shearing (i.e., tearing) along the detachment surface, but did not have sufficient information on the likely point of rotation to develop a meaningful analysis.
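The two factor-of-safety calculations described above can be sketched as follows. The strength and unit-weight values are the ones quoted in the text, and the 2.0 m and 31.1 m lever arms are those stated for the moment analysis, but the slab volume and attached area used in the example are only rough figures inferred from the ~26% attachment estimate; they are illustrative assumptions, not the authors' inputs.

```python
import math

# Strength and unit-weight values quoted in the text
UNIT_WEIGHT = 26.5                   # kN/m^3, granite (Goodman, 1989)
FRICTION_ANGLE = math.radians(31.0)  # Jaeger et al. (2007)
COHESION = 25_000.0                  # kPa (assumed value in the paper)
TENSILE_STRENGTH = 6_688.0           # kPa (West, 1995)

def fs_sliding(volume_m3, attached_area_m2, dip_deg):
    """Factor of safety against sliding on an inclined plane, with
    Mohr-Coulomb strength mobilised only over the attached area."""
    weight = UNIT_WEIGHT * volume_m3          # slab weight, kN
    dip = math.radians(dip_deg)
    driving = weight * math.sin(dip)          # downslope component, kN
    normal = weight * math.cos(dip)           # normal component, kN
    resisting = COHESION * attached_area_m2 + normal * math.tan(FRICTION_ANGLE)
    return resisting / driving

def fs_toppling(volume_m3, attached_area_m2, weight_arm_m, tensile_arm_m):
    """Factor of safety against toppling: moment resisted by tensile
    strength on the attached area vs. outward moment of the weight."""
    weight = UNIT_WEIGHT * volume_m3          # kN
    driving_moment = weight * weight_arm_m                        # kN*m
    resisting_moment = TENSILE_STRENGTH * attached_area_m2 * tensile_arm_m
    return resisting_moment / driving_moment

# Illustrative geometry (assumed): ~5660 m^3 slab, ~26% of a ~1400 m^2
# surface still attached, weight acting 2.0 m from the surface, tensile
# centroid 31.1 m above the pivot.
print(fs_sliding(5660.0, 0.26 * 1400.0, 89.0))
print(fs_toppling(5660.0, 0.26 * 1400.0, 2.0, 31.1))
```

With these assumed inputs both factors of safety come out well above 1, which mirrors the paper's point that static limit equilibrium alone cannot explain why the slab failed.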
Figure 7. Slab thickness determination. (A) Cross section along the longest axis of the rock-fall detachment surface (A–A′; see Fig. 6B), showing mean thickness of ~4 m and maximum thickness of 7.1 m. (B) 140 m3 boulder resulting from fragmentation of this thicker portion of the slab. Note person for scale. This boulder accurately records the pre-failure slab thickness, but represents just 2.5% of the total rock-fall volume, illustrating the challenge of reconstructing rock-fall volumes from fresh talus.

Animation 1. Animation .mov file (best viewed with QuickTime software) of a three-dimensional visualization of the rock-fall detachment surface and adjacent cliffs below Glacier Point, Yosemite Valley, using repeat ground-based terrestrial laser scanning (TLS) data. The first part shows a vertical transect of the TLS data for Glacier Point from the valley floor to the rock-fall source area. The second part shows a three-dimensional visualization of the failed rock mass by comparing the pre–rock-fall (blue) and post–rock-fall (yellow) interpolated surface models of the rock-fall source area. If you are viewing the PDF of this paper or reading it offline, please visit http://dx.doi.org/10.1130/GES00617.S2 or the full-text article on www.gsapubs.org to view Animation 1.

The results of these calculations imply that shear (sliding) failure was the more likely failure mode because the calculated factor of safety for shear failure is nearly five times lower than that for moment-driven (tensile rotation or toppling) failure. However, these analyses highlight the limitations of using limit equilibrium methods to back-calculate the stability of exfoliating rock slabs, in that both the shear and moment-driven analyses yielded factors of
safety that were considerably higher than those defining instability (i.e., safety factors >>1). This suggests additional driving forces acted to initiate shear failure, but the static limit equilibrium methods used in this analysis are not capable of specifically identifying these forces. Observations indicating that the detachment surface was dry at the time of failure suggest that water pressures did not act as an additional driving force; however, we cannot rule out the possibility of increased cleft pressures immediately prior to failure. Stress concentration at crack tips and resulting fracture propagation has been previously proposed as a driving force for rock failures (e.g., Martel, 2004; Ishikawa et al., 2004), including exfoliation events in Yosemite Valley (Bahat et al., 1999; Wieczorek and Snyder, 1999, 2004), and fracture propagation likely played a role in the 7 and 8 October 2008 rock falls. Fracture propagation is consistent with reports of cracking sounds preceding the 7 October 2008 rock fall by a few hours. We tentatively suggest that a change in the equilibrium of the rock slab, for example, stress redistribution on fracture surface asperities that exceeded the shear or tensile strength in the areas of attachment, likely led to fracture propagation. It is possible that the earlier rock falls in August and September of 2001, also dry failures without recognized triggering mechanisms (Wieczorek and Snyder, 2004), served to destabilize the rock slab, with fracture propagation occurring sporadically until the October 2008 failures. Shear or tensile failure of a rock slab along surface-parallel sheeting joints highlights a remaining challenge for stability assessments based on laser scanning data.
Because they form within the rock mass roughly parallel to topographic surfaces, sheeting joints are often difficult or impossible to detect even with high-resolution imaging. Laser scanning has great potential for identifying rock mass configurations susceptible to failure (e.g., Jaboyedoff et al., 2007; Lato et al., 2009; Sturznegger and Stead, 2009), but more work is needed to remotely detect and monitor surface-parallel sheeting joints, which form the detachment surfaces for many rock falls in Yosemite Valley.

CONCLUSIONS

Integrated repeat gigapixel photography and airborne and ground-based terrestrial laser scanning provide accurate and precise volume calculations and three-dimensional geologic characterization of the October 2008 rock-fall source area that could not be otherwise attained by traditional assessment methods. They also provide a means of quantifying uncertainties associated with these calculations. Our results obtained with a long-range laser scanner demonstrate that high-precision topographic data are attainable for vertical rock faces >1.2 km distant, and that rock-fall volumes can be computed at these distances with uncertainties of <1%. Merging ALS and TLS point clouds, and integrating with gigapixel photography, provides unprecedented imaging capabilities for rock-fall analysis in areas that are technically challenging or otherwise hazardous to access. Volumetric, structural, and other geologic and topographic data pertaining to rock falls are critical for deriving accurate and precise hazard assessment based on probabilistic or deterministic methods, and allow for evaluating potential failure modes. The vast improvements in quantitative analyses for tall cliffs resulting from integrated high-resolution imaging techniques should lead to a reduction of related uncertainties in rock-fall hazard assessments that rely on these analyses.
ACKNOWLEDGMENTS

We thank David Haddad and Michel Jaboyedoff for constructive manuscript reviews, and Steve Martel and Jonathan Stock for commenting on an earlier draft. We appreciate the assistance of the many volunteer photographers who helped to acquire the high-resolution panoramic images. ALS data were acquired by the National Center for Airborne Laser Mapping (NCALM) at the University of Houston and processed by NCALM at the University of California, Berkeley. NCALM is funded by the National Science Foundation. This work was supported in part by funding from the Yosemite Conservancy.

REFERENCES CITED

Abellán, A., Vilaplana, J.M., and Martinez, J., 2006, Application of a long-range terrestrial laser scanner to a detailed rockfall study at Vall de Núria (Eastern Pyrenees, Spain): Engineering Geology, v. 88, p. 136–148, doi: 10.1016/j.enggeo.2006.09.012.
Abellán, A., Jaboyedoff, M., Oppikofer, T., and Vilaplana, J.M., 2009, Detection of millimetric deformation using a terrestrial laser scanner: Experiment and application to a rockfall event: Natural Hazards and Earth System Sciences, v. 9, p. 365–372, doi: 10.5194/nhess-9-365-2009.
Abellán, A., Calvet, J., Vilaplana, J.M., and Blanchard, J., 2010, Detection and spatial prediction of rockfalls by means of terrestrial laser scanner monitoring: Geomorphology, v. 119, p. 162–171, doi: 10.1016/j.geomorph.2010.03.016.
Agliardi, F., and Crosta, G.B., 2003, High resolution three-dimensional numerical modeling of rockfalls: International Journal of Rock Mechanics and Mining, v. 40, p. 455–471, doi: 10.1016/S1365-1609(03)00021-2.
Applied Imagery, 2010, Quick Terrain Modeler: Powerful, simple, and visual LiDAR exploitation software: http://www.appliedimagery.com/qtmmain.htm.
Arnesto, J., Ordóñez, C., Alejano, L., and Arias, P., 2009, Terrestrial laser scanning used to determine the geometry of a granite boulder for stability analysis purposes: Geomorphology, v. 106, p.
271–277, doi: 10.1016/j.geomorph.2008.11.005.
Bahat, D., Grossenbacher, K., and Karasaki, K., 1999, Mechanism of exfoliation joint formation in granitic rocks, Yosemite National Park: Journal of Structural Geology, v. 21, p. 85–96, doi: 10.1016/S0191-8141(98)00069-8.
Brunetti, M.T., Guzzetti, F., and Rossi, M., 2009, Probability distributions of landslide volumes: Nonlinear Processes in Geophysics, v. 16, p. 179–188, doi: 10.5194/npg-16-179-2009.
Collins, B.D., and Sitar, N., 2008, Processes of coastal bluff erosion in weakly lithified sands, Pacifica, California, USA: Geomorphology, v. 97, p. 483–501, doi: 10.1016/j.geomorph.2007.09.004.
Derron, M.-H., Jaboyedoff, M., and Blikra, L.H., 2005, Preliminary assessment of rockslide and rockfall hazards using a DEM (Oppstadhornet, Norway): Natural Hazards and Earth System Sciences, v. 5, p. 285–292, doi: 10.5194/nhess-5-285-2005.
Dunning, S.A., Massey, C.R., and Rosser, N.J., 2009, Structural and geomorphological controls on landslides in the Bhutan Himalayas using terrestrial laser scanning: Geomorphology, v. 103, p. 17–29, doi: 10.1016/j.geomorph.2008.04.013.

Figure 8. Two-dimensional (plane strain) failure mode analysis for the October 2008 rock falls. Schematic diagram and direction of movement (blue arrows) for (A) shear sliding and (B) tensile rotation (toppling) failure modes. (C) Image of detachment surface showing extent of light-colored dike (white shaded area) and seven areas of freshly broken rock (blue shaded areas) suspected to have been attached immediately prior to failure. In (A), the shear strength, S, of the slab is overcome by the weight of the block, W. In (B), the tensile strength, T, of the slab is overcome by the rotational moment of the weight, W, acting about a point, P, at the base of the lowest attached section of the partially detached slab (C).
Dussauge, C., Grasso, J.-R., and Helmstetter, A., 2003, Statistical analysis of rockfall volume distributions: Implications for rockfall dynamics: Journal of Geophysical Research, v. 108, p. 2286, doi: 10.1029/2001JB000650.
Dussauge-Peisser, C., Helmstetter, A., Grasso, J.-R., Hantz, D., Desvarreux, P., Jeannin, M., and Giraud, A., 2002, Probabilistic approach to rock fall hazard assessment: Potential of historical data analysis: Natural Hazards and Earth System Sciences, v. 2, p. 15–26, doi: 10.5194/nhess-2-15-2002.
Frenkel, K.A., 2010, Panning for science: Science, v. 5, p. 748–749, doi: 10.1126/science.330.6005.748.
Golden Software, 2010, Surfer Version 8: Contouring, Gridding, and Surface Mapping Package for Scientists and Engineers: http://www.goldensoftware.com.
Goodman, R.E., 1989, Introduction to Rock Mechanics: New York, John Wiley and Sons, 562 p.
Guzzetti, F., Crosta, G., Detti, R., and Agliardi, F., 2002, STONE: A computer program for the three-dimensional simulation of rock-falls: Computers & Geosciences, v. 28, p. 1079–1093, doi: 10.1016/S0098-3004(02)00025-0.
Guzzetti, F., Reichenbach, P., and Wieczorek, G.F., 2003, Rockfall hazard and risk assessment in the Yosemite Valley, California, USA: Natural Hazards and Earth System Sciences, v. 3, p. 491–503, doi: 10.5194/nhess-3-491-2003.
Hantz, D., Vengeon, J.M., and Dussauge-Peisser, C., 2003, An historical, geomechanical, and probabilistic approach to rock-fall hazard assessment: Natural Hazards and Earth System Science, v. 3, p. 693–701.
Huber, N.K., 1987, The geologic story of Yosemite National Park: U.S. Geological Survey Professional Paper 1595.
InnovMetric, 2010, Polyworks: 3-D scanner and 3-D digitizer software from InnovMetric Software Inc.: http://www.innovmetric.com:80/polyworks/3D-scanners/home.aspx.
Ishikawa, M., Kurashige, Y., and Hirakawa, K., 2004, Analysis of crack movements observed in an alpine bedrock cliff: Earth Surface Processes and Landforms, v. 29, p. 883–891, doi: 10.1002/esp.1076.
Jaboyedoff, M., Metzger, R., Oppikofer, T., Couture, R., Derron, M.-H., Locat, J., and Turmel, D., 2007, New insight techniques to analyze rock-slope relief using DEM and 3D-imaging cloud point: COLTOP-3D software, in Eberhardt, E., Stead, D., and Morrison, T., eds., Rock Mechanics: Vancouver, Canada, Meeting Society's Challenges and Demands, v. 2, Taylor and Francis, p. 61–68.
Jaeger, J.C., Cook, G.W., and Zimmerman, R.W., 2007, Fundamentals of Rock Mechanics: Malden, Massachusetts, Blackwell Publishing, 475 p.
Jones, C.L., Higgins, J.D., and Andrew, R.D., 2000, Colorado Rockfall Simulation Program, Version 4.0: Denver, Colorado, Colorado Department of Transportation.
Krank, K.D., and Watters, R.J., 1983, Geotechnical properties of weathered Sierra Nevada granodiorite: Bulletin of the Association of Engineering Geologists, v. 20, p. 173–184.
Kreylos, O., Bawden, G.W., and Kellogg, L.H., 2008, Immersive visualization and analysis of LiDAR data, in Bebis, G., et al., eds., Fourth International Symposium on Visual Computing: Berlin, Springer-Verlag.
Lan, H., Martin, C.D., Zhou, C., and Lim, C.H., 2010, Rockfall hazard analysis using LiDAR and spatial modeling: Geomorphology, v. 118, p. 213–223, doi: 10.1016/j.geomorph.2010.01.002.
Lato, M., Diederichs, M.S., Hutchinson, D.J., and Harrap, R., 2009, Optimization of LiDAR scanning and processing for automated structural evaluation of discontinuities in rock masses: International Journal of Rock Mechanics and Mining, v. 46, p. 194–199, doi: 10.1016/j.ijrmms.2008.04.007.
Lim, M., Petley, D.N., Rosser, N.J., Allison, R.J., Long, A.J., and Pybus, D., 2005, Combined digital photogrammetry and time-of-flight laser scanning for monitoring cliff evolution: The Photogrammetric Record, v. 20, p.
109–129, doi: 10.1111/j.1477-9730.2005.00315.x.
Malamud, B.D., Turcotte, D.L., Guzzetti, F., and Reichenbach, P., 2004, Landslide inventories and their statistical properties: Earth Surface Processes and Landforms, v. 29, p. 687–711, doi: 10.1002/esp.1064.
Martel, S.J., 2004, Mechanics of landslide initiation as a shear fracture phenomenon: Marine Geology, v. 203, p. 319–339, doi: 10.1016/S0025-3227(03)00313-X.
Martel, S.J., 2006, Effect of topographic curvature on near-surface stresses and application to sheeting joints: Geophysical Research Letters, v. 33, L01308, doi: 10.1029/2005GL024710.
Matthes, F.E., 1930, Geologic history of the Yosemite Valley: U.S. Geological Survey Professional Paper 160.
McKean, J., and Roering, J., 2004, Objective landslide detection and surface morphology mapping using high-resolution airborne laser altimetry: Geomorphology, v. 57, p. 331–351, doi: 10.1016/S0169-555X(03)00164-8.
Norrish, N.I., and Wyllie, D.C., 1996, Rock slope stability analysis, in Turner, A.K., and Schuster, R.L., eds., Landslides: Investigation and Mitigation: Transportation Research Board Special Report 247, p. 391–425.
Okura, Y., Kitahara, H., Sammori, T., and Kawanami, A., 2000, Effects of rockfall volume on runout distance: Engineering Geology, v. 58, p. 109–124, doi: 10.1016/S0013-7952(00)00049-1.
Oppikofer, T., Jaboyedoff, M., and Keusen, H.R., 2008, Collapse at the eastern Eiger flank in the Swiss Alps: Nature Geoscience, v. 1, p. 531–535, doi: 10.1038/ngeo258.
Oppikofer, T., Jaboyedoff, M., Blikra, L., Derron, M.-H., and Metzger, R., 2009, Characterization and monitoring of the Åknes rockslide using terrestrial laser scanning: Natural Hazards and Earth System Sciences, v. 9, p. 1003–1019, doi: 10.5194/nhess-9-1003-2009.
Rabatel, A., Deline, P., Jaillet, S., and Ravanel, L., 2008, Rock falls in high-alpine rock walls quantified by terrestrial LiDAR measurements: A case study in the Mont Blanc area: Geophysical Research Letters, v.
35, L10502, doi: 10.1029/2008GL033424.
Rosser, N., Dunning, S.A., Lim, M., and Petley, D.N., 2005, Terrestrial laser scanning for quantitative rockfall hazard assessment, in Hungr, O., Fell, R., Couture, R., and Eberhardt, E., eds., Landslide Risk Management: Amsterdam, Balkema.
Stock, G.M., and Uhrhammer, R.A., 2010, Catastrophic rock avalanche 3600 years B.P. from El Capitan, Yosemite Valley, California: Earth Surface Processes and Landforms, v. 35, p. 941–951, doi: 10.1002/esp.1982.
Sturznegger, M., and Stead, D., 2009, Close-range terrestrial digital photogrammetry and terrestrial laser scanning for discontinuity characterization on rock cuts: Engineering Geology, v. 106, p. 163–182, doi: 10.1016/j.enggeo.2009.03.004.
West, T.R., 1995, Geology Applied to Engineering: Long Grove, Illinois, USA, Waveland Press, 560 p.
Wieczorek, G.F., and Jäger, S., 1996, Triggering mechanisms and depositional rates of postglacial slope movement processes in the Yosemite Valley, California: Geomorphology, v. 15, p. 17–31, doi: 10.1016/0169-555X(95)00112-I.
Wieczorek, G.F., and Snyder, J.B., 1999, Rock falls from Glacier Point above Camp Curry, Yosemite National Park, California: U.S. Geological Survey Open-File Report 99-385, http://pubs.usgs.gov/of/1999/ofr-99-0385/.
Wieczorek, G.F., and Snyder, J.B., 2004, Historical rock falls in Yosemite National Park, California: U.S. Geological Survey Open-File Report 03-491, http://pubs.usgs.gov/of/2003/of03-491.
Wieczorek, G.F., Morrissey, M.M., Iovine, G., and Godt, J., 1998, Rock-fall hazards in the Yosemite Valley: U.S. Geological Survey Open-File Report 98-467, http://pubs.usgs.gov/of/1998/ofr-98-0467/.
Wieczorek, G.F., Morrissey, M.M., Iovine, G., and Godt, J., 1999, Rock-fall Potential in the Yosemite Valley, California: U.S. Geological Survey Open-File Report 99-578, http://pubs.usgs.gov/of/1999/ofr-99-0578/.
Wieczorek, G.F., Snyder, J.B., Waitt, R.B., Morrissey, M.M., Uhrhammer, R.A., Harp, E.L., Norris, R.D., Bursik, M.I., and Finewood, L.G., 2000, The unusual air blast and dense sandy cloud triggered by the 10 July 1996 rock fall at Happy Isles, Yosemite National Park, California: Geological Society of America Bulletin, v. 112, p. 75–85, doi: 10.1130/0016-7606(2000)112<75:UJRFAH>2.0.CO;2.
Wieczorek, G.F., Stock, G.M., Reichenbach, P., Snyder, J.B., Borchers, J.W., and Godt, J.W., 2008, Investigation and hazard assessment of the 2003 and 2007 Staircase Falls rock falls, Yosemite National Park, California, USA: Natural Hazards and Earth System Sciences, v. 8, p. 421–432, doi: 10.5194/nhess-8-421-2008.
Wyllie, D.C., and Mah, C.W., 2004, Rock Slope Engineering: Civil and Mining: New York, Spon Press, 431 p.

MANUSCRIPT RECEIVED 28 MAY 2010
REVISED MANUSCRIPT RECEIVED 29 SEPTEMBER 2010
MANUSCRIPT ACCEPTED 6 OCTOBER 2010

SQU MED J, APRIL 2009, VOL. 9, ISS. 1, PP. 97-98, EPUB 16TH MAR 2009, SUBMITTED 1ST FEB 09

The Color Atlas of Family Medicine

I AGREED TO REVIEW THIS BOOK SINCE THE IDEA of an atlas in family medicine was novel to me. I was hoping to get a 'clear picture' and I was not disappointed. This is the first comprehensive atlas of family medicine. The compilation of this atlas has required a concerted effort by five editors over many years. Sixty contributors with outstanding qualifications, experience, and knowledge were engaged. The atlas presents more than 1,500 superb clinical images, with its primary purpose being to provide useful clinical information for practising physicians, medical students, interns, residents, and other health care professionals. The style and organisation of the atlas have some unique characteristics.
Readers are urged to spend a few minutes reviewing the table of contents at the beginning of the atlas and the topic index. The editors have provided a challenging brief for the 60 contributors. The atlas is divided into 18 parts that reflect the wide variety of conditions seen by the family physician. Family physicians probably see a wider variety of rashes, eye conditions, and foot disorders than any other specialty. The book focuses on medical conditions organised by anatomical and physiological system. Both adult and childhood conditions are covered. There are special sections on the essence of family medicine, physical/sexual abuse, women's health, and substance abuse. The first chapter begins with an introduction to learning objectives with images and digital photography. Then each chapter begins with a patient's story that ties the photographs to real-life stories and introduces the content of the chapter. This is followed by epidemiology, aetiology and pathology, diagnosis, differential diagnosis, and management. Additional sections on patient education and follow-up advice are provided. I am delighted that the editors included suggestions of online resources for patients and care providers, followed by the key references for further information. This format is maintained throughout. Some parts are very lengthy. Dermatology gets 467 pages of its own, excellently illustrated and very well complemented with colour photographs and tables. Clearly it is now a major specialty rather than just a subspecialty, which would have merited only two pages. This atlas has 1,095 pages and is not meant to be read from cover to cover. I therefore urge you to check the index whenever you are looking up a topic.
Approaching A4 size, hard-backed, and 4 cm thick, it is not a white-coat pocket book for even the most stalwart house officer, but it should succeed in being just what it sets out to be: a source of rapid reference, allowing students and junior doctors to get clued up on medical topics with which they were previously unfamiliar, or where their knowledge has become rusty.

BOOK REVIEW
Editors: Richard P Usatine, Mindy A Smith, EJ Mayeux Jr, Heidi Chumley, James Tysinger
Publishers: McGraw-Hill Medical, 2009. $100
ISBN: 978-0-07-147464-1
Orders: www.mcgraw-hillmedical.com

This atlas is written for family physicians, but can be invaluable to medical students, residents, and health care providers in primary care. It certainly has much to offer to internists, pediatricians, and dermatologists. It is especially interesting for anyone who loves to look at clinical photographs for learning, teaching, and practising medicine. I hope this atlas will be a welcome aid to all of us and worthy of frequent use. Despite the novelty of a family medicine atlas, this book was enjoyable to read. As each chapter is only a few pages long, it is suitable for reading when only short snippets of time are available. If you are new to the idea of a family medicine atlas, or if you would enjoy reading more on the subject from a variety of authors, this would be the book for you. Can so many subjects be covered adequately in a single book? You, the reader, must be the ultimate judge.
REVIEWER
Rahma Al Kindi, Department of Family Medicine and Public Health, College of Medicine and Health Sciences, Sultan Qaboos University, Muscat, Sultanate of Oman. Email: rkindi@squ.edu.om

Obzor Zdr N 2000; 34: 227-31

THE ROLE OF THE NURSE IN EYE DIAGNOSTICS

Marta Blažič

UDK/UDC 617.7-083

DESCRIPTORS: eye diseases; nursing diagnosis

Abstract - The article summarises the work of the nurse in eye diagnostics as a specific field within ophthalmology. The nurse has an important role in eye diagnostics, in preparing the patient for the examination as well as during and after it. The purpose and methods of imaging in eye diagnostics in diabetic retinopathy are explained, and communication is presented as an integral part of modern nursing care. Additional knowledge of computing and informatics and of digital photography is essential for the nurse to be able to use modern diagnostic and therapeutic devices. The patient data in electronic form that she provides to the physician for the examination must be up to date and accurate. The nurse is therefore required to have additional knowledge of computer use.

Introduction

On the initiative of the diabetes association of Dolenjska, Bela Krajina and part of Posavje, the LIONS Club Novo mesto made possible the purchase of a computer-controlled camera for examining the ocular fundus. A diabetic patient may have changes in almost all parts of the eye, but vision is most affected by damage to the vessels of the ocular fundus. This is so-called diabetic retinopathy, which is treated with laser photocoagulation. For laser therapy to be successful, the initial changes must be detected in time and treatment begun while good vision can still be preserved. Experience around the world has shown that with timely detection of the initial changes in the eyes and with appropriate therapy (laser photocoagulation), blindness can be prevented in 70% of cases.

The fundus camera is intended for the timely detection of disease. The camera with the IMAGEnet system makes it possible to store a large number of images, to compare images from different time periods, and to run a program for filtering out normal findings. Nurses photograph the patients' ocular fundus; the ophthalmologist reviews all the images and for each patient determines the appropriate therapy or only follow-up. With this way of working, substantially more patients can be examined in a given period of time, and changes in the retina can thus be detected early. In diabetic patients, an ophthalmologist should examine the ocular fundus at least once a year. To successfully preserve good vision in these patients, the patients themselves need to be better informed. Many patients do not know at all how diabetes can damage the ocular fundus and impair vision (Sevšek, 1996).

The examination of the patient in the eye outpatient clinic includes:
- determination of visual acuity,
- examination of the anterior segment of the eye with a slit lamp,
- examination of the ocular fundus,
- photography of the ocular fundus with the fundus camera.

For diagnosing diabetic retinopathy, the photographic method and fluorescein angiography can be used. With the photographic method, the progression of the retinopathy is documented (initial changes in the vessels and later changes in the retina).
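The screening workflow described above (nurses photograph the fundus, images are stored and compared across time periods, and a program filters out normal findings so the ophthalmologist reviews only suspect images) can be sketched as a minimal record structure. All the names and the grading categories below are illustrative assumptions; this is not the IMAGEnet interface.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class FundusImage:
    patient_id: str
    taken_on: date
    finding: str  # assumed categories: "normal" or "suspected retinopathy"

@dataclass
class ScreeningRegister:
    """Stores fundus images; supports comparison across time periods
    and filtering out normal findings, as the workflow is described."""
    images: list = field(default_factory=list)

    def add(self, image: FundusImage) -> None:
        self.images.append(image)

    def for_ophthalmologist(self) -> list:
        # Normal findings are filtered out; only suspect images are reviewed.
        return [i for i in self.images if i.finding != "normal"]

    def history(self, patient_id: str) -> list:
        # All images of one patient, oldest first, for year-on-year comparison.
        return sorted((i for i in self.images if i.patient_id == patient_id),
                      key=lambda i: i.taken_on)

reg = ScreeningRegister()
reg.add(FundusImage("p1", date(1999, 5, 1), "normal"))
reg.add(FundusImage("p1", date(2000, 5, 1), "suspected retinopathy"))
print(len(reg.for_ophthalmologist()))  # 1
```

The point of the structure is the same as in the article: the ophthalmologist's time is spent only on images that the filtering step did not classify as normal, while the full stored history remains available for comparison.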
The patient is photographed once a year, for comparison and for monitoring the state of the disease. The image obtained is in colour.

Precise information about the condition of the retina and its vessels is obtained with fluorescein angiography. The patient is injected with a contrast agent (fluorescein). In the blood, 70-80% of the fluorescein binds to blood proteins; the remainder is free fluorescein. Pathological changes in the vessels cause its pathological leakage (Gregorčič-Kožuh, Preskar, 1999).

Pathological metabolic processes cause microangiopathy in the retina. During imaging, the involvement of the arterioles, capillaries and venules is observed. Occlusions of the small vessels appear, capillary permeability changes, serous exudate accumulates in the extracellular spaces of the retina, and, because of occlusion of the vasculature, proliferation of newly formed vessels develops (neovascularisation). The junctions between the cells of the newly formed vessels are very weak, so their permeability is greatly increased. Haemorrhages of various shapes and sizes appear, which leads to partial or complete blindness (Gregorčič-Kožuh, 1999).

In eye diagnostics a health care team is at work, consisting of: a specialist ophthalmologist, a senior nurse, and a health care technician. The team leader is the specialist physician. Work in the team is the coordinated work of a group in which it is known precisely who the leader is, who is responsible for what, and what the competences of each individual are, while at the same time respect for the individual's personality is preserved. At every moment, teamwork demands good social relations, but it does not allow over-familiarity, since this can negatively affect the quality of the work; it does, however, demand warm interpersonal relations conducted at an appropriate professional level.

Marta Blažič, dipl. med. sestra (graduate nurse), General Hospital Novo mesto, Eye Department. The article summarises part of a diploma thesis entitled "The Use of the Computer in the Nurse's Work in Eye Diagnostics"; the mentor was mag. Jelena Ficzko, univ. dipl. ing., lecturer.
The nurse is aware of the ethics of her thinking and actions, which she knows how to use for the patient's benefit, and of her responsibility. In her work she follows instructions and consistently respects social, safety, organisational and professional requirements. She is aware of the degree of responsibility, and of the consequences of errors and of improper handling of the examination equipment. In an examination, nursing care comprises all the tasks needed for the patient's best possible psychophysical well-being and for his safety before, during and after the procedure. In carrying out the procedure, nurses are co-responsible members of the team; they are not anonymous, which obliges them to accept moral, professional and legal responsibility for their own field of work (Šmitek, 1998).

The role of the nurse in the examination

Nursing care in a diagnostic procedure is very specific and is also characterised by the intensity of events. The nurse must know all the types and phases of the procedure and all possible complications, and must be able to respond responsibly. In addition, she must know:
- all the (pharmacological) diagnostic agents used in the examination,
- the contrast agent applied before the examination,
- the possibility of anaphylactic shock and how to give first aid if a reaction occurs,
- physiology and pathophysiology.
(Obzor Zdr N 2000; 34)

The nurse's activity in the examination, following the process method, is as follows:
- assessing the patient's needs for nursing care,
- preparing the room and the equipment, which includes preparing the apparatus, the resuscitation trolley and the oxygen cylinder,
- preparing and administering drops to the patient as ordered by the physician,
- performing the medico-technical procedure (photography),
- assisting during the examination,
- documentation.
The examination begins with the photographic method.
The role of the nurse in the photographic method

Assessing the patient's needs for nursing care

Preparing the patient for the examination comprises all the tasks needed for the best possible and undisturbed course of the examination in all its phases. It covers the physical, psychological and social preparation of the patient. Here the cooperation of all the health professionals in the team is important, and the division of tasks and competencies (responsibilities) must be clear and understandable to everyone. Diabetes and its consequences are a serious psychophysical and social burden for the patient. Patients are usually afraid of losing their sight, so their reactions vary. The reactions depend largely on the patient's personal characteristics, such as age and maturity, education, upbringing, occupation, home environment and any previous experience; a patient's reaction is always the result of several factors. Before the examination, nursing care must be planned with set goals, must be based on a holistic and humane approach, and must also take individual physical, psychological and social components into account.

On the agreed date and time the patient visits the specialist ophthalmic outpatient clinic. Ideally, the nurse taking part in the examination would meet the patient before it begins. Unfortunately, nurses nowadays are also assigned other work, so nurse and patient often meet for the first time as the examination procedure is starting. The nurse obtains the data needed for the nursing history through an individual approach to the patient, on the basis of conversation. By analysing and assessing these needs, the nurse identifies a range of activities for which the patient can take responsibility himself. Getting acquainted with the patient contributes to good psychophysical preparation: the nurse learns about the patient's psychophysical state and establishes his needs for nursing care.
The nurse obtains the necessary information from the outpatient-clinic nurse and from conversation with the patient. (Blažič M. Vloga medicinske sestre v očesni diagnostiki) In the conversation the nurse acquaints the patient with the purpose and content of the examination, which contributes to good psychophysical preparation. The patient is taught that he should cooperate as much as possible during the examination; this reduces his fear, and he is informed and instructed about the need to look steadily into a strong light for a certain time. At the end of the conversation the patient is given the opportunity to ask about anything that interests him in connection with the examination. It is emphasised to the patient how his cooperation during the examination reduces the burden on him, speeding up the examination and making it possible to obtain the results – the images – in the shortest possible time.

Preparing the room and the equipment, including the apparatus

The examination itself is computer-guided. The nurse's task is to switch on the fundus camera and the computer. She enters all the necessary patient data into the computer and prepares the dialogue window on the screen for the imaging. Because using the correct command is a precondition for successful further work, the nurse must know the meaning of the individual commands in the command menu. The IMAGEnet program enables the storage, display and analysis of the images taken with the camera. From the medical point of view – that of the graduate nurse and the specialist physician – it matters how, and to what extent, the functions in the dialogue windows can be used. What matters is achieving the best possible diagnostics in the eye clinic and an exchange of data whose purpose is to enable group consultations and the display and retention of data over a longer period.

Preparing and administering drops to the patient as ordered by the physician

The nurse applies the drops three times at 15-minute intervals, so that the pupil dilates maximally. If dilation is poor, the nurse consults the physician so that he can prescribe different drops.
Assisting during the examination

The procedure is ordered by the physician. In carrying it out the nurse is independent and at the same time responsible for its correct execution. She positions the patient at the apparatus and fixes his head, while the healthcare technician holds the upper and lower eyelids apart so that the lid does not obscure the imaging.

Documentation

The examination is documented in the medical documentation, in the computer and in a special protocol intended expressly for this examination. At the end of the imaging the physician reviews the images. On the basis of the findings, which are exclusively the physician's domain, he decides whether to continue with a further examination – fluorescein angiography.

The role of the nurse in fluorescein angiography

The physician talks to the patient about his general state of health: whether, besides diabetes, he has any other disease, whether he takes any medication, and about possible allergic reactions (allergy to drugs, insect stings, food). If the patient reports an allergy, the physician does not opt for this examination, as the contrast agent may be contraindicated. The examination itself is intended for the early detection of disease and as an aid to therapeutic procedures; it is therefore performed only when there are no contraindications, so that the patient's life is not endangered.

Assessing the patient's needs for nursing care

The examination continues with the same patient, so the further preparation of the patient simply builds on what has already been described. Throughout, the nurse takes into account the patient's physical, psychological and social well-being and attends to the undisturbed functioning of the basic life activities. The nurse's work can be divided into several aspects:
- the work of the nurse as a collaborator of the specialist physician, working with the patient,
- the work of the nurse in preparing the material for the examination and in preparing and positioning the patient; this work the nurse performs partly herself, while part of it is performed by the healthcare technician,
- the work of the nurse as the performer of a medico-technical procedure ordered by the specialist ophthalmologist (insertion of an intravenous cannula).
With the cannula, venous access is established in the patient for the application of the contrast agent.

Preparing the room and the equipment, including the resuscitation trolley and the oxygen cylinder

The room contains a resuscitation trolley equipped with the material for inserting a cannula and with other equipment: all the medications and supplies needed for first aid in the event of shock.

Preparing the patient as ordered by the physician

On the physician's order, the nurse performs the medico-technical procedure (insertion of an intravenous cannula). The purpose of the preparation is explanation and conversation, which reduce the patient's fear and the pain of cannula insertion. The nurse is obliged to listen to the patient; if he asks questions, she tries to answer them, and if he points out that he has poor veins, she must take this into account. The nurse performing the procedure must know well the course of the venous plexus of the arm, the criteria for selecting a vein, and the venepuncture procedure itself.

Performing the medico-technical procedure (insertion of an intravenous cannula)

The nurse performing the procedure must know aseptic technique and the complications that can arise during insertion. When selecting a vein she makes sure that it is smooth, elastic, clearly visible and palpable. We never inject into inflamed, sclerosed or damaged veins. She always first tries to select a vein on the distal part of the arm; if the first venepuncture is unsuccessful, a vein on a more proximal part of the arm is sought. The choice of vein must not be random but individual: the type of contrast agent the patient will receive and the state of the patient's veins must be taken into account. Cannulae of 14–18 gauge are the most suitable.

Assisting during the medico-technical procedure

The patient's reaction to pain depends on the nature of the pain, its intensity, and the patient's cultural environment, upbringing, education and previous experience of pain.
The nurse must know how to approach the patient, listen to him, observe him and explain that the insertion of the cannula will hurt, but that she will perform the procedure so that the pain is as slight and as brief as possible. Once the cannula has been inserted, the patient is positioned at the apparatus again; the physician prepares the apparatus, while the healthcare technician fixes the patient's head and holds the upper and lower eyelids apart. On the physician's order, the nurse injects 5 ml of fluorescein over 5 seconds so that a bolus (a large amount of contrast per unit of time) travels through the bloodstream and, via the ophthalmic artery, into the eye. The time it takes the fluorescein to reach the eye and become detectable on the film is important, because a prolongation of this time indicates impaired blood flow in the vessels.

When the imaging is finished, the nurse moves the patient away from the apparatus and settles him in a room where he is under the nurse's constant supervision, because of possible reactions to the contrast agent. The cannula remains in the vein for about another 30 minutes, in case drugs need to be given preventively should an allergic or shock reaction to the contrast occur. During this time the nurse can talk with the patient while observing and assessing his general condition. When the line is no longer needed, the nurse removes it. After the patient leaves the fluorescein angiography room, the nurse checks the site where the cannula was placed to make sure the patient is not bleeding from the vein. The nurse asks the patient how he feels; when she has checked his condition and established that there are no deviations from normal, she hands the patient over to the outpatient-clinic healthcare technician, and the examination concludes with the issuing of the report.

Documentation

The whole examination procedure is documented in the medical documentation; the report itself is stored in the computer and in the special protocol intended for this examination.

First aid in the event of shock after imaging, and preventive measures

The likelihood of reactions to fluorescein (the contrast agent) is small today.
Nevertheless, side effects, although in a small percentage of cases, are possible:
- commonly, (yellow) discoloration of the skin and urine,
- allergic reactions,
- nausea,
- vomiting,
- a transient rise in body temperature,
- an asthma attack,
- life-threatening complications, in the form of anaphylactic shock, are rare (according to the latest data, 1 in 220,000 cases).
Despite the small likelihood of reactions, the intravenous line is left in place for up to 30 minutes after the examination; it is needed in the event of complications so that drugs can be administered, since in severe shock the vessels constrict and administering a drug would be impossible. The room is equipped with a resuscitation trolley containing all the supplies needed for any emergency care. The anaesthetists must also be informed about the examination; for the time until their arrival, the room holds instructions for the treatment of anaphylactic shock, which must be started immediately, since a life is at stake.

Conclusion

The work of the nurse in eye diagnostics is specific, just as ophthalmology is a specific branch of medicine. The same applies to the devices used in diagnostics: most of them are controlled by computer systems and present their output data. Besides knowledge in the field of nursing care, this demands the continuous education of nurses in computing. Computing as a discipline is developing extremely quickly, and knowledge acquired today becomes obsolete within a few years, so it is essential that nurses keep educating themselves in this field as well. In meeting the patient's needs, knowledge of modern nursing care is very important, since only in this way can we treat the patient individually and holistically. In this way the nurse can recognise the patient's needs and, with knowledge and understanding, successfully meet them to the patient's satisfaction. The nurse is an equal member of the healthcare team working in a given field.
To work successfully in her field she must know the work well, and among the team members there must be good cooperation and a team atmosphere.

Literatura
1. Gregorčič-Kožuh M, Preskar P. Razlaga na temo očesna diagnostika. Novo mesto, maj–junij 1999.
2. Kanski JJ. Clinical ophthalmology. London: Butterworth-Heinemann, 1994.
3. Kisner N, Rozman M, Klasinc M, Pernat S. Zdravstvena nega. Maribor: Založba Obzorja, 1998: 14–26.
4. Nove usmeritve v razvoju zdravstvene nege. Maribor: Kolaborativni center SZO za primarno zdravstveno nego, 1995: 14–27.
5. Peric HK. Dokumentiranje zdravstvene nege – ali je res potrebno. Obzor Zdr N 1997; 31: 115–9.
6. Proces zdravstvene nege z dokumentiranjem. Maribor: Kolaborativni center SZO za primarno zdravstveno nego, 1995: 37–44.
7. Sekavčnik T. Razvijanje standardov in kriterijev kakovosti zdravstvene nege. Ljubljana: Zbornica zdravstvene nege Slovenije, 1997.
8. Sevšek D. Zdravljenje diabetične retinopatije. Obzor Zdr N 1996; 30: 227.
9. Šmitek J. Filozofija, morala in etika v zdravstveni negi. Obzor Zdr N 1998; 32: 127–38.
10. Topcon Corp. IMAGEnet for Windows, februar 1997.

GENERAL PRINCIPLES FOR THE USE OF MEDICAL GLOVES

Gloves are used strictly for their intended purpose: only for a particular patient, only for a particular procedure, only for a limited time. When changing gloves we follow the principle that gloves are changed often enough to ensure effective protection both for ourselves and for the patient.
- Before and after using gloves, we wash and dry our hands and, if necessary, also disinfect them.
- We put gloves on immediately before the procedure, on clean, dry hands.
- We change gloves between different procedures on the same patient.
- We do not clean or disinfect gloves during work for further use, because washing and rubbing increase the permeability of gloves to microorganisms.
- Immediately after completing the procedure we remove the gloves and discard them turned inside out.
- With gloves on, we do not touch clean surfaces, door handles, the telephone or the documentation around the patient.
- If the gloves are contaminated with blood or body fluids, we change them immediately and also disinfect our hands.
- When working with infectious material, after removing the gloves we first disinfect our hands and only then wash them. Washing first would rub the microorganisms further into the skin, whereas disinfection destroys them immediately and effectively.
- If a glove tears or is mechanically damaged during a procedure, we remove it immediately, disinfect our hands and put on a new one. Data show that as many as 50–70% of surgical gloves tear during operations, and a hole the size of a pinhead is enough for 40,000 microorganisms to pass through in 20 minutes; these can infect the surgeon and, carried into the wound, can cause numerous complications and postoperative infections, and consequently prolong wound healing and treatment.
- The use of gloves is not a substitute for hand washing, nor for protection against sharp objects.
- Gloves are stored in suitable rooms, at temperatures up to 40 °C, protected from direct sunlight, strong artificial light and X-rays.

Dragica Bencik, VMS, dipl. org. dela
www.sanolabor.si

work_v3m2ypf3ajc3bmyunyp47y2iry ---- Digital Imaging in K-12 Biology

James Ekstrom, Phillips Exeter Academy
http://science.exeter.edu/jekstrom/default.html

K-12 instruction in biology has traditionally taken a very descriptive approach. This is in marked contrast to the quantitative as well as qualitative way of looking at things in physics and chemistry. This qualitative/descriptive approach even extends into the laboratory portion of the biology course. One way to introduce a more quantitative approach is in the microscopy portion of the biology curriculum. Because cellular structure is primarily a microscopic province, it makes sense to introduce students to the different microscopic tools, such as TEM and SEM as well as the light microscope, that are used to investigate cell structure.
It is easy to quantify microscopic work, and the light microscope is the principal, if sometimes only, instrument found in biology classrooms. A typical introduction to the microscope can involve a measurement of the "field of view," as well as getting used to the various controls found on the instrument. If the lowest-power student objective is 4X and the ocular 10X, this measurement can occur with a fair degree of accuracy using a 6" mass-produced plastic ruler that also has a metric edge to it. Using a higher-power objective would involve mathematically calculating what the field would be, or using an inexpensive ($15.00) micrometer. Once the student makes these calculations for 40X (4X x 10X), … [Figures 1 (top), 2 (center) and 3 (bottom): captions garbled in source; the remainder of the passage is illegible.]

DOI:10.1111/J.1095-8312.2011.01623.X Corpus ID: 2855983
Colour plasticity and background matching in a threespine stickleback species pair

@article{Clarke2011ColourPA,
  title={Colour plasticity and background matching in a threespine stickleback species pair},
  author={J. Clarke and D. Schluter},
  journal={Biological Journal of The Linnean Society},
  year={2011},
  volume={102},
  pages={902-914}
}

J. Clarke, D. Schluter. Published 2011, Biological Journal of The Linnean Society.

Abstract: Examining differences in colour plasticity between closely-related species in relation to the heterogeneity of background colours found in their respective habitats may offer important insight into how cryptic colour change evolves in natural populations.
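The field-of-view arithmetic in the microscopy passage above (total magnification = objective power × ocular power; the field shrinks in proportion to total magnification) can be sketched as follows. This is an illustrative snippet, not code from the article, and the 4.5 mm measured field is an assumed example value:

```python
def field_of_view_mm(known_fov_mm, known_total_mag, new_total_mag):
    """Field of view scales inversely with total magnification."""
    return known_fov_mm * known_total_mag / new_total_mag

# Total magnification = objective power x ocular power.
low_mag = 4 * 10    # 4X objective with a 10X ocular -> 40X
high_mag = 40 * 10  # 40X objective with a 10X ocular -> 400X

# Assume the student measured a 4.5 mm field at 40X with the plastic ruler;
# at 400X the field is ten times smaller.
print(field_of_view_mm(4.5, low_mag, high_mag))  # 0.45 (mm)
```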
In the present study, we examined whether nonbreeding dorsal body coloration has diverged between sympatric species of stickleback along with changes in habitat-specific background colours. The small, limnetic species primarily occupies the…

View via publisher: academic.oup.com. 39 citations (3 highly influential; 11 background, 6 methods and 1 results citations). Figures and tables from this paper: figure 1, table 1, figure 2, table 2, figure 3, figure 4.

Citing papers (selection):
- A colourful youth: ontogenetic colour change is habitat specific in the invasive Nile perch. Elizabeth A. Nyboer, S. Gray, L. Chapman. Hydrobiologia, 2014.
- Background matching ability and the maintenance of a colour polymorphism in the red devil cichlid. W. Sowersby, Topi K. Lehtonen, B. Wong. Journal of Evolutionary Biology, 2015.
- Predation and the relative importance of larval colour polymorphisms and colour polyphenism in a damselfly. F. Johansson, V. Nilsson-Örtman. Evolutionary Ecology, 2012.
- Functional basis of ecological divergence in sympatric stickleback. Matthew McGee, D. Schluter, P. Wainwright. BMC Evolutionary Biology, 2013.
- Differential predation alters pigmentation in threespine stickleback (Gasterosteus aculeatus). Michelle Gygax, A. K. Rentsch, S. Rudman, Diana J. Rennison. Journal of Evolutionary Biology, 2018.
- Phenotypic flexibility in background-mediated color change in sticklebacks. P. Tibblin, M. Håll, P. Svensson, J. Merilä, A. Forsman. Behavioral Ecology, 2020.
- Phenotypic plasticity in the Antarctic nototheniid fish Trematomus newnesi: a guide to the identification of typical, large mouth and intermediate morphs. E. Barrera-Oro, J. Eastman, E. Moreira. Polar Biology, 2011.
- Pigmentation plasticity enhances crypsis in larval newts: associated metabolic cost and background choice behaviour. Nuria Polo-Cavia, Ivan Gomez-Mestre. Scientific Reports, 2017.
- Evolution of dark colour in toucans (Ramphastidae): a case of molecular adaptation? J. Corso, N. Mundy, N. Fagundes, T. R. D. de Freitas. Journal of Evolutionary Biology, 2016.
- Adaptive plasticity generates microclines in threespine stickleback male nuptial color. Chad D. Brock, M. Cummings, D. Bolnick. 2017.

References (showing 1–10 of 50):
1. Character displacement of male nuptial colour in threespine sticklebacks (Gasterosteus aculeatus). A. Albert, Nathan P. Millar, D. Schluter. 2007.
2. The Role of Phenotypic Plasticity in Color Variation of Tularosa Basin Lizards. E. Rosenblum. Copeia, 2005.
3. Benthic fish exhibit more plastic crypsis than non-benthic species in a freshwater spring. S. Cox, Sondra Chandler, C. Barron, K. Work. Journal of Ethology, 2008.
4. Background Matching and Color-Change Plasticity in Colonizing Freshwater Sculpin Populations Following Rapid Deglaciation. A. Whiteley, S. Gende, A. Gharrett, D. Tallmon. Evolution, 2009.
5. Ecological Character Displacement and Speciation in Sticklebacks. D. Schluter, J. McPhail. The American Naturalist, 1992.
6. Color change and color-dependent behavior in response to predation risk in the salamander sister species Ambystoma barbouri and Ambystoma texanum. Tiffany S. Garcia, A. Sih. Oecologia, 2003.
7. Color Variation among Habitat Types in the Spiny Softshell Turtles (Trionychidae: Apalone) of Cuatrociénegas, Coahuila, Mexico. S. McGaugh. 2008.
8. Habitat-specific pigmentation in a freshwater isopod: adaptive evolution over a small spatiotemporal scale. A. Hargeby, J. Johansson, J. Ahnesjö. Evolution, 2004.
9. Habitat light, colour variation, and ultraviolet reflectance in the Grand Cayman anole, Anolis conspersus. J. Macedonia. 2001.
10. Convergent Evolution and Divergent Selection: Lizards at the White Sands Ecotone. E. Rosenblum. The American Naturalist, 2005.
work_v6fbk3dbm5cdxpvzm4zxu7g6yy ---- untitled

Prevalence of diabetic retinopathy in individuals with type 2 diabetes who had recorded diabetic retinopathy from retinal photographs in Catalonia (Spain)

Antonio Rodriguez-Poncelas,1,2 Sònia Miravet-Jiménez,3,4 Aina Casellas,5 Joan Francesc Barrot-De La Puente,2,6 Josep Franch-Nadal,4,7 Flora López-Simarro,3,4 Manel Mata-Cases,4,8 Xavier Mundet-Tudurí4,9

For numbered affiliations see end of article.

Correspondence to Professor Xavier Mundet-Tudurí, Unitat de Suport a la Recerca Barcelona Ciutat, Institut Universitari d'Investigació en Atenció Primària Jordi Gol (IDIAP Jordi Gol), Sardenya 375, Barcelona 08025, Spain; xavier.mundet@uab.cat

Received 3 February 2015. Revised 9 April 2015. Accepted 26 May 2015. Published Online First 18 June 2015.

To cite: Rodriguez-Poncelas A, Miravet-Jiménez S, Casellas A, et al. Br J Ophthalmol 2015;99:1628–1633.

ABSTRACT
Background/aims Retinal photography with a non-mydriatic camera is the method currently employed for diabetic retinopathy (DR) screening. We designed this study in order to evaluate the prevalence and severity of DR, and associated risk factors, in patients with type 2 diabetes (T2DM) screened in Catalan primary health care.
Methods Retrospective, cross-sectional, population-based study performed in Catalonia (Spain) with patients with T2DM, aged between 30 years and 90 years (on 31 December 2012), screened with retinal photography and whose DR category was recorded in their medical records.
DR was classified as: no apparent retinopathy (no DR), mild non-proliferative DR (mild NPDR), moderate NPDR, severe NPDR, proliferative DR (PDR) and diabetic macular oedema (DMO). Non-vision threatening DR (non-VTDR) included mild and moderate NPDR; VTDR included severe NPDR, PDR and DMO. Clinical data were obtained retrospectively from the SIDIAP database (System for Research and Development in Primary Care).
Results 108 723 patients with T2DM had been screened with retinal photography. The prevalence of any kind of DR was 12.3% (95% CI 12.1% to 12.5%). Non-VTDR and VTDR were present in 10.8% (mild NPDR 7.5% and moderate NPDR 3.3%) and 1.4% (severe NPDR 0.86%, PDR 0.36% and DMO 0.18%) of the study patients, respectively.
Conclusions The prevalence of any type of DR in patients with T2DM screened with retinal photography was lower when compared with earlier studies.

INTRODUCTION
Diabetic retinopathy (DR) is a major microvascular complication in diabetics. It particularly affects patients with type 2 diabetes (T2DM), whose vision may be threatened by diabetic macular oedema (DMO) and proliferative DR (PDR). In developed countries these two conditions are the principal cause of blindness in adults of working age and are responsible for a worsening in quality of life.1 The presence and severity of DR are related to cardiovascular risk factors and, consequently, a greater incidence of cardiovascular disease.2
Previous studies have shown considerable variation in DR prevalence. Factors such as population characteristics, the screening technique employed (direct ophthalmological examination or digital photography) and the type of study performed all hinder comparisons among studies.
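The severity categories and the VTDR grouping defined in the abstract (non-VTDR = mild and moderate NPDR; VTDR = severe NPDR, PDR and DMO) can be expressed as a small lookup table. This is only an illustrative sketch of the paper's classification scheme, not code used by the authors:

```python
# Grouping of DR categories as defined in the paper's abstract.
VTDR_GROUP = {
    "no DR": "no DR",
    "mild NPDR": "non-VTDR",
    "moderate NPDR": "non-VTDR",
    "severe NPDR": "VTDR",
    "PDR": "VTDR",
    "DMO": "VTDR",
}

print(VTDR_GROUP["moderate NPDR"])  # non-VTDR
print(VTDR_GROUP["PDR"])            # VTDR
```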
Depending on the country, the prevalence rate found with direct ophthalmological screening ranges from 40.3% in the study by Kempen et al,3 34.6% in the meta-analysis by Yau et al4 and 33.2% in the study by Wong et al,5 to 27.9% in the work by Ruta et al.6 DR prevalence in Spain also differs according to the authors, with results varying from 20.9% to 26.1%.7 8 In studies that employ retinal photography for DR screening, prevalence also differs: 19% in the UK9 (patients with recently diagnosed T2DM), 29% in the USA,10 34.6% in Sweden,11 and, in the study carried out by Ruta et al6 in developing and developed countries, between 10.1% and 48.1%. The variations observed among countries could be explained by the screening techniques employed (direct ophthalmological examination or digital photography) and the type of study performed. Factors such as these, plus distinct methodologies and population characteristics, all hinder comparisons among studies.
In spite of an overall increase in T2DM prevalence in Spain12 and abroad,13 a decrease in DR prevalence, particularly vision threatening DR (VTDR), has been observed.14 This reduction could be a result of increased care for patients with diabetes and an earlier detection of T2DM and DR.15 The duration of diabetes,1 poor control of glycaemia,16 blood pressure16 and dyslipidaemia,17 and higher levels of the urinary albumin to creatinine ratio (UACR)18 have been identified as risk factors for the onset and progression of DR. Hyperglycaemia plays a key role in this process,19 and strict glycaemic control is recommended, particularly during the initial phases of the disease. In addition, control of blood pressure20 and regular examinations of the ocular fundus are advised to diminish DR severity and incidence. It is important to be aware of DR prevalence as it is a reliable indicator of microvascular complications and of the impact that good control of the disease can have on health results.
However, the DR prevalence figures published in the literature do not correspond to those we have observed in primary healthcare clinical practice, an issue that can have repercussions on the correct planning and allocation of resources. This study was performed in order to observe the prevalence of DR, and its associated risk factors, in patients with T2DM screened with retinal photography, and whose DR category was recorded in their medical records, at the Catalonian primary healthcare services (Spain).

(Rodriguez-Poncelas A, et al. Br J Ophthalmol 2015;99:1628–1633. doi:10.1136/bjophthalmol-2015-306683. Clinical science.)

MATERIALS AND METHODS
Study design, settings and population
A population-based, descriptive study was performed in Catalonia (Spain) with patients with T2DM. Catalonia is a region in the north-east of Spain and has a public health system that covers 100% of the population. Most inhabitants (70%) are concentrated in urban areas. All patients aged 30–90 years with a diagnosis of T2DM (International Classification of Diseases 10 codes E11 and E14) before 31 December 2012 were included. Of a total of 3 755 038 individuals aged 30–90 years, 329 419 patients with T2DM (8.8%) were identified, and 108 723 (33%) had been screened with retinal photography with all study variables recorded in the electronic medical record between 1 January 2008 and 31 December 2012.
Information for this study came from the SIDIAP (System for Research and Development in Primary Care), an electronic database containing all the patients' medical records. The SIDIAP includes data from the primary healthcare electronic medical records system e-CAP (ECAP) on demographic information, appointment dates with doctors and nurses, clinical diagnoses, clinical variables, prescriptions written, referrals to specialists and hospitals, results from laboratory tests, and medication sold by pharmacies. The quality of SIDIAP data has been previously documented, and the database has been widely used to study the epidemiology of a number of health outcomes.21

Assessment of DR

The use of retinal photography for the detection of DR has been validated.22 Digital colour images are captured from each eye and the severity of DR is categorised according to the international clinical DR severity scales recommended by the Global Diabetic Retinopathy Project Group23 as: no apparent retinopathy (no DR), mild non-proliferative DR (mild NPDR), moderate NPDR, severe NPDR, proliferative DR (PDR) and diabetic macular oedema (DMO). All patients should have had at least one fundus photograph recorded. For patients with more than one retinal photograph between 1 January 2008 and 31 December 2012, the last one was used for analysis. Each patient was assigned a DR category according to the worse eye. We took two 45° digital colour images from each eye: one centred midway between the macula and the optic disc, and the other centred on the macula. In our study, retinal photography was performed by trained personnel using a non-mydriatic camera. Subsequently, in the primary healthcare centre, a family physician trained in reading eye fundus images registered the result in the patient's medical records.
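The grading rules described above (use the most recent photograph in the study window; assign the patient-level category from the worse eye) can be sketched as follows. This is an illustrative sketch only: the function and field names are assumptions, not the actual SIDIAP/ECAP schema, and DMO, which the authors record separately, is not modelled here.

```python
from datetime import date

# Severity ordering per the international clinical DR scale cited above;
# a higher index means a more severe grade.
SEVERITY = ["no DR", "mild NPDR", "moderate NPDR", "severe NPDR", "PDR"]

def patient_dr_category(photos):
    """photos: list of (photo_date, left_eye_grade, right_eye_grade) tuples.

    Uses the most recent photograph and grades the patient by the worse eye.
    """
    latest = max(photos, key=lambda p: p[0])        # last photo in the window
    _, left, right = latest
    return max(left, right, key=SEVERITY.index)     # worse eye wins

# Hypothetical patient with two photographs in the 2008-2012 window:
photos = [
    (date(2009, 3, 1), "no DR", "mild NPDR"),
    (date(2012, 6, 15), "mild NPDR", "moderate NPDR"),
]
print(patient_dr_category(photos))  # moderate NPDR
```

The same two rules (latest photo, worse eye) determine a single category per patient, which is what the prevalence counts in the Results are based on.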
Measures of kidney function

Serum creatinine levels and UACR were determined and the following definitions established: normoalbuminuria (UACR <30 mg/g), microalbuminuria (UACR 30-299 mg/g), and macroalbuminuria (UACR ≥300 mg/g). The estimated glomerular filtration rate (eGFR) was calculated according to the Modification of Diet in Renal Disease (MDRD) equation.

Clinical variables

The following data were obtained for each patient: age, gender, age at diagnosis of diabetes, duration of diabetes, and glycated haemoglobin level (A1C). Cardiovascular risk factors, including body mass index, blood lipids (total cholesterol, low density lipoprotein cholesterol, high density lipoprotein (HDL) cholesterol, non-HDL cholesterol), blood pressure (systolic and diastolic), pulse pressure, and smoking status according to the last value registered before 31 December 2012, were collected. Data for clinical variables were gathered from the 15 months prior to the cut-off date, with the exception of blood pressure, pulse pressure and body mass index, which were obtained from the previous 12 months. Additional data were gathered on medication.

Statistical analysis

DR prevalence was calculated assuming a binomial distribution, on which the CI was based. Patient characteristics were compared according to DR presence and severity by analysis of variance (ANOVA) for the continuous variables and Pearson's χ2 test for the categorical ones. The level of statistical significance was set at p<0.05. All calculations were performed with Stata V.13.0 (Stata Statistical Software, College Station, Texas, USA: StataCorp LP).

RESULTS

In our study, 108 723 patients with T2DM had been screened with retinal photography and had their DR category recorded in their medical records. The overall prevalence of any DR was 12.3% (CI 12.1% to 12.5%): 7.5% mild NPDR, 3.3% moderate NPDR, 0.86% severe NPDR and 0.36% PDR. DMO was present in 0.18% of the patients, alone or associated with other DR lesions.
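The renal definitions above can be made concrete in a short sketch. The text does not state which MDRD variant was used, so the four-variable MDRD study equation below (serum creatinine in mg/dL, age in years) is an assumption for illustration; the UACR cut-offs are exactly those given in the text.

```python
def mdrd_egfr(scr_mg_dl, age, female, black=False):
    """Estimated GFR (mL/min/1.73 m2), 4-variable MDRD study equation
    (assumed variant; coefficients are illustrative, not from the paper)."""
    egfr = 175.0 * scr_mg_dl ** -1.154 * age ** -0.203
    if female:
        egfr *= 0.742
    if black:
        egfr *= 1.212
    return egfr

def uacr_category(uacr_mg_g):
    """Albuminuria categories as defined in the text (UACR in mg/g)."""
    if uacr_mg_g < 30:
        return "normoalbuminuria"
    if uacr_mg_g < 300:
        return "microalbuminuria"
    return "macroalbuminuria"

# Roughly the cohort's mean values (creatinine 0.9 mg/dL, age 67):
print(round(mdrd_egfr(0.9, 67, female=False), 1))
print(uacr_category(45.4))  # microalbuminuria
```

With values near the cohort means, the sketch returns an eGFR comfortably above the 60 mL/min/1.73 m2 threshold used throughout the tables.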
The prevalence of VTDR and non-VTDR was 1.4% (CI 1.4% to 1.5%) and 10.8%, respectively (figure 1).

Table 1 shows the characteristics of the participants with and without retinal photography screening. The group without retinal photography was older (69.4 years vs 66.9 years), had a shorter T2DM progression time (7.5 years vs 7.8 years), was diagnosed with T2DM at an older age (61.9 years vs 59.2 years), had a higher percentage of eGFR <60 mL/min/1.73 m2 (22.9% vs 19.7%), and had higher levels of UACR (45.4 mg/g vs 36.1 mg/g).

Table 2 presents the clinical and metabolic characteristics of the 108 723 participants with retinal photography and a recorded DR category. Of these, 56.2% were men. The mean (SD) age was 66.9 (11.0) years, mean duration of diabetes was 7.8 (5.1) years, and mean A1C level was 7.2% (1.3). In comparison with patients without DR, those with some kind of DR were older and had a greater percentage of hypertension. They also had a longer duration of diabetes, higher A1C levels, higher systolic blood pressure, lower diastolic blood pressure and higher pulse pressure. Patients with DR used more insulin than those without. Participants with some kind of DR had a higher percentage of eGFR levels <60 mL/min/1.73 m2 and greater values of UACR than those without.

Figure 1 Prevalence of diabetic retinopathy.

Table 3 shows the risk factors associated with DR.
The prevalence of any kind of DR increased with the duration of diabetes (6.9% for <5 years vs 23.7% for >15 years), with higher A1C levels (8.4% for A1C ≤7% vs 22.9% for A1C >9%), with poorer blood pressure control (10.9% for <140/90 mm Hg vs 15.4% for ≥140/90 mm Hg), and with hypertension (9.1% without vs 13.1% with). There was a trend towards a lower prevalence of any DR in patients with non-HDL cholesterol ≥3.3 mmol/L.

Table 1 Characteristics of patients with and without digital photography (DP)

                                            Global           Without DP       With DP          p value*
N (%)                                       329 419 (100)    220 696 (67)     108 723 (33)
Sex, n (%)                                                                                     <0.001
  Male                                      180 198 (54.7)   119 087 (54.0)   61 111 (56.2)
  Female                                    149 221 (45.3)   101 609 (46.0)   47 612 (43.8)
Age, years (n=329 419)                      68.6 (11.7)      69.4 (11.9)      66.9 (11.0)      <0.001
BMI, kg/m2 (n=329 419)                      30.1 (5.1)       30.1 (5.2)       30.2 (5.0)       <0.001
Diabetes duration, years (n=329 419)        7.6 (5.6)        7.5 (5.8)        7.8 (5.1)        <0.001
Age at diabetes diagnosis (n=329 419)       61.0 (11.4)      61.9 (11.7)      59.2 (10.7)      <0.001
A1C, % (n=263 690)                          7.2 (1.3)        7.2 (1.3)        7.2 (1.3)        0.631
Hypertension, % (n=264 743)                 80.4             80.5             80.1             0.003
SBP, mm Hg (n=279 030)                      134.8 (13.2)     135.0 (13.5)     134.5 (12.5)     <0.001
DBP, mm Hg (n=279 030)                      75.2 (8.6)       75.0 (8.8)       75.6 (8.3)       <0.001
PP, mm Hg (n=279 030)                       59.6 (12.4)      60.0 (12.7)      58.9 (11.9)      <0.001
Non-HDL cholesterol, mg/dL (n=248 610)      137.0 (36.9)     137.0 (37.4)     136.9 (36.0)     0.461
Triglycerides, mg/dL (n=258 046)            154.5 (102.6)    153.5 (103.3)    156.4 (101.2)    <0.001
Creatinine, mg/dL (n=265 898)               0.9 (0.4)        0.9 (0.4)        0.9 (0.3)        <0.001
eGFR <60 mL/min/1.73 m2, n (%) (n=265 898)  57 957 (21.8)    39 431 (22.9)    18 526 (19.7)    <0.001
UACR, mg/g (n=149 526)                      41.8 (155.1)     45.4 (163.4)     36.1 (141.3)     <0.001

*p value for comparison of groups with Pearson's χ2 test for qualitative variables and ANOVA for the quantitative ones. Values are expressed as n (%) and mean (SD).
A1C, glycated haemoglobin; BMI, body mass index; DBP, diastolic blood pressure; DP, digital photography; eGFR, estimated glomerular filtration rate; PP, pulse pressure; SBP, systolic blood pressure; UACR, urinary albumin-to-creatinine ratio.

Table 2 Clinical and laboratory characteristics of patients without DR, and with NPDR, PDR and DMO

                                        Global           No DR            NPDR             PDR             DMO             p value*
N                                       108 723          95 336           12 788           400             199
Sex (male), n (%)                       61 111 (56.2)    53 642 (56.3)    7137 (55.8)      216 (54.0)      116 (58.3)      0.553
Age, years (n=108 723)                  66.9 (11.0)      66.7 (11.0)      68.5 (10.9)      71.3 (9.7)      69.8 (10.9)     <0.001
BMI, kg/m2 (n=89 610)                   30.2 (5.0)       30.2 (5.0)       30.0 (5.1)       30.1 (4.6)      29.5 (4.4)      0.002
Diabetes duration, years (n=108 723)    7.8 (5.1)        7.5 (4.9)        9.8 (5.9)        10.7 (5.8)      9.6 (5.8)       <0.001
Age at diabetes diagnosis (n=108 723)   59.2 (10.7)      59.2 (10.6)      58.6 (11.2)      60.6 (10.8)     60.2 (11.4)     <0.001
A1C, % (n=95 126)                       7.2 (1.3)        7.2 (1.3)        7.7 (1.5)        8.0 (1.6)       7.7 (1.5)       <0.001
Insulin treatment, n (%)                18 452 (17.0)    13 338 (14.0)    4819 (37.7)      218 (54.5)      77 (38.7)       <0.001
Hypertension, n (%)                     87 056 (80.1)    75 645 (79.3)    10 869 (85.0)    364 (91.0)      178 (89.4)      <0.001
SBP, mm Hg (n=97 646)                   134.5 (12.5)     134.1 (12.3)     136.9 (13.4)     137.9 (14.2)    138.9 (14.5)    <0.001
DBP, mm Hg (n=97 646)                   75.6 (8.3)       75.7 (8.3)       74.7 (8.6)       73.4 (8.7)      74.3 (8.9)      <0.001
PP, mm Hg (n=97 646)                    58.9 (11.9)      58.4 (11.7)      62.2 (12.8)      64.5 (12.8)     64.6 (13.0)     <0.001
Non-HDL cholesterol, mg/dL (n=89 090)   136.9 (36.0)     137.5 (35.8)     132.4 (36.9)     130.1 (35.3)    134.9 (37.7)    <0.001
Creatinine, mg/dL (n=93 774)            0.9 (0.3)        0.9 (0.3)        1.0 (0.4)        1.1 (0.5)       1.0 (0.9)       <0.001
eGFR, mL/min/1.73 m2, n (%)                                                                                               <0.001
  ≥60                                   75 394 (80.3)    66 988 (81.1)    8082 (74.3)      208 (63.4)      116 (69.9)
  <60                                   18 526 (19.7)    15 564 (18.9)    2792 (25.7)      120 (36.6)      50 (30.1)
UACR, mg/g, n (%)                                                                                                         <0.001
  <30                                   48 860 (83.0)    43 655 (84.5)    5029 (73.0)      118 (62.1)      58 (59.2)
  30-299                                8692 (14.8)      7081 (13.7)      1519 (22.1)      58 (30.5)       34 (34.7)
  ≥300                                  1294 (2.2)       936 (1.8)        338 (4.9)        14 (7.4)        6 (6.1)

*p value for comparison of groups with Pearson's χ2 test for qualitative variables and ANOVA for the quantitative ones. Values are expressed as n (%) and mean (SD).
A1C, glycated haemoglobin; BMI, body mass index; DBP, diastolic blood pressure; DMO, diabetic macular oedema; DR, diabetic retinopathy; eGFR, estimated glomerular filtration rate; NPDR, non-proliferative diabetic retinopathy; PDR, proliferative diabetic retinopathy; PP, pulse pressure; SBP, systolic blood pressure; UACR, urinary albumin-to-creatinine ratio.

DISCUSSION

Our study provides data on the prevalence and severity of DR in patients with T2DM screened with retinal photography and whose DR category was recorded in their medical records. A longitudinal, 10-year follow-up of this cohort is planned in order to evaluate DR incidence, changes in DR severity and related complications. It is well known that, owing to differences in baseline characteristics, the prevalence of DR is reported to be higher in clinical studies than in population-based ones. We believe the figures from the latter to be underestimated. In our study, for example, patients with glaucoma and cataracts, and those attended by an ophthalmologist because they had VTDR, did not usually undergo retinal photography screening in primary care. In addition, a reduced number of patients, who presented a greater number of complications and worse metabolic control, were attended by an endocrinologist. Moreover, in our study, differences between patients with and without retinal photography could affect the final results of DR prevalence.
In our work, 12.3% (CI 12.1% to 12.5%) of the patients screened with retinal photography had some kind of DR and 1.4% suffered from VTDR. Our DR prevalence rate did not concur with other studies, in which prevalence varied between 6% and 27.9%,5 24 and between 20.9% and 26.1% in Spain.7 8 A 2007 population-based study carried out in primary care, in which screening was performed with a direct ophthalmological examination by a specialist, reported a DR prevalence of 5.8%.21 In contrast, studies in which patients with diabetes were screened with retinal photography observed a DR prevalence ranging from 10.1% to 48.1%.6 The heterogeneity of the studies performed makes comparisons difficult and disguises the real prevalence of DR in the T2DM population attended in primary care. The diabetic population in Catalonia is well controlled from a metabolic point of view, and our population is not a selected, well controlled group: in another study by our group on the metabolic control of glycaemia and cardiovascular risk factors in patients with T2DM in primary care in Catalonia (Spain), the mean (SD) A1C value was 7.15% (1.5).21 Better control of T2DM could be the reason for the overall decrease in the incidence and prevalence of DR.15 In agreement with prior studies, we observed three major DR risk factors: diabetes progression time, A1C levels and blood pressure control.10 24 It is possible that the total exposure to the glycaemic load and cardiovascular risk factors over the years is reflected in the time of diabetes progression. As in other work,25 we noted that higher A1C values were related to a greater prevalence of DR, while higher levels of non-HDL cholesterol were related to a lesser one. Patients with DR had higher levels of UACR and lower levels of eGFR than patients without DR.
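The headline prevalence can be checked against the normal approximation to the binomial CI described in the statistical analysis section. The case count (13 387 with any DR out of 108 723 screened) is taken from Table 3; the exact Stata commands used by the authors are not reproduced here, so this is only a sanity check.

```python
import math

cases, n = 13_387, 108_723         # any DR among screened patients (Table 3)
p = cases / n
se = math.sqrt(p * (1 - p) / n)    # binomial standard error
lo, hi = p - 1.96 * se, p + 1.96 * se
print(f"{100 * p:.1f}% (95% CI {100 * lo:.1f}% to {100 * hi:.1f}%)")
# 12.3% (95% CI 12.1% to 12.5%), matching the reported prevalence
```

The width of the interval (about 0.4 percentage points) reflects the very large denominator, which is why the reported CI is so narrow.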
Diabetic kidney disease and DR are linked to endothelial dysfunction; it is possible that microvascular lesions progress in a parallel manner in the kidney and the eye. Patients with diabetic kidney disease should, therefore, be considered at high risk in DR screening.

In each primary healthcare centre of Catalonia, a general practitioner (GP) trained in reading eye fundus images read the retinal photographs. There is no strict quality control of our screening process; nevertheless, the professionals who perform and read the retinal photography have all previously received the same regulated and accredited training. Moreover, the professionals who read the fundus images may consult an ophthalmologist if they have any doubts. Although studies that have evaluated the agreement between GPs and ophthalmologists in evaluating retinal images in Spain showed good concordance, we have to accept that the screening of DR performed by several professionals might have influenced the final results.

Table 3 Patients with any kind of diabetic retinopathy and associated risk factors

                                Any DR           Mild NPDR       Moderate NPDR   Severe NPDR    PDR            DMO            p value*
Sex                                                                                                                          0.001
  Male                          7469 (12.2)      4464 (7.29)     2092 (3.42)     581 (0.95)     216 (0.35)     116 (0.19)
  Female                        5918 (12.4)      3680 (7.71)     1608 (3.36)     363 (0.76)     184 (0.38)     83 (0.17)
  Total                         13 387 (12.3)    8144 (7.48)     3700 (3.39)     944 (0.86)     400 (0.36)     199 (0.18)
Diabetes duration (years)                                                                                                    <0.001
  <5                            2420 (6.9)       1580 (4.50)     607 (1.73)      146 (0.42)     47 (0.14)      40 (0.11)
  5-9                           5907 (12.5)      3633 (7.68)     1623 (3.44)     396 (0.84)     170 (0.36)     85 (0.18)
  10-15                         3083 (17.2)      1847 (10.31)    851 (4.74)      221 (1.23)     121 (0.68)     43 (0.24)
  >15                           1977 (23.7)      1084 (12.99)    619 (7.43)      181 (2.17)     62 (0.74)      31 (0.37)
A1C (%)                                                                                                                      <0.001
  ≤7.0                          4360 (8.4)       2912 (5.61)     1042 (2.01)     236 (0.45)     99 (0.19)      71 (0.14)
  >7.0 to <7.9                  3193 (13.3)      1933 (8.05)     918 (3.82)      215 (0.90)     92 (0.38)      35 (0.15)
  8 to 9                        1868 (18.4)      1064 (10.47)    560 (5.52)      150 (1.48)     67 (0.66)      27 (0.27)
  >9                            2026 (22.9)      1084 (12.25)    650 (7.35)      191 (2.16)     71 (0.80)      30 (0.34)
Insulin treatment                                                                                                            <0.001
  No                            8273 (9.2)       5370 (5.97)     2146 (2.39)     453 (0.50)     182 (0.20)     122 (0.14)
  Yes                           5114 (27.7)      2774 (15.03)    1554 (8.42)     491 (2.65)     218 (1.18)     77 (0.42)
Smoking status                                                                                                               0.182
  Smoker                        1573 (10.7)      968 (6.58)      441 (3.00)      104 (0.71)     45 (0.31)      15 (0.10)
  Past smoker                   3486 (11.9)      2085 (7.13)     991 (3.38)      264 (0.90)     89 (0.30)      54 (0.18)
  Never smoker                  8274 (12.9)      5060 (7.89)     2252 (3.51)     568 (0.89)     264 (0.41)     130 (0.20)
Hypertension                                                                                                                 <0.001
  No                            1976 (9.1)       1275 (5.87)     516 (2.38)      128 (0.58)     36 (0.17)      21 (0.10)
  Yes                           11 411 (13.1)    6869 (7.89)     3184 (3.66)     816 (0.93)     364 (0.42)     178 (0.20)
Blood pressure (mm Hg)                                                                                                       <0.001
  ≥140/90                       4487 (15.4)      2620 (8.99)     1316 (4.52)     338 (1.16)     143 (0.49)     70 (0.24)
  <140/90                       7490 (10.9)      4664 (6.79)     2007 (2.92)     484 (0.70)     224 (0.33)     111 (0.16)
Non-HDL cholesterol (mmol/L)                                                                                                 0.272
  ≥3.3                          2353 (10.5)      1437 (6.41)     632 (2.82)      183 (0.82)     65 (0.29)      36 (0.16)
  <3.3                          8390 (12.6)      5105 (7.67)     2366 (3.55)     559 (0.84)     247 (0.37)     113 (0.17)

*p value for group comparison with Pearson's χ2 test for qualitative variables and ANOVA for the quantitative ones. Values are expressed as n (%).
A1C, glycated haemoglobin; DMO, diabetic macular oedema; DR, diabetic retinopathy; NPDR, non-proliferative diabetic retinopathy; PDR, proliferative diabetic retinopathy.

Strengths and limitations of the study

Our study has some strengths and limitations. The main strengths are access to retinal photography screening data, information about DR category in the patients' clinical records, use of a population-based database, and the sample size.
It is noteworthy that we were dealing with patients with T2DM attended in primary care, most of whom underwent the corresponding controls. As a consequence, we believe our results to be highly relevant for clinical practice. Among our limitations, as this is a database study, not all patients with T2DM underwent retinal photography and had the corresponding DR category registered in their clinical records. In addition, patients with VTDR, who were attended by an ophthalmologist or endocrinologist, could be under-represented. Another limitation is that the true duration of diabetes in this population is probably longer than that observed in our results. The source of information used in this study is the SIDIAP, a computerised database containing anonymised patient records for the 5.8 million people registered with a GP in the Catalan Health Institute. The SIDIAP contains all data entered into the ECAP database since it was first introduced in some practices in 1998, but until 2005 the system was not generalised and used systematically in every Catalan Health Institute practice. An unknown number of GPs recorded the first surgery visit in the system as the diabetes diagnosis date rather than the true date of diagnosis. For this reason, we think that the true duration of diabetes is longer, but we do not believe this has influenced the lower prevalence of DR observed.

CONCLUSIONS

In summary, our study provides data on the prevalence of DR and VTDR in a sample of 108 723 participants with T2DM who were screened with retinal photography and had their DR category registered in their medical records. Our findings indicate that the real prevalence of DR and VTDR in individuals with T2DM who are screened with retinal photography is lower than that published to date in the literature. Nevertheless, it is necessary to continue with screening and good control of risk factors in order to decrease the prevalence of DR.
Author affiliations
1 Primary Health Care Center Anglès, Gerència Territorial Girona, Institut Català de la Salut, Girona, Spain
2 Unitat de Suport a la Recerca Girona, Institut Universitari d'Investigació en Atenció Primària Jordi Gol (IDIAP Jordi Gol), Girona, Spain
3 Primary Health Care Center Martorell, Gerència Territorial Metropolitana Sud, Institut Català de la Salut, Barcelona, Spain
4 Unitat de Suport a la Recerca Barcelona Ciutat, Institut Universitari d'Investigació en Atenció Primària Jordi Gol (IDIAP Jordi Gol), Barcelona, Spain
5 Institut Universitari d'Investigació en Atenció Primària Jordi Gol (IDIAP Jordi Gol), Barcelona, Spain
6 Primary Health Care Center Jordi Nadal (Salt), Gerència Territorial Girona, Institut Català de la Salut, Girona, Spain
7 Primary Health Care Center Raval Sud, Gerència d'Atenció Primària Barcelona Ciutat, Institut Català de la Salut, Barcelona, Spain
8 Primary Health Care Center La Mina, Gerència d'Atenció Primària Barcelona Ciutat, Institut Català de la Salut, Barcelona, Spain
9 Universitat Autònoma de Barcelona, Bellaterra (Barcelona), Spain

Contributors AR-P and XM-T designed the study. AR-P, SM-J and XM-T researched the data and wrote and edited the manuscript. AC analysed the data and reviewed the manuscript. JFB-DLP and JF-N contributed to discussion and reviewed and edited the manuscript.

Funding This study was supported by a grant from the IDIAP.

Competing interests None declared.

Ethics approval This study was approved by the Ethics Committee of the IDIAP Jordi Gol (protocol number P13/028) and was carried out in accordance with the principles of the 1996 Helsinki declaration.

Provenance and peer review Not commissioned; externally peer reviewed.
Open Access This is an Open Access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/

REFERENCES
1 Fong DS, Aiello L, Gardner TW, et al; American Diabetes Association. Retinopathy in diabetes. Diabetes Care 2004;27(Suppl 1):S84-7.
2 ACCORD Study Group. Diabetic retinopathy, its progression, and incident cardiovascular events in the ACCORD trial. Diabetes Care 2013;36:1266-71.
3 Kempen JH, O'Colmain BJ, Leske MC, et al; Eye Diseases Prevalence Research Group. The prevalence of diabetic retinopathy among adults in the United States. Arch Ophthalmol 2004;122:552-63.
4 Yau JW, Rogers SL, Kawasaki R, et al; Meta-Analysis for Eye Disease (META-EYE) Study Group. Global prevalence and major risk factors of diabetic retinopathy. Diabetes Care 2012;35:556-64.
5 Wong TY, Klein R, Islam FM, et al. Diabetic retinopathy in a multi-ethnic cohort in the United States. Am J Ophthalmol 2006;141:446-55.
6 Ruta LM, Magliano DJ, Lemesurier R, et al. Prevalence of diabetic retinopathy in Type 2 diabetes in developing and developed countries. Diabet Med 2013;30:387-98.
7 López IM, Diez A, Velilla S, et al. Prevalence of diabetic retinopathy and eye care in a rural area of Spain. Ophthalmic Epidemiol 2002;9:205-14.
8 Pedro RA, Ramon SA, Marc BB, et al. Prevalence and relationship between diabetic retinopathy and nephropathy, and its risk factors in the North-East of Spain, a population-based study. Ophthalmic Epidemiol 2010;17:251-65.
9 Kostev K, Rathmann W. Diabetic retinopathy at diagnosis of type 2 diabetes in the UK: a database analysis. Diabetologia 2013;56:109-11.
10 Zhang X, Saaddine JB, Chou CF, et al.
Prevalence of diabetic retinopathy in the United States, 2005-2008. JAMA 2010;304:649-56.
11 Olafsdottir E, Andersson DK, Dedorsson I, et al. The prevalence of retinopathy in subjects with and without type 2 diabetes mellitus. Acta Ophthalmol 2014;92:133-7.
12 Soriguer F, Goday A, Bosch-Comas A, et al. Prevalence of diabetes mellitus and impaired glucose regulation in Spain: the Di@bet.es Study. Diabetologia 2012;55:88-93.
13 Shaw JE, Sicree RA, Zimmet PZ. Global estimates of the prevalence of diabetes for 2010 and 2030. Diabetes Res Clin Pract 2010;87:4-14.
14 Romero-Aroca P, Fernández-Balart J, Baget-Bernaldiz M, et al. Changes in the diabetic retinopathy epidemiology after 14 years in a population of Type 1 and 2 diabetic patients after the new diabetes mellitus diagnosis criteria and a more strict control of the patients. J Diabetes Complications 2009;23:229-38.
15 Wong TY, Mwamburi M, Klein R, et al. Rates of progression in diabetic retinopathy during different time periods: a systematic review and meta-analysis. Diabetes Care 2009;32:2307-13.
16 Stratton IM, Kohner EM, Aldington SJ, et al. UKPDS 50: risk factors for incidence and progression of retinopathy in Type II diabetes over 6 years from diagnosis. Diabetologia 2001;44:156-63.
17 Chew EY, Ambrosius WT, Davis MD, et al. Effects of medical therapies on retinopathy progression in type 2 diabetes. N Engl J Med 2010;363:233-44.
18 Lunetta M, Infantone L, Calogero AE, et al. Increased urinary albumin excretion is a marker of risk for retinopathy and coronary heart disease in patients with type 2 diabetes mellitus. Diabetes Res Clin Pract 1998;40:45-51.
19 Hemmingsen B, Lund SS, Gluud C, et al. Targeting intensive glycaemic control versus targeting conventional glycaemic control for type 2 diabetes mellitus. Cochrane Database Syst Rev 2013;11:CD008143.
20 UK Prospective Diabetes Study Group. Tight blood pressure control and risk of macrovascular and microvascular complications in type 2 diabetes: UKPDS 38. BMJ 1998;317:703-13.
21 Vinagre I, Mata-Cases M, Hermosilla E, et al. Control of glycemia and cardiovascular risk factors in patients with type 2 diabetes in primary care in Catalonia (Spain). Diabetes Care 2012;35:774-9.
22 Olson JA, Strachan FM, Hipwell JH, et al. A comparative evaluation of digital imaging, retinal photography and optometrist examination in screening for diabetic retinopathy. Diabet Med 2003;20:528-34.
23 Wilkinson CP, Ferris FL III, Klein RE, et al; Global Diabetic Retinopathy Project Group.
Proposed international clinical diabetic retinopathy and diabetic macular edema disease severity scales. Ophthalmology 2003;110:1677-82.
24 Tapp RJ, Shaw JE, Harper CA, et al. The prevalence of and factors associated with diabetic retinopathy in the Australian population. Diabetes Care 2003;26:1731-7.
25 Molyneaux LM, Constantino MI, McGill M, et al. Better glycaemic control and risk reduction of diabetic complications in Type 2 diabetes: comparison with the DCCT. Diabetes Res Clin Pract 1998;42:77-83.

[Article title: Prevalence of diabetic retinopathy in individuals with type 2 diabetes who had recorded diabetic retinopathy from retinal photographs in Catalonia (Spain)]
work_v7z22hkegffatfgr76x75p3tp4 ---- The Framingham School Nevus Study: A Pilot Study

The Framingham School Nevus Study: A Pilot Study

Susan A. Oliveria, ScD, MPH; Alan C. Geller, MPH; Stephen W. Dusza, MPH; Ashfaq A. Marghoob, MD; Dana Sachs, MD; Martin A. Weinstock, MD; Marcia Buckminster, RN; Allan C. Halpern, MD

Objectives: To (1) describe nevus patterns using digital photography and dermoscopy; (2) evaluate the relationship between host and environmental factors and the prevalence of nevi in schoolchildren; and (3) demonstrate the feasibility of conducting a longitudinal study.

Design: Cross-sectional survey and 1-year prospective follow-up study.

Participants: Students from 2 classrooms, grades 6 and 7, in the Framingham, Mass, school system (N = 52).

Main Outcome Measures: A survey was completed by students and 1 of their parents that included questions on demographic and phenotypic characteristics, family history of skin cancer, and sun exposure and protection practices. An examination of nevi on the back was performed that included digital photography and digital dermoscopy.
Follow-up child and parent surveys and examinations were conducted at the 1-year follow-up.

Results: At baseline, the median number of back nevi was 15 (mean [SD], 21.9 [15.3]). Older age, male sex, fair skin, belief that a tan is healthier, tendency to burn, and sporadic use of sunscreen were positively associated with mole count, although age was the only statistically significant factor. Predominant dermoscopic patterns for the index nevus were as follows: 38% globular, 14% reticulated, 38% structureless, and 10% combinations of the above patterns with no predominant characteristic. The overall participation rate from baseline to follow-up was 81% (42/52) for the skin examination process. At the 1-year follow-up examination, new nevi were identified in 36% of students (n = 15), while 9.6% of baseline index nevi had changes in the dermoscopic pattern. Dominant dermoscopic pattern was related to nevus size: smaller nevi tended to be structureless, while larger nevi were of mixed pattern.

Conclusion: This study supports the feasibility and utility of digital photography and dermoscopy for the longitudinal study of nevus evolution in early adolescence.

Arch Dermatol. 2004;140:545-551

The incidence and mortality rates of melanoma continue to rise at exceptionally high rates.1 Melanoma is more common in people with many moles (nevi) and/or atypical (dysplastic) nevi.2-6 Current knowledge of the evolution of nevi has been largely derived from cross-sectional studies, which suggest that adolescence is an important time of life for the formation and evolution of nevi.7-10 However, there has been very limited research on the formation and evolution of individual common nevi during adolescence.

Knowledge of the evolution of nevi in children derives primarily from cross-sectional studies using visual examinations to identify and document nevi.7,8,11-20 Many of these studies distinguish between common acquired and atypical (dysplastic) nevi.
From the Dermatology Service, Memorial Sloan-Kettering Cancer Center, New York, NY (Drs Oliveria, Marghoob, Sachs, and Halpern and Mr Dusza); Department of Dermatology, Boston University, Boston, Mass (Mr Geller); Dermatoepidemiology Unit, Veterans Affairs Medical Center and Department of Dermatology, Rhode Island Hospital and Brown University, Providence (Dr Weinstock); and School Health Services, Framingham Public Schools, Framingham, Mass (Ms Buckminster). The authors have no relevant financial interest in this article.

(REPRINTED) ARCH DERMATOL / VOL 140, MAY 2004 WWW.ARCHDERMATOL.COM ©2004 American Medical Association. All rights reserved.

Common acquired nevi by definition are absent at birth, often present in the early years of childhood, present in greater numbers in early to middle life, and do not multiply thereafter.21 They predominate on sun-exposed skin above the waist. It is generally believed that the common acquired nevus undergoes a predictable evolution over a period of years or decades. Initially, it appears as a tiny pinpoint macule (1 to 2 mm in diameter, uniformly tan or brown but occasionally black) and gradually enlarges to a maximum size of 4 to 6 mm.21,22 Cross-sectional studies consistently demonstrate that nevi increase in number with age during childhood and adolescence and that sun exposure is an important correlate of the development of nevi. Older children (ages 13-14 years) have 30% to 50% more nevi than younger children (ages 9-10 years).12 Several studies have shown that constitutional factors such as hair, eye,
and skin color as well as environmental factors are associated with the number of nevi.7,8,12-20,23,24 Common acquired nevi appear to be at least 3-fold less common in blacks and Asians than in whites.14,20,25,26

Like common acquired nevi, atypical (dysplastic) nevi are by definition absent at birth. An early predictive factor is an increased number of morphologically normal nevi, first noted around age 5 to 8 years, and the development of nevi in the scalp area.21 The characteristic clinical features of dysplastic nevi generally do not appear until ages 10 to 15 years, at which time the number and appearance of an affected patient's nevi may change dramatically: 21% of Australian children aged 15 years have at least 1 dysplastic nevus compared with 11% of 9-year-olds.23 Cross-sectional observations have also led to the recognition in some children and adolescents of large darkly pigmented nevi with features similar to small congenital nevi.27

Nevi appear to share a common causal pathway with melanoma that involves interplay between constitutional factors and sun exposure.12 It has been proposed that, akin to the model of tumor progression in colonic neoplasia described by Vogelstein et al,28 nevi represent intermediate steps in the evolution of a significant subset of superficial spreading melanomas, the most common histogenic subtype of the disease.22,29 Recent studies distinguish between common nevi and atypical (dysplastic) nevi as important risk markers and lesional steps in melanocytic tumor progression.2,3,30 A limitation of these studies is the lack of a consistent definition of nevi, which is related to the existing controversy surrounding the clinical and histologic definitions of common and atypical nevi.31 An improved understanding of the natural history of nevi at the subsurface (dermoscopic) level may generate new biologic hypotheses
regarding the evolution of nevi.

Knowledge of the causes and evolution of nevi has significant public health importance for the primary and secondary prevention of melanoma. Understanding the evolution of nevi during adolescence is important for improving the early detection of relatively rare melanomas that occur in this age group and perhaps even more important for avoiding unnecessary excisions of large numbers of changing benign nevi. We report the results of a pilot study in schoolchildren, grades 6 and 7, in Framingham, Mass. This study provides preliminary data for the first population-based longitudinal study of nevi in US children. The overall objectives of the study were to describe nevi patterns using digital photography and dermoscopy, evaluate the roles of host and environmental factors and prevalence of nevi in schoolchildren, and demonstrate the feasibility of conducting this type of research in a cohort of US schoolchildren.

METHODS

STUDY SITE AND STUDY OVERVIEW

The present study was conducted with children and their parents from the Framingham, Mass, school system. We identified 2 classrooms from grades 6 and 7 and obtained informed consent from 100% of the children and their parents (N = 52). Children were asked to complete a self-administered survey, and a survey was also provided to 1 parent of each child for completion. During December 2001, skin examination and photography of back nevi were performed in concert with mandatory screening examinations for scoliosis (curvature of the spine) that are conducted annually in Massachusetts for grades 5 through 9. Follow-up child and parent surveys and skin examinations were conducted 1 year later.

SKIN EXAMINATION AND PHOTOGRAPHY PROCESS

The examination was limited to the back and included digital photography and digital dermoscopy. The study examination took an average of 2.5 minutes for boys and 3 minutes for girls.
All children completed an informal exit interview to elicit feedback about the study process. The only significant complaint reported by the children was that the room was too cold.

The scoliosis examination was performed by the school nurse behind a privacy curtain. Boys were asked to remove their shirts, and girls were asked to wear a loose fitting shirt (with a bathing suit underneath) that could be readily lifted off and draped in front of them.

At baseline, immediately following the scoliosis examination, an overview digital photograph of each student's back and a close-up digital photograph of the largest nevus on the back were taken. Digital dermoscopy was also performed on the largest nevus. Appropriate clothing and draping techniques were used in consideration of the student's comfort and modesty. The school nurse instructed the student to hold his or her hair off the shoulders and neck, using an elastic band or hair clip as necessary. For girls, the nurse lifted up the straps of the swimsuit top to ensure that no nevi were hidden and so that all nevi could be assessed. Also, if the tops of pants covered the iliac crest, the nurse instructed the student to roll the top of the pants slightly so that the back area near the iliac crest could be photographed. The area photographed was defined from the nape of the neck to the posterior iliac crests. The field of view of the photographs was 51 cm (20 in) wide and 76 cm (30 in) high. The largest pigmented nevus on the back was chosen for close-up photography and dermoscopy by 2 examiners, a nondermatologist and a dermatologist. Agreement was reached in 49 of 50 instances. At the 1-year follow-up examination, the number of nevi undergoing close-up and digital photography was increased to 4 per student.

Close-up photographic and dermoscopic images were obtained using a digital camera system with and without an epiluminescence microscopy attachment.
The overview and close-up pictures were automatically stored directly on a laptop computer in a proprietary database archived by study number, nevus number, image type, and date. The pictures were encrypted on entry into the database and viewable only through the secure software. Security level access to the software is available only to key study personnel.

CHILD AND PARENT SURVEYS

The child and parent surveys were self-administered at baseline and at 1-year follow-up. They included questions on demographics, skin type, family history of skin cancer (parents only), eye and hair color, and sun protection practices, including the use of hats and sunscreen, limiting time in the sun, seeking shade, and frequency of sunburns. Parents were also asked questions regarding their child's sun protection practices and exposure.

IMAGE ASSESSMENT

The digital images were reviewed for clarity by 2 dermatologists (A.A.M. and A.C.H.). All overview images were of excellent quality and consistent resolution, readily permitting the recognition of nevi 2 mm or more in diameter and the distinction of nevi from inflammatory lesions. The inclusion of fiducial size markers, as a fixed basis for comparison, demonstrated consistency of magnification in the images as viewed on the monitors. All close-up photographic and dermoscopic images were sharp and readily evaluated for the presence of clinical and dermoscopic features. Inclusion of fiducial markers in the images demonstrated excellent consistency of color rendition on calibrated monitors. Dermatologists (A.A.M. and A.C.H.) counted back nevi independently. The interobserver nevus counts were highly correlated, with a correlation coefficient r² = 0.97.
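The interobserver comparison above is a straightforward correlation of paired per-student counts. A minimal sketch of that computation, using hypothetical rater counts (the study's raw per-student counts are not published here):

```python
# Interobserver agreement on back-nevus counts (the text reports r^2 = 0.97
# between the two dermatologists). The paired counts below are HYPOTHETICAL
# illustration data, not the study's actual counts.
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two paired samples."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical counts by rater A and rater B for 10 students
rater_a = [15, 22, 8, 40, 12, 55, 19, 30, 5, 25]
rater_b = [14, 23, 8, 42, 11, 53, 20, 31, 6, 24]

r = pearson_r(rater_a, rater_b)
print(f"r = {r:.3f}, r^2 = {r * r:.3f}")
```

Squaring r gives the r² value the article quotes; with closely matching counts like these, r² falls near the reported 0.97.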
The close-up photographic and dermoscopic images were assessed for individual clinical attributes, including nevus asymmetry, color, border, dermoscopic pattern (eg, globular, reticular, structureless, and complex/mixed), and the presence of dermoscopic structures (eg, globules). Following the first-year pilot examinations, all images were independently evaluated by the dermatologists. We conducted a small reproducibility study to determine whether a physician could be trained to correctly identify and categorize dermoscopic features of nevi according to the study protocol. A physician (A.A.M.) who participated in the pilot study and classified the dermoscopic features of the images was responsible for training a dermatologist (D.S.) who had not participated in the pilot study. Once the training session was completed, the 2 physicians independently reviewed the same 20 images. Nevi were evaluated for the presence or absence of each of 6 dermoscopic features: reticular pigment network, globules, structureless areas, blotches, dots, and streaks.

The results showed that there was consensus between the physicians regarding the presence or absence of dermoscopic patterns. The physicians achieved 100% agreement on reticular pigment network, 70% agreement on globules, 80% agreement on structureless areas, and 100% agreement on blotches. The κ values for interrater reliability for these patterns ranged from 0.35 to 1.0. Ninety percent agreement was reached for prominence of the reticulation (κ = 0.73). None of the nevi in the reproducibility study appeared to have dots, streaks, blotches, or peripheral globules, and thus the interobserver agreement for these characteristics could not be evaluated. Based on these results, a further simplified schema was applied to the 1-year follow-up images.
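The κ statistic used for interrater reliability corrects raw percent agreement for the agreement expected by chance. A minimal sketch with hypothetical 2×2 counts (the actual per-feature agreement tables are not reported in the article):

```python
# Cohen's kappa for two raters' presence/absence calls on one dermoscopic
# feature, as in the reproducibility substudy. Cell counts are HYPOTHETICAL.
def cohens_kappa(both_yes, both_no, only_a, only_b):
    n = both_yes + both_no + only_a + only_b
    p_observed = (both_yes + both_no) / n
    # Chance agreement computed from each rater's marginal frequencies
    a_yes = (both_yes + only_a) / n
    b_yes = (both_yes + only_b) / n
    p_chance = a_yes * b_yes + (1 - a_yes) * (1 - b_yes)
    return (p_observed - p_chance) / (1 - p_chance)

# e.g. 20 images: both say "present" on 8, both "absent" on 6, disagree on 6,
# giving 70% raw agreement (the level reported for globules)
kappa = cohens_kappa(8, 6, 3, 3)
print(f"kappa = {kappa:.2f}")
```

Note how 70% raw agreement can yield a modest κ (here about 0.39) when the feature's prevalence makes chance agreement high, which is why the article reports κ alongside percent agreement.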
In the final schema, nevi were classified on dermoscopy into 4 groups (reticular, globular, structureless, and complex [mixed]) based on the overall global pattern. Dermoscopic blotches, dots, streaks, peripheral globules, and vascular structures were considered local features and occurred infrequently. Thus, they did not change the overall lesion classification.

STATISTICAL ANALYSIS

Descriptive statistics were calculated to characterize the study cohort and describe the prevalence of back nevi. The prevalence of nevi was described by age group and median mole count. Univariate statistics including odds ratios and 95% confidence intervals were used to describe the relationship between host and environmental factors and median mole count. Dermoscopic features of the index moles at follow-up were presented using descriptive statistics stratified by size of the index nevi and upper vs lower back.

RESULTS

BASELINE RESULTS

We obtained informed consent from 100% (N = 52) of the children and their parents. Completed surveys were returned for 51 of 52 children and parents. Examinations, digital photography, and dermoscopy were performed on 50 children (participation rate, 96%; 50/52); 2 children were absent from school on the day of the scheduled examination. The characteristics of the study cohort at baseline are summarized in Table 1; 90% were white, 57% were girls, and the median age was 12.5 years (range, 11-13.8). Eighty percent of children (n = 39) stated that their skin was fair/very fair (corroborated by 97% of parents' responses [n = 38]). Eighty-six percent of children reported having at least 1 sunburn the previous summer (n = 42). Parent respondents were generally female (n = 44; 86%) with a median age of 42 years. Nearly two thirds of them (n = 33) stated that they were at average to high risk of developing skin cancer.
More than 25% (n = 14) reported a first-degree relative with skin cancer, and 70% (n = 36) reported themselves to be fair skinned.

At baseline, the median number of back nevi was 15, and the mean (SD) number of nevi was 21.9 (15.3) (range, 2-70) based on the review of digital images (Figure 1). The prevalence of nevi described by age is presented in Figure 2. An important and significant trend of increasing median mole count with increasing age was observed. No differences were apparent when analyses were stratified by male vs female student. Table 2 summarizes the univariate statistics for host and environmental factors and median mole count.

Table 1. Demographic Characteristics of Student Population

Characteristic | Finding*
Mean (SD) age, y | 12.4 (0.68)
Sex
  Male | 22 (43)
  Female | 29 (57)
Race
  White | 46 (90)
  Nonwhite† | 4 (8)
Color of untanned skin
  Very fair/fair | 41 (80)
  Olive/very dark | 10 (20)
No. of sunburns during previous summer
  0 | 7 (14)
  1 | 23 (45)
  2 | 12 (24)
  ≥3 | 9 (18)
Do people look healthier with a tan?
  Yes | 24 (47)
  No | 15 (29)
  Don't know | 11 (22)
What happens to your skin after you go outside in the sun for 45 minutes in the summer?
  Always burn | 1 (2)
  Usually burn | 8 (16)
  Sometimes burn | 20 (39)
  Rarely burn | 21 (41)
How often do you apply sunscreen when you go to the beach or pool?
  Never/less than half of the time | 15 (29)
  Most days/every day | 33 (65)
During the past summer, when outside, how often do you have sunscreen on?
  Never/less than half of the time | 32 (63)
  Most days/every day | 18 (35)

*While n = 51, responses do not total 51 because of missing or incomplete questionnaire data. Unless otherwise indicated, data are number (percentage) of respondents.
†Asian, Hispanic, and other.
Older age, male sex, fair skin, belief that a tan is healthier, tendency to burn, and sporadic use of sunscreen were positively associated with higher median mole count, although only age was statistically significant.

On close-up clinical examination, the median diameter of the largest nevus was 4 mm, and the mean (SD) diameter was 4.3 (1.7) mm. Asymmetry was specified across 0, 1, or 2 axes. All nevi were asymmetric: 94% were asymmetric in 1 axis, and 6% were asymmetric in 2 axes. Clinical borders were assessed as fuzzy or distinct and as regular or irregular. Eighty percent (40) of the nevi had fuzzy borders, and 58% (29) had regular borders. Nevus color was assessed by the presence of 7 colors: light brown, medium brown, dark brown, red, black, white, and blue/gray. The predominant colors present in the nevi were the shades of brown: 62% (31) of the nevi contained light brown; 52% (26), medium brown; and 30% (15), dark brown. Red was observed in 20% (10) of the nevi, and black, white, and blue/gray were not observed in any of the index nevi.

All of these clinical features were compared between younger and older students (dichotomized at the median age), between boys and girls, and based on skin color (very fair to fair vs olive to very dark). There were no significant differences detected for any of these comparisons. No children demonstrated nevi requiring referral to a dermatologist. Nevi were classified into groups based on the predominant dermoscopic characteristic. Predominant dermoscopic patterns for the index nevus were as follows: 38% (19) globular, 14% (7) reticulated, 38% (19) structureless, and 10% (5) combinations of the above patterns with no predominant characteristic.

FOLLOW-UP RESULTS

At 1-year follow-up, 42 of 51 surveys (parent and child) were returned, for a response rate of 82%, and 42 of 50 children completed the follow-up examination, for a participation rate of 84%.
There was an overall participation rate from baseline to follow-up of 81% (42/52) for the skin examination process. Of the 8 students who did not participate in the follow-up examination, 2 refused to participate, 2 were absent on the day of the examination, 2 had moved to nearby schools in Framingham, and 2 had moved to other school systems.

An increased number of nevi were observed at the 1-year follow-up examination: new nevi were identified in 36% of students (n = 15). We assessed changes in dermoscopic pattern in the single baseline index nevus over the year follow-up interval and observed changes in 10% of these lesions (n = 5). The results of the dermoscopic classification of nevi at the follow-up examination are summarized in Table 3. Dominant dermoscopic pattern appears to be related to nevus size: smaller nevi tend to be structureless, while larger nevi are of mixed pattern. Table 3 also summarizes the results of a stratified analysis for globular and reticular nevi by anatomic site, suggesting that reticulation is more common on the upper back.

COMMENT

We report results of a pilot study in Framingham, Mass, schoolchildren that describes the prevalence of nevi using digital photography and digital dermoscopy. This is the first study to document clinical and dermoscopic features of common nevi in schoolchildren using recent advances in technology and to explore the interrelationship between nevi and host and environmental factors.

Knowledge of the causes and evolution of nevi has significant public health importance for the primary and secondary prevention of melanoma. Public health campaigns and clinical efforts to reduce melanoma deaths currently target individuals with many nevi or apparently atypical nevi. Identification of factors that predict the development of multiple and atypical nevi will improve targeting of primary prevention efforts in early life.
Because these factors may be apparent earlier in life, there is an opportunity to intervene when sun protection efforts are more likely to succeed. Nevus phenotype is currently used to identify high-risk individuals for directed efforts in sun protection and early detection. In regard to the secondary prevention of melanoma, recent efforts in early detection have intensified across all age groups with an emphasis on the importance of change in a nevus as a sensitive marker of early curable disease.

Our results showed that the median number of back nevi was 15, with a significant association of increasing age with higher median mole count.

Figure 1. Distribution of back mole counts in 50 students. [Histogram; axes: mole count (0-75) vs percentage of students.]

Figure 2. Scatterplot of number of back nevi by age of student (n = 50). The regression line is based on simple linear regression with total mole count as the dependent variable and age as the predictor with the intercept being suppressed. [Axes: age, y (10-14) vs total back mole count.]

Older age, male sex, fair skin, belief that a tan is healthier, tendency to burn, and sporadic use of sunscreen were positively associated with mole count, although age was the only statistically significant factor. Predominant dermoscopic patterns for the index nevus were as follows: 38% globular (n = 19), 14% reticulated (n = 7), 38% structureless (n = 19), and 10% combinations of patterns (n = 5). It has been observed that globules are often seen in enlarging and congenital nevi.32,33 The fact that 38% of nevi were globular might suggest that many of these nevi were growing or congenital.
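The Figure 2 caption describes a simple linear regression with the intercept suppressed. For a through-origin fit y = bx, the least-squares slope reduces to b = Σxy / Σx². A minimal sketch with hypothetical (age, count) pairs, not the study's data:

```python
# Through-origin (intercept-suppressed) least-squares fit, as described in
# the Figure 2 caption: total back mole count regressed on age.
# The (age, count) pairs below are HYPOTHETICAL illustration data.
def slope_through_origin(xs, ys):
    """Least-squares slope of y = b*x with no intercept term."""
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

ages = [11, 11.5, 12, 12.5, 13, 13.5]
counts = [10, 14, 18, 22, 30, 38]

b = slope_through_origin(ages, counts)
print(f"slope = {b:.2f} moles per year of age")
```

Suppressing the intercept forces the fitted line through (0, 0), which is a modeling choice rather than a claim that newborns have zero nevi at age zero extrapolation; it simply anchors the trend the figure illustrates.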
We may have selected for this by evaluating young children and selecting the largest nevi for close-up photography. A total of 19 (38%) of the nevi were structureless, and this may also be consistent with a congenital pattern.34 However, it should be noted that studies have found congenital nevi to be prevalent in only 1% to 6% of the population, and the interpretation that these small nevi are congenital is only suggestive.23,35,36

Table 2. Univariate Statistics by Median Mole Count in 50 Students

Variable | No. of Students* | Median Mole Count | OR (95% CI)†
Age
  Younger (11.0-12.3 y) | 24 | 12.5 | 1.0
  Older (12.4-13.5 y) | 25 | 23 | 3.6 (1.1-11.4)‡
Sex
  Female | 28 | 14.5 | 1.0
  Male | 21 | 25 | 2.5 (0.68-9.41)
Race
  White | 44 | 18.5 | 1.0
  Nonwhite§ | 4 | 7.5 | . . .
Color of skin
  Olive/very dark | 10 | 11.5 | 1.0
  Very fair/fair | 39 | 16 | 1.57 (0.41-6.07)
No. of sunburns in previous summer
  0-1 | 30 | 16 | 1.0
  >1 | 19 | 15 | 0.90 (0.30-2.80)
Do people look healthier with a tan?
  No | 15 | 14 | 1.0
  Yes | 24 | 21 | 1.6 (0.49-5.87)
  Don't know | 10 | 10 | 0.50 (0.09-2.66)
What happens to skin after 45 minutes in sun?
  Sometimes/rarely burn | 40 | 15 | 1.0
  Always/usually burn | 9 | 25 | 1.38 (0.34-5.50)
How often do you apply sunscreen when you go to the beach or pool?‖
  Routinely | 13 | 13 | 1.0
  Sporadically | 33 | 18 | 2.70 (0.71-9.98)
During the past summer, when outside, how often did you have sunscreen on?‖
  Routinely | 8 | 12 | 1.0
  Sporadically | 40 | 17 | 3.31 (0.66-13.1)

Abbreviations: CI, confidence interval; OR, odds ratio.
*Responses do not total 50 because of missing or incomplete data.
†Odds ratio for association between variable of interest and higher median mole count. Ellipses indicate OR estimate unavailable owing to small contingency cell counts.
‡P<.05.
§Asian, Hispanic, or other.
‖Sporadically includes "never" and "most days"; routinely includes "every day."

Table 3. Dermoscopic Pattern by Size and Site of Index Mole at Follow-up Examination of 50 Students*

No. (%) of Moles
Characteristic | Structureless | Globular/Reticular | Mixed | Total (N = 155)
Size, mm
  ≤2.0 | 26 (40.6) | 22 (34.4) | 16 (25.0) | 64
  2.1-4.0 | 16 (24.6) | 23 (35.4) | 26 (40.0) | 65
  ≥4.1 | 2 (7.7) | 8 (30.7) | 16 (61.5) | 26
Site (globular/reticular nevi only)
  Upper back | 17 (39.5) / 8 (80.0)
  Lower back | 26 (60.5) / 2 (20.0)

*Each student contributed up to 4 index nevi. Site was tabulated for globular and reticular nevi only; it is not applicable to the other columns.

Only 3 longitudinal studies have examined the evolution of nevi in individual children over time and assessed the relationship between nevus evolution and risk factor exposures.17-19 Two large studies conducted in Europe and Canada assessed nevi in children at 2 time points.18,19 The European study included 377 children examined at ages 7 and 12 years, and the Canadian study conducted entrance and exit examinations of 309 first and fourth graders who were part of a 3-year randomized trial of broad-spectrum sunscreen use. Both studies used nevus counts obtained from clinical examination. The results showed a significant increase in nevus counts over time and a strong association between nevus count and skin type, pigmentation characteristics, and freckling. In a small cohort study of 102 Australian schoolchildren,20 melanocytic nevi were counted, and in a subset of 20 students aged 12 to 14 years, all nevi of the face and neck were photographically mapped and clinically assessed annually during 4 years. This study of 20 adolescents represents the only longitudinal study of individual nevi in this age group to date.
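The odds ratios and 95% confidence intervals in Table 2 follow the standard log (Woolf) method for a 2×2 table. The sketch below uses hypothetical cell counts, chosen so the age comparison lands near the published 3.6 (1.1-11.4); the article does not report the underlying cells:

```python
# Univariate odds ratio with a 95% CI (log/Woolf method), matching the
# style of analysis in Table 2. The 2x2 counts are HYPOTHETICAL: they split
# older (n = 25) and younger (n = 24) students by above/below-median mole
# count so the result comes out near the published estimate.
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """a, b = exposed above/below median; c, d = unexposed above/below."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical split: older students 16 above / 9 below the median count,
# younger students 8 above / 16 below
or_, lo, hi = odds_ratio_ci(16, 9, 8, 16)
print(f"OR = {or_:.1f} (95% CI, {lo:.1f}-{hi:.1f})")
```

With cells this small, the log-method interval is wide (roughly 1.1 to 11.5 here), which is why only the age comparison in Table 2 excludes 1.0 and reaches significance.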
Over the follow-up period, nevus numbers increased 47% in the first year, with smaller increases in older students.26

The selection of the back as the anatomic focus for this study was based on logistic and epidemiologic considerations.7,8 The existing infrastructure at the schools for administering scoliosis examinations, along with the efficiency of restricting the study to the skin of the back, resulted in very high participation rates and excellent data quality. The back as a site is ideal for photography studies because it is a relatively flat surface, and the surface area can be determined with precision, which is required for assessment of proportionate vs disproportionate nevus growth.37 Although it would have been optimal to conduct a skin examination and photography of the full body, the utility and efficiency of restricting our study to the back is supported by several epidemiologic studies: English and Armstrong8,38 have demonstrated the least interobserver variation for nevus counts of the back relative to other anatomic sites and excellent correlation between back nevus counts from photographs and those from direct examination. Autier et al7 have demonstrated a strong correlation between back and total nevus counts and recognize the phenotype of back nevi as an excellent marker of melanoma risk.

We demonstrated the feasibility of implementing and conducting this type of research and maintaining high compliance with minimal loss to follow-up. The strength of this study is the high response rate achieved for survey completion and the skin examination and photography process. A limitation is the small size of the sample, which was a function of the pilot nature of the study. The cohort sampled represents a predominantly white group. The survey data used for this analysis were obtained from the responses from the child survey.
In collecting data from parent and child, we recognize that discrepancies in responses will arise between them. Analysis of these discrepancies will permit an indirect assessment of the validity of responses and will be part of a separate study report.

Nevi appear to share a common causal pathway with melanoma that involves interplay between constitutional factors and sun exposure.12 Recent studies distinguish between common and atypical (dysplastic) nevi as important risk markers and lesional steps in melanocytic tumor progression.2,3,30 A limitation of these studies is the lack of a consistent definition of nevi, which is related to the existing controversy surrounding the clinical and histologic definitions of common and atypical nevi. An improved understanding of the natural history of nevi at the subsurface (dermoscopic) level may prove especially informative in this regard and will likely generate new biologic hypotheses regarding the evolution of nevi.

Understanding the evolution of nevi during adolescence is important for improving the early detection of relatively rare melanomas that occur in this age group and perhaps even more important for avoiding unnecessary excisions of large numbers of changing benign nevi. An analysis of data from patients aged 0 to 14 years (N = 96 255) seen in the Henry Ford Health System in the year 2001 shows that full-thickness excisions of suspect skin lesions (excluding shave excisions of epidermal or dermal lesions) were performed at rates of 1.2 per 1000 children aged 0 to 9 years and 4.2 per 1000 children aged 10 to 14 years (Christine Cole Johnson, PhD, personal written communication, January 2003). An extrapolation of these data to the 2000 US Census estimates suggests that there are over 125 000 full-thickness excisions of skin lesions performed on children younger than 14 years annually in the United States, despite the very low incidence of skin cancer in this age group.
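The >125 000 figure is a rate-times-population extrapolation. A sketch of the arithmetic, using approximate 2000 Census population counts (round numbers assumed here, not the authors' exact inputs):

```python
# Extrapolating the Henry Ford excision rates (1.2 per 1000 ages 0-9,
# 4.2 per 1000 ages 10-14) to national population counts.
# Census figures are APPROXIMATE round numbers for the 2000 US Census.
census_0_9 = 39_700_000    # approx. US population ages 0-9
census_10_14 = 20_500_000  # approx. US population ages 10-14

excisions = census_0_9 * 1.2 / 1000 + census_10_14 * 4.2 / 1000
print(f"estimated full-thickness excisions per year: {excisions:,.0f}")
```

Even with conservative rounding of the census inputs, the two terms sum to well over 125 000 excisions annually, consistent with the figure quoted in the text.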
Dermatologists (A.A.M. and A.C.H.) counted back nevi from the images independently. The interobserver nevus counts were highly correlated, with a correlation coefficient r² = 0.97. On review, it was discovered that the primary source of nevus count discrepancies was the skipping and double-counting of nevi on freehand counts. A tagging tool provided in the image archiving software, which permits tagging of each nevus as it is counted, can be used to reduce these discrepancies.

We demonstrated the feasibility of conducting a study in US schoolchildren to determine the prevalence and dermoscopic features of back nevi and to attain high response rates and collect important data on host factors of children and parents. Health systems in schools can be natural settings for researchers to obtain high response rates. This study lays the foundation for future studies that will elucidate the relationship between nevus evolution and phenotype, genotype, and risk factor exposures in a population-based cohort.

Accepted for publication September 29, 2003.

This research was supported by grants provided by the American Skin Association and the National Melanoma Foundation.

We would like to thank the teachers and staff in the Framingham School System and the parents and students who participated in this study.

Corresponding author and reprints: Susan A. Oliveria, ScD, MPH, Dermatology Service, Memorial Sloan-Kettering Cancer Center, 1275 York Ave, New York, NY 10021 (e-mail: oliveri1@mskcc.org).

REFERENCES

1. Jemal A, Murray T, Samuels A, Ghafoor A, Ward E, Thun MJ. Cancer statistics, 2003. CA Cancer J Clin. 2003;53:5-26.
2. Tucker MA, Halpern A, Holly EA, et al. Clinically recognized dysplastic nevi: a central risk factor for cutaneous melanoma. JAMA. 1997;277:1439-1444.
3. Elder DE, Goldman LI, Goldman SC, Greene MH, Clark WHJ. Dysplastic nevus syndrome: a phenotypic association of sporadic cutaneous melanoma. Cancer. 1980;46:1787-1794.
4. Swerdlow AJ, English J, MacKie RM, et al. Benign melanocytic naevi as a risk factor for malignant melanoma. Br Med J (Clin Res Ed). 1986;292:1555-1559.
5. Holly EA, Kelly JW, Shpall SN, Chiu SH. Number of melanocytic nevi as a major risk factor for malignant melanoma. J Am Acad Dermatol. 1987;17:459-468.
6. Nordlund JJ, Kirkwood J, Forget BM, et al. Demographic study of clinically atypical (dysplastic) nevi in patients with melanoma and comparison subjects. Cancer Res. 1985;45:1855-1861.
7. Autier P, Boniol M, Severi G, et al. The body site distribution of melanocytic naevi in 6-7 year old European children. Melanoma Res. 2001;11:123-131.
8. English DR, Armstrong BK. Melanocytic nevi in children, I: anatomic sites and demographic and host factors. Am J Epidemiol. 1994;139:390-401.
9. Fritschi L, Green A, Solomon PJ. Sun exposure in Australian adolescents. J Am Acad Dermatol. 1992;27:25-28.
10. Harrison SL, MacLennan R, Speare R, Wronski I. Sun exposure and melanocytic naevi in young Australian children. Lancet. 1994;344:1529-1532.
11. Hanrahan PF, Hersey P, Menzies SW, Watson AB, D'Este CA. Examination of the ability of people to identify early changes of melanoma in computer-altered pigmented skin lesions. Arch Dermatol. 1997;133:301-311.
12. Dwyer T, Blizzard L, Ashbolt R. Sunburn associated with increased number of nevi in darker as well as lighter skinned adolescents of northern European descent. Cancer Epidemiol Biomarkers Prev. 1995;4:825-830.
13. Green A, Siskind V, Hansen ME, Hanson L, Leech P. Melanocytic nevi in schoolchildren in Queensland. J Am Acad Dermatol. 1989;20:1054-1060.
14. Gallagher RP, Rivers JK, Yang CP, McLean DI, Coldman AJ, Silver HKB. Melanocytic nevus density in Asian, Indo-Pakistani, and white children: The Vancouver Mole Study. J Am Acad Dermatol. 1991;25:507-512.
15. Coombs BD, Sharples KJ, Cooke KR, Skegg DC, Elwood JM. Variation and covariates of the number of benign nevi in adolescents. Am J Epidemiol. 1992;136:344-355.
16. Fritschi L, McHenry P, Green A, Mackie R, Green I, Siskind V. Naevi in schoolchildren in Scotland and Australia. Br J Dermatol. 1994;130:599-603.
17. Green A, Siskind V, Green L. The incidence of melanocytic naevi in adolescent children in Queensland, Australia. Melanoma Res. 1995;5:155-160.
18. Luther H, Altmeyer P, Garbe C, et al. Increase of melanocytic nevus counts in children during 5 years of follow-up and analysis of associated factors. Arch Dermatol. 1996;132:1473-1478.
19. Gallagher RP, Rivers JK, Lee TK, Bajdik CD, McLean DI, Coldman AJ. Broad-spectrum sunscreen use and the development of new nevi in white children. JAMA. 2000;283:2955-2960.
20. Siskind V, Darlington S, Green L, Green A. Evolution of melanocytic nevi on the faces and necks of adolescents: a four year longitudinal study. J Invest Dermatol. 2002;118:500-504.
21. Greene MH, Clark WH Jr, Tucker MA, et al. Medical intelligence: current concepts: acquired precursors of cutaneous malignant melanoma. N Engl J Med. 1985;312:91-97.
22. Clark WH Jr, Elder DE, Guerry D IV, Epstein MN, Greene MH, Van Horn M. A study of tumor progression: the precursor lesions of superficial spreading and nodular melanoma. Hum Pathol. 1984;15:1147-1165.
23. Rivers JK, MacLennan R, Kelly JW, et al. The Eastern Australian Childhood Nevus Study: prevalence of atypical nevi, congenital nevus-like nevi, and other pigmented lesions. J Am Acad Dermatol. 1995;32:957-963.
24. Banuls J, Climent JM, Sanchez-Paya J, Botella R. The association between idiopathic scoliosis and the number of acquired melanocytic nevi. J Am Acad Dermatol. 2001;45:35-43.
25. Coleman WP III, Gately LE III, Krementz AB, Reed RJ, Krementz ET. Nevi, lentigines, and melanomas in blacks. Arch Dermatol. 1980;116:548-551.
26. Darlington S, Siskind V, Green L, Green A. Longitudinal study of melanocytic nevi in adolescents. J Am Acad Dermatol. 2002;46:715-722.
27. Kopf AW, Levine LJ, Rigel DS, Friedman RJ, Levenstein M. Prevalence of congenital-nevus-like nevi, nevi spili, and cafe au lait spots. Arch Dermatol. 1985;121:766-769.
28. Vogelstein B, Fearon ER, Hamilton SR, et al. Genetic alterations during colorectal-tumor development. N Engl J Med. 1988;319:525-532.
29. Elder DE, Clark WH Jr, Elenitsas R, Guerry D IV, Halpern AC. The early and intermediate precursor lesions of tumor progression in the melanocytic system: common acquired nevi and atypical (dysplastic) nevi. Semin Diagn Pathol. 1993;10:18-35.
30. Kanzler MH, Mraz-Gernhard S. Primary cutaneous malignant melanoma and its precursor lesions: diagnostic and therapeutic overview. J Am Acad Dermatol. 2001;45:260-276.
31. Marghoob AA, Kopf AW. Melanoma risk in individuals with clinically atypical nevi. Dermatopathol Pract Concept. 1996;1:254-261.
32. Braun RP, Calza AM, Krischer J. The use of digital dermoscopy for the follow-up of congenital nevi: a pilot study. Pediatr Dermatol. 2001;18:277-281.
33. Kittler H, Seltenheim M, Dawid M, Pehamberger H, Wolff K, Binder M. Frequency and characteristics of enlarging common melanocytic nevi. Arch Dermatol. 2000;136:316-320.
34. Seidenari S, Pellacani G. Surface microscopy features of congenital nevi. Clin Dermatol. 2002;20:263-267.
35. Lorenz S, Maier C, Segerer H, Landthaler M, Hohenleutner U. Skin changes in newborn infants in the first 5 days of life. Hautarzt. 2000;51:396-400.
36. Walton RG, Jacob AH, Cox AJ. Pigmented lesions in newborn infants. Br J Dermatol. 1976;95:389-396.
37. Rhodes AR, Albert LS, Weinstock MA. Congenital nevomelanocytic nevi: proportionate area expansion during infancy and early childhood. J Am Acad Dermatol. 1996;34:51-62.
38. English DR, Armstrong BK. Melanocytic nevi in children, II: observer variation in counting nevi. Am J Epidemiol.
work_vav7hjkjdbagpepxrwty63lvom ---- Merging the arts and sciences for collaborative sustainability action: a methodological framework

Sustainability Science (2020) 15:1067–1085 https://doi.org/10.1007/s11625-020-00798-7

ORIGINAL ARTICLE

Carlie D. Trott 1 · Trevor L. Even 2 · Susan M. Frame 3

Received: 2 July 2019 / Accepted: 16 March 2020 / Published online: 2 April 2020 © The Author(s) 2020

Abstract
This manuscript explores the possibilities and challenges of art–science integration in facilitating collaborative sustainability action in local settings. To date, much sustainability education is prescriptive, rather than participatory, and most integrated art–science programming aims for content learning, rather than societal change. What this means is that learners are more often taught "what is" than invited to imagine "what if?" In order to envision and enact sustainable alternatives, there is a need for methods that allow community members, especially young people, to critically engage with the present, imagine a better future, and collaboratively act for sustainability today. This manuscript introduces a methodological framework that integrates the arts and sciences to facilitate: (1) transdisciplinary learning, focusing on local sustainability challenges; (2) participatory process, bringing experience-based knowledge into conversation with research-based knowledge; and (3) collaborative sustainability action, inviting community members to envision and enact sustainable alternatives where they live.
The transformative potential of this framework is examined through international case studies from countries representing the richest and poorest in the Western hemisphere: a multi-site research study and after-school program for climate change education and action in collaboration with children in the Western US; and a multi-cycle research study and community arts center course for environmental photography and youth-led water advocacy in Southern Haiti. Despite many shared characteristics, the case studies diverge in important ways relative to the sustainability challenges they sought to address, the specific context in which activities took place, and the manner in which art–science integration was practiced. Across cases, however, art–science integration facilitated participants' learning, connection, and action for sustainability. Framed by the shared aims of transdisciplinary approaches, this manuscript discusses methodological hurdles and practical lessons learned in art–science integration across settings, as well as the transformative capacity of alternative pedagogical and research practices in building a sustainable future.

Keywords Art–science integration · Community engagement · Sustainability science education · Transdisciplinary research · Transformative learning · Youth-led participatory action research

Introduction

The urgency of radical societal transformation to sustainability underscores the necessity of alternative approaches to sustainability education through which communities can learn about, relate to, and collaboratively act for sustainability in local contexts. Given the daunting nature of sustainability challenges, there is especially a need for pedagogies that allow young people to engage with sustainability on their own terms and in ways that not only deepen their understanding, but also support their interest, active participation, and sustained engagement (Cutter-Mackenzie and Rousell 2019; Facer 2019; Holfelder 2019).
Handled by Eefje Cuppen, Universiteit Leiden Instituut Bestuurskunde, Netherlands.

* Carlie D. Trott, carlie.trott@uc.edu; Trevor L. Even, trevor.even@colostate.edu; Susan M. Frame, jakmelekspresyon@gmail.com

1 Department of Psychology, University of Cincinnati, Cincinnati, OH, USA; 2 Department of Anthropology, Colorado State University, Fort Collins, CO, USA; 3 Jakmel Ekspresyon Art Center, Jacmel, Haiti

In place of top-down models consisting of universal, competency-based solutions,1 it is critical that sustainability educators encourage learners to critically engage with present realities, envision alternatives, and take collaborative action for sustainability where they live. What pedagogical modes may facilitate these bottom-up processes and, through them, position young people as radical visionaries and change agents for a sustainable future today?

One possibility is to integrate the arts and sciences in action-oriented sustainability education (Skorton and Bear 2018). Sustainability challenges are simultaneously scientific and cultural in nature. Scientific discourses have dominated the sustainability literature to date, yet the arts, as drivers of culture (Kagan 2011), have a critical role to play in societal transformation to sustainability. Though the arts (like sustainability) resist a simple definition, in this manuscript "the arts" refer to modes of expression—such as music, performance, painting, photography, and film—that use "skill or imagination in the creation of aesthetic objects, environments, or experiences that can be shared with others" (Kuiper 2016).2 Historically, the arts have been mobilized for social change—a way to resist and rewrite dominant narratives, spark critical dialogue around societal issues, and open the door to potential transformation.
Towards cultivating cultures of sustainability, the arts have the potential to strengthen affective ties between people and places, as well as other kinds of emotional bonds that spur action (Heras and Tàbara 2014; Kagan 2008; Singleton 2015). In educational contexts, the arts foster diverse ways of learning and understanding the world, which can be highly engaging and motivating to learners, especially young people (Heras et al. 2016; Sipos et al. 2008). Importantly, the critical role of the arts in facilitating societal transformation to sustainability lies in their capacity "to create experience, make relationships, [and] expand mental activity" in ways that allow us to reexamine our non-sustainable present realities and act towards alternative futures (Kagan 2011, p. 218).

Recent decades have witnessed a rise in forms of art–science integration that infuse the STEM disciplines (science, technology, engineering, and math) with the arts (e.g., via STEAM programming), particularly in K-12 settings (Land 2013; Taylor and Taylor 2017). More often than not, however, such practices have tended to instrumentalize the arts for the purpose of science learning (Psycharis 2018), rather than to activate the transformative potential of the arts for social change. Put differently, art–science integration has typically been an instructional means to transmit information about 'what is,' rather than an invitation to imagine 'what if?' Moreover, there is limited empirical evidence around how to approach, design, and implement STEAM education (Quigley et al. 2017, 2019), and few published studies have explored practical challenges related to these forms of integrated engagement (Herro et al. 2019). What are the possibilities and challenges of art–science integration in the context of sustainability education?
And how might sustainability scholars, educators, and advocates integrate the arts and sciences in ways that enable children and youth3 to envision and enact sustainable alternatives at the local level?

This manuscript presents a pair of case studies experimenting with art–science integration for sustainability action in widely different settings: a climate action project in the Western US and a water advocacy project in Southern Haiti. Despite great variation in context and application, each case study used community-engaged participatory research methods in collaboration with children and youth, while mobilizing art–science integration for collaborative sustainability action in local settings. Before offering case study descriptions, this manuscript first reviews literature relevant to art–science integration for enabling young people to envision and enact sustainable alternatives. Building on this literature review, we present a methodological framework for transdisciplinary learning and collaborative sustainability action that shaped—and was shaped by—participatory processes taking place within each community context. Next, each element of the framework is used to describe the key components and action-based outcomes of each case study. Finally, alternative pedagogical and research practices, such as those described in this manuscript, are discussed in terms of their transformative potential towards building a sustainable future.

1 Following Wiek et al. (2012), use of the term "solutions" with reference to sustainability challenges does not imply simplicity, swiftness, or universality. As opposed to "a simple 'command and control' approach," addressing sustainability challenges requires "participation, coordination, iteration, and reflexivity" (p. 6; see also van Kerkhoff and Lebel 2006).
2 The National Foundation on the Arts and the Humanities Act, originally passed by the United States Congress (1965), defines "the arts" as follows: "The term 'the arts' includes, but is not limited to, music (instrumental and vocal), dance, drama, folk art, creative writing, architecture and allied fields, painting, sculpture, photography, graphic and craft arts, industrial design, costume and fashion design, motion pictures, television, radio, film, video, tape and sound recording, the arts related to the presentation, performance, execution, and exhibition of such major art forms, all those traditional arts practiced by the diverse peoples of this country. (sic) and the study and application of the arts to the human environment." This legislation established the National Endowment for the Arts and the National Endowment for the Humanities.

3 Given that age-based categories are not fixed across time and space, there is no universally agreed-upon age range associated with "youth" (Corner et al. 2015; Fisher 2016). Here, "youth" is defined as under eighteen years of age.

Literature review

The following sections offer a detailed literature review that knits together disparate literatures dealing with sustainability education,4 art–science integration,5 and youth-led sustainability action. First, we trace linkages between the history of disciplinary fragmentation in research and education to present-day efforts to reintegrate the arts and sciences in informal and non-formal educational settings. Later, we discuss the transformative potential of alternative pedagogies that facilitate youth-led sustainability action.
Sustainability as science

A widely cited definition of sustainability comes from the World Commission on Environment and Development's (1987) Brundtland Report—also known as Our Common Future—which characterizes sustainable development as that which "meets the needs of the present generation without compromising the ability of future generations to meet their own needs" (p. 8). The inherent complexity of sustainability, as a concept and a practice, is apparent when considering its implied emphasis on multi-dimensional "linkages and interdependencies of the social, political, environmental, and economic dimensions of human capabilities" (Davis 2015, p. 10; Pipere 2016). Recognizing this, sustainability challenges have often been characterized as wicked problems, or those with complex, ever-changing conditions and whose solutions can be contradictory, contested, or cause problems of their own (Rittel and Webber 1973). From this perspective, sustainability topics intersect with, and give life to, all of the traditional school subjects (e.g., social studies, math) and have been integrated across the curriculum (Cook 2019; Hicks 2014). Despite this, when sustainability is introduced in the US classroom, it is most often from the exclusive perspective of science6 (Bourn et al. 2017).

Formal education in Western societies rests on a foundation of rationalism, which holds that knowledge arises from specific modes of inquiry such as observation, evidence-gathering, and scientific reasoning. This dominant paradigm is responsible for fragmenting knowledge into discrete disciplines—and consequently, school subjects—because, in order to know the world through these means, it was necessary to break it down into smaller and smaller elemental parts.
Through this process of dis-integration, certain disciplines (e.g., the sciences) came to be held in greater esteem than other disciplines (e.g., the arts) to the extent that their epistemic values aligned or dis-aligned with the rationalist perspective. In particular, the sciences—which place a premium on "objectivity, certainty, universality and predictability"—came to be valued over the arts, which tend to embrace a wider range of knowledge sources, including emotion, intuition, memory, and spirituality (Sipos et al. 2008; Cook 2019). In a 1998 essay entitled Back from Chaos, E. O. Wilson spoke of the wide chasm between the arts and sciences, noting that "the ongoing fragmentation of knowledge and the resulting chaos in philosophy are not reflections of the real world but artifacts of scholarship." Despite the widespread adoption of constructivist pedagogies in the classroom over recent decades, the legacy of Western education's rationalist roots survives not only in the existence of school subjects, but in the language of scientific objectivity and subject matter that can feel "devoid of story, attachment and meaning" (Sipos et al. 2008; Phelan 2004). What this means for learning is that students may struggle to find ways to connect to or identify with abstract science concepts despite their real-world significance (Birdsall 2013; Hodson 2003; Lester et al. 2006).

In the research laboratory as in the classroom, efforts around disciplinary integration, including cross-, multi-, inter-, and transdisciplinary approaches, have in recent decades sought to reverse the deep fragmentation that is rooted in rationalism (Weinberg and Sample-McMeeking 2017). Sustainability has long been a key site for this reintegration process (Cook 2019; Roy et al. 2019; Yarime et al. 2012). The 'wickedness' of sustainability challenges, combined with their high stakes for humanity, has ushered in an era of increasingly integrated research agendas and pedagogies.
However, in sustainability research and education, this growing cross-pollination and collaboration has taken place primarily across the boundaries of various science fields and much less with the arts (Kagan 2011; WCED 1987).

Art–science integration for social change

In this increasingly welcoming atmosphere towards a wider array of epistemic traditions, art–science integration has been proposed as a way to breathe new life into science teaching and learning. Specifically, so-called "STEAM-based" programming has been advanced as a remedy both to problems of disciplinary fragmentation and towards reviving students' affective and intuitive connections with STEM disciplines by illuminating their personal and real-world significance through creative processes (Land 2013; Taylor and Taylor 2017). STEAM-based curricula have been characterized as "an opportunity to inject creativity into courses that have traditionally been more scientific in nature" (Marmon 2019, p. 101).

4 There are many terms referring to 'environment-related education' in the sustainability domain (Hart and Nolan 1999; Håkansson et al. 2019). Following Sterling (2004) and Pipere (2016), in this manuscript the term "sustainability education" is used to refer to various approaches including education for sustainable development (ESD), education for sustainability (EFS), environmental sustainability education (ESE), and education for a sustainable future (ESF).

5 In this manuscript, the term "art–science integration" is used to refer to any kind of educational initiative integrating the science, technology, engineering, and mathematics (i.e., STEM) disciplines with the arts (e.g., STREAM, STEMM, ST2EAM, ArtSTEM; Land 2013; Taylor and Taylor 2017).

6 This is not the case everywhere. See Birdsall (2013) for an argument for the reintegration of science and sustainability education in New Zealand.
For example, in a large study of STEAM curricula, fourth grade classrooms merged the arts and sciences to design a prosthetic arm, explore how scientists recreate extinct animals, and investigate how engineers design roller coasters (Bush and Cook 2019). In a 2018 report examining the benefits of integrated art–science programming in higher education contexts, the National Academies of Sciences, Engineering, and Medicine (NASEM) note that:

The arts teach creative means of expression, understanding of different perspectives, an awareness of knowledge and emotions throughout the human experience, and the shaping and sharing of perceptions through artistic creation and practices in the expressive world. (Skorton and Bear 2018, p. 60)

Powerful actors, including governments, institutions of higher education, and industry, have increasingly embraced art–science integration, as noted in the inaugural issue of the STEAM Journal: "Government agencies are beginning to acknowledge that art and science—once inextricably linked, both dedicated to finding truth and beauty—are better together than apart" (Maeda 2013, p. 1). Research and programming around such integration has grown in recent years, particularly in informal learning contexts, as has discussion and debate around its specific aims, organization, and implementation—as indicated by the existence of numerous related but distinct titles (e.g., STREAM, STEMM, ST2EAM, ArtSTEM; Land 2013; Taylor and Taylor 2017).

Considered holistically, integrated art–science engagement with youth seems like a worthwhile endeavor—especially around deepening understanding and critical engagement with sustainability challenges. For one, it offers a pathway towards bridging the kinds of divides—constructed (e.g., emotion–reason) and institutionalized (i.e., art–science)—that would appear to hinder sustainable transformation.
Moreover, to borrow the language of interdisciplinarity, art–science integration holds the promise of bringing together "different disciplinary ideas and methods … [which] can result in novel, unexpected answers to familiar, timeworn questions" (Toomey et al. 2015, p. 1). However, a strong theme across its various configurations, and a seeming limitation, is that the arts are typically integrated with STEM disciplines for the purpose of deeper STEM engagement (Liao 2019). In other words, when art–science integration occurs, the arts are mobilized almost exclusively in the service of science, either as a way of concretizing abstract subject matter or cultivating students' active participation in ways that stimulate their creative thinking, feeling, or personal connection (Psycharis 2018). Ultimately, the arts are instrumentalized towards strengthening students' sustained STEM interest and engagement, thus reproducing—rather than dissolving—existing disciplinary hierarchies in the name of interdisciplinarity. More generally, the stated purpose of integrating the arts and sciences is often to deepen students' content knowledge (i.e., learning "what is"), rather than to promote students' critical engagement with reality and spur social transformation (i.e., asking "what if?"). A distinction articulated by radical educational theorist Paulo Freire (1972) is relevant in this regard:

Education either functions as an instrument which is used to facilitate integration of the younger generation into the logic of the present system and bring about conformity or it becomes the practice of freedom, the means by which men and women deal critically and creatively with reality and discover how to participate in the transformation of their world.

Given the gravity of sustainability challenges and the urgency of cultural transformation, there is a need for integrated art–science pedagogies that build upon the historical legacy of the arts in prompting social change.
At present, however, there is reason to suspect that infusing the arts into STEM teaching and learning through novel pedagogies is a sort of rebranding effort for science and technology disciplines. In many STEAM-based curricula, the arts are an "add-on experience", keeping more or less intact traditional approaches to STEM teaching and learning (Quigley et al. 2019, p. 143). Moreover, rather than delivering the promised paradigm shift, much art–science integration appears poised to uphold non-sustainable present-day conditions. On a societal level, building the STEM workforce through innovative recruitment and retention strategies (e.g., art–science integration) is, at its core, a way to shore up global economic competitiveness; a brimming STEM 'pipeline' ensures continued leadership in innovation over the decades to come (Maeda 2013; National Science Foundation 2015). As the founder of the STEAM Journal put it:

With global competition rising, America is at a critical juncture in defining its economic future. I believe that art and design are poised to transform our economy in the 21st century like science and technology did in the last century, and the STEAM movement is an opportunity for America to sustain its role as innovator of the world (Maeda 2013, p. 1).

While this statement promises transformation, the goal of maintaining US global dominance, via STEAM, reveals a preference for the neoliberal status quo. Put differently, just as global sustainability crises demand that we reexamine (and resist) dominant narratives around individualism, consumerism, and competition, art–science integration is framed here as a way to give the US the perpetual upper hand.
The realities of global sustainability crises challenge us to envision alternative structures and institutions, such as Jackson's (2017) notion of 'the economy of tomorrow,' which redefines prosperity in terms not of material concerns, but of human and ecological well-being. From a sustainability perspective, then, current applications of STEAM (and similar integrative approaches) do not appear particularly transformative. Outside of "learning through the arts" to "enrich learning in disciplines beyond the arts" (Ghanbari 2015, p. 3; Hetland 2013), how might we rethink art–science integration in the context of global transformation to sustainability? For example, what would it look like to see art–science integration as existing on a continuum which, on one end, mobilizes science in the service of arts-based activism? Moreover, what would it mean to integrate art and science for the purpose of envisioning and enacting alternative futures?

Transformative learning for youth-led sustainability action

Despite prevailing trends in STEAM education, a number of educational programs in the sustainability realm have sought to engage young people in various forms of social change action, most often in non-formal educational settings (Percy-Smith and Burns 2013). Taking seriously the notion that information dissemination (i.e., science learning) is insufficient to the task of societal transformation, these educational initiatives have sought to facilitate transformative learning by engaging the 'whole person' and emphasizing knowledge, values, and action (Jensen and Schnack 2006; Sterling 2001). These pedagogical practices have been advocated under a variety of names, including (but not limited to) education for sustainable development (ESD; UNESCO 2019), environmental education for empowerment (Palmer 1998), science education for action (Hodson 2003, 2011; Jenkins 1994), and education for sustainability (Singleton 2015).
Such approaches share an ethos that mobilizing people in sustainability requires not only the act of collectively constructing visions of sustainable transformation, but also linking those visions to opportunities for action (Meadows and Randers 1992). Indeed, in order to "achieve sustainable human well-being and the flourishing of a non-human world", we must engage with young people as "citizens who care about sustainability and who are supported and able to act on their concerns" (Hayward 2012, p. 5, emphasis added; Hodson 2011). Given dominant cultural narratives that regard young people not as human beings but as "human becomings"—thus limiting their sociopolitical inclusion—it is also important that children play a role in choosing, planning, and taking action for sustainability in ways that support their sense of agency (Qvortrup 2009).

A primary goal of action-based educational initiatives has been to empower young people as agents of change within their families and communities. In the context of sustainability, youth who are "environmentally empowered" are confident in their own abilities to "make a difference in the world, both by daily, personal choices related to lifestyle and by influencing [others]" (Schreiner et al. 2005, p. 8). Research emphasizing the potential role of young people as agents or catalysts of change in their families and communities often points to the efficacy of intergenerational influence as a means for youths' active, influential, and critical position in the transformation of their environments (Ballantyne et al. 1998; Percy-Smith and Burns 2013). Given a stubborn focus on individual rather than collective action in sustainability education contexts, however, there is a need for "engag[ing] differently" in ways that map onto the large-scale transition that is now required (Jorgenson et al. 2019, p. 168).
Studies of youth social action consistently find that when young people are given opportunities to explore their voice and communicate their concerns in the public sphere, they can develop competencies often linked to empowering, participatory outcomes (Foster-Fishman et al. 2005; Malone 2013; Roth and Lee 2004). For example, in a study by Johnson et al. (2013), after participating in a youth development program "designed to increase knowledge and capacity for leadership and action in response to climate change" (p. 29), Ugandan youth reported increased self-efficacy, social and political awareness, commitment to civic action, and leadership. Moreover, collaborative action can be a way for young people to manage their feelings of frustration and powerlessness in the face of sustainability challenges, while creating tangible change within their communities (Trott 2019a, b).

Though limited, studies of children's collaborative sustainability action using arts-based methodologies suggest similarly beneficial outcomes. In particular, children's sense of political agency has been supported through the act of advocating for sustainable alternatives through the visual and performing arts (Haynes and Tanner 2015; Osnes 2017; Rooney-Varga et al. 2014; Stratford and Low 2015). In a 2-year study, Stratford and Low (2015) collaborated with children in an arts-based initiative called A Map of a Dream of the Future, in which children and adults co-created climate change educational materials and an art installation. As the authors explained, "children are more readily attuned to their political impulses when supported in creative, participatory engagements" (p. 167).
Sustainability Science (2020) 15:1067–1085

In another long-term study, called Climate Change and Me, 9- to 14-year-olds, as co-researchers and co-artists, developed an online social media platform and network, organized community exhibitions, designed an interdisciplinary curriculum, and held a community event that brought together artists, scientists, and schools to address challenges related to climate change (Cutter-Mackenzie-Knowles and Rousell 2018). After describing children’s creative outputs through this participatory project, the authors theorize the use of digital video as a “transitional medium” that allowed young people to experiment with new forms of “creative resistance” (p. 2) while developing their own ways of responding to climate change. Transcending its common application in STEAM-based pedagogies to date, the arts offer more than a mechanism through which to promote content learning. They can be activated to spark and sustain much-needed processes of societal change in response to sustainability challenges. As mentioned, the arts have been a key medium through which marginalized groups may raise awareness of critical issues, challenge unjust norms, and provoke the social imaginary to think and become otherwise. In the context of sustainability, art–science integration can be a portal through which to contemplate the past, confront the present, and create immediate and eventual futures. Integrating the arts into sustainability education opens the possibility of deepening children’s place-based awareness of sustainability challenges, while forming a foundation for shared meaning-making, emotional connection, and collaborative action planning in local settings. Moreover, by practicing epistemic fluidity, art–science integration represents a promising avenue for facilitating transformative sustainability learning for purposes of critical reflection and invoking “new ways of thinking and being” in the world (Sipos et al.
2008, p. 71). Despite these transformative aspects, questions remain about how to design and implement integrated art–science programming in ways that position young people as change agents in their communities.

The present research

This manuscript explores the possibilities and challenges of art–science integration in sustainability education contexts. Through the lens of two case studies, we—an interdisciplinary team with backgrounds in social and environmental sciences as well as the arts—engage with the question of how we might integrate the arts and sciences in ways that enable children and youth to envision and enact sustainable alternatives within their communities. The US case study was the first author’s dissertation research; the Haiti case study is an ongoing collaboration between all co-authors of this manuscript. The first author has backgrounds in social and community psychology as well as arts management, the second author has backgrounds in anthropology and ecology, and the third author has a background in the arts and is the Founder and Director of the arts center in Southern Haiti. Before turning to case study descriptions, we feel it is necessary to provide a broad overview of our approach to art–science integration across case studies. First, there is no widely agreed-upon definition of integration, as noted in a 2018 NASEM report that explored the practice of “intentionally integrat[ing] knowledge in the arts, humanities, physical and life sciences, social sciences, engineering, technology, mathematics, and the biomedical disciplines” in higher education contexts (Skorton and Bear 2018, p. x). In general, however, to integrate means to unite, or “to form, coordinate, or blend into a functioning or unified whole” (Merriam-Webster 2019).
In educational settings, “integration” is an umbrella term encompassing a range of integrative approaches (e.g., inter-, cross-, multi-, and transdisciplinarity), and it typically involves “merg[ing] contents and/or pedagogies traditionally occurring in one discipline with those in other disciplines in an effort to facilitate student learning” (Skorton and Bear 2018, p. 64). In the present research, integrated art–science activities merged a number of relevant science disciplines with the arts towards addressing sustainability challenges. The specific artform in both cases was digital photography guided by photovoice methodology. As a research method, photovoice is a form of participatory action research (PAR) that aims to give voice to marginalized groups in the process of gaining awareness and acting on critical societal issues (e.g., Wang et al. 2004). As an arts-based methodology, photovoice is grounded in the tradition of documentary photography. In this tradition, photographic images aim to capture real-life settings and situations as a means of “bearing witness to world events”, and photography is often—through its dissemination in galleries and museums—a “tool for social change [used] to shed light on injustice, inequality and the sidelined aspects of society” (Tate 2019). Photovoice has traditionally been used as an action-oriented research method but has been adapted for use in educational settings (Cook 2015). Modified photovoice practices are increasingly observed in environmental education settings (Derr and Simons 2019). In the present research, photovoice is both a way to strengthen children’s understanding of, and connections to, today’s sustainability challenges (i.e., “what is”) and a way to invoke their democratic imaginations (i.e., “what if?”) about a more sustainable tomorrow through present-day projects.
Contrasting sharply with the trend of instrumentalizing the arts for knowledge acquisition, photovoice methodology instrumentalizes the camera as a machine for generating social change through critical engagement with present-day realities. Put differently, youths’ photography in this sustainability education context invoked “alternative ways of seeing, recording and understanding the events and situations that shape the world in which we live”, opening up possibilities for envisioning alternative futures (Tate 2019).

Case studies in art–science integration for collaborative sustainability action

The following sections introduce a pair of case studies in art–science integration for collaborative sustainability action: (1) a multi-site research study and after-school program for climate education and action in collaboration with children in the Western US; and (2) a multi-cycle research study and youth-focused course for environmental photography and water advocacy in Southern Haiti. Both projects took place in non-formal learning contexts (i.e., community organizations), and combined transdisciplinary learning with digital photography as an integrated art–science platform for collaborative sustainability action in local settings (see Table 1). Despite these shared characteristics, the case studies diverge in important ways relative to the sustainability challenges they sought to address, the specific context in which learning and action took place, and the manner in which art–science integration was practiced. The design of each case study was guided by iterations of the methodological framework introduced below, and the implementation of each case study, in turn, contributed to framework development.

Table 1 Case study overview: participant, research team, and program characteristics

Participants — Science, Camera, Action! (US): Children (ages 10–12). Photo Environment (Haiti): Cycle 1: youths (ages 8–14); Cycle 2: youth and young adult participants (up to 21 years)
Context — US: Western US; national non-profit (Boys and Girls Clubs). Haiti: Southern Haiti; community organization (Arts Center)
Program design — US: After-school program (three sites). Haiti: Arts center course (two cycles)
Duration — US: 15 weeks (one 1-hour-long session per week). Haiti: Cycle 1: 15 weeks (1 session/week); Cycle 2: 8 weeks (2–3 sessions/week)
Program team disciplinary backgrounds — US: Psychology, Arts Management, Engineering, Communications, and Early Childhood Education (a). Haiti: Psychology, Arts Management, Ecology, Anthropology, Fine Arts, and Film
Sustainability challenge — US: Climate change. Haiti: Water security
Science components — US: Hands-on activities connecting the topics of climate change, ecosystems, and community-based sustainability action (e.g., Greenhouse Gas Tag; Energy Bingo) (b). Haiti: Water literacy education, including water-testing kits and day-trips to local water sources (e.g., rivers, streams), with a focus on hydrology, water management, and clean water access
Arts components — US: Digital photography and hand drawings inspired by children’s personal and place-based connections to program activities; provided the inspiration for youth-led community action projects. Haiti: Digital photography and documentary film-making to capture the beauty and necessity of (and challenges related to) water in Southern Haiti; provided the raw material for community events
Social change outputs — US: Photo gallery exhibition; website; city council presentation; tree-planting campaign; community garden. Haiti: Photo gallery exhibition; community-led water-testing analysis; documentary films

(a) Educational backgrounds refer to the first author and five undergraduate research assistants who helped facilitate the program in 2016
(b) For a more detailed description of program activities, see Trott (2019a)

Fig. 1 Methodological framework for transdisciplinary learning and collaborative sustainability action through the integration of art and science

Table 2 Conceptual framework elements: definitions and applications across contexts

Transdisciplinary learning (“What Is”) — Definition: An approach to curriculum integration which dissolves the boundaries between the conventional disciplines and organizes teaching and learning around the construction of meaning in the context of real-world problems or themes (IBE-UNESCO 2019); focused on understanding sustainability challenges. US context: Integrated art–science programming combining climate literacy and digital photography; exploring the scientific and social dimensions of climate change in the Mountain Western region of the US. Haitian context: Integrated art–science programming combining water literacy and photography instruction; exploring water systems, the necessity of clean water access, and threats to water security in the Southern region of Haiti.

Participatory process (“What If?”) — Definition: A collaborative approach to research, education, and action that brings researchers and participants together to identify, examine, and address problems in community settings (Hall 1981; Kindon et al. 2007); focused on critical engagement with non-sustainable present realities and social change planning. US context: Monthly photovoice discussions around personal and place-based connections to climate change; photography-inspired visioning and action planning using consensus process. Haitian context: Regular excursions to water sources both to conduct community-led water-testing as well as to film and capture photographic images intended to spark social change.

Collaborative action — Definition: An approach to community action that involves working together for societal transformation to sustainability (e.g., alterations to cultural symbols, rules of behavior, social organizations, or value systems); focused on actively generating sustainable alternatives within local settings. US context: Youth-led projects involving collaborative tree-planting and community gardening; collaborative event planning and awareness-raising; group presentation to policymakers. Haitian context: Two cycles of collaborative event planning and community photo exhibitions featuring results of water-testing; documentary film-making and community dissemination.
As such, the divergent aspects of the case studies (i.e., geographic region, sustainability challenge, integrated art–science practices) contributed to a methodological framework that is more versatile than if developed from a single case. Moreover, these divergent aspects allowed for a deeper exploration of the possibilities and challenges of art–science integration in different contexts. In the following sections, we introduce the methodological framework, followed by detailed case study descriptions. Both studies were approved by the institutional review boards at the universities where the research took place. Figure 1 presents a methodological framework for transdisciplinary learning and collaborative sustainability action through the integration of art and science. Specifically, the framework combines: (1) transdisciplinary learning, centered on real-world problems, with (2) participatory process, facilitating learners’ personal connections to sustainability challenges, and culminates in (3) collaborative action for sustainability, allowing learners to both envision and enact alternative futures. In the US and Haiti, program activities were shaped by the ‘Head, Hands, and Heart’ model for transformative sustainability learning (Sipos et al. 2008) and inspired by photovoice methodology for purposes of sustainability learning, place-based inquiry and connection, and social change (Cook 2015). For definitions of each framework element as well as brief overviews of their specific applications across case study contexts, see Table 2.

Child-led climate action in the Western US

The after-school program Science, Camera, Action! was designed to engage 10- to 12-year-olds in place-based climate change education and collaborative, community-based climate action over a period of 15 weeks in the spring of 2016. Altogether, 55 children participated in the program across three Boys and Girls Clubs (BGC) located in the Western US.
Together, the three participating Clubs represented a regional division of the BGC and were located within a 20-mile radius of the research university. The BGC is one of the largest youth-serving non-profit organizations in the US, currently with over 4600 Club locations (BGC 2019). Founded in 1860, the BGC serves over four million youth every year, most of whom live in poverty (BGC 2019). The university–community partnership on which this research is based was established through a pre-existing relationship between local BGC staff and a Center at the nearby research university, which focused on STEM education and outreach. In particular, the regional BGC was looking to incorporate STEAM programming into its after-school services. Following a series of meetings in late 2015, the BGC welcomed Science, Camera, Action! into all three of its regional Clubs. These BGCs varied in size and the nature of their programming—from small and informal to large and highly coordinated. The mission of the BGC is “to enable all young people, especially those who need us most, to reach their full potential as productive, caring, responsible citizens” (BGC 2019). The program was mission-aligned through its engagement of local children in transdisciplinary learning and youth-led collaborative action for sustainability. The Western state where this research took place is among the majority of US states not to require anthropogenic climate change to be taught in the classroom (National Science Teachers Association [NSTA] 2019).
Among the only guidelines related to climate change in the state’s plan, eighth grade teachers of Earth systems science are encouraged to cover “inquiry questions” about climate change such as, “What evidence supports and/or contradicts human influence on climate change?” and, “How has Earth’s climate changed over time?” What this means for 10- to 12-year-olds (i.e., in fourth to seventh grade) is that, despite likely exposure to climate change information elsewhere in their lives (e.g., news, movies, zoos; Kelsey and Armstrong 2012), they may not learn about these topics in the classroom until their teenage years, and even then, in ways that may or may not align with established research. As such, there was a need, locally, not only to provide young people with access to evidence-based information about climate change, but also to create empowering learning environments where children are invited to engage with complex sustainability topics, make sense of them on their own terms, and act towards sustainability in ways that align with their visions for alternative futures. The program’s name, Science, Camera, Action!, is a play on the popular film director phrase “Lights, camera, action!” and was chosen because program activities closely aligned with each of these domains. The program’s initial activities—under the category “Science”—consisted of various hands-on activities and games focused on important social and scientific dimensions of climate change (see Trott 2019a). For example, activities that sometimes mirrored common children’s games focused on how the greenhouse effect works (i.e., Greenhouse Gas Tag), what kinds of local and global ecosystem impacts to expect under a changing climate (i.e., Oh Deer! and Glaciers: Then and Now), what daily behaviors relate to climate change (i.e., Energy Bingo), and how kids have worked together in other places around the globe to collaboratively make a difference (i.e., Young Voices for the Planet videos).
The final two activities were paired with discussions about how we can take meaningful steps towards enacting sustainability, both individually and as a group. These activities—running the gamut from explaining “what is” to asking “what if?”—were integrated with the program’s “Camera” component. Guided by photovoice methodology, children were asked to use program-provided digital cameras to capture images expressing their thoughts and feelings about each program topic. Approximately half of US program participants received introductory training in photographic techniques (e.g., composition) through a field trip and photo excursion at a local ranch. Photographs were intended to facilitate children’s affective and intuitive connection-making between program activities, their everyday lives, and their real-world hopes and concerns, forming the basis for group discussion and action project planning. When children forgot to take photographs or bring their camera to the program, they were asked to draw their ideas on paper. The content of children’s photographs and drawings varied widely and included, for example, family members (e.g., a baby representing the “next generation”), animals (e.g., cows representing methane emissions; pets), and landscapes (e.g., a beautiful sunset; pollution). In these ways, children’s photographs represented the people and places that children cared deeply about and wanted to protect, as well as perceived environmental threats. Three times throughout the program, children participated in photovoice group discussions in which they narrated their printed photographs or drawings according to the SHOWeD method (see footnote 7), which asks children a series of questions ranging from “What do you See here?” (i.e., “what is”) to “What can we Do about it?” (i.e., “what if?”; Cook and Quigley 2013; Wang et al. 2004).
Finally, the children were supported in designing and implementing family- and community-focused “Action” projects, inspired by program activities and their own digital photography, to address climate change in ways that aligned with their interests and visions around sustainability. Specifically, children at each research site viewed a compilation of their own favorite photographs generated over a period of 10 weeks, then brainstormed action ideas by writing down any collaborative project idea that came to mind. Program facilitators captured all ideas and condensed them into thematic groupings, which were presented the following week to children at each of the three research sites. Children then discussed their favorite action project ideas and, at two research sites, chose to combine a pair of popular ideas for action. Children’s action projects consisted of tree-planting and community gardening as well as awareness-raising with local adults (e.g., family and community members via a photo gallery exhibition and website) and policymakers (e.g., via a city council presentation). Most action projects were completed within the 15-week program period, but the children who established a community garden formed a summer garden club and continued working together for an additional few months. As described elsewhere, Science, Camera, Action! was among the most popular non-sport programs the local BGC Activities Director had seen across the BGC region, and children’s enthusiastic engagement was apparent to program facilitators from week to week and across research sites (see Trott 2019a). For more detailed descriptions of program activities, see Trott 2017; for more detailed descriptions of children’s action projects, see Trott 2019a.
In Science, Camera, Action!, transdisciplinary learning took place through centering the real-world problem of climate change, and allowing children to explore this complex, multi-faceted problem among a disciplinarily diverse group of program facilitators (see Table 1) and through hands-on activities based in diverse disciplinary traditions, including various science (e.g., ecology, physics, atmospheric science) and social science (e.g., psychology) disciplines, as well as through art (i.e., photography) and art-inspired action (i.e., community projects). Importantly, children’s action projects were shaped by transdisciplinary learning combined with participatory process through art–science integration: children’s photography offered a way for them to make sense of climate change in relation to their own lives and the things they care about, to share in this sense-making process with each other, and to act on the problem according to their own present-day concerns and aspirations for the future. At two of three research sites, children’s photographs became a primary mode of communicating about climate change with adults—first, in a public presentation during which children advocated for swift action on climate change with local policymakers, and second, through a photography exhibition during which children explained their photographs to local adults as a way to encourage climate change awareness and action. In these ways, children’s photographs became “artistic boundary objects” (Rathwell and Armitage 2016)—a way of translating meaning “across social worlds” to generate social change (Star and Griesemer 1989, p. 388).

Footnote 7: According to the SHOWeD method, photovoice discussions explored the specific content of children’s photographs as well as what can be done about problems depicted. In these ways, photovoice discussions prompted children’s “what is” and “what if?” thinking.
The following is a list of SHOWeD question prompts, in order: “What do you See here? What is really happening? In other words, what may not be clear about your photo but is important for you to explain? How does this relate to Our lives? Had you thought about this connection before? Why does this problem or strength exist? What can we Do about it? What are the challenges? What are the opportunities?” (Cook and Quigley 2013; Wang et al. 2004).

Youth-led clean water advocacy in Southern Haiti

One year after Science, Camera, Action! ended, in spring 2017, a new university–community partnership formed between the lead author and the Director (and founder) of a community arts organization in Southern Haiti dedicated to creating “a space for all people to learn, explore, develop, and express a voice through creative mediums.” The Director was looking to incorporate STEAM into their otherwise arts-focused programming, due in part to the fact that many schools in the area covered only theoretical science content, limiting students’ practical and lab-based experience. The Photo Environment course was born out of a conversation about the integrated art–science framework used in Science, Camera, Action!, specifically its combination of STEM and photography for local action. In particular, the combination of photography with environmental awareness and action aligned seamlessly with the interests and ongoing efforts of an arts center educator who had been filming a documentary about access to clean water in the region. It is difficult to overstate the seriousness of the numerous challenges faced by Haitian citizens when tasked with acquiring clean, safe, and reliable water.
Roughly two-thirds of people living in this area of Southern Haiti live in absolute poverty (i.e., less than one dollar a day), and with its main livelihoods comprising non-irrigated maize cultivation, at-home charcoal manufacture, and, in the highlands, banana production, the area faces significant exposure to climate-driven hazards such as flooding, erosion, landslides, and severe ecological and social drought (ECVMAS 2012). Although not impacted as severely by the 2010 earthquake as its neighbors to the north, the area nevertheless continues to face challenges in providing basic services, as well as multiple long-standing challenges to local water security, including contamination of river water from agricultural runoff, sewage, and refuse; fresh-water salinization resulting from seawater infiltration; and limited water supply during the area’s December to March dry season. As of 2012, just over half the population had access to an “improved” source of water (e.g., underground pumps or gravity-fed wells), and less than a quarter had access to improved sanitation (ECVMAS 2012). Despite these challenges, residents of Southern Haiti have shown an avid engagement in efforts to better manage their community’s common pool resources, with numerous local programs currently underway to address a variety of social and ecological issues, ranging from anti-discrimination and anti-violence campaigns to deforestation prevention and community garbage clean-up and management. In this, they draw upon a long-standing local history of activism and artistic engagement with social issues, as well as a deep desire to develop local capacities to such a level that continued reliance upon international NGO-driven service provisioning is no longer necessary. The community arts center and research partner, for example, is an organization whose mission is rooted in social justice, radical inclusivity, and the creative arts.
The arts center course, Photo Environment, was designed to engage local Haitian youth in place-based education about photography and local water systems as a platform for local action. The multi-cycle course has taken place twice—once in the fall of 2017 and again in the summer of 2018. The first cycle was open only to youth participants (ages 8–14), but—based on the enthusiastic requests of community members and arts center educators—the course expanded to include youth and adult participants in the second cycle. To date, 21 students (i.e., 19 youth; two adults) have participated in the course. Participants have enjoyed the course, and a third cycle is planned for spring 2020. Educational program content integrated science topics (i.e., water systems, quality, and management) with the arts (i.e., digital photography), and emphasized the transformative potential of creative expression to generate positive social change. Educational activities focused on: (1) using digital cameras to capture and communicate ideas and issues; (2) generating shared sense-making around the sources, critical importance, and local problems related to water (i.e., pollution; policy) in Southern Haiti; and (3) creating a platform for community action towards a more sustainable future. As part of the course, students took day-trips to regional water sources (e.g., rivers; streams; open community cisterns), photographed water resources and problems, and collected water samples for testing. Throughout the program, students discussed the content of their photographs in terms of the beauty of the region, its challenges related to climate change and pollution, and what could be different. For many students, the water-testing component was their first experience with science education that allowed them to gather evidence, develop hypotheses, and analyze results with the goal of answering pertinent local natural resource questions.
Building on these processes of data gathering and shared sense-making around issues of water security, over both cycles of the course, students organized and held photography exhibition events featuring their own local water-focused photography and the results of water testing. After the students processed the water-testing kits and analyzed the results, water quality analyses were presented to local community members and policymakers at the events. The students also introduced and screened brief documentary-style films describing the course and featuring the voices and actions of youth and educators. Together, the photography, film, and water quality reports were intended to raise awareness and spur further sustainability action. Events over both cycles were attended by local Haitians, UN officials, missionaries, NGO workers, students, and their families. Following the first exhibition, school directors, NGOs, and orphanages began visiting the community arts center to learn more about its programming and to identify opportunities for partnership. During cycle 2, students and educators began alerting neighborhood residents in areas where filters for community water sources were not effective and water was found to be polluted based on testing. In cycle 3, students will engage in local water-testing combined with community mapping to generate a shared community resource to identify clean water access points and sites of extreme pollution. In the Photo Environment course, transdisciplinary learning took place through centering the real-world problem of water security, and inviting youth to engage with the topic through the lens of art (i.e., documentary photography), action (i.e., community awareness-raising), and various science (e.g., hydrology) and interdisciplinary (e.g., ecology) fields.
Importantly, the course combined formal photography instruction (e.g., proper lighting techniques) and Western science (i.e., water testing kits) with participatory process. Photography and local action opportunities offered a way for youth to affectively connect with, capture, and communicate the importance of clean water access on their own terms and according to their own visions for community change towards a sustainable future. An important difference between the US and Haiti case studies was the manner in which art–science integration was practiced. In the US, place-based “Science” activities were the primary platform for shared understanding and enthusiasm related to local sustainability challenges (i.e., climate change), whereas the arts (i.e., photography) became children’s ‘access route’ to deeper connection with the science-framed subject matter. In Haiti, youths’ photography was the primary platform for shared understanding and enthusiasm related to local sustainability challenges (i.e., water security), whereas science (e.g., water quality testing from local sources) became youths’ ‘access route’ to deeper connection with the art-framed subject matter. In light of this key distinction, Fig. 2 presents a cursory spectrum of art–science integration, ranging from art- to science-centric integration. A strength of the methodological framework—and a benefit of facilitating art–science integration in divergent contexts—is exploring its versatility along this spectrum. The US case was closer, in practice, to the kinds of “STEAM-based” art–science arrangements that center science learning, whereas the Haiti case was closer to the kinds of arts-centered activism that have the capacity to mobilize the sciences in the service of social change. Across both cases, however, art–science integration was a platform for transdisciplinary learning, participatory process, and collaborative action for sustainability.
Discussion

This manuscript has discussed how art–science integration can be applied in ways that enable community members to envision and enact sustainable futures within local contexts. Case studies in the US and Haiti offer variations on art–science integration through an action-oriented methodological framework that combines ‘top-down’ as well as ‘bottom-up’ processes to invite young people to learn about, connect with, and act on sustainability challenges in locally meaningful ways. The transformative potential of these alternative future-making pedagogies is discussed in the following sections, organized by the primary components of the framework on which each collaboration was based (Fig. 1): transdisciplinary learning, participatory process, and collaborative sustainability action. The collaborative research context within which case studies took place is also examined for its transformative potential, framed by key dimensions of transdisciplinary research. In pedagogical and research terms, to be transformative requires “a shift or a switch to a new way of being and seeing”, not just for young people and ‘non-academics,’ but for the researchers themselves (Wals 2011, p. 181; O’Sullivan 1999).

Fig. 2 Spectrum of art–science integration. The center of the figure represents the kinds of art–science integration in which neither the arts nor sciences are considered primary, central, or elevated compared to the other. The far left and far right represent the kinds of art–science integration in which one component is central and the other is mostly peripheral—hence the greater total surface area. The intermediary circles represent infinite iterations of art–science integration that blend the arts and sciences to a greater degree compared to the poles. (a) Relative positioning of the US case study. (b) Relative positioning of the Haiti case study
From a sustainability perspective, realizing the transformative potential of alternative pedagogical and research practices means opening up new possibilities for building a sustainable future.

Transdisciplinary learning: confronting complexity

The methodological framework introduced in this manuscript ‘begins’ with transdisciplinary learning. A transdisciplinary approach to learning calls for centering the problem, rather than the discipline, in the process of acquiring knowledge. Moreover, transdisciplinary learning—like similar approaches (e.g., transformative learning; problem-based learning; Boström et al. 2018; Mezirow 1997; O’Sullivan 1999; Savery 2006)—considers learning as “more than merely knowledge based” (Wals 2011, p. 180). In the methodological framework, transdisciplinary learning is represented as a ‘top-down’ process because, in practice, this component served to provide young people with a broad awareness of sustainability challenges, immersing them in a ‘big picture’ view of local environmental problems. Transdisciplinary learning was a way of introducing the problem (i.e., ‘what is’) and establishing a sense of shared understanding around present-day and projected future sustainability challenges. Doing so drew from research-based knowledge within relevant disciplines and became a starting point for young people’s substantive participation in decision-making and action.

As indicated by the prefix ‘trans’, transdisciplinary pedagogies go beyond teaching according to specific disciplines (i.e., school subjects) and often embrace epistemological fluidity. The need for transdisciplinarity is tied to the need to address problems that are “real, complex, socially relevant … [and], which ask for the integration of knowledge of science and society” (Scholz et al. 2016, p. 231). Most of these problems, such as climate change and clean water access, fall within the domain of sustainability.
The case studies presented in this manuscript practiced transdisciplinarity both by ‘starting from’ local sustainability challenges (not disciplinary subjects) as well as by integrating the arts and sciences in the process of learning. Transdisciplinary learning has been advanced as a critical pathway towards envisioning sustainable futures with regard to both climate change and water access (e.g., Schneider and Rist 2014).

A challenge related to transdisciplinary learning in the present research was the way in which art–science integration was taken up by children and youth participants across case study contexts. In the US, children were at first confused about the photography component. Children across research sites asked questions like, “What should I photograph?” and seemed to perceive photovoice as a homework assignment that could receive a poor grade. Unfortunately, telling children that “there is no such thing as a wrong photograph” and that the content of photographs was to be “about what you think and how you feel” did not always clarify the task. By the third photovoice session, after seeing examples and experiencing photovoice discussions, children seemed to understand and enjoy the process more fully. In the Haitian context, there was less confusion about the content of photographs. From the outset, Haitian youth received guidance on how to take a good photograph, and they understood the task of photographing the beauty and challenges of their region. A possible explanation for this distinction is that the Haitian program started with the artistic process, whereas the US program started with hands-on science activities. It is possible that, in the US, children became epistemically ‘stuck’ in rationalist scientific thinking modes (i.e., focused on observation and evidence-gathering) as well as traditional expectations of school science (i.e., the right answers), which colored their interpretations of photovoice.
Another possible explanation for this distinction is the divergent properties of the sustainability challenges addressed in each case study. Specifically, climate change takes place on a global scale and entails relatively less visible (atmospheric) processes, whereas water security challenges occur more locally and are more readily perceptible in the physical environment; their traces are more visible and tangible. For the photovoice process, this would require climate change-related photography to be more conceptual in nature—a higher-level task for young people. Given both of these distinctions, a recommendation for integrated art–science programming following this methodological framework and/or using variations of the photovoice method would be to emphasize, from the outset, photography as a mode of creative expression with the capacity to convey an idea, message, or emotion related to sustainability challenges. Doing so would also support learners’ epistemological fluidity, in particular their sensibilities related to ‘more-than-rational’ ways of knowing (Colucci-Gray et al. 2013; Kagan 2011). Moreover, for climate change-related programming, it is important to employ place-based approaches that ground this complex, global phenomenon in local-scale manifestations of transformation, allowing learners to draw these linkages and ‘see’ climate change in their world.

Attempts to conceptualize transdisciplinarity—both in research and teaching—have often centered on a set of shared properties or attributes, rather than a single concrete definition. In outlining the “shared aims” of transdisciplinarity, Lawrence and Després (2004) assert that “transdisciplinarity tackles complexity in science and it challenges knowledge fragmentation” (p. 399).
These features of transdisciplinarity are inherent not only in the pedagogical practices applied in the US and Haiti case studies, but also in the broader research environment within which each case study took place. In both case studies, notoriously complex environmental problems (i.e., climate change; water management) became the ‘subject’ of each collaboration, and the learning process drew from a broad array of epistemological and disciplinary lenses. Moreover, this research brought together researchers and educators with backgrounds in psychology, anthropology, ecology, and the arts. By transcending disciplinary norms and focusing on problems, transdisciplinary teams are:

…able to explore the limits and blind spots of their different disciplinary epistemologies, with each discipline undergoing modification to take on insights from the other fields. In this way, they create new knowledge, tools, and perspectives that differ from the foundation and approach offered by any one discipline. (Skorton and Bear 2018, p. 69)

The transformative potential of these modes lies not only in transcending boundaries of epistemology (i.e., art–science integration) and discipline (i.e., research collaboration), but also in traversing borders of campus (i.e., university–community partnership), and nation (i.e., international collaboration).

Participatory process: attending to context

In the methodological framework introduced in this manuscript, transdisciplinary learning is combined with participatory process. By inviting the full participation of all involved, participatory approaches seek to break down established hierarchies in traditional approaches to teaching and research.
Put differently, participatory modes blur the boundaries between the categories ‘researcher’ and ‘the researched,’ ‘educator’ and ‘the educated’ by positioning everyone involved as a capable contributor to the process of acquiring and generating knowledge. In the methodological framework, the participatory approach is represented as a ‘bottom-up’ process because it enabled the full and active participation of young people in making sense of sustainability challenges on their own terms in local settings, rather than relying solely on the unidirectional ‘transmission’ of research-based knowledge (Sterling 2001; Wals 2011). Participatory process, through digital photography, became a way to invite children and youth to make sense of complex problems from their own position and point of view, serving to anchor abstract sustainability concepts in tangible objects, concrete observations, and real-world relationships (Trott 2019b). Moreover, participatory process urged young people to draw on their own knowledge and life experiences to make cognitive as well as affective connections to sustainability challenges, to imagine alternatives to the non-sustainable status quo (i.e., ‘what if?’), and then to act on those visions (Trott 2019a). Whereas transdisciplinary learning became a platform for youths’ shared understanding around the problem, participatory process became a platform for shared meaning-making around potential solutions.

The participatory component of each case study was guided by photovoice, which is a participatory research method. As occurred in each case study, photovoice exhibitions offered a way to showcase community strengths and concerns, while advocating for social and policy change. Across case studies, photography was engaged both as a learning tool for sustainability as well as an artform—a way to rethink present realities as well as envision and enact alternatives (Trott 2019a).
From a pedagogical perspective, photography served as a bridge between epistemologies—bringing together research- and experience-based knowledge, projected and imagined futures, science and art. From a research standpoint, visual methods (e.g., photovoice, hand drawings) offer the added benefit of cultivating within young people a sense of agency in the research process (Johnson et al. 2012). In the present research, photography was a translational means through which children were able to channel and communicate their deepest concerns and greatest hopes for the future. Both case studies engaged with the transformative potential of the arts to challenge unsustainable norms and demand social change.

A notable challenge limiting the social change potential of art–science integration across case studies is that neither was genuinely participatory from the outset. A key component of fully participatory engagements is that they allow for the process of problem identification to lie primarily with community partners, in this case, children and youth. Instead, in each case study, we—as researchers—defined the problem (i.e., sustainability challenge) in advance. In some ways, this is a classic challenge of any university–community partnership practicing participatory modes and requiring institutional review board approval and/or funding. That is, to write the proposal, a description of the project’s aims and objectives—usually including a literature review and statement of the project’s main contributions—is required in advance. On the one hand, the need for a problem definition at the outset of a project is consistent with transdisciplinary learning (Quigley et al. 2017). On the other, defining the problem in advance limits the full participation of community partners (Kindon et al. 2007).
In both case studies, however, a ‘middle way’ was to define the broad sustainability challenge (e.g., water; climate change) in advance, but not the collaborative action projects designed and implemented by the youth participants. Rather, we emphasized in proposals the need for community engagement in addressing sustainability challenges at the local level, and the ways in which participatory research could yield increasingly meaningful outputs yet to be determined. Then, in both settings, specific, youth-defined facets of the problem (e.g., lack of community awareness; the need for new policies) became a platform for youth-led action—allowing them a sense of ownership of the problem and its solutions in their communities.

Participatory process, as practiced in the present case studies through photovoice, also maps onto the shared aims of transdisciplinary research advanced by Lawrence and Després (2004). According to them, a second key dimension of transdisciplinary research is that it “accepts local contexts and uncertainty; it is a context-specific negotiation of knowledge” (p. 399). This feature of transdisciplinarity is again apparent both at the level of each case study as well as in the broader research context. First, participatory process is, by definition, a way of attending to local context. During each case study, children and youth brought their own knowledge, values, and life experience into conversation with larger sustainability concepts and principles. In the broader research context, the way art–science integration took shape in the US and Haiti was in many ways an outcome of context-specificity. In the US—specifically BGC—context, integrated art–science programming meant infusing the arts into STEM, and framing sustainability challenges primarily in science terms.
In Haiti—and specifically the community arts organization—integration meant infusing STEM into art-focused programming and conveying sustainability challenges primarily through the arts. In some ways, the nature of art–science integration practiced in each setting was grounded in the broader regional context. Whereas the US region is widely known for its climate science, the Haitian region is widely known for its arts and culture. What this meant for the practice of art–science integration in each setting was that, in the US, children explained climate change through the stories of their photographs, whereas young Haitians told the stories of their photographs with reference to the science. Across settings, photography became a translational medium for young people to communicate their knowledge, feelings, and hopes for the future (Star and Griesemer 1989). Importantly, children’s photography prompted social change action.

Collaborative action: generating sustainability

The final component in the methodological framework presented in this manuscript is collaborative action. In the overlapping space at the center of the framework figure (Fig. 1), action represents the culmination of transdisciplinary learning (i.e., contemplating ‘what is’) and participatory process (i.e., imagining ‘what if?’). Across case studies, this combination of ‘top-down’ and ‘bottom-up’ processes offered children and youth a firm foundation upon which to collaboratively act on their knowledge, values, and concerns to attempt to change the situation for the better. In the US, this process took the shape of planting trees and establishing local gardens as well as holding public events to advocate with adults and policymakers around the importance of climate action. In Haiti, young people alerted local residents about threats to their local water supply and held public events to raise awareness, communicate their concerns, and advocate for change.
In these ways, both case studies facilitated critical engagement with present realities as a way to think about, imagine, advocate for, and enact alternative futures. By combining transdisciplinary learning, participatory process, and collaborative action, this methodological framework activates all components of the “Head, Hands, and Heart” model of transformative sustainability learning (Sipos et al. 2008). Specifically, the integrated art–science pedagogies exhibited in each case study—despite some unnecessary distinctions and obvious domain overlaps—combined youths’ cognitive engagement (i.e., head) through transdisciplinary learning, with affective enablement (i.e., heart) through participatory process, and resulted in behavioral enactment for sustainability (i.e., hands) through youth-led projects (Sipos et al. 2008; Trott 2019a). In the context of sustainability, taking collaborative action can be a way for young people to develop a sense of agency in the face of seemingly insurmountable challenges, which is critical to their sustained interest and engagement (Ojala 2016; Riemer et al. 2014; Trott 2019a, b).

In this methodological framework, collaborative action was an outcome of participatory processes grounded in the tradition of action research. Action research traditions, including PAR and likeminded variations (e.g., community-based participatory action research; transdisciplinary action research; critical utopian action research; systemic action research), are simultaneously dedicated to research and social change (Gayá and Brydon-Miller 2017; Kindon et al. 2007; Stokols 2006). For example, prefigurative action research—an orientation that was influential in both case studies—has been described as an approach to research that aims to study something in order to change it (Kagan and Burton 2000). Across case studies, transdisciplinary learning and participatory process were intended to form a platform for youth action.
Specifically, through art–science integration, educational activities and digital photography became the medium through which young people developed shared understanding around local sustainability challenges. A third characteristic of transdisciplinarity, offered by Lawrence and Després (2004), is “intercommunicative action”, which refers to a process of negotiation and “making sense together” (Klein 2004, p. 521) in order to achieve mutual understanding. Achieving this form of intersubjectivity is not only the basis for further action in transdisciplinary research but is itself a form of action necessary for addressing sustainability challenges. Art–science integration, in both case studies, became the foundation upon which youth-led collaborative action plans were constructed.

A challenge related to art–science integration for collaborative sustainability action, across case studies, was that photography can sometimes feel limited to capturing present-day realities rather than facilitating children’s abilities to envision alternative futures. Indeed, the tradition of documentary photography is grounded in capturing real-life settings and situations as they occur. Nevertheless, documentary photography is often a means to spark critical dialogue and prompt social change and, hence, is tied to the creation of alternative futures. Because photovoice is increasingly used in action-oriented environmental and sustainability education settings, some scholars have noted this limitation and are modifying the method towards ‘opening up’ the ways in which children may express their democratic imaginations (Derr and Simons 2019). For example, CreativeVoice—which expands modes of artistic expression to include drawings, paintings, film, etc.—better allows participants to “move beyond capturing present-day realities” and make connections to the past and to desired futures (Rivera Lopez et al. 2018, p. 1778).
We agree that additional artforms would indeed expand the forms of connection-making and visioning available to youth participants, while standing by more traditional forms of photovoice process to facilitate critical engagement with present realities and prompt collaborative action for social change in local settings (Wang et al. 2004). Expanding the artforms available to learners may also assist in overcoming challenges related to visually ‘capturing’ climate change-related phenomena through photography, as occurred in the US case study. Doing so may also help with engaging a younger audience who may struggle with photography—but thrive with drawing—as a way to communicate their thoughts, feelings, and hopes for the future.

The fourth and final dimension of transdisciplinarity noted by Lawrence and Després (2004) is an action orientation in research. According to them, “transdisciplinary contributions frequently deal with real-world topics and generate knowledge that not only address societal problems but also contribute to their solution” (p. 399). Implied in this crucial dimension is that, in transdisciplinarity, there is a blurring of boundaries, not only between disciplines, but also “between theoretical development and professional practice.” This action orientation is a critical feature not only in each case study context, but in the context of sustainability research more broadly. In recent years, sustainability scholars have been called upon to break from traditional modes of research that are “descriptive-analytic” (i.e., problem-focused) and begin to practice a “solution-oriented approach” that gives greater attention to identifying viable pathways to a sustainable future (Wiek et al. 2012, p. 6). In theorizing transformational sustainability research, Wiek et al.
(2012) anticipate a new role for scientists in which:

...scientists no longer “only” analyze sustainability issues, but, rather, need to immerse themselves into decision processes that are embedded in societal transition processes and build socially robust knowledge—with necessary changes in research modes, incentive structures, and reward systems. (p. 7)

As implied above, engaging with multiple stakeholders is a necessity in sustainability research and in addressing sustainability challenges more generally. The deliberate inclusion of non-scientists (e.g., young people) as well as “non-strictly scientific forms of knowledge” (e.g., the arts) has been proposed as a “criterion for quality in sustainability science” (Colucci-Gray et al. 2013, p. 136; Ziegler and Ott 2011). Across case studies, children and youth acquired, transmitted, and generated knowledge about local sustainability challenges, acting as change agents for the sustainable transformation of their communities. In these ways, the present research prefigures a sustainable future, while acting towards it—through art–science integration—in collaboration with young people.

Conclusions

In the spirit of transdisciplinarity, this manuscript has taken global sustainability crises as its central issues, and—towards building a sustainable future through action-oriented research—has mobilized numerous disciplinary approaches to identify viable paths forward. It has traced new linkages, both in theory and in practice, between the already-interdisciplinary literatures of art–science integration, alternative future-making pedagogies, and youth-led sustainability action. The action-oriented methodological framework introduced in this manuscript takes seriously the role of young people in learning about, connecting with, and acting on sustainability challenges where they live.
Case studies make clear that young people are not only capable of, but passionate about protecting their future through social and environmental action—and art–science integration is a promising pathway towards facilitating their meaningful participation.

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

References

Ballantyne R, Connell S, Fien J (1998) Students as catalysts of environmental change: a framework for researching intergenerational influence through environmental education. Environ Educ Res 4(3):285–298. https://doi.org/10.1080/13504620600942972
BGC (2019) Boys and Girls Clubs of America 2018 annual report. https://www.bgca.org/. Accessed 15 Jun 2019
Birdsall S (2013) Reconstructing the relationship between science and education for sustainability: a proposed framework of learning. Int J Environ Sci Educ 8(3):451–478.
https://doi.org/10.12973/ijese.2013.214a
Boström M, Andersson E, Berg M, Gustafsson K, Gustavsson E, Hysing E, Lidskog R, Löfmarck E, Ojala M, Olsson J, Singleton B (2018) Conditions for transformative learning for sustainable development: a theoretical review and approach. Sustain 10(12):4479. https://doi.org/10.3390/su10124479
Bourn D, Hunt F, Bamber P (2017) A review of Education for Sustainable Development and Global Citizenship Education in teacher education. Paper commissioned for the 2017/8 Global Education Monitoring Report. https://unesdoc.unesco.org/in/rest/annotationSVC/DownloadWatermarkedAttachment/attach_import_40e98442-c551-4ee6-a69c-4e0557f2d548?_=259566eng.pdf. Accessed 15 Jun 2019
Bush C, Cook KL (2019) Structuring STEAM inquiries: lessons learned from practice. In: Khine MS, Areepattamannil S (eds) STEAM education: theory and practice. Springer Nature, Switzerland, pp 19–24. https://doi.org/10.1007/978-3-030-04003-1_2
Colucci-Gray L, Perazzone A, Dodman M, Camino E (2013) Science education for sustainability, epistemological reflections and educational practices: from natural sciences to trans-disciplinarity. Cult Stud Sci Educ 8(1):127–183. https://doi.org/10.1007/s11422-012-9405-3
Cook JW (ed) (2019) Sustainability, human well-being, and the future of education. Springer Nature, Cham. https://doi.org/10.1007/978-3-319-78580-6
Cook K, Quigley CF (2013) Connecting to our community: utilizing photovoice as a pedagogical tool to connect college students to science. Int J Environ Sci Educ 8(2):339–357. https://doi.org/10.12973/ijese.2013.205a
Cook K (2015) Grappling with wicked problems: exploring photovoice as a decolonizing methodology in science education. Cult Stud Sci Educ 10(3):581–592. https://doi.org/10.1007/s11422-014-9613-0
Corner A, Roberts O, Chiari S, Völler S, Mayrhuber ES, Mandl S, Monson K (2015) How do young people engage with climate change?
The role of knowledge, values, message framing, and trusted communicators. Wiley Interdisc Rev Clim Change 6(5):523–534. https://doi.org/10.1002/wcc.353
Cutter-Mackenzie A, Rousell D (2019) Education for what? Shaping the field of climate change education with children and young people as co-researchers. Children’s Geogr 17(1):90–104. https://doi.org/10.1080/14733285.2018.1467556
Cutter-Mackenzie-Knowles A, Rousell D (2018) The mesh of playing, theorizing, and researching in the reality of climate change: creating the co-research playspace. In: Cutter-Mackenzie-Knowles A, Malone K, Barratt Hacking E (eds) Research handbook on childhood nature: assemblages of childhood and nature research. Springer Nature, Switzerland, pp 1–25. https://doi.org/10.1007/978-3-319-51949-4_14-1
Davis JM (2015) Young children and the environment: early education for sustainability, 2nd edn. Cambridge University Press, Port Melbourne
Derr V, Simons J (2019) A review of photovoice applications in environment, sustainability, and conservation contexts: is the method maintaining its emancipatory intents? Environ Educ Res. https://doi.org/10.1080/13504622.2019.1693511
ECVMAS (2012) Rapport Enquête sur les Conditions de Vie des Ménages Après le Séisme Haiti 2012 [Report on the survey of household living conditions after the earthquake, Haiti 2012]. https://ecvmashaiti2012.e-monsite.com/pages/presentation.html. Accessed 15 Jun 2019
Facer K (2019) Storytelling in troubled times: what is the role for educators in the deep crises of the 21st century? Literacy 53(1):3–13. https://doi.org/10.1111/lit.12176
Fisher SR (2016) Life trajectories of youth committing to climate activism. Environ Educ Res 22(2):229–247. https://doi.org/10.1080/13504622.2015.1007337
Fischer J, Manning AD, Steffen W, Rose DB, Daniell K, Felton A, Garnett S, Gilna B, Heinsohn R, Lindenmayer DB, MacDonald B (2007) Mind the sustainability gap. Trends Ecol Evol 22(12):621–624.
https://doi.org/10.1016/j.tree.2007.08.016
Foster-Fishman P, Nowell B, Deacon Z, Nievar MA, McCann P (2005) Using methods that matter: the impact of reflection, dialogue, and voice. Am J Community Psych 36(3–4):275–291. https://doi.org/10.1007/s10464-005-8626-y
Freire P (1972) Pedagogy of the oppressed. Penguin, Harmondsworth
Funtowicz SO, Ravetz JR (1993) Science for the post-normal age. Futures 25(7):739–755. https://doi.org/10.1016/0016-3287(93)90022-L
Gayá P, Brydon-Miller M (2017) Carpe the academy: dismantling higher education and prefiguring critical utopias through action research. Futures 94:34–44. https://doi.org/10.1016/j.futures.2016.10.005
Ghanbari S (2015) Learning across disciplines: a collective case study of two university programs that integrate the arts with STEM. Int J Educ Arts 16(7):1–21
Håkansson M, Kronlid DO, Östman L (2019) Searching for the political dimension in education for sustainable development: socially critical, social learning and radical democratic approaches. Environ Educ Res 25(1):6–32. https://doi.org/10.1080/13504622.2017.1408056
Hall BL (1981) Participatory research, popular knowledge and power: a personal reflection. Convergence 14(3):6–19
Hart P, Nolan K (1999) A critical analysis of research in environmental education. Stud Sci Educ 34:1–69. https://doi.org/10.1080/03057269908560148
Haynes K, Tanner TM (2015) Empowering young people and strengthening resilience: youth-centred participatory video as a tool for climate change adaptation and disaster risk reduction. Children’s Geogr 13(3):357–371. https://doi.org/10.1080/14733285.2013.848599
Hayward B (2012) Children, citizenship and environment: nurturing a democratic imagination in a changing world. Routledge, New York
Heras M, Tàbara JD (2014) Let’s play transformations! Performative methods for sustainability. Sustain Sci 9(3):379–398. https://doi.
org/10.1007/s1162 5-014-0245-9 Heras M, Tàbara JD, Meza A (2016) Performing biospheric futures with younger generations: a case in the MAB Reserve of La Sepultura. Mexico Ecol Soc 21(2):14. https ://doi.org/10.5751/ ES-08317 -21021 4 Herro D, Quigley C, Cian H (2019) The challenges of STEAM instruc- tion: lessons from the field. Action Teacher Educ 41(2):172–190. https ://doi.org/10.1080/01626 620.2018.15511 59 Hetland L (2013) Studio thinking 2: the real benefits of visual arts edu- cation. Columbia University Teachers College Press, New York Hicks D (2014) Educating for hope in troubled times: climate change and the transition to a post-carbon future. Trentham Books Lim- ited, Staffordshire Hodson D (2003) Time for action: science education for an alter- native future. Int J Sci Educ 25(6):645–670. https ://doi. org/10.1080/09500 69030 5021 https://doi.org/10.1080/13504620600942972 http://www.bgca.org/ https://doi.org/10.12973/ijese.2013.214a https://doi.org/10.12973/ijese.2013.214a https://doi.org/10.3390/su10124479 https://unesdoc.unesco.org/in/rest/annotationSVC/DownloadWatermarkedAttachment/attach_import_40e98442-c551-4ee6-a69c-4e0557f2d548?_=259566eng.pdf https://unesdoc.unesco.org/in/rest/annotationSVC/DownloadWatermarkedAttachment/attach_import_40e98442-c551-4ee6-a69c-4e0557f2d548?_=259566eng.pdf https://unesdoc.unesco.org/in/rest/annotationSVC/DownloadWatermarkedAttachment/attach_import_40e98442-c551-4ee6-a69c-4e0557f2d548?_=259566eng.pdf https://doi.org/10.1007/978-3-030-04003-1_2 https://doi.org/10.10007/s11422-012-9405-3 https://doi.org/10.10007/s11422-012-9405-3 https://doi.org/10.12973/ijese.2013.205a https://doi.org/10.12973/ijese.2013.205a https://doi.org/10.1007/s11422-014-9613-0 https://doi.org/10.1007/s11422-014-9613-0 https://doi.org/10.1002/wcc.353 https://doi.org/10.1080/14733285.2018.1467556 https://doi.org/10.1080/14733285.2018.1467556 https://doi.org/10.1007/978-3-319-51949-4_14-1 https://doi.org/10.1007/978-3-319-51949-4_14-1 
https://doi.org/10.1080/13504622.2019.1693511 https://doi.org/10.1080/13504622.2019.1693511 http://ecvmashaiti2012.e-monsite.com/pages/presentation.html http://ecvmashaiti2012.e-monsite.com/pages/presentation.html https://doi.org/10.1111/lit.12176 https://doi.org/10.1080/13504622.2015.1007337 https://doi.org/10.1080/13504622.2015.1007337 https://doi.org/10.1016/j.tree.2007.08.016 https://doi.org/10.1007/s10464-005-8626-y https://doi.org/10.1007/s10464-005-8626-y https://doi.org/10.1016/0016-3287(93)90022-L https://doi.org/10.1016/0016-3287(93)90022-L https://doi.org/10.1016/j.futures.2016.10.005 https://doi.org/10.1016/j.futures.2016.10.005 https://doi.org/10.1080/13504622.2017.1408056 https://doi.org/10.1080/13504622.2017.1408056 https://doi.org/10.1080/03057269908560148 https://doi.org/10.1080/03057269908560148 https://doi.org/10.1080/14733285.2013.848599 https://doi.org/10.1080/14733285.2013.848599 https://doi.org/10.1007/s11625-014-0245-9 https://doi.org/10.1007/s11625-014-0245-9 https://doi.org/10.5751/ES-08317-210214 https://doi.org/10.5751/ES-08317-210214 https://doi.org/10.1080/01626620.2018.1551159 https://doi.org/10.1080/09500690305021 https://doi.org/10.1080/09500690305021 1084 Sustainability Science (2020) 15:1067–1085 1 3 Hodson D (2011) Looking to the future: building a curriculum for social activism. Sense Publications, Rotterdam Holfelder AK (2019) Towards a sustainable future with education? Sustain Sci. https ://doi.org/10.1007/s1162 5-019-00682 -z IBE-UNESCO (2019) Transdisciplinary approach. https ://www.ibe. unesc o.org/en/gloss ary-curri culum -termi nolog y/t/trans disci plina ry-appro ach Jackson T (2017) Prosperity without growth. Foundations for the econ- omy of tomorrow, 2nd edn. Routledge, New York Jenkins EW (1994) Public understanding of science and science education for action. J Curric Stud 26(6):601–611. https ://doi. 
org/10.1080/00220 27940 26060 2 Jensen BB, Schnack K (2006) The action competence approach in envi- ronmental education. Environ Educ Res 3(2):163–178. https :// doi.org/10.1080/13504 62060 09430 53 Johnson LR, Johnson-Pynn JS, Lugumya DL, Kityo R, Drescher CF (2013) Cultivating youth’s capacity to address climate change in Uganda. Int Perspect Psych Res Pract Consult 2(1):29–44. https ://doi.org/10.1037/a0031 053 Johnson GA, Pfister AE, Vindrola-Padros C (2012) Drawings, photos, and performances: using visual methods with children. Visual Anthro Rev 28(2):164–178 Jorgenson SN, Stephens JC, White B (2019) Environmental educa- tion in transition: a critical review of recent research on climate change and energy education. J Environ Educ 50(3):160–171. https ://doi.org/10.1080/00958 964.2019.16044 78 Kagan S (2008) Sustainability: a new frontier for the arts and cultures. Vas Verlag Fur Akademisch, Frankfurt Kagan S (2011) Art and sustainability: connecting patterns for a culture of complexity. Transaction Publishers, Piscataway Kagan C, Burton M (2000) Prefigurative action research: an alterna- tive basis for critical psychology. Ann Rev Crit Psych 2(73–87). Kaufmann N, Sanders C, Wortmann J (2019) Building new founda- tions: the future of education from a degrowth perspective. Sus- tain Sci. https ://doi.org/10.1007/s1162 5-019-00699 -4 Kelsey E, Armstrong C (2012) Finding hope in a world of environ- mental catastrophe. In: Wals AE, Corcoran PB (eds) Learning for sustainability in times of accelerating change. Wageningen Academic Publishers, Wageningen, pp 187–200 Kenis A, Mathijs E (2014) Climate change and post-politics: repo- liticizing the present by imagining the future? Geoforum 52:148–156 Kindon S, Pain R, Kesby M (eds) (2007) Participatory action research approaches and methods: connecting people, participation and place. Routledge, Abington Klein JT (2004) Prospects for transdisciplinar ity. Futures 36(4):515–526 Kuiper K (2016) The arts. 
https ://www.brita nnica .com/topic /the-arts Land MH (2013) Full STEAM ahead: the benefits of integrating the arts into STEM. Procedia Comp Sci 20:547–552. https ://doi. org/10.1016/j.procs .2013.09.317 Lawrence RJ, Després C (2004) Futures of transdisciplinarity. Futures 4(36):397–405 Lester BT, Ma L, Lee O, Lambert J (2006) Social activism in ele- mentary science education: a science, technology, and society approach to teach global warming. Int J Sci Educ 28(4):315–339. https ://doi.org/10.1080/09500 69050 02401 00 Liao C (2019) Creating a STEAM map: a content analysis of visual art practices in STEAM education. In: Khine MS, Areepattamannil S (eds) STEAM education: theory and practice. Springer, Swit- zerland, pp 37–55. https ://doi.org/10.1007/978-3-030-04003 -1_3 Maeda J (2013) STEM + Art = STEAM. STEAM J 1(1):34. https ://doi. org/10.5642/steam .20130 1.34 Malone K (2013) “The future lies in our hands”: children as researchers and environmental change agents in designing a child-friendly neighbourhood. Local Environ 18(3):372–395. https ://doi. org/10.1080/13549 839.2012.71902 0 Marmon M (2019) The emergence of creativity in STEM: fostering an alternative approach for Science, Technology, Engineering, and Mathematics instruction through the use of the Arts. In: Khine MS, Areepattamannil S (eds) STEAM education: theory and practice. Springer, Switzerland, pp 101–115. https ://doi. org/10.1007/978-3-030-04003 -1_6 Meadows DL, Randers J (1992) Beyond the limits: global collapse or a sustainable future. Earthscan Publications Ltd., London Merriam-Webster (2019) Integrate. https ://www.merri am-webst er.com/ dicti onary /integ rate Mezirow J (1997) Transformative learning: theory to practice. New Direct Adult Continuing Educ 1997(74):5–12 Mitra D, Serriere S, Kirshner B (2014) Youth participation in US contexts: student voice without a national mandate. Child Soc 28(4):292–304. 
https ://doi.org/10.1111/chso.12005 Monroe MC, Plate RR, Oxarart A, Bowers A, Chaves WA (2017) Iden- tifying effective climate change education strategies: a systematic review of the research. Environ Educ Res 12:1–22. https ://doi. org/10.1080/13504 622.2017.13608 42 National Foundation on the Arts and the Humanities Act (1965). https ://www.law.corne ll.edu/uscod e/text/20/951. Accessed 28 Nov 2019 National Science Foundation (2015) Revisiting the STEM workforce: a companion to science and engineering indicators 2014. https ://www.nsf.gov/pubs/2015/nsb20 1510/nsb20 1510.pdf. Accessed 23 Jun 2019 (28 Nov. 2019) NSTA (2019) National science teaching association: about the next generation science standards. https ://ngss.nsta.org/About .aspx. Accessed 15 Jun 2019 O’Sullivan E (1999) Transformative learning: educational vision for the twenty first century. Zed Books, London Ojala M (2016) Preparing children for the emotional challenges of climate change: a review of the research. In: Winograd K (ed) Education in times of environmental crises: teaching children to be agents of change. Routledge, New York, pp 210–218 Osnes B (2017) Performance for resilience: engaging youth on energy and climate through music, movement, and theatre. Palgrave- Macmillan, New York Palmer JA (1998) Environmental education in the 21st century: theory, practice, progress and promise. Routledge, London Percy-Smith B, Burns D (2013) Exploring the role of children and young people as agents of change in sustainable commu- nity development. Local Environ 18(3):323–339. https ://doi. org/10.1080/13549 839.2012.72956 5 Phelan AM (2004) Rationalism, complexity science and curriculum: a cautionary tale. Complicity Int J Complexity Educ 1(1):9–17. https ://doi.org/10.29173 /cmplc t8712 Pipere A (2016) Envisioning complexity: towards a new conceptu- alization of educational research for sustainability. Discourse Comm Sustain Educ 7(2):68–91. 
https ://doi.org/10.1515/ dcse-2016-0017 Psycharis S (2018) STEAM in education: a literature review on the role of computational thinking, engineering epistemology and computational science. Computational STEAM pedagogy (CSP). Sci Cult 4(2):51–72. https ://doi.org/10.5281/zenod o.12145 65 Qvortrup J (2009) Are children human beings or human becomings? A critical assessment of outcome thinking. Rivista Internazionale di Scienze Sociali 3–4:631–653 Quigley CF, Herro D, Baker A (2019) Moving toward transdisciplinary instruction: a longitudinal examination of STEAM teaching prac- tices. In: Khine MS, Areepattamannil S (eds) STEAM education: theory and practice. Springer Nature, Switzerland, pp 143–164. https ://doi.org/10.1007/978-3-030-04003 -1_8 https://doi.org/10.1007/s11625-019-00682-z http://www.ibe.unesco.org/en/glossary-curriculum-terminology/t/transdisciplinary-approach http://www.ibe.unesco.org/en/glossary-curriculum-terminology/t/transdisciplinary-approach http://www.ibe.unesco.org/en/glossary-curriculum-terminology/t/transdisciplinary-approach https://doi.org/10.1080/0022027940260602 https://doi.org/10.1080/0022027940260602 https://doi.org/10.1080/13504620600943053 https://doi.org/10.1080/13504620600943053 https://doi.org/10.1037/a0031053 https://doi.org/10.1037/a0031053 https://doi.org/10.1080/00958964.2019.1604478 https://doi.org/10.1007/s11625-019-00699-4 https://www.britannica.com/topic/the-arts https://doi.org/10.1016/j.procs.2013.09.317 https://doi.org/10.1016/j.procs.2013.09.317 https://doi.org/10.1080/09500690500240100 https://doi.org/10.1007/978-3-030-04003-1_3 https://doi.org/10.5642/steam.201301.34 https://doi.org/10.5642/steam.201301.34 https://doi.org/10.1080/13549839.2012.719020 https://doi.org/10.1080/13549839.2012.719020 https://doi.org/10.1007/978-3-030-04003-1_6 https://doi.org/10.1007/978-3-030-04003-1_6 https://www.merriam-webster.com/dictionary/integrate https://www.merriam-webster.com/dictionary/integrate 
https://doi.org/10.1111/chso.12005 https://doi.org/10.1080/13504622.2017.1360842 https://doi.org/10.1080/13504622.2017.1360842 https://www.law.cornell.edu/uscode/text/20/951 https://www.law.cornell.edu/uscode/text/20/951 https://www.nsf.gov/pubs/2015/nsb201510/nsb201510.pdf https://www.nsf.gov/pubs/2015/nsb201510/nsb201510.pdf https://ngss.nsta.org/About.aspx https://doi.org/10.1080/13549839.2012.729565 https://doi.org/10.1080/13549839.2012.729565 https://doi.org/10.29173/cmplct8712 https://doi.org/10.1515/dcse-2016-0017 https://doi.org/10.1515/dcse-2016-0017 https://doi.org/10.5281/zenodo.1214565 https://doi.org/10.1007/978-3-030-04003-1_8 1085Sustainability Science (2020) 15:1067–1085 1 3 Quigley CF, Herro D, Jamil FM (2017) Developing a conceptual model of STEAM teaching practices. School Sci Math 117(1–2):1–2. https ://doi.org/10.1111/ssm.12201 Rasmussen K, Smidt S (2003) Children in the neighborhood. In: Chris- tensen P, O’Brien M (eds) Children in the city: home, neighbour- hood and community. Routledge, London, pp 82–100 Rathwell KJ, Armitage D (2016) Art and artistic processes bridge knowledge systems about social-ecological change: an empirical examination with Inuit artists from Nunavut, Canada. Ecol Soc 21(2):21. https ://doi.org/10.5751/ES-08369 -21022 1 Riemer M, Lynes J, Hickman G (2014) A model for developing and assessing youth-based environmental engagement programmes. Environ Educ Res 20(4):552–574. https ://doi.org/10.1080/13504 622.2013.81272 1 Rittel HW, Webber MM (1973) Dilemmas in a general theory of plan- ning. Policy Sci 4(2):155–169 Rivera Lopez F, Wickson F, Hausner V (2018) Finding CreativeVoice: applying arts-based research in the context of biodiversity con- servation. Sustain 10:1778. https ://doi.org/10.3390/su100 61778 Rooney-Varga JN, Brisk AA, Adams E, Shuldman M, Rath K (2014) Student media production to meet challenges in climate change science education. J Geosci Educ 62(4):598–608. https ://doi. 
org/10.5408/13-050.1 Roth WM, Lee S (2004) Science education as/for participation in the community. Sci Educ 88(2):263–291 Roy SG, de Souza SP, McGreavy B, Druschke CG, Hart DD, Gardner K (2019) Evaluating core competencies and learning outcomes for training the next generation of sustainability researchers. Sus- tain Sci. https ://doi.org/10.1007/s1162 5-019-00707 -7 Sadler TD (2009) Situated learning in science education: socio-scien- tific issues as contexts for practice. Stud Sci Educ 45(1):1–42. https ://doi.org/10.1080/03057 26080 26818 39 Savery JR (2006) Overview of problem-based learning: definition and distinctions. Interdiscip J Problem Based Learn 1(1):9–20. https ://doi.org/10.7771/1541-5015.1002 Schneider F, Rist S (2014) Envisioning sustainable water futures in a transdisciplinary learning process: combining normative, explorative, and participatory scenario approaches. Sustain Sci 9(4):463–481 Scholz RW, Lang DJ, Wiek A, Walter AI, Stauffacher M (2016) Trans- disciplinary case studies as a means of sustainability learn- ing: historical framework and theory. Int J Sustain High Educ 7(3):226–251 Schreiner C, Henriksen EK, Kirkeby Hansen PJ (2005) Climate educa- tion: empowering today’s youth to meet tomorrow’s challenges. Studies Sci Educ 41(1):3–49 Singleton J (2015) Head, heart and hands model for transformative learning: place as context for changing sustainability values. J Sustain Educ 9:1–16 Sipos Y, Battisti B, Grimm K (2008) Achieving transformative sustain- ability learning: engaging head, hands and heart. Int J Sustain High Educ 9(1):68–86. https ://doi.org/10.1108/14676 37081 08421 93 Skorton D, Bear A (eds) (2018) The integration of the humanities, and arts with sciences, engineering and medicine in higher education: branches from the same tree. The National Academies Press, Washington. https ://doi.org/10.17226 /24988 Sterling S (2001) Sustainable education: re-visioning learning and social change. 
Books, Totnes Sterling S (2004) An analysis of the development of sustainability edu- cation internationally: evolution, interpretation, and transforma- tive potential. In: Blewitt J, Cullingford C (eds) The sustainabil- ity curriculum: the challenge for higher education. Earthscan, London, pp 43–62 Stokols D (2006) Toward a science of transdisciplinary action research. Am J Commun Psych 38(1–2):79–93 Stratford E, Low N (2015) Young islanders, the meteorological imagi- nation, and the art of geopolitical engagement. Children’s Geogr 13(2):164–180 Tate (2019) Documentary photography. https ://www.tate.org.uk/art/ art-terms /d/docum entar y-photo graph y Taylor E, Taylor PC (2017) Breaking down enlightenment silos: from STEM to ST2EAM education, and beyond. In: Bryan L, Tobin K (eds) 13 questions: reframing education’s conversation: science. Peter Lang, New York, pp 455–472 Toomey AH, Markusson N, Adams E, Brockett B (2015) Inter-and transdisciplinary research: a critical perspective. https ://susta inabl edeve lopme nt.un.org/conte nt/docum ents/61255 8-Inter -%20 and %20Tra ns-disci plina ry%20Res earch %20-%20A%20Cri tical %20Per spect ive.pdf. Accessed 15 Jun 2019 Trott CD (2017) Engaging key stakeholders in climate change: a com- munity-based project for youth-led participatory climate action. Colorado State University Libraries. https ://mount ainsc holar .org/ handl e/10217 /18134 9 Trott CD (2019a) Reshaping our world: collaborating with children for community-based climate change action. Action Res 17(1):42–62 Trott CD (2019b) Children’s constructive climate change engagement: empowering awareness, agency, and action. Environ Educ Res. https ://doi.org/10.1080/13504 622.2019.16755 94 UNESCO (2019) What is education for sustainable development? https ://en.unesc o.org/theme s/educa tion-susta inabl e-devel opmen t/ what-is-esd. Accessed 15 Jun 2019 van Kerkhoff L, Lebel L (2006) Linking knowledge and action for sustainable development. 
Annu Rev Environ Resour 31:445–477. https ://doi.org/10.1146/annur ev.energ y.31.10240 5.17085 0 Wals AE (2011) Learning our way to sustainability. J Ed Sustain Dev 5(2):177–186. https ://doi.org/10.1177/09734 08211 00500 208 Wang CC, Morrel-Samuels S, Hutchison PM, Bell L, Pestronk RM (2004) Flint photovoice: community building among youths, adults, and policymakers. Am J Public Health 94(6):911–913 Weinberg AE, Sample McMeeking LB (2017) Toward meaningful interdisciplinary education: high school teachers’ views of math- ematics and science integration. School Sci Math 117(5):204– 213. https ://doi.org/10.1111/ssm.12224 Wiek A, Ness B, Schweizer-Ries P, Brand FS, Farioli F (2012) From complex systems analysis to transformational change: a com- parative appraisal of sustainability science projects. Sustain Sci 7(1):5–24. https ://doi.org/10.1007/s1162 5-011-0148-y Wilson EO (1998) Back from the Chaos. https ://www.theat lanti c.com/ magaz ine/archi ve/1998/03/back-from-chaos /30870 0/. Accessed 10 Dec 2019 World Commission on Environment and Development (1987) Our common future. Oxford University Press, Oxford Wyness M, Harrison L, Buchanan I (2004) Childhood, politics and ambiguity: towards an agenda for children’s political inclusion. Soc 38(1):81–99. https ://doi.org/10.1177/00380 38504 03936 2 Yarime M, Trencher G, Mino T et al (2012) Establishing sustainability science in higher education institutions: towards an integration of academic development, institutionalization, and stakeholder collaborations. Sustain Sci 7:101–113. https ://doi.org/10.1007/ s1162 5-012-0157-5 Ziegler R, Ott K (2011) The quality of sustainability science: a philo- sophical perspective. Sustain Sci Practice Policy 7(1):31–44. https ://doi.org/10.1080/15487 733.2011.11908 063 Publisher’s Note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. 
https://doi.org/10.1111/ssm.12201 https://doi.org/10.5751/ES-08369-210221 https://doi.org/10.1080/13504622.2013.812721 https://doi.org/10.1080/13504622.2013.812721 https://doi.org/10.3390/su10061778 https://doi.org/10.5408/13-050.1 https://doi.org/10.5408/13-050.1 https://doi.org/10.1007/s11625-019-00707-7 https://doi.org/10.1080/03057260802681839 https://doi.org/10.7771/1541-5015.1002 https://doi.org/10.7771/1541-5015.1002 https://doi.org/10.1108/14676370810842193 https://doi.org/10.1108/14676370810842193 https://doi.org/10.17226/24988 https://www.tate.org.uk/art/art-terms/d/documentary-photography https://www.tate.org.uk/art/art-terms/d/documentary-photography https://sustainabledevelopment.un.org/content/documents/612558-Inter-%20and%20Trans-disciplinary%20Research%20-%20A%20Critical%20Perspective.pdf https://sustainabledevelopment.un.org/content/documents/612558-Inter-%20and%20Trans-disciplinary%20Research%20-%20A%20Critical%20Perspective.pdf https://sustainabledevelopment.un.org/content/documents/612558-Inter-%20and%20Trans-disciplinary%20Research%20-%20A%20Critical%20Perspective.pdf https://sustainabledevelopment.un.org/content/documents/612558-Inter-%20and%20Trans-disciplinary%20Research%20-%20A%20Critical%20Perspective.pdf https://mountainscholar.org/handle/10217/181349 https://mountainscholar.org/handle/10217/181349 https://doi.org/10.1080/13504622.2019.1675594 https://en.unesco.org/themes/education-sustainable-development/what-is-esd https://en.unesco.org/themes/education-sustainable-development/what-is-esd https://en.unesco.org/themes/education-sustainable-development/what-is-esd https://doi.org/10.1146/annurev.energy.31.102405.170850 https://doi.org/10.1177/097340821100500208 https://doi.org/10.1111/ssm.12224 https://doi.org/10.1007/s11625-011-0148-y https://www.theatlantic.com/magazine/archive/1998/03/back-from-chaos/308700/ https://www.theatlantic.com/magazine/archive/1998/03/back-from-chaos/308700/ https://doi.org/10.1177/0038038504039362 
Article

Estimating Needle and Shoot Inclination Angle Distributions and Projection Functions in Five Larix principis-rupprechtii Plots via Leveled Digital Camera Photography

Jie Zou 1,2,*, Peihong Zhong 1,2, Wei Hou 1,2, Yong Zuo 1,2 and Peng Leng 1,2

Citation: Zou, J.; Zhong, P.; Hou, W.; Zuo, Y.; Leng, P. Estimating Needle and Shoot Inclination Angle Distributions and Projection Functions in Five Larix principis-rupprechtii Plots via Leveled Digital Camera Photography. Forests 2021, 12, 30. https://doi.org/10.3390/f12010030

Received: 16 August 2020; Accepted: 25 December 2020; Published: 28 December 2020

Publisher's Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Copyright: © 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
1 The Academy of Digital China (Fujian), Fuzhou University, Fuzhou 350116, China; zhongpeihongfz@163.com (P.Z.); hwzb123@163.com (W.H.); zuoyong209@163.com (Y.Z.); lengpeng_lp@163.com (P.L.)
2 Key Laboratory of Spatial Data Mining and Information Sharing, Ministry of Education, Fuzhou 350116, China
* Correspondence: zoujie@fzu.edu.cn; Tel.: +86-591-6317-9702

Abstract: The leaf inclination distribution function is a key determinant that influences radiation penetration through forest canopies. In this study, the needle and shoot inclination angle distributions of five contrasting Larix principis-rupprechtii plots were obtained via the frequently used leveled digital camera photography method. We also developed a quasi-automatic method to derive the needle inclination angle measurements based on photographs obtained using the leveled digital camera photography method and further verified these using manual measurements. Then, the variations of shoot and needle inclination angle distribution measurements due to height levels, plots, and observation years were investigated. The results showed that the developed quasi-automatic method is effective in deriving needle inclination angle distribution measurements. The shoot and needle inclination angle distributions at the whole-canopy scale tended to be planophile and exhibited minor variations with plots and observation years. The small variations in the needle inclination angle distributions with height level in the five plots might be caused by contrasting light conditions at different height levels. The whole-canopy and height level needle projection functions also tended to be planophile, and minor needle projection function variations with plots and observation years were observed. We attempted to derive the shoot projection functions of the five plots by using a simple and applicable method and further evaluated the performance of the new method.
Keywords: needle inclination angle distribution; shoot inclination angle distribution; leveled digital photography; G function; coniferous forest; Larix

1. Introduction

The leaf inclination angle distribution function (LIDF), which is defined as the probability of a leaf element of unit size to have its surface normal within a specified unit solid angle [1,2], is a key determinant influencing radiation penetration through forest canopies [3]. The LIDF is also the fundamental parameter used to derive the leaf projection function (commonly referred to as G), which is defined as the projection coefficient of the unit foliage area on a plane perpendicular to the viewing direction [2]. This coefficient is required as the key parameter for the indirect estimation of the leaf area index (LAI) using optical methods. Therefore, LIDF and G measurements are essential requirements in the field of forestry.

Although LIDF measurements are required for many forest studies, few reliable and convenient estimation methods (i.e., direct and indirect methods) are available for forest canopies, especially for coniferous forest canopies. For example, although some direct methods have been successfully applied to measure the LIDFs of low-vegetation canopies by using combinations of inclinometers, compasses, and rulers [2,4], mechanical arms [5],
and 3D digitizers [6,7], they are difficult to apply in tall canopies, such as forest canopies, because they must be in contact with leaves, which are usually distant from the ground instrument and operator. Several indirect methods, including optical methods (e.g., multiband vegetation imaging [8], digital hemispherical photography (DHP) [9,10], digital cover photography [11,12], leveled digital photography (LDP) [3,13–15], and the 3D image-processing method [16]) and terrestrial laser scanning [17–24], have been proposed for deriving the LIDFs of tall forest canopies. For DHP, one of the two solutions is to obtain the LIDFs by inverting the simultaneous equations representing the gap fraction measurements at multiple zenith angles based on the least squares method [8,10,25]. Another solution is to obtain the LIDFs by inverting Beer's law without the use of the least squares method, based on the gap fraction, clumping index, and LAI measurements [11,26]. The limitation of DHP is that the accuracy of the derived LIDFs relies strongly on the accuracy of the gap fraction, clumping index, and LAI measurements [8,11,26]. Accurate LIDF measurements of broadleaf forest plots have been obtained using terrestrial laser scanning (TLS) [18–24]. However, the TLS method is expensive due to the high instrument price and the advanced professional skills required to obtain accurate LIDF measurements. These requirements hamper the extensive use of this method.
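The second DHP solution above can be sketched numerically. Under the usual homogeneous-canopy assumption, Beer's law gives the gap fraction as P(θ) = exp(−G(θ)·Ω·LAI/cos θ), where Ω is the clumping index, so G at each view zenith angle follows directly from measured P(θ), Ω, and LAI. The function below is our illustrative sketch, not the implementation used in the cited studies:

```python
import math

def g_from_gap_fraction(theta_deg, gap_fraction, lai, clumping=1.0):
    """Invert Beer's law, P(theta) = exp(-G(theta) * clumping * LAI / cos(theta)),
    to recover the projection coefficient G at one view zenith angle."""
    theta = math.radians(theta_deg)
    return -math.cos(theta) * math.log(gap_fraction) / (clumping * lai)

# Round trip: simulate a gap fraction for a known G, then invert it.
lai, omega, g_true, theta = 4.0, 0.9, 0.5, 35.0
p = math.exp(-g_true * omega * lai / math.cos(math.radians(theta)))
g_est = g_from_gap_fraction(theta, p, lai, clumping=omega)  # recovers 0.5
```

Repeating this inversion at several zenith angles yields the angular profile of G, from which an LIDF can then be inferred, which is why the accuracy of the gap fraction, clumping index, and LAI measurements propagates directly into the derived LIDFs.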
Furthermore, the TLS method has previously been limited to broadleaf forest plots because the diameter of the laser spot used in TLS is generally larger than the typical width of a needle (about 1–2 mm), meaning the 3D surfaces of the needles cannot be effectively detected using TLS [18–24,27]. As for the 3D image-processing method, this has only been applied to broadleaf trees, since the individual leaves of broadleaf trees are usually large enough to be detected using segmentation algorithms [16]. However, it is difficult to apply this method to coniferous trees because the typical width of a needle is small, posing great challenges for segmentation algorithms [27]. Another key reason is that needles are commonly grouped within shoots, which makes it difficult to collect multi-view photographs of a single needle due to the mutual shading of needles within shoots; sufficient, high-quality multi-view photographs of each needle are a prerequisite for reconstructing the 3D surfaces of the needles. Among the above-mentioned indirect methods, LDP is simple, affordable, and can be relatively conveniently applied to tall forest canopies with the aid of ladders, observational towers, and buildings [3,14,28,29]. Moreover, the performance of the LDP method in obtaining LIDF measurements of forest plots has already been verified by Pisek et al. [28]. Therefore, the LDP approach is an effective and promising method for obtaining LIDF measurements of forest canopies, especially for coniferous canopies with flat needles. Several studies have attempted to obtain the LIDF measurements of forest plots with different broadleaf tree species [3,14,28,29]. The results of these studies show obvious differences between the LIDF measurements of those broadleaf tree species [3,14,28,29]. Therefore, the estimation of LIDF measurements for each tree species is necessary whenever feasible.
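In LDP, inclination angles are digitized from leaves or needles that appear approximately edge-on in a photograph taken with a leveled camera, so the angle of the digitized segment from the image horizontal equals the inclination angle from the horizontal plane. A minimal sketch of that measurement step is below; it assumes image row coordinates increase downward and that the operator clicks two points on the leaf axis (the function name is ours, for illustration):

```python
import math

def inclination_from_endpoints(x1, y1, x2, y2):
    """Inclination angle (degrees from horizontal, in [0, 90]) of a needle
    segment digitized in a leveled side-view photograph. Image y-coordinates
    grow downward, so the vertical difference is negated to point upward."""
    dx = x2 - x1
    dy = -(y2 - y1)
    if dx == 0 and dy == 0:
        raise ValueError("The two endpoints must be distinct.")
    return math.degrees(math.atan2(abs(dy), abs(dx)))

flat = inclination_from_endpoints(100, 200, 180, 200)   # horizontal needle: 0 degrees
steep = inclination_from_endpoints(100, 200, 100, 120)  # vertical needle: 90 degrees
mid = inclination_from_endpoints(100, 200, 150, 150)    # 45 degrees
```

In practice, only elements oriented roughly perpendicular to the viewing direction are measured, so that perspective foreshortening does not bias the digitized angle.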
In addition to tree species, the LIDF measurements of the same forest plot also change due to factors related to the seasons [14,29], canopy height level [14,30], and light conditions [14]. To illustrate, Raabe et al. [14] reported that the LIDF measurements of Betula pendula Roth forest plots change obviously with the seasons. Thus, season, height level, and light condition factors, in addition to the tree species, should also be considered in the LIDF estimation of forest plots with the same tree species.

To date, the majority of previous studies have focused on measuring the LIDFs and G values of broadleaf forest plots with different tree species [14,28,29]. Few studies have attempted to obtain the shoot and needle inclination angle distribution function (SIDF and NIDF) values or the shoot and needle G measurements of coniferous forest plots. For coniferous forest plots, the separation of SIDF and NIDF or needle and shoot G measurements is necessary given that only the SIDF or shoot G value is required for some applications, such as LAI estimation from optical methods, because most of these methods cannot discriminate the small gaps between the needles of a shoot. Furthermore, the relationships between the SIDFs and NIDFs of coniferous forests have not yet been comprehensively investigated. Whether the SIDF, NIDF, and shoot and needle G measurements of coniferous forest plots change with factors related to the canopy height levels, plot canopy structure characteristics, or observation year remains unclear.

In this study, five Larix principis-rupprechtii forest plots with contrasting canopy structure characteristics (e.g., LAI, stand density, mean tree height, and diameter at breast height) were selected to obtain SIDF and NIDF measurements at three height levels (i.e., top, middle, and bottom) over 2 years using the LDP method. For L.
principis-rupprechtii forest canopies, the typical width of a needle is usually smaller than 1 mm, and the needles are spirally distributed around the major axis of the cylindrical brachyplast of shoots at certain angles. Moreover, the SIDF and NIDF measurements needed to be collected at three height levels of the forest canopies. Therefore, the TLS and 3D image processing methods are inapplicable for this study. Given that the needles of the L. principis-rupprechtii tree species are flat, the LDP method was adopted to obtain the SIDFs and NIDFs in this study. We also improved the LDP method to quasi-automatically obtain the NIDFs from the LDP photographs. The NIDF measurements from the quasi-automatic method were then evaluated against those obtained from the manual method. The variations in SIDF and NIDF, or shoot and needle G measurements, due to height levels, plots, and observation years were analyzed. Then, the relationships between the SIDFs and NIDFs of the five plots were investigated. Finally, we attempted to evaluate the performance of a simple and applicable method for obtaining the shoot G values of the five plots.

2. Materials and Methods

2.1. Plot Descriptions

In this study, five plots with contrasting stand densities, mean tree heights, mean tree ages, and LAIs were established (Table 1) to analyze the impacts of different within-canopy light conditions on SIDF and NIDF measurements, as well as on shoot and needle G measurements [31]. The five selected forest plots were located in Saihanba National Forest Park in Hebei Province, China [31]. The five plots are the same plots used in our two previous studies [31,32]. The tree species in the five single-species plots is L. principis-rupprechtii, which is widespread in northern China. The tree ages of the five plots are typical of L. principis-rupprechtii forest plots in the park. The terrain slope of the five plots is approximately 0°, and the plot size is 25 m × 25 m.
A forest inventory was established during a field campaign in 2017 [32]. Table 1 gives the descriptions of the five plots.

Table 1. Characteristics of Larix principis-rupprechtii plots [31].

| Characteristic | Plot 1 | Plot 2 | Plot 3 | Plot 4 | Plot 5 |
| Longitude and latitude | N 42°24′43″, E 117°19′4″ | N 42°24′2″, E 117°18′40″ | N 42°18′2″, E 117°18′9″ | N 42°25′22″, E 117°19′32″ | N 42°17′42″, E 117°16′53″ |
| Mean tree height (m) * | 19.4 | 20.4 | 12.6 | 13.3 | 8.7 |
| Average diameter at breast height (cm) | 26.6 | 27.2 | 12.7 | 14.1 | 9.2 |
| Stand density (stems/ha) | 464 | 384 | 2320 | 1760 | 3904 |
| Tree age (~years) | 54 | 55 | 21 | 22 | 13 |
| Needle-to-shoot area ratio (γ) ** | 1.36 | 1.20 | 1.18 | 1.23 | 1.37 |
| Litter collection LAI *** | 4.65 | 3.58 | 4.96 | 3.04 | 6.69 |
| Tree species | Larix principis-rupprechtii (all plots) | | | | |

* The height of each tree in each plot was estimated based on the point cloud dataset collected using the terrestrial laser scanner in the 2017 leaf area index (LAI) measurement campaign [31]. The height of each tree was calculated as the vertical distance between the highest point of the tree and the lowest point of the stem located close to the ground [31]. ** The needle-to-shoot area ratio (γ) of each plot was estimated by averaging the γ of typical shoot samples harvested from the three height levels of the forest canopies (i.e., top, middle, and bottom). Details of the γ estimations can be found in Section 2.3.3. *** The litter collection LAI of the five plots was obtained from the litter collection measurements acquired in the 2017 LAI measurement campaign [32].

2.2. Data Acquisition

In this study, the SIDF was defined as the probability of a shoot with half of the total needle area equal to unit size having the major axis of its cylindrical brachyplast within a specified unit solid angle; the NIDF was defined as the probability of a needle of unit size having its surface normal within a specified unit solid angle [1,2].
The shoot and needle images for LDP were taken using a Canon 6D camera equipped with a Canon 24–70 mm lens, with the aid of a ladder with a maximum length of approximately 14 m. A bubble level was mounted on top of the camera to facilitate camera leveling. The image resolution was 5472 × 3648 pixels. All images were taken under calm conditions to avoid wind effects on the shoot and needle inclination angles [33]. The images were taken from three height classes (i.e., top, middle, and bottom) in plots 3–5. In plots 1 and 2, images were taken only from the top and middle height classes because the majority of the lower canopy branches had been harvested during forest management activities prior to the field experiment [34]. The images were taken between 17–18 September 2017 and 9–10 August 2018, at the maximum plant area index (PAI) period of the five plots. The number of effective photographs collected ranged from 47 to 166 per height level, or 163 to 344 per plot, for the five plots.

2.3. Data Processing

2.3.1. Needle and Shoot Inclination Angle Estimation

The collected images were visually inspected for the presence of typical shoots with the major axis of their cylindrical brachyplast, or needles with their surfaces, oriented approximately perpendicular to the viewing direction of the digital camera (Figure 1) [14,28]. Thus, uncurved selected needles appeared as straight lines in the images. The shoot or needle samples used for SIDF or NIDF determination were randomly selected from all available samples in the collected images. The selection was performed by two experienced operators (the two authors of this paper) to avoid the influence of operator subjectivity on the shoot or needle sample selection [14]. All samples used in this study were first selected by one of the two operators and then independently checked by the other operator.
Incomplete and abnormal (atypical) shoots or needles were discarded before SIDF and NIDF determination. Severely curved needles were also discarded due to the difficulty of manually obtaining stable and reliable measurements for such samples. The inclination angles of the selected shoots or needles were measured using ACDSee Photo Manager 12 (ACD Systems International Inc., Bellevue, WA, USA) (free version). For lightly curved needles, three inclination angle measurements were collected for each needle sample at the middle locations of three even subsections of the needle, and the inclination angle measurement of the curved needle sample was obtained by averaging these three measurements. Since the selected uncurved needle samples appeared as straight lines in the images, a quasi-automatic method for detecting the boundary lines and estimating the needle inclination angles of the selected needles was developed based on a line segment detector (LSD) algorithm [35,36] in order to reduce the impact of the user subjectivity of the manual method described above on the NIDF measurements. The LSD algorithm is an excellent line segment detection algorithm compared with traditional methods that combine a Canny edge detector and the Hough transform, as these methods generally produce a large number of false detections [35–37]. In LSD, a line segment is detected as an estimate from a line support region (where pixels within that region have similar gradient orientation values) if it passes a meaningful alignment criterion test [36,38]. The performance of the LSD in detecting line segments under varying and complex field conditions has been verified previously [35,37,39–41]. More details of the LSD can be found in [36]. The LSD code can be downloaded publicly at http://www.ipol.im/pub/art/2012/gjmr-lsd/.
Figure 2 shows a flow chart of the quasi-automatic method used to obtain the needle inclination angle measurements based on the LSD. This method includes four main steps: (1) line segment detection based on the LSD algorithm; (2) interactive line segment identification; (3) line segment connection; and (4) needle inclination angle calculation. The line segments extracted using the LSD in step 1 were not perfect, since false line segments and two boundary lines were sometimes detected (Figure 2b), because the photographs were taken at different height levels of the forest canopies and the backgrounds varied between photographs. Moreover, the boundary line shapes of the selected needles also differed. Instead of developing a fully automatic method to identify the false and redundant line segments, which could be unstable due to the varying backgrounds and boundary line shapes, an interactive identification method involving the operators was used to identify the extracted effective line segments for the needle inclination angle estimation (Figure 2c). Step 3 connected the identified disconnected line segments into a whole line (Figure 2d). Then, the inclination angle of each line segment was estimated as the angle between the zenith and the normal direction of that line segment (step 4) (Figure 2e). Finally, the needle inclination angle was computed as the length-weighted sum of the inclination angles of the line segments, where the weight of each line segment is its length divided by the total length of all line segments for that needle. A two-sample Kolmogorov–Smirnov (K–S) homogeneity test was adopted to evaluate whether the NIDFs of the manual and quasi-automatic methods were part of the same population [28,42].
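The length-weighted averaging in step 4 can be sketched as follows (a minimal Python illustration; the segment endpoints are hypothetical image coordinates, and we assume a leveled camera so the image vertical axis is the zenith):

```python
import math

def segment_inclination_deg(p1, p2):
    """Inclination angle of one detected boundary-line segment, in degrees.

    With a leveled camera, a needle that appears horizontal in the image has
    its surface normal along the zenith (inclination 0 degrees), so the
    inclination angle equals the segment's angle from the horizontal axis.
    """
    dx = abs(p2[0] - p1[0])
    dy = abs(p2[1] - p1[1])
    return math.degrees(math.atan2(dy, dx))

def needle_inclination_deg(segments):
    """Length-weighted mean inclination over the connected line segments of
    one needle (step 4 of the quasi-automatic method)."""
    lengths = [math.dist(p1, p2) for p1, p2 in segments]
    total = sum(lengths)
    return sum(segment_inclination_deg(p1, p2) * seg_len / total
               for (p1, p2), seg_len in zip(segments, lengths))

# Hypothetical example: two connected segments of one lightly curved needle.
segments = [((0.0, 0.0), (10.0, 10.0)),    # 45-degree segment
            ((10.0, 10.0), (20.0, 10.0))]  # horizontal segment
angle = needle_inclination_deg(segments)
```

Because the weights are segment lengths, the longer 45° segment dominates and the result lies between 0° and 45°, closer to 45°.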
The inclination angle measurements of the selected needles or shoots for each plot were almost evenly distributed across the height levels (Tables 2 and 3). Pisek et al. (2013) suggested that a minimum of 75 leaf inclination angle measurements is sufficient to obtain a statistically representative LIDF. Accordingly, we collected at least 82 effective needle or shoot inclination angle measurements at each height level in the five plots (Tables 2 and 3). The numbers of effective needle inclination angle measurements at each height level were larger than those of shoots (Tables 2 and 3) because needles were dozens of times more numerous than shoots in the canopy; needles oriented perpendicular to the viewing direction of the camera were therefore easier to identify in the images than shoots.

Figure 1. Example of the determination of the needle or shoot inclination angles based on the leveled digital photography method. Here, Z denotes the zenith, Nn denotes the needle surface normal, and Ns denotes the major axis of the cylindrical shoot brachyplast; θn and θs are the needle and shoot inclination angles, respectively.

Figure 2. The flow chart of the quasi-automatic method used to measure the needle inclination angle based on the line segment detector (LSD) algorithm and photographs acquired by the leveled digital photography method. The target needle (a) is the same as the target in Figure 1. The blue lines and arrows (e) denote the normal directions of the line segments detected by LSD.

Table 2. Statistical characteristics (i.e., number of measurements (count), mean needle inclination angle (mean), and standard deviation (SD)) of the needle inclination angle distribution function (NIDF) measurements derived using the manual and quasi-automatic estimation methods in the five plots. The quasi-automatic method failed for ten needle samples, whose results are not included in the quasi-automatic measurements.
| Plot | Height level | Manual 2017 (count/mean/SD) | Manual 2018 (count/mean/SD) | Quasi-automatic 2017 (count/mean/SD) | Quasi-automatic 2018 (count/mean/SD) |
| Plot 1 | Top | 192 / 34.52 / 22.18 | 146 / 40.96 / 22.0 | 192 / 35.04 / 22.30 | 146 / 41.75 / 21.67 |
| Plot 1 | Middle | 189 / 34.5 / 22.44 | 158 / 33.11 / 20.82 | 189 / 34.70 / 22.32 | 158 / 33.43 / 20.85 |
| Plot 1 | Bottom | \ * | \ | \ | \ |
| Plot 1 | All | 381 / 34.51 / 22.28 | 304 / 36.88 / 21.72 | 381 / 34.87 / 22.28 | 304 / 37.43 / 21.62 |
| Plot 2 | Top | 141 / 39.19 / 22.87 | 259 / 38.27 / 22.99 | 141 / 39.59 / 22.49 | 259 / 39.11 / 23.07 |
| Plot 2 | Middle | 136 / 29.97 / 20.72 | 250 / 39.88 / 22.01 | 136 / 30.02 / 20.63 | 250 / 40.63 / 22.13 |
| Plot 2 | Bottom | \ | \ | \ | \ |
| Plot 2 | All | 277 / 34.66 / 22.28 | 509 / 39.06 / 22.51 | 277 / 34.89 / 22.08 | 509 / 39.86 / 22.60 |
| Plot 3 | Top | 134 / 39 / 22.91 | 227 / 36.27 / 22.0 | 134 / 39.05 / 22.81 | 227 / 36.92 / 22.08 |
| Plot 3 | Middle | 135 / 31.99 / 21.52 | 223 / 35.14 / 21.64 | 135 / 32.37 / 20.98 | 223 / 35.47 / 21.64 |
| Plot 3 | Bottom | 124 / 35.79 / 22.72 | 222 / 34.63 / 20.59 | 124 / 36.11 / 22.51 | 222 / 34.82 / 20.31 |
| Plot 3 | All | 393 / 35.58 / 22.51 | 672 / 35.36 / 21.40 | 393 / 35.68 / 22.45 | 672 / 35.75 / 21.35 |
| Plot 4 | Top | 170 / 41.59 / 24.74 | 108 / 39.7 / 22.75 | 170 / 42.19 / 24.66 | 108 / 39.62 / 22.83 |
| Plot 4 | Middle | 171 / 34.59 / 23.09 | 106 / 34.88 / 21.57 | 170 / 34.65 / 23.12 | 106 / 34.70 / 20.87 |
| Plot 4 | Bottom | 160 / 32.71 / 20.83 | 98 / 33.24 / 20.96 | 157 / 33.01 / 20.65 | 98 / 33.60 / 20.96 |
| Plot 4 | All | 501 / 36.37 / 23.26 | 312 / 36.04 / 21.9 | 497 / 36.71 / 23.23 | 312 / 36.05 / 21.69 |
| Plot 5 | Top | 222 / 39.35 / 23.44 | 185 / 37.0 / 22.38 | 222 / 39.99 / 23.29 | 185 / 37.62 / 22.24 |
| Plot 5 | Middle | 222 / 35.71 / 21.88 | 173 / 36.225 / 20.62 | 221 / 35.88 / 21.54 | 173 / 36.81 / 20.32 |
| Plot 5 | Bottom | 212 / 31.89 / 21.56 | 185 / 30.85 / 17.51 | 212 / 32.08 / 21.32 | 185 / 31.64 / 17.48 |
| Plot 5 | All | 656 / 35.71 / 22.49 | 543 / 34.66 / 20.41 | 655 / 36.04 / 22.28 | 543 / 35.32 / 20.24 |

* Not applicable.

Table 3. Statistical characteristics (i.e., number of measurements (count), mean leaf inclination angle (mean), and standard deviation (SD)) of the shoot inclination angle distribution function (SIDF) measurements, which were derived using the manual estimation method in the five plots.
| Plot | Height level | 2017 (count/mean/SD) | 2018 (count/mean/SD) |
| Plot 1 | Top | 90 / 19.68 / 16.91 | 83 / 24.73 / 24.2 |
| Plot 1 | Middle | 86 / 23.06 / 20.18 | 85 / 27.31 / 21 |
| Plot 1 | Bottom | \ * | \ |
| Plot 1 | All | 176 / 21.33 / 18.6 | 168 / 26.04 / 22.6 |
| Plot 2 | Top | 92 / 22.27 / 20.47 | 96 / 25.91 / 19.46 |
| Plot 2 | Middle | 88 / 25.67 / 23.4 | 90 / 24.76 / 21.97 |
| Plot 2 | Bottom | \ | \ |
| Plot 2 | All | 180 / 23.93 / 21.96 | 186 / 25.35 / 20.67 |
| Plot 3 | Top | 94 / 23.1 / 23.58 | 92 / 21.93 / 19.6 |
| Plot 3 | Middle | 94 / 20.14 / 18.37 | 97 / 24.32 / 21.08 |
| Plot 3 | Bottom | 90 / 22.26 / 20.43 | 97 / 21.96 / 21.94 |
| Plot 3 | All | 278 / 21.83 / 20.87 | 286 / 22.75 / 20.87 |
| Plot 4 | Top | 90 / 24.28 / 22.12 | 83 / 23.81 / 19.77 |
| Plot 4 | Middle | 94 / 25.37 / 22.08 | 84 / 21.4 / 19.41 |
| Plot 4 | Bottom | 89 / 21.34 / 19.81 | 82 / 20.46 / 19.56 |
| Plot 4 | All | 273 / 23.7 / 21.37 | 249 / 21.9 / 19.55 |
| Plot 5 | Top | 138 / 25.67 / 23.17 | 84 / 21.3 / 22.88 |
| Plot 5 | Middle | 134 / 20.3 / 20.53 | 86 / 22.35 / 22.25 |
| Plot 5 | Bottom | 138 / 20.82 / 19.78 | 92 / 21.41 / 21.44 |
| Plot 5 | All | 410 / 22.29 / 21.3 | 262 / 21.68 / 22.09 |

* Not applicable.

2.3.2. Fitting the Needle and Shoot Inclination Angle Measurements

The SIDFs or NIDFs of each plot were derived on the basis of the obtained needle or shoot inclination angle (θn or θs) measurements. Several typical distributions, including the two-parameter beta distribution, the ellipsoidal distribution, and the rotated-ellipsoidal distribution, have been proposed to describe the LIDFs of vegetation canopies [43]. Among these, the two-parameter beta and ellipsoidal distributions are the two most commonly used to fit field leaf inclination angle measurements of vegetation canopies [1,10,14,17,30,43,44]. Since no studies have compared the performance of the beta and ellipsoidal distributions in fitting field-collected θn or θs measurements of coniferous forest canopies, both distributions were used to fit the field-collected θn or θs measurements in this study. The probability of θn or θs can be described using the two-parameter beta distribution as follows [43]:

f(t) = \frac{1}{B(\mu, \nu)} (1 - t)^{\mu - 1} t^{\nu - 1}, (1)

where t = 2θn/π or 2θs/π.
The beta distribution function B(µ, ν) is defined as

B(\mu, \nu) = \int_0^1 (1 - x)^{\mu - 1} x^{\nu - 1}\,dx = \frac{\Gamma(\mu)\Gamma(\nu)}{\Gamma(\mu + \nu)}, (2)

where Γ is the gamma function, and µ and ν are the two parameters of the beta distribution function, which can be calculated as follows:

\mu = (1 - \bar{t})\left(\frac{\sigma_0^2}{\sigma_t^2} - 1\right), (3)

\nu = \bar{t}\left(\frac{\sigma_0^2}{\sigma_t^2} - 1\right), (4)

where \sigma_0^2 is the maximum variance for the expected mean \bar{t}, and \sigma_t^2 is the variance of t [43].

Similarly, the probability of θn or θs can also be described using the ellipsoidal distribution as follows [43,45]:

f(\theta) = \frac{2\chi^3 \sin\theta}{\Lambda\left(\cos^2\theta + \chi^2 \sin^2\theta\right)^2}, (5)

where θ represents the needle or shoot inclination angle (θn or θs), χ is the ratio of the horizontal to the vertical semi-axis length of an ellipsoid, and Λ is the normalized ellipse area for the projection of an ellipsoid. The ellipsoidal distribution becomes the spherical distribution if χ = 1 and Λ = 2. If χ < 1,

\Lambda = \chi + \frac{\sin^{-1} e}{e}, \quad e = \left(1 - \chi^2\right)^{1/2}, (6)

and if χ > 1,

\Lambda = \chi + \frac{\ln[(1 + e)/(1 - e)]}{2 e \chi}, \quad e = \left(1 - \chi^{-2}\right)^{1/2}. (7)

χ can be derived from the mean value \bar{\theta} of the θn or θs measurements as follows [43]:

\chi = -3 + \left(\frac{\bar{\theta}}{9.65}\right)^{-0.6061}, (8)

and \bar{\theta} can be calculated from the field-collected θn or θs measurements as follows [43]:

\bar{\theta} = \sum_j \theta_j f_j, (9)

where f_j is the leaf area fraction for a needle or shoot angle interval centered at θ_j.

All of the needle and shoot inclination angle measurements acquired in this study were fitted using the two-parameter beta or ellipsoidal distribution before subsequent processing.
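The method-of-moments fit of Equations (3)–(4) and the χ estimate of Equation (8) can be sketched as follows (a minimal Python illustration with synthetic angles; we assume σ₀² = t̄(1 − t̄) and that Equation (8) takes the mean angle in radians, following Campbell's empirical formula, since the paper does not state the unit):

```python
import math

def beta_params(angles_deg):
    """Method-of-moments estimates of the beta parameters (Eqs. 3-4).
    Angles in degrees are mapped to t = 2*theta/pi in [0, 1]."""
    t = [2.0 * math.radians(a) / math.pi for a in angles_deg]
    n = len(t)
    t_mean = sum(t) / n
    var_t = sum((x - t_mean) ** 2 for x in t) / (n - 1)  # sigma_t^2
    var_max = t_mean * (1.0 - t_mean)                    # sigma_0^2 (maximum variance)
    mu = (1.0 - t_mean) * (var_max / var_t - 1.0)
    nu = t_mean * (var_max / var_t - 1.0)
    return mu, nu

def beta_pdf(t, mu, nu):
    """Two-parameter beta density of Eq. (1)."""
    b = math.gamma(mu) * math.gamma(nu) / math.gamma(mu + nu)
    return (1.0 - t) ** (mu - 1.0) * t ** (nu - 1.0) / b

def ellipsoidal_chi(mean_angle_rad):
    """Ellipsoid shape parameter chi of Eq. (8); the mean inclination angle
    is assumed to be in radians (chi ~ 1 for a spherical canopy, whose mean
    angle is ~ 1 rad)."""
    return -3.0 + (mean_angle_rad / 9.65) ** -0.6061

# Synthetic, evenly spread angles (centers of 5-degree bins): an
# approximately uniform distribution, so mu and nu should be close to 1.
angles = [2.5 + 5.0 * i for i in range(18)]
mu, nu = beta_params(angles)
```

For a perfectly uniform t on [0, 1] the variance ratio is 3, giving µ = ν = 1; the discrete grid above yields values slightly below 1.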
After this, the deviation (f_D) between the fitted beta or ellipsoidal distribution and the unfitted distribution (f_LDP) was quantified using the modified inclination index proposed by Ross [46], which was adopted in previous studies [3,14,29]:

f_D = \int_0^{\pi/2} \left| f(\theta) - f_{LDP}(\theta) \right| d\theta \quad \text{or} \quad f_D = \int_0^1 \left| f(t) - f_{LDP}(t) \right| dt, (10)

where f_LDP(θ) or f_LDP(t) is the field NIDF or SIDF, presented here as histograms with a bin width of 5°.

Besides the beta and ellipsoidal distributions, an alternative to the field method for deriving the LIDFs, NIDFs, or SIDFs of vegetation canopies is based on six theoretical distributions: planophile, plagiophile, uniform, spherical, erectophile, and extremophile (Figure 3). These six theoretical distributions were proposed on the basis of empirical evidence of the natural variation of leaf normal distributions and of mathematical considerations used to describe the LIDFs of vegetation canopies [14,47]. For spherical canopies, the relative frequency of the leaf inclination angles is the same as the relative frequency of the inclinations of the surface elements of a sphere. For uniform canopies, the relative frequency of the leaf inclination angles is the same at any angle. Planophile canopies are characterized by a predominance of horizontally oriented leaves. Plagiophile canopies are dominated by inclined leaves, erectophile canopies are dominated by vertically oriented leaves, and extremophile canopies are dominated by high frequencies of both horizontally and vertically oriented leaves [14,48] (Figure 3).
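As an illustration, the six theoretical distributions have classical closed forms (the trigonometric expressions below are the standard de Wit-type forms from the literature, not equations given in this paper; θ is in radians on [0, π/2]):

```python
import math

# Classical closed forms of the six theoretical leaf inclination angle
# distributions; each density integrates to 1 over [0, pi/2].
DISTRIBUTIONS = {
    "planophile":   lambda th: 2.0 / math.pi * (1.0 + math.cos(2.0 * th)),
    "erectophile":  lambda th: 2.0 / math.pi * (1.0 - math.cos(2.0 * th)),
    "plagiophile":  lambda th: 2.0 / math.pi * (1.0 - math.cos(4.0 * th)),
    "extremophile": lambda th: 2.0 / math.pi * (1.0 + math.cos(4.0 * th)),
    "uniform":      lambda th: 2.0 / math.pi,
    "spherical":    lambda th: math.sin(th),
}

def integrate(f, a, b, n=10_000):
    """Midpoint-rule integration, used here to check normalization."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

norms = {name: integrate(f, 0.0, math.pi / 2.0)
         for name, f in DISTRIBUTIONS.items()}
```

The planophile density peaks at θ = 0 (horizontal leaves) while the erectophile density vanishes there, matching the qualitative descriptions above.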
Figure 3. Beta distributions for the six theoretical leaf inclination angle distributions proposed by de Wit [47].
2.3.3. Needle and Shoot Projection Function Calculation

Assuming that the needle azimuth angle is uniformly distributed and that the needle inclination angle distribution is independent of needle size [14], the needle projection function (G_n(θ)) can be calculated as follows:

G_n(\theta) = \int_0^{\pi/2} A(\theta, \theta_n) f(\theta_n)\,d\theta_n, (11)

A(\theta, \theta_n) = \begin{cases} \cos\theta \cos\theta_n, & |\cot\theta \cot\theta_n| > 1 \\ \cos\theta \cos\theta_n \left[1 + \frac{2}{\pi}(\tan\psi - \psi)\right], & |\cot\theta \cot\theta_n| \le 1 \end{cases}, (12)

where θ is the view zenith angle and ψ = cos⁻¹(cotθ cotθn).

Light can penetrate through the shoots of L. principis-rupprechtii forest plots due to the small gaps between the needles of each shoot. Therefore, the shoot projection function (G_s(θ)) cannot be derived directly from SIDF measurements using the method described above for needles. The needle-to-shoot area ratio (γ) is a parameter describing the ratio between the shoot projection area and the total needle area in a shoot. Therefore, a simple and applicable method for determining the G_s(θ) of L. principis-rupprechtii forest plots based on the G_n(θ) and γ measurements was used here:

G_s(\theta) = G_n(\theta) / \gamma. (13)

The γ values for the five plots were obtained during the field campaign in 2017. A detailed description of the γ determination procedure can be found in [31]; only a brief description is given here. Two to four typical shoot samples were randomly clipped from each height class (i.e., top, middle, and bottom) of the canopy in each plot. The γ calculation method described by Chen et al. [49] was adopted in this study. The projection images of each typical shoot were taken using a Canon 6D camera equipped with a Canon 24–70 mm lens and a flat, leveled white panel with two rulers laid on its top surface [31].
Three projection images were taken for each typical shoot by rotating the shoot's main axis at an azimuth angle of 0° and zenith angles of 0°, 45°, and 90° [31]. Then, three projection area estimates, namely A_p(0°, 0°), A_p(45°, 0°), and A_p(90°, 0°), were derived for each typical shoot on the basis of the three projection images. Half of the total needle area of each typical shoot (A_n) was then estimated using the volume displacement method [49]. The γ of each typical shoot was obtained from the three projection area estimates and A_n as follows [49]:

\gamma = \frac{A_n}{\left[A_p(0°, 0°)\cos 15° + A_p(45°, 0°)\cos 45° + A_p(90°, 0°)\cos 75°\right] / \left[\cos 15° + \cos 45° + \cos 75°\right]}. (14)

The γ of each plot was obtained by averaging the γ values of all shoot samples of that plot [31].

3. Results and Discussion

3.1. Comparison of Manual and Quasi-Automatic Methods Used to Derive the Needle Inclination Angle Measurements

Close agreement was observed between the needle inclination angle measurements obtained using the manual and quasi-automatic methods (p > 0.92), regardless of the plot or height level (Table 4). The agreement was further supported by the small differences in mean needle inclination angles between the two methods in the five plots (<0.64° for 2017 and <0.84° for 2018) (Table 2). Both of these results indicate that the needle inclination angle measurements of the manual and quasi-automatic methods in the five plots were part of the same population. Therefore, the quasi-automatic method developed in this study is effective for obtaining needle inclination angle measurements. The total number of needle inclination angle measurements for the manual method is larger by ten samples than that of the quasi-automatic method (Table 2) because the quasi-automatic method failed for ten needle samples.
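The projection-function chain of Equations (11)–(14) can be sketched numerically as follows (a minimal Python illustration; the spherical NIDF is used only as a check with a known answer, and the example γ of 1.36 is the Table 1 value for plot 1):

```python
import math

def proj_kernel(theta, theta_n):
    """Projection kernel A(theta, theta_n) of Eq. (12); angles in radians."""
    if theta == 0.0 or theta_n == 0.0:
        return math.cos(theta) * math.cos(theta_n)
    cot_prod = (math.cos(theta) / math.sin(theta)) * (math.cos(theta_n) / math.sin(theta_n))
    if abs(cot_prod) > 1.0:
        return math.cos(theta) * math.cos(theta_n)
    psi = math.acos(cot_prod)
    return math.cos(theta) * math.cos(theta_n) * (1.0 + (2.0 / math.pi) * (math.tan(psi) - psi))

def needle_G(theta, f, n=2000):
    """Needle projection function G_n(theta), Eq. (11), by midpoint-rule
    integration of the inclination density f over [0, pi/2]."""
    h = (math.pi / 2.0) / n
    return sum(proj_kernel(theta, (i + 0.5) * h) * f((i + 0.5) * h) * h
               for i in range(n))

def gamma_from_projections(a_n, ap_0, ap_45, ap_90):
    """Eq. (14): half the total needle area a_n divided by the
    cosine-weighted mean of the three shoot projection areas."""
    w15, w45, w75 = (math.cos(math.radians(a)) for a in (15, 45, 75))
    ap_mean = (ap_0 * w15 + ap_45 * w45 + ap_90 * w75) / (w15 + w45 + w75)
    return a_n / ap_mean

# Known check: for a spherical NIDF, f(theta_n) = sin(theta_n), the needle
# projection function is 0.5 at every view angle.
g_needle = needle_G(math.radians(30), math.sin)

# Eq. (13): shoot projection function with gamma = 1.36 (Table 1, plot 1).
g_shoot = g_needle / 1.36
```

Dividing by γ > 1 makes the shoot projection smaller than the needle projection, consistent with light penetrating the small within-shoot gaps.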
Upon further examination, the failures of the quasi-automatic method were caused by severe mutual shading between needles and by relatively low photograph quality, which meant that the complete needle boundary lines could not be effectively detected using the LSD. The failure rate of the quasi-automatic method is very small (0.2%), and the mutual shading between needles can be reduced by changing the photograph acquisition position. Furthermore, the photograph quality can be improved by using a camera equipped with lenses with focal lengths larger than those used in this study, which is highly appropriate for acquiring photographs of needle samples located relatively far from the acquisition position. Therefore, the failure of the quasi-automatic method for these ten needle samples is not a key factor affecting its effectiveness in deriving needle inclination angle measurements. Only the NIDF and SIDF measurements obtained from the manual method were used in the subsequent analyses because the quasi-automatic method is inapplicable to shoot inclination angle estimation.

Table 4. The results of the Kolmogorov–Smirnov test comparing the manual and quasi-automatic needle inclination angle measurements in the five plots. D and P are the maximum difference between the cumulative distributions (D) and the corresponding level of probability (P).
| Plot | Height level | 2017 (D/P) | 2018 (D/P) |
| Plot 1 | Top | 0.03 / 1.0 | 0.05 / 0.99 |
| Plot 1 | Middle | 0.03 / 1.0 | 0.04 / 0.99 |
| Plot 1 | Bottom | \ * | \ |
| Plot 1 | All | 0.02 / 1.0 | 0.03 / 0.99 |
| Plot 2 | Top | 0.04 / 1.0 | 0.03 / 0.99 |
| Plot 2 | Middle | 0.04 / 1.0 | 0.04 / 0.95 |
| Plot 2 | Bottom | \ | \ |
| Plot 2 | All | 0.03 / 1.0 | 0.03 / 0.97 |
| Plot 3 | Top | 0.06 / 0.94 | 0.04 / 1.0 |
| Plot 3 | Middle | 0.05 / 0.98 | 0.03 / 1.0 |
| Plot 3 | Bottom | 0.04 / 1.0 | 0.03 / 1.0 |
| Plot 3 | All | 0.03 / 1.0 | 0.02 / 0.99 |
| Plot 4 | Top | 0.04 / 1.0 | 0.04 / 1.0 |
| Plot 4 | Middle | 0.03 / 1.0 | 0.04 / 1.0 |
| Plot 4 | Bottom | 0.05 / 0.97 | 0.04 / 1.0 |
| Plot 4 | All | 0.02 / 1.0 | 0.03 / 1.0 |
| Plot 5 | Top | 0.03 / 1.0 | 0.03 / 1.0 |
| Plot 5 | Middle | 0.04 / 0.98 | 0.04 / 1.0 |
| Plot 5 | Bottom | 0.04 / 0.99 | 0.05 / 0.92 |
| Plot 5 | All | 0.02 / 1.0 | 0.03 / 0.93 |

* Not applicable.

3.2. Comparison of Beta and Ellipsoidal Distribution Functions for Fitting the Shoot or Needle Inclination Angle Measurements

Figures 4 and 5 show that the beta distribution outperformed the ellipsoidal distribution in fitting the shoot or needle inclination angle measurements, yielding smaller deviations (f_D) in most cases. Moreover, the dominant trend toward small shoot inclination angles in the SIDF distributions was better captured by the beta distribution than by the ellipsoidal distribution (Figure 4). Obvious deviations were observed between the ellipsoidal and LDP SIDF distributions in the zenith angle range of 0°–5° in the five plots (Figure 4). This finding is consistent with the conclusion of Wang et al. [43], who found that the beta distribution outperformed the ellipsoidal distribution in most cases when fitting field leaf inclination angle measurements of grasses, shrubs, and broadleaf trees across 50 plant species, even though the plant species covered by the two studies differ considerably. The same conclusion was reported for four broadleaf tree species by Wagner and Hagemeier [30]. Furthermore, the beta distribution has recently been adopted by several studies to fit field leaf inclination angle measurements from LDP [3,14,29,44].
Therefore, the beta distribution was chosen to fit the shoot and needle inclination angle measurements of LDP for further analysis in this study. Forests 2021, 12, 30 12 of 22 Forests 2021, 12, x FOR PEER REVIEW 11 of 22 11 Table 4. The results of the Kolmogorov–Smirnov test comparing the manual and quasi-automatic measurements of needle inclination angle measurements in the five plots. D and P are the maximum difference between the cumulative distributions (D) and the corresponding level of probability (P). Plot Name Height Level 2017 2018 Plot Name Height Level 2017 2018 D P D P D P D P Plot 1 Top 0.03 1.0 0.05 0.99 Plot 2 Top 0.04 1.0 0.03 0.99 Middle 0.03 1.0 0.04 0.99 Middle 0.04 1.0 0.04 0.95 Bottom \* \ \ \ Bottom \ \ \ \ All 0.02 1.0 0.03 0.99 All 0.03 1.0 0.03 0.97 Plot 3 Top 0.06 0.94 0.04 1.0 Plot 4 Top 0.04 1.0 0.04 1.0 Middle 0.05 0.98 0.03 1.0 Middle 0.03 1.0 0.04 1.0 Bottom 0.04 1.0 0.03 1.0 Bottom 0.05 0.97 0.04 1.0 All 0.03 1.0 0.02 0.99 All 0.02 1.0 0.03 1.0 Plot 5 Top 0.03 1.0 0.03 1.0 Middle 0.04 0.98 0.04 1.0 Bottom 0.04 0.99 0.05 0.92 All 0.02 1.0 0.03 0.93 * Not applicable. 3.2. Comparison of Beta and Ellipsoidal Distribution Functions for Fitting the Shoot or Needle Inclination Angle Measurements Figures 4 and 5 show that the beta distribution was outperformed by the ellipsoidal distribution in fitting the shoot or needle inclination angle measurements, with smaller deviations (𝑓𝐷) in most cases. Moreover, the dominant trends of small shoot inclination angles for SIDF distributions were better described by the beta distribution compared with the ellipsoidal distribution (Figure 4). Obvious deviations were observed between the ellipsoidal and LDP SIDF distributions at the zenith angle range of 0°–5° in the five plots (Figure 4). This finding is consistent with the conclusion made by Wang et al. 
[43], who concluded that the beta distribution outperformed the ellipsoidal distribution in most cases in fitting the field leaf inclination angle measurements of grasses, shrubs, and broadleaf trees across 50 plant species, even though the plant species covered by the two studies are obviously different. The same conclusion was also reported for four broadleaf tree species in the study by Wagner and Hagemeier [30]. Furthermore, the beta distribution has been adopted by several studies recently to fit field leaf inclination angle measurements of LDP [3,14,29,44]. Therefore, the beta distribution was chosen to fit the shoot and needle inclination angle measurements of LDP for further analysis in this study. Forests 2021, 12, x FOR PEER REVIEW 12 of 22 12 Figure 4. A comparison of the beta and ellipsoidal distribution functions used to fit the whole- canopy shoot inclination angle measurements, which were manually obtained from the leveled digital photography (LDP) information of the five plots. The shoot inclination angle distributions of the LDP information are presented as histograms with a bin width of 5°. Only the whole-canopy shoot inclination angle measurements from 2018 are shown here, as the shoot inclination angle measurements at the three height levels (i.e., bottom, middle, and top) and those from 2017 showed similar behavior. Figure 4. A comparison of the beta and ellipsoidal distribution functions used to fit the whole- canopy shoot inclination angle measurements, which were manually obtained from the leveled digital photography (LDP) information of the five plots. The shoot inclination angle distributions of the LDP information are presented as histograms with a bin width of 5◦. Only the whole-canopy shoot inclination angle measurements from 2018 are shown here, as the shoot inclination angle measurements at the three height levels (i.e., bottom, middle, and top) and those from 2017 showed similar behavior. 
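The comparison above can be reproduced in outline: fit a two-parameter beta distribution to normalized inclination angles, fit Campbell's ellipsoidal distribution by optimizing its shape parameter χ, and score both with an f_D-style deviation (the maximum gap between the fitted and empirical cumulative distributions). This is a hedged sketch, not the paper's code: the angle sample is simulated, and the ellipsoidal density is normalized numerically rather than via the closed-form Λ constant.

```python
import numpy as np
from scipy import stats
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(42)
# Simulated planophile-like inclination angles in degrees (a stand-in for
# the LDP shoot/needle measurements, which are not reproduced here).
angles = 90.0 * rng.beta(1.2, 3.0, size=500)

x = np.clip(angles / 90.0, 1e-6, 1 - 1e-6)   # normalize to (0, 1)
a, b, _, _ = stats.beta.fit(x, floc=0, fscale=1)

def ellipsoidal_cdf(theta_deg, chi):
    """CDF of the ellipsoidal inclination distribution with shape chi,
    normalized numerically to avoid hand-coding the Lambda constant."""
    t = np.linspace(1e-4, np.pi / 2, 2000)
    pdf = chi**3 * np.sin(t) / (np.cos(t)**2 + chi**2 * np.sin(t)**2) ** 2
    cdf = np.cumsum(pdf)
    cdf /= cdf[-1]
    return np.interp(np.radians(theta_deg), t, cdf)

def max_cdf_deviation(model_cdf):
    """f_D-style score: max gap between model and empirical CDFs."""
    s = np.sort(angles)
    emp = np.arange(1, len(s) + 1) / len(s)
    return float(np.max(np.abs(model_cdf(s) - emp)))

beta_dev = max_cdf_deviation(lambda d: stats.beta.cdf(d / 90.0, a, b))
res = minimize_scalar(
    lambda c: max_cdf_deviation(lambda d: ellipsoidal_cdf(d, c)),
    bounds=(0.1, 10.0), method="bounded")
ellip_dev = res.fun
print(f"beta f_D = {beta_dev:.3f}, ellipsoidal f_D = {ellip_dev:.3f}")
```

Because the ellipsoidal density necessarily vanishes at 0°, it struggles with distributions that pile up at small angles, which is exactly the 0°–5° mismatch noted above.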
Figure 5. A comparison of the beta and ellipsoidal distribution functions used to fit the whole canopy shown in Plot 5. Only the whole-canopy needle inclination angle measurements from 2018 are shown here, as the needle inclination angle measurements at the three height levels (i.e., bottom, middle, and top) and those from 2017 showed similar behavior.

3.3. Shoot Angle Distribution

In the five plots, the SIDFs were strongly planophile (Figure 3) over the entire vertical profile throughout the 2 years (Figure 6). This phenomenon can be explained by the fact that horizontally oriented shoots were more effective than steeply inclined shoots in maximizing needle light interception from above and reducing needle light shading from neighboring needles, because the needles were usually distributed around the major axes of the cylindrical brachyplasts of shoots at certain angles (approximately 30°–60°, as determined through visual inspection) (Figure 1). Moreover, minor variations among the SIDFs of different height levels for the same plot were observed in all five plots (the SIDF curves at different height levels in each plot almost overlapped) (Figure 6). Minor variations were also found among the whole-canopy SIDFs of different plots for the same observation year, as well as among the whole-canopy SIDFs of different years for the same plot (Figure 6). Light conditions have been identified as key factors affecting the LIDF measurements for broadleaf forest plots [14]. The minor variations in SIDFs with height level and plots (whole-canopy scale) were further verified by the small variations in mean shoot inclination angles due to height level and plot in the five plots (<5.37° for 2017 and <4.36° for 2018) (Table 3). The minor variations in SIDFs indicate that the SIDFs of L. principis-rupprechtii forest plots might be mainly determined by tree species and were not obviously affected by other common factors, such as light conditions, which differed within the canopies of the five plots, given that the five plots in this study exhibited contrasting forest canopy characteristics (Table 1).

On the basis of Figure 6 and Table 3, we concluded that a stable and representative SIDF measurement for L. principis-rupprechtii plots could be obtained by randomly selecting shoot samples from the canopy for a total measurement number of approximately 80–100, without considering the height level, forest plots, or light conditions. Although we did not attempt to recalculate the SIDFs of the five L. principis-rupprechtii plots by randomly choosing measurements from all available shoot inclination angle measurements of each plot, the minor variations in SIDFs with height levels, plots, and observation years (Figure 6 and Table 3) indicated that the above conclusion was valid for L. principis-rupprechtii plots.
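Labels such as "planophile" can be assigned numerically by comparing an empirical inclination-angle distribution against de Wit's archetype functions and picking the closest one. The sketch below uses hypothetical 5°-binned shoot counts, not the paper's measured data; the archetype densities themselves are the standard ones.

```python
import numpy as np

# de Wit's archetype inclination densities on [0, pi/2] (radians);
# each integrates to 1 over the quarter circle.
ARCHETYPES = {
    "planophile":   lambda t: (2 / np.pi) * (1 + np.cos(2 * t)),
    "erectophile":  lambda t: (2 / np.pi) * (1 - np.cos(2 * t)),
    "plagiophile":  lambda t: (2 / np.pi) * (1 - np.cos(4 * t)),
    "extremophile": lambda t: (2 / np.pi) * (1 + np.cos(4 * t)),
    "uniform":      lambda t: np.full_like(t, 2 / np.pi),
    "spherical":    lambda t: np.sin(t),
}

def classify(bin_centers_deg, counts):
    """Label a binned inclination-angle distribution with the closest
    archetype (smallest RMSE between the two densities)."""
    t = np.radians(np.asarray(bin_centers_deg, float))
    w = np.asarray(counts, float)
    bin_w = t[1] - t[0]
    dens = w / (w.sum() * bin_w)          # empirical density per radian
    rmse = {name: float(np.sqrt(np.mean((f(t) - dens) ** 2)))
            for name, f in ARCHETYPES.items()}
    return min(rmse, key=rmse.get)

# Hypothetical shoot counts concentrated at small inclination angles,
# mimicking the strongly planophile SIDFs described above.
centers = np.arange(2.5, 90, 5.0)
counts = np.exp(-centers / 18.0)
label = classify(centers, counts)
print(label)
```

A distribution piled up near the horizontal is matched by the planophile archetype, whose density peaks at 0° and vanishes at 90°.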
Figure 6. Vertical and annual variations in shoot inclination angle distribution in the five plots.

3.4. Needle Angle Distribution

The whole-canopy NIDFs for all the five plots were consistently toward the planophile throughout the 2 years (Figure 7 and Table 2). Although the tree species investigated in this study was coniferous, this finding was consistent with previous results showing that the planophile is the prevailing LIDF for broadleaf tree species in the northern hemisphere [3,14,29]. The prevalence of the planophile for broadleaf and coniferous tree species might be attributed to the greater photosynthesis in horizontal leaves than in vertical leaves for canopies with LAI values up to 6.0 in northern latitudes, according to theoretical computations [50]. Similar to the small whole-canopy SIDF variations in plots (Figure 6), no large variations were observed between the whole-canopy NIDFs for the five different plots throughout the 2 years (Figure 7).
The small variations in whole-canopy NIDFs between plots were further verified by the small variations in mean needle inclination angles between plots in 2017 (<1.86°) and 2018 (<4.4°) (Table 2). Given that the five plots included very sparse (e.g., the stand densities of plots 1 and 2 ranged from 384 to 464 stems/ha) to very dense canopies (e.g., the stand density of plot 5 was 3904 stems/ha) and the LAI range of 3.04–6.69 for the five plots (Table 1), the light conditions within the canopies of the five plots were obviously different. This indicates that the variations in whole-canopy NIDFs of L. principis-rupprechtii forest plots are insensitive to light conditions. This finding is consistent with Raabe et al. [14], who concluded that the variations in the LIDFs of broadleaf tree species are insensitive to light conditions if the LIDFs are planophile, because the planophile function can maximize light use and improve the solar irradiance interception efficiency of the canopy. Given the small whole-canopy NIDF variations in plots (Figure 7), a universal whole-canopy NIDF could be obtained on the basis of the NIDFs of the five plots in this study. A slight shift in the zenith angles with the largest frequency was observed between the whole-canopy NIDFs for 2017 and 2018 for the same plot in the five plots (Figure 7). This shift might have been caused by the inconsistent field observation times in 2017 and 2018. Although the 2017 and 2018 field measurements were collected at the maximum PAI period for all of the plots, the field observation times in 2017 were closer to the needle senescence period compared with those in 2018. When the needles of L. principis-rupprechtii trees approached senescence, they began to soften and the angle between the shoot main axis of the cylindrical brachyplast and the needle enlarged, making the orientation of the needles close to the zenith due to the planophile SIDF. This behavior indicated that the whole-canopy NIDF values of L. principis-rupprechtii forest plots might change with seasons, because the angle between the shoot main axis of the cylindrical brachyplast and the needle gradually increases from the beginning of leaf emergence in late spring to the end of leaf expansion in early summer, in addition to the leaf senescence period.

Figure 7. Vertical and annual variations in needle inclination angle distributions in the five plots.
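Distributional shifts like the 2017 versus 2018 one, and the manual versus quasi-automatic comparison of Table 4, are quantified with the two-sample Kolmogorov–Smirnov statistic (the D and P columns). A minimal sketch with simulated samples standing in for two years of needle angles:

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(7)
# Illustrative stand-ins for two years of needle inclination angles
# (degrees); the 2018 sample is shifted slightly, mimicking the
# senescence-related shift discussed above. Not the paper's data.
needles_2017 = 90.0 * rng.beta(2.0, 3.0, size=400)
needles_2018 = 90.0 * rng.beta(2.3, 3.0, size=400)

res = ks_2samp(needles_2017, needles_2018)
# D is the maximum gap between the two empirical CDFs; a small D with a
# large P (as throughout Table 4) means the two distributions are
# statistically indistinguishable.
print(f"D = {res.statistic:.3f}, P = {res.pvalue:.3f}")
```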
In most cases, small variations in NIDFs with the height levels were observed in the five plots in 2017 and 2018 (Figure 7). This trend was further verified by the relatively small differences between the mean needle inclination angles at different height levels for the same plot in the five plots (ranging from 0.02° to 9.22° in 2017 and 0.51° to 7.85° in 2018) (Table 2). Moreover, in most cases, more horizontally oriented needles were observed at the bottom level than at the two other height levels in plots 3–5, and at the middle level than at the top level in plots 1–2 (Figure 7). By contrast, in most cases, more needles with inclination angles larger than 30° were observed in the top canopy compared with the two other height levels (Figure 7). This finding was consistent with previously reported findings for broadleaf tree species [14,51]. More needles with large inclination angles in the top canopy are helpful for reducing the amount of solar irradiance, which is usually received in excess in the top canopy, especially at midday hours. The reduced solar irradiance interception at the top canopy is further helpful in increasing the amount of solar irradiance that reaches the middle and lower canopies [14,52]. By contrast, more horizontally oriented needles in the bottom canopy could improve the solar irradiance interception efficiency of the bottom canopy, because solar irradiance mainly arrives from above at directions close to the zenith [52,53]. On the other hand, the variations in the NIDFs over the entire vertical profile can increase the efficiency of the solar irradiance interception for the entire canopy [54].

Obvious differences were observed between the whole-canopy NIDFs and SIDFs from the same plot in the five plots, even though both showed an obvious trend of planophile distribution in most cases (Figures 6 and 7). For example, the mean values of the whole-canopy NIDFs were obviously larger than those of the whole-canopy SIDFs for the same plot in the five plots (Tables 2 and 3). Moreover, the zenith angles with the largest frequency for the whole-canopy NIDFs were also obviously larger than those of the whole-canopy SIDFs for the same plot in the five plots (Figures 6 and 7). Therefore, we can conclude that the shoots exhibited inclination angle distributions distinct from those of the needles for L. principis-rupprechtii plots.

3.5. Needle and Shoot Projection Functions

The whole-canopy needle projection functions were consistently toward the planophile in all the five plots, with minor variations between plots and observation years (Figure 8). These minor variations illustrated that a representative needle projection function could be obtained for L. principis-rupprechtii plots. The needle projection functions of the five L. principis-rupprechtii plots (Figure 8) obviously deviated from those of the case with a spherical leaf inclination distribution (G ≡ 0.5), which has been commonly assumed for plant canopies in previous studies [2,28,29,55,56], especially at viewing directions close to the zenith (the needle G value is equal to 0.76 at 1° in plot 1) (Figure 8). Therefore, the assumption of a spherical leaf inclination angle distribution would result in errors in the LAI and clumping index estimation for forest plots, because their leaf inclination angle distributions usually deviate from the spherical case [29]. For example, Pisek et al.
[29] reported that the assumption of a spherical leaf angle distribution underestimates the clumping index by 28–47% at the zenith in a birch plot. Therefore, we suggest obtaining field leaf angle distribution measurements of forest plots with tree species that have not been investigated comprehensively whenever possible. Another solution for minimizing the impact of the assumption of a spherical leaf angle distribution on the LAI and clumping index estimation of forest plots without field leaf inclination angle measurements is to utilize a specific characteristic of leaf or woody-component inclination angle distributions: their projection functions approximately intersect at zenith angles close to 57.3°, where the leaf or woody-component projection coefficients equal approximately 0.5 (also found in this study [Figure 8]) [55–58]. Given that the differences between the field-collected and assumed leaf or woody-component projection coefficients of forest plots at 57.3° are small, their effects on the LAI and clumping index estimation are small and acceptable [57]. Compared with the needle projection functions (Figure 8), the shoot projection functions of the five plots did not consistently approach a single typical leaf projection function, and instead approximated three typical leaf projection functions, namely the plagiophile, uniform, and extremophile functions (Figure 9) [3,29,56]. The variations between the shoot projection functions of the five plots (Figure 9) were mainly attributed to variations in γ among the five plots, given the minor variations in needle projection functions in the plots (Figure 8). The shoot projection functions of the five plots intersected the line of G = 0.5 at zenith angles of 27°–43° (Figure 9), which consistently deviated largely from the commonly reported intersection zenith angle of 57.3° for the leaf or woody-component projection functions of forest plots [2,29,55,56,59].
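The hinge-angle property invoked here (G ≈ 0.5 near 57.3° for almost any inclination distribution) can be checked numerically with the standard direction-cosine projection kernel, G(θv) = ∫ A(θv, θl) f(θl) dθl. This is an illustrative sketch: the spherical test density, grid resolution, and integration scheme are our choices, not the paper's implementation.

```python
import numpy as np

def kernel(theta_v, theta_l):
    """Projection kernel A(theta_v, theta_l) for foliage of inclination
    theta_l viewed from zenith angle theta_v (both in radians)."""
    ct = 1.0 / (np.tan(theta_v) * np.tan(theta_l))   # cot(v) * cot(l)
    if abs(ct) >= 1.0:
        return np.cos(theta_v) * np.cos(theta_l)
    psi = np.arccos(ct)
    return np.cos(theta_v) * np.cos(theta_l) * (
        1 + (2 / np.pi) * (np.tan(psi) - psi))

def G(theta_v, f):
    """Projection coefficient: integrate the kernel against the
    inclination density f(theta_l) over (0, pi/2)."""
    tl = np.linspace(1e-3, np.pi / 2 - 1e-3, 2000)
    a = np.array([kernel(theta_v, t) for t in tl])
    return float(np.sum(a * f(tl)) * (tl[1] - tl[0]))

spherical = np.sin                 # spherical inclination density
hinge = np.radians(57.3)

g_sph_0 = G(1e-3, spherical)       # near-nadir view: should be 0.5
g_sph_h = G(hinge, spherical)      # hinge-angle view: should be 0.5
# At the hinge angle the kernel itself stays near 0.5 for any single
# inclination angle, which is why G(57.3 deg) ~ 0.5 for any distribution.
a_range = [kernel(hinge, t) for t in np.linspace(0.05, 1.5, 30)]
print(round(g_sph_0, 3), round(g_sph_h, 3),
      round(min(a_range), 2), round(max(a_range), 2))
```

Because the kernel varies only within roughly 0.45–0.54 at 57.3°, field-collected and assumed projection coefficients necessarily agree closely at that angle, as stated above.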
This deviation indicated that the derived shoot projection functions might contain a certain degree of estimation error. In this study, the error in the estimated shoot projection function might originate from the γ measurements, because a constant γ value was used for all viewing zenith angles in the shoot projection function estimation. However, Equation (14) shows that the γ estimates varied with zenith angles because the shoot projection areas changed with projection zenith angles. Therefore, if the method of this study is used to obtain the shoot projection function of coniferous forest plots in the future, we recommend obtaining γ estimates at zenith angles of 0°–90° with an interval of 1°, coinciding with the zenith angle range and interval used for the needle projection functions.

Figure 8. Whole-canopy needle projection function against viewed zenith angles in the five plots in 2017 (a) and 2018 (b).

Figure 9. Derived whole-canopy shoot projection function against viewed zenith angles in the five plots in 2017.

4. Conclusions

The shoot and needle inclination angle distributions and projection functions of five contrasting L. principis-rupprechtii plots were obtained on the basis of the LDP method. The main conclusions of this study are as follows:
(1) The quasi-automatic method developed in this study is effective and accurate enough to obtain the NIDFs.
(2) The whole-canopy SIDFs and NIDFs tended to be planophile and exhibited minor variations between plots and observation years. The NIDF was slightly sensitive to light conditions, given that small variations in NIDFs were observed between NIDF measurements obtained at different height levels.
(3) The whole-canopy and height-level needle projection functions tended to be planophile, and minor variations in needle projection functions with plots and observation years were observed.
(4) The method for obtaining shoot projection functions based on needle projection functions and γ measurements used in this study tended to produce estimates with a certain degree of error, because the zenithal dependence of γ was ignored in the estimation.
The performance of this method can be further evaluated by considering the variation in γ with zenith angles in the future.

Author Contributions: J.Z. proposed the concept of this study, conducted the experiment, analyzed the data, and wrote the paper. J.Z. and P.L. conducted the experiment and analyzed the data. W.H., P.Z., and Y.Z. analyzed the data. All authors have read and agreed to the published version of the manuscript.

Funding: This work was jointly funded by the National Key Research and Development Program from the Ministry of Science and Technology of China (2016YFB0501501) and the National Natural Science Foundation of China (Grant Nos. 41871233, 41371330, 41001203).

Institutional Review Board Statement: Not applicable.

Informed Consent Statement: Not applicable.

Data Availability Statement: The data of this work can be shared with readers upon request.

Acknowledgments: We are grateful to the anonymous reviewers, whose comments much helped to improve this paper.

Conflicts of Interest: The authors declare no conflict of interest.

References
1. Zou, X.; Mõttus, M.; Tammeorg, P.; Torres, C.L.; Takala, T.; Pisek, J.; Mäkelä, P.; Stoddard, F.L.; Pellikka, P. Photographic measurement of leaf angles in field crops. Agric. For. Meteorol. 2014, 184, 137–146. [CrossRef]
2. Ross, J. The Radiation Regime and Architecture of Plant Stands; Dr. W. Junk Publ.: The Hague, The Netherlands, 1981.
3. Chianucci, F.; Pisek, J.; Raabe, K.; Marchino, L.; Ferrara, C.; Corona, P. A dataset of leaf inclination angles for temperate and boreal broadleaf woody species. Ann. For. Sci. 2018, 75, 50. [CrossRef]
4. Norman, J.M.; Campbell, G.S. Canopy structure. In Plant Physiological Ecology; Pearcy, R.W., Ehleringer, J.R., Mooney, H.A., Rundel, P.W., Eds.; Springer: Dordrecht, The Netherlands, 1989; pp. 301–325.
5. Lang, A.R.G. Leaf orientation of a cotton plant. Agric. Meteorol. 1973, 11, 37–51. [CrossRef]
6. Sinoquet, H.; Rivet, P.
Measurement and visualization of the architecture of an adult tree based on a three-dimensional digitising device. Trees 1997, 11, 265–270. [CrossRef]
7. Sonohat, G.; Sinoquet, H.; Kulandaivelu, V.; Combes, D.; Lescourret, F. Three-dimensional reconstruction of partially 3D-digitized peach tree canopies. Tree Physiol. 2006, 26, 337–351. [CrossRef]
8. Kucharik, C.J.; Norman, J.M.; Gower, S.T. Measurements of leaf orientation, light distribution and sunlit leaf area in a boreal aspen forest. Agric. For. Meteorol. 1998, 91, 127–148. [CrossRef]
9. Chen, J.M.; Black, T.A.; Adams, R.S. Evaluation of hemispherical photography for determining plant area index and geometry of a forest stand. Agric. For. Meteorol. 1991, 56, 129–143. [CrossRef]
10. Wagner, S.; Hagemeier, M. Method of segmentation affects leaf inclination angle estimation in hemispherical photography. Agric. For. Meteorol. 2006, 139, 12–24. [CrossRef]
11. Macfarlane, C.; Arndt, S.K.; Livesley, S.J.; Edgar, A.C.; White, D.A.; Adams, M.A.; Eamus, D. Estimation of leaf area index in eucalypt forest with vertical foliage, using cover and fullframe fisheye photography. For. Ecol. Manag. 2007, 242, 756–763. [CrossRef]
12. Macfarlane, C.; Hoffman, M.; Eamus, D.; Kerp, N.; Higginson, S.; McMurtrie, R.; Adams, M. Estimation of leaf area index in eucalypt forest using digital photography. Agric. For. Meteorol. 2007, 143, 176–188. [CrossRef]
13. Ryu, Y.; Sonnentag, O.; Nilson, T.; Vargas, R.; Kobayashi, H.; Wenk, R.; Baldocchi, D.D. How to quantify tree leaf area index in an open savanna ecosystem: A multi-instrument and multi-model approach. Agric. For. Meteorol. 2010, 150, 63–76. [CrossRef]
14. Raabe, K.; Pisek, J.; Sonnentag, O.; Annuk, K. Variations of leaf inclination angle distribution with height over the growing season and light exposure for eight broadleaf tree species. Agric. For. Meteorol. 2015, 214–215, 2–11. [CrossRef]
15. McNeil, B.E.; Pisek, J.; Lepisk, H.; Flamenco, E.A.
Measuring leaf angle distribution in broadleaf canopies using UAVs. Agric. For. Meteorol. 2016, 218–219, 204–208. [CrossRef] 16. Qi, J.; Xie, D.; Li, L.; Zhang, W.; Mu, X.; Yan, G. Estimating Leaf Angle Distribution From Smartphone Photographs. IEEE Geosci. Remote Sens. Lett. 2019, 16, 1190–1194. [CrossRef] 17. Zheng, G.; Moskal, L.M. Leaf Orientation Retrieval From Terrestrial Laser Scanning (TLS) Data. Geosci. Remote Sens. IEEE Trans. 2012, 50, 3970–3979. [CrossRef] 18. Hosoi, F.; Omasa, K. Factors contributing to accuracy in the estimation of the woody canopy leaf area density profile using 3D portable lidar imaging. J. Exp. Bot. 2007, 58, 3463–3473. [CrossRef] [PubMed] 19. Hosoi, F.; Omasa, K. Estimating leaf inclination angle distribution of broad-leaved trees in each part of the canopies by a high-resolution portable scanning lidar. J. Agric. Meteorol. 2015, 71, 136–141. [CrossRef] 20. Bailey, B.N.; Mahaffee, W.F. Rapid measurement of the three-dimensional distribution of leaf orientation and the leaf angle probability density function using terrestrial LiDAR scanning. Remote Sens. Environ. 2017, 194, 63–76. [CrossRef] 21. Vicari, M.B.; Pisek, J.; Disney, M. New estimates of leaf angle distribution from terrestrial LiDAR: Comparison with measured and modelled estimates from nine broadleaf tree species. Agric. For. Meteorol. 2019, 264, 322–333. [CrossRef] 22. Liu, J.; Skidmore, A.K.; Wang, T.; Zhu, X.; Premier, J.; Heurich, M.; Beudert, B.; Jones, S. Variation of leaf angle distribution quantified by terrestrial LiDAR in natural European beech forest. ISPRS J. Photogramm. Remote Sens. 2019, 148, 208–220. [CrossRef] 23. Itakura, K.; Hosoi, F. Estimation of Leaf Inclination Angle in Three-Dimensional Plant Images Obtained from Lidar. Remote Sens. 2019, 11, 344. 
[CrossRef] http://dx.doi.org/10.1016/j.agrformet.2013.09.010 http://dx.doi.org/10.1007/s13595-018-0730-x http://dx.doi.org/10.1016/0002-1571(73)90049-6 http://dx.doi.org/10.1007/s004680050084 http://dx.doi.org/10.1093/treephys/26.3.337 http://dx.doi.org/10.1016/S0168-1923(98)00058-6 http://dx.doi.org/10.1016/0168-1923(91)90108-3 http://dx.doi.org/10.1016/j.agrformet.2006.05.008 http://dx.doi.org/10.1016/j.foreco.2007.02.021 http://dx.doi.org/10.1016/j.agrformet.2006.10.013 http://dx.doi.org/10.1016/j.agrformet.2009.08.007 http://dx.doi.org/10.1016/j.agrformet.2015.07.008 http://dx.doi.org/10.1016/j.agrformet.2015.12.058 http://dx.doi.org/10.1109/LGRS.2019.2895321 http://dx.doi.org/10.1109/TGRS.2012.2188533 http://dx.doi.org/10.1093/jxb/erm203 http://www.ncbi.nlm.nih.gov/pubmed/17977852 http://dx.doi.org/10.2480/agrmet.D-14-00049 http://dx.doi.org/10.1016/j.rse.2017.03.011 http://dx.doi.org/10.1016/j.agrformet.2018.10.021 http://dx.doi.org/10.1016/j.isprsjprs.2019.01.005 http://dx.doi.org/10.3390/rs11030344 Forests 2021, 12, 30 21 of 22 24. Xu, Q.; Cao, L.; Xue, L.; Chen, B.; An, F.; Yun, T. Extraction of Leaf Biophysical Attributes Based on a Computer Graphic-based Algorithm Using Terrestrial Laser Scanning Data. Remote Sens. 2018, 11, 15. [CrossRef] 25. Liu, J.; Wang, T.; Skidmore, A.K.; Jones, S.; Heurich, M.; Beudert, B.; Premier, J. Comparison of terrestrial LiDAR and digital hemispherical photography for estimating leaf angle distribution in European broadleaf beech forests. ISPRS J. Photogramm. Remote Sens. 2019, 158, 76–89. [CrossRef] 26. Leblanc, S.G.; Chen, J.M.; Fernandes, R.; Deering, D.W.; Conley, A. Methodology comparison for canopy structure parameters extraction from digital hemispherical photography in boreal forests. Agric. For. Meteorol. 2005, 129, 187–207. [CrossRef] 27. Ma, L.; Zheng, G.; Eitel, J.U.H.; Magney, T.S.; Moskal, L.M. Retrieving forest canopy extinction coefficient from terrestrial and airborne lidar. Agric. For. Meteorol. 
2017, 236, 1–21. [CrossRef] 28. Pisek, J.; Ryu, Y.; Alikas, K. Estimating leaf inclination and G-function from leveled digital camera photography in broadleaf canopies. Trees Struct. Funct. 2011, 25, 919–924. [CrossRef] 29. Pisek, J.; Sonnentag, O.; Richardson, A.D.; Mõttus, M. Is the spherical leaf inclination angle distribution a valid assumption for temperate and boreal broadleaf tree species? Agric. For. Meteorol. 2013, 169, 186–194. [CrossRef] 30. Utsugi, H.; Araki, M.; Kawasaki, T.; Ishizuka, M. Vertical distributions of leaf area and inclination angle, and their relationship in a 46-year-old Chamaecyparis obtusa stand. For. Ecol. Manag. 2006, 225, 104–112. [CrossRef] 31. Zou, J.; Leng, P.; Hou, W.; Zhong, P.; Chen, L.; Mai, C.; Qian, Y.; Zuo, Y. Evaluating Two Optical Methods of Woody-to-Total Area Ratio with Destructive Measurements at Five Larix gmelinii Rupr. Forest Plots in China. Forests 2018, 9, 746. [CrossRef] 32. Zou, J.; Zuo, Y.; Zhong, P.; Hou, W.; Leng, P.; Chen, B. Performance of Four Optical Methods in Estimating Leaf Area Index at Elementary Sampling Unit of Larix principis-rupprechtii Forests. Forests 2019, 11, 30. [CrossRef] 33. Kimes, D.S.; Kirchner, J.A. Diurnal variations of vegetation canopy structure. Int. J. Remote Sens. 1983, 4, 257–271. [CrossRef] 34. Zou, J.; Hou, W.; Chen, L.; Wang, Q.; Zhong, P.; Zuo, Y.; Luo, S.; Leng, P. Evaluating the impact of sampling schemes on leaf area index measurements from digital hemispherical photography in Larix gmeliniiLarix principis-rupprechtii Rupr. forest plots. For. Ecosyst. 2020, 7, 52. [CrossRef] 35. Gioi, R.G.v.; Jakubowicz, J.; Morel, J.; Randall, G. LSD: A Fast Line Segment Detector with a False Detection Control. IEEE Trans. Pattern Anal. Mach. Intell. 2010, 32, 722–732. [CrossRef] [PubMed] 36. Gioi, R.G.v.; Jakubowicz, J.; Morel, J.-M.; Randall, G. LSD: A Line Segment Detector. Image Process. Line 2012, 2, 35–55. [CrossRef] 37. 
Lin, Y.; Wang, C.; Cheng, J.; Chen, B.; Jia, F.; Chen, Z.; Li, J. Line segment extraction for large scale unorganized point clouds. ISPRS J. Photogramm. Remote Sens. 2015, 102, 172–183. [CrossRef] 38. Cho, N.; Yuille, A.; Lee, S. A Novel Linelet-Based Representation for Line Segment Detection. IEEE Trans. Pattern Anal. Mach. Intell. 2018, 40, 1195–1208. [CrossRef] 39. Hofer, M.; Maurer, M.; Bischof, H. Efficient 3D scene abstraction using line segments. Comput. Vis. Image Underst. 2017, 157, 167–178. [CrossRef] 40. Tang, G.; Xiao, Z.; Liu, Q.; Liu, H. A Novel Airport Detection Method via Line Segment Classification and Texture Classification. IEEE Geosci. Remote Sens. Lett. 2015, 12, 2408–2412. [CrossRef] 41. Sun, Y.; Zhao, L.; Huang, S.; Yan, L.; Dissanayake, G. Line matching based on planar homography for stereo aerial images. ISPRS J. Photogramm. Remote Sens. 2015, 104, 1–17. [CrossRef] 42. W.H. Freeman and Company. Biometry: Principles and Practice of Statistics in Biological Research, 2nd ed.; W.H. Freeman and Company: San Francisco, CA, USA, 1981. 43. Wang, W.M.; Li, Z.L.; Su, H.B. Comparison of leaf angle distribution functions: Effects on extinction coefficient and fraction of sunlit foliage. Agric. For. Meteorol. 2007, 143, 106–122. [CrossRef] 44. Pisek, J.; Lang, M.; Nilson, T.; Korhonen, L.; Karu, H. Comparison of methods for measuring gap size distribution and canopy nonrandomness at Järvselja RAMI (RAdiation transfer Model Intercomparison) test sites. Agric. For. Meteorol. 2011, 151, 365–377. [CrossRef] 45. Campbell, G.S. Derivation of an angle density function for canopies with ellipsoidal leaf angle distributions. Agric. For. Meteorol. 1990, 49, 173–176. [CrossRef] 46. Ross, J. Radiative transfer in plant communities. In Vegetation and the Atmosphere; Monteith, J.L., Ed.; Academic Press: London, UK, 1975; Volume 1, pp. 13–55. 47. De Wit, C.T. Photosynthesis of Leaf Canopies; Wageningen University: Wageningen, The Netherlands, 1965. 48. 
Lemeur, R.; Blad, B.L. A critical review of light models for estimating the shortwave radiation regime of plant canopies. Agric. Meteorol. 1974, 14, 255–286. [CrossRef] 49. Chen, J.M.; Rich, P.M.; Gower, S.T.; Norman, J.M.; Plummer, S. Leaf area index of boreal forests: Theory, techniques and measurements. J. Geophys. Res. 1997, 102, 29429–29443. [CrossRef] 50. Oker-Blom, P.; Kellomäki, S. Effect of angular distribution of foliage on light absorption and photosynthesis in the plant canopy: Theoretical computations. Agric. Meteorol. 1982, 26, 105–116. [CrossRef] 51. Kull, O.; Broadmeadow, M.; Kruijt, B.; Meir, P. Light distribution and foliage structure in an oak canopy. Trees 1999, 14, 55–64. [CrossRef] 52. Niinemets, Ü. A review of light interception in plant stands from leaf to canopy in different plant functional types and in species with varying shade tolerance. Ecol. Res. 2010, 25, 693–714. [CrossRef] 53. King, D.A. The Functional Significance of Leaf Angle in Eucalyptus. Aust. J. Bot. 1997, 45, 619–639. 
[CrossRef] http://dx.doi.org/10.3390/rs11010015 http://dx.doi.org/10.1016/j.isprsjprs.2019.09.015 http://dx.doi.org/10.1016/j.agrformet.2004.09.006 http://dx.doi.org/10.1016/j.agrformet.2017.01.004 http://dx.doi.org/10.1007/s00468-011-0566-6 http://dx.doi.org/10.1016/j.agrformet.2012.10.011 http://dx.doi.org/10.1016/j.foreco.2005.12.028 http://dx.doi.org/10.3390/f9120746 http://dx.doi.org/10.3390/f11010030 http://dx.doi.org/10.1080/01431168308948545 http://dx.doi.org/10.1186/s40663-020-00262-z http://dx.doi.org/10.1109/TPAMI.2008.300 http://www.ncbi.nlm.nih.gov/pubmed/20224126 http://dx.doi.org/10.5201/ipol.2012.gjmr-lsd http://dx.doi.org/10.1016/j.isprsjprs.2014.12.027 http://dx.doi.org/10.1109/TPAMI.2017.2703841 http://dx.doi.org/10.1016/j.cviu.2016.03.017 http://dx.doi.org/10.1109/LGRS.2015.2479681 http://dx.doi.org/10.1016/j.isprsjprs.2014.12.003 http://dx.doi.org/10.1016/j.agrformet.2006.12.003 http://dx.doi.org/10.1016/j.agrformet.2010.11.009 http://dx.doi.org/10.1016/0168-1923(90)90030-A http://dx.doi.org/10.1016/0002-1571(74)90024-7 http://dx.doi.org/10.1029/97JD01107 http://dx.doi.org/10.1016/0002-1571(82)90036-X http://dx.doi.org/10.1007/s004680050209 http://dx.doi.org/10.1007/s11284-010-0712-4 http://dx.doi.org/10.1071/BT96063 Forests 2021, 12, 30 22 of 22 54. Cescatti, A.; Alessandro, U. Leaf to Landscape. In Photosynthetic Adaptation: Chloroplast to Landscape; Smith, W.K., Vogelmann, T.C., Critchley, C., Eds.; Springer: Berlin/Heidelberg, Germany, 2004; pp. 42–85. 55. Weiss, M.; Baret, F.; Smith, G.J.; Jonckheere, I.; Coppin, P. Review of methods for in situ leaf area index (LAI) determination: Part II. Estimation of LAI, errors and sampling. Agric. For. Meteorol. 2004, 121, 37–53. [CrossRef] 56. Yan, G.; Hu, R.; Luo, J.; Weiss, M.; Jiang, H.; Mu, X.; Xie, D.; Zhang, W. Review of indirect optical measurements of leaf area index: Recent advances, challenges, and perspectives. Agric. For. Meteorol. 2019, 265, 390–411. [CrossRef] 57. 
IEEE TRANSACTIONS ON PROFESSIONAL COMMUNICATION, VOL. 48, NO. 3, SEPTEMBER 2005

Book Review: Mary Jo Reiff, Approaches to Audience: An Overview of the Major Perspectives
Reviewed by Joshua Tusin

Index Terms—Audience, cognitive, reader, rhetoric, text, writer.
[Manuscript received May 5, 2005; revised May 26, 2005. The reviewer is with the Technical Communication Program and the Research Institute, Illinois Institute of Technology, Chicago, IL 60616 USA (email: jtusin@iitri.org). IEEE DOI 10.1109/TPC.2005.853941. Book publisher: Chicago: Parlay Press, 2004; 155 pp., including afterword and index.]

Let me begin the way Mary Jo Reiff begins her book, Approaches to Audience: An Overview of the Major Perspectives, with the same three quotes:

"Of the three elements in speech-making—speaker, subject, and person addressed—it is the last one, the hearer, that determines the speech's end and object." (Aristotle)

"I feel strongly that the reader plays an active role, and that the writer, when writing, must be consciously aware of the reader's activity." (Russell C. Long)

" … the audience is a primary factor, perhaps the primary factor, influencing discourse." (James Porter) [p. v]

Reiff sets out to enhance our concept of audience by exploring various viewpoints on the subject, including rhetorical, cognitive, textual, contextual, and social-constructionist perspectives. What is clear from the outset is that no single perspective is comprehensive and that understanding audience is no simple matter. Reiff successfully navigates this difficult topic by considering a multitude of approaches within an academic framework, while doing so in a format that is very accessible and usable. The author does not intend to take an in-depth look at any given perspective, but rather to give an overview that is sufficiently detailed to be useful but not so detailed as to lose focus. The organization of the book is clearly laid out and supports its plan. The book is divided into six chapters, in addition to a preface and afterword that elaborate on the purpose of the text and its place in the field. As an associate professor and director of composition at the University of Tennessee, Reiff has spent a
number of years tackling the concept of how to successfully reach an audience. She demonstrates quickly that this is indeed her area of expertise by building a historical picture for us in the first chapter, "Rhetorical approaches: A brief history," which starts with Aristotle's classical rhetoric and its function to persuade public audiences and moves through the medieval and Renaissance periods and then the eighteenth through twentieth centuries. Reiff traces the rise, fall, and resurrection of audience as a critical component of discourse—all in a modest 23 pages.

Reiff continues with an economical treatment of cognitive, textual, contextual, and social constructionist approaches to audience, using no more than 25 pages in any chapter and as few as 15 for the social constructionist approach. Chapter 2, "Cognitive approaches: The reader in the writer," looks at "egocentrism and audience awareness" as well as some differences between spoken and written discourse, while also reviewing some empirical research. Chapter 3, "Textual approaches: The reader in the text," examines the implications of language use for different types of readers. Structuralism, formalism, and phenomenology are all considered here. In Chapter 4, "Contextual approaches: Uniting audience, writer, and text," Reiff focuses on the relationship between author and reader as mediated by the chosen text, and on the important role that context plays. The penultimate chapter, "Social constructionist approaches: From context to community," illustrates how the audience is a community. Teaching applications are especially highlighted here. Reiff spends about 30 pages on the sixth and final chapter, "Acknowledging Multiple Audiences," the most important section of the book and the culmination of all of the previous chapters. Here Reiff explores the reality that most situations involve multiple audiences and discusses organizational models and ways of handling conflict that can arise in these situations.
Each chapter is constructed effectively, not only to deliver information in a thorough yet concise manner but also to give readers ways to expand on the chapter. Many readers will find most useful the "questions for further consideration," "recommendations for further reading," and "for discussion" sections interspersed throughout the text, as well as the "applications for teachers" included at various points. The examples are also plentiful, ensuring that the information is practical and not simply theoretical. As a case in point, in discussing how different communities use different language, Reiff presents three different reviews of the same art exhibit on AIDS, and discusses the language differences in such a way that teachers will recognize what conventions are expected in different settings. Other cases provide constructive suggestions for assignments and classroom exercises. Each chapter begins with a short introduction, often including some history of the approach, in order to frame the upcoming work. This may seem unremarkable, but these frames are effective introductions, and the author accomplishes her goals. Following the introductions, we get to the information-rich heart of the chapters. Scattered throughout these sections readers will find discussion questions and applications for teachers. Each chapter wraps up with a discussion of the limitations of the approach, a conclusion, additional questions, and a list of further reading. All of the chapters build to a natural concluding chapter, about multiple audiences, a conclusion that is obviously the author's purpose from the outset. Reiff offers her summary in the form of an afterword, where she again reminds us of the difficulties in addressing audiences and why she has framed her book the way she has.
The closing of the afterword is perhaps most telling: As a result of this complex interaction among writers (who envision their multiple audiences), texts (that construct these multiple audiences through various textual clues and linguistic devices) and readers (who are actual and multiple), a fully integrated perspective on audience is achieved: one that encompasses the multiple locations and descriptions of audience and recognizes the multiplicity of real readers. [p. 146] Reiff argues that her book has successfully mapped out the major perspectives on audience, in the process updating the work of Ede and Lunsford two decades ago, and ultimately provides a work that acknowledges and begins to deal with the sticky reality of multiple audiences. The closing sentence above gets to the point: readers are many and multiple, and no two are exactly alike. Understanding how to effectively reach as many readers as possible is not easy, and Reiff manages to help in the challenging process. Refreshing in Reiff’s work is a pervasive humility. The book does not cover any of its topics exhaustively, and the author does not pretend to have done so. The emphasis on limitations and the extensive suggested readings are clear signals that the book is meant to be an overview, a good start, and in that capacity it has done well. Reiff also presents balanced information, including views she does not necessarily support. This is clear in some of the discussion questions, such as “Do you agree with Britton et al.’s claim that writing ’emancipates’ writers from audience intrusions?” [p. 34]. On the surface it would seem obvious, from Reiff’s vantage point, what she thinks the answer would be. But in recognizing the value of discussing the topic, she gives Britton nearly a page of space, leaving it to the readers to decide what effect the physical space between writer and reader means to the discourse. 
This style, including frequent discussion points, is especially useful for the targeted audience. As a survey, the book will be an effective teaching tool in undergraduate and graduate classrooms. The focus on various approaches to audience makes the book useful to an interdisciplinary audience, including those in fields such as rhetoric and composition, speech communication, cognitive psychology, literary studies and linguistics. In fact, anyone who will be writing, speaking, or otherwise communicating professionally would benefit from time spent with this book. The most useful application of this book, it seems, will be for teachers. Education departments should start making this text a required part of their program as it would undoubtedly stimulate invaluable discussion about the difficult task of teaching. Reiff may not realize it, but there are perhaps few audiences more diverse than children and young adults in school, and teachers who have studied and thought critically about how to address those audiences will most certainly be increasing their value to those students. The applications for teachers are stimulating and useful, with examples of different types of communication to make the book as practical as possible. Looking for a moment at a high school art teacher, we see a host of audience-related challenges ahead. The teacher, obviously, must be prepared to successfully communicate with 14–18-year-olds who all have different learning needs and a variety of preferred styles. Some will be interested in art while others will be interested in anything but art. The same teacher also needs to be able to address the parents of these students, who will range dramatically in age and often are older than the teacher. This, of course, must be done on an individual basis as well as with groups, and at times may include the student along with the parents. Perhaps even more daunting than the parents are supervisors and administrators in the school.
Plus, the teacher will need to prepare a budget or some sort of financial look at what is needed in the classroom. All of these situations will be made easier with this book, with sections about handling conflicts, classroom implications and examples, including a proposal to enhance a digital photography program. The downside to this book is what it chooses not to do. Each of the approaches to audience that Reiff discusses could be expanded to a book in itself, or at least receive more than 20 pages. By no means can Reiff complete a thorough analysis in such a short span. And given that the author makes it so clear that approaching audiences is complex and challenging, to address the whole topic in a short, easy-to-read book is almost a contradiction. Lastly, covering the section on multiple audiences—the most important and practical part of the book by all standards—in about 30 pages leaves much to be desired. However, this strategy is consistent with Reiff's goal. She has deliberately used an approach that will be accessible to many and in fact usable by those who need it most. The pages can get tattered and marked up as the reader forms an understanding of how to be an effective communicator. In this process, some approaches or topics will pique the reader's interest, and this book will be one-stop shopping to find other materials to satiate that interest. In the end, we are left with a book that meets its goal and provides as useful a guide for professional communicators as anybody could desire. The purpose of each exercise is clear, the language accessible. The book does not pretend to be an "authoritative guide," nor does it try to be an "idiot's guide to communicating." It treats the reader with respect and checks pretension at the door. What is left is quite simply an invaluable way to think about audiences and how to reach them effectively.
Spontaneous intracranial hypotension presenting as thunderclap headache: a case report

Chang et al. BMC Research Notes (2015) 8:108. DOI 10.1186/s13104-015-1068-1. CASE REPORT, Open Access.

Thashi Chang, Chaturaka Rodrigo and Lasitha Samarakoon

Abstract

Background: Spontaneous intracranial hypotension is a rare but treatable cause of a disabling headache syndrome. It is characterized by positional orthostatic headache, pachymeningeal enhancement and low cerebrospinal fluid pressure. However, the spectrum of clinical and radiographic manifestations is varied and misdiagnosis is common even in the modern era of magnetic resonance imaging. Spontaneous intracranial hypotension presenting as thunderclap headache is recognized but rare.

Case presentation: A 41-year-old Sri Lankan female presented with thunderclap headache associated with nausea and vomiting, but the headache was characterized by positional variation with aggravation in the upright posture and relief in the supine posture. Gadolinium-enhanced cranial magnetic resonance imaging demonstrated generalized meningeal enhancement and normal magnetic resonance angiography, while lumbar puncture revealed a cerebrospinal fluid opening pressure of less than 30 millimetres of water. Magnetic resonance myelography failed to identify the site of the cerebrospinal fluid leak. The patient was managed conservatively with bed-rest, intravenous hydration, analgesics and an increased intake of oral coffee, which led to a gradual relief of headaches in the upright posture.

Conclusions: Spontaneous intracranial hypotension can rarely present as thunderclap headache. Awareness of its varied spectrum of presentations would avoid inappropriate investigations, misinterpretation of imaging results and ineffective treatment.
Keywords: Thunderclap headache, Spontaneous intracranial hypotension, Meningeal enhancement

[* Correspondence: thashichang@gmail.com. 1 Department of Clinical Medicine, Faculty of Medicine, University of Colombo, 25, Kynsey Road, Colombo 08, Sri Lanka. 2 University Medical Unit, National Hospital of Sri Lanka, Colombo, Sri Lanka. © 2015 Chang et al.; licensee BioMed Central. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0).]

Background

Spontaneous intracranial hypotension (SIH) (first described by Schaltenbrand) [1] is an important cause of headache that is often underdiagnosed. It has an estimated annual incidence of 5 per 100 000, peaking around the age of 40, and affects women more frequently than men [2]. Orthostatic headache, low cerebrospinal fluid (CSF) pressure, and diffuse meningeal enhancement on brain magnetic resonance imaging (MRI) are the major features of the classic syndrome. However, the spectrum of clinical and radiographic manifestations is varied, with diagnosis largely dependent on clinical suspicion [3]. We describe a patient presenting with typical clinical and radiological features of SIH and review the salient literature.

Case presentation

A 41-year-old, previously healthy Sri Lankan female presented with sudden onset severe headache for one day. The headache started in the occipital region and spread towards the vertex. It worsened with standing and was accompanied by nausea and vomiting. The patient described it as her 'worst-ever' headache. She denied a past history of migraine. Remarkably, the headache resolved with lying supine and recurred on sitting up or standing. It would commence as a sensation of 'heaviness' of her head that would gradually progress to a severe, disabling headache. The maximum duration that she could tolerate an upright posture was approximately one hour.
She did not have any other co-morbidities and denied use of any medicinal or recreational drugs. There was no history of surgery or trauma involving the head, neck or spine. On examination, she was comfortable in the supine position and detested sitting up or standing. The cardiovascular, respiratory, abdominal and nervous system examinations were normal.

Haematological and biochemical blood investigations including full blood count, electrolytes, random blood glucose, liver and renal function tests, erythrocyte sedimentation rate and C-reactive protein were normal. Electrocardiogram was normal. Computerised tomography (CT) scan of the head did not reveal any abnormality. However, gadolinium-enhanced magnetic resonance imaging (MRI) showed generalized meningeal enhancement (Figure 1). The MR angiogram was normal.

Lumbar puncture done in the lateral decubitus position revealed a CSF opening pressure of less than 30 mm of H2O. The biochemical, cytological and microbiological analysis of CSF was normal and there was no xanthochromia. MR myelography failed to identify the site of CSF leak.

The patient was managed with bed-rest and hydration with infusions of normal saline. She was prescribed analgesics and encouraged to drink excess amounts of coffee ad libitum. Over the ensuing 3 months, her headaches became less intense and she could progressively tolerate longer durations in the upright posture.
At three months' review she was able to maintain her upright posture for up to 6 hours without headache. Since she showed small but definite improvement each day, the plan for epidural blood patching (EBP) was perpetually deferred. However, in retrospect, given the protracted time to recovery it would have been appropriate to have instituted EBP earlier.

[Figure 1: Gadolinium-enhanced cranial magnetic resonance imaging showing generalized, uniform pachymeningeal enhancement.]

Conclusions

Our case report illustrates the rare but typical syndrome of spontaneous intracranial hypotension (SIH) characterized by positional orthostatic headache and pachymeningeal enhancement on neuroimaging that led to the discovery of a low CSF opening pressure (<60 mmH2O). Low pressure headache following dural puncture rarely presents diagnostic difficulty, but when it occurs spontaneously, as in our patient, misdiagnosis is the rule. Although the headache is typically orthostatic, other patterns have been reported, ranging from chronic daily headaches, intermittent headaches and paradoxical headache that worsens on recumbency to headaches that mimic primary cough/exertional headache [4]. Interestingly, our patient presented with a thunderclap headache, which is a rare but recognised manifestation of SIH [5,6], further highlighting how easily it could be misdiagnosed. In addition to headache, patients may develop neck pain, nausea, vomiting, hyperacusis, tinnitus, unsteadiness, visual obscurations and abducens nerve palsies.

In SIH, the prevailing aetiology for low CSF pressure is considered to be CSF leakage located in the spine, most commonly at the thoracic or cervicothoracic junction. In many reported instances, SIH occurs in previously healthy individuals, which indicates an acute event triggering a CSF leak [7]. A tear of the intricate meningeal coverings of the nerve root sleeves emanating from the spinal cord, a rupture of meningeal diverticula (reportedly in individuals with connective tissue disorders such as Marfan syndrome) or a rupture of spinal epidural or perineural cysts may be the source of the cryptic CSF leak [8]. The disruption of meningeal continuity can be triggered by a trivial event such as a minor fall, a sudden twist or stretch, sexual intercourse, a sudden sneeze or vigorous exercise, which the patient often fails to recall as the inciting event [7]. Traction on pain-sensitive intracranial and meningeal structures because of the CSF hypovolaemia, particularly on sensory nerves and bridging veins, is thought to cause the headache and some of the associated symptoms. In the upright position this traction is exaggerated, hence the postural component of the headache. Secondary vasodilation of the cerebral vessels to compensate for the low CSF pressure may contribute to the vascular component of the headache by increasing brain blood volume. Because jugular venous compression increases headache severity, it seems likely that venodilation is also a contributing factor to the headache.

Diagnosis is based on the demonstration of low CSF pressure (<60 mmH2O) in a patient presenting with a postural orthostatic headache that is not better accounted for by an alternative diagnosis [9]. The advent of MRI has greatly improved diagnosis, although up to 20% of patients may have a normal MRI [2]. The acronym SEEPS (for Subdural fluid collections, Enhancement of the pachymeninges, Engorgement of the venous structures, Pituitary enlargement, and Sagging of the brain) recalls the major features of SIH on brain MRI. CT myelography, radioisotope cisternography and MR myelography are utilized to identify the site of the leak.
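The diagnostic rule described above (orthostatic headache plus a CSF opening pressure below 60 mmH2O, with no better alternative explanation) can be written as a simple checklist. The sketch below is illustrative only, not clinical guidance; the function and field names are hypothetical and merely encode the rule as stated in the text:

```python
# Illustrative sketch of the SIH diagnostic rule summarized in the text.
# NOT clinical guidance; all names below are hypothetical.
from dataclasses import dataclass

LOW_CSF_PRESSURE_MM_H2O = 60  # threshold quoted in the text (<60 mmH2O)

# MRI features recalled by the SEEPS acronym
SEEPS_FEATURES = (
    "Subdural fluid collections",
    "Enhancement of the pachymeninges",
    "Engorgement of the venous structures",
    "Pituitary enlargement",
    "Sagging of the brain",
)

@dataclass
class Workup:
    orthostatic_headache: bool
    csf_opening_pressure_mm_h2o: float
    alternative_diagnosis: bool  # a better-fitting alternative cause found

def suggests_sih(w: Workup) -> bool:
    """True when the findings match the rule stated in the text."""
    return (
        w.orthostatic_headache
        and w.csf_opening_pressure_mm_h2o < LOW_CSF_PRESSURE_MM_H2O
        and not w.alternative_diagnosis
    )

# The reported patient: postural headache, opening pressure < 30 mmH2O,
# and no alternative cause found on investigation.
patient = Workup(orthostatic_headache=True,
                 csf_opening_pressure_mm_h2o=30,
                 alternative_diagnosis=False)
print(suggests_sih(patient))  # True
```

The point of the sketch is simply that all three conditions must hold together; a low opening pressure alone, or a postural headache alone, does not satisfy the rule.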
Epidural blood patching (EBP) is the mainstay of treatment and is recommended for severe, disabling SIH and in patients not responding to conservative management. It is hypothesized that EBP works initially through tamponade of the dural leak, and later through fibrin deposition that occurs within about three weeks [10]. Lumbar placement of the EBP can be effective even when the site of CSF leakage is above the site of the blood patch or is unknown. However, many patients may require more than one EBP treatment for a successful outcome [11]. Conservative management is recommended for mild to moderate SIH and includes avoidance of the upright posture, analgesics and strategies to increase CSF volume such as intravenous hydration, high caffeine intake and high salt intake. Oral or intravenous caffeine, an adenosine receptor antagonist that increases CSF production secondary to reducing intracerebral blood flow, is of proven efficacy against post-lumbar-puncture headaches [12]. It is also used as therapy in SIH, although its efficacy for this indication is unproven. SIH is a rare and treatable cause of disabling headache, and our case highlights a rare presentation of this syndrome. Awareness of its varied spectrum of clinical and radiographic manifestations is essential to avoid inappropriate investigations, misinterpretation of imaging results and unnecessary, ineffective treatment. Consent Written informed consent was obtained from the patient for publication of this case report and the accompanying images. A copy of the written consent is available for review by the Editor of this journal. Abbreviations CSF: Cerebrospinal fluid; CT: Computerized tomography; EBP: Epidural blood patching; MR: Magnetic resonance; MRI: Magnetic resonance imaging; SIH: Spontaneous intracranial hypotension. Competing interests The authors declare that they have no competing interests.
Authors’ contributions TC, CR and LS were involved in the management of the patient and contributed to the drafting of the manuscript. TC revised the manuscript critically and prepared the final version. All authors read and approved the final manuscript. Acknowledgements We thank Mr Sarath of the Audio-visual Unit of the Faculty of Medicine, University of Colombo for the digital photography of the magnetic resonance images. Received: 10 September 2014 Accepted: 18 March 2015
References
1. Schaltenbrand G. Normal and pathological physiology of the cerebrospinal fluid circulation. Lancet. 1953;1(6765):805–8.
2. Schievink WI. Spontaneous spinal cerebrospinal fluid leaks and intracranial hypotension. JAMA. 2006;295(19):2286–96.
3. Mokri B. Spontaneous cerebrospinal fluid leaks: from intracranial hypotension to cerebrospinal fluid hypovolemia - evolution of a concept. Mayo Clin Proc. 1999;74(11):1113–23.
4. Mokri B. Spontaneous low pressure, low CSF volume headaches: spontaneous CSF leaks. Headache. 2013;53(7):1034–53.
5. Grimaldi D, Mea E, Chiapparini L, Ciceri E, Nappini S, Savoiardo M, et al. Spontaneous low cerebrospinal pressure: a mini review. Neurol Sci. 2004;25 Suppl 3:S135–137.
6. Schwedt TJ, Matharu MS, Dodick DW. Thunderclap headache. Lancet Neurol. 2006;5:621–31.
7. Murakami M, Morikawa K, Matsuno A, Kaneda K, Nagashima T. Spontaneous intracranial hypotension associated with bilateral chronic subdural hematomas - case report. Neurol Med Chir (Tokyo). 2000;40(9):484–8.
8. Liu FC, Fuh JL, Wang YF, Wang SJ. Connective tissue disorders in patients with spontaneous intracranial hypotension. Cephalalgia. 2011;31(6):691–5.
9. The International Classification of Headache Disorders, 3rd edition (beta version). Cephalalgia. 2013;33(9):629–808.
10. Marcelis J, Silberstein SD. Spontaneous low cerebrospinal fluid pressure headache. Headache. 1990;30(4):192–6.
11. Hoffmann J, Goadsby PJ. Update on intracranial hypertension and hypotension. Curr Opin Neurol. 2013;26(3):240–7.
12. Camann WR, Murray RS, Mushlin PS, Lambert DH. Effects of oral caffeine on postdural puncture headache: a double-blind, placebo-controlled trial. Anesth Analg. 1990;70:1181–4.
Communication methods and production techniques in fixed prosthesis fabrication: a UK based survey. Part 1: Communication methods J. Berry,1 M. Nesbit,2 S. Saberi2 and H. Petridis*3 and also that members have to ‘communicate clearly and effectively with other team members and colleagues in the interest of patients’, and that ‘if you ask a colleague to provide treatment, a dental appliance or clinical advice for a patient, make sure that your request is clear and that you give your colleague all the information they need’.4 The prerequisite for a proper prescription written by a qualified dentist has also been set in the Medical Devices Directive (MDD).5 A number of studies6–12 from different parts of the world have highlighted problems and confirmed the need for improved communication methods and production techniques between dentists and dental technicians during the fabrication of fixed prosthodontic appliances.
Problems seem to occur even within the same hospital setting.13,14 Communication issues have included lack of information regarding the prosthesis design and materials, lack of understanding of the necessary technical steps and the time required, and lack of proper shade communication.6–12 Most of the time, the final decision was left INTRODUCTION Prosthodontics is a discipline that requires a synergy between the dentist and dental technician in order to fabricate intraoral prostheses with acceptable fit, function and aesthetics.1–3 Proper communication between the two parties is very important because, in the majority of cases, the dental technicians are remotely located and usually never actually see the patient. The General Dental Council’s (GDC) policy document Principles of dental team working4 states that: ‘Members of the dental team will work effectively together’, Statement of the problem The General Dental Council (GDC) states that members of the dental team have to ‘communicate clearly and effectively with other team members and colleagues in the interest of patients’. A number of studies from different parts of the world have highlighted problems and confirmed the need for improved communication methods and production techniques between dentists and dental technicians. Aim The aim of this study was to identify the communication methods and production techniques used by dentists and dental technicians for the fabrication of fixed prostheses within the UK from the dental technicians’ perspective. The current publication reports on the communication methods. Materials and methods Seven hundred and eighty-two online questionnaires were distributed to the Dental Laboratories Association membership and included a broad range of topics. Statistical analysis was undertaken to test the influence of various demographic variables. Results The number of completed responses totalled 248 (32% response rate).
The laboratory prescription and the telephone were the main communication tools used. Statistical analysis of the results showed that a greater number of communication methods were used by large laboratories. Frequently missing items from the laboratory prescription were the shade and the date required. The majority of respondents (73%) stated that a single shade was selected in over half of cases. Sixty-eight percent replied that the dentist allowed sufficient laboratory time. Twenty-six percent of laboratories felt either rarely involved or not involved at all as part of the dental team. Conclusion This study suggests that there are continuing communication and teamwork issues between dentists and dental laboratories. with the technician, without proper feedback. All of the above issues, compounded by the time pressure for completion of the restorations noted in some studies,8,11 may explain the finding that many dental technicians feel insufficiently valued in the dental team.11,12,15 A number of studies1,12,14,16 have highlighted the lack of suitable instruction to dental undergraduates regarding effective communication between dentists and technicians, and the lack of knowledge regarding dental prosthesis fabrication at the time of qualification, as the main factors for the recurring problems.
This has led to the introduction of inter-professional education schemes in Australia.17 The last survey of UK-based dental laboratories was published in 2009,12 and suggested that the GDC had failed in its aims published in The first five years: a framework for undergraduate dental education,18 as serious communication issues were identified.12 The purpose of this cross-sectional study was to identify the communication methods and production techniques used by dentists and dental technicians for the fabrication of fixed prostheses within the UK from the dental technicians’ perspective. The current publication reports on the communication methods. 1Clinical Lecturer, Department of Adult Oral Health, Institute of Dentistry, Barts and The London School of Medicine and Dentistry, Queen Mary University of London; 2Senior Technical Instructor, Prosthodontic Unit, UCL Eastman Dental Institute, London; 3Senior Lecturer, Department of Restorative Dentistry, Prosthodontics Unit, UCL Eastman Dental Institute, London. *Correspondence to: Dr Haralampos (Lambis) Petridis. Email: c.petridis@ucl.ac.uk. Online article number E12. Refereed Paper - accepted 19 June 2014. DOI: 10.1038/sj.bdj.2014.643. ©British Dental Journal 2014; 217: E12. IN BRIEF: • Highlights the importance of dentist-technician communication. • Concludes that dentists must ensure that written prescriptions contain all the necessary information so that the dental technician can fabricate fixed prostheses correctly and without delay. • Recommendations for improved communication are made with the ultimate goal of better patient service. © 2014 Macmillan Publishers Limited. All rights reserved.
MATERIALS AND METHODS A questionnaire was constructed to investigate communication methods and production techniques used between dentists and dental laboratories, from the laboratories’ perspective. An effort was made to include a broad range of topics. At the same time, elements of previously published research were incorporated in order to obtain meaningful results that would be comparable to past surveys. The final questionnaire consisted of 30 questions within the following subcategories: general information, communication methods, impression disinfection and suitability, production techniques, shade matching, and time and team management issues. The questionnaire was piloted among dental technicians both at UCL Eastman Dental Institute and in selected commercial laboratories. The Dental Laboratories Association (DLA, Nottingham, UK) was approached and approved the use of their database of e-mail contacts (782 addresses). A web-based survey tool, Opinio (ObjectPlanet Inc., Oslo, Norway), was utilised for the administration of the survey and assimilation of data. Settings were managed in order to ensure anonymity of respondents. The questionnaire weblink, along with an introduction letter, was distributed through the DLA. The survey was ‘live’ for 6 weeks, and during that time the response rate was actively monitored and three e-mail reminders were sent. The collected data were presented as descriptive statistics and analysed using Fisher’s exact test, the Mann-Whitney test or Spearman’s rank correlation (SPSS 12.0; SPSS Inc., Chicago). P-values of less than 0.025 were regarded as statistically significant. A significance level of 2.5% was chosen rather than the conventional 5% to avoid spuriously significant results arising from multiple testing.
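The three tests named in the methods can be sketched with standard scipy calls. This is an illustrative sketch only: the input data below are invented placeholders, not the survey's actual responses, and the paper used SPSS rather than Python.

```python
# Illustrative sketch of the statistical tests named in the Methods
# (Fisher's exact test, Mann-Whitney U, Spearman's rank correlation),
# run on made-up placeholder data -- not the survey's real responses.
from scipy import stats

ALPHA = 0.025  # the paper's 2.5% level, chosen to guard against multiple testing

# Fisher's exact test on a hypothetical 2x2 table
# (e.g. small vs large labs, uses email vs does not)
odds_ratio, p_fisher = stats.fisher_exact([[50, 56], [40, 7]])

# Mann-Whitney U test comparing two hypothetical groups of ordinal scores
group_a = [3, 5, 4, 6, 2]
group_b = [7, 8, 6, 9, 7]
u_stat, p_mw = stats.mannwhitneyu(group_a, group_b, alternative="two-sided")

# Spearman's rank correlation between two hypothetical ordinal variables
rho, p_spearman = stats.spearmanr([1, 2, 3, 4, 5], [2, 1, 4, 3, 5])

for name, p in [("Fisher", p_fisher), ("Mann-Whitney", p_mw), ("Spearman", p_spearman)]:
    print(f"{name}: p = {p:.4f}, significant at 2.5%: {p < ALPHA}")
```

The 2.5% threshold is simply applied as a stricter cut-off on each p-value, which is the informal multiple-testing safeguard the authors describe.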
The null hypothesis was that factors such as the source of information used to answer the questionnaire, and the location and size of the dental laboratory, did not influence the communication methods and production techniques. RESULTS The number of responses totalled 248, which yielded a 32% response rate. Sixty-eight respondents answered only some of the questions. The results presented in this paper pertain to the subchapters of general information, communication methods, shade matching, and time and team management issues. The subchapters and questions, along with the results in parentheses, are depicted in Table 1. The majority of the information (81%) used to answer the survey questions was sourced from memory, and 19% of respondents used their laboratory records. Ninety percent of the respondents were based in England. This unequal distribution among England, Scotland, Wales and Northern Ireland did not permit any further analysis of this particular factor. The majority of dental laboratories (73%) completed work for fewer than 50 dentists and 13% worked with over 100 dentists. For analysis purposes the labs were grouped into three categories regarding size: small (43%, working with up to 25 dentists), medium (38%, working with 26-75 dentists) and large (19%, working with 76+ dentists). The results of this study showed that the laboratory prescription and the telephone were the main communication tools used between dentists and dental technicians. Digital means, whether by e-mail or photography, also played an important role (Fig. 1). Statistical analysis of the results showed that a greater number of communication methods were used by large laboratories (Table 2) and that the source of information did not play a significant role.
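The size grouping described above (small up to 25 dentists, medium 26-75, large 76+) can be sketched as a simple binning step. The dentist counts below are invented examples, not survey records; the bin edges are taken from the text.

```python
# Reproducing the lab-size grouping described in the Results:
# small = up to 25 dentists, medium = 26-75, large = 76+.
# The dentist counts below are invented examples, not survey data.
import pandas as pd

labs = pd.DataFrame({"dentists": [10, 25, 26, 60, 75, 76, 120]})
labs["size"] = pd.cut(
    labs["dentists"],
    bins=[0, 25, 75, float("inf")],   # right-closed intervals: (0,25], (25,75], (75,inf)
    labels=["small", "medium", "large"],
)
print(labs["size"].value_counts().sort_index())
```

Because `pd.cut` uses right-closed intervals by default, a lab with exactly 25 dentists falls into "small" and one with 75 into "medium", matching the boundaries quoted in the paper.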
Almost a quarter of the respondents (24%) indicated that more than half of laboratory prescriptions had an inadequate amount of information on them throughout the course of treatment, and 13% had to contact the dentists for further information. The two most frequently missing items from the laboratory prescription were the shade and the date required. These results were not influenced by the size of the laboratory or the source of information, with the exception of the responses about contact with the dentist for further information (p = 0.002). This was more common in the group providing information from records, where 22% reported having to contact the dentist over half the time, compared to only 10% in the memory group. Also, mid-sized laboratories reported the greatest percentage (79%) for shade missing compared (p = 0.01) to the two other groups (60%). Some of the additional comments in this section of the questionnaire indicated that the need for further communication was time consuming, with the dentist often being difficult to contact during normal surgery hours, and also stressed the fact that some prescriptions were illegible or were not fully completed but had additional comments
Table 1 Relevant subchapters of the questionnaire, with answers in percentages in parentheses
GENERAL INFORMATION
1. Please indicate the source of the information that you will be giving: From memory (81%) From records (19%)
2. This survey is anonymous so please indicate the country that you are based in: England (90%) Scotland (4%) Northern Ireland (1%) Wales (5%)
3. Approximately, what number of dentists do you currently work with? 1–25 (43%) 26–50 (30%) 51–75 (8%) 76–100 (6%) 100+ (13%)
COMMUNICATION METHODS
4. Please select all the methods of contact used by dentists to communicate with you: Laboratory prescription (98%) Telephone (93%) Text messaging (29%) Email (73%) Digital photography (67%) Other (10%) Please add any relevant comments
5. With regards to the laboratory prescriptions for fixed restorative work, what percentage has an inadequate amount of information on them throughout the course of treatment? 0-25% (54%) 26-50% (22%) 51-75% (16%) 76-100% (8%)
6. For what percentage of laboratory prescriptions do you have to contact the dentist to obtain further information? 0-25% (65%) 26-50% (22%) 51-75% (8%) 76-100% (5%)
7. Please indicate the two most common items missing from the laboratory prescription when received from the dentist: Patient’s name (6%) Shade (75%) Date required (60%) Material to be used (32%) Tooth notation (18%) Other (9%) Please add any relevant comments
SHADE MATCHING
25. What percentage of the time is a single shade (for example, A3 or B2) specified for crown and bridgework? 0-25% (7%) 26-50% (20%) 51-75% (33%) 76-100% (40%)
26. What percentage of dentists would send you a photograph of the patient’s teeth with the shade tab to help you with shading? 0-25% (81%) 26-50% (10%) 51-75% (7%) 76-100% (2%)
27. What percentage of dentists would send a patient to you to do the shade matching? 0-25% (75%) 26-50% (15%) 51-75% (9%) 76-100% (1%)
TIME & TEAM MANAGEMENT ISSUES
28. Do you feel that the dentist generally allows you adequate time to complete the fabrication of the crown/bridge to the best of your ability, and return it to the dental practice? Yes (68%) No (32%) Please comment
29. How involved do you feel as part of the dental team? Completely involved (22%) Partly involved (52%) Rarely involved (24%) Not involved (2%)
30. Please add any further comments that you may have on the communication between the dentist and laboratory: (63 additional comments)
Regarding shade selection and communi- cation, the results of this study showed that the majority of respondents (73%) received a single shade for over half of the cases and 81% rarely (0-25% of cases) received any photographs with the patient’s teeth and shade guide. Only a minority of dental technicians (9%) reported regularly seeing patients for shade matching. Statistical anal- ysis showed that these results were not influ- enced by the source of information. A sta- tistical significance (p = 0.02) was detected between the size of lab and a single shade chosen. Large laboratories were more likely to receive instruction for a single shade. However, with regards to sending a patient to the laboratory for shade taking there was a negative correlation (p  =  0.02) suggest- ing that larger dental laboratories were less likely to see patients for shade taking. The last section on communication per- tained to time and management issues. Sixty-eight  percent of technicians replied that the dentist allowed sufficient time for fabrication of the definitive prosthesis and its return to the dental practice. The major- ity (74%) of dental technicians felt that they were either completely or partly involved in the dental team. However, one-quarter of the respondents (26%) felt either rarely involved or not at all. 
A number of respondents seized the opportunity to comment further, and the following is a small selection: ‘On the whole communication has got better but I feel the laboratory must make a stand with their clients to get the best treatment for both the patient and themselves.’ ‘I have worked in dentistry now for many years and the issue of lack of communication in the dental team has been a continuing one, which never seems to be resolved.’ ‘I feel that dentists need to realise how valuable the technician’s experience and knowledge are, and include them as part of the team and not consider them as a personal servant.’ ‘Would like a bit more appreciation shown!!!!!!’ ‘By far my happiest clients with the happiest patients are the ones that communicate with the laboratory and view it as part of a team effort to achieve the right result for the patient.’ ‘I feel that in general dentists think of us as an afterthought, not really appreciated. Just a thank you now and again would be nice.’ ‘The main problem occurs when it is necessary to speak to the surgeon and he is unavailable due to surgery.’ ‘Technicians should attend more lectures and courses with the dentists to appreciate the dentist’s point of view and exchange opinions and ideas.’ ‘I am a laboratory owner and communicate with dental surgeons on a frequent basis. I find my contact to be almost invariably friendly and professional.’ ‘Private clients value technical support and involvement.
NHS customers just tell me they want a crown that “drops on” and is completely clear of the occlusion.’ ‘Most surgeons give plenty of time, but some only give 1 week when it arrives in the lab after 2 days in the post.’ ‘I feel that the dentists do not check their impressions.’ DISCUSSION The purpose of this cross-sectional study was to identify the communication methods and production techniques used by dentists and dental technicians for the fabrication of fixed prostheses within the UK from the dental technicians’ perspective. The current publication reports on the communication methods and team issues. The last similar UK study was published in 2009.12 The response rate was 32%, which falls into the range of other published surveys of dental laboratories.10,11,12,19 The difference with the current survey was the fact that it was administered online, in the hope of making it more appealing and easy for respondents.20,21 Nevertheless, it has been shown22 that web and postal surveys yield similar response rates if certain protocols are followed. A limitation of the current survey was the fact that no distinction was made between those laboratories providing a fully private service, a fully NHS service or a mixed arrangement. These diverse cohorts might be experiencing different communication issues and might be utilising alternative fabrication methods. Juszczyk et al.10 in their 2009 survey used the same DLA database and reported that the majority (61%) of dental laboratories did a mixture of NHS and private work. A previous study6 looking at the quality and prescription of single crowns in Wales reported more problems with NHS compared to privately funded work, but no statistics were possible due to the limited sample size. The majority of the information used to answer the survey questions was sourced from memory.
Most of the published surveys have not focused on this issue, with the exception of Hatzikyriakos et al.11 who reported a similar finding, and a previous survey23 where it was anecdotally reported that most of the answers were sourced from financial records. The use of memory may introduce personal bias and thus affect the accuracy of the information. The dental technicians could have exaggerated the degree of lack of information and unsatisfactory work received from the dentists. Similarly, their personal bias could have affected the responses on their own laboratory procedures. In this study, however, the statistical analysis showed that the method of sourcing information did not play a significant role. The results of this study showed that the main methods of communication between dentists and dental laboratories are still the written prescription and telephone contact. This is in agreement with the previous UK-based survey.12 Statistical analysis of the results showed that a greater number of communication methods were used by large laboratories, and this is the first time that this has been reported.
Fig. 1: Bar chart showing the methods of contact used by dentists in communicating with the dental laboratory (number of replies for laboratory prescription, telephone, text messaging, email, digital photography and other).
Table 2 Fisher’s exact test relating size of laboratory to communication methods used (lab size: small n = 106, medium n = 95, large n = 47)
Lab prescription: small 87 (82%), medium 86 (91%), large 44 (94%); p = 0.08
Telephone: small 78 (74%), medium 83 (87%), large 44 (94%); p = 0.004
Text messaging: small 19 (18%), medium 26 (27%), large 18 (38%); p = 0.02
Email: small 50 (47%), medium 72 (76%), large 40 (85%); p < 0.001
Digital photography: small 48 (45%), medium 60 (63%), large 40 (85%); p < 0.001
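The direction of the Table 2 association can be checked numerically from the published counts (email use: 50 of 106 small, 72 of 95 medium, 40 of 47 large labs). The sketch below uses a chi-squared test as an approximation, since scipy's `fisher_exact` handles only 2x2 tables while the paper's Fisher's exact test was run on 2x3 tables:

```python
# Rebuilding the email row of Table 2 as a 2x3 contingency table
# (uses email vs does not, by lab size) and testing the association.
# chi-squared is used here as an approximation of the paper's
# Fisher's exact test, which scipy only offers for 2x2 tables.
from scipy.stats import chi2_contingency

uses_email = [50, 72, 40]          # small, medium, large labs using email
totals = [106, 95, 47]             # lab counts per size group (Table 2)
table = [uses_email, [t - u for t, u in zip(totals, uses_email)]]

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p:.2g}")  # p well below 0.001
```

The result reproduces the paper's conclusion for this row: larger laboratories are markedly more likely to use email, significant well beyond the 2.5% threshold.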
This could be a reflection of the degree of knowledge of different communication methods within a larger group, or of the need to have multiple modes of communication because of the logistics of maintaining contact with a larger number of dentists. The results of this study showed that in approximately half of the cases the laboratory prescription was lacking important information. This reaffirms the findings of past surveys.6,9,10 The statistical analysis also revealed that the group that based their answers on records reported even higher percentages for the need to obtain further information from the dentist, compared to technicians basing their answers on memory. This implies that the problem might have been under-reported in this survey. The two most common items missing from the written prescription were the ‘shade’ and ‘date required’, which is in conflict with the results reported by Stewart,13 who reported the absence of the ‘departmental clinic’ and the ‘name of the prescribing dentist’ as the most frequent omissions. Statistical analysis of the results indicated that there was no statistical association between the size of the lab and the items missing from the prescription, except for shade (p = 0.01). The analysis suggested that mid-sized laboratories reported the highest percentage of missing shade values (79%), with the percentage only around 60% in the two other groups. Shade selection may be a quite complex and individualised procedure,24 yet this survey showed that dentists regularly chose a single shade for most of the cases. The statistical analysis also showed that this was most common with large laboratories. No other UK studies reported on this parameter. This is consistent with the findings of Jenkins et al.6 in Wales, and Hatzikyriakos et al.9 in Greece, who reported that a single shade tab was chosen 50% of the time.
A useful adjunct to the written prescription would be a photograph of the tooth in question, ideally with the shade tab placed adjacent to it,14,19,24 but the extent of its use has not been previously reported. This simple accessory measure, however, was only occasionally used according to the responses in this survey. If the dentist is not confident in shade matching, and is not prepared to use other measures such as photographs, an alternative solution would be to send the patient to the dental laboratory.7,24 Alternatively, some dentists arrange for the technician to visit the practice and meet the patient. However, the results showed that this method of communication was not a popular one. Statistical analysis showed that technicians working in large laboratories were less likely to see patients for shade taking (p = 0.02). No other comparable research data were sourced on the frequency of dental technician shade taking in the UK. However, in a study conducted in Greece,11 almost 30% of shade selection was undertaken by the dental technicians. A possible explanation of the different results might be that the dental laboratory is not conveniently accessible for the patient, which further strengthens the use of photography as an aid to shade matching. Nowadays, many dentists within the UK use a postal service to send the impressions/casts to dental laboratories some distance from the practice. The vast majority (68%) of the dental laboratories felt that the dentist did allow them adequate time to complete the fabrication of the crown or bridge to the best of their ability and to return it to the dental practice. This is in contrast with two previous studies, in Greece11 and Ireland,8 which reported that the majority of dental technicians thought that they were pressured for time.
Undergraduate training rarely involves the student undertaking any fixed prosthetic laboratory procedures, and as a result the dentist may fail to understand the complexities of manufacture and especially the time required. The UK study by Juszczyk et al.12 reported that ‘54% of dental technicians working in a commercial laboratory did feel an integral part of the dental team’. In this survey 22% felt completely involved, with the majority of 52% feeling partly involved. The questionnaire allowed the dental laboratory to pass on any additional comments on the survey. In general the comments indicated that communication methods have improved but there are still many unresolved issues. A number of papers1,14,25 have recommended that dental school curricula should reinforce the teaching of both the technical stages of laboratory fabrication and proper dentist-technician communication, in order to ensure high-quality team working later on. This has been recognised at Griffith University in Australia17 with the introduction of formalised inter-professional education between students of dentistry, dental technology, dental therapists and hygienists. The adoption of similar changes in the curricula of UK dental schools would be recommended. One more way of strengthening communication may be through organising more continuing professional development courses, with participation from both parties encouraged. CONCLUSIONS Within the limitations of this UK-based study, the following conclusions could be drawn: 1. The main methods of communication between the dentists and dental laboratories are written prescriptions and telephone contact. Newer technologies such as digital photography and e-mail are playing an increasing role 2. The number of communication methods used by laboratories is directly related to their size 3. The laboratory prescriptions often lack important information, such as shade. When shade was prescribed, it was usually a single tab 4.
Dental laboratories were, in the main, content with the time allocated for the prescribed work to be fabricated 5. The majority of dental laboratories felt that they were part of the dental team, but there were still some elements of dissatisfaction that need to be improved upon. The authors acknowledge the Dental Laboratories Association for their valuable assistance in carrying out this survey, and Dr Aviva Petrie for her advice regarding the statistical analysis. The authors declare that they have no conflict of interest with respect to the submitted work.
1. Christensen G J. A needed remarriage: dentistry and dental technology. J Am Dent Assoc 1995; 126: 116–117.
2. Malament K A, Pietrobon N, Nesser S. The interdisciplinary relationship between prosthodontics and dental technology. Int J Prosthodont 1996; 9: 341–354.
3. Davenport J C, Basker R M, Heath J R, Ralph J P, Glantz P O, Hammond P. Communication between the dentist and the dental technician. Br Dent J 2000; 189: 471–474.
4. General Dental Council. Principles of dental team working. London: General Dental Council, 2013.
5. Council of the European Communities. The medical devices directive. Council of the European Communities, 1993.
6. Jenkins S J, Lynch C D, Sloan A J, Gilmour A S. Quality of prescription and fabrication of single-unit crowns by general dental practitioners in Wales. J Oral Rehabil 2009; 36: 150–156.
7. Aquilino S A, Taylor T D. Prosthodontic laboratory and curriculum survey. Part III: Fixed prosthodontic laboratory survey. J Prosthet Dent 1984; 52: 879–885.
8. Leith R, Lowry L, O’Sullivan M. Communication between dentists and laboratory technicians. J Irish Dent Assoc 2000; 46: 5–10.
9. Lynch C D, Allen P F. Quality of communication between dental practitioners and dental technicians for fixed prosthodontics in Ireland.
J Oral Rehabil 2005; 32: 901–905.
10. Afsharzand Z, Rashedi B, Petropoulos V C. Communication between the dental laboratory technician and dentist: work authorization for fixed partial dentures. J Prosthodont 2006; 15: 123–128.
11. Hatzikyriakos A, Petridis H P, Tsiggos N, Sakelariou S. Considerations for services from dental technicians in fabrication of fixed prostheses: a survey of commercial dental laboratories in Thessaloniki, Greece. J Prosthet Dent 2006; 96: 362–366.
12. Juszczyk A S, Clark R K, Radford D R. UK dental laboratory technicians' views on the efficacy and teaching of clinical-laboratory communication. Br Dent J 2009; 206: 532–533.
13. Stewart C A. An audit of dental prescriptions between clinics and dental laboratories. Br Dent J 2011; 211: E5.
14. Dickie J, Shearer A C, Ricketts D N. Audit to assess the quality of communication between operators and technicians in a fixed prosthodontic laboratory: educational and training implications. Eur J Dent Educ 2014; 18: 7–14.
15. Bower E J, Newton P D, Gibbons D E, Newton J T. A national survey of dental technicians: career development, professional status and job satisfaction. Br Dent J 2004; 197: 144–148.
16. Clark R K. The future of teaching of complete denture construction to undergraduates. Br Dent J 2002; 193: 13–14.
17. Evans J, Henderson A, Johnson N. The future of education and training in dental technology: designing a dental curriculum that facilitates teamwork across the oral health professions. Br Dent J 2010; 208: 227–230.
18. General Dental Council. The first five years. 3rd ed. London: GDC, 2013.
19. Rath C, Sharpling B, Millar B J. Survey of the provision of crowns by dentists in Ireland. J Irish Dent Assoc 2010; 56: 178–185.
20. Couper M. Web surveys: a review of issues and approaches. Public Opin Q 2000; 64: 464–494.
21. Sills S J, Song C. Innovations in survey research: an application of web surveys. Soc Sci Comput Rev 2002; 20: 22–30.
22.
Kaplowitz M D, Hadlock T D, Levine R. A comparison of web and mail survey response rates. Public Opin Q 2004; 68: 94–101.
23. MacEntee M I, Belser U C. Fixed restorations produced by commercial dental laboratories in Vancouver and Geneva. J Oral Rehabil 1988; 15: 301–305.
24. Small B W. Shade selection for restorative dentistry. Gen Dent 2006; 54: 166–167.
25. Barret P A, Murphy W M. Dental technician education and training – a survey. Br Dent J 1999; 18: 85–88.

Communication methods and production techniques in fixed prosthesis fabrication: a UK based survey. Part 1: Communication methods. This work is licensed under a Creative Commons Attribution 3.0 Unported License. To view a copy of this license, visit http://creativecommons.org/licenses/by/3.0/. © 2014 Macmillan Publishers Limited. All rights reserved.

work_vjufhrgjavg2dgz3kq3o6hnn7u ----

PROCEEDINGS Open Access

Virtual microscopy with Google-Earth: a step in the way for compatibility
Luis Alfaro1*, Maria Jose Roca2, Pablo Catala3
From the 11th European Congress on Telepathology and 5th International Congress on Virtual Microscopy, Venice, Italy, 6-9 June 2012

Background
Advances in the field of virtual microscopy are continuously growing. Many companies have introduced equipment with very good image quality and increased speed in the scanning processes. However, a wide variety of image formats, software viewers and servers has also appeared, with a lack of compatibility in the management of virtual slides. Commercial solutions for virtual microscopy tend to be rigid and difficult to customize, probably to protect the developments, but this leads to a detachment in the management of the images and a difficulty in becoming familiar with this technology.
Handling virtual slides in a similar way as we do with conventional pictures taken from digital cameras would surely bring virtual microscopy to a much wider number of pathologists. In this approach we tried to adapt to virtual microscopy simple solutions employed in digital photography, such as software oriented to panoramic images [1]. Panoramic images share with virtual slides their huge size and a similar way of being generated, by stitching smaller images [2], so software for panoramic images can be adapted for virtual microscopy. Google Earth is well-known software designed as a geographic information system that works in a way similar to virtual microscopy, zooming and panning images and moving along huge files. It is widely distributed, installed on many computers, and can be useful for sharing virtual slides and as a viewer of virtual slides [3]. There are developments for virtual microscopy based on the use of the Google Maps API (application programming interface), such as the NYU School of Medicine Virtual Microscope [4]. These options of high complexity require a team of programmers and computer support, not available in all situations. However, it is possible for pathologists to use Google Earth as a viewer of virtual slides more easily, and without programming knowledge.

Material and methods
With the aim of testing the value of Google Earth as software for the handling of virtual slides, we selected 20 pathology cases. Glass slides of 10 cases were scanned with a 3D-Histech Panoramic Midi, and the other 10 slides with an Aperio XT. The original virtual slides were exported into flat .jpg files with Aperio ScanScope software. We generated KML files, the file format used to display information in Google Earth, and the compatible pyramidal tile structure. The software employed was GDAL, an open-source library for geospatial data images, with the utility gdal2tiles and its graphical-interface variant (MapTiler).
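The pyramidal tile layout that gdal2tiles and MapTiler produce can be illustrated with a short calculation. The sketch below is only a rough model of the scheme: the 256-pixel tile edge is a common default, and the slide dimensions are invented for the example, not values from this study.

```python
import math

TILE = 256  # common default tile edge in pixels (an assumption, not a study value)

def pyramid_levels(width, height):
    """Zoom levels needed so the most zoomed-out level fits in a single tile."""
    return max(1, math.ceil(math.log2(max(width, height) / TILE)) + 1)

def grid_at_level(width, height, level, max_level):
    """Tile-grid size (columns, rows) at a zoom level; level 0 is the overview."""
    scale = 2 ** (max_level - level)   # downsampling factor at this level
    w = math.ceil(width / scale)       # scaled image width in pixels
    h = math.ceil(height / scale)      # scaled image height in pixels
    return math.ceil(w / TILE), math.ceil(h / TILE)

# A hypothetical 40,000 x 30,000 px slide needs 9 levels (0..8):
levels = pyramid_levels(40000, 30000)
top = grid_at_level(40000, 30000, 0, levels - 1)        # (1, 1): one overview tile
full = grid_at_level(40000, 30000, levels - 1, levels - 1)  # (157, 118) tiles at full resolution
```

Because the viewer only fetches the tiles intersecting the current viewport at the current zoom level, panning and zooming stay responsive even for multi-gigapixel slides.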
All the cases were uploaded onto two servers: our own server at the hospital and an external server hosting web pages. HTML web pages were built linking the cases with the KML files. MIME types were defined on both servers so that KML files would be opened with Google Earth.

Results and discussion
Cases were accessible anywhere from the Internet through their web addresses and opened directly with Google Earth. All functions needed for diagnostic, consultation and educational purposes were available. The image quality obtained was equivalent to that of any other specific viewer for virtual slides. Speed in serving files was related to line capacity and hosting-server performance, not to the software. Google Earth uses a specific file type (KML/KMZ) to define specific locations; KMZ files are zip-compressed versions of KML files. When clicking on these files, Google Earth opens and shows the location defined in the file with specific spatial coordinates.

* Correspondence: lalfaro@comv.es. 1 Department of Pathology, Fundacion Oftalmologica del Mediterraneo, Valencia, Spain. Full list of author information is available at the end of the article. Alfaro et al. Diagnostic Pathology 2013, 8(Suppl 1):S11. http://www.diagnosticpathology.org/content/8/S1/S11. © 2013 Alfaro et al; licensee BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

KML files have many features, which can be reviewed in the program tutorials [5]. Among other things, they allow pictures to be inserted and embedded on the landscapes of Google Earth.
This feature, called "PhotoOverlay", is often used in association with Google Earth position marks, but it also supports the use of very large photographs of many megapixels, as used in our virtual slides. The procedure for the visualization of these giant photographs is the usual one: breaking them into small portions and arranging them in a pyramidal structure. Each image of the pyramid is divided into tiles, so that only the parts to be viewed need to be loaded at any moment, with the familiar zoom and panning functions. KML files have a syntax similar to HTML files. They can be created manually, and adding a link to the virtual slide with the PhotoOverlay tag is not difficult. In any case, KML file generators are available [6,7]. The organization of the virtual slide into a set of pyramidal tiles, although it can also be done manually, in practice needs an automated system to avoid a long and repetitive process. Several tools are designed for this purpose. We selected MapTiler [8] because of its ease of use and its simultaneous generation of the associated KML file. The main handicap of this software is that it becomes very slow, even on the most modern computers, when facing very big virtual slides. A similar alternative for the generation of the pyramidal image is gdal2tiles.py, which generates a directory with the small tiles from the fragmentation of the virtual slide and also the KML file to be opened with Google Earth [9]. It is command-line software based on the Python programming language and not as easy to use. An example of a virtual slide prepared to be seen with Google Earth is provided in Figure 1. Table 1 shows the syntax of the simplified KML file prepared for the example. A certain limitation of these alternatives is that the original virtual slides need to be exported into flat .jpg files, which are the template for generating the pyramidal structure suitable for Google Earth.
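As an illustration of how simple such a file can be, the following Python sketch writes a minimal PhotoOverlay with an ImagePyramid, using the element names of the KML 2.2 PhotoOverlay schema. The slide name, pixel dimensions, tile URL and field-of-view values are placeholders invented for the example (not the values used in the study), and the Camera element that a production overlay would normally include to position the view is omitted for brevity.

```python
from xml.etree import ElementTree as ET

# Hypothetical values for illustration only.
KML_TMPL = """<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
  <PhotoOverlay>
    <name>{name}</name>
    <Icon>
      <!-- Google Earth substitutes tile coordinates into this pattern -->
      <href>{base_url}/$[level]/$[x]/$[y].png</href>
    </Icon>
    <ViewVolume>
      <leftFov>-25</leftFov><rightFov>25</rightFov>
      <bottomFov>-25</bottomFov><topFov>25</topFov>
      <near>100</near>
    </ViewVolume>
    <ImagePyramid>
      <tileSize>256</tileSize>
      <maxWidth>{width}</maxWidth>
      <maxHeight>{height}</maxHeight>
      <gridOrigin>upperLeft</gridOrigin>
    </ImagePyramid>
    <Point><coordinates>{lon},{lat}</coordinates></Point>
  </PhotoOverlay>
</kml>
"""

def make_kml(name, base_url, width, height, lon=0.0, lat=0.0):
    """Return a minimal PhotoOverlay KML document for a tiled slide."""
    return KML_TMPL.format(name=name, base_url=base_url,
                           width=width, height=height, lon=lon, lat=lat)

kml = make_kml("slide-001", "http://example.org/tiles/slide-001", 40000, 30000)
ET.fromstring(kml)  # raises ParseError if the document is not well-formed XML
```

The `$[level]`, `$[x]` and `$[y]` tokens in the `href` are the placeholders Google Earth replaces with tile coordinates when fetching portions of a tiled PhotoOverlay.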
The well-known restriction of .jpg files to a maximum size of 65,535 pixels (2^16 − 1) per dimension can be insufficient for big virtual slides. The JPEG 2000 format is not available in the analyzed software, and conventional TIFF files also have a size limitation of 4 GB.

Figure 1. Google Earth showing a virtual slide.

Conclusions
Google Earth is widely distributed and can be a good choice to avoid compatibility limitations in virtual microscopy. Even though most viewers for virtual slides are free, it is not always possible, especially for remote consultation, to expect a remote pathologist to have all the different browsers installed. Besides, many pathologists in hospitals do not have administrative rights to install software on their computers. Viewing slides with Google Earth requires no technical skills, and any pathologist can use it easily. Exporting virtual slides and generating the tiles and files to serve them for Google Earth requires a bit more knowledge of information technology; however, a pathologist with some experience in virtual microscopy can do it without technical assistance. No programming abilities are needed beyond the generation of the HTML files that link the slides on the servers, and the full process can be semi-automated. Google Earth is also very dynamic software, with frequent updates and many working groups introducing new improvements, and is suitable for use in virtual microscopy.

List of abbreviations
GB: Gigabyte; GDAL: Geospatial Data Abstraction Library; HTML: HyperText Markup Language; JPG (JPEG): Joint Photographic Experts Group; KML: Keyhole Markup Language; MIME: Multipurpose Internet Mail Extensions; TIFF: Tagged Image File Format.

Competing interests
The authors declare that they have no competing interests.
Authors' contributions
LA: conception, design and initial manuscript writing. MJR: manuscript review and important intellectual contribution. PC: technical advice and support.

Authors' details
1 Department of Pathology, Fundacion Oftalmologica del Mediterraneo, Valencia, Spain. 2 Department of Pathology, Hospital Arnau de Vilanova, Valencia, Spain. 3 IT Department, Fundacion Oftalmologica del Mediterraneo, Valencia, Spain.

Published: 30 September 2013

References
1. Alfaro L, Poblet E, Catalá P, Navea A, García-Rojo MJ: Compatibilización de equipos de microscopía virtual: análisis de alternativas con software de imágenes panorámicas. Rev Esp Patol 2011, 44(1):8-16.
2. Alfaro L, Roca MJ: Manual generation of virtual slides: a simplistic alternative for small biopsies [abstract]. Virchows Arch 2011, 459(Suppl 1):312.

Table 1. KML file prepared to open a virtual slide with Google Earth (slide 059-2.jpg, http://digipat.org/vs/GE/059-2/0/0/0.kkmz).

3. Alfaro L, Poblet E, Roca MJ, Catala P, Navea A: Google Earth and panoramic photo software in the management of virtual slides [abstract]. Mod Pathol 2011, 24(S1):339A.
4. NYU School of Medicine Virtual Microscope. ©NYU. [http://cloud.med.nyu.edu/virtualmicroscope/].
5. Google developers: Keyhole Markup Language. KML tutorial. [https://developers.google.com/kml/documentation/kml_tut?hl=en].
6. KML file creator. Free Map Tools. [http://www.freemaptools.com/kml-file-creator.htm].
7. KML generator 2.05. [http://www.madsencircuits.com/kmlgenerator.html].
8. MapTiler: Map Tile Cutter. Map Overlay Generator for Google Maps and Google Earth. [http://www.maptiler.org/].
9. GDAL Utilities: gdal2tiles.py. [http://www.gdal.org/gdal2tiles.html].
doi:10.1186/1746-1596-8-S1-S11
Cite this article as: Alfaro et al.: Virtual microscopy with Google-Earth: a step in the way for compatibility. Diagnostic Pathology 2013, 8(Suppl 1):S11.

work_vklbons7n5ba7cwvscpsmwffze ----

CORRESPONDENCE
RESEARCH LETTERS

The Fascial Plication Suture: An Adjunct to Layered Wound Closure

Minimizing surgical wound tension is generally accomplished by combining buried dermal sutures with an epidermal closure of choice. Several improvements on the buried dermal suture have been proposed over the years, from the buried vertical mattress technique1 and its many variants2 to the setback dermal suture.3 Despite the unique nature of wound closure from excisional surgery, most surgeons approach these closures as repairs of simple incisions.
The added wound tension resulting from the extirpation of the excised tissue, over and above the tension across the surface of the wound that would be expected even from incisional surgery, may not be fully alleviated with standard layered-closure techniques. In excisions of large cysts or tumors, significant dead space may remain even in wounds closed using dermal sutures, potentially increasing the risk of hematoma formation or wound infection.
The plication of superficial muscle fascia has been used as a technique of choice in rhytidectomy surgery for decades. The popularity of this technique stems from its ability to effect long-lasting tissue drag. Extending the application of this technique to reconstructive procedures, Dzubow4 described the use of multiple fascial plication sutures using nonabsorbable sutures in lieu of dermal sutures for select Mohs defects on the head and neck. Other authors have suggested fascial imbrication for select scalp and forehead defects.5
The fascial plication technique described herein is best suited to high-tension repairs on the trunk, neck, and shoulders, areas where general dermatologists perform the bulk of their procedures. Further study is needed to assess whether this technique confers a significant benefit over standard layered-closure techniques.

Figure 1. A, First throw, cross-sectional view; B, second throw, cross-sectional view; C, first throw, surgeon's-eye view; D, second throw, surgeon's-eye view.

(REPRINTED) ARCH DERMATOL/VOL 145 (NO. 12), DEC 2009. WWW.ARCHDERMATOL.COM. ©2009 American Medical Association. All rights reserved.

Methods. After the excision is accomplished, the wound is widely undermined at the level of the superficial fat, and adequate hemostasis is obtained. For all but the deepest excisions, the muscle fascia will not be visible at this point.
With absorbable suture (I favor 2-0 Vicryl; Ethicon Inc, Somerville, New Jersey), the first throw is accomplished by inserting the cutting or reverse-cutting needle gently into the fat and through the superficial fascia approximately 2 to 5 mm from the undermined edge of the wound (Figure 1A and B). A successful bite of the fascia may be tested by gently pulling on the suture and watching for characteristic uplifting of the area. The second throw is then performed by repeating the procedure on the opposite edge and at the same depth (Figure 1C and D). The knot is then tied, which results in a visible pleat and leads to a more fusiform appearance of the wound (Figure 2 and Figure 3). Placing the sutures at the far lateral undermined edges should be avoided owing to a tendency for dimpling of the overlying skin. If 1 fascial plication suture is used, this may be placed in the vertical midline of the wound; otherwise, a series of multiple evenly spaced plication sutures may be placed.

Comment. While this technique might theoretically increase the risk of postoperative pain from deep tissue trauma or of infection through the breach in the superficial muscle fascia, these issues have not been a problem in my experience. In hundreds of wound closures using this technique on the shoulders and trunk, I have found improved outcomes with less spread-scar formation, less dehiscence, and less tendency toward hematoma formation, especially on large defects. A randomized controlled trial is needed to determine whether this technique truly offers a clinically and statistically significant benefit over standard layered wound closure techniques. This technique shifts the burden of tension from the dermis to the fascia, theoretically resulting in a lower-tension closure, dramatically improved dead-space minimization, and, hopefully, a cosmetically and functionally improved reconstruction.

Figure 2. A, Final wound appearance, cross-sectional view; B, final wound appearance, surgeon's-eye view.
Figure 3. Defect before (A) and after (B) placement of a single fascial plication suture; note the enhanced fusiform appearance of the wound. Scale bar measures 6 cm in both panels.

Accepted for Publication: July 14, 2009.
Author Affiliation: North Florida Dermatology Associates, Jacksonville, Florida.
Correspondence: Dr Kantor, North Florida Dermatology Associates, 1551 Riverside Ave, Jacksonville, FL 32082 (jonkantor@gmail.com).
Financial Disclosure: None reported.

1. Zitelli JA, Moy RL. Buried vertical mattress suture. J Dermatol Surg Oncol. 1989;15(1):17-19.
2. Alam M, Goldberg LH. Utility of fully buried horizontal mattress sutures. J Am Acad Dermatol. 2004;50(1):73-76.
3. Kantor J. The set-back buried dermal suture: an alternative to the buried vertical mattress for layered wound closure. J Am Acad Dermatol. In press.
4. Dzubow LM. The use of fascial plication to facilitate wound closure following microscopically controlled surgery. J Dermatol Surg Oncol. 1989;15(10):1063-1066.
5. Radonich MA, Bisaccia E, Scarborough D. Management of large surgical defects of the forehead and scalp by imbrication of deep tissues. Dermatol Surg. 2002;28(6):524-526.

Lack of Lower Extremity Hair Not a Predictor for Peripheral Arterial Disease

Peripheral arterial disease (PAD) afflicts 8 to 12 million Americans, but nearly 75% of them are asymptomatic.1 Physicians rely on history and physical examination to determine which patients require further evaluation.
Physical findings that have been associated with arterial disease include a unilaterally cool extremity, skin atrophy and lack of hair, and abnormal pedal pulses, among others.2 The disease spectrum ranges from exertional calf pain to chronic limb ischemia necessitating amputation. The suspicion of arterial disease often leads to further examination of the lower extremity vascular supply. Measurement of the ankle-brachial index (ABI) is a noninvasive method for detecting PAD and is about 95% sensitive and specific when the diagnostic cutoff is 0.9.3 In general, the accepted ABI for the presence of PAD is lower than 0.9, and that for severe disease is lower than 0.7. The present observational case-control study was undertaken based on the clinical observation that many men seem to have hairless lower extremities. Our goal was to determine whether this physical sign is a predictor of PAD.

Methods. After obtaining institutional review board approval, we enrolled 50 subjects from Hershey Medical Center in the study. Twenty-five control subjects were recruited from various outpatient clinics and had documented normal ABI measurements (≥0.9). Twenty-five subjects with PAD were recruited from the vascular clinic and had either an ABI lower than 0.9 or abnormal lower extremity arterial duplex findings. Subjects with ABIs lower than 0.9 due to disease other than PAD were excluded. Subjects with diabetes who had abnormal ABIs were included in the disease group. Due to arterial calcification, the vessels in subjects with diabetes may be less compressible and so might generate falsely elevated indices. Thus, the vascular disease of patients with diabetes is likely worse than the measured value. Lower extremity hairs were counted on all subjects. First, a measurement was taken from the anterior tibial tuberosity to the proximal portion of the lateral malleolus.
The distance was divided by 3, and hairs were counted at a location one-third of the distance proximal to the lateral malleolus. Scissors were used to trim hairs at this location to several millimeters in length. Temporary black hair dye was then applied to the area for approximately 1 minute. Excess dye was removed, and we took 2 pictures of the area using a magnified digital photography technique, which involved pressing the camera lens against the skin to make full contact while the photograph was taken. All photographs were taken with a Nikon D80 camera (Nikon USA Inc, Melville, New York), stored on a memory card, and uploaded to a computer where Photoshop (Adobe Systems Inc, San Jose, California) was used to crop them to standard dimensions of 2572 × 1564 pixels. Hair count analyses were performed, and data were categorized as either leg hair present (1 or more hairs present in the examined field) or leg hair absent (no hairs present in the examined field). This assessment was performed on data from each of the 50 subjects. Statistical analysis was then completed using a χ2 analysis.

Results. Of the 50 patients recruited for this study, 25 had existing PAD, and 25 were healthy controls (Table). Subjects in the control group had a mean age of 65 years (age range, 50-80 years). Those in the PAD group had a mean age of 75 years (age range, 55-88 years). Sixty-four percent of patients with PAD had absent leg hair, and 40% of patients without PAD had absent leg hair (Table). Using χ2 analysis, we found no statistically significant relationship between disease presence and absence of lower extremity hair (P = .09).

Comment.
Peripheral arterial disease involves atherosclerotic occlusions in the arterial system distal to the aortic bifurcation.4 It is mainly a disorder of advancing age, and one's risk of PAD is increased by cigarette smoking, diabetes, hypercholesterolemia, and hypertension.4 Because many patients are asymptomatic, physicians must recognize the early signs and take appropriate action. The goal of the present study was to determine whether the absence of lower extremity hair is a useful predictor of PAD. No statistically significant difference was found between the numbers of diseased patients without leg hair (n = 16) and control patients without leg hair (n = 10) (P = .09).

Table. Presence of Lower Extremity Hair in Patients With and Without PAD

Lower Extremity Hair    With PAD (n = 25), No. (%)    Without PAD (n = 25), No. (%)
Present                 9 (36)                        15 (60)
Absent                  16 (64)                       10 (40)

Abbreviation: PAD, peripheral arterial disease. By χ2 analysis, no statistically significant relationship was found between disease presence and absence of lower extremity hair (P = .09).

Jonathan Kantor, MD, MSCE

work_vkxvya7vijb63cwj46wgsr7kw4 ----

CLINICS 2008;63(4):419-20

EDITORIAL

Hospital das Clínicas, Faculdade de Medicina da Universidade de São Paulo – São Paulo/SP, Brazil. mrsilva36@hcnet.usp.br

IN THE AUGUST 2008 ISSUE OF CLINICS
Mauricio Rocha-e-Silva, Editor
doi: 10.1590/S1807-59322008000400001

In this issue of CLINICS we would like to highlight a study by Karadeniz et al. on the accumulation of oxidized low-density lipoprotein in a model of murine liver fibrosis induced by cholestasis.
Following a 21 day bile duct ligation, authors report higher levels of malondialdehyde, lower levels of superoxide-dismutase, and posi- tive oxLDL cholesterol staining were found in liver homogenates of jaundiced rats, as compared to sham operated animals. They suggest that oxLDLs are produced as an intermediate agent during exacerbated oxidative stress or, alternatively, that they contribute to mechanisms underlying the process of liver fibrosis. This issue includes five reports on original research related to the cardiovascular system: da Luz et al. studied 374 high risk patients submitted to coro- nary angiography and found that a high ratio of triglycerides and HDL-cholesterol is a powerful and independent marker of coronary disease: the increment of one quartile of that ratio is followed by a 30% increase in the frequency of extensive CAD, adjusted for other lipid variables and is the greatest influence among lipid variables. Bocalini et al. analyzed the effect of physical exercise on functional and quality of life in 22 patients with grade II and III heart failure (NYHA). Twenty two patients were randomized into training vs. non-training groups and the former went through a 6-months exercise program. Fitness and quality of life indices were evaluated and showed that guided and monitored physical exercise is safe and able to improve the functional capacity and quality of life in heart failure patients. Carvalho et al. studied 25 patients in optimized beta-blocked heart failure with an average left ventricular ejection fraction of 30%, and fourteen controls to evaluate heart rate dynamics during a treadmill cardiopulmonary exercise test. They found that no patient in the heart failure group reached the maximum heart rate for their age, despite the fact that the percentage increase of heart rate was similar to sedentary normal subjects. 
They proposed that a heart rate increase in optimized beta-blocked heart failure patients during car- diopulmonary exercise test over 65% of the maximum age-adjusted value should be considered an effort near the maximum. Costa e Silva et al. compared conventional and transdisci- plinary care in a tertiary outpatient clinic for patients after their first acute myocardial infarction. A total of 153 patients were allotted to conventional (n=75) or transdisciplianry (n=78) care. Patients were followed for 180 days. The primary outcome was clinical improve- ment, as evaluated by an index including reduction of body weight, lowering of blood pressure, smoking cessation, increase in physical activity, and compliance with medication. Compliance with diet and visits was higher for transdisciplinary care vs. conventional care; however, the transdisciplinary approach did not provide more clinical benefits than the conventional approach. Casella et al. describe the testing of a practical protocol to measure common carotid intima-media thickness that uses the combined values of two longitudinal examination angles to increase sensitivity. Duplex scan examination of carotid vessels was performed and the intima-media thickness of 407 common carotids were measured in three angles: transversal, longitudinal posterolateral, and anterolateral. In addition to numbers obtained from the three angles of measurement, a fourth visual perspective was obtained by combining the intima-media thickness results of longitudinal posterolateral and anterolateral longitudinal views, and considering the thickest wall measurement. Authors claim that the protocol is a practical method for obtaining common carotid artery intima-media thickness measurements. The combined longitudinal posterolateral and anterolateral longitudinal views provide a more sensitive evaluation of the inner layers of the carotid walls than isolated longitudinal views. 
Five studies cover problems in anatomy and function of the musculo-skeletal system: Lucareli and Greve describe knee joint dysfunctions that in- fluence gait in cerebrovascular injury in a study including 66 adult men and women with a diagnosis of either right or left hemiparesis, resulting from ischemic cerebrovascular injury; all underwent three- dimensional gait evaluation and an the angular kinematics of the joint knee were selected for analysis. Relevant clinical characteris- tics included mechanisms of loading response in the stance, knee hyperextension in single stance, and reduction of the peak flexion and movement amplitude of the knee in the swing phase. Authos suggest that these mechanisms should be taken into account when choosing the best treatment. Milani et al. checked on the correlation between the degree of gynoid lipodystrophy (cellulite) and lumbar hyperlordosis in 50 volunteers evaluated through digital photography, palpation, and thermograph, but no correlation was found between the angle of lumbar lordosis and degree of cellulite. Rosito et al. studied 49 patients submitted to an acetabular component revision of a total hip arthroplasty, using impacted hu- man (n=26) and bovine freeze-dried (n=25) cancellous bone grafts, and a reinforcement device. No clinical/radiographic differences were found between the groups and both showed an overall rate of 88.5% and 76% of graft incorporation. 420 CLINICS 2008;63(4):419-20In the August, 2008 issue of Clinics Rocha-e-Silva M Rai et al. studied anatomical variability of the omohyoid mus- cle and its clinical relevance in 35 cadavers. A double omohyoid was present in one cadaver; the inferior belly originated from the clavicle in three cadavers; the superior belly merged with the ster- nohyoid in two cadavers, and the omohyoid received additional slips from the sternum in one cadaver. Standard attachment and position of the omohyoid was observed in the remaining cadavers. 
Such variations are important because of the close relation of the muscle to the large vessels and the brachial plexus. Because of the direct adhesion of the intermediate tendon to the anterior wall of the internal jugular vein, and its connection with it through a thin lamina of the pretracheal layer of the cervical fascia, the contraction of the omohyoid muscle has a direct effect on the lumen of this vessel.

Aragão et al. performed metric measurements to verify the attachment levels of the medial patellofemoral ligament in 17 knees from 10 human cadavers, which were intact and showed no macroscopic signs of injuries. The medial patellofemoral ligament was present in 88% of the knees; the width of its insertion ranged from 16 to 38.8 mm, with a mean of 27.90 mm, and its mean length was 55.67 mm. The margins of the ligament were concave or rectilinear. At the upper margin, the concave form predominated and was better characterized, while at the lower margin the rectilinear form predominated.

Three papers deal with gynecologic and fertility issues:

Castro et al. report on a single-blind, randomized, controlled trial comparing pelvic floor muscle training, electrical stimulation, vaginal cones, and no active treatment in the management of stress urinary incontinence, in which the effectiveness of the three active treatments was compared. It was found that pelvic floor exercises, electrical stimulation, and vaginal cones are equally effective treatments and far superior to no treatment in women with urodynamic stress urinary incontinence. Pelvic floor training is recommended as the first-line strategy in this situation.

Lobo et al. evaluated the effects of estrogen treatment in combination with gestrinone in an experimental rat model of endometriosis. Uterine transplants were attached to the peritoneum of female Wistar rats via a surgical autotransplantation technique.
A high dose of estrogen caused macroscopic increases in the endometrial implant group compared with the other groups, similar to the increases seen in the proestrus phase. A low dose produced morphometric development of the implants, such as an increase in the number of endometrial glands, leukocyte infiltration, and mitosis. Gestrinone antagonized both doses of estrogen. The authors suggest that gestrinone antagonizes estrogen's effects on rat peritoneal endometrial implants.

Ibrahim et al. studied the antioxidant effect of alpha lipoic acid on the sperm quality of semen from six Boer bucks. Seminal analysis was performed at baseline, before and after incubation of the samples with a series of doses of alpha lipoic acid. The sperm motility rate was improved after incubation with alpha lipoic acid at a concentration of 0.02 mmol/ml. This concentration was also capable of reducing DNA damage.

Two studies address problems in ophthalmology:

Giampani Jr et al. report on the long-term efficacy and safety of trabeculectomy with mitomycin C for childhood glaucoma, a study with long-term follow-up of 114 patients. They conclude that trabeculectomy with mitomycin C is safe and effective for short-term or long-term treatment of congenital or developmental glaucoma. The frequency of bleb-related endophthalmitis was no higher in these patients than that described in adults.

Moraes and Susanna Jr. report on the correlation between the water drinking test and the modified diurnal tension curve in untreated glaucomatous eyes, and conclude that intraocular pressure peaks detected during the water drinking test could be used in clinical practice to estimate the peaks observed during the modified diurnal tension curve and to assess the status of the eye's outflow facility.

We also publish two original reports on sepsis: Rezende et al.
describe the epidemiology of severe sepsis in the emergency department and the difficulties in initial assistance for 342 of 5332 patients admitted to the emergency department of a public hospital. The authors find that the occurrence rate of severe sepsis in the emergency department was 6.4%, and that the rate of sepsis diagnosed by the emergency department team, as well as the number of patients transferred to the ICU, was very low. Educational campaigns are important to improve the diagnosis and, hence, the treatment of severe sepsis.

Freitas et al. analyzed the impact of the duration of organ dysfunction on the outcome of 56 patients with severe sepsis and septic shock; the duration of organ dysfunction prior to diagnosis was correlated with mortality. In a multivariate analysis, only organ dysfunction persisting longer than 48 hours correlated with mortality. These findings suggest that the diagnosis of organ dysfunction is not being made in a timely manner. The time elapsed between the onset of organ dysfunction and the initiation of therapeutic intervention can be quite long, and this represents an important determinant of survival in cases of severe sepsis and septic shock.

Two studies in infectology cover problems relating to leishmaniasis and HIV/AIDS:

Medeiros et al. studied the involvement of Leishmania (L.) amazonensis in American tegumentary leishmaniasis (ATL) in the state of São Paulo, Brazil. DNA sequencing permitted the identification of a particular 15-bp fragment (5' …GTC TTT GGG GCA AGT... 3') in all samples. Analysis by the neighbor-joining method showed the occurrence of two distinct groups related to the genus Viannia (V) and Leishmania (L). These results confirm the pattern of distribution and possible mutations of these species, as well as the change in the clinical presentation of ATL in the state of São Paulo.

Soeiro et al. performed a post-mortem histological analysis of the pulmonary pathology observed in 250 autopsies of HIV/AIDS patients.
This report covers a wide range of findings and is the first autopsy study to include demographic data, etiologic diagnoses, and the respective histopathological findings in patients with HIV/AIDS and acute respiratory failure. Further studies are necessary to elucidate the complete pulmonary physiopathological mechanisms involved in each HIV/AIDS-associated disease.

One study covers the problem of posttraumatic stress disorder (PTSD):

Assis et al. evaluated physical activity habits in 50 patients with PTSD and found substantial changes in habits following the onset of the disorder. While more than half of the patients participated in physical activities prior to PTSD onset, there was a significant reduction in their participation afterwards. The justifications for stopping physical activities or sport participation were lack of time and lack of motivation. The findings demonstrate that patients with PTSD have low levels of participation in sports or physical activities.

A disruption framework

This is an electronic reprint of the original article. This reprint may differ from the original in pagination and typographic detail.
Kilkki, Kalevi; Mäntylä, Martti; Karhu, Kimmo; Hämmäinen, Heikki; Ailisto, Heikki
A disruption framework
Published in: Technological Forecasting & Social Change
DOI: 10.1016/j.techfore.2017.09.034
Published: 01/04/2018
Document version: Publisher's PDF, also known as Version of Record
Published under the following license: CC BY-NC-ND

Please cite the original version: Kilkki, M., Mäntylä, M., Karhu, K., Hämmäinen, H., & Ailisto, H. (2018). A disruption framework. Technological Forecasting & Social Change, 129, 275-284. https://doi.org/10.1016/j.techfore.2017.09.034

A disruption framework
Kalevi Kilkki (a, corresponding author), Martti Mäntylä (a), Kimmo Karhu (a), Heikki Hämmäinen (a), Heikki Ailisto (b)
a Aalto University, Espoo, Finland
b VTT Technical Research Centre of Finland, Finland

Keywords: disruption; innovation; technology; ecosystem; strategy

Abstract: One of the fundamental dilemmas of modern society is the unpredictable and problematic effect of rapid technological development. Sometimes the consequences are momentous not only on the level of a firm, but also on the level of an entire industry or society. This paper provides a framework to understand and assess such disruptions, with a focus on the firm and industry levels. First, we give a generally applicable definition of a disruption as an event in which an agent must redesign its strategy to survive a change in the environment. Then we construct a layered model that spans from basic science to society and enables a systematic analysis of different types of disruption. The model also helps in analyzing the spread of innovations both vertically between layers and horizontally between industries.
Third, we introduce three main threats that may lead to a disruption and four basic strategies applicable when a disruption occurs. Finally, the framework is used to study four cases: GSM, GPS, the digitalization of photography, and 3D printing. The main contribution of this paper is a simple yet expressive model for understanding and analyzing the spread of industry-level disruptions through several layers and between industries.

1. Introduction

Innovation means different things to different people. However, for most of us innovation has a positive connotation. Disruption is, in turn, a negative term. Thus, there is a kind of internal conflict in the term disruptive innovation. Even more so with the term creative destruction, as coined by Joseph Schumpeter in 1942. Both terms leave open the question of whether the outcome will be socially beneficial or not; the terms hint that some entities will benefit while others will suffer. The role of new technologies in the redistribution of costs and benefits has been apparent since the early 19th century, when the Luddites fiercely protested the then-new textile industry. The dilemma between the actions necessary for the continuous development of modern societies and the requests to maintain the status quo and honor old traditions has been a central topic in political, social, and economic forums during the last 200 years.

After Schumpeter (1950), discussion about the effects of innovations gradually gained momentum. The diffusion of innovations has been studied since the early twentieth century (Tarde, 1903/1969). The concept of the S-curve and the adopter categorization by Rogers (1962/2003) have been widely used and referenced. Nevertheless, the terms disruptive technology and disruptive innovation were seldom used before Clayton Christensen published The Innovator's Dilemma in 1997.
Per Google Scholar, the number of scholarly articles before 1997 mentioning "disruptive innovation" or "disruptive technology" was 51 and 58, respectively, whereas innovation overall was mentioned in close to 100,000 articles. Christensen's book created a great deal of debate about the nature of disruptions. The number of articles discussing disruptive innovations rose from about ten per year in the mid-nineties to almost 3000 articles in 2015. Obviously, Christensen was able to identify and clarify the nature of an important idea.

Understandably, much of the existing literature focuses on disruptive innovation at the level of an individual technology or a single firm and often delves deep into the specific characteristics of the individual case. Yet historical examples show that truly significant disruptions also affect entire industries and even society: former industrial leaders may vanish and be replaced by new entrants, boundaries between formerly distinct industrial sectors may blur, and the new market conditions emerging from the disruption may require significant adaptations at the level of societies in terms of new institutions and regulation.

The main objective of this paper is to provide a simple yet expressive framework for studying and understanding disruptive changes, especially at the level of entire industries. To achieve this, we develop conceptual definitions, a layered framework, and a classification of strategies to cope with different types of disruption. The primary viewpoint of the paper is a combination of technology, business, and consumer behavior. However, because we want to present a general framework, we also need to consider social and political processes, as well as scientific and applied research. All definitions and classifications are devised to be applicable on all layers from science to society. Before going into the details of the framework in Section 3, we present a literature review on disruptions in the next section. In Section 4, several cases are analyzed through the presented framework. Finally, the general findings of the cases are presented in Section 5, with a discussion of the need for further studies on business disruptions.

(Technological Forecasting & Social Change 129 (2018) 275-284. Received 18 November 2016; accepted 17 September 2017; available online 10 November 2017. © 2017 The Authors. Published by Elsevier Inc.; open access under the CC BY-NC-ND license. Corresponding author: Kalevi Kilkki, Aalto University, Department of Communications and Networking, Konemiehentie 2, 02150 Espoo, Finland; kalevi.kilkki@aalto.fi.)

2. Literature review and the definition of disruption

Christensen's influence has been most prominent in technology-related business literature. Many books have discussed the interplay between technology and business. For instance, Berkun (2010, p. 62), Isaacson (2014, p. 288), Lessig (2008, p. 143), Naim (2014, p. 71), Norman (1998, p. 235), Rogers (1962/2003, 5th ed., p. 247), and Varian (2004, p. 26) approvingly reference Christensen's original thesis about disruptions. Typically, the attitude in such technical papers is that "disruptive" is a desirable trait, because the choice of the term suggests that the paper is presenting something important and possibly highly valuable. The greater the effect, or the more disruptive the innovation, the better.
Christensen's original idea was that an excessive reliance on the known and presumed needs of current customers can be harmful when a novel technology disrupts the market. The conflict between old and new needs may lead to a situation in which the incumbents concentrate on serving the old needs while new players capture a major portion of the market by serving new needs. However, Christensen's treatise in The Innovator's Dilemma has been criticized for cherry-picking examples and for lacking a general classification of disruptions; see Danneels (2004), King and Baatartogtokh (2015), Lepore (2014), Markides (2006), Sood and Tellis (2011), and Wadhwa (2015).

Moreover, some business literature about digital disruptions omits Christensen and the concept of disruption. For instance, Evans and Wurster (2000) use the terms "blowup" and "deconstruction" to address those cases that Christensen would call disruptions. Similarly, Brynjolfsson and McAfee (2014) refer only to Schumpeter's creative destruction and use the word disruption only occasionally, while Kelly (2016) discusses the significant future effects of novel technologies on our lives but does not mention Schumpeter or Christensen at all. These books also do not stress the difference between sustaining and disruptive technologies; rather, they consider digitalization and its economic and social effects as a complex process that includes phases of gradual evolution and intermittent rapid changes.

Other kinds of terminology have also been used. Discontinuous innovation was widely used before disruptive technology became popular; see Anderson and Tushman (1990), Lynn et al. (1996), Veryzer (1998), and Kaplan (1999). Disruptive is a stronger and more tangible qualifier than discontinuous, which may explain the popularity of disruptive across many fields of inquiry. However, the events discussed under these two terms, disruptive and discontinuous innovation, are very similar.
Various definitions of disruption can be found in the literature. Sood and Tellis (2011) state that technology disruption occurs when a new technology exceeds the performance of the dominant technology on the primary dimension of performance. Similar definitions can be found in Govindarajan and Kopalle (2006), Schmidt and Druehl (2008), and Utterback and Acee (2005). Linton (2002) refers to Abernathy and Clark (1985) and states that "Disruptive innovations are based on a different technology base than current practice, thereby destroying the value of existing technical competencies." Kassicieh et al. (2000), Kostoff et al. (2004), Rothaermel (2002), and Volberda et al. (2011) have provided similar definitions. According to Danneels (2004), "a disruptive technology is a technology that changes the bases of competition by changing the performance metrics along which firms compete." Similar definitions are presented by Obal (2013) and Nagy et al. (2016). According to Walsh et al. (2002), Geoffrey Moore noted in 1991 that "disruptive technologies generate discontinuous innovations that require users/adopters to change their behavior in order to make use of the innovation." Albors-Garrigos and Hervas-Oliver (2014), Lyytinen and Rose (2003), Bessant et al. (2010), Paap and Katz (2004), and Urban et al. (1996) have presented similar kinds of definitions. Sometimes disruptions are initiated by a new business model rather than by a new technology, as discussed in Ghezzi et al. (2015), Pisano (2015), Sabatier et al. (2012), and Sosna et al. (2010). Finally, many articles (e.g., Kassicieh et al., 2002; Laplante et al., 2013; Markides, 2006; and Yu and Hang, 2010) discuss several aspects of disruptions without giving one clear definition.

In most of the definitions outlined above, the authors define disruption by searching for the common denominator in a set of disruptions.
Instead, we take a conceptual approach that starts with the concept of disruption and aims to give a definition that is applicable to all fields, not only the business sector. Cambridge Dictionaries Online (2017) gives the following definition for disrupt: to prevent something, especially a system, process, or event, from continuing as usual or as expected. An agent, when pursuing some predefined goals, makes intentional decisions and performs actions that, in turn, affect other entities. Sometimes the effects are disruptive, either intentionally or unintentionally. Thus, a disruptor is an agent that disrupts the functioning of some other agents. Those disrupted agents can be called disruptees; see, e.g., Christensen (2013) and Yu and Hang (2008). An agent can thus be a disruptor, a disruptee, or a neutral actor from the perspective of a disruption.

But not all entities are agents. In an ecosystem, a majority of entities stay passive, without goals, expectations, or intentions. For instance, although money is an integral part of all business ecosystems, money in and of itself has no intentions; only the owner of the money has intentions. In our framework, disruptive is a property of a passive entity that mediates the effects from disruptors to disruptees. An ecosystem is thus a medium for disruptions. If one says that an ecosystem is disrupted due to an event, the actual claim is that so many agents in the ecosystem are disrupted that the event has a perceptible influence on the ecosystem as a whole.

As to the term innovation, Merriam-Webster (2017) gives two main definitions: 1) the introduction of something new, and 2) a new idea, method, or device. We prefer here the latter meaning, in which innovation refers to an actual object (e.g., the charge-coupled device (CCD) that led to digital cameras) instead of the process initiated by an object.
Moreover, we use the term disruptive innovation rather than disruptive technology because innovation is a broader concept and covers business, institutional, and user-generated innovations. Thus, we propose the following definitions: An agent is disrupted when the agent must redesign its strategy to survive a change in the environment. From the perspective of a system, a disruption is an event in which a substantial share of the agents belonging to the system is disrupted. A disruptive innovation is a passive entity that mediates a disruption in a system.

3. Framework

As the literature review in the previous section demonstrated, numerous viewpoints and methods have been proposed to assess disruptive innovations. Typically, someone who wants to understand a disruption may start either with a specific viewpoint (say, strategic choices within a firm) or with a relevant book or a set of articles. In contrast, our aim is to build a framework that makes it possible to flexibly choose among different viewpoints and different methods, and even to use several of them in parallel. The framework consists of two parts: first, a model with six layers to assess the dynamics of disruptive innovations and, second, a model with three types of threat and four types of strategic choice for a firm encountering a disruption.

3.1. A layered model

The main strength of the definitions presented in the previous section is that they are applicable independent of the particular nature and process of a disruption. The definitions can also be applied at any agent layer. These layers range from scientific research to the authorities defining the rules of society, as illustrated in Fig. 1. Scientific disruptions, called paradigm shifts by Thomas Kuhn (1962), may occur on the science layer; the discovery of the theory of evolution is a clear example.
Even without a paradigm shift and grand disruptors, the accumulation of scientific knowledge enables technical inventions in the research and development (R&D) units of private companies and public organizations. On this layer, many important decisions are made at the level of the middle managers who first identify the potential of an invention. As a result, a manager can become a disruptor within a firm by allocating resources from old technologies to the development of a commercially successful invention, that is, an innovation.

On the layer of firms, the key decision maker when an innovation is brought to market is the Chief Executive Officer together with other top managers. Inside a firm, disruptive technologies typically require new skills and create pressure to change the value generation models of the firm. Disruptions also occur because of the adoption of an established technology in a new business sector or because of a new combination of two or more old technologies, as discussed by Arthur (2009) and Berkun (2010).

The effects of disruptions may diffuse through multiple layers, including technology, business, and consumers, as presented in Funk (2008). A disruption started by a new technology affects the value generation model of a firm, which is then able to offer new products. The new product might pass the industry layer without any immediate disruptive effect on the industry architecture. However, if the product creates significant demand among consumers, technology push may turn into market pull, and the disruption can diffuse back to the industry and firm layers with noticeable consequences. As an example, the short message service (SMS) was adopted by consumers much more rapidly than the service providers expected (Hillebrand, 2010). SMS demonstrated the urgent need for online social interaction that later led to the rise of social media applications.
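The layered diffusion just described can be made concrete with a small sketch. The layer names below follow the text's description of Fig. 1 (science to society); the code itself is an illustrative assumption of ours, not an artifact of the original paper.

```python
# Illustrative sketch (our assumption, not the authors' artifact): the six
# layers of the framework as an ordered list, with a helper that returns the
# chain of layers an innovation may traverse as its effects diffuse upward.
LAYERS = ["science", "R&D", "firms", "industry", "consumers", "society"]

def upward_diffusion(origin: str) -> list[str]:
    """Layers from the originating layer up to society (inclusive)."""
    return LAYERS[LAYERS.index(origin):]

# Example: an invention made in R&D may propagate through firms and the
# industry to consumers and, finally, to society-level regulation.
print(upward_diffusion("R&D"))
```

Note that the text also describes effects diffusing back down (technology push turning into market pull); a fuller model would allow traversal in both directions rather than this one-way chain.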
On the lower layers (science, R&D, and firms), it is often possible to identify the disruptor, that is, a person or a group of persons initiating a disruption with a new publication, patent, or product. These accomplishments can be used to measure the significance of a disruption. On the upper layers, the situation is fuzzier. On the industry layer, new products may cause changes in the structure of value networks (see, e.g., Allee, 2000). Disruptions affect an industry by changing the relationships between different players, resulting in mergers, acquisitions, and bankruptcies that can be used as measures of the strength of a disruption.

Disruptive products can also prompt a momentous change in usage behavior. As a recent example, many consumers now spend several hours per day using their smartphones (Finley and Soikkeli, 2017). Additionally, other aspects of the smartphone innovation affect daily life. For example, smartphone applications collect and use a large amount of private information (often for targeted advertising purposes), thus prompting user privacy concerns (Rainie, 2016). Those concerns create requests for regulating the behavior of enterprises and other organizations by means of new rules. Therefore, notable changes in regulation can be considered a strong indication of a disruption on some of the lower layers in Fig. 1. In extreme cases, a disruptive technology together with other social changes can even disrupt the social order of nations; the Arab Spring in 2010 is a notable example.

Fig. 1. A layered structure to illustrate the interactions in disruptive processes. The boxes in the middle (theory, technology, etc.) represent potentially disruptive entities.

3.2. Firm-level strategies

As to business disruptions, the most critical decisions are usually made in firms. The main strategic choices for firms are illustrated in Fig. 2. There are three axes: industries, product quality, and the number of potential customers. Different industries serve different customer needs by offering different types of products. Most customers have moderate product requirements, while some customers are more demanding and willing to pay more than others. In a stable situation, a couple of firms usually dominate the largest customer segment with moderate requirements.

Now we can consider how a disruptive innovation affects the status quo. Three types of entrants pose different threats to the incumbents in industry A. New entrants armed with an innovation may invade the market by first serving the less demanding customers (threat T1). The entrant assumes it can improve product quality so quickly that firms using the old technology cannot react in time and thus lose the battle. This is the archetypal disruption in Christensen's model, while the other threats can be viewed as extensions to that basic model. In the second type of threat (T2), the main competitive advantage of the entrant is the superior quality demanded by high-end customers. If the price of the new product can be decreased quickly enough, the innovation may capture a major part of the market from the incumbents. Finally, the most serious threats often arise when another industry expands into the area of an established industry and threatens to change the business logic of the old industry (threat T3).

When an entrant generates threat T1, established players have a motive to move upwards on the scale of product quality (high-end strategy, S1). Even if the firm will likely lose market share, gross margins are typically higher in the upper product categories. However, there is rarely enough room in the upper segment for all established players. The situation is even more problematic when a strong entrant enters the market with a high-quality product (T2).
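The threat and strategy taxonomy of Fig. 2 can be captured as plain data. The sketch below is our illustrative assumption (not code from the paper); in particular, the mapping of each threat to candidate incumbent responses is distilled from the surrounding prose, with the passive strategies S3 and S4 treated as fallbacks available under any threat.

```python
# Illustrative sketch (our assumption, not from the paper): the three threats
# and four incumbent strategies of Fig. 2 encoded as dictionaries.
THREATS = {
    "T1": "entrant serves the less demanding customers first",
    "T2": "entrant offers superior quality for high-end customers",
    "T3": "another industry expands into the established market",
}

STRATEGIES = {
    "S1": "high-end strategy: move up the product-quality scale",
    "S2": "low-end strategy: reduce costs, variety, and quality",
    "S3": "do nothing: exit the business or face bankruptcy",
    "S4": "move to another industry",
}

# Hypothetical mapping of each threat to the responses discussed in the text
# (S3 and S4 remain available regardless of the threat type).
CANDIDATE_RESPONSES = {
    "T1": ["S1", "S3", "S4"],
    "T2": ["S2", "S3", "S4"],
    "T3": ["S1", "S2", "S3", "S4"],
}

def responses(threat_id: str) -> list[str]:
    """Readable descriptions of the candidate strategies for a given threat."""
    return [STRATEGIES[s] for s in CANDIDATE_RESPONSES[threat_id]]

print(responses("T1")[0])  # the text links T1 to the high-end strategy (S1)
```

Encoding the taxonomy this way makes the asymmetry explicit: a low-end attack (T1) pushes incumbents upward, a high-end attack (T2) pushes them downward, and a cross-industry attack (T3) leaves every option on the table.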
In principle, established players may move downwards by reducing costs, diminishing product variety, and lowering quality (low-end strategy, S2). From an organizational viewpoint, this kind of move is uncomfortable, because it threatens the position and status of some integral parts of the organization, including R&D and advanced customer support. But doing nothing is also a strategy; it likely results in bankruptcy or in a voluntary exit from the business (S3). Finally, some firms move to another industry (S4). Different strategies may lead to different kinds of interaction with agents on the other layers illustrated in Fig. 1: the low-end strategy (S2) leads to a change in the interaction with consumers, the high-end strategy (S1) requires investments in R&D, and moving to another industry (S4) changes the firm's position in the business ecosystem.

4. Case examples

In this section we check the feasibility of our framework by assessing some major disruptions. In 1439, the introduction of movable type printing created a massive disruption that contributed to the rise of modern society. Many agents, particularly state leaders, had to make critical strategic decisions on whether to embrace or suppress the use of the innovation. The Turkish Sultan decided to effectively ban printing, and thereby helped maintain the status quo in Turkish society for centuries but hindered intellectual and economic advancement. On the other hand, the Catholic Church accepted the printing press and thus helped spur the transition from the Middle Ages to the Renaissance and New Age, but it also enabled the Reformation and Protestantism. This major disruption demanded critical strategic decisions on the highest layer of society. For instance, in 1589, Queen Elizabeth I declined to grant a patent for a knitting machine because she was afraid that knitting machines would create political instability (Acemoglu and Robinson, 2013, p. 182).
There were no firms or industries to support the spread of the invention, only individuals. In general, the spread of innovations was defined primarily by the institutions adopted by different nations.

As to modern technology, the computer, and more generally information technology, has produced major disruptions that can be evaluated by using available data sources. For instance, many Internet and web-enabled services have disrupted conventional businesses: first CD-ROM technology and later Wikipedia disrupted the encyclopedia business (see, e.g., Anderson, 2009). Similarly, Uber is causing tremors in the taxi business and Airbnb in the hotel business. In a certain sense, all these examples started with low prices (threat T1 in Fig. 2), but at the same time they offered additional benefits unavailable in the conventional model. In these cases, new entrants started within a certain field (like Amazon in books) but then expanded to other fields of business (see, e.g., Rothman, 2017). More disruptors are emerging due to the continuing effects of Moore's law and the development of machine learning and artificial intelligence.

In the following case analysis, we use our framework to analyze a few industries more closely, using both quantitative and qualitative approaches. As explained in Section 3.1, disruptions leave their mark in databases covering publications, patents, product sales, usage of applications, stock prices, mergers and acquisitions, and regulatory decisions. Thus, for the quantitative approach, we collect and plot the yearly development of some of these variables (see Figs. 3, 4, 5, and 9) and use them as proxies indicating the strength and timing of innovation activity at a specific layer in our framework (refer to the right-hand side of Fig. 1).
The primary data sources in the figures are: articles: IEEE Xplore (2017); patents: United States Patent and Trademark Office (2017); books: Amazon.com (2017); stock prices: Yahoo (2017); and the share of mobile phone features in Finland: Riikonen et al. (2015). For the qualitative analysis, we collected the main events in selected case industries and built a synthesizing illustration by positioning the events over time on the horizontal axis and according to the layers of our framework on the vertical axis (see Figs. 6, 7, and 8). We present our case analysis in three parts. First, we utilize our framework with a quantitative approach, and second, with a qualitative approach, to consider the development of mobile phones, GPS, and digital photography. These cases illustrate how the effects of innovations spread between industries. They also provide a useful set of occurrences to demonstrate the usefulness of the framework with (mostly past) business disruptions. Third, we use 3D printing as an example of a potential future disruptive innovation that is still in the early phase of development.

4.1. Quantitative analysis of GSM, GPS, and digital photography: timing and scale of disruptions

Over the last 30 years mobile phones have had dramatic effects on everyday life all over the world. The first generation mobile phones using analog technology were cumbersome and expensive and were not widely adopted by consumers. The first truly successful mobile technology was GSM (Global System for Mobile Communications).

Fig. 2. Three industries (A–C), three threats (T1–T3), and four strategies for incumbent players in industry A (S1–S4). Threats are made by disruptors while disruptees need to select a strategy to cope with the effects of a disruption.

K. Kilkki et al. Technological Forecasting & Social Change 129 (2018) 275–284

Fig. 3 illustrates the three phases of GSM development: first, the relatively quiet years until 1994, then the phase of rapid development from 1995 to 2000, and finally a stagnant phase when the main development effort moved from GSM to the next generations of mobile technology (3G, 4G, and 5G). In the case of GSM, the number of scientific articles started to rise several years before successful businesses, because mobile service providers and device vendors were aware of the necessity of the change from analog to digital technology. Still, the number of US patents remained low (partly because other technologies were adopted in the US). Even though the main actors in the mobile ecosystem were prepared for the change induced by GSM, they did not anticipate the dramatic increase of demand illustrated in Fig. 3. The sudden change in the mobile phone business gave an opportunity for Nokia to rise from a small player to a global mobile phone giant in a couple of years (illustrated by the stock value in Fig. 3). At the same time as GSM, two other technologies were in a phase of rapid development: digital photography and GPS (Global Positioning System). Figs. 3, 4, and 5 reveal similarities in the development of GSM, digital photography, and GPS but also some notable differences. In the case of GPS, a lot of patent applications were filed already over 10 years before GPS became popular in consumer devices, because of the usage of GPS in other fields. With digital photography the lag between patents and sales was roughly 6 years, whereas with GSM it was less than five years. The lag between sales and patents illustrates how innovation propagates, with some delay, from the scientific-community and R&D-unit layers up to the firm and consumer layers. It seems that speed, i.e., a small delay and a sharp angle, in addition to the scale of change, contributes to disruptiveness.
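The patent-to-sales lag discussed above can be estimated, for instance, as the offset that maximizes a simple cross-correlation between two yearly series. This is a sketch with made-up numbers, not the series plotted in the figures:

```python
def best_lag(x, y, max_lag=10):
    """Return the lag (in years) at which series y best trails series x,
    using a plain dot-product cross-correlation of two equal-length
    yearly series. A positive lag means y follows x."""
    def score(lag):
        # Overlap x[i] with y[i + lag] and sum the products.
        return sum(x[i] * y[i + lag] for i in range(len(x) - lag))
    return max(range(max_lag + 1), key=score)

# Made-up yearly counts: patents peak about 3 years before sales.
patents = [0, 1, 4, 9, 5, 2, 1, 0, 0, 0]
sales   = [0, 0, 0, 0, 1, 5, 9, 6, 2, 1]
print(best_lag(patents, sales))  # 3
```

A short lag (a "sharp angle" between the curves) would then quantify the speed component of disruptiveness alongside its scale.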
The advance of the digital photography disruption is manifested in the decline of yearly sales of film cameras and the increase in the sales of digital cameras. The number of film cameras sold peaked in 1998, when 36 million units were sold (CIPA, 2017). In 1999, sales began a constant decline, and in less than ten years sales dropped below a million units per year. During the same period the sales of digital cameras reached 100 million units. Since then, sales of digital cameras have declined to 35 million units because the next wave of digital photography entered the scene, namely the smartphone (see MT in Fig. 3). Smartphone sales in 2015 reached 1.4 billion units. This serves as an illustrative example of successive waves of disruptions enabled by convergent developments in other fields, which will be explored in more detail in the qualitative analysis.

4.2. Qualitative analysis of GSM, GPS, and digital photography: entanglement of disruptions

Digital technology has created several disruptions in the telecom sector. All established telecom firms have had to make major strategic decisions: first in the fixed network area and then in the mobile network and device area. Nokia was a disruptor in the first phase but was itself later disrupted by the software- and Internet-enabled smartphones made by Apple and Google (see Nokia's stock value and the share of multi-touch phones in Fig. 3). As a strategic consequence, Nokia was forced to sell its phone business to Microsoft in 2013. Apart from the disruptive developments within the industry, mobile communications have had significant disruptive effects on other fields, including GPS devices and cameras, which we will examine more closely. The history of GPS is illustrated in Fig. 6. Whereas photography emerged as a device-driven technology, satellite navigation required massive investments in satellite systems and is therefore necessarily government-driven.
The early phases in the 1960s (Sputnik 1, TRANSIT system) were driven by military interests. Because of the tragedy of Korean Air Lines Flight KAL007, the US government allowed limited civilian use (e.g., maritime, aviation) when the first GPS satellite was launched in 1989. This event opened the GPS device market. The first device firms, Magellan and Garmin, exploited military synergies (and the government-subsidized satellites) while entering the non-military personal navigator industry (threat T3 to traditional navigation firms). The GPS market got a major boost due to the improvements in GPS accuracy in 2000, when the US government removed Selective Availability. As one consequence, it became clear that GPS would be superior in outdoor positioning compared to the triangulation technologies of GSM mobile networks. Mobile operators and mobile phone manufacturers became interested, and Benefon launched the first GPS-enabled phone in 2001. The emergence of smartphones with GPS (see Fig. 4) partly explains the financial difficulties of Garmin (specialized in navigation devices) as well as the acquisitions of Navteq (specialized in navigation maps) by Nokia (2007) and CSR/SiRF (specialized in GPS microchips) by Qualcomm (2015). As to the consumer layer, GPS did not have any noticeable effect on purchase decisions of mobile phones in the period of 2004–2013 (Kekolahti et al., 2016). Thus, it seems that GPS did not disrupt the mobile phone market. On the other hand, none of the big GPS personal navigator vendors (e.g., Garmin, Magellan, and TomTom) could enter the mainstream mobile phone industry; instead, they remained specialized in their narrow segment.

Fig. 3. The history of GSM in numbers. A: the number of articles per year with GSM in abstract; B: the number of books published per year with GSM in title; Pat: the number of patents filed per year, based on granted patents 6/2017, with GSM in abstract; GSM: the number of GSM phones sold per year (millions, source: Häikiö, 2001, p. 179); Mob: the number of mobile phones sold per year (millions); Nok: the market capitalization of Nokia (billions of euros, source: Nokia's annual reports); MT: the share of mobile phones with multi-touch screens in Finland (%).

Fig. 4. The history of GPS in numbers. A: the number of articles per year with GPS in abstract; B: the number of books published per year with GPS in title; Pat: the number of patents filed per year, based on granted patents 6/2017, with GPS in abstract; Garmin: the stock price of Garmin Ltd. ($ US); Ph: the share of mobile phones with GPS in Finland (%).

Fig. 5. The history of digital photography in numbers. A: the number of articles per year with digital photography or digital camera in abstract; B: the number of books published per year with digital photography in title; Pat: the number of patents filed per year, based on granted patents 6/2017, with digital photography or digital camera in abstract; Pat*: the estimated number of filed patents per year, estimated based on the granted patents by 6/2017; S-A: the number of analog cameras sold per year (millions, source: CIPA, 2017); S-D: the number of digital cameras sold per year (millions, source: CIPA, 2017); K: the stock price of Kodak ($ US); Ph: the share of mobile phones with camera in Finland (%).
Overall, as witnessed by the societal impacts of regulation (emergency call positioning such as E911 and eCall, privacy rulings such as E-Privacy) and the commoditization of smartphones (due to government satellite subsidies and Google's free navigation strategy), GPS is an example of full penetration through the vertical layers of our framework. The origins of digital photography (Fig. 7) can be traced to basic technological research in the 1960s, leading to the invention of the fundamental component, the CCD image sensor, in 1969. A completely digital camera nevertheless required the converging development of many other technologies (such as the development of processors and algorithms for image processing and large memory chips for storing the images); thus, the first digital cameras arrived on the market only around the end of the 1980s. During the 1990s, digital cameras were chiefly aimed at the high-end market of professional photography, corresponding to threat T2 of Fig. 2 against traditional analog photography. However, digital cameras soon expanded in volume to the low-end market as well (T1), as illustrated by the sharp rise in sales in Fig. 5. The impact of this disruption proved to be disastrous to the established firms of the market, culminating in the bankruptcy of Kodak in 2012 (strategy S3). The low-end market was further fueled when the first mobile phones with embedded cameras entered the market. The almost simultaneous adoption of 3G mobile networks ensured that camera phones had sufficient communications capabilities to disrupt established photography, even though the picture quality and other characteristics of the first camera phones were relatively modest. For instance, the share of phones with a camera increased from 21% in 2005 to 65% in 2009 in Finland (Kivi et al., 2012). The availability of a camera was one of the few features that had a discernible positive effect on the sales of mobile phone models between 2004 and 2007 (Kekolahti et al., 2016).
By 2008, Nokia had become the world's largest camera manufacturer. Thus, Nokia and other mainstream camera phone manufacturers posed the threats T1 and T3 of Fig. 2 against traditional digital photography. In our layered framework, this illustrates a horizontal spread of innovation from one industry to another. Once established, camera phones enjoyed significantly larger economies of scale compared to traditional digital cameras. By that time, however, the next wave of the digital photography disruption was already apparent. Powered by rapidly developing cloud data storage, websites for storing and sharing digital photographs appeared in the mid-2000s and were eventually integrated into the growing empires of Web giants such as Yahoo (Flickr) and Facebook (Instagram). With this, the value of photographs moved from cameras and phones to social media sites that made the photographs available to other users (threat T3 against the smartphone business). This has led to new user behaviors such as using photography to document all kinds of happenings (not just memorable events) or to communicate in peer groups (e.g., the “selfie” phenomenon). In our layered framework, this illustrates a vertical spread of innovation from a lower level upwards. The emergence of these behaviors has also raised social issues, such as the balance of freedom of expression and personal privacy.

4.3. 3D printing—a future disruptive innovation?

The basic idea of 3D printing, to create 3-dimensional shapes by stacking 2-dimensional cross sections, can be traced to ancient humans; this is how much of Neolithic pottery was created before the invention of the potter's wheel. Nevertheless, the birth of modern 3D printing took place in the late 1980s and early 1990s with the invention and patenting of the first practical technologies, such as stereolithography (SLA), selective laser sintering (SLS), and fused deposition modelling (FDM) (see Fig. 8).
Much of the 1990s was characterized by rapid progress, with new materials and processes being continuously introduced to increase the range of parts that could be created. Powered by these inventions, the market developed rapidly under the parallel forces of creative destruction and rapid consolidation. By the new millennium, industrial 3D printing was well established in design offices and had started to make inroads in actual manufacturing (initial threat T3 against traditional manufacturing). A second wave of the disruption commenced in the mid-2000s, when the first printers aimed at consumer markets appeared (initial threat T3 against industrial manufacturing); this coincided with the expiration of some of the core patents from the 1980s. Akin to the development of web photography, this development gained speed from social media and the web economy being built around it. In particular, crowdsourcing spurred the rapid development of many new entrants competing to create printer kits for consumers. The growing consumer market also opened the door for Internet-enabled 3D printing app stores, sites specializing in publishing 3D designs. Inevitably, social concerns have also appeared, such as the appearance of the Defense Distributed website specializing in 3D-printable gun designs. After some controversy, it discontinued its offering in 2013. As to the disruptiveness of 3D printing, the most surprising observation in Fig. 9 is that all the metrics (articles, patents, books, sales, and the stock price of 3D Systems) accelerated within a relatively short period of time (2010–2014) after a long quiet period. The quickness of the change indicates that other sectors and industries may be unprepared for the possible disruptive effects of 3D printing.

Fig. 6. Milestones of GPS satellite navigation.
Whether 3D printing turns out to be a genuinely disruptive technology akin to the web remains to be seen, though the rise during the last 5 years appears promising.

Fig. 7. Milestones of digital photography.

Fig. 8. Milestones of 3D printing.

5. Discussion

The main contribution of this paper is a simple yet expressive model for assessing industry-level disruptions. More specifically, our framework helps to understand how a disruptive innovation propagates between layers, from science to society, and how it may spread from one industry to another. In this section we discuss the spreading of disruptive innovations from three perspectives: vertical vs. horizontal direction, entanglement with other innovations, and the specific role of so-called generic disruptors. To conclude, we also discuss practical implications and provide avenues for future research. Earlier research on the spreading of innovation has mainly come in two streams. Diffusion of innovation research (e.g., Rogers, 2003; Wejnert, 2002) has looked at how a particular innovation is spread and adopted among a group of actors in a social system. In terms of our framework, this stream of research has mainly focused on the spreading of innovation within one layer, be it firms or consumers. Second, technology transfer research has looked at policies to promote innovation transfer from academia to industry (for a review, see Bozeman et al., 2015). In our model, this corresponds to spreading from the scientific layer up to the industries layer. Our framework extends these research streams by providing a systemic perspective on the spreading of innovation between the entire stack of layers (vertically) and also between industries (horizontally).
As an example, CCD image sensor technology, after propagating from academia to the industry layer, later spread from the original camera industry to another, namely the smartphone industry, disrupting the original receiving industry. Although we have traced the roots of the three cases to several specific scientific discoveries or technological innovations, all cases were the result of many convergent technological developments, without which they would not have been successful. Apart from image sensors, the breakthrough in digital photography also depended on the general progress of microelectronics and embedded computing, making possible the image processing and storage required for the complete camera package. Later it gained further momentum from a convergence with the mobile phone and cloud-based Internet service disruptions. Correspondingly, the spreading of GPS to mobile phones created the need for more user-friendly maps and cloud-based Internet services. This contributed to another disruption, in which Internet- and software-driven firms such as Google and Apple produced smartphones and platforms that were able to challenge and replace traditional mobile phone firms such as Nokia (Kekolahti et al., 2016). Likewise, although the original birth of 3D printing depended critically on the invention of suitable materials and processes, the later development of consumer 3D printing benefited from a convergence of relevant information and communications technologies (such as inkjet printer heads repurposed for 3D printing, and the Internet). These findings are in line with earlier research that has noted how digital technology enables distributed and combinatorial innovation (Yoo et al., 2012).
The layered framework we have presented contributes to the existing research by providing an instrument to track how preceding developments and disruptions interact with the current disruption, both vertically between layers and horizontally between industries. How does the entanglement of disruptions take place? Earlier research on service innovation (Barrett et al., 2015) and digital innovation (Yoo et al., 2012) has emphasized the role of pervasive digital technologies. This is confirmed by our study. A common characteristic of the disruption case studies is that their path was significantly changed by their crossover with Internet technology. Internet-based photo services are now the dominant form of sharing photographs among families and peer groups (after showing photographs directly from the phone screen to others). Likewise, the progress of consumer 3D printing is linked with the Internet economy through the symbiotic progress of 3D model app stores and increasingly web-enabled 3D printing services. This supports the view that the Internet specifically is a generic enabler and disruptor that has the power to alter the course of other disruptive developments once they become entangled with it (Lyytinen and Rose, 2003). From the industry-level perspective of our study, a generic disruptor may also act as a bridge allowing disruptions to spread from one industry to another. Regarding practical implications, the Internet appears to have a role similar to that of steam power in the first industrial revolution and electrification in the second, lending some credence to the view that we are now witnessing the third (or, according to some, fourth) industrial revolution through the development of the Internet of Things (IoT). We believe our framework is especially suitable for analyzing developments, such as IoT, in which several layers, from science to society, are involved and impacts are typically felt across several industry sectors.
In a networked economy, managers need to be aware of generic disruptors: not only technologies, but also social or business innovations that may spread from another industry. While these disruptions may pose a threat, there are ways to counter them (see Fig. 2). Our research opens up several future research possibilities. First, the distributed and combinatorial nature of disruptive innovations calls for future research on the diffusion of innovation, to understand how entangled innovations and practices are simultaneously diffused in a social system, fueling each other's adoption. Further, as Yoo et al. (2012, p. 1403) point out, innovations “will not simply spread but will mutate and evolve as they spread.” Second, our preliminary quantitative analysis also calls for future research on the temporal aspects of disruptive innovations. One interesting temporal aspect to look at is successive disruptions across industries, such as in the case of digital photography. Additionally, future research could examine the relationship between diffusion time (from science to products) and the disruptiveness of an innovation. In other words, it is potentially not only the scale, but also the speed, that together determine the disruptiveness of an innovation.

Conflicts of interest

None.

Acknowledgements

This work has been supported by the project Digital Disruption of Industry funded by the Strategic Research Council (Grant number: 292889) of Finland. The authors thank Dr. Risto Sarvas and Dr. Jukka Tuomi for their comments on the content of Figs. 7 and 8, respectively, and Benjamin Finley for valuable comments.

Fig. 9. The history of 3D printing in numbers.
A: the number of articles per year with 3D printing in abstract; B: the number of books published per year; Pat: the number of patents filed per year, based on granted patents 6/2017, with 3D printing or solid freeform fabrication in abstract; Pat*: the estimated number of filed patents per year, estimated based on the granted patents by 6/2017; sales: the number of 3D printers sold per year (thousands, source: www.3ders.org, 2016); 3D Sys: the stock price of 3D Systems ($ US).

References

Abernathy, W.J., Clark, K.B., 1985. Innovation: mapping the winds of creative destruction. Res. Policy 14 (1), 3–22.
Acemoglu, D., Robinson, J.A., 2013. Why Nations Fail: The Origins of Power, Prosperity, and Poverty. Crown Business, New York, NY.
Albors-Garrigos, J., Hervas-Oliver, J.L., 2014. Creative destruction in clusters: from theory to practice, the role of technology gatekeepers, understanding disruptive innovation in industrial districts. In: Portland International Conference on Management of Engineering & Technology (PICMET), pp. 710–722.
Allee, V., 2000. Reconfiguring the value network. J. Bus. Strateg. 21 (4), 36–39.
Amazon.com, 2017. Advanced search. https://www.amazon.com/Advanced-Search-Books/b/ref=sv_b_0?ie=UTF8&node=241582011 (retrieved 07-22-2017).
Anderson, C., 2009. Free: The Future of a Radical Price. Random House, New York, NY.
Anderson, P., Tushman, M.L., 1990. Technological discontinuities and dominant designs: a cyclical model of technological change. Adm. Sci. Q. 604–633.
Arthur, W.B., 2009. The Nature of Technology: What it Is and How it Evolves. Simon and Schuster, New York, NY.
Barrett, M., Davidson, E., Prabhu, J., Vargo, S.L., 2015. Service innovation in the digital age: key contributions and future directions. MIS Q. 39 (1), 135–154.
Berkun, S., 2010. The Myths of Innovation. O'Reilly Media, Inc., Sebastopol, CA.
Bessant, J., von Stamm, B., Moeslein, K.M., Neyer, A.-K., 2010. Backing outsiders: selection strategies for discontinuous innovation. R&D Manag. 40 (4), 345–35.
Bozeman, B., Rimes, H., Youtie, J., 2015. The evolving state-of-the-art in technology transfer research: revisiting the contingent effectiveness model. Res. Policy 44 (1), 34–49.
Brynjolfsson, E., McAfee, A., 2014. The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies. WW Norton & Company, New York, NY.
Cambridge Dictionaries Online, 2017. Disrupt. http://dictionary.cambridge.org/dictionary/english/disrupt (retrieved 07-22-2017).
Christensen, C.M., 1997. The Innovator's Dilemma: When New Technologies Cause Great Firms to Fail. Harvard Business Review Press.
Christensen, C.M., 2013. Disruptive innovation. In: The Encyclopedia of Human-Computer Interaction, 2nd ed. https://www.interaction-design.org/literature/book/the-encyclopedia-of-human-computer-interaction-2nd-ed/disruptive-innovation (retrieved 07-13-2017).
CIPA, 2017. Camera and Imaging Products Association. http://www.cipa.jp/stats/dc_e.html (retrieved 05-25-2017).
Danneels, E., 2004. Disruptive technology reconsidered: a critique and research agenda. J. Prod. Innov. Manag. 21 (4), 246–258.
Evans, P., Wurster, T.S., 2000. Blown to Bits: How the New Economics of Information Transforms. Harvard Business Press, Boston, MA.
Finley, B., Soikkeli, T., 2017. Multidevice mobile sessions: a first look. Pervasive Mob. Comput. 39, 267–283.
Funk, J.L., 2008. Components, systems and technological discontinuities: lessons from the IT sector. Long Range Plan. 41 (5), 555–573.
Ghezzi, A., Cortimiglia, M.N., Frank, A.G., 2015. Strategy and business model design in dynamic telecommunications industries: a study on Italian mobile network operators. Technol. Forecast. Soc. Chang. 90, 346–354.
Govindarajan, V., Kopalle, P.K., 2006.
The usefulness of measuring disruptiveness of innovations ex post in making ex ante predictions. J. Prod. Innov. Manag. 23 (1), 12–18.
Häikiö, M., 2001. Nokia oyj:n historia: globalisaatio. Telekommunikaation maailmanvalloitus 1992–2000. Edita, Helsinki, Finland.
Hillebrand, F., 2010. Global market development (chapter 8). In: Hillebrand, F. (Ed.), Short Message Service, the Creation of Personal Text Messaging. John Wiley & Sons, Chichester, UK.
IEEE Xplore, 2017. Advanced search. http://ieeexplore.ieee.org/search/advsearch.jsp (retrieved 07-22-2017).
Isaacson, W., 2014. The Innovators: How a Group of Hackers, Geniuses, and Geeks Created the Digital Revolution. Simon and Schuster, New York, NY.
Kaplan, S.M., 1999. Discontinuous innovation and the growth paradox. Strateg. Leadersh. 27 (2), 16–21.
Kassicieh, S.K., Anderson, S.W., Romig, A., Cummings, J., McWhorter, P., Williams, D., 2000. A model for technology assessment and commercialization for innovative disruptive technologies. In: Proceedings of the Engineering Management Society, pp. 340–344.
Kassicieh, S.K., Walsh, S.T., Cummings, J.C., McWhorter, P.J., Romig, A.D., Williams, W.D., 2002. Factors differentiating the commercialization of disruptive and sustaining technologies. IEEE Trans. Eng. Manag. 49 (4), 375–387.
Kekolahti, P., Kilkki, K., Hämmäinen, H., Riikonen, A., 2016. Features as predictors of phone popularity: an analysis of trends and structural breaks. Telematics Inform. 33 (4), 973–989.
Kelly, K., 2016. The Inevitable: Understanding the 12 Technological Forces that will Shape our Future. Viking, New York, NY.
King, A.A., Baatartogtokh, B., 2015. How useful is the theory of disruptive innovation? MIT Sloan Manag. Rev. 57 (1), 77–90.
Kivi, A., Smura, T., Töyli, J., 2012. Technology product evolution and the diffusion of new product features. Technol. Forecast. Soc. Chang. 79 (1), 107–126.
Kostoff, R.N., Boylan, R., Simons, G.R., 2004. Disruptive technology roadmaps. Technol. Forecast. Soc.
Chang. 71 (1), 141–159.
Kuhn, T.S., 1962. The Structure of Scientific Revolutions. University of Chicago Press, Chicago, IL.
Laplante, P.A., Jepsen, T., Williams, J., Corno, F., 2013. Innovative and disruptive technologies [from the editors]. IT Prof. 3, 4–5.
Lepore, J., 2014. The disruption machine. The New Yorker 23, 30–36.
Lessig, L., 2008. Remix: Making Art and Commerce Thrive in the Hybrid Economy. Penguin, London, UK.
Linton, J., 2002. Forecasting the market diffusion of disruptive and discontinuous innovation. IEEE Trans. Eng. Manag. 49 (4), 365–374.
Lynn, G.S., Morone, J.G., Paulson, A.S., 1996. Marketing and discontinuous innovation: the probe and learn process. Calif. Manag. Rev. 38 (3), 8–37.
Lyytinen, K., Rose, G.M., 2003. The disruptive nature of information technology innovations: the case of internet computing in systems development organizations. MIS Q. 557–596.
Markides, C., 2006. Disruptive innovation: in need of better theory. J. Prod. Innov. Manag. 23 (1), 19–25.
Merriam-Webster, 2017. Innovation. https://www.merriam-webster.com/dictionary/innovation (retrieved 07-22-2017).
Nagy, D., Schuessler, J., Dubinsky, A., 2016. Defining and identifying disruptive innovations. Ind. Mark. Manag. 57, 119–126.
Naim, M., 2014. The End of Power: From Boardrooms to Battlefields and Churches to States, Why Being in Charge Isn't What it Used to Be. Basic Books, New York, NY.
Norman, D.A., 1998. The Invisible Computer: Why Good Products can Fail, the Personal Computer is So Complex, and Information Appliances Are the Solution. MIT Press, Cambridge, MA.
Obal, M., 2013. Why do incumbents sometimes succeed? Investigating the role of interorganizational trust on the adoption of disruptive technology. Ind. Mark. Manag. 42 (6), 900–908.
Paap, J., Katz, R., 2004. Anticipating disruptive innovation. Res. Technol. Manag. 47 (5), 13.
Pisano, G.P., 2015. You need an innovation strategy. Harv. Bus. Rev. 93 (6), 44–54.
Rainie, L., 2016.
The state of privacy in post-Snowden America. Pew Research Center FactTank. http://www.pewresearch.org/fact-tank/2016/09/21/the-state-of-privacy-in-america/ (retrieved 07-21-2017).
Riikonen, A., Smura, T., Töyli, J., 2015. Price and sales volume patterns of mobile handsets and technologies. Int. J. Bus. Data Commun. Netw. 11 (2), 22–39.
Rogers, E.M., 1962/2003. Diffusion of Innovations. Free Press, New York.
Rothaermel, F.T., 2002. Technological discontinuities and interfirm cooperation: what determines a startup's attractiveness as alliance partner? IEEE Trans. Eng. Manag. 49 (4), 388–397.
Rothman, J., 2017. What Amazon's purchase of Whole Foods really means. The New Yorker 24 (June).
Sabatier, V., Craig-Kennard, A., Mangematin, V., 2012. When technological discontinuities and disruptive business models challenge dominant industry logics: insights from the drugs industry. Technol. Forecast. Soc. Chang. 79 (5), 949–962.
Schmidt, G.M., Druehl, C.T., 2008. When is a disruptive innovation disruptive? J. Prod. Innov. Manag. 25 (4), 347–369.
Schumpeter, J.A., 1950. Capitalism, Socialism and Democracy, 3rd ed. HarperCollins, New York, NY.
Sood, A., Tellis, G.J., 2011. Demystifying disruption: a new model for understanding and predicting disruptive technologies. Mark. Sci. 30 (2), 339–354.
Sosna, M., Trevinyo-Rodríguez, R.N., Velamuri, S.R., 2010. Business model innovation through trial-and-error learning: the Naturhouse case. Long Range Plan. 43 (2), 383–407.
Tarde, G., 1903/1969. The Laws of Imitation. University of Chicago Press, Chicago.
United States Patent and Trademark Office, 2017. Advanced search. http://patft.uspto.gov/netahtml/PTO/search-adv.htm (retrieved 07-22-2017).
Urban, G.L., Weinberg, B.D., Hauser, J.R., 1996. Premarket forecasting of really-new products. J. Mark. 47–60.
Utterback, J.M., Acee, H.J., 2005. Disruptive technologies: an expanded view. Int. J. Innov. Manag. 9 (01), 1–17.
Varian, H.R., Farrell, J.V., 2004.
The Economics of Information Technology: An Introduction. Cambridge University Press, Cambridge, UK.
Veryzer, R.W., 1998. Discontinuous innovation and the new product development process. J. Prod. Innov. Manag. 15 (4), 304–321.
Volberda, H.W., Morgan, R.E., Reinmoeller, P., Hitt, M.A., Ireland, R.D., Hoskisson, R.E., 2011. Strategic Management: Competitive & Globalisation: Concepts Only. Cengage Learning Business Press.
Wadhwa, V., 2015. What the legendary Clayton Christensen gets wrong about Uber, Tesla and disruptive innovation. The Washington Post. https://www.washingtonpost.com/news/innovations/wp/2015/11/23/what-the-legendary-clayton-christensen-gets-wrong-about-uber-tesla-and-disruptive-innovation/ (retrieved 07-13-2017).
Walsh, S.T., Kirchhoff, B.A., Newbert, S., 2002. Differentiating market strategies for disruptive technologies. IEEE Trans. Eng. Manag. 49 (4), 341–351.
Wejnert, B., 2002. Integrating models of diffusion of innovations: a conceptual framework. Annu. Rev. Sociol. 28.
www.3ders.org, 2016. Wohlers Report 2016 reveals $1 billion growth in 3D printing industry. http://www.3ders.org/articles/20160405-wohlers-report-2016-reveals-1-billion-growth-in-3d-printing-industry.html (retrieved 07-13-2017).
Yahoo, 2017. Finance. https://finance.yahoo.com (retrieved 07-22-2017).
Yoo, Y., Boland Jr., R.J., Lyytinen, K., Majchrzak, A., 2012. Organizing for innovation in the digitized world. Organ. Sci. 23 (5), 1398–1408.
Yu, D., Hang, C.C., 2008. Creating candidate technologies for disruptive innovation: a case study approach. In: 4th IEEE International Conference on Management of Innovation and Technology (ICMIT), pp. 65–70.
Yu, D., Hang, C.C., 2010. A reflective review of disruptive innovation theory. Int. J. Manag. Rev. 12 (4), 435–452.

Kalevi Kilkki is a university lecturer in the Department of Communications and
Networking in Aalto University, Finland. He worked at Telecom Finland in 1990–95 and Nokia Research Center 1995–2008 as a research scientist in the area of Quality of Service in the Internet and in mobile networks.
During the last eight years, his main research topics have been Quality of Experience, customer behavior, and economic models in communications ecosystems.

Martti Mäntylä has been Professor of Information Technology at Aalto University since 1987, where his work focuses on the digitalization of industry. In 2009–2013 he was Chief Strategy Officer of EIT Digital, the Knowledge and Innovation Community (KIC) in digitalization of the European Institute of Innovation and Technology (EIT). In 1999–2008, he was Director of the Helsinki Institute for Information Technology (HIIT), a joint research centre of Helsinki University of Technology and the University of Helsinki. Mäntylä obtained his Dr.Sc. in computer science from the Helsinki University of Technology in 1983.

Kimmo Karhu is a Postdoctoral Researcher in the Department of Computer Science at Aalto University. His recent doctoral thesis analyses open platform strategizing and digital tactics in mobile ecosystems. Currently, he works as a project coordinator for the Digital Disruption of Industry project, a six-year project funded by the Strategic Research Council (SRC) at the Academy of Finland.

Heikki Hämmäinen is Professor of Network Economics at the Department of Communications and Networking, Aalto University, Finland. He holds an MSc (1984) and a PhD (1991) in Computer Science from the Helsinki University of Technology. His main research interests are in the techno-economics and regulation of mobile services and networks. Recent special topics include measurement and analysis of mobile usage, value networks of flexible Internet access, and the diffusion of Internet protocols in mobile networks. He is active in several journal and conference duties.

Heikki Ailisto, Research Professor, is responsible for Internet of Things/Industrial Internet research at VTT. His research interests include ubiquitous computing, context awareness, and the IoT. Currently, Ailisto is the leader of the Productivity with IoT research programme at VTT.
He is also a member of the steering board of a national industrial internet program. Heikki Ailisto has authored and co-authored more than 100 journal and conference papers, holds five patents, and is a member of IEEE. Ailisto holds Dr.Tech. and eMBA degrees.

work_vmrtodolhrejpki3ji46sydcbu ----

Original article

KOREAN JOURNAL OF APPLIED ENTOMOLOGY 한국응용곤충학회지 ⓒ The Korean Society of Applied Entomology
Korean J. Appl. Entomol. 53(2): 97-101 (2014) pISSN 1225-0171, eISSN 2287-545X
DOI: http://dx.doi.org/10.5656/KSAE.2014.01.1.083

Gelechiidae Collected from Is. Ulleung-do in the East Sea, Reporting a Newly Recorded Species from Korea and an Unknown Species

Kyu-Tek Park, Minyoung Kim 1 * and Bong-Kyu Byun 2
The Korean Academy of Science and Technology, Seongnam 463-808, Republic of Korea
1 Department of Agricultural Biotechnology, Seoul National University, 599 Gwanak-ro, Gwanak-gu, Seoul 151-921, Republic of Korea
2 Department of Biological Science and Biotechnology, Hannam University, 461-6, Yuseong-gu, Daejeon 305-811, Republic of Korea

울릉도에서 채집된 뿔나방과의 보고 및 2 미기록종
박규택ㆍ김민영 1 *ㆍ변봉규 2
한국과학기술한림원, 1 서울대학교, 2 한남대학교

ABSTRACT: In a faunal survey of the Gelechiidae (Lepidoptera: Gelechioidea) from Is. Ulleung-do in the East Sea of the Republic of Korea, one species of Gelechiidae, Bagdadia gnomia Ponomarenko, is reported for the first time from Korea, and an unknown species of the genus Bryotropha was discovered.
In addition, eight species of Gelechiidae, including a little-known species, Dichomeris anisacuminata Li & Zheng, were recorded for the first time from Is. Ulleung-do. Images of adults and genitalia for the newly recorded species and the little-known species are provided.

Key words: Is. Ulleung-do, New record, Gelechiidae, Lepidoptera

초 록: 울릉도의 곤충상 조사결과, 뿔나방과 (나비목: 뿔나방상과)의 10종이 채집되었다. 그 중 Bagdadia gnomia Ponomarenko과 Bryotropha sp.의 2종은 우리나라에서 처음으로 기록되었으며, 채집이 어려운 Dichomeris anisacuminata Li & Zheng도 함께 조사되었다. 이들 종 동정에 필요한 성충과 생식기 사진을 함께 기재한다.

검색어: 울릉도, 미기록종, 뿔나방과, 나비목

*Corresponding author: entommy@snu.ac.kr
Received December 29 2013; Revised January 6 2014; Accepted January 20 2014

Is. Ulleung-do is located about 270 km east of Pohang in the East Sea, more than three and a half hours by express passenger ship. The island is 72.56 km² in area, with a population of fewer than 13,000 residents. Mt. Seonginbong (ca. 984 m) is the highest peak, with a broad basin called Naribunji, 1.5-2.0 km² in area, located at 860 m above sea level in the middle of the island. There are two crater basins with two villages: Narimaeul in the North and Albongmaeul in the Southwest (Park, 1997). Only a few insect surveys have been conducted because of the long distance from the Korean Peninsula and the inconvenient approach. The first faunal survey on Lepidoptera of Is. Ulleung-do was conducted by Cho (1955), who first reported five species of moths and 26 species of butterflies, and later (1965) listed 267 species of moths and 83 species of butterflies. Recently, Byun et al. (1996) reported 137 species of 24 families with 43 additional species for the fauna of Is. Ulleung-do, but no species of the family Gelechiidae has been known from the island to date.

The family Gelechiidae is one of the largest and least known families of micromoths, comprising more than 4,700 known species belonging to about 500 genera in the world.
The family contains a total of 172 species in the Peninsula (Park and Ponomarenko, 2007; Byun et al., 2009). Of the eight known species of Gelechiidae in this study, Anarsia ulneungensis Park & Ponomarenko, which is still known as endemic to Korea, and a little-known species, Dichomeris anisacuminata Li & Zheng, are illustrated.

The Korean Society of Applied Entomology (KSAE) retains the exclusive copyright to reproduce and distribute for all KSAE publications. The journal follows an open access policy.

Fig. 1. Map indicating the location of Is. Ulleung-do.

Materials and methods

This study is based on specimens collected in Naribunji on Is. Ulleung-do during June 2006 by light traps using a 220 V/200 W mercury vapor lamp, and deposited in the Korea National Arboretum (KNA), Pocheon, Korea. The wingspan is measured from the left apex to the right apex of the forewing. For morphological studies, external structures and genital characters were examined under a stereo microscope (Olympus SZ51; Olympus, Japan). A Canon 500D camera (Canon, Japan) was used for the digital photography. The color standard for the description of adults follows Kornerup and Wanscher (1978).

Results

Bagdadia gnomia Ponomarenko (Figs. 2, 3, 3a, 3b)
Bagdadia gnomia Ponomarenko, 1995. Actias, 2: 50. TL: Primorye, Russian Far East.
Capidentalia gnomia: Ponomarenko, 1997: 49; 1999: 254.
Bagdadia gnomia: Sattler, 1999: 238; Park & Ponomarenko, 2007: 179.
Diagnosis. Male genitalia (Figs. 3, 3a-b). Bagdadia gnomia differs from B. claviformis (Park, 1983) in the male genital characters: 1) cucullus with dilated part beyond distal 3/4 length; 2) valvella stout, bifurcated apically.
Material examined. 4♂, Is. Ulleung-do, Naribunji, 19-20. vi. 2006 (MY Kim & MY Chae), gen. slide no. KNA-3161, KNA-3163.
Distribution. Korea (new record), Russian Far East (Primorye), Japan.
Remarks.
The genus Bagdadia Amsel was originally placed in the family Scythrididae by Rebel (1901), and later transferred to the family Gelechiidae by Sattler (1973). The genus mostly occurs in East Asia: two species in the Russian Far East, four species in China, and two species in Korea. The male genitalia are characterized by a crown-shaped uncus and an elongate valva.

Bryotropha sp. (Figs. 4, 5, 5a, 5b)
Diagnosis. The female genitalia (Figs. 5, 5a-b) are characterized by the large signum with a short posterior spine laterally and a long anterior process serrated along the lateral margins. Only one female was available to be examined. The species is probably undescribed and will be described when additional material is available.
Material examined. 1♀, Is. Ulleung-do, Naribunji, 19-20. vi. 2006 (MY Kim & MY Chae), gen. slide no. KNA-3167.
Distribution. Korea (new record).

Figs. 2-6. 2. Adult of Bagdadia gnomia Ponomarenko; 3. Male genitalia of Bagdadia gnomia Ponomarenko; 3a. Ditto, lateral view; 3b. Ditto, aedeagus; 4. Adult of Bryotropha sp.; 5. Female genitalia of Bryotropha sp.; 5a. Ditto, ostial part; 5b. Ditto, close-up of signum. 6. Adult of Dichomeris anisacuminata Li & Zheng; 6a. Male genitalia of Dichomeris anisacuminata Li & Zheng; 6b. Ditto, close-up of posterior part; 6c. Ditto, aedeagus.

Caryocolum pullatella (Tengström)
Caryocolum pullatella Tengström, 1848, Finl. Fjäril.: 126; Park, 1993: 18; Park, 2004: 37; Park & Ponomarenko, 2007: 79. TL: Finland.
Material examined. 1♂, Is. Ulleung-do, Naribunji, 19-20. vi. 2006 (MY Kim & MY Chae), gen. slide no. KNA-3168.
Distribution. Korea (throughout the country, including Jeju), Japan, Russia (European Part, Irkutsk region, Transbaikalia), Europe, North America.
Remarks. This species was reported for the first time from Korea by Park (1993), and it is one of the common species in Korea.

Angustialata gemmellaformis Omelko
Angustialata gemmellaformis Omelko, 1988, Ent. Obozr.,
67: 150; Lee & Park, 2000: 64; Park, 2004: 49; Park & Ponomarenko, 2007: 179. TL: Ussuri, Russia.
Material examined. 3♂, Is. Ulleung-do, Naribunji, 19-20. vi. 2006 (MY Kim & MY Chae), gen. slide no. KNA-3162, KNA-3171.
Distribution. Korea (Central), China, Russian Far East, Japan.
Remarks. The genus Angustialata Omelko is monotypic and was described from Ussuri, the Russian Far East. The genus is superficially close to the genus Stenolechia Meyrick, but it is characterized by a pair of uniquely shaped signa in the female. This species was reported for the first time from Korea by Lee & Park (2000), based on specimens collected in the central part of the Peninsula.

Agrolamprotes micella (Denis & Schiffermüller)
Agrolamprotes micella Denis & Schiffermüller, 1775, Ank. Syst. Werkes Schmett. Wien.: 140 (Tinea); Park & Ponomarenko, 2006: 277; Park & Ponomarenko, 2007: 23. TL: Europe.
Material examined. 1♂, 3♀, Is. Ulleung-do, Naribunji, 19-20. vi. 2006 (MY Kim & MY Chae), gen. slide no. KNA-3160.
Distribution. Korea (Central, Jeju), Japan, China, Russian Far East, Europe.
Remarks. This species was reported for the first time from Korea by Park & Ponomarenko (2006), and has been found in Prov. Gangwon and Is. Jeju-do.

Altenia inscriptella (Christoph)
Altenia inscriptella Christoph, 1882. Bull. Soc. Nat. Mosc., 57: 25 (Teleia); Park, 1992: 17; Park, 2004: 63; Park & Ponomarenko, 2007: 120. TL: E. Siberia.
Material examined. 4♂, Is. Ulleung-do, Naribunji, 19-20. vi. 2006 (MY Kim & MY Chae).
Distribution. Korea (throughout the country, including Jeju), Russian Far East, Japan.
Remarks. This species was reported for the first time from Korea by Park (1992).

Evippe syrictis (Meyrick)
Recurvaria syrictis Meyrick, 1936. Exot. Microlep. 5: 43. TL: Japan.
Material examined. 1♀, Is. Ulleung-do, Naribunji, 19-20. vi. 2006 (MY Kim & MY Chae), gen. slide no. KNA-3176.
Distribution.
Korea (North, Central), China, Japan, Russian Far East.
Remarks. This species was first reported from Korea by Park (2004), based on male specimens. The female was found for the first time in Korea.

Dichomeris anisacuminata Li & Zheng (Figs. 6, 6a-c)
Dichomeris anisacuminata Li & Zheng, 1996. SHILAP Revta. Lepid., 24 (95): 231; Sohn, 2007. TL: China.
Material examined. 1♂, Is. Ulleung-do, Naribunji, 19-20. vi. 2006 (MY Kim & MY Chae), gen. slide no. KNA-3169.
Distribution. Korea (Central, Is. Ulleung-do), China.
Remarks. This species was described based on two males and one female from Jiangxi in China. Sohn (2007) reported this species for the first time from Korea, based on specimens collected in the central part of Korea. Larvae of this species were reared from Quercus mongolica (Sohn, 2007). The male genitalia (Figs. 6a-c) are characterized by long, symmetric processes of the juxta and an aedeagus with a heavily sclerotized spine-like process apically and two slender processes of different lengths arising from the middle.

Dichomeris rasilella (Herrich-Schäffer)
Dichomeris rasilella Herrich-Schäffer, 1854. Syst. Bearb. Schmett. Eur., 5: 191; Park, 1983: 505; Park, 1994: 16; Park & Hodges, 1995: 52; Park, 2004: 94; Park & Ponomarenko, 2007: 153. TL: Europe.
Material examined. 1♂, Is. Ulleung-do, Naribunji, 19-20. vi. 2006 (MY Kim & MY Chae), gen. slide no. KNA-3160.
Distribution. Korea (throughout the country, including Jeju), China, Taiwan, Japan, Russia (European Part, Far East), Europe.
Remarks. This species was reported for the first time from Korea by Park (1983), and is one of the common species in Korea.

Anarsia ulneungensis Park & Ponomarenko
Anarsia ulneungensis Park & Ponomarenko, 1996. Kor. J. Ent., 26(4): 343; Park & Ponomarenko, 2007: 182. TL: Dodong, Is. Ulleung-do, Korea.
Material examined. 1♂, Is. Ulleung-do, 6. viii. 1995 (KT Park).
Distribution. Korea (endemic).
Remarks.
This species was reported for the first time from Korea by Park (1983), and has been found throughout the country, including Is. Jeju-do.

Acknowledgments

This study was financially supported by the National Institute of Biological Resources (NIBR), Ministry of Environment, Korea. We thank Dr. Houhun Li, Nankai University, Nankai, China for his comments on the identification of the species.

Literature Cited

Byun, B.K., Lee, B.Y., Kim, S.S., 1996. Lepidoptera of the Is. Ulleung, Korea. J. Lepid. Soc. Korea 9, 26-33.
Byun, B.K., Park, K.T., Bae, Y.S., Lee, B.W., 2009. A check list of the microlepidoptera in Korea. Korea National Arboretum. 413 pp.
Cho, B.S., 1955. The fauna of Dagelet Island (Ulneung-do). Bull. Sungkyunkwan Univ. 2, 179-266.
Cho, B.S., 1965. Beitrage zur Kenntnis der Insekten-fauna Insel Dagelet (Ulleung-do). Commemoration Thesis of 60th Anniversary (Natural Sciences), Korea Univ. pp. 157-205.
Park, H.D., 1997. The physical geography of Ulneung-do Island. The Geographical Journal of Korea 31, 27-40.
Park, K.T., Ponomarenko, M.G., 2007. In: Park, K.T. (eds.). Gelechiidae of the Korean Peninsula and adjacent territories (Lepidoptera). Insects of Korea series 12. 312 pp.
Sohn, J.C., 2007. Faunistic contribution to the Korean microlepidoptera and Pyralids (1). 40 species new to Korea. Tinea 20, 12-27.

work_vn6ozvdcxze4rb5c2kliploopm ----

Int. J. Environ. Res.
Public Health 2019, 16, 5139; doi:10.3390/ijerph16245139 www.mdpi.com/journal/ijerph

Article

Reliability of a Virtual Prosthodontic Project Realized through a 2D and 3D Photographic Acquisition: An Experimental Study on the Accuracy of Different Digital Systems

Luca Lavorgna 1, Gabriele Cervino 2, Luca Fiorillo 2,*, Giovanni Di Leo 1, Giuseppe Troiano 3, Marco Ortensi 3, Luigi Galantucci 4 and Marco Cicciù 2,*

1 Private practice, 82037 Telese Terme, Italy; info@odontosinergy.it (L.L.); giovannidileo@outlook.com (G.D.L.)
2 Department of Biomedical and Dental Sciences, Morphological and Functional Images, University of Messina, 98100 Messina, Italy; gcervino@unime.it
3 Department of Prosthodontics, University of Foggia, 71100 Foggia, Italy; giuseppe.troiano@unifg.it (G.T.); marco.ortensi90@gmail.com (M.O.)
4 Department of Mechanics and Mathematics Management, University of Bari, 70100 Bari, Italy; luigimaria.galantucci@poliba.it
* Correspondence: lfiorillo@unime.it (L.F.); mcicciu@unime.it or acromarco@yahoo.it (M.C.)

Received: 22 November 2019; Accepted: 13 December 2019; Published: 16 December 2019

Abstract: Aims: The study aims to assess the accuracy of digital planning in dentistry by evaluating the characteristics of different intraoral 3D scanners and comparing them with traditional 2D imaging methods. Specifically, by measuring inside computer-aided design (CAD) software, the authors want to verify the reliability of the models obtained with different techniques and machines. Methods: Twelve patients who needed aesthetic restorative treatment were enrolled in the study. For all patients, the height and width of dental elements 1.1, 1.2, and 1.3 were recorded using different technologies, comparing 2D with 3D methods.
A t-test was then applied to verify whether there was a statistically significant difference between the measurements obtained, comparing the data from the different tools (Emerald, TRIOS, photogrammetry, and DSS (Digital Smile System)) with the reference values. Results: No significant differences emerged in the measurements made with the different scanners (TRIOS 3Shape®, Planmeca Emerald®) and photogrammetry. What should be underlined regarding the 2D measurements is their speed and simplicity compared to all 3D techniques; this work can therefore help to better define the field of application and the limits connected to 2D techniques. Conclusions: The low number of patients is not sufficient to provide statistically significant results, but the future prospects of digital planning seem promising. The results of this study highlighted that a photogrammetric scanner dedicated to dental arches would have a much smaller shooting field and greater accuracy. Despite these considerations, the photogrammetric facial scanner provided excellent results for the measurement of individual teeth, showing great versatility of use.

Keywords: dentistry; digital planning; intraoral scanner; digital workflow; prosthodontic; virtual

1. Introduction

The introduction of new restorative materials in dentistry, the current knowledge on enamel-dentin adhesion methods, and the use of the computer as an aid for the aesthetic analysis of the smile are the basis of a change in daily dental practice. The new clinical approach is minimally invasive and is therefore able to replace the "real" patient with a "virtual" one. The goal is to enhance the image of the patient, maintaining health and respecting aesthetics with a balance between teeth and soft tissues [1,2].
The digital revolution opens the way to the virtual patient, representing all the patient's tissues (bone, teeth, gums, face) in a single 3D model. In this way, it is possible to perform preoperative planning and to evaluate surgical, prosthetic, and orthodontic treatments. At the same time, it is possible to physically produce the tools needed for clinical use in the various branches of dentistry [3].

The analogical workflow in aesthetic dental rehabilitation includes various phases: from the impression taken with different plastic materials to the development of the plaster model for the realization of the diagnostic wax-up and the construction of the mock-up. The patient's trial and evaluation of the mock-up is fundamental to increase the patient's understanding of the expected post-treatment result. The level of patient satisfaction is related to the consistency of the final product with the mock-up. The accuracy of the mock-up depends in turn on the accuracy of the detection of the morpho-functional characteristics of the patient's stomatognathic apparatus [2,4,5].

Many errors occur in the different phases of a traditional prosthetic workflow, since this process requires the transfer of two-dimensional and three-dimensional data between different operators. The use of pre-visualization and 3D rendering software is guiding clinicians towards a paradigm shift, respecting the standard of traditional care with the reduction of operator error as a priority [4–6].

Commonly, conventional dental impression registration is a simple procedure. However, it is not always easy for patients. Several authors have documented that the dental impression is perceived by patients as an uncomfortable phase. There is a high risk that the patient may have a poor inclination towards compliance with the dental team. The subjects involved in dental treatment therefore favored comfortable clinical procedures; other factors, such as the precision of the procedures, the effectiveness of diagnostic devices, and the clinical experience of the dental team, were all important elements for the final success [5–9].
The subjects involved in dental  treatment  therefore  favored comfortable clinical procedures, as well as other  factors such as  the  precision of the procedures, the effectiveness of diagnostic devices, and the clinical experience of the  dental team, which were all important elements for the final success [5–9].  Although intra oral scanners IOSs can be considered useful tools for capturing impressions in  partially  edentulous  patients,  the  scientific  literature  does  not  seem  to  support  their  use  in  completely edentulous patients. Hence, there is a need to replace conventional clinical procedures  requiring physical contact with the patient with others that reduce the patient’s direct involvement  and  treatment  times—without  affecting  the  precision  and  aesthetic  performance  of  the  final  prosthesis. Technological evolution  is proceeding fast, and the manufacturing companies release  new hardware and software monthly to improve the accuracy of their IOS.  Furthermore, it should be emphasized that there are statistically significant differences in the  accuracy of different IOSs, especially in the totally edentulous patients scanning [9–11]. By using  these tools, a complex of digital data can be obtained and it is possible to have a “virtual patient”.  The outcome of the procedure is to replace the real patient in a completely digital workflow aimed at  the manufacture of dental prostheses that adapt to the patient’s arches in the most appropriate way.  In order to realize this project, the dimensional discrepancies between the real patient and the 2D  and 3D virtual project should be checked. A further check regards  the comparison between  the  reliability of the 2D project and that of the 3D project.  
The present study therefore aims to verify the reliability of the virtual planning of aesthetic cases, carried out starting from photographic acquisitions of the patient's face using different technologies able to return, on the one hand, traditional two-dimensional images and, on the other, innovative three-dimensional representations.

2. Materials and Methods

2.1. Patient Selection

Twelve patients who spontaneously came to our observation requesting a restorative treatment to improve the aesthetics of the dental elements of the II sextant were involved in this study.

Int. J. Environ. Res. Public Health 2019, 16, 5139

All the patients were previously informed of their participation in this study and agreed to sign an informed consent. The restorative dental procedures were performed according to the standards set by the World Medical Association (WMA) in the Helsinki Declaration on Human Experimentation.

The sample included six women and six men between 25 and 35 years of age. The dental elements involved in the treatment had serious aesthetic defects according to the standard proportions reported in the international literature [5–10].

The following inclusion and exclusion criteria were used during recruiting.

Inclusion criteria:
- Patients requesting restorative treatments.

Exclusion criteria:
- Patients with systemic pathologies;
- Patients with oral pathologies, periodontal or articular disease.

2.2. Clinical and Laboratory (CAD) Procedures

A silicone impression of the upper dental arch was taken for each patient. The impression material used was the Express 2 PENTA (3M Espe, Pioltello MI, Italy) polyvinyl siloxane, in the double heavy- and light-body viscosity, delivered through the Pentamix 3 automatic mixer (3M Espe, Pioltello MI, Italy).
From each impression, a physical model was subsequently obtained using Fujirock EP type IV dental stone (GC Europe NV, Tokyo, Japan), mixed under vacuum with a Venturi Tornado effect mixer (Silfradent, Forlì-Cesena, Italy) and poured into the impression on a gypsum vibrator (Renfert, Hilzingen, Germany).

The stone models were scanned with a 3Shape D1000 laboratory scanner (3Shape A/S, Copenhagen, Denmark), obtaining an STL (Standard Triangulation Language) file for each of them. These values were taken as reference parameters for the subsequent measurements (sample group).

Furthermore, two optical impressions were acquired for each patient: one using the Planmeca Emerald intraoral scanner (Planmeca OY, Helsinki, Finland) and the other with the TRIOS 3 scanner (3Shape A/S, Copenhagen, Denmark). PLY and DCM files, respectively, were obtained and then converted into STL so that they could be processed and analyzed. These values were collected in group 1 (Planmeca) and group 2 (3Shape).

At the same time, the patients underwent a digital smile system (DSS) acquisition according to the official photographic protocol. The photographic exam was conducted with a Canon 5D Mark III full-frame reflex camera equipped with a Canon EF 100 mm f/2.8L Macro IS USM lens (Canon, Tokyo, Japan) and supported by a tripod (Manfrotto, Vicenza, Italy) placed at a distance of 1.50 m from the face of the patient. The photograph depicts the patient with the aid of labial retractors in order to expose the greatest number of dental elements, which is useful for a correct measurement. The produced files were recorded in JPEG format. The resulting values were collected in group 3.

The patients then underwent acquisition of the face with the photogrammetric technique. The patients wore a target placed on the chest, called a collar, and the calibration glasses of the DSS system.
The device used for the photogrammetry of the face was the FaceShape Maxi 6 (Polishape 3D, Bari, Italy), in the Maxi Line version, composed of six Canon D2000 reflex cameras equipped with Canon 50 mm f/1.8 STM lenses (Canon, Tokyo, Japan). The file resulting from the photographic processing performed with the PhotoScan Professional Edition software (Agisoft LLC, St. Petersburg, Russia) supplied with the device is in OBJ format.

2.3. Outcome

The files in the various formats were then transferred to the Exocad DentalCAD software (Matera version; Exocad GmbH, Darmstadt, Germany) and the measurements of the dental elements 1.1, 1.2, and 1.3 were performed. Specifically, two linear distances were measured for each of these dental elements:
- one in the apico-coronal sense, from the most apical point of the gingival parabola, i.e., the gingival zenith, up to the incisal edge;
- the other in the mesio-distal direction, at the level of the equator of the dental element, from the most mesial to the most distal point, i.e., at the level of the maximum mesio-distal diameter.

Precisely, the files imported into the Exocad software in order to perform these measurements for each patient were the following:
- the 3D model obtained from the laboratory scan of the stone model of the upper arch (PLASTER MODEL) (Figure A1);
- the 3D model of the upper arch created using the Planmeca Emerald intraoral scanner (EMERALD MODEL) (Figure A2);
- the 3D model of the upper arch acquired with the 3Shape TRIOS intraoral scanner (3Shape, Copenhagen, Denmark) (TRIOS MODEL) (Figure A3);
- the 3D model of the face detected with the photogrammetric technique (PHOTOGRAMMETRIC EXAMINATION) (Figure A4);
- the digital photograph of the face according to the Digital Smile System (DSS) photographic protocol (DSS EXAM) (Figure A5).

2.4.
Variables and Measurements

The values obtained from the measurements performed on the 3D virtual models created by scanning the plaster models were taken as reference values.

Starting from the performed measurements, the aims were:
- to assess the accuracy of each of the intraoral scanners used (Planmeca Emerald and 3Shape TRIOS) compared to the scan of the plaster model, taken as the reference virtual object;
- to assess which of the two intraoral scanners is more accurate, i.e., closer to the reference values;
- to verify the accuracy of the 3D model of the face, obtained with the photogrammetric technique and acquired with the cheeks retracted, compared to the scan of the plaster model;
- to verify the accuracy of the 2D face photograph, obtained according to the DSS protocol with the cheeks retracted, compared to the scan of the plaster model;
- to assess which of the photogrammetry and the DSS protocol is the more precise method, that is, closer to the reference values.

2.5. Statistical Evaluation

The statistical analysis was performed using statistical software (Prism 8.0; GraphPad Software, Inc., La Jolla, CA, USA).

The measurements made on the virtual images of the 3D plaster models were taken as the reference and comparison parameter. The mean and the standard deviation (SD) of the height and width of the analyzed dental elements were then calculated.
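These summary statistics are straightforward to reproduce; a sketch using only the Python standard library is shown below (the six height values are illustrative, in millimetres):

```python
# Mean and sample standard deviation of a set of tooth-height measurements,
# as computed for each sample group in the study. Values are illustrative.
import statistics

heights_mm = [6.98, 9.48, 8.81, 8.30, 8.61, 10.75]  # one height per patient

mean_h = statistics.mean(heights_mm)  # arithmetic mean
sd_h = statistics.stdev(heights_mm)   # sample standard deviation (n - 1)

print(f"{mean_h:.2f} ± {sd_h:.2f} mm")  # → 8.82 ± 1.25 mm
```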
A t-test was used to compare the averages in pairs (for example, comparing the reference height for element 1.1 with the height calculated on the TRIOS scans for the same element, or the reference width of 1.2 with the width measured for the same element on the photogrammetric acquisitions of the face) with a significance level of 0.05 (p < 0.05). This method makes it possible to assess whether there was a statistically significant difference between the data obtained, comparing each method (Emerald, TRIOS, photogrammetry and DSS) with the reference values.

Once the average of the measurements for the sample groups was calculated, the question was whether the difference between the group means was statistically significant, i.e., whether the observed difference was not due to chance but reflected a real difference between the group averages. For this purpose, the paired t-test is the appropriate statistical analysis (Figures 1 and 2).

The 5% significance level is frequently adopted because the 1/20 ratio (i.e., 0.05) is considered small enough to conclude that it is "unlikely" that the observed difference is due to chance alone: the difference could still be due to chance, but only about once in 20 times, an event that is therefore considered unlikely.

Therefore, if the null hypothesis is rejected at the 5% significance level, there is a 5% probability of rejecting a null hypothesis that is actually true; if the null hypothesis is rejected at the 1% level of significance, that probability is 1%.

Figure 1. A t-test was then used to compare the averages calculated in pairs.
Differences in the distribution of height values between the different acquisition methods, each of which corresponds to a distinct sample group, with respect to the reference values, which correspond to zero on the ordinate axis. Groups are on the x axis and height differences on the y axis.

Figure 2. A t-test was then used to compare the averages calculated in pairs. Differences in the distribution of mesio-distal width values between the different acquisition methods, each corresponding to a distinct sample group, with respect to the reference values, which correspond to zero on the ordinate axis. Groups are on the x axis and width differences on the y axis.

3. Results

3.1. Experimental Study Results

Tables 1–3 show the values of the measurements performed with the Exocad software for each patient. Values are expressed in millimeters. Each patient is identified by an alphanumeric code shown in the first line at the top, composed of the letters PZ (from the Italian "paziente", patient) and a progressive number from 1 to 12.

The letters H and W below the alphanumeric code of each patient denote height and width, respectively. Height is the linear distance measured in the apico-coronal direction from the most apical point of the gingival parabola, i.e., the gingival zenith, up to the incisal edge. Width is the linear distance measured in the mesio-distal direction at the level of the equator of the dental element, i.e., at the level of the maximum mesio-distal diameter.

The lines headed "1.1 scanner", "1.2 scanner" and "1.3 scanner" show the values of the measurements performed on the 3D virtual models of the plaster models. The dental numbering system is the one established by the World Health Organization (WHO), while "scanner" refers to the laboratory scanner used to create the virtual 3D models of the plaster models.
The lines labelled "Emerald" and "TRIOS" contain the measurements calculated on the 3D models of the maxillary arches acquired with the homonymous intraoral scanners. The lines labelled "Photogrammetry" and "DSS" report the measurements taken, respectively, on the 3D models of the face and on the digital photographs of the patient's face.

Due to the two-dimensional nature of digital photography, it was not possible to measure the mesio-distal distance for element 1.3, which is strongly distorted. From the comparison between the two intraoral scanners (by evaluating the data reported in Table 4), it emerges that their accuracy largely overlaps, with the Planmeca Emerald slightly closer to the reference values than the 3Shape TRIOS. Different measures were obtained between the different scanners and photogrammetry. For Patient 01 (PZ01 in Table 1), the height and width of tooth 1.1 were measured in Exocad; the same tooth showed a height of 6.95 mm on the Planmeca Emerald scan, 6.67 mm on the 3Shape TRIOS scan, and 6.47 mm on the photogrammetric acquisition, while the Digital Smile System protocol gave a height of 6.42 mm. This difference could have repercussions on a definitive rehabilitation, where the tolerance margins should be less than one millimeter. Despite this, as shown in the statistical analysis subsection, the differences are not significant.

Table 1. Exocad measurements (H = height; W = width).
Intraoral Scanners (values in mm)

Dental Size           PZ01 H  PZ01 W  PZ02 H  PZ02 W  PZ03 H  PZ03 W  PZ04 H  PZ04 W
1.1 Scanner            6.98    6.09    9.48    8.23    8.81    8.11    8.30    8.08
1.2 Scanner            5.94    3.61    7.38    6.49    7.62    6.05    6.85    6.15
1.3 Scanner            9.20    7.60    9.56    7.35    9.07    7.48    7.85    7.88
1.1 Emerald            6.95    6.02    9.43    8.21    9.01    8.15    8.25    8.10
1.2 Emerald            5.92    3.75    6.52    6.50    7.23    6.14    6.74    6.53
1.3 Emerald            9.21    7.53    9.10    7.16    8.95    7.61    7.69    7.96
1.1 Trios              6.67    5.70    9.40    8.13    8.81    8.18    8.44    8.19
1.2 Trios              5.90    3.47    7.09    6.42    7.11    6.02    6.73    6.34
1.3 Trios              9.20    7.62    9.10    7.10    8.88    7.85    7.80    7.97
1.1 Photogrammetry     6.47    5.34    9.66    8.24    8.97    8.01    8.39    8.28
1.2 Photogrammetry     6.01    3.48    7.14    6.46    7.47    6.27    6.75    6.78
1.3 Photogrammetry     9.23    7.44    9.18    7.55    9.10    7.83    7.69    7.84
1.1 DSS                6.42    5.40    9.45    8.11    8.56    7.54    8.30    8.45
1.2 DSS                5.96    3.33    7.17    5.48    6.90    5.23    6.59    5.72
1.3 DSS                8.69    —       9.32    —       8.60    —       7.39    —

Table 2. Exocad measurements (H = height; W = width).
Intraoral Scanners (values in mm)

Dental Size           PZ05 H  PZ05 W  PZ06 H  PZ06 W  PZ07 H  PZ07 W  PZ08 H  PZ08 W
1.1 Scanner            8.61    8.57   10.75   10.04   13.71    7.24   11.45    9.11
1.2 Scanner            7.53    7.04    7.47    7.03   13.34    6.89    9.82    7.18
1.3 Scanner            8.77    7.73    8.57    7.58   12.60    8.83   10.77    8.22
1.1 Emerald            8.66    8.77   10.75   10.24   13.43    9.14   11.40    9.12
1.2 Emerald            7.48    7.23    7.74    7.02   14.87    6.68    9.85    7.45
1.3 Emerald            8.92    7.78    8.56    7.60   13.02    9.12   10.74    8.23
1.1 Trios              8.54    8.88   10.71   10.06   13.53    9.11   11.36    9.12
1.2 Trios              7.32    7.24    7.57    7.02   14.96    6.59    9.69    7.41
1.3 Trios              8.88    7.85    8.53    7.59   13.02    8.99   10.93    8.13
1.1 Photogrammetry     8.65    8.75   10.51    8.87   13.55    9.29   11.33    9.12
1.2 Photogrammetry     7.17    7.17    7.78    7.14   13.93    6.75    9.25    7.65
1.3 Photogrammetry     8.85    7.90    8.94    7.41   12.22    8.71   10.65    8.06
1.1 DSS                8.77    8.61   10.77    9.83   13.52    9.07   11.08    9.22
1.2 DSS                7.49    5.68    7.69    5.95   14.19    5.87    9.48    6.37
1.3 DSS                8.81    —       8.72    —      12.81    —      10.80    —

Table 3. Exocad measurements (H = height; W = width).
Intraoral Scanners (values in mm)

Dental Size           PZ09 H  PZ09 W  PZ10 H  PZ10 W  PZ11 H  PZ11 W  PZ12 H  PZ12 W
1.1 Scanner           10.75    8.87    9.15    8.72   10.61    8.15    9.02    8.05
1.2 Scanner            9.35    7.08    6.69    6.45    9.10    7.18    8.15    6.27
1.3 Scanner           10.34    8.70   10.35    7.69   10.02    7.87    9.18    8.09
1.1 Emerald           10.93    9.22    9.09    8.92   10.57    8.13    9.14    8.09
1.2 Emerald            9.20    7.05    6.72    6.70    9.09    7.52    8.12    6.40
1.3 Emerald           10.17    8.64   10.43    7.54    9.92    7.56    8.97    7.97
1.1 Trios             10.66    9.20    9.06    8.85   10.65    8.21    9.05    7.95
1.2 Trios              9.20    7.14    6.57    6.68    9.15    9.50    8.22    6.35
1.3 Trios             10.10    8.78   10.41    7.67    9.95    7.64    9.20    8.03
1.1 Photogrammetry    10.37    9.24    9.19    8.93   10.02    7.93    9.25    8.23
1.2 Photogrammetry     9.25    7.21    6.49    6.83    8.79    7.04    8.11    6.73
1.3 Photogrammetry    10.04    8.87   10.58    8.22    9.91    7.44    9.43    7.86
1.1 DSS               10.74    8.89    8.61    9.15   10.26    8.23    8.33    7.98
1.2 DSS                8.72    6.04    6.17    6.03    8.68    5.75    7.64    5.58
1.3 DSS                9.74    —      10.20    —       9.69    —       8.82    —

Table 4. Statistical analysis results. Results of the paired t-test comparing the values in pairs for each dental element (for example, between the dental heights of the scanned plaster models and the dental heights of the TRIOS intraoral scans for element 1.1). For each comparison between the reference values and the other values, the p value is shown (H = height; W = width).

Method            1.1 H        1.1 W        1.2 H        1.2 W        1.3 H        1.3 W
Scanner (ref.)    9.80 ± 1.76  8.28 ± 0.98  8.27 ± 1.96  6.45 ± 0.99  9.69 ± 1.24  7.92 ± 0.47
Emerald           9.80 ± 1.71  8.51 ± 1.02  8.29 ± 2.39  6.58 ± 0.99  9.64 ± 1.37  7.89 ± 0.55
                  p = 0.9991   p = 0.5800   p = 0.9823   p = 0.7501   p = 0.9261   p = 0.8987
Trios             9.74 ± 1.76  8.47 ± 1.07  8.29 ± 2.40  6.68 ± 1.35  9.67 ± 1.36  7.94 ± 0.52
                  p = 0.9324   p = 0.6623   p = 0.9802   p = 0.6390   p = 0.9654   p = 0.9348
Photogrammetry    9.70 ± 1.73  8.35 ± 1.06  8.18 ± 2.09  6.62 ± 1.05  9.65 ± 1.15  7.93 ± 0.48
                  p = 0.8852   p = 0.8636   p = 0.9128   p = 0.6844   p = 0.9341   p = 0.9625
DSS               9.58 ± 1.83  8.37 ± 1.13  8.06 ± 2.20  5.59 ± 0.77  9.47 ± 1.37  —
                  p = 0.7526   p = 0.8308   p = 0.8044   p = 0.0254   p = 0.6790

3.2. Statistical Evaluation

Regarding the height, the difference between the groups is not statistically significant (p > 0.05); in particular, the TRIOS, the Planmeca and the photogrammetry are all more precise than the DSS. Only in one case was the p value < 0.05, namely in the comparison between the mean of the reference widths for element 1.2 and the mean of the widths measured for the same element on the digital photographs taken with the DSS protocol.

It is clear that this difference between the reference values and those obtained from a 2D photograph of the face is not due to chance (there is only a 2.5% probability that it is) but is instead due to the distortion of the mesio-distal dimensions of the dental elements caused by the two-dimensional nature of a digital photograph. This mesio-distal distortion increases progressively moving from the central incisors to the posterior sectors. Table 4 shows that the difference between the groups (corresponding to the different methods of data acquisition) relative to H is not statistically significant (p > 0.05); in particular, the TRIOS and the Planmeca are more precise than both photogrammetry and DSS.

No substantial differences emerged when comparing the reference values with those obtained from the measurements conducted on the intraoral scans, demonstrating how high the accuracy achieved by today's intraoral scanners is.
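The paired comparison reported in Table 4 can be reproduced in miniature as follows; this is a from-first-principles sketch (no statistics package assumed), and the six reference/method height pairs below are illustrative:

```python
# Paired t-test between reference tooth heights (plaster-model scans) and
# heights measured with one acquisition method, alpha = 0.05.
# Values are illustrative, in mm; the statistic is computed by hand.
import math
import statistics

reference = [6.98, 9.48, 8.81, 8.30, 8.61, 10.75]  # plaster-model scan
method    = [6.95, 9.43, 9.01, 8.25, 8.66, 10.75]  # one acquisition method

diffs = [m - r for m, r in zip(method, reference)]
n = len(diffs)
mean_d = statistics.mean(diffs)
sd_d = statistics.stdev(diffs)
t_stat = mean_d / (sd_d / math.sqrt(n))  # paired t statistic, df = n - 1

# Two-sided 5% critical value for df = 5, taken from standard t tables.
T_CRIT = 2.571
significant = abs(t_stat) > T_CRIT  # False here: difference within chance
```

With |t| well below the critical value, the null hypothesis of no difference between the method and the reference is not rejected, which mirrors the non-significant p values obtained for almost all comparisons in Table 4.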
The comparison between the photogrammetric technique and the DSS system shows the superior precision of the former, although the difference is not statistically significant.

4. Discussion

Anamnesis and physical examination conventionally represent the preliminary phases of a dental treatment. Those steps are supported by physical impressions for the registration of the dental arches. The evaluation of the plaster models obtained from the impressions and the analysis of two-dimensional X-ray images provided the first complete information on the patient's status.

Currently, the awareness and the aesthetic expectations of patients are increasing. For this reason, the digital aesthetic previsualization becomes a tangible, although virtual, expression of everything that the clinician could achieve, thereby legitimizing the patient's requests and expectations [11–18].

However, it should be ensured that the pre-visualization of the treatment through the use of the virtual patient is reliable, i.e., that it allows the design of restorations dimensionally appropriate to the anatomy of the real patient. In order to verify this concept, it is necessary to evaluate the reliability of the virtual patient, defined as the set of superimposable digital data on the basis of which digital treatment planning is carried out by developing the so-called virtual project.

The effectiveness of the virtual rehabilitation project (prosthetic, orthodontic, surgical) depends on the accuracy of the virtual patient's data. Ultimately, assessing the reliability of the virtual patient means testing the accuracy of the technological devices that allow it to be created and then represented on the computer screen.
Therefore, this study evaluated the accuracy of the systems currently available for processing the virtual patient in the dental field (excluding 3D radiological techniques): intraoral scanners; digital photographs acquired according to a predictable and repeatable protocol such as the DSS system; and the most innovative tool proposed and studied here, a photogrammetric detector of the patient's face in 3D, sometimes improperly referred to as a scanner, given that it does not emit any type of laser or structured light [18–24].

The benchmark used to compare the acquisitions obtained with these instruments was the set of plaster models of the maxillary dental arches of the recruited patients. These were produced from physical silicone impressions, which are still considered today the gold standard for an accurate recording of the dental arch morphology and of the shape and size of each dental element.

The study showed that the three-dimensional techniques for detecting the patient's dental and facial features are accurate, as is the 2D digital photography of the DSS protocol, since the statistical analysis did not report statistically significant differences.

The obviousness of this result could be disputed, especially with reference to the superiority of the photogrammetric technique over the 2D photography of the DSS for the analysis not only of the dental dimensions but also of the patient's facial features.

The DSS photographic protocol requires less sophisticated equipment, is economically inexpensive and is normally present in common dental practices. Quite different, on the other hand, are the characteristics of the photogrammetric device, the FaceShape Maxi 6 in the Maxi Line version [24–28].
In the hope of providing clinicians with an innovative approach to the diagnostic phase based on everyday tools, such as a reflex camera and a tripod, it seemed useful to compare the accuracy of the two protocols, also by virtue of the repeatability and predictability of the DSS protocol already reported in the recent literature [26–28].

Mangano et al. [33] demonstrated in their studies how different scanners show significant differences in trueness and precision. In another study, the combination of intraoral and face scans made it possible to successfully restore fully edentulous patients with maxillary implant-supported overdentures. Furthermore, their study group showed how, thanks to its excellent optical properties, high mechanical resistance, restorative versatility and different manufacturing techniques, lithium disilicate can be considered to date one of the most promising dental materials in digital dentistry. The current scanners are sufficiently accurate for capturing impressions for fabricating a whole series of prosthetic restorations (inlays/onlays, copings and frameworks, single crowns, and fixed partial dentures) on both natural teeth and implants; in addition, they can be used for smile design and to fabricate posts and cores, removable partial prostheses and obturators [29–36].

As for the observations related to the 2D/3D comparison, it should be emphasized that 2D and 3D measurements can coincide only if the 2D photos are taken with the plane of the photographic sensor perfectly perpendicular to the observed subject and if the surface of the observed subject is perfectly flat. The differences increase the more the observed surface is inclined away from 90° and the more its shape departs from a plane towards a cylindrical, conical or freeform geometry.
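The geometric reason for this 2D distortion can be sketched numerically: a segment of true length L lying on a surface tilted by an angle theta relative to the image plane projects to approximately L * cos(theta) in the photograph, whereas a 3D acquisition preserves L. The tilt angle below is hypothetical:

```python
# Foreshortening of a linear distance in a 2D photograph when the measured
# surface is tilted away from the sensor plane (pinhole approximation).
import math

def projected_length_mm(true_length_mm: float, tilt_deg: float) -> float:
    """Apparent 2D length of a segment tilted away from the image plane."""
    return true_length_mm * math.cos(math.radians(tilt_deg))

# An 8 mm mesio-distal width photographed head-on vs. on a curved arch
# segment tilted 30 degrees (hypothetical angle):
head_on = projected_length_mm(8.0, 0.0)   # 8.0 mm, no distortion
tilted  = projected_length_mm(8.0, 30.0)  # about 6.93 mm, visibly shortened
```

This is consistent with the observation that the mesio-distal distortion grows towards the posterior sectors, where the arch curvature tilts the tooth surfaces further away from the sensor plane.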
For this reason, if angular measurements were also examined, the differences could be even higher.

What should instead be emphasized in favor of 2D analyses is their speed and simplicity compared to all the 3D procedures. These results help to define, within the limitations related to the large differences between the 2D and 3D techniques, the field of application and the limits of 2D techniques, offering guidance for their use in dentistry.

A final observation on the application of the photogrammetric scanner FaceShape concerns the possibility of obtaining the 3D scan of the entire face in 1/100th of a second, although it is not designed to scan teeth. A photogrammetric scanner dedicated to the dental arches alone would have a much smaller shooting field and greater accuracy. Despite these considerations, the photogrammetric facial scanner nevertheless provided excellent results for the measurement of individual teeth, showing great versatility of use.

Certainly, photogrammetry is superior to digital photography; however, the comparison between the reference values and those calculated on the 2D photographs shows a statistically significant coherence for all the linear distances evaluated, except for the mesio-distal width of the upper lateral incisor, for the reasons documented in the Results section. This reduced ability of the two-dimensional DSS system to faithfully reproduce the mesio-distal dimensions of the dental elements of the latero-posterior sectors, which could however be overcome through a matching between digital photographs and intraoral scans, an operation supported by the DSS software, is compensated by its wide practical applicability compared to the photogrammetric technique [37–43].

5.
Conclusions

As already seen in the previous paragraphs, the results do not show significant differences; however, the reduced number of patients could influence these data. Within the limitations of the present study, related to the small number of patients involved and mainly to the difficulty of comparing 2D with 3D investigations, this study could be considered a starting point for further research, in order to definitively evaluate whether there are differences between scanners and which are better, depending on the therapeutic planning.

Author Contributions: conceptualization, L.L.; methodology, software, validation, formal analysis, investigation, resources, data curation, writing—original draft preparation, G.T.; writing—review and editing, G.D.L. and M.O.; visualization, L.G.; supervision, L.L. and G.C.; project administration, L.F. and M.C.

Funding: This research received no external funding.

Acknowledgements: The authors want to thank Alan Herford at Loma Linda University for his kind revision of the whole paper.

Conflicts of Interest: The authors declare no conflict of interest.

Appendix A

Figure A1. Stone model scan with the evaluated distance. The sample was chosen because of its reproducibility and low humidity, and the scan was used as the reference parameter.

Figure A2. Planmeca Emerald scan; distance measurements were obtained and compared to the stone model.

Figure A3. TRIOS 3Shape scan; the measurements on the .stl file obtained with this scanner were compared to the stone model.

Figure A4. Photogrammetric exam. The device used for the photogrammetry of the face is the FaceShape Maxi 6 (Polishape 3D, Bari, Italy), in the Maxi Line version, composed of six Canon D2000 reflex cameras equipped with Canon 50 mm f/1.8 STM lenses.
Figure A5. Digital Smile System exam, conducted with a Canon 5D Mark III full-frame reflex camera equipped with a Canon EF 100 mm f/2.8L Macro IS USM lens and supported by a tripod (Manfrotto, Vicenza, Italy) placed at a distance of 1.50 m from the face of the patient.

References

1. Yilmaz, B.; Abou-Ayash, S. A digital intraoral implant scan technique using a combined healing abutment and scan body system. J. Prosthet. Dent. 2019, doi:10.1016/j.prosdent.2019.01.016.
2. Sailer, I.; Muhlemann, S.; Fehmer, V.; Hammerle, C.H.F.; Benic, G.I. Randomized controlled clinical trial of digital and conventional workflows for the fabrication of zirconia-ceramic fixed partial dentures. Part I: Time efficiency of complete-arch digital scans versus conventional impressions. J. Prosthet. Dent. 2019, 121, 69–75, doi:10.1016/j.prosdent.2018.04.021.
3. Runkel, C.; Guth, J.F.; Erdelt, K.; Keul, C. Digital impressions in dentistry—accuracy of impression digitalisation by desktop scanners. Clin. Oral Investig. 2019, doi:10.1007/s00784-019-02995-w.
4. De Stefano, R.; Bruno, A.; Muscatello, M.; Cedro, C.; Cervino, G.; Fiorillo, L. Fear and anxiety managing methods during dental treatments: Systematic review of recent data. Minerva Stomatol. 2019, 68.
5. De Stefano, R. Psychological factors in dental patient care: Odontophobia. Medicina 2019, 55, 678.
6. Patel, J.; Winters, J.; Walters, M. Intraoral digital impression technique for a neonate with bilateral cleft lip and palate. Cleft Palate-Craniofacial J. 2019, 56, 1120–1123, doi:10.1177/1055665619835082.
7. Pagano, S.; Moretti, M.; Marsili, R.; Ricci, A.; Barraco, G.; Cianetti, S. Evaluation of the accuracy of four digital methods by linear and volumetric analysis of dental impressions. Materials 2019, 12, 1958, doi:10.3390/ma12121958.
8. Molinero-Mourelle, P.; Lam, W.; Cascos-Sanchez, R.; Azevedo, L.; Gomez-Polo, M.
Photogrammetric and intraoral digital impression technique for the rehabilitation of multiple unfavorably positioned dental implants—A clinical report. J. Oral Implantol. 2019, doi:10.1563/aaid-joi-D-19-00140.
9. Mangano, F.; Mangano, C.; Margiani, B.; Admakin, O. Combining intraoral and face scans for the design and fabrication of computer-assisted design/computer-assisted manufacturing (CAD/CAM) polyether-ether-ketone (PEEK) implant-supported bars for maxillary overdentures. Scanning 2019, doi:10.1155/2019/4274715.
10. Kihara, H.; Hatakeyama, W.; Komine, F.; Takafuji, K.; Takahashi, T.; Yokota, J.; Oriso, K.; Kondo, H. Accuracy and practicality of intraoral scanner in dentistry: A literature review. J. Prosthodont. Res. 2019, doi:10.1016/j.jpor.2019.07.010.
11. Cicciù, M.; Cervino, G.; Milone, D.; Risitano, G. FEM analysis of dental implant-abutment interface overdenture components and parametric evaluation of Equator® and Locator® prosthodontics attachments. Materials 2019, 12, 592, doi:10.3390/ma12040592.
12. Cervino, G.; Fiorillo, L.; Arzukanyan, A.V.; Spagnuolo, G.; Cicciù, M. Dental restorative digital workflow: Digital smile design from aesthetic to function. Dent. J. 2019, 7, 30, doi:10.3390/dj7020030.
13. Cappare, P.; Sannino, G.; Minoli, M.; Montemezzi, P.; Ferrini, F. Conventional versus digital impressions for full arch screw-retained maxillary rehabilitations: A randomized clinical trial. Int. J. Environ. Res. Public Health 2019, 16, doi:10.3390/ijerph16050829.
14. Cervino, G.; Fiorillo, L.; Herford, A.S.; Laino, L.; Troiano, G.; Amoroso, G.; Crimi, S.; Matarese, M.; D'Amico, C.; Nastro Siniscalchi, E.; et al. Alginate materials and dental impression technique: A current state of the art and application to dental practice. Mar. Drugs 2018, 17, doi:10.3390/md17010018.
15. Zitzmann, N.U.; Kovaltschuk, I.; Lenherr, P.; Dedem, P.; Joda, T.
Dental students’ perceptions of digital  and  conventional  impression  techniques:  A  randomized  controlled  Trial.  J.  Dent.  Educ.  2017,  81,  1227–1232, doi:10.21815/jde.017.081.  16. Cicciù, M.; Herford, A.S.; Cervino, G.; Troiano, G.; Lauritano, F.; Laino, L. Tissue fluorescence imaging  (VELscope) for quick non‐invasive diagnosis in oral pathology. J. Craniofacial Surgery 2017, 28, e112–e115,  doi:10.1097/SCS.0000000000003210.  17. Sakornwimon,  N.;  Leevailoj,  C.  Clinical  marginal  fit  of  zirconia  crowns  and  patients’  preferences  for  impression techniques using intraoral digital scanner versus polyvinyl siloxane material. J. Prosthet. Dent.  2017, 118, 386–391, doi:10.1016/j.prosdent.2016.10.019.  18. Rancitelli, D.; Cicciù, M.; Lini, F.; Fumagalli, D.; Frigo, A.C.; Maiorana, C. Reproducibility of a digital  method  to  evaluate  soft  tissue  modifications:  A  study  of  inter  and  intra‐operative  measurement  concordance. Open Dent. J. 2017, 11, 171–180, doi:10.2174/1874210601711010171.  19. Joda, T.; Lenherr, P.; Dedem, P.; Kovaltschuk, I.; Bragger, U.; Zitzmann, N.U. Time efficiency, difficulty,  and  operator’s  preference  comparing  digital  and  conventional  implant  impressions:  A  randomized  controlled trial. Clin. Oral Implant. Res. 2017, 28, 1318–1323, doi:10.1111/clr.12982.  20. Joda, T.; Bragger, U. Patient‐centered outcomes comparing digital and conventional implant impression  procedures:  A  randomized  crossover  trial.  Clin.  Oral  Implant.  Res.  2016,  27,  e185–e189,  doi:10.1111/clr.12600.  21. Gjelvold, B.; Chrcanovic, B.R.; Korduner, E.K.; Collin‐Bagewitz, I.; Kisch, J. Intraoral digital impression  technique compared  to conventional  impression  technique. A randomized clinical  trial.  J. Prosthodont.  2016, 25, 282–287, doi:10.1111/jopr.12410.  Int. J. Environ. Res. Public Health 2019, 16, 5139  14  of 15  22. Gherlone,  E.;  Cappare,  P.;  Vinci,  R.;  Ferrini,  F.;  Gastaldi,  G.;  Crespi,  R.  
Conventional  versus  digital  impressions  for  “all‐on‐four”  restorations.  Int.  J.  Oral  Maxillofac.  Implant.  2016,  31,  324–330,  doi:10.11607/jomi.3900.  23. Yuzbasioglu,  E.;  Kurt,  H.;  Turunc,  R.;  Bilir,  H.  Comparison  of  digital  and  conventional  impression  techniques: Evaluation of patients’ perception,  treatment comfort, effectiveness and clinical outcomes.  BMC Oral Health 2014, 14, 10, doi:10.1186/1472‐6831‐14‐10.  24. Newby, E.E.; Bordas, A.; Kleber, C.; Milleman, J.; Milleman, K.; Keogh, R.; Murphy, S.; Butler, A.; Bosma,  M.L.  Quantification  of  gingival  contour  and  volume  from  digital  impressions  as  a  novel  method  for  assessing gingival health. Int. Dent. J. 2011, 61, 4–12, doi:10.1111/j.1875‐595X.2011.00043.x.  25. Lo Giudice, G.; Cutroneo, G.; Centofanti, A.; Artemisia, A.; Bramanti, E.; Militi, A.; Rizzo, G.; Favaloro,  A.; Irrera, A.; Lo Giudice, R.; et al. Dentin morphology of root canal surface: A quantitative evaluation  based on a scanning electronic microscopy study. BioMed Res. Int. 2015, 2015, doi:10.1155/2015/164065.  26. Cervino,  G.;  Romeo,  U.;  Lauritano,  F.;  Bramanti,  E.;  Fiorillo,  L.;  D’Amico,  C.;  Milone,  D.;  Laino,  L.;  Campolongo, F.; Rapisarda, S.; et al. Fem and von mises analysis of OSSTEM ® dental implant structural  components:  evaluation  of  different  direction  dynamic  loads.  Open  Dent.  J.  2018,  12,  219–229,  doi:10.2174/1874210601812010219.  27. Bramanti,  E.;  Matacena,  G.;  Cecchetti,  F.;  Arcuri,  C.;  Cicciù,  M.  Oral  health‐related  quality  of  life  in  partially  edentulous  patients  before  and  after  implant  therapy:  A  2‐year  longitudinal  study. ORAL  Implantol. 2013, 6, 37–42.  28. Fiorillo, L.; Cervino, G.; Herford, A.S.; Lauritano, F.; D’Amico, C.; Lo Giudice, R.; Laino, L.; Troiano, G.;  Crimi, S.; Cicciù, M. Interferon Crevicular Fluid Profile and Correlation with Periodontal Disease and  Wound Healing: A Systemic Review of Recent Data. Int. J. 
Mol. Sci. 2018, 19, 1908.  29. Cattoni,  F.;  Teté,  G.;  Calloni,  AM.;  Manazza,  F.;  Gastaldi,  G.;  Capparè,  P.  Milled  versus  moulded  mock‐ups based on the superimposition of 3D meshes from digital oral impressions: A comparative in  vitro study in the aesthetic area. BMC Oral Health 2019, 19, 230, doi:10.1186/s12903‐019‐0922‐2.  30. Mendes, T.A.; Marques, D.; Lopes, L.P.; Carames, J. Total digital workflow in the fabrication of a partial  removable  dental  prostheses:  A  case  report.  SAGE  Open  Med.  Case  Rep.  2019,  7,  2050313x19871131,  doi:10.1177/2050313x19871131.  31. Spielau,  T.;  Hauschild,  U.;  Katsoulis,  J.  Computer‐assisted,  template‐guided  immediate  implant  placement  and  loading  in  the  mandible:  A  case  report.  BMC  Oral  Health  2019,  19,  55,  doi:10.1186/s12903‐019‐0746‐0.  32. Mangano,  F.G.;  Hauschild,  U.;  Veronesi,  G.;  Imburgia,  M.;  Mangano,  C.;  Admakin,  O.  Trueness  and  precision of 5 intraoral scanners in the impressions of single and multiple implants: A comparative in  vitro study. BMC Oral Health 2019, 19, 101, doi:10.1186/s12903‐019‐0792‐7.  33. Mangano, C.; Perrotti, V.; Shibli, J.A.; Mangano, F.; Ricci, L.; Piattelli, A.; Iezzi, G. Maxillary sinus grafting  with biphasic calcium phosphate ceramics: Clinical and histologic evaluation in man. Int. J. Oral Maxillofac.  Implant. 2013, 28, 51–56.  34. Mangano, C.; Mangano, F.; Shibli, J.A.; Luongo, G.; De Franco; M.; Briguglio, F.; Figliuzzi, M.; Eccellente,  T.; Rapani, C.; Piombino, M.; MacChi, A. Prospective clinical evaluation of 201 direct laser metal forming  implants: Results from a 1‐year multicenter study. Lasers Med. Sci. 2012, 27, 181–189.  35. Zarone, F.; Ferrari, M.; Mangano, F.G.; Leone, R.; Sorrentino, R. “Digitally Oriented Materials”: Focus on  Lithium Disilicate Ceramics. Int. J. Dent. 2016, 2016, 10. http://dx.doi.org/10.1155/2016/9840594.  36. 
Giuliani,  A.;  Manescu,  A.;  Larsson,  E.;  Tromba,  G.;  Luongo,  G.;  Piattelli,  A.;  Mangano,  F.;  Iezzi,  G.;  Mangano,  C.  In  vivo  regenerative  properties  of  coralline‐derived  (biocoral)  scaffold  grafts  in  human  maxillary defects: Demonstrative and comparative study with beta‐tricalcium phosphate and biphasic  calcium phosphate by synchrotron radiation X‐Ray microtomography. Clin. Implant Dent. Relat. Res. 2014,  16, 736–750.  37. Cervino, G.; Fiorillo, L.; Iannello, G.; Santonocito, D.; Risitano, G.; Cicciù, M. Sandblasted and acid etched  titanium dental implant surfaces systematic review and confocal microscopy evaluation. Materials 2019,  12, 1763, doi:10.3390/ma12111763.  38. Cervino, G.; Fiorillo, L.; Monte, I.P.; De Stefano, R.; Laino, L.; Crimi, S.; Bianchi, A.; Herford, A.S.; Biondi,  A.; Cicciù, M. Advances in antiplatelet therapy for dentofacial surgery patients: focus on past and present  strategies. Materials 2019, 12, 1524, doi:10.3390/ma12091524.  Int. J. Environ. Res. Public Health 2019, 16, 5139  15  of 15  39. Cervino,  G.;  Fiorillo,  L.;  Arzukanyan,  A.;  Spagnuolo,  G.;  Campagna,  P.;  Cicciù,  M.  Application  of  bioengineering devices for the stress evaluation in dentistry: the last 10 years fem parametric analysis of  outcomes and current trends. Minerva Stomatol. 2019, 29, 565–574.  40. Germano, F.; Bramanti, E.; Arcuri, C.; Cecchetti, F.; Cicciù, M. Atomic force microscopy of bacteria from  periodontal  subgingival  biofilm:  Preliminary  study  results.  Eur.  J.  Dent.  2013,  7,  152–158,  doi:10.4103/1305‐7456.110155.  41. Maiorana,  C.;  Beretta,  M.;  Grossi,  G.B.;  Santoro,  F.;  Herford,  A.S.;  Nagursky,  H.;  Cicciù,  M.  Histomorphometric  evaluation  of  anorganic  bovine  bone  coverage  to  reduce  autogenous  grafts  resorption: Preliminary results. Open Dent. J. 2011, 5, 71–78, doi:10.2174/1874210601105010071.  42. 
Cicciù, M.; Cervino, G.; Terranova, A.; Risitano, G.; Raffaele, M.; Cucinotta, F.; Santonocito, D.; Fiorillo, L.  Prosthetic and mechanical parameters of the facial bone under the load of different dental implant shapes:  A parametric study. Prostheses 2020, 1, 41–53. 43. Cicciù, M. Prosthesis: new technological opportunities and innovative biomedical devices. Prostheses 2020,  1, 1–2.     © 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access  article distributed under the terms and conditions of the Creative Commons Attribution  (CC BY) license (http://creativecommons.org/licenses/by/4.0/).    work_vnkqsmonlzff7omqwhcrxwq2iu ---- Disruptive technology: How Kodak missed the digital photography revolution Disruptive technology: How Kodak missed the digital photography revolution Henry C. Lucas Jr. *, Jie Mein Goh Decisions, Operations and Information Technologies, Robert H. Smith School of Business, University of Maryland, College Park, MD 20740, United States a r t i c l e i n f o Article history: Available online 25 February 2009 Keywords: Innovation Information and communications technologies Disruptive technology Core rigidities Case study Qualitative research a b s t r a c t The purpose of this paper is to analyze how a firm responds to a challenge from a transfor- mational technology that poses a threat to its historical business model. We extend Christensen’s theory of disruptive technologies to undertake this analysis. The paper makes two contributions: the first is to extend theory and the second is to learn from the example of Kodak’s response to digital photography. Our extensions to existing theory include con- siderations of organizational change, and the culture of the organization. Information tech- nology has the potential to transform industries through the creation of new digital products and services. 
Kodak's middle managers, culture and rigid, bureaucratic structure hindered a fast response to new technology which dramatically changed the process of capturing and sharing images. Film is a physical, chemical product, and despite a succession of new CEOs, Kodak's middle managers were unable to make the transition to thinking digitally. Kodak has experienced a nearly 80% decline in its workforce, loss of market share, a tumbling stock price, and significant internal turmoil as a result of its failure to take advantage of this new technology. © 2009 Elsevier B.V. All rights reserved.

1. Introduction

The purpose of this paper is to explore how firms respond to challenges from rare transformational technology that threatens a traditional, successful business model. We propose an extension of Christensen's theory of disruptive technologies and illustrate the extensions with a longitudinal case study of Kodak. Kodak is unique in that it developed and patented many of the components of digital photography, yet this new form of photography has had a serious, negative impact on the firm. The two main contributions of the paper are the extension to Christensen's theory and the lessons from Kodak's unsuccessful response to a major technological discontinuity. The digital camera combined with information and communications technologies (ICT), specifically the capabilities of the computer to store and display photographs, and the Internet to transmit them, transformed the major customer processes associated with photography. The consumer could take many photos at virtually no cost, and delete unwanted ones by pushing a button. Rather than waiting to develop a photo and then sending it by mail to another person, the customer uploads the picture to a PC and sends it as an email attachment to multiple recipients.
If the customer wants a hard copy, she can print a picture locally on an inexpensive color printer on a PC, send it to an Internet photo service, or go to a store that has a developing kiosk.

Journal of Strategic Information Systems 18 (2009) 46–55. doi:10.1016/j.jsis.2009.01.002.
* Corresponding author. Tel.: +1 301 314 1968. E-mail addresses: hlucas@rhsmith.umd.edu (H.C. Lucas Jr.), jgoh@rhsmith.umd.edu (J.M. Goh).

1.1. Past research: Christensen's theory of disruptive technologies

Christensen's theory of disruptive technologies is one of the most popular for explaining the plight of the incumbent firm facing a significant new technology. He proposes a theory of response to disruptive technologies in two books about innovation (Christensen, 1997; Christensen and Raynor, 2003). He argues that investing in disruptive technologies is not a rational financial decision for senior managers to make because, for the most part, disruptive technologies are initially of interest to the least profitable customers in a market (Christensen, 1997). The highest-performing companies have systems for eliminating ideas that customers do not ask for, making it difficult for them to invest resources in disruptive technologies. By the time lead customers request innovative products, it is too late to compete in the new market. The root cause of the failure to adapt to disruptive technologies is that the company practiced good management: the decision-making and resource-allocation processes that make established companies successful cause them to reject disruptive technologies.
Christensen and Overdorf (2000) present a framework for dealing with disruptive change that focuses on resources, processes and values. Resources include people, equipment, technologies, cash, product designs and relationships. Processes are the procedures and operational patterns of the firm, and values are the standards employees use to set priorities for making decisions. Managers design processes so that employees perform tasks in a consistent way every time; they are not meant to change. The most important processes when coping with a disruptive technology are those in the background, such as how the company does market research and translates it into financial projections, and how the company negotiates plans and budgets. Employees exhibit their values every day as they decide which orders are more important, which customers have priority and whether an idea for a new product is attractive. The exercise of these values constitutes the culture of the organization. Culture defines what the organization does, but it also defines what it cannot do, and in this respect can be a disability when confronting a new innovation.

1.2. Extending Christensen's theory

When a firm is confronted with a discontinuous, highly disruptive technology, senior management has to bring about significant changes in the organization at all levels. Our first extension to Christensen is to emphasize the change process required to adopt a disruptive technology. Senior management has to convince others of the need to move in a new direction. Specifically, we are interested in how middle managers change themselves and also bring about change in the organization (see Rouleau, 2005; Balogun, 2006). Christensen argues that the firm is not ready to adopt a disruptive technology because it does not see a demand from its customers for the new innovation. He maintains that high-performing companies have systems in place that tend to kill ideas that customers are not asking for.
We propose to extend this part of his theory to encompass the culture of the organization, by which we mean the beliefs of employees, the way the firm organizes itself and the nature of the interactions among employees (Schein, 1983).

1.3. A first extension: the struggle for change

In confronting a technological disruption, a firm faces a struggle between employees who seek to use dynamic capabilities to bring about change, and employees for whom core capabilities have become core rigidities. Management propensities for change drive the process (see Fig. 1). We describe this ongoing struggle using concepts from dynamic capabilities, core rigidities and management propensities.

Fig. 1. A framework for responding to disruptive change.

Single-Sensor Camera Image Compression

Rastislav Lukac, Member, IEEE, and Konstantinos N. Plataniotis, Senior Member, IEEE
Contributed Paper. Manuscript received December 19, 2005. © 2006 IEEE.

Abstract — This paper presents digital camera image compression solutions suitable for use in single-sensor consumer electronic devices equipped with the Bayer color filter array (CFA). The proposed solutions code camera images available either in the CFA format or as full-color demosaicked data, thus offering different design characteristics, performance and computational efficiency.
Extensive experimentation reported in this paper indicates that pipelines which employ a JPEG 2000 coding scheme achieve significant performance improvements compared to similar processing pipelines equipped with a JPEG coder. Other improvements, both objective and subjective, are observed in terms of color appearance, image sharpness and the presence of visual artifacts in the captured images.

Index Terms — Image-enabled consumer electronics, single-sensor imaging, Bayer pattern, camera image compression, color filter array interpolation, demosaicking.

The authors are with The Edward S. Rogers Sr. Department of ECE, University of Toronto, Toronto, Canada. Corresponding Author: Dr. Rastislav Lukac, Multimedia Laboratory, Room BA 4157, The Edward S. Rogers Sr. Department of ECE, University of Toronto, 10 King's College Road, Toronto, Ontario, M5S 3G4, Canada (e-mail: lukacr@ieee.org, web: http://www.dsp.utoronto.ca/~lukacr).

I. INTRODUCTION

Single-sensor imaging constitutes a cost-effective tool to capture the visual scene. This approach is widely used in consumer electronic devices [1]-[5], such as digital still and video cameras, image-enabled mobile phones, and wireless personal digital assistants (PDAs). To overcome the monochromatic nature of the single image sensor, usually a charge-coupled device (CCD) [6] or complementary metal oxide semiconductor (CMOS) [7] sensor, a color filter array (CFA) [8] is placed on top of the sensor. Since each sensor cell has its own spectrally selective filter, the acquired CFA values constitute a mosaic-like gray-scale image [8],[9], thus requiring the so-called demosaicking process to restore the full-color information (Fig. 1) [8]-[13]. Typical consumer cameras perform demosaicking as part of the processing pipeline implemented in the camera hardware/software. The demosaicked images are usually stored in a compressed format using the Joint Photographic Experts Group (JPEG) standard. In addition to the image data, metadata about the camera and the environment is added to the compressed file using the Exchangeable Image File (EXIF) format [14]. The image file-recording format is strictly based on existing formats used by commercial applications, to utilize available functions for viewing and manipulating the images. Uncompressed RGB data is recorded in conformance with baseline TIFF 6.0 for RGB full-color images. The EXIF standard can also record uncompressed YCbCr data by following TIFF 6.0 extensions for YCbCr images. In this mode, the image data is stored along with additional information about the RGB-to-YCbCr color transformation matrix coefficients, chrominance sub-sampling, and the matching/non-matching of chrominance and luminance samples. Finally, in the most popular EXIF mode, the images are compressed using the JPEG adaptive discrete cosine transformation (ADCT) format [14],[15].

Apart from EXIF-driven devices, there also exist digital cameras which store the acquired CFA image using the Tagged Image File Format for Electronic Photography (TIFF-EP) [16], along with metadata about camera settings, spectral sensitivities and the illuminant used. The uncompressed image data is recorded in conformance with the TIFF 6.0 specification, whereas essential compression is usually obtained using the JPEG DCT scheme. Although TIFF-EP supports recording the image data in the (lossy or lossless) JPEG-DCT compression format, in other JPEG versions, or even in a vendor-unique compression format, the current standard mainly supports baseline (lossy DCT) compression to allow the camera image to be read by commercial applications [16]. In the JPEG-DCT format, the use of lossy compression may require storing the JPEG quantization and Huffman coding tables. For lossless JPEG compression, the standard recommends lossless sequential Differential Pulse Code Modulation (DPCM) along with Huffman coding.
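The lossless sequential DPCM mentioned above codes each sample as a residual against a prediction formed from already-decoded samples; only the residuals are then entropy-coded. The following is a minimal sketch of the prediction loop only, not the actual JPEG lossless codec: the previous-sample predictor and the zero initial prediction are simplifying assumptions made for illustration.

```python
def dpcm_encode(samples, predictor=lambda prev: prev):
    """Return the DPCM residuals of a scan line of samples.

    Each residual is the sample minus a prediction from the previous
    reconstructed sample; since coding is lossless, the reconstructed
    sample equals the original one.
    """
    residuals = []
    prev = 0  # conventional start-of-scan prediction (assumption)
    for s in samples:
        residuals.append(s - predictor(prev))
        prev = s
    return residuals


def dpcm_decode(residuals, predictor=lambda prev: prev):
    """Invert dpcm_encode: rebuild the samples from the residuals."""
    samples = []
    prev = 0
    for r in residuals:
        s = r + predictor(prev)
        samples.append(s)
        prev = s
    return samples
```

Because the predictor sees only previously reconstructed samples, encoder and decoder stay in lockstep and the round trip is exactly lossless; the residuals of smooth image rows cluster near zero, which is what makes the subsequent Huffman coding effective.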
The demosaicking process is performed off-camera using a companion personal computer (PC) which interfaces with the TIFF-EP-driven camera [9]. Both EXIF and TIFF-EP are made to be as compatible as possible by unifying the tags' definitions. For example, the tags are used to indicate the image data format, color space information, camera and lens settings, CFA type, and camera characterization. The tag-fields are readable by dedicated PC software. Annotation of the captured image, by storing metadata about the date/time, location, semantic information, authorship and copyright, is also supported. This so-called picture information can support digital rights management (DRM) operations, and it can also be used to organize and retrieve digital photographs in personal and public image databases [17]. Note that the EXIF specification also allows the inclusion of an audio file format, enabling the recording of audio as a supplementary function and indicating the relation between image and audio files [14].

IEEE Transactions on Consumer Electronics, Vol. 52, No. 2, May 2006

Fig. 1. Bayer CFA-based single-sensor imaging: (a) mosaic-like gray-scale CFA image, (b) demosaicked full-color image.

This paper deals with compression of the captured images using JPEG and JPEG 2000. Although various lossless and near-lossless image compression solutions have recently been proposed in [2],[18]-[21], the above EXIF/TIFF-EP overview revealed that camera manufacturers mainly rely on lossy JPEG-type compression applied either to the full-color demosaicked image or to the gray-scale CFA data. Since compression of the CFA image allows the transmission of significantly less information compared to full-color image compression, it is expected that CFA-oriented compression methods [18]-[25] may be of great interest in wireless image-enabled devices.
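The mosaic-like CFA image of Fig. 1a arises because each sensor cell retains a single color component. The sampling step can be sketched as follows; the RGGB phase of the Bayer pattern assumed here is an illustrative choice, not a detail taken from the paper's figures.

```python
def bayer_channel(r, s):
    """Return 0 (R), 1 (G) or 2 (B) carried by CFA site (r, s).

    Assumes an RGGB phase: R at even row/even column, B at odd/odd,
    G elsewhere (the quincunx green lattice of the Bayer pattern).
    """
    if r % 2 == 0:
        return 0 if s % 2 == 0 else 1
    return 1 if s % 2 == 0 else 2


def sample_cfa(rgb):
    """Simulate Bayer CFA acquisition of an H x W x 3 nested-list image:
    keep exactly one color component per pixel, yielding an H x W
    single-channel mosaic like the one shown in Fig. 1a."""
    return [[rgb[r][s][bayer_channel(r, s)]
             for s in range(len(rgb[0]))]
            for r in range(len(rgb))]
```

Running this on a full-color image produces the gray-scale mosaic that the demosaicking step must later invert by estimating the two discarded components at every site.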
Furthermore, JPEG 2000 has been established as a new standard for still image compression [26] and used as the replacement of the previous JPEG coder in a wide range of image processing applications [27]. It is therefore expected that, once the difficulties with computational complexity and memory requirements are overcome, JPEG 2000 will be employed in the next generation of digital cameras. Moreover, since JPEG 2000 supports different metadata information, its inclusion in both the EXIF and TIFF-EP formats should offer new possibilities for various computer vision and multimedia applications based on single-sensor consumer electronic devices. Some initial research effort has been devoted to the utilization of JPEG 2000 in single-sensor imaging and the evaluation of its image compression efficiency [18],[19],[25],[28]. However, there is no known study addressing the performance issues with respect to the quality of the decoded demosaicked image. Since the demosaicked images are used for displaying, printing, and storage at the final stage of the single-sensor processing pipeline (Fig. 2), an analysis of the visual quality of the decoded and demosaicked image from the end-user perspective is all the more urgent. To this end, this paper presents three camera image processing pipelines suitable for coding of the CFA images or demosaicked full-color images. Furthermore, four state-of-the-art demosaicking solutions and two (JPEG, JPEG 2000) coding schemes are used to demonstrate the influence of both demosaicking and image compression, at various compression ratios, on the sharpness, color appearance and the presence of visual artifacts in the captured images. The rest of this paper is organized as follows. Section II introduces single-sensor image compression solutions; motivation and design characteristics are discussed in detail. In Section III, the proposed camera image processing solutions are tested using a number of color images, and various demosaicking and coding schemes.
Evaluations of performance, both objective and subjective, are provided. Finally, conclusions are drawn in Section IV.

II. SINGLE-SENSOR CAMERA IMAGE COMPRESSION

In a single-sensor digital camera, the captured image can be stored either in the CFA format (Fig. 1a) or as the demosaicked full-color image (Fig. 1b). As shown in Fig. 2, the storage format determines the order of the demosaicking and image compression steps. Thus, it essentially determines both the design and performance characteristics of the single-sensor imaging pipeline.

A. Demosaicked Image Compression

Compression of a demosaicked image represents the typical approach implemented in consumer-grade cameras. Building on the advances of color image processing, camera manufacturers use conventional color image compression methods [14],[26]-[28] to reduce redundancy in the full-color image data (Fig. 1b) obtained using the demosaicking process. Note that in-camera processing is usually performed using sub-optimal methods because of the real-time constraints imposed on the processing solution. Although comfortable to use, compression of demosaicked images directly in the digital camera can be counterproductive [2],[23]. Demosaicking triples the amount of data to be compressed by populating the two missing color components at each spatial location of the acquired gray-scale CFA image. Since color image compression aims to de-correlate the image data, the solution shown in Fig. 2a may reduce the compression ratios that can be achieved and increase the computational complexity.

B. CFA Image Compression

High-end single-sensor cameras store the captured image directly in the CFA image format. Compressing the CFA data before demosaicking can achieve acceptable visual quality at high compression ratios [18]-[25].
By decompressing the CFA image in its original quality, or with a small compression error, the end-user can obtain a high-quality demosaicked image by running computationally expensive, sophisticated demosaicking algorithms on a companion PC. Therefore, the solution shown in Fig. 2b is suitable for both consumer and professional digital photography, as well as emerging applications such as digital cinema, astronomy and medical imaging.

Fig. 2. Single-sensor camera image compression solutions: (a) demosaicked image compression, (b) CFA image compression. Note that the coding scheme in (b) can either be applied directly to the CFA image or it may optionally include the structure conversion prior to image compression.

Fig. 3. CFA pixel arrangement in the Bayer CFA: (a) before structure conversion, (b) after structure conversion.

Fig. 4. Structure conversion applied to the CFA image shown in Fig. 1a. The sub-images' order follows the arrangement in Fig. 3b.

Apart from high-end cameras, CFA image compression is of paramount importance to image-enabled consumer electronic devices, such as wireless PDAs, mobile phones, and surveillance systems.
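To give a concrete, deliberately simplified picture of what a demosaicking algorithm does, the sketch below fills each missing color component with the mean of the same-color CFA samples in the clipped 3x3 neighbourhood. This naive averaging is only loosely in the spirit of bilinear interpolation; it is not the BI, KA, ED or CCA scheme evaluated in this paper, and it assumes an RGGB Bayer phase and images of at least 2x2 pixels.

```python
def bayer_channel(r, s):
    """0 (R), 1 (G) or 2 (B) carried by CFA site (r, s); RGGB phase
    assumed for illustration."""
    if r % 2 == 0:
        return 0 if s % 2 == 0 else 1
    return 1 if s % 2 == 0 else 2


def demosaick_naive(cfa):
    """Estimate the full-color image from an H x W nested-list CFA
    mosaic (H, W >= 2): every component of every pixel becomes the
    mean of the same-color samples in the clipped 3x3 window.  Any
    clipped window spans both row and column parities, so each of
    the three channels is always represented."""
    H, W = len(cfa), len(cfa[0])
    out = [[[0.0, 0.0, 0.0] for _ in range(W)] for _ in range(H)]
    for r in range(H):
        for s in range(W):
            for k in range(3):
                vals = [cfa[rr][ss]
                        for rr in range(max(0, r - 1), min(H, r + 2))
                        for ss in range(max(0, s - 1), min(W, s + 2))
                        if bayer_channel(rr, ss) == k]
                out[r][s][k] = sum(vals) / len(vals)
    return out
```

The averaging blurs edges and produces the color artifacts that the paper's more sophisticated, edge-adaptive schemes are designed to avoid, which is why demosaicking quality matters so much at the end of the pipeline.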
Since bandwidth reduction is crucial for transmission of captured images in wireless networks, compression of the CFA data rather than the demosaicked data basically allows a three-fold reduction of the information to be transmitted. On the other hand, operating on the CFA pixels arranged in the original mosaic layout may limit the compression efficiency, due to the artificial high frequencies in the CFA image (Fig. 1a).

C. CFA Image Compression Using Structure Conversion

To overcome the mosaic-like structure of the acquired CFA data and further increase the CFA image coding efficiency, a CFA data structure conversion should be used prior to image compression (Fig. 2b). The structure conversion step transforms the CFA pixels corresponding to the same color filters into a structure more appropriate for image coding [23]. Fig. 3 shows one of several possible re-arrangements of the pixels captured using the Bayer CFA [21]. As shown in Fig. 4, the process creates sub-images which contain natural edges and transitions between flat regions. Therefore, the structure conversion can help to achieve higher compression ratios compared to direct coding of the acquired CFA image. As shown in Fig. 2b, an inverse structure conversion should be used after image decompression to restore the CFA mosaic layout.

III. EXPERIMENTAL RESULTS

In this section, various single-sensor image processing solutions are used to produce the decoded full-color camera image. Following the scenario shown in Fig. 2, the generated image is displayed to the end-user for visual inspection and used for comparative evaluations of the visual quality.
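The structure conversion described in Section II.C above can be sketched as grouping the CFA samples by row/column parity into four quarter-size sub-images and re-interleaving them after decoding. This parity-based split is one plausible reading of the re-arrangement in Fig. 3; as the text notes, several re-arrangements of the Bayer samples are possible.

```python
def structure_convert(cfa):
    """Group the samples of an H x W nested-list Bayer CFA image by
    (row parity, column parity) into four quarter-size sub-images.
    All samples from the same filter class end up in one smooth
    sub-image with natural edges, which codes better than the
    artificially high-frequency mosaic."""
    return {(p, q): [row[q::2] for row in cfa[p::2]]
            for p in (0, 1) for q in (0, 1)}


def structure_invert(subs, H, W):
    """Inverse conversion: re-interleave the four sub-images into the
    original H x W mosaic layout after decompression."""
    cfa = [[None] * W for _ in range(H)]
    for (p, q), sub in subs.items():
        for i, row in enumerate(sub):
            for j, v in enumerate(row):
                cfa[2 * i + p][2 * j + q] = v
    return cfa
```

Because the conversion is a pure permutation of samples, the inverse step restores the mosaic bit-exactly; any loss in the pipeline comes only from the codec applied to the sub-images.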
The objective of this experimentation is to demonstrate strong dependencies of the final image quality on: i) the order of demosaicking and compression operations, ii) the selection of the demosaicking solution and/or the coding scheme, iii) various compression ratios, and iv) the utilization of the structure conversion step in support of CFA image compression. IEEE Transactions on Consumer Electronics, Vol. 52, No. 2, MAY 2006 302 (a) (b) (c) (d) (e) (f) (g) (h) Fig. 5. Test color images: (a) Lighthouse, (b) Bikes, (c) Girls, (d) Parrots, (e) Train, (f) Rafting, (g) Flower, (h) Face. A. Processing Solutions Under Consideration Three single-sensor image processing pipelines (IPP) are considered. Namely, IPP1 first demosaicks the CFA image and then compresses the generated full-color image. The compressed image is decoded and displayed. IPP2 first compresses the CFA data in a straight-forward manner (without structure conversion) and then demosaicks the decoded CFA image. Finally, IPP3 uses the structure conversion step prior to compression of the CFA image data An inverse structure conversion is used after decoding the compressed CFA image. The full-color image is obtained by demosaicking the data in the restored CFA layout. The proposed pipelines can employ various compression and demosaicking solutions. In this work, we used JPEG and JPEG 2000 to compress the image data. Demosaicking solutions under consideration include the bilinear interpolation (BI) scheme, the Kimmel algorithm (KA) [29], the enhancement/demosaicking (ED) scheme [30], and the color- correlation adaptive (CCA) demosaicking scheme [31]. The combination of these four demosaicking solutions and two coding schemes in three processing pipelines allows for the testing of total 24 processing solutions. Thus, this work extensively covers a wide range of design and performance issues which are essential in single-sensor imaging. B. 
B. Evaluation Procedure

The performance of the proposed solutions was tested using the 512×512 color images shown in Fig. 5. Following the procedure reported in [9], the $K_1 \times K_2$ test images $o: Z^2 \to Z^3$ were sampled by the Bayer CFA (Fig. 3a) to produce the CFA images $z$. The full-color image $x$ to be displayed to the end-user (Fig. 2) is generated by applying separately each of the proposed pipelines (IPP1, IPP2, IPP3) to the CFA image $z$.

To evaluate the performance of the proposed solutions, image quality was measured by comparing $o$ and $x$ using four criteria defined in three color spaces. Namely, the image quality was evaluated in the RGB color space (commonly used for storage/displaying) using the mean absolute error (MAE) [9] and the peak signal-to-noise ratio (PSNR) [23] defined as follows:

$\mathrm{MAE} = \frac{1}{3 K_1 K_2} \sum_{r=1}^{K_1} \sum_{s=1}^{K_2} \sum_{k=1}^{3} \left| o_{(r,s)k} - x_{(r,s)k} \right|$   (1)

$\mathrm{PSNR} = 10 \log_{10} \left( 255^2 \Big/ \left( \frac{1}{3 K_1 K_2} \sum_{r=1}^{K_1} \sum_{s=1}^{K_2} \sum_{k=1}^{3} \left( o_{(r,s)k} - x_{(r,s)k} \right)^2 \right) \right)$   (2)

where $(r,s)$ denotes the spatial position in a $K_1 \times K_2$ image, $k$ characterizes the color channel, $o_{(r,s)} = [o_{(r,s)1}, o_{(r,s)2}, o_{(r,s)3}]$ is the original RGB pixel, and $x_{(r,s)} = [x_{(r,s)1}, x_{(r,s)2}, x_{(r,s)3}]$ is the restored RGB pixel. Since RGB is not a perceptually uniform color space, i.e. the measured units do not correspond to human perception, two additional criteria defined in the perceptually uniform CIE Luv and CIE Lab color spaces [32] with the D65 white point were used.
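Eqs. (1) and (2) can be sketched directly; the nested-list image layout (image[r][s][k], k = 0..2) is illustrative, and a real implementation would use an array library:

```python
import math

def mae(o, x):
    """Eq. (1): mean absolute error over a K1 x K2 three-channel image."""
    k1, k2 = len(o), len(o[0])
    total = sum(abs(o[r][s][k] - x[r][s][k])
                for r in range(k1) for s in range(k2) for k in range(3))
    return total / (3 * k1 * k2)

def psnr(o, x):
    """Eq. (2): peak signal-to-noise ratio in dB for 8-bit channels."""
    k1, k2 = len(o), len(o[0])
    mse = sum((o[r][s][k] - x[r][s][k]) ** 2
              for r in range(k1) for s in range(k2) for k in range(3))
    mse /= (3 * k1 * k2)
    return 10 * math.log10(255 ** 2 / mse)
```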
The perceptual similarity between the images $o$ and $x$ is quantified using the normalized color difference (NCD) criterion [9]:

$\mathrm{NCD} = \sum_{r=1}^{K_1} \sum_{s=1}^{K_2} \sqrt{\sum_{k=1}^{3} \left( o'_{(r,s)k} - x'_{(r,s)k} \right)^2} \Big/ \sum_{r=1}^{K_1} \sum_{s=1}^{K_2} \sqrt{\sum_{k=1}^{3} \left( o'_{(r,s)k} \right)^2}$   (3)

where $o'_{(r,s)} = [o'_{(r,s)1}, o'_{(r,s)2}, o'_{(r,s)3}]$ and $x'_{(r,s)} = [x'_{(r,s)1}, x'_{(r,s)2}, x'_{(r,s)3}]$ are the vectors representing the RGB vectors $o_{(r,s)}$ and $x_{(r,s)}$, respectively, in the CIE Luv color space. Another color-based criterion is the so-called $\Delta_{Lab}$ measure defined as follows [32]:

$\Delta_{Lab} = \frac{1}{K_1 K_2} \sum_{r=1}^{K_1} \sum_{s=1}^{K_2} \sqrt{\sum_{k=1}^{3} \left( o''_{(r,s)k} - x''_{(r,s)k} \right)^2}$   (4)

where $o''_{(r,s)}$ and $x''_{(r,s)}$ are the CIE Lab equivalents of the RGB vectors $o_{(r,s)}$ and $x_{(r,s)}$, respectively. A Euclidean distance of approximately 2.3 between two color stimuli in the CIE Lab color space corresponds to a just noticeable difference (JND).

C. Achieved Results

The results reported in Figs. 6-8 are achieved for a wide range of compression ratio values (eight values per curve). Since the error values are calculated as aggregated measures averaged over the images in the test database shown in Fig. 5, these results indicate the solutions' robustness (or lack thereof). For demosaicked image compression (Fig. 6), the selection of the demosaicking algorithm has practically no significant impact for JPEG 2000 at compression ratios over 80. The best performance for JPEG at compression ratios over 60 was achieved by the BI demosaicking scheme. At low compression ratios, both JPEG and JPEG 2000 produced the best results in conjunction with the powerful ED and CCA schemes. Figs. 7 and 8 summarize the results corresponding respectively to plain and structure conversion-based CFA image compression.

R. Lukac and K. N. Plataniotis: Single-Sensor Camera Image Compression

Fig. 6. Performance of the IPP1 solution expressed for various compression ratios and error criteria averaged over the database in Fig. 5.

As can be seen, when scaling down the compression ratios, the image quality for both JPEG and JPEG 2000 compression critically depends on the performance of the demosaicking solution. The best results were achieved using the sophisticated (CCA, ED) demosaicking solutions at low compression ratios. The comparison of Figs. 6-8 shows that JPEG 2000 significantly outperforms conventional JPEG in terms of both the measured quality and the achieved compression ratios. This behavior was observed for all four considered error criteria. Finally, it should be noted that although the proposed IPP1, IPP2 and IPP3 pipelines had similar performance at low compression ratios, the difference in performance became obvious at high compression ratios, where IPP2 (straightforward CFA image compression) produced the worst results.

Figs. 9-11 depict enlarged parts of the test images and the output images, cropped in areas with significant structural content. Visual inspection of these images and the corresponding compression ratios listed in the figure captions reveals that the solutions equipped with JPEG 2000 clearly outperformed their JPEG-based variants in terms of image sharpness, true coloration and higher compression ratios.
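As a companion to Eqs. (3) and (4), a minimal sketch of the NCD and ΔLab computations, assuming the CIE Luv/Lab conversions have been applied beforehand (the nested-list layout image[r][s][k] is illustrative):

```python
import math

def ncd(o_luv, x_luv):
    """Eq. (3): normalized color difference over CIE Luv images."""
    num = sum(math.sqrt(sum((o - x) ** 2 for o, x in zip(op, xp)))
              for orow, xrow in zip(o_luv, x_luv)
              for op, xp in zip(orow, xrow))
    den = sum(math.sqrt(sum(o ** 2 for o in op))
              for orow in o_luv for op in orow)
    return num / den

def delta_lab(o_lab, x_lab):
    """Eq. (4): mean Euclidean distance in CIE Lab; ~2.3 is one JND."""
    k1, k2 = len(o_lab), len(o_lab[0])
    total = sum(math.sqrt(sum((o - x) ** 2 for o, x in zip(op, xp)))
                for orow, xrow in zip(o_lab, x_lab)
                for op, xp in zip(orow, xrow))
    return total / (k1 * k2)
```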
Even at extremely high compression ratios, both the IPP1 and IPP3 solutions produced modest visual quality using the JPEG 2000 coding scheme, whereas images obtained using the conventional JPEG scheme contained various visual impairments, such as blurred edges, block effects, shifted colors and other compression artifacts.

In summary, the following conclusions can be drawn: i) the use of JPEG 2000 instead of the conventional JPEG scheme in the single-sensor imaging pipeline results in significantly higher compression ratios and quality of the captured images, ii) the proposed processing solutions produce images with reasonable image quality at high compression ratios and visually pleasing images at compression ratios ranging from modest to low values, iii) powerful demosaicking solutions should be employed in the proposed processing pipelines to produce the highest visual quality at low and modest compression ratios, and iv) cost-effective demosaicking solutions should be used at higher compression ratios.

Fig. 7. Performance of the IPP2 solution expressed for various compression ratios and error criteria averaged over the database in Fig. 5.

IV. CONCLUSION

A camera image compression framework suitable for single-sensor devices was presented. Using the single-sensor imaging pipeline equipped with the Bayer CFA, the proposed solutions can store either the CFA data or the demosaicked full-color data.
Extensive experimentation reported in this paper indicates that the use of CFA image coding solutions, as well as the introduction of JPEG 2000 as a new image compression standard for digital photography formats (EXIF, TIFF-EP), can play a key role in the development of new, powerful consumer electronic devices.

ACKNOWLEDGEMENT

The authors gratefully thank Leo Hwang for his help with the preparation of the test tools.

REFERENCES

[1] R. Lukac and K. N. Plataniotis, "Fast video demosaicking solution for mobile phone imaging applications," IEEE Transactions on Consumer Electronics, vol. 51, no. 2, pp. 675-681, May 2005.
[2] L. Zhang, X. Wu, and P. Bao, "Real-time lossless compression of mosaic video sequences," Real-Time Imaging, vol. 11, pp. 370-377, 2005.
[3] S. Battiato, A. Castorina, M. Guarnera, and P. Vivirito, "A global enhancement pipeline for low-cost imaging devices," IEEE Transactions on Consumer Electronics, vol. 49, no. 3, pp. 670-675, August 2003.
[4] R. Lukac, K. Martin, and K. N. Plataniotis, "Digital camera zooming based on unified CFA image processing steps," IEEE Transactions on Consumer Electronics, vol. 50, no. 1, pp. 15-24, February 2004.
[5] R. Lukac, K. Martin, and K. N. Plataniotis, "Demosaicked image post-processing using local color ratios," IEEE Transactions on Circuits and Systems for Video Technology, vol. 14, no. 6, pp. 914-920, June 2004.
[6] P. L. P. Dillon, D. M. Lewis, and F. G. Kaspar, "Color imaging system using a single CCD area array," IEEE Journal of Solid-State Circuits, vol. 13, no. 1, pp. 28-33, February 1978.
[7] T. Lule, S. Benthien, H. Keller, F. Mutze, P. Rieve, K. Seibel, M. Sommer, and M. Bohm, "Sensitivity of CMOS based imagers and scaling perspectives," IEEE Transactions on Electron Devices, vol. 47, no. 11, pp. 2110-2122, November 2000.
[8] R. Lukac and K. N. Plataniotis, "Color filter arrays: Design and performance analysis," IEEE Transactions on Consumer Electronics, vol. 51, no. 4, pp. 1260-1267, November 2005.
[9] R. Lukac and K. N. Plataniotis, "Data-adaptive filters for demosaicking: A framework," IEEE Transactions on Consumer Electronics, vol. 51, no. 2, pp. 560-570, May 2005.
[10] B. K. Gunturk, J. Glotzbach, Y. Altunbasak, R. W. Schafer, and R. M. Mersereau, "Demosaicking: color filter array interpolation," IEEE Signal Processing Magazine, vol. 22, no. 1, pp. 44-54, January 2005.

Fig. 8. Performance of the IPP3 solution expressed for various compression ratios and error criteria averaged over the database in Fig. 5.

[11] X. Wu and N. Zhang, "Primary-consistent soft-decision color demosaicking for digital cameras," IEEE Transactions on Image Processing, vol. 13, no. 9, pp. 1263-1274, September 2004.
[12] R. Lukac and K. N. Plataniotis, "Normalized color-ratio modeling for CFA interpolation," IEEE Transactions on Consumer Electronics, vol. 50, no. 2, pp. 737-745, May 2004.
[13] R. Lukac, B. Smolka, K. Martin, K. N. Plataniotis, and A. N. Venetsanopoulos, "Vector filtering for color imaging," IEEE Signal Processing Magazine, vol. 22, no. 1, pp. 74-86, January 2005.
[14] Technical Standardization Committee on AV & IT Storage Systems and Equipment, "Exchangeable image file format for digital still cameras: Exif Version 2.2," JEITA CP-3451, April 2002.
[15] Y. T. Tsai, "Color image compression for single-chip cameras," IEEE Transactions on Electron Devices, vol. 38, no. 5, pp. 1226-1232, May 1991.
[16] Technical Committee ISO/TC 42, Photography, "Electronic still picture imaging - Removable memory, Part 2: Image data format - TIFF/EP," ISO 12234-2, January 2001.
[17] R. Lukac and K. N. Plataniotis, "Digital image indexing using secret sharing schemes: A unified framework for single-sensor consumer electronics," IEEE Transactions on Consumer Electronics, vol. 51, no. 3, pp. 908-916, August 2005.
[18] N. Zhang and X. Wu, "Lossless compression of color mosaic images," in Proc. IEEE Int. Conf. Image Processing (ICIP'04), vol. 1, pp. 517-520, October 2004.
[19] X. Xie, G. L. Li, X. W. Li, Z. H. Wang, C. Zhang, D. M. Li, and L. Zhang, "A new approach to near-lossless and lossless image compression with Bayer color filter arrays," in Proc. Third Int. Conf. on Image and Graphics (ICIG'04), pp. 357-360, December 2004.
[20] T. Toi and M. Ohta, "A subband coding technique for image compression in single CCD cameras with Bayer color filter arrays," IEEE Transactions on Consumer Electronics, vol. 45, no. 1, pp. 176-180, February 1999.
[21] A. Bazhyna, A. Gotchev, and K. Egiazarian, "Near-lossless compression algorithm for Bayer pattern color filter arrays," SPIE-IS&T Electronic Imaging: Digital Photography, vol. 5678, pp. 198-209, 2005.
[22] S. Y. Lee and A. Ortega, "A novel approach of image compression in digital cameras with Bayer color filter array," in Proc. IEEE Int. Conf. Image Processing (ICIP'01), vol. 3, pp. 482-485, October 2001.
[23] C. C. Koh, J. Mukherjee, and S. K. Mitra, "New efficient methods of image compression in digital cameras with color filter array," IEEE Transactions on Consumer Electronics, vol. 49, no. 4, pp. 1448-1456, November 2003.
[24] X. Xie, G. L. Li, Z. H. Wang, C. Zhang, D. M. Li, and X. W. Li, "A novel method of lossy image compression for digital image sensors with Bayer color filter arrays," in Proc. IEEE Int. Symposium on Circuits and Systems (ISCAS'05), pp. 4995-4998, May 2005.
[25] B. Parrein, M. Tarin, and P. Horain, "Demosaicking and JPEG2000 compression of microscopy images," in Proc. IEEE Int. Conf. on Image Processing (ICIP'04), vol. 1, pp. 521-524, October 2004.
[26] Information Technology - JPEG 2000 Image Coding System, ISO/IEC International Standard 15444-1, ITU Recommendation T.800, 2000.
[27] M. Rabbani and R. Joshi, "An overview of the JPEG 2000 still image compression standard," Signal Processing: Image Communication, vol. 17, no. 1, pp. 3-48, January 2002.
[28] S. Battiato, A. R. Bruna, A. Buemi, and A. Castorina, "Analysis and characterization of JPEG 2000 standard for imaging devices," IEEE Transactions on Consumer Electronics, vol. 49, pp. 773-779, November 2003.

Fig. 9. Performance of the IPP1 solution (demosaicked image compression) using the CCA demosaicking scheme: (a) original image Flower, (b) JPEG at compression ratios 130.05, 80.91, and 26.20, (c) JPEG 2000 at compression ratios 163.63, 122.46, and 38.93, (d) original image Girls, (e) JPEG at compression ratios 124.79, 56.58, and 20.33, (f) JPEG 2000 at compression ratios 162.79, 89.18, and 29.46.

Fig. 10. Performance of the IPP2 solution (CFA image compression) using the CCA demosaicking scheme: (a) original image Flower, (b) JPEG at compression ratios 116.49, 75.34, and 18.37, (c) JPEG 2000 at compression ratios 163.12, 124.49, and 38.79, (d) original image Girls, (e) JPEG at compression ratios 62.46, 33.97, and 11.57, (f) JPEG 2000 at compression ratios 164.73, 86.97, and 29.57.

Fig. 11.
Performance of the IPP3 solution (structure conversion-based CFA image compression) using the CCA demosaicking scheme: (a) original image Flower, (b) JPEG at compression ratios 131.64, 47.05, and 16.95, (c) JPEG 2000 at compression ratios 162.32, 125.09, and 39.53, (d) original image Girls, (e) JPEG at compression ratios 123.38, 46.77, and 17.10, (f) JPEG 2000 at compression ratios 163.74, 87.30, and 29.58.

[29] R. Kimmel, "Demosaicing: image reconstruction from color CCD samples," IEEE Transactions on Image Processing, vol. 8, no. 9, pp. 1221-1228, September 1999.
[30] W. Lu and Y. P. Tan, "Color filter array demosaicking: new method and performance measures," IEEE Transactions on Image Processing, vol. 12, no. 10, pp. 1194-1210, October 2003.
[31] R. Lukac, K. N. Plataniotis, D. Hatzinakos, and M. Aleksic, "A novel cost effective demosaicing approach," IEEE Transactions on Consumer Electronics, vol. 50, no. 1, pp. 256-261, February 2004.
[32] G. Sharma and H. J. Trussell, "Digital color imaging," IEEE Transactions on Image Processing, vol. 6, no. 7, pp. 901-932, July 1997.

Rastislav Lukac (M'02) received the M.S. (Ing.) and Ph.D. degrees in Telecommunications from the Technical University of Kosice, Slovak Republic, in 1998 and 2001, respectively. From February 2001 to August 2002 he was an Assistant Professor at the Department of Electronics and Multimedia Communications at the Technical University of Kosice. From August 2002 to July 2003 he was a Researcher at the Slovak Image Processing Center in Dobsina, Slovak Republic. From January 2003 to March 2003 he was a Postdoctoral Fellow at the Artificial Intelligence and Information Analysis Lab at the Aristotle University of Thessaloniki, Greece. Since May 2003 he has been a Postdoctoral Fellow with the Edward S. Rogers Sr. Department of Electrical and Computer Engineering at the University of Toronto in Toronto, ON, Canada.
He is a contributor to four books and has published over 200 papers in the areas of digital camera image processing, microarray image processing, multimedia security, and color image and video processing. Dr. Lukac is a Member of the IEEE Circuits and Systems, IEEE Consumer Electronics, and IEEE Signal Processing Societies. He serves as a Technical Reviewer for various scientific journals and participates as a Member of numerous international conference committees. In 2003, he was the recipient of the NATO/NSERC Science Award.

Konstantinos N. Plataniotis (S'90-M'92-SM'03) received the B.Eng. degree in Computer Engineering from the Department of Computer Engineering and Informatics, University of Patras, Patras, Greece, in 1988, and the M.S. and Ph.D. degrees in Electrical Engineering from the Florida Institute of Technology (Florida Tech), Melbourne, Florida, in 1992 and 1994, respectively. From August 1997 to June 1999 he was an Assistant Professor with the School of Computer Science at Ryerson University. He is currently an Associate Professor at the Edward S. Rogers Sr. Department of Electrical & Computer Engineering, where he researches and teaches image processing, adaptive systems, and multimedia signal processing. He co-authored, with A. N. Venetsanopoulos, the book "Color Image Processing & Applications" (Springer Verlag, May 2000), he is a contributor to seven books, and he has published more than 300 papers in refereed journals and conference proceedings in the areas of multimedia signal processing, image processing, adaptive systems, communications systems and stochastic estimation. Dr. Plataniotis is a Senior Member of the IEEE, an Associate Editor for the IEEE Transactions on Neural Networks, and a past member of the IEEE Technical Committee on Neural Networks for Signal Processing. He was the Technical Co-Chair of the Canadian Conference on Electrical and Computer Engineering (CCECE) 2001 and CCECE 2004.
He is the Vice-Chair of the 2006 IEEE International Conference on Multimedia and Expo (ICME 2006), the Technical Program Co-Chair for the 2006 IEEE Intelligent Transportation Systems Conference (ITSC 2006), and the Image Processing Area Editor for the IEEE Signal Processing Society e-letter.

WBC 2007

SHARING INFORMATION ACROSS COMMUNITY PORTALS WITH FOAFREALM

John G. Breslin, Slawomir Grzonkowski, Adam Gzella, Sebastian R. Kruk, Tomasz Woroniecki
Digital Enterprise Research Institute, National University of Ireland, Galway, Ireland
IDA Business Park, Lower Dangan, Galway, Ireland
firstname.lastname@deri.org

ABSTRACT

Community portals such as blogs, wikis and photo sharing sites have become the new channels for information dissemination on the Web. When searching for information, many results end up on some type of community site. However, one cannot make use of the wealth of information that is available through the preferences of one's thematic social network, or through bookmarks or other documents that are accessible only through a social network. Also, due to multiple accounts being registered on a variety of community portals, there is a lack of semantics regarding the information that a particular user has created or bookmarked across this set of portals. This paper presents a method for sharing information across multiple community portals through a social semantic collaborative filtering system; this collaborative filtering extends a popular "Friend Of A Friend" (FOAF) network and enables users to share the bookmarks and community documents that they create.

KEYWORDS

Online communities, information sharing, collaborative filtering

1. INTRODUCTION

Community-based applications such as blogging and wikis have become very popular and at the same time have created an interconnected information space (through the "blogosphere" and inter-wiki links).
More and more content is being created by users on "Web 2.0" community portals, ranging from pictures on photo album sharing sites to community event information to bookmarks on topics of interest. At the same time, these applications are experiencing boundaries in terms of information dissemination and user profile automation. A very simple example of a question that cannot be easily answered at the moment is "show me all the content created by Mick and all his close friends in the past week". We will now introduce what a community portal is, detail some of the problems with such portals, and then give an overview of the proposed solution to the problems via an example scenario.

1.1 Community Portals

Community portals are online community-specific websites that provide improved communication and contact links for a community online (e.g. one that is providing local or interest-based information). They are the most widespread platform used by communities to stay informed electronically. Members can find relevant information and may contribute any required shared information to others via the portal. By having an online collaboration space for a community with a certain interest, community portals provide awareness and interaction amongst a set of people, whether for profit or non-profit. These portals are replacing the traditional means of information exchange. They help to provide an online global communication agora, and to strengthen the communities themselves by informing them and by providing an open place for interaction and exchange of information and ideas.

ISBN: 978-972-8924-31-7 © 2007 IADIS

1.2 Problems with Existing Portals

Community portals have many significant advantages over traditional community collaboration methods (e.g. newsletters), and can be very useful and helpful for their users. Unfortunately, users of classical community portals face many potential problems. Each portal usually has its own user management system.
Users are forced to create an account and then remember a set of credentials. Moreover, users need to present almost exactly the same information on each portal they register to (such as their personal details, domains of interest, etc.). Finally, a community member may store different information relating to a particular domain of interest on each community portal. This information needs to be copied into one place, and then merged, which is of course time consuming; not every user would decide to perform such an operation.

Another problem occurs when a user wants to gain the knowledge gathered by an expert or a friend. Unfortunately, resources maintained by those people can be located in many different portals; users would waste their time on manually importing these resources, or may even abandon the operation in favour of using other, less competent knowledge.

1.3 Solution Scenario

Figure 1 illustrates a scenario which requires a solution to the problems described above. Let us take the case where we have a number of people who know each other and are interested in more than just one topic, e.g. the Semantic Web and digital photography. We will begin by looking at John, and the content cloud that represents his membership of various online community portals. John is interested in digital photography, and he is a member of a weblog site and a photo sharing community. John also has three friends, Mick, Mike and Sheila, some of whom are registered to the same communities as he is. He wants to sign up for a collaborative bookmarking site so that he can start bookmarking some of his favourite links relating to photos and also annotating digital photos with metadata, a new interest. He also knows that his friend Mick is a member of the bookmarking site; John hopes to use some of Mick's expertise in the Semantic Web and metadata to help with his search for useful resources.
Unfortunately, he cannot use either of the accounts on the weblog or photo sharing communities to log in to the new bookmarking site, so he will need to register a new account for that portal. What is more, if there are any interesting posts on digital photography and annotations in the weblog or photo sharing communities, he will need to copy-and-paste links for all the relevant discussions into the new bookmarking site, as there is no common exchange mechanism for resources or resource links between the various communities.

In the solution proposed in this paper, if the aforementioned sites were connected using a P2P-based distributed user profile and relationship management system (FOAFRealm (Kruk, 2004) via D-FOAF (Kruk et al., 2006b)), then John could use social semantic collaborative filtering (SSCF) (Kruk et al., 2006c) to pass links to items of interest from the photo sharing site to the collaborative bookmarking site (or to any of his friends, e.g. for Mike to use on his weblog). He could also simply use SSCF to bookmark any interesting items under a category folder called, for example, "Digital Photo Semantic Annotations", and then refer to this folder from any of the communities to which he is registered. Also, if a user specifies their topics of interest on one site, then these can be used to match other resources (discussions, pages, etc.) relating to those topics of interest on any other site they register for. For example, Sheila is registered on a bulletin board site (for Semantic Web developers) and says that she has an interest in resources tagged or categorised with "Semantic Web" and "IPTV". When she registers (via FOAFRealm) on another site (a video sharing site), the site picks up from her profile that she is interested in "Semantic Web" and "IPTV", and presents her with resources linked to those topics.
One of the videos is about using Semantic Web technologies to provide an enhanced program guide for television over data networks, and is tagged as being related to "Semantic TV"; she marks this as a topic of interest. Then, on the original bulletin board site, more resources (matching "Semantic TV") are presented.

IADIS International Conference on Web Based Communities 2007

1.4 Outline of the Paper

The Semantic Web (Berners-Lee et al., 2001) is increasingly aiming at application areas. The aforementioned area of community portals and the social networks (Adamic et al., 2003) formed therein is one of the obvious targets for Semantic Web research, due mainly to the recent explosion in the number of online social network site users, the growing popularity of other community portal sites such as blogs and media-sharing sites, and the potential benefits of connections between these community networks using semantic technologies. Quite a number of Semantic Web approaches have recently appeared to overcome the boundaries being encountered in these application areas, e.g. SIOC (semantically-interlinked online communities) (Breslin et al., 2005), structured blogging, semantic wikis, etc.

This paper will describe a combination of efforts to share and access the information related to networks of people across various community sites. The main solution uses a standards-aware technology called FOAFRealm, a user profile and relationships management system; it operates over D-FOAF, a distributed peer-to-peer authentication and trust infrastructure. D-FOAF is designed to operate without a centralised authority. We will begin by describing related work in this area, and then we will describe the profile management and social bookmarking systems in some detail, followed by details of how documents and resources can be classified.
Finally, we will outline some plans for future work based on our results so far in connecting the content of a user and their social network across multiple community sites.

Figure 1. Sharing content between disparate community sites using a social network and topics of interest.

2. BACKGROUND RELATED WORK

2.1 Online Communities

Online communities have become more and more popular, and they can no longer be considered as niche systems. Every country and every business trade has at least several popular portals. Hildreth et al. (2000) propose the following definition for a community: "it has a common set of interests to do something in common, is concerned with motivation, is self-generating, is self-selecting, is not necessarily co-located, and has a common set of interests motivated to a pattern of work not directed to it". Additionally, Kondratova and Goldfarb (2003) distinguish three main objectives of online communities: firstly, to supply content to the users; secondly, to encourage members to participate in the community by contributing; and finally, to facilitate communication and interaction between the members. The key features that differentiate online communities are the various kinds of forums, wikis, chat rooms, as well as the online and offline events that they may have (e.g. the Semantic Web community portal1 uses a mailing list as its primary communication medium). It is difficult to say which type of community portal is the most popular, because as many interests exist as there are portals. One potential candidate could be Wikipedia, since it gathers people regardless of their interests.

2.2 Social Networks

The aim of the "Friend Of A Friend" or "FOAF" standard is to utilise machine-readable homepages for describing people. Moreover, the idea also proposes storing links between people and the activities that they take part in (i.e. by specifying their topics of interest).
In order to achieve this, the "FOAF vocabulary"2 was introduced by Brickley and Miller. The vocabulary is strongly dependent on W3C3 standards, especially RDF and XML. A number of applications have been developed that make use of metadata provided using this vocabulary: FOAF-A-Matic4 allows those not familiar with XML to easily create descriptions of people, and FOAFNaut5 provides a visualization of any social networks formed using FOAF user profiles. Additionally, many online social network sites have taken advantage of Milgram's (1967) "six degrees of separation" observation, especially Friendster (Berners-Lee et al., 2001), which was initiated in 2002 and has received some patents in this domain. Furthermore, there has been a large growth in business-oriented networks: LinkedIn6 and Ryze7 manage professional contacts and enable users to find an employer or an employee. A special issue of Complexity published in August 2002 considered the role of networks and social network dynamics (Skvoretz, 2002). The aim of the issue was to show the complexity of different levels of network architecture, and to help with the comprehension of network-based analyses and explanations.

2.3 User Profile Management Systems

The issue of user profile management systems is a ubiquitous matter. Many research project concepts have been proposed, ranging from open source to commercial. Some offer sophisticated features such as distributed profiles and single sign-on functionality. Examples of such systems are Drupal8 and XUP9 (the latter being similar to the W3C FOAF metadata recommendation). Another idea, a protocol called Identity 2.010, has been proposed for the exchange of digital identity information. The general idea entails that users be supported with enhanced control over the information entrusted to other members of the community. The authors of Sxip 2.0 (Hardt, 2004) have announced that their system will provide this feature.
In addition, it will be possible to adjust security needs to a specific site. Probably the most famous profile management system is Microsoft Passport11, which supports the reuse of profile information across different services. Although an interesting idea, frequent bug reports and the centralised topology mean that the system has not yet been commonly accepted by other sites.

1 Semantic Web Community Portal: http://www.semanticweb.org/
2 FOAF: http://xmlns.com/foaf/0.1/
3 W3C: http://w3c.org
4 FOAF-A-Matic: http://www.ldodds.com/foaf/foaf-a-matic.html
5 FOAFNaut: http://www.foafnaut.org/
6 LinkedIn: http://www.linkedin.com/
7 Ryze: http://ryze.com/
8 Drupal: http://drupal.org/
9 XML User Profiles: http://xprofile.berlios.de/
10 Identity 2.0: http://www.identity20.com/
11 Microsoft Passport: http://www.passport.net/

IADIS International Conference on Web Based Communities 2007 129

An example of a decentralised digital identity system is OpenID12, in which a user's online identity is given either by a URI (such as that of their blog or home page) or an XRI (OASIS Extensible Resource Identifier)13.

3. SOLUTION USING FOAFREALM AND D-FOAF

3.1 Distributed User Profile Management (A)

The transparent distribution of a user's profile offered by FOAFRealm fits very well with the requirement for inter-portal cooperation. Information about a user's preferences may be collected from all connected sites, and modifications made on any site are visible across the others. For example, when the user subscribes to the "Annotating Images" category on a visited portal, a new fragment of his personal profile is created and propagated to the other sites. Eventually every site sees the user's profile as the union of the fragments held by all the other sites. FOAFRealm hides the complexity of managing distributed data and offers a clean interface for querying and storing users' profiles.
3.2 Social Semantic Collaborative Filtering (B)

Social semantic collaborative filtering (SSCF) is based on two concepts: distributed collections and annotations of resources. Each user classifies only a small subset of the knowledge, based on the level of expertise they have on a specific topic. This knowledge is later shared across the social network.

3.2.1 Classifying Community Portal Documents (B.1)

During their online activity, users can bookmark resources. Such information needs to be properly classified to be used by the system; the SSCF module allows users to classify their bookmarks with "domains of interest", which are represented by semantically annotated catalogs. Domains contain bookmarks and may also include other domains. This structure needs to be well classified; the user's taxonomy of catalogs needs to refer to other knowledge organisation systems. SSCF can utilise well-known classification systems through the JOnto14 plugin; a user can annotate a catalog's content using e.g. DDC, WordNet and dmoz:

- The Dewey Decimal Classification (DDC)15 is a general knowledge organisation tool that is continuously revised to keep pace with knowledge. DDC is currently the world's most widely used library classification system. Libraries in more than 135 countries use DDC to organise and provide access to their collections, and DDC numbers are featured in the national bibliographies of more than sixty countries. DDC provides a structural hierarchy, which means that all topics (aside from the ten main classes) are part of the broader topics above them. The class of a resource is given by a decimal number with at least three digits. The first digit indicates the main class (for example, 500 represents science). The second digit indicates the division (for example, 500 is used for general works on the sciences, 510 for mathematics). The third digit indicates the section (530 is used for general works on physics, 531 for classical mechanics).
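The digit-by-digit hierarchy of DDC numbers described above can be sketched in a few lines of Python (a simplified illustration that ignores decimal subdivisions after the dot):

```python
def ddc_broader(ddc_number):
    """Return the chain of broader DDC classes implied by the
    three-digit hierarchy, e.g. 531 (classical mechanics) sits
    under 530 (physics), which sits under 500 (science)."""
    main = ddc_number[0] + "00"       # main class, e.g. "500"
    division = ddc_number[:2] + "0"   # division, e.g. "530"
    chain = []
    if ddc_number[:3] != division:
        chain.append(division)
    if division != main:
        chain.append(main)
    return chain
```

For instance, `ddc_broader("531")` yields the broader classes 530 and then 500, mirroring the example in the text.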
A dot follows the third digit in a class number, after which division by ten continues to the specific degree of classification needed.

- WordNet16 is an online lexical reference system whose design is inspired by current psycholinguistic theories of human lexical memory. English nouns, verbs, adjectives and adverbs are organised into synonym sets, each representing one underlying lexical concept. Different relations link the synonym sets. Currently the WordNet database consists of over 200,000 word-sense pairs (over 150,000 unique strings).

- dmoz is the Open Directory Project17, the most widely distributed database of Web content classified by humans. Its editorial standards body of netizens provides the collective brain behind resource discovery on the Web. The Open Directory powers the core directory services for the Web's largest and most popular search engines and portals. All Open Directory resources (structure and content) are freely available to use.

12 OpenID: http://openid.net/
13 OASIS Extensible Resource Identifier (XRI) TC: http://www.oasis-open.org/committees/tc_home.php?wg_abbrev=xri
14 JOnto - Java Binding for Ontologies, Taxonomies and Thesauri: http://sf.net/projects/jonto/
15 The Dewey Decimal Classification (DDC): http://www.oclc.org/dewey/
16 WordNet: http://wordnet.princeton.edu/

With FOAFRealm-SSCF, different parts of community portals can be classified using the methods described above. A user can easily assign a class to discussions, wiki pages, blogs, photo albums, as well as normal pages on the Web.

3.2.2 Mechanism for Exchanging Documents Between People (B.2)

Social semantic collaborative filtering is strongly dependent on the social network, which can be stored as a directed graph. Nodes describe users, whereas edges represent the relationships between them.
Additionally, to overcome problems with security, each link between two people can also have an assigned trust level that decides whether access should be granted or denied. Users can have collections of bookmarks (i.e. a private bookshelf as described by Kruk et al. (2005)) which represent their knowledge; later they can render this knowledge accessible to their friends. Resources are collected in the private bookshelf according to the user's point of view, as expressed by their taxonomy of categories. Each collection can be ranked with quality metrics assigned to it; the owner is therefore able to specify their expertise level on a particular topic, which can be computed with the PageRank algorithm (Breslin et al., 2005) applied to the graphs of collection inclusions and the social network. Moreover, users are aware of the expertise level of some of their friends; this information can be used while looking for resources. Usually the resources that belong to close friends who are experts on a given topic are potentially useful and reliable. To sum up, such an infrastructure provides an excellent environment for obtaining shared documents. The presented approach differs in many ways from current trends; sharing files via current P2P standards usually depends only on the number of free slots or the quantity of shared files. Furthermore, SSCF allows users to specify access control policies for each catalog; they can restrict access to a certain sub-graph of the social network.

3.2.3 All the Parts Coming Together (A + B.1 + B.2)

With its advanced distributed model and collaboration features, FOAFRealm aims to be a comprehensive solution for managing identities and preferences. Fine-grained access control lists make it easy to share resources among friends and to spread knowledge in a community. Single sign-on and single registration let a user comfortably use multiple services, and also help to start up new sites by connecting them with existing popular ones.
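The trust-gated access control of Section 3.2.2 can be sketched as a walk over the directed social graph; this is a simplified illustration with hypothetical users, trust values and a hypothetical multiplicative policy, not the exact D-FOAF rules:

```python
# Directed social graph: trust[u][v] in [0, 1] is u's trust in v
# (all names and values below are invented examples).
trust = {
    "alice": {"bob": 0.9, "carol": 0.4},
    "bob":   {"dave": 0.8},
}

def access_granted(owner, requester, threshold=0.5, max_hops=2):
    """Grant access to the owner's bookshelf if some path of
    relationships reaches the requester while the multiplied
    trust along the path stays above the threshold."""
    frontier = [(owner, 1.0)]
    for _ in range(max_hops):
        next_frontier = []
        for node, level in frontier:
            for friend, t in trust.get(node, {}).items():
                combined = level * t
                if combined < threshold:
                    continue  # path too weakly trusted: prune it
                if friend == requester:
                    return True
                next_frontier.append((friend, combined))
        frontier = next_frontier
    return False
```

With these example values, Bob reaches Alice's bookshelf directly, Dave reaches it through Bob (0.9 × 0.8 = 0.72), while Carol is refused because Alice's trust in her falls below the threshold.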
Browsing others' bookmarks and annotations gives users the benefit of using valuable resources collected by experts. By simply deploying our FOAFRealm library, all of these features can be easily incorporated into a portal or any other application that wants to leverage social networks.

4. FUTURE WORK

The next step of evolution is a system called DigiMe (Kruk et al., 2006a), a research topic that has recently been initiated. The aim of DigiMe is to deliver SSCF features that support mobile devices and provide users with better control over their profile information. Users can store collaborative resources and profile information on their mobile device; the DigiMe system uses this information to explore the ad-hoc social networks paradigm.

5. CONCLUSIONS

In this paper, we detailed a number of issues with community portals that are experiencing boundaries in terms of content dissemination and profile automation. Users have to repeatedly sign up for various community sites, and they cannot make use of their stored resource links or annotations between sites. Similarly, users cannot easily make use of their social networks between sites, for example, to leverage the skills of a friend who may be an expert in one domain on a different community site. We have described the FOAFRealm and D-FOAF implementations that can overcome these boundaries by providing a distributed user profile management system along with social semantic collaborative filtering. This system provides an excellent method of sharing resources between friends or associates by defining the level of expertise that a person has on a particular topic (according to those they are connected to via a social network) and by suggesting various resources based on these expertise levels. We finished by briefly describing future work for mobile devices.

17 The Open Directory Project: http://dmoz.org/
ACKNOWLEDGEMENT

This material is based upon work supported by Enterprise Ireland under Grant No. ILP/05/203 and by Science Foundation Ireland (SFI) under the DERI-Lion project (SFI/02/CE1/1131).

REFERENCES

Adamic et al., 2003. A Social Network Caught in the Web, First Monday, vol. 8, no. 6, http://firstmonday.org/issues/issue8_6/adamic/
Berners-Lee et al., 2001. The Semantic Web, Scientific American, May 2001.
Boyd, 2004. Friendster and Publicly Articulated Social Networking, Proceedings of the Conference on Human Factors and Computing Systems (CHI 2004).
Breslin et al., 2005. An Approach to Connect Web-Based Communities, Proceedings of the 2nd IADIS International Conference on Web Based Communities (WBC 2005), pp. 272-275, Carvoeiro, Portugal.
Brin and Page, 1998. The Anatomy of a Large-Scale Hypertextual Web Search Engine, Computer Networks and ISDN Systems, vol. 30, pp. 107-117.
Dodds, 2004. An Introduction to FOAF, http://www.xml.com/pub/a/2004/02/04/foaf.html
Hildreth et al., 2000. Communities of Practice in the Distributed International Environment, Journal of Knowledge Management, vol. 4, no. 1, pp. 27-37.
Hardt, 2004. Personal Digital Identity Management, Proceedings of the FOAF Galway Workshop, September 2004.
Kondratova and Goldfarb, 2003. Design Concepts for Virtual Research and Collaborative Environments, Proceedings of the 10th ISPE International Conference on Concurrent Engineering: Research and Applications.
Kruk, 2004. FOAF-Realm - Control Your Friends' Access to Resources, Proceedings of the FOAF Galway Workshop, September 2004.
Kruk et al., 2005. JeromeDL - Reconnecting Digital Libraries and the Semantic Web, Proceedings of the 16th International Conference on Database and Expert Systems Applications.
Kruk et al., 2006a. DigiMe - Ubiquitous Search and Browsing for Digital Libraries, Proceedings of the MoSO Workshop at the MDM Conference, Nara, Japan, 2006.
Kruk et al., 2006b.
D-FOAF: Distributed Identity Management with Access Rights Delegation, Proceedings of the 1st Asian Semantic Web Conference, Beijing, 2006.
Kruk et al., 2006c. Social Semantic Collaborative Filtering for Digital Libraries, Journal of Digital Information, Special Issue on Personalization, 2006.
Milgram, 1967. The Small World Problem, Psychology Today, pp. 60-67, May 1967.
Skvoretz, 2002. Complexity Theory and Models for Social Networks, Complexity, vol. 8, no. 1, pp. 47-55.

The Open Nursing Journal, 2017, 11, (Suppl-1, M7) 232-240
1874-4346/17 © 2017 Bentham Open
DOI: 10.2174/1874434601711010232

REVIEW ARTICLE

Wearable Devices for Caloric Intake Assessment: State of Art and Future Developments

Maria Laura Magrini1, Clara Minto1, Francesca Lazzarini1, Matteo Martinato2 and Dario Gregori1,*
1 Unit of Biostatistics, Epidemiology and Public Health, Department of Cardiac, Thoracic and Vascular Sciences, University of Padova, Padova, Italy
2 Department of Gastroenterology, University of Padova, Padova, Italy

Received: February 15, 2017; Revised: May 15, 2017; Accepted: July 07, 2017

Abstract:

Background: The self-monitoring of caloric intake is becoming necessary as the number of pathologies related to eating increases. New wearable devices may help people to automatically record the energy consumed in their meals.

Objective: The present review collects the published articles about wearable devices or methods for automatic caloric assessment.

Method: A literature search was performed with the PubMed, Google Scholar, Scopus and ClinicalTrials.gov search engines, considering published articles regarding applications of wearable devices in the eating environment, from 2005 onwards.
Results: Several tools allow caloric assessment and food registration: wearable devices counting the number of bites ingested by the user, instruments detecting swallows and chews, and methods that analyse food with digital photography. All of them still require further validation and improvement.

Conclusion: Automatic recording of caloric intake through wearable devices is a promising method to monitor body weight and eating habits in clinical and non-clinical settings, and research is still ongoing.

Keywords: Wearable devices, Caloric intake monitoring, Non-communicable diseases, Body Mass Index, Bite, Armband.

1. INTRODUCTION

Nowadays, worldwide health care systems are focused on the creation of new management strategies to prevent the worsening of chronic diseases and reduce the associated economic costs. Non-communicable diseases (NCDs) are responsible for almost 38 million deaths each year due to cardiovascular events, respiratory pathologies, cancer and diabetes. In high- and middle-income countries, NCDs are mostly related to unhealthy diets, sedentary lifestyle, tobacco smoke and alcohol abuse. In particular, excess body weight (defined as a Body Mass Index higher than 25 kg/m2), low consumption of fruits and vegetables, and consumption of foods with a high content of fats and sugar are the main concerns of current health policies [1].
* Address correspondence to this author at the Unit of Biostatistics, Epidemiology and Public Health, Department of Cardiac, Thoracic and Vascular Sciences, Via Loredan 18, 35121 Padova, Italy; Tel: +390498275384; Fax: +3902700445089; E-mail: dario.gregori@unipd.it

Data from observational studies reveal that the global phenomenon of obesity nearly doubled between 1980 and 2008: the percentage of obese men increased from 5% to 10%, while the percentage of obese women increased from 8% to 14% [2]. According to the CUORE project, between 1998 and 2002, 17% of Italian men and 21% of Italian women were obese [3]. The World Health Organization (WHO) stresses the importance of healthy food choices to reduce the consequences of food-related pathologies such as obesity, diabetes, cancer and cardiovascular diseases [4]. These recommendations are widely supported by the scientific literature on dietary style: intake of healthy foods has positive effects on health conditions by reducing the risk of recurrent coronary heart disease [5], positively modulating markers of inflammation in patients with diabetes [6] and improving the health status of subjects with obstructive sleep apnea syndrome [7]. The maintenance of a positive energy balance is ensured by the complementary action of energy taken from nutrients (energy intake) and energy expended through physical activity and resting metabolic rate (energy expenditure) [8]. Several factors such as gender, age, environmental temperature, general health status and physical activity can influence energy intake and energy expenditure, suggesting the necessity of an energy balance calibrated for each subject [9].
As regards the economic burden of obesity, an annual medical cost of $3,613 per capita has been estimated for women, and of $1,152 per capita for men, in the United States [10]. A similar trend is observable in Italy, where costs to the health-care system are strongly influenced by the clinical status of individuals: compared with normal-weight subjects, annual per-capita expenditure rises by 65€ in overweight subjects and by 105€ in obese subjects [3]. Because of the importance of preventing chronic diseases that may result from excess body weight and an unhealthy diet, clinical trials and epidemiological studies have developed different research instruments to collect reliable data on the dietary patterns of large populations. Classic tools for food assessment include the 24-hour dietary recall, the dietary record, the dietary history and the food-frequency questionnaire. All these methods give a subjective measure using a predefined or open-ended, self- or interviewer-administered format [11]. Despite several advantages, such as low cost, suitability and accuracy, food questionnaires are not free from potential errors: the inclusion of open-ended questions can be time-consuming, patients may not be able to recall their daily diet, and predefined answers can be inaccurate. Moreover, long-term monitoring increases the risk of underestimating the energy intake and of returning incorrect or incomplete information about the quality and type of nutrient intake [12]. For these reasons, the assessment of caloric intake and energy expenditure is a crucial challenge for the maintenance of individual and public health. Accurate monitoring is essential to obtain information about the eating habits of subjects at risk (e.g. obese and diabetic), of subjects whose dietary assessment is difficult and often inaccurate (e.g. the elderly and children), and also of healthy subjects.
This is even more important considering that diet therapy requires long periods of treatment and often involves subjects at home. Constant and accurate monitoring of calories and nutrient quality is essential both for the healthcare provider and for the patient, to improve therapy effectiveness and compliance with the diet. New wearable devices for caloric intake seem to be the future of scientific research on nutrition: easy to use, accurate and free of subjective bias, these new electronic tools are able to collect large amounts of data. These wearable devices can be divided into three macro-categories according to the object or action captured by the device:

1. devices capturing the gestures related to nutrition, such as arm and wrist movements;
2. devices capturing sounds and vibrations from chewing and/or swallowing;
3. devices that identify the kind and the amount of food by analysing digital images of the food.

The aim of the present paper is to review wearable devices or methods for automatic caloric assessment which are currently under research and development, reporting for each tool a brief description, a case study and some considerations about pros and cons.

2. MATERIALS AND METHODS

The literature search started from the PubMed search engine, using the string: ("device"[Title] OR "bite"[Title] OR "armband"[Title]) AND ("caloric intake"[Title] OR "intake"[Title]). The results allowed the extraction of the names of the devices described in this review. These names were then searched through the Google Scholar, PubMed, Scopus and ClinicalTrials.gov search engines. Papers published from 2005 onwards and dealing with applications of the cited wearable devices in the eating environment were considered. Articles regarding the same devices in other applications (i.e. physical movement, hydraulic, veterinary) were excluded. No literature reviews about this topic were found.
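For reproducibility, the PubMed query above can be expressed as an NCBI E-utilities request. The sketch below only builds the esearch URL without sending it; the choice of retmax and the open-ended maxdate are illustrative, not part of the review's protocol:

```python
from urllib.parse import urlencode

ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

# The exact search string reported in the Methods section.
QUERY = ('("device"[Title] OR "bite"[Title] OR "armband"[Title]) '
         'AND ("caloric intake"[Title] OR "intake"[Title])')

def esearch_url(term, mindate="2005", retmax=100):
    """Build (but do not send) an E-utilities esearch URL,
    restricted by publication date to 2005 onwards."""
    params = {
        "db": "pubmed",
        "term": term,
        "retmax": retmax,
        "datetype": "pdat",   # filter on publication date
        "mindate": mindate,
        "maxdate": "3000",    # effectively "onwards"
    }
    return ESEARCH + "?" + urlencode(params)

url = esearch_url(QUERY)
```

Fetching `url` would return an XML list of matching PMIDs, from which the device names can be extracted as described.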
A flowchart of the papers' selection process is represented in Fig. (1).

3. RESULTS

Results are shown in Table 1: a list of each device/method with its description, the experimental studies related to it, and considerations about pros and cons. Four devices (Bite-counter, AutoDietary, e-Button, and a piezoelectric sensor-based necklace) and one method for caloric assessment which requires a personal smartphone (digital photography) are described. The devices or methods were tested in experimental studies as part of a validation process, by asking participants to eat different kinds of food. The results provided an estimation of the accuracy of each tool, with considerations about limits and future developments.

Table 1. Device descriptions, experimental studies, pros and cons.

Bite-counter

Description: A bite-counter is a wearable device with an integrated tri-axial accelerometer and gyroscope. It looks like a watch and must be worn on the wrist, to record the number of bites that the user ingests during a meal. This information is extracted from the analysis of torsional wrist movement, i.e. the wrist movement occurring when the subject brings food from the plate to the mouth [6, 11, 12]. Starting from the number of bites recorded during a meal, an equation calculates the total calories ingested. An example of an algorithm for detecting the number of bites from the data collected with an inertial sensor was developed by Dong et al. [10]. Salley et al. developed a predictive equation to estimate the caloric amount associated with a single bite [6].

Experimental studies: A study by Desedorf et al. [5] assesses the effectiveness of the bite-counter in monitoring caloric intake. 15 subjects between 23 and 58 years old were invited to eat different types of food and beverages, with different table utensils and in different ways.
For each participant, the number of bites was detected both through the bite-counter and through the visual monitoring of an observer. Findings reported in Table 2 show that the electronic tool tends to underestimate the actual number of bites: this happens especially when food or drink is consumed with a spoon, with a straw or with a fork, and is related to reduced wrist rotation in some particular actions. For example, subjects who eat soup with a spoon tend to stiffen the wrist while bringing the food to the mouth, so as not to spill the soup from the spoon. The bite-counter also requires an interval of at least 8 seconds between two consecutive bites: for a person who eats quickly, all the consecutive bites made within a shorter interval are not detected. On the contrary, the bite-counter overestimates the number of bites when the meal is eaten with both fork and knife. In general, the difference between the manual measurement and the electronic one is reduced when food is eaten with the hands, for both liquid and solid foods. The predictive equation developed by Salley et al. [6] to estimate the caloric amount of a single bite is based on individual physical data, without taking into account the specific food. The multiple regression is based on the height, weight, waist-to-hip ratio, gender and age of the subject. The authors developed and validated the model by collecting data from 280 participants: the results show that the estimate of caloric intake with the equation was more accurate than the human-based estimation, with an average estimation error which went from -257.36 ± 790.22 kcal in the case of human estimation to 71.21 ± 562.14 kcal using the bite-based method.

Pros and cons: Thanks to its design and comfort, the Bite-counter does not affect eating gestures and can be easily managed by the subjects. Nevertheless, as reported in some validation studies [4-6], both the algorithm for bite detection and the equation for caloric estimation lack accuracy.
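The shape of such a bite-based estimate can be sketched as follows. Note that the coefficients below are invented placeholders for illustration only, not the published regression weights from Salley et al.:

```python
def kcal_per_bite(height_cm, weight_kg, waist_hip_ratio, male, age):
    """Per-bite energy estimate in the spirit of Salley et al.'s
    regression on individual characteristics.

    ALL COEFFICIENTS BELOW ARE HYPOTHETICAL, chosen only to show
    the structure: a linear combination of height, weight,
    waist-to-hip ratio, gender and age.
    """
    return (2.0
            + 0.05 * height_cm
            + 0.04 * weight_kg
            + 3.00 * waist_hip_ratio
            + 1.50 * (1 if male else 0)
            - 0.02 * age)

def meal_kcal(bites, **person):
    """Total meal energy: bites counted by the device times the
    person-specific per-bite estimate."""
    return bites * kcal_per_bite(**person)
```

The key design point, visible even in this toy version, is that the estimate depends only on who is eating and how many bites they took, not on what food is on the plate, which is exactly the limitation the review highlights.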
Future research should focus on the development of an algorithm for the bite-counter which takes into account the numerous and complex gestures performed by the wrist while eating, and of a prediction equation for caloric estimation which includes both individual features and food type [4].

AutoDietary

Description: AutoDietary is an electronic device for food recognition, based on the distinction between the chewing and swallowing sounds produced while eating. The system is composed of a necklace, which records the sounds of mastication through acoustic sensors, and a smartphone or tablet application, which allows data transmission and elaboration and gives the user information about the food type.

Experimental studies: The study of Bi et al. [13] aims to evaluate the efficacy of the AutoDietary system in recognising different types of food and drink. 12 participants were invited to eat seven kinds of food of different consistency. The experiment was performed in a laboratory with low noise. The effectiveness of the instrument was assessed by considering the accuracy in the detection of the event, i.e. the sequence of chewing/swallowing sounds alternated with periods of silence, and the accuracy in the detection of the type of food, as represented in Table 3.

Pros and cons: AutoDietary is effective in food recognition, but it does not give information about caloric intake. Accuracy in food recognition is also influenced by the collar: it should fit snugly to the neck, otherwise the detection is incorrect or incomplete; for example, a participant with a low BMI reported a low percentage of accuracy. The effectiveness in recognising the type of food also depends on the bite size: the smaller the food, the harder its recognition.
Description (continued): The rationale of this device is the idea that foods can be identified by the energy exerted in their mastication: the microphone placed at the neck, close to the jaw, converts superficial skin vibrations into acoustic signals. Following the algorithm used by Bi et al. [13], this signal is analysed by distinguishing between chewing and swallowing events and extracting more than 30 features representing its acoustic characteristics. An algorithm based on a light-weight decision tree is used to recognise the type of food intake. The results show that AutoDietary has a high performance in food recognition, especially in distinguishing solid from liquid food.

Experimental studies (continued): The study of Fontana et al. [9] tested the effectiveness of an energy intake detection model based on Counts of Chews and Swallows (CCS). For the creation of the CCS model, the numbers of chews and swallows were extracted for each subject using a throat microphone, in order to estimate the average mass of liquid and solid food ingested per chew/swallow: these parameters were used to create predictive models estimating the intake of solid and liquid food. The model was then validated in a second step, by verifying the ability of the method to generalise the predictive estimates to foods other than those used for the initial test. The energy content of each food and the caloric intake per meal were calculated by multiplying the estimated mass of food, obtained from the count of chews and swallows, by the caloric density extracted from nutritional analysis. The results show that the method still commits some errors, but these might be compensated by adding correction factors to the model. Nevertheless, this study suggests a promising alternative to diet diaries.

Pros and cons (continued): The results of the questionnaires completed by the participants in the study of Bi et al.
[13] showed that the device was acceptable to most users in terms of comfort and functionality. The instrument was tested (and is used) only with a limited number of foods. For these reasons, the developers plan to further improve the device by performing experimental tests in a real-life environment, considering a larger number of foods, and enhancing the device's capabilities by adding the ability to estimate caloric intake through detection of the volume and weight of the food ingested.

Piezoelectric sensor-based necklace

Description: The piezoelectric sensor-based necklace records the mechanical stress caused by the swallowing action. It detects skin vibrations in the lower part of the trachea at the moment of food passage. Data from the sensor are then sent via Bluetooth to a smartphone app. There are two versions of the collar. The "sportband-style" collar sacrifices appearance for higher sensor stability: because of its closer adherence to the neck, it prevents the lateral motions which can lead to false positives. For this reason, this kind of instrument is more suitable for clinical and scientific environments. Otherwise, a more comfortable "pendant-based" version is available, with lower neck adherence but higher sensitivity to movements not related to eating [8]. The device is able to recognise the ingestion of solid and liquid foods by detecting the movements occurring between two swallows: a swallow preceded by numerous chewing movements indicates the ingestion of solid food; a series of consecutive swallows without such movements indicates the ingestion of liquids. The algorithm for food recognition developed by Kalantarian et al. [8] performs swallow identification through the analysis of the voltage signal coming from the piezoelectric sensor. To perform activity recognition, the authors added a tri-axial accelerometer to the necklace, in order to detect movements of the upper body which might be mistaken for swallows.

Experimental studies: Kalantarian et al.
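The chew/swallow pattern rule described above can be sketched as a simple event-stream classifier. This is a simplified illustration of the decision rule only, not the authors' voltage-signal processing:

```python
def classify_intake(events):
    """Label each swallow as 'solid' or 'liquid' from the pattern
    the necklace observes: chews before a swallow imply solid food,
    a swallow with no preceding chews implies a liquid.

    `events` is a chronological list of "chew" / "swallow" tokens.
    """
    labels = []
    chews_since_last_swallow = 0
    for ev in events:
        if ev == "chew":
            chews_since_last_swallow += 1
        elif ev == "swallow":
            labels.append("solid" if chews_since_last_swallow > 0
                          else "liquid")
            chews_since_last_swallow = 0  # reset for the next swallow
    return labels
```

For example, the stream chew-chew-swallow-swallow-swallow yields one solid swallow followed by two liquid ones, matching the description in the text.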
[8] performed an experimental study on the effectiveness of the collar with built-in piezoelectric sensor and accelerometer. 30 subjects between 22 and 34 years old were recruited; all participants were invited to eat foods with different textures, wearing the sportband collar with the accelerometer. Results reported in Table 4 show the accuracy in recognising the type of food, where accuracy was defined as the percentage of swallows correctly identified. A strong presence of false positives was observed when the instrument was not associated with the accelerometer: in this case, the percentage of movements incorrectly classified as swallowing rises to 18% with a head rotation to the left, 46% with a head rotation upward and 8% during walking.

Pros and cons: The piezoelectric sensor-based necklace is a low-cost device for food recognition. Thanks to the interface with the smartphone app, it can easily provide the user with information about food consumption, but it does not give information about caloric intake. Results showed that the tight configuration is too uncomfortable to be worn for more than a few minutes: tightening the necklace restricts the movements of the piezoelectric sensor, thus decreasing the sensitivity of detection. On the contrary, the loose configuration causes too many fluctuations in the signal, making the results unusable. For these reasons, a compromise between comfort and accuracy must be found; the use of an accelerometer might help in this respect.

Table 1 (continued).

E-Button

Description: E-Button is a wearable computer collecting and processing data about physical activity and diet. Because of its small dimensions and weight, it can be worn on the chest.
It contains two wide-angle cameras with stereo and depth information, a UV sensor which distinguishes indoor from outdoor places, an IMU (Inertial Measurement Unit) composed of a 3-axis accelerometer, a 3-axis gyroscope and a magnetometer, an audio processor, a proximity sensor to record arm/hand motion in front of the chest, a barometer measuring the distance from the device to the floor, and a GPS. It can communicate wirelessly via Wi-Fi or Bluetooth with a smartphone or tablet. When used as a dietary device, the e-Button camera collects images at a set rate. These pictures are analysed to determine the type of food that the user is eating, through computer-vision techniques. First, the shapes of utensils are detected; then food regions are segmented based on colour, texture and a complexity measure. Next, the volume of each dish is determined by comparison with food-specific shape models. After detecting the food and its volume, calorie information is extracted from a database such as the Food and Nutrient Database for Dietary Studies (FNDDS) [14].

Experimental studies: The study proposed by Jia et al. [15] aims to evaluate the accuracy of the e-Button in determining the volume of food. 7 participants were involved in the study. The volume of each food was measured with a validated method (the seed displacement method). Two different methods of detecting food volume from the same images captured by the e-Button were compared: the computer-based estimation, and a visual estimation performed by raters observing the same images analysed by the software. The mean difference from the actual volume (measured with seed displacement) was -5% (SD 21%) for the automatic detection and -15.5% (SD 41.4%) for the visual detection. The results show that the volume of food can be automatically calculated from an image with a higher level of accuracy than by visual estimation.
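The last step of this pipeline, from estimated volume to calories, can be sketched as follows. The density and caloric-density values below are invented placeholders; a real system would query the FNDDS:

```python
# Hypothetical per-food nutritional data; a real implementation would
# look these up in the Food and Nutrient Database for Dietary Studies.
FOOD_DB = {
    #            g per ml,  kcal per g
    "rice":     (0.85,      1.30),
    "soup":     (1.00,      0.35),
}

def calories_from_volume(food, volume_ml):
    """Convert an image-derived food volume (ml) into energy (kcal)
    via the food's density and caloric density."""
    g_per_ml, kcal_per_g = FOOD_DB[food]
    mass_g = volume_ml * g_per_ml
    return mass_g * kcal_per_g
```

The interesting accuracy question is upstream of this arithmetic: as the Jia et al. study shows, the quality of the final calorie figure hinges on how well the volume itself was estimated from the segmented image.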
The e-Button thus provides an objective and accurate measurement of food intake. It allows semi-automatic calculation of food volume and does not require excessive effort from the user. Moreover, its dimensions and position on the body allow it to be worn easily in different situations without affecting users' habits; for this reason, it can be used in different contexts, such as physical activity, sedentary behaviour and support for blind people. Nevertheless, some issues remain open. The device does not correctly identify all types of food, such as beverages, condiments, hidden ingredients or cooking methods. Image quality depends on the lighting in the room, the angle of the camera and the presence of foods with complex or confusing shapes [15]. Finally, the high level of technology contained in the e-Button makes it considerably more expensive than other devices for food recognition. The digital photography of foods method was used in cafeteria settings to analyse the nutrients and daily calories consumed by clients. In this method, digital video cameras placed in the environment capture images of people's food selections and of the food remaining on their plates (leftovers). Images of weighed standard portions are also collected and linked to the information in the Food and Nutrient Database for Dietary Studies (FNDDS), providing nutrient and calorie intake [16, 17]. The Remote Food Photography Method (RFPM) uses digital photography in free-living situations: it is a semi-automatic system in which the subject captures a picture of their dishes with their smartphone, before and after the meal. These images are sent in real time through a wireless network to the Food Photography Application©, where they are compared to other images of dishes of equivalent portions and the same type of food contained in a database. Information about energy and nutrient intake is extracted from FNDDS [16]. Martin et al.
conducted a study to assess the reliability of the RFPM in estimating daily caloric intake, comparing meals consumed in a laboratory setting with meals consumed in an uncontrolled environment [18]. Fifty participants were instructed to photograph their food with their smartphone and send the image to the Food Photography Application©, which returns information about caloric intake and nutrients. In this case too, the analysis of the food was performed by comparing the photographs taken by the participants with images catalogued in a central archive. The archive consists of an ordered collection of thousands of types of foods associated with all the information necessary to estimate caloric quantity, namely:
• the size of a standard portion, useful for estimating the size of the actual portion of food;
• the quality of the nutrients, available through the information gathered in the FNDDS.
Results revealed that caloric intake estimated through the RFPM did not significantly underestimate the actual amount of energy for laboratory meals (−5.5%), whereas for meals consumed in an uncontrolled environment the underestimation increased and became significant (−6.6%). Overall, the RFPM underestimates intake by an average of 97 kcal per day; however, this is a smaller error than that of other detection methods [16]. The digital photography of foods method shows variability similar to that of visual estimation [17], but it is still not applicable in free-living conditions. Studies on the RFPM [18, 19] showed that the method is accurate in estimating caloric intake in people's natural environment. It was tested on various types of samples, including a population of mothers reporting the daily caloric intake of their children [20].
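The error measures reported for the RFPM above (signed percentage error and mean daily bias in kcal) can be computed as below; all intake values here are invented for illustration:

```python
def percent_error(estimated_kcal, actual_kcal):
    """Signed percentage error of estimated vs. actual intake."""
    return 100.0 * (estimated_kcal - actual_kcal) / actual_kcal

def mean_daily_bias(estimated, actual):
    """Mean difference (kcal/day) between estimated and actual daily intakes."""
    return sum(e - a for e, a in zip(estimated, actual)) / len(actual)

# Illustrative values only:
print(percent_error(1890.0, 2000.0))                # -5.5
print(mean_daily_bias([1900, 2100], [2000, 2194]))  # -97.0
```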
The method depends heavily on user intervention: individuals may forget to take pictures of foods, or they may misplace the smartphone. For these reasons, alarm/reminder systems have been designed [18]. Moreover, digital images must be of sufficient quality to be analysable, so subjects must be instructed beforehand. Finally, certain types of food, such as seasonings, are difficult to analyse and often do not allow precise calculation of energy intake.
Wearable Devices for Caloric Intake The Open Nursing Journal, 2017, Volume 11 237
Fig. (1). Flowchart of the review method.
Table 2. Accuracy of the Bite Counter in bite detection [5].
Food (utensil): Accuracy
Meat (fork and knife): 127%
Sides (fork): 82.6%
Soup (spoon): 60.2%
Pizza (hands): 87.3%
Soda (hands): 81.7%
Smoothie (straw): 57.7%
Total: 81.2%
Table 3. Accuracy of AutoDietary in food recognition [13].
Event/food: Accuracy
Chewing-swallowing: 86.6%
Apple: 86.3%
Carrot: 84.9%
Biscuit: 82.9%
Chips: 87.7%
Walnuts: 75.5%
Peanuts: 83.4%
Water: 93.3%
Table 4. Accuracy of the piezoelectric sensor-based necklace with accelerometer in food recognition [8].
Food: Accuracy
Water: 81.4%
Sandwich: 84.5%
Chips: 85.3%
4. DISCUSSION
Among devices containing inertial sensors (accelerometers and gyroscopes), capable of detecting the gesture of a body part (arm or wrist) in a given space, the Bite Counter measures both the vertical movement of the arm from the top downwards and the rotational movement of the wrist. The gesture is translated into a number of bites, and these are translated into energy intake through algorithms whose rationale is based on the existence of a relationship between the number of bites and the number of calories eaten.
In fact, although the instrument cannot detect the type of food or the quality of the nutrients, and calories per bite vary according to the food eaten, the number of bites remains positively correlated with the calories an individual consumes [4]. The Bite Counter is one of the most promising self-monitoring tools for energy intake detection, in both healthy subjects and patients, thanks to its convenience and simplicity. Its use is already widespread and implemented in clinical and non-clinical settings [6, 11, 12]. Nevertheless, some limitations of the Bite Counter must be acknowledged. First, it is susceptible to movements that resemble typical eating gestures and tends to register movements of a different nature as bites. Second, it requires an active role from the subject, who must activate the tool and place it on the dominant hand during the meal. Last, the Bite Counter does not provide any information about the quality of the food consumed [5, 6]. Given its high potential for use in health research, the Bite Counter needs some improvements for more accurate and reliable detection of caloric intake. Electronic devices that detect swallowing and/or chewing sounds are often very accurate in identifying the type of food, although they can recognize only a limited number of foods [7] and do not provide information on caloric intake. In more recent studies they have been combined with other devices, such as motion and proximity sensors [8, 9], which are useful for reducing potential errors caused by movements of a different nature and for obtaining additional information. The biggest drawback of these tools is that they are uncomfortable to wear: in most cases they must be worn as collars, and they require careful use by the subject, who has to minimize noise and accessory movements. Finally, devices that capture digital images of foods before and after eating allow caloric intake to be estimated from analysis of the pictures.
Considering calorie monitoring in free-living conditions, the Remote Food Photography Method (RFPM) is a semi-automatic method that is still evolving: it works by matching food images by type and portion. Software then calculates the calories and analyses the quality of the nutrients consumed, providing a wide range of information useful to both the subject and the researcher. The image technique takes advantage of common technologies, such as smartphones, cross-referencing data from multiple databases containing essential information on foods. However, detection is often affected by the quality of the image and the position of the food, and considerable cooperation is required from the subject.
CONCLUSION
Self-monitoring of caloric intake becomes more urgent as poor eating habits in developed countries increase. Moreover, health problems like obesity, diabetes and other diseases with a high need for dietary assistance require instruments for closer control of caloric intake. New technologies based on wearable devices allow people to monitor the energy and caloric content of their meals. Different methods are available, but all of them are still works in progress. Wrist devices based on arm movement and bite counting seem to be the most comfortable, cheap and easy to use, but they still depend on the correlation between the number of bites and the calories ingested. Collars with sensors for detecting chewing and swallowing have shown high accuracy, but they lack comfort and still give no information about caloric intake. Food photography methods have shown accuracy in estimating the caloric content of foods and are very accessible through commonly used instruments such as smartphones, but they are strictly bound to user intervention and image quality. For these reasons, further research is needed in this field.
LIST OF ABBREVIATIONS
NCDs = Non-communicable diseases
WHO = World Health Organization
CONSENT FOR PUBLICATION
Not applicable.
CONFLICT OF INTEREST
The authors declare no conflict of interest, financial or otherwise.
ACKNOWLEDGEMENTS
Declared none.
REFERENCES
[1] Goris AH, Meijer EP, Westerterp KR. Repeated measurement of habitual food intake increases under-reporting and induces selective under-reporting. Br J Nutr 2001; 85(5): 629-34. [http://dx.doi.org/10.1079/BJN2001322] [PMID: 11348579]
[2] Thompson FE, Subar AF, Loria CM, Reedy JL, Baranowski T. Need for technological innovation in dietary assessment. J Am Diet Assoc 2010; 110(1): 48-51. [http://dx.doi.org/10.1016/j.jada.2009.10.008] [PMID: 20102826]
[3] Lopez-Meyer P, Patil Y, Tiffany T, Sazonov E. Detection of hand-to-mouth gestures using a RF operated proximity sensor for monitoring cigarette smoking. Open Biomed Eng J 2013; 9: 41-9. [http://dx.doi.org/10.2174/1874120701307010041] [PMID: 23723954]
[4] Scisco JL, Muth ER, Hoover AW. Examining the utility of a bite-count-based measure of eating activity in free-living human beings. J Acad Nutr Diet 2014; 114(3): 464-9. [http://dx.doi.org/10.1016/j.jand.2013.09.017] [PMID: 24231364]
[5] Desendorf J, Bassett DR Jr, Raynor HA, Coe DP. Validity of the Bite Counter device in a controlled laboratory setting. Eat Behav 2014; 15(3): 502-4. [http://dx.doi.org/10.1016/j.eatbeh.2014.06.013] [PMID: 25064306]
[6] Salley JN, Hoover AW, Wilson ML, Muth ER. Comparison between human and bite-based methods of estimating caloric intake. J Acad Nutr Diet 2016; 31(10): 1568-77.
[7] Amft O, Kusserow M, Tröster G. Bite weight prediction from acoustic recognition of chewing. IEEE Trans Biomed Eng 2009; 56(6): 1663-72. [http://dx.doi.org/10.1109/TBME.2009.2015873] [PMID: 19272978]
[8] Kalantarian H, Alshurafa N, Le T, Sarrafzadeh M.
Monitoring eating habits using a piezoelectric sensor-based necklace. Comput Biol Med 2015; 58: 46-55. [http://dx.doi.org/10.1016/j.compbiomed.2015.01.005] [PMID: 25616023]
[9] Fontana JM, Higgins JA, Schuckers SC, et al. Energy intake estimation from counts of chews and swallows. Appetite 2015; 85: 14-21. [http://dx.doi.org/10.1016/j.appet.2014.11.003] [PMID: 25447016]
[10] Dong Y, Hoover A, Muth E. A device for detecting and counting bites of food taken by a person during eating. In: Bioinformatics and Biomedicine, 2009. BIBM'09. IEEE International Conference on: IEEE; 2009. [http://dx.doi.org/10.1109/BIBM.2009.29]
[11] Turner-McGrievy B. Using Bite Counter for weight loss: A one-month usability trial to test the effectiveness of using the Bite Counter (Bites). ClinicalTrials.gov. 2016.
[12] O'Neil P. Assessing the Bite Counter. Available at: ClinicalTrials.gov.
[13] Bi Y, Lv M, Song C, Xu W, Guan N, Yi W. AutoDietary: A wearable acoustic sensor system for food intake recognition in daily life. IEEE Sens J 2016; 16: 806-16. [http://dx.doi.org/10.1109/JSEN.2015.2469095]
[14] Sun M, Burke LE, Mao Z-H, et al. eButton: a wearable computer for health monitoring and personal assistance. In: Proceedings of the 51st Annual Design Automation Conference; pp. 1-6. [http://dx.doi.org/10.1145/2593069.2596678]
[15] Jia W, Chen H-C, Yue Y, et al. Accuracy of food portion size estimation from digital pictures acquired by a chest-worn camera. Public Health Nutr 2014; 17(8): 1671-81. [http://dx.doi.org/10.1017/S1368980013003236] [PMID: 24476848]
[16] Martin CK, Nicklas T, Gunturk B, Correa JB, Allen HR, Champagne C. Measuring food intake with digital photography. J Hum Nutr Diet 2014; 27(Suppl. 1): 72-81. [http://dx.doi.org/10.1111/jhn.12014] [PMID: 23848588]
[17] Williamson DA, Allen HR, Martin PD, Alfonso A, Gerald B, Hunt A. Digital photography: a new method for estimating food intake in cafeteria settings. Eat Weight Disord 2004; 9(1): 24-8.
[http://dx.doi.org/10.1007/BF03325041]
[18] Martin CK, Han H, Coulon SM, Allen HR, Champagne CM, Anton SD. A novel method to remotely measure food intake of free-living individuals in real time: The remote food photography method. Br J Nutr 2009; 101(3): 446-56. [http://dx.doi.org/10.1017/S0007114508027438] [PMID: 18616837]
[19] Martin CK, Correa JB, Han H, et al. Validity of the remote food photography method (RFPM) for estimating energy and nutrient intake in near real-time. Obesity (Silver Spring) 2012; 20(4): 891-9. [http://dx.doi.org/10.1038/oby.2011.344] [PMID: 22134199]
[20] Martin CK, Newton RL Jr, Anton SD, et al. Measurement of children's food intake with digital photography and the effects of second servings upon food intake. Eat Behav 2007; 8(2): 148-56.
[http://dx.doi.org/10.1016/j.eatbeh.2006.03.003] [PMID: 17336784]
© 2017 Magrini et al. This is an open access article distributed under the terms of the Creative Commons Attribution 4.0 International Public License (CC-BY 4.0), a copy of which is available at: https://creativecommons.org/licenses/by/4.0/legalcode. This license permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Wearable Devices for Caloric Intake Assessment: State of Art and Future Developments
Quantifying phenotype-environment matching in the protected Kerry spotted slug (Mollusca: Gastropoda) using digital photography: exposure to UV radiation determines cryptic colour morphs
Aidan O'Hanlon1, Kristina Feeney1, Peter Dockery2 and Michael J. Gormally1
Abstract
Background: Animal colours and patterns commonly play a role in reducing detection by predators, social signalling or increasing survival in response to some other environmental pressure. Different colour morphs can evolve within populations exposed to different levels of predation or environmental stress and in some cases can arise within the lifetime of an individual as the result of phenotypic plasticity.
Skin pigmentation is variable in many terrestrial slugs (Mollusca: Gastropoda), both between and within species. The Kerry spotted slug Geomalacus maculosus Allman, an EU-protected species, exhibits two distinct phenotypes: brown individuals occur in forested habitats, whereas black animals live in open habitats such as blanket bog. Both colour forms are spotted, and each type strongly resembles the substrate of its habitat, suggesting that G. maculosus possesses camouflage.
Results: Analysis of digital images of wild slugs demonstrated that each colour morph is strongly and positively correlated with the colour properties of the background in its own habitat but not with the substrate of the alternative habitat, suggesting habitat-specific crypsis. Experiments were undertaken on laboratory-reared juvenile slugs to investigate whether ultraviolet (UV) radiation or diet could induce colour change. Exposure to UV radiation induced the black (bog) phenotype, whereas slugs reared in darkness did not change colour. Diet had no effect on juvenile colouration. Examination of skin tissue from specimens exposed to either UV or dark treatments demonstrated that UV-exposed slugs had significantly higher concentrations of black pigment in their epithelium.
Conclusions: These results suggest that colour dimorphism in G. maculosus is an example of phenotypic plasticity explained by differential exposure to UV radiation. Each resulting colour morph provides incidental camouflage against the different coloured substrate of each habitat. This is, to our knowledge, the first documented example of colour change in response to UV radiation in a terrestrial mollusc. Pigmentation appears to be correlated with a number of behavioural traits in G. maculosus, and we suggest that understanding melanisation in other terrestrial molluscs may be useful in the study of pestiferous and invasive species. The implications of colour change for G. maculosus conservation are also discussed.
Keywords: Camouflage, Mollusc, Phenotypic plasticity, Pigmentation, UV radiation, Animal colouration, Digital photography, Disruptive patterning, Gastropoda, Polyphenism, Slug, Terrestrial mollusc, Visual predation
* Correspondence: a.ohanlon4@nuigalway.ie
1Applied Ecology Unit, School of Natural Sciences, National University of Ireland Galway, Galway, Ireland. Full list of author information is available at the end of the article.
© The Author(s). 2017 Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.
O'Hanlon et al. Frontiers in Zoology (2017) 14:35 DOI 10.1186/s12983-017-0218-9
Background
The ability of many animal species to accurately match surrounding habitat features (i.e. camouflage) has been held up as strong evidence of natural selection since Darwin's time [1]. Intraspecific phenotypic variation is common within many animal populations, and a consistent pattern of phenotype-environment matching and disruptive markings can be considered an indication that a particular phenotype is adaptive [2].
Such patterns of phenotype-environment matching in animal colouration are most often discussed in the context of predator avoidance, and it has been shown that disruptive patterns, as well as background colour matching, can prove highly effective in concealing an animal [3]. The role of predation in maintaining variation in cryptic prey colour morphs and patterns is therefore relatively well understood [4]. Apart from concealment from predators, however, animal colours and patterns may serve other functions such as thermoregulation or social signalling. Thus, not all apparently cryptic morphs are the sole result of directional selection by predators over many generations; for some species, external cues can stimulate developmental or behavioural mechanisms which lead to rapid adaptation within the lifetime of an individual (i.e. phenotypic plasticity). Plasticity in animal crypsis has been demonstrated in response to a wide range of cues such as diet [5, 6], seasonal temperature changes [7], illumination and visual background properties [8, 9], detection of a possible predation threat [10], and ultraviolet (UV) radiation [11, 12]. Colour variation appears to be maintained by frequency-dependent selection in many gastropod species (e.g. in land snails [13, 14]; in littorinid snails [15, 16]). Differences in gastropod colour morphs can also arise as adaptations to climatic selection, where relatively darker or lighter shell and skin pigmentation evolves within populations exposed to cooler or warmer climates, respectively (e.g. in littorinids [17, 18]; in Western Irish Cepea nemoralis L [19]; and in horn snails [20]). Skin colouration in terrestrial slugs can be highly variable, even within populations [21]. The mechanisms involved in determining slug colouration most likely vary with species, and local adaptations may arise within populations of slugs exposed to different environmental conditions.
Pigmentation has been explained as a simple Mendelian-inherited trait for some species of slug [22, 23]. Skin colouration and mottling (possibly functioning as crypsis) has been shown to be under polygenic control in Limax flavus L and Limacus maculatus Kaleniczenco [24]. Colour variation can also arise as a result of hybridization with closely related species (e.g. in Arion ater L and Arion rufus L [25]). However, there is also evidence that different colour morphs in slug species can arise as local adaptations to different environmental conditions. While Evans [25], for example, found similar isozyme profiles between colour morphs of Arion ater agg., Taylor [21], in a robust survey of the malacofauna of Ireland and Britain, observed that dark colour morphs were associated with higher altitudes and cooler, wetter climates, suggesting climatic selection for pigmentation. Chevallier [26] provided further evidence of climatic selection for A. ater agg. and observed a similar pattern for Arion lusitanicus Mabille, with darker morphs prevailing at altitudes above 500 m. Jordaens et al. [27], on the other hand, demonstrated that diet can also influence skin pigmentation in F1 offspring of three species of arionid slug, resulting in a loss of 'species-specific' colour characteristics. The Kerry spotted slug Geomalacus maculosus (Allman) is unusual in that it appears to possess disruptive patterning as well as background-matching colouration, which may provide different degrees of camouflage in alternative environments. This EU-protected species is associated with forested and open habitats (blanket bog and mountain heath) in Ireland, where it occurs in one of two distinct colour morphs. In forested habitats G. maculosus generally possesses a hazel brown to ginger brown body colour with white and yellowish spots which appear to accurately match the moss and lichen-covered bark of trees.
In open habitats, on the other hand, the slug generally possesses a dark blue-grey to black body colour with white spots, where it seems to accurately match the lichen-covered boulder outcrops which it uses for shelter and feeding [21, 28–31]. Little is known about the population structure of G. maculosus and whether this affects colouration. Reich et al. [32] studied the population genetics of G. maculosus, on which basis they suggested that the species originated in northern Spain 15 Myr ago. No differences in 16S rRNA or COI genes were found between black and brown colour morphs (I. Reich, pers. comm.). Furthermore, newly hatched juveniles are brown in both forested and open habitats [31], suggesting that body colouration in this species may be a plastic trait. This study was undertaken to test the hypothesis that each colour morph of G. maculosus is habitat-specific and provides camouflage. We used standardized digital photography to quantify widespread phenotype-environment matching in this internationally important mollusc species. Experiments were also conducted with laboratory-reared juvenile slugs to determine whether different environmental conditions (UV exposure or diet) could induce colour change and whether this may help to elucidate the function of each colour morph observed in the wild. The study will also help inform the debate regarding potential translocation of the species as a possible mitigation measure within the Environmental Impact Assessment process.
Methods
Study sites and animal sampling
Free-living slugs were photographed at six sites in Ireland across the western counties of Galway, Kerry and Cork (Fig. 1). Sites were selected on the basis that previous surveys had found high numbers of Kerry slugs in these areas [30, 33, 34, 35]. Forested sites were a combination of conifer plantations and oak woodlands (Fig.
1: sites 1, 3 and 5), and open habitats surveyed were blanket bog areas (Fig. 1: sites 2, 4 and 6). Refuge traps (De Sangosse, France) were used to collect slugs. These traps are 50 cm × 50 cm sheets of absorbent material covered with a reflective upper surface and a perforated dark lower surface which maintains a damp, cool environment beneath the trap. This method has been shown to be an effective technique for live-trapping G. maculosus [36]. Traps were placed on Q. petraea Liebl tree trunks (n = 4) at breast height (approx. 1.5 m) at site 5; on sandstone boulder outcrops at sites 4 (n = 4) and 6 (n = 4); and on granite outcrops at site 2 (site descriptions given in Additional file 1: Table S1). Each trap was checked for slugs one month after it had been set. Traps on Sitka spruce Picea sitchensis Carr. trees at sites 1 and 3 had been set previously by other researchers in the Applied Ecology Unit, NUI Galway, as part of separate projects [34, 35]. Each trap was only checked once for slugs to avoid pseudoreplication, except at site 1. Slugs at this site were removed from tree trunks after they had been photographed for use in a separate behavioural study, so multiple trips were possible since we could be certain that we were not photographing the same individuals. Any additional slugs found near the traps were also photographed, included in colour analyses and subsequently removed from the site for use in a separate study. Slugs were collected with permission from the National Parks and Wildlife Service, Department of Arts, Heritage and the Gaeltacht (Licence No. C097/2015).
Quantification of G. maculosus colour using digital photography
Digital photography was used to estimate the degree of phenotype-environment matching of adult slugs at each site, following a simplified version of the methodology outlined in Stevens et al. [37].
A colour-checker card (X-Rite, Munsell Color Laboratories) was used to standardize the reflectance values obtained from digital photographs. The card consists of 24 coloured squares which are manufactured to represent common natural colours, plus a six-step greyscale from white to black. The 'adjacent' method validated by Bergman and Beehner [38] was used, whereby slugs were photographed in the same image as the colour-checker card. The camera used was a Nikon D3000 digital single-lens reflex camera with a pixel count of 10.2 megapixels and full control over metering and exposure. All photos of wild slugs in this study were taken at F-stop f/5.6 with a shutter speed of 1/100 s, ISO 800. Images were saved to the camera memory card as uncompressed Nikon Electronic Format (NEF) raw image files. After transferring all files to a computer, Adobe Photoshop was used to convert the raw NEF files to 8-bit Tagged Image File Format (TIFF) files, for compatibility with GIMP 2.0 image processing software. White balance was corrected in GIMP 2.0 to standardize each photo with reference to the white square of the colour-checker card, such that this white square was equal to a reflectance score of 255 in the R, G and B colour channels (i.e. 'true' white) for each image. To quantify the colouration of individual slugs, a 1 cm × 1 cm square was drawn over the mantle of the slug in each image. Mean R, G and B reflectance values (calculated in-program by GIMP 2.0) were then recorded per square. To measure substrate colouration, three additional 3 cm × 3 cm squares were placed over background substrate in the photographs alongside each animal, and the mean R, G and B reflectance scores of these three squares were calculated in-program. This was to determine whether G. maculosus phenotypes match a random sample of their background substrate (Fig. 2). The same method was used to quantify the colour of juvenile slugs in UV-exposure and feeding experiments (outlined below).
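The white-balance standardization and ROI averaging described above amount to a simple per-channel rescaling: each channel is scaled so the card's white square maps to 255, then mean R, G and B values are taken over a square region of interest. A minimal sketch, with made-up pixel values standing in for an image:

```python
def white_balance(pixel, white_patch_mean):
    """Scale an (R, G, B) pixel so the card's white patch maps to 255 per channel."""
    return tuple(min(255.0, 255.0 * c / w) for c, w in zip(pixel, white_patch_mean))

def mean_rgb(pixels):
    """Mean R, G and B reflectance over all pixels in one region of interest."""
    n = len(pixels)
    return tuple(sum(p[i] for p in pixels) / n for i in range(3))

# Illustrative values: two pixels of a mantle ROI and the measured mean of
# the card's white square (all invented for this sketch).
roi = [(100, 120, 90), (110, 130, 100)]
white = (230.0, 240.0, 250.0)
balanced = [white_balance(p, white) for p in roi]
print(mean_rgb(balanced))
```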
Due to the small body size of newly hatched slugs, a 0.5 cm × 0.5 cm square was instead used to calculate mean R, G and B values over the mantle of juvenile slugs. Juvenile slugs were photographed once per month in the laboratory at F-stop f/4 with a shutter speed of 1/100 s, ISO 800.
Fig. 1 Partial map of Ireland showing survey sites (black circles) inside the distribution range of G. maculosus (shaded area). Conifer (1) and blanket bog (2) habitats at Oughterrard, Co. Galway; conifer habitat at Tooreenafersha, Co. Kerry (3); blanket bog habitat adjacent to Uragh woods, Co. Kerry (4); oak forest habitat at Glengarriff Nature Reserve, Co. Cork (5); and blanket bog at Leahill Bog, Co. Cork (6). Black squares show the locations of Galway, Cork and Dublin cities for reference. Map modified from G. Kindermann, 2016©
Effect of UV-exposure, darkness and diet on pigmentation
In addition to studying habitat-phenotype matching in wild-caught adult slugs, experiments were performed with newly hatched juveniles to investigate whether UV exposure or diet can influence body colouration. To investigate whether UV radiation could affect pigmentation in G. maculosus, hatchlings (n = 30) were randomly assigned to either a UV or a 'darkness' treatment. For UV treatments, juveniles (n = 15) were kept in the laboratory in a clear plastic container (37 cm × 30 cm × 25 cm) fitted with a cold 13-watt fluorescent UVB bulb (Exo Terra, Canada). The bulb was fixed in the container lid at a distance of 25 cm from the container floor, resulting in a mean UVB irradiance of approximately 25 μW/cm2 (estimated from information provided with purchase of the bulb) at the container floor. The floor of each container was lined with a sheet of wet cotton wool covered with tissue paper, and each container lid was sealed with ParaFilm® to prevent the slugs from dehydrating.
A small shelter constructed from laminated cardboard was placed on one side of the container and food (porridge oats) was placed on the opposite side. This was to ensure that the slugs had shelter from excessive UV radiation and that they would be forced to periodically leave the shelter to feed. The UV light was set on a 14:10 dark:light cycle, which approximated the natural photoperiod when experiments began in February 2016 in a laboratory which also received ambient light. For ‘darkness’ treatments, juvenile slugs (n = 15) were kept in identical conditions but the containers were not fitted with a UV light source and were kept in constant darkness. Juveniles for each treatment came from two egg-batches laid within the same week, which were split in half with each hatchling assigned at random to one of the two treatments. Both egg batches were laid in February 2016 by two captive adults of the brown phenotype. In feeding trials, newly hatched juveniles (n = 45) were randomly assigned to one of three plastic containers (17 cm × 11 cm × 6 cm) where they were fed one of three food types: organic carrot (n = 15 slugs), spinach (n = 15 slugs) or porridge oats (n = 15 slugs). The container floor was lined with a layer of moist tissue paper to maintain damp conditions and protect the slugs from dehydration. Cotton wool was not necessary to maintain adequate moisture in feeding trial boxes (as in the UV and darkness trials) due to their smaller size. The tissue was re-misted three times per week and decaying food was replaced as necessary (typically once per week). The containers were housed in the laboratory on a shelf approximately 5 m from a SW-facing window, providing identical natural photoperiod cues to each diet group. Juveniles used in feeding trials originated from egg batches laid in the laboratory in September 2015 by two captive adults of the brown phenotype.

Fig. 2 a Each colour morph of G.
maculosus was photographed on its natural substrate on tree trunks in forested sites (b) or on boulder outcrops in blanket bog (c) alongside a colour standard card. Reflectance in R, G and B colour channels was calculated from a 1 cm × 1 cm square drawn over the mantle to estimate slug colouration, and a further three squares measuring 3 cm × 3 cm were drawn over patches of the substrate to calculate background R, G and B reflectance. Scale bars were measured from original photographs

Stereological estimation of epithelial pigment

At the end of the UV/dark trials, all remaining slugs (n = 6 of each colour morph) were sacrificed using chloroform vapour and 1 cm × 1 cm skin samples from the slug mantle (effectively all of the mantle) were removed following Rowson et al. [31]. Skin samples were fixed in 4% paraformaldehyde before being dehydrated and embedded in paraffin wax. Sections (5 μm thick) were then cut on a Leica RM2125RT microtome and stained with hematoxylin/eosin. Transverse sections (1 per individual: 6 UV-exposed and 6 darkness-reared slugs) were imaged under a Leica DM500 microscope. Simple point-counting methods were used to estimate the volume fraction of pigment in the epithelium (mean of 16 grid samples per individual: mag. ×800). Mean epithelial thickness was also estimated (from 10 measuring points per individual: mag. ×200). Estimates of the volume of pigment per unit projected area of skin were then obtained by multiplying these two parameters [39].

Statistical analyses

To investigate whether animal colouration matches background colouration in free-living slugs in natural habitats, the mean R, G and B reflectance values from each slug were compared with those from the substrate upon which they were photographed. A one-way ANOVA was used to compare colour scores of slugs and substrate between sites of the same habitat type.
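The stereological estimate described above reduces to two simple calculations: the volume fraction of pigment is the proportion of grid points that land on pigment, and multiplying it by the mean epithelial thickness gives pigment volume per unit projected area of skin [39]. A minimal sketch, with invented counts:

```python
# Point-counting stereology sketch; the counts and thickness below are
# hypothetical, not values from the study.

def volume_fraction(points_on_pigment, total_points):
    """Point-counting estimator of the pigment volume fraction Vv."""
    return points_on_pigment / total_points

def pigment_per_unit_area(vv, mean_thickness_um):
    """Pigment volume per unit projected area (um^3 per um^2, i.e. um)."""
    return vv * mean_thickness_um

# e.g. 48 of 400 grid points on pigment across 16 grid samples,
# over an epithelium averaging 30 um thick:
vv = volume_fraction(48, 400)           # 0.12, i.e. a 12% volume fraction
print(pigment_per_unit_area(vv, 30.0))  # about 3.6 um of pigment per unit area
```

Multiplying the two estimates is what allows pigment amounts to be compared between individuals whose epithelia differ in thickness.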
Colour data of slugs and substrate were pooled for sites of the same habitat type (after it was determined that there were no significant differences between sites; Additional file 1: Table S1) and then tested for bivariate correlation using Pearson’s r. Student’s t-tests were used to test whether RGB reflectance values differed significantly between woodland and blanket bog substrate and between each of the two slug colour morphs. Student’s t-tests were also used to examine whether RGB reflectance scores, at the end of the experiment, differed between UV-treated slugs and slugs kept in darkness. Paired t-tests were used to test whether juvenile slugs differed in RGB reflectance scores after feeding trials were concluded. Data from stereological estimation of the % volume fraction of black pigment in epithelial sections were not normally distributed. A Mann-Whitney U test was therefore used to examine whether the % volume fraction of black pigment differed significantly between UV-irradiated and dark-reared slugs. A one-way ANOVA was used to test whether juvenile slugs differed in reflectance of R, G and B colour scores at the beginning of UV-exposure trials and feeding trials (to account for any potential variation between newly-hatched individuals). Graphs were prepared and statistical tests were carried out using SPSS (IBM, USA).

Results

Quantification of G. maculosus colour using digital photography

In total, 124 slugs were sampled from forested sites and 71 slugs were sampled from blanket bog sites. Colour scores did not differ significantly for slugs or substrate between sites of the same habitat type (with the exception of the B reflectance from forest slugs; Additional file 1: Table S2). Slugs from forest site 3 showed significantly lower mean B reflectance scores than slugs from both of the other forested sites.
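The slug-versus-substrate correlations described above were computed in SPSS; the sketch below re-implements the underlying statistic (Pearson’s r) so the calculation is explicit. The paired reflectance values are invented for illustration.

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation between paired samples x and y."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical mean R reflectance of five slugs and their substrates;
# closely tracking values give r near +1, i.e. strong background matching.
slug_r = [110, 95, 120, 105, 88]
substrate_r = [108, 97, 118, 102, 90]
print(round(pearson_r(slug_r, substrate_r), 3))
```

A value of r near +1 across pooled images is what the habitat-matching analysis in this section is testing for.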
Given that subsequent analysis involved pooling of data, this site was removed from the data set; an explanation of why the B reflectance was significantly different in slugs from forest site 3 is given in the discussion. Colour scores of slugs and of substrates were pooled by habitat type (forest or bog). The R, G and B reflectance values of slugs and substrate were strongly and positively correlated for both forest and blanket bog habitats (Fig. 3). Mean R, G and B reflectance values of slugs differed significantly between habitats, as did those of substrate (Table 1). R, G and B reflectance values did not differ significantly between slugs and substrate from the same habitat type but did differ significantly between slugs and substrate from the alternative habitat (i.e. between slugs from forest habitats and substrate from bog habitats, and between slugs from bog habitats and substrate from forest habitats; Table 2).

Effect of UV-exposure, darkness and diet on pigmentation

The R, G and B reflectance scores did not differ significantly between newly-hatched slugs before each experimental treatment (i.e. between UV and darkness treatments, and between different diet treatments; Additional file 1: Table S3). After a period of 140 days, UV-irradiated slugs displayed significantly lower R, G and B reflectance scores than when they first hatched (Fig. 4), whereas slugs reared in darkness did not differ significantly in colour reflectance scores after the same time period (Table 3). Slugs reared on different diets under laboratory conditions also exhibited significantly lower colour reflectance scores at the end of feeding trials (84 days) than when they first hatched (Table 3). However, colour reflectance scores did not differ significantly between diet treatments after a period of 84 days (comparable only for carrot and oat diets, since juveniles reared on spinach died after 56 days; Fig. 5).
Stereological estimation of epithelial pigment

Estimates of the volume fraction of black pigment (Fig. 6) were significantly greater in epithelial sections of UV-irradiated slugs (mean rank: 9.33) than in darkness-reared slugs (mean rank: 3.67; U(12) = 35, p = 0.004). There was no significant difference in mean epithelial thickness between slugs from each treatment group (U(12) = 30, p = 0.0649).

Discussion

Environment-phenotype matching

The colour values of adult slugs as estimated using digital photography are strongly and positively correlated with the colour properties of the substrate upon which they were found, in both forest and bog habitats. Furthermore, these colour values differed significantly between colour morphs and substrate from each habitat, and when animal colouration was compared to substrate colouration from the alternative habitat. This suggests a ‘mismatch’ between animal and substrate in alternative habitats and demonstrates that colouration in adult G. maculosus is habitat-specific. Brown ‘forest’ slugs may be mismatched on black and white ‘bog’ substrates and vice versa, suggesting that mismatched individuals would possess a lower degree of crypsis in the ‘wrong’ habitat type, possibly increasing their susceptibility to predation. A consistent pattern of background-phenotype matching is a good indication that a phenotype is adaptive [2]. The correlations found between R, G and B colour channels measured from G. maculosus and their substrate in both habitat types are consistent with the hypothesis that this species possesses camouflage which enables both colour morphs of G. maculosus to accurately match a random sample of their respective background. Hall et al. [40] demonstrated how disruptive markings can provide effective crypsis in multiple habitats.
Spotted patterning, as well as background colour matching, therefore likely provides strong, habitat-specific camouflage in G. maculosus. Camouflage implies selective pressure from a visual predator. Although passerine birds are known to prey upon a number of medium-large arionid slug species [41], only one published record of predation exists for G. maculosus: by the larval stage of the sciomyzid fly Tetanocera elata Fabricius, which appeared to initiate feeding upon reception of tactile or olfactory cues [42]. It is currently unknown whether the spotted patterns on both G. maculosus morphs act as disruptive markings against avian predators. Many birds are potentially tetrachromatic [43] and may therefore perceive colour in wavelengths undetected by this study (photography and image analysis methods used in this study provide a basic assessment of colouration in a trichromatic colour space; more accurate colour analysis would involve measuring differences in regions of a light spectrum, and mapping these to models representing the visual capacity of known predators [37]).

Table 1 Mean reflectance scores and t-test statistics comparing RGB values of slug and substrate images from forested (N = 104 images) and blanket bog (N = 71 images) habitats

                      Forested habitats    Blanket bog habitats    Between-habitats t-tests
            Channel   Mean      SD         Mean     SD             Mean diff.   t(173)   p
  Slug      R         111.68    20.31      53.54    15.34          58.16        21.53    <0.001
            G         98.45     17.69      47.44    15.72          50.98        19.56    <0.001
            B         88.95     17.96      48.66    19.71          40.27        13.99    <0.001
  Substrate R         111.12    23.09      53.21    16.04          57.95        19.62    <0.001
            G         94.14     21.34      50.74    16.41          43.38        15.18    <0.001
            B         86.89     20.04      52.78    20.85          34.09        10.87    <0.001

Fig. 3 Correlations between animal and substrate R, G and B reflectance scores from forested (a–c; n = 104) and blanket bog (d–f; n = 71) habitats. Solid lines show Pearson’s r correlation; dotted lines show 95% CI
A possible alternative explanation for spotted markings might be that they function in social signalling, as is known to be the case for other invertebrates (e.g. in wasps: [44, 45]). However, slugs are believed to have poor visual systems relative to many other invertebrate taxa and may be capable only of detecting the overall distribution of light and dark [46]. Furthermore, G. maculosus is hermaphroditic, which makes a social signalling hypothesis for the spotted patterning unlikely. Allen [41] stated that visual predation by birds is undoubtedly the most important selective force in the evolution of cryptic colour morphs in prey animals and suggested that this explains why terrestrial gastropods living in forests tend, in general, to be brown. In addition to the pattern of background matching demonstrated by the results of this study, an unusual startle response unique among gastropods to G. maculosus may also suggest selection by avian predators. Many slugs contract into a humped posture when disturbed [31]. However, G. maculosus curls up into a ball shape by contracting its foot completely in half, and secretes a low-viscosity mucus causing the animal to become more slippery (pers. obs.). This behaviour is perhaps most easily explained as an adaptation that increases the difficulty for an avian predator of holding the prey in its bill.

Colour change

Images of juvenile slugs used in feeding trials showed lower reflectance scores in all colour channels after 84 days. However, there was no statistically significant difference in R, G and B reflectance values between food treatments after the feeding trial period was concluded. Thus, even though juvenile slugs were darker after feeding trials, this darkening appears to be independent of diet.
Table 2 Results from t-tests comparing mean RGB values of slugs against RGB values of substrate from their natural habitat, and against substrate from alternative habitats (sample sizes and means ± SD for each colour channel are presented in Table 1)

                        Forested substrate           Bog substrate
             Channel    Mean diff.  t       p        Mean diff.  t        p
  Forested   R          0.56        0.18    0.852    −57.60      −19.80   <0.001
  slug       G          4.30        1.58    0.115    −46.68      −16.64   <0.001
             B          2.05        0.78    0.437    −38.22      −12.46   <0.001
  Bog slug   R          58.51       21.28   <0.001   0.35        0.13     0.894
             G          47.69       18.03   <0.001   −3.29       −1.22    0.224
             B          36.15       11.91   <0.001   −4.13       −1.21    0.227

Fig. 4 Juvenile slugs irradiated under UV-lighting (a) showed less reflectance in R (top line), G (middle line) and B (bottom line) colour channels with each month, becoming significantly darker than juveniles maintained in darkness (b). Colour reflectance scores differed significantly between UV-exposed and darkness-reared slugs at the end of the experimental period (R: t = −14.461, p < 0.001; G: t = −11.391, p < 0.001; B: t = −7.700, p < 0.001). Numbers above means show group n at each sampling month; error bars show ±SE for means

Although the lichen composition found in the field differs between blanket bog and forested habitats, adult G. maculosus specimens from blanket bog habitats will eat similar foods to specimens from forested habitats in captivity (E. Johnston, pers. comm.). While Jordaens et al. [27] showed how diet can influence body colour in slugs of the subgenus Carinarion Hesse, the slight darkening of juveniles observed after 84 days of feeding trials in this study is likely to be a natural result of growth. As in some other slug species (e.g. in Limax flavus [24]), newly hatched G. maculosus juveniles tend to be light brown in colour and somewhat translucent, so the darker colour values recorded after feeding trials were concluded are probably the result of increased overall size and thickening of the integument.
Furthermore, the juvenile slugs used in feeding trials were maintained in plastic containers on a shelf in the laboratory approximately 5 m from a SW-facing window and, as such, may have been exposed to a natural day/night cycle, which may also have influenced this slight darkening to some degree. Slugs irradiated under UV-lighting became consistently darker at each sampling month and exhibited the black ‘bog’ morph after these trials were concluded, showing less reflectance in R, G and B colour channels. Slugs kept in darkness, however, remained lighter than UV-exposed juveniles at the end of the experimental period, showing higher reflectance scores in R, G and B colour channels. This result demonstrates that ultraviolet radiation can induce a change in pigmentation such that slugs exposed to UV radiation become darker in colour. Stereological examination of skin tissue from darkness-reared and UV-exposed juveniles demonstrated that UV-irradiated slugs contained significantly greater amounts of black pigment in their epithelium than darkness-reared slugs. This black pigment is most likely melanin, which has been detected in a number of other slug species (e.g. in A. ater [47]; Arion hortensis Férussac [48]; Deroceras reticulatum Müller [49]) and is known to develop in response to UV-exposure in a wide range of other animal taxa [50] but, until this study, had not been demonstrated in slugs. Since juvenile G.
maculosus from blanket bog habitats tend to be brown [31], it is likely that the black morph of adults develops in response to higher levels of UV exposure in these habitats, whereas juveniles which develop in relatively darker and more sheltered forested habitats remain brown.

Table 3 Effect of different lighting and diet treatments on juvenile slug colouration from the time of hatching (start) to the end of the trials

                                          Start              End
                              Channel     Mean     SD        Mean     SD       t        p
  Lighting   UV               R           114.80   13.09     55.05    12.48    9.175    <0.001
             (fed on oats)    G           83.26    11.31     36.26    11.65    7.879    <0.001
                              B           60.64    10.44     21.86    9.40     7.455    <0.001
             Darkness         R           119.18   15.36     114.94   17.23    0.603    0.559
                              G           88.02    12.92     92.97    16.89    −1.179   0.263
                              B           65.52    11.08     58.28    13.42    2.172    0.053
  Diet       Carrot           R           124.07   16.35     91.68    13.21    6.27     <0.001
  (natural                    G           95.31    11.73     69.51    11.15    5.75     <0.001
  photoperiod)                B           71.21    11.09     57.65    6.72     3.06     0.013
             Oats             R           119.81   17.11     96.04    11.48    4.378    0.001
                              G           95.88    18.18     69.65    9.23     4.828    <0.001
                              B           67.26    17.26     67.26    5.98     2.227    0.043

Start = day of hatching; End = day 140 for UV-exposure experiments, and day 84 for diet experiments

Fig. 5 Juveniles reared on different diets showed significantly less reflectance in R, G and B colour channels (shown as red, green and blue coloured bars) after 84 days (right of the dotted line) than when they first hatched, but colour reflectance scores did not differ significantly between diets after this period (R: t = −0.876, p = 0.390; G: t = 0.409, p = 0.686; B: t = 0.612, p = 0.546). Error bars show ±2SE for group means; means ± SD given in Table 3

McCrone and Sokolove [51] showed that photoperiod was responsible for producing a maturation hormone in Limax maximus L., with long photoperiod exposures resulting in the development of male-phase, and shorter photoperiod exposures resulting in the development of female-phase reproductive morphologies upon maturation. A similar hormonal pathway may be present in G.
maculosus, where black or brown pigmented phenotypes are expressed in response to different levels of UV radiation. Previously, adult G. maculosus specimens collected from open habitats appeared to lose some of their black pigment after a period of several weeks in the laboratory, with skin tissue in the grooves between tubercles becoming a paler brown colour (pers. obs.), suggesting that colour change may be reversible to some degree. However, it currently remains unclear whether colour change is completely reversible in adults or whether black pigmentation in G. maculosus is produced during a key period of early development in response to UV cues. UV radiation reaching the ground in bog habitats, particularly on overcast days and during sunrise/sunset hours when the slugs are most active, is likely to be lower than the UVB radiation emitted during laboratory experiments (25 μW/cm2). As such, it remains unknown how long it takes juveniles to develop into the black morph in the field, and whether this change in pigmentation is completely or partially reversible. Slugs from forest site 3 were omitted from pooled analyses because they exhibited slightly lower mean reflectance scores in the R and G channels, and significantly lower reflectance scores in the B colour channel, than slugs photographed from the other two forested sites. These slugs, although of the brown phenotype, appear to possess darker skin than slugs from the other forested sites. The area from which slugs were photographed at forest site 3 was noticeably brighter and had a luminosity value three times greater than the other two forest sites surveyed (Additional file 1: Table S1). This site also borders a blanket bog habitat, so it is possible that the slightly darker brown slugs photographed here were exposed to higher levels of UV light than slugs from the other two forested sites. This is consistent with a statement by Rowson et al. [31] that the distinction between brown and black G.
maculosus colour forms becomes substantially blurred where wooded areas border open habitats.

Origin of colour dimorphism

Reich et al. [32] demonstrated that G. maculosus originated during the middle Miocene, approx. 15 Myr ago, probably arriving in Ireland from Iberia during the Middle Ages. The ability of G. maculosus to alter the degree of melanin-like pigment in its skin in response to UV exposure could have evolved relatively recently, during the Quaternary period. The Quaternary is characterised by the periodic growth and retreat of ice sheets across the northern hemisphere [52]. Hewitt [53] has suggested that animal and plant populations survived through several glacial cycles by migrating up and down mountains. Reich et al. [32] suggested that Iberian G. maculosus populations may have survived in mountain valleys during periods of glaciation, when northern Spain’s mountain peaks would have been covered by ice. During warming phases when ice sheets were in retreat, G. maculosus populations would again have access to higher mountain altitudes, allowing them to increase their range altitudinally. Exposure to UV radiation can damage DNA, and melanin can act as a protective filter in skin against the effects of UV exposure [54–56]. Migration by G. maculosus up and down mountain ranges over thousands of years, as ice sheets periodically expanded and retreated, may therefore have fixed in ancestral populations the ability to cope with different exposures to UV intensity.

Fig. 6 a UV-exposed slugs contained a significantly greater estimated volume of black pigment than darkness-reared slugs. Volume fraction estimates are expressed as percentages. TS of integument from b a darkness-reared (brown) juvenile and c a UV-exposed (black) juvenile (mag. ×200). Black pigment is concentrated in the outer epithelia of both slug colour morphs
Dark populations of some slug species have previously been reported to prevail at high altitudes, possibly as a result of climatic selection (e.g. A. ater [21]; A. rufus and A. lusitanicus [26]; and Lehmannia marginata Müller [31]). However, pigmentation in these species is not known to be plastic. Since G. maculosus can self-fertilize, the capacity of an individual to express either the black or the brown phenotype in response to differential UV exposure may have been selected over non-plastic melanic morphs, which are known to occur as adaptations to altitudinal gradients in UV intensity in some lizards [12] and insects [57–59]. Skin colour in G. maculosus most likely plays a photoprotective role, leading to incidental camouflage against the different substrate types of each habitat. The spotted patterning present in both colour morphs likely affords G. maculosus a degree of bet-hedging by increasing the effectiveness of this incidental camouflage in whichever habitat type it develops. Ahlgren et al. [60] found that the development of black skin pigmentation in the freshwater gastropod Radix balthica L. was induced by UV radiation and by the detection of kairomones from predatory fish, demonstrating how pigmentation may serve more than one function in cryptic animals. It is also possible that each G. maculosus colour morph has important implications for thermoregulation, with black slugs absorbing heat more efficiently than paler brown slugs, as has been shown for other ectothermic invertebrates on a wide geographic scale [61].

Implications for other species

Dark colour morphs have been reported to prevail at high altitudes in many other terrestrial gastropod species, and this phenomenon has most often been explained as an example of climatic selection. It is possible that colouration is also a plastic trait in at least some of these species.
Melanisation is correlated with a suite of behavioural traits in vertebrates: typically, darker vertebrates tend to be more aggressive, sexually active and resistant to stress than lighter vertebrates [62]. Although melanin development pathways differ significantly between vertebrates and invertebrates [63], further research into the links between pigmentation and behaviour may reveal many analogues in invertebrate systems, particularly in species with melanin-based colour polyphenisms. Results from behavioural studies with G. maculosus have shown that black slugs collected from bog habitats exhibit a significantly faster escape response (data to be published elsewhere), as well as a greater degree of sinuosity in food-searching behaviour than brown individuals collected from forests (E. Johnston, pers. comm.), demonstrating greater levels of boldness and exploratory behaviour, respectively. Such consistent intraspecific behavioural types may also be common to other slugs, and studying their occurrence could have useful practical implications. For example, the grey field slug D. reticulatum is a major agricultural pest which can be difficult to identify due to its highly variable skin colour: it occurs on a spectrum from very pale to deep brown-coloured individuals, even within populations [31]. Luther [22] demonstrated that pigmentation in D. reticulatum is genetically controlled, with melanised forms dominant to unpigmented individuals. However, the degree of melanisation may well be influenced by UV exposure in darkly pigmented D. reticulatum, as has been demonstrated here for G. maculosus. Furthermore, Chevallier [26] reported that populations of the highly invasive slug A. lusitanicus are darker at high altitudes, citing it as a case of climatic selection. It is likely that darker forms are at least in part melanised due to higher UV-exposure for A. lusitanicus, D.
reticulatum and a host of other terrestrial slugs in which colour polyphenisms were previously believed to be the sole result of climatic selection or putatively non-adaptive genetic inheritance. It may therefore be useful for researchers interested in developing pest control protocols to investigate whether melanisation could also be used as a predictor of boldness or exploratory behaviour in pestiferous and invasive slugs.

Conclusions

Colour dimorphism in G. maculosus is consistent with the idea of habitat-specific crypsis, with a likely additional function for photoprotection. Adult pigmentation is a plastic trait determined by differential exposure to UV radiation, and colour dimorphism in this species may have initially originated as an adaptation to clinal differences in UV radiation. Phenotypic plasticity in G. maculosus pigmentation could represent a generalist strategy to reduce detectability by visual predators by providing incidental crypsis in different habitat types, and experiments are planned to test this hypothesis with passerine birds. To our knowledge, the results from this study provide the first evidence of plastic colour change in a terrestrial mollusc in response to UV radiation. Recently, G. maculosus populations have been recorded from a number of plantation forests throughout its Irish distribution [33–35, 64], and forest clear-felling may significantly reduce population sizes in these habitats. The ability of G. maculosus to change colour with UV-exposure may also have important implications for the conservation and management of this protected species. Slugs which remain in clear-felled areas should develop the black ‘bog’ phenotype upon exposure to the relatively higher levels of UV post-felling.
We would expect these black phenotypes to provide ineffective camouflage against a tree-stump background, which may reduce their fitness by making the slugs more conspicuous to visually-foraging predators. Careful consideration therefore needs to be given to site and habitat selection where translocation of G. maculosus populations may be used as a mitigation measure for forestry activities.

Additional files

Additional file 1: Table S1. Description of study sites. Table S2. Results of a one-way ANOVA comparing mean slug and substrate RGB reflectance values between sites of the same habitat type. Table S3. Results of a one-way ANOVA comparing mean RGB reflectance values between groups prior to diet experiments; and results of an independent samples t-test comparing RGB reflectance values between groups prior to UV and darkness experiments. (DOCX 18 kb)
Additional file 2: Epithelial stereology. (XLSX 28 kb)
Additional file 3: Feeding trials. (XLSX 14 kb)
Additional file 4: Lighting trials. (XLSX 18 kb)
Additional file 5: Slug and substrate RGB reflectance values. (XLSX 21 kb)

Acknowledgements

We are grateful to Dr. Kerry Thompson and Mark Canney for their help in staining and sectioning tissue slides, Eoin MacLoughlin for providing dissection trays and pins, Dr. Claire Heardman and Dr. Rory McDonnell for their advice when selecting survey sites, Dr. Gesche Kindermann for providing a map of G. maculosus distribution and Dr. Erin Johnston for helpful discussion about the diets of each G. maculosus colour morph.

Funding

This project was funded by an Irish Research Council Government of Ireland postgraduate scholarship (GOIPG/2014/657) and is part of a larger project investigating the behavioural ecology of G. maculosus.

Availability of data and materials

All data generated or analysed during this study are included in this published article (see Additional files 2, 3, 4, and 5).
Authors’ contributions

Study design was by AO’H and MJG, field surveys were by AO’H and KF, feeding trials were conducted by AO’H and KF, UV-rearing experiments were conducted by AO’H, analyses were by AO’H and MJG, and stereology work was designed by PD. Writing and manuscript production were by AO’H and MJG with input from PD and KF. All authors approved the final manuscript.

Ethics approval and consent to participate

Not applicable.

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Author details

1 Applied Ecology Unit, School of Natural Sciences, National University of Ireland Galway, Galway, Ireland. 2 Centre for Microscopy and Imaging, National University of Ireland Galway, Galway, Ireland.

Received: 11 January 2017 Accepted: 19 June 2017

References

1. Darwin C. On the origin of species by means of natural selection. London: John Murray; 1859. Chapter IV, ‘Natural Selection’.
2. Endler JA. A predator’s view of animal colour patterns. Evol Biol. 1978;11:319–64.
3. Cuthill I, Stevens M, Sheppard J, Maddocks T, Párraga CA, Troscianko TS. Disruptive colouration and background pattern matching. Nature. 2005;434:72–4.
4. Stevens M. Predator perception and the interrelation between different forms of protective colouration. P Roy Soc Lond B Bio. 2007;272:1457–64.
5. Manríquez PH, Lagos NA, Jara ME, Castilla JC. Adaptive shell color plasticity during the early ontogeny of an intertidal keystone snail. P Natl Acad Sci USA. 2009;106:16298–303.
6. Cranfield MR, Chang S, Pierce NE. The double cloak of invisibility: phenotypic plasticity and larval decoration in a geometrid moth, Synchlora frondaria, across three diet treatments. Ecol Entomol. 2009;34:412–4.
7. Brakefield PM, Reitsma N.
O'Hanlon et al. Frontiers in Zoology (2017) 14:35
work_vs52qr6evnhs5l2bmkcahutxi4 ----

Cost-effectiveness of digital surveillance clinics with optical coherence tomography versus hospital eye service follow-up for patients with screen-positive maculopathy

Eye (2019) 33:640–647 https://doi.org/10.1038/s41433-018-0297-7 ARTICLE

Jose Leal 1, Ramon Luengo-Fernandez 1, Irene M. Stratton 2, Angela Dale 2, Katerina Ivanova 2, Peter H. Scanlon 2,3,4

Received: 8 September 2018 / Revised: 11 October 2018 / Accepted: 21 October 2018 / Published online: 30 November 2018
© The Author(s) 2018. This article is published with open access

Abstract

Background: Annually, 2.7 million individuals are offered screening for diabetic retinopathy (DR) in England. Spectral-Domain Optical Coherence Tomography (SD-OCT) has the potential to relieve pressure on NHS services by correctly identifying patients who are screen positive for maculopathy on two-dimensional photography without evidence of clinically significant macular oedema (CSMO), limiting the number of referrals to hospitals. We aim to assess whether the addition of SD-OCT imaging in digital surveillance clinics is a cost-effective intervention relative to hospital eye service (HES) follow-up.
Methods: We used patient-level data from the Gloucestershire Diabetic Eye Screening Service linked to the local digital surveillance programme and HES between 2012 and 2015. A model was used to simulate the progression of individuals with background diabetic retinopathy (R1) and diabetic maculopathy (M1) following DR screening across the clinic pathways over 12 months.

Results: Between January 2012 and December 2014, 696 people undergoing DR screening were found to have screen-positive maculopathy in at least one eye for the first time, with a total of 766 eyes identified as having R1M1. The mean annual cost of assessment and surveillance through the SD-OCT clinic pathway was £101 (95% CI: 91–139), as compared with £177 (95% CI: 164–219) under the HES pathway. Surveillance under an SD-OCT clinic generated cost savings of £76 (95% CI: 70–81) per patient.

Conclusions: Our analysis shows that SD-OCT surveillance of patients diagnosed as R1M1 at DR screening is not only cost-effective but generates considerable cost savings.

These authors contributed equally: Jose Leal, Ramon Luengo-Fernandez.

Introduction

Diabetes places a great economic burden on society owing to its high prevalence, increased health-care expenditures and lost productivity. Although the treatment of diabetes on its own is costly, its complications are the major contributors to health-care costs [1]. Among the main diabetes-related complications is diabetic retinopathy (DR), which is an important cause of blindness in the working-age population in the UK [2]. It is possible to treat sight-threatening DR effectively [3, 4] and cost-effectively [5], and screening using retinal photography has been shown to be cost-effective [6]. A common cause of sight-threatening retinopathy is diabetic macular oedema [7], which can be treated effectively by either macular laser treatment or anti-vascular endothelial growth factor (VEGF) injections [8]. However, treatment is only recommended for patients with
clinically significant macular oedema (CSMO), with non-CSMO patients deriving little additional benefit from treatment [4].

In 2003, the NHS Diabetic Eye Screening Programme (NDESP) was introduced in England, which uses annual digital photography with pupil dilation [9, 10]. Until 2015, NDESP recommended that all screen-positive patients identified with mild non-proliferative DR and maculopathy (R1M1), moderate/severe non/pre-proliferative DR (R2M0, R2M1), or proliferative retinopathy (R3M0, R3M1) be referred to a hospital eye service (HES) for treatment assessment. However, given the limitations of two-dimensional digital photographic retinal screening for maculopathy, less than a quarter of referred M1 patients were found to have CSMO in need of treatment [11].

* Correspondence: Peter H. Scanlon, p.scanlon@nhs.net
1 Health Economics Research Centre, Nuffield Department of Population Health, University of Oxford, Oxford, UK. 2 Gloucestershire Retinal Research Group, Cheltenham General Hospital, Cheltenham, UK. 3 Nuffield Department of Clinical Neuroscience, University of Oxford, Oxford, UK. 4 School of Health and Social Care, University of Gloucestershire, Cheltenham, UK.
Electronic supplementary material: The online version of this article (https://doi.org/10.1038/s41433-018-0297-7) contains supplementary material, which is available to authorized users.
In January 2015, digital surveillance was included in the standard NDESP pathway for annual screening, with a statement in the annual service specification: "the provider shall: refer people with diabetes to digital surveillance clinics that, in the opinion of the Clinical Lead, need more frequent review and do not require referral to the HES. This should be done against local protocols based on best evidence and NDESP guidance, using appropriate technology. Surveillance clinics may interface with OCT assessment where this has been agreed with commissioners of hospital eye services", and a pathway overview diagram which is unchanged in current documents [12, 13]. The digital surveillance pathway standards [14] were updated in March 2018 to include a new standard (DES-PS-5) to ensure those in digital surveillance are seen within an appropriate time frame. No guidance has been given on the grading criteria used within digital surveillance or on the use of OCT, which is currently considered 'optional'.

In June 2017, 2.7 million people were offered DR screening, with uptake of 2.25 million (82.2%) [15], and the diabetes epidemic poses an ever-growing strain on the screening programme and resources in the wider NHS. One way to relieve pressure on NHS services would be to improve the specificity of the current screening programme. Spectral domain optical coherence tomography (OCT) is an imaging technique that interprets reflected optical waves from a depth in the retina of 2–3 mm to produce three-dimensional images, with the potential to correctly identify patients without evidence of any oedema in the macular area or CSMO, therefore limiting the number of referrals to hospitals and improving the specificity of the current screening programme [16].
Although SD-OCT imaging has been shown to be a useful adjunct in surveillance clinics for screen-positive diabetic maculopathy [16], questions remain about its cost-effectiveness in the clinical setting given its high implementation costs. Using detailed data from the Gloucestershire SD-OCT clinic surveillance programme, we aim to assess whether the addition of SD-OCT imaging surveillance in a community setting following digital retinal photography is a cost-effective intervention when screening for CSMO relative to hospital eye service assessment and follow-up.

Methods

Participants

We used patient-level data from the Gloucestershire Diabetic Eye Screening Service linked to the local digital surveillance programme and HES covering the period between 1st January 2009 and 31st December 2015. An anonymised cohort dataset of individuals with incident R1M1 in one eye as detected by GDESP between 1st January 2012 and 31st December 2014 was analysed. To ensure that only incident R1M1 cases were considered in our analysis, we excluded patients with a previous history of R1M1 (that is, those individuals screened as R1M1 between January 2009 and December 2011). HES and SD-OCT clinic surveillance data were analysed up to 31st December 2015, allowing for at least a full year of follow-up for all incident R1M1 cases. These data included basic demographics, DR screening encounters, grading and referral outcomes, SD-OCT clinic grading and referral outcomes, and HES grading, referral and treatment outcomes from two sources:

a. Gloucestershire Diabetic Retinopathy Eye Screening Programme (GDESP)
   i. A diabetes register for Gloucestershire provided by a regularly updated and collated data download from each Gloucestershire primary care General Practice.
   ii.
Every DR screening encounter, grade, outcome and referral recorded in detail since 1998, with GDESP operational pathways and data requirements managed in accordance with National Screening Programme specifications and standards.

b. Gloucestershire Hospital Eye Service (HES)
   i. The HES data consist of an electronic medical record (EMR) for every patient attending an HES appointment.
   ii. The patient demographic EMR is managed by an electronic interface with the Gloucestershire Patient Administration System (PAS), and clinical episodes are recorded by medical and allied health professional staff at the time of a patient encounter.

Cost-effectiveness of digital surveillance clinics with optical coherence tomography versus hospital. . . 641

Interventions under study

We estimated the cost-effectiveness of two pathways of surveillance for individuals with incident R1M1 grading detected in the diabetic screening programme:

- Technician-led digital surveillance clinic including SD-OCT in a community setting
- Ophthalmologist-led HES clinic assessment and follow-up

At the time of the analysis, the English National Screening Programme recommended annual screening for all patients with diabetes. The technician-led digital surveillance pathway consisted of two-field mydriatic digital photography (macular and disc fields) followed by a macular SD-OCT using a Topcon OCT 2000. The HES clinical surveillance pathway consisted of a slit lamp biomicroscopy examination by an ophthalmologist and a macular SD-OCT using one of three SD-OCT machines used within the HES (Heidelberg Spectralis, Zeiss Cirrus or the Topcon OCT 2000). Patients attending HES who did not need treatment were followed up in a similar fashion to that observed in the SD-OCT clinic surveillance programme in a community setting, with the exception that patients would continue to be assessed in HES. The comparison of the two pathways is shown in Online Appendix Figure A1.
Grading criteria

The grading criteria used for retinopathy (R) and maculopathy (M) grades in the NHS Diabetic Eye Screening Programme are published [17] by NDESP. Online Appendix Table A1 reports the grading criteria and assessment of image quality for the SD-OCT images.

Model structure

A decision analytic model was developed to evaluate the impact of the two pathways of surveillance under evaluation. Given the clinic pathways and the 1-year time horizon used in the analysis, the most appropriate model was judged to be a decision tree, which was developed in Excel (Microsoft, Redmond, WA). Model structure and assumptions were informed by what was known about diabetic retinopathy, the clinic pathways of R1M1 individuals and discussions with clinical experts and statisticians involved in the project. The model was used to simulate the progression of R1M1 individuals following DR screening across the clinic pathways over 12 months (Fig. 1).

Nearly all individuals who were screen positive for R1M1 were referred to SD-OCT digital surveillance clinics, with a very small number referred to HES. Hence, the R1M1 population was first divided into those that attended the SD-OCT/HES clinic appointments and those that did not. Of those attending SD-OCT clinic or HES, we further divided the R1M1 population according to the grading in the other eye at screening: R0/R1, R1M1 or R2-R3. Conditional on the grading in the other eye, we classified the R1M1 at screening according to their possible grading at SD-OCT/HES clinic appointments: R0M0, R1M0, R1M1 and R2-R3. We then further divided these subgroups according to their SD-OCT results: negative, borderline and positive. Conditional on the DR/OCT results, the individuals could be referred back to DR screening, to further surveillance (at the SD-OCT clinic or HES) or directly to HES for possible treatment of maculopathy.
If referred for further surveillance, the frequency of surveillance could be: 4 months or less, 6 months, 9 months or 12 months. If referred to HES for possible maculopathy treatment, individuals might or might not receive treatment (VEGF injection or laser). Given the use of the same equipment in the digital surveillance pathway and the hospital eye services (SD-OCT) and similar levels of DR/OCT grading expertise in both settings, we assumed that the grading, outcomes and frequency of referral would be identical in both models.

Fig. 1 Model structure for individuals with R1M1 grading at SD-OCT clinic appointment. The same model structure applies for R0M0, R1M0 and R2-R3 grading at SD-OCT clinic appointment. The circles denote 'chance' nodes where branches meet (representing likely events) that set out the probability of an event occurring or not (e.g., OCT negative/borderline or positive given R1M1 grading at SD-OCT clinic). The probabilities of events must always sum up to one at any given node. Costs were assigned to each branch, including the end of the branch, to value the resource use associated with each possible model pathway. These costs are combined with branch probabilities and the tree is 'rolled back' so that the mean cost of the intervention can be estimated. [Branch labels: R1M1 at clinic; OCT negative/borderline/positive; referral to surveillance (4 months or less, 6 months, 9 months, 12 months), HES (for treatment: injection, laser, none) or DR screening programme.]

Costs

Following NICE recommendations, the perspective for the analysis was that of the UK NHS and the price year was 2015/16. Costs included: SD-OCT clinic surveillance appointment cost in a community setting; HES outpatient appointment costs; capital cost of SD-OCT in a hospital setting; and hospital-based treatment costs for CSMO (VEGF injection and laser) (see Online Appendix Table A2).
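The 'rolled back' expected-cost calculation described in the Fig. 1 caption can be sketched as follows. This is a minimal illustration, not the study's Excel model: the visit costs and branch probabilities below are invented placeholders (the referral proportions are loosely rounded from figures reported in the Results), and the tree covers only a single clinic visit.

```python
def expected_cost(node):
    """Roll back a decision tree.

    A leaf is a terminal cost (a number); a chance node is a tuple
    (cost_incurred_at_node, [(probability, child), ...]) whose branch
    probabilities must sum to one, as in the Fig. 1 caption.
    """
    if isinstance(node, (int, float)):  # leaf: no further branches
        return float(node)
    node_cost, branches = node
    assert abs(sum(p for p, _ in branches) - 1.0) < 1e-9, "probabilities must sum to 1"
    # Expected cost = cost at this node + probability-weighted child costs.
    return node_cost + sum(p * expected_cost(child) for p, child in branches)

# Illustrative one-visit tree for an eye seen at the SD-OCT clinic:
# the clinic visit costs 75; afterwards the patient returns to annual DR
# screening (no further cost), has one further surveillance visit (75),
# or is referred to HES (outpatient appointment, 120). All unit costs are
# placeholders, not the study's inputs.
tree = (75.0, [(0.19, 0.0),     # back to DR screening
               (0.65, 75.0),    # further SD-OCT surveillance
               (0.16, 120.0)])  # referred to HES
mean_cost = expected_cost(tree)
```

In the study the same roll-back is applied over the full 12-month tree, with branch probabilities estimated from the Gloucestershire data.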
The costs of SD-OCT surveillance clinics were obtained from a recent report evaluating community relative to hospital eye service follow-up for patients with age-related macular degeneration [18]. These authors determined the costs of a monitoring review in a community setting (£51.82 in 2013/14 prices) as well as the costs of SD-OCT equipment by review (£22.99 in 2013/14 prices). Outpatient consultation costs for ophthalmology were obtained from NHS Reference Costs 2015/16 [19]. We costed the first outpatient appointment following DR screening as a consultant-led first appointment and the remaining appointments as consultant-led follow-up appointments. Under the HES pathway, we added the OCT equipment costs to outpatient appointment costs. The annual costs associated with laser or VEGF injections for the treatment of CSMO were obtained from patient-level data from patients attending DR screening in Gloucestershire [20].

Statistical analysis

We converted the Gloucestershire patient-level dataset into eye-level data so that we could follow the clinical pathway, grading and outcomes for the eye graded as R1M1 at screening over 12 months. These data were used to estimate the grade at screening in the other eye (R0-R1M0, R1M1 and R2-R3), the uptake of surveillance, the grading at community or HES surveillance clinic, and the number of surveillance appointments attended during the 12 months following initial screening. For example, the uptake rate of the surveillance clinics (community or HES-based) was determined by dividing the number of attenders, within 12 months of screening, by all those referred at screening for HES or SD-OCT surveillance. We also used three regression models to estimate:

1. Probability of each referral outcome (i.e., referral back to DR screening, SD-OCT surveillance in a community or HES setting, HES direct referral for possible treatment of maculopathy) following the SD-OCT surveillance clinic appointment (multinomial logit);
2.
Probability of the frequency of surveillance (i.e., 4 months or less, 6 months, 9 months or 12 months) following the SD-OCT surveillance clinic appointment (ordered logit);
3. Probability of receiving treatment for CSMO (VEGF injection or laser).

We examined the following predictors: SD-OCT result (positive, borderline, negative), grading at surveillance clinic (R0M0, R1M0, R1M1), screening grade in the other eye (R0-R1, R1M1) and other ocular conditions (e.g., macular degeneration). A predictor was deemed to be statistically significant if p < 0.05. Model fit was assessed using Pregibon's link test. All analyses were performed using STATA version 14 (StataCorp, College Station, TX).

Cost-effectiveness analysis

Following the assumption that the clinic pathways under evaluation would result in the same patient grading results, OCT results, and subsequent referrals for treatment or further surveillance, the quality of life of the R1M1 individuals was assumed to be the same. As a result, we estimated the incremental costs associated with surveillance in an SD-OCT clinic pathway relative to HES. Probabilistic sensitivity analysis (PSA) was performed, where distributions were used to represent the uncertainty in the model inputs [21]. Furthermore, to capture the uncertainty and correlations between the three regression models, we re-estimated the same set of models for each of 1000 bootstraps (with replacement) of the sample dataset and saved the coefficients from the three regression models.

Scenario analysis

We explored the impact of the scenario that, once eyes were assessed at HES for the first time after diagnosis of R1M1 at screening, they would no longer require further surveillance. We also explored no further surveillance for SD-OCT negative results if clinic grading was R0M0 or R1M0 in both pathways.
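The bootstrap step of the PSA — resampling the patient-level dataset with replacement 1000 times and re-estimating the models on each replicate — can be sketched as below. For brevity a simple outcome proportion stands in for the study's multinomial/ordered/logistic regressions, and the toy eye-level records are invented for illustration.

```python
import random

def bootstrap_proportions(records, key, n_boot=1000, seed=42):
    """Resample the records with replacement and recompute the outcome
    proportion on each replicate; one estimate is saved per bootstrap,
    mirroring how the regression coefficients were saved in the PSA."""
    rng = random.Random(seed)
    reps = []
    for _ in range(n_boot):
        sample = [rng.choice(records) for _ in records]  # same n, with replacement
        reps.append(sum(r[key] for r in sample) / len(sample))
    return reps

# Toy records: 16 of 100 hypothetical OCT-positive eyes treated (invented
# counts, chosen only to echo the 16% treatment probability reported later).
records = [{"treated": 1}] * 16 + [{"treated": 0}] * 84
reps = sorted(bootstrap_proportions(records, "treated"))
ci_low, ci_high = reps[25], reps[975]  # percentile 95% bootstrap interval
```

In the study proper, each bootstrap replicate refits all three regressions jointly on the same resampled dataset, which preserves the correlation between their coefficients.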
Results

Gloucester cohort study and model inputs

Number of eyes identified with R1M1 at screening

Between January 2012 and December 2014, 696 people with diabetes screened for diabetic retinopathy were found to be screen positive for maculopathy (M1) in at least one eye for the first time. Of these, 622 (89%) had a diagnosis of R1M1 in one eye, 4 (1%) had R1M1 in one eye and moderate to proliferative retinopathy R2M1 in the other eye, and 70 (10%) had R1M1 in both eyes (Table 1), with a total of 766 eyes diagnosed as having R1M1.

Eye referral once identified with R1M1 at screening

All 696 individuals identified as having R1M1 at screening were referred to either HES or SD-OCT clinics, with 37 (5%) failing to attend their appointments within the first
Of these patients, 2 (6%) had SD-OCT positive results in both eyes and 12 (18%) had SD-OCT positive results in one eye. Only two patients diagnosed as having R1M1 at screening were diagnosed with R2/3 in the other eye. Overall, for the 639 patients, irrespective of diabetic Table 1 Baseline characteristics of patients identified with diabetic maculopathy at screening n (%) (n = 696) Gender Males 440 (57) Females 326 (43) Age range, years <21 6 (1) 21 to 30 19 (2) 31 to 40 37 (5) 41 to 50 97 (13) 51 to 60 142 (19) 61 to 70 168 (23) 71 to 80 178 (23) 81 to 90 113 (15) ≥91 6 (1) Diabetes type Type 1 102 (14) Type 2 846 (86) Duration diabetes, mean years (SD) 13 (9) Screening grade Background retinopathy and DM in one eye R1M1/R0M0 181 (26) R1M1/R1M0 441 (63) R2M0/R1M1 2 (<1) R2M1/R1M1 1 (<1) R3M1/R1M1 1 (<1) Background retinopathy and DM in both eyes R1M1/R1M1 70 (10) Patients with R1M1 in at least one eye referred to HES or OCT clinics n=696 Not attending HES or OCT clinics n=37 Assessed in OCT clinic n=652 Assessed in HES n=7 Fig. 2 Patient pathway after diagnosis of incident background retino- pathy and maculopathy at screening Table 2 Eye grading at the SDOCT clinic, stratified by screen grade of the other eyea Digital surveillance grade Screen grade of other eye R0/1M0 n (%) R1M1 n (%) R2/3MX n (%) R0M0 OCT negative 53 (10) 8 (6) 0 OCT borderline 6 (1) 0 0 OCT positive 6 (1) 0 0 R1M0 OCT negative 156 (27) 37 (28) 0 OCT borderline 25 (4) 6 (5) 0 OCT positive 13 (2) 3 (2) 0 R1M1 OCT negative 171 (30) 45 (35) 1 (50) OCT borderline 77 (13) 18 (14) 0 OCT positive 54 (9) 13 (10) 1 (50) R2/3MX OCT negative 4 (1) 0 0 OCT borderline 2 (<1) 0 0 OCT positive 1 (<1) 0 0 RUMX OCT negative 2 (<1) 0 0 OCT borderline 1 (<1) 0 0 OCT positive 1 (<1) 0 0 Total 572 (100) 130 (100)* 2 (100) aResults per eye conditional on the screening grade of the other eye. MX: M0 or M1; RU: unknown retinopathy grade. 
For the 65 patients who were identified as having R1M1 in both eyes and had recorded SDOCT results the data has been reported for both eyes 644 J. Leal et al. retinopathy grading in the SD-OCT clinic, a total of 90 (14%) patients tested SD-OCT positive and 477 (66%) tested negative. Referral outcome following SD-OCT grading For the 639 patients with recorded SD-OCT results, out- come data were missing in 11 (2%) patients. For the remaining 628 patients, 118 (19%) were returned to the DR annual screening programme, 412 (66%) continued under SD-OCT clinic surveillance, and 98 (16%) were referred to HES. Patients with SD-OCT positive results were sig- nificantly more likely to be referred to HES after controlling for SD-OCT clinic grading of the eye and the screen grading of the other eye (see Online Appendix Table A3). Of the 412 patients under continued SD-OCT clinic surveillance, 4 (<1%) were followed up in SD-OCT clinic surveillance between 1 and 3 months, 174 (42%) every 6 months, 72 (17%) every 9 months and 162 (39%) were referred to annual SD-OCT clinic surveillance. Patients with SD-OCT negative results in one eye were significantly more likely to have longer surveillance intervals than those with positive or borderline results after controlling for SD-OCT clinic grading of the eye and the screen grading of the other eye (see Online Appendix Table A4). Including the initial SD-OCT clinic visit, the mean number of SD-OCT clinic visits for patients who were referred to SD-OCT clinic surveillance every 3, 6, 9 and 12 months was, respectively: 2.00 (S.D. 1.15); 1.95 (S.D. 0.44); 1.59 (S.D. 0.49); and 1.05 (S.D. 0.24) visits. Treatment for CSMO Of the 639 patients assessed at the SD-OCT clinic for suspected R1M1 and with SD-OCT results, 18 (3%) received treatment with laser or VEGF injection. Of these, 7 (39%) received VEGF injection and 11 (61%) received laser treatment. 
Results of the logistic regression (Online Appendix Table A5) showed that an SD-OCT positive result was the only significant predictor of an increased likelihood of treatment (odds ratio 36, 95% CI: 10–126). The probability of treatment given an SD-OCT positive result was 16%.

Cost comparison between SD-OCT and HES surveillance

Online Appendix Table A2 reports the model inputs used to compare the SD-OCT and HES surveillance pathways. The mean annual cost of assessment and surveillance of an eye identified as R1M1 at DR screening through the SD-OCT clinic pathway was £101 (95% CI: 91–139) per patient (Table 3). The mean annual cost of assessment and surveillance of a patient with at least one eye diagnosed as R1M1 at DR screening through the HES pathway was £177 (95% CI: 164–219) per patient. As a result, the mean annual cost saving per patient with an eye identified as R1M1 in the DR screening programme and surveilled in the SD-OCT digital surveillance pathway, as opposed to a HES pathway, was £76 (95% CI: 70–81), with these savings arising from fewer patients without maculopathy or with negative SD-OCT results being referred to HES. Even if, under the HES pathway, a patient with R1M1 was assessed at HES only once after diagnosis of R1M1 at screening and no longer required surveillance, the SD-OCT clinic pathway still generated savings of £41 (95% CI: £37–£46) per eye assessed. Finally, assuming no further surveillance for SD-OCT negative results if the clinic grading was R0M0 or R1M0 generated savings of £74 (95% CI: £67–£79) (Table 3).

Discussion

Diabetic retinopathy screening is nationally mandated and has been shown to be cost-effective compared with no screening [22–24]. There is also a clear understanding of the appropriateness of different screening intervals for patients at differing risks of developing sight-threatening DR [20].
In addition, studies have suggested that including an SD-OCT digital surveillance pathway could improve the overall efficiency of the DR screening programme [16] and its cost-effectiveness [25]. This study, based on 652 patients and over 700 eyes diagnosed as screen positive for maculopathy with background DR, and referred almost entirely for surveillance in an SD-OCT clinic, provides further evidence that surveillance of these patients in an SD-OCT clinic is not only cost-effective but generates substantial cost savings.

Table 3 Cost-effectiveness results of SD-OCT digital surveillance in a community setting vs. HES (2015/16 prices)

                                                       Base case (£)    Scenario 1 (£)(a)    Scenario 2 (£)(b)
SD-OCT surveillance, cost per patient
  Mean                                                 101              101                  99
  95% CI                                               91–139           89–134               87–131
HES pathway, cost per patient
  Mean                                                 177              142                  173
  95% CI                                               164–219          129–179              157–209
SD-OCT surveillance vs. HES pathway, cost per patient
  Mean                                                 −76              −41                  −74
  95% CI                                               −70 to −81       −37 to −46           −67 to −79

(a) Under the HES pathway, once eyes were assessed at HES for the first time after diagnosis of R1M1 at screening they would no longer require surveillance. (b) Under both surveillance options, assuming no further surveillance for SD-OCT negative results if the clinic grading was R0M0 or R1M0.

We found that after the first surveillance visit in an SD-OCT clinic, fewer than 20% of patients required referral to HES; the remainder were found not to have macular oedema or CSMO and could be safely monitored at an SD-OCT follow-up clinic or discharged back to screening if no longer screen positive for maculopathy. At a cost of £110 and £87 for a first and follow-up face-to-face consultant visit, respectively, in ophthalmology, in addition to £24 for SD-OCT imaging, the costs of assessment at HES are considerable.
By contrast, at a cost of £53 a visit including SD-OCT imaging, the costs of an SD-OCT surveillance clinic are significantly lower. By reducing the number of patients without CSMO referred to HES, and referring only those who will benefit from referral, the 1-year cost saving associated with surveillance in an SD-OCT clinic rather than HES was £76 (95% CI: 70–81), that is, a saving of 41% compared with having all these eyes assessed in HES.

Despite our study being based on a large number of eyes diagnosed as R1M1 at screening, with individual patient record linkage of data on DR screening appointments, SD-OCT clinic surveillance visits, HES appointments and eye treatments received, it is not without limitations. First, we assumed that surveillance in an SD-OCT clinic would yield the same eye outcomes as surveillance in HES; that is, we assumed that the sensitivity of SD-OCT at detecting CSMO would be equal to that of HES assessment. Previous studies and literature reviews [16, 26] suggest that SD-OCT performs very well, with very similar outcomes. Second, we only established the cost implications of the two surveillance programmes over 1 year and not over the longer term; a longer horizon would most likely have resulted in larger cost savings from reviewing patients who are screen positive with R1M1 in digital surveillance clinics with SD-OCT as opposed to HES. Finally, with no control group against which to compare the SD-OCT clinic surveillance programme, we assumed that instead of attending the SD-OCT clinic for surveillance, patients attended HES directly and were then followed up in HES in a similar fashion to that observed in the Gloucestershire SD-OCT clinic surveillance programme. We tested the implications of this assumption in our sensitivity analyses, which showed that even if eyes were assessed at HES for the first time after diagnosis of R1M1 at screening and no longer required surveillance, the SD-OCT clinic pathway still generated significant cost savings.
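As a rough cross-check, the unit costs quoted above (£110 first and £87 follow-up HES consultant visit, £24 SD-OCT imaging, £53 per SD-OCT clinic visit with imaging included) can be combined in a minimal sketch. This is not the paper's probabilistic model, which weights referral, follow-up and treatment probabilities; the visit counts below are invented purely for illustration.

```python
# Illustrative cost sketch, NOT the study's cost-effectiveness model.
# Unit costs are taken from the text above; visit counts are assumptions.

HES_FIRST, HES_FOLLOWUP, OCT_IMAGING = 110.0, 87.0, 24.0  # HES consultant visits, imaging
SDOCT_VISIT = 53.0  # SD-OCT surveillance clinic visit, imaging included


def hes_cost(n_visits: int) -> float:
    """One first visit plus follow-ups, each accompanied by SD-OCT imaging."""
    return (HES_FIRST + OCT_IMAGING) + (n_visits - 1) * (HES_FOLLOWUP + OCT_IMAGING)


def sdoct_cost(n_visits: int) -> float:
    """SD-OCT clinic visits at a flat per-visit cost."""
    return n_visits * SDOCT_VISIT


# A hypothetical patient seen twice in a year under each pathway:
print(hes_cost(2))                   # 245.0
print(sdoct_cost(2))                 # 106.0
print(hes_cost(2) - sdoct_cost(2))   # 139.0 saved under SD-OCT surveillance
```

Even this crude per-visit comparison shows the direction of the result: the HES pathway costs more than twice as much per visit, which is why the modelled annual saving per patient is substantial.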
In conclusion, our analysis shows that SD-OCT digital surveillance of patients with diabetes, background diabetic retinopathy (R1) and evidence of diabetic maculopathy (M1) at DR screening is not only cost-effective but generates considerable cost savings.

Summary

What was known before:
● The NHS Diabetic Eye Screening Programme has introduced digital surveillance clinics into its pathway for those patients who, in the opinion of the Clinical Lead, need more frequent review and do not require referral to the HES.

What this study adds:
● This paper is the first to show that the use of OCT in the digital surveillance pathway of the English NHS Diabetic Eye Screening Programme is both effective and cost-effective.

Acknowledgements We are grateful to Steve Chave, who collated all the data for the analyses.

Funding The project was funded by an unrestricted grant from Gloucestershire NHS Foundation Trust from funding received from Public Health England.

Compliance with ethical standards

Conflict of interest JL, RLF, IMS, AD and KI declare no conflict of interest. PHS is Clinical Director for the English NHS Diabetic Eye Screening Programme.

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.
org/licenses/by/4.0/.

References

1. Alva ML, Gray A, Mihaylova B, Leal J, Holman RR. The impact of diabetes-related complications on healthcare costs: new results from the UKPDS (UKPDS 84). Diabet Med. 2015;32:459–66.
2. Liew G, Michaelides M, Bunce C. A comparison of the causes of blindness certifications in England and Wales in working age adults (16-64 years), 1999-2000 with 2009-2010. BMJ Open. 2014;4:e004015.
3. DRS. Photocoagulation treatment of proliferative diabetic retinopathy. Clinical application of Diabetic Retinopathy Study (DRS) findings, DRS Report Number 8. The Diabetic Retinopathy Study Research Group. Ophthalmology. 1981;88:583–600.
4. ETDRS. Photocoagulation for diabetic macular edema. Early Treatment Diabetic Retinopathy Study report number 1. Early Treatment Diabetic Retinopathy Study research group. Arch Ophthalmol. 1985;103:1796–806.
5. Savolainen EA, Lee QP. Diabetic retinopathy - need and demand for photocoagulation and its cost-effectiveness: evaluation based on services in the United Kingdom. Diabetologia. 1982;23:138–40.
6. James M, Turner DA, Broadbent DM, Vora J, Harding SP. Cost effectiveness analysis of screening for sight threatening diabetic eye disease. BMJ. 2000;320:1627–31.
7. Fong DS, Ferris FL 3rd, Davis MD, Chew EY. Causes of severe visual loss in the early treatment diabetic retinopathy study: ETDRS report no. 24. Early Treatment Diabetic Retinopathy Study Research Group. Am J Ophthalmol. 1999;127:137–41.
8. Virgili G, Parravano M, Menchini F, Brunetti M. Antiangiogenic therapy with anti-vascular endothelial growth factor modalities for diabetic macular oedema. Cochrane Database Syst Rev. 2012;12:CD007419.
9. NICE. Type 1 diabetes in adults: diagnosis and management. 2015. www.nice.org.uk/guidance/ng17.
10. NICE. Type 2 diabetes in adults: management. 2015. www.nice.org.uk/guidance/ng28.
11. Jyothi S, Elahi B, Srivastava A, Poole M, Nagi D, Sivaprasad S. Compliance with the quality standards of National Diabetic Retinopathy Screening Committee. Prim Care Diabetes. 2009;3:67–72.
12. PHE. NHS public health functions agreement 2017-2018. Service specification no. 22: NHS Diabetic Eye Screening Programme. 2017. www.england.nhs.uk/wp-content/uploads/2017/04/service-spec-22.pdf.
13. PHE. NHS Diabetic Eye Screening Programme: overview of patient pathway, grading pathway, surveillance pathways and referral pathways. 2017. www.gov.uk/government/uploads/system/uploads/attachment_data/file/648658/Diabetic_Eye_Screening_pathway_overviews.pdf.
14. PHE. NHS Diabetic Eye Screening Programme: pathway standards. 2018.
15. PHE. NHS Screening Programmes in England, 1 April 2016 to 31 March 2017. 2017. www.gov.uk/government/uploads/system/uploads/attachment_data/file/661677/NHS_Screening_Programmes_in_England_2016_to_2017_web_version_final.pdf.
16. Mackenzie S, Schmermer C, Charnley A, Sim D, Vikas T, Dumskyj M, et al. SDOCT imaging to identify macular pathology in patients diagnosed with diabetic maculopathy by a digital photographic retinal screening programme. PLoS One. 2011;6:e14811.
17. PHE. NHS Diabetic Eye Screening Programme: grading definitions for referable disease. 2017. www.gov.uk/government/uploads/system/uploads/attachment_data/file/582710/Grading_definitions_for_referrable_disease_2017_new_110117.pdf.
18. Reeves BC, Scott LJ, Taylor J, Hogg R, Rogers CA, Wordsworth S, et al. The effectiveness, cost-effectiveness and acceptability of community versus Hospital Eye Service follow-up for patients with neovascular age-related macular degeneration with quiescent disease (ECHoES): a virtual randomised balanced incomplete block trial. Health Technol Assess. 2016;20:1–120.
19. DH. NHS reference costs 2015 to 2016. 2016. www.gov.uk/government/publications/nhs-reference-costs-2015-to-2016.
20. Scanlon PH, Aldington SJ, Leal J, Luengo-Fernandez R, Oke J, Sivaprasad S, et al. Development of a cost-effectiveness model for optimisation of the screening interval in diabetic retinopathy screening. Health Technol Assess. 2015;19:1–116.
21. Briggs AH. Handling uncertainty in cost-effectiveness models. Pharmacoeconomics. 2000;17:479–500.
22. Javitt JC, Aiello LP. Cost-effectiveness of detecting and treating diabetic retinopathy. Ann Intern Med. 1996;124:164–9.
23. Caro JJ, Ward AJ, O'Brien JA. Lifetime costs of complications resulting from type 2 diabetes in the U.S. Diabetes Care. 2002;25:476–81.
24. Fendrick AM, Javitt JC, Chiang YP. Cost-effectiveness of the screening and treatment of diabetic retinopathy. What are the costs of underutilization? Int J Technol Assess Health Care. 1992;8:694–707.
25. Prescott G, Sharp P, Goatman K, Scotland G, Fleming A, Philip S, et al. Improving the cost-effectiveness of photographic screening for diabetic macular oedema: a prospective, multi-centre, UK study. Br J Ophthalmol. 2014;98:1042–9.
26. Jittpoonkuson T, Garcia PM, Rosen RB. Correlation between fluorescein angiography and spectral-domain optical coherence tomography in the diagnosis of cystoid macular edema. Br J Ophthalmol. 2010;94:1197–200.
Cost-effectiveness of digital surveillance clinics with optical coherence tomography versus hospital eye service follow-up for patients with screen-positive maculopathy

IEJ Iranian Endodontic Journal 2011;6(4):146-149

ORIGINAL ARTICLE

Comparing coronal discoloration between AH26 and ZOE sealers

Maryam Zare Jahromi 1* DDS, MS; Amir Arsalan Navabi 2 DDS; Mahsa Ekhtiari 3 DDS

1. *(Corresponding author) Associate Professor of Endodontics, Dental School, Islamic Azad University of Medical Sciences, Khorasgan Branch, Isfahan, Iran. E-mail: hiva1378maryam@yahoo.com
2. Postgraduate Student, Islamic Azad University of Medical Sciences, Khorasgan Branch, Isfahan, Iran.
3. General Dentist.

INTRODUCTION: Intrinsic tooth discoloration after endodontic treatment is principally attributed to the composition of necrotic pulp tissue, hemorrhage within the pulp cavity, endodontic medicaments and/or filling materials. Residual sealer left in the pulp chamber after obturation can cause discoloration. The objective of this in vitro study was to evaluate coronal discoloration created by AH26 and ZOE sealers after four months.

MATERIALS & METHODS: Fifty intact human extracted maxillary central incisors were employed.
Access cavities were prepared in all samples and root canals were instrumented; coronal orifices were then sealed using self-cure glass ionomer. The teeth were divided into two experimental groups (n=20) according to the sealer placed in the pulp chamber: AH26 or Dorifill (ZOE). The remaining 10 teeth served as negative and positive controls (n=5 each). The access cavities were sealed with self-cure glass ionomer. Teeth were kept in an incubator for four months. Preliminary digital images of the teeth were taken and then compared with those taken at the 4-month follow-up. The images were assessed using Photoshop software. Data were analyzed using the paired t-test and the independent samples t-test.

RESULTS: The teeth filled with AH26 sealer showed significantly greater discoloration than those filled with ZOE sealer (Dorifill) (P<0.05).

CONCLUSION: AH26 sealer causes greater discoloration of the crown compared with ZOE sealer. Given this further disadvantage of AH26 sealer, Dorifill appears to be the more esthetically suitable choice.

KEYWORDS: AH26 sealer, Tooth discoloration, Zinc oxide eugenol.

Received May 2011; accepted September 2011

INTRODUCTION

Anterior tooth discoloration is an esthetic problem for both the patient and the dentist. Sources of coronal tooth discoloration can be natural (acquired) or iatrogenic (inflicted). Natural causes are those that occur as a result of tooth development disturbances or due to patients' behavior, tooth caries, or traumatic injuries (1-3). Iatrogenically induced discoloration results from different factors such as restorative or obturating materials. Many of the materials used in dentistry have the potential to cause discoloration, e.g., amalgam, Cavit and IRM; drugs such as chlorhexidine and fluoride; and intracanal medicaments such as phenolics and iodoform-based medicaments (5-7).
The most common cause of coronal discoloration is the presence of remnant sealer in the pulp chamber, which allows sealer to infiltrate into dentine tubules, leading to discoloration (8). If the materials are not removed from the pulp chamber after obturation, subsequent staining may occur (9). On the basis of the literature, most discolorations occur at the midcervical surface of teeth, where enamel is thin and enamel translucency makes dentinal discoloration apparent (10). Scanning electron microscopy studies have shown that the presence of the smear layer obstructs the penetration of sealer into the dentinal tubules (11-13). Sealer infiltration and discoloration are related to several factors, e.g., dentine thickness and sealer quality; dentists therefore require adequate knowledge of sealers' properties. Davis et al. examined coronal discoloration with Sealapex, Roth 801 (Kerr, Romulus, MI, USA) and AH26 (Detrey, Dentsply, Germany) sealers. They found that all sealers caused coronal discoloration within a few weeks, and AH26 created the greatest discoloration (10). Another study examined coronal discoloration of four sealers, including AH26, Roth 801, Kerr sealer and Sealapex; it found that even silver-free sealers cause coronal discoloration, and AH26 was again reported to have the greatest discoloration effect (14). The aim of this in vitro study was to compare discoloration caused by AH26 and ZOE sealers after four months.

MATERIALS & METHODS

Fifty intact human maxillary central incisors were included in this study; inclusion criteria were teeth with no caries, restorations, developmental defects, or coronal discoloration. The teeth were cleaned ultrasonically to remove gross debris, followed by rubber cup and pumice to remove remaining debris and stains from the coronal surface.
For evaluating color, digital photographs were taken under the same brightness, light source and environment against a black sheet. Labial surfaces were photographed and the pictures were transferred to a computer (Figure 1). RGB (Red, Green and Blue) and HSB (Hue, Saturation and Brightness) variables were used as criteria to determine color at three points on the labial surface of the samples (Figure 1) using Photoshop software.

Figure 1. The labial surface of a tooth (three points: a, b and c)

Access cavities were then prepared in all samples. Gates-Glidden drills no. 1, 2 and 3 (Mani, Tochigi, Japan) were used for coronal enlargement, and canals were prepared manually with K-files (Mani, Utsunomiya, Japan) in a step-back technique up to size 25 as the master apical file. Canals were dried using paper cones (Ariadent, Tehran, Iran), and coronal orifices were then sealed to a depth of 1 to 2 mm using self-cure glass ionomer. Teeth were randomly assigned to experimental and control groups: forty teeth were divided into two experimental groups of 20 teeth each, and 10 teeth were used as positive and negative controls (n=5 each). In group A, the pulp chamber was filled with AH26; in group B, pulp chambers received Dorifill sealer. A surgical curette was used for placing the sealers. After all the sealers had set (while samples were kept in wet gauze), the access cavities were sealed with self-cure glass ionomer (Chemfil II, De Trey Dentsply, Konstanz, Germany). In the negative control group, the above process was performed without sealer administration, and the positive controls were stained internally by filling the chamber with lysed red blood cells (15). After confirming glass ionomer setting and complete cavity seal, samples were placed in normal saline and kept in an incubator for 4 months at 37°C; the normal saline was changed every three days. After 4 months, images of the samples were taken with the same angulation, magnification and conditions.
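The per-point colour readout described above can be sketched in a few lines. The study read RGB and HSB values directly from Photoshop; the stdlib conversion below produces the same HSB quantities (hue in degrees, saturation and brightness in percent). The sample RGB values are hypothetical, not the study's measurements.

```python
import colorsys

# Assumed workflow sketch: sample RGB at a labial point, convert to HSB,
# and compute the before/after change per channel. Values are invented.

def rgb_to_hsb(r: int, g: int, b: int) -> tuple:
    """8-bit RGB -> (hue in degrees, saturation %, brightness %)."""
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    return (round(h * 360, 1), round(s * 100, 1), round(v * 100, 1))

def point_deltas(before: tuple, after: tuple) -> tuple:
    """Channel-wise change (after - before) for one sampled point."""
    return tuple(a - b for a, b in zip(after, before))

baseline = rgb_to_hsb(200, 180, 160)   # hypothetical point 'a' before sealer
followup = rgb_to_hsb(180, 160, 150)   # same point after 4 months
print(point_deltas(baseline, followup))
```

Repeating this for the three points a, b and c on each tooth gives the per-parameter discoloration values that the statistical comparison operates on.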
Photographs were again transferred to a computer, and the same three points on the labial surface as in the baseline measurement were evaluated for HSB and RGB variables with Photoshop software. The HSB and RGB variables of each sample were determined. Data were analyzed using the paired t-test and the independent samples t-test.

Table 1. Mean±SD of discoloration for RGB and HSB parameters in the different groups

Group               R            G             B             H (Hue)       B (Brightness)   S (Saturation)
Dorifill (ZOE)      -1.7±2.3     -3.6±1.2      -3.3±2.5      0.0015±2.8    -2.5±4.8         -1.5±1.8
AH26                4.6±4.1      -11.6±9.4     18.6±7.3      1.8±3.9       1.9±3.5          -8.4±3.2
Positive control    58.6±15.3    -52±13.1      -42.4±12.4    1.4±2.7       22.9±8.4         7.5±4.1
Negative control    -6.6±5.2     -0.1±1.4      -0.7±1.2      3.1±4.1       -2.7±3.3         -2.1±3.4

RESULTS

The difference in each HSB variable before and after sealer placement was analyzed using the paired samples t-test. The independent samples t-test showed that the mean discoloration with AH26 sealer (Group A) was greater than with Dorifill (Group B) for each parameter (Hue, Saturation and Brightness) (P<0.05). Mean differences in each RGB parameter (Red, Green, Blue) were also higher for AH26 than for Dorifill; this difference was significant (P<0.05) (Table 1).

DISCUSSION

An ideal sealer should have characteristics such as good adhesion, adequate seal, radiopacity, dimensional stability during setting, tissue tolerance, antibacterial effect, insolubility in tissue fluids, and no discoloration of tooth structure (3). The most common cause of coronal discoloration is the presence of remaining sealer in the pulp chamber, which allows sealer to infiltrate into dentine tubules, leading to discoloration (8). Most sealers show some degree of tooth discoloration. AH26 and Dorifill tooth discolorations were compared in this in vitro study. HSB (H = Hue, S = Saturation, B = Brightness or value) and RGB variables were measured as the basis for evaluating sample colors.
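The paired before/after comparison reported in the Results can be sketched with a stdlib-only t statistic. The brightness values below are invented for illustration, not the study's raw data, and a real analysis would also look up the p-value from the t distribution (as SPSS-style software does).

```python
from math import sqrt
from statistics import mean, stdev

# Minimal paired t statistic: one value per tooth, before vs. after.
# Sample data are hypothetical.

def paired_t(before: list, after: list) -> float:
    diffs = [a - b for a, b in zip(after, before)]
    return mean(diffs) / (stdev(diffs) / sqrt(len(diffs)))

brightness_before = [70.0, 68.5, 72.0, 69.0, 71.5]
brightness_after  = [66.0, 65.0, 68.5, 64.5, 67.0]
print(round(paired_t(brightness_before, brightness_after), 2))  # -17.89
```

A large negative t statistic here corresponds to a consistent drop in brightness after sealer placement, which is the pattern the study reports for AH26.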
This method seems to be more sensitive in detecting color changes than the visual clinical method that Parsons et al. used in their study (16). The term hue refers to the actual color (i.e., red), whereas saturation is a measure of chroma or color intensity (i.e., cherry red versus brick red). Brightness describes shades of gray within a particular hue (i.e., pink versus red). In the Van der Burgt et al., Parsons et al. and our studies, application of sealer into the pulp chambers was similar (8,16). Van der Burgt et al. used EDTA and NaOCl for irrigation during canal preparation, whereas Parsons et al. used only NaOCl (8,16). In a clinical situation, effort should be made to remove as much of the sealer as possible. Remnants of sealer, however, are often left behind, and these will likely induce some discoloration. The results of this study concurred with those of Van der Burgt et al., who showed that AH26 sealers cause a grayish discoloration and ZOE sealers cause a light red to orange discoloration (8). In this study, AH26 showed more grayish discoloration according to the brightness parameter of HSB. In the Kraus and Jordan study, which examined sealer infiltration into dentine tubules, the presence of sealer in the pulp chamber was the recognized cause of discoloration, and AH26 was noted to have the greatest discoloration effect, which agrees with the results of the current study (14). The Parsons et al. study also confirmed that AH26 causes the greatest discoloration (16).

CONCLUSION

In this in vitro study, AH26 caused greater mean discoloration than Dorifill sealer after 4 months. Therefore, Dorifill (ZOE) sealers seem more appropriate for root canal treatment of anterior teeth. Similar studies with larger samples and more types of sealers, as well as periodic discoloration assessments, are recommended.

ACKNOWLEDGMENT

This study was approved by the Ethical Committee of Islamic Azad University, Khorasgan Branch. Conflict of Interest: 'none declared'.
 IEJ Iranian Endodontic Journal 2011;6(4):146-149 REFERENCES 1. Grossman L: Obturation of the canal. In: Lea and Febiger: Endodontic Practice, 9 th Edition. Philadelphia, 1981:pp. 326-7 2. Driscoll WS, Horowitz HS, Meyers RJ, Heifetz SB, Kingman A, Zimmerman ER. Prevalence of dental caries and dental fluorosis in areas with optimal and above-optimal water fluoride concentrations. J Am Dent Assoc 1983;107:42-7. 3. Trope M: Trauma Injuries. In: Cohen S, Burns R: Pathways to the pulp, 7 th Edition. St Louis, USA: Mosby Inc, 1998. pp. 463-473. 4. Wagnild G:W Restoration of the Endodontically Treated Tooth. In: Cohen S, Burns R: Pathways of the pulp. 7th Edition. St. Louis: Mosby, 1998: pp. 691-717. 5. van der Burgt TP, Plasschaert AJ. Tooth discoloration induced by dental materials. Oral Surg Oral Med Oral Pathol 1985;60:666-9. 6. Tredwin CJ, Scully C, Bagan-Sebastian JV. Drug-induced disorders of teeth. J Dent Res 2005;84:596-602. 7. Ingle JI. Endodontics, 5th Edition. Hamilton London: Bc Decker Inc, 2008: pp.845-58. 8. van der Burgt TP, Mullaney TP, Plasschaert AJ. Tooth discoloration induced by endodontic sealers. Oral Surg Oral Med Oral Pathol 1986;61:84-9. 9. Walton R: Bleaching Discolored teeth; internal and external. In: Walton RE, Torabinejad M: Principles and practice of Endodontics. 2 th Edition. Philadelphia: W.B. Saunders, 1996. pp. 385-400. 10. Davis MC, Walton RE, Rivera EM. Sealer distribution in coronal dentin. J Endod 2002;28:464-6. 11. Calt S, Serper A. Dentinal tubule penetration of root canal sealers after root canal dressing with calcium hydroxide. J Endod 1999;25:431-3. 12. Kouvas V, Liolios E, Vassiliadis L, Parissis- Messimeris S, Boutsioukis A. Influence of smear layer on depth of penetration of three endodontic sealers: an SEM study. Endod Dent Traumatol 1998;14:191-5. 13. Okşan T, Aktener BO, Sen BH, Tezel H. The penetration of root canal sealers into dentinal tubules. A scanning electron microscopic study. Int Endod J 1993;26:301-5. 14. 
Jordan R: The Dentin. In: Kraus B, Jordan R, Abrams L: Dental anatomy and occlusion, 7 th Edition. Baltimore The Williams Wilkins Company, 1976. pp. 159. 15. Freccia WF, Peters DD. A technique for staining extracted teeth: a research and teaching aid for bleaching. J Endod 1982;8:67-9. 16. Parsons JR, Walton RE, Ricks-Williamson L. In vitro longitudinal assessment of coronal discoloration from endodontic sealers. J Endod 2001;27:699-702. work_vshrgc6eubfjfo3f7joktbdwgu ---- Virtual Temporal Bone Anatomy Journal of Otology 2007 Vol. 2 No.1 Corresponding author: XIA Yin, M.D, Ph. D, Department of Otolaryngology-Head & Neck Surgery, Beijing Tongren Hospi⁃ tal, Capital Medical University, No.1 Dongjiaominxiang Street, Dongcheng District, Beijing, China, 100730 Business Tele⁃ phone: (86+) 58269134;E-mail: xiayin3@163.com Virtual Temporal Bone Anatomy XIA Yin, LI Xi-ping, HAN De-min, ZHOU Guo-hong, ZHAO Yuan-yuan Department of Otorhinolaryngology, Beijing Tongren Hospital, Capital Medical University, Beijing, China, 100730 (Xia Y, Li XP, Han DM) Biomedical Academy of Capital Medical University (Zhou GH and Zhao YY) Correspondence to: Dr. Han Demin, Department of otolaryngology, Beijing Tongren Hospital, Beijing, China, 100730 (Email: handm@trhos.com) This study was supported by a grant from Beijing Natural Science Foundation (7212008, 7031001) Background The Visible Human Project(VHP)initiated by the U.S. National Library of Medicine has drawn much attention and interests from around the world. The Visible Chinese Human(VCH)project has started in China. The current study aims at acquiring a feasible virtual methodology for reconstructing the temporal bone of the Chi⁃ nese population, which may provide an accurate 3-D model of important temporal bone structures that can be used in teaching and patient care for medical scientists and clinicians. Methods A series of sectional images of the tempo⁃ ral bone were generated from section slices of a female cadaver head. 
On each sectional image, SOIs (structures of interest) were segmented by carefully defining their contours and filling their areas with certain grey scale values. The processed volume data were then imported into the 3D Slicer software (developed by the Surgical Planning Lab at Brigham and Women's Hospital and the MIT AI Lab) for resegmentation and generation of a set of tagged images of the SOIs. 3D surface models of SOIs were then reconstructed from these images. Results The temporal bone and structures within it, including the tympanic cavity, mastoid cells, sigmoid sinus and internal carotid artery, were successfully reconstructed. The orientation of and spatial relationships among these structures were easily visualized in the reconstructed surface models. Conclusion The 3D Slicer software can be used for 3-dimensional visualization of anatomic structures in the temporal bone, which will greatly facilitate the advance of knowledge and techniques critical for studying and treating disorders involving the temporal bone. Keywords 3-D reconstruction; temporal bone; Chinese Virtual Human

Introduction

The rapid development of modern computer technology and computer image processing techniques has promoted the emergence and development of many new scientific research areas. The Visible Human Project (VHP)[1-2] initiated by the United States National Library of Medicine is one good example. VHP aims at building a complete dataset of high-resolution, 3-dimensional, color human anatomy models. At the same time, the concept of the virtual human being has been put forward, which provides a unified platform for building models of the human life system that will facilitate the transition of medicine from empirical to theory- and evidence-based science and from microscopic subsystem studies to macroscopic systemic studies.
At present, research on the virtual human being is limited to morphological studies, and key points currently under study include image registration, color image segmentation, surface reconstruction of anatomical structures, multiresolutional representation of reconstructed surfaces, and interactive browsing of volume rendering results. We have reconstructed selected temporal bone structures based on the first Chinese virtual human dataset for the purpose of gaining practical experience and collecting needed data for the Virtual Chinese Human (VCH) project.

Material and method

Acquisition of sectional image data[3-4]
In collaboration with the Anatomy Institute, First Military Medical University (Guangzhou, China), 0.2 mm cadaver head sections were produced using a high-precision milling machine. A total of 2128 successive cross-sections were photographed using a digital camera. Images generated in this manner were free from color distortion and structure displacement.

Data processing

Image segmentation: 1) On each sectional image, contours of SOIs were marked on the Photoshop software platform. 2) For each SOI, an identical grey scale value was assigned to the entire area representing the structure on each single sectional image. For the same structure, its assigned grey scale value on each sectional image was kept distinct from the background. SOIs were extracted based upon their assigned grey scale values.

Import and processing of two-dimensional images: All sectional images were saved as a series of computer files with numerical extension names representing the order of the images. The files were then imported into the 3D Slicer software, which reconstructed sectional images on the other two perpendicular planes using the imported data. Data on the newly constructed two planes were also used to refine the definition of SOI contours on the original sectional images for improved visual effects.
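The segmentation step described above amounts to a label-value lookup: each SOI is recovered from a slice by selecting the pixels that carry its assigned grey value. A minimal sketch of this idea in pure Python; the `extract_soi` helper, the toy slice, and the grey values 120 and 200 are illustrative inventions, not part of the original pipeline:

```python
def extract_soi(slice_img, grey_value):
    """Return a binary (1/0) mask of the structure of interest (SOI)
    whose area was filled with `grey_value` during manual labelling."""
    return [[1 if px == grey_value else 0 for px in row] for row in slice_img]

# Toy 4x4 labelled slice: background = 0, tympanic cavity = 120, sigmoid sinus = 200
slice_img = [
    [0,   0,   120, 120],
    [0,   120, 120, 0],
    [200, 200, 0,   0],
    [200, 0,   0,   0],
]

cavity = extract_soi(slice_img, 120)
print(sum(map(sum, cavity)))  # pixel count of the cavity label -> 4
```

Stacking such per-slice masks across all 2128 sections yields the labelled volume from which surface models are generated.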
Volume data were edited and segmentation refined through adjusting thresholds in order to increase segmentation precision and reliability and to produce a series of color-labeled SOI images. Data of labeled images were subjected to additional computing to smooth surface image noise and sharpen their borders.

Three-dimensional reconstruction and visualization: Surface models of SOIs were generated using the 3D Slicer software by selecting the color-labeled volumes of the various SOIs. The linked 3-D and 2-D windows in the software allow viewing sectional images of preference along selected axes.

Results

1. Three-dimensional surface models of several structures in the temporal bone were reconstructed, including the temporal bone, mastoid, and sigmoid sinus. A surface model of the head was also reconstructed to demonstrate the orientations and positions of the SOIs within the head (Figures 1-4).
2. SOIs could be selected for viewing, and the spatial relationships between different structures could be visualized. "Stripping" browsing of various structures was also available. Reconstructed structures could also be rotated through all angles, allowing enhanced observation of various structures.

Figure 1. Lateral view of reconstructed temporal bone.
Figure 2. Internal view of reconstructed temporal bone.

Discussion

Introduction to the Virtual Chinese Project
Virtual human being refers to building a complete, systematic and interactive digital human model representing the anatomy, physics, physiology and biochemistry of the human body, to meet the needs in medical education and clinical research for improvement in diagnosis and treatment of diseases. Multiple virtual human research projects are currently underway and much progress has been made. Some of these projects include the VHP, the Visible Korean Project and the Voxel-Man project at Hamburg University in Germany.
The goals of these projects are to build accurate 3-dimensional anatomical models using virtual computing technologies and to integrate these models into the development of new surgical techniques. Different technical approaches have been adopted in these projects for dataset acquisition and application. Considering racial differences and the different goals between these projects and the VCH project, datasets used in the existing projects may not be completely appropriate for the VCH project. Therefore, there is a need for building datasets based on information collected from the Chinese population[5]. The VCH project was officially initiated in China as part of the so-called "863 Plan", a state-sponsored national effort to boost advances in science and technology in China. The project is assigned to the Institute of Computing Technology at the Chinese Academy of Sciences, the First Military Medical University, Capital University of Medical Sciences and Huazhong University of Science and Technology. So far, datasets of three fresh cadavers, including one male, one female and one female fetus, have been acquired.

Comparison between CT scan- and histological slice-based reconstruction
Reconstruction of human anatomical structures has great significance for understanding the complex spatial relationships among different structures and is a useful tool for developing new surgical approaches and for surgery simulation. It has been a fruitful research area worldwide for the past 10 years. Training simulation software for surgical procedures in the ear, nose and throat, including mastoidectomy and rhinoendoscopic procedures, has been developed in Germany and the U.S. There have been a number of studies both in China and other countries on reconstruction of anatomical structures using CT images[6-8].
However, because CT images are derivative information, with false color representation and relatively coarse section thickness, they are inadequate for reconstruction of high-precision models. Reconstruction using traditional histological slices has also been less than satisfactory, due to the overwhelming complexity of the process and the inherent problem of tissue displacement. Since the initiation of the U.S. VHP, frozen section data acquisition technology has aroused much interest among researchers around the world. Frozen sectioning greatly minimizes the tissue displacement problem. In addition, the application of digital photography further reduces errors in image acquisition. More importantly, frozen sectioning allows much thinner sections than CT scans while preserving information on all anatomical structures. Reconstruction of high-resolution structure models puts high demands on technologies for image processing and structure extraction and, ultimately, on the computing capacity of the computer. Color image segmenting techniques do not yet meet the requirements of 3-dimensional reconstruction. Multidisciplinary cooperation is required for breakthroughs in this research area.

Comprehending the complex temporal bone anatomy has always been a challenge for researchers and clinicians. Satisfying reconstruction of temporal bone structures has yet to be achieved. Reconstruction using frozen section slices can be an important advance in this area, although by no means an easy task. Compared to the numerous reports on CT-based reconstruction, reports on reconstruction using frozen section slices are rare. Harada[9] was among the earliest to report temporal bone reconstruction using histological slices, followed by Lutz[10]. Their success was limited by the available slicing techniques, computer technology and reconstruction computing techniques at the time. Partial temporal bone reconstruction using histological slices was later attempted by other researchers. In 2000, Mason[11] tried reconstructing the temporal bone using 630 histological slices. In his work, the omission of labeling acquired images created tremendous difficulties for re-aligning structures at later reconstruction stages.

Figure 3. Transparent view of the tympanic cavity (green), mastoid (blue) and sigmoid sinuses (yellow) in the temporal bone.
Figure 4. Positions of reconstructed structures in the head.

In this study, we attempted to reconstruct the complex temporal bone structures using data from the VCH dataset No. 1. Three-dimensional images of the main structures in the temporal bone were successfully reconstructed with good viewing effects. This is an important supplement to the VCH project and lays a foundation for future surgical simulation research. Our work is limited by the lack of an ideal method for extracting mastoid air cells and subtle structures such as the ossicles and inner ear structures. This is due to the relatively low resolution of the images acquired from frozen section slices. Extraction errors are inevitable in the presence of ambiguous borders between structures. Qiu et al[12] reported temporal bone reconstruction using a surface rendering method that resulted in improved demonstration of the relationships among the fallopian canal, sigmoid sinus, ossicles and labyrinth. We suggest that imaging of thin histological slices from a single fresh temporal bone may be required to accurately reconstruct fine structures such as the semicircular canals, cochlea, ossicles and middle ear ligaments. Sorensen[13] has established a temporal bone dataset from 605 sectional images that were acquired by digitally photographing 25 μm fresh frozen temporal bone slices. He has not, however, yet reconstructed the structures. Pflesser[14] has developed a mastoidectomy simulation system based on CT images.
Chen[7] has also reported preliminary work on CT image-based mastoidectomy simulation. It has to be acknowledged that, at this stage, reconstruction based on histological slices is not completely satisfactory and has yet to surpass the quality of reconstruction based on high-resolution CT images. However, it is expected that, as technologies for image capturing and processing improve, the results of histological slice-based reconstruction will eventually exceed those of CT-based reconstruction. Continued research in this area is therefore critical. We intend to continue the present work toward the goal of developing an otolaryngological surgery simulation system that reflects the unique characteristics of the Chinese population and is suitable for clinical application.

References
1. Spitzer V, Ackerman M, Scherzinger A, et al. The Visible Human Male: a technical report. J Am Med Inform Assoc 1996;3:118-130.
2. Ackerman MJ. The Visible Human Project: a resource for anatomical visualization. Medinfo 1998;9:1030-1032.
3. Yuan L, Huang WH, Tang L, et al. Image processing in treatment of digitized virtual Chinese No. 1 female. Chin J Clin Anat 2003;21:193-196.
4. Zhong SZ, Yuan L, Tang L, et al. Research report of experimental database establishment of digitized Virtual Chinese No. 1 female. J First Mil Med Univ 2003;23:196-200.
5. Yuan L, Huang WH, Tang L. The generalization of visible human research. Chin J Clin Anat 2002;20:341-342.
6. Dong S, Liu S, Yang BT, et al. Three-dimensional reconstruction of nasal anatomical structures based on CT images. Journal of Capital University of Medical Sciences 2003;24:115-117.
7. Chen HX, Xu G, Yu CT, et al. Experimental study on computer-aided mastoidectomy simulation. Chin J Clin Anat 2003;21:129-131.
8. Hilbert M, Muller W. Virtual reality in endonasal surgery. Stud Health Technol Inform 1997;39:237-245.
9. Harada T, Ishii S, Tayama N. Three-dimensional reconstruction of the temporal bone from histological sections.
Arch Otolaryngol Head Neck Surg 1988;114:1139-1142.
10. Lutz C, Takagi A, Janecka IP, et al. Three-dimensional computer reconstruction of a temporal bone. Arch Otolaryngol Head Neck Surg 1989;101:522-526.
11. Mason TP, Applebaum EL, Rasmussen M, et al. Virtual temporal bone: creation and application of a new computer-based teaching tool. Arch Otolaryngol Head Neck Surg 2000;122:168-173.
12. Qiu MG, Zhang SX, Tan LW, et al. Computer-aided 3-dimensional reconstruction of temporal bone. Acta Academiae Medicinae Militaris Tertiae 2001;23:1201-1203.
13. Sorensen MS, Dobrzeniecki AB, Larsen P, et al. The visible ear: a digital image library of the temporal bone. ORL 2002;64:378-381.
14. Pflesser B. A virtual reality simulator for petrous bone surgery with haptic feedback. ORL 2002;64:473-478.

Measuring lunchtime consumption in school cafeterias: a validation study of the use of digital photography

Mariel Marcano-Olivier*, Mihela Erjavec, Pauline J Horne, Simon Viktor and Ruth Pearson
The Centre for Activity and Eating Research, Bangor University, School of Psychology, Brigantia, Penrallt Road, Bangor, Gwynedd, LL57 2AS, UK
Submitted 23 October 2017: Final revision received 20 December 2018: Accepted 21 January 2019: First published online 4 April 2019

Abstract
Objective: The present study tested the validity of a digital image-capture measure of food consumption suitable for use in busy school cafeterias.
Design: Lunches were photographed pre- and post-consumption, and food items were weighed pre- and post-consumption for comparison.
Setting: A small research team recorded children's lunchtime consumption in one primary and one secondary school over seven working days.
Participants: A primary-school sample of 121 children from North Wales and a secondary-school sample of 124 children from the West Midlands, UK, were utilised. Six children were excluded because of incomplete data, leaving a final sample of 239 participants.
Results: Results indicated that (i) consumption estimates based on images were accurate, yielding only small differences between the weight- and image-based judgements (median bias = 0·15–1·64 g, equating to 0·45–3·42 % of consumed weight) and (ii) good levels of inter-rater agreement were achieved, ranging from moderate to near perfect (Cohen's κ = 0·535–0·819). This confirmed that consumption estimates derived from digital images were accurate and could be used in lieu of objective weighed measures.
Conclusions: Our protocol minimised disruption to the daily lunchtime routine, kept attrition low, and enabled better agreement between measures and raters than was the case in the existing literature. Accurate measurements are a necessary tool for all those engaged in nutrition research, intervention evaluation, prevention and public health work. We conclude that our simple and practical method of assessment could be used with children across a range of settings, ages and lunch types.

Keywords: Validation; Consumption; Digital photography; Cafeteria; School; Visual estimation; Nutrition

In the past two decades, the advent of affordable, easy-to-use, high-resolution digital cameras has provided researchers with a convenient new tool for dietary assessment. The appeal of this method includes the creation of objective records which can be examined in several ways, by more than one independent coder and to a greater level of detail than is the case with visual estimation of consumption performed 'in situ'(1). Using digital image capture, small teams of observers, causing minimal disruption in busy dining environments, can capture information on portions (servings) and plate waste from a large cohort of participants(2).
Public Health Nutrition: 22(10), 1745–1754. doi:10.1017/S136898001900048X. © The Authors 2019. *Corresponding author: Email m.marcanoolivier@chester.ac.uk

In principle, this information can subsequently be stored, re-analysed and shared. Such improvements in reliability and replicability have led to digital image collection replacing or enhancing the more traditional methods for estimating consumption, including direct methods (such as visual estimation by a group of observers present at meals) and indirect methods (such as using dietary diaries or recall), as manifest in the emergence of recent reports investigating how images can complement other forms of dietary assessment as prompts and as complementary data sources(3–5). However, the present study considers the use of digital image capture to measure consumption behaviour in a more controlled environment, where images are not recorded freely, directly by consumers, but in a controlled and highly replicable setting. Many studies have used image-assisted visual estimation without reporting the validity or reliability of this method(6,7), but several validation reports have also appeared in the literature. Some of these publications have examined the reliability of image-based visual estimation methods(8–10), but seldom do they examine the method's accuracy against a criterion measure. Others have
For example, Williamson et al.(11) have used a contrived scenario where plates of food were arranged by the researchers and plate waste mimicked by subtracting precisely weighed amounts of foods, and Sabinsky et al.(12) assessed accuracy in consumption estimations from images of typical sandwiches that children may bring from home to school, although these sandwiches were created by researchers in order to simulate a standard home-provided lunch. These studies show that, in princi- ple, raters’ estimates based on digital images can be sound, but they cannot test the validity of data collection proto- cols performed under free-living conditions. Pouyet et al.(13) addressed this issue by examining image-based dietary assessment in a geriatric setting, and Nicklas et al.(14) looked at utilising caregivers as data collectors, using iPhones to remotely photograph total weekly food consumption of pre-school children. How- ever, these studies have administered their protocols in potentially less chaotic environments, such as in the home or elderly care home dining areas, where there may be more opportunities to capture images, without the time constraints typical of a school cafeteria. Taylor et al.(15) attempted to validate digital image capture in a real-life school canteen setting; however, although they report that digital image capture has the potential to be used as a method of collecting nutritional data, they focused on fruit and vegetable consumption and did not consider other food types. 
Hanks et al.(16) considered a broader spectrum of food types in their attempt to validate the use of digital image capture; however, data were collected during one lunch period only and available foods were those that are typically distributed in pieces and do not mix, such as chicken nuggets, sandwiches or cookies. These are very different from 'wet' foods like stews, curries or baked beans that are sauce-based and spread on the plate, mixing with other ingredients, which makes the plate waste much more difficult to estimate. In a systematic review of evidence for image-assisted dietary assessment, Gemming et al.(17) called for better validation studies using criterion measurement and protocols capable of capturing information in free-living research with children and adolescents. To our knowledge, only one recent investigation reported to have validated its method of visual estimates based on images against weighed measures with school-provided meals' data collected in two primary-school cafeterias(18), albeit using very generous agreement criteria. Considering this gap in the literature, the present study has been designed to test the validity of a simple but versatile protocol for collection of consumption data in free-living cafeteria environments, in primary- and secondary-school settings, and for meals provided both by caterers and by parents.

Methods

Aim
The present study was designed to test the validity of the use of digital image capture as a method of nutritional data collection in busy school cafeterias, by: (i) comparing estimates of consumption from digital images with weighed measures; and (ii) establishing inter-rater reliability of image-based estimates.

Participants
Following parental consent, 121 children from a rural primary school in North Wales and 124 children from an urban secondary school in the West Midlands, UK, took part.
Both samples were gender balanced (sixty-seven females in primary and fifty-nine in secondary school) and represented a wide range of ages: 5–10 years for primary school (twenty-four children in Year 1; twenty-five in Year 2; twenty-three in Year 3; twenty in Year 4; and twenty-nine in Year 5) and 11–18 years for secondary school (thirty in Year 7; seventeen in Year 8; thirty-five in Year 9; twenty-three in Year 10; and nineteen in Year 13). Participants were of predominantly Caucasian origin, reflecting the demographics of their regions. Six children were excluded because of incomplete data (e.g. no post-consumption image was captured), leaving a final sample of 239 participants. Each child contributed data for one lunchtime meal.

Materials
To capture images, four digital cameras were used (Fujifilm FinePix, 16 megapixels, model no. AX650). To standardise image capture, cameras were positioned on tripod stands (Tiffen Davis and Sanford Vista EXPLORER 60-inch tripod), with tape measures and protractors available to ensure correct set-up; the camera was approximately 45 cm away from the plate and at a 45° angle. This ensured that images contained the consistent size and depth information necessary for coding. Food items were displayed either on paper plates for lunchbox meals or on plastic school dinner trays. Plain white paper participant identification tags were attached to lunchboxes. White self-adhesive participant identification labels were attached to red metallic wristbands given to each participant to wear during lunchtime, and to the plate/tray for later coding of the food and waste in each image. Non-latex gloves were worn by researchers at all times when handling food items.

Procedure
Data were recorded over four consecutive days (Monday–Thursday) in the primary school and three consecutive days (Monday–Wednesday) in the secondary school. On these days, researchers arrived at the school prior to the
registration period and set up a data collection area in the school gym. Then, one researcher entered each participating classroom during the registration period to collect lunchboxes, distribute participant identification labels (placed on wristbands) and attach additional participant identification labels to corresponding lunchboxes (if children had brought lunch from home). Those children who ate school dinners were told they would be given another sticker at lunchtime to put on their dinner tray. Researchers then described what participants would be asked to do at lunchtime. Pre-consumption images and weights were then taken for each food item provided to the children. The protocol differed depending on whether the participant had a lunchbox or was given a school dinner.

Lunchboxes
Participants' lunchboxes were collected during registration and taken to the study area to be photographed. The contents of each box were spread on a paper plate. They were clearly visible, and any items that could be unwrapped (e.g. sandwiches in tin foil or cling film) were exposed for the purpose of the image. Those items that could not be unwrapped (e.g. yoghurts) were photographed and weighed in their wrapping, and the weight of each wrapping type (e.g. small yoghurt pot) was deducted from the pre-weight record. Similarly, if an item was served in an unusual container (e.g. a thermos flask), the lid was removed for the purpose of the pre-consumption image, the whole container was weighed and the weight of the container was deducted from this when a post-consumption measurement was obtained (at this point, any waste food could be emptied into a plastic cup in order to obtain the true weight of the container and returned to the container once it had been weighed).
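The container and wrapper corrections described above reduce to simple tare arithmetic: a gross measurement minus a known packaging weight gives the edible net weight. A minimal sketch; the `net_weight` helper and all packaging weights are invented for illustration, not values from the study:

```python
# Known packaging (tare) weights in grams -- illustrative values only
TARE_G = {"yoghurt_pot": 5.0, "thermos_flask": 250.0}

def net_weight(gross_g, packaging=None):
    """Deduct the packaging weight from a gross measurement, if any."""
    return gross_g - (TARE_G[packaging] if packaging else 0.0)

print(net_weight(130.0, "yoghurt_pot"))   # yoghurt weighed in its pot -> 125.0
print(net_weight(100.0))                  # unwrapped item, no correction -> 100.0
```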
Items were then weighed individually and these weights were recorded. Those items that comprised more than a single component (e.g. a 'ham sandwich') were weighed as a single item and the weights of fillings were approximated based on separate measurements (see below). Lunchboxes were restored and returned to participants after morning break time.

School dinners
Estimated food weights were calculated by asking caterers to serve researchers five portions of every food item available to children. Each portion was weighed on a plastic dinner tray and from these weights a mean was calculated for each food item. The portion that was closest to the mean for that food item was photographed (to be used as a reference for a typical portion). At lunchtime, participants were instructed to come to researchers after they had been served their lunch, but before they sat down to eat, so that a pre-consumption image could be recorded for each child. One researcher was stationed at the end of the dinner queue to collect pre-consumption images, with a second researcher, collecting post-consumption images, positioned at the back of the dinner hall by the waste bins, to protect against attrition from children disposing of waste food before it had been photographed. Tripods and cameras were set up prior to lunchtime to be clearly focused on an area on the table in front of them, so that dinner trays could easily be slid into focus and an image captured in a matter of seconds. At lunchtime, all children sat down to eat as usual. Once the participants had finished eating, they handed their lunchbox or dinner tray to the researchers positioned at the back of the hall. Researchers photographed the dinner trays or contents of each lunchbox and weighed each remaining food item individually (in the same manner as the pre-consumption data collection) before returning lunchboxes to participants or disposing of plate waste and returning dinner trays to the cafeteria staff.
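The reference-portion step above can be expressed in a few lines: weigh five servings, take the mean, and keep the serving closest to that mean as the photographic reference. A short sketch; the function name and the portion weights are made-up example values, not study data:

```python
def reference_portion(weights_g):
    """Return (mean weight, weight of the served portion closest to the mean)."""
    mean_g = sum(weights_g) / len(weights_g)
    closest = min(weights_g, key=lambda w: abs(w - mean_g))
    return mean_g, closest

# Five weighed servings of one food item, in grams (illustrative)
mean_g, ref = reference_portion([60.0, 62.0, 64.0, 66.0, 68.0])
print(mean_g, ref)  # 64.0 64.0
```

The mean also becomes the reference portion weight used later when converting percentage-consumed estimates into grams.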
Data processing and coding

Weighed consumption measures
For each child, consumption was calculated by subtracting the post-consumption weight from the estimated pre-consumption weight (or the known pre-consumption weight stated on branded snack packaging) for each recorded food item.

Consumption estimates from digital images
Utilising images collected during our unpublished pilot work, a consumption analysis training protocol was developed for the present study. A representative sample of images from the pilot data set, showing a variety of home- and school-provided lunches and the associated plate waste, were coded jointly and then independently by a pair of raters (who had also been present at school sites for data collection). The percentage consumed for each food item was estimated to the closest 10 % (on an 11-point scale, from 0 to 100 % consumed) using the pre- and post-consumption images. Successful completion of the training, as manifest in the raters perfectly matching their ratings on over 90 % of items, was achieved in approximately two working days. Following training, the lead researcher coded all data; to calculate inter-rater agreement, a second rater independently coded 40 % of the total food items. Each participant's meal took approximately 30 s to estimate the percentage of each food item consumed, with an additional minute to convert these percentages into estimated weights. Next, these percentage consumption estimates were converted to weights. The weight in grams for each food item in lunchboxes was judged by referring to product information published by the manufacturer (e.g. a Nutri-grain® soft baked fruit cereal bar weighs 37 g according to published product information, and so this was the weight recorded for Nutri-grain bars and supermarket own-brand varieties). Where this information was unavailable (e.g. for
sandwiches), an average sandwich weight was calculated from displayed product information (e.g. the average 'medium' slice of bread weighs 40 g, the average 'small' bread roll weighs 60 g) and by weighing samples (e.g. making five cheese sandwiches and weighing the components independently to estimate an average sandwich filling weight for commonly presented food items). For example, the average cheese sandwich on sliced bread was estimated to weigh 100 g in total, with additional fillings (e.g. cheese and ham) increasing the estimated weight by 20 g per filling, or 5 g per salad filling (e.g. cheese and lettuce). Participants were also often presented with pieces of fruit, and so estimates were calculated from an average-sized piece of fruit (e.g. an average apple weighs 70 g, with 60 g edible flesh, minus 10 g for the core; an average 'snack size/kid's size' apple weighs about 50 g with 40 g edible flesh). Following this protocol, it was possible to estimate the weight in grams of each food item that children consumed. For example, if a participant was judged to have consumed 70 % of a Nutri-grain bar then 26 g was consumed, or if a participant consumed 80 % of a mean 64 g portion of carrots then 51 g was consumed.

Preliminary data analyses
All data were entered into the statistical software package IBM SPSS Statistics version 22. Where the first and second coder disagreed on how much of a food item was consumed by 10 %, the estimate from the first coder was taken; where they disagreed by more than 10 %, the mean of the two estimates was used to calculate the estimated weight consumed. Total weights of food consumed by each participant were calculated by adding the weights from each recorded item.
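The conversion and coder-disagreement rules above can be sketched directly. The function names are ours; the reference weights are the worked examples from the text (a 37 g cereal bar and a 64 g mean carrot portion):

```python
def estimated_grams(percent_consumed, reference_weight_g):
    """Convert a rater's percentage estimate (0-100, in steps of 10)
    into grams, using the item's reference portion weight."""
    return round(percent_consumed / 100 * reference_weight_g)

def resolve_raters(first_pct, second_pct):
    """The first coder's estimate stands when raters differ by no more than
    10 percentage points; otherwise the mean of the two estimates is used."""
    if abs(first_pct - second_pct) <= 10:
        return first_pct
    return (first_pct + second_pct) / 2

print(estimated_grams(70, 37))  # cereal bar example -> 26
print(estimated_grams(80, 64))  # carrot portion example -> 51
```

Summing `estimated_grams` over every item on a tray reproduces the per-participant total described in the text.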
Next, to provide more detailed validation measures, all food items were allocated to one of four broad categories: (i) Main Starch items; (ii) Fruit and Vegetables; (iii) Meat, Dairy and Wet foods (stews, curries, pasta sauce, etc.); and (iv) Snacks. These categories were based on similarity in the way the food items appear on a plate (e.g. compact (a potato) or spread (baked beans)); the approximate weight of servings (e.g. Snacks (crisps) weigh less than a Main Starch item (jacket potato)); and the approximate volume of the food items. All food items were categorised prior to analysis into the category that best represented their properties. For example, a yoghurt could be considered a common snack, but was categorised as dairy since its volume and density are more typically shared by Meat, Dairy and Wet foods (such as beans or custard) than by those in the Snack category (such as crisps); sandwiches, although potentially containing foodstuffs from other categories, were considered a Main Starch item, as the majority of their weight and volume was bread – a starchy foodstuff. All categories were kept broad so that each contained enough data items to sufficiently power the subsequent analyses.

For lunchboxes, the Main Starch was typically a sandwich, while for school dinners it was more varied, with potatoes, pasta, rice and pizza regularly served. In the Fruit and Vegetable category, a typical lunchbox portion included bananas, apples and cucumber, while participants who ate school dinners were more likely to be served peas, sweetcorn or carrots. Meat, Dairy and Wet food items in lunchboxes were typically yoghurts or cocktail sausages, while items commonly served in school dinners included sausages, custard and baked beans. Finally, in lunchboxes, regularly presented Snack items included packets of crisps, cake bars and cookies, while for school dinners they included shortbread and brownies, often provided as the 'sweet'.
Statistics and sample size calculations
As all data between groups were positively skewed, Mann–Whitney U tests were used to identify differences between groups (e.g. primary/secondary; lunchbox/school dinner meals) and the median was used as the measure of central tendency. One-sample t tests were used to identify any significant differences between consumption estimates derived from digital images and the criterion measurement.

Comparing weight- and image-based data
Bland–Altman plots were used to assess the agreement between the criterion measurement and the image-capture method. Previously published research utilising this analysis does not typically report sample size calculations, although a sample of n = 100 would promote a sensitive analysis(18), and so all samples on which a Bland–Altman analysis was conducted exceeded n = 100. Percentage relative error (%RE) is a measure of precision and is the ratio of the absolute error (the difference between two measurements) to the size of the actual measure, expressed as a percentage:

δ = 100 % × η = 100 % × ε/|ν|,

where δ = %RE, η = relative error, ε = absolute error (digital image estimate − criterion measure (actual) value) and |ν| = criterion measure value. This was used to consider the acceptability of the magnitude of the bias.

Performance evaluation of the model
Estimates of model accuracy, which are sometimes termed 'performance evaluation estimates', are needed in cross-validation studies and can be used to support other estimates of agreement between observations (e.g. Cohen's κ). Accordingly, accuracy can be operationalised as the distance between the estimated (e.g. image-based estimates) and/or observed values (weighed estimates) and the true value(19). Root-relative-squared error (RRSE) is one of a family of model accuracy estimates that takes
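The %RE definition amounts to a one-line calculation. The sketch below is illustrative (the function name is ours); the worked example is chosen to match the Fruit and Vegetable figures reported later in the Results (a 1·64 g overestimate of a 47·91 g mean portion, i.e. about 3·42 %).

```python
# Percentage relative error (%RE) as defined in the text: the absolute
# error (image estimate minus criterion weight) as a percentage of the
# criterion weight. Function name is illustrative, not the study's code.

def percent_relative_error(image_estimate_g, criterion_g):
    absolute_error = image_estimate_g - criterion_g
    return 100.0 * absolute_error / abs(criterion_g)

# e.g. a 49.55 g image-based estimate of a 47.91 g weighed portion:
print(percent_relative_error(49.55, 47.91))  # ~3.42 % overestimate
```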
the total squared error and normalises it by dividing it by the total squared error of the simple predictor. In doing so, it reduces the error to the same dimensions as the quantity being predicted, putting them on a similar unit(20). Estimates of error, performance evaluation and model accuracy can be expressed as a percentage. In the present study the estimate of RRSE was 10·60 %, which indicates low error and high accuracy: 89·40 % (100 − 10·60). The model performance estimate of the present study is also in line with the level of agreement reported using Cohen's κ. RRSE was calculated as follows:

E_i = √[ Σ_{j=1..n} (P_ij − T_j)² / Σ_{j=1..n} (T_j − T̄)² ],

where E_i = RRSE, P_ij = predicted value, T_j = target value and T̄ = (1/n) Σ_{j=1..n} T_j.

Determining inter-rater agreement
Cohen's κ was used to identify the level of agreement between raters on visual consumption estimates made from the images, and we ensured the analysis was sufficiently powered(21):

κ = (p_o − p_e)/(1 − p_e) = 1 − (1 − p_o)/(1 − p_e),

where p_o = observed agreement among raters and p_e = probability of agreement by chance. Agreement could be classed as slight (0–0·20) or fair (0·21–0·40), although these results would not be considered significant; moderate (0·41–0·60); substantial (0·61–0·80); or near perfect (0·81–1)(20).

Results

Overall consumption
Total weights per plate were calculated for each measurement method. Table 1 shows these weights in grams, together with provided serving sizes (provision), in primary and secondary schools, for lunchboxes and school dinners. It can be seen that in all categories, children consumed over 80 % of the provided food. Three factors were analysed for differences in food provision and food consumption: school, lunch type and gender.
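The two formulas above translate directly into code. This is a minimal sketch assuming plain Python lists; the helper names are ours and the small data sets are invented purely to exercise the formulas, not taken from the study.

```python
import math

# RRSE and Cohen's kappa as defined in the text; an illustrative
# sketch with made-up data, not the study's analysis code.

def rrse(predicted, target):
    """Root-relative-squared error, as a percentage: total squared error
    normalised by the squared error of the simple mean predictor."""
    t_bar = sum(target) / len(target)
    numerator = sum((p - t) ** 2 for p, t in zip(predicted, target))
    denominator = sum((t - t_bar) ** 2 for t in target)
    return 100.0 * math.sqrt(numerator / denominator)

def cohens_kappa(p_observed, p_chance):
    """kappa = (p_o - p_e) / (1 - p_e)."""
    return (p_observed - p_chance) / (1 - p_chance)

print(rrse([10, 20, 31], [10, 20, 30]))  # small error vs. mean predictor
print(cohens_kappa(0.9, 0.5))            # kappa for p_o = 0.9, p_e = 0.5
```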
There were no differences, except that children in the primary school were provided with lunchbox meals of a greater total weight than their secondary-school counterparts (U = 1686, P = 0·008, r = −0·23). Bland–Altman analyses, presented in Fig. 1 and Table 2, show that the bias resulting from the digital image-capture method was small considering total consumption for each of the schools and for each type of lunch; SE varied from 0·53 to 2·44 % of the mean. Low values for RRSE (10·60 %) indicate less bias and greater accuracy (89·40 %) in the modelling of the data.

Consumption of foods in each category
Descriptive statistics for foods consumed in each category, based on weight measurements, can be found in Table 3. The results of the Bland–Altman analysis, shown in Fig. 2 and Table 4, indicate that the estimated consumption of food items derived from digital images presented an acceptably small bias for all categories, with SE ranging from 1·05 to 2·05 % of the mean. However, the value of the %RE statistic for the Fruit and Vegetable category was 10·55 %, showing lower accuracy than the others. Similarly, a one-sample t test identified a significant difference between the two measures for the category of Fruit and Vegetables (t(323) = 2·893, P = 0·004), but no significant difference between the measures for all other categories. This result reflects a comparably higher variation in Fruit and Vegetable serving sizes. Although cafeteria staff were requested to serve standardised portions, this did not always happen, leading to some disparities between the pre-consumption estimated weights and the actual weights of the portions served and, consequently, to less accurate consumption estimates, similar to those reported in other research(22).
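The Bland–Altman quantities reported in Tables 2 and 4 (bias and limits of agreement) can be computed as below. The sample data are invented for illustration, and the 1·96 × SD multiplier for the limits of agreement is the conventional Bland–Altman choice rather than something stated explicitly in the text.

```python
import math

# Bland-Altman bias and 95 % limits of agreement (bias +/- 1.96 SD of
# the paired differences). Illustrative sketch with made-up data; only
# the formulas follow the analysis described in the text.

def bland_altman(image_g, weighed_g):
    diffs = [a - b for a, b in zip(image_g, weighed_g)]
    n = len(diffs)
    bias = sum(diffs) / n
    sd = math.sqrt(sum((d - bias) ** 2 for d in diffs) / (n - 1))
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

image   = [101.0, 66.0, 58.0, 34.0, 90.0]   # hypothetical image estimates
weighed = [100.0, 64.0, 57.0, 33.0, 92.0]   # hypothetical weighed values
bias, lower, upper = bland_altman(image, weighed)
print(f"bias {bias:.2f} g, limits of agreement {lower:.2f} to {upper:.2f} g")
```

In the paper's analysis, each sample on which this was run exceeded n = 100; the five-point lists here only demonstrate the arithmetic.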
While our (well powered) analyses registered this effect as significant, the actual differences were very small: the average consumed portion weighed 47·91 g and this was overestimated via image capture by 1·64 g (3·42 %) on average.

Inter-rater agreement
For the full sample, a substantial level of agreement was achieved (Cohen's κ = 0·679; CI 0·64, 0·72). The food categories of Main Starch (κ = 0·581; CI 0·50, 0·66) and Fruit and Vegetables (κ = 0·535; CI 0·46, 0·62) achieved moderate agreement; substantial agreement was achieved for Meat, Dairy and Wet foods (κ = 0·781; CI 0·71, 0·85); and near perfect agreement was achieved for Snack items (κ = 0·819; CI 0·76, 0·88). The percentage agreement achieved for each category is typical of that previously accepted in key studies utilising digital image capture(11,17). The breakdown shown in Table 5 confirms that coding disparities, where recorded, were seldom large for any of the categories.

Table 1 Food provided and consumed (in grams), by school and meal type, in a sample of 121 children from a rural primary school in North Wales and 124 children from an urban secondary school in the West Midlands, UK, April 2016

                 Primary school                  Secondary school
                 Lunchbox     School dinner      Lunchbox     School dinner
                 Mean   SD    Mean   SD          Mean   SD    Mean   SD
Provided (g)     283    107   253    60          247    118   240    106
Consumed (g)     229    110   204    64          199    93    223    87
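The agreement bands quoted in the Methods (slight through near perfect) can be applied to the κ values reported above as a quick sanity check. This helper is purely illustrative; the study does not describe such code.

```python
# Interpretation bands for Cohen's kappa as listed in the Methods:
# slight (0-0.20), fair (0.21-0.40), moderate (0.41-0.60),
# substantial (0.61-0.80), near perfect (0.81-1). Hypothetical helper.

def interpret_kappa(kappa):
    if kappa <= 0.20:
        return "slight"
    if kappa <= 0.40:
        return "fair"
    if kappa <= 0.60:
        return "moderate"
    if kappa <= 0.80:
        return "substantial"
    return "near perfect"

# Category-level kappas reported in the Results:
for name, k in [("Full sample", 0.679), ("Main Starch", 0.581),
                ("Fruit and Vegetables", 0.535),
                ("Meat, Dairy and Wet foods", 0.781),
                ("Snacks", 0.819)]:
    print(f"{name}: {interpret_kappa(k)} agreement")
```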
Fig. 1 Bland–Altman plots comparing consumption estimates (in grams) made by digital photographs and objective weighed measures from lunchtime meals of a sample of 121 children from a rural primary school in North Wales and 124 children from an urban secondary school in the West Midlands, UK, April 2016. The difference in food weight between the two methods is plotted v. the mean food weight from the two methods, by school and meal type: (a) primary school; (b) secondary school; (c) lunchbox; and (d) school dinner. The solid line represents the mean difference (bias) and the dashed lines represent the limits of agreement

Table 2 Bland–Altman analysis results for all meals (in grams), by school and meal type, in a sample of 121 children from a rural primary school in North Wales and 124 children from an urban secondary school in the West Midlands, UK, April 2016

                    n     Bias    SD of bias   Limits of agreement (lower, upper)   %RE
School
  Primary           116   5·86    32·92        −58·66, 70·38                         2·25
  Secondary         123   0·36    7·07         −13·5, 14·22                         −0·09
Lunch
  Lunchbox          137   2·67    22·60        −41·63, 46·97                         0·96
  School dinner     102   3·52    24·98        −45·44, 52·48                         1·14

%RE, percentage relative error.
Table 3 Food provided and consumed (in grams), by food category, in a sample of 121 children from a rural primary school in North Wales and 124 children from an urban secondary school in the West Midlands, UK, April 2016

                             Number of portions                                  Provided (g)     Consumed (g)
                             Primary school           Secondary school
Category                     Lunchbox  School dinner  Lunchbox  School dinner    Mean     SD      Mean     SD
Main Starch                  54        84             65        43               101·54   47·84   84·96    51·10
Fruit and Vegetables         62        68             98        5                66·39    45·26   47·91    41·51
Meat, Dairy and Wet foods    4         63             27        22               65·39    51·50   57·51    45·37
Snacks                       100       42             64        27               37·44    25·92   33·42    26·82

Discussion

The present investigation supports the use of digital image capture as a valid method of data collection for free-living research in busy school dining environments. We have found that estimates derived from digital images can be equivalent to weighed measures for most food types, and that a high level of inter-rater agreement can be achieved using the present protocol. This has significant implications for the collection of nutritional data in children.

The current study extends the findings of previous investigations in several important ways. While a digital image-capture method has been validated for use with sandwiches brought from home in a contrived study(12), the use of digital image capture has never before been shown to be accurate for lunchboxes in a real-life setting. By testing the validity of the digital image-capture method against weighed measures for items brought from home and consumed in a school cafeteria, the present investigation provides evidence that digital images can also enable valid estimates in this context. This finding should be of interest to researchers measuring children's consumption in countries where parental lunch provision is the norm (e.g.
Canada, Norway, Ireland) and in those where a mixed supply is used (e.g. UK, Australia).

Fig. 2 Bland–Altman plots comparing consumption estimates (in grams) made by digital photographs and objective weighed measures from lunchtime meals of a sample of 121 children from a rural primary school in North Wales and 124 children from an urban secondary school in the West Midlands, UK, April 2016. The difference in food weight between the two methods is plotted v. the mean food weight from the two methods by food category: (a) Main Starch; (b) Fruit and Vegetables; (c) Meat, Dairy and Wet foods; and (d) Snacks. The solid line represents the mean difference (bias) and the dashed lines represent the limits of agreement

Table 4 Bland–Altman analysis results for all meals (in grams), by food category, in a sample of 121 children from a rural primary school in North Wales and 124 children from an urban secondary school in the West Midlands, UK, April 2016

Category                     n     Bias    SD of bias   Limits of agreement (lower, upper)   SE     %RE
Main Starch                  246   0·22    8·19         −16·03, 16·27                        0·90   3·04
Fruit and Vegetables         233   1·64    8·67         −15·35, 18·63                        0·98   10·55
Meat, Dairy and Wet foods    186   −1·14   8·12         −17·06, 14·78                        1·03   0·16
Snacks                       233   0·15    3·12         −5·97, 6·27                          0·35   1·12

%RE, percentage relative error.

Further, previous investigations conducted in a real-life setting have focused on younger, primary-school-age children(17), while the current study supports the use of our digital
image-capture method in both primary- and secondary-school settings.

The current paper also presents a more accurate measure of consumption than the previously published research. By utilising an 11-point scale (0–100 % consumed in 10 % increments), rather than continuous unbounded estimation in grams, the digital image-capture measure of the present study yielded greater alignment with the weighed measure than has previously been achieved in research with children(17). We consider that continuous weight estimation from digital images may have led the researchers to adopt comparably lenient criteria. For example, ±25 % weight discrepancies between the two measures were considered as 'acceptable agreement' in one recent validation study(17), where the authors reported pre-consumption measures and plate waste measures separately, further inflating the number of agreements. By contrast, we used a measure of consumption for each meal, which is the variable of most interest to researchers.

The present method combined accuracy comparable to the weighed measures with the convenience of unobtrusive group data collection, avoiding some of the problems of other commonly used methods(23). We acknowledge that accurate visual estimation of consumption is clearly a more complex skill to master than direct recording of food weights. Nevertheless, we have found that a modest amount of training (see 'Methods' section) sufficed to produce reliable coding of a large number of food types. Based on pilot work, our protocol addressed procedural challenges common to free-living investigations.
For example, we carefully positioned the researchers and recording equipment to minimise disruption but maximise visibility and children's compliance with measurements, reducing attrition to one or two participants per day and thereby ensuring that any data loss would have a negligible impact on overall results. We adjusted our data collection methods to suit two very different cafeteria settings: a small school (200 students) in a rural area with a strictly regimented lunchtime routine and a large school (2000 students) in an urban area with a more relaxed approach to the lunch period. We examined different lunch types, including lunchboxes brought from home and school dinner meals, in the analysis and recorded consumption from children with ages spanning 5 to 18 years. The success in two very different settings, lunch types and age groups supports the generalisability and ecological validity of the digital image-capture method described in the present paper.

The present study has significant implications for public health. There has been a growing interest in the promotion of healthful behaviours in education settings(24,25); with children in the UK consuming about 30 % of their daily nutrients at school(26), the regulation of food eaten in schools has a significant impact on overall dietary behaviour(27). Indeed, research has indicated that eating patterns at school are reflective of typical eating behaviour(28,29). With the availability of a valid measure to collect nutritional data, comparable to weighed measures from a large sample of school children in situ, research may now be designed to run an appropriately powered analysis of what is currently being consumed by children at lunchtime (as we know that lunchtime provision does not equal consumption).
An understanding of what is being consumed will also highlight areas for improvement, and interventions can be designed (and analysed for effectiveness using the digital image-capture method) that address these nutritional deficiencies. Such research ought then to inform policy which will, in turn, be expected to have a significant impact on children's dietary behaviour and overall health(27).

Regarding the digital image-capture method, we acknowledge that visually estimating food item consumption will always be vulnerable to human error; using this measure we can estimate only the percentage consumed of observed volumes and, unless the true weight of each food item is recorded before consumption, this cannot be truly 'converted' to an exact weight. The present study does not purport to suggest that digital image capture will fully replace the gold standard of weighing every food item before or after consumption, but simply that, with a reasonably sensitive measure capable of yielding large quantities of data in a short period of time, more research regarding children's diets and lunchtime consumption may be conducted to observe important trends in children's eating behaviour.

Table 5 Percentages of inter-rater agreement and disparities for all meals, by food category, in a sample of 121 children from a rural primary school in North Wales and 124 children from an urban secondary school in the West Midlands, UK, April 2016

Category                     Full agreement   10 % disparity   20 % disparity   >20 % disparity
Main Starch                  81·00            7·60             2·50             8·90
Fruit and Vegetables         83·60            11·00            2·70             2·70
Meat, Dairy and Wet foods    95·20            4·80             –                –
Snacks                       94·10            2·90             1·50             1·50

Some compromises had to be made regarding study design. Considering the school lunches, estimate weights for each food item available in the cafeteria were based on
the average of five 'typical' servings. These estimates were used in lieu of weighing each portion before the participants ate their lunch. This commonly used method(15) was efficient and unobtrusive; it preserved the 'real-life' nature of the investigation and prevented the food from cooling down before the children ate it, which would have made it less appetising. Nevertheless, it had its drawbacks. Although cafeteria staff were requested to provide all participants with equally sized servings, this did not always happen. Unlike foods such as fish or bread that were well standardised (e.g. one fillet or one slice), spoonfuls of vegetables sometimes varied in size, leading to a disparity between the estimated and actual servings and introducing a source of noise into the data set. This barrier to reliability has been previously identified in associated research(30). Even though we recorded a significant difference between data collection methods, with a comparably high bias and greater %RE for the Fruit and Vegetable food category, the actual overestimation was less than a couple of grams on average. This is much less than the discrepancies reported in other studies(17) and is unlikely to adversely impact measurement. Our ongoing research in schools confirms that this method is sensitive enough to detect small changes in children's fruit and vegetable consumption over time.

Due to the fast-paced nature of the school lunchtime environment, it was not possible to weigh each food item twice and so visual estimations of consumption were validated only against a single measure, without provision of inter-rater reliability. However, it is unlikely that measurement was inaccurate: the digital scales used were correctly set up and tested every morning prior to data collection. Further, a relatively small sample size was utilised. As stated, we used two schools that differed on several important aspects (age range, setting, etc.)
in order to promote generalisability, although we do acknowledge that a sample of just two schools does limit generalisability. Future research may benefit from exploring the application of the digital image-capture measure in a greater variety of school-based settings; however, we consider the present sample to indicate the potential for the wide applicability of the method.

Overall, we found the lunchtime provision and consumption to be matched across study settings, ages, lunch types and genders. Somewhat counter-intuitively, children in primary schools brought more food in their lunchboxes than did their older counterparts. We considered by whom the food was being provided and concluded that the child's lunchbox was more likely to be prepared by the parents at primary-school age and by the children themselves at secondary-school age. Adolescents may have been less motivated to pack a substantial lunch, forgoing quantity and quality for ease and so including fewer food items. The finding that serving sizes were not related to children's nutritional needs indicated that more attention should be given to providing appropriate portions as children grow and develop(31).

Conclusion

The current study presented a simple and versatile digital image-capture method for estimating the lunchtime consumption of children in schools. We obtained high agreement with the weighed measures and good inter-rater reliability using total consumption and food category scores, derived from the weight estimates of individual food items. These data can be used to calculate the energy content of children's meals and their micro- and macronutrient composition, using published nutrition tables and school meal recipes, to provide more detailed measures of consumption and its changes over time, for example in studies that seek to evaluate the effects of various school-based interventions(32).
Acknowledgements

Acknowledgements: The authors would like to extend thanks to San Sior School in Llandudno, and Aldridge School in Walsall, for their participation in this research. Financial support: This work was supported by Bangor University and the Knowledge Economy Skills Scholarships II, a major pan-Wales operation supported by European Social Funds. No funders undertook any role in the design, analysis or writing of this article. Conflict of interest: None. Authorship: M.M.-O. designed and coordinated the study, processed and analysed the data, and drafted the manuscript. M.E. secured the funding, supervised all aspects of the research, and co-wrote the final manuscript. P.J.H. assisted with editing. S.V. advised on statistical analysis and results. R.P. assisted with data collection, inputting and second coding. All authors have read and approved the final manuscript. Ethics of human subject participation: This study was conducted according to the guidelines laid down in the Declaration of Helsinki and all procedures involving human subjects were approved by the Bangor University Ethics Board, and written parental consent was obtained.

References
1. Martin CK, Nicklas T, Gunturk B et al. (2014) Measuring food intake with digital photography. J Hum Nutr Diet 27, 72–81.
2. McClung HL, Champagne CM, Allen HR et al. (2017) Digital food photography technology improves efficiency and feasibility of dietary intake assessments in large populations eating ad libitum in collective dining facilities. Appetite 116, 389–394.
3. Lazarte CE, Encinas ME, Alegre C et al. (2012) Validation of digital photographs, as a tool in 24-h recall, for the improvement of dietary assessment among rural populations in developing countries. Nutr J 11, 61.
4. Ptomey LT, Willis EA, Honas JJ et al. (2015) Validity of energy intake estimated by digital photography plus recall in overweight and obese young adults. J Acad Nutr Diet 115, 1392–1399.
5. Ptomey LT, Willis EA, Goetz JR et al. (2015) Digital photography improves estimates of dietary intake in adolescents with intellectual and developmental disabilities. Disabil Health J 8, 146–150.
6. Becker C, Miketinas DC, Martin CK et al. (2013) Examining dietary energy density and eating occasions with the Remote Food Photography Method. FASEB J 27, 621–626.
7. England CY, Lawlor DA, Lewcock M et al. (2016) Use of the Remote Food Photography Method by young pregnant women: a feasibility study. Proc Nutr Soc 75, issue OCE3, E242.
8. Christoph MJ, Loman BR & Ellison B (2017) Developing a digital photography-based method for dietary analysis in self-serve dining settings. Appetite 114, 217–225.
9. Swanson M (2008) Digital photography as a tool to measure school cafeteria consumption. J Sch Health 78, 432–437.
10. Swanson M, Branscum A & Nakayima PJ (2009) Promoting consumption of fruit in elementary school cafeterias. The effects of slicing apples and oranges. Appetite 53, 264–267.
11. Williamson DA, Allen HR, Martin PD et al. (2004) Digital photography: a new method for estimating food intake in cafeteria settings. Eat Weight Disord 9, 24–28.
12. Sabinsky MS, Toft U, Andersen KK et al. (2013) Validation of a digital photographic method for assessment of dietary quality of school lunch sandwiches brought from home. Food Nutr Res 57, 20243.
13. Pouyet V, Cuvelier G, Benattar L et al. (2015) A photographic method to measure food item intake. Validation in geriatric institutions. Appetite 84, 11–19.
14. Nicklas T, Saab R, Islam NG et al. (2017) Validity of the remote food photography method against doubly labeled water among minority preschoolers. Obesity (Silver Spring) 25, 1633–1638.
15.
Taylor JC, Yon BA & Johnson RK (2014) Reliability and validity of digital imaging as a measure of schoolchildren's fruit and vegetable consumption. J Acad Nutr Diet 114, 1359–1366.
16. Hanks AS, Wansink B & Just DR (2014) Reliability and accuracy of real-time visualization techniques for measuring school cafeteria tray waste: validating the quarter-waste method. J Acad Nutr Diet 114, 470–474.
17. Gemming L, Utter J & Mhurchu CN (2015) Image-assisted dietary assessment: a systematic review of the evidence. J Acad Nutr Diet 115, 64–77.
18. Olafsdottir AS, Hörnell A, Hedelin M et al. (2016) Development and validation of a photographic method to use for dietary assessment in school settings. PLoS One 11, e0163970.
19. Walther BA & Moore JL (2005) The concepts of bias, precision and accuracy, and their use in testing the performance of species richness estimators, with a literature review of estimator performance. Ecography 28, 815–829.
20. Witten IH & Frank E (2005) Data Mining: Practical Machine Learning Tools and Techniques, 2nd ed., pp. 176–179. San Francisco, CA: Morgan Kaufmann.
21. Bland JM & Altman D (1986) Statistical methods for assessing agreement between two methods of clinical measurement. Lancet 327, 307–310.
22. Landis JR & Koch GG (1977) The measurement of observer agreement for categorical data. Biometrics 33, 159–174.
23. Buzby JC & Guthrie JF (2002) Plate waste in school nutrition programs. J Consum Aff 36, 220–238.
24. National Cancer Institute (2016) School Age Population Methods Validation – NCS Dietary Assessment Literature Review. https://epi.grants.cancer.gov/past-initiatives/assess_wc/review/agegroups/schoolage/validation.html#val (accessed August 2017).
25. World Health Organization (n.d.) What is a health promoting school? http://www.wpro.who.int/health_promotion/about/health_promoting_schools_framework/en/ (accessed May 2018).
26. McKenna ML (2010) Policy options to support healthy eating in schools.
Can J Public Health 101, Suppl. 2, S14–S17.
27. Rogers IS, Ness AR, Hebditch K et al. (2007) Quality of food eaten in English primary schools: school dinners vs. packed lunches. Eur J Clin Nutr 61, 856–864.
28. Nelson M & Breda J (2013) School food research: building the evidence base for policy. Public Health Nutr 16, 958–967.
29. Tilles-Tirkkonen T, Pentikainen S, Lappi J et al. (2011) The quality of school lunch consumed reflects overall eating patterns in 11–16-year-old schoolchildren in Finland. Public Health Nutr 14, 2092–2098.
30. Raulio S, Roos E & Prattala R (2010) School and workplace meals promote healthy food habits. Public Health Nutr 13, 987–992.
31. Thompson CH, Head MK & Rodman SM (1987) Factors influencing accuracy in estimating plate waste. J Am Diet Assoc 87, 1219–1220.
32. Marcano-Olivier M, Pearson R, Ruparell A et al. (2019) A low-cost behavioural nudge and choice architecture intervention targeting school lunches increases children's consumption of fruit: a cluster randomised trial. Int J Behav Nutr Phys Act 16, 20.
https://epi.grants.cancer.gov/past-initiatives/assess_wc/review/agegroups/schoolage/validation.html#val https://epi.grants.cancer.gov/past-initiatives/assess_wc/review/agegroups/schoolage/validation.html#val http://www.wpro.who.int/health_promotion/about/health_promoting_schools_framework/en/ http://www.wpro.who.int/health_promotion/about/health_promoting_schools_framework/en/ https://www.cambridge.org/core Measuring lunchtime consumption in school cafeterias: a�validation study of the use of digital photography Methods Aim Participants Materials Procedure Lunchboxes School dinners Data processing and coding Weighed consumption measures Consumption estimates from digital images Preliminary data analyses Statistics and sample size calculations Comparing weight- and image-based data Performance evaluation of the model Determining inter-rater agreement Results Overall consumption Consumption of foods in each category Inter-rater agreement Table 1Food provided and consumed (in grams), by school and meal type, in a sample of 121 children from a rural primary school in North Wales and 124 children from an urban secondary school in the West Midlands,�UK, April 2016 Fig. 1Bland–Altman plots comparing consumption estimates (in grams) made by digital photographs and objective weighed measures from lunchtime meals of a sample of 121 children from a rural primary school in North Wales and 124 children from an urba Table 2Bland–Altman analysis results for all meals (in grams), by school and meal type, in a sample of 121 children from a rural primary school in North Wales and 124 children from an urban secondary school in the West Midlands,�UK, April 2016 Table 3Food provided and consumed (in grams), by food category, in a sample of 121 children from a rural primary school in North Wales and 124 children from an urban secondary school in the West Midlands,�UK, April 2016 Discussion Fig. 
work_vu7yb6grczasfkyci7b3nshszq ----
Prospect. Vol. 10, No. 1, January-June 2012, pp. 37-43
Effect of arracacha starch modification and plasticizer concentration on the mechanical properties of biodegradable films
Oscar H. Pardo Cuervo1, William Aperador Chaparro2, William M. Sanabria3
1 MSc in Metallurgy and Materials Science. Lecturer, Universidad Pedagógica y Tecnológica de Colombia. Food Chemistry and Technology Research Group (GQTA). E-mail: oscarhernando.pardo@uptc.edu.co
2 PhD in Engineering. Lecturer, Universidad Militar Nueva Granada. Materials Engineering Research Group.
3 MSc in Metallurgy and Materials Science. Lecturer, Universidad Pedagógica y Tecnológica de Colombia. Materials Integrity and Evaluation Research Group (GIEM).
Received 11/11/2011, Accepted 30/05/2012
ABSTRACT
Arracacha (Arracacia xanthorrhiza Bancroft) native starch (NS) was oxidized (OS) or acetylated (AS) to produce biodegradable films, varying the concentration of glycerol in order to determine the effect of the type of starch modification and of the plasticizer concentration on the mechanical properties of the films. The type of starch modification and the amount of plasticizer used in the film-forming formulation had a significant effect (p<0.01) on tensile strength (TS), percent elongation at break (%E), elastic modulus (EM) and modulus of resilience (MR). When AS was used to produce the films, a greater average TS was reached, and it was greatest for the formulation with the lowest amount of plasticizer.
While using NS, a higher average %E was achieved, and it was greatest for the formulation with the highest amount of plasticizer. There was no interaction between the two factors that significantly affected the mechanical properties of the films.
Keywords: Films, Plasticizer, Acetylated, Oxidized
1. INTRODUCTION
The consumption of plastics has increased in recent years [1]. These petrochemical materials have partially, and sometimes totally, replaced many natural materials. However, the pollution caused by these materials is very high. For this reason, some research has focused on alleviating this environmental problem, mainly through the development and use of biodegradable polymers [2]. In the packaging sector, materials must be renewable and non-toxic, and the final products must be recyclable, innovative and economically competitive [3]. Among natural polymers, starch meets most of these stringent requirements. Nevertheless, the use of native starches is limited because processing conditions restrict their use in industrial applications [4]. The use of unmodified starch in packaging is further limited by its brittleness and by the deterioration of its mechanical properties under ambient conditions upon exposure to moisture [5]. To improve these poor physicochemical and mechanical properties, starch is modified by three kinds of methods - physical, chemical and microbial - or by a combination of them. Even at low levels of chemical modification, modified starches can significantly improve hydrophobicity as well as change mechanical and physical properties [6]. The ability of starch to form films has been studied for years [7].
The effect of adding acetylated monoglyceride fractions on the mechanical properties of potato starch films has been studied; a greater amount decreased the Young's modulus, the tensile strength and the percent elongation at break [8]. To improve both mechanical and physicochemical properties, films have been prepared with starches from maize, potato and other sources modified by several methods; these films have been characterized and the influence of the modifications and plasticizers determined. Plasticizers are used because they help modify the properties of the final product. In starch films, the plasticizer reduces brittleness and improves flexibility. Several plasticizers are used to obtain starch films, but the most widely used is glycerol [9]. Films have also been obtained from unconventional starch sources such as quinoa, studying the effect of pH and glycerol concentration; the optimum conditions were found to be 21.2 g of glycerol/100 g of quinoa starch and a pH of 10.7, and films produced under these conditions had superior mechanical properties [10]. Given that Colombia is the world's leading producer of arracacha [11] and that this root has a considerably high starch content [12], the aim of the present work was to obtain three types of biodegradable films using native, oxidized and acetylated arracacha starch, varying the plasticizer proportion, in order to compare their mechanical properties.
2. EXPERIMENTAL
Native starch (AN) was extracted following the methodology proposed by Aristizábal and Sánchez [13], with some modifications.
The films were prepared using oxidized starch (AO), with a carbonyl group content of 0.019% and a carboxyl group content of 0.025%, acetylated starch (AA), with a degree of substitution of 0.259, both previously characterized, and native arracacha starch [14].
2.1 Film preparation
The AN, AO and AA films were prepared by suspending 2 g of starch in a mixture of 50 mL of distilled water and glycerol in different proportions (0.6, 0.8 and 1.0 mL), stirring at 500 rpm at room temperature for 20 minutes. The suspension was gelatinized for 15 minutes at 70 °C. The mixture was poured into 30 x 15 cm steel moulds and left to dry at room temperature for 5 days.
2.2 Mechanical properties of the films
The mechanical properties were determined with a Shimadzu EZ-L universal testing machine, following standard ASTM D882-10 [15]. The samples analysed were sets of plastic films cut into rectangular specimens (approximately 12 x 2.5 cm). Before testing, the samples were conditioned at 23 ± 2 °C and 50 ± 10 % relative humidity for 7 days in a BINDER KBF115 climate chamber. For each specimen, the average width and thickness (mean of five measurements) were measured and recorded with a Mitutoyo digital caliper and a TMI thickness gauge, respectively. Each sample was placed in the grips of the universal testing machine and subjected to the test method conditions, with stress and strain changes recorded simultaneously during the test. The general conditions of the measurement method, adjusted on the basis of preliminary tests, were as follows:
Initial grip separation: 80 mm
Crosshead speed: 40 mm/min
Initial strain rate: 0.5 mm/mm·min
Maximum capacity of the load cell used: 500 N
2.3 Statistical analysis and experimental design
An analysis of variance was performed with the statistical package SPSS version 17, and means were compared with Tukey's test (p≤0.01). The experimental design was an AxB factorial with two study factors: i) glycerol concentration, with three levels (A: 0.6 mL; B: 0.8 mL; C: 1.0 mL), and ii) starch type, with three levels (native, oxidized and acetylated starch). The influence of glycerol as plasticizer on the mechanical properties of the films was analysed.
3. RESULTS AND DISCUSSION
Figure 1 shows the starch films obtained. Visual inspection showed that the AO films were more opaque than the AN and AA films, because during the oxidation reaction the starch undergoes a bleaching that persists through film preparation.
Figure 1. Digital photograph of the starch films obtained. A: acetylated; B: oxidized; C: native. Glycerol content: 0.8 mL.
3.1 Average thickness of the films
Film thickness has been shown to directly affect mechanical properties [16]. For this reason, before testing, the average thickness of each sample was measured and recorded (Table 3) as the mean of five measurements at different points (Figure 2). The analysis of variance indicated that there were no significant differences (p<0.01), meaning that thickness did not influence the results of the mechanical tests, so those results reflect only the influence of the formulation and of the type of starch used to obtain the films.
Table 3. Film thickness (mm)
PAN: native starch films; PAO: oxidized starch films; PAA: acetylated starch films; A: 0.6 mL glycerol; B: 0.8 mL glycerol; C: 1.0 mL glycerol; S: standard deviation
Figure 2. Digital photograph of the samples used for mechanical testing. AA: acetylated starch film. AN: native starch film. AO: oxidized starch film.
3.2 Tensile strength (TS)
The analysis of variance performed to evaluate the effect of starch type (AN, AO and AA) and formulation (A, B, C) showed that starch type has a significant effect on TS (p<0.01), meaning that TS varies according to the type of starch used to obtain the films. Tukey's multiple comparison test indicated that TS differs more between films made with AN and with AA; differences were also found between AO and AA (p<0.01), whereas TS did not differ significantly between films made with AN and AO. The test also showed that using AA yields the highest TS. This behaviour can be seen in Figure 3.
Figure 3. Tensile strength comparison of the films. AN: native starch. AO: oxidized starch. AA: acetylated starch. A: 0.6 mL; B: 0.8 mL; C: 1.0 mL.
The analysis of variance also indicated that the formulation used to obtain the films produces, on average, different TS values (p<0.01), and Tukey's test indicated that the formulation directly influences TS (p<0.01).
On average, the highest TS was obtained using formulation A, followed by formulation B and formulation C, with values of 12.5753 MPa, 4.9393 MPa and 2.7705 MPa respectively. The higher TS of the AA films is possibly due to the substitution of hydroxyl groups by acetyl groups in this type of modification; acetyl groups interact less with glycerol molecules than do the hydroxyl groups of AN or the carboxyl and carbonyl groups of AO. The highest average TS of the films obtained was 12.5753 MPa, corresponding to the AA films. Compared with the minimum value of low-density polyethylene (10 MPa), this indicates a strong possibility of using this type of film as a packaging material.
3.3 Percent elongation (%E)
The analysis of variance for percent elongation (%E) indicated that starch type has a significant effect on %E, meaning that the average %E of the films varies with the type of starch used (p<0.01). Tukey's multiple comparison test indicated that the %E of films obtained with AN differs significantly from that of films obtained with AA, and likewise between AO and AA (p<0.01). In contrast, the %E of films made with AN and AO did not differ significantly. Tukey's test also established that using AN yields, on average, the highest %E (Figure 4). Regarding the formulation used to obtain the films and its effect on %E, the formulation produced, on average, different %E values (p<0.01), and Tukey's test established that there are significant differences in the %E of the films (p<0.01) when the formulation is varied.
This behaviour can also be seen in Figure 4. On average, the highest %E was recorded using formulation C, followed by formulation B and formulation A, with values of 30.001 %, 21.2670 % and 11.3121 % respectively. In other words, formulation C yielded a film with a greater capacity for elastic deformation, although within formulation C the acetylated starch differs significantly from the native and oxidized starches. Likewise, with less plasticizer the films tend to be fragile and easily breakable.
Figure 4. Percentage of elongation at break comparison of the films. AN: native starch. AO: oxidized starch. AA: acetylated starch. A: 0.6 mL; B: 0.8 mL; C: 1.0 mL.
Since the %E of polymeric materials depends on the flexibility of the molecular chain, and considering that during film preparation the starch was subjected to heat treatment and mechanical stirring during the gelation stage, the crystalline structure of the starch was destroyed and a new amorphous structure was formed. This structural change favoured the impregnation of glycerol into the matrix, which reduced both the intra- and intermolecular interactions between starch molecules, owing to the formation of hydrogen bonds between the hydroxyl groups of the starch macromolecules and glycerol. This phenomenon contributes to a rearrangement of the starch chains that increases the flexibility of the starch films. Therefore, the %E of the films increased while TS decreased as the glycerol content increased. Similar results have been reported by Rodríguez et al. [17] and Hu et al. [1].
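The two-factor design described in this article (starch type x glycerol level, followed by ANOVA) can be sketched in plain Python for a balanced layout. The replicate values below are illustrative placeholders, not the measured tensile strengths from the paper, and a real analysis (as done here with SPSS) would also report p-values and Tukey comparisons.

```python
# Minimal sketch of a balanced two-way ANOVA (starch type x glycerol level).
# Returns F ratios only; p-values would require an F distribution.

def two_way_anova(data):
    """data[(a, b)] -> list of replicate measurements (balanced design)."""
    levels_a = sorted({a for a, _ in data})
    levels_b = sorted({b for _, b in data})
    r = len(next(iter(data.values())))          # replicates per cell
    n = len(levels_a) * len(levels_b) * r
    grand = sum(sum(v) for v in data.values()) / n

    mean_a = {a: sum(sum(data[(a, b)]) for b in levels_b) / (len(levels_b) * r)
              for a in levels_a}
    mean_b = {b: sum(sum(data[(a, b)]) for a in levels_a) / (len(levels_a) * r)
              for b in levels_b}
    mean_ab = {k: sum(v) / r for k, v in data.items()}

    # Sums of squares for each factor, the interaction, and the error term
    ss_a = len(levels_b) * r * sum((mean_a[a] - grand) ** 2 for a in levels_a)
    ss_b = len(levels_a) * r * sum((mean_b[b] - grand) ** 2 for b in levels_b)
    ss_ab = r * sum((mean_ab[(a, b)] - mean_a[a] - mean_b[b] + grand) ** 2
                    for a in levels_a for b in levels_b)
    ss_e = sum((x - mean_ab[k]) ** 2 for k, v in data.items() for x in v)

    df_a, df_b = len(levels_a) - 1, len(levels_b) - 1
    df_ab = df_a * df_b
    df_e = n - len(levels_a) * len(levels_b)
    ms_e = ss_e / df_e
    return {"F_starch": (ss_a / df_a) / ms_e,
            "F_glycerol": (ss_b / df_b) / ms_e,
            "F_interaction": (ss_ab / df_ab) / ms_e}

# Illustrative tensile-strength replicates (MPa) per (starch, glycerol mL) cell
tensile = {
    ("AA", 0.6): [12.9, 12.3, 12.6], ("AA", 0.8): [7.1, 6.8, 7.0], ("AA", 1.0): [4.1, 3.9, 4.0],
    ("AN", 0.6): [11.0, 10.6, 10.8], ("AN", 0.8): [4.0, 3.8, 3.9], ("AN", 1.0): [2.1, 1.9, 2.0],
    ("AO", 0.6): [10.1, 9.7, 9.9],   ("AO", 0.8): [3.2, 3.0, 3.1], ("AO", 1.0): [1.6, 1.4, 1.5],
}
print(two_way_anova(tensile))
```

Large F ratios for the main effects and a small interaction F mirror the pattern reported in the paper (both factors significant, no significant interaction).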
3.4 Elastic modulus and modulus of resilience (EM and MR)
According to the results of the analysis of variance for the elastic modulus (EM) and the modulus of resilience (MR), starch type has a significant effect on both of these mechanical properties, meaning that the average EM and MR of the films vary with the type of starch used (p<0.01). The films obtained with AA showed the highest EM and MR, followed by AN and AO. This behaviour can be seen in Figure 5.
Figure 5. Elastic modulus comparison of the films. AN: native starch. AO: oxidized starch. AA: acetylated starch. A: 0.6 mL; B: 0.8 mL; C: 1.0 mL.
Figure 6. Modulus of resilience comparison of the films. AN: native starch. AO: oxidized starch. AA: acetylated starch. A: 0.6 mL; B: 0.8 mL; C: 1.0 mL.
Figures 5 and 6 also show that for the films obtained with AO in formulations B and C, and with AN in formulation C, it was not possible to calculate EM and MR because the stress-strain curve had no defined linear region; for this reason, Tukey's multiple comparison test was not performed for these response variables. The analysis of variance also established that the formulation produces significant differences in EM (p<0.01), with the highest EM in formulation A, followed by B and C, with values of 733.74 MPa, 272.47 MPa and 130.17 MPa respectively. Likewise, the analysis of variance showed that the formulation produces significant differences in the modulus of resilience (p<0.01), with the highest modulus of resilience in formulation A, followed by B and C, with values of 0.109 MPa, 0.074 MPa and 0.094 MPa respectively.
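The elastic modulus and modulus of resilience discussed in this section are derived from the recorded stress-strain curve. A minimal sketch of that arithmetic is shown below; the curve is synthetic, and the 4-point linear window and the yield index are assumptions of the example (the actual procedure follows ASTM D882-10).

```python
# Sketch: extract EM (slope of the initial linear region, least squares) and
# MR (trapezoidal area under the curve up to an assumed yield point) from a
# synthetic stress-strain record. Units: strain in mm/mm, stress in MPa.

def elastic_modulus(strain, stress, linear_points=4):
    """Least-squares slope over the first linear_points samples (MPa)."""
    xs, ys = strain[:linear_points], stress[:linear_points]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

def modulus_of_resilience(strain, stress, yield_index):
    """Trapezoidal area under the curve up to the assumed yield point (MPa)."""
    area = 0.0
    for i in range(yield_index):
        area += 0.5 * (stress[i] + stress[i + 1]) * (strain[i + 1] - strain[i])
    return area

strain = [0.000, 0.005, 0.010, 0.015, 0.020, 0.030, 0.050]
stress = [0.0, 3.5, 7.0, 10.5, 12.0, 12.4, 12.5]

print(elastic_modulus(strain, stress))           # 700.0 MPa for this curve
print(modulus_of_resilience(strain, stress, 3))  # ~0.079 MPa
```

For this invented curve the results (EM of the order of hundreds of MPa, MR of the order of 0.1 MPa) fall in the same ranges as the values reported above, which is why the example is scaled this way.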
Regarding EM, the highest values correspond to the AA films at all glycerol concentrations, indicating that this type of film is stiffer and does not deform as easily; this stiffness is corroborated in Figure 3, where the AA films show the highest TS for any of the three formulations. On the other hand, formulations B and C (higher plasticizer content) for AO, and formulation C for AN, show no EM and tend to deform plastically from the start of load application; similar results were reported by Talja et al. [18]. In formulation A, the modulus of resilience is significantly higher than in formulations B and C, which clearly translates into a greater capacity of this formulation to absorb energy before entering the plastic deformation regime. This behaviour shows that starch modification also contributes favourably to improving these mechanical properties, mainly through acetylation and to some extent through oxidation; these modifications reduce the formation of hydrogen bonds between glycerol and the hydroxyl groups of the starch. Furthermore, applying the statistical model to evaluate the possible interaction between the type of starch used to obtain the films and the formulation shows that there is no starch-formulation interaction that significantly affects the TS, %E, EM or MR of the films obtained.
4. CONCLUSIONS
• The mechanical properties of the films are directly affected by the plasticizer concentration and by the type of modification of the arracacha starch used to obtain them.
• Acetylation increases the tensile strength of the films, while oxidation reduces it, compared with films obtained with unmodified starch.
• On average, the acetylation and oxidation reactions decrease the percent elongation of the films compared with those obtained with unmodified starch.
• Acetylation increases the elastic modulus, while oxidation decreases it. The greater the amount of plasticizer used to obtain the films, the lower the tensile strength and the modulus of resilience, while the percent elongation increases.
• The mechanical property most affected by the amount of plasticizer was the elastic modulus.
• There is no significant interaction between the type of starch and the plasticizer concentration used to obtain the films that affects the mechanical properties of the arracacha starch films.
REFERENCES
[1] Hu, G., Chen, J., Gao, J. Preparation and characteristics of oxidized potato starch films. Carbohydr. Polym., 76 (2), 291-298, 2009.
[2] Uhrich, K.E., Cannizzaro, S.M., Langer, R.S., Shakesheff, K.M. Polymeric systems for controlled drug release. Chem. Rev., 99 (11), 3181-3198, 1999.
[3] Tuominen, J. Chain linked acid polymers: polymerization degradation studies. Doctoral thesis, Helsinki University of Technology, 2003.
[4] Zamudio, F.P., Bello, P.L., Vargas, T.A., Hernández, U.J., Romero, B.C. Partial characterization of films prepared with oxidized banana starch. Agrociencia, 41 (8), 837-844, 2007.
[5] Fang, J.M., Fowler, P.A., Tomkinson, J., Hill, C.A. An investigation of the use of recovered vegetable oil for the preparation of starch thermoplastics. Carbohydr. Polym., 50 (4), 429-434, 2002.
[6] Peñaranda, O.I., Perilla, J.E., Algecira, N.A. A review of using organic acids to chemically modify starch.
Ingeniería e Investigación, 28 (3), 47-52, 2008.
[7] Langlois, D.P., Wagoner, J.A. Production and use of amylose. In: Starch, Chemistry and Technology. New York (USA): Press Inc., p. 451-496, 1967.
[8] Petersson, M., Stading, M. Water vapour permeability and mechanical properties of mixed starch-monoglyceride films and effect of film forming conditions. Food Hydrocolloids, 19, 123-132, 2005.
[9] López, O.V., García, M.A., Zaritzky, N.E. Film forming capacity of chemically modified corn starches. Carbohydr. Polym., 73 (4), 573-581, 2008.
[10] Aristizábal, J., Sánchez, T. Extracción de almidón de yuca. In: Guía técnica para producción y análisis de almidón de yuca. Rome (Italy): Press Inc., p. 49-57, 2007.
[11] Rodríguez, G. Concepción de un modelo de agroindustria rural para la elaboración de harina y almidón a partir de raíces y tubérculos promisorios, con énfasis en los casos de achira (Canna edulis), arracacha (Arracacia xanthorriza) y ñame (Dioscorea sp.). [Internet]. Tibaitatá, Colombia: CORPOICA-PRONATTA; 2003 [cited 18 October 2010]. 104 p. Available from: http://www.infoandina.org/system/files/recursos/AIRachira.pdf
[12] Jiménez, F.S. Características nutricionales de la arracacha (Arracacia xanthorrhiza) y sus perspectivas en la alimentación. [Internet]. Lima, Peru: Red peruana de alimentación y nutrición; 2005 [cited 15 December 2010]. 18 p. Available from: http://www.slideshare.net/kevin1990/caracteristicas-nutricionales-de-la-arrecacha
[13] Araujo-Farro, P.G., Podadera, G., Sobral, P., Menegalli, F. Development of films based on quinoa (Chenopodium quinoa, Willdenow) starch. Carbohydr. Polym., 81 (4), 839-848, 2010.
[14] Pardo, O.H. Propiedades Fisicoquímicas y Mecánicas de Películas Obtenidas a Partir de Almidones de Arracacha. Master's thesis, Universidad Pedagógica y Tecnológica de Colombia, 2011.
[15] ASTM D882-10, Standard Test Method for Tensile Properties of Thin Plastic Sheeting. Pennsylvania (USA): American Society for Testing and Materials, 2010.
[16] Jansson, A., Thuvander, F. Influence of thickness on the mechanical properties for starch films. Carbohydr. Polym., 56 (4), 499-503, 2004.
[17] Rodríguez, M., Osés, J., Ziani, K., Maté, J.I. Combined effect of plasticizers and surfactants on the physical properties of starch based edible films. Food Research International, 39 (8), 840-846, 2006.
[18] Talja, R.A., Heléna, H., Roos, Y.H., Jouppila, K. Effect of various polyols and polyol contents on physical and mechanical properties of potato starch-based films. Carbohydr. Polym., 67 (3), 288-295, 2007.
work_vudjb7e46zeqpksgorkxd6gyqa ----
This Month in Archives of Dermatology
Comparison of Skin Biopsy Triage Decisions in 49 Patients With Pigmented Lesions and Skin Neoplasms: Store-and-Forward Teledermatology vs Face-to-Face Dermatology
Teledermatology is a rapidly growing discipline that addresses issues of access to and quality and cost of dermatologic care. In this prospective study of 49 consecutive patients judged by an internist to require dermatologic consultation for skin neoplasms, Shapiro et al compared skin biopsy triage decisions by 3 experienced observers using store-and-forward teledermatology and face-to-face consultations. Remarkably, 100% agreement in treatment plan was observed between the face-to-face and teledermatologic consultations. The authors comment on issues surrounding the cost-effectiveness of this technology that will ultimately impact the integration of teledermatology into real-world practice settings. See page 525
The Framingham School Nevus Study: A Pilot Study
Malignant melanoma is more common among people with numerous nevi. Our current understanding of the evolution of nevi derives mainly from cross-sectional studies.
In this population-based survey and 1-year prospective follow-up study of a cohort of 52 schoolchildren, Oliveria et al demonstrate the feasibility of performing a longitudinal study to address patterns of nevus evolution. In addition, they describe the prevalence and patterns of nevi among schoolchildren using digital photography and dermoscopic features to explore the interrelationship between nevi and host and environmental factors. See page 545
Management of Lentigo Maligna and Lentigo Maligna Melanoma With Staged Excision: A 5-Year Follow-up
Lentigo maligna (LM) and LM melanoma are pigmented melanocytic neoplasms found mainly on the sun-exposed skin of the head and neck in elderly patients. Current therapies for this disease include surgery, although standard excision margins remain debated. In this retrospective follow-up study of 59 patients with LM or LM melanoma who had the lesion excised with serial excision, Bub et al describe the utility of radial histologic sectioning in preparing the tissue. This technique allows the dermatopathologist to examine multiple sections starting from the center of the tumor and working toward the periphery. The average margin excised was 0.55 cm, and patients were observed for a mean of 57 months after surgery. Only 3 local recurrences and no evidence of metastatic disease were observed, a cure rate that exceeds that of conventional surgery and carries the benefits of tissue conservation. See page 552
Acaricidal Activity of Melaleuca alternifolia (Tea Tree) Oil: In Vitro Sensitivity of Sarcoptes scabiei var hominis to Terpinen-4-ol
Scabies is a worldwide ectoparasitic disease of the skin caused by the mite Sarcoptes scabiei. Current therapies for ordinary scabies consist primarily of topical agents, although oral ivermectin has been used in some settings. The essential oil of the tea tree is an Australian Aboriginal traditional medication for bruises, insect bites, and skin infections, although little is known of its antiectoparasitic activity.
The primary active components of tea tree oil (TTO) are oxygenated terpinoids. In this pilot study, Walton et al evaluated the antiscabetic efficacy of TTO through in vitro and in vivo assays and demonstrated that TTO may represent an effective novel agent for the treatment of scabies. See page 563
Association of Solitary, Segmental Hemangiomas of the Skin With Visceral Hemangiomatosis
Hemangiomas of infancy (HOI) are the most common benign tumors of childhood. Most HOI are solitary localized lesions with a relatively low risk of complications. Multifocal hemangiomas, on the other hand, are associated with a higher potential for concomitant visceral hemangiomas. In this case series supplemented by an extensive review of previously reported cases, Metry et al demonstrate that large solitary segmental HOI may also be associated with extracutaneous hemangiomatosis. Evaluation of patients with segmental hemangiomas should be tailored to risk factors and signs and symptoms that may be present. See page 591
SECTION EDITOR: ROBIN L. TRAVERS, MD
THIS MONTH IN ARCHIVES OF DERMATOLOGY (REPRINTED) ARCH DERMATOL / VOL 140, MAY 2004 WWW.ARCHDERMATOL.COM ©2004 American Medical Association. All rights reserved.
work_vug3muwnvvbw5cnpmjiw6ailey ----
The Open Anthropology Journal, 2010, 3, 142-147
1874-9127/10 2010 Bentham Open
Open Access
Dental Contribution to an Anthropological Forensic Case Work of Skeletal Remains in Miglionico Countryside (South Italy)
E. Nuzzolese*,1, C. Liuzzi1, G. Quarta2, L. Calcagnile2 and G. Di Vella1
1 Sezione di Medicina Legale, DiMIMP, Università degli Studi di Bari, Policlinico, piazza G. Cesare, 70125, Bari, Italy
2 CEDAD, Dipartimento di Ingegneria della Innovazione, Università del Salento, Cittadella della Ricerca (S.S.
7 per Mesagne, Km. 7 +300), 72100 Brindisi
Abstract: This report contains the results of a forensic study of human remains discovered by a forester in the countryside surrounding Miglionico (Southern Italy) in August 2007. A total of 286 bone fragments were excavated at the scene, and an osteological analysis was carried out by two forensic pathologists, one of whom had an anthropological background. A forensic odontologist was also involved to ascertain the completeness of the skeletons and to make an inventory of the skeletal material. It was also hoped to establish a cause of death and the period in which it occurred and, if possible, to attempt to identify the individual. An age and odontological assessment was also provided. This report highlights the contribution of odontological and radiological analysis in relation to fragments of maxillary bones with teeth in situ, and also with teeth lost post-mortem. Findings from morphological, dental and radiological examination, UV illumination of the compact bones and radioisotope scan (14C) revealed that these skeletal remains belonged to at least three separate individuals, dating between 600 and 1000 AD, and therefore having archaeological significance. The case shows the relevance of forensic odontology in an anthropological evaluation which deals with discovered human remains of jaws and teeth.
Keywords: Forensic science, human remains, forensic radiology, forensic odontology, forensic anthropology.
INTRODUCTION
This report contains the results of a forensic study of human remains discovered by a forester in the countryside surrounding Miglionico (Southern Italy) in August 2007. The area is a protected wildlife nature reserve covering 2500 hectares, established in 1976. Within the reserve there is a reservoir, the result of the construction of a dam. The remains were discovered in the clayey area around the lake, having been uncovered by a mudslide.
Some of the remains (vertebrae and fragments of pelvis) appeared to be embedded at the top of the ridge; others (femur, tibia, fibula and vertebral fragments) were found at the bottom of it, about 2 meters below ground level. After an initial inspection the recovery of the fragments commenced, those fragments stuck at the top of the ridge being removed first with the assistance of firemen. It was, in fact, necessary to remove huge amounts of the soil in order to reach the bone fragments, with the ever-present risk of landslide. 286 bone fragments were excavated and recovered from the scene (Fig. 1). The forensic team was asked by the Judge to provide an expert opinion on the nature of the remains and, should they prove human, a cause and period of death together with any other relevant data, such as an identification if one were possible. An osteological analysis was carried out by two forensic pathologists, who had a background in anthropology, and a forensic odontologist, to ascertain the completeness of the skeletons and compile an inventory of the skeletal material. Odontological and radiological analysis of fragments of maxillary bones and teeth was also performed.

*Address correspondence to this author at the Sezione di Medicina Legale, DiMIMP, Università di Bari, Policlinico, piazza G. Cesare, 70125, Bari, Italy; Tel +390805042555; Fax +3908022031198; E-mail: emilionu@tin.it

Fig. (1). Part of the bone fragments recovered from the excavation site: 286 bone fragments were excavated and recovered from the scene.

METHODS AND PROCESS

The skeletal material was analysed according to the standards laid out by the guidelines recommended by the British Association of Biological Anthropologists and Osteologists in conjunction with English Heritage [1].
Recording of the material was carried out using the recognised descriptions contained in Standards for Data Collection from Human Skeletal Remains by Buikstra and Ubelaker [2], and digital photographs have been used for illustration. The material was analysed macroscopically and, where necessary, with the aid of a magnifying glass for identification purposes. The skeletal material was catalogued as:

A. Pelvis and lower limbs: 5 long bone fragments; 2 left tibial fragments; 1 left femur; 1 right tibial fragment; 1 right femoral fragment; 2 left hip fragments with acetabulum cavity; 5 pelvic fragments; 1 right tibial fragment; 1 right femoral fragment; 1 right hip fragment;
B. Ribs: 46 rib fragments;
C. Vertebrae: 13 cervical vertebral fragments; 16 thoracic vertebrae; 5 lumbar vertebral fragments; 3 fragments of sacral vertebrae;
D. Upper limbs: 1 left humeral fragment; 4 left radial fragments; 2 left ulna fragments; 5 unidentified upper limb long bones; 4 humeral fragments with two humeral heads; 3 right ulna fragments; 31 metacarpal bone fragments and proximal, middle and distal phalanges, right and left;
E. Clavicle and scapula: 5 clavicle fragments; 2 right scapula fragments; 1 left scapula joint cavity;
F. Skull bones: 33 skullcap fragments; 2 occipital fragments with nape crests; 4 temporal fragments with mastoid; 5 orbital and zygomatic fragments; 4 maxillary fragments with 18 permanent teeth; 29 permanent teeth with highly worn occlusal faces; 4 right hemimandible fragments; 3 left mandible fragments; 12 mandible fragments;
G. Commingled and crushed bones: 31 small fragments of various skeletal segments.

No significant pathological changes were observed in the skeletal remains of the individuals. Both morphological and instrumental methods were used to determine the age at inhumation [3-7].
From a forensic point of view, correct interpretation of the events surrounding death depends on the taphonomic process, distinguishing possible scavenger modifications from perimortem trauma. Two bone samples were sent to a laboratory (CEDAD Center) in order to perform 14C radiocarbon dating. Samples were analysed and the results calibrated to a calendar date. The odontologist made a reconstruction of the jaw fragments and teeth using dental wax. Recovered teeth were repositioned in the correct alveoli, and it was possible to reposition all teeth based on the occlusal match. The skeletal material analysed was catalogued as follows: 47 permanent teeth (29 of which were lost post mortem) (Fig. 2); 64 skull fragments; 19 jaw fragments. The macroscopic analysis revealed complete arches, with no evidence of dental treatment. There was a high degree of abrasion of the occlusal surfaces and no enamel hypoplastic defects. Only two dental pathologies were discovered: an abscess on a second left molar and some periodontal disease. The reconstructed jaws underwent digital photography (Fig. 3) and radiological imaging using a panoramic X-ray device (Proline XC, Planmeca) at 68 kV, 8 mA exposure (Fig. 4). As dry bones had to be exposed to radiographic imaging, a plastic glass filled with water was used in order to reduce significant burnout in the anterior region [8]. Digital periapical X-ray images of canines were also taken with a Nomad Examiner handheld portable device (Aribex inc.) and a Trophy Radiovideography digital sensor connected to a computer, using 0.02 seconds exposure time and 60 kV (Fig. 5).

Fig. (2). Teeth recovered from the excavation site: 47 permanent teeth (29 teeth lost post mortem).

Fig. (3). Jaw fragments and teeth repositioned.

Fig. (4). 1. Adult 1: Panoramic X-ray image of the jaws. 2. Adult 2: Panoramic X-ray image of the mandible. 3. Adult 3: Panoramic X-ray image.
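The radiocarbon step mentioned above first converts a sample's measured 14C content into a "conventional radiocarbon age," which is then calibrated to a calendar date against a curve such as IntCal. A minimal sketch of that first step, using the standard Libby mean-life of 8033 years; the fraction-modern value below is illustrative only and is not a measurement from this case (the calibration step itself is not shown):

```python
import math

LIBBY_MEAN_LIFE = 8033  # years; derived from the conventional Libby half-life of 5568 years

def conventional_radiocarbon_age(fraction_modern):
    """Conventional radiocarbon age (years BP) from the sample's 14C
    activity expressed as a fraction of the modern standard."""
    return -LIBBY_MEAN_LIFE * math.log(fraction_modern)

# Illustrative value only: a sample retaining 85% of modern 14C activity
age_bp = conventional_radiocarbon_age(0.85)
print(round(age_bp))  # a conventional age on the order of 1300 years BP
```

Calendar calibration of such an age is what yields ranges like the 600-1000 AD interval reported for these remains.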
Periapical X-ray images of canines were employed to perform age assessment using Cameriere's method and Adobe Photoshop [9-11]. For macroscopic age assessment the most common items studied include teeth, cranial sutures, the pubis, the auricular surface of the ilium and sternal ribs. Dental wear is also widely used in anthropology, but may not be completely reliable. The method proposed by Cameriere et al. in 2006, which has been validated by further findings, considers the apposition of secondary dentine and has proved to be reliable regardless of the historical period of the specimen; it is particularly suitable for adult individuals.

RESULTS

The surface of the bone was on the whole intact but fragile, although some surface damage had occurred through root action and the post-depositional processes of the clay. Most of the material was fragmented. All breaks to the bone were old and weathered, with no sign of violent lesions. No complete long bones were found, therefore no estimation of stature could be made. However, the epiphyses of surviving long bones had fused diaphyses with no evidence of fusion lines. Fragments of the same bone type indicated 3 separate adult individuals. The jaw and dental samples, after reconstruction and occlusal matching, also confirmed that they had belonged to 3 separate adult individuals.

Fig. (5). 1. Periapical X-ray image of canines of adult 1 for the application of Cameriere's age estimation formula. 2. Periapical X-ray image of adult 2 canine for the application of Cameriere's age estimation formula. 3. Periapical X-ray image of adult 3 canine for the application of Cameriere's age estimation formula.
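Cameriere's method, as applied above, regresses chronological age on the pulp/tooth area ratio traced from the canine radiograph (here measured in Photoshop). A minimal sketch of that final regression step; the intercept and slope below are illustrative placeholders with the expected negative slope, not Cameriere's published coefficients, and the pixel areas are invented for the example:

```python
def estimate_age(pulp_area, tooth_area, intercept=100.0, slope=-530.0):
    """Age estimate from the pulp/tooth area ratio via a linear regression.

    Secondary dentine deposition shrinks the pulp chamber with age, so the
    ratio decreases over time and the slope is negative. The default
    coefficients are illustrative only, not the published values.
    """
    ratio = pulp_area / tooth_area
    return intercept + slope * ratio

# Illustrative areas (arbitrary pixel units traced from a radiograph)
age = estimate_age(pulp_area=1200, tooth_area=10000)
print(round(age, 1))  # 36.4 with these placeholder coefficients
```

Because the input is an area ratio rather than an absolute measurement, the estimate is insensitive to radiographic magnification, which is part of why the method transfers across historical specimens.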
Findings from morphological and radiological examination, UV illumination of the compact bones and radioisotope scan (14C) revealed that these skeletal remains belonged to at least three separate individuals dating between 600 and 1000 AD, thus having an archaeological relevance (Tables 1, 2). The individuals appear to have been healthy, with no signs of any sustained periods of childhood stress from malnutrition or disease. The tooth surface wear and absence of dental treatment suggest the individuals do not belong to the modern era. Adobe Photoshop was used to determine the canine pulp/tooth ratio in order to apply Cameriere's regression formula and finally assess the chronological age. The following age ranges were obtained: adult 1, age 36-40; adult 2, age 26-31; adult 3, age 21-26. The location of the bones cannot be considered a burial, but is most probably the result of displacement during the excavations for the reservoir, and for this reason no further information can be gained on their cultural background.

Table 1.

Table 2.

CONCLUSION

The authors recognise the limitations of this work and acknowledge the benefit of involving a forensic anthropologist. However, the above case demonstrates the relevance of an odontological evaluation when discovered human remains include jaws and teeth. It allows for the collection of potentially valuable evidence and enhances observations and analysis. A consultant in forensic odontology and radiology is particularly helpful in establishing the detailed anatomy of a specimen, individual characteristics and age assessment, both in criminal and anthropological cases. This case confirms the importance of involving odontologists with a forensic background in non-clinical applications such as physical anthropology during investigations on teeth, jaw and skull fossils.
This is particularly relevant in those countries where post mortem investigations are likely to be performed solely by forensic pathologists.

DISCLOSURE

The authors declare they have received no direct or indirect financial incentives or benefits from Planmeca, Aribex inc. and Trophy. All the expenses for this paper were supported by the authors themselves.

REFERENCES

[1] Mays S, Brickley M, Dodwell N. Human Bones from Archaeological Sites. Guidelines for Producing Assessment Documents and Analytical Reports. English Heritage, Swindon, 2002.
[2] Buikstra JE, Ubelaker D, Eds. Standards for Data Collection from Human Skeletal Remains: Proceedings of a Seminar at the Field Museum of Natural History. Arkansas Archaeological Survey Press, Fayetteville, 1994.
[3] Haglund WD, Sorg MH, Eds. Forensic Taphonomy: The Post-Mortem Fate of Human Remains. CRC Press, Boca Raton FL, 1997.
[4] Blaauw M, Christen JA. The problems of radiocarbon dating. Science 2005; 308(5728): 1551-3.
[5] Grün R. Direct dating of human fossils. Am J Phys Anthropol 2006; (Suppl 43): 2-48.
[6] Ubelaker DH, Buchholz BA, Stewart JE. Analysis of artificial radiocarbon in different skeletal and dental tissue types to evaluate date of death. J Forensic Sci 2006; 51(3): 484-8.
[7] Swift B, Lauder I, Black S, Norris J. An estimation of the post-mortem interval in human skeletal remains: a radionuclide and trace element approach. Forensic Sci Int 2001; 117(1-2): 73-87.
[8] Mincer HH, Chaudhry J, Blankenship JA, Turner EW. Postmortem dental radiography. J Forensic Sci 2008; 53(2).
[9] Cameriere R, Brogi G, Ferrante L, Mirtella D, Vultaggio C, Cingolani M, Fornaciari G. Reliability in age determination by pulp/tooth ratio in upper canines in skeletal remains. J Forensic Sci 2006; 51(4).
[10] Cameriere R, Ferrante L, Belcastro MG, Bonfigli B, Rastelli E, Cingolani M.
Age estimation by pulp/tooth ratio in canines by mesial and vestibular peri-apical X-rays. J Forensic Sci 2007; 52(5): 1151-5.
[11] Cameriere R, Cunha E, Sassaroli E, Nuzzolese E, Ferrante L. Age estimation by pulp/tooth area ratio in canines: study of a Portuguese sample to test. Forensic Sci Int 2009; 193(1-3): 128.e1-6.

Received: January 16, 2010 / Revised: April 24, 2010 / Accepted: April 29, 2010

© Nuzzolese et al.; Licensee Bentham Open. This is an open access article licensed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/by-nc/3.0/) which permits unrestricted, non-commercial use, distribution and reproduction in any medium, provided the work is properly cited.

Discussions and Closures

Closure to "Negative Surges in Open Channels: Physical and Numerical Modeling" by Martina Reichstetter and Hubert Chanson
DOI: 10.1061/(ASCE)HY.1943-7900.0000674

Martina Reichstetter1 and Hubert Chanson2
1 Ph.D. Student, School of Geography, Planning and Environmental Management, Univ. of Queensland, Brisbane, QLD 4072, Australia; formerly, Graduate Student, School of Civil Engineering, Univ. of Queensland, Brisbane, QLD 4072, Australia.
2 Professor, Hydraulic Engineering, School of Civil Engineering, Univ. of Queensland, Brisbane, QLD 4072, Australia (corresponding author). E-mail: h.chanson@uq.edu.au

The authors thank the discussers for their pertinent comment. Indeed, a negative surge is observed in the upstream reservoir during a dam break wave, and there is an abundant literature on the topic. The complete solution of dam break wave is commonly treated in modern textbooks (Henderson 1966; Montes 1998; Sturm 2001; Chanson 2004a, b).
The analytical solution of dam break wave advancing over some water was first solved by Barré de Saint-Venant (1871) for a rising tide in a channel with an initial water depth. Relevant experimental evidence included Bazin (1865, pp. 536–553) [see also Darcy and Bazin (1865), Schoklitsch (1917), Cavaillé (1965), and Estrade (1967)]. Interestingly, Bazin (1865) repeated experiments in a large canal with different initial conditions to check his findings, whereas Cavaillé (1965) repeated identical experiments on smooth and rough inverts for three initial water depth-to-reservoir height ratios. Hager and Chervet (1996) reviewed the historical developments. Experimental studies of negative surges included the free-surface measurements of Favre (1935) and the unsteady velocity data of Reichstetter and Chanson (2013) and Leng and Chanson (2013). Numerical studies of negative surges are more numerous (Tan and Chu 2009; Reichstetter and Chanson 2013), albeit restricted by the limited amount of detailed validation data sets. In relation to the original data at x = 10.8 m [Fig. 4 in Reichstetter and Chanson (2013)], the water depth data were recorded 0.35 m upstream of the tainter gate, itself located 0.85 m upstream of a free overfall (Fig. 1). Fig. 1 presents an undistorted dimensioned sketch of the channel downstream end. The longitudinal flow profile was substantially determined by the hydraulic control mechanism operating within the system (Henderson 1966; Chanson 2004b). Prior to gate opening, the channel flow was controlled by the undershoot tainter gate. The flow was subcritical upstream of the gate and supercritical between the tainter gate and free overfall (Fig. 1, solid line). During the rapid complete gate opening, a transient flow took place during which the channel flow was controlled briefly by critical flow conditions at the gate location.
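Critical flow control, as invoked above for the gate and overfall sections, corresponds to a local Froude number of unity. A minimal sketch of the standard rectangular-channel relations; the unit discharge value is illustrative, not a measurement from the study:

```python
G = 9.81  # gravitational acceleration (m/s^2)

def critical_depth(q):
    """Critical depth (m) in a rectangular channel for unit discharge q (m^2/s):
    d_c = (q^2 / g)^(1/3), the depth at which the Froude number equals 1."""
    return (q ** 2 / G) ** (1.0 / 3.0)

def froude_number(q, d):
    """Froude number Fr = V / sqrt(g * d) with depth-averaged velocity V = q / d."""
    return (q / d) / (G * d) ** 0.5

# Illustrative unit discharge
q = 0.10  # m^2/s
dc = critical_depth(q)
print(round(dc, 4), round(froude_number(q, dc), 6))  # Fr = 1 at the critical depth
```

A control section fixes the depth-discharge relation locally, which is why the shift of the control from the gate to the overfall alters the water-surface profile recorded upstream.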
This was followed by a gradually varied flow motion in the flume, which became controlled by the critical flow conditions at the overfall (Fig. 1, dashed line). The channel flow experienced a shift in downstream control location that was responsible for a slight increase in water depth at x = 10.8 m beyond a certain time, as recorded by the acoustic displacement meter [Fig. 4 in Reichstetter and Chanson (2013)] and observed with video camera and digital photography.

Fig. 1. Sketch of the negative surge generated by the rapid tainter gate opening

© ASCE 07014010-1, J. Hydraul. Eng. 2014.140.

Notation

The following symbols are used in this paper:
d = water depth (m) measured above the invert;
h = initial undershoot gate opening (m);
x = longitudinal distance (m) positive downstream, with x = 0 at test section upstream end; and
xGate = longitudinal coordinate (m) of the tainter gate (xGate = 11.15 m herein).

Subscripts

Gate = flow properties at tainter gate; and
x = longitudinal direction positive downstream.

References

Barré de Saint-Venant, A. J. C. (1871). "Théorie du mouvement non permanent des eaux, avec application aux crues de rivières et à l'introduction des marées dans leur lit (Theory of the unsteady motion of waters, with application to river floods and to the introduction of tides into their beds)." Comptes Rendus des séances de l'Académie des Sciences, 73(4), 237–240 (in French).
Bazin, H. (1865). "Recherches expérimentales sur la propagation des ondes (Experimental research on wave propagation)." Mémoires présentés par divers savants à l'Académie des Sciences, Vol. 19, 495–644 (in French).
Cavaillé, Y. (1965).
"Contribution à l'étude de l'écoulement variable accompagnant la vidange brusque d'une retenue (Contribution to the study of unsteady flow following a dam break)." Publication Scientifique et Technique du Ministère de l'Air, Vol. 410, 165 (in French).
Chanson, H. (2004a). Environmental hydraulics of open channel flows, Elsevier-Butterworth-Heinemann, Oxford, U.K., 483.
Chanson, H. (2004b). The hydraulics of open channel flow: An introduction, 2nd Ed., Butterworth-Heinemann, Oxford, U.K., 585.
Darcy, H. P. G., and Bazin, H. (1865). "Recherches hydrauliques (Hydraulic research)." Imprimerie Impériale, Paris, Parties 1ère et 2ème (in French).
Estrade, J. (1967). "Contribution à l'étude de la suppression d'un barrage. Phase initiale de l'écoulement (Contribution to the study of dam break. Initial stages of the wave)." Bulletin de la Direction des Etudes et Recherches, Series A, Nucléaire, Hydraulique et Thermique, Vol. 1, EDF Chatou, France, 3–128 (in French).
Favre, H. (1935). Etude théorique et expérimentale des ondes de translation dans les canaux découverts (Theoretical and experimental study of travelling surges in open channels), Dunod, Paris (in French).
Hager, W. H., and Chervet, A. (1996). "Geschichte der Dammbruchwelle (History of the dam break wave)." Wasser Energie Luft, 88(3–4), 49–54 (in German).
Henderson, F. M. (1966). Open channel flow, MacMillan, New York.
Leng, X., and Chanson, H. (2013). "Effect of bed roughness on the propagation of negative surges in rivers and estuaries." Proc., 21ème Congrès Français de Mécanique CFM 2013, Association Française de Mécanique, Paris, France, 6 (in French).
Montes, J. S. (1998). Hydraulics of open channel flow, ASCE Press, New York, 697.
Reichstetter, M., and Chanson, H. (2013). "Negative surges in open channels: Physical and numerical modeling." J. Hydraul. Eng., 10.1061/(ASCE)HY.1943-7900.0000674, 341–346.
Schoklitsch, A. (1917). "Über Dammbruchwellen (On dam break waves)." Sitzungsberichten der Königlichen Akademie der Wissenschaften, Vol.
126, Part IIa, 1489–1514 (in German).
Sturm, T. W. (2001). "Open channel hydraulics." Water resources and environmental engineering series, McGraw Hill, Boston, 493.
Tan, L., and Chu, V. H. (2009). "Lauber and Hager's dam-break wave data for numerical model validation." J. Hydraul. Res., 47(4), 524–528.

ORIGINAL PAPER

Comparative ophthalmic assessment of patients receiving tafenoquine or chloroquine/primaquine in a randomized clinical trial for Plasmodium vivax malaria radical cure

Sukhuma Warrasak · Ataya Euswas · Mark M. Fukuda · Mali Ittiverakul · R. Scott Miller · Srivicha Krudsood · Colin Ohrt

Received: 16 April 2018 / Accepted: 11 August 2018 / Published online: 29 September 2018
© The Author(s) 2018

Abstract

Purpose Ophthalmic safety observations are reported from a clinical trial comparing tafenoquine (TQ) efficacy and safety versus sequential chloroquine (CQ)/primaquine (PQ) for acute Plasmodium vivax malaria.
Methods In an active-control, double-blind study, 70 adult subjects with microscopically confirmed P.
vivax malaria were randomized (2:1) to receive 400 mg TQ × 3 days or 1500 mg CQ × 3 days then 15 mg PQ × 14 days. Main outcome measures: clinically relevant changes at Day 28 and Day 90 versus baseline in the ocular examination, color vision evaluation, and corneal and retinal digital photography.
Results Post-baseline keratopathy occurred in 14/44 (31.8%) patients with TQ and 0/24 with CQ/PQ (P = 0.002). Mild post-baseline retinal findings were reported in 10/44 (22.7%) patients receiving TQ and 2/24 (8.3%) receiving CQ/PQ (P = 0.15; treatment difference 14.4%, 95% CI −5.7, 30.8). Masked evaluation of retinal photographs identified a retinal hemorrhage in one TQ patient (Day 90) and a slight increase in atrophy from baseline in one TQ and one CQ/PQ patient. Visual field sensitivity (Humphrey™ 10-2 test) was decreased in 7/44 (15.9%) patients receiving TQ and 3/24 (12.5%) receiving CQ/PQ; all

Disclaimer: This manuscript was reviewed by the Walter Reed Army Institute for Research and the United States Army Medical Research and Materiel Command. There is no objection to its publication or dissemination. The opinions reflected herein are those of the authors and do not necessarily reflect the official policy, position or opinions of the US Army, the US Department of Defense, or the US Government.

S. Warrasak (&) · A. Euswas, Department of Ophthalmology, Faculty of Medicine, Ramathibodi Hospital, Mahidol University, Bangkok 10400, Thailand; e-mail: sukhuma@csloxinfo.com
A. Euswas e-mail: ataya@csloxinfo.com
Present Address: S. Warrasak, The Eye Center, Debaratana Medical Center, Ramathibodi Hospital Faculty, Mahidol University, 270 Rama 6 Road, Rajthevi, Phyathai, Bangkok 10400, Thailand
Present Address: A. Euswas, The Bangkok Eye Center, Bangkok Hospital Medical Group, Bangkok 10310, Thailand
M. M. Fukuda · M. Ittiverakul · R. S.
Miller, Department of Immunology and Medicine, Armed Forces Research Institute of Medical Sciences (AFRIMS), 315/6 Rajvithi Road, Bangkok 10400, Thailand; e-mail: mark.fukuda@afrims.org
M. Ittiverakul e-mail: mali.ittiverakul@afrims.org
R. S. Miller e-mail: scott.miller@gatesfoundation.org

cases were < 5 dB. There were no clinically relevant changes in visual acuity or macular function tests.
Conclusions There was no evidence of clinically relevant ocular toxicity with either treatment. Mild keratopathy was observed with TQ, without conclusive evidence of early retinal changes. Eye safety monitoring continues in therapeutic studies of low-dose tafenoquine (300 mg single dose).
Clinical trial registration Clinicaltrials.gov identifier: NCT01290601.
Keywords Ophthalmic safety · Tafenoquine · Chloroquine · Primaquine · Keratopathy · Retinopathy · Plasmodium vivax · Clinical trial · Malaria

Abbreviations
CQ Chloroquine
HCQ Hydroxychloroquine
PQ Primaquine
TQ Tafenoquine

Background

Plasmodium vivax malaria is endemic in Central and South America, South East Asia, Oceania, and parts of Africa. It represents a substantial health and economic burden with an estimated 132–391 million infections annually [1, 2]. Despite its high prevalence, P. vivax malaria is considered a neglected disease and only recently has its true impact on morbidity and particularly mortality been acknowledged [2–4]. P. vivax treatment and control are complicated by the ability of the parasite to form hypnozoites. These develop from sporozoites that enter into a dormant state, rather than actively dividing in the host liver.
Hypnozoites periodically reenter the development cycle, causing clinical relapse weeks or months after the infectious inoculation. Primaquine (PQ) remains the only agent available for hypnozoite eradication, termed 'radical cure'. PQ is given for 14 days following standard treatment with a blood schizonticide, usually chloroquine (CQ). However, patients often feel well after the first few days of commencing treatment, so adherence with 14-day PQ is problematic and clinical effectiveness is therefore compromised [5]. Thus, there is a need to develop new treatments with improved convenience for patients that are safe and effective for P. vivax radical cure. In addition to relieving the clinical burden of repeated P. vivax relapses, the availability of an effective therapy for P. vivax radical cure has particular relevance for malaria elimination operations [6–9]. Tafenoquine (TQ) is an 8-aminoquinoline discovered and developed by the Walter Reed Army Institute of Research [10]. Chemically related to PQ, TQ has potent antihypnozoite activity [11], but has a 2–3-week half-life [8, 12]. This raises the possibility of shorter treatment regimens for P. vivax radical cure. This clinical trial was conducted under an NIH Challenge Grant entitled 'Tafenoquine, a Novel Drug for Malaria Prevention and Control.' The primary objective of this study was to determine if 3-day TQ monotherapy could clear and cure P. vivax blood-stage infection, with secondary objectives of rapidity of parasite clearance from the bloodstream, antihypnozoite activity, population pharmacokinetics, and safety. The comparator arm was standard sequential therapy with CQ followed by PQ. The efficacy and main safety results of this study have been previously reported [13]. Briefly, although there was no difference in efficacy in this study, the slow rate of parasite, gametocyte and fever clearance indicated that tafenoquine should not be used as monotherapy for radical cure of P.
vivax malaria, at which point clinical development in this indication was paused [13].

Present Address: R. S. Miller, The Malaria Program, Bill and Melinda Gates Foundation, PO Box 23350, Seattle, WA 98102, USA
S. Krudsood, Faculty of Tropical Medicine, Mahidol University, 420/6 Ratchawithi Road, Ratchathewi, Bangkok 10400, Thailand; e-mail: srivicha.kru@mahidol.ac.th
C. Ohrt (&), Division of Experimental Therapeutics, Walter Reed Army Institute of Research, Silver Spring, USA; e-mail: colin@consortiumha.org
Present Address: C. Ohrt, Consortium for Health Action, Savage, MN, USA
Present Address: C. Ohrt, Phnom Penh, Cambodia
C. Ohrt, Hanoi, Vietnam

Int Ophthalmol (2019) 39:1767–1782

More recently, GlaxoSmithKline and the Medicines for Malaria Venture (MMV) have been developing single-dose TQ plus CQ (TQ/CQ) for P. vivax radical cure. A dose-ranging clinical trial from this programme showed that a single 300 mg TQ dose plus 3-day CQ prevented P. vivax relapse in 89% (95% CI 77–95) of patients over a 6-month period compared with 37.5% (95% CI 23–52) with CQ alone (treatment difference 51.7% [95% CI 35–69], P < 0.0001) [14]. In the same study, CQ plus 15 mg × 14 days PQ prevented relapse in 77.3% (95% CI 63–87) of patients [14]. CQ is an antimalarial drug with well-known ophthalmic toxicity [15]. Similar to several other drugs, such as amiodarone and tamoxifen [16, 17], CQ causes keratopathy (lipid deposits) in the cornea resulting in changes ranging from epithelial haze to a dense 'whorl' [17]. These findings are usually considered benign and reversible and, in the case of CQ, generally occur during extended use in rheumatic diseases [17]. However, retinal toxicity associated with CQ can lead to permanent vision impairment, which is dose and duration related [18, 19]. Early CQ retinal toxicity can appear as mild retinal pigment epithelium changes [18].
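Between-group comparisons like those above (and the keratopathy comparison in the abstract, 14/44 on TQ versus 0/24 on CQ/PQ, reported as P = 0.002) are commonly tested with a Fisher exact test on the 2×2 table. A minimal self-contained sketch using only the standard library; the published P values may come from a different exact or asymptotic procedure, so treat this as illustrative:

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher exact test for the 2x2 table [[a, b], [c, d]].

    With all margins fixed, cell `a` follows a hypergeometric distribution;
    the two-sided p-value sums the probabilities of all tables that are no
    more probable than the observed one.
    """
    n = a + b + c + d
    row1, col1 = a + b, a + c
    denom = comb(n, col1)

    def pmf(k):
        return comb(row1, k) * comb(n - row1, col1 - k) / denom

    p_obs = pmf(a)
    lo, hi = max(0, col1 - (n - row1)), min(row1, col1)
    return sum(pmf(k) for k in range(lo, hi + 1) if pmf(k) <= p_obs * (1 + 1e-9))

# Keratopathy counts from the abstract: 14 of 44 (TQ) vs 0 of 24 (CQ/PQ)
p = fisher_exact_two_sided(14, 30, 0, 24)
print(f"{p:.4f}")  # a small p-value, on the order of 10^-3
```

The exact test is the natural choice here because one cell is zero, which invalidates the usual chi-square approximation.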
A study published in 2010 found possible eye safety findings with TQ in a Phase III prophylaxis trial in Australian soldiers [20], though there were no baseline ocular data to evaluate a drug effect. Also, ocular safety evaluation was based on macular function testing plus digital photography without retinal examinations, so correlation between morphological changes and functional impairment was not possible. In the ongoing P. vivax radical cure clinical trial program, TQ/CQ are co-administered, and it will not be possible to differentiate between CQ and TQ ocular safety findings. Here we report ophthalmic safety data from a randomized clinical trial of tafenoquine monotherapy in P. vivax radical cure. This study was designed to determine whether the previous eye findings with TQ had clinical significance and to assess whether there is an ocular safety risk with TQ [13]. Thus, this study reports the only available data evaluating the ocular safety of TQ alone in P. vivax malaria patients.

Methods

Patients

Eligible subjects were 20–60 years old with microscopically confirmed P. vivax malaria (parasite density > 500 and < 200,000 µL−1). Exclusion criteria were: mixed malaria infection; severe vomiting; demonstrated glucose-6-phosphate dehydrogenase deficiency; history of antimalarial therapy within 30 days; clinically significant illness or abnormal laboratory values; history of allergy to study drugs or 8-aminoquinolines; use of an investigational drug within 30 days or 5 half-lives (whichever longer); history of previous eye surgery; corneal or retinal abnormalities; concomitant medication that would affect safety evaluations; and a clinical assessment of risk for acute angle closure glaucoma. A negative pregnancy test was required from all women of child-bearing potential. Pre-menarchal, pregnant, and breastfeeding women were excluded.
Design and procedures

This randomized, active-control, double-blind, double-dummy study was conducted at the Bangkok Hospital for Tropical Diseases, Bangkok, Thailand, between September 2003 and January 2005. All drug treatments were supplied as capsules containing (free base equivalent): TQ 200 mg (GlaxoSmithKline, UK), CQ 150 mg (AstraZeneca UK Ltd, UK), or PQ 15 mg (Boucher and Muir Pty Ltd, Australia). Matching placebos were used to maintain study blinding. The computer-generated randomization schedule was provided by GlaxoSmithKline. Eligible subjects were allocated a sequential randomization number and treatment allocated through the GlaxoSmithKline Registration and Medication Ordering System (RAMOS) in a 2:1 ratio to TQ or the comparator arm using blocks of six. Subjects, investigators and study staff remained blinded to treatment allocation throughout the study. Emergency unblinding was possible through RAMOS. There was one code break at Day 4 for a patient found to have a history of eye surgery/abnormality, and this patient was withdrawn from the study. The study design is shown in Fig. 1. Patients were randomized to receive either TQ 400 mg for 3 days (Days 0, 1 and 2) or CQ 600 mg for 2 days (Days 0 and 1) followed by CQ 300 mg for 1 day (Day 2) then PQ 15 mg for 14 days (Days 3–16). All doses are given as free base equivalents. Follow-up continued to Day 120. All doses were directly observed and given with food. Vomiting within 1 h of dosing was to result in redosing. Vomiting of any subsequent dose was to lead
Ophthalmic assessments were conducted at baseline (within 36 h of receiving the first dose of study drug), Day 28, and Day 90 at the Ramathibodi Faculty Hospital, Bangkok, Thailand. Clinically relevant changes versus baseline were recorded as adverse events using the MedDRA coding system (version 8.1). Patients with clinically relevant changes were followed up until resolution or until improvement to a degree that clinical monitoring was no longer required. Ophthalmic assessment criteria for patient study withdrawal were prospectively defined. Best spectacle-corrected visual acuity was assessed with the high contrast visual acuity (HCVA) test using the Early Treatment Diabetic Retinopathy Study (ETDRS) letter chart. A clinically relevant change was defined as a decrease in visual acuity of 0.08 logMAR (minimum angle of resolution), equivalent to a five-letter decrease on the high contrast chart or one line. Patients with decreased vision of 0.3 logMAR, equivalent to a 15-letter decrease on the high contrast chart or three lines, were to be withdrawn. Ocular examination consisted of slit-lamp biomicroscopy of the anterior segment, applanation tonometry, and dilated fundus examination of the central and peripheral fundus. Ocular images (corneal and retinal digital photography) were documented. Clinically relevant findings in the cornea were graded as: Grade 0 = no haze, Grade 1 = trace to minimal haze, Grade 2 = moderate haze, Grade 3 = marked haze, Grade 4 = significant haze. Digital photographs were taken of the cornea with the Haag-Streit EyeCap™ system (Haag-Streit AG, Switzerland), which included a BQ900 slit-lamp biomicroscope (magnification 16×) and a computer loaded with the software.
Retinal changes in the macula were assessed and recorded as (in increasing intensity): slight granularity; mottling of pigmentation; definite area of depigmentation and/or hyperpigmentation; and ring of depigmentation surrounded by a ring of hyperpigmentation (any degree of bull's eye maculopathy). Any patient with any degree of bull's eye retinopathy, confirmed by clinical assessment, digital photography, or fundus fluorescein angiography, was to be withdrawn from the study. Digital images of the retina were captured with the fundus camera of the Topcon IMAGEnet™ 2000 system (TRC-50IX, IMAGEnet™ 2000, Topcon, Japan) at 20°, 35°, and 50° at baseline, Day 28, and Day 90. A masked review of the photographs was conducted at the Fundus Photograph Reading Center, University of Wisconsin. Macular function tests were: the Amsler grid test, the Humphrey™ 10-2 visual field test, the macular stress test, and color vision evaluation using the pseudoisochromatic plate (PIP) Ishihara-compatible test and the L'Anthony 40 hue test. For the Amsler grid test, the presence of a repeatable (two assessments within 10 min) area of distortion (metamorphopsia) or scotoma covering more than one grid block (> 1° of visual angle) was to result in patient withdrawal. To identify macular disease, a Humphrey™ 10-2 visual field test was performed. Presence of a treatment-emergent scotoma in two tests at least 30 min apart with a 5 dB decrease in sensitivity (local or overall visual field) was considered clinically relevant [21], and a decrease in sensitivity of at least 10 dB was to lead to patient withdrawal. For the macular stress test, using the time to read the line above the best spectacle-corrected visual acuity as the endpoint, an increase of 30 s in recovery time versus baseline was considered clinically relevant.
For assessment of color vision, an increase of two in the number of PIP plates missed required further investigation using the L'Anthony 40 hue test; greater than two reversals was clinically significant. Subjects with four or more reversals on the L'Anthony 40 hue test were to be withdrawn.

[Fig. 1: Study design and patient numbers]

Statistical analysis

The evaluation of all ophthalmic safety data was made in the safety population, comprising all subjects who received at least one dose of study medication, based on actual available observations at each time point. As this study was designed for an efficacy endpoint, no formal sample size calculation was made for the ophthalmic safety outcomes. Confidence intervals (CI) and P values for the differences in binomial proportions are exact and were produced by the inversion of two one-sided tests using the standardized (score) statistic. Calculations were performed using the StatXact statistical software (version 6.3). P values are presented where outcomes were significantly different. As this study was not powered for assessment of ophthalmic outcomes, non-significant P values are presented with 95% confidence intervals for the treatment difference to provide some indication of trend.

Results

Patient baseline characteristics are summarized in Table 1; they were similar for the two treatment groups. All patients were from Myanmar or Thailand. Figure 1 shows the study design and patient numbers included for the ophthalmic assessments. The intention-to-treat population included 70 patients, 46 randomized to TQ and 24 to CQ/PQ. All patients were fully compliant with medication. At Day 28, 44 patients in the TQ arm and 24 in the CQ/PQ arm were available for analysis, whereas by Day 90, 7/46 (15.2%) in the TQ arm versus 2/24 (8.3%) in the CQ/PQ arm had been lost to follow-up. Adverse events recorded under the term 'eye disorders' were more common with TQ (17/46 [37.0%]) than with CQ/PQ (4/24 [16.7%]). In the TQ group (n = 46), adverse events were described as keratopathy (n = 14, 30.4%), retinopathy/retinal disorder (n = 10, 21.7%), and eye irritation (n = 1, 2.2%). In the CQ/PQ group (n = 24), there were two cases of conjunctivitis (8.3%) and one each of retinopathy, eye inflammation, and eye pain (4.2% each).

Table 1 Summary of patient baseline characteristics

Characteristic                                                Tafenoquine (N = 46)     Chloroquine/primaquine (N = 24)
Median age, years (range)                                     23.5 (20–43)             30.0 (20–55)
Male sex, n (%)                                               37 (80.4)                20 (83.3)
P. vivax total parasite count, median parasites µL⁻¹ (range)  4000 (200–44,000)        2730 (600–30,000)
P. vivax gametocyte count, median parasites µL⁻¹ (range)      n = 26: 80 (20–640)      n = 15: 60 (40–280)
Mean temperature, °C (SD) [range]                             37.0 (0.9) [36.0–40.0]   36.8 (0.7) [36.0–39.3]
Previous malaria (yes), n (%)                                 30 (65.2)                16 (66.7)
Malaria episodes in previous 6 months, n (%)
  0                                                           14 (46.7)                10 (62.5)
  1                                                           11 (36.7)                4 (25.0)
  2                                                           3 (10.0)                 2 (12.5)
  ≥ 3                                                         2 (6.7)                  0
Median time since last malaria, years (range)                 0.3 (0.2–1.8)            0.6 (0.2–6.3)

Corneal assessments

The outcomes of ophthalmic clinical assessments are summarized in Table 2. At baseline, 1/46 (2.2%) patients in the TQ arm and 0/24 in the CQ/PQ arm had keratopathy. At any follow-up visit (Day 28 or Day 90), keratopathy was observed in 14/44 (31.8%) patients with TQ versus none with CQ/PQ (P = 0.002). At Day 28, 12/44 (27.3%) patients receiving TQ had keratopathy; 11/44 (25.0%) were Grade I, 1/44 (2.3%) was Grade II.
All of these patients had keratopathy in both eyes. By Day 90, keratopathy had resolved in 5/12 patients. Of the remaining seven patients, two were lost to follow-up at Day 90 and four had keratopathy reported as ongoing, plus one had a minor Grade I keratopathy. An additional two patients in the TQ arm developed Grade I new-onset keratopathy by Day 90. In all cases, the keratopathy was located in the inferior part of the cornea and did not involve the optical zone (Fig. 2). None of the patients who had keratopathy required further follow-up after Day 90, as the keratopathy was mild and did not affect vision.

Table 2 Ocular findings by clinical assessment in P. vivax malaria patients treated with tafenoquine monotherapy or chloroquine/primaquine sequential therapy

Ocular findings                                            Baseline    Day 28       Day 90     Post-baseline total
Tafenoquine                                                (N = 46)    (N = 44)     (N = 37)   (N = 44)
Keratopathy (direct examination), patients                 1 (2.2)     12 (27.3)^a  7 (18.9)   14 (31.8)
  Left eye                                                 0           12 (27.3)^a  7 (18.9)   14 (31.8)
  Right eye                                                1 (2.2)     12 (27.3)^a  6 (16.2)   14 (31.8)
Retinal findings (direct examination), patients            1 (2.2)     9 (20.5)     8 (21.6)   10 (22.7)
  Left eye                                                 1 (2.2)     9 (20.5)     8 (21.6)   10 (22.7)
  Right eye                                                1 (2.2)     8 (18.2)     7 (18.9)   9 (20.5)
Keratopathy and retinal findings^b                         0           6 (13.6)     3 (8.1)    7 (15.9)
Decreased sensitivity on visual field testing, patients^c  1 (2.2)     4 (9.1)      4 (10.8)   7 (15.9)
  Left eye                                                 0           2 (4.5)      4 (10.8)   5 (11.4)
  Right eye                                                1 (2.2)     4 (9.1)      1 (2.7)    5 (11.4)
Retinal findings plus decreased sensitivity, patients      0           1 (2.3)      1 (2.7)    2 (4.5)^d
  Left eye                                                 0           1 (2.3)      1 (2.7)    2 (4.5)^d
  Right eye                                                0           1 (2.3)      1 (2.7)    2 (4.5)^d
Chloroquine/primaquine                                     (N = 24)    (N = 24)     (N = 22)   (N = 24)
Keratopathy (direct examination)                           0           0            0          0
  Left eye                                                 0           0            0          0
  Right eye                                                0           0            0          0
Retinal findings (direct examination), patients            2 (8.3)^e   2 (8.3)^e    1 (4.5)    2 (8.3)^e
  Left eye                                                 1 (4.2)^e   2 (8.3)^e    1 (4.5)    2 (8.3)^e
  Right eye                                                2 (8.3)^e   2 (8.3)^e    1 (4.5)    2 (8.3)^e
Decreased sensitivity on visual field testing, patients^c  0           3 (12.5)^e   0          3 (12.5)^e
  Left eye                                                 0           3 (12.5)^e   0          3 (12.5)^e
  Right eye                                                0           2 (8.3)^e    0          2 (8.3)^e
Retinal findings plus decreased sensitivity, patients      0           2 (8.3)^e    0          2 (8.3)^e
  Left eye                                                 0           2 (8.3)^e    0          2 (8.3)^e
  Right eye                                                0           2 (8.3)^e    0          2 (8.3)^e

Numbers are n (%).
^a 11/44 (25.0%) Grade I, 1/44 (2.3%) Grade II
^b All patients had bilateral findings
^c Abnormal Humphrey™ 10-2 test; decreases in sensitivity were < 5 dB (not clinically relevant) in all cases
^d Including one patient who had decreased sensitivity in the left eye at Day 90 and retinal findings in the left eye on Day 28
^e Including one patient with a dot hemorrhage near the fovea in both eyes at baseline and Day 28. Day 90 retinal examination was normal for both eyes. Decreased sensitivity on visual field testing was observed at Day 28 in both eyes and was normal at other assessments

Retinal assessments

Baseline clinical fundus examination revealed one case of bilateral retinal dot hemorrhage in the CQ/PQ group, probably related to P. vivax malaria (Fig. 3a). In the TQ group, there was one case of post-baseline unilateral (left eye) retinal hemorrhage on Day 90 (Fig. 3b). Retinal abnormalities were described as mottling of the retinal pigment epithelium, a group of yellow pigment deposits, or hypopigmentation (Fig. 4). Baseline retinal abnormalities were noted clinically in 1/46 (2.2%) patients in the TQ group and 2/24 (8.3%) in the CQ/PQ group. Post-baseline retinal findings were reported on clinical examination in 10/44 (22.7%) patients with TQ and 2/24 (8.3%) with CQ/PQ (P = 0.15, treatment difference 14.4%; 95% CI −5.7, 30.8) (Table 2). At Day 28, 9/44 (20.5%) patients receiving TQ had retinal findings reported (8/44 in both eyes, 1/44 in the left eye only). At Day 90, one patient had been lost to follow-up, retinal findings had resolved in one patient, and an additional patient had changes from baseline.
Thus, 8/37 (21.6%) patients in the TQ group had retinal findings at Day 90 (7/37 in both eyes, 1/37 in the left eye only). In the CQ/PQ group, 1/24 (4.2%) patients had retinal findings on clinical examination of mild pigmentation, with abnormal assessments at Days 28 and 90 for the left eye and at Days 0, 28, and 90 for the right eye. Clinical observations at Day 28 led to further investigation by fundus fluorescein angiogram (FFA) in one TQ patient with retinal findings (Fig. 4) and in one CQ/PQ patient with decreased sensitivity and retinal findings. Note that FFA was not pre-specified in the protocol, but was conducted ad hoc only in these two patients in response to clinical findings. In the TQ patient, for whom retinal pigment epithelium alterations in the left eye were seen clinically as a group of yellow pigment deposits or hypopigmentation as shown in Fig. 4, abnormal FFA findings showed pinpoints of hyperfluorescence in the early phase of the angiograms that persisted through the late frames (Fig. 5). These hyperfluorescence spots corresponded to the focal depigmented retinal pigment epithelial cells that were observed fundoscopically (circled areas in Figs. 4, 5). However, as there was no baseline FFA, it is not clear whether these observations were related to drug treatment. No subsequent FFA assessments were obtained at Day 90, as retinal findings were stable, though subsequent review of digital photographs indicated a slight change in retinal pigment epithelium at Day 90. In this patient, threshold sensitivity in the left eye at Day 90 was decreased by < 5 dB and other macular function tests were unremarkable.

[Fig. 2 Treatment-emergent keratopathy at Day 28 in four P. vivax malaria patients receiving tafenoquine; corneal changes were seen clinically as brown pigment deposits in the lower part of the cornea (black arrows) and did not involve the optical zone. Images c and d are from the same individual.]
For the patient receiving CQ/PQ who had retinal pigment epithelium changes, no FFA abnormalities were detected. On masked digital photograph review by an independent reviewer, there were no changes from baseline noted at Day 28 for either group. At Day 90, apart from the patient with a retinal hemorrhage in the left eye, one TQ patient had a slight increase in atrophy versus baseline, and one CQ/PQ patient had retinal atrophy at baseline with a slight increase in atrophy versus baseline at Day 90.

Visual acuity, macular function tests, and visual field testing

There were no clinically relevant changes in visual acuity or macular function tests (Table 3). At baseline, 1/46 (2.2%) patients in the TQ group had an abnormal Humphrey™ 10-2 test in the right eye, which was not evident at later assessments. Post-baseline, a similar proportion of patients in the TQ group (7/44 [15.9%]) and the CQ/PQ group (3/24 [12.5%]) had an abnormal Humphrey™ 10-2 test (Tables 2, 3). In the TQ group at Day 28, 4/44 (9.1%) patients had an abnormal Humphrey™ 10-2 test. At Day 90, three of the four cases had resolved, with the remaining patient still having an abnormal Humphrey™ 10-2 test in both eyes. However, at Day 90 a further three patients developed an abnormal Humphrey™ 10-2 test, all in the left eye. In the CQ/PQ group, three patients had abnormal Humphrey™ 10-2 test results at Day 28, bilateral in two cases and in the left eye in one case. Thus, at Day 90, 4/37 (10.8%) patients in the TQ group versus 0/22 (0%) in the CQ/PQ arm had an abnormal Humphrey™ 10-2 test (P = 0.117; treatment difference 10.8%, 95% CI −5.11, 25.6). Of those patients who had abnormal Humphrey 10-2 tests, one in each treatment group had concurrent retinal findings, i.e., pigment changes in both eyes on Days 28 and 90 in the TQ group, and mild pigment abnormality (Grade 1) in both eyes on Day 28 in the CQ/PQ group. In all cases in both treatment groups, decreases in sensitivity were < 5 dB and not considered to be clinically relevant.

[Fig. 3 a Baseline retinal hemorrhage (black arrow) at Day 0 in both eyes of a patient infected with P. vivax. b Post-baseline white-centered retinal hemorrhage (black arrow) noted in the left eye of one patient in the tafenoquine group on Day 90.]

[Fig. 4 In a P. vivax malaria patient receiving tafenoquine, retinal pigment epithelium alteration, seen as a small group of hypopigmented yellow deposits, was observed clinically supero-nasal to the fovea in the left eye at the Day 28 (ellipse) and Day 90 (ellipse and circle) assessments. Note increasing retinal changes on Day 28 compared to Day 90 (ellipse) and a new area of RPE changes (circle) on Day 90. Images taken at 20°.]

[Fig. 5 Fundus fluorescein angiograms performed on Day 28 in the same patient in the tafenoquine group as in Fig. 4 showed retinal pigment epithelium abnormalities in the left eye as pinpoints of hyperfluorescence in the early phase of the angiograms, persisting through the late frames (circled area). These hyperfluorescence spots corresponded to the hypopigmented area, mostly involving the papillomacular bundle and supero-nasal to the fovea (circled areas). Images taken at 35°.]

Table 3 Macular function test outcomes in P. vivax malaria patients treated with tafenoquine monotherapy or chloroquine/primaquine sequential therapy at each assessment

Values are mean (SD) [range] unless otherwise indicated. Columns per row: Tafenoquine Baseline (N = 46), Day 28 (N = 44), Day 90 (N = 37); Chloroquine/primaquine Baseline (N = 24), Day 28 (N = 24), Day 90 (N = 22).

High contrast visual acuity, logMAR
  Left:  0.31 (0.018) [0.22–0.34] | 0.31 (0.013) [0.30–0.34] | 0.31 (0.010) [0.30–0.34] | 0.31 (0.014) [0.30–0.34] | 0.31 (0.011) [0.30–0.34] | 0.30 (0.010) [0.30–0.34]
  Right: 0.31 (0.019) [0.22–0.34] | 0.31 (0.013) [0.30–0.34] | 0.31 (0.010) [0.30–0.34] | 0.31 (0.014) [0.30–0.34] | 0.31 (0.009) [0.30–0.32] | 0.31 (0.12) [0.30–0.34]
Amsler grid, abnormal, n (%)
  Left:  0 | 0 | 0 | 0 | 0 | 0
  Right: 0 | 0 | 0 | 0 | 0 | 0
Humphrey™ 10-2, abnormal, n (%)
  Left:  0 | 2 (4.5) | 4 (10.8) | 0 | 3 (12.5) | 0
  Right: 1 (2.2) | 4 (9.1) | 1 (2.7) | 0 | 2 (8.3) | 0
Macular stress test, recovery, s
  Left:  3.5 (1.6) [1–9] | 3.0 (1.0) [2–6] | 2.9 (1.1) [2–8] | 4.4 (3.8) [1–18] | 3.3 (1.3) [2–8] | 2.9 (0.8) [2–4]
  Right: 3.5 (1.6) [2–9] | 3.0 (1.1) [2–6] | 2.7 (0.9) [1–6] | 3.6 (1.8) [2–9] | 3.3 (1.6) [2–8] | 2.8 (0.7) [2–4]
Color perception PIP, plates missed
  Left:  0.1 (0.4) [0–3] | 0 (0.3) [0–2] | 0 | 0 | 0 | 0
  Right: 0.1 (0.4) [0–3] | 0 (0.3) [0–2] | 0 | 0 | 0 | 0
Color perception L'Anthony 40 hue, score
  Left:  2.3 (2.9) [0–12] | 1.6 (2.0) [0–8] | 1.2 (1.5) [0–4] | 2.3 (2.4) [0–9] | 0.9 (1.2) [0–4] | 1.5 (1.7) [0–4]
  Right: 2.7 (3.0) [0–11] | 1.6 (2.1) [0–8] | 1.4 (1.6) [0–6] | 2.1 (3.0) [0–12] | 1.1 (1.6) [0–6] | 1.3 (1.3) [0–4]

NB: The patient in the chloroquine/primaquine group with baseline bilateral retinal dot hemorrhage is included in this table of results.

Discussion

Retinal hemorrhage related to P. vivax

This prospective study of patients infected with P. vivax showed that retinal hemorrhage occurred in 2.9% (2/70) of patients. At baseline, one patient had a small retinal hemorrhage in both eyes at the fovea. There was also one retinal hemorrhage at Day 90 in the TQ group. Ocular involvement, including retinopathies, is known to occur with P. falciparum malaria in adults and children, and the frequency of retinal changes increases with the severity of the disease [22]. In P. vivax malaria, retinal hemorrhage has been reported but appears to be rare, with only six previously published cases [23, 24].
In Choi's series, hemorrhages were either pre-retinal/subhyaloid or retinal, involved one or both eyes, and recovered over 1 week to 5 months [24].

Ophthalmic data on TQ

This study reported ophthalmic data from a clinical efficacy and safety trial of TQ monotherapy versus CQ/PQ sequential therapy in patients with P. vivax malaria. It is the first study to show that short-course (3-day) TQ can cause mild keratopathy. The study also collected data to evaluate possible retinopathy, though the results were inconclusive. The 1200 mg total TQ dose was evaluated to see whether higher doses were more effective for radical cure as monotherapy, but it also provides a vital safety dataset at four times the single TQ dose (300 mg) used in Phase III trials for P. vivax radical cure in combination with CQ (1500 mg free base). The information from this study thus continues to build the ocular safety database for this important new drug. In the TQ group, by direct slit-lamp examination, keratopathy was reported in 14/44 (31.8%) evaluable patients. In these patients, there was no effect on vision, and the keratopathy observed was considered to be benign and reversible. Structurally, TQ has cationic amphiphilic characteristics, such that an effect on the cornea might be expected. Phospholipidosis is associated with amphiphilic drugs, such as aminoquinoline antimalarials, and can result in accumulation of intracellular phospholipids that cannot be metabolized by lysosomal phospholipases. These deposits are known to occur with CQ in the corneal epithelium and superficial stroma [17]. They may first appear as a Hudson-Stähli line or an increase in a preexisting Hudson-Stähli line. They are more commonly seen as a whorl-like pattern known as cornea verticillata, which can be seen with other amphiphilic drugs, such as amiodarone or chlorpromazine [17]. In this study, keratopathy appeared after only three large TQ doses (total dose 1200 mg).
This may be related to the TQ half-life of about 17.5 days (range 4–36 days) and a large volume of distribution, suggesting extensive tissue binding [25–27]. However, whether there is a direct relationship between TQ pharmacokinetics and keratopathy has yet to be determined. Because direct examinations proceeded from the cornea to the retina, the presence of keratopathy could have unintentionally unblinded the study. However, several observations indicate that possible correlations between keratopathy and retinopathy related to TQ, and in one case early progressive retinal pigment epithelium changes, cannot be ruled out: retinal abnormalities described as mild mottling of the retinal pigment epithelium or hypopigmentation were more frequent in the TQ group (10/44, 22.7%) than in the CQ/PQ group (2/24, 8.3%); seven TQ patients had both keratopathy and retinopathy versus none in the CQ/PQ group; and in one TQ patient, retinal changes remote from the macula at Day 28 and Day 90 versus baseline suggested increasing morphological change, despite no clinically relevant change in visual acuity and no decreased visual field sensitivity. Visual field testing (Humphrey™ 10-2) was the most sensitive objective test of retinal toxicity used in this study [15, 21]. Post-baseline, a similar frequency of mildly decreased sensitivity was seen: 7/44 (15.9%) with TQ and 3/24 (12.5%) with CQ/PQ. In all cases, changes were < 5 dB, below the threshold considered clinically relevant. In all but three cases, there were no concurrent findings on retinal examination. In the CQ/PQ group, all of the abnormal Humphrey™ 10-2 tests had resolved by Day 90. In the TQ group, four patients had abnormal Humphrey™ 10-2 tests at Day 90; of these, one patient continued to have decreased sensitivity from Day 28 (in both eyes) and three patients had abnormal results emerging at this assessment (in one eye only). It is not clear why such findings should appear so late during follow-up in these three patients.
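The between-arm proportions quoted in this section can be sanity-checked numerically. The sketch below, a minimal illustration using only the Python standard library, substitutes Fisher's exact test and the Newcombe hybrid score interval for the exact score-test inversion (StatXact) actually used in the study, so its outputs should approximate, not reproduce, the reported values (keratopathy 14/44 vs 0/24, P = 0.002; retinal findings 10/44 vs 2/24, P = 0.15, 95% CI −5.7 to 30.8).

```python
# Approximate re-check of the two-arm comparisons quoted in the text.
# NOTE: these are stand-in methods, not the study's StatXact score-based
# exact tests, so values land near (not exactly on) the reported ones.
from math import comb, sqrt

def fisher_two_sided(a, b, c, d):
    """Two-sided Fisher exact P value for the 2x2 table [[a, b], [c, d]],
    summing all tables whose point probability <= that of the observed table."""
    n, row1, col1 = a + b + c + d, a + b, a + c
    denom = comb(n, col1)
    pmf = lambda k: comb(row1, k) * comb(n - row1, col1 - k) / denom
    p_obs = pmf(a)
    lo, hi = max(0, col1 - (n - row1)), min(row1, col1)
    return sum(pmf(k) for k in range(lo, hi + 1) if pmf(k) <= p_obs * (1 + 1e-9))

def wilson(x, n, z=1.959964):
    """Wilson score 95% interval for a single binomial proportion x/n."""
    p = x / n
    centre = (p + z * z / (2 * n)) / (1 + z * z / n)
    half = z * sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / (1 + z * z / n)
    return centre - half, centre + half

def newcombe_diff_ci(x1, n1, x2, n2):
    """Newcombe hybrid score 95% CI for the difference of proportions p1 - p2."""
    p1, p2 = x1 / n1, x2 / n2
    l1, u1 = wilson(x1, n1)
    l2, u2 = wilson(x2, n2)
    d = p1 - p2
    return (d - sqrt((p1 - l1) ** 2 + (u2 - p2) ** 2),
            d + sqrt((u1 - p1) ** 2 + (p2 - l2) ** 2))

# Keratopathy at any follow-up visit: 14/44 (TQ) vs 0/24 (CQ/PQ); reported P = 0.002
print("keratopathy P ~", fisher_two_sided(14, 30, 0, 24))
# Post-baseline retinal findings: 10/44 vs 2/24; reported P = 0.15, 95% CI -5.7 to 30.8
print("retinal P ~", fisher_two_sided(10, 34, 2, 22))
print("retinal diff 95% CI ~", newcombe_diff_ci(10, 44, 2, 24))
```

Fisher's test and the Newcombe interval are generally close to, but slightly more conservative than, score-test inversion, so discrepancies beyond the first significant figure are expected.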
The current study was limited by the methods recommended at the time it was conducted; more sensitive tests to evaluate drug ocular toxicity are now available. Recent guidelines from the American Academy of Ophthalmology (AAO) recommend Humphrey™ 10-2 fields for screening after prolonged CQ treatment, plus the use of one or more of the following objective tests: multifocal electroretinogram (mfERG), spectral domain optical coherence tomography (SD-OCT), and fundus autofluorescence (FAF) [15]. When Humphrey™ 10-2 fields are performed independently, even the most subtle changes are an indication for objective testing. The study reported in this paper was conducted before this AAO guidance and is limited in its conclusions because these investigations were not performed [15]. Also, guidelines suggest using a person's height, rather than body weight, to calculate drug dose, as weight-based dosing can lead to overmedication with CQ in obese patients [28, 29]. A study investigating ocular toxicity with TQ versus placebo in healthy volunteers has recently been completed and includes SD-OCT and FAF in addition to standard clinical assessments (ClinicalTrials.gov Identifier: NCT02658435). Conducting these assessments within a malaria treatment clinical trial would be challenging, as the necessary facilities would not normally be available at the point of care. The first study to report ocular safety of TQ was conducted in healthy volunteers receiving TQ 200 mg weekly for 6 months in Bethesda, MD and Chiltern, UK [30], though it is not directly comparable to the current study, as different macular function tests were used and there were no clinical retinal examinations, only retinal photography. In the 70 subjects receiving TQ available for assessment, there was no effect on night vision (low-contrast visual acuity test, mesopic contrast threshold tests, and scotopic contrast test) compared with 32 subjects receiving placebo [30].
Forward light scatter test results and macular function were also similar between the two groups after 6 months of TQ therapy [30]. Based on digital retinal photographs, retinal abnormalities were noted in one patient in each study arm. In the subject receiving TQ, a single area of retinal hyperpigmentation was detected at follow-up, without any decline in visual acuity or foveal sensitivity or change in visual field, and remained unchanged for 11 months after therapy cessation [30]. The subject receiving placebo had a retinal abnormality noted at 12 weeks that resolved within 2 months. Independent retinal photograph grading was performed for 39 subjects receiving TQ and 21 in the placebo group, with no findings of changes in retinal morphology or any objective signs of toxicity [30]. Humphrey™ 10-2 tests were performed, with no apparent trends in the rate or timing of abnormal results [30]. However, one subject receiving TQ (n = 56) was withdrawn at week 3 for an abnormal Humphrey™ 10-2 test; there was no effect on vision, and the mild decrease in macular sensitivity resolved spontaneously [30]. A previous study of long-term TQ prophylaxis described ocular effects. In that study, Australian soldiers received TQ 200 mg weekly for 6 months. Whorl-like formations and mild vortex keratopathy were observed in 69/74 (93.2%) subjects [20]. The changes were reversible, resolved by 1 year, and did not adversely affect visual acuity [20]. In these subjects, retinal abnormalities were also noted on clinical examination in 27/69 (39.1%) receiving TQ (e.g., granularity/pigmentation of the retinal pigment epithelium or hard drusen) versus 4/17 (23.5%) receiving mefloquine (MQ). Retinal FFAs were performed on subjects with possible retinal findings: 14 receiving TQ, 1 receiving MQ. Of these, four TQ subjects and one MQ subject had FFAs that were considered possibly abnormal [20].
However, review of these data by an Independent Expert Ophthalmology Review Board concluded that the retinal findings could have been normal variations and that there was no evidence of drug-related visual disturbances. The study was limited in that retinal fundoscopic examinations were unmasked at follow-up, because the presence of corneal deposits was already known to investigators. Also, there were no baseline data available for comparison. Humphrey™ 10-2 results were normal in all subjects (N = 63) in the TQ group and in 15/16 (93.8%) of subjects in the MQ group [20]. These data, taken together with the Bethesda volunteer study described above, provided no conclusive evidence of early retinal changes related to TQ, though the number of subjects examined was small. Note that subjects receiving prophylaxis received much larger TQ doses, and for longer, than for P. vivax radical cure. Compared with the prophylaxis study in Australian soldiers, the shorter TQ dose duration used in the current trial might explain the lower frequency and severity of the ocular findings noted. However, our results are consistent with a trial in healthy volunteers in the USA and UK, in which 15/60 (25.0%) subjects given TQ 200 mg weekly for 6 months had mild keratopathy compared with 4/25 (16.0%) in the placebo arm [30]. The differences between these studies may reflect differences in data collection methods, data quality, environmental exposure, ethnicity, or other unknown factors. Sufficient data are not yet available to allow comment on the relationship between TQ dose and duration and the incidence of keratopathy. In studies investigating P. vivax radical cure, two older reports of the effect of TQ at doses ranging from a 500 mg single dose to 2100 mg over 7 days after treatment with CQ (N = 90) did not specifically include ophthalmic safety data [8, 9].
However, 4/90 (4.4%) patients receiving TQ experienced an eye-related adverse event (one each of conjunctivitis, eye irritation, blurred vision, and ocular hyperemia); there were no instances in the comparator groups (CQ and CQ/PQ) [31]. A recent, much larger Phase IIb randomized dose-ranging trial included 329 patients with P. vivax malaria who received TQ doses of 50 mg, 100 mg, 300 mg, or 600 mg as a single dose plus 3-day CQ (total CQ dose 1500 mg free base equivalent), or CQ/PQ, or CQ alone [14]. Adverse events attributed to eye disorders occurred in 5/225 (2.2%) of patients who received TQ/CQ (2 conjunctivitis, 2 eye pain, 1 blurred vision, 1 conjunctival hyperemia) versus 2/50 (4.0%) with CQ/PQ (both blurred vision) and 2/54 (4.0%) with CQ alone (both blurred vision) [14]. Ophthalmic assessments were conducted in 93 patients overall, with no reports of keratopathy in any treatment group [14]. There were mild, transient post-baseline changes in the results of the Humphrey visual field test in 7/61 (11%) patients receiving TQ/CQ, 1/15 (7%) with CQ/PQ, and 1/17 (5.9%) with CQ alone. In all cases, these findings had resolved by Day 180 of the study. There were no other clinically important ophthalmic findings [14]. In the current study, despite the known ocular toxicity of CQ, keratopathy and clinically relevant ocular changes were not reported in the CQ/PQ group. These findings are consistent with the short (3-day) dose duration of CQ in antimalarial therapy. Ocular complications, such as corneal opacity, reversible retinal lipidosis, and irreversible receptor cell degeneration, have been documented with CQ, generally during extended use in rheumatoid diseases [15, 17, 32, 33]. The risk of ocular complications increases with therapy duration, and CQ doses of 250 mg or below are rarely associated with ocular side effects if used for less than 6 months [15, 17].
A retrospective study of CQ ocular toxicity among 155 Thai patients aged 10–70 years, who received treatment for 6–14 years with a cumulative dose of 26–1771 g, found that early retinal toxicity occurred at 9 months of treatment, with an overall prevalence of keratopathy of 9.0% (17/155) and of retinopathy of 14.2% (22/155), without correlation between keratopathy and retinopathy [32]. Rheumatologists have switched from CQ to hydroxychloroquine (HCQ) because of its better safety profile in long-term use. The risk of developing HCQ retinopathy is rare, and the cumulative dose appears to be a more important factor than the daily dosage [19]. Retinal toxicity associated with CQ and HCQ may continue to progress despite cessation of drug therapy [19]. The largest study, in 400 Greek patients with rheumatoid arthritis or systemic lupus treated with < 6.5 mg/kg/day HCQ for a mean of 8.7 years, reported the incidence of irreversible retinal toxicity as 0.5% [34].

Conclusions

TQ is a promising antimalarial for P. vivax radical cure and is the only drug in development as an alternative to PQ for the eradication of P. vivax hypnozoites. This study was designed to evaluate TQ efficacy and safety and was not powered to evaluate ophthalmic risk, but it provides safety data at a TQ dose above the expected therapeutic dose. The data suggest that ophthalmic adverse events with TQ 400 mg/day for 3 days tended to be more common than with CQ/PQ. There was no evidence of clinically relevant ocular toxicity with either treatment. This is the first study to show mild keratopathy with short-course TQ (total dose 1200 mg), with no conclusive evidence of early retinal changes, though possible increasing early retinal pigment epithelial changes related to TQ cannot be ruled out in one patient. Although there were some abnormal Humphrey™ 10-2 visual field tests in patients receiving TQ, sensitivity was decreased by < 5 dB and vision was not affected.
Ophthalmic monitoring will continue in the current TQ Phase III program for P. vivax radical cure using the lower TQ 300 mg single-dose plus 3-day CQ regimen (1500 mg total dose free base; NCT01376167), or TQ 300 mg plus dihydroartemisinin-piperaquine (320:40 mg; Clinicaltrials.gov: NCT02802501).

Acknowledgements

The authors thank the following local study coordinators: Nillawan Buathong, Krisda Jongskul, and Punnee Pitisutithum. The authors recognize the contribution of the following co-investigators: Harald Noedl, MD (Institute of Specific Prophylaxis and Tropical Medicine, Medical University Vienna, Austria); Sombat Teeprasertsuk, MD (Faculty of Tropical Medicine, Mahidol University, Bangkok, Thailand); Udomsak Silachamroon, MD (Faculty of Tropical Medicine, Mahidol University, Bangkok, Thailand); Weerapong Phumratanaprapin, MD (Faculty of Tropical Medicine, Mahidol University, Bangkok, Thailand). We thank the Ophthalmology Advisory Board for their contribution: Michael Altaweel, MD (University of Wisconsin, Madison, WI, USA); Douglas A Jabs, MD, MBA (The Johns Hopkins University Bloomberg School of Public Health, Baltimore, MD, USA); Tony Moore, FRCS, FRCOphth, FMedSci (University College London, United Kingdom); Sukhuma Warrasak, MD (Ramathibodi Faculty Hospital, Mahidol University, Bangkok, Thailand). We thank Justin Green for critical comments on the manuscript. Editorial support in the form of editorial suggestions to draft versions of this paper, collecting author comments, copyediting, and graphic services was provided by Naomi Richardson at Magenta Communications Ltd and was funded by GlaxoSmithKline; administrative support was provided by Alex Lowe of Fishawack and was funded by GlaxoSmithKline.

Authors' contributions

SW, AE, RSM, MMF, and CO contributed to the study design. SW and AE conducted the ophthalmology examinations, drafted the manuscript, and provided technical support and expertise.
SK and MMF supervised the entire study and assisted with study design and data interpretation. MI was involved in acquisition of data, data analysis, and interpretation. CO, MMF, and RSM conceived of and designed this study and wrote the grant which funded it. CO led the effort to address the safety signals identified in a malaria prophylaxis trial (Nasveld et al. Antimicrob Agents Chemother 2010; 54:792–798; Ref [18] in manuscript), provided study supervision throughout the study, and edited this manuscript. All authors were involved in the interpretation of the data, contributed to this paper, and approved the final version for publication. Sornchai Looareesuwan, MD, who died in July 2007, contributed to this study in the development of the protocol and collection of data.

Funding

This publication was made possible by Grant Numbers RC1AI048874-01 and UC1AI049499-01 from the National Institute of Allergy and Infectious Diseases. This study was supported by the US Army Medical Research and Materiel Command and GlaxoSmithKline (Brentford, Middlesex, UK). The US Army Medical Research and Materiel Command and GlaxoSmithKline were involved in study design, collection and interpretation of data, and preparation of this paper.

Compliance with ethical standards

Conflict of interest: The authors have no proprietary or commercial interest in any materials discussed in this article.

Ethics approval and consent to participate: This study was conducted in accordance with Good Clinical Practice and the guiding principles of the Declaration of Helsinki. The protocol was approved by the Ethical Committee, Faculty of Tropical Medicine, Mahidol University, Bangkok, Thailand; the Ministry of Public Health, Nonthaburi, Thailand; and the Human Subjects Research Review Board, US Army Medical Research and Materiel Command, Office of Regulatory Compliance and Quality, Fort Detrick, MD, USA. All participants provided written informed consent before entry into the study.
The protocol can be obtained from the corresponding authors. The trial is registered at Clinicaltrials.gov, identifier NCT01290601.

Open Access: This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

References

1. Guerra CA, Howes RE, Patil AP, Gething PW, Van Boeckel TP, Temperley WH, Kabaria CW, Tatem AJ, Manh BH, Elyazar IR, Baird JK, Snow RW, Hay SI (2010) The international limits and population at risk of Plasmodium vivax transmission in 2009. PLoS Negl Trop Dis 4:e774. https://doi.org/10.1371/journal.pntd.0000774
2. Howes RE, Battle KE, Mendis KN, Smith DL, Cibulskis RE, Baird JK, Hay SI (2016) Global epidemiology of Plasmodium vivax. Am J Trop Med Hyg 95:15–34. https://doi.org/10.4269/ajtmh.16-0141
3. Mendis K, Sina BJ, Marchesini P, Carter R (2001) The neglected burden of Plasmodium vivax malaria. Am J Trop Med Hyg 64:97–106
4. Price RN, Douglas NM, Anstey NM (2009) New developments in Plasmodium vivax malaria: severe disease and the rise of chloroquine resistance. Curr Opin Infect Dis 22:430–435. https://doi.org/10.1097/QCO.0b013e32832f14c1
5. Galappaththy GN, Tharyan P, Kirubakaran R (2013) Primaquine for preventing relapse in people with Plasmodium vivax malaria treated with chloroquine. Cochrane Database Syst Rev CD004389. https://doi.org/10.1002/14651858.cd004389.pub3
6. Ponsa N, Sattabongkot J, Kittayapong P, Eikarat N, Coleman RE (2003) Transmission-blocking activity of tafenoquine (WR-238605) and artelinic acid against naturally circulating strains of Plasmodium vivax in Thailand. Am J Trop Med Hyg 69:542–547
7.
Coleman RE (1990) Sporontocidal activity of the antimalarial WR-238605 against Plasmodium berghei ANKA in Anopheles stephensi. Am J Trop Med Hyg 42:196–205
8. Walsh DS, Looareesuwan S, Wilairatana P, Heppner DG Jr, Tang DB, Brewer TG, Chokejindachai W, Viriyavejakul P, Kyle DE, Milhous WK, Schuster BG, Horton J, Braitman DJ, Brueckner RP (1999) Randomized dose-ranging study of the safety and efficacy of WR 238605 (Tafenoquine) in the prevention of relapse of Plasmodium vivax malaria in Thailand. J Infect Dis 180:1282–1287. https://doi.org/10.1086/315034
9. Walsh DS, Wilairatana P, Tang DB, Heppner DG Jr, Brewer TG, Krudsood S, Silachamroon U, Phumratanaprapin W, Siriyanonda D, Looareesuwan S (2004) Randomized trial of 3-dose regimens of tafenoquine (WR238605) versus low-dose primaquine for preventing Plasmodium vivax malaria relapse. Clin Infect Dis 39:1095–1103. https://doi.org/10.1086/424508
10. Shanks GD, Oloo AJ, Aleman GM, Ohrt C, Klotz FW, Braitman D, Horton J, Brueckner R (2001) A new primaquine analogue, tafenoquine (WR 238605), for prophylaxis against Plasmodium falciparum malaria. Clin Infect Dis 33:1968–1974. https://doi.org/10.1086/324081
11. Dow GS, Gettayacamin M, Hansukjariya P, Imerbsin R, Komcharoen S, Sattabongkot J, Kyle D, Milhous W, Cozens S, Kenworthy D, Miller A, Veazey J, Ohrt C (2011) Radical curative efficacy of tafenoquine combination regimens in Plasmodium cynomolgi-infected Rhesus monkeys (Macaca mulatta). Malaria J 10:212. https://doi.org/10.1186/1475-2875-10-212
12.
Brueckner RP, Lasseter KC, Lin ET, Schuster BG (1998) First-time-in-humans safety and pharmacokinetics of WR 238605, a new antimalarial. Am J Trop Med Hyg 58:645–649
13. Fukuda MM, Krudsood S, Mohamed K, Green JA, Warrasak S, Noedl H, Euswas A, Ittiverakul M, Buathong N, Sriwichai S, Miller RS, Ohrt C (2017) A randomized, double-blind, active-control trial to evaluate the efficacy and safety of a three-day course of tafenoquine monotherapy for the treatment of Plasmodium vivax malaria. PLoS ONE 12:e0187376. https://doi.org/10.1371/journal.pone.0187376
14. Llanos-Cuentas A, Lacerda MV, Rueangweerayut R, Krudsood S, Gupta SK, Kochar SK, Arthur P, Chuenchom N, Mohrle JJ, Duparc S, Ugwuegbulam C, Kleim JP, Carter N, Green JA, Kellam L (2014) Tafenoquine plus chloroquine for the treatment and relapse prevention of Plasmodium vivax malaria (DETECTIVE): a multicentre, double-blind, randomised, phase 2b dose-selection study. Lancet 383:1049–1058. https://doi.org/10.1016/S0140-6736(13)62568-4
15. Marmor MF, Kellner U, Lai TY, Melles RB, Mieler WF, American Academy of Ophthalmology (2016) Recommendations on screening for chloroquine and hydroxychloroquine retinopathy (2016 revision). Ophthalmology 123:1386–1394. https://doi.org/10.1016/j.ophtha.2016.01.058
16. Izazola-Conde C, Zamora-de la Cruz D, Tenorio-Guajardo G (2011) Ocular and systemic adverse effects of ophthalmic and non ophthalmic medications. Proc West Pharmacol Soc 54:69–72
17. Prokopich C, Bartlett J, Jaanus S (2007) Ocular adverse drug reactions to systemic medications. In: Bartlett J, Jaanus S (eds) Clinical ocular pharmacology. Butterworth-Heinemann, Oxford, pp 701–759
18. Mecklenburg L, Schraermeyer U (2007) An overview on the toxic morphological changes in the retinal pigment epithelium after systemic compound administration. Toxicol Pathol 35:252–267. https://doi.org/10.1080/01926230601178199
19.
Michaelides M, Stover NB, Francis PJ, Weleber RG (2011) Retinal toxicity associated with hydroxychloroquine and chloroquine: risk factors, screening, and progression despite cessation of therapy. Arch Ophthalmol 129:30–39. https://doi.org/10.1001/archophthalmol.2010.321
20. Nasveld PE, Edstein MD, Reid M, Brennan L, Harris IE, Kitchener SJ, Leggat PA, Pickford P, Kerr C, Ohrt C, Prescott W, Tafenoquine Study Team (2010) Randomized, double-blind study of the safety, tolerability, and efficacy of tafenoquine versus mefloquine for malaria prophylaxis in nonimmune subjects. Antimicrob Agents Chemother 54:792–798
21. Dersu I, Wiggins M (2006) Understanding visual fields, part II; Humphrey visual fields. J Ophthalmic Med Technol. http://citeseerx.ist.psu.edu/viewdoc/download;jsessionid=3DDE9F18027042E8D67188FED3AA583A?doi=10.1.1.567.1948&rep=rep1&type=pdf. Accessed 2 April 2018
22. Abu Sayeed A, Maude RJ, Hasan MU, Mohammed N, Hoque MG, Dondorp AM, Faiz MA (2011) Malarial retinopathy in Bangladeshi adults. Am J Trop Med Hyg 84:141–147. https://doi.org/10.4269/ajtmh.2011.10-0205
23. Lee JH, Chin HS, Chung MH, Moon YS (2010) Retinal hemorrhage in Plasmodium vivax malaria. Am J Trop Med Hyg 82:219–222. https://doi.org/10.4269/ajtmh.2010.09-0439
24. Choi HJ, Lee SY, Yang H, Bang JK (2004) Retinal haemorrhage in vivax malaria. Trans R Soc Trop Med Hyg 98:387–389. https://doi.org/10.1016/j.trstmh.2003.12.002
25. Charles BG, Miller AK, Nasveld PE, Reid MG, Harris IE, Edstein MD (2007) Population pharmacokinetics of tafenoquine during malaria prophylaxis in healthy subjects. Antimicrob Agents Chemother 51:2709–2715. https://doi.org/10.1128/AAC.01183-06
26. Edstein MD, Kocisko DA, Brewer TG, Walsh DS, Eamsila C, Charles BG (2001) Population pharmacokinetics of the new antimalarial agent tafenoquine in Thai soldiers. Br J Clin Pharmacol 52:663–670
27.
Edstein MD, Kocisko DA, Walsh DS, Eamsila C, Charles BG, Rieckmann KH (2003) Plasma concentrations of tafenoquine, a new long-acting antimalarial agent, in Thai soldiers receiving monthly prophylaxis. Clin Infect Dis 37:1654–1658. https://doi.org/10.1086/379718
28. Karmel M (2011) Rx side effects: new Plaquenil guidelines and more. American Academy of Ophthalmology. https://www.aao.org/eyenet/article/rx-side-effects-new-plaquenil-guidelines-more. Accessed 2 Feb 2017
29. Marmor MF, Kellner U, Lai TY, Lyons JS, Mieler WF (2011) Revised recommendations on screening for chloroquine and hydroxychloroquine retinopathy. Ophthalmology 118:415–422. https://doi.org/10.1016/j.ophtha.2010.11.017
30. Leary KJ, Riel MA, Roy MJ, Cantilena LR, Bi D, Brater DC, van de Pol C, Pruett K, Kerr C, Veazey JM Jr, Beboso R, Ohrt C (2009) A randomized, double-blind, safety and tolerability study to assess the ophthalmic and renal effects of tafenoquine 200 mg weekly versus placebo for 6 months in healthy volunteers. Am J Trop Med Hyg 81:356–362
31. Hackett D, Harrell E, Miller AK, Kleim J-P, Mohamed K, Nash L, Ohrt C, Rana R (2010) A dose ranging study for the safety and efficacy of WR238605/SB252263 in the prevention of relapse of Plasmodium vivax infection in Thailand. Document number: RM2007/00309/00; study SB-252263/047: GlaxoSmithKline, data on file
32. Puavilai S, Kunavisarut S, Vatanasuk M, Timpatanapong P, Sriwong ST, Janwitayanujit S, Nantiruj K, Totemchokchyakarn K, Ruangkanchanasetr S (1999) Ocular toxicity of chloroquine among Thai patients. Int J Dermatol 38:934–937
33. Araiza-Casillas R, Cardenas F, Morales Y, Cardiel MH (2004) Factors associated with chloroquine-induced retinopathy in rheumatic diseases. Lupus 13:119–124.
https://doi.org/10.1191/0961203304lu514oa
34. Mavrikakis I, Sfikakis PP, Mavrikakis E, Rougas K, Nikolaou A, Kostopoulos C, Mavrikakis M (2003) The incidence of irreversible retinal toxicity in patients treated with hydroxychloroquine: a reappraisal. Ophthalmology 110:1321–1326.
https://doi.org/10.1016/S0161-6420(03)00409-3

Comparative ophthalmic assessment of patients receiving tafenoquine or chloroquine/primaquine in a randomized clinical trial for Plasmodium vivax malaria radical cure. Int Ophthalmol (2019) 39:1767–1782

work_w2uzzijvojfxvbiyfbqovbudru ---- A study of distribution, sex differences and stability of lip print patterns in an Indian population

Saudi Journal of Biological Sciences (2015) xxx, xxx–xxx
King Saud University
Saudi Journal of Biological Sciences
www.ksu.edu.sa www.sciencedirect.com

ORIGINAL ARTICLE

A study of distribution, sex differences and stability of lip print patterns in an Indian population

* Corresponding author at: Department of Forensic Science, Govt. Institute of Forensic Science, R.T. Road, Civil Lines, Nagpur, Maharashtra 440001, India. Tel.: +91 7387490889. E-mail addresses: Neeti.kapoor86@gmail.com (N. Kapoor), Badiye.ashish@gmail.com (A. Badiye). Peer review under responsibility of King Saud University. Production and hosting by Elsevier. http://dx.doi.org/10.1016/j.sjbs.2015.01.014 1319-562X © 2015 The Authors. Production and hosting by Elsevier B.V. on behalf of King Saud University. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/). Please cite this article in press as: Kapoor, N., Badiye, A.
A study of distribution, sex differences and stability of lip print patterns in an Indian population. Saudi Journal of Biological Sciences (2015), http://dx.doi.org/10.1016/j.sjbs.2015.01.014

Neeti Kapoor, Ashish Badiye *
Department of Forensic Science, Govt. Institute of Forensic Science, Nagpur, Maharashtra, India
Received 12 November 2014; revised 1 January 2015; accepted 24 January 2015

KEYWORDS: Cheiloscopy; Lip print pattern; Forensic identification; Sex differences; Marathi population

Abstract: Lip prints are very useful in forensic investigations. The objective of this study is to determine the predominant lip print pattern found among a central Indian population, to evaluate whether any sex difference exists, and to study the permanence of the pattern over a 6-month duration. This study included 200 healthy adult subjects comprising 100 males and 100 females in the age group of 18–25 years. A convenient and easier method of data collection, i.e., digital photography, was used instead of the traditional lipstick methods. Lip prints were then divided into four quadrants and recognized as per Suzuki and Tsuchihashi's classification. Type I (30.63%) was found to be the most predominant overall in the Marathi population. Type I (29.75%) and Type III (35.75%) were found to be the most prevalent in males and females, respectively. Applying the Chi-Square test, statistically significant differences (p < 0.05) were observed between male and female lip print patterns in each of the quadrants individually and in all quadrants taken together. The lip print patterns remained stable over a period of six months. Being stable and showing significant sex differences, lip prints can be effectively used as an important tool in forensic investigations for individualization as well as identification of the sex of the donor, thus narrowing down the scope of investigation to almost half. © 2015 The Authors. Production and hosting by Elsevier B.V. on behalf of King Saud University.
This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).

1. Introduction

In various civil, criminal and mass disaster cases, positive identification of a person can be very difficult. Of the many existing techniques employed for the purpose, comparison of fingerprints, DNA and dental records are probably the most common techniques used in this context. However, 'human lip recognition', also known as cheiloscopy, is one of the most interesting emerging fields, finding its roots in criminal and forensic practice (Caldas et al., 2007; Sharma et al., 2009; Reddy and Reddy, 2011). Previous studies have established that lip prints can positively distinguish individuals and hence have potential use in human identification (Venkatesh and David, 2011; Prabhu et al., 2012, 2013; Dwivedi et al., 2013). The term "cheiloscopy" is derived from the Greek words cheilos, meaning 'lips', and skopein, meaning 'to see', and is defined as the study of the characteristic patterns of the wrinkles and grooves present on the labial mucosa (sulci labiorum), called lip prints (Sivapathasundharam et al., 2001; Molano et al., 2002; Rajendran and Sivapathasundharam, 2006; Shafer et al., 2009). R. Fischer was among the first to take notice of the biological phenomenon of systems of furrows on the red part of human lips, in the year 1902 (Thomas and Van Wyk, 1988; Kasprzak, 1990, 2000).
Use of lip prints in personal identification and criminalization was first recommended in France by Edmond Locard as early as 1932 (Warren, 1976; Thomas and Van Wyk, 1988). Le Moyne Snyder was the first to introduce a case in which lip prints helped the crime investigators in an unusual way (Suzuki and Tsuchihashi, 1970a,b; Williams, 1991; Ball, 2002). Santos, Suzuki and Tsuchihashi were among the first to classify the various patterns present on the human lips (Suzuki and Tsuchihashi, 1970a,b; Tsuchihashi, 1974; Williams, 1991). The importance of cheiloscopy is linked to the fact that lip prints are unique to one person, except in monozygotic twins (Neville et al., 2002); like fingerprints and palatal rugae, the lip grooves are permanent throughout life (Tsuchihashi, 1974). It is possible to identify lip patterns as early as the 6th week of uterine life (Caldas et al., 2007; Koneru et al., 2013). The oily and moist secretions from sebaceous and salivary glands located at the vermilion border, and subsequent moisturization from the tongue, enable the formation of a latent lip print whenever there is contact (Ball, 2002); such prints are likely to be encountered, and should be suspected to be present, at the scene of crimes such as burglary, sexual assault, house trespass, homicide, rape, etc. Depending upon the scenario at the crime scene, lip prints may be found on various items of physical evidence, such as shirts, handkerchiefs, tissue paper/wipes, cups, photographs, letters, glass, window panes, cutlery, fruit skin/peel, cigarette butts, clothing, and even biological materials such as skin (Kavitha et al., 2009; Vats et al., 2012). Lip prints are very useful in forensic investigations and are considered to be important forms of transfer evidence, analogous to fingerprints (Tsuchihashi, 1974).
Apart from identification and evidential use, lip prints may also be used in detection work, as a source of tactical and criminalistic information. Being unknowingly left at the scene of the crime, lip prints can directly and effectively help place a suspect at the scene (Satyanarayana et al., 2011). A lip print at the scene of crime can be a basis for conclusions as to the character of the event, the number of people involved, their sexes, cosmetics used, habits, occupational traits, and the pathological changes of the lips themselves (Vahanwala and Parekh, 2000). If a complete match or identification is not possible, proper examination of lip prints may still help in establishing other relevant facts, such as the sex of the donor, hence reducing the burden of the forensic examiner by half. The objective of this study is to determine the predominant lip print pattern found among a central Indian population, to evaluate whether any sex difference exists, and to study the permanence of the pattern over a 6-month duration.

2. Methodology

2.1. Sample

The study comprised 200 healthy individuals (100 males and 100 females), in the age group of 18–25 years, belonging to the Marathi population of the Nagpur region of the Maharashtra state, India. Informed consent was obtained from all the subjects.

2.2. Inclusion criteria

Only healthy subjects, free from any oral pathologies, inflammation, abnormalities or deformities such as cleft lip, cut marks, surgical scars or lesions of the lip, were included in the study.

2.3.
Recording the lip prints

Recording of the data is an extremely important step for the success of this study; still, digital photography was used, as the mobile nature of the human lips can affect the accuracy of the lip print impressions even with slight variations in the strength or the direction of the pressure applied (Tsuchihashi, 1974). The subjects were made to stand erect with the head positioned in the Frankfurt plane. From a fixed distance, the lips of volunteers in 'natural condition' (without the application of lipstick, lip fillers, lip gloss or any other cosmetic product) were photographed twice using a digital camera (Nikon D3100, 14.2 MP, AF-S NIKKOR 18–55 mm lens kit). This method is relatively easier and involved no physical contact with the volunteers in terms of application of lip gloss or lipstick, as previously used and suggested by others, which can be quite laborious and unhygienic (if the same lipstick is used for all the subjects).

2.4. Classification used

In this study, we followed (Fig. 1) the classification of the patterns of the lines on the lips proposed by Suzuki and Tsuchihashi (Suzuki and Tsuchihashi, 1971; Tsuchihashi, 1974).

• Type I: Long vertical (clear-cut vertical grooves that run across the lips).
• Type I′: Short vertical (partial-length grooves of Type I).
• Type II: Branched grooves (branching Y-shaped pattern).
• Type III: Intersected grooves (criss-cross/'x'-pattern grooves).
• Type IV: Reticular pattern (grooves that form a rectangular shape).
• Type V: Mixed/Indefinite (grooves that do not fall into any of the above categories, combinations of two or more patterns, and/or patterns that cannot be differentiated morphologically/undetermined).

2.5. Examination of the prints

After transferring the photographs of the lips to a computer, lip prints (Fig. 2) were divided into four quadrants, namely
Saudi http://dx.doi.org/10.1016/j.sjbs.2015.01.014 Lip Print Type Type I Long Vertical Type I’ Short Vertical Type II Branched Type III Intersecting Type IV Reticulate Type V Indefinite/Mixed Figure 1 Photographs showing all the patterns of the lip prints followed/observed in this study. A B CD Figure 2 Lip print divided clockwise in four quadrants namely A, B, C and D. Table 1 Percentage distribution of lip patterns in the Marathi population. Lip pattern types All quadrants (A + B + C + D) Percentage Long vertical (Type I) 30.63% Short vertical (Type I0) 1.88% Branched (Type II) 16.50% Intersecting (Type III) 25.38% Reticulate (Type IV) 8.63% Mixed/Indefinite (Type V) 17% Total 100 Study of lip print patterns in an Indian population 3 A, B, C, D moving clockwise, starting from the left side of the upper lip to the left side of the lower lip. Quadrants were man- ually classified twice by both authors independently, so as to assess the inter- and intra-rater reliability of the observations. The same criteria of inclusion and exclusion; procedure for recording the prints; classification and same method of exam- Please cite this article in press as: Kapoor, N., Badiye, A. A study of distribution, s Journal of Biological Sciences (2015), http://dx.doi.org/10.1016/j.sjbs.2015.01.014 ination were repeated for the same individuals after 6 months. The older photographs were compared with the recent ones by the same observers. 2.6. Statistical analysis The obtained results were statistically analyzed using the Chi-Square test, wherein a value of p < 0.05 was considered as significant. Kappa (K) value was also calculated to check the inter-observer and intra-observer agreement strength. 3. Result and discussion The division of lip prints into four quadrants was in accordance to Tsuchihashi (1974), Gondivkar et al. (2009), Saraswathi et al. (2009), ElDomiaty et al. (2010), Satyanarayana et al. (2011), Gupta et al. (2011), Venkatesh and David (2011), Prabhu et al. 
(2012), Koneru et al. (2013) and Prabhu et al. (2013). In the present study, overall, Type I (long vertical) was the most frequently observed pattern (Table 1) in the examined subjects of the Marathi population of the Nagpur region of the Maharashtra state in India. This result is in accordance with the studies conducted by Koneru et al. (2013) on the Kerala and Manipur populations and by Vahanwala and Parekh (2000) on the Mumbai population, who also found the Type I pattern to be most predominant, while Type I′ was observed to be the least common type. Type I (long vertical) and Type III (intersecting) lip print patterns were found to be the most common in males and females, respectively. In male lip prints, the order of appearance of patterns was Type I > Type V > Type II > Type III > Type IV > Type I′ (least common). In female lip prints, the order of appearance of patterns was Type III > Type I > Type II > Type IV > Type V > Type I′ (least common), as shown in Table 2 and Fig. 3. Quadrant-wise distributions of lip print patterns are shown in Table 3. Some interesting points to note are that the upper lips of males and females have about the same frequency of Type V prints, while the lower lips are vastly different (66 Type V lower-lip patterns in males; 2 in females). Thus, the difference in the frequency of Type V between males and females comes almost entirely from the lower lip, and is a much stronger sex difference than found when combining upper and lower lip frequencies. Other upper/lower lip differences are seen in Type II patterns (no sex differences here), and in Type I and Type IV patterns, in females only.

Table 4 Statistical analysis result (applying the Chi-Square test).
Statistical analysis Lip print patterns: males Vs females Quadrant A * Quadrant B * v2 17.1 14.9 df 5 5 Probability 0.004 0.011 v2 = Value of Chi-Square, df = Degree of freedom. * Significant at p < 0.05. Table 2 Distribution of patterns among male and females. Pattern Distribution of patterns Male Female n % n % Long vertical (Type I) 119 29.75 126 31.50 Short vertical (Type I0) 14 3.5 1 0.25 Branched (Type II) 75 18.75 57 14.25 Intersecting (Type III) 62 15.5 141 35.75 Reticulate (Type IV) 27 6.75 42 10.5 Mixed/Indefinite (Type V) 103 25.75 33 8.25 Total 400 100 400 100 0 5 10 15 20 25 30 35 40 (Type I) (Type I’) (Type II) (Type III) (Type IV) (Type V) Male Female Figure 3 Percentage distribution of lip print patterns in the studied group. Table 3 Quadrant wise distribution of patterns in lip prints of mal Pattern type Male A B C Long vertical (Type I) 20 23 43 Short vertical (Type I0) 5 4 2 Branched (Type II) 34 34 2 Intersecting (Type III) 12 16 15 Reticulate (Type IV) 9 6 8 Mixed/Indefinite (Type V) 20 17 30 Total 100 100 100 A, B, C, D = Quadrant. 4 N. Kapoor, A. Badiye Please cite this article in press as: Kapoor, N., Badiye, A. A study of distribution, s Journal of Biological Sciences (2015), http://dx.doi.org/10.1016/j.sjbs.2015.01.014 Statistical analysis, applying the Chi-Square test, shows sig- nificant difference (p < 0.05) between lip print patterns in males and females in all quadrants individually as well as com- bined (Table 4). The result is in accordance with the results obtained by Kumar et al. (2012) in the Pondicherry popula- tion, Vats et al. (2012), Dwivedi et al. (2013) and Koneru et al. (2013) in Kerala and Manipuri populations, but is in con- trast to the results obtained by Sivapathasundharam et al. (2001), Saraswathi et al. (2009), Sandhu et al. (2012), Prabhu et al. (2013) and Verghese et al. (2010) who did not find any statistically significant differences. 
Table 4 (continued)

                 Quadrant C*   Quadrant D*   All quadrants*
χ2                  46.8          47.1          84.0
df                   5             5             5
Probability          0.000         0.000         0.000

Table 3 (continued)

Pattern type                  Male: D    Female: A    B    C    D
Long vertical (Type I)          33          12    15   49   50
Short vertical (Type I0)         3           0     0    0    1
Branched (Type II)               5          28    25    2    2
Intersecting (Type III)         19          27    28   45   41
Reticulate (Type IV)             4          18    16    3    5
Mixed/Indefinite (Type V)       36          15    16    1    1
Total                          100         100   100  100  100

Statistically insignificant (p > 0.05) inter-observer and intra-observer variability was observed when both authors assessed the morphologic pattern of the lips in all four quadrants. The calculated Kappa (K) values showed that the strength of agreement between the two observers was 'good' to 'very good'.

Peeling of the superficial layers of the skin of the lips, considered an environmental factor, was not seen in the present study. This contrasts with the study by ElDomiaty et al. (2010), who observed it as a common feature of the lips and explained it as a probable result of the dry weather in their study area, which dried the lips and accustomed people to biting away the dried skin. However, they also noted that this did not mask the pattern of the lip print, which appeared after taking the print several times. This phenomenon could help a great deal in the identification of subjects, as the lip pattern does not change with climate or with any illness present around the mouth (Tsuchihashi, 1974).

No two lip prints showed exactly the same pattern in our study, which is in accordance with the detailed study by Tsuchihashi (1974) on 1364 Japanese subjects (757 males and 607 females), in which no two lip prints showed the same pattern. He also observed that although the lip print patterns of uniovular twins are very similar, in detail no two of them were exactly identical.
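The 'good' to 'very good' inter-observer agreement above is conventionally summarized with Cohen's kappa. The sketch below shows the computation; the two rating lists are hypothetical examples invented for illustration (the paper reports only the resulting agreement categories, not the raw quadrant scores). On common interpretive scales, kappa above about 0.6 counts as 'good' and above about 0.8 as 'very good'.

```python
# Unweighted Cohen's kappa for two raters assigning Tsuchihashi
# pattern types to the same set of lip quadrants.
def cohens_kappa(rater1, rater2):
    assert len(rater1) == len(rater2) and rater1
    n = len(rater1)
    labels = set(rater1) | set(rater2)
    # observed proportion of agreement
    p_o = sum(a == b for a, b in zip(rater1, rater2)) / n
    # agreement expected by chance, from each rater's marginal frequencies
    p_e = sum((rater1.count(l) / n) * (rater2.count(l) / n) for l in labels)
    return (p_o - p_e) / (1 - p_e)

# hypothetical pattern codes assigned by two observers to ten quadrants
obs1 = ["I", "III", "II", "I", "V", "III", "IV", "I", "II",  "III"]
obs2 = ["I", "III", "II", "I", "V", "III", "IV", "I", "III", "III"]

print(round(cohens_kappa(obs1, obs2), 2))   # 0.87, i.e. 'very good' agreement
```

Kappa corrects the raw agreement rate (9/10 here) for the agreement the two observers would reach by chance given their marginal label frequencies, which is why it is preferred over simple percent agreement for this kind of pattern-classification reliability check.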
The lip prints of the studied individuals in our study remained unchanged even after 6 months, which is of practical use in criminal investigation, where a pattern known to be stable even over a short period (6 months) is helpful (Tsuchihashi, 1974). These findings are in accordance with the results of Tsuchihashi (1974), who studied the lip prints of the same individuals every month for three years to determine whether lip prints are permanent; none showed any change throughout this period. Similar results in terms of permanence were demonstrated in a recent study by ElDomiaty et al. (2014), who investigated the stability of lip-print patterns in Saudi females over time, analyzing and comparing prints of the same subjects taken 3 years earlier, in order to validate their secure use in civil and criminal investigations. They also urged the need for similar studies on larger samples of both sexes to confirm the rate of stable lip-print patterns and to investigate sex differences, so as to validate lip prints as a powerful forensic tool.

4. Conclusion

The study revealed that Type I (30.63%) and Type I0 (1.88%) were, respectively, the most and least predominant lip print patterns among the Marathi population of the Nagpur region of Maharashtra state, India. Type I (29.75%) was most prevalent in males and Type III (35.75%) in females, with Type I0 least seen in both sexes. The statistically significant difference (p < 0.05) obtained in all the quadrants, individually and taken together, suggests the potential use of lip prints in sex discrimination. The lip print patterns were also found to be stable over a period of 6 months. Further studies along similar lines in different populations should be undertaken to create a comprehensive database, so that the hidden potential of lip prints as an important source of information can be utilized optimally.

Acknowledgement

The authors are thankful to all the individuals who donated their prints for the study.
We also express our deepest gratitude to Dr. Anjali M. Rahatgaonkar, Director, Govt. Institute of Forensic Science, Nagpur, for her constant support, motivation, encouragement and appreciation of research activities and of submitting research to international journals of repute. We also express our deepest regards to the anonymous reviewers for their valuable suggestions, which have contributed to enhancing the quality of this paper.

References

Ball, J., 2002. The current status of lip prints and their use for identification. J. Forensic Odontostomatol. 20, 43–46.
Caldas, I.M., Magalhaes, T., Afonso, A., 2007. Establishing identity using cheiloscopy and palatoscopy. Forensic Sci. Int. 165 (1), 1–9.
Dwivedi, N., Agarwal, A., Kashyap, B., Raj, V., Chandra, S., 2013. Latent lip print development and its role in suspect identification. J. Forensic Dent. Sci. 5, 22–27.
ElDomiaty, M.A., Al-gaidi, S.A., Elayat, A.A., Safwat, M.D., Galal, S.A., 2010. Morphological patterns of lip prints in Saudi Arabia at Almadinah Almonawarah province. Forensic Sci. Int. 200 (179), 1–9.
ElDomiaty, M.A., Anwar, Rasha I., Algaidi, Sami A., 2014. Stability of lip-print patterns: a longitudinal study of Saudi females. J. Forensic Legal Med. 22, 154–158.
Gondivkar, S.M., Indurkar, A., Degwekar, S., Bhowate, R., 2009. Cheiloscopy for sex determination. J. Forensic Dent. Sci. 1, 56–60.
Gupta, S., Gupta, K., Gupta, O.P., 2011. A study of morphological patterns of lip prints in relation to gender of North Indian population. J. Oral Biol. Craniofac. Res. 1, 12–16.
Kasprzak, J., 2000. Cheiloscopy. In: Siegel, J.A., Saukko, P.J., Knupfer, G.C. (Eds.), Encyclopaedia of Forensic Sciences, 1. Academic Press, London, pp. 358–361.
Kasprzak, J., 1990. Possibilities of cheiloscopy. Forensic Sci. Int. 46, 145–151.
Kavitha, Einstein, Sivapathasundharam, B., Saraswathi, T.R., 2009. Limitations in forensic odontology. J. Forensic Odontol. 1, 8–10.
Koneru, A., Surekha, R., Nellithady, G.S., Vanishree, M., Ramesh, D., Patil, R.S., 2013. Comparison of lip prints in two different populations of India: reflections based on a preliminary examination. J. Forensic Dent. Sci. 5, 11–15.
Kumar, G.S., Vezhavendhan, N., Vendhan, P., 2012. A study of lip prints among Pondicherry population. J. Forensic Dent. Sci. 4, 84–87.
Molano, M.A., Gil, J.H., Jaramillo, J.A., Ruiz, S.M., 2002. Estudio queiloscópico en estudiantes de la facultad de odontología de la Universidad de Antioquia. Rev. Fac. Odontol. Univ. Antioq. 14 (1), 26–33.
Neville, B., Damm, D., Allen, C., Bouquot, J., 2002. Oral and Maxillofacial Pathology, second ed. WB Saunders Company, Philadelphia, pp. 763–774.
Prabhu, R.V., Dinkar, A., Prabhu, V., 2012. A study of lip print pattern in Goan dental students – a digital approach. J. Forensic Legal Med. 19, 390–395.
Prabhu, R.V., Dinkar, A., Prabhu, V., 2013. Digital method for lip print analysis: a new approach. J. Forensic Dent. Sci. 5, 96–105.
Rajendran, R., Sivapathasundharam, B., 2006. Shafer's Textbook of Oral Pathology, sixth ed. Elsevier, New Delhi, India, pp. 896–897.
Reddy, L., Reddy, Vamsi Krishna, 2011. Lip prints: an overview in forensic dentistry. JADR II.
Sandhu, S.V., Bansal, H., Monga, P., Bhandari, R., 2012. Study of lip print pattern in a Punjabi population. J. Forensic Dent. Sci. 4, 24–28.
Saraswathi, T.R., Mishra, G., Ranganathan, K., 2009. Study of lip prints. J. Forensic Dent. Sci. 1, 28–31.
Satyanarayana, N.K., Prabhu, A., Nargund, R., 2011. Forensic odontology: cheiloscopy. Hong Kong Dent. J. 8, 25–28.
Shafer, Hine, Levy, 2009. Shafer's Textbook of Oral Pathology, sixth ed. Elsevier, Noida, India, pp. 871–897.
Sharma, P., Saxena, S., Rathod, V., 2009. Comparative reliability of cheiloscopy and palatoscopy in human identification. Indian J. Dent. Res.
20 (4), 453–457.
Sivapathasundharam, B., Prakash, P.A., Sivakumar, G., 2001. Lip prints (cheiloscopy). Indian Dent. Res. 12, 234–237.
Suzuki, K., Tsuchihashi, Y., 1970a. New attempt of personal identification by means of lip print. JADA 42, 8–9.
Suzuki, K., Tsuchihashi, Y., 1970b. Personal identification by means of lip prints. J. Forensic Med. 17, 52–57.
Suzuki, K., Tsuchihashi, Y., 1971. A new attempt of personal identification by means of lip print. Can. Soc. Forensic Sci. J. 4, 154–158.
Thomas, C.J., VanWyk, C.W., 1988. The palatal rugae in identification. J. Forensic Odontostomatol. 6, 21–27.
Tsuchihashi, Y., 1974. Studies on personal identification by means of lip prints. Forensic Sci. Int. 3, 233–248.
Vahanwala, S.P., Parekh, D.K., 2000. Study of lip prints as an aid to forensic methodology. J. Indian Dent. Assoc. 71, 269–271.
Vats, Y., Dhall, J.K., Kapoor, A.K., 2012. Gender variation in morphological patterns of lip prints among some north Indian populations. J. Forensic Dent. Sci. 4, 19–23.
Venkatesh, R., David, M.P., 2011. Cheiloscopy: an aid for personal identification. J. Forensic Dent. Sci. 3, 67–70.
Verghese, A.J., Somasekar, M., Umesh, B.R., 2010. A study of lip print types among the people of Kerala. J. Indian Acad. Forensic Med. 32 (1), 6–7.
Warren, H., 1976. Look, ask, listen and note. In: Henry, K. (Ed.), Dental Identification and Forensic Odontology. Henry Kimpton Publishers, London, pp. 22–23.
Williams, T.R., 1991. Lip prints – another means of identification. J. Forensic Ident. 41, 190–194.
© 2015 Kornhaber et al. This work is published by Dove Medical Press Limited and licensed under a Creative Commons Attribution – Non Commercial (unported, v3.0) License (http://creativecommons.org/licenses/by-nc/3.0/).
Journal of Multidisciplinary Healthcare 2015:8 299–305
Review
http://dx.doi.org/10.2147/JMDH.S84488

Ethical implications of digital images for teaching and learning purposes: an integrative review

Rachel Kornhaber1–3, Vasiliki Betihavas4, Rodney J Baber5
1School of Health Sciences, Faculty of Health, University of Tasmania, Rozelle, NSW; 2School of Nursing, The University of Adelaide, Adelaide, SA; 3Severe Burns Injury Unit, Royal North Shore Hospital, St Leonards, NSW; 4School of Nursing, University of Sydney, Sydney, NSW; 5Discipline of Obstetrics, Gynaecology and Neonatology, Sydney Medical School, University of Sydney, Sydney, NSW, Australia
Correspondence: Rachel Kornhaber, School of Health Sciences, University of Tasmania, Locked Bag 5052, Alexandria, NSW 2015, Australia. Email rachel.kornhaber@utas.edu.au

Background: Digital photography has simplified the process of capturing and utilizing medical images. The process of taking high-quality digital photographs has been recognized as efficient, timely, and cost-effective. In particular, the evolution of smartphone and comparable technologies has become a vital component in the teaching and learning of health care professionals. However, ethical standards in relation to digital photography for teaching and learning have not always been of the highest standard. The inappropriate utilization of digital images within the health care setting has the capacity to compromise patient confidentiality and increase the risk of litigation.
Therefore, the aim of this review was to investigate the literature concerning the ethical implications for health professionals utilizing digital photography for teaching and learning.
Methods: A literature search was conducted utilizing five electronic databases, PubMed, Embase (Excerpta Medica Database), Cumulative Index to Nursing and Allied Health Literature, Educational Resources Information Center, and Scopus, limited to the English language. Studies that endeavored to evaluate the ethical implications of digital photography for teaching and learning purposes in the health care setting were included.
Results: The search strategy identified 514 papers, of which nine were retrieved for full review. Four papers were excluded based on the inclusion criteria, leaving five papers for final analysis. Three key themes were developed: knowledge deficit, consent and beyond, and standards driving scope of practice.
Conclusion: The assimilation of evidence in this review suggests that there is value for health professionals utilizing digital photography for teaching purposes in health education. However, there is limited understanding of the process of obtaining, storing, and using such mediums for teaching purposes. Disparity was also highlighted in relation to policy and guideline identification and development in clinical practice. Therefore, the implementation of policy to guide practice requires further research.
Keywords: digital photography, ethics, education, informed consent, practice guidelines, health professionals, photography, teaching materials, health care

Introduction
Digital photography has simplified the process of capturing and utilizing digital images for health care professionals.
The process of taking high-quality digital photographs has been recognized as efficient, timely, and cost-effective.1–3 In particular, the evolution of smartphone and comparable technologies has enabled digital photography to become a vital component of teaching and learning in the health care setting.4,5 Consultation and documentation, clinical education, patient and family education, and publications are four key domains where digital images are frequently utilized within the clinical setting.6 This can be attributed to the minimal cost involved, user friendliness, and the ability to produce high-quality images that are easily stored, downloaded, and distributed via a multitude of mediums.2 However, ethical standards in relation to digital photography for teaching and learning have not always been of the highest standard.
The inappropriate utilization of digital images within the health care setting has the capacity to compromise patient confidentiality and increase the risk of litigation.7 In particular, health care personnel need to consider the ethical implications of digital technology within the health care setting.8,9 The very simplicity of modern digital photography has resulted, in some cases, in a relaxation of the usually applied guidelines of informed consent.9 An unequal power balance could exist between health care professionals and patients, who may feel coerced into consenting to digital photography.10 Taking photographs of at-risk and vulnerable populations requires greater ethical responsibility.11 Ensuring that the patient's identity and privacy are not sacrificed for reasons of ease and convenience is of prime importance. Furthermore, misuse of digital images obtained in clinical settings is becoming an area of concern10,12 and highlights issues surrounding privacy and confidentiality. Consent, when obtained, is often verbal and may not include an explanation conveying that photography is not only for treatment-related purposes, but also for education.9,10,13,14 Consequently, there is a heightened need for guidance in relation to the use of digital photography within the clinical setting, where such technology plays an increasingly prominent role.15 The application of digital photography in the clinical setting for the purpose of teaching and learning is poorly reported in the health care literature,16 and the authors have been unable to identify a single review that investigates digital photography in the teaching and learning of health professionals. Therefore, the aim of this integrative review was to investigate the current literature concerning the ethical implications of digital photography for teaching and learning purposes within the health care environment.
Methods
The framework guiding this integrative review is based on Whittemore and Knafl's17 five stages: problem identification, literature search, data evaluation, data analysis, and presentation. A systematic search was conducted using PubMed, Embase, Cumulative Index to Nursing and Allied Health Literature, Educational Resources Information Center, and Scopus. The references of all potential papers retrieved were examined to identify any additional papers fulfilling the inclusion criteria that may have been missed by the electronic search strategy. Boolean connectors were used to combine search terms such as health care, medical, image*, smartphone*, digital, photography, ethic*, informed consent, privacy, confidential, education, guideline*, policy, teaching, and learning. The search strategy undertaken yielded 514 articles after the duplicates were removed (Figure 1). The inclusion criteria included:
• peer-reviewed reports of original research;
• literature published in the English language within the last 10 years; and
• exploration of the use of identifiable digital imagery for teaching and learning in the health care setting.
The 10-year range was chosen due to the advancements made in the areas of digital photography, imaging, and smartphone technology in the past decade.18,19 Studies that examined histopathology specimens were excluded as these are unidentified specimens. Although integrative reviews allow the use of theoretical pieces, review articles, commentaries, editorials, gray literature, and narrative opinion,17 they were excluded.
These abstracts were read by all authors (RK, VB, and RB) based on the inclusion/exclusion criteria, yielding nine studies. Reading the full text of these studies resulted in the exclusion of four more articles as they did not meet the inclusion criteria, resulting in five studies deemed appropriate for inclusion in the review (Table 1). Any discrepancies experienced were resolved by active discussion until consensus was attained by all parties. Findings were compiled and then arranged to identify themes and relationships.

Figure 1 Flow diagram: literature review. Papers identified from the literature search strategy (n=600); removal of duplicates (n=86); papers retrieved for evaluation of title and abstract (n=514); papers excluded after review of title and abstract (n=505); full text retrieved for critical appraisal (n=9); papers excluded as they did not meet the inclusion criteria (n=4); final papers included (n=5).

Results
Study characteristics
This integrative review incorporated the key aspects of the ethical implications of utilizing digital photography in the clinical setting for the purposes of teaching and learning. The included studies comprised 678 participants collectively. The research was conducted within Australia and the UK. The respondents within the selected studies were primarily nursing and medical professionals. The domains of the clinical practice settings included plastic surgery, dermatology, emergency medicine, and surgical/medical wards.
All included studies were surveys/questionnaires, with response rates ranging from 22.6% to 78%. The studies reported the use of digital photography/imaging in the clinical setting for the purpose of teaching and learning, including the issues of consent and the utilization of technology. Surprisingly, of the five studies, only one reported policies/guidelines in relation to digital photography in its results.20 The five studies were synthesized, and their findings categorized into three themes: knowledge deficit, consent and beyond, and standards driving scope of practice.

Knowledge deficit
Of significance was the notion of a deficit in knowledge pertaining to medical staff. It was clearly identified that a poor understanding of the process of consent when capturing digital images, and a lack of familiarity with the available policies, had implications for ethical compliance and the potential for patient identification and harm. Burns and Belton10 highlighted that ethical compliance may potentially be compromised when there is a deficit in comprehension and understanding. A knowledge deficit may lead to inappropriate and unsafe practices in the clinical area, placing the patient at further risk of harm by compromising his/her privacy and identity.6 A knowledge deficit also places health care professionals at risk of breaching ethical standards and guidelines within their institutions, a finding reported by both Burns and Belton10 and Taylor et al.2 This demonstrates the need for educational workshops and further training within the clinical setting.
An excerpt from an interview with a participant in the study of Burns and Belton10 clearly indicates a lack of awareness surrounding digital photography and the process one must follow to comply with an ethical standard:

…everyone has phones that can take photos these days… but I was like 'ahh'… Where do you put that information?… I do not fancy having it sitting around on a hard drive I have no control over.

Hubbard et al1 identified that dermatological trainees sought consent from consultants when taking digital images of patients, rather than obtaining consent from the patients themselves. This raises questions and concerns around their understanding and perception of the ethical process of digital photography. It is critical for clinicians to be aware of policies and guidelines concerning the process of digital photography, the ethical implications, and the potential for harm. To highlight this point, Hubbard et al1 found that 33.6% of respondents were not aware of any available guidelines on the process of digital photography in their clinical area. Subsequently, despite a small number of facilities conducting an annual training program describing a protocol for obtaining, storing, and consenting to digital images, Bhangoo et al20 found that these sessions were predominately attended by senior nursing staff, with poor attendance by medical staff. This is surprising and concerning considering that most digital images of patients are taken by medical staff.

Consent and beyond
Health care facilities have been inadequate in developing and implementing policies and guidelines relevant to digital photography. Consequently, a disparity currently exists between the taking of digital photographs for use in teaching and learning and the obtaining of informed patient consent for this specified purpose.
However, agency guidelines such as the General Medical Council's "Making and Using Visual and Audio Recordings of Patients" in the United Kingdom21 and legislation such as the Health Insurance Portability and Accountability Act (HIPAA)6 provide clear guidance on when consent should be obtained and how it should be recorded, and also address the taking of photographs for educational reasons, although this was not reflected in the studies included in this review. Taylor et al2 identified that 25 of the 30 respondents in their study acquired digital images for the purpose of teaching and learning. Of the 25 respondents, ten always gained consent, two rarely did, and one never did. The most common
O f t ho se w it h a po lic y, fo ur e D s us ed im ag es fo r te ac hi ng p ur po se s, a nd o f t he se fo ur , o nl y tw o h ad w ri tt en p o lic ie s ad dr es si ng d ig it al im ag es fo r te ac hi ng a nd le ar ni ng ; 1 7 ha d a sp ec ifi c co ns en t fo rm a nd t hr ee ha d a w ri tt en c o ns en t in t he n o te s 63 % o f e D s re po rt ed n o t ha vi ng a p o lic y w it h 53 e D s us in g im ag es fo r te ac hi ng p ur po se s; o nl y 10 /5 3 eD s ha d w ri tt en co ns en t do cu m en te d in t he n o te s an d 43 /5 3 ha d ve rb al c o ns en t B ur ns a nd B el to n, 2 01 3 (A us tr al ia )1 0 T o ex pl or e th e w id es pr ea d us e of m ed ic al p ho to gr ap hy in a t er tia ry h os pi ta l an d its e th ic al a nd le ga l im pl ic at io ns v al id at ed q ue st io nn ai re (t w o ) an d in te rv ie w s co nt ai ni ng 1 3 o pe n- en de d qu es ti o ns D es cr ip ti ve s ta ti st ic s an d lit er at ur e- ge ne ra te d to pi cs ( th em at ic a na ly si s) N =1 67 ( 22 .6 % r es po ns e ra te ); in te rv ie w s N =8 ; d o ct o rs an d nu rs es ed uc at io n w as t he m ai n re as on fo r ta ki ng p ho to gr ap hs ; 5 1. 2% to ok p ho to s fo r te ac hi ng /e du ca tio n; v er ba l c on se nt w as t he m os t fr eq ue nt m od e of c on se nt ; a nd 3 8. 2% d is cl os ed n ot o bt ai ni ng co ns en t. 81 .2 % u se d ho sp ita l c am er as , 7 .5 % p er so na l c am er as , an d th e re m ai nd er o f p ar tic ip an ts p er so na l s m ar tp ho ne s H ub ba rd , G o dd ar d, an d w al ke r, 20 09 ( U K )1 T o d et er m in e th e us e o f d ig it al c am er as b y th e m em be rs o f t he B ri ti sh A ss o ci at io n o f D er m at o lo gi st s A no ny m o us o nl in e su rv ey v ia d ir ec t lin k D es cr ip ti ve s ta ti st ic s N =3 39 m em be rs o f t he B ri ti sh A ss o ci at io n o f D er m at o lo gi st s (3 7. 
6% r es po ns e ra te ); 23 9 co ns ul ta nt s, 7 1 re gi st ra rs , ei gh t ge ne ra l p ra ct it io ne rs , 16 n o n- co ns ul ta nt c ar ee r gr ad es , o ne h o sp it al p ra ct it io ne r, an d fo ur r es ea rc h/ cl in ic al fe llo w s 63 % r es po nd en ts u se d a di gi ta l c am er a o f w hi ch 7 .4 % d id no t o bt ai n co ns en t; 3 4. 9% o bt ai ne d ve rb al c o ns en t an d do cu m en te d in p at ie nt ’s n o te s; a nd 4 2. 3% u se d a m ed ic al ill us tr at o r co ns en t fo rm a nd 1 5. 3% u se d an a lt er na ti ve c o ns en t fo rm . T he m o st c o m m o n us e o f a d ig it al im ag e fo r w hi ch co ns en t w as o bt ai ne d w as fo r te ac hi ng p ur po se s K un de , M cM en im an , an d Pa rk er , 20 13 ( A us tr al ia )9 T o d et er m in e ho w de rm at o lo gy t ra in ee s ar e ut ili zi ng d ig it al ph o to gr ap hy in t he ir cl in ic al p ra ct ic e an d th e pr o ce du re s fo llo w ed O nl in e su rv ey p la tf o rm Su rv ey M o nk ey D es cr ip ti ve s ta ti st ic s N =1 3 (6 5% r es po ns e ra te ) de rm at o lo gi ca l r eg is tr ar s A ll th e re sp o nd en ts r ep o rt ed u si ng t he ir o w n pe rs o na l sm ar tp ho ne fo r ta ki ng d ig it al im ag es ; s ev en r es po nd en ts u se di gi ta l p ho to gr ap hy fo r te ac hi ng p ur po se s; 6 2% o f r es po nd en ts us ed t he ir d ig it al c am er a in c o ns ul ta ti o n; o nl y se ve n re sp o nd en ts r o ut in el y di sc lo se d to t he ir p at ie nt s th e id en ti ty o f t he t hi rd p ar ty w it h w ho m t he ir im ag e w as t o b e sh ar ed ; s ix re sp o nd en ts in co ns is te nt ly in fo rm ed t he ir p at ie nt ; a nd v er ba l co ns en t fo r ph o to gr ap hy w as c o m m o n in 9 2% o f c as es , w it h o nl y tw o r es po nd en ts c la im in g th at t he y ro ut in el y do cu m en te d ha vi ng o bt ai ne d co ns en t ve rb al ly T ay lo r, Fo st er , D un ki n, an d Fi tz ge ra ld , 20 08 ( U K )2 T o in ve st ig at e th e pr ev al en ce o f d ig it al ph 
o to gr ap hy a m o ng pl as ti c su rg eo ns Q ue st io nn ai re w as de si gn ed u si ng t he gu id el in es s et o ut b y th e U K in st it ut e o f M ed ic al ill us tr at o rs ; t o h ei gh te n co he si o n, q ue st io ns w er e th em ed g ro up ed an d cl o se d en de d U si ng d es cr ip ti ve st at is ti cs ; r es po ns es w er e o n ad je ct iv al s ca le w it h an e ve n nu m be r o f po ss ib ili ti es ; a nd s ur ve y m ap pe d to p re ve nt pa rt ic ip an ts r es po nd in g to u nn ec es sa ry q ue st io ns N =4 2 pl as ti c su rg eo ns r es id in g at t hr ee p la st ic s ur gi ca l u ni ts in th e U K ( 70 % r es po ns e ra te ) 25 o ut o f 3 0 su rg eo ns t o o k ph o to gr ap hs fo r te ac hi ng p ur po se s o f w hi ch 1 0 al w ay s ga in ed c o ns en t fo r th e pu rp o se o f t ea ch in g, 12 u su al ly d id , t w o r ar el y di d, a nd o ne n ev er d id . S ev en te en su rg eo ns s ta te d th at t he c o ns en t w as u su al ly o bt ai ne d ve rb al ly ; ei gh t us ua lly d o cu m en te d co ns en t in p at ie nt ’s n o te s; n in e o f t he 30 u su al ly in fo rm ed t he p at ie nt o f h is s/ he r ri gh t to w it hd ra w co ns en t; a nd 3 0 su rg eo ns o ut o f 4 2 to o k ph o to gr ap hs u si ng th ei r o w n ca m er as A b b re vi at io n : e D , e m er ge nc y de pa rt m en t. Jo u rn a l o f M u lti d is ci p lin a ry H e a lth ca re d o w n lo a d e d f ro m h tt p s: // w w w .d o ve p re ss .c o m / b y 1 2 8 .1 8 2 .8 1 .3 4 o n 0 6 -A p r- 2 0 2 1 F o r p e rs o n a l u se o n ly . Powered by TCPDF (www.tcpdf.org) 1 / 1 www.dovepress.com www.dovepress.com www.dovepress.com Journal of Multidisciplinary Healthcare 2015:8 submit your manuscript | www.dovepress.com Dovepress Dovepress 303 ethical implications of digital images form of consent was verbal. Similarly, Burns and Belton10 found that 24 of 41 health care professionals reported verbal consent as the preferred method. 
This is problematic, as there remains no data trail or evidence that consent was sought, or that consent was obtained for the use of the images for purposes other than treatment, including teaching and learning. These concerns are echoed by a participant in Burns and Belton's10 study, who stated:

"I don't think people put significant enough emphasis on the consent process. I have often told people you know 'make sure you have got consent' and they will go 'oh yeah, yeah', but it's just for education."

This quote clearly emphasizes the lack of awareness concerning consent and the ethical implications of using digital photography for educational purposes. Furthermore, the absence of consent was similarly identified by Hubbard et al,1 who reported that 7.4% of respondents who used a digital camera to gain images for teaching and learning did so without consent. Kunde et al9 revealed that only 54% of their respondents reported that they regularly informed patients of any third parties who might potentially view their image. While it may not always be possible for health care professionals to be specific about such viewing in a clinical setting, it remains important to differentiate between the clinical and educational settings. Patients routinely consent to the use of digital images for treatment purposes. However, the philosophy of obtaining written, valid informed consent prior to the acquisition of digital images for educational purposes seems lacking. Taylor et al2 also emphasized the need for staff to inform patients that their consent may be withdrawn at any time, up until images enter the public domain, where they become irretrievable. However, withdrawing consent for the images, regardless of the purpose of their acquisition, may be a redundant notion given the speed at which data can be transferred, and may well need to be addressed with the patient and their family.
Standards driving scope of practice
It is apparent from this review that, within the clinical area, discrepancies exist around the availability of policies and guidelines related to digital photography. Of those policies that are available, health care professionals' awareness of their existence appears to range from limited to nonexistent, raising the concern that practitioners may not be complying with these guidelines. Given that digital photography was used for the purpose of teaching and learning across all five included studies, it is essential that policymakers, health care practitioners, and administrators become familiar with ethical regulations and guidelines related to digital images.10 This is clearly highlighted in Bhangoo et al's20 finding that only 36% of emergency departments had a written policy pertaining to the taking of images, and of those, only two departments had a written policy specifically focused on photography in clinical and educational settings. To further highlight this issue, Taylor et al2 found that of the 25 surgeons in their study who took images for the purposes of teaching and learning, 17 took no extra measures to protect the anonymity of their patients. Conversely, Hubbard et al1 identified that in total 181 (53.3%) respondents were aware of guidelines for digital photography, although 40 (11.8%) stated that there were none and 114 (33.6%) did not know whether there were any guidelines. This raises ethical concerns, as there appears to be considerable variation in the specific guidance provided for the use of digital photography. It is also clear that health care professionals are partaking in digital photography without recognizing the existence of the policies and guidelines that guide safe practice, ensure that the patient's privacy is protected, and ensure that no harm is perpetrated.
Despite Taylor et al's2 study emphasizing the importance of health professionals being aware of guidelines pertaining to digital photography, Burns and Belton's10 study illustrates that noncompliance in the domain of obtaining consent for digital photography was widespread. Hence, it is evident that accessible, well-publicized policies and guidelines need to be developed to guide and safeguard both those practicing digital photography within the health care setting and the patients being photographed.20

Discussion
The authors have explored ethical considerations associated with the use of digital photography for teaching and learning purposes in the health care environment. Considering the fundamental importance that is directed toward ethics, informed consent, and patients' privacy, there are surprisingly few published studies exploring this domain.

Implications for practice
This review has identified several areas of concern. There is a lack of appropriate ethical guidelines to govern the use of digital photography for clinical education. We found that even when guidelines were available, as demonstrated with the General Medical Council's "Making and Using Visual and Audio Recordings of Patients"21 in the UK and the HIPAA in the US,6 health care personnel were either unaware of their existence or loath to follow them.
To compound this, the definition of Protected Health Information under HIPAA, although only relevant to the US, is very broad and generalized, and therefore does not specifically address the use of full facial images and images that could potentially reveal the identity of the patient.22 However, the studies reviewed revealed that when consent was obtained, it was commonly verbal, seemingly rarely informed, and often omitted reference to digital photographs being used for teaching and learning purposes. Finally, we found that those taking digital photographs had no common understanding of how these photographs were to be stored securely or of with whom such images might be ethically shared.

While digital photography has become a simple, useful, and readily available tool to assist in diagnosis, it is becoming increasingly used by clinicians in areas including wound management, as it provides an accurate and objective method of assessment and assists with monitoring the progress of treatment. However, the use of such photographs for other purposes raises significant ethical questions.23 Although we do not wish to obstruct the proper use of such images, we strongly believe that the following steps should be undertaken to improve ethical practices and protect patients and clinicians:
1) Professional bodies should develop guidelines to assist in the acquisition and use of digital photographs for health education purposes.
2) Training programs should then be developed to raise awareness of these guidelines and of local institutional policies, and to promote ethical conduct.
3) Appropriate consent is critical to the management of digital photographs. Consent should be informed, written, and obtained prior to any procedure, and should include full disclosure of how images are to be taken, stored, and de-identified, how they are to be used, and which audiences are likely to view them.
Appropriately acquired consent will protect both patients and staff from ethical dilemmas surrounding the use of digital photography within the domain of teaching and learning. In order to comply with ethical standards in the face of ever-changing technology, auditing of health care facilities on their practices around digital photography is critical and may shed light on the frequency and use of digital photography and on the presence of policies and guidelines that reflect the current technological climate.

Emerging smartphone technology may provide some solutions for the management of digital images. Health care institutions globally have been incorporating smartphone technology into their daily practice.6 For example, apps such as PicSafe® and Epic Haiku (Epic Systems Corporation, Verona, WI, USA) allow clinicians to consolidate the consent, the capture, and the means to retain the image within a single program,10 and allow integration into the medical record. Apps such as these may also be a critical tool for those who deliver education to clinicians and could be incorporated into existing policies and guidelines. However, any policy or guideline must reflect and comply with existing legislation. This has become increasingly difficult, as technology is far outpacing legislative and legal systems.6 As a result, some health care institutions have deemed smartphone cameras a significant ethical dilemma for patient privacy and have developed and implemented smartphone policies.6 Therefore, the development of policies and guidelines in accordance with the relevant legislation is warranted.

Limitations and strength of evidence
Strengths of this review include the use of three independent reviewers during the selection and extraction stages. No assumptions were made when methodology was unclear, which further strengthened the review process.
Similarly, the development of logic tables and a search strategy across five major relevant databases, including the Educational Resources Information Center database, offers reassurance that the review is both rigorous and comprehensive.

This review is limited by the small number of original papers that were identified for evaluation. All papers included in this review were questionnaires, which may limit the strength of the findings. However, given the methodological complexities of evaluating issues such as ethical implications, surveys can still offer useful and practical evidence with the potential to guide policy and guideline development. As the majority of studies in this review involved voluntary participation in questionnaires, there is potential for bias within the results. Likewise, small sample sizes and self-reporting may also impact data quality and the strength of our results. Finally, this review incorporates studies from Australia and the UK only, and as such may be perceived as limited because it does not reflect a global perspective. Despite the fact that the database search was extensive and inclusive, it was limited to English-language publications from the past 10 years, and we did not review the gray literature; we may therefore have overlooked some relevant studies.
Conclusion
This integrative review suggests that there is currently a deficit in knowledge pertaining to the ethical considerations involved in obtaining, storing, utilizing, and distributing digital photography within the clinical and teaching environment. These results have implications for informing the practices of those who use digital photography routinely, in particular newly graduated health professionals, who have the opportunity to lay the foundations of, and embed their future careers in, sound ethical practices that ensure the safety of their patients and themselves in a dynamic, highly technological, and litigious health care environment.

Disclosure
All authors declare no support from any organization for the submitted work.
VB was previously employed by the University of Tasmania, and RK is currently employed by the University of Tasmania. The authors report no other conflicts of interest in this work.

References
1. Hubbard VG, Goddard DJ, Walker SL. An online survey of the use of digital cameras by members of the British Association of Dermatologists. Clin Exp Dermatol. 2009;34(4):492-494.
2. Taylor D, Foster E, Dunkin CS, Fitzgerald AM. A study of the personal use of digital photography within plastic surgery. J Plast Reconstr Aesthet Surg. 2008;61(1):37-40.
3. Berle I. Clinical photography and patient rights: the need for orthopraxy. J Med Ethics. 2008;34(2):89-92.
4. Mahar PD, Foley PA, Sheed-Finck A, Baker CS. Legal considerations of consent and privacy in the context of clinical photography in Australian medical practice. Med J Aust. 2013;198(1):48-49.
5. Palacios-Gonzalez C. The ethics of clinical photography and social media. Med Health Care Philos. 2015;18(1):63-70.
6. Harting MT, DeWees JM, Vela KM, Khirallah RT. Medical photography: current technology, evolving issues and legal perspectives. Int J Clin Pract. 2015;69(4):401-409.
7. Blain C, Mackay M. The digital revolution. Br J Perioper Nurs. 2004;14(11):494-497, 499.
8. Lau CK, Schumacher HH, Irwin MS. Patients' perception of medical photography. J Plast Reconstr Aesthet Surg. 2010;63(6):e507-e511.
9. Kunde L, McMeniman E, Parker M. Clinical photography in dermatology: ethical and medico-legal considerations in the age of digital and smartphone technology. Australas J Dermatol. 2013;54(3):192-197.
10. Burns K, Belton S. Clinicians and their cameras: policy, ethics and practice in an Australian tertiary hospital. Aust Health Rev. 2013;37(4):437-441.
11. Berle I. The principles of ethical practice in professional clinical photography. J Vis Commun Med. 2004;27(1):11-13.
12. Payne K, Tahim A, Goodson A, Delaney M, Fan K. A review of current clinical photography guidelines in relation to smartphone publishing of medical images. J Vis Commun Med. 2012;35(4):188-192.
13. Berle I. The ethical context of clinical photography. J Vis Commun Med. 2002;25(3):106-109.
14. Johns MK. Informed consent for clinical photography. J Vis Commun Med. 2002;25(2):59-63.
15. Hill K. Consent, confidentiality and record keeping for the recording and usage of medical images. J Vis Commun Med. 2006;29(2):76-79.
16. Tranberg H, Rous B, Rashbass J. Legal and ethical issues in the use of anonymous images in pathology teaching and research. Histopathology. 2003;42:104-109.
17. Whittemore R, Knafl K. The integrative review: updated methodology. J Adv Nurs. 2005;52(5):546-553.
18. Sandler J, Gutierrez RJ, Murray A. Clinical photographs: the gold standard, an update. Prog Orthod. 2012;13(3):296-303.
19. Van der Rijt R, Hoffman S. Ethical considerations of clinical photography in an area of emerging technology and smartphones. J Med Ethics. 2014;40(3):211-212.
20. Bhangoo P, Maconochie IK, Batrick N, Henry E. Clinicians taking pictures - a survey of current practice in emergency departments and proposed recommendations of best practice. Emerg Med J. 2005;22(11):761-765.
21. General Medical Council. Making and Using Visual and Audio Recordings of Patients. London: General Medical Council; 2013.
22. Segal J, Sacopulos MJ. Photography consent and related legal issues. Facial Plast Surg Clin North Am. 2010;18(2):237-244.
23. Crisp J, Taylor C, Douglas C, Rebeiro G. Potter and Perry's Fundamentals of Nursing - Australian Version. Elsevier Health Sciences APAC; 2012.
----

Utilization of ground-based digital photography for the evaluation of seasonal changes in the aboveground green biomass and foliage phenology in a grassland ecosystem

Tomoharu Inoue a,b,⁎, Shin Nagai b, Hideki Kobayashi b, Hiroshi Koizumi a
a Faculty of Education and Integrated Arts and Sciences, Waseda University, 2-2 Wakamatsu-cho, Shinjuku-ku, Tokyo 162-8480, Japan
b Department of Environmental Geochemical Cycle Research, Japan Agency for Marine-Earth Science and Technology (JAMSTEC), 3173-25 Showa-machi, Kanazawa-ku, Yokohama 236-0001, Japan

Ecological Informatics 25 (2015) 1-9
http://dx.doi.org/10.1016/j.ecoinf.2014.09.013
1574-9541/© 2014 Elsevier B.V. All rights reserved.

⁎ Corresponding author at: Department of Environmental Geochemical Cycle Research, Japan Agency for Marine-Earth Science and Technology (JAMSTEC), 3173-25 Showa-machi, Kanazawa-ku, Yokohama 236-0001, Japan. Tel.: +81 45 778 5614; fax: +81 45 778 5706.
E-mail addresses: tomoharu@aoni.waseda.jp, tomoharu@jamstec.go.jp, tomoharu.inoue.mail@gmail.com (T. Inoue), nagais@jamstec.go.jp (S. Nagai), hkoba@jamstec.go.jp (H. Kobayashi), hkoizumi@waseda.jp (H. Koizumi).
Article history: Received 19 December 2013; received in revised form 23 September 2014; accepted 24 September 2014; available online 13 October 2014.

Keywords: Biomass estimation; Digital camera; Digital repeat photography; Phenological observations; RGB

Abstract
We investigated the usefulness of ground-based digital photography for evaluating seasonal changes in the aboveground green biomass and foliage phenology in a short-grass grassland in Japan. For ground-truthing purposes, the ecological variables of aboveground green biomass and spectral reflectance of aboveground plant parts were also measured monthly. Seasonal change in a camera-based index (rG: ratio of the green channel) reflected characteristic events of the foliage phenology, such as leaf-flush and leaf senescence. In addition, the seasonal pattern of rG was similar to that of the aboveground green biomass throughout the year. Moreover, there was a positive linear relationship between rG and aboveground green biomass (R2 = 0.81, p < 0.05), as was the case with spectra-based vegetation indices. On the basis of these results, we conclude that continuous observation using digital cameras is a useful tool that is less labor intensive than conventional methods for estimating aboveground green biomass and monitoring foliage phenology in short-grass grasslands in Japan. © 2014 Elsevier B.V. All rights reserved.

1. Introduction

Grassland ecosystems, which are estimated to cover from 41 to 56 million km2, or from 31% to 43% of the Earth's land surface, are ranked with forest ecosystems as the main terrestrial ecosystems (Scurlock and Hall, 1998; White et al., 2000). In East Asia, grassland ecosystems are widely distributed in alpine areas and semiarid regions (e.g., the Tibetan and Mongolian Plateaus [Ni, 2002; White et al., 2000; Xiao et al., 1995]).
These grasslands provide livelihoods to local residents and serve as carbon sinks and/or sources in the carbon budget (Fang et al., 2007; Ni, 2002; Xiao et al., 1995). However, recent studies have reported that environmental changes such as global warming, overgrazing, and land-use change can alter vegetation and ecosystem functions (e.g., carbon cycle) in these grasslands (Cao et al., 2004; Ma et al., 2010). Because uptake of CO2 by plants might be the only sustainable way of removing CO2 from the atmosphere (Eisfelder et al., 2012; Trumper et al., 2008), it is important to conduct quantitative studies of the aboveground plant biomass and the length of the plants' growing seasons in order to predict the response of the carbon cycle to future environmental changes accurately. To achieve these aims, it is necessary to establish techniques for long-term and continuous in situ observations of the spatial and temporal variations in foliage phenology and plant biomass (Eisfelder et al., 2012). Previous observations of foliage phenology and plant biomass in grasslands fall into two main classes: (1) investigations of a specific area based on conventional ground-based observations, such as clipping and weighing of aboveground plant biomass and direct observations of leaves to detect plant phenological events (e.g., Dhital et al., 2010a,b; Ma et al., 2010; Xiao et al., 1996; Zhang and Skarpe, 1996); and (2) regional or global scale investigations based on satellite remote-sensing observations (e.g., Kawamura et al., 2005; Xu et al., 2008; Yang et al., 2009). Although conventional ground-based observation is the most accurate way to collect biomass data in a fixed area, this approach is both time consuming and labor intensive (Lu, 2006).
Therefore, it would be difficult to gather continuous measurements over a wide area using conventional observation techniques (Eisfelder et al., 2012; Ide and Oguma, 2010; Richardson et al., 2009). In contrast, satellite-based monitoring requires little labor and is suitable for the continuous observation of spatial and temporal changes in terrestrial ecosystem structure (e.g., foliage phenology) at regional or global scales (Akiyama and Kawamura, 2007; Eisfelder et al., 2012; Lu, 2006). However, satellite data acquisition is limited by cloud cover and atmospheric conditions (Ide and Oguma, 2010; Muraoka et al., 2012; Nagai et al., 2010). Thus, each approach has its strengths and weaknesses. For continuous, long-term, and accurate evaluations of spatial and temporal variations in ecosystem structure and functions at a large scale, it is important to integrate ground-truthing into the study design, that is, to combine ground-based observations and satellite remote sensing, which allows the scaling-up from a specific area to a regional or global scale (Muraoka et al., 2012). To bridge the gap between ground-based observations and satellite remote sensing, researchers have begun to focus on near-surface remote-sensing techniques (e.g., Ahrends et al., 2008, 2009; Crimmins and Crimmins, 2008).
Among these, most researchers have used the technique that utilizes the red, green, and blue channels of digital numbers (RGBs) extracted from digital repeat photographs taken by tower-mounted digital cameras (e.g., Ide and Oguma, 2010; Nagai et al., 2011; Richardson et al., 2009; Sonnentag et al., 2012). Seasonal changes in vegetation indices calculated from RGBs (camera-based vegetation indices) were closely related to seasonal changes in leaf phenology (i.e., the timing of leaf-flush and leaf-fall [e.g., Ahrends et al., 2009; Nagai et al., 2011; Richardson et al., 2007]), gross primary production (e.g., Richardson et al., 2009; Saitoh et al., 2012), and the spectral reflectance estimated by a spectroradiometer system (Saitoh et al., 2012). Because digital cameras are relatively inexpensive and easy to use, as compared with other equipment (e.g., a spectroradiometer system), they can be used as components of a worldwide network to allow continuous, large-scale, and highly precise monitoring of ecological and environmental variations in an array of ecosystems (Graham et al., 2010; Ide and Oguma, 2010; Nishida, 2007; Richardson et al., 2007). Moreover, such a system can provide ground-truthing of data gathered by satellite remote sensing (Graham et al., 2010; Nagai et al., 2011; Saitoh et al., 2012). However, most previous digital photography studies focused on forest ecosystems (e.g., Ahrends et al., 2009; Richardson et al., 2009; Sonnentag et al., 2012). Few studies have used digital repeat photography to monitor the temporal variation in grassland structure and function (Migliavacca et al., 2011). Moreover, few such studies have examined the relationship between the plant biomass and the camera data (VanAmburg et al., 2006).
To validate the usefulness of the ground-based digital photography approach in grasslands, we performed 2 years of continuous monitoring with a digital camera and gathered monthly measurements of ecological variables (aboveground green biomass and spectral reflectance of aboveground plant parts) throughout a single year in a short-grass grassland in central Japan. We investigated the relationships between a camera-based vegetation index and these ecological variables. The objectives of this study were (1) to test how well the seasonal changes in each RGB channel obtained from the digital repeat photography, or in a camera-based vegetation index, reflect the temporal variations in ecosystem structure (foliage phenology and aboveground green biomass) in the grassland; and (2) to validate the usefulness of ground-based digital photography for continuous observations of grassland ecosystems.

2. Methods

2.1. Site description and study period

Our study site is a short-grass grassland located in the central mountain region of Japan (36°08′N, 137°26′E, 1340 m a.s.l.). The grassland is maintained by cattle grazing from May to October. The vegetation consisted primarily of Zoysia japonica, but Ranunculus japonicus and Trifolium repens were also common (Dhital et al., 2010a). The grass height can range from 5 to 10 cm. The annual mean air temperature and annual cumulative precipitation from 1997 to 2006 were 7.2 °C and 2151 mm, respectively (Inoue and Koizumi, 2012). The snow season usually begins in late December and ends in early April. More information about this site is provided by Dhital et al. (2010a,b) and Inoue and Koizumi (2012). We established a 20 m × 20 m experimental area on a south-facing slope (about 16°) of the study site. The present study was conducted from early May 2010 to early December 2011 (except during the snow season, which extended from late December 2010 to mid-April 2011).

2.2.
Observation of foliage phenology using a digital camera system

To observe the foliage phenology, we mounted a digital camera system (Nishida, 2007) on a stand at the site, facing east and providing a view of the whole experimental area. The placement of the system was fixed during the study period. The camera system consisted of a digital camera (Coolpix 4500; Nikon, Tokyo, Japan), a recording controller (SPC31A; Hayasaka Rikoh, Sapporo, Japan), and a lithium-ion battery (Y00-00250, Enax Inc., Tokyo, Japan). Image size was set to 2272 × 1704 pixels. All images were captured with automatic exposure and the white balance set to "auto." The images were saved in JPEG format. The observation periods were from 6 May (day of year: DOY 126) to 21 December (DOY 355) in 2010 and from 27 April (DOY 117) to 7 December (DOY 341) in 2011. The images were captured at 4-hour intervals between 00:00 and 24:00 h (Japan Standard Time: JST) each day. The timing of the start of leaf-flush (SLF), end of leaf-flush (ELF), start of leaf senescence (SLS), and end of leaf senescence (ELS) was detected by visual inspection of the images. We defined the timings of SLF, ELF, SLS, and ELS as the first day when 10% of leaves had flushed, the first day when 90% of leaves had flushed, the first day when 10% of leaves were withered, and the first day when 90% of leaves were withered, respectively.

2.3. Calculation of the camera-based vegetation indices

We selected one photo that was captured around noon (between 10:30 and 15:30 h JST) per day for the image analysis (see Appendix A for a detailed description). As determined by visual inspection, the images obtained on rainy or foggy days were removed because of noise in the RGB channels. Moreover, some images were also removed from the analysis because of dirt on the camera housing window. In the end, we used 127 and 151 images for the analysis in 2010 and 2011, respectively.
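The daily image-selection step described above can be sketched in a few lines. This is an illustrative sketch, not the authors' actual processing code: the helper name and the use of in-memory timestamps (rather than JPEG files) are assumptions for illustration.

```python
# Sketch: from each day's captures (taken at 4-hour intervals), keep the
# image whose timestamp is closest to noon within the 10:30-15:30 JST window.
from datetime import datetime, time

def select_noon_image(timestamps):
    """Return the capture time closest to 12:00 inside the 10:30-15:30
    window, or None if no capture falls inside the window."""
    window = [t for t in timestamps
              if time(10, 30) <= t.time() <= time(15, 30)]
    if not window:
        return None
    noon = 12 * 3600
    def seconds_from_noon(t):
        return abs(t.hour * 3600 + t.minute * 60 + t.second - noon)
    return min(window, key=seconds_from_noon)

# Captures at 4-hour intervals (00:00, 04:00, ..., 20:00): only the
# 12:00 capture lies inside the window, so it is the one selected.
day = [datetime(2010, 5, 6, h) for h in (0, 4, 8, 12, 16, 20)]
print(select_noon_image(day))  # -> 2010-05-06 12:00:00
```

In practice the visual screening for rain, fog, and dirt described above would still be applied to the selected image before analysis.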
The red, green, and blue channels of digital numbers (DNR, DNG, and DNB, respectively) were extracted from each pixel of an image. We then calculated the averages of DNR, DNG, and DNB within the regions of interest, an example of which is demarcated in white in Fig. 1. These analyses were conducted with the free geographic information system software GRASS GIS (http://grass.osgeo.org). Because both weather conditions (i.e., sunny or cloudy) and solar altitude vary during a day and over each season, the illumination conditions at around noon at the site may not always be equal (Saitoh et al., 2012). To avoid the effects of these variations on the RGBs, we calculated the normalized RGBs (ratios of DNR, DNG, and DNB as percentages of total DN) by using Eqs. (1) to (3). In addition, we used Eq. (4) to calculate the green excess index (GEI), which has been reported as a useful index to evaluate the variation in foliage phenology (e.g., timing of leaf flush) more precisely than the individual RGB channels (Richardson et al., 2007).

rR = DNR / (DNR + DNG + DNB)   (1)

rG = DNG / (DNR + DNG + DNB)   (2)

Fig. 1. Typical camera images of the ground surface at the study site in the (a, f) pre-foliation period, (b, g) foliation period, (c, h) leafy period, (d, i) defoliation period, and (e, j) post-defoliation period, in 2010 (a–e) and 2011 (f–j). The number in the upper right of each image is the day of the year on which the picture was taken. Because the camera system was not functioning properly, owing to a dead battery, from DOYs 129 to 149 in 2010, we could not obtain images during the foliation period in 2010. The white outline indicates the region of interest for image analysis. Some parts of the study site (lower left parts of the image) were removed for the image analysis because the areas were dug up by animals during this study period.
rB = DNB / (DNR + DNG + DNB)   (3)

GEI = (DNG − DNR) + (DNG − DNB) = 2 × DNG − (DNR + DNB)   (4)

To record the weather conditions when the camera captured images at the site, we also mounted a camera system (with the same components as noted above) with a fish-eye lens (FC-E8; Nikon) on top of an 18-m-high steel tower erected 700 m north of the site. Images of the sky were captured at 2-min intervals between 06:00 and 17:30 h (JST) each day. We used visual analysis of the images to assess the amount of cloud coverage. We defined the weather conditions as “sunny” when the cloud coverage was 80% or less and as “cloudy” when the cloud coverage was over 80%.

2.4. Observations of the spectral reflectance of the aboveground plant parts

To obtain the spectra-based vegetation indices corresponding to the camera-based vegetation indices, we measured the spectral reflectance of the aboveground plant parts with a portable spectroradiometer (MS-720; Eko Instruments, Tokyo, Japan; spectral range, 350–1050 nm; spectral interval, 3.3 nm; spectral resolution, 10 nm) at monthly intervals from May to December 2011. On each date, we randomly selected six points at the study site. The spectral measurements were performed between 11:00 and 16:00 h (JST). The spectra were measured approximately 25 cm above the ground. The instrument had a field-of-view of 25° and could observe a circular area with a diameter of 11 cm at the ground surface. A 99% diffuse reflectance Spectralon Target (SRT-99-050, Labsphere, Inc., North Sutton, NH, USA) was used to obtain the white reflectance. We calculated the spectral reflectance as the ratio to the reflectance of the target.

2.5. Calculation of spectra-based vegetation indices

Using the above-mentioned spectral data, we calculated spectra-based vegetation indices.
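The camera-based indices defined in Eqs. (1) to (4) reduce to simple arithmetic on the ROI-averaged digital numbers. A minimal sketch in plain Python (the function and variable names are ours; the original analysis used GRASS GIS):

```python
def camera_indices(dn_r, dn_g, dn_b):
    """Normalized RGB ratios (Eqs. 1-3) and green excess index (Eq. 4)
    computed from the ROI-averaged digital numbers DNR, DNG, DNB."""
    total = dn_r + dn_g + dn_b
    r_r = dn_r / total               # Eq. (1)
    r_g = dn_g / total               # Eq. (2)
    r_b = dn_b / total               # Eq. (3)
    gei = 2 * dn_g - (dn_r + dn_b)   # Eq. (4)
    return r_r, r_g, r_b, gei

print(camera_indices(90.0, 120.0, 90.0))
# → (0.3, 0.4, 0.3, 60.0)
```

Because rR, rG, and rB are ratios of the channel sum, they damp the illumination differences between sunny and cloudy days that affect the raw digital numbers.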
Because previous studies reported that the normalized difference vegetation index (NDVI), green normalized difference vegetation index (GNDVI), ratio vegetation index (RVI), difference vegetation index (DVI), and enhanced vegetation index (EVI) were correlated with aboveground plant biomass in grasslands (Chen et al., 2009; Itano and Tomimatsu, 2011; Itano et al., 2000), we calculated these vegetation indices by using Eqs. (5) to (9):

NDVI = (NIRave − redave) / (NIRave + redave)   (5)

GNDVI = (NIRave − greenave) / (NIRave + greenave)   (6)

RVI = Ra / Rb   (7)

DVI = NIRave − redave   (8)

EVI = G × (NIRave − redave) / (NIRave + C1 × redave − C2 × blueave + L)   (9)

On the basis of the Moderate Resolution Imaging Spectroradiometer (MODIS) sensor specifications (http://modis.gsfc.nasa.gov), we used the average value of the spectral data from 620 to 670 nm, 545 to 565 nm, 459 to 479 nm, and 841 to 876 nm as the reflectance in the redave, greenave, blueave, and NIRave (NIR: near infra-red) bands, respectively, and the data at 770 and 485 nm as the reflectance in the Ra and Rb bands, respectively (Itano and Tomimatsu, 2011). G = 2.5, C1 = 6, C2 = 7.5, and L = 1 are constants (Huete et al., 2002). Moreover, we also calculated spectra-based rG (rG_Sp) and spectra-based GEI (GEI_Sp) by substituting redave, greenave, and blueave for DNR, DNG, and DNB in Eqs. (2) and (4), respectively (Saitoh et al., 2012). Correlations of aboveground green biomass with each spectra-based vegetation index were evaluated.

2.6. Measurements of aboveground green biomass

During each spectral reflectance measurement, we also measured aboveground green biomass. Five circular areas with a diameter of 11 cm at the ground surface were randomly selected at the site. The aboveground plant parts inside each area were clipped at the soil surface and collected. These field measurements were conducted monthly from May to November 2011, with the circular areas selected at different points each month.
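The spectra-based indices of Eqs. (5) to (9) can be sketched the same way. This is an illustrative helper under the band definitions stated above (MODIS-like band averages plus single wavelengths for Ra and Rb); the function names are ours, not the authors':

```python
def band_mean(wavelengths, reflectance, lo, hi):
    """Average reflectance over one MODIS-like band (lo-hi nm)."""
    vals = [r for w, r in zip(wavelengths, reflectance) if lo <= w <= hi]
    return sum(vals) / len(vals)

def spectra_indices(red, green, blue, nir, r770, r485,
                    G=2.5, C1=6.0, C2=7.5, L=1.0):
    """NDVI, GNDVI, RVI, DVI, and EVI (Eqs. 5-9) from band reflectances.

    `red`, `green`, `blue`, `nir` are band-averaged reflectances;
    `r770` and `r485` are the reflectances at 770 and 485 nm (Ra, Rb)."""
    ndvi = (nir - red) / (nir + red)                          # Eq. (5)
    gndvi = (nir - green) / (nir + green)                     # Eq. (6)
    rvi = r770 / r485                                         # Eq. (7)
    dvi = nir - red                                           # Eq. (8)
    evi = G * (nir - red) / (nir + C1 * red - C2 * blue + L)  # Eq. (9)
    return ndvi, gndvi, rvi, dvi, evi
```

For example, `spectra_indices(0.05, 0.08, 0.04, 0.45, 0.40, 0.05)` yields NDVI = 0.8 for a dense green canopy (high NIR, low red reflectance).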
The collected aboveground plant parts were sorted into Z. japonica, other grasses, and senescent leaves. The aboveground green biomass of Z. japonica and other grasses was then separately oven-dried at 70 °C for 48 h and weighed. Finally, we calculated the average aboveground green biomass for each month.

3. Results

3.1. Seasonal changes in RGB channels and camera-based vegetation index

Typical camera images of the ground surface at the study site are shown in Fig. 1. The average values of DNR, DNG, and DNB and the standard deviation of the RGBs during the study periods are shown in Fig. 2. Because the camera system was not functioning properly, owing to a dead battery, from DOYs 129 to 149 (early May to late May) in 2010 and from DOYs 263 to 275 (late September to early October) in 2011, we could not obtain the RGB channels during these periods.

Fig. 2. Seasonal changes in (a, g) DNR, (b, h) DNG, (c, i) DNB, (d, j) standard deviation of RGBs, (e, k) normalized RGBs (rR, rG, and rB), and (f, l) green excess index (GEI) during 2010 (a–f) and 2011 (g–l). Circles and crosses indicate that the weather condition when the camera captured the image was sunny and cloudy, respectively. Red, green, blue, and gray symbols indicate DNR, DNG, DNB, and GEI, respectively. Vertical lines indicate the timing of start of leaf-flush (SLF), end of leaf-flush (ELF), start of leaf senescence (SLS), and end of leaf senescence (ELS).

In 2010, DNR and DNB decreased slightly from DOYs 126 to 260 (May to September). After that, the values increased from DOYs 315 to 355 (November to December; Fig. 2a,c). In contrast, DNG did not show remarkable seasonal changes (Fig. 2b). The standard deviation of RGBs showed a broad range from DOYs 260 to 355 (September to December; Fig. 2d).
In 2011, the DNR decreased from DOYs 117 to 170 (April to June), and then it gradually increased until DOY 300 (October; Fig. 2g). After that, it decreased again in December. The DNG also decreased from DOYs 117 to 170, and then it increased from DOYs 170 to 250 (June to September; Fig. 2h). After that, it gradually decreased again from DOYs 250 to 320 (September to November). In contrast, the DNB showed little seasonal change from DOYs 170 to 341 (June to December; Fig. 2i). The standard deviation of RGBs showed large ranges during two periods: DOYs 117 to 160 and DOYs 276 to 341 (Fig. 2j). The fluctuations of the seasonal changes in each RGB channel were reduced by normalization (Fig. 2e,k). In 2010, although the RGB data were lacking during mid-May, the rG increased from DOYs 126 to 150 (early May to late May) and remained at the maximum level from DOYs 183 to 260 (early July to mid-September). From DOYs 260 to 355 (mid-September to mid-December), the rG gradually decreased and reached the minimum level. In contrast, the rR and rB gradually decreased from DOYs 126 to 215 (May to August), and then they gradually increased from DOYs 215 to 336 (August to December). In 2011, the values of rG from DOYs 117 to 129 (late April to early May) were almost constant (0.33–0.34). After that, rG began to increase from DOYs 132 to 151 (mid-May to late May), and it reached the maximum level around DOY 200 (July). From DOYs 262 to 276 (late September to early October), the rG decreased rapidly. After that, it gradually decreased and reached the minimum level on DOY 340 (December). In contrast, the rR gradually decreased from DOYs 122 to 170 (May to June), rapidly increased from DOYs 250 to 280 (September to October), and then remained nearly constant until DOY 341. The rB gradually decreased until DOY 200, and then it gradually increased until DOY 341. The GEI had a bell-shaped seasonal pattern similar to that of the rG during the two years (Fig. 2f,l).
In 2011, the values of GEI were nearly zero (from −3 to 6) between DOYs 117 and 129 (early May), and it increased rapidly from DOYs 132 to 200 (mid-May to July), reaching about 70. Although the GEI peaked around 85 for a short time, it generally remained about 70 from DOYs 200 to 262 (July to September). After September, it gradually decreased and was nearly zero on DOY 340. A similar seasonal variation in GEI was observed in 2010, although it began to decrease gradually from DOY 240 (late August). The RGBs on sunny days were higher than those on cloudy days (Fig. 2a–c,g–i). However, these fluctuations were reduced by normalization (Fig. 2e,k).

Fig. 3. Seasonal changes in (a) NDVI, (b) GNDVI, (c) RVI, (d) DVI, and (e) EVI in 2011. Error bars indicate the standard deviation. Arrows indicate the timing of the start of leaf-flush (SLF), end of leaf-flush (ELF), and end of leaf senescence (ELS).

3.2. Seasonal changes in spectra-based vegetation indices

All the spectra-based vegetation indices calculated in this study reached the maximum level on DOY 228 (mid-August) (Fig. 3). Both NDVI and GNDVI showed a bell-shaped seasonal pattern, whereas the seasonal pattern of RVI, DVI, and EVI had a distinct peak. Moreover, NDVI and GNDVI varied among measurement spots on DOYs 136 (mid-May), 290 (mid-October), 317 (mid-November), and 341 (early December), but there was less spatial variation on DOYs 170 (mid-June), 205 (late July), 228 (mid-August), and 255 (mid-September). In contrast, RVI, DVI, and EVI showed large spatial variations during the entire measurement period.

3.3. Seasonal changes in foliage phenology and aboveground green biomass

Visual analysis of the foliage images in 2011 (data not shown) showed that leaf flush of grasses started on DOY 130 (early May) and was nearly finished by DOY 196 (mid-July). During the leaf-flushing period, the study site was dotted with leafless and leafed areas because the timing of foliation was uneven across the site (see Fig. 1g).
From DOYs 196 to 262 (mid-July to mid-September), the leaves remained unchanged in appearance. Leaves had already begun to wither on DOY 276 (early October), and senescence was almost complete on DOY 331 (late November). In 2010, we could not evaluate the timing of the start of leaf flush because of a lack of images. The leaf flush was nearly finished by DOY 191 (mid-July). Leaves began to wither on DOY 254 (mid-September), and senescence was complete on DOY 329 (late November). In 2011, the total aboveground green biomass showed a seasonal pattern (Fig. 4) similar to those of the camera-based indices (rG and GEI). The aboveground green biomass was 0.08 kg d.w. m−2 soon after the start of leaf-flush (DOY 133), and then it increased to 0.26 kg d.w. m−2 on DOY 168 (mid-June). In late July (DOY 203), it reached the maximum level of 0.32 kg d.w. m−2. After September, it gradually decreased and reached the minimum of 0.06 kg d.w. m−2 on DOY 341 (early December). The ratio of aboveground Z. japonica biomass to total aboveground green biomass was about 90% from DOYs 168 to 315, whereas it accounted for 72% and 75% in May (DOY 133) and December (DOY 341), respectively.

Fig. 4. Seasonal changes in aboveground green biomass in 2011. Error bars indicate the standard deviation. Arrows indicate the timing of the start of leaf-flush (SLF), end of leaf-flush (ELF), and end of leaf senescence (ELS). “d.w.” is an abbreviation for “dry weight”.

Fig. 5. Relationships between the camera-based and spectra-based vegetation indices (rG, rG_Sp, GEI, GEI_Sp, NDVI, GNDVI, RVI, DVI, and EVI) and the aboveground green biomass. “d.w.” is an abbreviation for “dry weight”. Error bars indicate the standard deviation. The black solid line represents the regression line for each vegetation index. We selected the camera-based vegetation indices of the day when the aboveground green biomass was collected. If we could not obtain the images because of rain or fog on the day when the material was collected, the image taken on the previous day was used.

3.4.
Relationships between the vegetation indices and aboveground green biomass

There were significant positive linear relationships between the vegetation indices (rG, GEI, and spectra-based vegetation indices) and aboveground green biomass (Fig. 5). The rG and rG_Sp were more strongly correlated with seasonal changes in aboveground green biomass (R2 = 0.81 and 0.82, respectively; p < 0.05) than were GEI, GEI_Sp, and DVI (R2 = 0.73, 0.66, and 0.61, respectively; p < 0.05), but the coefficients of determination for rG and rG_Sp were smaller than those of NDVI and GNDVI (R2 = 0.90 and 0.91, respectively; p < 0.05). The coefficients of determination for RVI and EVI were similar to that of rG (R2 = 0.79 and 0.78, respectively; p < 0.05).

4. Discussion

4.1. Evaluation of temporal variations in foliage phenology using the camera-based vegetation index

The start date of leaf-flush at this site in 2011 estimated by the visual analysis of the images (estimated date: DOY 130) agreed well with the timing based on an increase in GEI and rG (DOY 132). In addition, the timings of ELF and ELS detected by the visual analysis corresponded to the maximum and minimum levels of GEI and rG, respectively. Moreover, both the decrease of GEI and rG and the increase of rR were shown more clearly after the timing of SLS detected by the visual analysis. In this way, seasonal changes in the camera-based indices reflected the characteristic events of the foliage phenology at this site. Previous reports on the seasonal changes in these camera-based indices at forest sites (Ide and Oguma, 2010; Nagai et al., 2011; Richardson et al., 2009; Saitoh et al., 2012) noted a marked spike in the GEI after the leaf-flush period, which we did not observe in the grassland (Fig. 2e,f,k,l), although the GEI in both forest and grassland sites showed a bell-shaped seasonal pattern.
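The R2 values reported in Section 3.4 are ordinary least-squares coefficients of determination for a straight-line fit of biomass against each index. A stdlib-only sketch of that computation (an illustrative helper, not the authors' code; the data values below are made up):

```python
def linear_r2(x, y):
    """Coefficient of determination (R^2) of the ordinary least-squares
    line y = slope * x + intercept, as used to compare each vegetation
    index (x) against aboveground green biomass (y)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((b - (slope * a + intercept)) ** 2 for a, b in zip(x, y))
    ss_tot = sum((b - my) ** 2 for b in y)
    return 1.0 - ss_res / ss_tot
```

A perfectly linear relationship gives `linear_r2([1, 2, 3, 4], [2, 4, 6, 8]) == 1.0`; scatter around the line lowers the value, as with the weaker indices (DVI, GEI_Sp) in Fig. 5.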
However, the seasonal pattern of GEI at our site agreed with previous research conducted at a subalpine grassland (Migliavacca et al., 2011). This difference in the seasonal patterns of GEI between forest and grassland sites might reflect differences in the leaf growth characteristics between the two types of ecosystems. In a deciduous forest site, Nagai et al. (2011) noted that the increase of photosynthetic pigments related to the increasing or decreasing of rR and rG occurred later than the leaf area increase. However, photosynthetic pigmentation and leaf area increased simultaneously in a grassland site (Nishida and Higuchi, 2000). Moreover, we found that the range of the standard deviation of RGBs in summer 2011 was smaller than that in summer 2010, which suggests that leaf growth occurred simultaneously within all grasses at the site in 2011. This result also suggests that the standard deviation of RGBs may be a useful index for observing the interannual variation in differences of foliage phenology among the grasses at the site. Our results indicate that the continuous observation of foliage phenology, which requires much labor if conducted by using conventional ground-based observation techniques, can be achieved in a much simpler manner by using digital cameras, as noted by Ide and Oguma (2010). However, researchers must be aware that the color balance of camera images can change over time (Ide and Oguma, 2010) and there may be a difference in camera-based indices among various digital camera models (Sonnentag et al., 2012).
Thus, a standardized and consistent calibration method is required for the establishment of long-term observations with this technique across different sites with different digital cameras. Use of a threshold value of the camera-based vegetation index is a useful approach for the automatic detection of phenological events such as leaf-flush (Nagai et al., 2013). Inoue et al. (2014) statistically evaluated a suitable threshold value of GEI for detecting the timings of the start of leaf-flush and the end of leaf-fall in a deciduous broad-leaved forest in Japan by using daily canopy surface photographs over 10 years. As in Inoue et al. (2014), long-term camera monitoring is required to evaluate a suitable threshold value of the camera-based vegetation index for automatic detection of the phenological timings at this study site.

4.2. Relationships between aboveground green biomass and spectra- and camera-based vegetation indices

The rG and GEI showed a positive linear relationship with aboveground green biomass (Fig. 5a,b), as did the spectra-based vegetation indices (Fig. 5c–i), similar to previous studies that reported the relationship between spectra-based vegetation indices and aboveground green biomass in grasslands (Itano and Tomimatsu, 2011). However, the R2 values varied widely among the spectra-based vegetation indices (from 0.61 to 0.91). The spectral reflectance of the aboveground plant parts obtained by hyperspectral sensor measurements could represent variation in both the leaf forms of different species and the photosynthetic pigment content in the leaves. For example, the reflection of NIR could capture seasonal changes in leaf form (e.g., leaf area [Justice et al., 1985]), and the reflection of both red and green represents the seasonal changes in leaf color indicating changes in the amounts of specific pigments (i.e., anthocyanin and chlorophyll [Sims and Gamon, 2002]).
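The threshold-based detection mentioned above (Nagai et al., 2013; Inoue et al., 2014) can be sketched as a simple rule on a daily GEI series. This is a hypothetical minimal version: the two-day persistence guard and the example threshold are our assumptions, whereas the cited studies evaluate thresholds statistically over many years.

```python
def detect_start_of_leaf_flush(doys, gei, threshold):
    """Return the first DOY on which GEI reaches `threshold` and stays
    at or above it on the following observation day as well (a simple
    guard against single-day noise). Returns None if never reached."""
    for i in range(len(gei) - 1):
        if gei[i] >= threshold and gei[i + 1] >= threshold:
            return doys[i]
    return None

print(detect_start_of_leaf_flush([129, 132, 135, 138],
                                 [2, 14, 31, 52], threshold=10))
# → 132
```

With a threshold of 10 on this made-up series, the detected SLF (DOY 132) matches the rise in GEI reported for 2011 at this site, which is the kind of agreement a statistically tuned threshold aims to reproduce automatically.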
These findings highlight the importance of selecting the spectra-based vegetation index that uses the optimum spectral band at a study site when estimating aboveground green biomass from a spectra-based vegetation index (Chen et al., 2009; Itano and Tomimatsu, 2011). Our results suggest that a spectroradiometer should provide high-quality data for estimating the aboveground green biomass (e.g., Fig. 5 shows that NDVI and GNDVI had the highest R2 values). However, continuous spectral measurement is very costly. In contrast, a digital camera system can obtain near-surface remote-sensing images continuously, yet inexpensively. Its radiometric quality is not as good as that of a spectroradiometer, but a digital camera system is inexpensive and much less labor-intensive to use, while providing aboveground green biomass estimates as good as those obtained with a spectroradiometer (relationship between rG and aboveground green biomass: R2 = 0.81).

5. Conclusion

Aboveground green biomass and the length of the plants' growing seasons are important ecological information for understanding the response of the carbon cycle to future environmental changes. However, direct observation methods are too labor intensive to be suitable for continuous long-term observations. In this study, we tested whether ground-based digital photography is suitable for the continuous evaluation of seasonal changes in aboveground green biomass and foliage phenology in a short-grass grassland of Japan. Image analysis indicated that seasonal variation in the camera-based vegetation index (rG: ratio of the green channel) reflected the characteristic events of the foliage phenology, such as leaf-flush and leaf senescence, and corresponded well with seasonal variation in aboveground green biomass.
Although we did not make long-term measurements, our results suggest the feasibility of using a digital camera system for continuous long-term monitoring of foliage phenology and estimation of aboveground green biomass in short-grass grasslands in Japan, in a manner that is less labor intensive than the use of conventional methods. Our future research will focus on clarifying the generality of this approach. We need to test whether this approach is suitable for the continuous evaluation of seasonal changes in aboveground green biomass and foliage phenology in various types of grasslands.

Acknowledgments

We thank K. Kurumado and Y. Miyamoto (Takayama Field Station of River Basin Research Center, Gifu University) and M. Ozaki (Waseda University) for their assistance in our field observations. We also thank all Phenological Eyes Network members for their cooperation. We appreciate the reviewer's comments, which helped improve this paper. Financial support for this study was provided by the Japan Society for the Promotion of Science (JSPS)/National Research Foundation of Korea (NRF)/National Natural Science Foundation of China (NSFC) A3 Foresight Program (Gifu University, Korea University, and Peking University). H. Kobayashi, S. Nagai, and T. Inoue were supported by KAKENHI (25281014; Grant-in-Aid for Scientific Research B by JSPS). T. Inoue was supported by the Waseda University Grant for Special Research Projects (project nos. 2011A-841, 2012B-053, and 2013A-6166). T. Inoue and S. Nagai were also supported by the Environment Research and Technology Development Fund (S-9) of the Ministry of the Environment of Japan.

Appendix A

To investigate the effect of the diurnal changes in solar altitude on the RGBs, we examined the seasonal changes in the differences between the values of the RGB channels at 4 h before noon (i.e., morning) and 4 h after noon (afternoon) as compared to the values around noon (Fig. A1).
We found that the RGB channels obtained during the morning tended to be lower than those around noon throughout the year, whereas values obtained during the afternoon tended to be higher than those around noon. These results suggest that the illumination effect on the RGBs around noon tended to be lower than that in the morning and afternoon. Therefore, to avoid the effects of diurnal changes in solar altitude on the RGBs, we used the images captured around noon in our analysis.

Fig. A1. Seasonal changes in the difference between the morning and afternoon values of the RGB channels as compared to those around noon in 2011.

References

Ahrends, H.E., Brügger, R., Stöckli, R., Schenk, J., Michna, P., Jeanneret, F., Wanner, H., Eugster, W., 2008. Quantitative phenological observations of a mixed beech forest in northern Switzerland with digital photography. J. Geophys. Res. Biogeosci. 113, G04004. http://dx.doi.org/10.1029/2007JG000650.

Ahrends, H.E., Etzold, S., Kutsch, W.L., Stoeckli, R., Bruegger, R., Jeanneret, F., Wanner, H., Buchmann, N., Eugster, W., 2009. Tree phenology and carbon dioxide fluxes: use of digital photography for process-based interpretation at the ecosystem scale. Clim. Res. 39 (3), 261–274.

Akiyama, T., Kawamura, K., 2007. Grassland degradation in China: methods of monitoring, management and restoration. Grassl. Sci. 53 (1), 1–17.

Cao, G.M., Tang, Y.H., Mo, W.H., Wang, Y.A., Li, Y.N., Zhao, X.Q., 2004. Grazing intensity alters soil respiration in an alpine meadow on the Tibetan plateau. Soil Biol. Biochem. 36 (2), 237–243.

Chen, J., Gu, S., Shen, M., Tang, Y., Matsushita, B., 2009. Estimating aboveground biomass of grassland having a high canopy cover: an exploratory analysis of in situ hyperspectral data. Int. J. Remote Sens. 30 (24), 6497–6517.

Crimmins, M.A., Crimmins, T.M., 2008. Monitoring plant phenology using digital repeat photography. Environ. Manage. 41, 949–958.
Dhital, D., Muraoka, H., Yashiro, Y., Shizu, Y., Koizumi, H., 2010a. Measurement of net ecosystem production and ecosystem respiration in a Zoysia japonica grassland, central Japan, by the chamber method. Ecol. Res. 25 (2), 483–493.

Dhital, D., Yashiro, Y., Ohtsuka, T., Noda, H., Shizu, Y., Koizumi, H., 2010b. Carbon dynamics and budget in a Zoysia japonica grassland, central Japan. J. Plant Res. 123 (4), 519–530.

Eisfelder, C., Kuenzer, C., Dech, S., 2012. Derivation of biomass information for semi-arid areas using remote-sensing data. Int. J. Remote Sens. 33 (9), 2937–2984.

Fang, J.Y., Guo, Z.D., Piao, S.L., Chen, A.P., 2007. Terrestrial vegetation carbon sinks in China, 1981–2000. Sci. China. Ser. D Earth Sci. 50 (9), 1341–1350.

Graham, E.A., Riordan, E.C., Yuen, E.M., Estrin, D., Rundel, P.W., 2010. Public internet-connected cameras used as a cross-continental ground-based plant phenology monitoring system. Glob. Chang. Biol. 16 (11), 3014–3023.

Huete, A., Didan, K., Miura, T., Rodriguez, E.P., Gao, W., Ferreira, L.G., 2002. Overview of the radiometric and biophysical performance of the MODIS vegetation indices. Remote Sens. Environ. 83, 195–213.

Ide, R., Oguma, H., 2010. Use of digital cameras for phenological observations. Ecol. Inform. 5, 339–347.

Inoue, T., Koizumi, H., 2012. Effects of environmental factors upon variation in soil respiration of a Zoysia japonica grassland, central Japan. Ecol. Res. 27, 445–452.

Inoue, T., Nagai, S., Saitoh, T.M., Muraoka, H., Nasahara, K.N., Koizumi, H., 2014. Detection of the different characteristics of year-to-year variation in foliage phenology among deciduous broad-leaved tree species by using daily continuous canopy surface images. Ecol. Inform. 22, 58–68. http://dx.doi.org/10.1016/j.ecoinf.2014.05.009.

Itano, S., Tomimatsu, H., 2011. Reflectance spectra for monitoring green herbage mass in Zoysia-dominated pastures. Grassl. Sci. 57 (1), 9–17.

Itano, S., Akiyama, T., Ishida, H., Okubo, T., Watanabe, N., 2000.
Spectral characteristics of aboveground biomass, plant coverage, and plant height in Italian ryegrass (Lolium multiflorum L.) meadows. Grassl. Sci. 46 (1), 1–9.

Justice, C.O., Townshend, J.R.G., Holben, B.N., Tucker, C.J., 1985. Analysis of the phenology of global vegetation using meteorological satellite data. Int. J. Remote Sens. 6 (8), 1271–1318.

Kawamura, K., Akiyama, T., Yokota, H., Tsutsumi, M., Yasuda, T., Watanabe, O., Wang, G., Wang, S., 2005. Monitoring of forage conditions with MODIS imagery in the Xilingol steppe, Inner Mongolia. Int. J. Remote Sens. 26 (7), 1423–1436.

Lu, D.S., 2006. The potential and challenge of remote sensing-based biomass estimation. Int. J. Remote Sens. 27 (7), 1297–1328.

Ma, W.H., Liu, Z.L., Wang, Z.H., Wang, W., Liang, C.Z., Tang, Y.H., He, J.S., Fang, J.F., 2010. Climate change alters interannual variation of grassland aboveground productivity: evidence from a 22-year measurement series in the Inner Mongolian grassland. J. Plant Res. 124 (4), 509–517.

Migliavacca, M., Galvagno, M., Cremonese, E., Rossini, M., Meroni, M., Sonnentag, O., Cogliati, S., Manca, G., Diotri, F., Busetto, L., Cescatti, A., Colombo, R., Fava, F., Morra di Cella, U., Pari, E., Siniscalco, C., Richardson, A.D., 2011. Using digital repeat photography and eddy covariance data to model grassland phenology and photosynthetic CO2 uptake. Agric. For. Meteorol. 151, 1325–1337.

Muraoka, H., Ishii, R., Nagai, S., Suzuki, R., Motohka, T., Noda, H., Hirota, M., Nasahara, K.N., Oguma, H., Muramatsu, K., 2012. Linking remote sensing and in situ ecosystem/biodiversity observations by “satellite ecology”. In: Nakano, S., Nakashizuka, T., Yahara, T. (Eds.), Biodiversity Observation Network in the Asia-Pacific Region. Springer Verlag, Tokyo, Japan, pp. 277–308.

Nagai, S., Nasahara, K.N., Muraoka, H., Akiyama, T., Tsuchida, S., 2010. Field experiments to test the use of the normalized-difference vegetation index for phenology detection. Agric. For. Meteorol.
150 (2), 152–160.

Nagai, S., Maeda, T., Gamo, M., Muraoka, H., Suzuki, R., Nasahara, K.N., 2011. Using digital camera images to detect canopy condition of deciduous broad-leaved trees. Plant Ecolog. Divers. 4 (1), 79–89.

Nagai, S., Saitoh, T.M., Kurumado, K., Tamagawa, I., Kobayashi, H., Inoue, T., Suzuki, R., Gamo, M., Muraoka, H., Nasahara, K.N., 2013. Detection of bio-meteorological year-to-year variation by using digital canopy surface images of a deciduous broad-leaved forest. SOLA 9, 106–110. http://dx.doi.org/10.2151/sola.2013-024.

Ni, J., 2002. Carbon storage in grasslands of China. J. Arid Environ. 50 (2), 205–218.

Nishida, K., 2007. Phenological Eyes Network (PEN): a validation network for remote sensing of the terrestrial ecosystems. AsiaFlux Newsl. 21, 9–13 (available online at http://www.asiaflux.net/newsletter.html).

Nishida, K.N., Higuchi, A., 2000. Seasonal change of grassland vegetation found in the preliminary GLI experiment in the environmental research center. Bull. Terr. Environ. Res. Center, Tsukuba Univ. 1, 1–10 (in Japanese with English summary).

Richardson, A.D., Jenkins, J.P., Braswell, B.H., Hollinger, D.Y., Ollinger, S.V., Smith, M.L., 2007. Use of digital webcam images to track spring green-up in a deciduous broadleaf forest. Oecologia 152 (2), 323–334.

Richardson, A.D., Braswell, B.H., Hollinger, D.Y., Jenkins, J.P., Ollinger, S.V., 2009. Near-surface remote sensing of spatial and temporal variation in canopy phenology. Ecol. Appl. 19 (6), 1417–1428.

Saitoh, T.M., Nagai, S., Saigusa, N., Kobayashi, H., Suzuki, R., Nasahara, K.N., Muraoka, H., 2012. Assessing the use of camera-based indices for characterizing canopy phenology in relation to gross primary production in a deciduous broad-leaved and an evergreen coniferous forest in Japan. Ecol. Inform. 11, 45–54.

Scurlock, J.M.O., Hall, D.O., 1998. The global carbon sink: a grassland perspective. Glob. Chang. Biol. 4 (2), 229–233.

Sims, D.A., Gamon, J.A., 2002.
Relationships between leaf pigment content and spectral reflectance across a wide range of species, leaf structures and developmental stages. Remote Sens. Environ. 81, 337–354.

Sonnentag, O., Hufkens, K., Teshera-Sterne, C., Young, A.M., Friedl, M., Braswell, B.H., Milliman, T., O'Keefe, J., Richardson, A.D., 2012. Digital repeat photography for phenological research in forest ecosystems. Agric. For. Meteorol. 152, 159–177.

Trumper, K., Ravilious, C., Dickson, B., 2008. Carbon in drylands: desertification, climate change and carbon finance. UNEP-WCMC (United Nations Environment Programme World Conservation Monitoring Centre).

VanAmburg, L.K., Trlica, M.J., Hoffer, R.M., Weltz, M.A., 2006. Ground based digital imagery for grassland biomass estimation. Int. J. Remote Sens. 27 (5), 939–950.
http://refhub.elsevier.com/S1574-9541(14)00129-0/rf0045 http://refhub.elsevier.com/S1574-9541(14)00129-0/rf0045 http://refhub.elsevier.com/S1574-9541(14)00129-0/rf0050 http://refhub.elsevier.com/S1574-9541(14)00129-0/rf0050 http://refhub.elsevier.com/S1574-9541(14)00129-0/rf0055 http://refhub.elsevier.com/S1574-9541(14)00129-0/rf0055 http://refhub.elsevier.com/S1574-9541(14)00129-0/rf0055 http://refhub.elsevier.com/S1574-9541(14)00129-0/rf0060 http://refhub.elsevier.com/S1574-9541(14)00129-0/rf0060 http://refhub.elsevier.com/S1574-9541(14)00129-0/rf0060 http://refhub.elsevier.com/S1574-9541(14)00129-0/rf0065 http://refhub.elsevier.com/S1574-9541(14)00129-0/rf0065 http://refhub.elsevier.com/S1574-9541(14)00129-0/rf0070 http://refhub.elsevier.com/S1574-9541(14)00129-0/rf0070 http://dx.doi.org/10.1016/j.ecoinf.2014.05.009 http://refhub.elsevier.com/S1574-9541(14)00129-0/rf0085 http://refhub.elsevier.com/S1574-9541(14)00129-0/rf0085 http://refhub.elsevier.com/S1574-9541(14)00129-0/rf0080 http://refhub.elsevier.com/S1574-9541(14)00129-0/rf0080 http://refhub.elsevier.com/S1574-9541(14)00129-0/rf0080 http://refhub.elsevier.com/S1574-9541(14)00129-0/rf0090 http://refhub.elsevier.com/S1574-9541(14)00129-0/rf0090 http://refhub.elsevier.com/S1574-9541(14)00129-0/rf0090 http://refhub.elsevier.com/S1574-9541(14)00129-0/rf0095 http://refhub.elsevier.com/S1574-9541(14)00129-0/rf0095 http://refhub.elsevier.com/S1574-9541(14)00129-0/rf0100 http://refhub.elsevier.com/S1574-9541(14)00129-0/rf0100 http://refhub.elsevier.com/S1574-9541(14)00129-0/rf0105 http://refhub.elsevier.com/S1574-9541(14)00129-0/rf0105 http://refhub.elsevier.com/S1574-9541(14)00129-0/rf0105 http://refhub.elsevier.com/S1574-9541(14)00129-0/rf0110 http://refhub.elsevier.com/S1574-9541(14)00129-0/rf0110 http://refhub.elsevier.com/S1574-9541(14)00129-0/rf0110 http://refhub.elsevier.com/S1574-9541(14)00129-0/rf0110 http://refhub.elsevier.com/S1574-9541(14)00129-0/rf0115 
http://refhub.elsevier.com/S1574-9541(14)00129-0/rf0115 http://refhub.elsevier.com/S1574-9541(14)00129-0/rf0115 http://refhub.elsevier.com/S1574-9541(14)00129-0/rf0115 http://refhub.elsevier.com/S1574-9541(14)00129-0/rf0120 http://refhub.elsevier.com/S1574-9541(14)00129-0/rf0120 http://refhub.elsevier.com/S1574-9541(14)00129-0/rf0120 http://refhub.elsevier.com/S1574-9541(14)00129-0/rf0125 http://refhub.elsevier.com/S1574-9541(14)00129-0/rf0125 http://refhub.elsevier.com/S1574-9541(14)00129-0/rf0125 http://dx.doi.org/10.2151/sola.2013-024 http://refhub.elsevier.com/S1574-9541(14)00129-0/rf0135 http://www.asiaflux.net/newsletter.html http://refhub.elsevier.com/S1574-9541(14)00129-0/rf0215 http://refhub.elsevier.com/S1574-9541(14)00129-0/rf0215 http://refhub.elsevier.com/S1574-9541(14)00129-0/rf0215 http://refhub.elsevier.com/S1574-9541(14)00129-0/rf0150 http://refhub.elsevier.com/S1574-9541(14)00129-0/rf0150 http://refhub.elsevier.com/S1574-9541(14)00129-0/rf0145 http://refhub.elsevier.com/S1574-9541(14)00129-0/rf0145 http://refhub.elsevier.com/S1574-9541(14)00129-0/rf0155 http://refhub.elsevier.com/S1574-9541(14)00129-0/rf0155 http://refhub.elsevier.com/S1574-9541(14)00129-0/rf0155 http://refhub.elsevier.com/S1574-9541(14)00129-0/rf0160 http://refhub.elsevier.com/S1574-9541(14)00129-0/rf0160 http://refhub.elsevier.com/S1574-9541(14)00129-0/rf0165 http://refhub.elsevier.com/S1574-9541(14)00129-0/rf0165 http://refhub.elsevier.com/S1574-9541(14)00129-0/rf0165 http://refhub.elsevier.com/S1574-9541(14)00129-0/rf0170 http://refhub.elsevier.com/S1574-9541(14)00129-0/rf0170 http://www.unep-wcmc.org/carbon-in http://www.unep-wcmc.org/carbon-in http://refhub.elsevier.com/S1574-9541(14)00129-0/rf0175 http://refhub.elsevier.com/S1574-9541(14)00129-0/rf0175 9T. Inoue et al. / Ecological Informatics 25 (2015) 1–9 White, R.P., Murray, S., Rohweder, M., 2000. Pilot Analysis of Global Ecosystems: Grassland Ecosystems. World Resources Institute, Washington D.C., USA. 
Xiao, X., Ojima, D.S., Parton, W.J., Chen, Z., Chen, D., 1995. Sensitivity of Inner Mongolia grasslands to climate change. J. Biogeogr. 22, 643–648. Xiao, X.M., Shu, J., Wang, Y.F., Ojima, D.S., Bonham, C.D., 1996. Temporal variation in aboveground biomass of Leymus chinense steppe from species to community levels in the Xilin River Basin, Inner Mongolia, China. Vegetatio 123 (1), 1–12. Xu, B., Yang, X.C., Tao, W.G., Qin, Z.H., Liu, H.Q., Miao, J.M., Bi, Y.Y., 2008. MODIS-based remote sensing monitoring of grass production in China. Int. J. Remote Sens. 29 (17/18), 5313–5327. Yang, Y.H., Fang, J.Y., Pan, Y.D., Ji, C.J., 2009. Aboveground biomass in Tibetan grasslands. J. Arid Environ. 73 (1), 91–95. Zhang, W., Skarpe, C., 1996. Small-scale vegetation dynamics in semi-arid steppe in Inner Mongolia. J. Arid Environ. 34 (4), 421–439. http://refhub.elsevier.com/S1574-9541(14)00129-0/rf0180 http://refhub.elsevier.com/S1574-9541(14)00129-0/rf0180 http://refhub.elsevier.com/S1574-9541(14)00129-0/rf0185 http://refhub.elsevier.com/S1574-9541(14)00129-0/rf0185 http://refhub.elsevier.com/S1574-9541(14)00129-0/rf0190 http://refhub.elsevier.com/S1574-9541(14)00129-0/rf0190 http://refhub.elsevier.com/S1574-9541(14)00129-0/rf0190 http://refhub.elsevier.com/S1574-9541(14)00129-0/rf0195 http://refhub.elsevier.com/S1574-9541(14)00129-0/rf0195 http://refhub.elsevier.com/S1574-9541(14)00129-0/rf0195 http://refhub.elsevier.com/S1574-9541(14)00129-0/rf0200 http://refhub.elsevier.com/S1574-9541(14)00129-0/rf0200 http://refhub.elsevier.com/S1574-9541(14)00129-0/rf0205 http://refhub.elsevier.com/S1574-9541(14)00129-0/rf0205 Utilization of ground-�based digital photography for the evaluation of seasonal changes in the aboveground green biomass an... 1. Introduction 2. Methods 2.1. Site description and study period 2.2. Observation of foliage phenology using a digital camera system 2.3. Calculation of the camera-based vegetation indices 2.4. 
work_w4b57iwgzfeujkymgj3ozr3l3u ---- SENSORY SYSTEMS DISORDERS - Cost Studies PSS5 BEVACIZUMAB VERSUS RANIBIZUMAB FOR AGE-RELATED MACULAR DEGENERATION (AMD): A BUDGET IMPACT ANALYSIS Zimmermann I, Schneiders RE, Mosca M, Alexandre RF, do Nascimento Jr JM, Gadelha CA Ministry of Health, Brasília, DF, Brazil OBJECTIVES: The use of intravitreal injection of vascular endothelial growth factor inhibitors is an effective treatment for AMD, and trials have shown similar clinical effects of bevacizumab and ranibizumab. The aim of this study was to estimate the budget impact for the Brazilian Ministry of Health (MoH) of recommending ranibizumab instead of bevacizumab for AMD. METHODS: We did a deterministic budget impact analysis, from the MoH perspective, comparing the use of ranibizumab and bevacizumab for wet AMD. The target population was estimated by extrapolating epidemiologic data to the Brazilian population. Data about dosage, administration and fractioning were extracted from the literature. Prices were obtained from the Brazilian regulatory agency, applying potential discounting benefits. This analysis did not consider the cost of the fractioning process because it will be assumed by the states and not by the MoH.
RESULTS: The considered price of the ranibizumab vial was US$ 962.86 (fractioning is not an option). In contrast, a 4 mL vial of bevacizumab would cost US$ 410.86 (US$ 5.14 per 0.05 mL dose, resulting in 80 doses/vial). Therefore, the expenses of one year on ranibizumab would be about US$ 11,554.37, versus about US$ 61.63 for bevacizumab (12 injections for both). Thus, the use of ranibizumab instead of bevacizumab for treating 467,600 people would be associated with a US$ 5,374,007,960.48 budget impact. The sensitivity analyses also demonstrated a budget impact of US$ 3,097,416,007.65 and US$ 5,287,555,101.51 (1 dose/vial and 20 doses/vial, respectively). CONCLUSIONS: Although not a label indication, bevacizumab has been widely adopted in clinical practice. As presented above, even with inefficient fractioning methods, the use of bevacizumab would bring substantial savings to MoH resources. Even though preserving the sterility of the solution is a real-world concern, stability studies have shown that the characteristics of the solution are maintained through adequate handling and storage. PSS6 COST-OF-ILLNESS OF CHRONIC LYMPHOEDEMA PATIENTS IN HAMBURG AND SUBURBAN REGION Purwins S1, Dietz D1, Blome C2, Heyer K1, Herberger K1, Augustin M1 1University Clinics of Hamburg, Hamburg, Germany, 2University Medical Center Hamburg, Hamburg, Germany OBJECTIVES: Chronic lymphoedema is of particular interest from the socioeconomic point of view, since it is accompanied by high costs, disease burden and a permanent need for medical treatment. The economic and social impact can increase if complications such as erysipelas and ulcers develop. Therefore, the cost-of-illness of patients with lymphoedema or lipoedema should be known.
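The arithmetic reported in the PSS5 budget-impact abstract above can be reproduced in a few lines. The variable names below are ours, and the small discrepancies in the final digits of the published figures come from rounding of the per-dose price:

```python
# Reproduce the PSS5 budget-impact arithmetic (figures taken from the abstract).
RANIBIZUMAB_VIAL = 962.86        # US$ per vial, one injection per vial
BEVACIZUMAB_VIAL = 410.86        # US$ per 4 mL vial
DOSES_PER_BEV_VIAL = 80          # 0.05 mL per intravitreal dose
INJECTIONS_PER_YEAR = 12
PATIENTS = 467_600

annual_ranibizumab = RANIBIZUMAB_VIAL * INJECTIONS_PER_YEAR                        # ~US$ 11,554
annual_bevacizumab = BEVACIZUMAB_VIAL / DOSES_PER_BEV_VIAL * INJECTIONS_PER_YEAR   # ~US$ 61.63
budget_impact = PATIENTS * (annual_ranibizumab - annual_bevacizumab)

print(f"Annual ranibizumab per patient: US$ {annual_ranibizumab:,.2f}")
print(f"Annual bevacizumab per patient: US$ {annual_bevacizumab:,.2f}")
print(f"Budget impact: US$ {budget_impact:,.2f}")   # ~US$ 5.37 billion, as reported
```

Note how nearly the entire difference is driven by the one-dose-per-vial assumption for ranibizumab, which is why the abstract's sensitivity analysis on doses per vial moves the result by billions of dollars.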
METHODS: Patients with chronic primary or secondary lymph- or lipoedema of the upper or lower limbs, with at most 6 months of disease duration, were enrolled in an observational, cross-sectional study in Hamburg and its surroundings (a population of approximately 4 million inhabitants, 90% of whom are insured in the statutory health insurance (SHI) and 10% in private insurance). Standardized clinical examinations and patient interviews were carried out. The oedemas were documented via digital photography, and further available patient data were also recorded. Resource utilizations were collected. From the societal perspective, direct medical, non-medical and indirect costs were computed. RESULTS: A total of 348 patients were enrolled and interviewed. 90.8% of them were female, with a mean age of 57.3 ± 14.5 years. Mean annual costs per lymphoedema patient were €8121. These costs consisted of 58% direct (€4708) and 42% indirect (€3413) costs. The SHI accounted for about €5552 of expenses and the patient for €494.20 in out-of-pocket costs. Subgroup analyses of (a) arm vs leg oedema and (b) primary vs secondary vs lipo-lymphoedema did not show significant differences in costs. The main cost drivers in this study were medical treatment and disability costs. CONCLUSIONS: The treatment of patients with chronic lymphoedema is associated with high direct and indirect costs. PSS7 C-REALITY (CANADIAN BURDEN OF DIABETIC MACULAR EDEMA OBSERVATIONAL STUDY): 6-MONTH FINDINGS Barbeau M1, Gonder J2, Walker V3, Zaour N1, Hartje J4, Li R1 1Novartis Pharmaceuticals Canada Inc., Dorval, QC, Canada, 2St. Joseph's Health Care, London, ON, Canada, 3OptumInsight, Burlington, ON, Canada, 4OptumInsight, Eden Prairie, MN, USA OBJECTIVES: To characterize the economic and societal burden of Diabetic Macular Edema (DME) in Canada. METHODS: Patients with clinically significant macular edema (CSME) were enrolled by ophthalmologists and retinal specialists across Canada.
Patients were followed over a 6-month period, combining prospective data collected during monthly telephone interviews and at sites at months 0, 3 and 6. Visual acuity (VA) was measured and DME-related health care resource information was collected. Patient health-related quality of life (HRQOL) was measured using the National Eye Institute Visual Functioning Questionnaire (VFQ-25) and the EuroQol Five Dimensions (EQ-5D). RESULTS: A total of 145 patients [mean age 63.7 years (range: 30-86 yrs); 52% male; 81% Type 2 diabetes; mean duration of diabetes 18 years (range: 1-62 yrs); 72% bilateral CSME] were enrolled from 16 sites across 6 provinces in Canada. At baseline, the mean VA was 20/60 (range: 20/20-20/800) across all eyes diagnosed with CSME (249 eyes). Sixty-three percent of patients had VA severity in the eye diagnosed with DME (the worse-seeing eye if both eyes were diagnosed) of normal/mild vision loss (VA 20/10 to > 20/80), 10% moderate vision loss (VA ≤ 20/80 to > 20/200), and 26% severe vision loss/nearly blind (VA ≤ 20/200). At month 6, the mean VFQ-25 composite score was 79.6, the mean EQ-5D utility score was 0.78, and the EQ visual analogue scale (VAS) score was 71.0. The average 6-month DME-related cost per patient was $2,092 across all patients (95% confidence interval: $1,694 to $2,490). The cost was $1,776 for patients with normal/mild vision loss, $1,845 for patients with moderate vision loss, and $3,007 for patients with severe vision loss/nearly blind. CONCLUSIONS: DME is associated with limitations in functional ability and quality of life. In addition, the DME-related cost is substantial to the Canadian health care system. PSS8 NON-INTERVENTIONAL STUDY ON THE BURDEN OF ILLNESS IN DIABETIC MACULAR EDEMA (DME) IN BELGIUM Nivelle E1, Caekelbergh K1, Moeremans K1, Gerlier L2, Drieskens S3, Van dijck P4 1IMS Health HEOR, Vilvoorde, Belgium, 2IMS Health, Vilvoorde, Belgium, 3Panacea Officinalis, Antwerp, Belgium, 4N.V.
Novartis Pharma S.A., Vilvoorde, Belgium OBJECTIVES: To study real-life patient characteristics, treatment patterns and costs associated with DME and visual acuity (VA) level. METHODS: The study aimed to recruit 100 patients distributed evenly over 4 categories defined by last measured VA. One-year retrospective data were collected from medical records. Annual direct costs were calculated from resource use in medical records and official unit costs (€ 2011). Self-reported economic burden was collected via the Short Form Health and Labour Questionnaire (SF-HLQ). Indirect costs (€ 2011) included personal expenses and caregiver burden (SF-HLQ). RESULTS: Thirteen Belgian ophthalmologists recruited 32, 12, 14 and 6 DME patients for VA categories ≥ 20/50, 20/63-20/160, 20/200-20/400 and < 20/400, respectively. VA was stable during the study in 86% of patients. Recruitment for the lower VA categories was difficult due to long-term vision conservation with current treatments, lack of differentiation between the lowest categories in medical records, and discontinuation of ophthalmologist care in the lowest categories. 75% of patients had bilateral DME. 68% were treated for DME during the study, of whom 60% in both eyes. 50% received photocoagulation, 33% intravitreal drugs. Less than 4% of patients had paid work; 17% received a disability replacement income. Total direct medical costs in patients receiving active treatment ranged from €960 (lowest VA) to €3,058. 59% of direct costs were due to monitoring and vision support, 39% to DME treatment. Indirect cost trends were less intuitive due to small samples and large variations. Annual costs, grouped by the 2 highest and 2 lowest VA levels, were respectively €114 and €312 for visual aids, and €407 and €3,854 for home care. CONCLUSIONS: The majority of DME patients had bilateral disease. Except for the lowest VA, direct medical costs increased as VA decreased. Indirect costs were substantially higher at lower VA levels.
Low sample sizes in some categories did not allow statistical analysis of cost differences. PSS9 COST OF BLINDNESS AND VISUAL IMPAIRMENT IN SLOVAKIA Psenkova M1, Mackovicova S2, Ondrusova M3, Szilagyiova P4 1Pharm-In, Bratislava, Slovak Republic, 2Pharm-In, spol. s r.o., Bratislava, Slovak Republic, 3Pharm-In, Ltd., Bratislava, Slovak Republic, 4Pfizer Luxembourg SARL, Bratislava, Slovak Republic OBJECTIVES: To measure the burden of the disease and provide a basis for health care policy decisions. METHODS: The analysis was performed based on several data sources. Data on the prevalence of bilateral blindness and visual impairment were obtained from the official Annual Report on the Ophthalmic Clinics Activities. The cost analysis was performed from the Health and Social Insurance perspective and reflects the real costs to health care payers in 2010. Information on health care and social expenditure was obtained from the State Health and Social Insurance Funds. As detailed data on expenditures were not always available in the necessary structure, the missing data were collected in a retrospective patient survey. Both direct and indirect costs were evaluated and divided by cost type and level of visual impairment. For the estimation of indirect costs, the capital method was used. The patient survey was conducted on a randomly collected, geographically homogeneous sample of 89 respondents from all over Slovakia. RESULTS: A total of 17 201 persons with bilateral blindness or visual impairment were identified in 2010. Total yearly expenditures were 63 677 300 €. Direct costs accounted for only 7% (4 468 112 €) of total costs, and most of them were caused by hospitalisations (4 001 539 €) and medical devices (307 739 €). The indirect costs amounted to 59 209 188 €. The highest share represented loss of productivity (69%), followed by disability pensions (17%) and compensation of medical devices (14%).
CONCLUSIONS: Evidence of cost-effectiveness must be demonstrated in order to obtain reimbursement in Slovakia. According to the Slovak guidelines, indirect costs are accepted only in exceptional cases. Indirect costs of blindness and visual impairment account for more than two-thirds of total costs and should therefore be considered in health care policy evaluations. PSS10 ECONOMICAL BURDEN OF SEVERE VISUAL IMPAIRMENT AND BLINDNESS – A SYSTEMATIC REVIEW Köberlein J1, Beifus K1, Finger R2 1University of Wuppertal, Wuppertal, Germany, 2University of Bonn, Bonn, Germany OBJECTIVES: Visual impairment and blindness pose a significant burden in terms of costs on the affected individual as well as society. In addition to a significant loss of quality of life associated with these impairments, a loss of independence leading to increased dependence on caretakers and an inability to engage in income-generating activities add to the overall societal cost. As there are currently next to no data capturing this impact available for Germany, we conducted a systematic review of the literature to estimate the costs of visual impairment and blindness for Germany and close this gap. METHODS: A systematic literature search of the main medical and economic information databases was conducted from January-April
A569 Value in Health 15 (2012) A277–A575
work_w5f5vyejbrgbbgm3wf2qxd5uhq ---- www.palgrave-journals.com/dam JOURNAL OF DIGITAL ASSET MANAGEMENT Vol. 2, 3/4 178–180 © 2006 Palgrave Macmillan Ltd 1743-6540 $30.00
PAM: MANAGING ASSETS, MAXIMIZING PROFITABILITY
Playboy has a corporate-wide initiative to digitize an on-demand library of all its assets. Known as the Playboy Asset Management (PAM) system, the goal of this project is to maximize the asset profitability of one of the world's best-known brands across all media and markets.
INTRODUCTION
With its rich history of content production, Playboy archives span 53 years of print publishing, 24 years of TV and movie making, 11 years of online publishing, 5 years of radio programming, and 3 years of publishing in the mobile space. Today, there are few media platforms where Playboy cannot be found. Playboy has always been at the forefront of emerging technologies, launching one of the first pay cable TV networks and later becoming one of the first national magazines to have an online presence. Our continued search for new opportunities to leverage assets initiated the company's international mobile launch of still images and video in 2003. The legacy of these media is a library that includes millions of photos, as well as cartoons, artwork, video and a range of editorial features. With every new technology, Playboy remains committed to providing the entertainment people have come to expect from one of the most recognized brands in the world. New technologies create questions around content selection, availability, formats, size, delivery mechanisms, etc. Digital asset management (DAM) is a central part of how we answer these questions and bring content to market. DAM has been happening in informal ways for decades, with everything from departmental Access databases to simple Excel spreadsheets. The role of the PAM system is to build on these informal processes and to streamline them, allowing our content creative teams to remain technologically agnostic in a multi-platform world. Understanding the role of PAM within the existing operational infrastructure required the development of strategies to incorporate new workflows and business rules into the legacy systems. The PAM tag line, "Managing Assets, Maximizing Profitability," was created to inspire corporate-wide thinking about how to simplify the reuse of assets in ways that made the best use of the resources at hand.
The challenge for PAM is two-fold: to mine existing archives and to leverage our content production to keep pace with changing times and technologies while not slowing down the creation of any end product. PAM is a central component of integrated systems, including Rights and Content Management. They have been designed to speed product to market with the flexibility of adding new end platforms as they become available.
Playboy's digital journey: Extending the power of the brand
Michael Hires, with over a decade of media experience, directs Playboy's corporate-wide initiative to digitize and catalogue an on-demand library of all its assets. Known as the Playboy Asset Management (PAM) system, the goal of this project is to maximize the asset profitability of one of the world's best-known brands across all media and markets. Hires began his career at Playboy in 1997 and has since served in a variety of roles. Prior to his leadership role with PAM he served as the principal account manager for Playboy's online and wireless licensees.
Keywords: DAM, integrated system, repurposing assets, archives, electronic workflow
Abstract: Playboy has launched a corporate-wide digital asset management (DAM) solution to leverage assets across media platforms. As that DAM solution gets rolled out across business units, the ultimate goal is to have a single repository tied to rights and content management, streamlining the production of new branded content distribution points that are scalable, allowing for optimal revenue growth. Journal of Digital Asset Management (2006) 2, 178–180. doi: 10.1057/palgrave.dam.3650030
Michael Hires, Director, Digital Asset Management and Distribution, Playboy Enterprises Inc. Tel: (312) 373-2812. E-mail: mhires@playboy.com
The PAM system is at the core of who we are and what we do from a content production perspective. It is intertwined and enmeshed in every business production process in every division. It enables us to quickly leverage new business opportunities and distribution channels of our content as they arise.
Mark Laudenslager, Chief Information Officer, Playboy Enterprises
STILL IMAGE ASSETS
The life of our still image assets begins with the magazine. Our photo editors and art directors conceptualize the ideas and settings for each pictorial situation, whether a stand-alone pictorial or images to illustrate an article. Utilizing a blend of staff and freelance photography talent, the images are executed to extremely high-quality standards in terms of both content and technical quality. So what happens to a set of images from a photo shoot over the course of its lifetime? For a typical pictorial, only 10–15 of the hundreds, sometimes thousands, of images taken in the course of a photo shoot will be published in the magazine. Selecting the best images is a process that Playboy photo editors have honed into an art form. Identifying the images that best reflect the purpose and vision of the pictorial or article being illustrated is not an easy task. The challenge of viewing hundreds of film images has only been compounded with the introduction of digital photography. Advances in the quality of digital photography have enabled Playboy to contemplate the transition from film to digital. However, having a system to edit and archive large numbers of digital images is an absolutely necessary part of the process. And it has to be a system that respects the quality criteria that the magazine has set for itself over the years.
Gary Cole, Photography Director, Playboy Magazine
Once selected, the images are turned over to Playboy's Art Department, where they are sized and paced for optimum visual effect.
The approval process travels through the art director and editorial director of the magazine and, finally, to Editor-in-Chief, Hugh Hefner. Invariably, changes are requested, requiring that more images be identified from the overall edit. For some companies, the 10–15 images that make the print edition could be the end of the story, leaving the remaining hundreds of pictures sitting in a photo library, never to see the light of day again. Among Playboy's unique qualities is our belief that every asset has value, and no asset featuring the brand at Playboy is ever truly retired. Playboy has always looked for ways of extending the power of the Playboy brand beyond the core business of our domestic magazine. Our licensed international editions gave us one of the earliest tests of repurposing images and whole layouts, beginning with the launch of the first two non-English editions, Playboy Germany and Playboy Japan. The process worked — today there are 20 editions of Playboy around the world, including Brazil, France, Mexico, the Netherlands, Poland and Russia. Over the years, the optimal balance of 80 per cent local content — filled in by 20 per cent US-created content, or a combination of content from other foreign editions — has created and extended the global presence of Playboy while adding another source of original material that can be leveraged across other editions or in new technologies.
Playboy works around the world because its icons and its core editorial materials and values are so strong that the editors of each edition have enormous flexibility in creating magazines that touch their readers and work for their advertisers. The tensile strength of the editorial ideas and standards of quality that come from US Playboy are the building blocks that work everywhere.
David Walker, SVP-Editorial Director-International Publishing
In the domestic space, the extension of the Playboy brand appeared with the introduction of Playboy's Newsstand Specials.
Using out-takes from photo shoots, Playboy created a series of magazines extending Playboy's shelf space on the newsstand and providing a new revenue stream. As the popularity of these issues — now called Special Editions — grew, so did budgets for original photography, and a third source of original material was created. The Special Editions team publishes 25 issues every year.
The technology supporting mobile phones is making it possible to view images and video as if you are sitting on the couch reading a magazine or watching TV. We ’ ve had great success extending the power of Playboy ’ s brand online to create a host of new franchise features. We ’ re taking that expertise and using it to develop fresh, new concepts by leveraging digital asset management and metadata models for the mobile space, creating content we know Playboy fans will enjoy. John D. Thomas, Editorial Director, Playboy Online As new technologies are introduced, content creation teams are formed to support the new businesses. As an example, fi ve creative groups produce content for the businesses previously discussed. Collaboration among these teams is part of our culture, but each of these teams has an audience whose taste for new content needs to be met Enter PAM. THE VALUE OF PAM The fi rst task PAM had to address was the creation of a controlled vocabulary and a metadata model that would allow all of the assets to be cataloged, organized and made searchable across the company to speed reuse. Leveraging pre-existing electronic workfl ows for print and online production, images are entered into the PAM system. As each team reviews the image for use, they answer questions that build the metadata ’ s search criteria. The metadata model is used to extend Playboy Online ’ s assets for use in the Playboy Cyber Club, Playboy Net, Playboy Daily and other online destinations offering customers a variety of choices and options. This metadata is then used to mine archives for use on international websites, magazines and mobile offerings. In 2005, revenues from the use of these assets in the digital space was over $ 10 million. A rather complex system evolved over time for the continued, extended use of images and articles created under the Playboy brand. DAM brings to light opportunities that might have been overlooked with vast, unwieldy archives. 
With commitment and support from our CEO, CFO, CIO and our creative teams, the PAM system has been able to create significant benefits. Flexibility, together with an understanding of the organic nature of creative workflows, has been important to our initial success. While workflow changes are underway, access to material is quicker and getting product to market is faster.
A VISIONARY FUTURE
Playboy Enterprises' foresight in establishing a tradition of repurposing assets was the natural springboard for the company's embrace of the emerging field of Asset Management. What started as a way to use outtakes from photo sessions, the publication of unpublished or otherwise unused images, is now an emerging industry in itself. The ability of organizations to tailor rapidly expanding technologies to their needs is crucial to their continued financial growth. The life of an asset is limited only by imagination and the willingness to adopt these new technologies. We are already pursuing similar opportunities to repurpose our video, TV, radio, book publishing, product-licensing, and catalog assets.
work_w5ohqgzhwzatvb7iwmll236gbq ---- Running Head: The Expert and the Machine 1
Title: The Expert and the Machine: Competition or Convergence?
Author information: Eric T. Meyer, Oxford Internet Institute, University of Oxford, United Kingdom, 1 St Giles, Oxford, OX1 3JS, UK. Email: eric.meyer@oii.ox.ac.uk
Author bio: Eric T. Meyer is a Senior Research Fellow and Associate Professor at the Oxford Internet Institute, University of Oxford, UK. His work in the area of social informatics focuses on how work and knowledge practices are transformed as people adopt digital technologies.
For more, see http://www.oii.ox.ac.uk/people/meyer/ Keywords: expertise, experts, amateurs, algorithms, computerization, photography, film Author’s final copy, published copy available. Please cite as: Meyer, E.T. (2015). The Expert and the Machine: Competition or Convergence? Convergence: The International Journal of Research into New Media Technologies 21(3): 306-313. Available online at http://dx.doi.org/10.1177/1354856515579840. Abstract In this essay, the role of human expertise in the face of technological advance is discussed. There are many examples of technology that has become sufficiently advanced that the previous need to develop expert-level skills before being able to perform at a high level is either vastly reduced or eliminated. For instance, digital cameras can create sharp, beautiful photos with essentially zero technical skill, and high-definition video recordings are available on smartphones and iPads. The question is not just whether these technologies eliminate the need for expertise (thus substituting engineering for expertise), but also if in doing so they foster the development of new types of creative expertise (such as an ability to use photography as part of a social media strategy, for instance). The paper concludes by arguing that machines contribute to an increasingly capable constellation of people and machines, which together allow more people to develop their talents into expressions of creative expertise. 
Introduction There has been much talk in recent years about the growth in the power of the amateur, enabled by the Internet (Shirky, 2008), in terms of amateur contributions to Wikipedia (Benkler, 2006), the growth of amateur photography (Gómez Cruz and Meyer, 2012), citizen journalism (Outing, 2005), citizen science (Raddick et al., 2010), and any number of other realms. One element that has been less frequently mentioned in this rush toward building a world of crowd contributions is the role of the expert 1 and the role of acquired expertise. Technical expertise, as opposed to innate skill, must be developed in humans in order to reach higher levels of performance, which in turn can be thought of as the expression of creative expertise (Ericsson, 1999). Ericsson et al. (1993) were among the first to examine the role of deliberate practice, often subsequently called the 10,000 hour rule, referring to the number of hours of practice needed to develop expert-level skills. 2 In the case of a musician, there are many hours of practice that go into developing the technical expertise and skills required to produce the desired sounds and to play particular pieces of music, and many more that go into developing the creative expertise, or virtuosity, required to perform at a maximal level. Whether it takes 10,000 hours or not to become an expert, one question that has been largely ignored in recent years is the extent to which technologically sophisticated devices which provide the appearance of expertise can replace the need for individually acquired expertise. Or to put it in a somewhat flippant way, to what extent can 10 million lines of algorithmic code replace 10 thousand hours of human experience? 
Of course, the process of human expertise being replaced by the actions of machines and processes of automation (which of course are themselves the products of human expertise) is not new, and has often generated anxieties about the loss of previous human expertise-based systems. The industrial revolution saw a shift from craft- and guild-based production to large industrial automation, for instance replacing the expertise of weavers working in cottage industries with huge machines that replicated and even in many cases improved upon skilled hand-work. 3 Likewise, the rise of the networked information society of the last 30-50 years has seen the steady shift from individual labour to automation (Castells, 2011). The first “computers” after all were people (Grier, 2013), but are now, of course, automated pieces of machinery ranging from the smartphones in our pockets with Google Now and Siri predicting our next moves to massive supercomputers put to tasks such as drug discovery and the detection of international security threats. In a recent article discussing these trends in detail, Frey and Osborne have examined the computerization of occupations, and predict that about 47% of Americans are currently working in occupations at risk of being replaced by computers in the relatively near future, although the exact timeframe cannot be predicted (Frey and Osborne, 2013). I would like to take a different view, however, one that departs slightly from the fears raised at the prospect that human expertise will be increasingly replaced by that enacted by machines. Instead, the question is to what extent can sufficiently sophisticated machines allow us to replace each other, to supplant experts with non-experts who are armed with technology, to revolutionize (or undermine, depending on your view) the production of knowledge and creative expression? 
In this essay, I would like to discuss the relationship between expertise as enacted by human experts and the activities of inexpert humans armed with technologies which are sufficiently advanced so as to produce outputs of sufficient quality that they replicate or even improve upon previous expert-quality outputs. Photography I have previously written about the difference between amateurs and professionals in the context of photography: one can think about a two by two grid in which one axis distinguishes between amateurs and professionals, and the other axis differentiates between those who consider themselves photographers and those who use photographs for a variety of purposes but do not consider themselves to be photographers 4 (Meyer, 2008). These boundaries are not rigid, but are increasingly blurred in this technological era when cameras are not only highly capable but also ubiquitous in the form of smartphones. In this conceptualization, the importance for what we are discussing here is somewhat surprisingly not the professional versus amateur dimension, since both categories include people who traditionally spent many hours perfecting their technical expertise, be it in the ability to record the “decisive moment” (Cartier-Bresson, 1952) or to excel at the skills required to process and print photographs in a darkroom. Instead, we are interested in the dimension that more closely maps onto acquired expertise: people who self-identify as “photographers” (and thus internalize this role as part of their identity, in the symbolic interactionist sense) versus those who use photography as a means toward other ends. Traditionally, the highest expressions of creative photographic expertise were performed by those who had also acquired the necessary technical expertise and frequently also a significant quantity of expensive technical gear. 
In recent years, however, the technical barriers to taking high quality photographs have dropped dramatically, as have the material requirements for creating, processing, and sharing an image. The difference between a professional grade SLR and mass-market cameras designed for holiday snapshots used to be a wide gulf, with the difference in technical quality obvious to the naked eye. With the shift away from film toward digital cameras that began in the 1990s and was essentially complete within 20 years, digital devices capable of creating technically high-quality images are in everyone’s backpack or pocket in the form of small cameras, inexpensive SLRs, and smartphones. Image processing and manipulation is likewise easy: while advanced processing with Photoshop still has a learning curve, anyone can apply Instagram filters and post their images for the world to see. This raises the question of whether expertise in film photography is fundamentally different than expertise in digital photography because of the affordances and demands of the technology of each, or whether there is something fundamental to both that goes beyond creating expert-quality images using technological means (technical expertise) and gets to the question of the ability to engage in expert-level photography as a practice and process (creative expertise). One version of this story is that photography has become a domain in which little to no technical expertise is required, allowing everyone to become an expert (or at least create outputs which are indistinguishable technically from those created by experts) with little effort by leveraging the built-in “expertise” encoded in the algorithms that run our photographic devices. 
Of course, cameras themselves were seen by some as a technological means by which painting expertise was being devalued in the 19th century: As the photographic industry became the refuge of all failed painters with too little talent, or too lazy to complete their studies...I am convinced that the badly applied advances of photography, like all purely material progress for that matter, have greatly contributed to the impoverishment of French artistic genius. (Baudelaire, 1980 [orig. 1855]: 87) Baudelaire’s dismissal of “purely material progress” starts to get at the tension 5 that can arise as technological improvements offer us as human actors the ability to do things which we would be unable to do proficiently using manual methods. In Baudelaire’s worldview, the expression of a scene as rendered by an artistic genius is taken as a laudable goal, but one that is somehow debased if too many people have the ability to create an image of a scene without expending the effort required to become an expert. I think it is far too simple to say that substituting technological power and prowess for hard-won human skill and knowledge is a debasement of “artistic genius”. On the other side of the coin, however, it is also too simple to argue that technology has simply democratized endeavours such as photography, opening up the path of artistic expression to all. Instead, I think something else explains how machines reconfigure expertise: the machines we build have taken away some degree of demand for expertise in performing functional work (again, technical expertise), but at the same time, have opened up possibilities for expanded opportunities for people to engage in higher order expression (creative expertise) and also for new forms of expertise to emerge. Using our example of photography, what are examples of these new forms of expertise? 
One is clearly the ability to effectively use social media to create an online message using photography in conjunction with writing, design, presentation, and other sets of skills and abilities. Many of the parts of this social media skill set rely on automated tools to work together, but simply having access to all these tools does not make everyone an expert in social media. A limited number of people are able to acquire the expertise (operating in conjunction with their natural abilities) to become social media influencers, because these technologies allow them to leverage their innate talents in ways that are suited to success in an online environment. Many others, however, make a single blog post, YouTube video, or post a few photographs on Instagram, Flickr, or Twitter, and then disappear from view, lacking the ability, knowledge, expertise, and possibly the prior cultural capital or even the inclination required to make their work more visible. These successful social media influencers are a good example of our professionals who use photography, but do not call themselves photographers. Photographs may play an important role in the messages they broadcast, but they would in many cases not view themselves as photographers. Also important is that these new social media experts are not necessarily the same people who would have achieved expertise or prominence related to photography in another time because the types of expertise required for success are different: combining various types of media into packages that appeal to social media consumers is wholly different to working with cameras, lenses, and darkroom chemicals. 
The digital camera’s embedded technical capabilities and algorithms allow the social media expert to appear expert precisely because many of the elements of being a prominent social media influencer have had the expertise formerly required to excel packaged into machines, but it is still up to this new type of expert to tie them together in creative ways. The benefits accrue more widely than this, however: many others (including many amateurs who use photography but don’t call themselves photographers) neither have nor want these social media skills, but can still enjoy the benefits of digital cameras, relying on the same packaging of technical expertise into the digital camera to better record their personal and family life and share it with friends on Facebook, to give just one example. Filmmaking Making a film that the average layperson would perceive to be filmic, in that it is sufficiently high quality in terms of images, lighting, sound, editing, and presentation format, has for most of the history of filmmaking been out of reach of the average user. Certainly, home movies could be made using handheld movie cameras since the 1930s, VHS camcorders in the 1980s, and digital camcorders in the 1990s. However, all of these produced lower resolutions than professional equipment, and largely did not enable the sophisticated editing and post-production that marks the professional films produced by experts. However, this dynamic has changed dramatically in recent years. First, digital SLR cameras capable of shooting high-resolution video were introduced, and now small hand-held devices such as mobile phones and iPads can shoot high-quality video. 
One only needs to go to YouTube to see that people with little or no training (the traditional source of expertise) are able to produce films that at least look professional: they are shot at high resolutions, have good quality sound, and are edited into a coherent package often with titles and end credits. In a recent project, some filmmaker colleagues and I worked with several schools to test the proposition that it was possible to teach students with either limited or no previous training how to plan, shoot, edit, and produce a high quality film using nothing but a basic iPad (Meyer et al., 2014). One of the team members (Phillips) involved in the project is an experienced filmmaker. He worked directly with the students at each school to create a bespoke film using the iPad as the main platform for all aspects of the production. 6 What we found was that the iPad running the right software can replace many other pieces of technology (large cameras on tripods, editing suites, and so forth) to make the filmmaking experience much less onerous (partly because of the compact design of the technology but also because of the affordances that embed technical expertise into the software and hardware) and, as an unexpected benefit, potentially more collaborative as well. The relatively large screen of the iPad, for instance, allowed many more off-camera students to see exactly what the iPad operator was seeing and shooting, which contributed to their ability to learn even when playing a relatively passive role for the time being. Also, editing collaboratively on a large projected screen immediately after shooting enabled the students to get immediate feedback on whether their shots worked or might need reshooting, a sharp contrast to most filmmaking when rough cuts are only available days or weeks after the initial shoot. 
So, some of the expert skills normally needed to make a film have been reduced or eliminated when making a film with an iPad. One element of teaching filmmaking that was not replicated by the iPad, however, was the professional expertise of the filmmaker with regard to how one tells a dramatic and compelling story with film. In speaking with teachers, this was a key area where they felt their lack of expertise held them back from making short narrative films with their students. As a result, film projects in many schools appeared to fall back on genres that are less demanding in terms of narrative structure, such as reshooting a professional music video or movie trailer. The replacement by digital technology of difficult-to-learn technical expertise opens up new (practical) possibilities for filmmaking expertise to be taught. The expertise being taught is no longer necessarily in the technical areas of manipulating filming equipment and editing suites, but in the area of storytelling and teaching students collaborative and creative working techniques. These are arguably higher-order skills demonstrating creative expertise, potentially well-suited to success in the modern economy. A relatively small handful of people will ever need the expertise in how to operate highly technical film equipment to make feature films, but many people can benefit by gaining expertise in how to create a compelling narrative that influences those who see, hear or read it. So we return to authorship: in this area of digital photography and iPad filmmaking, we see not the diminishing of the author and creator, but a central role for creative experts to emerge who have different skills, abilities, and interests than the experts of a previous time whose place they are taking. 
Conclusion We have considered just a few technologies here that have contributed to the declining need for individual human expertise in some areas while enabling other forms of expertise to emerge, although there are many more we could have addressed (e.g. crowd-sourced Wikipedia replacing edited encyclopedias, Google search replacing consulting a librarian, or even word processing making us all into typists and typesetters). The question we have been considering is whether these technologies eliminate the need for expertise, or if they substitute engineering for expertise, or if they foster the creation of new types of expertise and the expansion of alternative forms of expertise. I think it is clear that I am arguing for the latter. The first two options are too simple, almost straw figures. Yes, technology replaces certain jobs, as it has always done and will continue to do as long as humans are able to innovate. If it were not so, we would still be living on the savannah or in caves without a weapon, farming implement, or cooking tool to our names, yet expert in the ability to hunt and gather food to be eaten raw. Also, yes, technologies do frequently substitute engineering for elements of expertise, because the engineering has been designed to replicate the lessons learned by previous experts and to embed those lessons into the algorithms and operations the technology is able to replicate. As we have seen, there is a point at which technology becomes sufficiently advanced that the previous need to develop lots of skills before being perceived as an expert is either eliminated or vastly reduced. So the answer to our earlier question (Can 10 million lines of algorithmic code replace 10 thousand hours of human experience?) is a qualified yes: we are clearly able to encode and reliably reproduce some types of expertise previously limited to humans. 
We should be clear here, however, that this applies unevenly, as some types of expertise are less amenable to an algorithmic approach than others. However, even as some forms of expertise are enacted by technology, new forms of expertise will inevitably emerge. Perhaps this is part of human nature, in that we as humans seem to seek out ways to challenge ourselves and others (White, 1959). Or perhaps it is due to the limits of any technology: no matter how many challenges the technology solves, new challenges inevitably arise as its limits become visible. To return to the title of the paper, are we seeing competition or convergence between the expert and the machine? I argue for the latter: we as humans are part of an increasingly complex but also increasingly capable constellation of people and machines (made up of physical hardware and software). We can create beautiful images with the push of a button, typeset a page without ever touching a printing press, and film something we see as it happens. However, the most beautiful photographs will be created by those who gain expertise in composition, layout, lighting, and choice of subject. The most compelling writing will be done by those who learn how to communicate their ideas effectively onto the typeset page. And the best films will be made by those who have the ability to visually tell a story that is able to generate an emotional reaction from those who see it. What the machines do for us is open up the possibility of developing these types of expertise to more people, and also allow more people such as students additional ways to explore how they might develop their talents into true expressions of creative expertise. Acknowledgements I express my thanks to the editors and reviewers of this issue for their helpful advice and comments which have improved the article; any errors are of course my own. 
This paper was not supported by any specific grant funding, but I thank the Oxford Internet Institute for continuing support for this stream of research. References Cited Baudelaire C (1980 [orig. 1855]) The Modern Public and Photography. In: Trachtenberg A (ed) Classic Essays on Photography. Stony Creek, CT: Leete's Island Books, 83-89. Benkler Y (2006) The Wealth of Networks. New Haven, CT: Yale University Press. Cartier-Bresson H (1952) The Decisive Moment. New York: Simon and Schuster. Castells M (2011) The Rise of the Network Society: The Information Age: Economy, Society, and Culture (Volume 1). Malden, MA: Wiley-Blackwell Publishers. Collins HM and Evans R (2002) The Third Wave of Science Studies: Studies of Expertise and Experience. Social Studies of Science 32(2): 235-296. Epstein SR (1998) Craft Guilds, Apprenticeship, and Technological Change in Preindustrial Europe. The Journal of Economic History 58(03): 684-713. Ericsson KA (1999) Creative expertise as superior reproducible performance: Innovative and flexible aspects of expert performance. Psychological Inquiry 10(4): 329-333. Ericsson KA (2012) The Danger of Delegating Education to Journalists: Why the APS Observer Needs Peer Review When Summarizing New Scientific Developments. Florida State University. Available at: http://www.psy.fsu.edu/faculty/ericsson/2012%20Ericssons%20reply%20to%20APS%20Observer%20article%20Oct%2028%20on%20web.doc (accessed 14 August 2014). Ericsson KA, Krampe RT and Tesch-Romer C (1993) The Role of Deliberate Practice in the Acquisition of Expert Performance. Psychological Review 100(3): 363-406. Frey CB and Osborne MA (2013) The Future of Employment: How susceptible are jobs to computerisation? Oxford: Oxford Martin School. Available at: http://www.oxfordmartin.ox.ac.uk/publications/view/1314 (accessed 22 August 2014). Gladwell M (2008) Outliers: The Story of Success. New York: Little, Brown and Company. 
Gómez Cruz E and Meyer ET (2012) Creation and control in the photographic process: iPhones and the emerging fifth moment of photography. Photographies 5(2): 203-221. Grier DA (2013) When Computers Were Human. Princeton: Princeton University Press. Jones J (2014) The $6.5m canyon: it's the most expensive photograph ever – but it's like a hackneyed poster in a posh hotel. In: Jonathan Jones on Art (Guardian Blogs). Available at: http://www.theguardian.com/artanddesign/jonathanjonesblog/2014/dec/10/most-expensive-photograph-ever-hackneyed-tasteless (accessed 10 December 2014). Meyer ET (2008) Digital photography. In: Kelsey S and St. Amant K (eds) Handbook of Research on Computer Mediated Communication, Volume I. New York: Information Science Reference, 791-803. Meyer ET, Carroll J and Phillips K (2014) Bottling Inspiration: Shoot Smart Swindon Final Project Report. London: Into Film. Nowotny H (2003) Democratising expertise and socially robust knowledge. Science and Public Policy 30(3): 151-156. O'Hagan S (2014) Photography is art and always will be. In: Sean O'Hagan on Photography (Guardian Blogs). Available at: http://www.theguardian.com/artanddesign/2014/dec/11/photography-is-art-sean-ohagan-jonathan-jones (accessed 11 December 2014). Outing S (2005) The 11 Layers of Citizen Journalism. Available at: http://www.poynter.org/content/content_view.asp?id=83126 (accessed 25 May 2006). 
Raddick MJ, Bracey G, Gay PL, et al. (2010) Galaxy Zoo: exploring the motivations of citizen science volunteers. Astronomy Education Review 9. Shirky C (2008) Here Comes Everybody: The Power of Organizing Without Organizations. New York: Penguin Press. White RW (1959) Motivation reconsidered: the concept of competence. Psychological Review 66(5): 297-333. End Notes 1 The use of the word expert in this paper is in line with Ericsson’s ideas about understanding expertise which is used to create or perform something. There are clearly other ways to think about experts, including the role of certified experts (Nowotny, 2003), such as in expert testimony, or the role of experts in decision-making (Collins and Evans, 2002). 2 The 10,000 hour rule terminology arose after Malcolm Gladwell’s (2008) popularization of the Ericsson et al. work, which has since been argued to be a mischaracterization of the original data and argument (Ericsson, 2012). The actual amount of time required is not a magical number like 10,000 hours, but varies by type of expertise and levels of innate skill. 3 See Epstein (1998) for a discussion of the role of guilds in technological change. Epstein argues that guilds were not, as some other authors have argued, simply resistant to new technologies, but were part of a technical and political environment that pursued technological innovation that “privileged skill-enhancing, capital-saving factors” (p. 696). 
4 Examples of professionals who use photography but do not consider themselves to be photographers per se include members of many scientific disciplines such as biology, medicine, archaeology, and ecology, as well as professional examples such as police who take photographs of crime scenes as part of their other duties. Amateur examples include people taking snapshots of holidays or sharing smartphone images on Facebook or Snapchat. 5 This tension expressed by Baudelaire is not completely resolved over 150 years later: in December 2014, an exchange between Guardian newspaper blog writers revisited a debate on photography as art, with Jones (2014) claiming that “Photography is not an art. It is a technology.” O'Hagan (2014) responds that Jones “still thinks painting is in some sort of competition with photography. How quaint. He also seems to think that all photography is derivative of painting. This is plainly not so.” 6 One sign that the project was successful at creating a film that displayed creative expertise is that one of the films (The Other Girl, available at http://vimeo.com/idealfilms/theothergirl) was selected for the competition section of an international film festival, the 14th European Meeting of Young Peoples’ Audiovisual Creation - Camera Zizanio.

Enumerating Photography from Spot Meter to CCD Sean Cubitt, Daniel Palmer, Les Walkling Accepted for publication, Theory, Culture & Society, November 2013, pre-press doi:10.1177/0263276412472377 Abstract The transition from analogue to digital photography was not accomplished in a single step. It required a number of feeder technologies which enabled and structured the nature of digital photography. 
Among those traced in this article, the most important is the genesis of the raster grid, which is now hard-wired into the design of the most widely employed photographic chip, the charge-coupled device (CCD). In tracing this history from origins in half-tone printing, the authors argue that qualities available to analogue photographers are no longer available to digital, and that these changes correspond to historical developments in the wider political and economic world. They conclude, however, that these losses may yet be turned into gains. Authors Professor Sean Cubitt is Professor of Film and Television at Goldsmiths, University of London, and Professorial Fellow of the University of Melbourne. His publications include Timeshift, Videography, Digital Aesthetics, Simulation and Social Theory, The Cinema Effect and EcoMedia. He is series editor for Leonardo Books at MIT Press. Mailing Address: Media and Communications, Goldsmiths, University of London, New Cross, London SE14 6NW, UK. Email: s.cubitt@gold.ac.uk Dr Daniel Palmer is Senior Lecturer in the Department of Theory of Art & Design at Monash University, Melbourne. His publications include Twelve Australian Photo Artists (with Blair French), Participatory Media: Visual Culture in Real Time, and, as editor, Photogenic: Essays/Photography/CCP 2000–2004. Mailing Address: Theory Department, Faculty of Art & Design, Monash University, PO Box 197, Caulfield East, VIC 3145, Australia Email: e@danielpalmer.com Dr Les Walkling is an artist, educator and consultant, formerly Director of Media Arts and now a Senior Research Fellow at RMIT University, Melbourne. His works are in the collections of the Center for Creative Photography, Arizona, The Metropolitan Museum of Art, New York, and the national galleries of Australia and Victoria. 
Mailing Address: School of Art, College of Design and Social Context, RMIT University, GPO Box 2476, Melbourne VIC 3001, Australia This article was written with the support of the Australian Research Council Discovery Grant DP087237. Enumerating Photography from Spot Meter to CCD When, in 1835, William Henry Fox Talbot exposed a plate illuminated by the light falling through the latticed window of his home at Lacock Abbey, it is unlikely he considered that his image might appear to observers a hundred and seventy-five years later as a precursor to the most familiar geometry of modernity: the grid. From a subject for photography to the very principle of its operation, the grid has expanded from map projections, urban planning, and mathematical exercise books to become both an aesthetic icon and a guiding principle in the construction of cameras and printers. Such continuities between past and present technologies belie the apparent rift between traditional and digital photography. Curiously, certain practices in traditional fine art photography, ostensibly incompatible with digital technology, can also be understood as contributing to the mathematicisation of the image and the institutionalisation of the raster grid that are such central hallmarks of digital photography. Our concern in this paper is to emphasise the origins of the discrete, automated and arithmetic logic of contemporary digital imaging in the longer history of photography. To do so, we take three exemplary instances: the unique account of technical practice left by the North American photographer Ansel Adams; the history of half-tone printing, which in many senses links the analogue and digital phases of photography through the introduction of electronics and the clock function; and the design of the CCD (charge-coupled device) chip used extensively in digital cameras. 
Adams and Abstraction

Our essay focuses on the quality of photographic texture which, in traditional, chemical photography, is a compromise between contrast and grain; that is, the density of silver in a print. The two qualities that together form the sense of texture are resolution (the number of grains per square inch or centimetre) and acutance (the clarity of edges). Many qualities of the medium act against such clarity. The 'circle of confusion', that is the circle of light which surrounds any exposed point on the negative, is a finite quantity that can be reduced, but never to zero.i Halation, the effect of light passing through the film strip and reflecting back onto the light-sensitive emulsion, causes haloes around bright points (Adams 1995b: 17), while the unavoidable spontaneous reduction of trace molecules of silver halide produces 'chemical fog' (Adams 1995b: 87), which degrades the shadows of the negative and the highlights of the print. While photography is today commonly discussed in terms of image capture and distribution, it has also, centrally, been a print medium. 'For example,' wrote Ansel Adams in The Print:

I have found that Seagull Grade 4 gives me a better print of my Frozen Lake and Cliffs than I was ever able to get on Agfa Brovira Grade 6, and the tone is magnificent. This is one of those significant early negatives (c. 1932) that must be considered quite poor in quality and very difficult to print. The negative contains enough information to yield an acceptable print with great effort, and I continue to improve the 'salvage' printing as best I can (Adams 1983: 50).

What is so amazing about this 25-year struggle to find an 'acceptable' realisation of an image is not just the sheer level of work involved, but the way Adams sets his visualisation against that realisation, a constant theme in his three great books on photographic technique.
When students are introduced to the basics of technical photographic theory, this is what they find: physics, optics and chemistry (or electronic processes in the case of digital photography): theory whose central task is to make it possible to produce new work, or in this case to remake old work in pursuit of a new outcome. In separating the conception of the image (whatever compelled him to make the image in the first place) from the means of production, Adams demonstrates a powerfully moral sensibility. His assessment of the negative is pitiless (regardless of an attachment that has him reprinting it a quarter of a century later, it 'must be considered quite poor in quality'). But even to Adams, the quintessential technologist, the means of production was only a means to a desired end – the print. When he was working with Lakeside Press in Chicago on the then new process of photomechanical transfer, Adams reported that they were 'pleased beyond words to have the chance to find out the procedure on such work' (Hammond 2002: 50). This is the same man who, in his Autobiography, would say 'I believe in beauty. I believe in stones and water, air and soil, people and their future and fate' (Hammond 2002: 50). As a pioneering environmentalist, Adams was devoted to nature and to the scientific knowledge of nature; as a technologist he was an assured manipulator of the laws of nature; and in his ethical responsibility to both Yosemite and his own negatives and prints, these convictions were united in a single creative practice. We have to imagine him in the darkroom working the negative in order to produce a rememoration of the landscape, not only as raw experience but as an experience framed in the knowledge that, for example, Yosemite was already a threatened wilderness (whence the centrality of his photography to the project of the Sierra Club).
As printmaker, Adams' commitment to the darkroom was an attempt to establish artisan credentials for photography generally and for his prints in particular, as opposed, for example, to the seized moments of life on the wing in Henri Cartier-Bresson's photography. If anything is, Adams' are graven images. In his nostalgia, Adams is working not only with a memory but with an expectation – whence the many prints of Frozen Lake and Cliffs. The work he undertakes in the darkroom is precise: etymologically, it 'cuts before'. His fieldwork is undertaken in the expectation of finding the landscape that is already there in the imagination; his darkroom practice not only restores the vision (and the visions of precursors) he brought as expectation to the landscape – it is itself a pre-vision of the prints which as yet do not exist. So the prints encapsulate excision, incision and precision. Their stilling involves not only the cutting out of the moment from the rest of time but all these past and future orientations. In this sense they are not present at all. They arrest the vanishing and becoming not only of the landscapes and their changing light but the vanishing and becoming of the photographer, his experiences, even his darkroom. Contra the idea that technology is a barrier to experience, Adams' work seems to suggest that while his craft is conscious, it is also a dialogue, not only with the natural green world and the laws of physics, but with apparatuses he is not in any simple way constrained to employ. Craft is conscious because it is a dialogue, because it necessitates an intrigue of three partners: physics, technology and the artist. Of the three, the least conscious, in this instance at least, is the artist – not because he is operating on automatic, not at all; on the contrary.
But if we are right to say that artists, like anyone else, are social constructs, what is unconscious in the artist is the society that constructs him. Looking at the prints, or considering the accounts he gives of their making, it is hard to see Adams as the kind of technologist who devotes himself to overcoming either the tools or the objects of his art. Sure, he is happy enough to arm-wrestle light till it gives him the heightened responses he dreams of; and he is equally happy to innovate in his technological practice to get the finest result the engines he has at his disposal will come up with. What he does not seem to want is to win, to subdue the light of the world or the properties of his materials and his kit. If anything, he expresses a desire to submit to them. But this submission is not submission to the values of reality, where the word 'values' has the very specific meaning it has in photometry as a measure of brightness. Greyscale photography has only brightness, 'value', to display. Certainly hues are significant too: various film stocks are more or less sensitive to specific wavelengths, and skilled photographers and cinematographers used colour filters to enhance skies, foliage and faces long before the industrialisation of colour film. Likewise, various papers fundamentally alter print colour, determining warmer and cooler tones. Adams was heavily influenced by Pictorialism early in his career – a movement that largely subscribed to the idea that art photography needed to emulate the painting and etching of the time, through techniques such as soft focus, special filters, lens coatings and exotic printing processes in the darkroom. Adams retains a Pictorialist ethos insofar as he is adamant that he does not aim to reproduce the scene as it appears to ordinary human vision.
As a modernist, overwhelmingly concerned with the form of the image, its transformation by a singular artistic vision, he approves of the upside-down image on the ground glass on the back of the large-format view camera because it abstracts the values on the plate from the scene they otherwise represent (Adams 1995a: 30). As a teacher, he recommends various techniques for abstraction, from looking through a monochromatic viewing filter to isolating elements of a scene by looking through a cardboard mailing tube, or using a spot meter to register the brightness of small areas within the scene.ii The entire principle of the Zone System, which he elaborates in The Negative – probably the defining technical manual of photography – is to differentiate and abstract the photographer's visualisation from the subject that lies in front of the camera. It is not simply that the photograph is greyscale and two-dimensional, but that the values (the brightnesses) of individual areas in the frame are there to be used. If the photographer opts to expose his film in such a way that a sheet of white paper appears as middle-grey, and in the process reveals details in shadows or in highlights that might otherwise be lost, or alters the balance between sky and cloud, or renders autumn foliage as dramatic highlights where naturally it would produce a dull grey, so be it. The vision, the 'previsualisation' which precedes and guides the entire process – from choosing a lens to framing a print – is about that finished object, not the imaged scene, even though Adams' fine print tradition concentrates on already photogenic subjects such as desert landscapes, weathered walls and rocks; or if it is about the scene, it is about some truth about the scene, emotional or intellectual, not about how it looked to the photographer when the shutter opened.
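The arithmetic of such placement is worth spelling out. Each zone in Adams' Zone System sits one photographic stop (a doubling or halving of exposure) from its neighbours, and a reflected-light meter always proposes an exposure that renders whatever it reads as Zone V, middle grey. A minimal sketch, assuming the conventional numbering of the zones (the function name is ours, not Adams'):

```python
def exposure_shift(target_zone, metered_zone=5):
    """Stops of extra exposure needed to move a metered tone from
    Zone V (where any reflected-light meter places it) to the chosen
    target zone: +1 stop per zone brighter, -1 stop per zone darker."""
    return target_zone - metered_zone

# A sheet of white paper sits naturally around Zone VIII. Metering it
# and exposing exactly as the meter indicates renders it middle grey
# (Zone V); opening up three stops would restore it to Zone VIII.
print(exposure_shift(8))   # -> 3  (open up three stops)
print(exposure_shift(5))   # -> 0  (accept the meter: paper prints grey)
```

The point is the abstraction: the metered scene becomes a set of relative values that the photographer may place wherever the visualised print requires.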
In this sense previsualisation generalises Alfred Stieglitz's notion of the 'equivalent', whereby subjects such as clouds 'stand in' for something spiritual; that is, the refined beauty of the print transcends its subject (Norman 1973). Adams' practice and his technical manuals can thus be understood as guides to understanding photography as the abstraction of vision from the merely seen.

Photometry and the Control of Light

Texture in photography is the product of illumination, distance to the subject, focal length, aperture, exposure time, and the efficiency of the lens (multi-component lenses, even when coated, lose light by reflection: Adams 1995a: 133 and passim); of film speed, the type of developer used, the duration, temperature and agitation of the development process, the type and duration of fixing and the care taken in washing and drying the negative (Adams 1995b: 29ff, 181); and, in printing, of the quality of the materials and paper used, the duration of exposure for various areas of the negative, and the final viewing conditions of the print.iii In Adams' trilogy of technical manuals, each of these processes is open to intervention by 'the creative photographer', while at the same time he argues that obsession with technical prowess cannot substitute for an intuitive grasp of each of these variables – plus lights, flash, colour temperatures and so on. In other words, the richness of photography is that it is irreducible, in Adams' world, to automation. The photographer is quite capable of enjoying a schoolmasterly arithmetic exposition of the inverse square rule in foot-candles, but equally seems quite happy that manufacturers of light meters fail to agree on a common standard. It allows him to replace their proprietary scales with his own, the Zone System: to turn the nascent automation of exposure readings into a creative practice of his own with the addition of an adhesive tape with handwritten numerals on it.
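The schoolmasterly arithmetic in question is the inverse square rule: illuminance falls with the square of the distance from the source. A hedged sketch, with our own function name and units as stated:

```python
def illuminance_foot_candles(intensity_candelas, distance_feet):
    """Inverse square rule: illuminance (foot-candles) equals luminous
    intensity (candelas) divided by the square of the distance (feet)."""
    return intensity_candelas / distance_feet ** 2

# Doubling the distance from the source quarters the illuminance.
print(illuminance_foot_candles(100, 2))   # -> 25.0
print(illuminance_foot_candles(100, 4))   # -> 6.25
```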
It is the archetypal jury-rigged solution to a partially effective piece of kit.iv Neither the original Weston light meter nor the more sophisticated SEI (Salford Electrical Instruments) Photometer provided any level of automation. They simply provided 'objective measurements' that needed to be interpreted, and each film manufacturer had its own system for interpreting relative film speed and consequently the exposure value. Applying a zone scale over the top, effectively renaming the manufacturer's scale, was therefore about a different interpretation of what was measured, what it meant, and what you could do, or wanted to do, with that data. This challenge continues today, where a photographer has to select Manual Exposure in order to interpret the contemporary camera's built-in light meter measurements, while Auto Aperture, Auto Shutter Speed, Full Auto, Program and other stylistic modes invoke the manufacturer's predetermined interpretation of the reflected brightnesses. However, because of the extraordinary level of instantly available reporting via the digital camera's LCD screen – both pictorially (less accurate and more impressionistic) and via a histogram (more accurate and less impressionistic) mapping of the relative brightness values – the photographer no longer has to interpret the data imaginatively. More specifically, they no longer have to translate the data in a manner (zones) that can be easily conceptualised and thereby 'imagined' or previsualised as final print values. In digital capture, the photographer can immediately evaluate the 'actual data'. This 'digital polaroid' aspect of digital capture – as well as the economics of film-less photography – appears to encourage photographers to defer much of their imaginative conception of the image to its post-processing. But then again, this is not really so different from earlier practice.
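The histogram mentioned above is itself an enumeration: the camera simply counts how many pixels fall into each of a fixed number of brightness bins. A rough sketch of the idea (the bin count and function name are illustrative, not any manufacturer's implementation):

```python
def brightness_histogram(pixels, bins=8):
    """Count 8-bit brightness values (0-255) into equal-width bins,
    as a camera's LCD histogram display does."""
    counts = [0] * bins
    width = 256 / bins
    for p in pixels:
        counts[min(int(p / width), bins - 1)] += 1
    return counts

# A frame dominated by shadows piles its counts into the leftmost bins.
print(brightness_histogram([0, 10, 20, 40, 200, 250]))
# -> [3, 1, 0, 0, 0, 0, 1, 1]
```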
The Zone System also sought to guarantee an appropriate range of values from which the desired print could be crafted (post-processed) in the analogue darkroom. It was both a form of quality assurance and a way of dreaming (seeing) the world as a print. This is why it was called 'previsualisation'. One thing is clear, however: the result of the drive behind contemporary camera design and manufacture is to automate this moment of previsualisation. We now have auto white balance, auto exposure, auto aperture, auto shutter speed, auto focus, auto face recognition, auto processing (typically JPEG), auto numbering, auto metadata, and so on. As Julian Stallabrass argues in his essay 'Sixty Billion Sunsets' (1996), relieving the camera user of manual control has the paradoxical effect of mystifying the camera's processes. Stallabrass, for his part, romanticises the 35mm amateur photographer's activity as a zone of freedom, or non-alienated activity – defined against work – in the face of their technological domination by state-of-the-art electronics and the then imminent rise of digital cameras. Although Adams inevitably employed forms of automation – accepting certain preconditions of his production, using products, films, papers, premixed developers, fixers and toners that he did not manufacture himself – he collaborated with their manufactured sensibility, accepting their qualities, and in that he gave himself over to their value-adding. In this sense, 'innovation' can be considered in terms of how one collaborates with or redefines the limits of available automation. There is, however, no sense of 'alchemy' in digital processes, precisely because every gesture is reduced to a mathematical equation, to numbers. Numbers interact in precise and predictable ways. Photographic chemistry rarely appeared so predictable because, unlike numbers, the substance of its chemistry was invisible to the operator.
Analogue photographers were only able to adhere to the external conditions of production, such as the environment, its temperature, dilution and duration, within which the chemical processing happened. Hence great efforts were made to monitor and thereby stabilise the processing conditions (time, temperature, dilution, purity, and so on). Towards the final stages of processing a print, 'magical' outcomes could appear to those alert to possibilities that exceeded any capacity to predict how the process would work. This was the bane of commercial photographers, but the delight of fine art photographers. Analogue photographers could not 'see and therefore know' the analogue equivalent (atoms and molecules) of today's digital numbers (which anyone can read). Human micro-blindness hid the chemical 'truth' from photographers, so they were encouraged to work harder for the revelation. For example, they watched carefully the progression of a chemical toner and immediately arrested its progress when, suddenly, an effect beyond their imagination – or what they were searching for – appeared. Their limits of perception allowed what was happening in the processing tray to assume a magical status. While the electrons in digital processes cannot be 'seen' any more than the molecules of chemical processes, there is an indexical measure of them – that is, they can be counted and their effects numerically accounted for.

Half Tone

One important aspect of photography falls outside Adams' ambit: the distribution of images. Modern distribution requires two factors: mass production and speedy dissemination. Talbot had already imagined a system of producing intaglio prints from photographs. Indeed the 1840 Talbotype was itself a print medium: as many prints as desired could be drawn from a single negative.
But when in 1855 Alphonse Louis Poitevin discovered that bichromated gelatine reacted to light by hardening, the way was open for printing from photographs, and for their truly mass circulation embedded in print media like magazines. Relief printing methods involved exposing the gelatine (and substitutes including treated albumen and fish glue) to the negative, etching out the unexposed areas and inking the raised surfaces. The half-tone process interposes a screen between the original and a new photograph of it, giving the effect of a grid of dots of varying size, depending on the depth of the tone. This process adds more tonal variation, although getting good greys requires such fine dots that they can only be printed on the best chalk-faced art paper, whence the relative crudity of newspaper reproductions compared to those in expensive magazines and books.v The second critical factor in the distribution of images is the speed with which they can be delivered from the site of an event to the organs of mass publication. In 1900, Pearson's Magazine published an article (Cook 1900) on the experiments of Edward A. Hummel, of St Paul, Minnesota. Hummel drew images using non-conductive shellac, and scanned his drawings with a conducting point connected to a telegraph. A matching plotter at the far end could replicate the breaks in the circuit as a plotted surface. The results, judging by Pearson's images, were relatively crude, and Hummel does not appear to have developed anything that could deal with greyscales. The German scientist Arthur Korn in 1907 demonstrated a far more ambitious machine, based on the light-sensitive properties of selenium, which conducts electricity better in the light than in the dark. Korn shone a lamp through a pierced grill onto a film wrapped around a cylinder, in which a revolving selenium cell then conducted more or less electricity to the telegraph according to the amount of light it received.
The remote receiver reversed the process to expose another piece of film. The process was sufficiently successful for Korn to open offices in half a dozen European cities, although the time required to complete an image, the restricted distances over which it could travel, and the tendency to smear, streak and blur were obstacles to its widespread adoption. Improved by Edouard Belin, who was able to couple a photocell with telephone transmission, the basic technology was in place for what would eventually become facsimile transmission: fax machines. Press photography, however, required something quicker, and with greater resolution. Most histories point to the competition between Bell Labs and AT&T in the mid-1930s to develop a satisfactory industrial application, the race decided by Kent Cooper, senior manager of Associated Press, in favour of Bell's Wirephoto system, which transmitted a large photo of survivors of an Adirondack air crash to twenty-five cities at 3am on New Year's Day, 1935: a succinct ideogram – air travel, crash, wilderness, network communications, news – of modernity. What these printing and transmission technologies share is a migration from the line to the grid as their basic tool. Texture is no longer emulated by crosshatching, as it was in intaglio printing, or random, as in mezzotint and stone lithography, but regularised in the meshwork grid of half-tone screens, and standardised in time by the necessity to match scanning speeds at sender and receiver points in wirephoto and fax transmission. Abandoning Hummel's line-based approach for the grid system matched the requirements of printing with those of transmission, where a linear scan could easily be appropriated for the rectilinear mesh of industrial lithographic printing.
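The screened grid's translation of tone into dot size is straightforwardly arithmetical. On one common convention the ink dot's area is made proportional to the darkness of the sampled tone, so its radius grows with the square root of that darkness; a hedged sketch (the cell size, scaling and function name are our assumptions, not a description of any historical process):

```python
import math

def halftone_dot_radii(greys, cell=10):
    """Map 8-bit grey values (0 = black, 255 = white) to the radius of
    an ink dot within a halftone cell of the given size. Dot *area* is
    proportional to the darkness of the tone, so radius grows with its
    square root: the darker the tone, the larger the dot."""
    max_radius = cell / 2
    return [max_radius * math.sqrt(1 - g / 255) for g in greys]

# Black fills the cell, white leaves it empty, greys fall in between.
print(halftone_dot_radii([0, 128, 255]))
```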
Manipulating dot size and shape, the distances between dots, and later the relations between primary subtractive colour dots allowed a great increase in the sheer number of images in circulation at acceptable resolution. Indeed grain remains a major signifier of factual status, for example in images drawn from CCTV or telephoto paparazzi snaps of celebrities. Some kind of compromise, however, had to be reached, a compromise that will bring us back to Adams' distrust of automation in photography. His distrust was based on a lack of conscious craft on the user's behalf, or the blind acceptance of a point of view, aesthetic judgement or technical interpretation derived not from the imagination but from either the apparatus itself, or the subordination of the apparatus to the world as given – what we would now Latinise as data.

Photography, Language and Logic

What is a photograph? It is an image created and distributed automatically by programmed apparatuses in the course of a game necessarily based on chance, an image of a magic state of things whose symbols inform its receivers how to act in an improbable fashion (Flusser 2000: 76).

Vilém Flusser's definition is comprehensive, if densely packed. From the standpoint of the camera, the heart of the photographic apparatus, human users are mere functionaries. The real work is done by the camera: users only play with it, but their play extends the capacities of the apparatus. From the perspective of the photographic apparatus, society is only a feedback mechanism for improving its functions. Automation is intrinsic to the apparatus. Once designed, the camera operates according to the program written into its structure. This automation not only abstracts values from the world, but reconstructs the world as information (Flusser 2000: 39).
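'Information' here carries Shannon's technical sense, defined through ratios of probabilities: the information in an outcome is the logarithm of the reciprocal of its probability, so the improbable (the novel) conveys more than the expected. A minimal illustration of the measure (our gloss, not Flusser's own formalism):

```python
import math

def surprisal_bits(p):
    """Shannon's measure of information: an outcome of probability p
    carries log2(1/p) bits, so the less probable (the more novel) the
    outcome, the more information it conveys."""
    return math.log2(1 / p)

print(surprisal_bits(1.0))   # -> 0.0  (a certainty carries no information)
print(surprisal_bits(0.5))   # -> 1.0  (a coin flip carries one bit)
```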
Following Shannon and Weaver's (1949) mathematical definition of information as a ratio between probabilities, Flusser sees the camera seizing not the world but an abstract 'state of things': data. Information depends on the balance between repetition and novelty. The human user and the world the camera observes only add improbability, chance, to the mix, increasing the amount of data which the camera can convert into photographs. The 'magic' of the definition describes the way photographs, in their abstraction, produce images not of the world but of concepts (such as 'states of things'), concepts which then program society 'with absolute necessity but in each individual case by chance' (Flusser 2000: 70). Photographers are functionaries of an apparatus which, if analysis is extended back far enough, reaches into capital, corporations, politics and economics, a nested series of black boxes, each governed by an elite of functionaries who are nonetheless prisoners of their own apparatus. Designed to work without human intervention, cameras program both photographers and viewers in a determinist vision which comes close to Jean Baudrillard's (1975) apocalyptic vision of society as self-replicating code. For Flusser, codes embedded in any apparatus feed on human use to produce new combinations to assimilate into the apparatus itself. This more general application of the word apparatus includes not only the mechanical device but the ensemble formed by manufacture, clubs, publications, galleries, newspapers and magazines, people and their institutions. Flusser's 'apparatus' is an institution: an ordering of social interactions that produces its own type of language (discourse), its own mode of knowledge, its own idea of truth. Unhampered by moral judgements external to its own operation, its goal is maximal efficiency. The apparatus operates in and as a regime of power, in much the same way as the clinics, asylums and prisons investigated in Michel Foucault's early writings.vi
According to Flusser, before photography all thought was verbal. Photography, he argues, is a visualisation of language. Digital photography, by extension, extends the verbalisation of perception by mathematising it. We know from Saussure (1974) that language is based on difference, and that that difference is negation: X is X because it is not Y. Language's intrinsic capacity for negation extends to negating what is empirically or perceptually given. Thus language asserts human independence from what is given to it by way of environment. Numbers are an outgrowth of language. From counting, number has developed to be abstract, counterfactual, independent and negating – the same qualities as language itself. The calculus, mathematical logic and the mathematics of algorithms stem from the negation of the semantic content of sentences. Number and algorithm, as formalised in computer languages, are also institutions. Even though they do not obey exclusively the same rules as natural languages (for instance, those of generative transformational grammars), they share language's fundamental capacity for negation. From this an important point emerges: algorithms have the power to institutionalise perception. They bring perception within the ambit of (a form of) language. The empirically and perceptually given of the non-human environment, that excess of signifiers which is a danger to humans as much as a resource for them: that world is systematically negated, pixel by pixel, in the process of enumeration. Such might be the case too with drawing and analogue photography: that they neither name nor describe, but substitute for the reality they observe. The various schools of drawing and printmaking applied such 'grammars' (Ivins 1953) between the seventeenth and nineteenth centuries.
But what distinguishes digital imaging from both drawing and traditional photography – especially as defined by the practice of Ansel Adams as exemplary technician – is a semiotic, but not a semantic, change. It is the nature of the process of automation. However, 'the process of automation' is not a stable, definable entity confined to digital code (what literally distinguishes digital imaging from drawing and photography is not automation, but the absolute precision, predictability and finite limits of its numerical grid). The broader history of photographic manufacturing has been about exploiting automation in the quest for the stability and certainty that automation provides, and the profit that derives from it. The process begins several decades before Adams with Eastman Kodak's box camera in 1888 – with its philosophy of 'you press the button, we do the rest' – and perhaps even earlier, in the transition from wet-plate to dry-plate photography. Such incremental steps, via the Instamatic cassette cameras of the 1960s and the progressive introduction of electronics into cameras in the 1970s, arrive at their destination in 2003, when digital cameras began to overtake sales of analogue.vii At this juncture, the grammar of objects and the previsualised composition are superseded by the enumerated and averaged accumulation of photographic data. Adams himself cannot be isolated from this trajectory. For instance, he served as an enthusiastic consultant to Polaroid from 1949 until his death in 1984 – a seemingly surprising encounter between a professional interventionist and the emblem of instant consumer photography (Polaroid automated all aspects of picture-taking, and after 1972 effectively excluded all possibility of a photographer intervening in the developing process by producing a sealed print that developed in the light) (Buse 2008).
More particularly, it was his practice, when conditions allowed or demanded, to use a spot meter, an exposure meter calibrated to sample light from a one-degree arc (as opposed to the 20 or 30 degrees of arc of a normal exposure meter). The spot meter could be placed much closer to objects than the camera to get a specific sample of the light. The apparent value of light is as much a product of mental activity as of perception. Cameras, of course, do not have such equilibrating brains, and 'see' only what is there before them. The spot meter emulates the reaction of the correspondingly small area of the exposed film. Like an Impressionist seeking the colours of scintillations on water, the spot meter abstracts the value from the object, deriving from it a measurement of the light. It is the beginning of a loss of the object (and thus of the subject-object relation) and its replacement with a value (a Zone number in Adams, usually a hexadecimal colour code in digital imaging). Thus the semantic world of objects begins to be replaced by an arithmetical world of measurements. The averaging of these values in half-tone, and their automation from Box Brownie to mobile phone cameras, is meticulous. Low-resolution film cameras, from the Box Brownie to the Instamatic, favour norms established in manufacture (notoriously Caucasian skin tones [Winston 1996]). The specification of the stored record of the image – that is, the particular qualities of the latent image – determines the results of an automated printing process. When they become truly enumerated in digital imaging, the pixels become elements in 'a language without a code', in the sense that there is no longer a grammar to govern the interactions between pixels. Concepts of 'in front' and 'behind', 'figure' and 'ground' no longer signify. The image is no longer an image 'of' but a pure surface.
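The hexadecimal colour code that stands in for the lost object is simply three 8-bit numbers written in base sixteen; middle grey, for instance, enumerates as '#808080'. A one-function sketch (the function name is ours):

```python
def to_hex_code(r, g, b):
    """Write an 8-bit RGB triple as the hexadecimal colour code of
    digital imaging: two base-sixteen digits per channel."""
    return '#{:02x}{:02x}{:02x}'.format(r, g, b)

print(to_hex_code(128, 128, 128))   # middle grey -> #808080
print(to_hex_code(255, 255, 255))   # white -> #ffffff
```

Whatever stood before the lens, the pixel retains only such a number.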
More, that surface is now a numerical matrix that algorithms can redefine at a single point, across the whole surface, or at any intervening scale. The various realist aesthetics that had been under attack for a hundred and fifty years, since Symbolism and the advent of advertising, have joined hands with the mathematicisation of the image to abandon realism – fidelity to either the empirically given or to natural perception of it. Indeed, the mathematicisation of the image was already formalised in the nineteenth century by Ferdinand Hurter and Vero Charles Driffield (1890), who brought quantitative scientific practice to photography through the methods of sensitometry and densitometry. Their quest to describe the relationship between exposure and development in silver halide emulsions was not in order to abandon realism, but to aid in its indexical reproduction. Today, it is true that old realist aesthetics remain in place, that scientific measurement is also a realism, and that it is ideologically convenient to retain the belief that images record reality. The link between imaging and empirical environments has not been severed: it operates in a kind of subjunctive mood that lies between seeing and visualising, between representation and visualisation, between image and imaginary. There is a particular strength to the photographic which enables this maintenance of a subjunctive realism. When Adams speaks of visualisation, he refers to light values. What the Zone System respects is the difference between values in a field of view. When digital cameras add hue and saturation (see the following section on colour), they too are less interested in the specific hue-saturation-value measurement of each pixel, and far more in the relationships between them. Such differences, expressed as differences between hexadecimal values, are in dialectical terms relations of negation ('X is X because it is not Y'; a zero is a zero because it is not a one).
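Hurter and Driffield's sensitometry condensed the emulsion's behaviour into the characteristic curve, whose straight-line portion makes optical density proportional to the logarithm of exposure. A hedged sketch of that relation, with illustrative constants rather than measured values:

```python
import math

def density(exposure, gamma=0.7, fog=0.1):
    """Straight-line portion of the Hurter-Driffield characteristic
    curve: optical density rises with the logarithm of exposure,
    scaled by the development contrast (gamma), above a base fog
    level. Gamma and fog here are illustrative, not measured."""
    return fog + gamma * math.log10(exposure)

# One stop more exposure (a doubling) adds gamma * log10(2) to density,
# wherever on the straight-line portion the exposure falls.
```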
Digitisation of imaging is a move away from the object relation towards, in the first instance, the new objectivity of scientific measurement, which is itself an abstraction from language in the form of algorithm and calculus. Adams' spot meter is therefore a crucial step in the assimilation of artisan photography to the digital. But even as it takes this step, it opens up a contradiction inside the linguistico-mathematicisation of the image. Unlike drawing, or even the assemblage of form from line and texture in intaglio printing, photography is then not, as a Flusserian apparatus, interested in objects. Hence Adams' use of the upside-down image in the ground-glass viewfinder of his view cameras: his interest lay not in things but in values. The work of earlier artists like László Moholy-Nagy and Aleksandr Rodchenko, who championed the use of bold simplified forms, dramatic vantage points and close-ups, prompted Walter Benjamin to suggest that “a different nature opens itself to the camera than opens to the naked eye” (Benjamin 1973: 238). Moholy-Nagy laid out the crux of the 'New Instrument of Vision' in his seminal 1925 text Painting, Photography, Film, noting that “the modern lens is no longer tied to the narrow limits of our eye” (Moholy-Nagy 1969: 7). Viewing a photograph commonly gives the impression of witnessing an event, because photography constructs the moment of the exposure as event. Until it is considered, a situation is only an aggregation of matter and motion. Only when it is considered, as it is in the act of visualisation, does it become an event, something to which meaning attaches, and which, being meaningful, becomes the grounds for some subsequent flash of understanding, decision or action. Photographic consideration takes the given and turns it into potential, the hallmark of an event.
Henri Cartier-Bresson's 'decisive moments', Robert Capa's implication through camera shake of the intensity of witnessing, and Adams' wilderness compositions alike take merely given situations and extract from their raw material the conditions under which they might become other. The situation – which embraces environment, photographer and machinery – becomes event by its disturbance of habitual indifference. An Ansel Adams print becomes an event in that it presents the situation otherwise than it is given to an unconsidered gaze. It sees otherwise, re-visions seeing, invents new ways to see. In this sense it fulfils the formal expectations introduced in the 1920s by the 'new photography' movements in Germany and post-Revolutionary Russia, however different they might appear. Digital image capture does not require this process of visualisation, of consideration. Flusser's automatic apparatus instead deploys logic, where logic retains its philosophical role of abstracting from any particular argument ('Socrates is a man; men are mortal; therefore Socrates is mortal') the non-specific bones ('If A is B and B is C, then A is C'). When it is applied as a program, as Flusser argues occurs in photography, logic automates. Digital logic abstracts numbers from the infinitesimal flux of the world. It undertakes to reproduce any given situation according to programmed instructions, without consideration, and therefore without the possibility of bringing about an event. The purpose of digital image manipulation is control over the image, not the conversion of given situations into potential events. To some extent, as we have seen, all images undertake a level of abstraction from what is given to sight. To understand how and why digital logic's abstraction is more complete than previous forms of image-making, we need to understand the particularities of our new numerical tools.
CCD and the Politics of Number

The CCD chip of a digital camera comprises a p-doped (rich in positive charge carriers) thin crystalline lattice deposited on a transmitting layer. Light arrives from the lens onto the lattice, each cell of which acts as a capacitor accumulating an electric charge according to the amount of light, or more technically the luminance, arriving at that cell. The array is linked to a control circuit that instructs each capacitor to pass its charge on to its neighbour. The last capacitor in the array then passes its charge to an amplifier that converts the charge into a voltage. The process is repeated until all the charges have been converted to voltage, digitised, sampled and stored by the underlying CCD semiconductor (CMOS chips used in some still cameras are a different class of semiconductor but rely on the same type of light-sensitive array). The majority of modern CCDs use a buried-channel design, in which areas of the silicon substrate are implanted with phosphorus ions, giving them an n-doped (rich in negative charge carriers) designation. These areas act as channels through which the electric charge generated by the light-sensitive upper layer will travel. The actual capacitor layer lies on top of this buried-channel layer. On top of these is a layer comprising polysilicon gates, perpendicular to the channels. The channels are separated by oxides that stop charge flowing from one channel to the next, while the gates control the flow of charge from the capacitors towards their destination. On top of the layers, immediately above the focal plane, lies a Bayer mask, which for every four pixels filters one red, one blue and two green, on the basis that the human eye sees better in the green area (green closely corresponds to the luminosity function of human vision).
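The Bayer arrangement just described — one red, one blue and two green filters in every 2×2 block — can be written out directly. A small sketch; the particular layout chosen (greens on the main diagonal, a common GRBG convention) is an assumption, since actual sensors vary:

```python
BAYER = [["G", "R"],
         ["B", "G"]]  # one common GRBG layout; real sensors differ in phase

def bayer_filter(y, x):
    """Colour passed by the Bayer mask at pixel (y, x)."""
    return BAYER[y % 2][x % 2]

def count_colours(h, w):
    """Tally the filter colours over an h-by-w sensor region."""
    counts = {"R": 0, "G": 0, "B": 0}
    for y in range(h):
        for x in range(w):
            counts[bayer_filter(y, x)] += 1
    return counts
```

Counting over any even-sized region confirms the two-greens-per-four-pixels ratio that privileges luminance over hue.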
Since the filters exclude the other two colours, the light reaching the capacitors is diminished by the transmission characteristics of the filter, and the information is stronger for luminance (value) than for hue or saturation. One solution to this loss of colour resolution and light intensity is to use 3CCD technology, which allows actual, not interpolated, RGB values to be recorded. Another solution, using a rotating filter, is only useful for objects that do not move. In sum: light is organised by the filter or prism, and gathered on a grid during the exposure time. The information in the grid is then passed through the system of gates and channels in ordered array to its conversion, via voltage, to stored data. The charge-coupled device operates as a kind of clock. The exposure charges the lattice, but the charge is drained from it down ordered channels in lockstep units. The chip moves its data from spatial to temporal and back to spatial ordering. Without the clock function allied to the interlocking grids, the charge would mingle and pour out in no order at all, chaotically, as noise or more specifically as heat, since CCDs are close relatives of solar panels. The CCD imposes a very specific order, or a pair of orders, on the light that it gathers. This is characterised by whole-number steps of equal unit duration and area. The result is an array of discrete, ordered units. These units are achieved principally in the design of the crystal lattice, which is itself grown on the chip; the chip, acting as a seed crystal, ensures that the lattice orientation and structure are identical to its own (as opposed to polycrystalline or amorphous layers, which have no long-range order). Unlike the grains of silver halides in photographic film, the order of the molecules reacting to light is the product of design, specifically of an arithmetic design based on whole numbers.
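The lockstep draining described above — on each clock cycle the last capacitor feeds the amplifier while every remaining charge packet shifts one cell along — can be modelled in a few lines. A toy model of a single CCD line; the charge units and amplifier gain are arbitrary illustrative values:

```python
def read_out(charges, gain=1.0):
    """Serially shift charge packets out of a one-dimensional CCD line.

    Each clock cycle, the packet nearest the output is handed to the
    amplifier (modelled as multiplication by `gain`) and every remaining
    packet implicitly moves one cell toward the output. Returns the
    resulting voltages in readout order.
    """
    row = list(charges)
    voltages = []
    while row:
        voltages.append(row.pop() * gain)  # last cell feeds the amplifier
    return voltages
```

The spatial order of the exposure is thus converted into a temporal order of readout, then back into a spatial order when the voltages are stored — the "kind of clock" of the paragraph above.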
Thus the 'grain' of a digital image is already arranged in the form of a grid. Each cell of the grid reacts to the light falling on it, but does so by averaging the different wavelengths that reach it across its area. The range of greens visible to the human eye falls between wavelengths of 520 and 570 nanometres; the process of averaging across that 50-nanometre field involves every shade of green, a field which experience tells us is not susceptible to whole-number division, but is a continuum. The ordering is to some extent dependent on the density and light-sensitivity of the cells. Lower density and light-sensitivity result in noise, especially in low-light conditions.viii But contemporary CCD chips are more light-sensitive than normal photographic film, so that even with the loss of light due to Bayer masking, the signal-to-noise ratio is high. This creates a quandary for digital imaging similar to that addressed by Adams' Zone System: how to handle high-contrast subjects. Areas in deep shadow appear not as low density on the negative, but as noise, random deviations from the true value. To describe these pixel units as points is incorrect: consumer CCD pixels are typically 7 to 13 micrometres, though some new models offer 3-micrometre pixels, trading increased resolution for lower capacity to hold electrons (something like 100,000 electrons in a 10-micrometre pixel). While these scales are difficult to visualise, they indicate that the number of photons reaching a CCD pixel is going to be in the six-figure range as well. However, at these quantum scales, enumeration is impossible (on the uncertainty principle), so what the stored charge measures is the photon flux over the area of the pixel for the duration of the exposure. Averaging that flux into a single value is the task that the design of the chip ascribes to these pixels. The human eye discriminates to about one sixtieth of a degree.
A pixel needs to approximate to this dimension to give a convincingly detailed image, while it also requires a grayscale of at least 256 distinct tones to give a convincing tonal resolution. A high-quality 35mm film frame contains between 12 and 20 million grains of exposed halides. Though there is considerable dispute over whether CCD cameras can match or even exceed this resolution, there are qualities that distinguish them. As we have seen in the analysis of Ansel Adams' use of a spot meter, traditional photography is also a matter of averaging, but it averages onto a random array of light-sensitive materials. In the CCD grid, in which each cell is marked off from its neighbours by the mask, by the lattice, and by the structure of gates and channels, neighbouring cells do not actually touch. This reduces the 'circle of confusion' mentioned by Adams, the bloom or smear referred to by engineers as the point spread function. But it introduces regular interruptions between areas of the image. The light-sensitive materials thus respond to light that has already been organised by the time it reaches them, and further organise the light by taking an average of its value over area and duration. The averaging function, which will be immensely important to later manipulations in image-processing software, organises light into information in the form of charge and later of voltage. These values are stored in digital form, that is, as zeros and ones: as whole numbers defined by the unit difference between them. This transition from averaged photon flux to unit difference indicates a peculiar property of the digital image. The eighteenth-century epigrammatist Antoine de Rivarol once remarked: A man in his house does not live on the staircase, but makes use of it to go up and down and gain access to every room.
The human mind, likewise, does not reside in numbers, but uses them to attain all sciences and arts (Ifrah 2000: 101). It is on this principle that number, in the specific cases of ordering and of digitising (ascribing numerical values to voltages), operates in the charge-coupled device of digital cameras. Rivarol's image of the staircase gives a very clear sense of the kind of numbers involved: those which count as units, and which follow one another in regular succession. We know that there are other kinds of number (pi, the square root of 2, and so on). Rivarol's point is that such numbers are of interest only as instruments in pursuit of something else, not in themselves. The process of averaging the photon flux at a given pixel is a limit: it abstracts from the physical world a value that sums everything that has happened in that pixel during the exposure time. The unification of all previous values as a single number in the pixel charge value is such a transition. In the mathematical philosophy of Alain Badiou (2008), it replaces the foundations of being – the void, the infinite, the multiple – with a plenum, a unit value. In doing so it creates a limit for the person inspecting the result: the enumeration – the hexadecimal colour number and the pixel address – is all that there is to inspect. This numerical sign is a representation, but a representation which blocks access to what it represents, since the quantities that have been averaged are no longer visible in their sum. The sampling process normalises the photon flux, replacing gradation with the steps of Rivarol's staircase. The averaging and unit organisation of CCD imaging, its probabilistic sampling, its replacement of photon flux with units, and its placing of a limit sign between flux and representation, thus block access to the excessive multiplicity of light, and so make impossible the event that light, as excess, would make possible.
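The two reductions described here — averaging the photon flux at a pixel into a single charge value, then digitising that value onto the staircase of whole numbers — can be sketched together. The 100,000-electron full-well figure comes from the text above; the quantum efficiency and the 8-bit depth are assumed, typical-order values:

```python
FULL_WELL = 100_000  # electron capacity of a ~10 micrometre pixel (from the text)

def pixel_charge(photon_events, quantum_efficiency=0.5):
    """Collapse photon flux over a pixel's area and exposure into one number.

    `photon_events` lists photon counts striking sub-regions of the pixel;
    the pixel cannot record where within its area each photon landed, so
    only the clipped sum survives. The quantum efficiency of 0.5 is an
    assumed figure for illustration.
    """
    return min(FULL_WELL, round(sum(photon_events) * quantum_efficiency))

def digitise(charge, bits=8):
    """Quantise a charge onto Rivarol's staircase of whole-number steps.

    Whatever gradation falls between two steps is discarded -- the
    'limit sign' placed between flux and representation.
    """
    levels = 2 ** bits
    step = FULL_WELL / levels
    return min(levels - 1, int(charge / step))
```

Two events that differ only inside one quantisation step emerge with the same number: the averaged multiplicity is, as the text puts it, no longer visible in its sum.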
The automation that Adams so feared, yet which in the instance of the spot meter he pioneered, comes to maturity in 'the unthought basis of the ideology of the countable' (Badiou 2008: 109). The line-scan technologies of early photographic transmission are now embedded in single-line CCD chips used as reading heads in flatbed scanners. With this advance, also now used in faxes, the CCD chip's typical seriality and averaging functions have become ubiquitous. Older line-scans, starting with Hummel's pendulum apparatus, used an analogue match of time between senders and receivers. That process has now been integrated into the CCD ordering of time as a spatial grid ordered in units. Automated cameras began to simplify the averaging process first undertaken through techniques like Adams' use of spot meters, the first probabilistic management of light values. From half-tone printing's beginnings in woven fabric, and its later development of ruled glass screens, to the ordered raster grid, the transition from grammatical to statistical ordering is complete. One achievement of this process has been the assimilation of time into spatial organisation. Meanwhile, the gradations available to Adams, grounded on the one hand in the subtleties of perception and hand-eye coordination and on the other in the excess of light as both object and medium, are gradually automated. In this process, texture is standardised as a raster grid, each of whose pixels is numerically specified according to an instrumental mathematics which, in averaging and enumerating, closes off the relation between representation and light, and with it the possibility of a relation between image and event. We have been unable here to address other continuities between the analogue and digital in photography, among them issues of latency (Cubitt 2011), glass technologies (especially lenses) and theories of indexicality.
To the extent that every camera relies on photosensitive reactions, whether chemical or opto-electronic, every camera records not scenes but photons, and every photographic 'image' is more properly described as data visualisation. Manipulation and control of light (for example in the shared design principles of camera bodies, or in 'noise reduction' undertaken electronically or in the darkroom) order the electromagnetic spectrum for a human sensorium. The defence of indexicality is a humanism, in the sense that it equates the reality supposedly collected by traditional photography with what a human at the same spot would have seen. Yet as we have seen, even in the heart of the realist tradition in the work of Adams, processes of abstraction were already at work. These processes are indeed embodied in changing photographic technologies, but the distinction between analogue and digital has to be traced back, before even the invention of the computer, to the second half of the nineteenth century, and in our instance to the convergence of industrial half-tone lithography with nascent wire transmission technologies. In this sense we must understand Adams' Yosemite photographs not only as works of abstract art, but as participation in a machine or dispositif which, in Flusser's terms, exists to colonise for the apparatus the very space which to Adams most symbolised that which escaped it. There remains the dialectical principle that no system is so complete that it does not contain the germ of its own negation. As Badiou argues, 'the difficulty lies in succession, and . . . there, also, lies resistance' (Badiou 2008: 81). On this principle, we can argue the counter-case to the history of arithmetic and logical abstraction presented here. Analogue photography promises an uninterrupted scale from minus-infinity to plus-infinity whose limits are the limits of the instrumentation.
Digital's restriction is that the process itself, not its enactment through instrumentation, places limits, or absolutes, on every range it works with. Despite the fact that analogue photography deploys discrete molecules and is therefore not continuous, our everyday experience of analogue photography emphasises its illusion of continuous tone and resolution. Digital processes eloquently point out this contradiction. It could even be argued that digitisation is not only a more efficient and verifiable indexing of the object of knowledge or desire, but is the logical outcome of the impetus that has driven so much photographic research, from Talbot's August 1835 imprint of the two hundred latticed panes of glass to the latest digital camera. Analogue processes pretended to continuity within the world they imaged, which in turn seemed to promise continuity between the image and the world. Digital processes may yet liberate us from that illusion, and in the process interpolate a technological vision – so often elided in analogue photography – between photographer or viewer and the world they view, and thereby liberate our tools from both our illusions and the instrumental goals of the photographic apparatus.

REFERENCES

Adams, Ansel (1983), The Print, Little Brown, Boston.
Adams, Ansel (1995a), The Camera, Little Brown, Boston.
Adams, Ansel (1995b), The Negative, Little Brown, Boston.
Badiou, Alain (2008), Number and Numbers, trans Robin Mackay, Polity, Cambridge.
Baudrillard, Jean (1975), The Mirror of Production, trans Mark Poster, Telos Press, St Louis.
Benjamin, Walter (1973), 'The Work of Art in the Age of Mechanical Reproduction' [1936], trans Harry Zohn, in Illuminations, Fontana, London, pp. 219–53.
Buse, Peter (2008), 'Surely Fades Away: Polaroid Photography and the Contradictions of Cultural Value', Photographies, 1(2), September, pp. 221–238.
Cook, Charles Emerson (1900), 'Pictures by Telegraph', Pearson's Magazine, April, http://homepage.ntlworld.com/forgottenfutures/fax/fax.htm
Cubitt, Sean (2011), 'The Latent Image', The International Journal of the Image, 1(2), April, pp. 27–38, http://seancubitt.cgpublisher.com/product/pub.202/prod.34/index_html
Ellul, Jacques (1964), The Technological Society, trans John Wilkinson, Vintage, New York.
Flusser, Vilém (2000), Towards a Philosophy of Photography, trans Anthony Matthews, Reaktion Books, London.
Flusser, Vilém (2011a), Into the Universe of Technical Images, trans Nancy Ann Roth, intro Mark Poster, University of Minnesota Press, Minneapolis.
Flusser, Vilém (2011b), 'The Gesture of Photographing', trans Nancy Roth, Journal of Visual Culture, 10(3), pp. 279–93.
Hurter, Ferdinand and Driffield, Vero Charles (1890), 'Photochemical Investigations and a New Method of Determination of the Sensitiveness of Photographic Plates', The Journal of the Society of Chemical Industry, 31 May.
Ifrah, Georges (2000), The Computer and the Information Revolution, vol. 3 of The Universal History of Numbers, trans E. F. Harding, assisted by Sophie Wood, Ian Monk, Elizabeth Clegg and Guido Waldman, The Harvill Press, London.
Ivins, William M. Jr (1953), Prints and Visual Communication, MIT Press, Cambridge MA.
Moholy-Nagy, László (1969), Painting, Photography, Film, trans Janet Seligman, Lund Humphries, London.
Norman, Dorothy (1973), Alfred Stieglitz: An American Seer, Aperture, New York.
Saussure, Ferdinand de (1974), Course in General Linguistics, rev. ed., trans Wade Baskin, Fontana, London.
Schumpeter, Joseph A. (1962), Capitalism, Socialism and Democracy, 3rd edn, Harper, New York.
Shannon, Claude E. and Warren Weaver (1949), The Mathematical Theory of Communication, University of Illinois Press, Urbana.
Stallabrass, Julian (1996), 'Sixty Billion Sunsets', in Gargantua: Manufactured Mass Culture, Verso, London, pp. 13–39.
Winston, Brian (1996), Technologies of Seeing: Photography, Cinematography and Television, BFI, London.

NOTES

i Although it should be pointed out that when the circle of confusion is smaller than the resolution of the emulsion, the object appears in 'perfect focus'. There is then effectively no circle of confusion, and the lens appears to have faithfully projected one point in front of it to one point behind it. The circle of confusion in this sense is also an illusion. When the circle is small enough to be confused for a sharp point, typically around 1/100" (0.25mm) at 40cm distance, we believe it is a sharp, rather than a blurred (confused), point. In other words, the circle of confusion defines the limits of our psychophysical resolution, at a particular magnification and viewing distance.

ii A monochromatic viewing filter is a dark amber-orange filter (e.g. Kodak Wratten Filter No. 90) that seeks to reduce, but cannot entirely eliminate, colour variations by scaling the luminance of the scene to a closer rendering to that of 'normally exposed and normally processed' panchromatic B&W film.

iii In the original Basic Photo series there were five volumes. The two additional volumes, subsequently but not entirely subsumed into the three volumes of the post-1980 series, were 'Artificial Light' and 'Natural Light'. In other words, subject matter and the qualities of light itself (hard, soft, key, fill, lighting ratios) also contribute significantly to photographic texture.
iv The Zone System as practised and perfected by Adams is almost irrelevant to digital imaging, since the CCD's tonal reproduction curve has little relationship to a negative film emulsion (the tonal response is more akin to commercial transparency film emulsions such as Ektachrome). However, the current 'state of the art' digital Hasselblad (H3D) camera includes an option for exposure control in 'zones', even though their value progression and classic control points, such as Zones I and VIII, are almost meaningless in digital technology.

v The finer the half-tone screen, the better the highlight reproduction. That is, more, smaller dots per square inch produce the same overall tone as fewer, larger dots per square inch, but more smaller dots produce more detail and better texture in reproduction. The same observation applies to any half-tone process, where solid colours are distributed as dots of variable size, placement and dye/pigment weight: the smaller the dot size, the more dots can be placed per unit area without changing the area's value.

vi In 'The Gesture of Photographing' (Flusser 2011b), first published the year of his death, 1991, Flusser is considerably more optimistic, even more than in the preliminary revisions of the earlier work in Into the Universe of Technical Images (Flusser 2011a). Here, between periods of reflection and moments of action, photography is part of a phenomenological “project of situating oneself in the world” (280).
Flusser celebrates the “reflection” on the part of the photographer, the editing process which “rejects all the other possible pictures, except this one, to the realm of lost virtualities” (291). However, one can legitimately ask how the editing of pictures in photographic software such as Lightroom complicates Flusser's equation, given the endless virtual versions of an image enabled by the ostensibly lossless editing of RAW files, suggesting that a photographer may now approach the world as fluid raw material – indeed data – to be manipulated later.

vii Photo Marketing Association International statistics show 2003 as the year in which total US digital camera sales overtook US analogue camera sales, the United States being the first country in which this happened.

viii But film has a threshold below which no signal (exposure) is recorded. This was evident even in the darkroom, where we would clearly see light hitting a print emulsion, but on subsequent processing (making visible) the print was paper white without any trace of the exposure. Digital capture has no such threshold. It records all exposure down to a single photon. Whether this is usable data depends on the conditions under which it was captured, for example the dark current noise at the time of exposure. The limitations of a digital camera's dynamic range are full saturation (at the highlight end of the scale) and unacceptable signal-to-noise ratio (at the shadow end of the scale).
Advanced noise reduction includes characterising the noise so that it can be subtracted from the signal-plus-noise, thereby liberating the signal as discrete and useful information.

Spring 2012; Vol. 24, No. 2

Original Article

Evaluation of Tooth Color Distribution in 20 to 30-Year-Old Patients of Shahid Beheshti University Related Centers in 1389

Z. Jaberi Ansari 1, K. Saati 2
1 Associate Professor, Department of Operative Dentistry, School of Dentistry, Shahid Beheshti University of Medical Sciences, Tehran, Iran
2 Assistant Professor, Department of Operative Dentistry, School of Dentistry, Islamic Azad University, Dental Branch, Tehran, Iran

Corresponding author: Dr. K. Saati, Assistant Professor, Department of Operative Dentistry, School of Dentistry, Azad University, Tehran, Iran; keivan.saati@gmail.com

Received: 30 Oct 2010; Accepted: 22 Oct 2011

Abstract

Background and Aim: Assessing tooth color is very important in esthetic dentistry. The aim of this study was to determine the prevalence of tooth colors among 20 to 30-year-old patients and also to present a simple method for color assessment.

Materials and Methods: This cross-sectional descriptive analytic study was performed on one thousand and fifty nonsmoking volunteers (501 male, 549 female). Under constant light, distance and angle conditions, the left maxillary incisors were photographed with a digital camera (Canon G9) using a retractor. The pictures were analysed with Adobe Photoshop CS5 software, and the color of the middle 1/9 of the tooth was measured in the L*a*b* system. Under exactly the same conditions, the Vitapan Classical shade guide was photographed against a neutral background. These pictures were likewise analysed with Adobe Photoshop CS5, and the average L*a*b* was measured for the middle 1/9 of each sample. After measurement of ∆E* between each tooth and each tab of the Vitapan Classical shade guide, the tab with the minimum ∆E* was chosen as the color of that tooth.
Finally, the effect of colored liquid drinks was assessed.

Results: The most common findings were A3.5 (16.85%), A3 (14.85%) and B1, B2 (9.8%). The results of this study showed that the average L* in nonsmokers who drank colored liquids more than twice a day was lower than in users of noncolored liquids, while the average a* and b* in this group was higher.

Conclusion: The most common findings were A3.5 (16.85%), A3 (14.85%) and B1, B2 (9.8%). Tooth color assessment with a digital camera and computerized analysis is simple and cheap but very sensitive.

Key Words: Spectrophotometry, Colorimetry, Dental photography

Journal of Islamic Dental Association of IRAN (JIDAI) / Spring 2012 / 24 / (2)

Introduction

The role of color in esthetic dentistry is undoubtedly one of the most complicated and least understood matters of the science of restorative dentistry. In this regard, many factors are effective and play a role, and all take part in the final appearance of the restoration. Therefore, basic knowledge of color is essential for esthetic restoration [1]. An appropriate understanding of the normal color of the teeth is necessary for a precise and congruent selection of the relevant colors of the restorative material.
There are approximately 120 million cylindrical cells in each eye, responsible for the black and white vision (achromatic vision), vision in low light (scotopic vision) and interpretation of bright- ness. In each eye, there are about 6 million con- ical cells responsible for color vision which are active in only high light. In order to create a clear image, three terms are necessary [3]; 1- It should have a definite minimum intensity 2- A particular period of time 3- Contact with a minimum required area of the retina The white light is composed of three basic spec- trums; namely, red (600-700 nm), green (500- 600 nm) and blue (400-500 nm). These three colors are nominated as the main colors or the primary colors of the additive system; red (R), green (G) and blue (B). The wave length of these three main colors are congruent with the sensitiveness limit of the three conical cell types of the retina. Combina- tion of two of these main colors forms the sec- ondary colors of the additive system; yellow (Y), magenta (M) and cyan (C). Combination of a primary color with its opposite secondary color leads to the formation of white (2). Therefore, color is created by the joint impression of the eye and the brain, in a way that stimulating the conical cells of the retina leads to the perception of 7 million colors [3]. We have to bear in mind that the color of the light and the color of the object are not always the same. For instance if red light shines on a white object, that object appears red and if green light shines on a red object, that object seems black. Therefore, in order to determine the color of an object or the teeth precisely, they should be observed in white light [4]. There are color spaces to evaluate the amount of color, but for long periods of time munsell color system and CIE Lab have been used for this pur- pose. 
Tooth color may be determined either by subjective visual matching against a shade guide or by objective measurement with a colorimeter, a spectrophotometer or analysis of digital images. The color standard used for selecting tooth color is called the shade guide, and many shade-sample sets exist for different purposes. The best-known color guide worldwide is the Vita Classical, which with regard to hue consists of the families A (reddish), B (yellowish), C (grey-yellowish) and D (orange-grey, brownish). Within each family, as the number increases from 1 to 4, chroma increases and value decreases; for instance, A2 is lower in value and higher in chroma than A1 [2]. Nowadays dentists rely on visual shade-matching and individual clinical experience, and on that basis they order composite restorative materials for private clinics and treatment centers, which is an obstacle to achieving the desired results. Moreover, this unscientific judgment leads to the purchase of unsuitable material and thus to financial loss. Healthy, well-proportioned teeth are the basis of a beautiful face, and dental esthetics is one of the objectives of dentistry [5]. When teeth have to be restored with restorative material or replaced by artificial teeth, special attention should be paid to their color. Because the distribution of tooth color in Iranian society is not known, and foreign textbook references do not reflect it, dentists in private offices and in governmental and private treatment centers have always faced problems in purchasing and ordering composite materials. The objective of this study was to evaluate the tooth
color quantitatively in 20 to 30-year-old patients referred to the dental clinics of Shahid Beheshti University of Medical Sciences, one of the largest referral centers for esthetic problems such as tooth color, and also to present a simple technique for the evaluation of tooth color.

Materials and Methods
This was a cross-sectional descriptive-analytic study. The data were collected by questionnaire. The study was performed on 1050 patients referred to dental centers of Shahid Beheshti University of Medical Sciences in 2009-2010. Based on the results of previous studies and using the sample-proportion option, Minitab software estimated the minimum sample size in each center as 350, with α=0.05 and an accuracy of 0.01. Five hundred and one (47.72%) of the patients were male. The age range was 20-30 years. None of the patients were smokers. The patients' central incisors were caries-free and had no composite fillings, and there was no attrition or history of sensitivity in the area of the two central incisors. A device was designed to stabilize the distance between the camera and the teeth, fixing the distance between the camera and the patient's mouth at 15 cm. The base was perpendicular to the surface the camera was placed on and faced the oral cavity, forming no angle with it. The two bases were parallel and attached to a plate so the distance would remain constant. A chin rest and head rest were used to fix the patient's head position. A Demetron Shade Light system (Kerr Hawe SA Co., Switzerland) was used to reproduce neutral light with a color temperature of 6500 K [6]. Since a higher camera resolution (number of pixels) gives greater confidence in and reliability of the determined color [7], a Canon G9 with a resolution of 12.0 megapixels was used. The camera settings are shown in Table 1.
Finally, CA (Color Assessment) software was designed with the assistance of computer engineers. First, the L*, a*, b* values of each of the 16 Vita Classical shades were determined under the test conditions and entered into the software. For calibration, four different shade guides were used and their mean was adopted as the study standard. After the samples were selected from among patients of three treatment centers (Behfar, Ghazi Tabatabaee, and the diagnostic department of the Shahid Beheshti Dental Faculty) and the project was explained to them, the prepared forms were filled in by the patients. Each patient then sat on a chair and placed the forehead and chin on the designed apparatus, and the lips were held away from the teeth with a lip retractor. With the patient's head in the apparatus, the central incisors were parallel to the camera lens, and with the camera mounted on the same apparatus the lens was completely perpendicular to their surface. To standardize the color of the photographs, a small round disc punched from grey cardboard with 18% light reflectance was placed on the right central incisor at the time of photography [8]. The Demetron Shade Light was placed between the camera and the patient's mouth, as close to the mouth as possible. The room lights were then turned off so that this was the only light source, and in this situation a photograph was taken of the central incisors (Figure 1).

Table 1: Camera settings
Setting criterion – Applied setting
ISO – 100
Image size – 1944×2592
Image quality – Super fine
Digital sensor – CMOS
Function – Manual
F-stop – 4.5
Color space – sRGB
Lens type – Macro
Shutter speed – 1/125
White balance – Internal light
Figure 1: One of the photo samples

Using the method described above, a photograph was also taken of each of the Vita Classical shade tabs under exactly the same lighting, distance and angle as for the patients. All the photographs of the patients and of the Vita shade tabs were evaluated in Photoshop CS5 in terms of L*a*b*, and the L*, a*, b* values obtained were entered in the corresponding questionnaire. After opening the photograph in Photoshop CS5, the image of the tooth was selected with the Slice tool and the middle ninth, with a fixed size of 64×64 pixels, was cut out with the Crop tool; this selected piece was called the working area. For L*a*b* determination, Lab color was chosen under Image > Mode, and the Histogram option was then selected from the Image menu for all the points of this cropped image. In the Channel section, L*, a* and b* were chosen in that order; for each of them the mean shown at the bottom of the panel was recorded in the questionnaire of each patient or each shade tab. To convert the L*, a*, b* figures obtained from Adobe Photoshop CS5 to the CIE L*a*b* system, the following equations were used [2, 9, 10]:

L*CIE = (100/255) × L*Photoshop = 0.392 × L*Photoshop
a*CIE = a*Photoshop − 128
b*CIE = b*Photoshop − 128

Using ∆E*, which expresses the color difference between two samples, the difference between the color of each patient's left central incisor and every Vita shade tab was calculated and recorded in the prepared table. To accomplish this, the L*, a* and b* values determined for the Vita shade tabs were entered into the designed software as standards. The role of the software was to compare the three components obtained from the photographs of the left central incisors with each standard using the ∆E* equation, and to report the standard with the smallest difference as the color of the tooth. The ∆E* equation is as follows [10]:

∆E* = [(∆L*)² + (∆a*)² + (∆b*)²]½
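The conversion and nearest-shade matching described above can be sketched in a few lines of Python. The shade standards below are illustrative placeholders, not the calibration values measured in the study.

```python
import math

def photoshop_to_cie_lab(L_ps, a_ps, b_ps):
    # Photoshop reports 8-bit Lab values: L* on 0-255, a*/b* offset by 128.
    # Rescale L* to the CIE 0-100 range (factor 0.392 as in the text) and
    # remove the 128 offset from a* and b*.
    return (0.392 * L_ps, a_ps - 128.0, b_ps - 128.0)

def delta_e(lab1, lab2):
    # CIE76 colour difference between two L*a*b* triplets.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(lab1, lab2)))

def nearest_shade(tooth_lab, standards):
    # Report the Vita shade whose standard has the smallest Delta E* to the tooth.
    return min(standards, key=lambda name: delta_e(tooth_lab, standards[name]))

# Hypothetical shade standards, for illustration only:
standards = {"A3": (66.0, 1.5, 18.0), "A3.5": (63.0, 2.5, 20.0), "B1": (74.0, -1.0, 10.0)}
tooth = photoshop_to_cie_lab(170, 129, 146)   # -> (66.64, 1.0, 18.0)
print(nearest_shade(tooth, standards))        # prints: A3
```

With real calibration data the `standards` dictionary would hold the measured L*a*b* mean of each of the 16 Vita Classical tabs, which is exactly the role of the CA software's standard table.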
The effect of colored drinks was evaluated for each patient in the questionnaire. The data were analyzed statistically using SPSS 16 for Windows (SPSS Inc., Chicago, IL). Two-way ANOVA was used to determine the effect of gender and drinking on L*, a* and b*; a P value of less than 0.05 was considered significant.

Results
The ranges measured for L*, a* and b* were 53.94 to 82.76, -4.01 to 2.81 and 0.5 to 23.19, respectively. The means for L*, a* and b* were 68.82 ± 6.35, -1.06 ± 1.36 and 12.17 ± 5.41, respectively. The most common shades observed were A3.5 (16.85%), A3 (14.85%), B1 (9.8%) and B2 (9.8%). Graph 1 shows the frequency of the observed shades in all the patients, without regard to any variables.

[Graph 1: Frequency percent of the observed colors (Vita shades A1-D4) in all the patients without considering any variables]

Table 2 shows the mean values of L*, a* and b* by gender. Gender has no significant effect on L*, but men have a significantly higher mean b* than women, and women a significantly higher a* than men. Based on Table 3, in those who drink colored drinks the mean L* is 3.53 lower and the mean b* is 3.51 higher, while the mean a* is approximately the same and the difference is not significant.
Table 2: Mean values of L*, a* and b* by gender
Woman (n=549): L* 68.89 ± 6.356; a* -0.99 ± 1.392; b* 11.89 ± 5.312
Man (n=501): L* 68.73 ± 6.365; a* -1.14 ± 1.333; b* 12.48 ± 5.506

Table 3: Mean values of L*, a* and b* by consumption of colored drinks
Non-users (n=541): L* 70.53 ± 6.36; a* -1.12 ± 1.364; b* 10.47 ± 4.929
Users (n=509): L* 67 ± 5.834; a* -1 ± 1.366; b* 13.98 ± 5.314

Discussion
This study showed that the most prevalent shades in this study group were A3.5 (16.85%) and A3 (14.85%). Albers reported A2 as the most prevalent shade in America [11]. In some views, tooth color is a characteristic related to the individual's genetics, culture and lifestyle, and in general the nationality of the society plays a role [5, 12]. In this evaluation, the middle third of the tooth was selected for color determination. This part was chosen because in the incisal third light transmission is high, so the selected color depends on the background, while in the cervical third the pink color of the gums affects the selected color [13]. The round grey cardboard disc was used to regulate the light and color of the photograph in Photoshop. This grey cardboard has 18% reflectance, which lies between white and pure black, and acts as a neutral reference: its red, blue and green components are equal. In other words, a grey criterion with a definite value is included in the photograph, which the software can take into account [8, 14-16]. In this study a digital camera was used. Among the other tools for determining tooth color is the colorimeter. This tool cannot be used on the curved surface of the teeth [17]; therefore, its inter-instrument validity is lower than its intra-instrument validity [18].
The color obtained with this tool is also inconsistent with the color obtained from the shade guide [18-20]. Because of its high technical sensitivity [21] and its high sensitivity to light and temperature [22], this tool was not used in this evaluation. In 2007, in a study conducted in Holland, Dozic et al. concluded that using the Easyshade instrument (a spectrophotometer) and the IKam digital camera in the mouth is the most reliable method of determining tooth color [23]. Many researchers have used the spectrophotometer for color determination [24-27], but spectrophotometers are comparatively complicated, require a specific and expensive instrument with high technical sensitivity, and determine only the color of the sample surface [28]. We did not use this instrument in this study; instead we used a reliable, low-priced replacement, the digital camera. In recent years, use of the digital camera for color evaluation has proved a repeatable method both in the clinic and in the laboratory [7]. According to studies by Wee in America (2006), Smith in England (2008) and Tung in America (2010), use of the digital camera in color-evaluation tests and shade determination in dental clinics is reliable [14-16]. In studies conducted by Cal in 2006 and Lath in 2006-2007, the use of digital images in color tests was similar to spectrophotometry in validity and reliability [28-30]. Therefore, using the digital camera to evaluate tooth color is a reliable and inexpensive method which gives more information about the tooth as a precise clinical record. According to Lath's study, digital photography is even more reliable than spectrophotometry [8].
Another method is the use of shade tabs and visual matching, which is the most common method in clinics; but the shade tabs of different manufacturers differ [13, 31, 32], and such tabs cover a restricted color spectrum [33]. Their quality differs from that of composites, and their thickness is not always the same as that of the restoration [34]. Different observers reach different judgments, to the point that, perhaps as a result of eye fatigue, an individual may not confirm the very shade he himself selected earlier [35]. Compared with the other tools, this method is not reliable or valid for precise color determination [23, 36]. As mentioned before, a digital camera was used in this evaluation; but for digital photography to be an appropriate replacement for other methods, the distance between the object and the camera, the camera settings and the lighting conditions must be appropriate [14-16, 21]. The higher the camera's resolution, the higher the reliability and validity of the determined color [7]; the number of pixels expresses this capacity. In this study the highest-resolution camera available at the time (12 megapixels) was used, and the camera settings followed Bengel's study of semi-professional digital cameras [8]. In a review article, Bengel notes that choosing tooth color by daylight in the dental office does not have the necessary accuracy [8]. The room light also affects the color. Office fluorescent lamps are designed to mimic daylight, but they do not have a full spectrum and are not ideally neutral. The dental unit lamp is another light source that affects tooth color: halogen lamps have a color temperature of 3000-3400 K, giving a yellowish light. In order to reproduce neutral light with a color temperature of 6500 K, we used the Demetron Shade Light produced by Kerr.
Jasinevicius likewise concluded that under this light there is better shade matching than under the usual light of laboratories and offices, for all Vita shades [37]. Since a fixed distance between the mouth and the camera is important for the accuracy of the study [14-16, 21], the optical axis of the camera has to be perpendicular to the frontal plane of the patient and parallel to the occlusal plane [38]. Stability of the patient's head is another important matter, achieved with the chin rest and head rest [38]. A tool was designed and produced in which all of the above were taken into account, and the Demetron light was placed as close to the mouth as possible, at a distance of 5-7 cm [6]. In other studies in which color determination was performed with a digital camera, the camera was approximately 10-25 cm from the patient [14-21, 30]. The focal range of the Canon G9 in macro mode is 1-20 cm, so the distance between the patient and the camera was set at 15 cm, which was incorporated in the construction of the photographic rig. In photographing the Vita Classical shade tabs, we tried to provide conditions similar to those of the oral cavity. Placing the neutral grey cardboard in the background prevented other factors from interfering with the color determination of the tabs. In Erbshaeuser's tests in Germany, in which photographs were taken of one shade tab against different black and white backgrounds, a high color difference (∆E = 17) was detected [39]. In this evaluation, the mean L*, a* and b* in the studied population were 68.82 ± 6.35, -1.06 ± 1.36 and 12.17 ± 5.41, respectively. In a similar study of 405 Chinese men and women in the same age range, these figures were 70.67 ± 1.91, 4.29 ± 2.05 and 17.51 ± 4.13 for L*, a* and b*, respectively [40].
In another study, in the city of Buffalo, America, on 933 central incisors of 501 subjects in the 20 to 30 years age range, the L*, a* and b* values ranged from 58.71 to 88.7, -3.6 to 7 and 3.7 to 37.3, respectively [43]. In the present study, L*, a* and b* ranged from 53.94 to 82.76, -4.01 to 2.81 and 0.5 to 23.19, respectively. The questions in the questionnaire concerned the effect of colored drinks such as tea, coffee and carbonated beverages, and the answers were evaluated. Consumption of colored beverages increased b*: in people who drink colored beverages more than twice a day, the b* component, which expresses the blue-yellow axis, shows a propensity towards yellow. Moreover, consumption of colored beverages decreased the L* component, indicating that people who drink colored beverages more than twice daily have lower tooth brightness than other people. These findings agree with the results of Bagheri and Guler [41, 42].

Conclusion
The most prevalent shades were A3.5 (16.85%), A3 (14.85%) and B1 and B2 (9.8% each). Using digital photography and evaluating the photographs with computer software to determine tooth color is a relatively simple and inexpensive method. The mean L* in men and women who drink colored beverages is significantly lower than in the control group; the difference between the mean a* of the two groups was not significant. The mean b* in men and women in the group using colored beverages was significantly higher than in the other group.

Acknowledgment
This study is based in part on a postgraduate thesis [No. 573] of the Dental Faculty of Shahid Beheshti University of Medical Sciences.

References
1- Irfan A. Protocols for predictable aesthetic dental restorations. 1st ed. UK: Blackwell Munksgaard; 2006, 77-96.
2- Paravina R, Powers J. Esthetic color training in dentistry. 1st ed. USA: Mosby; 2004, 1-6.
3- Silbernagel S. Physiology of the eyes. 2nd ed. Germany (Stuttgart): Thieme; 1988, 308-313.
4- Gnan C. Science of colors. 1st ed.
USA: Quintessence Publishing Co. Inc; 1994, 20(3): 383-397.
5- Roberson TM, Heymann HO, Swift EJ. Sturdevant's art & science of operative dentistry. 5th ed. USA: Mosby; 2006, Chapter 4.
6- Demetron Shade Light, an ideal light for shade taking. Catalogue handbook. 1st ed. Switzerland: Kerr Hawe SA Co; 2010.
7- Elter A, Caniklioglu B, Deger S, Ozen J. The reliability of digital cameras for color selection. Int J Prosthodont. 2005 Sep-Oct; 18(5):438-40.
8- Bengel WM. Digital photography and assessment of therapeutic results after bleaching procedures. J Esthet Restor Dent. 2003 Dec; 15(3):521-32.
9- Paravina RD, Majkic G, Imai FH, Powers JM. Optimization of tooth color and shade guide design. J Prosthodont. 2007 Apr; 16(2):269-276.
10- Westland S. Review of the CIE system of colorimetry and its use in dentistry. J Esthet Restor Dent. 2003 Dec; 15(Suppl 1):S5-S12.
11- Albers HF. Tooth colored restoration: principles and techniques. 9th ed. USA: BC Decker Inc; 2002, Chapter 5.
12- Summitt JB, Robbins WJ, Hilton TJ, Schwartz RS. Fundamentals of operative dentistry: a contemporary approach. 3rd ed. USA: Quintessence Publishing Co. Inc; 2006.
13- Schwabacher WB, Goodkind RJ. Three-dimensional color coordinates of natural teeth compared with three shade guides. J Prosthet Dent. 1990 Oct; 64(5):425-431.
14- Tung OH, Lai YL, Chou IC, Lee SY. Development of digital shade guides for color assessment using a digital camera with ring flashes. Clin Oral Investig. 2010 Jan; 5(1):265-71.
15- Smith RN, Collins LZ, Naeeni M. The in vitro and in vivo validation of a mobile non-contact camera-based digital imaging system for tooth color measurement. J Dent. 2008 Dec; 36 Suppl 1:S15-20.
16- Wee AG, Lindsey DT, Kuo SH, Johnston WM. Color accuracy of commercial digital cameras for use in dentistry. Dent Mater. 2006 Jun; 22(8):553-559.
17- Joiner A.
Tooth color: a review of the literature. J Dent. 2004 Nov; 32(Suppl 1):3-12.
18- Douglas RD. Precision of in vivo colorimetric assessments of teeth. J Prosthet Dent. 1997 May; 77(2):464-70.
19- Cho BH, Lim YK, Lee YK. Comparison of the color of natural teeth measured by a colorimeter and Shade Vision System. Dent Mater. 2007 Oct; 23(10):1307-1312.
20- Tung FF, Goldstein GR, Jang S, Hittelman E. The repeatability of an intraoral dental colorimeter. J Prosthet Dent. 2002 Dec; 88(6):585-590.
21- Caglar A, Yamanel K, Gulsahi K. Could digital imaging be an alternative for digital colorimeters? Clin Oral Invest. 2010 Dec; 14(6):713-8.
22- Goldstein GR, Schmitt GW. Repeatability of a specially designed intraoral colorimeter. J Prosthet Dent. 1993 Jun; 69(3):616-9.
23- Dozic A, Kleverlaan CJ, El-Zohairy A. Performance of five commercially available tooth color-measuring devices. J Prosthodont. 2007 Mar-Apr; 16(2):93-100.
24- Kim BJ, Yu B, Lee YK. Shade distribution of indirect resin composites compared with a shade guide. J Dent. 2008 Oct; 36:1054-1060.
25- Samra AP, Pereira SK, Delgado LC, Borges CP. Color stability evaluation of aesthetic restorative materials. Braz Oral Res. 2008 Jul-Sep; 22(3):205-10.
26- Fontes ST, Fernandez MR, Moura CM, Meireles SS. Color stability of a nanofill composite: effect of different immersion media. J Appl Oral Sci. 2009 May; 17(5):388-91.
27- Delfino CS, Chinelatti MA, Carrasco-Guerisoli LD, Batista AR, Froner IC, Palma-Dibb RG. Effectiveness of home bleaching agents in discolored teeth and influence on enamel microhardness. J Appl Oral Sci. 2009 Jul-Aug; 17(4):284-8.
28- Lath DL, Johnson C, Smith RN, Brook AH. Measurement of stain removal in vitro: a comparison of two instrumental methods. Int J Dent Hyg. 2006 Aug; 4(3):129-32.
29- Cal E, Guneri P, Kose T. Comparison of digital and spectrophotometric measurements of color shade guides. J Oral Rehabil. 2006 Mar; 33(3):221-8.
30- Lath DL, Smith RN, Guan YH, Karmo M, Brook AH. Measurement of stain on extracted teeth using spectrophotometry and digital image analysis. Int J Dent Hyg. 2007 Aug; 5(3):274-9.
31- Seghi RR, Johnston WM. Spectrophotometric analysis of color differences between porcelain systems. J Dent Res. 1986 Jul; 56(1):35-40.
32- Van der Burgt TP, Ten Bosch JJ, Borsboom PC. A new method for matching tooth colors with color standards. J Dent Res. 1985 May; 64(9):837-841.
33- Browning WD. Use of shade guides for color measurement in tooth-bleaching studies. J Esthet Restor Dent. 2003 Dec; 15 Suppl 1:S13-20.
34- Barna GJ, Taylor JW, King GE. The influence of selected light intensities on color perception within the color range of natural teeth. J Prosthet Dent. 1981 Oct; 46(4):450-3.
35- Donahue JL, Goodkind RJ. Shade color discrimination by men and women. J Prosthet Dent. 1991 May; 65(5):699-703.
36- Luo W, Westland S, Brunton P, et al. Comparison of the ability of different color indices to assess changes in tooth whiteness. J Dent. 2007 Feb; 35(2):109-16.
37- Jasinevicius TR, Curd FM, Schilling L, Sadan A. Shade-matching abilities of dental laboratory technicians using a commercial light source. J Prosthodont. 2009 Jan; 18(1):60-3.
38- Schropp L. Shade matching assisted by digital photography and computer software. J Prosthodont. 2009 Apr; 18(3):235-41.
39- Erbshaeuser M. A new method for the objective colorimetry with digital photography. [Thesis]. France: Universal Publishers; 2001.
40- Xiao J, Zhou XD, Zhu WC. The prevalence of tooth discoloration and the self-satisfaction with tooth color in a Chinese urban population. J Oral Rehabil. 2007 May; 34(5):351-60.
41- Guler A, Yilmaz F. Effect of different drinks on stainability of resin composite provisional restorative materials. J Prosthet Dent. 2005 Aug; 94(2):118-24.
42- Bagheri R, Burrow MF.
Influence of food-simulating solutions and surface finish on susceptibility to staining of esthetic restorative materials. J Dent. 2005 May; 33(5):389-398.
43- Yazici AR, Celik C, Dayangac B. The effect of curing units and staining solutions on the color stability of resin composites. Oper Dent. 2007 Nov-Dec; 32(6):616-22.

A PROGRAM FOR KEEPING PHOTOGRAPHIC RECORDS OF IMMOVABLE CULTURAL PROPERTY
– an overview of the application –
СНЕЖАНА НЕГОВАНОВИЋ, БРАНИСЛАВ ТОМИЋ
UDC 004.932
Abstract: Being a different technical and technological process from the one using film and emulsion, digital photography requires a completely different approach of the heritage protection service in order to comply with the existing legislation on the one hand, and to create a well-organised and lasting digital photo archive on the other.
In addition to resolving the issue of organisation in developing the program, the database for filing and systematising the digital photographic records, compatibility has been achieved with the existing, classical, organisation of the Institute's photographic records, opening up the possibility of digitising and entering into the database the photographs recorded or reproduced in different media, which would result in an integrated photo archive documenting the condition of and changes undergone by the built heritage of Belgrade.
Keywords: digital photography, application, database

Owing to the ever-growing number of digital photographs created daily, year after year, in the documentation work of the Institute for the Protection of Cultural Monuments of the City of Belgrade (Завод за заштиту споменика културе града Београда), establishing a system for filing and systematising digital photographs became a priority. The Institute's documentation work is governed by the existing legislation: the Rulebook on the data entered in the register, the manner of keeping the register and the central register of immovable cultural property, and on the documentation on these cultural properties (Official Gazette of the Republic of Serbia, nos. 30/95 and 37/95), and the Rulebook on the manner of keeping records of immovable properties under preliminary protection (Official Gazette of the Republic of Serbia, no. 19/95). In addition, the Documentation Centre of the Institute has its own Rules of Procedure (2000), which define in more detail the work on photographic and other forms of the institution's documentation. In 2010, the Photographers' Section of the Serbian Association of Conservators initiated the drafting of a Recommendation on the recording, processing and archiving of digital photographs for the purposes of cultural property documentation (Gazette of the Serbian Association of Conservators (DKS), no. 35), based on the new recommendations for working with digital photographic records published in The AIC Guide to Digital Photography and Conservation Documentation.1
After a period of preparation and definition of goals, in 2014 a successful collaboration was established with the company „Парагон“ on organising the digital photographic records of the Institute for the Protection of Cultural Monuments of the City of Belgrade. The result is a new application for working with photographic records, developed as a synthesis of a basic application for filing photographs and the specific needs of the Institute.
Обично се користи ако је потребно експортовати снимке нумерисане одређеним редоследом; за изложбу, слајд презентацију или неким другим поводом. Web колекција се разликује од других зато што се не налази у локалној мрежи, већ негде на неком серверу на интернету, обично код провајдера. Овој колекцији се приступа из панела Web у екрану појединачног снимка, који је директно повезан с базом на интернету. Могуће је и претраживање ове колекције. 3. Админ (Администрација) На овом екрану су приказани административни подаци о непокретним културним добрима. Идеално би било да се ови подаци не налазе у самој апликацији, већ негде у некој централној административној бази у оквиру локалне мреже. База снимака би се при том само повезала с централном административном базом. Сл. 3. Екран Админ, културна добра 96КОНЗЕРВАТОРСКИ ПРИСТУП СНЕЖАНА НЕГОВАНОВИЋ, БРАНИСЛАВ ТОМИЋ ФУНКЦИОНАЛНОСТ ПРОГРАМА 1. Снимци Приказ на екрану Снимке је могуће приказати на више начина: – као један снимак великог формата на екрану, окру- жен свим подацима и функцијама везаним за тај снимак; – као листу снимака, са ограниченим бројем пода- така везаних за снимак, чиме је омогућено да се више снимака види на екрану истовремено. Поред стандардног приказа снимка, могуће је при- казати и увећану верзију, на целом екрану. Такође је могуће и зумирати слику до пуне резолуције у оквиру мањег прозора. Импорт (унос) снимака у базу Снимци, искључиво у jpg формату, могу се уносити у базу један по један, или као цео фолдер, укључујући и потфолдере. Импортом фотографија могуће је унети и ме- таподатке који су заједнички за све импортоване снимке, – као табелу сличица, с минималним подацима, слично контакт-копијама у класичној фотографи- ји; овај екран омогућава приказ великог број сни- мака истовремено. у једном кораку. Ово је могуће урадити и касније, груп- ном изменом метаподатака. Ако постоје фајлови и у неком другом формату, а запис снимка је већ формиран, можемо их импортовати у посебан део апликације, у Верзије. 
Fig. 4 Image screen, images displayed as a list
Fig. 5 Image screen, tabular display of images
Fig. 6 Image export dialogue

Editing images
An image can be edited in an external application (for example Photoshop). After editing, it is automatically replaced by the new version. If the original version is to be kept, it can be moved to the Versions before editing.

Rating images
Each image can be given a rating from 1 to 5. This makes it possible to gather the best images on the basis of the ratings given.

Markers
Markers are temporary labels attached to images. Images are marked so that groups of images can be manipulated: marked images can, for example, be exported, or their metadata can be changed in a batch. A new set can also be formed from the marked images.

Metadata
Several metadata fields accompany each image. Some data are taken over from the administrative part of the application: name of the monument, significance of the cultural property, address, municipality, and so on. In addition, there are fields specific to each image: image description, location description, keywords, comment, photographer, collaborator and image type. The date of the image is entered automatically when an image from a digital camera is imported, taken from the EXIF information that is an integral part of the digital image file. For a scanned image, which carries no EXIF information, the date must be entered manually. The metadata are shown in the Basic data panel.

Image versions
Versions are files of arbitrary format linked to an image record. An unlimited number of such files can be attached to each image. Versions are typically used to store other formats of the image (for example RAW) or alternative versions of it (different crops of the same image). They are shown in the Versions panel.

Exporting images
Images in the database can be exported one by one or as the currently selected group.
When they are exported as a group, file names can be generated automatically, including prefixes, suffixes, sequential numbers, dates and the like. Reduced versions of the images can also be exported, by defining a maximum size in pixels. Export goes to a folder on the desktop.

Fig. 7 Metadata panel
Fig. 8 Image with a secondary window showing the location on a map

Sorting images by metadata
Images can be sorted by various criteria: attributes of the monument, dates, ratings and so on. Sorting can be ascending or descending.

EXIF data
At the moment of capture, all digital cameras record EXIF information within the image file (the date of the shot, its technical parameters, on some cameras also the location, and a series of other parameters). A selection of these data is read in when images are imported. They are shown in the Exif panel.

Searching
Searching can be done in two ways: as a quick search, in which the entered term is looked up in all metadata fields, and as an advanced search, in which each field can be searched separately.

Reports
There are two types of reports:
– Reports with images and metadata. These come in three kinds: one large-format image per page with all the metadata; five small images per page with a limited set of metadata; and twenty small images per page with a single metadata item, either the file name or the image description (similar to contact sheets).
– Reports with shooting statistics. These can be monthly, showing the number of images per monument within one month, or annual, summarising the number of images taken in each month of one year.

Reports can be printed or saved in PDF format.

Web record
The web record is shown in the Web panel.
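The automatic generation of export file names from a prefix, a suffix, a date and a sequential number can be sketched as follows (an illustrative Python fragment; the option names are hypothetical, not those of the export dialogue):

```python
def export_name(index, prefix="", date_str="", suffix="", start=1, width=3, ext="jpg"):
    """Compose an export file name from optional prefix, date, sequential number
    and suffix, joined with underscores."""
    number = "{:0{w}d}".format(start + index, w=width)
    parts = [p for p in (prefix, date_str, number, suffix) if p]
    return "_".join(parts) + "." + ext

# e.g. the first of a numbered group:
# export_name(0, prefix="GXF", date_str="2014-06-01") -> "GXF_2014-06-01_001.jpg"
```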
From this panel, images reduced for display on the internet can be uploaded, together with a limited set of metadata, to a server (e.g. at a provider). If an image has already been uploaded, it is shown in this panel, together with its metadata, for verification.

GPS coordinates
GPS coordinates can be entered in dedicated fields with each image, so that its location can be shown on a map (Google Maps). If no GPS coordinates have been assigned to the image, the program uses the coordinates assigned to the monument (if any have been assigned). Coordinates can also be assigned manually, using Google Maps. The map is displayed within the application, in a secondary window.

Fig. 9 Report page in A4 format
Fig. 10 Sequence processing screen

2. Collections
Image sets
Sets are unordered groups of images. There are two kinds of sets in the application. The first is simply called a set and is entirely arbitrary in composition; one image may belong to one or more sets. Sets are created or deleted as needed and are often only temporary. They can be organised into groups of sets, on three hierarchical levels; this is done on the Sets screen. Each group of sets may belong to more than one parent group.

The second kind of set, called Monuments in the application, refers to the immovable cultural properties. This is a strictly defined kind of set, in which every image is assigned to exactly one immovable cultural property and every cultural property is represented by exactly one set. A cultural property is meant here in the administrative sense, in accordance with the register of immovable cultural properties kept by the Cultural Heritage Protection Institute of the City of Belgrade. The order of the images depends on the chosen sorting.

Sequences
As noted earlier, a sequence is an ordered set of images, in which the images appear in a defined order. A sequence is formed by transferring individual images into it from the currently selected group of images (a monument or a set).
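The fallback from image coordinates to monument coordinates described above amounts to a simple rule, sketched here in Python with hypothetical field names:

```python
def effective_coordinates(image, monument):
    """Coordinates used for the map display: the image's own GPS fields if set,
    otherwise those assigned to the monument, otherwise None."""
    for record in (image, monument):
        lat, lon = record.get("lat"), record.get("lon")
        if lat is not None and lon is not None:
            return (lat, lon)
    return None
```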
By a simple transfer, the image goes to the end of the sequence. Alternatively, it can be inserted between any two adjacent, previously added images. The order of the images can also be changed later. Sequences can be printed, or saved in PDF format, in the form of a report. They can be exported as numbered images or in XML format, for transferring the metadata into other systems.

Web collection
The web collection resides on a server outside the local network, for example on a domain hosted by a provider. This server must at the same time be a web server, on which there must also be a web application capable of accepting data from our application. The data transfer is performed over the HTTP protocol, from the individual-image screen.

In the example supplied with the application, the site runs a web application written in a PHP and MySQL environment, which accepts the data and stores them in an external web database. Our application is connected to this web application over the HTTP protocol. The task of the web application is to deliver the requested data on demand to users, that is, to other web applications, through an API (application programming interface).

This demonstration web application is simple; its purpose is to show how external databases can be worked with. The idea behind this particular example is that several institutions using an application similar to the one at the Cultural Heritage Protection Institute of the City of Belgrade could have a common database on the internet, so that each could view the material of all the participating institutions (only, of course, the data that the records keeper has decided to send to the shared database). It is expected, however, that this functionality will mostly be used in maintaining websites.

3. Admin (Administration)
Monuments
As stated earlier, this part of the application is a stand-in for a central database of the institution.
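The transfer to the external web database — a reduced image plus a chosen subset of metadata, sent over HTTP — might look as follows on the client side. This is only a sketch of the idea: the endpoint URL, the JSON payload and the hex encoding of the image data are assumptions for illustration, not the actual protocol of the demonstration PHP/MySQL application.

```python
import json
import urllib.request

API_URL = "https://example.org/photo-base/upload.php"   # hypothetical endpoint

def build_upload_request(image_id, thumbnail_bytes, metadata):
    """Prepare an HTTP POST carrying a web-sized image and selected metadata."""
    payload = {
        "id": image_id,
        "metadata": metadata,               # only the fields chosen for the shared base
        "thumbnail": thumbnail_bytes.hex(), # binary image data, hex-encoded here
    }
    data = json.dumps(payload).encode("utf-8")
    return urllib.request.Request(
        API_URL,
        data=data,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
```

Actually sending the request (`urllib.request.urlopen`) and the server-side API that answers queries from the other institutions are omitted here.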
The database holds the basic data on the immovable cultural properties: register number, name, kind of cultural property, significance, address, municipality, settlement, boundaries (for spatial cultural-historical ensembles) and, finally, the GPS coordinates of the immovable cultural property, i.e. of the centre of the ensemble.

This makes it possible to enter all the data on an immovable cultural property in one place, only once. Each cultural property is linked to all the images of that property; conversely, every image displays the data of its monument. Changes to the data are automatically propagated to all the images. Sorting and searching are also possible, and the geographic location of an immovable cultural property can be shown in a secondary window, using Google Maps.

4. Mobile devices
As mentioned in the introduction, the application (database) can also be accessed from mobile platforms (a tablet or a phone), in this case only from devices running the iOS operating system, that is, from Apple devices. Although it is unlikely that such devices will be used in cultural institutions under present conditions, this has been enabled for the sake of completeness of the software solution. It does, of course, require the necessary technical infrastructure, above all FileMaker Server, the software that enables multi-user operation; this is quite sufficient for the database to be accessed from any place with wireless internet (wifi). At present this mobile application serves only for viewing the data, for instance somewhere in the field.

CONCLUSION
The development of a program for filing and systematising the digital photographic records in the Documentation Centre of the Cultural Heritage Protection Institute of the City of Belgrade has secured continuity in the work of researching, documenting and presenting the development of the city of Belgrade through photographs and through the development of photographic practice.
The creation of a digital database of the photographic heritage offers numerous possibilities for cross-referencing and for following the visualisation of history that the photographic medium makes possible. Although the medium has officially existed for a mere 175 years, it has in that time recorded a wealth of documents of inestimable value: authentic testimony to the former appearance of the city of Belgrade, its streets and squares, individual buildings and other subjects, precious to researchers and to the public at large in reconstructing national, regional and contemporary history.

NOTES:
1] Warda, J. (ed.) (2008), The AIC Guide to Digital Photography and Conservation Documentation, Washington DC: American Institute for Conservation of Historic and Artistic Works.

Summary:
BRANISLAV TOMIĆ
SNEŽANA NEGOVANOVIĆ
PROGRAM FOR THE MANAGEMENT OF PHOTOGRAPHIC RECORDS OF IMMOVABLE CULTURAL HERITAGE

The paper presents the application for filing and sorting the digital photo records of the Cultural Heritage Protection Institute of the City of Belgrade, which has been created by adapting a basic application for filing photographic images to the Institute's specific needs. Analogously to the filing of a classical photograph, a digital photographic image was used as the starting point to create a metadata and image organisation system, both for individual images and for the whole of the Institute's digital photographic records. Taken into account in the process were new guidelines for the management of digital photographic records and the new possibilities offered by digital and internet technology. The paper provides a description of the application with all basic functions, the necessary technical conditions and the manner of integration into the Institute's existing photographic records system.

ILLUSTRATIONS
Fig. 1 Home screen
Fig. 2 Image screen. Individual images
Fig. 3 Admin screen. Cultural properties
Fig. 4 Image screen. Presentation of images in the form of a list
Fig. 5 Image screen. Tabular presentation of images
Fig. 6 Image export dialogue box
Fig. 7 Metadata panel
Fig. 8 Image with a secondary window showing the location on a map
Fig. 9 A4 report sheet
Fig. 10 Sequence processing screen

Snežana N. Negovanović, graduate camera operator
Cultural Heritage Protection Institute of the City of Belgrade
snezana.negovanovic@beogradskonasledje.rs

Branislav B. Tomić, graduate civil engineer
"PARAGON", Belgrade
branatomic@yahoo.com

Lycobetaine acts as a selective topoisomerase IIβ poison and inhibits the growth of human tumour cells

HU Barthelmes2, E Niederberger1, T Roth4, K Schulte1, WC Tang1, F Boege2, H-H Fiebig3,4, G Eisenbrand1 and D Marko1
1 Department of Chemistry, Division of Food Chemistry and Environmental Toxicology, University of Kaiserslautern, Erwin-Schroedinger Str. 52, 67663 Kaiserslautern; 2 Medizinische Poliklinik, University of Würzburg Medical School, Klinikstr. 6–8, 97070 Würzburg; 3 Tumor Biology Center at the University of Freiburg, Breisacher Str. 117, 79106 Freiburg i. Br.; 4 Institute for Experimental Oncology, Oncotest GmbH, Am Flughafen 8–10, 79110 Freiburg i. Br., Germany

Summary The phenanthridine alkaloid lycobetaine is a minor constituent of Amaryllidaceae. Inhibition of cell growth was studied in the clonogenic assay on 21 human tumour xenografts (mean IC50 = 0.8 µM). The growth of human leukaemia cell lines was also potently inhibited (mean IC50 = 1.3 µM). Athymic nude mice, carrying s.c. implanted human gastric tumour xenograft GXF251, were treated i.p. with lycobetaine for 4 weeks, resulting in a marked tumour growth delay. Lycobetaine was found to act as a specific topoisomerase IIβ poison. In the presence of calf thymus DNA, pure recombinant human topoisomerase IIβ protein was selectively depleted from SDS-gels, whereas no depletion of topoisomerase IIα protein was observed.
In A431 cells, immunoband depletion of topoisomerase IIβ was induced, suggesting stabilization of the covalent catalytic DNA intermediate in living cells. It is reasonable to assume that this mechanism will cause, or at least contribute significantly to, the antitumour activity. © 2001 Cancer Research Campaign

Keywords: lycobetaine; ungeremine; topoisomerase IIβ; cleavable complex; clonogenic assay; gastric carcinoma

British Journal of Cancer (2001) 85(10), 1585–1591. doi: 10.1054/bjoc.2001.2142, available online at http://www.idealibrary.com, http://www.bjcancer.com

The phenanthridine alkaloid lycobetaine (ungeremine, Figure 1) has been isolated as a minor constituent from several plant species of the Amaryllidaceae family (Owen et al, 1976; Ghosal et al, 1986; Lee et al, 1994). Lycobetaine has been reported to exhibit growth inhibitory properties in vitro (Wang et al, 1987; Ghosal et al, 1988) and to show significant cytotoxic activity against Ehrlich ascites carcinoma, ascites hepatoma, the leukaemias L1210 and P388, Lewis lung carcinoma and Yoshida ascites sarcoma in mice or in rats after i.p. injection (Zhang et al, 1981). In nude mice with gastric cancer, lycobetaine has been reported to extend the survival time and to decrease the tumour size (Wu et al, 1988). However, the underlying mechanism of action has not been elucidated yet. The present study addresses potential mechanisms of action of lycobetaine.

MATERIALS AND METHODS

Materials
Lycobetaine was obtained by oxidation of lycorine with selenium dioxide (Ghosal et al, 1986; He and Weng, 1989). Lycorine was isolated from bulbs of Sternbergia lutea (Amaryllidaceae) (Evidente et al, 1984) and characterized by HPLC and 1H-NMR spectroscopy in comparison with reference material kindly provided by Dr B Xu, Shanghai Institute of Materia Medica, Chinese Academy of Sciences, China. All chemicals used were of research grade.
Received 24 January 2001; revised 20 July 2001; accepted 24 August 2001. Correspondence to: D Marko

Cell lines
The large cell lung tumour xenograft LXFL 529L was established in serial passage onto nude mice, NMRI genetic background (Berger et al, 1992), from which a permanent cell line was developed. The cell lines LXFL 529L, HL60, U937, K562 and Molt4 were cultured at 37˚C (5% CO2 and 98% humidity) in RPMI 1640 medium with addition of 1% penicillin/streptomycin and 10% fetal calf serum (FCS). Medium, FCS and penicillin/streptomycin were obtained from Gibco Life Technologies (Karlsruhe, Germany). Cells were subcultured twice weekly and were routinely tested for the absence of mycoplasma contamination.

Sulforhodamine B assay
Growth inhibition was determined using the sulforhodamine B assay (Skehan et al, 1990) with slight modifications as described previously (Marko et al, 2001).

Figure 1 Structure of lycobetaine

Table 1  Effect of lycobetaine on the growth of human tumour cell lines

  Cell line      Growth inhibition(a), IC50 (µM)
  HL60           1.3 ± 0.5
  Molt 4         0.7 ± 0.4
  K 562          0.8 ± 0.03
  U 937          2.5 ± 0.1
  LXFL 529L      1.2 ± 0.1

(a) Growth inhibition of human tumour cell lines was determined using the sulforhodamine B assay, incubation time 72 h. IC50 values were calculated as survival of treated cells over control cells × 100 (T/C%). Values are given as mean ± SD of 2–4 independent experiments, each done in quadruplicate.

Clonogenic assay
18 human tumours established in serial passage in nude mice (NMRI nu/nu strain) and 3 cell line-derived xenografts were used (Berger et al, 1992). The clonogenic assay was performed as a modified 2-layer soft agar assay as described previously (Drees et al, 1997). Inhibition of colony formation was expressed as treated/control × 100 (T/C%). IC50 and IC70 values were determined by plotting compound concentration versus cell viability. Mean IC50 and IC70 values were calculated as described previously (Roth et al, 1999).
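The cited procedure for the mean IC values is that of Roth et al (1999); for potency data spanning several orders of magnitude (compare the log-scaled axis of Figure 2), a geometric mean is the natural choice. The sketch below is purely an illustrative assumption, not the published procedure:

```python
import math

def geometric_mean(values):
    """Geometric mean: averages on the log scale, so IC values that differ by
    orders of magnitude (e.g. 0.002 µM vs 27.5 µM) contribute symmetrically."""
    logs = [math.log(v) for v in values]
    return math.exp(sum(logs) / len(logs))

# geometric_mean([0.1, 10.0]) -> 1.0, whereas the arithmetic mean would be 5.05
```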
In vivo testing
Fragments of the gastric cancer GXF251 were implanted subcutaneously in both flanks of female nude mice (NMRI nu/nu strain). When tumours were approximately 3 × 3 mm, mice were randomly assigned to treatment and control groups. Lycobetaine was administered i.p., dissolved in 0.9% NaCl. Group 1 was treated twice weekly for 4 weeks with 60 mg kg–1 bw. Body weight and tumour diameters were measured twice weekly. Tumour volumes were calculated from 2 perpendicular diameters measured by calipers (a × b²/2), and the optimal T/C as well as the tumour growth delay in days was determined (Drees et al, 1997). All tests were performed according to UKCCCR Guidelines for the Welfare of Animals in Experimental Neoplasia (Workman et al, 1998).

Single cell gel electrophoresis (Comet assay)
HL60 cells (1 × 10⁶ ml–1) were incubated for 3 h in RPMI 1640 medium containing 10% FCS in the presence or absence of the test compounds. Single cell gel electrophoresis (SCGE) was performed according to the method of Gedik (Gedik et al, 1992). DNA strand breaks were detected by fluorescence microscopy and quantified by using the comet assay II system (Perceptive Instruments, Suffolk, UK).

Biochemical analysis of topoisomerase-directed drug effects
DNA relaxation and cleavage were measured with pUC18 plasmid as a DNA substrate, whereas DNA decatenation activity was measured with C. fasciculata catenated kinetoplast DNA (obtained from TopoGen Inc, Columbus, Ohio). Human topoisomerases I, IIα and IIβ were expressed in S. cerevisiae and purified by various chromatographic steps, as described by Knudsen et al (1996). Assay conditions and electrophoretic separation of the reaction products were described precisely by Boege (1996). For the study of topoisomerase-directed drug effects in cells, we used human A431 epidermoid cells. Cell culture and immunoblotting of cellular DNA topoisomerases were done as described in Meyer et al (1997). Gels were documented by digital photography.
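The two quantities defined under In vivo testing — the tumour volume from two perpendicular caliper diameters (a × b²/2) and the T/C ratio of treated over control tumour volumes — are simple arithmetic, sketched here as an illustration. The paper does not state which diameter is a; the convention assumed below is that b is the smaller one.

```python
def tumour_volume(d1, d2):
    """Tumour volume a * b^2 / 2 from two perpendicular diameters,
    taking a as the larger and b as the smaller diameter (assumed convention)."""
    a, b = max(d1, d2), min(d1, d2)
    return a * b * b / 2.0

def t_over_c(treated_volumes, control_volumes):
    """T/C in percent: mean treated tumour volume over mean control volume x 100."""
    mean = lambda xs: sum(xs) / len(xs)
    return mean(treated_volumes) / mean(control_volumes) * 100.0
```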
X-ray films were digitalized with a flatbed scanner. Since statistical analysis could not be applied to the data, we present examples of results representative of at least 3 experiments with similar outcome, performed on different days and with different sets of enzymes or cells.

Flow cytometry
LXFL529L cells were incubated for 72 h with compounds on 10 cm Petri dishes in RPMI 1640 medium containing 10% FCS. Flow cytometry was performed as described previously (Marko et al, 1998).

RESULTS

Growth inhibition in vitro
The activity profile in the clonogenic assay with human tumour xenograft cells shows that lycobetaine inhibits the growth of the 21 tumour xenografts tested within a range of 2 nM up to 27.5 µM, with a mean IC70 value of 3.3 µM (mean IC50 = 0.8 µM, Figure 2). Substantial activity (IC70 ≤ 1 µM) was observed against the 2 gastric carcinomas tested, the large-cell lung carcinoma LXFL529, the small-cell lung carcinoma LXFS538, the colon carcinoma CXF280, the ovarian carcinoma OVXF1023 and the pancreatic carcinoma PAXF546. Marked growth inhibition (IC70 ≤ 3 µM) was also observed for 2 bladder carcinomas (BXF1299 and BXF1301), one melanoma (MEXF989) and the renal carcinoma RXF944LX. The mammary carcinomas (MX1, MCF7X) and the lung adenocarcinomas (LXFA289, LXFA526) appear to be more resistant to lycobetaine. This also applies to the other colorectal, ovarian, pancreatic and renal tumours tested, as well as the second melanoma, resulting in an overall heterogeneous response pattern for these tumours. Human leukaemia cell lines were also growth inhibited in the low micromolar range (mean IC50 = 1.3 µM, SRB assay), as shown in Table 1. Similar results were obtained for the xenograft-derived permanent human large-cell lung carcinoma cell line LXFL529L, tested for comparison (Table 1).

Antitumour efficacy in vivo
The gastric tumour xenograft GXF251 was chosen for in vivo testing.
Athymic nude mice, carrying s.c. implanted tumour xenografts, were treated i.p. twice weekly for 4 weeks (Table 2). This group received a total dose of 480 mg kg–1, resulting in a growth inhibition of about 50%. The schedule was well tolerated, as judged by only marginal effects on body weight gain; however, a late toxic death (day 21) was recorded (Table 2).

Induction of DNA strand breaks
HL60 cells were incubated for 3 h with lycobetaine. DNA strand breaks were determined using the comet assay (Gedik et al, 1992) (Figure 3). The extent of DNA damage induced correlates with the fluorescence intensity of the comets, quantified as tail intensity. Incubation with lycobetaine for 3 h resulted in 32% slightly and 10% severely damaged cells at 5 µM, whereas at 10 µM 82% slightly and 6% severely damaged cells were observed, with only 12% of the cells remaining intact.

Figure 2 data — IC50 and IC70 values (µM):

  Tumour xenograft          IC50     IC70
  Mean (n = 21)             0.8      3.3
  Bladder
    BXF 1299                0.6      2.4
    BXF 1301                0.8      2.6
  Colon
    CXF 280                 0.2      1.1
    HT29X                   3.9      9.2
  Gastric
    GXF 214                 0.3      1.0
    GXF 251                 0.002    0.1
  Lung, non-small cell
    LXFA 289                2.3      6.6
    LXFA 526                0.9      2.6
    LXFL 529                0.1      0.4
  Lung, small cell
    LXFS 538                0.3      0.5
  Mammary
    MX1                     1.6      4.1
    MCF7X                   3.9      15.8
  Melanoma
    MEXF 514                27.5     69.2
    MEXF 989                0.5      2.3
  Ovarian
    OVXF 899                5.6      16.1
    OVXF 1023               0.03     0.5
    OVXF 1353               5.7      33.4
  Pancreatic
    PAXF 546                0.1      1.5
    PAXF 736                10.5     26.8
  Renal
    RXF 1220                4.2      8.4
    RXF 944LX               1.0      2.7

Figure 2 Activity profile of lycobetaine on the growth of human tumour xenografts in the clonogenic assay. Cell suspensions of human tumour xenografts, grown onto nude mice, were seeded in soft agar and cultivated for 6–15 days, depending on the doubling time of the tumour stem cells. Lycobetaine was applied continuously in various concentrations.
After the incubation time and staining with a vital dye, tumour colonies with a diameter > 50 µm were counted with an automatic image analysis system. Variations of individual IC70s (drug concentration reducing colony formation to 30% of control values) from the mean value are shown as bars on the logarithmically scaled axis. Bars to the left indicate IC70s lower than the mean value, bars to the right indicate higher values.

Table 2  Tumour growth inhibition in vivo(a)

            Animals/   Dose        Schedule        Toxic      Body weight change (%)   Optimal   Tumour doubling   Tumour growth
  Group     tumours    (mg kg–1)   (i.p.)          deaths     day 14 / 21 / 28         T/C       time (days)       delay (days)
  Control   6/8        0           5 times/week,   0          +8 / +5 / +1             –         7.2               –
                                   4 weeks
  Group 1   5/10       60          twice weekly,   1          –1 / –4 / –11            50.6      9.4               2.1
                                   4 weeks         (day 21)

(a) Treatment of nude mice bearing the human gastric tumour xenograft GXF251 subcutaneously started at a tumour size of approximately 80 mm³ (day 20 after tumour implantation). Lycobetaine was dissolved in 0.9% NaCl and administered by i.p. injection. Mice in the control group were treated with the vehicle (0.9% NaCl).

Lycobetaine stimulates DNA cleavage by topoisomerase IIβ but not -α
In view of the pronounced DNA-damaging effect of lycobetaine in the comet assay, we suspected that the drug might act as a stabilizer of covalent DNA intermediates of topoisomerase I and/or II. To test this hypothesis, we incubated pUC18 plasmid DNA with pure recombinant human topoisomerases and various concentrations of lycobetaine. Controls included camptothecin or VM-26 as typical poisons of topoisomerase I or II, respectively. Reaction products were electrophoresed in the presence of ethidium bromide in order to resolve relaxed and supercoiled as well as linearized and nicked plasmid forms. Figure 4A shows a typical result obtained with topoisomerase I.
Obviously the enzyme alone British Journal of Cancer (2001) 85(10), 1585–1591 1588 HU Barthelems et al A B Figure 3 Single-cell gel electrophoresis (comet assay). HL 60 cells were treated for 3 h with lycobetaine. Subsequent lysis and single cell gel electrophoresis were performed as described in Materials and methods. (A) Control. (B) Lycobetaine 10 µM, 3 h. The categories of DNA damage were separated according of the proportion of extranuclear fluorescence into ‘not damaged’ (proportion of the extranuclear fluorescence < 17%), slightly damaged (extranuclear fluorescence 17–60%) and severely damaged (> 60%) A B Topoisomerase I, 60 ng Camptothecin µM Lycobetain, µM slot 1 2 3 4 5 6 7 8 9 nicked shifted neg. supercoiled relaxed Topoisomerase ΙΙ, 300 ng VM-26, µM Lycobetain, µM linear nicked supercoiled relaxed linear nicked supercoiled relaxed 50 1003031100 100 100 100 10 30 10 3 1 1 2 3 4 5 6 7 8 9 10 Figure 4 Topoisomerase DNA-cleavage assays. 60 ng topoisomerase I (A) or 300 ng topoisomerase IIα (B, top) or 300 ng topoisomerase IIβ (B, bottom) were incubated with or without 200 ng pUC18 DNA at 37˚C. Camptothecin or lycobetaine were added to the incubations as indicated. Reactions were stopped after 30 min with 1% SDS. Samples were digested with proteinase K, and subjected to submarine 1% agarose gelelectrophoresis in the presence of ethidiumbromide. UV- transilluminated gels were documented by digital photography. This is a representative result of 5 identical experiments with similar outcome. Arrows in the bottom panel of (B) indicate linearization of plasmid DNA promoted by a concerted action of topoisomerase IIβ and lycobetaine - 1-+...-+ - +++ H - --... ... -- ' i t -I--· - (lane 2) relaxed the plasmid completely, whereas 50 µM camp- tothecin (lane 1) inhibited the relaxation reaction and at the same time induced nicking of a considerable portion of the substrate. 
These observations are in perfect agreement with the known fact that camptothecin promotes topoisomerase I-mediated single-stranded DNA cleavage. When various concentrations of lycobetaine ranging from 1–100 µM (lanes 5–9) were tested in a similar fashion, it became apparent that lycobetaine did not promote plasmid nicking in the way camptothecin did. Thus, it can be concluded that lycobetaine does not damage DNA via stimulating topoisomerase I-mediated DNA cleavage. It should be noted that lycobetaine shifted the closed circular plasmid in a dose-dependent fashion. The shift was clearly also dependent on topoisomerase: it was not seen with lycobetaine and DNA alone (lane 4), but it also appeared in coincubations of topoisomerase II and lycobetaine (Figure 4B, lanes 6–9). Since the reaction products were thoroughly digested with proteinase before electrophoresis, the apparent band shifts cannot be due to the binding of topoisomerases to the DNA. They could, however, reflect the introduction of positive supercoils into the plasmid by topoisomerase I or II, which might be driven by the DNA-intercalating properties of lycobetaine and would thus depend on both lycobetaine and a topoisomerase. Be that as it may, the phenomenon was not pursued further, because it seemed irrelevant to the molecular basis of the DNA-damaging properties of lycobetaine.

Figure 4B shows typical results of DNA-cleavage experiments with topoisomerase IIα (top) or -β (bottom). Most notably, lycobetaine clearly stimulated topoisomerase II-mediated double-stranded DNA cleavage (indicated by plasmid linearization). However, it did so only in reactions containing the β-isoform and not in reactions containing the α-isoenzyme (lanes 7–9, compare top to bottom). The dose–response relationship of this effect was bell-shaped (maximum at 10–30 µM), as is to be expected from a substance that also has DNA-intercalating properties.
The apparent selectivity of lycobetaine for the β-isoform of topoisomerase II contrasted clearly with the non-selective effect of the classic topoisomerase II poison VM-26, which stimulated DNA cleavage by both isoenzymes (lanes 4 and 5). Taken together, these results suggested that lycobetaine might be a β-selective topoisomerase II poison.

Lycobetaine stabilizes the DNA linkage of topoisomerase IIβ but not -α
To corroborate these findings, we approached the problem from the protein side, studying the depletion of topoisomerase II proteins from SDS gels by drug-induced DNA linkage. For this purpose we incubated pure recombinant human topoisomerase IIα or -β with calf thymus DNA in the presence or absence of drugs, stopped the reaction with SDS, and subjected these samples to SDS-polyacrylamide gel electrophoresis followed by Coomassie staining. Figure 5, lane 1 shows that 2 µg topoisomerase IIα (top) or -β (bottom) alone were readily detectable by the protein staining. The protein bands of both isoenzymes were only marginally diminished upon incubation with 6 µg calf thymus DNA (Figure 5, lane 2), whereas coincubation with DNA and VP-16 (lane 3) or VM-26 (lane 4), which are both established topoisomerase II poisons, effected a more or less complete removal of the protein bands of both isoenzymes from the gel, because the covalent enzyme–DNA complexes stabilized by these drugs are too large to enter the gel. A similar extent of band depletion was also obtained with lycobetaine. However, in this case only topoisomerase IIβ became depleted, whereas the α-isoform was not affected (Figure 5, lanes 6 and 7, compare top to bottom). Again, the lycobetaine effect on the DNA linkage of topoisomerase IIβ had a maximum between 10–30 µM and decreased at higher concentrations (Figure 5, lane 8).
© 2001 Cancer Research Campaign Selective inhibition of topoisomerase IIβ by lycobetaine 1589 DNA, 6µg 1 2 3 4 5 6 7 8 VM - 26, µM VP- 16 , µM Lycobetaine, µM 100 100 3 10 30 100 170 kDa 180 kDa α β Figure 5 Protein band depletion of pure topoisomerase II. 2 µg of pure recombinant human topoisomerase IIα (top) or -β (bottom) were mixed with 6 µg calf thymus DNA. Lycobetaine, VM-26, or VP-16 were added, as indicated. The sample in lane 1 was without DNA. Samples were incubated at 37˚C for 30 min. Incubations were stopped with 1% SDS. Samples were then subjected to SDS-gel electrophoresis on 5.5% polyacrylamide gels. Gels were stained with coomassie blue, transilluminated and documented by digital photography. This is a representative result of 3 identical experiments with similar outcome 1 2 3 4 5 6 7 8 Incubation, Min Lycobetaine, µM A B 170 kDa 180 kDa Incubation, Min VM - 26, 100 µM Lycobetaine, 200 µM 9 10 11 12 170 kDa 1 2 3 4 5 6 7 8 9 10 11 180 kDa 60 180 360 50 200 600 50 200 600 50 200 600 15 30 60 120 1800 α β α β Figure 6 Immunoband depletion of topoisomerase IIα and -β in A431-cells. Lycobetaine or VM-26 was added at the concentrations indicated to the medium of non-confluent monolayer cultures of 105 A431 cells and culture was continued for the indicated periods of time. Subsequently the cells were lysed with 1% hot SDS and the cell lysates were rapidly applied to SDS-gel electrophoresis on 5.5% polyacrylamide gels, followed by Western blotting and immunostaining with rabbit peptide antibodies specific for the carboxyterminus of human topoisomerase IIα (top panels) or -β (bottom panels). Controls (lanes 1 and 11 in (A) and lanes 1 and 12 in (B)) were treated similarly without addition of drugs + · ·++· +++ Lycobetaine promotes selective immunoband-depletion of topoisomerase IIβ in cells In summary, the data shown in Figures 4 and 5 strongly suggest that lycobetaine might be a topoisomerase II poison selective for the β-form of the enzyme. 
As a next step we investigated whether it would also target topoisomerase IIβ in living cells. For this purpose, we added lycobetaine to the cell culture medium of A431 cells and subsequently assessed by immunoblotting the cellular pools of free topoisomerase IIα and -β that were not linked to DNA as a consequence of drug exposure. Drug-induced DNA-linkage of the cellular topoisomerase II can be determined as a loss of the respective protein band, because the DNA-linked enzymes are retained in the application slot together with the genomic DNA. In the absence of drug (Figure 6A, lanes 1 and 11) topoisomerase IIα (top) and -β (bottom) were readily detectable in the cells during the entire time-frame of the experiment. Upon exposure to lycobetaine, topoisomerase IIα was not at all affected, whereas the signal of topoisomerase IIβ became clearly diminished (Figure 6A, lanes 6 and 9, compare bottom and top). The disappearance of the topoisomerase IIβ could in principle be due to proteolysis of the enzyme. However, this is unlikely, because typical proteolytic degradation products are not apparent on the blot. Considering in addition the effect of the drug on purified topoisomerase IIβ (Figures 4 and 5), it seems most likely that the disappearance of the topoisomerase IIβ bands shown in Figure 6 is due to the induction of DNA-linkage. Maximal depletions were obtained with 200 µM lycobetaine, whereas minor effects were seen at higher concentrations (Figure 6A, bottom, compare lanes 6 and 7 or lanes 9 and 10). Most notably, the onset of these effects seemed to be delayed by at least one hour (Figure 6A, bottom, lanes 2–4) and increased gradually during incubations between 3 and 6 hours. Moreover, the effective concentration range seemed to increase during prolonged incubation periods (Figure 6A, bottom, compare lanes 5 with 8 and 7 with 10).
Figure 6B compares the kinetics of topoisomerase II band depletion by lycobetaine with those of the classical poison VM-26. It is quite obvious that the 2 drugs act on a different time-scale. The onset of VM-26 is more or less instantaneous and maximal band depletion is already obtained after 30 min (Figure 6B, lane 4), whereas lycobetaine needs as long as 120 minutes to produce a similar effect (Figure 6B, lane 9). Moreover, lycobetaine acts selectively on topoisomerase IIβ, whereas VM-26 depletes topoisomerase IIα and β (Figure 6B, compare top and bottom).

British Journal of Cancer (2001) 85(10), 1585–1591

Cell cycle distribution

A further prediction of this type of mechanism of action would be a block in the G2/M-phase of the cell cycle. To test that assumption, LXFL529L cells were incubated for 72 h with lycobetaine. In untreated cells, 11% of the cells were found in the G2/M phase. In the presence of 5 µM lycobetaine, a slight increase of cells in the G1/G0 phase became apparent, concomitant with a significant loss of cells in S phase and a distinct increase of the G2/M cells (25%). The arrest in G2/M was strongly enhanced upon incubation with 7.5 µM lycobetaine (32%), and an increase of cells with reduced DNA staining compared to cells in G1 was observed. In agreement with the flow cytometry results, at 10 µM lycobetaine cells showed a distinct shrinkage of the cell volume and condensation of the chromatin, features characteristic of apoptosis induction (data not shown).

DISCUSSION

Our results show that lycobetaine is a potent inhibitor of human tumour cell growth in the colony forming and in the SRB assay. In the clonogenic assay, enhanced activity against gastric carcinomas was observed. Lycobetaine was also active on certain lung and ovarian xenografts (Figure 2).
This is in line with reports about substantial activity of lycobetaine in clinical studies in the treatment of ovarian and gastric carcinomas, at a dosage that was not reported to lead to significant systemic toxicity (He and Weng, 1989). In mice bearing the gastric carcinoma GXF251, lycobetaine exhibits substantial antitumour activity with minor toxicity on a twice weekly schedule, implying tolerable weight loss and 1 late toxic death (day 21). The antitumour efficacy was enhanced when lycobetaine was applied daily (30 mg kg–1), resulting in a higher total dose. This resulted in increased toxicity, as judged from body weight loss; however, no toxic deaths were observed (data not shown). For exploration of an optimal dosing schedule, further experiments appear warranted.

Using fluorescence microscopy, we observed exclusive localization of lycobetaine in the nucleus of human LXFL529L cells, comparable to ethidium bromide and Hoechst H33258 (data not shown). Lycobetaine has previously been described as a DNA intercalating agent with preference for G–C pairs (Wu et al, 1987; Liu et al, 1989; Gan et al, 1992; Chen et al, 1997). This is consistent with our finding of its efficient competition for ethidium bromide intercalation into calf thymus DNA (data not shown). Furthermore, lycobetaine was found to compete effectively with the minor groove-binding ligand Hoechst H33258 (data not shown). In the comet assay, intense DNA damage was observed after treatment of HL60 cells with lycobetaine for 3 h (Figure 3). Mechanistically, DNA damage might occur as a consequence of the intercalative/minor groove-binding properties of lycobetaine, or, alternatively, lycobetaine might also target topoisomerases directly. Minor groove-binding ligands have been reported to be potent topoisomerase I inhibitors with a mechanism similar to camptothecin, which traps the DNA–topoisomerase I cleavable complex (Chen et al, 1993).
A number of DNA intercalators such as adriamycin, amsacrine or etoposide have already been shown to represent potent topoisomerase II inhibitors (Davies et al, 1988). We present here several lines of evidence characterizing lycobetaine as a topoisomerase IIβ-selective poison. Firstly, it stimulates DNA-cleavage by topoisomerase IIβ but not by topoisomerase IIα (Figure 4). Secondly, in the presence of calf thymus DNA it depletes pure recombinant human topoisomerase IIβ protein from SDS-gels, whereas it does not deplete topoisomerase IIα protein under similar conditions (Figure 5). Thirdly, it induces topoisomerase IIβ-selective immunoband-depletion in A431 cells (Figure 6). The latter result suggests that the drug does indeed stabilize the covalent catalytic DNA-intermediate in living cells, although we cannot altogether exclude that it induces proteolysis of the enzyme. However, in view of the data obtained with pure topoisomerase IIβ (Figures 4 and 5), this is a less likely alternative. It is reasonable to assume, therefore, that poisoning of topoisomerase IIβ will cause or at least contribute significantly to the DNA-damaging properties of lycobetaine that might be relevant for the antitumour activity of the drug. As expected from a topoisomerase II inhibitor, lycobetaine was found to induce, in a concentration-dependent fashion, arrest of LXFL529L lung carcinoma cells in the G2/M-phase of the cell cycle with indication for the onset of apoptosis. Recently, an analogue of the herbicide Assure has been described (XK 469, NSC 697887), representing a synthetic quinoxaline phenoxypropionic acid derivative (Gao et al, 1999), which was found to inhibit topoisomerase IIβ at mM concentrations. On the basis of our data, lycobetaine is at least 10-fold more potent than XK 469. Lycobetaine is rapidly taken up into cells and locates to the nucleus.
However, in view of the delay observed for the full onset of topoisomerase IIβ inhibition, it cannot be excluded at present that intracellular transport or metabolism also plays a role.

ACKNOWLEDGEMENTS

We thank A Genzlinger and H Zankl for their help with the flow cytometry measurements. The study was supported by the grant 0310938 of the Bundesministerium für Bildung, Wissenschaft, Forschung und Technologie, Germany (to GE) and the grants Bo 910/3-1 and Bo 910/4-1 of the Deutsche Forschungsgemeinschaft (to FB).

REFERENCES

Berger DP, Winterhalter BR and Fiebig HH (1992) Establishment and characterization of human tumor xenografts in thymus aplastic nude mice. Contr Oncol 42: 23–46
Boege F (1996) Analysis of eukaryotic DNA topoisomerases and topoisomerase-directed drug effects. Eur J Clin Chem Clin Biochem 34: 873–888
Chen AY, Chiang Y, Gatto B and Liu LF (1993) DNA minor groove binding ligands: a different class of mammalian DNA topoisomerase I inhibitors. Proc Natl Acad Sci USA 90: 131–135
Chen JZ, Chen KX, Jiang HL, Lin MW and Ji RY (1997) Theoretical investigation on interaction binding of analogs of AT-1840 to double-stranded polynucleotide. Proc Natl Sci 7: 329–335
Davies SM, Robson CN, Davies SL and Hickson ID (1988) Nuclear topoisomerase II levels correlate with the sensitivity of mammalian cells to intercalating agents and epipodophyllotoxins. J Biol Chem 263(33): 17724–17729
Drees M, Dengler WA, Roth T, Labonte H, Mayo J, Malspeis L, Grever M, Sausville EA and Fiebig HH (1997) Selective antitumor activity in vitro and activity in vivo for prostate carcinoma cells. Clin Cancer Res 3: 273–279
Evidente A, Iasiello I and Randazzo G (1984) An improved method for the large-scale preparation of lycorine. Chem & Ind, 348–349
Gan L, Liu XJ, Chen KL and Ji YY (1992) Computer simulation on the interaction between antitumor agent lycobetaine and DNA.
Gaojishu Tongxun 2: 30–33
Gao H, Huang KC, Yamasaki EF, Chan KK, Chohan L and Snapka RM (1999) XK469, a selective topoisomerase IIβ poison. Proc Natl Acad Sci USA 96: 12168–12173
Gedik CM, Ewen SWB and Collins AR (1992) Single-cell gel electrophoresis applied to the analysis of UV-C damage and its repair in human cells. Int J Radiat Biol 62: 313–320
Ghosal S, Kumar Y, Singh SK and Kumar A (1986) Chemical constituents of Amaryllidaceae. Part 21. Ungeremine and criasbetaine, two antitumor alkaloids from Crinum asiaticum. J Chem Res Synop 112–113
Ghosal S, Singh SK, Kumar Y, Unnikrishnan S and Chattopadhyay S (1988) Chemical constituents of Amaryllidaceae. Part 26. The role of ungeremine in the growth-inhibiting and cytotoxic effects of lycorine: evidence and speculation. Planta Med 54: 114–116
He HM and Weng ZY (1989) Structure-activity-relationship study of the new anticancer drug lycobetaine (AT-1840). Acta Pharm Sinica 24: 302–304
Knudsen BR, Straub T and Boege F (1996) Separation and functional analysis of eukaryotic DNA topoisomerases by chromatography and electrophoresis. J Chromatogr B Biomed Appl 684: 307–321
Lee KH, Sun L and Wang HK (1994) Antitumor agents. 147. Antineoplastic alkaloids from Chinese medicinal plants and their analogs. J Chin Chem Soc 41: 371–384
Liu J, Yang SL and Xu B (1989) Characteristics of the interaction of lycobetaine with DNA. Acta Pharmacol Sinica 10: 437–442
Marko D, Romanakis K, Zankl H, Fürstenberger G, Steinbauer B and Eisenbrand G (1998) Induction of apoptosis by an inhibitor of cAMP-specific PDE in malignant murine carcinoma cells overexpressing PDE activity in comparison to their nonmalignant counterparts. Cell Biochem Biophys 28: 75–101
Marko D, Schätzle S, Friedel A, Genzlinger A, Zankl H, Meijer L and Eisenbrand G (2001) Inhibition of cyclin-dependent kinase 1 (CDK1) by indirubin derivatives in human tumour cells.
Br J Cancer 84(2): 283–289
Meyer KN, Kjeldsen E, Straub T, Knudsen BR, Hickson ID, Kikuchi A, Kreipe H and Boege F (1997) Cell cycle-coupled relocation of types I and II topoisomerases and modulation of catalytic enzyme activities. J Cell Biol 136: 775–788
Owen TY, Wang SY, Chang SY, Lu FL, Yang CL and Hsu B (1976) A new antitumor substance lycobetaine (AT-1840). Ko Hsueh Tung Pao 21: 285–287
Roth T, Burger AM, Dengler W, Willmann H and Fiebig HH (1999) Human tumor cell lines demonstrating the characteristics of patient tumors as useful models for anticancer drug screening. In: Fiebig HH, Burger AM (eds) Relevance of tumor models for anticancer drug development, Contrib Oncol Vol 42, pp 145–156. Basel: Karger
Skehan P, Storeng R, Scudiero D, Monks A, McMahon J, Vistica D, Warren JT, Bokesch H, Kenney S and Boyd MR (1990) New colorimetric cytotoxicity assay for anticancer-drug screening. J Natl Cancer Inst 82: 1110–1112
Wang XW, Yu WJ, Shen ZM, Yang JL and Xu B (1987) Cytotoxicity of hydroxycamptothecin and four other antineoplastic agents on KB cells. Acta Pharmacol Sinica 8: 86–90
Workman P, Twentyman P, Balkwill F, Balmain A, Chaplin D, Double J, Embleton J, Newell D, Raymond R, Stables J, Stephens T and Wallace J (1998) United Kingdom Co-ordinating Committee on Cancer Research (UKCCCR) Guidelines for the Welfare of Animals in Experimental Neoplasia (Second Edition). Br J Cancer 77: 1–10
Wu H, Shen CY and Xu B (1987) Effect of lycobetaine on DNA circular dichroism. Chin J Pharmacol Toxicol 1: 272–276
Wu YL, Wu YX, Yu CS, Zhang SY, Su ZC and Jiang SJ (1988) The cytocidal effect of AT-1840 and parvovirus H-1 on gastric cancer cells. Shanghai Med J 11: 683–688
Zhang SY, Lu FL, Yang JL, Wang LJ and Xu B (1981) Effect on animal tumors and toxicity of lycobetaine acetate.
Acta Pharmacol Sinica 2: 41–45

Lycobetaine acts as a selective topoisomerase IIβ poison and inhibits the growth of human tumour cells

work_wgpiqgqeezcrbftvb772z54gz4 ---- Boise Center Aerospace Laboratory (BCAL)

Advancing research and providing training in remote sensing of the environment.

The Boise Center Aerospace Laboratory (BCAL) is a group of scholars focused on advancing research and promoting education in remote sensing and spatial technologies. We are passionate about exploring novel approaches to remote sensing to better understand and monitor the earth’s surface, and especially ecosystems. Remote sensing is interdisciplinary in nature and thus we collaborate with engineers, geoscientists, ecologists, and social scientists. We invite you to explore our website and get to know us! Our training provides students with skills and knowledge in science and technology for professional careers or continued study in all aspects of earth and environmental sciences and engineering.
Our research focuses on image processing, interpretation of remote sensing imagery, integration of global positioning systems and geographic information systems into remote sensing, and understanding spatial, spectral, and temporal scales of ecosystem and landscape processes. BCAL’s interdisciplinary focus brings together science and technology in the earth and environmental sciences and engineering.

Boise Center Aerospace Laboratory, bcal@boisestate.edu, 208.426.2933, Lab 4110, Environmental Research Building

work_wjeovlbbjzatzm3tcfbgcojhbi ---- PUBLIC SECTOR SCIENCE AND THE STRATEGY OF THE COMMONS

AJAY AGRAWAL, Queen’s School of Business, Queen’s University, Kingston, Ontario, Canada, K7L 3N6
LORENZO GARLAPPI, University of Texas at Austin

INTRODUCTION

This paper is motivated by a puzzling observation. Over the past decade, a variety of large firms and private sector consortia have approached prominent universities and expressed interest in sponsoring particular research laboratories. In return for their sponsorship, these organizations have requested that all inventions generated by the sponsored labs be licensed openly, on a non-exclusive basis only. Why non-exclusive? At first glance, this seems surprisingly generous – in fact, altruistic. Hence, the puzzle.
Consider three examples: 1) Kodak sponsors research in areas related to digital photography; 2) AT&T sponsors research in areas related to communication, including Internet telephony; 3) a consortium, comprised of several of the world's largest pharmaceutical firms, sponsors research related to the Human Genome Project. In each case, the sponsorship stipulates no exclusive licensing. Why would the sponsoring firms choose to disallow exclusive licensing - which has been the norm at universities since the Bayh-Dole Act of 1980 [1] - especially since these firms would be prime candidates for licensing the inventions themselves?

One hypothesis is simply that sponsoring firms are worried that competing firms might obtain the exclusive license first. This is certainly a reasonable explanation, but not altogether consistent with the evidence. Historically, sponsoring firms have enjoyed favorable information advantages regarding the research outcomes of the labs they sponsor since they often receive interim briefings prior to publications or conference presentations. So, in practice, they are usually ‘first in line’ for any related exclusive licenses.

In this paper we explore a second, less obvious, explanation. The hypothesis modeled here is that firms request non-exclusive licensing regimes in order to prevent, or at least slow down, the commercial development of inventions in a particular technological market. In other words, firms sponsor research in a laboratory specifically because they wish to retard the development of particular areas of innovation. They purposely spoil the incentives to develop and commercialize inventions from the sponsored lab by creating a market failure. Sponsoring firms accomplish this by creating an intellectual property ‘commons’ under which no firm is able to obtain exclusive property rights. Why would firms do this?
Under some conditions, if a new market (based on a new technology) is related to an existing market in such a way that the former will cannibalize the latter, it may be profitable for an entrant to develop the invention but harmful for the incumbent to do so. In other words, the incumbent’s profits in the original market will be reduced if an entrant develops the invention, or even if the incumbent itself does so. From this perspective, one can imagine reasons why Kodak may want to delay the development of digital photography, AT&T the development of Internet telephony, and large pharmaceutical firms the development of processes for human gene mapping [2].

Academy of Management Proceedings 2002 BPS: X1

Under this threat of cannibalization, one might ask why the incumbent doesn't simply license the patent and leave the technology dormant. The answer lies in the licensing contract that is hand-crafted for each agreement. Benchmarks, milestones, expenditure commitments, and other timeline components associated with product development and commercialization are specified in the contract. Technology licensing officers refer to these contractual conditions as ‘use it or lose it’ clauses that ensure that the mandate of the university is reflected in the conditions of the contract [3]. Indeed, anecdotal evidence suggests that the incidence of the strategy of the commons is positively correlated with an increase in the sophistication of the ‘use it or lose it’ contractual terms, both of which have varied across research organizations. In any case, licensing university inventions and not developing them no longer appears to be a feasible strategy for mitigating the effects of cannibalization. The idea of market cannibalization has been well studied.
Arrow (1962) explicitly discussed the ‘replacement effect’ and argued that a monopolist incumbent would have a lower willingness to pay for an innovation than an entrant, since the incumbent would be concerned about replacing its sunk assets and thus have, relatively, less incentive to innovate. Since Arrow, many other scholars have examined particular economic effects of market cannibalization. For example, Abernathy and Utterback (1978) compare incremental and radical innovation and offer a number of reasons, including cannibalization, to explain why radical innovation is typically carried out by entrants rather than incumbents. Foster (1986) popularized the concept of the S-curve for technologies, the shape of which is defined by the increase in performance relative to the development effort expended. Discontinuities in the curve represent new technologies that are often developed by entrants because they have the potential to cannibalize the existing product market. Gans and Stern (1997) model the allocation of rents from innovation amongst incumbents and entrants that is dependent on the existence and terms available on the ‘market for ideas’ and use this framework to consider the way in which cannibalization affects the underlying incentives for either firm to conduct R&D. Finally, Christensen (1997) examines the concept of ‘disruptive’ technologies in a number of product markets, most notably the disk drive industry. In this analysis, cannibalization is once again offered as a primary explanation for development by entrants but not incumbents.

THE MODEL

Here we describe a simple game-theoretic model that we use to investigate the conditions under which it is possible to observe the ‘strategy of the commons’ as a result of profit-maximizing behavior of players in the licensing game. While a brief outline of the model follows, a complete description of how the model is developed is provided in the full version of the paper.
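Before turning to the model's structure, Arrow's replacement effect discussed above can be illustrated with a small numerical sketch. Everything below is our own illustration, not this paper's model: it assumes a linear inverse demand p = a − q for the new product, zero marginal cost, and a constant per-unit loss k that the incumbent suffers in its old market for each unit of the new product sold.

```python
# Illustrative sketch of Arrow's "replacement effect".
# Assumptions (not from the paper): new-market inverse demand p = a - q,
# zero marginal cost, and the incumbent loses k of old-market profit per
# unit of the new product it sells.

def entrant_license_value(a: float) -> float:
    """Entrant's value of an exclusive license: new-market monopoly profit.
    max over q of q*(a - q) = a**2/4, attained at q = a/2."""
    return a * a / 4.0

def incumbent_license_value(a: float, k: float) -> float:
    """Incumbent's value: it nets out the cannibalization loss k per unit.
    max over q of q*(a - q) - k*q = (a - k)**2/4 whenever k < a."""
    if k >= a:
        return 0.0  # optimal to shelve the technology entirely (q = 0)
    return (a - k) ** 2 / 4.0

a, k = 10.0, 4.0
print(entrant_license_value(a))       # 25.0
print(incumbent_license_value(a, k))  # 9.0
```

For any k > 0 the incumbent's willingness to pay, (a − k)²/4, is strictly below the entrant's a²/4 — the replacement effect that makes radical, cannibalizing innovations relatively more valuable to entrants.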
At the beginning of the game, a sponsoring firm selects a licensing regime for the invention that will potentially be generated. We refer to this firm as the incumbent. We assume the incumbent firm has monopoly power in the original market in which it operates (the ‘old’ market). The incumbent, when sponsoring university research, can decide to select either an exclusive or a non-exclusive licensing regime for the inventions that will be the basis of the ‘new’ market. An exclusive licensing regime is one under which only one firm may license the right to use a patented technology at any given time. This also includes technologies that are protected by copyright, trademark, and other forms of legal intellectual property protection. This is in contrast to a non-exclusive licensing regime, under which more than one firm may simultaneously license the right to use a protected technology. For the sake of clarity and simplicity, issues such as sub-licensing and restricted fields of use are not considered here.

The main implication that arises from the exclusivity distinction in licensing regimes is with regard to competition. In the exclusive case, the licensee firm maintains a monopoly of the technology, whereas in the non-exclusive case, the licensee firm faces either direct competition or at least the threat of competition from other firms. For tractability, we assume that there exists only one potential entrant in the new market. The resulting game is hence a two-player game in which an established incumbent and a potential entrant interact in the adoption of a new technology. The two firms are equally efficient in the utilization of the new technology, which is used to develop a product that is a partial substitute for the one already produced by the incumbent. The degree to which the new product is a substitute for the old product (the degree of cross-price elasticity) is in fact a key ingredient in the model.
This determines the level of cannibalization, which directly affects the incumbent’s payoffs since the incumbent holds a monopoly in the old market. Throughout the analysis we also assume that patents are enforceable and cannot be ‘invented around.’ This means that a firm must license the patent in order to produce the new technology product. In other words, firms must engage in the licensing game in order to compete in the new market.

The dynamics of the game are summarized in Figure 1. If the incumbent selects an exclusive licensing regime, then both firms decide simultaneously whether or not to license. This is because at most one firm may obtain rights to the license. In the non-exclusive regime, the licensing decisions are modeled as a sequential game with the entrant moving first, since it is possible for both firms to have licensing rights to the invention simultaneously.

-----------------------------
Figure 1 about here
-----------------------------

It is important to note that the order of the sequential non-exclusive licensing subgame produces an outcome that is the same as that which would result if the subgame were infinitely repeated, with no specified order. In the infinitely repeated game, the entrant is always faced with the threat of subsequent entry by the incumbent. Therefore, what is critical is not which player is allowed to move first, but rather which player is allowed to move second. The incumbent is only able to threaten the entrant with entry if she is able to license after the entrant has already done so.
The ‘strategy of the commons’ outcome occurs when the incumbent selects a non-exclusive licensing regime and, in the ensuing sequential game, both the entrant and the incumbent decide optimally not to invest in the license, even though the new technology would be profitable under an exclusive licensing regime. The payoffs are generated by deriving industry equilibrium profits drawn from a simple downward sloping linear demand function. While we do not describe the payoffs in this summary version of the paper, we offer some basic intuition. There are three types of outcomes in the new market: 1) no firms enter, 2) one firm enters, or 3) both firms enter. Each outcome in the new market has a distinct effect on the incumbent’s profits in the old market. If no firms enter, the old market is not cannibalized. If both firms enter, the old market is cannibalized to a greater degree than if just one firm enters the new market. The incumbent firm maximizes total profits (new market plus old market), while the entrant only maximizes profits in the new market. The competition that results, under a certain set of conditions that depend on the size and profitability of the new market relative to the old market as well as the degree of cannibalization, leads the incumbent firm to play the strategy of the commons. These conditions are described in the following section. THE STRATEGY OF THE COMMONS AS AN EQUILIBRIUM SOLUTION The strategy of the commons occurs when the entrant would enter in an exclusive licensing regime but the incumbent selects a non-exclusive regime such that it may credibly threaten the entrant with entry, ultimately deterring the entrant from entering. The ‘social’ outcome is therefore a situation in which, despite the potential profitability to the entrant, neither firm optimally decides to invest in the license and the invention is not put into use. 
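The two-stage game just described can be solved by backward induction. The sketch below uses purely illustrative payoff numbers chosen by us to satisfy the three conditions discussed next — the paper itself derives payoffs from a linear demand system. We assume an old-market monopoly profit of 10, an entrant monopoly in the new market worth 4 net of the license fee, and duopoly competition that pushes the entrant below its entry threshold while (by assumption) still leaving the incumbent better off following the entrant in than conceding the new market.

```python
# Backward-induction sketch of the licensing game. All payoff numbers are
# illustrative assumptions, not values derived in the paper.
# Payoffs are (entrant, incumbent); old-market monopoly profit is 10.
PAYOFFS = {
    # Exclusive regime: the entrant licenses and enters alone.
    "exclusive": (4.0, 5.0),      # entry cannibalizes the old market: 10 -> 5
    # Non-exclusive regime: entrant moves first, incumbent replies.
    ("in", "in"): (-1.0, 5.5),    # duopoly: entrant below its entry threshold
    ("in", "out"): (4.0, 5.0),
    ("out", "in"): (0.0, 9.0),    # incumbent alone: new profit < cannibalization
    ("out", "out"): (0.0, 10.0),  # nobody licenses: old monopoly intact
}

def solve_non_exclusive():
    """Entrant anticipates the incumbent's best reply to each of its moves."""
    def incumbent_reply(e):
        return max(("in", "out"), key=lambda i: PAYOFFS[(e, i)][1])
    e = max(("in", "out"), key=lambda m: PAYOFFS[(m, incumbent_reply(m))][0])
    return (e, incumbent_reply(e))

path = solve_non_exclusive()      # ("out", "out"): the commons outcome
# Stage 1: the incumbent picks the regime with the higher incumbent payoff.
regime = ("non-exclusive"
          if PAYOFFS[path][1] > PAYOFFS["exclusive"][1]
          else "exclusive")
print(path, regime)               # ('out', 'out') non-exclusive
```

With these numbers the commons emerges: under exclusivity the entrant would enter (4 > 0), leaving the incumbent with 5, so the incumbent prefers the non-exclusive regime, where its credible threat of follow-on entry (5.5 > 5.0) deters the entrant (−1 < 0) and the invention sits undeveloped while the incumbent keeps its old-market profit of 10.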
In the paper we derive conditions under which the strategy of the commons can emerge as a subgame perfect equilibrium. Three conditions must be met in order for the strategy of the commons to emerge.

Condition 1 (Exclusive Regime). The payoffs in the exclusive regime are such that the entrant will invest in the development of the new technology and enter the new market.

Condition 2 (Non-exclusive Regime). The payoffs in the non-exclusive regime are such that the entrant will not enter. This is because the incumbent utilizes its option to move second in the sequential game (second-mover advantage), allowing it to credibly threaten the entrant. If the entrant doesn’t enter, neither will the incumbent, who will continue to enjoy a monopoly in the old market that remains non-cannibalized. However, should the entrant enter, the incumbent
One such way is to create a market failure by dismantling the legal architecture that offers the intellectual property protection that is often critical to entrants for purposes of raising capital and attracting early adopters. In fact, in many cases it is considered necessary for entrants to acquire a ‘thicket’ of related patents around the key patent in order to instill the required confidence in early stage investors. This implies that the strategy of the commons does not require the incumbent to sponsor all research in a particular area to be effective, only enough to prevent an entrant from obtaining all of the exclusive intellectual property rights to a potentially threatening substitute. In most cases, a tightly protected intellectual property position is significantly more important for an entrant than for an incumbent. From a policy perspective, governments and university administrations may consider whether particular areas of technical research should be protected from incumbents playing the strategy of the commons. In other words, public sector officials may consider some areas of technological innovation particularly likely to produce ‘disruptive’ technologies that might not be developed by incumbent firms but would likely be developed by entrants. In most cases, these will be technologies that will enable products that have significant cannibalization effects (high cross-price elasticities with existing products). In these cases, protection of the legal architecture that establishes private intellectual property rights might be considered. ENDNOTES 1. The Bayh-Dole Act (Public Law 96-517) assigned ownership and control of patents derived from federally funded research to performing institutions, rather than the sponsoring federal agency. 
Most relevant to this study, it granted non-profit organizations the right to offer exclusive licenses, a right that, as the Columbia University Technology Licensing Office describes, “provided the incentives for the venture capital industry to invest in unproven technology [...]. The results have been dramatic. A trickle of university patents, 200 in 1980, has turned into a flood - now more than 3,000 applications a year” (Winter, 1998). 2. The latter example refers to the SNP Consortium, which consists of several large pharmaceutical rivals including Novartis, Glaxo Wellcome, Pfizer, and SmithKline Beecham. This consortium was formed in 1999 for the sole purpose of sponsoring public-sector research to identify and patent SNPs (Single Nucleotide Polymorphisms) in order to prevent smaller biotechnology firms from entering and obtaining exclusive rights to this genetic information (The Wall Street Journal (03/04/1999), US News & World Report (10/18/99), The Economist (12/04/99)). SNPs are differences in the DNA of individuals that are likely to be important in tracking the genetic causes of disease. 3. The mandate of most research universities, with respect to patent licensing, is to promote the development of their inventions rather than to maximize profits. For example, the MIT Technology Licensing Office states that “in our technology licensing endeavor, MIT is following the mandate of the US Congress when it gave universities title to inventions developed with federal funds: We use licenses to our intellectual property to induce development of our inventions into products for the public good'' (MIT TLO promotional pamphlet, 1996).
References Available From the Author. (Academy of Management Proceedings 2002.)

work_wkyt4bradng5xiln5zzm2rh5wy ----

Clinical Study: Evaluation of Ocular Versions in Graves’ Orbitopathy: Correlation between the Qualitative Clinical Method and the Quantitative Photographic Method. Cristiane de Almeida Leite, Thaís de Sousa Pereira, Jeane Chiang, Allan C. Pieroni Gonçalves, and Mário L. R. Monteiro. Laboratory of Investigation in Ophthalmology (LIM 33), Division of Ophthalmology, University of São Paulo Medical School, São Paulo, Brazil. Correspondence should be addressed to Allan C. Pieroni Gonçalves; allanpieroni75@gmail.com. Received 8 April 2020; Accepted 17 July 2020; Published 3 August 2020. Academic Editor: Van C. Lansingh. Copyright © 2020 Cristiane de Almeida Leite et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. Purpose. To assess the agreement between the qualitative clinical method and the quantitative photographic method of evaluating normal and abnormal ocular versions in patients with inactive Graves’ orbitopathy (GO). Methods. Forty-two patients with inactive GO had their ocular versions evaluated clinically according to three categories: normal, moderate alterations (−1 or −2 hypofunction), and severe alterations (−3 or −4 hypofunction). The subjects were photographed in the 9 positions of gaze, and the extent (mm) of eye movement in each position was estimated using Photoshop® and ImageJ and converted into degrees with a well-established method. The agreement between the two methods (qualitative vs. quantitative) for classifying ocular versions as normal or abnormal was assessed. Results.
The mean quantitative measurements of versions were significantly different for each clinical category (normal, moderate alterations, and severe alterations) in the following five positions: abduction, adduction, elevation in abduction, elevation, and elevation in adduction (p<0.001). No such pattern was observed for the three infraversion positions (depression in abduction, p = 0.573; depression, p = 0.468; depression in adduction, p = 0.268). Conclusion. The agreement was strong between the quantitative photographic method and the qualitative clinical method of classifying ocular versions, especially in lateral and supraversions, which are typically affected in GO. Digital photography is recommended for the assessment of ocular versions due to its practicality, suitability for telemedicine applications, and ease of monitoring during follow-up. This trial is registered with NCT03278964. 1. Introduction Assessment of ocular versions is an essential part of the study of extrinsic ocular motility, helping in the diagnosis and treatment of eye movement disorders, especially incomitant, restrictive, and paralytic strabismus [1]. Evaluations during clinical examination are usually qualitative. The patient is instructed to follow an object presented by the examiner, from the primary position to secondary and tertiary positions of gaze. For each muscle involved, versions are graded from −1 to −4 for hypofunction and from +1 to +4 for hyperfunction. Due to high interobserver variability and standardization errors, the method is heavily dependent on examiner experience [2, 3]. To circumvent this problem, quantitative measuring methods with objective scales have been proposed [4–11]. Quantitative version assessments can be made with kinetic methods (the patient following a moving target) or static methods (measuring the angle of movement in a given position of gaze) [5].
Examples of the former are the limbus test [8], the lateral version light-reflex test [9], and the use of ophthalmic devices such as perimeters [10]. The latter includes the use of Hess and Lancaster screens [5]. In 2014, Lim and colleagues described a modified limbus test, evaluating versions based on photographs taken in the cardinal positions of gaze [11]. This low-cost method has proven to be reproducible and easily implemented in clinical practice.

Hindawi Journal of Ophthalmology, Volume 2020, Article ID 9758153, 7 pages. https://doi.org/10.1155/2020/9758153

In this study, we evaluated the agreement between the qualitative clinical method and the quantitative photographic method of assessing ocular versions in a sample of patients with Graves’ orbitopathy (GO) with different degrees of ocular version abnormalities. 2. Methods This prospective and comparative study was conducted at a hospital-based outpatient referral ophthalmology service in São Paulo, Brazil. The study protocol complied with the tenets of the latest revision of the Declaration of Helsinki and was approved by the Institutional Review Board of the University of São Paulo Medical School. All participants gave their informed written consent. Between January 2015 and November 2018, 42 patients in the inactive phase of GO were studied. GO was quantified with the Clinical Activity Score (CAS) [12]. Patients with CAS < 3 for at least 6 months and time of onset of GO > 2 years were considered to have inactive disease.
The inclusion criteria were as follows: (i) diagnosis of GO in the inactive phase, (ii) informed written consent to participate in the study, (iii) age above 21 years, (iv) euthyroidism, (v) Hertel exophthalmometry ≥20 mm, (vi) absence of eye abnormalities such as degenerative myopia, microphthalmos, and anophthalmia, (vii) absence of orbital abnormalities such as previous fractures, surgery, or congenital defects, (viii) absence of eye motility diseases such as myasthenia gravis, and (ix) sufficient cooperation during the evaluation. 2.1. Clinical Measurements. The patients were submitted to a complete ophthalmological examination and orthoptic assessment, including a qualitative version evaluation of the nine positions of gaze. A single experienced strabismus specialist made all clinical version assessments. Versions were graded taking into account basic anatomical landmarks such as the position of the limbus in relation to the medial and lateral canthus (horizontal versions) and the excursion beyond the primary gaze position (vertical versions). We used a scale from −1 to −4 to qualify hypofunction and a scale from +1 to +4 to qualify hyperfunction for each muscle in its field of action. Normal versions were noted as 0 [3, 13–15]. To evaluate the ability of the photographic method to detect different patterns of version impairments and assess the correlation between the two methods (qualitative vs. quantitative), we first classified each version of individual patients with the clinical evaluation and divided the results into three categories: (1) Normal (no hypofunction) (2) Moderate alteration (hypofunction of the evaluated muscle from −1 to −2) (3) Severe alteration (hypofunction of the evaluated muscle from −3 to −4) 2.2. Photography. A single trained ophthalmologist took standardized frontal photographs (Canon Power-Shot SX530 HS) of each subject. The patient was positioned in a chair with a clean background at a distance of 50 cm from the camera lens.
With the head adequately aligned horizontally and vertically, photographs were taken in the nine cardinal positions of gaze (primary gaze, supradextroversion, supraversion, supralevoversion, dextroversion, levoversion, infradextroversion, infraversion, and infralevoversion). Verbal encouragement was given to ensure head stability and maximum effort toward the extremes of gaze. In case of inappropriate movement, the photographs were repeated. In the infraversions, the eyelids were pulled for better observation. The photograph also included a 12-mm circular sticker for digital calibration (Figure 1). 2.3. Digital Photographic Measurements. A single researcher processed and analyzed the digital images using the method proposed by Lim et al. [11, 16]. Using the software Photoshop (Adobe, San Jose, CA, USA, version 19.1.9), semitransparent photographs of the patient’s versions were successively juxtaposed on a photograph in the primary gaze position (Figure 2(a)) [11]. We then measured the distance (mm) between the limbi of the overlapping photographs with the assistance of the software ImageJ (the National Institutes of Health, Bethesda, MD, USA, version 1.52a) [11]. Pixels and mm were calibrated using the 12-mm circular sticker as reference (Figure 2(b)). As per Lim’s method, the limbus-to-limbus distance (mm) was converted into degrees of eyeball rotation with the formula α = arcsin(D/r), where α is the angle of ocular movement, D is the interlimbus distance, and r is the external radius of the eyeball, based on axial length measured with the IOLMaster biometer (Zeiss Humphrey System, Dublin, CA, USA) [11]. 2.4. Statistical Analysis. The statistical analysis was performed using the software Stata v. 15 (StataCorp, College Station, TX, USA) and Statistica v. 13 (TIBCO Software Inc., Palo Alto, CA, USA). The descriptive statistics included arithmetic means and standard deviations.
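The calibration and conversion steps of Section 2.3 reduce to two one-line formulas. A minimal sketch follows; the function names and sample values are ours, purely illustrative, and not taken from the study:

```python
import math

def px_to_mm(distance_px: float, sticker_px: float, sticker_mm: float = 12.0) -> float:
    """Convert a measured distance from pixels to millimetres, calibrated
    against the 12-mm circular sticker included in each photograph."""
    return distance_px * sticker_mm / sticker_px

def version_angle_deg(interlimbus_mm: float, eye_radius_mm: float) -> float:
    """Lim's conversion alpha = arcsin(D / r): D is the limbus-to-limbus
    distance between juxtaposed photographs, r the external radius of the
    eyeball derived from the measured axial length."""
    return math.degrees(math.asin(interlimbus_mm / eye_radius_mm))

# Illustrative values only: a 6 mm excursion on an eye of 12 mm external
# radius corresponds to arcsin(0.5), i.e. 30 degrees of rotation.
d_mm = px_to_mm(distance_px=180.0, sticker_px=360.0)  # 6.0 mm
print(round(version_angle_deg(d_mm, 12.0), 1))        # 30.0
```

Note that arcsin requires D ≤ r; in practice the interlimbus excursion is well inside that bound.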
We used ANOVA or Student’s t-test for independent samples to assess the agreement between qualitative and quantitative variables. We calculated the mean of the maximum angle of the eight secondary and tertiary gaze positions for each clinical category. Using ANOVA and the Tukey-HSD test, we compared the three qualitative categories with regard to the mean angle of version. Statistically significant differences between the means of each category were considered an indication of agreement between the methods. We also used the Spearman correlation coefficient to assess the correlation between clinical qualitative categories and photographic quantitative measurements. All statistical tests used an alpha error of 5%. Thus, results were considered statistically significant when p<0.05. 3. Results All 42 patients met the inclusion criteria, with a predominance of the female sex (n = 31; 73.8%). The mean age was 48.7±11.9 years. Figure 3 shows the mean angles of the 8 secondary and tertiary positions of gaze in patients with normal clinical versions. Tables 1 and 2 show the mean (±standard deviation) measurements (in degrees) of the quantitative measurements for each clinical category. The mean quantitative measures corresponding to the 3 qualitative categories (normal, moderate alteration, and severe alteration) differed significantly in 5 positions of gaze: abduction, adduction, elevation in abduction, elevation, and elevation in adduction (the only two exceptions among the 15 correlations being “normal vs. moderate in abduction” and “moderate vs. severe in elevation in adduction”), indicating a good level of agreement between the two methods (Table 1). On the other hand, in the 3 remaining positions of gaze (depression in abduction, depression, and depression in adduction), which are barely affected in GO, the mean quantitative measures did not vary significantly between the two possible categories (normal vs.
moderate alteration) (Table 2). Figure 4 is a graphic representation of the agreement between the two methods concerning each of the 8 secondary and tertiary positions of gaze. We also assessed the correlation between the two methods using Spearman correlation coefficients. Statistically significant negative correlations were observed for the following variables: abduction (rho = −0.321, p<0.001), adduction (rho = −0.405, p<0.001), elevation in abduction (rho = −0.627, p<0.001), elevation (rho = −0.527, p<0.001), and elevation in adduction (rho = −0.554, p<0.001). No statistically significant correlations were observed between the methods for depression in abduction (rho = 0.055, p = 0.477), depression (rho = 0.069, p = 0.376), and depression in adduction (rho = 0.062, p = 0.430) (Table 3).

Figure 1: Photographs of the 9 cardinal positions of gaze.
Figure 2: (a) Juxtaposition of semitransparent images of primary gaze and dextroversion (Photoshop) for quantitative version evaluation. (b) Evaluation of dextroversion. Right eye: in abduction, the distance between the medial limbi of the juxtaposed photos is measured. Left eye: in adduction, the distance between the lateral limbi is measured.
Figure 3: Mean degrees of versions in the cardinal positions of gaze in Graves’ orbitopathy patients with clinically normal versions (based on Lim and colleagues) [11].

4. Discussion Ductions are uniocular rotations, while versions are synchronous, simultaneous rotations of the two eyes in the same direction. Version evaluation can identify subtle imbalances in eye movements that may be missed in duction evaluation [2]. Several methods of assessing ocular rotations during the extrinsic eye motility examination have been proposed, but few studies have compared these methods [4, 6, 7, 17]. In 1899, in one of the first studies on eye movement, Asher evaluated his versions [5].
Later, in 1916, Hess recorded the static position of the eyes on a two-dimensional chart (the Hess screen test). The test has since been automated and is currently used to evaluate diplopia and changes in extraocular movements. The Lancaster screen test and the Harms wall test use screens to record eye positions and vertical, horizontal, and torsional deviations [5].

Table 1: Agreement between qualitative clinical categories and quantitative photographic measurements of version in 5 positions of gaze (abduction, adduction, elevation in abduction, elevation, and elevation in adduction) in a sample of 42 patients with GO. Values are mean (SD) in degrees; the overall p value is from ANOVA, pairwise p values from the Tukey-HSD post hoc test. Normal = no hypofunction; moderate alteration = hypofunction from −1 to −2; severe alteration = hypofunction from −3 to −4.
Abduction: normal (n = 18) 46.88 (8.04); moderate (n = 52) 42.06 (9.20); severe (n = 14) 31.33 (15.07). ANOVA p<0.001; normal vs. moderate p = 0.161; normal vs. severe p<0.001; moderate vs. severe p = 0.001.
Adduction: normal (n = 47) 42.84 (10.95); moderate (n = 34) 32.68 (9.20); severe (n = 3) 15.66 (16.26). ANOVA p<0.001; normal vs. moderate p<0.001; normal vs. severe p<0.001; moderate vs. severe p = 0.007.
Elevation in abduction: normal (n = 47) 39.05 (10.95); moderate (n = 19) 24.20 (15.07); severe (n = 18) 10.36 (10.30). ANOVA p<0.001; all pairwise comparisons p<0.001.
Elevation: normal (n = 51) 24.83 (10.36); moderate (n = 18) 15.66 (9.20); severe (n = 15) 8.62 (6.89). ANOVA p<0.001; normal vs. moderate p<0.001; normal vs. severe p<0.001; moderate vs. severe p = 0.025.
Elevation in adduction: normal (n = 52) 33.36 (10.95); moderate (n = 18) 18.05 (12.12); severe (n = 14) 13.29 (10.95). ANOVA p<0.001; normal vs. moderate p<0.001; normal vs. severe p<0.001; moderate vs. severe p = 0.242.

Table 2: Agreement between qualitative clinical categories and quantitative photographic measurements of version in 3 positions of gaze (depression in abduction, depression, and depression in adduction) in a sample of 42 patients with GO. Values are mean (SD) in degrees; p values are from Student’s t-test for independent samples. Normal = no hypofunction; moderate alteration = hypofunction from −1 to −2.
Depression in abduction: normal (n = 75) 57.14 (9.78); moderate (n = 9) 59.31 (12.12); p = 0.573.
Depression: normal (n = 80) 54.09 (9.78); moderate (n = 4) 59.31 (6.89); p = 0.468.
Depression in adduction: normal (n = 78) 62.87 (10.95); moderate (n = 6) 68.43 (8.04); p = 0.268.

The limbus test was developed by Kestenbaum. He measured ocular versions in millimeters with a transparent ruler positioned in front of the cornea, making it possible to compare the position of the limbus from the primary to the secondary and tertiary positions of gaze [8]. Urist, in turn, developed the lateral version light-reflex test, in which the examiner places a luminous focus in front of the patient’s eye and observes the position of the light reflex in the sclera while the patient performs extreme lateroversion. The difference is measured in millimeters and converted into degrees using the Hirschberg scale (1 mm = 7°) [9]. Other authors have used ophthalmic devices to measure ductions and versions with greater accuracy. Thus, in 1950, Yamishoro used a keratometer to determine the position of the limbus in adduction, abduction, and supra- and infraduction in a sample of 100 healthy patients [5]. More recently, in 1994, Mourits measured the ductions of 40 healthy patients using a modified Schweiger perimeter [10]. The synoptophore may be used to evaluate binocular rotations, despite the 30° limitation in the evaluation of vertical rotations, but the most commonly used ophthalmic device for measuring binocular rotations is Goldmann’s manual perimeter [18]. Using a manual perimeter, Haggerty and colleagues concluded that measurements with less than 5° variation might be considered accurate and reliable [18].
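Urist’s light-reflex conversion described above is a single multiplication on the Hirschberg scale. A minimal sketch (the function and constant names are ours, not from the cited work):

```python
# Hirschberg scale: 1 mm of light-reflex displacement corresponds to about 7 degrees.
HIRSCHBERG_DEG_PER_MM = 7.0

def light_reflex_degrees(displacement_mm: float) -> float:
    """Convert the measured light-reflex displacement (mm) into degrees."""
    return displacement_mm * HIRSCHBERG_DEG_PER_MM

print(light_reflex_degrees(2.0))  # 14.0
```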
Finally, Kushner developed the so-called cervical-range-of-motion device (CROM) to record binocular rotations, anomalous head positions, and binocular field of view [19]. Holmes proposed a photographic method for assessing abduction restrictions in patients with sixth cranial nerve palsy. The method is based on photographs of the patient fixating in dextro- and levoversion. With a ruler, the examiner measures the abduction deficit in millimeters. At the time, the method was considered simple, effective, and reproducible, with good interobserver agreement [20]. More recently, eye-tracking methods or search coils have been used to measure eye movement automatically and quantitatively. However, these methods are too laborious and costly for everyday clinical practice [21]. The techniques discussed above yield highly variable results. Moreover, their usefulness is, in many cases, limited by the need for ophthalmological devices, such as manual perimeters, which are becoming obsolete and can only evaluate ductions. In the present study, we evaluated the method most commonly employed in clinical practice (qualitative assessment) and a simple and affordable quantitative method of measuring versions based on digital photographs [11]. The qualitative clinical method of version assessment is highly dependent on examiner skill and therefore associated with considerable interobserver variability. This is particularly relevant for patients with GO, whose therapeutic follow-up requires quantifying running changes in version amplitude [5]. The digital photographic method of Lim and colleagues is a modification of a method originally proposed by Kestenbaum (the limbus test). Patients are photographed with a digital camera while fixing in the nine positions of gaze. The obtained images are then analyzed with the software Photoshop and ImageJ, and the interlimbus distance (mm) is converted into degrees to determine the maximum angle of movement in each position.
Inexpensive and easy to perform, the method is associated with very low interobserver variability (i.e., good reproducibility and accuracy) [11]. The two methods were in agreement concerning five positions of gaze (abduction, adduction, elevation in abduction, elevation, and elevation in adduction). That is, a hypofunction clinically diagnosed as moderate (−1 or −2) correlated well with an angle measured on digitalized photographs that was statistically different from the findings for normal version or severe hypofunction (−3 or −4). Therefore, if performed by an experienced examiner, qualitative and quantitative assessments are likely to yield similar results. However, quantitative photographic assessments are easier to perform, making it possible for different and even less experienced physicians to obtain consistent results at different times, or to remotely diagnose patients, discuss therapies, and monitor response (telemedicine). The statistical difference observed for the five positions of gaze above was not replicated in the assessment of the infraversions (depression in abduction, depression, and depression in adduction), most likely because in general GO patients are known to display only mild changes in infraversion. Accordingly, in our sample, no cases of severe infraversion alterations were observed and few patients displayed even moderate abnormalities (depression in abduction n = 9, depression n = 4, and depression in adduction n = 6, all of whom had a clinically diagnosed hypofunction of −1). This would explain why the mean quantitative measurements corresponding to the two possible clinical categories (normal and moderate alterations) were not significantly different. Whether GO is treated surgically or clinically, changes in versions should be measured with the most objective method possible.
The quantitative method evaluated in this study yields relatively consistent measurements between examiners and thus is a more useful tool in the evaluation of changes in ocular movement following treatment. However, regardless of the method, the quality of the measurements depends on a wide range of factors: patient comfort, control of head movement, simplicity and accuracy of the procedure, reproducibility, and inter- and intraobserver variability. The literature provides no gold standard for assessing eye movements. At this point, traditional methods requiring devices that are no longer manufactured (such as manual perimeters) should be replaced. Digital photography appears to be an affordable, reproducible, and accurate alternative. In conclusion, we found strong correlations between the qualitative clinical method and the quantitative photographic method of assessing ocular versions, especially with regard to lateral and supraversions, which are most typically affected in GO. Ophthalmologists are advised to adopt digital photography for the assessment of ocular versions due to its practicality, suitability for telemedicine applications, and ease of monitoring during follow-up. Data Availability. The datasets used and analyzed during the current study are available upon reasonable request to the co-author Allan C. Pieroni Gonçalves (allanpieroni75@gmail.com). Disclosure. The funding organizations had no role in the design or conduct of the study. Conflicts of Interest. None of the authors have any conflicts of interest to declare. Authors’ Contributions. CAL, JC, and TSP contributed to the acquisition, analysis, and interpretation of data. CAL, ACPG, and MLRM made substantial contributions to the conception, design, interpretation of data, and drafting of the manuscript.
Acknowledgments. This study was supported by grants from CAPES (Coordenação de Aperfeiçoamento de Nível Superior, Brasília, Brazil) and CNPq (Conselho Nacional de Desenvolvimento Científico e Tecnológico) (grant number: 308172/2018–3), Brasília, Brazil.

Figure 4: Graphic representation of the agreement between qualitative clinical categories and quantitative photographic measurements of version in the 8 secondary and tertiary positions of gaze in 42 patients with GO.

Table 3: Correlation between qualitative clinical categories and quantitative photographic assessments of version in a sample of 42 patients with GO (rho = Spearman correlation coefficient).
Abduction: rho = −0.321, p<0.001. Adduction: rho = −0.405, p<0.001. Elevation in abduction: rho = −0.627, p<0.001. Elevation: rho = −0.527, p<0.001. Elevation in adduction: rho = −0.554, p<0.001. Depression in abduction: rho = 0.055, p = 0.477. Depression: rho = 0.069, p = 0.376. Depression in adduction: rho = 0.062, p = 0.430.

References [1] A. Bielschowsky, Lectures on Motor Anomalies, Dartmouth College Publications, Hanover, NH, USA, 1956. [2] G. K. Von Noorden, Binocular Vision and Ocular Motility, Mosby, St. Louis, MO, USA, 5th edition, 1996. [3] G. K. Von Noorden, Atlas of Strabismus, Mosby Company, St. Louis, MO, USA, 4th edition, 1983. [4] S. Hanif, F. Rowe, and A.
O’Connor, “A comparative analysis of monocular excursion measures,” Strabismus, vol. 17, no. 1, pp. 29–32, 2009. [5] S. Hanif, F. J. Rowe, and A. R. O’Connor, “A comparative review of methods to record ocular rotations,” British and Irish Orthoptic Journal, vol. 6, pp. 47–51, 2009. [6] S. Hanif, A. O’Connor, and F. Rowe, “Measuring uniocular fields of rotation: modified Goldmann perimetry versus Aimark perimetry,” Strabismus, vol. 22, no. 3, pp. 125–132, 2014. [7] F. J. Rowe and S. Hanif, “Uniocular and binocular fields of rotation measures: Octopus versus Goldmann,” Graefe’s Archive for Clinical and Experimental Ophthalmology, vol. 249, no. 6, pp. 909–919, 2011. [8] A. Kestenbaum, Clinical Methods of Neuro-Ophthalmologic Examination, Grune & Stratton, New York, NY, USA, 1961. [9] M. J. Urist, “A lateral version light-reflex test,” American Journal of Ophthalmology, vol. 63, no. 4, pp. 808–815, 1967. [10] M. P. Mourits, M. F. Prummel, W. M. Wiersinga, and L. Koornneef, “Measuring eye movements in Graves ophthalmopathy,” Ophthalmology, vol. 101, no. 8, pp. 1341–1346, 1994. [11] H. W. Lim, D. E. Lee, J. W. Lee et al., “Clinical measurement of the angle of ocular movements in the nine cardinal positions of gaze,” Ophthalmology, vol. 121, no. 4, pp. 870–876, 2014. [12] M. P. Mourits, M. F. Prummel, W. M. Wiersinga, and L. Koornneef, “Clinical activity score as a guide in the management of patients with Graves’ ophthalmopathy,” Clinical Endocrinology, vol. 47, no. 1, pp. 9–14, 1997. [13] P. Boeder, “An analysis of the general type of uniocular rotations,” Archives of Ophthalmology, vol. 57, no. 2, pp. 200–206, 1957. [14] P. Boeder, “Co-operative action of extra-ocular muscles,” British Journal of Ophthalmology, vol. 46, no. 7, pp. 397–403, 1962. [15] G. Guibor, Squint and Allied Conditions, Grune & Stratton, New York, NY, USA, 1959. [16] H. W. Lim, J. W. Lee, H.
Cho et al., “Quantitative measurement of the angle of ocular movements in patients with horizontal strabismus,” Investigative Ophthalmology & Visual Science, vol. 56, p. 5223, 2015. [17] P. J. Dolman, K. Cahill, C. N. Czyz et al., “Reliability of estimating ductions in thyroid eye disease,” Ophthalmology, vol. 119, no. 2, pp. 382–389, 2012. [18] H. Haggerty, S. Richardson, K. W. Mitchell, and A. J. Dickinson, “A modified method for measuring uniocular fields of fixation,” Archives of Ophthalmology, vol. 123, no. 3, pp. 356–362, 2005. [19] B. J. Kushner, “The usefulness of the cervical range of motion device in the ocular motility examination,” Archives of Ophthalmology, vol. 118, no. 7, pp. 946–950, 2000. [20] J. M. Holmes, G. G. Hohberger, and D. A. Leske, “Photographic and clinical techniques for outcome assessment in sixth nerve palsy,” Ophthalmology, vol. 108, no. 7, pp. 1300–1307, 2001. [21] M. M. J. Houben, J. Goumans, and J. van der Steen, “Recording three-dimensional eye movements: scleral search coils versus video oculography,” Investigative Ophthalmology & Visual Science, vol. 47, no. 1, pp. 179–187, 2006.

work_wlissbtqtnf4jlpao45q5uto5y ----

High-Speed Optical Microscopy of Spin-Like Reactions in Exothermic Nanolaminates. J. P. McDonald, E. D. Jones Jr., V. Carter Hodges, D. P. Adams. Sandia National Laboratories, Albuquerque, NM, 87185-0959. Exothermic nanolaminates are a class of energetic materials comprised of alternating layers of two or more materials which produce an exothermic reaction when they are encouraged to mix [1]. The constituent materials are grown in distinct layers, such that the exothermic energy is stored until a stimulus provides for sufficient mixing of these layers and ignition of a reaction. A schematic of an exothermic nanolaminate foil is presented in FIG. 1.
In this work, the details of reaction dynamics in exothermic nanolaminates are explored at the single micrometer length scale with high speed digital photography (10^5 frames per second), revealing the presence of spin-like reactions [2]. Initially observed during combustion of a titanium cylinder in a nitrogen gas atmosphere [3], spin reactions are thought to result from the competition between heat conduction out of the reaction volume, and heat production within the reaction volume from the exothermic chemical reaction [4]. For planar exothermic nanolaminates, spin-like reactions were initially discovered in Co/Al foils, where transverse reaction bands were observed to propagate orthogonal to the net propagation direction [2]. Spin-like reactions have emerged as a common mode of reaction propagation, showing up in other systems including Ni/Ti, Ni/Al, and Sc/Cu (an example is shown in FIG. 2). In contrast to steady reaction propagation, it was discovered that the spin-like mode of reaction propagation resulted in a periodic surface morphology present long after the reaction was complete (FIG. 3). Further, it was noted that the reaction speed in the transverse reaction bands greatly exceeded that of the net reaction propagation (FIG. 4). The influence of exothermic foil properties on the spin-like reactions is revealed in FIG. 4, where it is apparent that both the net and transverse reaction propagation speeds decrease with increasing bilayer thickness (bilayer thickness defined in FIG. 1). This general trend was attributed to the effects of a mass diffusion limited reaction, in which atoms from the constituent layers must travel greater distance to meet and react as the bilayer thickness is increased. Recently, the combined influence of spin reactions and oxidation/combustion reactions in exothermic nanolaminates has been investigated.
In Ni/Ti multilayers, for example, a reaction between the two metallic species is also accompanied by a reaction between Ti and O2 [5]. The oxidation reaction has the effect of increasing the rate of generation of transverse reaction bands, which has the secondary effect of increasing the net velocity of reactions performed in air. Observation of reaction propagation in planar exothermic nanolaminates using high-speed optical microscopy revealed the presence of spin-like reactions, an unsteady mode of reaction propagation characterized by transverse reaction bands propagating orthogonal to the overall reaction direction. The characteristics of spin reactions were found to be dependent on foil design and reaction environment [6].

doi:10.1017/S1431927610062173 Microsc. Microanal. 16 (Suppl 2), 2010. © Microscopy Society of America 2010

References:
1. T. W. Barbee, et al., US Patent Number 5538795-A (1996).
2. J. P. McDonald, et al., Applied Physics Letters 94, 3 (2009).
3. A. K. Filonenko, et al., Combustion Explosion and Shock Waves 11, 301-308 (1975).
4. S. Gennari, et al., Journal of Physical Chemistry B 110, 7144-7152 (2006).
5. D. P. Adams, et al., Journal of Applied Physics 106, 8 (2009).
6. The authors would like to acknowledge the assistance of Kathryn A. Chinn in data collection. This work was funded by a Sandia Laboratory Directed Research & Development program. Sandia is a multiprogram laboratory operated by Sandia Corporation, a Lockheed Martin Company, for the United States Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
FIG 1. Schematic cross section of an exothermic nanolaminate foil (layer A: 10-1000 nm; layer B: 10-1000 nm; premixed interlayer: 1-10 nm, est.). The reaction is initiated at one end of the freestanding foil; high speed camera observations of reaction dynamics are made in plan-view of the top foil surface, from which reaction mode and propagation speed are determined.

FIG 2. Still frame captures (0 to 1.00 ms) from a high speed movie of an exothermic reaction in a Sc/Cu foil with 41.2 nm bilayer thickness, showing net and transverse reaction propagation. The time stamp in/above each image is relative to the first image in the sequence, while the scale bar (500 µm) in the upper left image applies to all images. Note that external illumination was used and the contrast apparent in the images is strictly reflected light.

FIG 3. Scanning electron micrograph (scale bar: 1 mm) of the surface of a reacted Co/Al exothermic nanolaminate with a bilayer thickness of 250 nm and a total thickness of 7.5 µm.

FIG 4. Net and transverse reaction speeds for Co/Al exothermic nanolaminates as a function of bilayer thickness (0-300 nm). All foils had a total thickness of 7.5 µm.

work_wms7nd3r7faz3ftmc6ibpia2xy ---- © 2015 Journal of Neurosciences in Rural Practice | Published by Wolters Kluwer - Medknow

Sir, We have read with interest the article of Mojumder et al.
on the use of the pupil to limbus ratio to measure anisocoria[1] and we would like to share our experience on this topic: Some studies have demonstrated that digital photography is a very convenient tool for the measurement of pupil diameter in ophthalmic practice. It offers several advantages over other methods (inexpensive, accurate, more repeatable than other techniques, provides long-lasting documentation, and eases masked observations).[2] However, to take absolute measures of pupil diameter, a ruler or a template should be included in the photograph, and care should be taken with its placement and orientation to avoid parallax effects. Otherwise, only relative measures of pupil diameter can be taken. Using relative measures of pupil diameter (pupil to limbus ratio) simplifies the process of taking the photographs (it is not necessary to include extra elements in the picture) and could have important advantages over using absolute measurements in different situations: • Intra-subject comparisons: This method is ideal for monitoring changes in pupil diameter in the same subject. As Mojumder et al. demonstrate in their article, this is a very convenient approach to monitoring pupil changes in the ICU. It can also be useful in the differential diagnosis of anisocoria (to monitor pupil change when apraclonidine, cocaine, or pilocarpine 0.125% tests are performed) • Inter-subject comparisons: To compare pupil diameters among different subjects, this method also has important advantages over the methods that use absolute measurements. Pupil diameter is correlated with limbus diameter. A 6-mm pupil is probably maximum mydriasis in a subject with 8-mm corneas, but it is certainly not maximum mydriasis in a subject with 13-mm-diameter corneas.
In the first case, pupil diameter is 75% of the maximum pupil diameter, whereas in the second case pupil diameter is 46% of the maximum pupil diameter • Defining anisocoria: Expressing pupil diameter as a percentage of limbal diameter makes possible a more logical definition of anisocoria. As pupil diameter has a wide distribution (young myopic subjects have bigger pupils than old hyperopic subjects),[3] we find the classical definition of anisocoria very rigid. A 0.5-mm difference should be considered significant in a subject with 2.5-mm pupils, but it is probably insignificant in a patient with 6-mm pupils • Comparing photopic and scotopic pupils in the same subject: In our experience, using absolute measures, anisocoria tends to be greater under scotopic conditions (not only in Horner syndrome, but also in physiologic anisocoria and anisocoria related to parasympathetic lesions). This fact is probably due to bigger scotopic pupils. This bias could be avoided using the relative change in the pupil to limbus ratio between photopic and scotopic pupils. In summary, we agree with the authors. Pupil to limbus ratio is not only a very convenient method to measure pupils. It also has many conceptual advantages. Financial support and sponsorship: Nil. Conflicts of interest: There are no conflicts of interest. Julio González Martín-Moro1,2, Belén Pilo de la Fuente3, Fernando Gomez Sanz1,4 1Department of Ophthalmology, University Hospital of Henares, 2Department of Medicine, Francisco de Vitoria University, 3Department of Neurology, University Hospital of Getafe, 4Department of Optics II, Complutense School of Optometry, Madrid, Spain. Address for correspondence: Dr. Julio González Martín-Moro, Department of Ophthalmology, University Hospital of Henares, Madrid, Spain, Department of Medicine, Francisco de Vitoria University, Madrid, Spain. E-mail: juliogazpeitia@gmail.com References 1. Mojumder DK, Patel S, Nugent K, Detoledo J, Kim J, Dar N, et al.
Pupil to limbus ratio: Introducing a simple objective measure using two-box method for measuring early anisocoria and progress of pupillary change in the ICU. J Neurosci Rural Pract 2015;6:208-15. 2. Twa MD, Bailey MD, Hayes J, Bullimore M. Estimation of pupil size by digital photography. J Cataract Refract Surg 2004;30:381-9. 3. Hashemi H, Yazdani K, Khabazkhoob M, Mehravaran S, Mohammad K, Fotouhi A. Distribution of photopic pupil diameter in the Tehran eye study. Curr Eye Res 2009;34:378-85. How to cite this article: Martín-Moro JG, de la Fuente BP, Sanz FG. Conceptual advantages of using pupil to limbus ratio over absolute pupil diameter. J Neurosci Rural Pract 2015;6:625-6. Published online: 2019-09-26. This is an open access article distributed under the terms of the Creative Commons Attribution-NonCommercial-ShareAlike 3.0 License, which allows others to remix, tweak, and build upon the work non-commercially, as long as the author is credited and the new creations are licensed under the identical terms.
Website: www.ruralneuropractice.com; DOI: 10.4103/0976-3147.169772

Necessity of globally implementing the comprehensive mental health action plan: World Health Organization

Sir, For decades, mental well-being has been acknowledged as an integral component of health.[1] In fact, a sound state of mental health can enable people to realize their potential, successfully deal with the stresses of life, work in a productive manner, and thus contribute to the welfare of society.[1,2] Realizing the global impact of mental illnesses and their influence on other health dimensions and quality of life, the World Health Organization called for a comprehensive, multi-sectoral response from health and allied sectors.[3] Subsequently, in 2013 a comprehensive mental health action plan 2013-2020 was launched after consulting with stakeholders (viz., policy makers, specialists, health professionals, family members of mentally ill persons, etc.) to ensure promotion of mental health, prevention and prompt treatment of mental illnesses, and provision of rehabilitation and care services.[2] This action plan was proposed with four basic objectives, namely to strengthen effective leadership activities in mental health; to ensure the provision of holistic, integrated mental health and social care services in community settings; to implement strategies for the promotion of mental health and the prevention of mental illness; and to strengthen information systems, evidence, and research in the area of mental health.[2,4,5] However, recent estimates present quite a dismal picture: almost one out of every ten people globally has some mental illness, whereas only 1% of health personnel work in the field of mental health.[6] This means that a large number of people who are actually suffering from mental illnesses are deprived of appropriate and adequate mental health care services.[4,6] It is quite alarming, as close to 50% of the world's population reside in
those nations that have <1 psychiatrist per 0.1 million individuals.[4,6] Furthermore, a wide disparity has been observed with regard to the country of residence of service users: the estimated number of mental health workers in high-income nations is 1 per 2,000 persons, in contrast to <1 per 0.1 million individuals in low- and middle-income nations such as Armenia, Bhutan, Bolivia, India, etc.[4] In addition, global spending on mental health per person is quite low and variable, with low- and middle-income countries spending less than US$ 2 per capita per year, in contrast to high-income nations, which spend more than US$ 50.[4,6] On subsequent analysis of this financial expenditure, it was observed that a major proportion of the money was spent on mental health hospitals, which cater to the needs of a very small percentage of the people who actually need attention.[4,6] Moreover, issues pertaining to poor awareness among the general population, a lack of trained and competent health personnel, and an absent or irregular supply of basic medicines to treat mental illnesses, especially in rural and remote settings, have complicated the delivery of mental health care services.[1,7] On a positive note, some nations have started to show progress by creating policies/plans (two-thirds of nations) and laws (50% of nations) for mental health.[6] However, shortcomings like incomplete

work_wpgh5dhunnhi5f5dfcegw5jjxq ----
Davis, MD; for the Diabetes Control and Complications Trial/Epidemiology of Diabetes Interventions and Complications Study Research Group Objective: To compare diabetic retinopathy (DR) sever- ity as evaluated by digital and film images in a long-term multicenter study, as the obsolescence of film forced the Diabetes Control and Complications Trial/Epidemiology of Diabetes Interventions and Complications Study (DCCT/ EDIC) to transition to digital after 25 years. Methods: At 20 clinics from 2007 through 2009, 310 participants with type 1 diabetes with a broad range of DR were imaged, per the Early Treatment Diabetic Reti- nopathy Study (ETDRS) protocol, with both film and digi- tal cameras. Severity of DR was assessed centrally from film and tonally standardized digital cameras. For reti- nopathy outcomes with greater than 10% prevalence, we had 85% or greater power to detect an agreement � of 0.7 or lower from our target of 0.9. Results: Comparing DR severity, digital vs film yielded a weighted � of 0.74 for eye level and 0.73 for patient level (“substantial”). Overall, digital grading did not systematically underestimate or overestimate severity (McNemar bias test, P = .14). For major DR outcomes (�3-step progression on the ETDRS scale and disease pres- ence at ascending thresholds), digital vs film � values ranged from 0.69 to 0.96 (“substantial” to “nearly per- fect”). Agreement was 86% to 99%; sensitivity, 75% to 98%; and specificity, 72% to 99%. Major conclusions were similar with digital vs film gradings (odds reductions with intensive diabetes therapy for proliferative DR at EDIC years 14 to 16: 65.5% digital vs 64.3% film). Conclusion: Digital and film evaluations of DR were com- parable for ETDRS severity levels, DCCT/EDIC design out- comes, and major study conclusions, indicating that switch- ing media should not adversely affect ongoing studies. Arch Ophthalmol. 
2011;129(6):718-726 L ONG-TERM MULTICENTER studies such as the Diabetes Control and Complications Trial (DCCT)/Epidemiol- ogy of Diabetes Interven- tions and Complications (EDIC) require consistent measurements of key outcome parameters over time and across clinics, es- pecially when technology evolves during the study. The DCCT (1983-1993) demon- strated that intensive therapy aimed at main- taining blood glucose levels as close to nor- mal as possible substantially reduced the risk of development and/or progression of dia- betic retinopathy (DR) and other microvas- cular complications compared with con- ventional therapy.1-3 The EDIC (1994- 2016 [ongoing]), an observational follow-up study of the DCCT cohort,4 demonstrated that the differences in DR and other micro- vascular (and macrovascular) outcomes be- tween the former intensive and conven- tional treatment groups persisted for at least 10 years after the DCCT despite the loss of glycemic separation after the clinical trial ended.5-9 Since the inception of the DCCT in 1983, recording of retinal images, from which DR status and progression are evalu- ated, has inexorably moved from film to digital. Commercial digital fundus camera systems have markedly improved in qual- ity, have been widely adopted by clinics, and offer substantial convenience and economy compared with film cameras. Changing retinal imaging methods in the DCCT/EDIC, while perhaps unavoid- able, might alter study analysis results and conclusions. Although several cross- sectional studies have reported that digi- tal systems provide results that are simi- lar to the film “gold standard,” most represent single-center experience and some lack a wide range of retinopathy se- verity. 
Therefore, the DCCT/EDIC Re- search Group undertook a formal due- diligence ancillary study to gauge the effect Author Affiliations: Department of Ophthalmology and Visual Sciences, University of Wisconsin, Madison (Drs Danis, and Davis, Mr Hubbard, and Mss Peng and Susman); Biostatistics Center, George Washington University, Washington, DC (Mss Sun and Cleary); Department of Ophthalmology, University of Missouri, Columbia (Dr Hainsworth);and Joslin Diabetes Center, Department of Ophthalmology, Harvard Medical School, Boston, Massachusetts (Dr Aiello). Group Information: See page 726 for group member information. ARCH OPHTHALMOL / VOL 129 (NO. 6), JUNE 2011 WWW.ARCHOPHTHALMOL.COM 718 ©2011 American Medical Association. All rights reserved. Downloaded From: https://jamanetwork.com/ by a Carnegie Mellon University User on 04/05/2021 on retinal outcomes of switching from film to digital pho- tography. In addition to examining conventional mea- sures of agreement between digital and film grading re- sults, we were also able to evaluate retrospectively the degree to which DCCT/EDIC primary study outcomes and con- clusions might be altered by transitioning between the dif- ferent imaging media. METHODS STUDY DESIGN This was a masked, cross-sectional comparison study for deter- mining results of film and digital imaging in assessing DR. Sample size calculations10,11 indicated that, for outcomes with 10% or greater prevalence, 300 subjects would provide 85% or greater power to detect a � of 0.7 or lower compared with our target � of 0.9. The target and alternative � were based on the test/retest � on film photographs in the DCCT/EDIC.2,6 SUBJECTS Twenty DCCT/EDIC centers certified for both film and digital imaging (of the 28 clinical centers) studied 319 subjects with type 1 diabetes at their regular visits; 9 subjects (2.8%) were excluded because they had ungradable digital (n = 6) and/or film (n = 5) photography sets in one or both eyes. 
Inclusion and exclusion criteria for the DCCT have been published previously.1 Clinical characteristics in the 310 subjects included in the study are given in Table 1 at DCCT baseline (1983-1989), EDIC baseline, and at the time of the digital-to-film transition study (EDIC years 14-16). Comparison of the 310 participants with the remaining 1131 persons enrolled in the DCCT showed no important differences except that more nonparticipants were male, from the secondary cohort, and had higher mean hemoglobin A1c levels during DCCT (eTable; www.archophthalmol.com), largely because 6.9% who had died and 6.2% who were inactive were included as substudy nonparticipants. Because the primary focus of this article is not on treatment effect, this imbalance does not introduce bias to most digital-film comparisons.

DCCT/EDIC DATA COLLECTION

Retinopathy was assessed by standard film fundus photography in the whole cohort every 6 months during DCCT, in approximately one-quarter of the cohort each year during EDIC, and in the entire cohort at EDIC years 4 and 10.6 Reproducibility of the film grading procedure and its stability over time were evaluated in each study by annual masked regrading of a sample of images (both eyes of each subject) that included a broad spectrum of DR severity. During DCCT, there were 7 annual replicate gradings of 42 and, later, 60 subjects; during EDIC, there were 10 annual replicate gradings of 50 subjects.4

Table 1. Clinical Characteristics of the 310 DCCT/EDIC Subjects With Gradable Digital and Film Photographs in the Digital-Film Ancillary Study. Values are percentages unless otherwise noted and are given in the order: DCCT baseline (1983-1989); DCCT closeout or EDIC baseline (1992-1993); digital-film ancillary study (2007-2009).

Sample, No.: 310. Primary cohort: 57. Intensive therapy: 50. Female sex: 53.
Age, mean (SD), y: 26 (7.3); 33 (7.1); 48 (7.1)
Diabetes duration, mean (SD), y: 5.4 (3.9); 12.0 (5.1); 27.2 (5.0)
BMI, mean (SD): 23.3 (2.9); 26.0 (4.0); 28.7 (5.5)
BMI ≥30: 1.6; 12.5; 35.9
Current smoker: 20.0; 21.4; 21.4
Blood pressure, mean (SD), mm Hg (a): 86.4 (9.2); 88.1 (9.2); 88.5 (9.1)
Hypertension (b): 3.5; 5.8; 46.4
AER ≥30 mg/d in DCCT/EDIC: 11.6; 12.3; 60.4
AER ≥300 mg/d in DCCT/EDIC: 0; 1.9; 9.6
Hyperlipidemia (c): 0; 25.5; 59.7
Retinopathy levels based on film photographs:
  No retinopathy (10/10): 56.5; 25.8; 5.8
  Microaneurysms (MA) only (20/<20): 28.1; 37.1; 35.5
  Mild NPDR (35/<35): 12.9; 25.5; 21.6
  Moderate NPDR (43/<43): 1.9; 6.1; 17.7
  Moderately severe NPDR (47/<47): 0.6; 2.9; 4.8
  Severe NPDR (53/<53): 0; 0; 0.3
  PDR (61/<61): 0; 2.6; 14.2
CSME based on film photograph: 0; 3.2; 7.3
Hemoglobin A1c, mean (SD): 9.0 (1.5); 8.1 (1.6); 7.9 (1.1)
Mean hemoglobin A1c during DCCT or EDIC, mean (SD): 8.0 (1.3); 8.0 (1.0) (last two columns only)

Abbreviations: AER, albumin excretion rate; BMI, body mass index (calculated as weight in kilograms divided by height in meters squared); CSME, clinically significant macular edema; DCCT, Diabetes Control and Complications Trial; EDIC, Epidemiology of Diabetes Interventions and Complications Study; NPDR, nonproliferative diabetic retinopathy; PDR, proliferative diabetic retinopathy. SI conversion factor: To convert hemoglobin A1c to proportion of total hemoglobin, multiply by 0.01. (a) Mean blood pressure defined as two-thirds of the diastolic blood pressure plus one-third of the systolic blood pressure. (b) Hypertension is defined as systolic blood pressure of 140 mm Hg or greater or diastolic blood pressure of 90 mm Hg or greater, documented hypertension, or use of antihypertensive agents. (c) Hyperlipidemia is defined as low-density lipoprotein cholesterol level of 130 mg/dL (to convert to micromoles per liter, multiply by 0.0357) or greater or use of lipid-lowering agents.

FUNDUS PHOTOGRAPHY PROCEDURE

Both film and digital photography used the standard 7-field, nonsimultaneous stereoscopic, 30° color procedure established by the Diabetic Retinopathy Study,12 as modified by the Early Treatment Diabetic Retinopathy Study (ETDRS).13 Sets of fundus photographs of both eyes included central views of disc and macula, adjacent views of each of the 4 major vascular arcades, and an adjacent view just temporal to the macula. Although recent studies of macular edema have shifted the disc and temporal-to-macula fields slightly to include the center of the macula, DCCT/EDIC has retained the original ETDRS definitions of fields 1 and 3. Film photographs were taken on Zeiss FF2-4 fundus cameras (Carl Zeiss Meditech, Inc, Oberkochen, Germany) (or approved alternatives) by certified photographers. Digital images were obtained using camera systems with a minimum of 3 megapixels; 19 of 20 clinics had 5-megapixel or higher systems. Clinics were required to submit images taken of non-study volunteers to obtain reading center certification of photographers and digital camera systems.

FUNDUS IMAGE HANDLING AND DISPLAY

At the clinic, film photographs were mounted in plastic sheets in approximate anatomic position and digital photographs were indexed as "proof sheets," with personal identifying information removed except for study identification number.
At the reading center, all digital images were loaded for unified handling into the Topcon IMAGEnet system (Topcon Medical Imaging Inc, Paramus, New Jersey) and were JPEG-compressed at the IMAGEnet "maximum" quality setting, with an average compression ratio of approximately 20:1. Film sets were retroilluminated on a standard light box (6500 K color temperature) and viewed with the Donaldson stereo viewer (George Davco, Holbrook, Massachusetts). Digital images were displayed on calibrated 20.5-in liquid crystal display monitors (gamma = 2.2; color temperature, 6500 K; luminance, 110-170 candelas per m²) and viewed with handheld stereo viewers (Screen-Vu Stereoscope; PS Manufacturing, Portland, Oregon). Imposition on images of the ETDRS macular grid and measurements of distances/areas were done in film by superimposing grids and measuring circles printed on transparent acetate stock and in digital by superimposing a digital version of the grid and by using the standard distance and planimetry tools of the digital system. For stereo viewing, gridding, and measurement, graders invoked the IMAGEnet stereo analyzer function. For digital images, grids and measuring tools were scaled for each camera, according to the spatial calibration factor established by the reading center at the time of system certification. Image illumination, contrast, and color balance were controlled in film by specifying acceptable film emulsions (Kodak Ektachrome Professional ASA [Kodak Inc, Rochester, New York] or equivalent) and development processes (E-6 process by a Kodak Q-certified laboratory). Digital image tonal characteristics were optimized via the standardized enhancement model published by the Age-Related Eye Disease Study 2.14 An automated processor computed the luminance histogram for each of the red/green/blue color channels, and the curves for each channel were adjusted via algorithm to conform to a model image derived from exemplars.
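The standardized enhancement step described above is, at its core, a per-channel remapping of each image's tone curve toward a model image. The sketch below is only an illustration of that general idea in pure Python (the function names are ours; this is not the AREDS2 algorithm or the reading center's actual software): a lookup table sends each source gray level to the model level with the matching cumulative frequency.

```python
from bisect import bisect_left

def match_channel(pixels, model_cdf):
    """Remap one 8-bit channel so its histogram follows a model CDF.

    pixels    -- flat list of ints in 0..255 (one color channel)
    model_cdf -- 256 floats: cumulative distribution of the model image
    """
    # histogram and cumulative distribution of the source channel
    hist = [0] * 256
    for p in pixels:
        hist[p] += 1
    total, cum, cdf = len(pixels), 0, []
    for h in hist:
        cum += h
        cdf.append(cum / total)
    # LUT: each source level maps to the first model level whose CDF reaches it
    lut = [min(bisect_left(model_cdf, c), 255) for c in cdf]
    return [lut[p] for p in pixels]

# Example: matching a two-level channel to a uniform (equalizing) model CDF
uniform_cdf = [(i + 1) / 256 for i in range(256)]
out = match_channel([0] * 8 + [255] * 8, uniform_cdf)
```

Applying the same remapping independently to the red, green, and blue channels corresponds to the per-channel curve adjustment the study describes.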
Quality of both film and digital images was rated by the graders, based on proper field definition, crisp focus, and stereo effect. Graders assigned an image confidence score of high, adequate, or inadequate for answers to the main DR questions as affected by image quality.

DIABETIC RETINOPATHY GRADING PROCEDURE

Certified graders evaluated each eye using the ETDRS classifications of DR abnormalities, diabetic macular edema,12,13,15 and overall DR severity.16 Data were entered into computerized forms, with checks for internal consistency and completeness. The grading program included independent assessments of each eye by 2 graders (from a pool of 6), with adjudication of substantial differences by a senior grader (from a pool of 3). Grading of film and digital images of each eye was separated by a minimum of 2 weeks (in most cases, several months) to minimize any memory effect. Another senior grader not involved in the original grading compared film and digital images side-by-side, with knowledge of the grades from both, to explore possible reasons for differences in grading between the two media.
GRADING AND OUTCOMES

Diabetic retinopathy severity at the eye level was assigned one of the following ETDRS levels: 10 (including levels 14 and 15, eyes without microaneurysms but with cotton-wool spots or retinal hemorrhages, respectively), 20, 35, 43, 47, 53, 61 (including level 60, panretinal photocoagulation scars without extant proliferative DR), 65, 71, 75, 81, and 85.15 The ETDRS person-level scale combines eye results (worse-eye-emphasized method) as previously done in the DCCT/EDIC.3 To estimate the effect of digital/film grading differences on DCCT/EDIC design outcomes, we collapsed grading scales into dichotomous categories of particular interest to the study: any retinopathy (including microaneurysms only, ie, level 20 or worse in either eye), mild nonproliferative DR (NPDR) or worse (≥35 in either eye), moderate NPDR or worse (≥43 in either eye), moderately severe NPDR or worse (≥47 in either eye), severe NPDR or worse (≥53 in either eye), proliferative DR (PDR) (≥60/61 in either eye), and Diabetic Retinopathy Study high-risk characteristics or worse (≥71 in either eye). Proliferative DR is the primary EDIC retinopathy outcome after EDIC year 10. Retinopathy progression in the DCCT was defined as an increase of 3 or more steps on the ETDRS person scale from DCCT baseline. Further retinopathy progression in EDIC was defined as 3 or more steps of progression from DCCT closeout. Progression of DR at the dual imaging visit was used to compare the outcomes from digital vs film images. Diabetic macular edema was analyzed as the presence or absence of ETDRS clinically significant macular edema (CSME). Center-involved diabetic macular edema was insufficiently prevalent in our population for reliable comparison between media.
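As a hedged sketch (the names and code are ours, not the study's analysis software), the dichotomous categories above can be derived from a person's two ETDRS eye-level grades by thresholding the worse (numerically higher) eye level:

```python
# Threshold ETDRS eye levels for each dichotomous DCCT/EDIC category;
# a category is "present" when either eye reaches its threshold.
THRESHOLDS = {
    "any_retinopathy": 20,                 # microaneurysms only, or worse
    "mild_npdr_or_worse": 35,
    "moderate_npdr_or_worse": 43,
    "moderately_severe_npdr_or_worse": 47,
    "severe_npdr_or_worse": 53,
    "pdr": 60,                             # includes level 60 (PRP scars)
    "drs_high_risk_or_worse": 71,
}

def outcomes(right_eye: int, left_eye: int) -> dict:
    """Map a person's two ETDRS eye levels to the dichotomous outcomes."""
    worse = max(right_eye, left_eye)       # worse eye = higher ETDRS level
    return {name: worse >= level for name, level in THRESHOLDS.items()}
```

For example, a person graded 61 in one eye and 35 in the other counts as positive for every category up to and including PDR, but not for high-risk characteristics.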
PRELIMINARY TEST OF GRADING PERFORMANCE ON DIGITAL IMAGES PRIOR TO STANDARDIZED ENHANCEMENT

After grading the digital images of 98 eyes (49 subjects) without standardized enhancement for tonal characteristics, the reading center performed a preliminary comparison of ETDRS retinopathy severity levels between digital and film gradings. There appeared to be a systematic difference between results from the two media, with higher DR severity levels in some eyes on film compared with digital images (data not shown). Standardized enhancement (optimization) was then applied to these digital images, and they were independently regraded. The reduction in systematic differences between the two media achieved by optimization was substantial. Therefore, all digital images were optimized prior to being graded.

STATISTICAL ANALYSIS

Agreement between film and digital gradings on ordinal DR categories was analyzed by cross-tabulation and by rates of exact and near agreement. Cohen κ statistics, both unweighted17 and weighted,18 were calculated for multistep ordinal scales. A weight of 1 was assigned for exact agreement, 0.75 for a 1-step difference on the eye and patient scales, and 0.5 for 2-step differences on the patient scale. For 2-step or greater differences on the eye scale or 3-step or greater differences on the patient scale, a weight of 0 was applied. We used the guidelines for interpretation of κ proposed by Landis and Koch: 0.0-0.20 indicates slight; 0.21-0.40, fair; 0.41-0.60, moderate; 0.61-0.80, substantial; and 0.81-1.00, almost perfect.19 The Bhapkar test of marginal homogeneity20 was used to assess the agreement between film and digital gradings in the marginal distribution of the ordinal ETDRS scale.
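The weighted κ computation just described can be sketched as follows. This is a minimal illustration with our own function names (the paper does not publish its analysis code), using the eye-scale weights of 1 for exact agreement, 0.75 for a 1-step difference, and 0 otherwise:

```python
from collections import Counter

def weighted_kappa(a, b, levels, weight):
    """Cohen's weighted kappa for two raters over ordered categories.

    a, b    -- paired ratings (one per eye or subject) from the two gradings
    levels  -- the ordered category list (e.g. ETDRS severity levels)
    weight  -- weight(i, j): agreement credit for category indices i and j
    """
    n = len(a)
    idx = {lvl: k for k, lvl in enumerate(levels)}
    # observed weighted agreement
    po = sum(weight(idx[x], idx[y]) for x, y in zip(a, b)) / n
    # expected weighted agreement from the two marginal distributions
    ca, cb = Counter(a), Counter(b)
    pe = sum(weight(idx[x], idx[y]) * ca[x] * cb[y]
             for x in ca for y in cb) / n ** 2
    return (po - pe) / (1 - pe)

def eye_weight(i, j):
    # eye scale: 1 for exact agreement, 0.75 for a 1-step difference, else 0
    return {0: 1.0, 1: 0.75}.get(abs(i - j), 0.0)
```

Swapping in a patient-scale weight function (which also grants 0.5 credit for 2-step differences) reproduces the other weighting scheme described above.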
The McNemar overall bias test21 was used to test for systematic overestimation or underestimation between film and digital gradings. Film/digital agreement on dichotomous DCCT/EDIC DR categories was evaluated by prevalence, agreement rate, sensitivity, specificity, false-positive and false-negative rates, and Cohen unweighted κ, using film as the reference standard. For prevalence rates close to 0 or 1, Cohen κ was not reported because of its unreliability owing to substantial imbalance in the distribution of marginal totals.22 To assess the effect of switching from film to digital images, separate multivariate logistic regression models were constructed within each image type comparing the glycemic treatment effect (odds reduction of the former intensive therapy compared with conventional therapy) on several DR outcomes, especially risk of further 3-step DR progression during EDIC (our primary retinopathy outcome through EDIC year 10) and risk of onset of PDR during EDIC (our primary retinopathy outcome after year 10). These models adjusted for the same covariates as our published Weibull proportional hazard model, including primary or secondary cohort (no retinopathy or retinopathy at DCCT baseline), diabetes duration at DCCT baseline, hemoglobin A1c levels at DCCT eligibility, and retinopathy levels at DCCT closeout.6 To evaluate historical reproducibility of film photography during DCCT/EDIC, the Fleiss κ among multiple raters17 was used to calculate κ for DR dichotomous categories, using data from annual replicate gradings on the quality control image samples. Reliability of the digital-film grading across clinics was analyzed via the Cochran test of homogeneity.23

RESULTS

COMPARISON OF DIGITAL VS FILM GRADINGS OF DR SEVERITY

Figure 1 compares film and digital gradings on the ETDRS person-level scale.
There were at least 12 persons in each of the lower retinopathy severity categories (from no retinopathy, level 10 = 10, through moderately severe NPDR in the worse eye, level 47 < 47) and in the 3 mildest PDR categories (levels 60 < 60, 60 = 60, and 65 < 65) but only 0 to 3 in the more severe NPDR (levels 47 = 47 through 53 < 53) and PDR categories (levels 65 = 65 through 71 < 71). There was exact agreement in 51% of subjects, agreement within 1 level in 82%, and agreement within 2 levels in 95% (DR progression is worsening of ≥3 levels). Weighted κ was 0.73 (95% confidence interval, 0.68-0.77), representing substantial agreement between digital and film gradings. The McNemar test of overall bias did not show significant systematic difference between gradings (film higher in 27% and lower in 22%; P = .14). The Bhapkar test of marginal homogeneity indicated a borderline significant imbalance between the marginal distributions of film vs digital gradings (P = .08; eFigure 1).

The corresponding analysis using the ETDRS eye-level scale is shown in Figure 2. To gain power, we used all eyes with gradable film and digital photographs (N = 628, including those with gradable photographs in only 1 eye). Agreement rates were 63% for exact agreement and 94% for agreement within 1 step. Weighted κ for agreement was 0.74 (95% confidence interval, 0.71-0.78). Gradings showed more severe DR with film than with digital (film higher in 141 eyes and digital higher in 92, P = .001 by McNemar test), and there was significant marginal heterogeneity (P = .002 by Bhapkar test; eFigure 2). The most noteworthy differences were in the 106 eyes placed in level 10 by 1 or both image types (film higher in 36 and digital higher in 14; P = .002) and in the 122 eyes in level 43 by 1 or both image types (film higher in 56 and digital higher in 31; P = .004).
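The McNemar bias test applied to counts like these uses only the two directional disagreement totals. A minimal sketch (the uncorrected chi-square version; the study may have used a different variant), applied to the eye-level counts quoted above:

```python
import math

def mcnemar_bias(film_higher, digital_higher):
    """McNemar test of overall bias from directional disagreement counts.

    Returns the uncorrected 1-df chi-square statistic and its p-value;
    the chi-square(1) survival function is erfc(sqrt(x / 2))."""
    b, c = film_higher, digital_higher
    chi2 = (b - c) ** 2 / (b + c)
    p = math.erfc(math.sqrt(chi2 / 2.0))
    return chi2, p

# Eye-level scale: film higher in 141 eyes, digital higher in 92.
chi2, p = mcnemar_bias(141, 92)
```

On these counts the statistic is about 10.3, giving a p-value near .001, consistent with the value reported in the text.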
Side-by-side review of a sample of these cases post hoc by a senior grader confirmed that small, subtle microaneurysms, intraretinal microvascular abnormalities, and retinal new vessels were sometimes more difficult to detect in digital color images than in film, even after tonal enhancement.

COMPARISON OF DIGITAL VS FILM GRADINGS OF DIABETIC MACULAR EDEMA

In this study, clinically significant diabetic macular edema occurred in only 6% to 7% of subjects and 6% to 7% of eyes, providing insufficient power for reliable analyses. However, agreement rates on presence or absence were 94% for subjects and 96.8% for eyes; digital was higher in 5.3% and film higher in 4.3% (McNemar bias test, P = .56); and marginal totals were not significantly different (Bhapkar test of marginal homogeneity, P = .59).

AGREEMENT ON DCCT/EDIC DR OUTCOMES BASED ON DIGITAL VS FILM IMAGES

Table 2 presents the agreement on dichotomous DCCT/EDIC DR categories determined from digital vs film images. In these categories, digital vs film κ ranged from 0.69 to 0.96, agreement proportion was 86% to 99%, sensitivity was 75% to 98%, and specificity was 72% to 99%. Agreement on the presence of any degree of PDR (including scars of prior photocoagulation treatment of it, with or without residual new vessels), the primary EDIC retinopathy outcome, was very good, leading to high sensitivity (96%-98%), specificity (99%), and κ (0.95-0.96) for the PDR category.

Figure 1. Cross-tabulation of film and digital gradings of final Early Treatment Diabetic Retinopathy Study scale based on person level of 310 subjects with gradable dual image types. Exact agreement, n = 158 (51%); within 1 step, n = 255 (82.3%); within 2 steps, n = 293 (94.5%). κ = 0.44, SE = 0.03, 95% confidence interval = 0.38-0.50; weighted κ = 0.70, SE = 0.02, 95% confidence interval = 0.65-0.74; weights are 1 for complete agreement, 0.75 for 1-step, 0.5 for 2-step, and 0 for all other disagreement. [Cross-tabulation matrix not reproduced.]

Figure 2. Cross-tabulation of film and digital gradings of final Early Treatment Diabetic Retinopathy Study scale based on eye level of 310 subjects with gradable dual-image types (n = 628). Exact agreement, n = 395 (63%); within 1 step, n = 591 (94.1%); within 2 steps, n = 619 (98.6%). Level 60 (scars of photocoagulation for proliferative diabetic retinopathy [DR] or severe nonproliferative DR without residual new vessels) and level 61 (mild retinal new vessels, with or without photocoagulation scars) are shown separately here rather than being pooled (into mild proliferative DR) as they are when change on the scale is calculated. κ = 0.52, SE = 0.02, 95% confidence interval = 0.47-0.57; weighted κ = 0.74, SE = 0.02, 95% confidence interval = 0.71-0.78; weights are 1 for complete agreement, 0.75 for 1-step, and 0 for all other disagreement. [Cross-tabulation matrix not reproduced.]
This result may be explained in part by panretinal photocoagulation scars, easily detected in images of either type in 25 of the 35 patients with mild proliferative DR. Proliferation consisting solely of early new vessels is sometimes more difficult to detect in digital than film images, although there was agreement on presence in 8 of 10 such eyes. Results for the severe NPDR (or worse) category could not be accurately determined because only 1 of the 310 participants was classified as having severe NPDR, and only using film (Figure 1). Similarly, the low sensitivity observed for CSME (50%) is of uncertain significance owing to low prevalence. There were very few subjects with no retinopathy in either eye (10 by film only, 5 by digital only, and 13 by both; Figure 1). Thus, the low specificity observed for the “any retinopathy” threshold (72%) is not statistically reliable.

Table 3 presents the agreement between digital and film grading regarding the effect of former DCCT treatment assignment (standard vs intensive glycemic control) on the risk of any degree of PDR, at the dual-imaging visit, among the 302 participants free of PDR at DCCT closeout. Multivariate logistic regression revealed an almost identical treatment effect from film and digital gradings. Adjusted odds ratios (ORs) for risk of PDR, conventional vs intensive, were 1.7 for film (95% confidence interval, 0.7-4.1; P = .27) and 1.7 for digital (95% confidence interval, 0.7-4.1; P = .22). Models were adjusted for primary or secondary cohort (no retinopathy or retinopathy at DCCT baseline), diabetes duration at DCCT baseline, hemoglobin A1c levels at DCCT eligibility, and retinopathy levels at DCCT closeout.

Additional multivariate logistic regression models on other retinopathy categories (Table 4) showed similar results.
Adjusted ORs of conventional vs intensive treatment are comparable between film and digital at various levels: for further 3-step or greater progression, film OR was 1.6 (P = .07) vs digital, 1.5 (P = .10); for mild NPDR or worse, film OR was 1.5 (P = .22) vs digital, 1.5 (P = .22); and for moderate NPDR or worse, film OR was 1.7 (P = .09) vs digital, 1.8 (P = .06). The 3-step or greater progression from DCCT baseline shows the largest discrepancy between image media, with adjusted ORs of 1.9 for film (P = .05) and 1.5 for digital (P = .18).

RELIABILITY OF κ ACROSS CLINICS

Comparison of κ for the dichotomous DCCT/EDIC DR outcomes across clinics via the Cochran test of homogeneity23 showed no significant difference among the 20 clinics from the United States and Canada (eFigure 3).

Table 2. Reliability of Digital-Film Photography Grading in EDIC (N = 310)

Retinopathy Outcome | Prevalence, Film, % | Prevalence, Digital, % | Agreement Rate, % | Sensitivity, % | Specificity, % | False-Positive Rate, % | False-Negative Rate, % | κ (95% CI)a
3-Step progression from DCCT baseline | 47.1 | 47.7 | 88 | 88 | 88 | 12 | 12 | 0.75 (0.68-0.83)
Further 3-step progression from DCCT closeout | 32.9 | 31.3 | 90 | 82 | 94 | 6 | 18 | 0.77 (0.69-0.85)
Any retinopathy >10/10 | 94.2 | 92.6 | 95 | 97 | 72 | 28 | 3 | NR
Mild NPDR or worse ≥20/20 | 58.7 | 58.7 | 86 | 88 | 84 | 16 | 12 | 0.72 (0.64-0.80)
Moderate NPDR or worse ≥35/35 | 37 | 33 | 86 | 75 | 92 | 8 | 25 | 0.69 (0.60-0.77)
Severe NPDR or worse ≥47/47 | 14.5 | 14.5 | 99 | 96 | 99 | 1 | 4 | 0.95 (0.90-1.00)
PDR or worse ≥53/53 | 14.2 | 14.5 | 99 | 98 | 99 | 1 | 2 | 0.96 (0.92-1.00)
CSMEb | 7.3 | 6.0 | 94 | 50 | 98 | 3 | 50 | NR

Abbreviations: CI, confidence interval; CSME, clinically significant macular edema; DCCT, Diabetes Control and Complications Trial; EDIC, Epidemiology of Diabetes Interventions and Complications Study; NPDR, nonproliferative diabetic retinopathy; NR, not reported; PDR, proliferative diabetic retinopathy.
a Cohen κ.18 Cohen κ is not reliable when the prevalence of an outcome is close to 1 or 0.22
b N = 302 for CSME.

Table 3. Logistic Regression of DCCT Treatment Effect on Risk of Any Degree of PDR Based on Film vs Digital Photography at EDIC Years 14 Through 16 Among the Participants Free of PDR at DCCT Closeout After Adjustment for the Other Risk Factors (N = 302)

Covariate | Film-Based PDR, OR (95% CI) | P Value | Digital-Based PDR, OR (95% CI) | P Value
At DCCT entry:
HbA1c level at DCCT eligibility, % | 1.2 (0.9 to 1.5) | .28 | 1.1 (0.9 to 1.5) | .38
Cohort, primary (vs secondary) | 0.9 (0.3 to 3.1) | .86 | 0.9 (0.3 to 2.8) | .82
Type 1 diabetes mellitus duration, y | 0.9 (0.8 to 1.0) | .12 | 0.9 (0.8 to 1.0) | .10
At DCCT closeout, retinopathy level:
Microaneurysms (vs no retinopathy) | 3.0 (0.3 to 28.2) | .34 | 3.9 (0.4 to 35.2) | .23
Mild NPDR (vs no retinopathy) | 24.9 (2.8 to 220) | .004 | 27.3 (3.1 to 238) | .003
Moderate or severe (vs no retinopathy) | 129.8 (11.5 to >999) | <.001 | 116.1 (10.5 to >999) | <.001
DCCT treatment group, conventional (vs intensive) | 1.7 (0.7 to 4.1) | .27 | 1.7 (0.7 to 4.1) | .22

Abbreviations: CI, confidence interval; DCCT, Diabetes Control and Complications Trial; HbA1c, hemoglobin A1c; NPDR, nonproliferative diabetic retinopathy; OR, odds ratio; PDR, proliferative diabetic retinopathy.
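The adjusted ORs above come from multivariate logistic models fitted separately to the film- and digital-based outcomes. As a simplified, purely illustrative sketch (unadjusted, with hypothetical counts; not the study's model), an OR and Wald 95% CI can be computed from a 2x2 table:

```python
import math

def odds_ratio_wald(a, b, c, d, z=1.96):
    """Unadjusted odds ratio with a Wald confidence interval for a
    2x2 table: a = exposed with outcome, b = exposed without,
    c = unexposed with outcome, d = unexposed without.

    Sketch only; assumes all four cells are nonzero."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# Hypothetical counts, for illustration only.
or_, lo, hi = odds_ratio_wald(20, 80, 10, 90)
```

A multivariate model additionally conditions on the covariates listed in Table 3 (cohort, diabetes duration, HbA1c, closeout retinopathy level), which is why its adjusted ORs differ from what this unadjusted arithmetic would give.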
HISTORICAL REPRODUCIBILITY OF GRADING DR FROM FILM IN DCCT/EDIC

Weighted κ statistics for reproducibility on the ordinal ETDRS scale derived from film gradings in annual quality control exercises ranged from 0.72 to 0.84 in the DCCT2 and from 0.69 to 0.80 in the EDIC, values somewhat greater than the κ of 0.70 from the film vs digital comparison (Figure 1) using the same weighting scheme. For most dichotomous outcomes there were similar differences; for 3-step or greater progression, presence of mild NPDR or worse, and presence of moderate NPDR or worse, κ values ranged from 0.80 to 0.93 in DCCT and EDIC (Table 5), while corresponding values for film vs digital comparisons ranged from 0.69 to 0.77 (Table 2).

Table 5. Reliability of Film Photography Grading in DCCT and EDIC

Retinopathy Outcome | DCCT: Patients/Regradings, No. | DCCT: Prevalence Rate, % | DCCT: κ (95% CI)a | EDIC: Patients/Regradings, No.b | EDIC: Prevalence Rate, % | EDIC: κ (95% CI)a
3-Step progression from DCCT baseline | NA | NA | NA | 49/10 | 61 | 0.91 (0.87-0.95)
Any retinopathy >10/10 | 60/7 | 78 | 0.74 (0.68-0.79) | 49/10 | 88 | 0.87 (0.83-0.91)
Mild NPDR or worse ≥20/20 | 60/7 | 46 | 0.80 (0.75-0.85) | 49/10 | 82 | 0.93 (0.89-0.97)
Moderate NPDR or worse ≥35/35 | 60/7 | 23 | 0.83 (0.78-0.88) | 49/10 | 70 | 0.91 (0.87-0.95)
Severe NPDR or worse ≥47/47 | 42/4 | 29 | 0.66 (0.54-0.78) | 49/10 | 45 | 0.71 (0.67-0.75)
PDR or worse ≥53/53 | 42/4 | 13 | 0.72 (0.60-0.84) | 49/10 | 30 | 0.82 (0.78-0.86)
High-risk characteristics or worse ≥65/65 | 42/4 | 7 | 0.90 (0.78-1.02) | 49/10 | 11 | 0.85 (0.81-0.89)
CSME | 42/4 | 14 | 0.91 (0.79-1.02) | 49/10 | 29 | 0.65 (0.62-0.69)

Abbreviations: CI, confidence interval; CSME, clinically significant macular edema; DCCT, Diabetes Control and Complications Trial; EDIC, Epidemiology of Diabetes Interventions and Complications Study; NA, not applicable; NPDR, nonproliferative diabetic retinopathy; PDR, proliferative diabetic retinopathy.
a Fleiss κ for multiple raters.25
b One of the 50 subjects had ungradable photographs and was not included in the analysis.

Table 4. Logistic Regression of DCCT Treatment Effect on Risk of Various Retinopathy Categories Based on Film vs Digital Photography at EDIC Years 14 Through 16 Among the Participants Free of Respective Complications at DCCT Closeout After Adjustment for the Other Risk Factorsa

Retinopathy Category | Image | Participants, No.b | Prevalence, Intensive, %c | Prevalence, Conventional, %c | Adjusted OR of Conventional vs Intensive (95% CI) | P Value
3-Step progression from DCCT baseline | Film | 195 | 32.7 | 43.9 | 1.9 (1.0-3.5) | .05
 | Digital | | 34.5 | 41.5 | 1.5 (0.8-2.8) | .18
Further 3-step progression from DCCT closeout | Film | 304 | 28.4 | 37.6 | 1.6 (0.9-2.7) | .07
 | Digital | | 27.1 | 35.6 | 1.5 (0.9-2.6) | .10
Mild NPDR or worse | Film | 195 | 42.5 | 51.2 | 1.5 (0.8-2.6) | .22
 | Digital | | 41.6 | 50.0 | 1.5 (0.8-2.6) | .22
Moderate NPDR or worse | Film | 274 | 23.8 | 36.6 | 1.7 (0.9-3.0) | .09
 | Digital | | 18.9 | 31.3 | 1.8 (1.0-3.3) | .06
PDR or worse | Film | 302 | 7.8 | 16.2 | 1.7 (0.7-4.1) | .27
 | Digital | | 7.8 | 16.9 | 1.7 (0.7-4.1) | .22

Abbreviations: DCCT, Diabetes Control and Complications Trial; NPDR, nonproliferative diabetic retinopathy; PDR, proliferative diabetic retinopathy.
a The same logistic models as in Table 3 were used, with the respective retinopathy category as the outcome and the same covariates adjusted.
b Patients free of the respective complications at DCCT closeout were included. For further 3-step progression, those with scatter photocoagulation in DCCT were excluded.
c Prevalences of the respective complications within each treatment group of those free of the corresponding complications at DCCT closeout are reported.
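The film reproducibility values in Table 5 are Fleiss κ statistics computed over replicate gradings of the same subjects. A compact sketch of that calculation (illustrative only; the input layout and names are hypothetical, not the study's code):

```python
def fleiss_kappa(counts):
    """Fleiss kappa for n subjects each rated m times into k categories.

    `counts` is an n x k matrix: counts[i][j] = number of gradings
    placing subject i in category j; each row sums to m."""
    n, m = len(counts), sum(counts[0])
    k = len(counts[0])
    # Overall proportion of gradings falling in each category.
    p = [sum(row[j] for row in counts) / (n * m) for j in range(k)]
    # Mean per-subject agreement across all pairs of gradings.
    p_bar = sum((sum(c * c for c in row) - m) / (m * (m - 1))
                for row in counts) / n
    # Chance agreement from the pooled category proportions.
    p_e = sum(pj * pj for pj in p)
    return (p_bar - p_e) / (1 - p_e)
```

For a dichotomous outcome such as "PDR or worse," each row would hold the counts of replicate gradings calling the outcome present vs absent for one subject.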
In contrast, the film vs film quality control exercises produced lower κ values than the film vs digital comparison study for presence of PDR and presence of severe NPDR or worse, as might be expected in quality control sets selected to include eyes in level 53 and to minimize eyes with photocoagulation scars.

COMMENT

From the DCCT/EDIC perspective, the most important finding of this substudy is that, in the subset of subjects with dual images, the effects of DCCT intensive (relative to conventional) treatment on most measures of retinopathy progression were reasonably similar when assessed from digital compared with film images (Tables 3 and 4). For assessment of retinopathy severity level along the multistep ETDRS scale, agreement between gradings from film and digital images was also substantial (κ = 0.70) but appeared to be slightly lower than corresponding film vs film comparisons in the DCCT (κ = 0.72-0.84) and the more contemporaneous EDIC (κ = 0.69-0.80).

The comparability of grading digital vs film images for classification of DR severity has been described previously by others.24,26-29 While some previous studies used the full ETDRS 7SF (7 standard field) imaging procedure,27,29 others modified it by reducing the number of 30° fields or substituting wide-angle fields, switching to monochrome rather than color, dispensing with the stereoscopic effect (in peripheral fields, or entirely), and/or using nonmydriatic (via dark adaptation) rather than pharmacologic pupillary dilation.24,26,28 Many of these studies were primarily oriented toward screening programs for the purpose of referring persons with clinically important retinopathy to ophthalmologic care rather than conducting clinical trials or epidemiological studies. Most of these articles concluded that the comparability between film and digital grading was adequate to justify adoption of the digital medium for various clinical purposes.
Thanks to these precedent studies, we were made aware of the limitations in emerging digital practice and were able to address some of these difficulties. The DCCT/EDIC digital vs film ancillary study is the first formal comparison to be reported by an ongoing, multicenter clinical trial or epidemiological study. Several of our study design and implementation features may have enhanced the comparability between film and digital imaging for DR assessment: modern digital fundus cameras with higher spatial resolution, photographers and camera systems certified for digital performance, full ETDRS 7SF stereo imaging, standardized tonal enhancement of digital images to a filmlike standard, and certified graders at a central reading center experienced in evaluating DR for many years with film and for the past few years with digital images.

A weakness of our study was the small number of cases with severe NPDR, severe PDR, and mild PDR in the absence of photocoagulation scars, resulting in lower power to examine differences between digital and film in these categories. We recruited all subjects within a specified time period rather than recruiting a stratified sample, and these levels are infrequent in our subjects. In most populations, severe NPDR is rare, being an acute stage through which eyes pass relatively quickly on their way to developing PDR.15

For retinopathy studies requiring discrimination between all of the individual levels on the ETDRS severity scale, we emphasize that we found worse performance currently with digital images at 2 points on the DR scale. For the presence of any retinopathy (driven at the lower end by microaneurysms only), digital specificity was 72% and its false-positive rate was 28%. For moderate NPDR (levels 43 and 47, driven mostly by intraretinal microvascular abnormalities), digital sensitivity was 75% and its false-negative rate was 25%.
Our more recent work suggests that supplementing the view of the full-color image with the monochromatic green channel (the latter extracted from the former) improves performance of digital photography.30 The green channel view maximizes the contrast of DR abnormalities against the retinal pigment epithelial background compared with the full-color view.

For studies that require evaluation of macular edema from fundus photography rather than optical coherence tomography, we must also caution that sensitivity for detecting CSME with digital images appeared to be lower than with film, although this condition was too infrequent in our sample to draw robust conclusions. Our digital vs film results for CSME suggest high specificity (98%) but low sensitivity (50%) and a high false-negative rate (50%). Of note, most present-day clinical trials in ophthalmology now study diabetic macular edema primarily with optical coherence tomography, which measures retinal thickening objectively, rather than with grading of stereo color photographs (as done historically). However, the DCCT/EDIC has not yet elected to add optical coherence tomographic examination, given the low incidence of CSME in our cohort. Work is ongoing at the reading center to improve grading of macular edema from digital photographs.

Given our ancillary study's finding of overall comparability of digital vs film gradings for evaluation of DR severity, the DCCT/EDIC Research Group and its external advisory committee voted in 2009 to approve the switch from film to digital imaging. At present, all 28 clinics have changed to digital photography.

In the context of a multicenter, long-term study, we found that ETDRS severity levels (the major DCCT/EDIC retinopathy outcomes) and our study conclusions drawn from them are comparable when DR is graded from digital rather than film images.
Overall, these results support transition from the film to the digital imaging medium for research documentation of diabetic retinopathy.

Submitted for Publication: August 30, 2010; final revision received August 30, 2010; accepted October 5, 2010.
Correspondence: Larry D. Hubbard, MAT, Department of Ophthalmology and Visual Sciences, University of Wisconsin, Madison, 8010 Excelsior Dr, Ste 100, Madison, WI 53717-0568 (hubbard@rc.opth.wisc.edu).
Group Information: A complete list of participants in the Diabetes Control and Complications Trial/Epidemiology of Diabetes Interventions and Complications Study research group was published in Arch Ophthalmol. 2008;126(12):1713.
Financial Disclosure: The authors report contributions from Abbott, Animas, Aventis, BD Bioscience, Bayer (donated one time in 2008), Can-AM, Eli Lilly, Lifescan, Medtronic Minimed, Omron, Roche, and OmniPod to the trial, not attributed to any individual author.
Funding/Support: This study is supported by contracts with the Division of Diabetes, Endocrinology, and Metabolic Diseases of the National Institute of Diabetes and Digestive and Kidney Diseases (DK 034818), the National Eye Institute, the National Institute of Neurological Disorders and Stroke, the General Clinical Research Centers Program, the Clinical and Translational Science Awards Program, the National Center for Research Resources, and by Genentech through a Cooperative Research and Development Agreement with the National Institute of Diabetes and Digestive and Kidney Diseases.
Online-Only Material: The eTable and eFigures are available at http://www.archophthalmol.com.

REFERENCES

1. The Diabetes Control and Complications Trial Research Group.
The effect of intensive treatment of diabetes on the development and progression of long-term complications in insulin-dependent diabetes mellitus. N Engl J Med. 1993;329(14):977-986.
2. The effect of intensive diabetes treatment on the progression of diabetic retinopathy in insulin-dependent diabetes mellitus: the Diabetes Control and Complications Trial. Arch Ophthalmol. 1995;113(1):36-51.
3. Diabetes Control and Complications Trial Research Group. Progression of retinopathy with intensive versus conventional treatment in the Diabetes Control and Complications Trial. Ophthalmology. 1995;102(4):647-661.
4. Epidemiology of Diabetes Interventions and Complications (EDIC) Research Group. Epidemiology of Diabetes Interventions and Complications (EDIC): design, implementation, and preliminary results of a long-term follow-up of the Diabetes Control and Complications Trial cohort. Diabetes Care. 1999;22(1):99-111.
5. The Diabetes Control and Complications Trial/Epidemiology of Diabetes Interventions and Complications Research Group. Retinopathy and nephropathy in patients with type 1 diabetes four years after a trial of intensive therapy. N Engl J Med. 2000;342(6):381-389.
6. White NH, Sun W, Cleary PA, et al. Prolonged effect of intensive therapy on the risk of retinopathy complications in patients with type 1 diabetes mellitus: 10 years after the Diabetes Control and Complications Trial. Arch Ophthalmol. 2008;126(12):1707-1715.
7. Writing Team for the Diabetes Control and Complications Trial/Epidemiology of Diabetes Interventions and Complications Research Group. Sustained effect of intensive treatment of type 1 diabetes mellitus on development and progression of diabetic nephropathy: the Epidemiology of Diabetes Interventions and Complications (EDIC) study. JAMA. 2003;290(16):2159-2167.
8. Martin CL, Albers J, Herman WH, et al; DCCT/EDIC Research Group.
Neuropathy among the Diabetes Control and Complications Trial cohort 8 years after trial completion. Diabetes Care. 2006;29(2):340-344.
9. Nathan DM, Cleary PA, Backlund JY, et al; Diabetes Control and Complications Trial/Epidemiology of Diabetes Interventions and Complications (DCCT/EDIC) Study Research Group. Intensive diabetes treatment and cardiovascular disease in patients with type 1 diabetes. N Engl J Med. 2005;353(25):2643-2653.
10. Donner A, Eliasziw M. A goodness-of-fit approach to inference procedures for the kappa statistic: confidence interval construction, significance-testing and sample size estimation. Stat Med. 1992;11(11):1511-1519.
11. Sim J, Wright CC. The kappa statistic in reliability studies: use, interpretation, and sample size requirements. Phys Ther. 2005;85(3):257-268.
12. Diabetic Retinopathy Study. Report number 6: design, methods, and baseline results; report number 7: a modification of the Airlie House classification of diabetic retinopathy: prepared by the Diabetic Retinopathy Study. Invest Ophthalmol Vis Sci. 1981;21(1, pt 2):1-226.
13. Early Treatment Diabetic Retinopathy Study Research Group. Grading diabetic retinopathy from stereoscopic color fundus photographs: an extension of the modified Airlie House classification: ETDRS report number 10. Ophthalmology. 1991;98(5)(suppl):786-806.
14. Hubbard LD, Danis RP, Neider MW, et al; Age-Related Eye Disease 2 Research Group. Brightness, contrast, and color balance of digital versus film retinal images in the Age-Related Eye Disease Study 2. Invest Ophthalmol Vis Sci. 2008;49(8):3269-3282.
15. Gardner TW, Sander B, Larsen ML, et al. An extension of the Early Treatment Diabetic Retinopathy Study (ETDRS) system for grading of diabetic macular edema in the Astemizole Retinopathy Trial. Curr Eye Res. 2006;31(6):535-547.
16. Early Treatment Diabetic Retinopathy Study Research Group. Fundus photographic risk factors for progression of diabetic retinopathy: ETDRS report number 12.
Ophthalmology. 1991;98(5)(suppl):823-833.
17. Shoukri M. Measures of Interobserver Agreement. Boca Raton, FL: Chapman & Hall/CRC; 2004.
18. Cohen J. Weighted kappa: nominal scale agreement with provision for scaled disagreement or partial credit. Psychol Bull. 1968;70(4):213-220.
19. Landis JR, Koch GG. The measurement of observer agreement for categorical data. Biometrics. 1977;33(1):159-174.
20. Bhapkar VP. A note on the equivalence of two test criteria for hypotheses in categorical data. J Am Stat Assoc. 1966;61:228-235. doi:10.2307/2283057.
21. McNemar Q. Note on the sampling error of the difference between correlated proportions or percentages. Psychometrika. 1947;12(2):153-157.
22. Feinstein AR, Cicchetti DV. High agreement but low kappa, I: the problems of two paradoxes. J Clin Epidemiol. 1990;43(6):543-549.
23. Cochran WG. The combination of estimates from different experiments. Biometrics. 1954;10:101-120. doi:10.2307/3001666.
24. Lin DY, Blumenkranz MS, Brothers RJ, Grosvenor DM. The sensitivity and specificity of single-field nonmydriatic monochromatic digital fundus photography with remote image interpretation for diabetic retinopathy screening: a comparison with ophthalmoscopy and standardized mydriatic color photography. Am J Ophthalmol. 2002;134(2):204-213.
25. Fleiss JL. Measuring nominal scale agreement among many raters. Psychol Bull. 1971;76:378-382. doi:10.1037/h0031619.
26. Bursell SE, Cavallerano JD, Cavallerano AA, et al; Joslin Vision Network Research Team. Stereo nonmydriatic digital-video color retinal imaging compared with Early Treatment Diabetic Retinopathy Study seven standard field 35-mm stereo color photos for determining level of diabetic retinopathy. Ophthalmology. 2001;108(3):572-585.
27. Fransen SR, Leonard-Martin TC, Feuer WJ, Hildebrand PL; Inoveon Health Research Group. Clinical evaluation of patients with diabetic retinopathy: accuracy of the Inoveon diabetic retinopathy-3DT system. Ophthalmology. 2002;109(3):595-601.
28. Rudnisky CJ, Tennant MT, Weis E, Ting A, Hinz BJ, Greve MD. Web-based grading of compressed stereoscopic digital photography versus standard slide film photography for the diagnosis of diabetic retinopathy. Ophthalmology. 2007;114(9):1748-1754.
29. Li HK, Hubbard LD, Danis RP, et al. Digital versus film fundus photography for research grading of diabetic retinopathy severity. Invest Ophthalmol Vis Sci. 2010;51(11):5846-5852.
30. Reimers JL, Gangaputra S, Esser B, et al. Green channel vs color retinal images for grading diabetic retinopathy in DCCT/EDIC [ARVO abstract 2285]. Invest Ophthalmol Vis Sci. 2010;51:e-Abstract 2285. doi:10.1167/iovs.10-6303.

----

Biosystems Engineering (2003) 84 (4), 375-391. doi:10.1016/S1537-5110(03)00031-X
Available online at www.sciencedirect.com
PA - Precision Agriculture

REVIEW PAPER

Precision Farming of Cereal Crops: a Review of a Six Year Experiment to develop Management Guidelines

R.J. Godwin 1; G.A. Wood 1; J.C. Taylor 1; S.M. Knight 2; J.P. Welsh 1
1 Cranfield University at Silsoe, Silsoe, Bedfordshire MK45 4DT, UK; e-mail of corresponding author: r.godwin@cranfield.ac.uk
2 Arable Research Centres, Shuttleworth Centre, Old Warden Park, Biggleswade, Bedfordshire SG18 9EA, UK

(Received 2 December 2002; accepted in revised form 6 February 2003)

This paper summarises the results of a 6-yr study, involving five principal fields in England covering 13 soil types, which represent approximately 30% of the soils on which arable crops are grown. The aim of the project was to determine guidelines to maximise profitability and minimise environmental impact of cereal production using precision farming.
The study focused on the interaction between soil/water variability and nitrogen applications. The earlier work concentrated on identifying the in-field variability and the development of a ‘real-time’ sensing technique, while the later work compared spatially controlled inputs with uniform agronomic practice. A number of techniques were used to decide upon the variable application strategy. These included yield variability from historic yield maps, variability in shoot density in the spring, and variability in the subsequent development of the canopy; the latter two enabling the development of the concept of ‘real-time’ agronomic management.

In uniformly managed fields, there were considerable differences in the spatial patterns and magnitudes of yield variation between fields and seasons, which linked to soil variation and annual differences in rainfall and earlier field operations. Electromagnetic induction (EMI) was found to be a suitable surrogate for detailed soil coring, and cluster analysis of EMI and yield data provided an objective method to subdivide fields into management zones for targeted sampling of soil nutrients and pH, and for estimating replenishment levels of P and K fertilisers. Considerable reductions in the cost of soil sampling were possible with this approach. Yield maps, however, were not a useful basis for determining a variable nitrogen application strategy. It was shown that the spatial variation in canopy development within a field can be effectively determined using aerial digital photography for ‘real-time’ management. This technique can improve the efficiency of cereal production through managing variations in the crop canopy and gave an average economic return of £22 ha⁻¹ while reducing the nitrogen surplus by approximately one-third. Benefits from spatially variable application of nitrogen outweigh the costs of the investment in precision farming systems for cereal farms greater than 75 ha for systems costing £4500.
This area increases in size in proportion to the capital cost. Integrating the economic costs with the proportion of the farmed area that has benefit potential enables the break-even yield increase to be estimated. Typically a farm with 250 ha of cereals where 20% of the area could respond positively to spatially variable nitrogen would need to achieve a yield increase of 1·1 t ha⁻¹ on that 20% to break even. These economic advantages linked to the environmental benefits should improve the longer term sustainability of cereal production. Common problems, such as water-logging and fertiliser application errors, should be corrected prior to the spatial application of fertilisers and other inputs. From the overall results a set of practical guidelines has been incorporated in a single decision support tool to help farmers. © 2003 Silsoe Research Institute. All rights reserved. Published by Elsevier Science Ltd.
1. Introduction
Precision farming is the term given to a method of crop management by which areas of land or crop within a field are managed with different levels of input. The potential benefits are: (i) the economic margin from crop production may be increased by improvements in yield or a reduction in inputs; (ii) the risk of environmental pollution from agro-chemicals applied at levels greater than optimal can be reduced; and (iii) greater assurance from precise targeting and recording of field applications to improve traceability. These benefits are excellent examples of where both economic and environmental considerations are working together.
This paper provides a review of a 6-yr study to develop practical guidelines for implementing precision farming technology for the UK cereal industry by: (i) developing a methodology for identifying causes of within-field variation; (ii) exploring the use of remote sensing methods to enable management decisions to be made in ‘real-time’ during growth of the crop; (iii) determining potential economic benefits of precision farming; and (iv) collaborating with farmers to ensure that research findings are appropriate for adoption. The emphasis of this work was the development of practical guidelines to assist management of ever-increasing sizes of enterprise when economic margins are under great pressure. It is the technology that assists in recognising the spatial boundaries, together with equipment for yield recording and the variable application of agronomic inputs, that has re-kindled the interest in this approach to farming in recent years. The main catalyst for this was the advent of affordable differential global positioning systems, which enabled a number of yield mapping systems to appear on the market from 1990. Whilst there have been, and still are, challenges to be addressed relating to the hardware and software aspects of the precision farming system, the single greatest challenge is in interpreting information from yield maps, crop performance records (both historic and ‘real-time’) and soil analysis into practical strategies for the variable application of crop treatments for an individual field.
2. Approach
A summary of factors that could influence the yield of crops in a given location, developed by Earl et al. (1996), is presented in Table 1. Whilst little control can be exercised over factors on the left of the table, they have to be considered as they can have major effects upon yield.
The factors on the right, however, can be manipulated in a spatially variable manner and could lead to economic benefits from either (i) yield improvements due to changes in input or (ii) savings in input costs without an adverse effect upon crop yield. The duration of the study extended over six cropping seasons and included the harvests in 1995–2000. The fields detailed in Table 2 were selected to provide a range of case studies and included soils typical of approximately 30% of the land used for arable production in England and Wales.
Table 1. Factors influencing yield variation
Little control: Soil texture; Climate; Topography; Hidden features
Possible control: Soil structure; pH levels; Available water; Trace elements; Water-logging; Weed competition; Macro nutrients; Pests and diseases
Table 2. Field details and location (field name; location; soil series*; cropping pattern)
Key fields, all years:
Far Sweetbrier; Old Warden, Bedfordshire; Hanslope; winter wheat, oilseed rape rotation
Onion Field; Houghton Conquest, Bedfordshire; Denchworth/Oxpasture/Evesham; continuous winter wheat
Trent Field; Goodworth Clatford, Hampshire; Andover/Panholes; continuous winter wheat
Twelve Acres; Hatherop, Gloucestershire; Sherborne/Moreton/Didmarton; continuous winter wheat
Supplementary fields:
Short Lane; Gamlingay, Cambridgeshire; Wickham/Ludford/Maplestead; continuous winter barley
Short Wood; Gamlingay, Cambridgeshire; Hanslope/Denchworth; winter wheat
Far Highlands; Old Warden, Bedfordshire; Wickham/Evesham; winter wheat
* After Jarvis et al. (1984) and Hodge et al. (1984).
These fields had predominantly been in cereals for several years prior to the experimental work. At the outset, it was agreed that the reasons for any underlying field variation needed to be established prior to managing the crop in a spatially variable manner. Hence, uniform ‘blanket’ treatments were applied to the ‘key’ fields in the harvest seasons of 1995–1997.
Yield maps for these seasons provided an indication of crop yield variation both in space and time. Since the 1997–1998 cropping season, the effects of variable inputs were studied on all fields shown in Table 2, with the exception of the pilot study in Short Lane, which started in 1996–1997. A number of fields planted with a uniform seed rate were subjected to variable inputs of nitrogen. An additional two fields, Onion Field and Far Highlands, had variable nitrogen inputs applied across a range of seed rates that had been sown to create different crop canopy structures.
3. Inherent variability
3.1. Crop yield
Typical variations in crop yield are presented in Fig. 1, which shows that there is some similarity over the 3-yr period. The spatial trend map (average yield), after Blackmore (2000), for the period shows that, on average, the yield range for this particular field is in excess of ±20% of the mean, with the higher yielding zones to the west and the lower yielding zones to the east of the 100% (or mean) contour. These maps have been corrected using algorithms developed by Blackmore and Moore (1999) to compensate for field operational artefacts associated with combine harvester grain filling at the headlands and crop harvesting widths of less than the full width of the combine harvester cutter bar. In uniformly managed fields, there were considerable differences in the spatial patterns and magnitudes of yield variation between fields and seasons, which linked to soil variation and annual differences in rainfall and earlier field operations. Further analysis of the data over a 6-yr period by Blackmore et al. (2003) shows that there is compensation in the spatial distribution of crop yield such that the spatial variability of the cumulative yield reduces with time.
3.2. Soil types
The fields were initially surveyed at a commercial detail level of approximately 1 auger hole ha⁻¹ to provide an overview of soil textural and profile variation.
These were complemented by ‘targeted’ profile pit descriptions as given in Earl et al. (2003). The locations of the profile pits were selected to encompass: (i) the range of yields observed in the yield maps of 1994/95 and 1995/96; (ii) the density of the crop from aerial digital photography (Wood et al., 2003a) captured in May 1996; and (iii) soil maps based upon auger sampling at a 100 m grid spacing. These pits, 3 m long by 1 m wide by 1.5 m deep, were excavated to provide detailed information for soil classification and information on crop rooting depth and soil drainage status. Excavations such as these should be viewed as a one-off investment, as photographs taken of these geo-referenced soil profiles can be passed to successive generations and have greater impact than traditional written profile descriptions. Further studies with both soil coring apparatus (to a depth of 1 m) and electromagnetic induction (EMI) equipment increased the resolution to define soil textural boundaries. The latter technique is particularly useful for differentiating between soil textures as shown in Fig. 2, where the higher levels of conductivity indicate higher moisture content soils, which, if the survey is conducted at field capacity, would indicate a greater clay content (Waine, 1999), as shown in Godwin and Miller (2003). Thus electromagnetic induction observations correlate well with assessments of soil series where these are differentiated by the soil texture and water holding properties. Objective techniques, using cluster analysis, have been developed which enable potential management zones to be determined using historic yield and EMI data (Taylor et al., 2003). Differences in soil nutrient levels have been identified between the management zones and, hence, form a basis for targeted sampling of soil nutrient status to reduce the cost of field sampling. Fig. 1.
Spatial trend (average yield, % of grand mean) map for yield at Trent Field, 1995–1997.
3.3. Soil fertility and crop nutrition
Detailed analyses of macro- and micro-nutrients in both soil water extract and plant tissue were conducted at approximately 50 m grid spacings, together with soil pH. These indicated that there was variation in nutrient levels in each of the fields. However, with the exception of isolated areas with low pH, the analysis by Earl et al. (2003) showed that the levels were above the commonly accepted agronomic limits.
3.4. Crop canopy
Variations in crop canopy occur both in space and time in the same field. In order to obtain consistent and reliable data for monitoring crop development for ‘real-time’ management and to explain field differences, a light aircraft was equipped with two digital cameras fitted with red (R) and near infrared (NIR) filters, as described in Wood et al. (2003a). Field images obtained from aerial digital photography (ADP) from a height of 1000 m give a pixel resolution of 0·5 m by 0·5 m. Normalised difference vegetation index (NDVI) values were estimated from the following equation:
I_NDV = (l_NIR − l_R)/(l_NIR + l_R) (1)
where: I_NDV is the normalised difference vegetation index; and l_R and l_NIR are the red and near infrared spectral wavebands. The resulting images, such as Fig. 3, show the effect of variations in crop development immediately prior to the first application of nitrogen. These images (i) are immediately valuable in discerning patterns of field variability, and (ii) provide detailed spatial data on crop tiller/shoot density. These data, when calibrated against detailed agronomic measurements at targeted locations, were used in near ‘real-time’ to estimate crop condition and potential nutritional requirements as described in Wood et al. (2003b).
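The calculation in Eqn (1), together with a ground calibration of the kind just described, can be sketched in a few lines. The linear calibration coefficients below are illustrative assumptions, not values from the study.

```python
def ndvi(nir, red):
    """Normalised difference vegetation index, Eqn (1): (NIR - R)/(NIR + R)."""
    return (nir - red) / (nir + red)

def shoot_density(nir, red, slope=2500.0, intercept=-400.0):
    """Hypothetical linear calibration of NDVI to shoot density (shoots per m^2).

    In practice the slope and intercept would come from quadrat counts at
    targeted sampling points; the values here are illustrative only.
    """
    return slope * ndvi(nir, red) + intercept

print(ndvi(0.5, 0.1))          # dense, healthy canopy pixel
print(shoot_density(0.5, 0.1))
```

NDVI is bounded between −1 and 1, so a single linear calibration per flight date is usually sufficient once a handful of ground-truth points is available.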
Extension of this principle to farm-scale operations results in effective calibration between the crop indicators and NDVI using eight sampling points (Wood et al., 2003a). The cost of extending this technique to commercial practice has been estimated by Godwin et al. (2003) at £7 ha⁻¹ for 3 flights yr⁻¹, during the January to April period, for areas of 1500 ha per flight. It has also been possible using this system to identify areas in need of spatially variable application of herbicides and plant growth regulators.
Fig. 2. Electromagnetic induction (EMI) apparent soil conductivity, Trent Field, 2 February 1999; conductivity classes 3–6, 6–8, 8–11, 11–14 and 14–50 mS m⁻¹.
Fig. 3. Normalised difference vegetation index (NDVI) image of Trent Field; shoot density classes <600, 600–800, 800–1000 and >1000 shoots m⁻².
3.5. Conclusions from the field variability studies
The major long-term causes of yield variation in the study fields were attributable to soil and its associated water holding capacity. There was variation in the availability of plant nutrients, potassium, phosphorus and the micro-nutrients in the study fields; however, in agronomic terms, these were not limiting. The stable patterns in the yield maps observed in the early years of the work changed with time to reduce the variation in cumulative yield (Blackmore et al., 2003). The aerial digital photography (ADP) system specified for this project allowed variations in crop yield components to be mapped in near ‘real time’.
4. Variable application of nitrogen
4.1. Experimental design
One of the aims of this project was to develop an experimental methodology that could be employed by farmers to determine an optimal application strategy for a given input in any particular field, in this case nitrogen. To achieve this, it was important to use standard farm machinery for the experiments.
This resulted in a move away from the traditional small plot randomised block experimental design. Pilot studies by James and Godwin (2003) in Short Lane investigated the use of a series of long treatment strips, which ran through the main areas of variation within each field. This was developed into the final design proposed by Welsh et al. (2003a, 2003b), which comprised a series of long strips running through the main areas of variation within each field, an example of which is presented in Fig. 4, where the treatment strip is interlaced with the field standard. The width of each strip was dependent upon the existing tramline system and/or the working width of the machinery available. The treatment strips were, therefore, half the width of a tramline. The fertiliser was applied using a pneumatic or liquid fertiliser applicator that was capable of operating the left and right booms independently. The strip widths used allowed the experiments to be harvested by the combine harvester without the inclusion of the tramline wheel marks. The combine was equipped with a radiometric yield sensor, with a mean instantaneous grain flow error of 1%, as given in Moore (1998).
4.2. Nitrogen response studies
These treatment strips had different rates of nitrogen applied uniformly along their complete length. The purpose of this was to provide an indication of the crop response to different levels of nitrogen in the various zones of the field, from typically low to high yielding areas. These were conducted with a uniform seed rate of 300 seeds m⁻² in 1997/98, 1998/99, and 1999/00 in Far Sweetbrier, Trent Field and Twelve Acres.
4.3.
Historic yield and shoot density studies
These treatment strips were established to test the following strategies in the same fields as in the nitrogen response studies: (i) increasing the fertiliser application to the higher, or potentially higher, yielding parts of the field whilst reducing the application to the lower yielding parts; and (ii) reducing the fertiliser application to the higher, or potentially higher, yielding parts of the field whilst increasing the application to the lower yielding parts. However, before these strategies could be implemented, the high, average and low yielding zones had to be identified. Two methods were used: (i) historic yield data, as shown in Fig. 2; and (ii) shoot density data, estimated from NDVI data, as shown in Fig. 4 (after Wood et al., 2003a). Using this approach, experimental strips (Fig. 5) were established to give the following treatments.
Historic yield 1 (HY1). High yield zone received 30% more nitrogen; average yield zone received the standard nitrogen rate; and the low yield zone received 30% less nitrogen.
Fig. 4. Plan of field experiments: 12 m uniform treatment strips (standard N rate; standard N rate +30%; standard N rate −30%) interlaced with variable treatment strips (historic yield 1, HY1; historic yield 2, HY2; shoot density 1, SD1; shoot density 2, SD2), running from the low yield to the high yield areas of variation.
Shoot density 1 (SD1). High shoot density zone received 30% more nitrogen; average shoot density zone received the standard nitrogen rate; and the low shoot density zone received 30% less nitrogen.
Historic yield 2 (HY2). High yield zone received 30% less nitrogen; average yield zone received the standard nitrogen rate; and the low yield zone received 30% more nitrogen.
Shoot density 2 (SD2).
High shoot density zone received 30% less nitrogen; average shoot density zone received the standard nitrogen rate; and the low shoot density zone received 30% more nitrogen. Standard N rate strips were located adjacent to each of the variable treatment strips to allow treatment comparisons to be made, since classical experimental design and statistical analyses with replicated plots were not possible.
4.4. Crop canopy management studies
The methodology for these studies, described by Wood et al. (2003b), was developed over 3 yr in Onion Field, but was extended to include Far Highlands in the final season. Seed rates of 150, 250, 350 or 450 seeds m⁻² were used to establish 24 m wide strips of wheat with a range of initial crop structures. In 1997/98, the impact of seed rate on subsequent variation in canopy structure, yield components and grain yield was studied separately, with a standard dose of nitrogen fertiliser applied uniformly to all strips. In the second and third years, the strips were subdivided into two 12 m wide sections, of which one received a standard field rate of nitrogen fertiliser (200 kg [N] ha⁻¹), and the other a variable amount dependent upon crop growth. Observations were made in near ‘real time’ using the aerial digital photographic technique and crop canopy measurements described in Wood et al. (2003a). Appropriate flights were made prior to each of the three nitrogen application timings in the February to May period, and crop growth (shoot populations at tillering, and canopy green area at growth stages GS30-31 and GS33) was compared with benchmarks from the Wheat Growth Guide, HGCA (1998). A default nitrogen strategy was calculated using canopy management principles for areas of the variable strips where growth was on target, and application rates were then increased or decreased along each strip where growth was above or below target, respectively.
5. Results
5.1.
Nitrogen response studies
Typical examples of the nitrogen response curves for the uniform treatments for the winter barley crop in Trent Field are given in Fig. 5 for the 3 yr of the experiment, from the data presented by Welsh et al. (2003a). This shows a significant difference in the nitrogen response curve and the optimum application rate between the two soil types in 1997/98, when the Panholes series had the greater soil moisture deficit. In the following two seasons the soil moisture deficits were lower and similar for both soil types, resulting in common yield response curves.
Fig. 5. Yield response (t ha⁻¹) to applied N (kg [N] ha⁻¹) in the Andover and Panholes soil series zones in (a) 1997/98, (b) 1998/99 and (c) 1999/00; error bars denote the yield range about the mean.
This observation was also made by James and Godwin (2003) in Short Lane, where the optimum application rate from the winter barley yield response curves of the contrasting clay loam and sandy loam soils was the same in each of the three seasons studied, despite significant variations in annual rainfall. As reported by Welsh et al. (2003b), three consecutive winter wheat crops of feed varieties were grown in Twelve Acres with two main soil series. Crops grown on Sherborne series soil produced higher yields than those on Moreton, but the optimum nitrogen rate was the same for both, and equal to the standard (200 kg [N] ha⁻¹). At Far Sweetbrier, with the uniform Hanslope series soil, the strips were arbitrarily divided into three equal zones, with Zone 1 being in the south-west part. The results of the winter/spring/winter wheat crop rotation indicated that Zone 1 had a yield maximum at the field standard rate of nitrogen; this maximum was less than that of the other two zones in 1998/99 and 1999/00.
The other two zones behaved in a similar manner and indicated yield benefits from additional nitrogen. This difference may be explained by evidence that Zone 1 was historically part of another field, which could have received a different long-term management regime.
5.2. Historic yield and shoot density studies
An example of the yield distribution along the variable treatment strips is presented in Fig. 6 (after Welsh et al., 2003a) for the HY1 and HY2 strategies. The effect of both increasing (160 kg [N] ha⁻¹) and decreasing (90 kg [N] ha⁻¹) the nitrogen application rates to the high and low yielding zones, in comparison with the field standards, can be clearly seen. This shows that for Trent Field in 1997/98 there were advantages of adding fertiliser to both the high and low yielding zones and penalties for reducing the rate. The results in Table 3, which summarises all the alternative scenarios in comparison with a standard application rate, indicate that there are no economic benefits from HY1 and HY2 in Trent Field or Twelve Acres. The reason is that the reduction in nitrogen application rate caused a significant yield loss in both the high and low yielding zones, which was not compensated for by savings in nitrogen costs. The winter, spring, winter wheat sequence of crops at Far Sweetbrier produced benefits from the historic yield (HY2) strategy, which was due to the benefit of adding nitrogen to the poorer yielding areas; these also coincided with an area of low shoot density in 1998/99, which is in agreement with the SD2 strategy and canopy management principles. Managing the crop using maps of the relative shoot density from NDVI data provided a positive benefit when more nitrogen was applied to the areas of low shoot density, and less to the high density areas (SD2), but the success of this depended on the actual shoot populations present, which differed between seasons.
This occurred because there was little variation along the strip with a low shoot density, which, in hindsight, using the principle of canopy management, would respond best to a uniform application of nitrogen. Overall, the shoot density SD2 approach, which uses a real-time assessment of the crop canopy/structure to control the nitrogen requirement, appeared to offer the greatest potential for crop production. Nitrogen strategies based on historic yield maps (HY1 and HY2) showed no or very little benefit (Welsh et al., 2003a, 2003b). Yield maps are, however, a valuable tool for: (i) the replenishment of potassium and phosphorus removed by the previous crop, and (ii) identifying the size of the zones needing particular attention from the impact of the other factors listed in Table 4.
Fig. 6. Combine harvester yield of ‘historic yield’ treatments (HY1 and HY2) compared with a standard application along the treatment strips in Trent Field; application rates of 160, 125 and 90 kg [N] ha⁻¹ (HY1) and 90, 125 and 160 kg [N] ha⁻¹ (HY2) in the high, average and low yield zones, with the standard at 125 kg [N] ha⁻¹ throughout; shaded areas are transition zones and are deleted from the analysis.
The zones needing particular attention were identified in this phase of the study and could be treated by targeted measures. Their economic impact can be significant (Godwin et al., 2003) and, if present in fields, it is recommended that they are corrected prior to the application of spatially variable fertiliser and other inputs.
5.3. Canopy management studies
The results from the pilot study by Wood et al. (2003b) in 1997/98 clearly showed that plant populations increased up to the highest seed rate, but shoot and ear populations peaked at 350 seeds m⁻². Quadrat samples taken from four transects across the seed rate strips revealed spatial variation in both populations and their response to seed rate.
However, compensation through an increased number of grains per ear and thousand grain weight resulted in the highest yield and gross margin being obtained at the lowest seed rate. In 1998/99 the experiment suffered water-logging, and due to poor growth the variable dose consisted simply of a higher total amount (245 kg [N] ha⁻¹) applied uniformly to the ‘variable’ strips. Despite good autumn establishment, sampling revealed low spring shoot populations and an increase in ear populations up to the highest seed rate. There were complicated interactions between transect position and population responses. Compensation within yield components as ear populations decreased was evident. Yield responses to seed rate and nitrogen dose were irregular, and varied with location. The results presented in Table 5 are a comparison of both the yield and the economic performance of the recommended uniform field nitrogen application rate strips with those receiving the variable nitrogen application rate based on canopy size, in Onion Field and Far Highlands in 1999/2000. Also shown are the means of the variable nitrogen application rate and the uniform rate. These show that, regardless of seed rate, in Onion Field both the yield and the gross margins for the variable nitrogen strategy exceeded those for the uniform practice. The similar data from Far Highlands show yield benefits at the lowest seed rate only. The other three seed rates show a small reduction in yield, which was economically compensated for by lower nitrogen application rates. The financial benefits of variable N management versus uniform N management are also presented in Fig. 7. In seven of the eight comparisons the gross margin was in favour of variable N management. The maximum advantage to variable N management was £60 ha⁻¹, produced from a combination of higher yield (+11%) and a slightly lower total N input compared to the standard N approach.
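The variable-rate logic behind the shoot density (SD2) and canopy-based strategies can be sketched as a minimal rule: areas whose measured shoot density falls below a target band receive more nitrogen, areas above it receive less, using the ±30% steps from the strip trials. The target band below is an illustrative assumption, not a Wheat Growth Guide benchmark.

```python
def variable_n_rate(standard_rate, measured_shoots,
                    target_band=(600.0, 800.0), step=0.30):
    """Adjust a standard N rate (kg N/ha) from a 'real-time' shoot density estimate.

    Follows the SD2 convention: thin canopies get more N, thick canopies less.
    The +/-30% step mirrors the strip trials; the target band is assumed.
    """
    low, high = target_band
    if measured_shoots < low:
        return standard_rate * (1.0 + step)   # below target: build the canopy
    if measured_shoots > high:
        return standard_rate * (1.0 - step)   # above target: cut the dose
    return standard_rate                      # on target: default strategy

print(variable_n_rate(200.0, 500.0))  # sparse zone
print(variable_n_rate(200.0, 700.0))  # on-target zone
print(variable_n_rate(200.0, 900.0))  # dense zone
```

In practice the per-zone shoot densities would come from calibrated NDVI maps, and the adjusted rates would be written to the applicator's left and right boom controllers zone by zone.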
Overall, yield benefits were greatest where the mean application rate of the variable nitrogen strips was approximately that of the field standard. On average, for the two fields, the overall benefit of the variable nitrogen strategy as described by Wood et al. (2003b) was £22 ha⁻¹. An analysis of the ‘responsive areas’ to variable nitrogen in both the shoot density and canopy management studies indicates that between 12 and 52% of all fields responded positively, depending upon field and season.
Table 3. Economic consequences in £ ha⁻¹ of 3 yr of alternative nitrogen management scenarios for all fields in comparison to a standard application rate (strategy: Trent Field; Twelve Acres; Far Sweetbrier; mean)
Historic yield 1 (HY1): −5·41; −21·23; −7·80; −11·48
Historic yield 2 (HY2): −12·56; −21·88; 5·85*; −9·53*
Shoot density 1 (SD1): 4·98; −15·38; −13·00; −7·80
Shoot density 2 (SD2): 0·43; −15·17; 33·58; 6·28
* Contains data from 1998/99 and 1999/00 only.
Table 4. Other economic implications (issue: implication; cost or benefit)
Water-logging: economic penalty; up to £195 ha⁻¹
pH: economic advantage; up to £7 ha⁻¹
Uneven fertiliser application: economic penalty; up to £65 ha⁻¹
6. Economic implications
An analysis of the capital and associated costs for alternative systems for yield mapping and spatial application of fertilisers and seeds in January 2001 by Godwin et al. (2003) enabled the annual costs per hectare to be assessed. These costs ranged from less than £5 to £18 ha⁻¹ for a single yield mapping and spatial control unit managing an area of 250 ha yr⁻¹, depending on the system chosen. The basic low cost system is associated with marginally less spatial accuracy in the production of yield maps, and the control of application rate is effected via changes to the tractor forward speed implemented by the driver after receiving instructions from the control system.
The more expensive system simultaneously equips both the combine for yield mapping and a tractor/sprayer for variable seed rate and fertiliser application. The actual costs per hectare vary inversely with the size of the area managed per unit. These studies demonstrated that historic yield records are not a sound basis for determining variable nitrogen strategies. A more promising approach was to use a ‘real-time’ measure of crop growth. This would currently require the additional cost of collecting and calibrating remotely sensed data from aerial digital photography or tractor-based radiometry. This has an estimated annual cost of £7 ha⁻¹ for farm-scale operations for cereal crop areas in excess of 1500 ha per flight for the former, and £10 ha⁻¹ for the latter for a 500 ha cereal crop area. Assuming that the average financial benefit from variable nitrogen management of £22 ha⁻¹ holds for other farms, together with the costs presented above, there is an economic benefit from precision farming when the annual area harvested per combine is greater than 80 ha for the basic system costing £4500, and 300 ha for the more sophisticated systems costing £16 000. This is the situation for N manipulation alone; variable application of other inputs, if successful, will reduce these areas. The relationships shown in Fig. 8 extend this approach to other situations to enable estimates of the potential yield increase required in the proportion of the field likely to provide a positive response to variable management. The example shown illustrates that a farmer with an area of 250 ha, where 20% of the area is likely to respond positively to precision farming, must achieve a yield increase of 1·10 t ha⁻¹ for that particular 20% to break even. If the potential yield increase is greater than 1·10 t ha⁻¹, economic benefits will follow; if less, there is currently no economic benefit to be gained from precision farming for that field or enterprise.
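The worked break-even example above reduces to a one-line calculation: the annual whole-farm system cost must be recovered from extra grain on the responsive fraction alone. The £65 t⁻¹ grain price follows Fig. 8; the annual cost per hectare below is an assumed figure chosen to reproduce the 250 ha example, not a value quoted in the text.

```python
def break_even_yield_increase(total_area_ha, responsive_fraction,
                              annual_cost_per_ha, grain_price=65.0):
    """Yield increase (t/ha) needed on the responsive area to cover annual costs.

    annual_cost_per_ha is the annualised precision farming system cost spread
    over the whole farmed area; only the responsive fraction earns it back.
    """
    annual_cost = annual_cost_per_ha * total_area_ha       # whole-farm cost, pounds/yr
    responsive_area = total_area_ha * responsive_fraction  # area able to respond, ha
    return annual_cost / (grain_price * responsive_area)

# 250 ha farm, 20% responsive, assumed annual system cost of 14.30 pounds/ha
print(round(break_even_yield_increase(250.0, 0.20, 14.30), 2))
```

Note that the required increase falls as the responsive fraction grows, which is the family of curves plotted in Fig. 8.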
The effects of the relative size of the responsive proportion of the field are also illustrated.
Table 5. Nitrogen application rates in kg [N] ha⁻¹, yield in t ha⁻¹ and gross margin (GM) in £ ha⁻¹: comparisons between variable and uniform nitrogen application strategies. Each entry gives N; yield; GM.
Onion Field (target seed rates 150, 250, 350 and 450 seeds m⁻²; plant populations 100, 143, 177 and 200 plants m⁻²):
Variable N: 243; 6·31; 366 | 227; 7·24; 432 | 188; 7·23; 434 | 192; 7·47; 441
Uniform N: 200; 5·92; 349 | 200; 6·63; 394 | 200; 6·87; 403 | 200; 6·69; 381
Difference: 43; 0·39; 17 | 27; 0·61; 38 | −12; 0·36; 31 | −8; 0·78; 60
Far Highlands (plant populations 120, 195, 240 and 320 plants m⁻²):
Variable N: 197; 8·24; 437 | 189; 7·77; 397 | 135; 7·79; 406 | 144; 7·77; 391
Standard N: 200; 7·94; 417 | 200; 7·85; 398 | 200; 8·11; 404 | 200; 7·93; 381
Difference: −3; 0·30; 20 | −11; −0·08; −1 | −65; −0·32; 2 | −56; −0·16; 10
The above estimates are based on improvements from nitrogen management alone; if this more than covers the costs, then other benefits will have an immediate financial return. Results of studies into the variable application of both herbicides (Rew et al., 1997; Perry et al., 2001) and fungicides (Secher, 1997) have each shown benefits of up to £20 ha⁻¹.
7. Environmental implications
Whilst this project did not specifically address environmental implications of nitrogen usage patterns, it is possible to draw some conclusions on the possible impact of precision farming decisions on the nitrogen balance in the environment.
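A nitrogen balance of the kind examined in this section can be sketched as applied N minus the off-take removed in grain and straw, with straw yield taken as 65% of grain yield as in the study. The grain and straw N contents below are illustrative assumptions, since the measured values are not quoted here.

```python
def n_surplus(applied_n, grain_yield, grain_n_frac=0.020, straw_n_frac=0.005):
    """Applied N minus off-take in grain and straw, in kg N/ha.

    Straw yield is taken as 65% of grain yield, as in the study; the N
    contents (2.0% grain, 0.5% straw) are assumed for illustration.
    Yields are in t/ha; the factor of 1000 converts t to kg.
    """
    straw_yield = 0.65 * grain_yield
    offtake = (grain_yield * grain_n_frac + straw_yield * straw_n_frac) * 1000.0
    return applied_n - offtake  # positive = surplus left in the soil

# 200 kg N/ha applied to a crop yielding 8 t/ha of grain
print(round(n_surplus(200.0, 8.0), 1))
```

A positive result is nitrogen left in the soil at season end and hence at risk of leaching; a negative result means the crop removed more than was applied that year.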
Using the strip mean grain yields, average fertiliser N application rates, and grain and straw nitrogen contents measured in the quadrat samples, and assuming a straw yield equal to 65% of grain yield, Wood et al. (2003b) calculated the potential off-take of nitrogen in the variable treatment compared to the standards for each seed rate. The plant populations in Onion Field were generally low, and at the lowest seed rate (which produced only 100 plants m-2) both the uniform and variable nitrogen programmes had nitrogen off-takes significantly less than the amount applied, resulting in a surplus at the end of the season (see Fig. 9).

[Fig. 7. Difference in gross margins, £ ha-1, between variable N and uniform N (variable N minus uniform N) at Onion Field and Far Highlands, by seed rate.]

[Fig. 8. Yield increases required to break even at £65 t-1, t ha-1, against the harvested area per combine, ha, for a fully integrated precision farming equipment and software system costing £11 500; curves shown for 10%, 20%, 30% and 50% of the field likely to produce a positive response to variable inputs.]

R.J. GODWIN ET AL.

However, at the three higher plant populations the off-takes from the variable N applications were higher than the applied N, resulting in a net reduction in N balances. Averaged over the four seed rates, the N surplus for the variable treatments was 18.5 kg [N] ha-1 compared to 28 kg [N] ha-1 for the uniform treatments.
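The nitrogen balance calculation described above can be sketched directly. The 65% straw-to-grain ratio follows Wood et al. (2003b); the grain and straw N contents in the example are hypothetical placeholders, not the measured quadrat values:

```python
def n_offtake_kg_ha(grain_yield_t_ha, grain_n_pct, straw_n_pct,
                    straw_to_grain=0.65):
    """Nitrogen removed in grain and straw, kg/ha, assuming straw yield
    equals 65% of grain yield. N contents are percent of dry matter and
    are illustrative inputs here."""
    grain_kg = grain_yield_t_ha * 1000.0
    straw_kg = grain_kg * straw_to_grain
    return grain_kg * grain_n_pct / 100.0 + straw_kg * straw_n_pct / 100.0

def n_balance_kg_ha(applied_n_kg_ha, offtake_n_kg_ha):
    """Positive = surplus left behind at the end of the season;
    negative = net depletion of soil nitrogen."""
    return applied_n_kg_ha - offtake_n_kg_ha

# The reported field-scale result: variable N left an average surplus of
# 18.5 kg/ha versus 28 kg/ha under uniform N, i.e. roughly one-third less.
reduction_pct = (28.0 - 18.5) / 28.0 * 100.0
print(round(reduction_pct))
```

This is the arithmetic behind the "approximately one-third" reduction quoted in the conclusions: (28 - 18.5) / 28 is about 34%.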
This represents a 34% reduction in the net amount added to the soil compared with the uniform application, and this could have considerable longer-term environmental significance. A similar analysis was conducted for Far Highlands 2000; assuming similar grain and straw nitrogen contents (these were not individually sampled), the average saving from the variable N treatments compared to the uniform N treatments was 32.5 kg [N] ha-1.

[Fig. 9. Surplus or deficit of applied nitrogen relative to off-take in grain and straw at Onion Field in 2000, uniform N versus variable N by seed rate; Bayes lsd at the 95% confidence level.]

8. Practical guidelines

The results of the project have been integrated into a simple-to-follow flow chart, shown in Fig. 10, to guide the grower through the respective elements of precision farming. The flow chart addresses the questions and choices facing the grower at the various stages of the decision-making process, from determining whether precision farming would be viable on their farm, to identifying management zones for soil nutrient management, to detailed crop monitoring for varying fertiliser nitrogen. These guidelines have been made available and distributed to all UK levy-paying cereal growers (HGCA, 2002).

Fig. 10. Practical guidelines: (a) Stage 1, initial investigations of within-field variation; (b) Stage 2, is precision farming economically viable? (c) Stage 3, understanding the causes of variability and identifying management zones; (d) Stage 4, addressing fundamental management practices prior to variable application, and strategies for managing nutrients; (e) Stage 5, real-time management of variable nitrogen fertiliser for optimising economic yield.

Fig. 10(a), Stage 1: initial investigations of within-field variation
- START: how much within-field variation do I have? If little or none, continue with conventional uniform management; if significant or not known, proceed to field assessment.
- First assessments: (1) note variation when field walking, spraying/fertilising and combining; (2) identify pest and disease effects (e.g. rabbits, slugs, take-all); (3) identify locations prone to waterlogging or drought; (4) dig/auger holes in good and bad areas to assess texture/structure, rooting depth and compaction; (5) assess the quality of machinery operation (drill misses, spreader calibration, sprayer operation).
- Refined assessments: (1) use aerial photography and electro-magnetic induction (EMI) to assess areas of field-scale crop and soil variability; (2) assess the yield impact by sampling the crop yield components (number of ears, grain weight, number of grains) in all the areas prior to harvest; (3) use all the information collected above to make approximate estimates of the amount of yield variation and the areas affected.
- Question 1: what percentage of the total area has the potential for improvement? Question 2: what is my expected yield benefit from applying precision farming techniques to those areas?

Fig. 10(b), Stage 2: is precision farming economically viable on your farm? If not, continue with conventional uniform management. Level of yield increase, t ha-1, needed to justify investment:

System (investment)            % area responding   250 ha   500 ha   750 ha   1000 ha
Basic unit (£4500)                    5%            1.44     0.72     0.48     0.36
                                     10%            0.72     0.36     0.24     0.18
                                     20%            0.36     0.18     0.12     0.09
                                     30%            0.24     0.12     0.08     0.06
Single integrated unit (£11 000)      5%            3.81     1.91     1.27     0.95
                                     10%            1.91     0.95     0.64     0.48
                                     20%            0.95     0.48     0.32     0.24
                                     30%            0.64     0.32     0.21     0.16
Multiple units for combine            5%            5.68     2.84     1.89     1.42
and tractor (£16 000)                10%            2.84     1.42     0.95     0.71
                                     20%            1.42     0.71     0.47     0.35
                                     30%            0.95     0.47     0.32     0.24

For example, taking the basic entry-level system: if your farmed area is 750 ha, of which 10% is expected to respond to variable inputs, the level of yield increase on those areas in order to break even must be at least 0.24 t ha-1. Shaded areas in the original figure mark combinations where sustainable improvements of this magnitude were not generated during the project.

Fig. 10(c), Stage 3: understanding the causes of variability and identifying management zones
- Yield maps demonstrate 'effect' rather than 'cause'. They are used over time to identify zones that typically yield high or low, and to assess the effects of 'known' problems or characteristics: soil type (texture, depth); soil condition (e.g. waterlogging, compaction); pests, weeds and diseases; nutrient deficiencies; and machine calibrations (e.g. uneven spreading, drill misses).
- Soil mapping: use EMI to map soil variation (£30 ha-1); use traditional soil analysis to measure soil variations at targeted locations (£25/sample).
- Aerial photographs from national archives can help to determine a range of possible causes of within-field variability, depending on when the pictures are taken (£20 per photo, or 4p ha-1), and can be used to target field investigations. Soil colour can be related to soil characteristics: soil type (texture, depth) and drainage (natural/human-made). Within-crop patterns (establishment, development, canopy size, senescence and lodging) and weed patches (which can have stable patterns between years) may also be apparent. Historical photographs can help to determine perennial features, whilst recently taken or real-time photography can identify more transient crop patterns (e.g. establishment, disease, weeds and pests).
- Management zones: (1) identify management zones; (2) identify likely limiting factors. An objective approach is to use the yield from the previous year and EMI maps to identify a practical number of management zones using computerised statistical 'cluster' analysis; an approximate approach is to compare yield and EMI maps visually. Walk the field to compare zones visually; using field and soil maps, where available, note any observations in the field; prioritise actions.

9. Conclusions

(i) Yield maps are indispensable for targeting areas for investigation and treatment by precision farming practices and for the subsequent monitoring of results. They provide a valuable basis for estimating the replenishment levels of P and K fertilisers; however, they were not found to provide a useful basis for determining a variable nitrogen application strategy to optimise management in a particular season. This was particularly well illustrated in the study of Short Lane where, despite significant differences in yield between two contrasting soil types, the optimum nitrogen application rate was the same, even in seasons with significant differences in annual rainfall.

(ii) The spatial variability evident in the yield maps of the fields studied was inconsistent from one year to the next; hence the variation in the spatial distribution of the cumulative yield reduced with time.

(iii) The possible extent and potential causes of yield variability can be determined using low capital cost yield mapping systems together with electro-magnetic induction techniques to assess variation in soil factors such as texture and water holding capacity. A methodology using cluster analysis was developed to use these techniques to determine within-field management zones. Both individually and together these systems provide an objective means for assessing the degree of variability within a field and provide a basis for targeting a reduced number of soil and crop sampling points, which is essential to reduce costs for commercial application.
(iv) The spatial variation in canopy development within a field can be estimated using an aerial digital photography (ADP) technique developed by Cranfield University for this project for 'real-time' agronomic management. This technique can be extended from field scale to farm scale for crops of similar varieties and planting dates. The processing of the data from cameras mounted in light aircraft is sufficiently fast to enable application rate plans to be produced within a few hours of the aircraft landing. The technique can be used as a basis for determining the most appropriate application rate for nitrogen, and as a guide for herbicide and plant growth regulator application. It is feasible to adapt the system for use with tractor-based systems.

(v) The application of nitrogen in a spatially variable manner can improve the efficiency of cereal production through managing variations in the crop canopy. Depending upon the field and the year, between 12 and 52% of the area of the fields under investigation responded positively to this approach. In 2000, seven out of eight treatment zones gave positive economic returns to spatially variable nitrogen, with an average benefit of £22 ha-1.

(vi) Simple nitrogen balance calculations have shown that, in addition to a modest increase in yield, the spatially variable application of nitrogen can have an overall effect of reducing the nitrogen surplus by approximately one-third.

Fig. 10(d), Stage 4: management issues and nutrient management
- One-off infrastructural tasks: e.g. waterlogging, improve drainage; rabbit damage, fence crops.
- Local limiting factors: e.g. weeds, apply herbicides spatially; compaction, loosen affected zones.
- Machinery factors: e.g. poor distribution of sprays and fertilisers, calibrate sprayers and spreaders and check for uniformity.
- Human factors: e.g. drill misses and poor fertiliser placement, improve operator training.
- Phosphate and potash: (1) monitor P and K status using targeted soil sampling, and repeat every 3-4 yr; (2) calculate the annual off-take using yield maps and determine the application strategy: well above the critical index, do nothing; close to the critical index, apply according to off-take; generally or locally below the critical index, apply according to off-take plus an additional amount.
- pH and micro-nutrients: use targeted soil sampling to quantify deficiencies, and replenish levels in areas below the recommended critical levels.

(vii) Common problems, such as waterlogging and fertiliser application errors, can result in significant crop yield penalties. Precision farming can enable these problems to be identified, the lost revenue to be calculated and the resultant impact on the cost/benefit to be determined. This provides a basis from which informed management decisions can be taken. It is critical that these problems are corrected prior to the spatial application of fertilisers and other inputs.

Fig. 10(e), Stage 5: real-time management of variable nitrogen fertiliser for optimising economic yield
- Sep-Nov: match seed rate to sowing date; aim to achieve 450-600 fertile ears m-2.
- Dec-Feb: determine the fertiliser nitrogen requirement: (1) measure or estimate soil mineral nitrogen (SMN) in areas of different soil type/depth or different previous cropping; (2) calculate the additional N required for the crop to reach its optimum canopy size; (3) calculate the total fertiliser N requirement, assuming that only 60% of the applied fertiliser N will be recovered by the crop; (4) split the total fertiliser N to determine the 'planned dose' for each of the three application timings (February-March, April and May). Throughout, keep the crop clean and free from weeds, pests and diseases.
- Feb-Mar (GS23-29), first N application to manage shoot numbers: use remote sensing techniques to measure variation in shoot density; zone the field according to shoot density and compare with the shoot target for this growth stage. Below the shoot target: increase the first dose (and subtract this amount from the main dose). On target: apply the 'planned' dose (as determined in Dec-Feb). Above the shoot target, or if SMN is high: reduce or omit the first dose (and add it to the main dose).
- Apr (GS30-32), main N application to achieve the canopy required for optimum yield: use remote sensing techniques to measure variation in green area index (GAI) prior to the main N dose, and re-zone the field against the target GAI for this growth stage. Above target for April: reduce the main dose. On target: apply the 'planned' main dose. Below target: increase the main dose, and check nitrogen availability.
- May (GS37), final N application for canopy survival: re-measure the variation in GAI at GS37, zone the field and compare with the target. Above target for May: omit the final dose. On target: apply the 'planned' dose. Below target: increase the dose; if the earlier main dose was increased, apply the 'planned' dose.

(viii) At current prices, the benefits from spatially variable application of nitrogen outweigh the costs of the investment in precision farming systems for cereal farms greater than 75 ha if basic systems costing £4500 are purchased, and greater than 200-300 ha for more sophisticated systems costing between £11 500 and £16 000.

(ix) Integrating the economic costs with the proportion of the farmed area that has benefit potential enables the break-even yield increase to be estimated.
Typically, a farmer with 250 ha of cereals where 20% of the farmed area could respond positively to spatially variable nitrogen would need to achieve a yield increase of 1.1 t ha-1 on that 20% to break even for a precision farming system costing £11 500. This figure reduces to 0.25 t ha-1 for a basic system. The net effect of combining the benefits of spatially variable application of nitrogen (£22 ha-1) with the benefits from both the spatial application of herbicides (up to £20 ha-1) and fungicides (up to £20 ha-1), found in other studies, should provide valuable returns from the adoption of precision farming concepts. However, this should not be considered as the simple addition of the maximum benefits quoted. These economic advantages, linked to the environmental benefits, should improve the longer-term sustainability of cereal production. The results of the above have been incorporated into a simple decision support tool, in the form of a flow chart, to help farmers with practical decisions.

Acknowledgements

The authors would like to thank the sponsors of this work, the Home-Grown Cereals Authority, Hydro Agri and AGCO Ltd, for their support, and the contributions made by their collaborators, Arable Research Centres and Shuttleworth Farms. We would also like to thank Dr Richard Earl, Dr David Pullen and Dr Nicola Cosser for their assistance in developing the research programme, Robert Walker for implementing treatments and harvesting the experiments, and Kim Blackburn and Sandra Richardson for their tireless work on the manuscripts of this and all the other papers in the series. Thanks must also be extended to Messrs Dines, Hart, Wilson and Welti, who graciously allowed us to use their fields and gave us much support.

References

Blackmore B S (2000). The interpretation of trends from multiple yield maps. Computers and Electronics in Agriculture, 26, 37-51
Blackmore B S; Godwin R J; Fountas S (2003). The analysis of spatial and temporal trends in yield map data over six years. Biosystems Engineering (Special Issue on Precision Agriculture), 84(4), doi: 10.1016/S1537-5110(03)00038-2, this issue
Blackmore B S; Moore M R (1999). Remedial correction of yield map data. Precision Agriculture Journal, Kluwer, 1, 53-56
Earl R; Taylor J C; Wood G A; Bradley R I; James I T; Waine T; Welsh J P; Godwin R J; Knight S K (2003). Soil factors and their influence on within-field crop variability, part I: field observation of soil variation. Biosystems Engineering (Special Issue on Precision Agriculture), 84(4), doi: 10.1016/S1537-5110(03)00004-7, this issue
Earl R; Wheeler P N; Blackmore B S; Godwin R J (1996). Precision farming: the management of variability. Landwards, Institution of Agricultural Engineers, Winter, 51(4), 18-23
Godwin R J; Miller P C H (2003). A review of the technologies for mapping within-field variability. Biosystems Engineering (Special Issue on Precision Agriculture), 84(4), doi: 10.1016/S1537-5110(03)00283-0, this issue
Godwin R J; Richards T E; Wood G A; Welsh J P; Knight S K (2003). An economic analysis of the potential for precision farming in UK cereal production. Biosystems Engineering (Special Issue on Precision Agriculture), 84(4), doi: 10.1016/S1537-5110(03)00282-9, this issue
HGCA (1998). The Wheat Growth Guide. Home-Grown Cereals Authority, Caledonian House, London
HGCA (2002). Precision Farming of Cereals: Practical Guidelines and Crop Nutrition. Home-Grown Cereals Authority, Caledonian House, London
Hodge C A H; Burton R G O; Corbett W M; Evans R; Seale R S (1984). Soils and their use in Eastern England. Soil Survey Bulletin No. 15, Harpenden, UK
James I T; Godwin R J (2003). Soil, water and yield relationships in developing strategies for the precision application of nitrogen fertiliser to winter barley. Biosystems Engineering (Special Issue on Precision Agriculture), 84(4), doi: 10.1016/S1537-5110(03)00284-2, this issue
Jarvis M G; Allen R H; Fordham S J; Hazelden J; Moffat A J; Sturdy R G (1984). Soils and their use in South East England. Soil Survey Bulletin No. 13, Harpenden, UK
Moore M R (1998). An investigation into the accuracy of yield maps and their subsequent use in crop management. Unpublished PhD Thesis, Cranfield University at Silsoe, Silsoe, Bedford
Perry N H; Lutman P J W; Miller P C H; Wheeler H C (2001). A map based system for patch spraying weeds. (1) Weed mapping. Proceedings BCPC Crop Protection Conference, Brighton, UK
Rew L J; Miller P C H; Paice M E R (1997). The importance of patch mapping resolution for sprayer control. Aspects of Applied Biology (Optimising Pesticide Applications), 48, 49-55
Secher B J M (1997). Site specific control of diseases in winter wheat. Aspects of Applied Biology (Optimising Pesticide Applications), 48, 57-64
Taylor J C; Wood G A; Earl R; Godwin R J (2003). Soil factors and their influence on within-field crop variability, part II: spatial analysis and determination of management zones. Biosystems Engineering (Special Issue on Precision Agriculture), 84(4), doi: 10.1016/S1537-5110(03)00005-9, this issue
Waine T W (1999). Non-invasive soil property measurement for precision farming. Unpublished EngD Thesis, Cranfield University at Silsoe, Silsoe, Bedford
Welsh J P; Wood G A; Godwin R J; Taylor J C; Earl R; Blackmore B S; Knight S M (2003a). Developing strategies for spatially variable nitrogen application in cereals, part I: winter barley. Biosystems Engineering (Special Issue on Precision Agriculture), 84(4), doi: 10.1016/S1537-5110(03)00002-3, this issue
Welsh J P; Wood G A; Godwin R J; Taylor J C; Earl R; Blackmore B S; Knight S M (2003b). Developing strategies for spatially variable nitrogen application in cereals, part II: wheat. Biosystems Engineering (Special Issue on Precision Agriculture), 84(4), doi: 10.1016/S1537-5110(03)00003-5, this issue
Wood G A; Taylor J C; Godwin R J (2003a). Calibration methodology for mapping within-field crop variability using remote sensing. Biosystems Engineering (Special Issue on Precision Agriculture), 84(4), doi: 10.1016/S1537-5110(03)00281-7, this issue
Wood G A; Welsh J P; Godwin R J; Taylor J C; Earl R; Knight S M (2003b). Real-time measures of canopy size as a basis for spatially varying nitrogen applications to winter wheat sown at different seed rates. Biosystems Engineering (Special Issue on Precision Agriculture), 84(4), doi: 10.1016/S1537-5110(03)00006-0, this issue

----

Global Assessment of Psoriasis Severity and Change from Photographs: A Valid and Consistent Method

David Farhi1,2,3, Bruno Falissard1,4,5 and Alain Dupuy6,7

Five raters tested the validity and consistency of global assessments of severity and change from standardized photographs in 30 consecutive patients with plaque psoriasis. The main outcome measures were physician global assessment (PGA) scores for change between baseline and follow-up visits ("dynamic PGA") and for severity at the baseline visit ("static PGA"). These photographic evaluation scores were compared with in-person clinical ratings. Panel ratings were obtained using the mean of the five raters' independent evaluations from photographs. Validity and consistency were assessed with intra-class coefficients (ICCs; 95% confidence interval). Intra-rater and intra-panel consistencies for photographic dynamic PGA scores were 0.85 (0.74-0.92) and 0.95 (0.92-0.99), respectively. As an evaluation of validity, agreement between photographic and clinical static PGA scores was 0.87 (0.75-0.93).
We concluded that global assessment of psoriasis severity and change from photographs by a panel of experts was accurate and consistent. The generalizability of the results requires further studies. The intrinsic limitations of photographic assessment of individual characteristics such as plaque thickness, and their effect on global photographic assessment, should be further evaluated.

Journal of Investigative Dermatology (2008) 128, 2198-2203; doi:10.1038/jid.2008.68; published online 17 April 2008

INTRODUCTION

A therapeutic effect is generally assessed by evaluating change in severity between a baseline and a final state. In everyday practice, dermatologists assess change by recalling the baseline state while assessing the final state on clinical criteria during the follow-up visit. This quick global assessment of change cannot be used in rigorous clinical research, however, because the baseline and final states are not assessed from the same material (memory versus actual patient examination). To circumvent this drawback, clinical researchers have developed severity scores. In this case, change is measured by computing the difference between the final and baseline scores. One of the most widely used scores in Dermatology is the psoriasis area severity index (PASI). Its validity and reproducibility have, however, been challenged (Ashcroft et al., 1999; Langley and Ellis, 2004; Feldman and Krueger, 2005; Finlay, 2005), and the clinical meaning of score values is obscure for many physicians (Feldman and Krueger, 2005). Using photographs to assess change is appealing because of two theoretical advantages: both states are evaluated in the same way, and change can be directly assessed rather than being calculated, and thus could be more meaningful for clinicians. In this article, a pilot study on the validity and reproducibility of a global assessment of severity and change conducted on standardized photographs is presented.
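For reference, the PASI combines, for each of four body regions, three severity signs (erythema, induration and desquamation, each scored 0-4) with an area score (0-6). The formula is not restated in this article; the sketch below uses the standard published regional weights, so it is an illustration rather than the authors' implementation:

```python
def pasi(regions):
    """PASI from per-region scores. `regions` maps a region name to
    (erythema, induration, desquamation, area_score): severities 0-4,
    area score 0-6. Weights are the standard regional weights; the
    maximum possible score is 72."""
    weights = {"head": 0.1, "upper_limbs": 0.2, "trunk": 0.3, "lower_limbs": 0.4}
    return sum(
        w * sum(regions[name][:3]) * regions[name][3]
        for name, w in weights.items()
    )

worst = {r: (4, 4, 4, 6) for r in ("head", "upper_limbs", "trunk", "lower_limbs")}
print(pasi(worst))
```

Because scores for mild disease cluster in a narrow band near the bottom of this 0-72 range, small absolute differences carry most of the clinical meaning, which is part of the criticism of the PASI cited above.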
RESULTS

The clinical description of the patients is presented in Table 1. Baseline and follow-up clinical PASI scores are presented for each patient in Figure 1.

Consistency among experts for photographic assessment of change

Intra-rater consistency was assessed for each of the five experts, using their "photographic dynamic physician global assessment (PGA)" scores at the "test" and "retest" sessions. Four of the five experts (80%) had an intra-class coefficient (ICC) >0.80 (range: 0.71-0.93). Overall, the mean intra-rater consistency was 0.85 (95% CI: 0.74-0.92). Inter-rater consistency was assessed using the five "photographic dynamic PGA" scores. The inter-rater consistency was 0.73 (95% CI: 0.56-0.87).

Consistency for photographic assessment of change by the panel

When using the mean of the five experts' scores as the synthetic assessment from the panel, that is, "panel photographic dynamic PGA" scores, the intra-panel consistency ICC between "test" and "retest" sessions was 0.95 (95% CI: 0.92-0.99).

ORIGINAL ARTICLE. Journal of Investigative Dermatology (2008), Volume 128. © 2008 The Society for Investigative Dermatology. Received 29 July 2007; revised 20 January 2008; accepted 7 February 2008; published online 17 April 2008. This study was performed in Paris, France.
1Inserm, U669, Paris, France; 2Université Paris Descartes, Paris, France; 3Département de Dermatologie, AP-HP, Hôpital Cochin, Paris, France; 4Université Paris-Sud and Université Paris Descartes, UMR-S0669, Paris, France; 5Département de Santé Publique, AP-HP, Hôpital Paul Brousse, Villejuif, France; 6Université Paris 7 Denis-Diderot, Paris, France; and 7Département de Dermatologie, AP-HP, Hôpital Saint-Louis, Paris, France
Correspondence: Dr Alain Dupuy, Department of Dermatology, Hôpital Saint Louis, AP-HP, 1 avenue Claude Vellefaux, Paris 75010, France.
E-mail: alain.dupuy@sls.aphp.fr
Abbreviations: CI, confidence interval; PASI, psoriasis activity and severity index; PGA, physician global assessment; ICC, intra-class coefficient

Using the Spearman-Brown formula, the predicted inter-panel consistency was above 0.90 for a panel composed of four or more experts: inter-panel ICCs with 2, 3, 4, and 5 experts in the panel were respectively 0.84 (95% CI: 0.71-0.91), 0.88 (95% CI: 0.79-0.94), 0.91 (95% CI: 0.80-0.95), and 0.93 (95% CI: 0.86-0.97). This result suggests that the increase in inter-panel consistency gained by increasing the number of experts in the panel is marginal for panels of more than four experts.

Consistency for photographic assessment of severity

In the same way as for assessment of change, we tested consistency for severity scores, using each single rating by the experts (intra-rater and inter-rater consistency of photographic static PGA) or the mean rating by the panel (intra-panel consistency of photographic static PGA). Intra-rater consistency was assessed for each of the five experts, using their "photographic static PGA" scores at the "test" and "retest" sessions. Four of the five experts (80%) had intra-rater ICC >0.80 (range: 0.66-0.94). The mean intra-rater consistency ICC was 0.84 (95% CI: 0.78-0.90). Inter-rater consistency was assessed using the five "photographic static PGA" scores. The inter-rater consistency ICC was 0.80 (95% CI: 0.68-0.89). When using the mean of the five experts' scores as the synthetic assessment from the panel, intra-panel consistency was excellent: ICC = 0.95 (95% CI: 0.92-0.99).

Validity of photographic assessment of severity

Agreement ICC between panel photographic static PGA scores and clinical static PGA scores was 0.87 (95% CI: 0.75-0.93). Agreement ICC between clinical and photographic Delta-static PGAs was 0.64 (95% CI: 0.51-0.79).
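The two statistical tools used in these Results can be sketched compactly. The Spearman-Brown formula predicts the reliability of a k-expert panel mean from a single-rater ICC; the one-way random-effects ICC shown here is one simple variant, chosen for illustration because the exact ICC model used by the authors is not specified in this excerpt:

```python
def spearman_brown(single_rater_icc, k):
    """Predicted reliability of the mean of k raters."""
    r = single_rater_icc
    return k * r / (1.0 + (k - 1.0) * r)

def icc_oneway(scores):
    """One-way random-effects, single-measure ICC. `scores` is a list of
    per-subject tuples, one value per session (or rater)."""
    n = len(scores)                      # number of subjects
    k = len(scores[0])                   # measurements per subject
    grand = sum(sum(row) for row in scores) / (n * k)
    means = [sum(row) / k for row in scores]
    # Between-subject and within-subject mean squares.
    msb = k * sum((m - grand) ** 2 for m in means) / (n - 1)
    msw = sum((x - means[i]) ** 2
              for i, row in enumerate(scores) for x in row) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)
```

With the reported single-rater inter-rater ICC of 0.73, the formula predicts panel reliabilities of roughly 0.84, 0.89, 0.92 and 0.93 for two to five experts, close to the reported 0.84, 0.88, 0.91 and 0.93 (the small differences suggest the authors used an unrounded ICC).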
DISCUSSION

Our study has shown that global assessment of psoriasis severity and change from photographs by a panel of experts is an accurate and consistent method. First, there is the excellent agreement between clinical assessment and the panel's photographic assessment when the same scale was used. Second, the study has shown the good intra-rater and inter-rater consistency of photographic assessments of change. Third, the number of experts can be restricted to as few as five to obtain consistent and valid estimations.

Apart from the demonstrated metrological qualities, the use of photography presents several advantages, whether for daily practice or for clinical research, that should be emphasized: (1) communicability: digital photography data can be easily and swiftly transmitted to peers; (2) transparency: digital photography allows easy storage and subsequent monitoring of the data on disease severity, an advantage that may improve post hoc evaluations; (3) better blind assessment: during clinical trials, the clinical evaluation of a patient may provide verbal or nonverbal (including behavioral) clues, for instance the occurrence of side effects, that may facilitate investigators' deduction of the treatment group; photography tends to lessen this risk of information bias; (4) independence of assessments: in clinical research trials, the use of photographs avoids the risk of communication between a given investigator and either patients or peers, and therefore preserves the independence of investigators' assessments; (5) homogeneity: in a multicenter or multi-investigator trial, photographic assessment can be centralized, allowing all subjects to be evaluated by the same panel of judges. In a clinical research perspective, these advantages reduce both variability (centralization of evaluations) and bias ("blinded" evaluations). Meta-analytical evaluations can also provide insight by reassessing efficacy homogeneously from photographic archives of patients from several trials.

[Figure 1. Change in severity during study. Baseline and follow-up (day 30) clinical PASI scores are presented for each patient.]

Table 1. Clinical description of 30 patients with chronic plaque psoriasis

Characteristics                      n (%)1
Female gender                        12 (40)
Median age (range)                   42 (19-74)
Past treatments
  Hospitalization                    7 (23)
  Classic systemic treatment2        17 (57)
  Biologics                          8 (27)
Present treatments
  Classic systemic treatment2        6 (20)
  Phototherapy                       15 (50)
  Biologics                          1 (3)
  Topical treatments3                8 (27)
Skin color
  White                              26 (86)
  Black                              2 (7)
  Asian                              2 (7)
Median baseline PASI (range)         6.9 (2.1-32.7)

1 Unless otherwise specified. 2 Retinoid, methotrexate, cyclosporin. 3 Steroids, vitamin D derivatives, and emollients.

This pilot study was conducted with a single photographer and a small number of patients. Sample size was calculated to evaluate the internal validity of our photographic method. The applicability of the whole process and the scope for generalization of our findings should be discussed, however. Photographer training and workload for taking pictures (5 minutes per patient) were minimal, but admittedly more time consuming than performing a simple clinical PGA. The technical quality of the photographs was good for all patients. The intervention of several photographers could, however, lead to greater variability in the quality of pictures.
Perfect standardization, leaving no room for personal initiative, is the key to addressing this point. Being photographed proved acceptable to all patients, but it should be noted that our patients were made aware of the constraints when they agreed to participate. Also, our small sample size did not allow us to study our photographic assessment method across various skin types. Following completion of this pilot study, we have implemented a large multi-investigator study to test acceptability, applicability, technical quality, and performance in subgroups of patients in a more accurate manner. We studied a group of patients receiving different treatments, and with different disease courses, during the 1-month follow-up. This was intended to reflect real-life conditions. Large inter-patient variability tends to increase ICC values, however. Conversely, estimating the validity of our photographic method by computing the difference between final and baseline static PGAs led to lower ICC values, because computing a difference on a narrow scale can only capture obvious changes and not more subtle ones. Assessment of validity is always difficult to undertake when the variable to evaluate is a subjective one. This is a particularly prominent problem in Dermatology (Chren, 2000). Assessment of validity should ideally refer to an unequivocal gold standard instrument, but such a reference is rarely available. Although change in clinical PASI scores is used as the main outcome efficacy measure in clinical trials, the PASI score has been criticized for its inaccuracy in patients with low PASI values (that is, under 10). Also, because PASI and PGA scores are different scales with different ranges, the clinical PASI score could not be used as a reference to test the validity of the PGA-based photographic scoring. There are intrinsic limitations in using photographs to assess severity and change. In psoriasis, thickness, for example, is certainly difficult to appreciate on photographs.
The consequences of this limitation on a global photographic evaluation of severity and change were not specifically addressed in this study. Another limitation is the noncomprehensiveness of the skin area photographed. The nine poses selected for our study covered about 95% of the body surface area, but important areas such as the scalp were not evaluated. We believe that adding poses would increase validity but would reduce applicability. In conclusion, we present a pilot study on the photographic assessment of change and severity in psoriasis. Although photography has long been used in Dermatology, photographic evaluations have been made easy to implement only since the development of digital photography. New possibilities offered by this technical and economic breakthrough need to be explored. This proof-of-principle study yielded encouraging results regarding validity and consistency in psoriasis patients. Photographic assessment using standardized photographs might be extended in the future beyond psoriasis toward a large variety of skin conditions.

MATERIALS AND METHODS

Patients

Between December 2005 and February 2006, 51 consecutive adult patients being treated for psoriasis were approached in the outpatient and phototherapy clinics on one specified day each week. Inclusion criteria were as follows: (1) clinical diagnosis of plaque psoriasis of any duration; (2) agreement to be photographed at two visits 1 month apart. None was reluctant to have standardized pictures taken. Thirty patients agreed to participate. Severity was assessed clinically at both visits using standard scales (see "Scales and measurements"); change between baseline and follow-up was assessed at the follow-up visit 1 month later. A standardized set of photographs was taken at each visit. The study was conducted in accordance with the Declaration of Helsinki Principles.
According to French law (Code de Santé Publique, article L1121-1), an authorization from the Ethics Committee was not required for this type of study. Patients signed an informed consent form stating that they agreed to have their photographs taken, analyzed, and included in a scientific publication.

Standardization of photographs

We used a nonprofessional 8-megapixel Canon EOS 350D "Rebel" digital camera with a fixed 35 mm focal length Canon lens. Settings were standardized: manual mode, autofocus on, shutter speed of 1/50 second, diaphragm aperture of f/5.6, 400 ASA sensitivity, JPEG format, integrated camera flash on. Pictures were taken in a room with artificial neon light. Patients were placed in front of a light-blue papered wall. A set of nine standardized poses was adapted from Halpern et al. (2003). These are presented in Figure 2. The time for taking a full set of photographs was about 5 minutes. Men wore different styles of underwear, but were routinely asked to roll up their underwear to increase the visible surface of their buttocks. Women were consistently asked to remove bras. A full set of nine photographs was obtained for all 30 patients. Photographs were taken by a single dermatologist (DF), without formal training.

Presentation of photographs

Unedited JPEG pictures of each patient were transferred into an Adobe Portable Document Format file. Each file was made up of nine A4 format sheets in landscape orientation. Each sheet presented two pictures of the same pose: the baseline picture on the left-hand part and the follow-up picture on the right-hand part.

Journal of Investigative Dermatology (2008), Volume 128

Scales and measurements

We first aimed to focus on global assessment of change between two sets of photographs taken one month apart in psoriasis patients. Assessments were performed by a panel of five experts.
Each expert rated change for each patient using the two sets of photographs. We have used the standard term "dynamic PGA" to refer to this scale, and to specify that the assessment was made from photographs, the term "photographic dynamic PGA" is used hereafter. Secondly, together with the assessment of change, severity was also scored. For this purpose, a scale known as the "static PGA" was used. When assessed from photographs, the score is referred to below as the "photographic static PGA" (severity), to distinguish it from the "dynamic" (change) score. When the clinician assessed the patient rather than the photographs during a visit, the severity score is referred to as the "clinical static PGA". As there was only one clinician involved in the clinical part of the study, there was only one "clinical static PGA" score for a given patient at the baseline visit and another at the follow-up visit, whereas there were five experts rating from photographs, yielding five different "photographic static PGA" scores for the baseline visit and five for the follow-up visit, for each patient. To obtain a synthetic assessment from the panel of five experts, for each patient, the mean of the five ratings was computed. These scores are referred to as the "panel photographic dynamic PGA" (change) and "panel photographic static PGA" (severity). The scales are presented in Figure 3. A scoring summary is presented in Table 2.

Assessors and sessions

Photographs were independently assessed on computer screens by a panel of five senior dermatologists. They had at least 4 years' experience in dermatology, were practicing in an academic hospital, and three of them had specialized in psoriasis management for more than 3 years. All photographic assessments were blinded to clinical data. Raters were explicitly requested to use the scales in the following order: "dynamic PGA", then "static PGA" for the baseline set, and finally for the follow-up set.
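The panel scores described above are plain means of the five experts' ratings. A minimal sketch (the helper name is illustrative, not from the paper), checked against the individual ratings quoted in the Figure 2 caption:

```python
def panel_score(ratings):
    """Mean of the individual expert ratings for one patient (panel PGA)."""
    return sum(ratings) / len(ratings)

# Individual expert scores reported for the patient shown in Figure 2
baseline_static = [3, 4, 4, 4, 5]   # photographic static PGA, day 0
followup_static = [0, 1, 1, 1, 1]   # photographic static PGA, day 30
dynamic_change  = [1, 5, 5, 5, 5]   # photographic dynamic PGA

print(panel_score(baseline_static))  # 4.0
print(panel_score(followup_static))  # 0.8
print(panel_score(dynamic_change))   # 4.2
```

These reproduce the panel values of 4.0, 0.8, and 4.2 reported for that patient.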
Two similar rating sessions were organized at a 1-month interval ("test" and "retest" sessions) on the same set of photographs of the 30 patients, by the same five experts. Between the "test" and the "retest" sessions, raters did not look at the photographs, did not reread their first scorings, and did not exchange any information about the study. This "test–retest" design was used to evaluate intra-rater consistency (see "Consistency" below).

Figure 2. Nine standardized photographs applied twice at a 30-day interval to 30 patients with chronic plaque psoriasis. In this patient, the mean static photographic PGA score from the five experts was 4.0 at baseline (individual scores: 3, 4, 4, 4, 5) and 0.8 at day 30 (individual scores: 0, 1, 1, 1, 1); the mean dynamic photographic PGA score from the five experts was 4.2 (individual scores: 1, 5, 5, 5, 5).

Statistical analysis

Intraclass coefficients. ICCs were used to assess both validity and consistency (Muller and Buttner, 1994). For "static PGA" scales, only scores for baseline visits were used. ICCs were estimated by the percentage of total variance that results from the patient effect, using a two-way random effects analysis-of-variance model (Muller and Buttner, 1994):

ICC = σ²patient / (σ²patient + σ²rater + σ²residual)

Generally, agreement can be qualified as "almost perfect", "substantial", or "moderate", for ICC values in the (0.80–1.0), (0.60–0.80), and (0.40–0.60) ranges, respectively (Landis and Koch, 1977). Agreement for ordered categorical variables can also be estimated with weighted Kappa statistics (Falissard, 2001). Weighting is used to penalize large disagreements (for example, "1" vs "5") more strongly than small ones (for example, "1" vs "2"). There is no consensus on which weight should be used, 1/N and 1/N² being the most widely used.
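The variance-components definition of the ICC given above can be evaluated directly from a patients-by-raters score matrix via a two-way ANOVA decomposition. A minimal sketch (illustrative Python, not the authors' R code; function and variable names are assumptions):

```python
import numpy as np

def icc_two_way(scores):
    """ICC = var_patient / (var_patient + var_rater + var_residual), with
    variance components from a two-way ANOVA decomposition of a
    (patients x raters) score matrix."""
    x = np.asarray(scores, dtype=float)
    n, k = x.shape                                         # n patients, k raters
    grand = x.mean()
    ss_rows = k * ((x.mean(axis=1) - grand) ** 2).sum()    # patient effect
    ss_cols = n * ((x.mean(axis=0) - grand) ** 2).sum()    # rater effect
    ss_err = ((x - grand) ** 2).sum() - ss_rows - ss_cols  # residual
    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))
    var_patient = (ms_rows - ms_err) / k
    var_rater = (ms_cols - ms_err) / n
    var_residual = ms_err
    return var_patient / (var_patient + var_rater + var_residual)
```

With identical ratings from every rater the ICC is 1.0; a constant per-rater offset inflates the rater variance component and pulls the ICC down, which is exactly the "agreement" (rather than mere correlation) behavior the authors rely on.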
Because 1/N²-weighted Kappa statistics lead to the same values (±1%) as ICCs, only results for ICCs are presented here. All analyses were conducted with R software, release 2.3.0 (R Development Core Team, 2006). The use of an analysis-of-variance model for 95% confidence interval (CI) estimation and ICC comparisons would be based on the hypotheses of normality and homoscedasticity. These hypotheses were not confirmed by checking ICC graphical distributions in our data set. Therefore, a nonparametric bootstrap method was implemented to estimate ICC 95% CIs and to compare ICCs. All test formulations were two-tailed. P ≤ 0.05 was defined as the significance threshold. ICCs are presented with 95% CIs.

Consistency of photographic assessment

Consistency of experts' scores from photographs. Consistency (or reliability) is the ability of a measurement process to yield the same result when the measurement process is repeated by the same observer (intra-rater consistency) or by another observer (inter-rater consistency) (Feinstein, 1987). For each expert, intra-rater consistency was estimated by the ICC between the "test" and the "retest" session measurements. Assessment of inter-rater and intra-rater consistency was also conducted with a Bland and Altman graphical method (Bland and Altman, 1986); this analysis corroborated the results yielded by the ICC analysis (data not shown).

Consistency of panel ratings from photographs. Intra-panel consistency was assessed by computing the ICC between the "test" and "retest" sessions for the 30 patients. Inter-panel consistency could not be directly assessed, as only one panel of five experts was available. We estimated inter-panel consistency with the Spearman–Brown formula (Lord and Novick, 1968): the inter-panel consistency for the "panel photographic dynamic PGA" was estimated for different numbers of experts in the panel.
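The Spearman–Brown formula referenced here predicts the consistency of a k-rater panel from a single-rater ICC as k·ICC / (1 + (k − 1)·ICC). A minimal sketch (the 0.60 input value is illustrative, not a result from the study):

```python
def spearman_brown(icc_single, k):
    """Predicted consistency of a panel of k raters, extrapolated from the
    single-rater ICC (Spearman-Brown prophecy formula)."""
    return k * icc_single / (1 + (k - 1) * icc_single)

# A hypothetical single-rater consistency of 0.60 extrapolated to larger panels
for k in (1, 2, 5, 10):
    print(k, spearman_brown(0.60, k))
```

The formula shows why averaging over a panel improves consistency, with diminishing returns as k grows.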
Validity of photographic assessment

Validity is defined as how closely the result of a tested procedure conforms to the results obtained with a reference procedure ("Is there agreement between the results of the tested tool and the reference tool?"). This concept has also been termed accuracy (Feinstein, 1987). The score of interest in our study was the "panel photographic dynamic PGA". It was not possible to compare the "panel photographic dynamic PGA" against a reference clinical score, as there is no such reference clinical score: simultaneous comparison of two 1-month-apart states cannot be made, and memorization of the baseline state at the time of follow-up evaluation cannot be considered reliable enough to be used as a reference clinical score. Therefore, we used static rather than dynamic assessments to determine validity. The validity of photographic assessment was ascertained by assessing agreement for two comparisons: (1) agreement between "panel photographic static PGA" and "clinical static PGA"; (2) agreement between "clinical" and "photographic" Delta-static PGA, Delta-static PGA being defined as (follow-up static PGA score) − (baseline static PGA score). ICCs were used to assess the agreement between scores.

Figure 3. (a) Dynamic PGA and (b) Static PGA, as presented to assessors in our study.

Dynamic PGA: In each slide, photographs on the left represent baseline; photographs on the right were taken 1 month later. Rate the change of psoriasis severity from −5 (considerable deterioration) to +5 (considerable improvement), for each patient, using the Dynamic PGA:
(+5) Very large improvement/cleared (+90 to +100%)
(+4) Large improvement (+70 to +89%)
(+3) Moderate to large improvement (+50 to +69%)
(+2) Moderate improvement (+30 to +49%)
(+1) Mild improvement (+10 to +29%)
(0) No or minimal change (−10 to +10%)
(−1) Mild deterioration
(−2) Moderate deterioration
(−3) Moderate to large deterioration
(−4) Large deterioration
(−5) Very large deterioration

Static PGA: Rate psoriasis severity at each time point, for each patient, using the Static PGA: first, for all left-hand photographs (baseline), then for all right-hand photographs (taken 1 month later).
(6) Severe psoriasis
(5) Moderate to severe psoriasis
(4) Moderate psoriasis
(3) Mild to moderate psoriasis
(2) Mild psoriasis
(1) Psoriasis almost cleared
(0) Clear (no lesion)

Table 2. Scoring
Change (between baseline and follow-up visits): clinical assessment, not assessable¹; photographic assessment, photographic dynamic PGA (×5) and panel photographic dynamic PGA (×1).
Severity at baseline: clinical static PGA (×1); photographic static PGA (×5); panel photographic static PGA (×1).
Severity at follow-up: clinical static PGA (×1); photographic static PGA (×5); panel photographic static PGA (×1).
Each of the five raters composing the panel was blinded to clinical data and independent of the other raters. For the appraisal of intra-observer consistency, all photographic assessments were repeated at a second ("retest") session. "(×n)" refers to the number of results for the variable. ¹Comparison of two clinical states in a given patient cannot be made simultaneously and can only rely on memory of the baseline state when seeing the patient at the follow-up visit.

Sample size calculation

Sample size calculation was based on the required precision of intra-rater ICC estimations. It has been shown that the precision of ICC estimates increases, that is, the ICC 95% CI narrows, with the number of patients and/or assessors. Under the normal hypothesis, for a given ICC, the range of the ICC 95% CI can be determined by the number of patients and assessors included.
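Bonett's (2002) approximation makes the relationship between the expected ICC, the CI width (w), the number of raters (k), and the number of patients (n) explicit. A sketch under the commonly quoted form of that approximation (treat the exact formula shape as an assumption; it does reproduce the n = 29 figure used in this study):

```python
import math

def n_patients(icc, w, k, z=1.96):
    """Approximate number of patients needed so that the 95% CI of an ICC
    estimate has total width w, with k raters (Bonett-style approximation)."""
    num = 8 * z**2 * (1 - icc)**2 * (1 + (k - 1) * icc)**2
    return math.ceil(1 + num / (k * (k - 1) * w**2))

# Values used in this study: expected ICC 0.80, CI width 0.20, 5 raters
print(n_patients(0.80, 0.20, 5))  # 29
```

Note how weakly n depends on k here: most of the precision gain from extra raters is exhausted by k = 5, consistent with the design choice of five assessors.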
Previous studies have shown that, in terms of 95% CI precision, the benefit of including more than five assessors is small (Giraudeau and Mary, 2001; Bonett, 2002). In addition, an intra-rater ICC value of 0.80 for the photographic dynamic PGA was expected, and we decided that a 95% CI span of 0.20 would be satisfactory. Thus, using a previously published approximation (Bonett, 2002) of the mathematical relationship between the ICC, the 95% CI width (w), the number of patients (n), and the number of assessors (k), the sample size required for this study was estimated: if ICC = 0.80, w = 0.20, k = 5, and z(α/2) = 1.96, then n = 29 patients. We therefore decided to include 30 patients and 5 raters.

CONFLICT OF INTEREST

The authors declare neither conflict of interest nor financial disclosure. Wyeth-France provided the funding for the camera used in this study, but did not offer any personal grant to the authors nor had any influence on the study analyses or results.

ACKNOWLEDGMENTS

We thank Drs Amélie Arsouze, Edouard Bégon, Justine Dautremer, Laurence Fardet, François Durupt, Sophie Georgin-Lavialle, Fabien Guibal, Simon Jacobelli, Annabelle Lévy, Antoine Petit, and Stéphanie Régnier for their assistance, and Angela Swaine Verdier for reading and correcting the manuscript. The authors state that this work has not been published in another journal. This work has been partly presented at the 36th European Society for Dermatological Research congress (7–9 September 2006, Paris, France), and at the Journées Dermatologiques de Paris congress (6–10 December 2006, Paris, France).

REFERENCES

Ashcroft DM, Wan Po AL, Williams HC, Griffiths CE (1999) Clinical measures of disease severity and outcome in psoriasis: a critical appraisal of their quality. Br J Dermatol 141:185–91
Bland JM, Altman DG (1986) Statistical methods for assessing agreement between two methods of clinical measurement. Lancet 1:307–10
Bonett DG (2002) Sample size requirements for estimating intraclass correlations with desired precision. Stat Med 21:1331–5
Chren MM (2000) Giving "scale" new meaning in dermatology: measurement matters. Arch Dermatol 136:788–90
Falissard B (2001) La précision de mesure. In: Mesurer la subjectivité en santé (Masson ed), Paris, 155–7
Feinstein AR (1987) Clinimetrics. New Haven: Yale University Press, 272 pp
Feldman SR, Krueger GG (2005) Psoriasis assessment tools in clinical trials. Ann Rheum Dis 64(Suppl 2):ii65–8
Finlay AY (2005) Current severe psoriasis and the rule of tens. Br J Dermatol 152:861–7
Giraudeau B, Mary JY (2001) Planning a reproducibility study: how many subjects and how many replicates per subject for an expected width of the 95 per cent confidence interval of the intraclass correlation coefficient. Stat Med 20:3205–14
Halpern AC, Marghoob AA, Bialoglow TW, Witmer W, Slue W (2003) Standardized positioning of patients (poses) for whole body cutaneous photography. J Am Acad Dermatol 49:593–8
Landis JR, Koch GG (1977) The measurement of observer agreement for categorical data. Biometrics 33:159–74
Langley RG, Ellis CN (2004) Evaluating psoriasis with psoriasis area and severity index, psoriasis global assessment, and lattice system physician's global assessment. J Am Acad Dermatol 51:563–9
Lord FM, Novick MR (1968) Statistical theories of mental test scores. Reading, MA: Addison-Wesley
Muller R, Buttner P (1994) A critical discussion of intraclass correlation coefficients. Stat Med 13:2465–76
Global Assessment of Psoriasis Severity and Change from Photographs: a Valid and Consistent Method. Contents: Introduction; Results; Discussion; Materials and Methods; Conflict of Interest; Acknowledgments; Supplementary Material; References.

work_wtlyi75lsrfijbbn36ydd7aabe ----

XXV ISBS Symposium 2007, Ouro Preto – Brazil 398

RELIABILITY OF A DIGITAL METHOD TO DETERMINE FRONTAL AREA OF A CYCLIST

Randall L. Jensen, Saravanan Balasubramani, Keith C. Burley, Daniel R. Kaukola, James A. LaChapelle and Ross Anderson*

Department of Health, Physical Education, and Recreation, Northern Michigan University, Marquette, MI, USA; *Biomechanics Research Unit, University of Limerick, Limerick, Ireland

Eight cyclists were photographed with a digital camera for three trials while positioned on their own bicycle wearing their helmet. The positions were different from each other and described as: hands on the brake hoods; hands below the curve of the brake hoods on the handlebars; and using aerobars.
Twenty-four trials were digitized by two different individuals three times to estimate the inter- and intra-rater reliability of the method. The intraclass correlation coefficient (p < 0.05) value for the intra-rater (test-retest) reliability was ICC = .993; for inter-rater consistency the ICC = .976. There were significant differences (p < 0.05) between digitizers and between trials, apparently due to a learning effect that disappeared by the third trial. Due to the small differences between digitizers and trials, caution is recommended when considering use of this method.

KEYWORDS: surface area, cycling, intra-rater reliability, inter-rater reliability

INTRODUCTION: The effect of drag on the movement of a cyclist can be quite large and is related to the surface or frontal projection area (FPA) (Faria et al. 2005). However, determining this area is often time consuming and may require specialized and/or expensive equipment (Cappaert, 1998; Edwards & Byrnes, 2006). The other extreme is a gross oversimplification, for example, extrapolation based on height and weight (Radjvojevic et al. 1983). Both of these techniques have limited use in determining FPA for a cyclist: the former due to the large amount of time required for repeated trials and calculations and/or the high cost of equipment, while the latter results in a lack of precision, limiting the ability to assess changes that have occurred following small alterations of position. Martin and colleagues (2006) have provided a field test to determine aerodynamic drag; however, this method requires six trials at various speeds, an accurate power meter of some sort (they recommend an SRM), and a straight flat section of road. Recently, Swanton and coworkers (2006) presented a simplified method for determining the FPA of a cyclist using digital photography and an image-processing package (Adobe Photoshop 7.0 - Adobe, San Jose, USA).
This methodology greatly simplifies the process of determining the surface area of an individual or object with minimal investment in equipment. However, there has been no information published on the reliability of this method, for either test-retest or between digitizers. When implementing a new test, knowledge of the reliability of the test is important to ensure that the data are consistent and will allow for replication in future studies or within a repeated measures design (Morrow and Jackson, 1993). The purpose of the current study was to estimate the reliability of the method for both repeated trials by the same digitizer and between different digitizers. To accomplish this, two digitizers digitized the same photo two times and the inter- and intra-rater reliability was assessed.

METHODS: Approval for the use of human subjects was obtained from the institution prior to commencing the study. Nine recreational or sub-elite cyclists volunteered to partake in all aspects of the study and gave written consent. Subjects wore their own helmet and were positioned on their own bicycle in one of three randomly ordered positions: 1) with the hands on top of the brake hoods (BH); 2) with the hands below the dropped, or curved, portion of the handlebars (DH); and 3) using clip-on triathlon bar extensions with the elbows resting on pads and hands extended to the end of the bar (AB). Previous research (Swanton et al. 2006) has shown these positions to vary from one another in FPA and thus each position was treated as a different sample. To calculate FPA, the volunteers were asked to position themselves in each of the three positions with the pedal cranks perpendicular to the floor. A digital image (5 megapixel) was captured of the participant and a 51 × 76.3 cm calibration object (CO) from the frontal plane.
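The calibration object makes the pixel-to-area conversion a simple proportion: the CO's known physical area divided by its pixel count gives the scale in m² per pixel. A minimal sketch with hypothetical pixel counts (the function name and counts are illustrative, not values from the study):

```python
# Known physical area of the 51 x 76.3 cm calibration object (CO)
CO_AREA_M2 = 0.51 * 0.763

def frontal_area_m2(cyclist_pixels, co_pixels):
    """Convert a silhouette pixel count to m^2 using the CO's pixel scale,
    assuming both silhouettes were captured at the same camera distance."""
    return cyclist_pixels * (CO_AREA_M2 / co_pixels)

# Hypothetical counts taken from the two-colour image histograms
print(frontal_area_m2(98_500, 99_700))
```

The same-plane assumption matters: the cyclist and the CO must be at comparable distances from the camera, or the m²-per-pixel scale differs between the two silhouettes.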
The images were analysed in an image-processing package (Digitizer 1 used Adobe Photoshop 7.0 with Windows, while Digitizer 2 used Photoshop CS2 on a Macintosh - Adobe, San Jose, USA). Firstly, the rectangular marquee tool (crop tool in Macintosh) was used to extract the portion of the image containing the whole body of the cyclist and bicycle (Figure 1A). The portion of the image containing the CO was also extracted. The magnetic lasso tool was then used to extract the cyclist (or the CO) from the background of the image; this selection was then pasted as a new image with a white background (see Figure 1B). Following this, the image was converted to an image of two colours by reducing the contrast to minus 100% (Image > Adjustments > Brightness/Contrast > Contrast). The resulting image (see Figure 1C) contained a representation of the FPA for that position. To calculate the actual FPA, the image was represented as a histogram (containing the number of pixels of each colour). The CO image was processed in the same way. The area of the FPA could then be calculated in pixels and converted to m² (Swanton et al. 2006). To estimate the intra-rater reliability of determining frontal projection area, the processing of each digital photo was performed three times. To estimate the inter-rater reliability, two independent digitizers processed each digital photo. Frontal projection area data were then compared between and within digitizers.

Figure 1: Image A (left), Image B (middle), and Image C (right)

Statistical analyses were performed using the Statistical Program for the Social Sciences v.14.0 (SPSS, 2006). The intra-rater reliability, or test-retest consistency, of the method for determining frontal projection area for each of the three positions was estimated by determining the intra-class correlation coefficient using a two-way mixed effect model, with a fixed effect for digitizer and a random effect on trials (Morrow and Jackson, 1993). For the inter-rater reliability, an intra-class correlation coefficient using a two-way mixed effect model, with a fixed effect for trials and a random effect for the digitizer, was implemented. Repeated measures analysis of variance was used to determine if there were significant differences between raters (inter-rater) and within raters (intra-rater), using p < 0.05 as the test of significance.

RESULTS: Intraclass correlation testing of the intra-rater, or test-retest, reliability between trials resulted in an ICC = .996 (95% confidence interval of .993 to .997) for average measures and ICC = .987 for a single measure. However, despite the high ICC there was a significant difference (p < 0.05) between the trials (mean ± SD): Trial 1 = 0.389 ± 0.038 m²; Trial 2 = 0.386 ± 0.037 m²; Trial 3 = 0.385 ± 0.038 m². Bonferroni's post hoc analysis indicated that Trial 1 was greater than the other two trials, which did not differ. The estimate of inter-rater reliability produced an ICC = .979 (95% confidence interval of .966 to .987) for average measures and ICC = .959 for a single measure. Similar to the intra-rater reliability, there was a significant difference (p < 0.05) between the digitizers (mean ± SD): Digitizer 1 = 0.380 ± 0.036 m²; Digitizer 2 = 0.394 ± 0.038 m².
However, there were also significant differences between trials and between digitizers (p < 0.05). The presence of differences between digitizers is similar to previous findings for other digitization techniques (Winter, 1990). As noted by Winter, using different digitizers to analyze the same subject would not be recommended due to the differences that would be expected for repeated measurements. Although there was a statistically significant difference between trials, the magnitude of this difference was small, equaling 0.004 m2 or 1%. It is unclear to the authors what impact this size difference might have on comparisons of changes in body position, however this magnitude of error is smaller than many other types of measurement in sport science (Hopkins et al, 1999). Furthermore, Trial 1 differed from the latter trials, which did not differ from each other; indicating that there may be a learning effect in performing this technique. Further analyses that found the difference of Trial 1 from the latter trials occurred for Digitizer 1, who had no prior experience in the procedure, while Digitizer 2, who had over 10 years of experience in manipulation of digital photo data, displayed no differences across the trials. CONCLUSION: The current study found the estimated reliability of the method by Swanton et al. (2006) to determine the frontal projection area of a cyclist resulted in high Intraclass Correlations but also significant differences between trials and digitizers. Although there were differences between the trials, the magnitude was 1% which was considered small. The method is inexpensive and easy to use: but because of the small but significant differences, practitioners should consider whether this method may be useful. A consideration when using this technique is the experience of the digitizer, as a learning effect appears to exist. 
Also, data from one cyclist should be analyzed by a single digitizer, as significant differences between digitizers indicate that similar findings would be unlikely.

REFERENCES:
Cappaert, J.M. (1998) Frontal surface area measurements in national calibre swimmers. Sports Engineering, 1(1), 51.
Edwards, A.G. & Byrnes, W.C. (2007) Aerodynamic characteristics as determinants of the drafting effect in cyclists. Medicine and Science in Sports and Exercise, 39(1), 170-176.
Faria, E.W., Parker, D.L. & Faria, I.E. (2005) The science of cycling: factors affecting performance – part 2. Sports Medicine, 35(4), 313-337.
Hopkins, W.G., Hawley, J.A. & Burke, L.M. (1999) Design and analysis of research on sport performance enhancement. Medicine and Science in Sports and Exercise, 31(3), 472-485.
Martin, J.C., Gardner, A.S., Barras, M. & Martin, D.T. (2006) Aerodynamic drag area of cyclists determined with field-based measures. Sportscience, 10, 68-69 (sportsci.org/2006/jcm.htm).
Morrow, J.R. Jr. & Jackson, A.W. (1993) How "significant" is your reliability? Research Quarterly for Exercise and Sport, 64(9): 352-355.
Radjvojevic, L., Jovic, D. & Perunovic, D. (1983) Valuation of the methodological procedures for determination of the values for absolute surface area. Journal of Sports Medicine & Physical Fitness, 23(2), 148-154.
Swanton, A., Shafat, A. & Anderson, R. (2006) Biomechanical and physiological characterization of four cycling positions. In Proceedings of the XXIV International Symposium of Biomechanics in Sports (Schwameder, H., Strutzenberger, G., Fastenberger, V., Lindinger, S. & Müller, S., editors), 855-858.
Winter, D.A. (1990) Biomechanics and Motor Control of Human Movement, John Wiley Publishers.

Acknowledgement
This study was supported in part by a Northern Michigan University College of Professional Studies Grant.
work_wtrnpcmm4zafdn6phpmyec3ewa ----

UC Irvine Previously Published Works
Title: Determination of optimal view angles for quantitative facial image analysis
Permalink: https://escholarship.org/uc/item/2qj896q8
Journal: Journal of Biomedical Optics, 10(2)
ISSN: 1083-3668
Authors: Jung, Byungjo; Choi, Bernard; Shin, Yongjin; et al.
Publication Date: 2005-03-01
DOI: 10.1117/1.1895987
License: https://creativecommons.org/licenses/by/4.0/
Peer reviewed. eScholarship.org, California Digital Library, University of California.

Determination of optimal view angles for quantitative facial image analysis

Byungjo Jung, Yonsei University, Department of Biomedical Engineering, Korea, 220-710
Bernard Choi, University of California, Irvine, Beckman Laser Institute, 1002 Health Sciences Road East, Irvine, California 92612-1475; E-mail: bjung@laser.bli.uci.edu
Yongjin Shin, University of California, Irvine, Beckman Laser Institute, 1002 Health Sciences Road East, Irvine, California 92612-1475; and Chosun University, Department of Physics, Gwangju, 501-759, Korea
Anthony J. Durkin and J. Stuart Nelson, University of California, Irvine, Beckman Laser Institute, 1002 Health Sciences Road East, Irvine, California 92612-1475

Abstract. In quantitative evaluation of facial skin chromophore content using color imaging, several factors such as view angle and facial curvature affect the accuracy of measured values. To determine the influence of view angle and facial curvature on the accuracy of quantitative image analysis, we acquire cross-polarized diffuse reflectance color images of a white-patched mannequin head model and human subjects while varying the angular position of the head with respect to the image acquisition system.
With the mannequin head model, the coefficient of variance (CV) is determined to specify an optimal view angle resulting in a relatively uniform light distribution on the region of interest (ROI). Our results indicate that view angle and facial curvature influence the accuracy of the recorded color information and quantitative image analysis. Moreover, there exists an optimal view angle that minimizes the artifacts in color determination resulting from facial curvature. In a specific ROI, the CV is less in smaller regions than in larger regions, and in relatively flat regions. In clinical application, our results suggest that view angle affects the quantitative assessment of port wine stain (PWS) skin erythema, emphasizing the importance of using the optimal view angle to minimize artifacts caused by nonuniform light distribution on the ROI. From these results, we propose that optimal view angles can be identified using the mannequin head model to image specific regions of interest on the face of human subjects. © 2005 Society of Photo-Optical Instrumentation Engineers. [DOI: 10.1117/1.1895987]

Keywords: erythema; facial imaging; port wine stain; digital imaging; reflectance imaging.

Paper 04046 received Mar. 26, 2004; revised manuscript received Aug. 12, 2004; accepted for publication Aug. 17, 2004; published online Apr. 29, 2005.

1 Introduction

Assessment of skin chromophore (melanin and hemoglobin) content is important to identify the presence of cutaneous pathology or to monitor patient response to therapeutic intervention of skin disease.
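The coefficient of variance used above as the uniformity metric is simply the ratio of the standard deviation to the mean of the pixel intensities within the ROI; lower CV means more uniform illumination. A minimal sketch (function name and ROI handling are illustrative, not from the paper):

```python
import numpy as np

def roi_cv(intensities):
    """Coefficient of variance (sigma / mu) of ROI pixel intensities;
    smaller values indicate a more uniform light distribution."""
    roi = np.asarray(intensities, dtype=float)
    return roi.std() / roi.mean()

# Perfectly uniform illumination gives CV = 0
print(roi_cv([120, 120, 120]))  # 0.0
```

Because the CV is normalized by the mean, it allows uniformity comparisons across ROIs with different overall brightness.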
Quantitative point-measurement devices, such as reflectance spectrophotometers and tristimulus colorimeters, have been employed to estimate the content of melanin and hemoglobin in human skin.1–5 The clinical usefulness of spectrophotometers and colorimeters is limited by practical considerations such as small test area, potential skin blanching due to probe contact, poor spatial resolution (measurements are typically resolved on a spatial scale equivalent to the dimensions of the probe head), and difficulty in relocating the probe to the same site over the longitudinal course of therapy, which may have a duration of years. Another technique under investigation is digital photography,6–10 which addresses several limitations of spectrophotometers and colorimeters.11 Digital photography techniques can be used to characterize a large skin area in a noncontact geometry with relatively high spatial resolution, addressing such limitations of point-measurement devices.

To maximize the ability of digital photography to compare quantitatively skin chromophore content measurements at different patient visits, it is necessary to control the imaging environment. In a previous study, we described a cross-polarized diffuse reflectance color imaging system to obtain subsurface skin color information and a head-positioning device that enabled acquisition of facial images in a reproducible fashion at a fixed distance from the illumination source.12 An advantage of our cross-polarized diffuse reflectance imaging system is that polarization optics enables us to reject skin surface glare.13 When polarized light is incident on skin, ~4% of the incident light is specularly reflected at the skin surface due to the refractive index mismatch between human skin and air, and ~3% of the incident light is reflected from the initial skin layer, retaining the linear polarization of the incident light.
The remaining 93% of the incident light penetrates into the deep skin and is depolarized by multiple scattering events. Eventually, about half of the depolarized light, consisting of light polarized parallel and perpendicular to the incident polarized light, is backscattered to the skin surface. The specularly reflected light is the source of glare at the skin surface, which impairs observation of skin color information provided by light scattered and absorbed from subsurface structures. Since specularly reflected light is in the same polarization state as the incident polarized light, a polarizer located between the subject and the detector, with its axis perpendicular to the incident light polarizer, can be used to remove specular reflectance. The resulting images primarily contain subsurface information related to skin structure and chromophore content.

Also described in the previous study was an image analysis method to characterize quantitatively the melanin and hemoglobin content of hypervascular port wine stain (PWS) birthmarks in human skin using L* and a* values, respectively, from the Commission Internationale de l'Éclairage (CIE) L*a*b* color space. Since most PWS lesions occur on the face,14 it is necessary to determine the influence of the view angle of the camera system and facial surface curvature on the color values derived from the images. To examine the effects of such variables on quantitative image analysis, we conducted experiments using a mannequin head model and a PWS human subject.

Address all correspondence to Byungjo Jung, University of California, Irvine, 1002 Health Sciences Road East, Irvine, CA 92612-1475. Fax: 949-824-6969; E-mail: bjung@laser.bli.uci.edu

1083-3668/2005/$22.00 © 2005 SPIE. Journal of Biomedical Optics 10(2), 024002 (March/April 2005)
We present a procedure that minimizes the effects of view angle and facial curvature on the accuracy of quantitative analysis of erythema (i.e., an index of hemoglobin content) in human skin.

2 Materials and Methods

2.1 Imaging System and Image Acquisition

The system employed for these investigations is identical to that described previously.12 Briefly, the imaging system [Fig. 1(a)] is based on a Minolta DiMAGE 7 digital camera. Camera output was displayed on a 9-in. color monitor. The system incorporates an ac-adapter-powered ring flash for consistent uniform illumination. Cross-polarized optics are used to remove surface glare, which corrupts subsurface skin color measurement. Using a Kodak gray card (E152 7795, Tiffen, Rochester, New York), the white balance and exposure of the digital camera were manually adjusted to set the chromatic ratio at red (R) = 128, green (G) = 128, and blue (B) = 128. The optimized camera parameters were ISO 200, aperture size F/8, shutter speed 1/60 s, and a flash intensity level of 1/2. To eliminate artifacts induced by environmental lighting, digital images were acquired in a darkened room.

To ensure that test sites on the face were positioned in a reproducible manner, a custom head-positioning device [Fig. 1(b)] was constructed and placed within the working distance (50 cm) of the ring flash, resulting in uniform illumination. The view angle for facial imaging was selected by rotating the head-positioning device, as indicated by the bidirectional arrow in Fig. 2, and is defined as the angle between the optical axis of the imaging system and the medial facial plane. The optimal view angle was defined as the view angle that minimized nonuniform illumination on the facial region of interest.

2.2 Calculation of L* and a*

Using the algorithm for color space conversion,15,16 the device-dependent RGB (red, green, and blue)
color images were converted into the device-independent CIE L*a*b* color images to determine skin color objectively. In the CIE L*a*b* color space, the reflected light intensity was quantified17 as L* and erythema as a*. Higher L* and a* values are indicative of higher reflectivity and erythema values, respectively. For the color space conversion, the tristimulus X, Y, and Z images of the sample (skin) and calibration reference were first calculated from the respective RGB color images using the D65 (average daylight illumination at a standardized blackbody temperature of 6500 K) conversion matrix16 [Eq. (1)]. As a calibration reference, RGB values for a 99% diffuse reflectance standard (Model SRT-99-100, Labsphere, North Sutton, New Hampshire) with a uniform flat surface of 30 cm2 were measured.

$$\begin{bmatrix} X \\ Y \\ Z \end{bmatrix} = \begin{bmatrix} 0.412453 & 0.357580 & 0.180423 \\ 0.212671 & 0.715160 & 0.072169 \\ 0.019334 & 0.119193 & 0.950227 \end{bmatrix} \begin{bmatrix} R \\ G \\ B \end{bmatrix}. \quad (1)$$

Fig. 1 (a) Diagram of the cross-polarized diffuse reflectance imaging system and (b) of the head-positioning device used to standardize images obtained from each subject.

Tristimulus images for the skin (X, Y, Z) and calibration reference (X_n, Y_n, Z_n) were utilized to calculate L*a*b* color images using the following equations:

$$L^* = \begin{cases} 116\,(Y/Y_n)^{1/3} - 16 & \text{for } Y/Y_n > 0.008856 \\ 903.3\,(Y/Y_n) & \text{otherwise} \end{cases},$$

$$a^* = 500\,[\,f(X/X_n) - f(Y/Y_n)\,], \quad (2)$$

$$b^* = 200\,[\,f(Y/Y_n) - f(Z/Z_n)\,],$$

where

$$f(t) = \begin{cases} t^{1/3} & \text{for } t > 0.008856 \\ 7.787\,t + 16/116 & \text{otherwise} \end{cases}.$$

2.3 Uniformity of Light Distribution

Ideally, light incident on the target area should be uniformly distributed for accurate quantitative image analysis. To investigate the influence of view angle on the uniformity of the incident light distribution, the 99% diffuse reflectance standard was placed in the head-positioning device.
Cross-polarized diffuse reflectance images were acquired at view angles of 0 and 35 deg, which were assumed to be optimal and suboptimal angles, respectively. The L* images for both angles were computed from the cross-polarized diffuse reflectance images.

2.4 Mannequin Head Model

The uniformity of the light distribution due to facial curvature was studied with a physical mannequin head model, assuming that the mannequin face is representative of the shape of the human face. Fifty white patches (of 1-cm2 area each) were removed from a Kodak gray card and positioned on the entire right side of the face of the mannequin head model (Fig. 3).

2.5 Determination of Optimal View Angle

The mannequin head model with attached white patches was placed in the head-positioning device, and cross-polarized diffuse reflectance images were obtained at multiple view angles varying between 0 and 90 deg, inclusive, in increments of 10 deg. From each image, L* values of the patches were computed using Eqs. (1) and (2). The optimal view angle was determined based on the average L* value and coefficient of variation (CV) of the selected white patches. Mean (μ) and standard deviation (σ) values of L* from different subsets of patches were computed and the CV calculated as follows:

$$\mathrm{CV}(\%) = [\sigma / \mu] \times 100. \quad (3)$$

A lower CV indicates a lower dispersion in L* over the subset of patches and, therefore, a more uniform incident light distribution. Statistical analysis was performed using SPSS software (Version 8, SPSS Inc., Chicago, Illinois).

2.6 Determination of Optimal View Angle for Clinical Imaging

To simulate a PWS lesion, 16 red color patches (of 1-cm2 area each) from a Macbeth color checker (GretagMacbeth, New Windsor, New York) with 24 different color patches were placed on the left side of the face of the mannequin model and on a human subject with normal skin (Fig. 4).
This color checker is used as a standard in the evaluation of color measurement devices.16 Every effort was made to place the patches at identical locations on both the model and the subject. From images of white patches placed at corresponding locations on the contralateral side of the model (i.e., patches 2 to 9, 11, 12, 14 to 16, 19, 20, 23, 24, and 27 in Fig. 3), the optimal view angle to image the entire red-patched region was determined from calculated CV values as described above. Using this optimal view angle, images of the red patches on both the model and subject were acquired and a* values were determined.

The clinical relevance of view angle was investigated in a PWS patient receiving laser treatment at Beckman Laser Institute. Images were acquired at two different suboptimal view angles, and the respective a* images were compared to demonstrate the importance of view angle for quantitative image analysis. Three consecutive cross-polarized diffuse reflectance color images were acquired at the optimal view angle for the PWS lesion over an 8-week period. Qualitative assessment was performed by comparing PWS skin color changes in consecutive images. For quantitative assessment of changes in PWS erythema, a* images were computed from the corresponding cross-polarized diffuse reflectance images.

Fig. 2 Schematic diagram for facial image acquisition. View angles, defined as the angle between the optical axis of the imaging system and the medial facial plane, were selected by adjusting the rotation of the head-positioning device.

Fig. 3 Mannequin head model used to study the uniformity of the light distribution. Fifty white patches were positioned on the entire right side of the face of the mannequin head model. This image was acquired at a view angle of 45 deg.
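The conversion of Eqs. (1) and (2) and the CV-based view-angle selection of Eq. (3) can be sketched in code. The following is a minimal illustration, not the authors' software: the function and variable names are ours, NumPy is assumed, and the reference RGB is that of the 99% diffuse reflectance standard.

```python
import numpy as np

# RGB -> XYZ conversion matrix for the D65 illuminant [Eq. (1)]
M_D65 = np.array([[0.412453, 0.357580, 0.180423],
                  [0.212671, 0.715160, 0.072169],
                  [0.019334, 0.119193, 0.950227]])

def rgb_to_lab(rgb, rgb_ref):
    """Convert RGB pixels to CIE L*a*b* against a calibration reference [Eqs. (1)-(2)].

    rgb, rgb_ref: arrays of shape (..., 3); rgb_ref holds the RGB values
    measured from the 99% diffuse reflectance standard.
    """
    xyz = rgb @ M_D65.T        # sample tristimulus values (X, Y, Z)
    xyz_n = rgb_ref @ M_D65.T  # reference tristimulus values (Xn, Yn, Zn)
    t = xyz / xyz_n
    f = np.where(t > 0.008856, np.cbrt(t), 7.787 * t + 16.0 / 116.0)
    ty = t[..., 1]             # Y / Yn
    L = np.where(ty > 0.008856, 116.0 * np.cbrt(ty) - 16.0, 903.3 * ty)
    a = 500.0 * (f[..., 0] - f[..., 1])
    b = 200.0 * (f[..., 1] - f[..., 2])
    return L, a, b

def cv_percent(values):
    """Coefficient of variation, CV(%) = (sigma / mu) * 100 [Eq. (3)]."""
    values = np.asarray(values, dtype=float)
    return values.std() / values.mean() * 100.0

def optimal_view_angle(L_by_angle):
    """Pick the view angle whose patch L* values have the smallest CV.

    L_by_angle: dict mapping view angle (deg) -> iterable of patch L* values.
    """
    return min(L_by_angle, key=lambda ang: cv_percent(L_by_angle[ang]))
```

Passing the per-patch L* values measured at each candidate angle to `optimal_view_angle()` reproduces the selection rule of Sec. 2.5: the angle with the minimum CV over the chosen patch subset is taken as optimal.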
3 Results

3.1 Optimized Imaging System Provides a Uniform Light Distribution on a Flat Surface

Using the selected camera parameters, a uniform light distribution on the 99% diffuse reflectance standard was obtained at a view angle of 0 deg [Fig. 5(a)]. At a view angle of 35 deg, the resultant light distribution was nonuniform [Fig. 5(b)]. To test system stability, images of the diffuse reflectance standard were acquired at a view angle of 0 deg on 5 separate days. RGB values (μ ± σ) were 250 ± 1.3, 252 ± 1.7, and 251 ± 1.2, respectively, demonstrating the stability of our imaging system.

3.2 Optimal View Angle Depends on the Region of Interest

To simulate different regions of interest (ROIs) on the face, optimal view angles were determined for various subsets of the 50 white patches placed on the mannequin head model. Figure 6 illustrates the dependence of L* values on view angle for a ROI covering primarily the front side of the face (i.e., patches 2 to 21 in Fig. 3). The CV is a minimum (1.2%) at a view angle of 40 deg, suggesting that this is the optimal view angle. Based on the results shown in Table 1, the optimal view angle varies and should be determined based on the ROI under study.

Use of the red patches to simulate a PWS lesion resulted in similar CV results for both the mannequin head model and the human subject with normal skin. From the white-patched mannequin head model data shown in Fig. 7, the optimal view angle for the red-patched region was determined to be 35 deg (CV 0.4%). At this optimal view angle, the mean a* values of the red patches on the mannequin model and human subject with normal skin were 37.47 ± 0.63 (CV 1.6%) and 40.64 ± 0.78 (CV 1.9%), respectively. From the image of the red patches at a 0-deg view angle, the mean a* value was 38.98 ± 0.67 (CV 1.71%). The a* values of the simulated PWS lesion on the human subject with normal skin were higher than those from the mannequin head model.

Fig. 4 To simulate a PWS birthmark, red patches were positioned on (a) the mannequin head model and (b) a human subject. Sixteen red patches were placed at similar locations on both the mannequin head model and the human subject. Cross-polarized diffuse reflectance images were acquired at the optimal view angle of 35 deg.

Fig. 5 Light distribution on the 99% diffuse reflectance standard at view angles of (a) 0 and (b) 35 deg. Image contrast was adjusted to enhance visualization of the light distribution.

3.3 View Angle Affects the Quantitative Assessment of PWS Skin Erythema

At the same PWS patient visit, two cross-polarized diffuse reflectance images were obtained at view angles of 20 deg (Fig. 8, top left) and 40 deg (Fig. 8, bottom left), and the corresponding a* images were computed (Fig. 8, top and bottom right, respectively). In comparing the two images, the region enclosed by the black line showed a different a* distribution compared to the rest of the PWS lesion. Using the CV-based analysis described above, the optimal view angle to image the PWS lesion was determined to be 45 deg.

Over an 8-week period, three cross-polarized diffuse reflectance color images were obtained from the same patient (Fig. 9, left side) at the optimal view angle of 45 deg. Qualitatively, it appears that while the normal area presents a quasiconstant skin color, the red skin color in the PWS lesion became gradually lighter in successive images due to the positive laser treatment effect. The changes in erythema seen in the images were emphasized in the corresponding quantitative a* images (Fig. 9, right side). In the color bar, a higher a* value means higher erythema.

4 Discussion

Our results indicate that view angle and facial curvature affect quantitative measurement of facial skin erythema.
The distributions of reflected light from a 99% diffuse reflectance standard acquired at two view angles demonstrated that view angle affects the uniformity of the incident light distribution (Fig. 5). The flat reflectance standard is larger than the field of view of the camera at the working distance used in this study. At an optimal view angle (0 deg), at which the axis of the camera is perpendicular to the surface of the reflectance standard, the working distance is spatially constant over the flat surface and results in a uniform light distribution [Fig. 5(a)]. However, at a suboptimal view angle of 35 deg, the working distance varies spatially, resulting in displacement of a portion of the reflectance standard away from the camera and lighting system, and displacement of another portion of the standard toward the camera and lighting system [Fig. 5(b)]. Since the exposure time and aperture for this set of experiments remained fixed, translation of the reflectance standard toward the camera and illumination system resulted in saturation of a portion of the sensing element (blooming). Note, however, that the graded decrease in illumination on the left portion of this image represents only an approximately 4% decrease in intensity. The apparent severity of the recorded illumination intensity is partially an artifact of the color scale that was chosen to make the effects of the angular offset readily apparent.

The variation in CV with view angle showed that an optimal view angle exists for a given facial surface curvature and ROI (Figs. 6 and 7). In the presented examples (Figs. 6 and 7), variations in CV are relatively insensitive to view angles within ±10 deg of the determined optimal view angle. However, in clinical practice, it is obvious that reproducing head position at each patient visit is essential.
Evidence that view angle affects quantitative image analysis is that images acquired from the same subject at two different view angles possessed noticeable differences in a* values in the region of higher facial curvature compared to the relatively flat region (Fig. 8). This is due primarily to the difference in incident light distribution with view angle (Fig. 5). In the evaluation of skin color at different patient visits, if the view angle is not optimized and held constant, quantitative comparison of color changes in different ROIs is hindered by artifacts caused by differences in incident light distribution.

Fig. 6 Example illustrating the dependence of L* values on view angle. In this measurement, patches 2 to 21 comprised the ROI, and the optimal view angle was determined to be 40 deg (CV = 1.2%).

Fig. 7 Effect of view angle on L* in the white-patch ROI corresponding to the PWS-simulating red patches on the mannequin head model. The white-patch numbers corresponding to the red patches were 2 to 9, 11 to 12, 14 to 16, 19 to 20, 23 to 24, and 27. The optimal view angle was 35 deg with a CV of 0.4% (μ, 95.87; σ, 0.4).

Table 1 Summary of optimal view angles for imaging different ROIs of the mannequin head model (Fig. 3).

Patch Location Numbers | Optimal View Angle (deg) | CV (%)
1 to 50  | 40 | 2.86
2 to 42  | 50 | 1.97
2 to 21  | 40 | 1.2
19 to 42 | 70 | 0.3

Fig. 8 Cross-polarized diffuse reflectance color (left) and a* (right) images taken at view angles of 20 deg (top) and 40 deg (bottom). The angular artifact in the quantitative assessment of a* is emphasized in the ROI enclosed by the solid black line, in which the a* value distributions were different.

Fig. 9 Cross-polarized diffuse reflectance color images (left) and a* images (right) of a PWS patient taken at the optimal view angle of 45 deg. The images were acquired at three successive visits over an 8-week period. The images from top to bottom indicate the first, second, and third visits, respectively. Image acquisition based on the optimal view angle provides comparable qualitative skin color images and enables us to use an absolute a* image for quantitative assessment of response to laser treatment of PWS.

A surprising finding was that the mean a* value of the simulated PWS birthmark on the human subject was higher than that from the mannequin head model. We expected the mean a* values to be the same due to the use of identical red patches in both measurements. We believe that this discrepancy is due to differences in vertical tilt between the human subject and mannequin head model and to slight differences in the placement of the red patches. In a separate experiment, red patches were placed on a flat panel, and the vertical tilt was changed from 0 to 10 deg. The resultant a* values determined from the cross-polarized diffuse reflectance images were slightly different (38.5 versus 40.2 at 0 and 10 deg, respectively). Therefore, to maximize the accuracy of quantitative analysis determined from cross-polarized diffuse reflectance images, it is necessary to use a head-positioning device with adjustments for both vertical and horizontal tilt. We are currently working on the design of such a device.

Results obtained with the mannequin head model (Fig. 6) and human subjects with normal (Fig. 7) and PWS skin (Fig. 8) demonstrate the importance of considering the ROI in facial imaging. As shown in Table 1, the optimal view angle depends on the ROI. Furthermore, the CV was higher when a relatively large ROI was selected (e.g., patches 1 to 50) than when a smaller ROI was considered; this result was due to the higher degree of nonuniformity in the illumination over the larger ROI caused by local differences in facial curvature.
This result is similar to that obtained by Miyamoto et al., who determined that when a ROI was close to the region used for color calibration, the error was minimized.18 Therefore, we recommend that facial imaging be performed at multiple view angles and the optimal view angle determined individually for different ROIs. This recommendation enables direct quantitative comparison of images obtained at different patient visits to monitor the progress of PWS laser therapy throughout an extended treatment protocol (Fig. 9).

Currently, we are developing a program for clinicians to determine easily the optimal view angle in daily practice. In the program, clinicians determine the specific ROI of a patient and select the matched white-patch numbers from the digitized mannequin head model; the optimal view angle is then determined automatically from the selected white patches. In summary, we believe the approach described here can be applied to other color-based studies in which analog or digital imaging is employed.

Acknowledgments

The authors would like to acknowledge the Arnold and Mabel Beckman Fellows Program (BC), the National Institutes of Health/National Center for Research Resources (NIH/NCRR) Laser and Medical Microbeam Program (RR01192, AJD), and the National Institutes of Health (AR47551, AR48458, and GM62177, JSN) for providing research grant support.

References

1. P. Clarys, K. Alewaeters, R. Lambrecht, and A. O. Barel, "Skin color measurements: comparison between three instruments: the chromameter, the dermaspectrometer and the mexameter," Skin Res. Technol. 6, 230–238 (2000).
2. M. D. Shriver and E. Parra, "Comparison of narrow-band reflectance spectroscopy and tristimulus colorimetry of measurements of skin and hair color in persons of different biological ancestry," Am. J. Phys. Anthropol. 112, 17–27 (2000).
3. A. M. Trolius and B.
Ljungren, "Evaluation of port wine stains by laser Doppler perfusion imaging and reflectance photometry before and after pulsed dye laser treatment," Acta Derm. Venereol. 76, 291–294 (1996).
4. E. Berardesca, P. Elsner, and H. I. Maibach, Bioengineering of the Skin: Cutaneous Blood Flow and Erythema, pp. 60–63, CRC Press, Boca Raton, FL (1995).
5. A. M. Trolius and B. Ljungren, "Reflectance spectrophotometry in the objective assessment of dye laser-treated port-wine stains," Br. J. Dermatol. 132, 245–250 (1995).
6. H. Takiwaki, Y. Miyaoka, H. Kohno, and S. Arase, "Graphic analysis of the relationship between skin colour change and variations in the amounts of melanin and hemoglobin," Skin Res. Technol. 8, 78–83 (2002).
7. M. Seraro and A. Sparavigna, "Quantification of erythema using digital camera and computer-based image analysis: a multicentre study," Skin Res. Technol. 8, 84–88 (2002).
8. K. Miyamoto, H. Takiwaki, G. G. Hillebrand, and S. Arase, "Development of a digital imaging system for objective measurement of hyperpigmented spots on the face," Skin Res. Technol. 8, 227–235 (2002).
9. S. A. Young-Gee, H. A. Kurwa, and R. J. Barlow, "Objective assessment of port-wine stains following treatment with the 585 nm pulsed dye laser," Australas. J. Dermatol. 42, 243–246 (2001).
10. D. K. Rah, S. H. Kim, K. H. Lee, B. Y. Park, and D. W. Kim, "Objective evaluation of treatment effects on port-wine stains using L*a*b* color coordinate," Plast. Reconstr. Surg. 108, 842–847 (2001).
11. A. Fullerton, T. Fischer, A. Lahti, K.-P. Wilhelm, H. Takiwaki, and J. Serup, "Guidelines for measurement of skin colour and erythema," Contact Dermatitis 35, 1–10 (1996).
12. B. Jung, B. Choi, A. J. Durkin, K. M. Kelly, and J. S. Nelson, "Characterization of port wine stain skin erythema and melanin content using cross-polarized diffuse reflectance imaging," Lasers Surg. Med. 34, 174–181 (2004).
13. S. L. Jacques, J. R. Roman, and K.
Lee, "Imaging superficial tissues with polarized light," Lasers Surg. Med. 26, 119–129 (2000).
14. A. J. Welch and M. J. C. van Gemert, Optical-Thermal Response of Laser-Irradiated Tissue, Plenum Press, New York (1995).
15. D. Malacara, Color Vision and Colorimetry: Theory and Applications, pp. 90–101, SPIE Press, Bellingham, WA (2002).
16. Y. V. Haeghen, J. M. D. N. Naeyaert, and I. Lemahieu, "An imaging system with calibrated color image acquisition for use in dermatology," IEEE Trans. Med. Imaging 19, 722–730 (2000).
17. W. Alaluf, D. Atkins, K. Barrett, M. Blount, N. Carter, and A. Heath, "The impact of epidermal melanin on objective measurements of human skin color," Pigment Cell Res. 15, 119–126 (2002).
18. K. Miyamoto, H. Takiwaki, G. G. Hillebrand, and S. Arase, "Utilization of a high-resolution digital imaging system for the objective and quantitative assessment of hyperpigmented spots on the face," Skin Res. Technol. 8, 73–77 (2002).

----

Polar Biol (2011) 34:1513–1522
DOI 10.1007/s00300-011-1010-5

ORIGINAL PAPER

Cetacean surveys in the Southern Ocean using icebreaker-supported helicopters

Meike Scheidat · Ari Friedlaender · Karl-Hermann Kock · Linn Lehnert · Olaf Boebel · Jason Roberts · Rob Williams

Received: 4 August 2010 / Revised: 29 March 2011 / Accepted: 31 March 2011 / Published online: 17 April 2011
© The Author(s) 2011. This article is published with open access at Springerlink.com

Abstract Cetaceans in the Southern Ocean are potentially impacted by anthropogenic activities, such as direct hunting, or through indirect effects of reduced sea ice due to climate change.
Knowledge of the distribution of cetacean species in this area is important for conservation, but the remoteness of the study area and the presence of sea ice make it difficult to conduct shipboard surveys to obtain this information. In this study, aerial surveys were conducted from ship-based helicopters. In the 2006/07 (ANT XXIII/8) and 2008/09 (ANT XXV/2) polar summers, the icebreaker RV 'Polarstern' conducted research cruises in the Weddell Sea, which offered the opportunity to use the helicopters to conduct dedicated cetacean surveys. Combining the results from both cruises, over 26,000 km were covered on survey effort, 13 different cetacean species were identified, and a total of 221 cetacean sightings comprising 650 animals were made. Using digital photography, it was possible to identify four different beaked whale species and to conduct individual photo-identification of humpback and southern right whales. Helicopter surveys allow the collection of additional information on sightings (e.g. group size, species), as well as the coverage of areas with high ice coverage. The flexibility and manoeuvrability of helicopters make them a powerful scientific tool to investigate cetaceans in the Southern Ocean, especially in combination with an icebreaker.

Keywords Cetacean · Distribution · Sea ice · Southern Ocean · Surveys · Whales

Introduction

The Southern Ocean provides critical habitat for a large number of whale populations, several of which were extensively reduced in size during twentieth century whaling activities. Many of the cetaceans in the Southern Ocean still face anthropogenic impacts, such as direct hunting (Clapham et al. 2003) or indirect effects of reduced sea ice due to climate change (Moore and Huntington 2008). Knowledge of the distribution of cetacean species in this area is especially important for conservation, but survey conditions in the Southern Ocean are far from ideal.
In addition to its remoteness, large areas are covered with sea ice, which makes it logistically and economically challenging, or even impossible, to conduct large-scale shipboard surveys. This challenge has gained a greater sense of urgency in light of recent abundance estimates for Antarctic minke whales (Balaenoptera bonaerensis) (Branch and Butterworth 2001). There is some suggestion from the most recent set of circumpolar surveys conducted under the auspices of the International Whaling Commission (IWC) that this population has undergone a dramatic decline (Branch and Butterworth 2001), but there is considerable disagreement within the IWC's Scientific Committee as to whether the lower population size estimates are due to decreased abundance, changing survey methodology, or a shift in animal distribution due to changing sea ice conditions. Resolving this controversy, whilst of great importance for the IWC's management decisions, is challenging, because the ships used to conduct systematic surveys are unable to penetrate the sea ice.

M. Scheidat (&)
Department of Ecosystems, IMARES (Institute for Marine Resources and Ecosystem Studies), 1790 AB Den Burg, The Netherlands
e-mail: meike.scheidat@wur.nl

A. Friedlaender · J. Roberts
Duke University Marine Laboratory, 135 Pivers Island Road, Beaufort, NC, USA

K.-H. Kock
Institut für Seefischerei, Johann Heinrich von Thünen-Institut, Palmaille 9, 22767 Hamburg, Germany

L. Lehnert
Forschungs- und Technologiezentrum Westküste, Hafentörn 1, 25761 Büsum, Germany

O. Boebel
Alfred-Wegener-Institute for Polar and Marine Research, Am Handelshafen 12, 27570 Bremerhaven, Germany

R. Williams
Marine Mammal Research Unit, University of British Columbia, Room 247, AERL, 2202 Main Mall, Vancouver, BC V6T 1Z4, Canada
For many other cetacean species, it has been difficult to conduct comprehensive assessments due to a lack of information on cetacean distribution both inside the marginal ice zone and north of 60°S (Branch and Butterworth 2001). Alternatively, aerial surveys using ship-based helicopters could allow information to be obtained on cetacean distribution relative to ice conditions, which in turn could be used to estimate cetacean density across a range of habitats from open-water to ice-covered regions.

In the 2006/2007 and 2008/2009 polar summers, the icebreaker RV Polarstern conducted two research cruises in the Weddell Sea. The ANT XXIII/8 cruise (26 November 2006–29 January 2007) and the ANT XXV/2 cruise (5 December 2008–5 January 2009) offered an opportunity to use the helicopters on board to conduct dedicated cetacean surveys, which were a secondary research objective to be met in combination with the surveys' obligation to resupply Neumayer station. The flights spanned open water, the marginal ice zone, as well as deep pack ice. Helicopter surveys have been used in Antarctic waters to survey pinnipeds in the fast ice and pack ice zones (Southwell 2005), but very few surveys have included cetaceans as target species (Plötz et al. 1991; van Franeker et al. 1997).

The goals of this paper are to: (1) introduce helicopter surveys as a viable means for conducting quantitative line-transect surveys of cetaceans in Antarctic waters; (2) illustrate the kind of data that can be obtained from such platforms; and (3) conduct a preliminary, exploratory analysis to identify factors that may influence the distribution of different cetacean species with respect to sea ice. This latter analysis, whilst preliminary, is necessary to guide future data collection and analysis, particularly in the Weddell Sea, where heavy ice conditions generally preclude vessels from being able to penetrate beyond the marginal ice edge zone.
For this region, reliable information on how different cetacean species utilize this habitat is especially scarce.

Methods

Helicopter surveys and whale sightings

Surveys were conducted by means of a BO-105 helicopter from RV Polarstern. The surveys could not follow a systematic sampling design (see below), but field protocols for recording effort and sightings data followed standard line-transect distance-sampling methods (Buckland et al. 2001). Flying time during each survey was generally around 2 h, with a range from 20 min to 3.5 h. Generally, surveys could not be designed in advance, because the ship's position and ice conditions were unavailable ahead of time, and because the cetacean surveys were not the main objective of the cruise. Therefore, track-lines were normally planned a few hours prior to departure to cover a rectangular shape (Figs. 1, 2). The orientation and placement of these rectangles were arbitrary with respect to whale distribution (i.e. the ability to fly or not fly was a function of competing demands on the pilots' time, rather than being planned in response to seeing whales). Surveys were adapted in an ad hoc manner to changing weather and ice conditions and the course of RV Polarstern through the ice. For safety reasons, these surveys usually employed a rectangular survey design that minimized the distance of the helicopter to the survey vessel at any given time. For the areas of Elephant Island and the Larsen A and B ice shelves, surveys were conducted along pre-designed transects that involved equally spaced, parallel lines with a random start point, following specific depth gradients (Fig. 2). Surveys were conducted at an approximate altitude of 183 m (600 ft) at a speed of 80 km/h. One observer was positioned in the back left seat of the helicopter and observed the area to that side of the helicopter. A second observer sat in the front left seat and observed the area ahead, focusing on the transect line.
During the ANT XXV/2 survey, a third observer generally joined the team, sitting in the back seat on the right side. During the flights, the program VOR (designed by Lex Hiby and Phil Lovell and described in Hammond et al. 1995) was used to enter all relevant survey data and to store the helicopter's GPS position every 4 s. Information on sea state (Beaufort sea-state scale), cloud cover (in oktas), glare (strength and area affected), ice coverage (in percent) and overall sighting conditions (good, moderate, poor) was recorded at the beginning of each survey and whenever conditions changed. Ice coverage was averaged for a search area 1 km in front of the helicopter, assessed by the front observer. The following data were collected for each cetacean sighting: location, species, group size, group composition, behaviour, cue, swimming direction and reaction to the helicopter. Group composition described the presence of calves or adults. Behaviour was categorised into directional swimming, non-directional swimming ("milling"), feeding and resting (including logging at the surface or slow swimming). The initial cue of a sighting included body under water, body at the surface, splash, footprint and blow. It was also noted if the helicopter was thought to cause a visible behavioural reaction in the animals. For the back observers, when a sighting was perpendicular to the trackline, the vertical angle in relation to the trackline was measured using an inclinometer. The front observer covered the area directly around the trackline (not visible to the back observers). The front observer would take the vertical angle to the sighting as well as the horizontal angle in relation to the trackline. All measurements were reported to the nearest degree. As the helicopter was at a consistent height, the measurements could be converted easily to perpendicular distances post-survey (Buckland et al. 2001).
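The angle-to-distance conversion described above reduces to simple trigonometry at a fixed altitude. The sketch below illustrates one possible version of it; the assumed convention (the inclinometer records degrees below the horizontal, and the sighting is abeam when the angle is taken) is not stated in the paper and is hypothetical here.

```python
import math

# Survey altitude reported in the paper (~183 m, 600 ft).
ALTITUDE_M = 183.0

def perpendicular_distance(declination_deg, altitude_m=ALTITUDE_M):
    """Convert an inclinometer angle (assumed: degrees below the
    horizontal) to perpendicular distance from the trackline, assuming
    the sighting is directly abeam when the angle is measured."""
    if not 0.0 < declination_deg <= 90.0:
        raise ValueError("declination must be in (0, 90] degrees")
    # tan(declination) = altitude / horizontal distance
    return altitude_m / math.tan(math.radians(declination_deg))
```

A 45° declination, for example, corresponds to a perpendicular distance equal to the flying altitude; shallower angles map to larger distances.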
The survey was conducted entirely in 'closing mode', which means that if a sighting could not be identified to the species level at the first sighting, the helicopter left the trackline temporarily once the sighting was perpendicular to the trackline. The helicopter 'closed' on the sighting, approaching to identify species and group size. Photographs were taken to confirm species identity at a later time, and the helicopter then returned to the trackline to resume the survey. A digital tape recorder was used to provide an audio backup throughout each survey for later reference.

Ice covariate data and analysis methods

Two ice-related predictor variables were considered to explore the distribution and abundance of cetaceans with respect to sea ice conditions: ice concentration observed along the trackline by the helicopter ("ice_conc") and distance (in m) to the marginal ice edge ("UBIceDist") observed by satellite remote sensing (defined by the smooth line inscribing the 15% ice concentration margin (Ainley et al. 2007)). We used observer-derived data for the ice concentration predictor, rather than remotely sensed data, because it is the operational measure: that is, the one that actually determines whether a ship could penetrate into the region, and thus conduct a full-scale visual survey for cetaceans. To estimate the position of the marginal ice edge, we obtained daily, 6.25-km resolution ice concentration images collected by the Advanced Microwave Scanning Radiometer for EOS (AMSR-E) satellite sensor from the University of Bremen (Spreen et al. 2008). We calculated the position of the ice edge for each daily satellite image with ArcGIS 9.3.1 Spatial Analyst functions (ESRI 2009)
by selecting the largest polygon of contiguous pixels having ≥15% ice concentration (i.e. the polygon encompassing the land-fast ice), extracting the outermost edge, and smoothing it using the Spatial Analyst "Boundary Clean" operator with the default parameters. We used the Marine Geospatial Ecology Tools software (Roberts et al. 2010) to match whale sightings to satellite images and calculate the distance to the closest ice edge for each sighting.

Fig. 1 Overview with vessel track and helicopter surveys for ANT XXIII/8 (2006/2007) and ANT XXV/2 (2008/2009)

In order to explore the relationships between baleen whale presence and sea ice conditions, we ran a classification and regression tree (CART) analysis. This method has been used previously not only to determine the suitability of predictor variables for more rigorous multivariate analysis (e.g. Friedlaender et al. 2006) but also to determine thresholds for where the presence or absence of cetaceans occurs with respect to a continuous measure of a predictor variable (e.g. Hazen et al. 2009). Tree-based models use recursive partitioning methods to help resolve relationships of response variables to predictor variables by partitioning them into increasingly homogeneous subgroups (Breiman et al. 1984). CARTs are non-parametric in nature and therefore assume no a priori relationships between response and predictor variables, allowing for a variety of data to be used without requiring equal sample sizes amongst response variables (Guisan and Zimmermann 2000; Redfern et al. 2006).

Fig. 2 Results of the combined helicopter surveys during the ANT XXIII/8 (2006/2007) and ANT XXV/2 (2008/2009) cruises: a Western Weddell Sea, b Eastern Weddell Sea and transit from South Africa
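The ice-edge workflow described above (threshold at 15% concentration, keep the largest contiguous region, take its outermost edge, measure distance to it) can be sketched on a raster grid. This is a simplified stand-in for the ArcGIS/MGET processing used in the paper, not a reproduction of it; the function name and grid-based distance are illustrative assumptions.

```python
import numpy as np
from scipy import ndimage

PIXEL_KM = 6.25  # AMSR-E grid resolution used in the paper

def ice_edge_distance(conc, row, col, threshold=15.0):
    """Distance (km) from a grid cell to the edge of the largest
    contiguous region with ice concentration >= threshold (%).
    Simplified stand-in for the ArcGIS workflow in the text."""
    mask = conc >= threshold
    labels, n = ndimage.label(mask)          # connected components
    if n == 0:
        return np.nan                        # no ice above threshold
    sizes = np.bincount(labels.ravel())[1:]  # pixels per component
    main = labels == (int(np.argmax(sizes)) + 1)
    # Outermost pixel ring of the main ice body (its "edge").
    edge = main & ~ndimage.binary_erosion(main)
    er, ec = np.nonzero(edge)
    return float(np.hypot(er - row, ec - col).min() * PIXEL_KM)
```

On a toy 10×10 grid with ice in the top half, a cell five rows into the open water is 5 × 6.25 = 31.25 km from the edge. A real implementation would work on the smoothed edge polyline in projected coordinates, as the paper does.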
For our CART, we chose to explore the relationships between baleen whale sightings and both proximity to the marginal ice edge and the total concentration of sea ice where sightings were made. We also included a third predictor variable, proximity to shore ("UBLandDist"), defined as the distance (in m) from the sighting to the closest land pixel in the AMSR-E ice concentration images. We chose to use individual sightings of each whale species as a binomial response variable. CARTs were run with optimal recursive partitioning and cross-validation methods similar to Hazen et al. (2009) to ensure that the most significant classifications were included in the final model. Likewise, receiver operating characteristic (ROC) curves were fitted to the sightings data, and the area under the curve (AUC) was calculated as a means to measure the likelihood of false positives in the CART (sensitivity versus specificity). In this measure, an AUC value of 1.0 would indicate no chance of false positives, whilst a value of 0 would represent only false positives.

Results

During cruise ANT XXIII/8, Polarstern entered the pack ice at 58°S and passed through approximately 2,200 km of ice in the eastern Weddell Sea to reach Neumayer Station (70°39'S, 08°15'W) on the Antarctic shelf ice. From Neumayer Station, the vessel continued in a north-westerly direction until it left the pack ice southwest of Elephant Island (see Fig. 1). From 1 December 2006 to 26 January 2007, a total of 58 aerial survey flights covering 13,057 km were conducted (Fig. 1). During cruise ANT XXV/2, the helicopter survey started on 6 December 2008 and ended on 3 January 2009. The cruise track of the outward voyage followed an almost straight line from 57°S to Neumayer Station (see Fig. 2). The return voyage took place further to the east, returning to Cape Town in the morning of 5 January 2009. Surveys were slightly longer during this cruise, covering 13,359 km in a total of 47 flights (Fig. 1).
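An analysis of the kind described above (a classification tree on ice covariates, scored with ROC/AUC) can be sketched with scikit-learn's `DecisionTreeClassifier` as a stand-in for the authors' CART software. The data below are synthetic, generated so that "minke presence" follows thresholds like those later reported in the Results; nothing here reproduces the paper's actual dataset or fitting procedure.

```python
import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Synthetic sightings (illustrative only): minke presence is defined by
# hypothetical thresholds on distance to the ice edge and ice cover.
n = 300
ice_dist = rng.uniform(0, 400_000, n)   # m to the marginal ice edge
ice_conc = rng.uniform(0, 100, n)       # % ice cover along the trackline
is_minke = ((ice_dist < 143_000) & (ice_conc > 5)).astype(int)

X = np.column_stack([ice_dist, ice_conc])
tree = DecisionTreeClassifier(max_leaf_nodes=6, random_state=0)
tree.fit(X, is_minke)

# AUC near 1.0 indicates few false positives (sensitivity vs specificity).
auc = roc_auc_score(is_minke, tree.predict_proba(X)[:, 1])
```

Because the synthetic labels are a deterministic function of the two predictors, the tree recovers the thresholds almost exactly and the AUC is close to 1; with real sightings data, cross-validation as in the paper would be needed to avoid overstating this.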
During both surveys combined, a total of 221 cetacean sightings consisting of a total of 650 animals were made. In total, 13 different cetacean species were identified across both surveys. An overview of all sightings is given in Table 1.

Whale sightings in relation to ice conditions

The CART was used to relate the number of sightings of the three most commonly observed baleen whale species (minke, humpback and fin) to two ice-related environmental predictor variables, ice concentration and distance to the ice edge, and to one other variable, distance to shore (Fig. 2). The other baleen whale species were not included because they were observed fewer than 3 times (i.e. not frequently enough to include in a statistical model). We included 171 cetacean sightings, and the best-fit model determined by optimal recursive partitioning had an R-squared of 0.681 with 5 splits. The first fundamental split in the tree occurred at a distance of approximately 143 km from the ice edge (Fig. 3). The majority of cetaceans (75 of 76) found farther than this threshold distance were humpback and fin whales. Of the remaining 95 sightings made within this distance to the ice edge, 92 were minke whales. Of these 95 sightings, 84 were made in ice cover >5%, all of which were minke whales.
In the ROC analysis for minke whales, the CART yielded an AUC of 0.96, indicating that the CART performed very well at classifying whether a sighting was a minke or another species, given the input predictor variables.

Table 1 Overview of cetacean sightings and number of animals as recorded during two helicopter surveys conducted during ANT XXIII/8 (2006/2007) and ANT XXV/2 (2008/2009)

Species (scientific name)                            # Sightings  # Animals  Calves
Fin whale (Balaenoptera physalus)                             26         75       6
Sei whale (Balaenoptera borealis)                              2          5       2
Antarctic minke whale (Balaenoptera bonaerensis)              94        183       0
Humpback whale (Megaptera novaeangliae)                       53        134       2
Southern right whale (Eubalaena australis)                     1          2       1
Sperm whale (Physeter macrocephalus)                           6         15       0
Unidentified large whale                                      20         30       0
Southern bottlenose whale (Hyperoodon planifrons)              5         10       0
Gray's beaked whale (Mesoplodon grayi)                         1          5       0
Strap-toothed whale (Mesoplodon layardii)                      2          6       0
Arnoux's beaked whale (Berardius arnuxii)                      1          4       0
Killer whale (Orcinus orca)                                    6         37       0
Rough-toothed dolphin (Steno bredanensis) &
  common dolphin (Delphinus sp.)                               3        143      >1
Unidentified small cetacean                                    1          1       0
Total                                                        221        650      12

The spatial distribution of the various cetacean species was distinct. Minke whales were encountered only in waters south of 57°S, and most sightings were made in or close to sea ice (Fig. 4). Beaked whale sightings ranged from 50°S to 61°S, with most sightings in a small area north of the South Shetland Islands. The large baleen whales were found in open water throughout the study area, but were not sighted further south than 63°S. For humpback and fin whales, two areas of high density were found: one around the South Shetland Islands (Fig. 2) and the other in the most eastern part of the survey, at a latitude of about 53°S (Fig. 2 continued). Killer whales were observed at both higher and lower latitudes.
Only four species were observed in ice-covered waters, mainly in the Weddell Sea and near the edge of the Larsen A and B ice shelves: Antarctic minke whale, Arnoux's beaked whale (Berardius arnuxii), southern bottlenose whale (Hyperoodon planifrons) and killer whale (Orcinus orca) (Fig. 2). The initial sighting cue of most of the identified sightings was the body at the surface of the water. This was followed by blows for large whales, footprints for the Antarctic minke whale and body under the surface for small and beaked whales (Table 2). Behaviour varied between species as well. Most animals were swimming directionally, and a large proportion was logging at the surface, resting or swimming slowly (Table 2). When the helicopter passed animals in survey mode, in three cases a visible reaction (fluke up, increase in swimming speed, change in swimming direction) to the helicopter was recorded. For all these sightings, the distance to the helicopter was less than 500 m. When positioning the helicopter to take photographs, reactions were observed for two beaked whale groups. Whilst lying horizontally just at the water surface, the animals synchronously changed their orientation in the water towards the helicopter and dove after a few minutes. Digital photography from the helicopter offered the ability to confirm visual estimates of group size. Mean group sizes varied between the different species (Table 2). Humpback whales had a maximum group size of 11; fin whales and Antarctic minke whales, 7 animals; sperm whales, 6 animals; and sei whales, 3 animals. Killer whale group sizes ranged from 1 animal (an adult male) to 18 (Table 2).

Fig. 3 CART diagram showing the relationships between three cetacean species and sea ice. Black bars represent Antarctic minke whales, grey bars represent fin whales and white bars represent humpback whales.
The R-squared value and number of splits are generated from the optimal recursive partitioning function to generate the best-fit model of the data. Counts indicate the total number of sightings that were used for each split, and the proportion of each species is shown in each box, e.g. for sightings <142,931 m from the ice edge and ice cover >5%, 84 sightings were made and all were minke whales (black bar).

Fig. 4 Photograph of a group of Antarctic minke whales taken from the helicopter in the sea ice of the Weddell Sea

Using digital photography from the helicopter, it was possible to later confirm species identity (Fig. 4). This was especially useful for the identification of four beaked whale species seen during the survey. Digital photography was also used for individual photo-identification of three humpback whales and one southern right whale mother-calf pair. Photographs were then compared with existing photo-identification catalogues in South America and South Africa. One humpback whale identified close to the South Shetland Islands was matched with an individual identified off the coast of Ecuador (pers. comm. Fernando Felix). One southern right whale female was matched with an individual recorded in South African waters in 1981 and 1996 (pers. comm. Peter Best).

Discussion

Helicopter surveys conducted from RV Polarstern proved valuable in several respects. The helicopter allowed the coverage of a broad study area in the comparatively short period of a few hours, as opposed to ship-borne surveys, which would take several days to cover the same area. Also, the helicopter survey effort was independent of the coverage of sea ice. Whilst a vessel would have to adapt survey speed and course, the helicopter can keep to a standard protocol even when crossing areas of high sea ice coverage.
Thus, our surveys spanned completely open waters as well as those completely covered by ice and were not restricted to open-water habitats more easily accessed by the ship. Generally speaking, the areas surveyed in this study could not have been surveyed by conventional sighting ships as used in IWC surveys (Branch and Butterworth 2001). Whilst ship-based observations would be possible under similar sea ice conditions, ice-breaking operations generate substantial noise that has been shown to alter the distribution of some whales (e.g. studies of ice-breakers and bowhead whales; Richardson et al. 1995), which would compromise one of the major objectives of our study. To conduct a successful survey, it is essential to use a good survey design, which can be challenging in complex habitats (Thomas et al. 2007). The design we developed can be adapted shortly before (and even during) each survey flight, for example, to account for varying and complex sea ice conditions or in order to avoid poor local weather conditions. The resulting coverage yields data spanning a wide range of ice conditions, but is nevertheless spatially biased. There are spatial modelling methods that can use ice data together with the distance-sampling data to model habitat use of Antarctic cetaceans, in particular the Antarctic minke whale (Hedley et al. 1999; Williams et al. 2006). But methods to generate robust estimates of absolute abundance from such spatially biased data are still in development (Bravington and Hedley 2009).
We have initiated collaborations that will take advantage of these complex statistical models to better understand the distribution and abundance of cetaceans in the Weddell Sea after accounting for unequal coverage probability and rapidly changing ice conditions (Bravington and Hedley 2009). In the meantime, our exploratory analyses using CARTs reveal distribution patterns that allow us to generate specific hypotheses regarding how different cetacean species use, and are affected by, changing ice conditions in the Weddell Sea. Furthermore, we can now develop an analytical framework to compare the ecological interactions between cetaceans and their environment across very different Antarctic regions. The strong physical differences between regions such as the Weddell Sea, Western Antarctic Peninsula and Ross Sea, for example, will have profound effects on the community structure, distribution and abundance of cetaceans (and other krill predators).

Table 2 Overview of distribution of cues, behaviour and group sizes of large whales (fin whale, sei whale, humpback whale, southern right whale, sperm whale); Antarctic minke whale; small whales (killer whales, dolphins) and beaked whales as observed during the ANT XXIII/8 (2006/2007) and ANT XXV/2 (2008/2009) helicopter-based surveys

                              Large whales  Antarctic minke whale  Small whales  Beaked whales
Cue                                 N = 94                 N = 77         N = 9          N = 8
Body at surface (%)                     48                     71            78             62
Body under surface (%)                   2                      9            22             38
Blow (%)                                45                      0             0              0
Footprint (%)                            0                     20             0              0
Splash (%)                               5                      0             0              0
Behaviour                           N = 86                 N = 79         N = 9          N = 9
Directional swimming (%)                57                     44            89             22
Un-directional swimming (%)              2                     16             0             33
Resting (%)                             28                     35            11             44
Feeding (%)                              6                      4             0              0
Other (breaching) (%)                    7                      0             0              0
Group size and composition         N = 107                 N = 94        N = 10          N = 9
Average group size                    2.46                   1.95         18.10           2.78
Range of group size                   1–11                    1–7         1–110            1–5
Understanding regional differences can allow predictive models and hypotheses to be tested regarding how different systems will respond to changes in environmental conditions. Recent studies have shown that the distribution of baleen whales around the Antarctic Peninsula is related to physical features as well as prey availability (Thiele et al. 2004; Friedlaender et al. 2006) and that strong three-dimensional differences in these associations are found between humpback and minke whales in coastal waters (Friedlaender et al. 2009). The sea ice conditions in the Weddell Sea and at the Western Antarctic Peninsula are dramatically different. Likewise, the oceanographic processes governing the circulation patterns and productivity of each region are very different, and thus we expect differences in the communities of top predators (including cetaceans) in each region. Whilst the distribution of prey is known to have the greatest influence on the distribution of cetaceans on the western side of the Antarctic Peninsula, the heavy and multi-year ice conditions found in the Weddell Sea will likely shape a very different cetacean community and affect the distribution and abundance of the species present. Whereas humpback whales are the most common cetacean species in the waters around the Western Antarctic Peninsula in summer (Friedlaender et al. 2009) and fall (e.g. Thiele et al. 2004; Friedlaender et al. 2006), we find minke whales to be the most commonly seen cetacean in the Weddell Sea. Similar to Friedlaender et al. (2010) on the western side of the Antarctic Peninsula, we find that minke whale distribution is most influenced by proximity to sea ice cover in the Weddell Sea. In our current study, we demonstrate a horizontal separation of minke whales from other baleen whales with respect to proximity to sea ice. This is in contrast to previous work around the Antarctic Peninsula that specifically looked at areas where minke and humpback whales were sympatric (e.g.
Friedlaender et al. 2006, 2009). The differences in whale community structure in the current paper are likely the result of both seasonal and spatial differences in when and where the studies were conducted. The current study took place during the middle of summer, when the marginal ice edge is retreating and pack ice from the previous year has completely consolidated. Friedlaender et al. (2006, 2009) drew conclusions from data collected in the late autumn, when ice was beginning to form for the upcoming winter. Likewise, our data were collected in the Weddell Sea, as opposed to the western side of the Antarctic Peninsula, and the dramatic oceanographic differences between these regions likely contribute to differences both in the physical structure of sea ice and in the distribution of cetaceans in the regions. Thus, the differences presented here in the distribution of different cetacean species that rely heavily on a common resource (Antarctic krill) are new and insightful. A more comprehensive analysis including measurements of prey would offer further information regarding the regional differences in cetacean distribution across the Antarctic. As a survey platform, helicopters offer a unique and effective means of covering large areas in a short period of time. Being based on oceanographic research vessels allows helicopters to cover far-reaching areas not accessible from land-based stations. Likewise, helicopter surveys can be paired with oceanographic and environmental sampling strategies that can only be accomplished by dedicated time on research vessels. If the research vessel stays in the same area for an extended period of time, for example during dedicated research programs, study areas can also be covered using a predetermined survey design, and quantitative analyses can be conducted to better understand the distribution and abundance patterns of cetaceans in relation to their environment.
During the ANT XXIII/8 cruise, two such research programs were conducted: one around Elephant Island investigating fish fauna, and one in the Larsen A and B area investigating the effects of the collapse of the Larsen ice shelves (Gutt 2008; Gutt et al. 2011). As the vessel spent several weeks in each of these areas, we changed our ad hoc survey design to parallel tracklines in a way that ensured representative coverage. In the future, this will allow local whale abundance to be estimated using relatively simple, conventional distance-sampling analyses. The helicopter platform allowed good observations of cetaceans above water and underneath (see Fig. 4, Antarctic minke whales surfacing and under water). For a considerable proportion of sightings of small cetaceans and beaked whales, the main cue was the body under water, indicating that helicopter surveys are able to detect cetaceans just under the water surface when they would not be visible to an observer on a vessel. This is particularly important for species that either do not show easily recognisable surface cues (e.g. blows of large baleen whales) or that spend only little time at the surface (e.g. beaked whales). This is advantageous for surveys to estimate absolute abundance, as well as for mitigation purposes (e.g. in the context of the use of military sonar or seismic surveys), when it is vital to know when an animal is in the vicinity of potentially harmful human activities. As has been shown in previous helicopter surveys (e.g. dolphins in the Eastern Tropical Pacific; Gerrodette and Forcada 2005), one advantage of helicopters was the ability to halt the survey at any time and position the helicopter in a way in which detailed information on a sighting could be collected.
This was especially true for the accurate determination of group size, where it was found that initial group size estimates of minke whales often increased during observation of the sighting in closing mode. The use of a digital camera from the helicopter served as an effective tool to identify whales at the species level. In the case of beaked whales, the use of digital photography in combination with the helicopter allowed the determination of species for which sightings are short and notoriously difficult to identify. This has allowed us to obtain records of poorly known species, such as Arnoux's beaked whale, the strap-toothed whale and Gray's beaked whale. Even though photo-identification during our helicopter survey was used as an opportunistic method, the data collected can provide valuable information. The matching of individual right whales between feeding and breeding grounds can offer insight into migration patterns and habitat use. Photo-identification catalogues of individual whales exist worldwide and are particularly important for those cetacean populations that occur in low numbers. In summary, conducting helicopter surveys, even from platforms that cannot follow a systematic survey design, can be an efficient means to investigate cetacean distribution and abundance in the Southern Ocean. This is especially true for areas in which high ice coverage makes survey work with a non-ice-breaking vessel impractical (e.g. the Weddell Sea). Nevertheless, it is difficult to conduct pre-designed surveys whilst working from a transiting research vessel. We see two options for overcoming this limitation. One is to be adaptive in survey design, including personnel with experience in survey design and data analysis, so that survey design algorithms can be used to lay out tracklines in more or less real time as satellite ice images become available.
The second option is to come up with what is thought to represent a reasonable solution in the field, and accept that some principles of good survey design will be violated due to uncertainty in ice conditions. This latter approach leaves the problems of poor survey design to be addressed at the analysis stage. In our case, our future analyses are going to be complicated immensely by the unequal coverage probability that stemmed from our ad hoc survey design (Bravington and Hedley 2009). In other words, one can bring an analyst into the field, or be prepared to spend additional time and resources on analysis post-cruise. The geographic complexity and dynamic nature of sea ice will always be a challenge to the design and execution of rigorous systematic surveys for whales in these regions. However, the flexibility and manoeuvrability of helicopters make them a powerful scientific tool with which to approach that challenge.

Acknowledgments A large number of people provided valuable advice prior to the field work. This includes Horst Bornemann, Peter Boveng, Mark Bravington, Michael Cameron, Greg Donovan, Christian Haas, Sharon Hedley, John Jansen, Jeff Laake, Russell Leaper and Tony Warby. We would like to thank Captain Pahl and Captain Schwarze and the crew of RV Polarstern for their logistical support. A special "thanks" goes to the helicopter crew, Jürgen Büchner, Hans Heckmann, Uli Michalski, Jens Brauer, Markus Heckmann and Carsten Möllendorf, for professional and safe flights. We would also like to thank the meteorological office onboard the Polarstern, Frank-Ulrich Dentler, Klaus Buldt, Christoph Joppich, Harald Rentsch, Edmund Knutz and Felicitas Hansen, for their excellent weather forecasts. A personal thanks goes to the scientist-in-charge Julian Gutt for his continuous support of this project prior to and during the ANT XXIII/8 survey.
We thank Stefan Bräger, Helena Herr, Kristina Lehnert and Hans Verdaat for their dedicated observer work during ANT XXV/2. Thanks to Gunnar Spreen and his colleagues for providing AMSR-E ice concentration data. The two projects presented here were financed by a number of different institutions: Alfred Wegener Institute for Polar and Marine Research (AWI), Institute for Marine Resources and Ecosystem Studies (Wageningen IMARES), Johann Heinrich von Thünen Institute (Federal Research Institute for Rural Areas, Forestry and Fisheries), Research and Technology Centre Westcoast (FTZ) of the University of Kiel, the Netherlands Polar Programme (NPP) of the Netherlands Organisation for Scientific Research (NWO), Dutch Ministry of Agriculture, Nature and Food Quality (LNV), German Federal Ministry of Food, Agriculture and Consumer Protection (BMELV) and the German Federal Ministry for the Environment, Nature Conservation and Nuclear Safety (BMU). The responsibility for the content of this publication lies with the authors. We thank Natalie Kelly, Colin Southwell, Paige Eveson and one anonymous reviewer for helpful feedback on a previous version of this manuscript.

Open Access This article is distributed under the terms of the Creative Commons Attribution Noncommercial License, which permits any noncommercial use, distribution, and reproduction in any medium, provided the original author(s) and source are credited.

References

Ainley DG, Dugger KM, Toniolo V, Gaffney I (2007) Cetacean occurrence patterns in the Amundsen and southern Bellingshausen Sea sector, Southern Ocean. Marine Mammal Sci 23:287–305
Branch TA, Butterworth DS (2001) Southern Hemisphere minke whales: standardised abundance estimates from the 1978/79 to 1997/98 IDCR/SOWER surveys. J Cetacean Res Manag 3:143–174
Bravington MV, Hedley SL (2009) Antarctic minke whale abundance estimates from the second and third circumpolar IDCR/SOWER surveys using the SPLINTR model.
Paper SC/61/IA14 presented to the Scientific Committee of the International Whaling Commission, June 2009 (unpublished)
Breiman L, Friedman JH, Olshen R, Stone CJ (1984) Classification and regression trees. Wadsworth International Group, Belmont, CA
Buckland ST, Anderson DR, Burnham KP, Laake JL, Borchers DL, Thomas L (2001) Introduction to distance sampling. Oxford University Press, Oxford
Clapham PJ, Berggren P, Childerhouse S, Friday NA, Kasuya T, Kellam P, Kock KH, Manzanilla-Naim S, Notarbartolo di Sciara G, Perrin WF, Read AJ, Reeves RR, Rogan E, Rojas-Bracho L, Smith TD, Stachowitsch M, Taylor BL, Thiele D, Wade PR, Brownell RL (2003) Whaling as science. Bioscience 53:210–212
ESRI (2009) ArcGIS: a complete integrated system. Environmental Systems Research Institute, Inc, Redlands, California
Friedlaender AS, Halpin PN, Qian SS, Lawson GL, Wiebe PH, Thiele D, Read AJ (2006) Whale distribution in relation to prey abundance and oceanographic processes in shelf waters of the Western Antarctic Peninsula. Mar Ecol Prog Ser 317:297–310
Friedlaender AS, Lawson GL, Halpin PN (2009) Evidence of resource partitioning between humpback and minke whales around the western Antarctic Peninsula. Marine Mammal Sci 25(2):402–415. doi:10.1111/j.1748-7692.2008.00263.x
Friedlaender AS, Johnston DW, Fraser WR, Burns J, Patrick NH, Costa DP (2010) Ecological niche modeling of sympatric krill predators around Marguerite Bay, Western Antarctic Peninsula. Deep Sea Res II: Topical Stud Oceanogr. doi:10.1016/j.dsr2.2010.11.018
Gerrodette T, Forcada J (2005) Non-recovery of two spotted and spinner dolphin populations in the eastern tropical Pacific Ocean. Mar Ecol Prog Ser 291:1–21
Guisan A, Zimmermann NE (2000) Predictive habitat distribution models in ecology.
Ecol Model 135:147–186 Gutt J (2008) The expedition ANTARKTIS-XXIII/8 of the research vessel “Polarstern” in 2006/2007 : ANT-XXIII/8; 23 November 2006–30 January 2007 Cape Town-Punta Arenas. Rep Polar Res 569:1–153 Gutt J, Barratt I, Domack E, d’Udekem d’Acoz C, Dimmler W, Grémare A, Heilmayer O, Isla E, Janussen D, Jorgensen E, Kock K-H, Lehnert LS, López-Gonzáles P, Langner S, Linse K, Eugenia Manjón-Cabeza M, Meißner M, Montiel A, Raes M, Robert H, Rose A, Sañé Schepisi E, Saucède T, Scheidat M, Schenke H-W, Seiler J, Smith C (2011) Biodiversity change after climate-induced ice-shelf collapse in the Antarctic. Deep Sea Res II: Topical Stu Oceanogr 58:74–83 Hammond PS, Benke H, Berggren P, Borchers DDL, Buckland SST, Collet A, Heide Jorgensen MMP, Heimlich Boran S, Hiby AAR, Leopold MF, Oien N (1995) Distribution and abundance of harbour porpoises and other small cetaceans in the North Sea and adjacent waters. European community LIFE programme, life 92-2/IL/027 Hazen EL, Friedlaender AS, Thompson MA, Ware CR, Weinrich MT, Halpin PN, Wiley DN (2009) Fine-scale prey aggregations and foraging ecology of humpback whales Megaptera novaeangliae. Mar Ecol Prog Ser 395:75–89. doi:10.3354/meps08108 Hedley SL, Buckland SST, Borchers DL (1999) Spatial modelling from line transect data. J Cetacean Res Manag 1:255–264 Moore SE, Huntington HP (2008) Arctic marine mammals and climate change: impacts and resilience. Ecol Appl 18:157–165 Plötz J, Weidel H, Bersch M (1991) Winter aggregations of marine mammals and birds in the north-eastern Weddell Sea pack ice. Polar Biol 11:305–309 Redfern JV, Ferguson MC, Becker EA, Hyrenbach KD, Good C, Bar- low J, Kaschner K, Baumgartner MF, Forney KA, Ballance LT, Fauchald P, Halpin P, Hamazaki T, Pershing AJ, Qian SS, Read A, Reilly SB, Torres L, Werner F (2006) Techniques for cetacean habitat modeling. Mar Ecol Prog Ser 310:271–295. doi:10.3354/ meps310271 Richardson J, Green C, Malme C, Thomson D (1995) Marine mam- mals and noise. 
Academic Press, 525 B Street, San Diego, Cali- fornia, USA. pp 92101–4495 Roberts JJ, Best BD, Dunn DC, Treml EA, Halpin PN (2010) Marine geospatial ecology tools: an integrated framework for ecological geoprocessing with ArcGIS, Python, R, MATLAB, and C++. Environ Modell Softw 25:1197–1207. doi:10.1016/j.envsoft. 2010.03.029 Southwell C (2005) Optimising the timing of visual surveys of crabeat- er seal abundance: haulout behaviour as a consideration. Wildlife Res 32:333–338 Spreen G, Kaleschke L, Heygster G (2008) Sea ice remote sensing using AMSR-E 89-GHz channels. J Geophys Res 113(C2):C02S03. doi:10.1029/2005jc003384 Thiele D, Chester ET, Moore SE, Sirovic A, Hildebrand JA, Friedla- ender AS (2004) Seasonal variability in whale encounters in the Western Antarctic Peninsula. Deep Sea Res II: Topical Stu Ocea- nogr 51:2311–2325. doi:10.1016/j.dsr2.2004.07.007 Thomas L, Williams R, Sandilands D (2007) Designing line tran- sect surveys for complex survey regions. J Cetacean Res Manag 9:1–13 van Franeker JA, Bathmann UV, Mathot S (1997) Carbon Xuxes to Antarctic top predators. Deep Sea Res II 44:435–455 Williams R, Hedley SL, Hammond PS (2006) Modeling distribution and abundance of Antarctic baleen whales using ships of oppor- tunity. Ecol Soc 11:1 http://www.ecologyandsociety.org/vol11/ iss1/art1. 
Accessed 20 March 2011 123 http://dx.doi.org/10.1111/j.1748-7692.2008.00263.x http://dx.doi.org/10.1016/j.dsr2.2010.11.018 http://dx.doi.org/10.1016/j.dsr2.2010.11.018 http://dx.doi.org/10.3354/meps08108 http://dx.doi.org/10.3354/meps310271 http://dx.doi.org/10.3354/meps310271 http://dx.doi.org/10.1016/j.envsoft.2010.03.029 http://dx.doi.org/10.1016/j.envsoft.2010.03.029 http://dx.doi.org/10.1029/2005jc003384 http://dx.doi.org/10.1016/j.dsr2.2004.07.007 http://www.ecologyandsociety.org/vol11/iss1/art1 http://www.ecologyandsociety.org/vol11/iss1/art1 Cetacean surveys in the Southern Ocean using icebreaker-supported helicopters Abstract Introduction Methods Helicopter surveys and whale sightings Ice covariate data and analysis methods Results Whale sightings in relation to ice conditions Discussion Acknowledgments References work_wv4x6rnuuzd5fbtfsdmsfp7lvy ---- Microsoft Word - 39308446-file00 HAL Id: hal-00568505 https://hal.archives-ouvertes.fr/hal-00568505 Submitted on 23 Feb 2011 HAL is a multi-disciplinary open access archive for the deposit and dissemination of sci- entific research documents, whether they are pub- lished or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers. L’archive ouverte pluridisciplinaire HAL, est destinée au dépôt et à la diffusion de documents scientifiques de niveau recherche, publiés ou non, émanant des établissements d’enseignement et de recherche français ou étrangers, des laboratoires publics ou privés. Adaptation of a digital Camera for Simultaneous Stereophotography in Ophthalmology Katarina Stingl, Esther Hoffmann, Ulrich Schiefer To cite this version: Katarina Stingl, Esther Hoffmann, Ulrich Schiefer. Adaptation of a digital Camera for Simultaneous Stereophotography in Ophthalmology. British Journal of Ophthalmology, BMJ Publishing Group, 2010, 94 (10), pp.1288. �10.1136/bjo.2010.186502�. 
Adaptation of a digital Camera for Simultaneous Stereophotography in Ophthalmology

Katarina Stingl (1), Esther Hoffmann (2), Ulrich Schiefer (1)

(1) Center for Ophthalmology, Institute for Ophthalmic Research, Tübingen, Germany; (2) Medical University Center Mainz, Dept. of Ophthalmology, Mainz, Germany

Corresponding author: Katarina Stingl, MD, Center for Ophthalmology, Institute for Ophthalmic Research, Schleichstr. 12-16, D-72076 Tübingen, Germany. Tel: +49 7071 2987421; Fax: +49 7071 295361; Email: katarina.stingl@med.uni-tuebingen.de

Keywords: digital simultaneous stereophotography, glaucoma follow-up documentation, optic nerve disc photography, fundus camera

Word count: 977

ABSTRACT

Simultaneous stereoscopic fundus photography is an important tool for the classification and follow-up of glaucomatous optic neuropathy. The use of conventional film-based simultaneous stereoscopic fundus cameras is complicated and time-consuming owing to film processing and the sensitivity of their mechanical components. Digital simultaneous stereoscopic fundus cameras are not available in Germany, since the existing ones lack the CE certificate. We realised a digital simultaneous stereoscopic fundus camera by replacing the conventional film-based analogue camera body of the simultaneous stereoscopic fundus camera TOPCON TRC-SS (TOPCON CORPORATION, Tokyo, Japan) with the digital 21.10 megapixel, 36 x 24 mm full frame CMOS camera CANON EOS 5D, Mark II (CANON INC., Tokyo, Japan).

BACKGROUND

In clinical ophthalmology, morphological findings are crucial for diagnosis as well as for follow-up and assessing therapeutic success. Of particular importance is a stereoscopic view of the posterior eye segment structures such as the retinal morphology, the optic nerve head, or the retinal vessels. A monocular view does not permit a reliable three-dimensional assessment [1].
A stereo image is regarded as the standard for assessing optic disc damage in glaucoma [2,3]. The three-dimensional morphology can be assessed by the examiner with the help of binocular indirect ophthalmoscopy and the slit lamp; however, documentation for classification and follow-up purposes requires a stereoscopic picture. Exact and reproducible three-dimensional documentation is necessary for routine clinical follow-up as well as for documentation in clinical studies [4-6]. To our knowledge, a simultaneous stereoscopic fundus camera capable of digital imaging, such as the NIDEK 3-Dx (NIDEK Co., Ltd., Hiroishi, Japan), is not approved in Germany due to lacking CE certification. According to information from the distributors of the OCULUS Company (OCULUS, Wetzlar, Germany), the light emission of this camera is assumed to be too high, thus precluding certification. Sequential stereoscopic imaging by shifting a monocular fundus camera has proven to be insufficient due to inappropriate standardization [7,8]. The aim of this paper is to demonstrate a solution for upgrading a conventional film-based simultaneous stereoscopic camera into digital equipment.

METHODS

The conventional simultaneous stereo fundus camera TOPCON TRC-SS (TOPCON CORPORATION, Tokyo, Japan) was transformed into a digital camera by replacing the original camera body with a digital camera (CANON EOS 5D, Mark II, CANON INC., Tokyo, Japan), as depicted in Fig. 1. The 21.10 megapixel full frame CMOS (complementary metal oxide semiconductor) sensor camera is capable of capturing both stereo images simultaneously. The digital camera was connected via a specifically constructed adapter joining the conventional fundus camera and the digital full frame camera body.
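The full-frame requirement behind this adaptation can be sanity-checked with a few lines of arithmetic. The 5616 x 3744 photosite grid below is the commonly published resolution of the Canon EOS 5D Mark II; it is an assumption of this sketch rather than a figure stated in the text:

```python
# Rough geometry check: why a full-frame (36 x 24 mm) sensor is needed to
# capture the whole image produced by the TOPCON TRC-SS stereo optics.
# 5616 x 3744 is the commonly published EOS 5D Mark II resolution (assumed).

SENSOR_W_MM, SENSOR_H_MM = 36.0, 24.0
PIX_W, PIX_H = 5616, 3744

megapixels = PIX_W * PIX_H / 1e6                 # ~21.0 MP, matching "21.10 megapixel"
pixel_pitch_um = SENSOR_W_MM / PIX_W * 1000      # ~6.4 micrometres per photosite

# A smaller APS-C sensor (~22.3 x 14.9 mm, a typical Canon crop size) would
# record only a fraction of the full-frame area and crop the stereo image pair:
aps_c_area_fraction = (22.3 * 14.9) / (SENSOR_W_MM * SENSOR_H_MM)

print(f"{megapixels:.1f} MP, pixel pitch {pixel_pitch_um:.1f} um, "
      f"APS-C covers {aps_c_area_fraction:.0%} of the full-frame area")
```

The area fraction (roughly 38%) illustrates why the authors stress that cameras with smaller sensors cannot capture the full image produced by the conventional fundus camera.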
RESULTS AND DISCUSSION

To our knowledge, this is the first time that a conventional analogue simultaneous stereoscopic fundus camera has been transformed into a digital system that is compatible with the CE prescription and can be integrated into an electronic patient record system. Stereoscopic fundus photography provides optimal documentation of the morphology of the posterior eye segment in the follow-up of glaucoma or macular diseases [4-6,9,10]. Such findings can be collected and viewed with stereo viewers such as the Screen-VU Stereoscope (Heidelberg Engineering, Inc., Heidelberg, Germany). The output of conventional stereoscopic photography consists of classical photo slides. Although the picture quality is mostly excellent, this approach is time-consuming, since the photographic film must be used up completely before development, which is usually performed externally in the case of colour slides. As a consequence, immediate inspection and judgement of the photographic output is impossible. Moreover, the storage and retrieval of photographic slides is a considerable economic and spatial burden. Switching to digital photography circumvents the above-mentioned shortcomings and is extremely attractive for the increasing number of clinics working with digital patient records, as is the case at the Department and Institute for Ophthalmology, University of Tuebingen. To obtain electronic data, classical analogue stereoscopic slides can be digitalized ex post. However, this entails considerable cost and time and is, unfortunately, associated with an inevitable loss of quality. Direct stereoscopic digital fundus photography is therefore the optimal solution. A simultaneous stereoscopic fundus camera with the possibility of digital imaging, such as the NIDEK 3-Dx (NIDEK Co., Ltd., Hiroishi, Japan), has not been approved in Germany so far.
The reason for this is the lacking CE certification, since the light emission of this camera is assumed to be too high, based on information from the distributors of the OCULUS Company (OCULUS, Wetzlar, Germany). Alternatively, pseudo-stereoscopic images can be obtained with a standard digital fundus camera. In this case, two non-stereoscopic pictures are taken sequentially from two different angles, thus mimicking a stereo view. Unfortunately, a pseudo-stereoscopic picture is not an optimal solution because of its lack of reproducibility, as the amount of shift depends on the (varying) pupil diameter and on the subjective approach of the photographer. For example, an inconsistency of the camera angle can lead to a false positive or false negative judgment with regard to progression in glaucoma [8,11]. Therefore, the ideal method for stereoscopic documentation is simultaneous stereophotography, whereas pseudo-stereoscopic techniques are less frequently used nowadays. Recent follow-up imaging techniques such as the Heidelberg Retina Tomograph (HRT, Heidelberg Engineering, Inc., Heidelberg, Germany) have been shown to be comparably accurate for diagnosing optic nerve head damage [2]; however, in estimating glaucoma progression it is inferior to stereophotography, since criteria such as haemorrhages or disc pallor cannot be assessed. So far, good stereophotography of the optic nerve head remains the best method for morphological evaluation in glaucoma. Examples of stereoscopic optic disc documentation are shown in Fig. 2 and Fig. 3. For viewing, a classical stereo viewer or software for quantitative analysis can be applied.
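The reproducibility problem described above can be made concrete with a little trigonometry. The working distance and the usable fraction of the pupil in the sketch below are illustrative assumptions, not values from the paper; the point is only that in sequential pseudo-stereo imaging the stereo base, and hence the convergence angle that determines the depth impression, scales with the pupil diameter:

```python
# Sketch: convergence angle between two viewpoints separated by a stereo
# base, as in sequential (pseudo-)stereo fundus photography. All numbers
# are illustrative assumptions, not measurements from this paper.
import math

def convergence_angle_deg(stereo_base_mm: float, distance_mm: float) -> float:
    """Angle subtended at the object by two viewpoints 'stereo_base_mm' apart."""
    return math.degrees(2.0 * math.atan((stereo_base_mm / 2.0) / distance_mm))

WORKING_DISTANCE_MM = 40.0        # assumed camera-to-eye distance
for pupil_mm in (4.0, 6.0, 8.0):  # dilated pupil diameters vary between visits
    base_mm = pupil_mm / 2.0      # assume half the pupil is usable as stereo base
    angle = convergence_angle_deg(base_mm, WORKING_DISTANCE_MM)
    print(f"pupil {pupil_mm:.0f} mm -> stereo base {base_mm:.1f} mm -> {angle:.1f} deg")
```

Under these assumptions, doubling the pupil diameter roughly doubles the convergence angle, so two follow-up images taken at different dilations exaggerate or flatten depth differently — the inconsistency the authors cite as a source of false positive or false negative progression judgments.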
Since analogue simultaneous stereoscopic fundus cameras such as the TOPCON TRC-SS were broadly used in earlier decades and may be part of the equipment of many ophthalmologic departments and clinics, we present a simple, low-cost solution for converting this high-quality documentation system into a digital one in order to document the stereoscopic image. An important aspect is that the attached digital camera must have a full frame sensor (36 x 24 mm) in order to capture the full image produced by the conventional TOPCON fundus camera, which would not be possible with cameras with smaller sensors.

CONCLUSIONS

By transferring a conventional film-based simultaneous stereoscopic camera into a digital unit, we were able to realize a solution for 3D morphological documentation in ophthalmology without the disadvantages of analogue stereoscopic photography (financial and time effort, space-consuming storage and retrieval requirements, and non-suitability for direct transfer into electronic records) or of pseudo-stereophotography (lack of reproducibility).

ACKNOWLEDGEMENT

This project was supported by ALCON Pharma GmbH, Freiburg, Germany.

COMPETING INTERESTS

None. The Corresponding Author has the right to grant on behalf of all authors, and does grant on behalf of all authors, an exclusive licence (or non-exclusive for government employees) on a worldwide basis to the BMJ Publishing Group Ltd and its Licensees to permit this article (if accepted) to be published in British Journal of Ophthalmology and any other BMJPGL products to exploit all subsidiary rights, as set out in the licence.

REFERENCES

1 Martinello M, Favaro P, Muyo Nieto GD, et al. 3-D retinal surface inference: stereo or monocular fundus camera? Conf Proc IEEE Eng Med Biol Soc 2007;2007:896-9.
2 Pablo LE, Ferreras A, Fogagnolo P, et al. Optic nerve head changes in early glaucoma: a comparison between stereophotography and Heidelberg retina tomography.
Eye (Lond) 2010;24:123-30.
3 Hoffmann EM. [Optic disc photography and retinal nerve fiber layer photography]. Ophthalmologe 2009;106:683-6.
4 Budenz DL, Anderson DR, Feuer WJ, et al. Detection and prognostic significance of optic disc hemorrhages during the Ocular Hypertension Treatment Study. Ophthalmology 2006;113:2137-43.
5 Miglior S, Zeyen T, Pfeiffer N, et al. Results of the European Glaucoma Prevention Study. Ophthalmology 2005;112:366-75.
6 Gaasterland DE, Blackwell B, Dally LG, et al. The Advanced Glaucoma Intervention Study (AGIS): 10. Variability among academic glaucoma subspecialists in assessing optic disc notching. Trans Am Ophthalmol Soc 2001;99:177-84.
7 Krohn MA, Keltner JL, Johnson CA. Comparison of photographic techniques and films used in stereophotogrammetry of the optic disk. Am J Ophthalmol 1979;88:859-63.
8 Chang RT, Budenz DL. Diagnosing glaucoma progression. Int Ophthalmol Clin 2008;48:13-28.
9 Sallo FB, Peto T, Leung I, et al. The International Classification system and the progression of age-related macular degeneration. Curr Eye Res 2009;34:238-40.
10 Davis MD, Gangnon RE, Lee LY, et al. The Age-Related Eye Disease Study severity scale for age-related macular degeneration: AREDS Report No. 17. Arch Ophthalmol 2005;123:1484-98.
11 Knudsen LL, Skriver K. A 3-dimensional evaluation of the macular region: comparing digitized and film-based media with a clinical evaluation. Acta Ophthalmol Scand 2006;84:296-300.

FIGURE LEGENDS

Fig. 1: The analogue simultaneous stereo fundus camera TOPCON TRC-SS has been transformed into a digital system by adding the digital camera CANON EOS 5D, Mark II (21.10 megapixel, 36 x 24 mm full frame CMOS sensor).

Fig. 2: A simultaneous digital stereoscopic picture of an optic nerve head obtained with the introduced digital simultaneous stereophotography fundus camera. The pictures can be viewed directly on the computer screen by means of a stereo viewer (e.g.
Screen-VU Stereoscope, Heidelberg Engineering, Inc., Heidelberg, Germany). Normal optic disc.

Fig. 3: A simultaneous digital stereoscopic picture of an optic nerve head obtained with the introduced digital simultaneous stereophotography fundus camera. The pictures can be viewed directly on the computer screen by means of a stereo viewer (e.g. Screen-VU Stereoscope, Heidelberg Engineering, Inc., Heidelberg, Germany). Glaucomatous optic disc.

Research Article
Different Trichoscopic Features of Tinea Capitis and Alopecia Areata in Pediatric Patients

Abd-Elaziz El-Taweel, Fatma El-Esawy, and Osama Abdel-Salam
Dermatology & Andrology Department, Faculty of Medicine, Benha University, Benha, Al Qalyubia 13512, Egypt
Correspondence should be addressed to Osama Abdel-Salam; oselfady452003@yahoo.com
Received 17 February 2014; Accepted 13 May 2014; Published 16 June 2014
Academic Editor: Giuseppe Argenziano

Copyright © 2014 Abd-Elaziz El-Taweel et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Background. Diagnosis of patchy hair loss in pediatric patients is often a matter of considerable debate among dermatologists. Trichoscopy is a rapid and noninvasive tool to detect more details of patchy hair loss. Like clinical dermatology, trichoscopy works parallel to the skin surface and perpendicular to the histological plane; like histopathology, it thus allows the viewing of structures not discovered by the naked eye. Objective. To compare the different trichoscopic features of tinea capitis and alopecia areata in pediatric patients. Patients and Methods. This study included 40 patients, 20 patients with tinea capitis and 20 patients with alopecia areata.
They underwent clinical examination, laboratory investigations (10% KOH and fungal culture), and trichoscopic examination. Results. In tinea capitis patients, comma shaped hairs, corkscrew hairs, and zigzag shaped hairs were the diagnostic trichoscopic features, while in alopecia areata patients the most specific trichoscopic features were yellow dots, exclamation mark hairs, and short vellus hairs. Conclusion. Trichoscopy can be used as a noninvasive tool for rapid diagnosis of tinea capitis and alopecia areata in pediatric patients.

1. Introduction

Although losing hair is not usually health threatening, it can scar a young child's vulnerable self-esteem by causing immense psychological and emotional stress, not only to the patient but also to the concerned parents and siblings [1]; the cause of hair loss should therefore be diagnosed and treated early to overcome the resulting problems [2]. The most frequent causes of hair loss in pediatric patients include tinea capitis, alopecia areata, traction alopecia, and trichotillomania. The clinician must be able to separate the types and causes of hair loss into those that reflect primary dermatologic conditions and those that represent a reaction to systemic disease [3]. Tinea capitis is a superficial fungal infection of the scalp. The disease is primarily caused by dermatophytes in the Trichophyton and Microsporum genera that invade the hair shaft. The clinical presentation is typically a single patch or multiple patches of hair loss, sometimes with a "black dot" pattern, that may be accompanied by inflammation, scaling, pustules, and itching [4]. Alopecia areata (AA) is a medical condition in which hair is lost from some or all areas of the body, usually from the scalp [5]. Typical first symptoms of alopecia areata are small bald patches; the underlying skin looks superficially normal. These patches can take many shapes but are most usually round or oval [6].
The cause of focal hair loss may be diagnosed by the appearance of the patch and examination for fungal agents. A scalp biopsy may be necessary if the cause of hair loss is unclear [7]. Trichoscopy is a noninvasive diagnostic tool that allows the recognition of morphologic structures not visible to the naked eye. The trichoscope, despite its ease of handling, is not a mere magnifying glass but a more complex instrument, allowing the superimposition of the skin layers. This is entirely different from the image obtained in histopathology, where the visualization is total, with the possibility to observe any surface or deep skin layer [8]. Trichoscopy is useful for the diagnosis and follow-up of hair and scalp disorders; however, it is not yet widely used in the management of hair disorders. It enables dermatologists to make fast diagnoses of tinea capitis and alopecia areata, distinguish early androgenetic alopecia from telogen effluvium, and differentiate scarring from nonscarring alopecia [9].

Hindawi Publishing Corporation, Dermatology Research and Practice, Volume 2014, Article ID 848763, 6 pages, http://dx.doi.org/10.1155/2014/848763

Aim of Work. The aim of this work is to detect the different trichoscopic features of tinea capitis and alopecia areata in pediatric patients with localized patches of hair loss.

2. Patients and Methods

This observational analytical study included forty patients, twenty with tinea capitis and twenty with alopecia areata, of either sex, aged ≤12 years, presenting with solitary or multiple lesions of patchy hair loss of the scalp. The study was conducted at Cairo Hospital of Dermatology and Andrology (Al-Haud Al-Marsoud) during the period from October 2012 to March 2013.
The exclusion criteria were (1) patients with any concomitant dermatological diseases and (2) a history of using any topical (1 month) or systemic (3 months) treatment for tinea capitis or alopecia areata prior to the study. Consent was taken from a parent of each patient before participation in the study, which was approved by the Ethics Committee of Human Research of Benha University. All patients were subjected to the following: (1) history taking, clinical examination, and digital photography of any lesion of patchy hair loss using a Panasonic LUMIX S5 16 megapixel camera, (2) microscopic examination of skin scrapings and plucked hairs using KOH 10% and fungal culture, and (3) trichoscopic examination.

2.1. Laboratory Examination. The specimen was collected in a sufficient amount from the edge of the area of hair loss (scales or plucked hairs). Hair roots and skin scrapings were mounted in 10% potassium hydroxide solution. The slide was gently heated and microscopically examined for spores. The culture was done on Sabouraud's agar media; the cultures were incubated at 30°C and examined frequently for 4 weeks.

2.2. Trichoscope Examination. In this study, a hand-held trichoscope (DermLite DL3, Gen, USA), which can block light reflection from the skin surface without immersion gels, was used. Characteristics and specifications of the DermLite DL3 trichoscope used in the study were 20x and 40x magnification with focusing optics. It consists of light-emitting diode (LED) bulbs, a piece handle and head, and replaceable (rechargeable lithium) batteries. The trichoscope is switched on and placed about 1 cm away from the lesion, or gently over the lesion after covering it with gel, so that the lesion is in the center of the contact plate. The examiner's eyes should be as close as possible to the eyepieces, with the free hand adjusting the focusing ring until a clearly focused image is obtained (in most cases it is only necessary to set up the focus once).
The lens was disinfected with an alcohol swab to avoid transmission of infection. Digital photographs of the lesion(s) were taken through the DermLite DL3 Gen trichoscope. The findings obtained were evaluated by the same two dermatologists.

2.3. Statistical Analysis. All collected data were revised for completeness and accuracy. Precoded data were entered on the computer using the Statistical Package for the Social Sciences (SPSS) software, version 15, to be statistically analyzed.

3. Results

3.1. Clinical Data. The alopecia areata group comprised 13 females (65%) and 7 males (35%). Their age ranged from 1.5 to 11 years with a median (interquartile range, IQR) of 5.25 (3.3, 8.0). The duration of lesions ranged from 2 to 12 weeks with a median (IQR) of 4.00 (2.3, 11.0). The number of lesions ranged from 1 to 2 with a median (IQR) of 1.0 (1.0, 2.0). The size of the lesions ranged from 0.5 to 3 cm (2.0±0.6 x 1.5±0.7 cm). The tinea capitis group comprised 15 males (75.0%) and 5 females (25.0%). Their age ranged from 2 to 11 years with a median (IQR) of 5.0 (3.5, 7.1). The duration of the lesions ranged from 2 to 12 weeks with a median (IQR) of 4.00 (2.3, 11.0). The number of lesions ranged from 1 to 2 with a median (IQR) of 1.0 (1.0, 1.0). The size of the lesions ranged from 1 to 3 cm (2.1±0.8 x 1.6±0.8 cm).

3.2. Laboratory Results. Direct microscopic examination of the collected specimens mounted in KOH 10% was done for all patients and revealed that 13 tinea capitis patients (32.5% of all patients) gave a positive result, 7 patients (17.5%) gave false negative results, and all cases of alopecia areata gave negative results. The dermatophytes isolated were as follows: T. violaceum in 6 patients (15.0%), M. canis in 6 patients (15.0%), T. rubrum in 3 patients (7.0%), and T. verrucosum in 5 patients (13.0%).

3.3. Trichoscopic Results.
In patients with tinea capitis, the most common trichoscopic feature (Figure 1) was short broken hairs, seen in 18 patients (90.0%), followed by black dots in 13 patients (65.0%), comma shaped hairs in 11 patients (55.0%), corkscrew hairs in 9 patients (45.0%), and zigzag shaped hairs in 5 patients (25.0%) (Table 1). In patients with alopecia areata, the most common trichoscopic feature (Figure 2) was black dots in 12 patients (60.0%), followed by yellow dots in 11 patients (55.0%), exclamation mark hairs in 11 patients (55.0%), white hairs in 9 patients (45.0%), short vellus hairs in 8 patients (40.0%), short broken hairs in 8 patients (40.0%), and pig tail growing hairs in 3 patients (15.0%) (Table 2).

4. Discussion

Tinea capitis and alopecia areata are considered to be the most common causes of hairless patches of the scalp in pediatrics [10]. Tinea capitis, especially the nonscaly type, may have the same clinical appearance as alopecia areata, so trichoscopy has recently become a useful diagnostic tool for both conditions, especially in doubtful cases, as laboratory investigations like fungal culture or biopsy may take several weeks [11, 12]. Studies regarding the trichoscopic findings of patients with tinea capitis are few and included few patients [13]. In the present study, on trichoscopic examination of tinea capitis patients, comma shaped hairs, zigzag shaped hairs, corkscrew hairs, black dots, and short broken hairs were considered characteristic trichoscopic features of tinea capitis, as reported by Ekiz et al. [14].

Figure 1: Tinea capitis: (a) macroscopic view; (b) trichoscopic view at 20x magnification showing comma shaped hairs (blue arrow), a black dot (black arrow), short broken hairs (green arrow), and white scales (white arrow).

Table 1: Different trichoscopic features of tinea capitis.
Trichoscopic feature    Present, n (%)    Absent, n (%)
Comma shaped hairs      11 (55.0)          9 (45.0)
Zigzag shaped hairs      5 (25.0)         15 (75.0)
Black dots              13 (65.0)          7 (35.0)
Short broken hairs      18 (90.0)          2 (10.0)
Corkscrew hairs          9 (45.0)         11 (55.0)
(N = 20 tinea capitis patients)

The most common trichoscopic feature was short broken hairs, followed by black dots, but both of these are nonspecific, as they are detected in other conditions of hair loss. Comma shaped hairs, corkscrew hairs, and zigzag shaped hairs are the diagnostic trichoscopic features of tinea capitis. In the present study, comma shaped hairs were seen in 55% (11 out of 20 patients); this finding was reported in other studies that included a small number of patients [14-16]. Comma hairs, which are slightly curved and fractured hair shafts, are associated with both ectothrix and endothrix types of fungal invasion. The authors believe that the comma hair is probably shaped as a result of subsequent cracking and bending of a hair shaft filled with hyphae [15]. In the current study, zigzag shaped hairs were seen in 25.0% (5 out of 20 patients) and corkscrew hairs in 45.0% (9 out of 20 patients); these findings were reported in other studies with different numbers of patients [14, 16]. The zigzag shaped hair or corkscrew hair seems to be a variation of the comma hair, manifesting in black patients [16]. Short broken hairs were observed in the present study in 90.0% (18 out of 20 patients) of tinea capitis cases; this finding is consistent with other studies [13, 14]. Short broken hairs may be a nonspecific trichoscopic finding of tinea capitis but may be a sign of severity of the disease. Black dots were reported in our study in 65.0% (13 out of 20 patients) of tinea capitis cases, as reported by Sandoval et al. [17]. Black dots are remnants of broken hairs or dystrophic hairs [18]. Hughes et al. [16] stated that comma shaped hairs and corkscrew hairs were detected in zoophilic infection. In the present study, T. violaceum, M. canis, and T. verrucosum were isolated; this result may be due to farming and the low socioeconomic status of our patients. To conclude, the most common trichoscopic features were short broken hairs, followed by black dots, comma shaped hairs, and corkscrew hairs; however, comma shaped hairs, zigzag shaped hairs, and corkscrew hairs are the characteristic trichoscopic features of tinea capitis. Black dots and short broken or dystrophic hairs are not specific to tinea capitis, as they can also be observed in alopecia areata and trichotillomania, but they could be used as a sign of severity of tinea capitis.

There are large-scale studies in patients with alopecia areata. Yellow dots, black dots, broken hairs, exclamation mark hairs, and short vellus hairs are considered characteristic trichoscopic features in AA [14, 19]. In the present study, yellow dots were detected in 55% (11 out of 20 patients) of alopecia areata patients; this finding was reported in other studies [20, 21]. Yellow dots are marked by a distinctive array of yellow to yellow-pink, round or polycyclic dots that vary in size and are uniform in color. They are more easily observed using video trichoscopy than with handheld trichoscopy [18]. The combination of large numbers of yellow dots and short growing hairs is a feature of AA incognita [22]. For the diagnosis of alopecia areata, other signs of alopecia areata should be taken into account, because isolated yellow dots may be seen in trichotillomania, hypotrichosis simplex, and even tinea capitis, as stated by Inui [23].

Figure 2: Alopecia areata: (a) macroscopic view; (b) trichoscopic view at 40x magnification showing yellow dots (orange arrow) and an exclamation mark hair (pink arrow).

Table 2: Different trichoscopic features of alopecia areata.
Trichoscopic feature       Present, n (%)    Absent, n (%)
Black dots                 12 (60.0)          8 (40.0)
Yellow dots                11 (55.0)          9 (45.0)
Microexclamation mark      11 (55.0)          9 (45.0)
Short vellus hairs          8 (40.0)         12 (60.0)
Pig tail regrowing hair     3 (15.0)         17 (85.0)
Short broken hairs          8 (40.0)         12 (60.0)
White hairs                 9 (45.0)         11 (55.0)
(N = 20 alopecia areata patients)

In alopecia areata patients, the most common trichoscopic feature was black dots, but this is nonspecific for alopecia areata, as it is found in other conditions such as trichotillomania and tinea capitis. The specific features were yellow dots, exclamation mark hairs, and short vellus hairs. In the current study, under trichoscopic examination, exclamation mark hair was detected in 55.0% (11 out of 20 patients) of alopecia areata cases; this finding was reported in other studies [14, 18, 20]. The term "tapering hair" is preferred over "exclamation mark hair" because the affected hair is not a typical exclamation mark in shape. It occurs due to the narrowing of the hair shaft toward the follicle, which is more readily perceived using trichoscopy than by the naked eye [18]. In the present study, we believe that tapering hairs are a diagnostic feature of alopecia areata, as reported by other authors [12, 14, 19]. We found this feature more sensitive and diagnostic when associated with yellow dots, short vellus hairs, or pig tail growing hairs, and we observed that it was a sign of active alopecia areata, as it was seen in active cases at the periphery of the lesion(s). In our study, under trichoscopic examination, black dots were detected in 60.0% (12 out of 20 patients) of alopecia areata patients; this finding was reported in other studies [19, 20]. Black dots, as remnants of exclamation mark hairs or broken hairs, occur when the hair shaft fractures before emerging from the scalp, and they provide a sensitive marker for disease activity as well as severity of AA [18].
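The discriminating features identified in this Discussion — comma, corkscrew, and zigzag hairs for tinea capitis versus yellow dots, tapering (exclamation mark) hairs, and short vellus hairs for alopecia areata — can be condensed into a toy rule-based screen, with shared findings such as black dots and short broken hairs treated as nonspecific. This is an illustrative sketch only, not a validated diagnostic algorithm:

```python
# Toy rule-based screen built from the feature sets reported in this study.
# Illustrative only -- not a validated diagnostic algorithm.

TINEA_SPECIFIC = {"comma hairs", "corkscrew hairs", "zigzag hairs"}
AA_SPECIFIC = {"yellow dots", "tapering hairs", "short vellus hairs"}

def screen(features):
    """Return a tentative label for a set of trichoscopic findings."""
    f = set(features)
    if f & TINEA_SPECIFIC and not f & AA_SPECIFIC:
        return "suggestive of tinea capitis"
    if f & AA_SPECIFIC and not f & TINEA_SPECIFIC:
        return "suggestive of alopecia areata"
    return "nonspecific: KOH/culture or biopsy required"

print(screen({"black dots", "comma hairs"}))         # suggestive of tinea capitis
print(screen({"yellow dots", "tapering hairs"}))     # suggestive of alopecia areata
print(screen({"black dots", "short broken hairs"}))  # nonspecific: KOH/culture or biopsy required
```

The fall-through branch mirrors the paper's own advice: when only shared features are present, laboratory confirmation is still needed.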
The present study showed that black dots are the most common trichoscopic finding and can be used as a sensitive feature of alopecia areata only if associated with other specific trichoscopic features of alopecia areata as yellow dots, tapering hairs, or short vellus hairs. As in the present study, black dots were detected also in cases of trichotillomania and tinea capitis, and black dots were not detected by Ekiz et al. [14]. In the present study under trichoscopic examination short vellus hair was detected in 40.0% (8 patients out of 20 patients) of alopecia areata cases; this finding was detected in other studies [14, 19, 20]. Short vellus hairs were seen as new, thin, and nonpigmented hairs within the patch, which may or may not be clinically detectable [24]. Our obtained data showed that short vellus hair is also a diagnostic feature of AA, which can provide useful prognostic information (indicating the nondestructive nature of AA) as stated by Inui et al. [18]. They also mentioned that the appearance of clusters of short vellus hairs is a possible sign of spontaneous remission or adequate treatment, but in the present study it was a sign of spontaneous remission as the cases were not treated before the study. Dermatology Research and Practice 5 In the current study pig tail regrowing hair was reported in 15.0% (3 out of 20 patients) of alopecia areata patients; this finding was detected in other studies [19]. We observed that pig tail growing hair is not common, but if present it is a diagnostic trichoscopic finding and is a possible sign of spontaneous remission of alopecia areata. In the present study short broken hairs were detected in 40.0% (8 out of 20 patients) of alopecia areata cases conducted with other authors [18–20]. Inui et al. [18] mentioned that broken hairs were considered as being clinical markers of the disease activity and severity of AA. 
They are nondiagnostic as in our study we detected broken hairs in tinea capitis cases as mentioned by Köse and Güleç [19] and Ekiz et al. [14]. In the present study white hairs were detected in 45.0% (9 out of 20 patients) of alopecia areata cases; we suggested that it is a diagnostic trichoscopic finding and a sign of spontaneous remission of alopecia areata. Some of the trichoscopic features can be used to predict the activity and severity of AA. Tapering hair is considered as a marker of disease activity and known to reflect exacerbation of disease. These trichoscopic findings will be helpful for management of patients with hair disorders. Yellow dots and short vellus hairs enable AA to be screened from other hair loss disorders. Abundant numbers of the yellow dots seen in AA can differentiate it from trichotillomania which can have limited number of yellow dots. In addition, black dots, tapering hairs, and broken hairs are specific for AA, except for trichotillomaniasingle trichoscopic feature which may not reliably diagnose AA [20]. Inui et al. [18] found that a combination of cadaverized hairs, exclamation mark hairs, broken hairs, and yellow dots could sensitively detect difficult-to-clinically diagnose types of AA like alopecia areata incognita, and broken hairs may be found in tinea capitis and trichotillomania. To conclude the most common trichoscopic feature was black dots, followed by yellow dots, exclamation mark, white hairs, short vellus hairs, short broken hairs, and pig tail growing hairs. However, yellow dots, exclamation mark hair, and short vellus hair are specific to alopecia areata; if not detected under trichoscopy, further clinical and histopatho- logical examination will be required. 5. 
Conclusion Trichoscopy has been shown to improve the clinical diag- nostic performance in the daily practice; it can be used to differentiate between tinea capitis by its characteristic findings as comma shaped hairs and zigzag shaped hairs or corkscrew hairs which are not present in alopecia areata. Alopecia areata also has characteristic findings as yellow dots or exclamation mark which are not present in tinea capitis. Trichoscopy can nowadays be seen as the dermatologists’ stethoscope. Conflict of Interests The authors declare that there is no conflict of interests regarding the publication of this paper. References [1] V. Mendiratta and M. Jabeen, “Hair loss and its management in children,” Expert Review of Dermatology, vol. 6, no. 6, pp. 581– 590, 2011. [2] E. Sarifakioglu, A. E. Yilmaz, C. Gorpelioglu, and E. Orun, “Prevalence of scalp disorders and hair loss in children,” Cutis, vol. 90, no. 5, pp. 225–229, 2012. [3] J. H. Phillips, S. L. Smith, and J. S. Storer, “Hair loss, common- congenital and acquiredcauses,” Postgraduate Medicine, vol. 79, no. 5, pp. 207–215, 1986. [4] G. F. Kao, “Tinea capitis,” in Fungal Infection: Diagnosis and Management, M. Richardson, Ed., Medical Mycology, WB Saunders, Philadelphia, Pa, USA, 3rd edition, 1993. [5] J. Shapiro and S. Madani, “Alopecia areata: diagnosis and management,” International Journal of Dermatology, vol. 38, no. 1, pp. 19–24, 1999. [6] R. Paus, E. A. Oslen, and A. G. Messenger, “Hair growth disor- ders,” in Fitzpatrick’s Dermatology in Generalmedicine, K. Wolff, L. A. Goldsmith, and S. I. Katz, Eds., p. 763, McGraw-Hill, New York, NY, USA, 7th edition, 2008. [7] A. L. Mounsey and S. W. Reed, “Diagnosing and treating hair loss,” American Family Physician, vol. 80, no. 4, pp. 356–374, 2009. [8] G. Campos-do-Carmo and M. Ramos-e-Silva, “Dermoscopy: basic concepts,” International Journal of Dermatology, vol. 47, no. 7, pp. 712–719, 2008. [9] M. Miteva and A. 
Tosti, “Hair and scalp dermatoscopy,” Journal of the American Academy of Dermatology, vol. 67, no. 5, pp. 1040–1048, 2012. [10] K. Hillmann and U. Blume-Peytavi, “Diagnosis of hair disor- ders,” Seminars in Cutaneous Medicine and Surgery, vol. 28, no. 1, pp. 33–38, 2009. [11] E. C. Haliasos, M. Kerner, N. Jaimes-Lopez et al., “Dermoscopy for the pediatric dermatologist part I: dermoscopy of pediatric infectious and inflammatory skin lesions and hair disorders,” Pediatric Dermatology, vol. 30, no. 2, pp. 163–171, 2013. [12] A. Lencastre and A. Tosti, “Role of trichoscopy in children's scalp and hair disorders,” Pediatric Dermatology, vol. 30, no. 6, pp. 674–682, 2013. [13] E. T. M. Mapelli, L. Gualandri, A. Cerri, and S. Menni, “Comma hairs in tinea capitis: a useful dermatoscopic sign for diagnosis of tinea capitis,” Pediatric Dermatology, vol. 29, no. 2, pp. 223– 224, 2012. [14] O. Ekiz, B. B. Sen, E. N. Rifaioglu, and I. Balta, “Trichoscopy in paediatric patients with tinea capitis: a useful method to differ- entiate from alopecia areata,” Journal of the European Academy of Dermatology and Venereology, 2013. [15] M. Slowinska, L. Rudnicka, R. A. Schwartz et al., “Comma hairs: a dermatoscopic marker for tinea capitis. A rapid diagnostic method,” Journal of the American Academy of Dermatology, vol. 59, supplement 5, pp. S77–S79, 2008. [16] R. Hughes, C. Chiaverini, P. Bahadoran, and J.-P. Lacour, “Cork- screw hair: a new dermoscopic sign for diagnosis of tinea capitis in black children,” Archives of Dermatology, vol. 147, no. 3, pp. 355–356, 2011. [17] A. B. Sandoval, J. A. Ortiz, J. M. Rodriguez et al., “Dermoscopic pattern in tinea capitis,” Revista Iberoamericana de Micologı́a, vol. 27, no. 3, pp. 151–152, 2010. 6 Dermatology Research and Practice [18] S. Inui, T. Nakajima, F. Shono, and S. Itami, “Dermoscopic find- ings in frontal fibrosing alopecia: report of four cases,” Interna- tional Journal of Dermatology, vol. 47, no. 8, pp. 796–799, 2008. [19] Ö. K. 
Köse and A. T. Güleç, “Clinical evaluation of alopecias using a handheld dermatoscope,” Journal of the American Academy of Dermatology, vol. 67, no. 2, pp. 206–214, 2012. [20] M. Mane, A. K. Nath, and D. M. Thappa, “Utility of dermoscopy in alopecia areata,” Indian Journal of Dermatology, vol. 56, no. 4, pp. 407–411, 2011. [21] E. K. Ross, C. Vincenzi, and A. Tosti, “Videodermoscopy in the evaluation of hair and scalp disorders,” Journal of the American Academy of Dermatology, vol. 55, no. 5, pp. 799–806, 2006. [22] A. Tosti, F. Torres, C. Misciali et al., “Follicular red dots: a novel dermoscopic pattern observed in scalp discoid lupus erythe- matosus,” Archives of Dermatology, vol. 145, no. 12, pp. 1406– 1409, 2009. [23] S. Inui, “Trichoscopy for common hair loss diseases: algorith- mic method for diagnosis,” Journal of Dermatology, vol. 38, no. 1, pp. 71–75, 2011. [24] F. Lacarrubba, V. D'Amico, M. R. Nasca, F. Dinotta, and G. Micali, “Use of dermatoscopy and videodermatoscopy in ther- apeutic follow-up: a review,” International Journal of Dermatol- ogy, vol. 49, no. 8, pp. 866–873, 2010. work_wwz6msv7gzhovcrxkfvwve7xrm ---- wp-p1m-39.ebi.ac.uk Params is empty 404 sys_1000 exception wp-p1m-39.ebi.ac.uk no 218552108 Params is empty 218552108 exception Params is empty 2021/04/06-02:16:31 if (typeof jQuery === "undefined") document.write('[script type="text/javascript" src="/corehtml/pmc/jig/1.14.8/js/jig.min.js"][/script]'.replace(/\[/g,String.fromCharCode(60)).replace(/\]/g,String.fromCharCode(62))); // // // window.name="mainwindow"; .pmc-wm {background:transparent repeat-y top left;background-image:url(/corehtml/pmc/pmcgifs/wm-nobrand.png);background-size: auto, contain} .print-view{display:block} Page not available Reason: The web page address (URL) that you used may be incorrect. Message ID: 218552108 (wp-p1m-39.ebi.ac.uk) Time: 2021/04/06 02:16:31 If you need further help, please send an email to PMC. 
Include the information from the box above in your message. Otherwise, click on one of the following links to continue using PMC: Search the complete PMC archive. Browse the contents of a specific journal in PMC. Find a specific article by its citation (journal, date, volume, first page, author or article title). http://europepmc.org/abstract/MED/ work_wzert2x4p5h3jmdyw4vugmyyze ---- Cilt: 13 Sayı: 69 Mart 2020 & Volume: 13 Issue: 69 March 2020 www.sosyalarastirmalar.com Issn: 1307-9581 Doi Number: http://dx.doi.org/10.17719/jisr.2020.3979 DİJİTAL FOTOĞRAFLAR VE BİLGİSAYAR ARACILIĞIYLA ÜRETİLEN GÖRÜNTÜLER BİRBİRİNDEN NE KADAR FARKLI? HOW DIFFERENT ARE DIGITAL PHOTOGRAPHS FROM COMPUTED-GENERATED IMAGES? İsmail Erim GÜLAÇTI Öz Bilgisayar aracılığıyla üretilen görüntüler görsel iletişimin diğer alanlarında giderek artan oranlarda kullanılsa da fotoğrafların dijital olarak düzenlenmesine dair hem basın hem de reklamcılık sektöründe katı ilkeler bulunmaktadır ve bu ilkelere uyulmaması ciddi sonuçlar doğurmaktadır. Bu durum ise hem fotoğraf hem de bilgisayar aracığıyla üretilen görüntülere ilişkin algımızı ve bu iki tür görselin ifade ettiği anlamı sorgulamamızı gerektirmektedir. Bu çalışma profesyonel fotoğrafçıların ve fotoğraf düzenleme ile profesyonel düzeyde ilgilenen kişilerin bir ekranda gördükleri fotoğrafları bilgisayar aracığıyla üretilen görüntülerden ayırabilme becerileri hakkında deneysel bir kurguya dayalı veriler sunmaktadır. Sonuçlar net bir şekilde katılımcıların bu iki tür görseli birbirinden istatistiksel olarak anlamlı bir şekilde ayıramadığını göstermektedir. Bu da bu ayrımı yapmanın katılımcılar kadar yetkin ve uzman olmayan kişiler için giderek daha da zorlaşacağını göstermektedir. Ancak araştırma ilginç bir şekilde katılımcıların günümüz dijital fotoğrafçılığı ve dijital görüntü üretimi ile uyumlu olmayan oldukça geleneksel denebilecek bir fotoğraf anlayışını paylaştıklarını da ortaya koymuştur. 
Bu sonuçlara dayanarak dijital fotoğrafçılıkta bilgisayar kullanımını daha iyi anlamaya yönelik ve dijital fotoğrafın kullanımına belli sektörlerde dayanak oluşturacak bir görsel okuryazarlığa ihtiyaç olduğu ifade edilebilir çünkü fotoğrafları ekranlarda, dergilerde, sergilerde ya da gazetelerde görenlerin aslında bilgisayar aracığıyla üretilen görüntülere bakıyor olmaları giderek yaygınlaşan ve artan bir duruma dönüşmektedir. Anahtar Kelimeler: Fotoğraf, Dijital Fotoğraf, Göstergebilim, Habercilik, Görsel Sanatlar. Abstract Although computer-generated images are increasingly used in other areas of visual communication, there are strict principles for digital editing of photos in both the press and advertising sectors and not adhering to these principles causes serious consequences. This requires us to question our perception of both images and computer-generated images as well as the meaning of these two types of images. This study provides experiment-based data on the ability of professional photographers and those professionals interested in photo editing to distinguish photographs from computer-generated images that they see on a screen. The results clearly show that the participants cannot distinguish between these two types of images in a statistically significant manner. This shows that making this distinction will become increasingly difficult for the non-specialist as well as the participants, who were all competent individuals in their field. Interestingly, however, the study found out that the participants shared a rather traditional understanding of photography that was not compatible with today's digital photography and digital image production. 
Based on these results, it can be stated that there is need for a new type of visual literacy in order to better understand the use of computers in digital photography and to provide a basis for the use of digital photography in certain sectors because one could actually be looking at computer-generated images when they think that they are looking at photographs on screens, magazines, exhibitions or newspapers. Keywords: Photography, Digital Photography, Semiotics, Journalism, Visual Arts.  Dr. Öğr. Üyesi, Yıldız Teknik Üniversitesi Sanat ve Tasarım Fakültesi Uluslararası Sosyal Araştırmalar Dergisi / The Journal of International Social Research Cilt: 13 Sayı: 69 Mart 2020 & Volume: 13 Issue: 69 March 2020 - 571 - 1. GİRİŞ Foto muhabirleri başta olmak üzere fotoğrafçılar çekim, düzenleme ve yayınlama konularında bazı katı ilkelere uymak zorundadır. Özellikle 21. Yüzyılın başlarından itibaren dijital fotoğrafın hızla gelişmesiyle birlikte fotoğrafların bilgisayar ortamında düzenlenmesine ilişkin etik tartışmalar da yeniden ortaya çıkmıştır. Bu tür tartışmalar belgesel, özel ya da haber amaçlı fotoğraf çekenlerin neredeyse fotoğrafın tarihi boyunca birçok kez dahil oldukları tartışmalardır. Hatta son yıllarda birçok fotoğrafçı çektikleri fotoğraflara ekleme yaptıkları ya da bu fotoğraflardan bazı ‘içeriği’ çıkardıkları için işlerinden olmuştur. Pulitzer ödüllü Narciso Contreras’ın 2013 yılında Suriye’deki savaşı haberleştirirken çektiği bir fotoğrafın sol alt köşesinde görünen kamerasını Photoshop ile sildiğinin ortaya çıkması sonucunda AP (Associated Press) tarafından kovulması buna bir örnek olarak gösterilebilir. Haber ve habercilik güven ile ilgilidir. Fotoğrafçılardan ajanslara, gazetelere ve internet sitelerine ve oradan da okuyuculara uzanan bu güven zinciri kırılırsa her bir fotoğraf şüpheyle karşılanacaktır ve bu da izin verilmemesi gereken bir durumdur. 
Fotoğrafların dijital olarak değiştirilmesine ilişkin sorunlar 1990’ların başından itibaren ortaya çıkmaya başlamıştır ve hala güncelliğini korumaktadır (Huang, 2001; Lowrey, 2003; Mäenpää, 2014; Reaves, 1992, 1995; Schwartz, 1992). Fotoğraf düzenlemenin ahlaki temelleri (Solaroli, 2015), haber fotoğraflarının nesnelliği (Carlson, 2009) ve deneysel düzenleme uygulamaları (Gürsel, 2016) özellikle tartışılan konuların başındadır. Dijital fotoğrafçılık uygulamalarının kontrolünün ve bunların kullanımıyla ortaya çıkan sonuçların doğrulanmasının geçmişe göre çok daha zor olduğu böyle bir bağlamda, tüm bu tartışmalar dijitalleş(tir)menin fotoğrafa olan etkisini ortaya açıkça ortaya koymaktadır. Fotoğrafların düzenlenmesine ilişkin, özellikle haber fotoğrafçılığı yapan fotoğrafçıların takip ettiği, katı ilkeler olsa da bilgisayar aracılığıyla üretilen görüntüler görsel iletişimin ağır bastığı reklamcılık, sinema, internet ve TV dizileri gibi alanlarda giderek artan oranda kullanılmaktadır. Fotoğraf makinesi ile kaydedilmemiş bu tür görüntüler oluşturulurken bazı özel yazılımlar ve programlar kullanılmaktadır. Sonuçta oluşturulan görüntüler yeterince gerçekçiyse fotoğraf makinesi ile çekilmiş fotoğraflara çok benzeyen bir yapıları olur. Fotoğrafların dijital olarak düzenlenmesinden farklı olarak bu tür görüntülerde kullanılan yazılım ve programlar gerçek fotoğraflardan neredeyse ayırt edilemeyecek görüntüler ortaya çıkarır. Dijital fotoğrafın doğasına ve kullanımına dair yapılan tartışmalar bilgisayar aracılığıyla üretilen görüntülerdeki bu türden son gelişmelerle birlikte düşünüldüğünde fotoğraf kavramına ilişkin temel bir ikileme işaret etmektedir. Diğer bir deyişle fotoğraf diye baktığımız görseller aslında üretildikleri teknoloji ve yazılımlar dışında hiçbir dış dayanak noktası ya da göstergesi olmayan görüntüler olma yolundadır. Geleneksel fotoğraf anlayışına göre fotoğraflarda bir nedensellik olgusu bulunmalıdır. 
Fotoğraf çekildiği zaman fotoğraf makinesinin dışındaki bir varlık hem fotoğrafın kaydedildiği yüzey hem de fotoğraf bastırıldıktan sonra ortaya çıkan görüntü üzerinde etki gösterir. Dolayısıyla bu anlayışa göre bir görüntüyü fotoğraf ya da bilgisayar aracılığıyla üretilen görüntü olması bakımından değerlendirirken göz önüne alınması gereken sorular elimizdeki görselin bir tiyatro sahnesi canlandırır gibi oluşturulup oluşturulmadığı ya da görüntünün içeriğine müdahale edilip edilmediğidir. Ancak dijital fotoğrafçılıktaki son gelişmeler bu yaklaşımın yeniden değerlendirilmesini zorunlu hale getirmiştir. Birçok açıdan dijitalleşmenin fotoğraftan önceki gerçeklik ile sonrasında oluşan görsel arasındaki nedensellik ilişkisini zayıflattığı ileri sürülmektedir. Dijital bir görüntü, gazete ya da dergilerde yayınlandığında fotoğrafa çok benzese de aslında fotoğraf resimden nasıl farklıysa geleneksel fotoğraf da dijital bir görüntüden öyle farklıdır. Ritchin’e göre (1991) dijital fotoğrafın manipülasyonların yolunu açması fotoğrafın olaylara tanıklık etme rolünü de tartışmaya açmıştır çünkü dijital fotoğraf artık neredeyse düzenlemede kullanılan yazılımlarla eş anlamlı hale gelmiştir. Böylece 1 ve 0’lardan ibaret hale gelen fotoğraf dönüştüğü bu dijital bilgi formuyla kolaylıkla şekillendirilebilir, programlanabilir hale gelmiştir ve sonuçta da bir gösterge olma özelliğini yitirmiştir. Gelinen bu noktada analog ve dijital fotoğraf artık birbirinden farklı iki olguya dönüşmüş ve dijital fotoğraf bilgisayar aracılığıyla üretilen görüntülerle eş anlamlı hale gelmiştir. 2. YÖNTEM Bu çalışma bu paradoksa yukarıda değinilen tartışmalar bakımından da önemli bir açıdan yaklaşmaktadır: ‘Meslekleri açısından geleneksel fotoğrafçılık anlayışı özellikle önemli olan profesyonel fotoğrafçılar, dijital fotoğrafları bilgisayar aracılığıyla üretilen ve bir bilgisayar ekranında gördükleri görüntülerden ayırabiliyor mu?’ sorusu araştırmanın temelini oluşturmaktadır. Sorunun anlamı açıktır. 
Eğer bilgisayar aracılığıyla oluşturulan görüntüleri üretmekte kullanılan teknikler, profesyonel fotoğrafçıların bu görüntüleri fotoğraf makinesi ile çekilen fotoğraflardan ayırt edemeyeceği kadar gelişmişse o zaman dijital fotoğrafa dair anlayışımızın yeniden sorgulanması ve değerlendirilmesi gerekmektedir. Günümüzde birçok profesyonel fotoğrafçı düzenlenmiş dijital fotoğraflar Uluslararası Sosyal Araştırmalar Dergisi / The Journal of International Social Research Cilt: 13 Sayı: 69 Mart 2020 & Volume: 13 Issue: 69 March 2020 - 572 - ile geleneksel anlayışa göre çekilmiş fotoğraflar arasındaki farkı görebilmeye büyük önem vermektedir (Mäenpää ve Seppänen, 2010) ve bu nedenle de tamamen bilgisayar aracılığıyla oluşturulan görüntüler ile fotoğraf makinesi ile çekilen fotoğrafları birbirinden ayırmak çok daha önemli bir konu haline gelmiştir. Dolayısıyla bu çalışmanın amacı, dijital fotoğraflar ve bilgisayar aracılığıyla oluşturulan görüntüler arasındaki farkın sıradan insanlar kadar profesyonel fotoğrafçıların da bu iki tür görseli birbirinden ayırt etmelerini zorlaştıracak kadar kaybolup kaybolmadığını bulmaktır. Bu amaçla 20 profesyonel fotoğrafçıya bilgisayar ekranında 37 farklı görsel gösterilmiş ve ‘Bu görsel fotoğraf mı yoksa bilgisayar aracılığıyla üretilen bir görüntü müdür?’ sorusu yöneltilmiştir. Alanlarındaki uzmanlıkları ve deneyimleri nedeniyle iki tür görseli birbirinden ayırt etmekte özellikle yetkin olan bu kişilerin çoğu habercilik, reklamcılık ve basın alanlarında çalışmaktadır. Söz konusu seçimi daha zorlayıcı bir hale getirmek için ayırt etmesi daha zor olabilecek görseller seçilmiştir. Bilgisayar aracılığıyla üretilmiş görüntülerden seçilenler fotoğraf makinesi ile çekilen geleneksel fotoğraflara çok benzediği gibi fotoğraf makinesi ile çekilen geleneksel fotoğraflardan da deneyimsiz birinin bilgisayar aracılığıyla üretilen bir görüntü olarak kabul edeceği olanlar bulunmaktadır. 
Her bir karardan sonra katılımcılardan seçimlerini yazılı olarak gerekçelendirmeleri ve daha sonra da kısa bir görüşmede bazı soruları cevaplamaları istenmiştir. Ortaya çıkan bu veri iki önemli boyutta analiz edilmiştir: 1. Katılımcılar fotoğraflar ve bilgisayar aracılığıyla üretilen görüntüleri birbirinden doğru bir şekilde ayırabilmekte midir? 2. Bu ayrımı yaparken fotoğrafa ilişkin bilgi birikimlerini ve deneyimlerini nasıl kullanmışlardır? Bu bağlamda çalışmanın geri kalanında öncelikle konuyla ilgili güncel tartışmalar ele alınacak, görsel ve katılımcı tercihlerinin gerekçeleri ve deneysel yapıdaki yöntem hakkında bilgi verilecektir. Katılımcıların kararlarının sonuçları analiz edilerek fotoğraf ve bilgisayar aracılığıyla üretilen görüntüler arasında yaptıkları seçimlere ilişkin açıklamaları tartışılacaktır. Son olarak çalışmanın ortaya çıkardığı sonuçların ışığında fotoğraf ve benzeri görsellere bakışımız üzerindeki etkileri ele alınacaktır. 3. LİTERATÜR VE İLGİLİ ARAŞTIRMALAR Nesnellik ve güvenirlik haber fotoğrafçılığında özellikle önemlidir. Serbest çalışan fotoğrafçıların sayısının ve hazır fotoğraf sağlayıcılarından görsel sağlama oranının artması (Thomson, 2016; Frosh, 2013; Machin, 2004) ve vatandaş fotoğrafçılığının yaygınlaşması fotoğrafçılığın hassas bir meslek niteliği kazanmasına neden olmuştur. Özellikle haber fotoğrafçıları gündelik işlerinde birçok riskle karşılaşmakta ve fiziksel yaralanma tehlikesiyle yüz yüze gelmektedir. Çatışma bölgelerinden gelenler başta olanlar olmak üzere (Solaroli, 2015) fotoğraflar yayınlanacak kadar dramatik ve etkileyici olmalıdır. Fotoğraf estetiğinde zamanla beceri kazanan profesyonel fotoğrafçılar aynı zamanda özel bir ‘görme’ becerisi de edinirler ve bu beceri onları vatandaş fotoğrafçılar olarak adlandırılabilecek fotoğrafçılardan ayıran bir niteliktir. 
Mesleklerinin gereklerini yerine getirirken ve en doğru görüntüyü ararken bazı fotoğrafçılar yayınlanmaları için göndermeden önce fotoğrafların bazı yerlerini kırpar veya bazı fotoğraflarda da değişiklik (ya da manipülasyon) yaparlar. Bu şekilde oluşturulmuş sahte fotoğrafların yayınlanması fotoğrafçıların işlerinden olmasının en başta gelen nedenidir (Carlson, 2009). Bu durumu daha da karmaşıklaştıran başka bir etken de dijital fotoğrafların daha yaygın ve kolayca kullanıldığı günümüzde neyin sahte neyin gerçek olduğunu belirleyen kuralların her zaman açık ve net olmamasıdır. Öte yandan hem haber fotoğrafçıları hem de çeşitli fotoğrafçılık meslek örgütleri tarafından genel kabul görmüş bir kural karanlık oda prensibidir (Mäenpää ve Seppänen, 2010). Bu kural temelde film kullanılan dönemde bir karanlık odada gerçekleştirilebilecek o dönemin yaygın işleme ve düzenlenmelerinin dijital fotoğraf düzenleme yazılımları aracılığıyla da yapılmasının kabul edilebilir olmasına dayanmaktadır. Adından da anlaşılabileceği gibi böyle bir yaklaşım geleneksel fotoğraf anlayışına dayanmaktadır ve fotoğrafçıların işe alınmasından fotoğrafların yayınlanmasına ve saygın ödüllerin kazanılmasına kadar birçok boyutu etkilemektedir. 1990’ların başından itibaren birçok farklı akıma mensup medya kuramcısı dijital fotoğrafların manipülasyonunun kolay olmasının yanı sıra analog fotoğraflardan çok büyük farklılıklar gösterdiğini hatta temelinde tüm dijital fotoğrafların bilgisayar aracılığıyla üretilen görüntüler olduğunu ileri sürmüştür. Bu iddianın temelinde dijital bir fotoğraf makinesi ile kaydedilen bir görüntünün görülebilir bir fotoğraf haline gelmesi için yazılımlar ile işlenmesi gerekliliği yatmaktadır. Eğer aynı görüntü ışığa duyarlı bir filme kaydedilirse işlenmesi de bir tür ‘çeviri’ olarak düşünülebilir. Fotoğraf çekiminde kullanılan teknolojiye malzeme temelli bu yaklaşım ise bir bakıma fotoğrafın kanıtlayıcı bir belge olarak oynadığı rolü tamamladığı olarak yorumlanabilir. 
Böylesi bir durum analog ve dijital fotoğraf arasında temel bir ayrımı göstermektedir. Bu ayrıma göre dijital fotoğrafın oluşmasını sağlayan algoritmaların gösterdikleri nesnelerle aralarında bir nedensellik ilişkisi yoktur veya göstergebilim Uluslararası Sosyal Araştırmalar Dergisi / The Journal of International Social Research Cilt: 13 Sayı: 69 Mart 2020 & Volume: 13 Issue: 69 March 2020 - 573 - açısından ifade etmek gerekirse bu algoritmaların herhangi bir belirtisel özelliği de yoktur. Dijital fotoğraf makinesinin görüntü sensöründe oluşan kayıt, aslında insan zihninin görsel açıdan fotoğraf olarak algılayacağı bir çıktıdan öte bir şey değildir. Bu düşünüşe göre dijital fotoğraf temelde bilgisayara aracılığıyla üretilen bir görüntüyle eş anlamlıdır. Fotoğraf makinesinin dışındaki fiziki bir nesneyle olan bağ kopmuş ve fotoğraf aslen bilgisayar ve yazılımlar aracılığıyla oluşturulur hale gelmiştir. Daha sonra da tartışılacağı üzere bu çalışma bu tür görsellere alternatif bir bakış açısı ortaya koymaktadır ve dijital fotoğraftaki bilgisayar öğesinin yeniden düşünülüp yorumlanması amacını taşımaktadır. İnsanların fotoğraflara ya da bilgisayar aracılığıyla üretilen görüntülere ilişkin yargıları, daha önce basit bir aydınlatma altındaki görece basit nesneleri içeren deneysel çalışmalara konu olmuştur (McNamara, 2005). Daha yakın bir zamanda Shaojing Fan (2014) da bir çalışmasında katılımcılardan fotoğrafı çekilmiş ve bilgisayar aracılığıyla üretilmiş yüzleri karşılaştırmasını ve ayırt etmesini istemiştir. Bu çalışmanın sonuçlarına göre bu iki tür görseli birbirinden ayırt etmek beklendiği kadar kolay olmamış hatta katılımcılar çoğu zaman ciddi şekilde zorlanmışlardır. Fotoğraf gerçekliğindeki görüntüler günümüzde reklamcılıkta, kataloglar ve broşürlerde, propaganda amaçlı haberlerde yaygın bir şekilde kullanılmaktadır. 
Hatta bilgisayar teknolojileri alanında araştırma yapan bazı bilim insanları fotoğrafların bilgisayar aracılığıyla üretilen görüntülerden nasıl ayırt edileceği konusunda endişelerini dile getirmektedir. Lyu ve Farid (2005) bu durum ile ilgili olarak fotoğrafın tekrar olayların kanıtı olarak kabul edilmesi sağlanacaksa bunun ancak fotoğraf ve fotoğraf gerçekliğinde görüntüleri birbirinden ayırt edebilecek bir teknoloji ile mümkün olabileceğini ileri sürmektedir. Ancak içinde bulunduğumuz dönem itibariyle bunu yapabilecek mükemmellikte bir çözüm bulunmamaktadır. Bir grup insanın fotoğraf gerçekliğinde görüntüler üretmeye çalışırken bir diğer grubun da bu görüntüleri tespit etmeye yönelik çalışmalar yaptığı bir bağlamda böyle bir çözümün bulunması da kolay olmayacaktır. Bu araştırmaya konu olan fotoğraflar ve bilgisayar aracılığıyla üretilen görüntüler arasında profesyonel fotoğrafçılar ve fotoğraf ile profesyonel olarak ilgilenenler tarafından yapılan sistematik bir karşılaştırma bilindiği kadarıyla daha önce çalışılmış değildir. Bu gruptaki kişilerin bu iki tür görsel arasında ayrım yapabilmesinin gerekliliği foto muhabirliği başta olmak üzere kendi meslekleri bakımında da özellikle önemlidir. 4. VERİ TOPLAMA SÜRECİ Profesyonel fotoğrafçıların bir ekrandaki görsellerin fotoğraf mı yoksa bilgisayar aracılığıyla üretilen görüntü mü olduğunu belirlemek için 8 erkek 12 kadın toplam 20 profesyonel fotoğrafçı katılımcı olarak seçilmiştir. Tüm fotoğrafçılar Adobe Photoshop ve Lightroom gibi fotoğraf düzenleme yazılımlarını kullanmaktadır. Hepsi Türkiye’de çalışan bu fotoğrafçılar esas olarak foto muhabirliği alanında çalışmakla beraber reklamcılık, moda ve ürün fotoğrafçılığı gibi alanlarda da deneyimleri bulunmaktadır. Araştırmada katılımcılara her seferinde 1 tane olmak üzere ultra yüksek çözünürlüklü 35 inç’lik LED bir monitörde 1’den 37’ye kadar numaralandırılmış toplam 37 görsel gösterilmiştir. Katılımcılar monitöre olan mesafelerini kendileri ayarlayabilmiştir. 
Ofis aydınlatması (350-550 lümen, 4000 K) koşullarına sahip bir salonda yapılan araştırmada katılımcıların günlük çalışma ortamları canlandırılmaya çalışılarak iş ortamında ve bilgisayar başında çalıştıkları koşulların yaratılması amaçlanmıştır. Katılımcılar her bir görseli sıra ile istedikleri kadar inceleyebilmiştir. Katılımcılar görsellere bakarken baktıkları görselin bir fotoğraf mı yoksa bilgisayar aracılığıyla üretilen bir görüntü mü olduğu sorulmuştur. Sorulan soru ‘Gördüğünüz görsel bir fotoğraf mı yoksa bilgisayar aracılığıyla üretilmiş bir görüntü mü?’ şeklindedir. Katılımcılardan ayrıca kararlarını kendilerine sağlanan bir dizüstü bilgisayarda işaretlemeleri ve cevaplarının nedenini ya da nedenlerini yazılı olarak açıklamaları istenmiştir. Katılımcılar bundan sonra sıradaki diğer görsele geçmişler ve süreç tüm görseller bu şekilde incelenip cevaplar yazılı olarak açıklanana kadar devam etmiştir. Tüm bu süreç tamamlandıktan sonra katılımcılarla görüşmeler yapılmış ve bu ayırt etme süreci ile ilgili yarı-yapılandırılmış sorular sorulmuştur. Sorular fotoğraf ve bilgisayar aracılığıyla üretilmiş görüntüleri birbirinden ayırt edebilmenin önemi, potansiyel kullanım imkanları, bu ayırt edebilme sürecinde kullandıkları ölçütler ve kendilerinin fotoğraf ve bilgisayar aracılığıyla üretilmiş görüntü tanımlarına odaklanmıştır. Kaydedilen bu görüşmeler daha sonra metne dökülerek analiz edilmiştir. Araştırmada katılımcılara gösterilen görseller arama motorları kullanılarak internetteki hazır görsel sağlayıcılardan manzara ve doğa resimleri kategorilerine odaklanılarak sağlanmıştır. Bilgisayar aracılığıyla üretilen görüntülerin seçilmesindeki en önemli ölçüt fotoğraf makinesiyle çekilen fotoğraflara benzemeleri ve belirlenen kategorilere uygun olmalarıdır. Fotoğrafların da aynı şekilde manzara ve doğa resimleri olması sağlanmıştır. 
Fotoğraflar için manzara resmi türünün seçilmesindeki neden fotoğraf düzenleme ve işleme yazılımlarının bu tür görüntüleri oluşturmada son derece etkin olması ve bu türden görüntülerin yaygın bir Uluslararası Sosyal Araştırmalar Dergisi / The Journal of International Social Research Cilt: 13 Sayı: 69 Mart 2020 & Volume: 13 Issue: 69 March 2020 - 574 - şekilde bulunabilmesidir. Fotoğrafların sağlandığı siteler http://planetside.co.uk/galleries/terragen- gallery, https://www.flicker.com, http://www.deviantart.com, ve http://gettyimages.com olarak seçilmiştir. Görseller toplandıktan sonra araştırmaya katılmayan başka bir fotoğrafçı ile tüm görsellerin üzerinden tek tek geçilerek fotoğraftan kolayca ayırt edilemeyecek bilgisayar aracılığıyla üretilmiş görüntüleri seçilmiştir. Araştırmanın kurgusunu sınamak için araştırmacının çalıştığı fakültedeki öğrencilerden 30 kişiyi içeren bir pilot uygulama yapılmıştır. Bu pilot uygulamanın sonuçları tasarlanan deneysel kurgunun anlamlı istatistiksel değerler içinde geçerli ve güvenilir olduğunu göstermiştir. Bu durum üzerine araştırmaya katılacak katılımcılara ulaşmak için harekete geçilmiştir. Bu kapsamda ulusal yayın yapan gazetelerdeki fotoğrafçılar ile iletişime geçilmiş, fotoğrafçılık dernekleri ziyaret edilmiş ve fotoğrafçıların ilgi gösterdikleri forum ve internet sitelerine araştırma ile ilgili duyurular bırakılmıştır. 5. BULGULAR 5.1. Fotoğrafları Bilgisayar Aracılığıyla Üretilen Görüntülerden Ayırt Etme Bu aşamada toplam 725 yanıt verilmiş ve bunlardan 371 tanesi fotoğraflar, 354 tanesi de bilgisayar aracılığıyla üretilen görüntüler lehine olmuştur. 20 katılımcının 37 görsele verebileceği toplam 740 olası yanıttan 15 tanesi bir katılımcı ayırt etme işi için kendisine tanıdığı sürede tamamlayamadığı için verilememiştir. İki katılımcı da birer soruyu farkında olmaksızın atlamıştır. Katılımcıların hiçbirisi tüm görselleri başarılı bir şekilde sınıflandıramamıştır. 
The participant with the best score gave 10 wrong answers out of 37, while the participant with the worst score gave 23 wrong answers, meaning that accuracy ranged from 37.84% to 72.97%. Across the whole experiment the mean accuracy was 55.49%; for photographs and computer-generated images separately it was 55.85% and 55.02%, respectively. To assess the statistical significance of these findings, a regression analysis suitable for frequency data was performed. The results (chi-square = 0.0384, df = 1, p = 0.84) show that image type, photograph versus computer-generated image, was not a predictor of participants' classification performance.

Table 1: Results for the photographs (sorted by odds level; statistically significant rows, shown in bold in the original, are marked with *)

IMAGE  TOTAL ANSWERS  CORRECT ANSWERS  ODDS LEVEL  z VALUE  p VALUE
10     20             3                0.05        -3.63    p<0.001 *
7      20             4                0.07        -3.41    p<0.001 *
4      18             5                0.10        -2.96    p<0.001 *
17     20             7                0.14        -2.65    p=0.01  *
30     19             8                0.19        -2.25    p=0.02  *
5      20             9                0.22        -2.11    p=0.03  *
18     20             9                0.22        -2.11    p=0.03  *
12     20             10               0.27        -1.84    p=0.07
32     20             11               0.33        -1.56    p=0.12
8      19             11               0.37        -1.37    p=0.17
31     20             12               0.40        -1.26    p=0.21
15     19             12               0.46        -1.06    p=0.29
20     20             13               0.50        -0.96    p=0.34
3      20             14               0.62        -0.64    p=0.52
26     20             14               0.62        -0.64    p=0.52
9      19             14               0.75        -0.38    p=0.70
34     19             15               (odds, z, and p missing in the source)
11     20             16               1.07        0.08     p=0.94
24     19             16               1.42        0.42     p=0.68
21     20             17               1.51        0.49     p=0.62

Put differently, participants could not draw a statistically significant distinction between photographs and computer-generated images, and performance varied markedly across the images (chi-square = 105.00, df = 35, p < 0.001): some images were easy to classify and others difficult. For this reason the images' odds levels were compared. Table 1 is sorted by odds level, with the statistically significant values marked.
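The per-image z and p values of the kind shown in Table 1 can be approximated with a simple two-sided binomial test against chance (50%). This is a hedged sketch: the article reports an odds-based regression, so its exact figures differ slightly, but the sketch reproduces the pattern that images with very few correct answers receive large negative z values and small p values.

```python
import math

def binomial_z_test(correct: int, total: int):
    """Two-sided z-test of `correct` successes out of `total`
    answers against chance performance (p0 = 0.5)."""
    p0 = 0.5
    z = (correct - total * p0) / math.sqrt(total * p0 * (1 - p0))
    # Two-sided p-value from the standard normal CDF (via erf).
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p

# Image 10 in Table 1: 3 correct answers out of 20.
z, p = binomial_z_test(3, 20)
print(f"z = {z:.2f}, p = {p:.4f}")  # z = -3.13, p < 0.01
```

Note that this simple test yields z = -3.13 for image 10, close to but not identical with the table's -3.63, which is consistent with the article using a different (regression-based) model.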
The smaller the numbers in the odds-level column of this table, the more often the photograph was mistaken for a computer-generated image; as the numbers grow, the photograph was correctly identified as a photograph. The ranking shows that 7 photographs (numbers 4, 5, 7, 10, 17, 18, and 30) were frequently mistaken for computer-generated images, which shows that participants not only failed to identify these images correctly but actively confused the two kinds of image with one another. Table 2 shows the results for the computer-generated images.

Table 2: Results for the computer-generated images (sorted by odds level; the statistically significant row, shown in bold in the original, is marked with *)

IMAGE  TOTAL ANSWERS  CORRECT ANSWERS  ODDS LEVEL  z VALUE  p VALUE
2      19             6                0.56        -0.86    p=0.39
23     19             5                0.56        -0.86    p=0.39
25     20             8                0.81        -0.32    p=0.75
13     20             9                1.00        0.00     p=1.00
35     20             9                1.00        0.00     p=1.00
33     19             9                1.10        0.15     p=0.88
1      20             9                1.10        0.15     p=0.88
27     20             10               1.22        0.32     p=0.75
19     20             10               1.22        0.32     p=0.75
22     19             10               1.36        0.48     p=0.63
37     20             11               1.49        0.63     p=0.53
36     20             12               1.83        0.95     p=0.34
29     19             13               2.65        1.46     p=0.14
6      20             14               2.85        1.58     p=0.11
14     19             14               3.42        1.79     p=0.07
16     20             15               3.67        1.90     p=0.06
28     20             18               11          2.75     p=0.01  *

Table 2 is likewise sorted by odds level, with the statistically significant value (image 28) marked. Small numbers in the odds-level column indicate that a computer-generated image was frequently identified as a photograph, while a large number indicates that a computer-generated image was correctly identified as computer-generated. The ranking shows that only one image was correctly identified as a computer-generated image at a statistically significant level.
This reinforces the results in Table 1 above and confirms that, for the remaining 27 images, participants could not correctly distinguish between photographs and computer-generated images. The results set out in these two tables clearly show that competent people who deal with photography or photo editing professionally on a daily basis cannot correctly separate camera-taken photographs from computer-generated images when viewing them on a computer screen. Because the study's participants were selected precisely as people whose professions impose strict criteria for the use of images, and who should therefore be best able to separate photographs from other kinds of image, this result is expected to replicate in a broader sample. Since the classification examined here was not performed with people who specialize in creating photograph-like computer-generated imagery, it is not clear whether such specialists would achieve a better result; they would, however, presumably possess more effective and appropriate criteria for separating photographs from computer-generated images. Artificial intelligence and machine learning applications, which have now reached a certain maturity and whose uses in different fields are spreading, suggest that before long computer-generated images of human faces will also be indistinguishable from photographs, just as in the classification of landscape photographs above. All of these developments will ultimately have effects that radically change our understanding of increasingly digital photography, and especially of the fields where photography is naturally and widely used. 5.2.
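The reported chi-square of 0.0384 (df = 1, p = 0.84) for image type can be illustrated with a Pearson chi-square test of independence on a 2x2 table of correct/incorrect answers by image type. The counts below are approximations reconstructed from the reported figures (roughly 392 answers on photographs at about 55.85% accuracy and about 333 on computer-generated images at about 55.02%, out of 725 answers), so the statistic lands near, not exactly at, the article's value; either way it stays far below the 3.84 critical value, i.e. image type does not predict accuracy.

```python
def chi_square_2x2(table):
    """Pearson chi-square statistic for a 2x2 contingency table
    given as [[a, b], [c, d]] (rows = image type, cols = correct/incorrect)."""
    (a, b), (c, d) = table
    n = a + b + c + d
    row_totals = [a + b, c + d]
    col_totals = [a + c, b + d]
    chi2 = 0.0
    for i, obs_row in enumerate(table):
        for j, obs in enumerate(obs_row):
            expected = row_totals[i] * col_totals[j] / n
            chi2 += (obs - expected) ** 2 / expected
    return chi2

# Approximate counts reconstructed from the reported accuracy rates:
table = [[219, 173],   # photographs: correct, incorrect
         [183, 150]]   # computer-generated images: correct, incorrect
chi2 = chi_square_2x2(table)
print(f"chi-square = {chi2:.3f}")  # ~0.06, well below 3.84 (df = 1, alpha = 0.05)
```

With these approximate counts the statistic comes out near 0.06, the same order as the article's 0.0384 and equally non-significant.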
Explanations of the Differences Between Photographs and Computer-Generated Images
Beyond their choice of "photograph" or "computer-generated image" for each image they examined, participants were asked to explain their decisions, in order to better understand how they conceive of photographs and computer-generated images. In a semi-structured interview they were asked how they define a photograph and a computer-generated image; they were also asked how they thought other participants would perform in separating the two kinds of image, and whether the distinction matters. Such a distinction is important for those working in photography, and especially in photojournalism. The truth and reality of photographs in the traditional sense of photography, and their connection to a fact or entity outside the camera, was a dominant theme repeated by several participants. Because the participants essentially work with photographs and have little experience producing images with photographic realism, they generally explained the differences between the images by comparing their knowledge of photography with what they saw on screen. Their conception of the photograph was therefore a constant point of departure for the choices they made and the explanations they gave.
The reasons participants mentioned when deciding whether a shown image was a photograph can be divided broadly into four groups, resting on the following judgments:
1) what photographs look like (in other words, their iconic qualities);
2) how photographic equipment shapes the look of a photograph (the indexical qualities of the equipment);
3) what kinds of landscape could actually exist and could therefore be photographed (the indexical qualities of the landscapes);
4) what kind of concept photographs represent, especially compared with other kinds of image (the symbolic dimension of photographs).
As the terms above suggest, all of these reasons rest on a semiotic foundation, and taken together they point to a basic understanding of how the distinction between photographs and computer-generated images is approached. Although participants differed in their assessments of which images were photographs and which were computer-generated, their verbal and written accounts of what kind of image a photograph is were remarkably similar and consistent; it can therefore safely be argued that their understandings of photography converge. All participants except two stated that the distinction between photographs and computer-generated images is important, both for photography in general and in the more specific context of photojournalism. For illustration or advertising, by contrast, most did not consider the same distinction important, reasoning that the main purpose of images used for those ends is not to reveal reality. Some of the participants' views on this topic follow:
1- "It is important that those who deal with photography professionally can tell these apart, so that computer-generated images are not presented in the wrong contexts." (Participant 8)
2- "Yes, it is absolutely important. Especially now, since there have been many cases in which manipulated images were used for news purposes. Being able to tell these two kinds of image apart is extremely important." (Participant 16)
Because this distinction was seen as especially important, the section in which participants verbally explained the grounds for their decisions was re-examined to identify the criteria they used to determine what exactly photographs look like. The reasons expressed when deciding whether a shown image was a photograph or a computer-generated image concerned the qualities of the image. For photographs, the amount of detail and the randomness of that detail, together with the photograph's natural rhythm and its occasionally less-than-perfect appearance, emerged as important identifying criteria. Computer-generated images, in this context, exhibited qualities that were insufficiently detailed, easily predictable and repetitive, unnatural-looking, or sometimes excessively perfect. Some examples from participants' answers reflecting these criteria, which can be assessed in terms of the semiotic definition of the icon, are given below:
1- "The amount of detail, the variation in light and shadows. The differently colored areas in the forest and the fallen trees, the zones where the soil meets the forest." (Participant 10)
2- "This could be a computer-generated image, but since there are not many repeating patterns I will say photograph." (Participant 6)
3- "The amount of detail, the randomness, 'mistakes' belonging to the concept of the image, the poor lighting in the foreground." (Participant 5)
These iconic qualities give important clues about how participants perceive and define particular visual properties of photographs.
Many of these properties have attracted the interest of those engaged with photography almost since its invention. In particular, the amount of detail in areas that catch the eye at first glance, or at which one would not look directly, can be said to lie at the root of many people's interest in photography. How photographic equipment shapes the look of a photograph supplied the other important identifying criteria for separating photographs from computer-generated images. Though they could not always give the correct answer, the professional photographers naturally present among the participants mentioned many different options concerning the equipment used. Their ability to estimate how long exposure or the use of different lenses can affect the image in areas such as perspective or rhythm rests on their prior field experience. This reveals the indexical relation between the qualities of any image, photograph or computer-generated, and the processes that shape its formation. Participants' comments on this point include:
1- "I see motion blur, and there is a lot of detail in the foreground." (Participant 17)
2- "The viewpoint, the distortion caused by a wide-angle lens, and the disappearance of the horizon line." (Participant 17)
3- "A layered reality caused by a telephoto lens in a misty pine forest." (Participant 3)
4- "When a sunset is shot with a long exposure, the landscape can look like this." (Participant 16)
5- "...recorded with a small image sensor..." (Participant 9)
The statements in these examples clearly point to the indexical traces of the equipment used to take photographs. Photographic technology, including that behind computer-generated images, profoundly affects the visual qualities of the resulting image.
It is thus clear that knowledge of this technology, or of its effects, plays an important role when images are evaluated. In the case of landscape photographs, the qualities of the landscapes themselves were particularly important in deciding whether a shown image was a photograph or a computer-generated image. Here, the knowledge or experience people have of certain places was used to evaluate the landscape images shown, and awareness of the effects that elements such as light and water create in landscape photographs was another tool consulted when deciding. While participants evaluated some images on the basis of their own experience, for other images they made their evaluations by pointing to places that could plausibly be real. Some of the statements they used in this context are given below:
1- "I have seen a pool of water like this before. The contrast between the grass and the stones in the foreground, the amount of detail, colors appropriate to the weather and the time the photograph was taken." (Participant 11)
2- "I once took a photograph like this by the sea myself." (Participant 1)
3- "A meadow landscape. The weather conditions look characteristic of high regions like this." (Participant 7)
4- "It looks like morning mist." (Participant 4)
In all these examples the criterion used to label the shown image a photograph or a computer-generated image rests on the participants' own experience, or on inferences drawn from that experience. Even when they are sometimes unsure whether the places in the images really exist, participants point out that places and landscapes of the kind shown could exist.
Apart from the explanations based on the iconic and indexical qualities of the images discussed above, the participants' accounts of their choices contain another group of characterizations that can be assessed on a more general, symbolic plane. These characterizations are also important for better understanding the conception of photography that emerges from the explanations above. The symbolic qualities exemplified below point to an understanding of photography that goes beyond iconic and indexical description: although what photographs look like and what they show, namely equipment and landscapes, are especially important, these symbolic qualities point to more than those two categories.
1- "full of life" (Participant 4)
2- "Believable, it feels real." (Participant 7)
3- "I don't think a machine could create such a lifelike scene." (Participant 1)
4- "There is a natural line here." (Participant 3)
5- "The image looks natural." (Participant 16)
From these explanations it is understood that photographs are taken to show life, to be real, and to feel natural or believable, and that this coincides both with the traditional conception of photography mentioned early in the study and with the reasons participants gave for the importance of the difference between photographs and computer-generated images. The symbolic dimension of these characterizations shows that participants used highly abstract criteria such as "life", "real", and "natural" to reach their decisions.
6. DISCUSSION AND CONCLUSION
The findings described above show that although the participants could not separate photographs from computer-generated images successfully and in a statistically significant way, they hold a definite understanding of photography. This view, which may be called the traditional conception of photography, rests on the darkroom principle mentioned at the beginning of the study.
According to this principle, the concept of the photograph rests on the material traces that light reflected from the photographed object or scene leaves on a light-sensitive surface, traces that iconically resemble that object or scene. These traces therefore bind the photograph to its subject across time and space, in a sense with a delay, and constitute the origin of the photograph's content. In other words, for a photograph to come into being, the object or scene that is its subject must precede the photograph's visual representation in time and space. On this view, which forms the point of departure of the traditional conception of photography and stands in opposition to digital photography, it is impossible for the visual representation to precede, in time and space, the object or scene that is the photograph's subject. This makes the origin and nature of the two kinds of photograph different from the outset. The understanding of photography that emerged among the study's participants can be seen as a continuation of this traditional conception: in other words, participants believe that a photograph contains a scene that really happened or was really experienced, and that it bears witness to that event. Contrary to this understanding, however, the study's findings have shown, in line with what the participants themselves professed, that the ability of photography professionals to distinguish camera-taken photographs from computer-generated images is problematic. Examining the images shown to them, participants could not make a statistically significant assessment of whether those images contained a scene that had really occurred or been experienced. The results show that the percentage of correct answers was close to the percentage that guessing would produce, which indicates that professional photographers have serious difficulty distinguishing camera-taken photographs from computer-generated images.
Moreover, in some cases participants mistook camera-taken photographs for computer-generated images. Given the dizzying advances in image-processing software and technologies, it follows that correctly separating these two kinds of image will be even harder for people who do not deal with photography professionally. The results of the study also show that participants believe the difference between photographs and computer-generated images is important, and at the same time agree almost completely on the meaning that a photograph carries. In contrast to qualities associated with photographs, such as a natural look, being "full of life", or being real, computer-generated images were described with words such as artificial constructs, simulation, and copy. Yet the study's results have shown that, where digital photography is concerned, finding the difference between "original" and "copy" is becoming ever more difficult. According to Beltig (2005), the difficulty of choosing between real images such as photographs and computer-generated images stems from the fact that people have sought images they can trust ever since the invention of photography. But although quite different kinds of image were accepted as real long before photography's invention, the type of surface on which a visual representation takes form is no longer sufficient to correctly separate images produced by different methods. To put it more plainly, the images used in this study were assigned different meanings by the participants depending on whether they classified them as photographs or computer-generated images.
The inability to separate computer-generated images from photographs that this study has revealed shows that the problem of such images needs to be taken more seriously. The results can also be read as one more proof of the power and effectiveness of the software and algorithms used in digital photography. Yet however unreliable digital photographs may seem compared with analog ones, it should not be forgotten that there are many examples of analog photographs being manipulated as well. In conclusion, this study, conducted with participants professionally engaged in photography, has shown that those participants could not successfully distinguish photographs from computer-generated images. Moreover, in their explanations the participants remained surprisingly attached to the traditional conception of photography. Despite the spread of stock-photo providers and the frequent use of such photographs in journalism, the rise of citizen photography, and the ever-deeper digitalization of photography, it is clear that people who work professionally with images will always be needed.
REFERENCES
Carlson, Matt (2009). The reality of a fake image: News norms, photojournalistic craft, and Brian Walski's fabricated photograph. Journalism Practice, 3(2), 125-139.
Fan, Shaojing (2014). Human perception of visual realism for photo and computer-generated face images. ACM Transactions on Applied Perception, 11(2). http://doi.acm.org/10.1145/2620030 (accessed March 16, 2019).
Frosh, Paul (2013). Beyond the image bank: Digital commercial photography. In M. Lister (Ed.), The Photographic Image in Digital Culture. London: Routledge.
Gürsel, Zeynep (2016). Image Brokers: Visualizing World News in the Age of Digital Circulation. Berkeley: University of California Press.
Huang, Edgar S. (2001). Readers' perception of digital alteration in photojournalism. Journalism & Communication Monographs, 3(3), 147-182.
Lowrey, Wilson (2003). Normative conflict in the newsroom: The case of digital photo manipulation. Journal of Mass Media Ethics, 18(2), 123-142.
Lyu, Siwei, Farid, Hany (2005). How realistic is photorealistic? IEEE Transactions on Signal Processing, 53(2), 845-850.
Machin, David (2004). Building the world's visual language: The increasing global importance of image banks in corporate media. Visual Communication, 3(3), 316-336.
Mäenpää, Jenni (2014). Rethinking photojournalism: The changing work practices and professionalism of photojournalists in the digital age. Nordicom Review, 35(2), 91-104.
Mäenpää, Jenni, Seppänen, Janne (2010). Imaginary darkroom: Digital photo editing as a strategic ritual. Journalism Practice, 4(4), 454-475.
McNamara, Ann (2005). Exploring perceptual equivalence between real and simulated imagery. Proceedings of the 2nd Symposium on Applied Perception in Graphics and Visualization, 123-128.
Reaves, Shiela (1992). What's wrong with this picture? Daily newspapers' photo editors' attitudes and their tolerance toward digital manipulation. Newspaper Research Journal, 13/14(4/1), 131-155.
Reaves, Shiela (1995). The vulnerable image: Categories of photos as predictor of digital manipulation. Journalism & Mass Communication Quarterly, 72(3), 706-715.
Ritchin, Fred (1991). Photojournalism in the age of computers. In C. Squiers (Ed.), The Critical Image: Essays on Contemporary Photography. London: Lawrence & Wishart.
Schwartz, Donna (1992). To Tell the Truth: Codes of Objectivity in Photojournalism. Minneapolis: Gordon / Breach.
Solaroli, Marco (2015). Toward a new visual culture of the news: Professional photojournalism, digital post-production, and the symbolic struggle for distinction. Digital Journalism, 3(4), 513-532.
Thomson, Taylor J. (2016). Freelance photojournalists and photo editors: Learning and adapting in a (mostly faceless) virtual world. Journalism Studies,
1–21. work_wzgwas4425hjzngxhqmvap2bdm ---- wp-p1m-38.ebi.ac.uk Params is empty 404 sys_1000 exception wp-p1m-38.ebi.ac.uk no 218555281 Params is empty 218555281 exception Params is empty 2021/04/06-02:16:34 if (typeof jQuery === "undefined") document.write('[script type="text/javascript" src="/corehtml/pmc/jig/1.14.8/js/jig.min.js"][/script]'.replace(/\[/g,String.fromCharCode(60)).replace(/\]/g,String.fromCharCode(62))); // // // window.name="mainwindow"; .pmc-wm {background:transparent repeat-y top left;background-image:url(/corehtml/pmc/pmcgifs/wm-nobrand.png);background-size: auto, contain} .print-view{display:block} Page not available Reason: The web page address (URL) that you used may be incorrect. Message ID: 218555281 (wp-p1m-38.ebi.ac.uk) Time: 2021/04/06 02:16:34 If you need further help, please send an email to PMC. Include the information from the box above in your message. Otherwise, click on one of the following links to continue using PMC: Search the complete PMC archive. Browse the contents of a specific journal in PMC. Find a specific article by its citation (journal, date, volume, first page, author or article title). http://europepmc.org/abstract/MED/ work_x2sxol6avvemdb6xwhd744lg74 ---- [PDF] High-resolution stereoscopic digital fundus photography versus contact lens biomicroscopy for the detection of clinically significant macular edema. | Semantic Scholar Skip to search formSkip to main content> Semantic Scholar's Logo Search Sign InCreate Free Account You are currently offline. Some features of the site may not work correctly. DOI:10.1016/S0161-6420(01)00933-2 Corpus ID: 19745554High-resolution stereoscopic digital fundus photography versus contact lens biomicroscopy for the detection of clinically significant macular edema. @article{Rudnisky2002HighresolutionSD, title={High-resolution stereoscopic digital fundus photography versus contact lens biomicroscopy for the detection of clinically significant macular edema.}, author={C. 
Rudnisky and B. Hinz and M. Tennant and A. D. de Leon and M. Greve}, journal={Ophthalmology}, year={2002}, volume={109 2}, pages={ 267-74 } } C. Rudnisky, B. Hinz, +2 authors M. Greve Published 2002 Medicine Ophthalmology PURPOSE The purpose of this study was to compare high-resolution stereoscopic digital photography to contact lens biomicroscopy (CLBM) for the diagnosis of clinically significant macular edema. STUDY DESIGN Comparative, prospective, observational case series. PARTICIPANTS One hundred twenty diabetic patients. METHODS Patients underwent clinical retinal examination with CLBM by a retinal specialist. On the same day as clinical grading, patients received high-resolution stereoscopic digital… Expand View on PubMed people.ucalgary.ca Save to Library Create Alert Cite Launch Research Feed Share This Paper 75 CitationsHighly Influential Citations 2 Background Citations 15 Methods Citations 14 Results Citations 1 View All Figures, Tables, and Topics from this paper figure 1 table 1 table 2 table 3 table 4 table 5 View All 6 Figures & Tables Macular retinal edema Retinal Hemorrhage Lens Diseases Microaneurysm Retinal Diseases Diabetic Retinopathy Diabetes Mellitus Fundus photography Diabetic Neuropathies Retina Histopathologic Grade Retinal Perforations Eye Retinal Examination Shutter Device Component Exudate 75 Citations Citation Type Citation Type All Types Cites Results Cites Methods Cites Background Has PDF Publication Type Author More Filters More Filters Filters Sort by Relevance Sort by Most Influenced Papers Sort by Citation Count Sort by Recency Benefits of stereopsis when identifying clinically significant macular edema via teleophthalmology. C. Rudnisky, M. Tennant, A. D. de Leon, B. J. Hinz, M. Greve Medicine Canadian journal of ophthalmology. Journal canadien d'ophtalmologie 2006 27 PDF Save Alert Research Feed Prospective evaluation of digital non-stereo color fundus photography as a screening tool in age-related macular degeneration. A. 
Pirbhai, T. Sheidow, P. Hooper Medicine American journal of ophthalmology 2005 69 Save Alert Research Feed Comparison of multiple stereoscopic and monoscopic digital image formats to film for diabetic macular edema evaluation. H. Li, L. Hubbard, R. Danis, Jose F. Florez-Arango, A. Esquivel, E. Krupinski Medicine, Computer Science Investigative ophthalmology & visual science 2010 15 PDF Save Alert Research Feed Web-based grading of compressed stereoscopic digital photography versus standard slide film photography for the diagnosis of diabetic retinopathy. C. Rudnisky, M. Tennant, E. Weis, A. Ting, B. J. Hinz, M. Greve Medicine Ophthalmology 2007 86 Save Alert Research Feed Nonmydriatic Ultra-Wide-Field Scanning Laser Ophthalmoscopy (Optomap) versus Two-Field Fundus Photography in Diabetic Retinopathy R. Liegl, Kristine Liegl, +5 authors A. Neubauer Medicine Ophthalmologica 2013 20 PDF Save Alert Research Feed Comparison of stereoscopic digital imaging and slide film photography in the identification of macular degeneration. Riz Somani, M. Tennant, +6 authors A. D. de Leon Medicine Canadian journal of ophthalmology. Journal canadien d'ophtalmologie 2005 23 PDF Save Alert Research Feed Validity of Optical Coherence Tomography as a Diagnostic Method for Diabetic Retinopathy and Diabetic Macular Edema C. Azrak, M. V. Baeza-Díaz, +4 authors V. Gil-Guillén Medicine Medicine 2015 4 View 1 excerpt, cites methods Save Alert Research Feed Clinical detection of diabetic macular edema N. Rudometkin, Parin S Gohel, Marco A Maycotte-Velazquez, A. Ciardella Medicine 2007 Save Alert Research Feed Interobserver agreement in the interpretation of single-field digital fundus images for diabetic retinopathy screening. P. Ruamviboonsuk, Khemawan Teerasuwanajak, M. Tiensuwan, Kanokwan Yuttitham Medicine Ophthalmology 2006 49 Save Alert Research Feed Application of different imaging modalities for diagnosis of Diabetic Macular Edema: A review M. Mookiah, U. Acharya, +5 authors L. 
Tong Medicine, Computer Science Comput. Biol. Medicine 2015 22 View 1 excerpt, cites background Save Alert Research Feed ... 1 2 3 4 5 ... References SHOWING 1-10 OF 37 REFERENCES SORT BYRelevance Most Influenced Papers Recency A comparison of digital nonmydriatic fundus imaging with standard 35-millimeter slides for diabetic retinopathy. J. Lim, L. Labree, T. Nichols, I. Cárdenas Medicine Ophthalmology 2000 51 Save Alert Research Feed Detection of diabetic macular edema: Nidek 3Dx stereophotography compared with fundus biomicroscopy. A. Kiri, D. S. Dyer, N. Bressler, S. Bressler, A. Schachat Medicine American journal of ophthalmology 1996 18 PDF Save Alert Research Feed Detection of diabetic macular edema. Ophthalmoscopy versus photography--Early Treatment Diabetic Retinopathy Study Report Number 5. The ETDRS Research Group. J. Kinyoun, F. Barton, M. Fisher, L. Hubbard, L. Aiello, F. Ferris Medicine Ophthalmology 1989 193 Save Alert Research Feed Identification of diabetic retinopathy by stereoscopic digital imaging via teleophthalmology: a comparison to slide film. M. Tennant, M. Greve, C. Rudnisky, T. Hillson, B. J. Hinz Medicine Canadian journal of ophthalmology. Journal canadien d'ophtalmologie 2001 62 Save Alert Research Feed Tele-ophthalmology via stereoscopic digital imaging: a pilot project. M. Tennant, C. Rudnisky, B. J. Hinz, I. MacDonald, M. Greve Medicine Diabetes technology & therapeutics 2000 33 Save Alert Research Feed The role of digital fundus photography in diabetic retinopathy screening. Digital Diabetic Screening Group (DDSG). D. Lin, M. Blumenkranz, R. Brothers Medicine Diabetes technology & therapeutics 1999 22 Save Alert Research Feed Screening for Diabetic Retinopathy: The wide-angle retinal camera J. Pugh, J. Jacobson, +6 authors R. Velez Medicine Diabetes Care 1993 150 Save Alert Research Feed Photocoagulation for diabetic macular edema. Early Treatment Diabetic Retinopathy Study report number 1. 
work_x3e3txofgfcf7lc3xocschqfta ---- © 2019 Dall’Oglio et al. This work is published and licensed by Dove Medical Press Limited. The full terms of this license are available at https://www.dovepress.com/terms.php and incorporate the Creative Commons Attribution – Non Commercial (unported, v3.0) License (http://creativecommons.org/licenses/by-nc/3.0/). By accessing the work you hereby accept the Terms. Non-commercial uses of the work are permitted without any further permission from Dove Medical Press Limited, provided the work is properly attributed. For permission for commercial use of this work, please see paragraphs 4.2 and 5 of our Terms (https://www.dovepress.com/terms.php).
Clinical, Cosmetic and Investigational Dermatology 2019:12 103–108. Original Research. Open Access Full Text Article. http://dx.doi.org/10.2147/CCID.S186621

Clinical and instrumental evaluation of a new topical non-corticosteroid antifungal/anti-inflammatory/antiseborrheic combination cream for the treatment of mild-to-moderate facial seborrheic dermatitis

Federica Dall’Oglio,1 Francesco Lacarrubba,1 Maria Luca,1 Simona Boscaglia,1 Corinne Granger,2 Giuseppe Micali1
1Dermatology Clinic, University of Catania, Catania, Italy; 2Innovation and Development, ISDIN SA, Barcelona, Spain

Purpose: Topical agents play a key role in the management of facial seborrheic dermatitis (SD) by reducing inflammation and scale production. The aim of this open-label trial was to assess the efficacy and tolerability of a new non-corticosteroid, antifungal/anti-inflammatory/antiseborrheic cream containing piroctone olamine, stearyl glycyrrhetinate, and zinc PCA in the treatment of facial SD using clinical and instrumental evaluation.
Patients and methods: Twenty adult subjects affected by mild-to-moderate inflamed facial SD were enrolled and instructed to apply the study cream twice daily for 60 days. Efficacy was evaluated at baseline, and at days 15, 30, and 60, by measuring the grade of desquamation, erythema, and pruritus using clinical evaluation, erythema-directed digital photography, colorimetry, and a subject-completed Visual Analog Scale. Additionally, an Investigator Global Assessment (IGA) was performed using a 5-point scale: excellent response (>80% improvement); good response (50%–80% improvement); mild response (<50% improvement); no response (no change); worsening.
Results: After 15 days, a statistically significant decrease from baseline was found in desquamation, erythema, colorimetric scores, and pruritus. At day 60, a significant further improvement in all evaluated parameters was recorded. Moreover, the IGA improved in 90% of patients, with an excellent response in 53% of cases. A good correlation was found between clinical and instrumental evaluations.
Conclusion: Our results indicate that the study facial cream represents an option to consider when dealing with mild-to-moderate SD, being effective, well-tolerated, and free of significant side effects, as confirmed by clinical and instrumental evaluation.
Keywords: seborrheic dermatitis, topical cosmetic, digital photography, colorimeter
Correspondence: Giuseppe Micali, Dermatology Clinic, University of Catania, A.O.U. Policlinico - Vittorio Emanuele, Via Santa Sofia, 78 - 95123 Catania, Italy. Tel +39 095 321 705. Fax +39 095 378 2425. Email cldermct@gmail.com

Introduction
Facial seborrheic dermatitis (SD) is a common, chronic, recurrent inflammatory disease clinically characterized by erythema, scaling, and pruritus on sebum-rich areas, including the nasolabial folds, malar regions, glabella, and eyebrows. The external auditory meatus and the retroauricular areas can also be affected.1–5 The pathogenesis of SD is still unclear, but it seems to be multifactorial, involving sebaceous gland function, the presence on the skin of yeasts belonging to the
Malassezia (M.) spp. (formerly called Pityrosporum ovale), and the individual immune response.6–8 Additional precipitating factors include drug intake (lithium, haloperidol, buspirone, chlorpromazine, methyldopa, cimetidine), nutritional deficiency (acrodermatitis enteropathica due to zinc deficiency), neurological and degenerative disorders (Parkinson’s disease), immunosuppression (HIV and non-HIV related), genetic disorders (trisomy 21), as well as environmental factors (cold, low humidity, excessive sun exposure), physical and psychological stress, and unhealthy lifestyle (alcohol consumption).1,2
The therapeutic approach should be selected according to SD severity and the patient’s immune status and compliance. For mild-to-moderate facial SD, topical drugs play a key role in management by reducing erythema and scale production. These include corticosteroids and antifungals.9–14 The clinical efficacy of current therapeutic agents is widely demonstrated,9–11 although their prolonged use may cause some adverse effects, including skin atrophy, striae, telangiectasia, folliculitis, hypopigmentation, and tachyphylaxis in the case of corticosteroids, irritant contact dermatitis from the use of antifungals, and limitations due to the off-label use of metronidazole, benzoyl peroxide, lithium succinate/lithium gluconate, pimecrolimus, and tacrolimus.
The therapeutic efficacy of topical treatments is generally based on clinical evaluation of erythema and/or scaling severity.
Erythema-directed digital photography and colorimetry are noninvasive instrumental techniques that help determine treatment outcome more precisely than simple clinical inspection or standard photography, providing a more accurate evaluation of erythema severity.15–21
The aim of this open-label trial was to assess the efficacy and tolerability of a new topical non-corticosteroid, antifungal/anti-inflammatory/antiseborrheic facial cream in the treatment of mild-to-moderate SD by clinical assessment and by erythema-directed photography and colorimetry evaluation.

Patients and methods
Study design
This was an open-label, prospective clinical trial.
Setting and study period
From September 2017 to March 2018, a total of 20 adult subjects (15M/5F; age range 19–55 years) affected by mild-to-moderate facial SD were enrolled at the Department of Dermatology of the University of Catania. Study duration was 60 days. The study was performed in accordance with the ethical principles originating from the Declaration of Helsinki 1996 and Good Clinical Practices. The protocol was approved by the institutional review board of AOU Policlinico “Vittorio Emanuele” Hospital, Catania. Written informed consent, which included consent to the use of their images, was obtained from each patient before study procedures were started.
Inclusion/exclusion criteria
Inclusion criteria included a wash-out period of at least 2 weeks for topical antimycotic/corticosteroid treatments, and 1 month for oral antifungals/corticosteroids or hormonal therapy. Exclusion criteria were: severe underlying disease, concurrent exposure to sunlight and/or artificial ultraviolet sources, pregnancy, and breastfeeding. No other topical products or drugs were allowed, except for standard daily care products, including mild cleansers, noncomedogenic moisturizers, make-up, and SPF 50+ sunscreens.
Methodology
Patients were instructed to apply the non-corticosteroid facial cream containing piroctone olamine, stearyl glycyrrhetinate, and zinc PCA twice daily (in the morning and at bedtime) for 8 weeks. In order to reduce potential evaluator bias, all subjects were assessed by an investigator not directly involved in the study at baseline (T0), and at days 15 (T1), 30 (T2), and 60 (T3).
Study endpoints
The primary endpoint was efficacy evaluation at days 15 (T1), 30 (T2), and 60 (T3) for all clinical parameters (erythema, desquamation, and pruritus), including global assessment; the secondary endpoint was the evaluation of tolerability and cosmetic acceptability.
Clinical and instrumental evaluation criteria
All clinical and instrumental evaluations were carried out at T0, T1, T2, and T3. Desquamation was rated by clinical evaluation using a 5-point scale: 4 = severe (many large adherent white flakes); 3 = moderate (several small loose white flakes); 2 = mild (few small loose white flakes); 1 = very mild (very few small loose white flakes); 0 = no desquamation. Erythema was evaluated using: 1) the VISIA-CR imaging system equipped with RBX technology (Canfield, Parsippany, NJ, USA), to provide digital images of the face highlighting
red areas corresponding to erythema/inflammatory lesions, scored on a 5-point scale (4 = severe erythema; 3 = moderate erythema; 2 = mild erythema; 1 = very mild erythema; 0 = no erythema), and 2) the DSM II ColorMeter colorimeter (Cortex Technology, Hadsund, Denmark), able to provide the degree of erythema of a 4 mm sample skin area on a numeric scale, through three consecutive measurements of a target affected facial area (the unaffected skin of the submental region was used as intra-patient control). Measurement of pruritus was carried out with a Visual Analog Scale from 0 = no pruritus to 100 = severe pruritus.
Clinical assessment included evaluation of global efficacy at day 60 by Investigator Global Assessment (IGA) using a 5-point scale: excellent response (>80% improvement); good response (50%–80% improvement); mild response (<50% improvement); no response (no change); worsening. Finally, product tolerability was evaluated on a 4-point scale (0 = very poor; 1 = poor; 2 = good; 3 = excellent), and cosmetic acceptability was evaluated on the same 4-point scale.
Statistical analysis
The statistical analysis was performed using STATA. Qualitative data were expressed as number and percentage, while quantitative data were expressed as mean ± standard deviation. The statistical analysis considered the whole sample and then only the patients who continued the treatment. Comparisons between data obtained before and after treatment were performed with Student’s t-test. Statistical significance was set at P≤0.05.

Results
Subject demographic and clinical history data are shown in Table 1.
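The before-and-after comparisons described in the statistical analysis above (Student's t-test on baseline versus follow-up measurements, significance at P≤0.05) can be sketched as follows. This is an illustrative sketch only: the scores below are hypothetical, not the study data, and the paired form of the t-test is assumed because each subject is measured twice.

```python
import math
from statistics import mean, stdev

def paired_t(before, after):
    """Paired Student's t statistic for before/after measurements
    taken on the same subjects (df = n - 1)."""
    diffs = [b - a for b, a in zip(before, after)]
    n = len(diffs)
    return mean(diffs) / (stdev(diffs) / math.sqrt(n))

# Hypothetical desquamation scores for 10 subjects at T0 and T1:
baseline = [2, 1, 2, 3, 1, 2, 1, 1, 2, 1]
day15    = [1, 0, 0, 1, 0, 1, 0, 0, 1, 0]

t = paired_t(baseline, day15)
```

The corresponding P value would then be read from a t distribution with n−1 degrees of freedom; in practice a call such as `scipy.stats.ttest_rel` returns both the statistic and the P value in one step.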
Seventeen subjects (mild: 5 cases; moderate: 12 cases) completed the study; three subjects were lost to follow-up for personal reasons. After 15 days of treatment (T1), a statistically significant reduction from baseline was observed in desquamation (from a mean of 1.2±0.9 to 0.3±0.6; P<0.001), erythema severity (from a mean of 1.9±0.9 to 1±0.8; P=0.0005), colorimetric scores (from 8.6±5 to 3.4±3; P<0.001), and pruritus (from a mean of 69±16.8 to 6.7±10.5; P<0.001) (Table 1, Figure 1). Results from all time points are summarized in Table 2. At T3, all evaluated parameters showed a progressive, significant reduction (Table 2, Figure 2). IGA showed an excellent response in nine cases (53%), a good response in five (29.5%), a mild response in two (11.5%), and no change in one case (6%). No patient showed clinical worsening from baseline. Overall, no signs of local side effects were documented during the study, and cosmetic tolerability and acceptability were rated as excellent by all patients.

Discussion
This pilot, open-label clinical trial with clinical and instrumental evaluation of SD indicates that the non-corticosteroid, antifungal/anti-inflammatory/antiseborrheic facial cream containing piroctone olamine, stearyl glycyrrhetinate, and zinc PCA represents an option to consider when dealing with mild-to-moderate SD, being effective, well-tolerated, and free of significant side effects, as confirmed by clinical and instrumental evaluation. In the 17 subjects who completed the study no serious side effects were recorded. The mechanisms of action of this fragrance-free product may be related to multiple synergistic mechanisms of action of the active ingredients, which include piroctone olamine, an antifungal effective against Malassezia spp.
by chelation of iron and other minerals;10 stearyl glycyrrhetinate (a salt and ester of glycyrrhetinic acid), which exhibits anti-inflammatory, soothing, and antipruritic effects;22 and zinc PCA, which combines anti-inflammatory properties with antiseborrheic efficacy.23
We are aware that our study has some limitations: it was an open study from a single center, the number of patients was limited, and there was no control group for comparison – a split-face comparison could also have helped improve the study design and thus the robustness of the results. However, we reduced the possible bias of the open study by requesting an independent investigator to evaluate the clinical signs at four time points and by supporting our clinical results with objective instrumental evaluations, an original approach in SD treatment evaluation. These instrumental assessments are extremely time-consuming and costly and could not be used in routine consultation. Further studies on a larger series are necessary to confirm our results.

Conclusion
This new topical non-corticosteroid antifungal/anti-inflammatory/antiseborrheic combination cream is a good treatment option for mild-to-moderate facial SD and was well-tolerated during the 60 days of treatment.

Acknowledgment
The study was supported financially by ISDIN SA, Spain, which manufactures the product.

Disclosure
The authors report no conflicts of interest in this work.

Table 1 Demographic and clinical history data at baseline
Sex: Female 5 (25%); Male 15 (75%)
Age (years), range: 19–55
Age (years), mean ± standard deviation: 37±10.2
SD clinical severity: Mild 8 (40%); Moderate 12 (60%)
SD duration (years), range: 6.2–8
Duration of current episode (months), range: 4–9
Abbreviation: SD, seborrheic dermatitis.

Figure 1 Standard and erythema-directed digital photography with VISIA-CR of a patient with moderate facial seborrheic dermatitis showing a significant reduction (P<0.001) in erythema after 15 days of treatment (erythema score: 2, B1–B2) compared to baseline (erythema score: 3, A1–A2).
Notes: Erythema scale: 4 = severe erythema; 3 = moderate erythema; 2 = mild erythema; 1 = very mild erythema; 0 = no erythema.

Table 2 Results from clinical and instrumental assessment of desquamation, erythema, and itch from baseline to study end (n=17; values are mean ± standard deviation; P values are vs. baseline)
Desquamation (clinical): T0 1.2±0.9; T1 0.3±0.6 (P<0.001); T2 0.2±0.4 (P<0.001); T3 0.1±0.3 (P<0.001)
Erythema (digital photography): T0 1.9±0.9; T1 1±0.8 (P=0.0005); T2 0.7±0.8 (P<0.001); T3 0.6±0.6 (P<0.001)
Erythema (colorimetry): T0 8.6±5; T1 3.4±3 (P<0.001); T2 2.6±2.8 (P<0.001); T3 2.3±2.7 (P<0.001)
Itch (VAS): T0 69±16.8; T1 6.7±10.5 (P<0.001); T2 0.6±1.6 (P<0.001); T3 0 (P<0.001)
Abbreviations: VAS, Visual Analog Scale; T0, baseline; T1, day 15; T2, day 30; T3, day 60.

References
1. Gupta AK, Bluhm R. Seborrheic dermatitis. J Eur Acad Dermatol Venereol. 2004;18(1):13–26.
2. Naldi L, Rebora A. Clinical practice. Seborrheic dermatitis. N Engl J Med. 2009;360(4):387–396.
3. Berk T, Scheinfeld N. Seborrheic dermatitis. P T. 2010;35(6):348–352.
4. Naldi L. Seborrhoeic dermatitis. BMJ Clin Evid. 2010;7:1713.
5. Sampaio AL, Mameri AC, Vargas TJ, Ramos-e-Silva M, Nunes AP, Carneiro SC. Seborrheic dermatitis. An Bras Dermatol. 2011;86(6):1061–1071.
6. Dessinioti C, Katsambas A. Seborrheic dermatitis: etiology, risk factors, and treatments: facts and controversies. Clin Dermatol. 2013;31(4):343–351.
7. Bukvić Mokos Z, Kralj M, Basta-Juzbašić A, Lakoš Jukić I. Seborrheic dermatitis: an update. Acta Dermatovenerol Croat. 2012;20(2):98–104.
8. Gary G. Optimizing treatment approaches in seborrheic dermatitis. J Clin Aesthet Dermatol. 2013;6(2):44–49.
9. Kastarinen H, Oksanen T, Okokon E, et al. Topical anti-inflammatory agents for seborrhoeic dermatitis of the face or scalp. Cochrane Database Syst Rev. 2014;5:CD009446.
10. Okokon E, Verbeek J, Ruotsalainen J, Ojo O, Bakhoya V. Topical antifungals for seborrhoeic dermatitis. Cochrane Database Syst Rev. 2015;4:CD008138.
11. Kastarinen H, Okokon EO, Verbeek JH. Topical anti-inflammatory agents for seborrheic dermatitis of the face or scalp: summary of a Cochrane review. JAMA Dermatol. 2015;151(2):221–222.
12. Cheong WK, Yeung CK, Torsekar RG, et al. Treatment of seborrhoeic dermatitis in Asia: a consensus guide. Skin Appendage Disord. 2016;1(4):187–196.
13. Gupta AK, Versteeg SG. Topical treatment of facial seborrheic dermatitis: a systematic review. Am J Clin Dermatol. 2017;18(2):193–213.
14. Borda LJ, Perper M, Keri JE. Treatment of seborrheic dermatitis: a comprehensive review. J Dermatolog Treat. 2018;3:1–12.

Figure 2 Standard and erythema-directed digital photography with VISIA-CR of a patient with moderate facial seborrheic dermatitis showing a significant reduction (P<0.001) in erythema at 15 days (erythema score: 2, B1–B2), 30 days (erythema score: 1, C1–C2), and 60 days (erythema score: 0, D1–D2) compared to baseline (erythema score: 3, A1–A2).
Notes: Erythema scale: 4 = severe erythema; 3 = moderate erythema; 2 = mild erythema; 1 = very mild erythema; 0 = no erythema.

15. Dall’Oglio F, Tedeschi A, Guardabasso V, Micali G. Evaluation of a topical anti-inflammatory/antifungal combination cream in mild-to-moderate facial seborrheic dermatitis: an intra-subject controlled trial examining treated vs. untreated skin utilizing clinical features and erythema-directed digital photography. J Clin Aesthet Dermatol. 2015;8(9):33–38.
16. Micali G, Gerber PA, Lacarrubba F, Schäfer G. Improving treatment of erythematotelangiectatic rosacea with laser and/or topical therapy through enhanced discrimination of its clinical features. J Clin Aesthet Dermatol. 2016;9(7):30–39.
17. Micali G, Dall’Oglio F, Verzì AE, Luppino I, Bhatt K, Lacarrubba F. Treatment of erythemato-telangiectatic rosacea with brimonidine alone or combined with vascular laser based on preliminary instrumental evaluation of the vascular component. Lasers Med Sci. 2018;33(6):1397–1400.
18. Dall’Oglio F, Tedeschi A, Fusto CM, Lacarrubba F, Dinotta F, Micali G. A novel cosmetic antifungal/anti-inflammatory topical gel for the treatment of mild to moderate seborrheic dermatitis of the face: an open-label trial utilizing clinical evaluation and erythema-directed digital photography. G Ital Dermatol Venereol. 2017;152(5):436–440.
19. Micali G, Dall’Oglio F, Tedeschi A, Lacarrubba F. Erythema-directed digital photography for the enhanced evaluation of topical treatments for acne vulgaris. Skin Res Technol. 2018;24(3):440–444.
20. Andreassi L, Flori L. Practical applications of cutaneous colorimetry. Clin Dermatol. 1995;13(4):369–373.
21. Matias AR, Ferreira M, Costa P, Neto P. Skin colour, skin redness and melanin biometric measurements: comparison study between Antera(®) 3D, Mexameter(®) and Colorimeter(®). Skin Res Technol. 2015;21(3):346–362.
22. Asl MN, Hosseinzadeh H. Review of pharmacological effects of Glycyrrhiza sp. and its bioactive compounds. Phytother Res. 2008;22(6):709–724.
23. Takino Y, Okura F, Kitazawa M, Iwasaki K, Tagami H. Zinc l-pyrrolidone carboxylate inhibits the UVA-induced production of matrix metalloproteinase-1 by in vitro cultured skin fibroblasts, whereas it enhances their collagen synthesis. Int J Cosmet Sci. 2012;34(1):23–28.
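The Investigator Global Assessment banding used in the study above (excellent response >80% improvement; good 50%–80%; mild <50%; no response; worsening) maps directly onto a small helper function. This is an illustrative sketch: the function name is hypothetical, and the handling of the exact 50% and 80% boundary values is an assumption, since the paper does not state which band they fall into.

```python
def iga_category(percent_improvement: float) -> str:
    """Map a percent improvement from baseline onto the study's 5-point
    Investigator Global Assessment scale. Placing the boundary values
    (exactly 50 or 80) in 'good response' is an assumption."""
    if percent_improvement > 80:
        return "excellent response"
    if percent_improvement >= 50:
        return "good response"
    if percent_improvement > 0:
        return "mild response"
    if percent_improvement == 0:
        return "no response"
    return "worsening"
```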
work_x3vdoaxrqfdo3i3fdebdmm4jfy ---- Update on Screening for Sight-Threatening Diabetic Retinopathy
Review Article. Ophthalmic Res 2019;62:218–224. DOI: 10.1159/000499539
Peter H. Scanlon a–d
a Clinical Director, English NHS Diabetic Eye Screening Programme, Cheltenham, UK; b Gloucestershire Hospitals NHS Foundation Trust, Cheltenham, UK; c Nuffield Department of Clinical Neuroscience, University of Oxford, Oxford, UK; d University of Gloucestershire, Cheltenham, UK
Received: February 27, 2019. Accepted after revision: March 6, 2019. Published online: May 27, 2019.
Correspondence: Peter H. Scanlon, Gloucestershire Hospitals NHS Foundation Trust, Department of Ophthalmology, Cheltenham General Hospital, GRRG Office above Oakley Ward, Sandford Road, Cheltenham, Gloucestershire GL53 7AN (UK). E-Mail p.scanlon@nhs.net
© 2019 The Author(s). Published by S. Karger AG, Basel (www.karger.com/ore). This article is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License (CC BY-NC-ND) (http://www.karger.com/Services/OpenAccessLicense).

Keywords: Retinopathy · Retinal screening · Imaging · Sight-threatening diabetic retinopathy · Visual loss

Abstract
Purpose: The aim of this article was to describe recent advances in the use of new technology in diabetic retinopathy screening by looking at studies that assessed the effectiveness and cost-effectiveness of these technologies. Methods: The author conducts an ongoing search for articles relating to screening or management of diabetic retinopathy utilising Zetoc, with keywords and contents page lists from relevant journals. Results: The areas discussed in this article are reference standards, alternatives to digital photography, area of retina covered by the screening method, size of the device and hand-held cameras, mydriasis versus non-mydriasis or a combination, measurement of distance visual acuity, grading of images, use of automated grading analysis, and cost-effectiveness of the new technologies. Conclusions: There have been many recent advances in technology that may be adopted in the future by screening programmes for sight-threatening diabetic retinopathy, but each device will need to demonstrate effectiveness and cost-effectiveness before more widespread adoption.

Introduction
The Wilson and Junger criteria, which are the 1968 principles [1] applied by the World Health Organisation, have formed the basis of development of screening programmes and required an evidence base, which I adapted [2] for sight-threatening diabetic retinopathy (STDR):
1. STDR is an important public health problem [3, 4]
2. The incidence of STDR is going to remain the same or become an even greater public health problem [5, 6]
3. STDR has a recognisable latent or early symptomatic stage [7–9]
4. Treatment for STDR is effective and agreed upon universally
Diabetic retinopathy (DR) can be prevented or the rate of deterioration reduced by improved control of blood glucose [10–12] and blood pressure [13, 14]. Laser treatment is effective [15, 16], and vascular endothelial growth factor inhibitors can improve the results of treatment in diabetic maculopathy [17, 18] and in some cases of proliferative DR [19, 20]. In this article I have concentrated on reviewing the updates in relation to the final two criteria:
5. The test – a suitable and reliable screening test is available, acceptable to both health care professionals and (more importantly) to the public
6. Cost-effectiveness – the costs of screening and effective treatment of STDR are balanced economically in relation to total expenditure on health care – including the consequences of leaving the disease untreated

Methodology
The review of the literature relating to screening for DR has been ongoing since March 2000. The methodology involves a search technique for articles relating to screening or management of DR utilising Zetoc (http://zetoc.jisc.ac.uk/), a comprehensive research database giving access to over 34,500 journals and more than 55 million article citations and conference papers through the British Library’s electronic table of contents, covering 1993 to the present day and updated daily. Subject title keywords are searched daily using 21 different combinations (e.g., “retinopathy” or “digital” and “imaging” and “eye” in title), and contents page lists from 28 journals are reviewed monthly. Articles of interest identified with this search strategy were sourced from online electronic journal resources (e.g., Open Athens [21] or the Royal Society of Medicine [22]).

Results
The Test
Reference Standards for Digital Photographic and Other Screening Methods
There are two accepted reference standards to compare with any new screening methodology.
(a) 7-field (30-degree) stereo photography is considered the best reference standard. The advantage of this reference standard is the area of retina covered and the detailed grading classification [23] which has been developed for this standard. The disadvantage is that the unassessable image rate was 10% in one report from the Wisconsin Epidemiological Study of Diabetic Retinopathy [24] and, in many studies, is not reported, so is likely higher than that rate.
(b) Slit lamp biomicroscopy by an ophthalmologist is another accepted reference standard, although it is preferable with this methodology to use one or a small number of retinal specialists. The studies demonstrate significant variation compared to 7-field stereophotography, with some studies in which the ophthalmologists performed poorly [25, 26], and others with better results [27, 28]. Gangaputra et al. [29] compared evaluation by clinical examination with image grading at a reading centre for the classification of DR and diabetic macular oedema and concluded that the results support the use of clinical information for defining broad severity categories but not for documenting small-to-moderate changes in DR over time. Gangaputra et al. [30] also compared 35-mm film with digital photography and found that agreement between film and digital images was substantial to almost perfect for DR severity level, and moderate to substantial for diabetic macular oedema and clinically significant macular oedema severity levels, respectively. The study concluded that replacement of film fundus images with digital images for DR severity level should not adversely affect clinical trial quality.
The “Exeter Standards,” a consensus view formed at a meeting [31] in Exeter in the UK in 1988, formed the basis for a publication [32] on an acceptable method for use in a systematic screening programme for DR in the UK, which was adopted in the planning [33] of the English NHS Diabetic Eye Screening Programme. The Exeter Standards recommended that a screening test for STDR should achieve a minimum sensitivity of 80% and a minimum specificity of 95%. A systematic review by Piyasena et al.
[34] found that both mydriatic and non-mydriatic digital imaging meth- ods generate a satisfactory level of sensitivity. The mean proportion of ungradable images in non-mydriatic meth- ods was 18.4% (CI 13.6–23.3%) and for the mydriatic method 6.2% (CI 1.7–10.8%) and, once these were ex- cluded from analysis: (a) the 1-field non-mydriatic strategy gave summary estimates of sensitivity of 78% (CI 76–80%) and of speci- ficity of 91% (CI 90–92%); the 2-field non-mydriatic strategy gave summary estimates of sensitivity of 91% (95% CI 90–93%) and of specificity of 94% (CI 93–95%); (b) the 1-field mydriatic strategy gave summary esti- mates of sensitivity of 80% (CI 77–82%) and of specificity of 93% (CI 92–94%); the 2-field mydriatic strategy gave summary estimates of sensitivity of 85% (95% CI 84– 87%) and of specificity of 82% (95% CI 81–83%). The article concluded that, overall, there was no differ- ence in sensitivity between non-mydriatic and mydriatic methods (86%, 95% CI 85–87%) after exclusion of un- gradable images. In the literature, studies vary as to whether they count ungradable images as test positive, and it is more likely that a study will achieve the 95% specificity if they do not count ungradable images as test positive. Alternatives to Digital Photography Goh et al. [35] produced a comprehensive review of ret- inal imaging techniques for DR screening. The most excit- ScanlonOphthalmic Res 2019;62:218–224220 DOI: 10.1159/000499539 ing new technologies that may be used in screening in the future, providing they can be shown to be effective and cost- effective, are the scanning confocal ophthalmoscopes that use either laser light or light-emitting diodes (LED). 
Examples of 4 CE-marked scanning confocal ophthalmoscopes that are currently commercially available are discussed: The Optos California, which is described as ultrawide-field imaging, incorporates low-powered laser wavelengths in red (635 nm), green (532 nm) and blue (488 nm) that scan simultaneously and produce a composite image that joins the 3 wavelengths of light into a false-colour image. In 2016, Silva et al. [36] compared the efficiency of non-mydriatic ultrawide-field imaging and non-mydriatic fundus photography in a DR ocular telehealth programme. The Heidelberg Spectralis OCT2 with multicolour functionality also uses three laser wavelengths, blue (488 nm), green (515 nm) and infrared reflectance (820 nm), to simultaneously capture a composite false-colour image. The Eidon confocal scanner (Centervue, Padova, Italy) combines confocal imaging with natural white-light illumination to provide a true-colour image using a white LED (440–650 nm). The Zeiss Clarus 500 uses red (585–640 nm), green (500–585 nm) and blue (435–500 nm) LEDs to capture a composite image. There have not yet been any major studies published using any of these imaging techniques in a DR screening setting.

The Area of Retina Covered by the Screening Method

The original 35-mm film fundus cameras that were used for 7-field stereophotography had 30-degree fields. In 1989, Moss et al. [24] demonstrated that, for 8 retinopathy levels, the rate of agreement with 7 stereoscopic fields ranged from 80% for two 30-degree stereo fields to 91% for four 30-degree stereo fields. The non-mydriatic digital fundus cameras that are widely used in screening programmes, whether or not the patient's eyes are dilated, usually have 45-degree fields. Population-based screening programmes that utilise non-mydriatic photography commonly capture a single 45-degree field centred on the fovea of each eye [37].
For many mydriatic schemes, two 45-degree fields are taken [38] – one centred on the fovea and one on the optic disc. The scanning confocal ophthalmoscopes have the fields of view shown below:

(a) Heidelberg Spectralis OCT2 with multicolour functionality: 1-field or 2-field non-mydriatic 55-degree image(s) per eye (when using a supplementary lens)
(b) Optos California: 1-field non-mydriatic 200-degree image per eye
(c) Zeiss Clarus 500: 1-field non-mydriatic 130-degree image per eye
(d) CentreVue Eidon: 1- or 2-field non-mydriatic 60-degree image(s) per eye

Size of the Device and Hand-Held Cameras

There have been many claims for the use of smartphones in DR screening. There is an excellent review of potential devices by Bolster et al. [39]. Hand-held devices have historically performed poorly in DR screening [40], although a recent study suggested that they could be used for optic disc imaging [41] and another study suggested that a small device had been validated [42] for DR screening. The latter was an excellent study that compared the sensitivity and specificity of a "fundus on phone" camera, a smartphone-based retinal imaging system, as a screening tool for DR detection and DR severity in comparison with 7-standard-field digital retinal photography. It was noteworthy that mydriasis was used and that the smartphone was fixed and the patient's head positioned using a slit lamp chin rest, overcoming many of the problems of movement of patient and operator that are associated with hand-held devices. It may be that the way forward with these small devices is to use an inexpensive device to fix them and a slit lamp chin rest for the patient.

Mydriasis versus Non-Mydriasis or a Combination of Both

A strong correlation has been reported [43] between older age and poor-quality image rate in non-mydriatic digital photography in DR screening. The main reason for this is higher rates of media opacity and smaller pupil sizes in older people. Scanlon et al.
[44] reported an ungradable image rate for non-mydriatic photography of 19.7% (95% CI 18.4–21.0%), and Murgatroyd et al. [45] reported an ungradable image rate for non-mydriatic photography of 26%. The mean age of the patients in the study of Scanlon et al. [44] was 65 years, and in that of Murgatroyd et al. [45] the median age of the patients was 63.0 years (range 17–88 years, interquartile range 51.8–70.3 years). There is also an ethnicity component, with some studies demonstrating poorer results for non-mydriatic digital photographic screening in eyes with more iris pigmentation [46]. Scotland introduced the concept of staged mydriasis into their screening programme, only dilating those who the technician taking the images determined had poor-quality images without mydriasis. As the age of the Scottish population has increased, the numbers needing dilation have risen to 34% [pers. commun. Mike Black, Scottish DRS Collaborative Coordinator]. Silva et al. [47] have demonstrated that the ungradable rate per patient for DR and diabetic macular oedema was significantly lower with non-mydriatic ultrawide-field imaging compared with non-mydriatic fundus photography (DR, 2.8 vs. 26.9%, p < 0.0001; diabetic macular oedema, 3.8 vs. 26.2%, p < 0.0001) in the Indian Health Service-JVN programme, which serves American Indian and Alaska Native communities.

Measurement of Distance Visual Acuity

Visual acuity is widely accepted as an adjunct to screening for diabetic maculopathy, but in isolation it is not sufficiently sensitive to be a screening tool [48, 49], and there is currently no study that supports the added benefit of visual acuity in screening. It is, however, from the patient's perspective, probably the most important factor.
Grading the Images

In most screening programmes, trained graders grade the images, and those with the more severe pathology are referred to ophthalmologists to decide on further management. Different grading criteria are used in different countries.

Use of Automated Analysis for Grading

Automated grading of images from DR screening has been pioneered in Scotland with the development of iGradingM (Scottish Health Innovations Ltd.), which has been used extensively as a first-level disease/no-disease grader [50]. This includes an image quality assessment to reduce the workload of manual grading in the Scottish screening programme, which takes 1-field non-mydriatic photographs. Tufail et al. [51] reported on a study which included a total of 20,258 patients with 102,856 two-field-per-eye images. Three software products were tested: iGradingM (Scottish Health Innovations Ltd.), EyeArt (Eyenuk Inc., Woodland Hills, CA, USA) and Retmarker (Retmarker Ltd., Coimbra, Portugal). EyeArt achieved a sensitivity of 94.7% (95% CI 94.2–95.2%) for any retinopathy (manual grades R1, U, M1, R2 and R3 as refined by arbitration) and 93.8% (95% CI 92.9–94.6%) for referable retinopathy; the corresponding sensitivities for Retmarker were 73.0% (95% CI 72.0–74.0%) for any retinopathy and 85.0% (95% CI 83.6–86.2%) for referable retinopathy. For manual grades R0 and no maculopathy (M0), specificity was 20% (95% CI 19–21%) for EyeArt and 53% (95% CI 52–54%) for Retmarker. In this study the version of iGradingM was unable to grade the nasal field. The Iowa Detection Program (IDx-DR) is another software solution for automated grading that was tested [52] in the Hoorn Diabetes Care System in the Netherlands. There are also a number of developing systems [53, 54] that are not yet commercially available.
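As a rough sketch of why the specificity of a first-level disease/no-disease grader matters for manual-grading workload: only the images the software flags as positive need a human grade. The function below and its cohort size and prevalence are illustrative assumptions, not figures from the study; the sensitivity/specificity pairs are of the same order as those reported above:

```python
# Sketch: manual-grading workload left after a first-level automated
# "disease / no disease" filter with given sensitivity and specificity.

def manual_workload(n, prevalence, se, sp):
    """Fraction of screened patients whose images still need a human grade
    (i.e. the fraction the automated grader calls test positive)."""
    diseased = n * prevalence
    healthy = n - diseased
    referred = diseased * se + healthy * (1 - sp)  # automated test positives
    return referred / n

# Assumed: 100,000 screening episodes, 20% with any retinopathy.
for name, se, sp in [("high-sensitivity software", 0.947, 0.20),
                     ("high-specificity software", 0.730, 0.53)]:
    frac = manual_workload(100_000, 0.20, se, sp)
    print(f"{name}: {frac:.1%} of images still graded manually")
```

Under these assumptions, the low-specificity grader removes less than a fifth of the workload, while the higher-specificity one removes almost half — at the price of lower sensitivity, which is the trade-off at the heart of the cost-effectiveness analyses cited below.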
Automated analysis of OCT images through use of deep learning is being explored in a collaborative project between Moorfields Eye Hospital and Google DeepMind [55] and in the Singapore Eye Research Centre [35]. A recent study [56] examined the variability in different methods of grading, definitions of reference standards, and their effects on building deep learning models for the detection of diabetic eye disease. The results from these studies are very dependent on the image sets on which they are tested.

Cost-Effectiveness

The cost-effectiveness of screening for STDR is dependent on the local health care system, but there are various reports of screening being cost-effective in health care settings such as Singapore [57], Canada [58], South Africa [59] and India [60], with the proviso that low-risk groups can be identified; the cost-effectiveness of screening for STDR can be improved in some settings by differential or individualised screening intervals for low- and high-risk groups [61–63]. Automated grading was shown to be cost-effective in the Scottish Screening Programme [64, 65], and the Tufail study [51] reported that two of the software packages tested (Retmarker and EyeArt) achieved acceptable sensitivity for referable retinopathy and false-positive rates (compared with human graders as reference standard) and appear to be cost-effective. The use of OCT in screening has been considered, but the cost of the equipment makes it more likely that this would only be useful as a second-line screening tool [66, 67] for those who are screen positive with 2-dimensional photographic markers for diabetic maculopathy. With respect to the use of ultrawide-field imaging systems in DR screening programmes, a review by Fenner et al.
[68] summed up the current situation: despite the impressive outcomes in clinical trials, it remains unclear whether the cost savings of reduced inappropriate referrals are sufficient to justify the financial outlay.

Discussion/Conclusion

There have been many recent advances in technology that may be adopted by screening programmes for STDR in the future. Most screening programmes currently use staged mydriasis with one 45-degree-field non-mydriatic digital photography or two 45-degree-field mydriatic digital photography. Advances in camera technology, and in particular scanning confocal ophthalmoscopes with laser light or light-emitting diodes, show good potential for non-mydriatic photography with wider fields. Each device will need to demonstrate effectiveness and cost-effectiveness before more widespread adoption. Automated reading of images is progressing, with Scotland having already introduced this into their national programme and other countries likely to follow in the future.

Statement of Ethics

The author has no ethical conflicts to disclose.

Disclosure Statement

I have attended Advisory Boards for Pfizer, Allergan, Roche, Bayer and Boehringer. My department has received Educational, Research and Audit Grants from Allergan, Pfizer, Novartis, Boehringer and Bayer in the last 10 years.

Funding Sources

No funding has been received for the preparation of this manuscript.

References

1 Wilson J, Jungner G. The principles and practice of screening for disease. Public Health Papers 34. Geneva: WHO; 1968.
2 Scanlon P.
An evaluation of the effectiveness and cost-effectiveness of screening for diabetic retinopathy by digital imaging photography and technician ophthalmoscopy and the subsequent change in activity, workload and costs of new diabetic ophthalmology referrals. London: UCL; 2005.
3 Zheng Y, He M, Congdon N. The worldwide epidemic of diabetic retinopathy. Indian J Ophthalmol. 2012 Sep-Oct; 60(5): 428–31.
4 Yau JW, Rogers SL, Kawasaki R, Lamoureux EL, Kowalski JW, Bek T, et al; Meta-Analysis for Eye Disease (META-EYE) Study Group. Global prevalence and major risk factors of diabetic retinopathy. Diabetes Care. 2012 Mar; 35(3): 556–64.
5 International Diabetes Federation. IDF diabetes atlas. 6th ed. Brussels, Belgium; 2013 [cited 2014 April 16]. Available from: http://www.idf.org/diabetesatlas
6 Stefánsson E. Prevention of diabetic blindness. Br J Ophthalmol. 2006 Jan; 90(1): 2–3.
7 Early Treatment Diabetic Retinopathy Study Research Group. Fundus photographic risk factors for progression of diabetic retinopathy. ETDRS report number 12. Ophthalmology. 1991 May; 98(5 Suppl): 823–33.
8 Stratton IM, Aldington SJ, Taylor DJ, Adler AI, Scanlon PH. A simple risk stratification for time to development of sight-threatening diabetic retinopathy. Diabetes Care. 2013 Mar; 36(3): 580–5.
9 Ruta LM, Magliano DJ, Lemesurier R, Taylor HR, Zimmet PZ, Shaw JE. Prevalence of diabetic retinopathy in Type 2 diabetes in developing and developed countries. Diabet Med. 2013 Apr; 30(4): 387–98.
10 Nathan DM, Genuth S, Lachin J, Cleary P, Crofford O, Davis M, et al; Diabetes Control and Complications Trial Research Group. The effect of intensive treatment of diabetes on the development and progression of long-term complications in insulin-dependent diabetes mellitus. N Engl J Med. 1993 Sep; 329(14): 977–86.
11 Stratton IM, Kohner EM, Aldington SJ, Turner RC, Holman RR, Manley SE, et al.
UKPDS 50: risk factors for incidence and progression of retinopathy in Type II diabetes over 6 years from diagnosis. Diabetologia. 2001 Feb; 44(2): 156–63.
12 Ferris FL 3rd, Nathan DM. Preventing Diabetic Retinopathy Progression. Ophthalmology. 2016 Sep; 123(9): 1840–2.
13 Matthews DR, Stratton IM, Aldington SJ, Holman RR, Kohner EM; UK Prospective Diabetes Study Group. Risks of progression of retinopathy and vision loss related to tight blood pressure control in type 2 diabetes mellitus: UKPDS 69. Arch Ophthalmol. 2004 Nov; 122(11): 1631–40.
14 Chase HP, Garg SK, Jackson WE, Thomas MA, Harris S, Marshall G, et al. Blood pressure and retinopathy in type I diabetes. Ophthalmology. 1990 Feb; 97(2): 155–9.
15 Davies EG, Petty RG, Kohner EM. Long term effectiveness of photocoagulation for diabetic maculopathy. Eye (Lond). 1989; 3(Pt 6): 764–7.
16 Chew EY, Ferris FL 3rd, Csaky KG, Murphy RP, Agrón E, Thompson DJ, et al. The long-term effects of laser photocoagulation treatment in patients with diabetic retinopathy: the early treatment diabetic retinopathy follow-up study. Ophthalmology. 2003 Sep; 110(9): 1683–9.
17 Elman MJ, Qin H, Aiello LP, Beck RW, Bressler NM, Ferris FL 3rd, et al.; Diabetic Retinopathy Clinical Research Network. Intravitreal ranibizumab for diabetic macular edema with prompt versus deferred laser treatment: three-year randomized trial results. Ophthalmology. 2012 Nov; 119(11): 2312–8.
18 Wells JA, Glassman AR, Ayala AR, Jampol LM, Bressler NM, Bressler SB, et al.; Diabetic Retinopathy Clinical Research Network. Aflibercept, Bevacizumab, or Ranibizumab for Diabetic Macular Edema: Two-Year Results from a Comparative Effectiveness Randomized Clinical Trial. Ophthalmology. 2016 Jun; 123(6): 1351–9.
19 Sivaprasad S, Prevost AT, Vasconcelos JC, Riddell A, Murphy C, Kelly J, et al.; CLARITY Study Group.
Clinical efficacy of intravitreal aflibercept versus panretinal photocoagulation for best corrected visual acuity in patients with proliferative diabetic retinopathy at 52 weeks (CLARITY): a multicentre, single-blinded, randomised, controlled, phase 2b, non-inferiority trial. Lancet. 2017 Jun; 389(10085): 2193–203.
20 Gross JG, Glassman AR, Liu D, Sun JK, Antoszyk AN, Baker CW, et al.; Diabetic Retinopathy Clinical Research Network. Five-Year Outcomes of Panretinal Photocoagulation vs Intravitreous Ranibizumab for Proliferative Diabetic Retinopathy: A Randomized Clinical Trial. JAMA Ophthalmol. 2018 Oct; 136(10): 1138–48.
21 Open Athens. 2019 [cited 2019 Feb 22]. Available from: https://www.openathens.net/
22 Royal Society of Medicine. 2019 [cited 2019 Feb 22]. Available from: https://www.rsm.ac.uk/the-library/
23 Early Treatment Diabetic Retinopathy Study Research Group. Grading diabetic retinopathy from stereoscopic color fundus photographs—an extension of the modified Airlie House classification. ETDRS report number 10. Ophthalmology. 1991 May; 98(5 Suppl): 786–806.
24 Moss SE, Meuer SM,
Klein R, Hubbard LD, Brothers RJ, Klein BE. Are seven standard photographic fields necessary for classification of diabetic retinopathy? Invest Ophthalmol Vis Sci. 1989 May; 30(5): 823–8.
25 Lin DY, Blumenkranz MS, Brothers RJ, Grosvenor DM. The sensitivity and specificity of single-field nonmydriatic monochromatic digital fundus photography with remote image interpretation for diabetic retinopathy screening: a comparison with ophthalmoscopy and standardized mydriatic color photography. Am J Ophthalmol. 2002 Aug; 134(2): 204–13.
26 Pugh JA, Jacobson JM, Van Heuven WA, Watters JA, Tuley MR, Lairson DR, et al. Screening for diabetic retinopathy. The wide-angle retinal camera. Diabetes Care. 1993 Jun; 16(6): 889–95.
27 Kinyoun J, Barton F, Fisher M, Hubbard L, Aiello L, Ferris F 3rd; ETDRS Research Group. Detection of diabetic macular edema. Ophthalmoscopy versus photography – Early Treatment Diabetic Retinopathy Study Report Number 5. Ophthalmology. 1989 Jun; 96(6): 746–50.
28 Scanlon PH, Malhotra R, Greenwood RH, Aldington SJ, Foy C, Flatman M, et al. Comparison of two reference standards in validating two field mydriatic digital photography as a method of screening for diabetic retinopathy. Br J Ophthalmol. 2003 Oct; 87(10): 1258–63.
29 Gangaputra S, Lovato JF, Hubbard L, Davis MD, Esser BA, Ambrosius WT, et al.; ACCORD Eye Research Group. Comparison of standardized clinical classification with fundus photograph grading for the assessment of diabetic retinopathy and diabetic macular edema severity. Retina. 2013 Jul-Aug; 33(7): 1393–9.
30 Gangaputra S, Almukhtar T, Glassman AR, Aiello LP, Bressler N, Bressler SB, et al.; Diabetic Retinopathy Clinical Research Network. Comparison of film and digital fundus photographs in eyes of individuals with diabetes mellitus. Invest Ophthalmol Vis Sci. 2011 Aug; 52(9): 6168–73.
31 Marshall S. The Exeter BDA meeting - a synopsis. Diabet Med. 1988; 5 Suppl 1:iii–iv.
32 BDA.
Retinal photography screening for diabetic eye disease. London: British Diabetic Association; 1997.
33 Garvican L, Clowes J, Gillow T. Preservation of sight in diabetes: developing a national risk reduction programme. Diabet Med. 2000 Sep; 17(9): 627–34.
34 Piyasena MM, Murthy GV, Yip JL, Gilbert C, Peto T, Gordon I, et al. Systematic review and meta-analysis of diagnostic accuracy of detection of any level of diabetic retinopathy using digital retinal imaging. Syst Rev. 2018 Nov; 7(1): 182.
35 Goh JK, Cheung CY, Sim SS, Tan PC, Tan GS, Wong TY. Retinal Imaging Techniques for Diabetic Retinopathy Screening. J Diabetes Sci Technol. 2016 Feb; 10(2): 282–94.
36 Silva PS, Cavallerano JD, Haddad NM, Tolls D, Thakore K, Patel B, et al. Comparison of nondiabetic retinal findings identified with nonmydriatic fundus photography vs ultrawide field imaging in an ocular telehealth program. JAMA Ophthalmol. 2016 Mar; 134(3): 330–4.
37 Williams GA, Scott IU, Haller JA, Maguire AM, Marcus D, McDonald HR. Single-field fundus photography for diabetic retinopathy screening: a report by the American Academy of Ophthalmology. Ophthalmology. 2004 May; 111(5): 1055–62.
38 Scanlon PH. The English National Screening Programme for diabetic retinopathy 2003-2016. Acta Diabetol. 2017 Jun; 54(6): 515–25.
39 Bolster NM, Giardini ME, Bastawrous A. The Diabetic Retinopathy Screening Workflow: Potential for Smartphone Imaging. J Diabetes Sci Technol. 2015 Nov; 10(2): 318–24.
40 Yogesan K, Constable IJ, Barry CJ, Eikelboom RH, McAllister IL, Tay-Kearney ML. Telemedicine screening of diabetic retinopathy using a hand-held fundus camera. Telemed J. 2000; 6(2): 219–23.
41 Bastawrous A, Giardini ME, Bolster NM, Peto T, Shah N, Livingstone IA, et al. Clinical Validation of a Smartphone-Based Adapter for Optic Disc Imaging in Kenya. JAMA Ophthalmol. 2016 Feb; 134(2): 151–8.
42 Rajalakshmi R, Arulmalar S, Usha M, Prathiba V, Kareemuddin KS, Anjana RM, et al.
Validation of Smartphone Based Retinal Photography for Diabetic Retinopathy Screening. PLoS One. 2015 Sep; 10(9): e0138285.
43 Scanlon PH, Foy C, Malhotra R, Aldington SJ. The influence of age, duration of diabetes, cataract, and pupil size on image quality in digital photographic retinal screening. Diabetes Care. 2005 Oct; 28(10): 2448–53.
44 Scanlon PH, Malhotra R, Thomas G, Foy C, Kirkpatrick JN, Lewis-Barned N, et al. The effectiveness of screening for diabetic retinopathy by digital imaging photography and technician ophthalmoscopy. Diabet Med. 2003 Jun; 20(6): 467–74.
45 Murgatroyd H, Ellingford A, Cox A, Binnie M, Ellis JD, MacEwen CJ, et al. Effect of mydriasis and different field strategies on digital image screening of diabetic eye disease. Br J Ophthalmol. 2004 Jul; 88(7): 920–4.
46 Gupta V, Bansal R, Gupta A, Bhansali A. Sensitivity and specificity of nonmydriatic digital imaging in screening diabetic retinopathy in Indian eyes. Indian J Ophthalmol. 2014 Aug; 62(8): 851–6.
47 Silva PS, Horton MB, Clary D, Lewis DG, Sun JK, Cavallerano JD, et al. Identification of Diabetic Retinopathy and Ungradable Image Rate with Ultrawide Field Imaging in a National Teleophthalmology Program. Ophthalmology. 2016 Jun; 123(6): 1360–7.
48 Scanlon PH, Foy C, Chen FK. Visual acuity measurement and ocular co-morbidity in diabetic retinopathy screening. Br J Ophthalmol. 2008 Jun; 92(6): 775–8.
49 Corcoran JS, Moore K, Agarawal OP, Edgar DF, Yudkin J. Visual acuity screening for diabetic maculopathy. Pract Diabetes. 1985; 2: 230–2.
50 Fleming AD, Philip S, Goatman KA, Prescott GJ, Sharp PF, Olson JA. The evidence for automated grading in diabetic retinopathy screening. Curr Diabetes Rev. 2011 Jul; 7(4): 246–52.
51 Tufail A, Kapetanakis VV, Salas-Vega S, Egan C, Rudisill C, Owen CG, et al.
An observational study to assess if automated diabetic retinopathy image assessment software can replace one or more steps of manual imaging grading and to determine their cost-effectiveness. 2016. p. 1–73. Available from: https://www.journalslibrary.nihr.ac.uk/hta/hta20920/#/full-report
52 van der Heijden AA, Abramoff MD, Verbraak F, van Hecke MV, Liem A, Nijpels G. Validation of automated screening for referable diabetic retinopathy with the IDx-DR device in the Hoorn Diabetes Care System. Acta Ophthalmol. 2018 Feb; 96(1): 63–8.
53 Li Z, Keel S, Liu C, He Y, Meng W, Scheetz J, et al. An Automated Grading System for Detection of Vision-Threatening Referable Diabetic Retinopathy on the Basis of Color Fundus Photographs. Diabetes Care. 2018 Dec; 41(12): 2509–16.
54 Ramachandran N, Hong SC, Sime MJ, Wilson GA. Diabetic retinopathy screening using deep neural network. Clin Exp Ophthalmol. 2018 May; 46(4): 412–6.
55 De Fauw J, Ledsam JR, Romera-Paredes B, Nikolov S, Tomasev N, Blackwell S, et al. Clinically applicable deep learning for diagnosis and referral in retinal disease. Nat Med. 2018 Sep; 24(9): 1342–50.
56 Krause J, Gulshan V, Rahimy E, Karth P, Widner K, Corrado GS, et al. Grader Variability and the Importance of Reference Standards for Evaluating Machine Learning Models for Diabetic Retinopathy. Ophthalmology. 2018 Aug; 125(8): 1264–72.
57 Nguyen HV, Tan GS, Tapp RJ, Mital S, Ting DS, Wong HT, et al. Cost-effectiveness of a National Telemedicine Diabetic Retinopathy screening program in Singapore. Ophthalmology. 2016; 123(12): 2571–80.
58 Kanjee R, Dookeran RI, Mathen MK, Stockl FA, Leicht R. Six-year prevalence and incidence of diabetic retinopathy and cost-effectiveness of tele-ophthalmology in Manitoba. Can J Ophthalmol. 2016; 51(6): 467–70.
59 Khan T, Bertram MY, Jina R, Mash B, Levitt N, Hofman K.
Preventing diabetes blindness: cost effectiveness of a screening programme using digital non-mydriatic fundus photography for diabetic retinopathy in a primary health care setting in South Africa. Diabetes Res Clin Pract. 2013 Aug; 101(2): 170–6.
60 Rachapelle S, Legood R, Alavi Y, Lindfield R, Sharma T, Kuper H, et al. The cost-utility of telemedicine to screen for diabetic retinopathy in India. Ophthalmology. 2013 Mar; 120(3): 566–73.
61 Scanlon PH, Aldington SJ, Leal J, Luengo-Fernandez R, Oke J, Sivaprasad S, et al. Development of a cost-effectiveness model for optimisation of the screening interval in diabetic retinopathy screening. Health Technol Assess. 2015 Sep; 19(74): 1–116.
62 Scanlon PH. Screening Intervals for Diabetic Retinopathy and Implications for Care. Curr Diab Rep. 2017 Sep; 17(10): 96.
63 Lund SH, Aspelund T, Kirby P, Russell G, Einarsson S, Palsson O, et al. Individualised risk assessment for diabetic retinopathy and optimisation of screening intervals: a scientific approach to reducing healthcare costs. Br J Ophthalmol. 2016 May; 100(5): 683–7.
64 Scotland GS, McNamee P, Philip S, Fleming AD, Goatman KA, Prescott GJ, et al.
Cost-effectiveness of implementing automated grading within the national screening programme for diabetic retinopathy in Scotland. Br J Ophthalmol. 2007 Nov; 91(11): 1518–23.
65 Scotland GS, McNamee P, Fleming AD, Goatman KA, Philip S, Prescott GJ, et al.; Scottish Diabetic Retinopathy Clinical Research Network. Costs and consequences of automated algorithms versus manual grading for the detection of referable diabetic retinopathy. Br J Ophthalmol. 2010 Jun; 94(6): 712–9.
66 Prescott G, Sharp P, Goatman K, Scotland G, Fleming A, Philip S, et al. Improving the cost-effectiveness of photographic screening for diabetic macular oedema: a prospective, multi-centre, UK study. Br J Ophthalmol. 2014 Aug; 98(8): 1042–9.
67 Leal J, Luengo-Fernandez R, Stratton IM, Dale A, Ivanova K, Scanlon PH. Cost-effectiveness of digital surveillance clinics with optical coherence tomography versus hospital eye service follow-up for patients with screen-positive maculopathy. Eye (Lond). 2018. DOI: 10.1038/s41433-018-0297-7.
68 Fenner BJ, Wong RL, Lam WC, Tan GS, Cheung GC. Advances in Retinal Imaging and Applications in Diabetic Retinopathy Screening: A Review. Ophthalmol Ther. 2018 Dec; 7(2): 333–46.

Applied Medical Informatics. Original Research. Vol.
27, No. 4/2010, pp: 90-93. Copyright ©2010 by the authors; Licensee SRIMA, Cluj-Napoca, Romania.

Evaluation of Differences regarding the Characteristics of Diabetic Retinopathy between 2007 and 2008

Mihaela MOCIRAN1,*, Cristian DRAGOŞ2, Nicolae HÂNCU1

1 “Iuliu Haţieganu” University of Medicine and Pharmacy Cluj-Napoca, 8 Victor Babeş, 400012 Cluj-Napoca, Romania
2 Babeş-Bolyai University of Cluj-Napoca, 1 Mihail Kogalniceanu, 400084 Cluj-Napoca, Romania
E-mail(s): mihaela.mociran@yahoo.com; cristian.dragos@econ.ubbcluj.ro; nhancu@umfcluj.ro
* Author to whom correspondence should be addressed; County Hospital “C. Opriş” Baia Mare, Baia Mare, 31 George Coşbuc, Tel.: 0724044581

Received: 23 October 2010 / Accepted: 1 December 2010 / Published online: 15 December 2010

Abstract

Aim: To determine the incidence of diabetic retinopathy and to identify the influence of optimising different parameters on the progression of diabetic retinopathy in diabetic patients from Maramureş. Methods and material: We conducted a prospective study of 1269 persons with diabetes mellitus registered in the healthcare system in Maramureş County. We analysed the differences between 2007 and 2008 regarding different parameters (glycated haemoglobin - HbA1c, hypertension, lipid profile, body mass index - BMI, and abdominal circumference) and the influence of these changes on the progression of diabetic retinopathy. Results: The total number of patients with diabetic retinopathy increased from 246 in 2007 to 250 in 2008, and the total number of patients with diabetic maculopathy increased from 40 in 2007 to 46 in 2008. After one year, there was no significant influence of optimising glycaemic control (p=0.302), cholesterol level (p=0.292), triglyceride levels (p=0.054), HDL level (p=0.831), hypertension (p=0.413), BMI (p=0.063) or abdominal circumference (p=0.068).
Conclusions: The incidence of diabetic retinopathy was 4 in 1269, or 0.3%. After one year, improvements in diabetes management had no significant influence on the progression of diabetic retinopathy. Keywords: Diabetes; Retinopathy; Progression; Incidence. Introduction Diabetic retinopathy, a specific microvascular complication of diabetes, is the primary cause of blindness among adults aged 20-74 in the USA [1]. The prevalence of diabetic retinopathy increases with the duration of diabetes: after 20 years, nearly all persons with type 1 diabetes and more than 60% of those with type 2 diabetes have some retinopathy [2]. The major risk factors associated with the incidence and progression of this complication of diabetes are diabetes duration, glycaemic control, hypertension control, and lipid control [3]. Any new information regarding this complication of diabetes mellitus is of significant importance for health. Data about Romanian diabetic patients and this complication are scarce and inconsistent. In this study our purpose was to identify the incidence of diabetic retinopathy in patients with diabetes mellitus from Maramureş County. In addition, we wanted to analyse the influence of optimizing different parameters (glycated haemoglobin, hypertension, lipid profile, body mass index - as mentioned above) on the progression of diabetic retinopathy in this population between 2007 and 2008. Material and Methods We included in our study all patients with diabetes who met the inclusion criteria and presented none of the exclusion criteria. Inclusion criteria: patients with diabetes mellitus from Maramureş County registered in the healthcare system in December 2007. All participants gave oral informed consent to participate in this study.
Exclusion criteria: patients with a history of laser treatment, patients with cataract or other diseases which may affect the detection of retinopathy, patients with severe mental diseases, patients with glaucoma, and patients whose medical records did not include all the necessary data. The study followed the recommendations of the Declaration of Helsinki. This was a prospective study. We made four visits (twice a year, at approximately six-month intervals) to evaluate all the data described below. Retinal digital photography was performed only once a year. We investigated parameters regarding demographic variables, diabetes and its complications, physical measurements, and some laboratory determinations (HbA1c, cholesterol, triglycerides, HDL). All of these were described in detail in a previous study published in 2009 by the same authors [4]. We also performed retinal digital photography as described in that study. We applied the International Diabetes Federation recommendations regarding daily exercise, HbA1c<6.5%, blood pressure <130/80, LDL <95 mg/dl, triglycerides <200 mg/dl, and HDL >39 mg/dl [5]. Statistics: We used an Ordered Logit Model, with parameters estimated by Maximum Likelihood using the STATA software package, version 9.1. We applied these analyses to the following parameters: HbA1c, cholesterol, triglycerides, HDL, hypertension, BMI, and abdominal circumference. The results are reported in detail in Table 2 as Ordered Logit Model coefficients. A p-value<0.05 denoted statistical significance. Results From a total of 12917 registered patients with diabetes in 2007, only 1269 were available for analysis (the rest were excluded because not all the data necessary for the analyses were available, as mentioned above). The sample was representative, the prevalence of retinopathy having a confidence interval of ±2.1% at the 0.05 significance level.
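As a plausibility check on the sampling statement above, the half-width of a normal-approximation (Wald) confidence interval for the observed 2007 retinopathy prevalence (246 of 1269 patients) can be computed with the Python standard library. This is a sketch of the standard textbook formula, not the authors' own computation; the function name is ours.

```python
from math import sqrt

def wald_ci_half_width(successes: int, n: int, z: float = 1.96) -> float:
    """Half-width of the normal-approximation 95% CI for a proportion."""
    p = successes / n
    return z * sqrt(p * (1 - p) / n)

prevalence = 246 / 1269                      # patients with retinopathy in 2007
half_width = wald_ci_half_width(246, 1269)   # about 0.022
incidence = 4 / 1269                         # new cases over one year

print(f"prevalence = {prevalence:.1%} ± {half_width:.1%}")
print(f"incidence  = {incidence:.1%}")
```

The computed half-width is about ±2.2%, close to the reported ±2.1% (the small gap may reflect rounding or a slightly different interval formula), and the incidence of 4/1269 rounds to the reported 0.3%.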
From our 1269 patients, 703 (55.4%) were female and 566 (44.6%) were male. The mean age of this population was 57.7 years (range 14 to 87 years; median 59 years). Mean diabetes duration was 4.8 years (from newly diagnosed to 42 years of diabetes duration; median 4 years). The differences between the parameters are detailed in Table 1.

Table 1. Differences between parameters in 2007 and 2008
Parameter (median)              2007    2008
HbA1c (%)                        7.7     7.2
Cholesterol (mg/dl)              194     188
Triglycerides (mg/dl)            154     153
HDL (mg/dl)                       46      48
BMI (kg/m²)                     30.8      31
Abdominal circumference (cm)     110     112

In 2007 we recorded 871 (68.6%) patients registered with hypertension. In 2008 this number increased to 933 (73.5%). The results of the Ordered Logit Model analysis of the differences between parameters from 2007 to 2008 and their influence on diabetic retinopathy are presented in Table 2. The total number of patients with diabetic retinopathy increased from 246 in 2007 to 250 in 2008. The number of patients with mild non-proliferative diabetic retinopathy decreased from 94 (38.2%) in 2007 to 83 (33.2%) in 2008. The number with moderate non-proliferative diabetic retinopathy decreased from 78 (31.7%) in 2007 to 74 (29.6%) in 2008. The number with severe non-proliferative diabetic retinopathy increased from 52 (21.1%) in 2007 to 60 (24.0%) in 2008. The number with proliferative diabetic retinopathy increased from 22 (8.9%) in 2007 to 33 (13.2%) in 2008. Table 2.
Influence of modification of different parameters on diabetic retinopathy incidence and progression

Parameter                   Coef.         p-value   95% CI
HbA1c                        0.0139729    0.302     -0.0125491 to 0.0404948
Cholesterol                  0.0006133    0.292     -0.0005288 to 0.0017553
Triglycerides               -0.0002374    0.054     -0.000478 to 3.97e-06
HDL                          0.0004748    0.831     -0.0038902 to 0.0048399
Hypertension                -0.0804481    0.413     -0.2733303 to 0.1124343
BMI                          0.2685842    0.063     -0.0014673 to 0.5518421
Abdominal circumference      0.0869333    0.068     -0.0062657 to 0.1801323

The incidence of diabetic retinopathy was 4 in 1269, or 0.3%. The total number of patients with diabetic maculopathy increased from 40 in 2007 to 46 in 2008. The number of patients with mild diabetic maculopathy decreased from 8 (20.0%) in 2007 to 7 (15.2%) in 2008. The number with moderate diabetic maculopathy decreased from 8 (20.0%) in 2007 to 4 (8.6%) in 2008. The number with severe diabetic maculopathy increased from 24 (60.0%) in 2007 to 35 (76.1%) in 2008. Discussion The Diabetes Control and Complications Trial (DCCT) established that every 1% decrease in HbA1c reduces the risk of new-onset diabetic retinopathy by 69% [6]. In the CURES Eye Study the authors noted that every 2% increase in HbA1c raises the risk of developing any retinopathy by 1.7 times [7]. In the DCCT, the reduction of retinopathy risk appeared only after 6.5 years of intensive treatment [8]. Similar results were found for type 2 diabetic patients in the UKPDS study: the relative risk between intensive and conventional treatment was 0.83 (95% CI 0.67-1.01) after 6 years of follow-up, and only after 12 years of follow-up did it reach 0.79 (95% CI 0.63-1.00) [9].
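The coefficients, confidence intervals, and p-values in Table 2 can be cross-checked against one another: for a Wald-type interval, the standard error is the CI half-width divided by the normal critical value, and the two-sided p-value follows from the resulting z statistic. A standard-library sketch (our own check, not part of the original analysis) applied to the HbA1c row:

```python
from statistics import NormalDist

def p_from_ci(coef: float, lo: float, hi: float, conf: float = 0.95) -> float:
    """Recover a two-sided Wald p-value from a coefficient and its CI."""
    z_crit = NormalDist().inv_cdf(0.5 + conf / 2)   # ~1.96 for a 95% CI
    se = (hi - lo) / (2 * z_crit)                   # CI half-width / z
    z = coef / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# HbA1c row of Table 2
p_hba1c = p_from_ci(0.0139729, -0.0125491, 0.0404948)
print(round(p_hba1c, 3))
```

This reproduces the reported p = 0.302 for HbA1c (and, for example, p = 0.831 for the HDL row), suggesting the table's intervals are plain Wald intervals around the maximum-likelihood coefficients.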
Both studies note that a long period of good glycaemic control is needed before a significant reduction in the relative risk of diabetic retinopathy appears. More recently, the Action in Diabetes and Vascular Disease: Preterax and Diamicron MR Controlled Evaluation (ADVANCE) Retinal Measurements (AdRem) study established, after a period of 4.1 years, that there was no evidence of an effect of intensive glucose control or intensive blood pressure control on the incidence and progression of diabetic retinopathy. However, some changes (maculopathy or microaneurysms) which are strongly correlated with diabetic retinopathy were reduced after intensive glycaemic control [10]. In our study, we obtained an improvement in glycaemic control (the difference in HbA1c between 2007 and 2008 was 0.5%). Unfortunately this small difference had no effect on the incidence and progression of diabetic retinopathy (p=0.302). It is important to mention that in the vast majority of studies a significant influence appeared only after a long period of time. As in the majority of studies, the improvement in HbA1c was associated with weight gain: in our case, an improvement in HbA1c of 0.5% was associated with an increase in BMI of 0.2 kg/m² and an increase in abdominal circumference of 2 cm. Again, this had no significant influence on the development of diabetic retinopathy (p=0.063 for BMI and p=0.068 for abdominal circumference). Some studies have found a positive correlation between the regression of hard exudates and intensive, aggressive lipid-lowering treatment [11]. In our study we obtained a relative improvement in lipid control (cholesterol decreased from 194 mg/dl to 188 mg/dl, triglycerides decreased from 154 mg/dl to 153 mg/dl, and HDL increased from 46 mg/dl to 48 mg/dl).
Again, these very small differences in lipid profile had no significant influence on the incidence and progression of diabetic retinopathy (p=0.292 for cholesterol, p=0.054 for triglycerides, and p=0.831 for HDL). Regarding hypertensive patients, we found no significant influence on the development of diabetic retinopathy (p=0.413). This result is similar to the observation registered in the ADVANCE study. In our study there was no significant relation between any of these parameters and the incidence and progression of diabetic retinopathy. The major limitation was the short period used for the evaluation of these differences (only one year); a longer period (more than 6 years) would be needed to obtain relevant results. Conclusions The incidence of diabetic retinopathy was 4 in 1269, or 0.3%. After one year, improvements in diabetes management had no significant influence on the progression of diabetic retinopathy. A long period of good metabolic control is necessary before significant reductions in the progression of diabetic retinopathy can be obtained in diabetic patients. Conflict of Interest The authors declare that they have no conflict of interest. References 1. US Centers for Disease Control and Prevention. National Diabetes Fact Sheet: General Information and National Estimates on Diabetes in the United States, 2005. Atlanta, GA: Centers for Disease Control and Prevention; 2005 [cited 2009 January]. Available from: URL: http://www.cdc.gov/diabetes/pubs/pdf/ndfs_2005.pdf 2. Klein R, Klein BE, Moss SE, Cruickshanks KJ. The Wisconsin Epidemiologic Study of Diabetic Retinopathy, XVII: the 14-year incidence and progression of diabetic retinopathy and associated risk factors in type 1 diabetes. Ophthalmology 1998;105(10):1801-1815. 3. Mohamed Q, Gillies MC, Wong TY. Management of Diabetic Retinopathy: A systematic review. JAMA 2007;298(8):902-916. 4. Mociran M, Dragoş C, Hâncu N. Risk Factors and Severity of Diabetic Retinopathy in Maramureş.
Applied Medical Informatics 2009;24(1-2):47-52. 5. International Diabetes Federation. Global Guideline for Type 2 Diabetes. Chapter 13: eye screening [internet]. 2005 [cited 2009 January]. Available from: URL: http://www.idf.org/webdata/docs/GGT2%2013%20Eye%20screening.pdf 6. Diabetes Control and Complications Trial Research Group. The effect of intensive treatment of diabetes on the development and progression of long-term complications in IDDM. N Engl J Med 1993;329:977-986. 7. Rema M, Premkumar S, Anitha B, Deepa R, Pradeepa R, Mohan V. Prevalence of diabetic retinopathy in urban India: the Chennai Urban Rural Epidemiology Study (CURES) Eye Study. Invest Ophthalmol Vis Sci 2005;46:2328-2333. 8. The Diabetes Control and Complications Trial/Epidemiology of Diabetes Interventions and Complications Research Group. Retinopathy and nephropathy in patients with type 1 diabetes four years after a trial of intensive therapy. N Engl J Med 2000;342:381-389. 9. UK Prospective Diabetes Study (UKPDS) Group. Intensive blood-glucose control with sulphonylureas or insulin compared with conventional treatment and risk of complications in patients with type 2 diabetes (UKPDS 33). Lancet 1998;352:837-853. 10. Beulens JWJ, Patel A, Vingerling JR, Cruickshank JK, Hughes AD, Stanton A, Lu J, Thom SAMcG, Grobbee DE, Stolk RP; on behalf of the AdRem project team and ADVANCE management committee. Effects of blood pressure lowering and intensive glucose control on the incidence and progression of retinopathy in patients with type 2 diabetes mellitus: a randomized controlled trial. Diabetologia 2009;52(10):2027-2036. 11. Cusick M, Chew EY, Chan CC, Kruth HS, Murphy RP, Ferris FL. Histopathology and regression of retinal hard exudates in diabetic retinopathy after reduction of elevated serum lipid levels. Ophthalmology 2003;110:2126-2133.
work_x4iad7n5k5glrdl2rc7kkfxj6y ---- Psychological Science, OnlineFirst, published online 8 November 2013. DOI: 10.1177/0956797613499592. Denise C. Park, Jennifer Lodi-Smith, Linda Drew, Sara Haber, Andrew Hebrank, Gérard N. Bischof and Whitley Aamodt. The Impact of Sustained Engagement on Cognitive Function in Older Adults: The Synapse Project. Published by SAGE Publications on behalf of the Association for Psychological Science.
Psychological Science XX(X) 1-10 © The Author(s) 2013 Reprints and permissions: sagepub.com/journalsPermissions.nav DOI: 10.1177/0956797613499592 pss.sagepub.com Research Article Despite the tremendous strides made in scientifically based recommendations for promoting physical health in adulthood, less is known about what one should do to maintain cognitive health. As baby boomers age, the issue of maintaining healthy cognitive function has become a problem of increasing social urgency. There is a considerable amount of correlational data suggesting that individuals who are engaged in intellectual and social activities in middle and late adulthood fare better cognitively than their less active peers. For example, self-reports of higher participation in cognitive, leisure, and social activities are related to better cognitive ability in middle-aged adults (Singh-Manoux, Richards, & Marmot, 2003) and are even associated with a decreased risk of being diagnosed with Alzheimer's disease (Wilson et al., 2002; Wilson, Scherr, Schneider, Li, & Bennett, 2007). Such results are intriguing, but there is surprisingly little evidence that lifestyle engagement maintains or improves cognitive function (Hertzog, Kramer, Wilson, & Lindenberger, 2008). No doubt the reason is the difficulty of translating this hypothesis into an experimental design in which volunteers agree to be randomly assigned to conditions that significantly alter their daily experiences for a sustained period. Two studies to date have approached this issue.
In one study, participants in the Senior Odyssey program engaged in diverse problem-solving activities in a group-based competition that spanned approximately 5 months and showed small but reliable improvements in speed of processing, inductive reasoning, and divergent thinking skills when compared with no-treatment control participants (Stine-Morrow, Corresponding Author: Denise C. Park, The University of Texas at Dallas—Center for Vital Longevity, Suite 800, 1600 Viceroy Ave., Dallas, TX 75235 E-mail: denise@utdallas.edu The Impact of Sustained Engagement on Cognitive Function in Older Adults: The Synapse Project Denise C. Park1,2, Jennifer Lodi-Smith3, Linda Drew1, Sara Haber1, Andrew Hebrank1, Gérard N. Bischof1,2, and Whitley Aamodt1 1Center for Vital Longevity, University of Texas at Dallas; 2School of Behavioral and Brain Sciences, University of Texas at Dallas; and 3Department of Psychology, Canisius College Abstract In the research reported here, we tested the hypothesis that sustained engagement in learning new skills that activated working memory, episodic memory, and reasoning over a period of 3 months would enhance cognitive function in older adults. In three conditions with high cognitive demands, participants learned to quilt, learned digital photography, or engaged in both activities for an average of 16.51 hr a week for 3 months. Results at posttest indicated that episodic memory was enhanced in these productive-engagement conditions relative to receptive-engagement conditions, in which participants either engaged in nonintellectual activities with a social group or performed low-demand cognitive tasks with no social contact.
The findings suggest that sustained engagement in cognitively demanding, novel activities enhances memory function in older adulthood, but, somewhat surprisingly, we found limited cognitive benefits of sustained engagement in social activities. Keywords cognitive aging, intervention, engagement, cognitive training, aging cognition, episodic memory, cognitive reserve, working memory Received 12/18/12; Revision accepted 7/2/13 Parisi, Morrow, & Park, 2008). In another intervention project, older adults taking part in Experience Corps spent sustained periods partnered with elementary school students, teaching them literacy skills, library skills, and classroom etiquette over a prolonged period. When compared with a wait-list control group, these adults showed improvements in executive function and memory (Carlson et al., 2008). These findings are encouraging, but many questions about the impact of sustained engagement on cognitive function remain (for a review, see Stine-Morrow & Basak, 2011). We examined the impact of sustained engagement on cognitive function in older adults using multiple control conditions, building on a distinction between productive engagement versus receptive engagement. These two types of engagement are differentiated by the cognitive operations they involve. Productive engagement refers to activities that require active learning and sustained activation of working memory, long-term memory, and other executive processes. In contrast, receptive engagement refers to activities that rely on passive observation, activation of existing knowledge, and familiar activities, rather than the acquisition of novel information and engagement in cognitively challenging tasks (Park, Gutchess, Meade, & Stine-Morrow, 2007).
We created an environment called “Synapse” to investigate the hypothesis that productive engagement is more likely than receptive engagement to lead to improvements in cognition due to sustained activation of core cognitive abilities. Although the cognitive-training literature suggests that older adults can achieve gains in processing speed, working memory, and episodic memory when they train a particular ability over a prolonged period (Ball et al., 2002), there is little evidence that the training transfers to other domains (although see Anguera et al., 2013; Basak, Boot, Voss, & Kramer, 2008). The Synapse Project differs from cognitive training in that subjects agree to make a lifestyle change and learn a new, real-world skill in a social environment that demands extended use of core cognitive abilities. In the present study, participants were enrolled for 3 months in one of six lifestyle conditions, five of which required 15 hr of weekly engagement in structured activities. The three productive-engagement conditions were (a) the photo condition, in which novice participants learned digital-photography and computer skills using photo-editing software; (b) the quilt condition, in which novice participants learned how to design and sew quilts; and (c) the dual condition, in which participants spent half of the 3-month period engaged in quilting and the other half in photography. These conditions involved continual learning of new and increasingly complex tasks over a prolonged period. Participants in the photo condition learned to operate a single-lens reflex camera (which they had to remember how to use when off-site) and also acquired considerable skill in complex software operations for photo editing and production. The manipulation was particularly demanding of executive function, long-term memory, and reasoning.
In the quilt condition, participants learned to piece together and visualize abstract shapes to form complex, integrated patterns, in addition to learning the many operations associated with a software-driven sewing machine; hence, in this condition, there was a strong focus on visuospatial working memory and reasoning. The receptive-engagement conditions were (a) the social condition, in which participants engaged in on-site, facilitator-led social interactions, field trips, and entertainment; and (b) the placebo condition, in which participants engaged in tasks at home that appeared to be beneficial to cognition but had no substantiated link to cognitive improvement (e.g., listening to classical music, completing word-meaning puzzles). Finally, the sixth condition, which did not require a 15-hr time commitment per week, was a no-treatment control condition. We hypothesized that the participants assigned to the productive-engagement conditions would show improved cognition relative to those in the receptive-engagement conditions. Moreover, we expected that participants in the photo condition would show greater improvement in verbal memory, whereas those in the quilt condition would show more improvement in visuospatial abilities. The inclusion of the social condition provided information about whether socializing alone without formal learning can produce cognitive gains. Although the social condition had few formal cognitive demands, it did involve meeting new people and learning their names, so it was more cognitively demanding than the placebo and no-treatment conditions, but far less demanding than any of the productive conditions. The failure to include a social control group has been a serious limitation of previous lifestyle-engagement studies; including such a condition allowed us to determine the role that social interactions play in facilitation effects associated with engagement.
Method Participants A total of 259 participants were enrolled in the study, with 221 completing the full 14-week program (completion rate = 85%).1 Participants ranged in age from 60 to 90 (M = 71.67 years); demographic information can be found in Table 1. Participants could be included in the study if they had at least a tenth-grade education, were fluent in English, worked or performed volunteer activities for no more than 10 hr per week, were novices at both quilting and digital photography, and used a computer only for social networking and for less than 10 hr per week. Additional eligibility requirements included visual acuity (20/40 vision or corrected to 20/40 vision; Snellen, 1862), a minimum score of 26 on the Mini-Mental State Examination (Folstein, Robins, & Helzer, 1983), and no major psychiatric disorders. Overview of study Prior to the study, all prospective participants attended a detailed information session in which the six study conditions were described and the importance of random assignment was explained. The potential for cognitive improvement was emphasized in all conditions except for the no-treatment control condition. In an effort to ensure that participants would perform an activity of some interest to them, we allowed them to exclude one of the three productive-engagement conditions (photo, quilt, or dual) to which they could have been randomly assigned. The productive-engagement groups met over a 14-week period in a project-specific space (which we called the “Synapse Center”) located in a strip mall in Dallas, Texas. The Synapse Center was a learning environment that was available to participants 35 hr per week and included two large activity spaces for quilting and photography and a large area for socializing.
We had the three remaining groups (social, placebo, no treatment) meet at a different Synapse site 1.5 miles away to prevent interactions between productive- and receptive-engagement participants. Data collection took place in five waves of assessment between August 2008 and May 2011. All data remained sealed until the last participant was assessed, and no data were analyzed until the study was finished. Productive-engagement conditions Participants assigned to the productive-engagement conditions were directed to spend an average of 15 hr per week in the Synapse environment: 5 hr of formal instruction and 10 hr completing course assignments. Participants received instruction in groups of six. The three engagement conditions are described in the following sections. Photo condition. Participants were instructed by a professional photographer who trained them to use cameras and develop computer skills required to use professional photography software for photo editing. This condition was particularly demanding of episodic verbal memory and reasoning, given that participants had to remember many complex verbal instructions to use both the software and camera. On average, participants spent 15.84 hr (SD = 1.95) per week working on projects. Quilt condition. The quilt condition had the same format as the photo condition and was under the direction of a professional quilting instructor. All participants learned basic skills and progressed to complete complex, individual projects using computer-driven sewing machines. On average, participants who completed the program spent 15.93 hr (SD = 2.55) working on projects per week. Dual condition. The dual condition included training in both digital photography and quilting for 6.5 weeks each; in the final week of the study, participants could complete projects in either class. The order of the two types of training was counterbalanced across participants.
The instructors were the same as in the photo and quilt conditions. This condition had more breadth of stimulation but less depth in each particular domain. On average, participants spent 18.11 hr (SD = 4.48) working on projects each week.

Table 1. Demographic Information for Participants in All Conditions
Condition                      Mean age (years)   Mean education (years)   Female (%)   Minority (%)
All conditions (N = 221)       71.67 (7.29)       16.02 (3.06)             73.9         14.2
Photo (n = 29)                 72.83 (6.70)       16.16 (3.10)             65.5         6.5
Quilt (n = 35)                 71.69 (6.67)       15.54 (2.34)             74.3         13.5
Dual (n = 42)                  69.74 (7.00)       16.92 (3.00)             64.3         11.9
Social (n = 36)                72.14 (8.06)       16.58 (2.97)             86.1         16.2
Placebo (n = 39)               70.97 (7.12)       15.79 (2.76)             84.6         20.5
No treatment (n = 40)          73.08 (7.87)       15.41 (3.53)             72.5         15.4
Significance of group effect   p = .34            p = .17                  p = .14      p = .34
Note: Standard deviations are shown in parentheses.

Control conditions Social condition. The social condition mimicked a social club: It involved instructor-directed activities, such as cooking, playing games, watching movies, reminiscing, and going on regular field trips organized around a different topic, such as travel or history, each week. The social-group curriculum relied as much as possible on participants' existing knowledge, with no formal knowledge acquisition. Games could be won largely by chance, with low requirements for strategy. The social activities involved no active skill acquisition. As in the productive-engagement conditions, participants in the social condition were directed to complete 5 hr of common structured activities and 10 hr or more of additional activities on-site with other group members each week. The social-condition participants spent an average of 15.90 hr (SD = 1.63) on social activities each week. Placebo condition.
For 15 hr per week, participants performed a structured set of activities that relied on activation of existing knowledge or activities that have not been reliably linked by empirical evidence to cognitive improvement but are commonly thought of as being cognitively engaging. Each week, participants were provided with an assigned packet of materials for 5 hours' worth of activities (i.e., documentaries, informative magazines such as National Geographic, word games relying on knowledge, and classical-music CDs) and were asked to select at least 10 hr of additional activities from the “Brain Library” (a collection of magazines, DVDs, CDs, and crossword puzzles). Participants recorded the time they spent on the activities and visited the site for a few minutes at a scheduled time each week to pick up and drop off weekly assignments. Participants spent an average of 17.22 hr (SD = 2.50) on these activities each week. No-treatment condition. Participants in the no-treatment condition were required only to complete a weekly checklist of their daily activities, which was dropped off at the research site at a scheduled time each week. Cognitive battery Each participant completed a battery of pre- and postintervention cognitive tests and psychosocial questionnaires. Testers were blind to condition assignment and were not involved in the intervention. Testing included both paper-and-pencil and computerized tasks. The cognitive constructs assessed and the tasks associated with the constructs were as follows: • Processing speed, assessed using digit-comparison tasks with three, six, and nine items (Salthouse & Babcock, 1991). • Mental control, assessed using Flanker Center Letter, Flanker Center Arrow, and Flanker Center Symbol tasks (modified from Eriksen & Eriksen, 1974) and the Cogstate Identification Task (http://www.cogstate.com).
• Episodic memory, assessed using the immediate- recall section of the modified Hopkins Verbal Learn- ing Task (Brandt, 1991), the Cambridge Neuropsycho - logical Test Automated Battery (CANTAB) Verbal Recognition Memory Task (Robbins et al., 1994), and the long-delay section of the modified Hopkins Verbal Learning Task (Brandt, 1991). • Visuospatial processing, assessed using the CANTAB Spatial Memory Task, the CANTAB Stockings of Cambridge Task, and a modified version of Raven’s Progressive Matrices (Raven, 1976). Analysis and Results We modeled our analysis after that used for the Advanced Cognitive Training for Independent and Vital Elderly (ACTIVE) trial, the largest cognitive intervention reported to date (Ball et al., 2002). We standardized the scores for each cognitive measure by pooling the two scores (pre- test and posttest) for each participant across all experi- mental conditions and applying an inverse-normal transformation on rank-ordered scores in this pool using a weighting suggested by Blom (1958). The normalized task scores for the pretest and posttest were then adjusted to the means and standard deviations of pretest scores (Ball et al., 2002). We note that gender had no significant effects when included in reported analyses. Cognitive constructs We conducted an exploratory factor analysis using an oblimin rotation on the pretest normalized scores for each cognitive measure described above, which resulted in a clear four-factor structure, χ2(41, N = 221) = 67.0, p < .01. We found measurement invariance across conditions, and the same structure fit the posttest data. Given the clear factor structure and its match to a priori theoretical constructs, we accepted the factors (processing speed, mental control, episodic memory, and visuospatial pro- cessing). The normalized scores associated with each construct were averaged to produce one factor score per individual for each testing session. Missing test scores were not imputed. 
Each construct was reliable, as was test-retest reliability. We note that the no-treatment con- trol condition was included so that we could calculate test-retest reliability, but this group was not included in any further analyses (Nunnally, 1978). We also found that across conditions, participants did not differ in their ini- tial performance on any of the cognitive constructs, and by Clinton Chapman on November 19, 2013pss.sagepub.comDownloaded from http://pss.sagepub.com/ http://pss.sagepub.com/ Synapse: Engagement Intervention 5 that, despite the restricted range of ages (60–90), there was significant age-related cognitive decline on all of the tasks (Horn & Cattell, 1967; Park et al., 2002). Table 2 presents these data. Cognitive-intervention analyses Productive versus receptive engagement. To test the hypothesis that productive engagement was more facilita- tive of cognition than was receptive engagement, we con- trasted the three productive-engagement conditions (quilt, photo, and dual) with the two receptive-engagement con- ditions (social and placebo). A 2 × 2 analysis of variance (ANOVA) was conducted with condition (productive vs. receptive) as the between-subjects variable and time (pretest vs. posttest) as the within-subjects factor on each cognitive construct. We observed a significant Condition × Time interaction for episodic memory, F(1, 179) = 9.63, p = .002, which occurred because the productive-engage- ment groups improved significantly more over time than did the receptive-engagement groups (see Fig. 1). Post hoc analyses of the episodic-memory interaction showed there was no significant difference between the two receptive-engagement conditions (p = .59), nor did the three productive-engagement conditions differ from one another (p = .19). Significant Condition × Time interaction effects were not present for processing speed, mental control, or visuospatial processing. 
We also specifically tested the hypothesis that productive engagement was more facilitative of cognition than social engagement and Table 2. Age Correlations and Reliability for Cognitive Constructs Cognitive construct and measure Dependent variable Correlation with age Composite reliability (Cronbach’s α) Test-retest reliability Processing speed .88 .87 Digit comparison: three-item trials Total items correct –.31** — — Digit comparison: six-item trials Total items correct –.25** — — Digit comparison: nine-item trials Total items correct –.23** — — Mental control .83 .80 Cogstate Identificationa Log RT to a two-alternative forced-choice decision .23** — — Flanker Center Lettera RT for incongruent trials following congruent trials .19** — — Flanker Center Symbola RT for incongruent trials following congruent trials .22** — — Flanker Center Arrowa RT for incongruent trials following congruent trials .17** — — Episodic memory .83 .80 CANTAB Verbal Recall Memory Total items correct on verbal free recall –.23** — — Hopkins Verbal Learning Task (Immediate) Total items correct on Trials 1, 2, and 3 –.22** — — Hopkins Verbal Learning Task (Delayed) Total items correct on Trials 1, 2, and 3 after a 20-min delay –.20** — — Visuospatial processing .77 .61 CANTAB Stockings of Cambridge Problems solved in the minimum number of moves –.16* — — Modified Raven’s Progressive Matrices Correct items (out of 18) –.13* — — CANTAB Spatial Working Memorya Times a box where a token had previously been found was revisited .29** — — CANTAB Spatial Working Memorya Number of times a new search was begun with the same box .15* — — Note: Test-retest reliabilities were calculated from the no-treatment condition. 
The digit-comparison tasks were drawn from Salthouse and Babcock (1991); the Cogstate Identification Task was drawn from http://www.cogstate.com; the flanker tasks were drawn from Eriksen and Eriksen (1974); the Cambridge Neuropsychological Test Automated Battery (CANTAB) tasks were drawn from Robbins et al. (1994); the Hopkins Verbal Learning Task measures were drawn from Brandt (1991); and the modified version of Raven’s Progressive Matrices was based on Raven (1976). RT = response time. aFor these tests, an age-associated decline in performance is represented by a positive correlation. *p = .05. **p = .001. by Clinton Chapman on November 19, 2013pss.sagepub.comDownloaded from http://pss.sagepub.com/ http://pss.sagepub.com/ 6 Park et al. found that the productive-engagement groups improved more than the social group from pretest to posttest, F(1, 140) = 4.40, p < .04. Specific effects of intervention. The pretest and post- test transformed scores for each condition and cognitive domain are presented in Table S2. To determine the effects of different types of productive engagement, we compared each productive-engagement condition with the placebo condition. Thus, for example, for the analysis comparing the photo and placebo conditions, a 2 × 2 repeated measures ANOVA was conducted with condi- tion (photo vs. placebo) as the between-subjects variable and Time (pretest vs. posttest) as the within-subjects vari- able for each cognitive construct. In this analysis, we found a significant Condition × Time interaction for epi- sodic memory, F(1, 66) = 11.09, p = .01, with a net effect size of .54.2 We also found a marginally significant inter- action for visuospatial processing, F(1, 66) = 3.43, p = .07, with an effect size of .28, due to greater improvement in the photo condition. 
The analysis comparing the dual and placebo conditions also yielded a Condition × Time interaction for episodic memory, a result due to greater improvement in the dual condition, F(1, 79) = 3.83, p = .05, with a net effect size of .22. We also observed a Con- dition × Time interaction for processing speed in this analysis, F(1, 79) = 3.10, p = .05, with a net effect size of .29. No significant effects were observed in the compari- son of the quilt and placebo conditions. Figure 2 presents gain scores for episodic memory (standardized posttest scores minus pretest scores) as a function of condition for all of the cognitive domains. We note that when the comparisons shown in Figure 2 were corrected with a Bonferroni-Holm correction (Holm, 1979) for multiple comparisons, the only significant inter- action that remained was the episodic memory effect observed in the photo-versus-placebo comparison. We also assessed whether learning photography skills was more facilitative of cognition than socializing alone by comparing the photo condition with the social condition (rather than the placebo condition), and found that the episodic-memory effect remained significant, F(1, 63) = 8.70, p = .01. To further explicate the intervention effect on episodic memory at the individual level, we present percent reliable change (Ball et al., 2002),3 defined as improvement on the posttest relative to the pretest that was greater than 1 stan- dard error of measurement, for each participant in the five intervention conditions (Fig. 3). Figure 3 demonstrates that the proportion of participants showing reliable improve- ment in the photo, quilt, and dual conditions was .76, .60, and .57, respectively. The social and placebo groups improved less, with the proportions of participants show- ing improvements at .47 and .46, respectively. 
Discussion The present study represents a serious attempt to change everyday lifestyles in older adults for a period of 3 months and ascertain the impact of different types of lifestyle changes on cognitive function in an elderly sample. Three of the conditions involved productive engagement, that is, participants learned novel and demanding new skills for 15 hr or more per week over the 3-month period. These conditions were contrasted with a recep- tive-engagement condition (the social control condition) in which participants engaged in novel activities and socialized for 15 hr a week but did not actively acquire new skills. This manipulation allowed us to dissociate the impact of socializing and other novel aspects of the situ- ation in the social condition from active skill and knowl- edge acquisition. This important condition has been omitted from past intervention studies that examined the impact of engagement on cognition. Additionally, the inclusion of a placebo condition, in which participants had limited social interactions and worked alone on tasks that they believed would improve cognition, provided an appropriate baseline against which to assess the impact of the other interventions. The results can be summarized as follows. First, we found that productive engagement (in the quilt, photo, and dual conditions) caused a significant increase in epi- sodic memory compared with receptive engagement (in the social and placebo conditions). A further comparison demonstrated that the three productive-engagement Episodic Memory 0.0 0.2 0.4 0.6 Pretest Posttest Time Sc or e Productive-Engagement Condition Receptive-Engagement Condition Fig. 1. Normalized mean score for episodic memory as a function of condition and time. Error bars represent ±1 SE. 
by Clinton Chapman on November 19, 2013pss.sagepub.comDownloaded from http://pss.sagepub.com/ http://pss.sagepub.com/ Synapse: Engagement Intervention 7 * † ** * † Processing Speed Mental Control Episodic Memory Visuospatial Processing 0.00 0.25 0.50 0.75 0.00 0.25 0.50 0.75 Photo Quilt Dual Social Placebo Photo Quilt Dual Social Placebo Condition St an da rd iz ed G ai n Sc or e St an da rd iz ed G ai n Sc or e Condition Fig. 2. Mean standardized gain score as a function of condition for each cognitive construct. The standardized scores from the posttest were subtracted from standardized scores from the pretest, yielding the mean standardized gain scores for each cognitive construct. Error bars represent ±1 SE. Asterisks represent significant differences between conditions (*p = .05; **p = .01); daggers represent marginally significant differences between conditions (p = .10). ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ●●● ●● ●●● ● ●●●● ● ●●●● ●●●●● ●●● ●● ● ● ●● ●●●●● ●● ●● ● ●●●●●● ● ●●● ●●● ●● ● ●●●● ●●●●●● ● ●● ●● ● Photo Quilt Dual Social Placebo −1 0 1 2 −1 0 1 2 Individual Participants St an da rd iz ed G ai n Sc or e St an da rd iz ed G ai n Sc or e ● ● ● ● ● ● ●● ●●● ●●●●● ●● ●●● ●●●●● ●●●●● ●●● ●●● ● ● ● ● ● ●● ●●●●● ●●●●● ●● ● ●● ●●●●● ●●●●● ●● ●● ● ● Fig. 3. Standardized gain score for episodic memory for each participant. Results are shown separately for each condition. The dashed horizontal lines represent the standard error of measurement (the upper line is +1 SEM, and the lower line is −1 SEM). Vertical lines above the dashed horizontal line represent a reliable positive change in performance, and vertical lines below the dashed line indicate a reliable negative change in performance. by Clinton Chapman on November 19, 2013pss.sagepub.comDownloaded from http://pss.sagepub.com/ http://pss.sagepub.com/ 8 Park et al. groups were superior in episodic memory when com- pared with the social group alone. 
Thus, we found evi- dence that sustained effort to acquire a demanding new skill improved episodic memory and no evidence sug- gesting that socializing, information exchange, and nov- elty alone facilitated cognitive function. Second, our more fine-grained analyses of specific conditions showed that participants in the photo and dual conditions exhib- ited a significant improvement in episodic memory, whereas the effect was not significant for those in the quilt condition (p = .11) but was in the direction of facili- tation. We also found some evidence that participants in the photo condition showed an improvement in visuo- spatial processing and that those in the dual condition improved their processing speed. Overall, the results suggested that learning digital pho- tography, either alone or in combination with learning to quilt, had the most beneficial effect on cognition, and that the positive impact was primarily on memory func- tion. We note that the photo condition was considerably more demanding of episodic memory, and this may explain its greater facilitative impact relative to quilting: In the photo condition, there was a great deal of informa- tion presented to novice users of computers and cameras regarding complex photographic software, whereas the quilt condition had a somewhat stronger procedural component after the initial skill-acquisition period. The finding of improved episodic memory as a function of engagement without direct memory training is similar to that reported for the Experience Corps trial, in which participants worked with school children over the course of an academic year (Carlson et al., 2008), and is also similar to findings from a study in which older adults showed episodic-memory improvement as a result of theatrical training (Noice, Noice, & Staines, 2004). A question that emerges is why episodic memory seems more sensitive to improvement than other cogni- tive abilities. 
One possibility is that of the abilities mea- sured, episodic memory is the most strategic and the most reliant on use of existing knowledge, given that there is clear evidence that self-imposed organizational strategies enhance memory (for a review, see Verhaeghen, Marcoen, & Goossens, 1992). Perhaps sustained partici- pation in engaging activities facilitated strategic, organi- zational behaviors. A second alternative is that the facilitation in memory occurred because engagement enhanced attentional capabilities and freed cognitive resources for encoding and retrieval. We believe the strat- egy hypothesis is more likely, because an increase in attentional resources should have resulted in broad improvements across all measured abilities. Neuroimaging data could provide definitive information about how underlying networks changed with the intervention and could greatly enhance our understanding of the underly- ing causal mechanisms. Another possible interpretation of the observed effects is that because participants in the productive-engagement groups mastered specific skills, they had stronger beliefs that the intervention was improving their memory, which in turn enhanced their performance compared with that of participants in the placebo and social conditions. This seems unlikely. We examined participants’ performance on the Metamemory in Adulthood Questionnaire (Dixon & Hultsch, 1984), which included a subscale for self-rated memory capacity. If there were differences between condi- tions regarding the perceived effectiveness of the assigned intervention, there would have been a disproportional change in perceived memory capacity across conditions. Importantly, we found no significant differences across conditions in either pretest perceived memory capacity (p = .23) or changes in perceived memory capacity at post- test (p = .69). 
We also found no differences between the productive- and receptive-engagement groups in other psychosocial measures such as well-being and depression. Finally, the productive- and receptive-engagement groups were run at separate sites to minimize participants’ expo- sure to the differences in the challenges faced by produc- tive versus receptive groups. To summarize, the present study is perhaps the most systematic and complete study of the impact of engage- ment in novel, cognitively challenging activities on cogni- tion in older adults. We recognize that the findings yield at least as many questions as answers. Nevertheless, the research provides clear evidence that memory function is improved by engagement in demanding everyday tasks. We found no cognitive benefit of social engagement, a confounding variable in most previous studies. Neverthe- less, we believe more work needs to be done on social engagement before this finding is viewed as definitive. This research is particularly important because, unlike computer training, productive engagement has the poten- tial to be self-reinforcing and propagate continued learn- ing and intellectual stimulation. Long-term follow-up will be crucial in determining whether facilitation effects are maintained or even enhanced over time. The present results provide some of the first experimental evidence that learning new things and keeping the mind engaged may be an important key to successful cognitive aging, just as folk wisdom and our own intuitions suggest. Author Contributions D. C. Park developed the study concept and directed the proj- ect. J. Lodi-Smith managed the project for 2 years and con- ducted preliminary analyses. Testing and data collection were performed by G. N. Bischof and W. Aamodt. S. Haber and A. Hebrank analyzed and interpreted the data under the supervi- sion of D. C. Park. S. Haber and D. C. Park drafted and edited the manuscript, and L. Drew and A. Hebrank provided critical revisions. 
All authors approved the final version of the manu- script for submission. by Clinton Chapman on November 19, 2013pss.sagepub.comDownloaded from http://pss.sagepub.com/ http://pss.sagepub.com/ Synapse: Engagement Intervention 9 Acknowledgments The authors thank Carol Bardhoff, Katie Berglund, Michael Blackwell, Blair Flicker, Kim Flicker, Janice Gebhard, Mindi Kalosis, Kristen Kennedy, Anna Mortensen, Jenny Rieck, Karen Rodrigue, Linda Sides, Prassana Tamil, and Marcia Wood for assistance with this project. Declaration of Conflicting Interests The authors declared that they had no conflicts of interest with respect to their authorship or the publication of this article. Funding This research was supported by two National Institute on Aging grants to D. C. Park. Both were titled “Active Interventions for the Aging Mind”: Grant 5R01AG026589 and American Recovery and Reinvestment Act (ARRA) Supplement 3R01AG026589- 03S1. Notes 1. Thirty-eight participants were excluded or dropped from the study. Of these, 22 were dropped or excluded for reasons unre- lated to the condition to which they were assigned (e.g., illness, family problems). Attrition of the remaining 16 participants could conceivably have been related to condition (e.g., stated disinterest, concern about the time commitment). Of these 16 participants, 5 were in the photo condition, 2 were in the quilt condition, 2 were in the dual condition, 5 were in the social condition, and 2 were in the no-treatment condition. We found no statistical or even anecdotal evidence that differences were due to condition assignment. 2. Net effect size of intervention was calculated using the following formula: B B B B s i post p post i pre p pre pre − − −( ) ( ) , where spre is the standard deviation at pretest, B i pre and B i post represent pre– and post–Blom transformation scores for the intervention conditions, and B p pre and B p post represent pre– and post–Blom transformation scores for the placebo condition. 3. 
Percent reliable change was calculated using the following formula: B B s s Ri post i pre pre pre− ≥ −1 , where R is the test-retest reli- ability of the each measure that was obtained from the no- treatment control condition. References Anguera, J. A., Boccanfuso, J., Rintoul, J. L., Al-Hashimi, O., Faraji, F., Janowich, J., . . . Gazzaley, A. (2013). Video game training enhances cognitive control in older adults. Nature, 501, 97–101. Ball, K., Berch, D. B., Helmers, K. F., Jobe, J. B., Leveck, M. D., Marsiske, M., . . . Willis, S. L. (2002). Effects of cog- nitive training interventions with older adults. The Journal of the American Medical Association, 288, 2271–2281. doi:10.1001/jama.288.18.2271 Basak, C., Boot, W. R., Voss, M. W., & Kramer, A. F. (2008). Can training in a real-time strategy video game attenuate cognitive decline in older adults? Psychology and Aging, 23, 765–777. Blom, G. (1958). Statistical estimates and transformed beta variables. New York, NY: John Wiley. Brandt, J. (1991). The Hopkins Verbal Learning Test: Devel- opment of a new memory test with six equivalent forms. Clinical Neuropsychologist, 5, 125–142. Carlson, M. C., Saczynski, J. S., Rebok, G. W., Seeman, T., Glass, T. A., McGill, S., . . . Fried, L. P. (2008). Exploring the effects of an “everyday” activity program on executive function and memory in older adults: Experience Corps. Gerontologist, 48, 793–801. Dixon, R., & Hultsch, D. (1984). The Metamemory in Adulthood (MIA) instrument. Psychological Documents, 14, 3. Eriksen, B. A., & Eriksen, C. W. (1974). Effects of noise letters upon the identification of a target letter in a nonsearch task. Perception & Psychophysics, 16, 143–149. Folstein, M. F., Robins, L. N., & Helzer, J. E. (1983). Mini-Mental State Examination. Archives of General Psychiatry, 40, 812. Hertzog, C., Kramer, A. F., Wilson, R. S., & Lindenberger, U. (2008). 
Enrichment effects on adult cognitive development: Can the functional capacity of older adults be preserved and enhanced? Psychological Science in the Public Interest, 9, 1–65. Holm, S. (1979). A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics, 6, 65–70. Horn, J. L., & Cattell, R. B. (1967). Age differences in fluid and crystallized intelligence. Acta Psychologia, 26, 107–129. Noice, H., Noice, T., & Staines, G. (2004). A short-term interven- tion to enhance cognitive and affective functioning in older adults. Journal of Aging and Health, 16, 562–585. Nunnally, J. C. (1978). Psychometric theory. New York, NY: McGraw-Hill. Park, D. C., Gutchess, A. H., Meade, M. L., & Stine-Morrow, E. A. L. (2007). Improving cognitive function in older adults: Nontraditional approaches. Journal of Gerontology: Series B, 62B, 45–52. Park, D. C., Lautenschlager, G., Hedden, T., Davidson, N. S., Smith, A. D., & Smith, P. K. (2002). Models of visuospatial and verbal memory across the adult life span. Psychology and Aging, 17, 299–320. Raven, J. C. (1976). Standard progressive matrices: Sets A, B, C, D, & E. Oxford, England: Oxford Psychologists Press. Robbins, T. W., James, M., Owen, A. M., Sahakian, B. J., McInnes, L., & Rabbitt, P. (1994). Cambridge Neuropsychological Test Automated Battery (CANTAB): A factor analytic study of a large sample of normal elderly volunteers. Dementia, 5, 266–281. Salthouse, T. A., & Babcock, R. L. (1991). Decomposing adult age differences in working memory. Developmental Psychology, 27, 763–776. Singh-Manoux, A., Richards, M., & Marmot, M. (2003). Leisure activities and cognitive function in middle age: Evidence from the Whitehall II study. Journal of Epidemiology & Community Health, 57, 907–913. Snellen, H. (1862). Probebuchstaben zur Bestimmung der Sehschärfe. Utrecht, The Netherlands: P. W. van de Weijer. Stine-Morrow, E. A. L., & Basak, C. (2011). Cognitive interven- tions. In K. W. Schaie & S. L. 
Willis (Eds.), Handbook of by Clinton Chapman on November 19, 2013pss.sagepub.comDownloaded from http://pss.sagepub.com/ http://pss.sagepub.com/ 10 Park et al. the psychology of aging (7th ed., pp. 153–171). New York: Elsevier. Stine-Morrow, E. A. L., Parisi, J. M., Morrow, D. G., & Park, D. C. (2008). The effects of an engaged lifestyle on cogni- tive vitality: A field experiment. Psychology and Aging, 23, 778–786. Verhaeghen, P., Marcoen, A., & Goossens, L. (1992). Improving memory performance in the aged through mnemonic train- ing: A meta-analytic study. Psychology and Aging, 7, 242–251. Wilson, R. S., Mendes De Leon, C. F., Barnes, L. L., Schneider, J. A., Bienias, J. L., Evans, D. A., & Bennett, D. A. (2002). Participation in cognitively stimulating activities and risk of incident Alzheimer disease. The Journal of the American Medical Association, 287, 742–748. Wilson, R. S., Scherr, P. A., Schneider, J. A., Li, Y., & Bennett, D. A. (2007). Relation of cognitive activity to risk of developing Alzheimer’s disease. Neurology, 69, 1911– 1920. by Clinton Chapman on November 19, 2013pss.sagepub.comDownloaded from http://pss.sagepub.com/ http://pss.sagepub.com/ work_x4vjemlsmfb45di6ol2t7qznkm ---- Photo-a-day: a digital photographic practice and its impact on wellbeing This is a repository copy of Photo-a-day: a digital photographic practice and its impact on wellbeing. White Rose Research Online URL for this paper: http://eprints.whiterose.ac.uk/115633/ Version: Accepted Version Article: Cox, A.M. orcid.org/0000-0002-2587-245X and Brewster, L. (2018) Photo-a-day: a digital photographic practice and its impact on wellbeing. Photographies, 11 (1). pp. 113-129. ISSN 1754-0763 https://doi.org/10.1080/17540763.2017.1399288 eprints@whiterose.ac.uk https://eprints.whiterose.ac.uk/ Reuse Items deposited in White Rose Research Online are protected by copyright, with all rights reserved unless indicated otherwise. 
They may be downloaded and/or printed for private study, or other acts as permitted by national copyright laws. The publisher or other rights holders may allow further reproduction and re-use of the full text version. This is indicated by the licence information on the White Rose Research Online record for the item. Takedown If you consider content in White Rose Research Online to be in breach of UK law, please notify us by emailing eprints@whiterose.ac.uk including the URL of the record and the reason for the withdrawal request. mailto:eprints@whiterose.ac.uk https://eprints.whiterose.ac.uk/ Photo-a-day: a digital photographic practice and its impact on wellbeing Andrew Cox, Information School, University of Sheffield, a.m.cox@sheffield.ac.uk * corresponding author Liz Brewster, Lancaster Medical School, Lancaster University, e.brewster@lancaster.ac.uk Abstract The practice of taking and sharing one photo every day for a year, has become a popular new form of photography enabled by the Internet. The purpose of this study was to investigate how people use photo-a-day to enhance their wellbeing. The data for the study were sixteen interviews with people who practised photo-a-day, analysed by thematic analysis. The analysis showed how photos offer positive affordances because of the conventions to take aesthetically pleasing images, share positive events and comment positively. A seemingly simple activity, photo-a-day creates a new layer of interest woven around daily activities, and expands social relationships. Representations of identity are complex, emerging through photos taken, rather than a premeditated profile. 
Keywords: Digital photography, 365 projects, therapeutic photography, social networking, wellbeing, self-help Words: 8381 Photo-a-day 2 Photo-a-day: a digital photographic practice and wellbeing Introduction A literature is forming around the term “positive computing” (Calvo and Peters, 2014), which refers to the design of tools that seek to improve wellbeing and to promote human potential, following the precepts of positive psychology. The authors argue that the convergence of ubiquitous and affective computing, and personal informatics create a context ripe to improve people’s lives by developing applications directly drawing on the theories of positive psychology. For example, a number of projects have sought to develop smartphone apps or tools in Facebook that promote wellbeing along these lines (Sosik and Cosley, 2014; Munson et al., 2010). Yet in doing this they are constrained by the assumptions of positive psychology itself. Positive psychology promotes and seeks to measure the impact on subjective wellbeing of small positive behavioural interventions such as gratitude letters and journaling (Seligman 2003, Seligman and Csikszentmihalyi, 2000). While such interventions have been shown to be effective and models are emerging of how they work (Lyubomirsky and Layous, 2013), the underlying analysis on which they are based seems “overly reductive and normative” (Cieslik, 2015: p.3). The assumption that such activities can be prescribed and their “dosage” determined, ignores the degree of choice in how they are interpreted and the complex impact of changing daily routines on everyday life, as well as the complex nature of wellbeing itself. This paper offers an alternative perspective gained by examining existing online practices that people engage in to enhance their own wellbeing. This allows us explore how people gain benefit from weaving such practices into their daily lives. It also gives us more insight into how people themselves conceive of wellbeing. 
Photo-a-day 3 A popular, but under-researched Internet practice is photo-a-day, also known as the 365 project, where people commit to taking and sharing one photo online, every day, for a year. This is already a popular activity, with many groups on Facebook and Flickr and well- established dedicated sites like Blipfoto. Yet the simple idea of sharing a photo every day is a very rich resource that people realise in many creative ways. The benefits are complex, intangible and self-defined. They cannot be seen as interventions that work in a mechanical way to improve wellbeing, rather it is the choices people make in defining the practice for themselves and the way it affects day to day life, in the context of wider discourses around wellbeing, that produces their effects. Given the importance currently being given to wellbeing, research is needed to map this. Thus the aim of the paper is to explore how people make use of photo-a-day as a photographic practice to enhance their wellbeing and what this tells us about how wellbeing is understood. The paper is organised as follows. It starts by exploring the concept of wellbeing, particularly in the context of critiques of the pervasive influence of therapeutic and self-help cultures. The relatively limited literature on the link between photography and wellbeing is also reviewed. The method of the study based on thematic analysis of interviews is then outlined. The findings explore why photo-a-day is experienced as positive, the complex way that it is realised as a practice and how identity is represented. The discussion brings together what we have learned and the conclusions include practical implications. Wellbeing and therapeutic discourses Economists and psychologists have been at the forefront of raising wellbeing, happiness and quality of life in public and research agendas (Layard, 2005; Seligman, 2002). Yet the definition of such concepts is multiple and contested (Haworth and Hart, 2007). 
For example, what has been variously referred to as “personal”, “mental” or “subjective” wellbeing is widely considered to include hedonic (feeling good) and eudaimonic (functioning well) elements (Huppert, 2014; Dodge et al., 2012). But hedonic positive feeling can itself include several aspects, such as happiness, life satisfaction and positive affect. Influential authors such as Ryff (1989) see eudaimonic wellbeing as made up of six aspects: self-acceptance, positive relations with others, autonomy, environmental mastery, purpose in life and personal growth, aspirations derived from developmental psychology and philosophy. But these now seem rather normative assumptions. In this context it is useful to turn to examine how people define the feelings and good functioning of wellbeing for themselves (Cieslik, 2015). Clearly it is not enough to treat people’s experience of wellbeing as immune to the influence of wider social structures and discourses. Indeed, recent research suggests a strong influence of therapeutic discourses in popular accounts of wellbeing (Hyman, 2014).

A specific target of sociological critique has been how self-help advice for wellbeing is driven by therapeutic ideology. Thus for Rimke, self-help writing reflects contemporary society’s “unprecedented preoccupation with the self” (2000: 61). It reproduces myths about a unitary, essential self that can be discovered, a “final truth” that can be accessed as the foundation for mental wellbeing - as opposed to a view of identity as multiple, relational, emergent and actively constructed and contested. Further, Rimke (2000) suggests that self-help writing articulates a damaging myth of “co-dependence”: a belief that individual woes derive from trying to control others and neglect of the self. This leads to a denial of the sense in which we are interdependent on others.
Therapeutic discourse is more than a contemporary fashion, Rimke (2000) argues; rather, it is a form of governmentality which itself produces the individualised, neo-liberal subject centred on a fetishisation of choice. The influence of such therapeutic culture was found to be an important discourse in lay people’s accounts of wellbeing in Hyman’s (2014) recent study. Although there are a number of discourses in common use, Hyman suggests that the therapeutic culture characterised by Rimke (2000) has a pervasive place in everyday thinking about happiness, which is seen as “individualised, internal and self-orientated” (Hyman, 2014: 147). Relationships are important to people, but intimate and sexual relationships were often seen through an asocial perspective or through a therapeutic lens, e.g. as merely a support to the individual in an insecure world. Happiness was perceived as an individual experience, with few feeling that happiness arose from the happiness of others or society as a whole.

Cieslik (2015) has argued that sociology’s treatment of wellbeing has indeed been dominated by a view of it as a problem. Furedi (2004) has suggested that in a therapy culture people mistake pleasure or fun (often through consumer experiences) for wellbeing and have an expectation that it is easily attained. However, Cieslik (2015) makes the case that in constructing these critiques, sociologists have neglected to investigate sufficiently how people themselves conceive of and experience wellbeing. His own research, based on life history interviews, shows that sociological critiques do not seem to reflect how people think about wellbeing in their everyday lives. People recognise that it requires effort and sacrifice to achieve wellbeing. People are not narcissistic: they recognise the importance of others to their wellbeing.
Further, while the critiques of Rimke (2000) are compelling, Kline (2009) argues that the range of self-help practices is very wide. Dolby (2005) identifies four different notions of self-hood evident in self-help works, including an important one directed to social activism. There are strands of thinking in self-help that are hard not to recognise as liberating for marginalised individuals. Self-organised groups are as much a feature of self-help as texts. Kline sets out some principles for what a “pro-social self-help” might look like: it would be one whose result is self-determination, understood as “the facility of agentic subjects to participate in the meaning-making processes of self-identity” (Kline, 2009: 202). Rather than defining a single path, the group would undercut normative narratives, identify a range of trajectories and enable the individual to make their own choices from them. It would be a collective activity, with an emphasis on sharing personal experience as its foundation (but able to integrate the insights of professional expertise) and based on an assumption of social parity in relation to shared experience.

These arguments prompt us to investigate the range of self-help practices to find out how self-help is realised and to explore how wellbeing is conceived within them. The question is whether such practices perpetuate the myths of self-help, such as that of the essential self and therapeutic discourses, or whether they can realise the more empowering potential of a “pro-social self-help”.

Digital photography and therapeutic photography

One interesting context in which to explore new insights into how people seek to improve their own wellbeing is online. The advent of social media sites and the emergence of a body of people with long-term Internet experience, ubiquitous internet access and greater bandwidth have created new conditions to refashion these kinds of possibilities and for people to manage their own wellbeing.
The visual turn in social media is a particularly interesting aspect of such changes. The explosion in the quantity, range and mobility of images as an aspect of social media has been analysed by a large number of authors (e.g. van Dijk, 2008; Rubinstein and Sluis, 2008). Suler (2009) suggests that among the many photographic practices transformed in the digital revolution, their potential use for “self-insight and personal change” has also expanded. It has become both easier to take photos and to manipulate them, as a means to explore the self. The potential of photos, like other artistic activities, for self-exploration is widely recognised. In the world of therapy, for example, the term “therapeutic photography” has been coined to refer to the use of photography for such purposes as “increasing self-knowledge, awareness, and well-being” (Weiser, n.d.) unmediated by a professional therapist, as opposed to mediated “phototherapy” (Weiser and Krauss, 2009). Outside the therapeutic context, Craig (2009) recognises that photography can promote communication, aid memory, improve self-esteem, strengthen relationships, support change, and is simply an enjoyable hobby. Yet while there has been much research on digital photography, such uses for wellbeing have been relatively under-researched.

Cook’s (2011) thesis is one of the few works to directly explore the link between digital photography and wellbeing outside a therapeutic context. Cook’s (2011) conclusion is that while photography and wellbeing are connected, there is no simple or uni-directional relation between them. This is partly because of the complexity of the concept of wellbeing, but also because of the multiple character of photography. Thus a major type of work within home mode photography he uncovers is maintaining and creating social connections. This contributes to “positive relations with others” as an aspect of eudaimonic wellbeing.
Introspective photographic work, where the audience is the self, helps people identify their level of happiness (an aspect of the hedonic perspective); it also contributes to them having a “sense of autonomy”. Interest/hobby work, another but less pervasive use of photography, is its use to support other hobbies. Again this can be seen to address the social aspect of wellbeing, and other eudaimonic themes such as “environmental mastery”. Thus different types of photography map to different aspects of hedonic or eudaimonic wellbeing; however, the three types of photographic work are not entirely unconnected. Cook’s (2011) work also recognises that efforts to use photography to improve wellbeing may not work. He suggests that aspects of photography, such as organising material, can be boring or even unpleasant, but are necessary to achieve the benefits. This points to the need for more work which examines how wellbeing is affected by particular everyday photographic practices.

Photo-a-day

One of the numerous new practices of digital photography is photo-a-day. The idea is to take and upload one photo every day for a year, usually starting on January 1st. Photo-a-day has proliferated on photo sharing sites like Flickr (Barton, 2012) and Facebook, even spawning dedicated web sites such as Blipfoto and 365project.org. Web sites offering advice on how to complete photo-a-day have also multiplied. It is difficult to establish either the numbers of users or their purposes in conducting photo-a-day, but Blipfoto is estimated to have around 4000 users (mostly UK based), and two significant Flickr groups (project_365 and project365) have approximately 37,000 members. The dedicated website 365project.org claims to have over 160,000 members. Photo-a-day echoes well-established social practices such as diary keeping and journaling, as well as some related online practices such as personal blogging (and status updating).
Like diaries, journals and blogs, there is a sense of photo-a-day being an attempt to change one’s life: diary writing is often a new year’s resolution. The idea of a resolution to establish new routines is prominent in self-help advice for wellbeing. Photo-a-day is thus ripe for consideration as the type of practice people employ themselves to improve their wellbeing, in ways that echo features of positive psychology interventions.

There has already been some instructive research on photo-a-day. Piper-Wright, coming from an art practice background, argues that photo-a-day is a genuinely reflexive, “critically creative” activity that resists “the casual, the disposable and the spectacular” (2013: 216). For her, a key aspect of such photography is that it forces the user to select a single photograph per day, thus emphasising quality, rather than just the sharing of an indiscriminate “stream” of photos. Maintaining the habit over 365 days enforces a long-term commitment, rather than promoting a search for an instantaneous response. Piper-Wright finds an emphasis on “documentation, diary and self-examination” (2013: 220) - in tension with van Dijk’s (2008) suggestion that it is rather in communicative uses that digital images are now most often employed. The emphasis is on photographing for the project rather than making social connections. It generates an “internal dialogue” (ibid.: 237). Such themes resonate with the wellbeing literature and its emphasis on mindfulness and reflection.

Barton (2012), writing from a digital literacy perspective, finds that the talk generated around photo-a-day shares a particular vernacular discourse of learning. Often unexpectedly to themselves, participants find that learning is central to the experience: they report experiences of learning about photography, themselves and life in general. The learning is active in the sense of involving challenge and experiment.
Learning is “a good thing, it is fun, it involves other people and people can be both teachers and learners” (Barton, 2012: 142). Like Piper-Wright, Barton sees the practice as reflective, but places stronger emphasis on the social aspect of the photo-a-day practice. He finds that the discourse fits models of adult learning which characterise it as self-directed, active, lacking a focus on a teacher, situated and social. Certain elements of this discourse resonate with Kline’s (2009) definition of pro-social self-help, e.g. the social element and the displacement of the expert.

Thus photo-a-day is an interesting context in which to explore how people use photography and photo sharing as a means of shaping their own wellbeing. It is also an opportunity to explore how wellbeing is understood within a particular practice. However, in the light of debates in the literature about the nature of wellbeing and the role that online self-help communities may play, there are critical arguments to explore about appropriate models of self-determination or empowerment as realised through such activities. Should photo-a-day be understood through the lens of positive psychology, which construes wellbeing as promoted through simple interventions; does it align with seemingly pervasive discourses around therapeutic self-help; or can it promote more progressive models of pro-social self-help?

Methodology

Given that the study sought to discover how participants themselves perceived photo-a-day, it was appropriate to employ a qualitative method within an interpretive methodology. The data for the study were semi-structured interviews. Interview participants were recruited through an open invitation that explained our interest in the potential relation between photo-a-day and wellbeing. This was disseminated via Facebook, Twitter and the Friends of Blipfoto group on Facebook, and asked interested participants to respond via email.
Some participants were recruited by snowballing from initial respondents. The interviews themselves were framed by an understanding that the underlying question was the impact on wellbeing. Questions focused on why people had decided to undertake photo-a-day, the value they placed on it, and how they defined the rules or conventions of the practice. We also asked questions about the most recent photo taken, discussing how it was chosen, what the subject was, and whether it was typically representative of the project as a whole. This sought to capture the concrete processes of photo taking, sharing and use, to help understand how it was acted out as an everyday photographic practice. Other questions sought to discover how the participant felt that their wellbeing was impacted by the practice, and were inspired by the five simple mantras “connect, be active, keep learning, give to others and be mindful” developed by the New Economics Foundation and promoted by the UK National Health Service. Ethical approval for the study was given by two University Research Ethics Committees, and all participants gave written consent for interview. Interviews were predominantly conducted over the telephone or Skype by a research assistant, with one conducted face-to-face by one of the authors. Interviews were audio-recorded and fully transcribed.

Interviews were conducted in April and May 2015. Sixteen people volunteered to be interviewed: eleven women and five men. Participants were aged between their mid-30s and early 70s, with six participants reporting that they were retired. This seemed to reflect the age structure of communities such as Blipfoto, though no definitive data on this exists. Most had been conducting their photo-a-day project for over a year, with one person’s project running for eight years. Ten participants shared their photo on Blipfoto, with other platforms used including Facebook, Flickr and Tumblr, and sharing via Twitter.
Devices used to capture photographs varied, with many participants using more than one type of digital camera depending on circumstances – camera phones, tablet computer cameras and digital SLRs were all mentioned. Most participants shared their photo-a-day work openly on the internet, though some were in Facebook groups with a defined membership, meaning that their photographs were not shared as widely.

Data were analysed using the six-step process defined by Braun and Clarke (2006) for thematic analysis, consisting of data familiarisation, generation of initial codes, searching for themes, reviewing themes, naming themes and reporting findings. Rather than a linear process, the analysis used these steps iteratively. Code generation mixed deductive and inductive approaches, with some codes derived from the literature and some grounded in the data.

Findings: Photo-a-day as a positive intervention

Photo-a-day has some of the qualities of the interventions recommended by positive psychology. For example, the idea of doing something new every day to create a positive new habit is a common feature of such interventions, e.g. journaling. Similarly, in photo-a-day people commit to the discipline of sharing one photo every day, often in the form of a new year’s resolution.

Furthermore, photo-a-day is “positive” because of a number of given features of both photography and online activity. In photography, by convention, people typically turn their camera on beautiful things or try to make things look beautiful. They spend time selecting the most aesthetically pleasing images from all those taken. They may be limited by their current photographic skills, but they are usually seeking to learn to present things in a pleasing way. They also spend time looking at photos others share, and can select a path through beautiful images: browsing popular photographs offers a pathway through aesthetically pleasing images.
Thus there is a strong focus in photography on the beautiful. This is part of why people like to take photos and experience it as positive: “The basic premise is that I actually enjoy taking photos.” Also, in terms of content, Chalfen (1987) suggests that in home mode photography people typically take photos of positive events (such as family occasions or holidays), and it is specifically photos of positive events that are chosen to share. Cook (2011) confirms this broad pattern, though he suggests that emotions can be more complex and conflicted around positive events, such as weddings, than this assumes. As a result of this focus on aesthetically pleasing and positive events, photo-a-day can become something like a book of abundance: a daily listing of good things in a person’s life. In addition, some participants explicitly took photos to be positive:

“It sort of stopped me thinking negatively if you see what I mean, I could do something positive every day, although I was having trouble finding another job.”

“I specifically made the decision that I was going to do things that would make me, when I looked back at them, would remind me of good things that happen and good things that are going on. Positives not all the negatives. […] So I will take photos of those and then when the weather is really, really icy I went and took photos of the ice crystals that formed, so I hate icy weather but there is always something there that I can sort of turn round and say well, that was actually beautiful even though I don’t like icy weather.”

These conventions of photography that make it “positive” are reinforced by some features of online behaviour. People tend to prefer to share positive things online (Shifman, 2014), because it projects a positive image of themselves. The commenting on photographic sites like Flickr has also been observed to be largely positive (Cox, 2008).
This may reduce its value in improving photographic skills, but it does create a pleasing sense of positive reinforcement. On Blipfoto this was a valued feature; indeed, commenting was experienced as entirely positive: interviewees struggled to recall any conflict. Together these features of photography and online sharing make it a positive practice, and mean photo-a-day is likely to directly enhance the affective, hedonic side of wellbeing.

Positive but not simple

Whereas positive psychology tends to present such interventions as “simple”, and so does not ask questions about how exactly they are carried through, an important aspect of photo-a-day was its potential to generate a complex chain of activities woven through the day, and through other practices. It was this way in which taking and sharing a single photo subtly and positively reshaped many other daily practices that was key to the strength of its impact.

It was true that photos could be quickly taken and uploaded, almost without thinking. One interviewee described her process as often “spontaneous” and “impulsive”. This is what Cohen’s (2005) early photobloggers wanted: that one could take photos by blinking, so that all “barriers” to taking more photos had dissolved. For this interviewee the technology had become so natural it felt as if there were no mediation. For many participants the sense of photo-a-day being “technically” easy was important to it being positive; there was a lack of technology annoyances.

However, while one could take a photo and quickly upload it and so complete the project with little effort, it was common for the enjoyable aspects of the practice to be extended and elaborated. For example, participants often spent the day searching for better photos. Looking for photos became a preoccupation across the day, not a brief, quickly completed act. Indeed, one of the major effects of this was to “transform how I look at the world”, as one participant put it.
“I notice all sorts I never would have noticed before. You are kind of scanning around all the time you are outside. Noticing things.”

People were taken out of their other routines, because after a while it becomes challenging to find new things to photograph. Often people had deliberately adopted photo-a-day for that reason:

“So it was really a way of getting myself away from my desk and out taking a lunchtime. […] You know it is still quite a good way of forcing myself to think about something else other than work.”

Often participants referred to doing things specifically to get a photo.

“If I have a spare 5 minutes on my way home I will walk down a side street I have not been down before, or take a different turning in the car, just to see what is down there, in case there is anything that would make a good photo.”

“One of my favourite things to do is just to go on what I call a Blip walk, and I will just leave the house with a camera and I won’t have any idea of where I am going or what I am doing other than I am just looking for something to take pictures of and sometimes you get the best stuff like that and sometimes you don’t get very much at all but I like the walking.”

Thus photo-a-day was linked to the breaking of existing routines to do something positive. Rather than being a simple one-off act, searching for a good photograph added a positive interest to the whole day – there was always a chance the current best photo could be improved upon. Also, while in theory the photo could be quickly chosen and uploaded, careful choices about editing the photo and adding text that explained it often further elaborated the simple practice.
One participant talked about the enjoyment in reflecting on the day and selecting a photo:

“I will upload them into Lightroom on my laptop, you know as you are having a beer or whisky or something, and I will just see what jumps out at me and it might jump out at me because I think it is evocative and will remind me of something that was pleasant or fun or nice, or it might jump out at me because it is strong photographically, or it might be both.”

So it was important that photo-a-day is technically easy, but also that people could choose to elaborate and prolong parts of the activity where these tasks were enjoyable or rewarding. This often included mundane aspects of managing photos, what Cook (2011) dubs “procedural work”. But whereas Cook sees these merely as a necessary chore, our data suggested they could be seen as inherently interesting and potentially contributing to wellbeing itself.

As well as being elaborated itself, photography is often a subsidiary hobby, supporting or stimulating another interest or form of learning (Cook, 2011).

“I have become interested in birds through the photography and capturing a decent sharp colourful photo, and it gives me great satisfaction … and you know people talk about ‘you have really come on in your garden birds.’ And yesterday I got a greenfinch. I had only had one in the garden that I had seen before in the past year, and it came on this new bird table and I was just so pleased with myself. So I think in miniature that is a good example of the spreading of interest and spreading of [the] way you conduct yourself.”

Thus photography enhanced the interest and enjoyment around other activities, indeed created whole new interests. In addition to these elaborations of the simple act of taking and sharing a photo, there was a strong obligation to look daily at the photos uploaded by those one was following, which was a further extension of the activity generated by photo-a-day.
To some degree, looking at and favouriting others’ photos could be seen as using the obligation to reciprocate to gain attention for one’s own material. But many participants clearly had a sincere interest in others’ lives and photos. They felt the way they conducted themselves in relation to followers was part of the ethic of photo-a-day.

As well as the choices around the photo to share and commenting on others’ work, a further elaboration was the way that, looking back, the participants’ own photos acquired new meaning. Lister (2014) suggests that photos are taken for memory but rarely actually revisited. This did not seem to be the case with photo-a-day.

“I am now 6 or 7 years in and what it has become is a valuable… history of my life. […] some photographs that I take are very important to me in terms of memory but the memory that I know they will evoke is completely private. So I might write no words at all because I have nothing that I want to say in public, even though I know the photograph will evoke something for me in the future.”

“I have learnt a lot about memory, and how I remember and how I think memory has in some way become more valuable through me because I experience pleasure from memory in a very simple way when I flip back through my journal.”

Because only a single photo was being collected every day, making sense of a year’s collection becomes possible. Participants had a sense of their own life being revealed to them.

Social aspects of photo-a-day

One of Rimke’s (2000) critiques of self-help in a therapeutic culture is that it creates a “myth of co-dependence”, in which it is suggested that a person’s ills arise from depending on others, and so promotes a neo-liberal, independent, choosing subject. But social networking is social. Photography is connective (van Dijk, 2008; Cook, 2011). Photo-a-day is social generally in ways we would expect from a social networking site.
Thus another way that the practice can be elaborated is through incorporating more people into it in different ways. Photos frequently include family and friends as subjects. Often the family participated by viewing the photos but also by suggesting ideas for photos.

“It has brought me closer to the family as well, because they follow it and they have got really interested in what image is going to be there for the day.”

The practice was often shared between pairs of relatives or friends who undertook photo-a-day in parallel. Beyond this there was the obligation felt by participants to those who followed them (to be interesting and engage them) and to those they followed (to give them attention and support). Because photos were positive, sharing photos was seen in itself as giving something positive to others.

“I have had several people say to me that my photos make them feel better when they see them in their day.”

Participants enjoyed the insight into other lives:

“It is just an interest and almost a privilege to be able to eavesdrop in on their lives and see what they are doing.”

“It is just nice to get glimpses into […] their everyday life.”

The commonest reason for feeling that Blipfoto or the Facebook group was a community was the sense of concern shown to people.

“It is an odd thing to say but I have got to know people without meeting them. There are people now if they walked in and said I am so and so off Blipfoto it would not feel awkward to talk to them because you get to know bits about their life which are sometimes quite intimate. People ask me about my illness. I have had conversations with people about illness and cancer and all the rest of it, so you know it can be fairly heavy stuff really and through that you do get a sense of mutual support. People are concerned for you, and you are concerned for them.”

This suggests that for some, quite deep mutual revelation could occur.
For others, disclosure and concern were more circumscribed, but still of value.

“I think they [contacts] have become important, I think it is now as I have got to know them a little bit better, although I have no idea what they look like […] it is not a kind of personal relationship in that way, but you do develop little friendships where […] they might put that they are having a bad day […] or something and you have a little glimpse into their life and you do end up feeling concerned about them.”

Relationships felt significant, if ultimately limited. Even keeping up contact with a large number of followers by looking at their photo every day could feel difficult, but this reflected a desire to maintain regular contact. Participants talked about showing concern to others, but also about the strength of feeling they got when others responded to something they posted.

“So I just posted the photograph and then there was just this wave of emotion that was […] is there anything we can do, please give her our love, you know let us know when we can come and visit her, is there anything we can do to help out with your other daughter? […] We were just overwhelmed by […] people sending presents from different countries to the hospital.”

This level of concern from people one did not know could be almost overwhelming. An interesting extension of this was one interviewee who had terminal cancer and who, as well as being concerned to create a record for his family, also wanted to participate in creating some sort of collective social record. It had been announced that the site was to be web-archived by the British Library (n.d.).

“Recording stuff is a contribution [for] my kids and my friends, that I can leave something that shows what I did during the last year or two.
I would say that motivation has grown during the year […] could it be a way of just creating a snapshot of life, could it be a way of enabling me to leave something behind?”

“This is a record of my year and if you multiply that up it is a huge research resource. People’s lives, […] it is akin to an oral history because people are talking about what motivates them and so on.”

Thus there are multiple levels to the sociality in photo-a-day: for friends and family; for online contacts; and for wider society. This is far from the critique of self-help writing which suggests that it propagates a “myth of co-dependence”. Photo-a-day is only meaningful as a deeply social practice.

How identity is performed

It will already be apparent that the common accusation that such practices are narcissistic misses the mark. Such a conclusion is reinforced by the way that identity is constructed through photo-a-day, which is rather different from what is typical on social networking sites. Sites like Facebook and LinkedIn are based on self-consciously created profiles, with carefully honed text and a self-portrait. Certain expected narratives are written into the web site design (van Dijk, 2013). It is assumed that a single, coherent narrative of identity is desirable.

On Blipfoto, in contrast, profiles tend to be brief. They do not contain a picture of the author. Self-portraits are fairly rare among the photos taken too. The camera is turned outward onto what the photographer sees, not the self. It is what the author notices through a photo, or where they are located, that becomes focal. Thus the commentary on the photographer’s identity is indirect, circumstantial. It reflects a relational sense of identity where the author talks about themselves by making statements about objects or places they connect to, rather than depicting themselves directly. A narrative of identity emerges through choices of photo, over time.
This leaves scope for multiplicity and even for some contradictory elements. A picture of a person emerges from the series of photos they have taken (what they have noticed) and chosen to share, and the text they write about them. The identity is let out, rationed in its expression through a daily series of revelations, rather than self-consciously constructed through a definitive profile portrait or biography. Truths escape gradually. “It is not a diary it is much more, and it is what takes me.” The story is not complete. Tomorrow something new may emerge. Much also remains mysterious to the viewer. The meanings of photos are often not clearly defined. Photos often have quite private meanings, leaving the viewer with an ongoing sense of the mystery of the author. Furthermore, photos have the quality of acquiring meaning through time and association, so that this sense of emergence is experienced by blippers themselves. “I think probably the one with the most impact, a ridiculous one actually is a picture of a Give Way sign when I was coming back from a night out and it was the night I met my now wife, for a drink. Interviewer: So because of the impact it had on your life? Yes because we had known each other years ago, and we literally met for a drink because I knew she was back in the country through Facebook, and we met for a drink and we hit it off incredibly well and I just got a picture of a ridiculous like a bollard or a street sign or something from that night, but what is interesting [is that it] has fixed that night for me in my mind.” While this account of the self is clearly not without a good deal of conscious shaping, and is also coloured by the positivity of photography, it does have an “authentic” feel in being indirectly presented, multiple, and retaining an element of incompleteness and mystery. In this sense the approach resists normative views of identity as essential and unitary. Discussion Thus photo-a-day was experienced as linked to wellbeing. 
This was partly because of the conventions of photography that focus on the visually appealing and on sharing images of positive events, and it was supported by a culture of positive commenting. This made taking and sharing photos a pleasing, happy experience. Technically, photo-a-day was seen as simple, with relatively few technical annoyances. Participants who used Blipfoto often attributed the positive culture to the founding ethos of the web site, which had established guiding principles of treating others well (Blipfoto, n.d.), and which was also reflected in the Blipfoto site design, for example, in the centrality of the photos rather than the user profile. However, a similar culture had emerged on a small scale within the private Facebook group too. Although we have no definitive data on the age structure of those undertaking photo-a-day, it did appear that people were in older age groups. Age may well have been a factor in how the site was used. Yet photo-a-day was more than a simple, mechanical intervention as imagined in positive psychology and positive computing. Taking and sharing the photo could be elaborated or extended in various ways through the participant’s life and through other daily practices, and became an interest across the whole day. Photography was a support to other valued activities. Sharing a photo every day is a simple idea, yet its rich and deep effects arise from the way that it adds interest and pleasure to many other daily practices. Even the procedural aspect of photowork, of managing photos, could itself be positive, not just a chore as Cook’s (2011) interviewees seemed to see it. Thus the way that positive psychology construes such activities as simple interventions fails to recognise how their effect on wellbeing really comes about through choices made about how they are realised. 
The aspects of the experience that seemed to relate to eudaimonic wellbeing (functioning well), according to participants, were the impact on noticing things around them, which implied both escaping routine and a sort of mindfulness; the way it was social, reinforcing and expanding relationships; and the sense the practice gave people of learning and creativity. Barton (2012) has already established that photo-a-day generates a strong ethic of informal, social learning, both about photography and many other things. Participants in the current study also talked about it being an outlet for creativity. “I have always been told I am not creative and I think actually by my photos I am so it has changed how I view myself as well.” Discourses about both learning and creativity seemed to be important reference points for people talking about the eudaimonic wellbeing that they achieved through photo-a-day, linking to broader discourses of self-improvement and betterment. But, interestingly, participants did not draw on discourses around “self-actualisation” or therapeutic discourses. Thus, while Hyman (2014) finds these pervasive in talk about wellbeing, the current study shows how within a particular practice alternative discourses are sustained. This reveals the complex and local nature of the understanding of wellbeing. The aestheticisation of experience through photographic conventions and the focus on primarily recording happy events might seem to be a form of escapism, and to actively limit realising the potential of photography to tell the truth or even to unsettle and disturb. Yet effective, indirect ways were found to surface personal issues, which gained empathetic support. Nevertheless, it is doubtful that an open site like Blipfoto is suitable to support people with serious mental health issues. The purpose is to be a supportive community for sharing daily life, not a therapeutic community as modelled by Kline (2009). 
Yet the strongly social nature of the practice dispels any sense that it might promote the “myth of co-dependence” invoked by critiques of therapeutic thinking. Critically, the way that identity is constructed through looking outwards avoided the more “narcissistic” focus on self-branding implicit in the design of social networking sites, and central to usual Facebook use, according to van Dijck (2013). But Facebook is not monolithic (Zhao et al., 2013), as the thriving Facebook photo-a-day group illustrates. Photo-a-day seemed to produce a richer, less constraining articulation of identity. It avoids the kind of essentialist constructions of identity that seem to be promoted by self-help and therapeutic discourses, as critiqued by authors such as Rimke (2000). The study supports Kline’s (2009) optimism that there can be a “pro-social self-help”. Piper-Wright’s (2013) argument that the selectivity of the practice promotes reflexivity seems to be partly confirmed by those who talked about the way the practice enhanced their use of photography for memory. Thus it seems that particular practices (and the communities around them) can embody local constructions of wellbeing, including ones that resist wider discourses. Concerns were relatively few. Surprisingly, in the light of current debates about online privacy, privacy was not a concern for participants. Part of the character of photo-a-day as it affected wellbeing was that it was enjoyed in a relatively unqualified way. However, a major concern for Blipfoto users, one that did threaten their sense of wellbeing, was the sustainability of the practice given the business problems of the organisation running the site (BBC, 2015). Most people had a copy of the photos they had shared, but given that the text, the comments received, and the ongoing connections with others were as important as the photos, this was a major worry. 
Interestingly, subsequent developments suggest a genuine community willingness to crowdsource the site to avoid its loss. Conclusion There is increasing interest in designing specific tools and apps to support people in improving their lives, often based on the precepts of positive psychology. However, people already use practices with an online element to shape their own wellbeing. The originality of this paper lies in how it begins to uncover how this works, recognising the complexity of wellbeing as a construct and in the light of an understanding of the debates about the impact of therapeutic discourses and online identity construction. It has sought to unravel how people employ one particular practice, photo-a-day, to improve their sense of wellbeing. Significantly, for the understanding of wellbeing, the paper has explored how photo-a-day is a complex intervention, woven through and reshaping other daily routines; reinforcing other personal interests; and expanding and deepening social relations. Critically, it seems to be premised on a construction of identity which is multiple, emergent and also partly withheld, retaining a certain mystery. These aspects mean that within photo-a-day an understanding of wellbeing is developed that is much more complex than imagined by positive psychology but equally resists self-help and therapeutic discourses. In the context of the wider debate about the effect of social media on wellbeing, this offers an alternative viewpoint. Rather than examining the effect of social media use as a whole, it may be more productive to examine the daily practices that people carry through, of which online sharing is just a part, to see how these affect wellbeing and how wellbeing is understood within those practices. The findings have implications for web sites and apps, if their aspiration is to support wellbeing, both for functionality and interfaces and for the building of a community culture. 
For example, it directs attention to the importance of less direct forms of the representation of identity. It is striking the way that the basic conventions of the practice are varied and adjusted to fit in with people’s needs, and the complex ways that photo-a-day is woven through other practices in people’s lives. Future research could examine further how individuals on Blipfoto (and on comparable web sites) embed such practices into their daily lives as a strategy of self-change. The amount of data used in the study is quite small, and participants were a self-selecting sample, so more work needs to be done to explore how many people are benefiting from this practice, and in what ways. Understanding how this works could assist in web site development, but it could also be of interest to supporters who seek to help people improve their own wellbeing. References Barton, D. (2012). Participation, deliberate learning and discourses of learning online. Language and Education, 26(2), 139-150. DOI:10.1080/09500782.2011.642880 BBC. (2015). Photo-sharing firm Blipfoto goes into liquidation. http://www.bbc.co.uk/news/uk-scotland-scotland-business-31927236 Blipfoto (n.d.). Be excellent. https://www.blipfoto.com/be-excellent Braun, V., & Clarke, V. (2006). Using thematic analysis in psychology. Qualitative Research in Psychology, 3(2), 77-101. DOI:10.1191/1478088706qp063oa British Library (n.d.). The curator’s 100. http://www.bl.uk/100websites/top100.html Calvo, R. A., & Peters, D. (2014). Positive computing: Technology for a better world. Cambridge (MA): MIT Press. Chalfen, R. (1987). Snapshot versions of life. Bowling Green (OH): Bowling Green State University Press. Cieslik, M. (2015). ‘Not smiling but frowning’: Sociology and the ‘problem of happiness’. Sociology, 49(3), 422-437. DOI:10.1177/0038038514543297 Cohen, K. R. (2005). What does the photoblog want? Media, Culture & Society, 27(6), 883-901. DOI:10.1177/0163443705057675 Cook, E. C. (2011). 
Biography, well-being and personal media: A qualitative study of everyday digital photography practices (Doctoral dissertation, The University of Michigan). Cox, A. M. (2008). Flickr: A case study of Web 2.0. Aslib Proceedings, 60(5), 493-516. DOI:10.1108/00012530810908210 Craig, C. (2009). Exploring the self through photography: Activities for use in group work. Jessica Kingsley Publishers. Dodge, R., Daly, A. P., Huyton, J., & Sanders, L. D. (2012). The challenge of defining wellbeing. International Journal of Wellbeing, 2(3), 222-235. DOI:10.5502/ijw.v2i3.4 Dolby, S. K. (2005). Self-help books: Why Americans keep reading them. Chicago: University of Illinois Press. Haworth, J., & Hart, G. (2007). Well-being: Individual, community and social perspectives. London: Palgrave. Hazleden, R. (2003). Love yourself: The relationship of the self with itself in popular self-help texts. Journal of Sociology, 39(4), 413-428. DOI:10.1177/0004869003394006 Huppert, F. A. (2014). The state of wellbeing science: Concepts, measures, interventions, and policies. In F. A. Huppert and C. L. Cooper (Eds.), Interventions and Policies to Enhance Wellbeing (Volume VI, Wellbeing: A Complete Reference Guide, pp. 1-50). London: John Wiley & Sons. Hyman, L. (2014). Happiness: Understandings, narratives and discourses. London: Palgrave. Hyman, L. (2014). Happiness and memory: Some sociological reflections. Sociological Research Online, 19(2), 3. DOI:10.5153/sro.3268 Kline, K. N. (2009). The discursive characteristics of a prosocial self-help: Re-visioning the potential of self-help for empowerment. Southern Communication Journal, 74(2), 191-208. DOI:10.1080/10417940802510357 Layard, R. (2005). Happiness: Lessons from a new science. London: Penguin. Lister, M. (2014). Overlooking, rarely looking and not looking. In J. Larsen and M. Sandbye (Eds.), Digital snaps: The new face of photography (pp. 1-24). London: Taurus. Lyubomirsky, S. and Layous, K. (2013). 
How do simple positive activities increase well-being? Current Directions in Psychological Science, 22(1), 57-62. DOI:10.1177/0963721412469809 Munson, S. A., Lauterbach, D., Newman, M. W., & Resnick, P. (2010). Happier together: Integrating a wellness application into a social network site. In Persuasive Technology (pp. 27-39). Berlin: Springer. Piper-Wright, T. (2013). That’s the story of my life... Daily photography as reflexive practice. In R. Miller, J. Carson, & T. Wilkie (Eds.), The Reflexive Photographer (pp. 214-251). Edinburgh: Museumsetc. Rimke, H. M. (2000). Governing citizens through self-help literature. Cultural Studies, 14(1), 61-78. DOI:10.1080/095023800334986 Rubinstein, D., & Sluis, K. (2008). A life more photographic: Mapping the networked image. Photographies, 1(1), 9-28. DOI:10.1080/17540760701785842 Ryff, C. D. (1989). Happiness is everything, or is it? Explorations on the meaning of psychological well-being. Journal of Personality and Social Psychology, 57(6), 1069-1081. DOI:10.1037/0022-3514.57.6.1069 Seligman, M. E. P. & Csikszentmihalyi, M. (2000). Positive psychology: An introduction. American Psychologist, 55(1), 5-14. Seligman, M. E. P. (2003). Authentic happiness: Using the new positive psychology to realize your potential for deep fulfilment. London: Nicholas Brearley. Shifman, L. (2014). Memes in internet culture. Cambridge (MA): MIT Press. Sosik, V. S., & Cosley, D. (2014). Leveraging social media content to support engagement in positive interventions. The Journal of Positive Psychology, 9(5), 428-434. DOI:10.1080/17439760.2014.910826 Suler, J. (2009). The psychotherapeutics of online photosharing. International Journal of Applied Psychoanalytic Studies, 6(4), 339-344. DOI:10.1002/aps.219 Van Dijck, J. (2008). Digital photography: Communication, identity, memory. Visual Communication, 7(1), 57-76. DOI:10.1177/1470357207084865 Van Dijck, J. (2013). ‘You have one identity’. 
Performing the self on Facebook and LinkedIn. Media, Culture & Society, 35(8), 199-215. DOI:10.1177/0163443712468605 Weiser, J., & Krauss, D. (2009). Picturing phototherapy and therapeutic photography: Commentary on articles arising from the 2008 International Conference in Finland. European Journal of Psychotherapy and Counselling, 11(1), 77-99. Word count: 8381. Andrew Cox (* corresponding author), Senior Lecturer, The Information School, University of Sheffield, Rm 222, Regent Court, 211 Portobello, Sheffield, S1 4DP, UK. T: +44 (0)114 2226347. a.m.cox@sheffield.ac.uk Liz Brewster, Lecturer, Lancaster Medical School, Faculty of Health & Medicine, Furness College, Lancaster University, Lancaster, LA1 4YG, UK. T: +44 (0)152 459 5018. e.brewster@lancaster.ac.uk work_x5qm6ji4tzewpbwyklg7hrq45i ---- Workshop No. 1, Proceedings of the 6th ICEE Conference, 29-31 May 2012, TTEM-1. Military Technical College, Kobry El-Kobbah, Cairo, Egypt: 6th International Conference on Chemical & Environmental Engineering. SOME FREE AIR EXPLOSION MEASUREMENT TIPS Dr. Mohamed El-Naggar* Abstract: The study of blast waves generated by explosions finds important application in energetic material (EM) research programs. The main tool in this respect is recording the generated pressures at different locations and times using pressure sensors. Another important tool is high-speed digital photography. Both tools are vital for judging the performance of EM and for optimizing elaborated designs in several fields. At the same time, both techniques are used to improve the adapted theoretical models that simulate such processes. Pressure sensors used in blast wave pressure measurement need special precautions to ensure sensor protection, accurate pressure recording, and accurate signal transmission to recording devices. High-speed digital photography is also used for performance analysis of EM combustion or detonation, as well as for motion study of related processes. In this presentation, blast pressure measurement techniques are discussed, supported by some real test runs. Problems related to efficient use of such techniques, and to maximizing the information gathered from them, are highlighted. The presentation contains photographs of the hardware used in these measuring techniques, and best practices in this field are also discussed. * Egyptian Armed Forces work_x664icbdnne23l5szu2c4o6pdi ---- Psychosocial Mediators of a Nurse Intervention to Increase Skin Self-examination in Patients at High Risk for Melanoma Jennifer L. Hay,1 Susan A. Oliveria,2 Stephen W. Dusza,2 Deborah L. Phelan,2 Jamie S. Ostroff,1 and Allan C. Halpern2 1Behavioral Sciences Service and 2Dermatology Service, Memorial Sloan-Kettering Cancer Center, New York, New York Abstract: This prospective study examines psychosocial mediators of an efficacious skin self-examination (SSE) intervention that includes provision of a whole-body digital photography book depicting the entire skin surface. Individuals (n = 100) with established risk factors for melanoma were recruited from the Memorial Sloan-Kettering Cancer Center Pigmented Lesion Clinic during their initial dermatologist visit and were randomized to receive a photobook immediately (n = 49) or 4 months after intervention delivery (n = 51). Potential mediators included self-efficacy and response efficacy, drawn from Social Cognitive Theory; melanoma worry and SSE anxiety, drawn from Self-Regulation Theory; skin cancer knowledge; and skin awareness. Only self-efficacy was a significant mediator, accounting for 8% of the total effect of photobook enhancement on SSE adherence at 4 months. (Cancer Epidemiol Biomarkers Prev 2006;15(6):1212-6) Introduction Melanoma is one of the most rapidly increasing cancers in the United States (1). 
Established risk factors for melanoma include strong intermittent sun exposure, large numbers of dysplastic nevi, cutaneous phenotype (red hair, blue eyes, and poor tanning ability; ref. 2), and a family history of the disease (3). Fortunately, there is a 95% survival rate if melanoma is diagnosed at a local stage (4), making early detection an important strategy for reducing melanoma mortality and morbidity. Skin self-examination (SSE) by patients is a potentially useful, but as yet unproven, strategy to reduce incident and invasive diagnoses (5). Additionally, over half (53-68%) of melanomas are originally detected by the patients, spouses, or partners (6); thus, increasing individuals’ ability to recognize new or changing lesions represents an important goal for early detection of melanoma, especially among those with melanoma risk factors (7). Even among those with a family history of melanoma, recent (last 12 months) screening rates vary widely (28-62%; refs. 8, 9). Novel intervention strategies to increase SSE use among those at high risk for developing melanoma are warranted. Among those at high risk for developing melanoma, demographic predictors of adherence to SSE include being female, younger, and having a higher educational level (10). Medical factors related to SSE performance include having a history of skin cancer and greater sun sensitivity (10). Psychosocial predictors of SSE in high-risk populations include higher knowledge about SSE (10); high self-efficacy, or confidence that one can perform efficacious screening (8, 10); a positive attitude about SSE and its benefits; low levels of perceived barriers to SSE performance (10-12); and physician recommendation and counseling to perform SSE (12). Finally, increased SSE is related to the ability to ask for help from a spouse or partner (8, 10) and to increased levels of melanoma concern and risk perceptions (10, 11). 
Prior research indicates that SSE educational interventions can increase SSE utilization and diagnostic accuracy. In the general population, Mickler et al. (13) found that the provision of an SSE educational brochure, videotape, or one-on-one instruction from a nurse practitioner led to sustained (3 weeks) increases in skin cancer knowledge, SSE use, and accurate discrimination of lesions compared with a no-intervention control group. In addition, the provision of photographic examples combined with written information about different types of skin lesions has also been shown to be a useful strategy to increase participants’ ability to accurately discriminate benign from suspicious lesions. Among those at high risk for developing melanoma, dermatologic examination and nurse-provided SSE education increase knowledge and use of SSE sustained through 18 months (14). The provision of digital photographs of the entire skin surface, in tandem with SSE education, may further enhance SSE over educational interventions alone. Digital photography has the potential to act as an at-home reminder to engage in monthly SSE, as well as a concrete point of comparison for patients as they search for new or changing skin lesions on their skin surface during SSE (15). The use of digital photography increases high-risk individuals’ diagnostic accuracy (16) and, integrated into a nurse- and dermatologist-provided educational intervention, results in significantly increased use of SSE after 4 months over the intervention alone (17). In this study, we examine potential theory-based psychosocial mediators of the effect of digital photography on adherence to SSE. An understanding of the mechanisms through which provision of digital photography enhances SSE use has practical and theoretical importance, as this information could guide the development of additional enhancements for SSE interventions and booster interventions aimed at SSE maintenance. Additionally, clarification of the psychological processes through which digital photography leads to increased SSE would provide guidance concerning the development of other personal, hands-on aids for screening, and provide evidence for or against the theoretical approaches used to develop intervention enhancements. Unfortunately, even when health behavior theories are used to guide the development of intervention components, examinations of whether the expected theory-based constructs are actually responsible for behavior change are rarely conducted. The Task Force on Community Preventive Services (20) has advocated for further examination of the theoretical mechanisms responsible for community interventions aimed to reduce sun exposure; these recommendations are similarly warranted for SSE interventions provided in clinical settings. [Article footnotes: Received 11/9/04; revised 3/5/06; accepted 3/14/06. Grant support: NIH grant K07 CA098106. Requests for reprints: Jennifer Hay, Department of Psychiatry and Behavioral Sciences, 641 Lexington Avenue, 7th Floor, New York, NY 10022. Phone: 646-888-0039; E-mail: hayj@mskcc.org. Copyright © 2006 American Association for Cancer Research. DOI:10.1158/1055-9965.EPI-04-0822] Materials and Methods Sample. 
As described previously (21), the sample included new patients recruited from the Memorial Sloan-Kettering Cancer Center outpatient Pigmented Lesion Clinic of the Dermatology Service. All new dermatology visits were assessed for the presence and number of clinically dysplastic or atypical nevi by the physician during the clinical examination. Patients ages ≥18 years with five or more clinically dysplastic or atypical nevi who were willing to have digital whole-body photography and agreed to be randomized to an intervention arm were recruited, and informed consent was obtained (N = 100). Among these participants, self-reported melanoma risk factors included a personal history of skin cancer (50%), a history of dysplastic moles (81%), and a history of previous skin biopsy (80%). Half of these individuals (n = 49) were designated to receive their whole-body digital photography (photobook) to take home with them. We stratified by personal history of skin cancer during patient enrollment to ensure that this variable was equally distributed between the two intervention arms. Patients who were visually or physically impaired, had been previously photographed, or had previously received a photobook were not eligible. The participation rate for the study was 95%, with those refusing involvement in the study doing so because of time constraints or a lack of interest in research participation. This study was reviewed and approved by the Institutional Review Board at Memorial Sloan-Kettering Cancer Center. Study Design and Description of the Intervention. The intervention for this study (21) consisted of a 2-hour meeting with a dermatologist and a dermatologic nurse. As a preliminary step, an explanation of the study was provided; consent was obtained; and randomization to intervention A (photobook) or intervention B (no photobook) was completed. 
The first intervention module consisted of a dermatologist encounter in which the physician explained the importance of SSE, instructed the patient to focus on the size, color, and shape of lesions during SSE, and led a discussion of any changes that should prompt a dermatology visit, the types of skin cancer, and sun protection advice. Next, the nurse asked each patient to remove all clothing and put on a robe. The nurse then performed whole-body digital photography incorporating 27 body sectors, including close-ups of patients’ moles. After patients changed back into their clothing, they viewed a 3-minute video on SSE: Skin Cancer: Can You Spot It? (22). Next, the nurse conducted a guided imagery exercise in which she asked each patient to close their eyes, try to relax, and visualize being at home in a comfortable, well-lit room. The nurse then systematically described the patient conducting SSE at home. The group randomized to the SSE intervention with photobook (intervention A) received their personal whole-body photographs compiled in the form of a booklet. The nurse showed how to use the photobook as an adjunct to SSE. The group randomized to the SSE intervention with no photobook (intervention B) received a written pamphlet on how to perform SSE and how to record moles in a diary format as an adjunct to SSE. The nurse showed in a systematic fashion how to look at all body parts and how to record current moles. After the 4-month assessment, intervention B participants received their own photobook with nurse instruction. Proposed Mediational Model. We proposed a set of psychosocial factors to explain the effect of providing the whole-body digital photography photobook, as an intervention enhancement, on increased use of SSE. First, we hypothesized that provision of the photobook would increase use of SSE through increased confidence in SSE performance (self-efficacy) and a stronger belief that SSE is an effective means of detecting early skin cancer (response efficacy). 
These constructs are derived from Social Cognitive Theory (23), which emphasizes beliefs about the efficacy of one’s efforts as an important mechanism of behavior change. Empirically, self-efficacy and response efficacy are related to increased use of SSE in high-risk individuals (8, 10-12). We hypothesized that the provision of the photobook, a personalized, concrete, take-home guide and point of comparison for the appearance of moles, would further enhance efficacy beliefs, and thus adherence with SSE, over the educational intervention alone. Second, we hypothesized that the effect of provision of whole-body digital photography on SSE adherence would be mediated by reductions in negative affect related to developing melanoma and performing SSE. Given the potential for negative affect related to SSE and melanoma in individuals with high numbers of dysplastic or atypical nevi, we anticipated that the provision of the photobook would aid in the management of negative affect over and above the educational intervention alone, because providing more personalized, concrete guidance may give these high-risk patients an additional level of structure to help them manage their risk by conducting regular SSE. This is consistent with Leventhal’s Self-Regulation Theory (24). Empirically, as well, there is evidence that those who anticipate that SSE will increase their anxiety about skin cancer prefer to rely on physician examination (11). We also hypothesized that skin cancer knowledge and heightened skin awareness would mediate the intervention effect, because skin cancer knowledge is an outcome of SSE intervention (13), and both knowledge and awareness are key predictors of SSE performance. Measurement Strategy. 
In Table 1, we describe the measurement strategy for each proposed psychosocial mediator, including for each one the number and wording of the items used in the scales, the response categories employed, the score ranges, and the level of internal consistency. These psychosocial factors (see Table 1) were assessed by questionnaire at multiple time points: at baseline, before receipt of the intervention, and at follow-up, after 4 months. At baseline, participants completed their questionnaire (demographics, medical and psychosocial factors, and SSE adherence) before their meeting with the dermatologist and nurse. Then, the dermatologic examination was conducted to collect information on the number of moles and dysplastic nevi. The nurse education module was also delivered at this appointment. All patients then underwent whole-body photography. Intervention A participants received their photobook immediately after the nurse-provided intervention, whereas intervention B participants received their photobook after follow-up 2, 4 months after their baseline visit. At 4-month follow-up, the questionnaires were either mailed directly to the patient for self-administration, or the nurse administered the questionnaire via telephone. Our primary dependent variable was adherence with SSE at 4 months: we designated those who had completed three or more screenings during this time period as adherent, and those who had done fewer than three screenings as nonadherent. Most participants (95%) were retained through the 4-month assessment. We examined whether the five who dropped out differed from the 95 who were retained in demographic, medical history, or baseline psychosocial factors; they differed significantly only on self-efficacy. The five who dropped out had significantly higher scores (m = 4.1) than those who were retained (m = 3.4, P = 0.04). 
Cancer Epidemiology, Biomarkers & Prevention. Cancer Epidemiol Biomarkers Prev 2006;15(6), June 2006. © 2006 American Association for Cancer Research.

Analytic Approach.

Descriptive statistics, including medians, means, and SDs, were calculated for all patient characteristics and psychosocial factors. We also examined whether any of the psychosocial factors varied significantly across intervention arm. Simple mediation models were evaluated for all psychosocial factors. Mediation was assessed using the individual regression model methodology described by Baron and Kenny (18) and expanded on by MacKinnon et al. (25):

M = b0(1) + aX + e(1)    (A)
Y = b0(2) + sX + e(2)    (B)
Y = b0(3) + sX + bM + e(3)    (C)

In this analysis, the dependent variable (Y) is adherence to SSE at 4 months after intervention; the independent variable (X) is the intervention group to which the patient belonged (group A or B); and the possible mediator variables (M) are the psychosocial measures collected at 4 months after baseline. Eq. A tests the effect of the independent variable (X) on the mediator (M). Eq. B depicts the effect of the independent variable (X) on the dependent variable (Y). Eq. C tests the effect of the mediator (M) on the dependent variable (Y), adjusting for the independent variable (X; see Fig. 1). Baron and Kenny (18) consider a variable to be a mediator if three conditions hold: first, the independent variable affects the mediator (Eq. A); second, the independent variable affects the dependent variable (Eq. B); and third, the mediator affects the dependent variable after controlling for the independent variable (Eq. C).
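As a concrete illustration, the three regressions can be run on synthetic data. This is a sketch of the general Baron-Kenny procedure, not the authors' code; the variable names and simulated effect sizes are invented, and the outcome (binary adherence in the study) is treated as continuous here for simplicity.

```python
import numpy as np

def ols(y, X):
    """Ordinary least-squares coefficients for the columns of X."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

rng = np.random.default_rng(0)
n = 95
x = rng.integers(0, 2, n).astype(float)      # intervention arm (photobook = 1)
m = 0.5 * x + rng.normal(0, 1, n)            # mediator, e.g. self-efficacy at 4 months
y = 0.4 * m + 0.2 * x + rng.normal(0, 1, n)  # outcome (SSE adherence, continuous here)

ones = np.ones(n)
a = ols(m, np.column_stack([ones, x]))[1]        # Eq. A: path a (X -> M)
s_total = ols(y, np.column_stack([ones, x]))[1]  # Eq. B: total effect of X on Y
coef_c = ols(y, np.column_stack([ones, x, m]))   # Eq. C: X and M together
s_direct, b = coef_c[1], coef_c[2]               # direct effect and path b (M -> Y given X)

indirect = a * b  # product-of-coefficients estimate of the mediated effect
# For linear models the total effect decomposes exactly: s_total = s_direct + a*b
```

The final comment is the reason the product ab is read as "the part of the intervention effect carried by the mediator": in the linear case it is exactly the gap between the total and direct effects.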
The mediation variable effect was assessed for each psychosocial measure individually using the product of the coefficients a (representing the relation between the mediator and the independent variable) and b (representing the relationship between the mediator and the dependent variable, adjusting for the effect of the independent variable), as outlined in MacKinnon et al. (25). The Sobel (26) estimate of the SE was also calculated, and the mediator variable effect was calculated as the product of the coefficients (ab) divided by the Sobel estimate of the SE. This procedure assumes that the error terms e(1) and e(3) are normally distributed and that there was little measurement error. Because of the relatively limited sample size, bootstrap estimates of the product of the coefficients were also estimated, along with 95% confidence intervals (27).

Table 1. Proposed psychosocial mediators of a 4-month SSE intervention effect

Self-efficacy — 3 items: "How confident are you that you can: (1) perform SSE? (2) perform effective SSE? (3) I am not confident that I know what to look for when doing SSE" (item 3 reverse coded). Responses: 1 = not at all, 2 = a little, 3 = somewhat confident, 4 = very, 5 = extremely confident. Score range 1-5; baseline M (SD) 3.4 (0.7); internal consistency* 0.77.

Response efficacy — 1 item: "How certain are you that SSE is an effective means of detecting early skin cancer?" Responses: 1 = not at all, 2 = a little, 3 = somewhat certain, 4 = very, 5 = extremely certain. Score range 1-5; baseline 4.0 (0.9); internal consistency NA.

Melanoma worry (30) — 4 items: "During the past two weeks: (1) how often have you worried about developing melanoma? (2) how often has your mood been affected by concern that you might get melanoma someday? (3) how often have thoughts about getting melanoma affected your ability to perform your daily activities? (4) how emotionally distressed or concerned have you been about the possibility of getting melanoma?" Responses, items 1-3: 1 = rarely or never, 2 = sometimes, 3 = often, 4 = all the time; item 4: 1 = not at all, 2 = somewhat concerned, 3 = moderately concerned, 4 = very concerned. Score range 4-16; baseline 7.5 (2.6); internal consistency 0.84.

SSE anxiety — 1 item: "When I think about doing SSE I become anxious." Responses: 1 = strongly disagree, 2 = somewhat disagree, 3 = undecided, 4 = somewhat agree, 5 = strongly agree. Score range 1-5; baseline 2.3 (1.0); internal consistency NA.

Skin cancer knowledge — 11 items covering types of skin cancer, curability, prevention methods, performance of SSE, body parts in SSE, signs and appropriate follow-up of suspicious lesions, time interval for SSE, and appropriate reminders for SSE. Responses: 0 = incorrect, 1 = correct. Score range 0-11; baseline 7.1 (1.2); internal consistency 0.65.

Skin awareness (31) — 1 item: "Do you think you would notice changes on your skin if they occurred?" Responses: 0 = no/don't know, 1 = yes. Score range 0-1; baseline 0.7 (0.4); internal consistency NA.

Abbreviation: NA, not available. *Internal consistency calculated at the 4-month assessment for each psychosocial factor.

Figure 1. Simple mediational model.

Results

Participants were predominantly female (63%), White non-Hispanic (98%), and married (61%), with an average age of 40 (SD = 11.7). Almost half (40%) had education beyond college, and most (74%) saw a dermatologist regularly. Patients randomized to intervention A (photobook) versus intervention B (no photobook) did not differ significantly on any of the demographic or psychosocial factors (all Ps > 0.05).
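The Sobel standard error and the bootstrap confidence interval described under the Analytic Approach can be sketched as follows. This is illustrative code on synthetic data; the `fit_with_se` and `indirect_paths` helpers and the simulated variables are my own, not the study's.

```python
import numpy as np

def fit_with_se(y, X):
    """OLS coefficients and their standard errors."""
    beta = np.linalg.solve(X.T @ X, X.T @ y)
    resid = y - X @ beta
    sigma2 = resid @ resid / (len(y) - X.shape[1])
    se = np.sqrt(np.diag(sigma2 * np.linalg.inv(X.T @ X)))
    return beta, se

def indirect_paths(x, m, y):
    """Return a, SE(a), b, SE(b) from Eq. A and Eq. C."""
    ones = np.ones(len(x))
    beta_a, se_a = fit_with_se(m, np.column_stack([ones, x]))
    beta_c, se_c = fit_with_se(y, np.column_stack([ones, x, m]))
    return beta_a[1], se_a[1], beta_c[2], se_c[2]

rng = np.random.default_rng(1)
n = 95
x = rng.integers(0, 2, n).astype(float)      # intervention arm
m = 0.5 * x + rng.normal(0, 1, n)            # mediator at 4 months
y = 0.4 * m + 0.2 * x + rng.normal(0, 1, n)  # outcome

a, se_a, b, se_b = indirect_paths(x, m, y)
ab = a * b
sobel_se = np.sqrt(a**2 * se_b**2 + b**2 * se_a**2)  # first-order Sobel (26) SE
z = ab / sobel_se                                    # mediated effect over its SE

# Nonparametric bootstrap of the indirect effect (cf. Efron & Tibshirani, ref. 27)
boot = np.empty(2000)
for i in range(2000):
    idx = rng.integers(0, n, n)              # resample cases with replacement
    ai, _, bi, _ = indirect_paths(x[idx], m[idx], y[idx])
    boot[i] = ai * bi
lo, hi = np.percentile(boot, [2.5, 97.5])    # 95% percentile interval
```

A percentile interval that excludes zero is the criterion the Results section applies to the self-efficacy indirect effect.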
Baseline descriptive statistics for all psychosocial factors are provided in Table 1. Table 2 shows the results of mediation testing for each psychosocial factor measure. Eq. A was significant for self-efficacy alone, as delivery of the photobook was significantly related to increased self-efficacy at 4 months after intervention. None of the other psychosocial factors met this condition. Eq. B was significant, such that delivery of the photobook was related to adherence to skin self-examination, as we reported previously (17). The condition for Eq. C was met by self-efficacy as well as skin cancer knowledge; these potential mediators were associated with adherence to SSE controlling for photobook delivery. The only psychosocial factor that met all three conditions of mediation was self-efficacy. The bootstrap estimate of the mediating variable effect for self-efficacy was 0.0808. The 95% confidence interval for the indirect effect of self-efficacy did not overlap zero (0.0172-0.1638), indicating statistical significance at an alpha of 0.05. Self-efficacy accounted for approximately 8% of the total effect of photobook delivery on SSE adherence.

Discussion

This study examines psychosocial processes associated with an intervention to enhance the performance of SSE among patients at high risk for melanoma. We found that self-efficacy significantly mediated the relationship between provision of an SSE intervention enhancement (the digital photography photobook) and increased SSE adherence at 4 months, relative to the SSE intervention without the photobook, with self-efficacy accounting for 8% of the total effect of the intervention enhancement on adherence to SSE. This confirms our hypothesis that patients who had an objective comparison against which to evaluate any new or changing lesions felt more confident in their SSE ability, and that this helped explain their increased use of SSE.
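For reference, the "percent of total effect" figure reported above is simply the indirect effect divided by the total effect; under the linear decomposition it works out as below. The path values here are invented purely to show the arithmetic, not taken from the study.

```python
# Hypothetical path coefficients (not the study's estimates).
a, b = 0.50, 0.16          # X -> M path and M -> Y path adjusted for X
indirect = a * b           # mediated (indirect) effect
direct = 0.92              # X -> Y effect adjusted for M
total = direct + indirect  # linear decomposition of the total effect
share = indirect / total   # fraction of the total effect that is mediated
print(round(100 * share))  # prints 8
```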
These findings confirm that this SSE intervention enhancement (provision of the photobook) met the process goals of increasing efficacy beliefs (23). As such, the study suggests that other methods aimed at increasing and maintaining SSE use should address self-efficacy beliefs and could include strategies for countering any reductions in self-efficacy that could diminish adherence. We did not find evidence for mediation among the other psychosocial factors assessed, which could have been due to a lack of sensitivity in the proposed mediators and our small sample size. It is also likely that there are other health behavior change processes at work driving increased SSE adherence in the presence of the photobook (28). There are opportunities to further explore and examine theory-based mechanisms of this and other novel SSE intervention components. We note some limitations of the current study. The study comprised a relatively small number of participants and was conducted in a fairly ideal circumstance in which high-risk patients were willing to be involved in a relatively time-consuming intervention strategy. Additionally, self-efficacy was assessed through direct questioning about level of confidence in performing SSE, which may not fully capture the self-efficacy construct. However, comprehensive and detailed information was obtained during this study, albeit on a small group of patients. Our sample size, and potential measurement error in those psychosocial factors that were based on single items, dictate the need to confirm our findings in a larger sample. Finally, our SSE end point was based on self-report rather than direct observation and thus is vulnerable to overestimation by participants. However, patients in this study were not privy to the underlying hypotheses, and any misclassification related to SSE self-report is not likely to have been differential between the two groups, and thus should not substantially bias the study findings.
Additionally, a larger sample would allow for examination of potential moderators of the effect of the photobook on SSE adherence. In fact, the photobook might be particularly useful in some subgroups of high-risk patients. We note that the five participants lost to 4-month follow-up had significantly higher levels of baseline self-efficacy than those retained; although this analysis is based on very few participants, it raises interesting unanswered questions concerning the psychosocial characteristics of those who might find the intervention strategy more or less relevant for themselves. In sum, this study provides needed insight into the psychological mechanisms associated with a specific component of SSE intervention. Unique strengths of the study include its focus on a cancer screening strategy that is equally applicable to men and women (unlike BSE), its prospective design, and the theory-driven nature of the photobook intervention. Additionally, this study examines these issues in a group of individuals for whom initiation and maintenance of SSE is highly recommended; thus, a greater understanding of the psychological mechanisms associated with adherence to SSE is of value. This study indicates the central importance of self-efficacy in driving increased SSE adherence rates after provision of advice, education, and a personalized photobook. Finally, the study adds to our theoretical understanding of how cancer prevention behavior change takes place. The development of new maintenance-focused theoretical models (29) and theory-driven empirical investigations will provide additional insight into the process and optimization of health behavior adherence, including SSE adherence, over time.

Table 2. Single mediator tests for each psychosocial factor (n = 95). Entries are the estimate and P value for Eq. A (effect of X on M) and Eq. C (effect of M on Y, adjusting for X), followed by the mediator variable effect and its 95% confidence interval.

Self-efficacy: Eq. A 0.5065 (P = 0.0030); Eq. C 0.1643 (P = 0.0076); mediator effect 0.0808 (0.0172, 0.1638).
Response efficacy: Eq. A -0.0594 (P = 0.7593); Eq. C -0.0164 (P = 0.7626); mediator effect -0.0002 (-0.0259, 0.0253).
Melanoma worry: Eq. A 0.5045 (P = 0.2851); Eq. C -0.0157 (P = 0.4856); mediator effect -0.0085 (-0.0516, 0.0191).
SSE anxiety: Eq. A -0.0989 (P = 0.7259); Eq. C -0.0605 (P = 0.1001); mediator effect 0.0046 (-0.0354, 0.0461).
Skin cancer knowledge: Eq. A 1.5676 (P = 0.4758); Eq. C 0.0151 (P = 0.0086); mediator effect 0.0222 (-0.0473, 0.0998).
Skin awareness: Eq. A -0.0590 (P = 0.4734); Eq. C 0.2026 (P = 0.1141); mediator effect -0.0119 (-0.0541, 0.0287).

Acknowledgments

We thank Benjamin Bowling, B.Sc. and Christopher S. Webster for their valued assistance in completing this article.

References

1. Jemal A, Devesa SS, Hartge P, Tucker MA. Recent trends in cutaneous melanoma incidence among Whites in the United States. J Natl Cancer Inst 2001;93:678–83.
2. Armstrong BK, English DR. Cutaneous malignant melanoma. New York: Oxford University Press; 1996.
3. Ford D, Bliss JM, Swerdlow AJ, et al. Risk of cutaneous melanoma associated with a family history of the disease. The International Melanoma Analysis Group (IMAGE). Int J Cancer 1995;62:377–81.
4. Reis LAG, Eisner MP, Kosary CL, et al., editors. SEER cancer statistics review. Bethesda (MD); 2002.
5. Berwick M, Begg CB, Fine JA, Roush GC, Barnhill RL. Screening for cutaneous melanoma by skin self-examination. J Natl Cancer Inst 1996;88:17–23.
6. Epstein DS, Lange JR, Gruber SB, Mofid M, Koch SE. Is physician detection associated with thinner melanomas? JAMA 1999;281:640–3.
7. Weinstock MA. Early detection of melanoma. JAMA 2000;284:886–9.
8. Geller AC, Emmons K, Brooks DR, et al. Skin cancer prevention and detection practices among siblings of patients with melanoma. J Am Acad Dermatol 2003;49:631–8.
9. Manne S, Fasanella N, Connors J, et al. Sun protection and skin surveillance practices among relatives of patients with malignant melanoma: prevalence and predictors. Prev Med 2004;39:36–47.
10. Robinson JK, Fisher SG, Turrisi RJ. Predictors of skin self-examination performance. Cancer 2002;95:135–46.
11. Janda M, Youl PH, Lowe JB, et al. Attitudes and intentions in relation to skin checks for early signs of skin cancer. Prev Med 2004;39:11–8.
12. Robinson JD, Silk KJ, Parrott RL, et al. Healthcare providers' sun-protection promotion and at-risk clients' skin-cancer-prevention outcomes. Prev Med 2004;38:251–7.
13. Mickler TJ, Rodrigue JR, Lescano CM. A comparison of three methods of teaching skin self-examinations. J Clin Psychol Med Settings 1999;6:273–86.
14. Berwick M, Oliveria S, Lou ST, Headley A, Bolognia JL. A pilot study using nurse education as an intervention to increase skin self-examination for melanoma. J Cancer Educ 2000;15:39–41.
15. Edmondson PC, Curley RK, Marsden RA, et al. Screening for malignant melanoma using instant photography. J Med Screen 1999;6:42–6.
16. Oliveria SA, Chau D, Christos PJ, et al. Diagnostic accuracy of patients in performing skin self-examination and the impact of photography. Arch Dermatol 2004;140:57–62.
17. Oliveria SA, Dusza SW, Phelan DL, et al. Patient adherence to skin self-examination: effect of nurse intervention with photographs. Am J Prev Med 2004;26:152–5.
18. Baron RM, Kenny DA. The moderator-mediator variable distinction in social psychological research: conceptual, strategic, and statistical considerations. J Pers Soc Psychol 1986;51:1173–82.
19. Freedman LS, Schatzkin A. Sample size for studying intermediate endpoints within intervention trials or observational studies. Am J Epidemiol 1992;136:1148–59.
20. Saraiya M, Glanz K, Briss PA, et al. Interventions to prevent skin cancer by reducing exposure to ultraviolet radiation: a systematic review. Am J Prev Med 2004;27:422–66.
21. Phelan DL, Oliveria SA, Christos PJ, Dusza SW, Halpern AC. Skin self-examination in patients at high risk for melanoma: a pilot study. Oncol Nurs Forum 2003;30:1029–36.
22. Skin Cancer Foundation. Skin cancer: can you spot it? [Video]. Available from the Skin Cancer Foundation, 245 Fifth Avenue, Suite 1403, New York, NY 10016; 1992.
23. Bandura A. Social foundations of thought and action: a social cognitive theory. Englewood Cliffs (NJ): Prentice-Hall; 1986.
24. Cameron LD. Screening for cancer: illness perceptions and illness worry. Amsterdam: Harwood Academic Publishers; 1997.
25. MacKinnon DP, Lockwood CM, Hoffman JM, West SG, Sheets V. A comparison of methods to test mediation and other intervening variable effects. Psychol Methods 2002;7:83–104.
26. Sobel M. Asymptotic confidence intervals for indirect effects in structural equation models. In: Leinhardt S, editor. Sociological methodology 1982. Washington (DC): American Sociological Association; 1982. pp. 290–312.
27. Efron B, Tibshirani R. Bootstrap measures for standard errors, confidence intervals, and other measures of statistical accuracy. Statistical Science 1986;1:54–77.
28. Baum A, Cohen L. Successful behavioral interventions to prevent cancer: the example of skin cancer. Annu Rev Public Health 1998;19:319–33.
29. Rothman AJ. Toward a theory-based analysis of behavioral maintenance. Health Psychol 2000;19:64–9.
30. Lerman C, Trock B, Rimer BK, et al. Psychological side effects of breast cancer screening. Health Psychol 1991;10:259–67.
31. Oliveria SA, Christos PJ, Halpern AC, et al. Evaluation of factors associated with skin self-examination. Cancer Epidemiol Biomarkers Prev 1999;8:971–8.
Hay JL, Oliveria SA, Dusza SW, et al. Psychosocial mediators of a nurse intervention to increase skin self-examination in patients at high risk for melanoma. Cancer Epidemiol Biomarkers Prev 2006;15:1212–1216.

work_x7zaqsgo3rajjeswa3oc5umou4 ----

The Fixator—A simple method for mounting of arthropod specimens and photography of complex structures in liquid

Zootaxa 4657 (2): 385–391. ISSN 1175-5326 (print edition), ISSN 1175-5334 (online edition). Copyright © 2019 Magnolia Press. Accepted by R. Zahiri: 17 Jul. 2019; published: 19 Aug. 2019. Licensed under a Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0). https://doi.org/10.11646/zootaxa.4657.2.11 http://zoobank.org/urn:lsid:zoobank.org:pub:A0DE0959-A937-49FC-BB08-4747F368840F

DOMINIC WANKE 1,2, SONIA BIGALK 2, LARS KROGMANN 1,2, INGO WENDT 2 & HOSSEIN RAJAEI 2
1 University of Hohenheim, Schloss Hohenheim 1, D-70599 Stuttgart, Germany; 2 Department of Entomology, State Museum of Natural History Stuttgart, Rosenstein 1, D-70191 Stuttgart, Germany. E-mail: dominic.wanke@smns-bw.de

Abstract

Mounting and preparing arthropods in liquids for photography and further investigation is a challenging task and may lead to unsatisfactory results and, in the worst case, to damage to specimens. A new method is presented here, which allows the fixation of specimens of different sizes under various degrees of pressure. The method is illustrated by three case studies from different groups of insects and arachnids.

Keywords: fixation, genitalia dissection, maceration, specimen handling

Introduction

A major aspect of taxonomy is the description and interpretation of external and internal morphological characters (Dayrat 2005, Mutanen & Pretorius 2007, Padial et al. 2010). Many arthropod dissections are therefore carried out in liquid environments (e.g. ethanol, glycerol) to avoid dehydration and collapse of structures (Robinson 1976, Ungureanu 1972, Krogmann & Vilhelmsen 2006). During and after dissection, photography of macro- and microstructures in ethanol or glycerol can be challenging, as in most cases photos from specific angles or in natural positions are mandatory for useful comparisons (Wanke & Rajaei 2018).
Proper positioning of specimens in liquids can be frustrating, and results may not be satisfactory (Su 2016) due to the following disruptive factors, which can lead to artifacts during stacked photography: first, the evaporation of ethanol, which can generate a flow causing the specimen or dirt particles around it to drift; and second, any kind of vibration (Haselböck et al. 2018). To overcome these problems, different methods have been published so far. Flat objects can easily be fixed by placing them, within the liquid, on a slide covered with a cover glass. However, the pressure of the cover glass can also deform important structures (Wanke & Rajaei 2018). Su (2016) showed that drift and movement of specimens can be minimized by placing them in hand sanitizer gel. Similarly, different concentrations of agarose gel can be used (Haselböck et al. 2018). Wanke & Rajaei (2018) used a fixed tunnel-shaped holder to document important characters in the male genitalia of lepidopterans. However, when specimens or structures need to be photographed from a specific angle, all of the listed methods may be inefficient. Here, we present the Fixator, an easy and low-priced method to fix arthropod specimens of all sizes in almost any position. The method is described and demonstrated through three case studies: (1) documentation of a special structure of the male genitalia of geometrid moths; (2) genitalia dissection of a spider wasp; and (3) photography of the tibial apophysis of a spider from different angles.

Specimens used for the case studies

Lepidoptera: Geometridae ♂♂: Peribatodes secundaria (Denis & Schiffermüller, 1775), P. umbraria (Hübner, 1809), P. rhomboidaria (Denis & Schiffermüller, 1775)
Hymenoptera: Pompilidae ♂: Agenioideus usurarius (Tournier, 1889)
Arachnida: Theraphosidae ♂: Chaetopelma sp. Ausserer, 1871

Material

A Petri dish (preferably made of plastic, for more durable gluing), a plastic pipette, some nylon thread, a razor blade, super glue (based on ethyl cyanoacrylate (ECA)).

Step-by-step guide for building the Fixator

1. Cut two 3-mm pieces from the tips of two plastic pipettes (as thread holders). The nylon thread has to fit through the hole without being too loose (Fig. 1A).
2. Thread the nylon thread through one of the pipette tips, tie a knot, and glue the tip and nylon thread together (Figs 1B-D).
3. Glue the thread and thread holder to the floor of the Petri dish, to one side, and the other thread holder on the opposite side (the space left between the two holders can vary depending on the size of the object to be examined) (Fig. 1E).
4. Thread the nylon thread through the second thread holder (this forms a loop under which the object to be photographed will be placed) (Fig. 1F).
5. A small piece of plastic with a slit can be glued to the edge of the Petri dish (in case the structure needs to be very firmly fixed) (Fig. 2).
6. By pulling the nylon thread, the strength of fixation can be adjusted.

FIGURE 1. Construction of the Fixator. A. Cut pipette tips as thread holders of similar size. B. Pull the nylon thread through one thread holder. C. Tie a knot in one end of the nylon thread. D. Put a drop of super glue on the thread so that it runs into the tip; pull the thread at the free end. E. Glue both tips onto the surface of the Petri dish. F. Pull the nylon thread through the other thread holder. The gadget is now ready to use.

FIGURE 2. Overview of the fixing gadget. A. Top view; B. Angled view; C. Lateral view. Arrows indicate the thread-fixing plastic card; scale bar 1 cm.

Important note: depending on the size of the object, an underlay (a piece of plastic, plastazote or Styrofoam) should be placed beneath the thread, to enable strong tension.

Photography.
Objects and specimens were photographed using a Keyence VHX-5000 and a Visionary Digital photography system (LK Imaging System, Dun.Inc.).

Results & Discussion

(1) Case study: Fixation of male genitalia for photography in Lepidoptera

Genitalia structures in lepidopterans display species-specific diagnostic characters, which are important for their diagnosis (Scoble 1992, Hausmann & Viidalepp 2012). After dissection, the genitalia are often embedded on permanent slides (Robinson 1976), where the pressure of the cover glass may lead to rotation of the structure into an unnatural position and to an altered shape and/or position of several important characters (Wanke & Rajaei 2018). It is therefore necessary to photograph these structures before embedding takes place. In Peribatodes species, the uncus displays an important species-specific character, which is only visible in lateral view (Müller et al. 2019) (Fig. 3A). However, once the genitalia have been embedded, this character is no longer visible (Fig. 3B). Unfortunately, photography of the genitalia capsule in ethanol is often very difficult, due to variation of the angle or drift. For photography in ventral view, Wanke & Rajaei (2018) suggested fixation in a tunnel-shaped holder, which is in some cases also possible in lateral view. However, this is unfeasible for Peribatodes, whereas using the Fixator allows more comparable results. Here, the genitalia capsules of three Peribatodes species, P. secundaria, P. umbraria and P. rhomboidaria, were mounted using the suggested method (Fig. 3C-E), which allowed an easy comparison of the uncus across the different species (see Fig. 3): slightly swollen in P. secundaria, dorsally well extended and rectangular in P. umbraria, dorsally extremely extended in P. rhomboidaria (see Müller et al. 2019). The results show that mounting the genitalia capsule with the Fixator allows an easy comparison in lateral view.
Furthermore, the suggested method allows fast mounting of male and female genitalia in almost all positions before embedding in a permanent slide takes place.

FIGURE 3. Use of the Fixator for genitalia photography in Peribatodes spp. (Geometridae). A. Genitalia of P. secundaria fixed in lateral view. The capsule in this case is placed on a serrated plastic plate glued to a piece of plastazote. B. Embedded genitalia capsule of P. secundaria; the diagnostic shape of the uncus (framed) is not visible. C-E. Close-up photography of the uncus in lateral view (black arrows). C. P. secundaria; D. P. umbraria and E. P. rhomboidaria. Scale bars 1 mm.

(2) Case study: Removal of the subgenital plate and genitalia in male Hymenoptera for subsequent study

In male Hymenoptera, the morphology of the genitalia and subgenital plate provides useful information and has proven especially useful for species identification in Pompilidae (Day 1988, Wiśniowski 2009, Krogmann & Austin 2011). However, the removal of male genitalia, especially in small specimens, usually requires dexterity and practice (Day 1988). The specimen must be kept fixed in place while the genitalia structures are carefully prepared, avoiding any damage to the genitalia or to important external morphological structures. The Fixator facilitates the dissection without damaging the specimen during the fixation process. Figure 4 shows the extracted genitalia and subgenital plate of a specimen fixed, in ethanol, on the presented fixing device. The Fixator can be used to hold specimens in place to facilitate genitalia dissection, but also for photography of external and internal morphological structures in general.

(3) Case study: Fixation of spiders to image selected body parts

Spiders in collections are usually conserved in ethanol (Martin 1977).
Morphological features need to be photographed in a liquid environment to prevent desiccation of the specimen or setae from adhering to the body surface, especially in very hirsute species. Furthermore, for identification or species descriptions it is crucial to study several characters from a preassigned angle. For this purpose, small species or parts of specimens can easily be locked in position using agarose gel or a similar viscous liquid (Haselböck et al. 2018), but large specimens are often too refractory and unmanageable for efficient use of this method.

FIGURE 4. Lateral fixation of a male spider wasp for genitalia dissection (Hymenoptera: Pompilidae: Agenioideus usurarius). Use a pair of sharp Dumont 5 forceps to apply pressure and draw out the genitalia capsule and subgenital plate. A. Overview of the specimen mounted in the Fixator. The specimen is placed on a serrated plastic plate on plastazote; B. Genitalia in the natural position; C. Partly removed genitalia capsule; D. Fully removed genitalia capsule (a) and subgenital plate (b); black arrows indicate the genitalia. Scale bars 1 mm.

FIGURE 5. Fixation of a male spider (Chaetopelma sp.: Theraphosidae) to capture images of diagnostic structures situated on its appendages. A. Overview of the fixed specimen in ventral view: leg I is fixed, interfering appendages are positioned out of the way; scale bar 10 mm. B-D. Close-up images of the tibial apophysis showing two branches: (a) a long and curved retrolateral branch, wider in its distal portion and with an apical row of 15 spines, and (b) a short prolateral branch with an adjacent spine at its base; B. prolateral view; C. ventral view; D. retrolateral view. Scale bar 2.5 mm.

In tarantulas (Araneae: Theraphosidae), many taxonomically important characters are situated on the legs.
Males of many species possess tibial apophyses on their first pair of legs, which are used to hold the females che- licerae during mating. In Chaetopelma and closely related genera, the presence and shape of the tibial apophysis are diagnostic features (Guadanucci & Gallon 2008, Guadanucci & Wendt 2014). Using the Fixator described here, we were easily and rapidly able to take images of this structure on leg-I of a male Chaetopelma from three angles, prolateral, ventral and retrolateral, without removing it from the specimen (Fig. 5). Altering the position of the leg can be easily accomplished without damaging the specimen. Acknowledgements We thank Tanja Schweizer for testing the prototype of the Fixator. Many thanks to Daniel Whitmore for valuable comments on the manuscript and linguistic proof reading. We are thankful to the subject editor of Zootaxa Reza Za- hiri (Canada). Many thanks to Pasi Sihvonen (Finland) and an anonymous colleague for the review of the submitted version of the paper and their constructive comments. References Day, M. (1988) Spider Wasps. Hymenoptera: Pompilidae. In: Dolling, W.R. & Askew R.R. (Eds.), Handbooks for the Identifica- tion of British Insects. Vol. 6, Part 4. Royal Entomological Society of London, pp. 1–60. Dayrat, B. (2005) Towards integrative taxonomy. Biological Journal of the Linnean Society, 85 (3), 407–15. https://doi.org/10.1111/j.1095-8312.2005.00503.x Denis, M. & Schiffermüller, J.I. (1775) Ankündung eines systematischen Werkes von den Schmetterlingen der Wienergegend. Verlegts Augustin Bernardi, wen, 323 pp. Guadanucci, J.P.L. & Gallon, R.C. (2008) A revision of the spider genera Chaetopelma Ausserer 1871 and Nesiergus Simon 1903 (Araneae, Theraphosidae, Ischnocolinae). Zootaxa, 1753, 34–48. https://doi.org/10.11646/zootaxa.1753.1.2 Guadanucci, J.P.L. & Wendt, I. (2014) Revision of the spider genus Ischnocolus Ausserer, 1871 (Mygalomorphae: Theraphosi- dae: Ischnocolinae). 
Journal of Natural History, 48 (7–8), 387–402. https://doi.org/10.1080/00222933.2013.809492
Haselböck, A., Schilling, A.-K., Wendt, I. & Holstein, J. (2018) Alternative Methode zur manuellen Fixierung von flüssigkonservierten Arthropoden für die makroskopisch-fotografische Dokumentation. Arachne, 23 (1), 13–17.
Hausmann, A. & Viidalepp, J. (2012) Larentiinae I. In: Hausmann, A. (Ed.), The Geometrid Moths of Europe. Vol. 3. Apollo Books, Vester Skerninge, pp. 1–743.
Hübner, J. (1809) Sammlung europäischer Schmetterlinge, Horde 6. Pyralides-Zünsler. Augsburg, pls. 32.
Krogmann, L. & Vilhelmsen, L. (2006) Phylogenetic implications of the mesosomal skeleton in Chalcidoidea (Hymenoptera: Apocrita) – Tree searches in a jungle of homoplasy. Invertebrate Systematics, 20, 615–674. https://doi.org/10.1071/IS06012
Krogmann, L. & Austin, A. (2011) Systematic revision of the spider wasp genus Sphictostethus Kohl (Hymenoptera: Pompilidae: Pepsinae) in Australia with description of nine new species. Stuttgarter Beiträge zur Naturkunde A, Neue Serie, 4, 105–128.
Martin, J.E.H. (1977) Collecting, preparing and preserving insects, mites, and spiders. In: The Insects and Arachnids of Canada. Part 1. Biosystematics Research Institute, Canada, Ottawa, pp. 1–182.
Müller, B., Erlacher, S., Hausmann, A., Rajaei, H., Sihvonen, P. & Skou, P. (2019) Ennominae II. In: Hausmann, A., Rajaei, H., Sihvonen, P. & Skou, P. (Eds.), The Geometrid Moths of Europe. Vol. 6. Brill, pp. 1–906.
Mutanen, M. & Pretorius, E. (2007) Subjective visual evaluation vs. traditional and geometric morphometrics in species delimitation: a comparison of moth genitalia. Systematic Entomology, 32, 371–386. https://doi.org/10.1111/j.1365-3113.2006.00372.x
Padial, J.M., Miralles, A., De la Riva, I. & Vences, M. (2010) The integrative future of taxonomy. Frontiers in Zoology, 7 (1), 16. https://doi.org/10.1186/1742-9994-7-16
Robinson, G.S.
(1976) The preparation of slides of Lepidoptera genitalia with special reference to the Microlepidoptera. Entomologist's Gazette, 27, 127–132.
Scoble, M.J. (1992) The Lepidoptera: Form, Function and Diversity. The Oxford University Press, Oxford, 404 pp.
Su, Y.N. (2016) A simple and quick method of displaying liquid-preserved morphological structures for microphotography. Zootaxa, 4208 (6), 592–593. https://doi.org/10.11646/zootaxa.4208.6.6
Ungureanu, E.M. (1972) Methods for dissecting dry insects and insects preserved in fixative solutions or by refrigeration. Bulletin of the World Health Organization, 47, 239–244.
Wanke, D. & Rajaei, H. (2018) An effective method for the close-up photography of insect genitalia during dissection: a case study on the Lepidoptera. Nota lepidopterologica, 41 (1), 219–223. https://doi.org/10.3897/nl.41.27831
Wiśniowski, B. (2009) Spider-hunting wasps (Hymenoptera: Pompilidae) of Poland. Diversity, identification, distribution. Ojców National Park, 432 pp.

work_xamxx55fu5eofhzcc2xhvnduyq ----

Kim et al. BioMedical Engineering OnLine 2012, 11:37 http://www.biomedical-engineering-online.com/content/11/1/37
RESEARCH Open Access
Patterned thin metal film for the lateral resolution measurement of photoacoustic tomography
Do-Hyun Kim1*, Dong-Ho Shin2, Sang Hun Ryu2 and Chul-Gyu Song2*
* Correspondence: do-hyun.kim@fda.hhs.gov; song133436@gmail.com
1 Center for Devices and Radiological Health, U.S. Food and Drug Administration, 10903 New Hampshire Ave, Silver Spring, MD 20993, USA
2 Department of Electrical Engineering, Chonbuk National University, Jeonju, Jeonbuk 561-756, South Korea
Abstract
Background: Methods for assessing the image quality of photoacoustic tomography have not yet been fully standardized.
Due to the combined nature of photonic signal generation and ultrasonic signal transmission in biological tissue, neither traditional optical nor traditional ultrasonic methods can be used without modification. An optical resolution measurement technique was investigated for its feasibility for resolution measurement of photoacoustic tomography.
Methods: A patterned thin metal film deposited on silica glass provides high contrast in optical imaging due to high reflectivity from the metal film and high transmission through the glass. It also provides high contrast in photoacoustic tomography because the thin metal film absorbs pulsed laser energy. A US Air Force 1951 resolution target was used to generate a patterned photoacoustic signal for measuring the lateral resolution. A transducer with a bandwidth of 2.25 MHz was used, and the lateral resolution was measured with the sample submerged in water and embedded in a gelatinous block.
Results: Photoacoustic signal generated from a thin metal film deposited on glass can propagate along the surface or through the surrounding medium. A series of experiments with a tilted sample confirmed that the measured photoacoustic signal is the component propagating through the medium. The lateral resolution of the photoacoustic tomography system was successfully measured with the 2.25 MHz transducer: 0.33 mm in water and 0.35 mm in the gelatinous material. A chicken embryo was imaged to demonstrate biomedical applicability.
Conclusions: A patterned thin metal film sample was tested for its feasibility for measuring the lateral resolution of a photoacoustic tomography system. Lateral resolutions in water and gelatinous material were successfully measured using the proposed method. Measured resolutions agreed well with theoretical values.
Keywords: Photoacoustic tomography, Resolution measurement, Ultrasound imaging.
© 2012 Kim et al.; licensee BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Background
Photoacoustic (optoacoustic) imaging is an imaging modality that combines the physics of photonics (optics) and ultrasound (acoustics), providing the high contrast of an optical modality and the long signal delivery path of ultrasound [1]. Sub-microsecond optical pulses generate surface or subsurface ultrasound waves when the pulses irradiate a photon-absorbing structure in a biological sample. The generated acoustic waves are collected by an ultrasonic transducer, and the image is then reconstructed by a computational procedure. Photoacoustic tomography (PAT) conventionally refers to a photoacoustic imaging technique in which expanded laser irradiation excites a large area of the sample while a moving transducer or an array of transducers collects the acoustic waves [2]. In contrast to photoacoustic microscopy (PAM), in which focused laser irradiation excites a very small volume and the acoustic wave is detected by a focused transducer [3], image reconstruction in PAT resembles that of ultrasonography. For PAM, the traditional lateral resolution measurement method using a patterned resolution target such as the US Air Force (USAF) 1951 bar chart has been used [4]. One of the technical difficulties in PAT system development is the lack of a proper image target with which image quality parameters such as resolution can be adjusted and assessed.
Tissue-mimicking phantom development for photoacoustic imaging systems has recently been investigated [5,6]; however, resolution targets are not yet widely adopted. There has been an attempt to utilize photon-absorbing parallel lines printed on a transparency to measure the lateral resolution of a PAT system [7]. A resolution target for PAT must contain photon-absorbing fine patterns embedded in an optically non-absorbing material. Various PAT image reconstruction algorithms have been developed, which in general rely on analytical solutions to the photoacoustic wave equation and, for simplicity, assume that the samples are acoustically homogeneous [8]. However, differences in the acoustic impedance of hetero-materials affect the quality of image reconstruction [9]. In this study, lateral resolution measurement of a PAT system using a patterned thin metal film is discussed. The lateral direction in this study refers to the ultrasound propagation direction, which is perpendicular to the light propagation; note that the lateral direction in PAT imaging is the axial direction of reflective ultrasonography.
Methods
Identifying the ultrasound component
Laser-generated ultrasound in solids has been widely studied since the 1960s. The different regimes for the production of ultrasound in metals using lasers with wavelengths in the visible to near-infrared range are summarized extensively in Davies' review article [10]. Although the propagation of laser-generated ultrasound within the object where it was generated is well understood, its propagation in the surrounding media needs more study. The propagation of laser-generated ultrasound from a thin metal film deposited on a base material immersed in water is of particular interest in this study. Propagation of sound waves perpendicular to the surface of a chromium film under normal laser irradiation was studied extensively by Ko et al. [11,12].
Propagation of sound waves generated in a 0.4-mm-thick stainless steel plate immersed in water was observed in the transverse direction by Schlieren imaging [13]. It was demonstrated that various types of sound wave can be formed by different generation mechanisms. The strongest ultrasound signal is generated by surface vibration in the direction perpendicular to the surface, while a weaker ultrasound signal still propagates along the surface, mediated by the shear mode of ultrasound in the metal [13]. In this study, we investigate a patterned thin metal film (PTMF) deposited on a transparent plate for its feasibility as a resolution test target for PAT. A commercially available chromium film deposited on a 2-mm fused silica glass with the USAF 1951 resolution bar-chart pattern was used. Positive patterns (Edmund Optics, NT57-896) and negative patterns (Edmund Optics, NT57-895) were used to compare the effect of the phase of the generated ultrasound propagating through a medium. These particular PTMFs are not optimal test targets for PAT because some of their patterns are suitable only for optical characterization; however, they were chosen in this study for their convenience of acquisition. Optimal patterns for PAT are discussed in the Practical applications section. The major challenge in this work is to distinguish which component of the ultrasound generated on the metal film is delivered to the transducer for PAT image reconstruction. Details of the problem are illustrated in Figure 1. The strongest component of laser-generated ultrasound from the PTMF is the bulk wave propagating away from the PTMF, which acts as the sound source [11,12]. However, the shear wave (or surface acoustic wave) not only propagates within the fused silica plate, but also generates sound waves in the water travelling at the speed of the shear wave in the propagating direction [13].
Depending on the sound component detected by the transducer, the measured resolution of PAT using the PTMF will be either that for imaging in the medium (water) or that in the silica. The spatial resolution (Δλ_ultrasound) of a PAT system using a transducer with bandwidth Δf_transducer is determined by the relation:

Δλ_ultrasound = v_ultrasound / Δf_transducer,   (1)

where v_ultrasound represents the speed of sound in the medium. For a PAT system, it is commonly accepted that the lateral resolution of the system is half of Eq. (1) [14,15]. The speed of sound is a material property: it is ~3,800 m/sec in fused silica and ~1,500 m/sec in water. Since the speed of sound in fused silica is more than twice that in water, the resolution of a PAT system measuring sound propagating in fused silica will be more than twice as coarse as that in water. Identifying the correct sound component is therefore of great importance.
Figure 1 Illustration of the sound propagation in water. Ultrasound generated at the chromium strips propagates along the surface of the fused silica plate in the form of a shear wave (solid arrow) and in the form of a bulk wave (dashed arrow).
Figure 2 Schematic of the PAT setup. Solid black lines represent electrical connections, dashed lines represent laser paths, and the red line represents the ultrasound path. TR is the transducer. The PTMF sample is placed such that the metal film faces the incoming laser.
Experiments
The schematic of the experimental setup is shown in Figure 2. The imaging system is based on an Nd:YAG laser (Meditech, Eraser-k) with a pulse energy of 400 mJ, a center wavelength of 532 nm, and a pulse duration of 8 nsec.
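Eq. (1) and the half-of-Eq.-(1) lateral-resolution rule can be checked numerically. The sketch below is ours (the function names are not from the paper); it reproduces the resolution figures quoted for the 2.25 MHz transducer.

```python
def spatial_resolution_mm(speed_m_per_s, bandwidth_hz):
    """Eq. (1): spatial resolution = v_ultrasound / Δf_transducer,
    returned in millimetres."""
    return speed_m_per_s / bandwidth_hz * 1e3

def lateral_resolution_mm(speed_m_per_s, bandwidth_hz):
    """Lateral resolution of a PAT system: half of Eq. (1) [14,15]."""
    return spatial_resolution_mm(speed_m_per_s, bandwidth_hz) / 2.0

# Values from the paper: 2.25 MHz bandwidth; sound speeds in m/sec.
BANDWIDTH_HZ = 2.25e6
for name, v in [("water", 1500.0), ("fused silica", 3800.0), ("gel", 1580.0)]:
    print(f"{name}: {lateral_resolution_mm(v, BANDWIDTH_HZ):.2f} mm")
# water -> 0.33 mm, gel -> 0.35 mm, matching the measured values reported below.
```

This also makes the silica-versus-water caveat concrete: at 3,800 m/sec the lateral resolution would be ~0.84 mm, more than twice as coarse as the 0.33 mm obtained in water.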
A transducer (Panametrics-NDT, V323) with a bandwidth of 2.25 MHz was rotated around the sample using a step motor connected to a belt-pulley assembly to obtain a two-dimensional (2D) PAT image. 2D PAT imaging requires a full rotation (360 degrees) of the transducer around the sample at 1.2 degrees per step, i.e. 300 step-rotations. The ultrasonic signal collected by the transducer was amplified and transferred to an oscilloscope (Tektronix, TDS2002), and then to a controlling personal computer via the GPIB communication protocol. Image reconstruction was performed using LabVIEW (National Instruments) and Matlab (MathWorks) software. A uniform beam profile was ensured by choosing a proper diffuser.
Results and discussion
Measurement of lateral resolution in water
Shown in Figure 3(a) is the positive USAF 1951 resolution target used as the PTMF sample. Figure 3(b) shows the 2D PAT image of the region in the proximity of Group 0, Elements 2–6. The PTMF sample and the transducer were immersed in water. As can be seen in Figure 3(b), all the Elements (2–6) of Group 0 are clearly distinguished. Other elements did not produce clean images, either because their sizes were beyond the system resolution limit or because of interference from multiple reflections off the glass plate edges.
Figure 3 (a) Digital photograph and (b) a full 360-degree PAT image of the positive PTMF sample. Patterns within the white rectangle represent Elements 2–6 of Group 0. The dashed arrow represents the direction of the 1D intensity profile measured by the transducer at a single fixed position.
For the measurement of lateral resolution, measuring only the one-dimensional (1D) distribution of signal amplitude along the longitudinal direction (dashed arrow in Figure 3(b)) provides sufficient information on the lateral resolution of the PAT system [14].
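The 300-step circular scan described above can be sketched in a few lines. This is a minimal illustration of the geometry only, not the authors' LabVIEW/Matlab code, and the scan radius is a made-up example value.

```python
import math

def transducer_positions(radius_mm, step_deg=1.2):
    """(x, y) positions of the transducer for a full 360-degree rotation
    around the sample at a fixed angular step (1.2 deg -> 300 steps)."""
    n_steps = round(360.0 / step_deg)
    positions = []
    for k in range(n_steps):
        theta = math.radians(k * step_deg)
        positions.append((radius_mm * math.cos(theta),
                          radius_mm * math.sin(theta)))
    return positions

# Hypothetical 50 mm scan radius (the paper does not state the radius).
pos = transducer_positions(radius_mm=50.0)
print(len(pos))  # 300 step-rotations, as in the paper
```

At each of the 300 positions one amplified A-line is digitized by the oscilloscope; the reconstruction then maps these time traces back into the 2D image plane.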
Compared to 2D imaging, acquiring only 1D data significantly reduces the total measurement time. Figures 4(a) and 4(b) show the 1D signal from the positive PTMF sample placed on a transparent gelatinous block in water. Since the main purpose of this study is to distinguish the number of peaks in the 1D intensity graphs, intensity and time scales are not specified in the 1D graphs. In this proof-of-concept study, a high-energy laser (400 mJ) and non-scattering media were used to increase the contrast of the signal; thus, variation between signal peaks was not significant enough to necessitate signal processing such as averaging. Nevertheless, signals were averaged 5 times to reduce random noise from the transducer and laser. The speed of sound in water is ~1,500 m/sec, which gives a theoretical axial resolution of 0.33 mm for a transducer with 2.25 MHz bandwidth. Signals from Elements 2–6 of Group 0 are shown in Figure 4(a); all the elements of Group 0 were distinguished. Element 2 of Group 0 has 1.12 line-pairs-per-mm (lpm), so the distance between the peaks should be 0.89 mm. After converting the time scale of the data to a distance scale using the speed of sound, the measured distance between the two peaks from Element 2 of Group 0 was 0.90 mm, which is close to the actual value. For a PAT system with a lateral resolution of 0.33 mm, three peaks from the element with 2.83 lpm (Element 4 of Group 1) should be distinguishable, while the peaks from the element with 3.17 lpm (Element 5 of Group 1) will start to merge with each other. Figure 4(b) shows the 1D signal from the Group 1 patterns of the same positive PTMF sample. As can be seen from the figure, three diminished peaks are observed in Element 4, while only two peaks are observed in Element 5. Thus, the PAT system resolution in water is between 0.32 mm and 0.35 mm.
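The line-pair frequencies quoted above (1.12, 2.83 and 3.17 lpm) follow the standard USAF 1951 chart relation lpm = 2^(group + (element − 1)/6), and the expected peak spacing is its reciprocal. A short check (the helper names and the 0.6 µs time interval are ours, for illustration):

```python
def usaf_lpm(group, element):
    """Line pairs per mm of a USAF 1951 target element:
    lpm = 2 ** (group + (element - 1) / 6)."""
    return 2 ** (group + (element - 1) / 6)

def peak_spacing_mm(group, element):
    """Expected distance between adjacent bar peaks = one line-pair width."""
    return 1.0 / usaf_lpm(group, element)

print(round(usaf_lpm(0, 2), 2))         # 1.12 lpm (Group 0, Element 2)
print(round(peak_spacing_mm(0, 2), 2))  # 0.89 mm
print(round(usaf_lpm(1, 4), 2))         # 2.83 lpm (Group 1, Element 4)
print(round(usaf_lpm(1, 5), 2))         # 3.17 lpm (Group 1, Element 5)

# Time-to-distance conversion used for the measured 0.90 mm figure:
# distance = v_water * Δt (Δt here is a hypothetical 0.6 µs peak separation).
v_water_m_per_s = 1500.0
dt_s = 0.6e-6
print(round(v_water_m_per_s * dt_s * 1e3, 2))  # 0.9 mm
```
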
Considering the limited range of patterns available, the resolution measured from the PTMF is close to the theoretical resolution limit (0.33 mm).
Figure 4 Longitudinal 1D transducer signals from the positive PTMF sample: (a) Elements 2–6 of Group 0; (b) Elements 1–6 of Group 1. The scales of the x-axes of (a) and (b) are identical. Dashed circles indicate the signal from the largest element of each group.
Figures 5(a) and 5(b) show the 1D signal from the negative PTMF sample placed on a transparent gelatinous block. Signals were averaged 5 times. In contrast to the positive PTMF, in which chromium bars are deposited on a transparent fused silica plate, the surface of the negative PTMF sample is covered with a thin chromium film while the bars (patterns) are left uncovered. The terms positive and negative are adapted from traditional photographic film. The negative PTMF was tested for comparison because it has a wider coverage of chromium film, from which the photoacoustic signal is generated. After signal acquisition, the 1D signal was inverted for direct comparison with that from the positive PTMF. Compared to the results shown in Figure 4, the results from the negative PTMF shown in Figure 5 revealed a higher signal-to-noise ratio due to the increased signal level from the larger chromium area. All the peaks of the Group 0 elements were clearly distinguished in Figure 5(a). For Group 1, three diminished peaks are observed in Element 4, while only two peaks are observed in Element 5, as can be seen in Figure 5(b). Thus, the PAT system resolution in water is between 0.32 mm and 0.35 mm. This measurement result is essentially the same as that from the positive PTMF.
Identifying the sound component
In the Measurement of lateral resolution in water section, the lateral resolution of the PAT system was measured using PTMF samples under the assumption that the laser-generated ultrasound propagates only in the surrounding water. According to Don-Liyanage's study [13], the shear wave generated by the surface acoustic wave in a solid propagates in all directions. However, compared to the bulk wave, which propagates with the same speed in all directions, the shear wave travels faster in the direction parallel to the surface. Thus, if the detected signal comes from the shear wave, the measured resolution will depend on the angle between the transducer and the surface; if no dependency is observed, the detected signal is the bulk wave. To identify which component of the sound was detected in this experiment, the PTMF sample was tilted by an angle θ (see Figure 2) such that the patterned metal film faced toward the transducer. The transducer axis still pointed toward the center of the PTMF sample.
Figure 5 Inverted 1D transducer signals from the negative PTMF sample: (a) Elements 2–6 of Group 0; (b) Elements 1–6 of Group 1. The scales of the x-axes of (a) and (b) are identical. Dashed circles indicate the signal from the largest elements of each group.
The lateral distance between the bars of the pattern decreases by a factor of cos(θ). For example, for tilt angles of θ = 30 degrees and θ = 45 degrees, the minimal spacing of bar patterns that can be distinguished by the PAT system will be 0.38 mm (= 0.33/cos 30°) and 0.47 mm (= 0.33/cos 45°), respectively. The minimum spacing resolvable by Elements 3 and 4 of Group 1 ranges from 0.35 mm to 0.40 mm, so Element 3 or 4 of Group 1 should be distinguishable for a 30-degree tilted PTMF sample.
The minimum spacing resolvable by Elements 1 and 2 of Group 1 ranges from 0.47 mm to 0.50 mm, so Element 1 or 2 of Group 1 should be distinguishable for a 45-degree tilted PTMF sample. Figures 6(a) and 6(b) show the 1D signal from the negative PTMF sample tilted 30 degrees toward the transducer. Signals were averaged 5 times. All the elements of Group 0 were distinguished, as can be seen in Figure 6(a). Elements 1 and 2 of Group 1 were clearly distinguished, as can be seen in Figure 6(b). The prediction was that the peaks would collapse between Elements 3 and 4 of Group 1; however, the peaks of Element 3 of Group 1 were observed to be collapsed. The spacing between the bars of Elements 2 and 3 of Group 1 ranges from 0.40 mm to 0.45 mm, which gives a range for the tilt angle (θ) of 34 to 42 degrees. This error range is broad, which is a disadvantage of using a readily available USAF 1951 resolution target for PAT measurement. Measurement error in θ may have been introduced by measuring the angle from outside the water tank. The actual angle between the surface of the PTMF sample and the transducer axis should therefore be 35 degrees rather than 30 degrees. Figures 7(a) and 7(b) show the 1D signal from the negative PTMF sample tilted 45 degrees toward the transducer. Signals were averaged 5 times. Contrary to the prediction that elements up to Element 1 or 2 of Group 1 would be distinguished, only the elements of Group 0 were distinguished. Adopting the 5-degree error inferred from the 30-degree tilt results, the actual tilt angle is assumed to be 50 degrees rather than 45 degrees. For θ = 50 degrees, the bar spacing that corresponds to a 0.33 mm lateral spacing is 0.51 mm, which lies between Element 1 of Group 1 and Element 6 of Group 0. Figure 7(b) shows that none of the peaks from the Group 1 patterns was distinguished, so the assumption of a 5-degree measurement error is still valid.
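The tilt-angle arithmetic above can be verified in a few lines. A minimal sketch (function names are ours), using the 0.33 mm water resolution from the paper:

```python
import math

RES_WATER_MM = 0.33  # measured lateral resolution of the PAT system in water

def min_resolvable_spacing_mm(tilt_deg, resolution_mm=RES_WATER_MM):
    """A pattern tilted by θ appears foreshortened by cos(θ), so the smallest
    true bar spacing the system can still resolve grows as 1/cos(θ)."""
    return resolution_mm / math.cos(math.radians(tilt_deg))

def implied_tilt_deg(spacing_mm, resolution_mm=RES_WATER_MM):
    """Inverse relation: the tilt angle implied by the smallest resolved
    bar spacing, θ = arccos(resolution / spacing)."""
    return math.degrees(math.acos(resolution_mm / spacing_mm))

print(round(min_resolvable_spacing_mm(30), 2))  # 0.38 mm
print(round(min_resolvable_spacing_mm(45), 2))  # 0.47 mm
print(round(min_resolvable_spacing_mm(50), 2))  # 0.51 mm
# Bar spacings of 0.40-0.45 mm imply a tilt of roughly 34-43 degrees,
# matching the broad error range discussed above.
```
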
Figure 6 Inverted 1D transducer signals from the negative PTMF sample tilted 30 degrees toward the transducer: (a) Elements 2–6 of Group 0; (b) Elements 1–6 of Group 1. The scales of the x-axes of (a) and (b) are identical. Dashed circles indicate the signal from the largest elements of each group.
Figure 7 Inverted 1D transducer signals from the negative PTMF sample tilted 45 degrees toward the transducer: (a) Elements 2–6 of Group 0; (b) Elements 1–6 of Group 1. The scales of the x-axes of (a) and (b) are identical. Dashed circles indicate the signal from the largest element of each group.
From the two validation experiments with tilted samples, it is concluded that the ultrasound signals measured in this study propagated at the same speed regardless of the location of the transducer relative to the sample surface. This proves that the observed ultrasound was the bulk wave. The intensities of the ultrasound signals did not vary dramatically for different transducer locations (sample tilt angles), which is further evidence that the observed ultrasound was not the shear wave. The spherical bulk wave is the mode of shock-wave propagation in this study. The resolution measured by the proposed method is therefore the resolution in the medium in which the sample is immersed, rather than the resolution in the sample block.
Practical applications
A practical design of a resolution target for PAT systems is discussed here. Although this study showed that a patterned thin metal film deposited on a glass plate can be effectively utilized for measuring the lateral resolution of a PAT system in a medium, the commercially available USAF 1951 target needs modification for practical reasons. First, small patterns designed for optical resolution measurements induce unnecessary interferences, as was shown in Figure 3(b).
Second, although the speed of sound in biological tissue (for example, 1570 m/sec in liver) [16] is close to that in water (1480 m/sec at room temperature) [17], there is still a discrepancy between the two values. A practically useful resolution target for PAT could be made from a PTMF plate with only a limited number of patterns, embedded in a block of transparent material (or tissue-mimicking scattering material) whose speed of sound is similar to that of biological tissue. The speed of sound in gel material is 1580 m/sec [16], so the resolution of the PAT system for gelatinous material is close to that for some biological tissues. Figure 8(a) shows a digital photograph of the suggested resolution target. Only one group of patterns from the negative PTMF sample was cut into a piece so that interference from unresolvable smaller patterns could be prevented. To reflect the resolution of the PAT system for biological tissue, the PTMF piece was embedded in a gelatinous block in which the speed of sound is similar to that of tissues. Figure 8(b) shows the 2D PAT image from the suggested resolution target. The image reconstruction process was the same as that used in obtaining Figure 3(b); however, the signals were inverted because of the negative pattern on the PTMF sample.
Figure 8 (a) Digital photograph of the suggested resolution target. A piece of PTMF sample with only Group 0 patterns was embedded in a gelatinous block; (b) 2D reconstruction of the PAT image from the suggested resolution target. The inverted signal was used for this particular resolution target made from the negative PTMF sample.
Although some unnecessary patterns still remained due to errors in cutting, Figure 8(b) shows that the acquired 2D PAT image of the Group 0 patterns is clean enough to distinguish all 5 elements.
For actual preparation of such resolution targets, line pairs with a wider variety of spatial frequencies may be used. Figures 9(a) and 9(b) show the 1D signal from the suggested resolution target. Signals were averaged 5 times. All the elements of Group 0 were distinguished, as can be seen in Figure 9(a). Element 2 of Group 1 showed three distinct peaks in Figure 9(b), while Element 3 of Group 1 showed slightly diminished peaks. In water, peaks started to collapse at Element 4 of Group 1, as was shown in Figure 5(b). This is because the lateral resolution of the PAT system for gelatinous material is 0.35 mm, which is slightly poorer than that for water (0.33 mm). The results coincide well with the theoretical prediction. The developed PAT system was thus fully characterized for lateral resolution and exhibited a resolution that agrees well with the theoretical value. Looking toward future work, the PAT system was tested for a biomedical application. Figure 10 shows a PAT image of a chicken embryo on day 12. The embryo was extracted from the egg shell and placed on a transparent gelatinous block. The whole sample was submerged in water during image acquisition. The transducer had a bandwidth of 2.25 MHz. At this stage, a chicken embryo starts to develop feathers on the tail, and the developing toes reach a thickness close to 1.0 mm. These fine structures can only be identified by an imaging system with adequate resolution.
Figure 9 Inverted 1D transducer signals from the negative PTMF sample embedded in a gelatinous block: (a) Elements 2–6 of Group 0; (b) Elements 1–6 of Group 1. The scales of the x-axes of (a) and (b) are identical. Dashed circles indicate the signal from the largest element of each group.
Figure 10 Chicken embryo of day 12 imaged by the developed PAT system. The sample was submerged in a water bath.
The current PAT system, with a measured lateral resolution of 0.33 mm in water, demonstrated the required resolving power, as can be seen in Figure 10.
Conclusions
A patterned thin metal film deposited on a fused silica plate was investigated for the measurement of the lateral resolution of a PAT system. Propagation of the bulk ultrasound wave was confirmed by this proof-of-concept experiment. Lateral resolutions measured by this method for different materials agreed well with the theoretical limit. The lateral resolution measured by this method is the resolution of the PAT system in the medium, rather than the resolution in the sample material. A practically useful resolution target block was suggested and tested. To the best of the authors' knowledge, this is the first systematic experimental confirmation using a resolution target that the lateral resolution of a PAT system (with a fixed transducer bandwidth) is half of the theoretical spatial resolution expressed by Eq. (1). Although the controlled experimental results presented in this study, using a high-power laser and non-scattering media, successfully proved the concept, a quantitative method for discriminating signal peaks will be studied and suggested in future work so that the method can be fully standardized and validated for biological samples or tissue-simulating phantoms.
Competing interests
The authors declare that they have no competing interests.
Acknowledgement
(DHS, SHR, CGS) This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MEST) (No. 2011–0030075, 2011–0030781). This research was financially supported by the Ministry of Knowledge Economy (MKE), Korea Institute for Advancement of Technology (KIAT) and Gangwon Leading Industry Office through the Leading Industry Development for Economic Region.
(DHK) Disclaimer: The mention of commercial products, their sources, or their use in connection with material reported herein is not to be construed as either an actual or implied endorsement of such products by the Department of Health and Human Services.
Authors' contributions
DHK proposed the idea, analysed the data, and composed the manuscript. DHS and SHR performed the experiments. CGS supervised and provided support to DHS and SHR. All authors read and approved the final manuscript.
Received: 30 April 2012 Accepted: 13 July 2012 Published: 13 July 2012
References
1. Xu M, Wang LV: Photoacoustic imaging in biomedicine. Rev Sci Instrum 2006, 77:041101.
2. Wang X, Pang Y, Ku G, Stoica G, Wang LV: Three-dimensional laser-induced photoacoustic tomography of mouse brain with the skin and skull intact. Opt Lett 2003, 28:1739–1741.
3. Zemp RJ, Song L, Bitton R, Shung KK, Wang LV: Realtime photoacoustic microscopy in vivo with a 30-MHz ultrasound array transducer. Opt Express 2008, 16:7915–7928.
4. Maslov K, Zhang HF, Hu S, Wang LV: Optical-resolution photoacoustic microscopy for in vivo imaging of single capillaries. Opt Lett 2008, 33:929–931.
5. Li X, Xi L, Jiang R, Yao L, Jiang H: Integrated diffuse optical tomography and photoacoustic tomography: phantom validations. Biomed Opt Express 2011, 2:2348–2353.
6. Cook JR, Bouchard RR, Emelianov SY: Tissue-mimicking phantoms for photoacoustic and ultrasonic imaging. Biomed Opt Express 2011, 2:3193–3206.
7. Wang X, Xu Y, Xu M, Yokoo S, Fry ES, Wang LV: Photoacoustic tomography of biological tissues with high cross-section resolution: reconstruction and experiment. Med Phys 2002, 29:2799–2805.
8. Yuan Z, Zhang Q, Jiang H: Simultaneous reconstruction of acoustic and optical properties of heterogeneous media by quantitative photoacoustic tomography. Opt Express 2006, 14:6749–6754.
9.
doi:10.1186/1475-925X-11-37

Cite this article as: Kim et al.: Patterned thin metal film for the lateral resolution measurement of photoacoustic tomography. BioMedical Engineering OnLine 2012, 11:37.
Coral Reefs, Journal of the International Society for Reef Studies, ISSN 0722-4028, DOI 10.1007/s00338-013-1033-1

REPORT

Ground-level spectroscopy analyses and classification of coral reefs using a hyperspectral camera

T. Caras · A. Karnieli

Received: 16 May 2012 / Accepted: 15 March 2013
© Springer-Verlag Berlin Heidelberg 2013

Abstract With the general aim of classification and mapping of coral reefs, remote sensing has traditionally been more difficult to implement than its terrestrial equivalents. Images used for the marine environment suffer from environmental limitations (water absorption, scattering, and glint); sensor-related limitations (spectral and spatial resolution); and habitat limitations (substrate spectral similarity). Presented here is an advanced approach for ground-level surveying of a coral reef using a hyperspectral camera (400–1,000 nm) that is able to address all of these limitations. Used from the surface, the image includes a white reference plate that offers a solution for correcting the water column effect.
The imaging system produces millimeter-size pixels and 80 relevant bands. The data collected have the advantages of both a field point spectrometer (hyperspectral resolution) and a digital camera (spatial resolution). Finally, the availability of pure-pixel imagery significantly improves the potential for substrate recognition in comparison with the mixed pixels traditionally used in remote sensing. In this study, an image of a coral reef table in the Gulf of Aqaba, Red Sea, was classified, demonstrating the benefits of this technology for the first time. Preprocessing includes testing of two normalization approaches, three spectral resolutions, and two spectral ranges. Trained classification was performed using a support vector machine that was manually trained and tested against a digital image that provided empirical verification. For the classification of 5 core classes, the best results were achieved using a combination of a 450–660 nm spectral range, 5 nm wide bands, and red-band normalization. Overall classification accuracy was improved from 86 % for the original image to 99 % for the normalized image. Spectral resolution and spectral range seemed to have a limited effect on the classification accuracy. The proposed methodology and the use of automatic classification procedures can be successfully applied for reef survey and monitoring and even upscaled for a large survey.

Keywords Spectroscopy · Classification · Remote sensing · Hyperspectral · Glint · Monitoring · Survey

Introduction

Remotely sensed spectral data analysis has the potential to become a cost-effective practice for reef investigation, assessment, and monitoring (e.g., Mumby et al. 2004; Knudby et al. 2007; Collin and Planes 2012). The most desired application for remote sensing analysis of a coral reef is usually a basic classification (or thematic mapping), whereby the researcher wishes to quantify or map substrates in the study area (Johansen et al.
2008; Kendall and Miller 2008; Leiper et al. 2012). However, the integrity of remotely sensed results is often questionable, as three main sources of error obfuscate the image processing: (1) analysis of marine habitats is confounded by the optical effect of the water and air columns above the substrate of interest (e.g., Pope and Fry 1997; Holden and LeDrew 2000; Hedley et al. 2010); (2) spatial, spectral, and noise limitations are imposed by the sensor itself (e.g., Yamano and Tamura 2004; Kendall and Miller 2008); and (3) similarities between the reflectance of the desired substrates may cause errors in their identification (e.g., Hedley et al. 2004; Knudby et al. 2007). These three limitations are detailed below.

Communicated by Geology Editor Prof. Bernhard Riegl

T. Caras (corresponding author) · A. Karnieli
The Remote Sensing Laboratory, Jacob Blaustein Institutes for Desert Research, Ben-Gurion University of the Negev, Sede-Boker Campus, 84990 Beersheba, Israel
e-mail: divething@yahoo.com

Unlike terrestrial substrates, marine substrates are limited in their available spectral range to somewhere between 400 and 700 nm, depending on water quality and clarity (e.g., Smith and Baker 1981; Pope and Fry 1997; Zheng et al. 2002). Predominantly, the effect of the water column is that it scatters the shorter-wave section of the spectrum (up to about 450 nm) and absorbs most of the available light above 600 nm (e.g., Lee et al. 1999; Woźniak et al. 2010). Understanding the effects of water and their correction is a formidable challenge and has been the subject of numerous studies; these complexities and their potential solutions are beyond the scope of this paper. Regardless, within the spectral range available through water, spectral features are concentrated between 550 and 700 nm (e.g., Hedley and Mumby 2002; Kutser et al. 2003; Hochberg et al. 2006).
Furthermore, it is important to note that even with a good water-correction procedure, the analysis is confined to a maximum of 3–6 m water depth, beyond which very little light above 600 nm leaves the water (Kutser et al. 2003; Hedley et al. 2010). Another environmental effect, caused by the air–water interface, is glint. This element depends on lighting directionality in relation to the sensor position and is therefore often referred to as specular reflection (Kay et al. 2009). Glint can be produced by waves or wavelets that, at a micro-scale, can change the water surface angle, thereby producing a localized glint-like effect. In affected samples (pixels or areas in the image), the measured reflection is significantly higher than in other (e.g., neighboring) samples. The correction for glint is relatively simple and is based on normalizing affected pixels to unaffected pixels using a non-water-penetrating wavelength in the near-infrared (NIR) spectral range (Hochberg et al. 2003a; Hedley et al. 2005; Kay et al. 2009).

Spatial resolution (i.e., pixel size, the area of land covered by each remotely sensed unit) and spectral resolution (the number of spectral bands and their width) are usually limited by the ability of the sensor to collect enough energy (Mather and Koch 2004). Put together, even without the effect of water absorbance, the total available radiance (light reflected from the substrate) has to be balanced, or optimized, between the unit area (pixel) and the number of bands. To this end, with the available technology, it is virtually impossible to achieve both high spectral resolution and high spatial resolution. Therefore, to date, anyone who aims to study the reef using remote sensing essentially has to choose between high spatial resolution (with low spectral resolution) and high spectral resolution (coupled with low spatial resolution). The choice in this dilemma is fundamentally dependent on the scientific task at hand.
A coral reef is a highly heterogeneous habitat in which even the finest spatial resolution available from spaceborne systems (1–4 m) is unlikely to contain a single substrate per pixel (Andrefouet et al. 2002; Hochberg and Atkinson 2003; Leiper et al. 2012). Not having a single substrate in a pixel makes the classification infinitely more difficult, as the combination of substrates present may not necessarily contribute linearly to the pixel's reflection (Joyce and Phinn 2002; Hedley 2004). Studies attempting to address this problem have used a combination of solutions, such as sub-pixel analysis (e.g., Hedley et al. 2012), or limiting classification resolution to specific, highly detectable substrates, such as bleaching (Wooldridge and Done 2004; Dekker et al. 2005; Dadhich et al. 2012). Many of the solutions suggested above can be undertaken only with a high spectral resolution that, in turn, limits the worker to a coarser spatial resolution (from tens of meters upwards) (Sterckx et al. 2005; Kutser et al. 2006; Hedley et al. 2012). Reef substrates' spectral separability is another subject that has drawn considerable attention (e.g., Hochberg et al. 2003b; Lee et al. 2007; Leiper et al. 2012). Most examples of spectral analysis rely on the benefits of hyperspectral resolution and the ability to detect unique spectral features of pure substrate measurements (e.g., Hochberg et al. 2003b; Karpouzli et al. 2004; Hamylton 2009). Examples of coral reef pure substrate spectra usually come from in situ sampling using a hand-held spectrometer (Holden and LeDrew 2002; Wettle et al. 2003). To date, only Hochberg and Atkinson (2000) have used practically pure pixels for their aerial remote sensing analysis, despite a pixel size of 0.5–0.9 m. This was possible because the substrate unit and area coverage were very homogeneous and do not represent a typical case study for habitats such as coral reefs. An alternative for coral reef remote sensing is in situ digital photography.
Taken underwater or above, it can provide very high spatial resolution (unmatched by remote sensing) but only three bands. Although these are highly accurate pure-pixel images, their analysis is limited by the minimal spectral resolution. Therefore, a typical data analysis of digital photography requires lengthy manual post-processing and is manpower-intensive for large-scale coral reef monitoring (English et al. 1997). That said, the potential for extracting high-quality quantitative data, such as species identification or accurate percent cover, could be good if the worker is well trained and manpower is not limited. Recent developments in sensor production provide innovative remote sensing solutions, tested in this study for the first time. The use of a hyperspectral camera from the surface offers an opportunity to deliver a potential solution for all three sources of error discussed earlier: (1) Water correction can be addressed by placing white reference targets within the image (atmospheric correction is unnecessary) (Mather and Koch 2004); (2) the hyperspectral camera is very similar to the AISA Eagle aerial sensor, featuring the same spectral resolution and up to 80 bands within the relevant range. This spectral resolution far exceeds the resolution recommended for coral reef determination (e.g., Lubin et al. 2001; Hochberg et al. 2003b; Kutser et al. 2003). While this spectral resolution exists in other aerial sensors such as AISA Eagle and CASI, the spatial resolution is unmatched.
Spatial resolution (pixel size) is expected to be finer than the target substrate units (e.g., a coral colony), thus delivering a pure substrate spectrum in each pixel (much like a digital image); and (3) the combination of high spectral and high spatial (pure-pixel) resolution would allow superior recognition of underwater substrates, leading to accurate quantitative estimates of cover within the sampled area (the image). To date, hyperspectral imaging data of this type have never been used in the shallow marine environment. The approach resembles a combination of a field point spectrometer, which provides hyperspectral resolution, and a digital camera, which produces high spatial resolution. Using the proposed technology may lead to the development of an image acquisition and processing system that enables the analysis of reef features in a fast, accurate, and efficient way. An advanced and improved design would support a semi-automatically run system with minimal operator input over larger areas. All these reasons together formed a strong rationale for testing the capabilities of the proposed technology. The aim of this study was to provide a preliminary assessment of ground-level, above-water remote sensing of a coral reef. The objectives included acquiring a hyperspectral image, correcting environmental distortions, and classifying the underwater substrates. In line with the aim, the key objective was achieving the best (most accurate) classification for the given image, using only basic processing steps. Given the pioneering level of the processing protocol, the objectives included testing a variety of variables relevant to image preprocessing, including spectral resolution and spectral ranges.

Methods

Study site

The coral-reef marine park (CRMP, 29°33′N 34°57′E) is located at the northern end of the Gulf of Aqaba, 8 km south of the city of Eilat, Israel.
The study site is categorized as fringing reef and is relatively small; the reef table is approximately 25 m wide and 2 km long. The choice of location was based on the availability of an over-water dry structure and the relatively flat reef table.

Image acquisition

The hyperspectral image was obtained from the jetty (bridge) over the reef (Fig. 1). To provide the best results, the image was acquired during the late morning, when the sun was at an angle that avoided the glint effect. Other conditions included low wind (approximately 5 knots), and the water was close to low tide, so the average depth was 30 cm (ranging from 20 to 50 cm). The instrument used was a pushbroom line scanner, the Spectral Camera HS by Specim Systems. It was fixed on a boom, overhanging the reef as close as possible to the nadir position. The camera's 28° lens, opening at 2.5 m above the target, captured approximately 2 × 3 m of the reef table, and the acquisition time was near 32 s. The camera provides 1,600 pixels per line (the image width); divided by the 2 m of image width, this gives, on average, 1.25 mm of reef substrate area per pixel. Lengthwise, the camera is able to provide up to 2,500 lines, although only 1,400 of those were captured to minimize image distortion at the edges. Spectrally, 849 bands are captured in the spectral range of 400–1,000 nm with a 0.67–0.74 nm band width. For demonstrating the technique, a subset of 500 × 500 pixels was selected from the entire image in order to minimize pixel stretching due to camera angle (Fig. 2a, b). The image included a plastic quadrate frame and two white reference plates: one at the reef table depth (Fig. 2a) and another at the water surface (not shown).

Fig. 1 The construction of the hyperspectral camera on the bridge. At this fully extended boom height, the camera can cover 3 × 6 m of reef

Image preprocessing and analysis

A digital number (DN) is a value assigned to a pixel in a digital image that depicts the average radiance of the basic picture element (pixel). The white reference needed for water correction was an extracted spectrum of the underwater white reference plate placed within the image at the reef depth. Dividing the DN values by those of the white reference converts all irradiance values into reflectance values, completing the image preparation (Lillesand and Kiefer 2003; Mather and Koch 2004). Measuring that plate underwater offers an opportunity to correct for the effect of the water medium on light passing through. This is based on the assumption that, just as on land, those plates provide a baseline measurement representing all the light that can be reflected from the target in those conditions. Since the image analysis is undertaken in reflectance values and the correction was based on an image-derived white reference, radiometric correction is an unneeded step and was omitted from the image preparation sequence. Altogether, three images were created to test the spectral resolution parameter. The original image was spectrally resampled to three spectral resolutions: 5 nm bands, 10 nm bands, and 20 nm bands, reducing the original number of bands between 400 and 800 nm to 80, 40, and 20, respectively. The final step of the preprocessing focused on addressing the severe variability in spectral albedo, including the glinting effects caused by surface ripples (Fig. 2c). Those glinted pixels differed from their surrounding pixels both in their reflectance magnitude and in their spectral features in the NIR spectral range (Fig. 3a). Instead of filtering out high-albedo pixels and imposing a uniform correction approach on the darker pixels as well as the light ones, a normalization procedure was employed. This processing adopted the deglinting method described by Hochberg et al. (2003a) and Hedley et al. (2005).
This correction is based on using the water opacity in the NIR wavelengths (where reflectance values are thus independent of water absorption) as a baseline for normalization. In this respect, it is similar to more familiar normalization techniques based on the mean or mode of all bands. In the first stage of this routine, each band is divided by a specific near-infrared (NIR) band (not specified). The second stage of this method normalizes all pixels to a certain minimum value, originally termed NIR min but referred to as 'min' from here on. The min value is derived from non-glinted pixels selected manually by the operator. In this step, the linear relationship created earlier is reversed by simple multiplication using the new min value (instead of the original NIR band used for division). The output is an image in which all pixels have a uniform baseline based on the min derived from a non-glinted pixel.

Fig. 2 An example of a hyperspectral image. a the original full-size image; b a clip used for all analyses; and c a close-up section showing glinted (under the cross) and non-glinted pixels

Theoretically, glint correction should come at an earlier stage of preprocessing, prior to reflection conversion, as the glinted pixels contain reflection spectra that did not enter the water column at all (they are reflected directly from the surface). However, in this case, because the 'deglinting' approach is used only as a normalization agent, the introduced bias is ignored. The leading rationale in this choice was that any bias is applied to all spectra and, since it is derived from the image itself, its effect is uniform across the image. Moreover, since classification training is also based on the same biased within-image spectra, the classification itself is not affected by their radiometric incorrectness.
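As a concrete illustration, the reflectance conversion and the two-stage normalization described above can be sketched in a few lines of NumPy. This is a minimal sketch, not the authors' code: the (rows, cols, bands) cube layout, the function names, and the arguments are assumptions; in practice the reference band and 'min' value would be the red (675/680 nm, min = 0.3) or NIR (760/775 nm, min = 0.92) choices described below.

```python
import numpy as np

def to_reflectance(dn_cube, white_dn):
    """Convert raw digital numbers to reflectance by dividing each pixel
    spectrum by the in-image white reference spectrum (assumed layout:
    dn_cube is (rows, cols, bands), white_dn is (bands,))."""
    return dn_cube / white_dn

def deglint_normalize(refl, ref_band, min_val):
    """Two-stage normalization: (1) express every band of a pixel as a
    proportion of that pixel's value in a chosen reference band; (2)
    rescale by a 'min' value taken from a manually selected non-glinted
    pixel, restoring physically plausible magnitudes."""
    ref = refl[:, :, ref_band][:, :, np.newaxis]  # keep dims for broadcasting
    return refl / ref * min_val
```

Because every band of a pixel is divided by that pixel's own reference-band value, a glinted spectrum that is a scaled-up copy of a non-glinted one maps onto the same normalized spectrum, which is exactly the albedo-levelling effect described above.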
Additionally, the most affected part of the spectrum would be in the NIR, an area of the spectrum not taking part in the classification anyway. The normalization procedure tested four bands based on the above method. In the first stage, each spectrum was expressed as a proportion of one of the following bands (red bands: 675 and 680 nm; NIR bands: 760 and 775 nm). The two red bands were chosen because they represent an area of the spectrum that is typical of a living substrate (i.e., the typical chlorophyll absorption peak at 675 nm) and also because their standard deviation was the smallest in the image. The NIR bands were chosen based on low standard deviation alone, following Hedley et al. (2005). Next, the red-divided images were multiplied by 0.3, the average reflection of bands 675 and 680 nm in well-lit pixels (i.e., pixels that are not affected by glint or shade). A similar process took place for the NIR-divided images, which were multiplied by 0.92. The resulting normalized images were used for classification. The general spectral range of water-leaving radiance suggested in the literature is 400–700 nm. From this range, two sets of spectral ranges were selected based on close observation of the spectra in the image and the following rationale. Due to a combination of high light scattering and the lack of useful spectral features (e.g., Lubin et al. 2001), the spectral range between 400 and 450 nm was omitted. The spectral range between 650 and 700 nm is highly affected by water absorption but does contain important spectral features, such as the chlorophyll absorption peak at 675 nm. Because omitting this range might affect the separability of the tested substrates, two spectral ranges were selected to represent the options: 450–650 nm and 450–700 nm.

Image classification

Image classification was applied using support vector machine (SVM), a standard supervised classification procedure in ENVI software.
SVM is based on modeling the training classes in hyperspace and minimizing each pixel's distance to its most similar target. The SVM enables one to seek those distances in a nonlinear way (Cortes and Vapnik 1995) and is particularly effective with spectral data. Image classification focused on five classes: massive hard coral, branching hard coral, deeper turf rock, shallow turf rock, and shade. For every identified class, 10 adjacent pairs of areas of interest (AOI) were selected; each AOI contained in excess of 250 pixels (therefore, each class average was based on more than 2,000 pixels). Adjacent AOIs were expected to contain pixels of the same class, so one of every pair was allocated for classification training while the second AOI was later used for verification (Fig. 4). Altogether, classification was undertaken ten times: there were five images, including the original image, two red-band-normalized images, and two NIR-band-normalized images; each of those was classified twice using the two spectral ranges 450–650 nm and 450–700 nm.

Fig. 3 Glinted and non-glinted spectra of massive coral and their normalization. After conversion to reflectance, the Y axis represents the fraction of reflectance from the maximum, and the values should range between 0 and 1 (i.e., the white reference should reach one). Spectra over the value of one suggest that they are overexposed as a result of the glinting effect. a While spectra of branching coral tend to be fairly consistent in their albedo, glinted spectra (blue) are more varied and their spectral features are not consistent beyond 700 nm. b Spectra of glinted massive hard coral before and after normalization. Shown here are also non-glinted pixels before and after normalization. Note that normalization affects albedo without affecting the spectral features themselves
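The supervised workflow described above (train on one AOI of each pair, then predict every pixel of the scene) can be mimicked with an open-source SVM. The sketch below uses scikit-learn's SVC as a stand-in for the ENVI procedure the authors used; AOI extraction and band subsetting are omitted, and all names are illustrative.

```python
import numpy as np
from sklearn.svm import SVC

def train_svm(train_spectra, train_labels):
    """Fit an RBF-kernel SVM on labelled AOI pixel spectra.
    train_spectra: (n_pixels, n_bands); train_labels: (n_pixels,)."""
    clf = SVC(kernel="rbf", gamma="scale")
    clf.fit(train_spectra, train_labels)
    return clf

def classify_image(clf, refl_cube):
    """Predict a class for every pixel of a (rows, cols, bands) cube,
    returning a (rows, cols) class map."""
    rows, cols, bands = refl_cube.shape
    flat = refl_cube.reshape(-1, bands)
    return clf.predict(flat).reshape(rows, cols)
```

Verification AOIs would then be compared against the predicted class map to build the confusion matrices described next.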
Confusion matrices, calculated from the resulting classed images, offered a quantitative assessment of the success of the classification in each image. Two measurements of accuracy were calculated: total accuracy, representing the number of correct classifications as a fraction of the total, and the Kappa coefficient, representing accuracy that takes into account classification occurring by chance.

Results

Deglinting

Glinted pixels were only a small proportion of the entire image (~3 %), and their effect ranged from saturated pixels (i.e., after white reference correction, pixels contained values larger than 1) to over-lit pixels (Fig. 3a). It seems that, due to the sensor's pushbroom action, glint was in many cases recorded in lines. The key to the success of the normalization was assumed to lie in choosing the right band on which to base the correction and from which to extract the 'min' values. Although spectrally biased, both correction procedures produced an improved spectral image in which much of the within-substrate-unit variability (shade or glint) was reduced without losing useful spectral features (Fig. 3b).

Classification

Overall, the most accurate classification was achieved using the combination of 5 nm bands, a 450–660 nm range, and red-band normalization (total accuracy was 0.99 and the Kappa coefficient was 0.99, Table 1). Of the five images tested, the two images corrected with NIR bands (765 and 775 nm) and the two images corrected with the red bands (675 and 680 nm) produced identical accuracies (not presented). Of the two spectral ranges, the wide spectral range (450–700 nm) gave better results than the narrow spectral range (450–650 nm). The exception to the above rule was the red-band corrections, which were generally improved using the narrower spectral range and narrower bands.
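Both accuracy measures follow directly from a confusion matrix: overall accuracy is the trace divided by the total count, and the Kappa coefficient discounts the chance agreement computed from the row and column marginals. A minimal NumPy sketch, using the counts reported in Table 1 (the function name is illustrative):

```python
import numpy as np

def accuracy_and_kappa(cm):
    """Overall accuracy and Cohen's Kappa from a confusion matrix
    (rows: classified, columns: verification)."""
    n = cm.sum()
    po = np.trace(cm) / n                                  # observed agreement
    pe = (cm.sum(axis=1) * cm.sum(axis=0)).sum() / n ** 2  # chance agreement
    return po, (po - pe) / (1 - pe)

# Counts from Table 1 (branching, massive, turf rock, deeper turf rock, shade)
cm = np.array([
    [388,   0,   0,   0,   0],
    [  0, 849,   2,   2,   0],
    [  1,   2, 492,   0,   5],
    [  7,   0,   0, 258,   0],
    [  0,   0,  12,   0, 802],
])
acc, kappa = accuracy_and_kappa(cm)
```

Run on the Table 1 counts, both measures round to the reported 0.99.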
Both normalization treatments improved the clas- sification in comparison with that of the original image although red-band normalization consistently produced more accurate results (e.g., 0.99 for red versus 0.9 for NIR- corrected bands classified between 450 and 650 nm bands and 5 nm band width). Overall accuracy scores for all the tested images and the corresponding Kappa coefficient scores were highly correlated (R2 = 0.99). Throughout, despite the normalization, severely glinted spectra and very dark spectra were more likely to be mis- classified or unclassified. Normalization improved the micro-scale—within colony—classification (Fig. 5) for both high albedo (glint) and low albedo (shade) pixels. For example, the spherical coral shade is classified as ‘shade’ (or unclassified) in the original image, while correctly identified as ‘coral’ in the normalized images. In all cases, class perimeter is better defined after normalization that improves the identification of turf areas significantly (Fig. 6). Discussion The results presented in this preliminary study suggest that ground-level hyperspectral imagery can be used success- fully to classify coral reef substrates. Furthermore, the addition of normalization significantly increases classifi- cation accuracy. Despite the fact that the red-band nor- malization is radiometrically biased, it achieves better accuracy than that of the NIR band because the classifi- cation endmembers and the classified pixels are subjected to the same treatment. This may have to be addressed if classification were to rely on external data, such as a spectral library or endmembers derived from another image. Using a different form of normalization may improve accuracy without imposing bias limitations and also may allow cross referencing between images (Roger et al. 2003). In contradiction to previous work where nor- malization was not used (e.g., Lubin et al. 2001; Kutser Fig. 4 Spectra used for classification. 
The solid lines represent the average spectra used for classification, while the dashed line is the average spectrum of the verification areas of interest. The classes include branching hard coral, massive hard coral, turf rock, rock, and shade. Note that while the albedo is not always identical, the spectral features are similar.

Coral Reefs 123 Author's personal copy

et al. 2003), spectral resolution or spectral range limitations did not seem to have played an important role in improving accuracy. The SVM is a practical tool for extracting a classification model when the data are noisy or when the spectra are not well defined, as with marine substrates (Cortes and Vapnik 1995; Kruse 2008). As a classification approach, it is somewhat similar to principal component analysis (PCA), which was suggested by previous workers as a good method of maximizing the benefit of high spectral resolution data sources (e.g., Holden and LeDrew 1998; Hochberg and Atkinson 2003; Hedley et al. 2012). If we consider that the remote sensing of coral reefs suffers from additional drawbacks in comparison with terrestrial equivalents, using the hyperspectral camera over the reef table seems to overcome these limitations (environmental, sensor, and habitat) well. The first, and in many cases the hardest, to encounter is the environmental problem of water scattering and absorption. The in situ white reference plate used in this study was found to be useful in the reef table conditions, especially in omitting the need for water correction. In this respect, water quality and water type (Jerlov 1951) are not considered, and water correction is not necessary (Lee et al. 1999). The success of this option is currently being tested for deeper water and will be applied in different conditions. Another environmental, water-related drawback is the distortion caused by glint (Kay et al. 2009). This element seemed to be overcome successfully by using a simple correction procedure in two variations. The second limitation, imposed by the sensor itself, is where the camera's most significant merit lies. Even the finest-scaled remote sensing image pixel exceeds substrate unit sizes (e.g., pixel size may be 1 m, but average coral heads at the study site are around 25 cm in diameter). The hyperspectral camera provides millimeter-scale pixels, an impressive spatial scale allowing all of the pixels in the image to contain a single substrate. In fact, both resolution elements (spectral and spatial) offered by the sensor used here are well over the minimum needed to execute marine substrate classification successfully (Hochberg et al. 2003b; Kutser et al. 2006; Knudby et al. 2010). The final difficulty outlined earlier is the similarity between the different reef living-substrate spectra, as they are all dominated by the same chlorophyll absorption features.

Fig. 5 Deglinting and classification results (panels a–f). a–c are the images before normalization, after NIR-band normalization, and after red-band normalization, respectively. The marked areas of interest (AOI) represent the training and verification AOIs used for the classification process. The color legend follows previous colors (Fig. 3). d–f are the classified images: d is the non-deglinted image, e is the classified deglinted image, and f is the normalized one. The improved accuracy in classification is visually noticeable.

Table 1 Confusion matrix results of the most successful image classification tested (5 nm bands, 450–660 nm range, and red-band normalization). Rows are the classification results; columns are the verification data.

                        Branching    Massive      Turf    Deeper      Shade
                        hard coral   hard coral   rock    turf rock
  Branching hard coral     388           0          0        0           0
  Massive hard coral         0         849          2        2           0
  Turf rock                  1           2        492        0           5
  Deeper turf rock           7           0          0      258           0
  Shade                      0           0         12        0         802

The total accuracy was 0.99 and the Kappa coefficient was 0.99.
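The overall accuracy and Kappa coefficient quoted for Table 1 can be verified directly from the confusion matrix. The sketch below recomputes both (Cohen's kappa, using the standard chance-agreement formula; variable names are my own):

```python
import numpy as np

# Confusion matrix from Table 1 (rows: classification, columns: verification).
cm = np.array([
    [388,   0,   0,   0,   0],   # branching hard coral
    [  0, 849,   2,   2,   0],   # massive hard coral
    [  1,   2, 492,   0,   5],   # turf rock
    [  7,   0,   0, 258,   0],   # deeper turf rock
    [  0,   0,  12,   0, 802],   # shade
])

n = cm.sum()
observed = np.trace(cm) / n                        # overall accuracy
expected = (cm.sum(0) * cm.sum(1)).sum() / n**2    # chance agreement
kappa = (observed - expected) / (1 - expected)     # Cohen's kappa

print(round(observed, 2), round(kappa, 2))  # 0.99 0.99
```

Both values round to 0.99, matching the figures reported with the table.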
Hyperspectral imagery is probably the only means for addressing the feature separation necessary for positive identification of different marine substrates (Kutser and Jupp 2006; Leiper et al. 2012). The camera used for this study provides adequate spectral resolution to separate the common substrates within the study site successfully. Bridging the technological capabilities of remote sensing and in situ spectroscopy offers a good opportunity for testing resolution limitations. Starting from the fine resolution offered by this sensor, reducing the spectral resolution by resampling the same data set may facilitate further investigation into the limits of the spectral resolution parameter. The ability to provide pure pixels offers an opportunity to run a simple classification on the selected scene, an uncommon option for in situ analysis of marine substrates (Hochberg and Atkinson 2003). Furthermore, the results presented here suggest that sub-classification into branching hard coral (e.g., Acropora spp.) and massive hard coral (e.g., Platygyra spp.) is also possible, a novel applicability not demonstrated to date. To this end, even pixels of a few centimeters (a tenfold up-scaling!) would provide a very useful classification and quantification data set. The potential for testing the effect of spatial up-scaling may help in optimizing the remote sensing effort in future research (Andrefouet et al. 2002; Purkis and Riegl 2005; Dadhich et al. 2012). In contrast, the combination of micro-scale pure pixels and high spectral resolution provides an exciting opportunity to investigate and address within-class variability, such as shading and the 3D complexity of marine substrates. Understanding these effects can help explain the spectral nonlinear mixing described by several studies that attempted to perform these tasks (Hedley and Mumby 2003; Goodman and Ustin 2007; Bioucas-Dias and Plaza 2010).
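The spectral resampling mentioned above, coarsening an already-acquired cube to simulate a lower-resolution sensor, can be done by averaging groups of adjacent bands. This is a minimal sketch, not the authors' pipeline; `downsample_spectra` is a hypothetical helper:

```python
import numpy as np

def downsample_spectra(cube: np.ndarray, factor: int) -> np.ndarray:
    """Coarsen spectral resolution by averaging adjacent bands,
    e.g. factor=2 turns 5 nm bands into 10 nm bands. Assumes the
    band count is divisible by `factor`."""
    rows, cols, bands = cube.shape
    return cube.reshape(rows, cols, bands // factor, factor).mean(axis=-1)

cube = np.arange(6, dtype=float).reshape(1, 1, 6)  # one pixel, six bands
coarse = downsample_spectra(cube, 2)
print(coarse)  # three bands, each the mean of an adjacent pair
```

Re-running the same classification on progressively coarser cubes is one way to locate the spectral-resolution limit discussed in the text.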
At this scale, classification models can test approaches in contextual classification (Mumby et al. 1998; Vanderstraete et al. 2005), textural classification (Mumby and Edwards 2002; Riegl and Purkis 2005; Purkis et al. 2006), or object recognition techniques (Gilbes et al. 2006; Phinn et al. 2011; Leon et al. 2012). Particularly important are the sharper substrate unit edges obtained using the red-band normalization. Being able to identify and isolate different units in the scene is a major step up in object recognition processing (Gilbes et al. 2006). With all this in mind, it is important to remember that the pushbroom technology poses a problematic limitation: the exposure time. Even if the current exposure time can be reduced by decreasing spectral resolution, the instrument currently used has to be fixed to a static base. To date, there is no reflex camera alternative, although the technology is being developed (FluxData; Surface Optics Corporation; Habel et al. 2012).

Fig. 6 A demonstration of spectra treatment results. The solid lines represent the average lit spectra. The dashed lines and the dotted lines represent glinted and shaded spectra, respectively. Raw (untreated) coral pixels are in black, pixels normalised with the red band are in red, and pixels normalised with the NIR band are in blue.

For monitoring and conservation, the potential applications demonstrated by this study include automated quantitative image analysis and accurate substrate identification. In turn, this facilitates larger scale campaigns (i.e., multiple image analysis) with much less manpower investment compared to knowledge-based digital photography analysis. The immediate applicability is currently limited by the need for camera mounting (such as a tripod or the jetty used for this study). This may be overcome using a reflex-type camera as described earlier, but it could also be overcome by using a higher mounting for the current sensor.
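The band-normalization treatment illustrated in Fig. 6 can be sketched in a few lines. This is a minimal sketch of one simple form of the idea, dividing each pixel's spectrum by its value in a chosen reference band (red or NIR) so that a multiplicative glint factor cancels; the paper's exact procedure is not reproduced here, and `normalize_by_band` is a hypothetical helper:

```python
import numpy as np

def normalize_by_band(cube: np.ndarray, ref_band: int) -> np.ndarray:
    """Divide every pixel spectrum by its value in a reference band.

    cube: hyperspectral image, shape (rows, cols, bands).
    ref_band: index of the band (e.g. a red or NIR band) used as reference.
    """
    ref = cube[..., ref_band:ref_band + 1]  # keep last dim for broadcasting
    return cube / ref

# A glinted pixel that is a scaled copy of a lit pixel becomes identical
# to it after normalization, since the multiplicative factor cancels.
lit = np.array([0.2, 0.4, 0.6])
glinted = 2.0 * lit
cube = np.stack([lit, glinted]).reshape(1, 2, 3)
out = normalize_by_band(cube, ref_band=2)
print(np.allclose(out[0, 0], out[0, 1]))  # True
```

Purely additive glint, or sensor noise, would not cancel this way, which is consistent with the text's observation that severely glinted and very dark spectra remain problematic.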
For example, raising the sensor to 6 m above the reef may cover an extended portion of the reef, approximately 3 m × 6 m (effectively, spatial up-scaling). In this case, the spatial resolution will be reduced, but that is unlikely to affect the image analysis. Even with all the current limitations, for the study site described here, using the two jetties to take 5 images on each side would result in a set of 20 images. If, as suggested above, every image covers 18 m², this amounts to a sizable survey effort. The aim of this paper was to give a quick and simplified account of the potential of a new technology for marine spectroscopy at ground level. The three key limitations relevant to remote sensing of coral reefs are successfully overcome. The presented results show that the combination of 5 nm bands, a 450–660 nm range, and the red-band normalization can produce unprecedented classification accuracies of 99%. To this end, the camera's specifications open a new avenue of research in spectroscopy, image analysis, and remote sensing relevant to this environment. The advantage of such resolution is limited with respect to computing power and data handling. To overcome this problem, the possibility of reducing spatial and spectral resolution is currently being tested. Up-scaling experiments may provide resolution limits for this type of remote sensing analysis of coral reefs. Scaling up the area surveyed can be done empirically by raising the sensor's mounting point and by creating mosaics of several images. Another area in need of improvement is the normalization technique, which is currently image specific. Further work planned for this source of imagery focuses on testing different normalization techniques. The results presented here show how well the classification handles a relatively simple scene with only five class definitions.
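The coverage arithmetic above is easy to check. The footprint geometry (3 m × 6 m from a 6 m mount) and the jetty/side counts are taken from the text; the total area is simply their product:

```python
# Back-of-envelope check of the survey coverage discussed above.
footprint_m2 = 3 * 6   # one image covers 18 m^2
images = 2 * 2 * 5     # two jetties, two sides, five images per side
total_m2 = footprint_m2 * images

print(footprint_m2, images, total_m2)  # 18 20 360
```

Twenty images at 18 m² each would thus survey on the order of a few hundred square metres of reef table per campaign.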
In more complex scenes, the number of classes may be doubled, and class integrity may be reduced. Tests with more complex classification techniques, such as PLS-DA and object recognition, are planned and may yield better results for more complex scenes. Additional avenues for exploration include micro-scale (single colony) spectral variation and spatial up-scaling of similar images.

References

Andrefouet S, Berkelmans R, Odriozola L, Done T, Oliver J, Muller-Karger FE (2002) Choosing the appropriate spatial resolution for monitoring coral bleaching events using remote sensing. Coral Reefs 21:147–154
Bioucas-Dias J, Plaza A (2010) Hyperspectral unmixing: geometrical, statistical and sparse regression-based approaches. SPIE Remote Sensing Europe, Image and Signal Processing for Remote Sensing Conference
Collin A, Planes S (2012) Enhancing coral health detection using spectral diversity indices from WorldView-2 imagery and machine learners. Remote Sens 4:3244–3264
Cortes C, Vapnik V (1995) Support-vector networks. Machine Learning 20:273–297
Dadhich AP, Nadaoka K, Yamamoto T, Kayanne H (2012) Detecting coral bleaching using high-resolution satellite data analysis and 2-dimensional thermal model simulation in the Ishigaki fringing reef, Japan. Coral Reefs 31:425–439
Dekker AG, Wettle M, Brando VE (2005) Coral reef habitat mapping using MERIS: can MERIS detect coral bleaching? MERIS (A)ATSR Workshop 2005, published on CD-ROM:25.21
English S, Wilkinson C, Baker V (1997) Survey manual for tropical marine resources. Australian Institute of Marine Science, Townsville
FluxData Inc. The FD-1665-MS 7 Channel Camera
Gilbes F, Armstrong RA, Goodman J, Velez M, Hunt S (2006) Censsis seabed: diverse approaches for imaging shallow and deep coral reefs. Proceedings of Ocean Optics XVIII
Goodman JA, Ustin SL (2007) Classification of benthic composition in a coral reef environment using spectral unmixing. J Appl Remote Sens 1:011501.
doi:10.1117/1.2815907
Habel R, Kudenov M, Wimmer M (2012) Practical spectral photography. Eurographics 2012
Hamylton S (2009) Determination of the separability of coastal community assemblages of the Al Wajh Barrier Reef, Red Sea, from hyperspectral data. European J Geosci 1:1–11
Hedley J (2004) Spectral unmixing of coral reef benthos under ideal conditions. Ph.D. thesis, University of Exeter, pp 108–149
Hedley JD, Mumby PJ (2002) Biological and remote sensing perspectives of pigmentation in coral reef organisms. Adv Mar Biol 43:277–317
Hedley JD, Mumby PJ (2003) A remote sensing method for resolving depth and subpixel composition of aquatic benthos. Limnol Oceanogr 48:480–488
Hedley J, Mumby P, Joyce K, Phinn S (2004) Spectral unmixing of coral reef benthos under ideal conditions. Coral Reefs 23:60–73
Hedley JD, Harborne AR, Mumby PJ (2005) Simple and robust removal of sun glint for mapping shallow-water benthos. Int J Remote Sens 26:2107–2112
Hedley J, Roelfsema C, Phinn SR (2010) Propagating uncertainty through a shallow water mapping algorithm based on radiative transfer model inversion. Ocean Optics XX
Hedley JD, Roelfsema CM, Phinn SR, Mumby PJ (2012) Environmental and sensor limitations in optical remote sensing of coral reefs: implications for monitoring and sensor design. Remote Sens 4:271–302
Hochberg EJ, Atkinson MJ (2000) Spectral discrimination of coral reef benthic communities. Coral Reefs 19:164–171
Hochberg EJ, Atkinson MJ (2003) Capabilities of remote sensors to classify coral, algae, and sand as pure and mixed spectra. Remote Sens Environ 85:174–189
Hochberg EJ, Andrefouet S, Tyler MR (2003a) Sea surface correction of high spatial resolution Ikonos images to improve bottom mapping in near-shore environments.
IEEE Transactions on Geoscience and Remote Sensing 41:1724–1729
Hochberg EJ, Atkinson MJ, Andrefouet S (2003b) Spectral reflectance of coral reef bottom-types worldwide and implications for coral reef remote sensing. Remote Sens Environ 85:159–173
Hochberg EJ, Appril A, Atkinson MJ, Bidigare RR (2006) Bio-optical modeling of photosynthetic pigments in corals. Coral Reefs 25:99–109
Holden H, LeDrew E (1998) Deconvolution of measured spectra based on principal components analysis and derivative spectroscopy. Geoscience and Remote Sensing Symposium Proceedings, IGARSS '98, IEEE International 2:760–762
Holden H, LeDrew E (2000) Optical water column properties of a coral reef environment: towards correction of remotely sensed imagery. Geoscience and Remote Sensing Symposium Proceedings, IGARSS 2000, IEEE International 6:2666–2668
Holden H, LeDrew E (2002) Hyperspectral linear mixing based on in situ measurements in a coral reef environment. Geoscience and Remote Sensing Symposium Proceedings, IGARSS '02, IEEE International 1:249–251
Jerlov NG (1951) Optical studies of ocean water. Report of Swedish Deep-Sea Expeditions 3:1–59
Johansen K, Roelfsema C, Phinn S (2008) High spatial resolution remote sensing for environmental monitoring and management: preface. J Spatial Sci 53:43–47
Joyce KE, Phinn SR (2002) Bi-directional reflectance of corals. Int J Remote Sens 23:389–394
Karpouzli E, Malthus T, Place C (2004) Hyperspectral discrimination of coral reef benthic communities in the western Caribbean. Coral Reefs 23:141–151
Kay S, Hedley JD, Lavender S (2009) Sun glint correction of high and low spatial resolution images of aquatic scenes: a review of methods for visible and near-infrared wavelengths. Remote Sens 1:697–730
Kendall MS, Miller T (2008) The influence of thematic and spatial resolution on maps of a coral reef ecosystem.
Mar Geodesy 31:75–102
Knudby A, LeDrew E, Newman C (2007) Progress in the use of remote sensing for coral reef biodiversity studies. Prog Phys Geogr 31:421–434
Knudby A, Newman C, Shaghude Y, Muhando C (2010) Simple and effective monitoring of historic changes in nearshore environments using the free archive of Landsat imagery. Int J Appl Earth Obs Geoinform 12:S116–S122
Kruse FA (2008) Expert system analysis of hyperspectral data. SPIE Defense and Security, Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XIV, Conference DS43 6966-25
Kutser T, Jupp DL (2006) On the possibility of mapping living corals to the species level based on their optical signatures. Estuar Coast Shelf Sci 69:607–614
Kutser T, Dekker AG, Skirving W (2003) Modeling spectral discrimination of Great Barrier Reef benthic communities by remote sensing instruments. Limnol Oceanogr 48:497–510
Kutser T, Miller I, Jupp DLB (2006) Mapping coral reef benthic substrates using hyperspectral space-borne images and spectral libraries. Estuar Coast Shelf Sci 70:449–460
Lee Z, Carder KL, Mobley CD, Steward RG, Patch JS (1999) Hyperspectral remote sensing for shallow waters: 2. Deriving bottom depths and water properties by optimization. Appl Opt 38:3831–3843
Lee Z, Carder K, Arnone R, He M (2007) Determination of primary spectral bands for remote sensing of aquatic environments. Sensors 7:3428–3441
Leiper I, Phinn S, Dekker AG (2012) Spectral reflectance of coral reef benthos and substrate assemblages on Heron Reef, Australia. Int J Remote Sens 33:3946–3965
Leon J, Phinn SR, Woodroffe CD, Hamylton S, Roelfsema C, Saunders M (2012) Data fusion for mapping coral reef geomorphic zones: possibilities and limitations. Proceedings of the 4th GEOBIA, Rio de Janeiro, pp 261–266
Lillesand TM, Kiefer RW (2003) Remote sensing and image interpretation.
John Wiley and Sons, New York
Lubin D, Li W, Dustan P, Mazel CH, Stamnes K (2001) Spectral signatures of coral reefs: features from space. Remote Sens Environ 75:127–137
Mather PM, Koch M (2004) Computer processing of remotely-sensed images: an introduction. John Wiley and Sons
Mumby PJ, Edwards AJ (2002) Mapping marine environments with IKONOS imagery: enhanced spatial resolution can deliver greater thematic accuracy. Remote Sens Environ 82:248–257
Mumby PJ, Green EP, Clark CD, Edwards AJ (1998) Benefits of water column correction and contextual editing for mapping coral reefs. Int J Remote Sens 19:203–210
Mumby PJ, Skirving W, Strong AE, Hardy JT, LeDrew EF, Hochberg EJ, Stumpf RP, David LT (2004) Remote sensing of coral reefs and their physical environment. Mar Pollut Bull 48:219–228
Phinn SR, Roelfsema CM, Mumby PJ (2011) Multi-scale, object-based image analysis for mapping geomorphic and ecological zones on coral reefs. Int J Remote Sens 33:3768–3797
Pope RM, Fry ES (1997) Absorption spectrum (380–700 nm) of pure water. II. Integrating cavity measurements. Appl Opt 36:8710–8723
Purkis SJ, Riegl B (2005) Spatial and temporal dynamics of Arabian Gulf coral assemblages quantified from remote-sensing and in situ monitoring data. Mar Ecol Prog Ser 287:99–113
Purkis SJ, Myint SW, Riegl B (2006) Enhanced detection of the coral Acropora cervicornis from satellite imagery using a textural operator. Remote Sens Environ 101:82–94
Riegl BM, Purkis SJ (2005) Detection of shallow subtidal corals from IKONOS satellite and QTC View (50, 200 kHz) single-beam sonar data (Arabian Gulf; Dubai, UAE). Remote Sens Environ 95:96–114
Roger JM, Chauchard F, Bellon Maurel V (2003) EPO-PLS external parameter orthogonalisation of PLS: application to temperature-independent measurement of sugar content of intact fruits. Chemometrics and Intelligent Laboratory Systems 66:191–204
Smith RC, Baker KS (1981) Optical properties of the clearest natural waters (200–800 nm).
Appl Opt 20:177–184
Sterckx S, Debruyn W, Vanderstraete T, Goossens R, Van der Heijden P (2005) Hyperspectral data for coral reef monitoring. A case study: Fordate, Tanimbar, Indonesia. EARSeL eProceedings 4(1):18–25
Surface Optics Corporation. SOC700Hz/SOC760 Real-time Hyperspectral (HS) Imager
Vanderstraete T, Goossens R, Ghabour TK (2005) Coral reef habitat mapping in the Red Sea (Hurghada, Egypt) based on remote sensing. EARSeL eProceedings 3(2):191–207
Wettle M, Ferrier G, Lawrence AJ, Anderson K (2003) Fourth derivative analysis of Red Sea coral reflectance spectra. Int J Remote Sens 24:3867–3872
Wooldridge S, Done T (2004) Learning to predict large-scale coral bleaching from past events: a Bayesian approach using remotely sensed data, in-situ data, and environmental proxies. Coral Reefs 23:96–108
Woźniak SB, Stramski D, Stramska M, Reynolds RA, Wright VM, Miksic EY, Cichocka M, Cieplak AM (2010) Optical variability of seawater in relation to particle concentration, composition, and size distribution in the nearshore marine environment at Imperial Beach, California. J Geophys Res 115:C08027
Yamano H, Tamura M (2004) Detection limits of coral reef bleaching by satellite remote sensing: simulation and data analysis. Remote Sens Environ 90:86–103
Zheng X, Dickey T, Chang G (2002) Variability of the downwelling diffuse attenuation coefficient with consideration of inelastic scattering.
Appl Opt 41:6477–6488

Ground-level spectroscopy analyses and classification of coral reefs using a hyperspectral camera
Sensitivity and specificity of Norwegian optometrists' evaluation of diabetic retinopathy in single-field retinal images – a cross-sectional experimental study

Sundling et al. BMC Health Services Research 2013, 13:17
http://www.biomedcentral.com/1472-6963/13/17

RESEARCH ARTICLE  Open Access

Vibeke Sundling1*, Pål Gulbrandsen2,3 and Jørund Straand4

Abstract

Background: In the working age group, diabetic retinopathy is a leading cause of visual impairment.
Regular eye examinations and early treatment of retinopathy can prevent visual loss, so screening for diabetic retinopathy is cost-effective. Dilated retinal digital photography with the additional use of ophthalmoscopy is the most effective and robust method of diabetic retinopathy screening. The aim of this study was to estimate the sensitivity and specificity of diabetic retinopathy screening when performed by Norwegian optometrists.

Methods: This study employed a cross-sectional experimental design. Seventy-four optometrists working in private optometric practice were asked to screen 14 single-field retinal images for possible diabetic retinopathy. The screening was undertaken using a web-based visual identification and management of ophthalmological conditions (VIMOC) examination. The images used in the VIMOC examination were selected from a population survey and had been previously examined by two independent ophthalmologists. In order to establish a "gold standard", images were only chosen for use in the VIMOC examination if they had elicited diagnostic agreement between the two independent ophthalmologists. To reduce the possibility of falsely high specificity occurring by chance, half the presented images were of retinas that were not affected by diabetic retinopathy. Sensitivity and specificity for diabetic retinopathy were calculated with 95% confidence intervals (CIs).

Results: The mean (95% CI) sensitivity for identifying eyes with any diabetic retinopathy was 67% (62% to 72%). The mean (95% CI) specificity for identifying eyes without diabetic retinopathy was 84% (80% to 89%). The mean (95% CI) sensitivity for identifying eyes with mild non-proliferative diabetic retinopathy or moderate non-proliferative diabetic retinopathy was 54% (47% to 61%) and 100%, respectively. Only four optometrists (5%) met the required standard of at least 80% sensitivity and 95% specificity that has been previously set for diabetic retinopathy screening programmes.
Conclusions: The evaluation of retinal images for diabetic retinopathy by Norwegian optometrists does not meet the required screening standard of at least 80% sensitivity and 95% specificity. The introduction of measures to improve this situation could have implications for both formal optometric training and continuing optometric professional education.

Keywords: Diabetic retinopathy, Optometrist, Sensitivity, Specificity, Retinal images, Case finding, Screening

* Correspondence: vibeke.sundling@hibu.no
1 Institute of Optometry and Visual Science, Faculty of Health Sciences, Buskerud University College, Kongsberg, Norway
Full list of author information is available at the end of the article

© 2013 Sundling et al.; licensee BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Background

Approximately 90,000 to 120,000 Norwegians have known diabetes [1], among whom the reported prevalence of diabetic retinopathy (DR) ranges from 11% to 28% [2-5]. In the working age group, DR is a leading cause of visual impairment [6]. Among people with diabetes, 1% to 13% develop sight-threatening diabetic retinopathy (STDR) and 0.4% to 1.3% are visually impaired because of DR [7-13]. Regular eye examinations and early treatment of retinopathy can prevent visual loss [9,14-16], so screening for DR is cost-effective [17]. Dilated retinal digital photography with the additional use of ophthalmoscopy is the most effective and robust method of DR screening [18,19].
In Norway, the national guidelines for diabetes [20] and The Norwegian College of General Practitioners [21] recommend either regular eye examinations by an ophthalmologist or the use of retinal photography. The Norwegian Association of Optometry has issued clinical guidelines for optometric practice [22], which include guidelines for the examination and management of patients with diabetes. People with diabetes are commonly examined in optometric practice due to having refractive errors. Norwegian optometric practice may therefore represent a low-threshold setting for case-finding of DR [23]. Studies in other countries have shown that optometrists are able to detect and grade DR [24], and specially trained optometrists perform well when screening for STDR (sensitivity 73%-97% and specificity 83%-99%) [25-29]. Since 1988, the profession in Norway has developed from being populated by opticians to being an approved healthcare profession, populated by optometrists. Consequently, Norwegian optometrists are a heterogeneous group with regard to formal education [30]. In 2004, optometrists were granted the right to prescribe diagnostic ocular drugs, and since 2009 they have been able to refer patients directly to an ophthalmologist, without the patient first seeing a gate-keeping general practitioner (GP). These two responsibilities warrant a high standard of performance on the part of the optometrist. Sensitivity and specificity define the ability of a clinical test to correctly identify people with and without a specific disease. For low-prevalence diseases, a high specificity is required to avoid large numbers of false positive results. The British Diabetic Association (now Diabetes UK) has set a required screening standard for DR of at least 80% sensitivity and 95% specificity [31].
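The sensitivity and specificity definitions above can be made concrete with a short sketch. The counts below are illustrative only (they are not the study's data, although they are chosen to echo its headline percentages), and the confidence interval uses a simple normal approximation:

```python
import math

def rate_with_ci(successes: int, total: int, z: float = 1.96):
    """Proportion with a normal-approximation 95% confidence interval."""
    p = successes / total
    half = z * math.sqrt(p * (1 - p) / total)
    return p, max(0.0, p - half), min(1.0, p + half)

# Illustrative counts: 67 of 100 diseased eyes correctly flagged,
# 84 of 100 healthy eyes correctly passed.
sensitivity, lo, hi = rate_with_ci(67, 100)
specificity, _, _ = rate_with_ci(84, 100)

print(round(sensitivity, 2), round(specificity, 2))  # 0.67 0.84
meets_standard = sensitivity >= 0.80 and specificity >= 0.95
print(meets_standard)  # False
```

Against the Diabetes UK standard of at least 80% sensitivity and 95% specificity, a screener with these rates would fail on both counts.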
The aims of the current study were to assess the sensitivity and specificity of the optometrists' diagnosis of DR, and to assess sensitivity and specificity with respect to the optometrists' formal education. Furthermore, we wanted to investigate how the optometrists intended to follow up their cases.

Methods

A cross-sectional experimental design was employed. The study population, from which study participants were drawn, comprised authorized optometrists in Norway (n≈1850). Members of the Norwegian Association of Optometry (NOF) (n=1028) were invited to participate by e-mail. Only those optometrists who were currently working in private practice, who had worked in private practice for the previous 6 months, and who intended to continue working in private practice for the following 6 months were eligible for inclusion in the study. Those optometrists who responded positively to our e-mail and were subsequently accepted for inclusion in the study were sent an interactive web-based visual identification and management of ophthalmological conditions (VIMOC) examination that used Question Writer 4 software. A VIMOC examination tests clinical competency using cases and/or images with accompanying multiple-choice questions [32]. The examination consisted of 14 retinal images which the optometrists were to assess with respect to the presence or absence of DR, without grading severity. Additionally, they were to decide on patient management, based solely on retinal findings and making the assumption that the patient had never been examined by an ophthalmologist. No grading scales or patient management guidelines were provided, and the optometrists were not given any patient information, such as visual acuity data. It was possible to move back and forth in the VIMOC exam to review the images and revise prior assessments before submitting a final response.
In addition, a questionnaire was included to gather information regarding the participants' work experience, education, preferred method of retinal examination, methods used for retinal examination in patients with diabetes, and methods available to them for retinal examination and imaging. Optometrists used their own computers with screen resolution and colour set to maximum. Screen resolution ranged from 1024×600 to 2560×1440 pixels. The VIMOC retinal images were obtained from a previous Norwegian population survey [2]. The study followed the tenets of the Declaration of Helsinki for research involving humans and was approved by the Regional Committee for Medical Research Ethics, REC Central (January 19, 2009). Blinded to patient information, all images had been independently assessed by two ophthalmologists who graded the presence of retinopathy according to the Diabetic Retinopathy Disease Severity Scale [33]. The ophthalmologists viewed the images on a 21" monitor with a screen resolution of 1600×1200 pixels. From a total of 239 images, only those that had been graded with full agreement between the two ophthalmologists (n=217) were considered for inclusion in our study. Seven images of retinas affected by non-proliferative diabetic retinopathy (NPDR) and seven images of retinas unaffected by DR were randomly selected. The DR images included five examples of mild NPDR (Figure 1) and two examples of moderate,
Figure 1 Mild non-proliferative diabetic retinopathy.

Figure 2 Moderate non-proliferative diabetic retinopathy.

The required sample size of participants was calculated based on the following: 50% prevalence of DR in the image sample, a standard deviation of true sensitivity and specificity for individual optometrists' image evaluation of 0.2, and 50% sensitivity and specificity for detecting DR by individual optometrists, to allow maximum variance. It was calculated that a CI < 0.05 for sensitivity and specificity for any diabetic retinopathy (ADR) could be achieved with 100 study participants; accordingly, sensitivity and specificity were calculated with 95% confidence intervals for any retinopathy. Study participants were not asked to grade DR; the sensitivity for mild and moderate NPDR was assessed in terms of detection of retinopathy in images with mild and moderate NPDR, respectively. The screening standard established by the British Diabetic Association (Diabetes UK) of at least 80% sensitivity and 95% specificity for ADR [31] was used as the screening standard in our study. Potential associations between test performance and formal education were investigated and analysed using Pearson Chi-square and Student's t-tests; a p-value ≤ 0.05 was regarded as significant. Data were collected in the period between 28th February and 14th March 2011; reminders requesting participants to complete the test were sent once.

Results

In all, 112 (11%) members of NOF responded positively to the e-mail and volunteered to participate in this study. Of these 112, 101 (90%) met the inclusion criteria, and 74 (73%) completed the study. Participants were generally educated to a higher level than the average for Norwegian optometrists (Table 1). The optometrists' preferred methods of retinal examination were reported to include undilated indirect ophthalmoscopy (47% of participants) and undilated retinal fundus photography (34% of participants).
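The sample-size reasoning in the Methods can be sanity-checked with a normal-approximation sketch. The authors' exact formula is not spelled out, so this assumes a z-based half-width for the mean of individual optometrists' sensitivity/specificity scores, with the stated between-optometrist standard deviation of 0.2:

```python
import math

# How many participants keep the 95% CI half-width of the mean under 0.05,
# given a between-optometrist standard deviation of 0.2?
sd, half_width = 0.2, 0.05
n_required = math.ceil((1.96 * sd / half_width) ** 2)
print(n_required)  # 62

# Half-width actually achieved with the 100 participants they planned for:
print(round(1.96 * sd / math.sqrt(100), 4))  # 0.0392
```

Under these assumptions, 100 participants comfortably satisfies the CI < 0.05 target, consistent with the calculation reported in the text.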
Multiple examination methods were reported for patients with diabetes (Table 2). Twenty-three percent of participants reported that they undertook dilated retinal examinations in patients with diabetes. The optometrists' assessments of each of the 14 VIMOC images are presented in Table 3. Optometrists with a higher optometric education (a Master of Science in clinical optometry [MSc]) demonstrated significantly higher sensitivity than those who had a basic optometric education (Table 4). The specificity was not influenced by the optometrists' education level. No association was found either between sensitivity or specificity and the number of years of experience in optometric practice, or between sensitivity or specificity and the participants' preferred method of retinal examination. The screening standard for sensitivity of at least 80% and specificity of at least 95% for ADR was met by 24 (32%) and 31 (42%) optometrists, respectively, overall. The standards for sensitivity and specificity were met by 50% and 45%, respectively, of optometrists who held an MSc, and by 25% and 39%, respectively, of those who had a basic optometric education. Only four optometrists (5%) met the required standard for both sensitivity and specificity. Patient management decisions were dependent on retinal findings (Table 5). Report/referral to a GP and/or an ophthalmologist was regarded as appropriate for 99% and 96% of true- and false-positive findings, respectively. The rate of referral to an ophthalmologist was higher for moderate than for mild NPDR (92% vs. 62%). No further management was considered appropriate in 68% and 66% of cases of true- and false-negative findings, respectively.

Table 1 Characteristics of Norwegian optometrists

Members of NOF a                                All members (n=1028)   Non-participants (n=954)   Participants (n=74)
Gender, n (%)
  Male                                          472 (46)               441 (46)                   31 (42)
  Female                                        556 (54)               513 (54)                   43 (58)
Practice by national health region, n (%)
  East                                          338 (33)               317 (33)                   21 (28)
  South                                         293 (29)               276 (29)                   20 (27)
  West                                          176 (17)               165 (17)                   11 (15)
  Middle                                        135 (13)               119 (12)                   16 (22)
  North                                         83 (8)                 77 (8)                     6 (8)
Higher education, n (%)
  Master of science in clinical optometry b,*   200 (20)               178 (19)                   22 (30)
Private optometric practice, n (%) ‡            870 (88)               796 (87)                   74 (100)

Information as registered by the Norwegian Association of Optometry (NOF) and reported by the participating optometrists. a Members of the NOF, February 2011. b Missing data for 37 optometrists. Pearson chi-square P* < 0.05 between participants and non-participants.

Table 2 Characteristics of participating optometrists by formal education

                                                            Master of science in clinical optometry a
                                                All (n=74)     No (n=51)      Yes (n=22)
Gender, n (%)
  Female                                        43 (58)        30 (59)        13 (59)
  Male                                          31 (42)        21 (41)        9 (41)
Number of years as practicing optometrist,
  mean (sd) **                                  12 (±9)        10 (±8)        16 (±8)
Preferred method of retinal examination, n (%)
  Undilated indirect ophthalmoscopy             35 (47)        22 (43)        13 (59)
  Retinal fundus photography                    25 (34)        16 (31)        8 (36)
  Undilated direct ophthalmoscopy               9 (12)         9 (17)         0 (0)
  Other                                         5 (7)          4 (8)          1 (1)
Retinal examination methods used in patients with diabetes, n (%)
  Undilated retinal photography                 46 (62)        30 (59)        15 (68)
  Undilated indirect ophthalmoscopy *           39 (53)        23 (45)        16 (73)
  Dilated indirect ophthalmoscopy               15 (20)        9 (18)         6 (27)
  Dilated retinal photography                   11 (15)        8 (16)         3 (14)
  Undilated direct ophthalmoscopy *             11 (15)        11 (22)        0 (0)
Available instruments for retinal examination and imaging, n (%)
  Direct ophthalmoscope and/or indirect
    slit-lamp ophthalmoscopy                    71 (96)        48 (94)        22 (100)
  Retinal fundus camera                         65 (88)        44 (86)        20 (91)
  Scanning-laser ophthalmoscope (Optomap)       19 (26)        10 (20)        9 (41)

a Missing data for 1 participant. Student t-test P* < 0.05 and P** < 0.01 between optometrists with and without an MSc in clinical optometry.
Discussion
Only 5% of the responding optometrists satisfied the screening standard established by the British Diabetic Association of at least 80% sensitivity and 95% specificity [31]. Overall, sensitivity for detecting ADR was low and specificity was moderate. Sensitivity for detecting potential STDR was, however, high. This suggests that the optometrists' assessment of retinal images is an unreliable method of screening for DR. The sensitivity and specificity of detection of DR in the current study and previous studies is presented in Table 6. It is not possible to make a direct comparison between the current study and previous studies that involved community optometrists [28,34], as those studies did not report sensitivity and specificity levels for individual optometrists. However, based on the reported mean levels of sensitivity and specificity, it is unlikely that individual optometrists in those studies would have met the British Diabetic Association screening criteria. The sensitivity for detecting ADR in our study (67%) was lower than that reported by Gibbins et al. (86-88%) [28]. However, the sensitivity for detecting STDR was higher in our study than in either the Gibbins et al. or Buxton et al. studies (100% vs. 47-97%) [28,34] and the specificity was similar (84% vs. 83-95%). The greater sensitivity for detecting STDR in the current study could have been a result of the higher prevalence of STDR in our VIMOC sample compared with these earlier studies, which may have inflated sensitivity by chance. The prevalence of ADR in our study was comparable with the prevalence of ADR in the study by Gibbins et al. [28]. In that study, optometrists had received special training in the identification and grading of DR, which could explain the relatively high sensitivity levels observed.
Table 3 Optometrists' VIMOC evaluations of retinal images and corresponding ophthalmologist grading and patient glucose status

                                                                 Optometrist consideration on further management
Image a   Patient   Ophthalmologists'   Optometrists' true       No / routine        Report/referral to      Report/referral to
          status    grading of DR       findings of DR           follow-up           general practitioner    ophthalmologist
                                        n   % (95%CI)            n   % (95%CI)       n   % (95%CI)           n   % (95%CI)
8         IGT       No DR               46  64 (52 to 75)        17  37 (23 to 51)   21  46 (31 to 60)       8   17 (6 to 28)
5         KDM       No DR               53  72 (61 to 82)        30  57 (43 to 70)   10  19 (8 to 29)        13  25 (13 to 36)
14        SDDM      No DR               61  82 (74 to 91)        31  51 (38 to 63)   18  30 (18 to 41)       12  20 (10 to 30)
13        SDDM      No DR               64  86 (79 to 94)        33  52 (39 to 64)   10  16 (7 to 25)        21  33 (21 to 44)
12        SDDM      No DR               68  92 (86 to 98)        58  85 (77 to 94)   4   6 (0 to 11)         6   9 (2 to 16)
11        KDM       No DR               71  96 (91 to 100)       61  86 (78 to 94)   2   3 (0 to 7)          8   11 (4 to 19)
2         NGT       No DR               73  99 (96 to 100)       66  90 (84 to 97)   4   5 (0 to 11)         3   4 (0 to 9)
1         NGT       Mild NPDR           23  31 (20 to 42)        1   4 (0 to 13)     8   35 (15 to 54)       14  61 (41 to 81)
6         IGT       Mild NPDR           39  53 (41 to 64)        1   3 (0 to 8)      10  26 (12 to 39)       28  72 (58 to 86)
3         NGT       Mild NPDR           40  54 (42 to 66)        0   0 (0 to 0)      19  48 (32 to 63)       21  53 (37 to 68)
7         DM        Mild NPDR           41  55 (44 to 67)        1   2 (0 to 7)      15  37 (22 to 51)       25  61 (46 to 76)
4         DM        Mild NPDR           57  77 (67 to 87)        2   4 (0 to 8)      20  35 (23 to 47)       35  61 (49 to 74)
9 b       DM        Moderate NPDR       71  100 (100 to 100)     0   0 (0 to 0)      4   6 (0 to 11)         67  94 (89 to 100)
10 c      DM        Moderate NPDR       73  100 (100 to 100)     0   0 (0 to 0)      8   11 (4 to 18)        65  89 (82 to 96)

CI, confidence interval; DR, diabetic retinopathy; IGT, impaired glucose tolerance; KDM, known diabetes; NGT, normal glucose tolerance; NPDR, non-proliferative diabetic retinopathy; SDDM, screen-detected diabetes; VIMOC, Visual Identification and Management of Ophthalmological Conditions. a The number refers to the position in the image presentation sequence of the VIMOC examination. b Missing grading data from 3 optometrists. c Missing grading data from 1 optometrist.

We have found that sensitivity, but not specificity, was influenced by the level of formal education the participants had received. Optometrists with an MSc had a significantly higher sensitivity than optometrists with a basic optometric education. This suggests that our results give a better estimate of sensitivity and specificity in general optometric practice, as our study included optometrists who had not had any special training in screening for DR. The sensitivity observed in this study is in line with that observed in a previous study we undertook to investigate Norwegian general optometric practice [23], where sensitivity ranged from 61% to 65%, based on an assumption of 14% prevalence of DR among patients with diabetes [3]. Specificity in the current study was, however, lower than in our previous study (84% vs. 98-100%), which could be explained by the difference in prevalence between the two studies. In the current study, 98% of findings of DR (both true- and false-positives) were considered to warrant a report/referral to a physician. This is higher than the rate of 57% reported in a national practice registration in Norway [23]. The experimental design in our study, where optometrists were blinded to patient information but assumed that the patient had never been examined by an ophthalmologist, may have led to an increased tendency to recommend referral to a physician. Assuming the prevalence of DR is 14% [3], the negative and positive predictive values of the optometrists' evaluation of DR in our study would be 94% and 41%, respectively.

Table 4 Optometrists' sensitivity and specificity for identifying diabetic retinopathy, presented by formal education level

                              Sensitivity, % (95%CI)                                        Specificity, % (95%CI)
                              Any DR (n=7)     Mild DR (n=5)    Moderate DR (n=2)           No DR (n=7)
All optometrists (n=74)       67 (62 to 72)    54 (47 to 61)    100                         84 (80 to 89)
Formal education a,**
  BSc or lower (n=51)         63 (56 to 69)    48 (39 to 57)    100                         84 (78 to 89)
  MSc (n=22)                  77 (71 to 84)    68 (59 to 77)    100                         85 (77 to 93)

BSc, Bachelor of science; CI, confidence interval; DR, diabetic retinopathy; MSc, master of science. a Missing data for 1 optometrist. Student t-test P** < 0.01: statistically significant difference in sensitivity between optometrists with an MSc and optometrists with a BSc or lower formal education.

Table 5 Individual image evaluation and suggested follow-up

                                            Images with diabetic retinopathy (n=518)       Images without diabetic retinopathy (n=518)
                                            True positive          False negative          True negative          False positive
                                            (sensitivity)                                  (specificity)
                                            n    %                 n    %                  n    %                 n    %
Screening standard set to meet a            414  80                104  20                 492  95                26   5
                                            n    % (95%CI)         n    % (95%CI)          n    % (95%CI)         n    % (95%CI)
Optometrists' image evaluation              348  67 (62 to 72)     170  32 (28 to 38)      437  84 (80 to 89)     81   16 (11 to 20)
Further management b
  None / routine follow-up                  5    1 (0 to 3)        113  66 (59 to 74)      296  68 (64 to 72)     3    4 (0 to 8)
  Report / referral to general practitioner 84   24 (20 to 29)     41   24 (18 to 21)      69   16 (12 to 19)     39   48 (37 to 59)
  Report / referral to ophthalmologist      255  74 (70 to 79)     16   9 (5 to 14)        71   16 (13 to 20)     39   48 (37 to 59)

a British Diabetic Association: Retinal photographic screening for diabetic eye disease. A British Diabetic Association Report. London: British Diabetic Association; 1997. b Data missing for 5 image evaluations.
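The headline figures above can be reproduced directly from the pooled counts in Table 5. A minimal sketch, taking the 14% prevalence assumption from [3]:

```python
# Pooled counts from Table 5: 74 optometrists x 7 images in each group.
tp, fn = 348, 170   # evaluations of images with DR (n = 518)
tn, fp = 437, 81    # evaluations of images without DR (n = 518)

sensitivity = tp / (tp + fn)   # ~0.67
specificity = tn / (tn + fp)   # ~0.84

# Predictive values at an assumed 14% prevalence of DR [3].
prevalence = 0.14
ppv = (sensitivity * prevalence) / (
    sensitivity * prevalence + (1 - specificity) * (1 - prevalence))
npv = (specificity * (1 - prevalence)) / (
    specificity * (1 - prevalence) + (1 - sensitivity) * prevalence)
print(round(sensitivity, 2), round(specificity, 2))  # 0.67 0.84
print(round(ppv, 2), round(npv, 2))                  # 0.41 0.94
```

This matches the reported sensitivity of 67%, specificity of 84%, and positive and negative predictive values of 41% and 94%.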
Based on this and on the fact that Norwegian optometrists undertake approximately 1 million eye examinations per year (of which approximately 4% are in patients with diabetes [30]), our findings suggest that each year approximately 5500 patients without DR are referred based on a false-positive result, while in approximately 1300 patients with DR, no further action is taken. However, if the British Diabetic Association screening criteria were met in Norwegian optometric practice, these figures would be 1700 and 800, respectively. Our results suggest that an excessive workload is being placed on healthcare services by inaccurate referral practices. However, the national guidelines recommend eye examination by ophthalmologists [20,21], thus the report/referral of a patient who has not previously been seen by an ophthalmologist should not be discouraged. Of greater concern is the false security given to those patients with DR who are not referred to an ophthalmologist.

Table 6 Optometrists' sensitivity and specificity for identifying diabetic retinopathy as reported in the current study and previous studies

Study                                  Retinal examination method            Sensitivity, % (95%CI)           Specificity, % (95%CI)
                                                                             ADR            STDR              ADR             STDR
Our study (2011)
  Community optometrists               Image evaluation of digital images    67 (62 to 72)  100               84 (80 to 89)
Harvey et al (2006)
  Optometrists in a screening program  Not available                                        80 (71 to 89)                     99 (98 to 100)
Olson et al (2003)
  Specially trained optometrists       Dilated slit-lamp examination                        73 (52 to 88)                     90 (87 to 93)
Schmid et al (2002)
  Community optometrists               Ophthalmoscopy (free choice)          92 (84 to 100)                   94 (90 to 98)
                                       Image evaluation of retinal slides    94 (90 to 98)                    97 (92 to 100)
Hulme et al (2001)
  Specially trained optometrists       Dilated slit-lamp examination         72             87                77              91
Prasad et al (2001)
  Specially trained optometrists       Dilated slit-lamp examination         66 (65 to 67)  76 (70 to 81)     97 (97 to 98)   95 (95 to 96)
Gibbins et al (1998)
  Community optometrists               Image evaluation of 35 mm slides      88 (83 to 93)  91 (79 to 98)     68 (58 to 68)   83 (79 to 87)
  Specially trained optometrists       Image evaluation of 35 mm slides      86 (81 to 91)  97 (90 to 100)    89 (85 to 93)   87 (84 to 91)
Buxton et al (1991)
  Community optometrists               Image evaluation of Polaroid images                  48 (26 to 69)                     94 (92 to 97)

ADR, any diabetic retinopathy; STDR, sight-threatening diabetic retinopathy.

The strengths of this study are the use of standardised images in the VIMOC exam and the use of a diagnostic "gold standard" based on 100% agreement between two independent ophthalmologists. The experimental design allowed the calculation of sensitivity and specificity with acceptable precision in a relatively large nationwide sample of optometrists, something that was not achieved in previous studies [25-28,34]. In terms of gender, number of years in practice and geographical location, our sample of optometrists is representative of members of the NOF and of optometrists who participated in a previous study of Norwegian optometric practice [23,30]. The potential for knowledge bias and overestimation of sensitivity and specificity for general optometric practice was reduced in the current study because the optometrists were not provided with grading scales, nor were they given specific training prior to the study.

One potential limitation of the study was the possibility of selection bias, as optometrists with a specific interest in diabetes may have been more likely to accept the invitation to participate and hence may have been over-represented in the study. This could have inflated the sensitivity levels observed, compared with general optometric practice.
On the other hand, participating optometrists did not have specific training in screening for DR, nor were they provided with a DR grading scale or a computer screen that would facilitate classification of DR. Variable viewing conditions may have influenced the detection rate of DR. Small screen size, low screen resolution and inadequate colour setting may have led to lower sensitivity for detecting mild DR. On the other hand, the optometrists' use of their own facilities simulated real practice, something that the use of perfect viewing conditions could not have done.

Conclusions
Our study is likely to have given a better representation of general optometric practice than previous studies [25-28,34]. However, our findings indicate that at present case-finding of DR in Norwegian optometric practice is unreliable. Formal optometric training in screening for DR and continuing education may improve diagnostic sensitivity. Further research will be needed to evaluate the effectiveness of measures undertaken to improve optometrists' diagnostic accuracy for case-finding of DR.

Abbreviations
ADR: Any diabetic retinopathy; CI: Confidence interval; DR: Diabetic retinopathy; GP: General Practitioner; MSc: Master of Science in clinical optometry; NOF: Norwegian Association of Optometry; NPDR: Non-proliferative diabetic retinopathy; STDR: Sight-threatening diabetic retinopathy; VIMOC: Visual identification and management of ophthalmological conditions.

Competing interests
The authors declare that they have no financial or non-financial competing interests.

Authors' contributions
VS conceived of the study and participated in its design, acquired and statistically analysed the data and drafted the manuscript. PG and JS participated in the design of the study and critically revised the manuscript. All authors read and approved the final manuscript.
Authors' information
VS is the program coordinator for postgraduate courses in Optometry and Visual Science at the Department of Optometry and Visual Science, Faculty of Health Sciences, Buskerud University College.

Acknowledgements
We thank the optometrists participating in the study and Fredrik A. Dahl, HØKH, Research Centre, for statistical advice.

Author details
1 Institute of Optometry and Visual Science, Faculty of Health Sciences, Buskerud University College, Kongsberg, Norway. 2 Institute of Clinical Medicine, Campus Ahus, University of Oslo, Oslo, Norway. 3 HØKH, Research Centre, Akershus University Hospital, Oslo, Norway. 4 Department of General Practice, Institute of Health and Society, University of Oslo, Oslo, Norway.

Received: 16 March 2012 Accepted: 17 November 2012 Published: 10 January 2013

References
1. Stene LC, Midthjell K, Jenum AK, Skeie S, Birkeland KI, Lund E, Joner G, Tell GS, Schirmer H: Hvor mange har diabetes mellitus i Norge? [Prevalence of diabetes mellitus in Norway]. Tidsskr Nor Laegeforen 2004, 124(11):1511–1514.
2. Sundling V, Platou CG, Jansson RW, Bertelsen G, Wøllo E, Gulbrandsen P: Retinopathy and visual impairment in diabetes, impaired glucose tolerance and normal glucose tolerance: the Nord-Trondelag Health Study (the HUNT study). Acta Ophthalmol 2012, 90(3):237–243.
3. Hapnes R, Bergrem H: Diabetic eye complications in a medium sized municipality in southwest Norway. Acta Ophthalmol Scand 1996, 74(5):497–500.
4. Kilstad HN, Sjolie AK, Goransson L, Hapnes R, Henschien HJ, Alsbirk KE, Fossen K, Bertelsen G, Holstad G, Bergrem H: Prevalence of diabetic retinopathy in Norway: report from a screening study. Acta Ophthalmol 2012, 90(7):609–612.
5. Bertelsen G, Peto T, Lindekleiv H, Schirmer H, Solbu MD, Toft I, Sjolie AK, Njolstad I: Tromso eye study: prevalence and risk factors of diabetic retinopathy. Acta Ophthalmol 2012, doi:10.1111/j.1755-3768.2012.02542.x. [Epub ahead of print].
6. Porta M, Bandello F: Diabetic retinopathy: a clinical update. Diabetologia 2002, 45(12):1617–1634.
7. Olafsdottir E, Andersson DK, Stefansson E: Visual acuity in a population with regular screening for type 2 diabetes mellitus and eye disease. Acta Ophthalmol Scand 2007, 85(1):40–45.
8. Scanlon PH, Foy C, Chen FK: Visual acuity measurement and ocular co-morbidity in diabetic retinopathy screening. Br J Ophthalmol 2008, 92(6):775–778.
9. Zoega GM, Gunnarsdottir T, Bjornsdottir S, Hreietharsson AB, Viggosson G, Stefansson E: Screening compliance and visual outcome in diabetes. Acta Ophthalmol Scand 2005, 83(6):687–690.
10. Jeppesen P, Bek T: The occurrence and causes of registered blindness in diabetes patients in Arhus County, Denmark. Acta Ophthalmol Scand 2004, 82(5):526–530.
11. Prasad S, Kamath GG, Jones K, Clearkin LG, Phillips RP: Prevalence of blindness and visual impairment in a population of people with diabetes. Eye 2001, 15(Pt 5):640–643.
12. Broadbent DM, Scott JA, Vora JP, Harding SP: Prevalence of diabetic eye disease in an inner city population: the Liverpool Diabetic Eye Study. Eye 1999, 13:160–165.
13. Hove MN, Kristensen JK, Lauritzen T, Bek T: The prevalence of retinopathy in an unselected population of type 2 diabetes patients from Arhus County, Denmark. Acta Ophthalmol Scand 2004, 82(4):443–448.
14. Backlund LB, Algvere PV, Rosenqvist U: New blindness in diabetes reduced by more than one-third in Stockholm County. Diabet Med 1997, 14(9):732–740.
15. Kristinsson JK, Hauksdottir H, Stefansson E, Jonasson F, Gislason I: Active prevention in diabetic eye disease. A 4-year follow-up. Acta Ophthalmol Scand 1997, 75(3):249–254.
16. Stefansson E, Bek T, Porta M, Larsen N, Kristinsson JK, Agardh E: Screening and prevention of diabetic blindness. Acta Ophthalmol Scand 2000, 78(4):374–385.
17. Javitt JC, Aiello LP: Cost-effectiveness of detecting and treating diabetic retinopathy. Ann Intern Med 1996, 124(1 Pt 2):164–169.
18. Hutchinson A, McIntosh A, Peters J, O'Keeffe C, Khunti K, Baker R, Booth A: Effectiveness of screening and monitoring tests for diabetic retinopathy - a systematic review. Diabet Med 2000, 17(7):495–506.
19. Gillibrand W, Broadbent D, Harding S, Vora J: The English national risk-reduction programme for preservation of sight in diabetes. Mol Cell Biochem 2004, 261(1–2):183–185.
20. Helsedirektoratet: Nasjonale faglige retningslinjer (IS-1674). Diabetes - Forebygging, diagnostikk og behandling [National professional guidelines. Diabetes - Prevention, diagnostics and treatment]. Oslo: Helsedirektoratet; 2009.
21. Claudi T, Cooper JG, Midthjell K, Daae C, Furuseth K, Hanssen KF: NSAMs handlingsprogram for diabetes i allmennpraksis 2005 [NSAM's guidelines for diabetes in general practice 2005]. 2005.
22. Norges Optikerforbund: Retningslinjer i klinisk optometri [Guidelines in clinical optometry]. Oslo: Norges Optikerforbund; 2005.
23. Sundling V, Gulbrandsen P, Bragadottir R, Bakketeig LS, Jervell J, Straand J: Suspected retinopathies in Norwegian optometric practice with emphasis on patients with diabetes: a cross-sectional study. BMC Health Serv Res 2008, 8:38.
24. Schmid KL, Swann PG, Pedersen C, Schmid LM: The detection of diabetic retinopathy by Australian optometrists. Clin Exp Optom 2002, 85(4):221–228.
25. Olson JA, Strachan FM, Hipwell JH, Goatman KA, McHardy KC, Forrester JV, Sharp PF: A comparative evaluation of digital imaging, retinal photography and optometrist examination in screening for diabetic retinopathy. Diabet Med 2003, 20(7):528–534.
26. Hulme SA, Tin UA, Hardy KJ, Joyce PW: Evaluation of a district-wide screening programme for diabetic retinopathy utilizing trained optometrists using slit-lamp and Volk lenses. Diabet Med 2002, 19(9):741–745.
27. Prasad S, Kamath GG, Jones K, Clearkin LG, Phillips RP: Effectiveness of optometrist screening for diabetic retinopathy using slit-lamp biomicroscopy. Eye 2001, 15(Pt 5):595–601.
28. Gibbins RL, Owens DR, Allen JC, Eastman L: Practical application of the European Field Guide in screening for diabetic retinopathy by using ophthalmoscopy and 35 mm retinal slides. Diabetologia 1998, 41(1):59–64.
29. Harvey JN, Craney L, Nagendran S, Ng CS: Towards comprehensive population-based screening for diabetic retinopathy: operation of the North Wales diabetic retinopathy screening programme using a central patient register and various screening methods. J Med Screen 2006, 13(2):87–92.
30. Sundling V, Gulbrandsen P, Bragadottir R, Bakketeig LS, Jervell J, Straand J: Optometric practice in Norway: a cross-sectional nationwide study. Acta Ophthalmol Scand 2007, 85(6):671–676.
31. British Diabetic Association: Retinal photographic screening for diabetic eye disease. A British Diabetic Association Report. London: British Diabetic Association; 1997.
32. Aakre BM, Svarverud E: Utdanning for klinisk kompetanse [Education towards clinical competence]. Optikeren 2011, 5:52–53.
33. Wilkinson CP, Ferris FL 3rd, Klein RE, Lee PP, Agardh CD, Davis M, Dills D, Kampik A, Pararajasegaram R, Verdaguer JT: Proposed international clinical diabetic retinopathy and diabetic macular edema disease severity scales. Ophthalmology 2003, 110(9):1677–1682.
34. Buxton MJ, Sculpher MJ, Ferguson BA, Humphreys JE, Altman JF, Spiegelhalter DJ, Kirby AJ, Jacob JS, Bacon H, Dudbridge SB, et al: Screening for treatable diabetic retinopathy: a comparison of different methods. Diabet Med 1991, 8(4):371–377.
doi:10.1186/1472-6963-13-17
Cite this article as: Sundling et al.: Sensitivity and specificity of Norwegian optometrists' evaluation of diabetic retinopathy in single-field retinal images – a cross-sectional experimental study. BMC Health Services Research 2013, 13:17.

Screening neonatal jaundice based on the sclera color of the eye using digital photography

Terence S. Leung,1,* Karan Kapur,1 Ashley Guilliam,1 Jade Okell,2 Bee Lim,2 Lindsay W. MacDonald,3 and Judith Meek2
1 Department of Medical Physics & Biomedical Engineering, University College London, UK
2 The Neonatal Care Unit, Elizabeth Garrett Anderson Wing, University College London Hospitals Trust, UK
3 Department of Civil, Environmental & Geomatic Engineering, University College London, UK
* t.leung@ucl.ac.uk

Abstract: A new screening technique for neonatal jaundice is proposed exploiting the yellow discoloration in the sclera. It involves taking digital photographs of newborn infants' eyes (n = 110) and processing the pixel colour values of the sclera to predict the total serum bilirubin (TSB) levels. This technique has linear and rank correlation coefficients of 0.75 and 0.72 (both p<0.01) with the measured TSB. The mean difference (± SD) is 0.00 ± 41.60 µmol/l.
The receiver operating characteristic curve shows that this technique can identify subjects with TSB above 205 µmol/l with a sensitivity of 1.00 and a specificity of 0.50, showing its potential as a screening device. ©2015 Optical Society of America

OCIS codes: (300.6550) Spectroscopy, visible; (330.1710) Color, measurement.

References and links
1. "NICE clinical guideline 98: Neonatal jaundice," (National Institute for Health and Clinical Excellence, London, 2010).
2. J. M. Rennie, Rennie & Roberton's Textbook of Neonatology (Elsevier Health Sciences, 2012).
3. R. Keren, K. Tremont, X. Luan, and A. Cnaan, "Visual assessment of jaundice in term and late preterm infants," Arch. Dis. Child. Fetal Neonatal Ed. 94(5), F317–F322 (2009).
4. L. I. Kramer, "Advancement of dermal icterus in the jaundiced newborn," Am. J. Dis. Child. 118(3), 454–458 (1969).
5. P. Szabo, M. Wolf, H. U. Bucher, J. C. Fauchère, D. Haensse, and R. Arlettaz, "Detection of hyperbilirubinaemia in jaundiced full-term neonates by eye or by bilirubinometer?" Eur. J. Pediatr. 163(12), 722–727 (2004).
6. G. Nagar, B. Vandermeer, S. Campbell, and M. Kumar, "Reliability of transcutaneous bilirubin devices in preterm infants: a systematic review," Pediatrics 132(5), 871–881 (2013).
7. S. N. El-Beshbishi, K. E. Shattuck, A. A. Mohammad, and J. R. Petersen, "Hyperbilirubinemia and transcutaneous bilirubinometry," Clin. Chem. 55(7), 1280–1287 (2009).
8. M. Ahmed, S. Mostafa, G. Fisher, and T. M. Reynolds, "Comparison between transcutaneous bilirubinometry and total serum bilirubin measurements in preterm infants <35 weeks gestation," Ann. Clin. Biochem. 47(1), 72–77 (2010).
9. D. De Luca, E. Zecca, P. de Turris, G. Barbato, M. Marras, and C. Romagnoli, "Using BiliCheck™ for preterm neonates in a sub-intensive unit: Diagnostic usefulness and suitability," Early Hum. Dev. 83(5), 313–317 (2007).
10. K. Jangaard, H. Curtis, and R. Goldbloom, "Estimation of bilirubin using BiliChek, a transcutaneous bilirubin measurement device: Effects of gestational age and use of phototherapy," Paediatr. Child Health 11(2), 79–83 (2006).
11. T. Karen, H. U. Bucher, and J. C. Fauchère, "Comparison of a new transcutaneous bilirubinometer (Bilimed) with serum bilirubin measurements in preterm and full-term infants," BMC Pediatr. 9(1), 70 (2009).
12. S. Sanpavat and I. Nuchprayoon, "Transcutaneous bilirubin in the pre-term infants," J. Med. Assoc. Thai. 90(9), 1803–1808 (2007).
13. E. T. Schmidt, C. A. Wheeler, G. L. Jackson, and W. D. Engle, "Evaluation of transcutaneous bilirubinometry in preterm neonates," J. Perinatol. 29(8), 564–569 (2009).
14. L. Y. Siu, L. W. Siu, S. K. Au, K. W. Li, T. K. Tsui, Y. Y. Chang, G. P. Lee, and N. S. Kwong, "Evaluation of a transcutaneous bilirubinometer with two optical paths in Chinese preterm infants," Hong Kong J. Paediatr. 15, 132–140 (2010).
15. W. A. Willems, L. M. van den Berg, H. de Wit, and A. Molendijk, "Transcutaneous bilirubinometry with the Bilicheck in very premature newborns," J. Matern. Fetal Neonatal Med. 16(4), 209–214 (2004).
16. S. Yasuda, S. Itoh, K. Isobe, M. Yonetani, H. Nakamura, M. Nakamura, Y. Yamauchi, and A. Yamanishi, "New transcutaneous jaundice device with two optical paths," J. Perinat. Med. 31(1), 81–88 (2003).
17. C. Romagnoli, E. Zecca, P. Catenazzi, G. Barone, and A. A. Zuppa, "Transcutaneous bilirubin measurement: Comparison of Respironics BiliCheck and JM-103 in a normal newborn population," Clin. Biochem. 45(9), 659–662 (2012).
18. L. de Greef, M. Goel, M. J. Seo, E. C. Larson, J. W. Stout, J. A. Taylor, and S. N. Patel, "BiliCam: Using mobile phones to monitor newborn jaundice," in The 2014 ACM International Joint Conference on Pervasive and Ubiquitous Computing (2014).
19. T. S. Leung, A. Guilliam, L. MacDonald, and J. Meek, "Investigation of the relationship between the sclera colour of the eye and the serum bilirubin level in newborn infants," in 42nd Annual Meeting of the International Society on Oxygen Transport to Tissue (London, 2014).
20. P. G. Watson and R. D. Young, "Scleral structure, organisation and disease. A review," Exp. Eye Res. 78(3), 609–623 (2004).
21. A. N. Bashkatov, E. A. Genina, V. I. Kochubey, and V. V. Tuchin, "Optical properties of human sclera in spectral range 370–2500 nm," Opt. Spectrosc. 109(2), 197–204 (2010).
22. J. J. Kuiper, "Conjunctival icterus," Ann. Intern. Med. 134(4), 345–346 (2001).
23. N. J. Talley and S. O'Connor, Clinical Examination: A Systematic Guide to Physical Diagnosis (Elsevier Health Sciences APAC, 2014).
24. N. Cohen, "A color balancing algorithm for cameras," Stanford Project Report (2011).
25. J. F. Hair, Multivariate Data Analysis (Prentice Hall, 2010).
26. D. J. Sheskin, Handbook of Parametric and Nonparametric Statistical Procedures (CRC Press, 2003).
27. J. M. Bland and D. G. Altman, "Statistical methods for assessing agreement between two methods of clinical measurement," Lancet 327(8476), 307–310 (1986).
28. M. H. Zweig and G. Campbell, "Receiver-operating characteristic (ROC) plots: a fundamental evaluation tool in clinical medicine," Clin. Chem. 39(4), 561–577 (1993).
29. A. P. Bradley, "The use of the area under the ROC curve in the evaluation of machine learning algorithms," Pattern Recognit. 30(7), 1145–1159 (1997).
30. W. J. Youden, "Index for rating diagnostic tests," Cancer 3(1), 32–35 (1950).
31. S. Leartveravat, "Transcutaneous bilirubin measurement in full term neonate by digital camera," Medical Journal of Srisaket Surin Buriram Hospitals 24, 105–118 (2009).
32. Z. Liu and J. Zerubia, "Skin image illumination modeling and chromophore identification for melanoma diagnosis," Phys. Med. Biol. 60(9), 3415–3431 (2015).
Introduction

Neonatal jaundice is a common condition in newborn babies: it affects 60% of term infants and 80% of preterm infants in the first week of life, and 10% of breastfed babies are still jaundiced at 4 weeks old [1]. It is caused by increased red blood cell breakdown or decreased liver function; although this is usually a physiological process, it nonetheless causes an accumulation of bilirubin. In most cases no treatment is required. However, if the total serum bilirubin (TSB) level exceeds an age-dependent treatment threshold [1], phototherapy has been shown to reduce the TSB level and avoid kernicterus, a condition in which a high level of bilirubin crosses the blood-brain barrier, causing short- and long-term neurological dysfunction. Jaundice can also indicate other life-threatening conditions. For instance, haemolytic disease of the newborn (a condition in which antibodies from the mother attack the newborn’s red blood cells), neonatal infection and liver diseases can all cause bilirubin levels to rise [2]. Early diagnosis of these pathological conditions would allow urgent treatments to be administered in a timely fashion, significantly improving the chance of recovery. In the UK, newborns who have had a low-risk perinatal course are discharged from hospital within 1-2 days of birth. They, and those born at home, are routinely visited by a community midwife who assesses their wellbeing with respect to feeding, weight gain and jaundice. If the baby appears jaundiced, the midwife should measure the bilirubin level either with a transcutaneous bilirubinometer (TcB) or with a blood sample. If the baby appears unwell or the bilirubin measurement exceeds 250 µmol/l, the baby should be referred to the paediatric team in the nearest hospital. Worldwide, there is wide variation in the availability of community midwives and local paediatric teams. A screening tool using digital photography (e.g.
a mobile phone camera) could have wide applications in settings where TcBs and blood tests are too expensive or difficult to obtain. One noticeable feature of jaundiced patients is the yellow discoloration of their skin and sclera caused by a higher level of TSB (yellow in colour) in the blood. It is visually detectable when TSB levels exceed 85 µmol/l, and the degree of the yellow discoloration depends on the TSB level. An experienced healthcare worker can often identify jaundiced patients by visual inspection, and the extent of neonatal jaundice can also be quantified by a 5-point scale first proposed by Kramer [3–5]. However, colour perception can be severely influenced by the ambient lighting and the skin pigment [5]. Such screening practice is therefore unable to provide an objective, reliable and accurate result [3]. For a more quantitative measurement, customised spectrometers have been developed which illuminate the skin with controlled lighting in direct contact and evaluate the reflected light at multiple wavelengths. Currently, the two most popular commercial TcBs are BiliChek (Philips Healthcare) and the Jaundice Meter JM-102/JM-103 (Draeger Medical Inc., previously marketed by Konica Minolta Inc.) [5–16]. Skin pigment is the most influential confounding factor for the skin colour approach. To minimise its effect, the two TcBs adopt very different strategies: BiliChek exploits 137 wavelengths in the visible spectrum between 380 and 760 nm to resolve the concentrations of melanin, haemoglobin, bilirubin and dermis based on their known absorption spectra [7, 17]; JM-102/JM-103, on the other hand, uses only two colours of light, blue and green (with peak wavelengths of 450 nm and 550 nm), together with a special probe design which allows the reflected light to be measured by two detectors at two different distances.
The two optical density signals measured in this way correspond to light with two different penetration depths beneath the skin, one deeper than the other. The difference between the two signals is therefore more sensitive to the deeper subcutaneous tissue, where the effect of skin pigment is minimised [7, 17]. A recent study also showed that a smartphone camera (iPhone 4S) can be used to predict the TSB level based on the skin colour [18]. This approach essentially makes use of only the three broadband colours (red, green and blue) that can be detected by a conventional smartphone camera. The technique converts the three base colours into two colour spaces and uses five different machine learning algorithms to predict the TSB level. The current study reports an alternative approach based on the sclera colour to predict the bilirubin level and quantify jaundice. The main advantage is that the sclera is less influenced by pigments such as melanin: healthy sclera is white, independent of the ethnicity of the subject. Without the influence of a major confounding factor, the quantification of the yellowness, and therefore the prediction of the bilirubin level, becomes much easier. Our study involves a digital camera which is used to take photos of newborn infants’ eyes and does not require any contact with the infants. The digital image is then analysed by a computer program written in Matlab, a high-level technical computing language. Preliminary results were presented at an international conference in July 2014 [19]. The sample size has since been expanded and the methodology improved, as reported in this paper. The ultimate aim of this work is to develop a screening tool which allows more effective referral of suspected patients for blood tests, rather than replacing blood tests completely.

2. Methods

2.1 Study protocol and data collection

The study was approved by the National Research Ethics Service Committee (London – City Road and Hampstead).
The data were collected in the Advanced Neonatal Nurse Practitioner Clinic at University College London Hospital. The majority of these babies were referred by midwives conducting a routine health check after discharge from the hospital. Some appointments were made for babies with pre-existing prolonged or physiological jaundice. As the TSB blood test formed part of the routine clinical procedure during the consultation, no further blood sampling was required specifically for this study. The photographs were taken before the blood test as babies tended to be more compliant at this time. The results for the blood test were processed within twenty minutes thereafter to prevent any error from short-term fluctuations in bilirubin measurements. The camera used was a Nikon D3200, which had a 24.6 megapixel CMOS sensor, with a prime macro lens of 60 mm focal length. Manual focus was used to ensure that the sclera was in focus, and programmed auto mode was selected. This allowed the shutter speed and aperture size to vary depending on light entering the sensor to prevent over/under exposure. The ISO value (sensitivity of the digital sensor to light) was kept at ISO 1600 as a compromise between maintaining fast shutter speed and allowing enough light to expose the picture correctly. White balance was also maintained at the ‘4’ setting for fluorescent lighting and kept constant to ensure that colour temperature did not influence colour analysis on a picture-to-picture basis. A customised colour chart was utilised to match the colour of the sclera to a reference value on the chart so that any yellow difference in sclera colour could be correctly associated with a physiological effect rather than a change in spectrum of ambient lighting. The images were captured in RAW format to preserve as much tonal information as possible for each image taken.

2.2 Subjects

A total of 133 newborn infants were photographed for this study.
Since the ambient lighting can seriously affect the result of this study, only the 110 samples taken in a specific clinic room have been included in the subsequent analysis. The reasons for exclusion were: 18 samples were collected from other locations, 2 samples had erroneous colour channel responses, 1 baby would not fully open his eyes, and 2 TSB blood samples could not be recorded. Figures 1(a) and 1(b) show the distribution of TSBs in relation to the gestational and postnatal ages, respectively. As expected, Fig. 1(b) shows that the TSB tends to drop with postnatal age. The database includes 67 white and 43 non-white infants.

Fig. 1. The distribution of total serum bilirubin levels in relation to (a) gestational, and (b) postnatal ages (n = 110).

2.3 Optical properties of the sclera

The sclera, often known as “the white of the eye”, is the opaque outer layer of the eye surrounding the iris. It is mainly composed of collagen, elastin, proteoglycans and glycoproteins [20]. Since collagen has a slow turnover, the sclera has a low metabolic rate and requires only a low blood supply. In comparison to other parts of the body, e.g., skin, the sclera is less vascularised, which also means that the colour of a healthy sclera is less influenced by the colour of the blood. In newborn infants the sclera is thin and often appears blue, revealing the colour of the underlying uveal layer [20]. The optical properties of human sclera have been studied previously [21], showing that the optical transport scattering coefficient decreases monotonically from 370 nm to 1300 nm, similar to the general spectral characteristics of biological tissues. The same study shows that between 400 and 900 nm the absorption coefficient of the human sclera has a simple spectral trend and lacks the spectral characteristics found in rabbit and pig sclera, which are influenced by scleral melanin present in those animals but not in humans [21].
Since the human sclera is free of melanin, it appears white in all ethnicities. The sclera is overlaid by the clear, lubricating conjunctiva, which some have argued is the main origin of the yellow discolouration associated with a higher level of bilirubin in jaundiced patients [22, 23]. It has been pointed out that bilirubin has a strong affinity for elastin, and that the innermost layer of the conjunctiva and the most superficial part of the sclera are rich in elastic fibres and therefore account for the yellow discoloration [22]. Others have simply noted that the bilirubin is mainly deposited in the vascular conjunctiva rather than the avascular sclera [23]. While acknowledging this subtlety of interpretation, we have adopted the more common term “sclera colour” to describe the combined colour of the conjunctiva and sclera in this paper.

2.4 Signal analysis

The raw images were originally saved in Nikon’s proprietary NEF format and then converted into TIF format using Nikon’s ViewNX 2 software. These images were analysed using the mathematical software Matlab. The pixel coordinates of the sclera, skin (for comparison purposes) and the reference colour were manually identified in an image viewer. The reference colour was taken as the white patch in the colour chart. Figure 2 shows an example of pixels identified in this way. Three colour indexes for red, green and blue, represented by R, G and B here, were calculated by averaging 900 neighbouring pixels (a 30 × 30 region) centred on the pre-defined pixel coordinates. This process resulted in the colour indexes Reye, Geye and Beye for the sclera, Rskin, Gskin and Bskin for the skin, and Rref, Gref and Bref for the reference colour. Normalised versions of the colour indexes were also calculated for both the sclera and skin, e.g., Reye,nor = Reye / Rref is the normalised red colour index of the sclera. The normalisation can be considered as a colour balancing process in digital photography [24].
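The colour-index extraction and white-reference normalisation described above can be sketched as follows. This is an illustrative NumPy re-implementation (the original analysis was written in Matlab); the function names, and the assumption that the image is an H × W × 3 array of linear RGB values, are ours.

```python
import numpy as np

def colour_indexes(image, centre, half=15):
    """Mean R, G, B over a 30 x 30 block (2*half pixels per side)
    centred on the manually identified pixel coordinates."""
    r, c = centre
    block = image[r - half:r + half, c - half:c + half, :]
    return block.reshape(-1, 3).mean(axis=0)  # array([R, G, B])

def normalised_indexes(image, eye_px, ref_px):
    """Normalise the sclera indexes by the white reference patch,
    i.e. R_eye,nor = R_eye / R_ref (and similarly for G and B)."""
    return colour_indexes(image, eye_px) / colour_indexes(image, ref_px)
```

With the sclera at `eye_px` and the colour chart's white patch at `ref_px`, `normalised_indexes` returns (Reye,nor, Geye,nor, Beye,nor); the same calls on skin coordinates give the skin indexes.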
The next step was to develop a multivariate model based on the measured TSB levels and colour indexes in the database [25]. The multiple linear regression was performed using Matlab and its Statistics and Machine Learning Toolbox. Using the colour indexes as the independent variables and the measured TSB values as the dependent variable (y), a quadratic model was developed:

yeye = C0 + C1·Reye + C2·Geye + C3·Beye + C4·Reye·Geye + C5·Reye·Beye + C6·Beye·Geye + C7·Reye² + C8·Geye² + C9·Beye²   (1)

where yeye is the predicted TSB value, and the fitted coefficients are C0 = 373, C1 = −3385, C2 = 5137, C3 = −2391, C4 = −14945, C5 = 1987, C6 = 7163, C7 = 8680, C8 = 1295 and C9 = −3906. This quadratic model was chosen to provide a reasonably high R-squared value while avoiding overfitting the data, as occurs with some higher-order polynomial models. By performing multiple linear regression on four sets of colour indexes (Rskin/Gskin/Bskin, Rskin,nor/Gskin,nor/Bskin,nor, Reye/Geye/Beye and Reye,nor/Geye,nor/Beye,nor), four models were produced which could then be used to predict four sets of TSB levels: yskin, yskin,nor, yeye and yeye,nor.

Fig. 2. An example of the image collected in the UCL Hospitals. The colour chart is shown on the left. The pixel values for the sclera, skin and white reference have been used in the processing. (This image has been media consented by the parents.)

2.5 Statistical analyses

The statistical analyses focused on comparing the measured and predicted TSB levels in terms of their correlation, bias, sensitivity and specificity as a screening tool. The linear correlation between the measured and predicted TSB levels was assessed by calculating Pearson’s linear correlation coefficient r [26]. For a non-parametric correlation less influenced by outliers, Spearman’s rank correlation coefficient ρ was calculated [26]. To assess the agreement between the conventional and new techniques, Bland-Altman analysis was carried out [27].
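As a sketch, Eq. (1) can be fitted by ordinary least squares once the colour indexes are assembled into a quadratic design matrix. The snippet below is an illustrative NumPy equivalent of the Matlab fit; the function names are ours, not from the paper.

```python
import numpy as np

def quadratic_design(rgb):
    """Design matrix for Eq. (1): columns 1, R, G, B, RG, RB, BG, R^2, G^2, B^2."""
    R, G, B = rgb[:, 0], rgb[:, 1], rgb[:, 2]
    return np.column_stack([np.ones_like(R), R, G, B,
                            R * G, R * B, B * G,
                            R ** 2, G ** 2, B ** 2])

def fit_tsb_model(rgb, tsb):
    """Least-squares estimate of the coefficients C0..C9 from an
    n x 3 array of colour indexes and n measured TSB values."""
    coeffs, *_ = np.linalg.lstsq(quadratic_design(rgb), tsb, rcond=None)
    return coeffs

def predict_tsb(coeffs, rgb):
    """Predicted TSB levels for an n x 3 array of colour indexes."""
    return quadratic_design(rgb) @ coeffs
```

Fitting the same model to each of the four sets of colour indexes yields the four predictors yskin, yskin,nor, yeye and yeye,nor.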
Since the ultimate aim of this work is to develop a screening tool which makes a binary decision (referral or non-referral), the accuracy of detecting subjects above a certain TSB screening level (ysc) was also assessed using the receiver operating characteristic (ROC) curve [28]. The ROC curves were calculated from the predicted TSB levels, with the measured TSB levels taken as the true answers. Suppose the new technique aims to identify subjects with a TSB level above a screening threshold, e.g., ysc = 205 µmol/l. If one uses a low cut-off threshold (ycut) of 50 µmol/l (i.e. all subjects with predicted TSB levels above 50 µmol/l would be considered to have TSB levels above 205 µmol/l), the true positive rate (TPR) would be very high, albeit at the expense of a high false positive rate (FPR). As ycut increases, both the TPR and FPR drop in a non-linear fashion. The ROC curve summarises the TPR and FPR over a range of ycut, allowing the selection of a particular ycut with a certain sensitivity (TPR) and specificity (1 − FPR). The “area under the ROC curve” (AUC) was also calculated, which indicates the general accuracy of the screening technique [29].

3. Results

The results are presented for four sets of predicted TSB levels: yskin, yskin,nor, yeye and yeye,nor, as described in section 2.4. Although the main focus of this study was on using the sclera colour to predict TSB levels, the skin colour is included here for comparison purposes.

3.1 Correlations between the measured and predicted serum bilirubin levels

Table 1 summarises the values of Pearson’s linear correlation coefficient r and Spearman’s rank correlation coefficient ρ between the measured and predicted TSB levels. Both the Pearson and Spearman correlations for the sclera colour (normalised: 0.75/0.72; un-normalised: 0.75/0.72) are much higher than those for the skin colour (normalised: 0.54/0.50; un-normalised: 0.56/0.54).
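The referral decision described above reduces to a single threshold comparison. The helper below is an illustrative sketch (ours, not from the paper) that computes the TPR and FPR of the screen for one choice of cut-off; the default thresholds are the ysc = 205 µmol/l screening level and the ycut = 162 µmol/l cut-off discussed in the ROC results.

```python
import numpy as np

def screening_rates(y_pred, y_meas, y_sc=205.0, y_cut=162.0):
    """TPR and FPR of referring subjects whose predicted TSB exceeds
    y_cut, scored against the ground truth 'measured TSB > y_sc'.
    All thresholds and TSB values are in µmol/l."""
    truth = np.asarray(y_meas, float) > y_sc      # true referrals
    flagged = np.asarray(y_pred, float) > y_cut   # predicted referrals
    tpr = (flagged & truth).sum() / truth.sum()       # sensitivity
    fpr = (flagged & ~truth).sum() / (~truth).sum()   # 1 - specificity
    return tpr, fpr
```

Sweeping `y_cut` over a range of values and collecting the (FPR, TPR) pairs traces out the ROC curve.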
Figure 3 shows scatter plots of the measured and predicted TSB levels using the sclera colour, Fig. 3(a), and the skin colour, Fig. 3(b), as the predictors. An identity line is also shown in each figure; the predicted TSBs tend to overestimate in the lower TSB range (around 100 µmol/l) and underestimate in the higher TSB range (around 250 µmol/l). The data points are generally closer to the identity line when the sclera colour is used as the predictor (Fig. 3(a) for yeye) than when the skin colour is used (Fig. 3(b) for yskin).

Table 1. Correlation between the measured and predicted TSB levels

Predicted TSB   Region of interest   RGB normalised by white reference?   Pearson’s r (95% CI)        Spearman’s ρ
yskin,nor       Skin                 Yes                                  0.54 (0.39–0.66, p<0.01)    0.50 (p<0.01)
yskin           Skin                 No                                   0.56 (0.42–0.68, p<0.01)    0.54 (p<0.01)
yeye,nor        Sclera               Yes                                  0.75 (0.65–0.82, p<0.01)    0.72 (p<0.01)
yeye            Sclera               No                                   0.75 (0.65–0.82, p<0.01)    0.72 (p<0.01)

3.2 Bland-Altman plot

The previous section established that the TSB predicted by the un-normalised RGB values of the sclera, yeye, provides good performance, and it is the focus of the following analysis. Figure 4 shows the Bland-Altman plot, which is often used to assess the agreement between two measurement methods [27]. The mean difference is close to zero because of the condition imposed by the least squares multiple linear regression algorithm. The standard deviation is 41.60 µmol/l. Although the vast majority of data points (all except eight) lie within the 95% confidence intervals, i.e., 0 ± 81.54 µmol/l (mean ± 1.96 standard deviations), the TSB differences are generally large, indicating that the predicted TSBs based on the sclera colour are not accurate enough to replace TSBs measured by blood sampling on an individual basis.
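The Bland-Altman statistics quoted above (mean difference, and mean ± 1.96 SD limits of agreement) are straightforward to compute; a minimal sketch, with names of our own choosing:

```python
import numpy as np

def bland_altman(y_pred, y_meas):
    """Bias and 95% limits of agreement between predicted and
    measured TSB: mean difference and mean +/- 1.96 SD."""
    diff = np.asarray(y_pred, float) - np.asarray(y_meas, float)
    bias = diff.mean()
    sd = diff.std(ddof=1)  # sample standard deviation of the differences
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)
```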
Similar to the scatter plots in Fig. 3, the Bland-Altman plot also shows the tendency of the predicted TSBs to overestimate in the lower TSB range and underestimate in the higher TSB range. Similar trends have been found in a commercial TcB (BiliChek) [5].

Fig. 3. Scatter plots for the measured and predicted TSB levels based on (a) the sclera colour (un-normalised) and (b) the skin colour (un-normalised). The Pearson’s linear correlation (r), the Spearman’s rank correlation (ρ) and linear regression lines are also shown (n = 110).

Fig. 4. The Bland-Altman plot showing the agreement between the conventional blood sampling method (measurement) and the photographic method (prediction) based on the sclera colour: the mean difference is zero and most of the differences lie between ± 1.96 standard deviations, i.e., ± 81.54 µmol/l.

3.3 Receiver operating characteristic curves

Figure 5 depicts the ROC curve for a screening threshold (ysc) of 205 µmol/l. The optimal cut-off threshold (ycut*) can be identified using Youden’s J statistic, which maximises J = sensitivity + specificity − 1 = TPR − FPR [30]. This optimal point represents the best compromise between a high TPR and a low FPR and is, in this case, found to be ycut* = 194 µmol/l (hollow circle), with sensitivity = 0.80 and specificity = 0.86. However, one can also choose a particular ycut to raise the sensitivity, which is important in a screening test. A ycut of 162 µmol/l (filled circle) has been chosen to raise the sensitivity to 1.00 at the expense of a reduced specificity of 0.50.

Fig. 5. The ROC curve for screening TSB above 205 µmol/l: the optimal cut-off threshold (open circle) based on Youden’s J statistic, ycut* = 194 µmol/l, results in sensitivity = 0.80 and specificity = 0.86. To increase the sensitivity, the cut-off threshold is reduced to 162 µmol/l (filled circle), which raises the sensitivity to 1.00 at the expense of a reduced specificity of 0.50.

4.
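The ROC construction and Youden's J selection described above can be sketched as follows. This is an illustrative implementation (ours, not the paper's Matlab code) which sweeps every observed prediction value as a candidate cut-off:

```python
import numpy as np

def roc_analysis(y_pred, y_meas, y_sc=205.0):
    """Youden-optimal cut-off (max J = TPR - FPR) and trapezoidal AUC
    for screening subjects with measured TSB > y_sc (µmol/l)."""
    y_pred = np.asarray(y_pred, float)
    truth = np.asarray(y_meas, float) > y_sc
    cuts = np.unique(y_pred)                      # candidate cut-offs
    tpr = np.array([(y_pred >= c)[truth].mean() for c in cuts])
    fpr = np.array([(y_pred >= c)[~truth].mean() for c in cuts])
    best_cut = cuts[np.argmax(tpr - fpr)]         # Youden's J statistic
    order = np.lexsort((tpr, fpr))                # sort by FPR, then TPR
    x = np.concatenate([[0.0], fpr[order], [1.0]])
    y = np.concatenate([[0.0], tpr[order], [1.0]])
    auc = float(np.sum(np.diff(x) * (y[1:] + y[:-1]) / 2))  # trapezoid rule
    return best_cut, auc
```

The `(0, 0)` and `(1, 1)` corners are appended before integrating so the curve spans the full FPR range.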
Discussion

4.1 Ambient lighting

The images taken in this study relied on the ambient lighting to illuminate the sclera, which inevitably affected the prediction performance. When images taken under various ambient lightings were used for TSB prediction, the correlations between the sclera/skin colours and the measured TSB were poor (results not shown here). When only the sclera samples collected from the same location were used for prediction, the correlations became much stronger, as shown in Table 1. It may at first seem surprising that in Table 1 the normalised RGB values, which can be considered to have undergone a white balancing process, do not improve the correlations in comparison to the un-normalised RGB values. However, white balancing is normally used to reduce the effect of colour differences under different illuminations. In our case, the same illumination (fluorescent light in the clinic room) was used for the whole data set, so white balancing is less important. In fact, the introduction of white balancing can be counterproductive because the white reference colour can be an additional source of noise when it receives a different illumination due to multipath reflections. In general, the region of interest is also affected by light reflected from the surrounding areas. For example, a brightly coloured jumper worn by the mother holding the baby could affect the RGB pixel values of the image. This variability of the lighting environment causes errors in the predicted TSB, leading to the high standard deviation of 41.60 µmol/l in the Bland-Altman plot in Fig. 4.

4.2 Comparisons with other techniques

Numerous studies have compared the agreement between TcB and TSB [5–16]. One review in 2013 summarised the results from 22 studies conducted between 1982 and 2012 ([5–16]), mainly on BiliChek and JM-103 (and its earlier versions) [6].
Two other studies adopted digital photography to predict TSB based on the skin colour of the sternum and forehead [18, 31]. The results of these studies are summarised in Table 2. The linear correlation coefficient r for the other studies ranges between 0.83 and 0.86, while for this study it is lower at 0.75. The standard deviations of the mean differences reported in [6] are 24.06 µmol/l for the sternum site and 29.46 µmol/l for the forehead site, in comparison to ours of 41.60 µmol/l for the sclera site. Our database lacks subjects with TSB values at both the high (>250 µmol/l) and low (<100 µmol/l) ends of the range, which may in turn affect the r value.

Table 2. Comparison of results obtained by various studies (conversion for bilirubin: 1 mg/dl = 17.1 µmol/l)

Study                     Method                      Body site            Pearson’s r (n)                    Bland-Altman: mean difference(a) ± SD (µmol/l)
Nagar 2013 [6]            TcB: BiliChek, JM-102/103   Forehead             0.83 (n N/A; 16 pooled studies)    −0.06 ± 29.46 (n = 912; 11 pooled studies)
Nagar 2013 [6]            TcB: BiliChek, JM-102/103   Sternum              0.83 (n N/A; 10 pooled studies)    3.80 ± 24.06 (n = 265; 5 pooled studies)
Leartveravat 2009 [31]    Digital photography         Sternum              0.86 (n = 61)                      N/A
de Greef 2014 [18]        Digital photography         Sternum & forehead   0.84 (n = 100)                     N/A
Leung 2015 (this study)   Digital photography         Sclera               0.75 (n = 110)                     0.00 ± 41.60

(a) Mean difference defined as: predicted TSB − measured TSB.

For screening purposes, the ROC curve results may be more relevant. Table 3 shows the ROC curve results from three studies [5, 17], including ours. A previous study found that JM-102 performed better than BiliChek for screening subjects with ysc = 250 µmol/l using ycut = 230 µmol/l, achieving a sensitivity of 0.97 and a specificity of 0.83 [5].
Another study also found that JM-103 was more accurate than BiliChek in screening for subjects with ysc = 205 µmol/l using ycut = 144 µmol/l, achieving a sensitivity of 1.00 and a specificity of 0.42 [17], which are comparable to our results: sensitivity = 1.00 and specificity = 0.50 in screening for subjects with ysc = 205 µmol/l using ycut = 162 µmol/l. Although the UK-based NICE guidelines require the screening technique to identify patients with ysc > 250 µmol/l for referral [1], we are unable to report this result because our database lacks samples above 250 µmol/l. The AUC of our technique, at 0.87, is also comparable to the others’, which range between 0.89 and 0.98.

Table 3. Comparison of results from ROC curves

Study                     Method                Body site   n     ysc (µmol/l)   ycut (µmol/l)   Sensitivity   Specificity   AUC
Szabo 2004 [5]            TcB: BiliChek         Forehead    140   250            190             0.94          0.72          0.92
Szabo 2004 [5]            TcB: JM-102           Sternum     140   250            230             0.97          0.83          0.98
Romagnoli 2012 [17]       TcB: BiliChek         Forehead    630   205            157             0.99          0.30          0.89
Romagnoli 2012 [17]       TcB: JM-103           Forehead    630   205            144             1.00          0.42          0.94
Leung 2015 (this study)   Digital photography   Sclera      110   205            162             1.00          0.50          0.87

5. Conclusions

This work has shown that using the sclera colour to predict the TSB level achieves reasonably high correlations (r = 0.75 and ρ = 0.72) with the measured TSB. Although these correlations may not be high enough for the technique to predict the absolute TSB level on an individual basis, it is reasonably robust as a screening technique for identifying subjects with TSBs above 205 µmol/l, achieving a sensitivity of 1.00 and a specificity of 0.50, comparable to those achieved by commercial TcBs such as JM-103 and BiliChek [17].
In comparison to skin, the sclera is less affected by confounding factors such as melanin (ethnicity), allowing a relatively simple analysis technique, i.e., multiple linear regression, to be used in this work while still producing encouraging results. The technique could also be readily implemented as a low-cost smartphone app, making it more accessible to the wider community and to developing countries. Unlike contact-based TcBs, which require disposables, this technique is non-contact, making it easier and more economical to use. To further improve the performance, certain improvements will be made in our next phase of work. First, to avoid the influence of ambient light, the room light will be turned off and a diffuse LED flashlight will be used to gently illuminate the face of the baby. In this way, photos can be taken in different locations while ensuring the same illumination is used. Second, the lighting environment can be better controlled by placing the baby in a specially designed “cot” with non-reflective surfaces, similar to the light cabinet (a.k.a. viewing booth or light box) used in the textile industry for visual inspection of fabrics and garments. This environment would reduce the multipath problem and therefore minimise the influence of coloured objects in the surroundings on the captured images. Third, the algorithm can be improved by adopting techniques such as independent component analysis, which has been shown to provide better identification of pigment colour [32].

Acknowledgments

The authors would like to thank all the subjects and their parents for participating in this study; the UCL Grand Challenge Small Grants Scheme (Global Health), an EPSRC Vacation Bursary and an EPSRC Fellowship Grant (EP/G005036/1) for providing funding; J. Bernardo, R. Birch, M. Dinan, S. Jollye, R. Lombard, N. McKeown, H. Mpanza and S. Syed for invaluable help with the coordination of the study; and I. Liu for initiating the idea.
Multiscale modeling of spring phenology across Deciduous Forests in the Eastern United States

Eli K. Melaas1, Mark A. Friedl1 and Andrew D. Richardson2

1Department of Earth and Environment, Boston University, 675 Commonwealth Avenue, Boston, MA 02215, USA; 2Department of Organismic and Evolutionary Biology, Harvard University, HUH, 22 Divinity Avenue, Cambridge, MA 02138, USA

Abstract

Phenological events, such as bud burst, are strongly linked to ecosystem processes in temperate deciduous forests. However, the exact nature and magnitude of how seasonal and interannual variation in air temperatures influence phenology is poorly understood, and model-based phenology representations fail to capture local- to regional-scale variability arising from differences in species composition. In this paper, we use a combination of surface meteorological data, species composition maps, remote sensing, and ground-based observations to estimate models that better represent how community-level species composition affects the phenological response of deciduous broadleaf forests to climate forcing at spatial scales that are typically used in ecosystem models. Using time series of canopy greenness from repeat digital photography, citizen science data from the USA National Phenology Network, and satellite remote sensing-based observations of phenology, we estimated and tested models that predict the timing of spring leaf emergence across five different deciduous broadleaf forest types in the eastern United States.
Specifically, we evaluated two different approaches: (i) using species-specific models in combination with species composition information to ‘upscale’ model predictions and (ii) using repeat digital photography of forest canopies that observe and integrate the phenological behavior of multiple representative species at each camera site to calibrate a single model for all deciduous broadleaf forests. Our results demonstrate variability in cumulative forcing requirements and photoperiod cues across species and forest types, and show how community composition influences phenological dynamics over large areas. At the same time, the response of different species to spatial and interannual variation in weather is, under the current climate regime, sufficiently similar that the generic deciduous forest model based on repeat digital photography performed comparably to the upscaled species-specific models. More generally, results from this analysis demonstrate how in situ observation networks and remote sensing data can be used to synergistically calibrate and assess regional parameterizations of phenology in models.

Keywords: deciduous forest, MODIS, phenology models, species composition, spring phenology

Received 2 June 2015; revised version received 24 September 2015 and accepted 29 September 2015

Introduction

The springtime phenology of leaf emergence and development in temperate deciduous broadleaf forests is highly correlated with air temperature (Lechowicz, 1984). Seasonality in photoperiod also influences the timing of leaf development, especially in late successional temperate tree species, although the exact role of photoperiod is not well understood (Chuine et al., 2010; Körner & Basler, 2010). In addition to providing an important diagnostic of climate variation and change, phenology is also a key regulator of seasonal carbon, water, and energy exchanges in many ecosystems (Fitzjarrald et al., 2001; Richardson et al., 2010).
Hence, understanding of mechanistic controls and accurate representation of vegetation phenology in ecosystem models that are used to simulate biosphere–atmosphere interactions is critical (Richardson et al., 2012). Most widely used land surface models (LSMs) simulate the onset of leaf development using prescribed dates estimated from satellite remote sensing or empirically derived functions based on cumulative chilling and forcing units (Yang et al., 2012). Recent studies, however, have demonstrated that current representations of phenology in LSMs are not realistic. Using in situ data from eddy covariance sites located in North American deciduous forests, Richardson et al. (2012) showed that LSMs consistently predict the onset of spring phenology early by more than 2 weeks and, in some cases, by as much as 10 weeks. While the source of this bias was different for each model, overly simplified model representations and overfitting of model coefficients (or both) are widely known sources of error in phenological models (Linkosalo et al., 2008; Melaas et al., 2013a). An additional source of error, which we examine here, is that despite previous research showing substantial variability in the response of different temperate forest species to warming, chilling, and photoperiod controls (e.g., Körner & Basler, 2010; Jeong et al., 2012; Migliavacca et al., 2012), most models do not account for species-specific or community-level responses to climate forcing.

Correspondence: Eli Melaas, tel. +1 248 343 9278, fax +1 617 353 8399, e-mail: emelaas@bu.edu

© 2015 John Wiley & Sons Ltd, Global Change Biology (2016) 22, 792–805, doi: 10.1111/gcb.13122

An obvious solution to this problem is to include species-specific representations of phenology. However, doing so at continental or even regional scales in LSMs poses significant challenges.
In particular, current LSMs generally represent vegetation at coarse spatial resolutions (~0.05–1°) using mosaics of highly generalized plant functional types to capture first-order heterogeneity in the ecophysiology of terrestrial vegetation. Refinement of such models to include representations of species-specific responses requires geographically explicit information related to species composition that does not exist for most locations.

In this paper, we propose a solution to this limitation that stratifies plant functional types according to community composition, thereby capturing local controls and geographic variation in the response of springtime phenology to climate forcing. To develop and test our approach, we focus on temperate deciduous forests in the eastern United States and disaggregate the region encompassing these forests into five forest community types. Our justification for this strategy is that geographic variation in forest community composition reflects variation in climate, elevation, soil conditions, and land use history, and by extension, also reflects differential adaptation to bioclimatic conditions, including distinct ecophysiological traits that influence spring leaf emergence (Lechowicz, 1984). For example, southern species representative of oak-gum and oak-hickory forests tend to be ring-porous, with large-diameter vessels prone to embolism from freezing. Northern forest species representative of aspen–birch and maple–beech–birch forests, on the other hand, are overwhelmingly diffuse-porous, with narrow-diameter vessels and lower conducting capacity. As a result, ring-porous species tend to leaf out later than diffuse-porous species under the same forcing conditions because ring-porous species require new wood growth to supply water to their newly expanding leaves (Wang et al., 1992).

To test this approach, our analysis makes use of data acquired at multiple scales.
Specifically, ground observations, near-surface remote sensing, and satellite remote sensing offer valuable information regarding spatial variation in the timing of spring leaf development at the scale of individual trees (e.g., the USA National Phenology Network (USA-NPN); Betancourt et al., 2007), forest stands (e.g., the PhenoCam network; Richardson et al., 2007), and regions (e.g., the Moderate Resolution Imaging Spectroradiometer (MODIS); Ganguly et al., 2010). Phenology data from individual trees provide detailed information regarding the nature and magnitude of within-species variability in phenology, but these patterns are hard to generalize at regional and larger scales. Satellite data, on the other hand, provide spatially integrated measurements of the timing of leaf onset over landscapes that include mixtures of species, plant functional types, and land cover types (e.g., forests vs. agriculture) (Badeck et al., 2004; Klosterman et al., 2014). Hence, phenological models that are calibrated to remote sensing data may include biases that depend on the composition of land cover or plant functional types in model grid cells (e.g., Jenkins et al., 2002). At intermediate spatial scales, imagery from repeat digital photography captures canopy dynamics at scales that range from individual trees to forest stands, and therefore complements observations from both ground-based networks and remote sensing (Hufkens et al., 2012; Klosterman et al., 2014).

Together, these three sources of data provide a rich basis for calibrating species-specific and more generalized chilling and forcing sum models of phenology (e.g., Raulier & Bernier, 2000). To date, however, most modeling efforts focused on deciduous forests in the eastern United States have utilized data from individual sites or observation networks that span relatively small geographic ranges (e.g., Jeong et al., 2012; Migliavacca et al., 2012).
As a result, the nature and magnitude of geographic variation in photoperiod and temperature controls on spring phenology in temperate trees remain poorly understood, and models that predict leaf onset and development do not generalize well across sites (e.g., Chuine et al., 1998; Richardson et al., 2006). This limits the utility of models for understanding and predicting how the phenology of temperate forests will be affected by climate change, and by extension, how changes in phenology are likely to affect biosphere–atmosphere interactions in the future (Richardson et al., 2013).

With these issues in mind, the goal of this paper was to develop and test models of springtime leaf onset in temperate deciduous forests that account for community-level differences in phenological behavior. The resulting models offer new insight regarding how ecological controls vary as a function of both climate forcing and community species composition and, by extension, provide a basis for geographically explicit parameterization of phenological dynamics in LSMs. To accomplish this goal, our analysis includes three main elements. First, we use networks of ground observations of springtime phenology, including data from USA-NPN and PhenoCam, to develop and calibrate process-based spring phenology models for each forest community type (see Table 1 for a complete summary of observation datasets used in the analysis). Second, we use species composition data to identify community-level clusters of tree species at the regional scale, which are subsequently used to define regions with common species composition.
Third, we estimate and apply our spring phenology models over the entire eastern temperate deciduous forest biome using gridded climate data and compare our model results with observations of spring phenology derived from moderate resolution satellite remote sensing data.

Data and methods

Study region

The study region encompassed all temperate forests in the eastern United States located between 68°W–95°W and 30°N–50°N, including most of the eastern temperate forest ecoregion and southern portions of the northern forest ecoregion defined by the Level 2 United States Environmental Protection Agency ecoregions (http://www.epa.gov/wed/pages/ecoregions.htm) (Fig. 1). Forests in the study region include five main community types: (i) maple–beech–birch forests in the northeast, (ii) aspen–birch forests in the northern Great Lakes region, (iii) oak-hickory forests extending across the Ozark and Ouachita-Appalachian mountain ranges, (iv) oak-gum mixed hardwoods in the south, and (v) elm–ash–cottonwood across the Interior Lowlands. Climate in this region is classified as either humid continental or humid subtropical, with mean annual temperatures and precipitation ranging from −2 to 21 °C and from 500 to 2000 mm, respectively. The region is heavily forested, but also includes substantial urban and agricultural areas, which were excluded from this analysis.

Ground observations

National phenology network. The USA-NPN compiles and distributes observations of spring and autumn phenology collected by citizen scientists across the United States. In spring (nominally March, April, and May), trained observers record dates for key phenophase events (including flowering, leaf emergence, and leaf maturation) using consistent and well-defined protocols (see www.usanpn.org; Denny et al., 2014). As part of their mission, USA-NPN also performs quality control and screening of observations provided by citizen scientists.
For this study, we identified all available USA-NPN observations that were collected within our study domain and extracted annual spring onset dates for individual trees based on the first observation in each year when the phenophase attribute was recorded as 'Leaves', indicating leaf emergence or unfolding. This dataset encompassed all available observations for deciduous tree species collected between 2009 and 2013, excluding species with fewer than 30 site-years. The final dataset included 980 site-years for 10 tree species collected at 692 sites in 28 states (Table 2; Fig. 1a). While this dataset does include finite levels of uncertainty, Gerst et al. (2015) specifically explored how errors in USA-NPN data might affect results from models. Results from their analysis, which spanned a similar geographic region to this study, showed that at regional scales where sample sizes are large, the quality of USA-NPN data is more than sufficient to support statistical modeling.

To calibrate the heat- and chill-sum models that we used to predict the timing of leaf emergence, we downloaded daily minimum and maximum temperatures from 2008 to 2013 for United States Historical Climatology Network (USHCN) stations located in the study region where at least 75% of observations were available (http://cdiac.ornl.gov/epubs/ndp/ushcn/ushcn.html). Using these data, we estimated daily mean temperatures at each USA-NPN site using inverse distance weighting of observations from the 15 closest USHCN stations, corrected for elevation differences between USHCN stations and USA-NPN observation sites using an environmental lapse rate of 6.5 °C km⁻¹.

PhenoCam data.
The PhenoCam network uses repeat digital photography to monitor vegetation phenology (http://phenocam.sr.unh.edu). In addition to providing access to the raw imagery, the PhenoCam project has developed methods for screening and preprocessing image data, and for extracting quantitative metrics that are useful for characterizing phenological dynamics in vegetation (Richardson et al., 2007). For this work, we used time series of the green chromatic coordinate (GCC) index to estimate the timing of spring green-up from PhenoCam image time series (Sonnentag et al., 2012).

Table 1 Summary of observation datasets used to calibrate and test phenology models across the study region

  Name                            Type                              Years                       Spatial resolution   Application
  USA National Phenology Network  Ground phenology                  2009–2013                   Individual trees     Calibrate species-specific GDD model parameters
  PhenoCam network                Near-surface digital photography  2000–2012                   Forest canopy        Calibrate PhenoCam GDD model parameters
  MODIS                           Satellite remote sensing          2001–2012 (excluding 2007)  500 m                Regional testing of GDD models
  USHCN                           Ground meteorology                2008–2013                   Individual stations  Train species-specific and PhenoCam GDD models
  DAYMET                          Interpolated surface meteorology  2000–2012                   1 km                 Regional testing of GDD models
  FIA                             Species composition/distribution  2012                        25 km hexagons       Spatial weighting of species-specific GDD models
  NLCD                            Tree canopy and land cover        2001 and 2006               30 m                 Deciduous forest stratification
We used all eastern deciduous forest sites in the PhenoCam network with at least 3 years of data and preprocessed GCC time series at each site using a 90th percentile filter applied to 3-day moving windows for manually defined regions of interest in webcam imagery that are selected to provide representative measurements of tree canopy phenology at each site. Application of this filter significantly reduces noise in GCC time series introduced by changes in scene illumination related to variability in cloud cover and other atmospheric conditions (Sonnentag et al., 2012). Three-day GCC data were interpolated to daily values using cubic smoothing splines, and the timing of leaf emergence was identified as the date when the GCC reached 50% of its springtime amplitude at each site. The resulting dataset included 80 site-years from 13 camera locations (Klosterman et al., 2014).

MODIS observations

The MODIS Nadir Bidirectional Reflectance Distribution Function-Adjusted Reflectance (NBAR) product (MCD43A4) provides surface reflectance measurements at 500 m spatial resolution that are normalized to a consistent nadir view geometry at eight daytime steps using MODIS observations acquired during overlapping 16-day periods (Schaaf et al., 2002). For this work, we computed the enhanced vegetation index (EVI; Huete et al., 2002) from NBAR data collected between 2001 and 2012 and used the resulting NBAR-EVI time series to estimate the timing of spring onset for all deciduous forest pixels located in 6 MODIS tiles spanning the eastern United States. Using a procedure similar to the one we used to estimate the timing of leaf emergence from GCC time series, we generated daily time series of NBAR-EVI using cubic spline interpolation applied to the 8-day data and identified the timing of spring onset as the day of year (DOY) when NBAR-EVI values exceeded 20% of the springtime amplitude in EVI at each pixel. We chose 20% instead of 50% based on results shown by Klosterman et al. (2014), who performed an extensive comparison of phenology metrics derived from satellite remote sensing and PhenoCams. In 2007, a widespread frost event occurred, which significantly damaged leaves across the southern half of the study region and subsequently delayed MODIS-detected spring onset by 3–4 weeks (Gu et al., 2008).

Fig. 1 (a) Map of NPN and PhenoCam observation sites used to parameterize growing degree-day models; (b) map of deciduous forest coverage across the study region. Each 1 km grid cell is labeled according to the number of 500 m pixels classified as deciduous forest in a 3 × 3 window centered on the grid cell. Grid cells with fewer than 3 deciduous pixels were excluded from the study. We used the National Land Cover Database to classify deciduous forest and, therefore, did not provide coverage for Canada.

Table 2 Local test statistics for species-specific growing degree-day models across NPN, PhenoCam, Harvard Forest, and Hubbard Brook observations. Statistics are based on performance across all site-years for each dataset (RMSE and MBE in days; COR = Pearson's correlation; a dash indicates the species was not evaluated at that site)

                                                NPN/PhenoCam         Harvard Forest SPP, 1990–2011   Hubbard Brook SPP, 1989–2012
  SPP             SPP Code  No. of obs.  Model  RMSE   MBE    COR    RMSE   MBE    COR               RMSE   MBE    COR
  Red Maple       ACRU      330          PAR1    9.9    1.2   0.78    5.2    2.2   0.68               –      –      –
  Sugar Maple     ACSA      133          SW4     8.6    0.2   0.74    4.6   −3.2   0.83               4.8   −0.4   0.83
  Paper Birch     BEPA       62          SW2     6.4   −1.5   0.79    4.9   −1.6   0.61               –      –      –
  American Beech  FAGR       72          SW2    10.9   −3.8   0.79    8.7   −6.5   0.39               6.8   −3.5   0.69
  Quaking Aspen   POTR       63          SW4     8.6    0.7   0.79    5.2   −3.4   0.85               –      –      –
  Black Cherry    PRSE       75          ALT1   10.6   −1.0   0.74    8.1    6.0   0.46               –      –      –
  Black Walnut    JUNI       37          SW2    10.3   −0.1   0.71    –      –      –                 –      –      –
  Yellow Poplar   LITU       56          SW2     9.7    1.5   0.74    –      –      –                 –      –      –
  White Oak       QUAL       67          ALT1    9.7    0.5   0.81    8.2    6.6   0.57               –      –      –
  N. Red Oak      QURU       75          SW1     9.7   −0.3   0.81    5.1    3.4   0.80               –      –      –
  PhenoCam        PhCam      80          ALT1    8.5   −1.2   0.82    –      –      –                 –      –      –
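Both the PhenoCam GCC and MODIS NBAR-EVI onset retrievals described above reduce to the same operation: interpolate a coarse time series to daily resolution and flag the first day that crosses a fixed fraction of the springtime amplitude (50% for GCC, 20% for EVI). A minimal sketch with synthetic data, using linear interpolation in place of the cubic smoothing splines used in the paper:

```python
import numpy as np

def spring_onset_doy(doy, values, frac):
    """First day-of-year when the daily-interpolated series reaches
    `frac` of its seasonal amplitude (0.5 for GCC, 0.2 for NBAR-EVI)."""
    doy = np.asarray(doy, dtype=float)
    values = np.asarray(values, dtype=float)
    days = np.arange(int(doy[0]), int(doy[-1]) + 1)
    # The paper uses cubic smoothing splines; plain interpolation here
    daily = np.interp(days, doy, values)
    threshold = daily.min() + frac * (daily.max() - daily.min())
    return int(days[np.argmax(daily >= threshold)])
```

For MODIS, the resulting per-pixel onset dates are then reduced to a median over each 3 × 3 pixel window centered on a Daymet grid cell, as described below.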
Because the method to estimate spring onset from MODIS that we describe above does not allow for the possibility of multiple 'green-up' phases in the same spring, retrievals of spring onset in 2007 from MODIS were problematic throughout much of the southern portion of our study region. We therefore excluded 2007 from our analysis.

Daymet meteorological data

The Daymet dataset provides gridded surface meteorological data for the contiguous United States at high spatial and temporal resolution (Thornton et al., 2014; http://daymet.ornl.gov/). The dataset uses 35 years (1980–2014) of minimum and maximum temperature and precipitation measurements interpolated from weather stations in the Global Historical Climatology Network (Williams et al., 2006) to provide daily gridded data at 1 km spatial resolution. For this analysis, we used daily temperature data from 2000 to 2012 for 101 2° × 2° Daymet tiles located within the study region. To merge Daymet data with MODIS observations, we calculated the median spring onset date for 3 × 3 MODIS pixel windows centered on each 1 km Daymet grid cell. Only grid cells with at least 3 deciduous forest pixels (based on land cover information, described below) in the 3 × 3 window were included in the analysis (Fig. 1b).

Species composition and distribution

The Forest Inventory and Analysis (FIA) program of the U.S. Forest Service routinely collects forest stand measurements at nearly 200 000 plots nationwide. In the analysis presented below, we used spatially averaged basal areas for dominant tree species in ~54 000 ha hexagons estimated from plot-level FIA data (~22 plots per hexagon) to characterize geographic variation in forest composition across our study region (Wilson et al., 2013).
To do this, we first assigned each deciduous tree species to one of five main forest types defined in Appendix D of the FIA Database User's Manual: (i) oak-hickory, (ii) oak-gum, (iii) elm–ash–cottonwood, (iv) maple–beech–birch, and (v) aspen–birch. We then used the average species-level basal area data to map the proportion of each forest type in each hexagon in our study region (Fig. 2).

Deciduous forest stratification

MODIS NBAR-EVI data have a nominal ground resolution of 500 m. Hence, individual pixels generally contain mixtures of land cover and plant functional types that have different phenological responses to weather and climate forcing. To minimize variability in MODIS time series associated with subpixel heterogeneity in land cover and plant functional types, we used the 2006 National Land Cover Database (NLCD) in association with tree canopy cover information from the 2001 NLCD (Homer et al., 2007; Fry et al., 2011), both at 30 m spatial resolution, to identify 500 m pixels where deciduous forest cover and tree canopy density were both greater than or equal to 60 percent. Pixels that did not meet these criteria were excluded from the analysis.

Spring phenology models

We used USA-NPN, PhenoCam, and MODIS data to calibrate and test 13 models that use thermal forcing or combined thermal forcing and chilling to predict the timing of spring onset. Each model was based on one of four functional forms in which spring onset is predicted to occur when the state of forcing (S_f(t)) reaches a critical sum of forcing units (F*). Below, we describe each of these functional forms. Complete details regarding each of the 13 models that we tested are provided in the Supporting information 1.

Spring warming model.
In the spring warming model (SW), accumulated forcing is computed as a function of air temperature using a logistic function (Sarvas, 1974):

S_f(t) = \sum_{t=t_0}^{t_{pheno}} \max\left[ \frac{28.4}{1 + \exp(3.4 - 0.185\,T_{air})},\; 0 \right] \quad (1)

where T_{air} is daily mean air temperature, t_0 is the start date when forcing begins to accumulate, and t_{pheno} is the date when S_f(t) exceeds a prescribed threshold (F*).

Sequential model. The sequential model (SEQ) assumes that forcing accumulation does not occur until a critical threshold of accumulated chilling units (C*) is reached, where the state of chilling (S_c(t)) increases when T_{air} falls below a prescribed temperature threshold, T_c:

S_c(t) = \sum_{t=t_0}^{t_1} \begin{cases} 1 & T_{air} < T_c \\ 0 & T_{air} \ge T_c \end{cases} \quad (2)

In this model, t_1 is the date when chilling requirements are met and forcing accumulation begins. Unlike the SW model described above, the rate of accumulated forcing in the sequential model is linearly related to air temperature:

S_f(t) = \sum_{t=t_1}^{t_{pheno}} \max(T_{air} - T_f,\; 0) \quad (3)

where T_f is a base temperature below which no leaf development is assumed to occur.

Alternating and parallel models. The alternating (ALT) and parallel (PAR) models both assume that the forcing required for spring onset decreases exponentially as accumulated chilling increases. In the ALT model, chilling and forcing take turns accumulating from an initial start date relative to a single base temperature. Specifically, chilling and forcing accumulate according to Eqns (2) and (3), respectively, until

S_f(t) \ge a + b\,\exp(c \cdot S_c(t)) \quad (4)

where a, b, and c are optimized to maximize the predictive accuracy of the model and where c < 0.
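The SW accumulation in Eqn (1) can be transcribed directly. A sketch with synthetic input; the start date t0 and threshold F* below are illustrative, not the fitted values (which are in Table S2):

```python
import math

def sw_spring_onset(daily_tair, t0, f_star):
    """Spring warming (SW) model, Eqn (1): accumulate the Sarvas (1974)
    logistic forcing from day index t0; return the first day index at
    which S_f(t) >= F*, or None if the threshold is never reached."""
    s_f = 0.0
    for t, tair in enumerate(daily_tair):
        if t < t0:
            continue
        s_f += max(28.4 / (1.0 + math.exp(3.4 - 0.185 * tair)), 0.0)
        if s_f >= f_star:
            return t
    return None
```

At a constant 15 °C the daily forcing increment is about 9.9 units, so later start dates (larger t0) delay the predicted onset one-for-one.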
In the PAR model, chilling and forcing occur simultaneously, where forcing accumulation follows Eqn (1) and chilling accumulates according to a triangular function:

S_c(t) = \sum_{t=t_0}^{t_{pheno}} \begin{cases} 0 & T_{air} \ge 10.4 \text{ or } T_{air} \le -3.4 \\ \dfrac{x(t) + 3.4}{T_{opt} + 3.4} & -3.4 < T_{air} \le T_{opt} \\ \dfrac{x(t) - 10.4}{T_{opt} - 10.4} & T_{opt} < T_{air} < 10.4 \end{cases} \quad (5)

where T_{opt} is the optimum chilling temperature. Leaf emergence is predicted to occur when

S_f(t) \ge a\,\exp(b \cdot S_c(t)) \quad (6)

where a and b are optimized to minimize model errors, and b < 0.

Each of these functional forms has been widely used with site-specific in situ phenology observations (Jeong et al., 2012; Migliavacca et al., 2012) and satellite remote sensing observations (Fisher et al., 2007; Yang et al., 2012). Here, we estimate spring onset models using the basic formulations described above and two simple variants: (i) where accumulation of chilling or forcing is based on photoperiod rather than a fixed date (i.e., substituting p_0 for t_0) and (ii) where the base temperatures and chilling and forcing requirements are estimated as functions of mean annual temperature.

Model calibration and validation

To estimate coefficients for each of the 13 models evaluated in the analysis, we used species-specific observations of the timing of leaf emergence derived from USA-NPN observations of temperate deciduous forest species located within our study region.

Fig. 2 (a–e) Forest type composition across the study region according to FIA Phase 2 plot-level basal area measurements. Plot-level measurements were spatially averaged within each 25 km hexagon and (f) dominant forest type within each 25 km hexagon.
USA-NPN observations were collected between 2009 and 2013 and included 3 years with relatively normal spring (defined here as March–May) temperatures relative to 20th century climatological means (2009, 2011, and 2013), and 2 years with anomalously warm spring temperatures (2010 and 2012; see Friedl et al., 2014). To evaluate the robustness of the estimated models for predicting phenology under projected future climate conditions, we calibrated each model for each species using observations from 2009, 2011, and 2013 (i.e., relatively average years) and then evaluated their performance using observations from 2010 and 2012 (i.e., years with climatologically warm springs).

Following Chuine et al. (1998), we optimized coefficient values for each model using Monte Carlo techniques (Metropolis et al., 1953). Once the best parameter set for each model–species pair was identified, we also estimated 95% confidence intervals for each parameter using a chi-square test (Franks et al., 1999; Migliavacca et al., 2012). To select the best model for each species, we used the small-sample corrected Akaike information criterion (AICc) (Burnham & Anderson, 2002) based on the residual sum of squared errors for all observations. The same procedure was applied to parameterize and select an optimal 'species-ignorant' model using PhenoCam observations.

In addition to model calibration and testing using USA-NPN data, we performed two complementary sets of model evaluations. First, we tested the USA-NPN species-specific models using long-term in situ observations of phenology collected at the Hubbard Brook Experimental Forest in New Hampshire and the Harvard Forest Long Term Ecological Research site in Massachusetts, focusing on the ability of the models to capture interannual variability in the timing of spring leaf out at each site (see Richardson et al., 2006 for more details on each dataset).
Second, we assessed the ability of the species-specific and species-ignorant models, estimated from USA-NPN and PhenoCam data, respectively, to predict annual spring phenology across the study region between 2000 and 2012 using daily mean temperature data from Daymet. We then calculated the root-mean-square error (RMSE), mean bias error (MBE; here, bias is measured as observed minus predicted), and Pearson's correlation coefficient between the predicted spring onset dates from each of the models and observations of spring onset dates from MODIS.

We characterized regional model performance in two ways. First, we evaluated each species-specific model within each forest type and across the entire study region. Second, we generated predictions at each Daymet grid cell using a weighted average of species-specific model predictions based on FIA forest type composition (Fig. 2). Specifically, in each forest type where sufficient USA-NPN data were available, we used predictions for the most common species (maple–beech–birch – red maple; aspen–birch – quaking aspen; oak-hickory – white oak), while in the other forest types, we used predictions for species with a comparable native range (i.e., for oak-gum, we used a model tuned to yellow poplar; for elm–ash–cottonwood, we used a model tuned to red maple).

Results

Model calibration and validation

Table 2 provides a summary of the best-fit USA-NPN species-specific and PhenoCam models and their relative performance. Across species, AICc values suggest that SW models tend to be better supported by the data than more complex chilling models. In particular, SW models were selected for seven of ten species, two of which allowed the critical forcing requirement (F*) to vary as a function of mean annual temperature. Among chilling models, ALT models were selected three times, while the PAR model was selected once.
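The AICc used for this model selection is a standard quantity for least-squares fits; a minimal sketch, computed from the residual sum of squares (RSS), the number of observations n, and the number of fitted parameters k:

```python
import math

def aicc(rss, n, k):
    """Small-sample corrected AIC (Burnham & Anderson, 2002) for a
    least-squares fit: AIC = n*ln(RSS/n) + 2k, plus the correction
    term 2k(k+1)/(n - k - 1)."""
    aic = n * math.log(rss / n) + 2 * k
    return aic + (2.0 * k * (k + 1)) / (n - k - 1)
```

For a fixed RSS, the correction increasingly penalizes extra parameters as n shrinks, which is why the simpler SW forms can be preferred over chilling models even when their residuals are similar.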
Relative to USA-NPN or PhenoCam observations, models generally predicted spring onset within 8–10 days, with relatively low bias and Pearson's correlation coefficients between 0.7 and 0.8.

When the models were assessed using ground observations from Harvard Forest and Hubbard Brook, performance was generally best for species that were dominant at each site. At Harvard Forest, the red maple and northern red oak models (the two most common species at Harvard Forest) predicted spring onset within approximately 5 days of observations (on average), while model errors for less common species such as black cherry and American beech were much larger. At Hubbard Brook, where American beech is more common, the model tuned to NPN data for this species showed better agreement with observations than the same model at Harvard Forest. The sugar maple model, which includes a mean annual temperature control on F*, showed high predictive accuracy at both sites.

Model parameters varied across species in terms of F* and the optimal start date for photoperiod and chilling/forcing accumulation (t_0 or p_0; see Table S2). For example, models for species that tend to be more common in southern parts of the eastern deciduous forest (e.g., yellow poplar and white oak) tended to have earlier start dates or lower photoperiod thresholds and higher F*, while models for more northern species (e.g., American beech, paper birch, and quaking aspen) tended to have later start dates or photoperiod thresholds and lower F* requirements. Photoperiod control was also important in six of ten species models, and most of the species-specific models and the PhenoCam model do not accumulate chilling/forcing until mid-March (~DOY 75).

Regional model evaluation using Daymet and MODIS

Following model calibration and local validation against in situ measurements, we used Daymet data to
predict spring onset dates for each of the USA-NPN models and the PhenoCam model at 1 km spatial resolution over the study region and then compared these predictions against spring onset dates from MODIS for the period 2001–2012. In addition to evaluating model performance across all years and all Daymet grid cells (i.e., 'Total' in Figs 3 and 4), we assessed how model performance was affected by forest composition, as reflected by the five dominant forest types in the study region (Fig. 2). The resulting model RMSEs and MBEs are shown in Figs 3 and 4, respectively, and boxplots of Pearson's correlation coefficients and model slopes are provided in Figs S1 and S2.

For the entire study region, models predicted spring onset with a median RMSE of 5–10 days (Fig. 3f). The northern red oak (QURU) and sugar maple (ACSA) models provided the highest overall accuracy, while the black walnut (JUNI) and yellow poplar (LITU) models were least accurate on average. The range in mean bias among all models was ±10 days, while Pearson's correlation coefficients averaged 0.7–0.8.

Closer inspection of model results reveals substantial variation in model performance that depends on forest type (Figs 3a–e and 4a–e). For example, the QURU model consistently predicts spring onset within about 5 days across all forest types. In contrast, other native oak-hickory species models show variable accuracy in predicting oak-hickory forest phenology relative to other forest types. For example, the LITU model predicts spring onset roughly 10 days too early in oak-hickory forests, while the JUNI model shows relatively low RMSE and MBE in oak-hickory forests, but much higher errors in other northern forest types such as aspen–birch and maple–beech–birch. Finally, while the white oak (QUAL) model has a median RMSE of less than 7 days across all forest types, its predictions are negatively correlated with observed spring onset in more than half of the oak-gum forest region.
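The evaluation statistics reported throughout this section (RMSE, MBE defined as observed minus predicted, and Pearson's correlation) can be computed from paired onset dates as follows; the input arrays here are illustrative:

```python
import numpy as np

def evaluation_stats(observed, predicted):
    """RMSE (days), MBE (observed minus predicted, as in the paper),
    and Pearson's correlation for paired spring onset dates."""
    obs = np.asarray(observed, dtype=float)
    pred = np.asarray(predicted, dtype=float)
    rmse = float(np.sqrt(np.mean((obs - pred) ** 2)))
    mbe = float(np.mean(obs - pred))
    cor = float(np.corrcoef(obs, pred)[0, 1])
    return rmse, mbe, cor
```

With the observed-minus-predicted sign convention, a negative MBE means the model predicts onset later than observed, and a positive MBE means it predicts onset too early.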
Among aspen–birch species models (red-colored boxplots), the quaking aspen (POTR) model performed consistently well across all forest types, with median RMSEs of less than 7 days, generally low bias, and Pearson's correlation coefficients greater than 0.6. The paper birch (BEPA) model, on the other hand, is more accurate for aspen–birch and maple–beech–birch forests than for more southern elm–ash–cottonwood, oak-hickory, and oak-gum forests. Compared with other species models, both the BEPA and POTR models have relatively low F*. The BEPA model also has a late starting photoperiod (i.e., accumulation of thermal forcing starts later than for other models) and, as a result, significantly underestimates interannual variation in spring onset across southern oak-hickory and oak-gum forests.

The maple–beech–birch species models are also quite variable in terms of their model structure and parameterization across species. The ACSA model, a SW model for which the period over which F* accumulates is a function of mean annual temperature, performs well across all forest types except oak-gum. Similar to BEPA, the American beech (FAGR) model has a relatively late photoperiod trigger, but also requires a relatively large F* to trigger leaf emergence. As a result, this model's performance is relatively weak in the southern portions of the study region. The red maple (ACRU) and black cherry (PRSE) models, which depend on both chilling and forcing temperatures, were relatively robust across all forest types. However, both models were consistently biased early relative to MODIS spring onset dates.

Evaluation of upscaled models

In the final part of our analysis, we evaluated two approaches to parameterizing regional-scale phenological models. First, we computed predictions for the timing of leaf emergence at each Daymet grid cell based on an area-weighted estimate from each of the species-specific models, where the area weights were generated based on FIA basal area data (Fig. 2). Second, we used a single model based on PhenoCam data. Results from these analyses showed that both the area-weighted and PhenoCam models performed well across the study region (Fig. 5k, l). Both models predicted spring onset with a median RMSE of approximately 5 days and with relatively low bias (Figs 4f and 5f). The PhenoCam model was particularly accurate across oak-hickory and maple–beech–birch dominated forests, but was less accurate and significantly biased (late) in oak-gum forests. Figs 5 and 6 show maps illustrating spatial patterns in model RMSE and MBE for the species-specific, PhenoCam, and forest type-weighted models. As these figures show, the PhenoCam and forest type-weighted models accurately predict spring onset in the New England and upper Great Lakes regions, but are less accurate and biased (early) across northern Michigan and the Allegheny Mountains (the eastern-central portion of the study region).

Discussion

There is broad consensus that future warming is likely to influence the growing season of temperate deciduous forests and that these changes have substantial potential to impact ecosystem function and carbon budgets at local to regional scales (e.g., Keenan et al., 2014). However, recent literature has demonstrated that temperate deciduous forest species have shown a wide range of phenological responses to climate changes during the last half-century (Badeck et al., 2004; Polgar & Primack, 2011; Richardson et al., 2013).
Hence, there is considerable uncertainty regarding how, specifically, future changes in temperature will affect key phenophase events such as the timing of leaf emergence and senescence in this biome. Despite evidence showing that species-specific differences in phenological sensitivity to environmental factors can be substantial (e.g., Vitasse et al., 2009; Jeong et al., 2012), current LSMs, which are designed to simulate biological processes that affect long-term energy, water, and carbon budgets, do not account for differences in phenological behavior among species. Indeed, Richardson et al. (2012) have shown that bias in the representation of phenology in these models leads to substantial errors in modeled carbon fluxes. To address this problem, we explored how geographic patterns in species composition influence the timing of spring onset in deciduous forests and assessed methods for parameterizing geographic variation in models of springtime phenology. To do this, we used a combination of in situ observations, species composition maps, gridded climate data, and moderate spatial resolution remote sensing data. Our results reveal differences in the thermal forcing required to initiate leaf emergence among species and, consequently, variable accuracy in species-specific models across forest types. At the same time, some species-specific models – notably, the ACSA and QURU models – and the species-independent PhenoCam model showed relatively good accuracy predicting the timing of leaf emergence in deciduous forests across our study region. These latter results suggest that the phenology of both sugar maple and red oak is relatively robust and representative of broader forest community phenology over our study region and that PhenoCam data successfully integrate the phenological behavior of multiple species.
More generally, regional weaknesses in species-specific models in combination with the success of both 'upscaled' modeling approaches (i.e., based on PhenoCam data and weighted species-specific models) demonstrate that realistic representation of forest community composition is important in models that simulate phenological events over large scales. Our results also illuminate several important functional relationships between air temperatures and the timing of spring onset in eastern deciduous forests of the United States. First, we find that air temperatures in early to late spring (March through May) are sufficient to explain the majority of interannual variability in the timing of spring onset.

[Fig. 3: Boxplots of root-mean-square error (days) between MODIS estimated spring onset and model predicted spring onset across (a–e) pixels dominated by each forest type, according to 25 km FIA hexagon maps (see Fig. 2) or (f) the entire study region.]

Previous research using both ground and satellite remotely sensed data has suggested that models incorporating chilling tend to outperform models based solely on forcing requirements (Chiang & Brown, 2007; Kaduk & Los, 2011; Jeong et al., 2012). However, these studies have consistently used arbitrary start dates with no specific biological basis (e.g., January 1) to accumulate forcing. Using a model that allows the start date to vary as a function of photoperiod, Migliavacca et al. (2012) found that a relatively simple two-parameter spring warming model outperformed more complex chilling models at Harvard Forest.
Our results are consistent with those of Migliavacca et al., as well as other previous experimental and modeling studies (e.g., Linkosalo et al., 2000; Basler & Körner, 2012), and support the conclusion that photoperiod is an important factor that influences when and how thermal forcing triggers leaf emergence. More specifically, results from species-specific models tuned to USA-NPN data largely support the spring warming approach, with start dates for thermal accumulation centered around the spring equinox (~DOY 80, which corresponds to a 12-hour day length everywhere in the study region), but significant differences in forcing requirements among species and forest types. This result implies that winter chilling accumulation is of secondary importance for most tree species in our study area. Indeed, our model results for 2010 and 2012 suggest that chilling requirements over the study region are being easily met, even in years when winter and early spring temperatures are exceptionally warm. However, it is important to note that as the magnitude of chilling requirements is currently unknown, it is unclear how much longer they will continue to be satisfied under future warming. A second key result from this work is that relatively coarse spatial resolution information related to forest community composition (in this case, 25 km FIA data) explains meaningful geographic variation in the timing of, and ecological controls on, springtime phenology. At the same time, it is important to note that our analysis includes several important assumptions, and the phenological datasets include nontrivial levels of uncertainty. For example, we assume that all 500 m MODIS pixels within each 25 km FIA hexagon have roughly uniform species composition. In some regions, this assumption is probably quite robust. In others, especially those with complex topography and recent disturbance, this assumption is weaker (e.g., Hwang et al., 2011).
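A minimal sketch of the photoperiod-triggered spring warming model discussed here: thermal forcing (degree-days above a base temperature) accumulates starting near the spring equinox (~DOY 80), and leaf emergence is predicted on the first day the accumulated forcing reaches the species-specific threshold F*. The base temperature and F* values below are placeholders for illustration, not the fitted parameters from the paper.

```python
def spring_warming_onset(daily_tmean, start_doy=80, t_base=5.0, f_star=200.0):
    """Two-parameter spring-warming model sketch.

    daily_tmean: mean daily temperature (deg C), a 1-based-by-DOY list
                 (index 0 holds day 1).
    Forcing (growing degree-days above t_base) accumulates from start_doy
    (~spring equinox, the photoperiod trigger discussed in the text);
    onset is the first day the running sum reaches f_star.
    Returns the predicted onset DOY, or None if f_star is never reached.
    """
    forcing = 0.0
    for doy in range(start_doy, len(daily_tmean) + 1):
        forcing += max(daily_tmean[doy - 1] - t_base, 0.0)
        if forcing >= f_star:
            return doy
    return None

# With a constant 15 C spring, forcing accumulates 10 degree-days per day
# from DOY 80, so the 200 degree-day threshold is met on DOY 99.
print(spring_warming_onset([15.0] * 365))  # → 99
```

Chilling-based models differ only in making `f_star` (or the start date) depend on accumulated cold-temperature exposure, which is why the two model families converge when chilling requirements are easily met, as reported here for 2010 and 2012.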
Indeed, this may partly explain why our model predictions do not agree as well with MODIS observations in parts of Appalachia (but see next paragraph).

[Fig. 4: Boxplots of mean bias error (days) between MODIS estimated spring onset and model predicted spring onset across (a–e) pixels dominated by each forest type, according to 25 km FIA hexagon maps (see Fig. 2) or (f) the entire study region.]

Spatially explicit maps of tree species and forest types at higher spatial resolution (i.e., 500 m–1 km) may provide a basis for reducing this source of uncertainty (e.g., Zhu & Evans, 1994; Wilson et al., 2012). Whatever the solution, a key conclusion from the results we report here is that forest composition is important to modeling large-scale phenology (i.e., plant functional types are not sufficient). Hence, new datasets and modeling approaches, be they explicit using species-weighted models or implicit using data from PhenoCams, are required to realistically model phenology at the spatial resolutions required by large-scale ecosystem models and LSMs. A third key finding from our analysis is that predictions for spring onset from the model calibrated to PhenoCam data had errors that were approximately the same as errors from USA-NPN models tuned to individual species in the northern forest types, even though only 13 sites and 80 site-years were available to calibrate the model. This suggests that PhenoCam data provide a good basis for characterizing the seasonality of species mixtures. In southern areas of the study region dominated by oak-gum, however, PhenoCam model performance was poor and predictions showed significant bias toward later onset dates (Fig. 6k). We believe that this bias is at least partly attributable to the geographic distribution of PhenoCam sites.
Specifically, none of the PhenoCam sites were located in southern forests, which probably caused the model to be overfit to northern species and sites. As more sites are added to the PhenoCam network in coming years, it will be important to re-estimate this model using a more geographically and ecologically representative sample. Moreover, this result demonstrates that even though previous studies have shown that LSMs poorly represent spring phenology, the performance of these models is not necessarily a by-product of using generalized plant functional types. Rather, the strong performance of the species-neutral PhenoCam model suggests that relatively simple phenological models can be used to predict the timing of leaf emergence over large geographic ranges if they are properly parameterized, and that species-level parameterizations are not necessarily needed to improve existing schemes.

[Fig. 5: Maps of root-mean-square error (days) between MODIS estimated spring onset and (a–j) each species-specific model, (k) PhenoCam model, and (l) forest type-weighted model predicted spring onset across the study region.]

Finally, despite their relatively coarse spatial resolution (500 m), results from this analysis demonstrate that MODIS data have substantial utility for estimating and verifying results from upscaled phenology models. Moving forward, there is substantial basis for optimism that new methods and datasets will soon be available that resolve phenology over large scales at spatial resolutions able to capture landscape-scale variation that is not currently resolved from instruments such as MODIS. Specifically, recent work by Elmore et al. (2012) and Melaas et al.
(2013b) has demonstrated that time series of Landsat TM/ETM+ images can successfully estimate spring and autumn phenology at 30 m resolution and therefore have the potential to substantially mitigate this issue.

We used a combination of surface meteorological data, species composition maps, remote sensing, and ground-based observations of phenology to develop and test models that predict the timing of spring leaf emergence for dominant tree species across five different deciduous broadleaf forest types in the eastern United States. Our results identify significant differences in the cumulative forcing requirements and photoperiod cues among individual species, which lead to variation in the accuracy of spring onset model predictions across forest types. Most terrestrial ecosystem models, which rely on accurate predictions of phenophase transitions to simulate seasonal variation in important ecological processes, do not account for species-specific phenological dynamics. To address this shortcoming, we evaluated two strategies for upscaling species-level dynamics to spatial scales commonly used by ecosystem models, which encompass significant variation in species composition and phenological dynamics. The first strategy used information related to the geographic distribution of dominant tree species to provide upscaled predictions for the timing of spring onset based on a linear weighting of species-specific models.

[Fig. 6: Maps of mean bias error (days) between MODIS estimated spring onset and (a–j) each species-specific model, (k) PhenoCam model, and (l) forest type-weighted model predicted spring onset across the study region.]

The second strategy used time series observations of forest canopies from repeat digital photography that include mixtures of tree species to calibrate a single model for the entire study region.
Both strategies yielded robust predictions of spring onset across a majority of the study region and therefore provide a foundation for improving the representation of spring phenology in ecosystem models. More generally, the issues and strategies we examined in this paper provide the basis for more reliable forecasts of how the phenology of temperate forests will be affected by climate change, and by extension, how changes in climate will impact temperate forests in the coming decades.

Acknowledgements

The authors thank two anonymous reviewers for their helpful feedback and suggestions. The authors also gratefully acknowledge phenology data provided by John O'Keefe at the Harvard Forest, Amy Bailey at the Hubbard Brook Experimental Forest, and citizen scientists from the USA-NPN. This work was partially supported by NASA grant numbers NNX11AE75G and NNX14AJ35G. ADR acknowledges support from the National Science Foundation through the LTER (DEB-1237491) and Macrosystems Biology (EF-1065029) programs, and the Department of Energy through the Regional and Global Climate Modeling program (DE-SC0016011). Development of the PhenoCam network has been supported by the Northeastern States Research Cooperative, NSF's Macrosystems Biology Program (award EF-1065029), the US National Park Service Inventory and Monitoring Program, and the USA National Phenology Network (grant number G10AP00129 from the United States Geological Survey). The authors declare no conflict of interest.

References

Badeck FW, Bondeau A, Bottcher K, Doktor D, Lucht W, Schaber J, Sitch S (2004) Responses of spring phenology to climate change. New Phytologist, 162, 295–309.
Basler D, Körner C (2012) Photoperiod sensitivity of bud burst in 14 temperate forest tree species. Agricultural and Forest Meteorology, 165, 73–81.
Betancourt JL, Schwartz MD, Breshears DD et al. (2007) Evolving plans for the USA National Phenology Network. Eos, Transactions American Geophysical Union, 88, 211.
Burnham KP, Anderson DR (2002) Model Selection and Multimodel Inference: A Practical Information-Theoretic Approach (2nd edn). Springer, New York.
Chiang JM, Brown KJ (2007) Improving the budburst phenology subroutine in the forest carbon model PnET. Ecological Modelling, 205, 515–526.
Chuine I, Cour P, Rousseau DD (1998) Fitting models predicting dates of flowering of temperate-zone trees using simulated annealing. Plant Cell and Environment, 21, 455–466.
Chuine I, Morin X, Bugmann H (2010) Warming, photoperiods, and tree phenology. Science, 329, 277–278.
Denny EG, Gerst KL, Miller-Rushing AJ et al. (2014) Standardized phenology monitoring methods to track plant and animal activity for science and resource management applications. International Journal of Biometeorology, 58, 591–601.
Elmore AJ, Guinn SM, Minsley BJ, Richardson AD (2012) Landscape controls on the timing of spring, autumn, and growing season length in mid-Atlantic forests. Global Change Biology, 18, 656–674.
Fisher JI, Richardson AD, Mustard JF (2007) Phenology model from surface meteorology does not capture satellite-based greenup estimations. Global Change Biology, 13, 707–721.
Fitzjarrald DR, Acevedo OC, Moore KE (2001) Climatic consequences of leaf presence in the eastern United States. Journal of Climate, 14, 598–614.
Franks SW, Beven KJ, Gash JHC (1999) Multi-objective conditioning of a simple SVAT model. Hydrology and Earth System Sciences, 3, 477–489.
Friedl MA, Gray JM, Melaas EK et al. (2014) A tale of two springs: using recent climate anomalies to characterize the sensitivity of temperate forest phenology to climate change. Environmental Research Letters, 9, 054006.
Fry JA, Xian G, Jin SM et al. (2011) National land cover database for the conterminous United States. Photogrammetric Engineering and Remote Sensing, 77, 859–864.
Ganguly S, Friedl MA, Tan B, Zhang XY, Verma M (2010) Land surface phenology from MODIS: characterization of the Collection 5 global land cover dynamics product. Remote Sensing of Environment, 114, 1805–1816.
Gerst KL, Kellermann JL, Enquist CA, Rosemartin AH, Denny EG (2015) Estimating the onset of spring from a complex phenology database: trade-offs across geographic scales. International Journal of Biometeorology, 1–10.
Gu L, Hanson PJ, Post WM et al. (2008) The 2007 eastern US spring freezes: increased cold damage in a warming world? BioScience, 58, 253–262.
Homer C, Dewitz J, Fry J et al. (2007) Completion of the 2001 national land cover database for the conterminous United States. Photogrammetric Engineering and Remote Sensing, 73, 337–341.
Huete A, Didan K, Miura T, Rodriguez EP, Gao X, Ferreira LG (2002) Overview of the radiometric and biophysical performance of the MODIS vegetation indices. Remote Sensing of Environment, 83, 195–213.
Hufkens K, Friedl M, Sonnentag O, Braswell BH, Milliman T, Richardson AD (2012) Linking near-surface and satellite remote sensing measurements of deciduous broadleaf forest phenology. Remote Sensing of Environment, 117, 307–321.
Hwang T, Song CH, Vose JM, Band LE (2011) Topography-mediated controls on local vegetation phenology estimated from MODIS vegetation index. Landscape Ecology, 26, 541–556.
Jenkins JP, Braswell BH, Frolking SE, Aber JD (2002) Detecting and predicting spatial and interannual patterns of temperate forest springtime phenology in the eastern US. Geophysical Research Letters, 29, 54-1–54-4.
Jeong SJ, Medvigy D, Shevliakova E, Malyshev S (2012) Uncertainties in terrestrial carbon budgets related to spring phenology. Journal of Geophysical Research-Biogeosciences, 117.G1, 1–17.
Kaduk JD, Los SO (2011) Predicting the time of green up in temperate and boreal biomes. Climatic Change, 107, 277–304.
Keenan TF, Gray J, Friedl MA, Toomey M, Bohrer G, Hollinger DY, Richardson AD (2014) Net carbon uptake has increased through warming-induced changes in temperate forest phenology. Nature Climate Change, 4, 598–604.
Klosterman ST, Hufkens K, Gray JM et al. (2014) Evaluating remote sensing of deciduous forest phenology at multiple spatial scales using PhenoCam imagery. Biogeosciences, 11, 1–16.
Körner C, Basler D (2010) Phenology under global warming. Science, 327, 1461–1462.
Lechowicz MJ (1984) Why do temperate deciduous trees leaf out at different times – adaptation and ecology of forest communities. American Naturalist, 124, 821–842.
Linkosalo T, Carter TR, Häkkinen R, Hari P (2000) Predicting spring phenology and frost damage risk of Betula spp. under climatic warming: a comparison of two models. Tree Physiology, 20, 1175–1182.
Linkosalo T, Lappalainen HK, Hari P (2008) A comparison of phenological models of leaf bud burst and flowering of boreal trees using independent observations. Tree Physiology, 28, 1873–1882.
Melaas EK, Friedl MA, Zhu Z (2013a) Detecting interannual variation in deciduous broadleaf forest phenology using Landsat TM/ETM+ data. Remote Sensing of Environment, 132, 176–185.
Melaas EK, Richardson AD, Friedl MA et al. (2013b) Using FLUXNET data to improve models of springtime vegetation activity onset in forest ecosystems. Agricultural and Forest Meteorology, 171, 46–56.
Metropolis N, Rosenbluth AW, Rosenbluth MN, Teller AH (1953) Equation of state calculations by fast computing machines. The Journal of Chemical Physics, 21, 1087–1093.
Migliavacca M, Sonnentag O, Keenan TF, Cescatti A, O'Keefe J, Richardson AD (2012) On the uncertainty of phenological responses to climate change, and implications for a terrestrial biosphere model. Biogeosciences, 9, 2063–2083.
Polgar CA, Primack RB (2011) Leaf-out phenology of temperate woody plants: from trees to ecosystems. New Phytologist, 191, 926–941.
Raulier F, Bernier PY (2000) Predicting the date of leaf emergence for sugar maple across its native range. Canadian Journal of Forest Research-Revue Canadienne de Recherche Forestière, 30, 1429–1435.
Richardson AD, Bailey AS, Denny EG, Martin CW, O'Keefe J (2006) Phenology of a northern hardwood forest. Global Change Biology, 12, 1174–1188.
Richardson AD, Jenkins JP, Braswell BH, Hollinger DY, Ollinger SY, Smith ML (2007) Use of digital webcam images to track spring green-up in a deciduous broadleaf forest. Oecologia, 152, 323–334.
Richardson AD, Black TA, Ciais P et al. (2010) Influence of spring and autumn phenological transitions on forest ecosystem productivity. Philosophical Transactions of the Royal Society B-Biological Sciences, 365, 3227–3246.
Richardson AD, Anderson RS, Arain MA et al. (2012) Terrestrial biosphere models need better representation of vegetation phenology: results from the North American Carbon Program Site Synthesis. Global Change Biology, 18, 566–584.
Richardson AD, Keenan TF, Migliavacca M, Ryu Y, Sonnentag O, Toomey M (2013) Climate change, phenology, and phenological control of vegetation feedbacks to the climate system. Agricultural and Forest Meteorology, 169, 156–173.
Sarvas R (1974) Investigations on the annual cycle of development of forest trees. Autumn dormancy and winter dormancy. Communicationes Instituti Forestalis Fenniae, 84, 1–101.
Schaaf CB, Gao F, Strahler AH et al. (2002) First operational BRDF, albedo nadir reflectance products from MODIS. Remote Sensing of Environment, 83, 135–148.
Sonnentag O, Hufkens K, Teshera-Sterne C et al. (2012) Digital repeat photography for phenological research in forest ecosystems. Agricultural and Forest Meteorology, 152, 159–177.
Thornton PE, Thornton MM, Mayer BW et al. (2014) Daymet: Daily Surface Weather Data on a 1-km Grid for North America, version 2.
Oak Ridge National Laboratory (ORNL), Oak Ridge, TN.
Vitasse Y, Delzon S, Dufrêne E, Pontailler JY, Louvet JM, Kremer A, Michalet R (2009) Leaf phenology sensitivity to temperature in European trees: do within-species populations exhibit similar responses? Agricultural and Forest Meteorology, 149, 735–744.
Wang J, Ives NE, Lechowicz MJ (1992) The relation of foliar phenology to xylem embolism in trees. Functional Ecology, 6, 469–475.
Williams CN, Menne MJ, Vose S, Easterling DR (2006) United States Historical Climatology Network Daily Temperature, Precipitation, and Snow Data. ORNL/CDIAC-118, NDP-070. Carbon Dioxide Information Analysis Center, Oak Ridge National Laboratory, Oak Ridge, TN.
Wilson BT, Lister AJ, Riemann RI (2012) A nearest-neighbor imputation approach to mapping tree species over large areas using forest inventory plots and moderate resolution raster data. Forest Ecology and Management, 271, 182–198.
Wilson BT, Lister AJ, Riemann RI, Griffith DM (2013) Live Tree Species Basal Area of the Contiguous United States (2000–2009). USDA Forest Service, Northern Research Station, Newtown Square, PA.
Yang X, Mustard JF, Tang JW, Xu H (2012) Regional-scale phenology modeling based on meteorological records and remote sensing observations. Journal of Geophysical Research-Biogeosciences, 117.G3, 1–18.
Zhu ZI, Evans DL (1994) United States forest types and predicted percent forest cover from AVHRR data. Photogrammetric Engineering and Remote Sensing, 60, 525–531.

Supporting Information

Additional Supporting Information may be found in the online version of this article:
Figure S1. Boxplots of Pearson correlation coefficients between MODIS estimated spring onset and model predicted spring onset.
Figure S2. Boxplots of model slope.
Table S1. Summary of model parameters.
Table S2. Optimal parameters for each species based on out-of-sample testing results.
Table S3.
ΔAICc (difference between a particular model's AICc and the lowest AICc across all candidate models for a species) values for models fit to species-specific NPN or PhenoCam data.
Data S1. Description of phenology models.


ORIGINAL RESEARCH

Licochalcone A in Combination with Salicylic Acid as Fluid Based and Hydroxy-Complex 10% Cream for the Treatment of Mild Acne: A Multicenter Prospective Trial

This article was published in the following Dove Press journal: Clinical, Cosmetic and Investigational Dermatology

Federica Dall'Oglio (1), Gabriella Fabbrocini (2), Aurora Tedeschi (1), Marianna Donnarumma (2), Paolo Chiodini (3), Giuseppe Micali (1)
(1) Dermatology Clinic, University of Catania, Catania, Italy; (2) Section of Dermatology, Department of Clinical Medicine and Surgery, University Federico II of Naples, Naples, Italy; (3) Medical Statistics Unit, University of Campania "Luigi Vanvitelli", Naples, Italy

Purpose: Topical cosmetic agents, if correctly prescribed and used, may improve outcomes in acne therapy. The aim of this study was to assess the efficacy and tolerability of a new daily cosmetic regimen in the treatment of mild facial acne.
Patients and methods: A multicenter, prospective, observational, clinical study was conducted on 91 adult patients with mild acne. Subjects were instructed to apply a fluid containing Licochalcone A/Salicylic acid/L-Carnitine in the morning and a cream with Licochalcone A/Hydroxy-Complex 10% at bedtime for 8 weeks. The efficacy was clinically evaluated by Global Acne Grading System (GAGS) score and by comedone/papule lesion counts, and by instrumental assessment (Sebutape™ and Reveal Photo Imager/VISIA-CR™ imaging) at baseline, at 4 and 8 weeks.
Results: At 4 weeks a statistically significant reduction from baseline of GAGS was observed.
In addition, the mean total count of comedones and papules was significantly reduced by 41% and 45%, respectively, from baseline, along with a significant reduction of mean sebum of 47%. At 8 weeks, a further statistically significant reduction from baseline of GAGS and of the total count of comedones and papules (64% and 71%, respectively), along with an additional sebum reduction of about 52%, was also recorded.
Conclusion: Our results suggest that the daily regimen based on Licochalcone A with Salicylic acid/L-Carnitine as fluid or with Hydroxy-Complex 10% as cream represents an interesting cosmetic approach for treating mild acne.
Keywords: mild acne, topical cosmetic, licochalcone A, salicylic acid, hydroxy-complex

Correspondence: Giuseppe Micali, Dermatology Clinic, University of Catania, A.O.U. Policlinico-Vittorio Emanuele, via Santa Sofia 78, Catania 95123, Italy. Tel +39 095321705. Fax +39 0953782425. Email cldermct@gmail.com

Clinical, Cosmetic and Investigational Dermatology 2019:12 961–967. http://doi.org/10.2147/CCID.S206935. © 2019 Dall'Oglio et al.

Introduction

Acne is a common, multifactorial, inflammatory skin disease affecting the pilosebaceous unit, resulting from altered keratinization, androgen-induced increased sebum production, inflammation involving both innate and acquired immunity, and microbial colonization by Propionibacterium (P.) acnes.1–3 Genetic predisposition, anxiety and depression, hyperglycemic diet, and sunlight or artificial light are additional factors known to be involved in the pathogenesis of acne.4 Acne is clinically characterized by different clinical aspects and severity, spanning from mild forms, mainly characterized by retentional lesions (RTL) (open and closed comedones) with or without few inflammatory lesions (IFL) (papules/
pustules), to moderate or severe forms, with multiple IFL, that seriously impact on patients' quality of life.5,6 Treatment varies according to disease severity.
For mild acne, topical pharmacological treatments are generally recommended and include the use of benzoyl peroxide (BP), retinoids, and antibiotics.5–10 The clinical efficacy of these agents is widely demonstrated, but it is well known that their use may cause some adverse effects, such as hypersensitivity reactions, erythema, and burning/stinging in the case of BP/retinoids, and limitations due to the possible development of resistant bacterial strains in the case of antibiotics.5–10 Recent evidence suggests that some cosmetic agents, if correctly prescribed, may improve the therapeutic outcome, with the advantage of minimizing side effects and enhancing the efficacy of prescription drugs.11 The aim of this study was to assess the efficacy and tolerability of a new cosmetic regimen, consisting in the daily use of an anti-inflammatory/corneolytic/sebum-controlling fluid in the morning and an anti-inflammatory/corneolytic cream at bedtime, in the treatment of mild facial acne, evaluated clinically and instrumentally over an 8-week study.

Materials and Methods

Study Design
This was a multicenter, prospective, observational clinical study.

Setting and Study Period
From October 2017 to May 2018, ninety-one adult patients (71F/20M) with mild acne were enrolled at the Dermatology Department of the University of Catania (Italy) and of University "Federico II" of Naples (Italy). Study duration was 8 weeks. The study was performed in accordance with the ethical principles originating from the Declaration of Helsinki 1996 and Good Clinical Practices. The protocol was approved by an internal review board. A written informed consent was obtained from each patient before study procedures were started, which included consent to use their images.
Inclusion/Exclusion Criteria
Inclusion criteria were: adult patients with mild facial acne (open and closed comedones and up to 10 papules), with a washout period of at least 2 weeks for any topical acne treatments, 1 month for oral antibiotics, and 2 months for hormonal therapy and/or oral isotretinoin. Exclusion criteria were: presence of pustules or nodules, trunk localization, severe underlying diseases, concurrent exposure to sunlight and/or artificial ultraviolet sources, and pregnancy and breastfeeding. No other topical products or drugs were allowed, except for a mild cleanser, make-up, and oil-free SPF 50+ sunscreens.

Methodology
Patients were instructed to apply a fluid containing Licochalcone A combined with Salicylic acid and L-Carnitine in the morning and a cream based on Licochalcone A and Hydroxy-Complex 10% at bedtime for 8 weeks. A mild, perfume-free cleansing gel principally based on Ampho-Tenside 6% was suggested for daily care. In order to reduce potential evaluator bias, all subjects were assessed by an investigator not directly involved in the study at baseline (T0), at 4 weeks (T1), and at 8 weeks (T2).

Clinical and Instrumental Evaluation Criteria
Clinical and instrumental evaluations were carried out at baseline, and at 4 and 8 weeks.
Acne severity was rated by clinical evaluation using: 1) the Global Acne Grading System (GAGS) score, a subjective assessment of acne severity proposed by Doshi et al in 1997 that divides the face into six areas (forehead, nose, chin, right cheek, left cheek, chest and upper back) and assigns a factor (from 1 to 3) to each area (2 = forehead; 1 = nose; 1 = chin; 2 = right cheek; 2 = left cheek; 3 = chest and upper back); the local score is calculated using the formula factor × grade (depending on the most severe lesion type: 0 = no lesions; 1 = comedones; 2 = papules; 3 = pustules; 4 = nodules); the global score is the sum of all local scores, with a score of 1–18 considered mild, 19–30 moderate, 31–38 severe, and ≥39 very severe; and 2) lesion count, a method based on a count of the total number of retentional and/or inflammatory lesions.12,13

Instrumental assessment included: 1) measurement of sebum by Sebutape™ strips (CuDerm Corp., Dallas, TX, USA) placed on the patient's forehead, nose, cheeks, and chin and subsequently checked against the black background of a score card; sebum spots are scored on a scale ranging from 1 to 5, where 1 indicates dry skin without sebum and 5 identifies very oily skin; and 2) facial imaging performed by high-tech facial photography characterized
by 15-megapixel resolution and flash cross-polarized light with the Reveal Photo Imager (Canfield Scientific Inc., Parsippany, NJ, USA) at the Naples site and the VISIA-CR™ imaging system (Canfield Scientific Inc., Fairfield, NJ, USA) at the Catania site.14–16 Additionally, product tolerability was evaluated by a self-administered questionnaire based on 5 parameters (erythema, scaling, dryness, stinging/burning, and itch) rated from 0 to 3 (0=none; 1=mild; 2=moderate; 3=severe).
Study Endpoints
The primary endpoint was clinical efficacy at weeks 4 and 8, evaluated by the reduction of GAGS score and lesion count; the secondary endpoint was the evaluation of tolerability and cosmetic acceptability at the end of the study.
Statistical Analyses
Patient characteristics were reported as absolute number and percentage for categorical variables and as mean and standard deviation (SD) for continuous variables. Longitudinal data were analyzed using a mixed-effect regression model, which accounted for the correlation between measurements carried out over time in the same subject. Time, age, gender, and center were included as covariates, and the center-by-time interaction was also tested. For the time covariate, Dunnett post-tests were used for multiple comparisons versus baseline. A two-tailed P value <0.05 was considered significant. All statistical analyses were performed using SAS version 9.4 (SAS Inc., Cary, NC).
Results
Ninety-one subjects (71F/20M; age range: 18–30 years; mean age 21.5±3.8 years; mean GAGS: 11.6±2.4) with mild acne were enrolled. Eighty-eight (70F/18M) completed the study; three subjects were lost to follow-up for personal reasons. At 4 weeks a statistically significant reduction from baseline of GAGS (mean from 11.6±2.4 to 10.3±3.0; p<0.001) was observed.
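To make the GAGS arithmetic described in the Methods concrete, a minimal sketch follows. The regional factors, lesion grades, and severity bands are those of the published scale as summarized above; the function and variable names are our own illustration, not from the paper.

```python
# Sketch of the Global Acne Grading System (GAGS) arithmetic.
# Regional factors and lesion grades follow Doshi et al. (1997);
# names are illustrative only.
FACTORS = {"forehead": 2, "nose": 1, "chin": 1,
           "right_cheek": 2, "left_cheek": 2, "chest_upper_back": 3}
GRADES = {"none": 0, "comedones": 1, "papules": 2, "pustules": 3, "nodules": 4}

def gags_score(worst_lesion_by_region):
    """Sum of (regional factor x grade of the most severe lesion in that region)."""
    return sum(FACTORS[region] * GRADES[lesion]
               for region, lesion in worst_lesion_by_region.items())

def severity(score):
    """Severity bands as summarized above; 0 (no lesions) is outside the bands."""
    if score <= 0:
        return "no lesions"
    if score <= 18:
        return "mild"
    if score <= 30:
        return "moderate"
    if score <= 38:
        return "severe"
    return "very severe"

# Example: comedones on most regions, papules on both cheeks -> mild acne.
patient = {"forehead": "comedones", "nose": "comedones", "chin": "comedones",
           "right_cheek": "papules", "left_cheek": "papules",
           "chest_upper_back": "none"}
print(gags_score(patient), severity(gags_score(patient)))  # -> 12 mild
```

This matches the cohort described here: a mean global score of 11.6 falls squarely in the 1–18 "mild" band.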
In addition, the mean total counts of retentional lesions (RTL) and inflammatory lesions (IFL) were significantly reduced by 41% and 45%, respectively, from baseline (comedones: mean from 42.3±14.3 to 24.8±12.7; p<0.001; papules: mean from 5.5±5.0 to 3.0±3.8; p<0.001), along with a significant reduction of mean sebum of about 47% (mean from 2.13±0.67 to 1.13±0.38; p<0.001) (Table 1, Figure 1). At 8 weeks a further statistically significant reduction from baseline of GAGS (8.2±2.9; p<0.001) was also recorded; similarly, the mean total counts of RTL and IFL were significantly reduced by 64% and 71%, respectively, from baseline (comedones: mean from 42.3±14.3 to 15.0±9.6; papules: mean from 5.5±5.0 to 1.6±3.0; p<0.001), along with a mean sebum reduction from baseline of about 52% (p<0.001) (Table 1, Figure 2).

Table 1. Results from clinical and instrumental evaluation (mean ± SD) of GAGS score, total lesion counts of comedones and papules, and Sebutape score from baseline to 8 weeks.

Measure | Center | Baseline | 4 weeks | 8 weeks | P value
GAGS | Catania | 11.2 ± 2.2 | 9.6 ± 3.4 | 7.7 ± 3.3 | <0.001
GAGS | Naples | 12.0 ± 2.6 | 10.9 ± 2.6 | 8.7 ± 2.4 | <0.001
GAGS | Overall | 11.6 ± 2.4 | 10.3 ± 3.0 | 8.2 ± 2.9 | <0.001
Comedones (total count) | Catania | 39.3 ± 11.9 | 20.8 ± 13.6 | 12.6 ± 10.9 | <0.001
Comedones (total count) | Naples | 45.1 ± 16.0 | 28.7 ± 10.5 | 17.2 ± 7.6 | <0.001
Comedones (total count) | Overall | 42.3 ± 14.3 | 24.8 ± 12.7 | 15.0 ± 9.6 | <0.001
Papules (total count) | Catania | 4.2 ± 3.3 | 2.3 ± 2.4 | 1.3 ± 2.0 | <0.001
Papules (total count) | Naples | 6.7 ± 6.0 | 3.7 ± 4.8 | 1.8 ± 3.7 | <0.001
Papules (total count) | Overall | 5.5 ± 5.0 | 3.0 ± 3.8 | 1.6 ± 3.0 | <0.001
Sebutape | Naples | 2.13 ± 0.67 | 1.13 ± 0.38 | 1.03 ± 0.37 | <0.001

For all considered
outcomes, Dunnett multiple-comparison post-tests versus baseline showed a significant reduction (p<0.001) of both 4-week and 8-week values. The two centers showed similar results in the reduction of GAGS (center*time interaction, p=0.655) and total count of comedones (center*time interaction, p=0.192), while for the total count of papules a significant center*time interaction was found (p=0.007), showing a greater reduction in the Naples center than in the Catania center (Table 1). Local side effects were documented in only 3 cases (severe erythema: 1 case; severe itch: 2 cases), and product tolerability was rated as excellent by 90% of patients.
Figure 1. Advanced digital photography of two patients affected by mild acne at baseline (A,C) and after 4 weeks of treatment (B,D).
Figure 2. Advanced digital photography of two patients affected by mild acne at baseline (A,C) and after 8 weeks of treatment (B,D).
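As a quick arithmetic cross-check, the percentage reductions quoted in the Results follow directly from the Table 1 means. A minimal sketch (rounding to whole percent is our convention, not stated in the paper):

```python
# Percent reduction from baseline, using the means reported in Table 1.
def pct_reduction(baseline, followup):
    return round(100 * (baseline - followup) / baseline)

print(pct_reduction(42.3, 24.8))  # comedones, 4 weeks: ~41%
print(pct_reduction(5.5, 3.0))    # papules, 4 weeks: ~45%
print(pct_reduction(2.13, 1.13))  # sebum (Sebutape), 4 weeks: ~47%
print(pct_reduction(5.5, 1.6))    # papules, 8 weeks: ~71%
```

Each value reproduces the corresponding reduction reported in the text.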
Discussion
Cosmetic treatment for acne, if correctly prescribed and used, may improve outcomes in acne therapy.11 The topical cosmetic approach to acne treatment includes: hygiene and cleansing, ideally with a non-comedogenic, non-acnegenic, non-irritating, and non-allergenic cleanser; sebum-controlling agents, generally based on so-called “mattifying” ingredients able to absorb sebum from the skin surface, in cases of hyperseborrhea; corneolytic agents, particularly indicated for comedonal acne for their comedolytic effect, which makes comedones more superficial and may also facilitate skin absorption of other products; antimicrobial agents, which have been proven to effectively reduce P. acnes growth both in vitro and in in vivo clinical studies; anti-inflammatory agents, indicated to control the pivotal role of inflammatory events in the development of acne lesions; and moisturizers specifically designed for acne patients, which improve adherence to therapy by alleviating the discomfort and feeling of dryness that follow topical and systemic retinoids or topical BP.11 The use of specific photoprotective agents and shaving products should also be encouraged. Finally, a tailored camouflage make-up technique using non-comedogenic products may minimize the esthetic problem associated with acne and improve patients’ self-esteem and adherence to treatment.11 The results of our study suggest that the tested products are effective in the treatment of mild acne, achieving good resolution rates through the synergic action of agents concurrently able to control different factors involved in the pathogenesis of acne.
In detail, licochalcone A is a natural flavonoid isolated from Glycyrrhiza glabra and Glycyrrhiza inflata (licorice root) with anti-inflammatory and antimicrobial effects, as confirmed by in vitro and in vivo inhibition of pro-inflammatory cytokines (PGE2, LTB4, IL-6, and TNF-α),14–15 as well as protection against ROS-induced cell damage through activation of the expression of the antioxidant transcription factor Nrf2 (Nf-E2-related factor 2) and of detoxifying enzymes (Heme Oxygenase 1/HO-1, Glutamate-Cysteine Ligase Modifier subunit/GCLM).17–19 Hydroxy Complex is a combination of three exfoliating agents (Glycolic, Salicylic, and Polyhydroxy Acids) able to cause intercorneocyte cell detachment and thereby induce a comedolytic effect.20–22 Finally, L-Carnitine acts as a sebum-controlling agent through β-oxidation of intracellular lipid content, as demonstrated in in vitro and in vivo clinical studies.23
Conclusions
Our results suggest that the daily regimen based on Licochalcone A combined with Salicylic acid/L-carnitine as a fluid, or with Hydroxy-Complex 10% as a cream, represents an interesting approach for treating mild acne that undoubtedly deserves attention. It may represent a significant adjunct to the therapeutic armamentarium for the management of acne patients, without any significant side effects. We are aware that our study has some limitations, being a non-controlled trial, but it was a multicenter study with a considerable number of patients enrolled. Further studies on larger series of adolescent acne patients are necessary to confirm our results.
Disclosure
The study was supported financially by Beiersdorf SpA (Milano, Italy), which manufactures the products. Paolo Chiodini received an honorarium for a consultancy from Beiersdorf SpA (Milano, Italy). The other authors declare no other conflicts of interest in this work.
References
1. Gollnick HP. From new findings in acne pathogenesis to new approaches in treatment. J Eur Acad Dermatol Venereol. 2015;29(Suppl 5):1–7.
2.
Kircik LH. Advances in the understanding of the pathogenesis of inflammatory acne. J Drugs Dermatol. 2016;15(1 Suppl 1):s7–s10.
3. Picardo M, Eichenfield LF, Tan J. Acne and Rosacea. Dermatol Ther (Heidelb). 2017;7(Suppl 1):43–52. doi:10.1007/s13555-016-0168-8
4. George RM, Sridharan R. Factors aggravating or precipitating acne in Indian adults: a hospital-based study of 110 cases. Indian J Dermatol. 2018;63(4):328–331. doi:10.4103/ijd.IJD_565_17
5. Zaenglein AL, Pathy AL, Schlosser BJ, et al. Guidelines of care for the management of acne vulgaris. J Am Acad Dermatol. 2016;74(5):945–973. doi:10.1016/j.jaad.2015.12.037
6. Bahali AG, Bahali K, BiyikOzkaya D, et al. The associations between peer victimization, psychological symptoms and quality of life in adolescents with acne vulgaris. J Eur Acad Dermatol Venereol. 2016;30(12):e184–e186. doi:10.1111/jdv.13495
7. Gollnick H, Cunliffe W, Berson D, et al. Management of acne: a report from a Global Alliance to Improve Outcomes in Acne. J Am Acad Dermatol. 2003;49(Suppl 1):S1–S37. doi:10.1067/mjd.2003.618
8. Strauss JS, Krowchuk DP, Leyden JJ, et al; American Academy of Dermatology/American Academy of Dermatology Association. Guidelines of care for acne vulgaris management. J Am Acad Dermatol. 2007;56(4):651–663. doi:10.1016/j.jaad.2006.08.048
9. Thiboutot D, Gollnick H, Bettoli V, et al; Global Alliance to Improve Outcomes in Acne. New insights into the management of acne: an update from the Global Alliance to Improve Outcomes in Acne group. J Am Acad Dermatol. 2009;60(Suppl 5):S1–S50. doi:10.1016/j.jaad.2009.01.019
10. Nast A, Dréno B, Bettoli V, et al; European Dermatology Forum. European evidence-based (S3) guidelines for the treatment of acne. J Eur Acad Dermatol Venereol. 2012;26(Suppl 1):1–29.
11. Dall’Oglio F, Tedeschi A, Fabbrocini G, et al. Cosmetics for acne: indications and recommendations for an evidence-based approach. G Ital Dermatol Venereol. 2015;150(1):1–11.
12. Doshi A, Zaheer A, Stiller MJ. A comparison of current acne grading systems and proposal of a novel system. Int J Dermatol. 1997;36:416–418. doi:10.1046/j.1365-4362.1997.00099.x
13. Becker M, Wild T, Zouboulis CC. Objective assessment of acne. Clin Dermatol. 2017;35(2):147–155. doi:10.1016/j.clindermatol.2016.10.006
14. Fabbrocini G, Capasso C, Donnarumma M, et al. A peel-off facial mask comprising myoinositol and trehalose-loaded liposomes improves adult female acne by reducing local hyperandrogenism and activating autophagy. J Cosmet Dermatol. 2017;16(4):480–484. doi:10.1111/jocd.2017.16.issue-4
15. Goldsberry A, Hanke CW, Hanke KE. VISIA system: a possible tool in the cosmetic practice. J Drugs Dermatol. 2014;13(11):1312–1314.
16. Micali G, Dall’Oglio F, Tedeschi A, Lacarrubba F. Erythema-directed digital photography for the enhanced evaluation of topical treatments for acne vulgaris. Skin Res Technol. 2018;24(3):440–444. doi:10.1111/srt.2018.24.issue-3
17. Kolbe L, Immeyer J, Batzer J, et al. Anti-inflammatory efficacy of Licochalcone A: correlation of clinical potency and in vitro effects. Arch Dermatol Res. 2006;298:23–30. doi:10.1007/s00403-006-0654-4
18.
Angelova-Fischer I, Rippke F, Fischer TW, et al. A double-blind, randomized, vehicle-controlled efficacy assessment study of a skin care formulation for improvement of mild to moderately severe acne. J Eur Acad Dermatol Venereol. 2013;27(Suppl 2):6–11. doi:10.1111/jdv.2013.27.issue-s2
19. Kühnl J, Roggenkamp D, Gehrke SA, et al. Licochalcone A activates Nrf2 in vitro and contributes to licorice extract-induced lowered cutaneous oxidative stress in vivo. Exp Dermatol. 2015;24(1):42–47. doi:10.1111/exd.2015.24.issue-1
20. Kornhauser A, Coelho SG, Hearing VJ. Applications of hydroxy acids: classification, mechanisms, and photoactivity. Clin Cosmet Investig Dermatol. 2010;24(3):135–142.
21. Sharad J. Glycolic acid peel therapy – a current review. Clin Cosmet Investig Dermatol. 2013;11(6):281–288. doi:10.2147/CCID.S34029
22. Arif T. Salicylic acid as a peeling agent: a comprehensive review. Clin Cosmet Investig Dermatol. 2015;26(8):455–461. doi:10.2147/CCID.S84765
23. Peirano RI, Hamann T, Düsing HJ, et al. Topically applied L-carnitine effectively reduces sebum secretion in human skin. J Cosmet Dermatol. 2012;11(1):30–36. doi:10.1111/j.1473-2165.2011.00597.x
work_xmom33mw5zhn7lvavnvnqyysyi ----

R739 Product News

Heating units save space
Sooner or later most laboratories will need a heating mantle, something that heats safely and reliably from room temperature to boiling in 4 minutes. Meeting this need, particularly where laboratory benchtop space is at a premium, is the range of heating units available from C. Gerhardt UK. Their six-position heating mantle, model K126, is capable of accommodating up to six round-bottomed flasks of up to 750 ml. This unit, which is only 90 cm long, ensures maximum flexibility and efficient use of space, even in the smallest laboratory. Circle number 2 on reader response card.
Spill trays contain the solution
It is important to be able to contain spills that inevitably occur on laboratory benchtops or similar work surfaces to the smallest possible area, so as to minimize possible damage and inconvenience. With this in mind, Clark Scientific offer a practical, cost-effective solution to the problem with their range of disposable, re-usable spill trays. These trays contain spills of any description whilst also clearly defining a work area, without the need for bulky bench coverings. The lightweight trays can be simply rinsed or wiped clean for future use and then, when necessary, disposed of. Circle number 3 on reader response card.

In Brief

384-well cell harvester
Brandel have introduced a family of cost-effective cell harvesters which will ensure real time savings in harvesting procedures. These space-saving 384-well format harvesters are genuine high-throughput screening machines, combining four 96-well harvesters in one instrument. The 384-well format is available as either a manual or automated harvester system. All are fully Teflon-tubed for long life and maintenance-free service. All models provide the ability to individually adjust the flow rate of 1, 2, 3 or 4 96-wells at a time. Circle number 4 on reader response card.

Microplate counting applications
Packard Instrument Company recently announced the release of a new data analysis software package for use in microplate counting applications. X-Curve, an Excel®-based add-in, can be used to analyze data counted on Packard’s SpectraCount®, LumiCount® and FluoroCount®. The software boasts a powerful data analysis package capable of analyzing multiple microplate data sets and multiple standard curves simultaneously. X-Curve’s flexible spreadsheet graphics allow easy conversion of experimental data into publication-quality graphics. Circle number 5 on reader response card.
Qualitative filters
Whenever an analytical technique calls for materials to be separated for identification, Whatman cellulose filter papers should be the first items to spring to mind. They come in a wide variety of grades to suit general-purpose as well as specialist applications: the spectrum ranges from the most basic, Grade 1, a medium retention and flow rate paper for routine use, through to Grade 6, the most efficient qualitative paper for collecting small particles and often specified for boiler water analysis. Circle number 6 on reader response card.

Digital photography with a microscope
Olympus is the first microscope manufacturer to introduce a digital photomicrography system with an unprecedented price/performance ratio. The DP10 digital colour camera has a progressive 2/3′′ CCD sensor with over 1.4 million pixels, corresponding to a resolution of 1280 × 1024 pixels. Every pixel of the CCD sensor is represented directly in the digital picture. This technology secures extremely sharp pictures with natural colour reproduction, which can rightly be called photographs. The DP10 is suitable for many applications in every area of industry, research and medicine. Circle number 7 on reader response card.

GRI Contract Services has introduced a new addition to its range of Labconco® ventilation equipment. The Purifier® Vertical Clean Bench is a ventilated cabinet designed to provide clean, particulate-free air in the work space, so minimising risks of cross-contamination. Ideal as an individual workstation, the new cabinet provides Class 100 conditions. It is suitable for applications which are non-hazardous to the user, such as plant tissue culture, PCR, media preparation, electronics inspection, medical device assembly and non-toxic drug preparation. Air drawn through the top of the cabinet passes first through a pre-filter and then a 99.99% efficient HEPA filter which removes all contaminating particles of 0.3 µm or greater.
Variable speed fans maintain the correct air velocities through the cabinet, and uniform, turbulence-free air flow is ensured by a diffuser located at the top of the work area. All controls are located within easy reach, and a three-way safety switch for models incorporating ultraviolet light allows only one light at a time to be turned on. Circle number 1 on reader response card.

Clean air for the work space
work_xpth44dztfhxznvriknisci57i ----

Measuring Food Waste and Consumption by Children Using Photography

HAL Id: hal-02620997, https://hal.inrae.fr/hal-02620997. Submitted on 26 May 2020. Distributed under a Creative Commons Attribution 4.0 International License.
To cite this version: Agnès Giboreau, Camille Schwartz, David Morizet, Herbert L. Meiselman. Measuring food waste and consumption by children using photography. Nutrients, MDPI, 2019, 11(10), p. 2410. doi:10.3390/nu11102410. hal-02620997

Nutrients, Article

Measuring Food Waste and Consumption by Children Using Photography
Agnes Giboreau 1,*, Camille Schwartz 1,†, David Morizet 2 and Herbert L. Meiselman 3
1 Institute Paul Bocuse Research Center, 69130 Ecully, France; camille.schwartz@inra.fr
2 Bonduelle Corporate Research, 59650 Villeneuve-d’Ascq, France; david.morizet@rd.loreal.com
3 Herb L.
Meiselman Training and Consulting, Rockport, MA 01966, USA; herb@herbmeiselman.com
* Correspondence: agnes.giboreau@institutpaulbocuse.com; Tel.: +33-472-180-220
† Present address: Centre des Sciences du Goût et de l’Alimentation, AgroSup Dijon, CNRS, INRA, Université Bourgogne Franche-Comté, 21000 Dijon, France.
Received: 31 August 2019; Accepted: 30 September 2019; Published: 9 October 2019

Abstract: A photography method was used to measure waste on food trays in school lunch in France, using the 5-point quarter-waste scale. While food waste has been studied extensively in US school lunches, the structure of the French lunch meal is quite different, with multiple courses and vegetables (raw and cooked) in more than one course. Vegetables were the most wasted food category, as usually seen in school lunch research, especially cooked vegetables, which were wasted at rates of 66%–83%. Raw vegetables were still wasted more than main dishes, starchy products, dairy, fruit, and desserts. Vegetables were also the most disliked food category, with the classes of vegetables falling in the same order as for waste. Waste and liking were highly correlated. Sensory characteristics of the food were cited as a main reason for liking/disliking. There is a strong connection between food liking and food consumption, and this connection should be the basis for future attempts to modify school lunch to improve consumption. The photographic method of measuring food waste at an individual level performed well.

Keywords: food waste; school lunch; photography; vegetables; liking; French meals

1. Introduction
The measurement of food consumption and food waste is important for the goals of healthy eating and sustainability. Much of the research on measuring food intake and food waste has been directed to children [1]. One of the major challenges with children’s eating is the consumption of vegetables, which have received a large share of the attention in research [2–5].
The present study was designed to extend the research on school food waste to French school canteens using the photographic method of estimating waste, and to study the reasons for non-consumption, with particular attention to the differences between the various foods composing the French meal pattern. Much of the research on food consumption and food waste comes from studies of school lunches in the USA, including served hot lunches, served cold lunches (sandwiches, etc.), packed lunches from home, and possibly packed lunches purchased in shops. Children in the USA do not typically eat a hot lunch when they are at home, where sandwiches prevail [6]. Sandwich lunches also tend to be the norm in some Nordic countries [7,8]. Further, the meal content of the French mid-day meal, and of the French school mid-day meal, is different from meals in many other countries. The French typically eat their main meal mid-day [9], and the meal usually includes multiple courses (usually three), including cooked vegetables [10]. Similarly, French children usually have a hot lunch mid-day at home or at school, though mainly at school as part of institutional practices. School lunches are designed following dietary rules based on energy and balance of nutrients [11]. Lunches have multiple courses and always include vegetables as either a starter and/or a side dish. However, food waste is observed in school restaurants: the official French agency for the environment estimates food waste to be between 150 g and 200 g per person [12].
Detailed studies based on behavioral data are needed to better understand the reasons for non-consumption and to identify potential levers for selecting optimal food offers.

1.1. Different Methods to Measure Waste
Most people probably view weighed food intake as the gold standard for measuring consumption, with whole meals or individual meal components being weighed before and after the meal. This method is time-consuming, labor-intensive, and requires space [13,14]. It is difficult to do weighed assessments with large samples of people or with complex meals with many components. Alternatives to weighing food are visual estimation, i.e., direct observation by an experimenter, and photographic methods. Visual estimation has high reliability and requires less labor, less time, and less space [13,15], but requires trained observers [14]. Another method is digital photography, where the estimation of amount consumed/amount wasted can be done at any time, and the photos can be checked for inter-judge reliability [3,16]. Byker Shanks, Banna, and Serrano (2017) [17] review 53 articles measuring nutrient intake in the United States National School Lunch Program. Different methods were used in the studies: visual estimation (n = 11), photography (n = 11), weighing (n = 23), or a combination of these three (n = 8). Most studies compared pre- and post-intervention school lunches. Fruits and vegetables were the most researched food category and the most wasted. The authors recommend more consistency in measuring food intake and food waste in school lunch assessments. For example, they note the wide variability in how food waste is reported, including grams, ounces, percentages, or kilocalories, with percentages being the most frequent. They also note the increasing popularity of visual estimation methods, including photography. Various scales have been used to visually estimate the food left on the plate, as shown in Table 1.

Table 1.
Diversity of scales used to measure food waste through visual estimation.

Scale | Authors | Food left on the plate
3-point scale | Kandiah et al. 2006 [18] | all; >50%; <50%
4-point scale | Hiesmayr et al. 2009 [19] | all; 1/2; 1/4; none
5-point scale | Graves et al. 1983 [20] | all; 3/4; 1/2; 1/4 or less; none or almost none
5-point scale | Hanks et al. 2014 [13] | all; 3/4; 1/2; 1/4; none
6-point scale | Comstock et al. 1981 [21] | all; 1 bite eaten; 3/4; 1/2; 1/4; none
6-point scale | Navarro et al. 2016 [22] | 100%; 90%; 75%; 50%; 25%; 0%
7-point scale | Sherwin et al. 1998 [23] | all; 1 mouthful eaten; 3/4; 1/2; 1/4; 1 mouthful left; none
11-point scale | Williamson et al. 2003 [14] | 100%; 90%; 80%; 70%; 60%; 50%; 40%; 30%; 20%; 10%; 0%

The 5-point scale (quarter-waste method) is recommended by Hanks et al. (2014) as the most reliable.

1.2. Photographic Methods
Martin et al. (2014) [24] provide an overview of the photographic method of food consumption and waste estimation, noting its reliability and validity. Portion-size estimates using photography correlate highly with weighed food intake, although research shows that estimated intake varies over categories of nutrients; no information on a potential link with specific food items is given. Additionally, the photographic method does not appear to change significantly across different raters, making it suited to large studies with multiple raters. The photographic method has been applied in a wide range of food service settings, including different levels of schools (including preschool and elementary school). The photography method has been used in various contexts. Christoph et al. (2017) [25] applied pre- and post-meal digital photography to measure dietary intake in a university self-serve dining hall in the USA. They report high degrees of inter-rater agreement on selection, servings, and consumption of food groups, with the most challenge for consumption of mixed food items. Taylor et al.
(2018) [26] Nutrients 2019, 11, 2410 3 of 10 extend photo analysis to student packed lunches in the USA, concluding that there was satisfactory reliability in the assessment of quantities selected and consumed in eight food categories, with meat/meat alternatives presenting the biggest challenge. Pouyet et al. (2015) [27] used a photographic method to study elderly patients' meals in French nursing homes. A set of 11 reference images (0 to 100% of the food, in 10% increments) was provided to judges to help evaluate food waste. They observed reliability of the method: there was agreement between different untrained assessors and among repeated estimates made by the same assessor. They confirmed the accuracy and specificity of the method by comparing food intake estimates for four dishes with the intakes determined using the traditional weighed food method. The method worked equally well on different food types. Hubbard et al. (2015) [28] used a photographic method to estimate waste and consumption in a nudge intervention study with intellectually and developmentally disabled children in a school setting. Smith and Cunningham-Sabo (2013) [3] used a photographic method to study food waste in the US school lunch program. They estimated waste to the nearest 10% for the main dish, canned fruit, fresh fruit, vegetable, grain, and milk. One-third to one-half of fruits and vegetables were wasted. Yoder, Foecke, and Schoeller (2015) [4] studied waste following interventions in the US school lunch program, measuring trays pre and post meal. The percentage remaining on the tray was estimated in 25% increments (0%, 25%, 50%, 75%, and 100%) of the served portion. Amounts wasted were calculated as the served portion multiplied by the percentage remaining. Results showed greater waste for raw fruits (compared to cooked), cooked vegetables (compared to raw), locally sourced items (compared to conventionally sourced), and salad bar items (compared to main menu items).
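The calculation described by Yoder et al., wasted amount equal to the served portion multiplied by the fraction remaining estimated in 25% increments, can be sketched as follows (the function name and the gram values are illustrative, not taken from the study):

```python
# Waste estimation in the style of Yoder et al. (2015): the fraction
# remaining on the tray is estimated in 25% increments, and the wasted
# amount is that fraction of the served portion.

VALID_INCREMENTS = (0.0, 0.25, 0.50, 0.75, 1.0)

def wasted_amount(served_grams: float, fraction_remaining: float) -> float:
    """Return grams wasted, given the served portion and the estimated
    fraction remaining (must be one of the 25% increments)."""
    if fraction_remaining not in VALID_INCREMENTS:
        raise ValueError(f"fraction must be one of {VALID_INCREMENTS}")
    return served_grams * fraction_remaining

# Example: a 120 g serving with half of it left on the tray.
print(wasted_amount(120, 0.50))  # 60.0
```

Restricting the input to the five increments mirrors the coarse visual estimation step; finer resolution would require a different scale.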
Byker Shanks et al. (2017) [2], as noted above, reported on 11 studies using digital photography in their assessment of the National School Lunch Program in the USA. The studies used a variety of procedures, some using reference sizes of food components and others using students' selected food before consumption; in both cases, waste was reported as a percentage of the reference or the pre-consumption amount. Different studies used different percentage steps (actual percent; increments of 10%; or 0%, 10%, 25%, 50%, 100%). Digital photography was used to measure food waste and food consumption, as well as to evaluate a number of interventions and procedures. 1.3. Food Consumption and Waste in School Canteens Several studies have investigated methods for studying the low selection and intake of vegetables in schools. Getts et al. (2015) [29] studied the quarter-waste method of visual estimation in a school cafeteria and reported good validity and reliability. Comparing the estimated weight to the actual weight, they found that almost 90% of foods were in either "almost-perfect agreement" (45%) or "substantial agreement" (42%). For inter-rater reliability, they found over 90% in perfect to substantial agreement. Hanks et al. (2014) [13] also compared estimation methods (quarter-waste, half-waste, photograph) with weighed food leftovers. They noted problems with photographic estimation when foods are in containers, such as milk. The reliability of the quarter-waste method was slightly better than that of the half-waste method. The authors recommend the photographic method for estimating waste of selected, unpackaged foods. The goals of this research were to examine French children's food waste in a school restaurant, to analyze consumption across the range of diverse foods composing a complete meal, and to explore reasons for non-consumption: the study analyzes actual waste in a primary school and its link to taste preference.
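Inter-rater reliability of the kind reported by Getts et al. can be illustrated with a much simpler statistic: the exact percent agreement between two raters' quarter-waste scores. This is a simplified sketch with invented scores; the original study used weighted agreement categories, not this exact-match measure.

```python
# Simplified inter-rater reliability check for the quarter-waste method:
# the share of trays on which two raters assign the identical 0-4 score.

def percent_agreement(rater_a: list[int], rater_b: list[int]) -> float:
    """Fraction of trays scored identically by both raters."""
    if len(rater_a) != len(rater_b):
        raise ValueError("both raters must score the same trays")
    matches = sum(a == b for a, b in zip(rater_a, rater_b))
    return matches / len(rater_a)

# Two raters scoring the same eight trays on the 0-4 quarter-waste scale.
a = [0, 1, 2, 4, 3, 0, 2, 1]
b = [0, 1, 2, 4, 2, 0, 2, 1]
print(percent_agreement(a, b))  # 0.875
```

Exact agreement is a stricter criterion than the "substantial agreement" categories used in validation studies, which tolerate one-step differences between raters.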
In addition to extending the use of photographic measurement of waste to the school setting in a natural environment, this research was designed to extend the study of school lunch to France, which has a specific lunch pattern (starter, main dish and side dish, dessert). To this end, digital photography equipment was set up in a typical French school to record leftovers at the tray disposal point. Children's interviews were conducted on a subsample of participants to explore reasons for non-consumption. 2. Materials and Methods 2.1. General Procedure The experiment took place in the natural setting of the school cafeteria. Observations were made at lunchtime. The school cafeteria was self-service. On most days, children could choose between two cold-served vegetables as a starter. They were then served a main dish composed of meat/fish and a side, a dairy product (which they could choose), and a dessert (which they chose). The school was chosen so that children came from a wide range of socio-economic backgrounds. Participants were children aged 6 to 12 who usually had lunch in the school canteen. All data were fully anonymized. Families pay for the meal according to their income. Tap water was the only available beverage. Menus are designed in accordance with the national GEMRCN regulation ensuring dietary balance and hygiene standards. Food is prepared by a catering company in a central kitchen and delivered to the canteen using a cold-chain (chilled food) scheme. The employees preparing and serving meals are city staff, trained to respect processes and portion sizes. Children were given a code at the entrance of the school canteen to display on their tray. They then behaved as usual until the disposal of their tray, where a camera was set up filming all trays during the service. At the entrance/exit of the school canteen, an experimenter was available to children to discuss the meal (Figure 1).
Figure 1. General scheme of the protocol with children. 2.2. Ethics The experiment received a favorable response from the ethics committee of the CHU of Lyon (Ref. Rech_FRCH_2013_005, Feb 6, 2013). Parents were informed about the study by the director of the school, who collected authorizations. None of the parents refused their child's participation in the study. 2.3. Participants The data were collected through a 5-day longitudinal follow-up including a total of 215 children.
Children of the 4th and 5th grade classes were absent on one day because of a school trip. A total of 776 trays were analyzed, with a good balance of gender (50.6% girls) and age, as shown in Table 2. Table 2. Number of children by grade (boys/girls). * The CLIS class (classe pour l'inclusion scolaire) is composed of children of different ages with school difficulties and training assistance.
Day 1: 1st grade 30 (14/16); 2nd 29 (13/16); 3rd 35 (20/15); 4th 32 (13/19); 5th 32 (18/14); CLIS* 7 (4/3); total 165
Day 2: 30 (16/14); 37 (18/19); 29 (19/10); 17 (6/11); 35 (19/16); 0 (0/0); total 148
Day 3: 31 (12/19); 44 (19/25); 34 (20/14); 32 (14/18); 32 (18/14); 7 (5/2); total 180
Day 4: 35 (15/20); 47 (20/27); 32 (19/13); 35 (13/22); 28 (14/14); 8 (5/3); total 185
Day 5: 25 (11/14); 40 (16/24); 31 (21/10); n/a; n/a; 2 (1/1); total 98
Total: 151 (68/83) 19.4%; 197 (86/111) 25.4%; 161 (99/62) 20.7%; 116 (46/70) 14.9%; 127 (69/58) 16.4%; 24 (15/9) 3.1%; overall 776 (383/393) (50.6% girls)
2.4. Part 1 Waste Two days of pre-tests were used to set up the cameras in the school canteen at the disposal area and to precisely define the process with the children: giving an individual label to be displayed on the tray and visible when disposing of the tray after the meal.
Digital images of disposed trays were automatically captured with a camera set on a tripod at a distance of one meter and an angle of 45°. Pictures of foods were taken after the meal (Figure 2). Extraction of a single image for each child was done manually afterwards at the office. Figure 2. Example of the captured image of a disposed tray. For each child, each meal component was analyzed individually. The food remaining on the plate was evaluated as a percentage of the served quantity using a 5-point scale: 0, 1, 2, 3, or 4, corresponding to 0%, 25%, 50%, 75%, or 100% of the served portion.
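The 0-4 scoring just described maps directly to a fraction of the served portion; averaging those fractions over trays gives a per-category wastage rate of the kind plotted in Figure 3. The scores below are invented for illustration, not taken from the study's data.

```python
# Map the study's 5-point tray scores (0-4) to the fraction of the
# served portion left over, and average across trays to obtain a
# per-category wastage rate.

SCORE_TO_FRACTION = {0: 0.0, 1: 0.25, 2: 0.50, 3: 0.75, 4: 1.0}

def category_waste_rate(scores: list[int]) -> float:
    """Mean fraction wasted for one food category across trays."""
    fractions = [SCORE_TO_FRACTION[s] for s in scores]
    return sum(fractions) / len(fractions)

# Hypothetical scores for two categories across six trays.
cooked_vegetable = [4, 3, 4, 2, 3, 4]   # heavily wasted
dessert = [0, 1, 0, 0, 1, 0]            # well consumed
print(round(category_waste_rate(cooked_vegetable), 2))  # 0.83
print(round(category_waste_rate(dessert), 2))           # 0.08
```

The dictionary makes the scale explicit and would only need one change to accommodate a finer scale such as Williamson et al.'s 11-point version.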
Waste was analyzed by food categories selected to be homogeneous in culinary terms: raw vegetable (starter), cooked vegetable served cold (starter), starches (side dish), cooked vegetable served warm (side dish), main dish, dairy products, and fruits and desserts (Table 3). Table 3. Food items and food categories served during the 5 observation days.
1. Starter, raw vegetable: diced tomatoes; grated carrots; iceberg lettuce; radishes with butter; romaine lettuce
2. Starter, cooked vegetable served cold: leeks; zucchini with pesto
3. Starches: chickpea salad; Creole rice; pasta salad
4. Side dish, cooked vegetable served warm: eggplant gratin; ratatouille; Swiss chard gratin
5. Main dish: breaded fish (hoki); fish filet (hoki) with cream sauce; fish filet (lieu noir) with sauce; plain omelet; shell pasta with 5-cheese tomato sauce; spaghetti Bolognese
6. Dairy products: blue cheese; cheese specialty; cottage cheese; Emmental cheese; Gouda; mixed fruit yogurt
7. Fruits and desserts: apple sauce; apple tart; apricot; apricot tart; fruit cocktail; kiwi; orange; peach; vanilla muffins; waffles
2.5. Part 2 Liking The second part of the methodology consisted of exploring children's reactions to the served dishes.
A subgroup of approximately 50 children per day (10 children per grade) was interviewed after meals to explore reasons for non-consumption and, more specifically, reasons for non-appreciation. A total of 248 children (62% girls) were randomly selected from the children who left food on their tray. The interview was semi-structured, with two sections, one for the starter and one for the main dish. The questions were: Which starter/main dish did you choose today? Did you enjoy it? Did you have enough? Too much? Not enough? If you did not like it, why not? The interviews allowed us to focus on dislikes through a forced categorization into two liking categories (liked or disliked) and to elaborate on reasons for dislikes. A thematic analysis was conducted to interpret children's discourse. Four categories were determined after a first reading of the collected verbatim transcripts. Three are sensory categories: appearance, taste and flavor, and texture; the fourth concerns a food component often cited as a cause of rejection: the sauce. 2.6. Statistics A one-way ANOVA was performed to evaluate differences in waste scores, and a Chi-square analysis was used to evaluate the proportion of likes and dislikes across the offered food range. A Pearson coefficient was calculated to describe the correlation between waste and liking based on the 7 analyzed food components (see Table 3). Statistical analyses were performed using XLSTAT 2015 (version 2015.4.01.22368, Addinsoft). 3. Results 3.1. Part 1 Waste Vegetable dishes were significantly the most wasted foods (F(df = 6) = 132.812, p < 0.0001), especially cooked vegetables, whether served warm as a side dish (66% wasted) or cold as a starter (83% wasted). Vegetable dishes were less wasted when the vegetable was raw (42% wasted), e.g., salad with dressing served as a starter.
Other food components (main dish and starch, dairy product, fruit, and dessert) were rather well consumed, with wastage rates between 22% and 25% (see Figure 3). Figure 3. Mean rate of individual wastage of meal components (error bars are standard deviations; foods with the same letter are not significantly different, ANOVA, 5% risk level). 3.2. Part 2 Liking Interviewing children identified liked and disliked food components (Figure 4).
The Chi-square statistic confirms the observed differences (Chi²(6) = 317.16). Cooked vegetables served as a starter or side dish were disliked, whereas raw vegetables, main dishes, starches, dairy products, fruits, and desserts were liked. Figure 4. Categories of liked/disliked food components. The thematic analysis of children's discourse gave information on reasons for rejection. "Taste" is the most-cited word, showing the importance of sensory characteristics in rejection. Two dishes were particularly disliked: the two recipes of cooked vegetables served as cold starters. For the leeks with dressing, taste mainly refers to the slimy texture and appearance. For the zucchini with pesto, it mainly refers to the unusual temperature (cold) and hardness. Moreover, both dishes were criticized for the sauce and the too-large quantity of sauce. Texture is often a major reason for food rejection [30]. Waste quantities are highly correlated with the liking status of foods: the Pearson coefficient of −0.997 (R² = 0.994) is significant (n = 317, p < 0.0001). 4. Discussion The present study was undertaken to examine food selection and food waste in a typical French school lunch, with multiple courses of hot and cold, savory and sweet foods, using a photographic method. Waste was measured using the 5-point quarter-waste scale [13]. As noted in previous research in the USA (see [2] for a review) and other countries [31], vegetables were the most wasted food category, especially cooked vegetables, which were wasted at rates of 66–83%. Raw vegetables were still wasted more (42%) than main dishes, starch, dairy, fruit, and desserts (22–25%). Similarly, vegetables were the most disliked food category, with the classes of vegetables in the same order as for waste: cold cooked vegetable > cooked vegetable side dish > raw vegetable.
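A waste-liking correlation of the kind reported above can be reproduced in outline with a plain Pearson computation over per-category rates. The seven value pairs below are invented stand-ins for category-level waste and dislike rates, not the study's data.

```python
# Pearson correlation between per-category waste rates and dislike
# rates, computed from first principles (no external libraries).
from math import sqrt

def pearson(x: list[float], y: list[float]) -> float:
    """Sample Pearson correlation coefficient of two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical rates for the seven food categories.
waste_rate   = [0.83, 0.66, 0.42, 0.25, 0.24, 0.23, 0.22]
dislike_rate = [0.80, 0.62, 0.40, 0.22, 0.20, 0.21, 0.18]
print(round(pearson(waste_rate, dislike_rate), 3))  # 0.999
```

With only seven category-level points, a correlation this high mostly reflects the shared ordering of categories, which is why the study also reports significance at the tray level.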
Reasons for liking/disliking included sensory dimensions (e.g., "slimy") and other reasons. There was a high correlation between waste and liking (p < 0.0001). There is a strong connection between food liking and food consumption, but there is not yet strong research evidence identifying food disliking as the major reason for waste. However, numerous authors point to the importance of personal preferences in generating food waste, and consumption can hopefully be improved by designing school menus more in line with student preferences. This research supports the use of the 5-point quarter-waste method to measure food waste from photographs or online. Byker Shanks et al. (2017) [2] noted the problems arising from the use of a variety of scales to measure waste and recommended more standardization of scales and methods. This research also supports the photographic method for measuring food waste. As noted above, weighed assessment of each food portion before and after a meal is time-consuming and labor-intensive, while visual estimation of each food portion requires less time and labor but requires trained observers. Both weighed assessments and visual estimation are difficult with large groups of people and with meals with many components. The photographic method is the only method where the meal images can be retained and checked later for validity and reliability.
Further, the photographic method is not labor-intensive on the day of field testing; it is efficient in terms of space, personnel, and required training. 5. Conclusions The future of waste measurement in schools and other eating contexts will probably rely on automated analysis of food photographs. Future software will be able to automatically recognize food items and measure the food proportions remaining from standard serving sizes. However, the challenge of measuring complex foods, complex food combinations, and foods in containers may remain. Automated measurement will help foodservice teams adjust the offer toward liked and consumed foods and lower food waste in the cafeteria. It also opens the path to building educational programs in school cafeterias that follow the effect of actions such as exposure to foods in conjunction with adjustable portion sizes of less-consumed target foods. Author Contributions: A.G., D.M. and C.S. designed the study; C.S. supervised the study; A.G. and H.L.M. wrote the article. Funding: This project was cofunded by the Institut Paul Bocuse Research Center, Bonduelle, and Elior.
Acknowledgments: We would like to thank colleagues and students from the Institut Paul Bocuse: Stephen Baldridge, Mélanie Ricci, Erica Pilgram, Remy Mondon, and Fanny Jaillot, as well as the partners contributing to efficient experiment setting and data collection in the school cafeterias: Laurence Depezay (Bonduelle), Christel Hanicotte (Elior Restauration), and Gratianne Dumas (City of Lyon), as well as all other contributors in the field. We also thank Bonduelle and Elior for the financial support of the study. Conflicts of Interest: The authors declare no conflict of interest. References 1. Appleton, K.; Hemingway, A.; Saulais, L.; Dinnella, C.; Monteleone, E.; Depezay, L.; Morizet, D.; Perez-Cueto, F.J.A.; Bevan, A.; Hartwell, H. Increasing Vegetable Intakes: Rationale and Systematic Review of Published Interventions. Eur. J. Nutr. 2016, 55, 869–896. 2. Byker Shanks, C.; Banna, J.; Serrano, E.L. Food Waste in the National School Lunch Program 1978–2015: A Systematic Review. J. Acad. Nutr. Diet. 2017, 117, 1792–1807. 3. Smith, S.L.; Cunningham-Sabo, L. Food choice, plate waste and nutrient intake of elementary- and middle-school students participating in the US National School Lunch Program. Public Health Nutr. 2013, 17, 1255–1263. 4. Yoder, A.B.B.; Foecke, L.L.; Schoeller, D.A. Factors affecting fruit and vegetable school lunch waste in Wisconsin elementary schools participating in Farm to School programmes. Public Health Nutr. 2015, 18 (Suppl. 15), 2855–2863. 5. Moreno-Black, G.; Stockard, J. Salad bar selection patterns of elementary school children. Appetite 2018, 120, 136–144. 6. McIntosh, W.A.; Dean, W.; Torres, C.C.; Anding, J.; Kubena, K.S.; Nayga, R. The American Family Meal. In Meals in Science and Practice; Meiselman, H.L., Ed.; CRC Press: New York, NY, USA, 2009; pp. 190–218. 7. Makela, J.; Kjaernes, U.; Ekstrom, P. What did they eat?
In Eating Patterns: A Day in the Lives of the Nordic Peoples; Kjaernes, U., Ed.; National Institute for Consumer Research: Lysaker, Norway, 2001; pp. 65–90. 8. Makela, J. The meal format. In Eating Patterns: A Day in the Lives of the Nordic Peoples; Kjaernes, U., Ed.; National Institute for Consumer Research: Lysaker, Norway, 2001; pp. 125–158. 9. de Saint Pol, T. Quand est-ce qu'on mange? Le temps des repas en France. Terrains et Travaux 2005, 9, 51–72. 10. Fischler, C.; Masson, E. Manger: Français, Européens et Américains face à l'alimentation; Odile Jacob: Paris, France, 2008; p. 336. 11. Ministère de l'Économie, de l'Industrie et du Numérique. Recommandation Nutrition. Groupe d'Étude des Marchés de Restauration Collective et Nutrition (GEM-RCN); Paris, France, 2015. 12. ADEME. Réduire le gaspillage alimentaire en restauration collective. Guide pratique de l'Agence de l'Environnement et de la Maîtrise de l'Énergie; Ademe éditions: Angers, France, 2018. 13. Hanks, A.S.; Wansink, B.; Just, D.R. Reliability and Accuracy of Real-Time Visualization Techniques for Measuring School Cafeteria Tray Waste: Validating the Quarter-Waste Method. J. Acad. Nutr. Diet. 2014, 114, 470–474. 14. Williamson, D.A.; Allen, H.R.; Martin, P.D.; Alfonso, A.J.; Gerald, B.; Hunt, A. Comparison of digital photography to weighed and visual estimation of portion sizes. J. Am. Diet. Assoc. 2003, 103, 1139–1145. 15.
Martins, L.M.; Cunha, L.M.; Rodrigues, S.P.; Rocha, A. Determination of plate waste in primary school lunches by weighing and visual estimation methods: A validation study. Waste Manag. 2014, 34, 1362–1368. 16. Sabinsky, M.S.; Toft, U.; Andersen, K.K.; Tetens, I. Validation of a digital photographic method for assessment of dietary quality of school lunch sandwiches brought from home. Food Nutr. Res. 2013, 57, 20243. 17. Byker, C.J.; Farris, A.R.; Marcenelle, M.; Davis, G.C.; Serrano, E.L. Food Waste in a School Nutrition Program After Implementation of New Lunch Program Guidelines. J. Nutr. Educ. Behav. 2014, 46, 406–411. 18. Kandiah, J.; Stinnett, L.; Lutton, D. Visual plate waste in hospitalized patients: Length of stay and diet order. J. Am. Diet. Assoc. 2006, 106, 1663–1666. 19. Hiesmayr, M.; Schindler, K.; Pernicka, E.; Schuh, C.; Schoeniger-Hekeler, A.; Bauer, P.; Laviano, A.; Lovell, A.; Mouhieddine, M.; Schuetz, T.; et al. Decreased food intake is a risk factor for mortality in hospitalised patients: The NutritionDay survey 2006. Clin. Nutr. 2009, 28, 484–491. 20. Graves, K.; Shannon, B. Using visual plate waste measurement to assess school lunch food behaviour. J. Am. Diet. Assoc. 1983, 82, 163–165. 21. Comstock, E.M.; St Pierre, R.G.; Mackiernan, Y.D. Measuring individual plate waste in school lunches: Visual estimation and children's ratings vs. actual weighing of plate waste. J. Am. Diet. Assoc. 1981, 79, 290–296. 22. Navarro, D.A.; Boaz, M.; Krause, I.; Eli, A.; Chernov, K.; Giabra, M.; Frishman, S.; Levy, M.; Giboreau, A.; Kosak, S.; et al. Improved meal presentation increases food intake and decreases readmission rate in hospitalized patients. Clin. Nutr. 2016, 35, 1153–1158. 23. Sherwin, A.; Nowson, C.; McPhee, J.; Alexander, J.; Wark, J.; Flicker, L.
Nutrient intake at meals in residential care facilities for the aged: Validated visual estimation of plate waste. Aust. J. Nutr. Diet. 1998, 55, 188–193. 24. Martin, C.K.; Nicklas, T.; Gunturk, B.; Correa, J.B.; Allen, H.R.; Champagne, C. Measuring food intake with digital photography. J. Hum. Nutr. Diet. 2014, 27, 72–81. 25. Christoph, M.J.; Loman, B.R.; Ellison, B. Developing a digital photography-based method for dietary analysis in self-serve dining settings. Appetite 2017, 114, 217–225. 26. Taylor, J.C.; Sutter, C.; Ontai, L.L.; Nishina, A.; Zidenberg-Cherr, S. Feasibility and reliability of digital imaging for estimating food selection and consumption from students' packed lunches. Appetite 2018, 120, 196–204. 27. Pouyet, V.; Cuvelier, G.; Benattar, L.; Giboreau, A. A photographic method to measure food item intake: Validation in geriatric institutions. Appetite 2015, 84, 11–19. 28. Hubbard, K.L.; Bandini, L.G.; Folta, S.C.; Wansink, B.; Eliasziw, M.; Must, A. Impact of a Smarter Lunchroom intervention on food selection and consumption among adolescents and young adults with intellectual and developmental disabilities in a residential school setting. Public Health Nutr. 2015, 18, 361–371. 29. Getts, K.M. Measuring Plate Waste: Validity and Inter-Rater Reliability of the Quarter-Waste Method. Master's Thesis, University of Washington, Seattle, WA, USA, 2015. 30. Egolf, A.; Siegrist, M.; Hartmann, C. How people's food disgust sensitivity shapes their eating and food behaviour. Appetite 2018, 127, 28–36. 31. Falasconi, L.; Vittuari, M.; Politano, A.; Segre, A. Food Waste in School Catering: An Italian Case Study. Sustainability 2015, 7, 14745–14760. © 2019 by the authors. Licensee MDPI, Basel, Switzerland.
Secure High Dynamic Range Images
Secure HDR Images

Med Amine Touil, Noureddine Ellouze
Dept. of Electrical Engineering, National Engineering School, Tunis, Tunisia
(IJACSA) International Journal of Advanced Computer Science and Applications, Vol. 7, No. 4, 2016

Abstract—In this paper, a tone mapping algorithm is proposed to produce LDR (Low Dynamic Range) images from HDR (High Dynamic Range) images. In the approach, non-linear functions are applied to compress the dynamic range of HDR images. Security tools are then applied to the resulting LDR images and their effectiveness is tested on the reconstructed HDR images. Three specific examples of security tools are described in more detail: integrity verification using a hash function to compute local digital signatures, encryption for confidentiality, and a scrambling technique.

Keywords—high dynamic range; tone mapping; range compression; integrity verification; encryption; scrambling; inverse tone mapping; range expansion

I. INTRODUCTION

Thanks to recent advances in computer graphics and vision, HDR imaging has become a new-generation technology and an emerging standard representation in the field of digital photography. Advances in acquisition and display techniques and equipment, increasingly powerful processors in professional and consumer devices, and the continued effort toward more photo-realistic content with higher image and video quality have all attracted attention to HDR imaging. Nowadays, several manufacturers offer cameras and displays capable of acquiring and rendering HDR images. However, the popularity and public adoption of HDR images are hampered by the lack of file formats, compression standards and security tools.

In this paper, mechanisms suitable for HDR images are developed to protect privacy and to minimize risks to confidential information. The structure of this paper is as follows. The existing standard is reviewed in Section 2. Three specific use cases dealing with integrity verification, encryption and scrambling are then discussed in Section 3. Conclusions are drawn in Section 4.

II. OVERVIEW

The protection of privacy is important in our civilization and is essential to several social functions. However, this fundamental principle is rapidly eroding due to the intrusions tolerated by some modern information technology. In particular, the protection of privacy is becoming a central issue in the transfer of images through open networks, and especially in video surveillance systems. Digital images distributed via networks can easily be copied and modified, legally and/or illegally. In this spirit, there has been strong demand for a security solution for JPEG2000 images. To meet this demand, the JPEG [1][2] committee created an extension of the JPEG2000 [3][4] encoder by integrating security tools such as integrity verification, encryption and scrambling. This extension is Part 8 of the JPEG2000 standard (JPEG2000 Part 8), designated JPSEC. JPSEC [5][6] defines the framework, concepts and methods for the security of JPEG2000. It specifies a syntax for the encoded data and provides protection of the JPEG2000 bit stream. The syntax defines the security services associated with the image data, the tools required for each service and how to apply those tools, and the parts of the image data to be protected.

The problem of protecting visual privacy in digital image and video data has attracted much interest lately. The capacity of HDR imaging to capture fine details in contrasting environments, making dark and bright areas clear, has strong implications for privacy. However, the point at which the HDR representation affects privacy if used instead of the SDR (Standard Dynamic Range) representation is not yet clear, and the usage scenarios are not fully understood. Indeed, there is no privacy-protection mechanism specific to the HDR representation, and many research challenges remain open concerning privacy intrusion for HDR images. As part of this paper, mechanisms adapted to HDR images are developed to protect privacy.
These mechanisms should essentially meet the expectations of consumers concerned about the respect of their privacy and ethics.

III. METHODOLOGY

In this section, a system based on the DCT (Discrete Cosine Transform) is proposed to secure HDR images. For tone mapping (Fig. 1), sub-band architectures [7] are applied using a multi-scale decomposition with Haar pyramids, splitting the signal into sub-bands which are rectified, blurred, and summed to give an activity map. A gain map is derived from the activity map using parameters which are to be specified. Each sub-band coefficient is then multiplied by the gain at that point, and the modified sub-bands are post-filtered and summed to reconstruct the result image.

Fig. 1. Tone mapping: (a) HDR image, (b) LDR image

Hereafter, three specific examples of security tools are described in more detail: integrity verification, encryption and scrambling.

A. Integrity Verification

To detect manipulations of the image data, integrity verification (Fig. 2) is used; bit-exact verification is considered in this use case, applied in the transform domain and based on a hash function and digital signatures. More specifically, the hash function SHA-1 [8] is applied to the DCT coefficients, generating a 160-bit value which is then encrypted with the public-key algorithm RSA [9] to generate a digital signature. Other hash functions and encryption algorithms could obviously be used. If the newly computed hash values and those recovered by decryption are not equal, or if the digital signature is missing, an attack is detected. To enable a potential attack to be located, integrity verification is performed for each macro-block.
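The per-macro-block hashing just described can be sketched in a few lines. This is a minimal illustration that assumes each macro-block's quantized DCT coefficients are available as a list of integers; the RSA encryption of each digest into a signature is omitted, and the function names are ours, not from the paper.

```python
import hashlib

def macroblock_digests(blocks):
    """Compute one SHA-1 digest per macro-block of DCT coefficients.

    `blocks` maps a macro-block index to its list of integer coefficients.
    In the full scheme each digest would additionally be RSA-encrypted
    to form a digital signature (omitted in this sketch).
    """
    return {idx: hashlib.sha1(bytes(str(coeffs), "utf8")).hexdigest()
            for idx, coeffs in blocks.items()}

def locate_tampering(reference, received):
    """Return the macro-block indices whose digests disagree or are missing."""
    return sorted(idx for idx in reference
                  if received.get(idx) != reference[idx])

# Tampering with one macro-block is localized to that block only.
original = {0: [12, -3, 0, 1], 1: [7, 7, -2, 0], 2: [0, 0, 0, 5]}
tampered = {0: [12, -3, 0, 1], 1: [9, 7, -2, 0], 2: [0, 0, 0, 5]}
assert locate_tampering(macroblock_digests(original),
                        macroblock_digests(tampered)) == [1]
```

Hashing per macro-block rather than per 8x8 block is exactly the bit-rate trade-off discussed above: fewer digests mean fewer signature bytes, at the cost of coarser localization.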
While a single digital signature is computed to verify the integrity of the whole image, multiple signatures are computed to identify the locations in the image data where the integrity is in doubt. A digital signature is therefore generated for each macro-block composed of several DCT blocks, rather than for each 8x8 DCT block, because the very large number of digital signatures and added bytes would significantly increase the overall bit-rate.

An original and a tampered image are shown in Fig. 3. Integrity verification is performed on macro-blocks composed of 100 DCT blocks, corresponding to square regions of 80x80 pixels. The attack is identified in the upper-left 160x80 pixels by comparing the hash values obtained from the two images.

Fig. 2. Flowchart of integrity verification

Fig. 3. Example of integrity verification: (a) original image, (b) tampered image, (c) digital signature verification

B. Encryption

For confidentiality, the use case of encryption (Fig. 4) is now considered. The preferred approach is to apply encryption in the transform domain; more specifically, encryption is applied to the quantized DCT coefficients. Authorized users are able to decrypt and recover the original data. AES [10] encryption is applied either to the whole image or, alternatively, restricted to the selected DCT blocks of a ROI (Region of Interest).

Fig. 4. Flowchart of encryption

An example of whole-image and ROI encryption is shown in Fig. 5, with the shape of the encrypted region restricted to match the 8x8 DCT block boundaries.

Fig. 5. Example of encryption: (a) whole image encryption, (b) region of interest encryption

C. Scrambling

Image and video content is characterized by a very high bit-rate and a low commercial value compared to other types of information such as banking data and confidential documents. Conventional encryption techniques entail a non-negligible complexity increase and are therefore not optimal in this case. While keeping complexity very low, scrambling (Fig. 6) is a compromise to protect image and video data. A scrambling technique applied to the quantized DCT coefficients is considered in this use case. Authorized users perform unscrambling of the coefficients, making the process fully reversible for them.

To guarantee confidentiality, a pseudo-random noise is introduced by pseudo-randomly inverting the signs of the quantized DCT coefficients. As in the previous case, scrambling can be applied to the whole image or restricted to the fewer DCT coefficients of a ROI. The technique requires negligible computational complexity, as it merely flips the signs of selected DCT coefficients; other extensions, such as flipping the least or most significant bits of the quantized DCT coefficients, can be considered. A PRNG (Pseudo Random Number Generator), initialized by a seed value, is used to drive the scrambling process. Multiple seeds can be used to improve the security of the system; the seeds are encrypted using RSA in order to communicate them to authorized users.

Fig. 6. Flowchart of scrambling

An example of whole-image and ROI scrambling is shown in Fig. 7, with the shape of the scrambled region restricted to match the 8x8 DCT block boundaries. A reconstructed HDR image is shown in Fig. 8 after inverse tone mapping of the resulting LDR image.
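The sign-flipping scrambling described above can be illustrated with a short sketch. It assumes the quantized DCT coefficients of a block are already available as a list of integers; the DCT and quantization steps, and the RSA transport of the seed, are omitted, and the function name is ours rather than from the paper.

```python
import random

def scramble_signs(coeffs, seed):
    """Pseudo-randomly invert the signs of quantized DCT coefficients.

    The same function also unscrambles: flipping signs again with the
    same seeded PRNG sequence is its own inverse, so an authorized user
    holding the seed fully recovers the original coefficients.
    """
    prng = random.Random(seed)
    # Draw one PRNG bit per coefficient; flip the sign where the bit is set.
    return [-c if prng.getrandbits(1) else c for c in coeffs]

# Round trip: scrambling twice with the same seed restores the block.
block = [-26, -3, -6, 2, 2, -1, 0, 0, 1]  # example quantized coefficients
scrambled = scramble_signs(block, seed=42)
restored = scramble_signs(scrambled, seed=42)
assert restored == block
```

Because only signs change, the bit-rate impact and computational cost are negligible, which is the point of preferring scrambling over full encryption for high-rate image and video data.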
Fig. 7. Example of scrambling: (a) whole image scrambling, (b) region of interest scrambling

Fig. 8. Inverse tone mapping: (a) resulting LDR image, (b) reconstructed HDR image

IV. CONCLUSIONS

In this paper, a system consisting of security tools was introduced to provide services similar to those in the JPSEC standard. As illustrative use cases, three specific examples of security tools were described in more detail: integrity verification, encryption and scrambling. The system allows different tools to be used in support of a number of security services. As a perspective, the implemented scrambling technique will be integrated into a video coding system adapted to HDR image sequences. Specifically, the scrambling process will be applied directly to the DCT coefficients after quantization and before entropy coding; at the decoder side, authorized users will unscramble the DCT coefficients resulting from entropy decoding. Results will be presented in terms of subjective and objective measures of quality and scrambling strength.

REFERENCES
[1] G. K. Wallace, "The JPEG Still Picture Compression Standard", Communications of the ACM, vol. 34, no. 4, pp. 31-44, 1991.
[2] W. B. Pennebaker and J. L. Mitchell, "JPEG: Still Image Data Compression Standard", Van Nostrand Reinhold, New York, 1993.
[3] A. Skodras, C. Christopoulos and T. Ebrahimi, "The JPEG 2000 Still Image Compression Standard", IEEE Signal Processing Magazine, vol. 18, no. 5, pp. 36-58, Sept. 2001.
[4] D. Taubman and M. Marcellin, "JPEG 2000: Image Compression Fundamentals, Standards and Practice", Kluwer Academic Publishers, 2002.
[5] "JPSEC Final Draft International Standard", ISO/IEC JTC1/SC29/WG1/N3820, Nov. 2005.
[6] J. Apostolopoulos, S. Wee, F. Dufaux, T. Ebrahimi, Q. Sun and Z. Zhang, "The Emerging JPEG 2000 Security (JPSEC) Standard", in Proc. IEEE Int. Symp. on Circuits and Systems (ISCAS), Island of Kos, Greece, May 2006.
[7] Y. Li, L. Sharan and E. H. Adelson, "Compressing and Companding High Dynamic Range Images with Subband Architectures", ACM Transactions on Graphics (TOG), 24(3), Proceedings of SIGGRAPH, 2005.
[8] FIPS PUB 180-1, "Secure Hash Standard (SHS)", NIST, April 1995.
[9] R. L. Rivest, A. Shamir and L. M. Adleman, "A Method for Obtaining Digital Signatures and Public-Key Cryptosystems", Communications of the ACM, vol. 21, no. 2, pp. 120-126, 1978.
[10] FIPS PUB 197, "Advanced Encryption Standard (AES)", NIST, November 2001.

Journal: Hydrological Processes. Manuscript ID: HYP-17-0447.
Manuscript type: Research Article (revision R1).

Influence of forest and shrub canopies on precipitation partitioning and isotopic signatures

C. Soulsby1, H. Braun1,2, M. Sprenger1, M. Weiler2 and D. Tetzlaff1,3,4
1 Northern Rivers Institute, University of Aberdeen, Aberdeen, UK, AB24 3UF
2 Chair of Hydrology, University of Freiburg, Germany
3 IGB Leibniz Institute of Freshwater Ecology and Inland Fisheries, Germany
4 Department of Geography, Humboldt University Berlin, Germany

Abstract

Over a four-month summer period, we monitored how forest (Pinus sylvestris) and heather moorland (Calluna spp. and Erica spp.) vegetation canopies altered the volume and isotopic composition of net precipitation (NP) in a southern boreal landscape in northern Scotland. During that summer period, interception (I) losses were relatively high, and higher under forest than under moorland (46% and 35% of gross rainfall (GR), respectively).
Throughfall (TF) volumes exhibited marked spatial variability in forests, depending upon local canopy density, but were more evenly distributed under heather moorland. In the forest stands, stemflow (StF) was a relatively small canopy flow path, accounting for only 0.9-1.6% of NP and only substantial in larger events. Overall, the isotopic composition of NP was not markedly affected by canopy interactions; temporal variation of stable water isotopes in TF closely corresponded to that of GR, with TF-GR differences of -0.52 ‰ for δ2H and -0.14 ‰ for δ18O in forests, and 0.29 ‰ for δ2H and -0.04 ‰ for δ18O in heather moorland. These differences were close to, or within, the analytical precision of isotope determination, though the greater differences under forest were statistically significant. Evidence for evaporative fractionation was generally restricted to low rainfall volumes in low-intensity events, though at times subtle effects of liquid-vapour moisture exchange and/or selective transmission through canopies were evident. Fractionation and other effects were more evident in StF, but only marked in smaller events. The study confirmed earlier work that increased forest cover in the Scottish Highlands will likely cause an increase in interception and green water fluxes at the expense of blue water fluxes to streams. However, the low-energy, humid environment means that isotopic changes during such interactions will only have a minor overall effect on the isotopic composition of NP.

Key words: Forest hydrology, interception, throughfall, isotopes, canopy, boreal forest

1.
Introduction

Vegetation canopies play a critical role in partitioning gross precipitation (GR) into interception (I) losses and net precipitation (NP), and in the latter's further sub-division into throughfall (TF) and stemflow (StF) (Calder, 2005). The relative importance of these fluxes can vary dramatically between different ecosystems and in contrasting hydroclimatic regions (Bosch and Hewlett, 1982; Levia et al., 2011). Changes in canopy water partitioning potentially have major implications for the distribution of NP and the resulting flow paths, storage and mixing of water in the subsurface (Keim et al., 2005; Stockinger et al., 2015). This can also exert a strong influence on catchment-scale outputs of water in terms of other "green water" fluxes of transpiration and soil evaporative losses and "blue water" fluxes of drainage to streams (Tetzlaff et al., 2013). Recent work has highlighted that the role of vegetation in such partitioning has been under-researched compared to other hydrological processes. Increased awareness that land management can play a critical role in addressing various water security issues has provided an impetus for more extensive research into plant-water interactions (Calder, 2005; Allen et al., 2017). In addition, climate change is predicted to have far-reaching implications for the ecohydrology of many regions, in terms of changing precipitation regimes and temperatures in ways that may affect canopy routing and plant water availability (Kundzewicz et al., 2007; Rennermalm et al., 2010; Capell et al., 2013). Vegetation communities are highly responsive to such changes, adjusting species composition and distribution as well as biological productivity (Wookey et al., 2009).
For example, in many high-latitude northern ecosystems, vegetation changes driven by recent climatic warming have been reported, with an expansion of shrub and tree cover accompanying climatic amelioration (Menard et al., 2013; Tetzlaff et al., 2014). The implications of such changes for water partitioning, for "blue" and "green" water fluxes, and for other water balance components at the catchment scale are usually unknown and remain a major research challenge (Bring et al., 2016).

Previous plot-scale work has provided a basis for assessing the influence that vegetation canopies can have on precipitation inputs. The fraction of precipitation intercepted by forest and shrub canopies, which is evaporated or sublimated directly back to the atmosphere, has been measured in numerous studies (Levia et al., 2011). Similarly, the relative importance of TF, which encompasses canopy drip and water falling through canopy gaps, and of StF draining down tree stems has been quantified at many experimental sites (Levia et al., 2011; Allen et al., 2014). Together, TF and StF usually account for about 70-90% of GR in forested ecosystems, with the remaining 10-30% being lost to I; exact effects vary with hydroclimate and forest type, and I losses can be as large as 50% (Levia and Frost, 2003; Allen et al., 2017). However, the resulting heterogeneity and spatial variability of TF and StF can affect soil wetting patterns, soil water re-distribution and groundwater recharge in ways that are still poorly understood (Ford and Deans, 1978; Keim et al., 2006; Guswa and Spence, 2012).

As well as determining the physical quantity of NP, the partitioning of water in vegetation canopies also affects the physical and chemical characteristics of the evaporated I and the residual NP (e.g. Soulsby and Reynolds, 1994; Bhat et al., 2011). For example, stable isotopes of water, deuterium (2H) and oxygen-18 (18O), are commonly used as assumed conservative environmental tracers to identify sources and track the movement of water in catchments (Allen et al., 2017; Makoto et al., 2000; Soulsby et al., 2015). Variation in isotope signatures can be caused by fractionation occurring with the phase changes of water in the canopy, as well as by exchange between liquid and vapour water and the selection of different canopy flow paths in contrasting events (Yurtsever and Gat, 1981; Rozanski et al., 1993; Ingraham, 1998; Gat et al., 2001; Gat and Tzur, 1968; Allen et al., 2015). It has become apparent that changes to the isotopic composition of water routing through vegetation canopies can have significant effects on the resulting isotopic characteristics of TF and StF at a range of spatial and temporal scales (e.g. Cappa et al., 2003; Liu et al., 2008). For example, Brodersen et al. (2000) showed that the δ18O compositions of TF and GR were significantly different, with isotopic fractionation in the canopy generating enriched TF (Saxena, 1986; Dewalle and Swistock, 1994). Exchange with water vapour and time-variant mixing along different canopy flow paths in different precipitation events can also alter the isotopic composition of TF and StF, with the effects usually being most marked in StF, which often has a longer contact time with vegetation surfaces (Saxena, 1986; Brodersen et al., 2000). Liu et al. (2008) also concluded that the isotopic composition of TF can be altered by canopy structure and ongoing evaporation processes. They found a high correlation between canopy structure and evaporation during low-intensity rainfall events.
However, other experiments in several forest stands failed to find any relationship between the enrichment of isotopes in TF and I rates (e.g. Allen et al., 2014; Hsuch et al., 2006). It is clear that many of the basic mechanisms and drivers of vegetation-influenced isotopic transformations remain poorly understood (Allen et al., 2017). Also, the extent of such processes in different ecosystems with contrasting hydroclimatic regimes is largely unknown (Tetzlaff et al., 2015). This is an important research gap, as the isotopic composition of GR is often used in catchment travel time assessments (Stockinger et al., 2015) or tracer-aided runoff models (Soulsby et al., 2015). If canopy fractionation and other transformative processes are significant, and the isotopic composition of NP is significantly different from that of GR, then the results of such modelling may be biased and misleading (e.g. Sprenger et al., 2016; Sprenger et al., 2017).

Here, we present results from a study in a headwater catchment in a wet, low-energy environment in the Scottish Highlands, where we investigated the effects of contrasting dominant vegetation covers on canopy partitioning and the implications for the isotopic composition of NP in TF and StF. In addition to climate-induced changes, forest cover is increasing in many parts of Scotland as a result of the restoration of de-forested moorlands as part of conservation schemes and demands for increased biofuel production (Scottish Forest Strategy, 2006).
A move to increased forest cover has the potential to significantly increase I losses, reduce NP, and affect the spatial and temporal distribution of soil moisture, groundwater recharge and runoff generation, as well as their associated isotopic composition (Haria and Price, 2000; Tetzlaff et al., 2013; Capell et al., 2013). Our specific objectives were to (i) quantify the influence of canopy cover of contrasting vegetation types on I losses and the partitioning of TF and StF; (ii) characterise the temporal dynamics of I and precipitation partitioning; and (iii) assess the associated influence of vegetation on spatio-temporal dynamics in the isotopic composition of TF and StF in a low-energy environment.

2. Study Site

The study was conducted in the Bruntland Burn (Figure 1), a 3.2 km² tributary of the larger Girnock catchment (31 km²) and a long-term monitoring site for Atlantic salmon in the Cairngorms National Park in NE Scotland (Soulsby et al., 2016). The climate is maritime at the temperate/boreal transition, with mean annual temperatures of around 6.8 °C and January and July daily averages of 1.2 °C and 12.4 °C, respectively. The mean annual precipitation (P) is about 1000 mm. Most P falls in low-intensity frontal events, with ~50% and ~75% falling in events of <10 mm and <20 mm, respectively (Tetzlaff et al., 2014). Mean annual potential evapotranspiration (PE) and runoff are about 400 mm and 700 mm, respectively. Most evaporation is focused in the relatively short summer period between May and August (Wang et al., 2017). In general, the catchment has limited influence of snow (<5% of annual GR). The highest P is most likely to occur between November and February, but large events can happen throughout the year.
The catchment's elevation ranges between 248 and 539 m, with an average of 350 m. Mean slopes are 13° (Birkel et al., 2011; Tetzlaff et al., 2014). The area has been repeatedly glaciated; following the last glacial period, altitudes below 400 m are characterised by drift-draped topography, with poorly sorted till deposits being dominant (Soulsby et al., 2016). Freely draining podzols are the main soils and are found on the steeper slopes, covering around 60% of the catchment (Birkel et al., 2011; Tetzlaff et al., 2014; Dick et al., 2015) and mainly facilitating groundwater recharge. The most extensive vegetation is heather (Calluna vulgaris and Erica tetralix) moorland over the podzol soils. The heather is around 0.3 m high, with extensive (up to 80%) canopy coverage. Also on the podzols are some stands of native Scots pine (Pinus sylvestris) and birch (Betula pendula) that, due to historic land management, only remain in small areas of the catchment on inaccessible slopes or behind fenced enclosures (Birkel et al., 2011; Tetzlaff et al., 2014). Past land management resulted in tree clearance in many headwater catchments in the UK. This was carried out to create more productive environments for rearing sheep (Ovis aries), red deer (Cervus elaphus) or red grouse (Lagopus lagopus scotica), the latter two for hunting (Dick et al., 2015). As a consequence, forest coverage for the entire catchment is <15%.

3. Data and Methods

Field sampling was conducted over a four-month summer period between 1st June and 24th September 2015. This encompassed the main summer months in the Scottish Highlands, when evaporation is highest.
The plots were close (<750 m) to an automatic weather station (Figure 1) that recorded standard variables at 5-minute intervals (Wang et al., 2017). Four study plots were installed on south- and north-facing slopes, with two sites for each aspect. On each slope, one plot was in heather shrub vegetation (predominantly Calluna spp. and Erica tetralix spp.) and one in a Scots pine plantation (Figure 1). In general, the trees were older (ca 50 years), taller (ca 15 m) and of greater trunk diameter at the south-facing forest (SF), with this site being more homogeneous overall compared to the north-facing forest (NF) (Table 1). At the NF site, tree characteristics varied from very young (ca 20 years old), small (<5 m tall) Scots pines to a few older trees (ca 50 years) with higher canopy coverage and larger diameter at breast height (DBH). Median canopy coverage, estimated from digital photography (see below), ranged from 67% at the NF to 69% at the SF, though there was marked variability at each site for individual TF collectors. For the heather, the vegetation canopy was more regular and typically 0.2-0.3 m high, though gaps appeared between plants or where the canopy of older plants had a more open structure. Median canopy coverage (from the digital photography) ranged between 62% for the south-facing heather (SH) site and 65% for the north-facing heather (NH) site (Table 1). The heather sites were in stands where Calluna vulgaris is the dominant species, with a canopy height of around 0.3 m.

The sites were equipped with a total of 75 TF and 10 StF collectors. The TF collectors comprised a lower base that was secured to the ground and an inner cylinder with a measuring scale, which collected water from an open funnel with a 78.5 cm² orifice. To prevent leaves and litter plugging the entry of the collector, a plastic mesh was fixed to the base of the funnel. The TF collectors were randomly located in 20 x 20 m grids at each site. These grids were sub-divided into 25 4 x 4 m sub-grids, giving a stratified random sample to capture variability in canopy cover. 25 TF collectors were used in each of the forest sites to account for their greater spatial variability compared to the more uniform heather sites, which had 10 collectors at each site. The percentage canopy cover at each collector, determined by digital photography, varied between 0% and 81%, though the median values for each site were similar (ranging between 62-69%, Table 1). For further comparison, one open collector was also located next to each plot and at the weather station in the valley bottom, to capture potential differences in the amount of GR. There was no significant difference between the GR sampled by these collectors and the weather station rain gauge (one-way ANOVA, p=0.59), so GR sums and maximum intensity over each sampling period could be calculated. All TF and StF samples were protected with a small aliquot of paraffin, shown in previous tests to prevent fractionation. As part of longer-term monitoring in the Girnock, daily precipitation samples were also collected for isotope analysis and could be used to complement the coarser sampling of TF and StF in this study (Soulsby et al., 2015).

For the StF measurements and sampling, 10 trees of different height, DBH and species (birch and Scots pine) were selected, 5 in each forest plot. The DBH of all the trees in a grid was measured to estimate the tree basal area (BA) in the grid.
Following Reynolds and Stevens (1987), a 30-40 cm section at approximately 1.50 m height above the ground and free of whorls and branches was cleaned of loose debris and moss, whilst avoiding damage to the tree's bark. A flexible PVC tube with a diameter of 15 mm was wrapped around this area in a single spiral to cover the circumference of the tree. The tube was secured using four to five plastic pipe clips. The gap between the tree's bark and the tubing was sealed with silicone, and a silicone "lip" at the edge of the tube was created to form a gutter to collect StF. Small holes cut into the tube every 10-20 cm allowed the StF to drain to a plastic container with a capacity of 15 l. Again, paraffin was added to prevent fractionation. Given time and resource constraints, it was considered impractical to assess StF down individual heather plants; thus, we recognised our measurement of NP by TF would be a conservative under-estimate. Intervals between sample collection ranged from 4 to 14 days (depending on logistical constraints and the occurrence of precipitation events), generating a data set of 16 sampling occasions. For each of the 75 TF collectors, the amount of TF was determined and an isotope sample was taken. Isotope samples were transferred into 8 ml glass vials with a silicone seal to prevent evaporation. The StF in the 10 collectors was measured (if present) and sampled on the same dates as the TF. To determine the amount for each plot, the volume of water in each collector was measured and normalised by dividing by the tree basal area, with the average for the 5 trees being scaled up to the total basal area. An isotope sample was also taken for StF. Samples were stored in a fridge until analysis.
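The stemflow up-scaling described above can be sketched as follows. This is only an illustration of the normalisation and scaling steps; all volumes, basal areas and the 20 x 20 m plot size used here are hypothetical stand-ins, not measured values:

```python
# Sketch of the StF up-scaling: per-tree StF volumes are normalised by each
# tree's basal area, averaged over the sampled trees, scaled to the plot's
# total basal area and expressed as a depth over a 20 x 20 m grid.
# All numbers below are hypothetical illustrations.

PLOT_AREA_M2 = 20.0 * 20.0  # one study grid

def stemflow_depth_mm(volumes_l, basal_areas_m2, total_basal_area_m2,
                      plot_area_m2=PLOT_AREA_M2):
    """Scale sampled-tree StF volumes [l] to a plot-average depth [mm]."""
    # volume per unit basal area for each sampled tree [m^3 / m^2 = m]
    yields_m = [(v / 1000.0) / ba for v, ba in zip(volumes_l, basal_areas_m2)]
    mean_yield_m = sum(yields_m) / len(yields_m)
    plot_volume_m3 = mean_yield_m * total_basal_area_m2
    return plot_volume_m3 / plot_area_m2 * 1000.0  # m -> mm

# five sampled trees, hypothetical volumes and basal areas
vols = [2.0, 1.5, 3.0, 0.5, 1.0]       # litres per sampling period
bas = [0.05, 0.04, 0.07, 0.02, 0.03]   # m^2 (DBH-derived)
print(round(stemflow_depth_mm(vols, bas, total_basal_area_m2=1.2), 3))
```

Expressing StF as a depth over the plot area makes it directly additive with TF when computing NP.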
Samples were filtered and 1 ml was injected into 1.5 ml glass vials in accordance with the procedures detailed in Los Gatos Research Inc. (2010). The samples were analysed for the stable isotopes of water δ2H and δ18O using an off-axis integrated cavity laser spectrometer (a Los Gatos DLT-100 Liquid Water Isotope Analyser). In the post-analysis, the sample ratios of δ18O and δ2H were derived by calibration against known standards relative to the Vienna Standard Mean Ocean Water (VSMOW; Green et al., 2015). The precision of the isotope analysis is reported by the manufacturer to be ±0.1 ‰ for δ18O and ±0.4 ‰ for δ2H, and this corresponded to our own assessments. I loss and NP were calculated for all sites. For the forest sites, I loss was calculated using the following equation (Helvey and Patric, 1965; Crockford and Richardson, 2000):

I = GR - (TF + StF) (Eqn 1)

where I is interception loss [mm], GR is gross precipitation [mm], TF is throughfall [mm] and StF is stemflow [mm]. NP is the sum of TF and StF; therefore, I equals GR minus NP (Crockford and Richardson, 2000). For the heather sites, as StF was ignored, estimates of I were likely overestimates. Potential evapotranspiration was calculated using the Penman-Monteith approach, which requires the air temperature, radiation, air humidity and wind speed that were recorded at 5-minute intervals at the meteorological station (Figure 1).
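The water balance partitioning in Eqn 1 can be sketched directly. The GR, TF and StF depths below are hypothetical values for one sampling period, not measurements from the study:

```python
# A minimal sketch of Eqn 1: I = GR - (TF + StF), with NP = TF + StF.
# All depths are hypothetical, in mm per sampling period.

def interception_loss_mm(gr_mm, tf_mm, stf_mm=0.0):
    """Interception loss I [mm] for one sampling period (Eqn 1).

    stf_mm defaults to 0 for the heather plots, where StF was not measured
    (so I is likely overestimated there, as noted in the text).
    """
    net_precip = tf_mm + stf_mm  # NP, the water reaching the ground
    return gr_mm - net_precip

def interception_fraction(gr_mm, tf_mm, stf_mm=0.0):
    """I expressed as a fraction of gross rainfall."""
    return interception_loss_mm(gr_mm, tf_mm, stf_mm) / gr_mm

# e.g. 20 mm GR, 10 mm TF, 0.3 mm StF -> 9.7 mm intercepted (48.5% of GR)
print(interception_loss_mm(20.0, 10.0, 0.3))
print(interception_fraction(20.0, 10.0, 0.3))
```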
For isotopes, the local meteoric water line (LMWL) was determined using a least-squares regression on all GR isotope values sampled during the field season (Dansgaard, 1964; Landwehr and Coplen, 2006):

δ2H = 7.6275 * δ18O + 2.0779 (Eqn 3)

In addition, we also calculated the line-conditioned excess (lc-excess [‰]) to describe the offset of a sample from the LMWL, and thus an indicator of possible non-equilibrium fractionation. It is defined as (Landwehr and Coplen, 2006):

lc-excess = δ2H - a * δ18O - b (Eqn 4)

where δ2H and δ18O are the measured values of the sample [‰] and a and b are the slope and intercept of the LMWL, which (combined with Eqn 3) gives Eqn 5 to calculate the lc-excess for the TF, StF and open test collectors at all sites:

lc-excess = δ2H - 7.6275 * δ18O - 2.0779 (Eqn 5)

The precision of the isotope analysis resulted in a precision of the lc-excess of ±1.16 ‰. To derive one (weighted and representative) lc-excess value for TF, StF and the open test collectors at each sample date, volume-weighted δ2H and δ18O values were used to calculate the lc-excess for each site. δ2H and δ18O were weighted using the TF/StF volume of the related collector. It should be noted, however, that the use of the LMWL in the lc-excess calculations means that deviations for individual events may not be captured, as specific event water lines may differ. A general characterization of the study site was undertaken using the program ArcGIS version 10.3.1 (Figure 1). A slope map of the catchment was produced using a high-resolution Digital Elevation Model.
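The lc-excess calculation (Eqns 4-5) combined with the volume weighting described above can be sketched as follows. The LMWL slope and intercept are those fitted for the site (Eqn 3); the collector volumes and isotope values are hypothetical illustrations:

```python
# Sketch of Eqns 4-5 with volume weighting across collectors.
# LMWL coefficients are the fitted values from Eqn 3; sample data are
# hypothetical.

LMWL_SLOPE = 7.6275
LMWL_INTERCEPT = 2.0779

def lc_excess(d2h, d18o, a=LMWL_SLOPE, b=LMWL_INTERCEPT):
    """Line-conditioned excess [permil]: offset of a sample from the LMWL."""
    return d2h - a * d18o - b

def volume_weighted_mean(values, volumes):
    """Volume-weighted mean of isotope values across collectors."""
    return sum(v * w for v, w in zip(values, volumes)) / sum(volumes)

# three hypothetical TF collectors on one sampling date
d2h = [-60.0, -58.5, -62.0]   # permil VSMOW
d18o = [-8.2, -8.0, -8.5]
vols = [120.0, 80.0, 150.0]   # ml

d2h_w = volume_weighted_mean(d2h, vols)
d18o_w = volume_weighted_mean(d18o, vols)
print(round(lc_excess(d2h_w, d18o_w), 2))
```

A value near zero indicates a sample lying on the LMWL; increasingly negative values suggest evaporative (non-equilibrium) fractionation.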
Canopy coverage [%] - defined as the fraction of soil that is covered by the vegetation's canopy - was calculated for the whole catchment as well as for the two forest grids using airborne Light Detection and Ranging (LiDAR) 1 x 1 m data. In addition, digital photography and the free software CAN-EYE V6.1, developed at the EMMAH laboratory (Mediterranean Environment and Agro-Hydro System Modelisation), were used to calculate the canopy coverage for each collector individually (CAN-EYE user manual). Using hemispherical images, it was necessary to integrate the cover fraction over a range of zenith angles, as it was impossible to maintain an exact nadir direction. This range, as well as the lens properties, can be changed prior to analysing the photos. Light contamination due to direct sunlight was corrected in the program. Correlations between variables were described by the Pearson correlation (r) for normally distributed data and by the Spearman rank correlation (ρ) if the Shapiro-Wilk test for normality revealed that the data were not drawn from a normal distribution. Wilcoxon signed rank tests were used to test for differences between data sets (with p < 0.05 for significance).

4. Results

4.1 Variability in hydroclimate

Hydroclimatic conditions were highly variable during the summer of 2015, which was cooler than average, with periods of daytime maximum temperatures substantially >20°C restricted to a few days in early June and late June/early July (Figure 2). Mean daily air temperatures varied between 7.0 and 15.4 °C with marked diurnal ranges (Figure 2b). Mean relative humidity was 77.2%, with the lowest values during daytime in the warm early summer periods, and the highest in mid- to late August and September (Figure 2c).
Mean wind speeds were moderate (2.7 m s-1) with peaks of 7.5 to 8 m s-1 during some sampling periods (Figure 2d). Precipitation totalled 270 mm for the four-month period; the highest daily total was 26.2 mm on 17th July, and intensities reached up to 4 mm h-1 in the same July period. Around 30% of the precipitation fell in low intensity events below 5 mm d-1 and around 60% on days with <10 mm d-1 (Figure 2e). During most of the individual sampling periods (i.e. the time between emptying the collectors), a GR sum of between 5 mm and 20 mm was typically recorded. The highest measured sum of GR was 74.8 mm over a 14-day sampling period in July; the lowest was 0.8 mm over a 7-day period in September. Total potential evapotranspiration (PET) was 216 mm for the whole four-month sampling period (with a daily mean of 2.2 mm).

4.2 Variability in interception, throughfall and stemflow

The dominant pathways of GR at all sites were either TF or I evaporated back to the atmosphere (Table 2). I losses were generally higher in the two forest plots (46% average) than in the two heather plots (34% average) (Figure 3). A Wilcoxon signed rank test showed that TF was always less than GR, and inter-site differences between the two vegetation types were significant (p<0.05) on seven of the sampling dates, with TF in Scots pine being lower on all but one of the sampling dates (9th June) (Figure 3a). Regarding temporal variability, TF exhibited a strong positive and exponential correlation with GR at each site for each sampling period, with the relationship slightly stronger for the forest sites.
Conversely, percentage I losses were negatively correlated with GR amount, with the percentage loss much higher in smaller events (Table 2, Figure 4a). Similar, though generally weaker, negative relationships were evident between I and maximum rainfall intensity in the sample period (Figure 4b). When the maximum GR intensity reached about 1.5 mm h-1, the I loss usually fell below 70% of GR. Correlations with other climatic factors were much weaker or insignificant. To understand intra-site differences within the forest plots, Figure 5 shows the mean weighted amount of TF. At both sites, the highest TF tended to be measured in collectors located in canopy gaps and at greater distances from the trees. Conversely, TF amounts were low in areas with very dense canopy cover and in close proximity to a tree's stem, especially at the NF (Figure 5a). Distance to the closest tree stem was correlated with TF amount (p-values for NF ≤0.001; SF ≤0.05); on average, TF increased by 8.8% per metre from each tree. The I loss was significantly correlated with canopy cover at both sites (ρ = 0.80 for NF, ρ = 0.62 for SF). Lower intra-site variability was found at the SF site (Figure 5b), as trees there were more evenly spread over the grid with less variability in canopy cover and tree size distribution. At the SF site, the weighted mean TF as a fraction of GR per collector over all sample dates was 58.2%, with a minimum and maximum of 17.7% and 75.4% for individual collectors, respectively.
The mean TF of all collectors - as a fraction of the sum of GR - was 51.8% for the NF site, with a minimum of 8.4% for the collectors under the densest canopy and a maximum of 79.6% for a collector where the canopy had a more open structure. StF was a small fraction of GR at the forest sites, accounting for 1.6% and 1.0% of incoming precipitation at the NF and SF, respectively. StF amounts were highly variable over the sampling periods and were most strongly correlated with GR (r = 0.76, p<0.001) at both sites. There were three sampling occasions for the NF and four for the SF where no StF was generated. For the heather plots, TF accounted for 60% and 67% of incoming precipitation at the NH and SH sites, respectively. Again, total GR input exhibited the strongest correlation with TF amount in each sampling period (Figure 4). Intra-site variability of TF amount was marked, though usually less than for the forests (cf. the generally more restricted ranges in Figure 3). There was a negative relationship between canopy cover and the amount of TF for both sites, though this was only significant (ρ = -0.74, p = 0.01) for the NH site.

4.3 Vegetation influences on the isotopic composition of net precipitation

The isotopic composition of precipitation was temporally variable, with δ2H ranging between -115.0‰ and -14.0‰ in GR and δ18O between -15.2‰ and 0.9‰ (Figure 6 and Table 3). Isotopic signatures were most depleted during the first sampling periods (early June) when mean air temperatures were lowest (~7°C), reflecting the influence of rainfall brought by northerly air streams. Thereafter, prevailing winds had a stronger south-westerly component, bringing rainfall more enriched in heavier isotopes, as is typical for summer.
The isotopic compositions of GR and TF under both land covers were strongly correlated (for δ2H and δ18O, r was >0.85 and >0.90 for all sites, respectively; p<0.001). Air temperature was also positively correlated with both isotopes in GR (r = 0.76, p<0.001) and in TF under both land use types (r = 0.58, p=0.03 for heather; r = 0.59, p=0.02 for forest). Volume-weighted isotopic compositions in TF samples at all sites were similar to the weighted isotopic composition of GR, though overall TF under the forests was slightly more depleted than for moorland (Table 4). The overall TF-GR differences were -0.52 ‰ for δ2H and -0.14 ‰ for δ18O for forests, and 0.29 ‰ for δ2H and -0.04 ‰ for δ18O for heather moorland. These small differences between the land cover types were significant (p<0.05) on 10 sampling occasions, and significant differences from GR were far more common under forests (Figure 6). To assess any offset of the TF and StF isotopic signatures from the LMWL, lc-excess was calculated as an indicator of evaporative fractionation (Figure 6b). Gross precipitation/rainfall lc-excess values were close to zero for much of the study period, with more negative values (indicative of fractionation and moisture recycling) in early June, early July and mid-August. The lc-excess in TF also varied throughout the study period but mostly followed the lc-excess patterns in rainfall (GR/open collectors) for both land cover types. Significant differences (p<0.05) between land cover types only occurred on four occasions, whilst the lc-excess for forest sites was significantly different from that of GR on four occasions; heather-GR differences were only evident once.
The isotope values of all the TF and StF samples (>1100) for each sampling period were plotted in dual isotope space to explore the potential effects of evaporative fractionation and other processes in modifying the composition of NP beneath the two vegetation canopies (Figure 7). The majority of TF samples from the forest and heather moorland sites scattered along the LMWL and occupied overlapping space, with the forest samples usually, but not always, being more variable. Despite the overall small longer-term differences in the composition of TF compared to GR, there could be surprising variation between individual collectors for specific sampling events. The position of samples for both land covers relative to the LMWL, and the degree of offset, varied over the course of the season. For example, some dates showed clear evidence of evaporative fractionation (e.g. 9th June, 2nd July and 6th and 13th August). Other dates, like 6th July and 24th September, had a large number of samples plotting above the LMWL. This, together with the variable scatter on individual sampling dates, indicates potential subtle effects of event-based variability in the MWL, event-specific canopy pathways and the possibility of canopy moisture exchange between air and plants. However, in general, the isotopic variability at the two plots, indexed by the standard deviation of δ2H (same for δ18O - not shown), was negatively correlated with maximum rainfall intensity (more than with rainfall amount), with this relationship strongest in the forest (Figure 8). The coefficient of variation (not shown) showed a similar relationship with rainfall amount and rainfall intensity.
The StF samples likewise generally plotted along the LMWL, but were usually more enriched than GR and TF (Figure 7). The isotopic composition of StF in the forest plots was also variable over time, mostly reflecting the variability of the precipitation input signature (Figure 9; r = 0.92 for δ2H and r = 0.96 for δ18O). A notable deviation was on 6th July, when samples from several collectors plotted above the LMWL, and again on 30th July, when StF was isotopically lighter in several samples. These deviations might be explained by vapour-liquid moisture exchange effects, such as condensation on the canopy, or activation of selective canopy flow paths delivering water at specific stages of events where intra-event isotope changes in rainfall may have occurred. Notwithstanding such subtle sample-specific differences, overall the spatial variability in the isotopic signatures of TF was limited at all sites. Further, in the forest sites, despite the spatial variation in TF volumes, the isotopic variability between samplers was not marked (Figure 10). There was no distinct pattern of more depleted, i.e. more negative, δ2H values (larger circles) vs. enriched values (smaller circles), and much of the intra-site variability was small. There was no statistically significant correlation with canopy cover or distance from the nearest tree at either site. The heather sites showed similarly limited variability.

5. Discussion

5.1 Differences in interception and throughfall between forest and shrub cover

Interception losses for all sites were quite high, averaging up to 35% for heather and 46% for Scots Pine.
However, these lie within previously reported ranges for pine trees and heather (Calder, 1990; Haria and Price, 2000; Llorens and Domingo, 2007). For example, for another Scottish site in a wetter area in the western Cairngorms, Haria and Price (2000) found that I losses under Scots Pine were 37% of precipitation but only 13% for heather. This suggests that we may be significantly over-estimating heather losses by not accounting for stemflow. It should also be stressed that, as our study was carried out over the summer, the overall annual percentage I losses are likely to be smaller due to the lower atmospheric moisture demand and higher precipitation in winter. In comparison with studies elsewhere, I losses for a Scots pine forest were found to lie between 13% and 49% (Llorens et al., 1997), and similar coniferous trees with comparable interception capacity and branch architecture gave values of 40-41% for a 130- to 170-year-old spruce stand in the Black Forest in Germany (Brodersen et al., 2000), with similar figures for a mixed spruce forest in the Eifel national park in Germany (Stockinger et al., 2015). For canopy partitioning in other boreal forests, Ilvesniemi et al. (2010), working on Scots pine in Finland, found that the proportion of TF was ~67% of annual precipitation, ~33% was lost as I and, as in our study, StF was small (<1%). However, in dense plantations, Cape et al. (1991) showed StF could be as high as 15% in the UK, but TF still dominated, accounting for 51-78% of annual precipitation.
The main reason for the high summer I losses at the study site under both forest and heather is probably that much of the precipitation in this part of northern Scotland falls in small, frequent, low intensity events, which can interact with the high canopy interception storage (~1-3 mm) of both vegetation types (Haria and Price, 2000). The lowest percentage I losses were found in weeks with higher GR amounts and intensities. During weeks with the lowest intensity events and low GR totals, I losses were as high as 95-100%, whereas for weeks with intensities >7 mm h-1, losses ranged between 22% and 45% of GR. Scatena (1990) also observed that high forest I losses were attributable to rain events with very low intensities (≤2 mm h-1) and small storm totals, as such events are similar to canopy storage volumes, allowing evaporation to occur. In our study, wind speeds ranged from 2 to 8 m s-1; these moderate to fresh winds (Smyth et al., 2013) help facilitate evaporation from the canopy by turbulent transfer, especially at the forest sites where the canopy is well-ventilated (Herwitz and Slye, 1995).

5.2 Intra-site variation in throughfall

TF fractions were higher for the heather sites compared to the forests, though the intra-site spatial variability was greatest in the forest plots. The lowest TF fractions were found at the younger NF (Table 2). Canopy coverages for the collectors, derived from digital photography, showed that these contrasts could be partly explained by differences in canopy cover at the two forest sites. The NF site was characteristic of a young Scots Pine stand, with high canopy density, particularly in the middle part of the plot.
Usually, mature Scots pine is characterized by a low density and open canopy (Hall et al., 2001), as at the SF site. Tree sizes, age, crown density and canopy coverage of the collectors varied at the NF site, as shown in Figure 5, whereas all of those attributes were fairly uniform at the SF (reflected in the low standard deviation of canopy cover; Table 1). Statistically significant correlations were found between I values for the two forest sites and both canopy coverage and distance to the closest tree. Similar relationships have been shown by previous studies (e.g. Stout and MacMahon, 1961; Helvey and Patric, 1965; Aussenac, 1970; Johnson, 1990). The effect was most marked during small precipitation events, when the rainfall amount is similar to the canopy interception capacity; TF amount and variability are then greatest, as GR will be intercepted by the canopy and most NP will reach the ground through canopy gaps. As GR increases and the canopy storage becomes increasingly saturated, each additional increment of rainfall generates TF, reducing the spatial variability (Loustau et al., 1992). Influences of canopy coverage and distance to the nearest tree stem on TF were more marked at the NF, as the vegetation cover is more heterogeneous there. For the two heather sites, TF was more evenly distributed and lacked a consistent relationship with canopy cover. Our study also showed that a GR threshold of approximately 15 mm was needed for all of the TF collectors in the forest sites to collect a sample. This partly reflects the high interception capacity above some collectors in the forest plots with high canopy coverage. Marin et al.
(2000) also found high variability due to a suite of interacting controlling factors, particularly the local canopy density and architecture, which regulate the redistribution of incident precipitation and the associated flow paths. Despite this, as shown by other studies (e.g. Marin et al., 2000; Peng et al., 2014; Stockinger et al., 2015), a strong positive correlation between TF volume and the total amount of GR was found at all sample sites and collectors. However, other studies of Scots Pine have shown contrasting conditions. For example, Kowalska et al. (2016) found that for Scots pine, the amount of TF water did not show a significant relationship with LAI, canopy cover or the distance to the nearest tree trunk. Rather, the canopy partitioning of water was strongly modified by the irregular structure of the crowns and the irregular distribution of the trees in the stand, as well as by weather conditions.

5.3 The role of stemflow

As noted above, StF comprised a relatively small proportion of the NP at the two forest sites. To generate StF, canopy and bark water storage have to be filled and rainfall amounts have to be sufficient to wet the tree stem enough for continuous flow to reach the ground (Allen et al., 2017). Different GR amounts and intensities have been shown to trigger StF for different trees (>7 mm for beech and >10 mm for Scots pine); independent of species, the threshold above which StF generation did not increase with higher amounts of GR was ~25 mm. Mean and median StF were mainly dependent on maximum rainfall. Xiao et al. (2000) and André et al. (2008) found that higher wind speeds increased StF production as they reduced the initiation threshold by blowing canopy-stored water onto the tree trunk; however, no direct correlation was found here. Similarly, no correlation between DBH and StF amount was evident.
A more detailed investigation of the vegetation cover (such as crown area, branch architecture and bark composition) would be needed to improve the prediction of StF generation. However, previous studies (e.g. Ford and Deans, 1978; Loustau et al., 1992) have shown that the interactions of vegetation-related controls on StF and hydroclimatic drivers are difficult to disentangle when explaining StF amounts and variations between individual trees. Despite its minor overall importance, in larger events StF can still provide a large localised flux of water into the soil, and its role in influencing spatial variations in soil moisture and groundwater recharge is poorly understood.

5.4 Influence of vegetation cover on spatio-temporal dynamics in isotopic compositions

Temporal variability in the isotopic TF signature was found to be mostly driven by the variability in the incoming GR isotopic composition. In contrast to the strong influence of canopy cover on the volume and re-distribution of NP, the overall associated effects on the isotopic composition of TF and StF were small, though some subtle effects were detected for individual sample events. At all four sites, the average composition of TF changed little compared to GR, though the overall effects were greater under the forests. Other studies have found much stronger influences of canopy cover on the isotopic signature of NP (e.g. Ikawa et al., 2011; Stockinger et al., 2015). There was no statistical relationship between TF isotopic composition and GR amount or maximum intensity. Similarly low correlations have also been observed by Allen et al. (2015) for a Douglas-fir dominated catchment in northern Oregon and by Kato et al. (2013) for a cypress plantation in eastern Japan.
However, the variability of the TF isotopic signal tends to be higher for lower GR amounts and intensities (Figure 8). The lc-excess of GR and TF exhibited highly correlated temporal variability at all sites, fluctuating between values close to zero and more negative values. For some samples in each month, the effects of moisture recycling and evaporative fractionation on the GR isotopic composition may have been evident in the observed negative lc-excess values. Alteration of the GR isotopic composition during its passage through the canopy can result from evaporative losses, mixing processes along selective flow paths and isotopic exchange with atmospheric water vapour (Saxena, 1986; Tsujimura and Tanaka, 1998). These exchange processes can potentially lead to both depletion and enrichment of isotopic signatures depending on ambient conditions. Such effects were small overall, though they were evident during specific sample periods. Compared to the temporal variability in TF isotopic composition, spatial variability between and within the forest and heather moorland sites was found to be small, similar to other studies (Brodersen et al., 2000). The TF isotopic variability in the forests was found to be slightly higher compared to the heather sites, which are characterised by a more uniform canopy cover. Differences in isotopic composition within a site could be marked at the scale of individual sampling periods. Likely explanations relate to event-scale conditions, such as the changing isotopic composition of rainfall relative to the spatial and temporal activation of flow paths through the canopy, and moisture exchange with the atmosphere (e.g.
evaporative fractionation and condensation) (Brodersen et al., 2000; Kato et al., 2013; Allen et al., 2014). However, more intense sampling during and after events would be needed to test such hypotheses. For example, Kato et al. (2013) also found that in tree canopies there may be a forest edge effect, where TF nearer the edge - where the canopy is more ventilated - might be changed isotopically as a result of higher evaporative losses. In contrast, collectors below less dense canopies might gather more direct TF and can therefore show a similar isotopic composition to precipitation (Kato et al., 2013). StF isotopic compositions were more markedly different from those of TF in the two forest sites. This probably reflects the longer contact time of StF and the greater opportunities for fractionation or moisture exchange with the atmosphere (Ikawa et al., 2011). StF is also more likely affected by the activation of flow paths later in events, with mixing processes and bark storage delaying StF initiation depending on prevailing meteorological conditions (e.g. rainfall intensity, wind speed etc.) (Levia and Herwitz, 2005; Staelens et al., 2008; Ikawa et al., 2011). However, event-based sampling would be needed to explore this further. Our study demonstrated the importance of vegetation characteristics such as canopy coverage, age, height, density and DBH on TF amounts in a northern landscape, but their lesser influence on isotopic composition. Given the limited changes in the isotopic composition of NP below the forest and heather canopies, it is reasonable to use GR isotope values in travel time estimates and tracer-aided runoff models in this geographical region, though correcting for canopy effects is possible.
There is little evidence to suggest that this will cause a significant additional source of uncertainty in such models, but there may be some more unusual events where correction could help. More broadly, a better understanding of rainfall partitioning processes through canopy cover is important, especially in relation to likely future vegetation changes driven by a changing climate (Tetzlaff et al., 2013). Many northern landscapes are already experiencing and responding to such shifts (Tetzlaff et al., 2014). Climate projections for Scotland predict longer dry and warm periods and shifting precipitation patterns (Capell et al., 2013). This, combined with large-scale afforestation plans by the Scottish government (The Scottish Forestry Strategy, 2006), is likely to affect vegetation distribution and community composition, as well as the linked flow path partitioning and water balances. This increases the importance of understanding vegetation influences on water partitioning, storage dynamics and water availability in higher latitude catchments (Geris et al., 2015).

6. Conclusions

We analysed TF and StF with respect to their spatial and temporal variability in quantity and isotopic composition under forest (Scots Pine) and shrub (heather) cover during a four-month summer period in a northern landscape. We identified large temporal differences in TF, which were mainly governed by the size and intensity of rainfall events. Interception was highest under forest cover (46%), where TF exhibited marked spatial variability in relation to canopy density and structure. I losses under heather were probably <35%, with TF more uniformly distributed.
The contribution of StF to the overall water balance was found to be relatively small (~1% of GR). StF correlated most strongly with maximum precipitation intensity. The I losses were in the upper range of reported literature values for coniferous forest stands. This probably reflects the local hydroclimate, where most rainfall events are frequent and small (<5 mm) with low intensity, and high wind speeds are common. Overall, the isotopic composition of NP was not markedly affected by canopy interactions; temporal variation of stable water isotopes in TF closely tracked that of GR, with mean TF-GR differences of -0.52 ‰ for δ2H and -0.14 ‰ for δ18O in forest, and 0.29 ‰ for δ2H and -0.04 ‰ for δ18O in heather. These differences were close to, or within, the analytical precision of isotope determination, though they were statistically significant. Evidence for evaporative fractionation was generally restricted to low rainfall volumes in low intensity events, though at times subtle effects of liquid-vapour moisture exchange and/or selective transmission through canopies were evident. Further event-scale work is needed to elucidate these processes.

Acknowledgements

We thank the European Research Council (ERC; project GA 335910 VEWA) and The Leverhulme Trust (project PLATO, RPG-2014-016) for funding.

REFERENCES

Allen, S.T., Brooks, J.R., Keim, R.F., Bond, B.J., McDonnell, J.J., 2014. The role of pre-event canopy storage in throughfall and stemflow by using isotopic tracers. Ecohydrology, 858-868. http://doi.org/10.1002/eco.1408.

Allen, S.T., Keim, R.F., McDonnell, J.J., 2015.
Spatial patterns of throughfall isotopic composition at the event and seasonal timescales. Journal of Hydrology, 522, 58-66. http://doi.org/10.1016/j.jhydrol.2014.12.029.

Allen, S.T., Keim, R.F., Barnard, H.R., McDonnell, J.J., Brooks, J.R., 2017. The role of stable isotopes in understanding rainfall interception processes: a review. WIREs Water, 4:e1187. doi:10.1002/wat2.1187.

André, F., Jonard, M., Ponette, Q., 2008. Influence of species and rain event characteristics on stemflow volume in a temperate mixed oak-beech stand. Hydrol. Process., 22, 4455-4466.

Aussenac, G., 1970. Action du couvert forestier sur la distribution au sol des précipitations [Effect of forest cover on the distribution of precipitation at the soil surface]. Annales des Sciences Forestières, 27, 383-399.

Bhat, S., Jacobs, J.M., Bryant, M.L., 2011. The chemical composition of rainfall and throughfall in five forest communities: a case study in Fort Benning, Georgia. Water Air Soil Pollut., 218, 323-332.

Birkel, C., Tetzlaff, D., Dunn, S.M., Soulsby, C., 2011. Using lumped conceptual rainfall-runoff models to simulate daily isotope variability with fractionation in a nested mesoscale catchment. Adv. Water Resour., 34(3), 383-394.

Bosch, J.M., Hewlett, J.D., 1982. A review of catchment experiments to determine the effect of vegetation changes on water yield and evapotranspiration. J. Hydrol., 55, 3-23.

Bring, A., Fedorova, I., Dibike, Y., Hinzman, L., Mård, J., Mernild, S.H., Prowse, T., Semenova, O., Stuefer, S.L., Woo, M.K., 2016. Arctic terrestrial hydrology: a synthesis of processes, regional effects, and research challenges. J. Geophys. Res. Biogeosci., 121, 621-649. doi:10.1002/2015JG003131.

Brodersen, C., Pohl, S., Lindenlaub, M., Leibundgut, C., Wilpert, K., 2000. Influence of vegetation structure on isotope content of throughfall and soil water. Hydrol. Processes, 14, 1439-1448.

Calder, I.R., 1990. Evaporation in the Uplands. Wiley and Sons, Chichester.

Calder, I.R., 2005. Blue Revolution: Integrated Land and Water Resource Management. Earthscan, London, 353 pp.
CAN-EYE user manual: Weiss, M., Baret, F., 2014. CAN-EYE V6.313 User Manual. EMMAH, INRA, Science & Impact, 76 pp.

Capell, R., Tetzlaff, D., Soulsby, C., 2013. Will catchment characteristics moderate the projected effects of climate change on flow regimes in the Scottish Highlands? Hydrological Processes, 687-699. http://doi.org/10.1002/hyp.9626.

Cappa, C.D., Hendricks, M.B., DePaolo, D.J., Cohen, R.C., 2003. Isotopic fractionation of water during evaporation. J. Geophys. Res., 108(D16), 4525. doi:10.1029/2003JD003597.

Crockford, R.H., Richardson, D.P., 2000. Partitioning of rainfall into throughfall, stemflow and interception: effect of forest type, ground cover and climate. Hydrological Processes, 2903-2920.

Dansgaard, W., 1964. Stable isotopes in precipitation. Tellus, 16(4), 436-468. doi:10.1111/j.2153-3490.1964.tb00181.x.

DeWalle, D.R., Swistock, B.R., 1994. Differences in Oxygen-18 content of throughfall and rainfall in hardwood and coniferous forests. Hydrological Processes, 8, 75-82. DOI: 10.1029/94WR00758.

Dick, J.J., Tetzlaff, D., Birkel, C., Soulsby, C., 2015. Modelling landscape controls on dissolved organic carbon sources and fluxes to streams. Biogeochemistry, 122(2-3), 361-374.

Ford, E.D., Deans, J.D., 1978. The effect of canopy structure on stemflow, throughfall and interception loss in a young Sitka spruce plantation. J. Appl. Ecol., 15, 905-917.

Gat, J.R., Mook, W.G., Meijer, A.J., 2001. Atmospheric water. In: Mook, W.G. (Ed.), Environmental Isotopes in the Hydrological Cycle: Principles and Applications, Vol. 2. IHP-V, Technical Document No. 39, UNESCO, Paris, 113 pp.

Gat, J.R., Tzur, Y., 1968.
Modification of the isotopic composition of rainwater by processes which occur before groundwater recharge. In: Isotope Hydrology, Proc. Symp. Vienna 1966. International Atomic Energy Agency, pp. 49-60.

Geris, J., Tetzlaff, D., McDonnell, J., Soulsby, C., 2015. The relative role of soil type and tree cover on water storage and transmission in northern headwater catchments. Hydrological Processes.

Green, M.B., Laursen, B.K., Campbell, J.L., McGuire, K.J., Kelsey, E.P., 2015. Stable water isotopes suggest sub-canopy water recycling in a northern forested catchment. Hydrological Processes. DOI: 10.1002/hyp.10706.

Guswa, A., Spence, C., 2012. Effect of throughfall variability on recharge: application to hemlock and deciduous forests in western Massachusetts. Ecohydrology, 5, 563-574.

Hall, J.E., Kirby, K.J., Whitbread, A.M., revised 2004. National Vegetation Classification Field Guide to Woodland. 117 pp., A5 softback. ISBN 1-86107-554-5.

Haria, A., Price, D., 2000. Evaporation from Scots pine (Pinus sylvestris) following natural re-colonisation of the Cairngorm mountains, Scotland. Hydrology and Earth System Sciences, 4, 451-461.

Helvey, J., Patric, J.H., 1965. Canopy and litter interception of rainfall by hardwoods of eastern United States. Water Resources Research, 1(2), 193-206.

Herwitz, S.R., Slye, R.E., 1995. Three-dimensional modeling of canopy tree interception of wind-driven rainfall. J. Hydrol., 168, 205-226.

Hsueh, Y.H., Allen, S.T., Keim, R.F., 2016. Fine-scale spatial variability of throughfall amount and isotopic composition under a hardwood forest canopy. Hydrological Processes.

Ikawa, R., Yamamoto, T., Shimada, J., Shimizu, T., 2011.
Temporal variations of isotopic compositions in gross rainfall, throughfall, and stemflow under a Japanese cedar forest during a typhoon event. Hydrological Research Letters, 5, 32-36. http://doi.org/10.3178/HRL.5.32.

Ingraham, N.L., 1998. Isotopic variations in precipitation. In: Kendall, C., McDonnell, J.J. (Eds.), Isotope Tracers in Catchment Hydrology. Elsevier, Amsterdam, Chapter 3, pp. 87-118.

Johnson, R.C., 1990. The interception, throughfall and stemflow in a forest in highland Scotland and the comparison with other upland forests in the UK. J. Hydrol., 118, 281-287.

Kato, H., Onda, Y., Nanko, K., Gomi, T., Yamanaka, T., 2013. Effect of canopy interception on spatial variability and isotopic composition of throughfall in Japanese cypress plantations. Journal of Hydrology, 504, 1-11. http://doi.org/10.1016/j.jhydrol.2013.09.028.

Keim, R.F., Skaugset, A.E., Weiler, M., 2005. Temporal persistence of spatial patterns in throughfall. J. Hydrol., 314(1), 263-274.

Keim, R.F., Tromp-van Meerveld, H.J., McDonnell, J.J., 2006. A virtual experiment on the effects of evaporation and intensity smoothing by canopy interception on subsurface stormflow generation. J. Hydrol., 327, 352-364.

Kundzewicz, Z.W., Mata, L.J., Arnell, N., Döll, P., Kabat, P., Jiménez, B., Miller, K., Oki, T., Şen, Z., Shiklomanov, I., 2007. Freshwater resources and their management. In: Parry, M.L., Canziani, O.F., Palutikof, J.P., van der Linden, P.J., Hanson, C.E. (Eds.), Climate Change 2007: Impacts, Adaptation and Vulnerability. Contribution of Working Group II to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change. Cambridge University Press, pp. 173-210.

Landwehr, J.M., Coplen, T.B., 2006. Line-conditioned excess: a new method for characterizing stable hydrogen and oxygen isotope ratios in hydrologic systems. In: Isotopes in Environmental Studies, Aquatic Forum 2004. International Atomic Energy Agency, Vienna, Austria, IAEA-CSP-26, pp. 132-135.

Levia, D.F., Frost, E.E., 2003.
A review and evaluation of stemflow literature in the hydrologic and biogeochemical cycles of forested and agricultural ecosystems. Journal of Hydrology, 274(1-4), 1-29.

Levia, D.F., Herwitz, S.R., 2005. Interspecific variation of bark water storage capacity of three deciduous tree species in relation to stemflow yield and solute flux to forest soil. Catena, 64, 117-137.

Levia, D.F., Keim, R.F., Carlyle-Moses, D.E., Frost, E.E., 2011. Throughfall and stemflow in wooded ecosystems. pp. 425-443. http://doi.org/10.1007/978-94-007-1363-5.

Liu, W.J., Liu, W.Y., Li, J.T., Wu, Z.W., Li, H.M., 2008. Isotope variation of throughfall, stemflow and soil water in a tropical rain forest and a rubber plantation in Xishuangbanna, SW China. Hydrol. Res., 39, 5-6.

Llorens, P., Domingo, F., 2007. Rainfall partitioning under Mediterranean conditions. A review of studies in Europe. J. Hydrol., 335, 37-54.

Llorens, P., Poch, R., Latron, J., 1997. Rainfall interception by a Pinus sylvestris forest patch overgrown in a Mediterranean mountainous abandoned area. I. Monitoring design and results down to the event scale. J. Hydrol., 199, 331-345.

Los Gatos Research Inc., 2010. Liquid Water Isotope Analyser. User Manual. Los Gatos Research, Inc., 67 East Evelyn Avenue, Suite 3, Mountain View, CA 94041-1529.

Loustau, D., Berbigier, P., Granier, A., Hadj-Moussa, F.E., 1992. Interception loss, throughfall and stemflow in a maritime pine stand. I. Variability of throughfall and stemflow beneath the pine canopy. Journal of Hydrology, 138, 449-467.

Makoto, T., Tomoe, N., Norio, T., Jun, S., 2000. Stable isotope studies of precipitation and river water in the Lake Biwa basin, Japan. Hydrological Processes, 14, 539-556.

Marin, C.T., Bouten, W., Sevink, J., 2000.
Gross rainfall and its partitioning into throughfall, stemflow and evaporation of intercepted water in four forest ecosystems in western Amazonia. J. Hydrol., 237, 40-57. http://dx.doi.org/10.1016/S0022-1694(00)00301-2.

Menard, C.B., Essery, R., Pomeroy, J., 2014. Modelled sensitivity of the snow regime to topography, shrub fraction and shrub height. Hydrol. Earth Syst. Sci., 18, 2375-2392.

Molina, A.J., del Campo, A.D., 2012. The effects of experimental thinning on throughfall and stemflow: a contribution towards hydrology-oriented silviculture in Aleppo pine plantations. Forest Ecology and Management, 269, 206-213. http://doi.org/10.1016/j.foreco.2011.12.037.

Peng, H.H., Zhao, C.Y., Feng, Z.D., Xu, Z.L., Wang, C., Zhao, Y., 2014. Canopy interception by a spruce forest in the upper reach of Heihe River basin, Northwestern China. Hydrol. Processes, 28(4), 1734-1741.

Rennermalm, A.K., Wood, E.F., Troy, T.J., 2010. Observed changes in pan-arctic cold-season minimum monthly river discharge. Climate Dynamics, 35(6), 923-939. DOI: 10.1088/1748-9326/4/2/024011.

Reynolds, B., Stevens, P.A., 1987. British Geomorphological Research Group Technical Bulletin no. 36, 42-55. Geobooks, Norwich.

Rozanski, K., Araguás-Araguás, L., Gonfiantini, R., 1993. Isotopic patterns in modern global precipitation. In: Swart, P.K., Lohmann, K.L., McKenzie, J., Savin, S. (Eds.), Climate Change in Continental Isotopic Records. Geophysical Monograph 78, American Geophysical Union, Washington, DC, pp. 1-37.

Saxena, R.K., 1986. Estimation of canopy reservoir capacity and Oxygen-18 fractionation in throughfall in a pine forest. Nordic Hydrology, 17, 251-260.

Scatena, F.N., 1990.
Watershed-scale rainfall interception on two forested watersheds in the Luquillo mountains of Puerto Rico. Journal of Hydrology, 113, 89-102.

Sprenger, M., Tetzlaff, D., Soulsby, C. Stable isotopes reveal evaporation dynamics at the soil-plant-atmosphere interface of the critical zone. Hydrology and Earth System Sciences (HESS-D), in review.

Sprenger, M., Leistert, H., Gimbel, K., Weiler, M., 2016. Illuminating hydrological processes at the soil-vegetation-atmosphere interface with water stable isotopes. Reviews of Geophysics, 54(3), 674-704.

Soulsby, C., Reynolds, B., 1994. The chemistry of throughfall, stemflow and soil water beneath broadleaved woodland and moorland vegetation in upland Wales. Chemistry and Ecology, 9, 115-134.

Soulsby, C., Birkel, C., Geris, J., Dick, J., Tunaley, C., Tetzlaff, D., 2015. Stream water age distributions controlled by storage dynamics and nonlinear hydrologic connectivity: modeling with high-resolution isotope data. Water Resources Research, 51(9), 7759-7776.

Soulsby, C., Birkel, C., Tetzlaff, D., 2016. Characterising the age distribution of catchment evaporative losses. Hydrological Processes. DOI: 10.1002/hyp.10751.

Staelens, J., Schrijver, A.D., Verheyen, K., Verhoest, N.E.C., 2006. Spatial variability and temporal stability of throughfall water under a dominant beech (Fagus sylvatica L.) tree in relationship to canopy cover. J. Hydrol., 330, 651-662.

Staelens, J., Schrijver, A.D., Verheyen, K., Verhoest, N.E.C., 2008. Rainfall partitioning into throughfall, stemflow, and interception within a single beech (Fagus sylvatica L.) canopy: influence of foliation, rain event characteristics, and meteorology. Hydrological Processes, 22(1), 33-45. doi:10.1002/hyp.6610.

Stockinger, M.P., Lücke, A., McDonnell, J.J., Diekkrüger, B., Vereecken, H., Bogena, H.R., 2015. Interception effects on stable isotope driven streamwater transit time estimates. Geophysical Research Letters, 42(1), 5299-5308.
http://doi.org/10.1002/2015GL064622.

Stout, B.B., McMahon, R.J., 1961. Throughfall variation under tree crowns. J. Geophys. Res., 66, 1839-1843.

Tetzlaff, D., Soulsby, C., Buttle, J., Capell, R., Carey, S.K., Kruitbos, L., Laudon, H., McDonnell, J., McGuire, K., Seibert, S., Shanley, J., 2013. Catchments on the cusp? Structural and functional change in northern ecohydrological systems. Hydrological Processes, 27, 766-774.

Tetzlaff, D., Birkel, C., Dick, J.J., Geris, J., Soulsby, C., 2014. Storage dynamics in hydropedological units control hillslope connectivity, runoff generation, and the evolution of catchment transit time distributions. Water Resources Research, 50(2), 969-985.

Tetzlaff, D., Buttle, J., Carey, S.K., McGuire, K., Laudon, H., Soulsby, C., 2015. Tracer-based assessment of flow paths, storage and runoff generation in northern catchments: a review. Hydrol. Processes, 3475-3490. http://doi.org/10.1002/hyp.10412.

The Scottish Forestry Strategy, 2006. Forestry Commission Scotland, Edinburgh.

Tsujimura, M., Tanaka, T., 1998. Evaluation of evaporation rate from forested soil surface using stable isotope composition of soil water in a headwater basin. Hydrol. Processes, 12, 2093-2103.

Wang, H., Tetzlaff, D., Dick, J., Soulsby, C., 2017. Assessing the environmental controls on Scots pine transpiration and the implications for water partitioning in a boreal headwater catchment. Agricultural and Forest Meteorology. 10.1016/j.agrformet.2017.04.002.

Wookey, P.A., Aerts, R., Bardgett, R.D., Baptist, F., Brathen, K.A., Cornelissen, J.H.C., Gough, L., Hartley, I.P., Hopkins, D.W., Lavorel, S., Shaver, G.R., 2009.
Ecosystem feedbacks and cascade processes: understanding their role in the responses of arctic and alpine ecosystems to environmental change. Global Change Biology, 15, 1153-1172.

Xiao, Q., McPherson, E.G., Ustin, S.L., Grismer, M.E., 2000. A new approach to modeling tree rainfall interception. J. Geophys. Res., 105(D23), 29173-29188. doi:10.1029/2000JD900343.

Yurtsever, Y., Gat, J.R., 1981. Atmospheric waters. In: Gat, J.R., Gonfiantini, R. (Eds.), Stable Isotope Hydrology: Deuterium and Oxygen-18 in the Water Cycle. Technical Reports Series 210, IAEA, Vienna, pp. 103-142.

Tables:

Table 1: Vegetation characteristics of the four sampling plots: number of trees at the forest sites, mean diameter at breast height (DBH), median distance to collector, and canopy coverage for each of the four sites. The values derived using digital photography represent only the collector's canopy coverage, whereas the values calculated using the LiDAR data are mean values for the whole site.
Canopy coverage from digital photography (CAN-EYE; Min/Max/Median/Mean/Std Dev columns) and from LiDAR data (ArcGIS; last column):

Site | # of trees | Mean tree DBH [cm] | Median distance of trees to collector [m] | Min [%] | Max [%] | Median [%] | Mean [%] | Std Dev [%] | LiDAR Mean [%]
NF North-facing forest | 36 | 13.8 | 1 | 28 | 81 | 67 | 63 | 16.3 | 43
NH North-facing heather | 0 | - | - | 0 | 79 | 65 | 60 | 21.8 | -
SF South-facing forest | 46 | 21.8 | 1.5 | 50 | 74 | 69 | 68 | 5.8 | 68
SH South-facing heather | 0 | - | - | 0 | 78 | 62 | 53 | 24.3 | -

Table 2: Sampling dates, the number of integrated days, gross rainfall, maximum rainfall intensity, arithmetic mean throughfall amount and sample number for the forest and heather sites, and stemflow volumes in the forest.

Sampling day | Integrated days | Precipitation sum [mm] | Max intensity [mm h-1] | n | TF forest [mm] | n | TF heather [mm] | n | StF [mm]
2015-06-01 | 11 | 7.6 | 3.8 | 50 | 2.5±2.2 | 22 | - | - | -
2015-06-09 | 8 | 12.2 | 0.8 | 50 | 6.1±1.9 | 22 | 3.2±3.4 | 10 | 0.39±0.46
2015-06-17 | 8 | 3.6 | 0.4 | 50 | 0.8±0.8 | 22 | 1.2±0.8 | 10 | 0
2015-06-25 | 8 | 10.4 | 1.8 | 49 | 4.3±2.9 | 22 | 6.3±2.6 | 10 | 0.08±0.01
2015-07-02 | 7 | 10.4 | 4.6 | 49 | 3.0±2.5 | 22 | 5.6±1.9 | 10 | 0.04±0.07
2015-07-06 | 4 | 20.8 | 4.6 | 49 | 15.6±4.2 | 22 | 15.6±6.1 | 10 | 1.22±0.85
2015-07-20 | 14 | 74.8 | 7.6 | 50 | 48.9±13.9 | 22 | 52.4±19.0 | 10 | 3.15±1.91
2015-07-30 | 10 | 23.6 | 8 | 50 | 13.2±5.7 | 22 | 15.0±4.2 | 10 | 3.68±2.26
2015-08-06 | 7 | 12.8 | 1.6 | 50 | 4.1±2.7 | 22 | 7.1±2.3 | 10 | 0.12±0.12
2015-08-13 | 7 | 2.2 | 1.2 | 50 | 0.2±0.3 | 22 | 0.5±0.2 | 10 | 0
2015-08-19 | 6 | 36.2 | 3.8 | 50 | 21.3±7.3 | 22 | 22.1±6.5 | 10 | 1.71±1.34
2015-08-28 | 9 | 18.8 | 1.8 | 49 | 9.6±4.6 | 22 | 11.8±4.3 | 10 | 0.49±0.39
2015-09-03 | 6 | 8.8 | 3 | 49 | 3.0±1.9 | 22 | 4.0±1.6 | 10 | 0.08±0.08
2015-09-10 | 7 | 0.8 | 0.2 | 50 | 0.0±0.1 | 22 | 0.0±0.1 | 10 | 0
2015-09-18 | 8 | 16.8 | 3.2 | 50 | 8.9±3.4 | 22 | 10.5±3.5 | 10 | 0.57±0.37
2015-09-24 | 6 | 16 | 4.6 | 50 | 10.7±3.4 | 22 | 11.6±3.6 | 10 | 0.88±0.46
Mean values | | 17.2±17.7 | 3.2±2.3 | | 9.5±12 (55% of GR) | | 11.1±13 (64% of GR) | | 0.82±1.2 (1.3
% of GR)

Table 3: Overview of stable isotope data (‰) for gross rainfall, throughfall in forest, throughfall in heather and stemflow at trees. Each cell lists n; δ2H; δ18O; lc-excess.

Day | Gross precipitation | Throughfall forest | Throughfall heather | Stemflow forest
2015-06-01 | - | 42; -95.7±4.7; -12.3±0.7; -4.5±1.4 | 7; -99.4±11.3; -12.6±1.9; -5.7±3.8 | -
2015-06-09 | 3; -96.4±6.2; -12.5±1.0; -3.4 | 36; -96.9±17.1; -12.3±3.0; -4.2±2.2 | 9; -94.1±5.1; -12.0±0.9; -4.7±1.9 | -
2015-06-17 | 3; -33.6±6.5; -5.27±0.8; 2.4 | 33; -38.4±8.3; -5.4±0.9; -1.6±2.8 | 15; -30.1±6.0; -4.7±0.7; 1.7±1.4 | 8; -29.5±7.5; -4.1±0.9; -2.4±3.4
2015-06-25 | 3; -38.1±1.6; -5.22±0.2; -2.5 | 45; -37.7±7.2; -5.3±0.7; -1.7±2.6 | 21; -39.4±4.7; -5.4±0.5; -2.5±2.1 | 3; -24.8±11.7; -2.2±1.8; -12.8±3.2
2015-07-02 | 3; -24.7±0.8; -3.05±0.1; -6.2 | 40; -28.7±4.1; -3.6±0.6; -5.7±2.7 | 22; -25.4±2.2; -3.2±0.5; -5.4±3.3 | 10; -39.2±1.7; -6.0±0.5; 2.8±4.1
2015-07-06 | 3; -40.4±0.4; -5.6±0.1; -1.9 | 50; -40.3±1.7; -5.8±0.3; -0.3±1.6 | 22; -41.2±1.7; -6.0±0.3; 0.8±1.7 | 10; -57.3±5.3; -8.4±1.0; 3.6±2.8
2015-07-20 | 3; -58.5±0.4; -8.15±0.1; 0.2 | 50; -57.1±1.3; -8.3±0.2; 2.4±1.8 | 22; -59.2±4.2; -8.5±0.8; 1.9±2.8 | 7; -52.5±5.3; -7.5±0.9; 1.4±2.3
2015-07-30 | 3; -46.9±0.4; -6.52±0.1; -1.1 | 46; -48.3±2.4; -6.7±0.4; -0.9±3.3 | 22; -47.1±1.6; -6.4±0.2; -2.0±1.5 | 10; -15.2±2.4; -2.0±0.5; -4.8±3.1
2015-08-06 | 3; -25.3±0.5; -3.71±0.1; -1.6 | 42; -21.3±3.2; -3.4±0.5; 0.0±2.9 | 22; -25.2±2.6; -3.5±0.6; -3.4±4.2 | -
2015-08-13 | 3; -47.8±0.6; -5.27±0.9; -11.8 | 23; -43.6±5.5; -5.5±0.8; -5.9±4.1 | 20; -47.0±5.8; -5.8±0.7; -6.6±4.2 | 10; -60.6±4.1; -8.3±0.5; -0.4±0.7
2015-08-19 | 3; -64.7±0.4; -8.92±0.1; 0.0 | 50; -59.0±3.6; -8.1±0.5; -0.8±1.9 | 22; -63.3±2.0; -8.7±0.3; -0.4±1.4 | 10; -30.5±4.0; -4.3±0.5; -2.1±3.1
2015-08-28 | 3; -42.2±0.6; -6.19±0.1; 1.1 | 49; -36.8±4.6; -5.5±0.6; 1.0±2.9 | 21; -40.1±3.1; -6.0±0.3; 1.8±1.8 | 8; -50.0±4.3;
-6.6±0.5; -3.5±1.06
2015-09-03 | 1; -53.5; -7.1; -2.9 | 47; -56.4±3.3; -7.4±0.4; -3.5±0.9 | 21; -54.9±2.9; -7.3±0.3; -3.0±1.0 | -
2015-09-18 | 3; -55.1±0.7; -8.1±0.0; 3.3 | 50; -53.6±5.6; -7.8±0.7; 2.2±3.3 | 22; -56.7±2.6; -8.1±0.3; 1.9±2.2 | 10; -43.0±3.2; -6.6±0.3; 3.1±1.8
2015-09-24 | 3; -52.8±0.8; -7.3±0.2; -0.8 | 50; -51.5±1.6; -7.4±0.5; 1.0±4.5 | 22; -52.6±1.3; -7.4±0.4; 0.1±3.1 | 10; -49.4±1.2; -7.0±0.2; 0.3±1.0
Mean values | -48.6±18.5; -6.6±2.4; -1.8±3.8 | -47.8±18.0; -6.6±2.2; -1.2±2.7 | -48.3±17.9; -6.6±2.3; -1.4±2.9 | -41.1±14.5; -5.7±2.3; -1.2±4.5

Figures

Figure 1: Bruntland Burn catchment with all the site locations and the Bruntland Burn stream. Locations of collectors are pictured in differently coloured squares.

Figure 2: Hydroclimatic conditions: (a) daily potential evapotranspiration (PET [mm d-1], Penman-Monteith), (b) air temperature (Ta [°C]), (c) relative humidity (RH [%]), (d) wind speed (U [m s-1]) and (e) daily precipitation (GR [mm d-1]) during the study period. The vertical orange lines represent the sampling dates. Apart from GR and PET (daily), hourly values are shown.

Figure 3: a) Throughfall (TF) in forest and heather (green and purple boxplots, respectively) and gross rainfall (GR, blue diamond) amount. GR was always significantly higher than TF.
b) Boxplots showing TF fraction in % (TF as a fraction of GR) for the forest and heather sites. Outliers are marked as points. For both subplots, asterisks indicate significant differences (p<0.05) between forest and heather (Wilcoxon test).

Figure 4: a) Relationship between the interception loss and the gross rainfall amount and b) the maximum rainfall intensity during each sampling period for the four study sites. The lines show a logarithmic fit to the data points. The equation and the coefficient of determination for the fitted curves are given.

Figure 5: 20 x 20 m grid of both forest sites with TF collectors and trees (black dots). The size of the orange points represents the average TF (over all sample dates) and the size of the black dots represents the DBH. Dark green squares indicate high canopy coverage, purple indicates no canopy coverage [%], as derived from LiDAR measurements.

Figure 6: a) δ2H and b) lc-excess of TF in forest and heather (green and purple boxplots, respectively) and in gross rainfall (blue diamonds) for each sampling campaign. Stars indicate significant differences (p<0.05) between forest and heather.
Green and purple boxes below the boxplots indicate significant differences between TF and GR for forest and heather, respectively. Outliers are marked as points.

Figure 7: Dual isotope plots of all isotope samples for each sampling date over the entire sampling period. Green circles represent the forest sites, pink squares represent the heather sites. Blue diamonds are open collectors of gross rainfall and brown triangles are stemflow. The dashed line shows the Local Meteoric Water Line (LMWL), the continuous line the Global Meteoric Water Line (GMWL).

Figure 8: Relationship between the standard deviation of the δ2H values of the throughfall in the forested (green) and heather (purple) sites and the gross rainfall amount (a) and the maximum rainfall intensity (b) during the integrated sampling interval. The lines show a logarithmic fit to the data points. The equation and the r2 for the fitted curves are given.

Figure 9: Tree stemflow δ2H in comparison with gross rainfall (blue diamonds).
Figure 10: 20 x 20 m grid of both forest sites with TF collectors (orange dots) and trees (black dots). The size of the orange points represents the average δ2H values and the size of the black dots represents the DBH. Dark green squares indicate high canopy coverage [%], purple indicates no canopy coverage, as derived from LiDAR measurements.
An Acad Bras Cienc (2020) 92(1): e20180783. DOI 10.1590/0001-3765202020180783. Anais da Academia Brasileira de Ciências | Annals of the Brazilian Academy of Sciences. Printed ISSN 0001-3765 | Online ISSN 1678-2690. www.scielo.br/aabc | www.fb.com/aabcjournal
Running title: DNA BARCODING IN CROAKER LARVAE

BIOLOGICAL SCIENCES

DNA barcoding identifies three species of croakers (Pisces, Sciaenidae) in the ichthyoplankton of the High Paraná River

YANINA F. BRIÑOCCOLI, GLADYS G. GARRIDO & ALICIA ALVAREZ

Abstract: In the province of Misiones (Argentina), the filling of the Yacyretá Reservoir (Argentina-Paraguay) to its final dimensions in 2011 formed new aquatic ecosystems; e.g., the Garupá Stream was converted into a subreservoir. Adult individuals and spawning of the Family Sciaenidae, excellent colonizers of modified environments, have been reported in this stream. The larvae of this family are morphologically similar, particularly among Pachyurus and Plagioscion species, making taxonomic differentiation difficult. In the present work, sciaenid larvae were characterized molecularly at the cytochrome c oxidase subunit I gene in order to determine which species use this environment for reproduction. Additionally, genetic distances, the Barcode Index Number (BIN) and the Automatic Barcode Gap Discovery (ABGD) method were estimated, and phylogenetic trees were reconstructed. The results indicated the presence of Plagioscion ternetzi and Pachyurus bonariensis larvae, and, for the first time in tributaries of the region, Plagioscion squamosissimus. The incorporation of P. bonariensis and P.
squamosissimus to the faunistic assemblage of ichthyoplankton in the Garupá Stream supports better characterization of the species richness of this secondary watercourse modified by the Yacyretá Reservoir, and advances our understanding of the use of this area for reproduction.

Key words: Plagioscion ternetzi, Plagioscion squamosissimus, Pachyurus bonariensis, Garupá Stream, reservoir, fish larvae.

INTRODUCTION

The identification of adult fishes relies on clearly defined criteria based on external morphological characters, making precise identification possible for everyone, from amateur anglers to professional taxonomists (Strauss & Bond 1990). However, these characters for specific identification cannot be applied to larval life-stages, at which point the morphological characters may not yet have developed. Furthermore, the morphology of a single species may change rapidly and significantly during ontogeny (Ko et al. 2013). The study of ichthyoplankton is a developing field and includes the identification of larvae to the lowest possible taxonomic level, which then permits the analysis of fish population dynamics, detection of the occurrence and intensity of reproductive activity, as well as an understanding of habitat use during reproduction and early ontogeny (Ko et al. 2013). This identification is possible through use of the DNA barcode (Hebert et al. 2003) based on the cytochrome c oxidase subunit I (COI) marker, which has proven effective for species identification, even in cryptic or difficult-to-identify species (Carvalho et al. 2011, Cabrera et al. 2017, Cardoso et al. 2018, Pereira et al. 2011, 2013, Rosso et al. 2012), and also for larvae and eggs (Almeida et al. 2017, Becker et al. 2015, Frantine-Silva et al. 2015).
Individuals of the Family Sciaenidae Cuvier, 1829, one of the most diverse families of the Order Perciformes, are commonly known as corvinas, or croakers, due to their propensity for producing sounds with the sonic muscles and swimbladder. They are fishes of commercial importance and are excellent colonizers of modified environments (Tower 1908, Ramcharitar et al. 2006). Larvae of the Family Sciaenidae are characterized as presenting 23 to 27 myomeres and being relatively large, with a robust, somewhat bulbous head, a triangular stomach and a spiraled small intestine; at some life-stages, they present pigmentation on the head, nape, jaw, ventral area, and along the post-anal lateral line (Moser 1996, Taguti et al. 2015). The Garupá Stream is a tributary of the Paraná River, with a watershed of 1,416 km². It is the sixth-largest waterway of Misiones Province (Argentina) and has a dendritic pattern with tributaries and sub-tributaries (Sistema Nacional de Información Hídrica 2017). Upon encountering the Yacyretá Reservoir's area of influence (Argentina-Paraguay), it undergoes substantial changes in its drainage regime, increasing its area of inundation and the period of time during which it is flooded, and decreasing the velocity of flow, which gives it the characteristics of a sub-reservoir (Fulco 2012, Meichtry de Zaburlín et al. 2010, 2013). These changes are reflected in the structural composition of its diverse aquatic communities, such as macrophytes, which provide refuge and reproductive habitat for some members of the fish community. The elevated biodiversity of fish in this stream has been cited by authors such as Flores et al. (2009) and Bogan et al. (2015), among others, reporting the presence of species such as a characin locally known as sabalito, Cyphocharax spilotus (Vari 1987), the cobi catfish Pimelodella gracilis (Valenciennes 1835), the yellow catfish Pimelodus maculatus Lacépède, 1803, the bogas Leporinus acutidens (Valenciennes 1837), L.
obtusidens (Valenciennes 1836), and the freshwater croakers Plagioscion ternetzi Boulenger 1895 and Pachyurus bonariensis Steindachner 1879. In the Garupá Stream in particular, there are records of large spawns of sciaenids, and the presence of adult Plagioscion ternetzi and Pachyurus bonariensis is periodically verified (Flores et al. 2009). However, these species present larval stages with very similar morphological characteristics, which makes it difficult to differentiate between them taxonomically at larval stages (Lakra et al. 2009, Taguti et al. 2015). On these grounds, in the present work, early stages of croakers obtained in the Garupá Stream (Misiones Province) were characterized at the molecular level, with the aim of recognizing the species which utilize this modified environment for reproduction.

MATERIALS AND METHODS

Ethics statement

In Argentina, fish handling during sampling was performed following the guidelines of the ethical committee of the UFAW Handbook on the Care and Management of Laboratory Animals (http://www.ufaw.org.uk). Collection permits in Argentina were granted by the Ministerio de Ecología y Recursos Naturales Renovables de Misiones (Disp. 013/16). These permits in Argentina are granted without a formal request concerning the protocol used for the humane killing of fish. Notwithstanding, we opted to kill the fish with an overdose of benzocaine, as recommended by the New South Wales Fisheries Animal Care and Ethics Committee (Barker et al. 2002).

Sampling methodology

The sampling of ichthyoplankton and adult specimens was carried out between December 2016 and January 2017 in the Garupá Stream (27°26’49.3”S, 55°48’15.8”W), Misiones Province, in the area of the current outlet resulting from the filling of the Yacyretá Reservoir to its final dimensions in 2011 (Fig. 1).
Samplings of adult specimens of Plagioscion ternetzi and Pachyurus bonariensis were carried out during diurnal and nocturnal hours using monofilament seine nets with the following mesh measurements: 20 cm, 16 cm, 14 cm, 12 cm, 8 cm, 7 cm, 6 cm, 5 cm, and 4 cm when stretched from knot to knot. Collected material was fixed with absolute ethanol, and in the laboratory species identity was corroborated according to the diagnostic characters provided by Casciotta et al. (2006). From the gathered material, one adult per species was selected for posterior molecular characterization and documented via digital photography (Fig. 2a, b). For their part, ichthyoplankton samples were taken during nocturnal hours in sub-superficial waters (up to 2 meters in depth). Cylindrical-conical nets with 500-micron netting were used, equipped with mechanical flow meters and operated in an active manner. Fixing of collected material was done in the field, using absolute ethanol. The larvae of the Family Sciaenidae were separated from the rest of the other animals, plants and drifting detritus under a Leica MZ6 stereoscopic binocular microscope, and were classified by developmental stage, according to the criteria of Nakatani et al. (2001), as: yolk sac, pre-flexion, flexion or post-flexion. Then, 10 larvae representing different stages of development were selected for molecular characterization and documented via digital photography (Fig. 2c, l). The samples were deposited in the reference collection “Proyecto de Biología Pesquera Regional” (Instituto de Biología Subtropical Nodo Posadas, CONICET-Universidad Nacional de Misiones).

Figure 1. Location of the Garupá Stream and the sampling area. (a) The star indicates the location of the Garupá Stream in the province of Misiones. (b) and (c) Satellite images of the Garupá Stream in the years 2008 and 2015, respectively; the dotted line in b indicates the course of the stream. (d) Satellite image of the mouth of the Garupá Stream in 2016; the oval indicates the sampling zone. Satellite images were taken from Google Earth (Version 7.1.7.2606).

DNA extraction, polymerase chain reaction (PCR) and sequencing

Total genomic DNA was extracted from the abdominal portion of the adult specimens, as well as from each larva, using the commercial Puriprep-S Kit (Inbio Highway, Tandil, Buenos Aires, Argentina), in accordance with the specifications of the manufacturer. The polymerase chain reaction (PCR) was utilized in order to amplify the mitochondrial COI gene. The primers used for the amplification of the COI gene were FishF1 TCAACCAACCACAAAGACATTGGCAC and FishR1 TAGACTTCTGGGTGGCCAAAGAATCA, designed and described by Ward et al. (2005). Amplification was performed in a final volume of 50 µl containing 1X Green GoTaq Reaction Buffer, 0.2 mM dNTP mix, 0.5 µM of each primer, 1.25 U GoTaq DNA polymerase (Promega, Madison, WI, USA) and 50-100 ng of DNA template. The amplification was carried out in an OpenPCR thermocycler (Chai Biotechnologies Inc., Santa Clara, CA, USA), and the amplification protocol consisted of 95°C for 2 min; 35 cycles of 94°C for 30 sec, 52°C for 30 sec and 72°C for 1 min; with a final extension of 72°C for 10 min. PCR products were visualized on 1.5% agarose gels and then purified in order to remove potential contaminants that could interfere with the sequencing process. The procedure was performed with the commercial AccuPrep® PCR Purification Kit (Bioneer, Daejeon, South Korea), following the recommendations of the manufacturer for products in solution. The purified products were sent to Macrogen, Inc. (Seoul, South Korea) and sequenced in both forward and reverse directions for each individual. Evaluation of the quality of the sequences was conducted with the QV scores provided by the sequencing service.
Figure 2. Photographs of adults and larvae of croakers used in this study. (a) Adult Plagioscion ternetzi (PGPA-11). (b) Adult Pachyurus bonariensis (PGPA-12). (c) Larva (PGPA-1). (d) Larva (PGPA-2). (e) Larva (PGPA-3). (f) Larva (PGPA-4). (g) Larva (PGPA-5). (h) Larva (PGPA-6). (i) Larva (PGPA-7). (j) Larva (PGPA-8). (k) Larva (PGPA-9). (l) Larva (PGPA-10). The codes in parentheses indicate collection numbers assigned within the “Proyecto de Biología Pesquera Regional”.

In order to generate the consensus DNA sequence, the chromatograms obtained were analyzed with Chromas Lite 2.6.4 (Technelysium Pty Ltd, Tewantin, Australia) and Clustal X 2.1 (Larkin et al. 2007). Once the consensus DNA sequences were obtained for both larvae and adults, the sequences were searched using the basic local-alignment search tool BLASTn against GenBank (Altschul et al. 1990) and in the Barcode Index Number (BIN) system in the Barcode of Life Data (BOLD) identification engine (Ratnasingham & Hebert 2013). In addition, we also explored species limits using the Automatic Barcode Gap Discovery method (ABGD) according to Puillandre et al. (2012).

Phylogenetic Analysis

Bioinformatic analysis of the partial regions of the COI gene was carried out individually for each of the individuals studied. Based on the sequences obtained in this study, a multiple alignment was carried out with Clustal X 2.1 to identify the presence of different haplotypes, which were later characterized for composition (i.e., number of each of the bases, relative frequency, content of G+C and A+T) using BioEdit 7.1.9 (Hall 1999). Then, a data matrix was generated with the consensus sequences obtained in this study, as well as reference sequences of phylogenetically close species, in the manner put forth by Lo et al.
(2015): Pachypops fourcroi (Monsch 1998), Plagioscion surinamensis (Bleeker 1873) and Paralonchurus brasiliensis (Steindachner 1875). This matrix likewise included the external group, comprised of Monotaxis grandoculis (Forsskål 1775), Sparus aurata Linnaeus 1758 and Dicentrarchus labrax (Linnaeus 1758) (Table I). The data were subjected to estimation of genetic distance, Neighbor-Joining (NJ), and phylogenetic analyses using Maximum Parsimony (MP), Maximum Likelihood (ML), and Bayesian Inference (BI). Genetic distances were calculated using MEGA 7.0.20 software (Kumar et al. 2016) through the use of the number of differences (p-distance) and the Kimura 2-parameter substitution model (K2P). The analyses were performed with MEGA 7.0.20 (Kumar et al. 2016) for NJ and ML, and PAUP*4.0b10 (Swofford 2002) for MP. The BI analysis was carried out with MrBayes 3.2 (Ronquist et al. 2012). In the distance method, the TN93 model (Tamura & Nei 1993) was used to calculate the divergence between the sequences. The MP analysis used a heuristic search, with characters equally weighted, tree bisection and reconnection branch-swapping, and 10 random stepwise additions. In the ML analysis, the optimal model of nucleotide substitution (HKY+G) was evaluated with a likelihood ratio test (LRT) and selected according to the corrected Akaike information criterion (AICc) with jModelTest 2.1.10 (Darriba et al. 2012). The topology of the ML tree was obtained in MEGA 7.0.20, following a heuristic search. The statistical support for the resulting phylogenies was assessed by bootstrapping with 1,000 replicates (NJ, MP, ML) (Felsenstein 1985). The BI analysis was carried out with two simultaneous runs of four Markov chain Monte Carlo chains, which were run for 10⁶ generations, with sampling every 100 generations, using the HKY+G model in MrBayes 3.2.6 (Ronquist et al. 2012), with a burn-in of 1,001 generations, and the final consensus was based on 18,000 trees used to estimate posterior probabilities.
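The two distance measures named above can be illustrated with a small sketch; the function names and the toy sequences are our own, not taken from MEGA or from the study. The p-distance is simply the proportion of differing aligned sites, while the Kimura 2-parameter correction treats transitions (A↔G, C↔T) and transversions separately:

```python
import math

def p_distance(seq1, seq2):
    """Proportion of aligned sites that differ between two sequences."""
    pairs = [(a, b) for a, b in zip(seq1, seq2) if a in "ACGT" and b in "ACGT"]
    return sum(a != b for a, b in pairs) / len(pairs)

def k2p_distance(seq1, seq2):
    """Kimura (1980) two-parameter distance: corrects the raw p-distance
    by counting transitions (A<->G, C<->T) and transversions separately."""
    purines, pyrimidines = {"A", "G"}, {"C", "T"}
    pairs = [(a, b) for a, b in zip(seq1, seq2) if a in "ACGT" and b in "ACGT"]
    n = len(pairs)
    ts = sum(a != b and ({a, b} <= purines or {a, b} <= pyrimidines)
             for a, b in pairs)               # transitions
    tv = sum(a != b for a, b in pairs) - ts   # transversions
    P, Q = ts / n, tv / n
    return -0.5 * math.log((1 - 2 * P - Q) * math.sqrt(1 - 2 * Q))

# Toy 20-bp alignment with two transitions and one transversion:
a = "ACGTACGTACGTACGTACGT"
b = "ACGTACATACGTACGCACGA"
```

As in the distances reported in the Results, the K2P correction inflates the estimate slightly relative to the raw proportion (here roughly 0.170 versus 0.150 for the toy pair).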
The topologies of the trees obtained were evaluated using the software FigTree 1.4.3 (http://tree.bio.ed.ac.uk/software/figtree/).

RESULTS

Drawing upon the consensus COI sequences for the individuals analyzed, the search results obtained with BLASTn showed high homologies and sequence identities equaling 100% with reference sequences in GenBank. These results confirmed the identity of the adults in accordance with the morphological identification (Plagioscion ternetzi [PGPA11] and Pachyurus bonariensis [PGPA12]) and provided evidence for the presence of not two, but rather three larval sciaenid species in the analyzed sample: P. bonariensis (PGPA1, PGPA3, PGPA5, PGPA6, PGPA7 and PGPA10), P. ternetzi (PGPA2, PGPA4 and PGPA9) and Plagioscion squamosissimus (Heckel 1840) (PGPA8). In addition, three BINs were recovered: ACG8729 for P. bonariensis, ACK2538 for P. ternetzi and AAC6616 for P. squamosissimus. The analysis of the ABGD dataset using the default parameters resulted in clear partitions among three species. The groups proposed by the ABGD completely recovered the sequence compositions of the different BINs. Additionally, the sequences obtained were deposited in GenBank under the accession numbers MF289078, MF289077 and MF289076, for Plagioscion ternetzi, P. squamosissimus and Pachyurus bonariensis, respectively. The sequences and data of the specimens were also uploaded to BOLD in the project “SCIAE Sciaenids COI”. The interspecific distances obtained are presented in Table II. From the sequences obtained in this study and those available in GenBank for Pachyurus bonariensis and Plagioscion ternetzi, the intraspecific genetic distances were zero, while for Plagioscion squamosissimus they ranged from 0 to 1.7% and from 0 to 1.8% for p-distance and K2P, respectively. The final matrix utilized for the phylogenetic analyses consisted of 615 characters.

Table I. Summary of the sequences and collection locations of the specimens included in this study, as well as those of species assigned to the external group for the COI marker.

Species | Location | GenBank | Reference
Plagioscion ternetzi | Argentina, Misiones, Garupá Stream | MF289078 | Present work
Plagioscion ternetzi | X | KP722764 | Lo et al. (2015)
Plagioscion surinamensis | X | KP722763 | Lo et al. (2015)
Plagioscion squamosissimus | Argentina, Misiones, Garupá Stream | MF289077 | Present work
Plagioscion squamosissimus | Brazil, São Paulo, Upper Paraná | GU701891 | Pereira et al. (2013)
Plagioscion squamosissimus | Brazil, São Paulo, Upper Paraná, Paranapanema River | KM897668 | Frantine-Silva et al. (2015)
Plagioscion squamosissimus | Brazil, São Paulo, Upper Paraná, Paranapanema River | KM897618 | Frantine-Silva et al. (2015)
Plagioscion squamosissimus | X | KP722762 | Lo et al. (2015)
Pachypops fourcroi | X | KP722753 | Lo et al. (2015)
Pachyurus bonariensis | Argentina, Misiones, Garupá Stream | MF289076 | Present work
Pachyurus bonariensis | X | KP722754 | Lo et al. (2015)
Pachyurus bonariensis | Argentina, Paraná Delta | KU288796 | Díaz et al. (2016)
Paralonchurus brasiliensis | Brazil, Rio de Janeiro | JX124857 | A.D.E.O. Ribeiro et al. (unpublished data)*
Paralonchurus brasiliensis | Brazil, São Paulo | JQ365484 | Ribeiro et al. (2012)
Paralonchurus brasiliensis | X | KP722756 | Lo et al. (2015)
Dicentrarchus labrax | Turkey | KC500493 | Keskin & Atar (2013)
Sparus aurata | Turkey, Hatay | KY176642 | M.B. Yokes (unpublished data)*
Monotaxis grandoculis | Mozambique, Pomene | KF489648 | D. Steinke et al. (unpublished data)*

* Sequences published exclusively in GenBank. X refers to a site not specifically mentioned in the respective report.
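The barcode-gap logic behind the ABGD partitions can be illustrated with a toy single-linkage grouping. The helper below is our own sketch, not the actual ABGD implementation (which infers the gap recursively); the distance values and the 0.03 threshold are illustrative, chosen to mirror the pattern reported here (intraspecific distances of at most about 1.8%, interspecific distances of roughly 7% and more):

```python
def gap_partition(labels, dist, threshold):
    """Group sequences by single linkage: any pair closer than `threshold`
    (a value falling inside the barcode gap) joins the same putative species."""
    parent = {x: x for x in labels}

    def find(x):  # union-find with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for (a, b), d in dist.items():
        if d < threshold:
            parent[find(a)] = find(b)

    groups = {}
    for x in labels:
        groups.setdefault(find(x), set()).add(x)
    return sorted(groups.values(), key=sorted)

# Toy distances echoing the specimen assignments above; pairs not listed
# are treated as lying above the threshold.
labels = ["PGPA1", "PGPA10", "PGPA2", "PGPA9", "PGPA8", "MF289077"]
dist = {
    ("PGPA1", "PGPA10"): 0.000,    # Pachyurus bonariensis larvae
    ("PGPA2", "PGPA9"): 0.000,     # Plagioscion ternetzi larvae
    ("PGPA8", "MF289077"): 0.017,  # Plagioscion squamosissimus
    ("PGPA1", "PGPA2"): 0.171, ("PGPA1", "PGPA8"): 0.170,
    ("PGPA2", "PGPA8"): 0.092, ("PGPA9", "PGPA8"): 0.092,
}
```

Any threshold inside the gap between intraspecific and interspecific distances recovers the same three groups, which is why the ABGD partitions and the BINs coincided in this dataset.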
The phylogenetic and distance reconstructions based on the partial sequence of the COI gene presented the same topology (Fig. 3). For NJ and ML, a larger cluster was obtained containing the species of the Genus Plagioscion, with a bootstrap value of 100%. This group contained the sequences of the individuals identified as P. ternetzi and P. squamosissimus along with their reference sequences in GenBank. Likewise, the sequences obtained for the specimens of P. bonariensis formed a group together with their reference sequences, with a maximum support value. For the BI analysis, the results were the same as those obtained in NJ and ML, with posterior probability values of 1 for each grouping. For the MP analysis, of the 615 characters of the matrix analyzed, 192 were parsimony-informative and 402 were constant. In the heuristic search, a total of 68,520 trees were evaluated and three equally parsimonious trees were recovered, each 509 steps in length.

Table II. Interspecific genetic distances (in percent) for the COI gene among the analyzed species. Beneath the diagonal are p-distances; above it, K2P values. Information referring to the species mentioned is found in Table I.

Species | 1 | 2 | 3 | 4 | 5 | 6
Pachyurus bonariensis (1) | --- | 20% | 19.8-21.1% | 18.9% | 19.3% | 19.7-20.2%
Plagioscion ternetzi (2) | 17.1% | --- | 10.1-11% | 7.4% | 18.6% | 20.9-21.4%
Plagioscion squamosissimus (3) | 17-17.9% | 9.2-10% | --- | 9.7-10.3% | 20.3-21.1% | 20.1-22.1%
Plagioscion surinamensis (4) | 16.4% | 6.9% | 8.7-9.4% | --- | 17.8% | 20.6-20.9%
Pachypops fourcroi (5) | 16.6% | 16.2% | 17.3-17.9% | 15.6% | --- | 20.6-20.9%
Paralonchurus brasiliensis (6) | 17-17.3% | 17.7-18.1% | 17.1-18.5% | 17.5-17.7% | 17.5-17.7% | ---

DISCUSSION

The results of the present work show the presence of larvae with morphologically distinctive characteristics of the Family Sciaenidae in the Garupá Stream.
They may be differentiated from other freshwater species included in the Order Perciformes in that they present a body form similar to that of the adults, with a relatively large head, a caudal peduncle proportionately small in relation to the head (giving the body the form of a water drop), an approximately triangular small intestine, and spines in the cephalic region (Taguti et al. 2015). The larvae in question cannot be morphologically identified to the species level employing traditional methods of morphometric taxonomy, inasmuch as these species present similar morphological characteristics (Lakra et al. 2009). However, molecular identification using the COI gene allowed recognition of three species in the analyzed material, based on comparison to the totality of the DNA sequences available in GenBank and BOLD: Plagioscion ternetzi, Pachyurus bonariensis and Plagioscion squamosissimus.

Figure 3. Phylogenetic and distance trees of species of the Family Sciaenidae based on 615 nucleotides of the mitochondrial gene COI. Trees of Neighbour-Joining, Maximum Parsimony, Maximum Likelihood and Bayesian Inference. Bootstrap values for (a) NJ and MP and (b) ML, and posterior probability values for BI, respectively, are shown above the branches. Values below 70/0.7 are not shown. References for the sequences correspond with the collection numbers assigned by the “Proyecto de Biología Pesquera Regional” for material from the Garupá Stream and accession numbers from GenBank for the remaining sequences.

With respect to the identification of larvae of Plagioscion ternetzi in the Garupá Stream, its presence was foreseeable, owing to other observations of the species in the area, it having
been catalogued as “constant” according to the Dajoz (1983) index, which regards as constant those species with an occurrence rate greater than 50%. Nonetheless, there is novelty in the identification of Plagioscion ternetzi as being accompanied by other sciaenid larvae in this environment, in the presence of Pachyurus bonariensis, for which only adult individuals had been observed in the area (Flores et al. 2009), and in the observation of Plagioscion squamosissimus, which constitutes the first record of the species within the area under the influence of the Yacyretá Reservoir. Pachyurus bonariensis is a fish of small size, with some commercial value (Casatti 2003), found frequently in a majority of tropical aquatic environments, notably those which undergo anthropogenic impact, such as reservoirs (Benedito-Cecilio & Agostinho 1997). Its presence has been reported in the watersheds of the Paraná, Paraguay, and Uruguay Rivers (Agostinho et al. 1993, Mantero & Fuentes 1997, Sverlij et al. 2008). In the Yacyretá Basin, there have been records of spawning of the species (Flores & Hirt 2002), although there had been no records of larvae up to the present work, which might owe to the difficulty of morphological identification. On the other hand, Plagioscion squamosissimus, commonly known as the Piauí croaker, is a species that originates in the Amazonas watershed (Parnaíba River) and was introduced into the reservoirs of northeast Brazil in the 1940s, with the objective of putting available lentic environments to use and improving the availability of high-quality fish in that region (Fontenele 1978, Carnelós & Benedito-Cecílio 2002, DNOCS 2002, Félix et al. 2007). Following the beneficial results obtained in that prior experience, the Companhia Energética de São Paulo decided to introduce the first juveniles into its reservoirs in 1952 (Bialetzki et al. 2004).
The first spawning was obtained in the tanks of the Estação de Piscicultura da Usina Hidrelétrica Limoeiro (State of São Paulo, Brazil). Nonetheless, the necessity of opening the reservoir's gates led to the liberation of juveniles into the Pardo River, and the species later reached the Upper Paraná, most likely in 1972 (Bialetzki et al. 2004). Up to the present date, the species has been found in the area influenced by the Binational Itaipú Reservoir (Brazil-Paraguay) (Hahn et al. 1997, Nakatani et al. 1997, Carnelós & Benedito-Cecílio 2002, Baumgartner et al. 2003, Barros et al. 2012), downstream of which the High Paraná begins (Bonetto et al. 1986). López et al. (2005) reported the advance of P. squamosissimus in the High Paraná, but not in its tributaries, and indicated a possible threat to natural aquatic communities. The presence of the species in a tributary of the Paraná is now confirmed based upon the results of the present work, in accordance with various studies that have demonstrated the species' preference for areas of low water flow and its adaptation to reservoirs (Nomura 1973, Dourado 1981, Cruz et al. 1990, Okada et al. 1996). The Piauí croaker is a fish with great colonization success due to its dietary plasticity, which may suggest a high cost for the native fish fauna, whose abundances could be altered (Barros et al. 2012, Hahn et al. 1997). This successful colonization is expected, as with most non-native species, mainly in situations where predators are absent, preferred food resources are abundant, and morphological traits provide a competitive advantage in early life-stages (Neves et al. 2015). Especially with the use of our molecular methods, it is probable that there will be new records of P. squamosissimus in the region, owing to colonization opportunities associated with newly formed environments. Croakers are of commercial importance for the region, relevant to both artisanal and sport fishing (Hirt et al. 2010, Araya et al.
2013). Studies carried out in the watersheds of the Amazon (Oliveira & Ferreira 2008), including the Tocantins (Soares & Teixeira 2012), São Francisco (Melo & Severi 2010), Paraguay (Casatti 2005) and Upper Paraná basins (Bialetzki et al. 2005), show that larvae of the Family Sciaenidae are among the most abundant larvae of the pelagic zone. Their reproductive cycle is well known, with reproductive activity notably increasing in the warm months of spring and summer, such that in these months their capture is very likely. In this period, croakers prefer regions with slow current and environments with meanders (Vera et al. 2005), characteristics present in the Garupá Stream, which, nearing its outlet into the Paraná River, forms a fan of flooded terrain with slow current. This environment arose when the Yacyretá Reservoir reached its full water level. One example of the modifications associated with the change in water level in the fish communities of the reservoir's area of influence is reflected in the size structure (Agostinho et al. 2008, Hoeinghaus et al. 2009). Previously, when this stretch of the Paraná River was free of dams, it was much more likely to host fish species of larger size. Currently, with the changes produced by damming, the size structure is in a period of modification, in which the greatest success is achieved by fish species of medium size, including the species currently under study, the croakers. These results are comparable with those of Hoeinghaus et al. (2009) in the Upper Paraná, Brazil, before and after the construction of the Itaipú Dam. In this context, the identification of the three species mentioned underlines the importance of newly formed environments, such as the Garupá Stream, in reproductive events.
All the results obtained (genetic distances, BIN analysis, ABGD and phylogenetic trees) reaffirm the presence of three species of sciaenid larvae in the stream. They also show the efficacy of the COI marker as a DNA barcode for the identification of freshwater fish species: it was able to overcome the lack of differentiable morphological characteristics and the ontogenetic variation in the sampled ichthyoplankton, allowing great taxonomic resolution that complements traditional morphological taxonomy. Owing to the difficulty of identifying ichthyoplankton, this approach provides an important taxonomic identification tool for regions that have barcode libraries at their disposal, such as that recently assembled in Argentina by Díaz et al. (2016). As such, the effort to identify these species is important, first of all, in order to characterize the fish community, and then to gain certitude in population-level studies of the community composition and to recommend measures for fishery management in the basin. In this sense, the incorporation of Pachyurus bonariensis and Plagioscion squamosissimus into the ichthyoplankton assemblage of the Garupá Stream allows for a better characterization of the species-level richness of this secondary watercourse modified by the Yacyretá Reservoir. Furthermore, it advances our understanding regarding the use of the area for reproduction and as a nursery, and provides evidence about spawning sites and recruitment for numerous fish populations, as these areas are of ecological importance and require management for the fishing resources of the region.

Acknowledgments

The authors wish to especially thank Drs. A. Beltramino and R. Vogler for the knowledge and time invested in this work. We are grateful to D. Aichino, J.C. Cerutti, C. Balatti, S. Masin, J. Aguilera and S. Müller for their support at the sampling site.
We also thank the anonymous reviewers for their comments, which greatly improved the manuscript.

REFERENCES

AGOSTINHO AA, PELICICE FM & GOMES LC. 2008. Dams and the fish fauna of the Neotropical region: impacts and management related to diversity and fisheries. Braz J Biol 68(4): 1119-1132.
AGOSTINHO AA, PEREIRA MENDES V, SUZUKI HI & CANZI C. 1993. Avaliação da atividade reprodutiva da comunidade de peixes dos primeiros quilômetros a jusante do reservatório de Itaipu. Rev UNIMAR 15: 175-189.
ALMEIDA FS, FRANTINE-SILVA W, LIMA SC, GARCIA DZA & ORSI ML. 2017. DNA barcoding as a useful tool for identifying non-native species of freshwater ichthyoplankton in the neotropics. Hydrobiologia 817: 111-119.
ALTSCHUL SF, GISH W, MILLER W, MYERS EW & LIPMAN DJ. 1990. Basic local alignment search tool. J Mol Biol 215: 403-410.
ARAYA P, HIRT L & FLORES S. 2013. Humedales de los arroyos de Misiones y Corrientes en relieve ondulado. In: Benzaquén L (Ed), Inventario de los humedales de Argentina: Sistemas de paisajes de humedales del Corredor Fluvial Paraná-Paraguay. Buenos Aires: Secretaría de Ambiente y Desarrollo Sustentable de la Nación, Buenos Aires, Argentina, p. 123-136.
BAUMGARTNER MST, NAKATANI KI, BAUMGARTNER G & MAKRAKIS MC. 2003. Spatial and temporal distribution of ''curvina'' larvae (Plagioscion squamosissimus Heckel, 1840) and its relationship to some environmental variables in the upper Paraná River floodplain, Brazil. Braz J Biol 63(3): 381-391.
BARROS LC, SANTOS U, ZANUNCIO JC & DERGAM JA. 2012. Plagioscion squamosissimus (Sciaenidae) and Parachromis managuensis (Cichlidae): A threat to native fishes of the Doce River in Minas Gerais, Brazil. PLoS ONE 7(6): e39138.
BECKER RA, SALES NG, SANTOS GM, SANTOS GB & CARVALHO DC. 2015.
DNA barcoding and morphological identification of Neotropical ichthyoplankton from the Upper Paraná and São Francisco. J Fish Biol 87: 159-168.
BENEDITO-CECILIO E & AGOSTINHO AA. 1997. Estrutura das populações de peixes do reservatório de Segredo. In: Agostinho AA and Gomes LC (Eds), Reservatório de Segredo: Bases Ecológicas para o Manejo. Maringá: EDUEM, Paraná, Brasil, p. 113-119.
BIALETZKI A, NAKATANI K, SANCHES PV, BAUMGARTNER G & GOMES LC. 2005. Larval fish assemblage in the Baía River (Mato Grosso do Sul State, Brazil): temporal and spatial patterns. Environ Biol Fish 73: 37-47.
BIALETZKI A, NAKATANI K, VANDERLEI SANCHES P & BAUMGARTNER G. 2004. Eggs and larvae of the 'curvina' Plagioscion squamosissimus (Heckel, 1840) (Osteichthyes, Sciaenidae) in the Baía River, Mato Grosso do Sul State, Brazil. J Plankton Res 26: 1327-1336.
BOGAN S, MELUSO JM & BAUNI V. 2015. Los peces del Paraná en el área de influencia del embalse Yacyretá. In: Bauni V (Ed), El patrimonio natural y cultural en el área de influencia del embalse de Yacyretá, Argentina. Ciudad Autónoma de Buenos Aires: Fundación de Historia Natural Félix de Azara, Argentina, p. 29-40.
BONETTO AA, NEIFF JJ & DI PERSIA DH. 1986. The Paraná River system. In: The Ecology of River Systems. Dordrecht: Springer, Dordrecht, Holland, p. 541-598.
CABRERA MB, BOGAN S, POSADAS P, SOMOZA GM, MONTOYA-BURGOS JI & CARDOSO YP. 2017. Risks associated with introduction of poeciliids for control of mosquito larvae: first record of the non-native Gambusia holbrooki in Argentina. J Fish Biol 91(2): 704-710.
CARDOSO YP, ROSSO JJ, MABRAGAÑA E, GONZALEZ-CASTRO M, DELPIANI M, AVIGLIANO E, BOGAN S, COVAIN R, SCHENONE NF & DÍAS DE ASTARLOA JM. 2018. A continental-wide molecular approach unraveling mtDNA diversity and geographic distribution of the Neotropical genus Hoplias. PLoS ONE 13(8): e0202024.
CARNELÓS RC & BENEDITO-CECÍLIO E. 2002.
Reproductive strategies of Plagioscion squamosissimus Heckel, 1840 (Osteichthyes Sciaenidae) in the Itaipu Reservoir, Brazil. Braz Arch Biol Technol 45: 317-324. CARVALHO DC, OLIVEIRA DAA, POMPEU PS, LEAL CG, OLIVEIRA C & HANNER R. 2011. Deep barcode divergence in Brazilian freshwater fishes: the case of São Francisco River Basin. Mitochondrial DNA 22: 71-79. CASATTI L. 2003. Family Sciaenidae (Drums or croakers). In: Reis RE, Kullander SO and Ferraris CJ Jr. (Eds), Checklist of the freshwater fishes of South and Central America. Porto Alegre: EDIPUCRS, Rio Grande do Sul, Brazil, p. 599-602. CASATTI L. 2005. Revision of the South American freshwater genus Plagioscion (Teleostei, Perciformes, Sciaenidae). Zootaxa 1080: 39-64. CASCIOTTA J, ALMIRÓN A & BECHARA J. 2006. Peces del Iberá: Habitat y Diversidad, 1st ed., La Plata: Graficar, Buenos Aires, Argentina, 244 p. CRUZ JA, MOREIRA JA, VERANI JR, GIRARDI L & TORLONI CEC. 1990. Levantamento da ictiofauna e aspectos da dinâmica de populações de algumas espécies do reservatório de YANINA F. BRIÑOCCOLI, GLADYS G. GARRIDO & ALICIA ALVAREZ DNA BARCODING IN CROAKER LARVAE An Acad Bras Cienc (2020) 92(1) e20180783 12 | 14 Promissão, SP. São Paulo: CESP/UFSCar, São Paulo, Brasil, 78 p. DAJOZ R. 1983. Ecologia Geral, 4a ed., Petrópolis: Vozes, Rio de Janeiro, Brasil, 472 p. DARRIBA D, TABOADA GL, DOALLO R & POSADA D. 2012. jModelTest 2: more models, new heuristics and parallel computing. Nat Methods 9: 772. DÍAZ J, VILLANOVA GV, BRANCOLINI F, DEL PAZO F, POSNER FM, GRIMBERG A & ARRANZ SE. 2016. First DNA barcode reference library for the identification of South American freshwater fish from the Lower Paraná River. PLoS ONE 11: e0157419. DNOCS - DEPARTAMENTO NACIONAL DE OBRAS CONTRA AS SECAS. 2002. Relatório das atividades desenvolvidas pela coordenação de pesca e aqüicultura, durante o ano de 2002. Coordenação de Pesca e Aqüicultura. Rio de Janeiro, Brasil, 22 p. DOURADO OF. 1981. 
Principais peixes e crustáceos dos açudes controlados pelo DNOCS. Fortaleza: SUDENE/ DNOCS, Ceará, Brasil, p. 27-28. FÉLIX RT, MELO VC, SANTANA FMS, EL-DEIR ACA & SEVERI W. 2007. Análise da reprodução da pescada Plagioscion squamosissimus (Heckel, 1840) no reservatório de Pedra, Rio de Contas, Bahia, Brasil. Anais do VIII Congresso de Ecologia do Brasil, 23 a 28 de Setembro de 2007, Caxambu – MG. FELSENSTEIN J. 1985. Confidence limits on phylogenies: an approach using the bootstrap. Evolution 39: 783-791. FLORES S, ARAYA PR & HIRT LM. 2009. Fish diversity and community structure in a tributary stream of the Paraná River. Acta Limnol Bras 21: 57-66. FLORES SA & HIRT LM. 2002. Ciclo reproductivo y fecundidad de Pachyurus bonariensis (Steindachner, 1879), Pisces, Sciaenidae. Bol Inst Pesca 28: 25-51. FONTENELE OPJT. 1978. Analise dos resultados de introdução da pescada do Piauí, Plagioscion squamosissimus (Heckel, 1840), nos açudes do nordeste. Boletim Tecnico DNOCS 36: 85-112. FRANTINE-SILVA W, SOFIA SH, ORSI ML & ALMEIDA FS. 2015. DNA barcoding of freshwater ichthyoplankton in the Neotropics as a tool for ecological monitoring. Mol Ecol Resour 15: 1226-1237. FULCO CA. 2012. El paisaje costero como factor de integración en el Proyecto Yacyretá. Buenos Aires: Contratiempo Ediciones, Buenos Aires, Argentina, 308 p. HAHN NS, AGOSTINHO AA & GOITEIN R. 1997. Feeding ecology of curvina Plagioscion squamosissimus (Heckel, 1840) (Osteichthyes, Perciformes) in the Itaipu Reservoir and Porto Rico floodplain. Acta Limnol Bras 9(1): 11-22. HALL TA. 1999. BioEdit: a user-friendly biological sequence alignment editor and analysis program for Window 95/98/NT. Nucl Acids Symp Ser 41: 95-98. HEBERT PDN, CYWINSKA A, BALL SL & DE WAARD JR. 2003. Biological identifications through DNA barcodes. Proc R Soc Lond B Biol Sci 270: 313-322. HIRT LM, ARAYA PR & FLORES SA. 2010. Peces de la Pesca Deportiva en la Provincia de Misiones (Argentina). 
Buenos Aires: Editoral Dunken, Buenos Aires, Argentina, 176 p. HOEINGHAUS DJ, AGOSTINHO AA, GOMES LC, PELICICE FM, OKADA EK, LATINI JD, KASHIWAQUI EAL & WINEMILLER KO. 2009. Effects of river impoundment on ecosystem services of large tropical rivers: embodied energy and market value of artisanal fisheries. Conserv Biol 23(5): 1222-1231. KESKIN E & ATAR HH. 2013. DNA barcoding commercially important fish species of Turkey. Mol Ecol Resour 13(5): 788-797. KO HL, WANG YT, CHIU TS, LEE MA, LEU MY, CHANG KZ, CHEN WY & SHAO KT. 2013. Evaluating the accuracy of morphological identification of larval fishes by applying DNA barcoding. PLoS ONE 8: e53451. KUMAR S, STECHER G & TAMURA K. 2016. MEGA7: Molecular Evolutionary Genetics Analysis version 7.0 for bigger datasets. Mol Biol Evol 33: 1870-1874. LAKRA WS, GOSWAMI M & GOPALAKRISHNA A. 2009. Molecular identification and phylogenetic relationships of seven Indian Sciaenids (Pisces: Perciformes, Sciaenidae) based on 16SrRNA and cytochrome c oxidase subunit I mitochondrial genes. Mol Biol Rep 36: 831-839. LARKIN MA ET AL. 2007. Clustal W and Clustal X version 2.0. Bioinformatics 23: 2947-2948. LO PC, LIU SH, CHAO NL, NUNOO FKE, MOK HK & CHEN WJ. 2015. A multi-gene dataset reveals a tropical New World origin and Early Miocene diversification of croakers (Perciformes: Sciaenidae). Mol Phylogenet Evol 88: 132-143. LÓPEZ HL, MIQUELARENA AM & PONTE GÓMEZ J. 2005. Biodiversidad y distribución de la ictiofauna mesopotámica. Miscelánea 14: 311-354. MANTERO G & FUENTES C. 1997. Huevos y larvas. In: Espinach Ros A and Ríos Parodi C (Eds), Conservación de la Fauna Íctica en el Embalse de Salto Grande. Salto Grande: Comisión Administradora del Río Uruguay y Comisión Técnica Mixta de Entre Ríos, Argentina, p. 26-32. YANINA F. BRIÑOCCOLI, GLADYS G. GARRIDO & ALICIA ALVAREZ DNA BARCODING IN CROAKER LARVAE An Acad Bras Cienc (2020) 92(1) e20180783 13 | 14 MEICHTRY DE ZABURLÍN N, PESO J & ARAYA P. 2013. 
Sistema 2a - Humedales del Embalse de Yacyretá y ambientes asociados. In: Benzaquén L, Blanco DE, Bó RF, Kandus P, Lingua GF, Minotti P, Quintana RD, Sverlij S and Vidal L (Eds), Inventario de los Humedales de Argentina. Sistemas de Paisajes de Humedales del Corredor Fluvial Paraná-Paraguay. Buenos Aires: Secretaría de Ambiente y Desarrollo Sustentable de la Nación, Buenos Aires, Argentina, p. 113-122. MEICHTRY DE ZABURLÍN N, PESO JG, GARRIDO G & VOGLER RE. 2010. Sucesión espacio-temporal del plancton y bentos en periodos posteriores al llenado del Embalse Yacyretá (Río Paraná, Argentina-Paraguay). Interciencia 35(12): 897-904. MELO AJS & SEVERI W. 2010. Abundância e distribuição espacial e sazonal do ictioplâncton no reservatório de Sobradinho, Rio São Francisco. In: Moura AN, Araújo EL, Bitencourt-Oliveira MC and Pimentel E (Eds), Reservatórios do Nordeste do Brasil: Biodiversidade, Ecologia e Manejo. Recife, Pernambuco: Comunigraf, Ceará, Brasil p. 503-540. MOSER HG. 1996. The early stages of the fishes in the California Current region. California Cooperative Oceanic Fisheries Investigation. Atlas 33. La Jolla: California Cooperative Oceanic Fisheries Investigations, California, USA, 1505 p. NAKATANI K, AGOSTINHO AA, BAUMGARTNER G, BIALETSKI A, VANDERLEI P, CAVICCHIOLI M & PAVANELLI C. 2001. Ovos e larvas de peixes de água doce, desenvolvimento e manual de identificação. Maringá: Editora da Universidade Estadual de Maringá, Paraná, Brasil, 378 p. NAKATANI K, BAUMGARTNER G & BAUMGARTNER MS. 1997. Larval development of Plagioscion squamosissimus (Heckel) (Perciformes, Sciaenidae) of Itaipu reservoir (Paraná River, Brazil). Rev Bras Zool 14(1): 35-44. NEVES MP, DELARIVA RL, GUIMARÃES ATB & SANCHES PV. 2015. Carnivory during ontogeny of the Plagioscion squamosissimus: A successful non-native fish in a lentic environment of the Upper Paraná River Basin. PLoS ONE 10(11): e0141651. NOMURA H. 1973. Peixes: Pesca e Biologia. 
Rio de Janeiro: Pisces, Rio de Janeiro, Brasil, 143 p. OKADA EK, AGOSTINHO AA & PETRERE JR M. 1996. Catch and effort data and the management of the commercial fisheries of Itaipu Reservoir in the Upper Paraná River, Brazil. In: Cowx UG (Ed), Stock Assessment in Inland Fisheries. Oxford: Osney Mead, Fishing News Books, Oxfordshire, England, p. 154-161. OLIVEIRA EC & FERREIRA EJG. 2008. Spawning areas, dispersion and microhabitats of fish larvae in the Anavilhanas Ecological Station, Rio Negro, Amazonas State, Brazil. Neotrop Ichthyol 6: 559-566. PEREIRA LHG, HANNER R, FORESTI F & OLIVEIRA C. 2013. Can DNA barcoding accurately discriminate megadiverse Neotropical freshwater fish fauna? BMC Genet 14: 20. PEREIRA LHG, PAZIAN MF, HANNER R, FORESTI F & OLIVEIRA C. 2011. DNA barcoding reveals hidden diversity in the Neotropical freshwater fish Piabina argentea (Characiformes: Characidae) from the Upper Paraná Basin of Brazil. Mitochondrial DNA 22: 87-96. PUILLANDRE N, LAMBERT A, ACHAZ S & BROUILLET G. 2012. ABGD, Automatic Barcode Gap Discovery for primary species delimitation. Mol Ecol 21: 1864-1877. RAMCHARITAR J, GANNON DP & POPPER NA. 2006. Bioacoustics of fishes of the Family Sciaenidae (croakers and drums). Trans Am Fish Soc 135: 1409-1431. RATNASINGHAM S & HEBERT PD. 2013. A DNA-based registry for all animal species: the Barcode Index number (BIN) system. PLoS ONE 8: e66213 RIBEIRO ADO, CAIRES RA, MARIGUELA TC, PEREIRA LHG, HANNER R & OLIVEIRA C. 2012. DNA barcodes identify marine fishes of São Paulo State, Brazil. Mol Ecol Resour 12(6): 1012-1020. RONQUIST F, TESLENKO M, VAN DER MARK P, AYRES DL, DARLING A, HÖHNA S, LARGET B, LIU L, SUCHARD MA & HUELSENBECK JP. 2012. MrBayes 3.2: efficient Bayesian phylogenetic inference and model choice across a large model space. Syst Biol 61: 539-542. ROSSO JJ, MABRAGAÑA E, GONZALES CASTRO M & DIAS DE ASTARLOA JM. 2012. DNA barcoding Neotropical fishes: recent advances from the Pampa Plain, Argentina. 
Mol Ecol Resour 12: 999-1011. SISTEMA NACIONAL DE INFORMACIÓN HÍDRICA. 2017. Cuenca de arroyos de Misiones sobre el Río Paraná hasta Posadas. Available in: https://mininterior.gov.ar/obras-publicas/ pdf/12.pdf. Last access: June 02, 2017. SOARES CL & TEIXEIRA GE. 2012. Distribuição de ovos e larvas de peixes antes e depois do represamento. In: Mazzoni R, Caramaschi EP and Iglesias RR (Eds), Usina Hidrelétrica de Serra da Mesa: 15 anos de estudo da ictiofauna do Alto Tocantins. Rio de Janeiro: Furnas, Rio de Janeiro, Brasil, p. 263-286. STRAUSS RE & BOND CE. 1990. Taxonomic methods: morphology. In: Schreck CB and Moyle PB (Eds), Methods for Fish Biology. Bethesda: American Fisheries Society, Maryland, USA, p. 109-140. YANINA F. BRIÑOCCOLI, GLADYS G. GARRIDO & ALICIA ALVAREZ DNA BARCODING IN CROAKER LARVAE An Acad Bras Cienc (2020) 92(1) e20180783 14 | 14 SVERLIJ SB, SCHENKE RLD, LÓPEZ HL & ROS AE. 2008. Peces del Río Uruguay. Guía ilustrada de las especies más comunes del Río Uruguay Inferior y el Embalse de Salto Grande. Paysandú: Comisión Administradora del Río Uruguay, Paysandú, Uruguay, 89 p. SWOFFORD DL. 2002. PAUP*. Phylogenetic Analysis Using Parsimony (*and other methods). Version 4.0b10. Sunderland: Sinauer Associates, Massachusetts, USA. TAGUTI TL, BIALETZKI A, SEVERI W, AGOSTINHO AA & FUGIMOTO ASSAKAWA L. 2015. Early development of two tropical fishes (Perciformes: Sciaenidae) from the Pantanal of Mato Grosso, Brazil. Rev Biol Trop 63: 1105-1118. TAMURA K & NEI M. 1993. Estimation of the number of nucleotide substitutions in the control region of mitochondrial DNA in humans and chimpanzees. Mol Biol Evol 10(3): 512-526. TOWER RW. 1908. The production of sound in the drumfishes, the sea-robin and the toadfish. Ann N Y Acad Sci 18: 149-180. VERA A, VENICA N, GONZÁLEZ IB, IBARRA DA & PELOZO CE. 2005. Crecimiento de la corvina en el Paraguay, tramo Formosa. Rev Cienc Tecnol Univ Nac Formosa 3: 61-67. WARD RD, ZEMLAK TS, INNES BH, LAST PR & HEBERT PD. 2005. 
DNA barcoding Australia’s fish species. Philos Trans R Soc Lond B Biol Sci 360: 1847-1857. How to cite BRIÑOCCOLI YF, GARRIDO GG & ALVAREZ A. 2020. DNA barcoding identifies three species of croakers (Pisces, Sciaenidae) in the ichthyoplankton of the High Paraná River. An Acad Bras Cienc 92: e20180783. DOI 10.1590/0001-3765202020180783. Manuscript received on July 27, 2018; accepted for publication on November 21, 2018 YANINA F. BRIÑOCCOLI1, 2 https://orcid.org/0000-0002-4461-291X GLADYS G. GARRIDO2 https://orcid.org/0000-0003-1639-9944 ALICIA ALVAREZ2 https://orcid.org/0000-0001-6234-9708 1Universidad Nacional de Misiones, Departamento de Biología, Facultad de Ciencias Exactas, Químicas y Naturales, Rivadavia 2370, N3300LDX Posadas, Misiones, Argentina 2Instituto de Biología Subtropical Nodo Posadas, CONICET- UNaM, Proyecto Biología Pesquera Regional, Rivadavia 2370, N3300LDX Posadas, Misiones, Argentina Correspondence to: Yanina F. Briñoccoli E-mail: ybrinoccoli@gmail.com Author contributions Y.F.B and G.G.G designed the study. A.A and G.G.G provided tissues. Y.F.B performed laboratory work. Y.F.B and G.G.G analyzed the data. Y.F.B wrote the manuscript. All authors contributed to the final draft of the manuscript. work_xyd6wsxcvnek3knhvks56y5zb4 ---- Abstract: Reconstruction of Nasal Tip Defects with Superior-Based Pedicle Nasolabial Island Flap | Semantic Scholar Skip to search formSkip to main content> Semantic Scholar's Logo Search Sign InCreate Free Account You are currently offline. Some features of the site may not work correctly. DOI:10.1097/01.GOX.0000526171.45552.f0 Corpus ID: 29703668Abstract: Reconstruction of Nasal Tip Defects with Superior-Based Pedicle Nasolabial Island Flap @article{Evin2017AbstractRO, title={Abstract: Reconstruction of Nasal Tip Defects with Superior-Based Pedicle Nasolabial Island Flap}, author={Nuh Evin and O. Akdag and M. Karameşe and Z. 
Tosun}, journal={Plastic and Reconstructive Surgery Global Open}, year={2017}, volume={5} }
Nuh Evin, O. Akdag, M. Karameşe, Z. Tosun. Published 2017, Plastic and Reconstructive Surgery Global Open.
METHODS: 60 patients who had non-melanocytic skin cancer only on the nasal tip were included in this study. There was no previous history of nasal surgery, allergic rhinitis, concha hypertrophy or other breathing problems in any patient. Six patients were treated with a forehead flap, ten patients with a nasolabial island flap, twenty patients with an inferiorly based bilobe flap, and 24 patients with a superiorly based bilobe flap. Function of the internal and external nasal valves were evaluated by…
work_xyfkyyeftvf7ll23kuylacld2m ---- International Journal of Environmental Research and Public Health
Review
Big Data and Digitalization in Dentistry: A Systematic Review of the Ethical Issues
Maddalena Favaretto 1,*, David Shaw 1, Eva De Clercq 1, Tim Joda 2 and Bernice Simone Elger 1
1 Institute for Biomedical Ethics, University of Basel, 4056 Basel, Switzerland; david.shaw@unibas.ch (D.S.); eva.declercq@unibas.ch (E.D.C.); b.elger@unibas.ch (B.S.E.)
2 Department of Reconstructive Dentistry, University Center for Dental Medicine Basel, 4058 Basel, Switzerland; tim.joda@unibas.ch
* Correspondence: maddalena.favaretto@unibas.ch; Tel.: +41-61-207-0203
Received: 11 March 2020; Accepted: 4 April 2020; Published: 6 April 2020
Int. J. Environ. Res. Public Health 2020, 17, 2495; doi:10.3390/ijerph17072495
Abstract: Big Data and Internet and Communication Technologies (ICT) are being increasingly implemented in the healthcare sector. Similarly, research in the field of dental medicine is exploring the potential beneficial uses of digital data both for dental practice and in research.
As digitalization is raising numerous novel and unpredictable ethical challenges in the biomedical context, our purpose in this study is to map the debate on the currently discussed ethical issues in digital dentistry through a systematic review of the literature. Four databases (Web of Science, PubMed, Scopus, and Cinahl) were systematically searched. The study results highlight how most of the issues discussed by the retrieved literature are in line with the ethical challenges that digital technologies are introducing in healthcare, such as privacy, anonymity, security, and informed consent. In addition, image forgery aimed at scientific misconduct and insurance fraud was frequently reported, together with issues of online professionalism and commercial interests sought through digital means.
Keywords: Big Data; digital dentistry; oral health; ethical issues
1. Introduction
The sophistication and increased use of Internet and Communication Technologies (ICT), the rise of Big Data and algorithmic analysis, and the emergence of the Internet of Things (IoT) are interconnected phenomena that are currently having an enormous impact on today's society and affecting almost all spheres of our lives. In recent years, we have seen an exponential growth in the generation, storage, and collection of computational data, and the digital revolution is transforming an increasing number of sectors in our society [1,2]. In the biomedical context, for instance, digital technologies are finding numerous novel applications to improve healthcare, cut costs for hospitals, and maximize treatment effectiveness for patients. Examples of such implementations include the development of electronic health records (EHRs) and smarter hospitals for increased workflow [3], personalized medicine and linkage of health data [4], clinical decision support for novel treatment concepts [5], and deep learning and Artificial Intelligence (AI) for diagnostic analysis [6].
In addition, the implementation of mobile technologies into the medical sector is fundamentally altering the ways in which healthcare is perceived, delivered, and consumed. Thanks to the ubiquity of smartphones and wearable technologies, mobile health (mHealth) applications are currently being explored by healthcare providers and companies for remote measurement of health and provision of healthcare services [7]. Dentistry, as a branch of medicine, has not remained unaffected by the digital revolution. The trend in digitalization has led to an increased production of computer-generated data in a growing number of dental disciplines and fields—for example, oral and maxillofacial pathology and surgery, prosthodontics and implant dentistry, and oral public health [8–10]. For this reason, research in the field of dental medicine is currently focusing on exploring the numerous potential beneficial applications of digital and computer-generated data both for dental practice and in research. Population-based linkage of patient-level information could expand new approaches for research such as assisting with the identification of unknown correlations of oral diseases with suspected and new contributing factors and furthering the creation of new treatment concepts [11]. AI applications could help enhance the analysis of the relationship between prevention and treatment techniques in the field of oral health [12]. Digital imaging could promote accurate tracking of the distribution and prevalence of oral diseases to improve healthcare service provisions [13].
Finally, the creation of the digital or virtual dental patient, through the application of sophisticated dental imaging techniques (such as 3D cone-beam computed tomography (CBCT) and 3D printed models), could be used for precise pre-operative clinical assessment and simulation of treatment planning in dental practice [9,14]. As these technologies are still at the early phases of implementation, technical issues and disadvantages might also emerge. For instance, data collection for the implementation of Big Data applications and AI must be done systematically according to harmonized and inter-linkable data standards, otherwise issues of data management and garbage data accumulation might arise [15]. AI for diagnostic purposes is still in the very early phases, where its accuracy is being assessed, and although such systems are proving valuable for image-based diagnoses, analysis of diverse and massive EHR data still remains challenging [16]. Finally, with regard to the simulation of a 3D virtual dental patient, dataset superimposition techniques are still experimental, and none of the currently available imaging techniques are sufficient to capture the complete dataset needed to create the 3D output in a single-step procedure [9].
In the past few years, alongside the ambitious promises of digital technologies in healthcare, the research community has also highlighted many of the potential ethical issues that Big Data and ICT are raising for both patients and other members of society. In the biomedical context, data technologies have been claimed to exacerbate issues of informed consent for both patients and research participants [17,18], and to create new issues regarding privacy, confidentiality [19–21], data security and data protection [22], and patient anonymization [23] and discrimination [24–26].
Recent research has also emphasized further pressing challenges that could emerge from the inattentive use of increasingly sophisticated digital technologies, such as issues of accuracy and accountability in the use of diagnostic algorithms [27] and the exacerbation of healthcare inequalities [25]. As dentistry is also following the digital path, similar ethical issues might emerge from the application of ICT and Big Data technologies. To the best of our knowledge, there is currently no systematic evaluation of the different ethical issues raised by Big Data and ICT in the field of dentistry, as most of the literature on the topic generally focuses on non-dental medicine and healthcare [28]. As timely ethical evaluation is a consistent part of appropriate health technology assessment [29], and because recent literature has focused on the ethical issues concerning health-related Big Data [28], it is of the utmost importance to map the occurrence of the ethical issues related to the application of heterogeneous digital technologies in dental medicine and to investigate whether specific ethical issues for dental Big Data are emerging. We thus performed a systematic review of the literature. The study has the following aims: (1) mapping the identified ethical issues related to the digitalization of dental medicine and the applications of Big Data and ICT in oral healthcare; (2) investigating the suggested solutions proposed by the literature; and (3) understanding if some applications and practices in digital dentistry could also help overcome some ethical issues.
2. Materials and Methods
We performed a systematic literature review by searching four databases: PubMed, Web of Science, Scopus, and Cinahl.
The following search terms were used: "big data", "digital data", "data linkage", "electronic health record*", "EHR", "digital*", "artificial intelligence", "data analytics", "information technology", "dentist*", "dental*", "oral health", "orthodont*", "ethic*", and "moral*". No restriction was placed on the type of methodology used in the paper (quantitative, qualitative, mixed methods, or theoretical). No time restriction was used. In order to enhance reproducibility of the study, we only included original research articles from peer-reviewed journals; therefore, grey literature, books (monographs and edited volumes), conference proceedings, dissertations, and posters were omitted. English was selected as it is the designated language of the highest number of peer-reviewed academic journals. The search was performed on 24 January 2020 (see Table 1).
Table 1. Search terms.
No. | Match Search Terms | PubMed | Web of Science | Scopus | Cinahl
1 | ("big data" OR "digital data" OR "data linkage" OR "electronic health record*" OR "EHR" OR "digital*" OR "artificial intelligence" OR "data analytics" OR "information technology") | 251,004 | 4,682,526 | 1,750,766 | 67,116
2 | ("dentist*" OR "dental*" OR "oral health" OR "orthodont*") | 827,547 | 1,409,796 | 613,348 | 158,231
3 | ("ethic*" OR "moral*") | 334,537 | 582,299 | 528,738 | 98,246
4 | 1 AND 2 AND 3 | 190 | 186 | 71 | 63
We followed the protocol from the Preferred Reporting Item for Systematic Reviews and Meta-Analyses (PRISMA) method [30], which resulted in 510 papers. We scanned the results for duplicates (125), after which 385 papers remained. In this phase, we included all articles that focused on digitalization of dentistry or on one specific digital technology in the field of dentistry and that mentioned, enumerated, discussed, or described one or more ethical challenges related to digitalization.
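The Boolean strategy in Table 1 (row 4 is the conjunction of the three OR-groups) can be assembled programmatically. The snippet below is an illustrative sketch, not the authors' actual tooling; the term lists are copied from the text, while the exact wildcard and field syntax varies between databases.

```python
# Assemble the Table 1 search string: three OR-groups joined with AND (row 4).
data_terms = ["big data", "digital data", "data linkage",
              "electronic health record*", "EHR", "digital*",
              "artificial intelligence", "data analytics",
              "information technology"]
dental_terms = ["dentist*", "dental*", "oral health", "orthodont*"]
ethics_terms = ["ethic*", "moral*"]

def or_group(terms):
    """Quote each term, join with OR, and wrap the group in parentheses."""
    return "(" + " OR ".join(f'"{t}"' for t in terms) + ")"

# Row 4 of Table 1: 1 AND 2 AND 3
query = " AND ".join(or_group(g) for g in (data_terms, dental_terms, ethics_terms))
print(query)
```

The same string can then be pasted into each database's advanced-search field, with per-database adjustments to truncation symbols where needed.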
Papers that only described a technology from a technical point of view, that did not focus on dentistry or focused generally on medical practice, or that did not relate to the ethical challenges of digitalization were excluded. Additional papers (27) were excluded because they were book sections, posters, conference proceedings, or not in English. In total, 356 papers were excluded. We subsequently scanned the references of the remaining 29 articles to identify additional relevant studies. We added five papers through this process. The final sample included 34 articles. During the next phase, the first author read the full texts in their entirety. After thorough evaluation, eight articles were excluded for the following reasons: (1) they did not discuss or mention any ethical issue related to the technology discussed in the study; and (2) they did not refer to any digital implementation in dentistry (see Figure 1). The subsequent phase of the study involved the analysis of the remaining 26 articles. Regarding data analysis, we carried out a narrative synthesis of included publications [31]. Therefore, we extracted the following information relevant to the aim of the present study and to the research question from the papers: year and country of publication; methodology; type of technology or digital application discussed; field of application of the article; ethical issues that emerge from the use of the technology; technical issues that might exacerbate the ethical issues discussed; suggested potential solutions to the issue(s); and ethical issues that the technology could help overcome.
Figure 1. Preferred Reporting Item for Systematic Reviews and Meta-Analyses (PRISMA) flowchart.
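The screening flow just described (and summarized in Figure 1) is internally consistent, which can be confirmed with a few lines of arithmetic. The counts below are taken from the text; the script itself is only an illustrative check.

```python
# Sanity check of the PRISMA flow numbers reported in the Methods section.
hits = {"PubMed": 190, "Web of Science": 186, "Scopus": 71, "Cinahl": 63}

identified = sum(hits.values())      # records retrieved by the four searches
after_dedup = identified - 125       # 125 duplicates removed
after_screening = after_dedup - 356  # 356 records excluded at screening
final_sample = after_screening + 5   # 5 papers added via reference scanning
analyzed = final_sample - 8          # 8 excluded after full-text reading

assert identified == 510
assert after_dedup == 385
assert after_screening == 29
assert final_sample == 34
assert analyzed == 26
```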
3. Results
Among the 26 papers included in our analysis, 22 were theoretical papers that critically discussed the impact of digitalization in the field of dentistry or that discussed a specific technology, highlighting its promises and some of its ethical challenges. Among the remaining papers, three applied empirical methods and one was a feasibility study. The majority of papers (n = 20) were published after 2010, five were published between 2008 and 2010, and one was from 1996. Half of the articles (n = 13) were from the United States, five came from the United Kingdom, and four from India. The remaining ones came from Belgium, Brazil, Germany, and South Africa. Regarding the type of technological application they discussed, almost one-third of the papers (n = 8) analyzed digital photography, radiology, and computed imaging; six papers discussed the impact of digital communication and social media in dentistry; three articles focused on electronic health records (EHRs) and patient records; another three discussed the promises and challenges of mobile health and teledentistry; and an additional three records focused on data linkage and personalized medicine. In addition, two papers broadly discussed the challenges and promises of ICT and digital implementations in dentistry, while one paper focused on search engine optimization in dental practices. Finally, concerning the field of application of the different papers, 10 articles discussed the ethical issues of digitalization regarding dental practice, nine discussed digitalization and digital applications for dentistry without a specific focus, five focused on education and dental school, and two discussed applications in research (see Table 2).
Table 2. Retrieved papers. EHR, electronic health record; mHealth, mobile health; CBCT, cone-beam computed tomography; ICT, internet and communication technologies.
Author, Year, Country | Design | Participants | Technology Discussed | Field of Application | Ethical Issues
Boden (2008), USA | Theoretical | - | Digital transfer of patient records | Dental practice | Justice and autonomy: high charges for the patient prevent beneficial use of records for future patient treatment
Calberson et al. (2008), Belgium | Theoretical | - | Digital radiography | General | Fraudulent use of radiographs
Cederberg and Valenza (2012), USA | Theoretical | - | EHR (in dental schools) | Dental school | Justice, patient privacy and security, shift in doctor-patient relationship, misconduct from students
Chambers (2012), USA | Theoretical | - | Digital communication | Dental practice | Shift in doctor-patient relationship, patient privacy and security, professionalism
Cvrker (2018), USA | Theoretical | - | mHealth | General | Patient access, data ownership, patient privacy and security, bystanders
da Costa et al. (2012), Brazil | Theoretical | - | Teleorthodontics | General | Patient privacy and security
Day et al. (2018), UK | Feasibility study | Birth cohort in the United Kingdom | Data linkage | Research | Anonymization, data ownership
Eng et al. (2012), USA | Theoretical | - | Personalized dentistry | General | Discrimination, confidentiality
Gross et al. (2019), Germany | Theoretical | - | Digitalization in dentistry | General | Shift in doctor-patient relationship, data literacy, responsibility and accountability for AI, digital footprint
Indu et al. (2015), India | Empirical | A sample of postgraduate students and teaching faculties of oral pathology in India | Digital photography | General | Anonymity and security
Jampani et al. (2011), India | Theoretical | - | Teledentistry | General | Confidentiality, patient privacy and security, consent
Kapoor (2015), India | Empirical | - | Digital photography and radiology | General | Fraudulent use of radiographs/photographs, scientific misconduct
Khelemsky (2011), USA | Theoretical | - | CBCT | Dental practice | Harm to patient, consent
Knott (2013), UK | Theoretical | - | ICT | Dental practice | Anonymity, data security, patient privacy
Luther (2010), UK | Theoretical | - | Digital forensics | Research | Fraudulent use of images, scientific misconduct
Neville and Waylen (2015), UK | Theoretical | - | Social media | Dental practice | Shift in doctor-patient relationship, patient confidentiality, privacy, anonymity
Oakley and Spallek (2012), USA | Theoretical | - | Social media | Dental school | Shift in doctor-patient relationship, patient privacy and confidentiality, miscommunication, boundary violation
Peltier and Curley (2013), USA | Theoretical | - | Social media | Dental practice | Dishonest/unlawful advertising, patient confidentiality
Rao et al. (2010), India | Empirical | A sample of randomly selected clinicians in India | Digital photography | General | Fraudulent use of photographs, scientific misconduct
Spallek et al. (2015), USA | Theoretical | - | Social media | Dental school | Shift in doctor-patient relationship, patient privacy and confidentiality, miscommunication, boundary violation
Stieber et al. (2015), USA | Theoretical | - | Electronic media and digital photography | Dental school | Patient privacy and confidentiality, autonomy and consent
Swirsky et al. (2018), USA | Theoretical | - | Search engine optimization | Dental practice | Beneficence, autonomy, consent, conflict of interest and undue influence
Table 2. Cont.
Author, Year, Country Design Participants Technology Discussed Field of Application Ethical Issues Sykes et al (2017), South Africa Theoretical Social Media Dental practice Patient privacy, anonymity, confidentiality and consent, professionalism, shift in patient doctor relationship, misleading advertisement Szekely et al. (1996), USA Theoretical EHR Dental practice Patient privacy and confidentiality, security Wenworth (2010), USA Theoretical Digital Radiography Dental practice Patient privacy and confidentiality, misleading advertisement Zijlstra-Shaw and Stokes (2018), UK Theoretical Big Data analytics (in dental education) Dental school Consent and data ownership 3.1. Implementation of Digital Technologies in Dentistry Two papers generally discussed the ethical implications that ICT and digitalization are introducing in dentistry [32,33]. According to Gross et al. [32], digitalization of dentistry is influencing the patient doctor relationship as the integration of digital technologies could distract attention away from the patient during the visit. Issues of data literacy can arise for both the dentist—who will need to constantly be updated on the latest technologies—and the patient—who will need to understand how new technologies work, possibly disfavoring people with poor computer literacy such as the elderly. The application of AI for diagnostic purposes could create issues of responsibility and accountability. A shift might occur towards overtreatment of the patient owing to increased demand for the use of digitized systems. In addition, the constant use, refurbishment, and replacement of increasingly new technology leaves a remarkable digital footprint and aggravates digital pollution. Finally, digital technologies create issues of data security, data falsification, and privacy issues regarding identifiable patient information [33]. 3.2. 
Big Data and Data Analytics
Nine papers discussed the increased employment of Big Data and data analytics in dentistry in relation to different applications, such as data linkage [34], data analytics in dental schools [35], personalized medicine [36], EHRs [37–39], and mHealth and teledentistry [40–42].
3.2.1. Electronic Health Records (EHRs)
Three papers focused on the implementation of EHRs both in private practices and in dental education [37–39]. Ethical issues that arise from this technology concern data security, as sensitive patient information could be more easily accessed by unauthorized third parties, resulting in a breach of patient privacy and confidentiality [38,43]. In addition, Cederberg and Valenza [38] argue that the use of digital records might compromise the doctor–patient relationship in the future, as easy access to all relevant information through digital means and forced focus on the computer screen could accustom students to becoming more detached from patients. A suggested solution for the privacy and security issues related to EHRs is the implementation of a three-zone confidentiality model of medical information for databases, both linked (networked) and non-linked, where different levels of access and security are put in place for different areas—from a more secure inner area that holds the most sensitive information about the patients (e.g., HIV status and psychiatric care) to an outer, less secure area containing generally publicly available information [37].
3.2.2. mHealth and Teledentistry
Ethical concerns related to mHealth and teledentistry—that is, the use of information technologies and telecommunications to remotely provide dental care and education and to raise oral health awareness—were raised by three articles [40–42].
As for other Big Data technologies, issues of data security and patient anonymity [40,41] and confidentiality [42] were the most mentioned, as networked transfer through unsecure means could enable unwarranted third parties to obtain easier access to sensitive patient data. mHealth might also have an impact on consent, both for the patient, who might not have been appropriately informed about all of the risks that teledentistry implies [42], and for non-consenting bystanders, whose data might be collected by the device the patient is using [41]. Furthermore, Cvrkel [41] argued that, first, mHealth creates additional vulnerability, as smartphones gather additional data that are usually not collected by healthcare practitioners (e.g., fitness data, sleep patterns) and, as objects of everyday use, might be easily accessible to unauthorized people. Second, easy access through the smartphone to raw data, including data related to dental care, could be counterproductive and harmful for patients, who might self-adjust the prescription given by the practitioner. Among the suggested solutions are the following: (a) the establishment of secure networked communication, such as state-of-the-art firewalls and antivirus software, to mitigate security concerns in telecommunications [40]; (b) the formulation of high-quality consent processes that appropriately make the user aware of the risks and all relevant factors [41]; and (c) the provision of information and education about the specific issues that such technology raises for dentists who want to employ teledentistry in their practice.
3.2.3. Personalized Medicine and Data Linkage
In the context of data linkage in dental practices, personalized medicine, and dental schools, the analyzed articles reported how consent issues might arise concerning data usage when the student or the patient cannot be completely informed about the ways in which the collected data are used [35].
Data anonymization [34] and patient confidentiality [36] were again both mentioned as issues of data linkage. Finally, Eng et al. [36] highlighted how discrimination based on higher risk for specific diseases might emerge from the linkage of different databases in personalized medicine. In order to overcome these issues, Eng et al. [36] suggested developing protective measures at both the legal and the clinical level to ensure patient data confidentiality and security.
3.3. Digital Communication and Social Media in Dentistry
Seven papers discussed the impact that the employment of digital communication and social media could have upon dental practices and the dentist–patient relationship [44–50]. According to the retrieved studies, one of the main issues is the possibility that commercial values might creep into the management of private practices' websites and official social media pages [44]. For instance, digital media broadcasts might deliver a distorted image of the practice, resulting in misleading or dishonest advertisement of state-of-the-art dental technologies or dental practices, thus exercising an undue influence on patients [47,49]. In addition, Swirsky et al. [50] raised a concern regarding unethical search engine optimization, an aggressive marketing technique aimed at making one's own website appear before others in popular search engines. This practice creates a conflict of interest between the dental profession and the patient/public. Furthermore, the introduction of digital communication in dental practices has profound effects on the dentist–patient relationship. Neville and Waylen [45] indicate how the use of social media pages is blurring the personal and professional divide. Via social media, patients might have access to information about their dental providers that could compromise the doctor–patient relationship and create issues of trust between the two parties. For instance, shared posts and messages of doctors might be misinterpreted by the users (patients) and be considered unprofessional. Likewise, privacy issues might occur in the case where a dentist visits the personal social media page of their patient and uncovers information that the patient did not want to share with them [46,48]. In addition, doctor–patient confidentiality could be breached by dentists both willingly and inadvertently, if information about a patient is disclosed online, such as identifiable patient photographs or sensitive treatment details [47,49]. Suggested practices to avoid such issues are the development of adequate policies for the use of social media in dental practices and increased education for dental practitioners regarding online professionalism—such as awareness of the ethical issues and of the rules of conduct to be observed while using social media [48,49].
3.4. Digital Photography and Radiography
The technology discussed by eight of the collected papers was digital photography and digital radiography [51–58]. Among them, four articles [51,53,55,56] highlighted that image modification, made easier by the digitalization of both dental photography and radiography, could result in misconduct in science and the fraudulent use of modified pictures. Practitioners could be tempted to modify radiographs to deceive insurance companies [51], and researchers might do the same to falsify the results of their research [55]. Three papers correlated the ethical issues of digital imagery to the digital sharing and storage of images [52,57,58]. For instance, issues of data security and of patient privacy and confidentiality might arise owing to inattentive storage of images (if digital photographs are stored for too long on an SD card, or if images are shared via electronic means such as emails, smartphones, or networking apps such as WhatsApp) [52]. In addition, Stieber et al.
[57] indicate how even patient autonomy and consent might be breached if the images are used in an unauthorized manner, such as posting them on a public forum. Finally, one paper that discussed the ethical issues of digital dental imaging focused on a particular diagnostic technology: cone beam computed tomography (CBCT) [54]. The issues highlighted for this technology relate to harm, as its routine use exposes patients, especially children and adolescents, to excessive radiation, and to consent, if patients are not appropriately informed about the health risks they are exposed to when undergoing this diagnostic exam. Some papers also highlighted potential solutions. Regarding image modification, the application of state-of-the-art anti-forgery techniques was suggested [51], as well as the development of appropriate guidelines to set an acceptable standard for image modification in dentistry [53]. As for image sharing issues, Stieber et al. [57] suggested the implementation of a privacy-compliant framework, where informed consent is enhanced in order to give patients more control over how their images are used, while Indu et al. [52] proposed the use of only custom apps built exclusively for medical data sharing.
3.5. Digital Dentistry Might Solve Ethical Issues
Finally, almost one-third of the papers not only discussed ethical issues, but also mentioned how some of these technologies could assist in solving ethical issues in dentistry and oral health. For instance, the application of digital technologies could result in the empowerment of patients and the democratization of oral health knowledge, owing to the increased and widespread information that can easily be retrieved on the Internet [32].
mHealth and teledentistry were argued to be powerful tools to (a) fight known inequalities in healthcare and provide better treatment and patient care for vulnerable populations, thanks to the increased saturation of mobile phones and communication technologies that will allow them easier access to health information and remote treatment [41]; (b) overcome cultural and geographic barriers in oral health [40]; and (c) help eliminate the disparities in oral health care between rural and urban communities [42]. The provision of information about preventive health care and oral health issues through social media could positively influence and promote oral healthcare [46,49]. The implementation of research through correlation and data linkage between birth cohorts in the United Kingdom and oral health habits could ameliorate public oral health issues such as caries prevention for children and adolescents [34]. Finally, digital forensics, that is, the digital analysis of images, could help with the recognition of scientific misconduct in dental research [55].
4. Discussion
The analyzed literature raised a plethora of intertwined ethical issues across different technologies and practices in dentistry. Numerous issues are in line with the commonly mentioned ethical challenges that digital technologies are introducing in healthcare—privacy, anonymity, security, and so on. On the other hand, additional aspects emerged for dental medicine—such as commercialization and image forgery—that are usually less associated with the digitalization of healthcare and Big Data [28]. The most frequently mentioned ethical issues related to the increased digitalization of dentistry are those related to patient privacy, which is often associated with anonymization and confidentiality.
This is in line with a study by Mittelstadt and Floridi [28], which highlighted that this cluster of issues related to patient privacy is the one most correlated by scholarly research with Big Data technologies such as data analytics, IoT, and social media use. In the era of digitalization, with the increased implementation of EHRs and digital data management, privacy issues become paramount, notably also in dentistry, on account of the opportunities for patient treatment development and research offered by data linkage. Important ethical issues could be overlooked if it is assumed that dental health data are less sensitive than, for example, mental health or stigmatizing infectious disease data. On the contrary, dental health data are sensitive for a number of specific reasons. For example, economic or marketing discrimination, that is, inequality in the pricing and offers given to customers based on profiling (e.g., in insurance or housing) [59], or discrimination based on health data and health prediction [60], are practices that are emerging from the exploitation of digital records and might be exacerbated by the analysis of dental records and the use of mHealth in dentistry. Informed consent was another issue often mentioned by the selected papers, although surprisingly not in relation to the reuse of EHR data. From an ethical and legal point of view, consent needs to be specific concerning three different activities: use for clinical care; clinical trials, where new Big Data technologies are used in dental patients; and secondary use of data for research or other purposes (such as marketing). For use in the clinical setting, issues of informed consent are not so prominent, as the EHR would function as a substitute for a paper patient chart, leaving more concerns in the area of data security and patient privacy.
However, as Big Data applications for the secondary use of EHR data are becoming an increasingly implemented research practice, and issues of consent for EHRs and Big Data are quite often discussed in the biomedical context [28], more research should be devoted to this area in the dental field. In fact, only three retrieved papers focused on EHRs—they mostly targeted clinical care, and two of them were from before 2010, which may explain why they did not consider the implications of Big Data and the secondary use of data from health records that are currently causing dilemmas of consent from both an ethical and a regulatory point of view [17,61]. Consent was also briefly mentioned by the retrieved papers in relation to data linkage and personalized medicine, but overall, the literature has not sufficiently analyzed the issues of data linkage and secondary use of data in dentistry. In fact, electronic dental records increasingly include sensitive and complementary data about the patient, such as automatic tooth charting, general patient health information, treatment plans, radiographic captures of the mouth, and intraoral photography [43], which could be linked and analyzed for research and app development purposes without obtaining the appropriate patient approval. Cvrkel [41], in the context of mHealth, suggested deflecting the discussion from privacy concerns to the development of high-quality consent practices for both clinical and secondary research use. On the basis of a recent study by Valenza et al. [62], which assessed the benefits of "Smart consent" strategies that take into account patients' preferences and desires regarding both treatment and the use of their dental data, we argue that the implementation of better consent policies and strategies could also be beneficial for electronic dental records, in order to face not only privacy issues related to clinical care, but also issues of consent related to the secondary use of data. As might be expected, considerable space was given to digital photography and radiology in dentistry. Ethical issues were raised in two directions. First, concerns of patient privacy and anonymity and of data security were highlighted in relation to the storage and sharing of digital images [52,57,58]. These issues are of a comparable nature to those enumerated for EHRs, mHealth, and teledentistry, which principally have to do with possible access to sensitive patient information by unwarranted parties and the interception of digital communications. Interestingly, substantial weight was given to the topic of image forgery. According to the literature, image modification for fraudulent purposes such as insurance fraud and scientific misconduct is an expanding practice within dentistry [55,56]. The main problem is that the introduction of digital imagery in our society has exponentially increased the ease with which digital photographs can be manipulated and changed, both in the early and late stages of image production, to a point where essential information about the subject of the image might be falsified [63]. As a consequence, numerous scholars who focused on the epistemic status of photographs and digital imaging have tried to analyze the challenges that digital imaging poses to the epistemic consistency of images [63–65]. The question is, in our opinion, whether in the case of image modification in dentistry a well-defined line can be drawn between acceptable modifications, which prevent misinterpretation or misreading by the observer, and modifications that would let the image fall into the category of image forgery.
Following clear guidelines on the ethics of image modification [66] could assist practitioners in making the right choices, but might not be enough. Well-intentioned image modification, such as changing the background, modifying light sources, over- and underexposure, cropping, color modification, and so on, might unintentionally alter the epistemic consistency of an image, as the limit of acceptable alterations that digital images can endure while maintaining their epistemic value is vague and undetermined [63]. Another interesting finding of this study is that numerous articles—almost one-third of the total, and all theoretical papers—rather than expanding on the ethical issues that derive from the application of a medical/dental digital technology, focused on how digital communication could have an impact on the practice of dental care itself and on the dentist–patient relationship. Some of the retrieved papers [44–49], in fact, highlighted how the inappropriate use of social media by dentists could compromise trust between dental practitioners and patients, owing either to the leakage of confidential information about patients, such as treatment outcomes or identifiable pictures, or to displays of inappropriate behavior on their private social media pages. As the use of social media permeates our everyday life, blurring the line between private and public, social media and online professionalism are topics that have been increasingly addressed in other areas of healthcare as well [67,68]. The ethical challenge here seems to be twofold. First, education regarding the professional use of social media for dental practitioners could be enhanced by the implementation of rules and social media policies that clearly state the "dos and don'ts" of managing a social media page, such as the following: do not post identifiable pictures of patients without their consent; do not discuss patient treatment on the page; and so on [48].
However, if a breach of confidentiality should occur through inattentiveness, the reach of the leaked information would be greater than in face-to-face exchanges, exponentially expanding the scale of the mistake [67]. Second, it is more challenging to implement strategies to appropriately educate dental practitioners about their private social media behavior. It has been argued by Greysen et al. [67] that some online content that might be flagged as unprofessional—such as posts concerning off-duty drinking and intoxication, or the advertisement of radical political ideals that might call a practitioner's professionalism into question—does not clearly violate any existing principle of medical professionalism, as it is posted in the private sphere. In addition, even the interactions that a health practitioner might have with the private social media page of a patient become an intricate matter that might raise ethical dilemmas. Merely by accessing the page of their patient, the doctor could access private information such as their marital status, sexual orientation, or political orientation, which might have an impact, either conscious or unconscious, on the practitioner's personal perception of the patient [69]. Things become even more complicated if the healthcare professional retrieves posts or photos on social media sites that depict patients participating in risk-taking or health-averse behaviors [67]. All of this information might create a fracture in the patient–doctor relationship, as implicit bias and conflicts of interest might prevent medical practitioners from providing the patient with the best care [69,70]. In addition, another interesting challenge raised by almost all of the papers that discussed digital communication in dentistry concerned the issues of commercialization and conflict of interest that interfere with patient care.
A strong focus of some of the papers was on the possible exertion of undue influence on the patient through misleading advertisement of private practices and state-of-the-art dental procedures. As Chambers [44] argues, the dentist–patient relationship should never shift to one of customer–provider, and commercial interests should always remain subordinate to oral health, as the well-being of the patient should always come first. In addition, according to the American Dental Association's (ADA) Code of Conduct: "dentists who, in the regular conduct of their practices, engage in or employ auxiliaries in the marketing or sale of products or procedures to their patients must take care not to exploit the trust inherent in the dentist–patient relationship for their own financial gain [...] and no dentist shall advertise or solicit patients in any form of communication in a manner that is false or misleading in any material respect" [71]. Doing so would negate the patient's right to self-determination and accurate information [50]. As additional technological developments are increasingly introduced into dental practices, it is of the utmost importance that strong measures are taken to limit commercial interests in dental practice. In addition, while a substantial number of papers focused on digital photography and radiography, as well as on the impact of digital communication on dental practice, this systematic review highlighted gaps regarding some of the applications that data technologies have in dentistry and the possible ethical issues that might emerge as a consequence. For instance, the implementation of AI applications for diagnostic purposes in dentistry [12] and the sophistication of 3D imaging technologies for pre-operative clinical assessment [9] were not discussed in the retrieved literature.
In addition, very few of the retrieved papers focused on the increased application of Big Data analytics and the data linkage of health-related data. Shetty et al. [72] highlighted how the debate on digital dentistry reflects the traditional dental delivery model and usually focuses on micro trends in technology development, such as technology-assisted services (e.g., computer-aided design/computer-aided manufacturing (CAD/CAM)), digital radiography, and electronic patient records. However, trends in the implementation of Big Data technologies such as mHealth, social media, and AI are transforming oral healthcare through social and technical influences from outside the dental profession, as has been seen in relation to social media use by dental providers. In addition, it has recently been argued that the current literature on digital dentistry tends to focus on its beneficial potential or on the technical challenges of the discussed technology without appropriately addressing the ethical issues that these technologies might raise [32]. Also, our review indicates that, while a theoretical discussion on this topic is emerging, empirical studies on the ethical issues of digital implementations in dentistry are largely lacking. As a consequence, owing to the sensitive nature of the data included in electronic dental records, the specific digital implementations in dental practice and research, and the gaps in the literature regarding the ethical analysis of some dental applications, it is of the utmost importance to conduct additional research, and especially more evidence-based studies, on the specific ethical issues related to the field of digital dentistry in order to appropriately understand and confront them. Finally, only a few papers mentioned ethical issues that could be solved by digital dentistry.
In addition to those mentioned in Section 3.5, there are two other contenders for useful applications of Big Data research. It has historically been very difficult to conduct epidemiological research on the relationship (if any) between the public health measure of adding fluoride to water supplies and the incidence of dental fluorosis in children, owing to the very high number of variables and confounders involved in such research. Big Data analytics could make sense of this difficult area of research, helping to address the public health ethics of water fluoridation [73]. Similarly, antibiotic prophylaxis before dental treatment in patients who have undergone heart surgery remains a contentious area, with dentists tending to recommend against it despite heart surgeons supporting the prescription of antibiotics [74]. Big Data research could help to shed some light on this difficult ethical dilemma.
5. Conclusions
Our study highlighted how most of the issues presented for digital dental technologies such as electronic dental records, mHealth, and teledentistry, as well as developments in personalized medicine, are in line with those most discussed in the debate regarding the application of ICT in healthcare, namely, patient privacy, confidentiality and anonymity, data security, and informed consent. In addition to those issues, image forgery aimed at scientific misconduct and insurance fraud was frequently reported in the literature. Moreover, the present review identified how major concerns in the field of dentistry are related to the impact that an improper use of ICT could have on dental practice and the doctor–patient relationship. In this context, issues of online professionalism were raised, together with issues of aggressive or misleading advertising on social media and the web.
Finally, additional research should be conducted to properly assess the ethical issues that might emerge from the routine application of increasingly novel technologies.
Author Contributions: Conceptualization, M.F., E.D.C., and D.S.; methodology, M.F. and E.D.C.; data analysis, M.F. and E.D.C.; writing—original draft preparation, M.F.; writing—review and editing, D.S., E.D.C., T.J., and B.S.E.; supervision, B.S.E.; funding acquisition, B.S.E. All authors have read and agreed to the published version of the manuscript.
Funding: The funding for this research paper was provided by the Swiss National Science Foundation in the framework of the National Research Program "Big Data", NRP 75 (Grant No. 407540_167211).
Acknowledgments: The first author would like to thank Christophe Schneble for his support during data collection and analysis.
Conflicts of Interest: The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.
References
1. Lynch, C. How do your data grow? Nature 2008, 455, 28–29. [CrossRef]
2. Boyd, D.; Crawford, K. Critical questions for Big Data. Inf. Commun. Soc. 2012, 15, 662–679. [CrossRef]
3. Mertz, L. Saving Lives and Money with Smarter Hospitals: Streaming analytics, other new tech help to balance costs and benefits. IEEE Pulse 2014, 5, 33–36. [CrossRef] [PubMed]
4. Cohen, I.G.; Amarasingham, R.; Shah, A.; Xie, B.; Lo, B. The Legal and Ethical Concerns That Arise from Using Complex Predictive Analytics in Health Care. Health Aff. 2014, 33, 1139–1147. [CrossRef] [PubMed]
5. Lee, C.H.; Yoon, H.-J. Medical big data: promise and challenges. Kidney Res. Clin. Pract. 2017, 36, 3–11. [CrossRef]
6. Liu, X.; Faes, L.; Kale, A.U.; Wagner, S.K.; Fu, D.J.; Bruynseels, A.; Mahendiran, T.; Moraes, G.; Shamdas, M.; Kern, C.; et al.
A comparison of deep learning performance against health-care professionals in detecting diseases from medical imaging: a systematic review and meta-analysis. Lancet Digit. Health 2019, 1, e271–e297. [CrossRef]
7. Nilsen, W.J.; Kumar, S.; Shar, A.; Varoquiers, C.; Wiley, T.; Riley, W.T.; Pavel, M.; Atienza, A.A. Advancing the Science of mHealth. J. Health Commun. 2012, 17, 5–10. [CrossRef]
8. Fasbinder, D.J. Digital dentistry: innovation for restorative treatment. Compend. Contin. Educ. Dent. 2010, 31, 2–11.
9. Joda, T.; Wolfart, S.; Reich, S.; Zitzmann, N.U. Virtual Dental Patient: How Long Until It's Here? Curr. Oral Health Rep. 2018, 5, 116–120. [CrossRef]
10. Finkelstein, J.; Zhang, F.; Levitin, S.A.; Cappelli, D. Using big data to promote precision oral health in the context of a learning healthcare system. J. Public Health Dent. 2020, 80, S43–S58. [CrossRef]
11. Joda, T.; Waltimo, T.; Pauli-Magnus, C.; Probst-Hensch, N.; Zitzmann, N.U. Population-Based Linkage of Big Data in Dental Research. Int. J. Environ. Res. Public Health 2018, 15, 2357. [CrossRef] [PubMed]
12. Joda, T.; Waltimo, T.; Probst-Hensch, N.; Pauli-Magnus, C.; Zitzmann, N.U. Health Data in Dentistry: An Attempt to Master the Digital Challenge. Public Health Genom. 2019, 22, 1–7. [CrossRef] [PubMed]
13. Hogan, R.; Goodwin, M.; Boothman, N.; Iafolla, T.; Pretty, I.A. Further opportunities for digital imaging in dental epidemiology. J. Dent. 2018, 74, S2–S9.
[CrossRef] [PubMed] 14. Vandenberghe, B. The digital patient – Imaging science in dentistry. J. Dent. 2018, 74, S21–S26. [CrossRef] 15. Brodt, E.D.; Skelly, A.C.; Dettori, J.R.; Hashimoto, R.E. Administrative Database Studies: Goldmine or Goose Chase? Evid Based Spine Care J. 2014, 5, 74–76. [CrossRef] 16. Liang, H.; Tsui, B.Y.; Ni, H.; Valentim, C.C.S.; Baxter, S.L.; Liu, G.; Cai, W.; Kermany, D.S.; Sun, X.; Chen, J.; et al. Evaluation and accurate diagnoses of pediatric diseases using artificial intelligence. Nat. Med. 2019, 25, 433–438. [CrossRef] 17. Ioannidis, J.P. Informed consent, big data, and the oxymoron of research that is not research. Am. J. Bioeth. 2013, 13, 40–42. [CrossRef] 18. Martani, A.; Geneviève, L.D.; Pauli-Magnus, C.; McLennan, S.; Elger, B.S. Regulating the Secondary Use of Data for Research: Arguments Against Genetic Exceptionalism. Front. Genet. 2019, 10, 1254. [CrossRef] 19. Francis, J.G.; Francis, L.P. Privacy, Confidentiality, and Justice. J. Soc. Philos. 2014, 45, 408–431. [CrossRef] 20. Schneble, C.O.; Elger, B.S.; Shaw, D. The Cambridge Analytica affair and Internet-mediated research. EMBO Rep. 2018, 19, e46579. [CrossRef] 21. Schneble, C.O.; Elger, B.S.; Shaw, D. Google’s Project Nightingale highlights the necessity of data science ethics review. EMBO Mol. Med. 2020, 12(3), e12053. [CrossRef] [PubMed] 22. McMahon, A.; Buyx, A.; Prainsack, B. Big Data Governance Needs More Collective Responsibility: The Role of Harm Mitigation in the Governance of Data Use in Medicine and Beyond. Med Law Rev. 2019, 28, 155–182. [CrossRef] [PubMed] 23. Choudhury, S.; Fishman, J.R.; McGowan, M.L.; Juengst, E. Big data, open science and the brain: lessons learned from genomics. Front. Hum. Neurosci. 2014, 8, 239. [CrossRef] [PubMed] 24. Favaretto, M.; De Clercq, E.; Elger, B.S. Big Data and discrimination: perils, promises and solutions. A systematic review. J. Big Data 2019, 6, 12. [CrossRef] 25. 
Geneviève, L.D.; Martani, A.; Shaw, D.M.; Elger, B.S.; Wangmo, T. Structural racism in precision medicine: leaving no one behind. BMC Med Ethic 2020, 21, 1–13. [CrossRef] 26. Martani, A.; Shaw, D.; Elger, B.S. Stay fit or get bit - ethical issues in sharing health data with insurers’ apps. Swiss Med Wkly. 2019, 149, w20089. [CrossRef] 27. Martin, K.E.M. Ethical Implications and Accountability of Algorithms. SSRN Electron. J. 2018, 160, 835–850. [CrossRef] 28. Mittelstadt, B.D.; Floridi, L. The ethics of big data: current and foreseeable issues in biomedical contexts. Sci. Eng. Ethics 2016, 22, 303–341. [CrossRef] 29. Esfandiari, S.; Feine, J. Health technology assessment in oral health. Int. J. Oral Maxillofac. Implant. 2011, 26, 93–100. 30. Moher, D.; Shamseer, L.; Clarke, M.; Ghersi, D.; Liberati, A.; Petticrew, M.; Shekelle, P.G.; Stewart, L.A. Preferred reporting items for systematic review and meta-analysis protocols (PRISMA-P) 2015 statement. Syst. Rev. 2015, 4, 1. [CrossRef] 31. Rodgers, M.; Sowden, A.; Petticrew, M.; Arai, L.; Roberts, H.M.; Britten, N.; Popay, J. Testing Methodological Guidance on the Conduct of Narrative Synthesis in Systematic Reviews. Evaluation 2009, 15, 49–73. [CrossRef] 32. Gross, D.; Gross, K.; Wilhelmy, S. Digitalization in dentistry: ethical challenges and implications. Quintessence Int. 2019, 50, 830–838. [PubMed] 33. Knott, N.J. The use of information and communication technology (ICT) in dentistry. Br. Dent. J. 2013, 214, 151–153. [CrossRef] [PubMed] 34. Day, P.F.; Petherick, E.; Godson, J.; Owen, J.; Douglas, G. A feasibility study to explore the governance processes required for linkage between dental epidemiological, and birth cohort, data in the UK. Community Dent. Health 2018, 35, 228–234. 35. Zijlstra-Shaw, S.; Stokes, C.W. Learning analytics and dental education; choices and challenges. Eur. J. Dent. Educ. 2018, 22, e658–e660. 
[CrossRef] http://dx.doi.org/10.3390/ijerph15112357 http://www.ncbi.nlm.nih.gov/pubmed/30366416 http://dx.doi.org/10.1159/000501643 http://www.ncbi.nlm.nih.gov/pubmed/31390644 http://dx.doi.org/10.1016/j.jdent.2018.04.018 http://www.ncbi.nlm.nih.gov/pubmed/29929584 http://dx.doi.org/10.1016/j.jdent.2018.04.019 http://dx.doi.org/10.1055/s-0034-1390027 http://dx.doi.org/10.1038/s41591-018-0335-9 http://dx.doi.org/10.1080/15265161.2013.768864 http://dx.doi.org/10.3389/fgene.2019.01254 http://dx.doi.org/10.1111/josp.12070 http://dx.doi.org/10.15252/embr.201846579 http://dx.doi.org/10.15252/emmm.202012053 http://www.ncbi.nlm.nih.gov/pubmed/32064790 http://dx.doi.org/10.1093/medlaw/fwz016 http://www.ncbi.nlm.nih.gov/pubmed/31377815 http://dx.doi.org/10.3389/fnhum.2014.00239 http://www.ncbi.nlm.nih.gov/pubmed/24904347 http://dx.doi.org/10.1186/s40537-019-0177-4 http://dx.doi.org/10.1186/s12910-020-0457-8 http://dx.doi.org/10.4414/smw.2019.20089 http://dx.doi.org/10.2139/ssrn.3056716 http://dx.doi.org/10.1007/s11948-015-9652-2 http://dx.doi.org/10.1186/2046-4053-4-1 http://dx.doi.org/10.1177/1356389008097871 http://www.ncbi.nlm.nih.gov/pubmed/31538146 http://dx.doi.org/10.1038/sj.bdj.2013.157 http://www.ncbi.nlm.nih.gov/pubmed/23429122 http://dx.doi.org/10.1111/eje.12370 Int. J. Environ. Res. Public Health 2020, 17, 2495 14 of 15 36. Eng, G.; Chen, A.; Vess, T.; Ginsburg, G.S. Genome technologies and personalized dental medicine. Oral Dis. 2011, 18, 223–235. [CrossRef] 37. Boden, D.F. What Guidance Is There for Ethical Records Transfer and Fee Charges? J. Am. Dent. Assoc. 2008, 139, 197–198. [CrossRef] 38. A Cederberg, R.; A Valenza, J. Ethics and the electronic health record in dental school clinics. J. Dent. Educ. 2012, 76, 584–589. 39. Szekely, D.G.; Milam, S.; A Khademi, J. Legal issues of the electronic dental record: Security and confidentiality. J. Dent. Educ. 1996, 60, 19–23. 40. Da Costa, A.L.P.; Silva, A.A.; Pereira, C.B. 
Tele-orthodontics: Tool aid to clinical practice and continuing education. Dental Press J. Orthod. Rev. 2012, 16, 15–21. 41. Cvrkel, T. The ethics of mHealth: Moving forward. J. Dent. 2018, 74, S15–S20. [CrossRef] [PubMed] 42. Nutalapati, R.; Boyapati, R.; Jampani, N.D.; Dontula, B.S.K. Applications of teledentistry: A literature review and update. J. Int. Soc. Prev. Community Dent. 2011, 1, 37–44. [CrossRef] 43. Cederberg, R.; Walji, M.; Valenza, J. Electronic Health Records in Dentistry: Clinical Challenges and Ethical Issues; Springer Science and Business Media LLC: Cham, Switzerland, 2014; pp. 1–12. 44. Chambers, D.W. Position paper on digital communication in dentistry. J. Am. Coll. Dent. 2012, 79, 19–30. [PubMed] 45. Neville, P.; Waylen, A.E. Social media and dentistry: some reflections on e-professionalism. Br. Dent. J. 2015, 218, 475–478. [CrossRef] [PubMed] 46. Oakley, M.; Spallek, H. Social media in dental education: a call for research and action. J. Dent. Educ. 2012, 76, 279–287. [PubMed] 47. Peltier, B.; Curley, A. The ethics of social media in dental practice: Ethical tools and professional responses. J. Calif. Dent. Assoc. 2013, 41, 507–513. [PubMed] 48. Spallek, H.; Turner, S.P.; Donate-Bartfield, E.; Chambers, D.; McAndrew, M.; Zarkowski, P.; Karimbux, N. Social Media in the Dental School Environment, Part A: Benefits, Challenges, and Recommendations for Use. J. Dent. Educ. 2015, 79, 1140–1152. 49. Sykes, L.M.; Harryparsad, A.; Evans, W.G.; Gani, F. Social Media and Dentistry: Part 8: Ethical, legal, and professional concerns with the use of internet sites by health care professionals. SADJ 2017, 72, 132–136. 50. Swirsky, E.S.; Michaels, C.; Stuefen, S.; Halasz, M. Hanging the digital shingle. J. Am. Dent. Assoc. 2018, 149, 81–85. [CrossRef] 51. Calberson, F.L.; Hommez, G.M.; De Moor, R.J. Fraudulent Use of Digital Radiography: Methods to Detect and Protect Digital Radiographs. J. Endod. 2008, 34, 530–536. [CrossRef] 52. 
Indu, M.; Sunil, S.; Rathy, R.; Binu, M. Imaging and image management: A survey on current outlook and awareness in pathology practice. J. Oral Maxillofac. Pathol. 2015, 19, 153–157. [CrossRef] [PubMed] 53. Kapoor, P. Photo-editing in Orthodontics: How Much is Too Much? Int. J. Orthod. 2015, 26, 17–23. 54. Khelemsky, R. The ethics of routine use of advanced diagnostic technology. J. Am. Coll. Dent. 2011, 78, 35–39. [PubMed] 55. Luther, F. Scientific Misconduct. J. Dent. Res. 2010, 89, 1364–1367. [CrossRef] 56. Rao, S.; Singh, N.; Kumar, R.; Thomas, A. More than meets the eye: Digital fraud in dentistry. J. Indian Soc. Pedod. Prev. Dent. 2010, 28, 241. [CrossRef] 57. Stieber, J.C.; Nelson, T.M.; E Huebner, C. Considerations for use of dental photography and electronic media in dental education and clinical practice. J. Dent. Educ. 2015, 79, 432–438. 58. Wentworth, R.B. What ethical responsibilities do I have with regard to radiographs for my patients? J. Am. Dent. Assoc. 2010, 141, 718–720. [CrossRef] 59. Peppet, S.R. Regulating the internet of things: first steps toward managing discrimination, privacy, security and consent. Tex. L. Rev. 2014, 93, 85. 60. Hoffman, S. Employing e-health: the impact of electronic health records on the workplace. Kan. JL Pub. Pol’y 2009, 19, 409. 61. Starkbaum, J.; Felt, U. Negotiating the reuse of health-data: Research, Big Data, and the European General Data Protection Regulation. Big Data Soc. 2019, 6, 2053951719862594. [CrossRef] 62. Valenza, J.A.; Taylor, D.; Walji, M.F.; Johnson, C.W. Assessing the benefit of a personalized EHR-generated informed consent in a dental school setting. J. Dent. Educ. 2014, 78, 1182–1193. [PubMed] 63. Benovsky, J. The Limits of Photography. Int. J. Philos. Stud. 2014, 22, 716–733. 
[CrossRef] http://dx.doi.org/10.1111/j.1601-0825.2011.01876.x http://dx.doi.org/10.14219/jada.archive.2008.0138 http://dx.doi.org/10.1016/j.jdent.2018.04.024 http://www.ncbi.nlm.nih.gov/pubmed/29929583 http://dx.doi.org/10.4103/2231-0762.97695 http://www.ncbi.nlm.nih.gov/pubmed/23654160 http://dx.doi.org/10.1038/sj.bdj.2015.294 http://www.ncbi.nlm.nih.gov/pubmed/25908363 http://www.ncbi.nlm.nih.gov/pubmed/22383595 http://www.ncbi.nlm.nih.gov/pubmed/24024295 http://dx.doi.org/10.1016/j.adaj.2017.11.002 http://dx.doi.org/10.1016/j.joen.2008.01.019 http://dx.doi.org/10.4103/0973-029X.164525 http://www.ncbi.nlm.nih.gov/pubmed/26604489 http://www.ncbi.nlm.nih.gov/pubmed/22416617 http://dx.doi.org/10.1177/0022034510384627 http://dx.doi.org/10.4103/0970-4388.76149 http://dx.doi.org/10.14219/jada.archive.2010.0264 http://dx.doi.org/10.1177/2053951719862594 http://www.ncbi.nlm.nih.gov/pubmed/25086152 http://dx.doi.org/10.1080/09672559.2014.923015 Int. J. Environ. Res. Public Health 2020, 17, 2495 15 of 15 64. Hopkins, R. Factive Pictorial Experience: What’s Special about Photographs? Nous 2010, 46, 709–731. [CrossRef] 65. Alcarez, A.L. Epistemic function and ontology of analog and digital images. CA 2015, 13, 11. 66. Cromey, D. Avoiding twisted pixels: ethical guidelines for the appropriate use and manipulation of scientific digital images. Sci. Eng. Ethic 2010, 16, 639–667. [CrossRef] 67. Greysen, R.; Kind, T.; Chretien, K.C. Online Professionalism and the Mirror of Social Media. J. Gen. Intern. Med. 2010, 25, 1227–1229. [CrossRef] 68. Ventola, C.L. Social Media and Health Care Professionals: Benefits, Risks, and Best Practices. J. Formul. Manag. 2014, 39, 491–520. 69. Fitzgerald, C.; Hurst, S. Implicit bias in healthcare professionals: A systematic review. BMC Med. Ethics 2017, 18, 19. [CrossRef] 70. Garrison, N.O.; Ibañez, G.E. Attitudes of Health Care Providers toward LGBT Patients: The Need for Cultural Sensitivity Training. Am. J. Public Heal. 2016, 106, 570. 
[CrossRef] 71. McCarley, D.H. ADA Principles of Ethics and Code of Professional Conduct. Tex. Dent. J. 2011, 128, 728–732. 72. Shetty, V.; Yamamoto, J.; Yale, K. Re-architecting oral healthcare for the 21st century. J. Dent. 2018, 74, S10–S14. [CrossRef] [PubMed] 73. Shaw, D.M. Weeping and wailing and gnashing of teeth: The legal fiction of water fluoridation. Med Law Int. 2012, 12, 11–27. [CrossRef] 74. Shaw, D.; Conway, D. Pascal’s Wager, infective endocarditis and the “no-lose” philosophy in medicine. Heart 2009, 96, 15–18. [CrossRef] [PubMed] © 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/). http://dx.doi.org/10.1111/j.1468-0068.2010.00800.x http://dx.doi.org/10.1007/s11948-010-9201-y http://dx.doi.org/10.1007/s11606-010-1447-1 http://dx.doi.org/10.1186/s12910-017-0179-8 http://dx.doi.org/10.2105/AJPH.2015.303010 http://dx.doi.org/10.1016/j.jdent.2018.04.017 http://www.ncbi.nlm.nih.gov/pubmed/29929582 http://dx.doi.org/10.1177/0968533212438642 http://dx.doi.org/10.1136/hrt.2009.186056 http://www.ncbi.nlm.nih.gov/pubmed/20019208 http://creativecommons.org/ http://creativecommons.org/licenses/by/4.0/. 
WHY FEW ORGANIZATIONS ADOPT SYSTEMS THINKING

Russell L. Ackoff

I frequently talk to groups of managers on the nature of systems thinking and its radical implications to management. In doing so I use several case studies involving prominent American corporations.
At the end of the presentation I am almost always asked, "If this way of thinking is as good as you say it is, why don't more organizations use it?" It is easy to reply by saying that organizations naturally resist change. This, of course, is a tautology. I once asked a vice president of marketing why consumers used his product. He answered, "Because they like it." I then asked him how he knew this. He answered, "Because they use it." Our answer to the question about the failure of organizations to adopt systems thinking is seldom any better than this.

There may be many reasons why any particular organization fails to adopt systems thinking, but I believe there are two that are the most important, one general and one specific. By a general reason I mean one that is responsible for organizations failing to adopt any transforming idea, let alone systems thinking. By a specific reason I mean one responsible for the failure to adopt systems thinking in particular.

First, consider the general explanation. All through school, from kindergarten all the way through university, mistakes are treated as bad things. We are downgraded for them. Furthermore, no effort is made to determine whether we have learned anything from them. The grade given, not learning from our mistakes, is a fait accompli. When, on the completion of our schooling, we enter an employing organization, it also makes it clear that mistakes are a bad thing and that they will be held against us.

Managers laugh when I tell them of an organization I once heard of that offers an annual prize for the best mistake made last year. That mistake is defined as the one from which they have learned the most. When August Busch III was CEO of the Anheuser-Busch Companies he once told his assembled vice presidents, "If you didn't make a serious mistake last year you probably didn't do your job because you didn't try anything new.
There is nothing wrong in making a mistake, but if you ever make the same mistake twice you probably won't be here the next year." He had it right: mistakes will be forgiven if we learn from them.

We cannot learn from doing anything right. We already know how to do it. Of course we may get confirmation of what we already know, and this has some value, but it is not learning. We can learn from mistakes if we identify and correct them. Therefore, organizations and individuals that never admit to a mistake never learn anything. Organizations and individuals that always transfer responsibility for their mistakes to others also avoid learning. One need look no further for an example than to the executive office of my government.

THE GENERAL REASON

To understand why organizations do not use mistakes as opportunities for learning, other than a disposition inherited from educational institutions, we must recognize that there are two types of mistake: errors of commission and errors of omission. An error of commission occurs when an organization or individual does something that should not have been done. For example, when Kodak acquired Sterling Drugs it made a very costly mistake. It had to be sold subsequently. Its sale involved a considerable write-off.

Robert F. Bruner, in his book Deals from Hell (Wiley, New York, 2005), cites a number of acquisitions that went sour in a big way. The Sony-Columbia merger in 1989 resulted in a $2.7 billion write-off. The acquisition of National Cash Register by AT&T cost AT&T $4.1 billion. His champion of errors of commission is the merger of AOL and Time Warner. It resulted in a $200 billion loss in stock-market value and a $54 billion write-down in the worth of the combination's assets. Bruner points out that in most such cases the executives responsible for such losses made significant gains in their own compensation. They were able to disclaim responsibility for their mistakes.
An error of omission occurs when an individual or organization fails to do something it should have done. For example, when Kodak failed to acquire Xerox when it could have, or when Xerox failed to develop the small computer produced by its employees. Of the two types of error, errors of omission are usually the more important. The deterioration and failure of organizations are almost always due to something they did not do. Not too long ago IBM got into serious trouble because it ignored the reduction in the size of computers. Fortunately, it eventually corrected this error, but it came close to going out of business. Kodak is currently in a precarious position because it did not press the development of digital photography. General Motors and Ford are in trouble because they have not innovated in ways that Toyota and Honda have.

Now for a key fact: accounting systems in the western world only take account of errors of commission, the less important of the two types of error. They take no account of errors of omission. Therefore, in an organization that frowns on mistakes and in which only errors of commission are identified, a manager only has to be concerned about doing something that should not have been done. Because errors of omission are not recorded, they often go unacknowledged. If acknowledged, accountability for them is seldom made explicit. In such a situation a manager who wants to invoke as little disapproval as possible must try either to minimize errors of commission or transfer to others responsibility for those he or she makes. The best way to do this is to do nothing, or as little as one can get away with. This is a major reason that organizations do not make radical changes.

A number of years ago, when I was working on a project for a major automotive manufacturing company, the Executive Vice President asked me if I would give a two-day course on systems thinking to the company's top 200 managers and executives. I was delighted.
He said he wanted to restrict classes to 20 so that there would be plenty of discussion. He had the following plan: four sessions of junior vice presidents, three of intermediate-level vice presidents, two of senior vice presidents, and finally one of the executive office. The sessions were to be conducted from the lower level up.

At the end of the first session of junior vice presidents, one said, "This stuff is great. I would love to use it but you are talking to the wrong people. I can't introduce it without the approval of my boss. Are you going to get a chance to present it to him?" I told him I would in one of the later courses. He assured me he would hit his boss up for approval as he came out of his session. In each of the first four sessions of junior vice presidents the same issue was raised.

In the first group on the second tier, with intermediate-level vice presidents, the same issue was raised. I was told they also wanted to introduce systems thinking but could not do so without their bosses' approval. Again I told them their bosses would eventually be exposed to the same ideas. In each of the three sessions at this level the same issue was raised.

In the two sessions involving senior vice presidents the same issue was raised. They asked if I would have a chance to present the material to the CEO and his executive committee. I said I would. I could hardly wait to hear what the CEO would say. At the end of the session which he attended he said, "This stuff is great. I would love to use it. But I can't do it without the approval and support of my subordinates. Are you going to get a chance to present it to them?"

This was a typical organization, one in which the principal operating principle was "Cover your ass." Application of this principle produced a management that tried to minimize its responsibility and accountability. The result was a paralyzed organization, one that almost never initiated change of any kind, let alone innovation.
It made changes only when a competitor made it necessary for it to do so.

This deficiency in organizations can be eliminated by taking the following steps.

1. Record every decision of importance, whether to do something or not. The Decision Record should include (a) the expected effects of the decision and by when they are expected, (b) the assumptions on which the expectations are based, (c) the inputs to the decision (information, knowledge, and understanding), and (d) how the decision was made and by whom.

2. Monitor the decisions to detect any deviation of fact from expectations and assumptions. When a deviation is found, determine its cause and take corrective action.

3. The choice of a corrective action is itself a decision and should be treated in the same way as the original decision; a Decision Record should be prepared for it. In this way one can learn how to correct mistakes; that is, learn how to learn more rapidly and effectively. Learning how to learn is probably the most important thing an organization or individual can learn.

4. The decision by an organization not to adopt systems thinking should be treated in this way. Making explicit the assumptions on which such a decision is based and monitoring them can lead to a reversal of the decision in time.

THE SPECIFIC REASON

Very few managers have any knowledge or understanding of systems thinking, and for good reason. Very little of our literature and lecturing is addressed to potential users. I very seldom come across an organizational decision maker who has had any previous exposure to systems thinking. We are an introverted profession. We do most of our writing and speaking to each other. This is apparent on examination of the content of any of our journals or conferences. To be sure, some communication among ourselves is necessary, but it is not sufficient. Until we communicate to our potential users in a language they can understand, they and we will not understand what we are talking about.
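As a concrete illustration of the Decision Record procedure described in the numbered steps above, the record can be thought of as a small structured log that is checked against reality. The sketch below is hypothetical; the field names and the monitor method are invented for illustration and are not Ackoff's.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class DecisionRecord:
    """One entry per important decision, including decisions NOT to act (step 1)."""
    decision: str             # what was decided, including "do nothing"
    expected_effects: str     # (a) what is expected to happen ...
    expected_by: date         # ... and by when
    assumptions: list         # (b) assumptions behind the expectations
    inputs: str               # (c) information, knowledge, understanding used
    made_by: str              # (d) who made the decision and how
    deviations: list = field(default_factory=list)

    def monitor(self, observed: str, matches_expectation: bool):
        """Step 2: log any deviation of fact from expectation or assumption.
        Step 3: a corrective action is itself a decision, so a new record
        is opened for it rather than patching the old one."""
        if matches_expectation:
            return None  # confirmation, but no learning
        self.deviations.append(observed)
        return DecisionRecord(
            decision=f"Corrective action for: {self.decision}",
            expected_effects="deviation eliminated",
            expected_by=date.today(),
            assumptions=["cause of the deviation has been correctly identified"],
            inputs=observed,
            made_by="(to be decided)",
        )
```

Monitoring a record either confirms expectations (no learning, per the essay) or produces a deviation plus a follow-on record, so the log captures errors of omission as well as errors of commission.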
If Einstein could do it with relativity theory, we should be able to do it with systems thinking (Einstein and Infeld, 1951). It is easy to hide the ambiguity and vagueness in our own thinking behind jargon, but almost impossible to do so when speaking or writing in ordinary language. We have developed a vocabulary that equips our students with the ability to speak with authority about subjects they do not understand. Little wonder they do not become effective spokespersons to potential users.

This society should publish a journal addressed to potential users. It should have managers on its editorial board. It should invite dialog with potential users either electronically or in print. In addition, it should occasionally hold conferences that provide a bridge between systems thinkers and their potential users. These conferences should reveal what we are doing, and can do, that potential users should know about. Furthermore, the articles published in our usual journals should be required to answer the "so what" question at the end of each submission. The answer to this question should be an explicit statement of how the author intends to affect the behavior or thinking of the reader. No article should be published without such an appendage. Let's start to think outside the box into which we have painted ourselves!

REFERENCES

Bruner, Robert F. Deals from Hell. New York: Wiley, 2005.

Einstein, Albert, and Leopold Infeld. The Evolution of Physics. New York: Simon and Schuster, 1951.

Russell L. Ackoff
1021 Lancaster Ave., #201
Bryn Mawr, PA 19010 USA
e-mail: RLAckoff@aol.com

PUBLIC SECTOR SCIENCE AND THE STRATEGY OF THE COMMONS

AJAY AGRAWAL
Queen's School of Business, Queen's University
Kingston, Ontario, Canada, K7L 3N6

LORENZO GARLAPPI
University of Texas at Austin

INTRODUCTION

This paper is motivated by a puzzling observation.
Over the past decade, a variety of large firms and private sector consortia have approached prominent universities and expressed interest in sponsoring particular research laboratories. In return for their sponsorship, these organizations have requested that all inventions generated by the sponsored labs be licensed openly, on a non-exclusive basis only. Why non-exclusive? At first glance, this seems surprisingly generous, in fact altruistic. Hence, the puzzle.

Consider three examples: 1) Kodak sponsors research in areas related to digital photography; 2) AT&T sponsors research in areas related to communication, including Internet telephony; 3) a consortium, comprised of several of the world's largest pharmaceutical firms, sponsors research related to the Human Genome Project. In each case, the sponsorship stipulates no exclusive licensing.

Why would the sponsoring firms choose to disallow exclusive licensing, which has been the norm at universities since the Bayh-Dole Act of 1980, especially since these firms would be prime candidates for licensing the inventions themselves? One hypothesis is simply that sponsoring firms are worried that competing firms might obtain the exclusive license first. This is certainly a reasonable explanation, but not altogether consistent with the evidence. Historically, sponsoring firms have enjoyed favorable information advantages regarding the research outcomes of the labs they sponsor, since they often receive interim briefings prior to publications or conference presentations. So, in practice, they are usually ‘first in line’ for any related exclusive licenses.

In this paper we explore a second, less obvious, explanation. The hypothesis modeled here is that firms request non-exclusive licensing regimes in order to prevent, or at least slow down, the commercial development of inventions in a particular technological market.
In other words, firms sponsor research in a laboratory specifically because they wish to retard the development of particular areas of innovation. They purposely spoil the incentives to develop and commercialize inventions from the sponsored lab by creating a market failure. Sponsoring firms accomplish this by creating an intellectual property ‘commons’ under which no firm is able to obtain exclusive property rights.

Why would firms do this? Under some conditions, if a new market (based on a new technology) is related to an existing market in such a way that the former will cannibalize the latter, it may be profitable for an entrant to develop the invention but harmful for the incumbent to do so. In other words, the incumbent's profits in the original market will be reduced if an entrant develops the invention, or even if the incumbent itself does so. From this perspective, one can imagine reasons why Kodak may want to delay the development of digital photography, AT&T the development of Internet telephony, and large pharmaceutical firms the development of processes for human gene mapping.

Under this threat of cannibalization, one might question why the incumbent doesn't simply license the patent and leave the technology dormant. The answer lies in the licensing contract that is hand-crafted for each agreement. Benchmarks, milestones, expenditure commitments, and other timeline components associated with product development and commercialization are specified in the contract. Technology licensing officers refer to these contractual conditions as ‘use it or lose it’ clauses that ensure that the mandate of the university is reflected in the conditions of the contract. Indeed, anecdotal evidence suggests that the incidence of the strategy of the commons is positively correlated with an increase in the sophistication of the ‘use it or lose it’ contractual terms, both of which have varied across research organizations.

Academy of Management Proceedings 2002 BPS: X1
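The asymmetry at the heart of this argument can be made concrete with a back-of-the-envelope sketch. All figures below are invented for illustration and are not taken from the paper; they simply show how the same development can be profitable for an entrant yet value-destroying for the incumbent, who internalizes the cannibalized old-market profit.

```python
# Hypothetical, purely illustrative figures (not from the paper).
OLD_PROFIT_NO_NEW = 100   # incumbent's old-market profit if no new product exists
OLD_PROFIT_WITH_NEW = 55  # incumbent's old-market profit once the new product is sold
NEW_PROFIT = 40           # profit the developer earns in the new market
DEV_COST = 25             # cost of licensing, developing, and commercializing

# The entrant weighs only the new market, so development is profitable for it.
entrant_gain = NEW_PROFIT - DEV_COST  # 40 - 25 = 15 > 0

# The incumbent also internalizes the loss of cannibalized old-market profit,
# so the very same development is harmful for it.
incumbent_gain = (OLD_PROFIT_WITH_NEW + NEW_PROFIT - DEV_COST) - OLD_PROFIT_NO_NEW
# (55 + 40 - 25) - 100 = -30 < 0

print(entrant_gain, incumbent_gain)
```

With these numbers the entrant would gladly develop while the incumbent would not, which is exactly the configuration under which blocking exclusive rights becomes attractive to the incumbent.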
In any case, licensing university inventions and not developing them no longer appears to be a feasible strategy for mitigating the effects of cannibalization.

The idea of market cannibalization has been well studied. Arrow (1962) explicitly discussed the ‘replacement effect’ and argued that a monopolist incumbent would have a lower willingness to pay for an innovation than an entrant, since the incumbent would be concerned about replacing its sunk assets and thus have, relatively, less incentive to innovate. Since Arrow, many other scholars have examined particular economic effects of market cannibalization. For example, Abernathy and Utterback (1978) compare incremental and radical innovation and offer a number of reasons, including cannibalization, to explain why radical innovation is typically carried out by entrants rather than incumbents. Foster (1986) popularized the concept of the S-curve for technologies, the shape of which is defined by the increase in performance relative to the development effort expended. Discontinuities in the curve represent new technologies that are often developed by entrants because they have the potential to cannibalize the existing product market. Gans and Stern (1997) model the allocation of rents from innovation amongst incumbents and entrants as dependent on the existence and terms available on the ‘market for ideas’, and use this framework to consider the way in which cannibalization affects the underlying incentives for either firm to conduct R&D. Finally, Christensen (1997) examines the concept of ‘disruptive’ technologies in a number of product markets, most notably the disk drive industry. In this analysis, cannibalization is once again offered as a primary explanation for development by entrants but not incumbents.
THE MODEL

Here we describe a simple game-theoretic model that we use to investigate the conditions under which the ‘strategy of the commons’ can be observed as a result of profit-maximizing behavior of players in the licensing game. While a brief outline of the model follows, a complete description of how the model is developed is provided in the full version of the paper. At the beginning of the game, a sponsoring firm selects a licensing regime for the invention that will potentially be generated. We refer to this firm as the incumbent. We assume the incumbent firm has monopoly power in the original market in which it operates (the ‘old’ market). The incumbent, when sponsoring university research, can select either an exclusive or a non-exclusive licensing regime for the inventions that will be the basis of the ‘new’ market. An exclusive licensing regime is one under which only one firm may license the right to use a patented technology at any given time; this also includes technologies that are protected by copyright, trademark, and other forms of legal intellectual property protection. In contrast, under a non-exclusive licensing regime more than one firm may simultaneously license the right to use a protected technology. For clarity and simplicity, issues such as sub-licensing and restricted fields of use are not considered here. The main implication of the exclusivity distinction concerns competition: in the exclusive case, the licensee firm maintains a monopoly of the technology, whereas in the non-exclusive case, the licensee firm faces either direct competition or at least the threat of competition from other firms. For tractability we assume that there exists only one potential entrant in the new market.
The resulting game is hence a two-player game in which an established incumbent and a potential entrant interact in the adoption of a new technology. The two firms are equally efficient in the utilization of the new technology, which is used to develop a product that is a partial substitute for the one already produced by the incumbent. The degree to which the new product substitutes for the old product (the degree of cross-price elasticity) is in fact a key ingredient in the model. It determines the level of cannibalization, which directly affects the incumbent’s payoffs since the incumbent holds a monopoly in the old market. Throughout the analysis we also assume that patents are enforceable and cannot be ‘invented around.’ This means that a firm must license the patent in order to produce the new technology product; in other words, firms must engage in the licensing game in order to compete in the new market. The dynamics of the game are summarized in Figure 1. If the incumbent selects an exclusive licensing regime, then both firms decide simultaneously whether or not to license, because at most one firm may obtain rights to the license. In the non-exclusive regime, the licensing decisions are modeled as a sequential game with the entrant moving first, since it is possible for both firms to hold licensing rights to the invention simultaneously.

-----------------------------
Figure 1 about here
-----------------------------

It is important to note that the order of the sequential non-exclusive licensing subgame produces an outcome that is the same as that which would result if the subgame were infinitely repeated, with no specified order. In the infinitely repeated game, the entrant is always faced with the threat of subsequent entry by the incumbent. Therefore, what is critical is not which player is allowed to move first, but rather which player is allowed to move second.
The incumbent is only able to threaten the entrant with entry if she is able to license after the entrant has already done so. In other words, the conditions under which the incumbent will play the strategy of the commons are the same in the infinitely repeated game, in which the incumbent can always respond to the entrant’s move, as they are when the order of play is dictated as ‘entrant first,’ as is modeled here. The ‘strategy of the commons’ outcome occurs when the incumbent selects a non-exclusive licensing regime and, in the ensuing sequential game, both the entrant and the incumbent decide optimally not to invest in the license, even though the new technology would be profitable under an exclusive licensing regime. The payoffs are generated by deriving industry equilibrium profits drawn from a simple downward-sloping linear demand function. While we do not describe the payoffs in this summary version of the paper, we offer some basic intuition. There are three types of outcomes in the new market: 1) no firm enters, 2) one firm enters, or 3) both firms enter. Each outcome in the new market has a distinct effect on the incumbent’s profits in the old market. If no firm enters, the old market is not cannibalized. If both firms enter, the old market is cannibalized to a greater degree than if just one firm enters the new market. The incumbent firm maximizes total profits (new market plus old market), while the entrant maximizes only its profits in the new market. The competition that results, under a certain set of conditions that depend on the size and profitability of the new market relative to the old market as well as the degree of cannibalization, leads the incumbent firm to play the strategy of the commons. These conditions are described in the following section.
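The payoff structure sketched above can be illustrated with a toy parameterization. Since the paper does not report its exact payoff expressions, the following assumes textbook linear-demand formulas for the new market (monopoly and Cournot-duopoly profits under inverse demand p = A − Q) and a simple multiplicative cannibalization factor gamma applied to the old-market monopoly profit per firm active in the new market; all functional forms and numbers are illustrative assumptions.

```python
# Hedged sketch of the payoff structure: new-market profits from a linear
# inverse demand p = A - Q, old-market profit shrunk by cannibalization.
# Functional forms and parameter values are assumptions, not the paper's.

def new_market_profit(A: float, n_firms: int) -> float:
    """Per-firm equilibrium profit in the new market with demand p = A - Q."""
    if n_firms == 0:
        return 0.0
    if n_firms == 1:
        return A * A / 4.0   # monopoly: q = A/2, p = A/2
    return A * A / 9.0       # Cournot duopoly: q_i = A/3, p = A/3

def old_market_profit(pi_old: float, gamma: float, n_firms: int) -> float:
    """Incumbent's old-market profit, cannibalized by each new-market producer."""
    return pi_old * (1.0 - gamma) ** n_firms

def payoffs(A, pi_old, gamma, incumbent_in, entrant_in):
    """(incumbent total, entrant total) for a given entry profile."""
    n = int(incumbent_in) + int(entrant_in)
    inc = old_market_profit(pi_old, gamma, n)
    if incumbent_in:
        inc += new_market_profit(A, n)
    ent = new_market_profit(A, n) if entrant_in else 0.0
    return inc, ent

# Example: entrant alone in a sizeable new market, strong cannibalization.
print(payoffs(A=20.0, pi_old=100.0, gamma=0.5, incumbent_in=False, entrant_in=True))
# prints (50.0, 100.0): the incumbent's old-market profit is halved while
# the entrant earns the new-market monopoly profit.
```

The example shows the tension driving the model: one firm’s entry into the new market transfers surplus away from the incumbent’s old market even though the incumbent produces nothing new itself.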
THE STRATEGY OF THE COMMONS AS AN EQUILIBRIUM SOLUTION

The strategy of the commons occurs when the entrant would enter under an exclusive licensing regime, but the incumbent selects a non-exclusive regime so that it may credibly threaten the entrant with entry, ultimately deterring the entrant from entering. The ‘social’ outcome is therefore a situation in which, despite the potential profitability to the entrant, neither firm optimally decides to invest in the license and the invention is not put into use. In the paper we derive conditions under which the strategy of the commons can emerge as a subgame perfect equilibrium. Three conditions must be met in order for the strategy of the commons to emerge.

Condition 1 (Exclusive Regime). The payoffs in the exclusive regime are such that the entrant will invest in the development of the new technology and enter the new market.

Condition 2 (Non-exclusive Regime). The payoffs in the non-exclusive regime are such that the entrant will not enter. This is because the incumbent utilizes its option to move second in the sequential game (second-mover advantage), allowing it to credibly threaten the entrant. If the entrant does not enter, neither will the incumbent, who will continue to enjoy a monopoly in the old market, which remains non-cannibalized. However, should the entrant enter, the incumbent has the incentive to enter as well. As a result, the entrant will not enter, because its payoff as a duopolist does not meet the entry threshold.

Condition 3 (Regime Selection). The incumbent’s payoffs are such that, given the outcomes of the second stage of the game, the incumbent will select the non-exclusive licensing regime in the first stage.

As a result, the potentially welfare-generating innovation is left undeveloped. In the full version of the paper, we show that there exists a subgame perfect equilibrium in the licensing game in which the strategy of the commons is played.
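The three conditions can be checked mechanically by backward induction. The sketch below uses hypothetical gross payoffs and an assumed entry cost chosen so that all three conditions hold at once; the paper derives such conditions analytically rather than numerically, so this is only a worked instance, not the paper’s derivation.

```python
# Backward-induction check of the three strategy-of-the-commons conditions.
# Gross payoffs and the entry cost are hypothetical numbers.

ENTRY_COST = 50.0  # assumed license-plus-development cost, same units as payoffs

# Gross payoffs (incumbent, entrant) by entry profile (incumbent_in, entrant_in);
# only the non-exclusive regime makes the (True, True) profile feasible.
GROSS = {
    (False, False): (100.0, 0.0),   # old-market monopoly intact
    (False, True):  (30.0, 100.0),  # entrant alone in the new market
    (True,  False): (130.0, 0.0),   # incumbent alone in the new market
    (True,  True):  (90.0, 30.0),   # duopoly in the new market
}

def net(profile, firm):
    """Payoff net of entry cost for firm 0 (incumbent) or 1 (entrant)."""
    return GROSS[profile][firm] - (ENTRY_COST if profile[firm] else 0.0)

# Condition 1 (exclusive regime): the entrant prefers to take the license.
cond1 = net((False, True), 1) > net((False, False), 1)

# Condition 2 (non-exclusive regime): sequential subgame, entrant moves first.
def incumbent_best_reply(entrant_in):
    """True if the incumbent enters given the entrant's observed move."""
    return net((True, entrant_in), 0) > net((False, entrant_in), 0)

entrant_enters = (net((incumbent_best_reply(True), True), 1)
                  > net((incumbent_best_reply(False), False), 1))
cond2 = not entrant_enters  # the credible duopoly threat keeps the entrant out

# Resulting non-exclusive outcome, by backward induction:
profile_nonx = (incumbent_best_reply(entrant_enters), entrant_enters)

# Condition 3 (regime selection): under exclusivity the entrant licenses
# (Condition 1), so the incumbent compares that outcome with profile_nonx.
cond3 = net(profile_nonx, 0) > net((False, True), 0)

print(cond1, cond2, cond3, profile_nonx)
# prints True True True (False, False): nobody develops the invention, yet
# the incumbent keeps its uncannibalized monopoly profit.
```

With these numbers the incumbent’s threat is credible (duopoly nets 40 versus 30 from standing aside), the entrant’s duopoly payoff of 30 fails to cover the 50 entry cost, and the incumbent prefers the resulting empty commons (100) to the cannibalized exclusive outcome (30).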
We also examine welfare implications.

CONCLUSIONS

The results of this research suggest some interesting and important strategy and policy implications. From a strategy perspective, incumbents may consider ways to defuse potential threats from technology fields likely to produce ‘disruptive’ innovations. One such way is to create a market failure by dismantling the legal architecture that offers the intellectual property protection that is often critical to entrants for purposes of raising capital and attracting early adopters. In fact, in many cases it is considered necessary for entrants to acquire a ‘thicket’ of related patents around the key patent in order to instill the required confidence in early-stage investors. This implies that the strategy of the commons does not require the incumbent to sponsor all research in a particular area to be effective, only enough to prevent an entrant from obtaining all of the exclusive intellectual property rights to a potentially threatening substitute. In most cases, a tightly protected intellectual property position is significantly more important for an entrant than for an incumbent. From a policy perspective, governments and university administrations may consider whether particular areas of technical research should be protected from incumbents playing the strategy of the commons. In other words, public sector officials may consider some areas of technological innovation particularly likely to produce ‘disruptive’ technologies that might not be developed by incumbent firms but would likely be developed by entrants. In most cases, these will be technologies that enable products with significant cannibalization effects (high cross-price elasticities with existing products). In these cases, protection of the legal architecture that establishes private intellectual property rights might be considered.

ENDNOTES

1.
The Bayh-Dole Act (Public Law 96-517) assigned ownership and control of patents derived from federally funded research to performing institutions, rather than the sponsoring federal agency. Most relevant to this study, it granted non-profit organizations the right to offer exclusive licenses, a right that, as the Columbia University Technology Licensing Office describes, “provided the incentives for the venture capital industry to invest in unproven technology [...]. The results have been dramatic. A trickle of university patents, 200 in 1980, has turned into a flood - now more than 3,000 applications a year” (Winter, 1998).

2. The latter example refers to the SNP Consortium, which consists of several large pharmaceutical rivals including Novartis, Glaxo Wellcome, Pfizer, and SmithKline Beecham. This consortium was formed in 1999 for the sole purpose of sponsoring public-sector research to identify and patent SNPs (Single Nucleotide Polymorphisms) in order to prevent smaller biotechnology firms from entering and obtaining exclusive rights to this genetic information (The Wall Street Journal (03/04/1999), US News & World Report (10/18/99), The Economist (12/04/99)). SNPs are differences in the DNA of individuals that are likely to be important in tracking the genetic causes of disease.

3. The mandate of most research universities, with respect to patent licensing, is to promote the development of their inventions rather than to maximize profits. For example, the MIT Technology Licensing Office states that “in our technology licensing endeavor, MIT is following the mandate of the US Congress when it gave universities title to inventions developed with federal funds: We use licenses to our intellectual property to induce development of our inventions into products for the public good” (MIT TLO promotional pamphlet, 1996).
References Available From the Author

Academy of Management Proceedings 2002 BPS: X6

Editorial

Digital Dentistry: New Materials and Techniques

Francesco Mangano,1,2 Jamil A. Shibli,3 and Thomas Fortin4
1 Department of Surgical and Morphological Science, Dental School, University of Varese, Varese, Italy
2 Academic Unit of Digital Dentistry, IRCCS San Raffaele Hospital, Milan, Italy
3 Guarulhos University, Guarulhos, SP, Brazil
4 University of Lyon, Lyon, France

Correspondence should be addressed to Francesco Mangano; francescomangano1@mclink.net

Received 27 September 2016; Accepted 28 September 2016

Copyright © 2016 Francesco Mangano et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

The digital revolution is changing the world, and dentistry is no exception. The introduction of a whole range of digital devices (intraoral, extraoral, and face scanners, and cone beam computed tomography (CBCT) with low-dose radiation) and processing software (computer-assisted-design/computer-assisted-manufacturing (CAD/CAM) prosthetic software, software for planning implant surgery), together with new aesthetic materials and powerful manufacturing and prototyping tools (milling machines and 3D printers), is radically transforming the dental profession. Classically, case history and physical examination, together with X-ray data from two-dimensional radiology (periapical, panoramic, and cephalometric radiographs), represented the necessary preparatory stages for formulating a treatment plan and for carrying out the therapy.
With only two-dimensional X-ray data available, making a correct diagnosis and an appropriate treatment plan could be difficult; therapies essentially depended on the manual skills and experience of the operator. Today, the digital revolution is changing the workflow and consequently changing operating procedures. In modern digital dentistry, the four basic phases of work are image acquisition, data preparation/processing, production, and clinical application on patients. Image acquisition is the first operational phase of digital dentistry, and it employs powerful tools such as digital cameras, intraoral scanners, and CBCT. Digital photography, combined with the use of appropriate software for image processing, allows us to design a patient’s smile virtually: this is digital smile design, a valuable tool for previsualization and communication in modern aesthetic and cosmetic dentistry. Intraoral scanners allow us to take accurate optical impressions of the dental arches, using only a beam of light. The optical impression is now supplanting the classic method with tray and impression materials: this last procedure, never liked by patients and often technically difficult, is likely to disappear over the next few years. The information on dentogingival tissues acquired from an optical impression can be used not only for diagnosis and communication, but also to design prosthetic restorations. Indeed, optical impression data (e.g., the scanning of prosthetic preparations) is easily imported into processing software for designing/planning prosthetic restorations; the models created in this way are then physically produced, in materials of high aesthetic value, with powerful milling machines. These restorations are then delivered to patients.
In simple cases (e.g., inlays/onlays, temporary restorations in resin, or single crowns), all these procedures can be performed directly in the dental office, without the help of a dental technician, through a “full in-office” or “chairside” procedure. In the case of more complex restorations (such as crowns and monolithic bridges, or copings and frameworks which need to be veneered with ceramic), collaboration with a dental technician is fundamental. Optical impressions can find application in orthodontics and surgery, too. In orthodontics, they are useful in diagnosis and in the design of a whole series of custom devices, such as invisible aligners. In surgery, optical impressions allow useful data to be obtained for the planning of implant operations. Indeed, information on dentogingival tissues can be combined and overlaid with that relating to the patient’s bone structure obtained from CBCT, by using specific planning software packages. Within these processing software packages, the surgeon can design templates for guided implant placement, which are physically manufactured by milling or 3D printing and used clinically. Implants positioned through a guided surgical procedure can be loaded immediately, using prosthetic restorations in resin, printed in 3D before the fixtures are positioned. This is known as the “full-digital” technique. The data obtained by CBCT can also be used for designing personalized (“custom-made”) implants, or personalized bone grafts, to be used in implant and regenerative surgery, respectively.

(Hindawi Publishing Corporation, International Journal of Dentistry, Volume 2016, Article ID 5261247, 2 pages, http://dx.doi.org/10.1155/2016/5261247)
In fact, by importing data from CBCT into specific modelling software, it is currently possible for the surgeon to design a whole series of customized implants (root analogue implants, blade implants, and maxillofacial implants); such implants can be physically produced by means of additive manufacturing and 3D printing procedures, such as direct metal laser forming (printing of metals). The use of customized implants offers the patient several advantages. With 3D printing techniques (the real “third industrial revolution”) becoming established in the biomedical field, the cost of equipment will fall and printers will increasingly become accessible to dental professionals: hence, it is likely that such modelling procedures and the fabrication of “customized” implants will spread. The production of personalized bone grafts for regenerative surgery also comes under this heading. The possibility of using custom-made bone grafts, offering macro- and microtopography with controlled characteristics, represents an undoubted advantage for the practitioner and for the patient. In fact, the availability of customized grafts which fit the individual patient’s bone defect perfectly will greatly simplify and speed up otherwise complex regenerative operations; reducing the surgical time will lower the risk of infection and improve healing. Thus, the outcome of regenerative therapy will also be improved, with significant benefit for the patient. In this special issue, the first in the world dedicated entirely to the topic of digital dentistry, we have gathered together a number of scientific and clinical papers which deal with various “digital” themes: we hope that you will find them interesting and that they will attract your attention.

Francesco Mangano
Jamil A. Shibli
Thomas Fortin
www.actamat-journals.com

Scripta Materialia 53 (2005) 87–92

Time-resolved strain mapping measurements of individual Portevin–Le Chatelier deformation bands

Wei Tong a,*, Hong Tao a, Nian Zhang a, Louis G. Hector Jr. b
a Department of Mechanical Engineering, Becton Engineering Center, Yale University, 219 Becton Center, 15 Prospect Street, New Haven, CT 06520-8284, USA
b Materials and Processes Lab, General Motors R&D Center, Warren, MI 48090-9055, USA

Received 29 September 2004; received in revised form 6 March 2005; accepted 9 March 2005
Available online 8 April 2005

Abstract

A technique based on high-speed digital photography and image correlation for direct whole-field strain mapping of Portevin–Le Chatelier deformation bands is described. Results for a type B band show that the deformation band was formed within a few milliseconds and remained stationary, as no motion of the band was detected. © 2005 Published by Elsevier Ltd. on behalf of Acta Materialia Inc.

Keywords: Dynamic strain aging; Portevin–Le Chatelier effect; Tension test; Aluminum alloys; Image correlation strain mapping

1. Introduction

Unstable plastic flow, in the form of nucleation of deformation bands associated with repeated serrated yielding, is often observed macroscopically in metal alloys due to strong dynamic strain aging, the so-called Portevin–Le Chatelier (PLC) effect [1–8]. The serrations are due to sudden strain bursts that are generally believed to result from interactions between moving dislocations and diffusing solute clouds, although other processes may also be involved [9]. The PLC effect is commonly observed in many aluminum alloys [10] and a variety of other materials [9,11–13]. However, it has been most extensively studied in dilute solid solution Al–Mg alloys under uniaxial tension [2,3,8,10,11,14].
Progress has been made toward a general understanding of the microscopic-scale features underlying the PLC effect, and several mechanistic models have been proposed [15,16]. However, a complete understanding of the detailed micromechanical mechanisms that underlie the macroscopic deformation behavior of serrated plastic flow is still lacking [4]. Of particular importance is PLC band nucleation, which has not been adequately quantified even at the macroscopic level [5–7,17,18]. Several nucleation and development mechanisms were conjectured by Chihab et al. [3], who used analog video technology and light-contrast variation of polished surfaces due to PLC band formation to capture PLC band kinetics in an Al–Mg alloy tensile coupon. They concluded, however, that the slow video framing rate (25 fps, where fps = frames per second) in their experiments was inadequate and should be replaced with “fast cinematography” to more fully capture band nucleation and growth kinetics. In the present study, we combine state-of-the-art high-speed digital photography with an experimental strain mapping measurement technique via image correlation [17–21] to both temporally and spatially characterize serrated plastic flow in an Al–Mg alloy in tension. We present here a first report on time-resolved direct strain mapping measurements to provide some insight into the nucleation and growth of individual PLC bands. In addition, we explore the issue of band propagation across the tensile coupon surface during the image capture times afforded by the high-speed camera.

(* Corresponding author. Tel.: +1 203 432 4260; fax: +1 203 432 6775. E-mail address: wei.tong@yale.edu (W. Tong).)
2. Experimental procedure

An aluminum AA 5052-H32 sheet with alloying elements of 2.5 wt% Mg, 0.25 wt% Cr, and 0.1% Mn [22] was tested in the as-received condition in this study. The aluminum polycrystalline sheet was strain-hardened and stabilized at the H32 temper and was not strongly textured. It had a thickness of 0.95 mm and an average grain size of about 12 μm. To facilitate the image-based strain measurement of PLC bands at a sufficiently high spatial resolution, a tension test coupon with a 4.6 mm wide gage section and 20 mm gage length was used, see Fig. 1(a).

Fig. 1. (a) A digital image of the AA5052-H32 sheet metal tension coupon. Its surface was sprayed with black paint speckles and its gauge section is 20 mm long, 4.6 mm wide and 0.95 mm thick; (b) the screw-driven mini-tensile tester. The coordinate system X1–X2 used in the paper is also shown.

The test coupon was fully clamped at both ends and stretched under displacement control by a screw-driven mini-tensile tester at a constant crosshead speed to achieve a nominal strain rate of 10⁻⁴ s⁻¹. The mini-tensile tester, which is shown in Fig. 1(b), had overall dimensions of 100 mm by 125 mm by 50 mm, a total crosshead travel of 50 mm, and a load cell of 4.4 kN maximum capacity. Both force and displacement data were recorded digitally at a rate of 8 Hz. The uncertainty in the load measurement was ±0.3 N for the 4.4 kN miniature load cell. Tests were also carried out using a data acquisition rate of 100–250 Hz, but with higher electronic noise. Prior to the test, one flat surface of the tension test coupon was decorated with finely sprayed black paint
speckles to enhance image contrast. To characterize the nucleation and growth of individual PLC bands during tensile testing, both temporally and spatially, the entire gage section of the test coupon was imaged with a high-speed digital camera using a special CMOS sensor operating at framing rates ranging from 1000 to 5000 fps [23]. The high-speed camera was post-triggered when a sudden load drop associated with a single serration event was detected, and a total of 8285 image frames prior to the triggering point were captured in the camera memory (see Fig. 2(a)).

Fig. 2. (a) The force–time record of the tension test with arrow markers indicating the start (the first image frame) and end (the post-triggering point) of the image capture of PLC band #4; (b) the accumulated plastic deformation in terms of horizontal displacement U1 (in both pixels and mm) and the axial strain E1 contour map showing strain localization in band #4 over the image capture duration of 1.6568 s. The field of view of the contour maps is 22 mm by 4.6 mm of the tensile coupon surface as shown in Fig. 1(a).

The tensile test was then interrupted (but without complete unloading) in order to download the images to a host computer
The recorded digital images were ana- lyzed with the SDMAP [24] digital image correlation software that computes both cumulative and incremen- tal surface strain maps of the test coupon surface for each image capture. Details behind the strain mapping via the digital image correlation analysis may be found in [19,21,24]. The axial strain reported here was com- puted locally using a bi-quadratic surface fitting of dis- placements over subsets of 40 · 40 to 60 · 60 pixels [19]. The errors in local in-plane displacements, rigid body rotation, and strain measurements were estimated to be 0.02 pixels, 0.02� and 100 · 10�6, respectively, for a macroscopically homogenous field. 3. Results Serrated plastic flow was detected in the AA5052-H32 tension coupon in Fig. 1(a) after an incubation plastic strain of about 1% when it was stretched at a nominal strain rate of 10 �4 s �1 at the ambient temperature of 24 �C. Load drops with a magnitude of 30–40 N (corre- sponding to 7–9 MPa in stress or 2–4% of the current flow strength of the material) occurred rather regularly within a period of 4–5 s. Each load drop was associated with the nucleation and growth of a spatially discrete PLC band followed by a predominantly elastic reload- ing phase. Out of more than 10 PLC bands captured by the high-speed digital photography, we selected one repre- sentative PLC band (#4) for a detailed description of digital image correlation analysis results. A total of 8285 digital image frames were continuously recorded with a time interval of 0.2 ms per frame for a total dura- tion of 1.6568 s for the band. Fig. 2(a) and (b) show the experimental force–time records and whole-field strain mapping data. In Fig. 2(a), the serration time with its origin defined by the start of the image capture is given in the top horizontal axis in milliseconds as t4. 
Note that the left arrow in the figure coincides with the load and time at the start of the image capture, while the right arrow designates the load and time at which recording was stopped (as the camera was post-triggered at this point). The time at which the load reached its local minimum (which coincided more or less with complete band formation) is labeled PLC #4. Fig. 2(b) shows the corresponding cumulative plastic deformation maps in terms of both horizontal displacement U1 (top map in units of pixels, and middle map in mm) and true axial strain, E1. These deformation maps were obtained via correlation analysis of frame 1 (t4 = 0 s) and frame 8285 (t4 = 1.6568 s) for the band. The densely spaced displacement contour lines denote the location, width, and orientation (about 26° clockwise from the transverse direction) of the PLC band. The symmetry in displacement conditions tended to favor alternating bands at ±26°, so that the relative vertical displacement at the grip ends was nearly zero on average. The axial strain contour map (bottom) details the distribution and magnitude of deformation inside the PLC band. Across the PLC band, there was a jump of about 0.45 pixel, or 9 μm, in horizontal displacement over an axial distance of about 150 pixels, or 3 mm. The peak axial strain in PLC band #4 was about 0.6%. Fig. 3(a) and (b) show cumulative and incremental strain maps and the corresponding selected times on the force–time curves associated with the nucleation and growth of PLC band #4. Each of the four cumulative strain maps in Fig. 3(a) was generated through digital image correlation of the very first image (t4 = 0 s) with the image recorded at the indicated time. The time label in each map designates the capture time, which is referenced to the time at which the first image was captured at the outset of the capture event (denoted by the left arrows in Fig. 2(a)). The top left contour map in Fig.
3(a) (representing a capture time of 969.4 ms for band #4) shows the cumulative axial strain field just prior to the sudden load drop, i.e., the sudden nucleation and rapid growth of the band. It suggests that there was some precursory strain of up to 0.1% that accumulated locally at one of the free edges of the tensile coupon before the sudden load drop. The second and third cumulative axial maps in Fig. 3(a) (representing capture times of 972.2 and 975.0 ms, respectively) show the development of the PLC band during the sudden load drop. The last cumulative axial map in Fig. 3(a) (representing a capture time of 1656.8 ms) shows the fully formed PLC band after some elastic reloading (i.e., along the increasing portion of the load curve) but prior to the nucleation of another PLC band. Changes in strain contours associated with the PLC bands are shown very clearly by the incremental axial strain maps in Fig. 3(b). The incremental maps were generated through digital image correlation of two successive image frames captured at a 0.4 ms time interval. The time shown on each incremental axial strain map is the average of the capture times for these two image frames. The four incremental axial strain maps shown in Fig. 3(b) span a period of 1.6 ms, or about 1/3 of the entire dynamic plastic deformation process during the formation of PLC band #4. By dividing the axial strain increments by the time interval of 0.4 ms, the actual strain rate inside a PLC band, ε̇B, was estimated to be between 0.5 and 1 s⁻¹, which is much higher than the imposed overall strain rate ε̇a of 10⁻⁴ s⁻¹.
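The strain-rate estimate above is simple arithmetic and can be checked directly. The two increment values used here are representative of the dE1 map scales in Fig. 3(b), not measured data.

```python
# Back-of-envelope check of the in-band strain rate quoted above: divide
# representative peak incremental axial strains (~2e-4 to 4e-4 per frame
# pair, the scale of the dE1 maps) by the 0.4 ms correlation interval.
dt = 0.4e-3                       # s, interval between correlated frames
for d_eps in (2.0e-4, 4.0e-4):    # representative peak strain increments
    rate = d_eps / dt             # local strain rate inside the band, 1/s
    print(f"d_eps = {d_eps:.1e} -> strain rate = {rate:.2f} 1/s")

imposed = 1.0e-4                  # s^-1, nominal applied strain rate
print(f"band / imposed ratio = {2.0e-4 / dt / imposed:.0f}")  # → 5000
```

Even the lower increment gives a band strain rate three to four orders of magnitude above the imposed rate, which is the disparity the next paragraph addresses.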
However, the very high magnitude of the strain rate inside the band is not simply dictated by the compatibility condition of the imposed overall strain rate, as (wB/2Le)(ε̇B/ε̇a) ≈ 460–920 (where the band width at half height is wB/2 ≈ 1.85 mm and the total gage length of the sample is Le ≈ 20 mm)!

Fig. 3. PLC band #4: cumulative E1 (a) and incremental dE1 (b) axial strain maps (with a 9 mm × 4.6 mm field of view) at selected times. The time used here is t4, with origin defined as the start of the high-speed image captures shown in Fig. 2(a).

W. Tong et al. / Scripta Materialia 53 (2005) 87–92

The time history of cumulative axial strain at the center of PLC band #4 is shown in Fig. 4. Up to 80% or more of the total axial strain in the band was accumulated within a short period of 3–5 ms (accompanied by a sudden load drop of 2–4% of the current axial load level). Fig. 5 shows the time evolution of the axial strain distributions for PLC band #4 along the centerline of the tensile coupon around the location of each band. After initiation, PLC band #4 grew in a self-similar manner in terms of the spatial distribution of axial strain.
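The compatibility estimate quoted above can be reproduced numerically from the values given in the text:

```python
# Numerical check of the compatibility estimate above, using the values
# quoted in the text (w_B/2 ≈ 1.85 mm, L_e ≈ 20 mm, band strain rate
# 0.5–1 1/s, imposed strain rate 1e-4 1/s).
w_B_half = 1.85e-3   # m, band width at half height (w_B / 2)
L_e = 20.0e-3        # m, total gage length of the sample
rate_a = 1.0e-4      # 1/s, imposed overall strain rate

for rate_B in (0.5, 1.0):                     # 1/s, in-band strain rate
    ratio = (w_B_half / L_e) * (rate_B / rate_a)
    print(f"rate_B = {rate_B} 1/s -> ratio = {ratio:.1f}")
# ratios of about 462 and 925, i.e. the 460–920 range quoted in the text
```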
In other words, the spatial distributions of axial strain at various times after band nucleation closely resemble one another if normalized by the corresponding peak strain level. No axial motion of the peak strain locations was observed (within the subpixel resolution of a few microns), and the band width at half of the peak strain (i.e., the full width at half magnitude) was found to be nearly constant (about 1.85 mm) throughout the rapid growth of the bands.

Fig. 5. True axial strain distributions of PLC band #4 along the centerline of the tensile coupon at selected times (t4 from 300.0 ms to 1656.8 ms). The curves are plotted so that the peak strain in each band occurs at X = 0.

Fig. 4. The time history of the cumulative axial true strain at the center point of PLC band #4.

4. Discussion

At least two possible PLC band nucleation and subsequent development mechanisms in tensile test coupons have been proposed in the literature [3,25]. The first mechanism is associated with the initiation of an embryo band at a lateral specimen surface with subsequent transversal growth into the bulk at an angle (consistent with the resulting band orientation) with respect to the tensile axis. Such a band may have its final width and peak strain level at nucleation, and subsequently grows along its length direction to develop into a complete band. The second mechanism is associated with the nucleation and growth of a sample-scale band across the width and thickness of the tensile coupon that is only a few atomic planes thick. This narrow band then expands in its width direction (along the specimen axis) until its expansion is blocked by stress relaxation.
Chihab et al. [3] associated these two scenarios of band development with the different types of PLC bands observed in their experiments. For example, type C bands were thought to be related to the first mechanism, while type A and B bands largely result from the second mechanism. However, Chihab et al. [3] could not verify their hypotheses due to the lack of direct strain measurements and the very limited time resolution (25 fps) of their experiments. The PLC bands observed in our investigation fall into the type B category according to the definition by Chihab et al. [3]. Some strain concentrations were indeed found to start to accumulate (prior to any obvious serrations or sudden load drops in the load–time curve) at one of the edges of the tensile coupon, according to our time-resolved direct strain measurement results. However, the PLC bands were not first fully developed at a tensile coupon edge to their peak strain levels of 0.6%. Neither did fully developed strain concentrations of such a magnitude subsequently propagate within 3–5 ms along their length directions to form complete bands across the specimen width at an angle of ±26° with respect to the transverse direction (the representative PLC band #4 described in detail here had an angle of −26°). As shown in the incremental strain maps of Fig. 3(b), the material along the entire band region participated in plastic deformation throughout the entire 3–5 ms duration of the sudden load drops, although some strain heterogeneity during the band growth was also evident. Our strain measurement results did not show any significant expansion of type B PLC bands along the width direction of the bands (the tensile loading direction) during their growth. This is clearly confirmed by the axial strain distribution profiles shown in Fig. 5.
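The self-similarity argument above (profiles collapsing onto one shape when normalized by peak strain, with a nearly constant full width at half magnitude) can be illustrated with a synthetic bell-shaped profile. The Gaussian form used here is an assumption for illustration only, not the measured band shape.

```python
import numpy as np

# Sketch of the self-similarity check described above: normalize strain
# profiles captured at different times by their peak values, compare their
# shapes, and measure the full width at half magnitude (FWHM).
def fwhm(x, profile):
    half = profile.max() / 2
    above = x[profile >= half]
    return above[-1] - above[0]

x = np.linspace(-3e-3, 3e-3, 2001)        # m, position along the centerline
sigma_w = 1.85e-3 / 2.355                 # Gaussian sigma giving FWHM ~ 1.85 mm
profiles = [peak * np.exp(-x**2 / (2 * sigma_w**2))
            for peak in (0.001, 0.003, 0.006)]  # growing peak strain levels

normalized = [p / p.max() for p in profiles]
shape_spread = max(np.abs(n - normalized[0]).max() for n in normalized)
print(f"max shape difference after normalization: {shape_spread:.2e}")  # ~0
print(f"FWHM ≈ {fwhm(x, profiles[-1]) * 1e3:.2f} mm")                   # ≈ 1.85 mm
```

A self-similar band is one for which the normalized curves coincide and the FWHM stays fixed while only the peak grows, which is exactly what Fig. 5 shows for band #4.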
As the strain profiles remained self-similar, no major motion of the band strain distributions (either cumulative or incremental) along the tensile loading direction occurred within the 3–5 ms of the band development period. Contrary to the usual assumption of a uniform band strain distribution with step-like transitions (which is consistent with the second mechanism proposed by Chihab et al. [3]), our strain measurement results show that the PLC bands have a more or less symmetric, bell-like strain distribution throughout the entire 3–5 ms nucleation and development period. Such strain distribution characteristics are similar to those observed in localized necks in sheet metals due to material tension instability at large strains [26].

At the time resolution of 0.2 ms and the spatial resolution of the average grain size, the type B PLC bands studied here appear to be purely stationary at the macroscopic level, as no plastic deformation front was detected to propagate either along the band's length direction or along the band's width direction (the tensile loading direction). This is clearly demonstrated by the incremental maps in Fig. 3(b): the successive "snapshots" of each band suggest no transverse or lateral motion of the deformation fronts of any kind. In summary, PLC bands of type B are more or less just a series of discrete, localized necks with a certain peak strain and a characteristic bell-like strain profile.

Acknowledgement

The high-speed digital image acquisition was carried out with the help of Mr. Andy Kubit of Vision Research (Wayne, NJ), and the AA 5052-H32 sheet metal material was provided by Dr. Tim Focke of NIST (Gaithersburg, MD). The enthusiastic support of Drs. Alan Taub, Shawn Gayden, and Y.T. Cheng, and the critical review of the manuscript by Dr. Paul E. Krajewski, are gratefully acknowledged.

References

[1] Cottrell AH.
Dislocations and plastic flow in crystals. Oxford: Clarendon Press; 1953.
[2] Hall EO. Yield point phenomena in metals and alloys. London: Macmillan; 1970.
[3] Chihab K, Estrin Y, Kubin LP, Vergnol J. Scripta Metall 1987;21:203.
[4] Kubin LP, Fressengeas C, Ananthakrishna G. Collective behavior of dislocations in plasticity. In: Nabarro FRN, Hirth JP, editors. Dislocations in solids, vol. 11. New York: Elsevier Science; 2001.
[5] McCormick PG, Venkadesan P, Ling CP. Scripta Metall Mater 1993;29:1159.
[6] Weinhandl H, Mitter F, Bernt W, Kumar S, Pink E. Scripta Metall Mater 1994;31:1567.
[7] Wack B, Tourabi A. Mater Sci Eng A 1995;196:79.
[8] Fujita H, Tabata T. Acta Metall 1977;25:793.
[9] Korbel A, Zasadzinski J, Sieklucka Z. Acta Metall 1976;24:919.
[10] Robinson JM. Int Mater Rev 1994;39:217.
[11] Gallardo-Lopez A, Gomez-Garcia D, Dominguez-Rodriguez A, Kubin LP. Scripta Mater 2004;51:203.
[12] Nagarjuna S, Anozie FN, Evans JT. Mater Sci Tech 2003;19:1661.
[13] Zhu SM, Gao X, Nie JF. In: Luo A, editor. Magnesium technology 2004. Warrendale, PA: The Minerals, Metals & Materials Society; 2004. p. 325.
[14] Cheng XM, Morris JG. Scripta Mater 2000;43:651.
[15] Lebyodkin M, Brechet Y, Estrin Y, Kubin LP. Key Eng Mater 1995;103:313.
[16] Mesarovic S. J Mech Phys Solids 1995;43:671.
[17] Bruck HA, McNeill SR, Sutton MA, Peters WH. Exp Mech 1989;29:261.
[18] Chen DJ, Chiang FP, Tan YS, Don HS. Appl Optics 1993;32:1839.
[19] Tong W. Exp Mech 1997;37:452.
[20] Vendroux G, Knauss WG. Exp Mech 1998;38:86.
[21] Tong W. J Mech Phys Solids 1998;46:2087.
[22] Staley JT. Effects of composition, processing and structure on aluminum alloys. In: Materials selection and design. ASM metals handbook, vol. 20. Materials Park, OH; 1985.
[23] Phantom V9.0, Vision Research, http://www.visiblesolutions.com.
[24] Tong W. A User's Guide to the Yale Surface Deformation Mapping Program (SDMAP), Technical Report, Department of Mechanical Engineering, Yale University, New Haven, CT (1996–2004).
[25] Kocks UF. In: Progress in materials science. Chalmers anniversary volume. Oxford: Pergamon Press; 1981. p. 225.
[26] Tong W, Zhang N. Proceedings of the ASME Manufacturing Engineering Division MED, vol. 12. New York: ASME; 2001.

Time-resolved strain mapping measurements of individual Portevin–Le Chatelier deformation bands

work_y4gi5dl6z5dinn37fs55ao4kyy ---- Uterine prolapse with endometrial eversion in association with an unusual diffuse, polypoid, fibrosing perimetritis and parametritis in a cat

DOI: 10.1177/2055116915626166
M. Valentine, Susan Porter, A. Chapwanya, J. Callanan. JFMS Open Reports, 2016, vol. 2.

Case summary: This case describes a young non-pregnant cat that presented with uterine prolapse in association with an unusual diffuse, polypoid, fibrosing perimetritis and parametritis. Following ovariohysterectomy the cat recovered fully. No intra-abdominal complications were seen on ultrasound examination 3 months postsurgery. At the time of writing, the cat remains healthy.
Relevance and novel information: Uterine prolapse in the cat is relatively rare and usually associated with the…
work_ybpnuqhiwbg5xmnhrzhn2st5qu ---- Finger typing driven triboelectric nanogenerator and its use for instantaneously lighting up LEDs

Available online at www.sciencedirect.com
Journal homepage: www.elsevier.com/locate/nanoenergy
Nano Energy, http://dx.doi.org/10.1016/j.nanoen.2012.11.015

RAPID COMMUNICATION

Finger typing driven triboelectric nanogenerator and its use for instantaneously lighting up LEDs

Junwen Zhong (a), Qize Zhong (a), Fengru Fan (b), Yan Zhang (b,c), Sihong Wang (b), Bin Hu (a), Zhong Lin Wang (b,c,*), Jun Zhou (a,**)

(a) Wuhan National Laboratory for Optoelectronics (WNLO), and School of Optical and Electronic Information, Huazhong University of Science and Technology (HUST), Wuhan 430074, PR China
(b) School of Materials Science and Engineering, Georgia Institute of Technology, Atlanta, GA 30332-0245, USA
(c) Beijing Institute of Nanoenergy and Nanosystems, Chinese Academy of Sciences, Beijing, China

Received 27 November 2012; accepted 28 November 2012

KEYWORDS: Nanogenerator; Self-powered system; Flexible

* Corresponding author at: School of Materials Science and Engineering, Georgia Institute of Technology, Atlanta, GA 30332-0245, USA.
** Corresponding author. Tel.: +86 13307198060.
E-mail addresses: zhong.wang@mse.gatech.edu (Z.L. Wang), jun.zhou@mail.hust.edu.cn (J. Zhou).
Abstract

Harvesting mechanical energy with variable frequency and amplitude from our environment to build self-powered systems is an effective and practically applicable technology for ensuring the independent and sustainable operation of mobile electronics and sensor networks without the use of a battery, or at least with an extended battery lifetime. In this study, we demonstrate a novel and simple arch-shaped flexible triboelectric nanogenerator (TENG) that can efficiently harvest irregular mechanical energy. The mechanism of the TENG is discussed and illustrated in detail. The instantaneous output power of a single TENG device can reach as high as ~4.125 mW under finger typing, which is high enough to instantaneously drive 50 commercial blue LEDs connected in series, demonstrating the potential application of the TENG for self-powered systems and mobile electronics.

© 2012 Published by Elsevier Ltd.

Introduction

Recently, research on light-weight, flexible, and even wearable electronics has attracted much attention for potential applications including, but not limited to, wearable displays, artificial electronic skin, and distributed sensors [1,2]. A key component for these applications is a power source that is as flexible as the electronic sheet itself. Harvesting energy from ambient sources, including solar, thermal, and mechanical energy, could ensure the independent and sustainable operation of such systems without the use of a battery, or at least extend the lifetime of a battery [3–5].

Irregular mechanical energy, including ambient noise, airflow, and the activity of the human body, is probably the most common energy source in our living environment, available almost anywhere at any time, which makes it an ideal source of energy for mobile electronics. Piezoelectric nanogenerators (PNGs) [6–16] and triboelectric nanogenerators (TENGs) [17–20] have been developed to harvest irregular mechanical energy with variable frequency and amplitude based on the piezoelectric and triboelectric effects, and they have been demonstrated to power small electronic devices such as a small liquid crystal display (LCD) screen [21] and an electrochromic device [22]. Here we demonstrate a novel and simple design of the TENG for efficiently harvesting mechanical energy. A fingertip typing can generate an output voltage of up to ~125 V, and the output power is sufficient to light up 50 LEDs connected in series. In conjunction with a transformer for enhancing the output current, the TENG can power a commercial infrared transmitter with an output current of ~6 mA at ~1 V. Our study unambiguously demonstrates the applicability of TENGs for self-powered systems.

Experimental

Fabrication of the TENG

The design of the TENG is presented in Figure 1a.
The fabrication process started with a rectangular (3.5 cm �2.5 cm) polytetrafluoroethylene (PTFE) film (0.20 mm in Figure 1 (a) Schematic diagram and digital photography of an arch (b) PTFE film and (c) Ag coated PVA nanowires on PET film. Inset show circuit of the TENG with an external load of R when the device is corresponding current–time curve, respectively. (h) Linear superpos the same polarity (G1+G2) and opposite polarity (G1�G2). Please cite this article as: J. Zhong, et al., Finger typing driven tribo LEDs, Nano Energy (2012), http://dx.doi.org/10.1016/j.nanoen.2012. thickness, Figure 1b). Cu layer (200 nm) was deposited on the upper surface of PTFE by sputter coating, and used as the top electrode. Specially, the Cu-coated PTFE film will be bent toward the polymer side because of the large differ- ence in thermal expansion coefficients, which results in an arch-shape structure. Then PTFE side of the hybrid film was placed onto another rectangular (3.5 cm�2.2 cm) poly- ethylene glycol terephthalate (PET) film (0.22 mm in thick- ness). The inner surface of PET film was coated with PVA nanowires prepared by electrospining, and then deposited with a thin Ag layer (100 nm in thickness) by sputter coating as the bottom electrode (Figure 1c). Before assembling of the device, the inner surface of the PEFE film was rubbed with cellulose paper to charging the surface of PTFE film. According to the triboelectric series, [23] that is, a list of materials based on their tendency to gain or lost charges, electrons are injected from cellulose paper to PTFE, resulting in net negative charges (Q) on the PTFE surface. It is reported that PTFE can contain charge densities up to�5�10�4 C/m2 with theoretical lifetimes of hundreds of years [24,25]. During the assembling process, the inner surface of the PTFE film faced Ag layer of the PET film, then the edges of the two films along the length axis were fixed by Kapton tape, forming an arch-shaped device (inset of Figure 1a). 
89 91 93 95 97 99 101 103 105 107 109 111 113 115 117 119 121 123 -structured flexible triboelectric nanogenerator. SEM images of s the EDS spectrum of the Ag coated PVA nanowires. Equivalent at (d) origin, (e) pressing and (f) releasing states and (g) the ition tests of two TENGs (G1 and G2) connected in parallel with electric nanogenerator and its use for instantaneously lighting up 11.015 dx.doi.org/10.1016/j.nanoen.2012.11.015 dx.doi.org/10.1016/j.nanoen.2012.11.015 dx.doi.org/10.1016/j.nanoen.2012.11.015 1 3 5 7 9 11 13 15 17 19 21 23 25 27 29 31 33 35 37 39 41 43 45 47 49 51 53 55 57 59 61 63 65 67 69 3Finger typing driven triboelectric nanogenerator Results and discussions Power generation mechanism of TENG In a simplified model, the equivalent circuit of the TENG with an external load of R is illustrated in Figure 1d, f and g, in which the device can be regarded as a flat-panel capacitor. As the inner surface of the PTFE was charged Figure 2 Electrical performance characterization measurement current–time curve and (b) maximum output current as well as the frequency of 3 Hz and external load resistance of 500 MO. (c) Outpu total charges transported at different stimulation frequencies at a 500 MO. (e) Stimulations and variation with different degrees of d (f) Maximum output current and instantaneous peak power as a funct of 3 Hz and deformation of 1.5 mm. Please cite this article as: J. Zhong, et al., Finger typing driven tribo LEDs, Nano Energy (2012), http://dx.doi.org/10.1016/j.nanoen.2012. with negative charges of Q while the Cu electrode was grounded, the Cu electrode and Ag electrode would produce positive charges of Q1 and Q2, respectively, due the electro- static induction and conservation of charges, where �Q=Q1+Q2 at any moment. 
Assuming that the charges distributed is uniformly on the surface of PTFE, Cu and Ag, thus �s¼s1þs2 ð1Þ 71 73 75 77 79 81 83 85 87 89 91 93 95 97 99 101 103 105 107 109 111 113 115 117 119 121 123 of a TENG under different experiment conditions. (a) Output total charges transported at different deformations for a given t current–time curve and (d) maximum output current as well as given deformation of 1.5 mm and external load resistance of eformation provided by the mechanical trigger to the TENG. ion of the external load resistance at a given bending frequency electric nanogenerator and its use for instantaneously lighting up 11.015 dx.doi.org/10.1016/j.nanoen.2012.11.015 dx.doi.org/10.1016/j.nanoen.2012.11.015 dx.doi.org/10.1016/j.nanoen.2012.11.015 1 3 5 7 9 11 13 15 17 19 21 23 25 27 29 31 33 35 37 39 41 43 45 47 49 51 53 55 57 59 61 63 65 67 69 71 73 75 77 79 81 83 85 87 89 91 93 95 J. Zhong et al.4 where s is the charge density of PTFE surface, s1 is charge density of Cu surface which is contacted with PTFE and s2 is charge density of Ag upper surface (Figure 1d). If we define electric potential of the top electrode as UTE and electric potential of bottom electrode as UBE, then at any equilibrium state (Figure 2b) UBE can be presented as follows [20]: UBE ¼ s2 2e0 d2þ s2 2erpe0 d1þ s 2erpe0 d1� s 2e0 d2 � s1 2e0erp d1� s1 2e0 d2 ¼UTE ¼0 ð2Þ where e0 is the vacuum permittivity, and erp is the relative permittivity of PTFE, d1 is the thickness of PTFE film, d2 is the distance between the two electrodes. Put Eq. (1) into Eq. (2), we can get sd2þs1d2þ s1 erp d1 ¼0 ð3Þ s1 ¼� s 1þd1=d2erp ð4Þ As d1 and erp are constant with value of 0.2 mm and �1.93, [25] respectively, and charge Q is stable for a relatively long time on the PTFE surface, thus s1 is dictated by the gap distance d2 (See Figure S2). 
The variation of d2 will result in the redistribution of the charges between Cu and Ag electrodes through the load R which generates a current through the load, so that mechanical energy is converted into electricity. The working mechanism of the TENG is similar to a variable-capacitance generator [26–28] except that the bias is provided by the triboelectric charges rather than an external voltage source. Once the TENG was being pressed (Figure 1e), a reduction of the interlayer distance of d2 would make the decrease s1 according to Figure 3 TENG as a direct power source to drive 50 commercia harvesting circuit and LED display. (b) Current–voltage curve of the 5 of the prototype energy harvesting circuit and LED display. (c) The r a finger typing. (f) The magnified current peak and the correspond Please cite this article as: J. Zhong, et al., Finger typing driven tribo LEDs, Nano Energy (2012), http://dx.doi.org/10.1016/j.nanoen.2012. Eq. (4) (See Figure S1), which results in an instantaneous positive current (Figure 1g) (here we defined a forward connection for the measurement as a configuration with positive end of the electrometer connected to the top electrode). Upon the TENG was being released (Figure 1f), the device would revert back to its original arch shape due to resilience, the interlayer distance d2 would increase, and the surface charge s1 increased as well, resulting in an instantaneous negative current (Figure 1g). The output performance of TENG The output of the TENG was carefully studied by periodi- cally bending and releasing at a controlled frequency and amplitude. The measuring system is schematically shown in Figure S2. One end of the TENG was fixed on a x–y–z mechanical stage that was fixed tightly on an optical air table, with another end free to be bend. 
To rule out the possible artifacts, we did the measurement of the output current when two TENG were connected in parallel with an external load of 500 MO and the results are shown in Figure 1h. When two TENGs were connected in the same direction, the total output current was enhanced. In comparison, when two TENGs were connected in antipar- allel, the total output current was decreased. The results indicated that the electrical output of the TENGs satisfied linear superposition criterion in the basic circuit connec- tions [18]. The output current of a TENG variation with different degree of deformations (the amplitude of the pushing down distance of the mechanical trigger) are depicted in the Figure 2a. Correspondingly, for a given frequency of 3 Hz and external load resistance of 500 MO, an increase of 97 99 101 103 105 107 109 111 113 115 117 119 121 123 lized blue light emitting diodes. (a) Schematic of the energy 0 LEDs connected in series. Inset shows the digital photography ectified output current through 50 LEDs driven by the TENG with ing snapshots of the TENG-driven flashing LED display. electric nanogenerator and its use for instantaneously lighting up 11.015 dx.doi.org/10.1016/j.nanoen.2012.11.015 dx.doi.org/10.1016/j.nanoen.2012.11.015 dx.doi.org/10.1016/j.nanoen.2012.11.015 1 3 5 7 9 11 13 15 17 19 21 23 25 27 29 31 33 35 37 39 41 43 45 47 49 51 53 55 57 59 61 63 65 67 69 71 73 75 77 79 81 83 85 87 89 91 5Finger typing driven triboelectric nanogenerator deformation generally increased the magnitude of the maximum current, from 0.25 mA at 0.5 mm to 0.72 mA at 2 mm. The integration of each current peak can gives the total chargers transferred between the electrodes, as shown in Figure 2b, indicating that the total amount charges transferred increased with the increase of distance change between the two electrodes, which is consisted with our model discussed above. 
Figure 2c shows the output current of the TENG under stimulation frequencies ranging from 1 to 4 Hz for a given deformation distance (1.5 mm) and external load resistance (500 MO), revealing a clear increasing trend with the increase of frequency. For a given deformation, as the deformation rate increases with stimulation frequency, which leads to a higher flow rate of charges, resulting in a higher current peak value, however the total amount of the charges transferred is constant. The integration of each current peak from each of the 4 different stimulation frequency are shown in Figure 2d, indicating that the total amount of the charges transferred almost keep constant of�21 nC at a given deformation. Therefore, the instanta- neous power output increases with the increase of stimula- tion frequency. The output current and voltage of a TENG variation with different external load for a given frequency of 3 Hz and degree of deformation (1.5 mm) are depicted in Figure 2e. With an increase in the load resistance, the maximum current decreases, while the voltage across the following an opposite trend with the maximum value of�407 V. ITD A + - Figure 4 (a) Schematic of an infrared transmitter–receiver system TENG in conjunct with a transformer. L2 is the infrared receiver d external load connected with IRD with a value of 20 M. (b) The outpu time when ITD was driven by TENG in conjunct with a transformer Please cite this article as: J. Zhong, et al., Finger typing driven tribo LEDs, Nano Energy (2012), http://dx.doi.org/10.1016/j.nanoen.2012. The output power exhibits an instantaneous peak value of 0.23 mW with an external load of 300 MO (Figure 2f). The measurement results reveal that the TENG is particularly efficient provided that the load has a resistance on the order of hundreds of megaohm. 
The electric energy produced by our TENG can be stored using a rectifier and a capacitor, and can also be used as a direct power source, without electric storage, to power commercial LEDs (Video 1 and Figure S3). Supplementary material related to this article can be found online at http://dx.doi.org/10.1016/j.nanoen.2012.11.015.

Powering 50 LEDs in series by the TENG directly

As a demonstration of converting irregular mechanical energy, such as human motion, into electricity to power electronics, our TENG was successfully used as a direct power source, without an energy storage system, to instantly power 50 commercial blue LEDs (3B4SC) connected in series with finger typing. Figure 3a and the inset of Figure 3b show the schematic and a digital photograph of the prototype energy-harvesting circuit and LED display. A bridge rectifier is used to convert the AC output signals into DC signals. The 50 LEDs are connected in series; the 26 LEDs in the first row form the characters ''HUST'', while the 24 LEDs in the second row form the characters ''WNLO''. Figure 3b shows the current–voltage (I–V) curve of the 50 LEDs connected in series, revealing a forward turn-on voltage of ~125 V.
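A quick consistency check on the series string (our own back-of-envelope arithmetic): a forward turn-on of ~125 V across 50 series LEDs corresponds to about 2.5 V per LED, a typical value for blue LEDs:

```python
# Forward turn-on voltages add in a series string, so the per-LED
# turn-on is simply the string value divided by the LED count.
n_leds = 50
v_string = 125.0                  # V, turn-on of the 50-LED string
v_per_led = v_string / n_leds     # 2.5 V per blue LED
```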
In our study, both the finger pressing and the releasing process could light the LEDs (see Video 2 and the inset of Figure 3d), and the corresponding output current through the LEDs was simultaneously recorded and is shown in Figure 3c. The current peak corresponding to the releasing process has a smaller magnitude but lasts longer than that for the pressing process (Figure 3d). This observation can be explained by the fact that pressing is driven directly by the finger typing, while releasing is driven by the resilience of the arch-shaped PTFE film; releasing is therefore likely to be a slower process, giving a smaller but wider current signal. The highest peak current through the LEDs was ~33 μA, corresponding to an instantaneous output power of ~4.125 mW.

TENG used in a wireless system

In addition, in conjunction with a transformer (a passive device), a high output current on the order of milliamperes was generated by our TENG, which could be used to power electronic devices that require high current. Figure 4a shows the schematic of an infrared transmitter–receiver system (ST188, L4). The infrared transmitter diode (ITD) (forward turn-on voltage of ~1 V, Figure S4) was powered by the TENG in conjunction with a transformer, while the infrared receiver diode (IRD) and the external load R (20 MΩ) were powered by a constant power source. When the ITD was driven by the TENG triggered by finger typing, a strong infrared signal was emitted from the ITD; as the IRD received the infrared signal, its resistance decreased, leading to an obvious change of the voltage across the external load R. In our study, the output current through the ITD and the voltage drop across R were monitored simultaneously with time and are shown in Figure 4b and c, respectively. Figure 4b shows that the output current after applying the transformer can reach as high as ~6 mA.
The change of the voltage across the external load R follows the same trend as the output current (Figure 4c).

Conclusions

In summary, a novel and simple arch-shaped TENG has been invented that can be used efficiently for harvesting irregular mechanical energy. The instantaneous output power of a single TENG device can reach as high as ~4.125 mW, which is high enough to instantaneously drive 50 commercial blue LEDs connected in series. In conjunction with a transformer, the TENG can power a commercial infrared transmitter with an output current of ~6 mA. TENGs have the potential to harvest energy from human motion, mechanical vibration and more, with wide applications in self-powered systems for wearable electronics, sensors and security.

Acknowledgment

JWZ and QZZ contributed equally to this work. This work was financially supported by the Foundation for the Author of National Excellent Doctoral Dissertation of PR China (201035) and the Program for New Century Excellent Talents in University (NCET-10-0397). ZLW thanks the support of the Knowledge Innovation Program of the Chinese Academy of Sciences (Grant no. KJCX2-YW-M13). The authors would like to thank Professor C. X. Wang from Sun Yat-sen University for his support.

Appendix A. Supporting information

Supplementary data associated with this article can be found in the online version at http://dx.doi.org/10.1016/j.nanoen.2012.11.015.

References

[1] J.A. Rogers, Y.G. Huang, Proceedings of the National Academy of Sciences 106 (2009) 10875.
[2] D.H. Kim, N. Lu, R. Ma, Y.S. Kim, R.H. Kim, S. Wang, J. Wu, S.M. Won, H. Tao, A. Islam, et al., Science 333 (2011) 838.
[3] Z.L. Wang, J.H. Song, Science 312 (2006) 242.
[4] B.Z. Tian, X.L. Zheng, T.J. Kempa, Y. Fang, N.F. Yu, G.H. Yu, J.L. Huang, C.M. Lieber, Nature 449 (2007) 885.
[5] D. Kraemer, B. Poudel, H.P. Feng, J.C. Caylor, B.
Yu, X. Yan, Y. Ma, X.W. Wang, D.Z. Wang, A. Muto, et al., Nature Materials 10 (2011) 532.
[6] S. Xu, Y. Qin, C. Xu, Y.G. Wei, R.S. Yang, Z.L. Wang, Nature Nanotechnology 5 (2010) 366.
[7] G. Zhu, R.S. Yang, S.H. Wang, Z.L. Wang, Nano Letters 10 (2010) 3151.
[8] C. Chang, V.H. Tran, J.B. Wang, Y.K. Fuh, L.W. Lin, Nano Letters 10 (2010) 726.
[9] X. Chen, S.Y. Xu, N. Yao, Y. Shi, Nano Letters 10 (2010) 2133.
[10] M.Y. Choi, D. Choi, M.J. Jin, I. Kim, S.H. Kim, J.Y. Choi, S.Y. Lee, J.M. Min, S.W. Kim, Advanced Materials 21 (2009) 2185.
[11] T.D. Nguyen, N. Deshmukh, J.M. Nagarah, T. Kramer, P.K. Purohit, M.J. Berry, M.C. McAlpine, Nature Nanotechnology 7 (2012) 587.
[12] K.I. Park, M.B. Lee, Y. Liu, S. Moon, G.T. Hwang, G. Zhu, J.E. Kim, S.O. Kim, D.K. Kim, Z.L. Wang, K.J. Lee, Advanced Materials 24 (2012) 2999.
[13] C. Sun, J. Shi, D.J. Bayerl, X.D. Wang, Energy & Environmental Science 4 (2011) 4508.
[14] C.Y. Chen, T.H. Liu, Y. Zhou, Y. Zhang, Y.L. Chueh, Y.H. Chu, J.H. He, Z.L. Wang, Nano Energy 1 (2012) 424.
[15] S.Y. Xu, G. Poirier, N. Yao, Nano Energy 1 (2012) 602.
[16] X. Chen, S.Y. Xu, N. Yao, Y. Shi, Nano Letters 10 (2012) 2133.
[17] E.R. Post, K. Waal, Proceedings of the ESA Annual Meeting on Electrostatics, Paper G1, 2010.
[18] F.R. Fan, Z.Q. Tian, Z.L. Wang, Nano Energy 1 (2012) 328.
[19] F.R. Fan, L. Lin, G. Zhu, W. Wu, R. Zhang, Z.L. Wang, Nano Letters 12 (2012) 3109.
[20] G. Zhu, C.F. Pan, W.X. Guo, C.Y. Chen, Y.S. Zhou, R.M. Yu, Z.L. Wang, Nano Letters 12 (2012) 4960.
[21] Y.F. Hu, Y. Zhang, C. Xu, G. Zhu, Z.L. Wang, Nano Letters 10 (2010) 5025.
[22] X.H. Yang, G. Zhu, S.H. Wang, R. Zhang, L. Lin, W.Z. Wu, Z.L. Wang, Energy & Environmental Science (2012), doi:10.1039/c2ee23194h.
[23] J.A. Cross, Electrostatics: Principles, Problems and Applications, Hilger, Bristol, 1987.
[24] J. Boland, Y.H. Chao, Y. Suzuki, Y.C. Tai, Proceedings of the 16th IEEE International Conference on Micro Electro Mechanical Systems (MEMS 2003), 2003, p. 538.
[25] J.A. Malecki, Physical Review B 59 (1999) 9954.
[26] B.C. Yen, J.H.
Lang, IEEE Transactions on Circuits and Systems I: Regular Papers 53 (2006) 288.
[27] P.D. Mitcheson, P. Miao, B.H. Stark, E.M. Yeatman, A.S. Holmes, T.C. Green, Sensors and Actuators A 115 (2004) 523.
[28] S.P. Beeby, M.J. Tudor, N.M. White, Measurement Science and Technology 17 (2006) 175.

Junwen Zhong received his B.S. degree in Applied Chemistry from Huazhong University of Science and Technology (HUST), China, in 2011. He is a Ph.D. candidate in the Wuhan National Laboratory for Optoelectronics (WNLO) and the School of Optical and Electronic Information at HUST. His research interest is energy harvesting for self-powered systems.

Qize Zhong received his B.S. degree in Optoelectronic Information from Huazhong University of Science and Technology (HUST), PR China, in June 2011. He is a Ph.D. candidate in the Wuhan National Laboratory for Optoelectronics (WNLO) and the School of Optical and Electronic Information at HUST. His research interests include flexible electronics.

Fengru Fan received his B.S. degree in Chemistry from Xiamen University, China, in 2006. He is a Ph.D. candidate in the College of Chemistry and Chemical Engineering at Xiamen University. From 2008 to 2011, he studied as a visiting student in Zhong Lin Wang's group at the Georgia Institute of Technology. His research interests include nanogenerators and self-powered nanosystems, the preparation and applications of metal–semiconductor hybrid devices, and the synthesis and characterization of novel nanostructures with functional materials.

Yan Zhang received his B.S.
degree (1995) and Ph.D. degree in Theoretical Physics (2004) from Lanzhou University. He then worked as a lecturer and associate professor (2007) at the Institute of Theoretical Physics, Lanzhou University. In 2009 he worked as a research scientist in the group of Professor Zhong Lin Wang at the Georgia Institute of Technology. His main research interests and activities are self-powered nano/micro systems, theoretical calculation of piezotronics, dynamics of time-delay systems, and complex networks.

Sihong Wang received his B.S. degree in Materials Science and Engineering from Tsinghua University, China, in 2009. He is a Ph.D. candidate in Materials Science and Engineering at the Georgia Institute of Technology. His main research interest is the synthesis of ZnO nanowires and the fabrication of nanodevices.

Bin Hu received his Ph.D. in materials science from Wuhan University of Technology in 2011. From 2009 to 2011, he was a visiting student at the Georgia Institute of Technology. He joined the Wuhan National Laboratory for Optoelectronics (WNLO) in 2012 as an associate professor, and his main research interest is flexible sensors for integrated self-powered nano- and microsystems.

Dr. Zhong Lin (ZL) Wang is the Hightower Chair in Materials Science and Engineering, Regents' Professor, Engineering Distinguished Professor and Director of the Center for Nanostructure Characterization at Georgia Tech. Dr. Wang is a foreign member of the Chinese Academy of Sciences, a fellow of the American Physical Society, a fellow of AAAS, a fellow of the Microscopy Society of America, and a fellow of the Materials Research Society. Dr. Wang was awarded the MRS Medal in 2011 from the Materials Research Society and the Burton Medal from the Microscopy Society of America.
He has made original and innovative contributions to the synthesis, discovery, characterization, and understanding of the fundamental physical properties of oxide nanobelts and nanowires, as well as to applications of nanowires in energy sciences, electronics, optoelectronics, and biological science. His discovery and breakthroughs in developing nanogenerators established the principle and technological road map for harvesting mechanical energy from the environment and from biological systems for powering personal electronics. His research on self-powered nanosystems has inspired worldwide efforts in academia and industry to study energy for micro-nano-systems, which is now a distinct discipline in energy research and future sensor networks. He coined and pioneered the fields of piezotronics and piezo-phototronics by introducing piezoelectric-potential-gated charge transport processes into the fabrication of new electronic and optoelectronic devices. This breakthrough, by redesigning the CMOS transistor, has important applications in smart MEMS/NEMS, nanorobotics, human–electronics interfaces, and sensors. Dr. Wang's publications have been cited over 45,000 times, and the h-index of his citations is 102. Details can be found at: http://www.nanoscience.gatech.edu.

Jun Zhou received his B.S. degree in Material Physics (2001) and his Ph.D. in Material Physics and Chemistry (2007) from Sun Yat-sen University, China. During 2005-2006, he was a visiting student at the Georgia Institute of Technology. After obtaining his Ph.D., he worked at the Georgia Institute of Technology as a research scientist. He joined the Wuhan National Laboratory for Optoelectronics (WNLO), Huazhong University of Science and Technology (HUST), as a professor at the end of 2009. His main research interest is flexible energy harvesting and storage devices for self-powered micro/nanosensor systems.
Monitoring of degradation of porous silicon photonic crystals using digital photography

Ariza-Avidad et al. Nanoscale Research Letters 2014, 9:410
http://www.nanoscalereslett.com/content/9/1/410

NANO EXPRESS  Open Access

Maria Ariza-Avidad 1,2, Alejandra Nieto 1, Alfonso Salinas-Castillo 2, Luis F Capitan-Vallvey 2, Gordon M Miskelly 3 and Michael J Sailor 1*

Abstract

We report the monitoring of porous silicon (pSi) degradation in aqueous solutions using a consumer-grade digital camera. To facilitate optical monitoring, the pSi samples were prepared as one-dimensional photonic crystals (rugate filters) by electrochemical etching of highly doped p-type Si wafers using a periodic etch waveform. Two pSi formulations, representing chemistries relevant for self-reporting drug delivery applications, were tested: freshly etched pSi (fpSi) and fpSi coated with the biodegradable polymer chitosan (pSi-ch). Accelerated degradation of the samples in an ethanol-containing pH 10 aqueous basic buffer was monitored in situ by digital imaging with a consumer-grade digital camera, with simultaneous optical reflectance spectrophotometric point measurements.
As the nanostructured porous silicon matrix dissolved, a hypsochromic shift in the wavelength of the rugate reflectance peak resulted in visible color changes from red to green. While the H coordinate in the hue, saturation, and value (HSV) color space calculated using the as-acquired photographs was a good monitor of degradation at short times (t < 100 min), it was not a useful monitor of sample degradation at longer times, since it was influenced by reflections of the broad spectral output of the lamp as well as of the narrow rugate reflectance band. A monotonic relationship was observed between the wavelength of the rugate reflectance peak and an H parameter value calculated from the average red-green-blue (RGB) values of each image by first independently normalizing each channel (R, G, and B) using their maximum and minimum values over the time course of the degradation process. Spectrophotometric measurements and digital image analysis using this H parameter gave consistent relative stabilities of the samples as fpSi > pSi-ch.

Keywords: Porous silicon; Photonic crystal; Degradation; Digital photography; Image processing; Hue color coordinate

Background

Porous silicon (pSi) has proven to be a versatile material that is readily prepared and modified for use in chemical sensors or as a platform for drug delivery [1]. Porous silicon is suited for this latter role because pSi and the porous silica (pSi-o) formed upon its oxidation are biocompatible and biodegradable. Porous silicon prepared with sinusoidal variations in the refractive index (termed rugate sensors) shows one-dimensional photonic crystal behavior, with characteristic narrow-band rugate reflectance peaks that can be engineered to occur in the visible through infrared regions of the electromagnetic spectrum. The reflectance spectra of these sensors change when analytes enter or leave the pores or if the pore walls are dissolved.
* Correspondence: msailor@ucsd.edu
1 Department of Chemistry and Biochemistry, University of California, San Diego, 9500 Gilman Drive, La Jolla, California 92093-0358, USA
Full list of author information is available at the end of the article

© 2014 Ariza-Avidad et al.; licensee Springer. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited.

The ability to place the peaks in the reflectance spectrum within the near-infrared region of the electromagnetic spectrum allows direct monitoring through tissue [2-4], which has potential use for both biomonitoring and monitored drug release. Most optical studies of porous silicon-based materials have used spectrophotometers with reflectance probes. The position of the wavelength of the maximum reflectance peak of a porous silicon-based photonic crystal can be an effective reporter of degradation due to oxidation and dissolution of the silicon matrix in aqueous media. Spectrophotometric measurement of the temporal evolution of the visible reflectance spectrum of pSi or pSi-o has been used to follow the dissolution process and the release of drugs trapped in the porous matrix [5,6]. A key challenge we are addressing is the development of efficient, low-cost methods to extract relevant chemical information from the change in optical response of porous silicon and similar nanostructured sensor materials. The broad-band red-green-blue (RGB) filters in most color cameras are not optimal for measuring changes in porous silicon reflectivity.
The complete optimization of such camera-chemical sensor combinations will require structuring the optical properties of the nanosensor material to best match the optical response of the camera, optimizing the illuminant, and developing efficient and selective data analysis algorithms. In this paper, our primary focus is on developing a simple single parameter to represent the change detected by a color camera as a porous silicon film degrades.

Colors, which are qualities representing human visual experiences [7,8], can be quantified by a number of methods or color spaces. Color spaces can be classified into four groups related via algebraic transformations: linear-light tri-stimulus, xy chromaticity, perceptually uniform, and hue-oriented [8]. All of the color spaces can be derived from the RGB information supplied by devices such as cameras and scanners. The hue, saturation, and value (HSV) color space used in this work represents the cognitive color information associated with a change in the dominant wavelength of the observed signal in a single parameter, the hue coordinate H. The use of hue in optical sensor devices has been reported previously, especially in investigations of bitonal optical sensors and of thermochromic liquid crystal thermography. Thus, all relevant color information in digital images of bitonal sensors (sensors in which a chromophore changes into another chromophore with a different spectrum in the presence of a given analyte) is contained in the H coordinate [9,10]. These authors note that the H coordinate is simple to calculate, is easily obtained from commercial imaging devices, and shows little dependence on variations in color intensity or in brightness of illumination. The reflectance spectra of the thermochromic liquid crystals used in thermography are similar to those of rugate porous silicon, having narrow reflectance peaks with widths of 30 to 40 nm [11,12].
These reflectance peaks can move over 100 nm to the blue as temperature increases. Thermochromic liquid crystal thermography often relies on a monotonic relationship between hue and temperature. However, several authors have noted that the measured hue depends on the illuminant used and is also impacted by background reflectance [11-13]. This can result, for example, in hue not being monotonic if a red-rich light such as a tungsten lamp is used. Anderson and Baughn noted that approaches such as subtracting the amount of light in each of the red, green, and blue channels observed at low temperature from all subsequent measurements, and then calculating hue using these corrected values, could give a monotonic H function for all the light sources they used [11,12]. They noted that a monotonic H function was also obtained if they adjusted the white balance of their measurements using the image data corresponding to the low-temperature liquid crystal rather than images of a 'true gray' [11]. The concept of deriving a hue-based function after modification of the raw intensity data has been extended further: Finlayson and Schaefer applied logarithmic preprocessing to obtain a hue parameter that was invariant to brightness and gamma [14], while van der Laak et al. calculated absorbance for transmitted-light microscopy images prior to determining a hue parameter [15]. There are additional complexities in analyzing digital images of rugate porous silicon compared to thermochromic liquid crystals, because the reflectance peaks can be narrower (10 to 30 nm) and the reflectance peak intensities can change to a larger extent with wavelength, due to factors such as light absorption within the porous silicon layer or degradation of the porous layer. In this work, we aimed to use a consumer-grade digital camera to monitor the degradation of freshly etched and modified pSi photonic crystals (rugate filters) rather than a spectrophotometer.
While this constrains the reflectance measurements to lie within the visible spectrum, measurement of the spectral changes of pSi by digital photography can enable monitoring of pSi degradation and drug delivery in non-laboratory settings. The use of digital photography for monitoring the degradation of pSi in aqueous media was validated by simultaneous spectrophotometric measurements of the pSi reflectance spectrum.

Methods

Preparation of freshly etched porous silicon chips (fpSi)

Porous silicon was prepared by anodic electrochemical etching of highly doped 0.95 mΩ cm p++-type (100)-oriented silicon wafers (Virginia Semiconductor, Fredericksburg, VA, USA) in a 3:1 (v/v) mixture of aqueous hydrofluoric acid (49%) and ethanol. The fpSi samples were prepared in a Teflon etch cell that exposed 1.2 cm^2 of the polished face of the Si wafer, which was contacted on the back side with a piece of Al foil. A platinum spiral was used as a counter-electrode. A rugate filter was generated using a current density modulated with 100 cycles of a sinusoidal waveform oscillating between 15 and 108 mA/cm^2, with periods on the order of 6 s depending on the desired wavelength of maximum reflectivity. After etching, the fpSi samples were rinsed with ethanol and dried in a stream of nitrogen.

Preparation of porous silicon coated with chitosan (pSi-ch)

A 1% chitosan solution was prepared by dissolving 10 mg chitosan from crab shells, 85% deacetylated (Sigma Aldrich, St. Louis, MO, USA), in 1 mL of 15% v/v aqueous acetic acid and stirring overnight. The fpSi sample was coated with chitosan by spin coating (Laurell WS-400B-6NPP-Lite, Laurell Technologies, North Wales, PA, USA) using 150 μL of chitosan solution at a final speed of 100 rpm for 10 min and then drying at room temperature under nitrogen.
The sample was then placed under vacuum to evaporate the remaining solvent. After the deposition, the pSi-ch samples were heated at 70°C on a hot plate for 10 min to cause a small amount of polymer infiltration into the pores; this resulted in a slight red shift in the rugate reflectance peak position.

Instrumental procedures

The porosity and thickness of the porous silicon layers were estimated by the spectroscopic liquid infiltration method (SLIM), based on the measurement of the thin-film interference components of the reflectance spectra of the samples before and after infiltration of a liquid (ethanol) with a known refractive index [16], using an Ocean Optics USB-2000 spectrometer (Ocean Optics, Dunedin, FL, USA) configured for specular reflectance, working in back-reflection configuration in the range 400 to 1,000 nm. The reflectance spectra were recorded at five spots distributed across each sample in order to evaluate the homogeneity of each porous silicon sample. The values of the porosity and the thickness were determined by means of the two-component Bruggeman effective medium approximation [17]. The extent of chitosan infiltration into the porous silicon sample was also evaluated from the reflectance spectrum. The freshly prepared (fpSi) and modified (pSi-ch) porous silicon samples were characterized using a Thermo Scientific Nicolet 6700 FTIR spectrometer (Thermo Fisher Scientific, Waltham, MA, USA) with a Smart iTR diamond ATR fixture. The morphology of the porous silicon was measured by scanning electron microscopy (SEM) using an FEI XL30 SEM (FEI, Hillsboro, OR, USA) operating in secondary electron imaging mode. To avoid sample-charging anomalies, the porous Si samples were metalized with a thin layer of gold prior to the SEM analysis. The pore size and the porosity oscillations of the rugate filter structure were evaluated with this analysis.
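The SLIM analysis described above can be sketched numerically: porosity and thickness follow from the effective optical thickness (EOT = 2nL) measured first with air-filled pores and then with ethanol-filled pores, combined with the two-component Bruggeman effective medium approximation. The sketch below is our own illustration; it assumes a real, dispersion-free silicon index of 3.5 and an ethanol index of 1.36, whereas the real analysis accounts for measured dispersion:

```python
import numpy as np

def bruggeman_neff(p, n_pore, n_si=3.5):
    """Effective index of a two-component Bruggeman medium with
    porosity p, the pores filled with a medium of index n_pore."""
    def f(n):
        return (p * (n_pore**2 - n**2) / (n_pore**2 + 2 * n**2)
                + (1 - p) * (n_si**2 - n**2) / (n_si**2 + 2 * n**2))
    lo, hi = n_pore, n_si          # the root is bracketed by the two indices
    for _ in range(60):            # plain bisection
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

def slim(eot_air, eot_ethanol, n_liq=1.36, n_si=3.5):
    """Recover (porosity, thickness) from EOT = 2nL measured in air and
    in ethanol; thickness comes out in the same length unit as the EOTs."""
    ratio = eot_ethanol / eot_air  # grows monotonically with porosity
    def g(p):
        return (bruggeman_neff(p, n_liq, n_si)
                / bruggeman_neff(p, 1.0, n_si) - ratio)
    lo, hi = 1e-3, 0.999
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if g(lo) * g(mid) <= 0:
            hi = mid
        else:
            lo = mid
    p = 0.5 * (lo + hi)
    thickness = eot_air / (2.0 * bruggeman_neff(p, 1.0, n_si))
    return p, thickness
```

Round-tripping a layer with the fpSi-like values reported in this paper (porosity ~53%, thickness 22.8 μm) reproduces both numbers under these assumptions.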
Measurement of porous silicon degradation

The pSi degradation was studied using a custom-designed transparent flow cell system with a total volume of 4.5 mL (including connections). The 1:1 (v/v) ethanol/0.5 M carbonate-borate buffer solution (pH 10) was flowed in at the bottom of the sample using a peristaltic pump at a rate of 10 μL/s and room temperature (20 ± 1°C). Ethanol was included in the buffer to improve the permeation of solution into the pores and to reduce the formation of bubbles that could affect the subsequent image analysis. The degradation of the fpSi and pSi-ch samples was monitored by obtaining reflectance spectra (spectrophotometer) and photographs (digital camera) every 5 min through the front cover of the flow cell until after complete degradation had occurred (300 min). To obtain both measurements repeatably during the same experiment, the optical paths for the reflectance probe and the camera were located in front of the flow cell along the sample surface normal, as shown in Figure 1. The sample was illuminated by means of a diffuse axial illuminator coupled to a Fiber-Lite MI-150 (Dolan-Jenner, Boxborough, MA, USA) light source with an approximate color temperature of 3,000 K, mounted between the flow cell and the camera. A beam splitter (Thorlabs CM1-BS2 cube-mounted non-polarizing beamsplitter, 50:50, 0.7 to 1.1 μm; Newton, NJ, USA) between the diffuse axial illuminator and the flow cell also allowed measurement of the reflectance spectrum over 400 to 1,000 nm with the reflectance probe of a fiber-optic spectrophotometer (Ocean Optics USB-2000-VIS-NIR). The reflectance probe was rigidly fixed to the beamsplitter via lens tubes containing a focusing lens. The reflectance spectrum acquisition was controlled by SpectraSuite software (Ocean Optics, Inc.).
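A small consistency check on the flow conditions (our own back-of-envelope arithmetic): at 10 μL/s, one full cell volume of 4.5 mL is exchanged roughly every 450 s, i.e. about every 7.5 min, comparable to the 5 min imaging interval:

```python
# Residence (turnover) time of the flow cell at the stated pump rate.
cell_volume_ul = 4500.0    # 4.5 mL in microliters
flow_rate_ul_s = 10.0      # peristaltic pump rate, uL/s

turnover_s = cell_volume_ul / flow_rate_ul_s   # 450 s per cell volume
turnover_min = turnover_s / 60.0               # 7.5 min
```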
The position of the rugate reflectance peak and the FFT of the portion of each reflectivity spectrum that displayed Fabry-Perot interference fringes were calculated using custom routines in Igor (Wavemetrics, Inc., Portland, OR, USA). Shifts in the positions of the peaks in the FFT spectra indicate a change in the effective optical thickness (EOT, or 2nL, where n is the effective refractive index and L is the thickness of the layer) of the porous silicon samples. Digital images were acquired with a Canon EOS 500D (Digital Rebel XTi; Canon, Ota, Tokyo, Japan) digital camera with an EF-S 60 mm f/2.8 macro lens. In order to use the camera as a colorimeter, the geometry of the imaging equipment was rigidly fixed and the flow cell was exposed to constant lighting. The camera settings were fixed at ISO 400, aperture value f/4.5, shutter speed 1/2 s, and white balance set for a tungsten light source. Canon EOS Utility software was used to remotely operate the camera from a computer and to transfer the jpg images from the camera to the computer.

Image analysis

The jpg images were pre-processed using Photoshop CS5 (Adobe Systems, San Jose, CA, USA). First, a color curve balance correction for each image was made, selecting as a reference point a portion of the silicon wafer that was not in contact with the buffer solution. Next, the portion of each image containing the pixels corresponding to the degrading porous silicon sample (ca. 1.2 × 10^5 pixels) was defined using a mask (Figure 2). The average RGB values for these pixels were determined for each image.

[Figure 1 Photograph of equipment for simultaneous acquisition of photographs and reflectance spectra: 1, flow cell containing pSi sample; 2, beam splitter; 3, reflectance probe connected to fiber-optic spectrophotometer; 4, diffuse axial illuminator with tungsten light source; 5, camera; 6, pSi sample; and 7, spectrophotometer.]

The H coordinate, or hue [9], of the HSV
(hue, saturation, and value) color space was used to monitor the porous Si degradation, since it represents the dominant color in one single parameter.

[Figure 1 caption, continued: inset, image of the pSi sample as captured by the digital camera.]

The RGB values of the selected pixels in each image were processed with a set of scripts and functions developed in Matlab r2010b (The MathWorks Inc, Natick, MA, USA) to determine the H coordinate, which is defined as in Equation 1:

H = ((G − B) / (max − min) + 0) · 60, if max = R
H = ((B − R) / (max − min) + 2) · 60, if max = G     (1)
H = ((R − G) / (max − min) + 4) · 60, if max = B

where max and min are the maximum and minimum of the R, G, and B channel values. *If H is less than 0, then 360 is added to H.
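The Matlab scripts themselves are not given in the paper; a minimal Python equivalent of Equation 1 (our own sketch, operating on mean R, G, B values scaled to [0, 1]) is:

```python
def hue_degrees(r, g, b):
    """H coordinate of Equation 1, in degrees (0..360)."""
    mx, mn = max(r, g, b), min(r, g, b)
    if mx == mn:                 # achromatic pixel: hue undefined, return 0
        return 0.0
    if mx == r:
        h = ((g - b) / (mx - mn)) * 60.0
    elif mx == g:
        h = ((b - r) / (mx - mn) + 2.0) * 60.0
    else:
        h = ((r - g) / (mx - mn) + 4.0) * 60.0
    return h + 360.0 if h < 0 else h

hue_degrees(0.8, 0.4, 0.2)   # a reddish-orange mean colour, H close to 20
```

Dividing the result by 360 gives hue on the 0 to 1 scale used for reporting in this paper.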
The freshly etched porous Si samples had the maximum reflectance peak centered at 593 nm (standard deviation 3.7 nm; n = 5). The thickness and porosity of fpSi were 22.8 μm (standard deviation 1.2 μm) and 53.4% (standard deviation 1.6%), respectively, both measured by SLIM. The average diameter of the pores was 20 nm as calculated from top-surface SEM images (Figure 3a), and a channel-like mesoporous structure was observed in cross-sectional SEM images (Figure 3b). The ATR-FTIR spectrum of fpSi (Figure 4a) shows a band at 2,100 cm−1 due to the presence of Si-Hx groups (x = 1 to 3) [19], a 905-cm−1 band assigned to the SiH2 scissor mode [20], and a 667-cm−1 band due to the SiH wagging mode. The small band at 1,050 cm−1 due to Si-O stretching modes suggests a small amount of oxidation occurred after etching [21]. Chitosan, a positively charged natural polysaccharide which is both biodegradable and biocompatible, was investigated as a protective coating for pSi due to its reported potential use in drug delivery studies [22]. A film of chitosan was deposited on the porous Si surface by spin coating. In order to evaluate the infiltration of the chitosan into the pores of the fpSi sample, cross-sectional SEM images and reflectance spectra were compared before and after chitosan coating. The range of thickness achieved by spin coating was 8 to 12 μm according to the SEM results, with the two well-defined separate layers suggesting the chitosan was mainly present as an adherent layer on top of the porous silicon (Figure 3c). More precise information about the extent of chitosan infiltration into the pSi was obtained from reflectance spectra of the hybrid. The reflectance spectra of the fpSi samples coated with chitosan showed a red shift of 8 nm in the maximum of the rugate peak. However, analysis of the thin-film interference fringes, which are also present in the reflectance spectrum, allowed more detailed investigation of the changes to the pore filling.
When chitosan is spin-coated onto the pSi surface and then warmed slightly, the chitosan forms an optically smooth film on top of the pSi layer, which leads to an additional Fabry-Pérot optical interference layer. Therefore, the FFT of the reflectance spectrum displays two major peaks (Figure 5). The position of the peak at an effective optical thickness (EOT) of 60.2 μm (EOT2 = 2n2L2, where n2 is the effective refractive index of the layer and L2 is its thickness) is slightly larger than the position of the corresponding peak observed in the FFT spectrum of the unmodified fpSi (59.7 μm). This peak is assigned to the pSi layer initially and to the pSi layer including a small amount of incorporated chitosan after modification. The second major peak in the FFT spectrum appears at an EOT of 77.4 μm (EOT3 = 2n3L3). This peak comes from interference between the top surface of the chitosan (the air/chitosan interface) and the bottom of the porous Si film (the bulk Si/porous Si interface). A third peak with EOT1 = 2n1L1 that is expected for the chitosan layer at 17.2 μm, according to the relationship EOT1 + EOT2 = EOT3, is not observable due to the small difference between the chitosan and pSi refractive indexes [23]. These data indicate that chitosan does not significantly infiltrate the porous Si layer and are in agreement with the SEM images and with the results of Pastor et al., who concluded that chitosan penetration into the inner structure of partially oxidized pSi is hindered [24]. Thus, the structure of pSi-ch samples consists of an array of porous reservoirs capped with a chitosan layer.

Figure 3 SEM images of the porous silicon. (a) Top view showing the pore openings in fpSi. (b) Partial cross section showing the rugate modulations in porosity in fpSi. (c) Cross section of chitosan-coated porous silicon (pSi-ch).

Upon loading of chitosan onto the fpSi, new bands appear in the FTIR spectrum (Figure 4b). The broad band at 3,350 cm−1 is assigned to both O-H and N-H stretching; the bands at 2,915 and 2,857 cm−1 are due to C-H stretching vibration modes, while the aliphatic CH2 bending appears at 1,453 cm−1 and the C=O stretching vibration mode appears at 1,710 cm−1. The intense band at 1,043 cm−1 has contributions from the C-O stretching mode in addition to Si-O stretching modes [5].

Figure 4 ATR-FTIR spectra of (a) freshly etched pSi (fpSi) and (b) freshly etched pSi with a layer of chitosan (pSi-ch).

Monitoring of porous silicon degradation
Hydride-terminated porous silicon undergoes degradation when immersed in aqueous solutions, with release of gaseous or soluble species, due to two processes: (1) oxidation of the silicon matrix to silica by water or various reactive oxygen species and (2) hydrolysis to soluble orthosilicic species [25]. This degradation hinders its use in some applications, although controlled degradation is useful for applications such as drug delivery. Different strategies have been applied to improve the stability of porous silicon [26], such as oxidation of the surface under controlled conditions [27], derivatization forming Si-C bonds on the surface via different organic reactions [28,29], or covering the porous structure with protective polymeric films [5]. The degradation of porous silicon in aqueous solution depends on several factors, with pH being a key factor. In acidic or neutral aqueous media, the degradation proceeds slowly, but in basic solutions, hydroxide reacts with both Si-H and Si-O surface species [1]. A pH 10 buffer solution that would lead to moderately rapid degradation of the porous Si samples (time for degradation <300 min) was selected for this study.
Ethanol was added to the buffer to ensure wetting of the porous silicon layer and to reduce the formation of adherent gas bubbles on the samples. Porous Si rugate filters show characteristic reflectance spectra due to the periodic oscillations of porosity in the direction normal to the surface. Changes in the average refractive index of the porous silicon film due to infiltration of compounds into the pores or alteration of the porous silicon matrix modify the wavelength of maximum reflectance, providing a useful method for sensing [30]. The oxidation of the porous silicon matrix to silica decreases the effective refractive index, which causes a hypsochromic shift in the position of the maximum reflectance peak in the spectrum, and the dissolution of the porous layer can both decrease the thickness of the layer and increase the porosity, both processes leading to a reduction in the effective optical thickness. Therefore, the shifts in the Fabry-Perot interference fringe pattern observed in the visible reflectance spectra and the wavelength of the rugate peak maximum can be used to measure and compare the stability of different porous Si samples. The effective optical thickness of porous silicon samples can be obtained in real time using a fast Fourier transform of the reflectance spectra [1,31]. One strategy to then compare the degradation of different porous Si samples in aqueous media involves calculating the relative change in effective optical thickness, defined as

ΔEOT/EOT0 (%) = (EOT − EOT0)/EOT0 × 100%    (2)

where EOT0 is the value of EOT measured when the porous Si surface is initially exposed to flowing buffer.

Figure 5 FFT of the visible reflectance spectrum obtained from pSi with (a) and without (b) a coating of chitosan.
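Both steps above can be sketched numerically: the EOT appears as the dominant Fourier component of the fringe pattern when the spectrum is expressed on an even wavenumber (1/wavelength) grid, and Equation 2 converts an EOT time series into a relative change. The synthetic fringe spectrum and the EOT values below are illustrative, not data from this study.

```python
import numpy as np

# --- EOT from Fabry-Perot fringes ------------------------------------
# Idealized fringes: reflectance oscillates as cos(2*pi*EOT*k), with
# k = 1/wavelength, so the FFT peak sits at the optical thickness 2nL.
eot_true = 60.0                                   # micrometers (2nL)
k = np.linspace(1 / 1.0, 1 / 0.4, 4096)           # 1/um grid (1000-400 nm)
spectrum = 1 + np.cos(2 * np.pi * eot_true * k)

amp = np.abs(np.fft.rfft(spectrum - spectrum.mean()))
eot_axis = np.fft.rfftfreq(k.size, d=k[1] - k[0])  # conjugate axis, in um
eot_measured = eot_axis[np.argmax(amp)]
print(round(eot_measured, 1))                      # 60.0

# --- Equation 2: relative change in EOT -------------------------------
def relative_eot_change(eot):
    eot = np.asarray(eot, dtype=float)
    return (eot - eot[0]) / eot[0] * 100.0         # percent

print(relative_eot_change([60.0, 59.4, 58.8]).tolist())  # [0.0, -1.0, -2.0]
```

In practice the measured spectrum must first be resampled onto an even wavenumber grid before the FFT, since spectrometers typically sample evenly in wavelength.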
The degradation of the pSi surface is then monitored by this relative decrease in optical thickness [32]. The degradation of the two porous Si sample types in the present study, as measured by EOT changes, is shown in Figure 6. The data indicate that the stability of these samples decreases in the sequence freshly etched porous Si > chitosan-coated pSi, since the initial rates of relative EOT change during the degradation are 0.217 and 0.37%/min, respectively. The degradation rate is higher for porous silicon coated by chitosan than for fresh pSi for the first 25 min, but there is a subsequent decrease in the degradation rate of the chitosan-coated sample, so that at later times it degrades more slowly than fresh porous silicon, with relative EOT changes of 0.066 and 0.108%/min, respectively. The increased rate of degradation for the chitosan-coated porous silicon sample is in apparent contrast to previously reported studies of chitosan-coated porous silicon; however, those studies used hydrosilylated porous silicon or oxidized porous silicon [5,23,24]. The increased degradation of pSi-ch compared even to freshly etched porous silicon may be due to the amines present in chitosan, since amines can increase the rate of porous silicon hydrolysis [33,34]. It also suggests that the chitosan layer contains cracks or fissures such that the aqueous solution readily infiltrates to the underlying fpSi layer. The stability of the pSi samples, as shown by the rates of change of the positions of the band maxima of the rugate reflectance bands during degradation, followed the same order as for the EOT measurements (freshly etched porous Si > chitosan-coated pSi), with the rates being 1.33 and 1.99 nm/min, respectively. The degradation of porous Si, typically monitored by reflection or transmission measurements using a spectrophotometer, can also be monitored using digital photography if the degradation results in a perceived color change.
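The initial rates quoted above (e.g., 0.217 and 0.37%/min) correspond to the early slope of the relative-EOT curves. A minimal sketch of such a rate estimate, using an invented idealized time series, is:

```python
import numpy as np

# Hypothetical early-time data: minutes vs. relative EOT change (percent).
t = np.arange(0, 30, 5, dtype=float)   # first 30 min, sampled every 5 min
d_eot = -0.37 * t                      # idealized linear decrease

# Initial degradation rate = slope of a first-order (linear) fit, in %/min.
rate = np.polyfit(t, d_eot, 1)[0]
print(round(rate, 2))                  # -0.37
```

With real, noisy data the same `np.polyfit` call over the early-time window yields a least-squares estimate of the initial rate.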
Since previous studies have reported that the H coordinate of the HSV color space can provide a robust single parameter that corresponds to changes in the position of the main band in a reflectance spectrum of an optical sensor [9,10], we investigated whether this H coordinate could be used to monitor the shifts in wavelength and intensity of the narrow rugate reflectance band as porous silicon degrades. We initially investigated calculating the H coordinate for the as-acquired images (Figures 7 and 8). As the porous silicon degradation process occurred, this H coordinate (hue) increased from ca. 0.033 to a maximum value of 0.18. These changes in the H coordinate values were manifested in a visible color change from red to green and a decrease and increase in the red and green channels of the images, respectively (Figure 7). Once all the pSi had dissolved, the mirror-like silicon wafer substrate was exposed. Reflection of the tungsten light source from this bare silicon surface was yellow as captured by the camera. This reflection from the substrate resulted in a reduction in the magnitude of the hue from ca. 0.18 to 0.11 at long times (at time >100 min) (Figure 8). Because of this non-monotonic behavior of the hue, we investigated other functions of the red, green, and blue channels that might provide a measure of degradation over the whole time of the reaction. We found that pre-processing the data by taking the average red channel value for each image and normalizing it using the minimum and maximum observed average red values during the degradation process, doing the same for the other two channels, and then applying Equation 1 to these normalized channels gave a suitable monotonic function (Figure 8). Since the value obtained does not correspond directly to the perceived color, we refer to it as the H parameter.

Figure 6 EOT changes observed during the degradation of the two porous Si sample types. Plots showing the relative change in the effective optical thickness (EOT) of the pSi samples as a function of time exposed to 1:1 (v/v) 0.5 M carbonate/borate buffer (pH 10)/ethanol at 20 ± 1°C.

As noted in the 'Background,' other authors have developed useful H parameters derived from HSV transformation of pre-processed data [11,12]. Our pre-processing is analogous to a combination of the background correction reported by Anderson and Baughn [11,12,14,15] followed by a white balance correction. The relative change in this H parameter was very similar to that for the hue over the first 100 min of the degradation reaction, but at longer times the H parameter continued to increase in a monotonic manner, in contrast to the behavior of the hue. Measurement of the color evolution using this H parameter confirmed the previously observed trend regarding the stability of the porous silicon samples towards degradation. We then used this H parameter to compare the degradation of the two porous silicon samples. Thus, Figure 9 shows a comparison of the normalized value ((H − Hinitial)/(Hmax − Hinitial)) for the fpSi and pSi-ch samples. The stability of the different silicon surfaces can be ranked by their initial rate of degradation, with the stabilities being in the order freshly etched porous Si > chitosan-coated pSi. By comparing the degradation kinetics of the porous silicon samples using normalized reflectance values (either rugate position or EOT) and normalized H parameter values, we conclude that it is possible to obtain semiquantitative information about porous silicon stability using color data.
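The per-channel normalization that yields the monotonic H parameter, and the (H − Hinitial)/(Hmax − Hinitial) scaling used for comparing samples, can be sketched as below. The four-image RGB series is invented for illustration (red falling, green rising, as for a red sample turning green); Python's standard `colorsys` reports hue on the same 0 to 1 scale used in the text.

```python
import colorsys
import numpy as np

def h_parameter(rgb_series):
    """Normalize each channel by its own min/max over the run, then take hue."""
    rgb = np.asarray(rgb_series, dtype=float)
    lo, hi = rgb.min(axis=0), rgb.max(axis=0)
    norm = (rgb - lo) / (hi - lo)                  # each channel -> [0, 1]
    return np.array([colorsys.rgb_to_hsv(r, g, b)[0] for r, g, b in norm])

def normalized_h(h):
    """Scaling used for sample comparison: (H - H_initial)/(H_max - H_initial)."""
    return (h - h[0]) / (h.max() - h[0])

# Illustrative image-averaged RGB values over four images:
series = [[200, 60, 40], [160, 110, 45], [120, 150, 50], [80, 190, 60]]
h = h_parameter(series)
print([round(v, 3) for v in h])    # [0.0, 0.054, 0.411, 0.5]
```

For this toy series the H parameter increases monotonically even though the raw hue of such images need not, which is the point of the normalization step.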
In contrast, using the hue of the as-acquired images to monitor complete degradation is limited due to the interfering effect of the reflection of the broad light source spectrum from the porous silicon, silicon substrate, and other surfaces within the light path. However, the use of a different light source with increased intensity in the blue-green regions of the spectrum compared to the lamp used may reduce this problem. The behavior of the hue parameter for porous rugate samples with the reflectance band at λ < 560 nm is also very dependent on the white balance value used during the image pre-processing step.

Figure 7 Plot showing the change in average RGB values from images of fpSi as it degrades.

Figure 8 Plot showing hue derived from as-acquired images and scaled H parameter derived from pre-processed RGB values. The H parameter has been scaled for this plot so that hue and the H parameter have the same numerical value at 100 min.

Figure 9 Evolution of the normalized H parameter during the first 300 min for fpSi and pSi-ch. The experimental conditions are as given for Figure 6.

Conclusions
We have demonstrated that the degradation of porous silicon in basic aqueous buffers can be monitored in situ by digital imaging with a consumer-grade digital camera and have validated this approach with simultaneous spectrophotometric measurement of the optical reflectance spectra. An approximately linear correlation between the wavelength of the maximum of the rugate reflectance band and an H parameter derived from the HSV color space was observed during the degradation process. A similar relationship was also noted between the H parameter and the effective optical thickness of the samples.
These results indicate that the samples were degrading via dissolution of the pore walls, rather than just dissolution from the top of the porous silicon layer downwards. The relative stabilities of the two porous silicon types obtained by the three measurement methods were consistent, indicating that all methods could be used to monitor relative sample degradation. However, whereas measurement of the rugate peak wavelength and effective optical thickness requires a spectrophotometer, the determination of the H parameter requires only a consumer-grade digital camera and standard software. Indeed, the H-parameter approach could be applied using a low-cost camera or the camera within a mobile phone or mobile computing device. This would then allow such measurements to be made outside the laboratory and at comparatively low cost. While this paper reports results from monitoring degradation of intact porous silicon films attached to a crystalline silicon substrate, a similar approach should be possible for monitoring particles of porous silicon. The potential use of color measurements to monitor both degradation and drug delivery from porous silicon micro-particles would require only simple cameras and illuminants and could even be coupled to use with smartphones.

Competing interests
MJS has financial ties to the following companies who may or may not benefit from the research presented here: Spinnaker Biosciences, TruTags, Pacific Integrated Energy, and Silicium Energy.

Authors' contributions
The study conception and design was carried out by MJS, MAA, and AN. The initial design of the image acquisition equipment was performed by GM, MAA, and MJS. MAA carried out the acquisition of the data. The analysis and interpretation of the data was performed by MAA, LFCV, and GM. The preparation of the manuscript was performed by LFCV, GM, MAA, and ASC. The critical revision was performed by GM and MJS. All authors read and approved the final manuscript.
Acknowledgements
We acknowledge financial support from the Ministerio de Educación y Ciencia (Spain), Dirección General de Enseñanza Superior (Spain) (CTQ2009-14428-C02-01), and Junta de Andalucía (Spain) (P10-FQM-5974). A.N. wishes to acknowledge the Fundación Alfonso Martín Escudero for a postdoctoral fellowship. This material is based upon work supported by the U.S. National Science Foundation under Grant No. DMR-1210417.

Author details
1 Department of Chemistry and Biochemistry, University of California, San Diego, 9500 Gilman Drive, La Jolla, California 92093-0358, USA. 2 Department of Analytical Chemistry, University of Granada, Faculty of Sciences, Avda. Fuentenueva s/n, Granada E-18071, Spain. 3 School of Chemical Sciences, The University of Auckland, Private Bag 92019, Auckland 1142, New Zealand.

Received: 22 May 2014 Accepted: 8 July 2014 Published: 21 August 2014

References
1. Sailor MJ: Porous Silicon in Practice: Preparation, Characterization and Applications. Weinheim, Germany: Wiley-VCH Verlag GmbH; 2012.
2. Cunin F, Schmedake TA, Link JR, Li YY, Koh J, Bhatia SN, Sailor MJ: Biomolecular screening with encoded porous-silicon photonic crystals. Nat Mater 2002, 1:39.
3. Li YY, Cunin F, Link JR, Gao T, Betts RE, Reiver SH, Chin V, Bhatia SN, Sailor MJ: Polymer replicas of photonic porous silicon for sensing and drug delivery applications. Science 2003, 299:2045.
4. Kilian KA, Lai LMH, Magenau A, Cartland S, Bocking T, Di Girolamo N, Gal M, Gaus K, Gooding JJ: Smart tissue culture: in situ monitoring of the activity of protease enzymes secreted from live cells using nanostructured photonic crystals. Nano Lett 2009, 9:2021.
5. Sciacca B, Secret E, Pace S, Gonzalez P, Geobaldo F, Quignard F, Cunin F: Chitosan-functionalized porous silicon optical transducer for the detection of carboxylic acid-containing drugs in water. J Mater Chem 2011, 21:2294.
6. Janshoff A, Dancil KP, Steinem C, Greiner DP, Lin VSY, Gurtner C, Motesharei K, Sailor MJ, Ghadiri MR: Macroporous p-type silicon Fabry-Perot layers: fabrication, characterization, and applications in biosensing. J Am Chem Soc 1998, 120:12108.
7. Maund B: Color. In Stanford Encyclopedia of Philosophy. Stanford: Metaphysics Research Lab, CSLI, Stanford University; 2006.
8. Wyszecki G, Stiles WS: Color Science: Concepts and Methods, Quantitative Data and Formulae. Denver, USA: Wiley Classics Library; 2000.
9. Cantrell K, Erenas MM, Orbe-Paya I, Capitán-Vallvey LF: Use of the hue parameter of the hue, saturation, value color space as a quantitative analytical parameter for bitonal optical sensors. Anal Chem 2010, 82:531.
10. Ariza-Avidad M, Cuellar MP, Salinas-Castillo A, Pegalajar MC, Vukovic J, Capitan-Vallvey LF: Feasibility of the use of disposable optical tongue based on neural networks for heavy metal identification and determination. Anal Chim Acta 2013, 783:56.
11. Anderson MR, Baughn JW: Liquid-crystal thermography: illumination spectral effects. Part 1 - experiments. J Heat Transfer 2005, 127:581–587.
12. Anderson MR, Baughn JW: Thermochromic liquid crystal thermography: illumination spectral effects. Part 2 - theory. J Heat Transfer 2005, 127:588–596.
13. Wiberg R, Lior N: Errors in thermochromic liquid crystal thermometry. Rev Sci Instrum 2004, 75:2985–2994.
14. Finlayson G, Schaefer G: Hue that is invariant to brightness and gamma. In Proc 12th British Machine Vision Conference; 2001:303–312.
15. van der Laak JAWM, Pahlplatz MMM, Hanselaar AGJM, de Wilde PCM: Hue-saturation-density (HSD) model for stain recognition in digital images from transmitted light microscopy. Cytometry 2000, 39:275–284.
16. Pacholski C, Sartor M, Sailor MJ, Cunin F, Miskelly GM: Biosensing using porous silicon double-layer interferometers: reflective interferometric Fourier transform spectroscopy. J Am Chem Soc 2005, 127:11636.
17. Rouquerol F, Rouquerol J, Sing K: Adsorption by Powders and Porous Solids. San Diego: Academic Press; 1999:191.
18. Smith AR: Color gamut transform pairs. In Proceedings of the 5th Annual Conference on Computer Graphics and Interactive Techniques; 1978:12.
19. Bisi O, Ossicini S, Pavesi L: Porous silicon: a quantum sponge structure for silicon based optoelectronics. Surf Sci Rep 2000, 38:1.
20. Mawhinney DB, Glass JA Jr, Yates JT: FTIR study of the oxidation of porous silicon. J Phys Chem B 1997, 101:1202.
21. Amato G, Delerue C, Von Bardeleben HJ: Structural and Optical Properties of Porous Silicon Nanostructures. Boca Raton: CRC Press; 1998:54.
22. Wu EC, Andrew JS, Cheng L, Freeman WR, Pearson L, Sailor MJ: Real-time monitoring of sustained drug release using the optical properties of porous silicon photonic crystal particles. Biomaterials 2011, 32:1957.
23. Wu J, Sailor MJ: Chitosan hydrogel-capped porous SiO2 as a pH responsive nano-valve for triggered release of insulin. Adv Funct Mater 2009, 19:733.
24. Pastor E, Matveeva E, Valle-Gallego A, Goycoolea FM, Garcia-Fuentes M: Protein delivery based on uncoated and chitosan-coated mesoporous silicon microparticles. Colloids Surf B 2011, 88:601.
25. Wu EC, Park JH, Park J, Segal E, Cunin F, Sailor MJ: Oxidation-triggered release of fluorescent molecules or drugs from mesoporous Si microparticles. ACS Nano 2008, 2:2401.
26. Lees IN, Lin H, Canaria CA, Gurtner C, Sailor MJ, Miskelly GM: Chemical stability of porous silicon surfaces electrochemically modified with functional alkyl species. Langmuir 2003, 19:9812.
27. Zangooie S, Bjorklund R, Arwin H: Protein adsorption in thermally oxidized porous silicon layers. Thin Solid Films 1998, 313–314:825.
28. Buriak JM: Organometallic chemistry on silicon and germanium surfaces. Chem Rev 2002, 102:1271.
29. Song JH, Sailor MJ: Reaction of photoluminescent porous silicon surfaces with lithium reagents to form silicon-carbon bound surface species. Inorg Chem 1999, 38:1498.
30. Fenzl C, Hirsch T, Wolfbeis OS: Photonic crystals for chemical sensing and biosensing. Angew Chem Int Ed 2014, 53:3318.
31. Letant SE, Sailor MJ: Detection of HF gas with a porous silicon interferometer. Adv Mater 2000, 12:355.
32. Tsang CK, Kelly TL, Sailor MJ, Li YY: Highly stable porous silicon-carbon composites as label-free optical biosensors. ACS Nano 2012, 6:10546.
33. Chandler-Henderson RR, Sweryda-Krawiec B, Coffer JL: Steric considerations in the amine-induced quenching of luminescent porous silicon. J Phys Chem 1995, 99:8851.
34. Sweryda-Krawiec B, Chandler-Henderson RR, Coffer JL, Rho YG, Pinizzotto RF: A comparison of porous silicon and silicon nanocrystallite photoluminescence quenching with amines. J Phys Chem 1996, 100:13776.

doi:10.1186/1556-276X-9-410
Cite this article as: Ariza-Avidad et al.: Monitoring of degradation of porous silicon photonic crystals using digital photography. Nanoscale Research Letters 2014, 9:410.
work_ycyo7be5ufbtvjlexnly32zr5q ----

Detecting new Buffel grass infestations in Australian arid lands: evaluation of methods using high-resolution multispectral imagery and aerial photography

V. M. Marshall & M. M. Lewis & B. Ostendorf

Received: 15 February 2013 / Accepted: 30 September 2013 / Published online: 14 November 2013
© The Author(s) 2013. This article is published with open access at Springerlink.com

Abstract
We assess the feasibility of using airborne imagery for Buffel grass detection in Australian arid lands and evaluate four commonly used image classification techniques (visual estimate, manual digitisation, unsupervised classification and normalised difference vegetation index (NDVI) thresholding) for their suitability to this purpose. Colour digital aerial photography captured at approximately 5 cm of ground sample distance (GSD) and four-band (visible-near-infrared) multispectral imagery (25 cm GSD) were acquired (14 February 2012) across overlapping subsets of our study site. In the field, Buffel grass projected cover estimates were collected for quadrats (10 m diameter), which were subsequently used to evaluate the four image classification techniques. Buffel grass was found to be widespread throughout our study site; it was particularly prevalent in riparian land systems and alluvial plains. On hill slopes, Buffel grass was often present in depressions, valleys and crevices of rock outcrops, but the spread appeared to be dependent on soil type and vegetation communities.
Visual cover estimates performed best (r2 = 0.39), and pixel-based classifiers (unsupervised classification and NDVI thresholding) performed worst (r2 = 0.21). Manual digitising consistently underrepresented Buffel grass cover compared with field- and image-based visual cover estimates; we did not find the labours of digitising rewarding. Our recommendation for regional documentation of new infestations of Buffel grass is to acquire ultra-high-resolution aerial photography, have a trained observer score cover against visual standards, and use the scored sites to interpolate density across the region.

Keywords: Remote sensing · Aerial photography · High resolution · Invasive species · Natural resource management · Cenchrus ciliaris · Pennisetum ciliare

Environ Monit Assess (2014) 186:1689–1703. DOI 10.1007/s10661-013-3486-7

V. M. Marshall (corresponding author), M. M. Lewis and B. Ostendorf: School of Earth and Environmental Science, The University of Adelaide, Glen Osmond, South Australia, Australia. e-mail: victoria.marshall@adelaide.edu.au

Introduction

Encroachment of invasive Buffel grass (Cenchrus ciliaris L., syn. Pennisetum ciliare) into arid and semi-arid ecosystems requires early detection if we are to have any hope of controlling its spread. Originally from Africa, this drought-hardy bunch grass was introduced into Australia and the Americas as rangeland pasture, where it remains an important resource (Brenner 2011; Smyth et al. 2009). Outside intended areas, it is a concern for natural resource managers because it accumulates dead matter, promoting fire in fire-intolerant systems, homogenising landscapes and threatening environmental and cultural values of infested areas (D'Antonio and Vitousek 1992; Miller et al. 2010). Comprehensive species distribution maps are invaluable to the containment of all invasive species (Stohlgren and Schnase 2006). Field-based mapping is only feasible over localised areas and is typically restricted by road access, to sites of particular significance, or to sites identified for strategic control (where an isolated occurrence is observed). These areas are mapped as a prelude to control within the area, and there is usually some prior knowledge of the distribution before localised mapping efforts are undertaken. In the remote desert landscapes of Australia, where Buffel grass thrives and is widespread, field-based mapping is inadequate; the alternative is a remote sensing approach.

Remote sensing has proven an effective tool for community-level vegetation mapping and monitoring (Brink and Eva 2009; Ramakrishna and Steven 1996; Mehner 2004). Discrimination of individual plant species is more difficult, owing to the complexity of species intermixing with surrounding vegetation and spectral variability within individual species. Remote sensing approaches to species-level plant mapping have been most successful when the target species possesses distinctive spectra, has a large structure or grows in large stands relative to the spatial resolution of the imagery, shows vigorous population growth, and when the phenological stages of growth are taken into account during spectral signature collection (Jia et al. 2011; Wang et al. 2008; Andrew and Ustin 2008; Blumenthal et al. 2009; Hestir et al. 2008; Ustin et al. 2002; Padalia et al. 2013; Ge et al. 2006). This presents several challenges for the remote detection of Buffel grass because most grasses are spectrally similar; the size of stands is variable and with unknown limits; vigorous growth is a response to rainfall and is not strictly seasonal; and there is a degree of intra-species variation. Nonetheless, in the Sonoran Desert of Mexico and the USA, several studies demonstrate success in remote detection of Buffel grass (Brenner et al. 2012; Franklin et al. 2006; Olsson et al. 2011).
These studies primarily utilised moderate-resolution satellite imagery, useful for monitoring large established infestations or pastures. Olsson et al. (2011) were able to distinguish Buffel grass in heterogeneous mixed desert scrub with the greatest success by integrating hyperspectral measurements of plant spectra collected in the field. However, these approaches are not necessarily useful for detecting new infestations as an alternative to field work. For remote detection of emerging infestations, individual tussocks, less than 0.5 m in diameter, must be definable. This requires a high spatial resolution, with a ground sample distance (GSD) below 25 cm, i.e. half the smallest unit to be classified (Myint et al. 2011).

An advantage of using aerial imagery is that acquisition timing is extremely flexible. This is critical when working with grasses that rapidly green up in response to rainfall, and almost as quickly dry off or burn to ash, leaving a very small window of time in which imagery can be captured. There are, however, many challenges associated with using aerial imagery: the limited spatial coverage (footprint) of image scenes, image quality being strongly weather-dependent, and the spatial coverage needing to be tailored to a specific project (Gergel et al. 2010). Further challenges include data management, processing time and cost. These factors, combined with uncertainty regarding the accuracy of classifications, mean that aerial imagery is underutilised by natural resource managers.

Approaches to aerial image classification vary in accuracy, consistency, time consumption and required producer expertise. Visual interpretation can be highly accurate; it requires minimal image preparation and uses human knowledge to make logical decisions (Gergel et al. 2010). This method can be documented either by digitising infestations (Olsson et al. 2012) or by using visual standards to categorically record species cover at selected locations across the study site (Puckey et al. 2007). Digitising is extremely laborious but less subjective than visual cover standards (Gergel et al. 2010). Pixel-based classifications are semi-automated, systematic, repeatable and require less interpretation time. However, they rely solely on spectral separation of the target species from surrounding land cover, which is complex in the case of this variable grass. Pixel-based classification is also, in some ways, less suited to analyses at high spatial resolutions because of the spectral diversity within the tussocks. For example, the sunlit side of a tussock may present a different spectral category to the shadowed side, or dry foliage a different category from green foliage, resulting in a speckled "salt-and-pepper" effect (Myint et al. 2011). More recently, object-based classifiers have been developed, which, like pixel-based classifiers, are systematic, consistent and repeatable, but better mimic human perception of objects (Walter 2004; Laliberte et al. 2004; Yu et al. 2006; Meneguzzo et al. 2012). Object-based classification algorithms are not yet well developed and require expert production.

Our goal is to explore the potential of aerial imagery for detection of Buffel grass populations in the Australian desert country. Specifically, we examine ultra-high-resolution (5–6 cm GSD) colour digital aerial photography and 25 cm GSD, four-band (visible–near-infrared (NIR)) multispectral imagery. We compare four different yet common classification approaches—visual cover estimates, manual digitisation, unsupervised classification and Normalised Difference Vegetation Index (NDVI) thresholds—and assess each for their suitability to Buffel grass discrimination.
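The resolution rule cited above (GSD at most half the smallest unit to be classified, after Myint et al. 2011) is simple arithmetic, but it drives the choice of imagery; a minimal sketch, with a function name of our own for illustration:

```python
def max_gsd_cm(smallest_object_cm):
    """Coarsest usable ground sample distance: the GSD should be at
    most half the smallest unit to be classified (Myint et al. 2011)."""
    return smallest_object_cm / 2.0

# An emerging Buffel grass tussock under 0.5 m (50 cm) across
# therefore requires imagery with a GSD of 25 cm or finer.
print(max_gsd_cm(50))  # -> 25.0
```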
The research was conducted with the long-term aim of developing a method for early detection of Buffel grass in remote arid landscapes that could be used by natural resource managers.

Methods and materials

Focal species: Buffel grass (C. ciliaris L., syn. P. ciliare)

Buffel grass is a perennial, summer-growing (C4) African bunch grass (Sharif-Zadeh and Murdoch 2001). It reproduces via seed and rhizomes and, as a result, can be seen in the landscape as both lone tussocks and dense monocultures. It does not drop its leaves; they accumulate at the base of the plant, often forming a ring of dry foliage around the tussock. The grass is spread by wind, water and traffic. In arid environments of Australia, where this study is based, it is typically found at highest density in riparian environments, depressions and wherever soils are disturbed, including roadsides, construction sites and fire beds (Marshall et al. 2012). The plant responds rapidly to rain and often emerges before native grasses. It is also quick to dry off and burn. The window for image capture of growing plants is brief; in Australia, we consider it usually restricted to about a month after the first summer rains.

Study area

Located in the remote far north-west corner of South Australia, the study site occupies 15 × 12 km of the Aboriginal-owned Anangu Pitjantjatjara Yankunytjatjara (APY) lands. The site encompasses two indigenous communities—Kalka (26°7′11.50″S, 129°8′59.04″E) and Pipalyatjara (26°9′37.45″S, 129°10′20.64″E) (Fig. 1)—with a combined population of less than 350. The climate is arid, with hot summers, mild winters and annual rainfall below 300 mm. Elevation ranges from 650 to 900 m. Plains comprise alluvial and fluvial sediments, vegetated by Aristida grasslands, sparsely distributed low shrubs and Hakea trees. These grasslands are increasingly dominated by Buffel grass. The Tomkinson Ranges (Fig. 1) comprise mafic rock dominated by Spinifex hummock grasses; ranges in the north-west of the study site (Fig. 1) comprise felsic rock dominated by Enneapogon sp. grasses. Buffel grass was introduced by direct seeding around Kalka in October 1987, along with Cenchrus setigerus and the native drought-tolerant shrubs Atriplex nummularia, Acacia kempeana and Acacia ligulata, to combat dust storms on the alluvial flats; dust became a problem after an uncontrolled wildfire burnt a substantial area near the settlements, drought followed, and vegetation never regenerated. As a result of the direct seeding in 1987, this region is now largely dominated by Buffel grass.

Imagery

Colour digital photography and four-band (visible to NIR) multispectral images were obtained over the study area. Image specifications are given in Table 1. The imagery was acquired on 14 February 2012 between 1134 and 1430 hours. The multispectral imagery was flown after the aerial photography, from 1352 hours; consequently, shadow effects vary between the images. Conditions at the time of image capture were slightly hazy, with less than 1 % high cirrus cloud cover. Buffel grass was approximately 50 % dried off on the day of image capture. The aerial photography was acquired for a grid of 3 × 3 transects across the study site (north–south transects approx. 17 km; east–west transects approx. 12 km; spaced 5 km apart) (Fig. 1). Transects were positioned to capture the diversity of vegetation and geological settings while avoiding high elevations that are potentially dangerous for aerial navigation (Fig. 1). Photography was received as 930 un-georeferenced frames in TIFF format. To save time georeferencing these frames individually, three to five frames were stitched together in the automated image-matching program Microsoft Image Composite Editor (ICE). Image frames were exported from ICE as JPEG files, georeferenced in ArcGIS and saved as raster data using the minimum cell size for the image.
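The georeferencing itself was performed interactively in ArcGIS. As a sketch of what that step computes, the snippet below fits a six-parameter affine transform (pixel column/row to map easting/northing) exactly through three ground control points using Cramer's rule; the GCP coordinates are invented for illustration and are not from the study:

```python
def affine_from_gcps(gcps):
    """Fit x = a*col + b*row + c and y = d*col + e*row + f exactly
    through three ground control points ((col, row), (x, y))."""
    (c1, r1), (x1, y1) = gcps[0]
    (c2, r2), (x2, y2) = gcps[1]
    (c3, r3), (x3, y3) = gcps[2]
    det = c1 * (r2 - r3) - r1 * (c2 - c3) + (c2 * r3 - c3 * r2)

    def solve(v1, v2, v3):
        # Cramer's rule: determinant with one column replaced by targets
        da = v1 * (r2 - r3) - r1 * (v2 - v3) + (v2 * r3 - v3 * r2)
        db = c1 * (v2 - v3) - v1 * (c2 - c3) + (c2 * v3 - c3 * v2)
        dc = (c1 * (r2 * v3 - r3 * v2) - r1 * (c2 * v3 - c3 * v2)
              + v1 * (c2 * r3 - c3 * r2))
        return (da / det, db / det, dc / det)

    return solve(x1, x2, x3), solve(y1, y2, y3)

# Hypothetical GCPs for a 5-cm-pixel frame (easting/northing in metres):
gcps = [((0, 0), (600000.0, 7100000.0)),
        ((1000, 0), (600050.0, 7100000.0)),
        ((0, 1000), (600000.0, 7099950.0))]
(a, b, c), (d, e, f) = affine_from_gcps(gcps)
# a = 0.05 m/pixel eastward; e = -0.05 (northing decreases with row)
```

In practice a least-squares fit over many GCPs would be used; three points is the minimum that determines the transform.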
These raster files were used for all subsequent analyses. The four-band imagery, collected using the Spec Terra multispectral sensor, was acquired for three smaller areas, in highly diverse local environments, overlapping the aerial photography flight paths (Fig. 2). The multispectral data was delivered corrected for radiometric and geometric artefacts, as orthorectified and georegistered mosaics in TIFF format. All image analysis was carried out in the Geocentric Datum of Australia 1994, projected to UTM zone 52.

Table 1  Image specifications for aerial imagery captured in February 2012 over the Kalka–Pip Homelands, Australia

  Digital aerial photo: Nikon D3X digital camera; flying altitude 305 m; footprint ~240 × 360 m per frame; ground sample distance (GSD) 5–6 cm; spectral resolution: visible, 3 bands.
  Airborne multispectral: Spec Terra digital multispectral sensor; flying altitude 1,067 m; footprint variable; GSD (pixel size) 25 cm; spectral resolution: visible–NIR, 4 bands (450±10 nm FWHM, blue; 550±10 nm FWHM, green; 675±10 nm FWHM, red; 780±10 nm FWHM, near-infrared).

Ground validation sites

Field work was conducted from 7 to 12 February 2012. Selection of sites for ground validation was governed by in situ interpretation of environmental units, such as vegetation structure, soil colour and land use, aided by a 2007 ALOS colour mosaic of the region (2.5 m GSD). The goal was to represent the diversity of landscapes in which Buffel grass was present or absent, and at varying densities. In total, 95 field sites were documented. Within these circular sites (10 m in diameter), projected cover (the vertical projection of plant foliage onto a horizontal surface) was estimated for Buffel grass and for land cover units categorised as "herbs and forbs", "other grasses", "woody", "leaf litter" and "soil". Cover for each cover type was recorded in discrete classes: absent, 0 %; low, 0–25 %; moderate–low, 25–55 %; moderate–high, 55–85 %; and high, 85–100 % (Fig. 3). The centre point of each ground validation site was recorded using a Garmin eTrex High Sensitivity hand-held global positioning system receiver, which achieved a spatial accuracy of approximately 2–5 m.

For remote sensing analysis, the ground validation sites were co-registered to the aerial photography and separately to the multispectral imagery, using the GPS coordinates recorded in the field and personal knowledge of the sites. Of the 95 sites, 18 lay outside the imagery coverage. A further 41 were not used because of image quality (which diminished over hilly terrain), obstruction from trees or insufficient geographical information to accurately place the site. Of the remaining sites, 43 lay within the coverage of the multispectral image scenes.

Fig. 1  Study site and flight path design. Left: flight paths. Right: field sample sites, terrain contours in which the light aircraft was navigating, and the image scenes of high quality which overlapped our field sites. These maps were prepared in the Geocentric Datum of Australia 1994.

Fig. 2  Coverage of four-band Spec Terra imagery for aerial survey of Buffel grass in the Kalka–Pipalyatjara homelands of central Australia. Three Spec Terra panels were acquired in the west, east and south of the study area. These panels were designed to overlap the flight path for aerial photography collection and represent the varied landscape. The Spec Terra data is presented with a natural-colour display. The underlying image on all three maps is ALOS 2.5 m GSD pan-sharpened imagery, 2005–2007 composite, natural-colour display. These maps are prepared in the Geocentric Datum of Australia 1994.
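The discrete cover classes used in the field survey (and, later, the 0–4 scores used for image interpretation) amount to a simple binning of percent cover. A minimal sketch; the boundary handling is our own choice, since the paper does not state how boundary values were resolved:

```python
def cover_rank(percent_cover):
    """Map projected cover (%) to the survey's discrete classes:
    0 = absent (0 %), 1 = low (0-25 %), 2 = moderate-low (25-55 %),
    3 = moderate-high (55-85 %), 4 = high (85-100 %).
    Boundary values are assigned to the lower class here (an
    assumption; the paper does not specify)."""
    if percent_cover <= 0:
        return 0
    for rank, upper in ((1, 25), (2, 55), (3, 85), (4, 100)):
        if percent_cover <= upper:
            return rank
    raise ValueError("cover must be within 0-100 %")
```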
Ultimately, a total of 53 and 43 sites were used for interpretation and classification of the aerial colour photography and the four-band multispectral imagery, respectively.

Aerial photography image classification

We evaluated three commonly used image classification techniques for discriminating and quantifying Buffel grass in the aerial photography: visual cover estimates, manual digitisation and a pixel-based unsupervised classification. Classifications were run separately for each field site (53 sites × 3 approaches).

Visual cover estimates for each ground validation site were scored using the same cover classes employed in the field survey. For consistency, sites were viewed at a scale of 1:125 to make the estimates. Visual standards (Fig. 4) also aided in making observations consistent.

For the manual digitisation method, individual Buffel grass plants or clumps within the 10-m diameter circular plots were digitised from the imagery at a display scale of 1:125. The digitiser did not alter the viewing scale in order to circle plants more precisely. The total digitised area of Buffel grass for each site was then tabulated.

For the pixel-based assessment, an unsupervised classification was performed on the imagery at each site. A circular area 30 m in diameter, centred on the ground validation site, was used to run the classification; this allowed a "Buffel grass" class to be formed even if Buffel grass was not present within the more tightly prescribed sample site. The classification was performed using the Iso Cluster Unsupervised Classification tool in ArcGIS 10 Spatial Analyst. The number of classes was set to 20; the classes most representative of Buffel grass were then manually aggregated on the basis of visual examination. The aggregation process is producer-directed, allowing some flexibility in the number of classes selected as representative of Buffel grass.
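The unsupervised step above used the Iso Cluster tool in ArcGIS; the same logic (cluster pixels without labels, then let the producer merge the clusters judged to represent Buffel grass) can be sketched with a plain k-means. The toy pixel values, the use of three classes rather than 20, and the cluster picked as "Buffel" are all illustrative only:

```python
def kmeans_1d(values, k, iters=20):
    """Minimal 1-D k-means with deterministic, spread-out initial centres."""
    srt = sorted(values)
    centres = [srt[round(i * (len(srt) - 1) / (k - 1))] for i in range(k)]
    labels = [0] * len(values)
    for _ in range(iters):
        # assign each pixel to its nearest centre
        labels = [min(range(k), key=lambda j: abs(v - centres[j]))
                  for v in values]
        # recompute centres (keep the old centre if a cluster empties)
        for j in range(k):
            members = [v for v, lab in zip(values, labels) if lab == j]
            if members:
                centres[j] = sum(members) / len(members)
    return labels

# Toy single-band "pixels" around a validation site (invented values):
pixels = [12, 14, 13, 55, 58, 60, 95, 97, 96, 57, 11, 94]
labels = kmeans_1d(pixels, k=3)

# Producer-directed aggregation: the clusters judged (visually, in the
# study) to represent Buffel grass are merged into one class.
buffel_clusters = {labels[3]}   # hypothetical choice: the mid-tone cluster
buffel_area_px = sum(lab in buffel_clusters for lab in labels)
```

On this toy data the three brightness groups separate cleanly, and the aggregated "Buffel" class covers the four mid-tone pixels.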
The total area classified as Buffel grass for each site was then tabulated.

Four-band imagery classification

To exploit the additional spectral information in the multispectral imagery, the NDVI was applied. In the desert environment in which our study is situated, Buffel grass is often the "greenest" vegetation in the understorey during the summer months (December–February). Hence, the NDVI is a suitable index to identify cover type. The NDVI output was visually compared with the higher-resolution aerial photography to identify an NDVI threshold that best represented Buffel grass cover. The total area classified as Buffel grass was then calculated for each site.

Fig. 3  Buffel grass as observed in the field at low 0–25 % (a), low–moderate 25–55 % (b), moderate–high 55–85 % (c) and high >85 % (d) projected cover. The dominant grasses in panel a are Aristida sp.

Comparing classifications

To explore differences between each of the cover estimation approaches (visual cover estimates, manual digitisation, unsupervised classification, NDVI thresholds), classification results for selected sites were viewed concurrently, and disparities were described. The effectiveness of each approach in quantifying Buffel grass cover was then examined using regression analyses. The four image classifiers were compared not only with the field-based estimates but also with each other—to compare like with like. This is important because, whilst Buffel grass presence–absence is best interpreted from field results, field cover estimates are also subjective and not necessarily more correct than the image-based estimates. The strength of each relationship was interpreted using Pearson's r-squared.

Results

Buffel grass in the landscape

Buffel grass was observed to be widespread throughout our study site. It is particularly prevalent in riparian land systems (Fig. 5, panel f) and alluvial plains (Fig. 5, panel a).
In Buffel grass monocultures on the plains, comprised of undifferentiated alluvial and fluvial soils, Buffel grass tussocks are typically encircled by a ring of bare soil (Fig. 5, panel b), which is not seen in this land system in patches of native grasses (e.g. Aristida, Enneapogon). In the Tomkinson Ranges, Spinifex is dominant on the hill slopes (Fig. 5, panel c), but in the depressions, valleys and crevices of rock outcrops, Buffel grass is frequently observed. This is also true for the ranges directly north of Kalka (Fig. 1). Similarly, on calcareous flats, where Spinifex dominates with minor components of Compositae and Ptilotus sp. (Fig. 5, panel d), Buffel grass was observed in micro-depressions over 0.5 km away from any roadsides. On hills in the north-west of the study area, Enneapogon sp. and Buffel grass often co-dominate (Fig. 5, panel e). These key land systems within the study area are represented in the panel of photographs presented in Fig. 5.

Projected cover at each site, as recorded in the field, illustrates the diversity of ground cover within which Buffel grass occurs (Fig. 6). There is a general trend that as Buffel grass increases, other grasses decrease; this is evident in Fig. 6. The figure also shows that the "soil" cover type is present at all Buffel grass sites. This is consistent with field observations that, in Buffel grass monocultures, the tussocks were typically surrounded by bare soil.

Fig. 4  Buffel grass as observed on the 5-cm GSD colour digital photography at low 0–25 % (a), low–moderate 25–55 % (b), moderate–high 55–85 % (c) and high >85 % (d) projected cover. The dominant grasses in panel a are Aristida sp.

Fig. 5  Examples of the dominant landscapes within our study area. Buffel grass-dominated alluvial plain (a); mature Buffel grass tussock on alluvial plain, with bare soil surrounding it (b); Spinifex-dominated hill slope (c); Spinifex-dominated calcareous rubble plain (d); Enneapogon intermixed with Buffel grass on ridge top (e); Buffel grass-dominated drainage line (f). Photographs captured in February 2012.

The imagery

The capacity of colour digital photography (5–6 cm GSD) and four-band Spec Terra multispectral imagery (25 cm GSD) for use in detecting Buffel grass was explored. In the aerial photography, it is possible to identify Buffel grass plants as small as 0.3 m in diameter. Larger, more mature plants, ranging 0.8–2 m in diameter, are easily discriminated. For large plants, the mixture of dry and green leaves, as well as the shadows within the tussocks, is visible. This creates a texture which, in this landscape, is quite unique to Buffel grass. In the four-band imagery, much of the internal texture of the plants is lost. Smaller plants, less than half a metre in diameter, are difficult to positively identify. Larger tussocks can be discriminated, but out of context, based on spatial information alone, they appear very similar to low shrubs. The near-infrared band was useful for discriminating Buffel grass from native Spinifex where spatial information was inadequate. However, the NIR band was inadequate for discriminating Buffel grass amongst native bunch grasses such as Silky brown top (Eulalia aurea), Silky blue grass (Dichanthium sericeum), Windmill grass (Chloris sp.) and Barley Mitchell grass (Astrebla pectinata).

Comparing projected cover estimates

Four approaches to estimating Buffel grass cover on aerial imagery were trialled: visual cover estimates, manual digitisation, an unsupervised pixel-based classification and NDVI thresholds.
Figure 7 illustrates the results of the four classification methods for three ground validation sites, representative of the varying vegetation and geological settings in which we attempt to discriminate Buffel grass. Site 30 (Fig. 7, row 1) shows a monoculture of Buffel grass on red sand, typical of Buffel grass-dominated alluvial plains. Site 75 (Fig. 7, row 2) represents a transition zone from Buffel grass on alluvial plains to Spinifex hummock grass on a rocky hill slope. At this site, Buffel grass is intermixed with Aristida sp. and Compositae. Site 6 (Fig. 7, row 3) shows Buffel grass growing at high density along a dry creek bank, intermixed with low densities of other grasses such as Themeda sp. and E. aurea, which is typical of Buffel grass in creek lines.

Fig. 6  Projected cover and composition of 54 field sites in the Kalka–Pipalyatjara region of far north-west South Australia. [Stacked bar chart; y-axis: proportion of projected cover (0.0–1.0); x-axis: site number.] Cover types were categorically recorded as Buffel grass, soil, woody vegetation, herbs and forbs, and other grasses. Projected cover was categorically recorded for each cover type as "0–25 %", "25–55 %", "55–85 %" and ">85 %". The graph shows the proportion of projected cover represented by each cover type for each field site.

These examples highlight some of the strengths and weaknesses of each classification method. Field and image estimate scores at sites 6 and 75 are equal, but at site 30, in the Buffel grass monoculture, the field estimate is one point higher than the image estimate.
Considering manual digitisation, boundaries are difficult to define where canopies are touching, as evident at site 75, where a grassy mass has been categorised as Buffel grass. Boundaries are also difficult to digitise where the plants are too small; this may be the case at site 6, where some newly emerging grasses are not circled. The unsupervised classification has a salt-and-pepper effect caused by spectral differences between green and dry foliage within the tussocks and by the similarity of Buffel grass to surrounding vegetation. It does not capture the entirety of the projected cover of the grass tussocks as discrete objects. This is evident at all three example sites, but particularly at site 75, where patches of Spinifex are classified as Buffel grass. However, when the sum of the classified area is averaged out across the site, the estimates are more comparable to field-based scores. Projected cover estimates based on NDVI thresholds, using the 25-cm GSD multispectral imagery, substantially underrepresent Buffel grass.

Fig. 7  Buffel grass projected cover (yellow) as estimated in the field and on the imagery using visual cover ranking, manual digitisation of plants, unsupervised classification and NDVI thresholding, for three selected sites (field sites 30, 75 and 6). Columns 1–3 contain the colour aerial photography (~5 cm GSD) and display estimates based on this imagery. Column 4 contains the four-band multispectral imagery (25 cm GSD), natural-colour display, and the cover estimates based on the NDVI threshold. Field/image estimate scores represent cover as "absence" (0 %) = 0, "low" (0–25 %) = 1, "moderate–low" (25–55 %) = 2, "moderate–high" (55–85 %) = 3 and "high" (>85 %) = 4.

The relationship between each classification approach was assessed using regression analyses. Every combination of classification approaches was compared, totalling 10 separate regressions, displayed in Fig. 8.
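The figure of 10 regressions arises because five cover estimates (field plus four image-based methods) are compared in every pairwise combination, C(5,2) = 10. A sketch with invented rank scores (not the study's data):

```python
from itertools import combinations

def pearson_r2(xs, ys):
    """Squared Pearson correlation between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return (sxy * sxy) / (sxx * syy)

# Cover rankings (0-4) per site for each method; the values below are
# placeholders for illustration, not the published data.
scores = {
    "field":        [0, 1, 2, 3, 4, 2, 1, 0],
    "visual":       [0, 1, 2, 2, 3, 2, 1, 1],
    "digitised":    [0, 1, 1, 2, 3, 1, 1, 0],
    "unsupervised": [1, 1, 2, 2, 3, 2, 2, 1],
    "ndvi":         [0, 0, 1, 2, 2, 1, 0, 0],
}

pairs = list(combinations(scores, 2))   # 5 methods -> 10 regressions
r2 = {(a, b): pearson_r2(scores[a], scores[b]) for a, b in pairs}
```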
The strength of each relationship was interpreted using Pearson's r-squared. r-squared values based on the four-band imagery (NDVI thresholds) are not statistically comparable with the photography-based analyses because the n values differ. However, the trends are comparable, and for this reason, we have presented all the regressions together. Where Buffel grass was present (field cover ranking = 1–4), cover was comparatively lower in all image-based classification approaches than in field-based estimates. Where it was absent (field cover ranking = 0), the visual cover ranking and the unsupervised classification were comparatively higher than field estimates, while manual digitising and NDVI thresholds were consistent. Image visual cover rankings correlate strongly (r2 = 0.66) with, but are consistently lower than, the corresponding manually digitised areas. Pixel-based methods (NDVI and unsupervised classification) tend to overrepresent Buffel grass absence and underrepresent presence. Of all aerial photography image-based classifications, the visual cover ranking correlates best with results collected in the field (r2 = 0.39). Manually digitised area consistently underrepresented Buffel grass compared with other methods; it returned a highly variable r2, ranging 0.26–0.66. The high-end r2 (0.66) relates to its correlation with image visual cover rankings.

Fig. 8  Relationship between each method for estimating Buffel grass projected cover on the imagery (visual cover classes, manual digitising, unsupervised classification and NDVI threshold) and relative to field-based estimates. The strength of those relationships is represented by Pearson's r-squared, presented on each graph. Projected cover rankings 0–4 represent "absence" (0 %), "low" (0–25 %), "moderate–low" (25–55 %), "moderate–high" (55–85 %) and "high" (>85 %) cover, respectively.
The unsupervised classification correlated moderately well across the board (r2 ranging 0.21–0.36). The NDVI thresholding approach used to classify the four-band multispectral imagery returned low r2 values, ranging 0.13–0.27, and underrepresented Buffel grass projected cover across the board.

Discussion

Buffel grass is an invasive tussock grass, widespread in arid and semi-arid ecosystems of Australia and the Americas, which homogenises landscapes, presents a tremendous fire hazard and threatens environmental and cultural values of infested regions. New infestations are where control efforts need to be focused. For control to be successful, these emerging infestations need to be detected with improved efficiency. We explored the potential of airborne imagery (ultra-high-resolution colour digital photography and four-band visible–NIR) for detection of emerging Buffel grass populations in Australian arid lands. We also compared common image classification techniques (visual estimates, manual digitisation, unsupervised digital classification and NDVI thresholding) for their suitability for discriminating Buffel grass.

Buffel grass in Kalka–Pipalyatjara

Buffel grass was observed to be widespread throughout our study site in the Kalka–Pipalyatjara region. Alluvial flats, once carrying a diversity of Compositae and Solanaceae members, as well as Aristida and Enneapogon species, are now Buffel grass-dominated. This is consistent with our breakdown of projected cover at each field site, which showed a general trend of decreasing "other grasses" relative to increasing Buffel grass. In Buffel grass monocultures on this soil type, individual tussocks are typically encircled by a ring of bare soil. We speculate that this relates to competition for water, preventing establishment of other grasses.
On calcareous flats, where Spinifex dominates, Buffel grass was observed in micro-depressions over 0.5 km away from any roadsides, which are often a point of establishment for this invasive species (Lonsdale 1999; Van Devender and Dimmitt 2006). Similarly, on Spinifex-dominated hills, Buffel grass occurs in the depressions, valleys and the crevices of rock outcrops. On hills in the north-west of this study area, Enneapogon sp. and Buffel grass often co-dominate, although Buffel grass has higher coverage in the creek lines through these hills. In the valley, captured by the south multispectral image, Buffel grass dominates; it is intermixed with native species at the heart of the water course, dominates with bare ground closer to the hills and gives way to Spinifex on the slopes. The imagery Colour digital photography (5–6 cm GSD) and four- band Spec Terra multispectral imagery (25 cm GSD) were compared for their capacity to discriminate Buffel grass, at the individual tussock level, in an open arid landscape. The 5-cm GSD aerial photography was excellent for visually identifying moderate to large Buffel grass tussocks. The structural detail within the grass tussocks is only visible at the ultra-high resolution. This textural feature in the imagery made visual cover estimates easier. The GSD of 25 cm was too coarse to reveal this distinctive texture, and out of context, large tussocks could be misidentified as small shrubs. Small tussocks were indistinct to the human eye at this resolution. Discriminating Buffel grass is most challenging when the tussocks are densely compacted, with canopies touching or when it is tightly intermixed with other species. One of the reasons for this is that the most distinctive feature of Buffel grass, for the human eye to detect, is its form. It appears as a highly textured unit, in a tight envelope of dead leaf litter, and situated in a ring of bare ground. 
In fact, where a Buffel grass infestation has expanded to as few as three to four mature (>1 m diameter) plants, this texture can be seen even when viewing an image frame at its full extent. When these elements cannot be used to identify the grass, even at this very high spatial resolution, greater spectral resolution is needed. In the NDVI thresholding classifications, we did not find the NIR band particularly helpful for distinguishing Buffel grass from other bunch grasses such as Barley Mitchell grass (Astrebla sp.) and Silky-brown top (E. aurea). The NIR band may have proved more useful had the grass been at its greenest; Buffel grass was approximately 50 % dry and 50 % green at the time of image capture. Timing dependence is a weakness of classifications reliant on the NIR band. In this case, the added spectral band does not compensate for the coarser GSD.

Classification approaches

Four approaches to estimating Buffel grass cover on aerial imagery were trialled: visual cover estimates, manual digitisation, an unsupervised pixel-based classification and NDVI thresholds. Differences between the classification outputs were compared visually, and the relationship between each classification approach was assessed using regression analyses with Pearson's r-squared. When compared to the field estimates of cover, visual cover ranking was the best-performing image-based classifier (r2 = 0.39). While perhaps the most subjective method, it is excellent for rapid assessment by a trained image interpreter. Its strength lies in the interpreter's ability to score sites rapidly and to adjust interpretation according to context: image quality, vegetation condition and landscape position. Manual digitisation of Buffel grass infestations was extremely laborious and consistently underrepresented Buffel grass projected cover compared with all other estimation methods.
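The r2 agreement figures quoted here are ordinary squared Pearson correlations between paired cover estimates for the field sites. A minimal sketch of that computation in Python (the function name and the sample cover values are illustrative assumptions, not data from the study):

```python
def pearson_r2(x, y):
    """Squared Pearson correlation between two paired cover series."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    sxy = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sxx = sum((a - mean_x) ** 2 for a in x)
    syy = sum((b - mean_y) ** 2 for b in y)
    return (sxy * sxy) / (sxx * syy)

# Illustrative only: field-measured cover vs. image-based cover (five sites)
field = [5, 10, 20, 40, 60]
visual = [4, 12, 18, 35, 55]
print(round(pearson_r2(field, visual), 2))  # prints 0.99
```

Because r2 measures only linear association, a classifier can score well here while still systematically under- or overestimating absolute cover, which is consistent with the biases noted for the digitisation and thresholding methods.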
Image quality is paramount to success in digitising, because even slight image blur makes it extremely challenging for the digitiser to identify boundaries. Furthermore, at the individual plant level, boundaries are particularly difficult to define for small plants and for plants with canopies touching. Although it may be beneficial for natural resource managers to have distribution information digitised, this methodology is still subjective. The unsupervised pixel-based classification has more potential than the r2 of 0.21 suggested. It typically underestimated cover; however, it could just as easily have overestimated cover had more "dry grass" classes been included in the producer's classification of Buffel grass. The method is reliable, systematic and repeatable, but the process of aggregating classes representative of Buffel grass was time-consuming when repeated for every field site. The method would be more feasible if the unsupervised classification could be applied to an entire image frame, but with aerial photography this is challenging. Variable sun angle on the camera, resulting from aircraft tilt as it navigates topography and weather at very low altitudes (305 m, in this case), causes hot-spot effects, or overexposure on sun-side edges of the imagery. This results in spectral variation of the same land cover types across the image frame. NDVI thresholds had the potential to be a strong indicator of Buffel grass cover in this landscape. The methodology is systematic, reliable, repeatable and rapid, because it can be carried out for the entire image frame. We hypothesised that a high NDVI threshold would exclude native grasses and isolate Buffel grass, which is highly photosynthetically active following summer rains. However, at the time of image capture in this study, Buffel grass had already begun to dry out, and tussocks were only about 50 % green.
At this time, woody vegetation had, on average, a higher NDVI than the understorey, and the NDVI values for Buffel grass were not substantially different from those of surrounding grasses. We chose to set a high NDVI threshold, which underrepresented Buffel grass, rather than a lower threshold, which would have captured all green vegetation. Timing dependence is a weakness of this classification method, and given the fickle nature of the species' lifecycle, it is not recommended for this scale of mapping.

Conclusions

Ultra-high-resolution 5-cm GSD aerial photography has potential for regional documentation of new and emerging infestations of Buffel grass. Visual cover rankings performed by an informed image interpreter are currently the most accurate method of classification. These can be conducted quickly and easily and could be expanded over a larger area with a well-designed sampling strategy to document infestations long before they are seen in the field. For regional documentation of new and emerging infestations of Buffel grass, we recommend the following approach: (1) collect transects of aerial imagery across the region of interest; imagery should be colour digital aerial photography with a GSD of 5 cm; (2) using the aerial photos as samples across the landscape, have a trained observer score Buffel grass cover against visual standards; and (3) use the scored sites to interpolate density across the region, target field survey and direct control efforts. For surveillance of waterways and environments where Buffel grass is known to grow at high densities intermixed with other species, 5-cm GSD colour digital photography together with airborne hyperspectral imagery could be considered for improved spectral separation, and this is one area for future research.
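For concreteness, the NDVI thresholding classifier trialled in the Discussion reduces to computing (NIR − red)/(NIR + red) per pixel and flagging pixels above a cut-off. A minimal sketch in Python; the band values and the 0.35 cut-off are illustrative assumptions only, not values from the study:

```python
def ndvi(red, nir):
    """Normalised Difference Vegetation Index for a single pixel."""
    return (nir - red) / (nir + red)

def threshold_classify(red_band, nir_band, cutoff=0.35):
    """Flag pixels whose NDVI exceeds the cut-off (1 = candidate cover).

    Band inputs are rows of reflectance values in [0, 1]. The cut-off
    of 0.35 is a hypothetical illustration, not the study's threshold.
    """
    return [[1 if ndvi(r, n) > cutoff else 0
             for r, n in zip(red_row, nir_row)]
            for red_row, nir_row in zip(red_band, nir_band)]

# Two pixels: green vegetation (strong NIR response) vs. drying grass
red = [[0.10, 0.40]]
nir = [[0.60, 0.45]]
print(threshold_classify(red, nir))  # prints [[1, 0]]
```

The timing sensitivity discussed above shows up directly in this formulation: when tussocks are half dry, their NIR response weakens, NDVI slides toward the cut-off, and a high threshold chosen to exclude native grasses will also exclude much of the target species.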
Acknowledgments

This research was undertaken as part of PhD studies at The University of Adelaide, Australia, supported by an Australian Postgraduate Award with funding and support from the Alinytjara Wilurara Natural Resources Management Board. A travel bursary awarded by the National Climate Change Adaptation Research Facility (NCCARF) Terrestrial Biodiversity Network was used to carry out collaborative research at the University of Arizona. Thanks to the APY traditional owners, land managers and anthropologists for cooperation and support in the swift conduct of this research project, especially April Langerak and Brad Griffiths. Thanks to Erika Lawley for invaluable assistance, cultural understanding and plant knowledge on field trips.

Open Access: This article is distributed under the terms of the Creative Commons Attribution License which permits any use, distribution, and reproduction in any medium, provided the original author(s) and the source are credited.

References

Andrew, M. E., & Ustin, S. L. (2008). The role of environmental context in mapping invasive plants with hyperspectral image data. Remote Sensing of Environment, 112(12), 4301–4317.
Blumenthal, D., Booth, D. T., Cox, S. E., & Ferrier, C. E. (2009). Large-scale aerial images capture details of invasive plant populations. Rangeland Ecology and Management, 60(5), 523–528. doi:10.2111/1551-5028(2007)60[523:laicdo]2.0.co;2.
Brenner, J. C. (2011). Pasture conversion, private ranchers, and the invasive exotic buffelgrass (Pennisetum ciliare) in Mexico's Sonoran Desert. Annals of the Association of American Geographers, 101(1), 84–106. doi:10.1080/00045608.2010.518040.
Brenner, J. C., Christman, Z., & Rogan, J. (2012). Segmentation of Landsat Thematic Mapper imagery improves buffelgrass (Pennisetum ciliare) pasture mapping in the Sonoran Desert of Mexico. Applied Geography, 34, 569–575.
Brink, A. B., & Eva, H. D. (2009).
Monitoring 25 years of land cover change dynamics in Africa: a sample based remote sensing approach. Applied Geography, 29(4), 501–512.
D'Antonio, C. M. D., & Vitousek, P. M. (1992). Biological invasions by exotic grasses, the grass/fire cycle, and global change. Annual Review of Ecology and Systematics, 23, 63–87.
Franklin, K. A., Lyons, K., Nagler, P. L., Lampkin, D., Glenn, E. P., Molina-Freaner, F., et al. (2006). Buffelgrass (Pennisetum ciliare) land conversion and productivity in the plains of Sonora, Mexico. Biological Conservation, 127(1), 62–71. doi:10.1016/j.biocon.2005.07.018.
Ge, S. K., Carruthers, R., Gong, P., & Herrera, A. (2006). Texture analysis for mapping Tamarix parviflora using aerial photographs along the Cache Creek, California. Environmental Monitoring and Assessment, 114(1–3), 65–83. doi:10.1007/s10661-006-1071-z.
Gergel, S. E., Coops, N. C., & Morgan, J. L. (2010). Aerial photography: a rapidly evolving tool for ecological management. BioScience, 60(1), 47–59. doi:10.1525/bio.2010.60.1.9.
Hestir, E. L., Khanna, S., Andrew, M. E., Santos, M. J., Viers, J. H., Greenberg, J. A., et al. (2008). Identification of invasive vegetation using hyperspectral remote sensing in the California Delta ecosystem. Remote Sensing of Environment, 112(11), 4034–4047.
Jia, K., Wu, B., Tian, Y., Li, Q., & Du, X. (2011). Spectral discrimination of opium poppy using field spectrometry. IEEE Transactions on Geoscience and Remote Sensing, 49(9), 3414–3422.
Laliberte, A. S., Rango, A., Havstad, K. M., Paris, J. F., Beck, R. F., McNeely, R., et al. (2004). Object-oriented image analysis for mapping shrub encroachment from 1937 to 2003 in southern New Mexico. Remote Sensing of Environment, 93(1–2), 198–210.
Lonsdale, W. M. (1999). Global patterns of plant invasions and the concept of invasibility. Ecology, 80(5), 1522–1536.
Marshall, V. M., Lewis, M. M., & Ostendorf, B. (2012).
Buffel grass (Cenchrus ciliaris) as an invader and threat to biodiversity in arid environments: a review. Journal of Arid Environments, 78, 1–12. doi:10.1016/j.jaridenv.2011.11.005.
Mehner, H. (2004). Remote sensing of upland vegetation: the potential of high spatial resolution satellite sensors. Global Ecology and Biogeography Letters, 13(4), 359.
Meneguzzo, D. M., Liknes, G. C., & Nelson, M. D. (2012). Mapping trees outside forests using high-resolution aerial imagery: a comparison of pixel- and object-based classification approaches. Environmental Monitoring and Assessment, 185(8), 6261–6275.
Miller, G., Friedel, M., Adam, P., & Chewings, V. (2010). Ecological impacts of buffel grass (Cenchrus ciliaris L.) invasion in central Australia—does field evidence support a fire-invasion feedback? Rangeland Journal, 32(4), 353–365. doi:10.1071/rj09076.
Myint, S. W., Gober, P., Brazel, A., Grossman-Clarke, S., & Weng, Q. (2011). Per-pixel vs. object-based classification of urban land cover extraction using high spatial resolution imagery. Remote Sensing of Environment, 115(5), 1145–1161. doi:10.1016/j.rse.2010.12.017.
Olsson, A. D., van Leeuwen, W. J. D., & Marsh, S. E. (2011). Feasibility of invasive grass detection in a desertscrub community using hyperspectral field measurements and Landsat TM imagery. Remote Sensing, 3(10), 2283–2304.
Olsson, A. D., Betancourt, J. L., Crimmins, M. A., & Marsh, S. E. (2012). Constancy of local spread rates for buffelgrass (Pennisetum ciliare L.) in the Arizona Upland of the Sonoran Desert. Journal of Arid Environments, 87, 136–143.
Padalia, H., Kudrat, M., & Sharma, K. P. (2013). Mapping sub-pixel occurrence of an alien invasive Hyptis suaveolens (L.) Poit. using spectral unmixing technique. International Journal of Remote Sensing, 34(1), 325–340.
Puckey, H., Brock, C., & Yates, C. (2007).
Improving the landscape scale management of Buffel Grass Cenchrus ciliaris using aerial survey, predictive modelling, and a Geographic Information System. Pacific Conservation Biology, 13(4), 264–273.
Ramakrishna, N., & Steven, W. R. (1996). Implementation of a hierarchical global vegetation classification in ecosystem function models. Journal of Vegetation Science, 7(3), 337–346.
Sharif-Zadeh, F., & Murdoch, A. J. (2001). The effects of temperature and moisture on after-ripening of Cenchrus ciliaris seeds. Journal of Arid Environments, 49(4), 823–831.
Smyth, A., Friedel, M., & O'Malley, C. (2009). The influence of buffel grass (Cenchrus ciliaris) on biodiversity in an arid Australian landscape. Rangeland Journal, 31(3), 307–320. doi:10.1071/rj08026.
Stohlgren, T. J., & Schnase, J. L. (2006). Risk analysis for biological hazards: what we need to know about invasive species. Risk Analysis, 26(1), 163–173. doi:10.1111/j.1539-6924.2006.00707.x.
Ustin, S. L., DiPietro, D., Olmstead, K., Underwood, E., & Scheer, G. J. (2002). Hyperspectral remote sensing for invasive species detection and mapping. In Geoscience and Remote Sensing Symposium, 2002. IGARSS '02. 2002 IEEE International (Vol. 3, pp. 1658–1660).
Van Devender, T. R., & Dimmitt, M. A. (2006). Conservation of Arizona Upland Sonoran desert habitat. Status and threats of buffelgrass (Pennisetum ciliare) in Arizona and Sonora.
Tucson: Arizona-Sonora Desert Museum.
Walter, V. (2004). Object-based classification of remote sensing data for change detection. ISPRS Journal of Photogrammetry and Remote Sensing, 58(3–4), 225–238.
Wang, C., Zhou, B., & Palm, H. (2008). Detecting invasive sericea lespedeza (Lespedeza cuneata) in Mid-Missouri pastureland using hyperspectral imagery. Environmental Management, 41(6), 853–862.
Yu, Q., Gong, P., Clinton, N., Biging, G., Kelly, M., & Schirokauer, D. (2006). Object-based detailed vegetation classification with airborne high spatial resolution remote sensing imagery. Photogrammetric Engineering and Remote Sensing, 72(7), 799–811.

work_yeflbucnk5fureipcvos4bxnhm ----

Colour and contemporary digital botanical illustration
DOI: 10.1016/j.optlastec.2008.12.014 (Corpus ID: 55305284)
@article{Simpson2011ColourAC, title={Colour and contemporary digital botanical illustration}, author={N. Simpson}, journal={Optics and Laser Technology}, year={2011}, volume={43}, pages={330-336}}
N.
Simpson. Published 2011, Optics and Laser Technology, 43, 330–336.

Abstract: Colour can simply be an attribute of a plant, but for scientific identification purposes, colour can also be diagnostic, distinguishing, or helping to distinguish, a plant from an otherwise similar species or cultivar. Hence the accurate recording of colour has been a feature of botanical illustration since its beginnings. New digital composite botanical illustrations, based largely on photography, can include far more colour information about a plant, both in terms of quantity and quality…
work_yeqtas7v4jendbqcq5s73pptea ----

Experience With Wood Lamp Illumination and Digital Photography in the Documentation of Bruises on Human Skin
Ev Vogeley, MD, JD; Mary Clyde Pierce, MD; Gina Bertocci, PhD, PE

Bruising is very common in children. Examination of bruising can guide the clinician in ordering radiographic imaging studies of children who have suffered trauma. Additionally, bruising in infants and patterns of bruising that do not match the injury scenario offered by caretakers can raise the suspicion of abuse. This article reports preliminary experience with Wood lamp enhancement of faint bruises and visualization of bruises that are not visible. It describes the method for digital photography of bruises visualized in this way. Finally, it suggests future applications and areas of further study. Arch Pediatr Adolesc Med. 2002;156:265-268

Although bruising in children is common,1 bruises in unusual locations or in infants who do not yet cruise can be a physical finding that alerts clinicians to possible child abuse.1-3 Additionally, in children involved in other types of trauma, such as falls down stairs and motor vehicle collisions, the presence of bruises can guide the clinician in choosing appropriate imaging studies.
The use of an alternative light source to delineate skin lesions is an established technique in forensic pathology4-8 and forensic odontology.9-11 Generally, these techniques use specialized film and filters that permit the recording, on 35-mm black-and-white film, of reflected light in the infrared or UV range. The infrared spectrum, which consists of wavelengths longer than the human eye can detect (>700 nm), has the deepest penetration and has the theoretical possibility of visualizing early bruising through the ability to detect the pooling of subcutaneous blood.12 In contrast, UV light, which consists of wavelengths shorter than the visible spectrum (<400 nm), has the least penetration, entering only minimally into epidermal tissue, where it is either reflected or absorbed by various biochemical compounds (hemoglobin, carotenoids, or bilirubin) that are part of the healing process of skin.

Reflective UV and infrared photography have limitations that decrease their practical application in pediatrics. The required specialized filters, lenses, and films are unavailable in most emergency departments and outpatient facilities. Exposure times are prolonged, requiring a child to hold still for several minutes. Additionally, a tripod is required to hold the camera still, making applicability in clinical settings cumbersome. Since the camera is photographing light that is beyond the visible spectrum, detailed photographs of the entire body must be taken, as the location of faded bruises becomes known only after the film is developed.

Ultraviolet illumination is an alternative to reflective UV and infrared photography that is more easily adapted for use in pediatrics. Most pediatric facilities have access to a Wood lamp, which is an adequate source of UV light.
If photographs are required, commercially available digital still and video cameras permit brief exposure times (<1 second) even under low-light conditions, so that the child need not be still for extended periods. Additionally, these short exposure times permit the camera to be handheld. As UV illumination allows subclinical bruising to be seen, photographs need only be taken of specifically identified areas of the body.

From the Child Advocacy Center, Department of Pediatrics, Children's Hospital of Pittsburgh (Drs Vogeley and Pierce), and the Department of Rehabilitation Science and Technology, University of Pittsburgh (Dr Bertocci), Pittsburgh, Pa.

(REPRINTED) ARCH PEDIATR ADOLESC MED/VOL 156, MAR 2002, WWW.ARCHPEDIATRICS.COM. ©2002 American Medical Association. All rights reserved. Downloaded from https://jamanetwork.com/ by a Carnegie Mellon University user on 04/05/2021.

Our interest in the use of UV illumination was stimulated when one of us (E.V.) noted the visualization under Wood lamp illumination of a healed bruise on the wrist. The bruise had resulted from closing the skin in the clasp of a watchband 10 days earlier (Figure 1). Our study was undertaken to develop experience with the use of Wood lamp illumination in conjunction with digital imaging to permit photographic documentation of subtle and subclinical bruises in children.

RESULTS

Patient 1

A 6-month-old female infant rolled out of her grandmother's arms and struck her forehead on the beveled edge of a glass-topped table. The injury occurred approximately 1 hour prior to her presentation to the emergency department. Physical examination revealed a superficial laceration on the forehead with a small amount of surrounding ecchymosis (Figure 2A). More extensive bruising was demonstrated with digitally photographed Wood lamp illumination (Figure 2B).
Patient 2

A 7-month-old female infant was being carried down a flight of carpeted stairs by her 9-year-old sibling. The infant was facing her sibling, being supported by her buttocks, with her legs wrapped around the sibling's waist. At the third stair from the bottom, the older sibling slipped and fell backwards, landing on her buttocks. After an intense bout of crying, the infant was noted to be fussier than usual, with refusal to move her left leg. Radiographic examination revealed a nondisplaced buckle fracture of the left distal femur. Physical examination of the knee under normal lighting did not reveal any bruising (Figure 3A). A digitally photographed Wood lamp illumination demonstrated a linear bruise consistent with the described mechanism of injury (Figure 3B).

Patient 3

A 14-year-old boy was admitted to the hospital because of new-onset insulin-dependent diabetes mellitus. On physical examination, he was noted to have a very faint, yellow-brown ecchymosis overlying his left scapula (Figure 4A). He revealed that while attending military school approximately 2 weeks before, he had been bitten on the left part of his upper back. Wood lamp illumination revealed a pattern of bruises consistent with a human bite (Figure 4B).

Figure 1. View without (A) and with (B) Wood lamp illumination of a child's wrist showing a healed bruise resulting from closing the skin in the clasp of a watchband 10 days earlier.

SUBJECTS AND METHODS

This study was approved by the Human Rights Committee (institutional review board) of the Children's Hospital of Pittsburgh, Pittsburgh, Pa. We studied 4 children who had trauma. In children who had a history of trauma, the entire skin surface was examined using a Wood lamp. This examination was conducted under low-light conditions with the Wood lamp held approximately 10 cm from the skin surface. The camera used was a Sony Digital Mavica (model MVC FD95; Sony Electronics Inc, Park Ridge, NJ). The resulting digital images were imported into Adobe Photoshop (version 5; Adobe Systems Inc, San Jose, Calif). The only manipulations of the photographs prior to printing and storing consisted of resizing the images and adjusting contrast and brightness. Bruises were most easily seen after contrast boosts of 10% to 40%.

COMMENT

The evaluation of any injured child requires a thorough examination to define the extent of injury. Although preliminary, the aforementioned cases suggest that the use of digital imaging combined with Wood lamp illumination may provide clinicians with important information regarding the location and extent of subclinical bruising. For example, the demonstration of abdominal bruising in a child presenting with head injury would guide the physician to order appropriate radiographic imaging studies to define the extent of injury to the intestine or solid organs. Decisions about whether a given constellation of injuries is consistent with the explanation offered by caretakers are essential in determining the likelihood of inflicted vs noninflicted trauma. If the injury scenario described by a caretaker indicates that injuries occurred in a single plane, the demonstration of multiple points of contact in multiple planes raises the suspicion of abusive trauma. Just as important is the visualization of occult bruising that can add credence to the explanation of injury offered by caretakers, allowing a more objective assessment of the injury event.
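The contrast boosts of 10% to 40% described in the methods can be approximated by a simple linear stretch of pixel values about the 8-bit midpoint. This is a generic sketch, not Adobe Photoshop's actual algorithm, and the function name and sample values are illustrative assumptions:

```python
def boost_contrast(pixels, percent):
    """Linear contrast stretch of 8-bit pixel values about the midpoint (128).

    `percent` mirrors a contrast boost such as 10-40; this is a generic
    approximation, not the exact adjustment used by any specific editor.
    """
    factor = 1 + percent / 100.0

    def adjust(v):
        # Scale the deviation from mid-grey, then clamp to the 0-255 range.
        return max(0, min(255, round(128 + (v - 128) * factor)))

    return [adjust(v) for v in pixels]

# A faint bruise edge (100) and surrounding skin (200) pushed apart by 25%
print(boost_contrast([100, 200], 25))  # prints [93, 218]
```

The effect matches the clinical rationale: values darker than mid-grey get darker and lighter values get lighter, widening the difference between a faint bruise and the surrounding skin without altering the image content.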
Further studies suggested by our preliminary work include histological correlations and serial Wood lamp illumination with reflective 35-mm UV photography. There is also the possibility that Wood lamp examination of infants who have suffered sudden unexplained death or an apparent life-threatening event may provide important clinical information.

Figure 2. Patient 1. A, View without Wood lamp illumination of a superficial laceration on the forehead of a 6-month-old infant showing a small amount of surrounding ecchymosis. B, Under the illumination of a Wood lamp the same forehead laceration shows surrounding bruising.

Figure 3. Patient 2. View without (A) and with (B) Wood lamp illumination of the knee of a 7-month-old infant who sustained a nondisplaced buckle fracture of the left distal femur.

Figure 4. Patient 3. A, View without Wood lamp illumination of the left part of the upper back of a 14-year-old boy who had been bitten by another human. B, View of the same area under Wood lamp illumination showed a pattern of bruises consistent with a human bite.

CONCLUSIONS

Our brief experience demonstrates that the examination of the skin surface of injured children with Wood lamp illumination can permit enhanced visualization of soft tissue injury. The technique described permits visualization of bruises that are not otherwise visible and identification of faint bruises that were not noticed prior to the Wood lamp examination. The identification of these subclinical bruises may help guide clinicians in selecting laboratory evaluations and imaging studies in injured children. It may allow more complete comparison of the caretaker's accounts of injury scenarios with the child's clinical presentation.
Further studies suggested by our preliminary work include histological correlations, observation of bruises over time, and correlation with techniques such as reflective UV and infrared photography.

Accepted for publication November 8, 2001.
Corresponding author and reprints: Ev Vogeley, MD, JD, Child Advocacy Center, Department of Pediatrics, Children's Hospital of Pittsburgh, 3705 Fifth Ave, Pittsburgh, PA 15213 (e-mail: vogelee@chplink.chp.edu).

REFERENCES

1. Labbé J, Caouette G. Recent skin injuries in normal children. Pediatrics. 2001;108:271-276.
2. Carpenter RF. The prevalence and distribution of bruising in babies. Arch Dis Child. 1999;80:363-366.
3. Sugar NF, Taylor JA, Feldman KW, for the Puget Sound Pediatric Research Network. Bruises in infants and toddlers: those who don't cruise rarely bruise. Arch Pediatr Adolesc Med. 1999;153:399-403.
4. Di Maio DJ, Di Maio VJM. Forensic Pathology. Boca Raton, Fla: CRC Press; 1993.
5. Dawson JB, Barker DJ, Ellis DJ, et al. A theoretical and experimental study of light absorption and scattering by in vivo skin. Phys Med Biol. 1980;25:695-709.
6. West M, Barsley RE, Frair J, Stewart W. Ultraviolet radiation and its role in wound documentation. J Forensic Sci. 1992;37:1466-1479.
7. Hempling SM. The applications of ultraviolet photography in clinical forensic medicine. Med Sci Law. 1981;21:215-222.
8. Barsley RE, West MH, Fair JA. Forensic photography: ultraviolet imaging of wounds on skin. Am J Forensic Med Pathol. 1990;11:300-308.
9. West MH, Billings JD, Frair J. Ultraviolet photography: bite marks on human skin and suggested technique for the exposure and development of reflective ultraviolet photography. J Forensic Sci. 1987;32:1204-1213.
10. David TJ, Sobel MN. Recapturing a five-month-old bite by means of reflective ultraviolet photography. J Forensic Sci. 1994;39:1560-1567.
11. Golden GS. Use of alternative light source illumination in bite mark photography. J Forensic Sci.
What This Study Adds

Forensic scientists have used alternative light sources to elucidate skin wounds that are not visible. Many of these techniques are limited in their application to pediatrics because of the requirement for specialized equipment and the need for subjects to remain still for prolonged periods. Our work describes the use of the ubiquitously available Wood lamp combined with digital photography to demonstrate subtle and subclinical bruises in children.

IN OTHER AMA JOURNALS

ARCHIVES OF OPHTHALMOLOGY

A Randomized Trial of Atropine vs Patching for Treatment of Moderate Amblyopia in Children

The Pediatric Eye Disease Investigator Group

Objective: To compare patching and atropine as treatments for moderate amblyopia in children younger than 7 years.

Methods: In a randomized clinical trial, 419 children younger than 7 years with amblyopia and visual acuity in the range of 20/40 to 20/100 were assigned to receive either patching or atropine at 47 clinical sites.

Main Outcome Measure: Visual acuity in the amblyopic eye and sound eye after 6 months.

Results: Visual acuity in the amblyopic eye improved in both groups (improvement from baseline to 6 months was 3.16 lines in the patching group and 2.84 lines in the atropine group). Improvement was initially faster in the patching group, but after 6 months, the difference in visual acuity between treatment groups was small and clinically inconsequential (mean difference at 6 months, 0.034 logMAR units; 95% confidence interval, 0.005-0.064 logMAR units). The 6-month acuity was 20/30 or better in the amblyopic eye and/or improved from baseline by 3 or more lines in 79% of the patching group and 74% of the atropine group. Both treatments were well tolerated, although atropine had a slightly higher degree of acceptability on a parental questionnaire.
More patients in the atropine group than in the patching group had reduced acuity in the sound eye at 6 months, but this did not persist with further follow-up.

Conclusion: Atropine and patching produce improvement of similar magnitude, and both are appropriate modalities for the initial treatment of moderate amblyopia in children aged 3 to less than 7 years. (2002;120:268-278)

Corresponding author: Roy W. Beck, MD, PhD, Jaeb Center for Health Research, 3010 E 138th Ave, Suite 9, Tampa, FL 33613 (e-mail: rbeck@jaeb.org).

work_yevfxwxmuffu3gomfk5xeg6gsi ----

Correlation between increase in margin-crease distance and patient satisfaction after upper blepharoplasty

Correlação entre o aumento da distância margem-sulco e satisfação do paciente após blefaroplastia superior

Eduardo Damous Feijó1, Adriana Ribeiro de Almeida1, Rayssa Léda1, Fábio Ramos Caiado1, Ana Carla de Souza Nery2, Roberto Murillo Limongi3

1 Department of Oculoplastic Surgery, Hospital Oftalmológico de Anápolis, Anápolis, GO, Brazil. 2 Department of Oculoplastic Surgery, Instituto Panamericano da Visão, Goiânia, GO, Brazil. 3 Department of Oculoplastic Surgery, Universidade Federal de Goiás, Goiânia, GO, Brazil.

Approved by the Ethics Committee of the Anápolis Ophthalmology Hospital. No financial support received.

ABSTRACT

Objective: To quantitatively and qualitatively evaluate postoperative outcomes and patient satisfaction after upper blepharoplasty and to correlate the findings with changes between preoperative and postoperative eyelid measurements using a digital imaging system.
Methods: A total of 60 eyelids in 30 patients with dermatochalasis who were treated in the ambulatory center of the Department of Oculoplastic Surgery at the Anápolis Ophthalmology Hospital were evaluated. Patients ranged from 40 to 80 years of age. Photographs were taken before and 90 days after the upper blepharoplasty procedure. The images were transferred to the ImageJ 1.34n program. The parameters analyzed were palpebral fissure height in primary position and margin-crease distance. The correlations between these measurements and patient satisfaction 90 days after surgery were evaluated.

Results: This study revealed an increase in the margin-crease distance after upper blepharoplasty and a high positive correlation (0.64) between the increase in this height and the level of satisfaction that the patients attributed to the surgery. There was no statistically significant difference between preoperative and postoperative palpebral fissure heights.

Conclusion: The margin-crease distance may serve as a quantitative measurement of a good cosmetic and functional outcome, since it was found to be strongly correlated with patient satisfaction.

Keywords: Eyelids/surgery; Eyelid disease/surgery; Blepharoplasty/methods; Treatment outcome; Image processing, computer-assisted; Patient satisfaction
Original article. Received for publication October 11, 2016; accepted November 19, 2016. The authors declare no conflict of interest.

Rev Bras Oftalmol. 2017;76(2):65-9. DOI 10.5935/0034-7280.20170013

INTRODUCTION

Dermatochalasis is a pathology that commonly affects middle-aged and elderly individuals. It is defined as an excess of skin on the upper eyelid, on the lower eyelid, or on both lids, and it may include an excess of fat and hypertrophic muscle tissue (1). Advanced loss in elasticity and weakening in muscle tissue are characteristics of the periocular aging process, which results in eyelid flaccidity. Intrinsic and extrinsic aging mechanisms are involved in this process. They include alcohol consumption, chronic exposure to sunlight, smoking, and diet (2,3).
Individuals who exhibit dermatochalasis of the superior eyelid may experience symptoms such as blurred vision, tearing, visual fatigue and discomfort, reductions in the superior and peripheral fields of vision, corneal astigmatism, and migraines due to the use of the surrounding muscles to attempt to raise the eyelids (1,2,4). Pseudoptosis may be induced by the increase in periorbital tissue weight (2).

Blepharoplasty is the procedure of choice for correcting dermatochalasis. It consists of the excision of the excess skin using a cutaneous incision that involves the anterior lamella of the eyelid. Depending on each case and on patient anatomy, orbicular muscle tissue and fat pads may also be excised or repositioned (1,4-6).

A complete ophthalmologic exam must be performed in the preoperative assessment so that limitations in visual acuity and to the visual field can be documented, as well as pathologies such as dry eye and any others for which the procedure is contraindicated. Eyebrow position and any association with blepharoptosis should be determined. In cases of superior dermatochalasis associated with eyelid ptosis or brow ptosis, it is important that all aspects be corrected (7,8).

Though ophthalmologists commonly perform this procedure, there is no standardization or consensus for evaluating the severity of superior dermatochalasis. Evaluations are subjective and depend on each examiner’s observations. Measurements are typically taken using rulers and compasses. With the advancement of digital photography and software to analyze these photos, more precise measurement scales can be used to compare surgical outcomes and to correlate them with the success of a procedure (or lack thereof), as well as with patient satisfaction with the results (7,9,10).
The objective of this study is to quantitatively and qualitatively evaluate postoperative outcomes and patient satisfaction after upper blepharoplasty and to correlate these results with the measurements taken using preoperative and postoperative digital photography.

METHODS

A prospective interventional case series study was performed between January and June 2015 in an ophthalmology teaching hospital. The sample was composed of 30 patients who had upper dermatochalasis and who were treated in the Oculoplastic Surgery ambulatory center of the aforementioned hospital. These patients agreed to participate in the study and signed the Informed Consent Form (ICF), which had been previously approved by the institution’s Ethics Committee. The study was performed in accordance with the Declaration of Helsinki.

The inclusion criteria were treatment in the Oculoplastic Surgery ambulatory center, age between 40 and 80 years, cosmetic or functional indication for the upper blepharoplasty procedure, and agreement to participate in the study (which was determined after patients signed the ICF). The exclusion criteria were age younger than 40 years or older than 80 years, a history of facial trauma, eyelid ptosis, ectropion, entropion, hyperthyroidism, the presence of a clinical contraindication for performing the procedure, and the use of blood thinners or antiplatelet agents.

The digital photographs were obtained with a Sony W50 digital camera and analyzed in the ImageJ 1.34n program, which converts the measurements taken with a ruler into pixels, thus creating a pixels-per-millimeter (mm) scale. The patients were seated with their heads positioned in a Haag-Streit BQ 900® slit lamp so that the measurements could be taken. Millimeter rulers were attached vertically to the lateral support of the slit lamp in order to standardize the measurements. Each patient was instructed to remain in primary position while the measurements were taken (Figure 1).
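The ruler-based calibration described here (a segment of known millimeter length photographed in-frame, mapped to a pixel count) can be sketched in a few lines. The pixel values below are illustrative assumptions, not data from the study, which performed this conversion inside ImageJ:

```python
def mm_per_pixel(ruler_mm, ruler_pixels):
    """Scale factor derived from a ruler of known length visible in the photograph."""
    return ruler_mm / ruler_pixels

def pixels_to_mm(measurement_pixels, scale):
    """Convert an on-image distance from pixels to millimeters."""
    return measurement_pixels * scale

# Illustrative values: a 10 mm ruler segment spans 250 pixels in the photo.
scale = mm_per_pixel(10.0, 250)   # 0.04 mm per pixel
fissure_px = 215                  # hypothetical palpebral fissure height, in pixels
print(round(pixels_to_mm(fissure_px, scale), 2))  # prints 8.6
```

The same scale factor applies to every distance measured in that photograph, which is why the ruler was fixed to the slit lamp: it keeps the camera-to-subject geometry, and therefore the calibration, constant across patients.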
As mentioned previously, there is no consensus regarding dermatochalasis severity. Therefore, the authors measured the distance between the superior lid margin and the superior lid crease, as suggested by Frantz et al.(8), at the level of the center of the lid (margin-crease distance) in primary position. Next, palpebral fissure height was measured (defined as the distance from the superior lid margin to the inferior lid margin, passing through the center of the pupil) and the margin-crease distance (defined as the distance from the superior lid margin to the lid crease along the pupillary line) before and 90 days after blepharoplasty.

In each patient, preoperative superior dermatochalasis was determined to be either mild (margin-crease distance of 2 mm or greater), moderate (margin-crease distance of 0.1 to 1.9 mm), or severe (0 or negative distance). We considered dermatochalasis to be severe in cases in which the skin touched the lash line or surpassed it (negative distance), as shown in Figure 1.

Figure 1: A: Digital measurement of palpebral fissure height (right eye), measurement of the margin-crease distance (left eye), and millimeter ruler attached to the slit lamp. B: Margin-crease distance measurement reflecting a postoperative increase (left eye).

All of the surgeries were performed by ophthalmology residents in training; all were directly supervised by the same oculoplastic surgeon. The amount of skin to be excised had been outlined prior to the procedure. The patients were sedated using intravenous midazolam, and local anesthesia was then applied (2% lidocaine and 0.75% bupivacaine, both with epinephrine).
The cutaneous incision of the superior lid was made using a #15 blade along the previously outlined markings in order to remove the excess skin, subcutaneous tissue, and preseptal orbicularis muscle using an EMAI brand electronic scalpel, model number BP 150. When necessary, meticulous hemostasis was performed and medial fat pads were removed. The surgical wound was closed using nylon 6-0 suture, with two to three simple sutures followed by a running suture. Finally, antibiotic ointment and a bandage were applied for 24 hours. The sutures were removed seven days after surgery.

The data on patient satisfaction were collected using qualitative questionnaires applied in the immediate postoperative period and 90 days after the blepharoplasty. Patients were questioned regarding their reason for undergoing the procedure (cosmetic, functional, or both) and the extent of postoperative pain (measured on a scale of 0 to 10, in which 0 reflected a lack of pain, 1 to 3 reflected mild pain, 4 to 5 reflected moderate pain, 7 to 9 reflected intense pain, and 10 reflected the most pain ever experienced). In terms of satisfaction with the final outcome of the procedure, the patients were instructed to give a subjective score of 0 to 10, in which 0 meant “completely unsatisfied” and 10 reflected “extremely satisfied”, and the patients were also asked if they would undergo the procedure again. Patients were also asked for their subjective assessments of scarring on a scale of 0 to 3 (0: invisible; 1: minimally visible; 2: moderately visible; and 3: highly visible).

The statistical analysis was performed using the SPSS software. The normality of the data was evaluated using the Kolmogorov-Smirnov test. Student’s t-test was used to compare the average preoperative and postoperative palpebral fissure heights, as well as the average preoperative and postoperative margin-crease distances.
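The paired comparison described here (preoperative versus postoperative measurements of the same eyelids) reduces to Student's t statistic computed on the per-eyelid differences. A minimal sketch with made-up measurements, not the study's raw data (the authors used SPSS):

```python
import math

def paired_t_statistic(before, after):
    """t statistic of a paired Student's t-test: mean difference over its standard error."""
    diffs = [a - b for a, b in zip(after, before)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)  # sample variance of the differences
    return mean / math.sqrt(var / n)

# Hypothetical margin-crease distances (mm) for six eyelids.
pre  = [1.0, 1.5, 2.0, 0.5, 1.8, 1.6]
post = [3.5, 3.9, 4.2, 3.1, 4.0, 3.8]
print(round(paired_t_statistic(pre, post), 2))
```

A large t value, referred to the t distribution with n - 1 degrees of freedom, yields a small p value; a p value below the chosen 0.05 threshold is what the study reports as a statistically significant change.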
Pearson’s correlation coefficient was used to evaluate the correlation between the final margin-crease distance and patient satisfaction scores. In this study, results with a 95% confidence interval were considered statistically significant (p<0.05).

RESULTS

A total of 60 upper blepharoplasties were performed on 30 patients. The population’s average age was 54.8 years (range 41-74 years of age); 88% were female. Forty-four percent of patients presented no preexisting comorbidities, 48% had systemic hypertension, 15% had hypothyroidism, and 12% had diabetes. When asked about their reasons for undergoing the surgery, 32% of the patients reported only cosmetic reasons, 33% reported only functional reasons, and 35% reported both cosmetic and functional reasons. The questionnaire also found that 96% of the patients would undergo the surgery again (4% would not).

In the assessment of superior dermatochalasis, 9 patients (30%) were found to have mild dermatochalasis, 11 patients (36%) were found to have moderate dermatochalasis, and 10 patients (34%) were found to have severe dermatochalasis. In this study, no cases of asymmetry were found, meaning there were no cases in which the patient exhibited different dermatochalasis intensities in each eye (Table 1).

When Student’s t-test was applied to compare the average preoperative palpebral fissure height (8.5 mm; SD: 0.91) to the average postoperative palpebral fissure height (8.6 mm; SD: 0.85), no statistically significant difference was found (p=0.44).
However, the average preoperative margin-crease distance was 1.4 mm (SD: 1.19), while the average postoperative margin-crease distance was 3.8 mm (SD: 0.49). The p value was 0.02, which reflects statistical significance (Table 2). The study also found a strong positive correlation (0.64) between the margin-crease distance and the patient satisfaction score, the average of which was 9.04 out of 10 (range 7-10; SD: 0.88) (Graph 1). Among subjects under 60 years of age the positive correlation was 0.64 (strong), and among subjects over 60 years of age it was 0.33 (weak). The average score patients gave to pain was 2.5 out of 10 (range 1-5; SD: 1.10).

Table 1: Patient profiles
Gender: 88% female
Age: 54.8 years (41-74)
Comorbidities: 48% SH*
Reason for procedure: 68% cosmetic
Intensity of dermatochalasis: 70% moderate/severe
Would they perform the procedure again?: 96% yes
*SH: systemic hypertension

Table 2: Digital eyelid measurements in millimeters (mm) (mean and range)
                          Preoperative mean   Postoperative mean   p value
Palpebral fissure height  8.5 (7-10)          8.6 (7.1-10)         0.44
Margin-crease distance    1.4 (0-3.8)         3.8 (2.3-5)          0.02

Graph 1: Correlation between higher margin-crease distance and patient satisfaction score (all patients)

When asked about scarring, 22 patients (73%) subjectively rated their scarring as invisible or minimally visible, and 8 patients (27%) rated their scarring as moderately visible. None of the patients exhibited highly visible scarring, according to their own assessments.

DISCUSSION

Upper blepharoplasty is one of the most common aesthetic procedures performed in the United States and Brazil (1,11). Aesthetic improvements can be made with a short operation that can be performed under intravenous sedation.
This procedure offers many benefits to patients and, in most cases, results in high patient satisfaction. In our study, there was a higher percentage of female patients (88%), and patients averaged approximately 55 years of age. These data are consistent with the greater interest in cosmetic procedures, whether surgical or not, among this patient profile. Beyond the cosmetic benefits, functional issues are present in most patients who wish to undergo upper blepharoplasty. Common complaints before surgery include a feeling of excess weight above the eyes, a decreased superior and peripheral visual field, and asthenopia (9,10). These findings are consistent with those published by Lessa et al.(11). Dermatochalasis is also associated with a loss in the peripheral visual field, which further affects patients’ quality of life (12).

To achieve the digital measurements, a digital image must be produced, which means attributing spatial values (x, y) and luminance values to the points (pixels) that form the image (13). Once available in digital form, the image can be processed by programs that mathematically manipulate the pixels. The use of digital processing allows for more refined quantitative analyses of the oculoplastic parameters that may be of clinical or surgical importance and which may be correlated with qualitative analyses of surgical outcomes (7). These parameters are traditionally measured using rulers and compasses, a method which may result in differences between examiners. The digitalization and computerized analysis of these measurements offer a more precise result and eliminate examiner bias.

In our study, the increase in postoperative palpebral fissure height was not statistically significant, a finding which was also reported by Starck et al.(14).
Schellini et al.(7) found significant changes in palpebral fissure measurements before and after upper blepharoplasty, a result which may be explained by the presence of many patients with mechanical ptosis due to severe dermatochalasis in their study. The average postoperative palpebral fissure measurement in our study was 8.6 mm. Cruz et al.(13) analyzed palpebral fissure height in 70 eyes and found a mean value of 9.02 mm, consistent with our findings.

[Dermatochalasis questionnaire, reproduced in the original in Portuguese: patient name, sex, and age; personal history (hypertension, diabetes, hypothyroidism, hyperthyroidism, other); visual symptoms (eyelid “heaviness”, visual field changes, photophobia, tearing, other); motivation for surgery (cosmetic, functional, or both); ImageJ measurements of palpebral fissure and margin-crease distance before and after surgery for each eye; postoperative pain scale (0 to 10); subjective scar assessment (0: invisible to 3: highly visible); overall subjective surgery score (0: completely unsatisfied to 10: extremely satisfied); and whether the patient would undergo the procedure again.]

Our study found a significant increase between the preoperative and postoperative measurements of margin-crease distance. The preoperative distance was 1.4 mm, while the postoperative distance was 3.8 mm (p<0.02). Statistically significant increases have also been reported by Schellini et al.(7), Starck et al.(14) and Lessa et al.(11).
An analysis of these data shows that our finding was expected, given that the excision of the skin above the preseptal orbicularis muscle exposes the supratarsal crease, which is no longer covered by the excess skin of superior dermatochalasis. Though many studies have analyzed preoperative and postoperative measurements involved in upper blepharoplasties, we are unaware of any studies that correlate postoperative measurements with the extent of patient satisfaction. In this study, a quantitative increase in margin-crease distance in primary position, which exposes pretarsal skin (a hollow upper eyelid sulcus) and, particularly among women, allows for the use of cosmetic products in this region, was found to be correlated with greater patient satisfaction; it may therefore be considered a factor in a good outcome for upper blepharoplasty. Another important quantitative criterion is the absence of a decrease in palpebral fissure, since such a decrease may lead to a certain degree of blepharoptosis, an outcome which is not desired after upper blepharoplasty.

The majority of oculoplastic surgeons have been concerned with volume preservation in upper blepharoplasty. However, in this study, the strong positive correlation between a higher margin-crease distance and patient satisfaction shows that, in the patients’ evaluations, a hollow upper eyelid sulcus was preferred to a full upper eyelid sulcus. Bielory et al.(15) found that the preference for a hollow or full upper eyelid sulcus could be accounted for by age. In their study, subjects over 45 years of age preferred a hollow upper eyelid sulcus (higher margin-crease distance) over a full eyelid (15). In another study, Hwang et al.(16) showed the effect of “single” versus “double” eyelids on the perceived attractiveness of Chinese women and considered the presence of a medium upper eyelid crease to be the most significantly attractive eyelid shape.
These findings are consistent with our study, which showed a patient preference for a higher margin-crease distance. Qualitative criteria include invisible or minimally visible scarring in a good location (coinciding, in most cases, with the original eyelid crease), decreased sensations of excess weight above the eyes and of asthenopia (17-19), improved superior and peripheral visual fields (20-24), and an associated absence of complications such as lagophthalmos and dry eye. These criteria are associated with an ideal outcome.

In our study, we found a strong positive correlation between greater margin-crease distance and patient satisfaction (determined by the score attributed by the patient to the final outcome of the procedure). We conclude that this measurement may serve as a quantitative parameter of a good cosmetic and functional outcome, particularly in teaching hospitals where medical residents need clinical parameters to evaluate their results. This measurement may be beneficial in cases of careful surgical indication and the correct use of the surgical technique.

REFERENCES

1. De Angelis DD, Carter SR, Seiff SR. Dermatochalasis. Review. Int Ophthalmol Clin. 2002;42(2):89-102.
2. Paixão MP, Miot HA, Machado CD. Avaliação do impacto da blefaroplastia superior na qualidade de vida utilizando questionário padronizado (Qblefaro): estudo piloto. An Bras Dermatol. 2008;83(1):32-7.
3. Putterman AM. Blefaroplastia superior. In: Cirurgia oculoplástica estética. Rio de Janeiro: Elsevier; 2009. p. 114.
4. Battu VK, Meyer DR, Wobig JL. Improvement in subjective visual function and quality of life outcome measures after blepharoptosis surgery. Am J Ophthalmol. 1996;121(6):677-86.
5. Fagien S. Eyebrow analysis after blepharoplasty in patients with brow ptosis. Ophthal Plast Reconstr Surg. 1992;8(3):210-4.
6. Federici TJ, Meyer DR, Lininger LL.
Correlation of the vision-related functional impairment associated with blepharoptosis and the impact of blepharoptosis surgery. Ophthalmology. 1999;106(9):1705-12.
7. Schelinni AS, Pretti RC, Yamamoto RK, Padovani CR, Padovan CR. Eyelid measures before and after upper blepharoplasty – quantitative evaluation. Arq Bras Oftalmol. 2005;68(1):85-8.
8. Frantz KA. Avaliação da topografia corneana e correlação com a intensidade da dermatocálase antes e após blefaroplastia superior [dissertação]. Goiânia (GO): Universidade Federal de Goiás; 2013.
9. Shore JW, Bergin DJ, Garrett SN. Results of blepharoptosis surgery with early postoperative adjustment. Ophthalmology. 1990;97(11):1502-11.
10. Cahill KV, Burns JA, Weber PA. The effect of blepharoptosis on the field of vision. Ophthal Plast Reconstr Surg. 1987;3(3):121-5.
11. Lessa SF, Elena EH, Araujo MR, Pitanguy I. Modificações anatômicas da fenda palpebral após blefaroplastia. Rev Bras Cir. 1997;87(4):179-88.
12. Hacker HD, Hollsten DA. Investigation of automated perimetry in the evaluation of patients for upper lid blepharoplasty. Ophthal Plast Reconstr Surg. 1992;8(4):250-5.
13. Cruz AA, Baccega A. Análise bidimensional computadorizada da fenda palpebral. Arq Bras Oftalmol. 2001;64(1):13-9.
14. Starck WJ, Griffin JE, Epker BN. Objective evaluation of the eyelids and eyebrows after blepharoplasty. J Oral Maxillofac Surg. 1996;54(3):297-302; discussion 302-3.
15. Bielory BP, Schwarcz RM (New York Medical College). Hollow or full upper eyelid sulcus: which do patients prefer? (Presented at the 46th ASOPRS Annual Fall Scientific Symposium).
16. Hwang HS, Spiegel JH. The effect of “single” vs “double” eyelids on the perceived attractiveness of Chinese women. Aesthet Surg J. 2013;34(3):374-81.
17. Small RG, Meyer DR. Eyelid metrics. Ophthal Plast Reconstr Surg. 2004;20(4):266-7.
18. Meyer DR, Stern JH, Jarvis JM, Lininger LL. Evaluating the visual field effects of blepharoptosis using automated static perimetry.
Ophthalmology. 1993;100(5):651-9.
19. Meyer DR, Rheeman CH. Downgaze eyelid position in patients with blepharoptosis. Ophthalmology. 1995;102(10):1517-23.
20. Olson JJ, Putterman A. Loss of vertical palpebral fissure height on downgaze in acquired blepharoptosis. Arch Ophthalmol. 1995;113(10):1293-7.
21. Damasceno RW, Avgitidou G, Belfort R Jr, Dantas PE, Holbach LM, Heindl LM. Eyelid aging: pathophysiology and clinical management. Arq Bras Oftalmol. 2015;78(5):328-31.
22. Atalay K, Gurez C, Kirgiz A, Serefoglu Cabuk K. Does severity of dermatochalasis in aging affect corneal biomechanical properties? Clin Interv Aging. 2016;11:659-64.
23. Cahill KV, Bradley EA, Meyer DR, Custer PL, Holck DE, Marcet MM, Mawn LA. Functional indications for upper eyelid ptosis and blepharoplasty surgery: a report by the American Academy of Ophthalmology. Ophthalmology. 2011;118(12):2510-7.
24. Simsek IB, Yilmaz B, Yildiz S, Artunay O. Effect of upper eyelid blepharoplasty on vision and corneal tomographic changes measured by pentacam. Orbit. 2015;34(5):263-7.

Corresponding author: Eduardo Damous Feijó, Department of Oculoplastic Surgery, Hospital Oftalmológico de Anápolis, Anápolis, GO, Brazil. Address: Av Faiad Hanna, 235, Cidade Jardim, Anápolis, GO, Brazil, CEP 75080-410.

work_ygjahw2jo5d5blwp4nfxse4pbe ----

A Randomized Comparative Study of Oral Corticosteroid Minipulse and Low-Dose Oral Methotrexate in the Treatment of Unstable Vitiligo | Semantic Scholar
DOI: 10.1159/000433424. Corpus ID: 20648420.

@article{Singh2015ARC,
  title={A Randomized Comparative Study of Oral Corticosteroid Minipulse and Low-Dose Oral Methotrexate in the Treatment of Unstable Vitiligo},
  author={H. Singh and M. Kumaran and A. Bains and D. Parsad},
  journal={Dermatology},
  year={2015},
  volume={231},
  pages={286 - 290}
}

H. Singh, M. Kumaran, A. Bains, D. Parsad. Published 2015 in Dermatology.

Background: Despite continued progress towards elucidation of the biochemical, genetic and immunopathological pathways in vitiligo, a definitive cure remains elusive. The initial therapy must be directed to arrest disease progression. Oral minipulse therapy (OMP) with betamethasone/dexamethasone has been tried and shown to be an effective modality to arrest the disease progression in vitiligo. Objectives: Methotrexate (MTX) is a time-tested effective treatment extensively used in various…
References SHOWING 1-10 OF 31 REFERENCES SORT BYRelevance Most Influenced Papers Recency Low-Dose Oral Mini-Pulse Dexamethasone Therapy in Progressive Unstable Vitiligo A. Kanwar, R. Mahajan, D. Parsad Medicine Journal of cutaneous medicine and surgery 2013 31 Save Alert Research Feed The efficacy of low-dose oral corticosteroids in the treatment of vitiligo patient. K. Banerjee, J. N. Barbhuiya, A. Ghosh, S. Dey, P. R. Karmakar Medicine Indian journal of dermatology, venereology and leprology 2003 19 Save Alert Research Feed Oral dexamethasone pulse treatment for vitiligo. S. Radakovic-Fijan, A. M. Fürnsinn-Friedl, H. Hönigsmann, A. Tanew Medicine Journal of the American Academy of Dermatology 2001 102 Save Alert Research Feed Methotrexate for treatment of lichen planus: old drug, new indication A. Kanwar, D. De Medicine Journal of the European Academy of Dermatology and Venereology : JEADV 2013 21 Save Alert Research Feed Oral minipulse therapy in vitiligo. A. Kanwar, S. Dhar, G. Dawn Medicine Dermatology 1995 24 Save Alert Research Feed Efficacy and safety of methotrexate in recalcitrant cutaneous lupus erythematosus: results of a retrospective study in 43 patients J. Wenzel, S. Brähler, R. Bauer, T. Bieber, T. Tüting Medicine The British journal of dermatology 2005 99 Save Alert Research Feed Methotrexate for the treatment of generalized vitiligo. K. AlGhamdi, Huma Khurrum Medicine Saudi pharmaceutical journal : SPJ : the official publication of the Saudi Pharmaceutical Society 2013 18 PDF Save Alert Research Feed Association of the Köbner phenomenon with disease activity and therapeutic responsiveness in vitiligo vulgaris. M. Njoo, P. K. Das, J. Bos, W. Westerhof Medicine Archives of dermatology 1999 149 PDF Save Alert Research Feed ORAL MINI‐PULSE THERAPY WITH BETAMETHASONE IN VITILIGO PATIENTS HAVING EXTENSIVE OR FAST‐SPREADING DISEASE J. S. Pasricha, B. 
work_yh7innnahrcxxf44pelegdyjby ---- Submitted 1 June 2017. Accepted 28 August 2017. Published 27 September 2017. Corresponding author: Denise Dalbosco Dell’Aglio, ddd23@cam.ac.uk. Academic editor: David Roberts. DOI 10.7717/peerj.3821. Copyright 2017 Dalbosco Dell’Aglio et al. Distributed under Creative Commons CC-BY 4.0. OPEN ACCESS

Estimating the age of Heliconius butterflies from calibrated photographs

Denise Dalbosco Dell’Aglio1,2, Derya Akkaynak2, W. Owen McMillan2 and Chris D. Jiggins1,2
1 Department of Zoology, University of Cambridge, Cambridge, United Kingdom
2 Smithsonian Tropical Research Institute, Panama City, Panama

ABSTRACT
Mating behaviour and predation avoidance in Heliconius involve visual colour signals; however, there is considerable inter-individual phenotypic variation in the appearance of colours.
In particular, the red pigment varies from bright crimson to faded red. It has been thought that this variation is primarily due to pigment fading with age, although this has not been explicitly tested. Previous studies have shown the importance of red patterns in mate choice and that birds and butterflies might perceive these small colour differences. Using digital photography and calibrated colour images, we investigated whether the hue variation in the forewing dorsal red band of Heliconius melpomene rosina corresponds with age. We found that the red hue and age were highly associated, suggesting that red colour can indeed be used as a proxy for age in the study of wild-caught butterflies.
Subjects: Animal Behavior, Bioinformatics, Ecology, Entomology, Zoology
Keywords: Aging, Colour analysis, Calibrated images, Digital camera, Photography, Colouration

INTRODUCTION
Butterflies are some of the most colourful living animals and their bright wing colours have attracted the attention of scientists and artists alike. Multiple selective factors affect the evolution of butterfly wing colours, which might be tuned to the visual systems of potential mates and predators. This might be particularly true for the genus Heliconius, which exhibits conspicuous colours as a warning signal of toxicity (Benson, 1972; Langham, 2004), and finds and chooses mates based on these same colour signals that are involved in predator avoidance (Jiggins et al., 2001; Estrada & Jiggins, 2008). Although mating behaviour and predation avoidance in Heliconius is highly linked to colour, previous research has shown that some Heliconius species exhibit considerable phenotypic variation in colour. Analysis of the colour patterns of two polymorphic mimic butterflies, Heliconius numata and the genus Melinaea, suggests that small differences in contrast can be perceived by butterflies and birds (Llaurens, Joron & Théry, 2014).
Moreover, variation of wing colour spectra between populations of Heliconius timareta indicates that their colours are locally adapted for mimicry in very precise ways (Mérot et al., 2016). Most studies of colouration in Heliconius butterflies have focused on the genetic basis for colour variation (Reed, Mcmillan & Nagy, 2008; Reed et al., 2011; Pardo-Diaz et al., 2012). Convergent gene expression in H. melpomene and H. erato is associated with red wing elements (Pardo-Diaz & Jiggins, 2014). Moreover, the H. melpomene gene responsible for red colour pattern is genetically linked to a preference for red (Naisbit, Jiggins & Mallet, 2001; Merrill et al., 2011). From a morphological point of view, wing scales have ultra-structural differences which are correlated with pigmentation (Gilbert et al., 1988; Aymone, Valente & De Araújo, 2013). Red/brown scales in Heliconius are pigmented with xanthommatin and dihydroxanthommatin and vary in colour from bright red to brown due to variations in the pigment oxidation state (Gilbert et al., 1988). However, there is also phenotypic variation in red among individuals that do not differ genetically, in wing regions with the same pigmentation and ultrastructure, with variation in colour from bright crimson to faded red. It has been suggested that this variation in red is due to oxidation of the red dihydroxanthommatin pigment as individuals age (Crane, 1954; Ehrlich & Gilbert, 1973; Gilbert et al., 1988).
How to cite this article: Dalbosco Dell’Aglio et al. (2017), Estimating the age of Heliconius butterflies from calibrated photographs. PeerJ 5:e3821; DOI 10.7717/peerj.3821
Previous studies have taken advantage of this phenomenon to measure arbitrary age structure in Heliconius using wing condition such as wear, dull colours and scale loss (Ehrlich & Gilbert, 1973; Walters et al., 2012). We here investigated whether red colour can be used as a proxy for age in the study of butterflies, removing human vision bias. We used analysis based on digital photography to investigate the association between wing colouration and age, a methodology increasingly common in studies of animal coloration due to its high-end technology and affordability (Stevens et al., 2007; Akkaynak et al., 2014). Here, we created an unbiased methodology to quantify age based on the ‘‘redness’’ of the forewing red band in Heliconius butterflies.

MATERIAL AND METHODS
To quantify changes in the dorsal forewing red band, we used a set of 55 Heliconius melpomene rosina Boisduval 1870 wings from Owen McMillan’s collection in Gamboa, Panama, raised in insectaries and of known age (in days after emergence). To objectively characterise colour, photographs of the wings were taken using an Olympus OM-D EM-1 digital camera with an Olympus Zuiko Digital ED 60 mm f/2.8 macro lens (Olympus, Center Valley, PA, USA). Forewing specimens were photographed against a Kodak R-27 Gray card, with Munsell 18% reflectance (Eastman Kodak, Rochester, NY, USA). Camera RAW images were converted to DNG format using the Adobe DNG Converter®, and white balanced and colour corrected according to equations (5)–(8) described in Akkaynak et al. (2014) using an Xrite Color Checker (Xrite, Grand Rapids, MI, USA). For illumination, a Bolt VM-110 LED macro ring light was used (Gradus Group LLC, New York, NY, USA). Image manipulations were done using custom scripts written in MATLAB® in wide gamut Kodak ProPhoto RGB colour space.
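The gray-card white-balance step just described can be sketched in a few lines. This is a minimal, hypothetical illustration in Python/NumPy rather than the authors' MATLAB pipeline: it applies only a von Kries-style channel scaling from a gray-card patch, not the full colour correction of Akkaynak et al. (2014), and the card coordinates and pixel values are invented for the example.

```python
import numpy as np

def gray_card_white_balance(img, card_region):
    """Scale each channel so the mean of the gray-card patch becomes
    neutral (equal R, G, B): a von Kries-style white balance.

    img         : HxWx3 float array, linear RGB in [0, 1]
    card_region : (row_slice, col_slice) locating the gray card
    """
    patch = img[card_region].reshape(-1, 3)
    means = patch.mean(axis=0)            # per-channel mean of the card
    gains = means.mean() / means          # equalise the three channels
    return np.clip(img * gains, 0.0, 1.0)

# Toy frame with a bluish cast: the "gray" card reads (0.4, 0.5, 0.6)
img = np.ones((10, 10, 3)) * np.array([0.4, 0.5, 0.6])
balanced = gray_card_white_balance(img, (slice(0, 4), slice(0, 4)))
# After balancing, the card pixels are neutral mid-gray (0.5, 0.5, 0.5)
```

Scaling each channel so the card reads neutral removes the illuminant's colour cast before any colour measurement is taken, which is what makes hue values comparable across photographs.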
Following colour calibration, RGB images were projected to L*a*b* colour space (Wyszeck & Stiles, 1982) and the a* channel of each image was given a threshold at a* > 10 to segment the red patch automatically and obtain a binary mask. In the L*a*b* colour space, the a* channel takes on values between −128 and +128, and represents red–green opponency. The b* channel also takes on values between −128 and +128, and represents yellow–blue opponency. The L* channel varies between 0 and 100, and is a measure of lightness, with 100 being a bright white. Thus in this colour space, the more positive the a* value, the ‘‘redder’’ an image appears. The resulting binary mask was cleaned of isolated pixels using first morphological area opening, then closing. The automatically made masks were checked visually to ensure no pixels outside the red patch area or severe scale loss patches were included. The a* and b* values of the pixels inside the red patch area were averaged to obtain a representative colour for each specimen. First, to test for differences between sexes, a t-test was performed. Next, to test for association between age and colour, a linear regression analysis was performed between age (days after emergence) and red measurement (a*). In addition, wings were sorted into groups using age/colour categories based on previous work in H. ethilla (Ehrlich & Gilbert, 1973). These subjective categories were named based on the appearance of the red band to human vision: crimson (fresh wings), red (intermediate) and faded red (worn) (Fig. 1).
Figure 1: Forewings of Heliconius melpomene rosina. Categories based on the appearance of colours to human vision: (A) crimson (fresh wings), (B) red (intermediate) and (C) faded red (worn).
The categorical data were used to investigate the efficiency of human vision in categorizing subjective wing colouration.
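The L*a*b* projection and the a* > 10 threshold can be sketched as follows. This is a simplified stand-in for the authors' MATLAB scripts: it assumes sRGB input (the paper works in wide-gamut ProPhoto RGB), uses the standard CIE conversion formulas, omits the morphological area opening/closing, and the toy image is invented for illustration.

```python
import numpy as np

def srgb_to_lab(rgb):
    """sRGB in [0, 1] (HxWx3) -> CIE L*a*b* under a D65 white point."""
    # inverse sRGB gamma -> linear RGB
    lin = np.where(rgb > 0.04045, ((rgb + 0.055) / 1.055) ** 2.4, rgb / 12.92)
    # linear RGB -> XYZ (standard sRGB/D65 matrix), normalised by the white point
    M = np.array([[0.4124564, 0.3575761, 0.1804375],
                  [0.2126729, 0.7151522, 0.0721750],
                  [0.0193339, 0.1191920, 0.9503041]])
    xyz = (lin @ M.T) / np.array([0.95047, 1.0, 1.08883])
    # CIE f() nonlinearity, then the L*, a*, b* axes
    eps = (6 / 29) ** 3
    f = np.where(xyz > eps, np.cbrt(xyz), xyz / (3 * (6 / 29) ** 2) + 4 / 29)
    L = 116 * f[..., 1] - 16
    a = 500 * (f[..., 0] - f[..., 1])
    b = 200 * (f[..., 1] - f[..., 2])
    return np.stack([L, a, b], axis=-1)

def mean_red_patch_ab(rgb, a_thresh=10.0):
    """Binary mask = pixels with a* above the threshold (the red patch);
    return the mean (a*, b*) inside it as the specimen's colour."""
    lab = srgb_to_lab(rgb)
    mask = lab[..., 1] > a_thresh
    return lab[mask][:, 1].mean(), lab[mask][:, 2].mean()

# Toy wing: a red band on a neutral gray background (a* ~ 0, so excluded)
img = np.ones((4, 4, 3)) * 0.5
img[1:3, 1:3] = [0.8, 0.1, 0.1]
a_mean, b_mean = mean_red_patch_ab(img)   # a_mean is strongly positive
```

Because a* encodes red–green opponency, a single threshold isolates the red band without manual outlining; fresher (redder) wings yield a larger mean a*.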
RESULTS AND DISCUSSION
Through calibrated digital images, we quantified the redness of Heliconius wings and found a strong association between age and fading of the colour (Fig. 2). We analysed 37 females and 18 males and found no significant difference in colour between the sexes (t53,55 = 0.409, P = 0.684), so both were combined in subsequent analyses. Our results showed that redness and age were highly associated (t53,55 = −7.461, P < 0.001), indicating that younger individuals have quantifiably ‘‘redder’’ patches. This suggests that the forewing dorsal red band changes colour with age as had been suggested previously (Fig. 2). The samples included individuals across a wide range of age categories, reflecting the natural age structure of wild populations (Ehrlich & Gilbert, 1973). We have also shown that through calibrated photographs it is possible to distinguish more colour categories than human vision allows. Although human visual categories do not fit especially well with the correlation (Fig. 2), the ‘‘crimson’’ category included all specimens that were less than 10 days old and ‘‘faded red’’ included specimens that were all older than 25 days (Fig. 2). Also, ‘‘faded red’’ might contain information about scale loss, which was not explored in our methodology.
Figure 2: Forewing red band changes colour with age. Association between redness (a* value) and age in days after emergence in Heliconius melpomene rosina forewing dorsal red band (t53,55 = −7.461, P < 0.001). Human visual categories: crimson (filled circles, n=9), red (open squares, n=39) and faded red (filled triangles, n=7).
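The age-versus-redness regression reported above can be reproduced on any table of (age, a*) values with ordinary least squares. The sketch below is illustrative only: the data are synthetic stand-ins with an assumed fading rate, not the study's measurements.

```python
import numpy as np

def slope_t_test(x, y):
    """Fit y = b0 + b1*x by least squares; return the slope, its
    t statistic (H0: slope = 0) and the residual df (n - 2)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = x.size
    b1, b0 = np.polyfit(x, y, 1)                  # slope, intercept
    resid = y - (b0 + b1 * x)
    s2 = resid @ resid / (n - 2)                  # residual variance
    sxx = (x - x.mean()) @ (x - x.mean())
    return b1, b1 / np.sqrt(s2 / sxx), n - 2

# Synthetic stand-in: 55 wings whose a* fades by ~0.5 units/day plus noise
rng = np.random.default_rng(0)
age = rng.uniform(0, 40, 55)                      # days after emergence
a_star = 60 - 0.5 * age + rng.normal(0, 2, 55)    # fading redness
slope, t_stat, dof = slope_t_test(age, a_star)
# slope is negative (older wings are less red) with a large |t|,
# the same pattern as the paper's t = -7.461, P < 0.001 on 53 df
```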
Similar colour fading was found as a consequence of direct sun exposure in bees, Bombus huntii, which colour hue was correlated with wing wear and therefore could be a reliable measure of age (Koch et al., 2014). In butterfly wings, age-based colour fading has also been shown in Colias eurytheme, in which fading of structural colour was the most accurate predictor of male age. Females of this species choose their partners based on age, since new males produce more nutritious spermatophores such that colour might be a useful cue for mate choice (Kemp, 2006). It is less clear whether there would be a similar benefit to such age discrimination in Heliconius. Females mate only once or a few times in their lifetime, depending on species, and the first mating occurs soon after eclosion (Walters et al., 2012). In contrast, males can mate throughout their life and there is no evidence that spermatophore quality is influenced by male age, although this has not been directly tested. Male choosiness in Heliconius is well documented (Jiggins, Estrada & Rodrigues, 2004; Estrada & Jiggins, 2008) as the spermatophore represents a considerable nutrient investment, providing the female with amino acids used in egg production (Boggs, 1981). Females are also likely to be choosy but this has been less well documented. Age, perhaps signalled by colour cues, might be a cue for mate choice in Heliconius and this would be interesting to test explicitly. Dalbosco Dell’Aglio et al. (2017), PeerJ, DOI 10.7717/peerj.3821 4/8 https://peerj.com http://dx.doi.org/10.7717/peerj.3821 Furthermore, H. erato has red lateral filtering pigments which shift red receptor sensitivity, allowing butterflies to distinguish colours in the red–green spectrum with just a single LW-sensitive opsin (Zaccardi et al., 2006; McCulloch, Osorio & Briscoe, 2016). 
This means that Heliconius likely have better abilities to distinguish slight differences in the red colour range as compared to other nymphalids (Zaccardi et al., 2006). It would therefore be interesting to test whether adults can distinguish colours across the range demonstrated here among individuals of different ages. If age were an important trait in sexual selection, perhaps red filtering pigments are in part an adaptation for better mate discrimination in this range. It would be interesting to investigate how these colour differences would be seen through Heliconius vision. It is also interesting to speculate about whether the fading of pigments might influence how predators perceive these butterflies. Although some degree of predator generalization is likely, it is also known that predators can distinguish fairly subtle differences in hue (Langham, 2004). Further experiments would be needed to determine whether colour fading might incur some cost in terms of increased predation. In conclusion, we have demonstrated how colour could be used to estimate age in population structure studies, and provided the groundwork for future studies of the fitness consequences of fading colours in Heliconius for mate choice and mimicry.

ACKNOWLEDGEMENTS
We are very grateful to Adriana Tapia for her help with the butterfly collection, the Smithsonian Tropical Research Institute for logistical assistance, and three anonymous reviewers for valuable comments on the manuscript.

ADDITIONAL INFORMATION AND DECLARATIONS
Funding: This work was supported by the Cambridge Trust (UK) and CAPES (Brazil) to Denise Dalbosco Dell’Aglio, and the Smithsonian Tropical Research Institute (Panama) Short-term Fellowship to Derya Akkaynak. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
Grant Disclosures: The following grant information was disclosed by the authors: Cambridge Trust. CAPES.
Smithsonian Tropical Research Institute.
Competing Interests: The authors declare there are no competing interests.
Author Contributions: • Denise Dalbosco Dell’Aglio conceived and designed the experiments, performed the experiments, analyzed the data, wrote the paper, prepared figures and/or tables, reviewed drafts of the paper. • Derya Akkaynak conceived and designed the experiments, performed the experiments, analyzed the data, contributed reagents/materials/analysis tools, wrote the paper, reviewed drafts of the paper. • W. Owen McMillan contributed reagents/materials/analysis tools, reviewed drafts of the paper. • Chris D. Jiggins conceived and designed the experiments, wrote the paper, reviewed drafts of the paper.
Data Availability: The following information was supplied regarding data availability: The code used is already published and its reference is included in the manuscript.
Supplemental Information: Supplemental information for this article can be found online at http://dx.doi.org/10.7717/peerj.3821#supplemental-information.

REFERENCES
Akkaynak D, Treibitz T, Xiao B, Gurkan UA, Allen JJ, Demirci U, Hanlon RT. 2014. Use of commercial off-the-shelf digital cameras for scientific data acquisition and scene-specific color calibration. Journal of the Optical Society of America 31:312–321 DOI 10.1364/JOSAA.31.000312. Aymone ACB, Valente VLS, De Araújo AM. 2013. Ultrastructure and morphogenesis of the wing scales in Heliconius erato phyllis (Lepidoptera: Nymphalidae): what silvery/brownish surfaces can tell us about the development of color patterning? Arthropod Structure & Development 42:349–359 DOI 10.1016/j.asd.2013.06.001. Benson WW. 1972. Natural selection for Müllerian mimicry in Heliconius erato in Costa Rica. Science 176:936–939 DOI 10.1126/science.176.4037.936. Boggs CL. 1981.
Selection pressures affecting male nutrient investment at mating in heliconiine butterflies. Evolution 35:931–940 DOI 10.1111/j.1558-5646.1981.tb04959.x. Crane J. 1954. Spectral reflectance characteristics of butterflies (Lepidoptera) from Trinidad, B. W. I. Zoologica 39:85–112. Ehrlich PR, Gilbert LE. 1973. Population structure and dynamics of the tropical butterfly Heliconius ethilla. Biotropica 5:69–82 DOI 10.2307/2989656. Estrada C, Jiggins CD. 2008. Interspecific sexual attraction because of convergence in warning colouration: is there a conflict between natural and sexual selection in mimetic species? Journal of Evolutionary Biology 21:749–760 DOI 10.1111/j.1420-9101.2008.01517.x. Gilbert LE, Forrest HS, Schultz TD, Harvey DJ. 1988. Correlations of ultrastructure and pigmentation suggest how genes control development of wing scales of Heliconius butterflies. Journal of Research on the Lepidoptera 26:141–160. Jiggins CD, Estrada C, Rodrigues A. 2004. Mimicry and the evolution of premating isolation in Heliconius melpomene Linnaeus. Journal of Evolutionary Biology 17:680–691 DOI 10.1111/j.1420-9101.2004.00675.x. Jiggins CD, Naisbit RE, Coe RL, Mallet J. 2001. Reproductive isolation caused by colour pattern mimicry. Nature 411:302–305 DOI 10.1038/35077075. Kemp DJ. 2006. Heightened phenotypic variation and age-based fading of ultraviolet butterfly wing coloration. Evolutionary Ecology Research 8:515–527. Koch JB, Love B, Klinger E, Strange JP.
2014. The effect of photobleaching on bee (Hymenoptera: Apoidea) setae color and its implications for studying aging and behavior. Journal of Melittology 38:1–10. Langham GM. 2004. Specialized avian predators repeatedly attack novel color morphs of Heliconius butterflies. Evolution 58:2783–2787 DOI 10.1111/j.0014-3820.2004.tb01629.x. Llaurens V, Joron M, Théry M. 2014. Cryptic differences in colour among Müllerian mimics: how can the visual capacities of predators and prey shape the evolution of wing colours? Journal of Evolutionary Biology 27:531–540 DOI 10.1111/jeb.12317. McCulloch KJ, Osorio D, Briscoe AD. 2016. Sexual dimorphism in the compound eye of Heliconius erato: a nymphalid butterfly with at least five spectral classes of photoreceptor. Journal of Experimental Biology 219:2377–2387 DOI 10.1242/jeb.136523. Mérot C, Le Poul Y, Théry M, Joron M. 2016. Mimicry refinement: phenotypic variations tracking the local optimum. Journal of Animal Ecology 85:1056–1069 DOI 10.1111/1365-2656.12521. Merrill RM, Van Schooten B, Scott JA, Jiggins CD. 2011. Pervasive genetic associations between traits causing reproductive isolation in Heliconius butterflies. Proceedings of the Royal Society B 278:511–518 DOI 10.1098/rspb.2010.1493. Naisbit RE, Jiggins CD, Mallet J. 2001. Disruptive sexual selection against hybrids contributes to speciation between Heliconius cydno and Heliconius melpomene. Proceedings of the Royal Society B 268:1849–1854 DOI 10.1098/rspb.2001.1753. Niva CC, Takeda M. 2002. Color changes in Halyomorpha brevis (Heteroptera: Pentatomidae) correlated with distribution of pteridines: regulation by environmental and physiological factors. Comparative Biochemistry and Physiology Part B 132:653–660 DOI 10.1016/S1096-4959(02)00081-7. Pardo-Diaz C, Jiggins CD. 2014. Neighboring genes shaping a single adaptive mimetic trait. Evolution & Development 16:3–12 DOI 10.1111/ede.12058.
Pardo-Diaz C, Salazar C, Baxter SW, Merot C, Figueiredo-Ready W, Joron M, McMillan WO, Jiggins CD. 2012. Adaptive introgression across species boundaries in Heliconius butterflies. PLoS Genetics 8:e1002752 DOI 10.1371/journal.pgen.1002752. Reed RD, Mcmillan WO, Nagy LM. 2008. Gene expression underlying adaptive variation in Heliconius wing patterns: non-modular regulation of overlapping cinnabar and vermilion prepatterns. Proceedings of the Royal Society B 275:37–46 DOI 10.1098/rspb.2007.1115. Reed RD, Papa R, Martin A, Hines HM, Counterman BA, Pardo-Diaz C, Jiggins CD, Chamberlain NL, Kronforst MR, Chen R, Halder G, Nijhout HF, McMillan WO. 2011. Optix drives the repeated convergent evolution of butterfly wing pattern mimicry. Science 333:1137–1141 DOI 10.1126/science.1208227. Stevens M, Párraga CA, Cuthill IC, Partridge JC, Troscianko TS. 2007. Using digital photography to study animal coloration. Biological Journal of the Linnean Society 90:211–237 DOI 10.1111/j.1095-8312.2007.00725.x. Walters JR, Stafford C, Hardcastle TJ, Jiggins CD. 2012. Evaluating female remating rates in light of spermatophore degradation in Heliconius butterflies: pupal-mating monandry versus adult-mating polyandry. Ecological Entomology 37:257–268 DOI 10.1111/j.1365-2311.2012.01360.x. Wyszeck G, Stiles WS. 1982. Color science: concepts and methods, quantitative data and formulae. New York: Wiley.
Zaccardi G, Kelber A, Sison-Mangus MP, Briscoe AD. 2006. Color discrimination in the red range with only one long-wavelength sensitive opsin. Journal of Experimental Biology 209:1944–1955 DOI 10.1242/jeb.02207.

work_yi4esd6kavdmpffazlbglaalbi ---- Basrah Journal of Surgery, Letter to the Editor. Bas J Surg, March, 16, 2010

DIGITAL PHOTOGRAPHY IN THORACIC AND CARDIOVASCULAR SURGERY
Abdulsalam Y Taha* & Amanj Kamal#
*Professor and Head of Department of Thoracic & Cardiovascular Surgery, College of Medicine, University of Sulaimania, IRAQ. E-mail: salamyt_1963@hotmail.com
#MB,ChB, Senior House Officer, Sulaimania Teaching Hospital, IRAQ

Digital photography has become a practical alternative to film photography for documentation, communication and education in medical practice. We present our experience with digital photography at the Department of Thoracic and Cardiovascular Surgery, Sulaimania Teaching Hospital. As an excellent method of medical documentation, the digital camera is recommended to be available in the ward, operation theatre and bronchoscopy unit.

Photography has long been used for medical documentation. The 35-mm single lens reflex camera has been the traditional device for photo documentation in surgical practice. The widespread availability of computer hardware and software, digital cameras and the internet has enhanced the advantages and ease of using digital photography [1].
Digital photography has become a practical alternative to film photography for documentation, communication and education in medical practice [2]. The Department of Thoracic and Cardiovascular Surgery in Sulaimania Teaching Hospital was established in November 2003. Since then, a large number of patients with a variety of elective and emergency conditions have been admitted to and managed in the department. Whenever possible, documentation was carried out using a Kodak digital zoom 6440 camera. Photography was used in the bronchoscopy unit, the ward and the operation theatre.
Digital photography and flexible bronchoscopy: an increasing number of patients with different thoracic lesions were bronchoscoped using a flexible bronchoscope (Olympus BF Type 20). A Kodak digital zoom 6440 camera held by an assistant is brought into contact with the proximal end of the bronchoscope whenever an interesting abnormality is seen, and a still photograph and sometimes a video record is obtained. The photos are stored in the computer, edited if necessary, and used for documentation, research and teaching.
Digital photography in the ward: the digital camera was also used to document important physical signs pre- and post-operatively.
Intra-operative digital photography: still photographs and sometimes a video record of certain operative procedures were similarly obtained. A large pool of interesting digital photographs and video records has been obtained since 2003. These photographs were an excellent teaching aid for both under- and postgraduate students. They were used in many presentations in local and national medical conferences and were really exciting. Reports of rare cases were nicely demonstrated using this technique.
Several clinical articles were written in our department which included digital photographs of the relevant patients. Publication in medical journals was possible by online uploading. Communication with colleagues in the same specialty was possible by sharing these photos through the Internet.
Selected photographs: Normal bronchoscopic appearance. Case: Right-sided bronchogenic carcinoma. Case: Right ventricular wound. Case: Pulmonary trunk wound.
Discussion
The digital camera has the advantages of producing instant photos of high quality without a film. The image quality has reached that of conventionally taken photography, and for most applications, the digital origin of the photo is no longer discernible [3]. Digital cameras can be used to document preoperative and postoperative condition, intra-operative findings and imaging studies. Images may be immediately viewed on the liquid crystal display (LCD) screen of the camera and reshot if necessary [2]. Photographic image files may be stored in the camera on a floppy diskette or compact flash card and transferred to a computer. The images may be manipulated using photo-editing software programs and incorporated into digital presentations. The digital photographs may be transmitted to others using e-mail and Internet web sites [2]. Using digital operative records along with digital photography, surgeons can document their procedures more accurately and efficiently than by the conventional method (hand-writing). A complete digital operative record is not only accurate but also time-saving [4]. We here highlight the usefulness of digital photography in the practice of thoracic and cardiovascular surgery. From the very beginning, we have realized the importance of photographic medical documentation for many reasons:
1. Varieties of elective and emergent surgical conditions were encountered.
Many of them were interesting or rare; thus proper documentation was worthwhile.
2. Digital photographs were used as an excellent teaching aid for both under- and postgraduate students. Many lectures were prepared from these photographs.
3. These photos were used in many presentations in local and national medical conferences and were really exciting.
4. Rare case reports were nicely demonstrated using this technique.
5. Medico-legal discussions were common about cases like a severely injured limb which requires a primary amputation for irreversible ischemia. It was sometimes very difficult to describe such an injury using the conventional method of words. However, a digital photograph obtained preoperatively and/or intra-operatively could save words and effort and protect the surgeon during any medico-legal discussion.
6. Communication with colleagues in the same specialty was possible by sharing these photos through the Internet.
7. Several clinical articles were written in our department which included digital photographs of the relevant patients.
8. Publication in medical journals was possible by online uploading.
9. Shortly after establishing our department, a unit of broncho-esophagoscopy was opened. The flexible fiber optic bronchoscope which we initially used was a conventional bronchoscope (Olympus BF Type 20: without a video). We solved the problem of endoscopic documentation by using a digital camera (Kodak). Still photographs and sometimes a video record of reasonably good quality were obtained.
In conclusion, digital photography is an excellent method of medical documentation. It is an indispensable technique for the busy surgeon engaged in clinical teaching, research and practice.
Basrah Journal of Surgery (Bas J Surg), March, 16, 2010 — Letter to the Editor

DIGITAL PHOTOGRAPHY IN THORACIC AND CARDIO-VASCULAR SURGERY

Abdulsalam Y Taha* & Amanj Kamal#

*Professor and Head of Department of Thoracic & Cardiovascular Surgery, College of Medicine, University of Sulaimania, IRAQ. E-mail: salamyt_1963@hotmail.com
#MB,ChB, Senior House Officer, Sulaimania Teaching Hospital, IRAQ.

Digital photography has become a practical alternative to film photography for documentation, communication and education in medical practice.
We present our experience with digital photography at the Department of Thoracic and Cardiovascular Surgery, Sulaimania Teaching Hospital. As an excellent method of medical documentation, the digital camera is recommended to be available in the ward, operation theatre and bronchoscopy unit.

Photography has long been used for medical documentation. The 35-mm single lens reflex camera has been the traditional device for photographic documentation in surgical practice. The widespread availability of computer hardware and software, digital cameras and the Internet has enhanced the advantages and ease of using digital photography [1]. Digital photography has become a practical alternative to film photography for documentation, communication and education in medical practice [2].

Since the establishment of our department in November 2003, a large number of patients with a variety of elective and emergency conditions have been admitted to and managed in the Department of Thoracic and Cardiovascular Surgery, Sulaimania Teaching Hospital. Whenever possible, documentation was carried out using a Kodak digital zoom 6440 camera. Photography was used in the bronchoscopy unit, the ward and the operation theatre.

Digital photography and flexible bronchoscopy: an increasing number of patients with different thoracic lesions underwent bronchoscopy with a flexible bronchoscope (Olympus BF Type 20). The Kodak digital zoom camera, held by an assistant, is brought into contact with the proximal end of the bronchoscope whenever an interesting abnormality is seen, and a still photograph and sometimes a video record is obtained. The photos are stored on a computer, edited if necessary, and used for documentation, research and teaching.

Digital photography in the ward: the digital camera was also used to document important physical signs pre- and post-operatively.
Intra-operative digital photography: still photographs and sometimes a video record of certain operative procedures were similarly obtained. A large pool of interesting digital photographs and video records has been accumulated since 2003. These photographs were an excellent teaching aid for both undergraduate and postgraduate students. They were used in many presentations at local and national medical conferences and attracted great interest. Reports of rare cases were vividly illustrated using this technique. Several clinical articles written in our department included digital photographs of the relevant patients. Publication in medical journals was possible by online uploading. Communication with colleagues in the same specialty was possible by sharing these photos through the Internet.

Selected photographs — normal bronchoscopic appearance; case: right-sided bronchogenic carcinoma; case: right ventricular wound; case: pulmonary trunk wound.

Discussion

The digital camera has the advantage of producing instant, high-quality photos without film. The image quality has reached that of conventional film photography, and for most applications the digital origin of the photo is no longer discernible [3]. Digital cameras can be used to document preoperative and postoperative conditions, intra-operative findings and imaging studies. Images may be immediately viewed on the liquid crystal display (LCD) screen of the camera and reshot if necessary [2]. Image files may be stored in the camera on a floppy diskette or compact flash card and transferred to a computer. The images may be manipulated using photo-editing software and incorporated into digital presentations. Digital photographs may be transmitted to others using e-mail and Internet web sites [2]. Using digital operative records along with digital photography, surgeons can document their procedures more accurately and efficiently than by the conventional method (handwriting).
A complete digital operative record is not only accurate but also time-saving [4]. We highlight here the usefulness of digital photography in the practice of thoracic and cardiovascular surgery. From the very beginning, we realized the importance of photographic medical documentation for many reasons:

1. A variety of elective and emergency surgical conditions were encountered. Many of them were interesting or rare; proper documentation was therefore worthwhile.
2. Digital photographs were an excellent teaching aid for both undergraduate and postgraduate students. Many lectures were prepared from these photographs.
3. These photos were used in many presentations at local and national medical conferences and attracted great interest.
4. Rare case reports were vividly illustrated using this technique.
5. Medico-legal discussions were common in cases such as a severely injured limb requiring primary amputation for irreversible ischemia. It was sometimes very difficult to describe such an injury in words alone; a digital photograph obtained preoperatively and/or intra-operatively could save words and effort and protect the surgeon during any medico-legal discussion.
6. Communication with colleagues in the same specialty was possible by sharing these photos through the Internet.
7. Several clinical articles written in our department included digital photographs of the relevant patients.
8. Publication in medical journals was possible by online uploading.
9. Shortly after our department was established, a broncho-esophagoscopy unit was opened. The flexible fibre-optic bronchoscope we initially used was a conventional instrument (Olympus BF Type 20, without video). We solved the problem of endoscopic documentation by using a digital camera (Kodak); still photographs and sometimes a digital video record of reasonably good quality were obtained.
In conclusion, digital photography is an excellent method of medical documentation. It is an indispensable technique for the busy surgeon engaged in clinical teaching, research and practice. In comparison with conventional photography, the digital version is cheaper, does not use film, and provides instant pictures of good resolution which can be saved on the computer or storage media, sent to colleagues and modified by software if necessary. It is recommended that a digital camera be available in the ward or clinic for documentation of interesting physical signs and photography of imaging studies. It should also be present in the operation theatre for intra-operative still photography and video recording of operative procedures, and in the bronchoscopy unit, if no video-bronchoscope is present, for documentation of important bronchoscopic findings.

References
1. Kokoska MS, Thomas JP. 35-mm and digital photography for the facial plastic surgeon. Facial Plast Surg. 2000 Feb;8(1):35-44, viii.
2. Talmor M. Digital photography in orthopaedic surgery. Plast Reconstr Surg. 2001 Mar;107(3):896-7.
3. Bengel W. Digital photography in the dental practice - an overview (II). Int J Comput Dent. 2000 May;3(2):121-32.
4. Houkin K, Kuroda S, Abe H. Operative record using intra-operative digital data in neurosurgery. Acta Neurochir (Wien). 2000;142(8):913-9.

work_yibg5x2lgzhrdk5fymtczdn2bq ---- Inter- and intra-observer agreement in wound assessment

TITLE PAGE
1. Title: 'Validity of diagnosis of superficial infection of laparotomy wounds using digital photography: inter and intra observer agreement amongst surgeons'
2. Running head: Digital photography for assessment of wound infection
3. Authors: Gabriëlle H. van Ramshorst1, MD, Wietske W. Vrijland2, MD, PhD, Erwin van der Harst3, MD, PhD, Wim C.J. Hop4, PhD, Dennis den Hartog1, MD, PhD, Johan F.
Lange1, MD, PhD, Professor of Surgery
4. Name of departments and institutions: 1 Erasmus University Medical Center, Department of Surgery, Rotterdam, Netherlands; 2 Sint Franciscus Gasthuis, Department of Surgery, Rotterdam, Netherlands; 3 Maasstad Hospital, Department of Surgery, Rotterdam, Netherlands; 4 Erasmus University Medical Center, Department of Biostatistics, Rotterdam, Netherlands.
5. Disclaimers: none
6. Corresponding author: Gabriëlle H. van Ramshorst, Erasmus MC, Department of Surgery, Room Z-836, P.O. Box 2040, 3000 CA Rotterdam, Netherlands, email: g.vanramshorst@erasmusmc.nl, telephone: +31 10 7034631, fax: +31 10 7044752
7. Request for reprints: see corresponding author
8. Source(s) of support: none

ABSTRACT
Background: The validity of digital photography for the diagnosis of wound infection is unknown. We aimed to assess its validity by measuring inter- and intra-observer agreement.
Methods: Laparotomy wounds were photographed daily in a prospective study. Four surgeons independently assessed photographs of 50 wounds opened for infection within hours after photography and of 50 normally healed wounds. Surgeons recorded the presence of infection and the treatment for each wound. Paired kappa values were calculated. Intra-observer agreement was measured after 4-6 weeks.
Results: Mean specificity with regard to infection was 97% (94-100%) and mean sensitivity was 42% (32-48%). Paired kappa values ranged from 0.54 to 0.68 for infection and from 0.15 to 0.72 for treatment. Kappa values for intra-observer agreement on infection ranged from 0.43 to 0.76.
Conclusions: Inter- and intra-observer agreement on the diagnosis of superficial infection with digital photography are moderate, but specificity is high. Findings at physical examination should also be reported.
KEY WORDS: wound, digital photography, wound infection, surgical site infection, inter observer agreement, intra observer agreement, abdominal, surgery.

TEXT

INTRODUCTION
In recent years, the use of digital photography has become increasingly popular and has been highlighted in the literature for documentation and evaluation of wound healing progression, and for its usefulness in telemedicine for diagnosis in dermatology and vascular surgery. [1-3] For chronic and burn wounds in particular, photography can be used to assess treatment results and support continuation or alteration of the treatment strategy. [4] One of the most common complications of surgery is wound infection, affecting approximately 10% of all abdominal wounds. [5,6] Although the validity of digital photography has been reported for a number of indications, its validity for diagnosing infection in surgical wounds remains unclear. [3] The diagnosis of infection has been based on the symptoms 'rubor', 'dolor', 'calor', 'tumor' and 'functio laesa' ever since the time of Hippocrates. Wound infection is diagnosed by doctors based on subjective and objective criteria and on experience. The international gold standard for the diagnosis of surgical wound infection is represented by the criteria for surgical site infection (SSI) as defined by the Centers for Disease Control and Prevention (CDC). [7] The surgeon's judgment is, according to the CDC, very important for the diagnosis of superficial SSI. In several studies in which wound photography was used for assessment of healed lacerations and incisions, moderate to good inter- and intra-observer agreement was found on wound appearance scales. [8-10] Few reports exist on agreement amongst surgeons with regard to wound assessment.
In the literature, kappa values for inter-observer agreement on infection vary from 0.08 to 1.00. [1,11,12] Unfortunately, neither the absolute numbers of infected wounds nor the levels of intra-observer agreement were reported in these studies. [1,11,12] The goal of our study was to measure the degree of inter- and intra-observer agreement on the diagnosis of superficial infection of laparotomy wounds using digital photography, thereby assessing its validity.

MATERIALS AND METHODS
Between May 2007 and January 2009, a total of 1000 patients were included in a prospective observational clinical study on surgical wound healing. After informed consent was obtained, the abdominal wound was photographed on a daily basis (including weekends and holidays) until discharge or until the 21st postoperative day using a Fujifilm (Tokyo, Japan) FinePix S5700 digital camera (7.1 megapixels, 10x optical zoom) with standardized multi-auto focus and macro settings. Each day, two photographs (resolution 3072 x 2304 pixels) were taken according to a standardized protocol: one of the entire abdomen from the sternum to the pubic bone at a distance of approximately 40 cm, and one close-up photograph of the wound at a distance of approximately 20 cm. Photographs were downloaded onto a personal computer and saved in JPEG (Joint Photographic Experts Group) format, coded for patient, postoperative day and sequence number. The presence of signs of infection was documented using a standard procedure, and relevant data on wound infection were retrieved prospectively from hospital and nursing charts. Four gastrointestinal surgeons (A, B, C, D) with clinical experience ranging from 10 to 30 years independently assessed 100 randomly ordered sets of abdominal wound photographs, each consisting of one overview photograph and one close-up.
Fifty of these were photographs of wounds that had been opened within hours on suspicion or presence of infection and had met the CDC criteria for superficial SSI. According to the CDC, the following criteria have to be met for the diagnosis of a superficial wound infection (SSI) [7]: 'Infection occurs within 30 days after the operation and infection involves only skin or subcutaneous tissue of the incision and at least one of the following:
1. Purulent drainage, with or without laboratory confirmation, from the superficial incision.
2. Organisms isolated from an aseptically obtained culture of fluid or tissue from the superficial incision.
3. At least one of the following signs or symptoms of infection: pain or tenderness, localized swelling, redness, or heat, and superficial incision is deliberately opened by surgeon, unless incision is culture-negative.
4. Diagnosis of superficial incisional SSI by the surgeon or attending physician.'
Photographs of infected wounds were matched by postoperative day, type of incision and skin color with fifty sets of photographs of wounds that had healed without complications, as verified by surveillance by means of outpatient clinic visits after discharge and review of hospital charts, discharge letters and complication registration systems. Surgeons were requested to read the CDC criteria for superficial SSI before all sessions and to apply these criteria where possible. Wound pain scores (visual analogue scale ranging from 0, no pain, to 100, worst imaginable pain) of the current and previous day, morning temperature and postoperative day were noted for each wound. All photographs were viewed on one laptop using standardized settings, with the possibility to adjust the viewing screen (Toshiba A100 portable personal computer, 17 inch screen).
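The capture protocol described above codes each JPEG for patient, postoperative day and sequence number, with an overview and a close-up view per day. The paper does not give the exact coding format, so the helper below is a purely illustrative sketch of how such a scheme might be implemented:

```python
def photo_filename(patient_id: int, postop_day: int, seq: int, view: str) -> str:
    """Encode patient, postoperative day and sequence number in a JPEG
    filename. The study states only that photographs were 'coded for
    patient, postoperative day and number of sequence'; this exact
    layout is a hypothetical example, not the authors' format.

    view: 'overview' (whole abdomen, ~40 cm) or 'closeup' (wound, ~20 cm).
    """
    if view not in ("overview", "closeup"):
        raise ValueError("view must be 'overview' or 'closeup'")
    # Zero-padded fields keep files sortable by patient, then day, then shot.
    return f"p{patient_id:04d}_d{postop_day:02d}_s{seq:02d}_{view}.jpg"
```

Zero-padding the numeric fields makes plain lexicographic sorting reproduce the acquisition order, which simplifies the later random re-ordering of the 100 photograph sets.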
Surgeons were requested to record for all wounds whether or not superficial infection was present and whether the wound should be treated conservatively (i.e., remain closed) or be opened (either partially or fully). Four to six weeks after the initial sessions, all photographs were placed in a different, random order and were reevaluated in order to measure intra-observer agreement for all surgeons. Reevaluations took place in the same room at approximately the same time of day as the first evaluations. Statistical analysis was performed by calculating paired kappa values with 95% confidence intervals (CI, calculated as kappa ± 1.96 × SE) between all observer pairs (A-B, A-C, A-D, B-C, B-D, C-D) for inter-observer agreement. For each observer, intra-observer agreement was measured by calculating kappa values (including 95% CI). In general, kappa values of 0.80 or over are considered to represent a good level of agreement. [13] Sensitivity and specificity were calculated for each observer.

RESULTS
On average, abdominal wounds had been opened on the seventh postoperative day (range 3-15). Mean specificity with regard to wound infection was 97% (94-100%) and mean sensitivity was 42% (32-48%) (Table 1). Paired kappa values with regard to wound infection varied between 0.54 and 0.68 (Table 2). Agreement on treatment (conservative management or opening of the wound) was present in 76 of 100 wounds (kappa values: 0.15, 0.17, 0.20, 0.72, 0.63, 0.68). The diagnosis of wound infection was unanimous in 12 of 50 cases. In some cases, surgeons preferred not to open wounds in the presence of infection: in 13 patients symptoms of infection were considered minimal by one or more surgeons, and in five cases spontaneous drainage of pus was present, so further opening of the wound was not considered compulsory.
Surgeon A was the only surgeon to report low morning temperature as a reason for not opening infected wounds. None of the additional information given on morning temperature or wound pain scores was significantly associated with wound infection in this group of patients (all p>0.05). Kappa values for intra-observer agreement varied between 0.43-0.76 for wound infection and 0.52-0.87 for wound treatment (Table 3).

DISCUSSION
Assessment of wounds is normally based on a combination of subjective and objective information, visual and physical findings, and experience. This study has demonstrated that inter-observer agreement on infection of laparotomy wounds is moderate amongst surgeons when using digital photography. Inter-observer agreement on the treatment of wound infection is also moderate and shows high variability amongst different surgeons. Moreover, intra-observer agreement on wound infection and treatment differs amongst surgeons. This implies that wounds are possibly assessed and treated differently depending on which individual is supervising care for operated patients. Infection rates as collected in several national surveillance programmes might therefore vary between hospitals partly as a result of differences in judgment amongst doctors. [14-18] Standard protocols for the assessment of acute wounds such as ASEPSIS and the Southampton Wound Assessment Scale are time-consuming and have not been implemented widely. [14,19,20] Therefore, wound assessment is still left to individual surgeons or attending doctors and their experience. The predictive value of the criteria for wound infection used in the aforementioned protocols is unclear. The European Wound Management Association (EWMA) reported the results of a Delphi approach to identify criteria for SSI in various types of wounds.
[21] In this Delphi approach for the acute wound, 8-10 panel members were asked to list relevant clinical indicators of infection. Panel members were offered the opportunity to review scores for the most important criteria in view of the position of the group as a whole (the 'group score'). Cellulitis and pus/abscess were considered the most important factors, followed by delayed healing, erythema with or without induration, haemo- or seropurulent exudate, malodour and wound breakdown/enlargement. Assumed early signs of infection included an increase in local skin temperature, oedema, serous exudate with erythema, swelling with an increase in exudate volume, and unexpected pain/tenderness. [21] The predictive value of these signs is as yet unknown for acute wounds. Gardner et al. found positive predictive values of 1.00 for increasing pain and wound breakdown in chronic wounds. [22] The sensitivity of classic signs of infection in chronic wounds showed large variability among items: heat and purulent exudate 0.18, increasing pain 0.36, erythema 0.55 and edema 0.64. [22] Moreover, from the few studies that exist on inter-observer agreement in wound assessment, it appears that kappa values for many of the important variables in the Delphi approach were not high. Hollander et al. reported inter-observer concordances (kappa values) of 0.51 for erythema, 0.39 for warmth, 0.38 for tenderness and 1.00 for infection in 100 wounds registered in the emergency department by two independent doctors. [11] Allami et al. reported inter-observer variation in the evaluation of 50 lower limb arthroplasty wounds between four observers. [12] In this study, poor inter-observer agreement (kappa values <0.40) was reported for tenderness, localised swelling, redness and heat; moderate agreement for pain (kappa values 0.60-0.80); and good agreement (kappa values 0.80-1.00) for clinical diagnosis of superficial SSI, purulent drainage, dehiscence and fever.
In a study by Wirthlin et al., agreement amongst surgeons in the 'remote' assessment of digital photographs of 38 vascular surgery wounds, similar to our study, proved lowest for the aspects cellulitis/infection and erythema (kappa values of 0.08 and 0.28, respectively). [1] The mean kappa value of 0.62 for inter-observer agreement on wound infection found in our study may be fair considering the results from previous studies, in which presumably fewer infected wounds were included. In our study, digital wound photographs were assessed with additional information available on wound pain (expressed as visual analogue scale scores), postoperative day and morning temperature, which was thought to be of additional value for the diagnosis of infection and to better simulate the clinical setting. The two-dimensional nature of digital photographs hampered assessment of swelling of the wound edges. Palpation of the wound was an aspect of the regular physical examination of wounds that the surgeons participating in this study considered an omission. Palpation can provide valuable information on expression of wound pain and pus production during pressure exertion, and can elicit increased capillary refill. Digital photography, even with the provided additional information, seems adequate for the diagnosis of 'normal wound healing' (i.e., no infection), based on a high specificity of 97%, but at a mean sensitivity of 42% it is not sensitive enough to diagnose infection in all wounds. Beyond the discussion on the use and validity of digital photography in wound assessment, it appears important that more objective criteria for wound infection be defined to create more uniformity in the diagnosis of wound infection.
For this reason, more research is needed to evaluate the predictive value of wound characteristics for wound infection, such as wound temperature and production of exudate, to be incorporated in a standardized wound appraisal tool. Structural assessment of wounds, combined with on-site registration of SSI and plenary discussion, will undoubtedly result in more uniformity amongst surgeons and higher reliability of reported infections and infection rates.

CONCLUSIONS
Inter- and intra-observer agreement on the diagnosis of wound infection when using digital photography were both moderate, but specificity was very high. Physical examination, including palpation, appeared of high additional value to digital photography in the assessment of laparotomy wounds and should therefore be documented in detail. We recommend that digital photographs be accompanied by this information in future studies on 'electronic' wound assessment. Furthermore, we feel that more objective criteria for wound assessment and knowledge of the predictive value of wound characteristics for infection are needed.

ACKNOWLEDGEMENTS
We thank Dr. W.E. Tuinebreijer for his thoughtful review of the manuscript.

REFERENCES
1. Wirthlin DJ, Buradagunta S, Edwards RA, Brewster DC, Cambria RP, Gertler JP, et al. Telemedicine in vascular surgery: feasibility of digital imaging for remote management of wounds. J Vasc Surg. 1998;27(6):1089-99; discussion 1099-1100.
2. Kvedar JC, Edwards RA, Menn ER, Maofid M, Gonzalez E, Dover J, et al. The substitution of digital images for dermatologic physical examination. Arch Dermatol. 1997;133(2):161-7.
3. Baumgarten M, Margolis DJ, Selekof JL, Moye N, Jones PS, Shardell M. Validity of pressure ulcer diagnosis using digital photography. Wound Repair Regen. 2009;17(2):287-90.
4. Molnar JA, Lew WK, Rapp DA, Gordon ES, Voignier D, Rushing S, et al. Use of standardized, quantitative digital photography in a multicenter web-based study. Eplasty. 2009;9:e4.
5. Reid R, Simcock JW, Chisholm L, Dobbs B, Frizelle AA. Postdischarge clean wound infections: incidence underestimated and risk factors overemphasized. ANZ J Surg. 2002;72:339-43.
6. Coello R, Charlett A, Wilson J, Ward V, Pearson A, Borriello P. Adverse impact of surgical site infections in English hospitals. J Hosp Infect. 2005;60:93-103.
7. Mangram AJ, Horan TC, Pearson ML, Silver LC, Jarvis WR. Guideline for prevention of surgical site infection. Hospital Infection Control Practices Advisory Committee. Infect Control Hosp Epidemiol. 1999;20(4):250-78.
8. Quinn JV, Drzewiecki AE, Stiell G, Elmslie TJ. Appearance scales to measure cosmetic outcomes of healed lacerations. Am J Emerg Med. 1995;13(2):229-31.
9. Singer AJ, Arora B, Dagum A, Valentine S, Hollander JE. Development and validation of a novel scar evaluation scale. Plast Reconstr Surg. 2007;120(7):1892-7.
10. Quinn JV, Wells GA. An assessment of clinical wound evaluation scales. Acad Emerg Med. 1998;5(6):583-6.
11. Hollander JE, Singer AJ, Valentine S, Henry MC. Wound registry: development and validation. Ann Emerg Med. 1995;25(5):675-85.
12. Allami MK, Jamil W, Fourie B, Ashton V, Gregg PJ. Superficial incisional infection in arthroplasty of the lower limb. Interobserver reliability of the current diagnostic criteria. J Bone Joint Surg. 2005;87(9):1267-71.
13. Sackett DL, Haynes RB, Guyatt GH, Tugwell P. Clinical epidemiology: a basic science for clinical medicine. 2nd ed. Boston: Little, Brown and Company; 1991.
14. Bruce J, Russell EM, Mollison J, Krukowski ZH. The measurement and monitoring of surgical adverse events. Health Technol Assess. 2001;5(22):1-94.
15. Centers for Disease Control and Prevention: National Nosocomial Infections Surveillance System (NNIS), USA. http://www.cdc.gov/ncidod/dhqp/nnis.html.
16. Health Protection Agency (HPA): Nosocomial Infection National Surveillance Service (NINSS), England. http://www.hpa.org.uk.
17. Scientific Institute of Public Health: National Programme for Surveillance of Hospital Infections (NSIH), Belgium. http://www.iph.fgov.be.
18. Dutch Institute for Healthcare Improvement (CBO), Netherlands. http://www.cbo.nl.
19. Wilson AP, Treasure T, Sturridge MF, Gruneberg RN. A scoring method (ASEPSIS) for postoperative wound infections for use in clinical trials of antibiotic prophylaxis. Lancet. 1986;1:311-3.
20. Bailey IS, Karran SE, Toyn K, Brough P, Ranaboldo C, Karran SJ. Community surveillance of complications after hernia repair. BMJ. 1992;304:469-71.
21. Cutting KF, White RJ, Mahoney P, Harding KG. Clinical identification of wound infection: a Delphi approach. In: EWMA Position Document. Identifying criteria for wound infection. London: MEP Ltd; 2005.
22. Gardner SE, Frantz RA, Doebbeling BN. The validity of the clinical signs and symptoms used to identify localized chronic wound infection. Wound Repair Regen. 2001;9(3):178-86.
TABLES AND CAPTIONS

TABLE 1
Surgeon | Infection present (N) | Sensitivity | Infection absent (N) | Specificity
A       | 24                    | 48%         | 48                   | 96%
B       | 16                    | 32%         | 48                   | 96%
C       | 22                    | 44%         | 50                   | 100%
D       | 21                    | 42%         | 47                   | 94%
Total   | 50                    | 42% (mean)  | 50                   | 97% (mean)
CAPTION 1: Sensitivity and specificity for infected and non-infected wounds

TABLE 2
Surgeons | Wound infection | 95% CI lower-upper | Treatment | 95% CI lower-upper
A-B      | 0.54            | 0.34-0.73          | 0.15      | 0.00-0.39
A-C      | 0.67            | 0.50-0.84          | 0.17      | 0.00-0.39
A-D      | 0.68            | 0.51-0.85          | 0.20      | 0.00-0.43
B-C      | 0.63            | 0.43-0.82          | 0.72      | 0.54-0.90
B-D      | 0.58            | 0.39-0.78          | 0.63      | 0.42-0.84
C-D      | 0.61            | 0.42-0.79          | 0.68      | 0.49-0.86
CAPTION 2: Paired kappa values for inter observer agreement on wound infection and treatment

TABLE 3
Surgeon | Wound infection | 95% CI lower-upper | Treatment | 95% CI lower-upper
A       | 0.66            | 0.49-0.84          | 0.52      | 0.15-0.89
B       | 0.43            | 0.26-0.61          | 0.53      | 0.31-0.75
C       | 0.74            | 0.57-0.91          | 0.76      | 0.59-0.93
D       | 0.76            | 0.62-0.91          | 0.87      | 0.75-0.99
CAPTION 3: Kappa values for intra observer agreement on wound infection and treatment
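The study's summary statistics can be recomputed from 2x2 counts: sensitivity = TP/(TP+FN), specificity = TN/(TN+FP), and Cohen's kappa = (po - pe)/(1 - pe) with an approximate 95% CI of kappa ± 1.96 × SE, matching the methods section. The Python sketch below uses surgeon A's counts from Table 1 for sensitivity/specificity; the per-pair agreement tables behind Tables 2-3 are not reported in the paper, so the kappa call uses hypothetical counts for illustration only:

```python
import math

def sens_spec(tp, fn, tn, fp):
    """Sensitivity and specificity from a 2x2 diagnostic table."""
    return tp / (tp + fn), tn / (tn + fp)

def cohens_kappa(a, b, c, d):
    """Cohen's kappa for two raters on a binary outcome.

    a: both raters say 'infected'; d: both say 'not infected';
    b, c: the two kinds of disagreement.
    Returns (kappa, CI_low, CI_high) using kappa +/- 1.96 * SE.
    """
    n = a + b + c + d
    po = (a + d) / n                                      # observed agreement
    pe = ((a + b) * (a + c) + (c + d) * (b + d)) / n**2   # chance agreement
    kappa = (po - pe) / (1 - pe)
    se = math.sqrt(po * (1 - po) / n) / (1 - pe)          # approximate SE
    return kappa, kappa - 1.96 * se, kappa + 1.96 * se

# Surgeon A (Table 1): 24 of 50 infected wounds judged infected,
# 48 of 50 normal wounds judged normal.
sens_a, spec_a = sens_spec(tp=24, fn=26, tn=48, fp=2)   # 0.48, 0.96

# Hypothetical agreement counts for one surgeon pair (not from the paper).
kappa, ci_lo, ci_hi = cohens_kappa(a=20, b=4, c=10, d=66)
```

With the hypothetical counts shown, kappa comes out around 0.65, in the same range as the inter-observer values in Table 2; the simple SE formula used here is the common large-sample approximation rather than the exact variance.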
work_yjkzatpptvabhigrkgtxbrjsce ---- ISSN: 2536-9474 (Print), ISSN: 2536-9482 (Online). Original article / FYMJ, Fayoum University Medical Journal, El-Sayed et al., 2019;3(1):60-70.

Vitamin D and Platelet Rich Plasma (PRP) in the treatment of vitiligo

Talal Ahmed Abd El Raheem (1), Basma Hamada Mohammed (2), Noha Tolba Mostafa El-Sayed (3).
(1) Professor of Dermatology, STDs and Andrology, Faculty of Medicine, Fayoum University.
(2) Lecturer of Dermatology, Faculty of Medicine, Fayoum University.
(3) M.B.B.CH.

ABSTRACT
Background: Vitiligo is an autoimmune disorder caused by the destruction of melanocytes in the skin. Vitamin D is a fat-soluble vitamin that controls cell proliferation and differentiation and exerts immunoregulatory activities. Platelet-rich plasma (PRP) may serve as a source of different growth factors to induce pigmentation in vitiligo.
Aim of the work: to explore the effect of intralesional platelet-rich plasma and vitamin D injection in vitiligo patients.
Patients and methods: The study included two groups of stable non-segmental vitiligo patients, with 20 patients in each group. Group A: each patient was injected with PRP intralesionally in one vitiliginous patch, and with PRP together with vitamin D intralesionally in another vitiliginous patch. Group B: each patient was injected with vitamin D intralesionally in one vitiliginous patch, and with PRP together with vitamin D intralesionally in another vitiliginous patch.
Results: Group A showed a highly statistically significant difference between baseline and the 3-month follow-up regarding improvement of pigmentation in both the PRP + vitamin D patches (p < 0.001) and the PRP-only patches (p < 0.001). Group B showed a highly statistically significant difference between baseline and the 3-month follow-up regarding degree of improvement of pigmentation in the PRP + vitamin D patches (p < 0.001), and a statistically significant difference in the vitamin D-only patches (p < 0.05).
Conclusion: PRP injection, either alone or with vitamin D, gave satisfactory results in the treatment of vitiligo. Vitamin D injection alone showed less satisfactory results than the combined treatment regarding degree of improvement.
KEYWORDS: vitiligo, vitamin D, platelet-rich plasma.

INTRODUCTION
Vitiligo is an autoimmune disorder caused by the destruction of melanocytes in the skin [1]. The disease may affect both genders and all skin types and may be associated with systemic autoimmune diseases such as lupus erythematosus, scleroderma, autoimmune thyroiditis and alopecia areata [2]. Vitiligo presents as well-circumscribed, depigmented macules and patches with convex borders, surrounded by normal skin, and is slowly progressive. It can be subdivided into non-segmental vitiligo, segmental vitiligo and unclassified vitiligo [3]. No single therapy for vitiligo produces predictably good results in all patients, and the response to therapy is highly variable [4]. Treatment options include topical treatments (topical corticosteroids, vitamin D analogues and topical calcineurin inhibitors), systemic treatments (corticosteroids and immunosuppressives), phototherapy (narrowband ultraviolet B, psoralen with ultraviolet A), and surgical treatment, in addition to camouflage cosmetics and sunscreens [5]. Vitamin D is a fat-soluble vitamin obtained by humans through diet. It has been used to treat psoriasis, vitiligo and other skin diseases for many years [6]. It controls cell proliferation and differentiation and exerts immunoregulatory activities [1]. Platelet-rich plasma (PRP) is an autologous preparation of platelets in concentrated plasma. Various growth factors are secreted from the α-granules of concentrated platelets activated by aggregation inducers [7]. Various studies have demonstrated the clinical applications and results of PRP in the field of dermatology [8]. PRP has been used to treat acne, scarring and alopecia; it is also effective for skin rejuvenation and tightening around the eyes [9]. The aim of this study was to compare the effect of intralesional PRP and vitamin D injection in vitiligo patients.

PATIENTS AND METHODS
Forty adult patients with localized non-segmental vitiligo were enrolled in this study. They were recruited from the Outpatient Clinic of the Dermatology and Venereology Department, Fayoum University Hospital.
Inclusion criteria:
1. Age above 12 years.
2. Localized stable non-segmental vitiligo.
Exclusion criteria:
1. Patients below 12 years.
2. Pregnant and lactating females.
3. Patients with segmental vitiligo.
4. Patients with Koebner phenomenon.
5. Patients with chronic systemic diseases (anemia, diabetes mellitus, immunosuppression, chronic infection, and other autoimmune diseases such as thyroiditis).
Subjects: The patients were randomly categorized into two groups of 20 patients each. Group A: PRP was injected intradermally using an insulin syringe into the lesion; 0.1 cc of PRP was injected per point, with a spacing of 0.5 cm between injection points, in one patch of the body, and PRP alternating with vitamin D was injected intralesionally in another vitiliginous patch (0.1 cc of vitamin D per point, with 0.5 cm between injection points).
It also revealed a statistically significant difference between baseline and the 3-month follow-up regarding degree of improvement of pigmentation in the vitamin D patch (p < 0.05).
Conclusion: PRP injection, either alone or with vitamin D, gives satisfactory results in the treatment of vitiligo. Vitamin D injection alone showed less satisfactory results than combined treatment regarding degree of improvement.
KEYWORDS: Vitiligo, Vitamin D, Platelet Rich Plasma.

INTRODUCTION
Vitiligo is an autoimmune disorder which is caused by the destruction of melanocytes in the skin. [1] The disease may affect both genders and all skin types and may also be associated with systemic autoimmune diseases such as lupus erythematosus, scleroderma, autoimmune thyroiditis and alopecia areata. [2] Vitiligo presents as well-circumscribed, depigmented macules and patches with convex borders, surrounded by normal skin, and is slowly progressive. It can be subdivided into non-segmental vitiligo, segmental vitiligo and unclassified vitiligo. [3] No single therapy for vitiligo produces predictably good results in all patients, and the response to therapy is highly variable. [4] Treatment options include topical treatments (topical corticosteroids, vitamin D analogues and topical calcineurin inhibitors), systemic treatments (corticosteroids and immunosuppressives), phototherapy (narrowband ultraviolet B, psoralen with ultraviolet A), and surgical treatment, in addition to camouflage cosmetics and sunscreens. [5] Vitamin D is a fat-soluble vitamin obtained by humans through diet. It has been used to treat psoriasis, vitiligo and other skin diseases for many years. [6] It controls cell proliferation and differentiation and exerts immunoregulatory activities.
[1] Platelet-rich plasma (PRP) is an autologous preparation of platelets in concentrated plasma. Various growth factors are secreted from the α-granules of concentrated platelets activated by aggregation inducers. [7] Various studies have proved the clinical applications and results of PRP in the fields of dermatology. [8] PRP has been used to treat acne, scarring, and alopecia. It is also effective for skin rejuvenation and tightening around the eyes. [9] The aim of this study is to compare the effect of intralesional PRP and vitamin D injection in vitiligo patients.

PATIENTS AND METHODS
Forty adult patients with localized non-segmental vitiligo were enrolled in this study. They were recruited from the Outpatient Clinic of the Dermatology and Venereology Department, Fayoum University Hospital.
Inclusion criteria: 1- Age above 12 years. 2- Localized stable non-segmental vitiligo.
Exclusion criteria: 1- Patients below 12 years. 2- Pregnant and lactating females. 3- Patients with segmental vitiligo. 4- Patients with Koebner phenomenon. 5- Patients with chronic systemic diseases (anemia, diabetes mellitus, immunosuppression, chronic infection, and other autoimmune diseases such as thyroiditis).
Subjects:
- The patients were randomly categorized into 2 groups, each including 20 patients.
- Group A: PRP was injected intradermally into the lesion using an insulin syringe; 0.1 cc of PRP was injected per point with a spacing of 0.5 cm between injection points in one patch of the body, and PRP alternating with vitamin D was injected intralesionally in another vitiliginous patch (0.1 cc of vitamin D per point, with 0.5 cm between injection points).
- Group B: Following the same protocol, each patient was injected with 0.1 cc of vitamin D per point with a spacing of 0.5 cm between injection points in one patch of the body, and with PRP alternating with vitamin D intralesionally in another vitiliginous patch.
- Three sessions were done one month apart. Digital photographs were taken before the initial session and after each session with the same camera.
- Follow-up was done after three months to assess results, any recurrence and any associated side effects.
PRP preparation method: To create PRP, 4 to 10 cc of venous blood was collected from the antecubital vein using a 21 G sterile 10 ml syringe under complete aseptic conditions. The whole blood sample was collected in plastic tubes containing sodium citrate (10:1) as an anticoagulant (to bind calcium and prevent initiation of the clotting cascade by preventing the conversion of prothrombin to thrombin). The citrated whole blood was then subjected to two centrifugation steps using an 800 D centrifuge. The initial centrifugation ("soft" spin) was done at 500 rpm for 10 min to separate the plasma and platelets from the red and white cells. The resulting plasma supernatant, which contained the suspended platelets (and may contain a portion of the white cell "buffy coat"), was subjected to a second centrifugation step ("hard" spin) in plain plastic tubes at 2500 rpm for 15 min.
PRP group: Patients received 3 sessions of autologous intradermal PRP injection at one-month intervals. Topical anesthetic cream (Pridocaine®) was applied under occlusion 30 minutes before the session, and the skin was then sterilized with alcohol. The area was injected with a 1 ml insulin syringe at 0.5 cm spacing between injection sites, with a maximum of 1 ml/session. The patients were followed up for 3 months after the last session.
Vitamin D injection: Patients received 3 sessions of intradermal cholecalciferol (vitamin D3, Devarol-S®) injection at one-month intervals.
Topical anesthetic cream (Pridocaine®) was applied under occlusion 30 minutes before the session, and the skin was then sterilized with alcohol. Injection was intralesional at 0.5 cm spacing between injection sites, with a maximum of 1 ml/session using a 1 ml insulin syringe. The patients were followed up once, 3 months after the last session.
Evaluation of the treatment: Photographs were obtained at baseline, before each treatment session, and 3 months after the final treatment. Objective clinical assessments of repigmentation were performed by 2 blinded dermatologists.
Scoring system of degree of repigmentation:
- G0 = no improvement.
- G1 = < 25% repigmentation.
- G2 = 25%-50% repigmentation.
- G3 = 50%-75% repigmentation.
- G4 = > 75% repigmentation.
Pattern of repigmentation: perifollicular, marginal, or mixed.
Patient satisfaction: Patients were asked about their satisfaction with the results of the sessions and their compliance: not satisfied, low satisfaction, or high satisfaction.
Safety assessment: The patients were instructed to report any complications such as erythema, pain, ulceration, burning sensation, ecchymosis, infection, postinflammatory hyperpigmentation, or any allergic manifestations.
Statistical methods: The collected data were coded, entered into a computer, organized and statistically analyzed. Statistical analysis was done using Microsoft Excel version 10 and the Statistical Package for Social Science (SPSS) for Windows, version 25.0. [10] Descriptive statistics: 1- Qualitative (categorical) data were presented as frequency and percentage and compared using a chi-square test. 2- Quantitative data were presented as mean ± SD.
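The chi-square comparison named in the statistical methods can be illustrated on a small contingency table of repigmentation grades. A minimal Python sketch computing Pearson's X² by hand; the counts and the function name are hypothetical, not the study's data:

```python
# Sketch: Pearson's chi-square statistic for an r x k table of observed counts,
# e.g. treatment arms (rows) by repigmentation grade (columns).

def chi_square_statistic(table):
    """table: list of rows of observed counts. Returns Pearson's X^2."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    total = sum(row_totals)
    x2 = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / total
            x2 += (observed - expected) ** 2 / expected
    return x2

# Two hypothetical 20-patient arms across three improvement grades:
observed = [[9, 7, 4],
            [12, 3, 5]]
print(round(chi_square_statistic(observed), 3))   # prints 2.14
```

The resulting statistic would then be compared against the chi-square distribution with (rows − 1)(columns − 1) degrees of freedom (as SPSS does internally) to obtain the p-values reported in the results.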
RESULTS
Group A revealed a highly statistically significant difference between baseline and the 3-month follow-up regarding improvement of pigmentation in the PRP + vitamin D patch (p < 0.001): 45% (9) of patients showed no improvement, 15% (3) showed improvement of less than 25%, 20% (4) showed 25-50% improvement, 15% (3) showed 50-75% improvement and 5% (1) showed improvement of more than 75%.
(Figure: improvement after the 3-month follow-up regarding degree of improvement of pigmentation in the PRP + vitamin D patch.)
No recurrence of depigmentation was observed after 3 months of follow-up.
Group A also revealed a highly statistically significant difference between baseline and the 3-month follow-up regarding degree of improvement of pigmentation in the PRP patch (p < 0.001): 40% (8) of patients showed no improvement, 15% (3) showed improvement of less than 25%, 10% (2) showed 25-50% improvement, 30% (6) showed 50-75% improvement and 5% (1) showed improvement of more than 75%.
(Figure: improvement after the 3-month follow-up regarding improvement of pigmentation in the PRP patch.)
No recurrence of depigmentation was observed after 3 months of follow-up.
Group B revealed a highly statistically significant difference between baseline and the 3-month follow-up regarding degree of improvement of pigmentation in the PRP + vitamin D patch (p < 0.001).
40% (8) of patients showed no improvement, 10% (2) showed improvement of less than 25%, 35% (7) showed 25-50% improvement, 10% (2) showed 50-75% improvement and 5% (1) showed improvement of more than 75%.
(Figure: improvement after the 3-month follow-up regarding improvement of pigmentation in the PRP + vitamin D patch.)
No recurrence of depigmentation was observed after 3 months of follow-up.
Group B revealed a statistically significant difference between baseline and the 3-month follow-up regarding degree of improvement of pigmentation in the vitamin D patch (p < 0.05): 60% (12) of patients showed no improvement, 0% (0) showed improvement of less than 25%, 15% (3) showed 25-50% improvement, 20% (4) showed 50-75% improvement and 5% (1) showed improvement of more than 75%.
(Figure: improvement after the 3-month follow-up regarding degree of improvement of pigmentation in the vitamin D patch.)
No recurrence of depigmentation was observed after 3 months of follow-up.
DISCUSSION
Vitiligo is an autoimmune disorder which is caused by the destruction of melanocytes in the skin. [1] No single therapy for vitiligo produces predictably good results in all patients, and the response to therapy is highly variable. [4] Treatment options include topical treatments (topical corticosteroids, vitamin D analogues and topical calcineurin inhibitors), systemic treatments (corticosteroids and immunosuppressives), phototherapy (narrowband ultraviolet B, psoralen with ultraviolet A), and surgical treatment, in addition to camouflage cosmetics and sunscreens. [5] Vitamin D is a fat-soluble vitamin obtained by humans through diet. It has been used to treat psoriasis, vitiligo and other skin diseases for many years.
[6] It controls cell proliferation and differentiation and exerts immunoregulatory activities. [1] Platelet-rich plasma (PRP) is an autologous blood-derived product enriched in platelets, growth factors, chemokines, and cytokines. It is a reservoir of essential growth factors. Various studies have proved the clinical applications and results of PRP in the fields of dermatology. [8] PRP has been used to treat acne, scarring, and alopecia. It is also effective for skin rejuvenation and tightening around the eyes. [9]
The aim of our study was to explore the effect of intralesional platelet-rich plasma and vitamin D injection in vitiligo patients. Our study included two groups of stable non-segmental vitiligo patients, each including 20 patients.
- Group A included 17 females and 3 males, aged 13-42 years (mean ± SD 23.40 ± 9.25). Using an insulin syringe, PRP was injected intradermally, 0.1 cc per point 0.5 cm apart, in one patch of the body, and PRP alternating with 0.1 cc of vitamin D 0.5 cm apart in another vitiliginous patch.
- Group B included 9 females and 11 males, aged 12-45 years (mean ± SD 24.45 ± 10.72). Using an insulin syringe, vitamin D was injected intradermally, 0.1 cc per point 0.5 cm apart, in one patch of the body, and 0.1 cc of PRP alternating with vitamin D 0.5 cm apart in another vitiliginous patch.
- Three sessions were done one month apart. Digital photographs were taken before the initial session and after each session for clinical evaluation, together with measurement of the degree of improvement and the pattern of repigmentation. Follow-up was done after three months to detect any possible recurrence.
To our knowledge, this is the first study in Egypt to treat vitiligo with combined intralesional injection of PRP and vitamin D.
Group A: In our study, at the 3-month follow-up, group A showed 55% improvement in the PRP and vitamin D patches and 60% improvement in the PRP patches. There was a highly statistically significant difference between baseline and the 3-month follow-up regarding improvement of pigmentation in both the PRP + vitamin D patch and the PRP-only patch (p ≤ 0.001). In the PRP and vitamin D patch, 45% of patients showed no improvement, 15% showed less than 25% improvement, 20% showed 25-50% improvement, 15% showed 50-75% improvement and 5% showed improvement of more than 75%. In the PRP patch, 40% showed no improvement, 15% showed less than 25% improvement, 10% showed 25-50% improvement, 30% showed 50-75% improvement and only 5% showed improvement of more than 75%. This is consistent with Ibrahim et al., 2016, [11] who studied 60 stable vitiligo patients with overall symmetrical lesions and found that on the PRP side, 33 patients (55%) had 75%-100% improvement, 12 patients (20%) had 50%-75% improvement, 9 patients (15%) had 25%-50% improvement, and 6 patients (10%) had less than 25% improvement. Also, Abdelghani et al. [12] studied eighty adult patients with localized non-segmental vitiligo and found that the PRP group combined with fractional CO2 laser achieved the best results regarding repigmentation and patient satisfaction, which was similar to our patients' satisfaction: we found 20% of patients not satisfied, 40% with low satisfaction and 40% with high satisfaction in the PRP and vitamin D patches, while in the PRP patches 15% were not satisfied, 30% had low satisfaction and 55% were highly satisfied with the results.
In contrast, Lim et al. [13] treated 20 patients with vitiligo by intradermal injection of PRP weekly for 10 weeks and suggested that PRP was not effective in the treatment of vitiligo.
Group B: The platelet-rich plasma and vitamin D patches showed 60% improvement, while the vitamin D patches revealed only 40% improvement after the 3-month follow-up. There was a highly statistically significant difference between baseline and the 3-month follow-up regarding degree of improvement of pigmentation in the PRP + vitamin D patch (p ≤ 0.001), and a statistically significant difference between baseline and the 3-month follow-up regarding degree of improvement of pigmentation in the vitamin D patch (p ≤ 0.05). Forty percent of patients in the PRP and vitamin D patch showed no improvement, 10% revealed less than 25% improvement, 35% showed 25-50% improvement, 10% showed 50-75% improvement and only 5% showed more than 75% improvement. In the vitamin D patches, 60% were not improved at all, 15% showed 25-50% improvement, 20% showed 50-75% improvement and only 5% improved more than 75%. There was no statistically significant difference between the PRP + vitamin D and vitamin D patches regarding degree of improvement of pigmentation after the 3-month follow-up (p > 0.05). In group B, 10 patients (83%) revealed perifollicular repigmentation, 0 patients (0%) revealed marginal repigmentation and 2 patients (17%) revealed mixed repigmentation, whereas in the vitamin D patches 5 patients (62.5%) revealed perifollicular repigmentation, 1 patient (12.5%) revealed marginal repigmentation and 2 patients (25%) revealed mixed repigmentation. There was no recurrence of depigmentation, no Koebner phenomenon, and no other complications during the follow-up period.
Regarding patient satisfaction, in the PRP and vitamin D patches 25% of patients were not satisfied, 40% had low satisfaction and 35% had high satisfaction; in the vitamin D patches 40% were not satisfied, 25% had low satisfaction and 35% had high satisfaction.
In this study, we prepared PRP by a double-spin centrifugation protocol. This method was chosen on the basis that a double-centrifugation protocol using the correct g-forces and spin times results in higher platelet concentrations than a single-centrifugation protocol. We partially applied the protocol of Abdelghani et al. [12] and Ibrahim et al., [11] who recommended double centrifugation. We injected patients once per month, in contrast to Abdelghani et al. [12] and Ibrahim et al., [11] who injected every week.
To our knowledge, we are the first to inject vitamin D intralesionally in vitiligo patients. A number of studies have reported the treatment of vitiligo with oral and topical vitamin D analogues, alone or in combination with ultraviolet light or corticosteroids, to enhance repigmentation. [1]
In conclusion, PRP injection, either alone or with vitamin D, gives satisfactory results in the treatment of vitiligo. It showed repigmentation in vitiliginous patches and improvement of already present pigment (50% improvement in patients). However, we do not advise using vitamin D injection alone, as its results were less satisfactory than combined treatment regarding degree of improvement.
RECOMMENDATIONS:
1- Platelet-rich plasma in vitiligo treatment requires further assessment and more studies.
2- Combination of other lines of treatment with PRP may be required to achieve a higher degree of improvement.
3- Injection at shorter intervals, e.g. every two weeks, may be considered.
REFERENCES:
[1] AlGhamdi K, Kumar A and Moussa N. The role of vitamin D in melanogenesis with an emphasis on vitiligo. Indian J Dermatol Venereol Leprol 79, 750-758 (2013).
[2] Lee S, Zheng Z, Kang J, Kim D, Oh S and Bin S. Therapeutic efficacy of autologous platelet-rich plasma and polydeoxyribonucleotide on female pattern hair loss. Wound Repair Regen 23, 30-36 (2015).
[3] Alikhan A, Felsten LM, Daly M and Petronic-Rosic V. Vitiligo: a comprehensive overview. Part I. Introduction, epidemiology, quality of life, diagnosis, differential diagnosis, associations, histopathology, etiology, and work-up. J Am Acad Dermatol 65, 473-491 (2011).
[4] Ohguchi R, Kato H, Furuhashi T, Nakamura M, Nishida E, Watanabe S, Shintani Y and Morita A. Risk factors and treatment responses in patients with vitiligo in Japan: a retrospective large-scale study. Kaohsiung J Med Sci 31, 260-264 (2015).
[5] Gawkrodger DJ, Ormerod AD, Shaw L, Mauri-Sole I, Whitton ME, Watts MJ, Anstey AV, Ingham J and Young K. Guideline for the diagnosis and management of vitiligo. Br J Dermatol 159, 1051-1076 (2008).
[6] Ustun I, Seraslan G, Gokce C, Motor S, Can Y, Ugur Inan M and Yilmaz N. Investigation of vitamin D levels in patients with vitiligo vulgaris. Acta Dermatovenerol Croat 22, 110-113 (2014).
[7] Kaux JF, Le Goff C, Seidel L, Péters P, Gothot A, Albert A and Crielaard JM. Comparative study of five techniques of preparation of platelet-rich plasma. Pathol Biol (Paris) 59, 157-160 (2011).
[8] Perez AG, Lana JF, Rodrigues AA, Luzo AC, Belangero WD and Santana MH. Relevant aspects of centrifugation step in the preparation of platelet-rich plasma. ISRN Hematol, 76060 (2014).
[9] Shin J, Lee JS, Hann SK and Oh SH. Combination treatment by 10 600 nm ablative fractional carbon dioxide laser and narrowband ultraviolet B in refractory nonsegmental vitiligo: a prospective, randomized half-body comparative study. Br J Dermatol 166, 658-661 (2012).
[10] Snedecor GW and Cochran WG. Statistical Methods. Iowa State University Press (1982).
[11] Ibrahim ZA, El-Ashmawy AA, El-Tatawy RA and Sallam FA. The effect of platelet-rich plasma on the outcome of short-term narrowband-ultraviolet B phototherapy in the treatment of vitiligo: a pilot study. J Cosmet Dermatol 15, 108-116 (2016).
[12] Abdelghani R, Ahmed NA and Darwish HM. Combined treatment with fractional carbon dioxide laser, autologous platelet-rich plasma, and narrow band ultraviolet B for vitiligo in different body sites: a prospective, randomized comparative trial. J Cosmet Dermatol 17, 365-372 (2018).
[13] Lim HK, Sh MK and Lee MH. Clinical application of PRP in vitiligo: a pilot study. Official 1st International Pigment Cell Conference (2011).
work_yjzxspf5e5auzb7yb4drk7v2zy ----
Short communication
KOREAN JOURNAL OF APPLIED ENTOMOLOGY (한국응용곤충학회지) © The Korean Society of Applied Entomology
Korean J. Appl. Entomol. 53(3): 301-303 (2014). pISSN 1225-0171, eISSN 2287-545X. DOI: http://dx.doi.org/10.5656/KSAE.2014.05.0.008
A Newly Known Genus Charitoprepes Warren (Lepidoptera: Pyraloidea: Crambidae) in Korea, with Report of C.
lubricosa Warren
Minyoung Kim, Young-Mi Park 1 *, Ik-Hwa Hyun, Byoung-Hyo Kang 2, Si-Heon Oh 3, Jae-Kwang Jwa 3, Young-Kwon Hyun 3 and Heung-Sik Lee 4
Plant Quarantine Technology Center, Animal and Plant Quarantine Agency, Suwon 443-400, Korea; 1 Department of Plant Quarantine, Animal and Plant Quarantine Agency, Anyang 480-757, Korea; 2 Jeju International Airport District Office, Animal and Plant Quarantine Agency, Jeju 690-727, Korea; 3 Jeju Regional Office, Animal and Plant Quarantine Agency, Jeju 690-755, Korea; 4 Youngnam Regional Office, Animal and Plant Quarantine Agency, Busan 600-016, Korea
ABSTRACT: The genus Charitoprepes Warren, a probable vagrant group of the family Crambidae, is recorded for the first time from the Korean Peninsula, based on C. lubricosa Warren from Jeju Island. A diagnosis and illustrations of detailed diagnostic characters, including the genitalia, are provided.
Key words: Charitoprepes, New record, Crambidae, Lepidoptera, Korea
*Corresponding author: insectcola@korea.kr
Received February 10, 2014; Revised May 15, 2014; Accepted July 7, 2014
The family Crambidae (Lepidoptera), belonging to the superfamily Pyraloidea, is commonly known as the "grass moths" due to their peculiar resting posture, with the wings folded roof-like over the abdomen, and their habit of settling close to grass stems, where they are inconspicuous. The family Crambidae is subdivided into 17 subfamilies (Solis, 2007), with more than 9,655 species classified into 1,020 genera worldwide (Beccaloni et al., 2005; Nuss et al., 2010).
It is a cosmopolitan family, with 219 species known from Korea (Bae et al., 2008), 535 species from Japan (www.jpmoth.org), and more than 2,000 species from China (pers. comm.), while 1,158 species are known from Europe (Karsholt and Razowski, 1996; including Pyralidae). We report the genus Charitoprepes Warren, a vagrant crambid new to Korea, found during a survey of subtropical moths conducted in the southern part of Korea (Jeju Island).
The genus Charitoprepes Warren is a monotypic genus belonging to the subfamily Pyraustinae. The type species of the genus is C. lubricosa Warren, described from Khasis, India; the species is distributed mainly in India and Japan. The genus is characterized by the following: head and vertex covered with brownish yellow scales; labial palpi short, strong, rounded and upturned; maxillary palpi imperceptible; antenna filiform, five-sixths the length of the forewing. Forewing elongate; costa straight, becoming convex towards the apex; two stigmata present at the ends of the subterminal fascia; posterior margin oblique, slightly concave below. Hindwing with posterior margin entirely curved, a slightly perceptible band at the end of the postmedian fascia; large discal spots present. Legs rather long and weak.
In this study, we report the crambid genus Charitoprepes from the Korean Peninsula for the first time. Morphological information, with illustrations of the male genitalia, is provided to aid identification.
Fig. 1. Charitoprepes lubricosa (Warren). 1a. adult; 1b. diagrammatic dorsal views of wing pattern.
The Korean Society of Applied Entomology (KSAE) retains the exclusive copyright to reproduce and distribute for all KSAE publications. The journal follows an open access policy.
Materials and Methods
The specimens examined in this study were collected on Jeju Island, in the southern part of the Korean Peninsula, in 2013, using bucket traps and ultraviolet lamps (12 V/8 W) at night. Morphological structures and genital characters were examined under a stereo microscope (Leica S8APO), and a Nikon D90 and a Carl Zeiss Axio Imager M2 were used for digital photography. The color standard for the description of adults was based on the Methuen Handbook of Colour (Kornerup and Wanscher, 1978). The specimens examined are deposited in the Plant Quarantine Technology Center, Animal and Plant Quarantine Agency (QIA).
Systematics
Order Lepidoptera Linnaeus, 1758
Family Crambidae Latreille, 1810
Subfamily Pyraustinae Meyrick, 1890
Genus Charitoprepes Warren, 1896
Type species: Charitoprepes lubricosa Warren, 1896
Charitoprepes lubricosa Warren, 1896 (Fig. 1)
Charitoprepes lubricosa Warren, 1896; Ann. Mag. Nat. Hist. 17: 136. Type locality: Khasis, India.
Heterocnephes lubricosa Warren, 1896
Diagnosis. This species is not externally similar to its congeners and can easily be distinguished by the presence of a dark brown spot followed by fuscous scales beyond four-fifths of the forewing, and by two dark brown stigmata close to the median fascia.
Adult (Figs. 1a-b). Wingspan 32.0 mm. Head: frons shiny white; vertex brownish yellow. Antenna filiform, brownish yellow. Labial palpus white; second and third segments tinged with dark brown. Thorax: thorax and tegula more or less white dorsally. Forewing ground color grayish brown, with well-developed dark brown orbicular and reniform stigmata surrounded by fuscous scales; fringes pearly grey, rarely mixed with brownish scales; a dark brown patch at the apex of the forewing; cilia tipped with brown. Hindwing grayish brown, with a dark brown discal stigma; fringes as in the forewing, with distinct dark basal and postmedian fasciae. Abdomen: abdomen grayish brown.
Legs: legs white; hind tibia whitish, with a pair of tibial spurs; tarsus shiny white, about 1.5 times longer than the tibia.
Male genitalia (Figs. 2a-b). Gnathos elongate, very long and straight, with a rounded apex; uncus membranous, as long as the gnathos. Tegumen sclerotized, well developed. Valva broad basally, with the costal margin strongly sclerotized; apex rounded, more or less angular; densely covered with hairs. Sacculus sclerotized, gradually narrowing; juxta small; clasper very well developed, with a stout, strongly pointed, falcate apical process. Aedeagus slender, as long as the genitalia; cornutus present, one-third the length of the aedeagus.
Fig. 2. Male genitalia of Charitoprepes lubricosa (Warren). 2a. diagram of male genitalia; 2b. aedeagus.
Female genitalia. Unavailable in this study.
Specimens examined. Is. Jeju: 7♂, Seonhol-ri, Jocheon-eub, Seogwipo-si, 4.vi.2013, 28.viii.2013 (YK Hyun & RN Sohn), genitalia slide nos. QIA-76, 88, 89; 2♂, Sanghyo-dong, Seogwipo-si, 4.vi.2013 (YK Hyun & RN Sohn); 1♂, Hannam-ri, Namwon-eub, Seogwipo-si, 12.ix.2013 (YK Hyun & RN Sohn).
Host plant. No host plant is known.
Biology. Moths are known to appear from May to September in Japan (Jinbo et al., 2003-2014).
Distribution. Korea (new record), Japan (Honshu, Shikoku, Kyushu, Tsushima, Yakushima), India.
Remarks. The moth has been reported in Japan from Honshu southward, and its distribution there may still be spreading. Based on male specimens alone, we cannot at this point give any evidence about the invasion of this species into Korea; we therefore tentatively treat the species as an accidental migrant or vagrant. However, its possible establishment and spread in the Korean fauna cannot be ruled out. Thus, further study on the surveillance of this species under Korean weather conditions is needed, considering its possible status as a pest insect.
Literature Cited
Beccaloni, G., Scoble, M., Robinson, G.,
Pitkin, B., 2005. The global Lepidoptera names index. Natural History Museum. Available from http://www.nhm.ac.uk/research-curation/projects/lepindex/index.html.
Bae, Y.S., Byun, B.K., Paek, M.K., 2008. Pyralid moths of Korea (Lepidoptera: Pyraloidea). Korea National Arboretum, Samsungad.com, Seoul, 426 pp.
Jinbo, U., KeiHiroshi, Y., Teruhiko, F., Nakao, K., 2003-2014. List-MJ: A checklist of Japanese moths. http://www.jpmoth.org/.
Karsholt, O., Razowski, J., 1996. The Lepidoptera of Europe. A distributional checklist. Apollo Books, Stenstrup, pp. 166-191.
Kornerup, A., Wanscher, J.H., 1978. Methuen handbook of colour (3rd ed.). Methuen, London, 252 pp.
New Zealand Ministry of Agriculture and Forestry (MAF) Biosecurity New Zealand, 2008. Pest risk analysis for six moth species: lessons for the biosecurity system on managing hitchhiker organisms. MAF, Wellington, New Zealand, 419 pp.
Nuss, M., Landry, B., Vegliante, F., Tränkner, A., Mally, R., Hayden, J., Segerer, A., Li, H., Schouten, R., Solis, M.A., Trofimova, T., De Prins, J., Speidel, W., 2010. GlobiZ: Global Information System on Pyraloidea. Senckenberg Collection of Natural History, Museum of Zoology, Dresden (Germany). http://www.pyraloidea.org.
Solis, M.A., 2007. Phylogenetic studies and modern classification of the Pyraloidea (Lepidoptera). Revista Colombiana de Entomología 33: 1-9.
Warren, D., 1896. New genera and species of Pyralidae, Thyrididae and Epiplemidae. Ann. Mag. Nat. Hist. 17: 131-150.
work_ykn2ykzepbdjvhufp7q5n4hwgq ----
www.palgrave-journals.com/dam © 2006 Palgrave Macmillan Ltd 1743-6540 $30.00 Vol. 2, 5, 219–222 JOURNAL OF DIGITAL ASSET MANAGEMENT
THE OPPORTUNITY
In reviewing their 2004 budget and the one for the upcoming year, General Electric (GE) determined that, in terms of art costs, they could actually save money if they created a custom image library that all of the GE businesses around the world could share.
Their initial idea was to employ 13 different photographers in various parts of the world, take these pictures traditionally and put them on some sort of CD. But the folks at BBDO, the people who would come to coordinate this project, had seen some interesting technology used on “America's Next Top Model” that bore some investigation. I'll get to that in just a minute. IN THE OLD DAYS … Imagine if there was no such thing as digital photography or even the internet. Jd Michaels, VP, Director of Print Services for BBDO, described for me how a project like this used to be done. They would have shot film traditionally … the film would have been sent to different labs for each photographer, and some of them were international photographers, so the film would wind up going to Japan, Germany, Sweden and Britain. Once the film was developed, a contact sheet would be made and each photographer would review those with an agency art director. The art director would then review the contact sheet with the account team, and then they would go to the client and make final choices as to which frames they actually wanted to work with. At that point, we would take those frames and decide what changes we wanted to make to the frames with a high-resolution scan for digital purposes. The high-resolution scan would then be sent via mail all to New York City, where we would compile them on a server and then one-by-one begin to go through and re-touch them, with all of the information in the retouching being sent either through email or over the telephone. So every day there would be a mass mailing both to and from the agency to get things done. Now, keep in mind that developing film takes about 2 days to do at the professional level.
Possibilities

Russ Stanton

Before moving into Information Technology, Russ Stanton had developed a professional resume that encompassed radio production, producing and directing commercials and longer formats for television. He was the recipient of 13 Addy Awards and one Gabriel Award. In the late 1980s, Russ chose to move away from TV and radio and jump into the IT movement. In IT, he has successfully developed and managed enterprise business applications for top entertainment and publishing companies and has produced various interactive learning websites. His two careers merged into one by successfully implementing the Digital Workflow for TV, Print and Radio production at BBDO (Batten, Barton, Durstine & Osborn). On a more personal note, Russ enjoys football, dancing and a good joke. He lives outside of Philadelphia, PA with his wife, two sons and a daughter.

Keywords: GE, BBDO, metadata, Xinet, WebNative, possibilities, successful, digital

Abstract: This is a tale of six photographers in 20 cities in almost every continent of the world, at the same time mind you, taking on the major task of creating a global photo library for General Electric. Without giving away too much of the story up front, let's just say that digital asset management and the internet saved the day, because this was a critical project for both GE and their global creative ad agency BBDO, and had to be completed in record time. You can only imagine the enormity of such a task, not to mention the pressure. It's also the story of having the right people in place at the right time. Journal of Digital Asset Management (2006) 2, 219–222. doi: 10.1057/palgrave.dam.3650035

Russ Stanton is the Director, Digital Media Architect, BBDO North America. Tel: +212 459 6418. Email: Russ.Stanton@bbdo.com

All of the back and forth of the film led to certain unavoidable risks, such as losing the film, damaging the film or not being able to control
what would have ended up being, over 300 rolls of film flying around the world. The challenge with this project was that it was slated for 2004. It needed to be completed by the 1st of January and we started this basically in August of 2004. Doing it the old way was not an option. HOW'D THEY DO THAT? In brainstorming on exactly how this project would get done in the time frame required, Jd's group thought of many things. Some of the things included having a central lab in Europe do all the film processing or having day and night shifts to handle submissions around the clock, given issues with various time zones. He continues: That meeting just precipitated tears and headaches. So after the end of that we went back to our own offices and at about the exact same time we each met in the hall and said, ‘Oh my Gosh!! We could use that Xinet thing and we'll do the photography digitally.’ Xinet is a suite of products dedicated to pre-press, workflows and digital asset management. One of the main attributes that captured Jd's group's attention, though, was the uploader/downloader functionality that allows for the transfer of files over the internet and the ingestion into the Xinet system. The uploader is almost an applet, Jd said, a specific program that sits on the desktop of a PC or a Mac that allows you to drag and drop images directly from your desktop connected at high speed anywhere in the world into our firewall-protected server. The trick was: could we program the applet to put them in a specific place and tag them with XMP data (metadata)? This would allow us to sort through them in any number of ways on the database without having to have someone here to administer every photo that came in. The answer was yes. The images go into this applet, they are instantly tagged, stuffed or zipped, transmitted, unstuffed on the other end and placed in the correct folder for each photographer.
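The tag, zip, transmit, unzip, and route pipeline described here can be sketched in a few lines. This is an illustrative stand-in, not Xinet's actual uploader API: a JSON sidecar plays the role of the XMP data, and every function and file name below is hypothetical.

```python
# Illustrative sketch of the tag-zip-transmit-route pipeline (NOT Xinet's
# real API): a JSON sidecar stands in for the XMP metadata.
import json
import shutil
import zipfile
from pathlib import Path

def package_upload(image: Path, photographer: str, out_zip: Path) -> Path:
    """Client side: attach metadata and bundle image + sidecar into a zip."""
    sidecar = image.with_suffix(".json")
    sidecar.write_text(json.dumps({
        "photographer": photographer,
        "original_name": image.name,
    }))
    with zipfile.ZipFile(out_zip, "w") as zf:
        zf.write(image, image.name)      # the shot itself
        zf.write(sidecar, sidecar.name)  # its metadata
    return out_zip

def ingest_upload(upload_zip: Path, server_root: Path) -> Path:
    """Server side: unzip, read the metadata, file under the photographer."""
    staging = server_root / "staging"
    if staging.exists():
        shutil.rmtree(staging)
    staging.mkdir(parents=True)
    with zipfile.ZipFile(upload_zip) as zf:
        zf.extractall(staging)
    meta = json.loads(next(staging.glob("*.json")).read_text())
    dest_dir = server_root / meta["photographer"]
    dest_dir.mkdir(parents=True, exist_ok=True)
    dest = dest_dir / meta["original_name"]
    shutil.move(str(staging / meta["original_name"]), str(dest))
    shutil.move(str(staging / (dest.stem + ".json")), str(dest.with_suffix(".json")))
    return dest
```

Previews and field-based searching would then be layered on top of the routed files, as the article goes on to describe.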
Once they open on the Xinet server, previews are instantly made and each image is instantly searchable by its name, its location, and its XMP metadata. So if we wanted to search for the import date in the date field, we got to see all the things that had come in for that date. But could it get implemented in time? Jd continues: Because our groups were considerably smaller and they were a regional team, we had to pull from various groups. So, we had the IT person who was excited about trying to pull the technology together and making sure it didn't conflict with anything else. You had Kari Nouhan, who was the art person, who was very excited about being able to get all the images in and making sure that they were a high enough quality in terms of making sure the digital photography was there. You had myself, who wanted to make sure that when they came in, we could organize them and get them from place to place. And then we had Alaina Collins, who was very excited about programming it and making sure everything was running smoothly on a moment-to-moment basis. By having all of us together as a first team, the only ramp-up we needed was that all 4 of us came together and understood what was going on, because we had taken responsibility for the project working. THE LOGISTICS So after some discussions with the Xinet integrator, NAPC (New America Platinotype Corporation), we decided to give it a shot. We still didn't have the kind of time that you'd typically require for this undertaking, but we had made a commitment. So, we had to streamline everything and make every moment count. NAPC gave us specific training for 2 days on how to set this project up, educating us on the skills we needed to know to create uploaders, create actions on the server, create a hierarchy on the server, make new volumes, set naming conventions, tag the information correctly (metadata) and administer names and passwords.
A true success story in digital photography and asset management

Once we got those lessons down, then we were able to work the program and make it actually do what we wanted it to do. So we really created a machine, and that's all we wanted to know about it. As time went on, we realized the production people could work from home, saving more time. They could work from anywhere and do complete administration of any file through the administration feature. The amazing part is that all this happened in a 2-week period; a record implementation according to NAPC. Jd Michaels says, They only sold us the system because we made a solemn promise that we were going to take complete responsibility for this and not expect them to staff us with someone. We just had the right combination of people to make that happen. It's just a box, but it's a box that responds extremely well to enthusiasm and ideas. That's the greatest thing about it. So, off we went creating uploaders for each photographer, pre-programming the folders into the uploaders. Each photographer shot the equivalent of over 2–3 rolls every day, and there were 4–6 weeks of shooting depending on where the photographer was in the schedule. In total there were over 15 thousand images before all the choices were made. We ended up with 800 prime images that they use, with another 400 that were considered seconds. But the 800 that we had were retouched, put together and color-corrected uniformly. The client could even see the previews online and they would say, “those 4 don't have the angle we want, it's that one,” so we saved a great deal of time and a great deal in prints, which kept their cost lower. The client became so used to the WebNative interface that, after a while, we were able to have them review the pictures at their own site with their own computer.
The art director would be here on the phone, or they could go on and choose the ones they wanted, and as they chose the ones they wanted, we could go on and then look at the basket that they had made and then say, OK, these are the ones that they want. It started being a really fun way of working. THE IMPORTANCE OF DAM IN ALL THIS In the beginning, the digital asset management piece was simply the way that they found the few images that really needed to be retouched and moved, out of 300 images per day. So the initial metadata was file name and photographer, plus the day that it was put into the system. The initial fields that were put in were very simple. A lot of it came out of PhotoShop. The cameras that they were using gave a great deal of information instantly, and it just came right into the database through the XMP data. So they really only needed to put those custom fields in so that they could search. Jd continues, When we had the final images, we went into them and then custom tagged them with who shot them and where they were shot; also, the client gave us metadata they wanted added in for the businesses that each one represented, or the angle they wanted to put on it, so that their people could find it easier. When we needed to add fields specific to the client, that's when we manually added it. Everything else we had worked to make sure fields were basic, and NAPC worked to make sure that their program was going to pull the images into the right place and then make sure that all the metadata that was involved was in the file. Also, a comfort was the fact that our IT guys had everything backed up and protected behind our firewall with only limited access from outside. Otherwise, this would have been impossible as well. We had to make sure that these million-dollar images, floating in cyberspace, were protected, and that the system was redundant so that if it went down, it could come up quickly with no time lost.
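The field-based lookup described in this section (import date, photographer, client-supplied business tags) amounts to an inverted index over metadata fields. The following is a minimal sketch of that idea, not WebNative's real query interface; all field names and filenames are illustrative.

```python
# Sketch of field-based metadata lookup (illustrative only; not
# WebNative's actual query interface).
from collections import defaultdict

class MetadataIndex:
    def __init__(self) -> None:
        # field name -> field value -> list of image names
        self._by_field = defaultdict(lambda: defaultdict(list))

    def add(self, name: str, **fields: str) -> None:
        """Register an image under each of its metadata field values."""
        for field, value in fields.items():
            self._by_field[field][value].append(name)

    def search(self, field: str, value: str) -> list:
        """All image names whose `field` equals `value`, sorted by name."""
        return sorted(self._by_field[field][value])

index = MetadataIndex()
index.add("ge_turbine_004.tif", import_date="2004-09-12", photographer="sweden")
index.add("ge_jet_011.tif", import_date="2004-09-12", photographer="japan")
index.add("ge_lab_002.tif", import_date="2004-09-13", photographer="japan")
```

Searching the import-date field, as in the article's example, then returns everything that came in on a given day.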
AT THE END OF THE DAY According to Jd, If Xinet was a solution tool that gave you a workflow that was preplanned, it would have been useless, because we didn't need a solution, we needed a tool that we could build something with. We needed a Lowe's, where we could just go in and pick stuff and build something. That's what this was like for us. The mixture of relief and pride was truly legendary with all of us. We had never gotten to work with a technology before that was so up in the air. It was like a box of crayons. The box of crayons doesn't tell you what to draw with it, and this was what this was like. Normally, everything comes with directions, instructions or even suggestions, but it worked with our imaginations. What I learned was that you can take pretty high-end technology and, with good people that understand the basics of each aspect, put together a super group that just enjoys making things work and enjoys seeing things happen. I think that what I really learned in business is that the boldness of your plan and your players is directly correlated to your success. Anybody could have bought the software, anybody could have said, ‘we're going to do it’, but the boldness of not being afraid of an 800-page manual just speaks to the type of people between BBDO and GE. That boldness is what really made the project work to the client; to say ‘OK, this is going to be fine with us’. It was extremely successful, and I was extremely proud of it. At the end of the day, going back to the old way of doing things as I have described earlier is now never an option, not only because of the time, although that was major. But, according to Jd, if we had done it the old way it would have cost 4½ times as much. Now that is the kind of success you can literally take to the bank.
work_ylenynvv3fdgpmskf42tguerqa ---- Hindawi Publishing Corporation Minimally Invasive Surgery Volume 2013, Article ID 153235, 4 pages http://dx.doi.org/10.1155/2013/153235 Research Article Standardization of Laparoscopic Pelvic Examination: A Proposal of a Novel System Mohamed A. Bedaiwy,1,2 Rachel Pope,1 Drisana Henry,1 Kristin Zanotti,1 Sangeeta Mahajan,1 William Hurd,1 Tommaso Falcone,3 and James Liu1 1 Department of Gynecology, University Hospitals Case Medical Center, Case Western Reserve University, Cleveland, OH 44106, USA 2 Division of Reproductive Endocrinology and Infertility, Department of OB/GYN, University of British Columbia, D415A, Shaughnessy Building-4500 Oak Street, Vancouver, BC, Canada V6H 3N1 3 Department of OB/GYN and Women’s Health Institute, Cleveland Clinic, Cleveland, OH 44106, USA Correspondence should be addressed to Mohamed A. Bedaiwy; bedaiwymmm@yahoo.com Received 6 August 2013; Accepted 2 December 2013 Academic Editor: Casey M. Calkins Copyright © 2013 Mohamed A. Bedaiwy et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. Objective. Laparoscopic pelvic assessment is often performed in a nonstandardized fashion depending on the surgeon’s discretion. Reporting anatomic findings is inconsistent and lesions in atypical locations may be missed. We propose a method for systematic pelvic assessment based on anatomical landmarks. Design. Retrospective analysis. Setting. Tertiary care academic medical center. Intervention. We applied this system to operative reports of 540 patients who underwent diagnostic or operative laparoscopy for unexplained infertility between 2006 and 2012. The pelvis was divided into 2 midline zones (zone I and II) and right and left lateral zones (zone III and IV).
All reports were evaluated for the comprehensiveness of description with respect to normal findings or pathology for each zone. Results. Of 540 surgeries, all reports commented on the uterus, tubes, and ovaries (100%), but only 17% (n = 93, 95% CI: 13.8–20.2) commented on the dome of the bladder and the anterior cul-de-sac. 24% (n = 130, 95% CI: 20.4–27.6) commented on the posterior cul-de-sac, and 5% (n = 29, 95% CI: 3.2–6.8) commented on the pelvic sidewall. Overall, 6% (n = 34, 95% CI: 4–8) reported near complete documentation of the pelvic zones. Conclusion. Implementation of a systematic approach for laparoscopic pelvic examination will enhance the diagnostic accuracy and provide better communication between care providers. In the absence of pelvic pathology, we recommend a minimum of 6 photographs of the 6 pelvic zones. 1. Introduction Years after surgical procedures are performed, operative reports are often the only source of information another surgeon possesses when attempting to understand the history and internal anatomy of a patient. Evidence shows that a structured format for documenting findings improves overall accuracy of reporting and, by extension, is likely to improve patient outcomes [1, 2]. An appropriately detailed report may greatly improve treatment strategy and general preparedness for a case, theoretically leading to better patient safety and care. While efforts have been made in the general surgical field to improve and standardize operative reports, these efforts are still lacking in gynecologic surgery. Pelvic anatomy is unique in that various pathologies can be missed if not intentionally sought out for identification. These anatomical characteristics could influence the detailed description of pelvic findings during surgery in general and, more specifically, during laparoscopy. Classically the pelvis is divided into a true and false pelvis.
While the false pelvis is the space enclosed by the pelvic girdle above and in front of the pelvic brim and considered part of the abdominal cavity, the true pelvis includes the genital tract midline between the lower end of the urinary tract anteriorly and the gastrointestinal tract posteriorly. The ligamentary attachments of the female genital organs add to the anatomical uniqueness of the pelvis. For instance, the round ligament, which extends from the cornua to the internal ring, could harbor pathology from its origin to its insertion. The uterosacral ligaments and the suspensory ligaments of the ovary are often inspected but not described. Other anatomically obscure locations include the ovarian fossa, the lateral pelvic sidewall, and the area inferior to the uterosacral ligament. The objective of this study is to propose a method for systematic pelvic assessment based on anatomical landmarks and structured documentation with laparoscopic photography. To illustrate the current deficiencies, we retrospectively applied this system to a cohort of patients who underwent laparoscopy for unexplained infertility to assess the comprehensiveness of the operative reports. 2. Materials and Methods 2.1. Proposed System. In our proposed system, the pelvis was topographically divided into two midline zones (zone I & II) and two paired (right and left) lateral zones (zone III & IV). Zone I is the area between the two round ligaments from their origin at the uterine cornua to their insertion in the deep inguinal rings. Zone II is the area between the two uterosacral ligaments from their origin from the back of the uterus to their insertions in the sacrum posteriorly. Zone III is the area between the uterosacral ligament inferiorly and the entire length of the fallopian tube and the infundibulopelvic ligament superiorly.
Zone IV is the triangular area lateral to the fallopian tube and the infundibulopelvic ligament and medial to the external iliac vessels up to the round ligament (Figure 1). The contents of the different zones are shown in Table 1. 2.2. Retrospective Evaluation of Dictated Reports. This study was conducted at the University Hospitals Case Medical Center (UHCMC), Case Western Reserve University, Cleveland, OH, USA. After IRB approval was obtained, operative reports of 540 patients who underwent diagnostic or operative laparoscopy for the diagnosis of unexplained infertility between January 2005 and January 2012 were collected. The operative reports for these patients were reviewed with allocation of the reported positive or negative findings to the respective zones as shown above. All reports were evaluated for the comprehensiveness of the description with respect to normal findings or pathology for six zones as follows. Using this mapping of the pelvis, the operative reports were reviewed for completeness in description of anatomical findings. Descriptive statistics are presented. 3. Results During the review period of the study, 8876 laparoscopies and hysteroscopies were performed within the entire UHCMC system for a variety of indications. Of these, a total of 540 cases of diagnostic and/or operative laparoscopy with and without hysteroscopy for unexplained infertility were identified. These cases were selected as they are usually intended as a careful surveillance of pelvic anatomy in order to identify an etiology of infertility. As the goal of these surgical cases is the identification of anatomy, it was thought fit that these operative reports would focus on the description of anatomy. All operative reports commented on the uterus, tubes, and ovaries (100%), which reflect parts of zone I and part of zone III. Figure 1: A color-coded illustration of the anatomical boundaries and the contents of all pelvic zones.
Only 17% (n = 93, 95% CI: 13.8–20.2) commented on the dome of the bladder and the anterior cul-de-sac (the remainder of zone I). Twenty-four percent (n = 130, 95% CI: 20.4–27.6) commented on the posterior cul-de-sac, which represents part of zone II. Interestingly, only one fourth of those who addressed zone II (6%; n = 34, 95% CI: 4–8) commented on the rectosigmoid. Moreover, only 5% (29/540) commented on the pelvic sidewall peritoneum without specifying whether the ovarian fossa and the peritoneum overlying zone IV were evaluated. Overall, only 6% (n = 34, 95% CI: 4–8) reported either positive and/or negative findings in the various pelvic zones resulting in complete documentation of the presence or absence of pelvic findings (Table 2). Supplemental photographic documentation of all pelvic areas was frequently missed; it was found only in 6% (n = 34, 95% CI: 4–8) of patients' charts. 4. Conclusion The paucity of detail in operative reporting represents a missed opportunity to document important anatomical findings that could prove useful in future patient care. Our retrospective chart review demonstrated that description of important pelvic structures is frequently missing in operative notes from diagnostic and operative laparoscopy. The anterior cul-de-sac, deep inguinal rings, ovarian fossa, and the lateral pelvic sidewall peritoneum are the most frequently missed areas. Photographic documentation of normal and abnormal findings was also frequently missed. As seen in the general surgical literature, standardizing operative reporting improves completeness of documentation [2]. If such systems are in place, residents can be taught these methods for reporting during their training [3, 4]. As the era of digital photography and electronic medical records evolves, this is a very appropriate time to innovate with respect to the methods by which we document our surgical findings.
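The 95% confidence intervals quoted in these results are consistent with the normal-approximation (Wald) interval for a proportion. The short check below assumes that method; the paper does not state which interval was actually used.

```python
# Reproducing the quoted 95% confidence intervals with the Wald
# (normal-approximation) interval for a proportion. The choice of
# method is an assumption; the paper does not specify it.
import math

def prop_ci_95(p: float, n: int) -> tuple:
    """95% Wald CI for a proportion p observed among n reports."""
    se = math.sqrt(p * (1 - p) / n)
    return (p - 1.96 * se, p + 1.96 * se)

# 17% of 540 reports fully covered zone I -> roughly 13.8-20.2%,
# matching the interval quoted in the Results.
lo, hi = prop_ci_95(0.17, 540)
```

The same formula with p = 0.24 reproduces the 20.4–27.6 interval reported for the posterior cul-de-sac.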
Implementation of a systematic approach for laparoscopic pelvic examination will indeed enhance the diagnostic accuracy, help diagnose lesions in anatomically challenging locations, and provide the required standardization with its clinical and academic advantages. Templates have been created to achieve standardization in general operative reports [5].

Table 1: Descriptive summary of the anatomical boundaries and the contents of each pelvic zone.

Zone I. Boundaries: midline anterior abdominal cavity limited by the round ligaments bilaterally. Contents: (1) uterine dome and anterior surface; (2) anterior surface of the broad ligaments; (3) bladder dome; (4) internal ring and inferior epigastric vessels.

Zone II. Boundaries: midline posterior zone of the abdominal cavity limited by the uterosacral ligaments bilaterally. Contents: (1) uterine dome and posterior surface; (2) pouch of Douglas; (3) rectovaginal septum; (4) sigmoid colon; (5) presacral peritoneum.

Zone III. Boundaries: lateral pelvic sidewalls limited by the uterosacral ligament and the adnexae and infundibulopelvic ligaments. Contents: (1) fallopian tube and ovary; (2) posterior surface of the broad ligament; (3) ovarian fossa; (4) vessels and ureter.

Zone IV. Boundaries: pelvic sidewall limited by the round ligament, adnexae and infundibulopelvic ligament, and external iliac vessels. Contents: (1) external iliac vessels; (2) ilioinguinal nerve.

Table 2: Percentages of the surgical reports that described findings in any structure or all structures of every pelvic zone.

Zone I: any part 100% (n = 540); all aspects 17% (95% CI: 13.8–20.2; n = 93).
Zone II: any part 24% (95% CI: 20.4–27.6; n = 130); all aspects 6% (95% CI: 4–8; n = 34).
Zone III: any part 100% (n = 540); all aspects 0% (n = 0).
Zone IV: any part 5% (95% CI: 2–6.8; n = 29); all aspects 0% (n = 0).

Photographic documentation of these anatomic regions would provide an additional advantage. We recommend a minimum of 6 photographs of the 6 pelvic zones in the absence of pelvic pathology. These six zones are depicted in Figure 1.
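The zone checklist of Table 1 also lends itself to automated completeness scoring of dictated reports. The sketch below is purely illustrative: the keyword lists and the simple substring matching are simplifying assumptions for demonstration, not part of the proposed system itself.

```python
# Illustrative completeness scoring against the Table 1 checklist.
# Keyword lists and substring matching are demonstration assumptions.
ZONES = {
    "I": ["uterus", "broad ligament", "bladder dome", "internal ring"],
    "II": ["uterus", "pouch of douglas", "rectovaginal septum",
           "sigmoid colon", "presacral peritoneum"],
    "III": ["fallopian tube", "ovary", "broad ligament",
            "ovarian fossa", "ureter"],
    "IV": ["external iliac vessels", "ilioinguinal nerve"],
}

def zone_coverage(report_text: str) -> dict:
    """Fraction of each zone's listed structures mentioned in the report."""
    text = report_text.lower()
    return {zone: sum(s in text for s in structures) / len(structures)
            for zone, structures in ZONES.items()}
```

Running such a score over a series of reports would reproduce, in spirit, the per-zone percentages tabulated in Table 2.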
Images of these zones will supplement the report. In addition, if surgeons dictate according to the zones, comprehensive details will be incorporated into the description report. Two copies of photos should be available for charting. In summary, a comprehensive description of important pelvic structures is frequently missing in operative notes from diagnostic and operative laparoscopy. The anterior cul-de-sac, deep inguinal rings, and the lateral pelvic sidewall peritoneum are the most frequently missed areas. A large proportion of gynecological surgery utilizes operative and diagnostic laparoscopy. Intraoperative photographic documentation is a true benefit to this approach. Lack of a standardized protocol for photographic documentation is a missed opportunity in providing quality patient care. The advantages of our proposed system are several. First, it is based on anatomical landmarks, which allow standardization. Second, it is comprehensive as it includes all major pelvic structures such as the bladder, uterus, adnexa, and the rectosigmoid colon. It also covers supportive pelvic structures such as the round ligaments, the broad ligament, and the uterosacral ligament. Moreover, it describes peritoneal surfaces such as the anterior and the posterior cul-de-sac and the ovarian fossa. In addition, it covers frequently missed areas such as the internal rings and the triangular peritoneal area lateral to the fallopian tube and the infundibulopelvic ligament. Lastly, it is an easy-to-follow system: the examination can be performed in an anteroposterior, then lateral fashion, where zones I and II are examined first (midline zones) and the lateral zones (right zones III and IV followed by left zones III and IV) are to follow. Alternatively, a clockwise or counterclockwise examination could be performed. The main limitation of this study was the retrospective use of sources to validate the use of our novel system.
In the future, we plan to use operative reports that include photography in order to prospectively describe the six pelvic zones and validate this method. By doing this, we propose that more pathology will be diagnosed, resulting in improved patient care, and that communication between surgeons will be improved by extension. Conflict of interests The authors report no conflict of interests. References [1] J. A. Parikh, I. Yermilov, S. Jain, M. L. McGory, C. Y. Ko, and M. A. Maggard, “How much do standardized forms improve the documentation of quality of care?” Journal of Surgical Research, vol. 143, no. 1, pp. 158–163, 2007. [2] L. M. Gillman, A. Vergis, J. Park, S. Minor, and M. Taylor, “Structured operative reporting: a randomized trial using dictation templates to improve operative reporting,” The American Journal of Surgery, vol. 199, no. 6, pp. 846–850, 2010. [3] L. M. Gillman, A. Vergis, K. Hardy, J. Park, and M. Taylor, “Resident training and the dictated operative report: a national perspective,” Canadian Journal of Surgery, vol. 53, no. 4, pp. 246–250, 2010. [4] J. R. Porterfield Jr., L. K. Altom, L. A. Graham, S. H. Gray, M. M. Urist, and M. T. Hawn, “Descriptive operative reports: teaching, learning, and milestones to safe surgery,” Journal of Surgical Education, vol. 68, no. 6, pp. 452–458, 2011. [5] A. Vergis, L. Gillman, S. Minor, M. Taylor, and J. Park, “Structured assessment format for evaluating operative reports in general surgery,” The American Journal of Surgery, vol. 195, no. 1, pp. 24–29, 2008. work_ymilnjcrz5dk7ipbnyhhw62dxa ---- INFORMATICS/NEW COMPUTER TECHNOLOGY Image Is Everything: Pearls and Pitfalls of Digital Photography and PowerPoint Presentations for the Cosmetic Surgeon JOSEPH NIAMTU, III, DDS Oral/Maxillofacial and Cosmetic Surgery, Richmond, Virginia BACKGROUND. Cosmetic surgery and photography are inseparable.
Clinical photographs serve as diagnostic aids, medical records, legal protection, and marketing tools. In the past, taking high-quality, standardized images and maintaining and using them for presentations were tasks of significant proportion when done correctly. Although the cosmetic literature is replete with articles on standardized photography, this has eluded many practitioners in part due to its complexity. A paradigm shift has occurred in the past decade, and digital technology has revolutionized clinical photography and presentations. Digital technology has made it easier than ever to take high-quality, standardized images and to use them in a multitude of ways to enhance the practice of cosmetic surgery. PowerPoint presentations have become the standard for academic presentations, but many pitfalls exist, especially when taking a backup disc to play on an alternate computer at a lecture venue. OBJECTIVE. Embracing digital technology has a mild to moderate learning curve but is complicated by old habits and holdovers from the days of slide photography, macro lenses, and specialized flashes. Discussion is presented to circumvent common problems involving computer glitches with PowerPoint presentations. CONCLUSION. In the past, high-quality clinical photography was complex and sometimes beyond the confines of a busy clinical practice. The digital revolution of the past decade has removed many of these associated barriers, and it has never been easier or more affordable to take images and use them in a multitude of ways for learning, judging surgical outcomes, teaching and lecturing, and marketing. Even though this technology has existed for years, many practitioners have failed to embrace it for various reasons or fears. By following a few simple techniques, even the most novice practitioner can be on the forefront of digital imaging technology.
By observing a number of modified techniques with digital cameras, any practitioner can take high-quality, standardized clinical photographs and can make and use these images to enhance his or her practice. This article deals with common pitfalls of digital photography and PowerPoint presentations and presents multiple pearls to achieve proficiency quickly with digital photography and imaging as well as avoid malfunction of PowerPoint presentations in an academic lecture venue. J. NIAMTU, III, DDS HAS INDICATED NO SIGNIFICANT INTEREST WITH COMMERCIAL SUPPORTERS. CLINICAL PHOTOGRAPHY and academic presentations have undergone an exponential paradigm shift over the past 10 years.1 For decades before, clinical slide photography and carrousel slide lecture presentations were the gold standard in cosmetic surgery. The availability of megapixel digital photography, digital imaging systems, and computer-driven digital presentation programs has revolutionized teaching and learning. It has never been easier to take, standardize, and use high-quality controlled clinical images. Despite these changes, many practitioners have not adopted this technology or fail to observe the simple rules to ensure standardized image use. Most unfortunate is the fact that some of the best known and respected cosmetic surgeons propagate their work with substandard photographic quality. In the scope of things, this new technology is not overly expensive and has a mild to moderate learning curve.2 Lecturers continue to use poor-quality images and presentations because of a lack of attention to a small number of variables.
© 2004 by the American Society for Dermatologic Surgery, Inc. Published by Blackwell Publishing, Inc. ISSN: 1076-0512/04/$15.00/0. Dermatol Surg 2004;30:81–91. Address correspondence and reprint requests to: Joseph Niamtu, III, DDS, Oral/Maxillofacial and Facial Surgery, 10230 Cherokee Road, Richmond, VA 23235, or e-mail: niamtu@niamtu.com

There is no doubt that most of us take the path of least resistance, and there is significant initial work involved in converting from slides to digital. Because of this, many practitioners continue to give the same "canned" lectures repeatedly. This shortchanges the audience, as these presentations and slides are oftentimes not contemporary. Unfortunately, it is very difficult to add new controlled images and data to old slide lectures; thus, oftentimes people do not. Digital presentations, on the other hand, can be updated in a matter of seconds, and this author frequently assembles his lectures on the plane trip to the meeting. I have also changed a lecture minutes before going on stage because of additions or deletions by the previous speaker. I truly feel that when doctors convert to updated technology, their teaching, and their audiences' learning, will be more current and contemporary. In this case, everyone benefits. The good news is that this conversion needs to be done only one time. The most prohibitive factor is the "fear and trepidation" of having to scan the entirety of one's slide collection. As most seasoned lecturers maintain thousands of slides, this would be a task of awesome and fearful proportions.

Fallacy 1: You Need to Convert Your Entire Slide Collection to a Digital Format

This is the most common misconception that prevents adoption of digital technology. Scanning thousands of slides is simply impractical and almost always unnecessary. First, doctors with thousands of slides rarely use them all on a regular basis.
In reality, they use a small portion of their collection for routine lectures. I personally recommend scanning only the images that one simply cannot recreate. This would include "hall of fame" cases, rare pathology, surgical images, unusual lesions, etc. Most anything else can be recreated. For instance, I have for years lectured on chin surgery and had multiple canned lectures on the subject. Some of these slides are once-in-a-lifetime situations; thus, I scanned them. Instead of scanning all of the others, I simply photographed my next chin surgery with a digital camera. Now I not only had more contemporary images, but the quality was superior. I could edit them in many ways, such as improving brightness and contrast, cropping, changing hue and saturation, adding text and symbols, and placing multiple images on a single photo. This did not take long and immediately made my "chin lecture" better. I proceeded with the same strategy for other procedures and soon had abandoned the slides that I previously held dear to my heart. My decision to go "slideless" came about 7 years ago, and I have never looked back.3 I cannot tell you how much of a "purge for freedom" it was to take my previously coveted carousels of slides and dump them in the trash.

Pearl 1: Scan Only the Slides You Need and Then Move on With Digital Technology

Slide scanners vary from several hundred dollars to over a thousand dollars. My advice is either to purchase an inexpensive scanner, if you will not use it often, or to pay a professional laboratory to scan the slides that cannot be recreated.

Pearl 2: Tomorrow Is the First Day of the Rest of Your Life: Do Not Procrastinate

There are some basic armamentaria required to make and use digital images.4 Do not try to outrace technology, as you will always be behind. Do not plan on using your current digital equipment for the rest of your life; it will be outdated in a matter of years.
If you truly are pursuing excellence in teaching and learning, reinvesting in technology is merely part of the challenge. You have to bite the bullet and purchase a suitable digital camera. Currently, 4 to 6 megapixels is the high end for most affordable cameras. Nikon, Fuji, Canon, and others make high-end digital cameras that are designed more for the professional photographer. They have the advantages of interchangeable lenses (including macro and telephoto), metered lenses, expanded ports for accessory flashes, and other options used by professionals. They also have many more manual functions, again aimed at the advanced photographer. These types of cameras are very expensive and bulky; they are not easily transported or operated by staff and are, in the author's opinion, overkill for average cosmetic clinical photography. A number of "off-the-shelf" digital cameras are available for under $1,000, and these take excellent clinical photographs, including macro. There are some basic instructions that must be followed to control and enhance clinical photography. I lecture all over the world with these types of images and can attest to their adequacy. I currently use an upper-end digital camera that has been modified by an aftermarket company for clinical and macro digital photography. The camera is compact and lightweight, has no special bulky flashes or lenses, and functions wonderfully for 99.9% of cosmetic surgery photographic needs. A small camera of this nature is easily transported to multiple offices, the operating room, the emergency room, and home and on the road. The next tool that you will need is a laptop computer. I suggest using this laptop for all of your imaging; do not store images on any other computer.5 Transferring images between multiple computers is time consuming and confusing, and images are commonly deleted inadvertently.
By having all of your images on a laptop, you will never be without them and will know exactly where they are; you can always take them with you. The true power of digital technology is the ability to carry your entire "slide collection" with you at any time. Now you can create lectures and publications on the airplane or at the barber shop, at work, or at home. Regardless of what type of computer is used, one must religiously back up one's images on a regular basis. A lost or stolen laptop or a malicious virus can wipe out your entire "slide" collection, which is a sickening thought, let alone event. In addition, your computer will be used for your actual presentations. Although multiple presentation programs exist, Microsoft PowerPoint is currently the most popular and universal presentation program. The Windows XP operating system has made image manipulation and use easier than ever. Buying a laptop is like buying a car or a boat; one should buy the biggest and fastest that he or she can afford. In today's environment, several specifics are important. Digital images are memory intensive, and if video is used, even more power is required. The author recommends a processor of at least 1 gigahertz, a hard drive of at least 30 gigabytes, a video card with a minimum of 32 megabytes of video RAM, system RAM of at least 256 megabytes (512 to 1,024 megabytes being preferable), a CD writer (a DVD writer is preferable), and the capability to capture video. A serious problem has been encountered with even some of the highest-quality laptop computers when playing back digital video during a PowerPoint presentation. The video card setup on some name-brand, high-end computers may be such that when attempting to play a video in your PowerPoint presentation, you can see the video on your computer but not on the main projector screen.
The author has experienced and witnessed many presentations go awry because the video that was the crux of the presentation would not play in the lecture hall. The author does not endorse any specific computer but has not had this problem with the Dell 8100 and 8500 series laptops (www.dell.com). This may not apply to other Dell models. Make sure this conflict does not exist when considering purchasing a computer for digital imaging and presentations. If your particular computer does not show video on the projector, try toggling your display function key; sometimes the video will project on the projector but not on the laptop. If a digital camcorder is to be used, a FireWire (IEEE 1394) port on the computer is a big advantage for downloading video to the laptop. Most home-quality digital camcorders take excellent clinical video. Common camcorders have excellent macro capabilities and are versatile in many lighting conditions. A tripod or monopod is essential for making quality, stabilized clinical video. The author has used common small-format camcorders to film full-body procedures, such as abdominal fat harvest, as well as super-macro procedures, such as transconjunctival blepharoplasty. Most upper-end digital camcorders can do both with amazing quality. DVD recorders are in their infancy, and major compatibility issues exist. These discs are desirable because they can hold almost 5 gigabytes of data; this is very handy when dealing with large image or video files. Many images require editing; thus, some type of image-editing software is necessary. Many commercial digital imaging systems are available, but mastering an inexpensive Photoshop-type program can serve the same basic tasks. The author rarely archives a raw image but usually performs some type of editing, such as contrast or color correction, cropping, or resizing. In the "old slide days of the last millennium," making quality, standardized before-and-after slides was a serious task.
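Today that same composite is a few lines of scripting in any image-editing tool. As a hedged illustration only (Python and the Pillow library are assumptions of this sketch, not tools named in this article, and the file names are hypothetical), the resize-and-stitch step can be written as:

```python
from PIL import Image  # Pillow imaging library (assumed installed)

def side_by_side(before_path, after_path, out_path):
    """Stitch a before image and an after image into one composite.

    The "after" image is scaled to the height of the "before" image
    so the pair reads at a glance, then both are pasted onto a
    single white canvas, side by side.
    """
    before = Image.open(before_path)
    after = Image.open(after_path)

    # Scale the "after" image to match the "before" image's height.
    height = before.height
    scaled_width = round(after.width * height / after.height)
    after = after.resize((scaled_width, height))

    # Paste the two images onto one canvas, before on the left.
    composite = Image.new("RGB", (before.width + after.width, height), "white")
    composite.paste(before, (0, 0))
    composite.paste(after, (before.width, 0))
    composite.save(out_path)

# Hypothetical usage:
# side_by_side("smith_preop.jpg", "smith_postop.jpg", "smith_composite.jpg")
```

Any scriptable editor can do the same; the point is that the once-laborious two-projector comparison is now a one-click (or one-function) operation.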
With image editing, one simply opens both images, makes them the same size, and, with a single mouse click, stitches them together. There is no longer a need to lecture with two slide projectors.

Fallacy 2: Presentation Programs Are Complex and Difficult to Learn

The author has heard many practitioners state that they are too old or do not have enough time to learn how to use PowerPoint. This is almost always far from the truth. If a person can get through a residency program and learn cosmetic surgery, he or she is certainly capable of learning PowerPoint. As to the age question, the author has had some very rewarding experiences teaching older practitioners the basics of PowerPoint.1 I have seen them blossom and truly appreciate the freedom and accomplishment of learning something new. Age is only important in wine and cheese! A digital projector is also a handy thing to own if you lecture frequently. It is particularly useful for local presentations, marketing seminars, and business meetings. Most lecture venues have powerful digital projectors, and thus owning one is not a necessity. The author has found that local hospital audiovisual departments are often willing to loan projectors to doctors on staff.

Pearl 3: The Best Way to Learn PowerPoint Is to Play With It

In this age of computer games, learning can be fun. The author suggests using the included PowerPoint tutorials; one should simply play with the program. Make a simple slide, and then browse the various menus to see what effects and changes are possible. You will be amused and enthralled with the possibilities.
This author has published several journal articles that can also assist clinicians with the basics of making PowerPoint presentations.1,6

Pearl 4: Keep Your Presentations Simple

Because of the novelty of digital presentations, many presenters have been overly aggressive in using various enhancements such as animations, transitions, and sounds. A decade ago, playing the "screeching car tires" sound when your title appeared was chic; now it is sophomoric. The initial key to any presentation is an acceptable, uncluttered background. Many standard backgrounds are available with PowerPoint. These default backgrounds are usually set up with proper contrast between the font (text) and the background color. In a large lecture hall, this contrast is imperative for viewing detail. A common error is to include too much small text on a PowerPoint slide. Cluttered slides detract from the content and confuse the audience. Because slides no longer carry an extra cost, there is no reason to use cluttered ones. Animation is the single best and worst thing that has happened to digital clinical presentations. Although creative animation can emphasize a point or guide the audience through a process, its overuse can seriously detract from presentation content. In addition, overuse of animation can complicate the timing and progression of a presentation, as well as make it longer. Inserting or dragging and dropping images into a slide in a PowerPoint presentation takes only a few mouse clicks. The image is scalable, meaning that it can be resized without losing perspective. Multiple images may be inserted into a slide, and the images can be edited directly in PowerPoint. It is easy to enhance the color and contrast of an image as well as to crop the borders. Using the PowerPoint drawing tools, arrows, text, and symbols can be added to images. In the author's opinion, the single most important advancement in digital presentation technology has been the addition of video.
The use of movies in a "still" lecture adds to a presentation in much the same way that television is superior to radio. One cannot fully appreciate a cosmetic procedure from still images in the same manner as from real-time video. Making clinical video movies with a home digital camcorder has been discussed. Transferring them to the computer is a separate task and does have a learning curve. Many entry-level video-editing programs are available for under $100, and most new digital camcorders come with editing software. The user-friendliness of video capture and editing software has increased exponentially over the past years, and it is actually fun to become an academic producer. One caveat: do not count on using the video clips from your digital still camera for serious presentations. Many digital still cameras are now capable of capturing short video movies as well as images. These movie clips are very low resolution and may be suitable for e-mailing personal movies, but they are below the standard required for a full-screen presentation. Raw, uncompressed video is extremely memory intensive, and thus compression is usually required for seamless video integration. The MPEG-1 compression scheme is one of the most common types of video compression in use today. This format works well in PowerPoint presentations and is not so memory intensive as to slow down the presentation. Because the video is compressed, it does not usually fill the computer or projector screen, but when projected in a large room, it is greatly magnified and easy to see. Other video formats, such as MPEG-2 or AVI, are very memory intensive and do not currently integrate seamlessly with PowerPoint. These video clips are full screen but are truly memory intensive and are not recommended for beginners.
Fallacy 3: If a PowerPoint Presentation Works on Your Computer, It Will Work on Other Computers

Unfortunately, there are many variables and compatibility issues that can turn a well-rehearsed lecture into a presentation nightmare. Most of us who attend meetings have seen more than one presenter fumble around trying to run a presentation from the podium. In the worst case, some are not able to present at all. This is especially true if video is incorporated into a presentation.

Pearl 5: Understanding the Basics of How PowerPoint Handles Images and Video Files Will Ensure a Proper Backup Strategy and Allow Seamless Compatibility With Other Similar Computers

Although most presenters take their laptops to a meeting, it is not strictly necessary if you have your presentation backed up on CD or comparable media. It is a great feeling to walk into an auditorium with a CD in your pocket instead of a bag of rattling carousels. If you are using video in your presentation, several points are very important for successful replay from a CD. When you insert an image into a PowerPoint presentation, the image becomes embedded by default. This means that the digital code of that image becomes part of the presentation, and your pictures will always be there. Sometimes you will see a presenter show a PowerPoint slide on which, instead of an image, a geometric icon appears. This is because the image was linked instead of embedded by default. The best way to avoid this is to follow the PowerPoint menu commands INSERT and then PICTURE FROM FILE. This will always embed the image. Video, on the other hand, is linked to the presentation and not actually integrated into it. This means that if you make a PowerPoint presentation on your laptop and insert a video, it will play on your computer without a problem.
If you back up the presentation on a CD without the video, your presentation will play, but the video will not work. Again, the author has seen presenters unable to give a featured lecture that they spent many hours preparing because of this problem. To circumvent this problem, the PowerPoint presentation and the video clips must reside in the same folder before being backed up on CD. For instance, if I have a talk on facelifts and I want to show some facelift footage, I first need to create a folder on my hard drive to hold both the facelift PowerPoint presentation and the facelift video clips. I will create a folder anywhere on my C:\ drive and will arbitrarily call it "facelift PowerPoint" (the name does not matter, just that you have a dedicated folder). The next step is to place or copy the videos that you wish to use in that presentation into that folder. The last step is to create your PowerPoint presentation and insert the video clips that already reside in that folder. In this way, they are linked to that folder, and PowerPoint will look for that link in that folder, whether it is on your hard drive or on a CD. The PowerPoint presentation must also be saved in this dedicated folder. Failure to observe this order can lead to great frustration on the podium. Finally, the best-laid plans can go awry. You may make a flawless digital presentation only to have it fail because of unforeseen problems in the lecture hall, such as computer compatibility issues. To combat compatibility issues, it is a good idea to plan ahead. If you are attending a large meeting, find out who is running the audiovisual setup and call ahead to inquire about your specific computer model for any known issues. Also, when checking in to a hotel at a conference, the author always proceeds directly to the lecture hall or speaker-ready room to test the computer, CD, and projector personally. When in doubt, make multiple backup copies as well as bring your own computer.
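The file shuffling behind this dedicated-folder discipline can be automated. A minimal sketch, assuming some scripting comfort (Python, the function name, and the file names below are illustrative, not anything this article prescribes); it only gathers the presentation and its clips into one folder, ready for burning; the clips must still be inserted into the presentation from that folder so the links point inside it:

```python
import shutil
from pathlib import Path

def assemble_presentation(ppt_file, video_files, folder):
    """Copy a PowerPoint file and its video clips into one dedicated
    folder, ready to burn to CD.

    Note: in practice the clips should be copied in BEFORE they are
    inserted into the presentation, so PowerPoint's links point at
    this folder rather than at scattered locations on the hard drive.
    """
    folder = Path(folder)
    folder.mkdir(parents=True, exist_ok=True)
    for clip in video_files:
        shutil.copy2(clip, folder)   # copy each linked video clip
    shutil.copy2(ppt_file, folder)   # the presentation lives here too
    return sorted(p.name for p in folder.iterdir())

# Hypothetical usage:
# assemble_presentation("facelift.ppt", ["facelift_clip1.mpg"],
#                       r"C:\facelift PowerPoint")
```

Burning that one folder to CD then carries both the presentation and every clip it links to.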
Pearl 6: Never Keep Your Laptop and Your Backed-Up Presentations in the Same Bag

This pearl is made under the assumption that anyone wishing to avoid the "academic suicide" of losing all of his or her digital images is smart enough to back up the computer frequently. The author has personally seen a case in which a keynote speaker at a national meeting lost a carry-on bag containing not only the computer but the backup CDs as well. Do not fall into that trap; pack your backup discs separately.

Pearl 7: The Best Doctors Take Many Pictures

This is a universal observation that I have made during my 20-year career in facial surgery. Surgeons who are passionate about their work document it very well and use those images to better their skills, as well as for academic presentations and marketing.

Pearl 8: Standardized, Quality Images Will Enhance the Credibility of the Presenter

Many articles have been written about standardization in clinical photography.7–15 Although we all want to take the best images, overkill can exist in this area. If one is doing a rigorous scientific study based on clinical photographs, then absolute standardization is a prerequisite. Standardization devices such as head holders, multiple remote flashes, flash reflectors, and umbrellas may be required. One can literally invest thousands of dollars and fill an entire room with complex and sophisticated photographic paraphernalia. For the average cosmetic surgeon in the trenches of private practice, this level of proficiency is not necessary. It is necessary to take relatively standardized, consistent images. These images should be consistent in distance, white balance, background, and lighting. With a little forethought, some simple standardizations, and a good-quality digital camera, this is easily accomplished.
In pursuit of "relative standardization," this author has placed a white, nonglossy poster board on the back of the door in each room of the office (Figure 1). White is used because, with a colored background, printing the images can quickly deplete your colored ink. Setting up each room with a standardized background is convenient, as it is difficult to channel all patients to a dedicated photography room in a busy practice. I promise that you will take more images when you can do it in any room. Nothing looks worse than a photo taken with a wooden door as the background, or with a patient sitting in an exam chair with counters, instruments, and other distractions in the background. Preferably, the room or ambient lighting should be consistent in these rooms. If it is not, then adjustments to the camera may be necessary. The next key is to standardize your camera and subject in terms of poses, distance, and flash. It is imperative to take all of your images in a standardized manner. Each patient should be positioned in the same manner for a given pose. The focal distance can be standardized by securing a piece of dental floss or a chain to the bottom of your camera and holding it to the nose (or other appropriate area) of your subject (Figure 2). This ensures that you will be at the same distance from the patient in all views. An alternative is to make a mark on the floor for the photographer to stand on. Regardless of the means, focal-distance standardization is imperative. It will save you hours in postprocessing because your images will be the same size. The pose of the patient relative to the camera and background is the next most important factor. Most practitioners take, at a minimum, frontal, right and left oblique, right and left lateral, and frequently posterior views of a patient. Various specialties and specialists require other poses.
Most common facial views are taken with the patient's Frankfort horizontal plane (an imaginary line from the external auditory canal to the infraorbital rim) parallel to the ground. Observing this will prevent a "chin-up" or "chin-down" view. Taking a preoperative lateral photograph of a facelift patient with the chin down and then showing the same patient postoperatively with the chin elevated can imply artificial results and undermines credibility. Clinical imaging is no place for trick photography. The frontal and posterior views are the most difficult ones in which to eliminate shadows. If the camera is straight on to the subject, minimal shadow is apparent. Hair, ears, and clothing can cause shadowing. Solutions include taking digital images without the flash or using an accessory slave flash, which can be purchased inexpensively at camera stores.

Figure 1. A white poster board secured to the back of a door in each treatment room provides a convenient photographic background.

Figure 2. A chain or string attached to the camera can help standardize the focal distance for each patient.

Figure 3. Aligning the soft-tissue nasion with the lacrimal caruncle will standardize the oblique facial views.

The oblique view is perhaps the most important view when evaluating facial structures. It is also the view that is most commonly standardized. An easy means of standardizing the oblique facial view is to line up the soft-tissue nasion with the lacrimal caruncle of the contralateral eye (Figure 3). When taking oblique or lateral photographs, shadow is very problematic but is very correctable. If the direction of the flash passes over anatomic projections (nose, chin, breasts, etc.), a shadow is cast (Figure 4).
To eliminate shadow in the oblique or lateral view, the photographer merely needs to rotate the camera sideways so that the flash points toward the side being photographed. If one is taking a right lateral facial profile image, the camera should be rotated vertically so that the flash is on the same side as the patient's nose (Figure 5). This prevents the nose, chin, neck, etc. from blocking the flash and casting a shadow. The camera is rotated vertically in the other direction when photographing the left side. Another alternative for eliminating profile shadow is to turn off the camera flash and rely on ambient lighting (Figure 6). When photographing anatomy such as the roof of the mouth, the nares, or the submental or inframammary area, the camera is held upside down so that the flash comes from below, thus preventing a shadow (Figure 7). Another commonly experienced photographic pitfall that erodes operator credibility is taking the preoperative picture without flash or in poor lighting and the postoperative picture with flash or increased lighting. No flash or poor lighting accentuates wrinkles, acne, orbital fat prolapse, and other flaws; the difference between the same image taken with and without a flash can be so dramatic that it could be passed off as a surgical result (Figure 8). Several other details should be considered for repeatable images. Pay attention to jewelry, glasses, and makeup. Always take a series of preoperative images with the patient in full makeup and, immediately preoperatively, without makeup. If the patient wore glasses in the preoperative image, he or she should wear them in the postoperative image. Jewelry that is large, obtrusive, or distracting should be removed for photography. When taking facial views, a collarless shirt or blouse is preferable. A neutral-colored patient gown is preferred, as a red shirt (or other bright color) can reflect color onto the patient's face, skewing the true hue and saturation of the image.
Patients should be reminded not to smile and should always look into the camera or, in oblique or lateral views, should focus on a standardized object. The author glues a red dot on the wall and asks the patient to stare at it in the oblique and lateral views. A step stool is also necessary, as the patient may be considerably taller or shorter than the photographer, and pictures taken looking up or down will be distorted. Another problem is the blinking patient. It is very frustrating that some patients simply cannot be photographed with a flash camera without blinking. The author may take 30 images to get one or two acceptable nonblinking images. Using the red-eye-reduction function available on many digital cameras may assist; turning off the flash or photographing these patients next to a window or outdoors may be necessary. An additional aid for photographing individuals who blink at the flash is to have them close their eyes and count to three. Tell them to open their eyes on "three." By snapping the picture on the three count, their eyes will be open. In the past, film processing meant that you frequently had "surprises" waiting for you when you got your film back from the laboratory. With digital cameras and preview screens, there is never an excuse to take a poor image.

Figure 4. Taking clinical photographs with traditional flash positions can cause unwanted shadowing on the background.

Figure 5. Repositioning the camera so that the flash changes direction can eliminate problematic shadows.

Figure 6. Disabling the flash is an effective means of eliminating shadows and works best with adequate ambient room lighting and/or manual camera adjustments.

Figure 7. Inverting the camera so that the flash shines upward is a convenient means of illuminating difficult photographic areas such as the submental area, the roof of the mouth, and the inframammary fold.

Figure 8. This image shows the same patient photographed with the same camera, without a flash on the left and with a flash on the right. Preoperative and postoperative photographs should always be taken with the same lighting; failure to do so can make such a drastic difference that it can appear to be a preoperative and postoperative result.

For macro photography of structures such as nevi, eyelids, and lesions, the paradigm has changed. With film cameras equipped with 100-mm macro lenses and ring or point flashes, the photographer used to get as close to the subject as possible. With most digital cameras, the opposite is true. These cameras contain software that is metered for automatic conditions, and most cannot compensate for ultramacro distances. If you get too close to the subject, you will overexpose some areas and block the flash in others, causing a shadow. For instance, if you wanted to make a macro photograph of the lateral canthal area and held the camera very close, you would have areas of overexposure and underexposure (Figure 9A). To compensate, the trick is to stay back a foot or so from the subject and use the zoom to get close to the area. By doing this, you are far enough away for the flash to disperse over a larger area (Figure 9B). With digital editing, you can crop away any extraneous anatomy. If your picture was taken at a sufficiently high resolution, your image will be "macro" after cropping out the unwanted structures (Figure 9C). Using Figure 9 as an example, if you want a close-up of the lateral canthus, stand back about a foot, zoom the camera in all the way, and take the picture. In your image editor, you select the area that you wish to keep and crop the rest. The result is a macro image of only the anatomy that you wish.
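The crop step itself is trivial in any scriptable editor. As a hedged sketch (again assuming Python with the Pillow library, which this article does not name; file names and coordinates are hypothetical), cropping a zoomed, high-resolution frame down to the region of interest looks like this, and the cropped region of a multi-megapixel original still holds ample pixels for full-screen projection:

```python
from PIL import Image  # Pillow imaging library (assumed installed)

def crop_to_region(src_path, box, out_path):
    """Crop a high-resolution, zoomed-in photograph down to the
    region of interest (e.g., the lateral canthal area).

    box is (left, upper, right, lower) in pixels. Standing back and
    zooming avoids flash problems; cropping then recovers the
    "macro" framing without the exposure artifacts.
    """
    image = Image.open(src_path)
    region = image.crop(box)
    region.save(out_path)
    return region.size  # (width, height) of the cropped image

# Hypothetical usage on a 3000 x 2000 pixel frame:
# crop_to_region("canthus_zoomed.jpg", (1200, 600, 2200, 1400),
#                "canthus_macro.jpg")
```

A 1000 x 800 pixel crop from a 6-megapixel frame, as in the hypothetical call above, easily fills a projected slide.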
This was not possible with slide photography; in the past, you had to get close, because what you saw was what you got (unless you sent your slide off to the processing laboratory to be enlarged, cropped, and printed). Most off-the-shelf digital cameras are configured for a broad range of photography, from macro to infinity. Because of this, their lenses are less versatile than traditional 100-mm macro lenses. Because most clinicians previously used slide photography with macro lenses, they still attempt to take images with the lens close to the patient. Doing so can distort the anatomy closer to the camera (Figure 10A). Figure 10B shows the same subject taken with the same camera, but the photographer has backed several feet away from the subject and has used the lens to zoom in on the subject, thus maintaining the same anatomic aspect ratio. To maintain proper anatomic ratios, take sample macro pictures of a commonly photographed object, such as the eye, with a millimeter ruler placed next to it, and find the focal distance and zoom that replicate accurate measurement.

Figure 9. (A) The underexposed and overexposed areas that result when the lens is too close to the subject. (B) A longer distance when taking a macro shot of the lateral canthus, with adequate exposure. (C) The same image, now cropped to show only the desired area.

Figure 10. (A) Lens distortion by a typical digital camera when the camera is held too close to the subject. (B) The same picture taken with the same camera at a distance of 3 feet; zooming in on the subject eliminates the distortion.

Pearl 9: Archive Your Images in a Convenient Manner

Just taking digital images is of little use if you cannot find them. Although proprietary software exists exclusively for digital image archiving, it is unnecessary.
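For readers comfortable with a little scripting, the point-and-click folder scheme described in the steps that follow can also be created and searched programmatically. A minimal sketch (Python is an assumption of this sketch, not a tool this article prescribes; the folder and patient names are the same illustrative ones used in the text):

```python
from pathlib import Path

def make_patient_folder(root, procedure_path, patient):
    """Create MASTER IMAGES/<procedure...>/<PATIENT>, with PREOP and
    POSTOP subfolders, mirroring the directory tree described here."""
    folder = Path(root, "MASTER IMAGES", *procedure_path, patient)
    for sub in ("PREOP", "POSTOP"):
        (folder / sub).mkdir(parents=True, exist_ok=True)
    return folder

def find_patient_folders(root, patient):
    """Find every folder for a patient across all procedure
    categories -- the scripted equivalent of a Windows search."""
    return sorted(p for p in Path(root, "MASTER IMAGES").rglob(patient)
                  if p.is_dir())

# Hypothetical usage:
# make_patient_folder("C:\\", ["LASER", "CO2"], "SMITH, MARY")
# find_patient_folders("C:\\", "SMITH, MARY")
```

Because a patient with multiple procedures ends up with folders in several categories, the recursive search is what ties the archive together.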
This author has been using digital photography for clinical patients for a decade and has thousands of images archived. A convenient means of establishing an archiving system is to create a dedicated directory on your hard drive. To create a new directory, proceed to your root directory (usually C:\). In most versions of Microsoft Windows, this can be located by clicking on ‘‘My Computer.’’ The following steps show how to make new folders.

1. While in your C:\ directory, select FILE, then NEW, and then FOLDER (Figure 11).
2. To rename the folder, right-click on it, select RENAME, and type a new name.

The next step is to make a main folder where all your images will reside. This can be called anything you wish, but for example, we will call it MASTER IMAGES. After you make the MASTER IMAGES folder, you need to create a new subfolder for each main procedure you perform. An example is shown in Figure 12. If you desire, you can make subfolders for each procedure. In other words, you may have a folder for LASER, and under that you may make subfolders for CO2, ERBIUM, PULSED DYE, etc. The final step is to make subfolders for your patients and to file them in the appropriate folders. If Mary Smith is a laser patient, you go to the MASTER IMAGES folder, then to the LASER folder, and then to the CO2 folder. In that folder, you will make a new folder for SMITH, MARY. You will now place all of Mary's images in this folder. If you wish, you can continue with subfolders for PREOP, POSTOP, etc. (Figure 13). By using this scheme, it is easy to find a given patient by looking in that procedure category folder. Many patients will have multiple procedures, and thus, their respective images will reside in multiple folders. By simply doing a Windows search, you can find all of the images for any patient or procedure in any folder.

Conclusion

Clinical photographic standardization is frequently taken for granted.
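The same directory scheme can also be scripted. The following is a minimal sketch in Python; the category and patient names are illustrative examples from the text, and the tree is built in a temporary directory for demonstration rather than at the C:\ root.

```python
import tempfile
from pathlib import Path

# Build the archive in a temporary directory for illustration; in practice
# the root would be a dedicated folder on your hard drive (e.g., C:\MASTER IMAGES).
root = Path(tempfile.mkdtemp()) / "MASTER IMAGES"

# Hypothetical procedure categories, with optional subcategories.
categories = {
    "LASER": ["CO2", "ERBIUM", "PULSED DYE"],
    "BLEPHAROPLASTY": [],
}

for procedure, subtypes in categories.items():
    base = root / procedure
    base.mkdir(parents=True, exist_ok=True)
    for sub in subtypes:
        (base / sub).mkdir(exist_ok=True)

# File a hypothetical laser patient, with PREOP/POSTOP subfolders.
patient = root / "LASER" / "CO2" / "SMITH, MARY"
for stage in ("PREOP", "POSTOP"):
    (patient / stage).mkdir(parents=True, exist_ok=True)

print((patient / "PREOP").is_dir())  # -> True
```

Finding every image for a given patient then reduces to a recursive search of the root folder, which is exactly what the Windows search described above performs.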
One only has to look at some contemporary clinical journals and academic presentations to realize that many otherwise astute practitioners unfortunately pay little attention to quality clinical photography.

Figure 11. Creating a new folder or subfolder is the key to successful archiving.

Figure 12. A master folder for pictures with subfolders for each main procedural category serves to organize images.

Figure 13. The final stage of the directory tree is the patient subfolder.

Digital photography has changed the paradigm for taking clinical images. Several distinct differences exist between digital cameras and previously used film cameras with macro lenses. Failure to realize these differences can result in poor-quality images. Following several simple rules of photographic standardization can greatly enhance the clinical images of the average practitioner. As digital cameras and video continue to evolve, the standardization of clinical photography will become simpler. No matter how advanced the equipment becomes, the surgeon's attention to the quality and standardization of his or her pictures will always be necessary to evaluate one's work critically. Those practitioners who use PowerPoint presentation software should heed several basic tenets, including backup strategies to prevent problematic malfunctions and compatibility issues.

Integrating Digital Datasets into Public Engagement through ArcGIS StoryMaps

Matthew D. Howland, Brady Liss, Thomas E. Levy, and Mohammad Najjar

ABSTRACT

Archaeologists have a responsibility to use their research to engage people and provide opportunities for the public to interact with cultural heritage and interpret it on their own terms. This can be done through hypermedia and deep mapping as approaches to public archaeology.
In twenty-first-century archaeology, scholars can rely on vastly improved technologies to aid them in these efforts toward public engagement, including digital photography, geographic information systems, and three-dimensional models. These technologies, even when collected for analysis or documentation, can be valuable tools for educating and involving the public with archaeological methods and how these methods help archaeologists learn about the past. Ultimately, academic storytelling can benefit from making archaeological results and methods accessible and engaging for stakeholders and the general public. ArcGIS StoryMaps is an effective tool for integrating digital datasets into an accessible framework that is suitable for interactive public engagement. This article describes the benefits of using ArcGIS StoryMaps for hypermedia and deep mapping–based public engagement using the story of copper production in Iron Age Faynan, Jordan, as a case study.

Keywords: GIS, photogrammetry, deep mapping, public archaeology, multimedia, Jordan

Los arqueólogos tienen una responsabilidad de utilizar su investigación académica para entablar una conversación con el público general y así ocasionar oportunidades para que el público pueda interactuar con su patrimonio cultural y generar una interpretación de esta herencia cultural en sus propios términos. Propongo que esta arqueología pública se puede hacer a través de tecnología como “hypermedia” y “deep mapping.” En la arqueología del siglo XXI, los académicos pueden depender de varios avances tecnológicos para promover el interés y compromiso del público, incluyendo la fotografía digital, los sistemas de información geográfica y los modelos tridimensionales. Estas tecnologías, incluso cuando se recopilan para análisis o documentación, pueden ser herramientas valiosas para educar e involucrar al público con los métodos arqueológicos y cómo estos métodos ayudan a los arqueólogos a aprender sobre el pasado.
A la larga, existe un gran beneficio para la narrativa académica cuando se hacen más accesibles estos métodos y resultados arqueológicos que involucran al público general y a los beneficiarios interesados. ArcGIS StoryMaps es una herramienta efectiva para integrar los datos digitales de manera que sean accesibles al público. Este ensayo se basa en la historia de la producción de cobre durante la Edad de Hierro en Faynan, Jordania para describir los beneficios de usar ArcGIS StoryMaps con hypermedia y deep mapping con el objetivo de fomentar la participación pública.

Palabras clave: SIG, fotogrametría, mapeo profundo, arqueología pública, multimedia, Jordania

Archaeologists have a responsibility to educate the public about the knowledge they produce and the methods they use to do so (Kintigh 1996). Despite this responsibility, archaeologists have often fallen into modes of archaeological practice that treat the public's lack of engagement with archaeological data as evidence of either their lack of interest in or their inability to understand serious archaeological practice (Grima 2016). The latter, categorized and sometimes critiqued as the “deficit model” of public archaeology, implicitly suggests that archaeologists view the public as needing education in order to understand how to appreciate the archaeological record (Merriman 2004; Richardson and Almansa-Sánchez 2015). The “multiple perspective model” is an alternative approach that frames the public as central to knowledge generation and relies on the audience to bring their own experience to the table. In doing so, differing audiences would ideally participate in and enjoy the creation of archaeological knowledge through hands-on or interactive engagement (Merriman 2004; Williams et al. 2019).
PUBLIC ARCHAEOLOGY, HYPERMEDIA, AND DEEP MAPPING

Following the multiple perspective model, digital and 3D approaches to cultural heritage can provide opportunities for the public to engage directly with the archaeological record (Williams et al. 2019). Multimedia approaches have strong potential for engaging the public in the processes of archaeological data collection, as a topic related to but separate from learning about the data (Baione et al. 2018; Pavlidis et al. 2017). In addition to increasing the accessibility of data acquisition, multimedia-focused projects can generate a reflexive and inclusive atmosphere for interpretation of archaeological data. In particular, 3D visualization has immense potential for generating an immersive experience for members of the public that mimics the reality of experiencing a site (Berggren et al. 2015; Forte 2010, 2011, 2014; Forte and Siliotti 1997; Garstki et al. 2019; Knabb et al. 2014; Levy, Ben-Yosef, and Najjar 2014) or as a framework for storytelling (Bonacini et al. 2018; Hupperetz et al. 2012; Smith et al. 2019; Srour et al. 2015).

Advances in Archaeological Practice 8(4), 2020, pp. 351–360. © The Author(s), 2020. Published by Cambridge University Press on behalf of Society for American Archaeology. This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution, and reproduction in any medium, provided the original work is properly cited. DOI:10.1017/aap.2020.14
Furthermore, the use of hypermedia—documents in which topics, information, and multimedia elements are linked in a text and available for free-form exploration rather than strictly sequential storytelling—can amplify this potential, as the use of interactive links and features allows users to create their own path through the data and generate their own interpretations (Bertemes and Biehl 2009). These hypermedia documents should ideally be interactive and present a baseline of archaeological knowledge that provides avenues for further exploration of particular topics. This format can facilitate engagement by users of differing backgrounds and familiarity with the information, all while featuring many types of archaeological data and information represented in varying ways (Pujol et al. 2012). Fortunately, the types of data that allow for this type of engagement are increasingly available. Relatedly, the concept of “deep mapping” may serve as a framework in which archaeologists can bring the data that they collect and analyze for research purposes to bear in engaging the public in interactive ways. Narrative is necessarily temporal and spatial, as stories are always situated in these ways (Bodenhamer 2015). As scholars of time and space, archaeologists are well positioned to address this challenge of situating a story. Yet, beyond simply illustrating what happened when, deep mapping is a process of providing multiple layers of representations and multiple forms of media in a way that is by definition not static and may tell multiple stories (Earley-Spadoni 2017). Narrative from the perspective of deep mapping involves the use of various forms of documentation that come together cohesively while not forsaking the individual threads of evidence for the whole (Bodenhamer 2015)—similar to hypermedia. A deep map should also be multiscalar, in both space and time (Roberts 2016).
In this as well, a relationship to hypermedia is clear, in that the exploration of many datasets of different types and scales can both expose and generate new understanding. Ultimately, there is a need for approaches to public archaeology that make use of digital datasets in engaging and interactive ways. Earley-Spadoni (2017) highlights Esri's ArcGIS StoryMaps as one platform that may allow for the type of interactive, hypermedia deep mapping that can serve as a way to combine many threads of evidence into digital storytelling.

ARCHAEOLOGY IN A DIGITAL AGE

The types of multimedia archaeological datasets that can catalyze interactive engagement by the public with the processes of archaeological investigation and with the actual archaeological record itself are now commonly collected by archaeological projects as a standard practice. These datasets, though typically collected to facilitate archaeological interpretation and documentation, can also serve as the basis for effective and engaging public outreach. As digital tools are becoming a standard part of the archaeological tool kit, there are growing opportunities for involving the public in the archaeological process through exposure to these methods and datasets. One of these types of data is geographic data, collected, managed, and analyzed with geographic information systems (GIS). GIS software packages are used by nearly all archaeological projects today. Archaeology, as a fundamentally spatial field, requires the use of some way to track and perform analysis on the locations of artifacts and sites. GIS is often used as a framework for analyses in landscape archaeology (Howey and Brouwer Burg 2017; Parcak 2017). To that end, two types of analysis of how people interact with their landscape through sight and movement, visibility (e.g., Bernardini et al. 2013; Dungan et al. 2018), and cost path analysis (Gustas and Supernant 2019; Taliaferro et al.
2010) are frequently applied to archaeological datasets. Even more common is the use of GIS for spatial database management, where GIS allows archaeologists to perform typical spatial documentation but in a more efficient manner (Howey and Brouwer Burg 2017; Verhagen 2017). In practice, the use of GIS for storing and maintaining spatial data often necessitates a rigorous, digital data collection methodology (which may include photogrammetry as a complementary method for top plan drawing; Berggren et al. 2015; Levy and Smith 2007; Olson et al. 2013). In addition, the generation of maps for publication is nearly universal among field projects. Thus, spatial data collected in a GIS framework are stored by most institutions engaged in archaeological work, though frequently published in only limited ways. Indeed, GIS is often applied by archaeologists for data collection or analysis rather than for outreach or engagement, despite its potential in these realms (Earley-Spadoni 2017).

The generation of 3D models of archaeological units and sites, though not necessarily a standard practice, is also increasingly common. Archaeological projects most often collect 3D data through laser scanning or digital photogrammetry, either of which can be applied terrestrially or aerially with varying degrees of viability (Howland 2018). Terrestrial photogrammetry, as alluded to above, can be a valuable tool for generating spatial data over the course of excavation and documenting its progress (De Reu et al. 2014; Howland et al. 2014a; Olson et al. 2013; Peng et al. 2017). Generating 3D models from images taken from the ground is also likely the most cost-effective method of generating 3D data (Haukaas and Hodgetts 2016). Aerial photogrammetry, though somewhat more expensive as it requires an elevated camera platform, has seen dynamic growth as a tool of archaeological 3D modeling in recent years.
This approach has seen widespread use for 3D documentation of sites (Carvajal-Ramírez et al. 2019; López et al. 2016; Sauerbier and Eisenbeiss 2010). Also common is the use of photogrammetric modeling for the generation of spatial data that facilitate GIS-based mapping (Hill et al. 2014; Howland et al. 2014a; Reshetyuk and Mårtensson 2016; Uysal et al. 2015; Verhoeven et al. 2012). In short, digital photogrammetry is already widely applied and likely to become even more common due to its cost-effectiveness (Fernández-Hernandez et al. 2015; Howland 2018). As with many digital tools, the extent to which projects will be able to apply 3D technology depends on their funding and hardware resources. However, the decreasing cost and ease of digital photogrammetry mean that even less well-funded projects should be able to collect some amount of 3D data in the field. In general, the proliferation of archaeological 3D data collections provides excellent opportunities for their distribution and use in public outreach and storytelling, though the availability of these data does not necessarily result in quality public archaeology. Archaeologists have recognized the vast potential of GIS and 3D data collection for documentation and analysis, but they are only beginning to take full advantage of the capability of photogrammetric 3D models for public-facing interactive engagement (Earley-Spadoni 2017). Often, digitized collections are not shared widely with the public, as they are part of active, unpublished research projects (Scopigno et al. 2017). However, for projects interested in engaging the public, sharing 3D data is straightforward and can be free.
For example, Sketchfab, an online 3D model hosting platform with free and paid tiers, can be used to good effect for providing the public with access to 3D models of archaeological artifacts and sites (Baione et al. 2018; Means 2015; Scopigno et al. 2017). Sketchfab allows for some degree of explanation and contextualization of models within the platform and can also allow for more immersive virtual reality and augmented reality experiences (Ellenberger 2017). However, publication of 3D models as individual files in an online database fails to appropriately contextualize the artifacts within their archaeological, geographic, cultural, or historical framework (Lloyd 2016). As such, even publishing of archaeological 3D data to the public may not take full advantage of the opportunities provided by the increasing availability of these datasets.

ARCGIS STORYMAPS

Esri's ArcGIS StoryMaps is an online digital storytelling platform centered on situating digital datasets in a narrative format. The platform allows content creators to add text, photographs, videos, 3D models, and maps created using Esri's online mapping interface, ArcGIS Online, to a web page where users can access additional content by scrolling down through different slides. As such, StoryMaps can be a useful platform for publishing digital archaeological data, situated appropriately with contextual information that users can explore according to their own interest. Overall, ArcGIS StoryMaps can be an effective tool for education and digital engagement for any number of public outreach projects (Antoniou et al. 2018; Cope et al. 2018; Kallaher and Gamble 2017; Strachan and Mitchell 2014). Within archaeology, however, StoryMaps has been the subject of only limited use despite the application's apparent suitability for archaeological storytelling (Alemy et al. 2017; Amico 2019; Malinverni et al. 2019).
We aim to consider the viability of StoryMaps for archaeological public outreach and ultimately suggest that the platform can be a powerful tool for archaeologists, primarily based on three main characteristics that recommend its use to scholars interested in digital storytelling in a hypermedia or deep mapping context: its ease of use, its compatibility with many different types of datasets, and its interactivity. Evaluating ArcGIS StoryMaps requires a look at not only its utility but also its cost-effectiveness and viability vis-à-vis other similar platforms. StoryMaps requires, at minimum, an ArcGIS Online “Creator” license costing $500/year. Hosting large archaeological datasets on ArcGIS Online also requires the use of “credits,” the availability of which depends on license level. As such, the platform can be cost-prohibitive for scholars who do not already have access to Esri's suite of services through an institutional license. Several viable open-source alternatives to ArcGIS Online's web mapping platform exist, including Leaflet and MapServer, with Mapbox also representing a paid option with a free tier of use. While web mapping alternatives are readily available, platforms allowing for the use of multiple web maps as a framework for interactive storytelling, as ArcGIS StoryMaps does, are less common. One open-access platform, StoryMapJS, provides an appealing interface with the ability to integrate various forms of data. However, StoryMapJS fails to allow for much of the open-ended mapmaking, including uploading user-generated datasets, that is possible through ArcGIS Online and ArcGIS StoryMaps. Mapme is another alternative allowing for map-based storytelling. The platform features (very limited) free functionality, though generating maps with user-collected datasets in Mapme requires a paid subscription, costing $348+/year.
Unfortunately, the outputs produced through Mapme are not as refined as those produced by ArcGIS StoryMaps in their visual appeal and overall sophistication. In addition to cost and functionality, another concern with digital platforms is their life cycle of support. For example, one digital map-based storytelling platform that has been highlighted as a StoryMaps alternative, MapStory (Earley-Spadoni 2017), now appears to lack functionality. The reported support timeline for Esri's StoryMaps app runs through 2024 and beyond, which may be the limit of what one can expect in today's rapidly changing digital environment. Given these aspects of the StoryMaps platform, the Esri product is a superior choice over the available alternatives when economically viable. With regard to the usability of ArcGIS StoryMaps, stories in the program are created through a straightforward interface in which content can be edited in a form that matches the finished output. In other words, content creators and researchers are able to construct their StoryMaps without having any knowledge of coding or how to construct a web page. This means that archaeologists should be able to easily construct a compelling narrative regarding their fieldwork or an archaeological site or region with no more technical knowledge required than what it would take to construct a PowerPoint presentation. While the platform has a relatively high ceiling in terms of the level of interactivity it is possible to allow for, at its most basic level, a StoryMap need not be more complex than narrative text with embedded pictures or static maps. Additionally, this simple and adaptable format allows a StoryMap to be easily updated with additional research or feedback from the public. This feature can be important in allowing for stories about archaeological heritage to be updated with multiple perspectives of stakeholders.
Another main benefit of using the StoryMaps platform is its compatibility with the types of sophisticated digital datasets that are increasingly collected by archaeological field projects as a matter of normal practice. Most prominently, StoryMaps allows for the integration of maps created or published in Esri's ArcGIS Online platform. While paper maps can be useful and aesthetically pleasing, we are now in an era where “interactive and immersive” representations of archaeological data and processes are possible (McCoy and Ladefoged 2009), through platforms such as ArcGIS Online. Maps on ArcGIS Online and other web mapping platforms are interactive, allowing users to manipulate the map extent (by zooming, panning, etc.) and also click different map features to open pop-ups in order to learn more. These platforms provide users with an ability to explore spatial data to an extent not possible with printed maps (Smith 2016). This interactivity is critical for hypermedia and deep mapping concepts and has also been used effectively to provide additional levels of engagement even to articles published in traditional academic outlets, which are not typically interactive (Hammer and Ur 2019). Content creators can customize the pop-ups that appear upon clicking map features to provide additional levels of information and interactive content, including text, photographs, and even other StoryMaps, generating multiple layers of hypermedia content that users can explore at their own pace and according to their own interests.
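Pop-ups in such web maps are driven by the attribute fields attached to each feature. As a minimal sketch of this idea, a point layer could be prepared as GeoJSON whose properties would populate a pop-up template after upload to a web mapping platform; apart from the site names, all field names, coordinates, and URLs below are illustrative placeholders, not the project's actual data.

```python
import json

# Illustrative records; coordinates and URLs are placeholders, not survey data.
sites = [
    {"name": "Khirbat en-Nahas", "period": "Iron Age",
     "blurb": "Largest Iron Age copper smelting center in Faynan.",
     "image_url": "https://example.org/nahas.jpg", "lon": 35.44, "lat": 30.68},
    {"name": "Khirbat al-Jariya", "period": "Iron Age",
     "blurb": "Smelting site with architectural collapse and slag mounds.",
     "image_url": "https://example.org/jariya.jpg", "lon": 35.46, "lat": 30.70},
]

features = [
    {
        "type": "Feature",
        "geometry": {"type": "Point", "coordinates": [s["lon"], s["lat"]]},
        # These attribute fields are what a pop-up template would display.
        "properties": {k: s[k] for k in ("name", "period", "blurb", "image_url")},
    }
    for s in sites
]

layer = {"type": "FeatureCollection", "features": features}
with open("faynan_sites.geojson", "w", encoding="utf-8") as f:
    json.dump(layer, f, indent=2)
```

Once uploaded as a hosted layer, a pop-up configuration can map each attribute (name, period, description, image) onto the window that opens when a user clicks the point.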
In addition to highly interactive online maps, StoryMaps also straightforwardly allows for the inclusion of digital photographs, videos, and 3D models, which can facilitate additional engagement with the archaeological stories being told beyond the accompanying text narrative. The inclusion of 3D data should provide a bridge from textual storytelling that only hints at place to 3D recordings or reconstructions of place that can heighten immersion. In general, map-based platforms such as StoryMaps or GIS suffer from a bias toward an absolute perspective on space, in which Euclidean distances and measurements take primacy over experiential, phenomenological, or topological depictions (O'Sullivan et al. 2018). Photographs, videos, and 3D models can help to resolve this bias to some extent, as they can represent immersive rather than top-down or absolute perspectives. Archaeologists often already have many of these digital datasets on hand due to twenty-first-century archaeological practice and can facilitate the introduction of such data to users through StoryMaps. In doing so, archaeologists can introduce users to the stories of ancient societies and the methods of archaeological practice. Of course, not all projects are predisposed to making use of sophisticated digital datasets. However, generating a compelling StoryMap does not require projects to make use of expensive field recording technology. Even simple spatial datasets such as, for example, the locations of important sites in a region, images of those sites and artifacts recovered there, and videos taken at the site can provide a dynamic framing for an archaeological narrative within StoryMaps. Moreover, given that advanced digital recording methods are increasingly affordable and applicable without purchase of specialized tools and software, even less well-funded projects should be able to apply digital datasets to generate deep mapping environments within StoryMaps.
CASE STUDY: THE ARCHAEOLOGY OF FAYNAN, JORDAN

In order to demonstrate the applicability of StoryMaps for archaeological deep mapping, storytelling, and public engagement, we used datasets collected from recent excavations in Faynan, Jordan. Faynan is located approximately 30 km south of the Dead Sea in the deserts of southern Jordan. This region is also one of the largest copper ore resource zones in the Levant. These copper ores were intermittently exploited throughout history from roughly the Early Bronze Age until the Islamic period. The archaeology of Faynan has been the focus of the Edom Lowlands Regional Archaeology Project (ELRAP), a collaboration between the University of California San Diego and the Department of Antiquities, Jordan (principal investigator: Thomas E. Levy; co-principal investigator: Mohammad Najjar; Levy, Ben-Yosef, and Najjar 2014), since 1997. ELRAP investigates the relationship between social complexity and industrial-scale copper production, particularly during the Early Iron Age (ca. 1200–800 BC), through a combination of surveys and excavations (Ben-Yosef 2010; Levy, Ben-Yosef, and Najjar 2014). The Iron Age in Faynan is the period of the most intense copper smelting, with an estimated 33,000–36,000 tons of produced metallic copper (Ben-Yosef 2010). Much of our understanding of Faynan during the Iron Age comes from the ELRAP excavations at the main copper smelting sites dating to the period, primarily Khirbat en-Nahas, Khirbat al-Jariya, and Khirbat al-Ghuwayba. Khirbat en-Nahas is the largest Iron Age copper smelting center in Faynan (Levy, Najjar, et al. 2014). The site includes the collapsed architecture of more than 100 buildings and an estimated 50,000–60,000 tons of copper slag, the waste by-product of copper smelting, still visible on the surface (Hauptmann 2007; Levy, Najjar, et al. 2014). Similar to Khirbat en-Nahas, Khirbat al-Jariya is characterized by architectural collapse and slag mounds (Ben-Yosef et al. 2010, 2014).
Located circa 3 km to the northeast of Khirbat en-Nahas, the site straddles the Wadi al-Jariya, covering an area of approximately 4.8 ha, and features circa 15,000–20,000 tons of copper slag. ELRAP also conducted excavations at Khirbat al-Ghuwayba, a smaller-scale smelting site located about 4 km east of Khirbat en-Nahas (Ben-Yosef et al. 2014). While Khirbat al-Ghuwayba has been less extensively excavated, its location near a local spring and archaeobotanical analysis of materials collected at Khirbat al-Jariya suggest that it might have served the additional function of provisioning contemporaneous smelting centers. Together, these three smelting sites, along with smaller mining camps throughout the region, were central to the industrial landscape of Iron Age Faynan and the first complex society in the region; the access to abundant copper ores was critical to this development. To investigate these sites, ELRAP uses methods of cyberarchaeology, applying methods of computer science, natural science, and engineering to archaeological research (Levy, Ben-Yosef, and Najjar 2014; Levy et al. 2010, 2012). ELRAP records high-precision coordinates of artifact locations and locus perimeter/depth using a total station on a daily basis. This digital recording of spatial data facilitates easy integration into GIS, as data from the total station can be directly imported into GIS platforms. All geospatial data collected on ELRAP projects are visualized and further analyzed using Esri's ArcGIS. ArcGIS is also essential for digitizing archaeological features at a site to produce site/excavation maps (discussed further below). The combination of the total station and GIS maintains a digital record of all necessary geospatial information connected to the archaeological record from the moment it is excavated through final data storage and publication.
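A workflow of this kind, moving daily total-station points into a GIS, can be sketched as a small conversion script. The column names and coordinate values below are hypothetical, not ELRAP's actual recording schema; the point is only to show how a station export can become a GIS-ready point layer.

```python
import csv
import io
import json

# Hypothetical total-station export: point id, locus, and projected coordinates.
raw = """point_id,locus,easting,northing,elev_m
1001,L101,736210.42,3395120.88,412.31
1002,L101,736211.07,3395121.15,412.28
1003,L102,736215.60,3395118.02,411.95
"""

features = []
for row in csv.DictReader(io.StringIO(raw)):
    features.append({
        "type": "Feature",
        "geometry": {
            "type": "Point",
            # Projected easting/northing/elevation (e.g., a UTM zone); the
            # matching coordinate system would be declared when the layer
            # is imported into the GIS.
            "coordinates": [float(row["easting"]), float(row["northing"]),
                            float(row["elev_m"])],
        },
        "properties": {"point_id": row["point_id"], "locus": row["locus"]},
    })

point_layer = {"type": "FeatureCollection", "features": features}
print(len(point_layer["features"]))  # -> 3
```

Keeping the locus identifier as an attribute means points can later be joined back to excavation records, so each mapped point stays linked to its stratigraphic context.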
In addition to GIS-based recording, ELRAP projects also collect spatially referenced 3D data through systematic photography of excavations for the generation of publication-quality imagery and photogrammetry (terrestrial and aerial). These images and the models that derive from them serve as an excellent basis for digital multimedia outreach. For models of larger areas, ELRAP has employed aerial photography using a helium balloon with an attached camera frame (Howland et al. 2014b). All of the produced models are also georeferenced using ground control points in order to orient them in space and to geographically connect the models to the archaeological data. Using photogrammetry during the excavation serves two functions: (1) to produce 3D models that provide a photorealistic digital record of the site/excavations at that moment and (2) to produce digital elevation models and orthophotographs that provide an ideal base for site mapping of archaeological features in GIS. Ground photogrammetry and aerial photogrammetry, as discussed above, are increasingly cost-effective and practical approaches to recording for archaeological projects. The combination of a digital recording strategy, comprehensive digital photography, detailed geospatial data, and photorealistic 3D models of the archaeological record provides a wealth of data that can facilitate interactive engagement with the archaeological past. As a spatial aspect to data is critical to archaeological research, much of the digital data collected by ELRAP—and many archaeological projects—is spatially referenced. This includes inherently spatial data such as digital elevation models, orthophotographs, and mapped site features, as well as the 3D data collected on the project.

Matthew D. Howland et al. 354 Advances in Archaeological Practice | A Journal of the Society for American Archaeology | November 2020
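The georeferencing step described above can be sketched as a least-squares fit of an affine transform mapping local photogrammetry-model coordinates onto surveyed ground control points (GCPs). This is a minimal illustration assuming NumPy is available; it is not the photogrammetry software's actual solver, and the coordinates are invented.

```python
# Minimal sketch (not the ELRAP pipeline): fit a 2D affine transform from
# local model coordinates to surveyed ground control point coordinates.
import numpy as np

def fit_affine(model_pts, world_pts):
    """Least-squares fit of x' = a*x + b*y + c and y' = d*x + e*y + f."""
    model = np.asarray(model_pts, dtype=float)
    world = np.asarray(world_pts, dtype=float)
    # Design matrix with a constant column for the translation terms.
    A = np.column_stack([model, np.ones(len(model))])
    coeff_x, *_ = np.linalg.lstsq(A, world[:, 0], rcond=None)
    coeff_y, *_ = np.linalg.lstsq(A, world[:, 1], rcond=None)
    return coeff_x, coeff_y

def apply_affine(coeff_x, coeff_y, pt):
    """Transform one local-model point into world coordinates."""
    x, y = pt
    return (coeff_x[0] * x + coeff_x[1] * y + coeff_x[2],
            coeff_y[0] * x + coeff_y[1] * y + coeff_y[2])

# Illustrative GCPs: local model coords -> hypothetical UTM easting/northing.
model_pts = [(0, 0), (10, 0), (0, 10), (10, 10)]
world_pts = [(735000, 3391000), (735010, 3391000),
             (735000, 3391010), (735010, 3391010)]
cx, cy = fit_affine(model_pts, world_pts)
print(apply_affine(cx, cy, (5, 5)))  # center of the model in world coords
```

With three or more well-distributed GCPs the fit is overdetermined, so surveying extra control points lets the least-squares solution average out small measurement errors; real photogrammetry packages solve a richer 3D version of the same problem.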
The availability of these datasets is an advantage for digital public outreach, though the priority for outreach is providing an engaging and entertaining narrative allowing for immersive engagement rather than showcasing elaborate datasets.

THE KINGDOM OF COPPER STORYMAP

The ELRAP team generated a StoryMap focusing on the Iron Age polity centered in Faynan, titled The Kingdom of Copper (Figure 1). This StoryMap is intended to perform digital storytelling and deep mapping through an interactive hypermedia framework. Many of the digital datasets collected through ELRAP's cyberarchaeology field and lab methods are featured in the StoryMap, including digital maps of environmental characteristics, architecture, and material remains from sites in Faynan, as well as digital photography and 3D models. Many of these datasets required little modification before being brought into StoryMaps, as they are inherently digital. In order to generate the StoryMap, team members developed an iterative planning process to ensure that the StoryMap would meet its goals of digital public outreach in an interactive mapping environment (Figure 2). This process can serve as one model for how to conceptualize and develop an interactive public outreach project using digital data in ArcGIS StoryMaps (Alemy et al. [2017] provides an excellent overview of the technical process of creating a StoryMap). In general, our strategy was to ensure that narrative and engaging users in the archaeological process drove the framing of the StoryMap, rather than designing the product based on the availability of datasets. To that end, our first stage in generating the StoryMap was to create a storyboard conceptualizing the story elements important to framing the archaeology of Faynan, Jordan, in its appropriate context.
Next, it was important to write the text to be laid out in the StoryMap, telling a story based on archaeological information rather than on the suitability of existing maps, images, or 3D models for accompanying the story elements. Writing a compelling story based on archaeological information is challenging, and researchers may benefit from collaboration with experienced storytellers. Analyzing and developing media to complement the text is also necessary, though this should be taken up after storyboarding and writing the StoryMap narrative. In many cases, projects will be able to publish existing project maps, including GIS vector and raster data, to ArcGIS Online, or existing photogrammetric models to Sketchfab, with minimal modifications, though in some cases it will be necessary to incorporate new datasets to contextualize the text. Importantly, our project developed the StoryMap only after fieldwork, without orienting field data collection toward creating this type of digital outreach project. As argued above, many projects will be in a position to also integrate normally collected digital data into a StoryMap. After writing the text and considering and developing media, we inserted both text and media elements into the StoryMap. In order to engage readers through our grounding concepts of deep mapping and hypermedia, it is essential to ensure that map elements feature high levels of complexity and interactivity. Thus, maps should be edited to ensure that additional levels of information are available for users to explore and engage with if they are interested. Only at this stage did we publish the StoryMap. However, even the published version of the StoryMap was not considered a final product. Since publishing the StoryMap, we have made many edits to the product, including the addition of a comment box designed to solicit feedback from the public, especially stakeholding groups in Faynan and Jordan.
Through feedback in this comment box, we aim to continually redesign and rewrite the StoryMap in order to bring in different voices and stories to be heard and told. We have already redesigned and rewritten parts of the StoryMap in response to public feedback. Thus, we conceive of the StoryMap as a perpetual work in progress, subject to update at any time using StoryMaps' straightforward editing tools. Overall, our process for StoryMap generation aims at establishing practices for interactive storytelling for public outreach, letting users engage in a deep mapping environment that allows for interpretation of multiple layers of meaning. By developing an iterative process without a final draft, we aim to continually improve the outreach potential of the platform.

FIGURE 1. Please access The Kingdom of Copper StoryMap through this link: http://bit.ly/Faynan. The StoryMap is also accessible by scanning this QR code with a smartphone.

Integrating Digital Datasets into Public Engagement through ArcGIS StoryMaps November 2020 | Advances in Archaeological Practice | A Journal of the Society for American Archaeology 355

The current version of The Kingdom of Copper StoryMap is aimed at telling the story of how copper production in Faynan led to the development of complex society in the region during the Iron Age, and of the archaeological methods used to interpret the region's archaeological record (e.g., Liss et al. 2020). The first part of the StoryMap introduces readers to the Faynan region with an emphasis on its geographic location. In particular, this portion of the StoryMap includes a presentation of Faynan's regional context using satellite images at several scales and a gradient map with isohyets (contour lines representing zones of identical rainfall levels) depicting its levels of precipitation, and emphasizes its unique environment.
StoryMaps' basis in digital maps means that the platform is ideal for geographically situating archaeological analysis in space, while text descriptions accompanying maps can help readers situate the data and analysis in time. The second part of the StoryMap provides insight into the drivers of the development of industry in Faynan in the Iron Age by introducing the audience to copper production and its role in the ancient world. At this point, the development of Iron Age social complexity around the abundant copper ores of Faynan is framed against the collapse of the Late Bronze Age economic system. Specifically, this section provides a discussion of the importance of slag as an indicator for copper production and a display of the many sites potentially destroyed in the Late Bronze Age as part of a regional collapse, using a combination of interactive maps and photographs. Each map in this section is interactive through clickable pop-ups that allow users to learn more about topics in the story. The StoryMap then provides readers with an illustration of the geologic context to show the availability of copper and 3D visualizations of the major copper-producing sites (Khirbat en-Nahas and Khirbat al-Jariya). The ability to include photorealistic 3D models provides the audience with an authentic experience of the sites and the surrounding terrain and helps to provide a more immersive experience of the archaeological data, overcoming the bias of mapping outputs toward absolute conceptions of space. Our current understanding of the excavations (discussed above) is presented in the associated text. Primarily, the goals of this section of the StoryMap are to introduce readers to the types of evidence that archaeologists use to interpret the environment of the region and the history of the Late Bronze Age so that they can freely explore and understand these datasets on their own terms.
The StoryMap also features a section on the process of archaeological investigation. This section is aimed at engaging the public in how archaeologists conduct research at archaeological sites and what tools and techniques we use. The archaeological methods that led to the interpretations given in the StoryMap are presented, accompanied by videos and photographs. The videos allow the viewer to observe archaeology in practice while learning about how these techniques aid in developing our understanding of the past. The archaeology of Khirbat en-Nahas is highlighted with digitized maps, 3D models, and the opportunity to learn more about each excavation area through clicking and reading pop-ups for each area (Figure 3). Through the interactivity in these datasets, we hope to engage the public in the archaeological process, giving our perspective on how we acquire and interpret data but also providing the datasets themselves so that readers may explore as they are interested and take away their own conclusions. The final section of the StoryMap, discussed below, focuses on modern Faynan and community engagement. One key point of emphasis for us in constructing the StoryMap was to generate as many interactive elements as possible in order to provide a true hypermedia environment that allows for multiple levels of engagement. To that end, the StoryMap includes 13 interactive maps, nine of which have specific features that provide more information upon a click of an element in the map. These additional features include descriptions of sites, features, and empires, as well as images of sites and features. Also present in the StoryMap are two videos showing archaeological processes, five interactive 3D elements, hyperlinks to other relevant content, and dozens of photographs. As such, the StoryMap overall is heavily oriented toward providing engaging multimedia content that users can explore at their own discretion.
This project ideally serves as an example of how StoryMaps can integrate many forms of digital data into an interactive deep mapping application. Though different projects will naturally approach archaeological storytelling and public outreach in different ways, this platform can provide a useful mechanism for digitally inclined projects to integrate datasets into a deep mapping environment with many interactive forms of media.

FIGURE 2. Flowchart showing the iterative process the Edom Lowlands Regional Archaeology Project team used to generate a StoryMap, with emphasis on continually rewriting the StoryMap to better reach intended audiences.

TOWARD COMMUNITY ENGAGEMENT

The final section of the StoryMap focuses on the local communities that reside in the region today, primarily members of the 'Ammarin, Sa'idiyyin, Rashaydah, and 'Azazmah tribes. Many in these communities are members of the ELRAP, and it is their cultural heritage that is addressed by the archaeological content of the StoryMap. As such, they play a primary and important role in how that heritage is investigated and interpreted. The "Faynan Today" section of the StoryMap provides recognition of these communities and their role in how their cultural heritage is to be understood. However, this recognition is not, in and of itself, enough to say that the stakeholding communities have been "engaged." Importantly, the people of Faynan and the nearby town Qirayqira play the most active role in the continuous reinterpretation of the cultural heritage of the Faynan region. This occurs most especially through their work at the Faynan Museum, an archaeology museum in the area; the Faynan Ecolodge, an eco-hotel that offers tours of archaeological sites by locals; and the Dana Biosphere Reserve, which preserves many of the important sites in Faynan.
These locally operated organizations and the overall engagement of the communities of Faynan are critical in place-based education (Sobel 2005) and allow inhabitants of the region and visitors to engage with archaeological heritage at the source. This place-based learning is a necessity, especially to overcome a critical flaw in StoryMaps and internet-based digital storytelling: its inherent Eurocentric bias due to the lack of equality in worldwide internet access (Bertemes and Biehl 2009). StoryMaps may not be an appropriate avenue to overcome this inequality, as the platform is only accessible via the internet. This can mean that StoryMaps projects may be inherently geared toward a disproportionately wealthy, English-speaking, and literate audience. Scholars would do well to take heed and consider stakeholding groups when they approach archaeological projects through StoryMaps. In recognition of this need to engage not only English-speaking audiences, The Kingdom of Copper StoryMap is available in Arabic. Furthermore, we hope to install a version of the StoryMap in the Faynan Museum in order to reach members of the local community without internet access in person. We also aim to make use of the platform's compatibility with audio platforms by recording the text of the main story in order to increase the accessibility of the StoryMap to nonliterate users. As StoryMaps are easily editable, when used for public engagement they should not be seen as an end product to show stakeholding communities but, rather, as a first draft that communities can engage with, edit, and use as a platform to tell their own stories and explain their own relationship with their cultural heritage. To work toward this ideal, we aim to show The Kingdom of Copper StoryMap to Jordanian citizens and the people of Faynan and discuss it with them in person. For digital interaction, the StoryMap also features a comment box, linked up through Esri's Survey123 application.
Through this comment box, we hope to get feedback on the story told in the StoryMap and also elicit stories and contextual information from the Faynan community for inclusion in the StoryMap. Only by engaging local populations in this way can we successfully move beyond deficit-based learning models and adopt a multiple-perspective model that treats the archaeological past as the living cultural heritage of people living today. Future work on the StoryMap in the community in Faynan will help reveal the extent to which this process is successful and further our understanding of whether StoryMaps can be effectively used for engagement of non-internet-connected local communities.

FIGURE 3. Screenshot of The Kingdom of Copper StoryMap, featuring an interactive map, written story elements, two buttons that allow zooming into the map or bringing up a 3D model, and a selected pop-up that provides more information about Area A, as well as a photograph of the area.

CONCLUSION

Esri's ArcGIS StoryMaps is an effective and straightforward tool for archaeologists to share their data and research. The interactivity of the platform allows for hypermedia and deep mapping outputs that facilitate engagement by the public with the archaeological past and with the archaeological process. Nearly all of the digital datasets regularly collected by archaeologists in the twenty-first century lend themselves well to inclusion in the StoryMaps platform. This, along with the ease of sharing a StoryMap online, increases the accessibility of archaeological data. By framing datasets within a scrolling text story, archaeologists can contextualize their data while also allowing for the type of free-form interaction that enables many forms of engagement.
The Kingdom of Copper StoryMap serves as a case study of the usefulness of the StoryMaps platform for interactive multimedia public engagement. Ultimately, such digital outreach projects should be seen as works in progress that allow multiple perspectives to be shared, rather than as a final output set in stone.

Acknowledgments

Special thanks go to the American Center of Oriental Research; to Samya Khalaf Kafafi for translating the StoryMap into Arabic; and to members of the ELRAP team, including the local Bedouin of Qirayqira and Faynan, for their assistance in the field and the lab. This work was supported by a National Science Foundation Integrative Graduate Education and Research Traineeship Award under grant #DGE-0966375, "Training, Research and Education in Engineering for Cultural Heritage Diagnostics" (graduate student support for B.L. and M.D.H.); major funding came from T.E.L.'s Norma Kershaw Chair, Department of Anthropology and Jewish Studies, University of California San Diego. Permissions for the August 2014 excavations were provided by the Department of Antiquities of Jordan.

Data Availability Statement

All digital data presented in this article are openly accessible through The Kingdom of Copper StoryMap itself and also on the Edom Lowlands Regional Archaeology Project's ArcGIS Online page: https://ucsdonline.maps.arcgis.com/home/group.html?id=92071501deb9422eb1badb6c3c18d67a#overview.

REFERENCES CITED

Alemy, Alexis, Sophia Hudzik, and Christopher N. Matthews 2017 Creating a User-Friendly Interactive Interpretive Resource with ESRI's ArcGIS Story Map Program. Historical Archaeology 51:288–297.

Amico, Jennifer 2019 Planning Public Archaeology Projects with Digital Products. Master's thesis, State University of New York, Binghamton.
Antoniou, Varvara, Paraskevi Nomikou, Pavlina Bardouli, Danai Lampridou, Theodora Ioannou, Ilias Kalisperakis, Christos Stentoumis, Malcolm Whitworth, Mel Krokos, and Lemonia Ragia 2018 An Interactive Story Map for the Methana Volcanic Peninsula. In The International Conference on Geographical Information Systems Theory, Applications, and Management: GISTAM 2018, edited by Cédric Grueau, Robert Laurini, and Lemonia Ragia, pp. 68–78. SciTePress, Setúbal, Portugal. DOI:10.5220/0006702300680078.

Baione, Carlo, Tyler D. Johnson, and Carolina Megale 2018 Communicating Archaeology at Poggio del Molino: 3D Virtualization and the Visitor Experience On and Off Site. In Proceedings of the 1st International and Interdisciplinary Conference on Digital Environments for Education, Arts, and Heritage. EARTH 2018. Advances in Intelligent Systems and Computing, Vol. 919, edited by Alessandro Luigini, pp. 681–690. Springer, Cham, Switzerland.

Benavides López, José Antonio, Gonzalo A. Jiménez, Montserrat Sánchez Romero, Edwardo A. Garcia, Sonia Fernández Martín, Alvin L. Medina, and J. A. Esquivel Guerrero 2016 3D Modelling in Archaeology: The Application of Structure from Motion Methods to the Study of the Megalithic Necropolis of Panoria (Granada, Spain). Journal of Archaeological Science: Reports 10:495–506.

Ben-Yosef, Erez 2010 Technology and Social Process: Oscillations in Iron Age Copper Production and Power in Southern Jordan. PhD dissertation, Department of Anthropology, University of California San Diego, La Jolla.

Ben-Yosef, Erez, Thomas E. Levy, Thomas Higham, Mohammad Najjar, and Lisa Tauxe 2010 The Beginning of Iron Age Copper Production in the Southern Levant: New Evidence from Khirbat al-Jariya, Faynan, Jordan. Antiquity 84:724–746.

Ben-Yosef, Erez, Mohammad Najjar, and Thomas E. Levy 2014 New Iron Age Excavations at Copper Production Sites, Mines, and Fortresses in Faynan. In New Insights into the Iron Age Archaeology of Edom, Southern Jordan, edited by Thomas E.
Levy, Mohammad Najjar, and Erez Ben-Yosef, pp. 767–886. Cotsen Institute of Archaeology, University of California, Los Angeles.

Berggren, Åsa, Nicolo Dell'Unto, Maurizio Forte, Scott Haddow, Ian Hodder, Justine Issavi, Nicola Lercari, Camilla Mazzucato, Allison Mickel, and James S. Taylor 2015 Revisiting Reflexive Archaeology at Çatalhöyük: Integrating Digital and 3D Technologies at the Trowel's Edge. Antiquity 89:433–448. DOI:10.15184/aqy.2014.43.

Bernardini, Wesley, Alicia Barnash, Mark Kumler, and Martin Wong 2013 Quantifying Visual Prominence in Social Landscapes. Journal of Archaeological Science 40:3946–3954.

Bertemes, François, and Peter F. Biehl 2009 The Past in the Future: E-Learning, Multimedia, and Archaeological Heritage in the Digital Age. In E-Learning Archaeology, edited by Heleen van Londen, Marjolijn S. M. Kok, and Arkadiusz Marciniak, pp. 158–175. Amsterdam University, Amsterdam.

Bodenhamer, David J. 2015 Narrating Space and Place. In Deep Maps and Spatial Narratives, edited by David J. Bodenhamer, John Corrigan, and Trevor M. Harris, pp. 7–27. Indiana University Press, Bloomington.

Bonacini, Elisa, Davide Tanasi, and Paolo Trapani 2018 Participatory Storytelling, 3D Digital Imaging, and Museum Studies: A Case Study from Sicily. In 2018 3rd Digital Heritage International Congress (DigitalHERITAGE) Held Jointly with 2018 24th International Conference on Virtual Systems and Multimedia (VSMM 2018), pp. 1–4. IEEE, New York. DOI:10.1109/DigitalHeritage.2018.8810058.

Carvajal-Ramírez, Fernando, Ana D. Navarro-Ortega, Francisco Agüera-Vega, Patricio Martínez-Carricondo, and Francesco Mancini 2019 Virtual Reconstruction of Damaged Archaeological Sites Based on Unmanned Aerial Vehicle Photogrammetry and 3D Modelling: Study Case
of a Southeastern Iberia Production Area in the Bronze Age. Measurement 136:225–236.

Cope, Michael P., Elena A. Mikhailova, Christopher J. Post, Mark A. Schlautman, and Patricia Carbajales-Dale 2018 Developing and Evaluating an ESRI Story Map as an Educational Tool. Natural Sciences Education 47:1–9. DOI:10.4195/nse2018.04.0008.

De Reu, Jeroen, Philippe De Smedt, Davy Herremans, Marc Van Meirvenne, Pieter Laloo, and Wim De Clercq 2014 On Introducing an Image-Based 3D Reconstruction Method in Archaeological Excavation Practice. Journal of Archaeological Science 41:251–262.

Dungan, Katherine A., Devin White, Sylviane Déderix, Barbara J. Mills, and Kristin Safi 2018 A Total Viewshed Approach to Local Visibility in the Chaco World. Antiquity 92:905–921. DOI:10.15184/aqy.2018.135.

Earley-Spadoni, Tiffany 2017 Spatial History, Deep Mapping, and Digital Storytelling: Archaeology's Future Imagined through an Engagement with the Digital Humanities. Journal of Archaeological Science 84:95–102.

Ellenberger, Kate 2017 Virtual and Augmented Reality in Public Archaeology Teaching. Advances in Archaeological Practice 5:305–309. DOI:10.1017/aap.2017.20.

Fernández-Hernandez, Jesús, Diego González-Aguilera, Pablo Rodríguez-Gonzálvez, and Juan Mancera-Taboada 2015 Image-Based Modelling from Unmanned Aerial Vehicle (UAV) Photogrammetry: An Effective, Low-Cost Tool for Archaeological Applications. Archaeometry 57:128–145.
Forte, Maurizio (editor) 2010 Cyber-Archaeology. BAR International Series 2177. Archaeopress, Oxford.

Forte, Maurizio 2011 Cyber-Archaeology: Notes on the Simulation of the Past. Virtual Archaeology Review 2(4):7–18.

Forte, Maurizio 2014 3D Archaeology: New Perspectives and Challenges—The Example of Çatalhöyük. Journal of Eastern Mediterranean Archaeology and Heritage Studies 2:1–29. DOI:10.5325/jeasmedarcherstu.2.1.0001.

Forte, Maurizio, and Alberto Siliotti (editors) 1997 Virtual Archaeology: Great Discoveries Brought to Life through Virtual Reality. Thames and Hudson, London.

Garstki, Kevin, Chris Larkee, and John LaDisa 2019 A Role for Immersive Visualization Experiences in Teaching Archaeology. Studies in Digital Heritage 3:46–59. DOI:10.14434/sdh.v3i1.25145.

Grima, Reuben 2016 But Isn't All Archaeology "Public" Archaeology? Public Archaeology 15:50–58. DOI:10.1080/14655187.2016.1200350.

Gustas, Robert, and Kisha Supernant 2019 Coastal Migration into the Americas and Least Cost Path Analysis. Journal of Anthropological Archaeology 54:192–206.

Hammer, Emily, and Jason Ur 2019 Near Eastern Landscapes and Declassified U2 Aerial Imagery. Advances in Archaeological Practice 7:107–126.

Haukaas, Colleen, and Lisa M. Hodgetts 2016 The Untapped Potential of Low-Cost Photogrammetry in Community-Based Archaeology: A Case Study from Banks Island, Arctic Canada. Journal of Community Archaeology and Heritage 3:40–56. DOI:10.1080/20518196.2015.1123884.

Hauptmann, Andreas 2007 The Archaeometallurgy of Copper: Evidence from Faynan, Jordan. Springer, Berlin.

Hill, Austin (Chad), Yorke Rowan, and Morag M. Kersel 2014 Mapping with Aerial Photographs: Recording the Past, the Present, and the Invisible at Marj Rabba, Israel. Near Eastern Archaeology 77:182–186. DOI:10.5615/neareastarch.77.3.0182.

Howey, Meghan C. L., and Marieka Brouwer Burg 2017 Assessing the State of Archaeological GIS Research: Unbinding Analyses of Past Landscapes. Journal of Archaeological Science 84:1–9. DOI:10.1016/j.jas.2017.05.002.

Howland, Matthew D. 2018 3D Recording in the Field: Style without Substance? In Cyber-Archaeology and Grand Narratives, edited by Thomas E. Levy and Ian W. N. Jones, pp. 19–33. Springer, Cham, Switzerland.

Howland, Matthew D., Falko Kuester, and Thomas E. Levy 2014a Photogrammetry in the Field: Documenting, Recording, and Presenting Archaeology. Mediterranean Archaeology and Archaeometry 14(4):101–108.

Howland, Matthew D., Falko Kuester, and Thomas E. Levy 2014b Structure from Motion: Twenty-First Century Field Recording with 3D Technology. Near Eastern Archaeology 77:187–191.

Hupperetz, Wim, Raffaele Carlani, Daniel Pletinckx, and Eva Pietroni 2012 Etruscanning 3D Project: The 3D Reconstruction of the Regolini Galassi Tomb as a Research Tool and a New Approach in Storytelling. Virtual Archaeology Review 3(7):92–96.

Kallaher, Amelia, and Alyson Gamble 2017 GIS and the Humanities: Presenting a Path to Digital Scholarship with the Story Map App. College and Undergraduate Libraries 24:559–573. DOI:10.1080/10691316.2017.1327386.

Kintigh, Keith W. 1996 SAA Principles of Archaeological Ethics. SAA Bulletin 14(3):5–17.

Knabb, Kyle A., Jurgen P. Schulze, Falko Kuester, Thomas A. DeFanti, and Thomas E. Levy 2014 Scientific Visualization, 3D Immersive Virtual Reality Environments, and Archaeology in Jordan and the Near East. Near Eastern Archaeology 77:228–232.

Levy, Thomas E., Erez Ben-Yosef, and Mohammad Najjar 2014 The Iron Age Edom Lowlands Regional Archaeology Project: Research, Design, and Methodology. In New Insights in the Iron Age Archaeology of Edom, Southern Jordan, Vol. 1, edited by Thomas E. Levy, Mohammad Najjar, and Erez Ben-Yosef, pp. 1–88. Cotsen Institute of Archaeology, University of California, Los Angeles.

Levy, Thomas E., Mohammad Najjar, Thomas Higham, Yoav Arbel, Adolfo Muniz, Erez Ben-Yosef, Neil G. Smith, Marc Beherec, Aaron Gidding, Ian W.
Jones, Daniel Frese, Craig Smitheram, and Mark Robinson 2014 Excavations at Khirbat en-Nahas, 2002–2009: An Iron Age Copper Production Center in the Lowlands of Edom. In New Insights in the Iron Age Archaeology of Edom, Southern Jordan, Vol. 1, edited by Thomas E. Levy, Mohammad Najjar, and Erez Ben-Yosef, pp. 89–245. Cotsen Institute of Archaeology, University of California, Los Angeles.

Levy, Thomas E., Vid Petrovic, Thomas Wypych, Aaron Gidding, Kyle Knabb, David Hernandez, Neil G. Smith, Jürgen P. Schulz, Stephen H. Savage, Falko Kuester, Erez Ben-Yosef, Connor Buitenhuys, Casey J. Barrett, and Thomas DeFanti 2010 On-Site Digital Archaeology 3.0 and Cyber-Archaeology: Into the Future of the Past—New Developments, Delivery, and the Creation of a Data Avalanche. In Cyber-Archaeology, edited by Maurizio Forte, pp. 135–153. Archaeopress, Oxford.

Levy, Thomas E., and Neil G. Smith 2007 On-Site GIS Digital Archaeology: GIS-Based Excavation Recording in Southern Jordan. In Crossing Jordan: North American Contributions to the Archaeology of Jordan, edited by Thomas E. Levy, Michele Daviau, Randall Younker, and May Shaer, pp. 47–58. Equinox, London.

Levy, Thomas E., Neil G. Smith, Mohammad Najjar, Thomas A. DeFanti, Albert Y. Lin, and Falko Kuester 2012 Cyber-Archaeology in the Holy Land: The Future of the Past. Biblical Archaeology Society, Washington, DC.

Liss, Brady, Matthew D. Howland, Brita Lorentzen, Craig Smitheram, Mohammad Najjar, and Thomas E. Levy 2020 Up the Wadi: Development of an Iron Age Industrial Landscape in Faynan, Jordan. Journal of Field Archaeology, in press. DOI:10.1080/00934690.2020.1747792.

Lloyd, James 2016 Contextualizing 3D Cultural Heritage. In Digital Heritage: Progress in Cultural Heritage: Documentation, Preservation, and Protection, edited by Marinos Ioannides, Eleanor Fink, Antonia Moropoulou, Monika Hagedorn-Saupe, Antonella Fresa, Gunnar Liestøl, Vlatka Rajcic, and Pierre Grussenmeyer, pp. 859–868. Springer, Cham, Switzerland.
DOI:10.1007/978-3-319-48496-9_69.

Malinverni, Eva Savina, Roberto Pierdicca, Francesca Colosi, and Roberto Orazi 2019 Dissemination in Archaeology: A GIS-Based StoryMap for Chan Chan. Journal of Cultural Heritage Management and Sustainable Development 9:500–519. DOI:10.1108/JCHMSD-07-2018-0048.

McCoy, Mark D., and Thegn N. Ladefoged 2009 New Developments in the Use of Spatial Technology in Archaeology. Journal of Archaeological Research 17:263–295. DOI:10.1007/s10814-009-9030-1.

Means, Bernard K. 2015 Promoting a More Interactive Public Archaeology: Archaeological Visualization and Reflexivity through Virtual Artifact Curation. Advances in Archaeological Practice 3:235–248. DOI:10.7183/2326-3768.3.3.235.

Merriman, Nick (editor) 2004 Public Archaeology. Routledge, London.

Olson, Brandon R., Ryan Placchetti, Jamie Quartermaine, and Ann E. Killebrew 2013 The Tel Akko Total Archaeology Project (Akko, Israel): Assessing the Suitability of Multi-scale 3D Field Recording in Archaeology. Journal of Field Archaeology 38:244–262.

O'Sullivan, David, Luke Bergmann, and Jim E.
Thatcher 2018 Spatiality, Maps, and Mathematics in Critical Human Geography: Toward a Repetition with Difference. Professional Geographer 70:129–139. DOI:10. 1080/00330124.2017.1326081. Parcak, Sarah H. 2017 GIS, Remote Sensing, and Landscape Archaeology. DOI:10.1093/ oxfordhb/9780199935413.013.11. Pavlidis, George, Ioannis Liritzis, and Thomas E. Levy 2017 Pedagogy and Engagement in At-Risk World Heritage Initiatives. In Heritage and Archaeology in the Digital Age, edited by Matthew L. Vincent, Victor Manuel López-Menchero Bendicho, Marinos Ioannides, and Thomas E. Levy, pp. 167–183. Quantitative Methods in the Humanities and Social Sciences. Springer, Cham, Switzerland. DOI:10.1007/978-3-319- 65370-9_9. Peng, Fei, Sam C. Lin, Jialong Guo, Huimin Wang, and Xing Gao 2017 The Application of SfM Photogrammetry Software for Extracting Artifact Provenience from Palaeolithic Excavation Surfaces. Journal of Field Archaeology 42:326–336. DOI:10.1080/00934690.2017.1338118. Pujol, Laia, Maria Roussou, Stavrina Poulou, Olivier Balet, Maria Vayanou, and Yannis Ioannidis 2012 Personalizing Interactive Digital Storytelling in Archaeological Museums: The CHESS Project. In Archaeology in the Digital Era Volume II: e-Papers from the 40th Conference on Computer Applications and Quantitative Methods in Archaeology, edited by Graeme Earl, Tim Sly, Angeliki Chrysanthi, Patricia Murrieta-Flores, Constantinos Papadopoulos, Iza Romanowska, and David Wheatley, pp. 77–90. Amsterdam University Press, Amsterdam. Electronic document, http://arno.uva.nl/cgi/arno/show.cgi? fid=545855, accessed May 21, 2020. Reshetyuk, Yuriy, and Stig-Goran Mårtensson 2016 Generation of Highly Accurate Digital Elevation Models with Unmanned Aerial Vehicles. Photogrammetric Record 31:143–165. Richardson, Lorna-Jane, and Jaime Almansa-Sánchez 2015 Do You Even Know What Public Archaeology Is? Trends, Theory, Practice, Ethics. World Archaeology 47:194–211. DOI:10.1080/00438243. 2015.1017599. 
Roberts, Les 2016 Deep Mapping and Spatial Anthropology. Humanities 5(1):5. DOI:10. 3390/h5010005. Sauerbier, Martin, and Henri Eisenbeiss 2010 UAVs for the Documentation of Archaeological Excavations. In Proceedings of the ISPRS Commission V Mid-term Symposium “Close Range Image Measurement Techniques,” 21–24 June 2010, Newcastle upon Tyne, United Kingdom, edited by Jon P. Mills, David M. Barber, Pauline E. Miller, Ian Newton, pp. 526–531. International Archives of Photogrammetry, Remote Sensing, and Spatial Information Sciences Vol. 38, Pt. 5. International Society for Photogrammetry and Remote Sensing. Electronic document, https://www.isprs.org/proceedings/XXXVIII/part5/ papers/214.pdf, accessed May 21, 2020. Scopigno, Roberto, Marco Callieri, Matteo Dellepiane, Federico Ponchio, and Marco Potenziani 2017 Delivering and Using 3D Models on the Web: Are We Ready? Virtual Archaeology Review 8(17):1–9. Smith, Duncan A. 2016 Online Interactive Thematic Mapping: Applications and Techniques for Socio-economic Research. Computers, Environment, and Urban Systems 57:106–117. DOI:10.1016/j.compenvurbsys.2016.01.002. Smith, Matthew, Nigel Stephen Walford, and Carlos Jimenez-Bescos 2019 Using 3D Modelling and Game Engine Technologies for Interactive Exploration of Cultural Heritage: An Evaluation of Four Game Engines in Relation to Roman Archaeological Heritage. Digital Applications in Archaeology and Cultural Heritage 14:e00113. Sobel, David 2005 Place-Based Education: Connecting Classrooms and Communities. 2nd ed. Nature Literacy Series Vol. 4. Orion Society, Great Barrington, Massachusetts. Srour, David, John Mangan, Aliya Hoff, Todd Margolis, Jessica Block, Matthew L. Vincent, Thomas A. DeFanti, Thomas E. Levy, and Falko Kuester 2015 MediaCommons Framework: An Immersive Storytelling Platform and Exodus. In Israel’s Exodus in Transdisciplinary Perspective, edited by Thomas E. Levy, Thomas Schneider, and William H. C. Propp, pp. 173–184. Springer, Cham, Switzerland. 
DOI:10.1007/978-3-319-04768-3_13. Strachan, Caitlin, and Jerry Mitchell 2014 Teachers’ Perceptions of Esri Story Maps as Effective Teaching Tools. Review of International Geographical Education Online 4:195–220. Taliaferro, Matthew S., Bernard A. Schriever, and M. Steven Shackley 2010 Obsidian Procurement, Least Cost Path Analysis, and Social Interaction in the Mimbres Area of Southwestern New Mexico. Journal of Archaeological Science 37:536–548. DOI:10.1016/j.jas.2009.10.018. Uysal, Murat, Ahmet S. Toprak, and Nizar Polat 2015 DEM Generation with UAV Photogrammetry and Accuracy Analysis in Sahitler Hill. Measurement 73:539–543. DOI:10.1016/j.measurement.2015.06.010. Verhagen, Philip 2017 Spatial Analysis in Archaeology: Moving into New Territories. In Digital Geoarchaeology, edited by Christoph Siart, Markus Forbriger, and Olaf Bubenzer, pp. 11–25. Springer, Cham, Switzerland. DOI:10.1007/978- 3-319-25316-9_2. Verhoeven, Geert, Michael Doneus, Christian Briese, and Frank Vermeulen 2012 Mapping by Matching: A Computer Vision-Based Approach to Fast and Accurate Georeferencing of Archaeological Aerial Photographs. Journal of Archaeological Science 39:2060–2070. DOI:10.1016/j.jas.2012.02.022. Williams, Rhys, Tim Thompson, Caroline Orr, Andrew Birley, and Gillian Taylor 2019 3D Imaging as a Public Engagement Tool: Investigating an Ox Cranium Used in Target Practice at Vindolanda. Theoretical Roman Archaeology Journal 2:Article 2. DOI:10.16995/traj.364. AUTHOR INFORMATION Matthew D. Howland (mdhowlan@ucsd.edu, corresponding author), Brady Liss, and Thomas E. Levy ▪ Department of Anthropology, University of California San Diego, 9500 Gilman Drive, #0532, La Jolla, CA 92093-0532, USA Mohammad Najjar ▪ Levantine Archaeology Laboratory, University of California San Diego, 9500 Gilman Drive, #0532, La Jolla, CA 92093-0532, USA Matthew D. Howland et al. 
Review Article

Point-of-Care Diagnoses and Assays Based on Lateral Flow Test

Miroslav Pohanka, Faculty of Military Health Sciences, University of Defense, Trebesska 1575, Hradec Kralove CZ-50001, Czech Republic

Correspondence should be addressed to Miroslav Pohanka; miroslav.pohanka@gmail.com

Received 26 November 2020; Revised 5 January 2021; Accepted 11 January 2021; Published 20 January 2021

Academic Editor: Adil Denizli

Copyright © 2021 Miroslav Pohanka. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Analytical devices for point-of-care diagnosis are highly desired: they improve quality of life because first diagnoses can be made early and pathologies recognized sooner. Lateral flow tests (LFTs) are such tools, easily performed without specific equipment, skills, or experience. This review is focused on the use of LFT in point-of-care diagnosis. The principle of the assay is explained, and new materials like nanoparticles for labeling, new recognition molecules for interaction with an analyte, and additional instrumentation like signal scaling by a smartphone camera are described and discussed. Advantages of LFT devices as well as their limitations are discussed with reference to current, properly cited papers.

1. Introduction

There are standard laboratory methods (chromatographic, mass spectrometric, immunochemical, genetic, etc.) suitable for the analysis of various compounds and substances. Although these laboratory methods have superior analytical features, they are quite expensive in both purchase price and cost per assay. Moreover, staff controlling the instruments need education related to the type of analysis, or at least practical experience. The situation in bioanalysis differs somewhat from the standard analytical methods of central laboratories: while the dominant part of analyses is expected to be made in clinical laboratories, some analyses also need to be performed outside them.
The term point-of-care testing has emerged in current medicine. It can be understood as a simple test that can be completed, including data evaluation, in home conditions by a patient or by a caregiver with no education in bioanalysis or similar disciplines. Disposable urine test strips for multiple biochemical parameters and glucose biosensors for fast glycemia assays can be mentioned as standard commercial devices. Research on diagnostic biosensors is ongoing, and a number of new biosensor devices suitable for point-of-care testing have been investigated [1–5]. Other types of point-of-care tests, like colorimetric ones based on a digital camera, are also being developed [6–9].

The current review is focused on lateral flow immunochromatographic assays, also known as lateral flow tests (LFTs), and their use in point-of-care diagnosis. Development of the LFT started long ago, and many analytical and diagnostic methods have been built on the platform. In recent times, further improvement has been achieved through the use of advanced materials like colored nanoparticles. The recent progress in the field of LFT is surveyed, analyzed, and discussed here, and the current literature on LFT is summarized.

2. LFT Development as a Standard Platform

Various paper tests measuring a wide scale of parameters like pH, as well as thin-layer chromatography assays, have been extensively researched since the beginning of modern chemistry. Chromatography as a general method is connected with the work of the Russian scientist Mikhail Tsvet in the early 1900s, and thin-layer chromatography was first reported by the Russian scientists Izmailov and Shreiber in 1938 [10].
Hindawi International Journal of Analytical Chemistry, Volume 2021, Article ID 6685619, 9 pages. https://doi.org/10.1155/2021/6685619

Further research on immunoassays, including the latex fixation test, provided a simple analytical tool that significantly improved and simplified the previous methods [11, 12]. The first LFT devices were developed as an outcome of knowledge from previous methods and a series of patents applied for in the 1980s. LFTs proving pregnancy by assaying human chorionic gonadotropin in urine were the first commercial tests working on the lateral flow principle and using specific antibodies against the hormone [13].

The original types of LFT were based on the recognition capacity of antibodies, which served as the recognition part of the assay. The general principle is shown in Figure 1 and can be described as follows. The assay is performed on a sheet-shaped matrix of paper, cellulose, etc., that contains, at the first end, freely adsorbed antibodies specific against the analyte and labeled with a color or fluorescent marker. The matrix also contains two zones with immobilized antibodies at the second end: the first zone contains antibodies specific to the analyte, and the second zone contains immobilized antibodies specific against the free labeled antibodies. A liquid sample is applied at the first end, and the analyte present in the sample interacts with the labeled antibodies. The analyte-labeled antibody complex and the unreacted antibodies are carried by the lateral flow on and in the hydrophilic matrix. The analyte-labeled antibody complex is captured at the first zone, forming a colored spot visible by the naked eye. The unreacted labeled antibodies interact with the second zone and form a visible spot as well.
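The two-zone read-out just described follows a simple decision rule: the second zone (control) validates the run, and the first zone (test) reports the analyte. A minimal sketch of this logic for the sandwich format; the function name and return labels are illustrative, not from the article:

```python
def interpret_lft(test_line_visible: bool, control_line_visible: bool) -> str:
    """Qualitative interpretation of a sandwich-format LFT strip.

    The control line must always appear: it proves that the labeled
    antibodies migrated along the strip. Without it the run is invalid,
    regardless of what the test line shows.
    """
    if not control_line_visible:
        return "invalid"  # flow or reagent failure; the test must be repeated
    return "positive" if test_line_visible else "negative"

# Usage:
assert interpret_lft(True, True) == "positive"
assert interpret_lft(False, True) == "negative"
assert interpret_lft(True, False) == "invalid"
```

Note that a competitive-format LFT (used for small analytes) inverts the meaning of the test line: there, a visible test line indicates the absence of analyte.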
The coloration caused by the analyte is frequently called the test line, while the spot formed by unreacted antibodies is frequently called the control line. As seen from this principle, the LFT is a simple method relying on the naked eye: no specific instrumentation is necessary, and even a liquid sample can be measured directly without further treatment. It is typically suitable for field applications [14]. Though LFT can be called by the synonym lateral flow immunochromatographic assay, and antibodies are the most traditional recognition part, other recognition molecules fully replacing the antibodies can also be embedded; aptamers can be given as an example [15].

Gravidity tests for the semiquantitative determination of human chorionic gonadotropin in urine are well-known, mass-produced types of LFT [16]. However, other types of LFT are currently available in the market, and many of them serve the purpose of point-of-care diagnosis. The LFT kits for the detection of lipoarabinomannan in urine as a marker of Mycobacterium tuberculosis and diagnosis of tuberculosis [17, 18], detection of antibodies responsible for allergic reactions like the antibodies causing allergic bronchopulmonary aspergillosis [19, 20], detection of allergenic substances like peanut and hazelnut in food products [21], diagnosis of coronavirus disease 2019 (COVID-19) including antigens and specific antibodies [22–25], detection of antibodies against Brucella sp.
to diagnose brucellosis [26], diagnosis of peste des petits ruminants virus disease by antigen detection in fecal and nasal swab samples [27], measurement of C-reactive protein in blood, blood plasma, and serum [28], and assay of biological warfare agents and toxins like Bacillus anthracis, Escherichia coli O157:H7, staphylococcal enterotoxin B, ricin, botulinum toxin, Francisella tularensis, and Yersinia pestis [29–32] can be mentioned as examples of commercially available devices. The appearance of a commercially available LFT is depicted in Figure 2.

The standard methods lack the ability to measure the exact concentration of the analyte, and the assays can be performed as semiquantitative tests only. The spots formed on the LFT matrix are typically narrow, which is optimal for coloration scaling by the naked eye but not an ideal solution for colorimetry. Digital cameras, including those integrated into smartphones, are considered the future tool for coloration scaling in various analytical protocols [8, 33–36]. The assays can also be improved by spot design, for example by manufacturing several positive spots with unequal affinity toward the analyte placed along the LFT strip, so that the concentration of analyte can be better estimated by the naked eye. The next evolution of LFT should address the measuring platforms, turning the formerly qualitative or semiquantitative tests into quantitative ones.

2.1. Current Trends in LFT Construction in Point-of-Care Diagnosis

Though the original LFT devices from the 1980s are functional, further improvement is desired to improve their analytical specifications and reduce costs. Compared to the original devices from the 1980s, the currently researched and developed LFTs typically contain alterations in the selected recognition molecule and in the substance responsible for visualization of the interaction with the analyte.
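The multi-spot idea mentioned above, several test lines with unequal affinity toward the analyte, amounts to a coarse ladder read-out: the more lines that develop, the higher the concentration range. A conceptual sketch; the thresholds and ranges are invented for illustration and do not describe any cited device:

```python
# Hypothetical design: line i becomes visible once the analyte
# concentration exceeds a rising threshold (illustrative values only).
LINE_THRESHOLDS_NG_PER_ML = [1.0, 10.0, 100.0]

def visible_lines(concentration: float) -> int:
    """Number of test lines expected to develop at a given concentration."""
    return sum(concentration >= t for t in LINE_THRESHOLDS_NG_PER_ML)

def semiquantitative_range(concentration: float) -> str:
    """Translate the count of visible lines into a coarse concentration range."""
    ranges = ["< 1 ng/ml", "1-10 ng/ml", "10-100 ng/ml", "> 100 ng/ml"]
    return ranges[visible_lines(concentration)]

# Usage:
assert semiquantitative_range(0.5) == "< 1 ng/ml"
assert semiquantitative_range(50.0) == "10-100 ng/ml"
```

The naked-eye reader only has to count lines, which is far less subjective than judging the shade of a single line.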
Evolution of materials for matrix manufacturing, overall design resolving problems with manipulation by an unskilled worker, and improved LFT packaging providing long-term stability can be mentioned as other areas of improvement. The evolution of LFTs is, of course, not related to devices for diagnosis only, because the platform has gained overall popularity in analytical chemistry, and various applications are known at this moment.

An immunoassay-based point-of-care diagnostic tool was, for instance, described for COVID-19. The researchers investigated seroprevalence for COVID-19 using a standard enzyme-linked immunosorbent assay (ELISA) and compared it with a standard LFT based on antibodies labeled by colloidal gold [37]. The LFT and ELISA mutually correlated, and the authors concluded their work with a recommendation that LFT is suitable for point-of-care use in the healthcare setting and for COVID-19 monitoring. In another study on COVID-19, an LFT device for immunoglobulins (Ig) M and G in blood was constructed using selenium nanoparticle-labeled SARS-CoV-2 nucleoproteins interacting with the IgM and IgG antibodies [38]. The assay exerted a limit of detection of 20 ng/ml for IgM and 5 ng/ml for IgG in a 10-minute assay. Other types of nanoparticles can also be used for an LFT immunoassay. For instance, an LFT based on carbon nanoparticles conjugated with the p48 protein was developed for the diagnosis of mycoplasma caused by Mycoplasma bovis [39]. The assay exerted 100% specificity and no cross-reactivity with antibodies to other bovine pathogens, and full correlation with ELISA was also reached. An LFT using monoclonal antibodies labeled by gold nanoparticles was developed by Liu and coworkers for the assay of dinitolmide in chicken tissue [40].
The researchers reported a limit of detection of 2.5 μg/kg for chicken tissue containing dinitolmide, and the assay was fully comparable to liquid chromatography and ELISA. Gold nanoparticles (gold spheres with sizes of 30 and 100 nm, or gold-silica shells with a size of 150 nm) and antibody-based detection were also used in the development of an LFT for the human immunodeficiency virus [41]. The gold nanoparticles were covered by a monoclonal antibody against protein p24 of the human immunodeficiency virus, and the whole assay was made in a standard manner. The signal was recorded by thermal contrast reading using an IR camera and a laser. The assay was highly sensitive, as a limit of detection of 8 pg/ml of p24 was achieved.

The relevance of LFT can be clearly perceived in the recent events when a fast test for the diagnosis of COVID-19 was demanded. The standard diagnosis of COVID-19 was based on the polymerase chain reaction (presence of the pathogen) and ELISA (proof of antibodies), but these tests have to be performed in specialized laboratories and require quite a long time to be finished. LFTs were successfully introduced as an alternative to the polymerase chain reaction and ELISA, and they proved suitable for routine diagnosis based on the detection of COVID-19 antigen. Though they are not a replacement for the polymerase chain reaction and ELISA, they proved to be a suitable tool that can be developed in a short period and used wherever necessary.

Concurrently, the recognition capability of antibodies, suitable for measuring antigens (or of antigens when antibodies are assayed), is frequently replaced by the use of aptamers in many assays. Extensive progress in aptamer preparation has been achieved in recent years, and the aptamers typically comprise ribonucleic acid (RNA), deoxyribonucleic acid (DNA), peptides, or proteins [42–48]. Their potency for LFT development has been recognized as well [49].
Aptamers found their way into the construction of LFTs, and they have become a relevant recognition molecule in LFT device development. In a work by Tripathi and coworkers, an LFT was constructed for point-of-care cancer diagnosis by measurement of the marker CA125 level [50]. The researchers used an aptamer linked with gold nanoparticles serving as a peroxidase mimetic and optimized the assay for the detection of CA125 in human serum. The assay had a limit of detection of 3.71 U/ml and correlated with ELISA. A DNA aptamer for the residual penicillin antibiotic ampicillin was prepared by Lin and coworkers [51]. The researchers used hexachlorofluorescein for oligonucleotide labeling and were able to detect ampicillin in a range of 10 to 200 ng/l with a limit of quantification of 2.71 ng/l when a water sample was analyzed. A DNA aptamer was also used for the detection of dopamine in urine [52]. In this study, dopamine duplex DNA aptamers were conjugated to 40 nm gold nanoparticles, and the LFT was performed in a standard manner, resulting in a limit of detection of 50 ng/ml.

Figure 1: General principle of an LFT immunoassay. (a) Sample with an analyte (circle) is added to the pad where a labeled antibody (Y shaped) already exists. (b) Analyte and a labeled antibody are carried by a lateral flow (arrow) and they can mutually interact. (c) Complex of analyte-labeled antibody and the labeled antibody are captured on the test spot (analyte) or control spot (unreacted antibody), forming colored lines.

Figure 2: Appearance of commercial LFT devices. In the upper part, there is an LFT for human chorionic gonadotropin; in the lower part, there is a photograph of an LFT for the simultaneous determination of five biological warfare agents (Bacillus anthracis, ricin, Clostridium botulinum/botulinum toxin, Yersinia pestis, and staphylococcal enterotoxin B) by Pro Strips (Advnt Biotechnologies, Phoenix, AZ, USA).
Protein osteopontin, representing a new marker for cancer, was detected by a biotinylated aptamer, with streptavidin-modified gold nanoparticles serving for visualization [53]. The assay had a limit of detection of 0.1 ng/ml, with dynamic detection of osteopontin in a range from 10 to 500 ng/ml and a measuring time of 5 minutes. An aptamer-based LFT was chosen by Ali and coworkers for the early diagnosis of type-2 diabetes by assaying the protein vaspin using fluorescent upconverting nanoparticles [54]. Vaspin was recognized in a range of 0.1–55 ng/ml with a limit of detection of 39 pg/ml.

LFTs do not have to be targeted at defined molecules only. In a work by Yu and coworkers, an LFT was developed for tumor-derived exosomes for the rapid diagnosis of lung cell cancer [55]. An aptamer specific to CD63 on the exosome surface and gold nanoparticles giving good visualization and observability of the assay by the naked eye were used for the LFT construction. The assay was proved on exosomes isolated from human lung carcinoma cells at a dilution of 6.4 × 10⁹ particles/ml. Quantum dots are another material that can serve for the purpose of reaction visualization and macromolecule labeling [56–59]. An LFT method based on CdTe quantum dots was, for instance, constructed by Lu and coworkers in order to detect Shiga toxin type II [60]. The authors also worked with gold nanoparticles as a material for labeling antibodies. While labeling by gold nanoparticles provided a limit of detection of 25 ng/ml, the LFT based on CdTe quantum dots had a five times lower limit of detection: 5 ng/ml. LFT tests can also be improved by combination with other techniques that improve sample handling and increase sensitivity.
For instance, detection of Escherichia coli O157:H7 was performed in two steps: the first step was based on the bacterium being captured by immunomagnetic nanoparticles and a monoclonal antibody conjugated with beta-lactamase and gold nanoparticles; the second step comprised application of a penicillin solution and its hydrolysis by the beta-lactamase [61]. The final test, containing an LFT, consisted of an ultrasensitive penicillin immunochromatographic test strip, and the assay had a limit of detection of 137 CFU/ml. Further investigation is also focused on other types of recognition parts and labels, like molecularly imprinted polymers [62] and the use of liposome-enhanced signal amplification [63]. Highly fluorescent europium chelate-loaded silica nanoparticles can be mentioned as another type of label [64]. Research on matrix composition is also in demand, as it can improve analytical properties; nitrocellulose or nitrocellulose coated with nanocolloids appears to be promising [40, 65]. An overview of the aforementioned LFTs is given in Table 1.

Considering the current trends, it is clear that the new directions in LFT research are focused on two major areas: firstly, new recognition elements are researched; secondly, new types of nanoparticles are used for LFT construction. All the new materials can improve the final analytical parameters of an LFT, but the suitability of particular materials will depend on the type of assay and other conditions. There will probably never be an ideal recognition element or label for all assay scenarios. Molecularly imprinted polymers and aptamers are prospective recognition elements, but antibodies will probably remain an irreplaceable part of many commercial LFTs. Standard chemical labels like fluorescein will also remain a part of standard LFTs, though nanoparticles like gold nanoparticles or quantum dots probably represent a better alternative and will gain higher popularity in future practice.
Considering the current options in analytical chemistry, LFT probably remains the major tool for specific point-of-care diagnosis of various pathologies, besides devices like the Clark glucose biosensor and simple urine colorimetric test strips. Compared to the other tests, LFT is quite versatile and well suited to point-of-care conditions; on the other hand, LFT has a limitation on the molecular weight of the analyte because the assay works on an affinity principle. Analytes with low molecular weight are not suitable for a standard immunoassay, and a competitive format is the only way to use an immunoassay for the analysis of a small compound. However, new types of recognition elements like aptamers bring improvement, and LFTs even for small molecules like dinitolmide, ampicillin, and dopamine can be seen in the examples of new research on LFT.

2.2. Instrumentation of LFT for Point-of-Care Diagnosis

LFT methods are typically intended to be either qualitative or semiquantitative, with the coloration determined by the naked eye. If the assay is performed as a semiquantitative one, the determined range of values is highly inaccurate. The overall simplicity of the method and the absence of any need for an analytical device, electricity, or elaborate sample manipulation are the major advantages of LFT. On the other hand, there are disadvantages as well. The scaling of coloration by the naked eye is highly subjective and also depends on ambient light conditions. The subjective perception of color may be a problem when the point-of-care diagnosis is performed by elderly or disabled people. Development of coloration readers suitable for standard LFTs is one way to improve the assay, and such reader devices are especially desired in point-of-care testing [66]. The improved LFT assays are quantitative, or at least semiquantitative with acceptable accuracy of concentration range determination.
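A reader device turns the measured line intensity into a concentration through a calibration curve, and the limit of detection is then commonly estimated by the 3-sigma convention: three standard deviations of the blank signal divided by the calibration slope. A sketch with invented calibration data; none of the numbers refer to any cited assay:

```python
from statistics import mean, stdev

def linear_fit(x, y):
    """Ordinary least-squares fit of y = slope * x + intercept."""
    mx, my = mean(x), mean(y)
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return slope, my - slope * mx

# Illustrative calibration: line intensity (a.u.) vs. standard concentration (ng/ml).
conc = [0.0, 5.0, 10.0, 20.0, 40.0]
signal = [2.0, 12.1, 22.0, 41.8, 82.1]
slope, intercept = linear_fit(conc, signal)

# Limit of detection from replicate blank measurements (3-sigma convention).
blanks = [1.8, 2.1, 2.0, 2.2, 1.9]
lod = 3 * stdev(blanks) / slope

# An unknown sample is quantified by inverting the calibration line.
unknown_signal = 30.0
estimated_conc = (unknown_signal - intercept) / slope  # about 14 ng/ml here
```

With such a calibration in the reader firmware or companion app, the same strip that a naked eye would read only as "positive" yields a numeric result.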
Currently, broad attention is given to digital photography because of the good availability of cameras and their integration into smartphones. Standard cameras integrated into smartphones are able to provide at least 8-bit digital photography in a format like JPEG, where the 8-bit color depth corresponds to 256 levels for each channel. Better cameras giving 12, 14, 16, or more bits and providing raw data from the digital sensor are also widely available in the market. Digital photography is highly suitable for colorimetry by color depth analysis; colorimetric tests performed on thin-layer, paper-based assays and detector strips (like pH strips and others) can be recorded this way, and the digital photography-assisted assay is well suited for point-of-care testing [7, 36, 67–69]. Digital photography also has its limitations, making the assay inaccurate under some conditions. First, the light source has to be mentioned: the light conditions are crucial when a sensor is photographed, and integrated light-emitting diodes can have problems with the light temperature setting. There can also be problems with the setting of white balance and color temperature in the camera. Problems with lens quality can also play a role when a cheap camera is used for point-of-care testing. Nevertheless, the use of digital cameras in personal diagnosis is considered the next direction of research and application into practice [70–75].

Instrumental analysis of the spots formed in LFT has been introduced in several applications. In the aforementioned study by Zhan and coworkers, detection of the p24 protein of the human immunodeficiency virus was made using thermal contrast reading [41]. The spot on the LFT was formed by a complex of the gold nanoparticle conjugate with p24 from a sample and the capturing zone on the LFT pad.
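Scaling the coloration from an 8-bit photograph reduces to averaging pixel values over the region of interest covering the test line. For the red coloration of gold nanoparticle labels, the green channel is often a reasonable choice because it gives the strongest contrast against a red line; this heuristic, and the pixel data, are illustrative assumptions rather than a protocol from the article:

```python
def roi_channel_mean(pixels, channel):
    """Mean 8-bit value (0-255) of one color channel over a region of interest.

    `pixels` is a list of rows, each row a list of (R, G, B) tuples, as one
    would obtain from any camera image after cropping to the test line.
    """
    values = [px[channel] for row in pixels for px in row]
    return sum(values) / len(values)

# Tiny invented ROI: a reddish test line photographed by an 8-bit camera.
roi = [
    [(200, 120, 130), (198, 118, 128)],
    [(201, 121, 131), (199, 119, 129)],
]

GREEN = 1  # channel index; green shows good contrast for red-colored labels
signal = 255 - roi_channel_mean(roi, GREEN)  # darker line -> larger signal
```

In practice the same averaging would also be applied to a nearby blank region of the membrane, and the background-corrected difference used as the analytical signal.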
The spot was illuminated by a laser with a wavelength of 532 or 800 nm, with the power adjusted in the range from 10 to 500 mW. The illuminated spot was recorded, its temperature was measured by an IR camera, and the digital data were recorded for further processing and signal scaling. A digital camera containing a complementary metal-oxide-semiconductor (CMOS) chip served for spot color recording in the study by Jahanpeyma and coworkers [76]. The researchers tested their LFT device for the hybridization of DNA, and visualization was made by the application of a biotinylated detector probe in the presence of a peroxidase-streptavidin conjugate; the peroxidase was responsible for the chemiluminescence reaction recorded by the camera. The assay was tested by proving the 16S rRNA gene from Escherichia coli, and the lowest reached limit of detection was equal to 1.5 pmol/l. The use of a peroxidase-catalyzed reaction in an LFT was also described in a paper by Mirasoli and coworkers [77]. They adopted their method for the detection of fumonisin in food samples, and the mycotoxin was detected in a range of 2.5–500 μg/l with a limit of detection of 2.5 μg/l for an assay lasting 25 minutes. The detection signal was evaluated by a charge-coupled device (CCD) camera. The authors stated that the peroxidase reaction generating chemiluminescent products is more suitable for a quantitative LFT assay than an assay where colloidal gold is used instead of peroxidase. The digital scaling of coloration can even be made by simpler devices than cameras: a digital scanner was selected as the analytical tool in the work by Posthuma-Trumpie and coworkers [78]. The authors successfully performed a standard LFT for a progesterone assay using antibodies and a carbon colloid as a label, and the LFT strips were scanned and analyzed digitally. Spots on an LFT can also be evaluated and the coloration determined by a smartphone camera, which makes the assays more available to most people.
A smartphone camera assay based on an LFT was investigated for the detection of mercury [79]. The assay comprised the use of streptavidin-biotinylated DNA probes modified with gold nanoparticles and adsorbing mercury, and it was proved with a limit of detection of 2.53 nmol/l. In another smartphone application, uricemia (uric acid content in the blood) was measured by a combination of an LFT, where coloration was initiated by Prussian blue nanoparticles acting as artificial nanozymes, and a standard smartphone for spot characterization [80]. The assay exerted a linear range of 1.5–8.5 mg/dl for uric acid. An overview of LFT applications in point-of-care diagnosis is given in Table 1, and an overview of the types of instrumentation applicable for an LFT assay is shown in Table 2.

Table 1: Overview of LFT in point-of-care diagnosis.

| Assay/diagnosis of a pathology | Type of recognition part | Type of label attached to the recognition part | Analytical specifications | Reference |
| COVID-19 | Antibody | Colloidal gold | Full correlation with ELISA for clinical samples testing | [37] |
| COVID-19-specific antibodies recognition | SARS-CoV-2 nucleoproteins | Selenium nanoparticles | 20 ng/ml for IgM and 5 ng/ml for IgG in 10 minutes | [38] |
| Mycoplasma | p48 protein | Carbon nanoparticles | 100% specificity, no cross-reactivity, full correlation with ELISA | [39] |
| Dinitolmide in tissue | Monoclonal antibody | Gold nanoparticles | Limit of detection of 2.5 μg/kg for chicken tissue containing dinitolmide | [40] |
| Protein p24 of human immunodeficiency virus | Monoclonal antibody | Gold nanoparticles | Limit of detection of 8 pg/ml | [41] |
| Diagnosis of cancer by CA125 assay | Aptamer | Gold nanoparticles | Limit of detection of 3.71 U/ml | [50] |
| Ampicillin in water | DNA aptamer | Hexachlorofluorescein | Limit of quantification of 2.71 ng/l | [51] |
| Dopamine in urine | DNA aptamer | Gold nanoparticles | Limit of detection of 50 ng/ml | [52] |
| Cancer marker osteopontin | Biotinylated aptamer | Conjugate streptavidin-gold nanoparticles | Limit of detection of 0.1 ng/ml, with a dynamic range of 10 to 500 ng/ml, time per assay 5 minutes | [53] |
| Assay of vaspin as an early marker of type-2 diabetes | Aptamer | Fluorescent upconverting nanoparticles | Limit of detection for vaspin of 39 pg/ml | [54] |
| Exosomes for rapid diagnosis of lung cell cancer | Aptamer specific to CD63 on exosome surface | Gold nanoparticles | Detection of 6.4 × 10^9 exosomes/ml | [55] |
| Shiga toxin type II | Antibodies | Gold nanoparticles and CdSe quantum dots | 25 ng/ml (labeling by gold nanoparticles), 5 ng/ml (labeling by quantum dots) | [60] |

3. Conclusions

LFT devices and kits can be found on the current market as standard devices, and new improved types are being researched. Three major directions of improvement can be observed when current research is compared with the traditional LFT devices based on the immunoassay principle: (1) new labels and materials, including nanoparticles providing contrast coloration; (2) new recognition molecules selectively interacting with analytes; and (3) types of instrumentation making LFT-based assays quantitative instead of the originally qualitative ones. All these facts make the LFT a significant tool in point-of-care diagnostics, where it can be performed for multiple diagnoses or for analyses of harmful substances or microorganisms. Overall simplicity and growing sensitivity make the LFT a tool for a wide number of markers. Though the traditional analytes in an LFT assay were molecules with higher molecular weight, it is expected that the LFT will become a universal tool even for analytes with lower molecular weight, making the assay more universal. The relevance of the LFT was also evident during the COVID-19 crisis, when tests for COVID-19 antibodies and antigens were urgently developed and marketed in quite a short time.

Data Availability

No data were used to support this study.

Conflicts of Interest

The authors declare that they have no conflicts of interest.
Acknowledgments

This work was supported by a long-term organization development plan "Medical Aspects of Weapons of Mass Destruction" (Faculty of Military Health Sciences, University of Defense, Czech Republic) and the Technology Agency of the Czech Republic (TACR) (TH03030336).

References

[1] P. Babaie, A. Saadati, and M. Hasanzadeh, "Recent progress and challenges on the bioassay of pathogenic bacteria," Journal of Biomedical Materials Research Part B: Applied Biomaterials, vol. 14, Article ID 34723, 2020.
[2] S. Menon, M. R. Mathew, S. Sam, K. Keerthi, and K. G. Kumar, "Recent advances and challenges in electrochemical biosensors for emerging and re-emerging infectious diseases," Journal of Electroanalytical Chemistry, vol. 878, Article ID 114596, 2020.
[3] Q. H. Nguyen and M. I. Kim, "Nanomaterial-mediated paper-based biosensors for colorimetric pathogen detection," TrAC Trends in Analytical Chemistry, vol. 132, Article ID 116038, 2020.
[4] M. Pohanka, "The piezoelectric biosensors: principles and applications, a review," International Journal of Electrochemical Science, vol. 12, pp. 496–506, 2017.
[5] B. Senf, W.-H. Yeo, and J.-H. Kim, "Recent advances in portable biosensors for biomarker detection in body fluids," Biosensors, vol. 10, no. 9, p. 127, 2020.
[6] S.-u. Hassan, A. Tariq, Z. Noreen et al., "Capillary-driven flow microfluidics combined with smartphone detection: an emerging tool for point-of-care diagnostics," Diagnostics, vol. 10, no. 8, p. 509, 2020.
[7] D. S. Y. Ong and M. Poljak, "Smartphones as mobile microbiological laboratories," Clinical Microbiology and Infection, vol. 26, no. 4, pp. 421–424, 2020.

Table 2: LFT signal recording by an analytical instrument.
| Type of assay | Type of instrumentation | Physical principle of the instrumental assay | Reference |
| LFT for the p24 protein from human immunodeficiency virus; gold nanoparticles conjugated with a monoclonal antibody against p24 were used | Laser illuminating specific spots, IR camera recording temperature | Thermal contrast reading | [41] |
| Detection of the 16S rRNA gene from Escherichia coli by hybridization, using a biotinylated probe and peroxidase conjugated with streptavidin; the peroxidase was responsible for the chemiluminescent reaction | Digital camera with CMOS chip | Digital camera recorded chemiluminescence provided by peroxidase | [76] |
| Detection of fumonisin; labeling of antibodies by peroxidase allows performing a chemiluminescent chemical reaction that is instrumentally measured | Digital camera with CCD chip | Digital camera recorded chemiluminescence provided by peroxidase | [77] |
| LFT immunoassay of progesterone based on labeling by carbon colloid | Digital scanner | Strips were scanned and the digital data analyzed | [78] |
| LFT assay comprising streptavidin-biotinylated DNA probes modified with gold nanoparticles | Smartphone camera | Detected spots were photographed by a smartphone camera, and coloration was measured | [79] |
| LFT where coloration was initiated by Prussian blue nanoparticles acting as artificial nanozymes | Smartphone camera | Detected spots were photographed by a smartphone camera, and coloration was measured | [80] |

[8] M. Pohanka, "Small camera as a handheld colorimetric tool in the analytical chemistry," Chemical Papers, vol. 71, no. 9, pp. 1553–1561, 2017.
[9] H. A. Watson, R. M. Tribe, and A. H. Shennan, "The role of medical smartphone apps in clinical decision-support: a literature review," Artificial Intelligence in Medicine, vol. 100, Article ID 101707, 2019.
[10] N. A. Izmailov and M. S. Schraiber, "Spot chromatographic adsorption analysis and its application in pharmacy," Communication I, Farmatsiya, vol. 3, pp. 1–7, 1936.
[11] F. Klein and M. B. Janssens, "Standardisation of serological tests for rheumatoid factor measurement," Annals of the Rheumatic Diseases, vol. 46, no. 9, pp. 674–680, 1987.
[12] J. M. Singer, S. C. Edberg, R. L. Markowitz, J. D. Glickman, L. Miller, and R. Marchitelli, "Performance of latex-fixation kits used for serologic diagnosis of rheumatoid factor in rheumatoid arthritis sera," American Journal of Clinical Pathology, vol. 72, no. 4, pp. 597–603, 1979.
[13] A. E. Urusov, A. V. Zherdev, and B. B. Dzantiev, "Towards lateral flow quantitative assays: detection approaches," Biosensors, vol. 9, no. 3, p. 89, 2019.
[14] B. O'Farrell, "Lateral flow technology for field-based applications-basics and advanced developments," Topics in Companion Animal Medicine, vol. 30, no. 4, pp. 139–147, 2015.
[15] A. Chen and S. Yang, "Replacing antibodies with aptamers in lateral flow immunoassay," Biosensors and Bioelectronics, vol. 71, pp. 230–242, 2015.
[16] Y. Zhang, X. Liu, L. Wang et al., "Improvement in detection limit for lateral flow assay of biomacromolecules by test-zone pre-enrichment," Scientific Reports, vol. 10, no. 1, p. 9604, 2020.
[17] S. D. Lawn, "Point-of-care detection of lipoarabinomannan (LAM) in urine for diagnosis of HIV-associated tuberculosis: a state of the art review," BMC Infectious Diseases, vol. 12, no. 1, pp. 1471–2334, 2012.
[18] M. Shah, C. Hanrahan, Z. Y. Wang et al., "Lateral flow urine lipoarabinomannan assay for detecting active tuberculosis in HIV-positive adults," Cochrane Database of Systematic Reviews, vol. 10, p. CD011420, 2016.
[19] E. S. Hunter, I. D. Page, M. D. Richardson, and D. W. Denning, "Evaluation of the LDBio Aspergillus ICT lateral flow assay for serodiagnosis of allergic bronchopulmonary aspergillosis," PLoS One, vol. 15, 2020.
[20] P. Schubert-Ullrich, J. Rudolf, P.
Ansari et al., "Commercialized rapid immunoanalytical tests for determination of allergenic food proteins: an overview," Analytical and Bioanalytical Chemistry, vol. 395, no. 1, pp. 69–81, 2009.
[21] M. Röder, S. Vieths, and T. Holzhauser, "Commercial lateral flow devices for rapid detection of peanut (Arachis hypogaea) and hazelnut (Corylus avellana) cross-contamination in the industrial production of cookies," Analytical and Bioanalytical Chemistry, vol. 395, no. 1, pp. 103–109, 2009.
[22] F. Cui and H. S. Zhou, "Diagnostic methods and potential portable biosensors for coronavirus disease 2019," Biosensors and Bioelectronics, vol. 165, Article ID 112349, 2020.
[23] H. A. Hussein, R. Y. A. Hassan, M. Chino, and F. Febbraio, "Point-of-Care diagnostics of COVID-19: from current work to future perspectives," Sensors, vol. 20, no. 15, p. 4289, 2020.
[24] Z. Lyu, M. Harada Sassa, T. Fujitani, and K. H. Harada, "Serological tests for SARS-CoV-2 coronavirus by commercially available point-of-care and laboratory diagnostics in pre-COVID-19 samples in Japan," Diseases, vol. 8, no. 4, p. 36, 2020.
[25] L. Weidner, S. Gänsdorfer, S. Unterweger et al., "Quantification of SARS-CoV-2 antibodies with eight commercially available immunoassays," Journal of Clinical Virology, vol. 129, Article ID 104540, 2020.
[26] D. M. Pfukenyi, E. Meletis, B. Modise et al., "Evaluation of the sensitivity and specificity of the lateral flow assay, Rose Bengal test and the complement fixation test for the diagnosis of brucellosis in cattle using Bayesian latent class analysis," Preventive Veterinary Medicine, vol. 181, Article ID 105075, 2020.
[27] S. Halecker, S. Joseph, R. Mohammed et al., "Comparative evaluation of different antigen detection methods for the detection of peste des petits ruminants virus," Transboundary and Emerging Diseases, vol. 5, Article ID 13660, 2020.
[28] J. C. Wei, Y. M. Gao, G. Q.
Li et al., "A lateral flow immunochromatographic assay (LFIA) strip reader based on scheduler and 8051 IP core," Automatika, vol. 57, no. 4, pp. 1079–1088, 2016.
[29] F. Gessler, S. Pagel-Wieder, M.-A. Avondet, and H. Böhnel, "Evaluation of lateral flow assays for the detection of botulinum neurotoxin type A and their application in laboratory diagnosis of botulism," Diagnostic Microbiology and Infectious Disease, vol. 57, no. 3, pp. 243–249, 2007.
[30] H.-L. Hsu, C.-C. Chuang, C.-C. Liang et al., "Rapid and sensitive detection of Yersinia pestis by lateral-flow assay in simulated clinical samples," BMC Infectious Diseases, vol. 18, no. 1, p. 402, 2018.
[31] M. Pohanka, "Current trends in the biosensors for biological warfare agents assay," Materials, vol. 12, no. 14, p. 2303, 2019.
[32] I. Ziegler, P. Vollmar, M. Knüpfer, P. Braun, and K. Stoecker, "Reevaluating limits of detection of 12 lateral flow immunoassays for the detection of Yersinia pestis, Francisella tularensis, and Bacillus anthracis spores using viable risk group-3 strains," Journal of Applied Microbiology, 2020.
[33] E. Frantz, H. Li, and A. J. Steckl, "Quantitative hematocrit measurement of whole blood in a point-of-care lateral flow device using a smartphone flow tracking app," Biosensors and Bioelectronics, vol. 163, Article ID 112300, 2020.
[34] Y. Jung, Y. Heo, J. J. Lee, A. Deering, and E. Bae, "Smartphone-based lateral flow imaging system for detection of food-borne bacteria E. coli O157:H7," Journal of Microbiological Methods, vol. 168, Article ID 105800, 2020.
[35] M. Pohanka, "Photography by cameras integrated in smartphones as a tool for analytical chemistry represented by an butyrylcholinesterase activity assay," Sensors, vol. 15, no. 6, pp. 13752–13762, 2015.
[36] M. Pohanka, "Colorimetric hand-held sensors and biosensors with a small digital camera as signal recorder, a review," Reviews in Analytical Chemistry, vol. 39, no. 1, p. 20, 2020.
[37] S. Pickering, G. Betancor, R.
P. Galão et al., "Comparative assessment of multiple COVID-19 serological technologies supports continued evaluation of point-of-care lateral flow assays in hospital and community healthcare settings," PLOS Pathogens, vol. 16, no. 9, Article ID e1008817, 2020.
[38] Z. Wang, Z. Zheng, H. Hu et al., "A point-of-care selenium nanoparticle-based test for the combined detection of anti-SARS-CoV-2 IgM and IgG in human serum and blood," Lab on a Chip, vol. 20, no. 22, p. 4255, 2020.
[39] F. Shi, Y. Zhao, Y. Sun, and C. Chen, "Development and application of a colloidal carbon test strip for the detection of antibodies against Mycoplasma bovis," World Journal of Microbiology and Biotechnology, vol. 36, no. 10, p. 157, 2020.
[40] C. Liu, S. Fang, Y. Tian et al., "Rapid detection of Escherichia coli O157:H7 in milk, bread, and jelly by lac dye coloration-based bidirectional lateral flow immunoassay strip," Journal of Food Safety, 2020.
[41] L. Zhan, T. Granade, Y. Liu et al., "Development and optimization of thermal contrast amplification lateral flow immunoassays for ultrasensitive HIV p24 protein detection," Microsystems & Nanoengineering, vol. 6, no. 1, p. 54, 2020.
[42] F. Farshchi and M. Hasanzadeh, "Nanomaterial based aptasensing of prostate specific antigen (PSA): recent progress and challenges in efficient diagnosis of prostate cancer using biomedicine," Biomedicine & Pharmacotherapy, vol. 132, Article ID 110878, 2020.
[43] Y. Guo, F. Yang, Y. Yao et al., "Novel Au-tetrahedral aptamer nanostructure for the electrochemiluminescence detection of acetamiprid," Journal of Hazardous Materials, vol. 401, Article ID 123794, 2021.
[44] N. Cheng, Y. Liu, O. Mukama et al., "A signal-enhanced and sensitive lateral flow aptasensor for the rapid detection of PDGF-BB," RSC Advances, vol. 10, no. 32, pp. 18601–18607, 2020.
[45] T. N. Navien, R. Thevendran, H. Y. Hamdani, T.-H. Tang, and M.
Citartan, "In silico molecular docking in DNA aptamer development," Biochimie, vol. 180, pp. 54–67, 2021.
[46] Y. Ning, J. Hu, and F. Lu, "Aptamers used for biosensors and targeted therapy," Biomedicine & Pharmacotherapy, vol. 132, Article ID 110902, 2020.
[47] F. Xue, Y. Chen, Y. Wen et al., "Isolation of extracellular vesicles with multivalent aptamers," The Analyst, vol. 146, no. 1, p. 253, 2020.
[48] T. Yang, X. Yang, X. Guo et al., "A novel fluorometric aptasensor based on carbon nanocomposite for sensitive detection of Escherichia coli O157:H7 in milk," Journal of Dairy Science, vol. 103, no. 9, pp. 7879–7889, 2020.
[49] R. Reid, B. Chatterjee, S. J. Das, S. Ghosh, and T. K. Sharma, "Application of aptamers as molecular recognition elements in lateral flow assays," Analytical Biochemistry, vol. 593, Article ID 113574, 2020.
[50] P. Tripathi, A. Kumar, M. Sachan, S. Gupta, and S. Nara, "Aptamer-gold nanozyme based competitive lateral flow assay for rapid detection of CA125 in human serum," Biosensors and Bioelectronics, vol. 165, Article ID 112368, 2020.
[51] H. Lin, F. Fang, J. Zang et al., "A fluorescent sensor-assisted paper-based competitive lateral flow immunoassay for the rapid and sensitive detection of ampicillin in hospital wastewater," Micromachines, vol. 11, no. 4, p. 431, 2020.
[52] S. Dalirirad and A. J. Steckl, "Lateral flow assay using aptamer-based sensing for on-site detection of dopamine in urine," Analytical Biochemistry, vol. 596, Article ID 113637, 2020.
[53] O. Mukama, W. Wu, J. Wu et al., "A highly sensitive and specific lateral flow aptasensor for the detection of human osteopontin," Talanta, vol. 210, Article ID 120624, 2020.
[54] M. Ali, M. Sajid, M. A. U. Khalid et al., "A fluorescent lateral flow biosensor for the quantitative detection of Vaspin using upconverting nanoparticles," Spectrochimica Acta Part A: Molecular and Biomolecular Spectroscopy, vol. 226, Article ID 117610, 2020.
[55] Q. Yu, Q. Zhao, S.
Wang et al., "Development of a lateral flow aptamer assay strip for facile identification of theranostic exosomes isolated from human lung carcinoma cells," Analytical Biochemistry, vol. 594, Article ID 113591, 2020.
[56] J. Cao, B. Zhu, K. Zheng et al., "Recent progress in NIR-II contrast agent for biological imaging," Frontiers in Bioengineering and Biotechnology, vol. 7, p. 487, 2020.
[57] M. Ganesan and P. Nagaraaj, "Quantum dots as nanosensors for detection of toxics: a literature review," Analytical Methods, vol. 12, no. 35, pp. 4254–4275, 2020.
[58] M. Pan, X. Xie, K. Liu, J. Yang, L. Hong, and S. Wang, "Fluorescent carbon quantum dots-synthesis, functionalization and sensing application in food analysis," Nanomaterials, vol. 10, no. 5, p. 930, 2020.
[59] M. Pohanka, "Quantum dots in the therapy: current trends and perspectives," Mini-Reviews in Medicinal Chemistry, vol. 20, p. 20, 2017.
[60] T. Lu, K.-D. Zhu, C. Huang et al., "Rapid detection of Shiga toxin type II using lateral flow immunochromatography test strips of colorimetry and fluorimetry," The Analyst, vol. 145, no. 1, pp. 76–82, 2020.
[61] W. Chen, S. Shan, J. Peng et al., "Sensitive and hook effect-free lateral flow assay integrated with cascade signal transduction system," Sensors and Actuators B: Chemical, vol. 321, Article ID 128465, 2020.
[62] Y. He, S. Hong, M. Wang et al., "Development of fluorescent lateral flow test strips based on an electrospun molecularly imprinted membrane for detection of triazophos residues in tap water," New Journal of Chemistry, vol. 44, no. 15, pp. 6026–6036, 2020.
[63] S. Kim and H. D. Sikes, "Liposome-enhanced polymerization-based signal amplification for highly sensitive naked-eye biodetection in paper-based sensors," ACS Applied Materials & Interfaces, vol. 11, no. 31, pp. 28469–28477, 2019.
[64] X. Xia, Y. Xu, X. Zhao, and Q. Li, "Lateral flow immunoassay using europium chelate-loaded silica nanoparticles as labels," Clinical Chemistry, vol. 55, no. 1, pp.
179–182, 2009.
[65] P. Manta, R. Nagraik, A. Sharma et al., "Optical density optimization of malaria Pan rapid diagnostic test strips for improved test zone band intensity," Diagnostics, vol. 10, no. 11, p. 880, 2020.
[66] R. Xiao, L. Lu, Z. Rong et al., "Portable and multiplexed lateral flow immunoassay reader based on SERS for highly sensitive point-of-care testing," Biosensors and Bioelectronics, vol. 168, Article ID 112524, 2020.
[67] G. M. Fernandes, W. R. Silva, D. N. Barreto et al., "Novel approaches for colorimetric measurements in analytical chemistry - a review," Analytica Chimica Acta, vol. 1135, pp. 187–203, 2020.
[68] H. Chen, Z. Li, L. Zhang, P. Sawaya, J. Shi, and P. Wang, "Quantitation of femtomolar-level protein biomarkers using a simple microbubbling digital assay and bright-field smartphone imaging," Angewandte Chemie International Edition, vol. 58, no. 39, pp. 13922–13928, 2019.
[69] M. Pohanka, J. Zakova, and I. Sedlacek, "Digital camera-based lipase biosensor for the determination of paraoxon," Sensors and Actuators B: Chemical, vol. 273, pp. 610–615, 2018.
[70] L. Harendarcikova and J. Petr, "Smartphones & microfluidics: marriage for the future," Electrophoresis, vol. 39, pp. 1319–1328, 2018.
[71] I. Nishidate, M. Minakawa, D. McDuff et al., "Simple and affordable imaging of multiple physiological parameters with RGB camera-based diffuse reflectance spectroscopy," Biomedical Optics Express, vol. 11, no. 2, pp. 1073–1091, 2020.
[72] D. Quesada-González and A. Merkoçi, "Mobile phone-based biosensing: an emerging "diagnostic and communication" technology," Biosensors and Bioelectronics, vol. 92, pp. 549–562, 2017.
[73] T. Sergeyeva, D. Yarynka, L. Dubey et al., "Sensor based on molecularly imprinted polymer membranes and smartphone for detection of Fusarium contamination in cereals," Sensors, vol. 20, no. 15, p. 4304, 2020.
[74] K. Tomimuro, K. Tenda, Y. Ni, Y. Hiruta, M. Merkx, and D.
Citterio, "Thread-based bioluminescent sensor for detecting multiple antibodies in a single drop of whole blood," ACS Sensors, vol. 5, no. 6, pp. 1786–1794, 2020.
[75] A. K. Yetisen, M. S. Akram, and C. R. Lowe, "Paper-based microfluidic point-of-care diagnostic devices," Lab on a Chip, vol. 13, no. 12, pp. 2210–2251, 2013.
[76] F. Jahanpeyma, M. Forouzandeh, M. J. Rasaee, and N. Shoaie, "An enzymatic paper-based biosensor for ultrasensitive detection of DNA," Frontiers in Bioscience (Scholar Edition), vol. 11, pp. 122–135, 2019.
[77] M. Mirasoli, A. Buragina, L. S. Dolci et al., "Chemiluminescence-based biosensor for fumonisins quantitative detection in maize samples," Biosensors and Bioelectronics, vol. 32, no. 1, pp. 283–287, 2012.
[78] G. A. Posthuma-Trumpie, J. Korf, and A. van Amerongen, "Development of a competitive lateral flow immunoassay for progesterone: influence of coating conjugates and buffer components," Analytical and Bioanalytical Chemistry, vol. 392, no. 6, pp. 1215–1223, 2008.
[79] Z. Guo, Y. Kang, S. Liang, and J. Zhang, "Detection of Hg(II) in adsorption experiment by a lateral flow biosensor based on streptavidin-biotinylated DNA probes modified gold nanoparticles and smartphone reader," Environmental Pollution, vol. 266, Article ID 115389, 2020.
[80] N.-S. Li, Y.-T. Chen, Y.-P. Hsu et al., "Mobile healthcare system based on the combination of a lateral flow pad and smartphone for rapid detection of uric acid in whole blood," Biosensors and Bioelectronics, vol. 164, Article ID 112309, 2020.

work_ysd7qi4nnnfarknb4zvewgnfhi ----

© 2018 Haik et al. Published and licensed by Dove Medical Press Limited under a Creative Commons Attribution – Non Commercial (v3.0) license.
CASE REPORT

International Medical Case Reports Journal 2018:11, 5–7. http://dx.doi.org/10.2147/IMCRJ.S152145

Cauliflower ear – a minimally invasive treatment method in a wrestling athlete: a case report

Josef Haik,1–4 Or Givol,2 Rachel Kornhaber,1,5 Michelle Cleary,6 Hagit Ofir,1,2 Moti Harats1–3

1Department of Plastic and Reconstructive Surgery, Sheba Medical Center, Tel Hashomer, Ramat Gan, 2Sackler School of Medicine, Tel Aviv University, Tel Aviv, Israel; 3Burn Injury Research Node, Institute for Health Research, University of Notre Dame Fremantle, Fremantle, WA, Australia; 4Talpiot Leadership Program, Sheba Medical Center, Tel Hashomer, Ramat Gan, Israel; 5Faculty of Health, 6School of Health Sciences, College of Health and Medicine, University of Tasmania, Sydney, NSW, Australia

Abstract: Acute auricular hematoma can be caused by direct blunt trauma or other injury to the external ear. It is typically seen in those who practice full contact sports such as boxing, wrestling, and rugby. "Cauliflower ear" deformity, fibrocartilage formation during scarring, is a common complication of auricular hematomas. Therefore, acute drainage of the hematoma and postprocedural techniques for preventing recurrence are necessary to prevent the deformity. Many techniques exist, although no superior method of treatment has been found. In this case report, we describe a novel method using needle aspiration followed by the application of a magnet and an adapted disc to the affected area of the auricle.
This minimally invasive, simple, and accessible method could potentially facilitate the treatment of cauliflower ear among full contact sports athletes.

Keywords: cauliflower ear, hematoma, ear deformity, athletic injuries, wrestling, case report

Introduction

Auricular hematomas are a commonly seen complication of blunt trauma to the auricle, usually seen in those who practice full contact sports such as boxing and wrestling.1 Those who do not wear protective headgear are at greater risk.2 Auricular hematomas form within the space between the perichondrium and cartilage on the anterior aspect of the ear.1 They develop due to the unique histological characteristics of the damaged auricle together with the peculiar pathology of the injury.3 The unique structure of the external ear is susceptible to traumatic injury.4 During the scarring process, the subperichondrial hematoma is invaded by chondrocytes and fibroblasts from the perichondrium, and a fibrocartilage is formed.1,5 As the blood supply to the cartilage is dependent on the perichondrium, complications such as infections and cartilaginous necrosis may ensue.1 If left untreated, an ear deformity coined "cauliflower ear" may develop as a result of stimulation of the mesenchymal cells in the perichondrium and formation of new fibrocartilage.1 Numerous options to treat auricular hematomas have been reported, and effective management remains a contentious issue.3,6 Surgical correction remains the mainstay of treatment for "cauliflower ear",3,7 with the need to take time off from training. The most common treatment method for an auricular hematoma is drainage followed by techniques to prevent recurrence.8,9 Common drainage techniques are scalpel incisions, aspiration, and then the placement of materials to remove the dead space. There is a large variety of postprocedural techniques for preventing recurrence.
Materials reported in the literature have included tie-over and tie-through dressings, suturing, silicone splinting, cotton and plaster bolsters, passive and suction drainage, and clips.6 More recently, the use of mattress sutures,10 suturing pairs of Leonard buttons,11 and silastic sheets3 has also been reported in the treatment of auricular hematomas. Although these methods facilitate the removal of the dead space, the application of evenly distributed pressure across the hematoma remains problematic.3 Furthermore, techniques that use cotton bolsters for the application of pressure can contribute to postoperative infections due to the wound exudate that requires frequent dressings.3

Correspondence: Josef Haik, Department of Plastic and Reconstructive Surgery, Sheba Medical Center, Emek HaEla Street 1, Ramat Gan 52621, Israel. Tel +972 52666 6245. Email Josef.Haik@sheba.health.gov.il

Case report

An 18-year-old competitive ju-jitsu wrestler presented with a chief complaint of pain and swelling in his right auricle. On observation, the formation of an acute auricular hematoma was noted.
After cleaning the area with chlorhexidine, needle aspiration using a 30G insulin syringe was performed (Figure 1A and B). A magnet and a metal disc were applied on both sides of the auricle and left in situ for 3 days to compress the skin, perichondrium, and cartilage (Figure 1C and D). The use of a metal disc (as opposed to the use of two magnets) was to limit the amount of pressure exerted on the area to prevent undue pain, trauma, and additional injury. The pressure is adjusted according to the pain and can be altered by adding a layer of thin cotton or fabric. In our patient, we used a strong magnet on the posterior aspect of the ear and a disc that suited the ear size and shape, covered with a layer of cotton, which did not require the application of medical tape to secure. As the magnet is rigid and provides homogeneous pressure, it must be noted that most of the collection is located in a relatively flat area between the helix and antihelix, where the shearing forces can create the virtual space that collects the blood or serum seen in these cases. In the event a recollection was observed, the same process was repeated, and the hematoma was drained up to three times over 3 days. No hematoma was observed after 1 month, and the crus of the antihelix is clearly seen (Figure 1E). The same athlete has sought minimally invasive treatment over the past 2 years after recurring injuries, and this procedure has been repeated six times. The patient provided written informed consent for the described procedures and the use of digital photography for the purposes of treatment, teaching, and use in academic publications.
Discussion

In this case report, we describe a cost-effective, minimally invasive treatment method using needle aspiration and post-drainage pressure (magnet and disc) on the hematoma site, similar to the technique used for the prevention of excessive scar formation.12 The most common etiology of auricular hematoma is blunt trauma to the external ear.9,13 The skin on the anterior aspect of the auricle is attached to the perichondrium and cartilage, and the majority of hematomas form on the anterior surface of the auricle.3 Therefore, effective treatment must focus on evenly distributed pressure over the anterior surface of the auricle. The collection is usually small due to the structure of the ear and the adherence of the skin to the cartilage beneath. Only a strong force, as occurs during the training of martial arts, can cause shearing of the different layers and create a collection. Recently, Vijendren et al9 conducted a multicenter retrospective observational study in the United Kingdom assessing the outcomes of various management strategies for auricular hematoma. They found that the management, attending practitioner, use of antibiotic coverage, and hospital admission had no impact on the rate of auricular hematoma recurrence, infection rates, and cosmesis. The only factor influencing the recurrence of the hematoma was the location of the drainage and the affected part of the ear.9 Some studies suggest that there is no superior method for the treatment of auricular hematomas.6,9 There is no doubt that in this case we had a good esthetic outcome that resulted in a minimal loss of training for the athlete. Furthermore, we have performed
Furthermore, we have performed the technique described in this case report on three other athletes, and this minimally invasive technique has less chance of infection because no cotton bolster is used.

Figure 1 (A) A total of 0.25 mL aspirated using a 30G insulin syringe; (B) hematoma seen in the scaphoid region, cancelling the delineation of the upper crus of the antihelix; (C) metal disc used to reduce the pressure exerted on the auricle; (D) metal disc (anterior) and magnet (posterior) in situ for the 3-day period; (E) 1 month postdrainage of the hematoma, with the crus of the antihelix clearly seen.

Conclusion

This case report presents a simple, minimally invasive, and cost-effective treatment for auricular hematomas.
The main advantage of this technique is its capacity to distribute pressure evenly, with the magnets eliminating the dead space on the anterior surface of the auricle. The use of magnets enables easy monitoring of the wound site, and no remarkable complications were reported. However, it should be noted that a single case cannot prove the efficacy of a method, and further work is required to define a sound non-invasive procedure including drainage.

Disclosure

The authors report no conflicts of interest in this work.

References
1. Sellami M, Ghorbel A. Traumatic auricular hematoma. Pan Afr Med J. 2017;26:148.
2. Schuller DE, Dankle SK, Martin M, Strauss RH. Auricular injury and the use of headgear in wrestlers. Arch Otolaryngol Head Neck Surg. 1989;115(6):714–717.
3. Rah YC, Park MH. Use of silastic sheets with mattress-fashion sutures for the treatment of auricular hematoma. Laryngoscope. 2015;125(3):730–732.
4. Malloy K, Stack A, Wolfson A, Wiley J. Assessment and Management of Auricular Hematoma and Cauliflower Ear. UpToDate; 2017. Available from: https://www.uptodate.com/contents/assessment-and-management-of-auricular-hematoma-and-cauliflower-ear. Accessed December 2, 2017.
5. Ohlsén L, Skoog T, Sohn SA. The pathogenesis of cauliflower ear: an experimental study in rabbits. Scand J Plast Reconstr Surg. 1975;9(1):34–39.
6. Jones S, Mahendran S. Interventions for acute auricular haematoma. Cochrane Database Syst Rev. 2004;(2):CD004166.
7. Yotsuyanagi T, Yamashita K, Urushidate S, Yokoi K, Sawada Y, Miyazaki S. Surgical correction of cauliflower ear. Br J Plast Surg. 2002;55(5):380–386.
8. Brickman K, Adams DZ, Akpunonu P, Adams SS, Zohn SF, Guinness M. Acute management of auricular hematoma: a novel approach and retrospective review. Clin J Sport Med. 2013;23(4):321–323.
9. Vijendren A, Coates M, Smith ME, et al. Management of pinna haematoma study (MAPHAES): a multicentre retrospective observational study. Clin Otolaryngol. 2017;42(6):1252–1258.
10.
Giles WC, Iverson KC, King JD, Hill FC, Woody EA, Bouknight AL. Incision and drainage followed by mattress suture repair of auricular hematoma. Laryngoscope. 2007;117(12):2097–2099.
11. Ho E, Jajeh S, Molony N. Treatment of pinna haematoma with compression using Leonard buttons. J Laryngol Otol. 2007;121(6):595–596.
12. Park TH, Chang CH. Early postoperative magnet application combined with hydrocolloid dressing for the treatment of earlobe keloids. Aesthetic Plast Surg. 2013;37(2):439–444.
13. Noormohammadpour P, Rostami M, Nourian R, et al. Association between hearing loss and cauliflower ear in wrestlers, a case control study employing hearing tests. Asian J Sports Med. 2015;6(2):e25786.

Adv. Sci. Res., 3, 119–122, 2009 www.adv-sci-res.net/3/119/2009/ © Author(s) 2009. This work is distributed under the Creative Commons Attribution 3.0 License.

Advances in Science and Research
8th EMS Annual Meeting and 7th European Conference on Applied Climatology 2008

COST725 – establishing a European phenological data platform for climatological applications: major results

E. Koch1, E. Dittmann2, W. Lipa1, A. Menzel3, J. Nekovar4, T. H. Sparks5, and A. J. H.
van Vliet6

1 Zentralanstalt für Meteorologie und Geodynamik, Vienna, Austria
2 Deutscher Wetterdienst, Offenbach, Germany
3 Technische Universität München, Center of Life and Food Sciences Weihenstephan, Freising, Germany
4 Czech Hydrometeorological Institute, Prague, Czech Republic
5 Poznan University of Life Sciences, Poznan, Poland
6 Environmental Systems Analysis Group, Wageningen University, The Netherlands

Received: 17 August 2009 – Revised: 7 September 2009 – Accepted: 10 September 2009 – Published: 13 October 2009

Abstract. COST – European COoperation in the field of Scientific and Technical Research – is the oldest and widest intergovernmental European network for cooperation in research. COST Action 725 (running from 2004 to 2009) aimed at, and succeeded in, establishing a European database of phenological observations, classifying the data according to one common system and using the data in scientific peer-reviewed papers. COST725 organized many workshops and conferences that helped to bring together not only the European, but also the global, phenological community. One of the highlights of COST725 was the boxed entry "Phenological responses to climate in Europe: the COST725 project" in the AR4 of IPCC in 2007. And last, but not least, although the action ended in April 2009, a follow-up will be launched in 2010 under EUMETNET/ECSN.

1 Introduction

In the 19th century phenological recording was a traditional activity, but interest in observation declined in the early 20th century. However, in recent decades, phenology has rapidly become an important tool for climate change impact studies, and COST725 contributed significantly to this renewed interest. The phenological responses of plants to climate warming are very strong in temperate and cool climate zones (Menzel et al., 2006). Furthermore, phenological change is relatively easy to identify, especially in comparison with changes to distribution, fecundity, population size, morphology, etc.
In Europe, where the strongest and most widespread tradition of phenological monitoring is found, widespread phenological observations, especially in plants, existed, but were collected and archived by many different organisations with different data policies, data formats, etc. Observation rules were only partly comparable, and differing lengths of time series made work on the data at a European-wide level quite difficult. This was the starting point for COST725 (Koch et al., 2005).

Correspondence to: E. Koch (e.koch@zamg.ac.at)

Published by Copernicus Publications.

2 Results

2.1 The history and present status of plant phenology in Europe (Nekovář et al., 2008)

This book, summarising the history and current status of phenology in Europe, will be a valuable reference for phenological researchers for many years to come; it is the first time that such an extensive summary has been compiled. It is also available for free download at http://topshare.wur.nl/cost725/70987. The book contains contributions from all 27 COST725 members plus Croatia, Bosnia and Herzegovina, Montenegro and the International Phenological Gardens. A chapter on old international phenological networks completes the presentation of the extant national European networks (Fig. 1).

In fall 2008 a workshop on the benefit of old phenological series was organized in Rome, with 22 oral presentations. The workshop proceedings are available in a special issue of The Italian Journal of Agrometeorology/Rivista Italiana di Agrometeorologia published in 2009.

Figure 1. EPHEMERIDES of the SOCIETATIS METEOROLOGICAE PALATINAE, first edition published in 1783 with phenological observations from Mannheim for 1781.
2.2 The common database

The outcome of a questionnaire on phenological metadata helped to build the structure of the common database of COST725. To make the entries in the database comparable, COST725 members agreed to apply the BBCH scale (Meier, 1997) to all national phenological observations across Europe, following the recommendations of Bruns and Vliet (2003). This was a very important step towards the unification and standardization of phenological observations. As a follow-up, COST725 members and invited experts developed "Guidelines for plant phenological observations". WCMP/WCP/WMO accepted these guidelines as a standard for phenological observations and published them at http://www.omm.urv.cat/documentation.html. In 2009 the web portal of COST725 was opened, with data download facilities and a graphical presentation of phenological data (Lipa et al., 2009; Fig. 2).

2.3 COST725 "meta-analysis", a major contribution to the IPCC AR4 WG II report

WG3 of COST725 published a meta-analysis of European phenological data (Menzel et al., 2006). In the IPCC AR4 WG II report, the COST725 study was one of the major contributions to the assessment of observed changes and responses in natural and managed systems, using 125 000 observational series of 542 plant and 19 animal species in 21 European countries for the period 1971–2000. The aggregation of the time series revealed a strong signal across Europe of changing spring and summer phenology: spring and summer exhibited a clear advance of 2.5 days/decade in Europe. Mean autumn trends were close to zero, but suggested more of a delay when the average trend per country was examined (1.3 days/decade). The patterns of observed changes in spring (leafing, flowering and animal phases) were spatially consistent and matched measured national warming across European countries; thus the phenological evidence quantitatively mirrors regional climate warming.
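The harmonization step just described — recoding each national network's phenophase definitions onto the common BBCH scale — amounts to an agreed lookup from (network, phase name) to a BBCH code. The sketch below illustrates the idea in Python; the mapping entries and records are invented for illustration and are not the actual COST725 tables.

```python
# Illustrative sketch of harmonizing national phenophase records onto the
# BBCH scale (Meier, 1997). The (country, phase) -> code mapping below is
# hypothetical; real COST725 mappings were agreed per network and species.
NATIONAL_TO_BBCH = {
    ("AT", "first leaves visible"): 11,   # assumed leaf-development code
    ("DE", "Blattentfaltung"): 11,
    ("CZ", "beginning of flowering"): 60,
    ("AT", "full flowering"): 65,
}

def to_bbch(country, phase_name):
    """Return the common BBCH code for a national phase, or None if unmapped."""
    return NATIONAL_TO_BBCH.get((country, phase_name))

# Records as (country, national phase name, year, observed day of year).
records = [("AT", "full flowering", 1999, 121),
           ("DE", "Blattentfaltung", 1999, 110)]
harmonized = [(c, to_bbch(c, p), year, doy)
              for c, p, year, doy in records
              if to_bbch(c, p) is not None]
```

Once every national record carries a BBCH code, series from different networks that describe the same developmental stage of the same species become directly comparable.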
About 80% of spring/summer phases were found to be advancing. The findings strongly support previous studies in Europe, confirming them as free from bias towards reporting global climate change impacts.

2.4 CR Climate Research, Special 19, Vol. 39, Number 3, 2009, with Open Access (free downloads at http://www.int-res.com/abstracts/cr/v39/n3/)

This special issue gives further examples of COST725 collaboration; nine papers are presented based on the work of 35 scientists from 12 countries, dealing e.g. with long data series from Europe, examining correlations between them and looking at temporal changes in both trends and temperature responsiveness. One paper examines changes in a number of potentially important summaries of daily temperature data, including thresholds and temperature accumulations. These show some clear changes in European temperatures, and maps of these put a spatial context on the change. Phenology in the Baltic countries of Latvia and Lithuania is the focus of a paper in which the influence of the North Atlantic Oscillation and precipitation is also examined. Another study focuses on phenological change in flowering within Europe's last remaining primeval lowland forest. Change is apparent even in this pristine environment, reminding us that the consequences of a changing climate extend beyond those areas directly modified by man. A study within the European Alps looks at changes in phenology with increasing altitude, and changes in phenology in relation to location, phase timing and human population density are examined in a subsequent paper. A paper on Bayesian methods examining phenological changes in 2600 European series confirmed that recent phenological change has not been linear but rather shows an abrupt change associated with rising temperature. Phenological trends were most marked in NW Europe. Higher-tech
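The trend figures quoted above (e.g. an advance of 2.5 days/decade in spring/summer phases) are linear trends of phase onset date against year, scaled per decade. A minimal sketch of that per-series calculation follows; the onset series is invented and constructed to advance by exactly 2.5 days/decade.

```python
# Minimal sketch: estimating a phenological trend in days/decade from a
# series of annual onset dates (day of year). The data below are invented.
def trend_days_per_decade(years, onset_doy):
    """Least-squares slope of onset day against year, scaled to a decade.
    Negative values mean the phase is advancing (getting earlier)."""
    n = len(years)
    mean_y = sum(years) / n
    mean_d = sum(onset_doy) / n
    sxy = sum((y - mean_y) * (d - mean_d) for y, d in zip(years, onset_doy))
    sxx = sum((y - mean_y) ** 2 for y in years)
    return 10.0 * sxy / sxx  # slope in days/year, times 10 years

years = list(range(1971, 2001))
# Invented series advancing by 0.25 days/year from day-of-year 120.
onset = [120.0 - 0.25 * (y - 1971) for y in years]
print(round(trend_days_per_decade(years, onset), 1))  # -2.5
```

In an aggregation such as the COST725 meta-analysis, a slope like this would be computed for each of the observational series and the distribution of slopes then summarized.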
applications of phenology are considered in two papers. The use of digital photography on flux towers and the relationship between phenology and gross primary productivity are the major concerns of a Swiss study: different parts of the forest canopy were identified for examination of the development of individual tree species. The use of satellite imagery on an 8 km grid examines the beginning and end of the growing season in Fennoscandia and compares these data with local records of Betula phenology: though changes in the growing season over the study area are heterogeneous, on average a 6 days/decade lengthening of the growing season was detected.

Figure 2. Graphic presentation of time series and some statistics from www.zamg.ac.at/cost725.

3 Conclusions

Over the last decade the barriers to phenological collaboration between European countries have been broken down. COST725 has made a substantial contribution to that collaboration. The importance of phenological recording has been made very clear to those countries/organisations considering cutting back their programmes. New schemes, for example in the Republic of Ireland and Sweden, have been inspired by this work, and a revival of former networks is underway in Poland and Hungary. The prospects for increased phenological research have never looked so encouraging.

Acknowledgements. The authors and all COST725 members want to express their thanks to COST, the European COoperation in the field of Scientific and Technical Research, and especially its scientific officers P. Nejedlik, E. P. de Rose and C. Petite.

Edited by: E. Koch
Reviewed by: two anonymous referees

References

Bruns, E. and Vliet, v. A. J. H.: Standardisation of phenological monitoring in Europe, Wageningen University and DWD, 2003.
Koch, E., Dittmann, E., Lipa, W., Menzel, A., Nekovar, J., and Vliet, v. A. J. H.: Cost725 establishing a European phenological data platform, Proceedings of the European Geosciences Union, General Assembly 2005, Vienna, Austria, 24–29 April 2005.

Lipa, W. (chair of WG2), Baksiene, E., Behrendt, J., Briede, A., Chuine, I., Defila, C., Donnelly, A., Habič, B., Jatczak, K., Koch, E., Langvall, O., Jakubíková, V., Måge, F., Meier, D., Mulder, S., Nekovář, J., Nordli, Ø., Romanovskaja, D., Seguin, B., Sušnik, A., Zach-Hermann, S., Zimmermann, K. (vice-chairperson, WG2), and Žust, A.: Evolution of a database for European phenological data, in: Final Scientific Report of COST 725 Establishing a European Phenological dataplatform for climatological applications, edited by: Koch, E. et al., ISBN 978-92-898-0048-8, 82 pp., COST office, 2009.

Meier, U.: Growth Stages of Mono- and Dicotyledonous Plants, Blackwell Wissenschaftsverlag, 1997.

Figure 3. Box showing results of COST725 data analyses of Menzel et al. (GCB 2006), published in AR4 WGII chapter 1, IPCC 2007 (Rosenzweig et al., 2007).

Menzel, A., Sparks, T. H., Estrella, N., Koch, E., Aasa, A., Ahas, R., Alm-Kübler, K., Bissolli, P., Braslavská, O., Briede, A., Chmielewski, F.-E., Crepinsek, Z., Curnel, Y., Dahl, Å., Defila, C., Donnelly, A., Filella, Y., Jatczak, K., Måge, F., Mestre, A., Nordli, Ø., Peñuelas, J., Pirinen, P., Remisová, V., Scheifinger, H., Striz, M., Susnik, A., Wielgolaski, F.-E., Vliet, A. v., Zach, S., and Zust, A.: European phenological response to climate change matches the warming pattern, Global Change Biol., 12, 1969–1976, 2006.
Nekovář, J., Koch, E., Kubin, E., Nejedlik, P., Sparks, T. H., and Wielgolaski, F. E. (Eds.): The history and current status of plant phenology in Europe, ISBN 978-951-40-2091-9, 182 pp., printed in Vammalan Kirjapaino Oy, 2008.

Rosenzweig, C., Casassa, G., Karoly, D. J., Imeson, A., Liu, C., Menzel, A., Rawlins, S., Root, T. L., Seguin, B., and Tryjanowski, P.: Assessment of observed changes and responses in natural and managed systems, in: Climate Change 2007: Impacts, Adaptation and Vulnerability, Contribution of Working Group II to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change, edited by: Parry, M. L., Canziani, O. F., Palutikof, J. P., van der Linden, P. J., and Hanson, C. E., Cambridge University Press, Cambridge, UK, 79–131, 2007.

ISSN 1413-3555 Rev. bras. fisioter., São Carlos, v. 11, n. 5, p. 411-17, Sept./Oct. 2007 ©Revista Brasileira de Fisioterapia

Reliability of photogrammetry in relation to goniometry for postural lower limb assessment

Sacco ICN, Alibert S, Queiroz BWC, Pripas D, Kieling I, Kimura AA, Sellmer AE, Malvestio RA & Sera MT

Human Motion and Posture Biomechanics Laboratory, Department of Physical Therapy, Speech Therapy and Occupational Therapy, School of Medicine, Universidade de São Paulo, São Paulo, SP - Brazil

Correspondence to: Isabel de C. N. Sacco, Departamento de Fisioterapia, Fonoaudiologia e Terapia Ocupacional, Faculdade de Medicina, Universidade de São Paulo, Rua Cipotânia, 51, Cidade Universitária, CEP 05360-000, São Paulo, SP – Brasil, e-mail: icnsacco@usp.br

Received: 08/03/2007 - Revised: 30/06/2007 - Accepted: 21/08/2007

ABSTRACT

Background: Postural assessment and joint range-of-motion measurements are fundamental in diagnosing, planning and following up the evolution and results of physical therapy treatment.
These can be done with the aid of goniometry – the most common method in physical therapy practice – and also, through technological advances, by means of photogrammetry. Objective: To investigate the parallel reliability of computerized photogrammetry, using two software tools (Corel Draw and Sapo), in relation to goniometry, in four angles of the lower limbs. Method: Twenty-six asymptomatic volunteers of both sexes, aged between 18 and 45 years, were studied. None of them had leg length discrepancy greater than 1 cm. The tibiotarsal angle (TT), knee flexion/extension angle (F/E), quadriceps angle (Q) and subtalar angle (S) were measured, first with a manual goniometer and then with digital photogrammetry by means of the Corel Draw v. 12 and Sapo v. 0.63 software. Results: There were no statistical differences between the three evaluation methods for the TT (p= 0.9991), S (p= 0.2159) and F/E (p= 0.4027) angles. However, for the Q angle there was a significant difference between goniometry and the software used in photogrammetry (p= 0.0067), although there was no significant difference between the two software tools (p= 0.9920), showing that the photogrammetry results were not influenced by the software used. Conclusion: In these healthy young subjects, computerized photogrammetry showed good parallel reliability in comparison with goniometry for all the angles evaluated except the Q angle. Therefore, in physical therapy practice, caution is needed when using Q angle measurements coming from different postural assessment methods.

Key words: goniometry; photogrammetry; posture.

INTRODUCTION

Human posture is the kinematic relationship between the positions of the body joints at a given moment.
In an ideal skeletal alignment, the muscles, joints and skeletal structures are expected to be in a state of dynamical balance, generating a minimal amount of effort and overload and leading to an optimal efficiency of the locomotive system1. Both postural assessment and objective joint range-of-motion measurement are of fundamental importance to diagnosis, planning and follow-up of the evolution and results of physical therapy treatment1,2. Although it is widely accepted that a balanced posture is important for good functioning of the musculoskeletal structures, postural assessment is a complex phenomenon and difficult to measure3. According to Iunes et al.3, this could explain why there are so few study results that can associate postural alterations with injuries or specific musculoskeletal dysfunctions. Therefore, it is important to establish trustworthy and reliable methods aimed at quantifying variables that aid postural assessment, contributing to an evidence-based development of physical therapy.

Manual goniometry is widely used in clinical physical therapy to assess range-of-motion4-6. For postural assessment, the goniometer can also be used to measure joint angles4-7. Some of the advantages of this methodology are the low cost of the instrument and the ease of measurement, which depends almost exclusively on the assessor's previous experience4. These advantages make manual goniometry very accessible to clinical physical therapy. As a measuring instrument for the upper and lower limb joints, the reliability of the universal goniometer is considered good to excellent, but for measuring the range-of-motion of the trunk its reliability is low8. Studies have shown high reliability for measures of shoulder and knee range-of-motion, when compared to methods of visual estimation and radiography, respectively9,10, and moderate reliability for ankle dorsiflexion, when compared to the reliability of the digital inclinometer11.
High correlations were also found between goniometric and radiographic measures10 and between goniometry and isokinetic dynamometry measures12 for knee range-of-motion, as well as good to excellent reproducibility for shoulder range-of-motion measurements9.

With the advent of technology, digital photogrammetry is now considered an alternative for quantitative assessment of asymmetries in postural assessment, as it can be used for linear and angular measurements3,13,14. According to the American Society for Photogrammetry and Remote Sensing, photogrammetry is the art, science and technology of obtaining reliable information on physical objects and their surroundings by recording, measuring and interpreting photographic images and radiant electromagnetic energy patterns, as well as other sources3,14. Photogrammetry allows the recording of subtle changes and the inter-relations between different parts of the human body that are difficult to measure or record by other means13,15.

The use of photogrammetry may facilitate the quantification of the morphological variables related to posture, providing more reliable data than those obtained through visual observation. This fact is important for the credibility of clinical physical therapy, as well as for the reliability of rehabilitation research3. Furthermore, the filing process in photogrammetry has the convenience of saving space as well as time accessing recorded files. Another advantage of digital photography is the possibility of conjugation with computerized measuring processes, resulting in computerized photogrammetry16.
Therefore, computerized photogrammetry is the combination of digital photography and software such as Corel Draw17-19, which allows the measurement of horizontal and vertical distances and angles for various ends, or software specifically developed for postural assessment such as Sapo (Software for Assessment of Posture), a nationally funded free software tool developed with a scientific basis, a database and web access20,21. The measurement of the range-of-motion of joints and other parts of the body in relation to one another must be reliable and standardized, allowing not only the comparison of phases and the assessment of treatment effectiveness, but also the publication of results for the benefit of other professionals2,16,22.

In a study by Iunes et al.3, computerized photogrammetry in postural assessments presented acceptable inter- and intra-assessor reliability (intraclass correlation index [ICC] between 0.71 and 0.79) for most of the assessed angular measures, being therefore recommended for asymmetry and postural deviation assessments. However, the repeatability of the method is low, and follow-up of pre- and post-treatment results may not be sufficiently reliable3. Zonnenberg et al.23 found high inter- and intra-assessor reliability for all the angular measures taken with photogrammetry but, as with Iunes et al.3, the method's repeatability was low. In contrast, Braun and Amundson24 found adequate reliability and repeatability for the postural assessment of the head and shoulders by means of photogrammetry. Other studies have also shown high reliability of photometric techniques for the assessment of shoulder and trunk range-of-motion25,26.

Rothstein27 distinguishes different kinds of reliability: intra-assessor, inter-assessor and parallel. Parallel reliability compares the values or results obtained by different instruments or tests at the same time.
It is used when the aim is to obtain alternative instruments similar to the reference instrument. To analyze the parallel reliability of a new instrument, it must be compared to a reference instrument which has been previously tested and considered reliable28,29. Given that manual goniometry is the most common method in physical therapy practice, with good to excellent reliability2,8, it is considered the reference instrument to which new methods and instruments, such as computerized photogrammetry, may be compared.

In light of that, the objective of the present study was to verify the parallel reliability of computerized photogrammetry in relation to goniometry for four angles of the lower limbs, by means of two software tools: Corel Draw v. 12 and Sapo v. 0.63. In this sense, we endeavored to study the features of measurements that may contribute to the development of an evidence-based physical therapy assessment process.

METHODS

This study had a cross-sectional observational research design and was approved by the local Ethics Committee (protocol number 1237/05). Twenty-six volunteers of both genders (9 men and 17 women) were studied, with a total of 52 lower limbs. The inclusion criteria were asymptomatic individuals between 18 and 45 years of age. Exclusion criteria were significant postural asymmetries, leg length discrepancy (greater than 1 cm) and episodes of pain in the lower limbs and lower back during the previous three months. The subjects were asked to sign a free and informed consent form, in accordance with Resolution 196/96 of the National Health Council.

All subjects completed an initial questionnaire, including personal details (name, age, phone numbers, gender and profession) and questions regarding the abovementioned exclusion criteria. The body mass and height of the subjects were measured.
The tibiotarsal angle (TT), knee flexion/extension angle (F/E), quadriceps angle (Q) and subtalar angle (S) were measured by the same assessor (Table 1) using manual goniometry22 and digital photogrammetry13. For all goniometric and photogrammetric measures, the individual stood on a bench (20 cm high x 40 cm long x 40 cm wide) placed 15 cm from a wall with a posture evaluation grid. Two plumb lines hung from the ceiling on each side of the positioning bench, past the feet of the individual. An ethyl vinyl acetate (EVA) rectangle (7 cm wide x 30 cm long) was placed between the feet to maintain the same position inter- and intra-subjects in all measurements.

To replicate the goniometry of clinical practice, measurements were taken without marking the anatomical points. This option may affect the study results, but marking anatomical points for goniometric measurement would be a deviation from the normal use of the instrument. Subjects wore bathing suits during both the goniometric and photogrammetric assessments.

Goniometry was performed with the subject in the stance position. The first angle to be measured was the TT, by placing the fulcrum of the goniometer on the lateral malleolus, the stationary arm projected toward the tuberosity of the distal diaphysis of the fifth metatarsus, and the moving arm toward the head of the fibula. Next, the F/E angle of the knee was measured with the fulcrum on the head of the fibula, the stationary arm over the lateral surface of the thigh, projected toward the greater trochanter of the femur, and the moving arm over the fibula, projected toward the lateral malleolus of the ankle. Both angles were measured first on the right lower limb and then on the left.
After these procedures, the Q angle was measured by placing the fulcrum of the goniometer over the center of the patella, the stationary arm along the femur, projected toward the anterior superior iliac spine (ASIS), and the moving arm over the tibial tuberosity. Finally, the S angle was measured with the fulcrum of the goniometer over the midpoint between both malleoli, the stationary arm over the lower third of the tibia, and the moving arm aligned with the calcaneus.

After goniometry, subjects were photographed in a private, well-lit, heated room with a non-reflective background. Anatomical points were located and marked bilaterally with red self-adhesive tags (0.9 cm in diameter) for subsequent angle calculation using the software. They were: center of the patella, tibial tuberosity and anterior superior iliac spine (ASIS) (anterior frontal plane); midpoint of the lower third of the leg, midpoint of the calcaneus, midpoint between the malleoli (posterior frontal plane); tuberosity of the distal diaphysis of the fifth metatarsus, lateral malleolus, head of the fibula and greater trochanter of the femur (sagittal plane).

Table 1. Description of the angles measured in goniometry and photogrammetry.
- Tibiotarsal angle (TT): fulcrum, lateral malleolus; stationary arm, toward the fifth metatarsal head; moving arm, toward the head of the fibula.
- Knee flexion/extension angle (F/E): fulcrum, head of the fibula; stationary arm, lateral thigh surface toward the greater trochanter of the femur; moving arm, along the fibula toward the lateral malleolus.
- Q angle*: fulcrum, center of the patella; stationary arm, along the femur toward the ASIS; moving arm, toward the tibial tuberosity.
- Subtalar angle (S): fulcrum, midpoint between the two malleoli; stationary arm, along the inferior third of the tibia; moving arm, aligned with the calcaneus.
* The value considered was 180 degrees minus the angle measured by the goniometer.

Reliability: photogrammetry and goniometry. Sacco ICN, Alibert S, Queiroz BWC, Pripas D, Kieling I, Kimura AA, et al. Rev Bras Fisioter.
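Once the anatomical points are digitised, each angle in Table 1 reduces to a planar angle at the fulcrum between the two arm landmarks. The sketch below illustrates that calculation; the function name and the pixel coordinates are hypothetical and are not part of SAPO or Corel Draw.

```python
import math

def angle_at(fulcrum, stationary, moving):
    """Planar angle (degrees) at the fulcrum between the segment to the
    stationary-arm landmark and the segment to the moving-arm landmark."""
    ax, ay = stationary[0] - fulcrum[0], stationary[1] - fulcrum[1]
    bx, by = moving[0] - fulcrum[0], moving[1] - fulcrum[1]
    dot = ax * bx + ay * by      # proportional to cos(theta)
    cross = ax * by - ay * bx    # proportional to sin(theta)
    return math.degrees(math.atan2(abs(cross), dot))

# Hypothetical digitised pixel coordinates for the tibiotarsal angle:
# lateral malleolus (fulcrum), fifth-metatarsal landmark, head of fibula.
tt = angle_at((412, 610), (520, 655), (430, 180))
```

For the Q angle the goniometer reading is subtracted from 180 degrees, whereas the software measures the angle between the digitised segments directly, as noted for Figure 2.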
The subjects were photographed in the right and left sagittal planes, and in the anterior and posterior frontal planes, with a 2-megapixel (1600 x 1200 pixels) digital camera parallel to the ground, at a distance of 3 m from the bench, mounted on a level tripod 70 cm high (at knee height) (Figure 1). In the sagittal plane, the subject had the elbows flexed at 90 degrees and, in the frontal plane, the arms were at the side of the body. The photogrammetric calculation of the angles in question was performed by means of two software programs: Corel Draw v. 12 and SAPO v. 0.63. The marked anatomical points were used to measure the angles with the axes and vertices described for the goniometry (Table 1), except for the Q angle, which was measured directly from the extension of the line between the center of the patella and the tibial tuberosity, without the need to subtract it from 180 degrees (Figure 2).

After collection, organization and verification of data normality by means of the Shapiro-Wilk test, we compared the calculated variables between the methods using repeated-measures ANOVA (alpha = 0.06) and Scheffé's post hoc test. Pearson's correlation was also applied to the methods in order to verify the strength of the relationship between them. The Pearson correlation was considered significant when the p value was less than 0.05. r values lower than 0.40 were considered low correlation; between 0.41 and 0.59, moderate; between 0.60 and 0.79, good; and above 0.80, high. Parallel reliability tends to be lower than intra-assessor reliability and instrument reliability, as it involves measures from different devices, usually with different scales[28]. In the present study, for example, the measuring scales of the goniometer and of Corel Draw were in whole degrees, while SAPO's was decimal.

RESULTS

The 26 subjects (9 males and 17 females) presented a mean age of 21.7 ± 4.9 years, mean body mass of 62.7 ± 13.8 kg and mean height of 168.1 ± 11.8 cm.
The TT angle (p = 0.9991), S angle (p = 0.2159) and F/E angle (p = 0.4027) were not statistically different among the three assessment methods (Table 2). The Q angle was significantly different between goniometry and the two software programs used in photogrammetry (p = 0.0067) (Table 2), although the values obtained with Corel Draw and SAPO did not differ from one another (p = 0.9920), showing that, for all the assessed angles, the choice of software did not influence the photogrammetry results. The TT angle presented significant correlation (p < 0.05) among all methods, with the correlation between goniometry and software near 50%, and of 85% between the two photogrammetry software programs. The F/E angle presented significant correlation between goniometry and photogrammetry with Corel Draw; however, a low, non-significant correlation was found between SAPO and goniometry, and between SAPO and Corel Draw. The Q angle presented good and significant correlation between Corel Draw and goniometry, but not between goniometry and SAPO, and a high correlation between the two programs. The S angle presented low, non-significant correlation between goniometry and the software programs. In contrast, the two photogrammetry methods showed a high and significant correlation for this angle (Table 3).

Figure 1. Digital photo standardization (camera resolution, distance between camera and subject, tripod height, and dimensions of the wooden stool and of the rectangular EVA device placed between the feet to standardize their position).
Figure 2. Illustration of the angle measurements using Corel Draw v. 12 (panels: tibiotarsal angle, knee flexion/extension angle, Q angle, subtalar angle).

DISCUSSION

Based on these results, it was clear that goniometry and computerized photogrammetry with both software programs (Corel Draw and SAPO) were very similar for the TT, S and F/E angles.
The exception was the Q angle, which was different between goniometry and photogrammetry, but similar for the two programs used in the photogrammetric calculations. Although the Q angle is widely used in clinical physical therapy, few studies have performed reliability tests of this measure[30]. It is believed that the unsatisfactory results found in the present study are due to the fact that, for this angle, the anatomical reference points are distant from each other, and the surrounding muscle mass and its arrangement hinder the positioning of the goniometer arms. Furthermore, the Q angle measure involves postures of more than one joint complex, including the pelvis, hip, femoral-patellar and femoral-tibial complexes, adding up to almost a dozen degrees of freedom. Therefore, postural alterations in each of the degrees of freedom of these three joint complexes (pelvis, hip and knee) may alter the measure of the Q angle both in goniometry and in photogrammetry. This finding is in accordance with studies that found low intra- and inter-assessor reliability (ICC between 0.14 and 0.37) for the clinical measure of the Q angle, as well as a low correlation between the clinical and the radiographic measure of this angle[30]. In contrast, one study found ICC results over 0.80 for intra-assessor reliability and above 0.60 for inter-assessor reliability of the Q angle measure[31]. It is also argued that lateralization of the patella can alter Q angle measures, leading to lower values for this angle. In this case, it was proposed that the medial-lateral orientation of the patella be measured to improve the reliability and clinical applicability of this measure[32]. As for the other angles, the anatomical points are either close to the goniometer arms or on a flat plane of the human body, so that the arms do not need to contour anatomical irregularities.
Thus, for the F/E angle, for example, the side of the thigh served as reference for the positioning of the goniometer arm, aligning it with the greater trochanter of the femur. These points can be positioned directly under a goniometer arm. With regard to the Pearson correlations, the two software programs used in photogrammetry had a high and significant correlation. This suggests that, proportionally, the measures vary in a similar fashion, are related to one another and are reliable in parallel terms. No studies were found in the literature comparing these two software programs in postural analysis for the assessed angles. Between goniometry and Corel Draw photogrammetry, moderate and good correlations were found, with the exception of the S angle, for which the correlation was low.

Table 2. Means and standard deviations of the angular variables measured with the three assessment methods, and p values.
Variable | Goniometry | Corel Draw | SAPO | p
Tibiotarsal angle (degrees) | 112.3 ± 4.0 | 112.4 ± 3.6 | 112.4 ± 3.4 | 0.9991
Subtalar angle (degrees) | 7.1 ± 3.7 | 8.1 ± 4.5 | 8.1 ± 4.4 | 0.2159
Q angle (degrees) | 15.0 ± 5.6 | 13.1 ± 7.8 | 13.1 ± 7.8 | 0.0068*
Knee flexion/extension angle (degrees) | 184.0 ± 4.7 | 181.7 ± 4.1 | 181.6 ± 4.3 | 0.4027
* Represents significant difference, p < 0.05.

Table 3. Pearson's correlations among the three assessment methods, with r and p values.
Variable | Comparison (2 x 2) | r | p | Correlation
Tibiotarsal angle | Gonio/Corel | 0.41 | 0.003 | moderate
Tibiotarsal angle | Gonio/SAPO | 0.47 | 0.001 | moderate
Tibiotarsal angle | Corel/SAPO | 0.85 | 0.000 | high
Knee flexion/extension angle | Gonio/Corel | 0.48 | 0.000 | moderate
Knee flexion/extension angle | Gonio/SAPO | 0.06 | 0.6721 | low
Knee flexion/extension angle | Corel/SAPO | 0.04 | 0.7791 | low
Q angle | Gonio/Corel | 0.65 | 0.0000 | good
Q angle | Gonio/SAPO | 0.06 | 0.6721 | low
Q angle | Corel/SAPO | 0.97 | 0.0000 | high
Subtalar angle | Gonio/Corel | -0.11 | 0.4682 | low
Subtalar angle | Gonio/SAPO | -0.09 | 0.5232 | low
Subtalar angle | Corel/SAPO | 0.83 | 0.000 | high
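The r values and strength labels reported in Table 3 can be reproduced with a few lines of code. This is a sketch using the study's own cut-offs; the boundary values 0.40 and 0.60, which the published bands leave unassigned, are given to the upper band here.

```python
import math

def pearson_r(x, y):
    """Pearson's product-moment correlation between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

def strength(r):
    """Classify |r| with the cut-offs adopted in the study."""
    r = abs(r)
    if r < 0.40:
        return "low"
    if r < 0.60:
        return "moderate"
    if r < 0.80:
        return "good"
    return "high"
```

For example, the tibiotarsal angle's Corel/SAPO value of r = 0.85 falls in the "high" band, matching the last column of Table 3.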
Between goniometry and SAPO photogrammetry, a low and non-significant correlation was verified. As previously described, the goniometry and Corel Draw scales are in whole degrees, whereas the SAPO scale is decimal, yielding different results that may be expressed by these low correlations.

CONCLUSION

The study showed that, for the angles assessed in young asymptomatic individuals, computerized photogrammetry is reliable in parallel terms to goniometry, except for the Q angle. In addition, the measurements taken with photogrammetry were similar regardless of the software used, which did not interfere in the assessments. Therefore, in clinical physical therapy, caution should be used with Q angle measurements derived from different methods of postural assessment.

REFERENCES

1. Kendall FP, McCreary EK, Provance PG. Músculos – provas e funções. 4ª ed. São Paulo: Editora Manole; 1995.
2. Harrelson GL, Swann E. Medidas em reabilitação. In: Andrews JR, Harrelson GL, Wilk KE. Reabilitação física do atleta. 3ª ed. Rio de Janeiro: Elsevier; 2005. p. 105-34.
3. Iunes DH, Castro FA, Salgado HS, Moura IC, Oliveira AS, Bevilaqua-Grossi D. Confiabilidade intra e interexaminadores e repetibilidade da avaliação postural pela fotogrametria. Rev Bras Fisioter. 2005;9(3):327-34.
4. Venturini C, André A, Aguilar BP, Giacomelli B. Confiabilidade de dois métodos de avaliação da amplitude de movimento ativa de dorsiflexão do tornozelo em indivíduos saudáveis. Acta Fisiatr. 2006;13(1):41-5.
5. Brosseau L, Tousignant M, Budd J, Chartier N, Duciaume L, Plamondon S, et al. Intratester and intertester reliability and criterion validity of the parallelogram and universal goniometers for active knee flexion in healthy subjects. Physiother Res Int. 1997;2(3):150-66.
6. Sabari JS, Maltzev I, Lubarsky D, Liszkay E, Homel R. Goniometric assessment of shoulder range of motion: comparison testing in supine and sitting positions. Arch Phys Med Rehabil. 1998;79:647-51.
7. Tomsich DA, Nitz AJ, Threlkeld AJ, Shapiro R. Patellofemoral alignment: reliability. J Orthop Phys Ther. 1996;23(3):200-8.
8. Amado-João SM. Avaliação articular. In: Amado-João SM. Métodos de avaliação clínica e funcional em fisioterapia. Rio de Janeiro: Guanabara Koogan; 2006. p. 39-50.
9. Andrade JA, Leite VM, Teixeira-Salmela LF, Araújo PMP, Juliano Y. Estudo comparativo entre os métodos de estimativa visual e goniometria para avaliação das amplitudes de movimento da articulação do ombro. Acta Fisiatr. 2003;10(1):12-6.
10. Gogia PP, Braatz JH, Rose SJ, Norton BJ. Reliability and validity of goniometric measurements at the knee. Phys Ther. 1987;67:192-5.
11. Venturini C, Andrade A, Aguilar BP, Giacomelli B. Confiabilidade de dois métodos de avaliação da amplitude de movimento ativa de dorsiflexão do tornozelo em indivíduos saudáveis. Acta Fisiatr. 2006;13(1):41-5.
12. Batista LH, Camargo PR, Aiello GV, Oishi J, Salvini TF. Avaliação da amplitude articular do joelho: correlação entre as medidas realizadas com o goniômetro universal e no dinamômetro isocinético. Rev Bras Fisioter. 2006;10(2):193-8.
13. Watson AWS. Procedure for the production of high quality photographs suitable for the recording and evaluation of posture. Rev Fisioter Univ São Paulo. 1998;5(1):20-6.
14. ASPRS – American Society for Photogrammetry and Remote Sensing. What is ASPRS – definition [homepage on the Internet]. Bethesda: American Society for Photogrammetry and Remote Sensing; 2000 [updated 2006 Nov 16; accessed 2006 Oct 24]. Available at: http://www.asprs.org/society/about.html.
15. Cowan DN, Jones BH, Frykman PN, Polly DW Jr, Harman EA, Rosenstein RM, et al. Lower limb morphology and risk of overuse injury among male infantry trainees. Med Sci Sports Exerc. 1996;28(8):945-52.
16. Watson AW, Mac Donncha C. A reliable technique for the assessment of posture: assessment criteria for aspects of posture. J Sports Med Phys Fitness. 2000;40:260-70.
17. Mattos F, Rodrigues AL. Corel Draw 11. Rio de Janeiro: Brasport; 2003.
18. Sacco ICN, Morioka EH, Gomes AA, Sartor CD, Noguera GC, Onodera AN, et al. Implicações da antropometria para posturas sentadas em automóvel – estudos de caso. Rev Fisioter Univ São Paulo. 2003;10(1):34-42.
19. Sacco ICN, Andrade MS, Souza PS, Nisiyama M, Cantuária AL, Maeda FYI, et al. Método Pilates em revista: aspectos biomecânicos de movimentos específicos para reestruturação postural – estudos de caso. Rev Bras Ciên e Mov. 2005;13(4):65-78.
20. Portal do projeto Software para Avaliação Postural [homepage on the Internet]. São Paulo: Incubadora Virtual Fapesp; 2004 [updated 2007 Jan 06; accessed 2006 Oct 24]. Available at: http://sapo.incubadora.fapesp.br/portal.
21. Ferreira EAG. Postura e controle postural: desenvolvimento e aplicação de método quantitativo de avaliação postural [tese]. São Paulo: Faculdade de Medicina da Universidade de São Paulo; 2005.
22. Marques AP. Manual de goniometria. 2ª ed. São Paulo: Manole; 2003.
23. Zonnenberg AJJ, Maanen V, Elvers JWH, Oostendorp RAB. Intra/interrater reliability of measurements on body posture photographs. J Craniomand Pract. 1996;14(4):326-31.
24. Braun BL, Amundson LR. Quantitative assessment of head and shoulder posture. Arch Phys Med Rehabil. 1989;70(4):322-9.
25. Sato TO, Vieira ER, Gil Coury H. Análise da confiabilidade de técnicas fotogramétricas para medir a flexão anterior do tronco. Rev Bras Fisioter. 2003;7(1):53-99.
26. Hayes K, Walton JR, Szomor ZL, Murrell GAC. Reliability of five methods for assessing shoulder range of motion. Aust J Physiother. 2001;47:289-94.
27. Rothstein JM. Measurement and clinical practice: theory and application. In: Rothstein JM. Measurement in physical therapy. New York: Churchill Livingstone; 1985.
28. Gadotti IC, Vieira ER, Magee DJ. Importance and clarification of measurement properties in rehabilitation. Rev Bras Fisioter. 2006;10(2):137-46.
29. Rodrigues FL, Vieira ER, Benze BG, Coury HJCG. Comparação entre o duplo flexímetro e o eletrogoniômetro durante o movimento de flexão anterior da coluna lombar. Rev Bras Fisioter. 2003;7(3):269-74.
30. Greene CC, Edwards TB, Wade MR, Carson EW. Reliability of the quadriceps angle measurement. Am J Knee Surg. 2001;14(2):97-103.
31. Caylor D, Fites R, Worrell TW. The relationship between quadriceps angle and anterior knee pain syndrome. J Orthop Sports Phys Ther. 1993;17(1):11-6.
32. Herrington L, Nester C. Q-angle undervalued? The relationship between Q angle and medio-lateral position of the patella. Clin Biomech. 2004;19(10):1070-3.

Digital dental photography. Part 9: post-image capture processing
I. Ahmad

Having successfully taken a digital image, the next step is deciding what to do with it. Should it be cropped, correctly orientated, manipulated, compressed, scaled, sharpened, archived (and if so, which file format is the most suitable), or even discarded? The premise of this part of our series is to answer these and other questions related to post-production of a digital image.

With regard to manipulation, it is important to remember that dental images are dento-legal documents. Therefore, manipulation should be kept to a minimum, ensuring that the original image is not altered to an extent that hides pathology or alters the clinical situation to camouflage what was present in the oral cavity. Current photo-editing software allows an image to be manipulated beyond recognition, and while this is acceptable for dramatic or artistic purposes, it is inappropriate for dental imagery. Altering exposure, orientation or cropping extraneous parts is acceptable, and indeed desirable, to visualise the clinical situation as it appeared at the time of taking the picture. Hence, this chapter will only cover manipulation that is deemed ethically acceptable for dentistry. Another important point worth mentioning is that image quality is directly related to the degree of manipulation: the greater the manipulation, the poorer the image quality. Therefore, it is crucial to keep alterations to a minimum by ensuring that the original image capture was as perfect as possible regarding exposure, magnification, orientation and composition. Furthermore, photo-editing software is complicated, and applications often require training and are very time intensive.

INITIAL PROCESSING

The physical transfer of images from the camera into computer-based software is by one of the following methods:
1. USB-2 cable connection
2. FireWire® 400 or FireWire® 800 cable connections
3. Wireless connection.
The method of transfer depends on the camera and computer ports.
A USB-2 cable is sufficiently fast for small files, but extremely slow for larger files. In these circumstances, FireWire® 400 or the faster FireWire® 800 cables are the ideal choice. The latest transfer mode is wireless connection, which eliminates cables but at present is relatively slow. In some instances a print may be immediately required, and in these circumstances the camera or its memory card can be directly connected via a USB cable, or inserted into an office printer, respectively.

It is debatable whether a Windows™- or Macintosh™-based computer is more appropriate for image management and manipulation. Windows-based PCs have the lion's share of the market, while Apple® Mac™ machines are more eclectic. Previously, the Windows platform was relatively slow in handling graphics, being more suited to applications such as word processing, accounting and databases. However, the latest version, Windows Vista™, promises to rival Apple's OS X operating system, Snow Leopard, with regard to image management.
There is no doubt that professional photographers, graphic designers and printing houses have a penchant for Apple Macs, due to their superior capabilities for handling, manipulating and storing image files. For this reason, if budgets allow and one is taking large volumes of dental images, the Apple Mac is the ideal choice. On the other hand, for small-volume documentation a Windows-based PC is adequate. Finally, compatibility was previously an issue between Windows and Macintosh platforms; however, the newer versions of both operating systems allow free exchange of files, without the need for conversion or filters.

The first thing is to decide whether an image is useable or should be discarded.

1General Dental Practitioner, The Ridgeway Dental Surgery, 173 The Ridgeway, North Harrow, Middlesex, HA2 7DF. Correspondence to: Irfan Ahmad. Email: iahmadbds@aol.com. www.IrfanAhmadTRDS.co.uk. Refereed Paper. Accepted 15 November 2008. DOI: 10.1038/sj.bdj.2009.763. © British Dental Journal 2009; 207: 203-209.

IN BRIEF
• Editing an image causes deterioration in quality, is complicated, time consuming, onerous and frustrating.
• Ethically acceptable alterations include correcting exposure, orientation, laterally inverting and cropping an image.
• The most popular file formats to consider are RAW data, TIFF and JPEG.
• The most expedient and eco-friendly transfer of images is via the Internet.

FUNDAMENTALS OF DIGITAL DENTAL PHOTOGRAPHY
1. Digital dental photography: an overview
2. Purposes and uses
3. Principles of digital photography
4. Choosing a camera and accessories
5. Lighting
6. Camera settings
7. Extra-oral set-ups
8. Intra-oral set-ups
9. Post-image capture processing
10. Printing, publishing and presentations

BRITISH DENTAL JOURNAL VOLUME 207 NO. 5 SEP 12 2009 © 2009 Macmillan Publishers Limited. All rights reserved.

While image manipulation can correct many failings in technique, it cannot perform miracles.
If an image is grossly under- or over-exposed, it is probably prudent to delete it and start again. Also, images that do not reveal the items (or details of items) that were sought should meet a similar demise. For example, if the purpose of the intended image was to show translucency or characterisations within a tooth but it is over-exposed, no amount of manipulation will reveal the missing translucency or characterisations. In a similar vein, if an image is out of focus, or lacking detail in specific areas due to a small dynamic range, no software can be expected to put back or replace something that was lacking or absent at the beginning. Furthermore, excess manipulation is time consuming and causes severe deterioration in image quality. It may therefore be expedient and easier to take another picture, rather than labouring with software to achieve the impossible and ending up with a result that lacks quality and will ultimately be of little use.

If not already performed by camera settings, the first item of post-image production is to ensure the correct white balance for the lighting conditions prevailing at the time of exposure. Setting the white balance was discussed in Part 6[1] and can be calculated automatically by camera electronics, manually inputted, or calibrated with an 18% grey card. If the last option was chosen, a single image or multiple images are selected in the chosen software (either proprietary or Adobe® PhotoShop), and the grey balance settings are recalled for calibrating the new images (Figs 1-2).

Correcting orientation, exposure, laterally inverting and cropping
A major difficulty with dental photography is framing the picture with the correct orientation while ensuring that extraneous items such as saliva ejectors, cheek retractors and cotton wool rolls are invisible. This was of course challenging with film photography, since little post-production was possible.
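With digital files, the corrections named in the heading above reduce to simple pixel-array operations. A minimal illustrative sketch on a row-major grid of pixel values (this is not PhotoShop's implementation, only the underlying idea):

```python
def rotate_180(img):
    """Upright an upside-down image: reverse the rows, then each row."""
    return [row[::-1] for row in img[::-1]]

def flip_horizontal(img):
    """Laterally invert an image taken via an intra-oral mirror."""
    return [row[::-1] for row in img]

def crop(img, top, left, height, width):
    """Keep only the region of interest, discarding extraneous items."""
    return [row[left:left + width] for row in img[top:top + height]]
```

These operations only rearrange or discard existing pixels, which is why they are among the few manipulations that do not degrade the retained image data.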
However, with digital photography, changing orientation, cropping or altering the exposure (within limits) is a relatively simple task. The latter can be performed in-camera, with camera-specific software, or in photo-editing software such as Adobe® PhotoShop.

Fig. 1 The raw image from the camera without white balance calibration (notice 'Gray Balance Off' on the menu tab)
Fig. 2 The same image as Figure 1 with the white balance corrected by recalling the grey card calibration (notice 'Gray Balance On' on the menu tab)
Fig. 3 Initial image from digital camera

The order in which correcting exposure, orientating, laterally inverting or cropping is performed is irrelevant. Furthermore, every type of software has its own commands and methods for executing the above corrections. However, for illustration purposes, the image in Figure 3 was edited in Adobe® PhotoShop (Figs 4-7) and the final result is shown in Figure 8. Images which are taken using an intra-oral mirror are laterally inverted and require subsequent correction, while those taken without will not require the additional correction shown in Figure 7.

Scaling
Scaling or enlarging an image may often be necessary, for example after cropping or for concentrating on specific detail. Any scaling causes image deterioration, and this is the major reason that a high quality and quantity of pixels is essential for recording as much detail as possible at the outset. The mathematical enlargement of an image is termed interpolation, and the resulting image quality after enlargement depends on the algorithms used. PhotoShop offers a variety of algorithms for interpolation, including nearest neighbour, bi-linear, bicubic, fractals and reduction.
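Two of those algorithms can be sketched for a greyscale grid. Real implementations differ in detail, but the contrast between copying the closest source pixel and weighting the four surrounding pixels is the essential point:

```python
def scale_nearest(img, factor):
    """Nearest-neighbour interpolation: each output pixel copies the
    closest source pixel; fast, but blocky at large factors."""
    h, w = len(img), len(img[0])
    return [[img[y // factor][x // factor] for x in range(w * factor)]
            for y in range(h * factor)]

def sample_bilinear(img, y, x):
    """Bi-linear interpolation: weight the four surrounding source
    pixels by their fractional distances from (y, x)."""
    y0, x0 = int(y), int(x)
    y1, x1 = min(y0 + 1, len(img) - 1), min(x0 + 1, len(img[0]) - 1)
    dy, dx = y - y0, x - x0
    top = img[y0][x0] * (1 - dx) + img[y0][x1] * dx
    bottom = img[y1][x0] * (1 - dx) + img[y1][x1] * dx
    return top * (1 - dy) + bottom * dy
```

Bi-linear sampling smooths the blockiness of nearest-neighbour copying at the cost of slightly softer edges, which is why sharpening, if needed, should come after scaling.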
For modest enlargements, nearest neighbour or bi-linear is adequate, but for larger scaling bicubic, fractals and reduction are better choices. However, scaling is not limitless: if the original image is enlarged excessively, pixelation will occur and the scaling process for enlarging specific details will obviously be futile. Therefore, marked enlargement to the extent where an image breaks down is pointless, and defaces the image beyond recognition.

Ideally, an image should require minimal or no sharpening, and if blurring is pronounced it is better to take another picture that is sharply focused. An important point to remember is that if sharpening is necessary it should always follow scaling, not precede it. If sharpening is performed before enlarging an image, any artefacts, such as contour fringes at the edges of objects, will also be enlarged and become more apparent. Furthermore, excess sharpening introduces grain and noise, which may defeat the initial objective of sharpening the image.

Fig. 4 Cropping procedure: Step 1, select the 'Crop Tool'; Step 2, draw a marquee around the desired area; Step 3, press the 'Enter' key or click 'OK'
Fig. 5 Exposure correction procedure: adjust levels until the desired exposure is obtained

FILE FORMATS
An original in digital photography is only possible at the operating system level, ie within the initial proprietary software that captured the image. Once the image data is opened and subsequently saved in another software package, such as photo-editing, graphics, desktop publishing or presentation applications, the original data is altered and irretrievably lost. Alterations include change of colour space and reduction of colour depth or dynamic range.
Although the deterioration is negligible and rarely perceptible on a computer monitor, vast numbers of manipulations severely affect image quality if a section of the image is enlarged. Therefore, before opening the image in another type of software, the original should be stored for subsequent retrieval. Furthermore, the way in which the data is archived is essential to reduce alterations, including choosing an appropriate file format.

At present there is no file format that is suitable for all circumstances, and therefore several types are needed depending on the intended use of the image. The basic difference between formats is whether the data is compressed or non-compressed and, if compression is applied, whether it is lossless or lossy. The choice of file format is as perplexing as choosing a digital camera. Some examples of image file formats are RAW, PSD, GIF, TIFF, JPEG, PNG, EPS, LZW, DCS, PICT, Bitmap, etc (Fig. 9). In addition, each camera manufacturer and software developer has its own philosophy regarding the type of image file that best serves digital image data. As yet there is no industry standard, and this adds to the confusion during decision-making. However, to simplify matters, most DSLRs offer the option of selecting three file formats: RAW data, TIFF and JPEG (in various varieties with different quality levels) (Fig. 10). Table 1 summarises their salient features and differences.

Fig. 6 Orientation correction procedure: Step 1, select 'Image: Rotate: 180°' from the menu bar to upright the image; Step 2, select 'Image: Rotate: Free Rotate Layer' to finely rotate the image to horizontal; Step 3, crop if necessary to remove superfluous parts of the image
Fig. 7 Procedure for laterally inverting the image: select 'Image: Rotate: Flip Horizontal' from the menu bar
Fig. 8 Final image after cropping, correcting exposure, orientation and laterally inverting (compare with Fig. 3)

Proprietary RAW data
Many camera manufacturers have developed their own file formats to capture the initial image as raw data. These files are camera specific and cannot be opened in any software except that provided by the camera company. The objective is to capture as pure a digital signal as possible before being processed in the
The objective is to capture as pure a digital signal as possible before being processed in the Step 1: Select ‘Image: Rotate: 180˚’from menu bar to upright image Step 3: Crop if necessary, to remove superfluous pats of the image Step 2: Select ‘Image: Rotate: Free Rotate Layer’ to finely rotate image to horizontal Fig. 6 Orientation: procedure for correcting orientation Select ‘Image: Rotate: Flip Horizontal’ from menu bar Fig. 7 Procedure for laterally inverting the image Fig. 8 Final image after cropping, correcting exposure, orientation and laterally inverting (compare with Fig. 3) 206 BRITISH DENTAL JOURNAL VOLUME 207 NO. 5 SEP 12 2009 © 2009 Macmillan Publishers Limited. All rights reserved. PRACTICE before the memory card is full, and the image is automatically calibrated for white balance, etc. However, the temptation should be resisted because the resulting image is of poor quality and unsuitable for archiving. Furthermore, it is better practice to capture the initial image as either a raw or TIFF fi le which is subsequently easily converted to a JPEG for dissemination. Unfortunately, conversion in the reverse direction, that is, JPEG to TIFF will not regain the lost image quality. PDF (Personal Document Files) Similar to JPEG fi les, PDF fi les are lossy compressed fi les that are relatively small in size and therefore ideal for electronic transmission. In addition, PDF fi les also capture software and exported into a format that is recognisable by popular manipulation software. TIFF (Tagged Image File Format) If there is an industry standard for image fi le format, then TIFF is a strong contender. Invented by the Aldus Corporation, TIFF is as ‘generic’ as an image format can be, and can be opened in nearly all types of manipulation software for data exchange, for example Adobe® Photoshop, Adobe® InDesign, PageMaker™ or Quark Xpress™. 
The most endearing feature of this for- mat is that the compression is lossless in the LZW mode, named after its designers Lempel-Ziv and Welch, and hence there is no detail loss from the raw proprietary precursor fi le. TIFF can be selected on camera menus as the choice for captur- ing an image. The advantage is that the camera software performs the necessary white balance and other calibrations to the image, which is ready to be opened in any software of choice. However, to be absolutely pedantic the data is not as pure as the proprietary raw format since the in- camera software is rarely as sophisticated as the proprietary capture software. The fi les generated in the TIFF mode are large, ranging from a few megabytes to over 200 MB, depending on camera specifi ca- tions. Consequently, large capacity storage media are an essential requirement. JPEG (Joint Photographic Experts Group) Unlike TIFF, when JPEG fi les are opened and saved they suffer from severe lossy compression, with loss of detail especially of diagonal lines in an image (Fig. 11). Various levels of JPEG compression are possible, ranging from Level 10 to Level 1. The higher the level, the larger and higher the quality of the fi le. A newer version is available called JPEG 2000 with less dete- rioration in image quality than its pred- ecessor. JPEG fi les can be as small as a few kilobytes to several megabytes depending on the level of compression. Because of their smaller size, they are ideal for Internet use, particularly for email attachments for communication between members of the dental fraternity. Some cameras also offer the facility to choose JPEG as the initial captured image. The temptation is that the fi le is small so more pictures can be taken A selection of image file formats Fig. 9 A selection of image fi le formats Dental.tif Dental.jpg Dental.pdf Dental.png Fig. 
10 Icons of popular fi le formats, TIFF, JPEG 2000, PDF and PNG 24 Table 1 Comparison of popular fi le formats RAW data TIFF JPEG (varieties) Colour mode RGB RGB, CMKY, Lab RGB, CMYK Colour depth/channel Up to 16 bit/channel Up to 16 bit/channel 8 bit/channel ICC-profi le - Yes Yes Compression No Optional Lossy Alpha channels No Yes No Web-suitable No No Yes BRITISH DENTAL JOURNAL VOLUME 207 NO. 5 SEP 12 2009 207 © 2009 Macmillan Publishers Limited. All rights reserved. PRACTICE have different levels of quality, depend- ing on the chosen parameters before the fi le is exported. The advantage of PDF fi les is that they can contain layouts with text, vector drawings and photo- graphic images for distribution via the Internet. These fi les are very helpful for assessing layout for publishing and can also be used for communication with a dental technician by writing text over images, for example marking parts of a restoration that requires adjustment at the try-in stage. Most graphic software allows fi les to be exported to a PDF for- mat, and the email recipient can view the fi les by installing Adobe® Acrobat, which is freely available as a download from the Internet. PNG (Portable Network Graphics) PNG fi le format is a development of the GIF (Graphic Interchangeable Format) fi le, intended primarily for use on the Internet and for building websites. The GIF format has its origins from the very beginnings of the Internet, but has the major draw- back of yielding poor image quality. On the other hand, PNG fi les have addressed the quality issue as well as delivering faster Interest access. The main reason for this is that PNGs can support up to 24 bit (PGN 24) data, and are therefore capable of retaining quality for developing web pages. EPS (Encapsulated PostScript) EPS fi les are worth mentioning because they are primarily used for pre-press stages of the printing process (to be discussed further in Part 10). 
These are vector-orientated files (text and drawings) but can also store pixel-based images with lossless compression. Therefore, this format is ideal for publishing that combines text and images, such as practice stationery, brochures and leaflets. After designing the layout in a graphics application, the file is converted into EPS, ready for transmission to a printing house or an office laser printer that supports the Adobe® PostScript™ printer language.

IMAGE STORAGE AND TRANSFER

The final stage of processing is to store and transfer the image for safekeeping and intended use, respectively. As previously mentioned throughout this series, dental...

[Fig. 11: Comparison of an image saved as TIFF, PNG 24 and JPEG, with severe deterioration in image quality in the latter format. TIFF retains image quality (size = 30.9 MB); PNG 24 retains image quality for building websites (size = 9.1 MB); JPEG results in deterioration of image quality (size = 812 KB).]

...for future retrieval. The best way to store the original is in an unadulterated manner, either in a raw data or a TIFF file format. The following storage protocols are advisable:

1. Create a folder with the patient's name
2. Within the folder, create sub-folders according to when the series of images were taken, for example 'Pre-operative status', 'Oral lesions', 'Tooth preparation', 'Temporisation', etc
3. Name each file with a unique name, for example the date of the image and specific views such as facial, dento-facial, occlusal, etc
4. In addition to a unique name, also note whether the file is simply RGB or processed RGB (ie with exposure and orientation correction, etc) before archiving.

Many types of image data management and database software are available which expedite the above procedures, making retrieval easier and more efficient.
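The four-step protocol above lends itself to a simple script. The following is a minimal sketch in Python; the root folder name, the helper functions, and the exact file-naming pattern are illustrative assumptions rather than part of the original protocol:

```python
from pathlib import Path

# Illustrative sub-folder names taken from step 2 of the protocol above;
# adjust to the practice's own conventions.
SUBFOLDERS = ["Pre-operative status", "Oral lesions",
              "Tooth preparation", "Temporisation"]

def make_patient_archive(root, patient_name):
    """Create the folder hierarchy recommended in steps 1-2."""
    patient_dir = Path(root) / patient_name
    for sub in SUBFOLDERS:
        (patient_dir / sub).mkdir(parents=True, exist_ok=True)
    return patient_dir

def image_filename(date, view, processed=False, ext="tif"):
    """Build a unique file name per steps 3-4: date of image, specific
    view, and a tag noting whether the RGB file has been processed."""
    tag = "processed-RGB" if processed else "RGB"
    return f"{date}_{view}_{tag}.{ext}"

archive = make_patient_archive("DentalImages", "Smith_John")
print(image_filename("2009-09-12", "facial"))          # 2009-09-12_facial_RGB.tif
print(image_filename("2009-09-12", "occlusal", True))  # 2009-09-12_occlusal_processed-RGB.tif
```

A script like this merely enforces consistency; the dedicated image database software mentioned above adds search and retrieval on top of the same folder discipline.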
High quality images demand huge hardware storage capabilities. As well as fixed storage devices such as computer hard drives, many portable options are available such as CD, DVD, flash sticks and memory cards (Fig. 12). Each of these media should be stored in different locations and, if necessary, periodically updated and checked to verify the stored data. Before an image file is transferred, the intended use must be defined. The use of the picture will determine the type of export file required and the physical method of transfer. Table 2 summarises the intended uses of the images and the ideal file formats and modes of transfer.

1. Ahmad I. Digital dental photography. Part 6: camera settings. Br Dent J 2009; 207: 63–69.

...images are sensitive data and stringent practice protocols must be in place for their storage and transfer to prevent inadvertent loss. Before an image is transferred, the original file must first be securely archived...

Table 2  Image transfer guidelines

Intended use                       File                                             Method of transfer
Communication between colleagues   JPEG                                             Email attachments
Layout approval for stationery,    PDF                                              Email attachments
  brochures
Web publishing                     PNG, JPEG                                        Internet, CD, DVD, flash drives
Print publishing                   TIFF, EPS                                        CD, DVD, flash drives or high speed Internet transfer
Presentations                      High quality JPEG, TIFF                          Via DVI or VGA ports from computer to projector
Laser or inkjet office printing    High quality JPEG, high quality PDF, TIFF, EPS   USB or wireless connection to printer

[Fig. 12: A selection of portable storage media]

Digital dental photography.
Part 9: post-image capture processing

Main topics:
Initial processing
Correcting orientation, exposure, laterally inverting and cropping
Scaling
File formats: proprietary RAW data; TIFF (Tagged Image File Format); JPEG (Joint Photographic Experts Group); PDF (Portable Document Format); PNG (Portable Network Graphics); EPS (Encapsulated PostScript)
Image storage and transfer
References
Journal of Archival Organization, 10:69–83, 2012
Copyright © Taylor & Francis Group, LLC
ISSN: 1533-2748 print / 1533-2756 online
DOI: 10.1080/15332748.2012.681266

Moving the Archivist Closer to the Creator: Implementing Integrated Archival Policies for Born Digital Photography at Colleges and Universities

BRIAN KEOUGH and MARK WOLFE
University at Albany, State University of New York, Albany, New York, USA

This article discusses integrated approaches to the management and preservation of born digital photography. It examines the changing practices among photographers, and the needed relationships between the photographers using digital technology and the archivists responsible for acquiring their born digital images. Special consideration is given to technical issues surrounding preservation and access of image formats. It explores how integrated policies can enhance the success of managing born digital photographs in an academic setting and illustrates the benefits and challenges to acquisition, description, and dissemination of born digital photographs. It advocates for the archivist's active involvement in the photographer's image management practices to improve the acquisition and preservation of images.
KEYWORDS: digital photography, born digital records, raw image format, academic archives

INTRODUCTION

College and university archives have historically played a crucial role in preserving the photographic records of their institutions. Analog film negatives, contact sheets, and prints were created for administrative and publicity purposes, and then were traditionally transferred by campus photographers to the archives. The role of the archivist in collecting and preserving campus photography was well established and understood in our profession.

Address correspondence to Brian Keough, M.E. Grenander Department of Special Collections and Archives, Science Library 352, University at Albany, SUNY, 1400 Washington Avenue, Albany, NY 12222. E-mail: bkeough@albany.edu

However, as photographers transition from analog film to digital formats, college and university photographs from the 1990s may paradoxically be in greater danger than photographs from the 1890s. The shift from analog to digital has been a disruptive force that puts digital assets at risk. The ease in shooting images that digital photography affords has dramatically increased the number of images of enduring value. Growing file sizes, as well as the numerous proprietary formats created by digital cameras, render old print-based management practices an impractical option. The recent announcement that the Eastman Kodak Company has filed for bankruptcy rings the death knell for the era of print photography and film and should be a clear indication to archivists that preserving digital photography is now our central concern.1 Although current practices of campus photographers work relatively smoothly for meeting their business needs, access and preservation are becoming increasingly unmanageable activities.
Photographers are required to devote more time, in addition to their core business duties, to the tasks of managing their digital images. Archivists' inability to acquire images in a timely manner contributes to bulging storage servers and difficult-to-manage optical disk collections in photography departments. Digital photography requires an unprecedented level of engagement by the archivist throughout the entire lifecycle of the records, but given the size of the problem, where does the archivist begin? Archivists can take small steps to develop tools appropriate to their setting that can lead toward long-term solutions. The wide implementation of Institutional Repository (IR) software and Digital Asset Management Systems (DAMS) for digitized collections may provide potential test beds for implementing practical methods to provide access to born digital images. The relative ease with which institutions adopt standards and practices for digitization is reflected in the large number of well-documented case studies. The issues surrounding born digital collections are well known, and standards and practices are being developed to solve these problems. However, there remain too few examples documented by repositories that demonstrate practical methods and tools for providing access and preservation for born digital images, and such practical approaches deserve more attention.

This article explores the policies and technical issues related to born digital images that influence their preservation in the college and university archives context. It discusses how the University at Albany's Special Collections and Archives Department began acquiring born digital images. It suggests practical approaches and methods to meet the challenges of digital photography, with special consideration given to possible staffing and financial constraints, and it advocates greater collaboration between university archives and campus photography departments.
LITERATURE REVIEW

The Archivist and Photographer Relationship

The digital age requires more involvement, not less, by the archivist for institutions that hope to preserve their cultural legacies. Elizabeth Yakel and William Brown, as early as 1996, examined the effect of digital technology on records creators and archivists. They argued that in the digital age there is a need for an "active archivist who serves as part of the administrative team, both culling and packaging information as well as working with administrative colleagues in the evaluation and interpretation of the data."2 However, the close relationships that institutional archivists once enjoyed with paper records creators faded with the advent of digital technology. Many archivists, contrary to professional best practice, are excluded from or not able to get involved in the policy decisions about records creators.3 Lisl Zach and Marcia Frank Peri suggested that there has been relatively little development of campus electronic records programs between 2005 and 2009 and concluded that the acquisition and management of institutional digital records is comparatively neglected.4

Until recently, digital records that arrived on magnetic and optical media typically played an ancillary role to the paper records contained in the collection. Whereas floppy disks and other aging media pose problems, they are only the tip of the iceberg. Increasingly, repositories are being tasked with acquiring digital collections with no paper counterpart, putting archivists in the uncomfortable position of not being able to provide access to the materials they own.

A recent Online Computer Library Center (OCLC) report noted alarming findings among Association of Research Libraries affiliated special collections departments.
The report recognizes a "widespread lack of basic infrastructure for collecting and managing born-digital materials: more than two thirds [of the collections studied] cited lack of funding as an impediment, while more than half noted lack of both expertise and time for planning."5 Indeed, the report summarized that special collections departments state born digital materials as their second biggest challenge. Yet these findings are a striking contrast to the amount of investment institutions have made in their technological infrastructure. The OCLC report also found that 69% of the respondents are using an IR, suggesting that lack of technology might be a smaller obstacle to acquiring and preserving born digital records than conventional wisdom would suggest.6

The failed attempts to successfully deploy IR software for the digital scholarship community have met obstacles similar to those encountered by archivists working with electronic records. As IR deployments proliferated, libraries floundered to acquire faculty research due in part to a faulty assumption that faculty (the records creators) would self-archive their materials. Even relatively simple digital assets, such as preprints in PDF format, are difficult to acquire because in many cases faculty are left to their own devices to manage their own records.7 The "build it and they will come" approach has largely failed because of the lack of staffing and administration of the repository software and programs.8 The poor levels of participation in IRs may be owed in part to the emergent role of the repository librarian, who has suddenly assumed many of the responsibilities traditionally fulfilled by an archivist.
The findings of the MIRACLE (Making Institutional Repositories A Collaborative Learning Environment) project suggest that the "type of one-on-one collection development and content recruitment now being carried out by librarians to populate IRs is exactly the type of field work that archivists have done for decades."9 Archivists are professionally trained in collection building, especially the heterogeneous content and mixed formats typically found in archival and manuscript collections. Archivists are accustomed to networking with creators to ensure that custody is taken of those collections, and yet archivists have not played a significant role, especially when the IR is deemed the place to deposit electronic records. Although it is important to understand this historical distinction between the roles of archivists and IR librarians, merely shifting the responsibility of collecting digital scholarship to the archivist will not solve the problem. Similarly, archivists have struggled to acquire born digital material from campus administrative offices. Unless archivists redouble their engagement with campus records creators, digital scholarship and administrative records will most likely remain in separate silos.

Absent the involvement of an archivist, the contemporary campus photographer devotes a surprisingly large amount of time to access and preservation issues. Professional photography literature devotes significantly more effort to these issues than the museum and archival community. This development is a logical one, especially for commercial photographers who may have no dedicated managers or archivists to care for their legacy collections.
Jessica Bushey says that "it is necessary to rearticulate the role of the photographer as both creator and preserver," yet she hastens to add that photographers still must obtain "input from those entrusted with preserving digital materials, such as archivists."10

Born Digital Photography

To improve this situation, it is incumbent upon archivists to develop a greater understanding of digital photographic practices. The photographer's technical environment has changed dramatically and generated a new and complex set of terminology and techniques. Rules of thumb developed for digital image scanning operations may not be enough for the archivist to fully understand the problem space of born digital photography. Patricia Russotti and Richard Anderson include over 150 new, digitally specific terms in their recent review of best practices for digital photography.11 As with other types of digital materials, such as audio and video, the digital imaging literature and terminology runs deep and wide and can be daunting to the novice. In particular, archivists need to develop a better understanding of digital photography file formats and metadata practices if they aim to engage photographers effectively.

Born digital "originals" bring different kinds of responsibility to planning and decisions not found in scanning operations and introduce a new level of complexity and cost. The archivists' commitment to authenticity may lead them toward wanting to preserve the "camera raw" (Raw)12 format as the "negative," but this commitment may come into conflict with the realities of resource and staffing constraints because of its complexity and additional file storage.
The emphasis put on authenticity, reliability, and accuracy of electronic records can be daunting, and it can even lead to further inaction by the archivist.13 The proprietary nature of the Raw format, as adopted among the most popular camera manufacturers, has become a serious concern among photographers who worry that they may not be able to access their files in the future.14

Raw Image Format

The importance of the Raw file format to the photographer cannot be overstated, and archivists must learn about it to meet the needs of the photographer and to preserve the format effectively. Photographers work in the Raw format because it affords the highest quality in resolution and color range, and it allows the photographer to non-destructively adjust properties of the image. Many photographers consider the Raw image file generated by a digital camera the "digital negative," and image developing refers to the process of color selection and conversion of the Raw file into a JPEG or TIFF format.15 The Raw file stores the information gathered from the camera's sensor in an unprocessed format.

Michael K. Bennett and F. Barry Wheeler explored the barriers and benefits for archivists who want to preserve an image in Raw format, which they consider the true camera "negative," versus the typical practice of preserving an image "positive" in TIFF format.16 Until the advent of Raw, pictures shot using the JPEG and TIFF formats were typically rendered inside the camera. Raw, however, defers processing of the image until the image is transferred to a desktop computer and rendered inside Adobe Camera Raw software or a comparable program. Managing this format is complicated by the fact that there are dozens of proprietary formats in use by different camera brands, sometimes differing with each successive model. This increased functionality of cameras consequently increases the burden of the archivist.
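Given the dozens of proprietary raw formats in circulation, a practical first step for an archivist is simply to survey what an incoming transfer contains. The sketch below uses only Python's standard library; the extension list is an illustrative assumption and far from exhaustive:

```python
from collections import Counter
from pathlib import Path

# Illustrative sample of proprietary raw extensions plus Adobe's open DNG;
# real camera lines use many more variants.
RAW_EXTENSIONS = {".nef", ".cr2", ".arw", ".orf", ".raf", ".dng"}

def survey_formats(root):
    """Count image files by extension so an archivist can see how many
    proprietary raw variants a transfer contains."""
    counts = Counter(p.suffix.lower()
                     for p in Path(root).rglob("*") if p.is_file())
    raw_total = sum(n for ext, n in counts.items() if ext in RAW_EXTENSIONS)
    return counts, raw_total

# Demo: build a tiny sample accession and survey it.
demo = Path("accession_demo")
demo.mkdir(exist_ok=True)
for name in ("DSC_0001.NEF", "DSC_0001.jpg", "DSC_0002.nef"):
    (demo / name).touch()

counts, raw_total = survey_formats(demo)
print(counts)
print(raw_total)  # 2
```

Run once per accession, a tally like this signals whether format migration (for example, to DNG) needs to be planned before ingest.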
Currently, Adobe's Digital Negative (DNG) is the only standardized, openly documented Raw format,17 and the Library of Congress supports it as the most sustainable format for long-term storage of images.18 If Adobe successfully makes DNG a widely accepted standard among camera manufacturers, then archivists may reap the same benefits for photographs as they have with documents reformatted to PDF. Adobe should be commended for the degree of openness it has brought to these formats, but archivists should be wary of becoming too reliant on one software manufacturer to manage their collections.

By converting to DNG, photographers have the option of storing the parametric image edits made to the Raw image in a "side car" file.19 Parametric image editing allows for cropping and rotating the image, as well as adjusting the color and contrast non-destructively, whereas the same edits made to a TIFF are irreversible.20 Adobe's Camera Raw software affords the user the ability to adjust dozens of settings, and then records them in an Extensible Metadata Platform (XMP) file, or side car file, which resides separate from the Raw file; thus the digital negative is never tampered with. The encoded information in the XMP file can be inspected by viewing the tagged information about the edits and color settings made to the image in a text editor, and it is easily read by a human. Adobe's DNG converter is a free tool that allows the archivist to preserve the original proprietary Raw file, as created by the camera, inside the DNG file. The DNG converter can also extract the proprietary files, such as the Nikon Electronic Format (NEF), if needed.
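The human-readable nature of the side car file means its edits can be inspected programmatically as well as in a text editor. The fragment below is an abbreviated, illustrative XMP sample (real side cars written by Adobe Camera Raw record far more settings); parsing it needs only the standard library:

```python
import xml.etree.ElementTree as ET

# Abbreviated, illustrative XMP side car fragment. Real files record dozens
# of settings in the camera-raw-settings (crs) namespace while the raw
# "digital negative" itself is left untouched.
SIDECAR = """<x:xmpmeta xmlns:x="adobe:ns:meta/">
  <rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#">
    <rdf:Description xmlns:crs="http://ns.adobe.com/camera-raw-settings/1.0/"
                     crs:WhiteBalance="As Shot"
                     crs:Exposure="+0.50"
                     crs:Contrast="+25"/>
  </rdf:RDF>
</x:xmpmeta>"""

CRS = "{http://ns.adobe.com/camera-raw-settings/1.0/}"
RDF = "{http://www.w3.org/1999/02/22-rdf-syntax-ns#}"

def read_edits(xmp_text):
    """Return the parametric edits recorded in the side car file."""
    root = ET.fromstring(xmp_text)
    desc = root.find(".//" + RDF + "Description")
    return {k.replace(CRS, "crs:"): v
            for k, v in desc.attrib.items() if k.startswith(CRS)}

edits = read_edits(SIDECAR)
print(edits["crs:Exposure"])  # +0.50
```

Because the edits live in this separate, readable file, an archivist can audit or even log them at accession time without ever rendering, let alone altering, the negative.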
Additionally, with the conversion to DNG, the converter creates and stores an MD5 hash code within the file, which is useful for documenting a file's accuracy and authenticity.21 The file size of a Raw image can vary depending on the megapixels and bit depth of the camera and the settings selected by the photographer. A Nikon D7000 camera with an image sensor sized at 4928 × 3264 pixels (16.1 megapixels) that shoots a 12 bit, lossless image will produce a 15.5 MB file, which makes the Raw format in this model roughly twice the size of a high quality, rendered JPEG.22

Archivists have relied heavily on the TIFF format as an archival master for use in repositories because of its ability to preserve lossless images. Although the TIFF format has been phased out as an option in professional cameras, it is still a common rendering option in computer software and scanners. Because TIFF is a fully rendered file format, the settings of a camera or scanner are encoded permanently into the image, thereby increasing the size (especially using 16-bit settings) and preventing any future re-renderings by the photographer or archivist.23 However, if an archivist chooses DNG as a master format, other options are available that can actually surpass the storage footprint of TIFF. Archivists may choose to embed the proprietary Raw image inside of the DNG Raw file. Although this allows the records creator to retain their original proprietary format, migrating to the open DNG format in this way will double the storage requirements for the repository.

Embedded Metadata and Image Handling Software

The sheer volume of born digital images requires archivists to economize their efforts, and capturing metadata can be one of the most costly aspects of acquiring a born digital collection.
Whereas creating a descriptive metadata record about the image is important, photographers and archivists also need to understand and use embedded metadata, which is critical to ensuring the preservation of digital assets. Preservation file formats allow self-describing information to be embedded into the "header" of the image itself. Embedded metadata ensures against loss if the file is separated from its descriptive database record. The TIFF, Raw (most proprietary versions), and JPEG formats store the three following units of metadata in the file: Exchangeable Image File (EXIF) format, International Press Telecommunications Council (IPTC), and XMP; the DNG format includes even more.24 The IPTC standard was merged with XMP, and users can now extend XMP to adopt the IPTC Core metadata standard. XMP is an Adobe-created, XML-based formatting standard primarily used for images and PDF files.
The EXIF metadata is the information generated automatically by the camera; this includes such information as the f-stop setting, flash, focal length of the lens, and more. The IPTC metadata contains the descriptive information about the content of the image, creator, date, and so on. IPTC can be extended by the Dublin Core standard as well, which allows for time-saving processes through automation. Previously we discussed XMP in the context of side car files and the information they store from parametric image editing. The DNG format stores descriptive information in the XMP-formatted IPTC "chunk" inside the DNG file, whereas edits made to the image (cropping, color correction) are stored in a separate file, known as the side car file, that is formatted in XMP as well.

The simple act of viewing an image, and the slightly more difficult act of reading and writing embedded metadata, is a common barrier to managing the Raw format in a repository. Raw images often do not display the same user-friendly image information in the form of thumbnail images as JPEG and TIFF do, making it impractical to conduct search and management actions through the Windows Explorer and Finder programs. Further, there are known issues with handling and interacting with the embedded data in files through some default file managers, especially with older generations of applications,25 yet this should not be an issue if the proper software is used. Adobe Bridge can help photographers and archivists safely automate some of the processes of adding metadata to the image file header. Other software dedicated to preparing images for ingestion includes Adobe Lightroom, Apple's Aperture, and Photo Mechanic. This additional image "collection processing" software, also known as "PIEware,"26 may be a small cost in comparison to the added control it gives and the time it saves.

As part of the Preserving Digital Images project, ARTstor developed new practices and tools for photographers so that they could create archive-ready born digital images. The project developed best practices that advise photographers to embed metadata into the image using Adobe Bridge or its equivalent, without having to rely on cumbersome Excel spreadsheets. Archivists can instead use ARTstor's recently developed Embedded Metadata Extraction Tool to extract metadata from digital assets and export it as spreadsheets.27 The archivist must have a firm understanding of the technical complexities of the contemporary photographer's work, but technical knowledge alone will not guide archivists through the steps of implementing the processes for acquiring and preserving digital images.
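The extract-and-export workflow that such tools support can be approximated with the standard library alone. In this sketch, file-system properties stand in for embedded EXIF/IPTC fields, which in practice would be read with a dedicated metadata reader such as ExifTool; the folder and file names are illustrative:

```python
import csv
from pathlib import Path

def export_inventory(image_dir, out_csv):
    """Write a spreadsheet-style inventory of an image folder.
    File name, format, and size stand in here for the embedded
    EXIF/IPTC fields a dedicated tool would extract."""
    with open(out_csv, "w", newline="") as fh:
        writer = csv.writer(fh)
        writer.writerow(["filename", "format", "bytes"])
        for path in sorted(Path(image_dir).iterdir()):
            if path.is_file():
                writer.writerow([path.name,
                                 path.suffix.lstrip(".").upper(),
                                 path.stat().st_size])

# Demo: a tiny sample accession.
demo = Path("inventory_demo")
demo.mkdir(exist_ok=True)
(demo / "commencement.dng").write_bytes(b"\0" * 10)
(demo / "commencement.jpg").write_bytes(b"\0" * 5)

export_inventory(demo, "inventory.csv")
print(open("inventory.csv").read())
```

A CSV like this opens directly in spreadsheet software, giving curators and photographers a shared, low-tech view of an accession before any repository ingest.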
Research projects, conferences, and standards groups have developed an overwhelming number and array of technical guidelines, best practices, frameworks, and models to manage and preserve electronic records. In light of these theoretical inroads in research, Christopher Prom posed the question, "Have libraries and archives made adequate progress in implementing the procedures, tools and services to actually preserve digital records?"28 The number of demonstrated implementations pales in comparison to our plethora of theory and standards, but this is slowly beginning to change.29 The gap between theory and practice has proven to be much wider than initially thought. Although repositories hold out for a comprehensive electronic records solution, Ben Goldman wisely recommended "moving forward with practical and achievable steps" toward that goal "responsibly."30 With this in mind, what are the short-term measures archivists can take to acquire and extend the life of their digital images?

UNIVERSITY AT ALBANY'S BORN DIGITAL IMAGES

This section depicts the University at Albany's experience in formulating practical methods for acquiring and providing access to born digital images. It discusses how the University Archives dealt with its backlog of born digital images, and then describes how it wrote integrated image management policies with the photographer to improve the University Archives' ability to acquire images. We believe these short-term, practical steps will help us build toward our long-term goal of the archives becoming a "systematic institutional function" within the university.31

Background

Since the early 1970s, the University Archives has regularly acquired photographic prints and negatives from the campus Photography Department.
Moving the Archivist Closer to the Creator 77

The photographer typically transferred the images seven to ten years after their initial use, or even earlier in some cases when the photographer ran out of cabinet storage. Archivists transferred prints and negatives into the archives, re-housed them in acid-free boxes and folders, and placed them in the repository's processing backlog. The collections typically required little processing because they were well organized and easily accessible. Currently, there are more than 50,000 prints and negatives that are used frequently by the University community in exhibits and promotional material, and by off-campus researchers.

Analog to Digital

Campus photographers began gradually converting to digital photography in 1997. Over the course of ten years, campus photographers routinely used up their allotted data storage and would rely on optical media as an "archiving" solution. As the digital camera industry evolved, the photographers evolved too, adopting new file formats to meet their needs and to stay current with industry standards. Over time, the Photography Department confronted image management issues, such as redundancy and proprietary formats, that made search and retrieval of their own files difficult. In addition to the Photography Department, many departments across the University began struggling to manage their digital assets, and the campus implemented a collaborative project to plan and purchase a DAMS. The project charter sought to solve "a broad-based need, expressed by several University constituencies, for an enterprise-wide service focused on the management, curation and accessibility of digital media and related assets."32 The project deemed it important that the DAMS feature superior image functionality because images were a priority for nearly all of the project participants.
The project chose Luna Insight as its DAMS, and the system was adopted by various departments, including the University Archives. Originally, each department was to upload and manage its own digital collections. It quickly became clear, however, that the other departments lacked the qualified staff and time required to upload their images into the DAMS. The University Art Museum was the lone exception, and only because it already had an employee tasked with managing its digital images on a legacy system. As with many implementations of IR software, the "build it and they will come" approach was not working. It was clear that the University Archives needed to play a leadership role for campus departments using the Luna Insight system. As the University Archives began using the DAMS for its digitized collections, it also began exploring how archivists could get directly involved with acquiring born digital records across the campus.

1997 to 2007

Realizing that access to proprietary formats was problematic, archivists dealt first with the images taken between 1997 and 2007 to address three major challenges: the lack of image descriptions; the lack of tools to facilitate the transfer of images and database records from the photographer to the University Archives; and the lack of a centralized portal to access digital images. To address these challenges, we first needed a better understanding of how the campus photographer worked.

Our first step in understanding the photographer's practice was to capture, organize, and retain information about his workflow. We had previously created some common ground between the University Archives and the photographer, having already had conversations about the planning and implementation of Luna Insight.
We met with the campus photographer to learn what information he obtained during his work, and we discussed with him what information the archives lacked. We learned that when the photographer shot pictures for a particular job, he entered information about the event into a Microsoft Access database, including the date, a unique job number, a brief description, and the name of the person or department who made the request. After completing the job, the photographer loaded the Raw files onto a local hard drive, created a folder for each event with the unique job number as the folder name, and placed all of the images created from the job in that folder. The photographer typically burned the job's images to a CD and gave it to the customer. The images are used for the marketing of a particular event, and they are then repurposed for a multitude of uses in a semi-active manner by other departments on campus and in official university publications and alumni literature. For this purpose, the photographer created JPEG derivatives from the Raw files (NEF), which were edited for color correction and selected for publication or distribution; these might be considered comparable to the "print positive" in analog photography. Until 2008, unfortunately, the photographer deleted 90% of the JPEG derivatives to save storage space.

Transfer to Archives

In 2007, the campus photographer transferred to the University Archives 150,000 Raw files (with associated XMP files) stored on optical discs, totaling close to two terabytes. One of the first steps was to transfer the files from the optical media onto a shared network drive while maintaining the original order and unique job numbers. Graduate students appraise these images based on retention guidelines developed by the archivist. Students work primarily in Adobe Bridge and Photoshop to view, describe, rename, and batch-convert the images.
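The photographer's job log and one-folder-per-job convention described above can be modeled very simply. The field names below mirror the fields listed in the text (date, unique job number, brief description, requester), but the actual Access database schema is not documented, so this is an illustrative sketch only.

```python
from dataclasses import dataclass
from datetime import date
from pathlib import Path

@dataclass
class PhotoJob:
    """One row of the photographer's job log (field names are illustrative;
    the real Access database schema is not published)."""
    job_number: str      # unique job number, also used as the folder name
    shot_date: date
    description: str     # brief description of the event
    requested_by: str    # person or department that made the request

def make_job_folder(root, job):
    """Create the per-job folder named with the unique job number, so images
    from a shoot keep their original grouping when transferred."""
    folder = Path(root) / job.job_number
    folder.mkdir(parents=True, exist_ok=True)
    return folder
```

Keeping the job number as the folder name is what later lets the archives preserve "original order" when the files move to a shared drive.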
Before we obtained Bridge, students had to go through the cumbersome task of renaming files using Windows Explorer. If an image must be opened, the student uses the Adobe Camera Raw program that runs inside Photoshop. A batch file conversion is conducted on the images, converting them from proprietary Raw files (NEF) to TIFF using Adobe Bridge and Photoshop's Image Processor tool.

It is important to adopt self-describing file naming conventions, as well as to embed metadata into the image file header. Adobe Bridge is used to normalize all the file names using a consistent, planned file naming and identifier convention. File names should consist of lowercase characters from the Latin alphabet and Arabic numerals; they should contain no special characters or punctuation except the underscore character, used to designate meaningful spaces in a file name.33 These practices help to identify files in the event of a disaster in which files are separated from their descriptive metadata records. Graduate students create a first draft of a Dublin Core descriptive metadata record for each image that is later reviewed and enhanced by an archivist before being uploaded with the affiliated images to the University's Luna Insight system.34

Improve Workflow for Current Images

The nature of the campus photographer's work aids in the development of descriptive metadata and master file formats for access and preservation. After analyzing the photographer's work process, we altered it so that he now creates a TIFF derivative of all Raw files when he completes a job and enhances the event description.
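The naming rules described above (lowercase Latin letters, Arabic numerals, underscores for meaningful spaces, no other punctuation) are mechanical enough to enforce in code rather than by hand. A minimal sketch, not the University Archives' actual Bridge configuration:

```python
import re

def normalize_filename(name):
    """Apply the naming convention from the text: lowercase Latin letters and
    Arabic numerals only, with underscores standing in for meaningful spaces;
    all other punctuation is dropped. The extension is lowercased but kept."""
    stem, dot, ext = name.rpartition(".")
    if not dot:                                # no extension present
        stem, ext = name, ""
    stem = stem.lower()
    stem = re.sub(r"[\s\-]+", "_", stem)       # spaces and hyphens become underscores
    stem = re.sub(r"[^a-z0-9_]", "", stem)     # drop every other special character
    stem = re.sub(r"_+", "_", stem).strip("_") # collapse runs of underscores
    return stem + (dot + ext.lower() if dot else "")
```

A pass like this, run before ingest, guarantees that every file name survives transfer between Windows, macOS, and server file systems without escaping problems.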
Instead of transferring images to the University Archives on optical media, the photographer transfers images directly onto a dedicated share on the University Libraries' server, which is restricted to the University Archives staff and the photographer. The TIFFs are loaded into the DAMS by the University Archives, and the Raw files are stored offline on servers. We currently describe the images in such a way that the crucial job numbers used to identify the photographer's work are preserved. The photographer can use old job numbers to search for images in Luna Insight. To ensure that each image is assigned a unique file name by the camera, we recommended that the photographer change the file numbering setting on his camera to "continuous" and avoid using "auto reset." If the incorrect setting is selected, the photographer risks overwriting older files. Digital image collections that have not been set to "continuous" may also complicate the important task of appending identifier information to the image.

Findings

It has become abundantly clear that archivists can no longer passively wait for the transfer of born digital collections to our repositories. The archivist must become involved in the work of the creator to ensure that valuable digital information is not lost. Archivists must leverage their traditional relationships with departments outside of the library while at the same time acquiring the proper skills and methods to enable them to take custody properly when the time comes. The archivist must strike a balance, neither remaining completely inactive nor holding out indefinitely for the perfect solution.
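The overwriting risk described above can also be guarded against on the archives side, independent of the camera setting, by appending the job number to each incoming file name and refusing to clobber an existing file. This is a sketch of one defensive approach, not the University Archives' actual transfer script; the job-number prefix format is an assumption.

```python
import shutil
from pathlib import Path

def transfer_image(src, dest_dir, job_number):
    """Copy an image onto the archives share, prefixing the job number so the
    camera's short rolling counter (which wraps or resets) cannot silently
    collide with an earlier transfer. Raises instead of overwriting."""
    src = Path(src)
    dest = Path(dest_dir) / f"{job_number}_{src.name}"
    if dest.exists():
        raise FileExistsError(f"refusing to overwrite {dest}")
    shutil.copy2(src, dest)  # copy2 preserves file timestamps
    return dest
```

Failing loudly on a name collision turns a silent data loss into a visible appraisal decision.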
Numerous colleges and universities are exploring alternatives to conventional acquisition practices, such as using online vendors to host their images off site, and this is especially true for recent born digital images.35 Other institutions, however, are already fully involved in acquiring born digital images into the archives and are administering the software and technology locally using their campus IT infrastructure.36 Although the OCLC report points to the lack of IT support as a large barrier to acquisition, our experience shows that archivists can surmount this barrier by taking small incremental steps.

Some of the most important decisions are made at the moment of acquisition; these decisions may have lasting, irreversible effects on the sustainability of the collection. All attempts must be made to "do no harm" to the digital assets. Does the repository consider the images used for a particular event the most important thing worth documenting, or does it want all of the images? Furthermore, the archivists' commitment to authenticity may lead them toward preserving the Raw format as the "negative," but this commitment may come into conflict with the realities of resource and staffing constraints. Archivists who aim to preserve the digital masters may want to consider migrating their image collections out of the proprietary Raw file created by the camera to a more stable Raw format such as DNG. DNG preserves all of the settings of the proprietary format along with the metadata. DNG also has a built-in data validation mechanism that keeps "hash" file information to guard against data corruption. Hash files, or checksums, are unique pieces of data assigned to each image; they serve as a guarantee to the user that the file has not been digitally altered or corrupted. Each time the file is handled by the computer, there is a small risk of data corruption.
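The fixity checking that DNG builds in can be approximated for any format with an external checksum manifest: record a digest for each file at ingest, then recompute and compare on a schedule. The sketch below uses MD5 by default, but any algorithm `hashlib` supports works the same way; the manifest format is an assumption, not a described part of the University Archives' workflow.

```python
import hashlib

def checksum(path, algorithm="md5"):
    """Compute a file's digest in chunks so large TIFF/Raw files do not have
    to fit in memory."""
    digest = hashlib.new(algorithm)
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_manifest(manifest):
    """Periodic fixity check: compare each file against its recorded checksum
    and return the paths of any files that have silently changed."""
    return [str(p) for p, expected in manifest.items() if checksum(p) != expected]
```

Run against the whole share on a schedule, `verify_manifest` turns "we hope nothing rotted" into a concrete, auditable report.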
Thus, the checksum that resides within the DNG file allows the archivist to periodically check all of the files to make sure they have not become corrupted. Archivists can assume that as image processing software improves, older DNG files can be reprocessed into even more faithful reproductions of the original. Once the image has been converted to TIFF, the archivist permanently loses the future ability to eliminate noise and to recover colors lost from the original image.37 The University Archives is considering the costs and benefits of moving to DNG. Most DAMS require users to upload surrogate copies of their images because Raw formats are not given the same functionality in DAMS as JPEG and TIFF. The ingest engines do not currently handle Raw, but this is likely to change as Raw becomes a more accepted preservation format. Even though Raw provides unlimited options to archivists and photographers, choosing these options for long-term preservation should be considered carefully because storage requirements can become overly costly. If storage costs are not a concern, the justified belief that technology will improve over time suggests that archivists should attempt to preserve the Raw format, preferably DNG, when possible.

The archivist and the photographer will have to come to an agreement on what they want to preserve. Although migrating the images out of a proprietary Raw format to one such as DNG may meet the photographer's needs, it does bring additional management and storage costs to the repository. Michael J. Bennett and F.
Barry Wheeler stated that, "as a baseline, familiarity with the concepts of parametric editing is necessary to confidently sustain DNG files while the format and its tools continue to mature."38 The archivist and photographer may have to make a compelling case to their managers as to why preserving multiple formats is needed. Access copies must be created in place of the Raw files because most DAMS and IR software do not display Raw files natively. Once the Raw files are disposed of, however, something is lost, and the repository will not be able to turn back.

Archivists need to consider the benefits of embedding metadata into their image files. The University Archives has not begun enhancing the embedded metadata in its images beyond what is already there, though this is something it is considering for future integrated workflow improvements. The work of the ARTstor project holds great promise not only for making it easier for photographers to embed information into image files, but also for making it easier for archivists to get that information back out of the file and into their conventional metadata records. Because records creators are typically not concerned with extensive metadata, the ARTstor project might be a way forward that lightens the costly burden of creating metadata for both the creator and the archivist.

Our profession must continue building better tools to preserve and provide access to our digital heritage, but we must remember that, when possible, building relationships with our creators is an integral first step to making effective use of these tools. With this in mind, Lisl Zach and Marcia Frank Peri suggested that archivists must find a "champion with influence within their institutions . . . forming strategic alliances with key players outside of the library."39 The photographer's mindset and work environment may lend themselves to repositories looking for a willing collaborator to begin an electronic records management program.
It has become clear that without the archivist's pre-custodial intervention early in the lifecycle of born digital images, their long-term preservation may be at risk. The records creator must not only have a desire to transfer his or her records, but also see a legitimate reason to do so. Whether working in print or digital formats, photographers are accustomed to keeping pace with technical innovation in their field. Thus, photographers' technical understanding of their own work environment, and their keen awareness of the problems that long-term access and preservation pose, can make them willing partners for archivists who plan to acquire born digital records.

NOTES

1. Tiffany Hsu, "Kodak Files for Long-Expected Chapter 11 Bankruptcy," Los Angeles Times, January 20, 2012 (accessed March 15, 2012). Available at http://articles.latimes.com/2012/jan/20/business/la-fi-kodak-bankrupt-20120120.
2. William E. Brown, Jr. and Elizabeth Yakel, "Redefining the Role of College and University Archives in the Information Age," American Archivist 59 (1996): 275.
3. Susan E. Davis, "Electronic Records Planning in 'Collecting' Repositories," American Archivist 71 (2008): 185.
4. Lisl Zach and Marcia Frank Peri, "Practices for College and University Electronic Records Management (ERM) Programs: Then and Now," American Archivist 73 (2010): 122.
5. Jackie M. Dooley, Katherine Luce, and OCLC Research, Taking Our Pulse: The OCLC Research Survey of Special Collections and Archives (Dublin, Ohio: OCLC Research, 2010), 13.
6. Ibid., 61.
7. Ibid., 101.
8. Dorothea Salo, "Innkeeper at the Roach Motel," Library Trends 57 (2008): 99.
9. Karen Markey, Soo Young Rieh, Beth St.
Jean, Jihyun Kim, and Elizabeth Yakel, Census of Institutional Repositories in the United States: MIRACLE Project Research Findings (Washington, DC: Council on Library and Information Resources, 2007), 79.
10. Jessica Bushey, "He Shoots, He Stores: New Photographic Practice in the Digital Age," Archivaria 65 (2008): 127.
11. Patricia Russotti and Richard Anderson, Digital Photography Best Practices and Workflow Handbook (Burlington, MA: Focal Press, 2010), 8–27.
12. Camera raw is also conventionally referred to as Raw or RAW, but it is not an acronym.
13. Jessica Elaine Bushey, "Born Digital Images as Reliable and Authentic Records" (master's thesis, University of British Columbia, 2005), accessed March 15, 2012, http://www.interpares.org/display_file.cfm?doc=ip1-2_dissemination_thes_bushey_ubc-slais_2005.pdf.
14. Michael Reichmann and Juergen Specht, "The Raw Flaw," accessed March 15, 2012, http://www.luminous-landscape.com/essays/raw-flaw.shtml.
15. Philip Andrews, Yvonne Butler, and Joe Farace, Raw Workflow from Capture to Archives: A Complete Digital Photographer's Guide to Raw Imaging (Oxford, UK: Focal Press, 2006), 6.
16. Michael J. Bennett and F. Barry Wheeler, "Raw as Archival Still Images Format: A Consideration," UConn Libraries Published Works Paper 23, accessed March 15, 2012, http://digitalcommons.uconn.edu/libr_pubs/23/.
17. The OpenRAW working group has been advocating for a Raw format that is even more open and comprehensive than the current DNG format. It does not feel DNG is a fully open and documented format, as Adobe purports. See http://www.openraw.org/about/index.html (accessed March 15, 2012).
18. See http://en.wikipedia.org/wiki/Digital_Negative (accessed March 15, 2012).
19. These are sometimes referred to as "buddy" files.
20. Bennett and Wheeler, "Raw as Archival Still Images Format," 7.
21. Richard Anderson, "Should I Convert to DNG?" DPBestFlow.org, accessed March 15, 2012, http://dpbestflow.org/NODE/570.
22.
Memory card capacity for a Nikon D7000 camera, accessed March 15, 2012, http://imaging.nikon.com/lineup/dslr/d7000/features03.htm.
23. "Raw vs. Rendered," accessed March 15, 2012, http://dpbestflow.org/camera/raw-vs-rendered#tiff.
24. Russotti and Anderson, Digital Photography Best Practices, 46–49.
25. It has been reported that rotating some image formats, and even checking the properties of an image, can permanently change or destroy the EXIF data. See "Nikon Also Warn About Windows XP," dpreview.com, accessed March 15, 2012, http://www.dpreview.com/news/2001/12/14/nikonxpwarnings. See also Peter Krogh, "Metadata Handling," DPBestFlow.org, accessed March 15, 2012, http://dpbestflow.org/metadata/metadata-handling#durability.
26. See http://dpbestflow.org/image-editing/catalog-pieware.
27. "Final Report on Activities Related to ARTstor's NDIIPP Grant: Preserving Digital Still Images," available at http://www.digitalpreservation.gov/partners/documents/ARTstor_FinalReport020311.pdf.
28. Chris Prom, "Making Digital Curation a Systematic Institutional Function," The International Journal of Digital Curation 6, no. 1 (2011): 139–152, accessed March 15, 2012, http://ijdc.net/index.php/ijdc/article/viewFile/169/237.
29. For example, Michael Forstrom's "Managing Electronic Records in Manuscript Collections: A Case Study from the Beinecke Rare Book and Manuscript Library," American Archivist 72 (2009): 460–477, is an excellent case study on managing born digital collections, and he reviews the literature of other prominent case studies.
30. Ben Goldman, "Bridging the Gap: Taking Practical Steps Toward Managing Born-Digital Collections in Manuscript Repositories," RBM: A Journal of Rare Books, Manuscripts, and Cultural Heritage 12 (2011): 16.
31. Prom, "Making Digital Curation," 142–143.
32.
"University at Albany University Digital Image Database Proof-of-Concept Project Charter," November 16, 2007.
33. Jeffrey Warda, The AIC Guide to Digital Photography and Conservation Documentation (Washington, DC: American Institute for Conservation, 2011), 85.
34. The Dublin Core standard is widely used by libraries and archives, and provides for easier interoperability and flexibility.
35. For example, the Web site SmugMug.com contains the born digital images of numerous colleges and universities. In an e-mail from John Edens, January 17, 2012, he explains that University at Buffalo's Creative Services are housing current photos as well as some historic photos online through SmugMug.com, and the UB University Archives are looking at solutions to preserve the images once they are no longer needed for their current uses online.
36. The Tufts University Archives contain born digital photos from 2005 to 2010. See http://dl.tufts.edu/view_collection.jsp?pid=tufts:UA069.006.DO.UA206 (accessed March 15, 2012).
37. Bennett and Wheeler, "Raw as Archival."
38. Ibid.
39. Zach and Peri, "Practices for College," 122.

The Need for Dental Digital Photography Education

LETTER TO THE EDITOR

Letters may comment on articles published in the Journal and should offer constructive criticism. When appropriate, comment on the letter is sought from the author. Letters to the Editor may also address any aspect of the profession, including education, new modes of practice and concepts of disease and its management. Letters should be brief (no more than two A4 pages).
THE NEED FOR DENTAL DIGITAL PHOTOGRAPHY EDUCATION

I write in regard to Louise Brown's article 'Inadequate record keeping by dental practitioners'.1 I wholeheartedly agree that newer technologies such as digital intraoral and extraoral photography and audio recording of patient interactions may offer a solution to record keeping problems. Clinicians with decades of experience, or any student of dental history, can look back at advances in dentistry and affirm that the dental profession has experienced an enormous amount of technological expansion. One such advance is dental digital photography. Photographs have always been emblematic of memories, and with the advent of digital photography it has become much easier to bring images together in a more inclusive and qualitative approach. Scientific advances in the field of digital photography have transformed the perception of photography as an influential means of expression and communication; moreover, it offers a variety of possibilities for perception, analysis and execution. Photography and dentistry go hand in hand in disclosing concealed and unnoticed defects in teeth and other parts of the oral cavity.2

The use of cameras in dentistry has added significantly to clinical knowledge and protocols. Dental digital photography is an incredible tool for communicating with our patients, laboratories and our colleagues. The benefit of digital cameras is the instant, digitized accessibility of photos, with image quality surpassing that of traditional photography. Similarly, images can be instantly included in a practice's software and stored for future reference.3

To address the increasing health challenges and aesthetic demands of a growing population, undergraduate students in dental schools should be given comprehensive and holistic training in photography, as it plays a crucial role in diagnosis and treatment planning, as well as in the evaluation of various treatment steps.
Viewing intraoral and portrait shots in still-life form generates diagnostic protocol considerations for colour, soft tissue and tooth form. Accurate composition assists in precise study and assessment, and makes it possible to predict and manage mid- and post-treatment complications up to the completion of dental care. Post-treatment photography is also imperative for a critical appraisal of treatment results and for educational uses.4 Digital records are of paramount significance in preventing litigation and resolving tedious forensic cases.5 Given these advantages, digital technology has changed the dentist's outlook on data collection in academia. A novel innovation in the field of dentistry is photogrammetry, through which the geometric properties of objects can be determined from photographic images; it has proved effective in studying the three-dimensional occlusion of dental arches, teeth and their dimensions.6

The practice of dental digital photography is a type of macrography, and with the arrival of digital cameras, photography has become a simple and accessible way of educating and documenting our patients. Digital images can be effortlessly stored and kept for possible use for legal or academic purposes. Hence, digital cameras must be considered indispensable tools for dentists. It is time academics and policymakers promoted photography in dentistry education and incorporated its use in undergraduate and postgraduate studies.

REFERENCES

1. Brown LF. Inadequate record keeping by dental practitioners. Aust Dent J 2015;60:497–502.
2. Desai V, Bumb D. Digital dental photography: a contemporary revolution. Int J Clin Pediatr Dent 2013;6:193–196.
3. Bengel W. Digital photography in the dental practice–an overview (II). Int J Comput Dent 2000;3:121–132.
4. Gholston LR. Reliability of an intraoral camera: utility for clinical dentistry and research. Am J Orthod 1984;85:89–93.
5. Terry DA, Snow SR, McLaren EA.
Contemporary dental photography: selection and application. Compend Contin Educ Dent 2008;29:432–436.
6. Knyaz VA. Image-based 3D reconstruction and analysis for orthodontia. International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences 2012;39:585–589.

SHIVANI KOHLI, SENIOR LECTURER
Department of Prosthodontics, MAHSA University, Kuala Lumpur, Malaysia

SHEKHAR BHATIA, LECTURER
Division of Restorative Dentistry, School of Dentistry, International Medical University, Kuala Lumpur, Malaysia

(Received 6 December 2015.)

© 2016 Australian Dental Association. Australian Dental Journal 2016; 61: 125. doi: 10.1111/adj.12405. Australian Dental Journal, the official journal of the Australian Dental Association.

Staphylococcal Persistence Due to Biofilm Formation in Synovial Fluid Containing Prophylactic Cefazolin

Sana S. Dastgheyb,a,c Sommer Hammoud,b Constantinos Ketonis,a,b Andrew Yongkun Liu,a Keith Fitzgerald,a Javad Parvizi,b James Purtill,b Michael Ciccotti,b Irving M. Shapiro,a Michael Otto,c Noreen J. Hickoka

Department of Orthopaedic Surgery, Sidney Kimmel Medical College, Thomas Jefferson University, Philadelphia, Pennsylvania, USAa; The Rothman Institute, Philadelphia, Pennsylvania, USAb; National Institute of Allergy and Infectious Diseases, The National Institutes of Health, Bethesda, Maryland, USAc

Antibiotic prophylaxis is standard for patients undergoing surgical procedures, yet despite the wide use of antibiotics, breakthrough infections still occur. In the setting of total joint arthroplasty, such infections can be devastating. Recent findings have shown that synovial fluid causes marked staphylococcal aggregation, which can confer antibiotic insensitivity.
We therefore asked in this study whether clinical samples of synovial fluid that contain preoperative prophylactic antibiotics can successfully eradicate a bacterial challenge by pertinent bacterial species. This study demonstrates that preoperative prophylaxis with cefazolin results in high antibiotic levels. Furthermore, we show that even with antibiotic concentrations that far exceed the expected bactericidal levels, Staphylococcus aureus bacteria added to the synovial fluid samples are not eradicated and are able to colonize model implant surfaces, i.e., titanium pins. Based on these studies, we suggest that current prophylactic antibiotic choices, despite high penetration into the synovial fluid, may need to be reexamined.

Infection remains a major concern with orthopedic procedures, although infection rates are generally <1% (1). Because of the high prophylactic antibiotic levels within synovial fluid, the use of minimally invasive techniques, and short surgical times, infection rates remain low. However, with increasing numbers of total knee arthroplasties, infections are projected to exceed 60,000 to 70,000 cases in the United States by 2020 (2, 3). When hardware, such as a prosthesis, is present, infection is recalcitrant to antibiotic treatment. Once established, infection can cause prolonged disability, multiple operations, and increased health care costs (3, 4). According to the WHO, patients with a surgical site infection have twice the mortality rate, are twice as likely to spend time in an intensive care unit, and are five times more likely to be readmitted to the hospital than uninfected patients (5).

To avoid the establishment of infection, antibiotic prophylaxis has become the standard of care (6, 7). For orthopedic procedures, cefazolin (CFZ), which is bactericidal for staphylococci, streptococci, and Escherichia coli (8, 9), is commonly used in the absence of a penicillin allergy (10).
Alternative prophylactic antibiotics include fusidic acid, cloxacillin (11), cefixime (12), vancomycin, and gentamicin (13). Interestingly, synovial fluid has been suggested to possess antibacterial properties (14–16) that are attributed to hyaluronic acid within the synovial fluid (17), the induction of bactericidal/permeability-increasing protein (15), or unknown compounds, including antimicrobial peptides, within the synovial fluid (14). We recently reported that this ostensible decrease in bacterial numbers, which is perceived to be an "antibacterial" effect, is actually due to the clumping of bacteria within synovial fluid, which masks the true microbial load (18).

The recognition of this behavior also helps to elucidate the difficulties associated with detecting infections. Using standard procedures, the infection rates for revision knee arthroplasties are around 10%, with a 7 to 12% false-negative rate, even in cases in which gross infections are present (19). When more rigorous techniques are used to detect infection, ∼50% of retrieved implants are found to have bacterial contaminants, suggesting that antibiotic prophylaxis has limited success (20).

In this study, we sought to determine whether the prophylactic antibiotics, commonly CFZ, present in synovial fluid are able to prevent bacterial contamination. Based on our recent studies showing that bacterial clumping dominates the phenotype of synovial fluid, we hypothesized that even high levels of the bactericidal antibiotic CFZ would not be able to eradicate an S. aureus challenge. We used synovial fluid samples from patients who underwent preoperative prophylaxis and established the relative concentrations of active CFZ. We then asked whether these concentrations affect S. aureus viability or colonization of a metal surface when bathed in synovial fluid containing preoperative CFZ.
These findings have potential implications for the selection of the most effective means of antibacterial prophylaxis, as well as for the methodology used to detect joint infections.

(Received 17 October 2014; returned for modification 14 December 2014; accepted 21 January 2015; accepted manuscript posted online 26 January 2015. Citation: Dastgheyb SS, Hammoud S, Ketonis C, Liu AY, Fitzgerald K, Parvizi J, Purtill J, Ciccotti M, Shapiro IM, Otto M, Hickok NJ. 2015. Staphylococcal persistence due to biofilm formation in synovial fluid containing prophylactic cefazolin. Antimicrob Agents Chemother 59:2122–2128. doi:10.1128/AAC.04579-14. Address correspondence to Noreen J. Hickok, noreen.hickok@jefferson.edu. Copyright © 2015, American Society for Microbiology. All Rights Reserved.)

MATERIALS AND METHODS

Materials. Samples of synovial fluid were collected both in the clinic during knee aspirations and in the operating room during total knee arthroplasties, with the permission of the Thomas Jefferson University institutional review board. Cefazolin was obtained from AAP Pharmaceuticals. Kirby-Bauer disks were obtained from BBL. Wheat germ agglutinin (WGA) staining for polysaccharide intercellular adhesin (PIA) (also called poly-N-acetylglucosamine [PNAG]) was performed using Alexa Fluor 488-labeled WGA (Life Technologies). One-millimeter titanium alloy (Ti6Al4V) wires were obtained from Goodfellow Metals.

Bacterial strains. The following strains of S. aureus were used for the study: methicillin-susceptible S. aureus (MSSA) ATCC 25923, methicillin-resistant S.
aureus (MRSA) clinical isolate TJU2, MRSA LAC (USA300), MSSA Xen 36 (Perkin-Elmer), and MSSA AH1710, a strain that constitutively expresses green fluorescent protein (GFP) and was used to visualize staphylococcal surface colonization (a kind gift from Alexander Horswill, University of Iowa [21]). In addition, Staphylococcus epidermidis strain ATCC 35984 was used in the disk diffusion assays.

Qualitative synovial fluid microscopy. Directly after the samples were collected, 1 ml of each sample was removed and imaged by light microscopy at 20× magnification in order to determine the overall cellularity and/or crystallinity of the sample.

Disk diffusion assays. Each sample of synovial fluid was tested for the presence of antibiotics by analysis of the zone of inhibition (ZOI) of bacterial growth around a Kirby-Bauer disk, in which the antibiotic concentration is proportional to the distance from the disk. In brief, 6-mm Kirby-Bauer disks were impregnated with 20-μl samples of synovial fluid, dried overnight, and placed on tryptic soy broth (TSB) agar plates streaked with lawns of S. aureus (200 μl at a concentration of 10^7 CFU/ml).

Monitoring aggregate formation. An initial inoculum of 10^4 CFU/ml of MRSA USA300 was incubated in 5 ml of TSB or naive synovial fluid (no preoperative antibiotics) at 180 rpm and 37°C for 8 h. The resultant aggregates of USA300 were imaged using digital photography and scanning electron microscopy. In some cases, 10^8 CFU/ml of S. aureus ATCC 25923 was incubated in synovial fluid containing preoperative CFZ at 180 rpm and 37°C for 3 h and, after aggregate formation, with 10 μg/ml ethidium bromide for 20 min at 37°C; these aggregates were imaged using digital photography on a UV light box.

Scanning electron microscopy. Large aggregates of bacteria in the synovial fluid were removed using forceps and fixed for 1 h in 4% paraformaldehyde (PFA).
The fixed aggregates were serially dehydrated for 10 min each in 10%, 20%, 30%, 40%, 50%, 60%, 70%, 80%, 90%, and 100% ethanol (percent ethanol in double-distilled water [ddH2O]). The samples were then affixed onto a coverslip (Nunc), sputter-coated with palladium, and visualized using a Hitachi TM-1000 scanning electron microscope (SEM) (Ibaraki, Japan).

Wheat germ agglutinin staining. Wheat germ agglutinin staining for PIA was performed using Alexa Fluor 488-labeled WGA (Life Technologies) at a concentration of 0.1 mg/ml (22) for 30 min at 37°C.

Direct counting of S. aureus survival. Samples of synovial fluid (SF) (200 μl each; samples SF4.1 to SF4.5) were challenged with 10^6 CFU/ml of a clinically isolated strain of S. aureus (TJU2), incubated under static conditions for 24 h, and serially diluted in phosphate-buffered saline (PBS). The entire 200-μl volume of each sample was plated on TSB agar to ensure that all surviving bacteria were detected. In addition, samples of synovial fluid (200 μl each) were challenged for 24 h at 37°C under static conditions with high inocula (between 10^7 and 10^9 CFU/ml) of S. aureus ATCC 25923, a laboratory-adapted strain that is highly susceptible to CFZ in TSB, even at a high initial inoculum (18, 23). These samples were serially diluted in PBS, and the entirety (200 μl) of each sample was plated on TSB agar.

Monitoring S. aureus survival using Xen 36. A 10^8-CFU/ml inoculum of the luciferase-expressing strain Xen 36 was incubated with 200 μl of fresh samples of synovial fluid from patients given preoperative CFZ (samples SF4.1 to SF4.5) or with 200 μl of TSB with or without 20 μg/ml CFZ for 24 h at 37°C in a standing incubator and then imaged for chemiluminescence (ImageQuant LAS 4000).
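The direct-counting assays above recover survivors by serial dilution and plating of a known volume, and the conversion from a plate count back to a bacterial density is simple arithmetic. A minimal Python sketch; the colony count and dilution factor below are hypothetical illustrations, not data from this study:

```python
def cfu_per_ml(colonies: int, dilution_factor: float, plated_volume_ml: float) -> float:
    """Back-calculate the density of the undiluted sample from a plate count."""
    return colonies / plated_volume_ml * dilution_factor

# Hypothetical example: 42 colonies counted on the 10^4-fold dilution plate,
# with the entire 200-ul (0.2-ml) volume plated.
density = cfu_per_ml(colonies=42, dilution_factor=1e4, plated_volume_ml=0.2)
print(f"{density:.1e} CFU/ml")  # ~2.1e6 CFU/ml
```

Plating the entire sample volume, as done here, sets the detection limit at a single colony per plated volume, i.e., 5 CFU/ml for a 200-μl sample.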
In other experiments, initial inocula of 10^3, 10^4, and 10^5 CFU/ml of Xen 36 were incubated for 12 h with naive synovial fluid or with different samples of synovial fluid from patients given preoperative CFZ (samples SF4.6 to SF4.8); luciferase expression was then measured with a luminometer.

S. aureus adherence to titanium pins. A total of 10^7 CFU/ml of S. aureus AH1710 was added to 1 ml of synovial fluid samples containing preoperative CFZ and incubated at 37°C for 24 h. With naive synovial fluid (SF), TSB, or PBS, this preincubation step was omitted. Three Ti alloy (Ti6Al4V) pins were then added and incubated at 37°C for 24 h. The retrieved pins were imaged by confocal laser scanning microscopy. Pins were also incubated with 10^7 CFU of S. aureus AH1710 in 100 μl of PBS plus 10% TSB or in 100 μl of synovial fluid containing preoperative CFZ for 48 h. After imaging, the pin incubated in synovial fluid containing preoperative CFZ was washed 3× with PBS and incubated in 1 ml of TSB at 37°C for 24 h. After 3 PBS washes, the adherent bacteria were visualized by confocal laser scanning microscopy (CLSM) (Olympus FluoView 300). To determine the number of adherent bacteria, three 2-mm Ti6Al4V pins were incubated under static conditions in 200 μl of synovial fluid containing preoperative CFZ and 10^7 CFU/ml of strain ATCC 25923 for 18 h at 37°C. The pins were removed with sterile forceps, gently washed 3× in PBS, sonicated for 10 min in 0.3% Tween 20-PBS, diluted, plated onto 3M Petrifilm plates, and counted.

Statistical analysis. Because of the limited volumes of the retrieved synovial fluid samples, some samples could only be tested in one set of 2 to 3 replicates. In those cases, multiple independent synovial fluid samples were always examined but not always averaged. Other evaluations were repeated at least in triplicate in ≥3 independent experiments.
In those cases, statistical analyses were performed, and differences between values and groups were tested using a paired t test. Significance was set at a P value of ≤0.05.

RESULTS

Preoperative cefazolin is present in synovial fluid. We collected synovial fluid samples from patients who were undergoing total knee arthroplasties (with preoperative CFZ) or knee aspirations in the office (no antibiotics [naive]). The fluid had a golden color and tended to be either clear (Fig. 1A) or somewhat cloudy, with crystals apparent. Under the microscope, the samples ranged from highly cellular (Fig. 1B) to showing few visible cells (Fig. 1C). Using a disk diffusion assay in which the clear zone (zone of inhibition [ZOI]) is proportional to the CFZ concentration, we tested the antibiotic content of the synovial fluid obtained from subjects who had been given 2 g of CFZ 1 h prior to surgery (synovial fluid with preoperative CFZ) against different strains of staphylococci. As expected, a disk containing naive synovial fluid did not inhibit the growth of S. aureus ATCC 25923 (Fig. 1Di), whereas synovial fluid with 2 g of preoperative CFZ caused a ZOI due to CFZ diffusion from the disk (Fig. 1Dii). The ability of this synovial fluid with preoperative CFZ to cause a ZOI was also observed with S. aureus isolated from a patient with septic arthritis (Fig. 1Diii); with S. aureus AH1710, a genetically engineered isolate that expresses green fluorescent protein from a plasmid under chloramphenicol selection (Fig. 1Div); with S. epidermidis ATCC 35984 (Fig. 1Dv); and with a clinical MRSA isolate (from Jefferson Hospital; Fig. 1Dvi). The zones of inhibition were measured digitally and compared to the ZOI measured from a control disk incubated with each strain (Fig. 1E). Based on these comparisons, we deduced that synovial fluid with preoperative CFZ contained approximately 200 μg/ml CFZ (200× the MIC for S. aureus ATCC 25923 [23]). S.
aureus forms aggregates in synovial fluid. We compared S. aureus growth in TSB to that in synovial fluid (Fig. 2A). The bacteria in TSB grew to a cloudy suspension, whereas in naive synovial fluid (nSF), the sample appeared largely clear, with a large clump floating in the fluid (Fig. 2Ai, red box) and a cloudy precipitate on the bottom of the tube. When the clump was examined more closely (Fig. 2Aii), it appeared to be a large, slimy aggregate of bacteria; similar aggregates were observed with three different synovial fluid samples with 2 g of preoperative CFZ (samples SF6.1, SF6.2, and SF6.3; Fig. 2Aiii), emphasizing the prevalence of aggregate formation. The very dense matrix of the bacterial aggregate in naive synovial fluid was visualized by scanning electron microscopy (Fig. 2Aiv). Using confocal laser scanning microscopy, a three-dimensional reconstruction was compiled of aggregated S. aureus in synovial fluid with 2 g of preoperative CFZ (sample SF4.1); it showed a dense, reticulated structure reminiscent of a biofilm (Fig. 2Av). We thus asked whether aggregates in synovial fluid containing preoperative CFZ express a polysaccharide matrix characteristic of biofilms. When stained for polysaccharide intercellular adhesin (PIA, also called poly-N-acetylglucosamine [PNAG]) with wheat germ agglutinin (WGA), S. aureus in synovial fluid with 2 g of preoperative CFZ (samples SF6.1, SF6.2, and SF6.3) clearly aggregated, and the aggregates showed green fluorescence, indicating the production of biofilm polysaccharide in the aggregated bacteria (Fig. 2Bi). In three different trials in TSB, S. aureus did not aggregate, and staining for PIA was punctate (Fig.
2Bii).

Preoperative cefazolin in synovial fluid does not eradicate a bacterial challenge. Based on our previous studies and our findings in Fig. 2, S. aureus readily clumped in synovial fluid containing preoperative CFZ. We thus asked to what extent this CFZ-containing synovial fluid was able to eradicate a bacterial challenge. When a clinical isolate of S. aureus (from a patient with septic arthritis) was incubated with CFZ-containing synovial fluid, 4 of 5 different synovial fluid samples (SF4.1 to SF4.5) showed no significant decrease in bacterial viability after a 24-h challenge. Only 1 sample, SF4.1, showed a marked decrease (~3 log) from the initial inoculum (Fig. 3A). Using the same five samples of synovial fluid, a visual analysis of S. aureus Xen 36 chemiluminescence revealed that luciferase activity appeared to be completely abolished in TSB containing CFZ. Conversely, luciferase activity persisted in samples SF4.1 to SF4.5, mirroring the survival data in Fig. 3A, with noticeably lower activity in SF4.1 and robust activity in SF4.3 to SF4.5 (Fig. 3B). By direct measurement of luminescence, naive synovial fluid samples challenged with 10^2, 10^3, and 10^4 CFU/ml of S. aureus Xen 36 for 12 h showed a stepwise increase in luminescence. When the samples with preoperative CFZ were challenged with Xen 36, SF4.6 and SF4.7 showed levels of luminescence that were similar to those of the samples with 10^3 to 10^4 CFU in naive synovial fluid, independent of the size of the starting inoculum. Sample SF4.8, which had been dosed with 1 g of preoperative CFZ, showed a stepwise increase in luminescence with increasing inocula (Fig. 3C). In contrast, in TSB (Fig. 3D), luciferase activity was high with all three inocula; after 12 h, TSB containing 20 μg/ml CFZ showed markedly decreased luminescence, at levels ~10× lower than those measured in CFZ-containing synovial fluid. Of note, the CFZ concentrations in the synovial fluid were ~10-fold greater than those in the TSB.

FIG 1 Synovial fluid containing preoperative antibiotics. (A) Representative image of a synovial fluid sample obtained during a total knee arthroplasty that contains preoperative CFZ. (B) Typical microscopic image of synovial fluid displaying high cellularity. (C) Typical microscopic image of synovial fluid displaying low cellularity. (D) Zones of inhibition (ZOI) obtained with synovial fluid containing preoperative antibiotics (2 g of CFZ). In all, at least eight different synovial fluid samples were tested on each strain in triplicate. The images shown are representative of the ZOI obtained across all samples. Shown within the panel are the ZOI associated with synovial fluid from a knee aspiration that does not contain antibiotics (naive SF) (i) and with synovial fluid containing preoperative antibiotics tested against S. aureus ATCC 25923 (ii), an S. aureus clinical isolate (iii), S. aureus AH1710 (iv), S. epidermidis ATCC 35984 (v), and an S. aureus MRSA isolate (Thomas Jefferson University Hospital [TJU2]) (vi). (E) ZOI measured digitally with ImageJ using the synovial fluid samples and strains shown in panel D. Each circle represents the average ZOI from at least three disks/strain. The red diamonds represent the ZOI obtained with 200 μg/ml CFZ/disk.

S. aureus in synovial fluid containing antibiotics readily adheres to metal surfaces.
We reasoned that if viable bacteria were indeed present in synovial fluid containing preoperative CFZ, they should readily adhere to titanium alloy (Ti) pins, thus modeling biofilm initiation in the presence of an implant. After 24 h of incubation, ~10,000 CFU/ml of bacteria were recovered from Ti pins that had been inoculated with 10^7 CFU/ml of S. aureus in the presence of TSB (Fig. 4A). Importantly, no bacterial colonization was detected on Ti pins when S. aureus was incubated in TSB plus CFZ before the addition of the pin. In contrast, between 200 and 2,500 CFU of adherent S. aureus (average ± standard deviation, 1,156 ± 832 CFU) were recovered from Ti pins incubated in S. aureus-inoculated, CFZ-containing synovial fluid. When the pins were examined for the presence of GFP-expressing bacteria, a Ti pin that was not challenged with bacteria showed no fluorescence (Fig. 4Bi) and served as a negative control. Abundant fluorescence was apparent when a pin was incubated with S. aureus in PBS plus 10% TSB, a minimal growth medium (Fig. 4Bii). In synovial fluid containing preoperative CFZ, S. aureus colonization was not initially detectable (Fig. 4Biii) but became apparent when the colonized pin was removed from the synovial fluid and incubated in TSB for an additional 24 h. This additional incubation supported sufficient bacterial growth to be detected visually (Fig. 4Biv).

DISCUSSION

Antibiotic prophylaxis during orthopedic procedures is used to minimize the establishment of infection. In this paper, we show that synovial fluid containing CFZ from a 2-g preoperative prophylactic protocol has attenuated efficacy against S. aureus. Furthermore, we show that Ti pins in synovial fluid that contains CFZ from preoperative prophylaxis can be colonized by added S. aureus in clinically significant numbers. In the presence of metal implants, it is now known that both in vitro (24, 25) and in vivo, as few as 100 CFU of S.
aureus are sufficient to establish infection (26). Accompanying this diminished efficacy of antibiotics in synovial fluid, our in vitro findings show that, in general, up to 90% of the bacteria survive, suggesting that preoperative prophylaxis may be ineffective in the synovial fluid environment. Taken together, these in vitro studies suggest that the current standard for prophylaxis may be inadequate to combat S. aureus contamination.

Antibiotic penetration is thought to determine the effectiveness of prophylaxis (13, 27–29). Because CFZ is bactericidal, reaches high levels within the synovial fluid within 30 to 60 min, and has a broad spectrum of coverage, it is recommended for preoperative prophylaxis in procedures that breach the joint capsule (33). Based on our data, postprophylactic synovial fluid concentrations are ~100 to 200 μg/ml, which is ~400× the MIC for S. aureus (9). These concentrations were determined by testing 23 synovial fluid samples containing preoperative CFZ. Despite these CFZ concentrations, a large number of S. aureus bacteria persist and form clumps in the antibiotic-containing synovial fluid. These clumps are biofilm-like in that they are stained by WGA, which stains the PIA-containing polysaccharide coating of S. aureus biofilms. Our recent work shows that the formation of these clumps depends on a critical proteinaceous component (18). This aggregation causes an accompanying decrease in virulence and a biofilm-like phenotype (18; S. S. Dastgheyb, M. Otto, and N. J. Hickok, unpublished data). In addition to the protein content, the viscosity of synovial fluid (150 ± 50 mPa·s [30]) favors the formation of biofilms, whether anchored or floating. Therefore, while planktonic S. aureus in TSB might be effectively eradicated by 200 μg/ml CFZ, S. aureus readily survived this same concentration of antibiotic in synovial fluid.
This recalcitrance to the effects of CFZ was found in several strains, including a clinical strain of S. aureus, and with inocula ranging from 10^3 to 10^7 CFU/ml. It is worth noting that throughout the study, higher doses of bacteria were used to more readily visualize the aggregation phenomenon that we describe. However, as seen in the data in Fig. 3, the lower levels of bacterial contamination behave similarly to the higher levels.

FIG 2 Bacterial growth and clumping in synovial fluid. (A) i, 5-h culture of S. aureus incubated in TSB or naive SF (nSF) (no antibiotics); these clumps have been described previously by Dastgheyb et al. (18). Note the S. aureus aggregate boxed in red. ii, 10× magnification of the aggregated S. aureus ATCC 25923 shown within the red box. iii, aggregated bacteria after incubation in SF6.1, SF6.2, or SF6.3, each containing 2 g of CFZ from preoperative prophylaxis, stained with ethidium bromide and photographed. Scale bar, 1.5 mm. iv, scanning electron micrograph of an aggregate formed in naive synovial fluid. Scale bar, 50 μm. v, three-dimensional reconstruction of confocal laser scanning micrographs of an S. aureus ATCC 25923 aggregate formed in synovial fluid containing 2 g of preoperative CFZ. Similar images were obtained with other synovial fluid samples containing preoperative CFZ. (B) WGA staining for PIA/PNAG in S. aureus ATCC 25923 incubated in three synovial fluid samples (i) (left to right: SF3.1, SF3.2, and SF3.3) or in three separate cultures of TSB without antibiotics (ii).
We would argue, therefore, that the high inocula used throughout model the more clinically relevant lower levels of bacteria.

Bacterial survival in CFZ-containing synovial fluid is illustrated by the adherence of S. aureus to a titanium alloy pin. While a titanium alloy was used here, the most common alloy in knee hardware is cobalt-chromium-molybdenum; data from others (31, 32) and our unpublished data (S. Dastgheyb, A. E. Villaruz, K. Y. Le, V. Y. Tan, A. C. Duong, S. S. Chatterjee, G. Y. C. Cheung, H.-S. Joo, N. J. Hickok, and M. Otto, submitted for publication) support the adherence of S. aureus to both materials. By direct counting, approximately 10^3 of the 10^4 bacteria inoculated into the synovial fluid adhered to the pin. The recently published work of Dastgheyb et al. (18) established that despite early differences in proliferation rates, the rates of S. aureus proliferation in TSB and synovial fluid at 24 h are approximately equal. Thus, this difference in colonization is unlikely to be due to a simple difference in proliferation. Independent of surface adhesion, the nonadherent bacteria in synovial fluid exhibit a biofilm-like phenotype and demonstrate a biofilm-like recalcitrance to antibiotic treatment. In synovial fluid, it is clear from these studies and our previous work that this aggregation alters antibiotic sensitivity and may affect the numbers of bacteria that colonize an implant. Importantly, while performing these experiments, we relied on multiple synovial fluid samples to validate our results: volumes were limited, and the use of multiple samples strengthened our observations. Interestingly, synovial fluid rapidly re-forms after joint replacement, so that the synovial environment is reestablished along with a new, more susceptible surface for bacterial colonization (i.e., the implant).

FIG 3 Bacterial survival in antibiotic-containing synovial fluid. (A) Survival of S. aureus incubated in synovial fluid from patients given 2 g of preoperative CFZ (samples SF4.1 to SF4.5). The red dashed line represents the initial inoculum. (B) Xen 36 chemiluminescence after 24 h of incubation in TSB (0 or 20 μg/ml CFZ) or in 5 samples of synovial fluid containing 2 g of preoperative antibiotics (samples SF4.1 to SF4.5). (C) Intensity of Xen 36 luminescence after 12 h of incubation of 100, 1,000, or 10,000 CFU in 100-μl samples of naive synovial fluid or of SF4.6, SF4.7, and SF4.8 from patients given 2, 2, and 1 g of preoperative CFZ, respectively. (D) Intensity of Xen 36 luminescence after 12 h of incubation of 100, 1,000, or 10,000 CFU in 100-μl samples of TSB containing 0 or 20 μg/ml CFZ. The numbers of repeats, with the numbers of independent experiments in parentheses, are indicated for each panel.

FIG 4 Bacterial survival and adherence in the presence of antibiotic-containing synovial fluid. (A) CFU recovered from Ti alloy pins incubated for 24 h in TSB or in 8 synovial fluid samples containing 2 g of preoperative CFZ, each inoculated with 10^7 CFU/ml of S. aureus. All bacterial recoveries were performed independently three times in three independent experiments. (B) All experiments used three independent pins per experiment, and each experiment was repeated three times. i, control Ti alloy pin that was not exposed to S. aureus. ii, adherence of S. aureus (green fluorescence) to a Ti alloy pin in PBS plus 10% TSB. iii, adherence of S. aureus to a Ti alloy pin in synovial fluid containing preoperative antibiotics. iv, the pin in panel iii after removal from synovial fluid and 24 h of incubation in TSB; the green fluorescence is due to bacterial growth during this time frame. Scale bar, 500 μm.
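Survival outcomes like those above are conveniently summarized as log reductions relative to the starting inoculum. A minimal Python sketch; the counts are hypothetical and only echo the qualitative contrast reported here (near-complete survival in CFZ-containing synovial fluid versus a multi-log drop in sample SF4.1):

```python
import math

def log_reduction(inoculum_cfu: float, survivors_cfu: float) -> float:
    """Log10 kill relative to the initial inoculum (3-log = 99.9% kill)."""
    return math.log10(inoculum_cfu / survivors_cfu)

# Hypothetical counts: ~90% survival (typical of CFZ-containing synovial
# fluid in this study) vs. a 3-log drop (comparable to sample SF4.1).
print(round(log_reduction(1e6, 9e5), 2))  # 0.05
print(round(log_reduction(1e6, 1e3), 2))  # 3.0
```

A 3-log reduction corresponds to 99.9% killing, while a reduction of only ~0.05 log corresponds to roughly 90% of the challenge surviving.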
It is important to realize that when infection has been established, an important part of the treatment is extensive lavage and removal of any fibrin clots; based on the data in this paper, this removal becomes critical, as the clot is an important structure in the maintenance of indolent bacteria. In an arthroplasty setting, preoperative antibiotics are accompanied by the removal of synovial fluid during the arthrotomy, followed by blood clot formation, and an additional one to two doses of antibiotic are given postoperatively. These additional procedures and antibiotic treatments may be sufficient to sterilize the knee environment, but this scenario needs to be explicitly tested in light of our findings. Importantly, the results we report may be especially pertinent to clinical situations in which synovial fluid is aspirated without performance of a joint replacement. A single preoperative dose of antibiotic is typically recommended in such cases, and the data here suggest that this is inadequate. Further exploration of clinical synovial fluid samples is warranted.

ACKNOWLEDGMENTS

We thank Alexander Horswill for generously supplying the fluorescent S. aureus strain AH1710.

This study was supported by NIH grants HD06153 and DE019901 to N.J.H., by the Intramural Research Program of the National Institute of Allergy and Infectious Diseases (NIAID), National Institutes of Health (NIH) (grant ZIA AI000904-13 to M.O.), and by an NIH training grant (T32-AR-052273 to I.M.S.).

We declare no conflicts of interest.

REFERENCES

1. Pulido L, Ghanem E, Joshi A, Purtill JJ, Parvizi J. 2008. Periprosthetic joint infection: the incidence, timing, and predisposing factors. Clin Orthop Relat Res 466:1710–1715. http://dx.doi.org/10.1007/s11999-008-0209-4.
2. Kurtz SM, Lau E, Schmier J, Ong KL, Zhao K, Parvizi J. 2008. Infection burden for hip and knee arthroplasty in the United States. J Arthroplasty 23:984–991. http://dx.doi.org/10.1016/j.arth.2007.10.017.
3.
Kurtz SM, Lau E, Watson H, Schmier JK, Parvizi J. 2012. Economic burden of periprosthetic joint infection in the United States. J Arthroplasty 27:61–65.e1. http://dx.doi.org/10.1016/j.arth.2012.02.022.
4. Borens O, Corvec S, Trampuz A. 2012. Diagnosis of periprosthetic joint infections. Hip Int 22(Suppl 8):S9–S14. http://dx.doi.org/10.5301/HIP.2012.9565.
5. World Health Organization. 2009. WHO guidelines for safe surgery 2009: safe surgery saves lives. World Health Organization, Geneva, Switzerland. http://whqlibdoc.who.int/publications/2009/9789241598552_eng.pdf.
6. Block JE, Stubbs HA. 2005. Reducing the risk of deep wound infection in primary joint arthroplasty with antibiotic bone cement. Orthopedics 28:1334–1345. http://dx.doi.org/10.3928/0147-7447-20051101-13.
7. Darouiche RO. 2003. Antimicrobial approaches for preventing infections associated with surgical implants. Clin Infect Dis 36:1284–1289. http://dx.doi.org/10.1086/374842.
8. Schurman DJ, Hirshman HP, Kajiyama G, Moser K, Burton DS. 1978. Cefazolin concentrations in bone and synovial fluid. J Bone Joint Surg Am 60:359–362.
9. McAllister TA, Tait SC, Perman MG, Park AC. 1976. Clinical bacteriology of cephazolin. Scott Med J 20:220–227.
10. de Lalla F. 2001. Antibiotic prophylaxis in orthopedic prosthetic surgery. J Chemother 13(Suppl 2):48–53. http://dx.doi.org/10.1179/joc.2001.13.Supplement-2.48.
11. Somekh E, Golan T, Tanay A, Poch F, Dan M. 1999. Concentration and bactericidal activity of fusidic acid and cloxacillin in serum and synovial fluid. J Antimicrob Chemother 43:593–596. http://dx.doi.org/10.1093/jac/43.4.593.
12. Somekh E, Heifetz L, Dan M, Poch F, Hafeli H, Tanai A. 1996. Penetration and bactericidal activity of cefixime in synovial fluid. Antimicrob Agents Chemother 40:1198–1200.
13. Ritter MA, Barzilauskas CD, Faris PM, Keating EM. 1989. Vancomycin prophylaxis and elective total joint arthroplasty. Orthopedics 12:1333–1336.
14.
Gruber BF, Miller BS, Onnen J, Welling R, Wojtys EM. 2008. Antibacterial properties of synovial fluid in the knee. J Knee Surg 21:180–185. http://dx.doi.org/10.1055/s-0030-1247816.
15. Inman RD, Chiu B. 1996. Comparative microbicidal activity of synovial fluid on arthritogenic organisms. Clin Exp Immunol 104:80–85. http://dx.doi.org/10.1046/j.1365-2249.1996.d01-650.x.
16. Pruzanski W, Leers WD, Wardlaw AC. 1974. Bacteriolytic and bactericidal activity of sera and synovial fluids in rheumatoid arthritis and in osteoarthritis. Arthritis Rheum 17:207–218. http://dx.doi.org/10.1002/art.1780170303.
17. Pirnazar P, Wolinsky L, Nachnani S, Haake S, Pilloni A, Bernard GW. 1999. Bacteriostatic effects of hyaluronic acid. J Periodontol 70:370–374. http://dx.doi.org/10.1902/jop.1999.70.4.370.
18. Dastgheyb S, Parvizi J, Shapiro IM, Hickok NJ, Otto M. Effect of biofilms on recalcitrance of staphylococcal joint infection to antibiotic treatment. J Infect Dis, in press.
19. Ackerman D, Lett P, Galat DD, Jr, Parvizi J, Stuart MJ. 2008. Results of total hip and total knee arthroplasties in patients with synovial chondromatosis. J Arthroplasty 23:395–400. http://dx.doi.org/10.1016/j.arth.2007.06.014.
20. Jacovides CL, Kreft R, Adeli B, Hozack B, Ehrlich GD, Parvizi J. 2012. Successful identification of pathogens by polymerase chain reaction (PCR)-based electron spray ionization time-of-flight mass spectrometry (ESI-TOF-MS) in culture-negative periprosthetic joint infection. J Bone Joint Surg Am 94:2247–2254. http://dx.doi.org/10.2106/JBJS.L.00210.
21. Malone CL, Boles BR, Lauderdale KJ, Thoendel M, Kavanaugh JS, Horswill AR. 2009. Fluorescent reporters for Staphylococcus aureus. J Microbiol Methods 77:251–260. http://dx.doi.org/10.1016/j.mimet.2009.02.011.
22. Jefferson KK, Cerca N. 2006. Bacterial-bacterial cell interactions in biofilms: detection of polysaccharide intercellular adhesins by blotting and confocal microscopy. Methods Mol Biol 341:119–126.
http://dx.doi.org/10.1385/1-59745-113-4:119.
23. Dhawan VK, Maier MK, Nayyar M, Chuah SK, Thadepalli H. 1984. In vitro activity of cefmenoxime, cefotaxime, latamoxef, cefazolin, nafcillin and vancomycin against 53 endocarditis and bacteremic strains of Staphylococcus aureus. Chemotherapy 30:328–330. http://dx.doi.org/10.1159/000238288.
24. Antoci V, Jr, King SB, Jose B, Parvizi J, Zeiger AR, Wickstrom E, Freeman TA, Composto RJ, Ducheyne P, Shapiro IM, Hickok NJ, Adams CS. 2007. Vancomycin covalently bonded to titanium alloy prevents bacterial colonization. J Orthop Res 25:858–866. http://dx.doi.org/10.1002/jor.20348.
25. Costerton JW. 2005. Biofilm theory can guide the treatment of device-related orthopaedic infections. Clin Orthop Relat Res 437:7–11. http://dx.doi.org/10.1097/00003086-200508000-00003.
26. Li D, Gromov K, Soballe K, Puzas JE, O'Keefe RJ, Awad H, Drissi H, Schwarz EM. 2008. Quantitative mouse model of implant-associated osteomyelitis and the kinetics of microbial growth, osteolysis, and humoral immunity. J Orthop Res 26:96–105. http://dx.doi.org/10.1002/jor.20452.
27. Mini E, Grassi F, Cherubino P, Nobili S, Periti P. 2001. Preliminary results of a survey of the use of antimicrobial agents as prophylaxis in orthopedic surgery. J Chemother 13(Suppl 2):73–79. http://dx.doi.org/10.1179/joc.2001.13.Supplement-2.73.
28. Norden CW. 1991. Antibiotic prophylaxis in orthopedic surgery. Rev Infect Dis 13(Suppl 10):S842–S846. http://dx.doi.org/10.1093/clinids/13.Supplement_10.S842.
29. Yamada K, Matsumoto K, Tokimura F, Okazaki H, Tanaka S. 2011. Are bone and serum cefazolin concentrations adequate for antimicrobial prophylaxis? Clin Orthop Relat Res 469:3486–3494. http://dx.doi.org/10.1007/s11999-011-2111-8.
30. Jebens EH, Monk-Jones ME. 1959. On the viscosity and pH of synovial fluid and the pH of blood. J Bone Joint Surg Br 41-B:388–400.
31. Berry CW, Moore TJ, Safar JA, Henry CA, Wagner MJ. 1992. Antibacterial activity of dental implant metals. Implant Dent 1:59–65. http://dx.doi.org/10.1097/00008505-199200110-00006.
32. Gómez-Barrena E, Esteban J, Medel F, Molina-Manso D, Ortiz-Perez A, Cordero-Ampuero J, Puertolas JA. 2012.
Bacterial adherence to sepa- rated modular components in joint prosthesis: a clinical study. J Orthop Res 30:1634 –1639. http://dx.doi.org/10.1002/jor.22114. 33. American Academy of Orthopaedic Surgeons. 2004. Recommendations for the use of intravenous antibiotic prophylaxis in primary total joint arthroplasty. AAOS information statement, information statement 1027. American Academy of Orthopaedic Surgeons, Rosemont, IL. Dastgheyb et al. 2128 aac.asm.org April 2015 Volume 59 Number 4Antimicrobial Agents and Chemotherapy o n A p ril 5 , 2 0 2 1 a t C A R N E G IE M E L L O N U N IV L IB R h ttp ://a a c.a sm .o rg / D o w n lo a d e d fro m http://dx.doi.org/10.1002/jor.20452 http://dx.doi.org/10.1002/jor.20452 http://dx.doi.org/10.1179/joc.2001.13.Supplement-2.73 http://dx.doi.org/10.1179/joc.2001.13.Supplement-2.73 http://dx.doi.org/10.1093/clinids/13.Supplement_10.S842 http://dx.doi.org/10.1093/clinids/13.Supplement_10.S842 http://dx.doi.org/10.1007/s11999-011-2111-8 http://dx.doi.org/10.1007/s11999-011-2111-8 http://dx.doi.org/10.1097/00008505-199200110-00006 http://dx.doi.org/10.1097/00008505-199200110-00006 http://dx.doi.org/10.1002/jor.22114 http://aac.asm.org http://aac.asm.org/ Staphylococcal Persistence Due to Biofilm Formation in Synovial Fluid Containing Prophylactic Cefazolin MATERIALS AND METHODS Materials. Bacterial strains. Qualitative synovial fluid microscopy. Disk diffusion assays. Monitoring aggregate formation. Scanning electron microscopy. Wheat germ agglutinin staining. Direct counting of S. aureus survival. Monitoring S. aureus survival using Xen 36. S. aureus adherence to titanium pins. Statistical analysis. RESULTS Preoperative cefazolin is present in synovial fluid. S. aureus forms aggregates in synovial fluid. Preoperative cefazolin in synovial fluid does not eradicate a bacterial challenge. S. aureus in synovial fluid containing antibiotics readily adheres to metal surfaces. 
American Journal of Dermatology and Venereology 2014, 3(5): 87-94
DOI: 10.5923/j.ajdv.20140305.03

Treatment of Periorbital Hyperpigmentation Using Platelet-Rich Plasma Injections

Salah Hashim Al-Shami
Dermatologist and Venereologist, Laser Care Center, Amman, Jordan

Abstract

Background: The term platelet-rich plasma (PRP) is a generic term used to describe a plasma suspension obtained from whole blood, prepared so as to contain platelet concentrations higher than those normally found in circulating blood. The mechanism of action of PRP rests on the fact that platelets contain many growth factors, which have a well-known role in the process of tissue repair. Periorbital hyperpigmentation refers to conditions that present with relative darkness of the periorbital eyelids. Its causes are multifactorial and include sun exposure, smoking, alcoholism and sleep deprivation. Various treatments have been proposed for periorbital hyperpigmentation, but without clear-cut results. This study was designed to evaluate the effectiveness of platelet-rich plasma in the treatment of periorbital hyperpigmentation in Jordanian patients.

Patients and Methods: This open therapeutic trial was performed at the Laser Care Clinic in Amman, Jordan, during the period from July 1st, 2013 to July 1st, 2014. Fifty patients (46 females and 4 males) with periorbital hyperpigmentation were included. The treatment course consisted of three sessions of autologous platelet-rich plasma injections into the periorbital area as well as the whole face, given one month apart. Therapeutic outcomes were assessed by standardized digital photography.
Results: The final results after 6 months were as follows: two patients (4%) showed excellent improvement, six patients (12%) significant improvement, twenty-three patients (46%) moderate improvement, and nineteen patients (38%) mild improvement in the appearance of the dark circles.

Conclusions: PRP is a useful treatment for periorbital hyperpigmentation, and further studies comparing it with other treatment modalities are highly recommended.

Keywords: Platelet-rich plasma, Periorbital hyperpigmentation, Facial rejuvenation

1. Introduction

The term platelet-rich plasma (PRP) is a generic term used to describe a plasma suspension obtained from whole blood, prepared so as to contain platelet concentrations higher than those normally found in circulating blood [1]. Platelets are anucleated fragments of megakaryocytes and are produced in the bone marrow. They contain granules consisting of various substances, which are released upon activation of the platelets [2]. Among the main substances released from these granules are growth factors (GF) (Table 1) [3], cytokines, adhesion molecules, integrins, and coagulation proteins [4].

The mechanism of action of PRP is based on the fact that platelets contain many growth factors in their alpha granules. These factors have a well-known role in the process of tissue repair. Thus, concentrating these substances in injured tissue could accelerate the regeneration process [5].

* Corresponding author: salsah71@yahoo.com (Salah Hashim Al-Shami)
Published online at http://journal.sapub.org/ajdv
Copyright © 2014 Scientific & Academic Publishing. All Rights Reserved

It is important to remember that the skin houses different types of stem cells (including cells present in the hair follicle bulge) and cells of mesenchymal origin, all dispersed in the dermis. In theory, growth factors released by platelets could act on these cells, promoting differentiation and proliferation.
This information should be taken into account when planning to achieve a specific goal with the use of PRP in the skin [6].

Degranulation of the pre-packaged GFs in platelets occurs upon "activation", i.e., on coming into contact with coagulation triggers. The secreted GFs in turn bind to their respective transmembrane receptors expressed on adult mesenchymal stem cells, osteoblasts, fibroblasts, endothelial cells, and epidermal cells [7]. This further induces an internal signal-transduction pathway, unlocking the expression of a normal gene sequence of the cell, such as cellular proliferation, matrix formation, osteoid production and collagen synthesis, thereby augmenting the natural wound-healing process [8]. The mitogenic effects of PRP are limited to augmentation of the normal healing process, and PRP is theoretically not mutagenic, as the GFs released do not enter the cell or its nucleus but only bind to membrane receptors and induce signal-transduction mechanisms [9].

There are a number of protocols described for obtaining PRP; however, all in general include a centrifugation process through which the different components of whole blood are separated according to their different densities. The protocols differ in the time, speed, and number of centrifugations to which the whole blood is subjected. The volume of the initial blood sample and the types of collection tubes and anticoagulants used also differ between methods [10].

The use of platelet-rich plasma in medicine has become increasingly widespread during the last decade. Most studies on the subject are carried out in areas such as orthopedics, sports medicine, and odontology. Only recently have articles relating to the dermatologic field begun to be published, where PRP has been used in various dermatologic disorders [11].
Some applications of PRP in dermatology include (but are not limited to) alopecia, wound healing, ablative fractional laser resurfacing, skin rejuvenation, and striae distensae [12-21].

Periorbital hyperpigmentation (also called peripalpebral hyperpigmentation, dark eyelids, dark eye circles, or dark circles) refers to conditions that present with relative darkness of the periorbital eyelids. It makes people look tired or older, which negatively affects their quality of life. It can be a significant cosmetic problem, and many individuals seek treatment for this condition [22]. It has a higher prevalence in individuals with darker skin, hair and eyes, and affects all age groups and both genders equally. Nevertheless, complaints are more frequent from women, especially older women, in whom the skin laxity and tear trough associated with aging produce more shadowing and darker circles [23].

Periorbital hyperpigmentation seems to have multifactorial causes that involve intrinsic factors (determined by the individual's genetics) and extrinsic factors (sun exposure, smoking, alcoholism and sleep deprivation, for instance). However, the presence of melanic pigment and hemosiderotic pigment in the affected sites is a distinctive feature of its etiopathogenesis [24]. Melanic hyperpigmentation is more frequent in brunet adults, as a consequence of excessive and cumulative exposure to the sun, which increases the production of melanin, reduces the skin's thickness and increases the dilatation of blood vessels. Hemosiderotic deposition, in contrast, is mainly found in people belonging to certain ethnic groups such as Arabs, Turks, Hindus, inhabitants of the Iberian Peninsula and their respective descendants. In these ethnicities, its manifestation tends to take place earlier, often during childhood. In those individuals there is no change in the color of the skin; the eyelid appears darkened because the dilated vessels are visible through the transparency of the skin.
In those cases, therefore, the problem is often aggravated when the lower eyelid's vessels become more dilated (e.g., from fatigue, insomnia, oral breathing or crying), causing dermal blood extravasation [24].

Other causes noted as being responsible for the appearance of dark eye circles are post-inflammatory hyperpigmentation secondary to atopic and contact dermatitis, oral breathing, use of certain medications (contraceptives, chemotherapy, antipsychotics and some types of eye drops), palpebral sagging (due to aging), and disorders that present with fluid retention and palpebral edema (thyroid disorders, nephropathies, cardiopathies and pneumopathies), all of which worsen the unattractive appearance of dark eye circles [25].

Various treatments have been proposed for periorbital hyperpigmentation; however, there are few studies on their long-term efficacy. The main types of treatment are topical application of depigmenting products, chemical peels, dermabrasion, cryosurgery, fillers with hyaluronic acid, intense pulsed light, and CO2, argon, ruby and excimer lasers [26-30].

This study was designed to evaluate the effectiveness of platelet-rich plasma in the treatment of periorbital hyperpigmentation in Jordanian patients.

2. Patients & Methods

This is an open therapeutic trial study performed at the Laser Care Clinic in Amman, Jordan, during the period from July 1st, 2013 to July 1st, 2014. Fifty patients (46 females and 4 males) with periorbital hyperpigmentation were included in this study. Their ages ranged from 19 to 60 years, with a mean ± SD of 41.5 ± 9.63 years and a median of 42 years. Their skin types were III-V (Fitzpatrick skin types). The study was approved by the ethics committee of Al-Dholail Hospital. Written informed consent was obtained from each patient.
The treatment course consisted of three sessions of autologous platelet-rich plasma injections into the periorbital area as well as the whole face, given one month apart, and patients were followed up for an additional three months to assess the level of improvement and any recurrence of the dark circles.

Exclusion criteria included known platelet dysfunction syndrome, critical thrombocytopenia (<50,000/ul), any hemodynamic instability, and chronic medical illness (e.g. diabetes, chronic infections, and blood dyscrasias). Patients with local inflammatory skin disorders or active herpes infection at the site of the procedure were excluded from the study. Also excluded were patients with consistent use of anticoagulants or non-steroidal anti-inflammatory drugs (NSAIDs) within 48 hours of the procedure, corticosteroid injection at the treatment site within 1 month, systemic use of corticosteroids within 2 weeks, recent fever or illness, cancer (especially hematopoietic or of bone), or a hemoglobin level <10 g/dl.

The procedure was fully explained and thoroughly discussed with the patients, covering the mechanism of platelet-rich plasma treatment, the time required, the behavior expected after treatment, and the prospects of successful treatment; any unrealistic expectations of the end results were strongly discouraged. The patients were informed about all risks that might be caused by the procedure and about pre- and post-operative care.

Prior to each treatment, the face was cleansed with a mild non-abrasive detergent and gauzes soaked in 70% isopropyl alcohol. A topical anesthetic cream (EMLA, a eutectic mixture of the local anesthetics 2.5% lidocaine and 2.5% prilocaine; AstraZeneca LP, Wilmington, DE) was applied under an occlusive dressing for 1 hour and subsequently washed off to obtain a completely dry skin surface.
Topical antibiotics and a moisturizing cream were prescribed after each session, and the patients were instructed to apply a sunscreen throughout the treatment course. Three photos were taken before treatment for each patient, of both sides and the front of the face, with a digital camera (Canon EOS 500D Rebel T1i, 15.1 megapixels), and another set of photos was taken at each post-treatment visit using identical camera settings, lighting, and patient positioning.

Fifteen to twenty milliliters (ml) of blood were withdrawn from the patient using a butterfly cannula and a vacuum tube containing sodium citrate 3.2% as anticoagulant. Centrifugation separates blood components owing to their different specific gravities: RBCs are the heaviest, followed by WBCs, whereas platelets are the lightest. The first centrifugation is slow, to avoid spinning down the platelets and to isolate the plasma; it was run at 1600 rpm for 10 minutes. Platelets are mostly concentrated right on top of the buffy coat layer. The supernatant plasma was withdrawn and re-centrifuged. The second centrifugation is faster, at 4000 rpm for 10 minutes, so that the platelets are spun down and separate as a pellet at the bottom of the tube from the platelet-poor plasma (PPP) above. The final platelet concentration depends on the volume reduction of the PPP: approximately 3/4 of the supernatant is discarded and the platelet-rich pellet is re-suspended in the remaining plasma. The resultant plasma was aspirated, 0.1 ml of calcium chloride was added for each 1 ml of plasma to activate the platelets, and the preparation was then injected into the selected area using a 32-G needle for superficial microinjections via the mesotherapy technique; the injections were administered into the papillary dermis (1.5-2.0 mm deep).
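To make the volume arithmetic of this two-spin protocol concrete, the sketch below runs through the numbers. It is only an illustration: the plasma fraction, the baseline platelet count and the assumption of near-complete platelet recovery in the pellet are assumptions for the sketch, not measured values from this study.

```python
# Illustrative arithmetic for the two-spin PRP protocol described above.
# All numeric inputs are assumptions for the sketch, not study measurements.

blood_ml = 20.0                 # "fifteen to twenty milliliters" of whole blood
plasma_fraction = 0.55          # assumed plasma fraction of whole blood
plasma_ml = blood_ml * plasma_fraction         # plasma isolated by the slow spin

discard_fraction = 0.75         # "approximately 3/4 of the supernatant is discarded"
final_ml = plasma_ml * (1 - discard_fraction)  # pellet resuspended in remaining plasma

baseline_per_ul = 250_000       # assumed whole-blood platelet count (platelets/ul)
# Assuming near-complete platelet capture in the pellet:
total_platelets = baseline_per_ul * blood_ml * 1_000
prp_per_ul = total_platelets / (final_ml * 1_000)
factor = prp_per_ul / baseline_per_ul          # reduces to blood_ml / final_ml here

cacl2_ml = 0.1 * final_ml       # 0.1 ml calcium chloride per 1 ml of plasma
print(f"PRP volume ~{final_ml:.2f} ml, ~{factor:.1f}x baseline, "
      f"activated with {cacl2_ml:.2f} ml CaCl2")
```

Under these assumptions, a 20 ml draw yields roughly 2.75 ml of PRP at about seven times the baseline platelet concentration, i.e. several-fold above circulating levels, which is the defining property of PRP noted in the Introduction.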
Therapeutic outcomes were assessed on standardized digital photographs by two blinded dermatologists using the following five-point scale:

0 = no change
1 = slight improvement (0-25%)
2 = moderate improvement (26-50%)
3 = significant improvement (51-75%)
4 = excellent improvement (>75%)

The patients were asked to evaluate their own level of satisfaction by giving themselves a score from 0 to 4 points, translated using the following scale:

0 = no change
1 = satisfied
2 = pleased
3 = very pleased
4 = excellent

In addition, the participants were asked to report any cutaneous or systemic side effects associated with treatment. In particular, a pain scale of 0-3 was used to determine the level of discomfort during the procedures, as follows:

0 = no pain
1 = mild pain
2 = moderate pain
3 = severe pain

Statistical data were analyzed with the chi-square test using Minitab v.16 software, and a P value < 0.05 was considered statistically significant; descriptive data are presented as frequencies, percentages, figures and tables.

3. Results

Fifty patients (46 females and 4 males) were included in the study. All patients completed the study, including the three monthly sessions of PRP injections and the 3-month follow-up period.

Table 1. Growth factor biological activity

TGF (transforming growth factor) α & β: control of proliferation and differentiation of many cell types.
PDGF (platelet-derived growth factor) α, β, C, D: potent mitogen for connective tissue cells, inhibitor of apoptosis; increases the motility of mesenchymal cells, fibroblasts, endothelial cells, and neurons.
May be involved in physiological processes and in diseases such as cancer and atherosclerosis.
IGF-I (insulin-like growth factor I): promotes the mediation of the various effects of growth hormone.
FGF-I (fibroblast growth factor I): induction of fibroblast proliferation and angiogenesis.
EGF (epidermal growth factor): induces the differentiation of cells and mitosis of cells of ecto- and mesodermal origin.
VEGF A, B & C (vascular endothelial growth factor): induces angiogenesis through the induction of mitosis in endothelial cells, and promotes alterations in vascular physiology and permeability.

Table 2. The level of improvement assessed by dermatologists

Month   Score 0    Score 1    Score 2    Score 3    Score 4    P value
1st     31 (62%)   15 (30%)   4 (8%)     --         --         0.0001
2nd     10 (20%)   28 (56%)   8 (16%)    4 (8%)     --
3rd     --         16 (32%)   25 (50%)   6 (12%)    3 (6%)
6th     --         19 (38%)   23 (46%)   6 (12%)    2 (4%)
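The reported association between treatment month and improvement score can be cross-checked directly from the Table 2 counts with a chi-square test of independence. The sketch below uses scipy rather than the Minitab software used in the study; it reproduces the quoted statistic of 101.563, although for the full 4 x 5 table the degrees of freedom work out to 12 rather than the quoted 9.

```python
# Chi-square test of independence on the Table 2 counts
# (rows: months 1, 2, 3, 6; columns: improvement scores 0-4).
from scipy.stats import chi2_contingency

table2 = [
    [31, 15,  4, 0, 0],  # 1st month
    [10, 28,  8, 4, 0],  # 2nd month
    [ 0, 16, 25, 6, 3],  # 3rd month
    [ 0, 19, 23, 6, 2],  # 6th month
]
chi2, p, dof, expected = chi2_contingency(table2)
print(f"chi2 = {chi2:.3f}, df = {dof}, p = {p:.2e}")
# chi2 comes out at ~101.563 with df = 12; p is far below 0.05,
# consistent with the conclusion that score and month are not independent.
```

Note that several score-3/score-4 cells have expected counts below 5, so a version that groups the sparse columns would be statistically more conservative; the conclusion does not change either way.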
During the follow-up period, the results were comparable to that after 3 months of treatment where the majority of patients still showing moderate improvement (46%) but a good percentage of patients continue to show significant to excellent improvement (12% and 4% respectively). This change gives us a good indicator of the end stage result of the treatment. The improvement scale was so obvious from first month (30% mild to 8% moderate) through 2nd– and 3rd –month to become more satisfactory (38% mild to 46% moderate) after six months of treatment. The final results after 6 months were as follows: two patients (4%) reported excellent improvement, six patients (12%) significant improvement, twenty three patients (46%) moderate improvement, and nineteen patients (38%) mild improvement in the appearance of the dark circles. The results were significant as indicated by the P value which was 0.0001. The patients' self-assessment of level of satisfaction was also remarkable and showed a progressive amelioration (Table 3). The results were as follows: three patients (6%) reported excellent response, eight patients (16%) were very pleased, twenty five patients (50%) showed pleased response, and fourteen patients (28%) said that they were satisfied with the appearance of the dark circles. The results were almost comparable to the dermatologists' assessment and were considered significant as indicated by the P value which was 0.0001. Table (2) shows that the two factors (scores and weeks) are not independent, in another words there is a relationship between two factors (chi-square value 101.563 df= 9 P= 0.0001). Similar conclusion can be reported for Table (3) (chi-square value 71.679 df= 9 P= 0.0001) The PRP injections were generally well tolerated. Some participants experienced treatment-related pain, but there was no need for extra anesthesia (Table 4). Table 4. 
Patients' pain scale during treatment

Degree of pain    Number of patients
0                 32 (64%)
1                 15 (30%)
2                 3 (6%)
3                 --

No serious complications were reported in any of our patients, apart from mild bruising at some injection sites, which faded spontaneously within a few days.

4. Discussion

The application of PRP has been documented in many fields. First promoted by M. Ferrari in 1987 [31] as an autologous transfusion component after an open heart operation, to avoid homologous blood product transfusion, there are now over 5200 entries for PRP in the National Center for Biotechnology Information (NCBI) database, in fields ranging from orthopedics, sports medicine, dentistry, otolaryngology, neurosurgery, ophthalmology, urology, wound healing, dermatology and cosmetics to cardiothoracic and maxillofacial surgery [32].

The initial popularity of PRP grew from its promise as a safe and natural alternative to surgery. PRP advocates promoted the procedure as an organically based therapy that enabled healing through the use of one's own natural growth factors. In recent years, scientific research and technology have provided a new perspective on platelets. Studies suggest that platelets contain an abundance of growth factors and cytokines that can affect inflammation, postoperative blood loss, infection [33], osteogenesis, and wound, muscle tear and soft tissue healing. Research now shows that platelets also release many bioactive proteins responsible for attracting macrophages, mesenchymal stem cells and osteoblasts, which not only promote the removal of degenerated and necrotic tissue but also enhance tissue regeneration and healing [34].

More recently, PRP has become an emerging area of interest in aesthetic medicine. PRP has been reported to augment dermal elasticity by stimulating the removal of photodamaged extracellular matrix (ECM) components and inducing the synthesis of new collagen by dermal fibroblasts via various molecular mechanisms [35].
Although periorbital hyperpigmentation is a mere color difference between the palpebral skin and the remaining facial skin, dark circles can be a significant cosmetic concern for patients. Although the condition does not cause morbidity, it can influence quality of life from the medical point of view [36].

The two most important factors in the pathogenesis of periorbital dark circles are the deposition of melanic pigment and of hemosiderotic pigment in the affected sites. Melanic pigment deposition occurs as a consequence of excessive and cumulative exposure to the sun, which increases the production of melanin, while hemosiderotic deposition occurs when intense vascularization in the palpebral region causes dermal blood extravasation. Ferric ions are liberated locally, leading to the formation of free radicals that also stimulate the melanocytes, which generates melanic pigmentation [37].

In this open therapeutic trial, patients received one session of platelet-rich plasma injections per month for three months. The final outcome was evaluated six months after the first session by two blinded dermatologists using standardized digital photography and by the patients' self-assessment. The results were very satisfactory: according to the dermatologists' evaluation, about 60% of patients showed moderate to significant improvement. The patients' self-assessment of satisfaction was also high, since more than 65% of patients said they were pleased or very pleased with the end results; in fact, the patients reported more favorable responses than the dermatologists did. This might be attributed to the fact that the patients had long been frustrated with their appearance and with the long list of treatments used over many months without any acceptable results.
The noticeable change in their appearance and the improvement of the dark circles led the patients to rate the results more hopefully and optimistically than the dermatologists did. This outcome, which gives the patient a strong feeling of being well treated, is the most important target of any kind of treatment. These results give an overall indication of the effectiveness of PRP injections in the treatment of dark eye circles. The results of both assessment groups (the patients and the dermatologists) are significant, as indicated by the P values of 0.0001.

The most important contents of platelets are held in the α-granules, which contain more than 30 bioactive substances [38]. These include platelet-derived growth factor (PDGF), transforming growth factor (TGF)-β1 and -β2, epidermal growth factor (EGF), and mitogenic growth factors such as platelet-derived angiogenesis factor and fibrinogen. To our knowledge, only TGF-β1 and EGF have been investigated in relation to melanogenesis. Kim et al. investigated the effects of TGF-β1 on melanogenesis using a spontaneously immortalized mouse melanocyte cell line and asserted that TGF-β1 significantly inhibits melanin synthesis in a concentration-dependent manner; they reported that TGF-β1 decreases melanogenesis via delayed extracellular signal-regulated kinase activation [39]. Yun et al., meanwhile, studied the effect of EGF on melanogenesis using mouse-derived immortalized melanocytes in a laser-treated keratinocyte-conditioned culture medium. They proposed that treatment with EGF lowers melanin production in melanocytes by inhibiting prostaglandin-E2 (PGE2) expression and tyrosinase enzyme activity, and suggested that EGF could be used in cosmetics to whiten the skin and to prevent post-inflammatory hyperpigmentation [40].
Melasma was also reported to be successfully treated with PRP in a single Turkish study [41]. The researchers reported improvement of more than 80% of melasma after three sessions of PRP alone. This result was better than ours, presumably because periorbital hyperpigmentation has a more complicated, multifactorial mechanism than melasma and is more treatment-resistant. Despite the difference in results, both studies suggest that PRP may be a good option for the treatment of facial hyperpigmentation.

The pigmentary improvement that occurs with PRP treatment may also be associated with an increase in skin volume. PDGF has a significant role in blood vessel formation and in the synthesis of collagen and components of the extracellular matrix, including hyaluronic acid. Hyaluronic acid has been shown to increase skin tone and volume, resulting in a more 'glowing' skin [42]. Monthly intradermal injections of PRP in three sessions have shown satisfactory results in face and neck rejuvenation and scar attenuation [43]. In a split-face blinded trial, Kang et al. reported that PRP injections given monthly for 3 months produced very good results for infraorbital rejuvenation as well, without any obvious side effects [44]. Shin et al. showed that a combination of fractional non-ablative (erbium glass) laser therapy with topical application of PRP resulted in objective improvement in skin elasticity, a lower erythema index and an increase in collagen density; histological examination showed an increase in the length of the dermo-epidermal junction and in the amount of collagen and fibroblasts in the treated skin [45].

Being an autologous preparation, PRP is devoid of serious adverse effects, apart from local injection-site reactions such as pain or secondary infection, which can be avoided with proper precautions. PRP carries no risk of transmission of infections such as hepatitis B, hepatitis C or HIV [46].
The role of PRP injections in the treatment of periorbital hyperpigmentation has not been fully explored: studies investigating their effect are lacking or very limited, and comparative studies contrasting PRP with other treatments, such as topical cosmeceutical preparations of growth factors, are deficient. This might be attributed to the fact that PRP is a relatively new technology, introduced only recently into the treatment of dermatological and cosmetic problems [47], and extensive further studies are highly recommended to confirm the results and endorse this maneuver as a successful way of alleviating the distress that some patients suffer.

In procedures aiming at aesthetic improvement, patient perception of the treatment outcome appears to be most important, because it has a direct impact on patients' body image and self-esteem [48]. Furthermore, many patients mentioned that they experienced a remarkable improvement in 'skin quality', in addition to a more stable and balanced skin color, and could subsequently wear more natural makeup. In general, platelet-rich plasma is a useful and relatively very safe treatment for periorbital hyperpigmentation and may provide an easy solution to a devastating problem.

Patient 1. Pre- and 3-months post-operative
Patient 2. Pre- and 3-months post-operative
Patient 3. Pre- and 3-months post-operative
Patient 4. Pre- and 3-months post-operative
Patient 5. Pre- and 3-months post-operative

5. Conclusions

PRP is a useful treatment for periorbital hyperpigmentation, and further comparative studies with other treatment modalities are highly recommended.

REFERENCES

[1] Smyth SS, McEver RP, Weyrich AS, Morrell CN, Hoffman MR, Arepally GM, et al; Platelet Colloquium Participants. Platelet functions beyond hemostasis. J Thromb Haemost 2009;7:1759-66.

[2] Weyrich AS, Schwertz H, Kraiss LW, Zimmerman GA.
Protein synthesis by platelets: historical and new perspectives. J Thromb Haemost. 2009;7(2):241-6.
[3] Mazzucco L, Balbo V, Cattana E, Guaschino R, Borzini P. Not every PRP-gel is born equal. Evaluation of growth factor availability for tissues through four PRP-gel preparations: Fibrinet, RegenPRP-Kit, Plateltex and one manual procedure. Vox Sang. 2009;97(2):110-8.
[4] Boswell SG, Cole BJ, Sundman EA, Karas V, Fortier LA. Platelet-rich plasma: a milieu of bioactive factors. Arthroscopy. 2012;28(3):429-39.
[5] Sommeling CE, Heyneman A, Hoeksema H, Verbelen J, Stillaert FB, Monstrey S. The use of platelet-rich plasma in plastic surgery: a systematic review. J Plast Reconstr Aesthet Surg. 2013;66(3):301-11.

American Journal of Dermatology and Venereology 2014, 3(5): 87-94

[6] Monteiro MR. Células-tronco na pele. Surg Cosmet Dermatol 2012;4(2):159-63.
[7] Marx RE, Carlson ER, Eichstaedt RM, Schimmele SR, Strauss JE, Georgeff KR. Platelet-rich plasma: growth factor enhancement for bone grafts. Oral Surg Oral Med Oral Pathol Oral Radiol Endod 1998;85:638-46.
[8] Marx RE. Platelet-rich plasma: evidence to support its use. J Oral Maxillofac Surg 2004;62:489-96.
[9] Schmitz JP, Hollinger JO. The biology of platelet-rich plasma. J Oral Maxillofac Surg 2001;59:1119-21.
[10] Dohan Ehrenfest DM, Rasmusson L, Albrektsson T. Classification of platelet concentrates: from pure platelet-rich plasma (P-PRP) to leucocyte- and platelet-rich fibrin (L-PRF). Trends Biotechnol. 2009;27(3):158-67.
[11] Monteiro MR. Platelet-rich plasma in dermatology. Surg Cosmet Dermatol 2013;5(2):1559.
[12] Li ZJ, Choi HI, Choi DK, Sohn KC, Im M, Seo YJ, Lee YH, Lee JH, Lee Y. Autologous platelet-rich plasma: a potential therapeutic tool for promoting hair growth. Dermatol Surg. 2012;38(7 Pt 1):1040-6.
[13] Uebel CO, da Silva JB, Cantarelli D, Martins P. The role of platelet plasma growth factors in male pattern baldness surgery.
Plast Reconstr Surg. 2006;118(6):1458-66.
[14] Martinez-Zapata MJ, Martí-Carvajal AJ, Solà I, Expósito JA, Bolíbar I, Rodríguez L, Garcia J. Autologous platelet-rich plasma for treating chronic wounds. Cochrane Database Syst Rev. 2012:CD006899.
[15] Shin MK, Lee JH, Lee SJ, Kim NI. Platelet-rich plasma combined with fractional laser therapy for skin rejuvenation. Dermatol Surg. 2012;38(4):623-30.
[16] Na JI, Choi JW, Choi HR, Jeong JB, Park KC, Youn SW, et al. Rapid healing and reduced erythema following ablative fractional CO2 laser resurfacing combined with the application of autologous platelet-rich plasma. Dermatol Surg. 2011;37(4):463-8.
[17] Lee JW, Kim BJ, Kim MN, Mun SK. The efficacy of autologous platelet-rich plasma combined with ablative carbon dioxide fractional resurfacing for acne scars: a simultaneous split-face trial. Dermatol Surg. 2011;37(7):931-8.
[18] Redaelli A, Romano D, Marcianó A. Face and neck revitalization with platelet-rich plasma (PRP): clinical outcome in a series of 23 consecutively treated patients. J Drugs Dermatol. 2010;9(5):466-72.
[19] Cho JM, Lee YH, Baek RM, Lee SW. Effect of platelet-rich plasma on ultraviolet B-induced skin wrinkles in nude mice. J Plast Reconstr Aesthet Surg. 2011 Feb;64(2):e31-9.
[20] Kim DH, Je YJ, Kim CD, Lee YH, Seo YJ, Lee JH, Lee Y. Can platelet-rich plasma be used for skin rejuvenation? Evaluation of effects of platelet-rich plasma on human dermal fibroblast. Ann Dermatol. 2011;23(4):424-31.
[21] Kim IS, Park KY, Kim BJ, Kim MN, Kim CW, Kim SE. Efficacy of intradermal radiofrequency combined with autologous platelet-rich plasma in striae distensae: a pilot study. Int J Dermatol. 2012;51(10):1253-8.
[22] Roh MR, Chung KY. Infraorbital dark circles: definition, causes, and treatment options. Dermatol Surg 2009;35:1163-1171.
[23] Epstein JS. Management of infraorbital dark circles. A significant cosmetic concern. Arch Facial Plast Surg 1999;1:303-7.
[24] Freitag FM, Cestari TF. What causes dark circles under the eyes? J Cosmet Dermatol 2007;6(3):211-5.
[25] Stefanato CM, Bhawan J. Diffuse hyperpigmentation of the skin: a clinicopathologic approach to diagnosis. Semin Cutan Med Surg 1997;6(1):61-71.
[26] Costa A, Basile DVA, Medeiros VLS, Moisés AT, Ota SF, V CAJ. Peeling de gel de ácido tioglicólico 10%: opção segura e eficiente na pigmentação infraorbicular constitucional. Surgical & Cosmetic Dermatology 2010;2(1):29-35.
[27] Mitsuishi T, Shimoda T, Mitsui Y, Kuriyama Y, Kawana S. The effects of topical application of phytonadione, retinol and vitamins C and E on infraorbital dark circles and wrinkles of the lower eyelids. J Cosmet Dermatol. 2004;3(2):73-5.
[28] Manuskiatti W, Fitzpatrick RE, Goldman MP. Treatment of facial skin using combinations of CO2, Q-switched alexandrite, flashlamp-pumped pulsed dye, and Er:YAG lasers in the same treatment session. Dermatol Surg 2000;26(2):725-9.
[29] West TB, Alster TS. Improvement of infraorbital hyperpigmentation following carbon dioxide laser resurfacing. Dermatol Surg. 1998;24(6):615-6.
[30] Momosawa A, et al. Combined therapy using Q-switched ruby laser and bleaching treatment with tretinoin and hydroquinone for periorbital skin hyperpigmentation in Asians. Plast Reconstr Surg. 2008;121(1):282-8.
[31] Ferrari M, et al. A new technique for hemodilution, preparation of autologous platelet-rich plasma and intraoperative blood salvage in cardiac surgery. Int J Artif Organs. 1987 Jan;10(1):47-50.
[32] International Cellular Medicine Society guidelines for PRP. 2011. p. 2.
[33] Anitua E, Alonso R, Girbau C, Aguirre JJ, Muruzabal F, Orive G. Antibacterial effect of plasma rich in growth factors (PRGF®-Endoret®) against Staphylococcus aureus and Staphylococcus epidermidis strains. Clin Exp Dermatol 2012;37:652-7.
[34] Graziani et al. The in vitro effect of different PRP concentrations on osteoblasts and fibroblasts. Clin Oral Implants Res.
2006 Apr;17(2):212-9.
[35] Cho JW, Kim SA, Lee KS. Platelet-rich plasma induces increased expression of G1 cell cycle regulators, type I collagen, and matrix metalloproteinase-1 in human skin fibroblasts. Int J Mol Med 2012;29:32-6.
[36] Balkrishnan R, McMichael AJ, Camacho FT, et al. Development and validation of a health-related quality of life instrument for women with melasma. Br J Dermatol 2003;149:572-7.
[37] Souza DM, Ludtke C, Souza ERM, Scandura KMP, Weber MB. Periorbital hyperchromia. Surg Cosmet Dermatol 2011;3(3):233-9.
[38] Eppley BL, Pietrzak WS, Blanton M. Platelet-rich plasma: a review of biology and applications in plastic surgery. Plast Reconstr Surg 2006;118:147e-159e.
[39] Kim DS, Park SH, Park KC. Transforming growth factor-beta1 decreases melanin synthesis via delayed extracellular signal-regulated kinase activation. Int J Biochem Cell Biol 2004;36(8):1482-1491.
[40] Yun WJ, Bang SH, Min KH, Kim SW, Lee MW, Chang SE. Epidermal growth factor and epidermal growth factor signaling attenuate laser-induced melanogenesis. Dermatol Surg 2013 Dec;39(12):1903-11.
[41] Çayırlı M, Çalışkan E, Açıkgöz G, Erbil AH, Ertürk G. Regression of melasma with platelet-rich plasma treatment. Ann Dermatol 2014;26(3):401-402.
[42] Papakonstantinou E, Roth M, Karakiulakis G. Hyaluronic acid: a key molecule in skin aging. Dermatoendocrinol 2012;4:253-258.
[43] Redaelli A, Romano D, Marcianó A. Face and neck revitalization with platelet-rich plasma (PRP): clinical outcome in a series of 23 consecutively treated patients. J Drugs Dermatol 2010;9:466-72.
[44] Kang BK, Lee JH, Shin MK, Kim NI. Infraorbital rejuvenation using PRP (platelet-rich plasma): a prospective, randomized, split-face trial. J Am Acad Dermatol 2013;68:SAB24.
[45] Shin MK, Lee JH, Lee SJ, Kim NI. Platelet-rich plasma combined with fractional laser therapy for skin rejuvenation.
Dermatol Surg 2012;38:623-30.
[46] Smiell JM, Wieman TJ, Steed DL, Perry BH, Sampson AR, Schwab BH. Efficacy and safety of becaplermin (recombinant human platelet-derived growth factor-BB) in patients with nonhealing, lower extremity diabetic ulcers: a combined analysis of four randomized studies. Wound Repair Regen 1999;7:335-46.
[47] Bhanot S, Alex JC. Current applications of platelet gels in facial plastic surgery. Facial Plast Surg 2002;18:27-33.
[48] Al-Saedy SJ, Al-Hilo MM, Al-Shami SH. Treatment of acne scars using fractional Erbium:YAG laser. Am J Dermatol Venereol 2014;3(2):43-9.

work_z2cu6xj7fbekzjqwqybjzonxnq ---- Acta Derm Venereol 94 SHORT COMMUNICATION Acta Derm Venereol 2014; 94: 585–587 © 2014 The Authors. doi: 10.2340/00015555-1740 Journal Compilation © 2014 Acta Dermato-Venereologica. ISSN 0001-5555

Despite their limitations and side-effects, whole-body UVB phototherapy and corticosteroids are still the mainstay of first-line treatment of vitiligo in both adults and children (1). As vitiligo usually does not affect more than 20% of the body surface, topical treatment strategies have become the focus of interest. Several clinical studies have recently shown important advantages of focused broad-band (BB) or narrow-band (NB) UVB, as they allow selective treatment of target lesions, along with an energy adaptation to the respective body part, while sparing the normally pigmented surrounding skin (2–4). Given from twice weekly up to every second week for fewer than 60 sessions, they allow a reduction in both the number of treatments and the cumulative dose, along with higher efficiency. Furthermore, calcineurin antagonists such as tacrolimus and pimecrolimus were introduced as topical immunomodulators for vitiligo and proved to be as effective as clobetasol (5–8).
Using overnight occlusive hydrocolloid dressings, they proved to be effective and safe also in poorly repigmenting body areas such as forearms and shins (4, 5). Recently, it was shown that tacrolimus increases pigmentation and migration of human melanocytes (9). Until now, it is not known whether either of the 2 treatment options is generally superior to the other, or whether intra-individual responses will be similar or different. In a small case study we compared for the first time the efficacy of targeted intense pulsed light (IPL) phototherapy versus tacrolimus 0.1% ointment in an intra-individual right–left comparative way for up to 9 months.

MATERIALS AND METHODS
Eleven patients (8 women, 3 men; 15–69 years old; mean 41 ± 19), with symmetrical vitiligo of < 20% body surface area (BSA), were treated after they or their parents had signed detailed written informed consent. They had Fitzpatrick skin phototype I–IV and disease duration of 8 months up to 30 years (mean 11.1 ± 10 years). Patients with a tendency to spontaneous repigmentation, those who had received topical or systemic treatment during the last 6 months, pregnant women, and patients with a history of premalignant or malignant skin lesions were excluded. Tacrolimus 0.1% ointment (Protopic®, Astellas Pharma GmbH, Munich, Germany) was applied twice daily on the depigmented lesions on the face, trunk, and extremities of the right side, combined with overnight hydrocolloid dressing (Comfeel plus transparent, Coloplast GmbH, Hamburg, Germany) on the shins. Targeted phototherapy was performed once weekly on the respective lesions of the left side, using a BB-UVB light source with peak emission at 311 nm (Relume-Mode, Lumenis). Treatment of hands and feet was spared, as these regions are known to be treatment resistant (3, 5–7). If repigmentation was not visible after 6 months the treatment was stopped. Otherwise, after 9 months therapy was continued with the most effective regimen on both treatment sides.
Patients were free to withdraw at any time during the treatment procedure. All procedures were in accordance with the ethical standards on human experimentation and with the Helsinki Declaration, as revised in 2008. Baseline photographs, performed by digital photography using natural as well as Wood's light, were taken at the beginning of the treatment and were compared with photographs after 3, 6, and 9 months, and at the end of treatment, respectively. Percent repigmentation was evaluated. Because affected BSA was < 10% in all patients, and as we performed a right–left comparative treatment, we calculated percent repigmentation for each treated body region instead of percentage of total vitiligo body area, as recommended by the Vitiligo European Task Force (VETF). For grading, the following generally accepted scale was used: minimal: < 25%; moderate: 25–50%; good: 50–75%; excellent: > 75%. In addition, repigmentation was also assessed by the 0–3 VETF staging score, as recommended (10). To the best of our knowledge, this score has been used so far in only one pilot clinical trial, which did not show significant improvement (11). For statistical evaluation, Student's t-test was used to calculate the difference between the means of 2 independent samples. The significance level was set at p = 0.05.

RESULTS
In the 11 patients, vitiligo lesions in 35 different body areas (1–325 cm2) were treated. Mean starting fluence was 50.5 ± 17.6 mJ/cm2 in UV-sensitive regions (face, neck, trunk, upper arms, and thighs) and 87.7 ± 34.4 mJ/cm² in UV-insensitive areas (forearms, elbows, knees, lower extremities), respectively. After 9 months, mean highest fluence was 277 ± 202 mJ/cm2 in UV-sensitive and 705 ± 535 mJ/cm2 in insensitive areas, respectively. After 6 months, 2 of 11 patients did not show any repigmentation and thus treatment was stopped.
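As an aside, the region-wise grading scale and the two-sample comparison described in the Methods can be sketched in a few lines. This is an illustration only: the study used Student's t-test, whereas the sketch below uses Welch's form of the two-sample statistic (which drops the equal-variance assumption), and because patient-level data are not published, the sample sizes in the example are hypothetical.

```python
import math

# Grading scale from the Methods section:
# minimal < 25%, moderate 25-50%, good 50-75%, excellent > 75%
def grade_repigmentation(percent: float) -> str:
    """Map a percent-repigmentation value to the grade used above.

    The published scale leaves the exact boundary assignment ambiguous;
    here each upper bound is taken as inclusive.
    """
    if percent < 25:
        return "minimal"
    elif percent <= 50:
        return "moderate"
    elif percent <= 75:
        return "good"
    else:
        return "excellent"

def welch_t_from_summary(m1, s1, n1, m2, s2, n2):
    """Two-sample t statistic from summary statistics (Welch's form)."""
    return (m1 - m2) / math.sqrt(s1 ** 2 / n1 + s2 ** 2 / n2)

# Example: mean time to initial facial repigmentation, tacrolimus vs UVB-IPL
# (means/SDs from the Results; n = 6 per side is a hypothetical sample size)
t = welch_t_from_summary(8.0, 1.0, 6, 9.3, 4.9, 6)
print(grade_repigmentation(63))   # "good"
print(round(t, 3))                # -0.637
```

With the reported facial means (8.0 ± 1.0 vs 9.3 ± 4.9 weeks) and the hypothetical n = 6 per side, the statistic is small in magnitude (about -0.64), which is at least consistent with the non-significant p-values reported in the Results.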
In the remaining 9 patients, mean time to initial repigmentation on the tacrolimus-treated side was comparable to the UVB-treated side, even if face, trunk and extremities were considered separately (face: tacrolimus 8.0 ± 1.0 weeks versus UVB-IPL 9.3 ± 4.9 weeks, p = 0.69; trunk: tacrolimus 10.0 ± 2.0 weeks versus UVB-IPL 10.0 ± 5.4 weeks, p = 0.73; extremities: tacrolimus 14.8 ± 5.8 weeks versus UVB-IPL 14.3 ± 6.2 weeks, p = 0.99). After 6 months, the mean percentage of repigmentation of all treated body sites, non-responding lesions included, was comparable between tacrolimus- and UVB-treated lesions (Fig. 1). After 9 months, 38% of lesions on the tacrolimus-treated side on the trunk showed repigmentation compared to 63% on the UVB-treated side, whereas on the face and extremities 67% and 80% of the lesions on both treatment sides responded. Mean percentage of repigmentation on the face and extremities was higher on the tacrolimus-treated side compared to the UVB-treated side; however, the differences were not statistically significant (Fig. 1). According to the VETF staging scale, mean staging score before and after treatment was similar for each side of the body (1.85 ± 0.4 on the tacrolimus side and 1.85 ± 0.5 on the UVB-treated side at the start, and 1.25 ± 0.6 and 1.25 ± 0.7 at the end of treatment, respectively). Thus, mean decrease was 0.6, translating to a significant 32% improvement (p = 0.001 and 0.003, respectively).

Treatment of Generalised Vitiligo with Tacrolimus 0.1% Ointment vs. UVB Intense Pulsed Light Phototherapy: A Pilot Study. Anke Hartmann, Lisa Löhberg, Petra Keikavoussi, Saskia Eichner and Gerold Schuler. Department of Dermatology, University Hospital Erlangen, Ulmenweg 18, DE-91054 Erlangen, Germany. E-mail: anke.hartmann@uk-erlangen.de. Accepted Aug 18, 2013; Epub ahead of print Jan 28, 2014.
Four patients showed moderate to excellent repigmentation after treatment with either tacrolimus or UVB-IPL, respectively; however, on the respective opposite region there was only minimal or no reaction (Fig. S1A¹ and Fig. S2¹). Five patients showed repigmentation with both treatment modalities; however, the bilateral responses were intra-individually considerably different (Fig. S1B¹, Fig. S3¹), with either a follicular or a homogeneous interfollicular pattern of repigmentation, or a mixture of both, without any correlation to treatment mode or localisation of the respective area. After 9 months, 4 patients continued with the more effective regimen on both treatment sides for up to an additional 9 months and, interestingly, all revealed repigmentation similar to the original opposite side (Fig. S2C¹). Side effects included initial mild prickling on the tacrolimus-treated side, and transient erythema and perilesional tanning on the left side.

DISCUSSION
In our pilot study we compared for the first time 2 increasingly accepted localised approaches for vitiligo treatment in patients with generalised vitiligo in a direct right/left fashion. Long-term treatment with weekly sessions of targeted UVB-IPL phototherapy, as well as twice-daily tacrolimus 0.1% ointment, proved overall to be comparably effective as measured by mean time to initial repigmentation and mean percent repigmentation. Our pilot study in 11 patients, although small, clearly indicated that intra-individual differences occur and that treatment response to one of the topical approaches does not generally imply response to another treatment as well. Although it is known that tacrolimus – in contrast to corticosteroids – does not penetrate through an intact skin barrier and into the circulation, the possibility and relevance of systemic effects of either of the 2 topical treatment approaches cannot be completely excluded (6, 12).
Importantly, for the individual patient, response to either of the 2 treatments is currently not predictable and may vary considerably between different body areas. Given these facts it is not surprising that the results of inter-individual comparative studies were often incongruent, and would require very large cohorts to come to reliable results. Our findings confirm the importance of intra-individual right/left comparative studies, which has also been suggested by others (13). Our preliminary results should be confirmed in a larger prospective, randomised clinical trial, which may also allow the outlining of urgently needed parameters predicting responses.

Fig. 1. Mean percentage repigmentation of lesions on the face, trunk and extremities after 6 months (non-responders included) and 9 months of treatment (without the non-responders, who stopped treatment after 6 months) with tacrolimus 0.1% ointment (right side of the body) versus targeted UVB phototherapy (left side of the body), respectively.

¹http://www.medicaljournals.se/acta/content/?doi=10.2340/00015555-1740

REFERENCES
1. Gawkrodger DJ, Ormerod AD, Shaw L, Mauri-Sole I, Whitton ME, Watts MJ, et al. Vitiligo: concise evidence based guidelines on diagnosis and management. Postgrad Med J 2010; 86: 466–471.
2. Hofer A, Hassan AS, Legat FJ, Kerl H, Wolf P. The efficacy of excimer laser (308 nm) for vitiligo at different body sites. J Eur Acad Dermatol Venereol 2006; 20: 558–564.
3. Welsh O, Herz-Ruelas ME, Gómez M, Ocampo-Candiani J. Therapeutic evaluation of UVB-targeted phototherapy in vitiligo that affects less than 10% of the body surface area. Int J Dermatol 2009; 48: 529–534.
4. Asawanonda P, Kijluakiat J, Korkij W, Sindhupak W. Targeted broadband ultraviolet B phototherapy produces similar responses to targeted narrowband ultraviolet B phototherapy for vitiligo: a randomized, double-blind study. Acta Derm Venereol 2008; 88: 376–381.
5. Lepe V, Moncada B, Castanedo-Cazares JP, Torres-Alvarez MB, Ortiz CA, Torres-Rubalcava AB. A double-blind randomized trial of 0.1% tacrolimus vs 0.05% clobetasol for the treatment of childhood vitiligo. Arch Dermatol 2003; 139: 581–585.
6. Hartmann A, Bröcker EB, Hamm H. Occlusive treatment enhances efficacy of tacrolimus 0.1% ointment in adult patients with vitiligo: results of a placebo-controlled 12-month prospective study. Acta Derm Venereol 2008; 88: 474–479.
7. Hartmann A, Bröcker EB, Hamm H. Repigmentation of pretibial vitiligo with calcineurin inhibitors under occlusion. J Dtsch Dermatol Ges 2008; 6: 383–385.
8. Lubaki LJ, Ghanem G, Vereecken P, Fouty E, Benammar L, Vadoud-Seyedi J, et al. Time-kinetic study of repigmentation in vitiligo patients by tacrolimus or pimecrolimus. Arch Dermatol Res 2010; 302: 131–137.
9. Kang HY, Choi YM. FK506 increases pigmentation and migration of human melanocytes. Br J Dermatol 2006; 155: 1037–1040.
10. Taïeb A, Picardo M; VETF Members. The definition and assessment of vitiligo: a consensus report of the Vitiligo European Task Force. Pigment Cell Res 2007; 20: 27–35.
11. Szczurko O, Shear N, Taddio A, Boon H. Ginkgo biloba for the treatment of vitiligo vulgaris: an open label pilot clinical trial. BMC Complement Altern Med 2011; 11: 21.
12. Bos JD, Meinardi MM. The 500 Dalton rule for the skin penetration of chemical compounds and drugs. Exp Dermatol 2000; 9: 165–169.
13. van Geel N, Speeckaert R, Mollet I, De Schepper S, De Wolf J, Tjin EP, et al. In vivo vitiligo induction and therapy model: double-blind, randomized clinical trial.
Pigment Cell Melanoma Res 2012; 25: 57–65.

work_z2kxx7f4yrgbpn6idk6glzrchq ---- Vol 54: october • octobre 2008 Canadian Family Physician • Le Médecin de famille canadien 1357 Commentary

Vox populi
Is the National Physician Survey worth it?
Stewart Cameron MD FCFP

The average age of Canadian physicians was 50 years as of last year.1 The mean time since that average doctor was licensed was 21 years,2 meaning that many in today's medical community had begun their professional careers by the mid-1980s. Most of our physicians entered medical school before HMG-CoA reductase inhibitors were available and before the identification of the hepatitis C virus; AIDS had only just been described as a disease. While these doctors were completing training, researchers were still working the bugs out of magnetic resonance imaging technology and coronary angioplasty. These doctors went to medical school before the World Wide Web existed. Digital photography was largely restricted to space programs. It was, in many respects, a very different world. Those doctors trained in one century and are practising in another.

Coming up short
As momentous as they are, it could be argued that these discoveries and inventions are not the biggest issues Canadian doctors have grappled with since. There is a case to be made that shortages in our health care system are having a more profound effect on physician practice. Scarcity of resources—both material and human—is a new fact of life. We know that physicians are changing how they work and practise in ways that are sometimes subtle but nevertheless profound. Even a casual follower of mainstream media could conclude that these changes are not presented to the public particularly well. While the press and broadcasters cover new advances in clinical medicine, they are less likely to investigate or report on physician practice patterns or clinical roles. These changes are not well followed by governments either.
Politicians are often reluctant to draw attention to issues that reflect badly on them.

Winds of change
In the second week of January 2008, media across Canada did carry numerous stories about problems with the supply of medical manpower. In-depth stories on the challenges faced by Canadian doctors dealing with underfunding and understaffing of our health care system were carried on the Web, on television, on radio, and by newspapers. This was followed in March by further analysis, broken down by province and territory, showing how the changing medical work force has effects on access to family physicians and other medical specialists. In April another wave of stories covered student and resident issues, such as the difficulties faced by rural applicants to medical schools. June brought extensive publicity of the challenges of providing care for patients with chronic diseases. Thoughtful commentary from the leaders of Canadian physician organizations was used to illuminate the stories, all backed with solid data. These stories evoked much discussion of the issues, and there were calls to address the problems. The print media, radio talk shows, and television stations requested interviews with physicians, and a cadre of well-versed spokespersons was on hand to fulfil these requests. This coverage was a result of the release of data from the National Physician Survey (NPS), a project of the College of Family Physicians of Canada, the Canadian Medical Association, and the Royal College of Physicians and Surgeons of Canada. The NPS appears to be the largest such undertaking in the world. While other countries rely on sporadic surveys and small polls, Canada has a unique snapshot of the issues facing physicians, collected directly from doctors themselves. Every physician in the nation has a chance to contribute.
As the results are paired with those of the previous NPS in 2004, our respective organizations are now armed with powerful information to guide health planning. Ongoing work with governments will be strengthened by this unique source of knowledge from the front lines of health care delivery. In addition, the results are not kept secret or partially released to suit an agenda. They are made available to members, researchers, the media, and the public on the NPS website (www.nationalphysiciansurvey.ca).

Leading the way
A huge coordinated effort is behind this campaign. The NPS is planned years in advance with preparations that include the selection of questions, piloting of questionnaires, and review of methodology to ensure statistical reliability. After the data are collected, they are extensively analyzed to reveal important trends and issues.

This article is also available in French on page 1362.

Leaders and spokespersons are briefed. The releases to the public are coordinated through multiple meetings, e-mails, and teleconferences by the staff and leaders of the 3 participating medical organizations. Your national College has been a leader throughout this process. Indeed, the initial NPS in 2004 was based on a previous work force survey initiated by the College of Family Physicians of Canada. The College, through the Janus Project and its steering committee, has ensured that the family medicine perspective has always featured prominently in the results. There are encouraging signs that the media are now paying more attention to this area.3,4 Sometimes, however, Canadians seem resigned to a situation of care rationing and shortages. They have tried to adjust to waiting lists as if they were inevitable. This suits the agenda of those who would just as soon keep the data under wraps.
The NPS will not allow that to happen. The stories in the supplement to this issue are your stories, as you told them. The public and the governments are hearing your voice, thanks to the NPS. Take a few minutes to peruse the supplement; it is about you and your career, and it is something you can be proud of. The NPS is worth it.

Dr Cameron is an Associate Professor in the Department of Family Medicine at Dalhousie University in Halifax, NS, and is Chair of the Janus Steering Committee of the College of Family Physicians of Canada.

Competing interests
None declared

Correspondence
Dr Stewart Cameron, Department of Family Medicine, Dalhousie University, Lane 8, 5909 Veterans' Lane, Halifax, NS B3H 2E2; telephone 902 473-6250; fax 902 473-8548; e-mail stewart.cameron@dal.ca

The opinions expressed in commentaries are those of the authors. Publication does not imply endorsement by the College of Family Physicians of Canada.

References
1. College of Family Physicians of Canada, Canadian Medical Association, Royal College of Physicians and Surgeons of Canada. National Physician Survey 2007. National results by FP/GP or other specialist, sex, age, and all physicians. Q4. Age. Mississauga, ON: College of Family Physicians of Canada. Available from: www.nationalphysiciansurvey.ca/nps/2007_Survey/Results/ENG/National/pdf/Q4/Q4_CORE.only.pdf. Accessed 2008 Aug 22.
2. College of Family Physicians of Canada, Canadian Medical Association, Royal College of Physicians and Surgeons of Canada. National Physician Survey 2007. National results by FP/GP or other specialist, sex, age, and all physicians. Q11. Number of years licensed to practice medicine in Canada. Mississauga, ON: College of Family Physicians of Canada. Available from: www.nationalphysiciansurvey.ca/nps/2007_Survey/Results/ENG/National/pdf/Q11/Q11_Core.only.pdf. Accessed 2008 Aug 22.
3. CBC News. Canada-wide pathology crisis on horizon, cancer inquiry warned. CBC News 2008 Jun 26; Health.
Available from: www.cbc.ca/health/story/2008/06/26/pathology-plea.html. Accessed 2008 Aug 22.
4. CBC News. Hospital staffing crisis sparks huge rally in Grand Falls-Windsor. CBC News 2008 Jul 3; Health. Available from: www.cbc.ca/health/story/2008/07/03/central-rally.html. Accessed 2008 Aug 22.

✶ ✶ ✶

work_z2nuqrqwbraslbbype6zq3pgpe ----

A Study on Expansion of Headings for Digital Trends of Korean Decimal Classification*
Ok Kyung Chung (정옥경)**

Contents: 1. Introduction / 2. Etymology and Concept of "Digital" / 3. Development of Digital-Related Classification Items in DDC and KDC (3.1 DDC; 3.2 KDC) / 4. Analysis of Digital-Related Subjects in Subject Heading Lists (4.1 Library of Congress Subject Headings (LCSH); 4.2 National Library of Korea Subject Headings) / 5. Proposed Expansion of Digital-Related Headings in KDC / 6. Conclusion

ABSTRACT
The purpose of this study is to propose the expansion of headings for digital trends of Korean Decimal Classification. In order to achieve the purpose, this study first analyzed the change process of classification items related to digital resources in DDC and KDC. Second, subject names related to digital in LCSH and the Korean National Library Subject Headings were analyzed. Third, comparisons of the digital-related items of the DDC 23rd edition with LCSH, and of KDC with the Korean National Library Subject Headings, were conducted. The comparative analysis identified that the KDC 6th edition has difficulty supporting reasonable classification, because it contains only 12 item names related to digital.
Therefore, the digital-related items should be expanded into new classification item names, and digital-related headings should be added to the including notes and class-here notes of KDC.

Keywords: Korean Decimal Classification, Dewey Decimal Classification, Library of Congress Subject Headings, National Library of Korea Subject Headings, Digital

* This paper was supported by a 2012 research grant from Incheon National University.
** Professor, Department of Library and Information Science, Incheon National University (okchung@incheon.ac.kr)
Received: April 17, 2014; first review: April 30, 2014; accepted: May 15, 2014.
Journal of the Korean Society for Library and Information Science, 48(2): 177-199, 2014. [http://dx.doi.org/10.4275/KSLIS.2014.48.2.177]

1. Introduction

Advances in computer and information-communication technology are affecting every part of society, and many different domains are shifting to a digital-infrastructure environment. Changes combining digital technology and computers began to enter our daily lives in earnest in the mid-1990s, and since the 2000s analogue technology has been rapidly replaced by digital technology. Moving past the Web 2.0 era, which was grounded in the spirit of openness, sharing, participation, and collaboration, services using Web 3.0 technology, with personalization and customization as its keywords, are now being introduced. The main attributes of Web 3.0 are communication, convergence, and personalization. Communication means that, as large amounts of information accumulate on websites through the participation of many people, information circulates smoothly and multi-directional communication is supported. Convergence refers to the convergence of information technologies, whereby information is fused and reproduced as information with new value (Angel 2011).

In particular, digital infrastructure is being converged and newly reproduced. Digital technology has been introduced into a wide range of areas: digital computers, digital libraries, digital preservation, the digital society, digital media, digital resources, digital publishing, digital photography, digital art, digital broadcasting, digital communication, digital advertising, digital audio and video, digital signatures, digital cinematography, digital camera photography, digital neural networks in medicine, and so on; the related literature is growing accordingly. However, the digital-related headings developed in the 6th edition of KDC, published in 2013 (Korean Library Association 2013), number only 12, including the 2 items described in including notes. This is far too few: accurate assignment of classification numbers and detailed subdivision are difficult, and the resulting duplication of classification numbers inevitably degrades the effectiveness and usefulness of classification.

As society changes from an analogue to a digital environment, digital-related subjects have been introduced into diverse fields, active research on them is being conducted, and related disciplines are newly emerging. Materials in these emerging digital-related fields should be classifiable consistently and systematically. Heading names therefore need to be added and expanded so that digital-related materials used across many fields can be subdivided consistently and systematically, maximizing accessibility and the effectiveness of classification.

The purpose of this study is thus to propose how the digital-related subjects that have newly appeared with the introduction of digital technology should be expanded in the KDC schedules. To achieve this purpose, this study (1) examines the development of digital-related headings in KDC and DDC; (2) extracts digital-related subject headings from LCSH and the National Library of Korea Subject Headings; and (3) organizes the digital-related subject headings extracted from LCSH according to the classification items of the DDC 23rd edition, and those extracted from the National Library of Korea Subject Headings according to the classification items of the KDC 6th edition.
Based on the headings organized in this way, a plan for expanding the digital-related headings of the KDC is proposed.

2. Etymology and Concept of "Digital"

Today the term digital specifically denotes communication signals or information expressed in two (binary) states, and in general the word is regarded as synonymous with "computer-based" (Kwag et al. 2002). The word derives from the way computers perform operations, that is, counting numbers, and its root is the Latin digit, meaning finger. Because fingers are used to count items one by one, the term denotes a discrete property, with clear continuation and interruption. In other words, it refers to electronic technology that generates, stores, and processes continuous data in only the two states 0 and 1; data transmitted or stored by digital technology is represented as a single string of successive 0s and 1s (Kwag et al. 2002). According to the Dictionary of Information and Communication Terminology (2004, 392), "digital means representing data or physical quantities with the binary digits 0 and 1; in the real world, digital data represents all data, that is, numbers, characters, images, and voice, with 0s and 1s."

Digital began to take off with the 1946 birth of ENIAC (Electronic Numerical Integrator and Computer), the first electronic digital computer, built with vacuum tubes by P. Eckert and J. W. Mauchly; it was applied mainly to computer systems but has recently come into wide use across diverse areas (Go et al. 2006). Continuously and infinitely varying quantities, such as the human voice, smells, and tastes in daily life, or a car's dial-type speedometer, a mercury thermometer, and a wall clock, are analog signals. A quantity whose value increases at set points in time, like a person's age, is a digital signal: a discontinuous signal that increases or decreases in step-like form. Commonly encountered quantities such as loudness, weight, length, temperature, distance, pressure, light intensity, voltage, and current are analog, but these too are increasingly measured digitally, and the digitized values are used in many applications (Jeong and Lee 2011). Compared with analog, digital is fast, accurate, and highly reliable, and digital signals are easily and flexibly processed by computer, so the concept has entered deeply into everyday life and is driving social change in many fields (Kim 2000).

The term digital is establishing itself as a socio-cultural phenomenon characterizing the way of life of the 21st century. No one can avoid the "digitization" of the many parts that make up each individual's social environment; it is being introduced across society, in politics, economics, education, culture, the arts, newspapers, broadcasting, advertising, film, music, medicine, photography, IT, BT, copyright, literature, and more.

3. Development of Digital-Related Classes in DDC and KDC

3.1 Development of digital-related headings in DDC

The DDC first used the term digital in the Relative Index of the 15th edition, published in 1951, as "Digital computers," directing classification under 510 Mathematics at 510.78 Computation Instruments and machines (Dewey 1951, 183). This shows that, because early computers were tools invented by scientists to perform complex calculations quickly and accurately, digital computers were developed under mathematics.
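The concept described in section 2, continuous information reduced to a discrete string of 0s and 1s, can be illustrated with a short Python sketch. This is illustrative only: the function names and the 8-bit ASCII encoding are my assumptions, not anything prescribed by the article.

```python
def to_bits(text: str) -> str:
    """Encode text as a single string of 0s and 1s (8 bits per ASCII character)."""
    return "".join(format(b, "08b") for b in text.encode("ascii"))

def from_bits(bits: str) -> str:
    """Recover the original text from the two-state (binary) representation."""
    data = bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))
    return data.decode("ascii")

encoded = to_bits("KDC")
print(encoded)             # the discrete, two-state representation
print(from_bits(encoded))  # round-trips back to "KDC"
```

The round trip shows the defining property quoted above: any data, here a short text, is generated, stored, and recovered purely from the two states 0 and 1.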
Development as classes in the schedules began with the 16th edition in 1958: two items, Digital machines and Automatic (Electronic) digital computers, under 510 Mathematics, and two items, Automatic (electronic) digital computers and Other digital machines, under 681.14 Business machines (Dewey 1958, 887). <Table 1> compares the 15th and 16th editions.

<Table 1> Digital-related items in the DDC 15th and 16th editions
15th ed. (1951): schedules, 1 item: 510.78 Computation Instruments and machines. Relative Index, 1 entry: Digital computers 510.78.
16th ed. (1958): schedules, 4 items under 510.78 Computation Instruments and machines and 681.14: 510.783 Digital machines; 510.7834 Automatic (Electronic) digital computers; 681.142 Automatic (Electronic) digital computers; 681.144 Other digital machines. Relative Index, 3 entries: Digital computers comprehensive works 510.783; Digital computers manufacture 681.14; Digital computers mathematics 510.783.

As computers came to be used in corporate product and business management and related materials multiplied, the DDC 17th edition (1965) adopted 658.505 Use of digital computer systems as a new class and relocated digital computer manufacture to electronic engineering (Dewey 1965). The 16th and 17th editions differed little in their digital-related classes, although terms for digital computer systems in product and business management were added to the Relative Index.

In 1971 the DDC 18th edition was published as a three-volume set of tables, schedules, and index. This was the period when third-generation computers, with far better performance at much lower prices, were developed; management information systems and customer management systems appeared, and the infrastructure was created for rapidly transmitting large volumes of information such as voice, images, and video over long distances. The Relative Index of the 18th edition guides users to five class numbers, among them 001.64044 Digital computers electronic data processing, 658.054044 Digital computers management, 621.381958 Digital computers in electronic engineering, and 621.3819596 Digital-to-analog converters electronic engineering (Dewey 1971).

The DDC 19th edition (1979) appeared as fourth-generation computers were commercialized: automated teller machines of banks, credit-card companies, and securities firms took root in daily life, and PCs were introduced to expand work such as inventory management, payroll management, and accounting data processing. Apart from moving the Digital computers item, previously developed under 510 Mathematics, to 001.64 Data processing electronic, and developing 513 Digital computer representations arithmetic as a new item, there was no change (Dewey 1979).

The DDC 20th edition (1989) appeared as fifth-generation computers were commercialized; with advances in graphics and communication technology, the realization of virtual reality, combined with artificial intelligence, cognitive science, human-computer interfaces, and stereoscopic imaging, began to take concrete shape (Ro et al. 2007, 248).
따라서 DDC 제20 본표 상에 Digital computers를 004.11- 004.16 하에 유형별로 항목 개를 하고 있으 며, 상 색인에도 추가하 다. 디지털 련 주 제가 <표 2>와 같이 5개 항목에서 18개 항목으 로 확장 개하고 있다(Dewey 1989). 1996년에 간행된 DDC제21 은 인쇄본과 CD-ROM 도우 버 이 있다. 이 은 “120년 의 DDC 역사상 가장 신 인 개정 이 될는지 도 모른다”(남태우 1996)고 했듯이 디지털 련 항목이 24개 항목으로 확장 개되었다. DDC 제20 에서는 컴퓨터 컴퓨터공학, 자․통 신공학분야에서만 디지털 련 항목이 개되었 지만, 제21 에서는 <표 3>과 같이 술분야로 확장 개되었다(Dewey 1996). 2003년 간행된 DDC제22 은 인쇄본과 Web Dewey로 되어 있으며, 웹 환경에서 처음 DDC 를 소개한 이다. 웹은 정보원에 한 근을 분류항목 항목수 상 색인 항목수 004 Digital computers 004.11 Digital supercomputers 004.14 Digital minicomputers 004.145 Specific digital minicomputers 004.16 Digital microcomputers 004.165 Specific digital microcomputers 005.72 Including digital codes 006.676 Graphics programming for digital microcomputers -> 부가주기에 따라 주제세구분 함. 006.686 Graphics programs for digital microcomputers -> 부가주기에 따라 주제세구분 함. 621.3815 Components and circuits -> Class here digital 621.382 Comminications engineering -> Class here digital 621.39 Computers -> Class here electronic digital computers 621.3911 Digital supercomputers 621.3912 Digital mainframe computer 621.3914 Digital minicomputers 621.3916 Digital microcomputers 621.395 Circuitry 621.39814 Analog-to-digital and digital-to-analog converters 18 Digital audio technology sound reproduction 621.3893 Digital circuits 621.3815 Digital circuits computer engineering 621.395 Digital codes 005.72 Digital communications 621.382 Digital computers 004 Digital instruments technology 681.1 Digital mainframe computers 004.12 Digital microcomputers 004.16 Digital minicomputers 004.14 Digital supercomputers 004.11 Digital-to-analog converters 004.64 Digital-to-analog converters engineering 621.39814 13 * 진한 씨는 상 색인에만 있는 항목임. <표 2> DDC 제20 의 디지털 련 분류항목과 상 색인 182 한국문헌정보학회지 제48권 제2호 2014 분류항목 항목수 상 색인 항목수 004. 
Computer science -> class here digital computer 004.11 Digital supercomputers 004.12 Digital mainframe computers 004.125 Specific digital mainframe computers 004.14 Digital minicomputers 004.145 Specific digital minicomputers 004.16 Digital microcomputers 004.165 Specific digital microcomputers 004.611-616 Interfacing and communications for digital computers -> 부가지시에 따른 특수주제 세구분 004.71 Peripherals for digital computers 005.21 Programming for digital supercomputers 005.22 Programming for digital mainframe computers 005.24 Programming for digital minicomputers 005.26 Programming for digital microcomputers 005.31 Programs for digital supercomputers 005.32 Programs for digital mainframe computers 005.34 Programs for digital minicomputers 005.36 Programs for digital microcomputers 00.72 Includin digital codes 621.38159 Analog-to-digital and digital-to-analog converters 621.382 Communication engineerings -> Class here digital communications 621.39 Computer engineering -> Class here electroic digital computers 621.39814 Analog-to-digital and digital-to-analog converters 24 Digital audio technology sound reproduction 621.3893 Digital circuits 621.3815 Digital circuits computer engineering 621.395 Digital codes(Computer) 005.72 Digital communication 384 Digital communications services 384 Digital communication(computer science 004.6 Digital communication engineering 621.382 Digital computers 004 Digital instruments technology 681.1 Digital mainframe computers 004.12 Digital microcomputers 004.16 Digital minicomputers 004.14 Digital supercomputers 004.11 Digital-to-analog converters computer engineering 621.39814 computer science 004.64 electronic engineering 621.38159 Digital video effects 778.593 18 <표 3> DDC 제21 의 디지털 련 분류항목과 상 색인 확장시키고, 문헌분류표의 이용자들과의 국제 력을 시에 용이하게 할 수 있도록 해 주 기 때문에 분류의 능률성과 정확성을 높일 수 있 다. DDC 제22 은 정치 , 사회 변화와 과학 기술의 발 을 극 으로 수용하여 새로운 주 제들을 한 분류항목의 폭 인 갱신과 확장, 재배치 등의 조정이 이루어졌다(오동근 2007). 디지털 련 주제의 분류항목 개를 정리해 보면 <표 4>와 같다(Dewey 2003). 
<표 4>에서와 같이 디지털 련 항목이 24개 에서 36개로 확장 개되었으며, 컴퓨터 컴 퓨터공학, 오디오공학, 자․통신공학, 술 분야만이 아니라 도서 , 출 , 방송, TV, 매체, 고, 경 , 사진 등의 분야로 넓 져 갔 한국십진분류법의 디지털 련 항목명 확장 개에 한 연구 183 분류항목 항목수 상 색인 항목수 004. Computer science -> class here digital computers 004.11 Digital supercomputers 004.12 Digital mainframe computers 004.125 Specific digital mainframe computers 004.14 Digital minicomputers 004.145 Specific digital minicomputers 004.16 Digital microcomputers 004.165 Specific digital microcomputers 004.611-616 Interfacing and communications for digital computers -> 부가지시에 따른 특수주제 세구분 004.7 Peripherals for digital computers [004.71] 005.31 Programs for digital supercomputers 005.32 Programs for digital mainframe computers 005.34 Programs for digital minicomputers 005.36 Programs for digital microcomputers 006.5 Digital audio technology computer science 006.696 Digital video technology computer science 025.04 Information storage and retrieval system -> Class here Digital libraries 070.5797 Digital publications 302.231 Digital media 621.3815 Components and circuits -> Class here digital 621.38159 Analog-to-digital and digital-to-analog converters 621.382 Communication engineering ->Class here digital communications 621.38807 Digital television 621.3893 Sound recording system ->Class here Digital audio engineering 621.39 Computer engineering -> Class here electroic digital computers 621.3911 Digital supercomputers 621.3912 Digital mainframe computers 621.3914 Digital minicomputers 621.3916 Digital microcomputers 621.39814 Analog-to-digital and digital-to-analog converters 651.59 Digital preservation office records 659.144 advertising in digital media 681. 
59 Digitization of files 771.33 Digital cameras 775 Digital photography 776 Computer art(Digital art) 36 Digital arts 776 Digital audio technology computer science 006.5 Digital audio technology sound reproduction 621.3893 Digital cameras 771.33 Digital dircuits 621.3815 Digital circuits computer engineering 621.395 Digital codes(Computer) 005.72 Digital communications 384 Digital communications services 384 Digital communications computer science 004.6 Digital communications engineering 621.382 Digital computers 004 Digital instruments technology 681.1 Digital libraries 025.04 Digital media 302.231 Digital advertising 659.144 Digital sociology 302.231 Digital photography 775 Digital preservation office records 651.59 Digital publications 070.5797 Digital publications bibliographies 011.39 Digital television 621.38807 Digital-to-analog converters computer engineering 621.39814 computer science 004.64 electronic engineering 621.38159 Digital video effects 778.593 Digital video technology computer science 006.696 27 <표 4> DDC 제22 의 디지털 련 분류항목과 상 색인 184 한국문헌정보학회지 제48권 제2호 2014 음을 알 수 있다. 가장 최근에 간행된 DDC 제23 (2011)에서 는 디지털 련 항목이 44개로 확장 개되었으 며. 사회학, 교육, 정 기계, 화분야로 까지 확 장 개되었다. 이 게 디지털이란 용어가 분류 표에 도입되어진 과정을 통하여 시 인 변화 흐름에 따라 어떠한 분야에 응용되어 새로운 역이 생겨났는지를 단할 수 있다. DDC 제23 에 개된 디지털 련 분류항목과 상 색인 은 <표 5>와 같다(Dewey 2011). 분류항목 항목수 상 색인 항목수 004. 
Computer science -> class here digital computers 004.11 Digital supercomputers 004.12 Digital mainframe computers 004.125 Specific digital mainframe computers 004.14 Digital minicomputers 004.145 Specific digital minicomputers 004.16 Digital microcomputers 004.165 Specific digital microcomputers 004.611-616 Interfacing and communications for digital computers -> 부가지시에 따른 특수주제 세구분 004.7 Peripherals for digital computers [004.71] 005.31 Programs for digital supercomputers 005.32 Programs for digital mainframe computers 005.34 Programs for digital minicomputers 005.36 Programs for digital microcomputers 005.8 Data security -> Including Digital rights management 005.82 Data encryption -> Digital signatures 006.5 Digital audio 006.696 Digital video 025.042 World Wide Web -> Class here Digital libraries 025.84 Preservation -> Class here digital preservation 070.5797 Digital publications 302.231 Digital media 372.34 Digital literacy primary education 621.3815 Components and circuits -> Class here digital 621.38159 Analog-to-digital and digital-to-analog converters 621.382 Communication engineering -> Class here digital communications 621.38807 Digital television 44 Digital art 776 Digital audio technology computer science 006.5 Digital audio technology sound reproduction 621.3893 Digital cameras 771.3 Digital cinematography 777 Digital circuits 621.3815 Digital circuits computer engineering 621.395 Digital codes(Computer) 005.72 Digital communications 384 Digital communications services 384 Digital communications computer science 004.6 Digital communications engineering 621.382 Digital computers 004 Digital images 740 Digital images photography 779 Digital instruments technology 681.1 Digital libraries 025.042 Digital literacy primary education 372.34 Digital media 302.231 Digital advertising 659.144 Digital sociology 302.231 Digital photography 770 Digital preservation 025.84 Digital preservation office records 651.59 Digital publications 070.5797 Digital publications 
bibliographies 011.39 Digital rights management computer science 005.8 Digital signatures computer science 005.82 Digital subscriber lines 621.3878 Digital television 621.38807 Digital-to-analog converters computer engineering 621.39814 computer science 004.64 electronic engineering 621.38159 Digital video effects 777.9 36 <표 5> DDC 제23 의 디지털 련 분류항목과 상 색인 한국십진분류법의 디지털 련 항목명 확장 개에 한 연구 185 1940년 에 나타난 디지털이란 용어가 분류 표에 항목명으로 채택되어 개되기 시작한 것 은 1951년 DDC 제15 에서 부터이다. 분류표에 처음으로 채택된 디지털 련 항목명은 ‘digital computers’로서 상 색인에서 사용하 다. 디지 털이란 용어가 수학 아래에서 사용되기 시작하 여 다양한 학문 역에 도입되었음을 알 수 있다. DDC 제15 상 색인에서 ‘digital computers’ 로 처음 안내되기 시작한 디지털 련 항목이 2011년 제23 에서는 44개로 확장 개되었으 며, 분류항목으로 개하기 시작한 것은 1959 년 DDC 제16 에서 이다. 2011년 제23 까지 개정되는 동안 확장 개된 분류항목과 상 색 인에 채택된 디지털 련 항목수를 비교해 보 면 <표 6>과 같다. 분류항목수 상 색인항목수 15 ☓ 1 16 4 3 17 4 4 18 4 5 19 5 5 20 18 13 21 24 18 22 36 27 23 44 36 <표 6> DDC 제15 -제23 의 디지털 련 개 항목수 비교 분류항목 항목수 상 색인 항목수 621.3893 Sound recording system -> Class here Digital audio engineering 621.39 Computer engineering -> Class here electroic digital computers 621.3911 Digital supercomputers 621.3912 Digital mainframe computers 621.3914 Digital minicomputers 621.3916 Digital microcomputers 621.39814 Analog-to-digital and digital-to-analog converters 651.59 Digitization of files 659.144 advertising in digital media 681. 59 Digitization of files 740 Digital images 771.3 Digital cameras 775 Digital photography 776 Computer art(Digital art) 777 Digital videography 778.593 Digital video effects 779 Digital images photography Digital video technology computer science 006.696 Digital videography 777 186 한국문헌정보학회지 제48권 제2호 2014 3.2 KDC의 디지털 련 분류항목의 개과정 한국십진분류법(KDC)에서 디지털 련 주 제가 분류 항목명으로 채택되어 개되기 시작 한 것은 1980년에 간행된 제3 이며, ‘569.92 디지털계산기’로 상 색인과 본표에 개되었 다(한국도서 회 1980). 1996년 간행된 제4 상 색인에 디지털 련 항목명은 디지털 계산기(566.1), 디지털녹음기기(567.435), 디지 털-아날로그변환장치(566.5) 등 3개의 항목을 안내하고 있으며(한국도서 회 1996), 2009 년에 간행한 KDC 제5 에 개된 디지털 련 분류 항목명은 9개 항목이다(한국도서 회 2009). 
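The edition-by-edition counts in <Table 6> above can be tabulated to make the growth trend explicit. The data is transcribed from Table 6; the list structure and variable names are mine.

```python
# (edition, schedule items, Relative Index items) for the DDC 15th-23rd
# editions, as reported in Table 6 (the 15th edition had an index entry only).
ddc_digital_items = [
    (15, 0, 1), (16, 4, 3), (17, 4, 4), (18, 4, 5), (19, 5, 5),
    (20, 18, 13), (21, 24, 18), (22, 36, 27), (23, 44, 36),
]

# Both series grow monotonically: digital-related headings were added in
# successive editions and never pruned.
schedule = [s for _, s, _ in ddc_digital_items]
index = [i for _, _, i in ddc_digital_items]
assert schedule == sorted(schedule) and index == sorted(index)
print(f"Schedule items grew from {schedule[0]} to {schedule[-1]} over nine editions.")
```

The monotonic growth, from a single index entry in 1951 to 44 schedule items in 2011, is the core evidence the article uses for expanding the KDC in the same direction.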
포함주기로 제시한 ‘디지털카메라와 디 지털-아날로그 변환장치’ 2개 항목을 제외하면 7개 항목이다. 가장 최근에 간행된 KDC 제6 (한국도서 회 2013)의 디지털 련 분류 항 목명은 포함주기에 제시한 것을 포함하여 12개 항목이며, 상 색인에는 13개 항목명을 안내하 고 있다. 디지털이란 용어가 다양한 분야에 도 분류항목 항목수 상 색인 항목수 3 569.92 자계산기 -> 아날로그계산기, 디지털계산 기 등을 포함한다. 1 디지털계산기 569.92 1 4 567.435 디지털 녹음기기 566.5 속 통신 하에 디지털-아날로그변환장치 2 디지탈계산기 566.1 디지탈녹음기기 567.435 디지털-아날로그변환장치 566.5 3 5 004.2261 입력장치 -> 포함주기에 디지털카메라 004.5 통신 네트워크 -> 포함주기로 디지털-아날 로그 변환장치 004.75 디지털 오디오 567.435 디지털녹음기기 568.866 디지털텔 비 569.73 디지털오디오공학 569.85 디지털-아날로그 변환기 657.4 디지털 만화 661.5 디지털사진기 9 디지털녹음기기 567.435 디지털만화 657.4 디지털사진기 661.5 디지털-아날로그 변환장치 004.5 디지털 오디오 004.75 디지털오디오공학 569.73 디지털카메라 004.2261 디지털텔 비 568.866 8 6 004.2261 입력장치 -> 포함주기에 디지털카메라 004.5 통신 네트워크 -> 포함주기로 디지털-아날 로그 변환장치 004.75 디지털 오디오 004.77 디지털 상처리(컴퓨터) 024.9 디지털자료 리 567.435 디지털녹음기기 568.866 디지털텔 비 569.73 디지털오디오공학 569.85 디지털-아날로그 변환기 657.4 만화 661.5 디지털사진기 688.86 디지털 화 12 디지털녹음기기 567.435 디지털만화 657.4 디지털사진기 661.5 디지털-아날로그 변환기 569.85 디지털-아날로그 변환장치 004.5 디지털 상처리(컴퓨터) 004.77 디지털 화 688.86 디지털 오디오 004.75 디지털오디오공학 569.73 디지털 자료 표 -024 디지털 자료 리, 도서 024.9 디지털카메라 004.2261 디지털텔 비 568.866 13 <표 7> KDC의 디지털 련 분류항목과 상 색인 비교 한국십진분류법의 디지털 련 항목명 확장 개에 한 연구 187 입되어 사용되고 있으며, 련 자료들이 많이 생산되고 있는 실이다. 그러나 가장 최근에 간행된 KDC 제6 에 개하고 있는 디지털 련 항목명은 12개로 아주 미흡한 상황이므로 향후 확장 개를 하여야 할 것이다. KDC에 개된 디지털 련 항목을 제3 부터 제6 까지 비교 분석하면 <표 7>과 같다. 오늘날 디지털 련 주제가 다양한 분야에 도 입되어 연구되고 있으며, 그 연구 결과물도 최 근에 많이 증가되었다. 그러나 가장 최근에 개 정된 KDC 제6 을 보면 디지털 련 항목명이 12개가 개되어 있으며, 표 구분 ‘-024 시청각 자료 디지털 자료’를 활용하여 각종 디지털 자 료를 세분하도록 하고 있다. 국립 앙도서 지식 정보 통합검색 시스템(http://www.dibrary.net) 을 통하여 ‘디지털’에 한 통합검색 결과 총 22,747건 단행본 9,735건, 연속간행물 3,535 건, 비도서자료 648건, 온라인자료 9,781건, 참 고정보원 1008건 등으로 나타났다. 이 단행 본 9,735건을 KDC의 주류구분에 따라 분석해 보면 총류(000) 1,577건, 철학(100) 90건, 종교 (200) 50건, 사회과학(300) 2,367건, 자연과학 (400) 102건, 기술과학(600) 3,762건, 술(600) 1,537건, 어학(700) 66건, 문학(800) 129건, 역 사(900) 55건으로서 기술과학과 총류에 한 자료가 많았다. 이들 자료를 발행연도별 통계 를 보면 2011- 재 1,389건, 2001-2010 7,052 건, 1991-2000 1,227건, 1981-1990 58건, 1971- 1980 3건, 발행연도 불명 6건으로 나타났다. 
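As a quick consistency check, the per-main-class monograph counts reported above do sum to the stated total of 9,735. The counts are transcribed from the text; the dictionary form and class labels are mine (labels follow the standard KDC main classes: 000 general works through 900 history).

```python
# Monograph hits for "digital" in the National Library of Korea integrated
# search, broken down by KDC main class (figures as reported in the article).
counts = {
    "000 General works": 1577, "100 Philosophy": 90, "200 Religion": 50,
    "300 Social sciences": 2367, "400 Natural sciences": 102,
    "500 Technology": 3762, "600 Arts": 1537, "700 Language": 66,
    "800 Literature": 129, "900 History": 55,
}
total = sum(counts.values())
assert total == 9735  # matches the total of monographs stated in the text
print(f"Total monographs: {total}; largest class: {max(counts, key=counts.get)}")
```

Technology and general works dominate, as the article observes, which is why those classes receive the most attention in the expansion proposal of section 5.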
Thus the recent growth of digital-related materials is conspicuous. To avoid duplicated class numbers and to raise efficiency, the KDC's digital-related headings need additional expansion.

4. Analysis of Digital-Related Subjects in Subject Headings Lists

4.1 Library of Congress Subject Headings (LCSH)

A subject headings list is a kind of controlled-vocabulary dictionary compiled to standardize, in a fixed form, the subject names that may be used as headings in a subject catalog (Lee and Kim 2009). It is an authority list of subject headings, arranged in Korean or Latin alphabetical order, with the cross-references needed among the selected headings; individual libraries select their subject headings from such a list.

The Library of Congress Subject Headings (LCSH), developed by the Library of Congress, has been published since its first edition in 1914, and the 34th edition (2012) is openly accessible on the web as PDF files. A search of the LCSH PDF files for digital-related subject headings retrieves 172 entries. These 172 digital-related subject terms show that digital has been introduced into more diverse fields than the digital-related headings developed in the DDC 23rd edition.

The DDC 23rd edition (2011) develops 44 digital-related headings, centered on computer science and spanning libraries, museums, publishing, education, politics, economics, management, advertising, communication, broadcasting, electronics, culture, the arts, design, photography, and film. The 172 digital-related subjects retrieved from LCSH, however, cover computer science, library and information science, museums, publishing, currency, law, film, communication, broadcasting, mathematics, maps and cartography, medicine, aeronautics, the arts, comics, photography, printing, industry, automobiles, precision instruments, radio engineering, electronic engineering, communication engineering, audio engineering, architectural engineering, transportation, literature, music, and media. Digital-related subjects not developed in the DDC 23rd edition include digital humanities, digital-related law, and digital literature, while digital education, digital advertising, and digital management are subjects not developed in LCSH. Organized against the classes of the DDC 23rd edition, the result is shown in <Table 8>.

<Table 8> Digital-related subjects in the DDC 23rd edition and LCSH
001.3 Humanities: Digital humanities center
004-006 Computer Science: Digital computer, Digital command language, Digital compact disc, Digital computer control.
Digital file sharing, Digital simulation, Digital computer peripherals, Digital signature, Digital audio, Digital video, Digital animation, Digital highway, Digital Network, Digital rights management, Digital computer arithmetic, Digital equipment corporation, Digital incremental plotters, Digital integrated circuits data processing, digital literature (hypertext), 020 문헌정보학 (Library & information science) Digital book, Digital book readers, Digital library, Digital divide, Digital media libraries, Digital literature, Digital preservation and copyright, Digital reference services, Digital information resources, Digital reading devices, Digital geographical databases, Digital circulation, Digital library access control, Digital library administration, Digital library collection development, Digital library management, Digital library use studies, digital media collection, 069 박물 (Museums) Digital museums 070.5 출 (Publishing) Digital publishing, 302 사회행동(Social behavior) Digital social 320 정치(Politics and government) Digital government 332 융(Financial economics) Digital cash, Digital money, Digital currency 340법학(Law) 심리재 의 강제발표, Digital evidence, Digital intellectual property law, Digital signatures Law, Digital discovery law, Digital audiotape recorders and recording law, Digital communication law, Digital media law, Digital television law, 384 통신(Communication) Digital communication, Digital communication services, Digital media, Digital media conservation akd restoration, Digital media editing, Digital media preservation, 510 수학(Mathematics) Digital mathematics, Digital filters(Mathematics) 520 천문학 Digital photogrammetry, Digital map making, Digital surface model, Digital elevation model 550 지구과학 Digital maps, Digital mapping, Digital surface model, Digital, Ditial soil mapping, 디지털지형(고지 ), 510 의학(Medicine) Digital medical radiography, Digital angiocardiography, Digital angiography, Digital ultrasonic cardiography, Digital diagnostic imaging, Digital radiography 
(medical), Digital echocardiography, Digital contrast echocardiography, Digital subtraction nagiography <표 8> DDC 제23 과 LCSH에 디지털 련 주제 한국십진분류법의 디지털 련 항목명 확장 개에 한 연구 189 4.2 국립 앙도서 주제명표목표 국립 앙도서 주제명표목표(http://www. dibrary.net/kolis)는 2002년 개발되었으며, 후 조합 색인언어로서 다양한 정보검색시스템에 서 사용할 수 있는 시소러스형식이다. 이 주제 명표목표는 국립 앙도서 소장목록검색창 에 링크시켜져 있기 때문에 주제검색시에 용어 목록에서 클릭만 하면 용어명이 선택되도록 되 어 있다. 그래서 디지털이란 용어를 검색한 결 과 100건의 용어가 선택되었다. 선택된 100건 을 분석해 보면 컴퓨터과학, 문헌정보학, 출 , 경제학, 경 학, 방송, 정치학, 법학, 화학, 의학, 건축학, 기계공학, 항공우주공학, 통신공학, 무 선공학, 자공학, 인쇄술, 만화, 사진술, 화 등 의 주제 분야에 도입되어 사용되고 있는 것으 로 나타났다. 국립 앙도서 주제명표목표에 개된 디지털 련 항목들을 KDC 제6 에 개된 디지털 련 항목명과 비교해 보면 <표 9> 와 같다. DDC 제23 LCSH 디지털 련 주제 621.38 자, 통신공학(Electronic, communication engineering) Digital electronics, Digital communication, Digital audio engineering, Digital audio broadcasting, Digital audio editors, Digital audio disc players, Digital audio radio satellite services, Digital audio tape players and recorders, Digital audiotape recorders and recording, Digital avionics, Digital computer circuit, Digital differential analyzers, Digital electric filter, Digital image processing, Digital integrated circuits, Digital media, Digital modulation, Digital multimeters, Digital object identifiers, Digital oscilloscopes, Digital radio transmission, Digital records, Digital sound recording, Digital sound sampling, Ditital signal processing, Digital subscriber lines, Digital telephone system, Digital television, Digital-to-analog converters, Digital-to-analog converters calibration, Digital transmission, Digital type and type-founding, Digital versatile discs, Digital video, Digital video editing, Digital video broadcasting, Digital video standards, Digital video discs, Digital voltmeters, Digital VTR, Digital video jockey 681 기계공학(정 기계) Digital cameras, Digital imaging cameras, Digital counters, Digital electronic clocks and watches, Digital instrumentation, Digital jukebox software, Digital keyboards instruments, Digital music players, Digital piano, 
629.1-.2 자동차, 항공공학(Vehicles, Aerospace engineering) Digital navigation, Digital control system, 650 경 학(Management) Digital management information system 670 제조업(Manufacturing) Digital paper, Digital watermarking 686 인쇄(Printing) Digital printing, Digital printing presses 700 술(Arts) Digital art 740 그래픽아트(Graphic arts) Digital images, Digital comic strips 770 사진술(Photography) Digital photography, Digital images editing, Digital images watermarking 780 음악(Music) Digital music hall, Digital music manager software, Digital player piano music,, Digital sampling music, Digital player piano and sampler music, 791.4 화(Motion picture) Digital motion video, Digital moviemaking, Digital filmmaking 800 문학(Literature) Digital fiction, Digital storytelling, Digital monsters 190 한국문헌정보학회지 제48권 제2호 2014 주제 KDC 제6 국립 앙도서 주제명표목표 컴퓨터과학 공학 004.2261 디지털 카메라 004.5 아날로그 변환장치 004.75 디지털 오디오 004.77 디지털 상처리 가상공간(004.5 통신 트워크 하에 개), 디지털공학, 디지털 기술, 디지털 메모리 스코 , 디지털 모바일통신망, 디지털 보이스 처리, 디지털서명(004.62), 디지털 시뮬 이션, 디지털 신호, 디지털 십자 속장비, 디지털 상 연상처리(004.77), 디지털 이미지 이미지기법, 디지털인 라, 디지털 장장치 컴퓨터 로그래 과 컴퓨터데이터 디지털 구조, 디지털 자꼴, 디지털 문자 출 디지털사 (013.1) 문헌정보학 024.9 디지털자료 리, 도서 디지털도서 , 디지털 디바이드, 디지털매체, 디지털 멀티미디어, 디지털자료보존, 디지털산업(020.13), 디지털시 (020.13), 디지털 자료 경제학 디지털경제 경 학 디지털경 방송(326.7) 디지털 라디오(326.74)서비스, 디지털 멀티미디어 방송, 디지털 방 송, 디지털 방송국, 디지털아나운서, 디지털오디오방송, 디지털 성 방송, 디지털방송수신기, 디지털컨버 스(융합) 정치학 디지털강국 법학 디지털공간법(기타제법에 개), 디지털법제 화학 디지털 분석기 조정기 의학 디지털 방사선 검출기 건축학 디지털 건축 기계공학 정 기계-디지털계기 , 디지털계측기(555.3), 디지털스 일, 디지 털복제기, 디지털시계 항공우주공학 디지털비행기록장치 통신공학 567.435 디지털녹음기기 디지털 다 송방식(567.26- 송, 다 화)-> 고속정보통신망 (004.577), 디지털 망 , 디지털녹음기기(567.435), 디지털 디스크, 디지털 비디오 디스크 리코더, 디지털 비디오 디스크 이어, 디지털 루 리어, 디지털 력선, 디지털 송 무선공학 568.866 디지털 텔 비 디지털 변조기(568.4), 디지털 TV, 디지털 가입자 회선, 디지털 음 검측 자공학 569.73 디지털 오디오공학 569.85 아날로그-디지털 디지털- 아날로그 변환기 디지털아날로그변환, 디지털녹음, 디지털오디오, 디지털오디오디 스크, 디지털오디오 디스크 이어, 디지털오디오 테이 , 디지털 오디오테이 녹음기 인쇄술 디지털 인쇄 만화 657.4 디지털만화 사진술 661.5 디지털 사진기 디지털 SLR 카메라, 디지털 비디오 카메라, 디지털 사진 화 688.86 디지털 화 디지털비디오, 디지털 상 <표 9> 국립 앙도서 주제명표목와 KDC 
(Table 9, continued caption: comparison of digital-related headings in the Korean National Library Subject Headings and the KDC 6th edition)

According to <Table 9>, which compares the digital-related headings developed in the KDC 6th edition and the Korean National Library Subject Headings, the KDC 6th edition develops headings in only eight fields: computer science, library and information science, communication engineering, radio engineering, electronic engineering, comics, photography, and film. The Korean National Library Subject Headings develop digital-related headings in 20 fields; among these, computer programs and data, publishing, economics, management, broadcasting, politics, law, chemistry, medicine, architecture, mechanical engineering, aerospace engineering, communication engineering, radio engineering, printing, photography, and film are fields not developed in the KDC 6th edition. They should therefore be added or expanded as classes when the KDC is next revised.

5. Proposed Expansion of Digital-Related Headings in KDC

The digital-related headings developed in LCSH, the DDC 23rd edition, the Korean National Library Subject Headings, and the KDC 6th edition number 172, 44, 100, and 11, respectively. Although the KDC 6th edition (2013) is the most recently revised, its development of digital-related headings is so weak that detailed classification of digital-related materials is difficult, and the resulting duplication of class numbers lowers the efficiency of classification. Accordingly, to avoid duplicate class numbers and to raise efficiency, a plan for the additional expansion of the KDC's digital-related headings is presented below. It is based only on the digital-related headings extracted from the DDC 23rd edition, LCSH, and the Korean National Library Subject Headings, excluding philosophy, religion, and history, from which no digital-related headings were extracted.

First, the digital-related headings developed under the general works class (000) of the KDC 6th edition are four in computer science and one in library and information science; of the four computer-science items, two are described in including notes and two are developed as classes. In the National Library of Korea integrated search, general works returned 1,577 hits for "digital," the third largest share. Since the growth of digital-related materials has been conspicuous of late, the headings should be expanded so that digital-related materials can be classified consistently and rationally. Accordingly, based on the headings developed in the DDC 23rd edition and the general-works digital subject headings extracted from LCSH and the Korean National Library Subject Headings, the proposed expansion of digital-related headings under the KDC general works class is summarized in <Table 10>.

Digital is regarded as synonymous with "computer-based," so close is its relation to computing. Computers, however, are divided by the character of their data into digital computers using discrete principles, analog computers using physically continuous principles, and hybrid computers combining the two. Under 004-005, the computer-related area, digital-related headings were therefore added to including notes and class-here notes, or expanded as classes, so as to keep classification consistent.

Second, the shift from analog to digital has brought many changes to our social environment and will bring more. In the National Library of Korea integrated search, the social sciences (300) returned 2,367 hits, the second largest share, yet the KDC 6th edition develops no digital-related headings in the social sciences, which will cause classification difficulties. Analysis of the digital-related subjects connected with the social sciences in the DDC 23rd edition, the Korean National Library Subject Headings, and LCSH shows them in use across society: the digital society of social communication, economics, management, broadcasting, law, education, and so on. On this basis, the proposed expansion of digital-related headings for the KDC social sciences is given in <Table 11>.

<Table 10> Proposed expansion of digital-related headings in general works (left column: KDC 6th edition; right column: proposed expansion)
KDC 6th edition: 004 컴퓨터과학 Computer science. Includes information science, software, and computer engineering.
004.5 통신 네트워크 주변제어 장치 아날로그-디지털, 디지털 -아날로그 변환장치, 모뎀 등을 포함한다. 004.75 디지털 오디오 004.77 상처리 화상처리, 사진․동 상 처리, 디지털 상처리 등을 포함한다. 024.9 디지털 자료 리 004 컴퓨터과학 Computer science 정보과학 소 트웨어, 컴퓨터 공학, 디지털 컴퓨터를 포함 한다. 004.5 통신 네트워크 주변제어 장치 아날로그-디지털, 디지털-아날로그 변환장 치, 모뎀, 디지털 컴퓨터통신 및 네트워크 등을 포함한다. 004.614 인터넷 보안 디지털 보안, 자메일 보안, 웹 보안을 포함한다. 004.77 상처리 화상처리, 사진․동 상 처리, 디지털 상처리, 디지털 비디 오 등을 포함한다. 005 로그래 , 로그램, 데이터 디지털 컴퓨터 프로그래밍과 프로그램을 포함한다. 005.18 마이크로프로그래밍, 프로그램 디지털컴퓨터 프로그램을 포함한다. 011.204 비도서자료 작권 음악, 화, 상 작권 소 트웨어에 한 작권, 디지털 저작권을 포함한다. 013 출 매 디지털 출판을 포함한다. 020.13 도서 과 사회 디지털 사회도 여기에 분류한다. 024.6 장서 리 보존 디지털 장서관리 및 보존을 포함한다. 024.96 시청각 자료 디지털 오디오 북을 포함한다. 024.995 자자료 디스켓, CD-ROM, 디지털 자료 등을 포함한다. 025.22 디지털 정보서비스 025.91 디지털 도서관 029.51 디지털 스토리텔링 029.6 도서 이외 정보매체의 이용 CD-ROM, 소 트웨어, 데이터베이스, 디지털 자료 등을 포함 한다. 069. 9 디지털 박물관 070.45 뉴미디어, 뉴 리즘 자신문, 인터넷신문, 디지털신문 등을 포함한다. * 진한 씨는 추가 는 확장 개된 것임. <표 10> 총류의 디지털 련 항목명 확장 개 방안 KDC 제6 확장 개 방안 320 경제학 325 경 경 학 일반을 포함한다. 325.7 고 325.743 뉴미디어 기타 인터넷, PC통신, 이블TV, 양방향TV, DM(Direct Mail) 등을 포함한다. 320 경제학 디지털 경제를 포함한다. 325 경 경 학 일반, 디지털 경영 등을 포함한다. 325.487 디지털 문서 디지털 문서의 보존, 문서의 디지털화를 포함한다. 325.743 뉴미디어 기타 <표 11> KDC 사회과학분야의 디지털 련 항목명 확장 개방안 한국십진분류법의 디지털 련 항목명 확장 개에 한 연구 193 셋째, KDC 제6 과 DDC 제23 , 국립 앙 도서 주제명표목표에는 자연과학분야와 련된 디지털 련 항목이 개되어 있지 않기 때문에 LCSH에 개하고 있는 디지털 련 항목인 디지털 지도 지도제작, 디지털 지표 모형 등의 항목을 토 로 KDC의 자연과학 (400)분야에서도 <표 12>와 같이 디지털 련 항목명을 포함주기에 기술하여 일 성 있는 분 류를 할 수 있도록 하여야 할 것이다. 넷째, 국립 앙도서 지식정보 통합검색 결 과 가장 많은 건수가 검색된 분야는 기술과학 분야이다. 이는 컴퓨터기술과 정보통신기술의 KDC 제6 확장 개방안 446.8 지도제작, 도법 지도투 법, 지도해설, 지도복제 등을 포함한다. 446.982 사진측량법 451.5 지표의 변동 퇴 학을 포함한다. 453.91 천기 보 446.8 지도제작, 도법 지도투 법, 지도해설, 지도복제, 디지털 지도제작 등 을 포함한다. 446.982 사진측량법 디지털 사진 측량을 포함한다. 451.5 지표의 변동 퇴 학, 디지털 지표의 모형 등을 포함한다. 453.91 천기 보 디지털 일기예보를 포함한다. <표 12> KDC 자연과학분야 디지털 련 항목명 확장 개방안 KDC 제6 확장 개 방안 327.2 화폐 .26 용화폐, 어음 .29 폐 331.1 사회심리학 사회 상호작용을 포함한다. 331.65 미디어 매스미디어, 리즘, 사회학 등을 포함한다. 인터넷, PC통신, 이블TV, 양방향TV, DM(Direct Mail), 디지털 광고 등을 포함한다. 326.4 통신, 우편 디지털 통신도 여기 분류한다. 326.7 방송 디지털 방송도 포함한다. 
327.739 방송국 디지털 방송국도 포함한다. 326.74 라디오 방송 디지털 라디오 방송을 포함한다. 326.76 텔 비 방송 디지털 텔레비전 방송을 포함한다. 327.2 화폐 .27 디지털 화폐 331.1 사회심리학 사회 상호작용, 디지털 사회를 포함한다. 331. 65 미디어 매스미디어, 리즘, 사회학, 디지털미디어 등을 포함 한다. 365.9 무형재산권법 지 재산권, 디지털 지적재산권 등을 포함한다. 373.32 시청각교육 디지털 멀티미디어 교육도 포함한다. * 진한 씨는 추가 는 확장 개된 것임. 194 한국문헌정보학회지 제48권 제2호 2014 결합이 낳은 결과라고 본다. 특히 의학, 기계공 학, 통신공학, 무선공학, 자공학, 오디오공학 등에 디지털 기술이 도입되어 활용됨으로써 련 출 물이 증가하고 있다. 그러나 최근에 각 종 질병 진단에 디지털 기술을 도입하여 활용 하고 있지만, KDC 제6 과 DDC 제23 의 의학 분야에는 디지털 련 항목명이 하나도 개되어 있지 않다. 그래서 LCSH와 국립 앙도서 주제명표목표에서 추출한 주제명을 토 로 KDC 기술과학 분야의 디지털 련 항 목명의 확장 개방안을 제시하면 <표 13>과 같다. KDC 제6 확장 개방안 567.435 디지털 녹음기기 568.866 디지털 텔 비 569.73 디지털오디오공학 569.85 디지털-아날로그 변환기 512.123 음 진단 디지털 초음파진단을 포함한다. 512.15 방사선 진단 디지털 방사선진단을 포함한다. 549.861 음악당 디지털 뮤직홀을 포함한다. 540 건축학 건축공학 건축술, 건축미술, 디지털 건축 등을 포함한다. 542.15 건축설계 디지털 건축 설계를 포함한다. 555.23 시계 디지털 시계를 포함한다. 555.3 계기, 계측기 디지털 계기와 계측기도 여기에 분류한다. 555.4 사무용기계 디지털 계산기도 여기에 분류한다. 555.63 사진기계 카메라, 사기, 촬 기, 디지털 카메라, 디지털 영사기, 디지털 촬영기 등을 포함한다. 555.64 의료기기 디지털 의료기기를 포함한다. 555.8 음향기기 디지털 음향기기를 포함한다. 555.83 자동연주장치 자동피아노, 디지털 쥬크박스(jukebox)를 포함한다. 556.473 자동차계기 디지털자동차계기판을 포함한다. 558.35 항공계기와 그 시스템 디지털 항공계기를 포함한다. 567 통신공학 Communication engineering 디지털 통신을 포함한다. 567.2 송 디지털 전송을 포함한다. 569 자공학 디지털전자공학과 마이크로 자공학은 여기에 분류한다. * 진한 씨는 추가 는 확장 개된 것임. <표 13> KDC 기술과학 분야의 디지털 련 항목명의 확장 개방안 한국십진분류법의 디지털 련 항목명 확장 개에 한 연구 195 KDC 제6 확장 개방안 657.4 디지털 만화 661.5 사진기 부속품 폴로라이드사진기, 디지털사진기 등을 포함한다. 688.86 디지털 화 600 술 디지털 예술도 여기 분류한다. 658.34 디지털 디자인 디지털 이미지도 여기에 분류한다. 660 사진술 디지털 사진술을 포함한다. 666.73 비디오 촬 술 디지털 비디오 촬 을 포함한다. 667.2 디지털 사진 676.2 피아노 디지털피아노를 포함한다. 676.97 기계악기 고안물 뮤직박스, 디지털 쥬크박스 등을 포함한다. 688.13 화제작 디지털 영화제작도 여기에 분류한다. 691.15 컴퓨터 게임, 자게임, 디지털 게임 등을 포함한다. * 진한 씨는 추가 는 확장 개된 것임. <표 14> KDC 술 분야의 디지털 련 항목명의 확장 개방안 다섯째, 술 분야에서 디지털 기술을 활용한 분야는 그래픽 아트, 만화, 사진, 음악, 화 등 이다. DDC 제23 에 개된 항목명과 LCSH와 국립 앙도서 주제명표목표에서 추출한 술 분야의 디지털 련 항목을 토 로 KDC 술 분야의 디지털 련 항목명의 확장 개방안을 제시하면 <표 14>와 같다. 
Sixth, in the literature class, no digital-related heading is developed in the DDC 23rd edition, the KDC 6th edition, or the Korean National Library Subject Headings, while LCSH develops a single subject, "digital storytelling." On this basis, a note reading "includes digital storytelling" should be added under KDC 808.543 Storytelling so that such works can be classified there.

Seventh, the KDC, the DDC, LCSH, and the Korean National Library Subject Headings develop no digital-related headings at all in philosophy, religion, or history, so such headings should be introduced in future as circumstances in those fields require.

By adding and expanding in the KDC, as above, the digital-related headings that have newly arisen with the introduction of digital technology, the growing body of digital-related materials can be classified systematically and consistently.

6. Conclusion

After reviewing the development of digital-related headings in the KDC and the DDC, analyzing the digital-related subject headings developed in LCSH and the Korean National Library Subject Headings, and comparing the KDC 6th edition with the Korean National Library Subject Headings and the DDC 23rd edition with LCSH, digital-related headings were extracted. The findings of the expansion plan for the KDC's digital-related headings, based on the extracted headings, are summarized as follows.

First, the term digital was adopted in a classification scheme in the Relative Index of the DDC 15th edition (1951), as "Digital computers"; by the 23rd edition (2011), the DDC's digital-related headings had expanded to 44 items. The KDC first adopted a digital heading in its 3rd edition (1980), "569.92 digital calculators"; through the 6th edition (2013) it has grown to only 12 headings, including those described in including notes, across eight fields (computers, libraries, communication and electronic engineering, radio engineering, comics, photography, and film), so that consistent and rational assignment of class numbers is difficult.

Second, analysis of the digital-related subject headings developed in LCSH and the Korean National Library Subject Headings showed the following. A search of the PDF files of the LCSH 34th edition (2012) retrieved 172 digital-related subject headings, more diverse than the headings developed in the DDC 23rd edition, including subjects the DDC 23rd edition does not develop, such as digital humanities, digital-related law, digital government, digital printing, digital medicine, and digital literature. A search of the Korean National Library Subject Headings retrieved 100 headings; compared with the KDC 6th edition, these span 20 fields, of which computer programs and data, publishing, economics, management, broadcasting, politics, law, chemistry, medicine, architecture, mechanical engineering, aerospace engineering, communication engineering, radio engineering, printing, photography, and film are fields not developed in the KDC 6th edition.

Third, analysis and extraction of the digital-related headings developed in the KDC and the DDC and of the digital-related subject headings in LCSH and the Korean National Library Subject Headings showed that the fields requiring expansion of digital-related headings in the KDC are the general works (000), social sciences (300), natural sciences (400), technology (500), arts (600), and literature (800) classes. For digital subjects related to headings already developed under these main classes, digital-related headings should be added in including notes and class-here notes so that the materials can be classified; headings not developed in the KDC schedules should be expanded as new classes under similar or adjacent classes, so that digital-related materials in diverse fields can be classified rationally, efficiently, and consistently.

By adding and expanding, as KDC headings, the digital-related subjects extracted from the major classification schemes and subject headings lists in this way, consistency and efficiency can be achieved in classifying digital-related materials across diverse fields, detailed classification becomes possible, and duplication of class numbers can be avoided.
Moreover, users will be able to access digital-related information quickly and accurately, thereby improving the efficiency of use.

A Study on the Expansion of Digital-Related Heading Names in the Korean Decimal Classification
DIGITAL TOOLS FOR DOCUMENTING AND CONSERVING BAHRAIN'S BUILT HERITAGE FOR POSTERITY

D. Mezzino a, L. Barazzetti b, M. Santana Quintero c, A. El-Habashi d

a Carleton Immersive Media Studio (CIMS), 1125 Colonel By Drive, Ottawa, ON, K1S 5B6, Canada; Interuniversity Department of Regional and Urban Studies and Planning (DIST), Politecnico di Torino, Via Sant'Ottavio 20, 10122 Torino, Italy, davide.mezzino@gmail.com
b ABC Department, Politecnico di Milano, Via Ponzio 31, 20133 Milan, Italy, luigi.barazzetti@polimi.it
c Carleton Immersive Media Studio (CIMS), 1125 Colonel By Drive, Ottawa, ON, K1S 5B6, Canada, Mario.santana@carleton.ca
d Archaeology & National Heritage Directorate, Manama, Bahrain, alaa.elhabashi@culture.gov.bh

KEY WORDS: Digital Workflows, Heritage Conservation, Management, IT Documentation, Built Heritage, Bahrain, 3D Imaging, Photogrammetry, Rectifying Photography, Total Station, Computer-Aided Drawing
ABSTRACT: Documenting the physical characteristics of historic structures is the first step for any preventive maintenance, monitoring, conservation, planning and promotion action. Metric documentation supports an informative decision-making process for property owners, site managers, public officials, and conservators. This information also serves a broader purpose: over time, it becomes the primary means by which scholars, heritage professionals, and the general public understand a site that has radically changed or disappeared. Further, documentation supports monitoring as well as the analysis of character-defining elements, which is relevant to defining the values of the building for the local and international community. The awareness of these concepts oriented the digital documentation and training activities developed between 2016 and 2017 for the Bahrain Authority for Culture and Antiquities (BACA) in Bahrain. The activities had two main aims: a) support the local staff in using specific recording techniques to efficiently document and consequently preserve built heritage sites with appropriate accuracy and in a relatively short period; b) develop a pilot project in collaboration with BACA to validate the capacity of the team to accurately document and produce measured records for the conservation and management of Bahrain's built heritage. The documentation project was developed by a multidisciplinary team of experts from BACA, the Carleton Immersive Media Studio (CIMS), Carleton University, Canada, and a contracted researcher from the Gicarus Lab, Politecnico di Milano (POLIMI), Italy. In the training activities, the participants were exposed to a wide range of recording techniques and to the selection criteria for the most suitable one according to requirements, site specifications, categories of values identified for the various built elements, and budget.
The pilot project was tested on three historical structures, each with strong connotations in the Bahraini cultural identity: the Shaikh Isa bin Ali house, the Aljazzaf house and the Siyadi Majlis. These buildings, outstanding examples of Bahraini architecture as well as tangible memory of the country's history, were documented employing several digital techniques, including aerial and terrestrial photogrammetry, rectifying photography, total station and 3D laser scanning.

Figure 1 Siyadi Majlis, Muharraq, Bahrain. Image source: Davide Mezzino.

The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Volume XLII-2/W5, 2017. 26th International CIPA Symposium 2017, 28 August–01 September 2017, Ottawa, Canada. This contribution has been peer-reviewed. doi:10.5194/isprs-archives-XLII-2-W5-513-2017 | © Authors 2017. CC BY 4.0 License.

1. INTRODUCTION

Recording the physical characteristics of historic structures and landscapes is a cornerstone of preventive maintenance, monitoring, and conservation. The information produced by such work assists the decision-making process for property owners, site managers, public officials, and conservators. Rigorous documentation may also serve a broader purpose: over time, it becomes the primary means by which scholars and the public understand a site that has since radically changed or disappeared. These records also serve as posterity and monitoring records in the event of catastrophic or gradual loss of the heritage resource. In line with these considerations, the approach and workflow adopted for the conservation of Bahrain's built heritage were mainly oriented by two aims:

 Support the local staff with the use of specific recording techniques to document, and therefore preserve, Bahrain's historic structures with appropriate accuracy and in a relatively short period.
 Conduct a pilot project in collaboration with the Bahrain Authority for Culture and Antiquities (BACA) to validate the capacity of the team to accurately document and produce measured records for informative conservation actions on Bahrain's built heritage.

The project was conducted by a multidisciplinary team of experts from the Carleton Immersive Media Studio (CIMS) and a contracted professional from the Politecnico di Milano (Polimi), Italy.

1.1 Framework of the documentation activities: the Pearling project

The capacity-building activities and documentation strategies were developed in the wider framework of the so-called Pearling1 Initiative. The project, financed by the Islamic Development Bank (IDB) for a total amount of US$48 million (Economist Newspaper, n.d.), aims at supporting the conservation and rehabilitation of the urban areas2 of the city of Muharraq, Bahrain, connected with the pearling economy. Within this framework, the project involves the rehabilitation and conservation of twelve historical buildings listed in the Pearling, Testimony of an Island Economy UNESCO site, inscribed on the World Heritage List in 2012 (UNESCO, n.d.). These properties, along with a walking trail of 3.5 km and some additional buildings and public spaces connected with Muharraq's pearling heritage, are currently the object of conservation, upgrading, and renovation (Bahrain Authority for Culture and Antiquities, n.d.). Within this framework, the relevance of proper documentation strategies for informed conservation or renovation actions appears evident.

1 Pearling is an outstanding example of a traditional sea-use which shaped the local economy and cultural identity. This secular practice is one of the most significant examples of a natural pearl-collection tradition and is based on the Arabian Gulf oyster beds north of Bahrain.
Although the market associated with pearling decreased exponentially due to several factors (from the diffusion of cultured pearls obtained through artificial injections in the 1930s to the economic change in Bahrain brought by the discovery of oil in 1932), many features and practices still survive. The historic structures of Muharraq are the evidence of this traditional sea-use and its related economy. These include residential and commercial buildings, tangible examples of the historical, cultural and socio-economic context associated with the pearling society. These historic structures, developed within the pearling social and economic phenomena, also embed intangible values including, but not limited to, social hierarchies, legal systems, songs, stories, poetry, festivals and dances. The built heritage associated with the pearling economy is also representative of the traditional uses and functions of these buildings and of the related specific building techniques and design (Kingdom of Bahrain Ministry of Culture & Information, 2012).

1.2 Historic, territorial and socio-cultural framework

Muharraq is located on the north-east side of the Bahrain Island in the Arabian Gulf, at an altitude of 13 m, a latitude of 26.2572 (26°15'25.920"N) and a longitude of 50.6119 (50°36'42.840"E) (TipTopGlobe.com, n.d.).

Figure 2 Map showing the location of Bahrain and its territorial waters in the Arabian Gulf as well as the core and the buffer zone of the Pearling, Testimony of an Island Economy UNESCO site. Image source (Kingdom of Bahrain Ministry of Culture & Information, 2012).

The development of the site was shaped by pearl cultivation, which moulded the local economy as well as the urban, architectural and social context. Pearl cultivation is deeply rooted in Bahrain's history since the Neolithic, under the Dilmun civilization. During the Roman Empire, pearls were cultivated and traded, and Bahrain developed an organized pearling industry (Kingdom of Bahrain Ministry of Culture & Information, 2012). This trend expanded and consolidated in the medieval period, reaching its apex between the XVIII and XIX century, when the global demand for pearls (mainly from Europe) grew exponentially, strongly affecting Bahrain's economic growth and territorial development in terms of infrastructure and services associated with the pearling market. In this framework, Muharraq became the main pearl-diving settlement in Bahrain.

Figure 3 Map of the Muharraq inscribed properties and their buffer zone. Image source (Kingdom of Bahrain Ministry of Culture & Information, 2012).

Figure 4 List of the 17 architectural structures listed in the World Heritage List in 2012. Image source (Kingdom of Bahrain Ministry of Culture & Information, 2012), adapted by the author.

2 This operation is intended to support local development, fostering socio-economic as well as cultural growth, by improving, for instance, the living conditions of residents, developing new commercial activities and improving urban mobility (Economist Newspaper, n.d.).

3 According to the UNESCO Convention (1972), article 1, the site is composed of 2 groups of buildings, 9 monuments and 4 sites.

Therefore, Pearling is a testimony of the Bahrain economy.
The site is a serial nomination including 15 sites (11 architectural properties; 3 marine sites, oyster beds, located in the northern territorial waters of the island; and one seashore area in the south of the Muharraq city, in the north-east of the island)3. The activities focused on the 11 architectural properties, including 17 architectural structures, located in the Muharraq urban area.

Figure 5 Images illustrating different phases associated with pearl cultivation. Image source (Kingdom of Bahrain Ministry of Culture & Information, 2012).

1.3 Capacity building for local professionals

The capacity-building activity had two main aims:

 Acquaint participants with a wide range of recording techniques and tools, to help Bahrain Authority for Culture and Antiquities (BACA) staff decide which techniques are best suited to heterogeneous sites and objectives.

 Disseminate information, knowledge, and skills in built heritage documentation strategies.

Addressing these aims, capacity building consisted of theoretical as well as practical training. It included a preparatory phase for the development of protocols and guidelines as well as three missions to Bahrain for on-field training activities. After completing the preparation of protocols and datasets, the training course was carried out at the Bahrain Authority for Culture and Antiquities (BACA) offices and in two selected historic architectural complexes in the Muharraq urban area, testing documentation workflows, techniques, and tools. The capacity-building activities concerned digital documentation techniques to record color, shape and geometry, assess conditions and identify character-defining elements (Mezzino, Santana Quintero, Pwint, Latt, & Rellensmann, 2016) of Bahrain's built heritage.
Theoretical and practical lessons were carried out to explain the procedures for data acquisition and processing, with special attention to the requirements of the project (i.e. the need for accurate and reliable measuring tools to capture the geometry of the buildings to be restored). The lessons had two aims: to train participants in a wide range of recording techniques and to help the staff decide which techniques are best suited. Each of the three missions was five days long. Within these three missions, intensive sessions in which different instruments and techniques were explained and applied in the field were carried out after introducing basic concepts of cultural heritage documentation. After that, the following recording techniques were explained:

 Total station: the work consisted in explaining the basic concepts and the measurement principle based on distance and angle measurements, converted into XYZ coordinates (Barazzetti et al., 2015). Particular attention was paid to the setup of a network materialized with control points on the ground and targets. The importance of the total station for the definition of a stable reference system was addressed. Participants learned how to set up the total station and start a new project by creating a false origin and a direction (azimuth). The work continued with the case of different station points, i.e. how to move the station to the next point with both the resection and the known-backsight-point techniques, without changing the reference system.
 Rectifying photography: theoretical and practical lessons were carried out to explain how to measure flat-like objects with digital images. The case of building façades was investigated and discussed. Procedures for image acquisition in the case of long façades were addressed (image mosaicking). Rectification with both analytical methods (integrating total station data) and geometric methods (vertical/horizontal lines with a known width/height ratio) was explained and applied to the façades of the Shaikh Isa bin Ali house, the Aljazzaf house and the Siyadi Majlis, used as case studies.

 3D photogrammetry: an introductory lesson on photogrammetry and image acquisition was carried out to explain the advantages and limitations of this technique. Considerations about the objects that can be surveyed/reconstructed via photogrammetry and the recommended overlap were discussed. The importance of suitable configurations (sequences, blocks, mixed configurations) for automated photogrammetric processing was underlined. Digital processing was carried out with the images acquired on site, for both single façades and complete rooms. Images were integrated with total station data to work in a georeferenced system. An aerial photogrammetry demonstration was carried out using a drone to complete the list of the most common digital documentation techniques used in cultural heritage recording; participants did not directly use this tool but processed the data coming from it, and the demonstration showed the potential of the method.

In terms of tools, the BACA staff was exposed to several instruments. For each tool, strengths, potentials, limits and constraints (in terms of training, time and costs) were outlined. A toolbox assessment was also provided considering BACA's needs and the specificities of Bahrain's built heritage. The tools used in the on-field activities are illustrated in section 3.2.
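The distance-and-angle measurement principle mentioned above reduces to a polar-to-Cartesian conversion. The sketch below illustrates it for a levelled instrument with zenith-referenced vertical angles; the function name and numbers are illustrative and not tied to any specific total station firmware.

```python
import math

def polar_to_xyz(station, slope_dist, hz_deg, zenith_deg):
    """Convert one total station observation into XYZ coordinates.

    station    -- (E, N, H) of the instrument in the project system
    slope_dist -- measured slope distance in metres
    hz_deg     -- horizontal direction, clockwise from the azimuth
                  chosen when the job was oriented (degrees)
    zenith_deg -- zenith angle, 90 deg = horizontal sight (degrees)
    """
    hz = math.radians(hz_deg)
    zen = math.radians(zenith_deg)
    hd = slope_dist * math.sin(zen)      # horizontal distance
    dh = slope_dist * math.cos(zen)      # height difference
    e = station[0] + hd * math.sin(hz)   # Easting
    n = station[1] + hd * math.cos(hz)   # Northing
    return (e, n, station[2] + dh)

# A horizontal 25 m sight due "east" of a station placed at a false origin:
print(polar_to_xyz((1000.0, 2000.0, 100.0), 25.0, 90.0, 90.0))  # ≈ (1025.0, 2000.0, 100.0)
```

Because every observation is expressed in the same project system, moving the instrument (by resection or a known backsight) only changes the station coordinates and orientation, not the conversion itself.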
In addition to on-field data acquisition employing different techniques, the BACA staff was also trained in the data processing phase. Agisoft Photoscan was taught and used to process the images coming from aerial and terrestrial photogrammetry. AutoCAD 2016 functionalities were taught, showing how to import, orient and manage a pointcloud in a CAD environment to generate 2D drawings. The Hugin software application was illustrated to create 360-degree panoramic pictures. Perspective Rectifier was demonstrated to rectify images and generate orthophotos. CloudCompare and Autodesk Recap were taught to present their capabilities in managing, referencing and combining pointclouds. In terms of management of the acquired data, Computer-Aided Design (CAD) and Building Information Modelling (BIM) systems were also illustrated. CAD workflows to produce plans, sections, and elevations with the data acquired from total station, digital images and photogrammetry were illustrated. As to BIM, workflows to produce a 3D model from the pointcloud generated with photogrammetry techniques were described, presenting how to orient, manage and model from the pointcloud in a BIM environment.

Figure 6 Image showing the tools associated with each recording technique.

Participants actively took part in both theoretical and practical lessons. They understood the role of digital data collection and processing in historic building restoration and preservation. They tested digital documentation workflows, understanding the role of visual documentation as well as the potential and limitations of such recording tools, on three designated heritage properties in Bahrain.

2. RECORDING STRATEGY

2.1 Training activities organization

According to the course objectives, the training approach involved collective collaboration among the different professional profiles of the BACA members.
In the training activities the trainees were divided into three main teams.

Figure 7 BACA participants during the documentation activities. Image source: Davide Mezzino.

Each working team acquired and processed data coming from the different techniques employed, generating 3D point cloud models to be used to develop 3D and 2D drawings. Each team was trained in the use of these documentation techniques to document the conditions, shape, color and geometry of Bahrain's historic structures. The teaching strategy involved an innovative role-playing technique, empowering participants and instructors to:

 Develop an understanding of the role of information in conservation, addressing national and international standards;

 Review the potential limitations of recording and documentation techniques, including simple and advanced tools, and the financial constraints;

 Develop a practical approach to the use of these tools and documentation techniques in order to capture information from cultural heritage resources;

 Include the use of information systems in cultural heritage resources management;

 Design reports for presenting information to stakeholders and decision makers.

In order to stimulate cultural exchange, each team was composed of professionals with different backgrounds, including engineers, architects, photographers, designers, archaeologists, and conservators. The work was fieldwork based, supervised by experts from CIMS, Carleton University, and the Gicarus Lab, Politecnico di Milano.
Each team had the opportunity to use and learn different documentation techniques, including:

 Total station survey;
 Field notes and sketches;
 Photogrammetry;
 Site digital photography portfolio, including panoramic photographs;
 2-dimensional drawing production;
 3D modelling in a BIM environment.

The capacity-building activities were carried out during three on-field missions from December 2016 to April 2017. The capacity building prepared for the Bahrain Authority for Culture and Antiquities (BACA) provided an overview, tailor-made protocols and hands-on training on the most effective techniques that have proven to be appropriate when recording heritage structures. Different techniques for digital documentation were explained and used during the practical activities in the three intense working weeks. A complete workflow for digital documentation was developed to collect accurate metric information and produce deliverables, from the preliminary planning phase of the survey to the production of measured drawings in AutoCAD, 3D models and digital orthophotos. The workflow and additional recommendations are described in the following sections.

2.2 Techniques and tools

The techniques illustrated and used in the capacity-building activities included aerial and terrestrial photogrammetry, Reflectorless Electronic Distance Measurement (REDM, total station survey) and rectifying photography. The techniques employed visual and dimensional tools. Visual tools were used for photogrammetry, digital photography and rectifying photography. These tools included hardware and software components: the former comprised levelled DSLR cameras and a drone; the latter, used in the data processing phase, included software applications such as Agisoft Photoscan, Perspective Rectifier, Hugin, Autodesk Recap, CloudCompare, Adobe Bridge and Adobe Photoshop.
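For the photogrammetric acquisitions with the DSLR cameras mentioned above, the expected level of detail can be estimated before going on site from the camera geometry. The sketch below is a back-of-the-envelope planning aid, not part of any of the listed tools; the 4.9 µm pixel pitch is an assumed, illustrative value.

```python
def ground_sample_distance(distance_m, focal_mm, pixel_pitch_um):
    """Size on the object (m/pixel) of one image pixel at a given
    camera-to-object distance, for a pinhole camera model."""
    return distance_m * (pixel_pitch_um * 1e-6) / (focal_mm * 1e-3)

def photo_base(footprint_m, overlap):
    """Camera spacing giving the requested forward overlap
    (e.g. 0.6 for 60 %) for a photo covering footprint_m."""
    return footprint_m * (1.0 - overlap)

# Illustrative facade shot: 24 mm lens, assumed 4.9 um pixel pitch, 10 m away
gsd = ground_sample_distance(10.0, 24.0, 4.9)
print(round(gsd * 1000, 2), "mm/pixel")  # prints: 2.04 mm/pixel
```

A quick check like this tells the operator whether the planned camera distance supports the target drawing scale, and how far apart consecutive photos may be while keeping enough overlap for automated matching.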
Dimensional records were instead used for hand recording and Reflectorless Electronic Distance Measurement (REDM). The tools included two total stations in the data acquisition phase and Autodesk AutoCAD software in the data processing phase.

3. DIGITAL DOCUMENTATION WORKFLOWS

3.1 Planning phase

The first step in the digital documentation workflow consisted in the definition of the level of detail required for the seventeen architectural structures to be renovated and conserved by the BACA staff (three of the seventeen structures were used as pilot cases in the capacity-building activity). In this specific case, detailed documentation (scale 1:200 to 1:10) was required. Tool selection followed the main criteria of operability, portability, precision, range, price, level of engagement, robustness and speed.

Figure 8 Digital documentation workflow. Image source: Davide Mezzino.

3.2 Data acquisition

Data acquisition was carried out through visual and dimensional tools. These included:

 two Leica Geosystems TS11 total stations with a distance accuracy of 2 mm + 2 ppm, also equipped with a mini prism for linework;
 two tripods;
 a drone Phantom 2 Vision+;
 a Nikon D800 DSLR camera with 36 MP;
 two Canon 5DS cameras with a 24 mm lens and a 16-35 mm lens.

3.3 Data processing

Different software applications and workflows were illustrated for data processing. Image processing included Adobe Bridge, Adobe Photoshop and Perspective Rectifier. To generate and manage the pointclouds coming from aerial and terrestrial photogrammetric data, Agisoft Photoscan, CloudCompare and Autodesk AutoCAD were used.
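The analytical rectification performed by tools such as Perspective Rectifier comes down to estimating a plane-to-plane homography from at least four control points and then resampling the image through it. Below is a minimal, library-free sketch of the estimation step only; the names are illustrative, and real software would add a least-squares solution over redundant points plus the actual pixel resampling.

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting for a small linear system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def homography(src, dst):
    """3x3 homography (h33 = 1) mapping 4 image points onto 4 rectified points."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = solve(A, b) + [1.0]
    return [h[0:3], h[3:6], h[6:9]]

def apply_h(H, pt):
    """Map one point through the homography (projective division by w)."""
    x, y = pt
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / w)

src = [(0.0, 0.0), (100.0, 12.0), (96.0, 108.0), (4.0, 98.0)]  # pixel corners of a skewed facade
dst = [(0.0, 0.0), (4.0, 0.0), (4.0, 3.0), (0.0, 3.0)]         # metric rectangle, 4 m x 3 m
H = homography(src, dst)
print(apply_h(H, src[2]))  # ≈ (4.0, 3.0)
```

Once the four measured corners of a facade are mapped onto a metric rectangle, any other pixel can be transformed with the same homography, which is what produces the rectified image used as a drawing base.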
To generate 2D measured drawings from the pointcloud and the orthophotos generated out of it, Computer-Aided Design (CAD) applications, namely Autodesk AutoCAD 2016, were illustrated and used. The training also included a session on how to develop 3D models from the pointcloud, addressing the challenges and limits of 3D semantic modelling in a Building Information Model (BIM) environment, using Autodesk Revit 2016.

3.4 Outcomes

The generated outcomes included:

 Protocols, guidelines and specifications on built heritage documentation;
 Orthophotos;
 Consolidated pointclouds;
 2D measured drawings;
 3D models (in a BIM environment);
 A final report with the planning of future activities to be carried out, considering BACA's human resources, skills and available techniques and tools.

Figure 9 Samples of orthophotos generated from rectified photography and from the point cloud. Software employed: Adobe Photoshop, Agisoft Photoscan and Perspective Rectifier. Image source: Authors and BACA staff.

Figure 10 Examples of the consolidated pointcloud. Software employed: Agisoft Photoscan and Autodesk Recap. Image source: Authors and BACA staff.

Figure 11 Aljazzaf house floorplans, an example of the generated 2D outcomes. Software employed: Autodesk AutoCAD 2016. Image source: Authors and Marwa Reyadh Alarayedh.

Figure 12 Sectioned views of the BIM model of the Aljazzaf house, generated from the pointcloud. Software employed: Autodesk Revit 2016. Image source: Authors and Marwa Reyadh Alarayedh.
Figure 13 BIM model of the Aljazzaf house developed from the consolidated pointcloud, and render samples generated from the BIM model. Software employed: Autodesk Revit 2016. Image source: Authors and Marwa Reyadh Alarayedh.

4. RECOMMENDATIONS AND FUTURE PERSPECTIVES

Technical recommendations focus on data reliability, longevity and fragmentation. Concerning information reliability, it is necessary to assure the quality and integrity of heritage records. For instance, a criterion to validate the metric quality of the obtained results could be to cross-validate data coming from different recording techniques, such as photogrammetry and laser scanning. Additionally, the combination of such records in a reliable survey network, surveyed with a total station, would provide an additional control on the measurements. This becomes even more evident considering the high accuracy level achievable with modern total stations and rigorous adjustment techniques.

Attention should also be paid to data longevity, particularly considering that the growth of digital records aggravates long-standing preservation problems. Possible solutions consist in structuring data in a server network easily accessible also to future users not familiar with the adopted workflows. Additionally, metadata should be included in the server network, providing a detailed description of how to use the information collected. Common format specifications are required to simplify users' daily work and future data retrieval, including the opportunity to integrate existing information.
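The cross-validation suggested above can be made quantitative by computing cloud-to-cloud distances between, for example, a photogrammetric cloud and a laser-scanning cloud of the same element, which is what tools like CloudCompare report. A brute-force sketch of the underlying metric (toy data; production tools use spatial indexing to handle millions of points):

```python
import math

def cloud_rmse(reference, test_cloud):
    """Root mean square of nearest-neighbour distances from each point of
    test_cloud to the reference cloud (brute force, for small samples)."""
    total = 0.0
    for p in test_cloud:
        d2 = min((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2 + (p[2] - q[2]) ** 2
                 for q in reference)
        total += d2
    return math.sqrt(total / len(test_cloud))

# Toy example: a scan patch checked against four photogrammetric points (metres)
ref = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (1.0, 1.0, 0.0)]
scan = [(0.002, 0.0, 0.0), (1.0, 0.001, 0.0), (0.0, 0.998, 0.0), (1.001, 1.0, 0.002)]
print(f"{cloud_rmse(ref, scan) * 1000:.2f} mm")  # prints: 1.87 mm
```

Reporting such a figure for each documented element gives a simple, reproducible check that independently acquired records agree within the tolerance implied by the target drawing scale.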
The database should be intended not as a static archive but as a dynamic tool subject to the pace of technological innovation. A final recommendation concerns the attention that should be paid to structuring information in BACA according to a shared information management system, to avoid data fragmentation and guarantee an efficient management of heterogeneous data sources.

5. CONCLUSIONS

The outcomes of the capacity-building activities addressed four main aspects: a) assuring the quality and integrity of heritage records in order to guarantee information reliability (Mezzino, Santana Quintero, Pei, & Reyes Rodriguez, 2015); b) considering data longevity issues, particularly in relation to the growth of digital records; c) avoiding data fragmentation by structuring information in BACA according to a shared information management system; d) planning heritage recording activities to organize timelines and calculate project costs according to different documentation scopes. These aspects were considered throughout the project time framework. The capacity-building project was not only a technical explanation of techniques and tools but a comprehensive examination of documentation strategies, aimed at identifying the most efficient and effective solutions according to specific project requirements and needs.

ACKNOWLEDGEMENTS

The project was carried out in the framework of the “Documentation of Built Heritage at Muharraq for Pearls Path Project” funded by the Bahrain Authority for Culture and Antiquities (BACA). The authors wish to acknowledge and thank BACA's endeavour for the conservation of Bahrain's heritage and this outstanding opportunity to contribute to this great project. It has been a unique opportunity to collaborate along with other BACA experts in a training aimed at improving documentation skills for the conservation and preventive maintenance of built heritage.
In addition, we wish to thank the support of Noura Al Sayeh, Ahmed Aljishi, Fatima Mahammed Rafiq, Miray Hasaltun Wosinski, Michal Wosinski and Alaa El Habashi. We also acknowledge the enthusiasm and participation of the attendees from BACA: Ahmed A. Nabi, Zakariya Abbas, Marwa Reyadh Alarayedh, Nailah Al Khalifa, Dalia Yusuf Abdulhameed, Fatma Yateem, and Fatma A. Nabi, which made this project successful.

REFERENCES

 Bahrain Authority for Culture and Antiquities. (n.d.). Sheikh Isa Bin Ali House. Retrieved from http://culture.gov.bh/en/visitingbahrain/destinations/Name,10554,en.html#.WSRNnWjytPY

 Barazzetti, L., Banfi, F., Brumana, R., Gusmeroli, G., Previtali, M., & Schiantarelli, G. (2015). Cloud-to-BIM-to-FEM: Structural simulation with accurate historic BIM from laser scans. Simulation Modelling Practice and Theory, 57, 71–87.

 Economist Newspaper. (n.d.). Muharraq Pearling Heritage Conservation and Urban Economic Revival. Retrieved March 21, 2017, from http://jobs.economist.com/job/10786/general-procurement-notice/

 Kingdom of Bahrain Ministry of Culture & Information. (2012). Nomination to the World Heritage List.

 Mezzino, D., Santana Quintero, M., Pwint, P. M., Latt, W. T. H., & Rellensmann, C. (2016). Technical assistance for the conservation of built heritage at Bagan, Myanmar. International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, XLI-B5, 945–952. https://doi.org/10.5194/isprsarchives-XLI-B5-945-2016

 Mezzino, D., Santana Quintero, M., Pei, W., & Reyes Rodriguez, R. (2015). Documenting Modern Mexican Architectural Heritage for Posterity: Barragán's Casa Cristo, in Guadalajara, Mexico. In ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, XXV International CIPA Symposium (pp. 199–206). Taipei, Taiwan.

 TipTopGlobe.com. (n.d.). Al Muharraq. Retrieved February 3, 2017, from http://www.tiptopglobe.com/city?n=Al Muharraq&p=97458

 UNESCO.
(n.d.). Pearling, Testimony of an Island Economy. Retrieved March 2, 2017, from Pearling, Testimony of an Island Economy The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Volume XLII-2/W5, 2017 26th International CIPA Symposium 2017, 28 August–01 September 2017, Ottawa, Canada This contribution has been peer-reviewed. doi:10.5194/isprs-archives-XLII-2-W5-513-2017 | © Authors 2017. CC BY 4.0 License. 519 work_z2ymi6jgnravzjawlxt7gywgqu ---- wp-p1m-39.ebi.ac.uk Params is empty 404 sys_1000 exception wp-p1m-39.ebi.ac.uk no 218544297 Params is empty 218544297 exception Params is empty 2021/04/06-02:16:21 if (typeof jQuery === "undefined") document.write('[script type="text/javascript" src="/corehtml/pmc/jig/1.14.8/js/jig.min.js"][/script]'.replace(/\[/g,String.fromCharCode(60)).replace(/\]/g,String.fromCharCode(62))); // // // window.name="mainwindow"; .pmc-wm {background:transparent repeat-y top left;background-image:url(/corehtml/pmc/pmcgifs/wm-nobrand.png);background-size: auto, contain} .print-view{display:block} Page not available Reason: The web page address (URL) that you used may be incorrect. Message ID: 218544297 (wp-p1m-39.ebi.ac.uk) Time: 2021/04/06 02:16:21 If you need further help, please send an email to PMC. Include the information from the box above in your message. Otherwise, click on one of the following links to continue using PMC: Search the complete PMC archive. Browse the contents of a specific journal in PMC. Find a specific article by its citation (journal, date, volume, first page, author or article title). 
work_z3z5ec7cizhlfb262b5in6ebpu ----

Playful Identities, or the Ludification of Culture

Joost Raessens, Utrecht University

Games and Culture, Volume 1, Number 1, January 2006, 52-57. DOI: 10.1177/1555412005281779. © 2006 Sage Publications.

One of the main aims of game studies is to investigate to what extent and in what ways computer games are currently transforming the understanding of and the actual construction of personal and cultural identities. Computer games and other digital technologies such as mobile phones and the Internet seem to stimulate playful goals and to facilitate the construction of playful identities. This transformation advances the ludification of today's culture in the spirit of Johan Huizinga's homo ludens.

Keywords: play; identity; narratology; ludology

In 2001, the first academic, peer-reviewed (online) journal dedicated to computer game research, Game Studies, came out. According to editor in chief Espen Aarseth, this was "Year One of Computer Game Studies" as an international, academic field.1 This claim was based on the success of the first international scholarly conference on computer games, Computer Games and Digital Textualities, organized by the IT University of Copenhagen in March of the same year.2 The year before, the MIT Program in Comparative Media Studies, directed by Henry Jenkins, had organized a national conference, Computer and Video Games Come of Age, in cooperation with the International Digital Software Association.3 These two theorists, Aarseth and Jenkins, both of them crucial to the emergence of game studies, faced each other in a debate held in Sweden in 2005.4 This debate will serve as a point of reference for my discussion of the current state of game studies in this article.
In answering the question, why game studies now, I will also refer to the academic work in game studies that I have been involved in since my appointment to the Faculty of the Humanities, Utrecht University, the Netherlands, in 1998.

Why Game Studies Now?

It is important to study computer games (including arcade games, console, and handheld video games) now because, like television and music, they have become a phenomenon of great cultural importance. As Jenkins stated in the debate, games are technologically, economically, aesthetically, socially, and culturally important: "This is a medium that anyone who wants to understand where our culture is at, has to look at." The computer game industry has a large impact on our culture, and as statistics show, it is the fastest growing entertainment industry, rivaling the film industry in revenues. Worldwide turnover is estimated at around $20 billion. As children spend more and more time on computer games, we witness a struggle of what attention-economists call "eyeball hours." Research done by the Dutch Social and Cultural Planning Office shows that in 2004, young people spent roughly the same amount of time on media as in the 1970s. Every week, they have 45 hours at their disposal for leisure activities, 19 hours of which they spend on media. This means that the time spent on playing computer games is no longer spent on, say, reading books, magazines, and newspapers.5 As Robert Edward Davis (1976) showed, "new" media such as motion pictures, radio, and television have always been promoted and attacked in popular argument since their introduction but only studied academically in detail much later.
One of the main aims of our University Program, New Media and Digital Culture, is to investigate to what extent and in what ways computer games are currently transforming our understanding of, as well as the actual construction of, personal and cultural identities. We study the impact of massive multiplayer online role-playing games, such as World of Warcraft; historical simulation games, such as Civilization and Age of Empire; and first-person shooter games, such as America's Army, and investigate whether these games offer an opportunity for a renaissance shift in today's culture (Rushkoff, 2005). In September 2005, Valerie Frissen, Jos de Mul (both from the Faculty of Philosophy, Erasmus University Rotterdam, the Netherlands), and I started Playful Identities, a research program funded by the Dutch Organization for Scientific Research (NWO). Because digital technologies seem to stimulate "playful goals" (Vattimo, 1998) or "the play element in culture" (Huizinga, 1955), we investigate the ways in which mobile phones, the Internet, and computer games not only facilitate the construction of these playful identities but also advance the ludification of culture in the spirit of Johan Huizinga's (1955) homo ludens.6

Why Game Studies Now?

In the Aarseth-Jenkins debate, Jenkins linked the difficulties inherent to a definition of game studies to the question of what the priority of research should be: "Should we focus on what is most gamelike, or most medialike, or most storylike, or more spatial, more character driven, or gender driven, or more ideological?" What both debaters agreed on is that the definition of game studies as a discussion between (mostly Scandinavian) ludologists and (mostly American) narratologists is an oversimplification and a reduction of the field. What threatens game studies is this false dilemma fallacy.
Game studies is a multifaceted field, and as members of an academic community we will only be able to deal with the complexity of computer games as hybrid texts if we take all these facets into account. Therefore, at the inaugural Digital Games Research Association (DiGRA) conference Level Up (at Utrecht in 2003) and in its proceedings (Copier & Raessens, 2003), as well as in the Handbook of Computer Game Studies (Raessens & Goldstein, 2005), we distinguished (at least) six different points of view or approaches from which computer games may be studied, namely, history, design and reception of computer games, and computer games as an aesthetic, cultural, and social phenomenon.7 Although the subtitle of our Playful Identities project (From Narrative to Ludic Self-Construction) seems to refer to an overall transformation of our postmodern culture from a predominantly narrative to a predominantly ludic ontology, in today's media culture it is to be expected that narrative and game elements will continue to coexist in hybrid forms (Juul, 2005). To be able to study this complex situation, our interdisciplinary program combines a conceptual-philosophical, media-theoretical, and qualitative-empirical approach. First, it contains a theory of ludic identity that critically elaborates on the work on narrative identity by the French philosopher Paul Ricoeur (de Mul, 2005). Second, we focus on concrete media practices made possible by the principles of new media (Jenkins, 2002; Manovich, 2001) in which users are not only caught in the system but also appropriate and domesticate these technologies (Frissen & Van Lieshout, in press).
Third, we transgress the somewhat simplified opposition Jenkins refers to in the debate, between European game studies as "an intellectual top-down perspective" that privileges "definitional arguments" and American game studies based on "a bottom-up dialogue with game designers, the game industry and fans." Our qualitative-empirical investigation consists, among other things, of participatory observation, in-depth interviews, discourse analysis, textual analysis, and 7-day media use diaries.

Why Game Studies Now?

According to both Aarseth and Jenkins, the word game is not a well-defined academic term. General statements that define what "all games are" are not very useful. We have to be more precise, for example by distinguishing between genres, between games you play for fun and so-called serious games, and between games in general (Aarseth's object of study) and mediated games (that of Jenkins). In my own contribution to the Handbook, I argued against the claim that all computer games are deconstructing the hidden, naturalized, ideologically presupposed rules of the medium. This process of deconstruction takes place in early computer games and in genres such as simulation games in particular (Raessens, 2005). In the expert meeting New Theories and Methods in Media Violence Research, which took place at the Free University of Amsterdam in April 2005, Dolf Zillmann defined this need for precision as "stratification." By this he means that instead of sweeping generalizations, we need differentiation, for example more elaborate research on "the different forms of media violence itself, the psychological make-up of its audience, and the behavior commonly subsumed under aggression." In our Playful Identities project, we distinguish between three different kinds of games and play, respectively, related to mobile phones, the Internet, and computer games.
We investigate how and to what extent the medium-specific characteristics of these media reconfigure the construction of personal and cultural identities. In the first project, we differentiate between mobile phone games one can play on a single mobile phone, so-called location-based or mixed reality games (e.g., BotFighters), and communication patterns that are augmented with play elements, such as the romantic play of flirting and the playful use of short message service, multimedia messaging system, and digital photography. In the second project, we investigate the Internet playground (Seiter, 2004): Internet-based games and playful Internet use, such as browsing the Web, meeting up with friends on sites such as Habbo Hotel, and instant messaging. In the third project, we investigate how computer games enable and/or disable gamers to participate in a ludic way in the reconstruction and deconstruction of pregiven identities and the construction of new playful identities (Raessens, 2005).

Why Game Studies Now?

The acceptance and proliferation of competing frameworks of interpretation seem to be the major characteristics of game studies within the humanities. This raises the question of whether there are objective rules and standards for determining which interpretations are best (Eco, 1992, 1994). An interesting difference of opinion on this subject emerged in the Aarseth-Jenkins debate.
According to Jenkins, new empirical evidence can force game researchers in general to modify or refine their research programs: "If they are smart, they change the tool to fit what they are looking at, if they are bad, it is a cookie cutter that only sees those things that their tool allows them to look at to begin with." Aarseth, on the other hand, argued that new empirical evidence predominantly proves narratology to be "not really a good model for studying and understanding" computer games and that we are witnessing "a transitional phase, a paradigm shift." In "Computer Game Studies, Year One," Aarseth already stated that "The debate about narratives and narratology's relevance to game studies . . . shows the very early stage we are still in" (see Note 1). Referring to "the struggle of controlling and shaping the theoretical paradigms," Aarseth defended the idea that computer games should be studied as a new discipline instead of within existing fields such as cinema and literature, which are only "colonising" games. What interests me most in this debate are the following methodological questions. Do we as an academic community of game researchers accept the coexistence of competing frameworks of interpretation, in accordance with the tradition of the humanities? This seems to be Jenkins's position, and it is one I agree with, when he states that both narratology and ludology can be equally productive. Or, do we adhere to the paradigmatic character of academic progress following Thomas Kuhn's philosophy of science? This seems to be Aarseth's position when he rules out narratology as an outdated paradigm. If we want game studies to really come of age academically, we should not only further develop different theories and methods but also make the latter the object of our research and discussion.

Notes

1. See www.gamestudies.org. In October 1993, the computer game magazine Edge was published for the first time.
Although very informative, Edge is not an academic journal. See www.edge-online.co.uk.
2. See diac.itu.dk/cgdt. In 2003, the Center for Computer Games Research was founded at the IT University of Copenhagen, see game.itu.dk. Principal researcher is Espen Aarseth.
3. See mit.edu/cms/games. International Digital Software Association's name has been changed to Entertainment Software Association; see www.theesa.com.
4. This debate took place on January 18, 2005, and was organized by the University of Umeå, Sweden. The debate is available as a real media stream; see rtsp://www2.humlab.umu.se:7070/archive/humlabseminariet/20050118_speldebatt.rm.
5. For these statistics, see www.theesa.com, www.game-research.com, and www.scp.nl. Generation M: Media in the Lives of 8-18 Year-olds, A Kaiser Family Foundation Study (see www.kff.org) shows in more detail that the most avid users of computer games are the same kids who most watch television and listen to music. One of the other key findings is that there are important demographic differences in media use, not only based on age but also on gender and race.
6. For more information about our research program, see www.playful-identities.nl.
7. For information about the Digital Games Research Association, see www.digra.org. For their conferences, see www.gamesconference.org. The term ludology in this context was first used by Gonzalo Frasca; see ludology.org.

References

Copier, M., & Raessens, J. (Eds.). (2003). Level up. Digital games research conference. Utrecht, the Netherlands: Faculty of Arts.
Davis, R. E. (1976). Response to innovation: A study of popular argument about new mass media. New York: Arno Press.
de Mul, J. (2005). The game of life: Narrative and ludic identity formation in computer games. In J. Raessens & J. Goldstein (Eds.), Handbook of computer game studies (pp. 251-266). Cambridge, MA: MIT Press.
Eco, U. (1992). Interpretation and overinterpretation. Cambridge, UK: Cambridge University Press.
Eco, U. (1994). The limits of interpretation. Bloomington: Indiana University Press.
Frissen, V., & Van Lieshout, M. (in press). ICT and everyday life: The role of the user. In P. P. Verbeek & A. Slob (Eds.), User behavior and technology design—Shaping sustainable relations between consumers and technologies. Dordrecht, the Netherlands: Kluwer.
Huizinga, J. (1955). Homo ludens: A study of the play element in culture. Boston: Beacon.
Jenkins, H. (2002). Interactive audiences? In D. Harries (Ed.), The new media book (pp. 157-170). London: BFI.
Juul, J. (2005). Games telling stories? In J. Raessens & J. Goldstein (Eds.), Handbook of computer game studies (pp. 219-226). Cambridge, MA: MIT Press.
Manovich, L. (2001). The language of new media. Cambridge, MA: MIT Press.
Raessens, J. (2005). Computer games as participatory media culture. In J. Raessens & J. Goldstein (Eds.), Handbook of computer game studies (pp. 373-388). Cambridge, MA: MIT Press.
Raessens, J., & Goldstein, J. (Eds.). (2005). Handbook of computer game studies. Cambridge, MA: MIT Press.
Rushkoff, D. (2005). Renaissance now! The gamers' perspective. In J. Raessens & J. Goldstein (Eds.), Handbook of computer game studies (pp. 415-421). Cambridge, MA: MIT Press.
Seiter, E. (2004). The Internet playground. In J. Goldstein, D. Buckingham, & G. Brougère (Eds.), Toys, games, and media (pp. 93-108). Mahwah, NJ: Lawrence Erlbaum.
Vattimo, G. (1998). Die Grenzen der Wirklichkeitsauflösung [The limits of the dissolution of reality]. In G. Vattimo & W. Welsch (Eds.), Medien-Welten Wirklichkeiten (pp. 15-26). München, Germany: Wilhelm Fink Verlag.

Joost Raessens is associate professor of new media studies at Utrecht University, the Netherlands. In 2003 he was conference chair of the inaugural Digital Games Research Conference Level Up, organized by Utrecht University in close collaboration with DiGRA. He coedited Handbook of Computer Game Studies (MIT Press, 2005) with Jeffrey Goldstein.
See www.raessens.com.

work_z42gbwr6ffckdg6v27nobj45gm ----

Editorial

The need for a standardised anthropometric protocol for objective assessment of pre- and postoperative breast surgery

Nicola Brown1, Joanna Scurr2
1School of Sport, Health & Applied Science, St Mary's University College, Waldegrave Rd, Twickenham, TW1 4SX, UK; 2Department of Sport & Exercise Science, University of Portsmouth, Cambridge Rd, Portsmouth, PO1 2ER, UK
Corresponding to: Nicola Brown. School of Sport, Health & Applied Science, St Mary's University College, Waldegrave Rd, Twickenham, TW1 4SX, UK. Email: nicola.brown@smuc.ac.uk.
Submitted Sep 09, 2012. Accepted for publication Oct 10, 2012. DOI: 10.3978/j.issn.2227-684X.2012.10.01
© Gland Surgery. All rights reserved. Gland Surgery 2012;1(3):142-145. www.glandsurgery.org

Over the last three decades studies have demonstrated an increase in breast cancer incidence in both the United States (US) (1) and the United Kingdom (UK) (2). Globally, Forouzanfar et al. (3) report that breast cancer incidence increased from 641,000 cases in 1980 to 1,640,000 cases in 2010, representing an annual rate of increase of 3.1%. Surgery is the cornerstone of definitive treatment for breast cancer and may involve removal of part (breast conserving surgery) or all (mastectomy) of the breast tissue (4,5). Of the 45,000 women diagnosed with breast cancer annually in the UK, 30% to 40% undergo mastectomy (6). Furthermore, in a study of re-operation rates following breast conserving surgery in England, it was identified that of 55,297 women who had primary breast conserving surgery, 18.5% required a second breast operation, of which 7.7% were mastectomies (7). Breast reconstruction following mastectomy has become an integral part of patient rehabilitation (8,9), with approximately 75% of women who have undergone a mastectomy going on to have immediate or delayed unilateral or bilateral reconstruction surgery (10). In 2011 it was estimated that approximately 96,000 breast reconstructive surgeries were performed in the US (11). Many studies have identified the negative psychosocial impact of mastectomy (12-14), and the benefits of breast reconstruction related to improving body image and restoring a lost sense of femininity are well documented (15-17). Consequently, objective evaluation of aesthetic outcomes after surgery for breast cancer is a consideration salient to reconstructive breast surgery.

Restoring the shape and symmetry of the breast to correct the residual deformity following mastectomy and recreate a natural appearance that is satisfying to the patient is the primary objective of breast reconstruction (18,19). However, when assessing outcomes of reconstructive surgery it has been identified that patient perceptions may differ from those of their physicians (10), and the 2011 National Mastectomy and Breast Reconstruction Audit (4) indicates that around one-fifth of women undergoing immediate breast reconstruction were not satisfied with the size of their reconstructed breast in comparison to their unaffected breast. Furthermore, one-third of patients were dissatisfied with how closely their breasts matched each other when unclothed. Linear deterioration in satisfaction with overall cosmetic outcomes of breast reconstruction has also been identified, reported by Clough et al. (20) to reduce from an acceptable level of 86% two years after completion of reconstructive surgery to only 54% at five years. Moreover, in both the US and the UK an upward trend in claims for poor cosmetic result in breast care has been observed, creating a significant cost burden (21,22). Despite the development of a range of direct and indirect techniques to assess operative outcomes and appraise breast aesthetics, there is no general consensus on the best assessment method of cosmesis (18,23-26) and explicit criteria remain an elusive outcome. Without explicit criteria, surgeons must develop and use their own criteria, or that which they feel most appropriate, to plan their reconstructive surgery. This variability in surgical planning may result in poor surgical outcomes leading to increased incidence of subsequent revision procedures (27).

Traditionally used subjective methods of assessment to evaluate the results of breast reconstructive surgery, such as ordinal scales and visual analogue scales, have been reported to lack accuracy and reproducibility (28-30), and it is suggested that reliance on observer rating interpretations and visual size estimates may negatively impact surgical outcomes (27). More sophisticated three-dimensional (3D) imaging techniques have been employed, such as stereophotogrammetry, laser scanning, 3D digital photography and light digitisers (18). However, whilst these techniques are non-invasive, most are based on limited validation (30-34) and the availability of 3D imaging hardware and software to improve patient care is limited (33). Furthermore, with single camera systems ranging from approximately $20,000 to $100,000 and the need for scanning systems to combine multiple cameras to obtain optimum results (27), these techniques may not be economically viable for routine use in clinical settings.
An Archimedean method of quantifying breast volume, whereby a female patient lowers her breast into a water-filled vessel and breast volume is calculated based on displaced water, has been utilised to evaluate patients' breasts preoperatively (35). Similarly, thermoplastic casting approaches have been developed to quantify breast volume (36). However, due to the time-intensive nature and expense of these volumetric methods (18), in addition to high levels of patient discomfort (30), these have also failed to gain acceptance and have limited application in everyday breast surgery (37). Distinct anthropometric measurements of the breast and its position relative to fixed skeletal and soft tissue landmarks provide a useful tool to appraise breast aesthetics, evaluate patients preoperatively and assess the outcome of surgical procedures to the breast (38-40). The absence of ptosis, a tear-drop shape, and proportional size with respect to the body are characteristics that are universally accepted criteria of the 'ideal' breast (26). However, studies to determine ideal anthropometric values of the female breast and establish standard values have relied on the subjective aesthetic judgments of one surgeon alone or have conveyed no aesthetic judgment and instead used average linear measurements of the breast (41). Moreover, there is currently no consensus on how to assess breast anthropometry, making comparison of outcomes difficult (33). However, this current lack of reference values should not negate the use of anthropometry as an assessment method of breast cosmesis. Rather, further research should aim to establish reference values and develop a standardised objective pre- and postoperative assessment to aid the quantification and interpretation of desired outcomes and inform patients of realistic outcomes.
At the whole-body level of body composition, skinfold thickness, circumference measures, skeletal breadths and segment lengths have been adopted as substitute measurement methods in clinical and public health work (42), as they are applicable to large samples and can provide national estimations and data for the analysis of secular changes in representative samples (43). Furthermore, the development of accredited systems to standardise techniques and increase the competencies of individuals involved in anthropometric measuring has been utilised successfully in other areas of anthropometric assessment. For example, the International Society for the Advancement of Kinanthropometry accreditation system for anthropometrists (44) has operated since 1996 with the aim of establishing a global standard for anthropometry. Over 3,000 anthropometrists from 49 countries have been accredited in anthropometric measurement techniques under this scheme (45). Adoption of these criteria allows standardisation of measurements between participants and of repeated measurements on the same participants. Furthermore, it allows comparisons to be made locally, nationally and internationally between sample groups. Although the development of a standardised anthropometric protocol to evaluate breast cosmesis has substantial challenges, it is critical to the advancement of objective assessment of the breast pre- and postoperatively. The instruments needed to measure anthropometric values of the breast are portable and inexpensive (40), allowing for routine clinical use. It would provide a tool to audit breast surgery outcomes, enabling comparison of outcomes of different surgical techniques, which may assist health care providers in evaluating and developing reconstruction services and enhancing standards of breast reconstruction.
Furthermore, it would aid quantification and interpretation of desired outcomes to inform patients of realistic outcomes, thus increasing patient satisfaction and reducing the upward trend in claims for poor breast cosmetic results. Ultimately this could improve patient outcomes and enhance the long-term health and well-being of breast cancer patients. The development of a standardised evaluation method for aesthetic assessment of the breast is not limited to reconstructive surgery alone. An objective measurement procedure could also be utilised in all forms of plastic surgery to the breast, including mastopexy, reduction mammoplasty and breast augmentation.

Acknowledgements

Disclosure: The authors declare no conflict of interest.

References

1. Altekruse SF, Kosary CL, Krapcho M, et al. SEER Cancer Statistics Review, 1975-2007 [Internet]. Bethesda, MD: National Cancer Institute; 2012 [cited 2012 Sep 7]. Available online: http://seer.cancer.gov/csr/1975_2007/
2. Johnson A, Shekhdar J. Breast cancer incidence: what do the figures mean? J Eval Clin Pract 2005;11:27-31.
3. Forouzanfar MH, Foreman KJ, Delossantos AM, et al. Breast and cervical cancer in 187 countries between 1980 and 2010: a systematic analysis. Lancet 2011;378:1461-84.
4. Jeevan R, Cromwell D, Browne J, et al. eds. Fourth annual report of the national mastectomy and breast reconstruction audit 2011. Leeds: The NHS Information Centre, 2011.
5. Lantz PM, Janz NK, Fagerlin A, et al. Satisfaction with surgery outcomes and the decision process in a population-based sample of women with breast cancer. Health Serv Res 2005;40:745-67.
6. Winters ZE, Benson JR, Pusic AL.
A systematic review of the clinical evidence to guide treatment recommendations in breast reconstruction based on patient-reported outcome measures and health-related quality of life. Ann Surg 2010;252:929-42.
7. Jeevan R, Cromwell DA, Trivella M, et al. Reoperation rates after breast conserving surgery for breast cancer among women in England: retrospective study of hospital episode statistics. BMJ 2012;345:e4505.
8. Nano MT, Gill PG, Kollias J, et al. Qualitative assessment of breast reconstruction in a specialist breast unit. ANZ J Surg 2005;75:445-53; discussion 371-2.
9. Roje Z, Roje Z, Janković S, et al. Breast reconstruction after mastectomy. Coll Antropol 2010;34 Suppl 1:113-23.
10. Morrow M, Pusic AL. Time for a new era in outcomes reporting for breast reconstruction. J Natl Cancer Inst 2011;103:5-7.
11. American Society of Plastic Surgeons. 2000/2010/2011 National Reconstructive Procedures. 2012 [cited 2012 Sept 7]. Available online: http://www.plasticsurgery.org/Documents/news-resources/statistics/2011-statistics/2011-reconstructive-procedures-trends-statistics.pdf
12. Ganz PA, Schag AC, Lee JJ, et al. Breast conservation versus mastectomy. Is there a difference in psychological adjustment or quality of life in the year after surgery? Cancer 1992;69:1729-38.
13. Yurek D, Farrar W, Andersen BL. Breast cancer surgery: comparing surgical groups and determining individual differences in postoperative sexuality and body change stress. J Consult Clin Psychol 2000;68:697-709.
14. Brandberg Y, Sandelin K, Erikson S, et al. Psychological reactions, quality of life, and body image after bilateral prophylactic mastectomy in women at high risk for breast cancer: a prospective 1-year follow-up study. J Clin Oncol 2008;26:3943-9.
15. Al-Ghazal SK, Fallowfield L, Blamey RW. Does cosmetic outcome from treatment of primary breast cancer influence psychosocial morbidity? Eur J Surg Oncol 1999;25:571-3.
16. Parker PA, Youssef A, Walker S, et al.
Short-term and long-term psychosocial adjustment and quality of life in women undergoing different surgical procedures for breast cancer. Ann Surg Oncol 2007;14:3078-89.
17. Sun C, Zheng R, Li J, et al. Image perception of female breast beauty and its relation to 3D anthropometric measurements. JFBI 2011;4:23-24.
18. Kim MS, Sbalchiero JC, Reece GP, et al. Assessment of breast aesthetics. Plast Reconstr Surg 2008;121:186e-94e.
19. Isern AE, Tengrup I, Loman N, et al. Aesthetic outcome, patient satisfaction, and health-related quality of life in women at high risk undergoing prophylactic mastectomy and immediate breast reconstruction. J Plast Reconstr Aesthet Surg 2008;61:1177-87.
20. Clough KB, O'Donoghue JM, Fitoussi AD, et al. Prospective evaluation of late cosmetic results following breast reconstruction: I. Implant reconstruction. Plast Reconstr Surg 2001;107:1702-9.
21. Richards E, Vijh R. Analysis of malpractice claims in breast care for poor cosmetic outcome. Breast 2011;20:225-8.
22. Vijh R, Anand V. Malpractice litigation in patients in relation to delivery of breast care in the NHS. Breast 2008;17:148-51.
23. Ching S, Thoma A, McCabe RE, et al. Measuring outcomes in aesthetic surgery: a comprehensive review of the literature. Plast Reconstr Surg 2003;111:469-80; discussion 481-2.
24. Turner AJ, Dujon DG. Predicting cup size after reduction mammaplasty. Br J Plast Surg 2005;58:290-8.
25. Munshi A, Kakkar S, Bhutani R, et al. Factors influencing cosmetic outcome in breast conservation. Clin Oncol (R Coll Radiol) 2009;21:285-93.
26. Avşar DK, Aygit AC, Benlier E, et al. Anthropometric breast measurement: a study of 385 Turkish female students. Aesthet Surg J 2010;30:44-50.
27. Tepper OM, Small K, Rudolph L, et al. Virtual 3-dimensional modeling as a valuable adjunct to aesthetic and reconstructive breast surgery. Am J Surg 2006;192:548-51.
Gland Surgery 2012;1(3):142-145www.glandsurgery.org 28. Malata CM, Boot JC, Bradbury ET, et al. Congenital breast asymmetry: subjective and objective assessment. Br J Plast Surg 1994;47:95-102. 29. Lowery JC, Wilkins EG, Kuzon WM, et al. Evaluations of aesthetic results in breast reconstruction: an analysis of reliability. Ann Plast Surg 1996;36:601-6; discussion 607. 30. Kovacs L, Eder M, Hollweck R, et al. Comparison between breast volume measurement using 3D surface imaging and classical techniques. Breast 2007;16:137-45. 31. Galdino GM, Nahabedian M, Chiaramonte M, et al. Clinical applications of three-dimensional photography in breast surgery. Plast Reconstr Surg 2002;110:58-70. 32. Tanabe YN, Honda T, Nakajima Y, et al. Intraoperative application of three-dimensional imaging for breast surgery. Scand J Plast Reconstr Surg Hand Surg 2005;39:349-52. 33. Dabeer M, Fingeret MC, Merchant F, et al. A research agenda for appearance changes due to breast cancer treatment. Breast Cancer (Auckl) 2008;2:1-3. 34. Catanuto G, Patete P, Spano A, et al. New technologies for the assessment of breast surgical outcomes. Aesthet Surg J 2009;29:505-8. 35. Bulstrode NW, Shrotria S. Prediction of cosmetic outcome following conservative breast surgery using breast volume measurements. Breast 2001;10:124-6. 36. Parmar C, West M, Pathak S, et al. Weight versus volume in breast surgery: an observational study. JRSM Short Rep 2011;2:87. 37. Edsander-Nord A, Wickman M, Jurell G. Measurement of breast volume with thermoplastic casts. Scand J Plast Reconstr Surg Hand Surg 1996;30:129-32. 38. Penn J. Breast reduction. Br J Plast Surg 1955;7:357-71. 39. Westreich M. Anthropomorphic breast measurement: protocol and results in 50 women with aesthetically perfect breasts and clinical application. Plast Reconstr Surg 1997;100:468-79. 40. Khan HA, Bayat A. A geometric method for nipple localization. Can J Plast Surg 2008;16:45-7. 41. Liu YJ, Thomson JG. 
Journal of Cutaneous and Aesthetic Surgery - Jul-Dec 2008, Volume 1, Issue 2

REVIEW ARTICLE

INTRODUCTION
Consultation is seeking an expert opinion. Expert opinion obtained for a medical purpose from a distance is called telemedical consultation. The distance can be between continents, countries, states, cities, or even doctors a few metres apart. The World Health Organization defines telemedicine as the practice of health care using interactive audio, visual, and data communications. In 1995, Perednia and co-workers[1] introduced the term "Teledermatology" (TD), and in 1997, Zelickson and Homan[2] first demonstrated videoconference teledermatology in their nursing home setting.
TD is a subset of telemedicine that uses telecommunications (information) technologies to deliver dermatology services at a distance. The common principle of TD services is to reach the unreached for dermatology care. Telemedicine reduces travel, waiting time and treatment cost, minimizes follow-up visits and helps to deliver specialty health care services to remote geographic regions. Dermatology is a visual specialty; the image is the gold standard for dermatological diagnosis.[3] Chronic disorders that require a long duration of treatment need frequent monitoring and several follow-up visits, which involve travel expenses and prolonged waiting times. These circumstances have made dermatology an ideal specialty for telemedical applications, and it is not surprising that dermatologists have been early adopters of telemedical applications.[4] TD delivers screening and triage services in melanoma and pigmented skin lesions,[3] and can be used for screening, triage and selection of patients for dermatosurgery practice. TD delivers treatment and follow-up care in leg ulcers, allows difficult-to-manage cases such as inflammatory and neoplastic conditions[5] to be discussed with worldwide experts, and supports education of health care professionals and patients. It achieves 81% concordance for exchanging opinions on challenging inflammatory conditions and skin neoplasms.[5]

CLASSIFICATION
The different teledermatological tools available are shown in Figure 1.

Real time or videoconference
Video consultation uses video-conferencing equipment to connect the patient, often with their general practitioner (GP) or nurse present, with a distant consultant. The evaluated diagnostic accuracy varies between 67-80% compared to face-to-face consultation.[6] Initial studies on the economic evaluation of interactive teledermatology compared to face-to-face consultation considered videoconferencing to be expensive.
However, more recent studies[7] have confirmed it to be economical. This is due to improved technology and the decrease in hardware cost. Table 1 summarizes the various types of videoconference (VC) available for practice.

Teledermatology: Its Role in Dermatosurgery
Garehatty Rudrappa Kanthraj
Department of Dermatology, Venereology and Leprosy, Jagadguru Sri Shivarathreshwara University Medical College Hospital, Ramanuja Road, Mysore, Karnataka, India.
Address for correspondence: Dr. Garehatty Rudrappa Kanthraj, "Sri Mallikarjuna Nilaya", # HIG 33 Group 1 Phase 2, Hootagally KHB Extension, Mysore-570 018, Karnataka, India. E-mail: kanthacad@yahoo.com

ABSTRACT
Dermatologic surgery and aesthetic dermatology are rapidly emerging and expanding specialties in India. However, dermatologists practicing surgeries and aesthetics in India represent a highly selected group and are mostly confined to metros. Dermatologists in peripheral and remote regions need to reach these specialists for the benefit of their patients, and teledermatology is an invaluable tool for this purpose. Video-conference, store and forward, satellite communication, hybrid teledermatology, mobile teledermatology, the integration model, nurse-led teledermatology, teledermatology focusing on difficult-to-manage cases, and screening and triage services are the various teledermatology services developed to suit the needs of dermatology care from a distance. Type of teledermatology service, pattern of network connectivity and purpose of dermatology service are the three cardinal parameters for management of dermatoses from a distance. This article reviews the literature and analyzes the possible options available for a teledermatosurgery practice.
KEYWORDS: Dermatosurgery, teledermatology, dermatopathology, dermatology
Satellite communication
The satellite communication network (Satcom) is an Indian Space Research Organisation (ISRO) initiative to reach the unreached in inaccessible remote geographic regions; where terrestrial connectivity cannot be established, it is achieved through satellite connectivity.[8] Skin camps are organized by mounting Satcom equipment on a bus or van that travels to remote geographic regions where an integrated services digital network (ISDN) connection cannot be established; satellite network connectivity is established with a tertiary center[9] and dermatology care is delivered.

Store and forward system
The store and forward (SAF) system stores patient data (digital images, clinical and demographic information) sent by GPs in an electronic medium for future access by consultants in referral centers, to deliver quality health care in remote geographic regions. SAF involves transmission of digital images, and asynchronous evaluation is practiced; the simultaneous presence of the health care professionals is not required. About 80-90% of dermatological conditions may be diagnosed by SAF TD.[10] It is the most commonly used technology. There is 60-80% total agreement and 70-90% partial agreement when in-person diagnosis is compared to both real-time (synchronous) and SAF TD.[10] Various feasibility studies on SAF TD are summarized in Table 2; it has been found to be cheap, and easy to set up and practice. Low-cost electronic equipment, quick electronic transfer of high-quality digital images and universal access for health care workers enhance the practice of SAF. SAF uses a digital camera with an average image resolution of 640 × 480 pixels. Different diagnosis agreement rates (68%,[11] 89%,[12] 58%[13] and 48%[14]) have been documented in various studies. The images are rapidly transferred[6] and stored in JPEG (Joint Photographic Experts Group; http://www.jpeg.org) format using the internet.
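In essence, the SAF workflow persists a referral record and lets the consultant review it asynchronously. A minimal sketch in Python; the record layout and field names are illustrative assumptions, since the article specifies only that digital images plus clinical and demographic information are stored:

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import List, Optional

@dataclass
class SAFReferral:
    # Hypothetical record layout; the article specifies only that digital
    # images, clinical and demographic information are stored for later,
    # asynchronous review by a consultant.
    patient_id: str
    history: str                 # clinical information from the referring GP
    demographics: dict           # e.g. age, sex, location
    image_files: List[str] = field(default_factory=list)  # JPEG file paths
    submitted_at: datetime = field(default_factory=datetime.utcnow)
    consultant_opinion: Optional[str] = None  # filled in asynchronously

def forward(inbox: List[SAFReferral], referral: SAFReferral) -> None:
    """'Forward' step: place the stored record on the consultant's queue."""
    inbox.append(referral)

# The GP stores and forwards a record; the consultant replies later,
# without both parties needing to be present at the same time.
inbox: List[SAFReferral] = []
forward(inbox, SAFReferral("P-001", "Two-week itchy plaque on the forearm",
                           {"age": 42, "sex": "F"}, ["lesion_close_up.jpg"]))
inbox[0].consultant_opinion = "Review with dermoscopy at next visit"
```

The key design point is the decoupling: the store step and the consultant's reply are separate events, which is what distinguishes SAF from real-time videoconferencing.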
Poor image quality and lack of referral proforma data may lead to poor agreement.[14] Telemedical wound care and follow-up use a digital camera, with good agreement to face-to-face consultation, which is regarded as the gold standard.[15] A referral hospital can have a fixed time on a day, depending on the caseload, for a teledermatology clinic to prevent unnecessary delay. The general practitioner can send the history with photographs well in advance. Relevant discussions are made, as both ends are prepared. The patient, general practitioner or nurse available in the given period interacts by e-chat, web cam or voice mail for any clarifications required from the consultant.

Table 1: Type of videoconference, their components and cost-effectiveness while setting up a videoconference teledermatology center[16]
Type | Components | Cost
Stand-alone VC | Video codec with built-in camera (pan, tilt and zoom), built-in microphone and audio-video interfaces to connect line/network interface ISDN/LAN | Expensive
PC-based VC with PC add-on card codec | PC, web camera with built-in microphone, with audio/video output to connect ISDN/LAN | Moderately expensive
PC-based VC with in-camera codec* | Built-in video camera connected to the PC through a USB port | Less expensive
PC-based VC using web camera and software* | Video codec function (audio-video compression and data formatting) is done using software loaded on the PC; a web camera is connected to the PC using a USB port | Cheapest
*To connect to a multimedia projector and achieve a large display, the VGA/XVGA output of the PC may be used. (Reproduced with permission: Kanthraj GR, Srinivas CR. Indian J Dermatol Venereol Leprol 2007;73:5-12.)

Figure 1: Various teledermatological tools
This approach has been recommended while offering periodic follow-up care.[16] SAF provides patient, referring clinician and dermatologist satisfaction.[7] The referring clinician has the additional advantage of educational benefit.[7]

Data transmission medium: the Internet, Wi-Fi and Wi-Max
Digital lines that enhance data transmission are called "integrated services digital network" (ISDN) lines. Increasing the number of ISDN lines increases the data-carrying capacity, or bandwidth, from 128 kbps (one ISDN line) to 256 and 384 kbps (two and three ISDN lines, respectively).[6] Wi-Fi[16] and Wi-Max[16] are super-speed wireless network connections that enable high-speed transmission of data. Wi-Fi (wireless fidelity) enables wireless local area network (WLAN) connections to mobile devices, digital cameras, PCs and personal digital assistants.[16] Some devices have Wi-Fi built in, while others require adding a Wi-Fi network card. Wi-Fi uses radio waves, with radio transmitters called routers and receivers (access points/hot spots). Superior to Wi-Fi is Wi-Max (worldwide interoperability for microwave access), a broadband wireless access technology providing super-speed Internet access at 70 Mbit/s. Images may be received in large numbers by a referral hospital during a clinical trial or teleconsultation. The speed of the transfer medium is important for rapid and easy retrieval of large data volumes. Therefore, a telemedical center should have a Wi-Fi or Wi-Max installation for quality and speed of data transmission.[16]

Hybrid model
The combination of SAF TD in the first step, followed by VC TD in the second step, is called hybrid TD.[17,18] It saves time, clarifies doubts and avoids misinterpretation at both ends. This process achieves the best physician and patient satisfaction.
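As a rough illustration of the bandwidth figures quoted above, the idealized transfer time of a stored JPEG is simply its size in bits divided by the line rate. The sizes below are of the order reported for wound photographs and cellular-phone images in Table 2; protocol overhead is ignored, so real transfers are slower:

```python
def transfer_seconds(size_kb: float, bandwidth_kbps: float) -> float:
    """Idealized transfer time: (image size in bits) / (line rate in bits/s).

    Assumes 1 KB = 1024 bytes and a fully available, uncontended link,
    so these figures are lower bounds on real-world transfer times.
    """
    return size_kb * 1024 * 8 / (bandwidth_kbps * 1000)

# A ~1500 KB wound photograph over one, two and three bundled ISDN lines:
for kbps in (128, 256, 384):
    print(f"{kbps} kbps: {transfer_seconds(1500, kbps):.0f} s")  # 96, 48, 32 s

# A ~22 KB cellular-phone image is far lighter over a single ISDN line:
print(f"128 kbps: {transfer_seconds(22, 128):.1f} s")            # 1.4 s
```

The contrast between roughly a minute and a half for a high-resolution photograph and under two seconds for a phone image is one reason mobile TD became practical even before broadband was widespread.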
Mobile or cellular teledermatology
Portable devices like cellular phones and personal digital assistants provide an inbuilt camera to capture digital images, along with computing and networking features, to deliver dermatology care at a distance. They provide immediate image access and direct interaction, and make it possible to obtain clarification. Periodic evaluation of leg ulcers and skin images using cellular phones and personal digital assistants is practiced. Quality and speed of image transmission are no longer an obstacle. Melanoma screening with cellular phones using mobile teledermoscopy revealed a diagnostic agreement of 90% compared to face-to-face consultation.[19] Cellular phones[20,21] and personal digital assistants[22,23] allow good-quality images to be taken and sent to the expert from remote geographic regions via a wireless network, e.g., the global system for mobiles (GSM) and the universal mobile telecommunication system (UMTS). New-generation cellular phones allow good-quality images to be taken and transmitted directly to other cellular phones (via multimedia messages) and computers (via e-mail or Bluetooth wireless connection); the diagnosis agreement is 82% compared to face-to-face consultation.[20] A feasibility study[20] confirmed the importance of the cellular phone in telemedical wound care. Transfer of skin images using cellular phones with a diagnosis agreement of 70% has been documented.[21] Various feasibility studies on mobile TD are summarized in Table 2.

Table 2: Feasibility studies involving store and forward and mobile (cellular) teledermatology practice
Author, year | Contribution | Device (model) | CCD (megapixels) | Image resolution (pixels) | Storage (JPEG, KB) | Diagnosis agreement* (%)
Whited et al.[11], 1999 | Store and forward of skin lesions | Digital camera | - | 1280×1000 | - | 68
High et al.[12], 2000 | Store and forward of skin lesions | Digital camera (Sony DSC-F1, Sony Corporation, New York) | - | 640×480 | - | 89
Tucker et al.[13], 2005 | Digital imaging of skin lesions for teleconsultation | Digital camera (Fujifilm MX-1700) | - | 640×480 | 32 MB | 58
Mahendran et al.[14], 2005 | Digital imaging and teleconsultation of skin malignancies | Digital camera (Coolpix 950, Nikon Corporation) | - | 1200×1600 | - | 48
Salmhofer et al.[15], 2005 | Digital camera in wound teleconsultation | Digital camera (Coolpix 995, Nikon Corporation) | 3.3 | 2048×1536 | 1500 | 87
Braun et al.[20], 2005 | Cellular phone in telemedical wound care | Cellular phone (Nokia 7650, Espoo, Finland) | - | 640×480 | 15-22 | 82
Massone et al.[21], 2005 | Cellular phone in teledermatology | Cellular phone (Nokia 7650, Espoo, Finland) | - | 640×480 | 13-35 | 70
Massone et al.[22], 2006 | Personal digital assistants in teledermatology | Personal digital assistant (Sony Clie PEG-NZ90, Tokyo, Japan) | 2 | 1200×1600 | 829-989 | 79
*Compared to face-to-face consultation (gold standard)

Personal digital assistants
Laptops and handheld computers that are convenient to handle and offer combined features like camera, computing and networking are called personal digital assistants.[22,23] They are convenient for health care professionals to capture and transfer images. Massone et al.[22] demonstrated the importance of personal digital assistants in teledermatology, with a 79% diagnosis agreement. Diagnostic agreement is low for cutaneous malignancies when there is poor image quality and a lack of referral proforma data.[14] Recent studies[24] have shown that digital photography gives such high image quality that a neoplastic lesion that cannot be diagnosed from a high-quality digital image could rarely be diagnosed by face-to-face consultation.[24] Advances in digital imaging and internet solutions have overcome technical limitations.[3]
Teleconsultation applied as a screening method for malignant tumors is a useful technique that can be incorporated into day-to-day dermatology practice.[24]

Integration model
Health care professionals in rural areas are the "eyes" of the expert. For the expert to guide them, deliver follow-up and monitor progress, they should have a periodic audit of visual parameters and dimensions. This has been demonstrated in the treatment and follow-up of leg ulcers. Various authors[25-31] have demonstrated the effectiveness of computerized measurements of leg ulcers for monitoring therapeutic assessment. The systematic functional integration of electronic devices and software to capture, transfer, store, measure and deliver follow-up care is the principle of the integration model, and it has been used effectively for leg ulcers in remote geographic regions.[32] It is illustrated in Figure 2.

Figure 2: The integration model to capture, transfer, measure and follow up skin lesions to deliver SAF teledermatology care[32] (Reproduced with permission from the American Medical Association: Kanthraj GR. Arch Dermatol 2005;141:1470-1.)

In step 1, a close-up image of the ulcer with the surrounding skin, suitable for teleconsultation,[6] is captured, irrespective of site, with a digital camera or cellular phone camera (Figure 2). In step 2, the images are transferred via e-mail. In step 3, the wound margins (area and perimeter) are calculated with computer-aided design (CAD) software. In step 4, the software professional delivers the result. Periodic evaluation is performed. Bizarre-shaped lesions at any site are measured accurately and stored. This is useful for forensic experts to reproduce at the time of expert evidence.
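Step 3 above, the CAD measurement, essentially computes the area and perimeter of the traced wound margin. The article does not specify the software internals, so the following is a minimal stand-in sketch, assuming the margin has been traced as (x, y) pixel coordinates on an image with a known millimetre-per-pixel calibration:

```python
import math

def wound_metrics(margin, mm_per_px):
    """Area (mm^2) and perimeter (mm) of a closed wound-margin polygon.

    Stand-in for the unspecified CAD software: shoelace formula for the
    area, summed segment lengths for the perimeter. `margin` is a list of
    (x, y) pixel coordinates tracing the wound edge; `mm_per_px` is the
    calibration of the photograph.
    """
    doubled_area = 0.0
    perimeter_px = 0.0
    n = len(margin)
    for i in range(n):
        x1, y1 = margin[i]
        x2, y2 = margin[(i + 1) % n]          # wrap around to close the loop
        doubled_area += x1 * y2 - x2 * y1     # shoelace cross-product term
        perimeter_px += math.hypot(x2 - x1, y2 - y1)
    area_mm2 = abs(doubled_area) / 2 * mm_per_px ** 2
    return area_mm2, perimeter_px * mm_per_px

# A 40 px by 40 px square margin on a photograph calibrated at 0.5 mm/px:
area, perimeter = wound_metrics([(0, 0), (40, 0), (40, 40), (0, 40)], 0.5)
# area == 400.0 mm^2, perimeter == 80.0 mm
```

Because the shoelace formula works for any simple polygon, the same routine handles the bizarre-shaped lesions mentioned above without special cases.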
Immediate access to visual parameters and measurement of lesions is achieved. Routine follow-up care in a remote area under the close supervision of a higher center is performed. Computerized measurements are rapid, easy and precise, and suited to SAF TD.[16,32] This approach enables diagnosis, management and periodic assessment of leg ulcers, and delivers follow-up care to achieve physician and patient satisfaction. Repigmenting vitiligo can be serially monitored after medical or surgical treatment.[26] Serial monitoring of the images and determination of their dimensions and percentage of pigmentation documents the progress of repigmentation. The system automatically plots wound-healing regression or vitiligo repigmentation on a computerized graph. Rapid capture, transfer and calculation with negligible human intervention, minimizing interobserver and intraobserver variations, are achieved.[32] The programmer does the calculation and provides the results to dermatologists at the telemedical center. Computer professionals process the large data volumes without any additional burden on the experts. All health care professionals at a tertiary centre utilize the software in a centralized location, maximizing the SAF telemedical center's utility and generalizability.[27]

APPLICATIONS OF TELEDERMATOLOGY
Teledermatological monitoring of leg ulcers
SAF and mobile TD play a key role in leg ulcer management. A leg ulcer is a chronic disorder that requires frequent and periodic monitoring. The patient needs frequent visits from a long distance, incurring huge costs. This can be more distressing in the elderly, in whom leg ulcers are known to occur more frequently, requiring prolonged waiting times. Health care professionals (HCP) need to be trained in digital photography, uploading, web applications and wound care.
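The wound-healing regression or repigmentation curve plotted by such a system reduces to a percentage-change calculation against the baseline visit. A minimal sketch, with hypothetical serial area measurements:

```python
def percent_change(baseline_area, current_area):
    """Percentage reduction of the measured area relative to baseline.

    Positive values mean the ulcer is shrinking; with pigmented area
    substituted for ulcer area, the same series tracks vitiligo
    repigmentation instead. All numbers here are hypothetical.
    """
    return (baseline_area - current_area) * 100.0 / baseline_area

# Serial areas (mm^2) from four periodic follow-up images:
visits = [400.0, 310.0, 220.0, 90.0]
healing_curve = [percent_change(visits[0], a) for a in visits]
# healing_curve == [0.0, 22.5, 45.0, 77.5], ready to plot against visit date
```

Plotting this series against visit dates gives the "computerized graph" of healing regression with no manual calculation at the remote end.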
Ninety percent of images for consultation are excellent.[33] A good response is observed in 70% of cases, with good patient and physician satisfaction.[33] SAF and mobile TD hold great potential for long-term wound care, in cooperation with home care nurses.[25] Leg ulcers are monitored for slough, necrosis and granulation tissue formation.[15] In a recent study, the feasibility and acceptance of telemedicine for wound management were demonstrated.[34] The consultant personally examined, assessed and classified the ulcers and recommended medications at the initial visit. Follow-up visits were done by the home care nurses using SAF TD. In 90% of teleconsultations the quality of images was good, with good physician and patient satisfaction.[33]

Teledermatopathology
Teledermatopathology is an important area for the application of telemedical tools.[35-37] It involves transmission of images from distant locations to consulting dermatopathologists. It is of relevance to the Indian scenario, where there are only a few dermatopathologists and even general histopathologists are available only in cities. Teledermatopathology is achieved by (a) video-image (dynamic) analysis, (b) store and forward (static) and (c) the virtual slide system (VSS).[35] VSS is a recently developed technology wherein, using a robotic microscope, any field of the specimen is magnified at the discretion of the dermatopathologist. VSS stores the images on a virtual slide server available on the web.[36] VSS is being used for the diagnosis of pigmented skin lesions, difficult-to-manage cases and inflammatory skin diseases. Though more expensive than the store and forward system, VSS represents the future of this discipline. A study[36] investigated the role of teledermatopathology by VSS in forty-six biopsy specimens from inflammatory skin diseases.
Telediagnoses agreed with the gold standard and conventional diagnosis with an average of 73% and 74%, respectively.[36] Complete concordance among all teleconsultants with the gold standard and conventional diagnosis was found in 20% of the cases. The study concluded that the system is not fully suitable for the diagnosis of inflammatory dermatoses, as diagnosis and interpretation often depend on the availability of complete clinical data, and because of the intrinsic difficulties in the diagnosis of such diseases. This is of particular relevance to the Indian situation, where most biopsies are performed for inflammatory diseases and where skin cancer is less prevalent. The recent concept of hybrid teledermatopathology[37] incorporates the advantages of both the dynamic and the virtual slide system and overcomes their individual shortcomings. It consists of motorized microscopes with remote control and a scanner for slide digitization.

Table 3: Types of teledermatology service, purpose, area of application and their potential use in dermatosurgery
Type of teledermatology service | Purpose | Dermatoses/Area of application
Hybrid model (combination) | Incorporates the advantages of both synchronous and asynchronous teledermatology | Routine teledermatology service and follow-up care
Conventional store and forward teledermatology | Screening | To determine suitability for dermatosurgery
Video-conference teledermatology | Counseling and education | Pre-surgery counseling
Integration model | Follow-up care after medical or surgical treatment | Leg ulcers and vitiligo
Mobile or cellular teledermatology | Screening and follow-up care after dermatosurgeries | Leg ulcers, vitiligo or any dermatosurgery procedure
Teledermatology in cutaneous aesthetic surgery
SAF can be used to screen lesions and determine their suitability for treatment by dermatosurgery. Some examples include conditions such as keloids, hemangiomas, scars, vitiligo lesions for grafting, hirsutism for laser-assisted hair reduction, ageing changes of the skin and images of the scalp for hair transplantation. VC can also be used for pre-surgery counseling for aesthetic procedures. VC increases patient satisfaction, as the patient directly interacts with the aesthetic surgeon for any clarifications. Selected centers can use hybrid teledermatology to screen patients for procedures and for counseling for aesthetic procedures. Mobile teledermatology is used to screen and deliver follow-up care after aesthetic surgery. The integration model[32] finds its application in objective assessment after medical or surgical treatment of vitiligo and leg ulcers. The types of teledermatology service, their purpose and area of application, and their potential use in aesthetic surgery are summarized in Table 3.

Nurse-led teledermatology service
Health care professionals at all levels are trained for TD practice in the basics of dermatology, including computer knowledge, the art of counseling, history-taking skills, filling in the proforma, photography and video clips. Administration of intralesional steroids, skin biopsies, removal of skin tags, cryosurgery and other simple dermatosurgical techniques are also taught.[38] An interprofessional collaborative working environment to share expertise in decision-making is promoted. A nurse advises on management guided by consultants, motivates patients and delivers follow-up care. Nurses understand the physical, psychological and social effects of skin disease.[39]

Counseling
Counseling prepares the patient for TD practice through education about the dermatosis, the technology adopted and its limitations in delivering dermatology care.
Medical reimbursement equivalent to that of a traditional face-to-face examination in an approved rural health care setting is practiced.[18]

Imaging and law
Photographs form important medico-legal evidence and play a vital role in the maintenance of dermatology records.[40] This has special significance as digital images are used to capture, store, measure, transfer and deliver follow-up care. Image measurement is important for periodic follow-up care and medico-legal expert opinion. US Federal courts have ruled that digital images can furnish sufficient medical data to provide dermatological care.[40] Preservation of the privacy and confidentiality of digital images in the era of teledermatology is important.[3,41] In a difficult-to-manage case, or in doubt, a dermatologist can be sued for a wrong diagnosis; one cannot take shelter on the pretext of a teledermatology consultation. The principles of traditional consultation apply to TD care.[6] In case of doubt, a dermatologist should call the patient for face-to-face examination and investigate on priority. Written consent should be obtained from the patient to store and forward the images, and the confidentiality of images has to be maintained.[6]

FUTURE DIRECTIONS
Health access barriers, poverty, large geographic regions and the deficiency of dermatologists in rural regions have increased the need for TD services in India. Proper reimbursement and insurance policies need to be implemented for teledermatology consultation and surgery. Training of existing nurses in digital photography, internet applications and computing needs to be implemented. Mobile TD finds its application where active surveys are carried out by HCP using cellular phones to screen for dermatoses. Teleconsultation is an effective alternative and should be considered when a service is under pressure.[42] Incorporating telemedicine, including nurse-led telemedicine, effectively in rural India can enhance the delivery of health care.
Academic bodies have to introduce telemedicine education in medical and nursing training. House surgery period in undergraduate education should be made to used train telemedicine and thereyby offer service to rural India. CONCLUSION TD research is progressing in an arithmetic ratio (in additions) while advancement in information technology is progressing in geometric ratio (in multiples).[16] Dermatosugoens need to explore the feasibility of technology application in the interest of the patient and conduct studies. They need to work along with their respective Government, Ministry of Health and national health services of their regions to formulate clinical, technical and administrative standards to facilitate Kanthraj: Teledermatology and dermatosurgery [Downloaded free from http://www.jcasonline.com on Saturday, August 21, 2010, IP: 193.108.38.35] Journal of Cutaneous and Aesthetic Surgery - Jul-Dec 2008, Volume 1, Issue 274 expansion of teledermatology in practice and introduce it as a curriculum in medical education. There is need to to accelerate research and adopt innovative techniques to deliver quality health care to remote geographic regions and achieve the goal to reach the unreached. REFERENCES Perednia DA, Allen A. Telemedicine technology and clinical applications. 1. JAMA 1995;273:483-8. Zelickson BD, Homan L. Teledermatology in the nursing home. Arch 2. Dermatol 1997;133:171-4. Di Stefani A, Zalaudek I, Argenziano G, Chimenti S, Soyer HP. Feasibility 3. of a two-step teledermatologic approach for the management of pigmented skin lesions. Dermatol Surg 2007;33:686-92. Eminović N, de Keizer NF, Bindels PJ, Hasman A. Maturity of 4. teledermatology evaluation research: A systematic literature review. Br J Dermatol 2007;56:412-9. Lozzi GP, Soyer HP, Massone C, Micantonio T, Kraenke B, Forgnoli 5. MC et al. 
The additive value of second opinion teleconsulting in the management of patients with challenging inflammatory, neoplastic skin diseases: A best practice model in dermatology? J Eur Acad Dermatol Venereol 2007;21:30-4. Eedy DJ, Wootton R. Teledermatology: A review. Br J Dermatol 6. 2001;144:696-707. Whited JD. Teledermatology research review. Int J Dermatol 7. 2006;45:220-9. Sathyamurthy LS, Bhaskaranarayana A. Telemedicine: Indian space 8. agency�s (ISRO) initiatives for specialty to remote and rural population. In: Sathyamurthy LS, Murthy RL, editors. Telemedicine manual guide book for practice of telemedicine. 1st ed. Indian space research organization, department of space. Government of India: Bangalore; 2005. p. 9-13. Feroze K. Teledermatology in India: Practical implications. Indian J 9. Med Sci 2008;62:208-14. Heinzelmann PJ, Williams CM, Lugn NE, Kvedar JC. Clinical outcomes 10. associated with telemedicine / telehealth. Telemedicine journal and E-Health 2005;11:329-47. Whited JD, Hall RP, Simel DL, Foy ME, Stechuchak KM, Drugge RJ, 11. et al. Reliability and accuracy of dermatologists clinic � based and digital image consultations. J Am Acad Dermatol 1999;41:693-702. High WA, Houston MS, Calobrisi SD, Drage LA, McEvoy MT. Assessment 12. of the accuracy of low- cost store and forward teledermatology consultation. J Am Acad Dermatol 2000;42:776-83. Tucker WFG, Lewis FA. Digital imaging: A diagnostic screening tool? 13. Int J Dermatol 2005;44:479-81. Mahendran R, Goodfield MJ, Sheehan-Dare RA. An evaluation of the 14. role of a store -and �forward teledermatology system in skin cancer diagnosis and management. Clin Exp Dermatol 2005;30:209-14. Salmhofer W, Hofmann-wellenhof R, Gobler G, Rieger-Engelbogen K, 15. Gunegger D, Binder B, et al. Wound teleconsultation in patients with chronic leg ulcers. Dermatology 2005;210:211-7. Kanthraj GR, Srinivas CR. Store and forward teledermatology. Indian 16. J Dermatol Venerol Leprol 2007;73:5-12. 
17. Fieleke DR, Edison K, Dyer JA. Pediatric teledermatology: A survey of current use. Pediatr Dermatol 2008;25:158-62.
18. Edison KE, Dyer JA. Teledermatology in Missouri and beyond. Mo Med 2007;104:139-43.
19. Massone C, Hofmann-Wellenhof R, Ahlgrimm-Siess V, Gabler G, Ebner C, Soyer HP. Melanoma screening with cellular phones. PLoS One 2007;2:e483.
20. Braun RP, Vecchietti JL, Thomas L, Prins C, French LE, Gewirtzman AJ, et al. Telemedical wound care using a new generation of mobile telephones: A feasibility study. Arch Dermatol 2005;141:254-8.
21. Massone C, Lozzi GP, Wurm E, Hofmann-Wellenhof R, Schoellnast R, Zalaudek I, et al. Cellular phones in clinical teledermatology. Arch Dermatol 2005;141:1319-20.
22. Massone C, Lozzi GP, Wurm E, Hofmann-Wellenhof R, Schoellnast R, Zalaudek I, et al. Personal digital assistants in teledermatology. Br J Dermatol 2006;154:801-2.
23. Scheinfeld NS. Creating and utilizing multimedia dermatology medical record for pocket PC personal digital assistant. Skin Med 2005;4:33-7.
24. Martinez-Garcia S, Odelboz-Gonzalez J, Martin-Gonzalez T, Samaniego-Gonzalez E, Crespo-Erchiga V. Teledermatology. Review of 917 teleconsults. Actas Dermosifiliogr 2007;98:318-24.
25. Coleridge Smith PD, Scurr JH. Direct method of measuring venous ulcers. Br J Surg 1989;76:689.
26. Kanthraj GR, Srinivas CR, Shenoi SD, Suresh B, Ravikumar BC, Deshmukh RP. Wound measurement by Computer Aided Design (CAD): A practical approach for software utility. Int J Dermatol 1998;37:714-5.
27. Kanthraj GR. Computer or simple wound measurements: When Greek meets Greek, then comes the tug-of-war. Arch Dermatol 1999;135:992-4.
28. Rajbhandari SM, Harris ND, Sutton M, Lockett C, Eaton S, Gadour M, et al. Digital imaging: An accurate and easy method of measuring foot ulcers. Diabetes Med 1999;16:339-42.
29. Samad A, Hayes S, French L, Dodds S. Digital imaging versus conventional contact tracing for the objective measurement of venous leg ulcers.
J Wound Care 2002;11:137-40.
30. Moore K. Using wound area measurement to predict and monitor response to treatment of chronic wounds. J Wound Care 2005;14:229-32.
31. Solomon C, Munro AR, Vanrij AM, Christie R. The use of video image analysis for the measurement of venous ulcers. Br J Dermatol 1995;133:565-70.
32. Kanthraj GR. The integration of the internet, mobile phones, digital photography and computer aided design software to achieve telemedical wound measurement and care. Arch Dermatol 2005;141:1470-1.
33. Binder B, Hofmann-Wellenhof R, Salmhofer W, Okcu A, Kerl H, Soyer HP. Teledermatological monitoring of leg ulcers in cooperation with home care nurses. Arch Dermatol 2007;143:1511-4.
34. Hofmann-Wellenhof R, Salmhofer W, Binder B, Okeu A, Kerl H, Soyer HP. Feasibility and acceptance of telemedicine for wound care in patients with chronic leg ulcers. J Telemed Telecare 2006;12:15-7.
35. Massone C, Brunasso AM, Campbell TM, Soyer HP. State of the art of teledermatopathology. Am J Dermatopathol 2008;30:446-50.
36. Massone C, Peter Soyer H, Lozzi GP, Di Stefani A, Leinweber B, Gabler G, et al. Feasibility and diagnostic agreement in teledermatopathology using a virtual slide system. Hum Pathol 2007;38:546-54.
37. Mencarelli R, Marcolongo A, Gasparetto A. Organizational model for a telepathology system. Diagn Pathol 2008;3:57.
38. Armstrong AW, Dorer DJ, Lugn NE, Kvedar JC. Economic evaluation of interactive teledermatology compared with conventional care. Telemed J E Health 2007;13:91-9.
39. Lawton S, Timmons S. The relationship between technology and changing professional roles in health care: A case-study in teledermatology. Stud Health Technol Inform 2006;122:669-71.
40. Scheinfeld N. Photographic images, digital imaging, dermatology and the law. Arch Dermatol 2004;140:473-6.
41. Goldberg DJ. Digital photography, confidentiality and teledermatology. Arch Dermatol 2004;140:477-8.
42. Ferguson J. How to do a telemedical consultation. J Telemed Telecare
2006;12:220-7. Source of Support: Nil, Conflict of Interest: None declared. work_z4rq4irlhnew7h2x23jmmo6d7u ---- Slide 1, 11/7/2011. Assessment of Tophus Size: a Comparison Between Physical Measurement Methods and Dual Energy Computed Tomography Scanning Nicola Dalbeth, Opetaia Aati, Angela Gao, Meaghan House, Qiliang Liu, Anne Horne, Anthony Doyle, Fiona M McQueen Disclosures • This work was funded by the Health Research Council of New Zealand • Qiliang Liu was the recipient of a University of Auckland summer studentship • I have no other relevant financial disclosures. Background • The tophus is a pathognomonic feature of gout • Foreign body granulomatous response to monosodium urate (MSU) crystals – Innate and adaptive immune activation Dalbeth Arthritis Rheum 2010 Background • Impact of tophi – Disfiguring – Discharge with secondary infection – Obstruct joint movement – Disability – Joint damage • Tophus regression has been endorsed by OMERACT as a core domain for clinical trials of chronic gout Schumacher J Rheumatol 2009 Background • Many methods of tophus measurement described: – Vernier calipers (longest index tophus diameter)* – Tape measurement (index tophus area) – Counting of all visible tophi – Digital photography – Ultrasonography – Magnetic resonance imaging – Conventional computed tomography (CT) *OMERACT endorsed Dalbeth ARD 2011 Background • Dual energy CT (DECT) is a sensitive and specific method to detect urate deposits in patients with gout • DECT uses a specific display algorithm that assigns different colours to materials of different chemical composition (such as urate and hydroxyapatite) • The reliability of DECT for tophus measurement has not been reported to date Choi ARD 2009 Glazebrook Radiology 2011 Aim • To compare the reliability and validity of various physical methods with DECT
assessment of tophus size Methods: patients and tophus selection • Twenty-five patients with – a history of acute gout according to ACR classification criteria, and – at least one subcutaneous tophus • For each patient, up to three index tophi were selected for analysis (n=64 tophi, 55 in the feet) – sites in the feet were preferentially selected – if >3 tophi present in the feet, the largest tophi were selected – discharging, acutely inflamed or bursal tophi were not selected Methods: physical measurement • Each tophus was assessed by two independent observers – Vernier calipers (longest diameter) – Tape measure (area) • Tophus location was recorded in detail using a diagram and written description • The total number of subcutaneous tophi was also counted • Five patients returned within one week for repeat physical assessments Methods: DECT • All patients proceeded to DECT scanning of both feet (Somatom Definition Flash, Siemens Medical) • Index tophus DECT volume was assessed by two independent observers using automated volume assessment software • DECT scans from the returning patients were scored twice by both observers Methods • Each observer was blinded to the scores of the other observers and previous measures • Intra- and inter-observer reproducibility was assessed by intraclass correlation coefficient (ICC) and limits of agreement analysis (Bland and Altman). 
• For the purposes of these analyses the unit of investigation was assumed to be the tophus
Results: patient characteristics (All patients, n=25 / Patients returning for second visit, n=5)
– Age, years, median (range): 64 (40-85) / 64 (44-74)
– Male gender, n (%): 23 (92%) / 5 (100%)
– Ethnicity, n (%): Pacific 10 (40%) / 2 (40%); New Zealand Maori 1 (4%) / 0 (0%); New Zealand European/Other 14 (56%) / 3 (60%)
– Aspirate proven gout: 11 (44%) / 2 (40%)
– Gout disease duration, years, median (range): 24 (3-50) / 45 (21-49)
– Serum urate, mmol/L, median (range): 0.39 (0.18-0.71) / 0.37 (0.35-0.49)
– On allopurinol, n (%): 18 (72%) / 4 (80%)
– Total DECT urate volume (both feet), cm3, median (range): 1.65 (0.07-28.88) / 8.02 (0.13-28.88)
Results: Intraobserver reproducibility (Assessment 1 vs. Assessment 2), ICC, mean (95% CI): A Vernier calipers 0.75 (0.54-0.87); B Tape measure 0.91 (0.82-0.96); C Tophus count 0.94 (0.77-0.98); D DECT volume 1.00 (0.99-1.00). To allow comparison between measures, the y axis value approximates twice the mean score.
Results: Interobserver reproducibility (Observer 1 vs. Observer 2), ICC, mean (95% CI): A Vernier calipers 0.78 (0.66-0.86); B Tape measure 0.88 (0.82-0.93); C Tophus count 0.58 (0.25-0.79); D DECT volume 0.95 (0.92-0.97). To allow comparison between measures, the y axis value approximates twice the mean score.
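The limits-of-agreement analysis named in the Methods (Bland and Altman) pairs each tophus's two ratings, then reports the mean difference (bias) and the interval bias ± 1.96 × SD of the differences. A minimal sketch of that calculation, using illustrative caliper readings rather than the study's data:

```python
from statistics import mean, stdev

def bland_altman(obs1, obs2):
    """Mean difference (bias) and 95% limits of agreement between two raters."""
    diffs = [a - b for a, b in zip(obs1, obs2)]
    bias = mean(diffs)
    sd = stdev(diffs)  # sample SD of the paired differences
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Illustrative caliper diameters (mm) from two observers -- not study data.
bias, (lower, upper) = bland_altman([21.0, 15.5, 30.2, 12.8, 25.0],
                                    [20.0, 16.5, 29.2, 13.8, 24.0])
```

Wide limits of agreement flag a measurement method whose two raters disagree substantially even when the ICC looks acceptable, which is why both statistics are reported together here.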
Results: Comparison of values between different methods (feet, n=55): Calipers vs. Tape, rs=0.94, p<0.0001; DECT vs. Calipers, rs=0.46, p=0.004; DECT vs. Tape, rs=0.46, p=0.004 • In 20% of tophi recorded on physical assessment, no urate deposits were observed in the tophus by DECT • Those tophi without urate deposits on DECT had smaller caliper diameter (p=0.02) and tape area (p=0.01) Results: distribution of urate in tophi • Large variation was observed in the amount of urate deposits documented by DECT in tophi of similar physical size • Discrete urate collections were frequently scattered throughout the tophus, typically surrounded by soft tissue Example of two similar sized tophi from a single patient showing large variation in urate volume. The borders of the tophus (determined from CT images) are outlined. Summary • DECT reveals the composition of tophi which contain variable urate deposits embedded within soft tissue • DECT scanning is a highly reproducible method of assessing urate load within tophi – Overall higher reproducibility than physical tophus measurement methods – Relatively modest relationship between physical tophus size and DECT urate volume, reflecting the composition of the tophus Further questions/issues • Is DECT sensitive to change? • Is DECT feasible for use in clinical trials? – Cost – Availability – Radiation • What is the relative importance of urate load compared with total tophus size on clinically relevant outcomes? work_z4u32pjue5hzraxjg33k634ld4 ---- University of Birmingham Retinopathy of prematurity screening at 30 weeks: urinary NTpro-BNP performance Ewer, Andrew DOI: 10.1111/apa.14354 License: None: All rights reserved Document Version Peer reviewed version Citation for published version (Harvard): Ewer, A 2018, 'Retinopathy of prematurity screening at 30 weeks: urinary NTpro-BNP performance', Acta Paediatrica.
https://doi.org/10.1111/apa.14354 Link to publication on Research at Birmingham portal Publisher Rights Statement: This is the peer reviewed version of the following article: Berrington, J. E., Clarke, P., Embleton, N. D., Ewer, A. K., Geethanath, R., Gupta, S., Lal, M., Oddie, S., Shafiq, A., Vasudevan, C. and Bührer, C. (2018), Retinopathy of prematurity screening at 30 weeks: urinary NTproBNP performance. Acta Paediatr. which has been published in final form at doi:10.1111/apa.14354. This article may be used for non-commercial purposes in accordance with Wiley Terms and Conditions for Self-Archiving. Download date: 06. Apr. 2021 https://research.birmingham.ac.uk/portal/en/publications/retinopathy-of-prematurity-screening-at-30-weeks-urinary-ntprobnp-performance(9b7c9bb5-ed1a-454b-9c98-6a3962614b55).html Retinopathy of prematurity screening at ≥30 weeks: urinary NTpro-BNP performance Berrington JE,1,2 Clarke P3, Embleton ND,1,4 Ewer AK5, Geethanath R6, Gupta S7, Lal M8, Oddie S9, Shafiq A10, Vasudevan C9 and Bührer C11 1Newcastle Neonatal Service, Newcastle upon Tyne Hospitals NHS Foundation Trust, 2Institute of Cellular Medicine, Newcastle University, Newcastle upon Tyne, UK, 3Norfolk and Norwich University Hospitals NHS Foundation Trust, UK, 4Institute of Health and Society, Newcastle University, Newcastle upon Tyne, UK, 5Birmingham Womens Hospital, Birmingham, UK, 6City Hospitals, Sunderland, UK, 7University Hospital of North Tees, Stockton, UK, 8South Tees NHS Foundation Trust, UK, 9Bradford Teaching Hospitals NHS Foundation Trust, UK, 10Department of Ophthalmology, Newcastle Upon Tyne Hospitals NHS Foundation Trust, Newcastle upon Tyne, UK, 11Charité Universitätsmedizin, Berlin, Germany Correspondence – Janet Berrington, Consultant Neonatal Paediatrician, Newcastle Neonatal Service, Newcastle upon Tyne Hospitals NHS Foundation Trust, Queen Victoria Road, Newcastle Upon Tyne, NE1 4LP, UK. E-mail: janet.berrington@nuth.nhs.uk Phone 0191 2825197 Fax 0191 2825038 Running Title Urinary NTproBNP ROP & infants >30 weeks Abstract Aim Urinary N-terminal B-type natriuretic peptide (NTproBNP) levels are associated with the development of retinopathy of prematurity (ROP) in infants <30 weeks gestation.
The incidence of ROP in more mature infants who meet other ROP screening criteria is very low. We therefore aimed to test whether urinary NTproBNP predicted ROP development in these infants. Methods Prospective observational study in 151 UK infants ≥30+0 weeks gestation but also <32 weeks gestation and/or <1501g, to test the hypothesis that urinary NTproBNP levels on day of life (DOL) 14 and 28 were able to predict ROP development. Results Urinary NTproBNP concentrations on day 14 and day 28 of life did not differ between infants with and without ROP (medians 144mcg/ml vs 128mcg/ml respectively, p=0.86 on DOL 14 and medians 117mcg/ml vs 94mcg/ml respectively, p=0.64 on DOL28). Conclusion The association previously shown for infants <30 completed weeks between urinary NTproBNP and development of ROP was not seen in more mature infants. Urinary NTproBNP does not appear helpful in rationalizing direct ophthalmoscopic screening for ROP in more mature infants, and may suggest a difference in pathophysiology of ROP in this population. Key Notes 1) Published data for infants <30 completed weeks gestation suggests that urinary NTproBNP levels might contribute to identifying infants at risk from retinopathy of prematurity (ROP) 2) We showed that this is not true for more mature infants who still fulfil UK screening criteria 3) This difference may imply different pathophysiology of ROP development in more mature infants Abbreviations NTproBNP: Urinary N-terminal B-type natriuretic peptide REDEXAM: REDucing Eye EXAMinations in preterm infants ROP: retinopathy of prematurity UK: United Kingdom Introduction Visual impairment as a result of retinopathy of prematurity (ROP) is potentially preventable by screening (1), but at present screening is largely dependent on direct ophthalmoscopy, which requires the availability of ophthalmologists willing and able to perform this. 
Although telemedicine and digital photography have improved access in some areas, costs, training expertise and other practicalities restrict universal screening (2-4). This and the physical discomfort and instability caused to the infant (5), and distressing nature of the examination to the family, mean that non-invasive markers of ROP development have been sought, and algorithms generated to attempt to maximize the performance of features associated with ROP development to accurately predict this (6,7). To date, no ‘ideal’ biomarkers of ROP have been identified and sufficiently tested, but urinary B-type natriuretic peptide (BNP) shows promise (8). Proteolytic cleavage of a precursor protein yields biologically active BNP and an inert N-terminal fragment, NTproBNP (amino acids 1- 76). Measuring NTproBNP concentrations is routine in cardiac failure in children and neonates (9,10) and often available in routine health provision laboratories relatively cheaply. Concentrations of NTproBNP in blood parallel those in the urine, making non- invasive measurement possible (11). In a single-center pilot study, we showed that urinary NTproBNP/creatinine ratios (UNBCR) in the first month were significantly elevated in preterm infants who developed severe ROP, compared to controls (8). To further assess the predictive power of urinary NTproBNP concentrations and UNBCR during the first month of life to allow early identification of infants at high or low risk of severe ROP, we conducted a prospective observational study (REDEXAM, REDucing Eye EXAMinations in preterm infants) in neonatal intensive units in 8 European and Middle East countries (12). This second study showed a strong association between urinary NTproBNP measurements on day 14 and 28 of life and subsequent ROP development, in a cohort of 967 infants of ≤29+6 weeks gestation (12). 
This is potentially important as this cheap, readily available test could help reduce or target direct ophthalmoscopy, reducing either the number of individuals ever needing to be screened, or the number of screens undertaken in an individual, or both. Gerull et al (13) recently confirmed the low incidence of retinopathy of prematurity (ROP) in Switzerland in more mature infants than those included in our second study - those between 30+0 and 31+6 weeks gestation. There was a very low incidence of any ROP (1.7% (56/3222)) and severe or treated ROP (0.16% (5/3222)), and these authors called for a review of screening criteria. Given the relatively large numbers of more mature infants compared to very immature infants, and the low rate of ROP in these more mature infants, the utility of urinary NTproBNP to predict ROP development in more mature infants who still fulfil screening criteria is important. We therefore tested the hypothesis that urinary NTproBNP in the first month of life, in infants ≥30+0 weeks fulfilling UK ROP screening criteria by virtue of gestation (<32 completed weeks), birth weight (<1501g), or both, would predict ROP development. To do this we applied the methodology of our previous study (14) to infants with gestations ≥30+0 weeks, born only in the UK. Methods Study setting Seven neonatal intensive care units in the United Kingdom participated between November 2013 and May 2015. Ethical approval was obtained from the Newcastle and North Tyneside 2 ethics committee. Patients and measurements Infants with a gestational age ≥30+0 completed weeks who were also <32+0 weeks and/or <1501g, and alive at 10 days, were eligible. Written informed parental consent was obtained. Urine was collected on DOL (day of life) 14 and DOL28 (or as close as possible) by each center by their usual method (cotton wool balls or pads in the nappy, bags or clean catches by staff or parents at routine nappy changes) and stored at -80ºC until analysis.
Urinary NTproBNP concentrations were determined at one laboratory on one site (Newcastle Upon Tyne, UK) in batches by standard hospital laboratory automated commercial chemiluminescent sandwich immunoassays (Roche Modula P E170 run on a cobas e 411 analyser) as described previously (8,15). Urine samples with a level below the limit of detection of the assay were assigned that lower limit (50 mcg/l). We also recorded infant weight on DOL14 and DOL28. Proportional weight gain was calculated by dividing the gain in the infant's weight since birth, on DOL14 or DOL28 respectively, by birth weight. ROP outcomes ROP screening examinations were unaffected by study participation, undertaken by serial binocular indirect ophthalmoscopy and carried out by experienced local ophthalmologists according to local and national guidelines (14), unaware of urinary NTproBNP concentrations. ROP was staged according to the International Classification of Retinopathy of Prematurity (16) and allocated treatment according to the Early Treatment for Retinopathy of Prematurity criteria (17). Statistical analysis NTproBNP results for infants with known ROP outcome and at least one urinary NTproBNP concentration measured were analysed using SPSS 23.0 (SPSS Inc, Chicago, IL). As NTproBNP results and other variables lacked a normal distribution, data are presented as median and range. We assessed the strength of association between continuous variables by Spearman rank order coefficients. Differences in the distribution of continuous variables were assessed by the Mann-Whitney U test. Results A total of 151 infants, 77 (51%) male, of median gestation 31.0 (range 30.0 – 37.0) weeks and median birth weight 1420g (range 640-2430g) participated, including 105 singletons, 39 twins (some of whom had non-surviving siblings), one triplet set and one quad set. Five infants died before ROP screening was complete.
Of surviving infants, 11 (7.5%) had any ROP: two (1.4%) stage 1, eight (5.5%) stage 2 and one (0.7%) stage 3 (zone 3 only). No infant was ever treated for ROP, and all ROP resolved. We found that ROP of any grade was restricted to infants ≤ 32+0 weeks gestational age (n=123). None of the more mature infants who entered the study had any signs of ROP. In contrast to the cohort <30+0 weeks, there were no significant differences between infants with ROP, as opposed to infants without ROP for proportional weight gain or urinary NTproBNP concentrations on day 14 and day 28 of life (Table 1) . Urinary NTproBNP concentrations were poorly associated with birthweight for this cohort (figure 1). Discussion Successfully identifying a non-invasive biomarker for development of ROP would have significant financial, family and infant benefits. Urinary NTproBNP appeared promising in pilot work (8), and in a larger population of relatively immature infants (<30+0 weeks gestation (12). However in the present study done in more mature infants who are still judged at risk of ROP by current screening criteria, a relationship between urinary NTproBNP and the development of ROP has not been demonstrated. Although this was a relatively small study of 151 infants, this cohort is larger than that in the original pilot study (n=136) where a correlation was seen between ROP development and urinary NTproBNP, and a correlation was previously seen in the less mature UK infants (12). It is still possible that a larger study would show a relationship, but any study will suffer from the impact of assessing any test for ROP in a very low incidence population. Little is known about the role of BNP in development of ROP. The lack of correlation between urinary NTproBNP levels and ROP in more mature infants could indicate different pathophysiology of either ROP development or BNP release in comparison to less mature infants. 
Equally, in the more immature population urinary NTproBNP may simply act as a proxy measure of 'sickness' related to immaturity, which subsequently relates to risk of ROP development. In the more mature population, where ROP development is rare, 'sickness' may then influence BNP levels but not ROP development, causing the relationship to be lost. We did not collect more extensive clinical data to allow this hypothesis to be tested further in this study, but future studies could explore this possibility, as well as being larger. We conclude that in this UK cohort, urinary NTproBNP-based risk levels for ROP established for more immature infants fail to identify infants of 30+0 weeks or more who might benefit from, or not require, screening ophthalmoscopy. Differences in other morbidities or potentially different pathophysiology of ROP development in more mature infants may explain this loss of correlation. Acknowledgements We gratefully acknowledge all parents and infants who participated, Susan McLellen (biochemist, Royal Victoria Infirmary, Newcastle), research teams and nurses especially Suzanne Bell, Karen Few, Wendy Cheadle, Rachel Jackson, Eileen Walton, Rachel Wane, Linda Shah, and Thomas Skeath and Stefan Zalewski (research fellows, Newcastle neonatal service) for their help in keeping the study running smoothly. Statement of financial support The study has received financial support from the BLISS innovation grant (London, UK). Disclosure statement There are no conflicts of interest to disclose. References 1. Jefferies AL and Canadian Paediatric Society, Fetus and Newborn Committee 2016 Retinopathy of prematurity: An update on screening and management. Paediatr Child Health 21:101-108. 2. Fierson WM and Capone A, Jr., American Academy of Pediatrics Section on O, American Academy of Ophthalmology AAoCO 2015 Telemedicine for evaluation of retinopathy of prematurity. Pediatrics 135:e238-254. 3.
Zepeda-Romero LC, Gilbert C 2015 Limitations in ROP programs in 32 neonatal intensive care units in five states in Mexico. Biomed Res Int 2015:712624. 4. Vinekar A, Jayadev C, Kumar S, Mangalesh S, Dogra MR, Bauer NJ et al 2016 Impact of improved neonatal care on the profile of retinopathy of prematurity in rural neonatal centers in India over a 4-year period. Eye Brain 8:45-53. 5. Dhaliwal CA, Wright E, McIntosh N, Dhaliwal K and Fleck BW 2010 Pain in neonates during screening for retinopathy of prematurity using binocular indirect ophthalmoscopy and wide-field digital retinal imaging: a randomised comparison. Arch Dis Child Fetal Neonatal Ed 95:F146-148. 6. Jung JL, Wagner BD, McCourt EA, Palestine AG, Cerda A, Cao JH, et al 2017 Validation of WINROP for detecting retinopathy of prematurity in a North American cohort of preterm infants. J AAPOS. 7. Gurwin J, Tomlinson LA, Quinn GE, Ying GS, Baumritter A and Binenbaum G, Postnatal Growth and Retinopathy of Prematurity (G-ROP) Study Group and the Telemedicine Approaches to Evaluating Acute-Phase Retinopathy of Prematurity Cooperative Group 2017 A tiered approach to retinopathy of prematurity screening (TARP) using a weight gain predictive model and a telemedicine system. JAMA Ophthalmol. doi: 10.1001/jamaophthalmol.2016.5203. [Epub ahead of print] 8. Czernik C, Metze B, Müller C, Müller B, and Bührer C 2011 Urinary N-terminal B-type natriuretic peptide predicts severe retinopathy of prematurity. Pediatrics 128:e545-549. 9. Nir A and Nasser N 2005 Clinical value of NT-ProBNP and BNP in pediatric cardiology. J Card Fail 11:S76-80. 10. Sugimoto M, Manabe H, Nakau K, Furuya A, Okushima K, Fujiyasu H, et al 2010 The role of N-terminal pro-B-type natriuretic peptide in the diagnosis of congestive heart failure in children - correlation with the heart failure score and comparison with B-type natriuretic peptide. Circ J 74:998-1005. 11.
Kurihara N, Miwa M, Matsuzaki Y, Hokuto I, Kikuchi H, Katano S et al 2011 Usefulness of measurement of urinary N-terminal pro-brain natriuretic peptide in neonatal period. Pediatr Int 53:608. 12. Bührer C, Erdeve Ö, van Kaam A, Berger A, Lechner E, Bar-Oz B et al N-terminal B-type natriuretic peptide urinary concentrations and retinopathy of prematurity. Pediatr Res 2017 Jul 24. doi: 10.1038/pr.2017.179. [Epub ahead of print] PubMed PMID: 28738027. 13. Gerull R, Brauer V, Bassler D, Laubscher B, Pfister R, Nelle M et al Swiss Neonatal Network & Follow-up Group. Incidence of retinopathy of prematurity (ROP) and ROP treatment in Switzerland 2006-2015: a population-based analysis. Arch Dis Child Fetal Neonatal Ed 2017 Sep 15. pii: fetalneonatal-2017-313574. doi: 10.1136/archdischild-2017-313574. [Epub ahead of print] PubMed PMID: 28916563. 14. UK Retinopathy of prematurity guideline, 2008. Royal College of Ophthalmologists & Royal College of Paediatrics and Child Health. 15. Joseph L, Nir A, Hammerman C, Goldberg S, Ben Shalom E and Picard E 2010 N-terminal pro-B-type natriuretic peptide as a marker of bronchopulmonary dysplasia in premature infants. Am J Perinatol 27:381-386. 16. International Committee for the Classification of Retinopathy of Prematurity 2005 The International Classification of Retinopathy of Prematurity revisited. Arch Ophthalmol 123:991-999. 17. Early Treatment For Retinopathy Of Prematurity Cooperative Group 2003 Revised indications for the treatment of retinopathy of prematurity: results of the early treatment for retinopathy of prematurity randomized trial. Arch Ophthalmol 121:1684-1694.
Figure 1 Relationship between NTproBNP and birth weight work_z5epskod2fazxk2iqvyayshau4 ---- Treg apoptosis accelerates vasculitis 7144 down-regulation of endogenous TCTP expression by antisense results in the increase of T cell apoptosis. Corpus ID: 38363532 @inproceedings{Xiong2008TregAA, title={Treg apoptosis accelerates vasculitis 7144 down-regulation of endogenous TCTP expression by antisense results in the increase of T cell apoptosis}, author={Z. Xiong and J. Song and Yan Yan and Y. Huang and Alan Cowan and Hong Wang and X. Yang}, year={2008} } 1. Abstract 2. Introduction 3. Materials and Methods 3.1. Construction of CD25+ T cell-targeting mouse IL-2Ra promoter-Bax transgenic mice 3.2. PCR and RT-PCR 3.3. Flow cytometric analysis 3.4. Intracellular Staining for Bax and FOXP3 3.5. T Cell Activation 3.6. Cell Culture and Induced Apoptosis In vitro 3.7. Perivascular Cuff-Injury Induced Vasculitis of Femoral Artery 3.8. Histology 4. Results 4.1. Tregs have significantly higher expression of Bax and higher rates of apoptosis after IL-2… bioscience.org work_z7yjqndmg5fsppns5u7gld2yvy ---- Reliability of the modified Thomas test using a lumbo-pelvic stabilization Gyoung-Mo Kim, PT, MS1), Sung-Min Ha, PT, PhD2)* 1) Department of Physical Therapy, Division of Health Science, Baekseok University, Republic of Korea 2) Department of Physical Therapy, College of Health Science, Sangji University: 83 Sangjidae-gil, Won-ju, Gangwon-do 220-702, Republic of Korea Abstract. [Purpose] The purpose of this study was to examine the test-retest reliability of the modified Thomas test using lumbo-pelvic stabilization.
[Subjects] Thirteen subjects (male=10, female=3) with hip flexor tightness voluntarily participated in the study. [Methods] The participants underwent the modified Thomas test under three conditions: 1) the general modified Thomas test (GM), 2) active lumbo-pelvic stabilization (ALS), and 3) passive lumbo-pelvic stabilization (PLS). Intra-class correlation coefficients (ICC) were used to determine the test-retest reliability of the knee joint angle measurement under the three conditions. The standard error of measurement (SEM) and minimal detectable difference (95% confidence interval) (MDD95) were calculated for each measurement to assess absolute consistency. [Results] The ALS (ICC = 0.99) and PLS (ICC = 0.98) methods for the modified Thomas test were more reliable than the GM method (ICC = 0.97). The MDD95 score for the ALS method, 2.35 degrees, indicated that a real difference existed between two testing sessions compared with the scores for the PLS (3.70 degrees) and GM (4.17 degrees) methods. [Conclusion] Lumbo-pelvic stabilization is one of the considerations for precise measurement and may help to minimize measurement error when evaluating hip flexor tightness using the modified Thomas test.

Key words: Lumbo-pelvic stabilization, Modified Thomas test

(This article was submitted Jul. 9, 2014, and was accepted Sep. 2, 2014)

INTRODUCTION

Tightness of the hip flexor muscles limits hip hyperextension in gait and may correlate with lumbar curvature, back pain, and knee dysfunction. The modified Thomas test is used to assess the flexibility of four different hip flexor muscles: the iliacus, psoas major, rectus femoris, and tensor fasciae latae (TFL)1, 2). Though the modified Thomas test has been commonly used, it is still unclear which testing method can be used most reliably.
The test may be affected by variations in the assessment skill of the examiners, the scoring method, the consistency of the assessment procedure, or the accuracy of the measurement equipment3, 4). To improve the clinical reliability of the modified Thomas test, various alternative methodological approaches have been proposed, for example, utilization of an inclinometer instead of a goniometer, the use of digital photography, comparison by goniometer, and a pass/fail scoring system3, 5–7). Lower limb movements result in forces on the vertebrae and can affect the lumbo-pelvic area8, 9). When the hip flexor muscles are short, the lumbar vertebrae are put directly into more extended positions, and the degree of the anterior pelvic tilt is increased during the modified Thomas test. As a result, the reliability of the test outcome is limited10). Therefore, lumbo-pelvic stabilization is one of the imperative considerations for test-retest reliability. Although several studies have investigated the reliability of the modified Thomas test, whether lumbo-pelvic stabilization can influence the test-retest reliability has not yet been reported. The aim of our study was to examine the test-retest reliability of alternative methods of the modified Thomas test. Specifically, we investigated the effects of lumbar stabilization on the reliability of the modified Thomas test.

SUBJECTS AND METHODS

Thirteen subjects (male=10, female=3) with hip flexor tightness participated in the study. Subjects who met the inclusion criteria, including no history of surgery or trauma to the hip, knee, or lower extremities, voluntarily participated in this study. Prior to the study, the principal investigator explained all procedures in detail to the subjects. All subjects signed an informed consent form. A physical therapist with five years of orthopedic physical therapy experience measured the flexibility of the rectus femoris muscle of the dominant leg using the modified Thomas test.
[J. Phys. Ther. Sci. 27: 447–449, 2015. Original Article. *Corresponding author: Sung-Min Ha (E-mail: chirore61@hotmail.com). ©2015 The Society of Physical Therapy Science. Published by IPEC Inc. This is an open-access article distributed under the terms of the Creative Commons Attribution Non-Commercial No Derivatives (by-nc-nd) License: http://creativecommons.org/licenses/by-nc-nd/3.0/]

Examiner training sessions were conducted by the principal investigator to ensure familiarization and standardization of the photographic measurements, including the precise location of the landmarks and measurement, in order to reduce measurement error. Each assessment was made according to previously described methodologies11, 12). The lower extremity of each subject was photographed by the principal investigator using a digital camera (DCR-SR68/S; Sony Corp., Tokyo, Japan). Prior to taking a photograph, the examiner used three adhesive circular red stickers (5 mm in diameter) to mark the fibula (most proximal aspect, located 3–4 inches laterally to the tibial tuberosity), the greater trochanter of the femur (most superficial aspect, located 4–6 inches inferiorly to the midpoint of the iliac crest), and the lateral malleolus (most distal aspect) of the fibula. After attaching the three stickers, the examiners took photographs with the still image function. Examiners used the modified Thomas test to assess rectus femoris tightness in each subject under three measurement conditions. First, the general modified Thomas test (GM) was performed. Second, active lumbo-pelvic stabilization (ALS) by internal fixation was accomplished using a biofeedback pressure of 40 mmHg during the modified Thomas test. Third, passive lumbo-pelvic stabilization (PLS) by external fixation was accomplished with the examiner's hand on the right side of the pelvis (dominant leg) during the modified Thomas test.
The order of measurement was randomized using the random number generator in Excel. The subjects were asked to rest for 10 minutes to minimize the effect of the previous measurement. After the first measurement session, the second session (after an interval of 2–5 days) was performed following the identical protocol. The knee joint angle under the three conditions was determined using the Simi motion analysis software (Simi Motion 5.0; Simi Reality Motion Systems, Unterschleissheim, Germany). Examiners placed the computer marker over the predetermined point (adhesive red sticker) on the photograph. Two red lines (from the greater trochanter to the fibula and from the fibula to the lateral malleolus) were drawn by the Simi software, which automatically calculated the angle between the two lines. The study was approved by the Yonsei University Wonju Campus Human Studies Committees. The mean and standard deviation (SD) of each subject's characteristics and knee joint angle were calculated under the three conditions. Intra-class correlation coefficients (ICC; 3, 1) were used to determine the test-retest reliability of the knee joint angle measurement under the three conditions. For the purpose of interpretation, an ICC >0.75 was considered "excellent," 0.40–0.75 "fair to good," and 0.00–0.40 "poor"13). The standard error of measurement (SEM) was calculated for each measurement in order to assess absolute consistency [SEM = SD × √(1 − ICC)]. Minimal detectable difference (95% confidence interval) (MDD95) scores were calculated [MDD95 = SEM × √2 × 1.96]14). Statistical analyses were performed using the Statistical Package for the Social Sciences version 12 for Windows (SPSS, Inc., Chicago, IL, USA).

RESULTS

The mean age, weight, and height (mean ± SD) of the subjects were 24.0 ± 3.9 years, 63.3 ± 4.2 kg, and 171.2 ± 5.6 cm, respectively.
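The angle computation described in the methods above (two lines drawn through the three markers, with the angle calculated between them) can be sketched in a few lines: the knee joint angle is the angle at the fibula marker between the line to the greater trochanter and the line to the lateral malleolus. This is an illustrative reconstruction, not the Simi software's actual algorithm; the function name and the sample coordinates are hypothetical.

```python
import math

def knee_joint_angle(trochanter, fibula, malleolus):
    """Angle (in degrees) at the fibula marker between the line to the
    greater trochanter and the line to the lateral malleolus.
    Each argument is an (x, y) coordinate read off the photograph."""
    v1 = (trochanter[0] - fibula[0], trochanter[1] - fibula[1])
    v2 = (malleolus[0] - fibula[0], malleolus[1] - fibula[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    # Clamp the cosine to [-1, 1] to guard against floating-point drift
    cos_theta = max(-1.0, min(1.0, dot / (math.hypot(*v1) * math.hypot(*v2))))
    return math.degrees(math.acos(cos_theta))

# Hypothetical marker coordinates (pixels): three collinear markers would
# give 180 degrees (a fully extended knee); a right angle gives 90.
print(round(knee_joint_angle((0, 100), (0, 0), (100, 0)), 1))  # 90.0
```

Marking the two segments and taking the angle at their shared vertex is all the software needs; the absolute image scale drops out, which is why no calibration of the photograph is mentioned in the protocol.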
The mean and standard deviation of the knee joint angle according to the measurement methods are shown in Table 1. The ICC, SEM, and MDD95 are shown in Table 2.

Table 1. Mean and standard deviation for the knee joint angle according to the measurement methods (N = 13)

                                      Session 1         Session 2
                                      Mean     SD       Mean     SD
General measurement                   59.0°    9.2°     59.0°    8.7°
Active stabilization measurement      50.8°    8.3°     51.0°    8.7°
Passive stabilization measurement     50.6°    9.9°     50.6°    9.3°

Table 2. Inter-rater reliability of knee joint angle according to measurement methods (N = 13)

                                      ICCa                SEMb    MDD95c
General measurement                   0.97 (0.91–0.99)    1.51    4.17
Active stabilization measurement      0.99 (0.98–0.99)    0.85    2.35
Passive stabilization measurement     0.98 (0.95–0.99)    1.34    3.70

ICCa: intraclass correlation coefficient (95% confidence interval); SEMb: standard error of measurement; MDD95c: minimal detectable difference (95% confidence interval)

DISCUSSION

The purpose of this study was to examine the test-retest reliability of lumbo-pelvic stabilization as an alternative method for the modified Thomas test. Previous studies have indicated that many variables confound the clinical reliability of the modified Thomas test. For example, the results were influenced by examiner experience15), patient variation both within and between assessment sessions, variation in patient positioning, and landmark identification and scoring procedures16). To improve the precision of the measurement, the modified Thomas test using digital photography is currently being suggested17). Our results show that use of ALS (ICC = 0.99) or PLS (ICC = 0.98) for the modified Thomas test was more reliable than use of GM (ICC = 0.97). Even when compared with data from the previous research of Gabbe et al.4) (ICC = 0.69), Harvey16) (ICC = 0.91–0.94), and Peeler and Leiter12) (ICC = 0.98), our results are more reliable. However, differences in testing procedure, scoring method, and sample size may explain the difference in the research results.
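As a quick arithmetic check on Table 2, the SEM and MDD95 values follow directly from the formulas given in the methods section (SEM = SD × √(1 − ICC); MDD95 = SEM × √2 × 1.96). The sketch below is illustrative only: the paper does not state which SD enters the formula, so the session 2 SD of the general measurement (8.7°) is assumed here.

```python
import math

def sem(sd, icc):
    """Standard error of measurement: SEM = SD * sqrt(1 - ICC)."""
    return sd * math.sqrt(1.0 - icc)

def mdd95(sd, icc):
    """Minimal detectable difference (95% CI): SEM * sqrt(2) * 1.96."""
    return sem(sd, icc) * math.sqrt(2.0) * 1.96

# General measurement: ICC = 0.97, assumed SD = 8.7 degrees (session 2)
print(round(sem(8.7, 0.97), 2))    # close to the 1.51 reported in Table 2
print(round(mdd95(8.7, 0.97), 2))  # close to the 4.17 reported in Table 2
```

Running the same check on the ALS and PLS rows reproduces Table 2 to within rounding, which is consistent with the MDD95 column simply being derived from the SEM column rather than estimated independently.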
The MDD95 for the ALS method (2.35 degrees) was smaller than the scores for the PLS (3.70 degrees) and GM (4.17 degrees) methods, indicating that smaller between-session changes can be interpreted as real differences. Because the ALS method can be achieved by co-contraction of the local and global muscles and increased abdominal muscle activity, lumbo-pelvic motion was minimized, and this provided more stability17, 18). This result has important implications for clinical and research measurement. Lumbo-pelvic stabilization is one of the considerations for precise measurement and may help to minimize measurement error when evaluating hip flexor tightness using the modified Thomas test.

This study has several limitations. First, because of the small sample size of young and healthy subjects in this study, the generalizability of our results is limited. Thus, additional research is needed to examine the reliability for different age and pathology groups. Second, not all measurement procedures were standardized. For example, PLS was accomplished by the examiner's hand in this study. Despite these limitations, the results of our study suggest that lumbo-pelvic stabilization was effective for increasing the reliability and minimizing measurement errors in the modified Thomas test. Further study is required for standardization of the lumbo-pelvic stabilization measurement procedure of the modified Thomas test.

REFERENCES

1) Clapis PA, Davis SM, Davis RO: Reliability of inclinometer and goniometric measurements of hip extension flexibility using the modified Thomas test. Physiother Theory Pract, 2008, 24: 135–141. [Medline] [CrossRef]
2) Glard Y, Launay F, Viehweger E, et al.: Hip flexion contracture and lumbar spine lordosis in myelomeningocele. J Pediatr Orthop, 2005, 25: 476–478. [Medline] [CrossRef]
3) Peeler JD, Anderson JE: Reliability of the Thomas test for assessing range of motion about the hip. Phys Ther Sport, 2007, 8: 14–21.
[CrossRef]
4) Gabbe BJ, Bennell KL, Wajswelner H, et al.: Reliability of common lower extremity musculoskeletal screening tests. Phys Ther Sport, 2004, 5: 90–97. [CrossRef]
5) Magee DJ: Orthopedic physical assessment, 3rd ed. Philadelphia: Saunders, 2002, pp 607–660.
6) Shaheen AA, Basuodan RM: Quantitative assessment of head posture of young adults based on lateral view photographs. J Phys Ther Sci, 2012, 24: 391–394. [CrossRef]
7) Yoshida A, Fukushima T, Sunagawa T, et al.: Development of a clinical model for upper extremity motion analysis—comparison with manual measurement. J Phys Ther Sci, 2012, 24: 1149–1152. [CrossRef]
8) Park KN, Yi CH, Jeon HS, et al.: Effects of lumbopelvic neutralization on the electromyographic activity, lumbopelvic and knee motion during seated knee extension in subjects with hamstring shortness. J Phys Ther Sci, 2012, 24: 17–22. [CrossRef]
9) Oh JS: Effects of pelvic belt on hip extensor muscle EMG activity during prone hip extension in females with chronic low back pain. J Phys Ther Sci, 2014, 26: 1023–1024. [Medline] [CrossRef]
10) Kendall FP, McCreary EK, Provance PG, et al.: Muscles Testing and Function, 5th ed. Baltimore: Williams and Wilkins, 2005, pp 271, 280–285.
11) Peeler JD, Anderson JE: Reliability limits of the modified Thomas test for assessing rectus femoris muscle flexibility about the knee joint. J Athl Train, 2008, 43: 470–476. [Medline] [CrossRef]
12) Peeler J, Leiter J: Using digital photography to document rectus femoris flexibility: a reliability study of the modified Thomas test. Physiother Theory Pract, 2013, 29: 319–327. [Medline] [CrossRef]
13) Crossley KM, Bennell KL, Cowan SM, et al.: Analysis of outcome measures for persons with patellofemoral pain: which are reliable and valid? Arch Phys Med Rehabil, 2004, 85: 815–822.
[Medline] [CrossRef]
14) Ries JD, Echternach JL, Nof L, et al.: Test-retest reliability and minimal detectable change scores for the timed "up & go" test, the six-minute walk test, and gait speed in people with Alzheimer disease. Phys Ther, 2009, 89: 569–579. [Medline] [CrossRef]
15) Somers DL, Hanson JA, Kedzierski CM, et al.: The influence of experience on the reliability of goniometric and visual measurement of forefoot position. J Orthop Sports Phys Ther, 1997, 25: 192–202. [Medline] [CrossRef]
16) Harvey D: Assessment of the flexibility of elite athletes using the modified Thomas test. Br J Sports Med, 1998, 32: 68–70. [Medline] [CrossRef]
17) Park KH, Ha SM, Kim SJ, et al.: Effects of the pelvic rotatory control method on abdominal muscle activity and the pelvic rotation during active straight leg raising. Man Ther, 2013, 18: 220–224. [Medline] [CrossRef]
18) Noh KH, Kim JW, Kim GM, et al.: The influence of dual pressure biofeedback units on pelvic rotation and abdominal muscle activity during the active straight leg raise in women with chronic lower back pain. J Phys Ther Sci, 2014, 26: 717–719.
[Medline] [CrossRef]

work_zaua33dr4bghrcqxsexb2uky4m ---- Chapter 5 Photographic memory

UvA-DARE (Digital Academic Repository)
Digital photography: communication, identity, memory
van Dijck, J.
DOI: 10.1177/1470357207084865
Publication date: 2008
Document Version: Accepted author manuscript
Published in: Visual Communication

Citation for published version (APA): van Dijck, J. (2008). Digital photography: communication, identity, memory. Visual Communication, 7(1), 57-76. https://doi.org/10.1177/1470357207084865

Forthcoming in: Visual Communication, Vol. 7, (2008), p. 57-76.
===============================================================

Digital Photography: Communication, Identity, Memory.

José van Dijck (j.van.dijck@uva.nl)

Abstract

Taking photographs seems no longer primarily an act of memory intended to safeguard a family's pictorial heritage, but is increasingly becoming a tool for an individual's identity formation and communication. Digital cameras, cameraphones, photoblogs and other multipurpose devices seem to promote the use of images as the preferred idiom of a new generation of users.
This article's aim is to explore how technical changes (digitization), in connection with growing insights in cognitive science and with socio-cultural transformations, have affected personal photography. The increased malleability of photographic images may suit one's need for continuous self-remodeling and instant communication and bonding. However, that same manipulability may also lessen our grip on our images' future repurposing and reframing. Memory will not be eradicated from digital multipurpose tools. Instead, the function of memory reappears in the networked, distributed nature of digital photographs, as most images are sent over the internet and stored in virtual space.

Key words: Photography, memory, digital technology, visual culture, identity formation, Abu Ghraib pictures

Word count: 6,927

About the author: José van Dijck is a Professor of Media and Culture at the University of Amsterdam (The Netherlands) and chair of the Media Studies department. She is the author of Manufacturing Babies and Public Consent. Debating the New Reproductive Technologies (New York: New York University Press) and ImagEnation. Popular Images of Genetics (New York: New York University Press, 1998). Her latest book is titled The Transparent Body. A Cultural Analysis of Medical Imaging (Seattle: University of Washington Press, 2005). Her research areas include media and science, (digital) media technologies, and television and culture. This article is part of a book tentatively titled Mediated Memories in the Digital Age (Stanford University Press, forthcoming)

=================================================

Digital Photography: Communication, Identity, Memory.

José van Dijck, University of Amsterdam

Introduction

A student recently told me about an interesting experience. She and four friends had been hanging out in her dormitory room, telling jokes and poking fun.
Her roommate had taken the student's camera phone to take a picture of the group, lying in various relaxed positions on the couch. That same evening, the student had posted the picture on her photoblog—a blog she regularly updated to keep friends and family informed about her daily life in college. The next day she received an e-mail from her roommate; upon opening the attached jpg-file she found the same picture of herself and her friends on the couch, but now they were portrayed with dozens of empty beer cans and wine bottles piled up on the coffee table in front of them. Her dismay caused by this unauthorized act of photoshopping was only aggravated when she noticed the doctored picture was mailed around to a long list of peers, including some people she had never met or only vaguely knew. When confronting her roommate with the potential consequences of her action, they engaged in a heated discussion about the innocence of manipulating pictures and sending them around ('everybody will see this is a joke') vis-à-vis the incriminating potential of photographs ('not everyone may recognize the manipulation'), whose impact might be less transitory than we think ('these pictures may show up endlessly').

In recent years, the role and function of western digital photography seem to have changed substantially.1 In the analogue age, personal photography was first and foremost a means for autobiographical remembering, and photographs usually ended up as keepsakes in someone's (family) album or shoebox.2 They were typically regarded to be a person's most reliable aid for recall and for verifying 'life as it was,' even if we know that imagination, projection, and remembrance are inextricably bound up in the process of remembering (Stuhlmiller, 1996). Photography's functions as a tool for identity formation and as a means for communication were duly acknowledged, but were always rated secondary to its prime purpose of memory (Sontag, 1973; Barthes, 1981). Recent research by anthropologists, sociologists, and psychologists seems to suggest that the increased deployment of digital cameras—including cameras integrated in other communication devices—favors the functions of communication and identity formation at the expense of photography's use as a tool for remembering (Harrison, 2002; Schiano, Chen and Isaacs, 2002; Garry and Gerrie, 2005). Although it is undeniable that the functions of photography as communication and identity formation have gained importance, I will argue in this article that photography's function as a memory tool is still equally vibrant, even if its manifestation is changing in the digital era.

Before starting this argument, it is necessary to point out a few misguided assumptions we often encounter when tackling this subject. First, communication and identity formation are not novel uses, but have always been intrinsic functions of photography, also in the analogue days. Indeed, a younger generation seems to increasingly use digital cameras for 'live' communication, instead of storing pictures of 'life.' Easy distribution of images over the internet and quick dissemination via personal handheld devices promote pictures as the preferred idiom in mediated communication practices. But what are the implications of this transformed use of photography as a memory tool? Second, personal photography has not changed as a result of digital technologies; the changing function of photography is part of a complex technological, social and cultural transformation. As the student's anecdote illustrates, digitization is often considered the culprit of photography's growing unreliability as a tool for remembrance, but in fact, the camera has never been a dependable aid for storing memories, and photographs have always been twitched and tweaked in the process of recollection.
Digital photography raises several intriguing questions concerning manipulation and cognitive editing: what is the power of digital tools in sculpting identity? How do we gauge new features that help us brush up our pictures and make our memories picture perfect? It is simply not true that digital photography has eradicated the camera’s function as a tool for memory. Instead, the function of memory reappears in the networked, distributed nature of digital photographs, as most images are sent over the wires and end up somewhere in virtual space. This article’s aim, then, is to show how technical changes, in connection to growing insights in cognitive science and to socio-cultural transformations, have affected personal photography’s role in communication, the shaping of identity, and memory. Underlying this argument is the recurring issue of control versus a lack of control. Part of the digital camera’s popularity can be explained by an increased command over the outcome of pictures now that electronic processes allow for greater manipulability, and yet the flipside is that pictures can also be easily manipulated by everyone with the appropriate toolbox. A similar paradox can be noticed with regards to the distribution of personal pictures. While the internet allows for 4 quick and easy sharing of private snapshots, that same tool also renders them vulnerable to unauthorized distribution. Ironically, the picture taken by the roommate as a token of instant and ephemeral communication may live an extended life on the internet, turning up in unexpected contexts many years from now. As will be argued in the last section, the increased malleability of photographic images may suit one’s need for continuous self-remodeling, but that same flexibility may also lessen our grip on our images’ future repurposing and reframing, forcing us to acknowledge the distributed presence of pictorial memory. 
Live Pictures: Personal photography as tools for communication and experience When personal photography came of age in the nineteenth and twentieth centuries, it gradually emerged as a social practice that revolved around families wanting to save their memories of past experiences in material pictorial forms for future reference or communal reminiscing. Yet from the early days of photography, we could already distinguish social uses complementary to its primary function. Photography always also served as an instrument of communication and as a means to share experience. As Susan Sontag argued in 1973, the tourist’s compulsion to take snapshots of foreign places reveals how taking pictures can become paramount to experiencing an event; at the same time, communicating experiences with the help of photographs is an integral part of tourist photography. Notwithstanding the dominance of photography as a family tool for remembrance and reminisce, the other functions were immanent to photography from the moment it became popular as a domestic technology. In recent years, we can see profound shifts in the balance between these various social uses: from family to individual use, from memory tools to communication devices, and from sharing (memory) objects to sharing experiences. I will subsequently elucidate each of these profound shifts. The social significance and cultural impact of personal photography grew exponentially in the past century: by the early 1970s, almost every American and western European household owned a photo camera. 
By the time sociologists and anthropologists began to acknowledge the significance of photography as a cultural rite of family life, Susan Sontag (1973: 8) took on the ethnographer’s cloak and described its meaning as a tool for recording family life: ‘Through photographs, each family constructs a portrait chronicle of itself—a portable kit of images that bears witness to its connectedness.’ Through taking and organizing pictures, individuals articulate their connections to, and initiation into, clans and groups, emphasizing ritualized moments of aging and of coming of age. Cameras go with 5 family life, Sontag observed: households with children are twice as likely to have at least one camera than households in which there are no children. Photography not simply reflected but constituted family life and structured an individual’s notion of belonging. Quite a number of sociological and anthropological studies have scrutinized the relationship between picture taking, organizing, and presenting photographs on the one side, and the construction of family, heritage, and kinship on the other (Hirsch, 1997; Holland, 1991; Chambers, 2001). Over the past two decades, the individual has gradually become the nucleus of pictorial life. In her ethnographic study of how people connect personal photographs to memory and narration, anthropologist Barbara Harrison (2002: 107) observes that self- presentation—rather than family re-presentation—is now a major function of photographs. Harrison’s field study acquiesces a significant shift from personal photography being bound up with memory and commemoration towards pictures as a form of identity formation; cameras are used less for the remembrance of family life and more towards the affirmation of personhood and personal bonds. Since the 1990s, and most distinctively since the beginning of the new millennium, cameras increasingly serve as tools for mediating quotidian experiences other than rituals or ceremonial moments. 
Although this transformation is partly a technological evolution pushed by market forces, its social and cultural stakes cannot be overestimated. When looking at current generations of users, researchers observe a watershed between adult users, large numbers of whom are now switching from analogue to digital cameras, and teenagers and young adults, who are growing up with a number of new digital multifunctional communication and media devices (Schiano, Chen and Isaacs, 2002; Liechti and Ichikawa, 2000). The older group generally adheres to the primacy of photography as a memory tool, particularly in the construction of family life, whereas teenagers and young adults use camera-like tools for conversation and peer-group building. This distinctive swing in photography’s use also shows up in ethnographic observations of teenage patterns of taking and managing pictures. One American study focusing on a group of teens between 14 and 19 years of age reports a remarkable incongruence between what teenagers say they value in photography and how they behave: while most of them describe photos as permanent records of their lives, their behavior reveals a preference for photography as social communication (Schiano, Chen, and Isaacs, 2002). Showing pictures as part of conversation or reviewing pictures to confirm social bonds between friends appears more important than organizing photos in albums and looking at them—an activity they consider their parents’ domain. Photos are shared less in the context of family and home, and more in peer-group environments: schools, clubs, friends’ houses. Other studies note how teens regard pictures as circulating messages, an interactive exchange in which personal photographs casually mix with public images, such as magazine pictures, drawings, and text (Liechti and Ichikawa, 2000). Part of this change is reflected in the popularity of new technologies.
In terms of hardware, the single-purpose camera for taking still pictures gives way to multifunctional appliances, combining the camera function with the personal digital assistant (PDA), the mobile phone, MP3 players, and global positioning devices. These emerging digital tools substantially affect the way people socialize and interact, and by extension, the way they maintain and consolidate relationships. The so-called cameraphone permits entirely new performative rituals, such as shooting pictures at a live concert and instantly mailing these images to a friend. But we also see this change reflected in terms of software.3 In the past three years, photoblogs have become popular as an internet-based technology—a type of blog that adds photographs to text and hyperlinks in the telling of stories. A photoblog, rather than being a digital album, elicits entirely different presentational uses: college students use it to keep their distant loved ones updated about their daily life, but individuals may also use a photoblog to start their own online photo gallery. Photobloggers prefer to profile themselves in images rather than words (Cohen, 2005). Whereas their parents invested considerable time and effort in building up material collections of pictures for future reference, youngsters appear to take less interest in sharing photographs as objects than in sharing them as experiences (Kindberg, Spasojevic, Fleck and Sellen, 2005). The rapidly increasing popularity of cameraphones supports and propels this new communicative deployment of personal photography. Pictures sent around by a cameraphone are used to convey a brief message, or merely to show affect. ‘Connecting’ or ‘getting in touch,’ rather than ‘reality capturing’ or ‘memory preservation,’ are the social meanings transferred onto this type of photography.
Whereas parents and/or children used to sit on the couch together flipping through photo albums, most teenagers consider their pictures to be temporary reminders rather than permanent keepsakes. Phone-photography gives rise to a cultural form reminiscent of the old-fashioned postcard: snapshots with a few words attached that are mostly valued as ritual signs of (re)connection (Lehtonen, Koskinen, and Kurvinen, 2002). Like postcards, cameraphone pictures are meant to be thrown away after they are received. Not coincidentally, the cameraphone merges oral and visual modalities—the latter seemingly adapting to the former. Pictures become more like spoken language as photographs are turning into the new currency for social interaction. Pixeled images, like spoken words, circulate between individuals and groups to establish and reconfirm bonds. Sometimes pictures are accompanied by captions that form the ‘missing voice’ explaining the picture. For instance, a concert visitor takes a picture of her favorite band’s performance, adds the word ‘awesome’ and immediately sends off the message to her friends back home. Cameraphone pictures are a way of touching base: Picture this, here! Picture me, now! What makes camera phones different from the single-purpose camera is the medium’s ‘verbosity’—the inflation of images inscribed in the apparatus’s script. When pictures become a visual language channeled by a communication medium, the value of individual pictures decreases, while the general significance of visual communication augments. A thousand pictures sent over the phone may now be worth a single word: see! Taking, sending, and receiving photographs is a real-time experience, and like spoken words, image exchanges are not meant to be archived (Van House, Davis, and Ames, 2005). In their bounty, photographs gain value as ‘moments,’ while losing value as mementos.
Clearly, we are witnessing a shift, especially among the younger generation, towards using photography as an instrument for peer-bonding and interaction. Digitization is not the cause of this trend; instead, the tendency to fuse photography with daily experience and communication is part of a broader cultural transformation that involves individualization and intensification of experience. The emphasis on individualism and personhood at the expense of family is a social pattern whose roots can be traced back as far as the late 1960s and early 1970s. The intensification of experience as a turn-of-the-millennium economic and social force has been theorized most acutely by American economists Pine and Gilmore; commercial products are increasingly marketed as memorable experiences engaging all five senses—sight, sound, touch, taste, smell—and packaged in snappy themes, so as to prolong the contact zone between product and consumers (Pine and Gilmore, 1999). Digital photography is part of this larger transformation in which the self becomes the center of a virtual universe made up of informational and spatial flows; individuals articulate their identity as social beings not only by taking and storing photographs to document their lives, but by participating in communal photographic exchanges that mark their identity as interactive producers and consumers of culture.

Pictures of Life: Personal photography as a tool for identity formation

Besides their growing usage as tools for communication and experience, digital photo cameras have been touted as novel instruments of identity formation, particularly as they allow users to manipulate their own images. However, it should be noted that photo cameras have always been important instruments for the shaping of self-identity.
Some theorists have claimed that personal pictures equal identities (‘our pictures are us’), but this claim appears to understate the intricate cognitive, mental, social and cultural processes at work in identity formation (Chalfen, 2002). Roland Barthes (1981: 80) emphasized, in the late 1970s, the close interconnection between identity formation and memory: pictures of family and friends are visible reminders of former appearances, inviting us to reflect on ‘what has been,’ but by the same token, they tell us how we should remember our selves as younger persons. We remodel our self-image to fit the pictures taken at previous moments in time. Memories are made as much as they are recalled from photographs; our recollections never remain the same, even if the photograph appears to represent a fixed image of the past. And yet, we use these pictures not to ‘fix’ memory, but to constantly reassess our past lives and reflect on what has been as well as what is and what will be. Recollecting is not simply a revisionist project; anticipations of future selves inform retrograde projections, and these mental image maps, in turn, feed a desire to impact ‘external’ (camera) visions of our selves (Rose, 1992). The role photographs play in the complex construction of one’s identity has been reflected upon in cognitive theory as well as in cultural theory, particularly semiotics. Cognitive psychologists have investigated the intriguing question of how photographs can influence our personal memories (Strange, Gerrie and Garry, 2005). The human mind actively produces visual autobiographical evidence through photographs, but also modifies it through pictures—cutting off estranged spouses or throwing away depressing images of themselves when they were still seriously overweight. Research has shown that people are also easily seduced into creating false memories of their pasts on the basis of unaltered and doctored pictures. 
In the early 1990s, researchers from America and New Zealand persuaded experimental subjects to believe false narratives about their childhood, written or told by family members and substantiated by ‘true’ photographs.4 Over the next decade, these findings were corroborated by experiments in which doctored pictures were used; more than 50% of all subjects constructed false memories out of old personal photographs that were carefully retouched to depict a scene that had never happened in that person’s life.5 There is a continuing debate whether it is narratives or photographs (or a combination of both) that trigger most false memories, but the conclusion that people’s autobiographical memories are prone to either self-induced intervention or secret manipulation is well established (Garry and Wade, 2005).6 The close interweaving of memory, imagination, and desire in creating a picture of one’s past life has also been subject to theoretical probing by cultural theorists, most notably Roland Barthes. When exploring the intricacies of the camera lucida, Barthes testifies to this complex loop of images-pictures informing desire-memory when describing the discomfort he feels the moment he succumbs to being the camera’s object. Having one’s photograph taken, as Barthes (1981: 13) observes, is a closed field of forces where four image-repertoires intersect: ‘the one that I think I am’ (the mental self-image); ‘the one I want others to think I am’ (the idealized self-image); ‘the one the photographer thinks I am’ (the photographed self-image); and ‘the one the photographer makes use of when exhibiting his art’ (the public self-image or imago). Whereas the first two levels represent the stages of mental, internal image processing, the third and fourth level refer to the external process of picture taking and presentation—the photographer’s frame of reference and cultural perspective.
In contrast to psychologists, Barthes’ semiotic perspective emphasizes that cognition does not necessarily reside inside our brains, but extends into the social and cultural realm. Barthes’ exploration of analogue photography elucidates how the four image-repertoires of self intersect and yet never match. They collide at various moments: at the instant of capturing, when ‘evaluating’ the outcome or photographed object, or while reminiscing at a later point in time, re-viewing the picture. When a picture is taken, we want those photographs to match our idealized self-image—flattering, without pimples, happy, attractive—so we attempt to influence the process by posing, smiling, or giving instructions to the photographer. At a later stage, we can try to influence the outcome by selecting, refusing, or destroying the actual print. A photographed person exerts only limited control over the resulting picture. The photographer’s choice of frame and angle defines the portraiture, while the referent can still be further modified at the stage of development by applying retouching techniques. Roland Barthes obviously feels powerless in the face of the photographer’s decisions, lacking control over the image which he wants to equal his idealized self. Its fate is in the hands of the photographer who is taking the picture, and of the chemical, mechanical, and publishing forces involved in its ultimate materialization. Barthes’ discomfort signals a fundamental resentment about his inability to fashion pictures ‘in his own image.’ Since the four levels never coincide, photographs that depict oneself are profoundly alienating, even to the extent of giving the French philosopher a sense of imposture. Paradoxically, Barthes perceives a lack of control over his photographed image and imago, and yet he feels confident he can exert power over the mental and idealized images entering his mind.
Although the experienced powerlessness over the photographer’s perspective and the ‘black box’ of the camera, vis-à-vis the assumed autonomy over his mental images and memories, appears entirely plausible, neither perception can remain undisputed. The photographed image—the desire to manipulate public imago—has never been outside the subject’s reach; on the contrary. Since the late 1840s, commercial portrait photographers have succumbed to their patrons’ desire for idealized self-images the way painters did before the advent of photography: by adopting flattering perspectives and applying chemical magic. Vice versa, the subject’s power over images entering the mind may not be as manageable as it appears. Cultural ideals of physical appearance, displayed through photographs and evolving over time, often unconsciously influence the mind’s (idealized) images of self (Sontag 1973: 85). Control over photographic images is hence not inscribed in the machine’s ontology, and neither does the mind have full sovereignty over the (cultural) images it allows to enter memory. Instead, control over one’s ‘self-portrait’ is a subtle choreography of the four image-repertoires, a balancing act in which photographic images ‘enculturate’ personal identity. Now if we replace the analogue camera with a digital one, and laminated photos by pixeled shots, how would this affect the intertwining of mental-cognitive and cultural-material image processes in photography? When trying to answer this question, we are confronted with a conspicuous absence of interdisciplinary research in this area. None of the cognitive studies discussed above pays attention to the ways in which individuals use digital photography to manipulate their own personal pictures and memory; the cultural, material, and technological aspects of memory morphing appear strikingly irrelevant to cognitive science.
This is striking because scientists often mention how their academic interest in manipulated pictures gains relevance in the face of a growing ubiquitous use of digital photography and its endless potential to reconstruct and retouch one’s childhood memories; skills once monopolized by Hollywood studios and advertising agencies are now within the reach of every individual who owns a ‘digital camera, image editing software, computer, and the capacity to follow instructions’ (Garry and Gerrie, 2005: 237).7 Mutatis mutandis, when turning to cultural theorists for enlightenment, their disregard of psychological and cognitive studies in this area is rather remarkable; semioticians and constructivists typically analyze the intricacies of technological devices to connect them to social and cultural agency.8 Yet without acknowledging the profound interlacing of mental, technical, and cultural levels involved in digital photography, we may never understand the intricate connection between identity formation and photography. It may be instructive to spell out a few significant differences between analogue and digital photography in terms of their (cognitive and technical) mechanisms. At first sight, digital photography provides more access to the imaging process between the stages of taking the picture and looking at its printed result. Only seconds after a picture is taken, it allows a ‘sneak preview’ via the camera’s small screen. The display shows a tentative result, an image that can be deleted or stored. Since a sneak preview allows the photographer to instantly share the results with the photographed subject, there is room for negotiation: the subject’s evaluation of her self-image may influence the next posture. A second moment of re-viewing takes place at the computer, in which images, stored as digital code, are susceptible to editing and manipulation.
Besides selecting or erasing pictures, photo-paint software permits endless retouching of images—everything from cropping and color adjustment to brushing out red eyes and pimples. Beyond the superficial level, one can remove entire objects from the picture, such as unwanted decorations, or add desirable features, such as sharper cheekbones or palm trees in the background. Let’s be straight about one thing: digitization never caused manipulability or artificiality. Although some theorists of visual culture have earmarked manipulability as the feature that makes digital photography stand out from its analogue precursors, history bespeaks the contrary (Mitchell, 1992). Retouching and manipulation have always been inherent to the dynamics of photography (Manovich, 2001; Ritchin, 1999; Holland and Wells, …). What is new in digital photography is the increased number of possibilities to review and retouch one’s own pictures, first on a small camera screen and later on the screen of a computer. When pictures are taken by a digital camera, the subject may feel empowered to steer the outcome (the photographed or public image) because he or she may have access to stages formerly ‘black boxed’ by cameras, film rolls, and chemical labs. Previews and reviews of the pixeled image, combined with easy-to-use photoshop software, undoubtedly seduce users into pictorial enhancement. But does this increased flexibility cause the processes of photographic imaging and mental (or cognitive) editing to further entwine in the construction of identity? In other words, does image doctoring become an integral element of autobiographical remembering? Of course, we have already become used to the prevailing use of the ‘camera pictura’ with regards to the creation of public images.
Since the 1990s, people no longer expect indexical fidelity to an external person when looking at photographic portraits, particularly those in advertising; almost by default, pictures in magazines, billboards, and many other public sources are retouched or enhanced. Digital ‘stock photography’ uses public images as resources or ‘input’ to be worked on by anyone who pays for their use (Frosh, 2003). Companies like Microsoft and Getty have anticipated the consequence of this evolution by buying up large stocks of public images and selling them back to the public domain by licensing their ‘re-creation.’ From the culturally accepted modifiability of public images it is only a small step to accepting your own personal pictures to be mere ‘stock’ in the ongoing remodeling project of life’s pictorial heritage. The impact of editing software on the profiling of one’s personal identity is evident from many photoblogs and personal picture galleries on the internet. Enhancing color and beautifying faces is no longer the exclusive province of beauty magazines: individuals may now purchase photoshop software to brush up their cherished images. A large number of software packages allow users to restore their old, damaged and faded family pictures; in one and the same breath, they offer to upgrade your self-image. For instance, VisionQuest Images advertises its packages as technical aids to create a ‘digital masterpiece of your specification’; computer programs let you change everything in your personal appearance, from lip size to skin tint.9 Examples of individuals who use these programs abound on the internet.10 These instances reveal that the acceptability of photographic manipulation of someone’s personal photographs can hardly be separated from the normalized use of enhanced idealized images. Digital doctoring of private snapshots is just another stage in the eternal choreography of the (mental and cultural) image repertoires once identified by Roland Barthes.
The endless potential of digital photography to manipulate one’s self-image seems to make it the ultimate tool for identity formation. Whereas analogue photographs were often erroneously viewed as the ‘still’ input for ‘static’ images, digital pictures more explicitly serve as visual resources in a life-long project to reinvent one’s self-appearance. Software packages supporting the processing of personal photographs often bespeak the digital image’s status as a liminal object; pixeled photographs are touted as bricks of memory construction, as software is architecturally designed with future remodeling in mind. As Canadian design scholar Ron Burnett (2004: 28) eloquently phrases it: ‘The shift to the digital has shown that photographs are simply raw material for an endless series of digressions. … As images, photographs encourage viewers to move beyond the physical world even as they assert the value of memory, place, and original moments.’ I am not saying, though, that with the advent of digital photography people all of a sudden feel more inclined to photoshop their personal pictures stored in the computer. Neither am I arguing that mental imaging processes change as a result of having more access to intermediate layers of photographic imaging. My point is that the condition of modifiability, plasticity, and ongoing remodeling equally informs—or should I say ‘enculturates’—all four image-repertoires involved in the construction of personal memory. The condition of plasticity and modifiability, far from being exclusive to personal photography, resounds in diverging cultural, medical, and technological self-remodeling projects. Ultrasound images of fetuses—sneak previews into the womb—stimulate intervention in the biological fabric, turning the fetus into an object to be worked on (Van Dijck, 1995).
Cosmetic surgery configures the human body as a physical resource amenable to extreme makeovers; before-and-after pictures structure not only subjective self-consciousness, but upon entering the public image repertoire, they concurrently ‘normalize’ intervention in physical appearance. The most remarkable thing about before-and-after pictures abounding on the internet and on television these days is that they do not promote perpetual modification of our pictures to portray a better self, but advertise the potential to modify our bodies to match our idealized mental images. Contemporary notions of body, mind, appearance, identity, and memory seem to be equally informed by the cultural condition of perpetual modification; our new tools are only in tune with the mental flexibility to refashion self-identity and to morph corporeality. The question whether changing concepts of identity have followed from evolving technologies or the other way around is in fact beside the point. What is more important is to address how the new choreography of image-repertoires operates in a social and cultural climate that increasingly values modifiability and flexibility, and whether this climate indeed allows more individual control over one’s own image.

Pictures for Life? Memory and photography in the digital age

From the above observations we are tempted to draw the conclusion that digital cameras are becoming tools for communication, experience, and identity formation, moving away from their formerly prime functions as memory tools. But even if we accept that photography is increasingly regarded as an instrument for identity construction, rather than one for recollection or reflection, we can hardly conclude that these newly foregrounded functions annihilate photography’s commemorative function. Indeed, digital cameras give rise to new social practices in which pictures are considered visual resources in the ‘microcultures’ of everyday life (Burnett 2004: 62).
Yet in these microcultures, memory does not so much disappear from the spectrum of social use as it takes on a different form. In the networked reality of people’s everyday life, the default mode of personal photography becomes ‘sharing’. Few people realize that sharing experience by means of exchanging digital images almost by definition implies distributed storage: personal ‘live’ pictures sent around through the internet may remain there for life, turning up in unforeseen contexts, reframed and repurposed. A well-known example may clarify the meaning of distributed memory and demonstrate the intertwined meanings of personal and collective cultural memory: the Abu Ghraib pictures. In May 2004, a series of the most horrific, graphic scenes of torture and violence used by American guards stationed at the Abu Ghraib prison against Iraqi detainees appeared in the press, and were subsequently disseminated through the internet.11 Most pictures were made by prison guards and frequently featured two lower-ranked members of the armed forces, Charles Graner and Lynndie England; they often posed thumbs up in front of individual or piled-up prisoners who invariably showed signs of torture or sexual assault. The hundreds of pictures taken by prison guards of detainees communicate an arduous casualness in the act of photographing. Clearly, these pictures were made by digital cameras (or cameraphones) deployed by army personnel as part of their daily work routines—perfectly in tune with the popular function of photography as a ritual of everyday communication. As Susan Sontag (2004: 26) poignantly describes in her essay on the case:

The pictures taken by American soldiers in Abu Ghraib reflect a recent shift in the use made of pictures—less objects to be saved than messages to be disseminated, circulated. A digital camera is a common possession among soldiers.
Where once photographing war was the province of photojournalists, now the soldiers themselves are all photographers—recording their war, their fun, their observations of what they find picturesque, their atrocities—and swapping images among themselves and e-mailing them around the globe.

Intentionally taken to be sent back home as triumphant trophies or to be mailed around to colleagues, the pictures were a social gesture of bonding and poaching. Some pictures allegedly served as screensavers on prison guards’ desktops, another sign of their function as ‘office jokes’ to be understood by insiders only. The casualness and look-at-me-here enunciation of the Abu Ghraib photographs, conveyed by the uniformed men and women whose posture betrayed pride as if they had just caught a big fish, connotes the function of these pictures as symbolic resources for communication. The last thing these pictures were meant to be by their makers was objects of lasting memory. And yet, this is exactly how they ended up in the collective memory of the American people. Once intercepted and published in newspapers and on television worldwide, they were reframed as evidence of the army’s abhorrent behavior as torturers posing triumphantly over their helpless captives. The Abu Ghraib pictures became evidence in a military trial that incriminated the perpetrators responsible for the abuse shown in the pictures, but acquitted the invisible chain of command that obviously condoned such behavior. Perhaps most telling was the military’s response to the Abu Ghraib debacle. Rather than condemning the practice depicted by the images taken, the military subsequently ordered a ban on personal photography from the work floor; pictures made for private use may no longer be taken outside penitentiaries. The ‘incident’ resulted in stricter communication regulations as well as a prohibition against taking and distributing personal photographs on military premises.
Ironically, pictures that were casually mailed off as ephemeral ‘postcards’, meant to be thrown away after reading the message, became a permanent engraving in the consciousness of a generation; pictures sent with a communicative intent ended up in America’s collective cultural memory as painful visual evidence of its military’s hubris. The awareness that any picture unleashed on the internet can be endlessly recycled may lead to a new attitude in taking pictures: anticipating future reuse, photographs are no longer innocent personal keepsakes, but potential liabilities in someone’s personal life or professional career. The lesson learned from the Abu Ghraib pictures—beyond their horrendous political message—is that personal digital photography can hardly be confined to private ‘grounds’; embedded in networked systems, pictorial memory is forever distributed, perpetually stored in the endless maze of virtual life. The digital evolution that has shaped personal photography is anything but an exclusively technological transformation. Rather, the shift in use and function of the camera seems to suit a more general cultural condition that may be characterized by terms like manipulability, individuality, communicability, versatility, and distributedness. This cultural condition has definitely affected the nature and status of photographs as building blocks for personal identity. Even if the functions of memory capture, communicative experience, and identity formation continue to coexist in current uses of personal photography, their rebalanced significance echoes crucial changes in our contemporary cultural condition. Returning to the issue of power, it is difficult to conclude whether digital photography has led to more or less control over our personal images, pictures, and memories.
The choreography of image repertoires, blending mental and cultural imaging processes, not only seems to reset our control over ‘pictorial memory,’ but implies a profound redefinition of the very term. Photographs could never be qualified as truthful anchors of personal memory; yet since the emergence of digital photography, pictorial manipulation seems to be a default mode rather than an option. To some extent, the camera allows more control over our memories, handing us the tools for ‘brushing up’ and reinvigorating remembrances of things past. In this day and age, (digital) photographs let subjects take some measure of control over their photographed appearance, inviting them to tweak and reshape their public and private identities. As stated before, digital photography is not the cause of memory’s transformation; the digital camera derives its revamped application as a memory tool from a culture where manipulability and morphing are commonly accepted conditions for shaping personhood. Flexibility and morphing do not apply exclusively to pictures as shaping tools for personal memory, but also apply more generally to bodies and things. Memory, like photographs and bodies, can now be made picture perfect; memory and photography change in conjunction, adapting to contemporary expectations and prevailing norms. Our photographs tell us who we want to be and how we want to remember; it is hard to imagine how the abundance of editing tools available would not affect our desire to brush up our past selves. Personal photography may become a lifelong exercise in revising past desires and adjusting them to new expectations. Even if still a memory tool, the digital camera is now pushed as an instrument for identity construction, allowing more shaping power over autobiographical memories. And yet, this same manipulative potential that empowers people to shape their identity and memory may also be used by others to reshape that image.
The consequence of digital technology is that personal pictures can be retouched without leaving traces and can be manipulated regardless of ownership or intent of the ‘original’ picture, evidenced by the anecdote at the beginning of this article, about the student who was unpleasantly surprised to find a doctored picture of herself mailed around to (anonymous) recipients. Personal photographs are increasingly pulled out of the shoebox to be used as public signifiers. Pictures once bound to remain in personal archives increasingly enter the public domain, where they are invariably brushed up or retouched to (retro)fit contemporary narratives. It is quite plausible to see personal pictures emerge in entirely different public contexts: as testimony to a criminal on the run, as a memorial to a soldier who died in the war, or as evidence of a politician’s excess alcohol use in college (Sturken, 1999). Like the pictures shown to subjects in psychologists’ experiments, we have no way to decide whether they are true or false: is it memory that manipulates pictures, or did we use pictures to create or adjust memory? The digital age will set new standards for remembrance and recall: the terms ‘true’ and ‘doctored’ will no longer apply to pictures, and neither can we speak of ‘true’ and ‘false’ memories. The function of personal photography as an act of memory, as we have seen, is increasingly giving way to its formative, communicative and experiential uses. Pictures taken by a cameraphone, meant as expendable enunciations to be shared with co-workers, have a distinctly different discursive power than our framed black-and-white ancestor photographs on the wall. We may now take pictures and send them around to a number of known and anonymous recipients. Networked systems define new presentational contexts of personal pictures, as sharing pictures becomes the default mode of this cultural practice.
In many ways, digital tools and connective systems expand control over an individual’s image exposure, granting her more power to present and shape herself in public. However, the flipside of this increased manipulability is actually a loss of control over a picture’s framed meaning: pictures that are amenable to effortless distribution over the internet are equally prone to unintended repurposing. But since the framing of a picture is never fixed once and for all, each re-materialization comes with its own illocutionary meaning attached, and each reframing may render the ‘original’ purpose unrecognizable. So even if taken with a communicative use in mind, a picture may end up as a persistent object of (collective) cultural memory—evidenced by the Abu Ghraib pictures. The consequences of reframing and repurposing are particularly poignant when pictures move seamlessly between private and public contexts. Of course, this risk is never the direct implication of photography’s digital condition, but it cannot be denied that digital media have made reframing a lot easier and smoother. Distributing personal pictures over the internet or by cameraphone, which is now a common way of communication, intrinsically turns private pictures into public property and therefore diminishes one’s power over their presentational context. Anxiety over an individual’s ability to control his or her self-image and public imago has not abated since the analogue days of personal photography. On the contrary, image control is still a pressing concern in the debates over personal photography in the digital age, even if the parameters for this concern have substantially shifted, adapting to new technological, social, and cultural conditions. We may hail the increased manipulability of our self-image due to digital photography while at the same time resenting the loss of power over one’s pictorial framing in public contexts.
The enhanced versatility and multi-purposing of digital pictures facilitate promotion of one’s public image, and yet also diminish control over what happens once a picture becomes part of a networked environment which changes its performative function upon each retrieval. Due to this networked condition, the definition of personal memory is gravitating towards distributed presence. Our ‘live pictures’ and ‘pictures of life’ may become ‘pictures for life’—even if unintentionally.

References

Barthes, Roland (1981) Camera Lucida. Reflections on Photography. Translated by Richard Howard. New York: Hill and Wang. (Originally published as La Chambre Claire, 1980. Paris: Editions du Seuil.)
Burnett, Ron (2004) How Images Think. Cambridge: MIT Press.
Chalfen, Richard, and Mai Murui (2001) ‘Print club photography in Japan: Framing social relationships.’ Visual Sociology 16 (1): 55-73.
Chalfen, Richard (2002) ‘Snapshots ‘R’ Us: the evidentiary problematic of home media.’ Visual Studies 17 (2): 141-149.
Chambers, Deborah (2001) Representing the Family. London: Sage.
Cohen, Kris (2005) ‘What does the photoblog want?’ Media, Culture and Society 27 (6): 883-901.
Dijck, José van (1995) Manufacturing Babies and Public Consent. Debating the New Reproductive Technologies. New York: New York University Press.
Frosh, Paul (2003) The Image Factory. Consumer Culture, Photography, and the Visual Content Industry. Oxford: Berg.
Garry, Maryanne, and Matthew Gerrie (2005) ‘When photographs create false memories.’ Current Directions in Psychological Science 14: 321.
Garry, Maryanne, and Kimberley Wade (2005) ‘Actually, a picture is worth less than 45 words: Narratives produce more false memories than photographs do.’ Psychonomic Bulletin and Review 12 (2): 359-66.
Harrison, Barbara (2002) ‘Photographic Visions and Narrative Inquiry.’ Narrative Inquiry 12 (1): 87-111.
Hirsch, Marianne (1997) Family Frames. Photography, Narrative, and Postmemory. Cambridge: Harvard University Press.
Holland, Patricia (1991) ‘Introduction: History, memory, and the family album’ in: J. Spence and Patricia Holland (eds) Family Snaps: The Meaning of Domestic Photography. London: Virago. 1-14.
Holland, Patricia and Liz Wells, ……
Intraub, Helene, and James Hoffman (1992) ‘Reading and visual memory: Remembering scenes that were never seen.’ American Journal of Psychology 105 (1): 101-14.
Kindberg, Tim, Mirjana Spasojevic, Rowanne Fleck, and Abigail Sellen (2005) ‘I saw this and thought of you: Some social uses of camera phones.’ Conference on Human Factors in Computing Systems, 1545-8.
Lehtonen, Turo-Kimmo, Ilpo Koskinen, and Esko Kurvinen (2002) ‘Mobile Digital Pictures—The future of the postcard? Findings from an experimental field study.’ In: Laakso and Ostman (eds) Postcards and Cultural Rituals (Korttien Talo: Haemeenlinna): 69-96.
Liechti, Olivier and Tadao Ichikawa (2000) ‘A digital photography framework enabling affective awareness in home communication.’ Personal and Ubiquitous Computing 4 (1): 232-39.
Lindsay, Stephen, Lisa Hagen, Don Read, Kimberley Wade, and Maryanne Garry (2004) ‘True photographs and false memories.’ Psychological Science 15 (3): 149.
Lister, Martin, ed. (1995) The Photographic Image in Digital Culture. London: Routledge.
Manovich, Lev (2001) The Language of New Media. Cambridge: MIT Press.
McCarthy, Anna (2002) ‘Cyberculture or Material Culture?’ Etnofoor 15 (1): 47-63.
Mitchell, William J.T. (1992) The Reconfigured Eye. Visual Truth in the Post-photographic Era. Cambridge: MIT Press.
Mules, Warwick (2000) ‘Lines, Dots and Pixels: the making and remaking of the printed image in visual culture.’ Continuum, Journal of Media and Cultural Studies 14 (3): 303-316.
Pine, Joseph and James Gilmore (1999) The Experience Economy. Work is a theatre and every business a stage. Cambridge: Harvard Business School.
Ritchin, Fred (1999) In Our Own Image. The coming revolution in photography. New York: Aperture.
Rodden, Kerry, and Kenneth Wood (2003) ‘How do people manage their digital photographs?’ Computer Human Interaction 5 (1): 409-16.
Rodowick, D.N. (2001) Reading the Figural, or, Philosophy after the New Media. Durham: Duke University Press.
Rose, Steve (1992) The Making of Memory. London: Bantam Press.
Schiano, Diane J., Coreena P. Chen, and Ellen Isaacs (2002) ‘How Teens Take, View, Share, and Store Photos.’ In: Proceedings of the Conference on Computer-Supported Co-operative Work (CSCW). New York: ACM.
Slater, Don (1995) ‘Domestic Photography and digital culture.’ In: Lister, Martin (ed) The Photographic Image in Digital Culture. London: Routledge. pp. 129-146.
Sontag, Susan (1973) On Photography. New York: Delta.
Sontag, Susan (2004) ‘Regarding the Torture of Others.’ The New York Times Magazine, 23 May 2004, p. 25-29.
Strange, Deryn, Matthew Gerrie, and Maryanne Garry (2005) ‘A few seemingly harmless routes to a false memory.’ Cognitive Process 6: 237-42.
Sturken, Marita (1999) ‘The Image as Memorial: Personal Photographs in Cultural Memory.’ In: Marianne Hirsch (ed) The Familial Gaze. Hanover: University Press of New England. 178-195.
Van House, Nancy, Marc Davis, and Morgan Ames (2005) ‘The uses of personal networked digital imaging: An empirical study of cameraphone photos and sharing.’ In: Conference on Human Factors in Computing Systems, 1853-6. New York: ACM.
Wade, Kimberley, Maryanne Garry, Don Read, and Stephen Lindsay (2002) ‘A Picture is worth a thousand lies: Using false photographs to create false childhood memories.’ Psychonomic Bulletin and Review 9 (3): 597-603.
Wright, Chris (2004) ‘Material and memory. Photography in the Western Solomon Islands.’ Journal of Material Culture 9 (1): 73-85.
Zaltman, Gerald (2003) How Customers Think. Essential Insights into the Mind of the Market. Boston: Harvard Business School Press.

NOTES

1 This article will deal with current socio-cultural practices of western photography.
Although the different uses of (digital) photography in various parts of the world would form an interesting matter of comparison, it is beyond the scope of this article. See, for instance, Wright (2004) and Chalfen and Murui (2001).
2 I prefer the term ‘personal photography’ over commonly used terms like ‘amateur photography’ or ‘family photography.’ The word personal is meant to distinguish it from professional photography, but also avoids the troubling connotation of ‘amateurish’ in relation to camera use. Family photography mistakenly presupposes the presence of a familial context, whereas photography has always been used, and is increasingly used, for personal identity formation. For an extensive discussion on ‘personal photography,’ see Holland and Wells (…)
3 Software engineers increasingly begin to realize that the design of picture management systems requires a profound understanding of why and how users interact with their pictures: the acts of storing pictures in a shoebox or sticking them into albums cannot simply be transposed onto digital platforms (Rodden and Wood, 2003).
4 There is a large number of research groups reporting on the issue of false memory as it is related to both narrative and visual evidence. For instance, see Intraub and Hoffman, 1992; and Lindsay, Hagen, Read, Wade and Garry, 2004.
5 Research by cognitive psychologists focusing particularly on the role of doctored photographs in relation to false memory is also widely available. See, for instance, Garry and Gerrie, 2005; Wade, Garry, Read and Lindsay, 2002.
6 Not surprisingly, these scientific insights are gratefully deployed in marketing and advertising departments, to advance sales by manipulating customers’ memories about their pasts and thus influence their future (buying) behaviour. What customers recall about prior product or shopping experiences will differ from their actual experiences if marketers refer to those past experiences in positive ways (Zaltman, 2003).
7 Indeed, without digital photo enhancement equipment, cognitive psychologists would have a hard time conducting their research on manipulated autobiographical memory in the first place; only with the help of computer paintbrush programs can they make doctored photographs look immaculate.
8 In recent years, there has been an explosion of theory on the semiotics and ontology of the digital image, but it is beyond the scope of this chapter to review the literature in this area. As a general introduction to the digitization of visual culture in general and photography in particular, one could consult Lister, 1995; Mules, 2000. A more philosophical introduction to the ontology of the image can be found in Rodowick, 2001.
9 See, for instance, the software offered by VisionQuest Images, at http://www.visionquestimages.com/index.htm (last checked April 8, 2006). A package called Picture Yourself Graphics (www.pygraphics.com) encourages playful collages and manipulation of personal pictures.
10 Asian-American student Chris Lin, for instance, admits in his photoblog that he likes to picture himself with brown hair; he also re-colors the faces of his friends’ images to see if it enhances their appearance. See Chris Lin’s photoblog at http://a.trendyname.org/archives/category/personal/ (last checked April 8, 2006). Nancy Burson, a New York based artist and a pioneer in morphing technology, attracted a lot of media attention with her design of a so-called Human Race Machine, a digital method that effortlessly morphs racial features and skin colors in pictures of people’s faces. For more details on the work of Nancy Burson, see http://www.nancyburson.com/human_fr.html (last checked April 8, 2006). Her Human Race Machine featured in many magazines and television programs in the spring of 2006, most notably in the Oprah Winfrey show.
11 The pictures were first made public in the press by journalist Seymour Hersh, who wrote the article ‘Torture at Abu Ghraib.
American soldiers brutalized Iraqis. How far up does the responsibility go?’ in The New Yorker, 4 May 2004, available at http://www.notinourname.net/war/torture-5may04.htm (last checked April 8, 2006).

Personal non-commercial use only. JGS copyright © 2017.
All rights reserved. DOI: 10.21608/jgs.2017.1114.1000

Original Article

Tubularized Incised Plate Urethroplasty for Primary Hypospadias Repair: Versatility versus Limitations

Amr A. AbouZeid1,2

1Department of Pediatric Surgery, Faculty of Medicine, Ain Shams University, Cairo, 2Benha Children Hospital, Benha, Egypt

ABSTRACT

Introduction: Several techniques have been described for the repair of hypospadias, but without any definitive privilege of one technique over the others. The choice of the repair remains largely dependent on the judgment and personal experience of the operating surgeon.

Aim of Study: The aim of this study was to identify the most commonly encountered complications following tubularized incised plate (TIP) urethroplasty and their rate of occurrence in relation to different forms of hypospadias.

Patients and Methods: This study was conducted on patients with different degrees of hypospadias (ranging from distal penile to scrotal hypospadias) who underwent primary TIP urethroplasty. All included cases were operated on by the same surgeon (the author) at two tertiary centers for pediatric surgery during the period 2007 through 2016. Data analysis was performed in a retrospective manner based on retrieved medical records, in addition to saved digital photography documenting the preoperative phenotypic severity of the hypospadiac phallus, operative steps, appearance at follow-up visits, postoperative investigations, and reoperations.

Results: We retrieved data of 193 patients with different degrees of hypospadias who underwent a primary TIP urethroplasty during the 10-year period of the study. All cases in the study completed their early postoperative follow-up (at 1 and 4 weeks). Approximately 40% of cases completed more than 1-year follow-up (mean: 2.3 years, median: 2 years). The rate of reoperation was 21.7% (20 cases for fistula, five recurrences of chordee, one meatal stenosis, and 16 skin refashioning and correction of penoscrotal transposition).
All cases who returned for reoperation underwent urethral calibration under anesthesia±cystoscopy.

Conclusion: TIP urethroplasty is a versatile technique that can be used for the repair of different degrees of hypospadias with a low rate of complications. The main limitation is the presence of considerable chordee (moderate or severe), when urethral plate preservation and dorsal penile plication might be a suboptimal way of management that is liable for recurrence of the ventral curvature.

Key Words: Chordee, outcome, proximal hypospadias, tubularized incised plate.

Received: 13 November 2017, Accepted: 10 January 2018

Corresponding Author: Amr A. AbouZeid, MD, Department of Pediatric Surgery, Faculty of Medicine, Ain Shams University, Tel.: +20 111 656 0566, E-mail: amrabdelhamid@hotmail.com.

ISSN: 2090-7265

INTRODUCTION

Surgical repair of hypospadias is a frequently discussed topic in pediatric urology[1]. Several techniques have been described for the repair of hypospadias, but without any definitive privilege of one technique over the others[2]. The choice of the repair remains largely dependent on the judgment and personal experience of the operating surgeon[3]. In 1994, Snodgrass made a breakthrough by introducing the ‘TIP’ procedure (TIP urethroplasty)[2, 4]. At first, the technique was used for the repair of distal hypospadias, but later its use has been extended to proximal forms as well[5]. The technique has rapidly gained widespread acceptance owing to its simplicity, reproducibility, low rate of complications, and better cosmetic results[3, 6]. Recent surveys have shown the TIP procedure to be universally the most frequent technique used for the repair of distal hypospadias (ranging from 59% up to 92%)[7–9]. However, its use in proximal hypospadias did not exceed 16% in most surveys[8,9], the main obstacle being the presence of ventral penile curvature (chordee)[6,7].
In this report, we tried to present the analysis of a carefully recorded 10-year experience of a single surgeon with a single technique (TIP urethroplasty) used for the primary repair of different degrees of hypospadias. Our aim was to identify the most commonly encountered complications following the TIP procedure and their rate of occurrence in relation to the different degrees of hypospadias; this would help to clarify indications and contraindications for such a common procedure.

PATIENTS AND METHODS

This study was conducted on patients with different degrees of hypospadias (ranging from distal penile to scrotal hypospadias) who underwent primary TIP urethroplasty. All included cases were operated on by the same surgeon (the author) at two tertiary centers for pediatric surgery during the period 2007 through 2016. We excluded ‘minor’ glanular forms (usually managed by a ‘MAGPI’ procedure) and those with severe chordee who required transection of their urethral plate. The study was conducted after internal review board approval. Data analysis was performed in a retrospective manner based on retrieved medical records that included preoperative examination sheets, operative details, and follow-up notes, in addition to saved digital photography that included photographs documenting the preoperative phenotypic severity of the hypospadiac phallus, operative steps, appearance at follow-up visits, postoperative investigations (cystoscopy and urethrograms), and reoperations.

Surgical technique

We start by degloving the penile skin. A circumferential subcoronal incision is made around the glans starting on the penile dorsum to stop at the level of the urethral plate on both sides. A U-shaped incision is made ventrally around the urethral plate and the meatus.
Then, we complete degloving of the penile skin, taking down all the ventral attachments to the urethral plate and corpus spongiosum to release any bands that may contribute to the ventral curvature or penile rotation (Fig. 1).

Fig. 1: Different cases of hypospadias with variable phenotypic severity (a–d), and their corresponding intraoperative appearance (e–h). Note the variable degrees of ventral penile curvature (lower row) that became disclosed after penile degloving.

We have never found a narrow urethral plate to be an obstacle for performing a primary TIP urethroplasty. A narrow plate is usually compensated by being thicker, allowing for a deeper midline incision to produce a considerable increase in its width (Fig. 2).

Fig. 2: Steps of tubularized incised plate urethroplasty in a 12-month-old boy with penoscrotal hypospadias. (a) Preoperative appearance. (b) Midline incision of the urethral plate. (c) Tubularization of the incised plate. (d) Splitting the dorsal prepuce into two halves (Byars flaps), with each serving a function: the subcutaneous dartos layer is dissected from the left half (white arrow) and utilized for covering the urethroplasty, while the skin of doubtful viability is excised (dashed line); the right half (black arrow) is spared to reconstruct deficient ventral penile skin. (e) After skin closure. (f) Follow-up after 2 weeks.

However, what we have learned in such a situation is to go more laterally with the two limbs of the U-shaped incision over the ventral penile skin to avoid separation of the urethral plate from the shaft when incised (Fig. 3).

Fig. 3: (a) A case of penoscrotal hypospadias with a narrow urethral plate (same case in Fig. 2). (b) The dashed line marks the U-shaped incision of tubularized incised plate urethroplasty.

After completion of this step, one must decide whether it will be possible to preserve the urethral plate to proceed with a TIP procedure[3].
In these circumstances, the ventral penile skin on both sides of the urethral plate is usually fine and non-hair-bearing, and can be safely included in the urethroplasty[7]. On the contrary, we have found the real obstacle to a TIP urethroplasty to be the presence of ventral penile curvature (Fig. 1).

The erection test is necessary to determine the degree of chordee and the point of maximum curvature for proper placement of dorsal plication sutures (when indicated). Applying a ‘rubber band’ tourniquet at the base of the penile shaft can facilitate performing the erection test and, later, can decrease bleeding during dissection of the glanular wings. However, sometimes it is difficult to apply the tourniquet because of the very proximal location of the meatus (scrotal or penoscrotal). During the first 5 years of the study period, we used to manage moderate chordee (≤60°) by dorsal plication of the tunica albuginea (either midline or Nesbit) and go on with a TIP urethroplasty. However, because of having cases with recurrent chordee and for fear of exaggeration of chordee as the child goes into puberty[3, 7, 10], we changed our practice; only mild chordee (<30°) remained amenable to urethral plate preservation and a TIP urethroplasty.

Adequate midline incision of the urethral plate is an essential step for a successful TIP urethroplasty, not only for increasing the width of the plate, but also for offering extra mobility to the two halves of the plate such that they can be easily rotated and closed in the midline without tension (hinging of the plate). Tubularization of the incised plate then starts opposite the mid-glans and proceeds in a proximal direction with continuous, full-thickness, single-layer (6-0) polyglactin suturing.

Fig. 4: Covering a long tubularized incised plate (TIP) urethroplasty by dartos flaps in a 15-month-old boy with penoscrotal hypospadias. (a) Preoperative appearance. (b) Right Byars flap supplying dartos (white arrow) for covering TIP urethroplasty, whereas the left Byars flap (black arrow) is spared for reconstructing ventral penile skin. (c) Another dartos flap is dissected from the scrotum (*) to cover proximal urethroplasty. (d) Note: the skin of the right flap (white arrow) has doubtful viability and will be excised (dashed line), in contrast to the reliable vascularity of the left Byars flap (black arrow) that will be rotated to cover the penile ventrum. (e, f) The patient presented 2 years later with urethrocutaneous fistula (dashed arrow). (g) Assessment of neourethra by endoscopy at time of repair of fistula (dashed arrow). (h) After excision of fistula (dashed arrow) and closure of urethra.

Covering the urethroplasty by a second ‘waterproofing’ layer is a common practice to decrease the rate of complications. We use dartos flaps for both distal and proximal repairs. With distal hypospadias, ventral skin deficiency is usually mild; a complete dorsal dartos flap is dissected and transferred ventrally via button-holing to cover the urethroplasty. In the presence of penile torsion, the dorsal dartos flap is rotated from the opposite side to help in correction of existing penile rotation[11]. On the contrary, proximal hypospadias is always associated with significant ventral skin deficiency. To decrease postoperative skin complications, we apply a modification by splitting the dorsal prepuce into two halves (Byars flaps): one half supplies dartos for covering the urethroplasty, and the other half is spared to reconstruct the ventral penile skin (Fig. 4b and d)[12]. For a long urethroplasty, another dartos flap is dissected from the nearby scrotum to cover the proximal urethroplasty (Fig. 4c and d).

Fig. 5: Recurrence of penile chordee after dorsal plication of the tunica albuginea and tubularized incised plate (TIP) urethroplasty.
Upper row (a–d) demonstrates the primary operation (TIP urethroplasty) for a case of scrotal hypospadias at the age of 8 months: (a) crooked prepuce; (b) scrotal meatus; (c, d) button-holing of the dorsal skin (preputial cape technique) for reconstructing the ventral penile skin while covering the suture line of the TIP urethroplasty. Lower row (e–h): same patient presenting 7 years later with persistence of chordee (e); (f, g) degloving and transection of the urethra at the level of the coronal sulcus to straighten the penis with recession of the urinary meatus backward to a penoscrotal position; (h) the glans is splayed open and grafted (white arrow), whereas skin flaps are used to cover the penile ventrum (black arrow), representing the first step of a staged repair.

Occasionally, a special configuration of the prepuce and dorsal penile skin (preputial cape, Fig. 5a) can be suitable for button-holing of the dorsal skin, which represents another way to cover the suture line of the urethroplasty and reconstruct the ventral skin (Fig. 5c and d)[13].

The glanuloplasty is the last, and perhaps the most delicate, step (especially challenging with a small glans). Glanular wings are carefully prepared with adequate thickness and mobility to be easily rotated and closed in the midline without tension over the urethra. Glanular closure is preferably achieved by 2–3 subcuticular (6-0) polyglactin stitches. Usually, we do not apply the 5- and 7-o’clock stitches used to fix the meatus to the tip of the glans. According to the size of the glans, a transurethral (6–8 Fr) Nelaton catheter is left in place and fixed to the glans stay suture to drain the urinary bladder for 7 days postoperatively. The phallus is dressed and sandwiched against the anterior abdominal wall. Unless soaked, the dressing is left to be removed with the catheter after 1 week. We believe the real value of the dressing is to apply some compression and immobilization for the penis in the early postoperative period, which can help to decrease edema and pain. Further follow-up after 1 month, 6 months, and then yearly till puberty is recommended (especially with proximal hypospadias).

RESULTS

We retrieved data of 193 patients with different degrees of hypospadias who underwent a primary TIP urethroplasty by the author during the 10-year period of the study. Almost all cases were uncircumcised (except two cases with distal hypospadias). All cases in the study completed their early postoperative follow-up (at 1 and 4 weeks). Approximately 40% of cases completed more than 1-year follow-up (ranging from 1–7 years; mean=2.3 years; SD=1.3). The rate of reoperation was 21.7% (20 cases for fistula, five recurrences of chordee, one meatal stenosis, and 16 skin refashioning and correction of penoscrotal transposition). All cases who returned for reoperation underwent urethral calibration under anesthesia±cystoscopy. For the rest of the cases, urethral calibration was not routine in the follow-up unless there were obstructive symptoms; assessments of postvoiding residual urine volume by ultrasound, urethrograms, and flowmetry were ordered in selected cases. Table 1 summarizes the encountered complications in relation to the different degrees of hypospadias included in the study.
Table 1: Rate of postoperative complications following TIP repair among cases with different degrees of hypospadias

Type | No. | Fistula | Glanular dehiscence | Chordee | Meatal stenosis | Urethral stricture | Penile concealment
Distal penile | 82 | 8 cases (9.8%) | -- | -- | -- | -- | 2 cases (2.4%)
Mid penile | 39 | 4 cases (10.3%) | 4 cases | -- | -- | -- | 2 cases (5%)
Peno-scrotal | 58 | 8 cases (13.8%) | 3 cases | 5 cases | 3 cases | 1 case | 12 cases (12%)
Scrotal | 14 | -- | 1 case | 1 case | -- | -- | 9 cases (64%)
Total | 193 | 20 cases (10.4%) | 8 cases (4.1%) | 6 cases (3.1%) | 3 cases (1.6%) | 1 case (0.5%) | 25 cases (13%)

Urethrocutaneous fistula

Overall, 20 (10.4%) cases presented with urethrocutaneous fistula following TIP urethroplasty (Fig. 4). The fistula rate was relatively higher following the repair of penoscrotal hypospadias (13.8%). The site of fistula was variable. A very distal submeatal glanular fistula was sometimes encountered following the repair of distal and mid-penile hypospadias, which was managed by cutting the skin bridge between the neomeatus and the fistula creating a single opening. More commonly, the fistula was more proximal in position (coronal or penile shaft, Fig. 4e), which was managed by excision of the fistulous tract and urethral closure supported by local dartos flap coverage with a success rate of 100%. At the beginning of the operation, urethral calibration±cystoscopy was important to exclude urethral strictures (Fig. 4g).

Persistence of chordee

We had six cases of persistent chordee following TIP repair of proximal hypospadias (Fig. 5). Parents were advised to notice their children in the morning (for morning erections) and during micturition (some took photographs of the erected penis). Two cases were managed by repeating dorsal plication of the tunica albuginea; another three underwent transection of the neourethra and skin/buccal mucosal grafting of the penile ventrum (as a first step of a staged repair, Fig. 5); whereas the last case is still being followed-up.
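The counts reported in the Results and in Table 1 can be cross-checked with a few lines of arithmetic. The sketch below (plain Python; the counts are copied from the paper, the variable names are our own) recomputes the headline percentages from the raw case numbers:

```python
# Cross-check of the percentages reported in the Results and Table 1.
# Raw counts are taken from the paper; only the percentages are recomputed.

cases = {"distal penile": 82, "mid penile": 39, "peno-scrotal": 58, "scrotal": 14}
total = sum(cases.values())  # should equal the 193 patients in the series

def pct(part: int, whole: int) -> float:
    """Percentage rounded to one decimal place."""
    return round(100 * part / whole, 1)

fistula_pct = pct(20, total)      # 20 fistulas overall -> 10.4%
concealment_pct = pct(25, total)  # 25 cases of penile concealment -> 13%
chordee_pct = pct(6, total)       # 6 cases of persistent chordee -> 3.1%

# Reoperations listed in the Results:
# fistula + recurrent chordee + meatal stenosis + skin refashioning
reoperations = 20 + 5 + 1 + 16    # = 42 of 193, i.e. about 21.7%
```

Running these lines reproduces the reported figures, which confirms that the per-group counts, the totals, and the quoted reoperation rate are mutually consistent.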
Meatal stenosis and urethral strictures
One case presented with recurrent attacks of epididymitis during the follow-up (Fig. 6). A urethrogram was ordered, which showed a urethral stricture (at the original site of the hypospadiac meatus), in addition to narrowing of the neomeatus (Fig. 6c). This case responded well to dilatation, in addition to refashioning of excess penile skin that was thought to be a possible source of ascending infection (Fig. 6b and d). Another two cases presented with a narrow urinary stream; examination revealed narrowing of the neomeatus. One responded to dilatation, whereas the other was managed by ventral meatotomy.
Fig. 6: (a) A 13-month-old boy with penoscrotal hypospadias (black arrow) underwent tubularized incised plate urethroplasty. (b) The patient presented 2 years later with recurrent attacks of epididymitis and penile concealment. (c) A voiding urethrogram was ordered, showing urethral stricture at the original site of the hypospadiac meatus (white arrow); note the prostatic utricle (*) filled with contrast. (d) The patient was managed by urethral dilatation and refashioning of excess penile skin.
Buried penis
This was a common complaint following the repair of scrotal hypospadias (64%), owing to the small size of the phallus and the presence of penoscrotal transposition. The degree of penile concealment varied from partial (Fig. 7) to complete concealment (Fig. 8). Redo surgery was needed in a group of these patients to remove excess ventral skin, create a penoscrotal angle on the ventral side, and correct the penoscrotal transposition.
Fig. 7: (a) A 7-month-old boy with scrotal hypospadias underwent tubularized incised plate urethroplasty. (b, c) Follow-up 1 year later showing partial concealment. Note the vertical neomeatus.
Fig.
8: Upper row (a–c): A 7-month-old boy with scrotal hypospadias (small-sized phallus) underwent tubularized incised plate urethroplasty with modified Byars flaps to cover the urethroplasty and reconstruct the ventral skin. Lower row (d–f): 2 years later, the same patient presented with penile concealment (buried penis), which was managed by skin refashioning (removal of excess ventral skin).
DISCUSSION
Since its introduction in 1994, TIP urethroplasty has rapidly become the most popular technique for the repair of distal hypospadias worldwide[7]. However, applying the technique in proximal hypospadias has not gained similar popularity. The main obstacle in proximal hypospadias is the short urethral plate when it must be transected to straighten the curved penis[6]. Baskin advocated preservation of the urethral plate, arguing that it is rarely the cause of penile curvature[14,15]. Dorsal penile plication and urethral plate elevation (mobilization) are two maneuvers that have been described to correct chordee while preserving the urethral plate for urethroplasty[6]; however, recurrence of chordee remains a major concern that seems to be under-reported in the literature[2,16,17]. Moreover, exaggeration of penile chordee may be expected as the child goes through puberty[3,7,10]. Therefore, the threshold for sacrificing the urethral plate has been lowered in most recent reports[2,18]; only mild ventral curvature (<30°) would be suitable for techniques that preserve the urethral plate[3]. Similarly, during the first half of our study period, the rate of TIP urethroplasty (urethral plate preservation) in penoscrotal and scrotal hypospadias was relatively high (80%), dropping to 45% in the second half of the study. Stricture of the neourethra is another major concern following TIP urethroplasty, especially with proximal hypospadias[7].
In proximal hypospadias, the plate is usually narrower and the urethroplasty longer, making it more prone to complications. Reports have shown TIP urethroplasty to be associated with a lower Qmax (in uroflowmetry studies) compared with other techniques[3]; however, the difference was nonsignificant and transient (improving on long-term follow-up)[19]. It is of utmost importance to make sure that patients with repaired hypospadias do not have functional urinary problems, as they usually do not have these problems before surgical repair[20]. In our practice, urethral calibration was not routine in the follow-up unless there were obstructive symptoms. Observing the child during micturition and measuring residual urine volume by ultrasound were very helpful and reassuring for both parents and doctors; urethrograms, cystoscopy, and flowmetry can be of value in selected cases. Cases that return for reoperation (fistula, chordee, skin refashioning) represent an excellent opportunity to evaluate the neourethra under anesthesia (endoscopy/calibration)[10]; this group represented 21% of our cases. In our practice, we report a very low incidence of urethral strictures (0.5%) following TIP urethroplasty (only one case, which showed a good response to dilatation). We had another two cases of meatal stenosis, one of which required ventral meatotomy. Urethrocutaneous fistula is a well-known complication following hypospadias surgery. In the absence of strictures, the management is usually straightforward; the main drawback is the need for a reoperation. The incidence of urethrocutaneous fistula has been significantly reduced since the spread of the concept of covering the urethroplasty with a second 'water-proofing' layer[21].
With distal hypospadias, covering the urethroplasty with a dorsal dartos flap is most popular, whereas with proximal hypospadias, the tunica vaginalis flap is an attractive alternative, having the advantage of sparing the intact vascularity of the dorsal skin for coverage of the ventral shaft[22,23]. We have applied some modifications to facilitate covering the urethroplasty in proximal hypospadias with dartos flaps as well[12,13]. With these modifications, we managed to minimize skin complications and avoid unnecessary dissection around the testis and spermatic cord for preparing the tunica vaginalis flap. Other factors have been discussed in the literature concerning the suture material, suturing technique, types of needles, urinary diversion, and ways of dressing[18]. Although these factors may still play a role, their effect on the outcome seems limited. We used 6-0 polyglactin, single-layer, full-thickness (through and through), continuous suturing for the urethroplasty; we drained the bladder with a transurethral Nelaton catheter for 7 days; and we splinted the phallus in a dressing against the abdominal wall. Our results are more or less comparable with other reports in the literature regarding the rate of complications following TIP repair for both distal and proximal forms of hypospadias[2,5,6,8]. In this case series, the high incidence of penile concealment following TIP repair for scrotal hypospadias seems to be a peculiar but expected finding. This may be related to applying some skin flap modifications resulting in exaggeration of ventral skin redundancy, in addition to other inherent factors (small size of the phallus and penoscrotal transposition). Some authors recommend always shifting to a staged repair rather than a TIP urethroplasty in cases with a small hypospadiac phallus[3].
We believe the existence of considerable chordee with hypospadias is a clear indication for a staged repair (or other alternative techniques) to increase the length of an already shortened urethra[3]. However, for a small phallus without considerable chordee, the justification for sacrificing the urethral plate is not that clear. Unless indicated, adding extra length to the urethra is not free of possible adverse effects: increased resistance to urine flow, and liability to strictures and diverticula[7,16]. Long-term follow-up of these children as they go through puberty would be of utmost importance to judge the 'final' outcome and recognize the most suitable technique[7]. The study has the usual limitations of retrospective studies. We did not apply any of the described scoring systems to assess the phenotypic severity of hypospadias; instead, the author carefully documented his practice over the years of the study using digital photography. This was found to be very useful, especially when correlating the outcome with saved photographs of the primary operation[24]. This study represents a single-surgeon experience, which may be beneficial in one respect, namely unification of the surgical technique. Meanwhile, evaluation of the outcome was carried out by the operating surgeon, so bias cannot be completely excluded. However, TIP urethroplasty is a well-established surgical procedure known for its simplicity and reproducibility[6]. Our main aim was to identify the rate of complications of such a common procedure over a reasonable period of follow-up. Moreover, we tried to clarify the main limitations of the technique in repairing the more severe degrees of hypospadias, where the indication for a TIP urethroplasty remains somewhat controversial[2,3]. CONCLUSION TIP urethroplasty is a versatile technique that can be used for the repair of different degrees of hypospadias with a low rate of complications.
The main limitation is the presence of considerable chordee (moderate or severe), when urethral plate preservation and dorsal penile plication may be a suboptimal way of management that is liable to recurrence of the ventral curvature. CONFLICT OF INTEREST There are no conflicts of interest. REFERENCES 1. Winship BB, Rushton HG, Pohl HG. In pursuit of the perfect penis: hypospadias repair outcomes. J Pediatr Urol 2017; 13: 285–288. 2. Pippi Salle JL, Sayed S, Salle A, Bagli D, Farhat W, Koyle M, Lorenzo AJ. Proximal hypospadias: a persistent challenge. Single institution outcome analysis of three surgical techniques over a 10-year period. J Pediatr Urol 2016; 12: 28.e1–28.e7. 3. Castagnetti M, El-Ghoneimi A. Surgical management of primary severe hypospadias in children: systematic 20-year review. J Urol 2010; 184: 1469–1475. 4. Snodgrass W. Tubularized incised plate urethroplasty for distal hypospadias. J Urol 1994; 151: 464–465. 5. Snodgrass W, Yucel S. Tubularized incised plate for mid shaft and proximal hypospadias repair. J Urol 2007; 177: 698–702. 6. Bhat A. Extended urethral mobilization in incised plate urethroplasty for severe hypospadias: a variation in technique to improve chordee correction. J Urol 2007; 187: 1031–1035. 7. Gong EM, Cheng EY. Current challenges with proximal hypospadias: we have a long way to go. J Pediatr Urol 2017; 13: 457–467. 8. Steven L, Cherian A, Yankovic F, Mathur A, Kulkarni M, Cuckow P. Current practice in paediatric hypospadias surgery: a specialist survey. J Pediatr Urol 2013; 9: 1126–1130. 9. Springer A, Krois W, Horcher E. Trends in hypospadias surgery: results of a worldwide survey. Eur Urol 2011; 60: 1184–1189. 10. Long CJ, Chu DI, Tenney RW, Morris AR, Weiss DA, Shukla AR. Intermediate-term followup of proximal hypospadias repair reveals high complication rate. J Urol 2017; 197: 852–858. 11. AbouZeid A, Soliman H. Penile torsion: an overlooked anomaly with distal hypospadias. Ann Pediatr Surg 2010; 6: 93–97. 12.
AbouZeid A. Modified Byars' flaps for securing skin closure in proximal and mid-penile hypospadias. Ther Adv Urol 2011; 3: 251–256. 13. AbouZeid A, Safoury HS. The preputial cape: a distinct and favorable morphological variant in hypospadias. J Plastic Reconstr Aesthet Surg 2011; 64: e270–e272. 14. de Mattos e Silva E, Gorduza DB, Catti M, Valmalle AF, Demede D, Hameury F, et al. Outcome of severe hypospadias repair using three different techniques. J Pediatr Urol 2009; 5: 205–211. 15. Hayashi Y, Kojima Y, Nakane A, Maruyama T, Tozawa K, Kohri K. A strategy for repairing moderately severe hypospadias using onlay urethroplasty versus onlay-tube-onlay urethroplasty. Urology 2003; 61: 1019–1022. 16. Long CJ, Canning DA. Hypospadias: are we as good as we think when we correct proximal hypospadias? J Pediatr Urol 2016; 12: 196.e1–196.e5. 17. Braga LH, Lorenzo AJ, Bagli DJ, Dave S, Eeg K, Farhat WA, et al. Ventral penile lengthening versus dorsal plication for severe ventral curvature in children with proximal hypospadias. J Urol 2008; 180: 1743–1747. 18. Snodgrass W, Bush N. TIP hypospadias repair: a pediatric urology indicator operation. J Pediatr Urol 2016; 12: 11–18. 19. Hueber P, Diaz MS, Chaussy Y, Franc-Guimond J, Barrieras D, Houle A. Long-term functional outcomes after penoscrotal hypospadias repair: a retrospective comparative study of proximal TIP, Onlay, and Duckett. J Pediatr Urol 2016; 12: 168.e1–168.e6. 20. Harper L. Editorial comment. J Urol 2015; 193: 981–982 (discussion 982). 21. Retik AB, Mandell J, Bauer SB, Atala A. Meatal based hypospadias repair with the use of a dorsal subcutaneous flap to prevent urethrocutaneous fistula. J Urol 1994; 152: 1229–1231. 22. Snow BW. Use of tunica vaginalis to prevent fistulas in hypospadias surgery. J Urol 1986; 136: 861–863. 23. Snodgrass W, Bush N. Tubularized incised plate proximal hypospadias repair: continued evolution and extended applications. J Pediatr Urol 2011; 7: 2–9. 24. Baskin L.
Editorial comment. J Urol 2010; 184: 1474–1475.
work_zcggfux53bd73i7kjy4epe2vvi ---- Institute of Computer Graphics | JKU Linz
Welcome to the Institute of Computer Graphics. Research in visual computing has advanced considerably in recent years. Alongside classical topics such as image processing, visualization, and rendering, interactive graphical interfaces such as augmented and virtual reality have gained importance, and new fields such as computational imaging have emerged. Not only the rapid development of graphics hardware, but also strong overlaps with interdisciplinary fields such as image processing, applied optics, visual perception, and human-computer interaction have made visual computing what it is today: an exciting, interdisciplinary field of research with applications in almost every aspect of modern life. Our research in visual computing addresses problems from various disciplines, for example computational optics and visual data analysis for applications in medicine and biology. In teaching, we offer advanced courses in computer graphics, image processing, and visualization.
Address: Johannes Kepler Universität Linz, Institute of Computer Graphics, Altenberger Straße 69, 4040 Linz. Location: Science Park 3, 3rd
floor, Room 0303. Phone: +43 732 2468 6630. Email: cg@jku.at. To the CG Homepage.
Johannes Kepler Universität, Altenberger Straße 69, 4040 Linz, Austria. T: +43 732 2468 0, F: +43 732 2468 4929.
work_zdufjdpozzempooaitin256epy ----
278 Folia Neuropathologica 2008; 46/4
The cycloid and skeletonization methods for morphometric analysis of fetal brain vessels
Tomasz Stępień1, Bogusław Obara2,3
1 Department of Neuropathology, Institute of Psychiatry and Neurology, Warsaw, Poland; 2 Strata Mechanics Research Institute, Polish Academy of Sciences, Krakow, Poland; 3 Center for BioImage Informatics, Department of Electrical and Computer Engineering, University of California, Santa Barbara, USA
Folia Neuropathol 2008; 46 (4): 278-285
Abstract
In our study, we examined 54 images from 9 fetal brains from the 11th to 22nd gestation week (GW).
We measured the length density (LD) of vessels (µm/µm²) in the cortical grey matter (CGM) and in the cortical white matter (CWM). The aim of this work was to find a method which could be applied to measure the length density of vessels on two-dimensional (2D) sections. The first method (the cycloid method) was based on a cycloid test grid implemented in Stereo Investigator software (MicroBrightField). The length in 2D can be estimated on the basis of the number of intersections between a line-probe and the linear objects of interest. In the study, we used a line-probe with systematically spaced sine-weighted curves (cycloids) of known length; in this case, the cycloids were 53.1 µm long. The counting grid was constructed from sine-weighted lines (cycloids), which were used for estimation of the length density of vessels. The second method (skeletonization) was based on mathematical morphology operations and colour system transformation. The "binary airway tree" formed by the image segmentation step was skeletonized to identify two- or three-dimensional centrelines of individual branches, and to determine the branch point locations. The idea was to utilize a skeletonization algorithm based on properties of the average outward flux of the gradient vector field of a Euclidean distance function from the boundary of the structure. Both of these methods (cycloid and skeletonization) can be applied to measure the length density of vessels on two-dimensional (2D) sections. These morphometric methods allowed us to measure the length density of vessels in the cortical grey matter and the cortical white matter during fetal development. The cycloid method can be applied to measure an approximate length density of vessels, whereas skeletonization should be applied to measure the length density of vessels in the cortical grey matter and the cortical white matter more precisely.
Key words: image analysis, skeletonization, cycloid method, human fetal brain, vascular length density, CD34.
Communicating author: Tomasz Stępień, Department of Neuropathology, Institute of Psychiatry and Neurology, Sobieskiego 9, 02-957 Warsaw, Poland, tel. +48 22 458 27 86, Email: tstepien@ipin.edu.pl, obara@ece.ucsb.edu
Introduction
What is the objective of generating hundreds of digits resulting from microscopic image analysis? It turned out that the mere terms "large, medium, short" are insufficient in scientific discourse, which calls for unification of parameter description. Therefore, geometric features began to be used to describe the surface of objects in mathematical terms. This brought modern morphometry to life. However, it still did not answer the question of relations between objects in space. What was needed was a method of transposing geometric parameters from surface to space. This issue was first addressed by Count Buffon in the late 18th century. He made an experiment on needles scattered on the floor, suggesting that the number of intersections would be directly proportional to the length of the needle, and inversely proportional to the distance between the lines on the floor [5]. Buffon's problem provided the basis for length estimation in modern morphometry, allowing for quantitative description of a set of solids with measurements (e.g. of size or length) or counting based on 2D cross-sections of the solids. In our study, we examined the length density of vessels in the fetal brain cortical grey matter (CGM) and cortical white matter (CWM) between GW 11 and GW 22, using two methods. The first was skeletonization, consisting of measuring objects on the surface, and the second was the cycloid method, based on Buffon's experiment.
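Buffon's relation (for a needle no longer than the line spacing, the crossing probability is P = 2L/(πd), hence proportional to needle length) can be checked numerically. A minimal Monte Carlo sketch in plain Python, purely illustrative and not part of the paper's pipeline:

```python
import math
import random

def buffon_crossing_rate(needle_len, line_spacing, trials=200_000, seed=42):
    """Estimate the probability that a randomly dropped needle
    (needle_len <= line_spacing) crosses one of a set of parallel lines.
    Buffon's result predicts P = 2 * L / (pi * d)."""
    rng = random.Random(seed)
    crossings = 0
    for _ in range(trials):
        # distance from needle centre to the nearest line, and needle angle
        y = rng.uniform(0, line_spacing / 2)
        theta = rng.uniform(0, math.pi / 2)
        if y <= (needle_len / 2) * math.sin(theta):
            crossings += 1
    return crossings / trials

# the crossing rate scales linearly with needle length, as Buffon argued
p1 = buffon_crossing_rate(needle_len=0.5, line_spacing=1.0)  # ~ 1/pi
p2 = buffon_crossing_rate(needle_len=1.0, line_spacing=1.0)  # ~ 2/pi
```

Doubling the needle length doubles the crossing rate, which is exactly the proportionality that length estimation by intersection counting relies on.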
The objects of our interest were blood vessels and their development. Normal vascular development in the brain plays an important role in the appropriate proliferation, migration and maturation of neurons and glial cells, in synaptogenesis, and in building connections between various structures [9,11,19]. The development of the primitive vascular network in the brain involves two different mechanisms: vasculogenesis and angiogenesis. Vasculogenesis forms blood vessels from endothelial cells (ECs) differentiating from mesenchymal precursors (angioblasts/haemangioblasts). Angiogenesis is a process whereby vessels sprout and branch from pre-existing ECs and vessels [3,21,24,25]. Pathological vasculogenesis and/or angiogenesis during early prenatal development underlies the pathophysiology of many neurodevelopmental disorders [14,15,18]. The alteration of capillary length density in the fetal brain may tell us more about factors activated during development. In our study, we tried to establish which method proves useful to measure the length density of vessels on two-dimensional (2D) sections.
Material and Methods
Brain specimens
The cortical grey matter (CGM) and the cortical white matter (CWM) of the frontal lobe from a group of 9 fetuses without neuropathological abnormalities were studied. Fetal brains ranging from gestation week (GW) 11 to 22 were fixed in 4% paraformaldehyde in 0.1 M phosphate buffered saline (PBS), pH 7.4. The fixed samples were then embedded in paraffin and cut serially at 8 µm in frontal sections, followed by routine staining of brain tissue slices with haematoxylin-eosin (H&E) and immunohistochemical reaction with antibody CD34 (Novocastra 1:25). The gestational age in each case was calculated from the date of the last menstrual period.
Image analysis
Image analysis and processing techniques are successfully used in medical image analysis [6,7]. One of the most important parts of image analysis is skeletonization.
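Skeletonization reduces a binary vessel mask to one-pixel-wide centrelines, which are then pruned of short spurious branches before the length is measured. A minimal numpy sketch of the naive pruning variant (iteratively deleting end points, i.e. pixels with exactly one 8-neighbour) on a toy skeleton; this is an illustration only, not the authors' implementation, which instead keeps the longest branch per node using Euclidean distances:

```python
import numpy as np

def neighbor_count(skel):
    """8-neighbour count for every pixel of a binary skeleton."""
    padded = np.pad(skel.astype(int), 1)
    count = np.zeros(skel.shape, dtype=int)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue
            count += padded[1 + dy:1 + dy + skel.shape[0],
                            1 + dx:1 + dx + skel.shape[1]]
    return count

def prune_spurs(skel, max_iter=5):
    """Iteratively delete end points (exactly one neighbour) to remove
    short spurs; note that this also erodes the ends of the main path."""
    skel = skel.copy().astype(bool)
    for _ in range(max_iter):
        endpoints = skel & (neighbor_count(skel) == 1)
        if not endpoints.any():
            break
        skel &= ~endpoints
    return skel

# toy skeleton: a horizontal main path with a short vertical spur
skel = np.zeros((9, 15), dtype=bool)
skel[5, 2:13] = True   # main path, 11 pixels
skel[3, 7] = True      # spur tip
skel[4, 7] = True      # spur base
pruned = prune_spurs(skel, max_iter=2)
```

Running this removes the spur tip but also shortens the main path by one pixel per iteration at each end, which is exactly the drawback the distance-based pruning described in the paper avoids.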
The binary airway tree formed by the image segmentation step is skeletonized to identify the two- or three-dimensional centrelines of individual branches and to determine the branch point locations. Skeletonization of three-dimensional tubular structures is reported by Palágyi et al. [17]. Palágyi's method specifically targets skeletonization of vascular and airway tree structures in medical images. Bouix et al. [4] presented a fast, robust and automatic method for computing centreline paths through tubular structures, to be applied in virtual endoscopy. Bouix's idea is to utilize a skeletonization algorithm based on the properties of the average outward flux of the gradient vector field of a Euclidean distance function from the boundary of the structure. Soltanian-Zadeh et al. [20] proposed an image processing approach for information extraction from images of the vascular structure. Soltanian-Zadeh's method allows extraction of information such as skeleton length and diameter from real confocal microscopic images of the vessels in rat brains. Quantitative information revealed in angiograms, for example vessel length, diameter, course, and curvature, is essential and can be made accessible by computer with image analysis methods, particularly skeletonization [16]. The thinning process is based on the Hit or Miss transform, which is a combination of erosion and dilation operators that allows a foreground pixel to be matched according to a predefined structuring element. Removing branches of the skeleton tree can be computed by iteratively detecting and removing the end points of the tree, until there are only two of them left. This method successfully removes short spurs from the skeleton, but it can also discard end points of the skeleton main path, especially when branches are long. The main path is required to maintain its full length, i.e.
to connect two pixel positions on the boundary of the original shape. To compute it, an algorithm based on the Euclidean distance between the end points of the skeleton and its nodes is used (Fig. 10). For each node (white colour), the distance between the node and each end point (black colour) is calculated. The end point with the minimum distance is then removed. With this technique, only the longest of all the branches starting from a given node are kept. This process is repeated iteratively until no node remains, i.e. only two end points are left. The result of the whole process is shown in Fig. 10D.
The dilation of a set $I$ by a structuring element $B$ is denoted $\delta_B(I)$ and is defined as the locus of points $z$ such that $B$ hits $I$ when its origin coincides with $z$:

$\delta_B(I) = \{ z : B_z \cap I \neq \emptyset \}$ (1)

Erosion is defined as:

$\varepsilon_B(I) = \{ z : B_z \subseteq I \}$ (2)

Based on erosion and dilation, we define opening and closing, which form the basis of morphological filtering. The opening of an image $I$ by a structuring element $B$ is defined as erosion of $I$ followed by dilation with $B$:

$\gamma_B(I) = \delta_B(\varepsilon_{B^T}(I))$ (3)

Closing is defined as:

$\varphi_B(I) = \varepsilon_B(\delta_{B^T}(I))$ (4)

Opening by reconstruction is defined as:

$\gamma_B^{rec}(I) = \rho(I, G)$, with $G = \varepsilon_B(I)$ (5)

where the image $I$ is reconstructed from the marker function $G$ by an infinite number of recursive iterations (iterations until stability) of the dilation of $G$ conditioned by $I$. Closing by reconstruction is defined as:

$\varphi_B^{rec}(I) = (\gamma_B^{rec}(I^C))^C$ (6)

where $I^C$ is the complement of $I$. The top-hat transform is defined as:

$T(I) = I - \gamma(I)$ (7)

Fig. 1. Surface density of vessels. A. Cortical grey matter; B. Cortical white matter. Immunohistochemical reaction with antibody CD34, ×200.
Fig. 2. Representation of the input colour image (A) in YIQ colour space: B) Y, C) I, and D) Q components.
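For grey-scale images with a flat structuring element, erosion and dilation reduce to local minimum and maximum filters, so opening (eq. 3) and the white top-hat (eq. 7) follow directly. A minimal numpy sketch with a symmetric 3×3 flat structuring element (for which $B^T = B$); this is an illustration of the operators, not the Aphelion implementation used by the authors:

```python
import numpy as np

def erode(img, size=3):
    """Grey-scale erosion: local minimum over a size x size window (eq. 2)."""
    pad = size // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.full(img.shape, np.inf)
    for dy in range(size):
        for dx in range(size):
            out = np.minimum(out, padded[dy:dy + img.shape[0],
                                         dx:dx + img.shape[1]])
    return out

def dilate(img, size=3):
    """Grey-scale dilation: local maximum over the window (eq. 1)."""
    pad = size // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.full(img.shape, -np.inf)
    for dy in range(size):
        for dx in range(size):
            out = np.maximum(out, padded[dy:dy + img.shape[0],
                                         dx:dx + img.shape[1]])
    return out

def opening(img, size=3):
    """Opening = erosion followed by dilation (eq. 3, flat symmetric SE)."""
    return dilate(erode(img, size), size)

def top_hat(img, size=3):
    """White top-hat = image minus its opening (eq. 7): keeps bright
    structures thinner than the structuring element, e.g. vessels."""
    return img - opening(img, size)

# a 1-pixel-wide bright "vessel" survives the top-hat intact,
# because the opening removes it entirely
thin = np.zeros((9, 9))
thin[4, :] = 1.0
response = top_hat(thin)
```

Broad bright regions wider than the structuring element are reproduced by the opening and therefore cancel out in the top-hat, which is why the filter isolates thin vessels before thresholding.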
2): A) top-hat, B) thresholding and C) result A B C Fig. 4. Results of length estimation of fi gure 2A image: A) skeleton and B) overlay on fi gure 3C A B Fig. 5. A. Section with immunohistochemical reac- tion with antibody CD34 with haematoxylin con- trast stain, B. surface with cycloid grid (×200) Fig. 6. Sine-weighted curves, cycloids 2π 2 Length Width 1 282 Folia Neuropathologica 2008; 46/4 Tomasz Stêpieñ, Boguslaw Obara In the paper, the author recapitulates the results of research of development and application of an image analysis tool for automatic segmentation and length estimation of 54 microscopic images of fetal brain vessels. The image analysis and processing algorithms were applied to estimate the length of vessels of 54 microscopic images of fetal brain vessels. A sample of input image was used to present the main idea of the procedure. The RGB to YIQ colour system trans- formation was used as a pre-processing procedure for analysed images, where the YIQ representation of the image from Figure 2A is presented on Figures 2B, 2C and 2D. Fig. 9. Test square Fig. 7. Length density of vessels by cycloid me- thod age (GW) 0,020 0,018 0,016 0,014 0,012 0,010 0,008 0,006 0,004 0,002 0 le n gh t d en si ty ( μ /μ m 2 ) 11 12 14 16 17 18 19 21 22 cortical grey matter cortical white matter Fig. 8. Length density of vessels by skeletoniza- tion method age (GW) 0,020 0,018 0,016 0,014 0,012 0,010 0,008 0,006 0,004 0,002 0 le n gh t d en si ty ( μ /μ m 2 ) 11 12 14 16 17 18 19 21 22 cortical grey matter cortical white matter Fig. 10. Pruning method iterations: A) input ske- leton, B) detected end points and nodes, C) fi rst step of pruning, D) fi nal step of pruning (...) A) B) C) D) The main steps of image segmentation procedure are shown in Figure 3. The image segmentation pro- cedure was based on mathematical morphology. At fi rst, the analyzed image was fi ltered by a morpholo- gical top-hat fi lter (Fig. 
3A), then the image was thresholded; the result is shown in Figures 3B and 3C. Segmented images of fetal brain vessels were used in the length estimation. The length analysis algorithm was based on a technique proposed by Fuller et al. [10], who presented techniques developed for automatic detection of filaments on Meudon Hα spectroheliograms and, by extension, on any full-disk Hα Sun observations. The filaments were then segmented with a region growing method, which efficiently reflects the full extent of these dark areas. The filaments were finally described by means of their pruned skeleton. The length estimation of fetal brain vessels from Figure 2A, based on Fuller's method, is shown in Fig. 4. Colour system transformation (C++ – authors' implementation) and image analysis algorithms (Aphelion TM – C++ library) [1] were developed in Aphelion ADCIS software and used for the segmentation of the microscopic images of fetal brain vessels. Cycloid method Morphometry is a body of mathematical methods relating "geometrical parameters" (such as volume and surface area) of spatial objects to lower dimensional measurements obtainable on a section of the structure [2,23]. Morphometric methods enable one to estimate some parameters of anisotropic objects on a section. The length in 2D can be estimated on the basis of various intersections between a line-probe and the linear objects of interest [5]. In our study, we used the stereological method with systematically spaced sine-weighted curves (cycloids) of known length, in our case 53.1 μm. The counting grid constructed of sine-weighted lines (cycloids) can be used with the vertical section for estimation of lengths and surface areas (Fig. 6). The cycloid test system lines on vertical sections have an isotropic orientation distribution within the three-dimensional structure [2].
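As a self-contained illustration of the morphological operators in equations (1), (2), (3) and (7), here is a NumPy sketch for binary images. This is not the Aphelion implementation used in the study; the 3×3 structuring element and the test image are arbitrary choices for demonstration.

```python
import numpy as np

def dilate(I, B):
    # delta_B(I) = {z : B_z hits I} -- eq. (1); binary images as boolean arrays
    out = np.zeros_like(I)
    for dy, dx in np.argwhere(B) - np.array(B.shape) // 2:
        out |= np.roll(np.roll(I, dy, axis=0), dx, axis=1)
    return out

def erode(I, B):
    # eps_B(I) = {z : B_z is contained in I} -- eq. (2)
    out = np.ones_like(I)
    for dy, dx in np.argwhere(B) - np.array(B.shape) // 2:
        out &= np.roll(np.roll(I, dy, axis=0), dx, axis=1)
    return out

def opening(I, B):
    # gamma_B(I): erosion followed by dilation -- eq. (3); B here is symmetric, so BT = B
    return dilate(erode(I, B), B)

def tophat(I, B):
    # T(I) = I - gamma_B(I) -- eq. (7); for binary images this is a set difference
    return I & ~opening(I, B)

# A 3x3 blob survives opening with a 3x3 element; an isolated pixel does not,
# so the top-hat keeps exactly the structure smaller than the element.
I = np.zeros((9, 9), dtype=bool)
I[2:5, 2:5] = True   # vessel-like blob
I[7, 7] = True       # speckle
B = np.ones((3, 3), dtype=bool)
```

On greyscale images, as in the segmentation above, the same top-hat definition applies with greyscale erosion and dilation (minimum and maximum over the structuring element).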
The curve has parametric coordinates (θ – sin θ, 1 – cos θ), where θ is the angle between a test line segment and the vertical axis. All sections were collected using a 20× magnifying objective. The surface density of vessels (Fig. 5B) is estimated on the basis of intersections between the cycloids and the brain surface and the number of test points (crosses at each end of the cycloid arcs). To avoid bias, all cycloids must be positioned equally randomly with respect to the section, and their minor principal axis must be parallel to the selected vertical axis [22]. The final length density of vessels was calculated by multiplying twice the sum of total intercepts by the area per unit cycloid length and dividing by the section sampling fraction and the area sampling fraction: est L = 2 · ΣI · (a/l) · (1/ssf) · (1/asf), where est L – total length of capillaries (μm), a/l – area per unit cycloid length (with p/l the number of test points per unit length of cycloid), ΣI – total intercepts summed over the n probes (Ii – intercepts and Pi – test points of probe i), ssf – section sampling fraction, asf – area sampling fraction, t – section thickness, h – height of the counting frame. Photography Photos shown in Figure 1 were produced by a digital photography workstation. Photomicrographs (Fig. 5A-B) were produced by digital photography using an Olympus U-CMAD-2 digital camera attached to an Olympus AX 70 microscope. The final figures were constructed using Photoshop v.7. Adjustments of contrast and brightness were made to facilitate recognition of the immunohistochemical signal at low and high magnification, without altering the appearance of the original materials. The images were registered in the RGB (red, green, blue) colour system with a standard resolution of 10 000×10 000 pixels. Results We analyzed 54 images of 9 fetal brains aged 11 to 22 GW.
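The cycloid parametrization and the final length calculation described above reduce to a few lines of code. The counts below are made-up illustrative values, not data from the study:

```python
import numpy as np

def cycloid_arc(n=1000):
    # parametric cycloid (theta - sin theta, 1 - cos theta) over one half-arch
    theta = np.linspace(0.0, np.pi, n)
    return theta - np.sin(theta), 1.0 - np.cos(theta)

def estimated_length(total_intercepts, area_per_cycloid_length, ssf, asf):
    # est L = 2 * (sum of intercepts) * (a/l) / (ssf * asf), per the description above
    return 2.0 * total_intercepts * area_per_cycloid_length / (ssf * asf)

x, y = cycloid_arc()
# numeric arc length of one half-arch; the analytic value for a unit cycloid is 4
half_arch_length = np.hypot(np.diff(x), np.diff(y)).sum()
```

The known total length of the cycloid grid (53.1 μm per arc in the study) is what makes intersection counting a length estimator.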
The morphometric measurements were performed on 2D pictures of samples marked with CD34. In our research, we used two morphometric image analysis methods. Only CD34-marked blood vessels were included in the counting. In the cases studied with the cycloid method, major fluctuations were observed between 11 GW (CGM – 0.01095 μ/μm2; CWM – 0.01034 μ/μm2) and 22 GW (CGM – 0.009361 μ/μm2; CWM – 0.00900 μ/μm2) (Fig. 7). A remarkable drop in length of blood vessel network density was observed in GW 17 and 18. Changes in the length of blood vessel network density in GW 11 to 16 (CGM – 0.01020 μ/μm2; CWM – 0.00672 μ/μm2), both in the cortical grey matter and in the cortical white matter, did not prove statistically significant. However, the reduction of the length of blood vessel network density in GW 17 (CGM – 0.00466 μ/μm2; CWM – 0.00577 μ/μm2) and in GW 18 (CGM – 0.00367 μ/μm2; CWM – 0.00407 μ/μm2), both in the cortical grey matter and in the cortical white matter, proved significant. We found statistically significant changes between GW 11 and GW 17 (CGM – p<0.039; CWM – p<0.01), as well as between GW 11 and 18, in the length of blood vessel network density (CGM – p<0.0087; CWM – p<0.001). We also established statistically significant differences in the length of blood vessel network density between GW 17 and 21, both in the cortical grey matter and in the cortical white matter (CGM – p<0.01; CWM – p<0.002), as well as between GW 17 and 22 (CGM – p<0.019; CWM – p<0.03). We obtained similar results when comparing the length of vascular network density between GW 18 and 21 (CGM – p<0.004; CWM – p<0.001). Also, between GW 18 and 22, the difference in the length of vascular network density showed statistical significance (CGM – p<0.004; CWM – p<0.003), whereas it did not between GW 14 (CGM – 0.00660 μ/μm2) and GW 18 (CGM – 0.00367 μ/μm2) as regards the cortical grey matter.
Next, we analyzed the test sample using skeletonization (Fig. 8). The results did not show fluctuations as high as in the cycloid method analysis. The length of vascular network density between GW 11 (CGM – 0.00434 μ/μm2; CWM – 0.00419 μ/μm2) and GW 16 (CGM – 0.00273 μ/μm2; CWM – 0.00405 μ/μm2), both in the cortical grey matter and in the cortical white matter, is similar. What is remarkable, however, is the drop in length density of the vascular network in GW 18 (0.001189 μ/μm2), both in the cortical grey matter and in the cortical white matter. The increase in the vascular network length density in the cortical grey matter is similar between GW 19 (CGM – 0.00426 μ/μm2) and GW 22 (CGM – 0.00385 μ/μm2), whereas in the cortical white matter it is sinusoidal, reaching an extreme in GW 21 (0.00452 μ/μm2) and dropping in GW 22 (CWM – 0.00266 μ/μm2). Discussion The skeletonization and cycloid methods can be successfully applied in 2D image analysis. In our study, the results in the cortical grey matter obtained with the cycloid method were on average 45% higher than those from skeletonization, and as much as 57% higher in the cortical white matter (Table I). Such different results made us apply both methods to objects of known length. Thus, we measured the circumference of a square with a 250 μm side by skeletonization (1000 μm) and by the cycloid method (1319 μm) (Fig. 9). The outcome confirmed the differences in the study results, which stem from the algorithms used. Skeletonization, consisting in selecting axial points (skeletons) of figures, allows for measurement of the absolute length of objects on the surface (Fig. 10) [16], whereas the cycloid method, included in Stereo Investigator software (MicroBrightField), is an estimation method which consists of a known-length cycloid grid and counting intersection points (Fig. 5B). This procedure has a high error burden and should be used only very cautiously in 2D image morphometric analysis. Table I.
Comparison of length density of the vessel network by the cycloid and skeletonization methods

Age (GW)   Cycloid method (μ/μm2)              Skeletonization method (μ/μm2)
           grey matter     white matter        grey matter     white matter
11         0.01095         0.01034             0.00434         0.00419
12         0.00766         0.00968             0.00275         0.00365
14         0.00660         0.01192             0.00318         0.00432
16         0.01020         0.00672             0.00273         0.00405
17         0.00466         0.00577             0.00301         0.01139
18         0.00367         0.00407             0.00119         0.00115
19         0.00664         0.00743             0.00426         0.00352
21         0.008198        0.01139             0.00410         0.00452
22         0.009361        0.00900             0.00385         0.00266

A good solution seems to be skeletonization, which enables absolute length densities of the studied blood vessels to be specified in 2D images. In both methods, we observed a remarkable drop in length density of the vascular network in GW 18. This may be explained by intensive migration of neurons and glial cells and delayed angiogenesis, which changes the relation between blood vessels and the surface of the section and might result in reduced vascular length density. The reduction of vascular length density might also result from developmental atrophy of blood vessels, combined with their differentiation [8]. In such a case, the vascular atrophy might consist of apoptosis of vascular endothelial cells (VECs) [13], which results from reduced vascular flow and macrophage function [12]. Morphometric studies may be a good tool for analyzing vasculogenesis and angiogenesis in the brain development process. The cycloid method could be applied to measure the approximate length density of vessels. However, skeletonization should preferably be applied to measure the precise length density of vessels in the cortical grey matter and the cortical white matter. Acknowledgement The manuscript was greatly improved by comments and suggestions made by Prof. R. Tadeusiewicz.
The study is supported by the Polish State Committee for Scientific Research through the statutory research foundation. References 1. Aphelion TM, 2005. http://www.adcis.net/Products/Aphelion/GeneralDescription.html 2. Baddeley AJ, Gundersen HJG, Cruz-Orive LM. Estimation of surface area from vertical sections. J Microsc 1986; 142: 259-276. 3. Baldwin HS. Early embryonic vascular development. Cardiovasc Res 1996; 31: E34-E45. 4. Bouix S, Siddiqi K, Tannenbaum A. Flux driven automatic centerline extraction. Med Image Anal 2005; 9: 209-221. 5. Buffon GL. Essai d'Arithmetique Morale (Suppl. Histoire Naturelle vol. 4). Imprimerie Royale, Paris 1777. 6. Costaridou L. Medical Image Analysis Methods. CRC Press LLC 2005. 7. Dhawan A. Medical Image Analysis. Wiley-IEEE Press 2000. 8. Feinberg R, Noden D. Experimental analysis of blood vessel development in the avian wing bud. Anat Rec 1991; 231: 136-144. 9. Friedl P. Prespecification and plasticity: shifting mechanisms of cell migration. Curr Opin Cell Biol 2004; 16: 14-23. 10. Fuller N, Aboudarham J, Bentley RD. Filament recognition and image cleaning on Meudon Hα spectroheliograms. Solar Physics 2005; 227: 61-73. 11. Ikeda E, Flamme I, Risau W. Developing brain cells produce factors capable of inducing the HT7 antigen, a blood-brain barrier-specific molecule, in chick endothelial cells. Neurosci Lett 1996; 209: 149-152. 12. Lang RA, Bishop MJ. Macrophages are required for cell death and tissue remodeling in the developing mouse eye. Cell 1993; 74: 453-462. 13. Lang RA, Lustig M, Francois F, Sellinger M, Pleksen H. Apoptosis during macrophage-dependent tissue remodeling. Development 1994; 120: 3395-3403. 14. Miyawaki TM, Takashima S. Developmental characteristics of vessel density in the human fetal and infant brains. Early Hum Dev 1998; 53: 65-72. 15. Niemi LT, Suvisaari JM, Tuulio-Henriksson A, Lönnqvist JK. Childhood developmental abnormalities in schizophrenia: evidence from high-risk studies.
Schizophr Res 2003; 60: 239-258. 16. Obara B, Nizankowski RT. Application of image analysis methods to vascular blood flow analysis in angiographic imaging. Image Processing in Industrial Information Technology, Warsaw, Poland 2004, pp. 76-78. 17. Palágyi K, Tschirren J, Hoffman EA, Sonka M. Quantitative analysis of pulmonary airway tree structures. Comput Biol Med 2006; 36: 974-996. 18. Rapoport JL, Addington AM, Frangou S, Psych MR. The neurodevelopmental model of schizophrenia: update 2005. Mol Psychiatry 2005; 1-16. 19. Rosenstein JM, Krum JM. New roles for VEGF in nervous tissue – beyond blood vessels. Exp Neurol 2004; 187: 246-253. 20. Soltanian-Zadeh H, Shahrokni A, Khalighi M, Zhang ZG, Zoroofi RA, Maddah M, Chopp M. 3-D quantification and visualization of vascular structures from confocal microscopic images using skeletonization and voxel-coding. Comput Biol Med 2005; 35: 791-813. 21. Urbich C, Dimmeler S. Endothelial progenitor cells: functional characterization. Trends Cardiovasc Med 2004; 14: 318-322. 22. Vesterby A, Kragstrup J, Gundersen HJG, Melsen F. Unbiased stereologic estimation of surface density in bone using vertical sections. Bone 1987; 8: 13-17. 23. Weibel E. Stereological Methods. Vol. 1: Practical Methods for Biological Morphometry. Academic Press, London 1979. 24. Wierzba-Bobrowicz T, Lewandowska E. Morphological study of endothelial cells in the human fetus during the early period of gestation. Folia Neuropathol 1995; 33: 241-245. 25. Zygmunt M, Herr F, Münstedt K, Liang U, Liang OD. Angiogenesis and vasculogenesis in pregnancy. Eur J Obstet Gynecol Reprod Biol 2003; 110: S10-S18.

Quantitation and Localization of Matrix Metalloproteinases and Their Inhibitors in Human Carotid Endarterectomy Tissues Salman Choudhary, Catherine L.
Higgins, Iou Yih Chen, Michael Reardon, Gerald Lawrie, G. Wesley Vick III, Christof Karmonik, David P. Via, Joel D. Morrisett Background—Matrix metalloproteinases (MMPs) and their inhibitors (TIMPs) play a central role in arterial wall remodeling, affecting stability of fibrous caps covering atherosclerotic plaques. The objective of this study was to determine the spatial distribution of TIMP mass and MMP mass and activity of carotid endarterectomy (CEA) tissues and relate it to the distribution of atherosclerotic lesions. Methods and Results—Fresh CEA tissues were imaged by multicontrast MRI to generate 3D reconstructions. Tissue segments were cut transversely from the common, bifurcation, internal, and external regions. Segments were subjected to total protein extractions and analyzed by ELISA for MMP-2 and -9 and TIMP-1 and -2 mass and by zymography for gelatinase activity. Segments at or near the bifurcation with highly calcified lesions contained higher MMP levels and activity than segments distant from the bifurcation; highly fibrotic or necrotic plaque contained lower MMP levels and activity and higher TIMP levels. Fatty streak, fibroatheroma with hemorrhage and calcification, and fully occluded lesions were enriched in MMP-2, MMP-9, and TIMP-1 and TIMP-2, respectively. Conclusion—The spatial distribution of MMPs and TIMPs in carotid atherosclerotic lesions is highly heterogeneous, reflecting lesion location, size, and composition. This study provides the first semi-quantitative maps of differential distribution of MMPs and TIMPs over atherosclerotic plaques. (Arterioscler Thromb Vasc Biol. 2006;26:2351-2358.) Key Words: carotid artery • atherosclerosis • MMPs • TIMPs • MRI Advanced atherosclerotic plaques typically have a lipid-rich core covered by a fibrous cap composed of smooth muscle cells (SMC) and extracellular matrix (ECM).1 Plaque vulnerability is influenced by overall size, core size, cap thickness, cap inflammation, and cap fatigue.
Fibrous caps are composed mainly of collagen, which determines cap tensile strength.2 Matrix composition affects several events in lesion development, including cell migration and proliferation, lipoprotein retention, cell adhesion, calcification, thrombosis, and apoptosis.3 Degradation of ECM, which may weaken the fibrous cap resulting in plaque rupture, can be accomplished by macrophages through phagocytosis or secreted proteolytic enzymes, such as matrix metalloproteinases (MMPs).4 MMPs are a family of Zn2+- and Ca2+-dependent endopeptidases that degrade ECM proteins, such as gelatin, collagen, elastin, and fibrin, and play a central role in arterial wall remodeling. Secreted as zymogens (pro-MMPs) that must be activated by other proteases or reaction with organic mercurials, MMPs are active at neutral pH and can be inhibited by proteins including tissue inhibitors of metalloproteinases (TIMPs), by α2-macroglobulin, and by metal chelators such as phenanthroline and EDTA.5,6 Although the MMP family consists of almost 20 known proteins, the present study has focused on MMP-2 and -9 and their role in carotid atherosclerosis. MMP-2 (gelatinase A) is secreted as a 72-kDa proenzyme and primarily expressed in mesenchymal cells during development and tissue regeneration.7 Cleaving of the N-terminal prodomain can be initiated by membrane-type MMPs or serine proteases.8 MMP-2 can degrade collagens, elastin, and fibronectin. With MMP-9, it degrades type IV collagen, the major component of basement membranes, and gelatin. MMP-9 (gelatinase B) has been identified in a number of cell types and has a broad range of specificities for native collagens, as well as gelatin, proteoglycans, and elastin.
Secreted in a precursor form (pro–MMP-9, 92 kDa) that can be activated by MMP-3 (stromelysin-1) or bacterial proteinases, MMP-9 has been implicated in processes characteristic of inflammatory cells, uterine invasion of trophoblasts, and bone absorption, and is thought to act synergistically with MMP-1 in degradation of fibrillar collagens as it degrades their denatured gelatin forms.9,10 Expressed by almost all activated macrophages, MMP-9 is the most prevalent form of MMP. The natural plasma inhibitors of MMPs are TIMPs, which act to suppress matrix degradation and can be synthesized by monocytes/macrophages, SMC, and endothelial cells. Tissue activity requires a balance between MMP activation and TIMP inhibition, which is important in tissue remodeling, inflammation, tumor growth, and metastasis. TIMPs form 1:1 noncovalent complexes with MMPs and block access of substrates to catalytic sites. Four members of the TIMP family are known, 2 of which, TIMP-1 and -2, are the focus of this study. Dysregulation of MMP/TIMP balance is a characteristic of extensive tissue degradation in certain degenerative diseases. Original received February 27, 2006; final version accepted June 21, 2006. From the Departments of Medicine and Biochemistry and Molecular Biology (S.C., C.L.H., I.Y.C., D.P.V., J.D.M.), Baylor College of Medicine; The Methodist Hospital (M.R., G.L.); Department of Pediatrics (G.W.V.), Texas Children's Hospital; and Department of Radiology (C.K.), The Methodist Hospital Research Institute, Houston, Tex. S.C. and C.L.H. contributed equally to this work. Correspondence to Dr Joel D. Morrisett, The Methodist Hospital, A-601, 6565 Fannin St, Houston, TX 77030. E-mail morriset@bcm.tmc.edu © 2006 American Heart Association, Inc. Arterioscler Thromb Vasc Biol. is available at http://www.atvbaha.org DOI: 10.1161/01.ATV.0000239461.87113.0b
Secretion of proteases at focal sites in plaques ultimately results in plaque instability and rupture. Therefore, localizing MMPs and TIMPs in atherosclerotic lesions is necessary for understanding the disease progression and regression. MRI has become a powerful technology for imaging carotid atherosclerotic lesions in vivo. MRI was used to demonstrate that lipid-lowering with simvastatin is associated with significant regression of human carotid atherosclerotic lesions11 and to show that substantial low-density lipoprotein cholesterol reduction with rosuvastatin resulted in regression of the lipid-rich necrotic core.12 Takaya et al13 demonstrated by MRI that hemorrhage into plaque accelerated progression. These and earlier studies14 attest to the value of MRI as a noninvasive method for accurately monitoring plaque dimensions (total vessel volume, normal wall volume, plaque volume, lumen volume) and composition (calcification, lipid, fibrous, thrombus). Materials and Methods Tissue Acquisition and Storage Carotid endarterectomy (CEA) specimens were obtained within 1 hour after surgical resection, digitally photographed, and stored until use in 50% glycerol/PBS (20°C) to preserve tissue morphology. Larger specimens having intact common, internal, and external branches were preferred for study (approved by an institutional review committee of Baylor College of Medicine; subjects gave informed consent). Magnetic Resonance Imaging Tissues were washed in PBS and transferred to a specially fabricated sample holder, permitting simultaneous imaging of 4 samples.15 The holder oriented the tissue long axis along the y-axis of the magnet so that 2-mm coronal slices gave axial images (supplemental Figure I, available online at http://atvb.ahajournals.org).
A General Electric XL Enhance system operating at 1.5 T equipped with 6-cm phased array coils (Pathway Biomedical, Redmond, Wash) was used to acquire proton density weighted (PD-W), T1 weighted (T1W), and T2 weighted (T2W) images under conditions similar to those described previously.16 Tissue Segmentation and Digital Photography CEA specimens were cut into 5-mm segments from the bifurcation into the common, external, and internal carotids. Microscopic images of segments were acquired using a Leica DC300 digital camera attached to a Stereomaster dissecting microscope to document features frequently lost during processing (eg, thrombus, calcification) and to capture subtle textural and morphological features not always detected by other techniques. Using these images, lesion composition was evaluated and lesion categories were assigned.17 Protein Extraction Tissues were extracted for total protein by the following methods. Mild Conditions Tissue segments (tissue nos. 958, 973, 974, 991, 1006) were transferred to tared 15-mL plastic culture tubes and weighed. Segments were then incubated with 2 mL of DMEM containing 4 μL of gentamycin (50 μg/100 mL) while agitated gently at room temperature for 15 hours, after which the extract medium was removed and stored (−20°C) for analysis. Moderate Conditions Moderate conditions sequentially followed mild conditions. Tissue segments (tissue nos. 958, 973, 974, 991, 1006) were transferred to tared 15-mL plastic culture tubes and weighed. To each tube was added 3 mL of DMEM containing 6 μL of gentamycin (50 μg/100 mL) and 0.1% octyl glucoside. Tissues were homogenized (Brinkmann Polytron; 10-second bursts, 50% power) until segments were completely dispersed into homogeneous suspensions. Extensively calcified tissues required multiple homogenization bursts. Homogenates were centrifuged at 3000 rpm (30 minutes). Supernatants were decanted into cryovials and stored (−20°C) until analyzed.
Stringent Conditions Tissues (tissue nos. 960, 968, 985, 989, 1000) were transferred to tared 15-mL plastic culture tubes and weighed. To each tube was added 3 mL of extraction buffer (50 mmol/L 4-[2-hydroxyethyl]-1-piperazineethanesulfonic acid [HEPES], 150 mmol/L NaCl, 1 mmol/L ethylene glycol-bis[2-aminoethylether]-N,N,N′,N′-tetraacetic acid [EGTA], 10 mmol/L sodium pyrophosphate, 100 mmol/L NaF, 1.5 mmol/L MgCl2, 10% glycerol, 1% Triton X-100 [pH 7.5]). Tissues were homogenized as described for moderate conditions. Measurement of MMPs and TIMPs in Tissue Extracts Human total MMP-2,7,8 total MMP-9,6,9,10 TIMP-1,18,19 and TIMP-2,20,21 were measured by enzyme-linked immunosorbent assay (ELISA) with reagents from R&D Systems (Minneapolis, Minn) following the instructions in the product insert. MMP and TIMP data were normalized for extraction efficiency (supplemental Table I) and total protein and were expressed in normalized form unless otherwise stated. Proteins extracted under mild and moderate conditions were analyzed separately, and the individual data were added together. Zymography Zymography was performed on total protein extracts from CEA tissue segments. Zymograms were run on 10% gels containing gelatin (Bio-Rad, Richmond, Calif) using the procedure described in the package insert. After electrophoresis, gels were renatured by incubation in detergent-free buffer, then activated by equilibration in activation buffer (50 mmol/L Tris hydroxymethyl aminomethane [TRIS], 150 mmol/L NaCl, 10 mmol/L CaCl2, 1 μmol/L ZnCl2, 0.02% NaN3, pH 7.5; 37°C, 4 hours). Gels were stained with Coomassie blue (0.05%, 30 minutes). Areas in which gelatin was digested appeared as clear bands (supplemental Figure III). ImageJ software was used to quantify bands. Enzyme activity was expressed in terms of band area produced per nanogram of pure MMP-9 (Oncogene, Boston, Mass). A standard curve for each gel was generated from 0.25, 0.50, 1.00, and 2.00 ng of pure MMP-9. The area of each band was integrated individually and plotted on a histogram showing relative contribution to total gelatinase activity. Three-Dimensional Reconstructions and MMP Data Fusion Three-dimensional renderings of CEA tissues were generated using MRI slices by first interpolating the image data to contiguous slices
The area of each band was integrated individually and plotted on a histogram showing relative contribution to total gelatinase activity. Three-Dimensional Reconstructions and MMP Data Fusion Three-dimensional renderings of CEA tissues were generated using MRI slices by first interpolating the image data to contiguous slices 2352 Arterioscler Thromb Vasc Biol. October 2006 D ow nloaded from http://ahajournals.org by on A pril 5, 2021 of 1-mm thickness as the lowest common denominator between the MRI slice thickness of 2 mm and the tissue segment thickness of 5 mm. The grayscale pixel values of the CEA tissue in each slice was then mapped to MMP enzyme activity in the following manner: MMP activity of 7 ng/mg total protein (as upper limit of MMP activity for all tissues) was assigned a grayscale value of 255. For each tissue segment (5 of the 1-mm slices), a grayscale value proportional to MMP activity was calculated by dividing the segment MMP activity by 7 and multiplying by 255. This modified image data were stored in raw image data format. All image manipulations were performed using the ImageJ software package developed by Wayne Rasband (Research Services Branch, National Institute of Mental Health, Bethesda, Md; http://rsb.info.nih.gov/ij). Next, the Paraview software package (http://www.paraview.org) was utilized to create 3D-surface rendered reconstructions with a color lookup table mapping the grayscale pixel value 0 to purple and the grayscale pixel value 255 to red. Statistical Analysis Plots indicating extent of association among MMP activity in CEA segment extracts (as determined by gelatin zymography) and MMP and TIMP masses (as determined by ELISA) were generated. The association between different analytes was determined by Spearman correlation analysis. Correlations with P�0.05 were considered statistically significant. 
Results The PD-W magnetic resonance (MR) images show discrete 2-mm slices (supplemental Figure II) corresponding to tissue contained within specific 5-mm segments (supplemental Figures IV through XIII). These images (32 slices acquired, 4 shown) capture features seen in macroscopic images of intact tissues (Figure 1) and microscopic images of segments (supplemental Figures IV through XIII). For example, in tissue 960, slices 14 of 32 and 18 of 32, corresponding to segments I1 and Bd (supplemental Figure II), exhibit large hypointense stenotic regions attributable to calcification, as verified by coregistered digital photographs (supplemental Figure V). These 2D MR images were used to reconstruct 3D images that served as templates onto which segment properties (eg, enzyme activity, protein mass) were mapped. The MR images, macroscopic pictures, and microscopic photos each indicated considerable structural and compositional heterogeneity. Frequently, the bifurcation segment(s) had the greatest lesion burden, diminishing in the common and internal carotid segments with increasing distance from the flow divider. Typically, the external carotid contained little if any lesion and appeared as a patent, whitish tube (Figure 1: no. 985, E1 through E3; no. 989, E1 through E3). The high calcification content of some CEA tissues made it difficult to obtain histological sections that retained all the original components. For this reason, we used digital photography to document segment composition. The value of this approach is illustrated in supplemental Figure V, which shows extensive calcification in segments C1, Bp, Bd, and I0 (no. 960). The 3D visualization of this component is difficult if not impossible to duplicate by conventional histology of paraffin or frozen sections. The heterogeneous distribution of macroscopic components suggested heterogeneous distribution of molecular components, such as MMPs and TIMPs. Figure 1. Digital photographs of human CEA tissues (nos. 960, 968, 985, 989) with demarcation lines indicating segment location and identity.
Rather than perform qualitative immunohistochemical staining for these proteins in situ, we chose to extract them from the tissue segments and analyze them by ELISA. This approach enabled quantitation of not only the individual proteins but also composite enzymatic activity. Three different extraction procedures were used. Forty segments cut from 5 CEA tissues (nos. 958, 973, 974, 991, 1006) were subjected first to the mild and then the moderate extraction procedure, and 42 segments cut from 5 tissues (nos. 960, 968, 985, 989, 1000) were subjected to the stringent procedure. The effect of each extraction buffer on the immunoreactivity of each MMP and TIMP was determined by comparing the immunoreactivity of each pure protein before and after its exposure to the extraction buffer. Mild, moderate, and stringent buffers led to immunoreactivity recoveries of 88% to 97%, 92% to 107%, and 73% to 99%, respectively, for each MMP and TIMP (supplemental Table I). The effect of the extraction process on MMPs and TIMPs was determined by measuring the percent recovery of immunoreactivity of exogenous protein added to normal CEA tissues subjected to one cycle of extraction. The process allowed recovery of 61% to 100%, 81% to 107%, and 61% to 118% immunoreactivity under mild, moderate, and stringent conditions, respectively (supplemental Table I). The total extraction efficiency for MMPs and TIMPs was evaluated by measuring the immunoreactivity of endogenous protein in CEA tissue subjected to three extraction cycles, then extrapolating to 0 cycles. This measurement compensated for the effect of buffer and the extraction process on the
The extraction efficiency for mild, moderate, and stringent buffers was 73 to 100, 69 to 100, and 68% to 100%, respectively (supplemental Table I). The extraction efficiency values were used to correct the MMP and TIMP values measured by ELISA. Corrected values were plotted as positive histograms for MMP-2 and MMP-9 and as negative histograms for TIMP-1 and TIMP-2 (Figure 2). Each histogram represents the mass of these proteins as a function of tissue segment position. Several of the tissues studied were highly calcified at or near the bifurcation (eg, supplemental Figure V; no. 960: I1, I0, Bd, Bp). These segments were rich in MMPs and poor in TIMPs (Figure 2). In other CEA tissues, the bifurcation was very fibrotic and/or necrotic (supplemental Figure XII; no. 1000: B1, B, I1, I2, I3). These segments were poorer in MMPs and richer in TIMPs, especially TIMP-1. In still other tissues, the bifurcation area contained a thrombus (supple- mental Figure VI; no. 968: B, I1). These segments were also rich in MMPs, especially MMP-9 (Figure 2). Although ELISA measurements of MMPs and TIMPs indicate absolute abundance of these proteins in tissue ex- tracts, this information does not necessarily indicate net gelatinase activity operating on the tissue. This activity was determined by gelatin zymography.22 Major bands, with molecular masses of 100 and 88 kDa, corresponding to MMP-9, and with 70 and 62 kDa, corresponding to MMP-2 were observed (supplemental Figure III). An additional band with a molecular mass of 130 kDa, not previously assigned to an individual MMP, was observed in some but not all extracts. The combined MMP-9 (or MMP-2) band areas for each extract sample were taken as a quantitative measure of MMP-9 (or MMP-2) activity. The MMP-9 activity, expressed as band area per unit MMP-9 mass, was significantly asso- ciated with MMP-9 abundance in some tissues (eg, nos. 960, 968, 973, 989, 1006) but not in others (eg, nos. 958, 965, 985, 991, 1000). 
Similarly, MMP-2 mass was also correlated with activity in some tissues (eg, nos. 958, 991, 1006) but not in others (eg, nos. 960, 968, 973, 974, 985, 989, 1000). This variability of statistically significant association between MMP-2 or MMP-9 mass and gelatinase activity in individual tissues may be due in part to the variation in TIMP-1 and TIMP-2 abundance. However, when the data from all ten tissues were combined, the mass of MMP-9 in individual tissue segment extracts was correlated (r=0.61, P<0.0001, n=42) with the activity of MMP-9. Likewise, the mass of MMP-2 was correlated (r=0.43, P=0.004, n=42) with activity of MMP-2.

Figure 2. MMP and TIMP levels as determined by ELISA for the human CEA tissue segments (nos. 960, 968, 989, 1000), subjected to stringent extraction. MMP-2 and MMP-9 levels extend above the origin; TIMP-1 and TIMP-2 levels extend below the origin. Percentage of each MMP and TIMP level is indicated within each histogram.

2354 Arterioscler Thromb Vasc Biol. October 2006

To enhance visualization of gelatinase activity distribution over the CEA tissues, total activity values (0 to 7 MMP-9 ng equivalents) were color-coded then mapped onto 3D reconstructions of tissue nos. 960, 968, 985, 989. These representations indicate that maximum activity is localized to diseased segments at or near the bifurcation, including the common and internal but not the external carotid.

Discussion

CEA tissues are highly heterogeneous, containing some areas that are grossly normal and other areas that are extensively stenosed. By cutting the common, internal, and external carotid branches into multiple segments, we were able to isolate normal and diseased regions and characterize their morphology, dimensions, and composition (Figure 1). The conventional approach for characterizing atherosclerotic plaque morphology has been histological analysis of paraffin or frozen sections.
However, many CEA tissues are so structurally fragile that they are not amenable to this approach. For this reason, we elected to use digital photography to capture images that showed features often lost during sectioning and staining procedures. Immunohistochemistry is extensively used for depicting distribution of specific proteins in tissues. However, the results are usually qualitative and do not readily allow reasonably accurate quantitation. Because we wanted to quantitatively compare several different protein components in the same tissue segment, we chose to perform total protein extraction on each segment, then measure analytes of choice in the extract. When accompanied by careful controls, the total extract approach enables measurement of selected protein mass and activity by ELISA and zymography, respectively. In a separate study, we have used these extracts for protein microarray analysis.23 In the present study, we have used 2D MR image slices to reconstruct 3D images of the tissues. These 3D images serve as useful templates onto which specific molecular components (eg, protein, mRNA) can be quantitatively mapped, a capability not available from 2D digital photographs. In this study, 10 CEA tissues were cut into ~80 segments that were characterized in terms of morphology, MMP and TIMP composition, and gelatinase activity. Segments at or adjacent to the bifurcation were significantly stenosed, containing varying amounts of lipid, thrombus, calcification, and/or fibrotic material. The accumulation of these plaque components is usually attended by arterial remodeling, a process mediated by MMPs and attenuated by TIMPs. Typically, the abundance of MMPs was greatest in lesioned segments at the bifurcation (eg, Bp, Bd) or adjacent to it (eg, I1). MMP-9 was frequently the most abundant MMP, up to 95% in some segments, but occasionally MMP-2 was predominant (Figure 2; no. 989).
In grossly normal segments such as those in the external carotid (Figure 1; no. 989: E1 through E3) and proximal common carotid (Figure 1; nos. 960, 968: C1 through C3), the abundance of MMPs was significantly less than in lesioned segments, suggesting that the normal segments were not undergoing as extensive remodeling as lesioned segments. Notably, the greater abundance of MMP-9 in the plaque area suggests that it is a major MMP mediating remodeling in this area. In grossly normal areas, the level of total MMPs is lower and dominated by MMP-2. This observation suggests that remodeling of atheroma and normal arterial wall are mediated by different MMPs.24,25 The relative mass abundance of each MMP suggests but does not prove relative proteinase activity. This issue was addressed by gelatin zymography (supplemental Figure III), which demonstrated that the mass of MMP-9 protein measured by ELISA was significantly associated with gelatinase activity of MMP-9 (r=0.614) and that MMP-2 protein mass was correlated with MMP-2 activity (r=0.435). Proteinase activity was highest in lesioned segments (eg, Figure 3; no. 960: Bd, Bp) containing abundant MMPs and was somewhat lower in most normal segments (eg, Figure 3; no. 989: E1 through E3). This study clearly demonstrates the longitudinally heterogeneous distribution of MMPs and TIMPs among atherosclerotic lesions and grossly normal regions within 5-mm tissue segments. However, this interplane spatial resolution enables neither qualitative description nor quantitative measurement of subtle intraplane differences. The circumferential rings of artery may contain both diseased and healthy regions, and the combination of these 2 may account for some of the variability in our results. Such differences will require higher resolution techniques such as thin section microscopy, laser capture microdissection, and protein microarrays for ultrastructural description and analysis.
The American Heart Association Committee on Vascular Lesions26 and Virmani et al17 have developed a system for classifying lesions according to their morphological and compositional features from 2D histological thin sections of coronary lesions. In this study, we have used digital images that contained significant 3D information and MRI, which can distinguish plaque components,27 to classify the lesions in CEA segments (supplemental Table II). Many segments in the external and proximal common carotid exhibited mild fibrous or lipid accumulation and were classified as types I to III. Most of the segments in the bifurcation and proximal internal carotid contained large fibroatheroma, frequently with hemorrhage and/or calcification, and hence were classified as type IVa. This lesion was the most abundant (36%) of all the 10 types, with IVb being the next most abundant (15%). Significantly, the type II lesions contained more MMP-2 than any other lesion type, whereas type IVc lesions contained the most MMP-9, suggesting that type IVc lesions are undergoing considerable remodeling (Figure 4). Lesion type Vd contained much greater amounts of TIMP-1 and TIMP-2 than any other lesion type. This finding suggests that type Vd lesions have depressed MMP activity and suppressed remodeling, leading to net formation of connective tissue and almost complete occlusion (supplemental Figure XII; no. 1000: B1, I1, I2). Mapping total gelatinase activity on 3D CEA images reconstructed from 2D MRI slices provides a graphical method for efficiently conveying quantitative information (Figure 5). Tissue nos. 960 and 989 had the highest level of activity (~7 MMP-9 ng equivalents); this activity was concentrated in the highly lesioned bifurcation and adjacent segments.
Tissues with lower levels of activity still exhibited their maximal amount at or near the bifurcation. Intraplaque hemorrhage is common in advanced coronary atherosclerotic lesions.28 Significantly, hemorrhage has been shown to increase expression of MMPs, including MMP-2 and MMP-9, in lesions.29 In the present study, a number of carotid atheroma segments exhibited sites of intraplaque hemorrhage (eg, supplemental Figure V: no. 968, Bd, I1, I2; supplemental Figure X: no. 989, C2, C1, B). These segments contained considerable amounts of MMP-2 and MMP-9 (Figure 2). The accumulation of MMPs at these sites may be attributable in part to the infiltration of macrophages, cells known to secrete MMPs and destabilize plaques.

Figure 3. MMP activity reported as the activity displayed by 1 ng of pure MMP-9 and as determined by gelatin zymography for human CEA tissue segments (nos. 960, 968, 989, 1000). As many as 5 bands (62, 70, 88, 100, 130 kDa) were observed in some cases and all are displayed. Percentages of total MMP activity for each band are indicated.

Figure 4. Distribution of MMP-2, MMP-9, TIMP-1, and TIMP-2 among CEA segments characterized by the lesion classification system developed by the American Heart Association Committee on Vascular Lesions26 and Virmani et al.17 Type II lesions contained the most MMP-2. Type IVc lesions had the highest content of MMP-9. Type Vd lesions contained the most TIMP-1 and TIMP-2. Histogram height represents the mean level of a specific MMP found in a particular lesion type.

The increasing body of evidence indicating the correlation of MMP and TIMP tissue levels with plaque stability30,31 raises the question of how plasma levels of these proteins might be involved.
This question has been addressed in a recent study by Sapienza et al32 of MMP-1, -2, and -9 and TIMP-1 and -2 levels in patients undergoing endarterectomy. The plaques from patients were classified histologically as stable or unstable and their plasma MMP levels measured before and after surgery. Plasma MMP levels were higher in patients with unstable plaques compared with patients with stable plaques. In contrast, plasma levels of TIMPs were lower in patients with unstable plaques compared with stable plaques. This observation indicates that plasma levels of MMPs and TIMPs may be important biomarkers for carotid plaque instability. Numerous studies have demonstrated that MMP-2 and MMP-9 are essential for SMC behavior and hence play a major role in fibrous cap formation and healing. However, imbalance in tissue levels of MMPs and TIMPs may be a cause of plaque instability. This condition might be treated by decreasing the level or activity of the enzymes. A feasible approach has been suggested by Bellosta et al,33 who have observed that 3-hydroxy-3-methylglutaryl coenzyme A (HMG-CoA) reductase inhibitors decrease MMP-9, an effect mediated by suppressing synthesis of mevalonate, a precursor of multiple intermediates essential for several cellular functions. Other approaches might include intervention with novel inhibitors of MMP gene expression or activity.

Conclusion

CEA tissue segments have been used to investigate the distribution and abundance of MMPs and TIMPs in atherosclerotic plaques. MMP-9 was highly abundant in calcified segments at or near the bifurcation. MMP-9 was very abundant in segments with intraplaque hemorrhage. Grossly normal segments contained lesser amounts of MMPs, which were predominantly MMP-2. TIMPs were less abundant in calcified plaques and more abundant in fibrotic and necrotic segments. This study provides the first semiquantitative maps of differential distributions of MMPs and TIMPs over an atherosclerotic plaque.
Acknowledgments

We thank Seth Marvel for assistance with zymogram analysis.

Sources of Funding

This work has been supported in part by National Institutes of Health grants R01HL63090 and T32HL07812 (TTGA).

Disclosures

None.

References

1. Ross R. The pathogenesis of atherosclerosis: a perspective for the 1990s. Nature. 1993;362:801–809.
2. Burleigh MC, Briggs AD, Lendon CL, Davies MJ, Born GV, Richardson PD. Collagen types I and III, collagen content, GAGs and mechanical strength of human atherosclerotic plaque caps: span-wise variations. Atherosclerosis. 1992;96:71–81.
3. Wight TN. The extracellular matrix and atherosclerosis. Curr Opin Lipidol. 1995;6:326–334.
4. Dollery CM, McEwan JR, Henney AM. Matrix metalloproteinases and cardiovascular disease. Circ Res. 1995;77:863–868.
5. Massova I, Kotra LP, Fridman R, Mobashery S. Matrix metalloproteinases: structures, evolution, and diversification. FASEB J. 1998;12:1075–1095.
6. Birkedal-Hansen H, Moore WG, Bodden MK, Windsor LJ, Birkedal-Hansen B, DeCarlo A, Engler JA. Matrix metalloproteinases: a review. Crit Rev Oral Biol Med. 1993;4:197–250.
7. Morgunova E, Tuuttila A, Bergmann U, Isupov M, Lindqvist Y, Schneider G, Tryggvason K. Structure of human pro-matrix metalloproteinase-2: activation mechanism revealed. Science. 1999;284:1667–1670.
8. Nguyen M, Arkell J, Jackson CJ. Activated protein C directly activates human endothelial gelatinase A. J Biol Chem. 2000;275:9095–9098.
9. Ogata Y, Pratta MA, Nagase H, Arner EC. Matrix metalloproteinase 9 (92-kDa gelatinase/type IV collagenase) is induced in rabbit articular chondrocytes by cotreatment with interleukin 1 beta and a protein kinase C activator. Exp Cell Res. 1992;201:245–249.
10. Koide H, Nakamura T, Ebihara I, Tomino Y. Increased mRNA expression of metalloproteinase-9 in peripheral blood monocytes from patients with immunoglobulin A nephropathy. Am J Kidney Dis. 1996;28:32–39.
11.
Corti R, Fuster V, Fayad ZA, Worthley SG, Helft G, Smith D, Weinberger J, Wentzel J, Mizsei G, Mercuri M, Badimon JJ. Lipid lowering by simvastatin induces regression of human atherosclerotic lesions: two years' follow-up by high-resolution noninvasive magnetic resonance imaging. Circulation. 2002;106:2884–2887.
12. Saam T, Yuan C, Zhao XQ, Takaya N, Underhill H, Chu B, Cai J, Kerwin W, Kraiss LW, Parker DL, Hamar W, Raichlen J, Cain V, Waterton J, Hatsukami TS. 75th European Atherosclerosis Society Congress, 2005, Prague, Czech Republic.
13. Takaya N, Yuan C, Chu B, Saam T, Polissar NL, Jarvik GP, Isaac C, McDonough J, Natiello C, Small R, Ferguson MS, Hatsukami TS. Presence of intraplaque hemorrhage stimulates progression of carotid atherosclerotic plaques: a high-resolution magnetic resonance imaging study. Circulation. 2005;111:2768–2775.
14. Adams GJ, Greene J, Vick GW 3rd, Harrist R, Kimball KT, Karmonik C, Ballantyne CM, Insull W Jr, Morrisett JD. Tracking regression and progression of atherosclerosis in human carotid arteries using high-resolution magnetic resonance imaging. Magn Reson Imaging. 2004;22:1249–1258.
15. Karmonik C, Adams G, Morrisett JD. Medical Imaging I: Vascular Imaging. 20th Annual Meeting of the Houston Society for Engineering in Medicine and Biology, 2003, Houston, Tex.

Figure 5. Three-dimensional reconstructions of CEA tissues (Figure 1) from multiple 2D MRI slices (supplemental Figure II; corresponding arrows) with fusion of MMP enzyme activity (Figure 3). Color-coded activity data were fused onto 3D renderings as described in the Materials and Methods. The maximal MMP activities from which the color levels were calibrated for tissue nos. 960, 968, 985, and 989 were 6.75, 3.12, 3.55, and 6.96 ng/mg total extracted protein, respectively.

16.
Adams GJ, Simoni DM, Bordelon CB Jr, Vick GW 3rd, Kimball KT, Insull W Jr, Morrisett JD. Bilateral symmetry of human carotid artery atherosclerosis. Stroke. 2002;33:2575–2580.
17. Virmani R, Kolodgie FD, Burke AP, Farb A, Schwartz SM. Lessons from sudden coronary death: a comprehensive morphological classification scheme for atherosclerotic lesions. Arterioscler Thromb Vasc Biol. 2000;20:1262–1275.
18. Murphy G, Houbrechts A, Cockett MI, Williamson RA, O'Shea M, Docherty AJ. The N-terminal domain of tissue inhibitor of metalloproteinases retains metalloproteinase inhibitory activity. Biochemistry. 1991;30:8097–8102.
19. Murphy G, Koklitis P, Carne AF. Dissociation of tissue inhibitor of metalloproteinases (TIMP) from enzyme complexes yields fully active inhibitor. Biochem J. 1989;261:1031–1034.
20. Brew K, Dinakarpandian D, Nagase H. Tissue inhibitors of metalloproteinases: evolution, structure and function. Biochim Biophys Acta. 2000;1477:267–283.
21. Wang Z, Juttermann R, Soloway PD. TIMP-2 is required for efficient activation of proMMP-2 in vivo. J Biol Chem. 2000;275:26411–26415.
22. Snoek-van Beurden PA, Von den Hoff JW. Zymographic techniques for the analysis of matrix metalloproteinases and their inhibitors. Biotechniques. 2005;38:73–83.
23. Ramaswamy A, Lin E, Chen I, Mitra R, Morrisett J, Coombes K, Ju Z, Kapoor M. Application of protein lysate microarrays to molecular marker verification and quantification proteome. Proteome Sci. 2005;3:9.
24. Johnson C, Galis ZS. Matrix metalloproteinase-2 and -9 differentially regulate smooth muscle cell migration and cell-mediated collagen organization. Arterioscler Thromb Vasc Biol. 2004;24:54–60.
25. Rodriguez-Pla A, Bosch-Gil JA, Rossello-Urgell J, Huguet-Redecilla P, Stone JH, Vilardell-Tarres M. Metalloproteinase-2 and -9 in giant cell arteritis: involvement in vascular remodeling. Circulation. 2005;112:264–269.
26.
Stary HC, Chandler AB, Dinsmore RE, Fuster V, Glagov S, Insull W Jr, Rosenfeld ME, Schwartz CJ, Wagner WD, Wissler RW. A definition of advanced types of atherosclerotic lesions and a histological classification of atherosclerosis. A report from the Committee on Vascular Lesions of the Council on Arteriosclerosis. Circulation. 1995;92:1355–1374.
27. Higgins CL, Marvel SA, Morrisett JD. Quantification of calcification in atherosclerotic lesions. Arterioscler Thromb Vasc Biol. 2005;25:1567–1576.
28. Kolodgie FD, Gold HK, Burke AP, Fowler DR, Kruth HS, Weber DK, Farb A, Guerrero LJ, Hayase M, Kutys R, Narula J, Finn AV, Virmani R. Intraplaque hemorrhage and progression of coronary atheroma. N Engl J Med. 2003;349:2316–2325.
29. Power C, Henry S, Del Bigio MR, Larsen PH, Corbett D, Imai Y, Yong VW, Peeling J. Intracerebral hemorrhage induces macrophage activation and matrix metalloproteinases. Ann Neurol. 2003;53:731–742.
30. Bicknell CD, Peck D, Alkhamesi NA, Cowling MG, Clark MW, Goldin R, Foale R, Jenkins MP, Wolfe JH, Darzi AW, Cheshire NJ. Relationship of matrix metalloproteinases and macrophages to embolization during endoluminal carotid interventions. J Endovasc Ther. 2004;11:483–493.
31. Beaudeux JL, Giral P, Bruckert E, Bernard M, Foglietti MJ, Chapman MJ. Serum matrix metalloproteinase-3 and tissue inhibitor of metalloproteinases-1 as potential markers of carotid atherosclerosis in infraclinical hyperlipidemia. Atherosclerosis. 2003;169:139–146.
32. Sapienza P, di Marzo L, Borrelli V, Sterpetti AV, Mingoli A, Cresti S, Cavallaro A. Metalloproteinases and their inhibitors are markers of plaque instability. Surgery. 2005;137:355–363.
33. Bellosta S, Via D, Canavesi M, Pfister P, Fumagalli R, Paoletti R, Bernini F. HMG-CoA reductase inhibitors reduce MMP-9 secretion by macrophages. Arterioscler Thromb Vasc Biol. 1998;18:1671–1678.
Edinburgh Research Explorer

New visual technologies: shifting boundaries, shared moments

Citation for published version: Graham, C, Laurier, E, O'Brien, V & Rouncefield, M 2011, 'New visual technologies: shifting boundaries, shared moments', Visual Studies, vol. 26, no. 2, pp. 87-91. https://doi.org/10.1080/1472586X.2011.571883

Document Version: Peer reviewed version

Published In: Visual Studies

Publisher Rights Statement: This is an Author's Accepted Manuscript of an article published in Visual Studies, copyright Taylor & Francis (2011).
New Visual Technologies: Shifting Boundaries, Shared Moments

Authors: Connor Graham, Eric Laurier, Vincent O'Brien and Mark Rouncefield

Address for correspondence:
Eric Laurier
Institute of Geography and the Lived Environment
School of Geosciences
University of Edinburgh
Edinburgh, Midlothian, UK EH8 9XP
Eric.laurier@ed.ac.uk

This is the author's final draft or 'post-print' version as submitted for publication. The final version was published in Visual Studies, copyright of Taylor and Francis (2011).

Order of articles

Digital Photography Practice
1. Sarah Pink: Amateur Photographic Practice, Collective Representation and the Constitution of Place
2. Mikko Villi and Matteo Stocchetti: Visual Communication, Mediated Presence and the Politics of Space
3. Abigail Durrant, David Frohlich, Abigail Sellen and David Uzzell: The Secret Life of Teens: Online Versus Offline Photo Displays at Home
4. Nancy van House: Personal Photography, Digital Technologies, and the Uses of the Visual

New Visual Mobilities
5. Ron Herrema: The experience of photologging: global mechanisms and local interactions
6. Andrea Capstick: Travels with a Flipcam: bringing the community to people with dementia in a day care setting through visual technology
7.
Stephen Groening: Automobile TV: Shelter, Risk, and the Post-nuclear Family
8. Brian C Johnsrud: Putting the Pieces Together Again: Digital Photography and the Compulsion to Order Violence at Abu Ghraib

Visual technology has been, and continues to be, noticeably transformed in a number of important areas. As carriers (e.g. cellular networks, the Internet), as production technologies (e.g. digital camcorders and cameras, mobile phones), as display technologies (e.g. public displays, mobile phone projectors) and as services (e.g. Flickr, MMS, blogs), some of these developments are grossly observable. Other aspects of these changes are, perhaps, more subtle and nuanced. This special issue sets out to explore these transformations through eight papers addressing specific aspects of new visual technologies. The papers largely consider visual technologies as socio-technical things embedded and emplaced in the world.

This special issue was particularly motivated by two workshops, both in 2008. The first[1], held at Lancaster University in the UK, focused on (new) visual technologies and mobilities and solicited papers on visual technologies in family life and their role in creating and sustaining communities and tracing mobilities. The second[2], held at Microsoft Research, Cambridge, centred on the relationship between social interaction and what we termed "mundane technologies": "technologies and applications that are commonplace, which lots of people use" (Dourish et al., 2010) or "technologies whose novelty has worn off…which are now fully integrated into, and are an unremarkable part of, everyday life" (Michaels, 2003).

[1] http://mundanetechnologies.com/goings-on/workshop/lancaster/photos/
[2] http://mundanetechnologies.com/goings-on/workshop/cambridge/
Several of the papers at that workshop focused on new visual technologies such as mobile games (Coulton et al., 2008), digital photos (Graham and Rouncefield, 2008) and video (O'Brien et al., 2008), as well as mobile blogs (Jay, 2008) and situated digital photo displays (Taylor and Cheverst, 2008), emphasising what we have come to think of as 'shifting boundaries and shared moments'. Although a variety of new visual technologies are presented in these papers, it would be disingenuous not to comment on the frequency of papers addressing developments in digital photography – five papers out of the eight in this collection. The emphasis on photography marks out both its "widespread popularity" (Van House, this issue) and the fact that photographic practices themselves are evolving. However, beneath the apparent domination of digital photography lies a more complex picture involving different mobilities, services and socio-technical assemblies. We saw this kind of complexity at the workshop in Lancaster in: work involving the mashing up of vernacular digital photographs using positional data[3] to support events (Cheverst et al., 2008); the use of mobile blogs to analyze the effects of air pollution on children's journeys to and from school (Bamford et al., 2008); and the work of assembling and displaying digital home videos of pets[4] (Laurier, 2008). We also saw that such new visual technologies suggest 'shifting boundaries' through playing a role in being 'in between' events, people and places (Hulme and Truch, 2005). The overwhelming response to the call for papers for this special issue suggests that new visual technologies are a burgeoning area of research and that the mobilities they exhibit and sustain are of considerable interest to the Visual Studies community.
The papers presented here can be regarded as but a slice of a much broader field that also includes new visual methodologies such as participatory methodologies for visual activism and autovideography for the evaluation of learning. This field also encompasses a host of new visual technologies that we could not include in this issue: digital newspapers, 3-d mobile photos, image databases, 'virtual' exhibitions, digital video productions, images from surveillance cameras, self-captured video, machinima, remixed media, YouTube videos, virtual artefacts, instant replay video technologies, Twitter, Facebook and mashups. The submissions we received also connected new visual technologies to changes in journalism, activism, art practice and curation, video production, simply living the life of a young person and even online confessions. This is not a random list of methodologies, technologies and changes, but rather a brief summary of the diversity found in the submissions that we gratefully received.

Nippert-Eng (1996) draws our attention to the 'boundary work' that we do in the course of sculpting and maintaining differences between home and work. While home and work is a prime division in most people's lives, it is but one of many divisions that we can suggest when critically considering the connective possibilities of new visual technologies. They potentially traverse and redefine a series of boundaries once generated through distance, temporal disjuncture, cultural conventions and economic barriers. We considered in our call that this 'shifting', this 'traversal' and 'redefinition' was achieved through the sharing of visual media across geographical regions, temporal zones and cultural conventions. This, we suggested, had implications not only for how boundaries between individuals (e.g. friends) and groups (e.g. different households) are defined but also for how these boundaries are managed through the use of different forms of media.

[3] Locomash (http://www.locomash.com)
[4] http://web.me.com/eric.laurier/assembling/

We also proposed that such technologies support mobilities that can transform our experience of all kinds of things. Instances of this personal experience include, firstly,
We also proposed that such technologies support mobilities that can transform our experience of all kinds of things. Instances of this personal experience include, firstly, 3 Locomash (http://www.locomash.com) 4 http://web.me.com/eric.laurier/assembling/ significant events and moments in our lives though, for example, visual narratives portrayed in digital photographs on Flickr and, secondly, our experience of our own selves through, for example, snippets of video on YouTube. We suggested other instances include, thirdly, the maintenance of family life, responsibilities and obligations through the remote, asynchronous sharing of digital photos or bringing home the experience and impact of a particular event. Finally we considered the importance of visual activism, driven by a number of provocative vignettes that integrally involve visual technologies. The paper call then was an attempt to make sense of the visual productions of new technologies not only for researchers but also a series of other publics – from the anonymous mass audience to groups of activists to the intimate social world of a family room – examining the role of particular visual media in and through time in particular settings with a particular focus on how they participate in and construct people’s personal and collective material lives. Thus the concern in this theme issue is then both individual and intimate, as well as social and community-driven. For the former, this entails regarding these media in the trajectory of people’s biographies (Strauss, 1993) and for the latter it involves regarding these media as part of the fabric of particular “social worlds” (Becker, 1982; Strauss, 1978) and (virtual) communities (Mynatt et al., 1998; Rheingold, 2000). We see this attention to visual narratives and the development and portrayal in the papers presented here (e.g. Herrema, Durrant et al., this issue). We also see the role of new visual technologies in helping to constitute family life (e.g. 
Groening, this issue) and in representing and contesting traumatic events (e.g. Johnsrud, this issue). In addition, we see the different roles new visual technologies can play, from the intimate to the public (van House), in making certain views and viewpoints visible (Capstick, this issue) and in managing social interaction (Villi and Stocchetti, this issue). Broadly, in the papers presented here we are alerted to a new series of dichotomies that we think are pertinent to new visual technologies, dichotomies that need to be continually revised and revisited: personal and public (Pink, this issue); presence and absence (Villi, this issue); online and offline (Durrant et al., this issue); physical and digital (Van House, this issue); representation and expression (van House, this issue); amateur and professional (Herrema, this issue); the vernacular and the artistic (Herrema, this issue); isolation and engagement (Capstick, this issue); peace and conflict (Groening, this issue); authoring and re-authoring (Johnsrud, this issue); data and metadata (Johnsrud, this issue). These are but some of the shades of difference that the papers make visible for us. The submissions also illuminate a whole series of different boundaries: from those we have already considered, such as social and technical (Herrema, this issue), to those we have just conceived, such as the remembered and the forgotten (Johnsrud, this issue), to the barely considered, such as having capacity and incapacity (Capstick, this issue). Then again, as Pink (drawing on Ingold (2007) in this issue) suggests, perhaps we over-emphasize or mis-construe the nature of boundaries when it is the lines between things[5] and the mesh of connections that are significant. The order of the papers in this special issue is not accidental and, to some extent, demonstrates the implications of contemporary and emerging visual technologies for families, social groups, professions and institutions.
The first four articles centre on the seemingly ubiquitous practice of amateur (digital) photography. This clutch of articles explicitly considers issues with, firstly, circulating digital photos and urban identities and, secondly, photo messaging and presence/absence. They, thirdly, examine photo displays and self-representation and, finally, personal photography and distributed networks of interaction. The next two articles describe personal journeys and mark a shift towards specialist uses in particular communities – artistic and care respectively. The last two articles relate new visual technologies – automobile television and digital photographs – to issues of conflict. Rather than summarise these articles any further at this point, we will move on to a series of observations regarding new visual technologies that these articles have supported.

5 Ingold (2000: 5) describes how a "thing" originally meant "a gathering of people, and a place where they would meet to resolve their affairs".

Visual technologies are associated with a breed of new information and communication technologies variously described as 'ubiquitous' and 'pervasive' in computing and as 'reaching in' and 'exploding' in human-centred design (e.g. Weiser, 1991; Grudin, 1990; Bowers and Rodden, 1993). In classic works they have been variously framed as interacting with publics, where their sensual form and susceptibility to circulation is central to their mass comprehensibility (McLuhan and Fiore, 1967). As tempting as it is to embrace this apparently global nature of the visual, new visual technologies are profoundly social (Bolter and Grusin, 2000): we suggest that the articles in this collection show that such technologies can only be fully understood through careful consideration of the site of production of the image, the site of the image itself and the site of audiencing (Rose, 2007).
In several of the papers here the issue of audiencing is brought to the fore through the consideration of very particular groups: from teens (Durrant et al.) to the 'anonymous' members of photo-sharing communities (Herrema). The reach of distributive technologies also raises questions about being between audiences and how visual materials are used to construct, reconstruct and deconstruct cultural identities. Many of the papers also affirm the importance of the malleable form of digital media (Johnsrud). Perhaps unsurprisingly, the importance of Urry's (2004) mobilities for understanding these new technologies emerges – physical and communicative in particular – as does the importance of these media's different materialities as well as their content (Edwards and Hart, 2004). Indeed, we see how this circulation of visual material occurs via new commercial services – digital photos via Google Talk, Facebook, MySpace (Durrant et al.), Flickr (Herrema) and MMS (Villi and Stocchetti) – as well as by more 'traditional' means, such as carnivals and exhibitions (Pink). We also observe how, with these circulations, personal transformations can occur, e.g. from a casual photographer to a photographic artist (Herrema). What we wanted to depict in this special issue, through a variety of papers deploying a variety of new technologies, was how these technologies might contribute, more generally, to the changing field of Visual Sociology and Visual Studies. The selection of articles in this special issue encompasses a range of different theoretical perspectives and traditions – from psychoanalytic theory (Johnsrud drawing on Freud (1920)) to theories of ritual communication (Villi and Stocchetti drawing on Carey (1989)) to actor-network theory (Van House drawing on Latour (2005)) – with this spirit of inclusiveness leading, we hope, to a rich and diverse collection of papers for the reader of Visual Studies.
Similarly, the papers in this collection present a range of methodological approaches: autoethnography, ethnography tied to notions of evolving place, and phenomenological enquiry through semi-structured interviews. The articles also broadly describe the use of the products of new visual technologies as digital life documents (Plummer, 2001), and we suggest that this methodological thread holds promise for further development. There are also a number of themes that, we hope, promise to generate new questions for the Visual Studies community about new visual technologies. New visual technologies:

- inhabit space and act for us through 'inhabiting machines' (Urry, 2004) such as mobile phones – the suggestion is that lingering 'selves', positioned elsewhere in time and space, are being propagated.
- can be conceived as being central to 'meshing together' and constituting place, as well as to the management of personal social space.
- are now critical to supporting and even comprising the achieved, sometimes messy connections that create collections of people and things.
- constitute audiences that are variously known and unknown, individual and collective, online and offline, calculated and ad hoc, regional and global.
- not only bring with them new freedoms concerning what they allow us to do and who and what they connect us to, but also discipline and regulate us.
- enliven, support, and even generate and reassemble histories and biographies, eroded memories and cultural memory.
- allow 'softer', 'phatic' (Malinowski, 1923) interactions supporting intimacy and closeness as well as contestation and conflict.
- are worked with and 'practiced' as part of our everyday lives – the experience of them can only be understood in these terms.

Through all these themes we see that new visual technologies are increasingly involved in constituting us, our interactions, our identities and our relationships.
Acknowledgements

The work on this editorial was supported by the Microsoft European Research Fellowship "Social Interaction and Mundane Technologies" and the Singapore Ministry of Education and National University of Singapore grant "Asian Biopoleis: Biotechnology and Biomedicine as Emergent Forms of Life and Practice". Enormous thanks go to our persistent, hard-working authors and reviewers. Our gratitude also goes to all the authors who submitted papers. We extend a special thanks to Professor Darren Newbury for his guidance and support throughout the special issue review process.

References

Bamford, W., Coulton, P., Moser, M., Whyatt, D., Davies, G. and Pooley, C. (2008). Using Mobile Phones to Reveal the Complexities of the School Journey. In Proceedings of the 10th International Conference on Human Computer Interaction with Mobile Devices and Services (MobileHCI '08). ACM, New York, NY, USA, 283-292.
Becker, H. S. (1982). Art Worlds. Berkeley: University of California Press.
Berger, J. (1972). Ways of Seeing. London: British Broadcasting Corporation and Penguin Books.
Bolter, J. D. and Grusin, R. A. (2000). Remediation: Understanding New Media. Cambridge: MIT Press.
Bowers, J. and Rodden, T. (1993). Exploding the interface: experiences of a CSCW network. In Proceedings of the INTERACT '93 and CHI '93 Conference on Human Factors in Computing Systems (CHI '93). ACM, New York, NY, USA, 255-262.
Carey, J. W. (1989). Communication as Culture: Essays on Media and Society. Boston: Unwin Hyman.
Cheverst, K., Coulton, P., Bamford, W. and Taylor, N. (2008). Supporting (Mobile) User Experience at a Rural Village 'Scarecrow Festival': A Formative Study of a Geolocated Photo Mashup Utilising a Situated Display. In Henze, H., Broll, G., Rukzio, E., Rohs, M. and Zimmermann, A. (eds), Mobile Interaction with the Real World. Proceedings of the 10th International Conference on Human Computer Interaction with Mobile Devices and Services (MobileHCI '08). ACM, New York, NY, USA, 563-565.
Coulton, P., Copic Pucihar, K. and Bamford, W. (2008). Mobile Social Gaming. Proceedings of the 2008 Workshop on Social Interaction and Mundane Technologies (SIMTech '08), Lancaster University. ISBN 978-1-86220-218-4.
Dourish, P., Graham, C., Randall, D. and Rouncefield, M. (2010). Theme issue on social interaction and mundane technologies. Personal and Ubiquitous Computing, 14(3), 171-180.
Edwards, E. and Hart, J. (2004). Introduction: Photographs as Objects. In Edwards, E. and Hart, J. (eds), Photographs Objects Histories: On the Materiality of Images. New York: Routledge.
Freud, S. ([1920] 1961). Beyond the Pleasure Principle. New York: W. W. Norton & Company.
Graham, C. and Rouncefield, M. (2008). Photo Practices and Family Values in Chinese Households. Proceedings of the 2008 Workshop on Social Interaction and Mundane Technologies (SIMTech '08), Lancaster University. ISBN 978-1-86220-218-4.
Grudin, J. (1990). The computer reaches out: the historical continuity of interface design. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '90). ACM, New York, NY, USA, 261-268.
Hall, E. T. (1969). The Hidden Dimension. New York: Doubleday.
Hulme, M. and Truch, A. (2005). The Role of Interspace in Sustaining Identity. In Glotz, P., Bertscht, S. and Locke, C. (eds), Thumb Culture: The Meaning of Mobile Phones for Society. New Brunswick: Transaction Publishers.
Ingold, T. (2007). Lines: A Brief History. London: Routledge.
Jay, T. (2008). Moblogging as Method: Exploring the Role of the Mobile Phone in Accessing Personal Action and Experience. Proceedings of the 2008 Workshop on Social Interaction and Mundane Technologies (SIMTech '08), Lancaster University. ISBN 978-1-86220-218-4.
Latour, B. (2005). Reassembling the Social: An Introduction to Actor-Network-Theory. Oxford: Oxford University Press.
Laurier, E. (2008). 'Deerhound in the Dark' – The Home Made Video. Copies available from the author.
Malinowski, B. (1923). The problem of meaning in primitive languages. Supplement to Ogden, C. and Richards, I., The Meaning of Meaning. London: Routledge and Kegan Paul, 146-152.
McLuhan, M. and Fiore, Q. (1967). The Medium is the Massage: An Inventory of Effects. New York: Random House.
Michael, M. (2003). Between the mundane and the exotic: time for a different sociotechnical stuff. Time and Society, 12(1), 127-143.
Mynatt, E. D., O'Day, V. L., Adler, A. and Ito, M. (1998). Network Communities: Something Old, Something New, Something Borrowed... Computer Supported Cooperative Work, 7(1-2), 123-156.
Nippert-Eng, C. E. (1996). Home and Work: Negotiating Boundaries Through Everyday Life. Chicago: University of Chicago Press.
O'Brien, V., Djusipov, K. and Esengulova, N. (2008). Embracing the Everyday: Reflections on Using Video and Photography in Health Research. Proceedings of the 2008 Workshop on Social Interaction and Mundane Technologies (SIMTech '08), Lancaster University. ISBN 978-1-86220-218-4.
Plummer, K. (2001). Documents of Life 2: An Invitation to a Critical Humanism. London: Sage Publications.
Rheingold, H. (2000). The Virtual Community: Homesteading on the Electronic Frontier. London: MIT Press.
Rose, G. (2007). Visual Methodologies: An Introduction to the Interpretation of Visual Images. London: Sage.
Strauss, A. (1978). A Social World Perspective. Studies in Symbolic Interaction, 1, 119-128.
Strauss, A. (1993). Continual Permutations of Action. New York: Aldine de Gruyter.
Taylor, N. and Cheverst, K. (2008). Supporting Village Community Through Connected Situated Displays. Proceedings of the 2008 Workshop on Social Interaction and Mundane Technologies (SIMTech '08), Lancaster University. ISBN 978-1-86220-218-4.
Urry, J. (2000). Sociology beyond Societies: Mobilities for the Twenty-First Century. London: Routledge.
Urry, J. (2002). The Tourist Gaze. Second Edition. London: Sage Publications.
Urry, J. (2004). Connections. Environment and Planning D: Society and Space, 22, 27-37.
Weiser, M. (1991). The computer for the 21st century. Scientific American, 265(3), 94-104.

New Visual Technologies: Shifting Boundaries, Shared Moments

Digital photography for assessing vegetation phenology in two contrasting northern ecosystems

Maiju Linkosalmi (1), Mika Aurela (1), Juha-Pekka Tuovinen (1), Mikko Peltoniemi (2), Cemal M. Tanis (1), Ali N. Arslan (1), Pasi Kolari (3), Tuula Aalto (1), Juuso Rainne (1), Tuomas Laurila (1)

(1) Finnish Meteorological Institute, Helsinki, Finland
(2) Natural Resources Institute Finland (LUKE), Vantaa, Finland
(3) Faculty of Biosciences, University of Helsinki, Helsinki, Finland

Correspondence to: Maiju Linkosalmi (Maiju.linkosalmi@fmi.fi)

Abstract. Digital repeat photography has become a widely used tool for assessing the annual course of vegetation phenology of different ecosystems. A greenness measure derived from digital images potentially provides an inexpensive and powerful means to analyze the annual cycle of ecosystem phenology. By using the Green Chromatic Coordinate (GCC), we examined the feasibility of digital repeat photography for assessing vegetation phenology in two contrasting high-latitude ecosystems. While the seasonal changes in GCC are more obvious for the ecosystem that is dominated by annual plants (open wetland), clear seasonal patterns were also observed for the evergreen ecosystem (coniferous forest). Limited solar radiation restricts the use of images during the night and in wintertime, for which time windows were determined based on images of a grey reference plate. The variability in cloudiness had only a minor effect on GCC, and GCC did not depend on the sun angle and direction either.
The GCC of the wetland developed in tandem with the daily photosynthetic capacity estimated from the atmosphere-ecosystem flux measurements. At the forest site, the seasonal GCC cycle correlated well with the flux data in 2015, but there were some temporary deviations in 2014. The year-to-year differences were most likely generated by meteorological conditions, especially the differences in temperature. In addition to depicting the seasonal course of ecosystem functioning, GCC was shown to respond to physiological changes on a daily time scale. It seems that our northern sites, with a short and pronounced growing season, are especially well suited to the monitoring of phenological variations with digital images.

Geosci. Instrum. Method. Data Syst. Discuss., doi:10.5194/gi-2015-34, 2016. Manuscript under review for journal Geosci. Instrum. Method. Data Syst. Published: 11 March 2016. © Author(s) 2016. CC-BY 3.0 License.

1 Introduction

Phenology is an important factor in the ecology of ecosystems. The most distinctive phenomena comprising vegetation phenology are the changes in plant physiology, biomass and leaf area (Migliavacca et al., 2011; Sonnentag et al., 2011; 2012; Bauerle et al., 2012). In part, these changes drive the carbon cycle of ecosystems, and they have various feedbacks to the climate system through effects on surface albedo and aerodynamic roughness, and ecosystem–atmosphere exchanges of various gases (e.g. H2O, CO2 and volatile organic compounds). Besides leaf area, gas exchange is modulated by seasonal variations in stomatal conductance, photosynthesis and respiration (Richardson et al., 2013). Globally, these variations contribute to the fluctuations in the atmospheric CO2 concentration (Keeling et al., 1996). In the long term, possible trends in vegetation phenology can have a systematic effect on the mean CO2 level.
Phenology further plays a role in the competitive interactions, trophic dynamics, reproductive biology, primary production and nutrient cycling (Morisette et al., 2009). Phenological phenomena are largely controlled by abiotic factors such as temperature, water availability and day length (Bryant and Baird, 2003; Körner and Basler, 2010), and thus they are sensitive to climate change (Richardson et al., 2013; Rosenzweig et al., 2007; Migliavacca et al., 2012). Several studies have reported an advanced onset of the growing season during recent decades (Linkosalo et al., 2009; Delbart et al., 2008; Nordli et al., 2008; Pudas et al., 2008). An earlier onset of growth has been observed to play a significant role in the annual carbon budget of temperate and boreal forests, while lengthening autumns have a less clear effect (Goulden et al., 1996; Berninger, 1997; Black et al., 2000; Barr et al., 2007; Richardson et al., 2009). This can be explained by the rapid C accumulation that starts as soon as conditions turn favourable for photosynthesis and growth in spring, while the opposing effect, i.e. ecosystem respiration, becomes increasingly important in summer and autumn (White and Nemani, 2003; Dunn et al., 2007). In general, camera monitoring has become feasible with the development of advanced but inexpensive digital cameras that produce automated and continuous real-time data. It has been shown that simple time-lapse photography can facilitate the detection of vegetation phenophases and even the related variations in CO2 exchange (Wingate et al., 2015; Richardson et al., 2007; 2009). This provides new possibilities for the monitoring and modelling of ecosystem functioning, for the verification of remote sensing products, and for the analysis of ecosystem CO2 exchange fluxes and related balances.
Especially dynamic vegetation models and simulations of the C cycle could be improved by more accurate information on the timing of budburst and leaf senescence, as simple empirical parameterisations, typically based on degree-days or the onset and offset dates of C uptake, are presently used as indicators of the growing season start and end (Baldocchi et al., 2005; Delpierre et al., 2009; Richardson et al., 2013). Digital cameras produce red-green-blue (RGB) colour channel information, from which different greenness indices can be calculated. For example, canopy greenness has been expressed in terms of the so-called Green Chromatic Coordinate (GCC), which has been related to vegetation activity and further to the carbon uptake of forests and peatlands (Richardson et al., 2007; 2009; Ahrends et al., 2009; Ide et al., 2011; Sonnentag et al., 2011; Peichl et al., 2015). In temperate forests, the main driver of gas exchange is leaf area, which changes rapidly in spring and autumn and is easy to detect. In evergreen conifer forests the leaf area changes are much smaller, so it is not obvious whether a similar relationship can be established for them. In a peatland environment, repeat images have been used to map the mean greenness of mire vegetation over a wide area (Peichl et al., 2015; Sonnentag et al., 2007). Other types of peatland ecosystems have a more heterogeneous vegetation cover, in which case it may be possible to simultaneously detect seasonality effects of different vegetation types. Thus digital repeat images of differentially developing vegetation types could potentially help decompose an integrated CO2 flux observation into components allocated to these vegetation types.
Comparisons of phenological observations made at contrasting sites are needed for highlighting the phenological features that can be extracted from camera monitoring at different sites (Wingate et al., 2015; Keenan et al., 2015; Toomey et al., 2015; Sonnentag et al., 2012). Differences in ecosystem characteristics may also affect the ideal setup of the cameras and the interpretation of the images, for example in conjunction with surface flux data. The objectives of this study were to evaluate digital repeat photography as a method for monitoring vegetation phenology and to investigate the differences in phenology between two adjacent but contrasting ecosystems (pine forest and wetland) located in northern Finland. In particular, we assess whether the data obtained from such cameras can support the interpretation of the micrometeorological measurements of CO2 fluxes conducted at the sites. We expect that on the wetland the clear phenological pattern observed in the CO2 fluxes coincides with the greenness variations detected with the cameras, i.e. with the regeneration of annual vegetation in spring and the senescence of annual plant parts in autumn. For the evergreen pine forest, we expect a weaker but observable correlation of CO2 fluxes with the canopy greenness data, because the greenness changes are subtle and only a fraction of the canopy regenerates annually (e.g. Wingate et al., 2015).

2 Materials and methods

2.1 Measurement sites

The study sites are located at Sodankylä in northern Finland, 100 km north of the Arctic Circle. They represent two contrasting ecosystems, a Scots pine (Pinus sylvestris) forest (67°21.708'N, 26°38.290'E, 179 m a.s.l.) and an open pristine wetland (67°22.117'N, 26°39.244'E, 180 m a.s.l.). The long-term (1981–2010) mean temperature and precipitation within the area are -0.4 °C and 527 mm, respectively (Pirinen et al., 2012).
The Scots pine stand is located on a fluvial sandy podzol and has a dominant tree height of 13 m and a tree density of 2100 ha⁻¹. The age of the trees within the camera scope is about 50 years. A single-sided leaf area index (LAI) of 1.2 has been estimated for the stand based on a forest inventory. The sparse ground vegetation consists of lichens (73%), mosses (12%) and ericaceous shrubs (15%). The wetland site is located on a mesotrophic fen that represents a typical northern aapa mire. The vegetation mainly consists of low species (Carex spp., Menyanthes trifoliata, Andromeda polifolia, Betula nana, Vaccinium oxycoccos, Sphagnum spp.). There are no tall trees, only some B. pubescens and a few isolated Scots pines. Different types of vegetation are located on the drier (strings) and wetter (flarks) parts of the wetland. Obviously, the physical surface structure (aerodynamic roughness length) differs between the pine forest and wetland sites. Also, the microclimate and the surface exchange of CO2 and sensible and latent heat differ due to the different vegetation and soil characteristics.

2.2 Cameras

The images analyzed in this study were taken automatically with StarDot Netcam SC 5 digital cameras. The setup included a weatherproof housing and connections to line current and a web server. The pictures were stored in the 8-bit JPEG format every 30 minutes with a 2592x1944 resolution and transferred automatically to a remote server. The daily collecting period varied according to the time of the year, roughly covering the daylight hours. At the forest site, the cameras were mounted on a tower at two different heights: 29 m ('canopy camera') and 13 m ('crown camera').
The viewing angle of the canopy camera was 45° from the horizontal plane, while the crown camera was positioned nearly horizontally. The images of the canopy camera covered parts of the forest canopy and some general landscape. The crown camera was focused on individual trees to detect their phenological development (e.g. bud burst, shoot growth, needle shedding) more closely. At the wetland site, the camera was adjusted at an angle of 45° on top of a 2-m pole. This camera mostly observed the ground vegetation, with some B. pubescens and sky also visible in the images. All cameras were placed facing north to minimise lens flare and maximise illumination of the canopy.

2.3 Grey reference plates

At the forest site, grey reference plates were employed to monitor the stability of the image colour channels. The plates were attached to the cameras in such a way that they are visible in every picture. The idea behind the reference plates is to detect possible day-to-day shifts in the colour balance due to changing weather conditions, such as radiation variations. The reference images should also not show any obvious seasonality (Petach et al., 2014). The grey colour of the plates is close to "true grey" in the sense that it has an equal mix of the red, green and blue colour components. To achieve this, the reference plates were painted with Tikkurila grey/1948 (RGB values: R=95, G=95, B=95).

2.4 Automatic image analysis

The digital images were analyzed with the FMIPROT software that has been designed as a toolbox for image processing for phenological and meteorological purposes. FMIPROT calculates the colour fractions for the red, green and blue channels. We use the Green Chromatic Coordinate (GCC) defined as

GCC = ΣG / (ΣR + ΣG + ΣB),   (1)

where ΣG, ΣR and ΣB are the sums of the green, red and blue channel indices, respectively, of all pixels comprising an image. Within each image, it is possible to define limited subareas, or Regions of Interest (ROIs).
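Equation (1) is simple to reproduce outside FMIPROT. The sketch below is our own illustration (not the actual FMIPROT code): it computes GCC for a whole RGB image array or for a rectangular ROI, with a made-up synthetic test image and ROI coordinates.

```python
import numpy as np

def gcc(image, roi=None):
    """Green Chromatic Coordinate, Eq. (1): sum(G) / (sum(R) + sum(G) + sum(B)).

    image: (H, W, 3) array of RGB values.
    roi:   optional (row0, row1, col0, col1) slice restricting the calculation
           to a Region of Interest.
    """
    if roi is not None:
        r0, r1, c0, c1 = roi
        image = image[r0:r1, c0:c1, :]
    # Per-channel sums over all pixels: [sum R, sum G, sum B]
    sums = image.reshape(-1, 3).sum(axis=0, dtype=np.float64)
    return sums[1] / sums.sum()

# Synthetic 4x4 test image: pure green patch in the top-left 2x2 corner,
# neutral grey (R = G = B = 95, the reference plate value) elsewhere.
img = np.full((4, 4, 3), 95, dtype=np.uint8)
img[:2, :2] = (0, 255, 0)

print(round(gcc(img), 4))                    # whole image
print(round(gcc(img, roi=(0, 2, 0, 2)), 4))  # green ROI -> 1.0
print(round(gcc(img, roi=(2, 4, 2, 4)), 4))  # grey ROI -> ~0.3333
```

Passing several ROI tuples in turn reproduces the paper's option of analyzing multiple subareas of one image simultaneously.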
The ROI feature of FMIPROT makes it possible to limit the GCC calculation to an area that represents a homogeneous vegetation area. It also provides an option for analyzing several subareas within the image simultaneously.

2.5 CO2 flux measurements

The ecosystem–atmosphere CO2 exchange was measured at both study sites by the micrometeorological eddy covariance (EC) method. The EC measurements provide continuous data on the CO2 fluxes averaged on an ecosystem scale. The vertical CO2 flux is obtained as the covariance of the high-frequency (10 Hz) fluctuations of the vertical wind speed and the CO2 mixing ratio (Baldocchi, 2003). At both sites, the EC measurement systems consisted of a USA-1 (METEK GmbH, Elmshorn, Germany) three-axis sonic anemometer/thermometer and a closed-path LI-7000 (Li-Cor, Inc., Lincoln, NE, USA) CO2/H2O gas analyzer. The measurement systems and the data processing procedures have been presented in detail by Aurela et al. (2009). The CO2 fluxes obtained from the EC measurements represent the net ecosystem exchange (NEE) of CO2, which is the sum of the gross photosynthetic production (GPP) by plants and a respiration term that includes both the autotrophic respiration by plants and the heterotrophic respiration by microbes. GPP is typically derived from the NEE data by using a dedicated flux partitioning technique, for example based on nonlinear regressions with the Photosynthetic Photon Flux Density (PPFD) and air temperature as predictors (Reichstein, 2005). Instead of performing such a partitioning, we determined the daily GPP in terms of the gross photosynthesis index (GPI); for details, see Aurela et al. (2001), where a similar index was termed 'PI'. GPI indicates the maximal photosynthetic activity in optimal radiation conditions. It is obtained by calculating the
It is obtained by calculating the 15 differences of the daily averages of the daytime (Photosynthetic Photon Flux Density (PPFD) > 600 µmol m -2 s -1 ) and night- time (PPFD < 20 µmol m -2 s -1 ) NEE. While the resulting GPI does not exactly scale with GPP, due to diurnal differences in respiration, it provides a useful measure especially for depicting the seasonal GPP cycle, but it also responds to short-term variations in air temperature and humidity (Aurela et al., 2001). 2.6 Meteorological measurements 20 An extensive set of supporting meteorological variables was measured at both measurement sites, including air temper ature and humidity, various soil parameters (temperature, humidity, soil heat flux and water table level) and different radiation components (incoming and outgoing shortwave (SW) radiation, PPFD and net radiation). Here we used the temperature data measured at 3 m height on the wetland and at 8 m at the forest site. From the SW radiation measurements we calculated the surface albedo as the proportion of incident radiation that is reflected back to the atmosphere by the underlying surface. In 25 addition, cloudiness data were available from the nearby observatory. Geosci. Instrum. Method. Data Syst. Discuss., doi:10.5194/gi-2015-34, 2016 Manuscript under review for journal Geosci. Instrum. Method. Data Syst. Published: 11 March 2016 c© Author(s) 2016. CC-BY 3.0 License. 6 3 Results and discussion 3.1 Testing the setup Even though GCC is derived from the images in a systematic way by employing the FMIPROT software, it is necessary to assess the effect that the environmental conditions may have on GCC. This assessment included the determination of both the daily and seasonal time windows for data screening, evaluation of cloudiness effects and considerations of ROI selection. 
3.1.1 Optimal time window for image analysis

In order to determine the periods when low light levels significantly affect the GCC observation, the mean diurnal GCC cycles during different months were calculated for the reference plate placed within the tree crown (Fig. 1). These cycles show that from March to October the mean 30-min GCC values remained stable from 9.00 to 17.00 (local winter time, UTC+2). During the other months, the stable period was markedly shorter. In winter, solar radiation is low throughout the day (Fig. 2), which was observed as a decreased GCC level even at noon; in addition, the variance of GCC increased notably. Similarly, the nocturnal data show a large variance throughout the year, although at these latitudes, i.e. north of the Arctic Circle, it never gets completely dark during the summer months. To further illustrate the annual cycle of the reference plate GCC, Fig. 3 shows the mean daytime GCC over a 17-month period, with the daytime period here limited to 11.00–15.00. The reduction in the mean GCC occurred from November to February, during which period the variance also increased. This degradation of the signal in insufficient light conditions must be taken into account if GCC data are to be used for studying the effect of snow cover, and further investigated if other measures of greenness are used. However, as the present study aims to observe the phenological development of ecosystems, we limited the study period to 10 February – 31 October (Fig. 3). During this period, the reference GCC values were stable, varying between 0.332 and 0.339 (with an average of 0.336). The maximum standard deviation for the period was 0.0036.

3.1.2 Effect of cloudiness and sun angle

The influence of cloudiness on GCC was estimated from the data collected in July 2014. This particular month was selected for the test because July represents the lushest growing season, and in July 2014 sunny and cloudy days were equally frequent.
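The daily (11.00–15.00) and seasonal (10 February – 31 October) screening windows lend themselves to a simple timestamp filter. The sketch below is a hypothetical implementation of such screening with made-up sample observations; the window limits are the values reported above, everything else is our illustration.

```python
from datetime import datetime

DAY_START, DAY_END = 11, 15                   # accept 11.00-15.00 local time
SEASON_START, SEASON_END = (2, 10), (10, 31)  # accept 10 Feb - 31 Oct

def usable(ts):
    """True if an image timestamp falls within both screening windows."""
    return (DAY_START <= ts.hour < DAY_END
            and SEASON_START <= (ts.month, ts.day) <= SEASON_END)

def daily_mean_gcc(samples):
    """samples: iterable of (datetime, GCC) pairs from 30-min images.
    Returns {date: mean GCC} using only the screened images."""
    acc = {}
    for ts, g in samples:
        if usable(ts):
            acc.setdefault(ts.date(), []).append(g)
    return {d: sum(v) / len(v) for d, v in acc.items()}

obs = [
    (datetime(2014, 7, 1, 12, 0), 0.337),   # accepted
    (datetime(2014, 7, 1, 14, 30), 0.339),  # accepted
    (datetime(2014, 7, 1, 22, 0), 0.310),   # rejected: outside 11-15
    (datetime(2014, 1, 5, 12, 0), 0.320),   # rejected: winter
]
print(daily_mean_gcc(obs))
```

The tuple comparison `(2, 10) <= (month, day) <= (10, 31)` handles the mid-month season boundaries without special-casing February or October.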
Based on the observations of cloudiness (CL, ranging from clear sky with CL = 0 to completely cloudy conditions with CL = 8), the images were pooled into two contrasting cloudiness groups representing sunny (CL = 0–1) and cloudy (CL = 7–8) conditions. During the daily period selected above (11.00–15.00), the differences in the mean GCC between sunny and cloudy conditions were statistically insignificant (Mann-Whitney U test) (Fig. 4). The mean GCC difference between the cloudy and sunny groups was 0.0014 and 0.0011 for the fen and forest, respectively. Sonnentag et al. (2012) found an equivalently small, though in part statistically significant, difference between the diurnal GCC cycles of sunny and overcast situations for their deciduous and coniferous forests. The dependence of GCC on the sun angle with respect to the ROI was also estimated from these data. The difference between the mean minimum and maximum GCC within the daytime window was 0.0030 and 0.0020 for sunny and cloudy cases, respectively. This is less than 5% of the seasonal amplitude of the GCC curve (0.069 between May and July) associated with the phenological greening of the fen. At the forest site, the corresponding values were 0.0022 (sunny) and 0.0012 (cloudy) and, despite the lower annual amplitude (0.024 between May and July), the difference was less than 10% of the seasonal variation.

3.1.3 Selection of the Region of Interest

The sensitivity of the GCC values to the selection of a subarea within an image, i.e. a region of interest, was tested by comparing the GCC calculated for different vegetation patches.
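The sunny/cloudy comparison rests on the Mann-Whitney U test. A minimal rank-based version is sketched below; in practice one would use a library routine such as scipy.stats.mannwhitneyu, and the GCC samples here are invented for illustration (the z approximation omits the tie correction).

```python
import math

def midranks(values):
    """1-based ranks, with tied values sharing the average rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of the 1-based positions i..j
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def mann_whitney(a, b):
    """U statistic and its normal-approximation z for two samples."""
    n1, n2 = len(a), len(b)
    ranks = midranks(list(a) + list(b))
    r1 = sum(ranks[:n1])                  # rank sum of the first sample
    u1 = r1 - n1 * (n1 + 1) / 2
    u = min(u1, n1 * n2 - u1)
    z = (u - n1 * n2 / 2) / math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    return u, z

# Toy daytime GCC samples for the two cloudiness groups (values invented).
sunny = [0.3605, 0.3611, 0.3598, 0.3609]
cloudy = [0.3612, 0.3601, 0.3607, 0.3615]
u, z = mann_whitney(sunny, cloudy)
print(u, round(z, 3))  # |z| well below 1.96: no significant difference
```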
In particular, we wanted to examine, on the one hand, whether the forest images are homogeneous and thus insensitive to the ROI definition and, on the other hand, whether the wetland images provide an opportunity to simultaneously observe various microecosystems incorporated into a single image.

3.1.3.1 Wetland site

GCC was calculated separately for four different, clearly identifiable vegetation types at the wetland site. These vegetation types were dominated by (1) bog-rosemary (Andromeda polifolia) and other shrubs, (2) sedges (Carex spp.) and Sphagnum mosses, (3) big-leafed bogbean (Menyanthes trifoliata) and (4) downy birch (Betula pubescens) (Fig. 5a). The first three ROIs also include other ground vegetation, while the fourth ROI is limited to the birch canopy. The GCC values were also analyzed from a larger area that includes the first three vegetation types (Fig. 5b). The GCC values of the ROIs defined according to vegetation types showed significant differences in the seasonal cycle, both in the timing of the major changes in spring and fall and in the magnitude of the maximum GCC (Fig. 6). For example, downy birch had the fastest growth onset, while the big-leafed bogbean had the largest growing-season maximum. These features suggest that the parallel ROIs should be investigated further in conjunction with small-scale (chamber-based) flux data. For further analysis, especially for comparison with our ecosystem-scale CO2 flux data, we chose to use the larger area combining three vegetation types (Fig. 5b).

3.1.3.2 Forest site

The forest site had two cameras, one zoomed to the crown of a pine tree (Fig. 7) and the other providing a general view of the canopy (Fig. 8). From the general canopy image, we subjectively selected three separate ROIs with the aim of defining similar homogeneous areas of forest canopy (Fig. 8). The daily mean GCC values of the different canopy ROIs remained very similar throughout the time series (Fig. 9).
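Computing GCC separately for each ROI amounts to averaging the green chromatic coordinate, G/(R + G + B), over the pixels inside each mask. The sketch below uses rectangular ROIs and a tiny synthetic image for simplicity; the irregular ROI outlines in Figs. 5–8 would instead use a per-pixel mask:

```python
def roi_gcc(image, roi):
    """Mean GCC = G / (R + G + B) over a rectangular ROI given as
    (row0, row1, col0, col1), end-exclusive; `image` is a nested
    list of (R, G, B) digital numbers."""
    r0, r1, c0, c1 = roi
    vals = [g / (r + g + b)
            for row in image[r0:r1]
            for (r, g, b) in row[c0:c1]]
    return sum(vals) / len(vals)

# Tiny synthetic 2x2 image: the top row is "greener" than the bottom row
image = [
    [(60, 90, 50), (60, 90, 50)],
    [(80, 80, 80), (80, 80, 80)],
]
top = roi_gcc(image, (0, 1, 0, 2))      # 90/200 = 0.45
bottom = roi_gcc(image, (1, 2, 0, 2))   # 80/240 ≈ 0.333
```

Defining several such ROIs on the same image yields the parallel per-vegetation-type time series compared in Fig. 6.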
The GCC values determined from the crown images differed from those from the camera with a general canopy view, most likely because the cameras had different viewing angles and distances to the object. The contribution of the ground is mixed with the canopy signal, which partially explains why the GCC values in the distant canopy camera images are lower than in the crown camera images. In winter, more snow is visible behind the canopy in the smaller-scale ROIs. Thus, we decided to use only the images from the crown camera in further analysis.

3.2 Phenological development

3.2.1 Wetland site

As previously observed by Peichl et al. (2015), at the wetland the growing season is clearly discernible in the development of the GCC data (Fig. 10). GCC started to increase as soon as the wetland vegetation started to grow. This growth onset took place in May after the snow melt, for which the ground albedo provides a sensitive indicator by quantifying the proportion of incident solar radiation that is reflected back to the atmosphere. However, the onset was preceded by a short period of reduced GCC values, which were associated with the moist and dark soil. The warm spells during late May and early June in 2014 induced a rapid emergence and growth of annual plants. Despite the later snow melt that year, by mid-June the growing season had developed much further than in 2015. This difference is clearly visible in the GCC as well as in the photosynthetic activity (GPI) data (Fig. 10). The cold period in late June 2014 halted this fast development, which is also well reflected in the GCC data, which show a stabilization and even a temporary reduction during that period.
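The snow-melt timing referred to above is read from the albedo record: albedo is the ratio of reflected to incident shortwave radiation, and it collapses when the reflective snow cover disappears. A sketch with hypothetical daily values and an illustrative threshold (neither taken from the study):

```python
def snow_melt_day(days, albedos, threshold=0.3):
    """Return the first day whose daily mean albedo drops below
    `threshold` (snow reflects most shortwave; wet dark ground little).
    The 0.3 threshold is illustrative, not a value from the study."""
    for day, albedo in zip(days, albedos):
        if albedo < threshold:
            return day
    return None

days = ["May 08", "May 09", "May 10", "May 11"]
albedos = [0.75, 0.62, 0.28, 0.15]      # hypothetical daily means
melt = snow_melt_day(days, albedos)     # "May 10"
```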
GPI shows a similar pattern, highlighting the coherence between the greenness observations and the actual photosynthetic processes. Following the earlier onset of the growing season in 2014, the peak of plant development was also observed earlier (Fig. 10). However, the magnitudes of the GCC maxima during the two years were very similar. From mid-August to mid-September, the rate of GCC decline was approximately the same in 2014 and 2015. In mid-September, the slightly higher GCC in 2014 can be attributed to a warm period. By the first sub-zero daily mean temperatures, the GCC had decreased to its minimum value, close to the springtime minimum, and by the snow fall in mid-October it had started increasing towards the level observed for the fully snow-covered conditions in spring. Previous observations suggest that especially the spring development of wetlands is well correlated with GCC (Peichl et al., 2015). Our results support these observations, showing a strong relationship between GCC and GPI (Fig. S1), with correlation coefficients of 0.90 and 0.92 for the snow-free periods in 2014 and 2015, respectively. Especially during the springtime, the match between the GCC and GPI time series was remarkably close during both years, while in the autumn of 2014 GCC lagged slightly behind GPI.

3.2.2 Forest site

Due to the closeness of the measurement sites, the meteorological conditions in the forest were similar to those observed at the wetland (Figs. 10 and 11). However, the onset of photosynthetic activity differed slightly in the beginning of the growing season: the warm days of early May 2015 were not observed at the wetland as a GPI increase, due to the absence of annual vegetation right after the snow melt, while the photosynthesis of boreal trees is triggered as soon as the temperature reaches a
sufficient level. Thus, the growing season in the forest started earlier in 2015 than in 2014, while that was not the case at the wetland. Nevertheless, the warm period in late May – early June 2014 also enhanced the forest growth, and by mid-June both GCC and GPI had surpassed the corresponding levels in 2015. The cold period in late June 2014 was again observed as reduced CO2 uptake and an even clearer reduction in GCC than at the wetland. Although GCC is generally well correlated with the gross ecosystem photosynthesis during the start of the growing season (Peichl et al., 2015; Toomey et al., 2015), for evergreen needleleaf forests such a correlation has been reported to be weaker (Toomey et al., 2015; Wingate et al., 2015). In our data, however, the simultaneous development of GCC and photosynthesis was evident during the year with spring data available (Fig. S1). Similarly to the wetland, the maximum GCC level at the forest site did not differ between 2014 and 2015, but this level was reached slightly earlier in 2014. This was probably due to the higher temperatures during the first part of the growing season. During both years, GCC started decreasing at the same time, i.e. at the end of July. This was slightly earlier than the start of the senescence detected visually (Phase 4 in Fig. 11). In 2014 there was a clear phase difference between GCC and GPI, the latter of which stayed at the maximum level until the end of August. During both years, the photosynthetic activity continued until the end of August, but an interannual comparison is not possible here owing to the missing CO2 data in 2015. In both years GCC decreased to the wintertime level at the beginning of October, at the same time as the daily mean temperature dropped below 0 °C.
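The GCC–GPI agreement quantified above (correlation coefficients of 0.90–0.92 at the wetland) can be checked with an ordinary Pearson correlation over the daily means. The two short series below are hypothetical and merely mimic a joint green-up and decline:

```python
def pearson_r(x, y):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5

gcc = [0.335, 0.350, 0.365, 0.380, 0.378, 0.360]  # hypothetical daily GCC
gpi = [0.05, 0.12, 0.22, 0.30, 0.29, 0.20]        # hypothetical GPI, mg CO2 m-2 s-1
r = pearson_r(gcc, gpi)   # close to 1: the series rise and fall together
```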
In slight contrast to our original hypothesis, the phenological development of the pine canopy could be accurately monitored with the GCC analysis, even though the GCC changes in the forest were subtler than those observed for the wetland vegetation. This was confirmed by subjectively identifying the phenological stages of the forest from the crown camera pictures (Fig. 11). In 2014, the cameras were installed too late to detect the bud burst, but the GCC time series was consistent with the observation that the buds started their growth in the beginning of June and remained brown until the beginning of July, when they started to green. The oldest needles in the trees started to brown around 20 August in both years, which is in accordance with the GCC and GPI data (Fig. 11).

4 Conclusions

The feasibility of digital repeat photography for assessing vegetation phenology was examined in two contrasting high-latitude ecosystems. While the seasonal changes in the greenness index GCC are more obvious for those ecosystems where the vegetation is renewed every year (here an open wetland), seasonal patterns can also be observed in evergreen ecosystems (here a coniferous forest). We examined the applicability and stability of the digital camera system by analyzing the images of a grey reference plate that was included in the camera view. Limited solar radiation restricts the use of images during the wintertime as well as during the night-time. At our sites in northern Finland, the daytime radiation levels were sufficient for image analysis from February to October. During that period, a diurnal window of 11.00–15.00 (local winter time) provides stable GCC data.
Outside these time windows, the mean GCC of the grey reference plate was markedly reduced and the noise in GCC increased. Our results show that the variability in cloudiness does not play a significant role in the GCC analysis. We pooled the reference plate images into two sets representing sunny and overcast conditions and found no significant difference between these data sets. Similarly, there was no difference between the data collected before and after noon, indicating that the sun angle had an insignificant influence on the GCC values. We observed a clear seasonal GCC cycle at both sites. At the wetland, GCC developed in tandem with the daily photosynthetic capacity estimated from the atmosphere-ecosystem flux measurements. At the forest site, the seasonal GCC cycle correlated well with the flux data in 2015 but showed some temporary differences in 2014. This difference between the years was most likely due to meteorological conditions, especially the differences in temperature. In addition to depicting the seasonal course of ecosystem functioning, GCC was shown to respond to physiological changes on a daily time scale. We observed that a two-week-long cold period at the end of June caused a simultaneous reduction in GCC and photosynthesis at both sites, and that GCC reflected even shorter-term variations. It seems that our northern sites, with a short and pronounced growing season, are especially well suited to the monitoring of phenological variations with digital images.

Acknowledgements

The installation of the cameras and the development of the image processing tool (FMIPROT) was done within the MONIMET project (LIFE12ENV/FI/000409), funded by the EU Life+ Programme (2013–2017) (http://monimet.fmi.fi).

References

Ahrends, H. E., Etzold, S., Kutsch, W.
L., Stoeckli, R., Bruegger, R., Jeanneret, F., Wanner, H., Buchmann, N., and Eugster, W.: Tree phenology and carbon dioxide fluxes: use of digital photography for process-based interpretation at the ecosystem scale, Clim. Res., 39, 261–274, doi: 10.3354/cr00811, 2009.
Aurela, M., Lohila, A., Tuovinen, J.-P., Hatakka, J., Riutta, T., and Laurila, T.: Carbon dioxide exchange on a northern boreal fen, Boreal Environ. Res., 14, 699–710, 2009.
Aurela, M., Tuovinen, J.-P., and Laurila, T.: Net CO2 exchange of a subarctic mountain birch ecosystem, Theor. Appl. Climatol., 70, 135–148, 2001.
Baldocchi, D.: Assessing the eddy covariance technique for evaluating carbon dioxide exchange rates of ecosystems: past, present and future, Glob. Change Biol., 9, 479–492, 2003.
Baldocchi, D. D., Black, T. A., Curtis, P. S., Falge, E., Fuentes, J. D., Granier, A., Gu, L., Knohl, A., Pilegaard, K., Schmid, H. P., Valentini, R., Wilson, K., Wofsy, S., Xu, L., and Yamamoto, S.: Predicting the onset of carbon uptake by deciduous forests with soil temperature and climate data: a synthesis of FLUXNET data, Int. J. Biometeorol., 49, 377–387, 2005.
Barr, A. G., Black, T. A., Hogg, E. H., Griffis, T. J., Morgenstern, K., Kljun, N., Theede, A., and Nesic, Z.: Climatic controls on the carbon and water balances of a boreal aspen forest, 1994–2003, Glob. Change Biol., 13, 561–576, 2007.
Bauerle, W. L., Oren, R., Way, D. A., Qian, S. S., Stoy, P. C., Thornton, P. E., Bowden, J. D., Hoffman, F. M., and Reynolds, R. F.: Photoperiodic regulation of the seasonal pattern of photosynthetic capacity and the implications for carbon cycling, P. Natl. Acad. Sci. USA, doi:10.1073/pnas.1119131109, 2012.
Berninger, F.: Effects of drought and phenology on GPP in Pinus sylvestris: a simulation study along a geographical gradient, Funct. Ecol., 11, 33–43, 1997.
Black, T. A., Chen, W. J., Barr, A. G., Arain, M. A., Chen, Z., Nesic, Z., Hogg, E. H., Neumann, H. H., and Yang, P. C.: Increased carbon sequestration by a boreal deciduous forest in years with a warm spring, Geophys. Res. Lett., 27, 1271–1274, 2000.
Bryant, R. G. and Baird, A. J.: The spectral behaviour of Sphagnum canopies under varying hydrological conditions, Geophys. Res. Lett., 30(3), 1134–1138, 2003.
Delbart, N., Picard, G., Le Toans, T., Kergoat, L., Quegan, S., Woodward, I., Dye, D., and Fedotova, V.: Spring phenology in boreal Eurasia over a nearly century time scale, Glob. Change Biol., 14, 603–614, 2008.
Delpierre, N., Dufrene, E., Soudani, K., Ulrich, E., Cecchini, S., Boe, J., and Francois, C.: Modelling interannual and spatial variability of leaf senescence for three deciduous tree species in France, Agr. Forest Meteorol., 149, 938–948, 2009.
Dunn, A. L., Barford, C. C., Wofsy, S. C., Goulden, M. L., and Daube, B. C.: A long-term record of carbon exchange in a boreal black spruce forest: means, responses to interannual variability, and decadal trends, Glob. Change Biol., 13, 577–590, 2007.
Goulden, M. L., Munger, J. W., Fan, S. M., Daube, B. C., and Wofsy, S. C.: Measurements of carbon sequestration by long-term eddy covariance: methods and a critical evaluation of accuracy, Glob. Change Biol., 2, 169–182, 1996.
Ide, R., Nakaji, T., Motohka, T., and Oguma, H.: Advantages of visible-band spectral remote sensing at both satellite and near-surface scales for monitoring the seasonal dynamics of GPP in a Japanese larch forest, Journal of Agricultural Meteorology, 67, 75–84, 2011.
Keeling, C. D., Chin, J. F. S., and Whorf, T. P.: Increased activity of northern vegetation inferred from atmospheric CO2 measurements, Nature, 382, 146–149, 1996.
Keenan, T.
F., Darby, B., Felts, E., Sonnentag, O., Friedl, M., Hufkens, K., O'Keefe, J. F., Klosterman, S., Munger, J. W., Toomey, M., and Richardson, A. D.: Tracking forest phenology and seasonal physiology using digital repeat photography: a critical assessment, Ecol. Appl., doi: 10.1890/13-0652.1, 2014.
Körner, C. and Basler, D.: Warming, photoperiods, and tree phenology response, Science, 329, 278–278, 2010.
Linkosalo, T., Häkkinen, R., Terhivuo, J., Tuomenvirta, H., and Hari, P.: The time series of flowering and leaf bud burst of boreal trees (1846–2005) support the direct temperature observations of climatic warming, Agr. Forest Meteorol., 149, 453–461, 2009.
Migliavacca, M., Galvagno, M., Cremonese, E., Rossini, M., Meroni, M., Sonnentag, O., Manca, G., Diotri, F., Busetto, L., Cescatti, A., Colombo, R., Fava, F., Morra di Cella, U., Pari, E., Siniscalco, C., and Richardson, A.: Using digital repeat photography and eddy covariance data to model grassland phenology and photosynthetic CO2 uptake, Agr. Forest Meteorol., 151, 1325–1337, 2011.
Migliavacca, M., Sonnentag, O., Keenan, T. F., Cescatti, A., O'Keefe, J., and Richardson, A. D.: On the uncertainty of phenological responses to climate change, and implications for a terrestrial biosphere model, Biogeosciences, 9, 2063–2083, 2012.
Morisette, J. T., Richardson, A. D., Knapp, A. K., Fisher, J. I., Graham, E. A., Abatzoglou, J., Wilson, B. E., Breshears, D. D., Henebry, G. M., Hanes, J. M., and Liang, L.: Tracking the rhythm of the seasons in the face of global change: Phenological research in the 21st century, Front. Ecol. Environ., 7, 253–260, 2009.
Nordli, O., Wielgolaski, F. E., Bakken, A. K., Hjeltnes, S.
H., Mage, F., Sivle, A., and Skre, O.: Regional trends for bud burst and flowering of woody plants in Norway as related to climate change, Int. J. Biometeorol., 52, 625–639, 2008.
Peichl, M., Sonnentag, O., and Nilsson, M. B.: Bringing Color into the Picture: Using Digital Repeat Photography to Investigate Phenology Controls of the Carbon Dioxide Exchange in a Boreal Mire, Ecosystems, 18(1), 115–131, doi: 10.1007/s10021-014-9815-z, 2015.
Petach, A., Toomey, M., Aubrecht, D., and Richardson, A. D.: Monitoring vegetation phenology using an infrared-enabled security camera, Agr. Forest Meteorol., 195, 143–151, 2014.
Pudas, E., Leppälä, M., Tolvanen, A., Poikolainen, J., Venalainen, A., and Kubin, E.: Trends in phenology of Betula pubescens across the boreal zone in Finland, Int. J. Biometeorol., 52, 251–259, 2008.
Reichstein, M., Falge, E., Baldocchi, D., Papale, D., Aubinet, M., Berbigier, P., Bernhofer, C., Buchmann, N., Gilmanov, T., Granier, A., Grünwald, T., Havránková, K., Ilvesniemi, H., Janous, D., Knohl, A., Laurila, T., Lohila, A., Loustau, D., Matteucci, G., Meyers, T., Miglietta, F., Ourcival, J.-M., Pumpanen, J., Rambal, S., Rotenberg, E., Sanz, M., Tenhunen, J., Seufert, G., Vaccari, F., Vesala, T., Yakir, D., and Valentini, R.: On the separation of net ecosystem exchange into assimilation and ecosystem respiration: review and improved algorithm, Glob. Change Biol., 11, 1424–1439, doi: 10.1111/j.1365-2486.2005.001002.x, 2005.
Richardson, A. D., Jenkins, J. P., Braswell, B. H., Hollinger, D. Y., Ollinger, S. V., and Smith, M.-L.: Use of digital webcam images to track spring green-up in a deciduous broadleaf forest, Oecologia, 152, 323–334, doi: 10.1007/s00442-006-0657-z, 2007.
Richardson, A. D., Hollinger, D. Y., Dail, D. B., Lee, J. T., Munger, J. W., and O'Keefe, J.: Influence of spring phenology on seasonal and annual carbon balance in two contrasting New England forests, Tree Physiol., 29, 321–331, 2009.
Richardson, A. D., Keenan, T. F., Migliavacca, M., Ryu, Y., Sonnentag, O., and Toomey, M.: Climate change, phenology, and phenological control of vegetation feedbacks to the climate system, Agr. Forest Meteorol., 169, 156–173, 2013.
Rosenzweig, C., Casassa, G., Karoly, D. J., Imeson, A., Liu, C., Menzel, A., Rawlins, S., Root, T. L., Seguin, B., and Tryjanowski, P.: Supplementary material to chapter 1: Assessment of observed changes and responses in natural and managed systems, in: Climate Change 2007: Impacts, Adaptation and Vulnerability. Contribution of Working Group II to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change, edited by: Parry, M. L., Canziani, O. F., Palutikof, J. P., van der Linden, P. J., and Hanson, C. E., Cambridge University Press, Cambridge, UK, 2007.
Sonnentag, O., Chen, J. M., Roberts, D. A., Talbot, J., Halligan, K. Q., and Govind, A.: Mapping tree and shrub leaf area indices in an ombrotrophic peatland through multiple endmember spectral unmixing, Remote Sens. Environ., 109, 342–360, 2007.
Sonnentag, O., Detto, M., Vargas, R., Ryu, Y., Runkle, B. R. K., Kelly, M., and Baldocchi, D. D.: Tracking the structural and functional development of a perennial pepperweed (Lepidium latifolium L.) infestation using a multi-year archive of webcam imagery and eddy covariance measurements, Agr. Forest Meteorol., 151, 916–926, 2011.
Sonnentag, O., Hufkens, K., Teshera-Sterne, C., Young, A. M., Friedl, M., Braswell, B. H., Milliman, T., O'Keefe, J., and Richardson, A. D.: Digital repeat photography for phenological research in forest ecosystems, Agr. Forest Meteorol., 152, 159–177, 2012.
Toomey, M., Friedl, M., Frolking, S., Hufkens, K., Klosterman, S., Sonnentag, O., Baldocchi, D., Bernacchi, C., Biraud, S. C., Bohrer, G., Brzostek, E., Burns, S.
P., Coursolle, C., Hollinger, D. Y., Margolis, H. A., McCaughey, H., Monson, R. K., Munger, J. W., Pallardy, S., Phillips, R. P., Torn, M. S., Wharton, S., Zeri, M., and Richardson, A. D.: Greenness indices from digital cameras predict the timing and seasonal dynamics of canopy-scale photosynthesis, Ecol. Appl., 25, 99–115, 2015.
White, M. A. and Nemani, R. R.: Canopy duration has little influence on annual carbon storage in the deciduous broadleaf forest, Glob. Change Biol., 9, 2003.
Wingate, L., Ogée, J., Cremonese, E., Filippa, G., Mizunuma, T., Migliavacca, M., Moisy, C., Wilkinson, M., Moureaux, C., Wohlfahrt, G., Hammerle, A., Hörtnagl, L., Gimeno, C., Porcar-Castell, A., Galvagno, M., Nakaji, T., Morison, J., Kolle, O., Knohl, A., Kutsch, W., Kolari, P., Nikinmaa, E., Ibrom, A., Gielen, B., Eugster, W., Balzarolo, M., Papale, D., Klumpp, K., Köstner, B., Grünwald, T., Joffre, R., Ourcival, J.-M., Hellstrom, M., Lindroth, A., George, C., Longdoz, B., Genty, B., Levula, J., Heinesch, B., Sprintsin, M., Yakir, D., Manise, T., Guyon, D., Ahrends, H., Plaza-Aguilar, A., Guan, J. H., and Grace, J.: Interpreting canopy development and physiology using a European phenology camera network at flux sites, Biogeosciences, 12, 5995–6015, doi:10.5194/bg-12-5995-2015, 2015.

Figure 1: Mean diurnal cycle of GCC (± standard deviation) of the tree crown reference plate during selected months.
Figure 2: Mean diurnal cycle of global radiation during selected months.

Figure 3: Annual cycle of the mean daytime (11.00–15.00) GCC (± standard deviation) of the tree crown reference plate. In the inset figure the same GCC values are plotted against global radiation. The white circles indicate the wintertime data that are influenced by an insufficient light level.

Figure 4: Mean (± standard deviation) diurnal cycle of GCC during sunny and cloudy conditions observed with a) the tree crown and b) the wetland cameras in July 2014.

Figure 5a: View from the wetland camera. The lines indicate the four Regions of Interest defined according to vegetation types.
Figure 5b: View from the wetland camera. The lines indicate the Region of Interest that includes all vegetation types except Betula pubescens.

Figure 6: Mean daytime (11.00–15.00) GCC of the different Regions of Interest (vegetation types) during the measurement period of May 2014 to October 2015. Wetland refers to the combined ROI shown in Fig. 5b. The grey circles indicate the wintertime data that are influenced by an insufficient light level.

Figure 7: View from the crown camera. The line indicates the Region of Interest.

Figure 8: View from the canopy camera. The lines indicate three Regions of Interest.
Figure 9: Mean daytime (11.00–15.00) GCC values of different ROIs from the two forest cameras during the measurement period of May 2014 to October 2015. The grey circles indicate the wintertime data that are influenced by an insufficient light level.

Figure 10: Mean daytime (11.00–15.00) GCC (of the ROI shown in Fig. 5b) together with the daily mean air temperature, gross photosynthesis index (GPI) and albedo in 2014–2015 at the wetland site. The triangles indicate the dates of snow melt (A) and snow appearance (B). The grey circles indicate the wintertime data that are influenced by an insufficient light level.

Figure 11: Mean daytime (11.00–15.00) GCC (crown camera) together with the daily mean air temperature (at 18 m), gross photosynthesis index (GPI) and albedo in 2014–2015 at the forest site. The triangles indicate the start dates of visually observed phenological phases (1 = Bud burst, 2 = Bud growth, 3 = Shoot growth, 4 = Old needle browning) and snow status (A = Snow melt, B = Snow appearance). The grey circles indicate the wintertime data that are influenced by an insufficient light level.
AN INTERVENTION STUDY COMPARING TRADITIONAL AND ERGONOMIC MICROSCOPES

Tamara James, Sabrina Lamar, Tracy Marker
Duke University and Medical Center, Durham, North Carolina

Linda Frederick
West Virginia University, Morgantown, West Virginia

In recent years, manufacturers have produced new microscopes with claims of "ergonomic design" for reduced musculoskeletal stress among users. To test this claim, we measured posture of the neck, back, and upper extremities, and the prevalence of musculoskeletal and visual symptoms among cytotechnologists while using traditional microscopes (baseline) and again after the introduction of ergonomically designed microscopes. Participants were five full-time cytotechnologists who used the microscope 6–8 hours per day and had reported discomfort while using traditional microscopes. Results showed the ergonomically designed microscopes were significantly more comfortable to use than the traditional microscopes for the neck and shoulders. Significant improvement in joint angles of the elbows (flexion) and shoulders (abduction) was noted with the ergonomically designed microscopes. These results suggest that ergonomically designed microscopes may reduce some of the risk factors for musculoskeletal disorders (MSDs) and benefit individuals who use microscopes for prolonged periods of time.
INTRODUCTION

In the past, microscope designers concentrated on improving optical performance rather than user comfort, resulting in microscopes with few ergonomic features. The fit between a person and a microscope is not typically an issue for short-term users but becomes more critical for long-term users (James, 1995). Long-term microscope users have historically reported a great deal of discomfort or pain. Putz-Anderson (1995) found that typical problems associated with microscope use are head inclinations of up to 45 degrees and upper back inclinations of as much as 30 degrees. From a biomechanics perspective, even slight inclines of the head, such as 30 degrees from vertical, can produce significant muscle contractions, fatigue, and pain (Chaffin and Anderson, 1991). In a regional survey of cytotechnologists, 70.5 percent of respondents reported experiencing neck, shoulder, or upper back symptoms (Kalavar & Hunting, 1996). Fifty-six percent had an increased prevalence of hand/wrist symptoms. Previously, James (1995) identified a group of cytotechnologists in a clinical laboratory within a major medical center in the southeastern United States who reported pain, fatigue, and discomfort after using microscopes for prolonged periods of time. Through a series of focus groups and onsite ergonomic evaluations, James identified a number of awkward and static postures the workers adopted while using traditional microscopes as possible risk factors contributing to their symptoms. Subsequently, the medical center was approached by a major microscope manufacturer, which agreed to provide microscopes for the purpose of conducting an intervention study. Designers of microscopes and related equipment are now beginning to consider user comfort, and new microscopes have ergonomic features such as telescoping and tilting eyepieces, eyepiece height adjustability, and well-placed manual controls.
Manufacturers contend these microscopes prevent awkward postures, a well-documented risk factor for musculoskeletal disorders (MSDs). The purpose of this study was to determine whether levels of musculoskeletal and visual discomfort, and awkward postures of the upper limbs and neck, changed after the introduction of an ergonomically designed, adjustable microscope in this same group of cytotechnologists.

METHODS
Participants
Five full-time cytotechnologists, four females and one male (ages 30-51 years), participated in this study. This work group used traditional, non-adjustable microscopes to screen slides for cancer and infectious conditions six to eight hours per day.

Dependent Variables
The dependent variables of this study were musculoskeletal discomfort and the following posture measurements:
- shoulder flexion/extension and abduction/adduction;
- elbow flexion/extension;
- wrist flexion/extension and ulnar/radial deviation;
- neck flexion (defined by the angle: acromion - C7 spinous process - tragus);
- head tilt (defined by the angle: canthus of eye - tragus - true horizontal).

Equipment
Nikon Eclipse E400 clinical microscopes with the following features were selected as the "ergonomically designed" microscope: tilting and telescopic head, optional riser tubes, one-hand focus control capability, and in-line focusing (Figure 1). The traditional microscope was a Zeiss (Model NT6V-10W) that had previously been used by the participants. Digital photographs of participants were taken with an Epson Photo PC600 digital camera. The camera was fastened to a tripod set at a fixed distance and height relative to each participant. The horizontal and anterior-posterior orientation of the camera was adjusted with the use of a level. A board marked with one-inch squares was placed vertically behind the participants, perpendicular to the camera.
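The neck flexion and head tilt measures defined above are planar angles between anatomical landmarks. As a hypothetical sketch only (the study measured these angles manually with goniometers and protractors on printed photographs; the function names and 2-D landmark coordinates are ours), the same angles could be computed from digitized marker positions:

```python
import math

def joint_angle(a, b, c):
    """Planar angle (degrees) at vertex b formed by points a-b-c,
    e.g. neck flexion: a = acromion, b = C7 spinous process, c = tragus."""
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    cos_t = (v1[0] * v2[0] + v1[1] * v2[1]) / (math.hypot(*v1) * math.hypot(*v2))
    # Clamp to [-1, 1] to guard against floating-point drift before acos.
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_t))))

def head_tilt(canthus, tragus):
    """Angle (degrees) between the canthus-tragus line and true horizontal."""
    return abs(math.degrees(math.atan2(canthus[1] - tragus[1],
                                       canthus[0] - tragus[0])))
```

For example, three markers at right angles give `joint_angle((0, 1), (0, 0), (1, 0))` = 90.0 degrees.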
Pieces of adhesive reflective tape, or 0.5-inch diameter wooden beads covered with reflective tape, were secured to the participants with tape at the following anatomical landmarks: ecto-canthus, tragus, C7 spinous process, and acromion process. Both large and small manual goniometers were used to record shoulder, elbow, and wrist posture. Participants were supplied with their own three-ring folders containing daily discomfort surveys.

Procedures
Measurements of posture and discomfort for the neck and upper limb were collected over a two-week period while participants used traditional microscopes. The traditional microscopes were then replaced by adjustable, ergonomically designed microscopes, which were fitted to participants by a Nikon, Inc. representative. Footrests were provided to participants who requested them, since some chairs were elevated during the "fitting" process. No other major changes took place within the work environments. After a two-week adjustment period with the new microscopes, the second phase of posture measurements was collected and daily discomfort surveys were re-administered for two weeks.

Figure 1: Microscope with ergonomic features including tilting and telescoping head, optional riser tubes, one-hand focus control, and in-line focusing. (Photo courtesy of Nikon, Inc.)

Discomfort surveys. Daily discomfort surveys (developed by the authors) were administered to participants for two weeks prior to implementing the ergonomically designed microscopes to obtain baseline data regarding the type, frequency, and severity of musculoskeletal symptoms for specific body parts and problems associated with vision. The traditional microscopes were replaced with the ergonomically designed microscopes and participants were given two weeks to adjust to them. Daily discomfort surveys and posture measurements were then repeated for a two-week period.

Posture data collection.
Postural data and measurements of key joint angles (described above) were collected for each participant using a manual goniometer twice per day during each two-week period, with methods previously described in the literature (Norkin & White, 1995; Ortiz, Marcus, Gerr, Jones & Cohen, 1997). Participants were photographed using a digital camera mounted on a tripod. Neck angle and head tilt were measured from the photographs using a manual goniometer.

Data Processing and Analysis
Musculoskeletal discomfort data for each body area were analyzed using a symptom factor. The symptom factor represents the product of the following values: number of symptoms (out of a possible 9), frequency of symptoms (on a 6-point scale), and severity of symptoms (on a 6-point scale). Posture data and symptom factors collected before and after introduction of the ergonomic microscope were compared using repeated-measures analyses of variance (ANOVA). The level of significance was set at .05.

RESULTS
For the traditional microscope, most participants experienced the greatest amount of discomfort in the neck and shoulders, lower back, arms, forearms and wrists. Results of the comfort surveys revealed that participants were significantly more comfortable in the neck and shoulder regions (p<.05) after introduction of the ergonomically designed microscopes. Figure 2 shows subtle postural differences for a participant using the traditional microscope and using the ergonomic microscope. Symptoms of eye fatigue and mid-back discomfort decreased after introduction of the ergonomic microscopes, although the differences were not statistically significant (p=.08 and p=.08, respectively). There were statistically significant differences in awkward joint angles with regard to right elbow flexion (p=.01) and right shoulder abduction (p=.01) while using the ergonomically designed microscopes.
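The symptom factor defined in the Data Processing and Analysis section is a simple product of three survey values. A minimal sketch (the function name is ours, and we assume the two 6-point scales run 1-6; the paper does not reproduce the survey form):

```python
def symptom_factor(n_symptoms, frequency, severity):
    """Symptom factor = number of symptoms x frequency x severity.

    n_symptoms: count of reported symptoms for a body area (0-9)
    frequency, severity: ratings on 6-point scales (assumed 1-6 here)
    """
    if not 0 <= n_symptoms <= 9:
        raise ValueError("number of symptoms must be between 0 and 9")
    return n_symptoms * frequency * severity
```

For example, a body area with 3 symptoms rated frequency 2 and severity 4 scores 3 * 2 * 4 = 24.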
There were also differences in right shoulder flexion and left shoulder abduction, although the differences were not statistically significant (p=.07 and p=.07, respectively).

Figure 2: Postural differences using traditional microscope (left) and ergonomic microscope (right).

DISCUSSION
The results from the discomfort surveys are consistent with the Kalavar and Hunting (1996) survey, indicating a mismatch between cytotechnologists and their work environment. Results also indicate the participants' comfort increased when they went from using traditional microscopes to ergonomic microscopes, particularly in the area of the neck and shoulders. One of the greatest limitations of this study was the small sample size, and thus statistical power was low. Had the sample size been larger, other statistically significant differences between the two microscopes may have emerged. The data collection methods were limited to written surveys, manual goniometry, and 2-D digital photography. The use of EMG to measure muscle activity may have provided more objective data. As with any survey of discomfort, there may be some reporting bias, since participants were aware of the purpose of the study. It must be noted that workstations in the laboratory were not adjustable and did not provide adequate leg clearance. However, in order to isolate the effects of a change in microscopes, other ergonomic improvements were postponed during this study. Combining the features of ergonomically designed microscopes with other ergonomic improvements such as proper seating, workflow, work organization, and workstation arrangement would likely improve individual comfort and ultimately worker productivity. Future studies should focus on larger populations of microscope users to determine whether significant improvements in discomfort and awkward postures occur while using ergonomically designed microscopes. Similar studies should be conducted to determine whether further equipment re-design is necessary.
The use of EMG data, particularly in a controlled laboratory setting, might add greater depth to this type of study. Future studies are needed to identify the impact that better-fitting equipment has on productivity and error rates of cytotechnologists. In terms of usability, the participants eagerly accepted the new microscopes and were able to use them with minimal training time.

CONCLUSIONS
By using ergonomically designed microscopes with multiple adjustments that allow a more customized fit, participants in a cytology laboratory were able to work in more neutral postures and had fewer symptoms of discomfort. It is speculated that this reduction in risk factors for MSDs and improvement in posture may decrease injury rates. Although this study had some limitations, the results could have an impact on the many work groups that use microscopes for prolonged periods of time. Future efforts should be aimed at incorporating additional ergonomic improvements (such as workstation arrangement, chair selection, and ergonomic training) to optimize the benefits of the ergonomically designed microscopes.

ACKNOWLEDGEMENTS
We would like to thank Nikon, Inc. for their continual assistance and for the use of the microscopes for this study.

REFERENCES
Chaffin, D., & Andersson, G. (1991). Occupational Biomechanics. New York: John Wiley & Sons.
James, T. (1995). An Ergonomic Approach to Modifying Microscope Design for Increased Comfort: A Case Study. In Proceedings of the Human Factors and Ergonomics Society 39th Annual Meeting (pp. 573-577). San Diego, CA: Human Factors and Ergonomics Society.
Kalavar, S.S., & Hunting, K.L. (1996). Musculoskeletal Symptoms Among Cytotechnologists. Laboratory Medicine, 27, 765-769.
Norkin, C.C., & White, D.J. (1995). Measurement of Joint Motion: A Guide to Goniometry, 2nd ed. Philadelphia: F.A. Davis.
Ortiz, D.J., Marcus, M., Gerr, F., Jones, W., & Cohen, S. (1997). Measurement variability in upper extremity posture among VDT users.
Applied Ergonomics, 28, 139-143.
Putz-Anderson, V. (1995, unpublished). "Microscope Use." Howard Hughes Medical Institute Environmental Health and Safety Conference, April 1995, Washington, D.C.

·Clinical Research·
Advantages of non-mydriatic and mydriatic digital fundus photography in fundus disease screening
Chen Li, Hao Xiao-Jun, Li Fei, Tao Yan-Ting, Cao Yi
Foundation item: Academic Leaders Training Project of Pudong New District, Shanghai (No. PWRd2015-07)
Author affiliation: Department of Ophthalmology, Punan Hospital of Pudong New District, Shanghai 200125, China
First author: Chen Li, female, chief physician; member of the Visual Rehabilitation Group of the Ophthalmology Committee, Shanghai Medical Association; vice-chair of the Ophthalmology Committee, Pudong New District Medical Association, Shanghai.
Corresponding author: Hao Xiao-Jun, male, bachelor's degree, attending physician; research interests: diabetic retinopathy, cataract, glaucoma. eyedrhao@163.com
Received: 2017-10-17; Revised: 2018-02-02

Evaluating two methods of digital photography in retinopathy screening
Li Chen, Xiao-Jun Hao, Fei Li, Yan-Ting Tao, Yi Cao
Foundation item: Academic Leaders Supporting Project in Pudong New District Shanghai (No.
PWRd2015-07)
Department of Ophthalmology, Punan Hospital of Pudong New District, Shanghai 200125, China
Correspondence to: Xiao-Jun Hao. Department of Ophthalmology, Punan Hospital of Pudong New District, Shanghai 200125, China. eyedrhao@163.com
Received: 2017-10-17   Accepted: 2018-02-02

Abstract
• AIM: To evaluate the advantages of non-mydriatic fundus photography (NMFCS) and mydriatic fundus photography (MFCS) as fundus screening and diagnostic methods, compared with the gold standard, fundus fluorescein angiography (FFA).
• METHODS: A total of 276 patients enrolled in chronic diabetes management archives across 4 subdistricts of Pudong New District, Shanghai, underwent diabetic retinopathy (DR) examination, including NMFCS, MFCS and FFA. These examinations were performed after visual acuity, slit-lamp and refraction tests, and were reported by professionals. Patients with suspected fundus disease were given specialist appointments for further treatment.
• RESULTS: A total of 1104 colour fundus images were obtained, of which 1056 (95.65%) could be analysed. Of the 552 NMFCS images, 408 were evaluable, 116 basically evaluable and 28 unusable; of the 552 MFCS images, 432 were evaluable, 100 basically evaluable and 20 unusable. The difference between NMFCS and MFCS was not significant (P>0.05). Compared with FFA, taking DR stage I as the cut-off, the specificity of NMFCS was 95.71% and its sensitivity 93.56%, versus 95.43% and 98.02% for MFCS; the difference between the two screening methods was not statistically significant (P>0.05). Taking DR stage II as the cut-off, the specificity of NMFCS was 95.35% and its sensitivity 93.44%, versus 95.81% and 98.36% for MFCS; again, the difference between the two screening methods was not statistically significant (P>0.05).
• CONCLUSION: Both NMFCS and MFCS can be used for fundus disease screening and diagnosis. NMFCS is simpler and faster, making it suitable for mass screening; MFCS provides more detailed reference information for follow-up of disease.
• KEYWORDS: screening; diabetic retinopathy; digital photography

DOI: 10.3980/j.issn.1672-5123.2018.3.28
Citation: Chen L, Hao XJ, Li F, et al. Evaluating two methods of digital photography in retinopathy screening. Guoji Yanke Zazhi (Int Eye Sci) 2018;18(3):524-527
Int Eye Sci, March 2018, Vol. 18, No. 3  http://ies.ijo.cn  Tel: 029-82245172, 85263940  Email: IJO.2000@163.com
0 Introduction
Fundus examination, an intuitive, non-invasive technique that gives insight into the vasculature of the whole body, is currently an important component of community health management for older adults. Digital fundus photography, which objectively records and stores the immediate status of a patient's fundus and can support remote consultation, has become increasingly popular in recent years. Fundus disease screening based on fundus photography in large populations must satisfy two requirements: (1) convenience for the target population, and (2) image quality sufficient to support valid diagnosis. Although non-mydriatic fundus photography is increasingly used in communities because of its convenience, the quality and clinical value of the images it acquires remain somewhat controversial. To assess the quality and reliability of fundus images acquired by non-mydriatic versus mydriatic photography, this study selected a diabetic retinopathy (DR) population and used FFA as the diagnostic gold standard to evaluate the value of the two techniques in mass screening.

1 Subjects and Methods
1.1 Subjects. Prospective study. From December 2015 to June 2017, 276 patients enrolled in chronic diabetes management archives in 4 subdistricts of Pudong New District, Shanghai, were selected: 151 males (302 eyes, 54.7%) and 125 females (250 eyes, 45.3%), aged 25-88 (mean 68±7.5) years.
1.1.1 Inclusion criteria: patients from the chronic diabetes management archives who could complete FFA examination and whose fundus images could be graded and evaluated.
1.1.2 Exclusion criteria: age under 18 years; pregnancy; physical unfitness; inability to complete FFA because of severe hepatic or renal disease, contrast allergy, etc.; pupils that could not be dilated, or dilation contraindicated by other primary disease; marked opacity of the refractive media, or cooperation so poor that fundus images could not be obtained.
1.2 Methods
1.2.1 Screening procedure. Staff tested visual acuity (international standard chart) and then refraction to obtain corrected visual acuity, administered a brief questionnaire (history of glaucoma, hypertension, diabetes and disease duration, etc.) and recorded basic information (name, sex, age, telephone number, etc.). A Canon CX-1 digital fundus camera was used to acquire, through the undilated pupil, a single 45° posterior-pole image of each eye including both the optic disc and the macula. The photography room was kept dim, avoiding direct sunlight. Subjects were usually examined in groups of 4-5; each person's right eye was photographed first and the left eye 1-2 min later. Both eyes were then dilated with 10 g/L compound tropicamide eye drops (1-2 instillations), and a single 45° posterior-pole image was acquired through the dilated pupil. FFA was then performed: after a negative fluorescein sodium allergy test, fluorescein sodium was injected at 10-20 mg/kg (3-5 mL over 4-5 s) and 7-9 fields were photographed, covering as much of the fundus as possible. Image acquisition and FFA were performed by trained ophthalmic technicians at our hospital; all grading was done by the same fundus disease specialist.
1.2.2 DR staging and image quality criteria. According to the 2014 DR staging criteria of the Fundus Disease Group of the Chinese Ophthalmological Society [1], a fundus with no visible lesion was graded stage 0; the remainder were divided into two types and six stages. Non-proliferative: stage I, microaneurysms at the posterior pole; stage II, haemorrhages, hard exudates, cotton-wool spots; stage III, ≥20 haemorrhages in all 4 quadrants, or venous beading in at least 2 quadrants, or intraretinal microvascular abnormalities in at least 1 quadrant. Proliferative: stage IV, neovascularisation with vitreous haemorrhage; stage V, neovascularisation with fibrous membrane proliferation; stage VI, complicated by tractional retinal detachment. Colour fundus photography standards were adapted from the ETDRS (Early Treatment Diabetic Retinopathy Study) operating guidelines, as shown in Table 1; images were taken strictly in the defined fields, in sharp focus, with moderate brightness and contrast.

Table 1  Image quality criteria
Grade                       Criteria
Evaluable                   central field: clear; peripheral field: clear
Basically evaluable         central field: 5/6 of field clear; peripheral field: 3/4-7/8 of field clear
Not evaluable (excluded)    central field: more than 1/6 of field blurred; peripheral field: more than 1/4 blurred

Statistical analysis: SPSS 20.0 was used. For evaluation of the screening methods, the sensitivity and specificity of each method were calculated. Ordinal data were compared with the Wilcoxon rank-sum test and count data between groups with the χ² test. P<0.05 was considered statistically significant.

2 Results
2.1 Comparison of image quality between screening methods. In total, 276 patients underwent fundus screening; both eyes of each were photographed without and with mydriasis, yielding 1104 colour fundus images, of which 1056 (95.65%) were usable. Of the non-mydriatic images, 408 were evaluable, 116 basically evaluable and 28 not evaluable; of the mydriatic images, 432 were evaluable, 100 basically evaluable and 20 not evaluable (Table 2). The Wilcoxon rank-sum test showed no statistically significant difference in image quality between the two methods (Z = -1.744, P = 0.081).

Table 2  Image quality of non-mydriatic versus mydriatic fundus images, n (%)
Method           Evaluable      Basically evaluable   Not evaluable
Non-mydriatic    408 (73.91)    116 (21.01)           28 (5.07)
Mydriatic        432 (78.26)    100 (18.12)           20 (3.62)

Table 3  DR stages detected by each screening method, eyes (%)
Method           Stage 0        I            II           III         IV          Total
Non-mydriatic    335 (63.93)    75 (14.31)   62 (11.83)   40 (7.63)   12 (2.29)   524 (100)
Mydriatic        334 (62.78)    78 (14.66)   63 (11.84)   42 (7.89)   15 (2.82)   532 (100)
FFA              350 (63.41)    80 (14.49)   65 (11.78)   44 (7.97)   13 (2.36)   552 (100)
Note: 28 non-mydriatic and 20 mydriatic images could not be evaluated, hence totals of 524 and 532 eyes.

Table 4  Diagnostic performance of the two screening methods, DR stage I as cut-off
Method           No DR (eyes)   DR (eyes)   Specificity (%)   Sensitivity (%)
Non-mydriatic    335            189         95.71             93.56
Mydriatic        334            198         95.43             98.02
FFA              350            202

Table 5  Diagnostic performance of the two screening methods, DR stage II as cut-off
Method           No DR (eyes)   DR (eyes)   Specificity (%)   Sensitivity (%)
Non-mydriatic    410            114         95.35             93.44
Mydriatic        412            120         95.81             98.36
FFA              430            122

All patients underwent FFA. Screening and staging results are shown in Table 3. With DR stage I as the cut-off (stage 0 = no DR; stages I-IV = DR), FFA diagnosed DR in 215 eyes (prevalence 38.95%), non-mydriatic photography in 210 eyes (38.04%) and mydriatic photography in 212 eyes (38.41%). With DR stage II as the cut-off (stages 0 and I = no DR; stages II-IV = DR), FFA diagnosed DR in 138 eyes (25.00%), non-mydriatic photography in 135 eyes (24.46%) and mydriatic photography in 138 eyes (25.00%).
2.2 Comparison of diagnostic performance. With DR stage I as the cut-off, compared with gold-standard FFA, the specificity (rate without false positives) of non-mydriatic photography for DR was 95.71% (335/350) and its sensitivity (rate without false negatives) 93.56% (189/202); mydriatic photography achieved a specificity of 95.43% (334/350) and a sensitivity of 98.02% (198/202). The difference between the two screening methods was not statistically significant (P>0.05) (Table 4). With DR stage II as the cut-off, the specificity of non-mydriatic photography was 95.35% (410/430) and its sensitivity 93.44% (114/122); mydriatic photography achieved 95.81% (412/430) and 98.36% (120/122). The difference was not statistically significant (P>0.05) (Table 5).

3 Discussion
Recording and following the course of fundus disease with digital fundus photography is widely accepted among ophthalmologists. With the development of community ophthalmology, fundus disease screening of community populations is increasingly common. Some investigators have taken non-mydriatic fundus cameras into communities to acquire images for remote diagnosis, citing advantages of portability, practicality, speed, no need for mydriasis and good patient compliance [2-4]
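The diagnostic specificity and sensitivity reported against FFA are simple ratios over the screening counts. A minimal sketch using the stage I cut-off figures from Table 4 (function and variable names are ours):

```python
def screening_performance(tp, fn, tn, fp):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity

# Table 4, DR stage I cut-off: FFA finds 202 DR eyes and 350 non-DR eyes;
# non-mydriatic photography agrees on 189 DR eyes and 335 non-DR eyes.
sens, spec = screening_performance(tp=189, fn=202 - 189, tn=335, fp=350 - 335)
print(round(sens * 100, 2), round(spec * 100, 2))  # 93.56 95.71
```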
; others have disputed this, arguing that with pupils smaller than 4 mm fundus lesions are difficult to diagnose, "microaneurysms" cannot be reliably distinguished from "haemorrhagic dots", and intraretinal capillary abnormalities and neovascularisation are easily missed [5-6].
We therefore designed a study comparing each of the two screening methods against gold-standard fluorescein angiography. To control experimental error, a single disease (DR) was chosen as the object of diagnosis, fundus photographs were acquired by formally trained and experienced technicians, and images were read by an experienced specialist.
Because the ultimate aim of screening is to detect DR patients without obvious visual impairment in time, and to refer those whose vision is about to be damaged to ophthalmic specialists, we deliberately designed a DR stage II comparison group. Stage II DR is a clinically progressive stage, and its detection indicates that the patient's vision is about to be impaired. Immediate referral to an ophthalmic specialist at this stage can not only control progression of the disease but also serve as a reliable milestone in standardised DR screening, providing a dependable record for further follow-up and treatment. This study therefore used DR stages I and II as cut-offs to compare the specificity, sensitivity and consistency of the two screening methods.
With DR stage I as the cut-off, compared with gold-standard FFA, the specificity and sensitivity of non-mydriatic photography for DR were 95.71% and 93.56%, and those of mydriatic photography 95.43% and 98.02%; the difference between methods was not statistically significant (P>0.05). With DR stage II as the cut-off, the corresponding figures were 95.35% and 93.44% for non-mydriatic photography and 95.81% and 98.36% for mydriatic photography; again, the difference was not statistically significant (P>0.05). Considering clinical value together with the speed and convenience of screening, the results of non-mydriatic and mydriatic fundus photography are highly consistent. For patients with stage II, stage III or higher DR, both photographic techniques have low detection rates for intraretinal microvascular abnormalities and neovascularisation, which can substantially affect staging. Therefore, although fundus photography is an important auxiliary tool for the clinical diagnosis and staging of DR, strict clinical staging and treatment ultimately still depend on FFA [7-8]. Both photographic techniques, and especially non-mydriatic photography, are better suited to screening.
Because China's medical environment, population density, personnel and conditions of access to care vary widely, screening methods and standards suited to national conditions are needed. Although mydriatic photography examines a larger area of the fundus and offers a stereoscopic effect, acquiring the images is relatively time-consuming and labour-intensive, and many patients have contraindications to mydriasis, which limits its use in mass screening. From this perspective, non-mydriatic photography has the advantage for initial screening. One can envisage that, as automated image-analysis systems develop further, image acquisition could be completed with a digital camera in internal-medicine clinics in areas without ophthalmologists, with diagnosis obtained via network transmission; this would save manpower and shorten clinic time, creating a potentially highly efficient system. Screening out suspected fundus disease with non-mydriatic photography, then performing mydriatic examination and FFA to confirm the diagnosis and disease stage, may be an optimal solution.

REFERENCES
1 Fundus Disease Group, Chinese Ophthalmological Society. Clinical guidelines for diabetic retinopathy in China (2014). Chin J Ophthalmol 2014;50(11):851-865
2 Wang JJ, Guo S, Kong DM, et al. Application of non-mydriatic fundus photography in fundus examination. Chin J Pract Ophthalmol 2015;33(21):65-66
3 Jia XM. Clinical observation of non-mydriatic fundus photography in clinical fundus examination. J Qiqihar Med Univ 2014;35(6):839-840
4 Ma Y, Shi X. A 2-year remote screening follow-up study of community patients with diabetic retinopathy. Chin Remedies Clin 2016;16(6):792-794
5 Han C, Xu Q, Liu N, et al. Preliminary study of non-mydriatic fundus photography screening for fundus diseases in residents aged over 50 in two Beijing subdistricts. Ophthalmology (China) 2013;22(4):230-233
6 Xu X, Zou HD. Community screening and prevention of diabetic retinopathy. Chin J Ophthalmol Otorhinolaryngol 2008;8(5):276-279
7 Teng T, Lefley M, Claremont D, et al. Progress towards automated diabetic ocular screening: a review of image analysis and intelligent systems for DR. Med Biol Eng Comput 2002;40(1):2-13
8 Lee SC, Lee ET, Wang Y, et al. Computer classification of nonproliferative diabetic retinopathy. Arch Ophthalmol 2005;123(6):759-764

European Journal of Physiotherapy
Title Page
Manuscript Title: Reliability of knee joint position sense measurement: a comparison between goniometry and image capture methods.
Authors: Fiona Irving1, Joseph Russell2, Toby Smith3
Affiliations: 1. Physiotherapy Department, United Lincolnshire Hospitals Trust, Pilgrim Hospital, Boston, United Kingdom. 2. Physiotherapy Department, Allied Health Professionals Suffolk, Hartismere Hospital, Eye, Suffolk, United Kingdom. 3. School of Health Sciences, Faculty of Medicine and Health Sciences, University of East Anglia, Norwich, United Kingdom.
Corresponding Author: Dr Toby Smith, School of Health Sciences, Queen's Building, Faculty of Medicine and Health Sciences, Norwich Research Park, Norwich, NR4 7TJ, United Kingdom. Telephone: 044 (0)1603 593087; Fax: 044 (0)1603 593166; Email: toby.smith@uea.ac.uk.

Abstract
Aims: To evaluate the intra-rater and inter-rater reliability of hand-held goniometry compared to image capture (IMC) in the assessment of joint position sense (JPS) in healthy participants.
Methodology: A repeated-measures observational study design was undertaken with 36 asymptomatic university students of both genders aged 18 to 45 years. JPS in the knee was assessed by two assessors over two sessions (one-week interval) using hand-held goniometry and IMC methods. Joint position sense was assessed at four target knee flexion angles.
Intra- and inter-rater reliability was assessed with absolute error (AE), relative error (RE) and the intra-class correlation coefficient (ICC).
Findings: Inter-rater reliability for goniometry was poor to substantial (ICC: 0.00 to 0.64) and poor to moderate for IMC (ICC: 0.00 to 0.47). Intra-rater reliability for goniometry was poor to moderate (ICC: 0.00 to 0.42) and poor to moderate for IMC (ICC: 0.00 to 0.41). AE for goniometry ranged from 3.2° to 8.6°, with RE from 0.1° to 8.3°. For IMC, AE ranged from 5.3° to 12.5°, with RE from 0.1° to 11.1°.
Principal Conclusions: Neither goniometry nor IMC appeared superior to the other in JPS assessment. The reliability of both goniometry and IMC should be considered with caution before clinical assessment is made.
Keywords: Measurement; knee; proprioception; angle; range of motion

Introduction
Proprioception is the awareness of movement and position of a joint in space [1]. Proprioception relies on sensorimotor receptors which provide sensory input through visual, tactile and vestibular feedback systems [2,3]. Proprioception is also informed through motion, where mechano-sensitive proprioceptors generate feedback that enables perceptual awareness of the limb, including its movement, orientation in space, velocity, force and joint position sense (JPS) [1,4-6]. Two previously documented methods of assessing JPS are hand-held goniometry and photographic image capture (IMC) [4]. Joint position sense is assessed through goniometry by positioning the joint under investigation at a pre-specified 'target' angle as measured using the goniometer, asking the individual to try to remember that position, moving them out of that position, then asking them to replicate the target angle, and re-measuring this angle. The same principle holds for IMC, where the target angle is measured with a goniometer and a photograph is taken of that joint angle.
The participant then tries to remember that angle, is moved out of that position and then replicates the target angle, at which point a photograph is taken. The assessor then measures the angle of the repeated joint position to estimate the degree of agreement with, or deviation from, the target angle. Currently, limited evidence exists in relation to the intra- and inter-rater reliability of IMC methods for JPS assessment [4]. Smith et al.'s [4] systematic review of JPS measures of the knee suggested variable inter-rater and intra-rater reliability for IMC, but was unable to identify any studies which had assessed the reliability of hand-held goniometry in relation to knee JPS assessment. Ascertaining this is arguably a high priority, given the importance of knee proprioception for everyday functional activity and its common association with injury and pathology [7-9]. Furthermore, given its proven reliability in knee range of movement (ROM) assessment [10] and its frequent use in clinical practice, hand-held goniometry clearly warrants further investigation, which provided the rationale for this research study. Similarly, given the low cost and simplicity of JPS assessment via digital photography measured with a protractor, this could be deemed the most appropriate and feasible method of IMC available for clinical practice [11-13]. This assertion, coupled with the lack of current research underpinning the reliability of IMC techniques, provided further rationale for the use of digital photography IMC (hereafter referred to simply as IMC) in this study.
Accounting for this paucity of evidence on the reliability of goniometric JPS assessment, and since there is no previous evidence comparing goniometry to IMC, the purpose of this study was to evaluate the intra-rater and inter-rater reliability of hand-held goniometry compared to IMC assessment of JPS.

Methods
A repeated-measures design with two assessors was used to assess both intra- and inter-rater reliability.
Participants
The inter- and intra-observer reliability of goniometry and IMC in JPS measurement had not previously been compared, so there were no data on which to base a sample size calculation. It has been proposed that a minimum of 15 to 20 participants is necessary when determining the reliability of a quantitative variable [14]. Accordingly, accounting for the research timetable, 36 participants were recruited in total. Participants were university students enrolled on either Physiotherapy or Occupational Therapy courses, recruited between November 2013 and January 2014. Twenty-seven participants were female and nine male, aged 18 to 45 years (mean ± standard deviation age: 25.4 ± 6.0 years). Participants were excluded if they self-reported joint pain (in any part of the body) over the preceding three months, were allergic to adhesive tape, did not provide informed written consent, or were unable to undertake the entire assessment process.

Instrument and test procedure
Prior to testing, both assessors (Assessor 1; Assessor 2) were taught a standardised method of assessing JPS through goniometry and IMC, as described below. Both assessors had 12 months of academic/clinical experience and were enrolled on a United Kingdom pre-registration physiotherapy master's degree programme. The methods were taught by the chief investigator to ensure they were accurate and consistent with current specifications [15,16]. Data collection commenced only once each assessor and the chief investigator were satisfied that the techniques adopted accorded with the standardised techniques.
Joint position sense assessment was performed on the right knee of all participants to ensure consistency and to prevent any potential variability between left and right JPS from confounding the findings [17,18].
Participants were prepared for assessment with the application of white adhesive markers on the right greater trochanter, lateral tibiofemoral joint line and lateral malleolus (Figure 1). All testing was performed in standing with a 12 cm distance between the medial malleoli. Assessment of JPS was conducted in the following stages:
• The participant was instructed by the assessor to actively flex the knee to the first specified angle, termed the "target" angle. This angle was measured as per Norkin and White's (1995) recommended method for assessing knee flexion, with a 15 cm two-armed plastic hand-held goniometer with 1° marked increments [16].
• The participant was instructed to remain at this angle of flexion for 10 seconds and to remember their knee position.
• The participant was then instructed by the assessor to straighten the knee. Immediately following this, the participant was instructed to replicate the "target" angle position.
• In the goniometry method, the angle produced by the participant was measured using the goniometer, whereas in the IMC assessment digital images were captured with a standard iPad 2 (model A1395). The distance of the iPad from the limb ranged from 80 to 100 cm depending on the length of the participant's lower limb. These images were printed and the knee positions were measured using a simple 180° protractor.
• This process was repeated to assess four knee flexion target angles (20°, 40°, 75°, 100°).
Each assessor assessed each participant across the four target angles. Each measurement angle was recorded a single time. The order in which the assessors evaluated participants was randomised through a single toss of a coin. A second coin toss determined the order of the two JPS measures (goniometry or IMC) to minimise the risk of order effects. All participants and assessors were instructed to be quiet throughout the assessment periods. Testing was performed in the same building throughout, in similar practical/clinical rooms, to reduce possible environmental variability.
All participants returned after a one-week interval, at the same time of day, when the process was repeated.

Statistical analysis
Descriptive statistics (mean and standard deviation) summarised gender and age. JPS accuracy was measured by calculating absolute error (AE) and relative error (RE) [19]. AE was measured as the absolute numerical difference between the test (target) angle and the response angle recorded by the assessor [19]. RE was defined as the numerical difference between the test (target) angle and the response angle (the knee range of motion actually achieved by the participant), with positive (overestimation) and negative (underestimation) values represented as +/- figures, thereby accounting for directional bias [19]. Both AE and RE were therefore necessary to determine the overall measurement error [20,21]. It was necessary to determine agreement between all AE and RE variables; therefore the intra-class correlation coefficient (ICC) with 95% confidence intervals was selected to ascertain intra- and inter-rater reliability [17,22]. The strength of agreement for ICCs was categorised using the boundaries outlined by Landis and Koch (1977) [23]: values of less than 0.00 equate to poor strength, 0.00 to 0.20 to slight, 0.21 to 0.40 to fair, 0.41 to 0.60 to moderate, 0.61 to 0.80 to substantial and 0.81 to 1.00 to almost perfect. All statistical analyses were completed using SPSS (version 21.0) (IBM, Armonk, NY, USA).

Results
Intra-rater reliability of goniometry compared to image capture methods
The strength of agreement for both AE and RE remained within two of the specified ICC boundaries for goniometry and IMC, and did not exceed 'moderate'; values were predominantly distributed between 'poor' and 'fair' (Table 1; Table 2). Minimal differences were observed between AE and RE between variables in each grouping.
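The error measures and ICC agreement categories defined above can be sketched as follows (function names are ours; the category boundaries follow Landis and Koch, 1977, as quoted in the text):

```python
def absolute_error(target, response):
    """AE: unsigned difference between the target and response angle (degrees)."""
    return abs(response - target)

def relative_error(target, response):
    """RE: signed difference; positive = overestimation, negative = underestimation."""
    return response - target

def landis_koch(icc):
    """Map an ICC value to the Landis and Koch (1977) agreement category."""
    if icc < 0.00:
        return "poor"
    for upper, label in [(0.20, "slight"), (0.40, "fair"),
                         (0.60, "moderate"), (0.80, "substantial")]:
        if icc <= upper:
            return label
    return "almost perfect"
```

For example, a response of 48.6° to a 40° target gives AE = 8.6° and RE = +8.6°, while an ICC of 0.47 is classed as 'moderate'.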
Agreement for RE ranged between 'slight' and 'moderate', with ICC values from 0.00 (95% CI: 0.00 to 0.26) to 0.56 (95% CI: 0.29 to 0.74) (Table 1; Table 2). The largest AE and RE observed using goniometry assessment were 8.64° and 8.31° respectively, which occurred at 40° for Assessor 2. The smallest AE and RE were 3.19° and -0.11°, at 75° for Assessor 1 (Table 2).

Inter-rater reliability of goniometry compared to image capture method

Overall, the agreement of the two assessors in the goniometry and IMC categories of sessions one and two varied considerably across AE and RE. ICCs ranged from -0.00 (95% CI: 0.00 to 0.23) to 0.64 (95% CI: 0.00 to 0.31), equating to 'poor' to 'substantial' agreement (Table 3). Results between the sessions remained within two ICC boundaries, with the exception of AE values for session one, which ranged from 'poor' to 'substantial' with ICCs from 0.00 (95% CI: 0.00 to 0.20) to 0.64 (95% CI: 0.00 to 0.31). The greatest agreement in session one for both JPS methods was observed at 40° for Assessors 1 and 2 for goniometry. The lowest agreement was also observed at the same angle for IMC between the assessors in session one (Table 3). The most consistent agreement observed for both goniometry and IMC occurred at 100° in session two; in this instance, RE ICC values were both 0.47 (95% CI: 0.18 to 0.69) (Table 3). During IMC assessment, the largest AE and RE occurred for Assessor 2 at 100° during session two, at values of 12.53° and -11.08° respectively. The smallest error occurred for Assessor 1 at 40° during session one, at values of 5.36° (AE) and 0.08° (RE) (Table 3). Overall, the AE and RE findings of both methods for both assessors demonstrate greater error for IMC. The overall average error (standard deviation) of AE for goniometry was 5.16 (4.30) and 1.61 (6.50) for RE, whereas IMC resulted in an AE of 8.20 (6.33) and an RE of -1.92 (10.18).
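For readers wishing to reproduce agreement statistics of this kind, a two-way random-effects, absolute-agreement, single-measures ICC (ICC(2,1), following the Shrout and Fleiss formulation) can be computed directly from its ANOVA decomposition. The sketch below is a generic illustration with invented ratings; it is not the study's data or its SPSS output:

```python
def icc2_1(scores):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.

    `scores` is a list of rows, one per subject, each containing one
    rating per rater.
    """
    n, k = len(scores), len(scores[0])
    grand = sum(map(sum, scores)) / (n * k)
    row_means = [sum(r) / k for r in scores]
    col_means = [sum(r[j] for r in scores) / n for j in range(k)]
    ss_total = sum((x - grand) ** 2 for r in scores for x in r)
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)
    msr = ss_rows / (n - 1)                       # between-subjects mean square
    msc = ss_cols / (k - 1)                       # between-raters mean square
    mse = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))  # residual
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Invented ratings: 5 participants, 2 assessors in close agreement.
ratings = [[38, 40], [42, 43], [35, 36], [45, 47], [40, 41]]
print(round(icc2_1(ratings), 2))
# prints: 0.93
```

In practice a statistics package (e.g. SPSS, as used in the study) would also supply the 95% confidence intervals reported above.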
Intra-rater reliability of goniometry

Agreement for Assessors 1 and 2 ranged from 'poor' to 'moderate' for both AE and RE ICCs (Supplementary Table 1). Agreement strength for both AE and RE values across all groupings was within two ICC boundaries, and there was little difference between AE and RE for all groupings. Although definitively strong agreement was not observed overall, Assessor 2 demonstrated greater agreement than Assessor 1, with AE ICCs between 0.05 (95% CI: 0.00 to 0.36) and 0.42 (95% CI: 0.12 to 0.67) (slight to moderate); RE ICC values ranged from 0.24 (95% CI: 0.00 to 0.52) to 0.29 (95% CI: 0.02 to 0.56) (fair). For Assessor 1, AE ICCs ranged from 0.00 (95% CI: 0.00 to 0.23) to 0.26 (95% CI: 0.00 to 0.54) (poor to fair) and RE ICCs from 0.00 (95% CI: 0.00 to 0.21) to 0.28 (95% CI: 0.00 to 0.55) (poor to fair) (Supplementary Table 1).

Inter-rater reliability of goniometry

Overall, agreement within this category highlighted wider inconsistency, ranging from 0.00 (95% CI: 0.00 to 0.21) (poor) to 0.64 (95% CI: 0.00 to 0.31) (substantial) across AE and RE ICCs (Supplementary Table 2). Agreement between assessors in session one showed greater strength overall than in session two: AE ICCs were 0.00 (95% CI: 0.00 to 0.21) to 0.64 (95% CI: 0.00 to 0.31) (poor to substantial) and RE ICCs 0.00 (95% CI: 0.00 to 0.33) to 0.24 (95% CI: 0.00 to 0.50) (poor to fair) (Supplementary Table 2). Results at session two showed AE ICCs of 0.00 (95% CI: 0.00 to 0.23) to 0.34 (95% CI: 0.04 to 0.59) (poor to fair), and RE ICCs of 0.09 (95% CI: 0.00 to 0.35) to 0.46 (95% CI: 0.17 to 0.69) (slight to moderate). However, as outlined above, session one demonstrated larger differences between AE and RE ICCs, resulting in greater differences in agreement (Supplementary Table 2). The greatest agreement for RE in this group occurred during session two at 75°.
Assessor 1 was -0.28° and Assessor 2 was -1.50°, resulting in an ICC of 0.46 (moderate) (Supplementary Table 2). The weakest agreement for AE was observed during session two at 40°, with values of 3.33° for Assessor 1 and 7.31° for Assessor 2; the ICC in this instance was 0.00 (95% CI: 0.00 to 0.23). The weakest agreement for RE was observed in session one at 75°, with values of -0.11° for Assessor 1 and -1.28° for Assessor 2, and an ICC of 0.00 (95% CI: 0.00 to 0.33) (Supplementary Table 2).

Intra-rater reliability of IMC

Overall, the agreement observed was within two ICC boundaries and did not exceed a 'moderate' interpretation (Supplementary Table 3). The results demonstrate that strong agreement was not observed across both sessions for either assessor. However, the AE values of Assessor 1 demonstrated slightly higher agreement overall for all groupings compared to Assessor 2: ICCs of 0.00 (95% CI: 0.00 to 0.26) to 0.41 (95% CI: 0.09 to 0.66) (poor to moderate) against ICCs of 0.00 (95% CI: 0.00 to 0.10) to 0.32 (95% CI: 0.00 to 0.58) (poor to fair) (Supplementary Table 3). For Assessor 2, AE ICCs were 0.00 (95% CI: 0.00 to 0.10) to 0.32 (95% CI: 0.00 to 0.58) (poor to fair) and RE ICCs were 0.11 (95% CI: 0.00 to 0.41) to 0.43 (95% CI: 0.13 to 0.66) (slight to moderate), highlighting a considerable difference between AE and RE (Supplementary Table 3).

Inter-rater reliability of IMC

Agreement was within two ICC boundaries consistently across both sessions, with minimal difference observed between AE and RE (Supplementary Table 4). Although high agreement was not observed between assessors in either session, stronger agreement was noted in session two: ICC values for AE ranged from 0.03 (95% CI: 0.00 to 0.35) to 0.47 (95% CI: 0.18 to 0.69) (slight to moderate), and for RE from 0.15 (95% CI: 0.00 to 0.44) to 0.47 (95% CI: 0.18 to 0.69) (slight to moderate).
ICCs for AE and RE in session one were 0.00 (95% CI: 0.00 to 0.20) to 0.33 (95% CI: 0.01 to 0.59) (poor to fair) and 0.02 (95% CI: 0.21 to 0.28) to 0.30 (95% CI: 0.00 to 0.57) (slight to fair) respectively (Supplementary Table 4). The strongest agreement, of 'moderate', was observed during session two at 100°, with AE values of 10.6° for Assessor 1 and 12.5° for Assessor 2 and an ICC of 0.47. RE values for this angle at session two were also consistent in 'moderate' agreement, with -9.5° for Assessor 1 and -11.1° for Assessor 2 and an ICC of 0.47. The weakest agreement (poor) was observed during session one at 40°, with AE values of 5.4° for Assessor 1 and 7.6° for Assessor 2 and an ICC of 0.00 (95% CI: 0.00 to 0.20). RE at 40° during session one was 0.01° for Assessor 1 and 6.4° for Assessor 2, with an ICC of 0.02 (95% CI: 0.21 to 0.28); accordingly, the strength of agreement for RE was 'slight' (Supplementary Table 4).

Discussion

The aim of this study was to evaluate the intra-rater and inter-rater reliability of hand-held goniometry compared to image capture (IMC) in the assessment of joint position sense (JPS) in healthy participants. Clinically, establishing proprioceptive acuity is of high importance, given that proprioception plays a significant role in everyday functioning, joint stability, injury prophylaxis and the prevention of falls [3,6,21,24]. This demonstrates the necessity of establishing techniques that enable accurate measurement of proprioception through JPS, so that clinicians can identify individuals at risk of sustaining injury through proprioceptive deficit, objectively monitor pathological decline, and create specific rehabilitation programmes that both maintain and enhance proprioception in pathological and non-pathological populations [3,10,21].
Therefore, further evidence is clearly warranted to determine the most reliable and accurate method of JPS assessment; following recent research developments, emerging techniques such as smartphone applications could offer innovative and easily applicable approaches for clinical practice [25,26]. The largest AE and RE, and consequently the greatest underestimation of a target angle, occurred at 100° in this study, which is the position most likely to cause fatigue for participants; research has suggested that the chemical composition of muscle changes through fatigue, leading to irregularity of sensory output and increased joint laxity [27]. Several authors have proposed that this secondary increase in laxity, combined with temporary inefficiency of muscle receptors through fatigue, contributes to reductions in proprioceptive acuity and JPS accuracy, which could account for these findings at this specific joint angle [27,28]. Previous evidence has used three to five second holds, whereas this study used ten-second holds to allow a sufficient attempt at JPS, which could have contributed to fatigue and consequently to underestimation [13,19,29]. Joint position sense accuracy was not seen to improve towards end-range movements, despite some theories that would predict this due to increased articular compression and recruitment of mechanoreceptors, leading to greater proprioceptive feedback and enhanced accuracy [20,30]. This finding may be observed at extreme range of motion; however, as producing extreme knee flexion may pose difficulty, and as most rehabilitation protocols utilise closed-chain activity in the functional range of 0 to 100° in practice, it is arguably not appropriate to test such angles [31]. These findings could indicate that more accurate JPS assessment can be established at mid-joint range, as supported by Barrack et al. in symptomatic populations [32]. Overall, the greatest overestimation of a target angle occurred at 20°.
In standing, producing a 20° knee bend requires minimal flexion, and this small angle could therefore be easier for participants to overestimate [20]. However, during bilateral weight-bearing it has been reported that increased afferent input from all weight-bearing joints and other sensorimotor mechanisms influences and facilitates proprioceptive feedback; thus, AE and RE findings for knee JPS may be due to factors external to proprioceptive acuity at the knee joint [3,19]. When interpreting the results of this study, it is critical to note that participant variability and assessor measurement error cannot be separately examined through the use of JPS as a measure of proprioception; it therefore cannot be definitively ascertained whether measurement error resulted from either one or a combination of the two. Although the effects of fluctuations in an individual's circadian rhythm were controlled for where possible through completion of testing at similar timings for both sessions [33], uncontrollable factors such as participant behaviour, individual physiology and learned effects could have affected accuracy and the overall results, which is a fundamental limitation of the study and limits the generalisability of the findings. It is also critical to consider that the average AE and RE values of all participants were reported in this study; while the mean is a widely used measure of central tendency, it can be influenced by outliers, and caution should therefore be adopted when interpreting the findings [17]. Goniometry has been routinely employed in clinical practice for many years, and while the findings from this study in isolation are not sufficient to recommend deterring its use, they do highlight the need for caution; just because a tool is traditionally used, it does not automatically follow that it is an effective tool, given its reported limited sensitivity in recording smaller changes in joint range of motion [34].
Due to the overall weak agreement found for intra-rater and inter-rater reliability for both methods, it could be argued that a more reliable measurement tool should be utilised to adhere to evidence-based practice, or that further research is needed to elucidate the effectiveness of goniometry in JPS assessment [4]. Although 2D IMC analysis may have associated initial costs and time constraints, as highlighted by Smith et al. [4], this method has demonstrated strong reliability for JPS assessment, and although further, more recent research into its reliability is warranted, it could potentially offer a more evidence-based alternative for clinical practice [19,35]. Currently, emerging evidence in relation to the measurement of knee joint angles through smartphone applications could offer a cost-effective and easily clinically applicable alternative method of JPS measurement; however, further research is required to ascertain its reliability [25,26]. Such technology may be used in addition to audio biofeedback, particularly at end-of-range measurements. This could enable repeatability training for the patient and a learning effect, particularly given the limitation in JPS measurements at extreme end range of motion as reported in these findings.

Conclusions

Overall, intra-rater and inter-rater agreement strength was weak and did not exceed substantial for either method. Generally, AE and RE agreement was poor to moderate, and greater error was reported for IMC than for goniometry for both assessors. While JPS is deemed an appropriate assessment of proprioceptive abilities, it is critical to be aware that, by assessing proprioception in this manner, the error observed could have resulted from poor proprioception of the participants, measurement inaccuracy by the assessors, or a combination of both factors, and this cannot be ascertained.
While these findings in isolation are insufficient to deem goniometry or IMC unreliable measurement tools, they do have clinical implications, urge the use of caution and highlight the need for further research, particularly on the use of smartphone apps for assessing JPS in varying clinical populations.

Acknowledgements and Declarations

Ethics Approval: The study was approved by the University of East Anglia's Faculty of Medicine and Health Science Research Ethics Committee (Ref: 2012/2013-14).
Funding: No funding was received to support this study.
Declaration of interest: None to declare.

Figure and Table Legends

Figure 1: Image to present the output and methods of the IMC method of JPS assessment.
Table 1: Intra-rater reliability of goniometry compared to IMC methods for Assessor 1.
Table 2: Intra-rater reliability of goniometry compared to IMC for Assessor 2.
Table 3: Inter-rater reliability of goniometry compared to IMC.
Supplementary Table 1: Intra-rater reliability of goniometry method.
Supplementary Table 2: Inter-rater reliability of goniometry method.
Supplementary Table 3: Intra-rater reliability of IMC method.
Supplementary Table 4: Inter-rater reliability of IMC method.

References

[1] Lephart SM, Riemann BL, Fu F. Introduction to the sensorimotor system. In: Lephart SM, Freddie F, editors. Proprioception and Neuromuscular Control in Joint Stability, USA: Human Kinetics; 2000: xv-xxiv.
[2] Deshpande N, Connelly DM, Culham EG, Costigan PA. Reliability and validity of ankle proprioceptive measures. Arch Phys Med Rehabil 2003; 84: 883-89.
[3] Stillman BC. Making sense of proprioception: the meaning of proprioception, kinaesthesia and related terms. Physiotherapy 2002; 88: 667-76.
[4] Smith TO, Davies L, Hing CB. A systematic review to determine the reliability of knee joint position sense assessment measures. Knee 2013; 20: 162-69.
[5] Grob KR, Kuster MS, Higgins SA, Lloyd DG, Yata H.
Lack of correlation between different measurements of proprioception in the knee. J Bone Joint Surg Br 2003; 84: 614-18.
[6] Jerosch J, Prymka M. Proprioception and joint stability. Knee Surg Sports Traumatol Arthrosc 1996; 4: 171-79.
[7] Moezy A, Olyaei G, Hadian M, Razi M, Faghihzadeh S. A comparative study of whole body vibration training and conventional training on knee proprioception and postural stability after anterior cruciate ligament reconstruction. Br J Sports Med 2008; 42: 373-78.
[8] Sahin N, Baskent A, Cakmak A, Salli A, Ugurlu H, Berker E. Evaluation of knee proprioception and effects of proprioception exercise in patients with benign joint hypermobility syndrome. Rheumatol Int 2008; 28: 995-1000.
[9] Tsauo JY, Cheng PF, Yang RS. The effects of sensorimotor training on knee proprioception and function for patients with knee osteoarthritis: a preliminary report. Clin Rehabil 2008; 22: 448-57.
[10] Brosseau L, Tousignant M, Budd J, Chartier N, Duciaume L, Plamondon S, O'Sullivan JP, O'Donoghue S, Balmer S. Intratester and intertester reliability and criterion validity of the parallelogram and universal goniometers for active knee flexion in healthy subjects. Physiother Res Int 1997; 2: 150-66.
[11] Stillman BC. An investigation of the clinical assessment of joint position sense. [online] Available at: <https://minerva-access.unimelb.edu.au/bitstream/handle/11343/38786/65887_00000246_01_Stillman.pdf?sequence=1> (Accessed on: 05.02.2015).
[12] Marks R. Further evidence of impaired position sense in knee osteoarthritis. Physiother Res Int 1996; 1: 127-36.
[13] Marks R, Quinney HA, Wessel J. Proprioceptive sensibility in women with normal and osteoarthritic knee joints. Clin Rheumatol 1993; 12: 170-75.
[14] Fleiss JL. Reliability of Measurement. In: Fleiss JL, editor. The Design and Analysis of Clinical Experiments, USA: John Wiley and Sons Inc; 1999.
[15] Piriyaprasarth P, Morris ME.
Psychometric properties of measurement tools for quantifying knee joint position and movement: a systematic review. Knee 2007; 14: 2-8.
[16] Norkin CC, White DJ. Measurement of Joint Motion. 2nd ed. USA: Allied Health, Jean-Francois Vilain; 1995.
[17] Field A. Discovering Statistics Using IBM SPSS Statistics. 4th ed. London: SAGE Publications; 2013.
[18] Corrigan JP, Cashman WF, Brady MP. Proprioception in the cruciate deficient knee. J Bone Joint Surg Br 1992; 74: 247-50.
[19] Stillman BC, McMeeken JM. The role of weightbearing in the clinical assessment of knee joint position sense. Aust J Physiother 2001; 47: 247-53.
[20] Olsson L, Lund H, Henriksen M, Rogind H, Bliddal H, Danneskiold-Samsøe B. Test-retest reliability of a knee joint position sense measurement method in sitting and prone position. Adv Physiother 2004; 6: 37-47.
[21] Baker V, Bennell K, Stillman B, Cowan S, Crossley K. Abnormal knee joint position sense in individuals with patellofemoral pain syndrome. J Orthop Res 2002; 20: 208-14.
[22] Rothstein JM, Miller PJ, Roettger RF. Goniometric reliability in a clinical setting: elbow and knee measurements. Phys Ther 1983; 63: 1611-615.
[23] Landis JR, Koch GG. The measurement of observer agreement for categorical data. Biometrics 1977; 33: 159-74.
[24] Petrella RJ, Lattanzio PJ, Nelson MG. Effect of age and activity on knee joint proprioception. Am J Phys Med Rehabil 1997; 76: 235-41.
[25] Milanese S, Gordon S, Buettner P, Flavell C, Ruston S, Coe D, O'Sullivan W, McCormack S. Reliability and concurrent validity of knee angle measurement: smart phone app versus universal goniometer used by experienced and novice clinicians. Man Ther 2014; 19: 569-74.
[26] Milani P, Coccetta CA, Rabini A, Sciarra T, Massazza G, Ferriero G. Mobile smartphone applications for body position measurement in rehabilitation: a review of goniometric tools. PM R 2014; 6: 1038-43.
[27] Lee HM, Liau JJ, Cheng CK, Tan CM, Shih JT.
Evaluation of shoulder proprioception following muscle fatigue. Clin Biomech 2003; 18: 843-47.
[28] Rozzi SL, Lephart SM, Gear WS, Fu FH. Knee joint laxity and neuromuscular characteristics of male and female soccer and basketball players. Am J Sports Med 1999; 27: 312-19.
[29] Dover G, Powers M. Reliability of joint position sense and force reproduction measures during internal and external rotation of the shoulder. J Athl Train 2003; 38: 304-10.
[30] Pincivero DM, Bachmeier B, Coelho AJ. The effects of joint angle and reliability on knee proprioception. Med Sci Sports Exerc 2001; 33: 1708-712.
[31] Escamilla RF. Knee biomechanics of the dynamic squat exercise. Med Sci Sports Exerc 2001; 33: 127-41.
[32] Barrack RL, Skinner HB, Buckley SL. Proprioception in the anterior cruciate deficient knee. Am J Sports Med 1989; 17: 1-6.
[33] Smolensky MH, Hermida RC, Castriotta RJ, Portaluppi F. Role of sleep-wake cycle on blood pressure circadian rhythms and hypertension. Sleep Med 2007; 8: 668-80.
[34] Gajdosik RL, Bohannon RW. Clinical measurement of range of motion: review of goniometry emphasizing reliability and validity. Phys Ther 1987; 67: 1867-872.
[35] Gribble P, Hertel J, Denegar C, Buckley W. Reliability and validity of a 2-D video digitising system during a static and a dynamic task. J Sports Rehabil 2005; 14: 137-49.

Figure 1: Image to present the output and methods of the IMC method of JPS assessment.

Table 1: Intra-rater reliability of goniometry compared to IMC methods for Assessor 1.
Session | Angle (°) | AE Goniometry (°) | AE Image Capture (°) | AE ICC (95% CI) | AE Agreement Strength* | RE Goniometry (°) | RE Image Capture (°) | RE ICC (95% CI) | RE Agreement Strength*
1 | 20 | 4.92 | 5.97 | 0.12 (0.00, 0.43) | Slight | 4.69 | 2.64 | 0.24 (0.00, 0.52) | Fair
1 | 40 | 4.03 | 5.36 | 0.00 (0.00, 0.30) | Poor | 3.25 | 0.08 | 0.00 (0.00, 0.26) | Poor
1 | 75 | 4.11 | 6.69 | 0.00 (0.00, 0.16) | Poor | -0.11 | -5.58 | 0.23 (0.00, 0.50) | Fair
1 | 100 | 3.64 | 9.31 | 0.30 (0.00, 0.58) | Fair | -2.58 | -6.14 | 0.37 (0.07, 0.61) | Fair
2 | 20 | 3.81 | 6.33 | 0.07 (0.00, 0.37) | Slight | 3.36 | 2.83 | 0.28 (0.00, 0.56) | Fair
2 | 40 | 3.33 | 5.28 | 0.16 (0.00, 0.16) | Poor | 1.83 | -1.11 | 0.14 (0.00, 0.43) | Slight
2 | 75 | 3.19 | 10.50 | 0.01 (0.00, 0.19) | Slight | -2.08 | -10.33 | 0.14 (0.00, 0.41) | Slight
2 | 100 | 3.50 | 10.64 | 0.01 (0.00, 0.20) | Slight | -0.89 | -9.53 | 0.22 (0.00, 0.52) | Fair
AE - Absolute error; ICC - intra-class correlation coefficient; RE - relative error. *Kappa statistic boundaries as outlined in Landis and Koch (1977).

Table 2: Intra-rater reliability of goniometry compared to IMC for Assessor 2.

Session | Angle (°) | AE Goniometry (°) | AE Image Capture (°) | AE ICC (95% CI) | AE Agreement Strength* | RE Goniometry (°) | RE Image Capture (°) | RE ICC (95% CI) | RE Agreement Strength*
1 | 20 | 7.89 | 9.47 | 0.10 (0.00, 0.43) | Slight | 7.72 | 8.97 | 0.12 (0.00, 0.43) | Slight
1 | 40 | 8.64 | 7.58 | 0.09 (0.00, 0.41) | Slight | 8.31 | 6.36 | 0.17 (0.00, 0.47) | Slight
1 | 75 | 5.67 | 8.86 | 0.19 (0.00, 0.46) | Slight | -1.28 | -4.92 | 0.39 (0.09, 0.63) | Fair
1 | 100 | 6.22 | 7.56 | 0.46 (0.17, 0.68) | Moderate | -4.39 | -5.78 | 0.56 (0.29, 0.74) | Moderate
2 | 20 | 6.64 | 8.17 | 0.05 (0.00, 0.36) | Slight | 5.81 | 7.11 | 0.28 (0.00, 0.56) | Fair
2 | 40 | 7.31 | 7.50 | 0.00 (0.00, 0.30) | Poor | 6.92 | 3.17 | 0.13 (0.00, 0.42) | Slight
2 | 75 | 4.56 | 9.39 | 0.04 (0.00, 0.29) | Slight | -1.50 | -7.33 | 0.28 (0.00, 0.55) | Fair
2 | 100 | 5.17 | 12.53 | 0.25 (0.00, 0.56) | Fair | -3.33 | -11.08 | 0.38 (0.00, 0.69) | Fair
AE - Absolute error; ICC - intra-class correlation coefficient; RE - relative error. *Kappa statistic boundaries as outlined in Landis and Koch (1977).

Table 3: Inter-rater reliability of goniometry compared to IMC.
Session | Angle (°) | AE Goniometry A1 / A2 (°) | AE Goniometry ICC (95% CI), Strength* | AE Image Capture A1 / A2 (°) | AE Image Capture ICC (95% CI), Strength* | RE Goniometry A1 / A2 (°) | RE Goniometry ICC (95% CI), Strength* | RE Image Capture A1 / A2 (°) | RE Image Capture ICC (95% CI), Strength*
1 | 20 | 4.92 / 7.89 | 0.27 (0.00, 0.53), Fair | 5.97 / 9.47 | 0.11 (0.00, 0.39), Slight | 4.69 / 7.72 | 0.24 (0.00, 0.50), Fair | 2.64 / 8.97 | 0.19 (0.00, 0.45), Slight
1 | 40 | 4.03 / 8.64 | 0.64 (0.00, 0.31), Substantial | 5.36 / 7.58 | 0.00 (0.00, 0.20), Poor | 3.25 / 8.31 | 0.09 (0.00, 0.35), Slight | 0.08 / 6.36 | 0.02 (0.21, 0.28), Slight
1 | 75 | 4.11 / 5.67 | 0.00 (0.00, 0.21), Poor | 6.69 / 8.86 | 0.04 (0.00, 0.35), Slight | -0.11 / -1.28 | 0.00 (0.00, 0.33), Poor | -5.58 / -4.92 | 0.30 (0.00, 0.57), Fair
1 | 100 | 3.64 / 6.22 | 0.01 (0.00, 0.30), Slight | 9.31 / 7.56 | 0.33 (0.01, 0.59), Fair | -2.58 / -4.39 | 0.17 (0.00, 0.46), Slight | -6.14 / -5.78 | 0.30 (0.00, 0.57), Fair
2 | 20 | 3.81 / 6.64 | 0.06 (0.00, 0.34), Slight | 6.33 / 8.17 | 0.03 (0.00, 0.35), Slight | 3.36 / 5.81 | 0.15 (0.00, 0.44), Slight | 2.83 / 7.11 | 0.37 (0.06, 0.62), Fair
2 | 40 | 3.33 / 7.31 | 0.00 (0.00, 0.23), Poor | 5.28 / 7.50 | 0.07 (0.00, 0.37), Slight | 1.83 / 6.92 | 0.09 (0.00, 0.35), Slight | -1.11 / 3.17 | 0.15 (0.00, 0.44), Slight
2 | 75 | 3.19 / 4.56 | 0.34 (0.04, 0.59), Fair | 10.50 / 9.39 | 0.26 (0.00, 0.54), Fair | -2.08 / -1.50 | 0.46 (0.17, 0.69), Moderate | -10.33 / -7.33 | 0.22 (0.00, 0.50), Fair
2 | 100 | 3.50 / 5.17 | 0.00 (0.00, 0.19), Poor | 10.64 / 12.53 | 0.47 (0.18, 0.69), Moderate | -0.89 / -3.33 | 0.30 (0.00, 0.56), Fair | -9.53 / -11.08 | 0.47 (0.18, 0.69), Moderate
A1/A2 - Assessor 1/Assessor 2; AE - Absolute error; ICC - intra-class correlation coefficient; RE - relative error. *Kappa statistic boundaries as outlined in Landis and Koch (1977).

Supplementary Table 1: Intra-rater reliability of goniometry method.
Assessor | Angle (°) | AE Session 1 (°) | AE Session 2 (°) | AE ICC (95% CI) | AE Agreement Strength* | RE Session 1 (°) | RE Session 2 (°) | RE ICC (95% CI) | RE Agreement Strength*
1 | 20 | 4.92 | 3.81 | -0.11 (-0.42, 0.23) | Poor | 4.69 | 3.36 | -0.13 (-0.43, 0.21) | Poor
1 | 40 | 4.03 | 3.33 | 0.17 (-0.15, 0.46) | Slight | 3.25 | 1.83 | 0.18 (-0.16, 0.47) | Slight
1 | 75 | 4.11 | 3.19 | 0.26 (-0.06, 0.54) | Fair | -0.11 | -2.08 | 0.28 (-0.03, 0.55) | Fair
1 | 100 | 3.64 | 3.50 | 0.15 (-0.19, 0.46) | Poor | -2.58 | -0.89 | 0.22 (-0.11, 0.49) | Fair
2 | 20 | 7.89 | 6.64 | 0.42 (0.12, 0.67) | Moderate | 7.72 | 5.81 | 0.29 (0.02, 0.56) | Fair
2 | 40 | 8.64 | 7.31 | 0.09 (-0.24, 0.40) | Slight | 8.31 | 6.92 | 0.24 (-0.90, 0.52) | Fair
2 | 75 | 5.67 | 4.56 | 0.05 (-0.28, 0.36) | Slight | -1.28 | -1.50 | 0.29 (-0.05, 0.56) | Fair
2 | 100 | 6.22 | 5.17 | 0.38 (0.73, 0.62) | Fair | -4.39 | -3.33 | 0.27 (-0.06, 0.54) | Fair
AE - Absolute error; ICC - intra-class correlation coefficient; RE - relative error. *Kappa statistic boundaries as outlined in Landis and Koch (1977).

Supplementary Table 2: Inter-rater reliability of goniometry method.

Session | Angle (°) | AE Assessor 1 (°) | AE Assessor 2 (°) | AE ICC (95% CI) | AE Agreement Strength* | RE Assessor 1 (°) | RE Assessor 2 (°) | RE ICC (95% CI) | RE Agreement Strength*
1 | 20 | 4.92 | 7.89 | 0.27 (-0.03, 0.53) | Fair | 4.69 | 7.72 | 0.24 (-0.05, 0.50) | Fair
1 | 40 | 4.03 | 8.64 | 0.64 (-0.15, 0.31) | Substantial | 3.25 | 8.31 | 0.09 (-0.13, 0.35) | Slight
1 | 75 | 4.11 | 5.67 | -0.12 (-0.42, 0.21) | Poor | -0.11 | -1.28 | -0.00 (-0.33, 0.33) | Poor
1 | 100 | 3.64 | 6.22 | 0.01 (-0.25, 0.30) | Slight | -2.58 | -4.39 | 0.17 (-0.15, 0.46) | Slight
2 | 20 | 3.81 | 6.64 | 0.06 (-0.21, 0.34) | Slight | 3.36 | 5.81 | 0.15 (-0.15, 0.44) | Slight
2 | 40 | 3.33 | 7.31 | -0.01 (-0.20, 0.23) | Poor | 1.83 | 6.92 | 0.09 (-0.13, 0.35) | Slight
2 | 75 | 3.19 | 4.56 | 0.34 (0.04, 0.59) | Fair | -2.08 | -1.50 | 0.46 (0.17, 0.69) | Moderate
2 | 100 | 3.50 | 5.17 | -0.12 (-0.40, 0.19) | Poor | -0.89 | -3.33 | 0.30 (0.00, 0.56) | Fair
AE - Absolute error; ICC - intra-class correlation coefficient; RE - relative error. *Kappa statistic boundaries as outlined in Landis and Koch (1977).
Supplementary Table 3: Intra-rater reliability of IMC method.

Assessor | Angle (°) | AE Session 1 (°) | AE Session 2 (°) | AE ICC (95% CI) | AE Agreement Strength* | RE Session 1 (°) | RE Session 2 (°) | RE ICC (95% CI) | RE Agreement Strength*
1 | 20 | 5.97 | 6.33 | 0.06 (-0.28, 0.38) | Slight | 2.64 | 2.83 | 0.07 (-0.27, 0.39) | Slight
1 | 40 | 5.36 | 5.28 | -0.08 (-0.41, 0.26) | Poor | 0.08 | -1.11 | 0.08 (-0.25, 0.40) | Slight
1 | 75 | 6.69 | 10.50 | 0.22 (-0.06, 0.49) | Fair | -5.58 | -10.33 | 0.27 (-0.0, 0.53) | Fair
1 | 100 | 9.31 | 10.64 | 0.41 (0.09, 0.66) | Moderate | -6.14 | -9.53 | 0.26 (-0.05, 0.53) | Fair
2 | 20 | 9.47 | 8.17 | 0.32 (-0.00, 0.58) | Fair | 8.97 | 7.11 | 0.43 (0.13, 0.66) | Moderate
2 | 40 | 7.58 | 7.50 | 0.11 (-0.26, 0.42) | Slight | 6.36 | 3.17 | 0.24 (-0.07, 0.51) | Fair
2 | 75 | 8.86 | 9.39 | -0.24 (-0.54, 0.10) | Poor | -4.92 | -7.33 | 0.11 (-0.23, 0.41) | Slight
2 | 100 | 7.56 | 12.53 | 0.29 (-0.02, 0.55) | Fair | -5.78 | -11.08 | 0.36 (0.05, 0.61) | Fair
AE - Absolute error; ICC - intra-class correlation coefficient; RE - relative error. *Kappa statistic boundaries as outlined in Landis and Koch (1977).

Supplementary Table 4: Inter-rater reliability of IMC method.

Session | Angle (°) | AE Assessor 1 (°) | AE Assessor 2 (°) | AE ICC (95% CI) | AE Agreement Strength* | RE Assessor 1 (°) | RE Assessor 2 (°) | RE ICC (95% CI) | RE Agreement Strength*
1 | 20 | 5.97 | 9.47 | 0.11 (-0.17, 0.39) | Slight | 2.64 | 8.97 | 0.19 (-0.08, 0.45) | Slight
1 | 40 | 5.36 | 7.58 | -0.12 (-0.41, 0.20) | Poor | 0.08 | 6.36 | 0.02 (0.21, 0.28) | Slight
1 | 75 | 6.69 | 8.86 | 0.04 (-0.27, 0.35) | Slight | -5.58 | -4.92 | 0.30 (-0.03, 0.57) | Fair
1 | 100 | 9.31 | 7.56 | 0.33 (0.01, 0.59) | Fair | -6.14 | -5.78 | 0.30 (-0.03, 0.57) | Fair
2 | 20 | 6.33 | 8.17 | 0.03 (-0.29, 0.35) | Slight | 2.83 | 7.11 | 0.37 (0.06, 0.62) | Fair
2 | 40 | 5.28 | 7.50 | 0.07 (-0.23, 0.37) | Slight | -1.11 | 3.17 | 0.15 (-0.14, 0.44) | Slight
2 | 75 | 10.50 | 9.39 | 0.26 (-0.08, 0.54) | Fair | -10.33 | -7.33 | 0.22 (-0.09, 0.50) | Fair
2 | 100 | 10.64 | 12.53 | 0.47 (0.18, 0.69) | Moderate | -9.53 | -11.08 | 0.47 (0.18, 0.69) | Moderate
AE - Absolute error; ICC - intra-class correlation coefficient; RE - relative error. *Kappa statistic boundaries as outlined in Landis and Koch (1977).
On-Street Visual Analysis on Outdoor Space of Jalan Hang Jebat, Melaka

Procedia - Social and Behavioral Sciences 68 (2012) 353-362
doi: 10.1016/j.sbspro.2012.12.233
1877-0428 © 2012 The Authors. Published by Elsevier Ltd. Open access under CC BY-NC-ND license. Selection and peer-review under responsibility of the Centre for Environment-Behaviour Studies (cE-Bs), Faculty of Architecture, Planning & Surveying, Universiti Teknologi MARA, Malaysia.

AicE-Bs 2012 Cairo: Asia Pacific International Conference on Environment-Behaviour Studies, Mercure Le Sphinx Cairo Hotel, Giza, Egypt, 31 October - 2 November 2012

Zalina Samadi*, Dasimah Omar, Rodzyah Mohd Yunus
Faculty of Architecture, Planning and Surveying, Universiti Teknologi MARA, 40450 Shah Alam, Malaysia

Abstract

Outdoor space in between heritage buildings of heri -cultural attributes of the streets has great influence on eliciting on-street pedestrian patterns. The aim of this study is to determine the relationship between timeframe and density, activity and movement based on the on-street cultural phenomenon. For the purpose of this paper, the presentation unveils one section of the study, which shares the analysis of on- al-visual data only. Unobtrusive methods were employed through Digital Photography and Closed Circuit Television (CCTV) at Jalan Hang Jebat, Melaka. The objectives of this study are to interpret pedestrian density, activity and movement analysis.

Keywords: Outstanding value; outdoor living room; pedestrian pattern; street shopping

1.
* Corresponding Author. Tel.: +006019-2179021; fax: +00603-553444353. E-mail address: zalina_samadi@yahoo.com.

Introduction

The direct relationship between pedestrian movement and on-street activity in revitalising a positive aura within a heritage outdoor space may not be fully understood, but further studies on heritage ambiance with therapeutic quality prove that indulging in creative activities such as outdoor street shopping often decreases solemnity and increases livability. However, the relationship between pedestrian movement and activity can be analysed through a number of methods, including Digital Photography and CCTV. Why are Digital Photography and CCTV an important choice? The current research on evaluating urban outdoor space quality
Since the public has accepted Jalan Hang Jebat as one of the most vibrant streets within the heritage core zone, and the Police Department has considered it the street with the least crime, this juxtaposition has elevated the street with the added value of a unique identity. Due to this uniqueness, high revitalisation in a peaceful street that yet maintains its vibrancy makes it a truly attractive street. No doubt, this street has its own outstanding value on top of the Outstanding Universal Value established in the UNESCO World Heritage Sites listing in 2008.

1.1. Outstanding value

- world. The architectural style of its built heritage is demonstrated in the historical shop houses, while its cultural heritage is demonstrated through the continuous trading lifestyle of the former port city, drawing on sub-cultures from Asia, such as India, China and the Malay world, and from other continents, such as Europe. There are three main Outstanding Universal Values as recorded by the Cultural Heritage Action Theme (CHAT) of Penang, which are shared by Melaka: First Outstanding Universal Value: multi-trading town forged from exchanges of cultures. Second Outstanding Universal Value: living testimony of tangible and intangible heritage. Third Outstanding Universal Value: melting pot of multicultural architecture and township. The outstanding value is clearly shown in the remaining built heritage, including the early shop houses in Melaka. Almost similar built heritage of early shop houses can also be found in Georgetown, Penang. The close image and identity between Penang shop houses and Melaka shop houses are due to their similarity in terms of the social influences of a historical port city, spiced up with architecture from British Colonial, European, Asian and multi-ethnic influences.
High awareness and local appreciation of the existing shop houses have made the conservation of the historical shop houses acceptable, so the built heritage is well taken care of and remains in good condition. This continuous livability should be passed on to the next generation in order to convey the physical heritage to future end users.

1.2. Outdoor living room

Writing on particularization within the globalization age, Robertson (1992) stated that globalization has its own richness, such as the empowerment of particular local tourism activities to attract global tourism. For instance, to consume World Heritage Sites physically, tourists can only access the physical elements such as the architecture or the indoor and outdoor spaces of the sites. The activities that tourists may do are visiting galleries, shopping, eating and heritage walking. Since the space serving the public is limited, the space in between heritage streets should be treated as an outdoor living room, providing tourists with basic amenities while they are outside their accommodation. This area should be properly designed with appropriate street furniture, inclusively designed for all ages and abilities, and robust and flexible enough to cater for universal tourism and the local public. Evans (2009) promoted creative spaces in revitalizing urban areas through design principles that reduce unintended and crime-prone areas. Edwards, Griffin and Hayllar (2008) criticized the lack of integration of spaces among practitioners, researchers and policy makers. This is one of the threats to a livable heritage city, causing under-utilized and negative space in urban areas. Such space should be devoted to urban recreation activities rather than left over (Samadi & Hasbullah, 2008).

1.3.
Pedestrian pattern

Research on pedestrian pattern and intensity has been cross-examined by many city-legibility researchers. Van der Spek (2006) described how, until the millennium era, the accuracy of the Geo-Positioning System (GPS) was limited to about thirty meters from the ground because of international military authority; this led the present research to use on-site visual observation to record pedestrian linkages, sense the positive aura of the street outdoor environment, calculate pedestrian density and observe on-street activity realistically.

1.4. Street shopping

The daily indoor shopping activity in the heritage shop houses begins in the morning; the indoor shop houses close their business earlier, at 1800, to allow the Night Market to operate until midnight. During the Night Market, the street is closed to vehicular traffic, but adjacent streets such as Jalan Kubu and Jalan Tun Tan Cheng Lock remain accessible. The main activities are on-street shopping and outdoor street dining. Selected items such as arts and crafts, fashion and fabrics, bits and pieces, souvenirs, and on-street dining on local cuisine are on offer to visitors. Besides, most of the products are antique, aesthetically attractive and on sale at market prices. The number of pedestrians increases tremendously when the street is closed to vehicular movement to focus on on-street shopping. The Night Market has attracted local and international tourists to come and join this street as pedestrians. They enjoy the outdoor ambiance, and some of them come just to experience the high crowd and vibrant environment. The character of the street is supported by the nearby heritage tourism activity, where visitors come to visit other historical monuments in Bandar Hilir such as A Famosa.

1.5.
Heritage street activities

Narration on the historical background of the street, especially the making of the revitalization of Jalan Hang Jebat, needs to be considered for this research. According to one of the senior personalities of the street, Datuk Wira Gan Boon Leong, who once led the street committee and was interviewed in March 2012, the vendors along the street initially rejected the idea of enhancing their business activities by extending their business hours. They declined to prolong their on-street vending because of the low market attraction of this old heritage street. Due to this initial rejection, he had to strategize on how to make the dream come true of revitalizing Jalan Hang Jebat from its gloomy environment. Continuous motivation was given to vendors and shop-house owners, and a special fund was created to improve the façades of the shop houses, but physical enhancement was not enough to revive the heritage street. Finally, Datuk Gan found that the answer was bringing more people to come and visit the street. Only with special incentives and personal encouragement between him and the heritage street committee did the small business entrepreneurs cooperate in operating longer hours; they now keep their businesses open from 6 pm to midnight. During the early years, between 1990 and 2000, the night market operated slowly and began developing, but recent observation has shown that a very lively experience can be felt along the street. Besides the shopping activities at Jalan Hang Jebat, an open-air stage performance with minus-one equipment is provided to cater for elderly outdoor activity. The open-air stage is located at the end of Jalan Hang Jebat heading to Jalan Kubu.
The performances on this open-air stage were criticized loudly by a certain group of shop owners for their annoying loudness, but they have attracted more street shoppers to visit and join the joyful performance. Sometimes special and famous performers are also invited to perform, especially in celebrating special events such as Chinese New Year, National Day and New Year's Eve. The purpose of this kind of activity is simply to provide street shoppers with light entertainment and freedom of expression, and to create a fun environment. The idea of revitalization with the added value of positive aura enhancement within a heritage outdoor space attracted Majlis Bandaraya Melaka Bersejarah (MBMB) to venture with the Ibu Pejabat Kontinjen Polis, Melaka Tengah (IPK, Melaka Tengah). Their aim was to provide a safety measure for the benefit of city citizens and tourists visiting Jalan Hang Jebat, Melaka, which was found to be a remarkable street in Bandar Hilir, Melaka. These reasons created an urgent call for the conduct of this research.

2. Methodology

2.1. Participation, ethics and procedures for CCTV

This research was conducted between March 2012 and August 2012 and employs two types of non-obtrusive measures: Digital Photography analysis and CCTV analysis. The depth of CCTV interpretation in the research, however, does not reach the level of forensic CCTV video analysis as offered by Forensic Resources Ltd (2009). In terms of research participation, CCTV does not involve end users directly, except for recording their routine behavior in the real phenomenon of the street. CCTV video cameras are typically used to capture and record events of any suspicious circumstances and to recall events through a non-broadcast transmission system. Additional features are included in the system according to special requirements specified by the operator/owner of the CCTV sets.
At present, CCTV recording is quite a common unobtrusive measure through video camera input, as similar applications are also used indoors in banks, prisons, airports and public institutions. The selected operating system allowed a certain number of closed-circuit cameras at the selected points along the study site of Jalan Hang Jebat. Video input is recorded according to time and date, and the video output is channeled to a hard disk (HD) belonging to the police headquarters. As the owner of the data, the police pre-set the selected resolution and rate of the CCTV cameras. The police department, together with MBMB, set the CCTV at the selected points within the heritage core zone. The main aim was to collect aerial-view data at a height of about 10 meters (30 feet) from the ground, to record city pedestrian and vehicular traffic all year round. The exact positioning of the equipment was identified to ensure public safety and to function as part of crime-reduction measures for maintaining high-quality safety and surveillance. The CCTV setting in the heritage street is part of the precaution to detect and provide useful information on end users' behavioral patterns. The issues most commonly monitored include vandalism to public properties; urban crimes such as pickpocketing; unauthorized directions of end-user linkages; vehicular traffic control; and unintended signage positioning and removal. Therefore, this research did not require direct participation; end users were observed in terms of their movement pattern, behavior and intensity from the CCTV only. However, due to the limitation of the view, there are spaces that remain hidden spots.
In the conduct of this research, the researcher was required to obtain verbal and written permission, in compliance with the ethical procedure, from the Head of the Police Contingent prior to the CCTV assessment and on-street visual observation, to avoid any future difficulties at the study site.

2.2. Aerial-visual and human eye-level visual data

Aerial-view data was chosen as the most synoptic viewpoint, capturing a larger area to see the physical surface features in the spatial context between heritage buildings. Thomas and Gevanthor (2007) defined aerial photography as having originated as an early method of remote sensing that still remains in use in the current post-millennium era; it is the most common, versatile and economical form of remote sensing. Human eye-level visual data are also used in this research method. The following figures are samples of human eye-level views; the CCTV data were viewed confidentially and are not meant for any publication. Due to this limitation, this paper shares the digital photos only. Prior to the research, the researcher lodged a report with the police department to ensure her on-street presence was with consent and under the protection of the police.

Fig. 1. (a) Night market view; (b) Day view (morning) shopping along Jalan Hang Jebat

3. Aerial-Visual Assessment Criteria (AVAC)

3.1. Research design and data log sheet

This research employed unobtrusive data collection with on-street manual counting to record the on-street intensity of pedestrians passing by and shopping along the study street. The manual counts on site, from selected points at the beginning, middle and end of the street, were counter-checked against digital photography and CCTV data to validate the result. However, because the pedestrian intensity was not collected at synchronized times, this is considered a standard error. Similarly, the digital camera data were realized at the
same time as the manually collected data. The figures above represent samples of the digital photography used as on-site data for analysis in calculating the density of the street during the Night Market. Since the CCTV data are strictly confidential and limited to viewing only, this paper shares only the aerial-visual assessment data.

Table 1. Sample of A4-sized data log sheet of the Aerial-Visual Assessment (AVA)

Timeline (6 options): Morning; Afternoon; Evening; Early Night; Midnight; Night Market
Duration: 0800-1200; 1200-1600; 1600-2000; 2000-2400; 2400-0400; 1800-2400
Type of activity (3 options): Indoor Shopping; Indoor & Outdoor; On-street shopping
Intensity of people per 10 sq ft area (4 options): 1-5; 6-10; 11-15; 20-30
Movement (3 options): Direct and one way only; Meandering and to-and-fro between shopping outlets; Meandering and active
Distribution of frequency by percentage

The table above shows the columns of options for the aerial-visual data collection activity, which was completed by the researcher while stopping and walking from one end of the street to the other. The route started at the right, at the entry of Jalan Hang Jebat, and ended at the junction with Jalan Kubu. For on-site intensity data collection, an enlarged-scale plan of the street was used to record activity within a radius of 20 feet (6 m), and the viewing point and angle were recorded.

4. Results and Discussion

4.1. Activity along Jalan Hang Jebat

Based on the data collected from March 2012 to October 2012, there are three main activities in Jalan Hang Jebat. The first is Indoor Shopping, which runs in a line along the heritage shop houses. The second is Indoor and Outdoor Shopping, and the third is On-Street Shopping. The activity varies along the daytime and night-time timeline, as shown in the following table.

Table 2.
Activity along Jalan Hang Jebat

Timeline | Duration | Type of activity | % of frequency
Morning | 0800-1200 | Indoor Shopping | 11.40
Afternoon | 1200-1600 | Indoor Shopping | 22.00
Evening | 1600-2000 | Indoor & Outdoor | 18.00
Early Night | 2000-2400 | Indoor & Outdoor | 30.10
Midnight | 2400-0400 | Indoor & Outdoor | 2.00
Friday to Sunday [Night Market] | 1800-2400 | On-street shopping | 16.50
Total | | | 100.00

Fig. 2. Selected type of activity by pedestrian

The interaction between indoor and outdoor shopping in Jalan Hang Jebat is considered highly interrelated. The pedestrians along the street, who act as the main customers, move from outside to inside and vice versa, making the activity between indoor and outdoor active and robust. In fact, some of the indoor products are duplicated for promotional sale during the Night Market.

4.2. Intensity of end users, including visitors, shop owners and street vendors

Three distribution patterns were identified. The first is the Low Distribution Pattern (LDP), with a pedestrian intensity of 1-5 persons per 10-square-meter area, detected in the early morning, afternoon and evening along the walkway of the study street. Pedestrians pass by on the way to their workplaces, and tourists are also found sightseeing. This is because the street is open to vehicular traffic in both directions, which limits pedestrians from walking on the street itself. The second is the Average Distribution Pattern (ADP), with 6-10 end users per 10-square-meter area, as pedestrians pass by between peak times. The third is the High Distribution Pattern (HDP), with a pedestrian intensity of 20-30 end users per 10-square-meter area during the evening to midnight.

Table 3.
Intensity of pedestrians at Jalan Hang Jebat

Timeline | Duration | Intensity of end users per 10-square-meter area | % of frequency
Morning | 0800-1200 | 1-5 | 8.00
Afternoon | 1200-1600 | 6-10 | 25.00
Evening | 1600-2000 | 11-15 | 12.00
Early Night | 2000-2400 | 11-15 | 13.00
Midnight | 2400-0400 | 1-5 | 17.00
Friday to Sunday [Night Market] | 1800-2400 | 20-30 | 25.00
Total | | | 100.00%

Fig. 3. Result of pedestrian intensity

4.3. Pedestrian movement pattern of linear linkages

The existing pattern of pedestrian movement has created a very strong linear linkage along Jalan Hang Jebat. The pattern varies across the 24 hours of the day and across the days of the week. During the roughly six-month research and data collection period, three main patterns, varying from peak-hour to off-peak, were observed in this street. Most of the pedestrians along this street are shop owners, vendors, local and international tourists, and tourist guides who are very familiar with the tourism culture. The results in the table indicate that the morning session began with low intensity, which gradually increased as time went by. Pedestrians in this session were usually shop owners, shop operators, workers and the elderly from the closest neighborhoods. They enjoyed the walking activity due to the morning climate.

Table 4. Pedestrian movement pattern of linear linkages

Timeline | Duration | Pedestrian movement | % of frequency
Morning | 0800-1200 | Direct and one way | 12
Afternoon | 1200-1600 | To and fro shopping | 10
Evening | 1600-2000 | To and fro shopping | 18
Early Night | 2000-2400 | To and fro shopping | 22
Midnight | 2400-0400 | Direct and one way | 20
Friday to Sunday [Night Market] | 1800-2400 | Meandering and active | 18
Total | | | 100.00%
Fig. 4. Result of pedestrian movement pattern (% of frequency): Direct and One Way 32%, To and Fro 50%, Meandering and Active 18%

The table above illustrates the pedestrian movement pattern, which varies from direct, one-way movement to movement to and fro between one shopping lot and another, due to high interest in the displayed products. Pedestrians are also observed to be more active during the Night Market period, browsing adjacent lots and sometimes crossing the street to continue their shopping activities.

5. Conclusion

The empowerment and participation of shop owners and street vendors in Jalan Hang Jebat in managing the heritage street have had a great impact on the revitalization of the street. Although encouragement from the local authority, through the physical enhancement of the shop houses, provided initiatives for street improvement, the data analysis showed that the livability and vibrancy of activity in the outdoor living room depend highly on pedestrian intensity. The current trend indicates an inclination of interest towards urban heritage environments that promote local place identity, which is favored by international tourists. It is hoped that this study helps to improve pedestrian movement management in the heritage streets of Melaka.

Acknowledgements

The author wishes to dedicate special acknowledgement to the Ibu Pejabat Kontingen Polis, Melaka Tengah, for their approval, granted in May 2012, for current and future assessment of the CCTV. Unforgettable acknowledgement is dedicated to Datuk Wira Gan Boon Leong for his kind cooperation, from his residency/office at Jalan Kubu, Melaka, during data collection in conducting this research.

References

Yuen, B. (2005). Searching for place identity in Singapore. Habitat International, 29(2), 194-214.
Edwards, D., Griffin, T., & Hayllar, B. (2008). Urban tourism research. Annals of Tourism Research, 35(4).
Evans, G. (2009). Creative cities, creative spaces and urban policy. Urban Studies, 46, 5-6.
Icon Group International (Ed.). (2008). Revitalizing. Icon Group International.
Ramati, R., & New York (N.Y.) Dept. of Planning Urban Design Group (Ed.). (1981). How to save your own street.
Robertson, R. (1992). Globalization: Social theory and global culture (Theory, Culture & Society Series). London: SAGE Publications.
Rubenstein, H. M. (Ed.). (1992). Pedestrian malls, streetscapes, and urban spaces. John Wiley & Sons.
Samadi, Z., & Hasbullah, M. N. (2008). The enhancement of space in between urban recreation development. Online Refereed Journal of Malaysian Publication. http://www.malaysianpublications.blogsport.com/.
Spek, S. C. van der. (2006). Legible city - walkable city - livable city: Observation of walking patterns in the city. Proceedings of the 7th International Conference on Walking and Livable Communities, 23-25.
Taylor, L., & Museum, C. H. (Ed.). (1981). Urban open spaces. Rizzoli International Publications.
Zacharias, J. (2001). Pedestrian behaviour and perception in urban walking environments. Journal of Planning Literature, 16(1).
Forensic Resources Limited. (2009). Fidelity, reliability, loyalty. Cardiff, UK. http://www.forensicresources.co.uk/index.php.
Perryville Revitalization Main Street Program, Perryville, KY. (2010). From http://www.merchantcircle.com/business/Perryville.Revitalization.Main.Street.Program.859-332-8313.
City backs downtown revitalization plan with $8,000 investment. (2011). From http://www.lawdailyrecord.com/main.asp?SectionID=14&SubSectionID=16&ArticleID=7468.
Columbus wins Great American Main Street Award. (2011). From http://www.columbusms.info/index.php?option=com_content&view=article&id=648:columbus-wins-great-american-main-street-award&catid=109:news.
Four-point approach | Main Street Arkansas | Historic preservation. (2011). From http://www.arkansaspreservation.com/main-street/four-point-approach/.
Sutter Street revitalization groundbreaking - Tomato Pages Forums. (2011). From http://www.tomatopages.com/folsomforum/index.php?showtopic=29110.
The Greater Spruce Street Revitalization Initiative. (2011). From http://www.valueresearch.com/vrg/Paterson_Tourism_Case_Studies.pdf.
West Main Street revitalization plan, Village of Wappingers Falls. (2011). From http://www.wappingersfallsny.gov/pdf/VOWF%20-%20WEST%20MAIN%20STREET%20REVITALIZATION%20PlAn.pdf.

A-photobooks: Bridging the Gap between Virtual and Material Worlds
Emily Corrigan-Kavanagh

Background

Despite the rise of digital photography, physical photos remain significant. They enable the sharing of experiences with others through their physicality (e.g., displaying, gifting) to maintain social bonds, particularly in family contexts (Rose, 2016). In fact, digital photography is still used to create printed photobooks through online platforms such as 'Photobox.co.uk'. Furthermore, other media can be harnessed to accentuate meanings represented in the physical world through augmented reality, where objects and environments are overlaid with additional digital information (e.g., videos, audio), increasing the multi-sensory interaction opportunities that they can afford (Ross, 2009).
Project Overview

This research explores how augmented photobooks, called 'a-photobooks' (printed photobooks that are augmented by travellers with additional multimedia of their trip using a smartphone-based authoring tool), enable 'new horizons' in the way travellers can create and conceptualise their holiday experiences, by facilitating the generation of personal juxtapositions of multimedia, self-curated to heighten the poignancy of the moments they represent. It is part of a wider multidisciplinary £1.17 million EPSRC-funded research project, the Next Generation Paper Project, investigating how new paper technologies can be used to connect paper to digital media in the travel and tourism sector, spanning interaction design, user research, software and hardware development, and business innovation. Employing a specialised app and smartphone-based authoring tool installed on their device, travellers can link and play personally collated video, audio, weblinks, and digital image slideshows on their phone while reading their photobook, by taking a picture of its pages through the app.

References

Rose, G. (2016). How to look at family photographs: Practices, objects, subjects and places. In Doing Family Photography: The Domestic, The Public, and The Politics of Sentiment (pp. 11-24). Farnham: Ashgate Publishing Company.
Ross, C. (2009). Augmented reality art: A matter of (non) destination. In Digital Arts and Culture Conference (pp. 1-8). Irvine: Digital Arts and Culture.
Conclusions

A-photobooks enhance travel encounters by facilitating:
• Additional multimedia layers through a bespoke smartphone app
• Amplified awareness during travel experiences
• Greater visual/auditory context for memory signifiers
• Deeper reflection on related sensory stimuli

Methods

• One-to-one codesign workshops with nine UK-resident travellers were initially held to create a-photobooks with a specialised authoring tool, a smartphone-based app installed on the participant's device
• Subsequently, nine one-to-one follow-up semi-structured interviews were used to discuss how the resulting a-photobooks were employed about 2 weeks later
• Data (i.e. codesign workshop and interview transcripts) were analysed using grounded theory, including open, axial and selective coding
• Participants were recruited using criterion sampling: UK resident; adult (over 18 years of age); access to an Android smartphone; and taking a family and/or friendship group holiday in the next four months

Acknowledgments

Special thanks are given to research project colleagues Prof David Frohlich, Prof Caroline Scarles, Prof George Revill, Dr Jan Duppen, Dr Haiyue Yuan, Prof Miroslaw Bober, Dr Radu Sporea, Dr Brice Le Borgne, Ms Megan Beynon and Prof Alan Brown for their hard work and contribution to the project. For further information please visit: www.NextGenerationPaper.info

Results

Comparison between Human and Bite-Based Methods of Estimating Caloric Intake

JOURNAL OF THE ACADEMY OF NUTRITION AND DIETETICS. Original Research.

James N. Salley, MS; Adam W. Hoover, PhD; Michael L. Wilson, MS; Eric R.
Muth, PhD

ARTICLE INFORMATION

Article history: Submitted 14 August 2015. Accepted 3 March 2016. Available online 14 April 2016.
Keywords: Eating behavior; Self-monitoring; Calorie intake; Obesity; Weight loss
2212-2672/Copyright © 2016 by the Academy of Nutrition and Dietetics. http://dx.doi.org/10.1016/j.jand.2016.03.007

ABSTRACT

Background: Current methods of self-monitoring kilocalorie intake outside of laboratory/clinical settings suffer from a systematic underreporting bias. Recent efforts to make kilocalorie information available have improved these methods somewhat, but it may be possible to derive an objective and more accurate measure of kilocalorie intake from bite count.

Objective: This study sought to develop and examine the accuracy of an individualized bite-based measure of kilocalorie intake and to compare that measure to participant estimates of kilocalorie intake. It was hypothesized that kilocalorie information would improve human estimates of kilocalorie intake over those with no information, but that a bite-based estimate of kilocalorie intake would still outperform human estimates.

Participants/settings: Two hundred eighty participants were allowed to eat ad libitum in a cafeteria setting. Their bite count and kilocalorie intake were measured. After completion of the meal, participants estimated how many kilocalories they consumed, some with the aid of a menu containing kilocalorie information and some without. Using a train-and-test method for predictive model development, participants were randomly divided into one of two groups: one for model development (training group) and one for model validation (test group).

Statistical analysis: Multiple regression was used to determine whether height, weight, age, sex, and waist-to-hip ratio could predict an individual's mean kilocalories per bite for the training sample.
The model was then validated with the test group, and the model-predicted kilocalorie intake was compared with human-estimated kilocalorie intake.

Results: Only age and sex significantly predicted mean kilocalories per bite, but all variables were retained for the test group. The bite-based measure of kilocalorie intake outperformed human estimates with and without kilocalorie information.

Conclusions: Bite count might serve as an easily measured, objective proxy for kilocalorie intake. A tool that can monitor bite count may be a powerful assistant to self-monitoring.

J Acad Nutr Diet. 2016;116:1568-1577.

Humans are notoriously poor at estimating kilocalorie consumption, typically underestimating the kilocalorie content of meals and underreporting kilocalorie intake in food diaries over time. For example, Carels and colleagues1 found that, on average, participants tended to overestimate the kilocalorie content of unhealthy foods by 17% and underestimate the kilocalorie content of healthy foods by 16%. Also, Stanton and Tips2 found that only 28% of participants were able to estimate within 100 kcal of actual kilocalorie content. While the use of food diaries can improve kilocalorie estimates, Krall and Dwyer3 found that, on average, participants omitted about 9% of food items, resulting in an underreporting bias. Errors in kilocalorie estimation have been shown to be associated with many of the same factors associated with inaccurate self-reporting.2,4,5 These factors include body mass index (BMI; calculated as kg/m2), portion size, portion-size estimation abilities, perceived healthiness of food items, and diet history.2,4,6-8 One factor that seems to contribute to accurate kilocalorie counting, and subsequently accurate self-monitoring, is the presence or absence of kilocalorie information.
Nutrition labeling is the most readily available source of kilocalorie information for most food items, and is one of the most critical factors in determining kilocalorie intake outside of structured behavioral interventions. The Affordable Care Act included menu-labeling provisions that require restaurants that operate in 20 or more locations to provide kilocalorie information for their food items and notifications of daily recommended kilocalorie intake levels. This builds on prior local regulations and provides kilocalorie information in environments where that information has traditionally been difficult to obtain.9,10 These requirements have opened new avenues of study on the effects of kilocalorie labeling on food and portion choices, and although the epidemiological impact of these laws has not been investigated, several studies have offered insights into their effects. Indeed, several studies have provided evidence that this information leads to an increase in kilocalorie-estimation abilities and a decrease in kilocalorie consumption.11-13 However, the findings of these studies have been inconsistent, and further research on the efficacy of restaurant nutrition information is necessary.14,15 In addition, many restaurants do not meet the 20-store minimum, and subsequently are not likely to provide kilocalorie information for their menu items unless required to by their local jurisdiction. Consequently, there is an opportunity for the development and use of tools that can help the individual measure kilocalorie intake in free-living conditions. Wearable monitors are tools that attempt to automatically and objectively monitor eating behavior.
For example, Lopez-Meyer and colleagues16 have described a method that can determine food intake with an accuracy of 94% by using a device that detects chewing and swallowing. This device was recently used to predict kilocalorie intake and was found to outperform self-reports.17 Another tool that offers the potential of objectively and automatically calculating kilocalorie intake through monitoring ingestive behavior is the Bite Counter.18 The Bite Counter tracks bites of food taken, where a bite is defined as putting food in the mouth, and has been demonstrated to accurately count bites across a wide range of food items and eating utensils.18 The function of the device is to provide an automated method of tracking wrist motion to estimate kilocalorie intake. There are two versions of the Bite Counter that have been used for research purposes. The first is a "free-living" version that provides feedback to the user but stores only bite counts on the device, instead of raw sensor data, because of limited memory capacity. The second is a "tethered" version, used in laboratory studies, that does not provide users with feedback but captures the raw sensor data so that the bite-counting algorithm can be further refined. Bite count has been shown to predict kilocalorie intake without the need for information about the specific kilocalorie content of the food being consumed.19 It is possible that the prediction can be improved by calibrating the device to an individual's bite size or the kilocalories per bite that they typically ingest. For example, Lawless and colleagues20 found height to strongly correlate (r=.75) with sip size. In addition, Scisco and colleagues19 found that men typically consumed 19 kcal/bite, whereas women consumed 11 kcal/bite, indicating that sex is a significant contributor to kilocalories per bite.
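Taken together, these per-bite averages suggest a simple first-order estimator: multiply a measured bite count by a sex-specific mean kilocalories-per-bite value. The sketch below is purely illustrative (the function name and structure are hypothetical, not part of the Bite Counter software), using the averages reported by Scisco and colleagues.

```python
# Illustrative only: a first-order bite-based intake estimate using the
# sex-specific per-bite averages reported by Scisco and colleagues
# (men ~19 kcal/bite, women ~11 kcal/bite). Names here are hypothetical.
KCAL_PER_BITE = {"male": 19.0, "female": 11.0}

def estimate_kcal_from_bites(bite_count, sex):
    """Estimate kilocalorie intake as bite count times a mean kcal/bite."""
    return bite_count * KCAL_PER_BITE[sex]

# Example: an 80-bite meal.
print(estimate_kcal_from_bites(80, "male"))    # 1520.0
print(estimate_kcal_from_bites(80, "female"))  # 880.0
```

Such a flat lookup ignores individual variation, which is exactly what motivates calibrating the per-bite rate to individual characteristics such as height, weight, age, and waist-to-hip ratio.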
However, studies by Scisco and colleagues and others21,22 did not replicate earlier findings of a relationship between bite size and BMI: they found no significant difference in kilocalories per bite between participants in different BMI categories. This indicates the need for further replication and investigation into the relationship between inter-individual differences and kilocalories per bite. Furthermore, a regression equation for predicting kilocalories per bite from individual variables, which could be beneficial to other researchers hoping to derive kilocalorie intake from bite count, has not been published. The present study developed a kilocalories-per-bite equation that allows kilocalorie intake to be derived from bite count and easily obtained body dimension and demographic variables. Notably, this model does not require information about the kilocalorie content of the food items consumed. Performance of this equation was compared with participants' estimates of kilocalories, with and without the presence of a menu containing kilocalorie information. There were three primary hypotheses: 1) physical and demographic variables, including height, weight, waist-to-hip ratio (WHR), sex, and age, would independently predict kilocalories per bite; 2) participant estimates of kilocalorie intake would be better in the presence of kilocalorie information than in its absence; and 3) an estimate of kilocalorie intake derived from bite count would be more accurate than participant estimates of kilocalorie intake.

METHODS
Participants
Two hundred eighty participants (132 male) were recruited from the faculty, staff, and student population of Clemson University and the surrounding area via fliers, e-mails, and word of mouth. Participants were offered $10 and a free meal for participating in the study. Those with a self-reported history of eating disorders were excluded from the study.
Participants were selectively sampled to obtain variance in age, sex, and ethnicity representative of the local population. The study was approved by the Clemson University Institutional Review Board. After filling out an online screening questionnaire, all participants provided informed consent before participating in the study.

Procedure
Upon recruitment, participants completed an online demographics and screening questionnaire. Participants were scheduled to complete either a lunch (11:00 AM or 1:00 PM) or dinner (5:00 PM or 7:00 PM) eating session in groups of 2 to 4. Before arrival, participants were randomly assigned to either the "kilocalorie information given" or the "no kilocalorie information given" condition using an online randomization tool. Condition placement was fully randomized, without consideration of demographic characteristics or physical differences. Upon arrival, participants' height and weight were measured. The WHR of each participant was measured using a MyoTape (Accu-Measure) tape measure. Waist circumference was measured by wrapping the tape measure around the smallest circumference of the abdomen; hip circumference was measured around the widest circumference of the buttocks. The measure was adjusted snugly, but not enough to compress the skin. Measurements were taken to the nearest half-inch. Participants were then led to an on-campus dining hall. At the dining hall, the participants were instructed by a pair of undergraduate assistants to eat as much as they liked. To allow portion size and food selection to vary, participants were allowed to choose any of the food items available in the dining hall that day, and to go back for as many courses as they wished.
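The anthropometric step above reduces to two small calculations: rounding each tape measurement to the nearest half-inch and dividing waist by hip circumference. A minimal sketch (helper names are ours, not the study's):

```python
# Waist-to-hip ratio from tape measurements recorded to the nearest
# half-inch, as in the protocol above. Helper names are illustrative.

def to_nearest_half_inch(measurement):
    # Round to the nearest 0.5 inch (eg, 34.3 -> 34.5).
    return round(measurement * 2) / 2

def waist_to_hip_ratio(waist_in, hip_in):
    return to_nearest_half_inch(waist_in) / to_nearest_half_inch(hip_in)

print(waist_to_hip_ratio(34.3, 40.1))  # 34.5 / 40.0 ≈ 0.8625
```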
The dining hall had a wide variety of foods to choose from, including items that were available daily, such as a salad bar, a pizza and pasta bar, and a sandwich station, as well as many other items that varied from day to day. Participants' food choices were not restricted in any way other than by what was available in the cafeteria, and participants were allowed to select caloric beverages (such as sweet tea and soda). Participants consumed their meals at an instrumented table (Figure 1). Video cameras mounted in the ceiling above each participant recorded the eating session. A course was defined as the time between the participant sitting down with food, prompting the experimenters to begin the recording process, and the participant getting up from the table, either to get another course or to end the session. Video recording was stopped between courses and restarted once the participant had been reseated for the next course. Upon sitting at the table with food, the tethered Bite Counter was placed on the participant's dominant wrist, determined by self-report. Participants were instructed to eat and interact as naturally as possible; to facilitate this, their eating behavior was not restricted in any way to accommodate the Bite Counter: they were allowed to use napkins and to eat with whichever hand they preferred, regardless of Bite Counter position. After each participant had made their food selections, an undergraduate assistant wrote down each food item, the portion size, and any customization made to the item (eg, adding condiments). Assistants cross-checked the participants' selections against a menu provided by the dining hall that listed each food item to be served in the cafeteria that day, along with a reference portion size for each item and the kilocalorie content of that portion (eg, 1 cup of seasoned corn contains 103 kcal).
Upon completion of the meal, participants were given a custom post-meal questionnaire that sought to gather data on how the cafeteria setting and the Bite Counter influenced their eating patterns. Those data are not reported in this article. Pertinent to this study, however, the questionnaire also asked participants to estimate their overall meal kilocalorie intake with the following sentence: "Please estimate the number of calories you just consumed." Participants in the kilocalorie information given condition were given the daily menu with kilocalorie information for their session to assist them in their estimations. The no kilocalorie information condition did not receive this information. After completing the questionnaire, the participants were debriefed: they were told that the purpose of the study was to compare their kilocalorie estimates with estimates derived from the Bite Counter and to improve the accuracy of the device. Any questions they had were answered, and they were free to leave.

Figure 1. The instrumented table. Each station has a tethered Bite Counter and a camera mounted in the ceiling to record the meal.

Measures
Measured Kilocalorie Intake. Kilocalorie intake was determined using an adaptation of the digital photography method described by Williamson and colleagues.23 Their study found visual estimations of portion consumption based on photographs to be strongly correlated with visual estimations based on direct observation, estimating most plate food waste to within a mean of 6 g. Three raters were trained to use the digital photography method to estimate plate waste.
The training process was led by the lead author of this article and involved studying the methods used in the 2003 study by Williamson and colleagues.23 To practice the method, the raters were shown each of the dishes used in the study, along with standard portions of select food items (eg, 1 cup of water in a glass, 1 cup of rice on a large plate). They then participated in several joint practice sessions comparing starting portions with completed meals and estimating plate waste as the percentage of the selected portion consumed, using the cafeteria dishes as a reference. After training, the raters estimated each participant's selected portion of each food item in comparison with a standard reference portion obtained from the daily menus provided by the dining hall, using still frames from the video recordings of the food on each participant's plate (Figure 2A). For example, if the raters estimated that a participant had selected twice the standard portion of mashed potatoes, they recorded this as 200% of the standard portion. Standard dishes were used for all meals, so the selected portion could be reliably compared with the reference portion. The raters then compared the initial video still frame with a still frame taken at the end of the course, after the participant had finished eating (Figure 2B), and each rater estimated the percentage of the selected portion actually consumed. Continuing the example, if the participant consumed only half of their selected portion of mashed potatoes, the raters would estimate that they consumed 50% of the selected portion. The percentage of the reference portion consumed was then calculated by multiplying percent of reference selected by percent of the selected portion consumed. Each food item was examined by all three raters. To examine inter-rater reliability, an intraclass correlation was calculated on the percent-of-reference-consumed ratings of the three raters.
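Inter-rater agreement of this kind can be computed directly from the classic ANOVA decomposition. The sketch below implements a two-way random-effects, absolute-agreement intraclass correlation in its single-measures form, ICC(2,1); the single-vs-average-measures choice is an assumption, and the example ratings are hypothetical.

```python
# ICC(2,1): two-way random effects, absolute agreement, single measures,
# computed from the ANOVA mean squares. An assumption about the exact
# variant used in the study; the rating data below are hypothetical.

def icc_2_1(ratings):
    """ratings: list of rows, one per rated item; columns are raters."""
    n = len(ratings)           # items (subjects)
    k = len(ratings[0])        # raters
    grand = sum(sum(r) for r in ratings) / (n * k)
    row_means = [sum(r) / k for r in ratings]
    col_means = [sum(r[j] for r in ratings) / n for j in range(k)]
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)
    ss_total = sum((x - grand) ** 2 for r in ratings for x in r)
    ss_err = ss_total - ss_rows - ss_cols
    msr = ss_rows / (n - 1)                 # between-items mean square
    msc = ss_cols / (k - 1)                 # between-raters mean square
    mse = ss_err / ((n - 1) * (k - 1))      # residual mean square
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Three raters in near agreement on five items (percent consumed):
ratings = [[100, 95, 105], [50, 55, 50], [75, 70, 80], [25, 30, 25], [60, 60, 65]]
print(round(icc_2_1(ratings), 2))  # 0.98
```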
The analysis was conducted using a two-way, random-effects model with an absolute-agreement definition, and agreement was found to be very strong at 0.86.24 Subsequently, the percentage of the reference portion consumed for each food item was determined by averaging the three ratings.

Figure 2. Before (A) and after (B) screenshots used to visually estimate starting portions and portions consumed.

Figure 3. Tethered version of the Bite Counter.

Once food selection, portion selection, and portion consumption had all been verified, total kilocalorie intake for a specific food item was calculated as follows: total kilocalorie intake = kilocalories of reference portion × % of reference portion selected × % of portion consumed. For example, the menu lists buttermilk mashed potatoes at 124 kcal per 1-cup reference portion. If a participant selected half of the standard serving and the mean rating for portion consumed was 80%, total kilocalorie intake for that item would be: 124 (kilocalories of reference) × 0.5 (% of reference selected) × 0.8 (% of portion consumed) = 49.6 kcal. Because of the high level of customization of some selected food items, the kilocalorie information provided by the dining hall could not be used for those items. These items consisted primarily of sandwiches, which the participants made themselves at the sandwich station, and salads made from the salad bar. Kilocalorie intake for these items was determined using the Automated Self-Administered 24-Hour Recall developed by the National Cancer Institute.25 A single trained experimenter entered each of these items using the graphical tools provided by the Automated Self-Administered 24-Hour Recall and the portion-consumption information obtained as described previously. The Automated Self-Administered 24-Hour Recall uses an extensive food database and provides kilocalorie information for each item entered.
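The per-item arithmetic above reduces to a one-line product, summed over items for a participant's total; a minimal sketch (variable names are ours):

```python
# Per-item intake: reference-portion kcal × fraction of the reference
# portion selected × fraction of the selected portion consumed.

def item_kcal(reference_kcal, fraction_selected, fraction_consumed):
    return reference_kcal * fraction_selected * fraction_consumed

def total_kcal(items):
    """items: iterable of (reference_kcal, fraction_selected, fraction_consumed)."""
    return sum(item_kcal(*item) for item in items)

# Worked example from the text: 124-kcal reference portion of mashed
# potatoes, half the reference selected, 80% of that eaten.
print(item_kcal(124, 0.5, 0.8))  # ≈ 49.6 kcal, matching the worked example
```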
Once the kilocalorie intake of each food item had been determined, total kilocalorie intake for a participant was calculated by summing that participant's kilocalorie intake across all selected items.

Bite-Based Model of Kilocalorie Intake. Bites were detected using the tethered version (Figure 3) of the Bite Counter and the bite-detection algorithm described by Dong and colleagues.18 The Bite Counter is worn on the wrist and identifies a movement pattern, measured by a gyroscope, that is characteristic of moving food from a dish to the mouth. It has been shown to detect 86% of bites in settings where food items and utensil use are unrestricted (such as the one in the present study), with about 20% of recorded bites being false positives. Although true bite count was measured via the video recordings, the automatically detected bite counts were used for the present study, because we wanted to develop a model that could be used in uncontrolled settings where true bite count would not be known. Estimated kilocalories per bite were derived using the predictive equation described in the Results section. In short, kilocalories per bite were predicted from individual demographic and physical characteristics; specific foods were not taken into consideration. Estimated kilocalories per bite were then multiplied by bite count to obtain the bite-based estimate of kilocalorie intake. Bite-based error was then calculated for the test group by subtracting each participant's true kilocalorie intake from their bite-based estimate of kilocalorie intake.

Participant Estimates of Kilocalorie Intake. Participant kilocalorie estimations were provided by each participant at the end of their experimental session. Participants in the kilocalorie information given condition were allowed to use the menus provided by the cafeteria to aid in their estimations.
Participant error was calculated by subtracting each participant's true kilocalorie intake from their estimate. For example, if a participant ate 750 kcal and their estimate was 1,000 kcal, their error would be 250, an overestimate of 250 kcal.

Study Design
The first part of the present study used multiple regression to develop a model of kilocalories per bite based on height, weight, WHR, sex, and age. To develop and validate this model from the same sample, a train-and-test paradigm was used. After data collection, participants were randomly assigned to either the training group or the test group using a 50/50 split balanced for kilocalorie information condition (ie, an equal number of participants from both kilocalorie information conditions were assigned to the training and test groups). The model was developed on the training group, and the resulting intercept and slopes for the predictors were applied to the test group to determine the bite-based estimate of kilocalorie intake and its respective error for that group. The second part used two t tests, with estimation method (participant vs bite-based) as a paired-samples variable and kilocalorie information presence (or absence) as an independent-samples variable. This analysis was performed within the test group to validate the equation generated from the training group and to compare the bite-based model of kilocalorie intake with human estimates.

Data Analyses
All analyses were performed using SPSS version 17 (2008, SPSS Inc). The hypothesis that height, weight, sex, WHR, and age would predict kilocalories per bite was tested within the training group using multiple regression.
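The balanced 50/50 assignment described above amounts to shuffling and halving each condition separately, so both kilocalorie-information conditions are equally represented in the training and test groups. A sketch under that reading (function and variable names are illustrative, not from the study's code):

```python
# Balanced 50/50 train/test assignment: shuffle within each condition,
# then split each condition in half. Names are illustrative.
import random

def balanced_split(participants, key, seed=0):
    rng = random.Random(seed)
    train, test = [], []
    groups = {}
    for p in participants:
        groups.setdefault(p[key], []).append(p)
    for members in groups.values():
        rng.shuffle(members)
        half = len(members) // 2
        train.extend(members[:half])
        test.extend(members[half:])
    return train, test

# 20 hypothetical participants, 10 per condition:
people = [{"id": i, "info_given": i % 2 == 0} for i in range(20)]
train, test = balanced_split(people, "info_given")
print(len(train), len(test))  # 10 10
```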
The hypothesis that participant estimates of kilocalorie intake would improve with kilocalorie information was tested using a between-subjects t test comparing estimates of kilocalorie intake made in the presence of kilocalorie information with those made without it. This test was performed with the training and test samples combined to increase power. The hypothesis that an estimate of kilocalorie intake obtained from a bite-based model would outperform participant estimates was tested using a within-subjects t test comparing human estimates of kilocalorie intake in the presence of kilocalorie information with the bite-based estimate of kilocalorie intake, derived using the equation generated from the training group. This analysis was performed within the test group only. All tests were performed with a type I error rate of 0.05.

RESULTS
Sample Statistics
Of the 280 participants recruited for this study, 11 had data recording errors resulting from missing or corrupted video files, which made determining true kilocalorie intake or bite count impossible. These 11 were excluded from further analyses. Outlier analyses were conducted for the remaining 269 participants by regressing the hypothesized predictors on the whole dataset and calculating Cook's distance, Mahalanobis distance, and studentized deleted residuals. Six participants were identified as outliers based on their leverage statistics, indicating that they had a far greater influence on the model than would be expected from a single data point. Further inspection showed that these participants had unusually high kilocalorie intake or unusually high or low bite counts. These six participants were excluded from further analysis, leaving a final sample size of 263 participants. Demographic characteristics for the remaining participants are shown in Table 1. Participants had a mean age of 29.73 years (standard deviation=11.70 years; range=18 to 75 years).
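For the single-predictor case, the leverage-based outlier screening described above can be sketched in a few lines. The study itself used several predictors plus Mahalanobis distance and studentized deleted residuals; this simplified Cook's-distance version is ours.

```python
# Cook's distance for simple linear regression: flags points whose
# removal would substantially change the fitted line.

def cooks_distances(x, y):
    n = len(x)
    xbar = sum(x) / n
    sxx = sum((xi - xbar) ** 2 for xi in x)
    b1 = sum((xi - xbar) * yi for xi, yi in zip(x, y)) / sxx  # slope
    b0 = sum(y) / n - b1 * xbar                               # intercept
    resid = [yi - (b0 + b1 * xi) for xi, yi in zip(x, y)]
    p = 2                                    # fitted parameters: intercept, slope
    mse = sum(r * r for r in resid) / (n - p)
    leverage = [1 / n + (xi - xbar) ** 2 / sxx for xi in x]
    return [r * r / (p * mse) * h / (1 - h) ** 2
            for r, h in zip(resid, leverage)]

# A high-leverage outlier (last point) dominates the distances:
x = list(range(10))
y = [2.0 * xi for xi in x]
y[-1] += 20.0
d = cooks_distances(x, y)
print(d.index(max(d)))  # 9
```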
Twenty-five participants identified themselves as African American, 2 as American Indian, 28 as Asian or Pacific Islander, 184 as white, 11 as Hispanic, and 13 as other. Participants had a mean BMI of 25.36 (standard deviation=5.18; range=17.4 to 46.2), with 4 underweight (BMI <18.5), 156 normal weight (BMI 18.5 to 24.9), 64 overweight (BMI 25.0 to 29.9), and 39 obese (BMI ≥30.0), according to the National Heart, Lung, and Blood Institute classifications.26 Thirty participants responded "yes" to the question, "Do you follow a special diet?" Of these, 16 explicitly stated that they followed a diet for the purpose of improving diet quality, weight loss, or weight maintenance. Participants were not asked whether they had experience with kilocalorie counting or self-monitoring.

Meal Statistics
Five hundred fifteen unique food items were available for participants to choose from. A few of these food items were available in the cafeteria every day and subsequently show a larger representation in the dataset than other items, most of which were available only once or twice during the course of data collection. Participants each consumed multiple food items; a total of 1,844 food items were consumed across the sample. Of these, 221 were identified as having a high level of customization, requiring the use of the Automated Self-Administered 24-Hour Recall to determine kilocalorie content, as described previously. The 10 most commonly selected food items, along with the number of times they were chosen, are shown in Table 2.

Regression on the Training Group
Participant data were randomly assigned to either the training or the test group, with both groups balanced for kilocalorie information condition. Descriptive statistics for all predictors and dependent variables are shown for both groups in Table 1, with independent-samples t tests used to confirm no significant differences between the two groups.
Correlations between the independent and dependent variables within the training group are presented in Table 3. Of note, kilocalories per bite shared significant but weak positive correlations with height and weight and a weak negative correlation with age. There was also a moderate positive correlation between kilocalories per bite and sex (females=0, males=1). A multiple regression analysis was run on the 131 participants assigned to the training group, regressing kilocalories per bite on age, sex, height, weight, and WHR. Results from the regression are shown in Table 4. Only age (B=−0.128; t(125)=−2.197; P<0.05) and sex (B=6.167; t(125)=2.779; P<0.05) significantly predicted kilocalories per bite. The model explained a significant amount of variance in kilocalories per bite (adjusted R²=0.154; F(5, 119)=5.498; P<0.001). Despite being nonsignificant, height, weight, and WHR were retained in the prediction equation because prior research suggests that they may actually be significant contributors to kilocalories per bite.19-21 This leaves the regression equation used on the test group as follows: estimated kilocalories per bite = −0.128 × age + 6.167 × sex (females=0) + 0.034 × height + 0.035 × weight − 12.012 × WHR + 22.294. In a train-and-test paradigm, the reliability of a regression model (ie, the likelihood of finding different coefficients in similar samples) is assessed via shrinkage.

Table 1. Descriptive statistics and t tests for all demographic characteristics, predictors, and dependent variables for participants in a study to develop and test a model of kilocalorie intake based on bite count, with data shown for the total sample, the group used to develop the model (training group), and the group used to test the model (test group)

Variable | Total sample (n=263) | Training group (n=131) | Test group (n=132) | t (df=261)a,b | P value
n (%):
Ethnicity
  African American | 25 (9.5) | 10 (7.6) | 15 (11.4) | — | —
  American Indian | 2 (0.8) | 1 (0.8) | 1 (0.8) | — | —
  Asian or Pacific Islander | 28 (10.7) | 13 (9.9) | 15 (11.4) | — | —
  White | 184 (70) | 98 (74.8) | 86 (65.2) | — | —
  Hispanic | 11 (4.2) | 3 (2.3) | 8 (6.1) | — | —
  Other | 13 (4.9) | 6 (4.6) | 7 (5.3) | — | —
BMIc category
  Underweight (BMI <18.5)d | 4 (1.5) | 2 (1.5) | 2 (1.5) | — | —
  Normal weight (BMI 18.5-24.9)d | 156 (59.3) | 78 (59.5) | 78 (59) | — | —
  Overweight (BMI 25.0-29.9)d | 64 (24.3) | 30 (22.9) | 34 (25.8) | — | —
  Obese (BMI >30.0)d | 39 (14.8) | 21 (16) | 18 (13.6) | — | —
Following a special diet | 30 (11.3) | 15 (11.5) | 15 (11.4) | — | —
Dieting to lose weight | 16 (6.1) | 6 (4.6) | 10 (7.6) | — | —
Sex
  Male | 127 (48.3) | 57 (43.5) | 70 (53) | — | —
  Female | 136 (51.7) | 74 (56.5) | 62 (47) | — | —
mean±SDe:
BMIf | 25.36±5.18 | 25.67±5.42 | 25.06±4.93 | 0.95 | 0.341
Age (y) | 29.73±11.70 | 30.68±13.33 | 28.8±9.78 | 1.31 | 0.192
Height (in) | 67.76±3.83 | 67.6±3.99 | 67.9±3.68 | −0.66 | 0.513
Weight (lb) | 166.25±36.70 | 167.8±40.07 | 164.71±33.09 | 0.68 | 0.496
Waist-to-hip ratio | 0.855±0.087 | 0.847±0.086 | 0.864±0.086 | −1.59 | 0.113
Bites | 72.73±24.66 | 71.63±23.66 | 72.94±26.33 | −0.43 | 0.667
Kilocalories per bite | 19.16±8.76 | 18.68±8.88 | 19.63±8.65 | −0.88 | 0.38
with t (df):
Kilocalories per bite (normal weight) | 19.13±8.31 | 18.92±8.07 | 19.34±8.58 | 0.320 (154) | 0.75
Kilocalories per bite (overweight) | 19.28±9.79 | 18.11±11.26 | 20.31±8.33 | 0.895 (62) | 0.374
Kilocalories per bite (obese) | 19.92±8.96 | 19.20±8.53 | 20.77±9.62 | 0.540 (37) | 0.592
a df=degrees of freedom.
b The t test checks for significant differences between the training and test groups, to ensure that a regression equation derived from the training group could appropriately be applied to the test group.
c BMI=body mass index; calculated as kg/m².
d According to National Heart, Lung, and Blood Institute classifications.
e SD=standard deviation.
f Not included as a predictor.
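The reported prediction equation can be packaged as a small function; the coefficients are taken verbatim from the text, and the example participant is hypothetical.

```python
# Training-group equation for kilocalories per bite (coefficients from
# the text): sex coded 0 = female, 1 = male; height in inches; weight
# in pounds; whr = waist-to-hip ratio.

def kcal_per_bite(age, sex, height_in, weight_lb, whr):
    return (-0.128 * age + 6.167 * sex + 0.034 * height_in
            + 0.035 * weight_lb - 12.012 * whr + 22.294)

def bite_based_intake(bites, age, sex, height_in, weight_lb, whr):
    """Bite-based kilocalorie estimate: predicted kcal/bite × bite count."""
    return bites * kcal_per_bite(age, sex, height_in, weight_lb, whr)

# Hypothetical participant: 30-year-old man, 68 in, 165 lb, WHR 0.86,
# eating a 73-bite meal (roughly the sample's mean bite count).
rate = kcal_per_bite(30, 1, 68, 165, 0.86)           # ≈ 22.4 kcal/bite
estimate = bite_based_intake(73, 30, 1, 68, 165, 0.86)
```

Note that no food-specific information enters the calculation; everything is derived from the wearer's demographics and the bite count.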
Shrinkage is assessed by comparing the R² value of the training group to the squared correlation of the predicted values and the observed values within the test group. The predicted values shared a Pearson correlation of 0.374 with the observed values, yielding a shrinkage value of 0.014 (1.4%, calculated by subtracting the squared correlation from the adjusted R² of the regression).

Table 2. The 10 most commonly selected food items in a cafeteria setting and the number of times they were selected
Food item | No. of times selected
Water | 145
Salad | 141
Sweet tea | 79
French fries | 75
Pepperoni pizza | 35
Pasta | 34
Diet cola | 33
Cheese pizza | 25
Ice cream | 25
Bread side | 24

Effects of Estimation Method and Kilocalorie Information on Estimation Error
An independent-samples t test was used to compare participant estimation error between the kilocalorie information conditions using the whole sample (training and test groups combined). Mean estimation error for the group provided with kilocalorie information was −184.96±501.54 kcal, whereas mean estimation error for the no-information group was −349.04±748.20 kcal, a significant difference (t[220.358]=−2.078; P<0.05). Figure 4 displays boxplots of the estimation error for these two groups, showing a wide range of error and a tendency toward underestimation in both. A paired-samples t test was used to compare the bite-based method with the best human-based estimation (the kilocalorie information given condition). Mean estimation error was −257.36±790.22 kcal for participant estimates and 71.21±562.14 kcal for the bite-based method, a significant difference (t[67]=−3.683; P<0.001), such that estimation error was lower for the
bite-based method. Boxplots of participant error and model prediction error are shown in Figure 5, indicating a smaller range of error for the bite-based method and no apparent bias toward over- or underestimation. To check for differences in the bite-based model's accuracy related to BMI, a one-way analysis of variance was performed on bite-based estimation error for normal-weight (n=82), overweight (n=28), and obese (n=22) individuals in the test group. The analysis revealed no main effect of BMI category (F[2, 129]=0.436; P=0.648). Furthermore, a Pearson correlation showed no relationship between BMI and bite-based estimation error (r=−.115; P=0.186).

Table 3. Pearson correlations for the predictors and kilocalories per bite for a group of participants used to develop a model of kilocalorie intake based on age, sex, height, weight, and waist-to-hip ratio
Predictors | Age (y) | Sex (females=0) | Height (in) | Weight (lb) | Waist-to-hip ratio | Kilocalories per bite
Age (y) | 1 | | | | |
Sex (females=0) | −.128 | 1 | | | |
Height (in) | −.091 | .642a | 1 | | |
Weight (lb) | .247a | .359a | .450a | 1 | |
Waist-to-hip ratio | .058 | .594a | .371a | .434a | 1 |
Kilocalories per bite | −.209b | .372a | .285a | .191b | .152 | 1
a Correlation is significant at the 0.01 level (two-tailed).
b Correlation is significant at the 0.05 level (two-tailed).

DISCUSSION
This study demonstrated that an estimate of an individual's kilocalorie intake, using bite count and mean kilocalories per bite determined by a formula based on demographic and physical characteristics, can potentially predict kilocalorie intake more accurately than individual estimations, even with the aid of kilocalorie information.
This is consistent with previous research showing that individuals are poor at accurately estimating the kilocalorie content of meals, tending to err toward underestimation.2 This study also replicated previous work showing that kilocalorie information can increase an individual's ability to estimate kilocalorie intake.11,12 Policies that encourage providing restaurant diners with kilocalorie information for food options show promise for allowing individuals to better estimate their kilocalorie intake and possibly even alter their food choices. However, these estimations are still biased and largely inaccurate, and an accurate, objective measure of kilocalorie intake could significantly help individuals self-monitor. Several researchers are currently working on the development of objective and automatic measures of free-living kilocalorie intake. One research group has developed a device that can automatically detect chewing and swallowing, and has published a study that derived individualized estimates of kilocalorie intake from chews and swallows, which outperformed diet diaries.17,27 Other researchers are working on developing pattern-recognition software that can automatically identify food items and portion sizes from digital photography.28,29 Using data obtained from the CALERIE (Comprehensive Assessment of

Figure 5. Boxplot of estimation error for two methods of estimating kilocalorie intake. Participant estimation error describes the difference between the participants' estimated kilocalorie intake and their actual kilocalorie intake. Bite-based estimation error describes the difference between participants' model-estimated kilocalorie intake and actual kilocalorie intake.
Negative values indicate underestimation, and positive values indicate overestimation.

Table 4. Regression results for a model of kilocalorie intake based on age, sex, height, weight, and waist-to-hip ratioa
Predictors | Bb | Standard error | βc | t | P value
(Constant) | 22.294 | 18.265 | — | 1.221 | 0.225
Age (y) | −0.128 | 0.058 | −.195 | −2.197 | 0.030
Sex (females=0) | 6.167 | 2.219 | .350 | 2.779 | 0.006
Height (in) | 0.034 | 0.249 | .016 | 0.138 | 0.890
Weight (lb) | 0.035 | 0.022 | .159 | 1.543 | 0.126
Waist-to-hip ratio | −12.012 | 10.900 | −.120 | −1.102 | 0.273
a Adjusted R²=0.154; standard error of the estimate=8.10; F(5, 119)=5.498; P<0.001.
b Unstandardized coefficient.
c Standardized coefficient.

Long-Term Effects of Reducing Intake of Energy) study, Sanghvi and colleagues30 describe a method of estimating long-term kilocalorie intake using only demographic variables and changes in weight that is accurate to within 132 kcal/day for most participants. These new methods offer promising avenues for individuals to track their kilocalorie intake. Accurate kilocalorie counting hinges on accurate estimates of portion size and of relative energy density, or kilocalories per gram, of food items. However, weighing individual foods and looking up their energy densities is cumbersome and, even for the most dedicated dieters, not very practical for daily kilocalorie counting when meals are eaten both in and out of the home and are prepared both by oneself and by others. Most people estimate the kilocalorie content of meals by comparing portion sizes with assumed standard serving sizes, or by simply

Figure 4. Boxplot of kilocalorie estimation error for two groups of participants estimating their kilocalorie intake. Participants in the kilocalorie information given group were provided a menu containing kilocalorie information for the food items they consumed to assist in their estimations; participants in the no kilocalorie information given group were not.
Estimation error describes the difference between the participants' estimates and their actual intake. Negative values indicate underestimation and positive values indicate overestimation.

relying on their own knowledge and simple heuristics.1 However, the goal of the bite-based measure is to provide individuals with a kilocalorie intake measure when portion-size and energy-density information is either unavailable or difficult to obtain. It could also offer the user a convenient, automated alternative to manually calculating the kilocalorie content of a meal, encouraging adherence to dietary changes. An objective estimate of kilocalorie intake could also prove to be a valuable tool for researchers. As mentioned previously, most methods used to calculate meal-level variables affecting kilocalorie intake depend on self-reports, which should not be relied upon for scientific conclusions regarding kilocalorie intake or for predictive models, because of underreporting.31 While the doubly labeled water method can accurately measure kilocalorie intake over a period of a few days, it does not provide kilocalorie intake information at the meal level, which limits its usefulness for explaining differences in free-living eating behavior between individual meals. Of course, for the Bite Counter to be useful as an objective tool for monitoring eating behavior at individual meals, the relationship between bites and kilocalorie intake must be further explored and refined.

Limitations and Future Directions
One of the primary limitations of this study was the method used to measure true kilocalorie intake.
While the digital photography method has been shown to determine the kilocalorie content of foods reasonably accurately, a study design that incorporated premeasured portions with known kilocalorie content would have yielded a more accurate measure of kilocalorie intake.23 However, a bite-based measure of kilocalorie intake is better suited to free-living conditions in which food and portion choices vary significantly, and to that end this study was designed to capture a wide variety of food items and portion sizes. A related limitation was the decision to include highly customized items (which were therefore variable in kilocalorie content), such as salads and sandwiches, in the analysis. This decision introduced two problems. First, it required the experimenters to use an alternative method (the Automated Self-Administered 24-Hour Recall) to determine their kilocalorie content with an acceptable level of accuracy, and the accuracy obtained would be expected to vary more than for items that could be reliably measured using the digital photography method. Second, because the kilocalorie information provided to the participants gave only an estimate of the mean kilocalorie content of those items, participant estimation accuracy may naturally be worse for these items than for other, less variable items. Another limitation of this study is the simple model used to predict kilocalories per bite, and the decision to retain nonsignificant variables for the test group. The model does not include any aspects of the food items or the meal itself that could reasonably be assumed to predict kilocalories per bite, such as food item energy density, item consistency (ie, whether an item was solid, amorphous, or a beverage), and item shape, which limits the model's predictive capabilities.
The variables chosen to predict kilocalories per bite were selected based on their availability to the average user; one could calibrate the Bite Counter to their individual physical and demographic variables and have a reasonable proxy for kilocalorie intake. Nonsignificant variables were retained because they have been shown to be related to bite size in previous studies; it is unclear why there was no significant relationship in the present study. Future studies should examine more complex models and test for interactions between variables, perhaps taking a multi-level model approach, examining bite-, food item-, meal-, day-, and individual-level variables, which could potentially refine the accuracy of the estimated kilocalories per bite. It would also be preferable to measure individuals across several meals, but within a context in which the participant's kilocalorie intake can be accurately measured. This study could be viewed as having limited external validity, as food choices were necessarily constrained, and there is evidence that participants display different eating patterns between free-living and laboratory meals. For example, Petty and colleagues showed that participants varied their eating rates between free-living and laboratory-controlled meals.32 However, it should be noted that the cafeteria environment does represent a step between the laboratory and free living, with a wide variety of food choices in a social eating environment. Nonetheless, future studies should examine the differences between free-living and controlled environments and their effect on bite size and kilocalories per bite to determine the generalizability of laboratory-derived kilocalories per bite equations. No data were collected on the participants' background and familiarity with self-monitoring.
It is possible that dietetics students and registered dietitian nutritionists, and those with a significant amount of experience with self-monitoring, could have performed better at estimating their kilocalorie intake, particularly in the no-kilocalorie-information-given condition. This could have reduced external validity. However, participants were recruited from throughout the university, with no particular emphasis on any specific major or department, so there is no reason to believe that individuals with a strong background in nutrition or dietetics would have been overrepresented. Finally, while bite count may foreseeably be used to track kilocalorie intake, it is strictly a tool for monitoring and controlling portion size and alone cannot encourage users to improve their diet quality. However, future versions of the Bite Counter may incorporate food selection either by adjusting the kilocalories per bite based on the energy density of the food items selected or by incorporating a "point" system, similar to other weight-loss programs, in which users would be given points as feedback instead of kilocalories, and the points would be adjusted based on bite count, energy density, and the quality of the food item.

CONCLUSIONS
A bite-based measure of kilocalorie intake shows promise as a tool both for individual self-monitoring and for researchers monitoring free-living kilocalorie intake. It is an easily collected and objective physiological signal based on wrist motion that could be refined to more accurately estimate kilocalorie intake with the inclusion of a measure of energy density and individual variables that are indicative of kilocalories per bite. With further development, a measure of bite count could be valuable to individuals trying to lose weight and researchers attempting to monitor free-living eating behavior.

References
1. Carels R, Harper J, Konrad K.
Qualitative perceptions and caloric estimations of healthy and unhealthy foods by behavioral weight loss participants. Appetite. 2006;46(2):199-206.
2. Stanton AL, Tips TA. Accuracy of calorie estimation by females as a function of eating habits and body mass. Int J Eat Disord. 1990;9(4):387-393.
3. Krall E, Dwyer J. Validity of a food frequency questionnaire and a food diary in a short-term recall situation. J Am Diet Assoc. 1987;87(10):1374-1377.
4. Chandon P, Wansink B. Is obesity caused by calorie underestimation? A psychophysical model of meal size estimation. J Mark Res. 2007;44(1):84-99.
5. Wansink B, van Ittersum K, Painter JE. Ice cream illusions: Bowls, spoons, and self-served portion sizes. Am J Prev Med. 2006;31(3):240-243.
6. Carels R, Konrad K, Harper J. Individual differences in food perceptions and calorie estimation: An examination of dieting status, weight, and gender. Appetite. 2007;49(2):450-458.
7. Chandon P, Wansink B. The biasing health halos of fast-food restaurant health claims: Lower calorie estimates and higher side-dish consumption intentions. J Consum Res. 2007;34(3):301-314.
8. Harris CL, George VA. Dietary restraint influences accuracies in estimating energy expenditure and energy intake among physically inactive males. Am J Mens Health. 2010;4(1):33-40.
9. Berman M, Lavizzo-Mourey R. Obesity prevention in the information age: Caloric information at the point of purchase. JAMA. 2008;300(4):433-435.
10. Burton S, Creyer EH, Kees J, Huggins K. Attacking the obesity epidemic: The potential health benefits of providing nutrition information in restaurants. Am J Public Health. 2006;96(9):1669-1675.
11. Elbel B. Consumer estimation of recommended and actual calories at fast food restaurants. Obesity. 2011;19(10):1971-1978.
12. Roberto CA, Larsen PD, Agnew H, Baik J, Brownell KD. Evaluating the impact of menu labeling on food choices and intake. Am J Public Health. 2010;100(2):312-318.
13. Auchincloss AH, Mallya GG, Leonberg BL, Ricchezza A, Glanz K, Schwarz DF. Customer responses to mandatory menu labeling at full-service restaurants. Am J Prev Med. 2013;45(6):710-719.
14. Elbel B, Gyamfi J, Kersh R. Child and adolescent fast-food choice and the influence of calorie labeling: A natural experiment. Int J Obes (Lond). 2011;35(4):493-500.
15. Swartz JJ, Braxton D, Viera AJ. Calorie menu labeling on quick-service restaurant menus: An updated systematic review of the literature. Int J Behav Nutr Phys Act. 2011;8(1):135.
16. Lopez-Meyer P, Makeyev O, Schuckers S, Melanson EL, Neuman MR, Sazonov E. Detection of food intake from swallowing sequences by supervised and unsupervised methods. Ann Biomed Eng. 2010;38(8):2766-2774.
17. Fontana JM, Higgins JA, Schuckers SC, et al. Energy intake estimation from counts of chews and swallows. Appetite. 2015;85:14-21.
18. Dong Y, Hoover A, Scisco J, Muth E. A new method for measuring meal intake in humans via automated wrist motion tracking. Appl Psychophysiol Biofeedback. 2012;37(3):205-215.
19. Scisco JL, Muth ER, Hoover AW. Examining the utility of a bite-count-based measure of eating activity in free-living human beings. J Acad Nutr Diet. 2014;114(3):464-469.
20. Lawless HT, Bender S, Oman C, Pelletier C. Gender, age, vessel size, cup vs. straw sipping, and sequence effects on sip volume. Dysphagia. 2003;18(3):196-202.
21. Zijlstra N, Bukman AJ, Mars M, Stafleu A, Ruijschop RM, de Graaf C. Eating behaviour and retro-nasal aroma release in normal-weight and overweight adults: A pilot study. Br J Nutr. 2011;106(2):297-306.
22. Hill SW, McCutcheon NB. Contributions of obesity, gender, hunger, food preference, and body size to bite size, bite speed, and rate of eating. Appetite. 1984;5(2):73-83.
23.
Williamson DA, Allen HR, Martin PD, Alfonso AJ, Gerald B, Hunt A. Comparison of digital photography to weighed and visual estimation of portion sizes. J Am Diet Assoc. 2003;103(9):1139-1145.
24. Shrout PE, Fleiss JL. Intraclass correlations: Uses in assessing rater reliability. Psychol Bull. 1979;86(2):420-428.
25. Subar AF, Kirkpatrick SI, Mittl B, et al. The Automated Self-Administered 24-hour dietary recall (ASA24): A resource for researchers, clinicians, and educators from the National Cancer Institute. J Acad Nutr Diet. 2012;112(8):1134-1137.
26. National Heart, Lung, and Blood Institute. The practical guide: Identification, evaluation, and treatment of overweight and obesity in adults. http://www.nhlbi.nih.gov/files/docs/guidelines/prctgd_c.pdf. Published 2000. Accessed May 15, 2015.
27. Sazonov ES, Makeyev O, Schuckers S, Lopez-Meyer P, Melanson EL, Neuman MR. Automatic detection of swallowing events by acoustical means for applications of monitoring of ingestive behavior. IEEE Trans Biomed Eng. 2010;57(3):626-633.
28. Kawano Y, Yanai K. FoodCam: A real-time food recognition system on a smartphone. Multimed Tools Appl. 2015;74(14):5263-5287.
29. Martin CK, Nicklas T, Gunturk B, Correa JB, Allen HR, Champagne C. Measuring food intake with digital photography. J Hum Nutr Diet. 2014;27(suppl 1):72-81.
30. Sanghvi A, Redman LM, Martin CK, Ravussin E, Hall KD. Validation of an inexpensive and accurate mathematical method to measure long-term changes in free-living energy intake. Am J Clin Nutr. 2015;102(2):353-358.
31. Schoeller DA, Thomas D, Archer E, et al. Self-report-based estimates of energy intake offer an inadequate basis for scientific conclusions. Am J Clin Nutr. 2013;97(6):1413-1415.
32. Petty AJ, Melanson KJ, Greene GW. Self-reported eating rate aligns with laboratory measured eating rate but not with free-living meals. Appetite. 2013;63:36-41.

AUTHOR INFORMATION
J. N. Salley is a graduate research assistant and doctoral degree candidate, M. L.
Wilson is a graduate research assistant and doctoral degree candidate, and E. R. Muth is a professor, Department of Psychology, and A. W. Hoover is a professor, Department of Electrical and Computer Engineering, all at Clemson University, Clemson, SC.

Address correspondence to: Eric R. Muth, PhD, Department of Psychology, Clemson University, 410J Brackett Hall, Clemson, SC 29634-1355. E-mail: muth@clemson.edu

STATEMENT OF POTENTIAL CONFLICT OF INTEREST
E. R. Muth and A. W. Hoover have formed a company, Bite Technologies, to market and sell a bite counting device. Clemson University owns a US patent for intellectual property known as "The Weight Watch," US Patent No. 8310368, filed January 2009, granted November 13, 2012. Bite Technologies has licensed the method from Clemson University. E. R. Muth and A. W. Hoover receive royalty payments from bite counting device sales. No potential conflict of interest was reported by the other authors.

FUNDING/SUPPORT
This study was funded by the National Institute of Diabetes and Digestive and Kidney Diseases of the National Institutes of Health, grant no. 1R41DK091141-01A1, titled "Assessing the Bite Counter as a Tool for Food Intake Monitoring: Phase I," original project period September 1, 2011 through February 29, 2012, with an extension until February 28, 2013.
[Article outline: Comparison between Human and Bite-Based Methods of Estimating Caloric Intake]
In a meeting about my project, a professor told me: "there's no reason that a digital camera has to look like a camera anymore"

Cite as: Cruz, E.G., and Meyer, E.T. (2012). Creation and Control in the Photographic Process: iPhones and the emerging fifth moment of photography. Photographies 5(2): 203-221. doi:10.1080/17540763.2012.702123

Creation and Control in the Photographic Process: iPhones and the emerging fifth moment of photography

Edgar Gómez Cruz, Internet Interdisciplinary Institute, Universitat Oberta de Catalunya
Eric T. Meyer, Oxford Internet Institute, University of Oxford

Edgar Gómez Cruz is a researcher at the Internet Interdisciplinary Institute of the Universitat Oberta de Catalunya. His research has been in computer-mediated communication, and he has published two books and several papers on the theme. He has recently finished his doctoral dissertation, an ethnography of digital photography practices, and has also written papers and presented at conferences on various aspects of digital photography.

Internet Interdisciplinary Institute (IN3)
Parc Mediterrani de la Tecnologia
Av. Canal Olímpic s/n, Edifici B3
08860 Castelldefels (Barcelona)
Spain
egomezcr@uoc.edu

Dr. Eric T. Meyer is a Research Fellow at the Oxford Internet Institute of the University of Oxford. His research includes the impact of research technologies on research performance, the social impact of ICTs, and participation by Internet users in creating content and blurring the boundaries between amateur and professional content creators. Meyer's 2007 PhD dissertation in Information Science from Indiana University, which was named the 2008 ProQuest Doctoral Dissertation of the Year, examined how marine biologists who rely on photographic evidence to identify individual marine mammals have seen significant changes in their everyday work practices as they switched from film photography to digital photography.
He has also written papers and presented at conferences on various aspects of digital photography from a social informatics perspective.

Oxford Internet Institute
University of Oxford
1 St Giles
Oxford OX1 3JS
United Kingdom
Phone: +44 (0) 1865 287218
Fax: +44 (0) 1865 287211
eric.meyer@oii.ox.ac.uk

Introduction

In a 2009 issue of the mainstream photography enthusiast magazine American Photo, an article entitled "Instant gratification" (Andrews and Chen) reported on interviews with four professional photographers. The article centered on a single photographic device, and it was not a camera: the iPhone. More recently, Annie Leibovitz, in a November 15, 2011 interview on MSNBC, called the iPhone 'the snapshot camera of today'. While these anecdotes do not by themselves indicate a trend, they are just two examples of the growing interest of mainstream photographers in new devices for image capture, and in particular the iPhone, devices which represent technological convergence and ubiquity. In this article, we propose that the iPhone is actually more than a single device within which multiple technologies (including phones, cameras, geo-location, and Internet browsing, among others) have converged: it is also a node in different networks that is shaping new understandings of what photography is and how it is used. The iPhone represents one of the latest developments in a long history of image creation devices, but it is the first device that combines three important elements which we will discuss: the making, processing, and distribution of images.
It is not the first attempt to converge elements of the image creation process: the once-popular but now discontinued Polaroid cameras (see Buse "Polaroid into Digital" and "Surely Fades Away") combined the making and processing elements (though generally without the possibility of adjusting the processing), and later digital cameras could be connected directly to home printers without the intervention of a computer. The iPhone, however, is the first to combine all three elements in a way that is having widespread success. Although photography has a long and well-documented history, more than 180 years after the first photographic image was made, "we still do not know what photography is" (Kember 175). One might ask how this can be true, when nearly everyone in the developed world and many in the developing world have created photographs; furthermore, most lay photographers would likely argue that they know exactly what photography is: a technological means for recording the world as seen. However, Kember's provocative point is true in many ways because photography "is a complex technological network in the making rather than a single fixed technology" (Larsen 142). Photography theory has traditionally worked within two epistemological axes. On one side, photography has long been understood as a powerful technology for the representation of reality, an idea that has been supported or criticized but that stands as a regular theme in work to understand photography's role in the world. On the other, there has been a long-standing interest in the development of a single photographic ontology. Barthes, for example, wrote in his Camera Lucida that he was interested in the "essence" of photography, not in its sociology. There have also been attempts to shape the discussion of photography as a photo-technology (Maynard) or as a material object (Edwards and Hart), but those approaches have been less prominent overall.
Our approach in this paper is to understand photography not as representation, technology, or object, but as the agency that takes place when a set of technologies, meanings, uses and practices align. The photographic object, in this sense, is nothing but the materialization of a series of assemblages, and it also enables or constrains other assemblages through its use and distribution. We propose, therefore, to understand photography as a socio-technical network. With the arrival of digital technologies, the meaning of what photography is has changed even more, since new actors, elements and nodes are added to the network, while others clearly have either become peripheral or been excluded entirely. New actors that are gaining presence in the photographic equipment field include companies like Sony and Samsung, which were not traditionally involved in the film photography industry but are now sharing the market with Canon, Nikon and other brands more traditionally associated with photography. At the same time, some previously dominant players like Kodak and Fuji have become increasingly marginal as film has been almost entirely replaced by digital images. Of course, with the shift to digital, the new version of "film" includes the software programs and tools required to access, store, manipulate, and share images. Thus, software companies like Adobe, and Internet-based photography sites such as Flickr or imageshack, along with companies for data storage like Sandisk or Kensington, are participating in the reconfiguration of the definition of what photography means in the digital era. In this article, we will underline some aspects that relate, on the one side, to the technological devices necessary for image production and their "social meanings" and, on the other, to the kinds of practices that shape and are shaped by those devices.
We will discuss how the relationship between production technologies and meanings has shaped different visual regimes. To do this, the article starts with a brief historical description focusing on the production of photos as a three-step process: 1) infrastructural elements of image production (cameras, film, memory cards, etc.); 2) technologies for processing images (labs, chemicals, computers, software, expertise, etc.); and 3) distribution/showing of images. i We will then discuss the latest socio-technological practices, proposing that the iPhone, and the similar devices that will almost surely follow, are possibly opening a new stage in visual technologies. Finally, we end with some reflections on the current controversies surrounding photography.

Photography as a socio-technical network: Some theoretical context

The history of photography is not best understood as a single linear chronological series of technical developments. Instead, there have been multiple forces in constant struggle to shape what photography, as a technology, is (Latour; Munir and Phillips), along with controversies and fights over the social meaning of the photograph. For instance, the fight for the recognition of photography as an art, and not only as a technique, at the beginning of the 20th century (Christopherson; Bourdieu; Schwartz) centered on the construction of clear boundaries between what was art and what was not. Besides the extraordinary number of works about photography from the artistic and aesthetic point of view, there is a large corpus of works which study photography as a cultural or artistic practice in a wide set of environments. These works are situated in the traditions of anthropology, sociology, cultural studies, and visual studies, a growing field closely related to the study and use of images. This literature tends to understand photography as a cultural object (Edwards & Hart), as a commodity, or as a representation (Barthes, Sontag, Hall, Tagg).
There is also a second corpus of research, focused on the economics and business of photography, that studies how certain technologies were produced and diffused in different spheres (Munir; Munir & Phillips). Finally, there is a growing body of research on digital photography from the computer science perspective, which understands photography mostly in relation to software developments (e.g. Van House et al.; Kindberg et al.; Ahern et al.). However, works that attempt to relate the technological perspective, the economic rationales, and the social and cultural meaning of photography are fewer in number (with some notable exceptions, including Frosh and Maynard & Slater). One reason, of course, is the difficulty of such an exercise given the multiple elements making up the photography industry: economic, political, technological, social, cultural, and so forth. The analyses of and reflections on the multiple processes involved in image creation are therefore wide-ranging and multidisciplinary, especially since photography is embedded in multiple practices and situations (ranging from tourism to forensic research and from advertising to family memories), which makes it difficult to study as a single object. Therefore, our standpoint is to think about photography as a network of agencies, as a hybrid entity. Following Larsen, we argue that:

Photography is so evidently material and social, objective and subjective, that is, heterogeneous. It is a complex amalgam of technology, discourse and practice. Photographic agency is a relational effect that first comes into force when a heterogeneous network of
STS is an approach that has its roots in understanding the sociological milieu of science and scientists, as well as with understanding an element that underlies much of scientific progress: technology. Although STS has different branches, ii in general what they all propose is a study of the design, implementation and social meanings of technological artifacts in a complex way, where a diverse set of elements struggle and gain alliances with each other to form fixed and stable “black boxes”. Therefore, technological advances are not part of a linear and determined history iii but a social construction in constant movement and reshaping by the use and co-construction of technologies. Latour in particular can help us to understand the complex nature of the camera as more than a simple piece of technology. Latour proposed the term ‘actant’ as a way to refer to both “people able to talk and things unable to talk” (“Mixing Humans” 83) as analysts seek to understand behavior within socio-technical networks. The inclusion of non-human artifacts as sources of action within a network is non-trivial. Sociological theory, for instance, rarely views technological artifacts as more than a peripheral element that human actors manipulate (Latour “Mixing Humans”; Star). With Latour’s Actor-Network Theory (ANT), however: …elements of any kind may be included: humans, technological artifacts, organizations, institutions, etc. ANT does not distinguish between or define a priori any kind of elements…all networks are heterogeneous or socio-technical. There are no networks that consist of only humans or only of technological components. All networks contain elements of both. (Hanseth 118) Latour argues that the most mundane of artifacts can be actants in socio-technical systems; his analysis of door closing mechanisms as actants is probably the clearest illustration. 
Latour describes how the simple technological combination of door hinges, which allow walls to temporarily and easily open, with hydraulic spring-operated door closers, which gently restore the wall to an unbroken surface, serves as a relatively skilled actant in a socio-technical system, allowing human actants to reversibly walk through walls and work in non-entombed but enclosed spaces. Just as Latour's door-closers translate the major effort of walking through brick walls into the minor effort of pushing open a door, cameras translate the major effort of manually creating a detailed and naturalistic image of a scene through painting or etching into the minor effort of pushing a button and developing the latent image, or having others develop it for you. Digital photography in turn translates the relatively complex and labor-intensive chemical processes of traditional photography into a relatively simple and accessible set of steps required to view the resulting images on a computer or print them out for viewing. Traditional film-based cameras can be considered analogous to Latour's doors that required grooms to await people who may want to pass through the doors, while digital cameras are like Latour's automatic door closers that allowed human actors to walk through walls without additional human input once the socio-technical system was in place. Traditional cameras, generally speaking, require photo lab technicians and an array of human actors associated with the developing and printing process, but digital cameras reduce the number and types of human actors required to go from a scene-in-the-world to an image that in some sense represents that scene. By standing in for human actors, the digital camera is an actant in terms of its contribution to the socio-technical network and in terms of its role in the mutual shaping that occurs
Social Informatics offers an additional perspective for our purposes, because it has a greater focus on technologies-in-use rather than technologies-in-design, and is concerned with understanding the “uses and consequences of information technologies [and] takes into account their interaction with institutional and cultural contexts” (Kling 1). In particular we draw on the Socio-Technical Interaction Networks (STIN) perspective (Meyer “Socio-Technical Interaction Networks”; Kling, McKim & King), which shares with STS the complex understanding of socio-technical networks that, ultimately, are embodied in different practices (Schatzki, Knorr-Cetina, & Von Savigny). The social informatics perspective, in brief, argues that to understand how technologies are used by people, one should privilege neither the technical nor the social a priori, but instead must be open to the possibility that sociotechnical assemblages can be driven by technological developments, by social construction, or by an iterative process of mutual shaping between the two. This position rejects blind technological determinism of the form “technology caused X to happen”, but it also rejects pure social shaping views in which all technology can be understood as a purely social phenomenon. Instead, social informatics sensitizes the researcher to the fact that in particular instances, either the technical or the social may have had the larger role in determining the ways in which technology is ultimately put to use by human actors. The STIN approach, in particular, does this by focusing on actors both included in and excluded by new technological assemblages, and by looking at how they interact with each other and with technology, particularly when they reach what are called ‘architectural choice points’, where decisions about technology adoption, use, and repurposing occur.
The goal of this paper is not to develop a complete theoretical framework for photography with all the elements mentioned above, particularly since others have already made strides in this area (Larsen; Meyer “Socio-Technical Perspectives”; de Rijcke and Beaulieu; Latour “Visualization and Cognition”), and one of the authors of this article has recently completed his dissertation on the topic, iv but to establish some elements of an analysis of photography as a socio-technical network by questioning the visual and aesthetic forms that are being shaped by it. In order to do that, we will somewhat artificially separate the history of photography as a socio-technical system into four moments at which significant shifts occurred. Our approach is to review historically the combination of infrastructures (technologies), discourses, and social uses of photography at different moments in order to understand the sociotechnical network that lies beneath the social understanding of photography and its practices in each moment. While this is certainly overly reductionist, and may at times seem to fall prey to deterministic views of technology, the exercise serves to underline the relationship between different sociotechnical networks during the history of photography and to analyse current trends in photographic practices. The time-frames we discuss are not fixed and seamless “eras” or historical movements, but are instead more of a general guide to illustrate the ebb and flow of the technologies of photography and the social meanings and uses attached to these technologies. Some of these moments coexist today in different forms, so they are not mutually exclusive. This exercise will help to show the relationship between technologies of image creation and visual regimes made up of practices, objects and the possibilities of image creation.
In this framework, the socio-technical history of photography can be seen as a pendulum, constantly moving and swinging through cyclic changes between the know-how necessary to create images and the artifacts used for that purpose. Throughout these cycles, there are also ongoing trends that persist, for example the miniaturization of equipment to support mobility and increasing ease of access to a wider set of technical features; these trends also shape visual images and social uses. Changes in technology, professional practices, and public uses are continually converging and diverging over time. But ultimately, the combination of all those elements with socio-historical conditions shapes what is understood as “photography” in different times and contexts. This seems especially relevant in the digital era since, following Lister (2007), we have to:

begin an account of (new) technology’s role in producing a changed environment for photography; to consider how photography continues to be put to work within a cultural, institutional and even a physical landscape that is pervaded and altered by information and its technologies (252)

And this because, as Hand states: “technologies are inseparable from institutional and organizational cultures” and therefore, “we would expect digitization to bring alternative cultural conventions and practices into being” (6). To understand these alternative conventions and practices, we must first turn our eye to how photography developed.

The history of Photography as socio-technical pendulum

First moment (19th Century)

During the early years after the invention of photography, the expertise needed to create images required a sophisticated set of technical skills: the preparation of chemical emulsions that were often mixed in the field and applied to glass plates, tin plates or paper, the use of complicated, heavy and bulky equipment, and detailed knowledge regarding times of exposure, light conditions, and so on.
It is interesting that in the book edited by Allan Trachtenberg, the essays of the “pioneers” of photography, including Niepce, Daguerre and Fox Talbot, are all notable for their scientific explanations of which chemicals in what proportions work best to reach the goal of creating technically superior photographs. Photography, at the time, was a scientific development, or, as Edgar Allan Poe argued (in his essay reprinted in the Trachtenberg collection): photography was “the most important and perhaps the most extraordinary triumph of modern science” (37). The techniques and technologies available to these early pioneers constrained the photographic possibilities open to them. Technique constrained visual thinking and experimentation. For example, the relationship between the characteristics of the chemicals and the materials used, along with the timing of processing, was closely related to a certain type of photography (accounting, for example, for the lack of “night photography”, of people photographed on the street, or of the kind of journalistic views that would later become common). The modern photographic portrait was in its infancy, and was an extremely painful and uncomfortable experience for the subjects, who had to remain still for a long period of time because of the primitive light sensitivity of the chemicals. Hence, Fox Talbot thought about photography as “the pencil of nature” in relation to the types of objects to be photographed. Framing was only one of several complicated skills required for the successful creation of a photograph. During this first moment, all of this knowledge was needed before a single image could be created, and, as a result, not everyone was able to practice photography, even with access to the equipment. If we consider the characteristics of photography at the time, the equipment was big, which, among other things, reduced mobility.
The skills and knowledge needed to produce images were highly specialized, the technical possibilities were very limited, and the time invested in the creation of even a single image was very long. The distribution and possible uses of photography had still to be developed. Early distribution channels included the traveling photographer selling portraits (often cheap tintypes), stereoscope images (which were especially popular with the Victorians), and photographic visiting cards (cartes-de-visite). The first networks that shaped (and were shaped by) photography were the scientific networks in which enthusiasts and amateurs participated actively in the development of new optical and chemical advances.

Second moment (ca. 1900-ca. 1930)

With the introduction of the Kodak camera at the end of the 19th century, with its advertising slogan “you press the button, we do the rest”, what used to be a technical skill (and as a result, predominantly male, highly specialized, and oriented to the upper classes) started to reach a general audience. In that way, a “public” for photography was created (Olivier; Jenkins; Murray; Coe and Gates). The mass market for photography was being shaped: the “amateur market was explored, extracted, and constructed from heterogeneous social groups which did not as such exist before Eastman. The new amateurs and Eastman's camera co-produced each other” (Latour 117). This led to the creation of new forms of visual expression, for example the so-called “snapshots”: casual and “home-made” photographs with no more pretension than to keep records of the good and happy moments in the life of one’s self, one’s family and friends (Chalfen). With the emergence of cheap and easy-to-use cameras, people were suddenly able to produce photos, and thus what were considered accurate representations of the world around them, without any technical skill whatsoever.
What the Eastman Company achieved was to translate a range of complex processes, knowledge and skills into, quite literally, a “black box”. Nevertheless, the possibilities for creativity were limited: framing, timing, and the position of the photographer were the only elements that the amateur photographer wielding her Kodak Brownie could control. Photographers relinquished control over the later stages of the image creation process, since post-processing and printing occurred not even in the presence of the photographer, but in distant factories to which the cameras containing the film were sent after exposure. It is worth noting that a network necessary to the success of this process had nothing to do with the image itself: the postal service. This socio-physical network, requiring post offices, mail carriers, sorting facilities, and all the other elements of a ubiquitous postal service, was a necessary requirement for the success of cameras such as the Kodak Brownie. While postal services were not established with any intention of supporting mass-market image creation, without the cheap, ubiquitous delivery options the post offered, Kodak could not have succeeded. At the same time, there were already sufficient conditions for photography to reach a wider audience; and, like the postal service, some of them were not even strictly related to photography itself. For instance, a key element was the new technical capability of the press, which at the time made the diffusion of photographs to a wider public possible in newspapers and magazines (Marien).
In this second moment, all the elements of the photographic process were improved, particularly from the point of view of the amateur photography enthusiast: the equipment became less specialized, more affordable, smaller, and thus more portable, all of which contributed to photography becoming a “mobile” practice, resulting in the embedding of photography into the social practices of activities like tourism (Robinson & Picard; Urry) and family celebrations (Hirsch). The networks in which photography operated and which were needed to sustain it were once again reshaped, this time with the participation of the media. Photography shifted from being an activity limited to specialists to one open to more casual users, since the skills needed to create images were basic and “so simple they can easily [be] operated by any school boy or girl”. v Still, the quality by today’s standards was fairly low and the possibilities were limited. In parallel to these developments in the amateur photography market, professional equipment also underwent significant changes: it too became smaller, with better and different types of optics and low light films becoming available. And, although exposure times decreased dramatically, the actual time needed to get paper copies increased even more. General users never had the same control over image creation as the photographers of the first moment, and photography became a network between users, equipment and the companies that collected the rolls and then developed and printed them. At the same time, different and diverse channels for the distribution and showing of photography started to take shape, and the media began to use images. Photographs regularly appeared in newspapers, starting around the turn of the 20th century, and photographs increasingly appeared in public shows and galleries.
Photographs of the distant and exotic became more accessible via outlets such as National Geographic, while at the same time mundane snapshots were shared with friends and family via albums. Photography began to be part of everyday life.

Third moment (1930-1990)

During the third moment, the technical features of cameras became more or less fixed: portable camera bodies, fixed or interchangeable lenses, and roll films for professional cameras, and portable, cheap and easy-to-use cameras for everyone. vi During the third moment, photography reached a massive audience, but another distinction grew that in some ways echoes the very origins of photography: that between professionals, amateur hobbyists and snapshooters. This distinction was not “natural”, and it took a great deal of effort to create it by linking photography with other art forms, which led to:

The transformation of photography from a medium accessible to a mass audience into an exclusive symbolic code linked it with painting, sculpture, and drawing (as Alfred Stieglitz had intended), limiting widespread participation through the creation and maintenance of social and aesthetic boundaries. (Schwartz 190)

Other works also address this issue (Bourdieu; Christopherson). This distinction between amateurs and professionals, especially with photography as art, was not only a matter of technique and the right equipment but also a matter of style, composition, and lab access and expertise. Nevertheless, some of the most famous artistic approaches were based on techniques, and therefore on knowledge (Ansel Adams, Paul Strand and Edward Weston with their f/64 technique, Alfred Stieglitz with pictorialism, etc.). At the same time, the type of equipment used for each kind of photography played an important role in shaping (and therefore was also shaped by) this distinction.
It was clear that professional photography (whether done as art, advertising, or journalism) had specific equipment (expensive SLRs, multiple lenses, lighting, professional films, and so forth), and only a few amateurs could afford the same type of equipment. In that way, the combination of techniques, equipment and associations created a network that supported the idea of photography as art, and later, the professionalization of photography. One of the key elements was the creation of distribution circuits that reinforced the distinction between professional photography and the photographs that anyone could create with an inexpensive camera. Using magazines, newsletters, galleries, newspapers, associations and unions, the new professional photographers reinforced their privileged position as professionals. This had, as a consequence, the separation of the folk art of the amateurs from the fine art of the “photographers” (Christopherson). But while this separation was being shaped, on the opposite side of the spectrum people were increasingly creating snapshots of happy moments in their everyday life, with cheaper equipment and without any intention of circulating them outside the family circle. And, although these photos can be understood as sophisticated constructs of reality, their circulation and meaning were largely limited to the small circles consisting of one’s family and close friends (Bourdieu; Chalfen). As we can see, the visual shaping of specific kinds of images was part of an inscription of those images in networks of knowledge-power (professional, amateur), which, at the same time, used specific types of technologies, some more specialized and expensive, some more general and easy to use. This third moment largely shaped our social understanding of photography, and almost all of the former problems related to mobility, techniques and possibilities appeared to the participants of the time to have been solved.
Everybody above a fairly basic economic level in the developed world was able to buy an inexpensive camera and could learn how to create quality images for the different purposes to which they wanted to put photography, from snapshots to the creation of professional images. These separations between the amateur and the professional (see Meyer “Digital Photography”) increased, and photography was institutionalized in different realms with fixed characteristics for each one. The distribution and use of photos followed those separations too: family albums and shoeboxes for snapshots; magazines, journals and galleries for professional photography. The question of control over the entire process was still related not only to the technical elements (cameras, rolls, batteries and so on), but also to the different companies and networks (photography shops, photo processing labs, money, time, etc.) needed to produce those images. The amount of time between capturing the latent image and seeing the printed result relied on a network of companies, and the cost of accelerating those services became one of the “natural” constraints of photographic creation. This situation began to change with digital technology.

Fourth moment (1990-present)

When digital technology arrived, the field of photography underwent a tremendous transformation, and it seems that the previous networks were reshaped, inscribing photography, with its new array of elements, in a different network linked with the newly ubiquitous personal computer (Meyer “Framing”). New players arrived with their interests, discourses and technologies, challenging traditional photographic institutions like Kodak, Fuji, Canon, Nikon, Minolta, etc. vii The field of photography changed with the arrival of digital.
Not only have the photographic artifacts themselves changed as they switched from physical to electronic media, but the methods for showing and distributing photographs were altered massively thanks to the rapid growth of a closely related digital technology: the Internet. Just as the socio-physical network of the postal service enabled the growth of film processing, the socio-technical network of the Internet has enabled the growth of means for sharing and displaying images free of the constraints of film and paper. The key to this moment was the union of photographic devices, computers and the Web, particularly social networking websites. Cameras became ubiquitous: the camera was transformed from a precious family object shared among family members on special occasions to a personal and constantly carried object of visual creation. This increased dramatically again with the arrival of mobile phones capable of taking digital photos. But one of the most important changes brought by digital technologies, one of many that we want to underline here, is that the knowledge needed to produce images, and the equipment needed, radically changed. This means that the knowledge economy and power networks in which photography used to operate were reconfigured, and these changes can be understood in three spheres – control, distribution and knowledge – to which we now turn our attention.

User control

In the second and third moments, only those who pursued photography as "serious leisure", such as those who were members of camera clubs or owned home darkrooms, were able to control the whole process of image creation. With the emergence of digital technology, this changed in several ways. As in the earliest days of photography, photographers could control the entire process of photograph production, but unlike the pioneers they did not need carts full of equipment to do it.
Without having to rely on labs to develop and print their rolls of film and photographs, photographers can now take, process and distribute images once they have the proper equipment. In addition, in the digital moment everyone, not just professionals and serious amateurs but also casual snapshooters, can have this control, and at almost zero marginal cost once the initial equipment has been acquired. It is important to understand that photography has become part of new ICT-based networks that are different from the networks for film photography. Much of the equipment required to manipulate and process photographs is not specialist photographic equipment comparable to a darkroom with an enlarger and stocks of chemicals, but instead is general purpose equipment, including computers and printers, that had already been purchased for more general uses. Adding the specialist software required to work with photographs is a comparatively inexpensive (or even free) undertaking. This has many consequences for photographic practices. Beyond the obvious effects such as the quantity of photos taken, concepts like privacy become harder to grasp since the objects of photography have become more personal and intimate (Lasén and Gómez-Cruz), as we will see in the following sections.

New distribution and circulation environment

Photography as an object stopped having nearly as many constraints of time and place because of the change from being a physical object to being a digital one. People can do several things with their photos, none of which limits the possibility of using the same photo for other purposes: they can send them by email, copy them as many times as they want, print them, and, perhaps more importantly, upload them to social networking sites, opening new channels for image distribution. Giving away a photograph is no longer a subtractive process, but an additive one.
During the era in the 1980s and 1990s when photographic labs pushed “double print” offers as a way to share a single copy of an image without losing one’s own copy, shared ownership of and access to images was a scarce resource. Today, giving away a photograph doesn’t involve the same careful social calculations as were required when deciding who should be given a copy of a photograph. Uploading a photograph to Facebook means that multiple friends can see one’s photographs, as well as comment on them and see them again in the future. Some of these sites are devoted to photography (like Flickr or Photobucket), others are more generic sites that include photographs as art (such as DeviantArt), and others are sites where photography plays a key role in identity staging and social interaction (such as Facebook or MySpace). This brings two clear consequences: people create more photos because they have a place to share them (Cohen), and these sites form a new circuit for sharing and showing photography. Digital circuits are at the same time reshaping traditional circuits; for example, stock photography, photojournalism and fashion photography are undergoing dramatic changes, partly as the result of pressures from microstock viii photography sites, the wide availability of Creative Commons licensed images, and the rise of citizen journalism (Ritchin).

New technical know-how and the creation of photographs

An important element of this fourth moment is that, because of the change from a light-chemical process to a light-electronic one, a completely new technical know-how has become embedded in the creation of photographs: the use of post-processing computer software. As a result of this shift, new actors gained an interest in photography, namely those experts in the use of computers who were drawn to photography as a technically interesting area. The lab skills centered on chemicals and enlarging techniques were replaced by the computer skills wielded by Photoshop experts.
Far more people are able to use at least the basic features of Photoshop or other programs than were ever able to print and retouch photographs by hand in the past. To be a software expert requires heavy training and constant updating of this knowledge, since the software changes with much greater frequency than darkroom technology did. This shift to computers is part of what has been argued elsewhere to be a “computerization movement” (Meyer “Framing”). Camera companies, therefore, developed strategies to inscribe “knowledge programs” and incorporate them in almost every digital camera. Although programs to set the right combination of settings for shooting common subjects (portraits, landscapes, sports, etc.) were already incorporated in analogue cameras, the design of digital cameras took this logic further and also incorporated “simulations” of former combinations of processes and equipment (black and white, sepia, vignettes, “old photo” looks, etc.). The camera inscribes, using algorithmic computer routines, complex photographic processes that are transformed into choices that appear simple to the photographer. Thus, with the advent of digital technology, photography users gained control over the process while new knowledge, economic and power connections were made with computer and technology companies. At the same time, new distribution-showing circuits were generated and are rapidly becoming key actors in creating photographic meaning. With the convergence of digital cameras and mobile phones, the situation is changing even more as the new devices "mash up" two of today’s most powerful tools: a communication-connection device, and an audiovisual production tool. Visual content is playing a key role in a socio-technical communication environment as smartphones are increasingly capable of connecting to the Internet anytime and anywhere.
Smartphones add another level to digital possibilities: the connection and sharing of images in real time. Some authors have noted that this possibility changed the objects and moments of photography: from the ritualistic and important moments of family life (Chalfen) to everyday common and banal things (Okabe & Ito; Petersen; Burgess; Koskinen). This sharing of the “banal” was also related to the technical features of cameraphones, since the types of images that these phones produced were very limited in several ways (resolution, light conditions, poor lenses, etc.). ix We could say that cameraphones are no longer a “camera”, but a “raw” low-quality image producing and sharing device. The economic rationale plays an important role, since people with flat rates on calls, messages and internet tend to use these services simply because they can (Gomez Cruz).

From cameraphones to Apple’s socio-technical network

One of the cameraphones x that has become the flagship of this revolution is without doubt Apple’s iPhone. In the presentation of the iPhone 4, Steve Jobs stated, “It’s like a beautiful old Leica camera,” explicitly attempting to inscribe a meaningful relationship between the iPhone and one of the classical symbols of high quality photographic devices. In this way, Apple, as a company that sells not only products but “experiences” (Tsai), is helping to shape the current meaning of photography, as we will show later. This has important power implications within the network. We argue that the iPhone, at this moment, is key to understanding how photography’s role in the networked society is evolving. First, it is important to notice how the iPhone has been able to "enroll" different interests – from companies and users via mass media to photographers and software developers – without investing nearly as much effort as other companies that entered the photographic arena. xi Since it was launched, the device has captivated the press and users.
It was hailed as “the invention of the year” (Grossman) and as a device capable of a revolution in PC culture (Zittrain). In relation to photography, it is interesting to note that the iPhone’s camera was (and remains) clearly technically poorer than much of the competition. This was one of the critiques when the phone was launched, since the incorporated camera was considerably less technologically advanced than those available on other mobile phones in the market. Nevertheless, even though the iPhone has not entered the so-called “megapixel race” that some manufacturers have used to attract purchasers, it is clear that the relationship between Apple’s device and image creation is exploding. Furthermore, this lack of image quality is perceived by some more as a creative incentive than as a technological limitation. In the American Photo article, for example, photographers relate the iPhone to other types of cameras that, instead of “quality”, had other “fun” characteristics like instantaneity (similar to the Polaroid), discretion (similar to Spycams) or playfulness (like Lomo cameras with their “don’t think, just shoot” philosophy). What is clear is the growing number of photographers who are using the iPhone, or the cameraphone, as an alternative for image creation. To put numbers to this trend, iPhones are at the moment the most used image creation devices on Flickr, xii more than any camera. While only indicative, these data, which Flickr derives from the EXIF (Exchangeable Image File Format) metadata recording the equipment on which each photograph was made, are a good barometer of current photographic practices.

The iPhone as a socio-technical network: a fifth moment?

One of the key elements in the success of the Apple iPhone is not directly related to photography production but to image distribution: the simplicity of uploading photos from the device to websites.
This makes the iPhone the perfect tool for the growing number of people who are interested in showing and sharing photographs of their everyday lives (Cohen; Petersen). xiii This simplicity is partly due to the design and affordances of the iPhone itself, but is also tied to two main external factors. First, mobile connections are increasingly fast, especially as a result of the now widespread use of 3G technology. Second, the introduction of the iPhone worldwide also ushered in a new type of contract with mobile operators, since an internet connection is included, obligatorily, as a flat rate with the iPhone. Thus, everyone with an iPhone has unlimited internet access. The shift from expensive 36-exposure film rolls, with a relatively high per-shot cost, to digital memory cards, with an essentially zero per-shot cost, greatly increased the number of photos people take, since the decision to press the button once more has few consequences from an economic perspective. In the same way, the shift to unlimited internet access changes the calculus people do in their heads when deciding whether to upload and share a photograph, now that they can do it quickly and at essentially no additional cost. This changes the relationship between people and their devices. In the words of one of the informants in Gomez’s work: “I take so many photos just because I can, or when I get bored”. Although many current smartphones come with pre-installed interfaces for accessing social network sites, the applications specifically created for the iPhone are notable for their quantity and diversity. xiv These range from playful applications that transform a photo into a comic-like drawing to apps capable of remotely controlling an SLR camera.
Nevertheless, some of the most successful applications are small pieces of software for processing images, from basic adjustments such as cropping and brightness correction to more stylish controls and filters, including “simulations” of cross-processing and of particular types of lenses, cameras, and so on. In fact, one company specializing in such effects, Instagram, was purchased by Facebook for a reported $1 billion in 2012. The important thing to note is that Apple’s iPhones and iPods have short-circuited the need for expertise in computer post-processing software. Users can now download (in most cases for a small price) small pieces of software that allow them to modify photographs intuitively, without any computer skill, and upload the images directly to the Web or send them by email. This is a key feature: the possibility of nearly real-time distribution. The iPhone apps are thus part of the ecosystem, or socio-technical network, that is emerging during this fifth moment of photography. For the first time in photographic history, a single device makes it possible to control the whole process: not only image production and distribution (like any mobile phone) but also the processing of those images, on the same device, to obtain different results. As a consequence, the process itself has changed. The approach to the final image can be randomly experimental rather than pre-planned. Instead of imagining the final result in advance, photographers increasingly can design or sculpt their images on the same device on which they were captured. In short, this is a more fluid practice, a playful relationship with the possibilities of the programs that completely changes the creation process and makes it possible for anything, at any time, to become a subject of photography.
In Bourdieu’s study, he argued that photographic objects were determined socially; for example, when he showed a picture of a leaf to some French peasants, they critiqued the image because “you don’t photograph what you see on a daily basis” (Bourdieu 72). With the possibilities opened up by digital technologies, that seemed to change (Okabe). Devices like the iPhone, combined with applications like Instagram, add a whole range of filters and tools that turn the poor-quality pictures of the cameraphone into more “artsy” images. This changes the politics of seeing the banality of images of everyday life. And this makes sense because mobile phone photography, and especially iPhone photography, is increasingly becoming part of the common practices of established social networks (e.g. Facebook and Flickr) and, even more important, communities are forming around iPhone use, not only on social network sites but on other specialist sites that are emerging (e.g., http://thebestcamera.com). In the issue of American Photo mentioned at the beginning of this article, the interviewed photographers talk about how the iPhone has opened new creative possibilities, not only because it is the “camera that is always with you” but because it is discreet and gives you the freedom to just shoot. What is interesting is how, once again, mobility, discretion, and opportunity, along with the processing capabilities and the possibility of showing the pictures in almost real time, make this the perfect urban image device. As one of the informants in Gómez Cruz’s work said: “there are no more Cartier-Bressons, we all are Cartier-Bressons now!” (202-203). It is interesting to notice how the poor image quality of mobile phones recalls the beginnings of the Kodak camera. But limitations, turned on their head, can become opportunities: the iPhone is increasingly becoming a powerful tool for artistic expression, pushed by many artists and everyday users.
There are blogs specializing in photography applications, artistic movements, xv and books and complete galleries increasingly based on “iPhoneography”. xvi We are not trying to suggest that the iPhone is the only “artistic” cameraphone, but the device is probably among the best contemporary examples of how a single device has been capable of enrolling different actors to create a subfield that is growing and gaining presence. The iPhone acts more as a platform and a node for different networks than as a single device. Like a chameleon, the iPhone camera can emulate or simulate a black-and-white camera, different kinds of Lomo, a pinhole camera, and many others using software. It serves as a platform between the companies developing applications and the users of those applications; even more, it is a social tool based on image sharing and showing, making computer-mediated social interactions more visual every day. Therefore, we suggest that we are witnessing a generalized fifth moment of photography: that of complete mobility, ubiquity, and connection.

Conclusions

The history of photography can be understood as a complex network of interactions between technological devices, the knowledge needed to produce images, the companies that build technologies, and the people who use them. This socio-technical network of photography has periods of rapid change, often linked to the development of new technologies, alternating with periods of relative stability in practices and therefore in types of visual objects. The development of photography can be seen as a pendulum that, swinging back and forth, negotiates new technical possibilities with the knowledge required to use them and with the other systems (social, physical, and virtual) with which picture-creating devices interact.
While there have been generally consistent trends, such as miniaturization and an increasing tendency to integrate more features into cameras, total control of image production has been primarily reserved for professionals or for highly skilled and devoted amateurs. For almost a century, the final processing of snapshot photographs relied on photo labs. With the arrival of digital photography and the inkjet printer, and more recently of mobile phones and online sites for distributing photographs, many of the assumptions about photography are blurring and changing. The iPhone is, among all the current devices, one of the most important, not only because of the technical features that give the photographer control of the entire process, but because it has been able to enroll different actors to give it a social meaning: professional and amateur photographers, the media, software companies, social networks, and general users. While mobility and ease of use had been important elements in the development of photographic devices, mobile phones add a new element: connectivity. Within the increasing use of mobile phones for photography, the iPhone (and other devices, like Samsung’s Galaxy) adds one more layer of complexity: the possibility of post-processing the images on the same device. In a recent article in the New York Times, xvii it is suggested that smartphone photography will replace (or at least transform) point-and-shoot cameras. Whether this signals the emergence of a fifth moment in photography or is merely an elaboration of the fourth, “digital” moment, however, remains to be seen.

References

Ahern, S., et al. “Over-Exposed? Privacy Patterns and Considerations in Online and Mobile Photo Sharing.” Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 30 April–3 May 2007, San Jose, CA. 357–66.
Andrews, J., and L. Chen. “Instant Gratification.” American Photo Nov.–Dec. (2009): 56–63.
Barthes, R. La Cámara Lúcida.
Barcelona: Paidós, 1999.
Beaulieu, A., and S. de Rijcke. “Mediated Ethnography and the Study of Networked Images - or How to Study ‘Networked Realism’ as Visual Knowing.” Proceedings of the First International Visual Methods Conference, 15–17 Sept. 2009, Leeds, UK.
Bourdieu, P. Photography: A Middle-Brow Art. Stanford, CA: Stanford UP, 1996.
Burgess, J. “Vernacular Creativity and New Media.” PhD diss. Queensland U of Technology, 2007.
Buse, P. “Polaroid into Digital: Technology, Cultural Form, and the Social Practices of Snapshot Photography.” Continuum 24.2 (2010): 215–30.
Buse, P. “Surely Fades Away.” Photographies 1.2 (2008): 221–38.
Chalfen, R. Snapshot Versions of Life. Bowling Green, OH: Bowling Green State U Popular P, 1987.
Christopherson, R.W. “From Folk Art to Fine Art: A Transformation in the Meaning of Photographic Work.” Journal of Contemporary Ethnography 2 (1974): 123–57.
Coe, B., and P. Gates. The Snapshot Photograph: The Rise of Popular Photography, 1888–1939. London: Ash & Grant, 1977.
Cohen, K.R. “What Does the Photoblog Want?” Media, Culture and Society 27.6 (2005): 883–901.
de Rijcke, S., and A. Beaulieu. “Image as Interface: Consequences for Users of Museum Knowledge.” Library Trends 59.4 (2011): 663–85.
Edwards, E., and J. Hart. Photographs, Objects, Histories: On the Materiality of Images. London: Routledge, 2004.
Frosh, P. The Image Factory: Consumer Culture, Photography and the Visual Content Industry. Oxford: Berg Publishers, 2003.
Gómez Cruz, E. De La Cultura Kodak a la Imagen En red. Una Etnografía sobre Fotografía Digital. Barcelona: UOC P, 2012.
Grossman, L. “Invention of the Year: The iPhone.” Time Magazine Online 1 Nov. 2007.
Hall, S. Representation: Cultural Representations and Signifying Practices. Thousand Oaks, CA: Sage Publications Ltd, 1997.
Hand, M. Making Digital Cultures: Access, Interactivity, and Authenticity. Aldershot, UK: Ashgate Pub. Co., 2008.
Hanseth, O., M. Aanestad, and M. Berg.
“Actor-Network Theory and Information Systems. What’s So Special?” Information Technology and People 17.2 (2004): 116–23.
Hirsch, M. Family Frames: Photography, Narrative, and Postmemory. Cambridge, MA: Harvard UP, 1997.
Jenkins, R.V. “Technology and the Market: George Eastman and the Origins of Mass Amateur Photography.” Technology and Culture 16 (1975): 1–19.
Kember, S. “The Virtual Life of Photography.” Photographies 1.2 (2008): 175–203.
Kindberg, T., et al. “How and Why People Use Camera Phones.” Consumer Applications and Systems Laboratory, HP Laboratories Bristol, England, HPL-2004-216, 26 November 2004.
Kling, R. “What is Social Informatics and Why Does it Matter?” D-Lib Magazine 5.1 (1999).
Kling, R., G. McKim, and A. King. “A Bit More to It: Scholarly Communication Forums as Socio-Technical Interaction Networks.” Journal of the American Society for Information Science and Technology 54.1 (2003): 47–67.
Koskinen, I. “Managing Banality in Mobile Multimedia.” The Social Construction and Usage of Communication Technologies: Asian and European Experiences. Ed. Raul Pertierra. Quezon City: U of the Philippines P, 2007. 60–81.
Larsen, J. “Practices and Flows of Digital Photography: An Ethnographic Framework.” Mobilities 3.1 (2008): 141–60.
Lasén, A., and E. Gómez-Cruz. “Digital Photography and Picture Sharing: Redefining the Public/Private Divide.” Knowledge, Technology and Policy 22.3 (2009): 205–15.
Latour, B. “Mixing Humans and Nonhumans Together: The Sociology of a Door-Closer.” Social Problems 35.3 (1988): 298–310.
Latour, B. “Technology Is Society Made Durable.” A Sociology of Monsters: Essays on Power, Technology and Domination. Ed. John Law. London: Routledge, 1991. 103–31.
Latour, B. “Visualisation and Cognition: Drawing Things Together.” Representation in Scientific Practice. Ed. Michael Lynch and Steve Woolgar. Cambridge, MA: MIT P, 1990. 19–68.
Lister, M. “A Sack in the Sand: Photography in the Age of Information.” Convergence 13.3 (2007): 251–74.
Marien, M.W. Photography: A Cultural History. London: Laurence King Publishing, 2002.
Maynard, P. The Engine of Visualization: Thinking through Photography. Ithaca, NY: Cornell UP, 2000.
Meyer, E.T. “Digital Photography.” Handbook of Research on Computer Mediated Communication, Volume I. Ed. Sigrid Kelsey and Kirk St. Amant. New York: Information Science Reference, 2008. 791–803.
Meyer, E.T. “Framing the Photographs: Understanding Digital Photography as a Computerization Movement.” Computerization Movements and Technology Diffusion: From Mainframes to Ubiquitous Computing. Ed. Margaret S. Elliot and Kenneth L. Kraemer. Silver Spring, MD: American Society for Information Science and Technology, 2008. 173–99.
Meyer, E.T. “Socio-Technical Interaction Networks: A Discussion of the Strengths, Weaknesses and Future of Kling’s STIN Model.” Social Informatics: An Information Society for All? In Remembrance of Rob Kling. Ed. Jacques Berleur, Markku I. Nurminen and John Impagliazzo. New York: IFIP - The International Federation for Information Processing, 2006. 37–48.
Meyer, E.T. “Socio-Technical Perspectives on Digital Photography: Scientific Digital Photography Use by Marine Mammal Researchers.” Doctoral diss. Indiana U, 2007.
Munir, K.A. “The Social Construction of Events: A Study of Institutional Change in the Photographic Field.” Organization Studies 26.1 (2005): 93–112.
Munir, K.A., and N. Phillips. “‘The Birth of The Kodak Moment’: Institutional Entrepreneurship and the Adoption of New Technologies.” Organization Studies 26.11 (2005): 1665–87.
Murray, Susan. “Digital Images, Photo-Sharing, and Our Shifting Notions of Everyday Aesthetics.” Journal of Visual Culture 7.2 (2008): 147–63.
Okabe, D., and M. Ito. “Camera Phones Changing the Definition of Picture-Worthy.” Japan Media Review 29 (2003).
Olivier, M. “George Eastman’s Modern Stone-Age Family: Snapshot Photography and the Brownie.” Technology and Culture 48.1 (2007): 1–19.
Oudshoorn, N., and T. Pinch, eds.
How Users Matter: The Co-Construction of Users and Technology. Cambridge, MA: MIT P, 2003.
Petersen, S.M. “Common Banality: The Affective Character of Photo Sharing, Everyday Life and Produsage Cultures.” PhD diss. U of Copenhagen, 2008.
Poe, E.A. “The Daguerreotype.” Classic Essays on Photography. Ed. Alan Trachtenberg. New Haven, CT: Leete’s Island Books, 1980. 37–38.
Ritchin, F. After Photography. New York: Norton & Company, 2008.
Robinson, M., and D. Picard, eds. The Framed World: Tourism, Tourists and Photography. Aldershot: Ashgate, 2009.
Schatzki, T.R., K. Knorr-Cetina, and E. Von Savigny. The Practice Turn in Contemporary Theory. London: Routledge, 2001.
Schwartz, D. “Camera Clubs and Fine Art Photography: The Social Construction of an Elite Code.” Journal of Contemporary Ethnography 15.2 (1986): 165–95.
Slater, D. “Marketing Mass Photography.” Visual Culture: The Reader. Ed. Jessica Evans and Stuart Hall. London: Sage Publications, 1999. 289–306.
Sontag, S. On Photography. New York: Picador USA, 2001.
Star, Susan Leigh. “Introduction: The Sociology of Science and Technology.” Social Problems 35.3 (1988): 197–205.
Tagg, J. The Burden of Representation. Minneapolis: U of Minnesota P, 1988.
Talbot, W.H.F. “A Brief Historical Sketch of the Invention of the Art.” Classic Essays on Photography. Ed. A. Trachtenberg. New Haven, CT: Leete’s Island Books, 1980. 27–30.
Trachtenberg, A. Classic Essays on Photography. New Haven: Leete’s Island Books, 1980.
Tsai, Shu-pei. “Integrated Marketing as Management of Holistic Consumer Experience.” Business Horizons 48.5 (2005): 431–41.
Urry, J. The Tourist Gaze. London: Sage, 2002.
Van House, N.A., et al. “The Uses of Personal Networked Digital Imaging: An Empirical Study of Cameraphone Photos and Sharing.” Paper presented at CHI ’05. Portland, OR: Extended Abstracts on Human Factors in Computing Systems, 2005.
Wajcman, J. Feminism Confronts Technology. University Park, PA: Pennsylvania State UP, 1991.
Zittrain, J.
The Future of the Internet—and How to Stop It. New Haven, CT: Yale UP, 2008.

Notes:

i For a discussion of how ‘visual knowing’ is changing, see Beaulieu and de Rijcke.
ii Inside STS there are different sub-categories, like Actor-Network Theory (ANT) or Social Construction of Technology (SCOT).
iii What Wajcman, from a feminist perspective, criticized as “histories of men inventing and mastering technology” (Oudshoorn and Pinch 4).
iv Gómez Cruz has recently defended his PhD dissertation, “From Kodak Culture to Flickr Culture: An Ethnography of Digital Photography Practices”, now published as a book (see references).
v From a Kodak advertisement featuring the Brownie camera in 1900.
vi Although the major elements were fixed, there were minor modifications throughout this period in all of those elements, for example the use of slides or prints, and different types of film formats: 110 mm, 35 mm, etc.
vii For an extensive and interesting account of these changes, see Munir.
viii “Microstock photography is one of the common names given to the low-priced, royalty-free stock photo industry.” From http://www.microstockphotography.com/
ix There are, nevertheless, some clear improvements to cameraphones, and these improvements point to a clear evolution toward more sophisticated visual equipment.
x The definitions of “cameraphones” or “smartphones” are more descriptive than theoretical, but we use them because of the wider literature that does.
xi Nokia, for example, with the introduction of its model N95, hired famous directors to shoot short films and organized contests in order to promote its telephone as an audiovisual device.
xii http://www.flickr.com/cameras/ (search performed 11/09/09)
xiii Currently, with the proliferation of smartphones, the iPhone is not the only device capable of this, but it was no doubt an important tool in shaping photosharing as a serious photographic practice.
xiv A search on the iTunes Store for the word “photography” delivers more than 160 applications.
xv http://www.eyeem.com/
xvi The concept “iphoneography” is used by Glyn Evans in his blog, iphoneography.com; the concept has also been used in Flickr groups and in a forthcoming publication.
xvii http://www.nytimes.com/2010/12/04/technology/04camera.html

Authoritative parent feeding style is associated with better child dietary quality at dinner among low-income minority families

Katherine R Arlinghaus,1 Kirstin Vollrath,3 Daphne C Hernandez,1,2 Shabnam R Momin,3 Teresia M O’Connor,3 Thomas G Power,4 and Sheryl O Hughes3

1 Department of Health and Human Performance, and 2 HEALTH Research Institute, University of Houston, Houston, TX; 3 USDA–Agricultural Research Service, Children’s Nutrition Research Center at Baylor College of Medicine, Houston, TX; and 4 Department of Human Development, Washington State University, Pullman, WA

ABSTRACT

Background: Parent feeding styles have been linked to child weight status across multiple studies. However, to our knowledge, the link between feeding styles and children’s dietary quality, a more proximal outcome, has not been investigated.
Objective: The purpose of this study was to examine the relation between parent feeding styles and the dietary quality of Head Start preschoolers’ dinner meals.
Design: The amount of food served and consumed by children was measured by using a standardized digital photography method during 3 in-home dinner observations of low-income minority families in Houston, Texas. Trained dietitians entered the food served and consumed into the Nutrient Data System for Research 2009 for nutrient analysis. The overall dietary quality of the food served and consumed at dinner was evaluated by using the Healthy Eating Index 2010 (HEI-2010).
Parent feeding style was assessed with the use of the Caregiver’s Feeding Styles Questionnaire (CFSQ). On the basis of a parent’s level of demandingness and responsiveness to his or her child during feeding, the CFSQ categorizes parent feeding into 4 styles: authoritative (high demandingness and high responsiveness), authoritarian (high demandingness and low responsiveness), indulgent (low demandingness and high responsiveness), or uninvolved (low demandingness and low responsiveness).
Results: For the overall sample, the mean ± SD HEI score for dinner served was 44.2 ± 8.4, and the mean ± SD HEI score for dinner consumed was 43.4 ± 7.0. In the fully adjusted model, ANCOVA indicated that the authoritative parent feeding style was associated with significantly higher child dietary quality compared with the authoritarian feeding style (mean ± SE HEI consumed: authoritative: 45.5 ± 0.9; authoritarian: 41.9 ± 0.7; P = 0.001).
Conclusions: Parent feeding style contributes to the overall dietary quality of children’s meals, and among low-income minority preschoolers the authoritative feeding style was associated with the highest dietary quality of the 4 feeding styles. Interventions to promote feeding practices that contribute to authoritative feeding are needed to improve the dietary quality of preschool children at dinner. This trial was registered at https://clinicaltrials.gov as NCT02696278. Am J Clin Nutr 2018;108:730–736.

Keywords: child dietary quality, direct observation, parent feeding styles, preschoolers, Healthy Eating Index

INTRODUCTION

Parental feeding styles have consistently been linked to child weight status (1–6) and to some child eating behaviors (7–10) across studies in low-income families. However, to our knowledge, no study has linked styles of feeding to the overall quality of the child’s diet on the basis of a series of dinner meals. Poor dietary quality in young children can lead to adverse health consequences as children age (11).
Preschool age is an important developmental period for adopting healthy eating habits because eating behaviors and preferences established during this time are likely to persist through childhood (12) and into adulthood (13). Parents are direct drivers of children’s dietary quality, and consequently have a unique role in the establishment of children’s eating environments (14). Originally developed by Baumrind (15) and extended by Maccoby and Martin (16), general parenting styles categorize overall patterns of parenting behavior on the basis of 2 underlying dimensions of demandingness (the extent to which parents show control, maturity demands, and supervision) and responsiveness (the extent to which parents show affective warmth, acceptance, and involvement). On the basis of this work, parent feeding styles were developed to assess an overall attitude and emotional climate specifically with regard to child feeding (2–5). Parent feeding styles are an optimal way to measure feeding because they encompass the behavioral, social, and nutritional aspects of the parent-child feeding dynamic, including the entirety of family eating from meal conceptualization to consumption (2–5). Feeding styles are conceptualized differently from goal-oriented feeding practices. As with general parenting styles, feeding styles are measured along the dimensions of demandingness and responsiveness. Differences in the dimensions of demandingness (persistence during eating episodes) and responsiveness (sensitivity to the child’s individual needs during feeding) result in 4 feeding categories: authoritative (high demand and high response), defined as reasonable nutritional demands in conjunction with sensitivity toward the child; authoritarian (high demand and low response), defined as high control with little sensitivity during feeding; indulgent (low demand and high response), defined as high responsivity with little structure around feeding; and uninvolved (low demand and low response), defined as a lack of involvement during feeding (2–5).

Supported by funds from the USDA, grant 2006-55215-16695. This work is a publication of the US Department of Agriculture (USDA), Agricultural Research Service (ARS), Children’s Nutrition Research Center, Department of Pediatrics, Baylor College of Medicine, Houston, Texas, and has been funded in part with federal funds from the USDA-ARS under cooperative agreement 58-6250-0-008, and in part by Kraft Foods, Inc. The contents of this publication do not necessarily reflect the views or policies of the USDA, nor does mention of trade names, commercial products, or organizations imply endorsement from the US government. Supplemental Figure 1 is available from the “Supplementary data” link in the online posting of the article and from the same link in the online table of contents at https://academic.oup.com/ajcn/. Address correspondence to SOH (e-mail: shughes@bcm.edu). Received January 9, 2018. Accepted for publication May 30, 2018. First published online August 30, 2018; doi: https://doi.org/10.1093/ajcn/nqy142. © 2018 American Society for Nutrition. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted reuse, distribution, and reproduction in any medium, provided the original work is properly cited.
Direct comparison of previous studies regarding feeding styles and diet is difficult because of the varying child age groups and eating behaviors targeted as outcomes in the study designs (7–10). Importantly, most of these studies relied on parent dietary recall of child consumption instead of direct observation (17), did not assess the dietary quality of the food served compared with what was consumed, and assessed the quality of specific food groups (e.g., fruit, vegetables, whole grains) instead of overall dinner consumption patterns (8–10). It is important to assess the quality of the overall meal because dietary patterns are more indicative of disease risk than individual foods or nutrients alone (18). For example, findings from experimental and longitudinal studies examining the relation between specific foods, such as fruit and vegetables, and child weight status have not been consistent (19). Although investigation into specific food groups would be expected to relate to overall dietary quality, a more comprehensive assessment of dietary quality is likely necessary to better understand the relation between dietary consumption and health outcomes. Because parent feeding style is a construct used to better understand the relation between parental feeding and child health outcomes, the potential association between overall dietary quality and parent feeding style is important to examine. The purpose of this study was to compare the overall dietary quality, using the Healthy Eating Index (HEI), of Head Start preschoolers’ dinner meals by parent feeding style. In the general parenting literature, the authoritative parenting style (high demandingness and high responsiveness) has repeatedly been associated with positive child outcomes, including higher academic achievement, fewer high-risk adolescent behaviors, and greater child maturity (20).
Specific to health, the general authoritative parenting style has been associated with improved weight status (21) and may serve as a moderator of the relation between feeding practices and child fruit and vegetable consumption (22, 23). Because feeding styles focus on general parenting dimensions in a specific eating context, they are likely more closely linked to child dietary quality than general parenting style. Therefore, it was hypothesized that the authoritative feeding style (high demandingness and high responsiveness) would be associated with higher dinner dietary quality than the other 3 parent feeding styles. This study also had an exploratory aim: to compare the dietary quality of the dinner meal served with that of the meal consumed, overall and within each of the parent feeding styles. It was hypothesized that the dietary quality of the dinner served would be higher than that consumed within all 4 feeding styles.

METHODS

Subjects

Caregivers and their preschool children, who were enrolled in 33 Head Start centers in 3 districts in Houston, Texas, were recruited during 2007 and 2008. Approximately 2500 families were approached during recruitment. Of these, 275 families returned the flyer that was handed out to indicate their interest in participating. All caregivers self-identified their ethnicity as either African American or Hispanic. A caregiver was defined as the individual most responsible for the child’s food intake outside of the Head Start day. The caregiver was designated as the target parent in this study. Most caregivers were parents, with a small number of grandparents. Subsequently, throughout the article, caregivers will be referred to as parents. Subjects in this study were part of a larger project that examined parent-child interactions at the dinner meal. Details of the primary analysis of this study have previously been published (4), as have details of the initial dietary intake analysis (24).
The flow of participants is depicted in a diagram in Supplemental Figure 1.

Procedures

Parents were recruited during parent meetings, when their children were dropped off or picked up at Head Start, and through sign-up sheets posted at the centers. Written consent for the parent’s and his or her preschooler’s participation in the study was obtained. Researchers completed 3 in-home observations of participants’ dinner meals. Parents were instructed to proceed as they normally would at a dinner meal. Eighty-five percent of the meals were eaten at dinner tables. In half of the families, the parent and child ate together. In 43% of the observations, other family members were present during the meals. Researchers provided standardized plates, bowls, and cups, and the food served was documented using a digital photography method. Additional helpings were observed and recorded. It was also noted whether the child was served by the parent or served him- or herself. After the meal was completed, researchers measured plate waste to the nearest 0.1 g on a digital scale (Ohaus Model Pro Scout SP601). Details of this procedure have been described elsewhere (22). At the end of each observed meal, parents were financially compensated (totaling $125 over the 3 observations). This study was approved by the Institutional Review Board at Baylor College of Medicine, and was registered at https://clinicaltrials.gov as NCT02696278.

Measures

Meal representativeness and dietary quality

To assess meal representativeness, parents were asked at each observation whether the meal was a typical dinner eaten by their family. Detailed dinner menu information, including recipes, preparation, and brand names of foods, was also collected from the parent at each observation.
The amount of food served was estimated by 2 registered dietitians trained in the digital photography method (24). This information was compiled and entered into the Nutrient Data System for Research 2009 (NDSR) for nutrient analysis. Foods were broken down into the food groups and components utilized in the 2010 Dietary Guidelines for Americans (25). The dietary quality of the child’s dinner meal was evaluated using the HEI-2010 (26, 27). The HEI-2010 evaluates adherence to the 2010 Dietary Guidelines for Americans (25) on a scale from 0 to 100, with a score of 100 being the highest score possible. The HEI is a validated measure of dietary quality and has been shown to be associated with improved health outcomes (28, 29). At the time of this study, the HEI had not yet been updated for the 2015 Dietary Guidelines for Americans. Because there are minimal differences between how the HEI-2010 and HEI-2015 are calculated (30), it is unlikely that HEI-2015 scores would meaningfully differ from HEI-2010 scores. The food-group breakdown from the NDSR nutrient analysis was used to calculate HEI-2010 component scores, which were summed to calculate a total HEI-2010 score (26). Adequacy components include total fruit, whole fruit, total vegetables, greens and beans, whole grains, dairy, protein foods, seafood and plant protein, and fatty acids. Moderation components, for which higher scores indicate lower consumption, include refined grains, sodium, and empty calories. The HEI-2010 has been designed so that no one food is required to achieve a perfect score. This enables the HEI-2010 to evaluate a variety of meal patterns (vegetarian, vegan, omnivore) and is inclusive of diets typically consumed by a wide range of ethnic and cultural groups (26).
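To make the aggregation step concrete, the total HEI-2010 score is the sum of the 12 component scores, each capped at its allowed maximum. The sketch below is illustrative only: it is not the study's analysis code, the component names are informal labels, and the per-component maximum points follow the published HEI-2010 scoring standard; computing the component scores themselves from density-based standards (amounts per 1000 kcal) is omitted.

```python
# Illustrative sketch of HEI-2010 total-score aggregation (not the study's code).
# Maximum points per component follow the published HEI-2010 scoring standard;
# deriving each component score from intake densities is out of scope here.

HEI_2010_MAX_POINTS = {
    # Adequacy components (higher intake -> higher score)
    "total_fruit": 5, "whole_fruit": 5, "total_vegetables": 5,
    "greens_and_beans": 5, "whole_grains": 10, "dairy": 10,
    "total_protein_foods": 5, "seafood_and_plant_proteins": 5,
    "fatty_acids": 10,
    # Moderation components (lower intake -> higher score)
    "refined_grains": 10, "sodium": 10, "empty_calories": 20,
}

def total_hei_2010(component_scores):
    """Clip each component score to [0, max] and sum to a 0-100 total."""
    return sum(
        min(max(component_scores.get(name, 0.0), 0.0), max_pts)
        for name, max_pts in HEI_2010_MAX_POINTS.items()
    )
```

A meal meeting every standard would score 100; because no single food is required for a perfect score, the same aggregation accommodates the varied meal patterns the article mentions.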
Although the HEI-2010 is typically used to evaluate a person's daily intake, because the index considers food relative to its energy content (27), it can also be used to evaluate the dietary quality of a single meal (31).

Parent feeding styles

Parent feeding style was assessed using the Caregiver's Feeding Styles Questionnaire (3). This questionnaire measures feeding styles within a general parenting style framework. With the use of a 5-point scale (from 1 = never to 5 = always), parents report on how often they use 12 parent-centered compared with 7 child-centered feeding directives. Parents are then classified into 4 feeding styles on the basis of high and low scores on 2 dimensions: parents' demandingness of and responsiveness to their child. Scoring procedures are described elsewhere (3–5). Reliability and validity of the measure have been established across multiple studies (3–5), including an observational study of similar design and study population as the present study (4).

Anthropometric measures

Trained research staff measured the height (Seca 213) and weight (Befour PS6600) of children and parents in light clothing without shoes, to the nearest 0.1 cm and 0.1 kg, respectively. Children's age- and sex-specific BMI percentile and standardized BMI (zBMI) were computed from the revised 2000 growth charts and classified according to the weight categories proposed by the CDC (32).

Child and parent characteristics

Parents and their children were classified as African American or Hispanic on the basis of self-identification. Parents also identified their educational attainment and their marital status, as well as their own and, if applicable, their spouse's employment status.

Data analysis

Data analysis was conducted with the use of SPSS version 25. Missing data were handled using list-wise deletion.
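The zBMI computation described under Anthropometric measures relies on the LMS method underlying the CDC 2000 growth charts (32). As a hedged sketch, the formula below is the standard LMS z-score; the L, M, and S values in the example are hypothetical placeholders, not actual chart values, which are age- and sex-specific tables published by the CDC.

```python
import math

# Sketch of the LMS z-score behind age- and sex-specific zBMI
# (CDC 2000 growth charts, ref 32). The L, M, S values below are
# hypothetical placeholders, not real chart entries.

def lms_zscore(value, L, M, S):
    """z = ((value/M)**L - 1) / (L*S) for L != 0, else ln(value/M) / S."""
    if L == 0:
        return math.log(value / M) / S
    return ((value / M) ** L - 1.0) / (L * S)

# A BMI equal to the median (M) gives z = 0 regardless of L and S:
print(lms_zscore(16.0, -1.5, 16.0, 0.08) == 0.0)  # -> True
```

In practice the L, M, and S parameters are looked up (and interpolated) for the child's exact age in months before applying this formula.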
Parents were categorized into 4 feeding style groups by cross-classifying high and low (median-split) scores on the parent dimensions of demandingness and responsiveness from the Caregiver's Feeding Styles Questionnaire. The child HEI scores were derived by averaging across the dietary data from the 3 meal observations. Three meals were observed for 89% of the families, 2 meals for 10% of the families, and 1 meal for 1% of the families. Chi-square tests and ANOVAs were conducted to assess differences in child, parent, and meal characteristics by feeding style. Differences in child HEI consumed scores between child, parent, and meal characteristics were examined using t tests and ANOVAs. Pearson's correlation was conducted to determine the correlation between HEI served and consumed scores, and exploratory paired-samples t tests were used to compare HEI served and consumed scores among child, parent, and meal characteristics. Three ANOVA models were conducted to compare differences in child HEI consumed score across feeding styles. The first was unadjusted, the second adjusted for HEI served score, and the third adjusted for child zBMI, parent ethnicity, parent education, parent employment, parent marital status, the number of people at the meal, the person serving the child, meal representativeness, and child HEI served score. A post hoc least-significant difference test was completed to determine which feeding styles differed from one another.

RESULTS

Of the 275 families who consented to participate in this study, dietary information from observed dinner meals was available for 145 parent-child dyads. Of these, 131 dyads were included in the analysis; the remainder were excluded because of incomplete feeding style (n = 6) or demographic (n = 8) data. Half of the preschoolers were male and 62% were Hispanic, and they had a mean ± SD age of 4.5 ± 0.6 y. Further descriptive characteristics of the sample are reported in Table 1.
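The median-split cross-classification described in the Data analysis section can be sketched as follows. This is an illustrative reconstruction, not the authors' SPSS procedure; in particular, whether scores exactly at the median count as "high" or "low" is an assumption here, and the four example dyads are invented.

```python
import statistics

# Illustrative reconstruction (not the authors' code) of the median-split
# cross-classification into the four feeding styles.
# Assumption: scores strictly above the sample median count as "high".

def classify_feeding_styles(scores):
    """scores: list of (demandingness, responsiveness) pairs, one per parent."""
    dem_median = statistics.median(d for d, _ in scores)
    res_median = statistics.median(r for _, r in scores)
    styles = []
    for demandingness, responsiveness in scores:
        high_dem = demandingness > dem_median
        high_res = responsiveness > res_median
        if high_dem and high_res:
            styles.append("authoritative")   # high demand, high response
        elif high_dem:
            styles.append("authoritarian")   # high demand, low response
        elif high_res:
            styles.append("indulgent")       # low demand, high response
        else:
            styles.append("uninvolved")      # low demand, low response
    return styles

# Four invented dyads, one per quadrant:
example = [(4.0, 4.0), (4.0, 1.5), (1.5, 4.0), (1.5, 1.5)]
print(classify_feeding_styles(example))
# -> ['authoritative', 'authoritarian', 'indulgent', 'uninvolved']
```

Because the split is sample-specific (medians are computed within the study sample), the same parent could be classified differently in a different sample.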
PARENTING FEEDING STYLE AND CHILD DIETARY QUALITY 733

TABLE 1 Characteristics of the study population: full analytic sample and by caregiver feeding style¹

                                           Overall     Authoritative  Authoritarian  Indulgent   Uninvolved
Overall sample, n (%)                      131 (100)   27 (21)        41 (31)        37 (28)     26 (20)
Child characteristics
  Age, y²                                  4.5 ± 0.6   4.6 ± 0.6      4.5 ± 0.6      4.5 ± 0.7   4.4 ± 0.7
  Sex, n (%)
    Female                                 66 (50)     15 (56)        23 (56)        18 (49)     10 (38)
    Male                                   65 (50)     12 (44)        18 (44)        19 (51)     16 (62)
  zBMI                                     0.8 ± 1.1   0.6 ± 1.2      0.7 ± 1.0      0.9 ± 1.2   1.2 ± 1.1
  Weight category, n (%)
    Underweight                            3 (2)       1 (4)          0 (0)          2 (5)       0 (0)
    Normal weight                          74 (56)     16 (59)        29 (71)        15 (41)     14 (54)
    Overweight                             27 (21)     5 (19)         5 (12)         12 (32)     5 (19)
    Obese                                  27 (21)     5 (19)         7 (17)         8 (22)      7 (27)
Parent characteristics, n (%)
  Race/ethnicity
    Hispanic                               81 (62)     22 (81)        26 (63)        18 (49)     15 (58)
    African American                       50 (38)     5 (19)         15 (37)        19 (51)     11 (42)
  Education
    Some college or more                   57 (44)     13 (48)        16 (39)        17 (46)     11 (42)
    High school diploma/GED                34 (26)     5 (19)         13 (32)        10 (27)     6 (23)
    Some high school or less               40 (31)     9 (33)         12 (29)        10 (27)     9 (35)
  Employment status
    Both caregiver and spouse employed     33 (25)     8 (30)         13 (32)        9 (24)      3 (12)
    Either caregiver or spouse employed    72 (55)     15 (56)        19 (46)        19 (51)     19 (73)
    Neither caregiver nor spouse employed  26 (20)     4 (15)         9 (22)         9 (24)      4 (15)
  Marital status
    Married/cohabitating                   64 (49)     14 (52)        24 (59)        14 (38)     14 (54)
    Single                                 67 (51)     13 (48)        17 (41)        23 (62)     12 (46)
Meal characteristics
  Number of people at the meal             2.1 ± 1.1   2.0 ± 1.2      2.0 ± 1.0      2.5 ± 1.1   2.1 ± 1.3
  Person serving child, n (%)
    Parent serves child                    116 (89)    23 (85)        36 (88)        33 (89)     24 (92)
    Child serves him/herself               15 (11)     4 (15)         5 (12)         4 (11)      2 (8)
  Meal representativeness, n (%)
    Usual dinner                           92 (70)     20 (74)        25 (61)        26 (70)     21 (81)
    Not a usual dinner                     39 (30)     7 (26)         16 (39)        11 (30)     5 (19)

¹No significant differences in characteristics by feeding styles were indicated by ANOVA or chi-square test. GED, general equivalency diploma; zBMI, standardized BMI.
²Mean ± SD (all such values).

Chi-square and ANOVA testing indicated no significant differences in characteristics by feeding style. Independent t tests analyzing child HEI consumed scores by child, parent, and meal characteristics indicated a significantly higher mean HEI consumed score among Hispanic than among African-American families [t = 3.2, P = 0.002]. No other significant differences were found in HEI consumed scores within any of the child, parent, or meal characteristics.

Table 2 shows the results of paired-samples t tests between mean child HEI scores served and consumed. Overall, the HEI score of what children were served and what they consumed was highly correlated (r = 0.8, P < 0.001). However, significant differences between served and consumed scores existed among children whose parents had authoritarian or indulgent feeding styles compared with the authoritative feeding style (P = 0.008 and 0.044, respectively).

Table 3 shows the unadjusted and adjusted means of child HEI consumed scores. All 3 ANOVA models showed a significant difference in child HEI consumed score across feeding styles [unadjusted F(3, 116) = 5.1, P = 0.002, η² = 0.1; partially adjusted F(3, 116) = 3.9, P = 0.011, η² = 0.1; fully adjusted F(3, 116) = 3.7, P = 0.015, η² = 0.1]. Post hoc tests using least-significant differences showed a significantly higher HEI consumed score for the authoritative feeding style compared with the authoritarian, indulgent, and uninvolved feeding styles in the unadjusted model (P < 0.001, P = 0.021, and P = 0.002, respectively). The higher HEI consumed score among the authoritative compared with the uninvolved feeding style was no longer significant in the model adjusting for HEI served score (P = 0.118), and only the difference between the authoritative and authoritarian feeding styles remained significant in the fully adjusted model (P = 0.001).
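The unadjusted one-way ANOVA underlying these comparisons can be illustrated with a small pure-Python sketch. The authors used SPSS; the function and the feeding-style data below are invented for illustration, not study data. Least-significant-difference post hoc testing then amounts to ordinary pairwise t tests without multiplicity correction.

```python
# Illustrative pure-Python one-way ANOVA across feeding-style groups
# (the authors used SPSS; the scores here are invented).

def one_way_anova_f(groups):
    """Return (F, df_between, df_within) for a list of groups of scores."""
    k = len(groups)                                  # number of groups
    n = sum(len(g) for g in groups)                  # total sample size
    grand_mean = sum(sum(g) for g in groups) / n
    group_means = [sum(g) / len(g) for g in groups]
    ss_between = sum(len(g) * (m - grand_mean) ** 2
                     for g, m in zip(groups, group_means))
    ss_within = sum(sum((x - m) ** 2 for x in g)
                    for g, m in zip(groups, group_means))
    df_between, df_within = k - 1, n - k
    f_stat = (ss_between / df_between) / (ss_within / df_within)
    return f_stat, df_between, df_within

# Hypothetical HEI consumed scores for four feeding-style groups:
styles = {"authoritative": [48.0, 46.5, 47.2],
          "authoritarian": [41.0, 42.3, 41.8],
          "indulgent":     [43.1, 44.0, 43.5],
          "uninvolved":    [41.9, 42.5, 40.8]}
F, dfb, dfw = one_way_anova_f(list(styles.values()))
print(dfb, dfw)  # a large F relative to F(3, 8) suggests group means differ
```

The adjusted models in the paper (ANCOVA with HEI served score and demographic covariates) extend this by partialling out the covariates before comparing the adjusted group means.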
TABLE 2 Child HEI score of dinner meal served and consumed by characteristics¹

                                             HEI dinner score
                                             Served       Consumed
Overall                                      44.2 ± 8.4   43.4 ± 7.0
Parental feeding style
  Authoritative                              47.5 ± 9.9   47.6 ± 6.8
  Authoritarian                              43.6 ± 8.1   41.5 ± 6.2²
  Indulgent                                  44.7 ± 7.0   43.6 ± 6.4³
  Uninvolved                                 41.2 ± 8.1   41.7 ± 7.8
Child characteristics
  Sex
    Female                                   43.0 ± 8.2   42.8 ± 6.5
    Male                                     45.5 ± 8.5   43.9 ± 7.6²
  Weight category
    Underweight                              45.0 ± 5.0   45.9 ± 3.7
    Normal weight                            44.3 ± 1.0   43.3 ± 0.9
    Overweight                               43.0 ± 1.6   43.8 ± 1.3
    Obese                                    45.3 ± 1.5   42.9 ± 1.4³
Parent characteristics
  Race/ethnicity
    Hispanic                                 46.0 ± 9.2⁴  44.9 ± 7.4⁴
    African American                         41.4 ± 6.0   41.0 ± 5.8
  Education
    Some college or more                     44.2 ± 8.0   43.6 ± 7.1
    High school diploma/GED                  43.0 ± 8.8   42.3 ± 6.6
    Some high school or less                 45.4 ± 8.7   44.0 ± 7.4³
  Employment status
    Both caregiver and spouse employed       46.3 ± 8.3   44.7 ± 6.6
    Either caregiver or spouse employed      43.4 ± 8.8   43.1 ± 7.6
    Neither caregiver nor spouse employed    44.0 ± 7.1   42.5 ± 5.9³
  Marital status
    Married/cohabitating                     45.0 ± 9.3   44.3 ± 6.8
    Single                                   43.5 ± 7.5   42.5 ± 7.3³
Meal characteristics
  Person serving child
    Parent serves child                      44.4 ± 0.8   43.6 ± 0.7
    Child serves themselves                  42.7 ± 1.5   41.7 ± 1.7
  Meal representativeness
    Usual dinner                             44.2 ± 0.9   43.4 ± 0.7
    Not a usual dinner                       44.4 ± 1.3   43.4 ± 1.1

¹Values are means ± SDs; n = 131. GED, general equivalency diploma; HEI, Healthy Eating Index.
²,³Different from HEI dinner score served (paired-samples t test; differences assessed horizontally within a row): ²P < 0.01, ³P < 0.05.
⁴Different across levels of the characteristic, P < 0.05 (paired-samples t test indicated the HEI dinner score to be significantly different; assessed vertically within a column).

DISCUSSION

This study examined the relation between parent feeding style and overall preschooler dietary quality at dinner among a sample of low-income Hispanic and African-American families. Consistent with our hypothesis, the authoritative feeding style (high demandingness and high responsiveness) was associated with significantly higher child dietary quality than the authoritarian feeding style (high demandingness and low responsiveness). The superior dietary quality found here for the authoritative compared with the authoritarian feeding style is consistent with child dietary outcomes from research with general parenting styles, which found fruit and vegetable consumption to be better among children with parents engaging in an authoritative feeding style (23). In contrast to our hypothesis, in the adjusted model, no significant differences in child dinner dietary quality were seen between the authoritative feeding style and either the indulgent or uninvolved feeding styles. This is not consistent with previous studies on feeding style and dietary quality, which found permissive feeding styles to be associated with higher child consumption of low-nutrient-dense foods and poor fruit and vegetable consumption (8, 9). This is likely due to the large number of covariables in the main model and the overall sample size. In the unadjusted model, the authoritative feeding style was associated with significantly higher dietary quality compared with all other feeding styles. Overall, the main findings of this study indicate that in low-income minority families, a child's highest dietary quality is likely to be achieved when parents not only set appropriate guidelines for eating but are also responsive to their child's eating preferences and behaviors.

Although most dietary studies rely on self-report, one strength of this study was its ability to objectively measure participants' diet through detailed menu information, digital photography, and food-waste measurement. The dietary component in this study was unique because it included the objective measurement of both food served and food consumed.
Having both measurements enabled exploratory investigation into the differences in the quality of food eaten compared with that served to the children. Although consumption is ultimately limited by the food available, the child’s choice of which foods and how much to consume of the meal served to him or her can either increase or decrease dietary quality. The findings of this study are consistent with our exploratory hypothesis that the overall dietary quality of food served would be higher than that of the food consumed. Although this difference was not significant among the overall sample, the dietary quality of the meal served was significantly higher than the dietary quality of the meal consumed among children whose parents have either an authoritarian or indulgent feeding style. These feeding styles have both been associated with unhealthy dietary intake (8, 9). Parents with an indulgent feeding style are likely to be responsive to a child’s mood during the meal but are unlikely to make demands on or offer guidance about a child’s intake once the food is served. It is thought that this lack of guidance contributes to poor child dietary quality (8). Conversely, parents with an authoritarian feeding style are likely to make demands of their child during dinner and not be responsive to their child’s preferences, emotions, or individual needs. However, it is possible for the high level of demandingness exhibited by authoritarian feeders to consist of demands unrelated to overall dietary quality. For example, a parent could make disciplinary demands on a child during dinner (e.g., sit up straight, use your napkin). It is also possible for the demands made by parents with an authoritarian feeding style to be inconsistent with healthy choices. 
The parent feeding style construct does not specifically consider parents’ nutrition knowledge or values with regard to healthful food, but only assesses parents’ level of demandingness on and responsiveness to their children during eating episodes. The significant difference between the dietary quality served and consumed indicates that parents’ influence during a meal is important and necessitates more investigation into how specific goal-oriented parent feeding practices during the dinner meal may relate to overall dietary quality. This may be especially pertinent given that the dietary quality of the meal served was significantly higher than that consumed among children with obesity. The lack of statistical difference in zBMI across feeding styles seen in this sample (n = 131) could be due to insufficient power because a significant 3-way interaction between parent feeding style, ethnicity, and sex on child zBMI was found in the full sample (n = 177) (4). Given that many Head Start preschool children eat breakfast, lunch, and 2 snacks at the Head Start center, this study captured the dietary quality of the meal most reflective of parents’ influence on their child’s dietary quality: the dinner meal. Furthermore, the multiple-observation method used in this study offers insight into the family’s dinner patterns. However, HEI scores for one meal may not be reflective of the dietary quality of an entire day’s intake. 
TABLE 3 Adjusted and unadjusted mean child HEI scores of dinner meal consumed by feeding style¹

                          Unadjusted model       Partially adjusted model²   Fully adjusted model³
Caregiver feeding style   Mean ± SD     P        Mean ± SEE    P             Mean ± SEE    P
Authoritative             47.6 ± 6.8    –        45.5 ± 0.8    –             45.5 ± 0.9    –
Authoritarian             41.5 ± 6.2    <0.001   41.9 ± 0.7    0.001         41.9 ± 0.7    0.001
Indulgent                 43.6 ± 6.4    0.021    43.3 ± 0.7    0.042         43.2 ± 0.7    0.059
Uninvolved                41.7 ± 7.8    0.002    43.6 ± 0.8    0.118         43.7 ± 0.9    0.161

¹P values were determined by using least-significant differences pairwise comparison. HEI, Healthy Eating Index; zBMI, standardized BMI.
²Adjusted for child HEI served score.
³Adjusted for child zBMI, parent ethnicity, parent education, parent employment, parent marital status, the number of people at the meal, the person serving the child, meal representativeness, and child HEI served score.

The overall mean ± SD HEI score of the dinner meal consumed in the present study (43.4 ± 7.0) was lower than in previous studies that used dietary recall data from the 2012 NHANES, which reported an average HEI-2010 score of 52 for children aged 4–8 y (33) and 55 for children aged 2–5 y (34). Some food components are more likely to be consumed at various times of the day than are others, and this may differ by culture (35). For example, in a typical American diet, fruit may be more frequently consumed in the morning than in the evening meal. In this instance, a low fruit component score for dinner does not necessarily indicate that fruit is lacking from a person's diet. Because the majority of the sample in the present study was Hispanic (62%), it is unlikely that the low (relative to other preschool-aged children) dietary quality scores observed in this study were due to ethnic differences in eating patterns.
Indicators of dietary quality (for example, vegetable intake) have consistently been found to be higher among Hispanic than among African-American or White individuals (36–38). Although consuming every food component regularly is a good strategy, each food component does not need to be consumed at each meal for adequate overall dietary quality.

Some limitations to this study are acknowledged. As mentioned above, only the dinner meal was observed among the various meals consumed by children during the day. Other limitations include the use of direct observation, which may alter family behaviors, including the food served by the parent and the food consumed by the child. In addition, the focus on Head Start African-American and Hispanic families limits the generalizability of these findings to low-income Head Start families from these 2 ethnic groups. Last, because of the cross-sectional design of this study, no inferences with regard to causation can be made.

A nutritious diet is necessary for proper growth and development (11), and dietary consumption habits established during childhood are likely to continue into adulthood (13). The results of this study suggest that, with regard to child dietary quality, an authoritative feeding style may be preferable to an authoritarian feeding style among low-income minority families with preschool-aged children. Further research to better understand the feeding practices that contribute to authoritative feeding, especially those occurring during the dinner meal, is warranted to gain a better understanding of how parent feeding styles can be shifted toward the authoritative style. From this information, parent feeding interventions can be designed to improve the dietary quality and, consequently, the health of children.
The authors' responsibilities were as follows: KRA, KV, DCH, and SOH designed the research and analyzed the data; TGP and SOH conducted the research; KRA, DCH, and SOH had primary responsibility for the final content; and all authors wrote the manuscript, and read and approved the final manuscript. The authors had no financial relationships relevant to this article to disclose.

REFERENCES

1. Hennessy E, Hughes SO, Goldberg JP, Hyatt RR, Economos CD. Parent behavior and child weight status among a diverse group of underserved rural families. Appetite 2010;54(2):369–77.
2. Hughes SO, Power TG, O'Connor TM, Orlet Fisher J, Chen TA. Maternal feeding styles and food parenting practices as predictors of longitudinal changes in weight status in Hispanic preschoolers from low-income families. J Obes 2016;2016:7201082.
3. Hughes SO, Power TG, Orlet Fisher J, Mueller S, Nicklas TA. Revisiting a neglected construct: parenting styles in a child-feeding context. Appetite 2005;44(1):83–92.
4. Hughes SO, Power TG, Papaioannou MA, Cross MB, Nicklas TA, Hall SK, Shewchuk RM. Emotional climate, feeding practices, and feeding styles: an observational analysis of the dinner meal in Head Start families. Int J Behav Nutr Phys Act 2011;8:60.
5. Hughes SO, Shewchuk RM, Baskin ML, Nicklas TA, Qu H. Indulgent feeding style and children's weight status in preschool. J Dev Behav Pediatr 2008;29(5):403–10.
6. Tovar A, Hennessy E, Pirie A, Must A, Gute DM, Hyatt RR, Kamins CL, Hughes SO, Boulos R, Sliwa S, et al. Feeding styles and child weight status among recent immigrant mother-child dyads. Int J Behav Nutr Phys Act 2012;9:62.
7. Fisher JO, Birch LL, Zhang J, Grusak MA, Hughes SO. External influences on children's self-served portions at meals. Int J Obes (Lond) 2013;37(7):954–60.
8. Hennessy E, Hughes SO, Goldberg JP, Hyatt RR, Economos CD. Permissive parental feeding behavior is associated with an increase in intake of low-nutrient-dense foods among American children living in rural communities. J Acad Nutr Diet 2012;112(1):142–8.
9. Hoerr SL, Hughes SO, Fisher JO, Nicklas TA, Liu Y, Shewchuk RM. Associations among parental feeding styles and children's food intake in families with limited incomes. Int J Behav Nutr Phys Act 2009;6:55.
10. Tovar A, Choumenkovitch SF, Hennessy E, Boulos R, Must A, Hughes SO, Gute DM, Vikre EK, Economos CD. Low demanding parental feeding style is associated with low consumption of whole grains among children of recent immigrants. Appetite 2015;95:211–8.
11. Lynch J, Smith GD. A life course approach to chronic disease epidemiology. Annu Rev Public Health 2005;26:1–35.
12. Northstone K, Emmett PM. Are dietary patterns stable throughout early and mid-childhood? A birth cohort study. Br J Nutr 2008;100(5):1069–76.
13. Mikkila V, Rasanen L, Raitakari OT, Pietinen P, Viikari J. Consistent dietary patterns identified from childhood to adulthood: the Cardiovascular Risk in Young Finns Study. Br J Nutr 2005;93(6):923–31.
14. Savage JS, Fisher JO, Birch LL. Parental influence on eating behavior: conception to adolescence. J Law Med Ethics 2007;35(1):22–34.
15. Baumrind D. Rearing competent children. In: Damon W, editor. Child development today and tomorrow. San Francisco (CA): Jossey Bass; 1989. p. 349–78.
16. Maccoby E, Martin J. Socialization in the context of the family: parent-child interaction. In: Mussen PH, editor. Handbook of child psychology. New York: Wiley; 1983. p. 1–101.
17. Livingstone MB, Robson PJ, Wallace JM. Issues in dietary intake assessment of children and adolescents. Br J Nutr 2004;92(Suppl 2):S213–22.
18. U.S. Department of Health and Human Services and U.S. Department of Agriculture. 2015–2020 Dietary guidelines for Americans. 8th Edition. 2015. Available at: https://health.gov/dietaryguidelines/2015/guidelines/.
19. Ledoux TA, Hingle MD, Baranowski T. Relationship of fruit and vegetable intake with adiposity: a systematic review. Obes Rev 2011;12(5):e143–50.
20. Mandara J. The typological approach in child and family psychology: a review of theory, methods, and research. Clin Child Fam Psychol Rev 2003;6(2):129–46.
21. Rhee KE, Lumeng JC, Appugliese DP, Kaciroti N, Bradley RH. Parenting styles and overweight status in first grade. Pediatrics 2006;117(6):2047–54.
22. Vereecken C, Rovner A, Maes L. Associations of parenting styles, parental feeding practices and child characteristics with young children's fruit and vegetable consumption. Appetite 2010;55(3):589–96.
23. Blissett J. Relationships between parenting style, feeding style and feeding practices and fruit and vegetable consumption in early childhood. Appetite 2011;57(3):826–31.
24. Johnson SL, Hughes SO, Cui X, Li X, Allison DB, Liu Y, Goodell LS, Nicklas T, Power TG, Vollrath K. Portion sizes for children are predicted by parental characteristics and the amounts parents serve themselves. Am J Clin Nutr 2014;99(4):763–70.
25. USDA; US Department of Health and Human Services. Dietary guidelines for Americans, 2010. 7th ed. Washington (DC): US Government Printing Office; 2010.
26. Guenther PM, Casavale KO, Reedy J, Kirkpatrick SI, Hiza HA, Kuczynski KJ, Kahle LL, Krebs-Smith SM. Update of the Healthy Eating Index: HEI-2010. J Acad Nutr Diet 2013;113(4):569–80.
27. Guenther PM, Kirkpatrick SI, Reedy J, Krebs-Smith SM, Buckman DW, Dodd KW, Casavale KO, Carroll RJ. The Healthy Eating Index-2010 is a valid and reliable measure of diet quality according to the 2010 Dietary Guidelines for Americans. J Nutr 2014;144(3):399–407.
28. Schwingshackl L, Bogensberger B, Hoffmann G. Diet quality as assessed by the Healthy Eating Index, Alternate Healthy Eating Index, Dietary Approaches to Stop Hypertension Score, and health outcomes: an updated systematic review and meta-analysis of cohort studies. J Acad Nutr Diet 2018;118(1):74–100.
29. Onvani S, Haghighatdoost F, Surkan PJ, Larijani B, Azadbakht L. Adherence to the Healthy Eating Index and Alternative Healthy Eating Index dietary patterns and mortality from all causes, cardiovascular disease and cancer: a meta-analysis of observational studies. J Hum Nutr Diet 2017;30(2):216–26.
30. NIH National Cancer Institute Division of Cancer Control and Population Sciences. Comparing the HEI-2015, HEI-2010 & HEI-2005. Version current February 2018. [cited 2018 Mar 26]. Available from: https://epi.grants.cancer.gov/hei/comparing.html.
31. Bergman EA, Englund T, Ogan D, Watkins T, Barbee M, Rushing K. Beverage selections and impact on Healthy Eating Index Scores in elementary children's lunches from school and from home. J Child Nutr Manag 2016;40(1):1–18.
32. Kuczmarski RJ, Ogden CL, Guo SS, Grummer-Strawn LM, Flegal KM, Mei Z, Wei R, Curtin LR, Roche AF, Johnson CL. 2000 CDC Growth Charts for the United States: methods and development. Vital Health Stat 2002(246):1–190.
33. Banfield EC, Liu Y, Davis JS, Chang S, Frazier-Wood AC. Poor adherence to US dietary guidelines for children and adolescents in the National Health and Nutrition Examination Survey population. J Acad Nutr Diet 2016;116(1):21–27.
34. Gu X, Tucker KL. Dietary quality of the US child and adolescent population: trends from 1999 to 2012 and associations with the use of federal nutrition assistance programs. Am J Clin Nutr 2017;105(1):194–202.
35. Birch LL, Billman J, Richards SS. Time of day influences food acceptability. Appetite 1984;5(2):109–16.
36. Hiza HA, Casavale KO, Guenther PM, Davis CA. Diet quality of Americans differs by age, sex, race/ethnicity, income, and education level. J Acad Nutr Diet 2013;113(2):297–306.
37. Di Noia J, Byrd-Bredbenner C. Determinants of fruit and vegetable intake in low-income children and adolescents. Nutr Rev 2014;72(9):575–90.
38. Hoerr SL, Tsuei E, Liu Y, Franklin FA, Nicklas TA. Diet quality varies by race/ethnicity of Head Start mothers. J Am Diet Assoc 2008;108(4):651–9.

Effect of Orchard Management Factors on Flesh Color of Two Red-Fleshed Apple Clones

Annika Wellner, Eckhard Grimm * and Moritz Knoche

Institute for Horticultural Production Systems, Leibniz-University Hannover, Herrenhäuser Straße 2, 30419 Hannover, Germany
* Correspondence: eckhard.grimm@obst.uni-hannover.de; Tel.: +49-511-762-8081

Received: 23 May 2019; Accepted: 23 July 2019; Published: 29 July 2019

Abstract: Little is known about factors affecting anthocyanin biosynthesis in red-fleshed apples (Malus × domestica Borkh.). The objective was to establish the effects of orchard management factors on flesh anthocyanin content of dark-colored (DC) and light-colored (LC) apple clones. Flesh color was assessed by measuring color in the L, a, b mode using a spectrophotometer and predicting the anthocyanin content based on relationships between the absorption of a flesh extract at 530 nm and the L-value determined using a spectrophotometer (r2 = 0.99 ***). Fruit from the DC clone were red by 86 days after full bloom (DAFB), whereas the LC clone began to color at 136 DAFB. Color intensity in both clones decreased from the top of the tree to the base. Further, the intensity of the flesh color of the DC clone decreased with shading (94% absorption of incident photosynthetically active radiation). Covering a fruit with a UV-absorbing film (100% UV absorption) had no effect on flesh color in the DC clone but decreased color in the LC clone. Fruit thinning increased color in DC and LC fruit. There was little change in flesh color during storage. However, the DC clone developed severe flesh browning as storage progressed beyond 30 days.
The results demonstrated that light (visible and UV wavelengths) stimulated, whereas shade inhibited, anthocyanin biosynthesis in the flesh under orchard conditions.

Keywords: Malus × domestica; anthocyanin; canopy; light interception; parenchyma; crop load; UV light

1. Introduction

The red blush of apple fruit skin is an important quality attribute. Recently, some apple genotypes have become available that also have red flesh. In these genotypes, the red color in the skin extends into the flesh of the fruit. Red-fleshed apples are novel, and this makes them attractive to consumers and so of increased economic value. A number of red-fleshed apple genotypes have now been identified and evaluated [1–3]. Red-fleshed genotypes fall into two categories, types 1 and 2. Type 1 genotypes are strongly pigmented and dark-colored. The flesh color forms early during fruit development, and the leaves and shoots are also colored. In type 1 genotypes, the transcription factor MdMYB10 is involved in regulating the biosynthesis of anthocyanins. Type 2 genotypes are less intensely colored. Coloration begins late in development, and leaves and shoots are without red color. In type 2 genotypes, the transcription factor MdMYB110a is involved in the regulation of biosynthesis of anthocyanin [4–6]. The pigments responsible for the red flesh are anthocyanins. In general, these may appear in stems, leaves, flowers or fruits. The principal function of coloration is to make a plant or a plant organ attractive to animals, which act as vectors, carrying pollen and dispersing seeds. High anthocyanin concentrations in fruit and vegetables are also considered beneficial in the human diet because of their antioxidant [7], anticancer and antiaging properties [8,9].
Horticulturae 2019, 5, 0054; doi:10.3390/horticulturae5030054

The concentration of anthocyanin in the flesh, and hence color intensity, depends not only on the genotype but also on environmental variables. For example, the anthocyanin content of a type 1 genotype varied by 14% depending on the site of production [10]. In addition, both light and temperature affected anthocyanin biosynthesis in apple, and hence the skin and flesh color; for review see [11,12]. Little is known about the effect of orchard management factors during cultivation on color development in apple flesh. The objective of our study was to establish the effects of orchard management factors on anthocyanin content of a dark-colored (DC, type 1) and a light-colored (LC, type 2) apple clone. We focused on factors related to light exposure because light affects anthocyanin biosynthesis in the skin and may, therefore, also play a role in the flesh. The two apple clones examined here are representative of the small range of DC and LC type apple fruits currently available.

2. Materials and Methods

2.1. Plant Material

Two red-flesh apple (Malus × domestica Borkh.) clones were selected that had dark-red (DC) and light-red (LC) flesh (IFORED SAS, Seiches-sur-le-Loire, France). Trees were grafted on M9 rootstocks and grown at the Obstbauzentrum Jork, Jork, Germany (53.5 N, 9.7 E) according to current practices of integrated fruit production. Fruits were harvested during the season beginning at 86 days after full bloom (DAFB) and were processed immediately or stored at 2 ± 1 °C pending analysis.

2.2.
Quantifying Redness of Cross-Sections

For rapid quantification of the redness of the flesh, a procedure based on image analysis of digital photographs was established. Flesh slices (3–5 mm thick) were excised from the equatorial plane of DC and LC fruits. Equatorial slices were photographed using a Canon EOS 550 D camera equipped with a Macro lens EF-S 18–55 mm (Canon, Tokyo, Japan) mounted on a photo stand (RB 5000 DL, No. 5556 + Prolite 5000, No. 2190; Kaiser Phototechnik, Buchen, Germany) equipped with three fluorescent tubes (Osram TC-L, code 954, 5400 K), two above (36 W each) and one (15 W) in a lightbox underneath the specimen. Images were taken together with the RHS color standards (Charts No. 39, 43, 45, 46; Royal Horticultural Society, London, UK) and the ColorChecker (X-Rite Pantone; Grand Rapids, MI, USA) as shown in Figure 1a. Images were processed using image analysis software (cellSens Dimension 1.7.1; Olympus, Hamburg, Germany) and the following procedure. First, the white standard was calibrated. Second, the cross-section of the flesh was partitioned into four zones of different color intensity. The limits for the different intensities were set according to the RHS color charts (No. 39, 43, 45, 46), as shown by false colors in Figure 1b. As the L-, a- and b-coordinates of each color chart were pre-determined (CM-2600d; Konica Minolta, Osaka, Japan) [13], a mean L-, a mean a- or a mean b-value could be calculated, each weighted by the respective area of the cross-section.

Figure 1. (a) Image of the equatorial slice of a red-fleshed apple that was taken together with four red color charts (No. 39, 43, 45, 46) of the Royal Horticultural Society (RHS) and a grey scale of ColorChecker. Red color charts and grey scale served as calibration. (b) False-color image of the photograph depicted in A.
The four color classes were identified based on the L-values of the RHS color charts.

To relate the L-, a- and b-values determined on cross-sections of the fruit to the anthocyanin content of the tissue, a calibration curve was established. Briefly, tissue cylinders (12 mm diameter × 3–5 mm long) were excised from the equatorial plane of DC and LC fruits. Cylinders were selected for homogeneity of color, ranging from white to dark red. The color of the cylinders was quantified using a spectrophotometer (CM-2600d; Konica Minolta). Thereafter, cylinders were extracted using 1% HCl in methanol (8 mL per cylinder) at room temperature overnight (>12 h). The absorption of the extract was quantified spectrophotometrically at 530 nm (Specord 210; Analytik Jena, Jena, Germany). An exponential regression model was fitted between the absorption of the extract at 530 nm and the L- and the a-value. Statistical analysis revealed highly significant relationships between the absorption at 530 nm and the L- or the a-value determined on the same tissue cylinder. The regression equation allowed conversion of any L-value (or a-value) of the three-dimensional color space determined on a fruit cross-section to an anthocyanin absorption value determined on a flesh extract.

2.3. Experiments

A developmental time course was established by sampling the DC clone from 86 DAFB and the LC clone from 96 DAFB to maturity.

Flesh color was recorded as a function of the position of the fruit in the canopy and the exposure of the fruit relative to the sun. The three positions in the canopy were: in the upper third of the crown (‘top’), in the middle third (‘middle’) and in the lower third (‘bottom’). Exposure relative to the sun was varied by selecting fruit from the north- or south-exposed flanks of the row.

Flesh color was recorded as a function of shading. Shading was achieved by wrapping individual branches in the middle of the tree with a hail net. Branches without net served as control. The light intensity in the shade treatment was only about 6% of that in the control (LI-COR Quantum/Radiometer/Photometer LI250; Heinz Walz GmbH Mess- und Regelungstechnik, Effeltrich, Germany).

Flesh color was recorded as a function of UV exposure. Fruits in the middle of the crown were covered with a UV-absorbing film (UV Schutzfolie Klar UV 3 SR; Özek and Wagner, Cologne, Germany). This film blocked all UV radiation, as determined using a UV meter (sensor: type 2.5; Indium Sensor GmbH, Neuenhagen, Germany; data logger: Almemo® 2590-4AS; Ahlborn Mess- und Regelungstechnik GmbH, Holzkirchen, Germany). Fruit in the same position but without film served as control.

Flesh color was recorded as a function of three levels of fruit thinning: light thinning, medium thinning and heavy thinning. Thinning was done by hand after terminal bud set. The mean number of fruits was 200, 120 and 50 per DC tree and 120, 80 and 30 per LC tree for the light, medium and heavy thinning treatments, respectively. The mean trunk cross-sectional area (0.6 m above the graft union) was 13.37 cm² and 13.74 cm² for DC and LC trees, respectively.

The effect of storage duration on flesh color was quantified during 98 d of cold storage at 4 °C.

2.4. Data Analysis

All experiments were carried out with a minimum of 20 fruit sampled from a minimum of four trees. Unless individual fruit data are shown, data are means ± standard errors. Where error bars are not shown, they were smaller than the plotted symbols. Data were analyzed by analysis of variance (Proc GLM) and regression analysis (Proc REG) using the statistical software package SAS (version 9.1.3; SAS Institute, Cary, NC, USA). Means were compared using Tukey’s Studentized Range (HSD) test. Significance of coefficients of determination (r²) at p ≤ 0.001 is indicated by ***.

3. Results

Absorption decreased exponentially as the L-value increased (Figure 2a). The curve had a satisfactory resolution, i.e., changes in absorption resulted in measurable ∆L values over the entire range of L-values. Plotting the natural log-transformed absorption vs. the L-value yielded a linear relationship. For the a-value, absorption increased exponentially as the a-value increased (Figure 2b). However, the curve segment for a-values from 30 to 50 was steep, so the resolution was inferior to that in Figure 2a. Taking the natural log of the absorption and plotting it against the a-value again yielded a linear relationship (Figure 2b, inset). Plots of absorption vs. the b-value yielded a discontinuous curve with separate data populations for DC and LC (data not shown).
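The ln-linear relationship between extract absorption and the L-value can be sketched as follows. The calibration data here are synthetic, generated from an assumed exponential decay; the coefficients are illustrative only, not those fitted in the study.

```python
import numpy as np

# Synthetic calibration data (illustrative, not the study's measurements):
# absorption at 530 nm decays exponentially as flesh L* increases.
rng = np.random.default_rng(0)
L = np.linspace(25, 85, 30)                       # L* of individual tissue discs
abs530 = np.exp(4.0 - 0.05 * L) * rng.normal(1.0, 0.02, L.size)  # 2% noise

# Fit the ln-linear model: ln(Abs) = b0 + b1 * L
b1, b0 = np.polyfit(L, np.log(abs530), 1)

def predict_absorption(L_value):
    """Predict anthocyanin absorption of an extract from a cross-section L*."""
    return np.exp(b0 + b1 * L_value)

# A low L* (dark flesh) predicts a higher absorption than a high L* (pale flesh).
print(predict_absorption(30.0) > predict_absorption(70.0))  # True
```

Fitting on the log scale is what turns the exponential calibration curve into a straight line, mirroring the insets of Figure 2.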
The relationship between the absorption of anthocyanins and the L-value (or a-value) determined using a spectrophotometer allowed prediction of flesh color, as indexed by anthocyanin content, from measurements of the L-value (or a-value) on a fruit cross-section. Plotting the predicted vs. the measured anthocyanin absorption for an independent data set yielded the relationships shown in Figure 2c. The absorption predicted from L-values was closely and linearly related to the measured absorption (slope 1.12 ± 0.05, r² = 0.98 ***). The intercept did not differ significantly from zero and, hence, the regression line was forced through the origin. Relationships with the a-value were inferior, due to an overestimation of the predicted absorption (slope 1.35 ± 0.13) and systematic deviations for medium a-values (r² = 0.93 ***).

Figure 2. Absorption of anthocyanins extracted from the flesh of apples plotted against the color as indexed by the L-value (a) or the a-value (b). Each data point represents an individual tissue disc. Insets: Data from the main graph replotted, but with absorption data ln-transformed. (c) Relationship between the predicted absorption (Abspred) and the measured absorption (Absmeas). Absorption was predicted either from the L-value (Abspred = 1.12 (±0.05) × Absmeas + 0.78 (±0.37), r² = 0.98 ***) or from the a-value (Abspred = 1.35 (±0.13) × Absmeas + 5.08 (±0.89), r² = 0.93 ***). The L- and a-values were determined with a spectrophotometer in the L-, a- and b-modes. For details see text.

Developmental increase in fruit size followed similar patterns in both clones (Figure 3a). The red color of the flesh of the DC clone decreased slightly between 86 and 112 DAFB and then remained constant until harvest (Figure 3b). In contrast, in the LC clone, the flesh began to color at 136 DAFB and increased continuously in color intensity until 184 DAFB (Figure 3b). Compared with the LC clone, the DC clone had a markedly higher percentage of dark- and medium-colored fruit (Figure 3c,d). There was no relationship between the color of the skin and that of the outer or inner flesh in DC or LC apples (data not shown).

Figure 3. Developmental time course of change in diameter (a) and flesh color as indexed by anthocyanin accumulation (b) in red flesh apples of a dark-colored (DC) and a light-colored (LC) clone. Distribution of color classes in the DC (c) and LC clone (d).

In both clones, flesh color intensity decreased from top to bottom positions in the tree crown (Figure 4a,b). The color distribution changed analogously (Figure 4c,d). Differences in sun exposure (north vs. south aspect) did not result in significant changes in flesh color (Figure 5).

Figure 4. Effects of position along the central axis of a slender spindle tree on flesh anthocyanin content (a,b) and the color distribution (c,d) of red flesh apples of a dark-colored (DC) (a,c) and a light-colored (LC) clone (b,d).
Figure 5. Effects of the exposure of the fruit to the sun on flesh anthocyanin content (a,b) and the color distribution (c,d) of red flesh apples of a dark-colored (DC) (a,c) and a light-colored (LC) clone (b,d).

The intensity of flesh color decreased when branches were shaded with hail net (Figure 6a,b). UV exposure did not affect flesh color in the DC clone (Figure 7a,c), but in the LC clone flesh color decreased significantly when fruit were covered with UV-absorbing film (Figure 7b,d).

Figure 6. Effects of shading on the anthocyanin content (a) and the color distribution (b) of red flesh apples of a dark-colored (DC) clone. Fruit exposed to full sunlight served as control. Shade reduced the incident photosynthetically active radiation by 94%.

Figure 7. Effect of UV radiation on the anthocyanin content (a,b) and the color distribution (c,d) of red flesh apples of a dark-colored (DC) (a,c) and a light-colored (LC) clone (b,d). UV radiation was excluded by wrapping the fruit in a bag made of UV-absorbing film. The film absorbed 100% of the UV radiation. Fruit that was exposed to full sunlight served as control.

Compared with light fruit thinning, medium and heavy fruit thinning both increased the red color in the flesh of both clones, DC and LC (Figure 8).

Figure 8. Effect of crop load on the anthocyanin content (a,b) and the color distribution (c,d) of red flesh apples of a dark-colored (DC) (a,c) and a light-colored (LC) clone (b,d). Crop load was varied by hand thinning to different levels. The mean numbers of fruit per tree were 200, 120 and 50 (DC) and 120, 80 and 30 (LC) for the light, medium and heavy fruit thinning treatments, respectively.

There was no consistent change in flesh color during storage of either clone (Figure 9a). In the DC clone, an increasing proportion of fruit developed severe flesh browning as storage time increased. There was no browning in the LC clone (Figure 9b,c).

Figure 9. Time course of change in the anthocyanin content (a) and the color distribution (b,c) of red flesh apples of a dark-colored (DC) (a,b) and a light-colored (LC) clone (a,c) during storage. Fruit were held for up to 96 days after harvest.

4.
Discussion

Our experiments established two important findings: (1) the method of estimating anthocyanin content using calibrated photographs, image analysis and a spectrophotometer proved reliable and sufficiently sensitive to quantify the effects of orchard management treatments imposed in field experiments, and (2) the effects of orchard management factors on anthocyanin biosynthesis were accounted for principally in terms of exposure to solar radiation.

4.1. Quantifying Flesh Color Using Digital Photography and Image Analysis

Measuring the redness of apple flesh based on digital photography, and on a calibration curve established between the L-value from a hand-held spectrophotometer and the anthocyanin absorption of a flesh extract, proved to be a quick, reliable and robust procedure for estimating flesh anthocyanin content. Compared with visual ratings, this method is more objective since it integrates spatial as well as color-intensity information, thereby producing a ‘weighted’ redness. This is difficult to achieve using visual ratings, particularly when comparing flesh color assessed by different operators.

For the two clones investigated, the accuracy of the procedure was excellent when based on measurements of the L-value. It accounted for more than 98% of the variation in anthocyanin content measured in vitro. Furthermore, the slope of the regression line of predicted vs. measured anthocyanin content for an independent data set was 1.12, a slight and consistent overestimation of the measured anthocyanin content (r² = 0.98). Interestingly, predictions of anthocyanin content based on the a-value were less accurate. This result is counterintuitive, since red coloration should relate most closely to the a-value, which represents the red/green axis. We suggest the inaccuracy was due to insufficient resolution at high-intensity red colorations, where the absorption of the extracts increased markedly for fairly small changes in the a-value.

4.2.
Light Stimulation of Anthocyanin Biosynthesis

Light was the common factor explaining the response of anthocyanin biosynthesis to the orchard management factors investigated. This was demonstrated by the observation that red color increased for fruit in the top of the crown, without shading by foliage or covering by UV-absorbing film. These observations are consistent with those of Honda et al. (2017) [14] on different clones of the same two red-fleshed categories. They reported decreased anthocyanin concentration as crop load increased, for bagged vs. un-bagged fruit, and for fruit grown in the shade vs. fruit exposed to the sun. Similarly, Matsumoto et al. (2018) [15] found decreased red color in the outer flesh, as indexed by the a-value of the flesh, when the fruit was shaded (by bagging), when crop load was high or when trees were defoliated. When bags were removed, color intensity increased again in the outer flesh. They observed no changes in the inner flesh [16]. Consistent with these reports was a decrease under shading in the anthocyanin content of red-flesh and white-flesh clones of crab apple (Malus sylvestris) [7].

4.3. How Does Light Trigger Anthocyanin Biosynthesis in the Flesh?
There is little light transmission through the apple skin and parenchyma [17]. Thus, the question arises how light triggers anthocyanin biosynthesis in the flesh of an apple when only the fruit surface is exposed to light. Two hypotheses are offered that deserve investigation. First, anthocyanin biosynthesis may be triggered by high concentrations of carbohydrates resulting from high rates of photosynthesis when the canopy is well exposed to light. This hypothesis is consistent with the effects of crop load, shading or defoliation observed in this and earlier studies [7,14,15]. Alternatively, photoreceptors may be considered. These perceive light levels at the fruit surface and then, via some mobile agent, trigger anthocyanin biosynthesis in the flesh. That light triggers anthocyanin biosynthesis is well established for the skin, for example in bagging experiments [14,16]. However, the flesh is not directly exposed. Blue light receptors (cryptochromes, phototropins) are potential candidates because radiation of 450 nm wavelength stimulates blush in apple skins [18]. Cryptochrome MdCRY2 is known to regulate anthocyanin biosynthesis [19]. In strawberries (Fragaria × ananassa Duch.), biosynthesis of anthocyanin is stimulated by 450 nm and by 730 nm wavelengths, the absorption maxima of phototropins and phytochromes (Pfr) [20]. Clearly, at this stage, the evidence for a role of photoreceptors is not conclusive.

5. Conclusions

Digital photography plus image analysis is a quick and sufficiently precise method for quantifying redness of the flesh of apples because it is closely related to anthocyanin content as determined in the laboratory following extraction. The procedure is particularly useful because it does not require sophisticated instruments or a laboratory setting.
Although we investigated only two representatives of the light- and dark-red flesh genotypes, we expect the relationship between anthocyanin absorption and the L-value to also hold for other red-fleshed genotypes.

The results revealed that light is the most important environmental factor regulating anthocyanin biosynthesis also in red flesh apple fruit. Increasing light exposure stimulated color development under orchard conditions. Light exposure can be maximized by maintaining an open canopy structure through pruning and thinning, and by cultivating apples at sites with high levels of UV-B radiation.

Author Contributions: M.K. and E.G. designed the experiments; A.W. performed the experiments; A.W., E.G. and M.K. analyzed the data; A.W. and E.G. prepared the draft of the paper; M.K. wrote the paper.

Funding: This research was funded in part by a grant from the IFORED group.

Acknowledgments: We thank Dirk Köpcke, Martin Brüggenwirth, Ulrich Hering, and Jennifer Kruse for their support in setting up the experiments and Sandy Lang and Bishnu Khanal for helpful comments on an earlier version of this manuscript.

Conflicts of Interest: The authors declare no conflict of interest.

References

1. Volz, R.K.; Oraguize, N.C.; Whitworth, C.J.; How, N.; Chagné, D.; Carlisle, C.M.; Gardiner, S.E.; Rikkerink, E.H.A.; Lawrence, T. Red flesh breeding in apple: Progress and challenges. Acta Hortic. 2009, 814, 337–342.

2. Van Nocker, S.; Berry, G.; Najdowski, J.; Michelutti, R.; Luffman, M.; Forsline, P.; Alsmairat, N.; Beaudry, R.; Nair, M.G.; Ordidge, M. Genetic diversity of red-fleshed apples (Malus). Euphytica 2012, 185, 281–293.

3. Contessa, C.; Botta, R. Comparison of physiochemical traits of red-fleshed, commercial and ancient apple cultivars. Hortic. Sci. 2016, 43, 159–166.

4. Espley, R.V.; Hellens, R.P.; Putterill, J.; Stevenson, D.E.; Kutty-Amma, S.; Allan, A.C.
Red colouration in apple fruits is due to the activity of the MYB transcription factor, MdMYB10. Plant J. 2007, 49, 414–427.

5. Chagné, D.; Lin-Wang, K.; Espley, R.V.; Volz, R.K.; How, N.M.; Rouse, S.; Brendolise, C.; Carlisle, C.M.; Kumar, S.; De Silva, N.; et al. An ancient duplication of apple MYB transcription factors is responsible for novel red fruit-flesh phenotypes. Plant Physiol. 2013, 161, 225–239.

6. Umemura, H.; Otagaki, S.; Wada, M.; Kondo, S.; Matsumoto, S. Expression and functional analysis of a novel MYB gene, MdMYB110a_JP, responsible for red flesh, not skin color in apple fruit. Planta 2013, 238, 65–76.

7. Wang, Y.; Lu, Y.; Hao, S.; Zhang, M.; Zhang, J.; Tian, J.; Yao, Y. Different coloration patterns between the red- and white-fleshed fruits of Malus crabapples. Sci. Hortic. 2015, 194, 26–33.

8. He, J.; Giusti, M.M. Anthocyanins: Natural colorants with health-promoting properties. Annu. Rev. Food Sci. Technol. 2010, 1, 163–187.

9. Mazza, G.; Miniati, E. Anthocyanins in Fruits, Vegetables, and Grains, 1st ed.; CRC Press: Boca Raton, FL, USA, 1993.

10. Zhang, Y.; Zhao, R.; Liu, W.; Sun, X.; Bai, S.; Xiang, Y.; Dai, H. The anthocyanins component and the influence factors of contents in red flesh apple ‘Hong-Xun No. 1’. Eur. J. Hortic. Sci. 2016, 81, 248–254.

11. Musacchi, S.; Serra, S. Apple fruit quality: Overview on pre-harvest factors. Sci. Hortic. 2018, 234, 409–430.
12. Honda, C.; Moriya, S. Anthocyanin biosynthesis in apple fruit. Hortic. J. 2018, 87, 305–314.

13. McGuire, R.G. Reporting of objective color measurements. HortScience 1992, 27, 1254–1255.

14. Honda, C.; Iwanami, H.; Naramoto, K.; Maejima, T.; Kanamaru, K.; Moriya-Tanaka, Y.; Hanada, T.; Wada, M. Thinning and bagging treatments and the growing region influence anthocyanin accumulation in red-fleshed apple fruit. Hortic. J. 2017, 86, 291–299.

15. Matsumoto, K.; Fujita, T.; Sato, S.; Moriguchi, T. Effects of low temperature, shading, defoliation, and crop load on the flesh coloration of the type 2 red-fleshed apple ‘Kurenainoyume’. Hortic. J. 2018, 87, 452–461.

16. Matsumoto, K.; Kobayashi, T.; Kuogo, T.; Fujita, T.; Sato, S.; Moriguchi, T. Sunlight differentially affects the fruit skin, flesh, and core coloration of the type 2 red-fleshed apple ‘Kurenainoyume’: Optimization of fruit bagging treatment. Hortic. J. 2018, 87, 462–473.

17. Blanke, M.M.; Notton, B.A. Light transmission into apple fruit and leaves. Sci. Hortic. 1992, 51, 43–53.

18. Arakawa, O.; Kikuya, S.; Pungpomin, P.; Zhang, S.; Tanaka, N.
Accumulation of anthocyanin in apples in response to blue light at 450 nm: Recommendations for producing quality fruit color under global warming. Eur. J. Hortic. Sci. 2016, 81, 297–302.

19. Li, Y.; Mao, K.; Zhao, C.; Zhao, X.; Zhang, R.; Zhang, H.; Shu, H.; Hao, Y. Molecular cloning and functional analysis of a blue light receptor gene MdCRY2 from apple (Malus domestica). Plant Cell Rep. 2013, 32, 555–566.

20. Zhang, Y.; Jiang, L.; Li, Y.; Chen, Q.; Ye, Y.; Zhang, Y.; Luo, Y.; Wang, X.; Tang, H. Effect of red and blue light on anthocyanin accumulation and differential gene expression in strawberry (Fragaria × ananassa). Molecules 2018, 23, 820.

© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).